\begin{document}
\title[Arithmetic statistics for the Fine Selmer group]{Arithmetic statistics for the Fine Selmer group in Iwasawa theory}
\author[A.~Ray]{Anwesh Ray}
\address[A.~Ray]{Centre de recherches mathématiques,
Université de Montréal,
Pavillon André-Aisenstadt,
2920 Chemin de la tour,
Montréal (Québec) H3T 1J4, Canada}
\email{[email protected]}
\author[R.~Sujatha]{R.~Sujatha}
\address[R.~Sujatha]{Department of Mathematics\\
University of British Columbia\\
Vancouver BC, Canada V6T 1Z2}
\email[Sujatha]{[email protected]}
\begin{abstract}
We study arithmetic statistics for Iwasawa invariants for fine Selmer groups associated to elliptic curves.
\end{abstract}
\subjclass[2010]{11G05, 11R23 (primary); 11R45 (secondary).}
\keywords{Arithmetic statistics, Iwasawa theory, Fine Selmer groups, elliptic curves, local torsion, Euler characteristic.}
\maketitle
\section{Introduction}
\label{section:intro}
\par Iwasawa studied the growth of class groups in certain infinite towers of number fields. These initial developments led to deep questions, many of which are still unresolved. Given a number field $F$ and a prime number $p$, let $F_{\op{cyc}}$ denote the cyclotomic $\mathbb{Z}_p$-extension of $F$. Letting $F_n\subset F_{\op{cyc}}$ be such that $[F_n:F]=p^n$, denote by $\op{Cl}_p(F_n)$ the $p$-Sylow subgroup of the class group of $F_n$. Iwasawa showed that for $n\gg 0$,
\begin{equation}\label{first equation}\# \op{Cl}_p(F_n)=p^{p^n\mu+\lambda n+\nu},\end{equation} where $\mu, \lambda\in \mathbb{Z}_{\geq 0}$ and $\nu\in \mathbb{Z}$. Let $K(F_{\op{cyc}})\subset \bar{F}$ be the maximal abelian pro-$p$ extension of $F_{\op{cyc}}$ in which all primes of $F_{\op{cyc}}$ are unramified. The classical $\mu=0$ conjecture of Iwasawa states that $X(F_{\op{cyc}}):=\op{Gal}(K(F_{\op{cyc}})/F_{\op{cyc}})$ is finitely generated as a $\mathbb{Z}_p$-module. Equivalently, the invariant $\mu$ in \eqref{first equation} is equal to $0$. The $\mu=0$ conjecture was proved by Ferrero and Washington \cite{ferrero1979iwasawa} for all abelian number fields $F/\mathbb{Q}$.
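The growth formula \eqref{first equation} is easy to tabulate. The following Python sketch (with hypothetical invariants $\mu$, $\lambda$, $\nu$, chosen purely for illustration and not attached to any particular number field) computes the exponent $e_n=p^n\mu+\lambda n+\nu$ and shows how the $\mu$-term dominates as soon as $\mu>0$:

```python
# Tabulate the exponent e_n in Iwasawa's growth formula #Cl_p(F_n) = p^{e_n},
# where e_n = mu*p^n + lambda*n + nu for n >> 0.  The values of
# (mu, lambda, nu) below are hypothetical, chosen purely for illustration.

def iwasawa_exponent(p: int, mu: int, lam: int, nu: int, n: int) -> int:
    """Exponent e_n = mu*p^n + lam*n + nu."""
    return mu * p ** n + lam * n + nu

# With mu = 0 (as the Ferrero-Washington theorem guarantees when F is
# abelian over Q), the growth in n is linear:
print([iwasawa_exponent(3, 0, 2, 1, n) for n in range(5)])  # [1, 3, 5, 7, 9]
# With mu = 1 the exponent grows like p^n:
print([iwasawa_exponent(3, 1, 0, 0, n) for n in range(5)])  # [1, 3, 9, 27, 81]
```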
\par Mazur initiated the Iwasawa theory of elliptic curves and abelian varieties in \cite{mazur72}. Let $E$ be an elliptic curve and $p$ an odd prime number at which $E$ has good ordinary reduction. The main object of study is the $p$-primary Selmer group over the cyclotomic $\mathbb{Z}_p$-extension of $\mathbb{Q}$. Greenberg conjectured that when the residual representation on the $p$-torsion subgroup of $E$ is irreducible, the Iwasawa $\mu$-invariant of this Selmer group must vanish, see \cite[Conjecture 1.11]{Gre99}. This means that the Pontryagin dual of the Selmer group over $\mathbb{Q}_{\op{cyc}}$ is a finitely generated $\mathbb{Z}_p$-module. The rank of the dual Selmer group as a $\mathbb{Z}_p$-module is the $\lambda$-invariant, and is of significant interest. It was proven by Kato \cite{kato2004p} and Rohrlich \cite{rohrlich1984onl} that the rank of $E(\mathbb{Q}_n)$ is bounded as $n\rightarrow \infty$. In fact, $\op{rank} E(\mathbb{Q}_n)$ is always bounded above by the $\lambda$-invariant. When the residual representation is reducible, there are examples and explicit criteria under which the $\mu$-invariant is positive, see \cite[sections 3 and 7]{ray2021mu}.
\par Let $E_{/\mathbb{Q}}$ be an elliptic curve with good reduction at an odd prime $p$ (either ordinary or supersingular). The \emph{fine Selmer group} was systematically studied by Coates and the second named author in \cite{coates2005fine}. This is the subgroup $\Rfine{\mathbb{Q}_{\op{cyc}}}$ of the $p$-primary Selmer group over $\mathbb{Q}_{\op{cyc}}$ consisting of all cohomology classes that are trivial when localized at the primes above $p$ and the primes at which $E$ has bad reduction (cf. Definitions \ref{fine Selmer def} and \ref{fine Selmer def 2}). The fine Selmer group is closely related to the class groups studied by Iwasawa. Conjecture A in \cite{coates2005fine} predicts that the $\mu$-invariant of the fine Selmer group is always zero. Let $F=\mathbb{Q}(E[p])$ be the extension of $\mathbb{Q}$ which is \emph{cut out} by the residual representation on $E[p]$. In other words, it is the Galois extension of $\mathbb{Q}$ fixed by the kernel of the residual representation. By Serre's open image theorem \cite[section 4, Theorem 3]{Serre72}, if $E$ is a non-CM elliptic curve, $\op{Gal}(F/\mathbb{Q})$ is isomorphic to $\op{GL}_2(\mathbb{Z}/p\mathbb{Z})$ for all but finitely many primes $p$. Thus, the extension $F/\mathbb{Q}$ is in most cases non-abelian and, in fact, non-solvable; Iwasawa's $\mu=0$ conjecture is wide open for such extensions. It follows from results of Coates and the second named author that Iwasawa's $\mu=0$ conjecture for $\op{Gal}(K(F_{\op{cyc}})/F_{\op{cyc}})$ is closely related to the $\mu=0$ conjecture for the fine Selmer group. In greater detail, the $\mu$-invariant of the fine Selmer group over $F_{\op{cyc}}$ vanishes if and only if the classical Iwasawa $\mu$-invariant for $X(F_{\op{cyc}})$ vanishes (cf. Theorem \ref{class group relation to conjecture A}).
\par The arithmetic statistics of elliptic curves is concerned with the study of the average behaviour of certain interesting arithmetic invariants associated with elliptic curves. One such invariant is the rank of the Mordell-Weil group. A conjecture of Katz and Sarnak \cite{katz1999random}, supported by heuristics from random matrix theory, predicts that when ordered by height (or conductor or discriminant), $50\%$ of all elliptic curves have rank $0$ and the other $50\%$ have rank $1$. Certain unconditional results on rank distribution have been proven by Bhargava and Shankar \cite{bhargava2013average}, who study the average cardinality of certain Selmer groups over $\mathbb{Q}$. This provides ample motivation to study the average behavior of Iwasawa invariants of elliptic curves associated to Selmer groups over $\mathbb{Q}_{\op{cyc}}$. The main difficulty is that the methods of Bhargava and Shankar can only be applied to $n$-Selmer groups over $\mathbb{Q}$ for $n\leq 5$. Therefore, an altogether different method is required to study Selmer groups over the cyclotomic $\mathbb{Z}_p$-extension. Arithmetic statistics for the Iwasawa theory of elliptic curves was initiated by Kundu and the first named author in \cite{KR21}. In the present article, we study similar questions for the fine Selmer group. In greater detail, we study the following questions:
\begin{enumerate}
\item Fix a prime number $p$. What can be said about the proportion of elliptic curves $E_{/\mathbb{Q}}$ with good reduction at $p$ for which the $p$-primary fine Selmer group over $\mathbb{Q}_{\op{cyc}}$ is infinite (i.e., either $\mu>0$ or $\lambda>0$)? Here, the elliptic curves are ordered according to their \emph{naive height} (cf. section \ref{s 4} for further details).
\item Given an elliptic curve $E_{/\mathbb{Q}}$ and a fixed positive number $Y>0$, what can be said about the number of primes $p\leq Y$ at which $E$ has good reduction and the fine Selmer group of $E$ over $\mathbb{Q}_{\op{cyc}}$ is infinite?
\end{enumerate}
In section \ref{s 4}, we study the first question and prove explicit results. From a statistical point of view it makes sense to only consider elliptic curves of rank $\leq 1$ (since the elliptic curves of rank $>1$ are expected to constitute a set of density $0$). Let $p$ be a prime $\geq 5$. Consider the set $\mathfrak{B}_p$ (resp. $\mathfrak{D}_p$) consisting of elliptic curves $E_{/\mathbb{Q}}$ that have Mordell-Weil rank $0$ (resp. $1$), good reduction at $p$, and for which the $p$-primary fine Selmer group $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is infinite. Theorem \ref{th 4.6} gives a precise upper bound for the upper natural density of $\mathfrak{B}_p$ in terms of $p$. In greater detail, let $\bar{\mathfrak{d}}(\mathfrak{B}_p)$ denote the upper density of $\mathfrak{B}_p$ (see Definition \ref{density def} for a precise definition). Let $f,g$ and $h$ be functions defined on the set of primes $p$, taking positive real values. We say that $f(p)\leq g(p)+O(h(p))$ if there is a constant $c\geq 0$ such that $f(p)\leq g(p) + c h(p)$ for all primes $p$. We view the association $p\mapsto \bar{\mathfrak{d}}(\mathfrak{B}_p)$ as a function on the set of prime numbers $p$. We show that \[\bar{\mathfrak{d}}(\mathfrak{B}_p)\leq \frac{2\zeta(10)+1}{p}+O\left(\frac{\op{log}p(\op{log log} p)^2}{p^{3/2}}\right).\] A similar bound is obtained for the function $p\mapsto \bar{\mathfrak{d}}(\mathfrak{D}_p)$, see Theorem \ref{th 4.8}. The bounds we obtain for $\bar{\mathfrak{d}}(\mathfrak{B}_p)$ and $\bar{\mathfrak{d}}(\mathfrak{D}_p)$ are conditional: for all primes $p$ for which they are valid, it is assumed that $\Sh(E/\mathbb{Q})[p^\infty]$ is finite. Furthermore, we assume a conjecture of Delaunay on the density of elliptic curves $E_{/\mathbb{Q}}$ for which $\Sh(E/\mathbb{Q})[p^\infty]\neq 0$. We refer to Conjecture \ref{Del} for a precise statement.
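As a quick numerical sanity check (not part of any proof), the main term $(2\zeta(10)+1)/p$ of the above bound can be evaluated directly; since $\zeta(10)$ is very close to $1$, the bound decays essentially like $3/p$. A minimal Python sketch:

```python
# Evaluate the main term (2*zeta(10) + 1)/p of the upper bound on the
# upper density of B_p.  zeta(10) is computed by truncating its defining
# series; the tail beyond 1000 terms is negligibly small.

def zeta(s: int, terms: int = 1000) -> float:
    return sum(n ** (-s) for n in range(1, terms + 1))

def main_term(p: int) -> float:
    return (2 * zeta(10) + 1) / p

print(round(2 * zeta(10) + 1, 3))  # 3.002, so the bound is roughly 3/p
for p in (5, 101, 1009):
    print(p, main_term(p))
```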
One also considers the set $\mathfrak{F}_p$ of all elliptic curves $E_{/\mathbb{Q}}$ with Mordell-Weil rank $0$ and good \emph{ordinary} reduction at $p$, such that the entire Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$ is infinite. The bound obtained for $\bar{\mathfrak{d}}(\mathfrak{F}_p)$ in \cite[section 4]{KR21} is of the form
\[\bar{\mathfrak{d}}(\mathfrak{F}_p)=O\left(\frac{\op{log}p(\op{log log} p)^2}{p^{1/2}}\right)\] (see Theorem \ref{kundu ray theorem} for the refined statement). The results show that, for a given prime $p$, the fine Selmer group $\Rfine{\mathbb{Q}_{\op{cyc}}}$ of most elliptic curves is finite. Furthermore, we give an explicit upper bound for the proportion of elliptic curves for which $\Rfine{\mathbb{Q}_{\op{cyc}}}$ may be infinite, and this proportion becomes smaller as $p$ gets larger. In fact, the quantity decreases at the rate of $O(1/p)$. On the other hand, for elliptic curves of rank $0$, the Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$ shows similar behavior on average. However, as $p$ gets larger, the upper bound decreases at a much slower rate.
\par In section \ref{s 5}, we study a variant of the second question. Let $E$ be an elliptic curve with Mordell-Weil rank zero. According to Corollary \ref{ cor 2.9}, the set of primes $p$ at which $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is infinite is contained in the set of primes $\Pi_{lt,E}\cup \mathfrak{S}$. Here, $\Pi_{lt, E}$ is the set of primes $p$ at which $E(\mathbb{Q}_p)[p]\neq 0$. Such primes are referred to as \emph{local torsion primes}. On the other hand, $\mathfrak{S}$ is an explicit finite set of primes (see \emph{loc. cit.} for details). Given a positive number $Y>0$, we denote by $\Pi_{lt,E}(Y)$ the set of local torsion primes $p$ that are $\leq Y$. We prove results about the expected size of $\# \Pi_{lt, E}(Y)$ when $Y$ is fixed and $E$ ranges over all elliptic curves. This gives us some insight into the number of primes $\leq Y$ at which the fine Selmer group may be infinite. We use a version of the large sieve inequality due to Huxley, see Theorem \ref{huxleythm}. Fix $Y>0$ (for example, $Y=100$) and let $P(Y):=\sum_{p\leq Y} \frac{\# \mathfrak{A}_p}{p^4}$ be the quantity defined in the discussion following the proof of Theorem \ref{mean square}. Here, the fraction $\frac{\# \mathfrak{A}_p}{p^4}$ represents the proportion of elliptic curves $\mathcal{E}$ over $\mathbb{Z}_p$ such that $\mathcal{E}(\mathbb{Q}_p)[p]\neq 0$. We refer to Definition \ref{ definition Cp Sp} for the precise definition of the set $\mathfrak{A}_p$. The main result of section \ref{s 5}, Theorem \ref{th s5 main}, shows that $P(Y)$ is a good estimate for $\#\Pi_{lt,E}(Y)$. More precisely, it is shown that there is an absolute constant $c>0$ such that for any number $\beta>0$, the proportion of all elliptic curves $E_{/\mathbb{Q}}$ for which \begin{equation}\label{above inequality}|\#\Pi_{lt,E}(Y)-P(Y)|<\beta \sqrt{P(Y)}\end{equation} is $\geq \left(1-\frac{c}{\beta^2}\right)$.
Thus, the average size of $\# \Pi_{lt, E}(Y)$ is close to $P(Y)$ for an explicit positive density set of elliptic curves. For instance, setting $\beta=10\sqrt{c}$, we find that $\geq 99\%$ of elliptic curves satisfy the above inequality \eqref{above inequality}. These results shed light on the distribution of primes outside of which the fine Selmer group is finite, when the elliptic curve $E$ in question has Mordell-Weil rank zero.
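The shape of the bound \eqref{above inequality} is that of Chebyshev's inequality. The following toy simulation (a heuristic model: each prime is assumed to contribute a local torsion prime independently with a made-up probability $q_p$, which is not the actual distribution) illustrates how such a count concentrates around its expectation $P=\sum_p q_p$:

```python
# Toy model: each prime p contributes independently with probability q_p.
# The count of "hits" then concentrates around P = sum(q_p); Chebyshev's
# inequality bounds the proportion of trials with |count - P| >= beta*sqrt(P).
import random

random.seed(0)

def simulate(qs, trials=20000, beta=3.0):
    """Fraction of trials satisfying |count - P| < beta * sqrt(P)."""
    P = sum(qs)
    hits = 0
    for _ in range(trials):
        count = sum(1 for q in qs if random.random() < q)
        if abs(count - P) < beta * P ** 0.5:
            hits += 1
    return hits / trials

qs = [1 / p for p in (5, 7, 11, 13, 17, 19, 23, 29, 31, 37)]  # made-up q_p
prop = simulate(qs)
print(prop)  # typically well above the Chebyshev floor 1 - 1/beta^2 = 8/9
```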
\section{Iwasawa theory of the Fine Selmer group}
\par Let $E$ be an elliptic curve defined over $\mathbb{Q}$ and $p$ an odd prime at which $E$ has good reduction. Fix an algebraic closure $\bar{\mathbb{Q}}$ of $\mathbb{Q}$ and a finite set of primes $S$ containing $\{p,\infty\}$ outside of which $E$ has good reduction. Let $\mathbb{Q}_S$ be the maximal extension of $\mathbb{Q}$ in $\bar{\mathbb{Q}}$ in which all primes $\ell\notin S$ are unramified. Denote by $E[p^n]$ the $p^n$-torsion subgroup of $E(\bar{\mathbb{Q}})$, and set $E[p^\infty]$ to be the union of all $p$-primary torsion groups $E[p^n]$. Given a field $L\subset \mathbb{Q}_S$, set $H^i(\mathbb{Q}_S/L, \cdot):=H^i(\op{Gal}(\mathbb{Q}_S/L), \cdot)$. For a number field $L$ and a prime $\ell\in S$, set $K_\ell(\cdot /L)$ to be the direct sum $\bigoplus_{v|\ell} H^1(L_v, \cdot)$. Here, $v$ ranges over the primes of $L$ above $\ell$.
\begin{definition}\label{fine Selmer def}Let $L$ be a number field. The \emph{fine Selmer group} $\Rfine{L}$ is defined to be the kernel of the restriction map
\[\op{ker}\left\{H^1\left(\mathbb{Q}_S/L, E[p^\infty]\right)\longrightarrow \bigoplus_{\ell \in S} K_\ell(E[p^\infty]/L)\right\}.\]
\end{definition}
\par We study the fine Selmer group from an Iwasawa theoretic point of view. This involves passing up an infinite tower of number fields. Let $\mathbb{Q}(\mu_{p^\infty})$ be the field generated over $\mathbb{Q}$ by the $p$-power roots of unity $\mu_{p^\infty}\subset \bar{\mathbb{Q}}$. For $n\in \mathbb{Z}_{\geq 1}$, let $\mathbb{Q}_n$ be the unique subfield of $\mathbb{Q}(\mu_{p^\infty})$ such that $[\mathbb{Q}_n:\mathbb{Q}]=p^n$. The \emph{cyclotomic $\mathbb{Z}_p$-extension} of $\mathbb{Q}$ is the union $\mathbb{Q}_{\op{cyc}}:=\bigcup_{n\geq 1} \mathbb{Q}_n$, with Galois group $\Gamma:=\op{Gal}(\mathbb{Q}_{\op{cyc}}/\mathbb{Q})$. Fix a topological generator $\gamma\in \Gamma$. This gives rise to an isomorphism $\mathbb{Z}_p\xrightarrow{\sim} \Gamma$, sending $a\in \mathbb{Z}_p$ to $\gamma^a$. The \emph{Iwasawa algebra} is the completed group algebra $\Lambda:=\varprojlim_n \mathbb{Z}_p[\op{Gal}(\mathbb{Q}_n/\mathbb{Q})]$. Fix an isomorphism of $\Lambda$ with the formal power series ring $\mathbb{Z}_p\llbracket T \rrbracket$, where $T$ is the formal variable in place of $(\gamma-1)\in \Lambda$.
\par \begin{definition}\label{fine Selmer def 2}The fine Selmer group over $\mathbb{Q}_{\op{cyc}}$ is taken to be the direct limit
\begin{equation}\label{fine selmer def 2}\Rfine{\mathbb{Q}_{\op{cyc}}}:=\varinjlim_n \Rfine{\mathbb{Q}_{n}}\end{equation} and is a cofinitely generated module over $\Lambda$.
\end{definition}In other words, the Pontryagin dual
\[\mathcal{R}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})^\vee:=\op{Hom}_{\op{cts}}\left(\mathcal{R}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}}), \mathbb{Q}_p/\mathbb{Z}_p\right)\] is finitely generated over $\Lambda$. We refer to \cite{coates2000galois} for the definition of the $p$-primary Selmer group over $\mathbb{Q}_{\op{cyc}}$, which we denote by $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$. Note that the fine Selmer group is a subgroup of $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$. It follows from a deep result of Kato \cite{kato2004p} that $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})^\vee$ is torsion over $\Lambda$. In particular it follows that $\Rfine{\mathbb{Q}_{\op{cyc}}}^\vee$ is torsion over $\Lambda$.
\par Let $M$ be a cofinitely generated and cotorsion $\Lambda$-module. We introduce the Iwasawa invariants associated to the Pontryagin dual $M^\vee:=\op{Hom}\left(M, \mathbb{Q}_p/\mathbb{Z}_p\right)$. By the \emph{Structure Theorem} for finitely generated and torsion $\Lambda$-modules \cite[Theorem 13.12]{washington1997}, $M^{\vee}$ is pseudo-isomorphic to a finite direct sum of cyclic $\Lambda$-modules. In other words, there is a map of $\Lambda$-modules
\[
M^{\vee}\longrightarrow \left(\bigoplus_{i=1}^s \Lambda/(p^{\mu_i})\right)\oplus \left(\bigoplus_{j=1}^t \Lambda/(f_j(T)) \right)
\]
with finite kernel and cokernel. Here, $\mu_i>0$ and $f_j(T)$ is a distinguished polynomial (i.e., a monic polynomial with non-leading coefficients divisible by $p$).
The characteristic ideal of $M^\vee$ is (up to a unit) generated by
\[
f_{M}(T) := p^{\sum_{i} \mu_i} \prod_j f_j(T)=p^\mu g_M(T),
\]
where $\mu=\sum_i \mu_i$ and $g_M(T)$ is the distinguished polynomial $\prod_j f_j(T)$. The quantity $\mu$ is the $\mu$-invariant, denoted $\mu_p(M)$, and the $\lambda$-invariant $\lambda_p(M)$ is the degree of $g_M(T)$. In this paper, we write $\mu_p(E/\mathbb{Q}_{\op{cyc}})$ (resp. $\lambda_p(E/\mathbb{Q}_{\op{cyc}})$) for the $\mu$-invariant (resp. $\lambda$-invariant) of the classical Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$. On the other hand, we write $\mu_p^{\op{fn}}(E/\mathbb{Q}_{\op{cyc}})$ (resp. $\lambda_p^{\op{fn}}(E/\mathbb{Q}_{\op{cyc}})$) for the $\mu$-invariant (resp. $\lambda$-invariant) of the fine Selmer group $\mathcal{R}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$. At this point, we note that for the fine Selmer group, there is no known analog of the main conjecture. Thus, the fine Selmer group is at present a purely algebraic object and in many respects, its structure is more mysterious than that of the classical Selmer group.
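By the Weierstrass preparation theorem, $\mu_p(M)$ is the minimum of the $p$-adic valuations of the coefficients of $f_M(T)$, and $\lambda_p(M)$ is the smallest index at which that minimum is attained. The following Python sketch reads off the invariants from a (hypothetical) list of initial coefficients of a characteristic power series:

```python
# Read off (mu, lambda) from the coefficients a_0, a_1, ... of a power
# series f(T) in Z_p[[T]]: mu = min_i v_p(a_i), and lambda is the smallest
# index attaining that minimum.  Coefficients below are hypothetical;
# enough of them must be listed for the minimum valuation to appear.

def vp(a: int, p: int) -> float:
    """p-adic valuation of the integer a (infinity for a = 0)."""
    if a == 0:
        return float("inf")
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

def iwasawa_invariants(coeffs, p):
    vals = [vp(a, p) for a in coeffs]
    mu = min(vals)
    lam = vals.index(mu)
    return mu, lam

# f(T) = 9 + 3T + 6T^2 + T^3 + ... at p = 3: mu = 0, lambda = 3
print(iwasawa_invariants([9, 3, 6, 1], 3))  # (0, 3)
# f(T) = 9 + 3T = 3*(T + 3): mu = 1, lambda = 1
print(iwasawa_invariants([9, 3], 3))        # (1, 1)
```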
\par Let $E$ be an elliptic curve with good reduction at $p$ and let $F:=\mathbb{Q}(E[p])$, i.e., the field which is fixed by the kernel of the mod-$p$ representation on $E[p]$. Let $\mu_p(F_{\op{cyc}}/F)$ be the classical Iwasawa $\mu$-invariant associated with the maximal abelian pro-$p$ extension of $F_{\op{cyc}}$ which is unramified at all primes. The classical $\mu=0$ conjecture of Iwasawa predicts that $\mu_p(K_{\op{cyc}}/K)=0$ for any number field $K$. The following result due to Coates and the second named author establishes a relationship between the vanishing of the classical $\mu$-invariant and the $\mu$-invariant of the fine Selmer group of $E$ over $F$.
\begin{theorem}\label{class group relation to conjecture A}
Let $E$ be an elliptic curve and $p$ an odd prime number. Let $F$ be the number field given by $\mathbb{Q}(E[p])$. Then, the following are equivalent:
\begin{enumerate}
\item $\mathcal{R}_{p^\infty}(E/F_{\op{cyc}})^\vee$ is a torsion $\Lambda$-module with $\mu_p^{\op{fn}}(E/F_{\op{cyc}})=0$,
\item $\mu_p(F_{\op{cyc}}/F)=0$.
\end{enumerate}
\end{theorem}
\begin{proof}
The statement follows directly from \cite[Theorem 3.4]{coates2005fine}.
\end{proof}
One should note that the above result in fact provides us with a criterion for the classical $\mu=0$ conjecture to hold for number field extensions $F=\mathbb{Q}(E[p])$, which need not be abelian or even solvable. However, instead of studying the vanishing of $\mu_p^{\op{fn}}(E/F_{\op{cyc}})$, we shall study conditions for the vanishing of $\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})$. Conjecture A of \cite{coates2005fine} asserts that $\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})=0$ for all elliptic curves $E_{/\mathbb{Q}}$. The above result shows that Conjecture A is a special case of the $\mu=0$ conjecture for number fields.
\par Now consider for a moment the special case when the residual representation
\[\bar{\rho}:\op{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\rightarrow \op{GL}_2(\mathbb{F}_p)\]on $E[p]$ is reducible, i.e., $\bar{\rho}=\mtx{\varphi_1}{\ast}{0}{\varphi_2}$, for globally defined characters \[\varphi_i: \op{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\rightarrow \mathbb{F}_p^\times.\]
\begin{proposition}
Let $E_{/\mathbb{Q}}$ be an elliptic curve and $p$ an odd prime such that $\bar{\rho}$ is reducible. Then, $\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})=0$.
\end{proposition}
\begin{proof}
Let $\mathbb{Q}(\varphi_i)$ be the field fixed by the kernel of $\varphi_i$ and set $L$ to be the composite of $\mathbb{Q}(\varphi_1)$ and $\mathbb{Q}(\varphi_2)$. Note that $\mathbb{Q}(E[p^\infty])/L$ is pro-$p$, and since $L$ is an abelian number field, the Iwasawa $\mu=0$ conjecture holds for $L_{\op{cyc}}/L$ by the theorem of Ferrero and Washington \cite{ferrero1979iwasawa}. It follows from \cite[Corollary 3.5]{coates2005fine} that in fact, $\mu_p^{\op{fn}}(E/L_{\op{cyc}})=0$, and it is easy to see that this implies that $\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})=0$.
\end{proof}
In contrast, the $\mu$-invariant of the Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$ is known to be positive in some cases when $\bar{\rho}$ is reducible. From a statistical point of view, the case of interest is when $\bar{\rho}$ is surjective, in particular, irreducible. It follows from Serre's open image theorem that when $E_{/\mathbb{Q}}$ is an elliptic curve without complex multiplication, the residual representation on $E[p]$ is surjective for all but finitely many primes $p$. A \emph{Serre curve} is an elliptic curve whose adelic Galois representation has image of index $2$ in $\op{GL}_2(\hat{\mathbb{Z}})$, see \cite{jones2010almost} for a more precise definition. Serre curves are of interest since they are the elliptic curves for which the adelic Galois image is as large as possible. It is not hard to see that given a Serre curve, the mod-$p$ Galois representation is irreducible for all odd primes $p$. Jones proves in \emph{loc. cit.} that $100\%$ of elliptic curves ordered according to height are Serre curves.
\par Let $f(T)$ be the characteristic element of the fine Selmer group $\mathcal{R}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$, and write $f(T)=a_r T^r+a_{r+1}T^{r+1}+\dots$, where $a_r\in \mathbb{Z}_p$ is non-zero. Here, the number $r$ is called the order of vanishing of $f(T)$ at $T=0$ and $a_r$ is called the \emph{leading term}. The class of $a_r$ modulo $\mathbb{Z}_p^\times$ does not depend on the choice of topological generator $\gamma\in \Gamma$. Let $\mathcal{M}(E/\mathbb{Q})$ be the subgroup of elements of $\Rfine{\mathbb{Q}}$ that lie in the image of the Kummer map
\[E(\mathbb{Q})\otimes \mathbb{Q}_p/\mathbb{Z}_p\rightarrow H^1(\mathbb{Q}_S/\mathbb{Q}, E[p^\infty]),\] and the \emph{fine Tate-Shafarevich group} $\Zhe_{p^\infty}(E/\mathbb{Q})$ is the cokernel of
\[\mathcal{M}(E/\mathbb{Q})\rightarrow \Rfine{\mathbb{Q}}.\] Then, $\Zhe_{p^\infty}(E/\mathbb{Q})$ is naturally identified with a subgroup of $\Sh(E/\mathbb{Q})[p^\infty]$. In our applications, we would like to understand when $\Zhe_{p^\infty}(E/\mathbb{Q})$ is non-zero. The following result shows that the fine Tate-Shafarevich group vanishes precisely when the $p$-primary part of the Tate-Shafarevich group does.
\begin{theorem}[Wuthrich]\label{theorem 2.5}
Let $E$ be an elliptic curve defined over $\mathbb{Q}$. Then, $\Zhe_{p^\infty}(E/\mathbb{Q})\neq 0$ precisely when $\Sh(E/\mathbb{Q})[p^\infty]\neq 0$. Furthermore, if $\op{rank} E(\mathbb{Q})>0$, then, $\Zhe_{p^\infty}(E/\mathbb{Q})=\Sh(E/\mathbb{Q})[p^\infty]$.
\end{theorem}
\begin{proof}
See \cite[Theorems 3.4, 3.5]{wuthrich2007fine}.
\end{proof}
When $\op{rank} E(\mathbb{Q})=0$, $\Zhe_{p^\infty}(E/\mathbb{Q})$ may be strictly smaller than $\Sh(E/\mathbb{Q})[p^\infty]$, as numerical examples in \cite{wuthrich2007fine} show.
\par Wuthrich gives an explicit formula for the leading term $a_r$ of the characteristic element, see \cite[Corollary 6.2]{wuthrich2007iwasawa}. We let $D=D_{E,p}$ be the cokernel of the natural map from $E(\mathbb{Q})\otimes \mathbb{Z}_p$ to the $p$-adic completion of $E(\mathbb{Q}_p)$. At a prime $\ell$ at which $E$ has bad reduction, $c_\ell(E)$ will denote the Tamagawa number at $\ell$. Given two numbers $a,b$, we write $a\sim b$ if $a=ub$ for some unit $u\in \mathbb{Z}_p^\times$. There is a $p$-adic height pairing on the fine Selmer group $\Rfine{\mathbb{Q}}$, see \cite[section 5]{wuthrich2007iwasawa}. The regulator $\op{Reg}\left(\Rfine{\mathbb{Q}}\right)$ is the determinant of this height pairing. The compact version of the fine Selmer group $\mathfrak{R}_{p^\infty}(E/\mathbb{Q})$ is the kernel of the localization map
\[\mathfrak{R}_{p^\infty}(E/\mathbb{Q}):=\op{ker}\left(H^1(\mathbb{Q}_S/\mathbb{Q}, T_p(E))\rightarrow H^1(\mathbb{Q}_p, T_p(E))\right).\]
Here, $T_p(E)$ denotes the $p$-adic Tate-module associated to $E$.
\begin{theorem}[Wuthrich]\label{wuthrich thm}
Let $E$ be an elliptic curve over $\mathbb{Q}$ with potentially good reduction at $p$. Assume that the following conditions are satisfied:
\begin{enumerate}[label=(\alph*)]
\item The fine Tate-Shafarevich group $\Zhe_{p^\infty}(E/\mathbb{Q})$ is finite.
\item The cyclotomic height pairing on $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is nondegenerate.
\end{enumerate}
Then, the following assertions hold
\begin{enumerate}
\item $r=\op{max}\left(0,\op{rank} E(\mathbb{Q})-1\right)$.
\item There is an injection with finite cokernel $J$ of $\mathfrak{R}_{p^\infty}(E/\mathbb{Q})$ into the cokernel of the corestriction map
\[\op{cor}:\varprojlim_n H^1(\mathbb{Q}_S/\mathbb{Q}_n, T_p(E))\rightarrow H^1(\mathbb{Q}_S/\mathbb{Q}, T_p(E)).\]
\item We have the following formula for the leading term $a_r$
\begin{equation}\label{leading term}a_r\sim \op{Reg}\left(\Rfine{\mathbb{Q}}\right)\times \left(\frac{\#\op{Tors}_{\mathbb{Z}_p}(D)\times \prod_{\ell\neq p} c_\ell(E) \times \#\Zhe_{p^\infty}(E/\mathbb{Q})}{\# J}\right),\end{equation}
where $\op{Tors}_{\mathbb{Z}_p}(D)$ denotes the $p$-primary torsion subgroup of $D$.
\end{enumerate}
\end{theorem}
\begin{proof}
The above result is \cite[Theorem 1.1]{wuthrich2007fine}.
\end{proof}
Note that under the assumptions of the above theorem, $r=0$ and the regulator $\op{Reg}\left(\Rfine{\mathbb{Q}}\right)=1$ when $\op{rank} E(\mathbb{Q})\leq 1$. As a result of the above formula, we obtain an important consequence concerning the vanishing of the Iwasawa invariants of the fine Selmer group $\Rfine{\mathbb{Q}_{\op{cyc}}}$.
\begin{corollary}\label{criterion for mu=lambda=0}
Let $E$ be an elliptic curve over $\mathbb{Q}$ and $p$ an odd prime. Assume that $E$ has potentially good reduction at $p$ and satisfies the conditions of Theorem \ref{wuthrich thm}. Furthermore, assume that the following conditions hold:
\begin{enumerate}
\item $\op{rank} E(\mathbb{Q})\leq 1$,
\item $p\nmid c_\ell(E)$ for all primes $\ell\neq p$,
\item $\Zhe_{p^\infty} (E/\mathbb{Q})=0$,
\item $\op{Tors}_{\mathbb{Z}_p}(D)=0$.
\end{enumerate}
Then, the $\mu$ and $\lambda$-invariants of the fine Selmer group $\Rfine{\mathbb{Q}_{\op{cyc}}}$ are $0$. In particular, $\Rfine{\mathbb{Q}_{\op{cyc}}}$ has finite cardinality.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{wuthrich thm} that $r=0$ and that the regulator \[\op{Reg}\left(\Rfine{\mathbb{Q}}\right)=1.\] Moreover, the conditions imply that $a_0$ is a unit in $\mathbb{Z}_p$, hence the characteristic element of $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is a unit in $\Lambda$. The result follows from this.
\end{proof}
The above result has an interesting consequence from a statistical point of view. Let us first introduce some notation. Given $x>0$, recall that $\pi(x)$ is the number of primes $p\leq x$ and that the prime number theorem states that $\pi(x)\sim \frac{x}{\op{log}(x)}$, i.e.,
\[\lim_{x\rightarrow\infty} \frac{\pi(x)}{\left(x/\op{log}(x)\right)}=1.\]
Given a set of primes $\mathcal{S}$, set $\mathcal{S}(x):=\{p\in \mathcal{S}\mid p\leq x\}$; note that $\# \mathcal{S}(x)\leq \pi(x)$ by definition.
\begin{definition}
Let $\beta\in [0,1]$ and $\mathcal{S}$ be a set of prime numbers. We say that $\mathcal{S}$ has density $\beta$ if
\[\lim_{x\rightarrow \infty} \frac{\# \mathcal{S}(x)}{\pi(x)}=\beta.\] Note that part of the requirement is that the limit actually exists.
\end{definition}
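To illustrate the definition with a set whose density is known unconditionally: by Dirichlet's theorem, the primes $p\equiv 1\pmod 4$ have density $1/2$, and the finite ratios $\#\mathcal{S}(x)/\pi(x)$ can be computed directly. A minimal Python sketch:

```python
# Compute the finite ratio #S(x)/pi(x) for S = {primes p = 1 mod 4},
# whose density is 1/2 by Dirichlet's theorem.  Primes are enumerated
# with a simple sieve of Eratosthenes.

def primes_up_to(x: int):
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n :: n] = [False] * len(sieve[n * n :: n])
    return [n for n, is_p in enumerate(sieve) if is_p]

def partial_density(x: int) -> float:
    ps = primes_up_to(x)
    s = [p for p in ps if p % 4 == 1]
    return len(s) / len(ps)

print(partial_density(10**5))  # close to 0.5, approaching it slowly
```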
Thus, if the set $\mathcal{S}$ has density $1$, we say that $\mathcal{S}$ consists of $100\%$ of primes. If the complement of $\mathcal{S}$ in the set of all primes is finite, then $\mathcal{S}$ certainly consists of $100\%$ of primes; the converse, however, need not hold. Given an elliptic curve $E_{/\mathbb{Q}}$, let $\Pi_{lt,E}$ be the set of \emph{local torsion primes} for $E$, i.e., the set of all primes $p$ such that $E$ has good reduction at $p$ and $E(\mathbb{Q}_p)$ contains a point of order $p$. Let $\Pi_{an, E}$ be the set of \emph{anomalous primes}, i.e., primes $p$ at which $E$ has good reduction such that $\widetilde{E}(\mathbb{F}_p)[p]\neq 0$. Here, $\widetilde{E}$ is the reduction of $E$ at $p$. Let $\Pi_E$ be the set of primes $p$ at which $E$ has good reduction such that $\op{Tors}_{\mathbb{Z}_p} D\neq 0$.
\begin{proposition}\label{PiE}
Let $E$ be an elliptic curve over $\mathbb{Q}$. Then, $\Pi_{lt,E}\subseteq \Pi_{an, E}$. Furthermore, if $\op{rank}E(\mathbb{Q})=0$, then $\Pi_E\subseteq \Pi_{lt, E}$.
\end{proposition}
\begin{proof}
Let $E$ have good reduction at $p$; in other words, $E$ can be viewed as an elliptic curve over $\op{Spec}\mathbb{Z}_p$. Letting $E_0$ be the formal group of $E$, we have a short exact sequence
\[0\rightarrow E_0(p\mathbb{Z}_p)\rightarrow E(\mathbb{Q}_p)\rightarrow \widetilde{E}(\mathbb{F}_p)\rightarrow 0.\] Note that $E_0(p\mathbb{Z}_p)\simeq \mathbb{Z}_p$ contains no $p$-torsion, and therefore there is an injection $E(\mathbb{Q}_p)[p^\infty]\hookrightarrow \widetilde{E}(\mathbb{F}_p)[p^\infty]$. This shows that if $p$ is a local torsion prime, then $p$ is an anomalous prime, i.e., $\Pi_{lt,E}\subseteq \Pi_{an, E}$. On the other hand, when $\op{rank} E(\mathbb{Q})=0$ and $E(\mathbb{Q}_p)$ has no nontrivial $p$-torsion, it follows that $\op{Tors}_{\mathbb{Z}_p} D=0$. Therefore, $\Pi_E\subseteq \Pi_{lt,E}$ when the Mordell-Weil rank of $E$ is $0$.
\end{proof}
\begin{corollary}\label{ cor 2.9}
Let $E$ be a non-CM elliptic curve over $\mathbb{Q}$ with $\op{rank} E(\mathbb{Q})=0$ for which the Tate-Shafarevich group $\Sh(E/\mathbb{Q})$ is finite. Then, for $100\%$ of primes $p$, \[\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})=0\text{ and }\lambda_p^{\op{fn}}(E/\Q_{\op{cyc}})=0.\] The set of primes for which $\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})>0$ or $\lambda_p^{\op{fn}}(E/\Q_{\op{cyc}})>0$ is contained in the set $\Pi_{lt, E}\cup \mathfrak{S}$, where $\mathfrak{S}$ is the set of primes dividing $2 \# \Sh(E/\mathbb{Q})\times \prod_{\ell\neq p} c_\ell(E)$.
\end{corollary}
\begin{proof}
Since it is assumed that $\Sh(E/\mathbb{Q})$ is finite, it follows that $\Zhe_{p^\infty}(E/\mathbb{Q})=0$ for all but finitely many primes. It is also clear (from the fact that $c_\ell(E)$ is a non-zero integer) that $\prod_{\ell\neq p} c_\ell(E)$ is a unit in $\mathbb{Z}_p$ for all but finitely many primes $p$. It is well known that $\Pi_{\op{an}, E}$ has density zero, see \cite{murty1997modular}. Since it is assumed that $\op{rank} E(\mathbb{Q})=0$, Proposition \ref{PiE} asserts that $\Pi_{E}\subseteq \Pi_{\op{an},E}$, and hence $\Pi_{E}$ has density zero. Therefore, we have shown that the conditions of Corollary \ref{criterion for mu=lambda=0} hold for $100\%$ of primes $p$.
\end{proof}
\section{Statistics for local torsion primes}
Let $E_{/\mathbb{Q}}$ be an elliptic curve. We recall that a prime $p$ at which $E$ has good reduction is said to be \emph{anomalous} if $p\mid \#\widetilde{E}(\mathbb{F}_p)$. Given an elliptic curve $E$, $\Pi_{an,E}$ is the set of anomalous primes for $E$. The following result is due to Greenberg, see \cite[Theorems 4.1 and 5.1]{Gre99}.
\begin{theorem}[Greenberg]
Let $E$ be an elliptic curve over $\mathbb{Q}$ without complex multiplication such that $\op{rank} E(\mathbb{Q})=0$ and $\Sh(E/\mathbb{Q})$ is finite. Then, for $100\%$ of primes $p$, the $p$-primary Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$ has $\mu=0$ and $\lambda=0$. The set of primes $p$ at which either $\mu>0$ or $\lambda>0$ is contained in $\Pi_{an,E}\cup \mathfrak{S}$, where $\mathfrak{S}$ is the finite set of primes dividing $2\#\Sh(E/\mathbb{Q}) \mathfrak{p}rod_{\ell\mathfrak{n}eq p} c_\ell(E)$. Furthermore, the set of primes at which $\mu>0$ or $\lambda>0$ is infinite if and only if $\Pi_{an,E}$ is infinite.
\end{theorem}
\mathfrak{p}ar Recall that for $x>0$, the set $\Pi_{an, E}(x)$ is the subset of $\Pi_{an,E}$ consisting of primes $p\leq x$. There are results on the asymptotic growth of $\#\Pi_{an, E}(x)$ as a function of $x$. The conjecture of Lang and Trotter \cite{lang2006frobenius} predicts that there is a constant $C\mathbf{g}eq 0$ (depending on $E$ and independent of $p$) such that
\begin{equation}\label{LTconj}\#\Pi_{an, E}(x)\sim C \frac{\sqrt{x}}{\op{log} x}.\end{equation}
This is only conjectural; in fact, the best known upper bound, due to V.~K.~Murty (see \cite{murty1997modular}), asserts that
\[\#\Pi_{an, E}(x)\ll \frac{x(\log \log x)^2}{(\log x)^2}.\] In \eqref{LTconj}, the constant $C$ may actually be $0$, and there are examples of elliptic curves $E$ such that the entire set of anomalous primes $\Pi_{an,E}$ is finite. Such elliptic curves are known as \emph{finitely anomalous curves}, and were studied by Ridgdill in her thesis \cite{ridgdill2010frequency}.
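For illustration, the set $\Pi_{an,E}(x)$ can be enumerated by naive point counting. The following Python sketch is purely illustrative and not part of the arguments of this paper; the helper names are ours. It uses the formula $\#\widetilde{E}(\mathbb{F}_p)=p+1+\sum_{x\in \mathbb{F}_p}\left(\frac{x^3+ax+b}{p}\right)$ for a curve in short Weierstrass form.

```python
# Illustrative sketch only (not part of the paper's arguments); the helper
# names are ours.  A prime p >= 5 of good reduction is anomalous when
# p | #E~(F_p), i.e. a_p(E) is congruent to 1 mod p.

def count_points(a, b, p):
    """#E~(F_p) for E: y^2 = x^3 + a*x + b, computed via Legendre symbols."""
    total = p + 1  # the point at infinity, plus p from the terms 1 + chi(rhs)
    for x in range(p):
        rhs = (x ** 3 + a * x + b) % p
        chi = pow(rhs, (p - 1) // 2, p) if rhs else 0  # Legendre symbol of rhs
        total += 1 if chi == 1 else (-1 if chi == p - 1 else 0)
    return total

def anomalous_primes(a, b, x):
    """Primes 5 <= p <= x of good reduction with p | #E~(F_p) (naive sieve)."""
    out = []
    for p in range(5, x + 1):
        if any(p % d == 0 for d in range(2, int(p ** 0.5) + 1)):
            continue  # p is composite
        if (4 * a ** 3 + 27 * b ** 2) % p == 0:
            continue  # bad reduction at p
        if count_points(a, b, p) % p == 0:
            out.append(p)
    return out
```

The primes $p=2,3$ are excluded for simplicity, so that the short Weierstrass model and the Legendre-symbol count apply.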
\mathfrak{p}ar In studying the fine Selmer group, anomalous primes no longer play a role since $\widetilde{E}(\mathbb{F}_p)$ is not a term in the Euler characteristic formula for the leading term of the characteristic element. Instead we turn our attention to local torsion primes. Let $E$ be an elliptic curve over $\mathbb{Q}$, and $p$ a prime number. Recall that \[D=D(E,p):=\op{cok} \left(E(\mathbb{Q})\otimes \mathbb{Z}_p\longrightarrow \widehat{E(\mathbb{Q}_p)}\right),\] where $\widehat{E(\mathbb{Q}_p)}$ is the $p$-adic completion of $E(\mathbb{Q}_p)$. The torsion subgroup $\op{Tors}_{\mathbb{Z}_p}D$ features in the numerator of \eqref{leading term}. The following conjecture on the distribution of local torsion primes is supported by heuristics.
\begin{conjecture}[David-Weston]\label{conj dw}
Let $E$ be an elliptic curve over $\mathbb{Q}$ without complex-multiplication. Then, the set of local torsion primes is finite.
\end{conjecture}
The following heuristic is taken from \cite[p.1]{davidweston2008}. Suppose that $p$ is a local torsion prime, i.e., $E$ has good reduction at $p$ and $E(\mathbb{Q}_p)[p]\mathfrak{n}eq 0$. This in particular implies that $\widetilde{E}(\mathbb{F}_p)[p]\mathfrak{n}eq 0$. If $p\mathbf{g}eq 7$, then it follows from the Weil bound that $a_p(E)=1$. On the other hand, $a_p(E)=1$ occurs with probability $\frac{1}{4\sqrt{p}}$. Given an elliptic curve $\widetilde{E}$ over $\mathbb{F}_p$ satisfying some additional conditions, $1$ out of every $p$ lifts of $\widetilde{E}$ to $\mathbb{Z}_p$ has a local torsion point, see \cite[Corollary 3.5]{davidweston2008}. The sum $\sum_p \frac{1}{4\sqrt{p}\cdot p}=\frac{1}{4}\sum_p p^{-3/2}$ is finite, which suggests that the expected number of local torsion primes is finite.
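The convergence of the heuristic sum can be checked numerically. The following sketch (an illustration only; the helper name is ours) evaluates $\frac{1}{4}\sum_{p\leq 10^5}p^{-3/2}$, which is already close to the limiting value $\frac{1}{4}P(3/2)\approx 0.21$, where $P$ denotes the prime zeta function.

```python
# Numerical illustration only: evaluate the heuristic count
# (1/4) * sum_p p^(-3/2) over primes p <= 10^5.  The helper name is ours.

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

heuristic = 0.25 * sum(p ** -1.5 for p in primes_up_to(10 ** 5))
# The tail beyond 10^5 contributes less than 10^-3, so the partial sum is
# already close to the limit (1/4) * P(3/2), P being the prime zeta function.
```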
\mathfrak{p}ar On the other hand, Conjecture 1.2 in \cite{wuthrich2007iwasawa} states that the fine Selmer group $\Rfine{\mathbb{Q}_{\op{cyc}}}$ associated to an elliptic curve with $\op{rank} E(\mathbb{Q})\leq 1$ is finite for all but finitely many primes $p$. More generally, the $\mathbb{Z}_p$-corank of $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is expected to be equal to the $\mathbb{Z}_p$-corank of $\Rfine{\mathbb{Q}}$ for all but finitely many primes $p$. We state the conjecture slightly differently from its original formulation.
\begin{conjecture}[Wuthrich] \label{conj wuthrich}
Let $E$ be an elliptic curve over $\mathbb{Q}$, then for all but finitely many primes $p$ of good reduction, the fine Selmer group $\Rfine{\mathbb{Q}_{\op{cyc}}}$ satisfies $\mu=0$ and $\lambda=\op{max}\left(\op{rank} E(\mathbb{Q})-1, 0\right)$.
\end{conjecture}
We specialize to the case when $\op{rank} E(\mathbb{Q})=0$ to obtain the following easy observation.
\begin{proposition}
Suppose that $E$ is an elliptic curve over $\mathbb{Q}$ with $\op{rank} E(\mathbb{Q})=0$. Assume that the following conditions are satisfied
\begin{enumerate}
\item $E$ does not have complex multiplication,
\item Conjecture \ref{conj dw} holds for $E$,
\item $\Sh(E/\mathbb{Q})$ is finite.
\end{enumerate}
Then, Conjecture \ref{conj wuthrich} holds for $E$.
\end{proposition}
\begin{proof}
Note that $\op{Reg}\left(\Rfine{\mathbb{Q}}\right)=1$ since $\op{rank} E(\mathbb{Q})=0$. The conditions of Theorem \ref{wuthrich thm} are satisfied. Since $\op{rank} E(\mathbb{Q})=0$, the order of vanishing of $f(T)=\op{char} \left(\Rfine{\mathbb{Q}_{\op{cyc}}}\right)$ at $T=0$ is $0$. We appeal to Wuthrich's formula \eqref{leading term} for the leading term $a_0$. Note that $a_0$ is a unit in $\mathbb{Z}_p$ if and only if $\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})=0$ and $\lambda_p^{\op{fn}}(E/\Q_{\op{cyc}})=0$. Conjecture \ref{conj wuthrich} in this context states that $a_0=a_0(E,p)$ is a $p$-adic unit for all but finitely many primes $p$. By Corollary \ref{ cor 2.9}, $a_0$ is a $p$-adic unit if and only if
\begin{enumerate}
\item $p\mathfrak{n}mid c_\ell(E)$ for all primes $\ell\mathfrak{n}eq p$,
\item $\mathbb{Z}he_{p^\infty} (E/\mathbb{Q})=0$,
\item $\op{Tors}_{\mathbb{Z}_p}(D)=0$.
\end{enumerate}
Theorem \ref{theorem 2.5} implies that $\mathbb{Z}he_{p^\infty} (E/\mathbb{Q})=0$ if and only if $\Sh(E/\mathbb{Q})[p^\infty]=0$. There are only finitely many primes $p$ which divide the Tamagawa product. Since it is assumed that $\Sh(E/\mathbb{Q})$ is finite, it follows that $\mathbb{Z}he_{p^\infty} (E/\mathbb{Q})=0$ for all but finitely many primes. If the set of local torsion primes is finite, then, $\op{Tors}_{\mathbb{Z}_p}(D)=0$ for all but finitely many primes $p$. As a result, the above conditions imply that Conjecture \ref{conj wuthrich} holds for $E$.
\end{proof}
Let $E$ be an elliptic curve over $\mathbb{Q}$ with good reduction at $p$, in other words, $E_{/\mathbb{Q}_p}$ admits a smooth local model $\mathcal{E}_{/\mathbb{Z}_p}$. Given a finitely generated abelian group $M$, we set $\op{rank}_pM:=\op{dim}\left(M\otimes \mathbb{F}_p\right)$ to be the $p$-rank of $M$.
\begin{lemma}\label{init lemma}
Let $\mathcal{E}$ be an elliptic curve over $\mathbb{Z}_p$. Then,
\[\op{rank}_p \left(\mathcal{E}(\mathbb{Z}/p^2\mathbb{Z})\right)=\begin{cases} 1 \text{ if }E(\mathbb{Q}_p)[p]=0;\\
2\text{ if }E(\mathbb{Q}_p)[p]\mathfrak{n}eq 0.
\end{cases}\]
\end{lemma}
\begin{proof}
The assertion is a special case of \cite[Lemma 3.1]{davidweston2008}.
\end{proof}
Given a pair $(a,b)$, we let $E_{a,b}$ be the elliptic curve defined by $y^2=x^3+ax+b$.
\begin{proposition}\label{fibers order p}
Let $E_{a,b}$ be the elliptic curve associated to a pair $(a,b)\in \mathbb{F}_p\times \mathbb{F}_p$ such that the $j$-invariant $j(E_{a,b})\mathfrak{n}eq 0,1728$. Then, there are $p$ distinct pairs $(A_i, B_i)\in \mathbb{Z}/p^2\times \mathbb{Z}/p^2$ such that $(A_i, B_i)\equiv (a,b)\mod{p}$ and \[\op{rank}_p E_{A_i, B_i}(\mathbb{Z}/p^2)=2.\]
\end{proposition}
\begin{proof}
See \cite[Proposition 3.4]{davidweston2008} for a proof of the result.
\end{proof}
\begin{definition}\label{ definition Cp Sp}
Let $\mathfrak{S}_p$ be the subset of $\mathbb{F}_p\times \mathbb{F}_p$ consisting of all pairs $(a,b)$ such that
\[\mathfrak{D}elta_{a,b}\mathfrak{n}eq 0\text{ and }E_{a,b}(\mathbb{F}_p)[p]\mathfrak{n}eq 0.\] Let $\mathfrak{S}_p^j$ be the subset of $\mathfrak{S}_p$ consisting of pairs $(a,b)$ such that $j(E_{a,b})=j$. Let $\mathfrak{A}_p$ be the set of pairs $(A,B)\in \mathbb{Z}/p^2\times \mathbb{Z}/p^2$ such that
\[p\mathfrak{n}mid \mathfrak{D}elta_{A,B} \text{ and }\op{rank}_p\left(E_{A,B}(\mathbb{Z}/p^2)\right)=2.\]
Given $j\in \mathbb{F}_p$, let $\mathfrak{A}_p^j\subseteq \mathfrak{A}_p$ consist of the pairs for which $j(E_{A,B})\equiv j\mod{p}$.
\end{definition}
We would like to estimate $\#\mathfrak{S}_p$ and $\#\mathfrak{A}_p$. Note that by definition $\mathfrak{A}_p$ reduces mod-$p$ to $\mathfrak{S}_p$, and by Proposition \ref{fibers order p}, the fibers of the reduction map $\mathfrak{A}_p^j\rightarrow \mathfrak{S}_p^j$ have order $p$ provided $j\mathfrak{n}eq 0, 1728$. The quantity $\#\mathfrak{S}_p$ can be expressed in terms of Hurwitz class numbers associated to integral binary quadratic forms with fixed discriminant.
\mathfrak{p}ar We follow the notation and conventions of \cite[section 2]{schoof1987nonsingular}. Let $\mathfrak{D}elta\in \mathbb{Z}_{<0}$ with $\mathfrak{D}elta\equiv 0,1\mod{4}$, and set
\[B(\mathfrak{D}elta):=\left\{ax^2+bxy+cy^2\in \mathbb{Z}[x,y]\mid a>0, b^2-4ac=\mathfrak{D}elta\right\}.\] The group $\op{SL}_2(\mathbb{Z})$ acts on $\mathbb{Z}[x,y]$, where a matrix $\sigma=\mtx{p}{q}{r}{s}$ sends $x\mapsto (px+qy)$ and $y\mapsto (rx+sy)$. Thus, if $f=ax^2+bxy+cy^2$, the matrix $\sigma$ acts by
\[f\circ \sigma= a(px+qy)^2+b(px+qy)(rx+sy)+c(rx+sy)^2.\] It can be checked that $B(\mathfrak{D}elta)$ is stable under the action of $\op{SL}_2(\mathbb{Z})$ and $B(\mathfrak{D}elta)/\op{SL}_2(\mathbb{Z})$ is finite, see \emph{loc. cit.} for additional details.
\begin{definition}
The \emph{Hurwitz class number} $H(\mathfrak{D}elta)$ is the order of $B(\mathfrak{D}elta)/\op{SL}_2(\mathbb{Z})$.
\end{definition}
Given an elliptic curve $E_{/\mathbb{F}_p}$, the Frobenius endomorphism $\varphi\in \op{End}(E)$ satisfies the equation $\varphi^2-t\varphi+p=0$. Here, $t$ is the integer given by $\# E(\mathbb{F}_p)=p+1-t$. According to the well-known Hasse bound, we have that $|t|\leq 2 \sqrt{p}$. Note that $p$ divides $t$ if and only if $E$ is supersingular.
\begin{definition} (cf. \cite[Definition 4.1]{schoof1987nonsingular})
Two elliptic curves $E$ and $E'$ over $\mathbb{F}_p$ are \emph{isogenous over $\mathbb{F}_p$} if \[\#E(\mathbb{F}_p)=\#E'(\mathbb{F}_p).\] Let $I(t)$ be the isogeny class of elliptic curves over $\mathbb{F}_p$ with $\#E(\mathbb{F}_p)=p+1-t$. Let $N(t)$ denote the number of $\mathbb{F}_p$-isomorphism classes of elliptic curves in $I(t)$.
\end{definition}
Consider the special case when $\#E(\mathbb{F}_p)$ is divisible by $p$, in other words, $t\equiv 1\mod{p}$. Note that since $|t|\leq 2\sqrt{p}$, it follows that if $p\mathbf{g}eq 7$, then, $E(\mathbb{F}_p)$ contains a non-zero $p$-torsion point precisely when $t=1$. However, if $p\leq 5$, then the two possibilities are $t=-p+1$, in which case, $\# E(\mathbb{F}_p)=2p$, and $t=1$, i.e., $\#E(\mathbb{F}_p)=p$.
\mathfrak{p}ar Let $\bar{\mathfrak{S}}_p$ be the set of isomorphism classes of elliptic curves $E$ over $\mathbb{F}_p$ such that $E(\mathbb{F}_p)[p]\mathfrak{n}eq 0$. Thus, we have that
\[\#\bar{\mathfrak{S}}_p=\begin{cases}N(1)&\text{ for }p\mathbf{g}eq 7,\\
N(-p+1)+N(1) &\text{ for }p\leq 5.\end{cases}\] Waterhouse showed that the number of elliptic curves in a given isogeny class can be computed in terms of certain class numbers, see \cite{waterhouse1969abelian}. These results were later reformulated in terms of Hurwitz class numbers by Schoof.
\begin{theorem}[Waterhouse, Schoof]\label{waterhouse schoof theorem}
Let $t$ be an integer such that $p\mathfrak{n}mid t$ and $t^2<4p$. Then, $N(t)=H(t^2-4p)$.
\end{theorem}
\begin{proof}
The reader is referred to \cite[Theorem 4.6]{schoof1987nonsingular} for a proof of the result.
\end{proof}
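The class numbers $H(\mathfrak{D}elta)$ are readily computable: every $\op{SL}_2(\mathbb{Z})$-orbit in $B(\mathfrak{D}elta)$ contains exactly one reduced form $ax^2+bxy+cy^2$ with $-a<b\leq a\leq c$ and $b\mathbf{g}eq 0$ whenever $a=c$, imprimitive forms included. The following Python sketch (illustrative only; the function name is ours) counts these reduced forms.

```python
# Illustrative sketch (the function name is ours): H(Delta) is computed by
# counting reduced positive definite forms a x^2 + b x y + c y^2 with
# b^2 - 4 a c = Delta, imprimitive forms included; each SL_2(Z)-orbit in
# B(Delta) contains exactly one form with -a < b <= a <= c, b >= 0 if a == c.

def class_number_H(Delta):
    assert Delta < 0 and Delta % 4 in (0, 1)
    count = 0
    a = 1
    while 3 * a * a <= -Delta:  # reduced forms satisfy 3 a^2 <= |Delta|
        for b in range(-a + 1, a + 1):
            if (b * b - Delta) % (4 * a) == 0:
                c = (b * b - Delta) // (4 * a)
                if c >= a and not (a == c and b < 0):
                    count += 1
        a += 1
    return count
```

For instance, the sketch returns $H(-4)=H(-19)=1$, so that for $p=5$ the formula of Corollary \ref{ estimate on bar Sp} below gives $\#\bar{\mathfrak{S}}_5=H(-4)+H(-19)=2$.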
\begin{corollary}\label{ estimate on bar Sp}
Let $p$ be a prime and let $\bar{\mathfrak{S}}_p$ be as above. Then, we have that
\[\#\bar{\mathfrak{S}}_p=\begin{cases}H(1-4p)&\text{ for }p\mathbf{g}eq 7,\\
H(p^2+1-6p)+H(1-4p) &\text{ for }p\leq 5.\end{cases}\]
\end{corollary}
\begin{proof}
From the discussion preceding Theorem \ref{waterhouse schoof theorem},
\[\#\bar{\mathfrak{S}}_p=\begin{cases}N(1)&\text{ for }p\mathbf{g}eq 7,\\
N(-p+1)+N(1) &\text{ for }p\leq 5.\end{cases}\] The result then follows from Theorem \ref{waterhouse schoof theorem}.
\end{proof}
\begin{proposition}\label{Cp estimate prop}
Let $p$ be any prime and $\mathfrak{S}_p$ be as above. Then, we have that
\[\# \mathfrak{S}_p\leq \left(\frac{p-1}{2}\right)\left\{ z_p H(p^2+1-6p)+H(1-4p) \right\}< Cp^{3/2} \op{log}p (\op{log log } p)^2,\] where $C>0$ is an effective constant (independent of $p$) and $z_p=\begin{cases} 1 &\text{ if }p\leq 5,\\
0 &\text{ if }p\mathbf{g}eq 7.
\end{cases}$
\end{proposition}
\begin{proof}
The elliptic curves $E_{a,b}$ and $E_{a',b'}$ are isomorphic over $\mathbb{F}_p$ if and only if there exists $c\in \mathbb{F}_p^\times$ such that
\[a'=c^4a \text{ and } b'=c^6 b.\] Since $c$ and $-c$ give rise to the same pair $(a',b')$, the number of elliptic curves $E_{a',b'}$ isomorphic to a given elliptic curve $E_{a,b}$ is at most $\left(\frac{p-1}{2}\right)$. Therefore, we have that
\[\begin{split}
\# \mathfrak{S}_p\leq \left(\frac{p-1}{2}\right) \# \bar{\mathfrak{S}}_p<Cp^{3/2} \op{log} p (\op{log log} p)^2,
\end{split}\] where the first inequality follows from the bound on the size of isomorphism classes, and the second follows from Corollary \ref{ estimate on bar Sp} together with a well known bound on Hurwitz class numbers \cite[Proposition 1.8]{Lenstra_annals}.
\end{proof}
\begin{corollary}\label{cor frakCp}
Let $\mathfrak{A}_p$ be as in Definition \ref{ definition Cp Sp}. Then, we have that
\begin{equation}\label{bound on Up}\frac{\#\mathfrak{A}_p}{p^4}<\frac{2}{p} + C p^{-3/2}\op{log}p (\op{log log} p)^2,\end{equation} where $C>0$ is an effective constant.
\end{corollary}
\begin{proof}
We subdivide $\mathfrak{A}_p$ into two disjoint sets $\mathfrak{A}_p^{(1)}$ and $\mathfrak{A}_p^{(2)}$. Set $\mathfrak{A}_p^{(1)}:=\mathfrak{A}_p^0\cup \mathfrak{A}_p^{1728}$, i.e., the pairs $(A,B)\in \mathfrak{A}_p$ such that the $j$-invariant is either $0$ or $1728$ modulo $p$. Note that $\mathfrak{A}_p^{(1)}$ consists of all pairs $(A,B)\in \mathfrak{A}_p$ such that $p\mid AB$. Clearly, the number of pairs $(A,B)$ such that $p\mid A$ is $p^3$. Thus we have the following obvious upper bound: \begin{equation}\label{cor 3.13 e1}\#\mathfrak{A}_p^{(1)}<2 p^3.\end{equation} Let $\mathfrak{A}_p^{(2)}$ be the complement of $\mathfrak{A}_p^{(1)}$ in $\mathfrak{A}_p$. For any pair $(A,B)\in \mathfrak{A}_p^{(2)}$, we have by construction that $j(E_{A,B})\mathfrak{n}ot \equiv 0, 1728\mod{p}$. Proposition \ref{fibers order p} asserts that given a pair $(a, b)\in \mathfrak{S}_p$ with $j$-invariant not equal to $0$ or $1728$, there are $p$ lifts $(A,B)\in \mathfrak{A}_p$ of $(a,b)$. Therefore, we have the following upper bound: $\# \mathfrak{A}_p^{(2)}\leq p \# \mathfrak{S}_p$. Invoking Proposition \ref{Cp estimate prop}, we obtain the following bound:
\begin{equation}\label{cor 3.13 e2}\# \mathfrak{A}_p^{(2)}< Cp^{5/2} \op{log}p (\op{log log } p)^2.\end{equation}
Putting together \eqref{cor 3.13 e1} and \eqref{cor 3.13 e2}, we obtain
\[ \frac{\#\mathfrak{A}_p}{p^4}=\frac{\#\mathfrak{A}_p^{(1)}}{p^4}+\frac{\#\mathfrak{A}_p^{(2)}}{p^4}<\frac{2}{p} + C p^{-3/2}\op{log}p (\op{log log} p)^2.\]
\end{proof}
\begin{remark}
In the proof of the above result, we have invoked the obvious upper bound for $\#\mathfrak{A}_p^{(1)}$, which contributes to the term $\frac{2}{p}$ in the estimate \eqref{bound on Up}. Given a pair $(A,B)$ such that $p\mid AB$, note that the pair $(A,B)$ is contained in $\mathfrak{A}_p^{(1)}$ precisely when $(\bar{A}, \bar{B})=(A,B)\mod{p}$ is contained in $\mathfrak{S}_p$. Let $\mathfrak{S}_p^{\star}$ be the subset of $\mathfrak{S}_p$ consisting of all pairs $(a,b)$ such that either $a$ or $b$ is $0$. In the above proof, we have used the obvious bound $\#\mathfrak{S}_p^{\star}< 2p$ to obtain that
\[\# \mathfrak{A}_p^{(1)}\leq p^2\#\mathfrak{S}_p^{\star}< 2p^3.\] The main term $2/p$ in the estimate \eqref{bound on Up} can be replaced by $\#\mathfrak{S}_p^{\star}/p^2$. Thus, a better understanding of $\#\mathfrak{S}_p^{\star}$ would potentially give a better estimate.
\end{remark}
\section{Results for a fixed prime and varying elliptic curve}\label{s 4}
\mathfrak{p}ar Let $E_{/\mathbb{Q}}$ be an elliptic curve. Note that $E$ is defined by a Weierstrass equation $E:y^2=x^3+ax+b$, where $a$ and $b$ are integers such that for all primes $\ell$, $\ell^6\mathfrak{n}mid b$ whenever $\ell^4\mid a$. Such a Weierstrass equation is unique and is referred to as the \emph{minimal Weierstrass equation}. The \emph{naive height} of $E$ is defined to be the quantity $H(E):=\op{max}\left\{|a|^3, b^2\right\}$.
\begin{definition}\label{density def}Let $\mathcal{E}$ be the set of isomorphism classes of elliptic curves defined over $\mathbb{Q}$. Given a subset $\mathcal{S}$ of $\mathcal{E}$, and $x>0$, we set
\[\mathcal{S}_{<x}:=\{E\in \mathcal{S}\mid H(E)<x\},\]
and $\mathfrak{d}(\mathcal{S},x):=\frac{\# \mathcal{S}_{<x}}{\# \mathcal{E}_{<x}}$.
We say that $\mathcal{S}$ has upper (resp. lower) density $\delta$ if $\limsup_{x\rightarrow \infty} \mathfrak{d}(\mathcal{S},x) =\delta$ (resp. $\liminf_{x\rightarrow \infty} \mathfrak{d}(\mathcal{S},x) =\delta$). If
\[\lim_{x\rightarrow \infty} \mathfrak{d}(\mathcal{S},x) =\delta,\] then we say that $\mathcal{S}$ has density $\delta$; part of the requirement here is that the limit exists. We let $\bar{\mathfrak{d}}(\mathcal{S})$ (resp. $\underline{\mathfrak{d}}(\mathcal{S})$) denote the upper (resp. lower) density of $\mathcal{S}$, and the density is denoted $\mathfrak{d}(\mathcal{S})$. Note that when $\mathfrak{d}(\mathcal{S})$ is defined, so are $\underline{\mathfrak{d}}(\mathcal{S})$ and $\bar{\mathfrak{d}}(\mathcal{S})$, and
\[\mathfrak{d}(\mathcal{S})=\underline{\mathfrak{d}}(\mathcal{S})=\bar{\mathfrak{d}}(\mathcal{S}).\]
\end{definition}
\mathfrak{p}ar The conjecture on rank distribution by Katz and Sarnak \cite{katz1999random} predicts that when ordered by height, $50\%$ of elliptic curves have rank $0$, the other $50\%$ have rank $1$, and $0\%$ have rank $>1$. In other words, let $\mathcal{E}^0, \mathcal{E}^1$ and $\mathcal{E}^{>1}$ be the subsets of elliptic curves of rank $0$, $1$ and $>1$ respectively. Then, the conjecture predicts that $\mathfrak{d}(\mathcal{E}^0)=\mathfrak{d}(\mathcal{E}^1)=\frac{1}{2}$, and $\mathfrak{d}(\mathcal{E}^{>1})=0$.
\mathfrak{p}ar Certain unconditional results have been proven by Bhargava and Shankar, who study the average size of the $p$-Selmer group $\op{Sel}_p(E/\mathbb{Q})$. This is the $p$-torsion subgroup of the $p$-primary Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q})$. For instance, in the preprint \cite{bhargava2013average}, it is shown that the average size of the $5$-Selmer group is $6$ as $E$ varies over $\mathcal{E}$. This result implies that $\underline{\mathfrak{d}}(\mathcal{E}^0)\mathbf{g}eq \frac{1}{5}$ and $\bar{\mathfrak{d}}(\mathcal{E}^{>1})\leq \frac{1}{5}$.
\mathfrak{p}ar In this section, we fix a prime number $p\mathbf{g}eq 5$ and study the following question.
\begin{question}
As $E$ varies over all elliptic curves defined over $\mathbb{Q}$, what can be said about the proportion of elliptic curves $E$ for which the following equivalent conditions are satisfied:
\begin{enumerate}
\item $\Rfine{\mathbb{Q}_{\op{cyc}}}$ has finite cardinality.
\item The $\mu$ and $\lambda$-invariants of $\Rfine{\mathbb{Q}_{\op{cyc}}}$ are both equal to $0$.
\item The leading term $f(0)$ of the characteristic element $f(T)$ of $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is a unit in $\mathbb{Z}_p$.
\end{enumerate}
\end{question}
Note that when the Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$ is finite, it is in fact equal to $0$, see \cite[Corollary 3.6]{KR21}. This is only possible when the Mordell-Weil rank of $E$ is $0$ and $E$ has ordinary reduction at $p$ (for further details, the reader may refer to \cite[section 4]{KR21}). Since the fine Selmer group is a subgroup of the Selmer group, it follows that if $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})=0$, then $\Rfine{\mathbb{Q}_{\op{cyc}}}=0$ as well. However, it is indeed possible for the fine Selmer group $\Rfine{\mathbb{Q}_{\op{cyc}}}$ to be finite, while the entire Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$ is infinite. As the following example shows, this may be so even in the special case when the elliptic curve $E$ has Mordell-Weil rank $0$ and good ordinary reduction at $p$.
\begin{example}
Following \cite[section 9.1]{wuthrich2007iwasawa}, consider the elliptic curve
\[E:y^2+y=x^3-x^2-10x-20,\] and the prime $p=5$. The $\mu$-invariant of the Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$ is $1$; in fact,
\[\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})^\vee=\Lambda/p.\] On the other hand, $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is finite and non-zero. In particular, the $\mu$ and $\lambda$-invariants of the fine Selmer group are zero; however, $\Rfine{\mathbb{Q}_{\op{cyc}}}\mathfrak{n}eq 0$.
\end{example}
More generally, if $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is finite and non-zero at a prime of good ordinary reduction, then the Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$ must be infinite, since otherwise it would be zero.
\subsection{Statistics for the Selmer group}
Before we proceed to prove results for the fine Selmer group, let us briefly recall the statistical results for the Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})$ proved by Kundu and the first named author in \cite[section 4]{KR21}. Some of these results may be marginally refined using \cite[Lemma 6.4]{ray2021arithmetic}. In this subsection, we summarize the results from these sources. Let $\mathfrak{F}_p$ be the set of elliptic curves $E\in \mathcal{E}$, such that
\begin{enumerate}
\item $\op{rank} E(\mathbb{Q})=0$,
\item $E$ has good ordinary reduction at $p$,
\item $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})\mathfrak{n}eq 0$.
\end{enumerate}
Let $\mathcal{S}_p$ be the set of elliptic curves $E_{/\mathbb{Q}}$ such that $\Sh(E/\mathbb{Q})[p^\infty]\mathfrak{n}eq 0$. A conjecture of Delaunay \cite{delaunay2007heuristics} predicts that $\bar{\mathfrak{d}}(\mathcal{S}_p)=1-\mathfrak{p}rod_{i\mathbf{g}eq 1}\left(1-\frac{1}{p^{2i-1}}\right)=\frac{1}{p}+\frac{1}{p^3}-\frac{1}{p^4}+\dots$. The heuristics supporting this conjecture have been further refined by Poonen and Rains, see \cite{poonen2012random}.
\begin{conjecture}[Del-p]\label{Del}
The proportion of all elliptic curves $E$ for which $p\mid \# \Sh(E/\mathbb{Q})$ is given by
\[\left(1-\mathfrak{p}rod_{i\mathbf{g}eq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right).\]
\end{conjecture}
In preparation for the next result, recall the definition of the set $\mathfrak{S}_p$ from Definition \ref{ definition Cp Sp}.
\begin{theorem}[Kundu-Ray]\label{kundu ray theorem}
Let $p\mathbf{g}eq 5$ be a prime number and $\mathfrak{F}_p$ be defined as above. Assume $\Sh(E/\mathbb{Q})[p^\infty]$ is finite for all elliptic curves $E_{/\mathbb{Q}}$ and that the conjecture (Del-p) is satisfied. Then, there is an effective constant $C_1>0$ such that the following bounds are satisfied:
\begin{equation}\label{selmer main formula}
\begin{split}\bar{\mathfrak{d}}(\mathfrak{F}_p)\leq & \zeta(10)\frac{\#\mathfrak{S}_p}{p^2}+\left(1-\mathfrak{p}rod_{i\mathbf{g}eq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)+(\zeta(p)-1),\\
< & C_1 \frac{\op{log} p (\op{log log}p)^2}{\sqrt{p}}+\left(1-\mathfrak{p}rod_{i\mathbf{g}eq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)+(\zeta(p)-1).
\end{split}\end{equation}
Moreover, the effective constant $C_1$ can be taken to be $\zeta(10) C$, where $C$ is the constant from Proposition \ref{Cp estimate prop}.
\end{theorem}
\begin{proof}
The result is essentially a rephrasing of \cite[Theorem 4.3]{KR21}; however, in light of the minor update in the statement taken from \cite{ray2021arithmetic}, and for the convenience of the reader, we summarize the proof here. Let $E$ be an elliptic curve with good ordinary reduction at $p$, and let $f(T)$ be the characteristic element of $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})^{\vee}$, expressed as a power series in $T$. Suppose that $E\in \mathfrak{F}_p$; then, in particular, $E$ has good ordinary reduction at $p$ and $\op{rank} E(\mathbb{Q})=0$. On the other hand, it is assumed that $\Sh(E/\mathbb{Q})[p^\infty]$ is finite, and therefore the Selmer group $\op{Sel}_{p^\infty}(E/\mathbb{Q})$ is finite. It follows that the constant term $f(0)$ is a non-zero $p$-adic integer. By the well known Euler characteristic formula,
\[f(0)\sim \frac{\# \Sh(E/\mathbb{Q})[p^{\infty}]\times \left(\mathfrak{p}rod_{l}c_l(E)\right)\times \left(\# \widetilde{E}(\mathbb{F}_p)\right)^2}{\#\left(E(\mathbb{Q})[p^{\infty}]\right)^2}.\]
In particular, if the three terms $\# \Sh(E/\mathbb{Q})[p^{\infty}]$, $\left(\mathfrak{p}rod_{l}c_l(E)\right)$ and $\#\widetilde{E}(\mathbb{F}_p)$ are units in $\mathbb{Z}_p$, then $f(0)$ will be a unit in $\mathbb{Z}_p$. We study the average behaviour of the following quantities for fixed $p$ and varying $E\in \mathcal{E}$,
\begin{enumerate}
\item $s_p(E):=\#\Sh(E/\mathbb{Q})[p^{\infty}]$,
\item $\tau_p(E):=\mathfrak{p}rod_l c_l^{(p)}(E)$,
\item $\delta_p(E):=\#\left(\widetilde{E}(\mathbb{F}_p)[p]\right)$.
\end{enumerate}
Let $\mathcal{E}_1, \mathcal{E}_2$ and $\mathcal{E}_3$ be the sets of \emph{all} elliptic curves $E_{/\mathbb{Q}}$ with good reduction at $p$, such that $p$ divides $s_p(E)$, $\tau_p(E)$ and $\delta_p(E)$ respectively. Note that if $f(0)$ is a $p$-adic unit, then $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})=0$. Thus if $E\in \mathfrak{F}_p$, then $f(0)$ is not a unit in $\mathbb{Z}_p$, and hence, $E\in \mathcal{E}_1\cup \mathcal{E}_2\cup \mathcal{E}_3$. From the containment $\mathfrak{F}_p\subseteq \bigcup_{i=1}^3 \mathcal{E}_i$, we obtain the following bound:
\[\bar{\mathfrak{d}}(\mathfrak{F}_p)\leq \sum_{i=1}^3 \bar{\mathfrak{d}}(\mathcal{E}_i).\]
Since we assume that (Del-p) is satisfied, we have that
\[\bar{\mathfrak{d}}(\mathcal{E}_1)\leq \left(1-\mathfrak{p}rod_{i\mathbf{g}eq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right).\]
It follows from \cite[Theorem 4.2]{KR21} or \cite[Corollary 8.8]{hatley2021statistics} that
\[\bar{\mathfrak{d}}(\mathcal{E}_2)\leq \zeta(p)-1.\]
Next, it follows from \cite[Theorem 4.14]{KR21} and Proposition \ref{Cp estimate prop} (or \cite[Lemma 6.4]{ray2021arithmetic}) that
\[\bar{\mathfrak{d}}(\mathcal{E}_3)\leq \zeta(10) \frac{\# \mathfrak{S}_p}{p^2}<C_1 \frac{\op{log} p(\op{log log} p)^2}{\sqrt{p}},\] where $C_1=\zeta(10) C$. Putting everything together, the result follows.
\end{proof}
\begin{remark}
The result shows that when $p$ is suitably large, the density of the set $\mathfrak{F}_p$ is quite small.
\begin{itemize}
\item Note that $\sqrt{p}$ grows faster than $\op{log} p (\op{log log} p)^2$ as $p\rightarrow \infty$, hence, the term $C_1 \frac{\op{log} p(\op{log log} p)^2}{\sqrt{p}}$ goes to $0$ as $p\rightarrow \infty$. However, the exact values of $\frac{\#\mathfrak{S}_p}{p^2}$ can be calculated for small enough primes, and for this, we refer to Table \ref{tab:1} (or, \cite[Table 2]{KR21II}).
\item The function \[f(p):= \left(1-\mathfrak{p}rod_{i\mathbf{g}eq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)=\frac{1}{p}+\frac{1}{p^3}-\frac{1}{p^4}+\dots\] has the set of prime numbers as its domain, and is asymptotic to $\frac{1}{p}$. In other words,
\[\lim_{p\rightarrow \infty} \frac{f(p)}{p^{-1}}=1,\] where the limit is taken over primes $p$. \mathfrak{p}ar This is easy to prove; however, we provide the details here. First, it is easy to see that for any integer $m\mathbf{g}eq 1$,
\[1\mathbf{g}eq \mathfrak{p}rod_{i\mathbf{g}eq m}\left(1-\frac{1}{p^{2i-1}}\right)\mathbf{g}eq 1-\sum_{i\mathbf{g}eq m} \frac{1}{p^{2i-1}}.\]
Therefore, we have the following bounds
\begin{equation}\label{bound 1}f(p)= 1-\left(1-\frac{1}{p}\right)\mathfrak{p}rod_{i\mathbf{g}eq 2}\left(1-\frac{1}{p^{2i-1}}\right)\mathbf{g}eq 1-\left(1-\frac{1}{p}\right)=\frac{1}{p},\end{equation}
\begin{equation}\label{bound 2}f(p)= 1-\mathfrak{p}rod_{i\mathbf{g}eq 1}\left(1-\frac{1}{p^{2i-1}}\right)\leq \sum_{i\mathbf{g}eq 1} \frac{1}{p^{2i-1}}= \frac{1}{p}+\frac{1}{p(p^2-1)}.\end{equation}
Combining \eqref{bound 1} and \eqref{bound 2}, we find that
\[1\leq \frac{f(p)}{p^{-1}}\leq 1+\frac{1}{(p^2-1)},\] and it thus follows that $f(p)\sim \frac{1}{p}$. Therefore, in particular, $f(p)$ is asymptotically smaller than the term $C_1 \frac{\op{log} p(\op{log log} p)^2}{\sqrt{p}}$ as a function of $p$.
\item It is easy to see that
\[\zeta(p)-1=2^{-p}+\sum_{n\mathbf{g}eq 3} n^{-p}< 2^{-p}+\int_{2}^\infty x^{-p}dx=2^{-p}\left(\frac{p+1}{p-1}\right).\]
\end{itemize}
\end{remark}
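The elementary bounds of the remark above can also be verified numerically. The following sketch (an illustration only; the helper names are ours) checks that $1\leq p\,f(p)\leq 1+\frac{1}{p^2-1}$ and that $\zeta(p)-1<2^{-p}\left(\frac{p+1}{p-1}\right)$ for a few small primes $p$, truncating the infinite product and the zeta series.

```python
# Numerical sanity check (illustration only; helper names are ours) of the
# elementary bounds: 1 <= p*f(p) <= 1 + 1/(p^2 - 1) and
# zeta(p) - 1 < 2^(-p) * (p + 1)/(p - 1), for a few small primes p.

def f(p, terms=60):
    """Truncation of f(p) = 1 - prod_{i >= 1} (1 - p^{-(2i-1)})."""
    prod = 1.0
    for i in range(1, terms + 1):
        prod *= 1.0 - p ** (1 - 2 * i)  # factor (1 - p^{-(2i-1)})
    return 1.0 - prod

def zeta_minus_one(p, terms=10 ** 4):
    """Truncation of zeta(p) - 1 = sum_{n >= 2} n^{-p}."""
    return sum(n ** -p for n in range(2, terms + 1))
```

The truncation errors are far smaller than the margins in the inequalities, so the checks are meaningful despite the use of floating point arithmetic.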
Thus, Theorem \ref{kundu ray theorem} shows that $\bar{\mathfrak{d}}(\mathfrak{F}_p)\rightarrow 0$ as $p\rightarrow \infty$, and in fact, that as a function of $p$ it is $O\left(\frac{\op{log}p (\op{log log} p)^2}{\sqrt{p}}\right)$. This indicates that, given a prime $p$, most elliptic curves $E_{/\mathbb{Q}}$ of rank $0$ and good ordinary reduction at $p$ have $\op{Sel}_{p^\infty}(E/\mathbb{Q}_{\op{cyc}})=0$. In fact, the above remark shows that the contribution of the Tamagawa factors is negligible in comparison to that of anomalous primes.
\subsection{Statistics for the Fine Selmer group in the rank-zero case}
\mathfrak{p}ar We now turn our attention to the fine Selmer group. In this subsection, we study the case when $\op{rank} E(\mathbb{Q})=0$. Let $p\mathbf{g}eq 5$ be a prime number and let $\mathfrak{B}_p$ be the set of elliptic curves $E_{/\mathbb{Q}}$ such that the following conditions are satisfied:
\begin{enumerate}
\item $E$ has good reduction at $p$,
\item $\op{rank} E(\mathbb{Q})=0$,
\item $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is infinite (equivalently, either $\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})>0$ or $\lambda_p^{\op{fn}}(E/\Q_{\op{cyc}})>0$).
\end{enumerate}
Thus, we no longer impose the good ordinary condition at $p$.
\begin{theorem}\label{th 4.6}
Let $p\mathbf{g}eq 5$ be a prime and $\mathfrak{B}_p$ be defined as above. Assume that $\Sh(E/\mathbb{Q})[p^\infty]$ is finite for all elliptic curves $E_{/\mathbb{Q}}$ and that the conjecture (Del-p) is satisfied. Then, we have the following bound on the upper density of $\mathfrak{B}_p$,
\begin{equation}\label{main bound fine Selmer}
\begin{split}
\bar{\mathfrak{d}}(\mathfrak{B}_p)\leq & \zeta(10)\frac{\#\mathfrak{A}_p}{p^4}+ \left(1-\mathfrak{p}rod_{i\mathbf{g}eq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)+(\zeta(p)-1),\\
\leq & \frac{2 \zeta(10)}{p} + C_1 \frac{ \op{log}p (\op{log log} p)^2}{p^{3/2}}+\left(1-\mathfrak{p}rod_{i\mathbf{g}eq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)+(\zeta(p)-1).\\
\end{split}
\end{equation}
Here $C_1$ is the explicit constant given by $C_1=\zeta(10) C$, where $C$ is the constant from Corollary \ref{cor frakCp}. \end{theorem}
\begin{proof}
Consider the following quantities:\begin{enumerate}
\item $s_p(E):=\#\Sh(E/\mathbb{Q})[p^{\infty}]$,
\item $\tau_p(E):=\mathfrak{p}rod_l c_l^{(p)}(E)$,
\item $\beta_p(E):=\#\left(E(\mathbb{Q}_p)[p]\right)$.
\end{enumerate}
Recall from the proof of Theorem \ref{kundu ray theorem} that $\mathcal{E}_1$ and $\mathcal{E}_2$ consist of all elliptic curves $E_{/\mathbb{Q}}$ such that $p$ divides $s_p(E)$ and $\tau_p(E)$ respectively. Let $\mathcal{E}_4$ be the set of all elliptic curves $E_{/\mathbb{Q}}$, with good reduction at $p$, such that $p$ divides $\beta_p(E)$. For these elliptic curves, $p$ is a local torsion prime. It follows from Corollary \ref{criterion for mu=lambda=0} that $\mathfrak{B}_p$ is contained in the union $\mathcal{E}_1\cup \mathcal{E}_2\cup \mathcal{E}_4$. Hence,
\begin{equation}\label{E1 to E4 bound}\bar{\mathfrak{d}}(\mathfrak{B}_p)\leq \bar{\mathfrak{d}}(\mathcal{E}_1)+\bar{\mathfrak{d}}(\mathcal{E}_2)+\bar{\mathfrak{d}}(\mathcal{E}_4).\end{equation} From the proof of Theorem \ref{kundu ray theorem}, we have that
\[\bar{\mathfrak{d}}(\mathcal{E}_1)+\bar{\mathfrak{d}}(\mathcal{E}_2)\leq \left(1-\mathfrak{p}rod_{i\mathbf{g}eq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)+(\zeta(p)-1).\]
We prove an upper bound for $\bar{\mathfrak{d}}(\mathcal{E}_4)$. We recall that $\mathfrak{A}_p$ is the set of pairs $(A,B)\in \mathbb{Z}/p^2\times \mathbb{Z}/p^2$ such that $p\mathfrak{n}mid \mathfrak{D}elta_{A,B} \text{ and }\op{rank}_p\left(E_{A,B}(\mathbb{Z}/p^2)\right)=2$. It follows from Lemma \ref{init lemma} that an elliptic curve $E_{A,B}$ is contained in $\mathcal{E}_4$ if and only if the reduction of $(A,B)$ modulo $p^2$ lies in $\mathfrak{A}_p$. Thus it would seem that $\mathfrak{d}(\mathcal{E}_4)$ is simply equal to $\frac{\# \mathfrak{A}_p}{\#\left(\mathbb{Z}/p^2\times \mathbb{Z}/p^2\right)}=\frac{\# \mathfrak{A}_p}{p^4}$, but this is not quite the case, since not all pairs $(A,B)\in \mathbb{Z}\times \mathbb{Z}$ are minimal. It is well known that the proportion of all pairs $(A,B)$ that are minimal is $\frac{1}{\zeta(10)}$, see for instance \cite{CS20}. In greater detail, let $\mathcal{W}$ be the set of all pairs of integers $(A,B)$, and $\mathcal{W}_{<x}$ the subset of pairs whose height is $<x$. Note that since each elliptic curve $E_{/\mathbb{Q}}$ admits a unique minimal Weierstrass equation, we may identify $\mathcal{E}$ with a subset of $\mathcal{W}$. It is easy to see that
\[\lim_{x\rightarrow \infty} \frac{\# \mathcal{E}_{4,<x}}{\# \mathcal{W}_{<x}}=\frac{\# \mathfrak{A}_p}{p^4}\] and since
\[\lim_{x\rightarrow \infty} \frac{\# \mathcal{E}_{<x}}{\# \mathcal{W}_{<x}}=\frac{1}{\zeta(10)},\] it follows that
\[\mathfrak{d}(\mathcal{E}_4)=\lim_{x\rightarrow \infty}\frac{\# \mathcal{E}_{4,<x}}{\# \mathcal{E}_{<x}}=\zeta(10)\frac{\# \mathfrak{A}_p}{p^4}.\]
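The argument above appeals to the fact that the proportion of minimal pairs is $\frac{1}{\zeta(10)}$; recall that $(A,B)$ fails to be minimal exactly when some prime $\ell$ satisfies $\ell^4\mid A$ and $\ell^6\mid B$. As a quick numerical sanity check (ours, not part of the proof), the Euler product $\prod_{\ell}(1-\ell^{-10})$ expressing this density agrees with $1/\zeta(10)$ computed from the truncated Dirichlet series:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(2, n + 1) if sieve[i]]

# A pair (A, B) is non-minimal exactly when some prime l satisfies
# l^4 | A and l^6 | B, an event of density l^{-10}; hence the density
# of minimal pairs is prod_l (1 - l^{-10}) = 1/zeta(10).
euler_product = 1.0
for l in primes_up_to(10_000):
    euler_product *= 1 - l ** -10

zeta_10 = sum(n ** -10 for n in range(1, 10_000))  # truncated Dirichlet series
assert abs(euler_product - 1 / zeta_10) < 1e-9
print(euler_product)
```

Both computations give $1/\zeta(10)\approx 0.999006$, so the minimality correction is a factor very close to $1$.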
Appealing to Corollary \ref{cor frakCp}, we have that
\[\mathfrak{d}(\mathcal{E}_4)<\frac{2 \zeta(10)}{p} + C_1 p^{-3/2}\op{log}p (\op{log log} p)^2.\] The result now follows from \eqref{E1 to E4 bound}.
\end{proof}
Asymptotically in $p$, the bound \eqref{main bound fine Selmer} is \[\frac{2\zeta(10)+1}{p}+O\left(p^{-3/2}\op{log}p (\op{log log} p)^2\right),\] which is significantly smaller than the asymptotic estimate $O\left(p^{-1/2}\op{log}p (\op{log log} p)^2\right)$ from Theorem \ref{kundu ray theorem}. This tells us that the fine Selmer group is finite significantly more often than the classical Selmer group over $\mathbb{Q}_{\op{cyc}}$, since the estimate improves by a full factor of $p^{1/2} \op{log}p (\op{log log} p)^2$.
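To make the comparison concrete, one can tabulate the two bound shapes with the implied constants suppressed (so the numbers produced are illustrative of the relative sizes only, not rigorous values of the densities):

```python
import math

def fine_shape(p):
    """Shape of the fine Selmer bound: (2*zeta(10)+1)/p + p^{-3/2} log p (log log p)^2."""
    zeta_10 = sum(n ** -10 for n in range(1, 1000))
    return (2 * zeta_10 + 1) / p + p ** -1.5 * math.log(p) * math.log(math.log(p)) ** 2

def classical_shape(p):
    """Shape of the classical Selmer bound: p^{-1/2} log p (log log p)^2."""
    return p ** -0.5 * math.log(p) * math.log(math.log(p)) ** 2

# the ratio classical/fine grows like p^{1/2} log p (log log p)^2
for p in (10 ** 3, 10 ** 5, 10 ** 7):
    print(p, fine_shape(p), classical_shape(p), classical_shape(p) / fine_shape(p))
```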
\subsection{Statistics for the Fine Selmer group in the rank-one case}
\par Let $E_{/\mathbb{Q}}$ be an elliptic curve with positive Mordell-Weil rank $r=\op{rank}E(\mathbb{Q})$. Set $\widehat{E(\mathbb{Q}_p)}$ to be the $p$-adic completion of $E(\mathbb{Q}_p)$ given by $\widehat{E(\mathbb{Q}_p)}:=\varprojlim_n \frac{E(\mathbb{Q}_p)}{p^n E(\mathbb{Q}_p)}$. Recall that $D$ is the cokernel of the map $E(\mathbb{Q})\otimes \mathbb{Z}_p\rightarrow \widehat{E(\mathbb{Q}_p)}$. The group $\widehat{E(\mathbb{Q}_p)}$ decomposes into a direct sum $\mathbb{Z}_p\oplus T$, where $T$ is the torsion subgroup of $\widehat{E(\mathbb{Q}_p)}$ (see \cite[Lemma 2.1, (iv)]{hiranouchi2019local}). Note that $T$ is non-zero when $p$ is a local torsion prime. The summand $\mathbb{Z}_p$ arises from the formal group of $E$. Write $E(\mathbb{Q})\otimes \mathbb{Z}_p= \mathbb{Z}_p^r\oplus T'$, where $T'$ is a finite group. The map $E(\mathbb{Q})\otimes \mathbb{Z}_p\rightarrow \widehat{E(\mathbb{Q}_p)}$ gives rise to a map
\[\mathbb{Z}_p^r\oplus T'\rightarrow \mathbb{Z}_p\oplus T.\] The map on the first factors is denoted by $\phi_E:\mathbb{Z}_p^r\rightarrow \mathbb{Z}_p$. This is the composite of the maps \[\mathbb{Z}_p^r\hookrightarrow \mathbb{Z}_p^r\oplus T'\rightarrow \mathbb{Z}_p\oplus T\rightarrow \mathbb{Z}_p,\] where the first map is the inclusion into the first factor and the last map is the projection onto the first factor.
\begin{lemma}\label{tors d criterion}
Let $E$ be an elliptic curve and $p$ a prime as above, and suppose that $\op{Tors}_{\mathbb{Z}_p}(D)\neq 0$. Then, at least one of the following holds:
\begin{enumerate}
\item $p$ is a local torsion prime of $E$,
\item $\phi_E$ is not surjective.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $p$ is not a local torsion prime. Then it is clear that $D$ is a quotient of $\op{cok} \phi_E$. Hence, if $D\neq 0$, it follows that $\op{cok} \phi_E$ is non-zero. This implies that $\phi_E$ is not surjective.
\end{proof}
\par We shift our attention to the case where the elliptic curve $E_{/\mathbb{Q}}$ has Mordell-Weil rank $1$. In this setting, the $p$-adic regulator satisfies $\op{Reg}\left(\Rfine{\mathbb{Q}}\right)=1$, which makes this case tractable. Since $r=1$, the map $\phi_E:\mathbb{Z}_p\rightarrow \mathbb{Z}_p$ is surjective if and only if it is an isomorphism. We let $\mathcal{E}_5$ be the set of elliptic curves $E_{/\mathbb{Q}}$ such that
\begin{enumerate}
\item $\op{rank} E(\mathbb{Q})=1$,
\item the map $\phi_E$ is \emph{not} an isomorphism.
\end{enumerate}
The upper density $\bar{\mathfrak{d}}(\mathcal{E}_5)$ shall play a role in our results. We are unable to prove any satisfactory bound on $\bar{\mathfrak{d}}(\mathcal{E}_5)$, which is why we resort to heuristics. The probability that an elliptic curve has rank $1$ is expected to be $\frac{1}{2}$; this follows from the rank distribution conjecture. The map $\phi_E: \mathbb{Z}_p\rightarrow \mathbb{Z}_p$ is multiplication by an element $\alpha\in \mathbb{Z}_p$, and is an isomorphism precisely when $\alpha$ is a unit. Thus, the probability that $\phi_E$ is \emph{not} an isomorphism can be expected to be $\frac{1}{p}$. The heuristic therefore indicates that $\bar{\mathfrak{d}}(\mathcal{E}_5)=\frac{1}{2p}$. We state our results in terms of the density $\bar{\mathfrak{d}}(\mathcal{E}_5)$, and do not explicitly assume that this heuristic is correct. Let $p\geq 5$ be a prime number and let $\mathfrak{D}_p$ be the set of elliptic curves $E_{/\mathbb{Q}}$ such that the following conditions are satisfied:
\begin{enumerate}
\item $E$ has good reduction at $p$,
\item $\op{rank} E(\mathbb{Q})=1$,
\item $\Rfine{\mathbb{Q}_{\op{cyc}}}$ is infinite.
\end{enumerate}
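The $\frac{1}{p}$ step in the heuristic above, namely that a ``random'' $\alpha\in\mathbb{Z}_p$ fails to be a unit with probability $\frac{1}{p}$, can be checked exactly at any finite level $\mathbb{Z}/p^n$, since $\alpha$ is a unit precisely when $p\nmid\alpha$:

```python
p, n = 7, 4
# alpha in Z/p^n is a unit iff p does not divide alpha, so the non-units
# are exactly the multiples of p: there are p^(n-1) of them out of p^n.
non_units = sum(1 for a in range(p ** n) if a % p == 0)
assert non_units == p ** (n - 1)
assert non_units / p ** n == 1 / p
```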
\begin{theorem}\label{th 4.8}
Let $p\geq 5$ be a prime and $\mathfrak{D}_p$ be defined as above. Assume that $\Sh(E/\mathbb{Q})[p^\infty]$ is finite for all elliptic curves $E_{/\mathbb{Q}}$ and that the conjecture (Del-p) is satisfied. Then, we have the following bound on the upper density of $\mathfrak{D}_p$,
\begin{equation}\label{main bound fine Selmer rank 1}
\begin{split}
\bar{\mathfrak{d}}(\mathfrak{D}_p)\leq & \bar{\mathfrak{d}}(\mathcal{E}_5)+\zeta(10)\frac{\#\mathfrak{A}_p}{p^4}+ \left(1-\prod_{i\geq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)+(\zeta(p)-1),\\
\leq & \bar{\mathfrak{d}}(\mathcal{E}_5)+\frac{2 \zeta(10)}{p} + C_1 \frac{ \op{log}p (\op{log log} p)^2}{p^{3/2}}+\left(1-\prod_{i\geq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)+(\zeta(p)-1).\\
\end{split}
\end{equation}
Here $C_1$ is the explicit constant given by $C_1=\zeta(10) C$ from Corollary \ref{cor frakCp}. \end{theorem}
\begin{proof}
Let $E$ be an elliptic curve in $\mathfrak{D}_p$. Consider the following quantities:\begin{enumerate}
\item $s_p(E):=\#\Sh(E/\mathbb{Q})[p^{\infty}]$,
\item $\tau_p(E):=\prod_l c_l^{(p)}(E)$,
\item $\beta_p(E):=\#\left(E(\mathbb{Q}_p)[p]\right)$.
\end{enumerate}
Recall that $\mathcal{E}_1$, $\mathcal{E}_2$ and $\mathcal{E}_4$ consist of all elliptic curves $E_{/\mathbb{Q}}$ such that $p$ divides $s_p(E)$, $\tau_p(E)$ and $\beta_p(E)$ respectively. It follows from Corollary \ref{criterion for mu=lambda=0} that $E$ is either contained in the union $\mathcal{E}_1\cup \mathcal{E}_2$ or $\op{Tors}_{\mathbb{Z}_p}(D)\neq 0$. It follows from Lemma \ref{tors d criterion} that if $\op{Tors}_{\mathbb{Z}_p}(D)\neq 0$, then $E\in \mathcal{E}_4\cup \mathcal{E}_5$. Therefore, combining the above observations, we find that $\mathfrak{D}_p$ is contained in the union $\mathcal{E}_1\cup \mathcal{E}_2\cup \mathcal{E}_4\cup \mathcal{E}_5$, and hence,
\[\bar{\mathfrak{d}}(\mathfrak{D}_p)\leq \bar{\mathfrak{d}}(\mathcal{E}_1)+\bar{\mathfrak{d}}(\mathcal{E}_2)+\bar{\mathfrak{d}}(\mathcal{E}_4)+\bar{\mathfrak{d}}(\mathcal{E}_5).\] From the proof of Theorem \ref{th 4.6},
\[\begin{split}\bar{\mathfrak{d}}(\mathcal{E}_1)+\bar{\mathfrak{d}}(\mathcal{E}_2)+\bar{\mathfrak{d}}(\mathcal{E}_4)\leq & \zeta(10)\frac{\#\mathfrak{A}_p}{p^4}+ \left(1-\prod_{i\geq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)+(\zeta(p)-1)\\
< & C_1 \frac{ \op{log}p (\op{log log} p)^2}{p^{3/2}}+ \left(1-\prod_{i\geq 1} \left(1-\frac{1}{p^{2i-1}}\right)\right)+(\zeta(p)-1),\end{split}\]and the result follows from this.
\end{proof}
\section{Sieve methods and the distribution of local torsion primes}\label{s 5}
\par The distribution of local torsion primes is closely connected to the behavior of Iwasawa invariants of elliptic curves. Let $E_{/\mathbb{Q}}$ be an elliptic curve with Mordell-Weil rank zero for which the Tate-Shafarevich group $\Sh(E/\mathbb{Q})$ is finite. Then Corollary \ref{ cor 2.9} asserts that $\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})=0$ and $\lambda_p^{\op{fn}}(E/\Q_{\op{cyc}})=0$ for all primes $p\notin \Pi_{lt, E}\cup \mathfrak{S}$. Here, $\mathfrak{S}$ is the set of primes that divide $2\#\Sh(E/\mathbb{Q})\times \prod_{\ell\neq p} c_\ell(E)$. In particular, if the set $\Pi_{lt,E}$ is finite, then $\mu_p^{\op{fn}}(E/\Q_{\op{cyc}})=0$ and $\lambda_p^{\op{fn}}(E/\Q_{\op{cyc}})=0$ for all but finitely many primes $p$.
\par In this section we shall fix a number $Y>0$, and prove results about the expectation of $\#\Pi_{lt,E}(Y)$, as $E$ ranges over all elliptic curves over $\mathbb{Q}$. We shall prove our results for elliptic curves on average. For this purpose, we shall employ a certain refined version of the large sieve inequality due to Bombieri and Davenport \cite{bombieri1969large}. The version that we use has previously been used in various other contexts, see \cite{duke1997elliptic, jones2010almost}. The version we require is slightly more general than the sieve used in the aforementioned works. Given real numbers $a\leq b$, let $[a,b]$ denote the closed interval from $a$ to $b$. Given a tuple of real numbers $(M_1, \dots, M_k)\in \mathbb{R}^k$ and a tuple of positive real numbers $(N_1, \dots, N_k)\in \mathbb{R}_{\geq 0}^k$, the product $\prod_{j=1}^k [M_j, M_j+N_j]$ consists of all tuples of real numbers $(x_1, \dots, x_k)$ such that $M_j\leq x_j\leq M_j+N_j$ for all $j$. We shall refer to such a product of intervals as a \emph{box} in $\mathbb{R}^k$. Given such a box $B$, for each point $n\in B\cap \mathbb{Z}^k$ let $c(n)$ be a complex number. Define a function on $\mathbb{R}^k$ as follows \[S(x):=\sum_{n\in B} c(n) e(n\cdot x),\] where $x\in \mathbb{R}^k$ and $e(z):=e^{2\pi i z}$.
\begin{definition}
Let $A\subset \mathbb{R}^k$ be a finite subset and let $\delta=(\delta_1, \dots, \delta_k)$ be a tuple of positive numbers. We say that $A$ is \emph{$\delta$ well-spaced} if for all $\alpha=(\alpha_1,\dots, \alpha_k)\in A$ and $\beta=(\beta_1,\dots, \beta_k)\in A$ such that $\alpha\neq \beta$,
\[\op{max}\left\{\frac{|\alpha_i-\beta_i|}{\delta_i}\mid i=1,\dots, k\right\}\geq 1.\]
\end{definition}
In other words, for each pair $(\alpha, \beta)$ with $\alpha\neq \beta$, there is a coordinate $i$ such that $|\alpha_i-\beta_i|\geq \delta_i$. The following generalization of the large sieve inequality is due to Huxley, see \cite[Theorem 1]{huxley1968large}. \begin{theorem}[Huxley] \label{huxleythm}
Let $A$ be a $\delta$ well-spaced finite set of vectors in $\mathbb{R}^k$ and let $B=\prod_{j=1}^k [M_j, M_j+N_j]$. Then, we have the following bound
\[\sum_{\alpha\in A} |S(\alpha)|^2\leq \left(\prod_{j=1}^k (N_j^{1/2}+\delta_j^{-1/2})^2\right)\sum_{n\in B} |c(n)|^2.\]
\end{theorem}
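For intuition, in the simplest case $k=N=1$ the well-spaced set used in the application below is the set of fractions $a/p\in[0,1)$ with prime denominator $p\leq x$: two distinct such fractions $a/p\neq b/q$ differ by at least $\frac{1}{pq}\geq x^{-2}$. A quick exact check with rational arithmetic:

```python
from fractions import Fraction

x = 30
primes = [p for p in range(2, x + 1) if all(p % d for d in range(2, p))]
# distinct fractions a/p in [0, 1) with prime denominator p <= x
points = sorted({Fraction(a, p) for p in primes for a in range(p)})
delta = Fraction(1, x * x)
min_gap = min(points[i + 1] - points[i] for i in range(len(points) - 1))
assert min_gap >= delta  # the set is delta well-spaced with delta = x^{-2}
print(len(points), min_gap)
```

Only the fraction $0$ is shared between different denominators, so the set has $1+\sum_{p\leq x}(p-1)$ elements.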
\par Fix a number $N\in \mathbb{Z}_{\geq 1}$, and for each prime $p$ fix a subset $\Omega(p)$ of $\left(\mathbb{Z}/p^N \mathbb{Z} \right)^k$. For each integral vector $m\in \mathbb{Z}^k$ and real number $x>0$, we define \[P(x;m):=\#\left\{p\leq x \mid m\mod{p^N}\in \Omega(p)\right\}\] and
\[P(x):=\sum_{p\leq x} \#\Omega(p) p^{-Nk}.\]
In the statement of the result below, $x$ will refer to a positive real number.
\begin{theorem}\label{mean square}
For each prime $p$, let $\Omega(p)$ be a subset of $\left(\mathbb{Z}/p^N\mathbb{Z}\right)^k$. Let \[B=\prod_{j=1}^k [M_j, M_j+N_j]\] be a box in $\mathbb{R}^k$ such that $N_j\geq x^{2N}$ for all $j$, and let $V(B)=\prod_j N_j$ be the volume of the box. Then
\[\sum_{m\in B\cap \mathbb{Z}^k} \left(P(x;m)-P(x)\right)^2\ll_{k,N} V(B) P(x).\]
\end{theorem}
The notation above means that there is a constant $C>0$ which depends only on $k$ and $N$, and not on $x$, such that $\sum_{m\in B\cap \mathbb{Z}^k} \left(P(x;m)-P(x)\right)^2\leq C V(B) P(x)$ for all real numbers $x>0$.
\begin{proof}
Let $A=\bigcup_{p\leq x}A_p$, where \[A_p=\left\{\alpha_1(p), \alpha_2(p),\dots, \alpha_{p^{Nk}-1}(p)\right\}\] is a set of representatives for the non-zero elements of $\frac{1}{p^N} \mathbb{Z}^k/\mathbb{Z}^k$. The set $A$ is $\delta$ well-spaced for $\delta_i=\frac{1}{x^{2N}}$ for all $i$. A standard argument shows that Theorem \ref{huxleythm} can be applied to $A$ in order to deduce the statement of Theorem \ref{mean square}. This follows verbatim from the argument given in \cite[p.93, l.15 to p.94, l.10]{gallagher1973large}.
\end{proof}
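As a toy illustration of the theorem (our own, with hypothetical parameters), take $k=N=1$ and $\Omega(p)=\{0 \bmod p\}$, so that $P(x;m)$ counts the primes $p\leq x$ dividing $m$ and $P(x)=\sum_{p\leq x}1/p$; the mean square of $P(x;m)-P(x)$ over a long interval is then of size $O(P(x))$, essentially Turán's classical estimate:

```python
x, M = 50, 200_000  # interval length M >= x^{2N} = x^2, as in the theorem
primes = [p for p in range(2, x + 1) if all(p % d for d in range(2, p))]
P = sum(1 / p for p in primes)  # P(x) = sum_{p <= x} #Omega(p) / p^{Nk}

def P_m(m):
    """P(x; m) = #{p <= x : m mod p in Omega(p)} = number of primes p <= x dividing m."""
    return sum(1 for p in primes if m % p == 0)

mean_square = sum((P_m(m) - P) ** 2 for m in range(1, M + 1)) / M
assert mean_square < 2 * P  # consistent with O(V(B) P(x)) after dividing by V(B) = M
print(P, mean_square)
```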
We next apply the above result to study the average distribution of local torsion primes. Let $k=N=2$ and for each prime $p$, let $\Omega(p):=\mathfrak{A}_p\subset \mathbb{Z}/p^2\times \mathbb{Z}/p^2$. Thus, for $Y>0$, we have that $P(Y)=\sum_{p\leq Y} \frac{\# \mathfrak{A}_p}{p^4}$. Let $C,D>Y^4$, and consider the set $\mathcal{S}_{C,D}$ of all pairs $m=(a,b)\in \mathbb{Z}\times \mathbb{Z}$ with $|a|<C$ and $|b|<D$. Fix a number $\beta>0$. Let $\mathcal{S}_{C,D}^\beta$ be the subset of $m=(a,b)\in\mathcal{S}_{C,D}$ such that
\[|\#\Pi_{lt,E_{a,b}}(Y)-P(Y)|<\beta \sqrt{P(Y)},\] and set $\mathcal{Q}_{C,D}^\beta:=\mathcal{S}_{C,D}\backslash \mathcal{S}_{C,D}^\beta$.
\begin{theorem}\label{th s5 main}
Let $Y>0$ be a fixed number. Then, there is an absolute constant $c>0$ such that for any number $\beta>0$ the proportion of elliptic curves $E$ for which
\begin{equation}\label{boring eqn}|\#\Pi_{lt,E}(Y)-P(Y)|<\beta \sqrt{P(Y)}\end{equation} is $\geq (1-\frac{c}{\beta^2})$. Furthermore, if $C,D>Y^4$, we have that $\frac{\#\mathcal{Q}_{C,D}^\beta}{\#\mathcal{S}_{C,D}}<\frac{c}{\beta^2}$ for any $\beta>0$.
\end{theorem}
\begin{remark}
Setting $\beta=10\sqrt{c}$, we find that at least $99\%$ of elliptic curves satisfy \eqref{boring eqn}.
\end{remark}
\begin{proof}
Let $C$ and $D$ be integers such that $C,D>Y^4$. We are to show that there is an absolute constant $c>0$ such that $\frac{\#\mathcal{Q}_{C,D}^\beta}{\#\mathcal{S}_{C,D}}<\frac{c}{\beta^2}$ for any $\beta>0$. This implies that the proportion of elliptic curves $E_{/\mathbb{Q}}$ (ordered by height) for which
\[|\#\Pi_{lt,E}(Y)-P(Y)|\geq \beta \sqrt{P(Y)}\]
is $\leq c/\beta^2$.
\par Let $E_{a,b}$ be the elliptic curve associated with a minimal pair $m=(a,b)\in \mathbb{Z}\times \mathbb{Z}$. Note that by definition, $P(Y;m)=\#\Pi_{lt, E_{a,b}}(Y)$.
\par According to the mean square estimate of Theorem \ref{mean square},
\begin{equation}\label{last th e1}\sum_{m\in \mathcal{S}_{C,D}}\left(P(Y;m)-P(Y)\right)^2< c\#\mathcal{S}_{C,D} P(Y),\end{equation} where $c>0$ is an absolute constant. On the other hand,
\begin{equation}\label{last th e2}\begin{split}\sum_{m\in \mathcal{S}_{C,D}}\left(P(Y;m)-P(Y)\right)^2\geq & \sum_{m\in \mathcal{Q}_{C,D}^\beta}\left(P(Y;m)-P(Y)\right)^2,\\
= & \sum_{m\in \mathcal{Q}_{C,D}^\beta}\left(\#\Pi_{lt,E_{m}}(Y)-P(Y)\right)^2,\\
\geq & \sum_{m\in \mathcal{Q}_{C,D}^\beta}\beta^2 P(Y)= \# \mathcal{Q}_{C,D}^\beta \beta^2 P(Y).\end{split}\end{equation}
Combining the inequalities from \eqref{last th e1} and \eqref{last th e2}, we have shown that $\frac{\#\mathcal{Q}_{C,D}^\beta}{\#\mathcal{S}_{C,D}}<\frac{c}{\beta^2}$, and this completes the proof.
\end{proof}
The above result thus establishes an explicit estimate for the expectation of $\#\Pi_{lt,E}(Y)$, which is shown to be close to $P(Y)= \sum_{p\leq Y}\frac{\# \mathfrak{A}_p}{p^4}$.
\newpage
\section{Numerical data}
\begin{center}
\begin{table}[h]
\caption{Data for $\#\mathfrak{S}_p'/p^2$ for primes $7\leq p<150$.}
\label{tab:1}
\begin{tabular}{ |c|c|c|c| }
\hline
$p$ & $\#\mathfrak{S}_p'/p^2$ & $p$ & $\#\mathfrak{S}_p'/p^2$ \\
\hline
7 & 0.0816326530612245 & 71 & 0.0208292005554453 \\
11 & 0.0413223140495868 & 73 & 0.0270219553387127 \\
13 & 0.0710059171597633 & 79 & 0.0374939913475405\\
17 & 0.0276816608996540 & 83 & 0.0178545507330527 \\
19 & 0.0581717451523546 & 89 & 0.0222194167403106 \\
23 & 0.0415879017013233 & 97 & 0.0255074928260176\\
29 & 0.0332936979785969 & 101 & 0.00980296049406921 \\
31 & 0.0312174817898023 & 103 & 0.0288434348194929 \\
37 & 0.0306793279766253 & 107 & 0.00925845051969604 \\
41 & 0.0118976799524093 & 109 & 0.0181802878545577 \\
43 & 0.0567874526771228 & 113 & 0.0263137285613595 \\
47 & 0.0208239022181983 & 127 & 0.0169260338520677 \\
53 & 0.0277678889284443 & 131 & 0.0189382903094225 \\
59 & 0.0166618787704683 & 137 & 0.0108689860940913 \\
61 & 0.0349368449341575 & 139 & 0.0142849748977796 \\
67 & 0.0147026063711294 & 149 & 0.0133327327597856 \\
\hline
\end{tabular}
\end{table}
\end{center}
\newpage
\subsection*{Data availability statement} No data was generated or analyzed in this paper.
\end{document} |
\begin{document}
\title{\uppercase {On The Prolongations of Homogeneous Vector Bundles}
\footnote{
2000 \textit{Mathematics Subject Classification}.
Primary 53C30; Secondary 55R91.
}
\footnote{
\textit{Key words and phrases:} Homogeneous Space, Fiber bundles, Prolongation, Homogeneous Vector Bundles
}
\begin{abstract} In this paper, we introduce a study of prolongations of homogeneous vector bundles. We give an alternative approach to the prolongation. For a given homogeneous vector bundle $E$, we obtain a new homogeneous vector bundle. The homogeneous structure and its corresponding representation are derived. The prolongation of the induced representation, which is an infinite-dimensional linear representation, is also defined.
\end{abstract}
\section{Introduction}
\hspace{5mm} In this study, we continue our work on prolongations. In our previous work \cite{Myarticle}, we defined prolongations of finite-dimensional real representations of Lie groups and obtained faithful representations on tangent bundles of Lie groups. In this work, we use the prolongations of these representations to give an alternative method to prolong a vector bundle, specifically a homogeneous vector bundle, which also carries a group action in its structure. We also give the definition of the prolongation of induced representations. In the literature, the well-known methods for prolongation use lifts (for example, vertical lifts or complete lifts) \cite{Yano} or jet prolongations \cite{Saunders}. For example, in \cite{Fisher}, Fisher and Laguer studied second order tangent bundles using jets. For further information about jet manifolds, we refer to \cite{Cordero}, \cite{Saunders}.
Homogeneous vector bundles have been studied because of their applications to cohomology and to complex analytic Lie groups. In 1957, Raoul Bott \cite{Bott} dealt with induced representations in the framework of complex analytic Lie groups. In 1964, Griffiths gave differential-geometric derivations of various properties of homogeneous complex manifolds, together with some applications of differential geometry to homogeneous vector bundles and to the study of sheaf cohomology \cite{Griffiths}. In 1988, Purohit \cite{Purohit} showed that there is a one-to-one correspondence between homogeneous vector bundles and linear representations. Various other studies of homogeneous vector bundles can be found in the literature (\cite{Boralevi}, \cite{Harboush}).
Moreover, in 1972, {\it{R. W. Brockett and H. J. Sussmann}} described how the tangent bundle of a homogeneous space can itself be viewed as a homogeneous space \cite{Brockett}. They associated to every Lie group $G$ another Lie group $G^*=Lie(G)\times G$, constructed as a semidirect product with the group operation given by
\begin{equation}
(a,g).(a',g')=(a+ad(g)(a'),gg')
\end{equation}
\noindent where $(a,g), (a',g') \in G^*$. They also showed that if $G$ acts on a manifold $X$, then there exists a left action of $G^*$ on $TX$ by
\begin{equation}
(a,g).v= d\sigma_g (v)+\bar{a}(g.\pi(v))\quad \text{for all } v\in TX
\end{equation}
\noindent Here $\pi$ denotes the natural projection from $TX$ onto $X$ (i.e. $\pi(v)=x$ if and only if $v\in T_xX$) and $\sigma_g:X \to X$ is the map $ x \to gx$. Clearly, both $d\sigma_g(v)$ and $\bar{a}(g.\pi(v))$ belong to $T_{g.\pi(v)}X$, so the sum is well-defined.
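As a sanity check (our own illustration, not taken from \cite{Brockett}), one can verify this action numerically for a linear action of a matrix group $G$ on $X=\mathbb{R}^3$, where $TX=\mathbb{R}^3\times\mathbb{R}^3$, $d\sigma_g(v)=gv$, the fundamental vector field is $\bar{a}(y)=ay$, and $ad(g)(b)=gbg^{-1}$:

```python
import numpy as np

def act(a, g, x, v):
    """(a, g).v = d sigma_g(v) + a-bar(g.pi(v)) for the linear action sigma_g(x) = g x."""
    return g @ x, g @ v + a @ (g @ x)

def mult(a, g, b, h):
    """Group law of G* = Lie(G) x G: (a, g).(b, h) = (a + ad(g)(b), g h)."""
    return a + g @ b @ np.linalg.inv(g), g @ h

# unipotent matrices (always invertible) and arbitrary Lie algebra elements
g = np.array([[1.0, 0.2, -0.3], [0.0, 1.0, 0.5], [0.0, 0.0, 1.0]])
h = np.array([[1.0, -0.4, 0.1], [0.0, 1.0, 0.25], [0.0, 0.0, 1.0]])
a = np.array([[0.0, 1.0, 0.0], [0.3, 0.0, -0.2], [0.0, 0.5, 0.0]])
b = np.array([[0.1, 0.0, -1.0], [0.0, -0.1, 0.4], [0.7, 0.0, 0.0]])
x, v = np.array([1.0, -2.0, 0.5]), np.array([0.3, 0.0, -1.0])

# ((a,g).(b,h)).v == (a,g).((b,h).v): the formula really defines a left action
x1, v1 = act(*mult(a, g, b, h), x, v)
x2, v2 = act(a, g, *act(b, h, x, v))
assert np.allclose(x1, x2) and np.allclose(v1, v2)
```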
This paper is organized as follows. In section \ref{pre}, we give some basic definitions and theorems that we need for our proofs. In section \ref{Pro}, we give the homogeneous vector bundle structure of the prolonged bundle. Finally, in section \ref{conc}, we give the conclusion and future work.
\section{Preliminaries}\label{pre}
First of all, we give the definition of a homogeneous vector bundle.
\begin{definition}
Let $G$ be a Lie group, $F$ an $n$-dimensional real vector space, and suppose $G$ acts transitively on a manifold $M$. Let $H$ be the isotropy subgroup of $G$ at a fixed point $p_0 \in M$, so that $M$ becomes the coset space $G/H$. In addition, suppose $G$ acts on the vector bundle $E$ sitting over $G/H$ so that its action on the base agrees with the usual action of $G$ on cosets. Then such a structure $(E, \pi, M, F)$ is called a homogeneous vector bundle \cite{Purohit}.
\end{definition}
\begin{theorem}\label{sigmaofE}
Homogeneous vector bundles over $G/H$ are in one-to-one correspondence with linear representations of $H$ \cite{Purohit}.
\end{theorem}
The above-mentioned representation is defined as follows:
Let $(E, \pi, M, F)$ be a homogeneous vector bundle, where $M=G/H$, $H$ is the isotropy subgroup of $G$ at $p_0$, and $F=\pi^{-1}(p_0)$. Then, there exists a Lie group representation $\sigma:H \to Aut(F)$ with
\begin{equation}
\begin{gathered}
\sigma(h):F \to F\\
\hspace{3cm} q \to \sigma(h)(q)=hq
\end{gathered}
\end{equation}
where $h \in H$.
Conversely, if $G$ is a Lie group with isotropy subgroup $H$, $F$ is a finite dimensional real vector space and $\sigma:H \to Aut(F)$ is a representation, then there exists a homogeneous vector bundle $(E,\pi,G/H,F)$ where $E=G\times_H F$ \cite{Adams, Kobayashi}.
Homogeneous vector bundles also correspond to infinite-dimensional real representations. This correspondence is illustrated in the following proposition.
\begin{proposition}\cite{Purohit}
Let $(E, \pi,M, F)$ be a homogeneous vector bundle, where $M=G/H$, and let $\Gamma(E)$ denote the space of (global) cross sections of the vector bundle $E$. For all $g \in G$, $\rho(g)$ can be defined by the following:
\begin{equation}
\begin{gathered}
\rho(g):\Gamma(E) \to \Gamma(E)\hspace{12cm}\\
\hspace{3cm} \psi \to \rho(g)(\psi):M \to E \hspace{12cm}\\
p \to (\rho(g)(\psi))(p)=g.\psi(g^{-1}p)\hspace{25mm}
\end{gathered}
\end{equation}
Clearly, $\rho$ is a representation of $G$ in $\Gamma(E)$, which is induced by the representation $\sigma$ defined in theorem \ref{sigmaofE}. We call $\rho$ the induced representation of $E$.
\end{proposition}
\begin{theorem} If a Lie group $G$ acts transitively and with maximal rank on a
differentiable manifold $X$, then $G^*$ acts transitively and with maximal rank
on the tangent bundle of $X$ \cite{Brockett}.
\end{theorem}
We have the following remarks on the above theorem:
\begin{remark}
\begin{enumerate}
\item Clearly, the above result implies that the tangent bundle of a coset space
$G/H$ is again a coset space and moreover, is of the form $G^*/K$ for some
closed subgroup $K$ of $G^*$.
\item If $H$ is a closed subgroup of $G$, then $H^*$ can be identified, in an
obvious way, with a closed subgroup of $G^*$. One verifies easily that the
isotropy group of $0_x$ corresponding to the action of $G^*$ on $TX$ is
precisely $H_x^*$, where $H_x$ is the isotropy group of $x$ corresponding to the
action of $G$ on $X$. In particular, we have the diffeomorphism $T(G/H)\simeq G^*/H^*$.
\end{enumerate}
\end{remark}
\begin{definition}
Let $\Phi$ be an $n$-dimensional Lie group representation of $G$. "The prolongation of the representation $\Phi$", which is denoted by $\widetilde{\Phi}$, is given by the following equation:
\begin{eqnarray}
\widetilde{\Phi}:TG \to GL(2n)\hspace{54mm} \nonumber \\
(a,v) \to \widetilde{\Phi}(a,v)=\
\begin{pmatrix}
\Phi(a) & 0\\
[(d(\Phi)_e(v))_j^i].\Phi(a) & \Phi(a)
\end{pmatrix}.
\end{eqnarray}
\end{definition}
\section {\bf{Prolongation of a Homogeneous Vector Bundle}} \label{Pro}
In this section, we introduce a homogeneous vector bundle with the prolonged Lie group representation defined in \cite{Myarticle}.
\newline
\noindent{\bf{Main Result:}} \\
Let $(E,\pi, G/H, F)$ be the homogeneous vector bundle with the corresponding Lie group representation $\sigma: H\to Aut(F)$. Using the prolongation of the representation $\sigma$ defined in \cite{Myarticle}, we have
\begin{equation}
\tilde{\sigma}(dR_h(a))=\
\begin{pmatrix}
\sigma(h) & 0\\
d(\sigma)_e(a).\sigma(h) & \sigma(h)
\end{pmatrix}
\end{equation}
where $\tilde{\sigma}: TH \to Aut(TF)$ is the prolongation of representation $\sigma$.
Taking the composition of $\tilde{\sigma}$ with $\Theta$, we define $\sigma^*:H^* \to Aut(TF)$ as follows:
\begin{equation}
\sigma^*(a,h)=\
\begin{pmatrix}
\sigma(h) & 0\\
d(\sigma)_e(a).\sigma(h) & \sigma(h)
\end{pmatrix}
\end{equation}
where $\Theta$ denotes the natural diffeomorphism $\Theta: H^* \to TH$.
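As a concrete check (a numerical illustration under our own choice of $H$, not part of the general construction), take $H=SO(3)$ and $\sigma$ the standard representation on $F=\mathbb{R}^3$, so that $d(\sigma)_e(a)$ is the skew-symmetric matrix $\hat{a}$; with the semidirect product law $(a,h).(b,g)=(a+adj(h)(b),hg)$, where $adj(h)(b)=hb$ for $SO(3)$, one can verify that $\sigma^*$ is a group homomorphism:

```python
import numpy as np

def hat(a):
    """d(sigma)_e(a) for the standard representation of SO(3): the skew matrix of a."""
    return np.array([[0.0, -a[2], a[1]], [a[2], 0.0, -a[0]], [-a[1], a[0], 0.0]])

def rot(axis, t):
    """Rotation matrix about a coordinate axis; any elements of SO(3) would do."""
    c, s = np.cos(t), np.sin(t)
    R = np.eye(3)
    i, j = [(1, 2), (0, 2), (0, 1)][axis]
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = -s, s
    return R

def sigma_star(a, h):
    """The block matrix [[sigma(h), 0], [d(sigma)_e(a).sigma(h), sigma(h)]]."""
    return np.block([[h, np.zeros((3, 3))], [hat(a) @ h, h]])

a, b = np.array([0.3, -1.0, 2.0]), np.array([1.5, 0.2, -0.7])
h, g = rot(0, 0.4) @ rot(2, -1.1), rot(1, 0.9)

# semidirect product law: (a, h).(b, g) = (a + adj(h)(b), h g), with adj(h)(b) = h b
lhs = sigma_star(a + h @ b, h @ g)
rhs = sigma_star(a, h) @ sigma_star(b, g)
assert np.allclose(lhs, rhs)
```

The check rests on the identity $h\hat{b}h^{-1}=\widehat{hb}$ for $h\in SO(3)$, which is the statement $d(\sigma)_e(adj(h)(b))=\sigma(h)\,d(\sigma)_e(b)\,\sigma(h)^{-1}$ in this example.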
The following diagram summarizes the correspondences that we establish next.
\begin{equation}
E \longleftrightarrow \sigma\longrightarrow \sigma^* \longleftrightarrow E^*
\end{equation}
where $E=(E, \pi, G/H, F)$, $\sigma:H \to Aut(F)$, $\sigma^*:H^* \to Aut(TF)$ and $ E^*=(E^*, \pi^*, G^*/H^*, TF)$.
Now we define the new induced action which will be used to define the equivalence classes.
\newline
\subsection{\bf{The Prolonged Action of $H^*$ on $G^*\times TF$:}}
Using the above representation $\sigma^*$ and the natural action of $G^*$ on its coset space $G^*/H^*$, we have the following induced action $\tilde{\alpha}:(G^* \times TF)\times H^* \to G^* \times TF$, which can be obtained from the coset space construction:
\begin{equation}
\tilde{\alpha}(a,g,v,b,h)=((a,g).(b,h),\sigma^*((b,h)^{-1})(v)).\label{complex}
\end{equation}
\begin{proposition}
If $v=(\xi ,u)\in TF$ and $\sigma^*((b,h)^{-1})(v)=(\tilde{\xi},\tilde{u})$, then
\begin{eqnarray}
\tilde{\alpha}(a,g,v,b,h)=(a+adj(g)(b), gh,\tilde{\xi},\tilde{u}).
\end{eqnarray}
\end{proposition}
\begin{proof}
Since $(a,g).(b,h)=(a+adj(g)(b),gh)$ and $(b,h)^{-1}=(adj(h^{-1})(-b),h^{-1})$, we have
\begin{equation}
\sigma^*((b,h)^{-1})(v)=\sigma^*(adj(h^{-1})(-b),h^{-1})(\xi,u)\nonumber
\end{equation}
Using the definition of $\sigma^*$ we have
\begin{equation}
=\
\begin{pmatrix}
\sigma(h^{-1}) & 0 \\
d(\sigma)_e(-adj(h^{-1})(b)) & \sigma(h^{-1})
\end{pmatrix}.\
\begin{pmatrix}
\xi \\
u
\end{pmatrix}.\nonumber
\end{equation}
\begin{equation}
=(\sigma(h^{-1})(\xi), [(d\sigma)_e(-adj(h^{-1})(b))]_j^i \xi_ix_j +[\sigma(h^{-1})]_j^i u_i \dot{x}_j)\label{action}
\end{equation}
\begin{equation}
=(\tilde{\xi},\tilde{u})\label{sigma}
\end{equation}
where $v=(\xi,u) \in TF$ for all $(a,g) \in G^*$ and $(b,h) \in H^*$.
Therefore, using equation (\ref{sigma}), we finish the proof.
\end{proof}
Using the above action, it is possible to form an equivalence relation on $G^* \times TF$.
\newline
\subsection{\bf{The Prolonged Equivalence Relation:}}
Using the above induced action, we have the following equivalence relation on $G^* \times TF$ by \cite{Adams, Kobayashi}:
\newline
$((a,g),(\xi, u)) \simeq((a',g'),(\xi', u'))$ if and only if there exists $(b,h) \in H^*$ such that the following equation holds
\begin{equation}
((a',g'),(\xi', u'))=((a,g),(\xi, u)).(b,h).\label{equiv}
\end{equation}
The next corollary gives the simplified form of the equivalence relation that we have defined above.
\begin{corollary}
The equivalence relation defined by the equation (\ref{equiv}) can be given by the following.
\begin{equation}
((a,g),(\xi, u))\simeq ((a',g'),(\xi', u')) \Leftrightarrow \left\{ \begin{array}{rcl} a'=a+adj(g)(b),\hspace{6cm}\\ g'=gh,\hspace{78mm} \\ \xi'=\sigma(h^{-1})(\xi),\hspace{66mm} \\ u'=[(d\sigma)_e(-adj(h^{-1})(b))]_j^i \xi_i x_j+[\sigma(h^{-1})]_j^i u_i \dot{x}_j.\hspace{12mm}\end{array}\right. \label{equi}
\end{equation}
\end{corollary}
\begin{proof}
If $((a,g),(\xi, u))\simeq ((a',g'),(\xi', u'))$, then there exists $(b,h) \in H^*$ such that
\begin{eqnarray}
((a',g'),(\xi', u'))&=&((a,g),(\xi, u)).(b,h).\nonumber\\
&=&\tilde{\alpha}(a,g,(\xi,u),b,h)
\end{eqnarray}
Using equation (\ref{action}), we have $\xi'=\sigma(h^{-1})(\xi)$ and $u'=[(d\sigma)_e(-adj(h^{-1})(b))]_j^i \xi_ix_j +[\sigma(h^{-1})]_j^i u_i \dot{x}_j. $
Moreover, since the first component of $\tilde{\alpha}(a,g,(\xi,u),b,h)$ is $(a,g).(b,h)$, we have
\begin{eqnarray}
(a',g')&=&(a,g).(b,h)\nonumber\\
&=&(a+adj(g)(b),gh)
\end{eqnarray}
which finishes the proof.
\end{proof}
\begin{definition}
We define $E^*$ to be the set of equivalence classes arising from (\ref{equi}), i.e. $$E^*=G^*\times_{H^*}TF$$ and we refer to $E^*$ as the {\it{Prolonged Vector Bundle}}.
\end{definition}
\begin{remark}
The bundle projection of the prolonged bundle is
\begin{eqnarray}
\pi^*_{E^*}: G^* \times_{H^*} TF \to G^*/H^* \hspace{5cm}\nonumber \\
((a,g),v)H^* \to \pi^*_{E^*}(((a,g),v)H^*)=(a,g)H^*\hspace{15mm} \nonumber
\end{eqnarray}
and local trivialization of the prolonged bundle is
\begin{eqnarray}
\psi^* : (\pi^*_{E^*})^{-1}(V) \to V \times TF\hspace{55mm} \nonumber \\
((a,g),v)H^* \to \psi^*(((a,g),v)H^* )=((a,gH),\sigma^*(b,h)(v))
\end{eqnarray}
where $V \subset T(G/H)$ is an open subset.
\end{remark}
So far, we have described the structure of the prolonged bundle. In the following, we define a new prolongation obtained from the prolongation of homogeneous vector bundles.
\newline
\begin{definition}
Let $\rho:G \to Aut(\Gamma(E))$ be the induced representation of the homogeneous vector bundle $E$. Then $\rho^* :TG \to Aut(\Gamma(E^*))$ is called the prolongation of the induced representation $\rho$, where $E^*$ denotes the prolongation of $E$.
\end{definition}
\section{Conclusion and Future Work}\label{conc}
In this paper, we have introduced a study of prolongations of homogeneous vector bundles. We have used the one-to-one correspondence between homogeneous vector bundles and finite-dimensional Lie group representations to define prolongations of homogeneous vector bundles. We introduced the geometric structures of this new bundle, such as the local trivialization of the bundle, the equivalence classes and the bundle projection. We have defined the prolongations of induced representations, which are infinite-dimensional linear representations. In the future, we plan to study prolongations of infinite-dimensional representations by using the one-to-one correspondence with homogeneous vector bundles.
\end{document} |
\begin{document}
\title{\bf Moment subset sums over finite fields}
\author{Tim Lai\textsuperscript{1}, Alicia Marino\textsuperscript{2}, Angela Robinson\textsuperscript{3}, Daqing Wan\textsuperscript{4}}
\maketitle
\thispagestyle{fancy}
\begin{center}
\textsuperscript{1}Indiana University, Bloomington\\
\textsuperscript{2}University of Hartford\\
\textsuperscript{3}National Institute of Standards and Technology\\
\textsuperscript{4}University of California, Irvine
\end{center}
{\bf Abstract:} The $k$-subset sum problem over finite fields is a classical NP-complete problem.
Motivated by coding theory applications, a more complex problem is the higher $m$-th moment $k$-subset sum problem over finite fields. We show that there is a deterministic polynomial time algorithm for the $m$-th moment $k$-subset sum problem over finite fields for each fixed $m$ when the evaluation set is the image set
of a monomial or Dickson polynomial of any degree $n$. In the
classical case $m=1$,
this recovers previous results
of Nguyen-Wang (the case $m=1, p>2$) \cite{WN18} and the results of Choe-Choe (the case $m=1, p=2$) \cite{CC19}.
\section{Introduction}
One of the most puzzling problems in theoretical computer science, originally posed in 1971, is to determine whether P = NP \cite{C71}. That is, to determine whether the complexity class of problems which can be solved in deterministic polynomial time is equivalent to the class of problems whose solutions, if any, can be verified in deterministic polynomial time. For a comprehensive survey on this topic, see Wigderson's forthcoming monograph \cite{Wi19}.
All NP-complete problems are equivalent to each other under polynomial time reduction.
One approach to resolving whether P = NP is to take an NP-complete problem and prove (or disprove) that it is solvable in deterministic polynomial time.
We choose the $k$-subset sum problem over finite fields \cite{CLRS09}, which is a classical NP-complete problem. Although this problem is out of reach, the aim of this paper is to explore deterministic polynomial time algorithms for this problem and for interesting variations of it in various special cases.
Let $p$ be a prime, $q=p^s$ for some integer $s>0$, and $\mathbb{F}_q$ the finite field of $q$ elements. Given a subset $D=\{x_1, \ldots, x_d\} \subset \mathbb{F}_q$ and $b \in \mathbb{F}_q$, let
\begin{align*}
N(D,b)= \#\{S \subseteq D: \sum_{x \in S}x=b\}.
\end{align*}
The dense input size of $D$ is $d\log q$, since one can simply list all the $d$ elements of $D$ in $\mathbb{F}_q$ where each takes $\log q$ space.
The decision subset sum problem (SSP) over finite fields asks if given $D$ and $b$, can one determine whether $N(D,b)>0$ in polynomial time in terms of the dense input size $d\log q$? If $N(D,b)>0$, then there exists at least one collection $S \subseteq D$ of elements which sum to $b$. This solution, $S$, can be checked by addition of $|S|\leq d$ elements of size $\textnormal{log }q$, thus SSP $\in$ NP for every fixed $p$. When $p=2$, it is a linear algebra problem and thus
SSP $\in$ P. It is known that SSP is NP-complete for each fixed $p>2$.
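For concreteness, the count $N(D,b)$ can be computed by exhaustive enumeration over a small prime field. The function below (its name and interface are our own illustration) runs in time exponential in $d$, so it does not bear on the complexity questions above.

```python
from itertools import combinations

def count_subset_sums(D, b, p):
    """Count subsets S of D (including the empty set) whose elements
    sum to b modulo p, i.e. N(D, b) over the prime field F_p."""
    D = list(D)
    count = 0
    for r in range(len(D) + 1):
        for S in combinations(D, r):
            if sum(S) % p == b % p:
                count += 1
    return count

# Example over F_7 with D = {1, 2, 3} and target b = 3.
# Subsets summing to 3 mod 7: {3} and {1, 2}, so N(D, 3) = 2.
print(count_subset_sums({1, 2, 3}, 3, 7))  # 2
```

Deciding SSP on this instance amounts to checking whether the returned count is positive.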
Motivated by numerous applications, a more precise version of the SSP is to determine whether there exists a subset $S \subseteq D$ of given size $k$ whose elements sum to $b$ given a set $D$ and target $b$ as above. The decision version of this $k$-subset sum problem ($k$-SSP) is as follows. Given a subset $D=\{x_1, \ldots, x_d\} \subset \mathbb{F}_q, k\in \{1, \ldots, d\}$ and $b \in \mathbb{F}_q$, for
\begin{align*}
N_k(D,b)= \#\{S \subseteq D: \sum_{x \in S}x=b, |S|=k\},
\end{align*}
determine whether $N_k(D,b)>0$.
The decision $k$-SSP problem is NP-hard for every fixed $p$, including the more difficult case $p=2$; the latter is the main result of \cite{V97}, which shows that computing the minimum distance of binary codes is NP-hard. In general, the complexity of the $k$-SSP problem depends on the relationship between $d$ and the modulus $q$. When $q=\mathcal{O}(\textnormal{poly}(d))$, dynamic programming solves the problem in polynomial time \cite{GM91, L05}. The trivial exhaustive search algorithm shows that $k$-SSP $\in$ P when $d = \mathcal{O}(\textnormal{log log}\,q)$. It is known that $k$-SSP is NP-hard when $d=(\textnormal{log }q)^c$ for constant $c>0$; see \cite{LO85,GM91}. An explicit formula for $N_k(D,b)$ was given for the case $D = \mathbb{F}_q$ in \cite{LW08}.
In coding theory, $k$-SSP arises from computing the minimum distance of a linear code and the deep hole problem for Reed-Solomon codes. The set $D$ is called the \textit{evaluation set} as it is exactly the evaluation set of the corresponding Reed-Solomon code.
If one moves further to consider the harder problem of computing the error distance
of a received word (namely, maximum likelihood decoding) in Reed-Solomon codes, one is naturally led to
the following higher moment $k$-subset sum problem.
More formally, given a subset $D=\{x_1, \ldots, x_d\} \subset \mathbb{F}_q, k\in \{1, \ldots, d\}$, $m \in \mathbb{N}$, and $\boldsymbol{b} = (b_1, \ldots, b_m) \in \mathbb{F}_q^m$, determine whether
\begin{align*}
N_k(D,\boldsymbol{b},m)= \#\{S \subseteq D: \sum_{y \in S}y^j=b_j, 1\leq j \leq m, |S|=k\},
\end{align*}
is positive. This problem is known as the $m$-th moment $k$-SSP, and
its complexity has been studied recently. It is proven to be NP-hard for general $D$ when $m\leq 3$ \cite{GGG15}, and more generally when $m = \mathcal{O}(\textnormal{log}\,\textnormal{log}\,\textnormal{log}\,q)$ \cite{GGG18}. An explicit combinatorial
formula for $N_k(D, \boldsymbol{b}, m)$ is obtained
in \cite{Ng19} when $m=2$ and $D=\mathbb{F}_q$.
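The moment counts $N_k(D,\boldsymbol{b},m)$ admit the same kind of brute-force illustration over a prime field (again exponential in $|D|$ and purely for concreteness; the helper name is ours).

```python
from itertools import combinations

def count_moment_subsets(D, b_vec, k, p):
    """Count k-subsets S of D with sum_{y in S} y**j == b_j (mod p)
    for j = 1, ..., m, i.e. N_k(D, b, m) over the prime field F_p."""
    m = len(b_vec)
    count = 0
    for S in combinations(sorted(D), k):
        if all(sum(pow(y, j, p) for y in S) % p == b_vec[j - 1] % p
               for j in range(1, m + 1)):
            count += 1
    return count

# Example over F_5 with D = F_5, k = 2, m = 2: only S = {1, 2} has
# power sums (1+2, 1+4) = (3, 0) mod 5.
print(count_moment_subsets(range(5), [3, 0], 2, 5))  # 1
```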
All the problems and results above are based on a model where we use the dense input $\{D,b\}$ of size $\mathcal{O}(d\, \textnormal{log}\,q)$ by listing all the $d$ elements of $D$. Though improved solutions to the decision $k$-SSP with such dense input are desired, one may also consider an \textit{algebraic input} model wherein $D$ is the set of images under some polynomial map applied to field elements. That is, for some monic polynomial $g(x) \in \mathbb{F}_q [x]$ of degree $n$,
\[
D = g(\mathbb{F}_q) = \{g(a) : a \in \mathbb{F}_q\}.
\]
In this situation, the algebraic input size would be $n\log q$ since it is enough to write
down the $n$ coefficients of the input polynomial $g(x)$. A fundamental problem is to ask
if the $k$-SSP and the $m$-th moment $k$-SSP can be solved in deterministic polynomial time
in terms of the algebraic input size $n\log q$. This appears more difficult as it is not
even clear if the problem is in NP because both $k$ and the set size $d=|D| \geq q/n$ can already be
exponential in terms of the algebraic input size $n\log q$. No complexity result is yet known for the algebraic model.
The last author conjectured that $k$-SSP can be solved in deterministic polynomial time in
algebraic input size $n\log q$ if the order of the
Galois group $G_g$ of $g(x)-t$ over ${\mathbb{F}}_q(t)$ is bounded by
a polynomial in $n\log q$. The last condition is trivially satisfied if
$$n = O(\log\log q/\log\log\log q)$$
since then $|G_g| \leq n!$ is bounded by a polynomial in $\log q$.
This condition is also satisfied when $g(x)$ is a monomial
or
Dickson polynomial of any degree $n$.
Note that this conjecture is highly non-trivial, as it is not even clear whether the problem is in NP since
we are using the algebraic (sparse) input
size and $d\geq q/n$ is exponential in
$n\log q$ for
$n=O(\log\log q)$. Thus, we cannot write down all the elements of $D$ as listing all
elements of $D$ already takes exponential time. In a sense, our set $D$ is given as
a black-box.
As supporting evidence, this conjecture has been proved
in the special case when the evaluation set $D$ is the image of the monomial
$x^n$ or of a Dickson polynomial of degree $n$: see \cite{WN18} for the case $p>2$ and
\cite{CC19} for the
case $p=2$. The aim of the present paper is to extend these results from $m=1$ (the $k$-SSP)
to the higher $m$-th moment $k$-SSP for each fixed $m$. Namely, our main result is
\begin{theorem}\label{THM1}
Let the evaluation set $D$ be the image set of a monomial or a Dickson polynomial of degree $n$
over $\mathbb{F}_q$.
There is a deterministic algorithm which, for
any given $m\in \mathbb{N}$, $\boldsymbol{b}\in \mathbb{F}_q^m$ and integer
$k\geq 0$,
decides whether $N_k(D,\boldsymbol{b},m)>0$
in time $(n\log q)^{C_m}$, where $C_m$ is a constant depending only on $m$.
In particular, this is a polynomial time algorithm in the algebraic input
size $n\log q$ for each fixed $m$.
\end{theorem}
To prove the above theorem,
we will need to combine all the
techniques available so far: dynamic programming for large $n>q^{\epsilon}$,
Kayal's algorithm \cite{K05} for
constant $k$, Brun sieve for medium $k$, the Li-Wan sieve
for large $k$ and $p>2$, and the recent Choe-Choe argument \cite{CC19} for large $k$ and $p=2$.
In addition, we need to employ the Weil bound to prove a
crucial new partial character
sum estimate.
\section{Background}
One important tool in our proof is character sum estimates. Let $\psi: \mathbb{F}_q \rightarrow \mathbb{C}$ be an additive character.
We know from character theory that for a nontrivial character $\psi$ we have $\sum\limits_{x \in \mathbb{F}_q} \psi (x) = 0$. However, in the case of the trivial character, the sum is the size of the finite field.
Let $G = \mathbb{F}_q$ and let $\widehat{G}$ be the set of all additive characters for $\mathbb{F}_q$. Then we have the following equality
\[
\sum_{\psi \in \widehat{G}} \psi(x) = \left\{
\begin{array}{ll}
q & \quad \textnormal{if }x = 0 \\[1em]
0 & \quad \textnormal{if }x \neq 0
\end{array}
\right..
\]
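This orthogonality relation is easy to verify numerically for a prime field $\mathbb{F}_p$, identifying $\widehat{G}$ with the characters $x\mapsto e^{2\pi i cx/p}$ for $c\in\mathbb{F}_p$ (a small sanity check, not part of any proof).

```python
import cmath

p = 11

def psi(c, x):
    # The additive character x -> exp(2*pi*i*c*x/p) of F_p; c indexes hat{G}.
    return cmath.exp(2j * cmath.pi * c * x / p)

# Sum over all characters: q at x = 0, and 0 otherwise.
for x in range(p):
    total = sum(psi(c, x) for c in range(p))
    expected = p if x == 0 else 0
    assert abs(total - expected) < 1e-9
print("orthogonality verified for p =", p)
```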
\begin{def1}[Dickson Polynomial]
Let $n$ be a positive integer and $a\in \mathbb{F}_q$. The Dickson polynomial of degree $n$ is defined as
\begin{equation*}
D_n(x, a) = \sum_{i=0}^{\lfloor n/2 \rfloor} \frac{n}{n-i}\binom{n-i}{i} (-a)^i x^{n-2i}.
\end{equation*}
\end{def1}
\noindent If $n=pn_1$ is divisible by $p$, one checks that $D_{pn_1}(x, a) = D_{n_1}(x, a)^p$. Thus, we can assume that $n$ is not divisible by $p$.
Note that for $a=0$, $D_n(x,0)=x^n$, so we see that Dickson polynomials are generalizations of monomials. Of particular use to us is the size of the image of these polynomials, also known as the \textit{value set}. A simple fact for the monomial $D_n(x,0)=x^n$ is that
\[
|D_n(\mathbb{F}_q^\times,0)| =
\begin{cases}
q-1 & \gcd(n, q-1)=1 \\
\frac{1}{\ell}(q-1) & \gcd(n, q-1)=\ell
\end{cases}
\]
In the first case, the map is $1$ to $1$; in the latter case, the map is $\ell$ to $1$. It turns out an analogous preimage-counting statement holds when $a\ne 0$. Chou, Mullen, and Wassermann in \cite{CMW} used a character sum argument to calculate the following.
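The defining formula for $D_n(x,a)$, the identity $D_{pn_1}(x,a)=D_{n_1}(x,a)^p$, and the monomial value-set count above can all be checked by direct computation over a small prime field (an illustrative sketch; the helper `dickson` is our own).

```python
from math import comb, gcd
from fractions import Fraction

def dickson(n, a, x, p):
    """Evaluate D_n(x, a) mod a prime p from the defining sum
    D_n(x, a) = sum_i n/(n-i) * C(n-i, i) * (-a)^i * x^(n-2i)."""
    total = Fraction(0)
    for i in range(n // 2 + 1):
        total += Fraction(n, n - i) * comb(n - i, i) * (-a) ** i * x ** (n - 2 * i)
    assert total.denominator == 1  # the Dickson coefficients are integers
    return total.numerator % p

p, a, n1 = 5, 2, 3
# The identity D_{p*n1}(x, a) = D_{n1}(x, a)^p, checked pointwise over F_5.
for x in range(p):
    assert dickson(p * n1, a, x, p) == pow(dickson(n1, a, x, p), p, p)

# For a = 0, D_n(x, 0) = x^n, whose image on F_p^* has (p-1)/gcd(n, p-1) elements.
n = 4
image = {pow(x, n, p) for x in range(1, p)}
assert len(image) == (p - 1) // gcd(n, p - 1)
print("Dickson identities verified")
```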
\begin{nots} For $b,c,d \in \mathbb{Z}$, let $b^c||d$ denote that $b^c$ exactly divides $d$, i.e.\ $b^c \mid d$ but $b^{c+1} \nmid d$.
\end{nots}
\begin{theorem}\label{dicksonTheorem}
Let $n\geq 2$ and $a\in \mathbb{F}_q^*$. If $q$ is even, then $|D_n^{-1}(D_n(x_0,a))| =$
\begin{equation*}
\left\{
\begin{array}{cl}
\gcd(n, q-1) & \text{ if condition A holds} \\
\gcd(n, q+1) & \text{ if condition B holds} \\
\dfrac{\gcd(n,q-1)+\gcd(n,q+1)}{2} & \text{$D_n(x_0,a)=0$},
\end{array}
\right .
\end{equation*}
where `condition A' holds $\text{if } x^2+x_0x+a \text{ is reducible over $\mathbb{F}_q$ and } D_n(x_0,a)\ne 0$; `condition B' holds $\text{if } x^2+x_0x+a \text{ is irreducible over $\mathbb{F}_q$ and } D_n(x_0,a)\ne 0$. \newline
\noindent If $q$ is odd, let $\eta$ be the quadratic character of $\mathbb{F}_q$. If $2^r||(q^2-1)$ then $|D_n^{-1}(D_n(x_0,a))|=$
\begin{equation*}
\left\{
\begin{array}{cl}
\gcd(n, q-1) & \text{if } \eta(x_0^2-4a)=1 \text{ and } D_n(x_0,a)\ne \pm 2a^{n/2} \\
\gcd(n, q+1) & \text{if } \eta(x_0^2-4a)=-1 \text{ and } D_n(x_0,a)\ne \pm 2a^{n/2} \\
\dfrac{\gcd(n, q-1)}{2} & \text{if } \eta(x_0^2-4a)=1 \text{ and condition C holds} \\
\dfrac{\gcd(n, q+1)}{2} & \text{if } \eta(x_0^2-4a)=-1 \text{ and condition C holds} \\
\dfrac{\gcd(n,q-1)+\gcd(n,q+1)}{2} & \text{otherwise}, \\
\end{array}
\right.
\end{equation*}
where `condition C' holds if
\begin{equation*}
2^t||n \text{ with } 1\leq t\leq r-1, \eta(a)=-1, \text{ and } D_n(x_0,a)=\pm 2a^{n/2}
\end{equation*}
or
\begin{equation*}
2^t||n \text{ with } 1\leq t\leq r-2, \eta(a)=1, \text{ and } D_n(x_0,a)=- 2a^{n/2}.
\end{equation*}
\end{theorem}
\noindent They also showed an explicit formula for the size of the value set of $D_n(x,a)$, denoted $|V_{D_n(x,a)}|$.
We state their result in the odd $q$ case.
\begin{theorem}\label{dicksonTheorem2}
Let $a\in \mathbb{F}_q^*$. If $2^r||(q^2-1)$ and $\eta$ is the quadratic character on $\mathbb{F}_q$ when $q$ is odd, then
\begin{equation*}
|V_{D_n(x,a)}| = \frac{q-1}{2\gcd(n,q-1)}+\frac{q+1}{2\gcd(n,q+1)}+\delta
\end{equation*}
where
\begin{equation*}
\delta = \left\{
\begin{array}{ll}
1 & \text{if $q$ is odd, $2^{r-1}||n$ and $\eta(a)=-1$} \\
\dfrac{1}{2} & \text{if $q$ is odd, $2^{t}||n$ with $1\leq t\leq r-2$} \\
0 & \text{otherwise}.
\end{array}
\right.
\end{equation*}
\end{theorem}
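Theorem \ref{dicksonTheorem2} can be tested numerically by comparing the brute-force value set of $D_n(x,a)$ over a small prime field against the stated formula (the evaluation helper is ours; the chosen degrees $n=2,3,4$ exercise the cases $\delta=\tfrac{1}{2}$ and $\delta=0$).

```python
from math import comb, gcd
from fractions import Fraction

p, a = 7, 3  # odd prime field F_7, with a in F_7^*

def dickson(n, a, x, p):
    # Evaluate D_n(x, a) mod p from the defining sum.
    total = Fraction(0)
    for i in range(n // 2 + 1):
        total += Fraction(n, n - i) * comb(n - i, i) * (-a) ** i * x ** (n - 2 * i)
    return total.numerator % p

def two_adic(n):
    # The t with 2^t || n.
    t = 0
    while n % 2 == 0:
        n //= 2
        t += 1
    return t

r = two_adic(p * p - 1)                                 # 2^r || (q^2 - 1)
eta_a = 1 if pow(a, (p - 1) // 2, p) == 1 else -1       # quadratic character, Euler's criterion

for n in (2, 3, 4):
    value_set = {dickson(n, a, x, p) for x in range(p)}
    t = two_adic(n)
    if t == r - 1 and eta_a == -1:
        delta = Fraction(1)
    elif 1 <= t <= r - 2:
        delta = Fraction(1, 2)
    else:
        delta = Fraction(0)
    predicted = (Fraction(p - 1, 2 * gcd(n, p - 1))
                 + Fraction(p + 1, 2 * gcd(n, p + 1)) + delta)
    assert len(value_set) == predicted
print("value set sizes match the formula for n = 2, 3, 4")
```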
As a consequence, for Dickson polynomials
of degree $n$, the value set cardinality $d=|D|$ can be computed in polynomial time in $n\log q$. Note that for a general
polynomial $g(x)\in \mathbb{F}_q[x]$ of
degree $n$, computing the image size
$|g(\mathbb{F}_q)|$ is a difficult problem, and there is no known polynomial time algorithm in terms of the algebraic
input size $n\log q$, see \cite{CHW13}
for complexity results and $p$-adic algorithm.
\subsubsection*{Weil's Character Sum Bound}
The following classical case of the Weil bound is well known. We shall give a more general
form later.
\begin{theorem}[Weil bound]
Let $f(x) \in \mathbb{F}_q[x]$ be a polynomial of degree $m$, where $(p,m) = 1$ and $\psi$ a non-trivial additive character of $\mathbb{F}_q$. Then
\[
\left | \sum_{x \in \mathbb{F}_q} \psi ( f(x)) \right | \le (m-1) \sqrt{q}.
\]
\end{theorem}
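The Weil bound is straightforward to check numerically over a prime field $\mathbb{F}_p$ with $\psi(x)=e^{2\pi i x/p}$ (an illustrative check on one polynomial; the helper `char_sum` and the chosen $f$ are ours).

```python
import cmath
from math import sqrt

p = 31  # work in the prime field F_31

def char_sum(coeffs, p):
    """|sum_{x in F_p} psi(f(x))| for psi(x) = exp(2*pi*i*x/p) and
    f(x) = coeffs[0] + coeffs[1]*x + ... given by its coefficient list."""
    total = 0
    for x in range(p):
        fx = sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p
        total += cmath.exp(2j * cmath.pi * fx / p)
    return abs(total)

# f(x) = x^3 + 2x + 1 has degree m = 3 with gcd(p, m) = 1, so the
# Weil bound gives |sum| <= (m - 1) * sqrt(q).
s = char_sum([1, 2, 0, 1], p)
assert s <= (3 - 1) * sqrt(p) + 1e-9
print(f"|sum| = {s:.3f} <= {(3 - 1) * sqrt(p):.3f}")
```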
For our purposes it will be important to have a good estimate for certain incomplete character sums, where the sum
is not summing over the full field $\mathbb{F}_q$, but over the image set $D$ of another polynomial $g(x)$. This is not available yet for general $g(x)$, but can be proved for monomials and Dickson polynomials. The monomial case is straightforward.
\begin{proposition}\label{MonomialWeil}
Let $f(x) \in \mathbb{F}_q[x]$ be a polynomial of degree $m$ such that $p \nmid m$.
Let $D = \{x^n : x \in \mathbb{F}_q\}$ where $(n+1)^2 \leq q$. Then
\[
\left | \sum_{x \in D} \psi ( f(x)) \right | \le m \sqrt{q}.
\]
\end{proposition}
\begin{proof}
Without loss of generality, we may assume that $n\mid(q-1)$, since the image of $x^n$ on $\mathbb{F}_q^\times$ coincides with that of $x^{\gcd(n,q-1)}$.
Let $D^\times = \{x^n : x \in \mathbb{F}_q^\times \}$. Using the Weil bound above,
\begin{align*}
\left| \sum_{x \in D} \psi(f(x)) \right| &= \left| \psi(f(0)) +\sum_{x \in D^\times} \psi(f(x)) \right| \\
&= \left| \psi(f(0)) + \frac{1}{n} \sum_{x \in \mathbb{F}_q^\times} \psi(f(x^n)) \right| \\
&= \left| \psi(f(0)) + \frac{1}{n} \sum_{x \in \mathbb{F}_q} \psi(f(x^n)) -\frac{1}{n} \psi(f(0))\right| \\
&\leq 1 + \frac{1}{n}(mn-1)\sqrt{q} + \frac{1}{n} \\
&= 1 + m\sqrt{q} - \frac{\sqrt{q}-1}{n}.
\end{align*}
If $(n+1)^2 \leq q$ then we conclude $$\left | \sum_{x \in D} \psi ( f(x)) \right | \le m \sqrt{q}.$$
\end{proof}
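Proposition \ref{MonomialWeil} can likewise be checked numerically: below we take $q=p=101$ and $n=5$, so that $n\mid(q-1)$ and $(n+1)^2\leq q$, and verify the bound $m\sqrt{q}$ on the incomplete sum over $D$ (the chosen $f$ is ours).

```python
import cmath
from math import sqrt, gcd

p, n = 101, 5  # n | p - 1 and (n + 1)^2 <= p
D = {pow(x, n, p) for x in range(p)}
# The image of x^n: 0 together with (p-1)/gcd(n, p-1) nonzero values.
assert len(D) == 1 + (p - 1) // gcd(n, p - 1)

def psi(x):
    return cmath.exp(2j * cmath.pi * x / p)

# f(x) = x^2 + 3x has degree m = 2, not divisible by p.
m = 2
s = abs(sum(psi((x * x + 3 * x) % p) for x in D))
assert s <= m * sqrt(p) + 1e-9
print(f"|sum over D| = {s:.3f} <= {m * sqrt(p):.3f}")
```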
When $D$ is the image of Dickson polynomials, the corresponding character sum estimate is harder. We need the following version of Weil's bound, which is the case $d=1$ of Theorem 5.6 in \cite{FW}.
\begin{theorem}
Let $f_i(t)$ ($1\leq i\leq n$) be polynomials in $\mathbb{F}_q[t]$, let $f_{n+1}(t)$ be a rational function in $\mathbb{F}_q(t)$, let $D_1$ be the degree of the highest square free divisor of $\prod_{i=1}^n f_i(t)$, let
\[D_2= \begin{cases}
0 & \deg(f_{n+1})\leq 0 \\
\deg(f_{n+1}) & \deg(f_{n+1})>0,
\end{cases}
\]
let $D_3$ be the degree of the denominator of $f_{n+1}$, and let $D_4$ be the degree of the highest square free divisor of the denominator of $f_{n+1}(t)$ which is relatively prime to $\prod_{i=1}^n f_i(t)$.
Let $\chi_i:\mathbb{F}_q^* \to \mathbb{C}^*$ $(1\leq i\leq n)$ be multiplicative characters of $\mathbb{F}_q$, and let $\psi=\psi_p \circ \text{Tr}_{\mathbb{F}_q / \mathbb{F}_p}$ for a non-trivial additive character $\psi_p: \mathbb{F}_p \to \mathbb{C}^*$ of $\mathbb{F}_p$. Extend $\chi_i$ to $\mathbb{F}_q$ by setting $\chi_i(0)=0$. Suppose that $f_{n+1}(t)$ is not of the form $r(t)^p-r(t)+c$ in $\mathbb{F}_q(t)$. Then, we have
\begin{align*}
&\left| \sum_{a\in \mathbb{F}_{q}, f_{n+1}(a)\ne \infty} \chi_1(f_1(a)) \cdots \chi_n(f_n(a)) \psi(f_{n+1}(a)) \right| \\
&\phantom{\sum}\leq (D_1 + D_2 + D_3 + D_4 - 1)\sqrt{q},
\end{align*}
where the sum is taken over those $a\in \mathbb{F}_{q}$ such that $f_{n+1}(a)$ is well-defined.
\end{theorem}
\noindent As a consequence, we derive the following character sum bounds.
\begin{cor}\label{weilBounds}
Let $\psi_{\text{Tr}}=\psi_p \circ \text{Tr}_{\mathbb{F}_q / \mathbb{F}_p}$ be the
canonical additive character,
$\psi:\mathbb{F}_q\to \mathbb{C}^*$ any non-trivial additive character of $\mathbb{F}_q$, and $\eta:\mathbb{F}_q^*\to \mathbb{C}^*$ the quadratic character if $q$ is odd. Let $f(x)$ be a polynomial in $\mathbb{F}_q[x]$ of degree $m$ not divisible by $p$.\begin{enumerate}
\item For all $q$, we have
\begin{equation*}
\left| \sum_{x\in \mathbb{F}_q} \psi(f(D_n(x,a)))\right| \leq (mn-1)\sqrt{q}.
\end{equation*}
\item If $q$ is odd, then
\begin{equation*}
\left| \sum_{x\in \mathbb{F}_q} \eta(x^2-4a)\psi(f(D_n(x,a)))\right| \leq (mn+1)\sqrt{q}.
\end{equation*}
\item If $q$ is even, then
\begin{align*}
\left|\sum_{x\in \mathbb{F}_q^*} \psi_{\text{Tr}}\left( f(D_n(x,a))+a/x^2\right)\right| &= \left| \sum_{x\in \mathbb{F}_q^*} \psi_{\text{Tr}}\left( f(D_n(x,a))+a^{q/2}/x\right) \right| \\
&\leq (mn+1)\sqrt{q}.
\end{align*}
\end{enumerate}
\end{cor}
\noindent Note that none of the polynomials in place of $f_{n+1}(t)$ is of the form $r(t)^p-r(t)+c$. This is clear if $n$ is also not divisible by $p$.
If $n$ is divisible by $p$, this can be reduced to the case when $n$ is not divisible by $p$ using the identity $D_{pn_1}(x, a) = D_{n_1}(x, a)^p$.
The following lemma is the key character sum estimate we need. The proof follows the method used in \cite{KW16}, where the case $m=1$ is treated.
\begin{lemma}\label{dicksonWeilEven} Let $f(x)$ be a polynomial in $\mathbb{F}_q[x]$ of degree $m$ not divisible by $p$. Let $D=\{D_n(x,a)\ |\ x\in \mathbb{F}_q\}$, for $a\in \mathbb{F}_q^*$. If $\psi:(\mathbb{F}_q,+)\to \mathbb{C}^*$ is a non-trivial additive character, then the following estimates hold:
\begin{equation*}
\left|\sum_{x\in D} \psi(f(x)) \right|\leq (mn+1)\sqrt{q}.
\end{equation*}
\end{lemma}
\begin{proof}
The sum can be rewritten in the following way:
\begin{equation*}
S_f:= \sum_{y\in D} \psi(f(y)) = \sum_{x\in \mathbb{F}_q} \psi(f(D_n(x,a))) \frac{1}{N_x},
\end{equation*}
where $N_x=|D_n^{-1}(D_n(x,a))|$ is the size of the preimage of the value $D_n(x,a)$.
\noindent \textbf{When $q$ is even:}
\noindent By Theorem \ref{dicksonTheorem}, $N_x$ can be quantified. Let $\text{Tr}:\mathbb{F}_q\to \mathbb{F}_2$ denote the absolute trace. Using the fact that $z^2+xz+a$ is reducible over $\mathbb{F}_q$ if and only if $\text{Tr}(a/x^2)=0$, we obtain
\begin{align*}
S_f &=\sum_{\substack{x\in \mathbb{F}_q^* \\
\text{Tr}(a/x^2)=0}} \frac{1}{\gcd(n, q-1)} \psi(f(D_n(x,a))) \\
&+ \sum_{\substack{x\in \mathbb{F}_q^* \\ \text{Tr}(a/x^2)=1}} \frac{1}{\gcd(n, q+1)} \psi(f(D_n(x,a))) \\
&+ \frac{1}{\gcd(n, q-1)}\psi(f(D_n(0,a))) + O(1),
\end{align*}
where $O(1)$ is a term of size at most 1, which we accept by dropping the $D_n(x,a)=0$ case. Let $\psi_1: \mathbb{F}_2\to \mathbb{C}^*$ be the nontrivial additive character of $\mathbb{F}_2$ and set $\psi_{\text{Tr}}=\psi_1\circ \text{Tr}$, an additive character of $\mathbb{F}_q$. Simplifying and rearranging gives
\begin{align*}
S_f &= \frac{1}{2\gcd(n,q-1)} \sum_{x\in \mathbb{F}_q^*} \psi(f(D_n(x,a)))(1+\psi_{\text{Tr}}(a/x^2)) \\
&\phantom{=}+ \frac{1}{2\gcd(n, q+1)} \sum_{x\in \mathbb{F}_q^*} \psi(f(D_n(x,a)))(1-\psi_{\text{Tr}}(a/x^2)) \\
&\phantom{=}+ \frac{1}{\gcd(n, q-1)}\psi(f(D_n(0,a))) + O(1) \\
&= \left( \frac{1}{2\gcd(n,q-1)} + \frac{1}{2\gcd(n, q+1)} \right) \sum_{x\in \mathbb{F}_q^*} \psi(f(D_n(x,a))) \\
&\phantom{=}+ \left( \frac{1}{2\gcd(n,q-1)} - \frac{1}{2\gcd(n, q+1)} \right) \sum_{x\in \mathbb{F}_q^*} \psi(f(D_n(x,a)))\psi_{\text{Tr}}(a/x^2) \\
&\phantom{=}+ \frac{1}{\gcd(n, q-1)}\psi(f(D_n(0,a))) + O(1).
\end{align*}
We add and subtract $\left(\frac{1}{2\gcd(n,q-1)} + \frac{1}{2\gcd(n, q+1)} \right)\psi(f(D_n(0,a)))$ to complete the first sum:
\begin{align*}
&= \left( \frac{1}{2\gcd(n,q-1)} + \frac{1}{2\gcd(n, q+1)} \right) \sum_{x\in \mathbb{F}_q} \psi(f(D_n(x,a))) \\
&+ \left( \frac{1}{2\gcd(n,q-1)} - \frac{1}{2\gcd(n, q+1)} \right) \sum_{x\in \mathbb{F}_q^*} \psi(f(D_n(x,a)))\psi_{\text{Tr}}(a/x^2) \\
&+ \left( \frac{1}{2\gcd(n,q-1)} -\frac{1}{2\gcd(n, q+1)} \right)\psi(f(D_n(0,a))) + O(1).
\end{align*}
In order to estimate the sum in the second term, take $b\in \mathbb{F}_q^*$ so that $\psi(x)=\psi_{\text{Tr}}(bx)$. Then,
\begin{equation*}
\sum_{x\in \mathbb{F}_q^*} \psi(f(D_n(x,a)))\psi_{\text{Tr}}(a/x^2) = \sum_{x\in \mathbb{F}_q^*} \psi_{\text{Tr}}(bf(D_n(x,a))+a/x^2).
\end{equation*}
Applying the bounds in Corollary \ref{weilBounds} with $f$ replaced by $bf$,
\begin{align*}
\left|\sum_{y\in D} \psi(f(y))\right| &\leq \left( \frac{1}{2\gcd(n,q-1)} + \frac{1}{2\gcd(n, q+1)} \right)(mn-1)\sqrt{q} \\
&\phantom{=}\phantom{=}+ \left| \frac{1}{2\gcd(n,q-1)} - \frac{1}{2\gcd(n, q+1)} \right| (mn+1)\sqrt{q} + 2 \\
&\leq (mn+1)\sqrt{q}.
\end{align*}
\noindent \textbf{When $q$ is odd:}
\noindent We use Theorem \ref{dicksonTheorem} again to calculate $N_x$. Let $\eta$ be the quadratic character of $\mathbb{F}_q$. Then,
\begin{align*}
S_f &=\sum_{\substack{x\in \mathbb{F}_q \\ \eta(x^2-4a)=1}} \frac{1}{\gcd(n, q-1)} \psi(f(D_n(x,a))) \\
&+ \sum_{\substack{x\in \mathbb{F}_q \\ \eta(x^2-4a)=-1}} \frac{1}{\gcd(n, q+1)} \psi(f(D_n(x,a))) + O(1).
\end{align*}
The term $O(1)$ is a constant of size at most 2, which we accept by dropping the complicated `condition C' and `otherwise' cases. Simplifying and rearranging gives
\begin{align*}
&= \frac{1}{2\gcd(n,q-1)} \sum_{x\in \mathbb{F}_q} \psi(f(D_n(x,a)))(1+\eta(x^2-4a)) \\
&\phantom{=}+ \frac{1}{2\gcd(n, q+1)} \sum_{x\in \mathbb{F}_q} \psi(f(D_n(x,a)))(1-\eta(x^2-4a)) + O(1) \\
&= \left( \frac{1}{2\gcd(n,q-1)} + \frac{1}{2\gcd(n, q+1)} \right) \sum_{x\in \mathbb{F}_q} \psi(f(D_n(x,a))) \\
&\phantom{=}+ \left( \frac{1}{2\gcd(n,q-1)} - \frac{1}{2\gcd(n, q+1)} \right) \sum_{x\in \mathbb{F}_q} \psi(f(D_n(x,a)))\eta(x^2-4a) + O(1).
\end{align*}
Again applying the bounds in Corollary \ref{weilBounds},
\begin{align*}
\left|\sum_{x\in D} \psi(f(x)) \right|&\leq \left( \frac{1}{2\gcd(n,q-1)} + \frac{1}{2\gcd(n, q+1)} \right)(mn-1)\sqrt{q} \\ &\phantom{=}\phantom{=}+ \left| \frac{1}{2\gcd(n,q-1)} - \frac{1}{2\gcd(n, q+1)} \right|(mn+1)\sqrt{q} + 2\\
&\leq (mn+1)\sqrt{q},
\end{align*}
which was to be shown.
\end{proof}
\section{$k$-MSS($m$)}
We are now ready to consider the $m$-th moment $k$-subset sum problem, denoted $k$-MSS($m$) for short. Let $m$ be a fixed positive integer, and $g(x) \in \mathbb{F}_q [x]$ a polynomial of degree $n$ with $1\leq n \leq q-1$.
Let $D = g ( \mathbb{F}_q)$
and $\boldsymbol{b} = (b_1, b_2, \ldots, b_m) \in \mathbb{F}_q^m$. Since we are working in characteristic $p$, we have that $$(x_1^i + \ldots + x_k^i)^p = x_1^{ip} + \ldots + x_k^{ip}.$$ Thus if $b_i^p \neq b_{ip}$ for some $ip\leq m$, there will be no solutions for $k$-MSS($m$). We may and will assume without loss of generality that $b_i^p = b_{ip}$ for all $ip\leq m$ in the remainder of this paper. Under this assumption, the $j$-th power equation in the $k$-MSS($m$) can and will be dropped for all $j$ divisible by $p$.
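The Frobenius identity $(x_1^i + \ldots + x_k^i)^p = x_1^{ip} + \ldots + x_k^{ip}$ underlying this reduction is easy to verify in a small prime field (a one-off sanity check).

```python
p = 5
xs = [1, 2, 3, 4]  # elements of F_5

for i in (1, 2):
    lhs = pow(sum(pow(x, i, p) for x in xs), p, p)  # (x_1^i + ... + x_k^i)^p
    rhs = sum(pow(x, i * p, p) for x in xs) % p     # x_1^{ip} + ... + x_k^{ip}
    assert lhs == rhs
print("Frobenius identity verified in F_5")
```

In particular, if $b_i^p \neq b_{ip}$ for some $ip \leq m$, the system is inconsistent and the instance can be rejected immediately.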
We introduce the moment subset sum problem over subsets of size $k$ with the value
\begin{align}
N_k(D,\boldsymbol{b},m)=\# \left\{ S \subseteq D : |S| = k, \sum_{y \in S} y^j = b_j , 1 \le j \le m, p \nmid j \right\}.
\end{align}
Thus, from now on, the index $j$ is
not divisible by $p$.
Determining whether $N_k(D,\boldsymbol{b},m) >0$ for given $\{D, \boldsymbol{b}\}$ is the decision version of the $k$-MSS($m$) problem.
As indicated before, we shall use the
algebraic input size $n\log q$.
A closely related number is the following integer
\begin{align*}
M_k (D,\boldsymbol{b},m) = \# \{ (x_1, \dots, x_k) \in D^k :\ &\sum_{i=1}^k x_i^{j} = b_j,\ 1 \le j \le m,\ p \nmid j, \\
&x_{i_1} \ne x_{i_2}\ \forall\, 1 \le i_1 < i_2 \le k \}.
\end{align*}
It is clear that $M_k (D,\boldsymbol{b},m) = k! N_k (D,\boldsymbol{b},m)$.
We deduce
\begin{theorem} $M_k(D,\boldsymbol{b},m) > 0$ if and only if $N_k(D,\boldsymbol{b},m) > 0$.
\end{theorem}
Our problem is then reduced to deciding whether $M_k (D,\boldsymbol{b},m)>0$.
We can reduce this further by duality (replacing each $S$ with its complement $D\setminus S$), so we may assume $k \le \frac{|D|}{2}$. The strategy for this new problem is to combine all the established strategies for the original subset sum problem with the character sum estimate from the previous section. We shall divide $k$ into three ranges (constant size,
medium size, and large size) and
use a different method for each range.
The main idea is to use algorithms to solve boundary cases of parameters and
to use mathematics to prove that
there is a solution when the parameters
are in the interior.
If $n>q^{\epsilon}$ for a constant $\epsilon>0$, then $q$ is polynomial in
$n\log q$, so we can list all elements of $D$
and use the dynamic programming algorithm
to solve the moment subset sum problem in polynomial time.
In the rest of the paper, we can and will
assume that $n < q^{\epsilon}$ for whatever positive constant $\epsilon$ we like.
\subsection*{$k$-MSS($m$) for constant size $k$}
The main result that we depend on in this case is due to Kayal's solvability algorithm for polynomial systems over $\mathbb{F}_q$ \cite{K05}, which we summarize in this context below.
Let $f_1, \dots, f_m \in \mathbb{F}_q[x_1, \dots, x_n]$, where $d$ is the maximum degree of all the polynomials. Let $X = V(f_1, \dots, f_m)$ be the vanishing locus of the polynomials. Then the result of Kayal \cite{K05} states the following.
\begin{theorem}
The decision problem of $\# X(\mathbb{F}_q) > 0$ can be solved in time $\left(d^{n^{cn}} m \log q\right)^{O(1)}$ for some constant $c>0$.
\end{theorem}
Most of the conditions in our $k$-MSS($m$) are polynomial equations, with the exception of the condition that the individual elements be distinct. However, we can encode this as a polynomial equation at the cost of one additional variable. Recall that $D =\{g(x) : x \in \mathbb{F}_q\}$ for a polynomial $g(x) \in \mathbb{F}_q [x]$ of degree $n$. For the $k$-MSS($m$) problem, we are deciding if the variety determined by the vanishing locus of
$$f_j (x_1, \dots, x_k ):=\left(\sum_{i=1}^k g(x_i)^j\right) -b_j, \ 1 \le j \le m, \ p\nmid j$$
and the additional polynomial
\[
\left( \prod_{1 \le i_1 < i_2 \le k} (g(x_{i_1}) - g(x_{i_2})) \right)x_{k+1} - 1
\]
have any $\mathbb{F}_q$-rational points. Each $f_j$ has degree at most $mn$ while the latter polynomial has degree $n \binom{k}{2} + 1$.
Now, we assume $k \le 3m+1$.
Then, $n\binom{k}{2} + 1 \leq 9nm^2$ and so all the polynomials have degrees
bounded by $9nm^2$.
Kayal's theorem then states that the decision problem can be solved in time which is bounded by a
polynomial in
\[
(9nm^2)^{(k+1)^{O(k+1)}}\log q = (9nm^2)^{(3m+2)^{O(3m+2)}}\log q.
\]
This is $(n \log q)^{O(1)}$ if $m$ is a constant. Thus, we have proved the following
\begin{theorem}\label{ThmSmallK}
Let $D =\{g(x) : x \in \mathbb{F}_q\}$, where $g(x) \in \mathbb{F}_q [x]$ is any polynomial of degree $n$. Let
$m$ be a fixed positive integer. Assume $k \leq 3m+1$. Then $k$-MSS($m$) can be solved in time $(n \log q)^{O(1)}$.
\end{theorem}
The condition $k\leq 3m+1$ is all we need. It can be replaced by
any bound $k\leq C$, where $C$ is a positive constant.
\subsection*{$k$-MSS($m$) for medium $k$}
We now consider the moment $k$-subset sum problem for medium-sized values of $k$.
Fix $m \in \mathbb{N}$ and $\boldsymbol{b} = (b_1, \ldots, b_m) \in \mathbb{F}_q^m$. Let $m_p = |\{j : 1 \leq j \leq m, p \nmid j\}| = m - \lfloor \frac{m}{p} \rfloor$.
Recall
\begin{align*}
M_k(D,\boldsymbol{b},m) = |\{(x_1, \ldots, x_k ) \in D^k :& \sum_{i=1}^k x_i^{j} - b_j =0, \\
&x_{i_1} \neq x_{i_2} \text{ for } i_1 \neq i_2, \\
&1 \leq j \leq m, p \nmid j\}|.
\end{align*}
and
\begin{align*}
M_k(D,\boldsymbol{b},m) = k! \cdot N_k(D,\boldsymbol{b},m),
\end{align*}
where $$N_k(D,\boldsymbol{b},m) = |\{S \subseteq D : |S| = k, \sum\limits_{y \in S} y^j = b_j, 1 \leq j \leq m, p \nmid j\}|.$$
We wish to decide when
$M_k(D,\boldsymbol{b},m) > 0$.
The following theorem solves this problem
in the medium $k$ case, provided a certain character sum estimate holds.
\begin{theorem}\label{med_k_thm}
Let $D = g(\mathbb{F}_q)$ where $g \in \mathbb{F}_q[x]$ with deg$(g) = n$. Let $\psi$ be a non-trivial
additive character of $\mathbb{F}_q$.
Assume for all $f \in \mathbb{F}_q[x]$ of degree at most $m$ with $p \nmid \text{deg}(f)$,
we have
$$\left|\sum_{x \in D} \psi(f(x))\right| \leq (mn+1)\sqrt{q}.$$
Then
$M_k(D,\boldsymbol{b},m) > 0$
if $2n(mn+1) < q^\frac{1}{6}$ and $3m_p+1 < k < q^\frac{5}{12}$.
\end{theorem}
The first condition $2n(mn+1) < q^\frac{1}{6}$ is already satisfied, since we assumed that $n<q^{\epsilon}$ and $m$ is a constant.
The second condition $3m_p+1 < k < q^\frac{5}{12}$ gives the medium range of $k$.
Towards this goal, we define
$$R= |\{(x_1, \ldots, x_k ) \in D^k : \sum_{i=1}^k x_i^{j} - b_j = 0, 1 \leq j \leq m, p \nmid j\}|.$$
We say that $\boldsymbol{x}=(x_1, \ldots, x_k )\in \mathbb{F}_q^k$ \textit{is a solution} if $\boldsymbol{x}$ satisfies the conditions of $R$. Note that $R$ counts solutions allowing for those with repeated entries, while $M_k(D,\boldsymbol{b},m)$ strictly counts solutions with distinct entries. We define a new number to compute the size of $R$ with the added condition that the first two entries of $\boldsymbol{x}$ are equal. Let
$$R_{12} = |\{(x_2, \ldots, x_k ) \in D^{k-1} : 2x_2^{j} +\sum_{i=3}^k x_i^{j} - b_j = 0, 1 \leq j \leq m, p \nmid j\}|.$$
Then the Brun sieve tells us that
$$M_k(D,\boldsymbol{b},m) \geq R - \sum_{1 \leq i_1 < i_2 \leq k} R_{i_1 i_2} = R - \binom{k}{2}R_{12}.$$
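The Brun sieve inequality can be confirmed by brute force on a tiny instance (the set $D$, target $\boldsymbol{b}$, and helper `ok` below are our own choices; $M_k$, $R$, $R_{12}$ are computed directly from their definitions).

```python
from itertools import product

p, k, m = 7, 3, 1
D = [0, 1, 2, 4]  # a small evaluation set in F_7
b = [3]           # target power sums b_1, ..., b_m (p does not divide any j here)

def ok(xs, weights=None):
    # Check sum_i w_i * x_i^j == b_j (mod p) for all j with p not dividing j.
    ws = weights or [1] * len(xs)
    return all(sum(w * pow(x, j, p) for w, x in zip(ws, xs)) % p == b[j - 1]
               for j in range(1, m + 1))

# M_k: solution tuples with distinct entries; R: all solution tuples.
M_k = sum(1 for xs in product(D, repeat=k) if len(set(xs)) == k and ok(xs))
R = sum(1 for xs in product(D, repeat=k) if ok(xs))
# R_12: tuples with x_1 = x_2 collapsed, i.e. the first coordinate doubled.
R12 = sum(1 for xs in product(D, repeat=k - 1) if ok(xs, [2] + [1] * (k - 2)))

assert M_k >= R - (k * (k - 1) // 2) * R12  # the Brun sieve lower bound
print(M_k, R, R12)  # 6 10 2
```

Here the only 3-subset of $D$ with sum $\equiv 3 \pmod 7$ is $\{0,1,2\}$, giving $M_k = 3! = 6$, while $R - \binom{3}{2}R_{12} = 10 - 6 = 4$.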
In order to rewrite $R$ and $R_{12}$ and obtain bounds for them we use the theory of characters.
Let $\psi$ be a non-trivial additive character of $\mathbb{F}_q$.
Recall that we have the following summation:
\[
\sum_{c \in \mathbb{F}_q} \psi(cx) = \left\{
\begin{array}{ll}
q & \quad \text{if } x = 0 \\[1em]
0 & \quad \text{if } x \neq 0
\end{array}
\right.
\]
We would like to take advantage of this character sum equation and have it evaluate solutions positively and evaluate non-solutions to zero. Thus we have the following identity.
\[
\prod_{j=1, p\nmid j}^m\left(\sum_{c \in \mathbb{F}_q} \psi(c(\sum_{i=1}^k x_i^j-b_j))\right) = \left\{
\begin{array}{ll}
q^{m_p} & \quad \text{if } \textbf{$x$} \text{ is a solution}\\[1em]
0 & \quad \text{if } \textbf{$x$} \text{ is not a solution}
\end{array}
\right.
\]
With this in mind, we can rewrite $R$ as below
\begin{align*}
R &= \frac{1}{q^{m_p}}\sum_{x \in D^k}\prod_{j=1, p\nmid j}^m\sum_{c \in \mathbb{F}_q} \psi(c(\sum_{i=1}^k x_i^j-b_j))\\
&= \frac{1}{q^{m_p}} \sum_{\boldsymbol{x} \in D^k} \sum_{\boldsymbol{c} \in \mathbb{F}_q^{m_p}} \prod_{j=1, p\nmid j}^m \psi(c_j (\sum_{i=1}^k x_i^j -b_j)) \\
&= \frac{1}{q^{m_p}} \sum_{\boldsymbol{c} \in \mathbb{F}_q^{m_p}}\sum_{\boldsymbol{x} \in D^k} \prod_{j=1, p\nmid j}^m \psi(c_j (\sum_{i=1}^k x_i^j -b_j)) \\
&= \frac{1}{q^{m_p}} \sum_{\boldsymbol{c} \in \mathbb{F}_q^{m_p}}\sum_{\boldsymbol{x} \in D^k} \psi\left(\sum_{j=1, p\nmid j}^m c_j \left(\sum_{i=1}^k x_i^j -b_j\right)\right)
\end{align*}
By separating the contribution of the trivial character $\boldsymbol{c}=0$, we obtain the following.
\begin{align*}
R &= \frac{1}{q^{m_p}} \sum_{\boldsymbol{x} \in D^k} \psi(0) + \frac{1}{q^{m_p}}\sum_{0 \neq \boldsymbol{c} \in \mathbb{F}_q^{m_p}} \sum_{\boldsymbol{x} \in D^k} \psi\left(\sum_{j=1, p\nmid j}^m c_j \left(\sum_{i=1}^k x_i^{j}-b_j\right)\right) \\
&= \frac{|D|^k}{q^{m_p}} + \frac{1}{q^{m_p}} \sum_{0 \neq \boldsymbol{c} \in \mathbb{F}_q^{m_p}} S_c,
\end{align*}
where
$$S_c = \sum_{\boldsymbol{x} \in D^k} \psi\left(\sum_{j=1, p\nmid j}^m c_j \left(\sum_{i=1}^k x_i^{j}-b_j\right)\right).$$
Define
$$f(x) = \sum\limits_{j=1, p\nmid j}^m c_j x^{j} \in \mathbb{F}_q[x].$$
Note that if $\boldsymbol{c}\neq 0$, then the degree of $f$ is at most $m$ and not divisible by $p$. We now want to find an upper bound for $|S_c|$. Notice that
\begin{align*}
\psi\left(\sum_{j=1, p\nmid j}^m c_j \left(\sum_{i=1}^k x_i^{j}-b_j\right)\right) &= \psi\left(\sum_{i=1}^k\sum_{j=1, p\nmid j}^m c_j x_i^{j} - \sum_{j=1, p\nmid j}^m c_jb_j\right) \\
&= \psi(f(x_1)) \cdots \psi(f(x_k))\psi(-\sum_{j=1, p\nmid j}^m c_jb_j) \\
&= A \cdot \prod_{i=1}^k \psi(f(x_i)).
\end{align*}
Here, $A = \psi(-\sum\limits_{j=1, p\nmid j}^m c_jb_j)$ and so $|A| = \prod\limits_{j=1, p\nmid j}^m|\psi(-c_jb_j)|=1$. Thus $$|S_c| = \left|\sum\limits_{\boldsymbol{x} \in D^k}\prod\limits_{i=1}^k \psi(f(x_i))\right| = \left(\left|\sum\limits_{x \in D} \psi(f(x))\right|\right)^k.$$
By our assumptions, $|S_c| \leq (mn+1)^k(\sqrt{q})^k$. It follows that
$$\left| R - \frac{|D|^k}{q^{m_p}} \right| \leq \frac{1}{q^{m_p}}\sum_{0 \neq \boldsymbol{c} \in \mathbb{F}_q^{m_p}} |S_c| \leq \frac{q^{m_p}-1}{q^{m_p}}(mn+1)^k q^\frac{k}{2} <(mn+1)^k q^\frac{k}{2}.$$
{\bf Remark}. Igor Shparlinski kindly informed us that the average trick in \cite{Sh15} can be used to improve
the above coefficient $(mn+1)^k$ to $(mn+1)^{k-2}$.
The idea is to apply the character sum estimate only to the first $k-2$ factors of $|S_c|$, and then compute the remaining quadratic moment over $\boldsymbol{c}$, resulting in a saving of the factor
$(mn+1)^2$. This type of improvement is theoretically
interesting, but would not significantly improve the condition $3m_p+1 <k$ in our theorem,
which suffices for the algorithmic purposes of this paper.
Now we can rewrite $R_{12}$ in a similar way.
\begin{align*}
R_{12} &= \frac{1}{q^{m_p}} \sum_{\boldsymbol{x} \in D^{k-1}} \prod_{j=1, p\nmid j}^m \sum_{c \in \mathbb{F}_q} \psi(c(2x_1^j + \sum_{i=3}^k x_i^j-b_j)) \\
&= \frac{1}{q^{m_p}} \sum_{\boldsymbol{x} \in D^{k-1}} \sum_{\boldsymbol{c} \in \mathbb{F}_q^{m_p}} \prod_{j=1, p\nmid j}^m \psi(c_j (2x_1^j + \sum_{i=3}^k x_i^j -b_j)) \\
&= \frac{1}{q^{m_p}} \sum_{\boldsymbol{c} \in \mathbb{F}_q^{m_p}}\sum_{\boldsymbol{x} \in D^{k-1}} \prod_{j=1, p\nmid j}^m \psi(c_j (2x_1^j + \sum_{i=3}^k x_i^j -b_j)) \\
&= \frac{1}{q^{m_p}} \sum_{\boldsymbol{c} \in \mathbb{F}_q^{m_p}}\sum_{\boldsymbol{x} \in D^{k-1}} \psi\left(\sum_{j=1, p\nmid j}^m c_j \left(2x_1^j + \sum_{i=3}^k x_i^j -b_j\right)\right)
\end{align*}
By separating the contribution of the trivial character, we obtain the following.
\begin{align*}
R_{12} &= \frac{1}{q^{m_p}} \sum_{\boldsymbol{x} \in D^{k-1}} \psi(0) + \frac{1}{q^{m_p}}\sum_{0 \neq \boldsymbol{c} \in \mathbb{F}_q^{m_p}} \sum_{\boldsymbol{x} \in D^{k-1}} \psi\left(\sum_{j=1, p\nmid j}^m c_j \left(2x_1^j + \sum_{i=3}^k x_i^j -b_j\right)\right) \\
&= \frac{|D|^{k-1}}{q^{m_p}} + \frac{1}{q^{m_p}} \sum_{0 \neq \boldsymbol{c} \in \mathbb{F}_q^{m_p}} S_c^{12},
\end{align*}
where $$S_c^{12} = \sum\limits_{\boldsymbol{x} \in D^{k-1}}\psi\left(\sum\limits_{j=1, p\nmid j}^m c_j \left(2x_1^{j} + \sum\limits_{i=3}^{k} x_i^{j}-b_j\right)\right).$$ By a manipulation similar to that in the previous case,
\begin{align*}
S_c^{12} &= \sum\limits_{\boldsymbol{x} \in D^{k-1}}\psi(2f(x_1))\psi(f(x_3))\cdots\psi(f(x_{k}))\psi(-\sum\limits_{j=1, p\nmid j}^m c_jb_j) \\
&= A\sum\limits_{\boldsymbol{x} \in D^{k-1}}\psi(2f(x_1))\prod_{i=3}^{k}\psi(f(x_i)).
\end{align*}
By a rearrangement, we see that
\begin{align*}
\left|S_c^{12}\right| &= \left| \sum_{\boldsymbol{x} \in D^{k-1}} \psi(2f(x_1))(\prod_{i=3}^{k} \psi(f(x_i)))\right| \\
&= \left| \left(\sum_{x \in D} \psi(2f(x))\right)\left(\sum_{x \in D} \psi(f(x))\right)^{k-2}\right|
\end{align*}
By our assumptions, if $p>2$ (and thus $2\not=0$),
\begin{align*}
\left|S_c^{12}\right| &\leq (mn+1)\sqrt{q}(mn+1)^{k-2}(\sqrt{q})^{k-2} \\
&= (mn+1)^{k-1}q^\frac{k-1}{2}.
\end{align*}
The case $p=2$ can be handled in a similar way, and one obtains the alternative bound
$$\left|S_c^{12}\right| \leq
|D|(mn+1)^{k-2}q^\frac{k-2}{2}.$$
We assume that $p>2$ for simplicity.
Now we have that
\begin{align*}
\left|R_{12} - \frac{|D|^{k-1}}{q^{m_p}}\right| &= \frac{1}{q^{m_p}} \left|\sum_{\textbf{0} \neq \textbf{c} \in \mathbb{F}_q^{m_p}} S_c^{12}\right| \\
&\leq \frac{1}{q^{m_p}}\sum_{\textbf{0} \neq \textbf{c} \in \mathbb{F}_q^{m_p}} (mn+1)^{k-1}q^\frac{k-1}{2} \\
&= \frac{q^{m_p}-1}{q^{m_p}} (mn+1)^{k-1}q^\frac{k-1}{2} \\
&< (mn+1)^{k-1}q^\frac{k-1}{2}.
\end{align*}
Since we have the following two inequalities,
$$\left|R_{12} - \frac{|D|^{k-1}}{q^{m_p}}\right| < (mn+1)^{k-1}q^\frac{k-1}{2}$$
$$\left|R - \frac{|D|^{k}}{q^{m_p}}\right| < (mn+1)^k q^\frac{k}{2}$$
we see that
$$\frac{|D|^k}{q^{m_p}}-(mn+1)^k q^\frac{k}{2} < R, \text{ and }$$
$$R_{12} < \frac{|D|^{k-1}}{q^{m_p}} + (mn+1)^{k-1}q^\frac{k-1}{2}.$$
Then
\begin{align*}
R &- \binom{k}{2} R_{12} > \frac{|D|^{k}}{q^{m_p}}-(mn+1)^k q^\frac{k}{2} - \binom{k}{2}\left(\frac{|D|^{k-1}}{q^{m_p}} + (mn+1)^{k-1}q^\frac{k-1}{2} \right) \\
&= |D|^{k-1}\frac{1}{q^{m_p}}\left(|D|- \binom{k}{2}\right)-(mn+1)^{k-1}q^\frac{k-1}{2}\left((mn+1)\sqrt{q} + \binom{k}{2}\right) \\
&= \frac{1}{q^{m_p}}\left(|D|^{k-1}\left(|D|-\binom{k}{2}\right)\right) - (mn+1)^{k-1}q^\frac{k-1}{2}\left((mn+1)\sqrt{q} + \binom{k}{2}\right).
\end{align*}
We wish to show that $R- \binom{k}{2} R_{12}$ is positive and thus we need to show that $$|D|^{k-1}\left(|D|-\binom{k}{2}\right) \geq q^{m_p}(mn+1)^{k-1}q^\frac{k-1}{2}\left((mn+1)\sqrt{q} + \binom{k}{2}\right).$$
Since $\text{deg}(g) = n$, we know that $|D| \geq \frac{q}{n}$. Thus it is enough to show that $$\left(\frac{q}{n}\right)^{k-1}\left(\frac{q}{n}-\binom{k}{2}\right) \geq q^{m_p+\frac{k-1}{2}}(mn+1)^{k-1}\left((mn+1)\sqrt{q} + \binom{k}{2}\right).$$
Towards this goal, we utilize our assumptions that $2n(mn+1) < q^\frac{1}{6}$ and $3m_p+1 < k < q^\frac{5}{12}$.
It is enough to prove
$$\left(\frac{q}{n}\right)^{k-1} \geq q^{m_p+\frac{k-1}{2}}(mn+1)^{k-1}, \
\left(\frac{q}{n}-\binom{k}{2}\right) \geq \left((mn+1)\sqrt{q} + \binom{k}{2}\right).
$$
For the first inequality, we have
\begin{align*}
\left(\frac{q}{n}\right)^{k-1}>q^{m_p+\frac{k-1}{2}}(mn+1)^{k-1}
&\iff q^{k-1-m_p-\frac{k-1}{2}} > (mn+1)^{k-1}n^{k-1} \\
&\iff q^{\frac{k-1}{2}-m_p} > (mn+1)^{k-1}n^{k-1}. \\
\end{align*}
Since $2n(mn+1) < q^\frac{1}{6}$,
the right side is bounded by
$$(mn+1)^{k-1}n^{k-1}<(n(mn+1))^{k-1} < q^\frac{k-1}{6}.$$
Our problem is now reduced to showing that $q^{\frac{k-1}{6}+m_p} < q^{\frac{k-1}{2}}$. Namely,
$$m_p < \frac{k-1}{2}-\frac{k-1}{6} =\frac{k-1}{3}.$$
This is satisfied since $3m_p+1 < k$.
Thus we have shown that
\begin{equation}\label{med_proof_eq_1}
\left(\frac{q}{n}\right)^{k-1}>q^{m_p+\frac{k-1}{2}}(mn+1)^{k-1}.
\end{equation}
For the second inequality, we need to show that $n(mn+1)\sqrt{q} + 2n\binom{k}{2} < q$. Since $k < q^\frac{5}{12}$ and $2n(mn+1) < q^\frac{1}{6}$,
we know that $k^2n < q^\frac{5}{6}q^\frac{1}{6}/2 = q/2$.
We deduce that
\begin{equation}\label{med_proof_eq_2}
n(mn+1)\sqrt{q} + 2n\binom{k}{2} < \frac{q^{1/6 +1/2}}{2} + \frac{q}{2} < q.
\end{equation}
The theorem is proved.
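As a numerical sanity check, one can verify that the hypotheses $2n(mn+1) < q^{1/6}$ and $3m_p+1 < k < q^{5/12}$ do force the key inequality. The parameter set below is our own illustrative choice, not taken from the paper:

```python
from math import comb, sqrt

# Illustrative parameters satisfying the hypotheses of the theorem.
q, n, m, m_p, k = 2**13, 1, 1, 1, 5
assert 2 * n * (m * n + 1) < q ** (1 / 6)      # 4 < 8192^(1/6) ~ 4.49
assert 3 * m_p + 1 < k < q ** (5 / 12)         # 4 < 5 < ~42.7

# The target inequality of the proof.
lhs = (q / n) ** (k - 1) * (q / n - comb(k, 2))
rhs = (q ** (m_p + (k - 1) / 2) * (m * n + 1) ** (k - 1)
       * ((m * n + 1) * sqrt(q) + comb(k, 2)))
assert lhs >= rhs
```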
\begin{cor}
Let $D = \{x^d : x \in \mathbb{F}_q\}$ or $D = \{D_n(x,a) : x \in \mathbb{F}_q\}$ for $a \in \mathbb{F}_q^\times$. Then $M_k(D,b,m) > 0$ if $2n(mn+1) < q^\frac{1}{6}$ and $3m_p+1 < k < q^\frac{5}{12}$.
\end{cor}
Let $\psi$ be a non-trivial additive character of $\mathbb{F}_q$.
We have shown that for all $f \in \mathbb{F}_q[x]$ of degree at most $m$ with $p \nmid \text{deg}(f)$,
$$\left|\sum_{x \in D} \psi(f(x))\right| \leq m \sqrt{q}$$
if $D = \{x^d : x \in \mathbb{F}_q\}$, and
$$\left|\sum_{x \in D} \psi(f(x))\right| \leq (mn+1) \sqrt{q}$$
if $D = \{D_n(x,a) : x \in \mathbb{F}_q\}$.
Since $m\sqrt{q} \leq (mn+1)\sqrt{q}$, the character sum condition in Theorem \ref{med_k_thm} is satisfied. The medium case is proved.
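The character sum bound for monomial images can be checked exhaustively over a small field. The following sketch (parameters $q=17$, $d=2$, $m=2$ are our illustrative choice, made so that $m\sqrt{q} < |D|$ and the bound is not vacuous) confirms $|\sum_{x\in D}\psi(f(x))| \le m\sqrt{q}$ for all admissible $f$:

```python
import cmath
from itertools import product

p, d, m = 17, 2, 2
D = sorted({pow(x, d, p) for x in range(p)})   # image of x -> x^d, |D| = 9
psi = lambda t: cmath.exp(2j * cmath.pi * (t % p) / p)

worst = 0.0
for c1, c2 in product(range(p), repeat=2):
    if (c1, c2) == (0, 0):
        continue
    # f(x) = c1*x + c2*x^2 has degree at most m, not divisible by p
    worst = max(worst, abs(sum(psi(c1 * x + c2 * x * x) for x in D)))

assert worst <= m * p ** 0.5                   # m*sqrt(q) ~ 8.25 > worst
```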
\subsection*{$k$-MSS($m$) for large $k$}
Following established procedures, we use the Li-Wan sieve \cite{LW10} to analyze large values of $k$. This method has been used several times \cite{ZW12, KW16, LW10, LW18, LW19,WN18} and is now standard. We will therefore only give an outline and indicate the differences.
We begin by discussing the relevant notation and concepts that we will apply in our context. In this section, we assume that $D$ is the image of a monomial or Dickson polynomial of degree $n$, so that the relevant character sum estimate holds.
We use the notation $S_k$ to denote the symmetric group on $k$ letters. For a permutation $\tau \in S_k$, its disjoint cycle decomposition is written as
\[
\tau = (a_1 a_2 \cdots a_{m_1})(a_{m_1+1} \cdots a_{m_2}) \cdots (a_{m_{k-1}+1} \cdots a_{m_k}).
\]
We shall refer to $\tau$ interchangeably with its disjoint cycle decomposition, which we fix beforehand.
Denote $\overline{X} = \{(x_1,\dots,x_k) \in D^k : x_i \ne x_j, \forall i \ne j\}$. For brevity, we will denote $k$-tuples from such products by $x = (x_1, \dots, x_k)$ when there is no risk of confusion.
Let $\psi$ be a fixed non-trivial additive character of $\mathbb{F}_q$.
Recall from earlier sections that we are interested in
\[
h_c(x_1,\dots,x_k) = \psi \left(
\sum_{j=1,p\nmid j}^m c_j \left(\sum_{i=1}^k x_i^{j}-b_j\right)\right),
\]
where $c$ is not the zero vector.
Now define
$$F(c) = \sum\limits_{x \in \overline{X}} h_c(x_1,\dots,x_k), \ F_\tau(c) = \sum\limits_{x \in X_\tau} h_c(x_1, \dots x_k),$$
where $X_\tau$ consists of tuples in $D^k$ whose coordinates are constant along each cycle of $\tau$, that is,
\begin{align*}
x_{a_1} = \cdots = x_{a_{m_1}}, \quad x_{a_{m_1+1}} = \cdots = x_{a_{m_2}}, \quad \ldots, \quad x_{a_{m_{k-1}+1}} = \cdots = x_{a_{m_k}}.
\end{align*}
Now, think of $\tau$ as having $e_1$ cycles of length $1$, $e_2$ cycles of length $2$, and so on, up to $e_k$ cycles of length $k$. Note that $\sum\limits_{i=1}^k ie_i = k$. This allows us to express $F_\tau(c)$ as:
\begin{align*}
F_\tau(c) &= \sum_{x \in X_\tau} \psi \left( \sum_{j=1,p\nmid j}^m c_j \left( \sum_{i=1}^k i (x_{i1}^{j} + \cdots + x_{ie_i}^{j}) - b_j \right)\right) \\
&= \sum_{\substack{x_{il} \in D \\ 1 \le i \le k\\ 1 \le l \le e_i}} \psi \left( \sum_{j=1,p\nmid j}^m c_j \left( \sum_{i=1}^k \sum_{l=1}^{e_i} i x_{il}^{j} \right) \right)\psi ( \sum_{j=1,p\nmid j}^m -c_j b_j) \\
&= \sum_{\substack{x_{il} \in D \\ 1 \le i \le k\\ 1 \le l \le e_i}} \prod_{i=1}^k \psi^i \left( \sum_{p\nmid j, l} c_j x_{il}^{j} \right) \psi ( \sum_{j=1,p\nmid j}^m -c_j b_j).
\end{align*}
Let's consider the inner sum.
\[
\sum_{x_{il}\in D}\psi^i(\sum_{p\nmid j} c_j x_{il}^{j}) = \sum_{x \in D} \psi ^i (f(x))
\]
where $f(x) = \sum_{j=1,p\nmid j}^m c_j x^{j}$. Hence,
if the $c_j$'s are not all zero, we have
\[
\left |\sum_{x\in D} \psi(\sum_{j=1,p\nmid j}^m c_j x^{j}) \right | \le (mn+1) \sqrt{q}.
\]
Now the order of $\psi$ is $p$ so the order of $\psi^i$ is $\frac{p}{(i,p)}$, which is $p$ unless $p \mid i$, in which case it is $1$. Therefore,
\begin{align*}
|F_\tau(c)| &= \left | \prod_{i=1}^k \left( \sum_{x \in D} \psi^i \left( \sum_{j=1,p\nmid j}^m c_j x^{j} \right) \right) ^{e_i} \psi(\sum_{j=1,p\nmid j}^m -c_jb_j) \right| \\
& \le \prod_{\substack{i \\
1 \le i \le k\\
p \nmid i}} ((mn+1) \sqrt{q})^{e_i} \cdot \prod_{\substack{i \\
1 \le i \le k\\
p \mid i}} |D|^{e_i}.
\end{align*}
The Li-Wan sieve says that
\[
F(c) = \sum_{\sum ie_i = k} (-1)^{k - \sum e_i} N(e_1, \dots, e_k) F_{e_1, \dots, e_k}(c),
\]
where $N(e_1,\dots, e_k)$
denotes the number of permutations in $S_k$
with cycle type $(e_1,\dots, e_k)$, and $F_{e_1, \dots, e_k}(c)$ denotes $F_{\tau}(c)$ for any $\tau$ of cycle type
$(e_1,\dots, e_k)$.
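The count $N(e_1,\dots,e_k)$ is given by the classical formula $k!/\prod_i i^{e_i}e_i!$. A short Python check (our illustration) confirms that summing it over all cycle types recovers $|S_k|=k!$:

```python
from math import factorial, prod

def cycle_type_count(e):
    """Number of permutations of cycle type (e_1, ..., e_k),
    via the classical formula k! / prod_i (i**e_i * e_i!)."""
    k = sum(i * ei for i, ei in enumerate(e, start=1))
    denom = prod(i**ei * factorial(ei) for i, ei in enumerate(e, start=1))
    return factorial(k) // denom

def cycle_types(k):
    """All cycle types (e_1, ..., e_k) with sum_i i*e_i = k,
    i.e. the partitions of k recast as multiplicity vectors."""
    def partitions(remaining, max_part):
        if remaining == 0:
            yield []
            return
        for i in range(min(remaining, max_part), 0, -1):
            for rest in partitions(remaining - i, i):
                yield [i] + rest
    for parts in partitions(k, k):
        e = [0] * k
        for i in parts:
            e[i - 1] += 1
        yield tuple(e)

k = 5
total = sum(cycle_type_count(e) for e in cycle_types(k))
assert total == factorial(k)   # the cycle types partition S_k
```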
Using the above estimates and Lemma 2.1 in \cite{WN18}, one obtains
\begin{align*}
|F(c)| &\le \sum_{\sum ie_i = k} N(e_1, \dots, e_k) \prod_{(i,p) =1} ((mn+1) \sqrt{q})^{e_i} \cdot \prod_{p \mid i} |D|^{e_i} \\
& \le \left ((mn+1) \sqrt{q} + k + \frac{|(mn+1)\sqrt{q} - |D||}{p} -1 \right )_k
\end{align*}
where we define $(x)_k := x(x-1) \cdots (x-k+1)$.
This concludes our discussion of the Li-Wan sieve and its adaptation to our context. We now return to the framework of the previous sections, with notation as before, and see how the sieve helps.
Recall
\begin{align*}
M_k(D,b,m) &= \sum_{x \in \overline{X}} \frac{1}{q^{m_p}} \sum_{\boldsymbol{c} \in \mathbb{F}_q^{m_p}} \psi \left( \sum_{j=1,p\nmid j}^m c_j \left( \sum_{i=1}^k x_i^{j} - b_j \right) \right) \\
&= \frac{1}{q^{m_p}} (|D|)_k + \frac{1}{q^{m_p}}\sum_{\boldsymbol{0} \neq \boldsymbol{c} \in \mathbb{F}_q^{m_p}} F(c).
\end{align*}
Therefore,
\begin{align}
\left | M_k(D,b,m) - \frac{1}{q^{m_p}} (|D|)_k \right | &< \left((mn+1) \sqrt{q} + k + \frac{\left |(mn+1) \sqrt{q} - |D| \right|}{p} - 1\right)_k \\
& \le \left(0.013 |D| + k + \frac{|D|}{p} \right)_k.
\end{align}
This estimate is the analogue of equation (2.3) in \cite{WN18}, under the assumption that
\[
(mn+1)\sqrt{q} \le 0.013 |D|.
\]
If, in addition, $6m_p \ln q \leq k \leq \frac{|D|}{2}$, then the same argument as in
the proof of Theorem 2.3 in \cite{WN18}
shows that $M_k(D, b, m)>0$.
We obtain
\begin{theorem}
Let $D$ be the image of a monomial or Dickson polynomial of degree $n$.
Assume that $p>2$, $(mn+1)\sqrt{q} \leq 0.013|D|$, and $6m_p \ln q \leq k \leq \frac{|D|}{2}$. Then, $M_k(D,b,m)>0$.
\end{theorem}
Note that if $p=2$, the same proof works, but only for $k$ in the shorter range
$6m_p \ln q \leq k \leq \frac{(1-\epsilon)|D|}{2}$ for any fixed $\epsilon \in (0,1)$; that is, $k$ cannot reach
all the way to $|D|/2$ if $p=2$.
Since $|D|\geq q/n$, the condition
$(mn+1)\sqrt{q} \leq 0.013|D|$ is satisfied if $n(mn+1)\sqrt{q} \leq 0.013q$, which is certainly true since
$m$ is fixed and $n<q^{\epsilon}$.
\begin{section}{Case $p=2$}
Finally, we examine the $k$-MSS($m$) problem over finite fields of characteristic 2. The result of Kayal used for $k$-MSS($m$) for constant $k$ and our proof for medium-sized $k$ still hold in fields of characteristic 2. Thus Theorem \ref{ThmSmallK} and Theorem \ref{med_k_thm} hold for $q=p^s$ for all $p$.
To analyze the case $p=2$ for large $k$, we rely on recent work by Choe and
Choe \cite{CC19}, which examines the subset sum problem over finite fields of characteristic 2. We adjust the definitions of that work to fit the higher moment subset sum problem over sets $D$ that are images of monomials or Dickson polynomials. Note that $p=2$ in this section.
We will prove an analogue of Theorem 2.3 in \cite{CC19}. Let $D \subseteq \mathbb{F}_q$, $k \leq |D|/2$, and $f(x) = \sum\limits_{j=1, p\nmid j}^m c_j x^{j}$, for $c_j \in \mathbb{F}_q$. For a nontrivial additive character $\psi$ of $\mathbb{F}_q$, define
\begin{align*}
S_D(k,\psi,f)=\sum_{\substack{x_{i} \in D \\
x_i \ \textnormal{distinct}}}\psi(f(x_1)+f(x_2)+ \ldots +f(x_k)).
\end{align*}
Although $S_D(k,\psi,f)$ sums over distinct $x_i$, there is no assumption that the $f(x_i)$ are distinct. Over finite fields of characteristic 2, however, if $x_i=x_j$, then $f(x_i)=f(x_j)$, and the sum $f(x_i)+f(x_j)=2f(x_i)$ vanishes. It follows that
\begin{align*}
S_D(2,\psi,f)&=\sum_{\substack{x_{1},x_{2} \in D \\
x_1 \neq x_2}}\psi(f(x_1)+f(x_2)) \\
&= \left(\sum_{x \in D}\psi(f(x))\right)^2 -|D|.
\end{align*}
By induction, one derives the following recursive formula for $S_D(k,\psi,f)$ for all $k>1$, which is the analogue of Lemma 2.1 in \cite{CC19}.
\renewcommand\labelitemi{$\cdot$}
\begin{lemma}
Let $D$ be a subset of $\mathbb{F}_q$ with more than 3 elements and $\psi$ be a nontrivial additive character of $\mathbb{F}_q$. Then
\begin{itemize}
\item $S_D(1,\psi,f)=\sum\limits_{x\in D}\psi(f(x))$,
\item $S_D(2,\psi,f)= S_D(1,\psi,f)^2-|D|$, and
\item $S_D(k,\psi,f)= S_D(1,\psi,f)S_D(k-1,\psi,f)-(|D|-k+2)(k-1)S_D(k-2,\psi,f)$, where $3 \leq k \leq |D|$.
\end{itemize}
\end{lemma}
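Since an additive character in characteristic $2$ takes values $\pm1$, the map $x\mapsto \psi(f(x))$ on $D$ can be modeled by an arbitrary $\pm1$-valued weight, and the recursion can then be tested directly. The following sketch is our own illustration (the set $D$ and the weights are arbitrary choices); it checks the lemma against a brute-force sum over distinct ordered tuples:

```python
from itertools import permutations
from math import prod

# Model psi(f(x)) on D by an arbitrary +-1 weight w(x); the recursion
# only uses w(x)^2 = 1, which is exactly the characteristic-2 feature.
D = list(range(7))
w = dict(zip(D, [1, -1, -1, 1, -1, 1, 1]))

def S_brute(k):
    """S_D(k) = sum over distinct (x_1,...,x_k) of prod_i psi(f(x_i))."""
    return sum(prod(w[x] for x in xs) for xs in permutations(D, k))

# The recursion from the lemma.
S = {1: sum(w.values())}
S[2] = S[1] ** 2 - len(D)
for k in range(3, len(D) + 1):
    S[k] = S[1] * S[k - 1] - (len(D) - k + 2) * (k - 1) * S[k - 2]

assert all(S[k] == S_brute(k) for k in range(1, len(D) + 1))
```

The recursion reduces the evaluation of $S_D(k,\psi,f)$ to $k$ arithmetic steps once $S_D(1,\psi,f)$ is known, which is what makes the large-$k$ analysis tractable.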
This lemma can be applied to prove an analogue of Lemma 2.2 in \cite{CC19}. The statement is as follows.
\begin{lemma}
Let $D$ be a subset of $\mathbb{F}_q$ with more than 4 elements and $\psi$ be a nontrivial additive character of $\mathbb{F}_q$. If
\begin{align*}
\left|\sum_{x \in D}\psi(f(x))\right| \leq \frac{1}{16}|D|,
\end{align*}
then
\begin{align*}
|S_D(k,\psi,f)| < \left(\frac{9}{16}|D|\right)^k, \textnormal{ for all } k \leq \frac{|D|}{2}.
\end{align*}
\end{lemma}
From Proposition \ref{MonomialWeil} and Lemma \ref{dicksonWeilEven}, it follows that when $D$ is the image of a polynomial of degree $n$ satisfying the value set character sum estimate
\begin{align*}
\left|\sum_{x \in D}\psi(f(x))\right| < (mn+1)\sqrt{q},
\end{align*}
the condition $n(mn+1) < \frac{1}{16}\sqrt{q}$ implies that
\begin{align*}
\left|\sum_{x \in D}\psi(f(x))\right| < (mn+1)\sqrt{q} < \frac{1}{16}\frac{q}{n} \leq \frac{1}{16}|D|.
\end{align*}
As in the previous section, a standard character sum argument gives the inequality
\begin{align}
\left|M_k(D,b,m) - \frac{1}{q^{m_p}} (|D|)_k \right| < \max_{\boldsymbol{0} \neq \boldsymbol{c}\in \mathbb{F}_q^{m_p}}
|S_D(k, \psi, f_c)|,
\end{align}
where $f_c = \sum_{j=1, p\nmid j}^m c_j x^{j}$. It follows that
\begin{align}
\left|M_k(D,b,m) - \frac{1}{q^{m_p}} (|D|)_k \right| < \left(\frac{9}{16}|D|\right)^k.
\end{align}
The same argument as in the proof of Theorem 2.3 in \cite{CC19} shows that
if
$$3.05sm_p=3.05 m_p\log_2q < k \leq |D|/2,$$
then
\begin{align}
\frac{1}{q^{m_p}} (|D|)_k > \frac{1}{q^{m_p}}\left(\frac{9}{16}|D|\right)^k 2^{sm_p}
= \left(\frac{9}{16}|D|\right)^k.
\end{align}
Thus, we obtain
\begin{theorem} Let $p=2$ and $n(mn+1) < \frac{1}{16}\sqrt{q}$.
Then $M_k(D,b,m)>0$ for all
$3.05 m_p\log_2q < k \leq |D|/2$.
\end{theorem}
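For concreteness, one can verify numerically that the falling factorial indeed dominates $q^{m_p}\left(\frac{9}{16}|D|\right)^k$ in the stated range of $k$; the parameter set below is our own illustrative choice:

```python
from math import prod, log2

q, m_p = 2**11, 1            # illustrative: p = 2, s = 11, m = n = 1
D_size = q                   # D = image of x -> x, so |D| = q
k = 40                       # 3.05 * m_p * log2(q) = 33.55 < k <= |D|/2
assert 3.05 * m_p * log2(q) < k <= D_size // 2

falling = prod(D_size - i for i in range(k))        # (|D|)_k, exact integer
# This strict domination is what forces M_k(D, b, m) > 0 above.
assert falling > q**m_p * (9 * D_size // 16) ** k
```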
We conclude that when $D$ is the image of a degree $n$
polynomial satisfying the value set character sum estimate in Lemma \ref{dicksonWeilEven}, the $m$-th moment
subset sum problem over $D$ can be solved in
deterministic polynomial time in the algebraic
input size $n \log q$, for every constant $m$. In particular, this is true when $D$ is the image of a monomial or Dickson polynomial of degree $n$.
\end{section}
\begin{section}{Conclusion}
We show that there is a deterministic polynomial time algorithm for the $m$-th moment $k$-subset sum problem over finite fields for each fixed $m$ when the evaluation set is the image set
of a monomial or Dickson polynomial of any degree $n$.
An open problem is to ask if Theorem \ref{THM1} can be proved
for a larger range of $m$, say $m=O(\log\log q)$. The difficulty
lies in the small $k$ range, such as $k\leq 3m+1$.
\end{section}
\begin{center}
{\bf Acknowledgements}
\end{center}
This work was supported by the Early Career Research Workshop in Coding Theory, Cryptography, and Number Theory held at Clemson University in 2018, under NSF grant DMS-1547399.
\end{document} |
\begin{document}
\begin{center}
{\bf Asymptotic Analysis for a Nonlinear Reaction-Diffusion System \\ Modeling an Infectious Disease}
Hong-Ming Yin\footnote{Corresponding Author. Email: [email protected]}\\
Department of Mathematics and Statistics\\
Washington State University\\
Pullman, WA 99164, USA.\\
and \\
Jun Zou\\
Department of Mathematics\\
The Chinese University of Hong Kong\\
Shatin, N.T., Hong Kong
\end{center}
\begin{abstract}
In this paper we study a nonlinear reaction-diffusion system which models an infectious disease caused by bacteria such as those for cholera. One of the significant features in this model is that a certain portion of the recovered human hosts may lose a lifetime immunity and could be infected again. Another important feature in the model is that the mobility for each species is allowed to be dependent upon both the location and time.
With the whole population assumed to be susceptible to the bacteria, the model is a strongly coupled nonlinear reaction-diffusion system. We prove that the nonlinear system has a unique solution globally in any space dimension under some natural conditions on the model parameters and the given data. Moreover, the long-time behavior and stability analysis for the solutions are carried out rigorously. In particular, we characterize the precise conditions on the variable parameters under which each steady-state solution is stable or unstable. These new results provide answers to several open questions raised in the literature.
\end{abstract}
\ \\
{\bf AMS Mathematics Subject Classification:} 35K57 (Primary), 92C60 (Secondary).
\ \\
{\bf Key Words and Phrases:} Infectious disease model; nonlinear reaction-diffusion system; global existence and uniqueness; stability analysis.
\section{Introduction}
In biological, ecological, health and medical sciences, researchers have a great deal of interest in establishing suitable mathematical models for various infectious diseases. The current global pandemic has attracted even more scientists to this field. There are many different mathematical models for an infectious disease in the literature. Roughly speaking, these models can be divided into two categories: data-based discrete models and continuous models based on population growth (see \cite{GF2008,SR2013,WMI2018}). Our approach is based on a continuous model, which provides a much more convenient tool for analyzing the complicated dynamics of the interaction among susceptible, infected and recovered patients. A continuous model is typically governed by a system of ordinary differential equations (ODE model) or a system of partial differential equations (PDE model). For an ODE model, a monumental work was done in 1927 by Kermack and McKendrick \cite{KM1927}. Since then, significant progress has been made in modeling and analyzing various infectious diseases such as the SIR and SEIR models and their various extensions. An ODE model often provides a clear and precise description of physical quantities and their relations. By using an ODE model, one can study the detailed dynamical interaction between viruses and various species, as well as other qualitative properties such as reproduction numbers. This type of ODE model is widely adopted by researchers in all fields, particularly those in the biological and health sciences. On the other hand, when one takes the movement of species across different geographical regions into consideration, it is necessary to include a diffusion process in the mathematical model to reflect this movement. This leads to modeling an infectious disease by a system of partial differential equations (PDEs), often called reaction-diffusion equations.
A well-known work \cite{HLBV1994} discussed a number of PDE models arising from biological, ecological and animal sciences and explained why the PDE approach is more appropriate in those areas. There are a large number of research studies, conference proceedings and monographs on both PDE and ODE models in the literature. We list only some of them here as examples, e.g., \cite{AM1979,MA1979,DG1999,DW2002,KP1994} for the SIR ODE models and \cite{ABLN2007,FMWW2019,HLBV1994,JDH1995,LN1996,SLX2019} for the SIR PDE models. Many more references can be found in a SIAM Review paper by Hethcote \cite{H2000} and the monographs by Busenberg and Cooke \cite{BC2012}, Cantrell and Cosner \cite{CC1981}, Daley and Gani \cite{DG1999}, Lou and Ni \cite{LN1996}, etc. It is worth noting from the mathematical point of view that the PDE models present significantly more challenges for scientists studying the dynamics and qualitative properties of the solutions. Many important mathematical questions such as global existence and uniqueness are still open for some popular PDE models.
This is one of the motivations for the current study.
In this paper we consider a mathematical model in a heterogeneous domain for an infectious disease caused by bacteria such as Cholera without lifetime immunity. Without considering the diffusion-process of the population, the ODE models have been studied extensively (see, e.g., \cite{AB2011,BC2012,DW2002,ESTD2002}). The model considered in this work is a direct extension of the ODE model.
To describe the mathematical model, we introduce the following variables:
\begin{eqnarray*}
S(x,t) & = & \mbox{Susceptible population concentration at location $x$ and time $t$} \\
I(x,t) & = & \mbox{Infected population concentration at location $x$ and time $t$} \\
R(x,t) & = & \mbox{Recovered population concentration at location $x$ and time $t$} \\
B(x,t) & = & \mbox{Concentration of bacteria at location $x$ and time $t$}
\end{eqnarray*}
We assume that the whole population is susceptible to the bacteria. Moreover, the rate of growth for the population, denoted by $b(x,t,S)$, depends on location, time and the population itself.
A classical example for $b$ is that the population growth follows a logistic growth model with a maximum capacity $k_1>0$:
\[ b(x,t,s)=b_0s(1-\frac{s}{k_1}),\]
where $b_0> 0$ represents the growth rate of the population.
The population reduction caused by infected patients is denoted by a nonlinear function $g_1(x,t,S,I,B)$ which is nonnegative.
A typical form of the nonlinear function $g_1$ is given by (see \cite{YW2017,Yin2020}):
\[ g_1(x,t,S,I,B)=\beta_1SI+\beta_2Sh_1(B), ~~h_1(B)=\frac{B}{B+k_2},\]
where $\beta_1, \beta_2$ are positive transmission parameters and $h_1(B)$ represents the maximum saturation rate of bacteria on
human hosts and $k_2>0$.
The bacteria growth is assumed to follow a similar logistic law, denoted by $g_2(x,t,s)$, with a maximum capacity $k_3>0$:
\[ g_2(x,t,s)=g_0s(1-\frac{s}{k_3}),\]
where $g_0> 0$ is the growth rate of the bacteria.
We also assume that the diffusion coefficients depend on location and time.
By extending the ODE model (see \cite{BC2012,DW2002,LW2011} etc.,), we obtain the following reaction-diffusion system:
\setcounter{section}{1}
\setcounter{local}{1}
\begin{eqnarray}
S_t-\nabla\cdot [a_1(x,t)\nabla S] & = & b(x,t,S)-g_1(x,t,S,I,B)-d_1S+\sigma R,\\
I_t-\nabla \cdot [a_2(x,t)\nabla I] & = & g_1(x,t,S,I,B)-(d_2+\gamma)I,\stepcounter{local} \\
R_t-\nabla \cdot [a_3(x,t)\nabla R] & = & \gamma I-(d_3+\sigma)R,\stepcounter{local} \\
B_t-\nabla \cdot [a_4(x,t)\nabla B] & = & \xi I +g_2(x,t,B)-d_4 B.\stepcounter{local}
\end{eqnarray}
The biological meaning of various parameters and functions in the model are given below (see \cite{ESTD2002,WW2015,YW2016}):
\begin{eqnarray*}
a_i & = & \mbox{the diffusion coefficients, $i=1,2,3,4$},\\
\gamma & = & \mbox{the recovery rate of infectious individuals},\\
\sigma & = & \mbox{the rate at which recovered individuals lose immunity},\\
d_i & = & \mbox{the natural death rate of species or bacteria},\\
\xi & = & \mbox{the shedding rate of bacteria by infectious human hosts}.
\end{eqnarray*}
To complete the mathematical model, we assume that the system (1.1)-(1.4) holds in $Q_T=\Omega\times (0,T]$ for any $T>0$, where $\Omega$ is a bounded domain in $R^n$ with $C^2$-boundary $\partial \Omega$. The initial concentrations for all species are known and we assume that no species can cross the boundary $\partial \Omega$. This leads to the following initial and boundary conditions:
\begin{eqnarray}
& & (\nabla_{\nu}S,\nabla_{\nu}I,\nabla_{\nu}R, \nabla_{\nu}B) = 0, \hspace{1cm} (x,t)\in \partial \Omega\times (0,T],\stepcounter{local}\\
& & (S(x,0),I(x,0),R(x,0),B(x,0))=(S_0(x),I_0(x),R_0(x),B_0(x)), x\in \Omega,\stepcounter{local}
\end{eqnarray}
where $\nu$ represents the outward unit normal on $\partial \Omega$.
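To build intuition for the dynamics, the spatially homogeneous (diffusion-free) version of (1.1)-(1.4) with the logistic choices for $b$, $g_1$ and $g_2$ can be integrated numerically. The following forward-Euler sketch is only an illustration; all parameter values are our own and are chosen in the regime $d_1>b_0$, $d_4>g_0$, where (as shown later in the paper) the disease-free steady state is stable:

```python
# Spatially homogeneous version of (1.1)-(1.4); parameter values illustrative only.
b0, k1, k2, k3, g0 = 0.5, 100.0, 10.0, 50.0, 0.3
beta1, beta2 = 0.002, 0.01
d1, d2, d3, d4 = 0.6, 0.1, 0.1, 0.4      # note d1 > b0 and d4 > g0 (stable regime)
gamma, sigma, xi = 0.2, 0.05, 0.1

S, I, R, B = 50.0, 1.0, 0.0, 1.0
dt = 0.01
for _ in range(100_000):                  # forward Euler up to t = 1000
    g1 = beta1 * S * I + beta2 * S * B / (B + k2)
    dS = b0 * S * (1 - S / k1) - g1 - d1 * S + sigma * R
    dI = g1 - (d2 + gamma) * I
    dR = gamma * I - (d3 + sigma) * R
    dB = xi * I + g0 * B * (1 - B / k3) - d4 * B
    S, I, R, B = S + dt * dS, I + dt * dI, R + dt * dR, B + dt * dB

# With d1 > b0 and d4 > g0, all components decay to the disease-free state 0.
assert all(0.0 <= v < 1e-6 for v in (S, I, R, B))
```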
We would like to give a short review of the known results for the above model. For the ODE system corresponding to (1.1)-(1.4), there are many studies of various interesting mathematical problems such as global existence and the dynamical interaction between the bacteria and species (see, e.g., \cite{AB2011,DW2002,ESTD2002,TH1992}). The stability analysis has also been carried out by several researchers (see \cite{LW2011,SD2012,TW2011} etc.). When the movement of species is considered in the model, the corresponding PDE system is much more complicated to study. This is due to the fact that the maximum principle cannot be applied to a system of reaction-diffusion equations.
It is a challenge to establish the global well-posedness for the PDE system (1.1)-(1.6). Nevertheless, when the space dimension is equal to $1$, under certain conditions on $g_1$ and $g_2$, the global existence is established (see \cite{WW2015,YW2016,YW2017,Y2018}). The reason is that the total population is bounded in $L^1(\Omega)$, which implies a global boundedness for $S(x,t)$ by using Sobolev embedding for the space dimension $n=1$. However, this method does not work when the space dimension $n$ is greater than $1$. In a SIAM review article
(\cite{PS2000}), the authors considered the following system
(with $a$ and $b$ being two positive constants):
\begin{eqnarray*}
& & u_t-a\Delta u =f(u,v), \hspace{1cm} x\in \Omega, ~t>0,\\
& & v_t-b\Delta v =g(u,v), \hspace{1cm} x\in \Omega, ~t>0,
\end{eqnarray*}
subject to appropriate initial and boundary conditions.
Suppose $f(0,v), g(u,0)\geq 0$ for all $u,v \geq 0$. Then under the condition that
\[ f(u,v)+g(u,v)\leq 0,\]
the $L^1$-norms of the nonnegative solutions $u$ and $v$ are bounded, i.e.,
\[\sup_{t>0}\int_{\Omega} (u+v)dx\leq C.\]
However, the solution $(u,v)$ may blow up in finite time when the space dimension is greater than 1 if no additional
conditions on $f(u,v)$ and $g(u,v)$ are made. Therefore, as indicated in \cite{PS2000}, one must impose some additional conditions in order to obtain a global bound for a reaction-diffusion system. There are some interesting results for a general reaction-diffusion system when leading coefficients are constants. In 2000, under certain additional conditions, Pierre-Schmitt (\cite{PS2000}) introduced a dual method to establish such a bound for the reaction-diffusion system. In 2007, Desvillettes-Fellner-Pierre-Vovelle introduced in \cite{DFPV2007} an entropy condition originated by Kanel in 1990
(\cite{KA1990}) and extended the dual method to a more general reaction-diffusion system with constant diffusion coefficients and established the global bound with a quadratic-growth reaction as long as a total mass is controlled ($L^1-$boundedness). In 2009, Caputo-Vasseur \cite{CV2009} extended the entropy method to establish a global existence for a reaction-diffusion system where the nonlinear reaction terms grow at most sub-quadratically. One can see an interesting review by M. Pierre in 2010 \cite{PI2010}.
Caceres-Canizo extended in 2017 \cite{CC2017} to the case where the reaction terms grow at most quadratically under certain conditions on the steady-state solutions. In 2018, Souplet \cite{SOU2018} established the global well-posedness for a reaction-diffusion system with quadratic growth in the reaction. Very recently, some considerable progress was made for a reaction-diffusion system by Fellner-Morgan-Tang in 2019 \cite{FMT2019} and Morgan-Tang in 2020 \cite{MT2020}. They are able to derive a global bound for the solution of a reaction-diffusion as long as the diffusion coefficients are smooth and nonlinear reaction terms in the system satisfy a condition called an intermediate growth condition, which replaces the entropy condition. Their approach is based on a combination of the dual method and the entropy method. In 2021, Fitzgibbon-Morgan-Tang-Yin \cite{FMTY2021} studied a very general reaction-diffusion system with a controlled mass and nonsmooth diffusion coefficients.
They established the global well-posedness for the system with at most a polynomial growth for reactions. Moreover, several interesting examples as applications arising from biological, health sciences and chemical reactive-flow were studied in the paper. Those results made a substantial progress for a general reaction-diffusion system with a controlled mass. However, due to the nonlinearity in Eq. (1.1), these results do not cover the nonlinear system (1.1)-(1.4), particularly, we do not have growth conditions here on $g_1$ with respect to $(s_1, s_2, s_3)$ for the global existence (see Theorem 2.1 in section 2).
The purpose of this paper is twofold. The first is to establish the existence of a global solution to the generalized system (1.1)-(1.6) in any space dimension, without any restriction on the parameters or growth conditions with respect to $s_i$ for $g_1$. This extends a result obtained by the first author in his recent work \cite{Yin2020}. Our method in this paper is based on some key ideas developed in \cite{Yin2020}. The special structure of the system (1.1)-(1.4) will also play a key role. We shall also use various techniques from the theories of elliptic and parabolic equations (see \cite{EVANS,LI1996,LSU}). To derive an a priori bound, we use a crucial result for a linear parabolic equation in the Campanato-John-Nirenberg-Morrey space from \cite{Yin1997}, which extends the De Giorgi-Nash estimate under weaker conditions on the nonhomogeneous terms. The other purpose of the current work is to present the stability analysis of all steady-state solutions, which was not addressed in \cite{Yin2020}. In particular, for the following classical choices of the growth model \cite{CC1981}:
\begin{eqnarray}
& & \hskip-0.8truecm b(x,t,S)=b_0S\left(1-\frac{S}{k_1}\right), ~g_1( x,t,S,I,B)=\beta_1SI+\beta_2Sh_1(B), ~h_1(B)=\frac{B}{B+k_2}\stepcounter{local}\\
& & \hskip-0.8truecm g_2(B)=g_0B\left(1-\frac{B}{k_3}\right),\stepcounter{local}
\end{eqnarray}
we are able to precisely describe what conditions are needed for a steady-state solution to be stable or unstable. Roughly speaking, we shall demonstrate that under the conditions:
\[ d_1>b_0, ~~d_2\geq 0, ~~d_3\geq 0, ~~d_4>g_0,\]
the steady-state solution is stable. On the other hand, if either $d_1<b_0$ or $d_4<g_0$, then we can choose a set of suitable values for parameters $\sigma, \gamma, \beta_1$ and $\beta_2$ such that the steady-state solution is unstable.
This implies that our stability conditions are optimal. This stability analysis provides some important guidance to practitioners and scientists in biological, ecological and health sciences.
The paper is organized as follows. In Section 2 we first recall some function spaces which are frequently used in the subsequent analysis, and then state our main results. In Section 3, we prove the first part of the main results on global solvability of the system (1.1)-(1.6) (Theorem 2.1 and Corollary 2.1). In Section 4 we focus on a general stability analysis and obtain the sufficient conditions on parameters which ensure the stability of a steady-state solution.
In Section 5, for a set of concrete functions $b(x,t,s), h_1(s)$ and $g_2(x,t,s)$, we give precise conditions on the model parameters under which a steady-state solution is stable or unstable. Finally, some concluding remarks are given in Section 6.
Throughout the paper, we shall use $C$, with or without subscript, for a generic constant depending only on the given data in the model, including the upper bound of the terminal time $T$, and it may take a different value at each occurrence.
\section{ Preliminaries and Statement of Main Results}
For the reader's convenience, we recall some standard function spaces which will be used frequently in the subsequent analysis.
For $\alpha\in (0,1)$, we denote by $C^{\alpha}(\bar{\Omega})$ (resp. $C^{\alpha, \frac{\alpha}{2}}(\bar{Q}_{T})$) the H\"older space of functions that are H\"older continuous with exponent $\alpha$ in $x$ on $\bar{\Omega}$ (resp. with exponent $(\alpha,\frac{\alpha}{2})$ in $(x,t)$ on $\bar{Q}_{T}$). For $T=\infty$, we write $Q=\Omega\times (0,\infty)$ in place of $Q_T=\Omega\times (0, T)$.
For $p\geq 1$ and a Banach space $V$ with norm $||\cdot||_{V}$, we define
\[ L^p(0,T; V)=\{ F(t): t\in [0,T]\rightarrow V; ~ ||F||_{L^p(0,T;V)}<\infty\},\]
equipped with the norm
\[ ||F||_{L^{p}(0,T;V)}=\left(\int_{0}^{T} ||F||_{V}^p dt\right)^{\frac{1}{p}}.\]
When $V=L^{p}(\Omega)$, we simply write
$L^p(Q_{T})=L^{p}(0,T;L^{p}(\Omega))$,
with its norm as $||\cdot||_p$.
Sobolev spaces $W^{k,p}(\Omega)$ and $W_{p}^{k,l}(Q_{T})$ are defined as in the classical references (see, e.g., \cite{EVANS}). Let $V_2(Q_T)=\{ u\in C([0,T];W_{2}^{1,0}(\Omega)): ||u||_{V_{2}}<\infty\}$ (see \cite{LSU})
equipped with the norm
\[ ||u||_{V_{2}}=\max_{0\leq t\leq T}||u||_{L^{2}(\Omega)}+\sum_{i=1}^{n}||u_{x_{i}}||_{L^2(Q_{T})}.\]
We will also use the Campanato-John-Nirenberg-Morrey space $L^{2,\mu}(Q_T)$, defined as the subspace of functions $u\in L^2(Q_{T})$ for which the norm
\[||u||_{L^{2, \mu}(Q_{T})}=||u||_{L^{2}(Q_{T})}+[u]_{2, \mu, Q_{T}}\]
is finite, where
\[ [u]_{2, \mu, Q_{T}}=\sup_{\rho>0, z_0\in Q_{T}}\left(\rho^{-\mu}\int_{Q_{\rho}(z_0)}|u-u_Q|^2dxdt\right)^{\frac{1}{2}},\]
with $z_0=(x_0,t_0), Q_{\rho}(z_0)=B_{\rho}(x_0)\times (t_0-\rho^2, t_0]$, and $u_Q$ representing the average of $u$ over $Q_{\rho}(z_0)$ for any $Q_{\rho}(z_0)\subset Q_{T}$;
see Troianiello $\cite{T1987} $ for its detailed definition and properties.
An important property of this space is that
$L^{2, \mu+2}(Q_{T})$ is equivalent to $\ca$ with $\alpha=\frac{\mu-n}{2}$ whenever $n<\mu\leq n+2$ (Lemma 1.19 in \cite{T1987}). We shall abbreviate the norm of $L^{2, \mu}(Q_{T})$ as
$||u||_{2, \mu}$.
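For instance, taking $n=2$ and $\mu=3$ in this equivalence gives
\[ L^{2,5}(Q_{T})\cong C^{\alpha, \frac{\alpha}{2}}(\bar{Q}_{T}), \qquad \alpha=\frac{\mu-n}{2}=\frac{1}{2};\]
it is through this equivalence that the $L^{2,\mu}$-estimates derived in Section 3 are upgraded to H\"older bounds.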
We first state the basic assumptions for the diffusion coefficients and the known data involved in our model (1.1)-(1.4).
All other model parameters are assumed to be positive constants throughout this paper.
One can easily extend the well-posedness results to a more general system in which those parameters are functions of $(x,t)$, as long as the basic structure of the system is preserved.
\ \\
{\bf H(2.1).} Assume that $a_i\in L^{\infty} (Q)$ and that there exist two positive constants $a_0$ and $A_0$ such that
\[ 0<a_0\leq a_i(x,t)\leq A_0, \hspace{1cm} (x,t)\in Q, ~i=1,2,3,4.\]
{\bf H(2.2).} Assume that all initial data $U_0(x):=(S_0(x), I_0(x), R_0(x), B_0(x))$ are nonnegative on $\Omega$. Moreover, $\nabla U_0(x)\in L^{2,\mu_0}(\Omega)^4$ with $\mu_0\in (n-2,n)$.
\ \\
{\bf H(2.3)}. (a) Let $b(x,t,s), d_i(x,t,s)$ and $g_2(x,t,s)$ be measurable in $Q\times R^+$ and locally Lipschitz continuous with respect to $s$, with
$0\leq b(x,t,0), ~d_i(x,t,0)\in L^{\infty}(Q)$. Moreover, there exists $M>0$ such that
\[ d_i(x,t,s)\geq d_0\geq 0, ~~b_s(x,t,s)\leq b_0, \hspace{1cm} (x,t,s)\in Q\times[M,\infty).\]
(b) Let $g_1(x,t,s_1,s_2,s_3)$ be measurable in $Q\times (R^+)^3$ and nonnegative, differentiable with respect to $s_1,s_2,s_3$, and
\begin{eqnarray*}
& & g_1(x,t,0,s_2,s_3)\geq 0, \hspace{1cm} s_2, s_3\geq 0,\\
& & g_2(x,t,0)\geq 0, ~~g_{2s}(x,t,s)\leq g_0, \hspace{1cm} (x,t,s)\in Q\times R^+.
\end{eqnarray*}
In the model (1.7)-(1.8), $k_1, k_2$ and $k_3$ represent the maximum capacity of the general population, the infected population and the bacteria, respectively.
For convenience, we set $U(x,t)=(u_1,u_2,u_3,u_4)$ to be a vector-valued function defined in $Q_T$, with
\[ u_1(x,t)=S(x,t), ~u_2(x,t)=I(x,t), ~u_3(x,t)=R(x,t), ~u_4(x,t)=B(x,t), \, ~(x,t)\in Q_T. \]
The right-hand sides of the equations (1.1)-(1.4) are denoted by $f_1(x,t,U)$, $f_2(x,t,U)$, $f_3(x,t,U)$ and $f_4(x,t,U)$, respectively.
With the new notation, the system (1.1)-(1.6) can be written as the following reaction-diffusion system:
\setcounter{section}{2}
\setcounter{local}{1}
\begin{eqnarray}
& & u_{1t}-\nabla\cdot [a_1(x,t)\nabla u_1]=f_1(x,t,U), \hspace{1cm} (x,t)\in Q_{T},\\
& & u_{2t}-\nabla\cdot [a_2(x,t)\nabla u_2]=f_2(x,t,U), \hspace{1cm} (x,t)\in Q_{T},\stepcounter{local} \\
& & u_{3t}-\nabla\cdot [a_3(x,t)\nabla u_3]=f_3(x,t,U), \hspace{1cm} (x,t)\in Q_{T},\stepcounter{local}\\
& & u_{4t}-\nabla\cdot [a_4(x,t)\nabla u_4]=f_4(x,t,U), \hspace{1cm} (x,t)\in Q_T,\stepcounter{local}
\end{eqnarray}
subject to the initial and boundary conditions:
\begin{eqnarray}
& & U(x,0)=U_0(x):=(S_0(x),I_0(x),R_0(x),B_0(x)), \hspace{1cm} x\in \Omega, \stepcounter{local} \\
& & \nabla_{\nu}U(x,t)=0, \hspace{1cm} (x,t)\in \partial \Omega\times (0,T].\stepcounter{local}
\end{eqnarray}
We define
\[ X=V_2(Q_{T})\bigcap L^{\infty}(Q_{T}).\]
\ \\
{\bf Definition 2.1.} We say that $U(x,t)\in X^4$ is a weak solution to the problem (2.1)-(2.6) in $Q_{T}$ if, for all test functions $\phi_k\in X$ with $\phi_{kt}\in L^2(Q_{T})$ and
$\phi_k(x,T)=0$ on $\Omega$, $k=1,2,3,4$, it holds that
\begin{eqnarray*}
& & \int_{0}^{T}\int_{\Omega}\left[-u_k\cdot \phi_{kt}+a_k\nabla u_k\cdot \nabla\phi_k\right]dxdt\\
& & = \int_{\Omega} u_k(x,0)\phi _k(x,0)dx+\int_{0}^{T}\int_{\Omega}f_k(x,t,U)\phi_k(x,t)dxdt. \stepcounter{local}
\end{eqnarray*}
\ \\
{\bf Theorem 2.1.} Under the assumptions H(2.1)-H(2.3), the problem (2.1)-(2.6) has a weak solution in $X^4$, and the weak solution is nonnegative and bounded in $Q_T$ for any $T>0$.
Moreover, it holds that $u_i(x,t)\in C^{\alpha, \frac{\alpha}{2}}(\bar{Q}_{T})$ for $i=1,2,3,4$.
Under some additional conditions on $b$ and $g_2$, we can deduce a uniform bound in $Q$ for the weak solution to the problem (2.1)-(2.6).
We state such a result for the special case
which is needed in the subsequent asymptotic analysis.
\ \\
{\bf Corollary 2.1.} Under the conditions H(2.1)-(2.2), we further assume
\[ d_1(x,t,s)-b_s(x,t,s) \geq \lambda_0>0, ~~d_4(x,t,s)-g_{2s}(x,t,s)\geq \lambda_0>0,\hspace{1cm} (x,t,s)\in Q\times [0,\infty),\]
and
\[ \int_{0}^{\infty}\int_{\Omega}b(x,t,0)dxdt<\infty.\]
Then the weak solution of the problem (2.1)-(2.6) is bounded globally in $Q$.
\ \\
{\bf Remark 2.1.} The weak solution obtained in Theorem 2.1 may grow to infinity as $t\to \infty$ if no additional conditions are imposed on $b(x,t,s), g_2(x,t,s)$ and $d_1(x,t,s), d_4(x,t,s)$. On the other hand, if one assumes that
$g_1$ and $g_2$ grow at most polynomially with respect to $s_i$, then one can verify that the conditions in \cite{FMTY2021}
hold. Consequently, a global bound in $Q$ can be deduced.
The next theorem states our main stability results for the steady-state solutions to the problem (2.1)-(2.6).
\ \\
{\bf Theorem 2.2.} Under the condition H(4.1) (see Section 4), a steady-state solution
is asymptotically stable if
\[ d_1>B_0, ~~d_4>G_0,\]
and the parameters $\beta_1, \beta_2, \gamma, \sigma$ are appropriately small, where $B_0$
and $G_0$ are constants which depend on the steady-state solution.
It turns out that the conditions in Theorem 2.2 are almost necessary for the stability of each steady-state solution. In Section 5, we will see that when $b(x,t,s), g_1$ and $g_2(x,t,s)$ are of the form (1.7)-(1.8),
we have a very precise set of conditions on the model parameters ensuring the local stability or instability of each steady-state solution. Since there are many specific cases to consider, we state this result in Section 5
to avoid repetition.
\section{Global Solvability and Proof of Theorem 2.1}
In this section we first derive some a priori estimates for a weak solution to the system (2.1)-(2.6), then show the existence of a unique weak solution. Finally, we establish the global boundedness and the H\"older continuity.
\setcounter{section}{3}
\setcounter{local}{1}
\ \\
{\bf Lemma 3.1.} Under the assumptions H(2.1)-(2.2), any weak solution of the system (2.1)-(2.6) is nonnegative.
This is a well-known result since each $f_i(x,t,u_1,u_2,u_3,u_4)$
is quasi-positive for $i=1,2,3,4$, and is also locally Lipschitz continuous with respect to each $u_k$ for $k=1,2,3,4$.
Interested readers may refer to \cite{BLS2002} for a detailed proof.
Next we apply the energy method to derive an a priori estimate in the space $V_2(Q_T)$.
\ \\
{\bf Lemma 3.2.} Under the assumptions H(2.1)-(2.3), there exists a constant $C_1$ such that
\[\sum_{k=1}^{4}||u_k||_{V_{2}(Q_T)}\leq C_1.\]
{\bf Proof}. We multiply Eq.(2.1) by $u_1$ and integrate over $\Omega$ to obtain
\begin{eqnarray*}
& & \frac{1}{2}\frac{d}{dt}\int_{\Omega}u_1^2dx+a_0\int_{\Omega}|\nabla u_1|^2dx+\int_{\Omega} g_1u_1dx +d_0\int_{\Omega}u_1^2dx
\nonumber\\
& & \leq \int_{\Omega}b(x,t,u_1) u_1 dx+\sigma\int_{\Omega}u_1u_3dx\nonumber \\
& & \leq C\int_{\Omega}[1+u_1^2] dx+ C\int_{\Omega}[u_1^2+u_3^2]dx,\stepcounter{local}
\end{eqnarray*}
where we have used the assumption H(2.3)(a) in the second inequality.
We can perform a similar energy estimate for Eq.(2.3) to deduce
\begin{eqnarray*}
& & \frac{1}{2}\frac{d}{dt}\int_{\Omega}u_3^2dx+a_0\int_{\Omega}|\nabla u_3|^2dx
\leq \gamma \int_{\Omega}u_2u_3dx \leq C\int_{\Omega}[u_2^2 + u_3^2]dx.\stepcounter{local}
\end{eqnarray*}
In order to derive an estimate for $u_2$, we make use of the special structure of the system (2.1)-(2.4). To do so, we define
\[ v(x,t)=u_1(x,t)+u_2(x,t), \hspace{1cm} (x,t)\in Q.\]
Then it is easy to see that $v(x,t)$ satisfies
\setcounter{section}{3}
\setcounter{local}{1}
\begin{eqnarray}
v_t-\nabla\cdot [a_2\nabla v]&=&\nabla \cdot [(a_1-a_2)\nabla u_1]+f_1(x,t,U)+f_2(x,t,U), ~~(x,t)\in Q_{T},\\
\nabla_{\nu}v(x,t)&=&0, \hspace{1cm} (x,t)\in \partial \Omega\times (0,T],\stepcounter{local}\\
v(x,0)&=&S_0(x)+I_0(x), \hspace{1cm} x\in \Omega.\stepcounter{local}
\end{eqnarray}
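Indeed, adding Eqs.(2.1) and (2.2) and writing
\[ a_1\nabla u_1+a_2\nabla u_2=a_2\nabla v+(a_1-a_2)\nabla u_1,\]
one obtains Eq.(3.1); note that the nonlinear terms $\pm g_1$ cancel in the sum $f_1+f_2$, which is precisely the special structure alluded to above.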
We multiply Eq.(3.1) by $ v$ and then integrate over $\Omega$ to obtain
\begin{eqnarray*}
& & \frac{1}{2}\frac{d}{dt}\int_{\Omega}v^2dx +a_0\int_{\Omega}|\nabla v|^2dx\\
& & =-\int_{\Omega} [(a_1-a_2)\nabla u_1\cdot \nabla v]dx+\int_{\Omega}v[f_1(x,t,U)+f_2(x,t,U)]dx\\
& & :=J_1+J_2.
\end{eqnarray*}
A direct application of the Cauchy-Schwarz inequality implies
\[ |J_1|\leq \varepsilon\int_{\Omega}|\nabla v|^2dx +C(\varepsilon)\int_{\Omega}|\nabla u_1|^2 dx.\]
On the other hand, using the fact that
\[ f_1(x,t,U)+f_2(x,t,U)=b(x,t,u_1)-d_1u_1+\sigma u_3-(d_2+\gamma)u_2,\]
we readily derive that
\begin{eqnarray*}
|J_2| & = & |\int_{\Omega}v[f_1(x,t,U)+f_2(x,t,U)]dx|\\
& \leq & C\int_{\Omega}[v (1+u_1+ u_3)]dx
\leq C+C\int_{\Omega}[v^2+u_1^2+u_3^2]dx.
\end{eqnarray*}
Now choosing $\varepsilon=\frac{a_0}{2}$, we can readily derive from the above estimates that
\begin{eqnarray*}
& & \frac{d}{dt}\int_{\Omega}v^2dx +a_0\int_{\Omega}|\nabla v|^2dx
\leq C+C\int_{\Omega}[v^2+u_1^2+u_3^2]dx.
\end{eqnarray*}
By combining the above energy estimates for $u_1, v$ and $u_3$, we can further deduce
\begin{eqnarray*}
&& \frac{d}{dt}\int_{\Omega}[u_1^2+v^2+u_3^2]dx+\int_{\Omega}[|\nabla u_1|^2+|\nabla v|^2+
|\nabla u_3|^2]dx\\
& & \leq C\int_{\Omega}[u_1^2+v^2+u_3^2]dx,
\end{eqnarray*}
then a direct application of Gronwall's inequality implies
\begin{eqnarray*}
& & \sup_{0<t<T}\int_{\Omega}[u_1^2+v^2+u_3^2]dx+\int_{0}^{T}\int_{\Omega}[|\nabla u_1|^2+|\nabla v|^2+
|\nabla u_3|^2]dxdt\\
& & \leq C+C\int_{\Omega}[S_0^2+I_0^2+R_0^2]dx.
\end{eqnarray*}
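For completeness, the Gronwall step reads as follows: setting $y(t)=\int_{\Omega}[u_1^2+v^2+u_3^2]dx$, the preceding differential inequality takes the form
\[ y'(t)\leq C+Cy(t), \hspace{1cm} 0<t<T,\]
which integrates to $y(t)\leq e^{Ct}y(0)+e^{Ct}-1\leq C(T)[1+y(0)]$; integrating the gradient terms in time then yields the displayed bound.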
Noting that $v=u_1+u_2$, we can write
\begin{eqnarray*}
& & \int_{\Omega}|\nabla v|^2 dx= \int_{\Omega}[|\nabla u_1|^2+|\nabla u_2|^2]dx+2\int_{\Omega}
[(\nabla u_1)\cdot (\nabla u_2)]dx.
\end{eqnarray*}
But using the Cauchy-Schwarz inequality, we can see
\begin{eqnarray*}
\int_{\Omega}
[(\nabla u_1)\cdot (\nabla u_2)]dx & \leq & \varepsilon\int_{\Omega}|\nabla u_2|^2dx+C(\varepsilon)\int_{\Omega}|\nabla u_1|^2dx\\
& \leq & \varepsilon\int_{\Omega}|\nabla u_2|^2dx+C(\varepsilon)\int_{\Omega}[u_1^2+u_3^2]dx.
\end{eqnarray*}
Using the above estimates and choosing $\varepsilon$ to be sufficiently small,
we can obtain
\begin{eqnarray*}
& & \sup_{0<t<T}\int_{\Omega}[u_1^2+u_2^2+u_3^2]dx
+\int\int_{Q_{T}}[|\nabla u_1|^2+|\nabla u_2|^2+|\nabla u_3|^2]dxdt\\
& &\leq C+ C\int_{\Omega}[S_0^2+I_0^2+R_0^2]dx.
\end{eqnarray*}
For $u_4$, we note that
\[ h_2(x,t,u_4)u_4\leq k_0(u_4^2+1).\]
Then we can readily derive from Eq. (2.4) that
\[ \frac{d}{dt}\int_{\Omega}u_4^2dx +a_0\int_{\Omega}|\nabla u_4|^2dx\leq C\int_{\Omega}[u_2^2+u_4^2]dx.\]
Now an integration over $(0, T)$ implies
\begin{eqnarray*}
& & \sup_{0<t<T}\int_{\Omega} u_4^2dx+\int\int_{Q_{T}}|\nabla u_4|^2dxdt
\leq C+C\int_{\Omega}B_0^2dx+C\int\int_{Q_{T}}u_2^2 dxdt\\
& & \leq C+C\int_{\Omega}[S_0^2+I_0^2+R_0^2+B_0^2]dx.
\end{eqnarray*}
The proof of Lemma 3.2 is now complete.
\hfill Q.E.D.
In order to derive further a priori estimates, we need a crucial result on the Campanato-John-Nirenberg-Morrey estimate for a general parabolic equation. For the reader's convenience, we state the result in detail here (see Lemma 3.3 below).
Consider the parabolic equation:
\begin{eqnarray}
& & u_t-Lu =\sum_{i=1}^{n}(f_i(x,t))_{x_i}+f(x,t), \hspace{1cm} (x,t)\in Q_{T},\stepcounter{local}\\
& & u(x,t)=0 ~~\mbox{or} ~~u_{\nu}(x,t)=0, \hspace{1cm} (x,t)\in \partial \Omega \times (0,T],\stepcounter{local}\\
& & u(x,0)=u_0(x), \hspace{1cm} x\in \Omega.\stepcounter{local}
\end{eqnarray}
where
$Lu:=(a_{ij}(x,t)u_{x_{i}})_{x_j}+b_i(x,t)u_{x_i}+c(x,t)u$
is an elliptic operator (with the summation convention). We assume there are positive constants $A_0, A_1$ and $A_2$ such that $A=(a_{ij}(x,t))_{n\times n}$ is
a positive definite matrix that satisfies
\[ A_0|\xi|^2 \leq a_{ij}\xi_i\xi_j \leq A_1|\xi|^2, \hspace{1cm} \xi\in R^n,\]
and
\[ \sum_{i=1}^{n}||b_i||_{L^{\infty}(Q_{T})}+||c||_{L^{\infty}(Q_{T})}\leq A_2<\infty.\]
\ \\
{\bf Lemma 3.3}. (\cite{Yin1997}) Let $u(x,t)$ be a weak solution of the parabolic equation (3.8)-(3.10).
Let $u_0\in C^{\alpha}(\bar{\Omega})$ with $u_0(x)=0$ on $\partial \Omega$, and $\nabla u_0\in L^{2,\mu_0}(\Omega)$
for some $\mu_0\in (n-2,n)$.
Then for any $\mu\in [0,n)$, there exists a constant $C$ such that
\[ ||\nabla u||_{L^{2, \mu}(Q_{T})}\leq C[||\nabla u_0||_{L^{2, (\mu-2)^+}(\Omega)}+||f||_{L^{2, (\mu-2)^+}(Q_{T})}+\sum_{i=1}^{n}
||f_i||_{L^{2, \mu}(Q_{T})}].\]
Moreover, it holds that $u\in L^{2, \mu+2}(Q_{T})$ and
\[ ||u||_{L^{2, \mu+2}( Q_{T})}\leq C[||\nabla u_0||_{L^{2, (\mu-2)^+}(\Omega)}+||f||_{L^{2, (\mu-2)^+}(Q_{T})}+\sum_{i=1}^{n}||f_i||_{L^{2, \mu}(Q_{T})}]\]
for a constant $C$ that depends only on $A_0, A_1,A_2, n$ and $Q_{T}$.
\ \\
{\bf Lemma 3.4} Under the assumptions H(2.1)-(2.3), the weak solution of (2.1)-(2.4) satisfies
\[\sum_{k=1}^{4}||u_k||_{\ca}\leq C(T).\]
{\bf Proof}. Let $\mu\in (0, n)$ be arbitrary.
By Lemma 3.3, we have
\begin{eqnarray}
& & ||\nabla u_3||_{L^{2, \mu}(Q_{T})}\leq C[||\nabla R_0||_{L^{2,(\mu-2)^+}(\Omega)}+||u_2||_{L^{2, (\mu-2)^+}(Q_{T})}+
||u_3||_{L^{2, (\mu-2)^+}(Q_{T})}].\stepcounter{local}
\end{eqnarray}
On the other hand, we note that $v(x,t)=u_1(x,t)+u_2(x,t)$ satisfies the system (3.1)-(3.3),
so we can apply Lemma 3.3 again to obtain
\begin{eqnarray}
& & ||\nabla v||_{L^{2, \mu}(Q_{T})}\leq C[||\nabla v_0||_{L^{2,(\mu-2)^+}(\Omega)}+\sum_{i=1}^{3}||u_i||_{L^{2, (\mu-2)^+}(Q_{T})}].\stepcounter{local}
\end{eqnarray}
To derive the $L^{2,\mu}$-estimate for $u_1$, we note that
\[ u_{1t}-\nabla\cdot [a_1(x,t)\nabla u_1]\leq b(x,t,u_1)-d_1u_1+\sigma u_{3}=[b_s(x,t,\theta)-d_1]u_1+b(x,t,0)+\sigma u_3,\]
where $\theta$ is an intermediate value between $0$ and $u_1$.
Using the facts that $b_s(x,t,s)$ and $b(x,t,0)$ are bounded, we can use the same calculations as in Lemmas 3.2 and 3.3 to obtain
\[ ||\nabla u_1||_{L^{2,\mu}(Q_{T})}\leq C[||\nabla S_0||_{L^{2,\mu}(\Omega)}+||u_3||_{L^{2,\mu}(Q_{T})}].\]
Now we can combine the $L^{2,\mu}(Q_T)$-estimates for $u_1, v$ and $u_3$
and note that $v=u_1+u_2$ to obtain
for any $\mu\in [0, n)$ that
\begin{eqnarray}
& & \sum_{i=1}^{3}||\nabla u_i||_{L^{2, \mu}(Q_{T})} \leq C[||\nabla U_0||_{L^{2,(\mu-2)^+}(\Omega)}
+\sum_{i=1}^{3}||u_i||_{L^{2, (\mu-2)^+}(Q_{T})}]+C.\stepcounter{local}
\end{eqnarray}
Using the fact that $u_i\in V_2(Q_{T})$, we derive
for any $\mu_1\in [0,2)$ that
\begin{eqnarray}
\sum_{i=1}^{3}||\nabla u_i||_{L^{2, \mu_1}(Q_{T})}\leq C[\sum_{i=1}^{3}||\nabla u_{i0}||_{L^{2}(\Omega)}+1]. \stepcounter{local}
\end{eqnarray}
Now we can apply the interpolation theory for the parabolic equation (2.3)
(see Lemma 2.6 in \cite{Yin1997} ) to further deduce
\[ ||u_3||_{L^{2,\mu_1+2}(Q_{T})}\leq C[||u_2||_{L^{2}(Q_{T})}+||u_3||_{L^{2}(Q_{T})}
+||\nabla u_3||_{L^{2}(Q_{T})}]+C.\]
Next we go back to the system (2.1)-(2.3) and apply the same process for $\mu_2=\mu_1+2$
to obtain
\begin{eqnarray}
& & \sum_{i=1}^{3}||\nabla u_i||_{2, \mu_2, Q_{T}} \leq C[\sum_{i=1}^{3}||\nabla u_{i0}||_{L^{2,\mu_2}(\Omega)}+\sum_{i=1}^{3}||u_i||_{2, (\mu_2-2)^+,Q_{T}} +C].\stepcounter{local}
\end{eqnarray}
Then after a finite number of steps, we can deduce for any $\mu\in (0,n)$ that
\begin{eqnarray}
& & \sum_{i=1}^{3}||u_i||_{L^{2, \mu+2}(Q_{T})}\leq C[\sum_{i=1}^{3}||u_i||_{L^{2}(Q_{T})}+\sum_{i=1}^{3}||\nabla u_{i0}||_{L^{2, (\mu-2)^+}(\Omega)}].\stepcounter{local}
\end{eqnarray}
Now we apply the interpolation theory again (see Lemma 2.6 in \cite{Yin1997}) to derive
\begin{eqnarray*}
& & \sum_{i=1}^{3}||u_i||_{2, \mu_0+4, Q_{T}}
\leq C[\sum_{i=1}^{3}||u_i||_{L^{2}(Q_{T})}+\sum_{i=1}^{3}||\nabla u_{i0}||_{L^{2, \mu_0}(\Omega)}].\stepcounter{local}
\end{eqnarray*}
But noting that $\mu_0\in (n-2, n)$, we can then obtain by Lemma 1.19 in \cite{T1987} that
\[ \sum_{i=1}^{3}||u_i||_{\ca}\leq C,\]
for $\alpha =\frac{\mu_0+2-n}{2}$.
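Indeed, this exponent is obtained by applying the equivalence $L^{2,\mu+2}(Q_{T})\cong C^{\alpha,\frac{\alpha}{2}}(\bar{Q}_{T})$ of Lemma 1.19 in \cite{T1987} with $\mu=\mu_0+2\in (n,n+2)$:
\[ \alpha=\frac{\mu-n}{2}=\frac{\mu_0+2-n}{2}.\]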
The proof of Lemma 3.4 is now complete.
\hfill Q.E.D.
\ \\
{\bf Proof of Theorem 2.1}. First of all, by using the energy method we see that a weak solution of (2.1)-(2.6) must be unique, since the solution is bounded and $f_k$ is locally Lipschitz continuous with respect to $u_i$ for all $k, i\in \{1,2,3,4\}$. With the a priori estimates in Lemmas 3.1-3.4, there are several approaches, such as the truncation method and the Galerkin finite element method, to prove the desired result
(see, e.g., \cite{BLS2002,FMTY2021, Yin2020}). Here we choose a different approach,
a bootstrap argument (see \cite{YCW2017}), for the proof.
Let $T\in (0,\infty)$ be any fixed number. It is easy to show that the system (2.1)-(2.6) has a unique local weak solution in $X$ on $Q_{T_0}$ for some small $T_0>0$.
Let
\[ T^*=\sup\{T_0: \mbox{the system (2.1)-(2.6) has a unique weak solution in $Q_{T_0}$}\}.\]
Suppose $T^*<T$ (otherwise there is nothing to prove). We note that the a priori estimates in Lemmas 3.1 and 3.4 hold
for any weak solution. It follows that
\[ \limsup_{t\rightarrow T^{*}-} \Big[\sum_{k=1}^{4}||u_k||_{V_2(Q_t)}+\sum_{k=1}^{4}||u_k||_{\ca}\Big]<\infty.\]
By compactness, we know that
\[ u_k(x,T^*)\in H^1(\Omega), \nabla u_k\in L^{2, (\mu-2)^+}(\Omega) ~~\mbox{for any $\mu\in (n,n+2).$}\]
Now, we use $U(x,T^*)$ as an initial value and consider the system (2.1)-(2.6) for $t\geq T^*$. Then the local existence result
implies that there exists a small $t_0>0$ such that the problem (2.1)-(2.6) has a unique weak solution in the interval $[T^*, T^*+t_0)$.
Consequently, we obtain a weak solution to the system (2.1)-(2.6) in the interval $[0,T^*+t_0)$.
This contradicts the definition of $T^*$; therefore $T^*=T$.
\hfill Q.E.D.
\ \\
Next, we prove Corollary 2.1.
Assume that there exists a constant $\lambda_0>0$ such that
\[ d_1(x,t,s)-b_s(x,t,s)\geq \lambda_0>0, ~~d_4(x,t,s)-g_{2s}(x,t,s)\geq \lambda_0,\hspace{1cm} (x,t,s)\in Q\times [0,\infty).\]
With the above assumption, we take the integration over $\Omega$ for Eq. (2.1)-(2.3) to obtain
\[ \frac{d}{dt}\int_{\Omega}(u_1+u_2+u_3)dx+\min\{d_0,\lambda_0\}\int_{\Omega}(u_1+u_2+u_3)dx\leq \int_{\Omega}b(x,t,0)dx.\]
Then it is easy to see
\[ \sup_{t\geq 0}\int_{\Omega}(u_1+u_2+u_3)dx\leq C.\]
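Indeed, setting $y(t)=\int_{\Omega}(u_1+u_2+u_3)dx$ and $c=\min\{d_0,\lambda_0\}\geq 0$, the above differential inequality gives $y'(t)+cy(t)\leq \int_{\Omega}b(x,t,0)dx$, and hence, even in the worst case $c=0$,
\[ y(t)\leq y(0)+\int_{0}^{\infty}\int_{\Omega}b(x,t,0)\,dxdt\leq C, \hspace{1cm} t\geq 0,\]
by the integrability assumption on $b$ in Corollary 2.1.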
Now we derive a uniform estimate in $L^2(Q)$. By using the energy estimate for Eq.(2.1),
we can see that
\[ \frac{d}{dt}\int_{\Omega}u_1^2dx+\int_{\Omega}|\nabla u_1|^2dx
\leq C\int_{\Omega}b(x,t,0)^2dx+C\int_{\Omega}u_3^2dx.\]
For $v(x,t):=u_1(x,t)+u_2(x,t)$, we can derive from Eq.(3.1)-(3.3) that
\begin{eqnarray*}
& & \frac{d}{dt}\int_{\Omega}v^2dx+\int_{\Omega}|\nabla v|^2dx
\leq C\int_{\Omega}|\nabla u_1|^2dx+C\int_{\Omega}[b(x,t,0)^2+u_1^2+u_3^2]dx\\
& & \leq C\int_{\Omega}[b(x,t,0)^2+u_1^2+u_3^2] dx,
\end{eqnarray*}
where we have used the estimate for $u_1$ in the second inequality.
Again, we can use the energy estimate for Eq.(2.3) to obtain
\[ \frac{d}{dt}\int_{\Omega}u_3^2dx+\int_{\Omega}|\nabla u_3|^2dx\leq C\int_{\Omega}u_2^2dx.\]
But the Gagliardo-Nirenberg interpolation inequality gives, for any $\varepsilon>0$,
\[ \int_{\Omega} u^2dx\leq \varepsilon \int_{\Omega}|\nabla u|^2 dx+C(\varepsilon) ||u||_{L^1(\Omega)}^2; \]
then, using the uniform boundedness of the $L^1(\Omega)$-norms of $u_1, u_2, u_3$,
we get for sufficiently small $\varepsilon$,
\begin{eqnarray*}
& & \sup_{t\geq 0}\int_{\Omega}[u_1^2+u_2^2+u_3^2]dx+\int_{0}^{\infty}\int_{\Omega}[|\nabla u_1|^2+|\nabla u_2|^2+|\nabla u_3|^2]dxdt\\
& & \leq C_1+C_2\int_{0}^{\infty}\int_{\Omega}b(x,t,0)^2dxdt
\leq C_3\,.
\end{eqnarray*}
Next we use the iteration method again, as in the proof of Theorem 2.1.
From Eq.(3.1) for $v$ and Eq.(2.3) for $u_3$, we deduce, respectively,
\[ ||\nabla v||_{2, \mu}\leq C+C||u_1||_{2, \mu}+C||u_3||_{2,\mu}\]
and
\[||\nabla u_3||_{2,\mu}\leq C+C||u_2||_{2, \mu}. \]
For $u_1$, noting that $g_1\geq 0$, we see that
\[ u_{1t}-\nabla\cdot[a_1(x,t)\nabla u_1]\leq [b_0(x,t)-d_1]u_1+\sigma u_3.\]
As $u_1\geq 0$ in $Q$, we can follow the same argument as in \cite{Yin1997} to obtain
for $\mu\in (n-2, n)$,
\[ ||\nabla u_1||_{2,\mu}\leq C+C[||u_1||_{2, \mu}+||u_3||_{2, \mu}].\]
As $u_1, v, u_3$ are uniformly bounded in $L^2(Q)$, the interpolation for $v$ and $u_3$ with $\mu=0$ yields that
\[ ||v||_{2,2}+||u_3||_{2, 2}\leq C.\]
Hence, we can obtain the $L^{2, \mu}(Q)$-estimate for $\nabla u_1$ with $\mu=2$:
\[ ||\nabla u_1||_{2,2}\leq C+ C[||u_1||_{2, 2}+||u_3||_{2, 2}],\]
which is uniformly bounded.
We can now go back to the equations for $v$ and $u_3$ with $\mu=2$ to obtain
\[ ||v||_{2,4}+||u_3||_{2, 4}\leq C[||u_1||_{2,2}+||u_2||_{2,2}+||u_3||_{2,2}].\]
By continuing the above iteration process, after a finite number of steps,
we obtain for $\alpha=\frac{\mu_0-n}{2}$ that
\[ ||v||_{\ca}+||u_3||_{\ca}\leq C.\]
Consequently, we get
\[ ||u_1||_{L^{\infty}(Q)}\leq C.\]
Since $v$ and $u_1$ are uniformly bounded, so is $u_2=v-u_1$; then from Eq.(2.4) we
can apply the maximum principle to obtain
\[\sup_{t\geq 0}||u_4||_{L^{\infty}(\Omega)}\leq C.\]
With the a priori bound for each $u_i$, we can extend the weak solution in $Q_{T}$ to $Q$.
\hfill Q.E.D.
\section{Linear Stability Analysis}
To illustrate the main idea, we assume that $b$ and $g_2$ depend only on $x$ and $s$.
We also focus on the following model cases:
\[ b(x,t,s)=b_0(x)s(1-\frac{s}{k_1}), ~~g_1=\beta_1u_1u_2+\beta_2 \frac{u_1u_4}{u_4+k_2}, ~~g_2=g_0(x)s(1-\frac{s}{k_3}).\]
Moreover, we assume that all parameters $\sigma, \gamma, \beta_1, \beta_2, d_i, k_i$ are positive constants. The general case can be carried out similarly as long as the functions are differentiable.
Consider the steady-state problem in $\Omega$:
\setcounter{section}{4}
\setcounter{local}{1}
\begin{eqnarray}
-\nabla\cdot [a_1(x)\nabla u_1] & = & b(x,u_1)-g_1(x,u_1,u_2,u_4)-d_1u_1+\sigma u_3,\\
-\nabla\cdot [a_2(x)\nabla u_2] & = & g_1(x,u_1,u_2,u_4)-(d_2+\gamma)u_2,\stepcounter{local} \\
-\nabla\cdot [a_3(x)\nabla u_3] & = & \gamma u_2-(d_3+\sigma)u_3,\stepcounter{local} \\
-\nabla\cdot [a_4(x)\nabla u_4] & = & \xi u_2 +g_2(x,u_4)-d_4 u_4\stepcounter{local}
\end{eqnarray}
subject to the boundary condition
\begin{eqnarray}
\partial_{\nu}U(x) =0, \hspace{1cm} x\in \partial \Omega,\stepcounter{local}
\end{eqnarray}
where $U(x)=(u_1(x), u_2(x), u_3(x), u_4(x))$.
It is clear that there is a trivial solution $U(x)=(0,0,0,0)$ if $b(x,0)=g_1(x,0,0,0)=g_2(x,0)=0.$ But we are interested in nontrivial solutions, and will make the following assumptions.
\ \\
H(4.1). (a) $0<a_0\leq a_i(x)\leq A_0$ on $\Omega$; \\
(b) $b_0(x)\geq b_*>0$ and $g_0(x)\geq g_*>0$ on $\Omega$, and both are bounded.
\ \\
{\bf Lemma 4.1}. Under the assumptions H(4.1), the elliptic system (4.1)-(4.5) has at least one nonnegative weak solution
$U(x)\in W^{1,2}(\Omega)$. Moreover, the weak solution is H\"older continuous in $\bar{\Omega}$ for any space dimension.\\
{\bf Proof.} Since the argument is very similar to the case for a parabolic system, we only sketch the proof.
The key step is to derive an a priori estimate in H\"older space.
As a first step, we know that a solution of (4.1)-(4.5) must be nonnegative, since every right-hand side of (4.1)-(4.4) is quasi-positive. Next, we can use the same argument as in the parabolic case to
derive the $L^1$-estimates for $u_i(x)\geq 0, i=1,2,3,4$ on $\Omega$.
Indeed, integrating the sum of Eqs.(4.1)-(4.3) over $\Omega$, we have
\[ \int_{\Omega}[d_2u_2+d_3u_3]dx+\frac{1}{k_1}\int_{\Omega}b_0(x) u_1^2dx=\int_{\Omega}(b_0-d_1)u_1dx.\]
Then an application of the Cauchy-Schwarz inequality yields
\[ \int_{\Omega}[u_1^2+d_2u_2+d_3u_3]dx\leq C.\]
On the other hand, integrating Eq.(4.4) over $\Omega$, we obtain
\[ \frac{1}{k_3}\int_{\Omega}g_0(x)u_4^2dx \leq \xi\int_{\Omega}u_2dx+\int_{\Omega}(g_0-d_4)u_4dx\leq C+\frac{1}{2k_3}\int_{\Omega}g_0(x)u_4^2dx,\]
which implies
\[ \int_{\Omega}u_4^2dx \leq C.\]
The next step is to derive the $L^2(\Omega)$-estimates for $u_2$ and $u_3$. The idea is similar to the parabolic case.
The energy estimate for Eq.(4.1) yields that, for any $\varepsilon>0$,
\[ \int_{\Omega}|\nabla u_1|^2dx+\int_{\Omega}u_{1}^3dx\leq C(\varepsilon)+\varepsilon\int_{\Omega}u_3^2dx.\]
It is easy to see that, by adding up Eq.(4.1) and Eq.(4.2), $v(x):=u_1(x)+u_2(x)$ satisfies that
\[ -\nabla\cdot[a_2(x) \nabla v]=\nabla\cdot[(a_1(x)-a_2(x))\nabla u_1]+b(x,u_1)-d_1u_1-(d_2+\gamma)u_2+\sigma u_3.\]
Then we can get by the energy estimate that
\[ \int_{\Omega}|\nabla v|^2dx+\int_{\Omega} v^2dx \leq C(\varepsilon)+2\varepsilon\int_{\Omega}u_3^2dx.\]
From Eq.(4.3) we have, by the Cauchy-Schwarz inequality,
\begin{eqnarray*}
& & a_0\int_{\Omega}|\nabla u_3|^2dx+(d_3+\sigma)\int_{\Omega}u_3^2dx\\
& & \leq \gamma \int_{\Omega}u_2u_3dx
\leq \frac{d_3+\sigma}{2}\int_{\Omega} u_3^2dx+\frac{\gamma^2}{2(d_3+\sigma)}\int_{\Omega} u_2^2dx,
\end{eqnarray*}
which implies
\[ \int_{\Omega}|\nabla u_3|^2dx+\int_{\Omega}u_3^2dx\leq C\int_{\Omega}u_2^2dx.\]
Now we can combine the above estimates for $u_1, v$ and $u_3$ and choose $\varepsilon$ sufficiently small to conclude
\begin{eqnarray}
\sum_{i=1}^{4}||\nabla u_i||_{L^{2}(\Omega)}+\sum_{i=1}^{4}\int_{\Omega}u_i^2dx\leq C. \stepcounter{local}
\end{eqnarray}
To derive a further a priori estimate, we use the Campanato estimate for elliptic equations (\cite{T1987}) to obtain that $u_i\in C^{\alpha}(\bar{\Omega})$ and
\[ \sum_{i=1}^{4}||u_i||_{ C^{\alpha}(\bar{\Omega})}\leq C.\]
With the above a priori estimates, we can use the Schauder's fixed-point theorem (\cite{GT1987}) to obtain the existence of a weak solution for the system (4.1)-(4.5) and the weak solution is in the space $W^{1,2}(\Omega)\bigcap C^{\alpha}(\bar{\Omega})$. We skip this step here.
\hfill Q.E.D.
\ \\
{\bf Remark 4.1.} Uniqueness is not expected in general, since there are many nontrivial constant solutions when
$b(x,s), g_1$ and $g_2$ have the special forms stated in the introduction.
Next, we study the stability of the steady-state solutions of the system (4.1)-(4.5).
Let
$Z^*(x)=(u_1^*(x),u_2^*(x),u_3^*(x),u_4^*(x))$
be such a steady-state solution.
We consider a small perturbation near $Z^*(x)$ and set
\[ Z(x,t)=Z^*(x)+ Z_1(x,t), \hspace{1cm} (x,t)\in Q,\]
where
$Z_1(x,t)=(U_1(x,t), U_2(x,t), U_3(x,t), U_4(x,t))$, with
$U_i(x,t)=u_i(x,t)-u_i^*(x)$ for $i=1,2,3,4$.
A direct calculation, after dropping the higher-order terms in the perturbation, shows that $Z_1$ satisfies the following linearized system:
\setcounter{section}{4}
\setcounter{local}{6}
\begin{eqnarray}
& & U_{1t}-\nabla\cdot [a_1\nabla U_1]=F_1(Z_1), \hspace{1cm} (x,t)\in Q,\\
& & U_{2t}-\nabla\cdot [a_2\nabla U_2]=F_2(Z_1), \hspace{1cm} (x,t)\in Q,\stepcounter{local} \\
& & U_{3t}-\nabla\cdot [a_3\nabla U_3]=F_3(Z_1), \hspace{1cm} (x,t)\in Q,\stepcounter{local}\\
& & U_{4t}-\nabla\cdot [a_4\nabla U_4]=F_4(Z_1), \hspace{1cm} (x,t)\in Q,\stepcounter{local}
\end{eqnarray}
subject to the initial and boundary conditions:
\begin{eqnarray}
& & Z_1(x,0)=U(x,0)-Z^*(x), \hspace{1cm} x\in \Omega, \stepcounter{local} \\
& & \nabla_{\nu}Z_1(x,t)=0, \hspace{1cm} (x,t)\in \partial \Omega\times (0,\infty),\stepcounter{local}
\end{eqnarray}
where the right-hand sides of the system (4.6)-(4.9) are given by
\begin{eqnarray*}
F_1(Z_1) & = & [b_s(x,u_1^*)-\beta_1u_2^*-\beta_2h_1(u_4^*)-d_1]U_1-\beta_1u_1^*U_2+\sigma U_3-\beta_2u_1^*h_1^{'}(u_4^*)U_4,\\
F_2(Z_1) & = & [\beta_1u_2^*+\beta_2 h_1(u_4^*)]U_1+[\beta_1u_1^*-(d_2+\gamma)]U_2+\beta_2u_1^*h_1^{'}(u_4^*)U_4,\\
F_3(Z_1) & = & \gamma U_2-(d_3+\sigma)U_3,\\
F_4(Z_1) & = & \xi U_2+[h_{2s}(x,u_4^*)-d_4]U_4.
\end{eqnarray*}
\noindent
{\bf Theorem 4.1} Under the assumptions H(4.1), the steady-state solution $Z^*(x)$ to the system (4.1)-(4.5) is asymptotically stable if the following conditions hold:
\[ d_1-B_0>0, ~~d_4-G_0>0,\]
and $\beta_1, \beta_2$ and $\sigma$ are suitably small,
where $B_0$ and $G_0$ are given by
\[ B_0=\max_{x\in \Omega}|b_s(x,u_1^*)|, ~~G_0=\max_{x\in \Omega}|h_{2s}(x,u_4^*)|.\]
{\bf Proof}. For any odd positive integer $k$ (so that the functions $U_i^{k+1}$ are nonnegative),
we multiply Eq.(4.6) by $U_1^k$ and integrate over $\Omega$ to obtain
\begin{eqnarray*}
& & \frac{1}{k+1}\frac{d}{dt}\int_{\Omega}U_1^{k+1}dx+\frac{4a_0}{(k+1)^2}\int_{\Omega}
|\nabla U_1^{\frac{k+1}{2}}|^2dx\\
& & +\int_{\Omega}[d_1+\beta_1u_2^*+\beta_2 h_1(u_4^*)-b_s(x,u_1^*)]U_1^{k+1}dx\\
& & \leq |J|,
\end{eqnarray*}
where $J$ is given by
\[ J=-\beta_1\int_{\Omega}u_1^*U_2U_1^k dx +\sigma\int_{\Omega}U_3 U_1^k dx -\beta_2\int_{\Omega}u_1^*h_1^{'}(u_4^{*}) U_4U_1^k dx:=J_1+J_2+J_3.\]
Let $U_0=\max_{\Omega}u_1^*(x)$. Then we can use Young's inequality to readily get
\begin{eqnarray*}
|J_1|&\leq& \beta_1U_0\int_{\Omega}\left[\frac{ k}{k+1}U_1^{k+1}+\frac{1}{(k+1)} U_2^{k+1}\right] dx,\\
|J_2| & \leq & \sigma\int_{\Omega}\left[\frac{ k}{k+1}U_1^{k+1}+\frac{1}{(k+1)} U_3^{k+1}\right] dx,\\
|J_3| & \leq & \beta_2U_0G_0\int_{\Omega}\left[\frac{ k}{k+1}U_1^{k+1}+\frac{1}{(k+1)} U_4^{k+1}\right] dx.
\end{eqnarray*}
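For instance, the bound on $J_1$ follows from Young's inequality $ab\leq \frac{a^p}{p}+\frac{b^q}{q}$ with the conjugate exponents $p=\frac{k+1}{k}$ and $q=k+1$, applied pointwise with $a=|U_1|^k$ and $b=|U_2|$:
\[ |U_2U_1^k|\leq \frac{k}{k+1}|U_1|^{k+1}+\frac{1}{k+1}|U_2|^{k+1};\]
the bounds on $J_2$ and $J_3$ are obtained in the same way.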
Now we can easily see for sufficiently small
$\sigma, \beta_1, \beta_2 $ that
\begin{eqnarray*}
& & \frac{1}{k+1}\frac{d}{dt}\int_{\Omega}U_1^{k+1}dx+\frac{4a_0}{(k+1)^2}\int_{\Omega}
|\nabla U_1^{\frac{k+1}{2}}|^2dx\\
& & +[d_1+\beta_1u_2^*+\beta_2 h_1(u_4^*)-b_s(x,u_1^*)]\int_{\Omega}U_1^{k+1}dx\\
& & \leq \frac{C}{(k+1)}\int_{\Omega}\left[U_2^{k+1}+U_3^{k+1}+U_4^{k+1}\right]dx.
\end{eqnarray*}
We can apply the same argument above for $U_2,U_3, U_4$ from Eq.(4.2), Eq.(4.3) and Eq.(4.4), respectively, to obtain
\begin{eqnarray*}
& & \frac{1}{k+1}\frac{d}{dt}\int_{\Omega}U_2^{k+1}dx+\frac{4a_0}{(k+1)^2}\int_{\Omega}
|\nabla U_2^{\frac{k+1}{2}}|^2dx+(d_2+\gamma-\beta_1U_0)\int_{\Omega}U_2^{k+1} dx\\
& & \leq \frac{C}{(k+1)}\int_{\Omega}\left[U_1^{k+1}+U_4^{k+1}\right]dx;\\
& & \frac{1}{k+1}\frac{d}{dt}\int_{\Omega}U_3^{k+1}dx+\frac{4a_0}{(k+1)^2}\int_{\Omega}
|\nabla U_3^{\frac{k+1}{2}}|^2dx+(d_3+\sigma)\int_{\Omega}U_3^{k+1} dx\\
& & \leq \frac{C}{(k+1)}\int_{\Omega}U_2^{k+1} dx;\\
& & \frac{1}{k+1}\frac{d}{dt}\int_{\Omega}U_4^{k+1}dx+\frac{4a_0}{(k+1)^2}\int_{\Omega}
|\nabla U_4^{\frac{k+1}{2}}|^2dx+(d_4-G_0)\int_{\Omega}U_4^{k+1} dx\\
& & \leq \frac{C}{(k+1)}\int_{\Omega}U_2^{k+1}dx.
\end{eqnarray*}
We now look at the quantity
\[ Y(t)=\int_{\Omega}\left[ U_1^{k+1}+U_2^{k+1}+U_3^{k+1}+U_4^{k+1}\right]dx.\]
Noting from the assumption H(4.1) that there exists a small number, denoted by $\beta_0$, such that
\[d_1-B_0\geq \beta_0, ~~d_2+\gamma-\beta_1U_0\geq \beta_0, ~~d_3+\sigma\geq\beta_0,
~~d_4-G_0\geq \beta_0, \]
we can add up the above estimates for $U_i^{k+1}$ to derive for sufficiently large $k$ that
\[ \frac{1}{k+1}Y'(t)+\beta_0Y(t)\leq 0.\]
This readily implies
\[ Y(t)\leq Y(0)e^{-(k+1)\beta_0t}\leq Y(0).\]
Taking the $(k+1)^{th}$ root on both sides and letting
$k\rightarrow \infty$, we obtain
\[\sum_{i=1}^{4}\sup_{0<t<\infty}|U_i|_{L^{\infty}(\Omega)}\leq C|Z_1(\cdot,0)|_{L^{\infty}(\Omega)}.\]
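To make the limit procedure explicit (a sketch, using the standard fact that $|f|_{L^{k+1}(\Omega)}\rightarrow |f|_{L^{\infty}(\Omega)}$ as $k\rightarrow\infty$ on the bounded domain $\Omega$): Gronwall's inequality applied to $\frac{1}{k+1}Y'(t)+\beta_0Y(t)\leq 0$ gives
\[ Y(t)\leq Y(0)e^{-(k+1)\beta_0 t}\leq Y(0)\leq 4|\Omega|\max_{1\leq i\leq 4}|U_i(\cdot,0)|_{L^{\infty}(\Omega)}^{k+1},\]
so that for each $i$,
\[ |U_i(\cdot,t)|_{L^{k+1}(\Omega)}\leq Y(t)^{\frac{1}{k+1}}\leq \big(4|\Omega|\big)^{\frac{1}{k+1}}\max_{1\leq i\leq 4}|U_i(\cdot,0)|_{L^{\infty}(\Omega)},\]
where the prefactor $(4|\Omega|)^{\frac{1}{k+1}}$ tends to $1$ as $k\rightarrow\infty$.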
This implies that the solution $Z_1(x,t)$ is asymptotically stable near the steady-state solution $Z^*(x)$.
\hfill Q.E.D.
\section{Further Stability Analysis}
\setcounter{section}{5}
\setcounter{local}{1}
In this section we investigate the stability of constant steady-state solutions corresponding to the system (1.1)-(1.4). To illustrate the method and the physical meaning, we further assume that the diffusion coefficients and the death rates are constants:
\ \\
{\bf H(5.1)}. (a) Let $a_i$ and $d_i$ be positive constants, and
\[ a_0=\min\{a_1, a_2, a_3, a_4\}, ~~d_0=\min \{d_1, d_2, d_3, d_4\}.\]
(b) The functions $b$, $h_1$ and $h_2$ are of the following
forms, for positive constants $b_0$, $g_0$, $K_1$ and $K_2$:
\[ b(x,s)=b_0s\left(1-\frac{s}{K_{1}}\right), ~~h_1(s)=\frac{s}{s+K_2}, ~~h_2(s)=g_0s\left(1-\frac{s}{K_2}\right).\]
Consider the corresponding steady-state system in $\Omega$:
\setcounter{section}{5}
\setcounter{local}{1}
\begin{eqnarray}
-\nabla\cdot [a_1(x)\nabla u_1] & = & b(x,u_1)-\beta_1u_1u_2-\beta_2u_1\cdot h_1(u_4)-d_1u_1+\sigma u_3,\\
-\nabla\cdot [a_2(x)\nabla u_2] & = & \beta_1u_1u_2+\beta_2u_1 \cdot h_1(u_4)-(d_2+\gamma)u_2,\stepcounter{local} \\
-\nabla\cdot [a_3(x)\nabla u_3] & = & \gamma u_2-(d_3+\sigma)u_3,\stepcounter{local} \\
-\nabla\cdot [a_4(x)\nabla u_4] & = & \xi u_2 +h_2(u_4)-d_4 u_4\stepcounter{local}
\end{eqnarray}
subject to the boundary condition
\begin{eqnarray}
\partial_{\nu}U(x) =0, \hspace{1cm} x\in \partial \Omega,\stepcounter{local}
\end{eqnarray}
where $U(x)=(u_1(x), u_2(x), u_3(x), u_4(x))$.
We can easily derive from (5.1) to (5.4) that
\[ (d_1-b_0)\int_{\Omega}u_1dx+d_2\int_{\Omega}u_2dx+d_3\int_{\Omega}u_3dx+\frac{b_0}{K_1}\int_{\Omega}u_1^2dx=0.\]
\[(d_4-g_0)\int_{\Omega}u_4dx+\frac{g_0}{K_2}\int_{\Omega}u_4^2dx=\xi\int_{\Omega}u_2dx, \]
from which we readily see that there exists only the trivial solution, i.e., $u_1=u_2=u_3=u_4=0$, if $b_0\leq d_1$ and $g_0\leq d_4$.
On the other hand, we can also see that there are two sets of steady-state solutions. The first set of constant solutions requires
$b_0> d_1$ and $g_0> d_4$:
\begin{eqnarray*}
& & Z_1=(0,0,0,0); ~~Z_2=(\frac{K_1(b_0-d_1)}{b_{0}}, 0,0, 0);
~~Z_3=(0,0,0,\frac{K_2(g_0-d_4)}{g_{0}}).
\end{eqnarray*}
There exists another set of constant solutions:
\[ Z_4=\left\{(S,I,R,B): R=\frac{\gamma}{d_3+\sigma}I \right\}, \]
where $S,I$ and $B$ are the solutions of the following nonlinear system:
\begin{eqnarray}
& & \frac{b_0}{K_1}S^2-(b_0-d_1)S+\left(d_2+\gamma-\frac{\sigma \gamma}{d_3+\sigma}\right)I=0,\\
& & \frac{g_0}{K_2}B^2-(g_0-d_4)B-\xi I=0,\stepcounter{local}\\
& & S=\frac{(d_2+\gamma)I}{\beta_1I+\beta_2 h_1(B)}.\stepcounter{local}
\end{eqnarray}
\ \\
{\bf Lemma 5.1}. The nonlinear system (5.5)-(5.7) has at least one solution if and only if the following condition holds:
\[\frac{K_1(b_0-d_1)}{2b_0}>\frac{d_2+\gamma}{\beta_2}.\]
\ \\
{\bf Proof}: We first derive a condition which ensures the existence of
a nontrivial constant solution.
By solving the quadratic equation (5.5) for $S$, we obtain
\begin{eqnarray*}
& & S_1=\frac{(b_0-d_1)+ \sqrt{(b_0-d_1)^2-\frac{4b_0}{K_1}[(d_2+\gamma)-\frac{\sigma \gamma}{d_3+\sigma}]I}}{\frac{2b_0}{K_1}},\\
& & S_2=\frac{(b_0-d_1)- \sqrt{(b_0-d_1)^2-\frac{4b_0}{K_1}[(d_2+\gamma)-\frac{\sigma \gamma}{d_3+\sigma}]I}}{\frac{2b_0}{K_1}}.
\end{eqnarray*}
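As a quick numerical sanity check (with illustrative parameter values, not taken from the model), one may verify that the displayed roots indeed solve the quadratic equation (5.5):

```python
import numpy as np

# Illustrative check: S1 and S2 below should solve the quadratic (5.5),
#   (b0/K1)*S**2 - (b0 - d1)*S + c*I = 0,
# where c = (d2 + gamma) - sigma*gamma/(d3 + sigma).
# All parameter values are arbitrary samples, not values from the paper.
b0, d1, K1 = 2.0, 1.0, 4.0
d2, d3, gamma, sigma = 1.0, 1.2, 0.8, 0.3
c = (d2 + gamma) - sigma * gamma / (d3 + sigma)
I = 0.2

disc = (b0 - d1)**2 - 4.0 * b0 * c * I / K1   # nonnegative since I <= I*
S1 = ((b0 - d1) + np.sqrt(disc)) / (2.0 * b0 / K1)
S2 = ((b0 - d1) - np.sqrt(disc)) / (2.0 * b0 / K1)

for S in (S1, S2):
    print(abs((b0 / K1) * S**2 - (b0 - d1) * S + c * I) < 1e-9)
```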
Noting that
\[ (d_2+\gamma)-\frac{\sigma\gamma}{d_3+\sigma}>0, \]
we see that the range of $I$ must satisfy
\[ 0\leq I\leq I^*:=\frac{K_1(b_0-d_1)^2}{4b_0[(d_2+\gamma)-\frac{\sigma\gamma}{d_3+\sigma}]}.\]
But we can see from Eq.(5.7) that
\[S=\frac{(d_2+\gamma)I}{\beta_1I+\beta_2 h_1(B)}=\frac{d_2+\gamma}{\beta_1}[1-\frac{\beta_2 h_1(B)}{\beta_1I+\beta_2h_1(B)}].\]
If we consider $S$ as a function of $I$, i.e., $S=S(I)$, we get
\[ S(0)=0, ~~S'(I)>0, ~~S(\infty)=\frac{d_2+\gamma}{\beta_1}.\]
On the other hand, if we consider $S_1$ as a function of $I$, i.e., $S_1=S_1(I)$,
then we have
\[ S_1(0)=\frac{K_1(b_0-d_1)}{b_0}, ~~S_1'(I)<0.\]
We readily see that
\[ \min_{I\in[0,I^*]}S_1(I)=S_1(I^*)=\frac{K_1(b_0-d_1)}{2b_0}, ~~\max_{I\in [0,I^*]}S_1(I)=S_1(0)=\frac{K_1(b_0-d_1)}{b_0}.\]
Consequently, $S(I)$ and $S_1(I)$ have an intersection point if and only if
\[ \frac{K_1(b_0-d_1)}{2b_0}>\frac{d_2+\gamma}{\beta_2}.\]
Moreover, the intersection point is unique since both $S(I)$ and $S_1(I)$ are monotone functions.
Similarly, we see for $S_2$,
\[ S_2(0)=0, ~~S_2'(I)>0, ~~S_2''(I)>0.\]
Hence we have
\[\max_{I\in [0,I^*]}S_2(I)=\frac{K_1(b_0-d_1)}{2b_0}.\]
The above indicates the existence of an intersection point between $S(I)$ and $S_2(I)$ as long as
\[\frac{K_1(b_0-d_1)}{2b_0}>\frac{d_2+\gamma}{\beta_2}.\]
Once $I$ and $S$ are determined, one can easily solve for $B$ from Eq.(5.6):
\[ B=\frac{K_2\left[(g_0-d_4)+\sqrt{(g_0-d_4)^2+\frac{4g_0\xi}{K_2}I}\,\right]}{2g_0}.\]
\hfill Q.E.D.
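As a numerical sanity check (with illustrative parameter values, not taken from the model), one may verify that the displayed expression for $B$ is the positive root of Eq.\ (5.6):

```python
import numpy as np

# Illustrative check: the displayed positive root B should solve Eq. (5.6),
#   (g0/K2)*B**2 - (g0 - d4)*B - xi*I = 0.
# All parameter values below are arbitrary samples, not values from the paper.
g0, d4, K2, xi, I = 2.0, 1.0, 3.0, 0.5, 0.7

B = K2 * ((g0 - d4) + np.sqrt((g0 - d4)**2 + 4.0 * g0 * xi * I / K2)) / (2.0 * g0)

residual = (g0 / K2) * B**2 - (g0 - d4) * B - xi * I
print(B > 0, abs(residual) < 1e-9)
```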
\ \\
{\bf Proof of Theorem 2.2}.
Let $A$ be the diagonal matrix with the diffusion coefficients $a_i$.
We can calculate the Jacobian matrix for the nonlinear reaction terms from system (2.1)-(2.4):
\[B_1(Z)= \left( \frac{\partial f_i}{\partial u_{j}} \right)_{4\times 4}.\]
For $Z_1=(0,0,0,0)$, a direct computation gives the $4 \times 4$~matrix:
\[B_1(Z_1)= \left( \begin{array}{cccc}
b_0-d_1 & 0 & \sigma & 0\\
0 & -(d_2+\gamma) & 0 & 0 \\
0 & \gamma & -(d_3+\sigma) & 0\\
0 & \xi & 0 & g_0-d_4
\end{array} \right).\]
Let $0=\lambda_1<\lambda_2\leq \lambda_3\leq\cdots $ be the eigenvalues of the Laplacian operator subject to the homogeneous Neumann boundary condition.
It is easy to calculate the eigenvalues of $A_j(Z_1)=DF(Z_1)-\lambda_jA$:
\[ \mu_{1j}=b_0-d_1-\lambda_j a_1, \mu_{2j}=-(d_2+\gamma)-\lambda_j a_2, \mu_{3j}=-(d_3+\sigma)-\lambda_ja_3, \mu_{4j}=
g_0-d_4-\lambda_j a_4.\]
Since $\lambda_1=0$ is the first eigenvalue, we have $\mu_{11}=b_0-d_1$ and $\mu_{41}=g_0-d_4$; it follows that
$Z_1=(0,0,0,0)$ is unstable whenever $b_0> d_1$ or $g_0 > d_4$.
Since $\lambda_j\geq 0$, these eigenvalues show that the stability of $Z_1$ is not affected by the diffusion processes.
This is natural: when the birth rate exceeds the death rate, the population remains positive for all time.
For $Z_2=(\frac{K_1(b_0-d_1)}{b_{0}}, 0,0, 0)$, we can see the $4 \times 4$~matrix:
\[B_1(Z_2)= \left( \begin{array}{cccc}
-(b_0-d_1) & -\frac{K_1\beta_1(b_0-d_1)}{b_0} & \sigma & -\frac{\beta_2K_1(b_0-d_1)}{b_0K_2} \\
0 & \frac{\beta_1K_1(b_0-d_1)}{b_0}-(d_2+\gamma) & 0 & \frac{\beta_2K_1(b_0-d_1)}{b_0K_2} \\
0 & \gamma & -(d_3+\sigma) & 0\\
0 & \xi & 0 & g_0-d_4
\end{array} \right).\]
Then we consider
\[ A_j(Z_2)=DF(Z_2)-\lambda_jA,\]
and find that its characteristic polynomial, denoted by $P(\mu)$, is equal to
\begin{eqnarray*}
P(\mu)= & & [(b_0-d_1)+\lambda_ja_1+\mu][(d_3+\sigma)+\lambda_ja_3+\mu]\\
& & \times\{ \mu^2-[m_0-(d_2+\gamma+\lambda_j a_2)+(g_0-d_4-\lambda_ja_4)]\mu\\
& & + [m_0-(d_2+\gamma+\lambda_j a_2)][g_0-d_4-\lambda_j a_4]-\xi m_0\},
\end{eqnarray*}
where
\[ m_0=\frac{\beta_2K_1(b_0-d_1)}{b_0}.\]
We obtain the eigenvalues
\begin{eqnarray*}
\mu_1 & = & -(b_0-d_1)-\lambda_ja_1,\\
\mu_2 & = & -(d_3+\sigma+\lambda_ja_3),\\
\mu_3 & = & \frac{M_1+\sqrt{M_1^2-4M_2}}{2},\\
\mu_4 & = & \frac{M_1-\sqrt{M_1^2-4M_2}}{2},
\end{eqnarray*}
where
\begin{eqnarray*}
M_1 & = & m_0-(d_2+\gamma+\lambda_ja_2)+(g_0-d_4-\lambda_ja_4);\\
M_2 & = & [m_0-(d_2+\gamma+\lambda_ja_2)][g_0-(d_4+\lambda_ja_4)]-\xi m_0
\end{eqnarray*}
It follows that
$Z_2$ is locally stable if $M_1<0$ and $M_2>0$ for every $j$, and unstable if $M_1>0$ or $M_2<0$ for some $j$.
Now take $j=1$, so that $\lambda_1=0$. Recall that $Z_2$ exists only when $b_0>d_1$ and $g_0>d_4$, so that $m_0>0$ and $g_0-d_4>0$.
If $m_0-(d_2+\gamma)>0$, then $M_1>0$; if $m_0-(d_2+\gamma)\leq 0$, then $M_2\leq -\xi m_0<0$.
Consequently, we conclude that $Z_2$ is an unstable steady-state solution.
Now we calculate $A_j(Z_3)$:
\[ A_j(Z_3)=DF(Z_3)-\lambda_j A.\]
For $Z_3=(0,0,0,\frac{K_2(g_0-d_4)}{g_{0}})$, we can see the $4 \times 4$~matrix:
\[B_1(Z_3)= \left( \begin{array}{cccc}
(b_0-d_1) & 0 & \sigma & 0 \\
\frac{\beta_2(g_0-d_4) }{(2g_0-d_4)} & -(d_2+\gamma) & 0 & 0 \\
0 & \gamma & -(d_3+\sigma) & 0\\
0 & \xi & 0 & -( g_0-d_4)
\end{array} \right).\]
We know the characteristic polynomial of the matrix $A_j(Z_3)$, namely $P(\mu)=\det(A_j(Z_3)-\mu I_{4\times 4})$, is equal to
\begin{eqnarray*}
P(\mu)= & & -[(g_0-d_4+\lambda_j a_4)+\mu]P_0(\mu),
\end{eqnarray*}
where
\[ P_0(\mu)=(b_0-d_1-\lambda_j a_1-\mu)(d_3+\sigma+\lambda_ja_3+\mu)(d_2+\gamma+\lambda_ja_2+\mu)+\frac{\sigma \gamma \beta_2(g_0-d_4)}{2g_0-d_4}.\]
Hence, the first eigenvalue is equal to
\begin{eqnarray*}
\mu_1 & = & -(g_0-d_4+\lambda_j a_4).
\end{eqnarray*}
To locate the remaining roots of $P(\mu)$, we use a lemma from Yin-Chen-Wang \cite{YCW2017}.
\ \\
{\bf Lemma 5.2} Let $p>0$, $q$ and $h$ be constants, and
\[ P_0(\mu)=\mu^3+p\mu^2+q\mu +h=0.\]
Then it holds that \\
(a) If $h<0$, there exists a positive root;\\
(b) If $0<h<pq$, all roots have negative real parts;\\
(c) If $pq<h$, there is a root with positive real part;\\
(d) If $pq=h$, the roots are $\mu_1=-p, \mu_2=\sqrt{-q}, \mu_3=-\sqrt{-q}.$
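The sign conditions of Lemma 5.2 can be illustrated numerically (a hypothetical check on sample coefficients, not a proof): for the cubic $\mu^3+p\mu^2+q\mu+h$ with $p>0$, compare the conditions on $h$ and $pq$ with the roots computed directly.

```python
import numpy as np

# Numerical illustration of Lemma 5.2 on sample coefficients (not a proof):
# for mu^3 + p*mu^2 + q*mu + h with p > 0, compare the sign conditions on
# h and p*q with the actual roots returned by numpy.roots.
def max_real_part(p, q, h):
    return max(r.real for r in np.roots([1.0, p, q, h]))

# (a) h < 0: there exists a positive root.
print(max_real_part(3.0, 2.0, -1.0) > 0)
# (b) 0 < h < p*q: all roots have negative real parts (Routh-Hurwitz).
print(max_real_part(3.0, 2.0, 1.0) < 0)
# (c) p*q < h: there is a root with positive real part.
print(max_real_part(1.0, 1.0, 5.0) > 0)
```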
Let
\[ P_0(\mu)=\mu^3+p\mu^2+q\mu+h,\]
with its coefficients given by
\begin{eqnarray*}
p & = & (d_2+\gamma+\lambda_j a_2)+(d_3+\sigma+\lambda_ja_3)-(b_0-d_1-\lambda_ja_1);\\
q & = & (d_2+\gamma+\lambda_j a_2)(d_3+\sigma+\lambda_j a_3)-(b_0-d_1-\lambda_j a_1)[(d_2+\gamma+\lambda_j a_2)+(d_3+\sigma+\lambda_ja_3)];\\
h & = & (d_1+\lambda_ja_1-b_0)(d_2+\gamma+\lambda_ja_2)(d_3+\sigma+\lambda_j a_3)-\frac{\sigma\gamma\beta_2(g_0-d_4)}{2g_0-d_4}.
\end{eqnarray*}
Since $\lambda_1=0$ is one of the eigenvalues and $d_1-b_0<0, g_0-d_4>0$, we see $h<0$ from the expression of $h$, so $Z_3$ is unstable.
Finally, we study the stability of $Z_4$. Since $u_4$ always has a positive solution as long as $u_2$ is positive, it does not affect
the stability of the other variables, so we only need to focus on the stability of $(u_1,u_2,u_3)$. Furthermore, since the terms $\lambda_ja_i$ with $j\geq 2$ only shift the eigenvalues towards the left half-plane and thus enhance stability, it suffices to find the conditions for stability at the first eigenvalue $\lambda_1=0$.
It is easy to calculate the Jacobian matrix
\[B_1^*= \left( \begin{array}{ccc}
-L_0 & -\beta_1S_0 & \sigma \\
\beta_1I_0+\beta_2h_1(B_0) & -(d_2+\gamma) & 0 \\
0 & \gamma & -(d_3+\sigma)
\end{array} \right)\]
where
\[ L_0= (d_1-b_0)+\frac{2b_0S_0}{K_1}+\beta_1I_0+\beta_2h_1(B_0).\]
The characteristic polynomial of $B_1^*$ is equal to
\[ P(\mu)=\mu^3+p_0\mu^2+q_0\mu+h_0,\]
where
\begin{eqnarray*}
p_0 & = & L_0+(d_2+\gamma)+(d_3+\sigma);\\
q_0 & = & (d_3+\sigma)(L_0+d_2+\gamma)+L_0(d_2+\gamma)
+\beta_1S_0(\beta_1I_0+\beta_2h_1(B_0));\\
h_0 & = & (d_3+\sigma)[L_0(d_2+\gamma)+\beta_1S_0(\beta_1I_0+\beta_2h_1(B_0))]
-\sigma\gamma(\beta_1 I_0+\beta_2h_1(B_0)).
\end{eqnarray*}
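As a numerical cross-check (with illustrative parameter values; \texttt{hB0} stands in for $h_1(B_0)$), the coefficients $p_0$, $q_0$, $h_0$ reproduce the characteristic polynomial of the displayed $3\times 3$ matrix $B_1^*$:

```python
import numpy as np

# Sanity check with arbitrary sample values (not from the model): p0, q0, h0
# should give det(mu*I - B1*) = mu^3 + p0*mu^2 + q0*mu + h0 for the 3x3
# Jacobian B1^* displayed in the text.
d2, d3, gamma, sigma = 1.0, 1.2, 0.8, 0.3
beta1, beta2 = 0.4, 0.6
S0, I0, hB0, L0 = 1.5, 0.9, 0.5, 0.7   # hB0 stands for h1(B0)

w = beta1 * I0 + beta2 * hB0
B1 = np.array([[-L0,  -beta1 * S0,   sigma        ],
               [ w,   -(d2 + gamma), 0.0          ],
               [ 0.0,  gamma,       -(d3 + sigma)]])

p0 = L0 + (d2 + gamma) + (d3 + sigma)
q0 = (d3 + sigma) * (L0 + d2 + gamma) + L0 * (d2 + gamma) + beta1 * S0 * w
h0 = (d3 + sigma) * (L0 * (d2 + gamma) + beta1 * S0 * w) - sigma * gamma * w

coeffs = np.poly(B1)   # coefficients of det(mu*I - B1*), leading 1
print(np.allclose(coeffs, [1.0, p0, q0, h0]))
```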
By Lemma 5.2, we can see precisely when the steady-state solution is stable or unstable as the parameters vary.
In particular, when $L_0>0$, if $\sigma, \gamma, \beta_1$ and $\beta_2$ are sufficiently small, then
the condition $0<h_0<p_0q_0$ holds. Consequently, the steady-state solution $(S_0,I_0,R_0)$ is stable. This confirms the result of Theorem 2.2 about the stability analysis of the steady-state solution. \hfill Q.E.D.
\section{Conclusion}
In this paper we have studied a nonlinear mathematical model for an epidemic caused by cholera without life-time immunity.
The diffusion coefficients are different for each species. Moreover, these coefficients are allowed to be dependent upon the concentration as well as the space location and time. The resulting model system is strongly coupled. We established the global well-posedness for the coupled reaction-diffusion system under some very mild conditions on the given data. Moreover,
we have analyzed the linear stability for the steady-state solutions and proved that there is a turing phenomenon when the diffusion coefficients are different. This result indicates that there are some fundamental differences between the ODE model and the corresponding PDE model.
These results show that the mathematical model is well-defined and can be used by other researchers to conduct field studies.
The theoretical results obtained in this paper lay a solid foundation for further study of
more constructive qualitative properties of the solutions, and will provide scientists with a deeper understanding of the dynamics of the interaction between the bacteria and the susceptible, infected and recovered populations. We have used many ideas and techniques from elliptic and parabolic equations, particularly the energy method and Sobolev's inequalities.
Some open questions remain to be answered, and further studies are needed.
\ \\
{\bf Acknowledgements}.
This work was motivated by some open questions raised by Professor K. Yamazaki from Texas Tech University and Professor Jin Wang from University of Tennessee at Chattanooga in WSU biological seminar series. The authors would like to thank them for some helpful discussions about the model.
The work of the second author was substantially supported by Hong Kong RGC General Research Fund (projects 14306921 and 14306719).
\end{document} |
\begin{document}
\thanks{The first author acknowledges the support of FAPESP grant 2008/11471-6 and the second author the support of NSF grant DMS 0556368}
\subjclass[2000]{Primary: 46B03, Secondary 03E15}
\keywords{Tight Banach spaces, Dichotomies, Classification of Banach spaces}
\begin{abstract} We analyse several examples of separable Banach spaces, some of them new, and relate them to
several dichotomies obtained in \cite{FR4}, by
classifying them according to which side of the dichotomies they fall.
\end{abstract}
\title{Banach spaces without minimal subspaces - Examples}
\tableofcontents
\section{Introduction}\label{intro}
In this article we give several new examples of Banach spaces, corresponding
to different classes of a list defined in \cite{FR4}. This paper may be seen as
a more empirical continuation of \cite{FR4} in which our stress is on the
study of examples for the new classes of
Banach spaces considered in that work.
\subsection{Gowers' list of inevitable classes}
In the paper \cite{g:dicho}, W.T. Gowers had defined a program of isomorphic
classification of Banach spaces. The aim of this program is a {\em loose
classification of Banach spaces up to subspaces}, by producing a list of classes of Banach spaces such that:
(a) if a space belongs to a class,
then every subspace belongs to the same class, or maybe, in the case when the
properties defining the class depend on a basis of the space, every block
subspace belongs to the same class,
(b) the classes are {\em inevitable}, i.e., every Banach space contains a subspace in one of the classes,
(c) any two classes in the list are disjoint,
(d) belonging to one class gives a lot of information about operators that may be defined on the space or on its subspaces.
\
We shall refer to such a list as a {\em list of inevitable classes of
Gowers}.
For the motivation of Gowers' program as well as the relation of this program
to classical problems in Banach space theory we refer to \cite{FR4}.
Let us just say that the class of spaces $c_0$ and $\ell_p$ is seen as the
nicest or most regular class, and
so, the objective of Gowers' program really is the classification of those spaces (such as Tsirelson's
space $T$) which do not contain a copy of $c_0$ or $\ell_p$. Actually, in \cite{FR4}, mainly spaces without {\em minimal subspaces} are
classified, and so in this article, we shall consider various examples of
Banach spaces without minimal subspaces.
We shall first give a summary of the classification obtained in \cite{FR4}
and of the results that led to that classification.
After the construction by Gowers and Maurey of a {\em hereditarily
indecomposable} (or HI) space $GM$, i.e., a space such that no subspace may be
written as the direct sum of infinite dimensional subspaces \cite{GM}, Gowers
proved that every Banach space contains either an HI subspace or a subspace
with an unconditional basis \cite{g:hi}.
This dichotomy is called {\em first dichotomy} of
Gowers in \cite{FR4}.
These were the first two examples of
inevitable classes. He then
refined the list by proving a {\em second dichotomy}: any Banach space contains a subspace with a
basis such that either no two disjointly supported block subspaces are
isomorphic, or such that any two subspaces have further
subspaces which are isomorphic. He called the second property {\em quasi
minimality}. Finally,
H. Rosenthal had defined a space to be {\em minimal} if it embeds into any of its subspaces. A
quasi minimal space which does not contain a minimal subspace is called {\em
strictly quasi minimal}, so Gowers again divided the class of quasi minimal
spaces into the class of strictly quasi minimal spaces and the class of minimal
spaces.
Gowers therefore produced a list of four inevitable classes of Banach spaces,
corresponding to classical examples, or to more recent counterexamples to
classical questions:
HI spaces, such as $GM$; spaces with bases such that no disjointly supported
subspaces are isomorphic, such as the
counterexample $G_u$ of Gowers to the hyperplane problem of Banach \cite{g:hyperplanes};
strictly quasi minimal spaces with an unconditional basis, such as Tsirelson's space $T$ \cite{tsi}; and
finally, minimal spaces, such as $c_0$ or $\ell_p$, but also $T^*$,
Schlumprecht's space $S$ \cite{S1}, or, as proved recently in \cite{MP}, its dual $S^*$.
\
\subsection{The three new dichotomies}
In \cite{FR4} three dichotomies for Banach spaces were
obtained. The first one of these new dichotomies,
the {\em third dichotomy}, concerns the property of minimality defined by Rosenthal. Recall that a
Banach space is minimal if it embeds into any of its infinite dimensional
subspaces. On the other hand, a space $Y$ is {\em
tight} in a basic sequence $(e_i)$ if there is a sequence of successive
subsets $I_0<I_1<I_2<\ldots$ of $\N$, such that for all infinite subsets
$A\subseteq \N$, we have
$$
Y\not\sqsubseteq [e_n\del n\notin \bigcup_{i\in A}I_i].
$$
A {\em tight basis} is a basis such that every subspace is tight
in it, and a {\em tight space} is a space with a tight basis \cite{FR4}.
The subsets $I_n$ may clearly be chosen to be intervals or even to form a partition of $\N$. However it is convenient not to require this condition in the definition, in view of forthcoming special cases of tightness.
It is observed in \cite{FR4} that the tightness property is hereditary,
incompatible with minimality, and it is proved that:
\begin{thm}[3rd dichotomy, Ferenczi-Rosendal 2007]\label{main}
Let $E$ be a Banach space without minimal subspaces. Then $E$ has a tight subspace.
\end{thm}
Actual examples of tight spaces in \cite{FR4} turn out to satisfy one of two stronger forms of tightness.
The first was called {\em tightness by range}. Here the
range, ${\rm range} \ x$, of a vector $x$ is the smallest interval of integers
containing its support on the given basis, and the range of a block subspace $[x_n]$ is
$\bigcup_n {\rm range} \ x_n$. A basis $(e_n)$ is tight by range when for
every block subspace $Y=[y_n]$, the sequence of successive subsets
$I_0<I_1<\ldots$ of $\N$ witnessing the tightness of $Y$ in $(e_n)$ may be
defined by $I_k={\rm range}\ y_k$ for each $k$. This is equivalent to no two
block subspaces with disjoint ranges being comparable, where two spaces are comparable if one embeds into the other.
When the definition of tightness may be checked with $I_k={\rm supp}\ y_k$
instead of ${\rm range}\ y_k$, then a stronger property is obtained which is called
tightness by support, and is
equivalent to the property defined by Gowers in the second dichotomy that no disjointly
supported block subspaces are isomorphic. Therefore $G_u$ is an example of a
space with a basis which is tight by support and therefore by range.
The second kind of tightness was called {\em tightness with constants}. A basis $(e_n)$ is
tight with constants when for every infinite dimensional space $Y$, the
sequence of successive subsets $I_0<I_1<\ldots$ of $\N$ witnessing the
tightness of $Y$ in $(e_n)$ may be chosen so that
$Y \not\sqsubseteq_K [e_n \del n \notin I_K]$ for each $K$. This is the case for Tsirelson's space $T$ or its $p$-convexified version $T^{(p)}$ \cite{CS}.
As we shall see, one of the aims of this paper is to present various examples of tight spaces
of these two forms.
\
In \cite{FR4} it was proved that there are natural dichotomies between each of these strong
forms of tightness and respective weak forms of minimality. For the first
notion, a space $X$ with a
basis $(x_n)$ is said to be {\em subsequentially minimal} if every subspace of $X$
contains an isomorphic copy of a subsequence of $(x_n)$. Essentially
this notion had been previously considered by Kutzarova, Leung, Manoussakis
and Tang in the context of modified partially mixed Tsirelson spaces \cite{KLMT}.
\begin{thm}[4th dichotomy, Ferenczi-Rosendal 2007]\label{main2}
Any Banach space $E$ contains a
subspace with a basis that is either tight by range or is subsequentially minimal.
\end{thm}
The second case in Theorem \ref{main2}
may be improved to the following hereditary property of a basis $(x_n)$, that we call {\em
sequential minimality}: $(x_n)$ is quasi minimal and every block sequence of
$[x_n]$ has a subsequentially minimal block sequence.
There is also a dichotomy concerning tightness with constants. Recall that given two Banach spaces $X$ and $Y$, we say that $X$ is {\em crudely finitely
representable} in $Y$ if there is a constant $K$ such that for any
finite-dimensional subspace $F\subseteq X$ there is an embedding $T\colon F\rightarrow
Y$ with constant $K$, i.e., $\norm{T}\cdot\norm{T^{-1}}\ensuremath{\leqslant} K$.
A space $X$ is said to be {\em locally minimal} if for some constant
$K$, $X$ is $K$-crudely finitely representable in any of its subspaces.
\begin{thm}[5th dichotomy, Ferenczi-Rosendal 2007] \label{main3}
Any Banach space $E$ contains a subspace with a basis that is either tight
with constants or is locally minimal.
\end{thm}
\
Finally there exists a sixth
dichotomy theorem due to A.
Tcaciuc \cite{T}, stated here in a slightly strengthened form. A space $X$ is
{\em uniformly inhomogeneous} when
$$\forall M\ensuremath{\geqslant} 1\; \exists n \in \N\; \forall Y_1,\ldots,Y_{2n} \subseteq X\;
\exists y_i \in\ku S_{Y_i}\;(y_i)_{i=1}^n\not\sim_M(y_i)_{i=n+1}^{2n},$$
where $Y_1,\ldots,Y_{2n}$ are assumed to be infinite-dimensional subspaces of $X$.
On the contrary, a basis $(e_n)$ is said to be {\em strongly asymptotically
$\ell_p$}, $1 \ensuremath{\leqslant} p \ensuremath{\leqslant} +\infty$, \cite{DFKO}, if there exists a constant $C$
and a function $f:\N \rightarrow \N$ such that for any $n$, any family of $n$
unit vectors which are disjointly supported in $[e_k \del k \ensuremath{\geqslant} f(n)]$ is
$C$-equivalent to the canonical basis of $\ell_p^n$.
Tcaciuc then proves \cite{T}:
\begin{thm}[Tcaciuc's dichotomy, 2005]
Any Banach space contains a subspace with a basis which is either uniformly
inhomogeneous or strongly asymptotically $\ell_p$ for some
$1 \ensuremath{\leqslant} p \ensuremath{\leqslant} +\infty$.
\end{thm}
The six dichotomies and the interdependence of the properties involved can be
visualised in the following diagram.
\[
\begin{tabular}{ccc}
Strongly asymptotic $\ell_p$&$**\textrm{ Tcaciuc's dichotomy }**$&
Uniformly inhomogeneous\\
$\Downarrow$& &$\Uparrow$\\
Unconditional basis&$**\textrm{ 1st dichotomy }**$& Hereditarily indecomposable\\
$\Uparrow$& &$\Downarrow$\\
Tight by support & $**\textrm{ 2nd dichotomy }**$ & Quasi minimal \\
$\Downarrow$&&$\Uparrow$\\
Tight by range & $**\textrm{ 4th dichotomy }**$ & Sequentially minimal \\
$\Downarrow$&&$\Uparrow$\\
Tight& $**\textrm{ 3rd dichotomy }**$ & Minimal\\
$\Uparrow$& &$\Downarrow$\\
Tight with constants& $**\textrm{ 5th dichotomy }**$ & Locally minimal\\
\end{tabular}
\]
\
Moreover,
$${\rm Strongly\ asymptotic\ } \ell_p {\rm\ not\ containing\ } \ell_p,\ 1 \ensuremath{\leqslant} p <+\infty
\Rightarrow {\rm Tight\ with\ constants},$$
and
$${\rm Strongly\ asymptotic\ } \ell_{\infty}
\Rightarrow {\rm Locally\ minimal}.$$
\
Note that while a basis tight by support must be unconditional, a basis which is tight by range may span a HI space. So tightness by support and tightness by range are two different notions. We would lose this subtle difference if we required the sets $I_n$ to be intervals in the definition of tightness.
Likewise a basis may be tight by range without being (nor containing a basis which is) tight with constants, and tight with constants without being (nor containing a basis which is) tight by range. Actually none of the converses of the implications appearing on the left or the right of the list of the six dichotomies holds, even if one allows passing to a further subspace. All the claims of this paragraph are easily checked by looking at the list of examples of Theorem \ref{final}, which is the aim of this paper.
The fact that a strongly asymptotically $\ell_p$ space not containing $\ell_p$ must be tight with constants is proved in \cite{FR4} but is essentially due to the authors of \cite{DFKO}, and
the observation that such bases are unconditional may also be found in \cite{DFKO}. The easy fact that HI spaces are uniformly homogeneous (with $n=2$ in the definition) is observed in \cite{FR4}. That HI spaces are quasi-minimal is due to Gowers \cite{g:dicho}, and that minimal spaces are locally minimal is a consequence of an observation by P. G. Casazza \cite{C} that every minimal space must $K$-embed into all its subspaces for some $K \ensuremath{\geqslant} 1$.
The other implications are direct consequences of the definitions, and more explanations and details may be found in \cite{FR4}.
\
\subsection{The list of 19 inevitable classes} Combining the six dichotomies and the relations between them, the following
list of 19 classes of Banach spaces contained in any Banach space is obtained
in \cite{FR4}:
\begin{thm}[Ferenczi - Rosendal 2007]\label{final} Any infinite dimensional Banach space contains a subspace of one of
the types listed in the following chart:
\begin{center}
\begin{tabular}{|l|l|l|}
\hline
Type & Properties & Examples \\
\hline
(1a) & HI, tight by range and with constants & ?\\
(1b) & HI, tight by range, locally minimal & $G^*$\\
\hline
(2) & HI, tight, sequentially minimal & ? \\
\hline
(3a) & tight by support and with constants, uniformly inhomogeneous & ? \\
(3b) & tight by support, locally minimal, uniformly inhomogeneous & $G_u^*$ \\
(3c) & tight by support, strongly asymptotically
$\ell_p$, $1 \ensuremath{\leqslant} p <\infty$ & $X_u, X_{abr}$ \\
(3d) & tight by support, strongly asymptotically
$\ell_{\infty}$ & $X_u^*$ \\
\hline
(4) & unconditional basis, quasi minimal, tight by range & ? \\
\hline
(5a) & unconditional basis, tight with constants, sequentially minimal, & ? \\
& uniformly inhomogeneous & \\
(5b) & unconditional basis, tight, sequentially and locally minimal,
& ? \\
& uniformly inhomogeneous & \\
(5c) & tight with constants, sequentially minimal, & $T$, $T^{(p)}$ \\
& strongly asymptotically $\ell_p$, $1 \ensuremath{\leqslant} p<\infty$ & \\
(5d) & tight, sequentially minimal, strongly asymptotically $\ell_{\infty}$ & ?\\
\hline
(6a) & unconditional basis, minimal, uniformly inhomogeneous & $S,S^*$ \\
(6b) & minimal, reflexive, strongly asymptotically $\ell_{\infty}$ & $T^*$\\
(6c) & isomorphic to $c_0$ or $l_p$, $1 \ensuremath{\leqslant} p<\infty$ & $c_0$, $\ell_p$\\
\hline
\end{tabular}
\end{center}
\end{thm}
\
The class of type (2) spaces may be divided into two subclasses, using the 5th dichotomy, and the class of type (4) into four, using the 5th and the 6th dichotomy, giving a total of 19 inevitable classes. Since we know of no example of a type (2) or type (4) space to begin with, we do not write down the list of possible subclasses of these two classes, leaving this as an exercise to the interested reader.
Note that the tightness property may be used to obtain lower bounds of complexity for the relation of isomorphism between subspaces of a given Banach space. This was initiated by B. Bossard \cite{Bo} who used Gowers' space $G_u$ and its tightness by support. Other results in this direction may be found in \cite{FR4}. We also refer to \cite{ergodic} for a more introductory work to this question.
\
In \cite{FR4}, the existence of $X_u$ and the properties of $S$, $G$, $G_u$ and $X_u$ which appear in the chart were stated without proof. It is the
main objective of this paper to prove the results about the spaces which
appear in the above chart.
\
So in what follows, various examples of ``pure'' tight spaces, some of
them new, are analysed, combining
some of the properties of tightness or minimality associated to each dichotomy.
We shall provide several examples of tight spaces from the two main families of exotic
Banach spaces: spaces of the type of Gowers and Maurey \cite{GM} and spaces of the type of
Argyros and Deliyanni \cite{AD}. Recall that both types of spaces are defined
using a coding procedure to
``conditionalise'' the norm of some ground space defined by induction.
In spaces of the type of Gowers and Maurey, the ground space is the space $S$
of Schlumprecht, and in spaces of the type of Argyros and Deliyanni, it is
a mixed (in further versions modified or partly modified) Tsirelson space associated to the sequence of
Schreier families.
The space $S$ is far from being asymptotic $\ell_p$ and is actually uniformly
inhomogeneous, and this is the case for our examples of the type
of Gowers-Maurey as well. On the other hand, we use a space in the second
family, inspired by an example of Argyros, Deliyanni, Kutzarova and
Manoussakis \cite{ADKM}, to produce strongly asymptotically $\ell_1$ and
$\ell_{\infty}$ examples with strong tightness properties.
\section{Tight unconditional spaces of the type of Gowers and Maurey}
In this section we prove that the dual of the type (3) space $G_u$ constructed
by Gowers in \cite{g:hyperplanes} is locally minimal of type (3),
that Gowers' hereditarily indecomposable and asymptotically unconditional
space $G$ defined in \cite{g:asymptotic} is of type (1), and that its dual
$G^*$ is locally minimal of type (1). These spaces are
natural variations on Gowers and Maurey's space $GM$, and so familiarity with
that construction will be assumed: we shall not redefine the now classical
notation relative to $GM$, such as the sets of integers $K$ and $L$, rapidly increasing sequences (or R.I.S.), the
set ${\bf Q}$ of functionals, special functionals, etc., instead we shall try to give details on the
parts in which $G_u$ or $G$ differ from $GM$.
The idea of the proofs is similar to \cite{g:hyperplanes}. The HI property for
Gowers-Maurey's spaces is obtained as follows. Some vector $x$ is constructed
such that $\norm{x}$ is large, but so that if $x'$ is obtained from $x$ by
changing the signs of the components of $x$, then $x^*(x')$ is small for any
norming functional $x^*$, and so $\norm{x'}$ is small. The upper bound for
$x^*(x')$ is obtained by a combination of unconditional estimates (not
depending on the signs) and of conditional estimates (i.e., based on the fact
that $|\sum_{i=1}^n \epsilonilon_i|$ is much smaller than $n$ if
$\epsilonilon_i=(-1)^i$ for all $i$).
For our examples we shall need to prove that some operator $T$ is
unbounded. Thus we shall construct a vector $x$ such that say $Tx$ has large
norm, and such that $x^*(x)$ is small for any norming $x^*$. The upper bound
for $x^*(x)$ will be obtained by the same unconditional estimates as in the HI
case, while conditional estimates will be trivial due to disjointness of
supports of the corresponding component vectors and functionals.
The method will be similar for the dual spaces.
\
Recall that if $X$ is a space with a bimonotone basis, an
$\ell_{1+}^n$-average with constant $1+\epsilonilon$ is a
normalised vector of the form $\sum_{i=1}^n x_i$, where $x_1<\cdots<x_n$ and
$\norm{x_i} \ensuremath{\leqslant} \frac{1+\epsilonilon}{n}$ for all $i$.
An $\ell_{\infty+}^n$-average with constant $1+\epsilonilon$ is a normalised vector of the form
$\sum_{i=1}^n x_i$, where $x_1<\cdots<x_n$ and
$\norm{x_i} \ensuremath{\geqslant} \frac{1}{1+\epsilonilon}$ for all $i$.
An $\ell_{1+}^n$-vector (resp. $\ell_{\infty+}^n$-vector) is a nonzero
multiple of an $\ell_{1+}^n$-average (resp. $\ell_{\infty+}^n$-average).
The function $f$ is defined by $f(n)=\log_2(n+1)$. The space $X$ is said to satisfy a lower $f$-estimate if for any
$x_1<\cdots<x_n$,
$$\frac{1}{f(n)}\sum_{i=1}^n \norm{x_i} \ensuremath{\leqslant} \norm{\sum_{i=1}^n x_i}.$$
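As a numerical illustration (not needed in the sequel): since $f(n)=\log_2(n+1)$, a lower $f$-estimate forces any $n$ successive normalised vectors $x_1<\cdots<x_n$ to satisfy
$$\norm{\sum_{i=1}^n x_i} \ensuremath{\geqslant} \frac{n}{\log_2(n+1)},$$
so for instance $1023$ successive normalised vectors have a sum of norm at least $102.3$; successive normalised vectors in such a space thus behave much more like the $\ell_1$ basis than like the $c_0$ basis.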
\begin{lemme}\label{linftyn}
Let $X$ be a reflexive space with a bimonotone basis and satisfying a lower $f$-estimate.
Let $(y_k^*)$ be a normalised block sequence of $X^*$, $n \in \N$, $\epsilonilon,\alpha>0$.
Then there
exists a constant $N(n,\epsilonilon)$, successive subsets $F_i$ of
$[1,N(n,\epsilonilon)]$, $1 \ensuremath{\leqslant} i \ensuremath{\leqslant} n$, and $\lambda>0$ such that if
$x_i^*:=\lambda \sum_{k \in F_i} y_k^*$ for all $i$, then $x^*=\sum_{i=1}^n
x_i^*$ is an $\ell_{\infty+}^n$-average with constant $1+\epsilonilon$.
Furthermore, if for each $i$, $x_i$ is such that $\norm{x_i} \ensuremath{\leqslant} 1$,
${\rm range}\;x_i \subseteq {\rm range}\;x_i^*$ and $x_i^*(x_i) \ensuremath{\geqslant} \alpha \norm{x_i^*}$, then
$x=\sum_{i=1}^n x_i$ is an $\ell_{1+}^n$-vector with constant
$\frac{1+\epsilonilon}{\alpha}$ such that $x^*(x) \ensuremath{\geqslant} \frac{\alpha}{1+\epsilonilon}\norm{x}$.
\end{lemme}
\begin{proof}
Since $X$ satisfies a lower $f$-estimate, it follows by duality that any
sequence of successive functionals $x_1^*<\cdots<x_n^*$ in $X^*$ satisfies the following
upper estimate:
$$1 \ensuremath{\leqslant} \norm{\sum_{i=1}^n x_i^*} \ensuremath{\leqslant} f(n) \max_{1 \ensuremath{\leqslant} i \ensuremath{\leqslant} n}\norm{x_i^*}.$$
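This is the standard duality argument: fix $x$ with $\norm{x} \ensuremath{\leqslant} 1$ and choose adjacent successive intervals $E_1<\cdots<E_n$ with $E_i \supseteq {\rm range}\ x_i^*$ whose union is an interval. The lower $f$-estimate and bimonotonicity give $\sum_{i=1}^n \norm{E_i x} \ensuremath{\leqslant} f(n)\norm{x} \ensuremath{\leqslant} f(n)$, whence
$$\Bigl(\sum_{i=1}^n x_i^*\Bigr)(x)=\sum_{i=1}^n x_i^*(E_i x) \ensuremath{\leqslant} \Bigl(\max_{1 \ensuremath{\leqslant} i \ensuremath{\leqslant} n}\norm{x_i^*}\Bigr)\sum_{i=1}^n \norm{E_i x} \ensuremath{\leqslant} f(n)\max_{1 \ensuremath{\leqslant} i \ensuremath{\leqslant} n}\norm{x_i^*}.$$
As for the left-hand bound, when the $x_i^*$ are normalised it suffices to evaluate $\sum_i x_i^*$ against a norming vector of any single $x_i^*$, again using bimonotonicity.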
Let $N=n^k$ where $k$ is such that $(1+\epsilonilon)^k>
f(n^k)$. Assume towards a contradiction that the result is false for
$N(n,\epsilonilon)=N$, then
$$y^*=(y_1^*+\ldots+y_{n^{k-1}}^*)+\ldots+(y_{(n-1)n^{k-1}+1}^*+\ldots+y_{n^k}^*)$$
is not an $\ell_{\infty +}^n$-vector with constant $1+\epsilonilon$, and therefore, for some $i$,
$$\norm{y_{i n^{k-1}+1}^*+\ldots+y_{(i+1)n^{k-1}}^*} \ensuremath{\leqslant} \frac{1}{1+\epsilonilon}\norm{y^*}.$$
Applying the same reasoning to the above sum instead of $y^*$, we obtain,
for some $j$,
$$\norm{y_{j n^{k-2}+1}^*+\ldots+y_{(j+1)n^{k-2}}^*} \ensuremath{\leqslant} \frac{1}{(1+\epsilonilon)^2}\norm{y^*}.$$
By induction we obtain that
$$1 \ensuremath{\leqslant} \frac{1}{(1+\epsilonilon)^k} \norm{y^*}
\ensuremath{\leqslant} \frac{1}{(1+\epsilonilon)^k} f(n^k),$$
a contradiction.
Let therefore $x^*$ be such an $\ell_{\infty +}^n$-average with constant
$1+\epsilonilon$ of the form $\sum_i x_i^*$.
Let for each $i$, $x_i$ be such that $\norm{x_i} \ensuremath{\leqslant} 1$,
${\rm range}\ x_i \subseteq {\rm range}\ x_i^*$ and $x_i^*(x_i) \ensuremath{\geqslant} \alpha \norm{x_i^*}$.
Then
$$\norm{\sum_i x_i} \ensuremath{\geqslant} x^*(\sum_i x_i) \ensuremath{\geqslant} \alpha \sum_i \norm{x_i^*} \ensuremath{\geqslant}
\frac{\alpha n}{1+\epsilonilon},$$ and in particular for each $i$,
$$\norm{x_i} \ensuremath{\leqslant} 1 \ensuremath{\leqslant} \frac{1+\epsilonilon}{\alpha n}\norm{\sum_i x_i},$$
so $\sum_i x_i$ is an $\ell_{1+}^n$-vector with constant $\frac{1+\epsilonilon}{\alpha}$. We also obtain that
$$x^*(\sum_i x_i) \ensuremath{\geqslant}
\frac{\alpha n}{1+\epsilonilon} \ensuremath{\geqslant} \frac{\alpha}{1+\epsilonilon}\norm{\sum_i x_i},$$
as required.
\end{proof}
The following lemma is fundamental and therefore worth stating explicitly. It
appears for example as Lemma 4 in \cite{g:asymptotic}. Recall that an
$(M,g)$-form is a functional of the form $g(M)^{-1}(x_1^*+\ldots+x_M^*)$, with
$x_1^*<\cdots<x_M^*$ of norm at most $1$.
\begin{lemme}[Lemma 4 in \cite{g:asymptotic}]\label{fundamental}
Let $f,g\in\mathcal F$ with $g\ensuremath{\geqslant}\sqrt f$, let $X$ be a space with a bimonotone basis
satisfying a lower $f$-estimate, let $\epsilonilon>0$ and $\epsilonilon'=\min\{\epsilonilon,1\}$, let $x_1,\ldots,x_N$ be a R.I.S. in X
for $f$ with constant $1+\epsilonilon$ and let $x=\sum_{i=1}^Nx_i$. Suppose that
$$\norm{Ex}\ensuremath{\leqslant}\sup\Bigl\{|x^*(Ex)|:M\ensuremath{\geqslant} 2, x^*\ \hbox{is an $(M,g)$-form}
\Bigr\}$$
for every interval $E$ such that $\norm{Ex}\ge 1/3$. Then $\norm
x\ensuremath{\leqslant}(1+\epsilonilon+\epsilonilon')Ng(N)^{-1}$.
\end{lemme}
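In the case $g=f$, the conclusion of this lemma complements the lower $f$-estimate: since the $x_i$ are normalised, that estimate gives $\norm{x} \ensuremath{\geqslant} Nf(N)^{-1}$, so under the hypothesis of the lemma
$$\frac{N}{f(N)} \ensuremath{\leqslant} \norm{x} \ensuremath{\leqslant} (1+\epsilonilon+\epsilonilon')\frac{N}{f(N)},$$
i.e. the norm of a R.I.S. sum is determined up to the multiplicative constant $1+\epsilonilon+\epsilonilon'$.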
\subsection{A locally minimal space tight by support}
Let $G_u$ be the space defined in \cite{g:hyperplanes}. This space has a suppression
unconditional basis, is tight by
support and therefore reflexive, and its norm is given by the following
implicit equation, for all $x \in c_{00}$:
$$\norm{x}=\norm{x}_{c_0}\vee\ \sup\Bigl\{f(n)^{-1}\sum_{i=1}^n\norm{E_i x}\Del 2 \ensuremath{\leqslant} n, E_1<\ldots<E_n\Bigr\}$$
$$\vee\ \sup\Bigl\{|x^*(x)|\Del k \in K, x^* \hbox{ special of length } k \Bigr\}$$
where $E_1, \ldots, E_n$ are successive
subsets (not necessarily intervals) of $\N$.
\begin{prop}\label{gu} The dual $G_u^*$ of $G_u$ is tight by support and locally minimal.
\end{prop}
\begin{proof}
Given $n \in \N$ and $\epsilonilon=1/10$ we may by Lemma \ref{linftyn} find some
$N$ such that there exists in the span of any
$x_1^*<\ldots<x_N^*$ an $\ell_{\infty+}^{n}$-average with constant
$1+\epsilonilon$.
By unconditionality we deduce that any block-subspace of $G_u^*$ contains
$\ell_{\infty}^n$'s uniformly, and therefore $G_u^*$ is locally minimal.
Assume now towards a contradiction that $(x_n^*)$ and $(y_n^*)$ are disjointly
supported and equivalent block sequences in $G_u^*$, and let $T: [x_n^*] \rightarrow [y_n^*]$ be defined by $Tx_n^*=y_n^*$.
We may assume that each $x_n^*$ is an $\ell_{\infty +}^n$-average with
constant $1+\epsilonilon$. Using Hahn-Banach theorem, the $1$-unconditionality of the
basis, and Lemma \ref{linftyn}, we may also find for each $n$ an $\ell_{1+}^n$-average $x_n$ with
constant $1+\epsilonilon$ such that ${\rm supp}\ x_n \subseteq {\rm supp}\ x_n^*$
and
$x_n^*(x_n) \ensuremath{\geqslant} 1/2$.
By construction, for each $n$, $Tx_n^*$ is disjointly supported from $[x_k]$,
and up to modifying $T$, we may assume that $Tx_n^*$ is in ${\bf Q}$ and of
norm at most $1$ for each $n$.
If $z_1,\ldots,z_m$ is a R.I.S. of these ${\ell}_{1+}^n$-averages $x_n$ with constant $1+\epsilonilon$,
with $m \in [\log N, \exp N]$, $N \in L$, and $z_1^*,\ldots,z_m^*$ are the
functionals associated to $z_1,\ldots,z_m$, then by \cite{g:hyperplanes} Lemma 7, the $(m,f)$-form
$z^*=f(m)^{-1}(z_1^*+\ldots+z_m^*)$ satisfies
$$z^*(z_1+\ldots+z_m) \ensuremath{\geqslant} \frac{m}{2f(m)} \ensuremath{\geqslant}
\frac{1}{4}\norm{z_1+\ldots+z_m},$$
and furthermore $Tz^*$ is also an $(m,f)$-form.
Therefore we may build R.I.S. vectors $z$ with constant $1+\epsilonilon$ of
arbitrary length $m$ in $[\log N, \exp N]$, $N \in L$, so that $z$ is
$4^{-1}$-normed by an $(m,f)$-form $z^*$ such that $Tz^*$ is also
an $(m,f)$-form.
We may then consider a sequence $z_1,\ldots,z_k$ of length $k \in K$ of such
R.I.S. vectors of length $m_i$, and some corresponding $(m_i,f)$-forms $z_1^*,\ldots,z_k^*$ (i.e.,
$z_i^*$ $4^{-1}$-norms $z_i$ and $Tz_i^*$ is also an $(m_i,f)$-form for all $i$), such
that
$Tz_1^*,\ldots,Tz_k^*$ is a special sequence.
Then we let
$z=z_1+\cdots+z_k$ and $z^*=f(k)^{-1/2}(z_1^*+\ldots+z_k^*)$.
Since $Tz^*=f(k)^{-1/2}(Tz_1^*+\ldots+Tz_k^*)$ is a special functional it follows that
$$\norm{Tz^*} \ensuremath{\leqslant} 1.$$Our aim is now to show that $\norm{z} \ensuremath{\leqslant} 3kf(k)^{-1}$.
It will then follow that
$$\norm{z^*} \ensuremath{\geqslant} z^*(z)/\norm{z} \ensuremath{\geqslant} f(k)^{1/2}/12.$$
Since $k$ was arbitrary in $K$ this will imply that $T^{-1}$ is unbounded and provide the desired contradiction.
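To spell out the arithmetic behind the last claim: each $z_i$ is a sum of $m_i$ normalised ${\ell}_{1+}^n$-averages, so the lower $f$-estimate gives $\norm{z_i} \ensuremath{\geqslant} m_i f(m_i)^{-1} \ensuremath{\geqslant} 1$; moreover $z_i^*(z_i) \ensuremath{\geqslant} \norm{z_i}/4$ and, the pairs being successive, $z_i^*(z_j)=0$ for $i \neq j$. Hence
$$z^*(z)=f(k)^{-1/2}\sum_{i=1}^k z_i^*(z_i) \ensuremath{\geqslant} \frac{k}{4}f(k)^{-1/2},$$
so that once $\norm{z} \ensuremath{\leqslant} 3kf(k)^{-1}$ is established,
$$\norm{z^*} \ensuremath{\geqslant} \frac{z^*(z)}{\norm{z}} \ensuremath{\geqslant} \frac{kf(k)^{-1/2}/4}{3kf(k)^{-1}}=\frac{f(k)^{1/2}}{12}.$$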
The proof is almost exactly the same as in \cite{g:hyperplanes}. Let $K_0=K
\setminus \{k\}$ and let $g$ be the corresponding function given by
\cite{g:hyperplanes} Lemma 6. To prove that $\norm{z} \ensuremath{\leqslant}3kf(k)^{-1}$ it is
enough by \cite{g:hyperplanes} Lemma 8 and Lemma \ref{fundamental} to prove
that for any interval $E$ such that $\norm{Ez} \ensuremath{\geqslant} 1/3$, $Ez$ is normed by
some $(M,g)$-form with $M \ensuremath{\geqslant} 2$.
By the discussion in the proof of the main theorem in \cite{g:hyperplanes},
the only possible norming functionals apart from $(M,g)$-forms are special
functionals of length $k$. So let $w^*=f(k)^{-1/2}(w_1^*+\cdots+w_k^*)$ be a
special functional of length $k$, and $E$ be an interval such that $\norm{Ez}
\ensuremath{\geqslant} 1/3$. We need to show that $w^*$ does not norm $Ez$.
Let $t$ be minimal such that $w_t^* \neq Tz_t^*$. If $i \neq j$ or $i=j>t$ then
by definition of special sequences there exist $M \neq N \in L$, $\min(M,N)
\ensuremath{\geqslant} j_{2k}$, such that $w_i^*$ is an $(M,f)$-form and $z_j$ is an R.I.S.
vector of size $N$ and constant $1+\epsilonilon$. By \cite{g:hyperplanes} Lemma~8,
$z_j$ is an $\ell_{1+}^{N^{1/10}}$-average with constant $2$. If $M<N$ then
$2M<\log \log \log N$ so, by \cite{g:hyperplanes} Corollary 3, $|w_i^*(Ez_j)|
\ensuremath{\leqslant} 6f(M)^{-1}$. If $M>N$ then $\log \log \log M>2N$ so, by
\cite{g:hyperplanes} Lemma 4, $|w_i^*(Ez_j)| \ensuremath{\leqslant} 2f(N)/N$. In both cases it
follows that $|w_i^*(Ez_j)| \ensuremath{\leqslant} k^{-2}$.
If $i=j=t$ we have $|w_i^*(Ez_j)| \ensuremath{\leqslant} 1$. Finally if $i=j<t$ then
$w_i^*=Tz_i^*$. Since $Tz_i^*$ is disjointly supported from $[x_k]$ and
therefore from $z_j$,
it follows simply that $w_i^*(Ez_j)=0$ in that case.
Summing up we have obtained that
$$|w^*(Ez)| \ensuremath{\leqslant} f(k)^{-1/2}(k^2 \cdot k^{-2}+1)=2f(k)^{-1/2} < 1/3 \ensuremath{\leqslant} \norm{Ez}.$$
Therefore $w^*$ does not norm $Ez$ and this finishes the proof.
\end{proof}
\subsection{Uniformly inhomogeneous examples} It may be observed that $G_u^*$ is uniformly inhomogeneous. We state this in
a general form which implies the result for $G_u$, Schlumprecht's space $S$
and its dual $S^*$. This is also true for Gowers-Maurey's space $GM$ and its dual $GM^*$, as well as
for $G$ and $G^*$, where $G$ is the HI asymptotically unconditional space of
Gowers from \cite{g:asymptotic}, which we shall redefine and study later
on. However, as HI spaces are always uniformly inhomogeneous, we need to
observe that the proof of the next statement actually yields a slightly stronger
result, which shows that Proposition \ref{spaceswithtcaciuc} is not trivial in
the case of $GM$, $G$ or their duals; see the three paragraphs after Proposition \ref{spaceswithtcaciuc}.
\begin{prop}\label{spaceswithtcaciuc} Let $f \in {\mathcal F}$ and let $X$ be a space with a bimonotone basis
satisfying a lower $f$-estimate. Let $\epsilonilon_0=1/10$, and assume that for every
$N \in L$, every $n \in [\log N, \exp N]$, and every R.I.S. $x_1,\ldots,x_n$ in $X$
with constant $1+\epsilonilon_0$, writing $x=\sum_{i=1}^n x_i$, we have
$$\norm{Ex}\ensuremath{\leqslant}\sup\Bigl\{|x^*(Ex)|:M\ensuremath{\geqslant} 2, x^*\ \hbox{is an $(M,f)$-form}
\Bigr\}$$
for every interval $E$ such that $\norm{Ex}\ge 1/3$. Then $X$ and $X^*$
are uniformly inhomogeneous.
\end{prop}
\begin{proof} Given $\epsilonilon>0$, let $m \in L$ be such that
$f(m) \ensuremath{\geqslant} 24\epsilonilon^{-1}$.
Let $Y_1,\ldots,Y_{2m}$ be arbitrary block subspaces of $X$. By the classical
method for spaces with a lower $f$ estimate, we may find a R.I.S. sequence $y_1<\cdots<y_m$ with constant
$1+\epsilonilon_0$ with $y_i \in Y_{2i-1}, \forall i$. By Lemma \ref{fundamental},
$$\norm{\sum_{i=1}^m y_i} \ensuremath{\leqslant} 2mf(m)^{-1}.$$
Let on the other hand $n \in [m^{10},\exp m]$ and $E_1<\cdots<E_m$ be sets such
that $\bigcup_{j=1}^m E_j=\{1,\ldots,n\}$ and $|E_j|$ is within $1$ of
$\frac{n}{m}$
for all $j$. We may construct a R.I.S. sequence $x_1,\ldots,x_n$ with
constant $1+\epsilonilon_0$ such that $x_i \in Y_{2j}$ whenever $i \in E_j$.
By Lemma \ref{fundamental},
$$\norm{\sum_{i \in E_j}x_i} \ensuremath{\leqslant}
(1+2\epsilonilon_0)(\frac{n}{m}+1)f(\frac{n}{m}-1)^{-1} \ensuremath{\leqslant} 2nf(n)^{-1} m^{-1}.$$
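A rough check of the last inequality, for $m$ large and $n \ensuremath{\geqslant} m^{10}$: then $\frac{n}{m}-1 \ensuremath{\geqslant} n^{9/10}-1$, so $f(\frac{n}{m}-1) \ensuremath{\geqslant} \frac{9}{10}f(n)(1+o(1))$, and since $\epsilonilon_0=1/10$,
$$(1+2\epsilonilon_0)\Bigl(\frac{n}{m}+1\Bigr)f\Bigl(\frac{n}{m}-1\Bigr)^{-1} \ensuremath{\leqslant} \frac{6}{5}\cdot\frac{10}{9}\,(1+o(1))\,\frac{n}{m}\,f(n)^{-1} \ensuremath{\leqslant} 2nf(n)^{-1}m^{-1}.$$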
Let $z_j=\norm{\sum_{i \in E_j}x_i}^{-1}\sum_{i \in E_j}x_i$. Then $z_j \in
Y_{2j}$ for all $j$ and
$$\norm{\sum_{j=1}^m z_j} \ensuremath{\geqslant} f(n)^{-1}\sum_{j=1}^m \big(\norm{\sum_{i \in E_j}x_i}^{-1}\sum_{i \in E_j}\norm{x_i}\big) \ensuremath{\geqslant} m/2.$$
Therefore
$$\norm{\sum_{i=1}^m y_i} \ensuremath{\leqslant} 4f(m)^{-1}\norm{\sum_{i=1}^m z_i} \ensuremath{\leqslant} \epsilonilon \norm{\sum_{i=1}^m z_i}.$$
Obviously $(y_{i})_{i=1}^m$ is not $\epsilonilon^{-1}$-equivalent to
$(z_{i})_{i=1}^m$, and this means that $X$ is uniformly inhomogeneous.
The proof concerning the dual is quite similar and uses the same notation.
Let $Y_{1},\ldots,Y_{2m}$ be arbitrary block subspaces of $X^*$. By Lemma \ref{linftyn}
we may find a R.I.S. sequence $y_1<\cdots<y_m$ with constant
$1+\epsilonilon_0$ and functionals $y_i^* \in Y_{2i-1}$ such that
${\rm range}\ y_i^* \subseteq {\rm range}\ y_i$ and $y_i^*(y_i) \ensuremath{\geqslant} 1/2$ for all
$i$. Since
$\norm{\sum_{i=1}^m y_i} \ensuremath{\leqslant} 2mf(m)^{-1}$, it follows that
$$\norm{\sum_{i=1}^m y_i^*} \ensuremath{\geqslant} \norm{\sum_{i=1}^m y_i}^{-1}\sum_{i=1}^m
y_i^*(y_i) \ensuremath{\geqslant} f(m)/4.$$
On the other hand we may construct a R.I.S. sequence $x_1,\ldots,x_n$ with
constant $1+\epsilonilon_0$ and functionals $x_i^*$ such that
${\rm range}\ x_i^* \subseteq {\rm range}\ x_i$, $x_i^*(x_i) \ensuremath{\geqslant} 1/2$ for all
$i$, and such that $x_i^* \in Y_{2j}$ whenever $i \in E_j$.
Since
$\norm{\sum_{i \in E_j}x_i} \ensuremath{\leqslant} 2nf(n)^{-1} m^{-1}$, it follows that
$$\norm{\sum_{i \in E_j}x_i^*} \ensuremath{\geqslant} \frac{n}{3m}\frac{mf(n)}{2n}=f(n)/6.$$
Let $z_j^*=\norm{\sum_{i \in E_j}x_i^*}^{-1}\sum_{i \in E_j}x_i^*$. Then $z_j^* \in
Y_{2j}$ for all $j$ and
$$\norm{\sum_{j=1}^m z_j^*} \ensuremath{\leqslant} \frac{6}{f(n)}f(n)=6.$$
Therefore
$$\norm{\sum_{i=1}^m z_i^*} \ensuremath{\leqslant} 24f(m)^{-1}\norm{\sum_{i=1}^m y_i^*}
\ensuremath{\leqslant} \epsilonilon \norm{\sum_{i=1}^m y_i^*},$$
so $(y_i^*)_{i=1}^m$ is not $\epsilonilon^{-1}$-equivalent to $(z_i^*)_{i=1}^m$, and $X^*$ is uniformly inhomogeneous.
\end{proof}
\begin{cor} The spaces $S$, $S^*$, $GM$, $GM^*$, $G$, $G^*$, $G_u$, and $G_u^*$ are uniformly
inhomogeneous. \end{cor}
\
A slightly stronger statement may be obtained by the
proof of Proposition \ref{spaceswithtcaciuc}, in the sense that the vectors
$y_i$ in the definition of uniform inhomogeneity may be chosen to be
successive. More explicitly, the conclusion may be replaced by the statement that
$$
\forall M\ensuremath{\geqslant} 1\; \exists n \in \N\; \forall Y_1,\ldots,Y_{2n} \subseteq X\;
\exists y_i \in\ku S_{Y_i}\;(y_i)_{i=1}^n\not\sim_M(y_i)_{i=n+1}^{2n},
$$
where $y_1<\cdots<y_n$ and $y_{n+1}<\cdots<y_{2n}$, and as before $Y_1,\dots,Y_{2n}$ are infinite-dimensional subspaces of $X$.
This property is therefore a block version of the property of uniform
inhomogeneity. It was observed in \cite{FR4} that the
sixth dichotomy had the following ``block'' version: any Schauder basis of a Banach space
contains a block sequence which is either block uniformly inhomogeneous in the
above sense or asymptotically $\ell_p$ for some $p \in [1,+\infty]$.
It is interesting to observe that either side of this dichotomy corresponds to
one of the two main families of HI spaces, namely spaces
of the type of Gowers-Maurey, based on the example of Schlumprecht, and spaces of the
type of Argyros-Deliyanni, based on Tsirelson's type spaces.
More precisely, spaces of the type of Gowers-Maurey are block uniformly
inhomogeneous, while spaces of the type of Argyros-Deliyanni are
asymptotically $\ell_1$. Observe that the original dichotomy of Tcaciuc fails
to distinguish between these two families, since any HI space is trivially
uniformly inhomogeneous, see \cite{FR4}.
\section{Tight HI spaces of the type of Gowers and Maurey}\label{gowers-maurey}
In this section we show that Gowers' space $G$ constructed in \cite{g:asymptotic} and its dual
are of type
(1).
The proof is a refinement of the proof that
$G_u$ or $G_u^*$ is of type (3), in which we observe that the hypothesis of
unconditionality may be replaced by asymptotic unconditionality. The idea is
to produce constituent parts of vectors or functionals in Gowers'
construction with sufficient control on their supports (and not just on
their ranges, as would be enough to obtain the HI property for example).
\subsection{A HI space tight by range}
The space $G$ has a norm defined by induction as in $GM$, with the addition of
a new term which guarantees that its basis $(e_n)$ is $2$-asymptotically
unconditional, that is for any sequence of normalised vectors
$N<x_1<\ldots<x_N$, any sequence of scalars $a_1,\ldots,a_N$ and any sequence
of signs $\epsilonilon_1,\ldots,\epsilonilon_N$,
$$\norm{\sum_{n=1}^N \epsilonilon_n a_n x_n} \ensuremath{\leqslant} 2\norm{\sum_{n=1}^N a_n x_n}.$$
The basis is bimonotone and, although this is not stated in
\cite{g:asymptotic}, it may be proved as for $GM$ that $G$ is reflexive. It
follows that the dual basis of $(e_n)$ is also $2$-asymptotically
unconditional. The norm on $G$ is defined by the implicit
equation, for all $x \in c_{00}$:
$$\norm{x}=\norm{x}_{c_0}\vee\ \sup\Bigl\{f(n)^{-1}\sum_{i=1}^n\norm{E_i x}\Del 2 \ensuremath{\leqslant} n, E_1<\ldots<E_n\Bigr\}$$
$$\vee\ \sup\Bigl\{|x^*(Ex)|\Del k \in K, x^* \hbox{ special of length } k, E \subseteq \N\Bigr\}$$
$$\vee\ \sup\Bigl\{\norm{Sx}\Del S \hbox{ is an admissible operator}\Bigr\},$$
\
where $E$, $E_1,\ldots,E_n$ are intervals of integers, and $S$ is an {\em
admissible operator} if $Sx=\frac{1}{2}\sum_{n=1}^N \epsilonilon_n E_n x$ for some
sequence of signs $\epsilonilon_1,\ldots,\epsilonilon_N$ and some sequence
$E_1,\ldots,E_N$ of intervals which is {\em admissible}, i.e. $N<E_1$ and
$1+\max E_i=\min E_{i+1}$ for every $i < N$.
{\em R.I.S. pairs} and {\em special pairs} are considered in \cite{g:asymptotic}; first we shall need
a more general definition of these.
Let $x_1,\ldots,x_m$ be a R.I.S. with constant $C$, $m \in [\log N, \exp N]$,
$N \in L$, and let
$x_1^*,\ldots, x_m^*$ be successive normalised functionals. Then we
call {\em generalised R.I.S. pair with constant $C$} the pair
$(x,x^*)$ defined by
$x=\norm{\sum_{i=1}^m x_i}^{-1}(\sum_{i=1}^m x_i)$ and
$x^*=f(m)^{-1}\sum_{i=1}^m x_i^*$.
Let $z_1,\ldots,z_k$ be a sequence of successive normalised R.I.S. vectors with constant
$C$, and let
$z_1^*,\ldots, z_k^*$ be a special sequence such that $(z_i,z_i^*)$ is a generalised
R.I.S. pair for each $i$. Then we shall call {\em generalised special pair
with constant $C$} the pair
$(z,z^*)$ defined by
$z=\sum_{i=1}^k z_i$ and
$z^*=f(k)^{-1/2}(\sum_{i=1}^k z_i^*)$.
The pair $(\norm{z}^{-1}z,z^*)$ will be called {\em normalised generalised
special pair}.
\begin{lemme}\label{critical}
Let $(z,z^*)$ be a generalised special pair in $G$, of length $k \in K$, with
constant $2$ and such that
${\rm supp}\ z^* \cap {\rm supp}\ z = \emptyset$.
Then
$$\norm{z} \ensuremath{\leqslant} \frac{5k}{f(k)}.$$
\end{lemme}
\begin{proof}
The proof follows classically the methods of \cite{GM} or
\cite{g:hyperplanes}.
Let $K_0=K \setminus \{k\}$ and let $g$ be the corresponding function given
by \cite{g:asymptotic} Lemma 5. To prove that $\norm{z} \ensuremath{\leqslant}5kf(k)^{-1}$ it is
enough by Lemma \ref{fundamental} to prove that
for any interval $E$ such that $\norm{Ez} \ensuremath{\geqslant} 1/3$, $Ez$ is normed by some $(M,g)$-form with $M \ensuremath{\geqslant} 2$.
By the discussion in \cite{g:asymptotic} after the definition of the norm, the
only possible norming functionals apart from $(M,g)$-forms are of the form
$Sw^*$ where $w^*$ is a special functional of length $k$, and $S$ is an
``acceptable'' operator according to the terminology of \cite{g:asymptotic}. We shall not state the definition of an acceptable
operator $S$, we shall just need to know that since such an operator is
diagonal of norm at most $1$, it preserves support and $(M,g)$-forms,
\cite{g:asymptotic} Lemma 6. So let $w^*=f(k)^{-1/2}(w_1^*+\cdots+w_k^*)$ be a
special functional of length $k$, $S$ be an acceptable operator, and $E$ be an
interval such that $\norm{Ez} \ensuremath{\geqslant} 1/3$. We need to show that $Sw^*$ does not
norm $Ez$.
Let $t$ be minimal such that $w_t^* \neq z_t^*$. If $i \neq j$ or $i=j>t$ then
by definition of special sequences there exist $M \neq N \in L$, $\min(M,N)
\ensuremath{\geqslant} j_{2k}$, such that $w_i^*$ and therefore $Sw_i^*$ is an $(M,f)$-form and
$z_j$ is an R.I.S. vector of size $N$ and constant $2$. By \cite{g:asymptotic}
Lemma 8, $z_j$ is an $\ell_{1+}^{N^{1/10}}$-average with constant $4$. If $M<N$
then $2M<\log \log \log N$ so, by \cite{g:asymptotic} Lemma 2, $|Sw_i^*(Ez_j)|
\ensuremath{\leqslant} 12f(M)^{-1}$. If $M>N$ then $\log \log \log M>2N$ so, by
\cite{g:asymptotic} Lemma 3, $|Sw_i^*(Ez_j)| \ensuremath{\leqslant} 3f(N)/N$. In both cases it
follows that $|Sw_i^*(Ez_j)| \ensuremath{\leqslant} k^{-2}$.
If $i=j=t$ we simply have $|Sw_i^*(Ez_j)| \ensuremath{\leqslant} 1$. Finally if $i=j<t$ then $w_i^*=z_i^*$, and
since
${\rm supp}\ Sz^*_i \subseteq {\rm supp}\ z_i^*$
and
${\rm supp}\ Ez_i \subseteq {\rm supp}\ z_i,$
it follows that $Sw_i^*(Ez_j)=0$ in this case.
Summing up we have obtained that
$$|Sw^*(Ez)| \ensuremath{\leqslant} f(k)^{-1/2}(k^2 \cdot k^{-2}+1)=2f(k)^{-1/2} < 1/3 \ensuremath{\leqslant} \norm{Ez}.$$
Therefore $Sw^*$ does not norm $Ez$ and this finishes the proof.
\end{proof}
The next lemma is expressed in a version which may seem technical, but this will
make the proof that $G$ is of type (1) more pleasant to read. At first
reading, the reader may simply assume that $T=Id$ in its hypothesis.
\begin{lemme}\label{average}
Let $n \in \N$ and let $\epsilonilon>0$. Let $(x_i)_i$ be a normalised block basis
in $G$
of length $n^k$ and supported after $2n^k$, where $k=\min\{i \del f(n^i) < (1+\epsilonilon)^i\}$, and $T:[x_i] \rightarrow G$ be
an isomorphism such that $(Tx_i)$ is also a normalised block basis. Then there
exist a finite interval $F$ and a
multiple $x$ of $\sum_{i \in F}x_i$ such that $Tx$ is an $\ell_{1+}^n$-average
with constant $1+\epsilonilon$, and a normalised functional $x^*$ such that $x^*(x)
>1/2$ and ${\rm supp}\ x^* \subseteq \bigcup_{i \in F}{\rm range}\ x_i$.
\end{lemme}
\begin{proof}
The proof from \cite{g:asymptotic} that the block basis $(Tx_i)$ contains an
$\ell_{1+}^n$-average with constant $1+\epsilonilon$ is the same as for $GM$, and
gives that such a vector exists of the form $Tx=\lambda \sum_{i \in F}Tx_i$,
thanks to the condition on the length of $(x_i)$. We may
therefore deduce that $2|F|-1<\min\,{\rm supp}\ x$. Let $y^*$ be a unit functional
which norms $x$ and such that ${\rm range}\; y^* \subseteq {\rm range}\; x$.
Let $x^*=Ey^* $ where $E$ is the union of the $|F|$ intervals ${\rm range}\;
x_i, i \in F$. Then $x^*(x)=y^*(x)=1$ and, by the asymptotic unconditionality of
$G^*$, $\norm{x^*} \ensuremath{\leqslant} \frac{3}{2}\norm{y^*}<2$.
\end{proof}
The proof that $G$ is HI requires defining ``extra-special sequences'' after
having defined special sequences in the usual $GM$ way. However, to prove that
$G$ is tight by range, we shall not need to enter that level of complexity and
shall just use special sequences.
\begin{prop}\label{type1} The space $G$ is of type (1). \end{prop}
\begin{proof}
Assume some normalised block-sequence $(x_n)$ is such that $[x_n]$ embeds into
$Y=[e_i, i \notin \bigcup_n {\rm range}\ x_n]$ and look for a contradiction.
Passing to a subsequence and by reflexivity we may assume that there is some
isomorphism $T:[x_n] \rightarrow Y$ satisfying the hypothesis of Lemma
\ref{average}, that is, $(Tx_n)$ is a normalised block basis in $Y$. Fixing
$\epsilonilon=1/10$ we may construct by Lemma \ref{average} some block-sequence of
vectors in $[x_n]$ which are $1/2$-normed by functionals in ${\bf Q}$ of
support included in $\bigcup_n {\rm range}\; x_n$, and whose images by $T$ form
a sequence of increasing length ${\ell}_{1+}^n$-averages with constant
$1+\epsilonilon$. If $Tz_1,\ldots,Tz_m$ is a R.I.S. of these
${\ell}_{1+}^n$-averages with constant $1+\epsilonilon$, with $m \in [\log N, \exp
N]$, $N \in L$, and $z_1^*,\ldots,z_m^*$ are the functionals associated to
$z_1,\ldots,z_m$, then by \cite{g:asymptotic} Lemma 7, the $(m,f)$-form
$z^*=f(m)^{-1}(z_1^*+\ldots+z_m^*)$ satisfies
$$
z^*(z_1+\ldots+z_m) \ensuremath{\geqslant} \frac{m}{2f(m)} \ensuremath{\geqslant} \frac{1}{4}\norm{Tz_1+\ldots+Tz_m} \ensuremath{\geqslant} (4\norm{T^{-1}})^{-1}\norm{z_1+\dots+z_m}.
$$
Therefore we may build R.I.S. vectors $Tz$ with constant $1+\epsilonilon$ of
arbitrary length $m$ in $[\log N, \exp N]$, $N \in L$, so that $z$ is
$(4\norm{T^{-1}})^{-1}$-normed by an $(m,f)$-form $z^*$ of support included in
$\bigcup_n {\rm range}\;x_n$. For such $(z,z^*)$, $(Tz,z^*)$ is a generalised
R.I.S. pair. We then consider a sequence $Tz_1,\ldots,Tz_k$ of length $k \in
K$ of such R.I.S. vectors, such that there exists some special sequence of
corresponding functionals $z_1^*,\ldots,z_k^*$, and finally the pair $(z,z^*)$
where $z=z_1+\cdots+z_k$ and $z^*=f(k)^{-1/2}(z_1^*+\ldots+z_k^*)$: observe
that the support of $z^*$ is still included in $\bigcup_n {\rm range}\;x_n$.
Since $(Tz,z^*)$ is a generalised special pair, it follows from
Lemma \ref{critical} that $$\norm{Tz} \ensuremath{\leqslant} 5kf(k)^{-1}.$$ On the other hand,
$$\norm{z} \ensuremath{\geqslant} z^*(z) \ensuremath{\geqslant} (4\norm{T^{-1}})^{-1}k f(k)^{-1/2}.$$
Since $k$ was arbitrary in $K$ this implies that $T^{-1}$ is unbounded and provides the desired contradiction.
\end{proof}
\subsection{A HI space tight by range and locally minimal}
As we shall now prove, the dual $G^*$ of $G$ is of type (1) as well, but also
locally minimal.
\begin{lemme}\label{linftynbis}
Let $(x_i^*)$ be a normalised block basis in $G^*$. Then for any $n \in \N$
and $\epsilonilon>0$, there exists $N(n,\epsilonilon)$, a finite interval $F
\subseteq [1,N(n,\epsilonilon)]$, a multiple $x^*$ of
$\sum_{i \in F}x_i^*$ which is an $\ell_{\infty +}^n$-average with constant
$1+\epsilonilon$ and an $\ell_{1+}^n$-average $x$ with constant $2$ such that
$x^*(x) >1/2$ and ${\rm supp}\ x \subseteq \bigcup_{i \in F}{\rm range}\ x_i^*$.
\end{lemme}
\begin{proof}
We may assume that $\epsilonilon<1/6$. By Lemma \ref{linftyn} we may find for each
$i
\ensuremath{\leqslant} n$
an interval $F_i$, with $|F_i| \ensuremath{\leqslant} 2\min F_i$, and a vector $y_i^*$ of
the form $\lambda \sum_{k \in F_i} x_k^*$, such that
$y^*=\sum_{i=1}^n y_i^*$ is an $\ell_{\infty +}^n$-average with constant
$1+\epsilonilon$. Let, for each $i$, $x_i$ be normalised such that
$y_i^*(x_i)=\norm{y_i^*}$ and ${\rm range}\ x_i \subseteq {\rm range}\ y_i^*$.
Let $y_i=E_i x_i$, where $E_i$ denotes the canonical projection
on $[e_m, m \in \bigcup_{k \in F_i}{\rm range}\ x_k^*]$. By the asymptotic
unconditionality of $(e_n)$, we have that $\norm{y_i} \ensuremath{\leqslant} 3/2$.
Let $y_i^{\prime}=\norm{y_i}^{-1}y_i$, then
$$y_i^*(y_i^{\prime})=\norm{y_i}^{-1}y_i^*(y_i)=\norm{y_i}^{-1}y_i^*(x_i) \ensuremath{\geqslant}
\frac{2}{3}\norm{y_i^*}.$$
By Lemma \ref{linftyn}, the vector $x=\sum_i y_i^{\prime}$ is an
$\ell_{1+}^n$-vector with constant $2$, such that $x^*(x)
>\norm{x}/2$, and clearly ${\rm supp}\ x \subseteq \bigcup_{i \in F}{\rm range}\ x_i^*$.
\end{proof}
\begin{prop} The space $G^*$ is locally minimal and tight by range.
\end{prop}
\begin{proof}
By Lemma \ref{linftynbis} we may find in any finite block subspace of $G^*$ of length
$N(n,\epsilonilon)$ and supported after $N(n,\epsilonilon)$ an
$\ell_{\infty+}^n$-average with constant $1+\epsilonilon$. By asymptotic
unconditionality
we deduce that uniformly, any block-subspace of $G^*$ contains
$\ell_{\infty}^n$'s, and therefore $G^*$ is locally minimal.
We prove that $G^*$ is tight by range. Assume towards a contradiction that
some normalised block-sequence $(x_n^*)$ is such that $[x_n^*]$ embeds into
$Y=[e_i^*, i \notin \bigcup_n {\rm range}\ x_n^*]$ and look for a
contradiction. If $T$ is the associated isomorphism, we may, by passing to a
subsequence and perturbing $T$, assume that the vectors $Tx_n^*$ are successive.
Let $\epsilonilon=1/10$. By Lemma \ref{linftynbis}, we find in $[x_k^*]$ and for
each $n$, an $\ell_{\infty+}^n$-average
$y_n^*$ with constant $1+\epsilonilon$ and
an $\ell_{1+}^n$-average $y_n$ with constant $2$, such that
$y_n^*(y_n) > 1/2$ and ${\rm supp}\ y_n \subseteq \bigcup_k {\rm range}\ x_k^*$.
By construction, for each $n$, $Ty_n^*$ is disjointly supported from $[x_k^*]$,
and up to modifying $T$, we may assume that $Ty_n^*$ is in ${\bf Q}$ and of
norm at most $1$ for each $n$.
If $z_1,\ldots,z_m$ is a R.I.S. of these ${\ell}_{1+}^n$-averages $y_n$ with constant $2$,
with $m \in [\log N, \exp N]$, $N \in L$, and $z_1^*,\ldots,z_m^*$ are the
${\ell}_{\infty+}^n$-averages associated to $z_1,\ldots,z_m$, then by \cite{g:hyperplanes} Lemma 7, the $(m,f)$-form
$z^*=f(m)^{-1}(z_1^*+\ldots+z_m^*)$ satisfies
$$z^*(z_1+\ldots+z_m) \ensuremath{\geqslant} \frac{m}{2f(m)} \ensuremath{\geqslant}
\frac{1}{6}\norm{z_1+\ldots+z_m},$$
and furthermore $Tz^*$ is also an $(m,f)$-form.
Therefore we may build R.I.S. vectors $z$ with constant $2$ of
arbitrary length $m$ in $[\log N, \exp N]$, $N \in L$, so that $z$ is
$6^{-1}$-normed by an $(m,f)$-form $z^*$ such that $Tz^*$ is also
an $(m,f)$-form.
We may then consider a sequence $z_1,\ldots,z_k$ of length $k \in K$ of such
R.I.S. vectors of length $m_i$, and some corresponding functionals $z_1^*,\ldots,z_k^*$ (i.e.,
$z_i^*$ $6^{-1}$-norms $z_i$ and $Tz_i^*$ is also an $(m_i,f)$-form for all $i$), such
that
$Tz_1^*,\ldots,Tz_k^*$ is a special sequence.
Then we let
$z=z_1+\cdots+z_k$ and $z^*=f(k)^{-1/2}(z_1^*+\ldots+z_k^*)$, and observe that
$(z,Tz^*)$ is a generalised special pair.
Since $Tz^*=f(k)^{-1/2}(Tz_1^*+\ldots+Tz_k^*)$ is a special functional it follows that
$$\norm{Tz^*} \ensuremath{\leqslant} 1.$$ But it follows from Lemma \ref{critical} that $\norm{z} \ensuremath{\leqslant} 5kf(k)^{-1}$.
Therefore
$$\norm{z^*} \ensuremath{\geqslant} z^*(z)/\norm{z} \ensuremath{\geqslant} f(k)^{1/2}/30.$$
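Indeed, assuming as we may that the R.I.S. vectors $z_i$ are normalised and that the range of each $z_i^*$ is contained in the range of $z_i$,
$$z^*(z)=f(k)^{-1/2}\sum_{i=1}^k z_i^*(z_i)\ensuremath{\geqslant} \frac{k}{6}f(k)^{-1/2},$$
and dividing by $\norm{z}\ensuremath{\leqslant} 5kf(k)^{-1}$ gives the stated lower bound $f(k)^{1/2}/30$.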
Since $k$ was arbitrary in $K$ this implies that $T^{-1}$ is unbounded and provides the desired contradiction.
\end{proof}
It remains to check that $G^*$ is HI. The proof is very similar to the one in
\cite{g:asymptotic} that $G$ is HI, and
we shall therefore not give all details. There are two main differences
between the two proofs. In \cite{g:asymptotic} some special vectors and
functionals are constructed: the vectors are taken alternately in
arbitrary block subspaces $Y$ and $Z$ of $G$, and no condition is imposed on where to pick the functionals. In our case there is no condition on where to choose the vectors, but we need to pick the
functionals in arbitrary subspaces $Y$ and $Z$ of $G^*$ instead. This is
possible because of Lemma \ref{linftynbis}. We also need to correct what seems to be a slight imprecision in the proof of
\cite{g:asymptotic} about the value of some normalising factors, and therefore we also get worse constants for our
estimates.
Let $\epsilon=1/10$. Following Gowers we define an {\em R.I.S. pair} of size $N$ to be a generalised R.I.S.
pair $(x,x^*)$ with constant $1+\epsilon$ of the form $(\norm{x_1+\ldots+x_N}^{-1}(x_1+\ldots+x_N),
f(N)^{-1}(x_1^*+\dots+x_N^*))$, where $x_n^*(x_n)\ensuremath{\geqslant} 1/3$ and
${\rm range}\ x_n^* \subset{\rm range}\ x_n$ for each $n$.
A {\em special pair} is a normalised generalised special pair with constant $1+\epsilon$ of the form
$(x,x^*)$ where $x=\norm{x_1+\ldots+x_k}^{-1}(x_1+\ldots+x_k)$ and
$x^*=f(k)^{-1/2}(x_1^*+\dots+x_k^*)$ with ${\rm range}\ x_n^* \subseteq {\rm range}\ x_n$
and for each $n$, $x_n^*\in\bf
Q$, $|x_n^*(x_n)-1/2|<10^{-\min{\rm supp}\ x_n}$.
By \cite{g:asymptotic} Lemma 8, $z$ is a R.I.S. vector with constant $2$
whenever $(z,z^*)$ is a special pair. We shall also require that
$k \ensuremath{\leqslant} \min{\rm supp}\ x_1$, which will imply by \cite{g:asymptotic} Lemma 9 that for
$m<k^{1/10}$, $z$ is an $\ell_{1+}^m$-average with constant 8 (see the
beginning of the proof of Proposition \ref{GstarHI}).
Going up a level of ``specialness'', a {\em special R.I.S.-pair}
is a generalised R.I.S.-pair with constant 8 of the form
$(\norm{x_1+\ldots+x_N}^{-1}(x_1+\ldots+x_N),
f(N)^{-1}(x_1^*+\dots+x_N^*))$, where
${\rm range}\ x_n^* \subset{\rm range}\ x_n$ for each $n$, and with
the additional condition that $(x_n,x_n^*)$ is a special
pair of length at least $\min {\rm supp}\ x_n$. Finally, an {\em extra-special pair} of
size $k$ is a normalised generalised special pair $(x,x^*)$ with constant 8 of the form
$x=\norm{x_1+\ldots+x_k}^{-1}(x_1+\ldots+x_k)$ and
$x^*=f(k)^{-1/2}(x_1^*+\dots+x_k^*)$ with ${\rm range}\ x_n^* \subseteq {\rm
range}\ x_n$, such that, for each $n$,
$(x_n,x_n^*)$ is a special R.I.S.-pair of length
$\sigma(x_1^*,\dots,x_{n-1}^*)$.
\
Given $Y,Z$ block subspaces of $G^*$ we shall show how to find an
extra-special pair $(x,x^*)$ of size $k$, with $x^*$ built out of vectors in
$Y$ or $Z$, such that the signs of these constituent parts of $x^*$ can be
changed according to belonging to $Y$ or $Z$ to produce a vector
$x^{\prime*}$ with $\norm{x^{*}}\ensuremath{\leqslant} 648f(k)^{-1/2}\norm{x^{\prime*}}$. This will
then prove the result.
Consider then an extra-special pair $(x,x^*)$. Then $x$ splits up as
$$\nu^{-1}\sum_{i=1}^k\nu_i^{-1}\sum_{j=1}^{N_i}\nu_{ij}^{-1}
\sum_{r=1}^{k_{ij}}x_{ijr}$$
and $x^*$ as
$$f(k)^{-1/2}\sum_{i=1}^k f(N_i)^{-1}\sum_{j=1}^{N_i}f(k_{ij})^{-1}
\sum_{r=1}^{k_{ij}}x^*_{ijr}\,$$
where
the numbers $\nu$, $\nu_i$ and $\nu_{ij}$ are the norms of what
appears to the right. These special sequences are
chosen far enough ``to the right'' so that $k_{ij}\ensuremath{\leqslant}\min{\rm supp}\ x_{ij1}$,
and also so that $(\max{\rm supp}\ x_{i\,j-1})^2k_{ij}^{-1}\ensuremath{\leqslant} 4^{-(i+j)}$.
We shall also write $x_i$ for $\nu_i^{-1}\sum_{j=1}^{N_i}\nu_{ij}^{-1}
\sum_{r=1}^{k_{ij}}x_{ijr}$ and $x_{ij}$ for $\nu_{ij}^{-1}
\sum_{r=1}^{k_{ij}}x_{ijr}$.
We define a vector $x'$ by
$$\sum_{i=1}^k\nu_i^{\prime -1}\sum_{j=1}^{N_i}\nu_{ij}^{\prime -1}
\sum_{r=1}^{k_{ij}}(-1)^rx_{ijr},$$
where
the numbers $\nu_i^{\prime}$ and $\nu_{ij}^{\prime}$ are the norms of what
appears to the right.
We shall write $x'_i$ for $\nu_i^{\prime -1}\sum_{j=1}^{N_i}\nu_{ij}^{\prime -1}
\sum_{r=1}^{k_{ij}}(-1)^rx_{ijr}$ and $x'_{ij}$ for $\nu_{ij}^{\prime -1}
\sum_{r=1}^{k_{ij}}(-1)^rx_{ijr}$.
Finally we define a functional $x^{\prime *}$ as
$$f(k)^{-1/2}\sum_{i=1}^k f(N_i)^{-1}\sum_{j=1}^{N_i}f(k_{ij})^{-1}
\sum_{r=1}^{k_{ij}}(-1)^r x^*_{ijr}.$$
\begin{prop}\label{GstarHI} The space $G^*$ is HI.\end{prop}
\begin{proof}
Fix $Y$ and $Z$ block subspaces of $G^*$.
By Lemma \ref{linftynbis} we may construct an extra-special pair $(x,x^*)$ so that $x^*_{ijr}$ belongs to $Y$ when
$r$ is odd and to $Z$ when $r$ is even.
We first discuss the normalisation of the vectors involved in the definition of $x'$.
By the increasing condition on $k_{ij}$ and $x_{ijr}$ and by asymptotic
unconditionality, we have that
$$\norm{\sum_{r=1}^{k_{ij}} (-1)^r x_{ijr}} \ensuremath{\leqslant} 2
\norm{\sum_{r=1}^{k_{ij}} x_{ijr}},$$
which means that $\nu^{\prime}_{ij} \ensuremath{\leqslant} 2 \nu_{ij}$.
Furthermore it also follows that
the functional $(1/2)f(k_{ij})^{-1/2}\sum_{r=1}^{k_{ij}}(-1)^rx^*_{ijr}$ is
of norm at most $1$, and therefore we have that
$\norm{\sum_{r=1}^{k_{ij}}(-1)^rx_{ijr}}\ensuremath{\geqslant}
(1/2)k_{ij}f(k_{ij})^{-1/2}$.
Lemma 9 from \cite{g:asymptotic} therefore tells us that, for
every $i,j$, $x'_{ij}$ is an $\ell_{1+}^{m_{ij}}$-average with
constant 8, if $m_{ij}<k_{ij}^{1/10}$.
But the $k_{ij}$ increase so fast that, for any $i$, this implies that
the sequence $x'_{i1},\dots,x'_{i\,N_i}$ is a rapidly increasing
sequence with constant 8.
By \cite{g:asymptotic} Lemma 7, it follows that
$$\norm{\sum_{j=1}^{N_i} x_{ij}^{\prime}} \ensuremath{\leqslant} 9 N_i/f(N_i).$$
Therefore by the $f$-lower estimate in $G$
we have that $\nu^{\prime}_i \ensuremath{\leqslant} 9\nu_i$.
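Indeed, since each $x_{ij}$ is normalised, the $f$-lower estimate gives
$$\nu_i=\norm{\sum_{j=1}^{N_i} x_{ij}}\ensuremath{\geqslant} \frac{N_i}{f(N_i)},$$
so the upper bound $\norm{\sum_{j=1}^{N_i} x'_{ij}}\ensuremath{\leqslant} 9N_i/f(N_i)$ yields $\nu^{\prime}_i \ensuremath{\leqslant} 9\nu_i$.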
We shall now prove that $\norm{x'} \ensuremath{\leqslant} 12kf(k)^{-1}$.
This will imply that
$$\norm{x^{\prime *}} \ensuremath{\geqslant} \frac{x^{\prime *}(x')}{\norm{x'}}
\ensuremath{\geqslant} \frac{f(k)}{12k}[f(k)^{-1/2}
\sum_{i=1}^k f(N_i)^{-1}\nu_i^{\prime -1}\sum_{j=1}^{N_i}f(k_{ij})^{-1}\nu_{ij}^{\prime -1}
\sum_{r=1}^{k_{ij}}x_{ijr}^*(x_{ijr})]$$
$$\ensuremath{\geqslant} f(k)^{1/2}(12k)^{-1}\cdot 18^{-1}
[\sum_{i=1}^k f(N_i)^{-1}\nu_i^{-1}\sum_{j=1}^{N_i}f(k_{ij})^{-1}\nu_{ij}^{-1}
\sum_{r=1}^{k_{ij}}x_{ijr}^*(x_{ijr})]$$
$$= f(k)^{1/2} (216k)^{-1} \sum_{i=1}^k x_i^*(x_i) \ensuremath{\geqslant} 648^{-1}
f(k)^{1/2}.$$
By construction of $x^*$ and $x^{\prime *}$ this will imply that
$$\norm{y^*-z^*} \ensuremath{\geqslant} 648^{-1}f(k)^{1/2}\norm{y^*+z^*}$$
for some non zero $y^* \in Y$ and $z^* \in Z$, and since $k \in K$ was arbitrary,
as well as $Y$ and $Z$, this will prove that $G^*$ is HI.
\
The proof that $\norm{x'} \ensuremath{\leqslant} 12kf(k)^{-1}$ is given in three steps:
\
\paragraph{Step 1} {\em The vector $x'$ is a R.I.S. vector with constant 11.}
\begin{proof} We already know the sequence $x'_{i1},\dots,x'_{i\,N_i}$ is a rapidly increasing
sequence with constant 8. Then by \cite{g:asymptotic} Lemma 8 we get that $x'_i$ is also
an $\ell_{1+}^{M_i}$-average with constant 11, if $M_i<N_i^{1/10}$.
Finally, this implies that $x'$ is an R.I.S.-vector with constant 11,
as claimed.\end{proof}
\paragraph{Step 2} {\em Let $K_0=K\setminus\{k\}$, and let $g\in\mathcal F$ be
the corresponding function given by \cite{g:asymptotic} Lemma 5.
For every interval $E$ such that $\norm{Ex'}\ensuremath{\geqslant} 1/3$, $Ex'$ is normed by
an $(M,g)$-form.}
\begin{proof}The proof is exactly the same as that of Step 2 in Gowers's proof
concerning $G$,
apart from some constants which are modified due to the change of constant in
Step 1 and to the normalising constants relating $\nu_i$ and $\nu_{ij}$
respectively to
$\nu_i^{\prime}$ and $\nu_{ij}^{\prime}$. The reader is therefore referred to
\cite{g:asymptotic}.\end{proof}
\paragraph{Step 3} {\em The norm of $x'$ is at most $12kg(k)^{-1}=12kf(k)^{-1}$.}
\begin{proof} This is an immediate consequence of Step 1, Step 2 and of
Lemma \ref{fundamental}.
\end{proof}
We conclude that the space $G^*$ is HI, and thus locally minimal of type (1). \end{proof}
\section{Unconditional tight spaces of the type of Argyros and Deliyanni}\label{argyros}
By Proposition \ref{spaceswithtcaciuc}, unconditional or HI spaces built on the model of
Gowers-Maurey's spaces are uniformly inhomogeneous (and even block uniformly inhomogeneous). We shall now
consider a space of Argyros-Deliyanni type, more specifically of the type of a
space constructed by Argyros, Deliyanni, Kutzarova and Manoussakis
\cite{ADKM}, with the
opposite property, i.e., with a basis which is strongly
asymptotically $\ell_1$. This space will also be tight by support and therefore will not contain a copy of $\ell_1$. By the implication at the end of the diagram which appears just before Theorem \ref{final}, this basis will therefore be tight with
constants as well, making this example the ``worst'' known so far in
terms of minimality.
Again in this section block vectors will not necessarily
be normalised, and some familiarity with the construction in
\cite{ADKM} will be assumed.
\subsection{A strongly asymptotically $\ell_1$ space tight by support}
In \cite{ADKM} an example of an HI space $X_{hi}$ is constructed, based
on a ``boundedly modified'' mixed Tsirelson space $X_{M(1),u}$. We
shall construct an unconditional version $X_u$ of $X_{hi}$ in a
similar way as $G_u$ is an unconditional version of $GM$. The proof
that $X_u$ is of type (3) will be based on the proof that $X_{hi}$ is HI, conditional
estimates in the proof of \cite{ADKM} becoming essentially trivial
in our case due to disjointness of supports.
Fix a basis $(e_n)$ and ${\mathcal M}$ a family of finite subsets
of $\N$. Recall that a family $x_1,\ldots,x_n$ is {\em ${\mathcal
M}$-admissible} if $x_1<\cdots<x_n$ and $\{\min {\rm supp}\
x_1,\ldots,\min {\rm supp}\ x_n\} \in {\mathcal M}$, and {\em
${\mathcal M}$-allowable} if $x_1,\ldots,x_n$ are vectors with
disjoint supports such that $\{\min {\rm supp}\ x_1,\ldots,\min {\rm
supp}\ x_n\} \in {\mathcal M}$. Let ${\mathcal S}$ denote the family
of Schreier sets, i.e., of subsets $F$ of $\N$ such that $|F| \ensuremath{\leqslant} \min F$,
and let ${\mathcal M}_j$ be the subsequence of the sequence $({\mathcal
F}_k)$ of Schreier families associated to sequences of integers
$t_j$ and $k_j$ defined in \cite{ADKM} p 70.
We need to define a new notion. For $W$ a set of functionals which
is stable under projections onto subsets of $\N$, we let ${\rm
conv}_{\Q}W$ denote the set of rational convex combinations of
elements of $W$. By the stability property of $W$ we may write any
$c^* \in {\rm conv}_{\Q}W$ as a rational convex combination of the
form $\sum_i \lambda_i x_i^*$ where $x_i^* \in W$ and ${\rm supp}\
x_i^* \subseteq {\rm supp}\ c^*$ for each $i$. In this case the set
$\{x_i^* \}_i$ will be called a $W$-compatible decomposition of
$c^*$, and we let $W(c^*) \subseteq W$ be the union of all
$W$-compatible decompositions of $c^*$. Note that if ${\mathcal M}$
is a family of finite subsets of $\N$, $(c_1^*,\ldots,c_d^*)$ is
${\mathcal M}$-admissible, and $x_i^* \in W(c_i^*)$ for all $i$,
then $(x_1^*,\ldots,x_d^*)$ is also ${\mathcal M}$-admissible.
Let ${\mathcal B}=\{\sum_{n}\lambda_n e_n: (\lambda_n)_n \in c_{00},
\lambda_n \in \Q \cap [-1,1]\}$ and let $\Phi$ be a 1-1 function
from ${\mathcal B}^{<\N}$ into $2\N$ such that if
$(c_1^*,\ldots,c_k^*) \in {\mathcal B}^{<\N}$, $j_1$ is minimal such
that $c_1^* \in {\rm conv}_{\Q}{\mathcal A}_{j_1}$, and
$j_l=\Phi(c_1^*,\ldots,c_{l-1}^*)$ for each $l=2,3,\ldots$, then
$\Phi(c_1^*,\ldots,c_k^*)>\max\{j_1,\ldots,j_k\}$ (the set
${\mathcal A}_j$ is defined in \cite{ADKM} p 71 by ${\mathcal
A}_j=\cup_n(K_j^n \setminus K^0)$ where the $K_j^n$'s are the sets
corresponding to the inductive definition of $X_{M(1),u}$).
For $j=1,2,\ldots $, we set $L_j^0=\{ \pm e_n:n\in \N\}$. Suppose
that $\{ L_j^n\}_{j=1}^{\infty }$ have been defined. We set
$L^n=\cup_{j=1}^{\infty}L^n_j$ and
$$L_1^{n+1}=\pm L_1^n\cup\{\frac{1}{2}(x_1^{*}+\ldots +x_d^{*}):d\in \N,x_i^{*}\in L^n,$$
$$(x_1^{*},\ldots,x_d^{*})\;{\rm is}\; \ {\mathcal S}-{\rm allowable}\},$$
\noindent and for $j\ensuremath{\geqslant} 1$,
$$L_{2j}^{n+1}=\pm L_{2j}^n\cup\{\frac{1}{m_{2j}}(x_1^{\ast }+\ldots
+x_d^{*}):d\in {\N},x_i^{*}\in L^n,$$
$$(x_1^{*},\ldots ,x_d^{*})\;{\rm is}\;
{\mathcal M}_{2j}-{\rm admissible }\},$$
$$L_{2j+1}^{\prime\;n+1}=\pm L_{2j+1}^n\cup \{\frac{1}{m_{2j+1}}(x_1^{\ast
}+\ldots +x_d^{\ast }):d\in {\N} {\rm\ such\ that}$$
$$\exists (c_1^*,\ldots,c_d^*)\;{\mathcal M}_{2j+1}-{\rm admissible\ and\ }
k>2j+1 {\rm\ with\ }c_1^* \in {\rm conv}_{\Q}L_{2k}^n, x_1^*\in
L_{2k}^n(c_1^*),$$
$$c_i^* \in {\rm conv}_{\Q}L_{\Phi (c_1^{\ast },\ldots ,c_{i-1}^{\ast })}^n,
x_i^{\ast }\in L_{\Phi (c_1^{\ast },\ldots ,c_{i-1}^{\ast
})}^n(c_i^*) \;{\rm for}\;1<i\ensuremath{\leqslant} d\},$$
$$L_{2j+1}^{n+1}=\{ Ex^{\ast }:x^{\ast }\in L_{2j+1}^{\prime\;n+1},
E {\rm\ subset\ of\ } \N\}.$$
We set ${\mathcal B}_j=\cup_{n=1}^{\infty }(L_j^n\setminus L^0)$
and we consider the norm on $c_{00}$
defined by the set $L=L^0\cup (\cup_{j=1}^{\infty }{\mathcal B}_j)$.
The space $X_u$ is the completion of $c_{00}$ under this norm.
\
In \cite{ADKM} the space $X_{hi}$ is defined in the same way except
that $E$ is an {\bf interval} of integers in the definition of
$L_{2j+1}^{n+1}$, and the definition of $L_{2j+1}^{\prime\;n+1}$ is
simpler, i.e., the coding $\Phi$ is defined directly on ${\mathcal
M}_{2j+1}$-admissible families $x_1^*,\ldots,x_d^*$ in $L^{<\N}$ and
in the definition each $x_i^*$ belongs to $L_{\Phi
(x_1^{\ast},\ldots ,x_{i-1}^{\ast})}^n$. To prove the desired
properties for $X_u$ one could use the simpler definition of
$L_{2j+1}^{\prime\;n+1}$; however this definition doesn't seem to
provide enough special functionals to obtain interesting properties
for the dual as well.
The ground space for $X_{hi}$ and for $X_u$ is the space
$X_{M(1),u}$ associated to a norming set $K$ defined by the same
procedure as $L$, except that $K_{2j+1}^n$ is defined in the same
way as $K_{2j}^n$, i.e.
$$K_{2j+1}^{n+1}=\pm K_{2j+1}^n\cup\{\frac{1}{m_{2j+1}}(x_1^{\ast }+\ldots
+x_d^{*}):d\in {\N},x_i^{*}\in K^n,$$
$$(x_1^{*},\ldots ,x_d^{*})\;{\rm is}\;
{\mathcal M}_{2j+1}-{\rm admissible }\}.$$
For $n=0,1,2,\ldots ,$ we see that $L_j^n$ is a subset of $K_j^n$,
and therefore $L \subseteq K$.
The norming set $L$ is closed under
projections onto {\bf subsets} of $\N$, from which it follows that
its canonical basis is unconditional, and has the property that for
every $j$ and every ${\mathcal M}_{2j}$--admissible family $f_1,
f_2, \ldots f_d$ contained in $L$, $f=\frac{1}{m_{2j}}(f_1+\cdots
+f_d)$ belongs to $L$. The {\em weight} of such an $f$ is defined
by $w(f)=1/m_{2j}$. It follows that for every $j=1,2,\ldots$ and
every
${\mathcal M}_{2j}$--admissible family $x_1<x_2<\ldots<x_n$ in $X_u$,
$$\|\sum_{k=1}^nx_k\|\ensuremath{\geqslant}\frac{1}{m_{2j}}\sum_{k=1}^n\| x_k\|.$$ Likewise, for
${\mathcal S}$--allowable families $f_1,\ldots,f_n$ in $L$, we have
$f=\frac{1}{2}(f_1+\cdots+f_n) \in L$, and we define $w(f)=1/2$. The
weight is defined similarly in the case $2j+1$.
\begin{lemme} The canonical basis of $X_u$ is strongly asymptotically
$\ell_1$.
\end{lemme}
\begin{proof} Fix $n\ensuremath{\leqslant} x_1,\ldots,x_n$ where $x_1,\ldots,x_n$ are normalised and
disjointly supported. Fix $\epsilon>0$ and let for each $i$, $f_i \in L$ be
such that $f_i(x_i) \ensuremath{\geqslant} (1+\epsilon)^{-1}$ and ${\rm supp}\ f_i \subseteq
{\rm supp}\ x_i$. The condition on the supports may be imposed
because $L$ is
stable under projections onto subsets of $\N$.
Then $\frac{1}{2}\sum_{i=1}^n \pm f_i \in L$ and therefore
$$\norm{\sum_{i=1}^n \lambda_i x_i} \ensuremath{\geqslant} \frac{1}{2}\sum_{i=1}^n |\lambda_i|
f_i(x_i) \ensuremath{\geqslant} \frac{1}{2(1+\epsilon)}\sum_{i=1}^n |\lambda_i|,$$ for
any $\lambda_i$'s. Therefore $x_1,\ldots,x_n$ is $2$-equivalent to
the canonical basis of $\ell_1^n$.
\end{proof}
It remains to prove that $X_u$ has type (3). Recall that an analysis
$(K^s(f))_s$ of $f \in K$ is a decomposition of $f$ corresponding to
the inductive definition of $K$, see the precise definition in
Definition 2.3 \cite{ADKM}. We shall combine three types of
arguments. First $L$ was constructed so that $L \prec K$, which
means essentially that each $f \in L$ has an analysis $(K^s(f))_s$
whose elements actually belong to $L$ (see the definition on page 74
of \cite{ADKM}); so all the results obtained in Section
2 of \cite{ADKM} for spaces defined through arbitrary $\tilde{K} \prec K$ (and in particular the crucial
Proposition 2.9) are valid in our case. Then we shall produce
estimates similar to those valid for $X_{hi}$ and which are of two
forms: unconditional estimates, in which case the proofs from
\cite{ADKM} may be applied directly up to minor changes of
notation, and thus we shall refer to \cite{ADKM} for details of the
proofs; and conditional estimates, which are different from those of
$X_{hi}$, but easier due to hypotheses of disjointness of supports, and for which we shall give the proofs.
Recall that if ${\mathcal F}$ is a family of finite subsets of $\N$, then
$${\mathcal F}^{\prime}=\{A \cup B: A, B \in {\mathcal F}, A \cap
B=\emptyset\}.$$ Given $\varepsilon >0$ and $j=2,3,\ldots $, an
$(\varepsilon ,j)$-{\it basic special convex combination
($(\varepsilon ,j)$- basic s.c.c.) (relative to $X_{M(1),u})$} is a
vector of the form $\sum_{k\in F}a_ke_k$ such that: $F\in {\mathcal
M}_j,a_k\ensuremath{\geqslant} 0, \sum_{k\in F}a_k=1$, $\{a_k\}_{k\in F}$ is
decreasing, and, for every $G\in {\mathcal F}^{\prime
}_{t_j(k_{j-1}+1)}$, $\sum_{k\in G}a_k< \varepsilon $.
Given a block sequence $(x_k)_{k\in {\bf N}}$ in $X_{u}$ and $j\ensuremath{\geqslant} 2$,
a convex combination
$\sum_{i=1}^na_ix_{k_i}$ is said to be an $(\varepsilon ,j)$-{\it
special convex combination} of $(x_k)_{k\in {\bf N}}$ ($(\varepsilon
,j)$-s.c.c), if there exist $l_1<l_2<\ldots <l_n$ such that $2<{\rm
supp}\ x_{k_1}\ensuremath{\leqslant} l_1<{\rm supp}\ x_{k_2}\ensuremath{\leqslant} l_2< \ldots <{\rm
supp}\ x_{k_n}\ensuremath{\leqslant} l_n$, and $\sum_{i=1}^na_ie_{l_i}$ is an
$(\varepsilon , j)$-basic s.c.c.
An $(\varepsilon ,j)$-s.c.c. $\sum_{i=1}^n a_ix_{k_i}$
is called {\it seminormalised} if $\| x_{k_i}\|=1,\; i=1,\ldots ,n$ and
$$\|\sum_{i=1}^na_ix_{k_i}\|\ensuremath{\geqslant}\frac{1}{2}.$$
Rapidly increasing sequences and $(\varepsilon , j)$--R.I. special
convex combinations in $X_u$ are defined by \cite{ADKM} Definitions 2.8 and 2.16
respectively, with $\tilde{K}=L.$
Using the lower estimate for ${\mathcal M}_{2j}$-admissible families
in $X_u$ we get as in \cite{ADKM} Lemma 3.1.
\begin{lemme}\label{scc} For $\epsilon>0$, $j=1,2,\ldots$ and every
normalised block sequence $\{ x_k\}_{k=1}^{\infty }$ in $X_u$, there
exists a finite normalised block sequence $(y_s)_{s=1}^n$ of $(x_k)$
and coefficients $(a_s)_{s=1}^ n$ such that $\sum_{s=1}^na_sy_s$ is
a seminormalised $(\epsilon,2j)$--s.c.c.
\end{lemme}
The following definition is inspired by some of the hypotheses of
\cite{ADKM} Proposition 3.3.
\begin{defi}
Let $j>100$. Suppose that
$\{ j_k\}_{k=1}^n$, $\{ y_k\}_{k=1}^n$, $\{ c_k^{\ast }\}_{k=1}^n$
and $\{b_k\}_{k=1}^n$ are such that
{\rm (i)} There exists a rapidly increasing sequence
$$\{ x_{(k,i)}:\; k=1,\ldots ,n,\; i=1,\ldots ,n_k\} $$ with
$x_{(k,i)}<x_{(k,i+1)}<x_{(k+1,l)}$ for all $k<n$, $i<n_k$, $l\ensuremath{\leqslant}
n_{k+1},$ such that:
\noindent {\rm (a)} Each $x_{(k,i)}$ is a seminormalised
$(\frac{1}{m^4_{j_{(k,i)}}}, j_{(k,i)})$--s.c.c. where, for each
$k$, $2j_k+2<j_{(k,i)},\; i=1,\ldots n_k.$
\noindent {\rm (b)} Each $y_k$ is a $(\frac{1}{m^4_{2j_k}},2j_k)$--
R.I.s.c.c. of $\{ x_{(k,i)}\}_{i=1}^{n_k}$ of the form
$y_k=\sum _{i=1}^{n_k}b_{(k,i)}x_{(k,i)}.$
\noindent {\rm (c)} The sequence $\{ b_k\}_{k=1}^n$ is decreasing
and $\sum _{k=1}^nb_ky_k$ is a $(\frac{1} {m^4_{2j+1}},
2j+1)$--s.c.c.
{\rm (ii)} $c_k^{\ast }\in {\rm conv}_{\Q}L_{2j_k}$, and $\max({\rm
supp}\ c_{k-1}^{\ast } \cup {\rm supp}\ y_{k-1}) < \min({\rm supp}\
c_k^* \cup {\rm supp}\ y_k)$, $\forall k$.
{\rm (iii)} $j_1>2j+1$ and $2j_k=\Phi (c_1^{\ast },\ldots
,c_{k-1}^{\ast })$, $k=2,\ldots ,n$.
Then $(j_k,y_k,c_k^*,b_k)_{k=1}^n$ is said to be a {\em
$j$-quadruple}.
\end{defi}
The following proposition is essential. It is the counterpart of
Lemma \ref{critical} for the space $X_u$.
\begin{prop}\label{criticalbis} Assume that $(j_k,y_k,c_k^*,b_k)_{k=1}^n$ is a
$j$-quadruple
in $X_u$ such that
${\rm supp}\ c_k^* \cap {\rm supp}\ y_k=\emptyset$ for all
$k=1,\ldots,n$. Then
$$\norm{\sum_{k=1}^n b_km_{2j_k}y_k} \ensuremath{\leqslant} \frac{75}{m_{2j+1}^2}.$$
\end{prop}
\begin{proof}
Our aim is to show that
for every $\varphi\in\cup_{i=1}^{\infty }{\mathcal B}_i$,
$$\varphi (\sum_{k=1}^n b_k m_{2j_k}y_k)\ensuremath{\leqslant}
\frac{75}{m_{2j+1}^2}.$$ The proof is given in several steps.
{\tt 1st Case}: $w(\varphi)=\frac{1}{m_{2j+1}}$. Then $\varphi$ has
the form $\varphi =\frac{1}{m_{2j+1}}(Ey^*_{1}+\cdots
+Ey^*_{k_2}+Ey^*_{k_2+1}+\cdots +Ey^*_d)$ where $E$ is a subset of
$\N$ and where $y_k^* \in L_{2j_k}(c_k^*)\ \forall k \ensuremath{\leqslant} k_2$ and
$y_k^* \in L_{2j_k}(d_k^*)\ \forall k \ensuremath{\geqslant} k_2+1$, with $d_{k_2+1}^*
\neq c_{k_2+1}^*$ (this is similar to the form of such a functional
in $X_{hi}$ but with the integer $k_1$ defined there equal to $1$ in our case).
If $k \ensuremath{\leqslant} k_2$ then $c_s^*$ and therefore $y_s^*$ is disjointly
supported from $y_k$, so $Ey_s^*(y_k)=0$ for all $s$, and therefore
$\varphi(y_k)=0$. If $k=k_2+1$ then we simply have $|\varphi(y_k)|
\ensuremath{\leqslant} \norm{y_k} \ensuremath{\leqslant} 17m_{2j_k}^{-1}$, by \cite{ADKM} Corollary 2.17.
Finally if $k>k_2+1$ then since $\Phi$ is 1-1, we have that
$j_{k_2+1} \neq j_k$ and for all $s=k_2+1,\ldots,d$, $d_s^*$ and
therefore $y_s^*$ belong to ${\mathcal B}_{2t_s}$ with $t_s \neq
j_k$. It is then easy to check that we may reproduce the proof of
\cite{ADKM} Lemma 3.5, applied to $Ey_1^*,\ldots,Ey_d^*$, to obtain
the unconditional estimate
$$|\varphi(m_{2j_k}y_k)| \ensuremath{\leqslant} \frac{1}{m_{2j+1}^2}.$$
In particular instead of \cite{ADKM} Proposition 3.2, which is a
reformulation of \cite{ADKM} Corollary 2.17 for $X_{hi}$, we simply
use \cite{ADKM} Corollary 2.17 with $\tilde{K}=L$.
Summing up these estimates we obtain the desired result for the 1st
Case.
{\tt 2nd Case}: $w(\varphi )\ensuremath{\leqslant} \frac{1}{m_{2j+2}}.$ Then we get an
unconditional estimate for the evaluation of $\varphi (\sum
_{k=1}^{n} b_k m_{2j_k}y_k)$ directly, reproducing the short proof
of \cite{ADKM} Lemma 3.7, using again \cite{ADKM} Corollary 2.17
instead of \cite{ADKM} Proposition 3.2. Therefore
$$|\varphi (\sum_{k=1}^nb_km_{2j_k}y_k)|
\ensuremath{\leqslant}\frac{35}{m_{2j+2}} \ensuremath{\leqslant} \frac{35}{m_{2j+1}^2}.$$
{\tt 3rd Case}: $w(\varphi)>\frac{1}{m_{2j+1}}$.
We have $y_k=\sum_{i=1}^{n_k}b_{(k,i)}x_{(k,i)}$
and the sequence $\{ x_{(k,i)},k=1,\ldots n,i=1,\ldots n_k\}$ is a
R.I.S. w.r.t. $L$. By \cite{ADKM} Proposition 2.9 there exist a
functional $\psi\in K^{\prime }$ (see the definition in \cite{ADKM}
p 71) and blocks of the basis $u_{(k,i)}$, $k=1,\ldots ,n$,
$i=1,\ldots ,n_k$ with
${\rm supp}\ u_{(k,i)}\subseteq {\rm supp}\ x_{(k,i)}$,
$\| u_k\|_{\ell_1}\ensuremath{\leqslant} 16$ and such that
$$|\varphi (\sum_{k=1}^nb_km_{2j_k}
(\sum_{i=1}^{n_k}b_{(k,i)}x_{(k,i)}))|\ensuremath{\leqslant} m_{2j_1}b_1b_{(1,1)}+\psi
(\sum_{k=1}^nb_k
m_{2j_k}(\sum_{i=1}^{n_k}b_{(k,i)}u_{(k,i)}))+\frac{1}{m_{2j+2}^2}$$
$$\ensuremath{\leqslant}\psi (\sum_{k=1}^nb_km_{2j_k}(\sum_{i=1}^{n_k}
b_{(k,i)}u_{(k,i)}))+\frac{1}{m_{2j+2}}.$$ Therefore it suffices to
estimate
$$\psi(\sum_{k=1}^nb_km_{2j_k}(\sum_{i=1}^{n_k}
b_{(k,i)}u_{(k,i)})).$$
In \cite{ADKM} $\psi$ is decomposed as $\psi_1+\psi_2$ and
different estimates are applied to $\psi_1$ and $\psi_2$. Our case
is easier as we may simply assume that $\psi_1=0$ and
$\psi_2=\psi$. We shall therefore refer to some arguments of
\cite{ADKM} concerning some $\psi_2$ keeping in mind that $\psi_2=\psi$.
Let $D_1^k,\ldots,D_4^k$ be defined as in \cite{ADKM} Lemma 3.11
(a). Then as in \cite{ADKM}, $$\bigcup_{p=1}^4D_p^k=\bigcup _{i=1}^{n_k}
{\rm supp}\ u_{(k,i)}\cap {\rm supp}\ \psi.$$ The proof that
$$\psi|_{\bigcup_kD_2^k}(\sum_kb_km_{2j_k}(\sum_ib_{(k,i)}u_{(k,i)}))
\ensuremath{\leqslant}\frac{1}{m_{2j+2}}, \leqno (1)$$
$$\psi|_{\bigcup_kD_3^k}(\sum_kb_km_{2j_k}(\sum_ib_{(k,i)}u_{(k,i)}))
\ensuremath{\leqslant}\frac{16}{m_{2j+2}}, \leqno (2)$$ and
$$\psi|_{\bigcup_kD_1^k}(\sum_kb_km_{2j_k}(\sum_ib_{(k,i)}
u_{(k,i)}))\ensuremath{\leqslant}\frac{1}{m_{2j+2}} \leqno (3)$$ may be easily
reproduced from \cite{ADKM} Lemma 3.11. The case of $D_4^k$ is
slightly different from \cite{ADKM} and therefore we give more
details. We claim
\
\noindent{\em Claim:} Let $D=\bigcup_k D_4^k$. Then
$$\psi|_{D}(\sum_kb_km_{2j_k}(\sum_ib_{(k,i)}u_{(k,i)}))
\ensuremath{\leqslant}\frac{64}{m_{2j+2}}. \leqno (4)$$
Once the claim is proved it follows by adding the estimates that the
3rd Case is proved, and this concludes the proof of the Proposition.
\
\noindent{\em Proof of the claim:} Recall that $D_4^k$ is defined by
$$D_4^k=\{
m\in\bigcup_{i=1}^{n_k}A_{(k,i)} : {\rm for\ all}\ f\in\bigcup_sK^s(\psi)\
{\rm with}\; m\in {\rm supp}f, w(f) \ensuremath{\geqslant} \frac{1}{m_{2j_k}}\;{\rm
and}$$
$${\rm there}\;{\rm
exists}\; f\in\bigcup_sK^s(\psi)\;{\rm with}\; m\in {\rm supp}f,
w(f)=\frac{1}{m_{2j_k}}{\rm and}$$
$${\rm for}\;{\rm
every}\;g\in\bigcup_sK^s(\psi)\; {\rm with}\; {\rm supp}\ f \subset
{\rm supp}\ g {\rm \ strictly}, w(g)\ensuremath{\geqslant}\frac{1}{m_{2j+1}}\}.$$
For every $k=1,\ldots ,n$, $i=1,\ldots ,n_k$
and every $m\in {\rm supp}\ u_{(k,i)}\cap D_4^k,$ there exists a
unique functional $f^{(k,i,m)}\in \bigcup _sK^s(\psi)$ with $m\in {\rm
supp}\ f$, $w(f)=\frac{1}{m_{2j_k}}$ and such that, for all $g\in
\bigcup_sK^s(\psi)$ with ${\rm supp}\ f \subseteq {\rm supp}\ g$
strictly, $w(g)\ensuremath{\geqslant} \frac{1}{m_{2j+1}}.$ By definition, for $k\neq
p$ and $i=1,\ldots ,n_k$, $m\in {\rm supp}\ u_{(k,i)}$, we have
${\rm supp}f^{(k,i,m)}\cap D^p_4=\emptyset.$ Also, if
$f^{(k,i,m)}\neq f^{(k,r,n)},$ then ${\rm supp}\ f^{(k,i,m)}\cap
{\rm supp}\ f^{(k,r,n)}=\emptyset.$
For each $k=1,\ldots ,n$, let $\{ f^{k,t}\}_{t=1}^{r_k}\subseteq \bigcup_s
K^s(\psi)$ be a selection of mutually disjoint such functionals
with $D^k_4=\bigcup _{t=1}^{r_k} {\rm supp}\ f^{k,t}.$ For each such
functional $f^{k,t}$, we set
$$a_{f^{k,t}}=\sum_{i=1}^{n_k}b_{(k,i)}\sum_{m\in {\rm supp}\ f^{k,t}} a_m.$$
Then,
$$f^{k,t}(b_km_{2j_k}(\sum_ib_{(k,i)}u_{(k,i)}))\ensuremath{\leqslant}
b_ka_{f^{k,t}}.\leqno (5)$$ We define as in \cite{ADKM} a functional
$g\in K^{\prime }$ with $|g|_{2j}^{\ast }\ensuremath{\leqslant} 1$ (see definition
\cite{ADKM} p 71), and blocks $u_k$ of the basis so that $\|
u_k\|_{\ell_1}\ensuremath{\leqslant} 16$, ${\rm supp}\ u_k\subseteq \bigcup_i{\rm supp}\
u_{(k,i)}$ and
$$\psi|_{D_4}(\sum_kb_km_{2j_k}(\sum_ib_{(k,i)}u_{(k,i)}))
\ensuremath{\leqslant} g(2\sum_kb_ku_k),$$ \noindent hence by \cite{ADKM} Lemma 2.4(b)
we shall have the result.
For $f=\frac{1}{m_q}\sum_{p=1}^df_p\in\bigcup_sK^s(\psi|_{D_4})$ we set
$$J=\{ 1\ensuremath{\leqslant} p\ensuremath{\leqslant} d: f_p=f^{k,t}\;{\rm for}\;{\rm some}\; k=1\ldots ,n, \;
t=1,\ldots , r_k\},$$
$$T=\{ 1\ensuremath{\leqslant} p\ensuremath{\leqslant} d:\;{\rm there}\;{\rm exists}\;f^{k,t}\;{\rm with}\;
{\rm supp}f^{k,t} \subseteq {\rm supp}f_p \; {\rm strictly}\}.$$
For every $f\in\bigcup_sK^s(\psi|_{D_4})$ we shall define by induction
a functional $g_f$, by $g_f=0$ when $J\cup T=\emptyset$,
while if $J\cup T\neq\emptyset $ we shall construct $g_f$ with the following
properties. Let $D_f= \bigcup_{p\in J\cup T}{\rm supp}f_p$ and
$u_k=\sum a_{f^{k,t}}e_{f^{k,t}}$, where $e_{f^{k,t}}=e_{\min{\rm
supp} f^{k,t}}$, then:
(a) ${\rm supp}\ g_f\subseteq {\rm supp}\ f$.
(b) $g_f\in K^{\prime }$ and $w(g_f)\ensuremath{\geqslant} w(f)$.
(c) $f|_{D_f}(\sum_kb_km_{2j_k}(\sum_ib_{(k,i)}u_{(k,i)})) \ensuremath{\leqslant}
g_f(2\sum_kb_ku_k)$.
\
\noindent Let $s>0$ and suppose that $g_f$ has been defined for all
$f\in\bigcup_{t=0}^{s-1}K^t(\psi|_{D_4})$ and let
$f=\frac{1}{m_q}(f_1+\ldots +f_d)\in K^s(\psi|_{D_4})\setminus
K^{s-1}(\psi|_{D_4})$ where the family $(f_p)_{p=1}^d$ is ${\mathcal
M}_q$-admissible if $q>1$, or ${\mathcal S}$-allowable if $q=1$. The
proofs of case (i) ($1/m_q=1/m_{2j_k}$ for some $k \ensuremath{\leqslant} n$) and case
(ii) ($1/m_q>1/m_{2j+1}$) are identical with \cite{ADKM} p 106.
Assume therefore that case (iii) holds, i.e., $1/m_q=1/m_{2j+1}$. For
the same reasons as in \cite{ADKM} we have that $T=\emptyset$.
Summing up we assume that $f \in K^s(\psi|_{D_4})\setminus
K^{s-1}(\psi|_{D_4})$ is of the form
$$f=\frac{1}{m_{2j+1}}\sum_{p=1}^d
f_p=\frac{1}{m_{2j+1}}(Ey_1^*+\ldots+Ey_{k_2}^*+Ey_{k_2+1}^*+\ldots+Ey_d^*),$$
where $(y_i^*)_i$ is associated to
$(c_1^*,\ldots,c_{k_2}^*,d_{k_2+1}^*,\ldots)$ with $d_{k_2+1}^* \neq
c_{k_2+1}^*$, that $T=\emptyset$ and $J \neq \emptyset$, and it only
remains to define $g_f$ satisfying (a), (b) and (c).
Now by the proof of \cite{ADKM} Proposition 2.9,
$\psi=\psi_{\varphi}$ was defined through the analysis of $\varphi$,
in particular by \cite{ADKM} Remark 2.19 (a),
$$\psi=\frac{1}{m_{2j+1}}\sum_{k \in I}\psi_{Ey_k^*}$$
for some subset $I$ of $\{1,\ldots,d\}$. Furthermore, for $l \in I$,
$l \ensuremath{\leqslant} k_2$ and $1 \ensuremath{\leqslant} k \ensuremath{\leqslant} d$, ${\rm supp}\ Ey_l^* \cap {\rm
supp}\ x_k=\emptyset$, therefore there is no functional in a family
of type I and II w.r.t. $\overline{x_k}$ of support included in
${\rm supp}\ Ey_l^*$ (see \cite{ADKM} Definition 2.11 p 77). This
implies that $D_{Ey_l^*}=\emptyset$ (\cite{ADKM} Definition p 85),
and therefore that $\psi_{Ey_l^*}=0$ (\cite{ADKM} bottom of p 85).
For $l \in I$, $l>k_2+1$, since $\Phi$ is 1-1,
$w(Ey_l^*)=w(Ed_l^*) \neq 1/m_{2j_k}\ \forall k$. Therefore
$w(\psi_{Ey_l^*}) \neq 1/m_{2j_k}\ \forall k$, by
\cite{ADKM} Remark 2.19 (a). Then by the definition of $D_4^k$,
${\rm supp}\ \psi_{Ey_l^*} \cap D_4^k=\emptyset$ for all $k$.
Finally this
means that $\psi_{|D_4}=\frac{1}{m_{2j+1}}\psi_{Ey^*_{k_2+1}|D_4}$
and $J=\{k_2+1\}$, $D_f={\rm supp}\ f_{k_2+1}$. Write then
$f_{k_2+1}=f^{k_0,t}$ and set $g_f=\frac{1}{2}e^*_{f_{k_2+1}}$,
so that (a) and (b) are trivially verified. It only remains to check
(c). But by (5),
$$f|_{D_f}(\sum_kb_km_{2j_k}(\sum_ib_{(k,i)}u_{(k,i)}))\ensuremath{\leqslant}
b_{k_0} a_{f_{k_2+1}}$$
$$=b_{k_0} a_{f_{k_2+1}} e^*_{f_{k_2+1}}(e_{f_{k_2+1}})
=g_f(2b_{k_0} a_{f_{k_2+1}}e_{f_{k_2+1}})$$
$$=g_f(2\sum_t b_{k_0} a_{f^{k_0,t}}e_{f^{k_0,t}})
=g_f(2\sum_kb_ku_k).$$ So (c) is proved. Therefore $g_f$ is defined
for each $f$ by induction, and the Claim is verified. This concludes
the proof of the Proposition.
\end{proof}
\begin{prop} The space $X_u$ is of type (3). \end{prop}
\begin{proof} Assume towards a contradiction that $T$ is an isomorphism from
some block-subspace $[x_n]$ of $X_u$ into the subspace $[e_i, i \notin \bigcup_n
{\rm supp}\ x_n]$. We may assume that $\max({\rm supp}\ x_n,{\rm
supp}\ Tx_n) < \min({\rm supp}\ x_{n+1},{\rm supp}\ Tx_{n+1})$ and
$\min{\rm supp}\ x_n<\min{\rm supp}\ Tx_n$ for each $n$, and by
Lemma \ref{scc}, that each $x_n$ is a $(\frac{1}{m_{2n}^4},2n)$
R.I.s.c.c. (\cite{ADKM} Definition 2.16). We may write
$$x_n=\sum_{t=1}^{p_n} a_{n,t}x_{n,t}$$ where $(x_{n,1},\ldots,x_{n,p_n})$ is
${\mathcal M}_{2n}$-admissible. Let for each $n,t$, $x_{n,t}^* \in
L$ be such that ${\rm supp}\ x_{n,t}^* \subseteq {\rm supp}\ Tx_{n,t}$
and such that
$$x_{n,t}^*(Tx_{n,t}) \ensuremath{\geqslant} \frac{1}{2}\norm{Tx_{n,t}} \ensuremath{\geqslant}
\frac{1}{4\norm{T^{-1}}},$$ and let
$x_n^*=\frac{1}{m_{2n}}(x_{n,1}^*+\ldots+x_{n,p_n}^*) \in L_{2n}$.
Note that ${\rm supp}\ x_n^* \cap {\rm supp}\ x_n=\emptyset$ and
that
$$x_n^*(Tx_n) \ensuremath{\geqslant} \frac{1}{m_{2n}}\sum_{t=1}^{p_n}
\frac{a_{n,t}}{4\norm{T^{-1}}}= (4\norm{T^{-1}}m_{2n})^{-1}.$$ We
may therefore for any $j>100$ construct a $j$-quadruple
$(j_k,y_k,c_k^*,b_k)_{k=1}^n$ satisfying the hypotheses of Proposition
\ref{criticalbis} and such that $y_k \in [x_i]_i$ and $c_k^*(Ty_k) \ensuremath{\geqslant}
(4\norm{T^{-1}}m_{2j_k})^{-1}$ for each $k$ (note that we may assume
that $c_k^* \in L_{j_{2k}}$ for each $k$). From Proposition
\ref{criticalbis} we deduce
$$\norm{\sum_{k=1}^n b_k m_{2j_k}y_k} \ensuremath{\leqslant} \frac{75}{m_{2j+1}^2}.$$
On the other hand $\psi=\frac{1}{m_{2j+1}}\sum_{k=1}^n c_k^*$
belongs to $L$ therefore
$$\norm{T(\sum_{k=1}^n b_k m_{2j_k}y_k)} \ensuremath{\geqslant}
\psi(\sum_{k=1}^n b_k m_{2j_k}Ty_k) \ensuremath{\geqslant}
\frac{1}{4\norm{T^{-1}}m_{2j+1}}.$$ We deduce finally that
$$m_{2j+1} \ensuremath{\leqslant} 300 \norm{T}\norm{T^{-1}},$$
which contradicts the boundedness of $T$.
\end{proof}
\subsection{A strongly asymptotically $\ell_{\infty}$ space tight by support}
Since the canonical basis of $X_u$ is tight and unconditional, it
follows that $X_u$ is reflexive. In particular this implies that the
dual basis of the canonical basis of $X_u$ is a strongly
asymptotically $\ell_{\infty}$ basis of $X_u^*$. It remains to prove
that this basis is tight by support.
It is easy to prove by duality that for any ${\mathcal
M}_{2j}$-admissible sequence of functionals $f_1,\ldots,f_n$ in
$X_u^*$, we have the upper estimate
$$\norm{\sum_i f_i} \ensuremath{\leqslant} m_{2j}\sup_i \norm{f_i}.$$
We use this observation to prove a lemma about the existence of
s.c.c. normed by functionals belonging to an arbitrary subspace of
$X_u^*$. The proof is standard except that estimates have to be
taken in $X_u^*$ instead of $X_u$.
\begin{lemme}\label{sccbis} For $\epsilon>0$, $j=1,2,\ldots$ and every
normalised block sequence $\{f_k\}_{k=1}^{\infty }$ in $X_u^*$,
there exists a normalised functional $f \in [f_k]$ and a
seminormalised $(\epsilon,2j)$--s.c.c. $x$ in $X_u$ such that ${\rm
supp}\ f \subseteq {\rm supp}\ x$ and $f(x) \ensuremath{\geqslant} 1/2$.
\end{lemme}
\begin{proof}
For each $k$ let $y_k$ be normalised such that ${\rm supp}\ y_k={\rm
supp}\ f_k$ and $f_k(y_k)=1$. Recall that the integers $k_n$ and
$t_n$ are defined by $k_1=1$, $2^{t_n}\ensuremath{\geqslant} m_n^2$ and
$k_n=t_n(k_{n-1}+1)+1$, and that ${\mathcal M}_j={\mathcal F}_{k_j}$
for all $j$.
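For orientation, the recursion just recalled is easy to tabulate. The following minimal sketch does so for an illustrative choice of $(m_n)$; the actual sequence $(m_n)$ is the one fixed earlier in the paper, so the numerical values below are assumptions made only to show the recursive shape $k_n=t_n(k_{n-1}+1)+1$ with $2^{t_n}\ensuremath{\geqslant} m_n^2$.

```python
import math

def k_t_sequence(ms):
    """Given m_1, m_2, ..., return the lists (t_n) and (k_n), where
    t_n is the smallest integer with 2**t_n >= m_n**2, and
    k_1 = 1, k_n = t_n * (k_{n-1} + 1) + 1 for n >= 2."""
    ts, ks = [], []
    for n, m in enumerate(ms, start=1):
        t = math.ceil(math.log2(m * m))
        ts.append(t)
        ks.append(1 if n == 1 else t * (ks[-1] + 1) + 1)
    return ts, ks

# Illustrative m_n only (not the sequence fixed in the paper):
ts, ks = k_t_sequence([4, 16, 256])  # ts = [4, 8, 16], ks = [1, 17, 289]
```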
Applying Lemma \ref{scc}
we find a successive sequence of $(\epsilon,2j)$--s.c.c. of $(y_k)$
of the form $(\sum_{i \in I_k}a_i y_i)_k$ with $\{f_i, i \in I_k\}$
${\mathcal F}_{k_{2j-1}+1}$-admissible. If $\norm{\sum_{i \in
I_k}f_i} \ensuremath{\leqslant} 2$ for some $k$, we are done, for then
$$(\sum_{i \in I_k}f_i)(\sum_{i \in I_k}a_i y_i) \ensuremath{\geqslant} \frac{1}{2}\norm{\sum_{i
\in I_k}f_i}.$$ So assume $\norm{\sum_{i \in I_k}f_i}>2$ for
all $k$, apply the same procedure to the sequence
$f_k^1=\norm{\sum_{i \in I_k}f_i}^{-1}\sum_{i \in I_k}f_i$, and
obtain a successive sequence of $(\epsilon,2j)$--s.c.c. of the
sequence $(y_k^1)_k$ associated to $(f_k^1)_k$, of the form
$(\sum_{i \in I_k^1}a_i^1 y_i^1)_k$, with $\{f_l: {\rm supp}\ f_l
\subseteq {\rm supp}\ \sum_{i \in I_k^1}f_i^1\}$ an ${\mathcal
F}_{k_{2j-1}+1}[{\mathcal F}_{k_{2j-1}+1}]$-admissible, and
therefore ${\mathcal M}_{2j}$-admissible, set. Then we are done
unless $\norm{\sum_{i \in I_k^1}f_i^1}> 2$ for all $k$, in which
case we set
$$f_k^2=\norm{\sum_{j \in I_k^1}f_j^1}^{-1}
\sum_{j \in I_k^1}f_j^1$$ and observe by the upper estimate in
$X_u^*$ that
$$1=\norm{f_k^2}=\norm{\sum_{j \in I_k^1}\sum_{i \in I_j}\norm{\sum_{j \in
I_k^1}f_j^1}^{-1} \norm{\sum_{i \in I_j}f_i}^{-1} f_i} \ensuremath{\leqslant}
m_{2j}/4.$$ Repeating this procedure we claim that we are done in at
most $t_{2j}$ steps. Otherwise we obtain that the set
$$A=\{f_l:
{\rm supp}\ f_l \subseteq {\rm supp}\ \sum_{i \in
I_k^{t_{2j}-1}}f_i^{t_{2j}-1}\}$$ is ${\mathcal M}_{2j}$-admissible.
Since $f_k^{t_{2j}}=\sum_{f_l \in A}\alpha_l f_l$, where the
normalising factor $\alpha_l$ is less than $(1/2)^{t_{2j}}$ for each
$l$, we deduce from the upper estimate that
$$1=\norm{f_k^{t_{2j}}} \ensuremath{\leqslant} 2^{-t_{2j}}m_{2j},$$
a contradiction by definition of the integers $t_i$'s. \end{proof}
\
To prove the last proposition of this section we need to make two
observations. First, if $f_1,\ldots,f_n \in {\rm conv}_{\Q}L$ and
$(f_1,\ldots,f_n)$ is ${\mathcal M}_{2j}$-admissible, then
$\frac{1}{m_{2j}}\sum_{k=1}^n f_k \in {\rm conv}_{\Q}L_{2j}.$
Indeed using the stability of $L$ under projections onto subsets of $\N$ we
may easily find convex rational coefficients $\lambda_i$
such that each $f_k$ is of the form
$$f_k=\sum_i \lambda_i f_i^k,\ f_i^k \in L,\ {\rm supp}\ f_i^k
\subseteq {\rm supp}\ f_k\ \forall i.$$
Then $\frac{1}{m_{2j}}\sum_{k=1}^n f_k=\sum_i \lambda_i
(\frac{1}{m_{2j}}\sum_{k=1}^n f_i^k)$ and each
$\frac{1}{m_{2j}}\sum_{k=1}^n f_i^k$ belongs to $L_{2j}$.
Likewise if $\psi=\frac{1}{m_{2j+1}}(c_1^*+\ldots+c_d^*)$, $k>2j+1$,
$c_1^* \in {\rm conv}_{\Q} L_{2k}$ and $c_l^* \in {\rm conv}_{\Q}
L_{\Phi(c_1^*,\ldots,c_{l-1}^*)}\ \forall l \ensuremath{\geqslant} 2$, then $\psi \in
{\rm conv}_{\Q} L$. Indeed as above we may write
$$\psi=\sum_i \lambda_i (\frac{1}{m_{2j+1}}\sum_{l=1}^d f_i^l),\
f_i^1 \in L_{2k},\ f_i^l \in L_{\Phi(c_1^*,\ldots,c_{l-1}^*)}\
\forall l \ensuremath{\geqslant} 2,$$ and each $\frac{1}{m_{2j+1}}\sum_{l=1}^d f_i^l$
belongs to $L_{2j+1}^{\prime n+1} \subseteq L$.
\begin{prop} The space $X_u^*$ is of type (3).
\end{prop}
\begin{proof} Assume towards a contradiction that $T$ is an isomorphism from
some block-subspace $[f_n]$ of $X_u^*$ into the subspace $[e_i^*, i \notin
\cup_n {\rm supp}\ f_n]$. We may assume that $\max({\rm supp}\
f_n,{\rm supp}\ Tf_n) < \min({\rm supp}\ f_{n+1},{\rm supp}\
Tf_{n+1})$ and $\min{\rm supp}\ Tf_n<\min{\rm supp}\ f_n$ for each
$n$. Since the closed unit ball of $X_u^*$ is equal to
$\overline{{\rm conv}_{\Q}L}$ we may also assume that $f_n \in {\rm
conv}_{\Q}L$ for each $n$. Applying Lemma \ref{sccbis}, we may also
suppose that each $f_n$ is associated to a $(\frac{1}{m_{2n}^4},2n)$
s.c.c. $x_n$ with $Tf_n(x_n) \ensuremath{\geqslant} 1/3$ and ${\rm supp}\ x_n \subset
{\rm supp}\ Tf_n$, and we shall also assume that $\norm{Tf_n}=1$ for
each $n$. Build then for each $k$ a $(\frac{1}{m_{2k}^4},2k)$
R.I.s.c.c. $y_k=\sum_{n \in A_k}a_n x_n$ such that $(Tf_n)_{n \in
A_k}$ and therefore $(f_n)_{n \in A_k}$ is ${\mathcal
M}_{2k}$-admissible. Then note that by the first observation
before this proposition,
$$c_k^*:=m_{2k}^{-1}\sum_{n \in A_k}f_n \in {\rm conv}_{\Q}L_{2k},$$
and observe that ${\rm supp}\ c_k^* \cap {\rm supp}\
y_k=\emptyset$ and that $Tc_k^*(y_k) \ensuremath{\geqslant} (3m_{2k})^{-1}$.
We may therefore for any $j>100$ construct a $j$-quadruple
$(j_k,y_k,c_k^*,b_k)_{k=1}^n$ satisfying the hypotheses of Proposition
\ref{criticalbis} and such that $c_k^* \in [f_i]_i$ and $Tc_k^*(y_k) \ensuremath{\geqslant}
(3m_{2j_k})^{-1}$ for each $k$. From
Proposition
\ref{criticalbis} we deduce
$$\norm{\sum_{k=1}^n b_k m_{2j_k}y_k} \ensuremath{\leqslant} \frac{75}{m_{2j+1}^2}.$$
Therefore
$$\norm{\sum_{k=1}^d Tc_k^*} \ensuremath{\geqslant}
\frac{\sum_{k=1}^d b_k m_{2j_k}Tc_k^*(y_k)}{\norm{\sum_{k=1}^n b_k
m_{2j_k}y_k}} \ensuremath{\geqslant} \frac{m_{2j+1}^2}{225},$$ but on the other hand
$$\norm{\sum_{k=1}^d c_k^*} \ensuremath{\leqslant} m_{2j+1}$$
since by the second observation the functional
$m_{2j+1}^{-1}\sum_{k=1}^d c_k^*$ belongs to ${\rm conv}_{\Q}L$. We
deduce finally that
$$m_{2j+1} \ensuremath{\leqslant} 225 \norm{T},$$
which contradicts the boundedness of $T$.
\end{proof}
\section{Problems and comments}
Obviously the general question one is compelled to ask is whether it is possible to find an example for each of the classes or subclasses appearing in the chart of Theorem \ref{final}.
However we wish to be more specific here and concentrate on the classes which either seem particularly interesting, or easier to study, or which are related to one of the spaces considered in this paper.
\
Let us first observe that the examples of locally minimal, tight spaces produced so far could
be said to be so for trivial reasons: since they hereditarily contain
$\ell_{\infty}^n$'s uniformly, any Banach space is crudely finitely
representable in any of their subspaces.
It remains open whether there exist other examples. Observing that a locally minimal and tight space cannot be strongly asymptotically $\ell_p, 1 \ensuremath{\leqslant} p<+\infty$, by one of the implications in the diagram before Theorem \ref{final}, and up to the 6th dichotomy, the problem may be summed up as:
\begin{prob} Find
a tight, locally minimal, uniformly inhomogeneous Banach space
which does not contain $\ell_{\infty}^n$'s uniformly, or equivalently, which has finite
cotype.
\end{prob}
It is also unknown whether tightness by range and tightness with constants are the only possible forms of tightness, up to passing to subspaces. Equivalently, using the 4th and 5th dichotomy:
\begin{prob} Find a tight Banach space which is sequentially and locally minimal.
\end{prob}
Or, since such a space would have to be of type (2), (5b), (5d):
\begin{prob} \
\begin{itemize}
\item[(a)] Find a HI space which is sequentially minimal.
\item[(b)] Find a space of type (5b).
\item[(c)] Find a space of type (5d). Is the dual of some modified mixed Tsirelson's space such a space?
\end{itemize}
\end{prob}
For the next problem, we observe that the only known examples of spaces tight with constants are strongly asymptotic $\ell_p$ spaces not containing $\ell_p$, where $1 \ensuremath{\leqslant} p <+\infty$.
\begin{prob} Find a space tight with constants and uniformly inhomogeneous. \end{prob}
More specifically, listing two subclasses for which we have a possible candidate:
\begin{prob}
\
\begin{itemize}
\item[(a)] Find a space of type (1a). Is $G$ or one of its subspaces such a space?
\item[(b)] Find a space of type (3a). Is $G_u$ or one of its subspaces such a space?
\end{itemize}
\end{prob}
\
Recently, S. Argyros, K. Beanland and T. Raikoftsalis \cite{ABR,ABR2} constructed an example $X_{abr}$ with a basis which is strongly asymptotically $\ell_2$ and therefore weak Hilbert, yet every operator is a strictly singular perturbation of a diagonal map, and no disjointly supported subspaces are isomorphic. In our language, $X_{abr}$ is therefore a new space of type (3c), which we include in our chart.
We conclude by mentioning the very recent and remarkable result of S. Argyros and R. Haydon solving the scalar plus compact problem \cite{AH}: there exists a HI space which is a predual of $\ell_1$ and on which every operator is a compact perturbation of a multiple of the identity. To our knowledge nothing is known about the exact position of this space in the chart of Theorem \ref{final}.
\begin{prob} Find whether Argyros-Haydon's space is of type (1) or of type (2). \end{prob}
\
\begin{flushleft}
{\em Address of V. Ferenczi:}\\
Departamento de Matem\'atica,\\
Instituto de Matem\'atica e Estat\' \i stica,\\
Universidade de S\~ao Paulo,\\
rua do Mat\~ao, 1010, \\
05508-090 S\~ao Paulo, SP,\\
Brazil.\\
\texttt{[email protected]}
\end{flushleft}
\
\begin{flushleft}
{\em Address of C. Rosendal:}\\
Department of Mathematics, Statistics, and Computer Science\\
University of Illinois at Chicago,\\
851 S. Morgan Street,\\
Chicago, IL 60607-7045,\\
USA.\\
\texttt{[email protected]}
\end{flushleft}
\end{document} |
\begin{document}
\title{On the enumeration of positive cells in generalized cluster
complexes and Catalan hyperplane arrangements}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{example}[theorem]{Example}
\newtheorem{examples}[theorem]{Examples}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{question}[theorem]{Question}
\begin{abstract}
Let $\Phi$ be an irreducible crystallographic root system with Weyl
group $W$ and coroot lattice $\check{Q}$, spanning a Euclidean space
$V$. Let $m$ be a positive integer and ${\mathcal A}^m_\Phi$ be the arrangement
of hyperplanes in $V$ of the form $(\alpha, x) = k$ for $\alpha \in
\Phi$ and $k = 0, 1,\dots,m$. It is known that the number $N^+ (\Phi,
m)$ of bounded dominant regions of ${\mathcal A}^m_\Phi$ is equal to the number
of facets of the positive part $\Delta^m_+ (\Phi)$ of the generalized
cluster complex associated to the pair $(\Phi, m)$ by S.~Fomin and
N.~Reading.
We define a statistic
on the set of bounded dominant regions of ${\mathcal A}^m_\Phi$ and conjecture
that the corresponding refinement of $N^+ (\Phi, m)$ coincides with the
$h$-vector of $\Delta^m_+ (\Phi)$. We compute these refined numbers for
the classical root systems as well as for all root systems when $m=1$
and verify the conjecture when $\Phi$ has type $A$, $B$ or $C$ and when
$m=1$.
We give several combinatorial interpretations to these numbers in terms
of chains of order ideals in the root poset of $\Phi$, orbits of the
action of $W$ on the quotient $\check{Q} / \, (mh-1) \, \check{Q}$ and
coroot lattice points inside a certain simplex, analogous to the ones
given by the first author in the case of the set of all dominant regions
of ${\mathcal A}^m_\Phi$. We also provide a dual interpretation in terms of order
filters in the root poset of $\Phi$ in the special case $m=1$.
\end{abstract}
\section{Introduction and results}
\label{intro}
Let $V$ be an $\ell$-dimensional Euclidean space, with inner product
$( \ , \ )$. Let $\Phi$ be a (finite) irreducible crystallographic
root system spanning $V$ and $m$ be a fixed nonnegative integer. We
denote by ${\mathcal A}_\Phi^m$ the collection of hyperplanes in $V$ defined
by the affine equations $(\alpha, x) = k$ for $\alpha \in \Phi$ and
$k = 0, 1,\dots,m$, known as the $m$th \textit{extended Catalan
arrangement} associated to $\Phi$. Thus ${\mathcal A}_\Phi^m$ is invariant
under the action of the Weyl group $W$ associated to $\Phi$ and
reduces to the Coxeter arrangement ${\mathcal A}_{\Phi}$ for $m=0$. Let
$\Delta^m (\Phi)$ denote the generalized cluster complex associated
to the pair $(\Phi, m)$ by S.~Fomin and N.~Reading \cite{FR2}. This
is a simplicial complex which reduces to the cluster complex $\Delta
(\Phi)$ of S.~Fomin and A.~Zelevinsky \cite{FZ} when $m=1$. It
contains a natural subcomplex, called the positive part of
$\Delta^m (\Phi)$ and denoted by $\Delta^m_+ (\Phi)$, as an induced
subcomplex. The complex $\Delta^m (\Phi)$ was also studied independently
by the second author \cite{Tz} when $\Phi$ is of type $A$ or $B$; see
Section \ref{pre} for further information and references.
The Weyl group $W$ acts on the coroot lattice $\check{Q}$ of $\Phi$
and its dilate $(mh-1) \, \check{Q}$, where $h$ denotes the Coxeter
number of $\Phi$. Hence $W$ acts also on the quotient $T_m = \check{Q}
/ \, (mh-1) \, \check{Q}$.
For a fixed choice of a positive system $\Phi^+ \subseteq \Phi$,
consider the partial order on $\Phi^+$ defined by letting $\alpha \le
\beta$ if $\beta - \alpha$ is a nonnegative linear combination of
positive roots, known as the \emph{root poset} of $\Phi$. An
\textit{order filter} or \textit{dual order ideal} in $\Phi^+$ is a
subset ${\mathcal I}$ of $\Phi^+$ such that $\alpha \in {\mathcal I}$ and $\alpha \le
\beta$ in $\Phi^+$ imply $\beta \in {\mathcal I}$. The filter ${\mathcal I}$ is called
\emph{positive} if it does not contain any simple root.
The following theorem connects the objects just discussed. Parts (i),
(ii) and (iii) appear in \cite[Corollary 1.3]{Ath1}, \cite[Proposition
2.13]{FR2} and \cite[Theorem 7.4.2]{Ha},
respectively. The last statement was found independently in \cite{Ath1,
Pa, So}.
\begin{theorem} \mbox{\rm (\cite{Ath1, FR2, Ha})}
Let $\Phi$ be an irreducible crystallographic root system of rank $\ell$
with Weyl group $W$, Coxeter number $h$ and exponents $e_1,
e_2,\dots,e_\ell$. Let $m$ be a positive integer and let
\[ N^+ (\Phi, m) = \prod_{i=1}^{\ell} \frac{e_i + mh - 1}{e_i + 1}. \]
The following are equal to $N^+ (\Phi, m)$:
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] the number of bounded regions of ${\mathcal A}^m_\Phi$ which lie in the
fundamental chamber of ${\mathcal A}_\Phi$,
\item[{\rm (ii)}] the number of facets of $\Delta^m_+ (\Phi)$ and
\item[{\rm (iii)}] the number of orbits of the action of $W$ on $\check{Q} / \,
(mh-1) \, \check{Q}$.
\end{enumerate}
Moreover, for $m=1$ this number is equal to the number of positive
filters in the root poset of $\Phi$.
\label{thm0}
\end{theorem}
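For orientation, the product formula for $N^+ (\Phi, m)$ is easy to evaluate numerically. The following sketch is only illustrative: the exponents and Coxeter numbers are quoted from the standard tables (they are inputs of the snippet, not computed), and the spot-checks use the values for a few small types.

```python
from fractions import Fraction

# Exponents e_1,...,e_l and Coxeter number h, from the standard tables.
DATA = {
    "A3": ([1, 2, 3], 4),
    "B2": ([1, 3], 4),
    "G2": ([1, 5], 6),
}

def n_plus(exponents, h, m):
    """N^+(Phi, m) = prod_i (e_i + m*h - 1) / (e_i + 1)."""
    prod = Fraction(1)
    for e in exponents:
        prod *= Fraction(e + m * h - 1, e + 1)
    assert prod.denominator == 1  # the product is always an integer
    return int(prod)

# Spot-checks for m = 1: N^+(A_3) = 5, N^+(B_2) = 3, N^+(G_2) = 5.
assert [n_plus(e, h, 1) for e, h in DATA.values()] == [5, 3, 5]
```

For type $A_n$ and $m=1$ the product telescopes to the Catalan number $\frac{1}{n+1}\binom{2n}{n}$, consistent with part (ii) of the theorem.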
The purpose of this paper is to define and study a refinement of the number
$N^+ (\Phi, m)$ and prove that it has similar properties with the one defined
by the first author \cite{Ath2} for the total number
\[ N (\Phi, m) = \prod_{i=1}^{\ell} \frac{e_i + mh + 1}{e_i + 1} \]
of regions of ${\mathcal A}^m_\Phi$ in the fundamental chamber of ${\mathcal A}_\Phi$. To be
more precise let $H_{\alpha, k}$
be the affine hyperplane in $V$ defined by the equation $(\alpha, x) = k$
and $A_\circ$ be the fundamental alcove of the affine Weyl arrangement
corresponding to $\Phi$. A \emph{wall} of a region $R$ of ${\mathcal A}^m_\Phi$ is
a hyperplane in $V$ which supports a facet of $R$. For $0 \le i \le \ell$
we denote by $h_i (\Phi, m)$ the number of regions $R$ of ${\mathcal A}_\Phi^m$ in
the fundamental chamber of ${\mathcal A}_\Phi$ for which exactly $\ell - i$ walls of
$R$ of the form $H_{\alpha, m}$ separate $R$ from $A_\circ$, meaning that
$(\alpha, x) > m$ holds for $x \in R$. The numbers $h_i (\Phi, m)$ were
introduced and studied in \cite{Ath2}. Let $h_i (\Delta^m (\Phi))$ and $h_i
(\Delta^m_+ (\Phi))$ be the $i$th entries of the $h$-vector of the simplicial
complexes $\Delta^m (\Phi)$ and $\Delta^m_+ (\Phi)$, respectively. It follows
from case by case computations in \cite{Ath2, FR2, Tz} that $h_i (\Phi, m)
= h_i (\Delta^m (\Phi))$ for all $i$ when $\Phi$ is of classical type in the
Cartan-Killing classification.
We define $h^+_i (\Phi, m)$ as the number of bounded regions $R$ of
${\mathcal A}_\Phi^m$ in the fundamental chamber of ${\mathcal A}_\Phi$ for which exactly
$\ell - i$ walls of $R$ of the form $H_{\alpha, m}$ do not separate $R$ from
the fundamental alcove $A_\circ$. Theorem \ref{thm0} implies that the sum
of the numbers $h^+_i (\Phi, m)$, as well as that of $h_i (\Delta^m_+
(\Phi))$, for $0 \le i \le \ell$ is equal to $N^+ (\Phi, m)$. The
significance of the numbers $h^+_i (\Phi, m)$ comes from the following
conjecture, which can be viewed as the positive analogue of
\cite[Conjecture 3.1]{FR2}.
\begin{conjecture}
For any irreducible crystallographic root system $\Phi$ and all $m \ge 1$ and
$0 \le i \le \ell$ we have $h^+_i (\Phi, m) = h_i (\Delta^m_+ (\Phi))$.
\label{conj0}
\end{conjecture}
Our first main result (Corollary \ref{cor:conj}) establishes the previous
conjecture when $m=1$ and when $\Phi$ has type $A$, $B$ or $C$ and $m$ is
arbitrary. Our second main result provides
combinatorial interpretations to the numbers $h^+_i (\Phi, m)$ similar to
the ones given in \cite{Ath2} for $h_i (\Phi, m)$. To state this result we
need to recall (or modify) some definitions
and notation from \cite{Ath2}. For $y \in T_m$ consider the stabilizer of
$y$ with respect to the $W$-action on $T_m$. This is a subgroup of $W$
generated by reflections. The minimum number of reflections needed to
generate this subgroup is its \emph{rank} and is denoted by $r(y)$. We
may use the notation $r(x)$ for a $W$-orbit $x$ in $T_m$ since stabilizers
of elements of $T_m$ in the same $W$-orbit are conjugate subgroups of $W$
and hence have the same rank. A subset ${\mathcal J}$ of $\Phi^+$ is an \emph{order
ideal} if $\Phi^+ \, {\setminus} {\mathcal J}$ is a filter. An increasing chain ${\mathcal J}_1
\subseteq {\mathcal J}_2 \subseteq \cdots \subseteq {\mathcal J}_m$ of ideals in $\Phi^+$ is
a \textit{geometric chain of ideals} of length $m$ if
\begin{equation}
({\mathcal J}_i + {\mathcal J}_j) \, \cap \Phi^+ \subseteq {\mathcal J}_{i+j}
\label{bi2}
\end{equation}
holds for all indices $i, j$ with $i + j \le m$ and
\begin{equation}
({\mathcal I}_i + {\mathcal I}_j) \, \cap \Phi^+ \subseteq {\mathcal I}_{i+j}
\label{bi1}
\end{equation}
holds for all indices $i, j$, where ${\mathcal I}_i = \Phi^+ \, {\setminus} {\mathcal J}_i$ for
$0 \le i \le m$ and ${\mathcal I}_i = {\mathcal I}_m$ for $i > m$. Such a chain is called
\emph{positive} if ${\mathcal J}_m$ contains the set of simple roots or,
equivalently, if ${\mathcal I}_m$ is a positive filter. A positive root $\alpha$
is \emph{indecomposable} of \emph{rank} $m$ with respect to this
increasing chain of ideals if $\alpha$ is a maximal element of ${\mathcal J}_m
{\setminus} {\mathcal J}_{m-1}$ and it is not possible to write $\alpha = \beta + \gamma$
with $\beta \in {\mathcal J}_i$ and $\gamma \in {\mathcal J}_j$ for indices $i, j \ge 1$
with $i + j = m$. The following theorem refines part of Theorem
\ref{thm0}.
\begin{theorem}
Let $\Phi$ be an irreducible crystallographic root system of rank
$\ell$ with Weyl group $W$, $m$ be a positive integer and $O_m (\Phi)$
be the set of orbits of the action of $W$ on $\check{Q} / \, (mh-1) \,
\check{Q}$. For any $0 \le i \le \ell$ the following are equal:
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] the number $h^+_{\ell-i} (\Phi, m)$,
\item[{\rm (ii)}] the number of positive geometric chains of ideals in the
root poset $\Phi^+$ of length $m$ having $i$ indecomposable elements
of rank $m$,
\item[{\rm (iii)}] the number of orbits $x \in O_m (\Phi)$ with $r(x) = i$
and
\item[{\rm (iv)}] the number of points in $\check{Q} \cap (mh-1) \,
\overline{A_\circ}$ which lie in $i$ walls of $(mh-1) \,
\overline{A_\circ}$.
\end{enumerate}
In particular, the number of positive geometric chains of ideals in
$\Phi^+$ of length $m$ is equal to $N^+ (\Phi, m)$.
\label{thm1}
\end{theorem}
The equivalence of (iii) and (iv) follows essentially from the
results of \cite[Section 7.4]{Ha}.
In the special case $m=1$ the arrangement ${\mathcal A}_\Phi^m$ consists of
the hyperplanes $H_\alpha$ and $H_{\alpha, 1}$ for all $\alpha \in
\Phi$ and is known as the \textit{Catalan arrangement} associated to
$\Phi$, denoted ${\rm Cat}_\Phi$. Moreover a geometric chain of ideals
consists of a single ideal ${\mathcal J}$ in $\Phi$. This chain is positive if
${\mathcal J}$ contains the set of simple roots or, equivalently, if ${\mathcal I} =
\Phi^+ \, {\setminus} {\mathcal J}$ is a positive filter and in that case the set
of rank one indecomposable elements is the set of maximal elements
of ${\mathcal J}$. We write $h^+_i (\Phi)$ instead of $h^+_i (\Phi, m)$ when
$m=1$. Part of the next corollary is implicit in the work of
E.~Sommers \cite[Section 6]{So}.
\begin{corollary}
Let $\Phi$ be an irreducible crystallographic root system of rank
$\ell$ with Weyl group $W$ and $O (\Phi)$ be the set of orbits of the
action of $W$ on $\check{Q} / (h-1) \, \check{Q}$. For any $0 \le i \le
\ell$ the following are equal:
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] the number of ideals in the root poset $\Phi^+$ which
contain the set of simple roots and have $i$ maximal elements,
\item[{\rm (ii)}] the number $h^+_{\ell-i} (\Phi)$ of bounded regions $R$ of
${\rm Cat}_\Phi$ in the fundamental chamber of ${\mathcal A}_\Phi$ such that $i$ walls
of $R$ of the form $H_{\alpha, 1}$ do not separate $R$ from $A_\circ$,
\item[{\rm (iii)}] the number of orbits $x \in O (\Phi)$ with $r(x) = i$,
\item[{\rm (iv)}] the number of points in $\check{Q} \cap (h-1) \,
\overline{A_\circ}$ which lie in $i$ walls of $(h-1) \,
\overline{A_\circ}$ and
\item[{\rm (v)}] the entry $h_{\ell-i} (\Delta_+ (\Phi))$ of the $h$-vector of
the positive part of $\Delta (\Phi)$.
\end{enumerate}
\label{cor1}
\end{corollary}
Our last theorem provides a different interpretation to the numbers $h^+_i
(\Phi)$ in terms of order filters in $\Phi^+$.
\begin{theorem}
For any irreducible crystallographic root system $\Phi$ and any nonnegative
integer $i$ the number $h^+_i (\Phi)$ is equal to the number of positive
filters in $\Phi^+$ having $i$ minimal elements.
\label{thm2}
\end{theorem}
Theorem \ref{thm1} is proved in Sections \ref{som} and \ref{proof} by
means of two bijections. The first is the restriction of a bijection
of \cite[Section 3]{Ath2} and maps the set of positive geometric
chains of ideals in $\Phi^+$ of length $m$ to the set of bounded regions
of ${\mathcal A}_\Phi^m$ in the fundamental chamber (Theorem \ref{cor:bij1}) while
the second maps this set of regions to the set of $W$-orbits of $T_m$.
In the case $m=1$ the composite of these two bijections gives essentially
a bijection of Sommers \cite{So} from the set of positive filters in
$\Phi^+$ to $\check{Q} \cap (h-1) \, \overline{A_\circ}$. The proof of
Theorem \ref{thm1} in these two sections parallels the one of Theorem 1.2
in \cite{Ath2} and for this reason most of the details are omitted. The
main difference is that the unique alcove in a fixed bounded region of
${\mathcal A}_\Phi^m$ which is furthest away from $A_\circ$ plays the role played
in \cite{Ath2} by the unique alcove in a region of ${\mathcal A}_\Phi^m$ closest to
$A_\circ$. The existence of these maximal alcoves was first established
and exploited in the special case $m=1$ by Sommers \cite{So}. In Section
\ref{f} we prove Conjecture \ref{conj0} when $m=1$ and when $\Phi$ has type
$A$, $B$ or $C$ and $m$ is arbitrary (Corollary \ref{cor:conj}) using the
fact that $h_i (\Phi, m) = h_i (\Delta^m (\Phi))$ holds for all $i$ in
these cases. A key ingredient in the proof is a new combinatorial
interpretation (see part (iii) of Theorems \ref{thm:eleni1} and
\ref{thm:eleni2}) to the $f$-numbers defined from the $h_i (\Phi, m)$
and $h^+_i (\Phi, m)$ via the usual identity relating $f$-vectors and
$h$-vectors of simplicial complexes. In Section \ref{class} we compute
the numbers which appear in Theorem \ref{thm1} for root systems of
classical type and those in Corollary \ref{cor1} for root systems of
exceptional type. We also prove Theorem \ref{thm2} by exploiting the
symmetry of the distribution of the set of all filters in $\Phi^+$ by the
number of minimal elements, observed by D. Panyushev \cite{Pa}. Some useful
background material is summarized in Section \ref{pre}. We conclude with
some remarks in Section
\ref{remarks}.
Apart from \cite{Ath2}, our motivation for this work comes to a great
extent from the papers by Fomin and Reading \cite{FR2}, Fomin and
Zelevinsky \cite{FZ} and Sommers \cite{So}.
\section{Preliminaries}
\label{pre}
In this section we introduce notation and terminology and recall a few
useful facts related to root systems, affine Weyl groups, generalized
cluster complexes and the combinatorics of ${\mathcal A}^m_\Phi$. We refer to
\cite{Hu} and \cite{Ath2, FR1, FR2} for further background and
references and warn the reader that, throughout the paper, some of our
notation and terminology differs from that employed in \cite{Ath2} (this
is done in part to ease the co-existence of order filters and order
ideals in this paper, typically denoted by the letters ${\mathcal I}$ and ${\mathcal J}$,
respectively, and in part to match some of the notation of \cite{FR2}).
\noindent
{\bf Root systems and Weyl groups.}
Let $V$ be an $\ell$-dimensional Euclidean space with inner product
$( \ , \ )$. Given a hyperplane arrangement ${\mathcal A}$ in $V$, meaning a
discrete set of affine subspaces of $V$ of codimension one, the
\emph{regions} of ${\mathcal A}$ are the connected components of the space
obtained from $V$ by removing the hyperplanes in ${\mathcal A}$. Let $\Phi$ be
a crystallographic root system spanning $V$. For any real $k$ and
$\alpha \in \Phi$ we denote by $H_{\alpha, k}$ the hyperplane in $V$
defined by the equation $(\alpha,x) = k$ and set $H_{\alpha} = H_{\alpha,
0}$. We fix a positive system $\Phi^+ \subseteq \Phi$ and the
corresponding (ordered) set of simple roots $\Pi =
\{\sigma_1,\ldots,\sigma_\ell\}$. For $1 \le i \le \ell$ we denote by
$s_i$ the orthogonal reflection in the hyperplane $H_{\sigma_i}$, called a
\emph{simple reflection}. We will often write $\Phi_I$
instead of $\Phi$, where $I$ is an index set in bijection with $\Pi$,
and denote by $\Phi_J$ the parabolic root system corresponding to $J
\subseteq I$. If $\Phi$ is irreducible we denote by $\tilde{\alpha}$
the highest positive root, by $e_1, e_2,\dots,e_{\ell}$ the exponents
and by $h$ the Coxeter number of $\Phi$ and set $p=mh-1$, where $m$ is
a fixed positive integer. The following well known lemmas will be used,
as in \cite{Ath2}.
\begin{lemma} {\rm (\cite[Lemma 2.1]{Ath2})}
{\rm (i)} If $\alpha_1, \alpha_2,\dots,\alpha_r \in \Phi^+$ with
$r \ge 2$ and $\alpha = \alpha_1 + \alpha_2 + \cdots + \alpha_r \in
\Phi^+$ then there exists $i$ with $1 \le i \le r$ such that $\alpha
- \alpha_i \in \Phi^+$.
\noindent
{\rm (ii)} {\rm (cf. \cite{Pa, So})}
If $\alpha_1, \alpha_2,\dots,\alpha_r \in \Phi$ and
$\alpha_1 + \alpha_2 + \cdots + \alpha_r = \alpha \in \Phi$ then
$\alpha_1 = \alpha$ or there exists $i$ with $2 \le i \le r$ such
that $\alpha_1 + \alpha_i \in \Phi \cup \{0\}$. $\Box$
\label{lem:ro}
\end{lemma}
\begin{lemma} {\rm (\cite[Ch. 6, 1.11, Proposition 31]{Bou} \cite[p.
84]{Hu})} If $\Phi$ is irreducible and $\tilde{\alpha} = \sum_{i=1}^{\ell}
c_i \, \sigma_i$ then $\sum_{i=1}^{\ell} c_i = h-1$.
\label{lem:h}
\end{lemma}
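Lemma \ref{lem:h} is also easy to verify directly from the tables of highest roots. In the sketch below the coefficient vectors of $\tilde{\alpha}$ in the simple roots are quoted from the standard tables, so they are assumptions of the snippet rather than computed data.

```python
# (coefficients of the highest root in the simple-root basis, Coxeter
# number h), quoted from the standard tables for a few irreducible types.
HIGHEST_ROOT = {
    "A4": ([1, 1, 1, 1], 5),
    "B3": ([1, 2, 2], 6),
    "C3": ([2, 2, 1], 6),
    "G2": ([3, 2], 6),
    "F4": ([2, 3, 4, 2], 12),
}

# The coefficients of the highest root sum to h - 1 in each case.
for name, (coeffs, h) in HIGHEST_ROOT.items():
    assert sum(coeffs) == h - 1, name
```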
We denote by ${\mathcal A}_{\Phi}$ the
\emph{Coxeter arrangement} associated to $\Phi$, i.e. the collection
of linear hyperplanes $H_\alpha$ in $V$ with $\alpha \in \Phi$, and
by $W$ the corresponding \emph{Weyl group}, generated by the reflections
in these hyperplanes. Thus $W$ is finite and minimally generated by the
set of simple reflections, it leaves $\Phi$ invariant and acts simply
transitively on the set of regions of ${\mathcal A}_\Phi$, called \emph{chambers}.
The \emph{fundamental chamber} is the region defined by the inequalities
$0 < (\alpha, x)$ for $\alpha \in \Phi^+$. A subset of $V$ is called
\emph{dominant} if it is contained in the fundamental chamber. The
\emph{coroot lattice} $\check{Q}$ of $\Phi$ is the ${\mathbb Z}$-span of the
set of coroots
\[ \Phi^\vee = \left\{ \frac{2 \alpha}{(\alpha,\alpha)}: \ \alpha \in
\Phi \right\}. \]
From now on we assume for simplicity that $\Phi$ is irreducible. The
group $W$ acts on the lattice $\check{Q}$ and on its sublattice $p \,
\check{Q}$, hence it also acts on the quotient $T_m (\Phi) = \check{Q}
/ p \, \check{Q}$. We denote by $O_m (\Phi)$ the set of orbits of the
$W$-action on $T_m (\Phi)$ and use the notation $T (\Phi)$ and $O
(\Phi)$ when $m=1$. We denote
by $\widetilde{{\mathcal A}}_\Phi$ the \emph{affine Coxeter arrangement},
which is the infinite hyperplane arrangement in $V$ consisting of the
hyperplanes $H_{\alpha, k}$ for $\alpha \in \Phi$ and $k \in {\mathbb Z}$, and
by $W_a$ the \emph{affine Weyl group}, generated by the reflections in
the hyperplanes of $\widetilde{{\mathcal A}}_\Phi$. The group $W_a$ is the
semidirect product of $W$ and the translation group in $V$ corresponding
to the coroot lattice $\check{Q}$ and is minimally generated by the set
$\{s_0, s_1,\dots,s_{\ell}\}$ of \emph{simple affine reflections}, where
$s_0$ is the reflection in the hyperplane $H_{\widetilde{\alpha}, 1}$.
For $w \in W_a$ and $0 \le i \le \ell$, the reflection $s_i$ is a
\emph{right ascent} of $w$ if $\ell (ws_i) > \ell(w)$, where $\ell (w)$
is the length of the shortest expression of $w$ as a product of simple
affine reflections.
The group $W_a$ acts simply transitively on the set of regions of
$\widetilde{{\mathcal A}}_{\Phi}$, called \emph{alcoves}. The \emph{fundamental
alcove} of $\widetilde{{\mathcal A}}_{\Phi}$ can be defined as
\[ A_\circ = \{ x \in V: \, 0 < (\sigma_i, x ) \
{\rm for} \ 1 \le i \le \ell \ {\rm and} \ (\tilde{\alpha}, x) <
1\}. \]
Note that every alcove can be written as $w A_{\circ}$ for a unique $w
\in W_a$. Moreover, given $\alpha \in \Phi^+$, there exists a unique
integer $r$, denoted $r(w, \alpha)$, such that $r-1 < (\alpha, x) < r$
holds for all $x \in w A_\circ$. The next lemma is a reformulation of
the main result of \cite{Sh0}.
\begin{lemma} \mbox{\rm (\cite[Theorem 5.2]{Sh0})}
Let $r_\alpha$ be an integer for each $\alpha \in \Phi^+$. There exists
$w \in W_a$ such that $r(w, \alpha) = r_\alpha$ for each $\alpha \in
\Phi^+$ if and only if
\begin{equation}
r_\alpha + r_\beta - 1 \le r_{\alpha + \beta} \le r_\alpha + r_\beta
\label{eq:alc}
\end{equation}
for all $\alpha, \beta \in \Phi^+$ with $\alpha + \beta \in \Phi^+$.
\label{lem:alc}
\end{lemma}
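For example, let $\Phi = A_2$, with simple roots $\sigma_1, \sigma_2$ and
highest root $\tilde{\alpha} = \sigma_1 + \sigma_2$. Every $x \in A_\circ$
satisfies $0 < (\alpha, x) < 1$ for all $\alpha \in \Phi^+$, so taking $w$
to be the identity we have $r(w, \alpha) = 1$ for all $\alpha \in \Phi^+$,
in accordance with (\ref{eq:alc}). On the other hand no alcove has
$r(w, \sigma_1) = r(w, \sigma_2) = 1$ and $r(w, \tilde{\alpha}) = 3$, since
the second inequality in (\ref{eq:alc}) fails for this triple.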
We say that two open regions in $V$ are \emph{separated} by
a hyperplane $H \in \widetilde{{\mathcal A}}_{\Phi}$ if they lie in different
half-spaces relative to $H$. If $R$ is a region of a subarrangement of
$\widetilde{{\mathcal A}}_{\Phi}$ or the closure of such a region (in particular,
if $R$ is a chamber or an alcove), we refer to the hyperplanes of
$\widetilde{{\mathcal A}}_\Phi$ which support facets of the closure of $R$ as
the \emph{walls} of $R$.
\noindent
{\bf Generalized cluster complexes.} Let $\Phi$ be crystallographic
(possibly reducible) of rank $\ell$. The generalized cluster complex
$\Delta^m (\Phi)$ is an abstract simplicial complex on the vertex set
$\Phi^m_{\ge -1}$ consisting of the negative simple roots and $m$ copies
of each positive root; we refer to \cite[Section 1.2]{FR2} for the
definition. It is a pure complex of dimension $\ell-1$ \cite[Proposition
1.7]{FR2}. If $\Phi$ is a direct product $\Phi = \Phi_1 \times \Phi_2$
then $\Delta^m (\Phi)$ is the simplicial join of $\Delta^m (\Phi_1)$ and
$\Delta^m (\Phi_2)$. We denote by $\Delta^m_+ (\Phi)$ the induced
subcomplex of $\Delta^m (\Phi)$ on the set of vertices obtained from
$\Phi^m_{\ge -1}$ by removing the negative simple roots and call this
simplicial complex the \emph{positive part} of $\Delta^m (\Phi)$. For $0
\le i \le \ell$ we denote by $f_{i-1} (\Delta^m (\Phi))$ and
$f_{i-1} (\Delta^m_+ (\Phi))$ the number of $(i-1)$-dimensional faces of
the complex $\Delta^m (\Phi)$ and $\Delta^m_+ (\Phi)$, respectively. These
numbers are related to the $h_i (\Delta^m (\Phi))$ and $h_i (\Delta^m_+
(\Phi))$ by the equations
\begin{equation}
\sum_{i=0}^\ell \, f_{i-1} (\Delta^m (\Phi)) (x-1)^{\ell-i} = \sum_{i=0}^\ell
\, h_i (\Delta^m (\Phi)) \, x^{\ell-i}
\label{fFR}
\end{equation}
and
\begin{equation}
\sum_{i=0}^\ell \, f_{i-1} (\Delta^m_+ (\Phi)) (x-1)^{\ell-i} =
\sum_{i=0}^\ell \, h_i (\Delta^m_+ (\Phi)) \, x^{\ell-i}
\label{f+FR}
\end{equation}
respectively.
Following \cite{FR2, Tz} we give explicit combinatorial descriptions of the
complexes $\Delta^m (\Phi)$ and $\Delta^m_+ (\Phi)$ when $\Phi$ has type $A$,
$B$ or $C$. For $\Phi = A_{n-1}$ let ${\bf P}$ be a convex polygon with $mn+2$
vertices. A diagonal of ${\bf P}$ is called \emph{$m$-allowable} if it divides
${\bf P}$ into two polygons each with number of vertices congruent to 2 mod
$m$. Vertices of $\Delta^m (\Phi)$ are the $m$-allowable diagonals of ${\bf P}$
and faces are the sets of pairwise noncrossing diagonals of this kind. For
$\Phi = B_n$ or $C_n$ let ${\bf Q}$ be a centrally symmetric convex polygon with
$2mn+2$ vertices. A vertex of $\Delta^m (\Phi)$ is either a diameter of
${\bf Q}$, i.e. a diagonal connecting antipodal vertices, or a pair of
$m$-allowable diagonals related by a half-turn about the center of ${\bf Q}$.
A set of vertices of $\Delta^m (\Phi)$ forms a face if the diagonals of
${\bf Q}$ defining these vertices are pairwise noncrossing. In all cases the
explicit bijection of $\Phi^m_{\ge -1}$ with the set of allowable diagonals
of ${\bf P}$ or ${\bf Q}$ just described is analogous to the one given in
\cite[Section 3.5]{FZ} for the usual cluster complex $\Delta (\Phi)$, so
that the negative simple roots form an $m$-snake of allowable diagonals in
${\bf P}$ or ${\bf Q}$ and $\Delta^m_+ (\Phi)$ is the subcomplex of $\Delta^m
(\Phi)$ obtained by removing the vertices in the $m$-snake.
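For example, let $\Phi = A_2$ and $m = 2$, so that ${\bf P}$ is an octagon.
A diagonal of ${\bf P}$ whose endpoints are at distance $d$ apart along the
boundary divides ${\bf P}$ into polygons with $d+1$ and $9-d$ vertices, both
congruent to $2$ modulo $2$ exactly when $d = 3$ or $d = 5$. Hence the
$2$-allowable diagonals are the eight diagonals of the form $(i, i+3)$,
matching the cardinality of $\Phi^2_{\ge -1}$, and two of them cross if and
only if one is the translate of the other by one or two vertices. It
follows that $f_{-1} (\Delta^2 (A_2)) = 1$, $f_0 (\Delta^2 (A_2)) = 8$ and
$f_1 (\Delta^2 (A_2)) = 12$ and hence, by (\ref{fFR}), that $h_i (\Delta^2
(A_2)) = 1, 6, 5$ for $i = 0, 1, 2$, respectively.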
The \emph{negative part} of a face $c$ of $\Delta^m (\Phi)$
is the set of indices $J \subseteq I$, where $\Phi = \Phi_I$, which correspond
to the negative simple roots contained in $c$. The next lemma appears as
\cite[Proposition 3.6]{FZ} in the case $m=1$ and follows from the explicit
description of the relevant complexes in the remaining cases.
\begin{lemma}
Assume that either $m=1$ or $\Phi_I$ has type $A, B$ or $C$. For any $J
\subseteq I$ the map $c \mapsto c {\setminus} \, \{-\alpha_i: i \in J\}$
is a bijection from the set of faces of $\Delta^m (\Phi_I)$ with negative
part $J$ to the set of faces of $\Delta^m_+ (\Phi_{I {\setminus} J})$. In particular
\begin{equation}
f_{k-1} (\Delta^m (\Phi_I)) = \sum_{J \subseteq I} \ f_{k-|J|-1}
(\Delta^m_+ (\Phi_{I {\setminus} J})),
\label{ff+FR}
\end{equation}
where $f_{i-1} (\Delta) = 0$ if $i < 0$ for any complex $\Delta$. $
\Box$
\label{lem:FR}
\end{lemma}
For $m=1$ essentially the same equation as (\ref{ff+FR}) has appeared in the
context of quiver representations in \cite[Section 6]{MRZ}.
\noindent
{\bf Regions of ${\mathcal A}^m_\Phi$ and chains of filters.} Let $\Phi$ be
irreducible and crystallographic of rank $\ell$. For $0 \le i \le \ell$ let
$h_i (\Phi, m)$ be the number of dominant regions $R$ of ${\mathcal A}_\Phi^m$ for
which exactly $\ell - i$ walls of $R$ of the form $H_{\alpha, m}$ separate
$R$ from $A_\circ$, as in Section \ref{intro}. We recall another combinatorial
interpretation of $h_i (\Phi, m)$ from \cite{Ath2} using slightly different
terminology. We call a decreasing chain
$$\Phi^+ = {\mathcal I}_0 \supseteq {\mathcal I}_1 \supseteq {\mathcal I}_2 \supseteq \cdots \supseteq
{\mathcal I}_m$$
of filters in $\Phi^+$
a \textit{geometric chain of filters} of length $m$ if (\ref{bi2}) and
(\ref{bi1}) hold under the same conventions as in Section \ref{intro} (the
term \textit{co-filtered chain of dual order ideals} was used in
\cite{Ath2} instead). A positive root $\alpha$ is \emph{indecomposable} of
\emph{rank} $m$ with respect to this chain if $\alpha \in {\mathcal I}_m$ and it is
not possible to write $\alpha = \beta + \gamma$ with $\beta \in {\mathcal I}_i$ and
$\gamma \in {\mathcal I}_j$ for indices $i, j \ge 0$ with $i + j = m$. Let $R_{\mathcal I}$
be the set of points $x \in V$ which satisfy
\begin{equation}
\begin{tabular}{ll} $(\alpha, x) > r$, & {\rm if} \ $\alpha \in {\mathcal I}_r$
\\ $0 < (\alpha, x) < r$, & {\rm if} \ $\alpha \in {\mathcal J}_r$
\end{tabular}
\label{map}
\end{equation}
for $0 \le r \le m$, where ${\mathcal J}_r = \Phi^+ \, {\setminus} {\mathcal I}_r$. The following
statement combines parts of Theorems 3.6 and 3.11 in \cite{Ath2}.
\begin{theorem}
The map ${\mathcal I} \mapsto R_{\mathcal I}$ is a bijection from the set of geometric chains
of filters of length $m$ in $\Phi^+$ to the set of dominant regions of
${\mathcal A}^m_\Phi$. Moreover a positive root $\alpha$ is indecomposable of rank $m$
with respect to ${\mathcal I}$ if and only if $H_{\alpha, m}$ is a wall of $R_{\mathcal I}$
which separates $R_{\mathcal I}$ from $A_\circ$.
In particular $h_i (\Phi, m)$ is equal to the number of geometric chains
of filters in $\Phi^+$ of length $m$ having $\ell-i$ indecomposable elements
of rank $m$.
$
\Box$
\label{thm:ath2}
\end{theorem}
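For example, let $\Phi = A_2$, with $\Phi^+ = \{\sigma_1, \sigma_2,
\tilde{\alpha}\}$, and $m = 1$. A geometric chain of filters of length $1$
is a single filter ${\mathcal I}_1$ in $\Phi^+$ and there are five of these, namely
$\emptyset$, $\{\tilde{\alpha}\}$, $\{\sigma_1, \tilde{\alpha}\}$,
$\{\sigma_2, \tilde{\alpha}\}$ and $\Phi^+$, in bijection with the five
dominant regions of ${\rm Cat}_{A_2}$. These chains have $0, 1, 1, 1$ and $2$
indecomposable elements of rank $1$, respectively; for instance
$\tilde{\alpha}$ is decomposable with respect to ${\mathcal I}_1 = \{\sigma_1,
\tilde{\alpha}\}$, since $\tilde{\alpha} = \sigma_1 + \sigma_2$ with
$\sigma_1 \in {\mathcal I}_1$ and $\sigma_2 \in {\mathcal I}_0$. Hence $h_i (A_2, 1) = 1, 3,
1$ for $i = 0, 1, 2$, respectively.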
By modifying the definition given earlier or using the interpretation in
the last statement of the previous theorem we can define the numbers $h_i
(\Phi, m)$ when $\Phi$ is reducible as well. Clearly
\[ h_k (\Phi_1 \times \Phi_2, m) = \sum_{i + j = k} \ h_i (\Phi_1,
m) \, h_j (\Phi_2, m) \]
for any crystallographic root systems $\Phi_1, \Phi_2$.
\section{Chains of ideals, bounded regions and maximal alcoves}
\label{som}
In this section we generalize some of the results of Sommers \cite{So} on
bounded dominant regions of ${\rm Cat}_\Phi$ and positive filters in $\Phi^+$ to
bounded dominant regions of ${\mathcal A}_\Phi^m$ and positive geometric chains of
ideals and establish the equality of the numbers appearing in (i) and (ii)
in the statement of Theorem \ref{thm1}. The results of this and the following
section are analogues of the results of Sections 3 and 4 of \cite{Ath2}
on the set of all dominant regions of ${\mathcal A}_\Phi^m$. Their proofs are
obtained by minor adjustments from those of \cite{Ath2}, suggested by the
modifications of the relevant definitions, and thus are only sketched or
omitted.
Let $\Phi$ be irreducible and crystallographic of rank $\ell$ and let
${\mathcal J}$ be a positive geometric chain of ideals
\[ \emptyset = {\mathcal J}_0 \subseteq {\mathcal J}_1 \subseteq {\mathcal J}_2 \subseteq \cdots
\subseteq {\mathcal J}_m \]
in $\Phi^+$ of length $m$, so that (\ref{bi2}) and (\ref{bi1}) hold,
where ${\mathcal I}_i = \Phi^+ {\setminus} \, {\mathcal J}_i$, and $\Pi \subseteq {\mathcal J}_m$. We
define
\[ r_\alpha({\mathcal J}) = \min \{r_1 + r_2 + \cdots + r_k: \alpha = \alpha_1
+ \alpha_2 + \cdots + \alpha_k \ {\rm with} \ \alpha_i \in {\mathcal J}_{r_i} \
{\rm for \ all} \ i\} \]
for any $\alpha \in \Phi^+$. Observe that $r_\alpha({\mathcal J})$ is well defined
since $\Pi \subseteq {\mathcal J}_m$ and that $r_\alpha ({\mathcal J}) \le r$ for
$\alpha \in {\mathcal J}_r$, with $r_\alpha ({\mathcal J}) = 1$ if and only if $\alpha
\in {\mathcal J}_1$.
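For instance, let $\Phi = A_2$, with $\Phi^+ = \{\sigma_1, \sigma_2,
\tilde{\alpha}\}$, let $m = 2$ and consider the chain with ${\mathcal J}_1 =
\{\sigma_1\}$ and ${\mathcal J}_2 = \{\sigma_1, \sigma_2\}$, which one checks is a
positive geometric chain of ideals. Then $r_{\sigma_1} ({\mathcal J}) = 1$,
$r_{\sigma_2} ({\mathcal J}) = 2$ and $r_{\tilde{\alpha}} ({\mathcal J}) = 3 > m$, the
minimum in the definition being attained by the decomposition
$\tilde{\alpha} = \sigma_1 + \sigma_2$ with $\sigma_1 \in {\mathcal J}_1$ and
$\sigma_2 \in {\mathcal J}_2$.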
\begin{lemma}
If $\alpha = \alpha_1 + \alpha_2 + \cdots + \alpha_k \in \Phi^+$ and
$\alpha_i \in \Phi^+$ for all $i$ then
\[ r_\alpha({\mathcal J}) \, \le \, \sum_{i=1}^k \, r_{\alpha_i} ({\mathcal J}). \]
\label{lem0}
\end{lemma}
\noindent
\emph{Proof.}
This is clear from the definition.
$
\Box$
\begin{lemma}
Let $\alpha \in \Phi^+$ and $r_\alpha({\mathcal J}) = r$.
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] If $r \le m$ then $\alpha \in {\mathcal J}_r$.
\item[{\rm (ii)}] If $r > m$ then there exist $\beta, \gamma \in \Phi^+$ with
$\alpha = \beta + \gamma$ and $r = r_\beta({\mathcal J}) + r_\gamma({\mathcal J})$. Moreover
we may choose $\beta$ so that $r_\beta ({\mathcal J}) \le m$.
\end{enumerate}
\label{lem1}
\end{lemma}
\noindent
\emph{Proof.}
Analogous to the proof of \cite[Lemma 3.2]{Ath2}.
$
\Box$
\begin{lemma}
If $\alpha, \beta, \alpha + \beta \in \Phi^+$ and $a, b$ are
integers such that $r_{\alpha + \beta}({\mathcal J}) \le a+b$ then
$r_\alpha({\mathcal J}) \le a$ or $r_\beta({\mathcal J}) \le b$.
\label{lem:cp}
\end{lemma}
\noindent
\emph{Proof.}
By induction on $r_{\alpha + \beta}({\mathcal J})$, as in the proof of \cite[Lemma
3.3]{Ath2}.
$
\Box$
\begin{corollary}
We have
\[ r_\alpha({\mathcal J}) + r_\beta({\mathcal J}) - 1 \, \le \, r_{\alpha + \beta}({\mathcal J}) \,
\le \, r_\alpha({\mathcal J}) + r_\beta({\mathcal J}) \]
whenever $\alpha, \beta, \alpha + \beta \in \Phi^+$.
\label{cor:k}
\end{corollary}
\noindent
\emph{Proof.}
The second inequality is a special case of Lemma \ref{lem0} and the first
follows from Lemma \ref{lem:cp} letting $a = r_\alpha ({\mathcal J}) - 1$ and $b =
r_{\alpha + \beta}({\mathcal J}) - a$.
$
\Box$
We denote by $R_{\mathcal J}$ the set of points $x \in V$ which satisfy the
inequalities in (\ref{map}) (thus we allow a slight abuse of notation
since the same set was denoted by $R_{\mathcal I}$ in Section \ref{pre}, where
${\mathcal I}$ is the chain of complementary filters ${\mathcal I}_i$). Since $\Pi \subseteq
{\mathcal J}_m$ we have $0 < (\sigma_i, x) < m$ for all $1 \le i \le \ell$ and
$x \in R_{\mathcal J}$ and therefore $R_{\mathcal J}$ is bounded.
\begin{proposition}
There exists a unique $w \in W_a$ such that $r(w, \alpha) = r_\alpha({\mathcal J})$
for $\alpha \in \Phi^+$. Moreover, $w A_\circ \subseteq R_{\mathcal J}$. In particular,
$R_{\mathcal J}$ is nonempty.
\label{prop:min}
\end{proposition}
\noindent
\emph{Proof.}
The existence in the first statement follows from Lemma \ref{lem:alc} and
Corollary \ref{cor:k} while uniqueness is obvious.
For the second statement let $\alpha \in \Phi^+$ and $1 \le r \le m$. Part (i)
of Lemma \ref{lem1} implies that $r_\alpha ({\mathcal J}) \le r$ if and only if $\alpha
\in {\mathcal J}_r$. Hence from the inequalities
\[ r_\alpha ({\mathcal J}) - 1 \, < \, (\alpha, x) \, < \, r_\alpha ({\mathcal J}), \]
which hold for $x \in w A_\circ$, we conclude that $w A_\circ \subseteq
R_{\mathcal J}$.
$
\Box$
Let $\psi$ be the map which assigns the set $R_{\mathcal J}$ to a positive geometric
chain of ideals ${\mathcal J}$ in $\Phi^+$ of length $m$. Conversely, given a bounded
dominant region $R$ of ${\mathcal A}_\Phi^m$ let
$\phi(R)$ be the sequence $\emptyset = {\mathcal J}_0 \subseteq {\mathcal J}_1 \subseteq {\mathcal J}_2
\subseteq \cdots \subseteq {\mathcal J}_m$ where ${\mathcal J}_r$ is the set of $\alpha \in
\Phi^+$ for which $(\alpha, x) < r$ holds in $R$. Clearly each ${\mathcal J}_r$ is an
ideal in $\Phi^+$.
\begin{theorem}
The map $\psi$ is a bijection from the set of positive geometric chains of
ideals in $\Phi^+$ of length $m$ to the set of bounded dominant regions of
${\mathcal A}_\Phi^m$, and the map $\phi$ is its inverse.
\label{cor:bij1}
\end{theorem}
\noindent
\emph{Proof.}
That $\psi$ is well defined follows from Proposition \ref{prop:min},
which guarantees that $R_{\mathcal J}$ is nonempty (and bounded). To check that
$\phi$ is well defined observe that if $R$ is a bounded dominant region
of ${\mathcal A}_\Phi^m$ and if $(\alpha, x) < i$ and $(\beta,
x) < j$ hold for $x \in R$ then $(\alpha + \beta, x) < i+j$ must hold for
$x \in R$, so that $\phi({\mathcal J})$ satisfies (\ref{bi2}). Similarly, $\phi({\mathcal J})$
satisfies (\ref{bi1}). That $\Pi \subseteq {\mathcal J}_m$ follows from \cite[Lemma
4.1]{Ath1}. It is clear that $\psi$ and $\phi$ are inverses of each other.
$
\Box$
Let $R = R_{\mathcal J}$ be a bounded dominant region of ${\mathcal A}_\Phi^m$, where ${\mathcal J}
= \phi(R)$. Let $w_R$ denote the element of the affine Weyl group $W_a$
which is assigned to ${\mathcal J}$ in Proposition \ref{prop:min}. The following
proposition implies that $w_R A_\circ$ is the alcove in $R$ which is the
furthest away from $A_\circ$. In the special case $m=1$ the existence
of such an alcove was established by Sommers \cite[Proposition 5.4]{So}.
\begin{proposition}
Let $R$ be a bounded dominant region of ${\mathcal A}_\Phi^m$. The element $w_R$
is the unique $w \in W_a$ such that
$w A_\circ \subseteq R$ and whenever $\alpha \in \Phi^+$, $r \in {\mathbb Z}$
and $(\alpha, x) > r$ holds for some $x \in R$ we have $(\alpha, x)
> r$ for all $x \in w A_\circ$.
\label{prop:wR}
\end{proposition}
\noindent
\emph{Proof.}
Analogous to the proof of \cite[Proposition 3.7]{Ath2}.
$
\Box$
We now introduce the notion of an indecomposable element with respect
to the increasing chain of ideals ${\mathcal J}$.
\begin{definition}
Given $1 \le r \le m$, a root $\alpha \in \Phi^+$ is indecomposable of
rank $r$ with respect to ${\mathcal J}$ if $\alpha \in {\mathcal J}_r$ and
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] $r_\alpha({\mathcal J}) = r$,
\item[{\rm (ii)}] it is not possible to write $\alpha = \beta + \gamma$ with
$\beta \in {\mathcal J}_i$ and $\gamma \in {\mathcal J}_j$ for indices $i, j \ge 1$ with
$i + j = r$ and
\item[{\rm (iii)}] if $r_{\alpha + \beta} ({\mathcal J}) = t \le m$ for some $\beta
\in \Phi^+$ then $\beta \in {\mathcal J}_{t-r}$.
\end{enumerate}
\label{def}
\end{definition}
Observe that, by part (i) of Lemma \ref{lem1}, the assumption $\alpha
\in {\mathcal J}_r$ in this definition is actually implied by condition (i). For
$r=m$ the definition is equivalent to the one proposed in Section
\ref{intro}, as the following lemma shows.
\begin{lemma}
A positive root $\alpha$ is indecomposable of rank $m$ with respect to
${\mathcal J}$ if and only if $\alpha$ is a maximal element of ${\mathcal J}_m {\setminus} {\mathcal J}_{m-1}$
and it is not possible to write $\alpha = \beta + \gamma$ with $\beta \in
{\mathcal J}_i$ and $\gamma \in {\mathcal J}_j$ for indices $i, j \ge 1$ with $i + j = m$.
\label{lem:m}
\end{lemma}
\noindent
\emph{Proof.}
Suppose that $\alpha \in {\mathcal J}_m$ is indecomposable of rank $m$. Since
$r_\alpha({\mathcal J}) = m$ we must have $\alpha \notin {\mathcal J}_{m-1}$. Hence to show
that $\alpha$ satisfies the condition in the statement of the lemma
it suffices to show that $\alpha$ is maximal in ${\mathcal J}_m$. If not then by
Lemma \ref{lem:ro} (i) there exists $\beta \in \Phi^+$ such that $\alpha
+ \beta \in {\mathcal J}_m$. Then clearly $r_{\alpha + \beta} ({\mathcal J}) \le m$ and
$r_{\alpha + \beta} ({\mathcal J}) \ge m$ by Corollary \ref{cor:k}. Hence
$r_{\alpha + \beta} ({\mathcal J}) = m$ and condition (iii) of Definition
\ref{def} leads to a contradiction.
For the converse, suppose that $\alpha \in {\mathcal J}_m$ satisfies the condition
in the statement of the lemma. In view of part (i) of Lemma \ref{lem1},
condition (iii) in Definition \ref{def}
is satisfied since $\alpha$ is assumed to be maximal in ${\mathcal J}_m$. Hence to
show that $\alpha$ is indecomposable of rank $m$ it suffices to show that
$r_\alpha({\mathcal J}) = m$. This is implied by the assumption that $\alpha \notin
{\mathcal J}_{m-1}$ and part (i) of Lemma \ref{lem1}.
$
\Box$
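For instance, let $\Phi = A_2$ and $m = 1$. The positive geometric chains
of ideals of length $1$ are the two ideals ${\mathcal J}_1 = \Pi$ and ${\mathcal J}_1 =
\Phi^+$ and, by Lemma \ref{lem:m}, the first has the two indecomposable
elements $\sigma_1, \sigma_2$ of rank $1$ while the second has only
$\tilde{\alpha}$, the decomposition condition being vacuous for $m = 1$.
In particular, by Theorem \ref{cor:bij1}, there are exactly two bounded
dominant regions of ${\rm Cat}_{A_2}$.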
\begin{lemma}
Suppose that $\alpha$ is indecomposable with respect to ${\mathcal J}$.
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] We have $r_\alpha({\mathcal J}) = r_\beta({\mathcal J}) + r_\gamma({\mathcal J}) - 1$
whenever $\alpha = \beta + \gamma$ with $\beta, \gamma \in \Phi^+$.
\item[{\rm (ii)}] We have $r_\alpha({\mathcal J}) + r_\beta({\mathcal J}) = r_{\alpha +
\beta}({\mathcal J})$ whenever $\beta, \alpha + \beta \in \Phi^+$.
\end{enumerate}
\label{lem:ind}
\end{lemma}
\noindent
\emph{Proof.}
Analogous to the proof of \cite[Lemma 3.10]{Ath2}. For part (ii), letting
$r_\alpha({\mathcal J}) = r$ and $r_{\alpha + \beta}({\mathcal J}) = t$, we prove instead
that $r_\beta({\mathcal J}) \le t-r$. This implies the result by Corollary
\ref{cor:k}.
$
\Box$
The following theorem explains the connection between indecomposable
elements of ${\mathcal J}$ and walls of $R_{\mathcal J}$.
\begin{theorem}
If ${\mathcal J}$ is a positive geometric chain of ideals in $\Phi^+$ of length
$m$ with corresponding region $R = R_{\mathcal J}$ and $1 \le r \le m$ then the
following sets are equal:
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] the set of indecomposable roots $\alpha \in \Phi^+$ with
respect to ${\mathcal J}$ of rank $r$,
\item[{\rm (ii)}] the set of $\alpha \in \Phi^+$ such that $H_{\alpha, r}$
is a wall of $R$ which does not separate $R$ from $A_\circ$ and
\item[{\rm (iii)}] the set of $\alpha \in \Phi^+$ such that $H_{\alpha, r}$
is a wall of $w_R A_\circ$ which does not separate $w_R A_\circ$ from
$A_\circ$.
\end{enumerate}
\label{mylemma}
\end{theorem}
\noindent
\emph{Proof.}
We prove that $F_r (R) \subseteq F_r ({\mathcal J}) \subseteq F_r (w_R) \subseteq
F_r (R)$ for the three sets defined in the statement of the theorem as
in the proof of \cite[Theorem 3.11]{Ath2}, replacing the inequalities
$(\alpha, x ) > k$ which appear there by $(\alpha, x ) < r$ and recalling
from the proof of Proposition \ref{prop:wR} that $(\alpha, x) < \,
r_\alpha ({\mathcal J})$ holds for all $\alpha \in \Phi^+$ and $x \in R_{\mathcal J}$.
$
\Box$
We denote by $W_m (\Phi)$ the subset of $W_a$ consisting of the
elements $w_R$ for the bounded dominant regions $R$ of ${\mathcal A}_\Phi^m$;
see Figure \ref{figure1} for the case $\Phi = A_2$ and $m=2$. We
abbreviate this set as $W (\Phi)$ in the case $m=1$. The elements of
$W (\Phi)$ are called \emph{maximal} in \cite{So}.
\begin{corollary}
For any nonnegative integers $i_1, i_2,\dots,i_m$ the following
are equal:
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] the number of positive geometric chains of ideals in $\Phi^+$ of
length $m$ having $i_r$ indecomposable elements of rank $r$ for each
$1 \le r \le m$,
\item[{\rm (ii)}] the number of bounded dominant regions $R$ of ${\mathcal A}_\Phi^m$
such that $i_r$ walls of $R$ of the form $H_{\alpha, r}$ do not separate
$R$ from $A_\circ$ for each $1 \le r \le m$ and
\item[{\rm (iii)}] the number of $w \in W_m (\Phi)$ such that $i_r$ walls of
$w A_\circ$ of the form $H_{\alpha, r}$ do not separate $w A_\circ$ from
$A_\circ$ for each $1 \le r \le m$.
\end{enumerate}
\label{cor:ik}
\end{corollary}
\noindent
\emph{Proof.}
Combine Theorems \ref{cor:bij1} and \ref{mylemma}.
$
\Box$
The following corollary is immediate.
\begin{corollary}
For any nonnegative integer $i$ the numbers which appear in {\rm (i)}
and {\rm (ii)} in the statement of Theorem \ref{thm1} are both equal to
the number of $w \in W_m (\Phi)$ such that $i$ walls of $w A_\circ$ of
the form $H_{\alpha, m}$ do not separate $w A_\circ$ from $A_\circ$.
$
\Box$
\label{cor:part1}
\end{corollary}
As was the case with $h_i (\Phi, m)$, the interpretation in part (ii)
of Theorem \ref{thm1} mentioned in the previous corollary or the original
definition can be used to define $h^+_i (\Phi, m)$ when $\Phi$ is
reducible. Equivalently we define
\[ h^+_k (\Phi_1 \times \Phi_2, m) = \sum_{i + j = k} \ h^+_i (\Phi_1,
m) \, h^+_j (\Phi_2, m) \]
for any crystallographic root systems $\Phi_1, \Phi_2$.
We now consider the special case $m=1$. A positive geometric chain of
ideals ${\mathcal J}$ of length $m$ in this case is simply a single ideal ${\mathcal J}$ in
$\Phi^+$ such that $\Pi \subseteq {\mathcal J}$, meaning that ${\mathcal I} = \Phi^+ \, {\setminus}
{\mathcal J}$ is a positive filter. By Lemma \ref{lem:m} the rank one indecomposable
elements of ${\mathcal J}$ are exactly the maximal elements of ${\mathcal J}$.
\begin{corollary}
For any nonnegative integer $i$ the following are equal to $h^+_{\ell-i}
(\Phi)$:
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] the number of ideals in the root poset $\Phi^+$ which contain
all simple roots and have $i$ maximal elements,
\item[{\rm (ii)}] the number of bounded dominant regions $R$ of ${\rm Cat}_\Phi$
such that $i$ walls of $R$ of the form $H_{\alpha, 1}$ do not separate
$R$ from $A_\circ$,
\item[{\rm (iii)}] the number of $w \in W (\Phi)$ such that $i$ walls of
$w A_\circ$ of the form $H_{\alpha, 1}$ do not separate $w A_\circ$ from
$A_\circ$ and
\item[{\rm (iv)}] the number of elements $w \in W (\Phi)$ having $i$ right
ascents.
\end{enumerate}
\label{cor:shi}
\end{corollary}
\noindent
\emph{Proof.}
This follows from the case $m=1$ of Corollary \ref{cor:ik} and \cite[Lemma
2.5]{Ath2}.
$
\Box$
\section{Coroot lattice points and the affine Weyl group}
\label{proof}
In this section we complete the proof of Theorem \ref{thm1} (see Corollary
\ref{cor:proof}). We assume that $\Phi$ is irreducible and crystallographic
of rank $\ell$.
As in \cite[Section 4]{Ath2}, by the reflection in $W$ corresponding to
a hyperplane $H_{\alpha, k}$ we mean the reflection in the linear hyperplane
$H_\alpha$. We let $p = mh-1$, as in Section \ref{pre}, and $D_m (\Phi) =
\check{Q} \cap p \, \overline{A_\circ}$. The following elementary lemma,
for which a detailed proof can be found in \cite[Section 7.4]{Ha}, implies
that $D_m (\Phi)$ is a set of representatives for the orbits of the
$W$-action on $T_m (\Phi)$.
\begin{lemma} \mbox{\rm (cf. \cite[Lemma 7.4.1]{Ha})}
The natural inclusion map from $D_m (\Phi)$ to the set $O_m (\Phi)$ of
orbits of the $W$-action on $T_m (\Phi)$ is a bijection.
Moreover, if $y \in D_m (\Phi)$ then the stabilizer of $y$ with respect
to the $W$-action on $T_m (\Phi)$ is the subgroup of $W$ generated by the
reflections corresponding to the walls of $p \, \overline{A_\circ}$
which contain $y$. In particular, $r(y)$ is equal to the number of
walls of $p \, \overline{A_\circ}$ which contain $y$. $
\Box$
\label{lem:hai}
\end{lemma}
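For example, let $\Phi = A_2$ and $m = 1$, so that $p = h - 1 = 2$ and
$T (A_2) = \check{Q} / 2 \check{Q}$ has four elements. The group $W$
permutes the three nonzero classes transitively, so $O (A_2)$ has two
elements, represented by the points $0$ and $\tilde{\alpha}^\vee$ of
$D_1 (A_2) = \check{Q} \cap 2 \overline{A_\circ}$. The point $0$ lies on
the two walls $H_{\sigma_1, 0}$ and $H_{\sigma_2, 0}$ of $2 \,
\overline{A_\circ}$ and is fixed by all of $W$, while
$\tilde{\alpha}^\vee$ lies only on the wall $H_{\tilde{\alpha}, 2}$ and
its stabilizer is generated by the reflection corresponding to this wall.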
We will define a bijection $\rho: W_m (\Phi) \rightarrow D_m (\Phi)$ such
that for $w \in W_m (\Phi)$, the number of walls of $w A_\circ$ of
the form $H_{\alpha, m}$ which do not separate $w A_\circ$ from $A_\circ$
is equal to the number of walls of $p \, \overline{A_\circ}$ which
contain $\rho (w)$. Let $R_f$ be the region of ${\mathcal A}^m_\Phi$ defined
by the inequalities $m-1 < (\sigma_i, x) < m$ for $1 \le i \le \ell$.
Let $w_f = w_{R_f}$ be the unique element $w$ of $W_m (\Phi)$ such
that $w A_\circ \subseteq R_f$. We define the map $\rho: W_m (\Phi)
\rightarrow \check{Q}$ by
\[ \rho (w) = (w_f \, w^{-1}) \cdot 0 \]
for $w \in W_m (\Phi)$. Observe that, by Lemma \ref{lem:h}, the alcove
$w_f A_\circ$ can be described explicitly as the open simplex in $V$
defined by the linear inequalities $(\sigma_i, x ) < m$ for $1 \le i
\le \ell$ and $(\tilde{\alpha}, x) > mh-m-1$. For any $1 \le r \le m$
we define the simplex
\[ \Sigma^r_m = \{x \in V: \, m-r \, \le \, (\sigma_i, x ) \
{\rm for}
\ 1 \le i \le \ell \ {\rm and} \ (\tilde{\alpha}, x) \, \le \,
mh-m+r-1\},\]
so that $\Sigma^m_m = p \, \overline{A_\circ}$. For any
$\ell$-dimensional simplex $\Sigma$ in $V$ bounded by hyperplanes
$H_{\alpha, k}$ in $\widetilde{{\mathcal A}}_\Phi$ with $\alpha \in \Pi \cup
\{\tilde{\alpha}\}$ we denote by $H(\Sigma, i)$ the wall of $\Sigma$
orthogonal to $\tilde{\alpha}$ or $\sigma_i$, if $i = 0$ or $i > 0$,
respectively. We write $H(w, i)$ instead of $H(w \overline{A_\circ},
i)$ for $w \in W_a$. The reader is invited to test the results that
follow in the case pictured in Figure \ref{figure1}.
\begin{theorem}
The map $\rho$ is a bijection from $W_m (\Phi)$ to $D_m (\Phi)$. Moreover
for any $w \in W_m (\Phi)$, $1 \le r \le m$ and $0 \le i \le \ell$, the
point $\rho(w)$ lies on the wall $H(\Sigma^r_m, i)$ if and only if
the wall $(w w_f^{-1}) \, H(w_f, i)$ of $w A_\circ$ is of the form
$H_{\alpha, r}$ and does not separate $w A_\circ$ from $A_\circ$.
\label{bij}
\end{theorem}
\noindent
\emph{Proof.}
Analogous to the proof of \cite[Theorem 4.2]{Ath2}.
$
\Box$
\begin{figure}
\caption{The maximal alcoves of the bounded dominant regions and the
simplex $p \, \overline{A_\circ}$ for $\Phi = A_2$ and $m = 2$.}
\label{figure1}
\end{figure}
\begin{corollary}
For any nonnegative integers $i_1, i_2,\dots,i_m$ each of the quantities
which appear in the statement of Corollary \ref{cor:ik} is equal to the
number of points in $D_m (\Phi)$ which lie in $i_r$ walls of $\Sigma^r_m$
for all $1 \le r \le m$.
\label{ik:new}
\end{corollary}
\noindent
\emph{Proof.}
This follows from Theorem \ref{bij}.
$
\Box$
The next corollary completes the proof of Theorem \ref{thm1}.
\begin{corollary}
For any $0 \le i \le \ell$ the following are equal to $h^+_{\ell-i}
(\Phi, m)$:
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] the number of points in $D_m (\Phi)$ which lie in $i$ walls
of $p \, \overline{A_\circ}$ and
\item[{\rm (ii)}] the number of $w \in W_m (\Phi)$ such that $i$ walls of
$w A_\circ$ of the form $H_{\alpha, m}$ do not separate $w A_\circ$
from $A_{\circ}$.
\end{enumerate}
\label{cor:proof}
\end{corollary}
\noindent
\emph{Proof.}
By Lemma \ref{lem:hai}, the number of orbits $x \in O_m (\Phi)$ with
rank $r(x) = i$ is equal to the number of points in $D_m (\Phi)$
which lie in $i$ walls of $p \, \overline{A_\circ}$. The statement now
follows by specializing Corollary \ref{ik:new} and recalling that
$\Sigma^m_m = p \, \overline{A_\circ}$.
$
\Box$
\begin{remark} {\rm
Part (ii) of Corollary \ref{cor:proof} implies that $h^+_\ell (\Phi, m)$
is equal to the cardinality of $\check{Q} \cap \, (mh-1) A_\circ$. An argument
similar to the one employed in \cite[Remark 4.5]{Ath2} shows that $\check{Q}
\cap \, (mh-1) A_\circ$ is equinumerous to $\check{Q} \cap \, (mh-h-1)
\overline{A_\circ}$. Therefore from Theorem \ref{thm0} (iii) we conclude
that $h^+_\ell (\Phi, m) = N^+ (\Phi, m-1)$. Since the reduced Euler
characteristic $\widetilde{\chi} (\Delta^m_+ (\Phi))$ of $\Delta^m_+
(\Phi)$ is equal to $(-1)^{\ell-1} h_\ell (\Delta^m_+ (\Phi))$, it follows
from the results of the next section that $$\widetilde{\chi} (\Delta^m_+
(\Phi)) = (-1)^{\ell-1} N^+ (\Phi, m-1)$$ if $\Phi$ has type $A$, $B$ or
$C$ (see \cite[(3.1)]{FR2} for the corresponding property of $\Delta^m
(\Phi)$).
$
\Box$
\label{rem}}
\end{remark}
The following conjecture is the positive analogue of \cite[Conjecture
3.7]{FR2}.
\begin{conjecture}
For any crystallographic root system $\Phi$ and all $m \ge 1$ the complex
$\Delta^m_+ (\Phi)$ is pure $(\ell-1)$-dimensional and shellable and has
reduced Euler characteristic equal to $(-1)^{\ell-1} N^+ (\Phi, m-1)$.
In particular it is Cohen-Macaulay and has the homotopy type of a wedge of
$N^+ (\Phi, m-1)$ spheres of dimension $\ell-1$.
\label{conj1}
\end{conjecture}
\section{The numbers $f_i (\Phi, m)$ and $f^+_i (\Phi, m)$}
\label{f}
Let $\Phi$ be a crystallographic root system of rank $\ell$ spanning the
Euclidean space $V$. We define numbers $f_i (\Phi, m)$ and $f^+_i (\Phi,
m)$ by the relations
\begin{equation}
\sum_{i=0}^\ell \, f_{i-1} (\Phi, m) (x-1)^{\ell-i} = \sum_{i=0}^\ell \,
h_i (\Phi, m) \, x^{\ell-i}
\label{f_i}
\end{equation}
and
\begin{equation}
\sum_{i=0}^\ell \, f^+_{i-1} (\Phi, m) (x-1)^{\ell-i} = \sum_{i=0}^\ell \,
h^+_i (\Phi, m) \, x^{\ell-i}
\label{f_i+}
\end{equation}
respectively. Comparing to equation (\ref{f+FR}) we see that Conjecture
\ref{conj0} for the pair $(\Phi, m)$ is equivalent to the statement that
\begin{equation}
f^+_{i-1} (\Phi, m) = f_{i-1} (\Delta^m_+ (\Phi))
\label{conj00}
\end{equation}
for all $i$, where $f_{i-1} (\Delta^m_+ (\Phi))$ is as in Section \ref{pre}.
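For instance, let $\Phi = A_2$ and $m = 1$. The two ideals in $\Phi^+$
containing $\Pi$ have two and one maximal elements, respectively, so by
Corollary \ref{cor:shi} we have $h^+_i (A_2, 1) = 1, 1, 0$ for $i = 0, 1,
2$. Equation (\ref{f_i+}) then gives $f^+_{i-1} (A_2, 1) = 1, 3, 2$ for
$i = 0, 1, 2$, which agrees with (\ref{conj00}) in this case, the complex
$\Delta^1_+ (A_2)$ being a path with three vertices and two edges.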
We will give a combinatorial interpretation to the numbers $f _{i-1} (\Phi,
m)$ and $f^+_{i-1} (\Phi, m)$ as follows. For $0 \le k \le \ell$ we denote
by ${\mathcal F}_k (\Phi, m)$ the collection of $k$-dimensional (nonempty) sets of
the form
\begin{equation}
\bigcap_{(\alpha, r) \, \in \, \Phi^+ \times \{0, 1,\dots,m\}}
\ \tilde{H}_{\alpha, r}
\label{cells}
\end{equation}
where $\tilde{H}_{\alpha, r}$ can be
\[ \cases{
H^+_{\alpha, 0}, & if \ $r=0$, \cr
H^-_{\alpha, m}, \, H^+_{\alpha, m} \ {\rm or} \ H_{\alpha, m}, &
if \ $r=m$, \cr
H^-_{\alpha, r} \ {\rm or} \ H^+_{\alpha,r}, & if \ $1 \le r < m$ } \]
and $H^-_{\alpha, r}$ and $H^+_{\alpha, r}$ denote the two open half-spaces
in $V$ defined by the inequalities $(\alpha, x) < r$ and $(\alpha, x) > r$,
respectively. Observe that each element of ${\mathcal F}_k (\Phi, m)$ is dominant.
We also denote by ${\mathcal F}^+_k (\Phi, m)$ the elements of ${\mathcal F}_k (\Phi, m)$
which are bounded subsets of $V$ or, equivalently, the sets of the form
(\ref{cells}) with $\tilde{H}_{\sigma_i, m} = H^-_{\sigma_i, m}$ or
$H_{\sigma_i, m}$ for $1 \le i \le \ell$. In the special case $m=1$ part
(iii) of the following theorem is the content of Remark 5.10 (v) in
\cite{FR1}.
\begin{theorem}
For any irreducible crystallographic root system $\Phi$ and all $m \ge 1$
and $0 \le k \le \ell$ the number $f_{k-1} (\Phi, m)$ counts
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] pairs $(R, S)$ where $R$ is a dominant region of ${\mathcal A}_\Phi^m$ and
$S$ is a set of $\ell-k$ walls of $R$ of the form $H_{\alpha, m}$ which
separate $R$ from $A_\circ$,
\item[{\rm (ii)}] pairs $({\mathcal I}, T)$ where ${\mathcal I}$ is a geometric chain of filters in
$\Phi^+$ of length $m$ and $T$ is a set of $\ell-k$ indecomposable roots of
rank $m$ with respect to ${\mathcal I}$ and
\item[{\rm (iii)}] the elements of ${\mathcal F}_k (\Phi, m)$.
\end{enumerate}
\label{thm:eleni1}
\end{theorem}
\noindent
\emph{Proof.}
From (\ref{f_i}) we have
\[ f_{k-1} (\Phi, m) = \sum_{i=0}^k \ h_i (\Phi, m) {\ell-i \choose \ell-k}
\]
which clearly implies (i) and (ii) (see Theorem \ref{thm:ath2}). To complete
the proof it suffices to give
a bijection from the set ${\mathcal R}_k (\Phi, m)$ of pairs $(R, S)$ which appear in
(i) to ${\mathcal F}_k (\Phi, m)$. Given such a pair $\tau = (R, S)$ let $g(\tau)$ be
the intersection (\ref{cells}), where $\tilde{H}_{\alpha, r}$ is chosen so
that $R \subseteq \tilde{H}_{\alpha, r}$ unless $r=m$ and $H_{\alpha, r} \in
S$, in which case $\tilde{H}_{\alpha, r} = H_{\alpha, r}$. Let $S =
\{H_{\alpha_1, m}, H_{\alpha_2, m},\dots,H_{\alpha_{\ell-k}, m}\}$ and let
$F_S$ be the intersection of the hyperplanes in $S$. It follows from
\cite[Corollary 3.14]{Ath2} that $S$ is a proper subset of the set of walls
of an alcove of $\widetilde{{\mathcal A}}_\Phi$ and hence that $F_S$ is nonempty and
$k$-dimensional. To show that $g(\tau)$ is nonempty and $k$-dimensional, so
that $g: {\mathcal R}_k (\Phi, m) \rightarrow {\mathcal F}_k (\Phi, m)$ is well defined, we need to
show that $F_S$ is not contained in any hyperplane $H_{\alpha, r}$ with
$\alpha \in \Phi^+$ and $0 \le r \le m$ other than those in $S$. So suppose
that $F_S \subseteq H_{\alpha, r}$ with $\alpha \in \Phi^+$ and $r \ge 0$.
Then there are real numbers $\lambda_1, \lambda_2,\dots,\lambda_{\ell-k}$
such that
\begin{equation}
\alpha = \lambda_1 \alpha_1 + \lambda_2 \alpha_2 + \cdots + \lambda_{\ell-k}
\alpha_{\ell-k}
\label{lambdas}
\end{equation}
and $r = m (\lambda_1 + \lambda_2 + \cdots + \lambda_{\ell-k})$. Observe
that the $\alpha_i$ are minimal elements of the last filter in the geometric
chain of filters in $\Phi^+$ corresponding to $R$ and hence that they form
an antichain in $\Phi^+$, meaning a set of pairwise incomparable elements.
It follows from the first main result of \cite{So} (see the proof of
\cite[Corollary 6.2]{AR}) that the coefficients $\lambda_i$ in
(\ref{lambdas}) are nonnegative integers. Hence either $r>m$ or $\alpha
= \alpha_i$ and $r=m$ for some $i$, so that $H_{\alpha, r} \in S$.
To show that $g$ is a bijection we will show that given $F \in {\mathcal F}_k (\Phi,
m)$ there exists a unique $\tau \in {\mathcal R}_k (\Phi, m)$ with $g(\tau)=F$. Let
$(\varpi_1^\vee, \varpi_2^\vee,\dots,\varpi_\ell^\vee)$ be the linear
basis of $V$ which is dual to $\Pi$, in the sense that
\[ (\sigma_i, \varpi_j^\vee) = \delta_{ij}. \]
Observe that if $g(\tau) = F$ with $\tau = (R, S)$, $x$ is a point in $F$
and $\epsilon_i$ are sufficiently small positive numbers then
\begin{equation}
x + \sum_{i=1}^\ell \ \epsilon_i \varpi_i^\vee \, \in \, R.
\label{epsilon}
\end{equation}
Since regions of ${\mathcal A}^m_\Phi$ are pairwise disjoint this implies uniqueness
of $R$, and hence of $\tau$. To prove the existence let $R$ be the unique
region of ${\mathcal A}^m_\Phi$ defined by (\ref{epsilon}). Equivalently $R$ can be
obtained by replacing all hyperplanes of the form $H_{\alpha, m}$ in the
intersection (\ref{cells}) defining $F$ by $H^+_{\alpha, m}$. It suffices to
show that any such hyperplane $H_{\alpha, m}$ is a wall of $R$ since then,
if $S$ is the set of hyperplanes of the form $H_{\alpha, m}$ which contain
$F$ then $\tau = (R, S) \in {\mathcal R}_k (\Phi, m)$ and $g (\tau) = F$.
Suppose on the contrary that $H_{\alpha, m} \supseteq
F$ is not a wall of $R$. It follows from Theorem \ref{thm:ath2} that
$\alpha$ is not indecomposable of rank $m$ with respect to the geometric
chain ${\mathcal I}$ of filters $\Phi^+ = {\mathcal I}_0 \supseteq {\mathcal I}_1 \supseteq \cdots
\supseteq {\mathcal I}_m$ corresponding to $R$ and hence that one can write $\alpha =
\beta + \gamma$ for some $\beta \in {\mathcal I}_i$, $\gamma \in {\mathcal I}_j$ with $i+j = m$.
Since $(\alpha, x) = m$ for $x \in F$ and $(\beta, x) > i$ and $(\gamma, x)
> j$ hold for $x \in R$, so that $(\beta, x) \ge i$ and $(\gamma, x) \ge j$
hold for $x \in F$, we must have $(\beta, x) = i$ and $(\gamma, x) = j$
for $x \in F$. However one of $i,j$ must be less than $m$ and this
contradicts the fact that $F$ can be contained in $H_{\alpha, r}$ for
$\alpha \in \Phi^+$ only if $r=m$.
$
\Box$
The proof of the next theorem is entirely similar to that of Theorem
\ref{thm:eleni1} and is omitted.
\begin{theorem}
For any irreducible crystallographic root system $\Phi$ and all $m \ge 1$
and $0 \le k \le \ell$ the number $f^+_{k-1} (\Phi, m)$ counts
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] pairs $(R, S)$ where $R$ is a dominant bounded region of
${\mathcal A}_\Phi^m$ and $S$ is a set of $\ell-k$ walls of $R$ of the form $H_{\alpha,
m}$ which do not separate $R$ from $A_\circ$,
\item[{\rm (ii)}] pairs $({\mathcal J}, T)$ where ${\mathcal J}$ is a positive geometric chain of
ideals in $\Phi^+$ of length $m$ and $T$ is a set of $\ell-k$ indecomposable
roots of rank $m$ with respect to ${\mathcal J}$ and
\item[{\rm (iii)}] the elements of ${\mathcal F}^+_k (\Phi, m)$. $
\Box$
\end{enumerate}
\label{thm:eleni2}
\end{theorem}
The reader is invited to use part (iii) of Theorems \ref{thm:eleni1} and
\ref{thm:eleni2} as well
as Figure \ref{figure1} to verify that $f_{-1} = 1$, $f_0 = 8$, $f_1 = 12$,
$f^+_{-1} = 1$, $f^+_0 = 6$ and $f^+_1 = 7$ when $\Phi = A_2$ and $m=2$.
It should be clear that apart from the necessary modifications in the
statements of part (i), Theorems \ref{thm:eleni1} and \ref{thm:eleni2}
are also valid when $\Phi$ is reducible.
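The counts just mentioned for $\Phi = A_2$ and $m=2$ can also be confirmed by brute force. The following Python sketch (our addition; the coordinates $y_i = (\alpha_i, x)$ for the simple roots and the rational sampling grid are our own choices, not part of the original development) enumerates the intersections (\ref{cells}) and classifies them by dimension and by the boundedness criterion stated after (\ref{cells}).

```python
from fractions import Fraction
from itertools import product

# Positive roots of A_2 as linear functionals in the coordinates
# y_1 = (alpha_1, x), y_2 = (alpha_2, x); the highest root gives y_1 + y_2.
ROOTS = [(1, 0), (0, 1), (1, 1)]
SIMPLE = [0, 1]          # indices of the simple roots
m = 2

def val(i, p):
    a, b = ROOTS[i]
    return a * p[0] + b * p[1]

# Sample points with coordinates (2k+1)/14 avoid every hyperplane (alpha, x) = r.
TS = [Fraction(2 * k + 1, 14) for k in range(28)]          # values in (0, 4)
GRID = [(x, y) for x in TS for y in TS]

def ok(p, c1, c2):
    """c1[i] in {-1,1}: side of (alpha_i,x)=1; c2[i] in {-1,0,1}: relation to m=2."""
    for i in range(3):
        v = val(i, p)
        if v <= 0:                       # r = 0: always the positive side
            return False
        if (v - 1) * c1[i] <= 0:
            return False
        if c2[i] == 0:
            if v != 2:
                return False
        elif (v - 2) * c2[i] <= 0:
            return False
    return True

counts, bounded = [0, 0, 0], [0, 0, 0]
for c1 in product([-1, 1], repeat=3):
    for c2 in product([-1, 0, 1], repeat=3):
        eqs = [i for i in range(3) if c2[i] == 0]
        if not eqs:
            pts = GRID
        elif len(eqs) == 1:
            i = eqs[0]
            pts = {0: [(Fraction(2), t) for t in TS],
                   1: [(t, Fraction(2)) for t in TS],
                   2: [(t, 2 - t) for t in TS]}[i]
        elif eqs == [0, 1]:
            pts = [(Fraction(2), Fraction(2))]
        else:                            # inconsistent or forces a coordinate <= 0
            pts = []
        if any(ok(p, c1, c2) for p in pts):
            dim = 2 - len(eqs)
            counts[dim] += 1
            if all(c2[i] <= 0 for i in SIMPLE):
                bounded[dim] += 1

# (f_{-1}, f_0, f_1) and the bounded analogues for (A_2, m = 2)
print(counts)    # [1, 8, 12]
print(bounded)   # [1, 6, 7]
```

Since the grid points avoid all hyperplanes $(\alpha, x) = r$ with $r \in \mathbb Z$, membership in the open half-spaces can be tested exactly in rational arithmetic, and a nonempty intersection of the given dimension always contains a sample point.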
\begin{lemma}
For any crystallographic root system $\Phi_I$ and all $m \ge 1$ and $0 \le
k \le \ell$ we have
\[ f_{k-1} (\Phi_I, m) = \sum_{J \subseteq I} \ f^+_{k-|J|-1} (\Phi_{I
{\setminus} J}, m). \]
\label{lem:ff+}
\end{lemma}
\noindent
\emph{Proof.}
For $J \subseteq I$ let $V_J$ be the linear span of the simple roots indexed
by the elements of $J$ and let $p_J: V_I \rightarrow V_{I {\setminus} J}$ be the orthogonal
projection onto $V_{I {\setminus} J}$. We define the \emph{simple part} of $F \in
{\mathcal F}_k (\Phi_I, m)$ as the set of indices $j \in I$ such that $F \subseteq
H^+_{\sigma_j, m}$. Observe that if $J$ is the simple part of $F$ then for
$\alpha \in \Phi^+$ and $x \in F$ we have
\[ \begin{tabular}{ll} $(\alpha, x) = (\alpha, p_J (x))$, &
{\rm if} \ $\alpha \in \Phi_{I {\setminus} J}$
\\ $(\alpha, x) > m$, & {\rm otherwise.}
\end{tabular} \]
It follows that $p_J$ induces a bijection from the set of elements of ${\mathcal F}_k
(\Phi_I, m)$ with simple part $J$ to ${\mathcal F}^+_{k-|J|} (\Phi_{I {\setminus} J}, m)$.
Hence counting the elements of ${\mathcal F}_k (\Phi_I, m)$ according to their simple
part proves the lemma.
$
\Box$
The same type of argument as that in the proof of the next corollary appears in the proof
of \cite[Proposition 6.1]{MRZ}.
\begin{corollary}
If for some pair $(\Phi, m)$ every parabolic subsystem of $\Phi$ satisfies
{\rm (\ref{ff+FR})} and we have $h_i (\Phi, m) = h_i (\Delta^m (\Phi))$ for
all $i$ then we have $h^+_i (\Phi, m) = h_i (\Delta^m_+ (\Phi))$ for all $i$
as well.
\label{cor:if}
\end{corollary}
\noindent
\emph{Proof.}
From the assumption we have $f_{k-1} (\Phi, m) = f_{k-1} (\Delta^m (\Phi))$
for all $k$. Equation (\ref{ff+FR}) and Lemma \ref{lem:ff+} imply that
$f^+_{k-1} (\Phi, m) = f_{k-1} (\Delta^m_+ (\Phi))$ for all $k$ via M\"obius
inversion on
the set of pairs $(k, I)$ partially ordered by letting $$(l, J) \le (k, I)
\ {\rm if \ and \ only \ if} \ J \subseteq I \ {\rm and} \ k-l = |I {\setminus} J|.$$
This is equivalent to the conclusion of the corollary.
$
\Box$
\begin{corollary}
Conjecture \ref{conj0} holds for root systems of type $A$, $B$ or $C$ and any
$m\ge 1$ and for all root systems when $m=1$.
\label{cor:conj}
\end{corollary}
\noindent
\emph{Proof.}
This follows from Lemma \ref{lem:FR}, the previous corollary and the fact
that the equality $h_i (\Phi, m) = h_i (\Delta^m (\Phi))$ can be checked
case by case from the explicit formulas given in \cite{Ath2, FR2, Tz} in
the cases under consideration.
$
\Box$
We conclude this section with a combinatorial interpretation to $f^+_{k-1}
(\Phi, m)$ similar to those provided in parts (i) and (ii) of Theorem
\ref{thm:eleni2}.
\begin{theorem}
For any irreducible crystallographic root system $\Phi$ and all $m \ge 1$
and $0 \le k \le \ell$ the number $f^+_{k-1} (\Phi, m)$ counts
\begin{enumerate}
\itemsep=0pt
\item[{\rm (i)}] pairs $(R, S)$ where $R$ is a dominant region of ${\mathcal A}_\Phi^m$
and $S$ is a set of $\ell-k$ walls of $R$ of the form $H_{\alpha,
m}$ which separate $R$ from $A_\circ$ such that $S$ contains all such
walls of $R$ with $\alpha \in \Pi$ and
\item[{\rm (ii)}] pairs $({\mathcal I}, T)$ where ${\mathcal I}$ is a geometric chain of filters in
$\Phi^+$ of length $m$ and $T$ is a set of $\ell-k$ indecomposable roots of
rank $m$ with respect to ${\mathcal I}$ which contains all simple indecomposable
roots of rank $m$ with respect to ${\mathcal I}$.
\end{enumerate}
\label{thm:eleni3}
\end{theorem}
\noindent
\emph{Proof.}
The sets in (i) and (ii) are equinumerous by Theorem \ref{thm:ath2}.
To complete the proof one can argue that the map $g$ in the proof of
Theorem \ref{thm:eleni1} restricts to a bijection from the set in (i) to
${\mathcal F}^+_k (\Phi, m)$. Alternatively, arguing as in the proof of
Corollary \ref{cor:if}, it suffices to show that
\[ f_{k-1} (\Phi_I, m) = \sum_{J \subseteq I} \
g^+_{k-|J|-1} (\Phi_{I {\setminus} J}, m), \]
where $\Phi = \Phi_I$ and $g^+_{k-1} (\Phi, m)$ denotes the cardinality
of the set of pairs, say ${\mathcal G}^+_k (\Phi, m)$, which appears in (ii). Let
${\mathcal G}_k (\Phi_I, m)$ denote the set of pairs defined in (ii) of the
statement of Theorem
\ref{thm:eleni1}. For $({\mathcal I}, T) \in {\mathcal G}_k (\Phi_I, m)$ call the set of
simple roots which are indecomposable of rank $m$ with respect to ${\mathcal I}$
and are not contained in $T$ the \emph{simple part} of $({\mathcal I}, T)$ and
for any $J \subseteq I$ denote by $\Lambda_J$ the order filter of roots
$\alpha \in \Phi^+_I$ for which $\sigma \le \alpha$ for some $\sigma \in
J$. It is straightforward to check from the definitions that the map
which sends a pair $({\mathcal I}, T)$ to $({\mathcal I} {\setminus} \Lambda_J, T)$, where ${\mathcal I}
{\setminus} \Lambda_J$ denotes the chain obtained from ${\mathcal I}$ by removing
$\Lambda_J$ from each filter of ${\mathcal I}$, induces a bijection from the set
of elements of ${\mathcal G}_k (\Phi_I, m)$ with simple part $J$ to ${\mathcal G}^+_{k-|J|}
(\Phi_{I {\setminus} J}, m)$. Therefore counting the elements of ${\mathcal G}_k (\Phi_I,
m)$ according to their simple part gives the desired equality.
$
\Box$
\section{Classical types and the case $m=1$}
\label{class}
In this section we compute the numbers $h^+_i (\Phi, m)$ and $f^+_{i-1}
(\Phi, m)$ in the cases of the classical root systems.
\begin{proposition}
The number $h^+_i (\Phi, m)$ is equal to
\begin{center}
\begin{tabular}{lr}
{\Large $\frac{1}{i+1} {n-1 \choose i} {mn-2 \choose i}$}, & {\rm if} \
$\Phi = A_{n-1}$, \\ \\
{\Large ${n \choose i} {mn-1 \choose i}$}, & {\rm if} \ $\Phi = B_n$
{\rm or} $C_n$, \\ \\
{\Large ${n \choose i} {m(n-1)-1 \choose i} + {n-2 \choose i-2}
{m(n-1) \choose i}$}, & {\rm if} \ $\Phi = D_n$.
\end{tabular}
\end{center}
\label{thm:ABCD}
\end{proposition}
\noindent
\emph{Proof.}
The proof can be obtained from that of \cite[Proposition 5.1]{Ath2} by
replacing the quantity $mh+1$ by $mh-1$ and using Theorem 1.3 (iii)
instead of \cite[Theorem 1.2 (ii)]{Ath2}.
$
\Box$
The following corollary is a straightforward consequence of Proposition
\ref{thm:ABCD} and equation (\ref{f_i+}).
\begin{corollary}
The number $f^+_{k-1} (\Phi, m)$ is equal to
\begin{center}
\begin{tabular}{lr}
{\Large $\frac{1}{k+1} {n-1 \choose k} {mn+k-1 \choose k}$}, & {\rm if} \
$\Phi = A_{n-1}$, \\ \\
{\Large ${n \choose k} {mn+k-1 \choose k}$}, & {\rm if} \ $\Phi =
B_n$ {\rm or} $C_n$,
\\ \\
{\Large ${n \choose k} {m(n-1)+k-1 \choose k} + {n-2 \choose k-2}
{m(n-1)+k-2 \choose k}$}, & {\rm if} \ $\Phi = D_n$.
\end{tabular}
\end{center}
$
\Box$
\label{cor:ABCD}
\end{corollary}
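Since the corollary is obtained from Proposition \ref{thm:ABCD} through the binomial relation (\ref{f_i+}), the two sets of formulas can be cross-checked mechanically. The following Python sketch (our addition; it assumes that (\ref{f_i+}) has the same form as (\ref{f_i}) with $h_i$ replaced by $h^+_i$) verifies the type $A$ and $B/C$ entries for small $n$ and $m$.

```python
from math import comb
from fractions import Fraction

def hplus_A(n, m, i):      # h^+_i for A_{n-1}, from the proposition
    return Fraction(comb(n - 1, i) * comb(m * n - 2, i), i + 1)

def fplus_A(n, m, k):      # f^+_{k-1} for A_{n-1}, from the corollary
    return Fraction(comb(n - 1, k) * comb(m * n + k - 1, k), k + 1)

def hplus_B(n, m, i):      # types B_n and C_n
    return comb(n, i) * comb(m * n - 1, i)

def fplus_B(n, m, k):
    return comb(n, k) * comb(m * n + k - 1, k)

# f^+_{k-1} = sum_i h^+_i * C(l-i, l-k), with l the rank of Phi
for n in range(2, 7):
    for m in range(1, 5):
        l = n - 1          # rank of A_{n-1}
        for k in range(l + 1):
            assert fplus_A(n, m, k) == sum(
                hplus_A(n, m, i) * comb(l - i, l - k) for i in range(k + 1))
        l = n              # rank of B_n, C_n
        for k in range(l + 1):
            assert fplus_B(n, m, k) == sum(
                hplus_B(n, m, i) * comb(l - i, l - k) for i in range(k + 1))
print("ok")
```

For $(\Phi, m) = (A_2, 2)$ the script reproduces $f^+_{-1} = 1$, $f^+_0 = 6$ and $f^+_1 = 7$, in agreement with the count after Theorem \ref{thm:eleni2}. The analogous check for type $D$ is omitted for brevity.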
In the case $m=1$, the following corollary for $\Phi = A_{n-1}$ and
$\Phi = B_n, C_n$ is a special case of \cite[(34)]{Ch} and \cite[(46)]{Ch},
respectively.
\begin{corollary}
The number $f_{k-1} (\Delta^m_+(\Phi))$ is equal to
\begin{center}
\begin{tabular}{lr}
{\Large $\frac{1}{k+1} {n-1 \choose k} {mn+k-1 \choose k}$}, & {\rm if} \
$\Phi = A_{n-1}$, \\ \\
{\Large ${n \choose k} {mn+k-1 \choose k}$}, & {\rm if} \ $\Phi = B_n$
{\rm or} $C_n$.
\end{tabular}
\end{center}
Moreover $$f_{k-1} (\Delta_+(D_n)) = {n \choose k} {n+k-2 \choose k} +
{n-2 \choose k-2} {n+k-3 \choose k}.$$
\label{cor:AB}
\end{corollary}
\noindent
\emph{Proof.}
Combine Corollaries \ref{cor:conj} and \ref{cor:ABCD}.
$
\Box$
\begin{remark} {\rm
The number of positive filters in $\Phi^+$ with $i$
minimal elements has been computed for the exceptional root systems
by Victor Reiner as shown in the following table.}
\end{remark}
{\scriptsize
\begin{table}[hptb]
\begin{center}
\begin{tabular}{| l | l | l | l | l | l | l | l | l |} \hline
\ \ \ \ $i$ & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \hline
$\Phi = G_2$ &1 & 4 & & & & & & \\ \hline
$\Phi = F_4$ &1 & 20 & 35 & 10 & & & & \\ \hline
$\Phi = E_6$ &1 & 30 & 135 & 175 & 70 & 7 & & \\ \hline
$\Phi = E_7$ &1 & 56 & 420 & 952 & 770 & 216 & 16 & \\ \hline
$\Phi = E_8$ &1 & 112 & 1323 & 4774 & 6622 & 3696 & 770 & 44
\\ \hline
\end{tabular}
\caption{The numbers $h^+_i (\Phi)$ for the exceptional root systems.}
\label{except}
\end{center}
\end{table}
}
\noindent
\emph{Proof of Theorem \ref{thm2}.}
We will prove the statement of the theorem without the assumption that
$\Phi$ is irreducible. Let $\ell$ be the rank of $\Phi = \Phi_I$. We
write $h_k (\Phi_I)$ instead of $h_k (\Phi_I, 1)$, so that $h_{\ell-k}
(\Phi_I)$ counts the filters in $\Phi_I^+$ with $k$ minimal elements as
well as the ideals in $\Phi_I^+$ with $k$ maximal elements. Let
$\widetilde{h}^+_k (\Phi)$ denote the number of positive filters in
$\Phi^+$ with $k$ minimal elements. Counting filters in $\Phi_I^+$ by
the set of simple roots they contain gives
\begin{equation}
h_{\ell-k} (\Phi_I) = \sum_{J \subseteq I} \ \widetilde{h}^+_{k-|J|}
(\Phi_{I {\setminus} J}).
\label{dual1}
\end{equation}
Similarly, counting ideals in $\Phi_I^+$ by the set of simple roots
they do not contain gives
\[ h_{\ell-k} (\Phi_I) = \sum_{J \subseteq I} \ h^+_{\ell-|J|-k}
(\Phi_{I {\setminus} J}). \]
Since it is known \cite{Pa} that $h_{\ell - k} (\Phi_I) = h_k (\Phi_I)$
the previous relation can also be written as
\begin{equation}
h_{\ell-k} (\Phi_I) = \sum_{J \subseteq I} \ h^+_{k - |J|} (\Phi_{I {\setminus}
J}).
\label{dual2}
\end{equation}
Comparing (\ref{dual1}) and (\ref{dual2}) and using M\"obius inversion
as in Section \ref{f} gives $\widetilde{h}^+_k (\Phi_J) = h^+_k (\Phi_J)$
for all $J \subseteq I$, which is the statement of the theorem.
$
\Box$
\section{Remarks}
\label{remarks}
\noindent
1. The following reciprocity relation
\begin{equation}
N^+ (\Phi, m-1) = (-1)^\ell N (\Phi, -m)
\label{rec}
\end{equation}
was observed by Fomin and Reading \cite[(2.12)]{FR2}. We will show that,
as suggested by S. Fomin (private communication), this relation is in
fact an instance of Ehrhart reciprocity. Let $i(n)$ be the cardinality
of $\check{Q} \cap \, n \overline{A_\circ}$ for $n \in {\mathbb N}$. It is clear
that the vertices of the simplex $\overline{A_\circ}$ have rational
coordinates in the basis $\Pi$ of $V$ of simple roots, hence also in the
basis $\Pi^\vee = \left\{ 2 \alpha / (\alpha,\alpha): \ \alpha \in
\Pi \right\}$ of $V$. Therefore the function $i(n)$ is the Ehrhart
quasi-polynomial of $\overline{A_\circ}$ with respect to the lattice
$\check{Q}$ (see \cite[Section 4.6]{Sta} for an introduction to the
theory of Ehrhart quasi-polynomials). Ehrhart reciprocity \cite[Theorem
4.6.26]{Sta} implies that
\[ (-1)^\ell i(-n) = \# \, (\check{Q} \cap \, n A_\circ) \]
for $n \in {\mathbb N}$ and hence, setting $n=mh-1$ and consulting \cite[Theorem
1.1]{Ath2}, that
\[ (-1)^\ell N (\Phi, -m) = \# \, (\check{Q} \cap \, (mh-1) A_\circ). \]
Remark \ref{rem} asserts that
\[ \# \, (\check{Q} \cap \, (mh-1) A_\circ) = N^+ (\Phi, m-1) \]
and hence (\ref{rec}) holds.
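For example (this verification is ours, not taken from \cite{FR2}), let $\Phi = A_1$ with the normalization $(\alpha, \alpha) = 2$, so that $\ell = 1$, $h = 2$, $\check{Q} = {\mathbb Z} \alpha$ and $A_\circ = \{x : \ 0 < (\alpha, x) < 1\}$. In the coordinate $y = (\alpha, x)$ the points of $\check{Q}$ have $y \in 2 {\mathbb Z}$ and hence
\[ i(n) = \# \, (\check{Q} \cap \, n \overline{A_\circ}) = \lfloor n/2 \rfloor + 1,
\qquad (-1)^\ell \, i(-n) = \lceil n/2 \rceil - 1 = \# \, (\check{Q} \cap \, n
A_\circ). \]
Setting $n = mh-1 = 2m-1$ gives $(-1)^\ell N (A_1, -m) = m-1$, which indeed equals $N^+ (A_1, m-1)$, since $N (A_1, m) = m+1$ and exactly $m$ of the $m+1$ dominant regions of ${\mathcal A}^m_{A_1}$ are bounded.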
\noindent
2. It would be interesting to give combinatorial proofs of the formulas
in Corollary \ref{cor:AB} directly from the description of the relevant
complexes given in \cite{FR2, FZ, Tz} and Section \ref{pre}.
\noindent
3. After this paper was completed the following came to our attention.
(i) The numbers $\widetilde{h}^+_i (\Phi)$ of positive filters in $\Phi^+$
with $i$ minimal elements are also discussed and partially computed in
\cite[Section 3]{Pa2}. (ii) Theorem 2.7 in \cite{FR3} implies that
Lemma \ref{lem:FR} is valid for all pairs $(\Phi, m)$. In view of (ii)
and the equality $h_i (\Phi, m) = h_i (\Delta^m (\Phi))$ (see \cite{Ath2,
FR2, FR3}) when $\Phi = D_n$, it follows from Corollary
\ref{cor:if} that Conjecture \ref{conj0} is also valid for root systems
of type $D$ and arbitrary $m$.
\vspace*{0.25 in}
\noindent
\emph{Acknowledgements}. We are grateful to Sergey Fomin and Nathan
Reading for making their work \cite{FR2} available to us. We also thank
Sergey Fomin for useful discussions and Victor Reiner for providing
the data of Table \ref{except} as well as the software to confirm it.
\end{document} |
\begin{document}
\newcommand\dx{\; \mathrm{d}x}
\newcommand\dy{\; \mathrm{d}y}
\newcommand\dz{\; \mathrm{d}z}
\newcommand\dt{\; \mathrm{d}t}
\newcommand\ds{\; \mathrm{d}s}
\newcommand\diff{\mathrm{d}}
\newcommand\dvr{\mathop{\mathrm{div}}\nolimits}
\newcommand\pat{\partial_t}
\newcommand\sym{\mathrm{sym}}
\newcommand\diam{\mathrm{diam}}
\newcommand\tr{\mathop{\mathrm{tr}}\nolimits}
\newcommand\lspan{\mathop{\mathrm{span}}\nolimits}
\title{Local-in-time existence of strong solutions to a class of compressible non-Newtonian Navier-Stokes equations}
\author{Martin Kalousek\footnote{[email protected], Institute of Mathematics, Czech Academy of Sciences, \v Zitn\'a 25, 115 67 Praha 1, Czech Republic}, V\'aclav M\'acha\footnote{[email protected], Institute of Mathematics, Czech Academy of Sciences, \v Zitn\'a 25, 115 67 Praha 1, Czech Republic}, \v{S}\'arka Ne\v{c}asov\'a\footnote{[email protected], Institute of Mathematics, Czech Academy of Sciences, \v Zitn\'a 25, 115 67 Praha 1, Czech Republic}}
\maketitle
\begin{abstract}
The aim of this article is to show the local-in-time existence of a strong solution to a generalized compressible Navier-Stokes system for arbitrarily large initial data. The goal is reached via an $L^p$-theory for the linearized equations, which is obtained with the help of the Weis multiplier theorem and can be seen as a generalization of the work of Enomoto and Shibata \cite{EnSh} (devoted to compressible fluids) to compressible non-Newtonian fluids.
\end{abstract}
\textbf{Keywords:} non-Newtonian fluids, the Weis theorem, $L^p$-theory\\
\textbf{AMS subject classification:} 35A01, 35B65, 35P09, 35Q35
\section{Introduction}
In this paper, we analyze a boundary value problem for Navier-Stokes equations describing the flow of a compressible non-Newtonian fluid that reads
\begin{equation}\label{SystemClass}
\begin{alignedat}{2}
\partial_t(\rho u)+\dvr(\rho u\otimes u)+\nabla \pi(\rho)=&\dvr\mathcal{S} &&\text{ in }Q_T,\\
\partial_t\rho+\dvr(\rho u)=&0 &&\text{ in }Q_T,\\
u(0,\cdot)=u_0,\rho(0,\cdot)=&\rho_0&&\text{ in }\Omega.
\end{alignedat}
\end{equation}
Here $\Omega\subset\mathbf{R}^d$, $d\geq 2$, is the domain occupied by the fluid, $T>0$ is the time of evolution and $Q_T=(0,T)\times\Omega$. We denote by $u(t,x)$ the fluid velocity, by $\rho(t,x)$ the fluid density and by $\mathcal{S}$ the viscous stress tensor. We assume periodic boundary conditions, i.e., $\Omega$ is the torus
\begin{equation*}
\Omega = \left([-1,1]|_{\{-1,1\}}\right)^d.
\end{equation*}
We restrict ourselves to the constitutive relation
\begin{equation}\label{SDef}
\mathcal{S}=2\mu(|\mathbb{D}^{D} u|^2)\mathbb{D}^{D} u+\lambda(\dvr u)\dvr u\,\mathbb{I}_d,
\end{equation}
where $\mu$ and $\lambda$ are viscosity coefficients whose properties will be specified later.
\subsection{Discussion and main result}
The system with $\mu \equiv \mu_0\in \mathbb R$ and $\lambda\equiv \lambda_0\in \mathbb R$ has been extensively studied. The mathematical analysis of compressible viscous fluids goes back to the 1950s. The first results concerning uniqueness were given by Graffi \cite{Graffi} and Serrin \cite{Serrin}. Local-in-time existence in spaces of H\"older continuous functions was proven by Nash \cite{Nash}, Itaya \cite{Itaya1,Itaya2} and Vol'pert and Hudjaev \cite{Volpert}. In Sobolev--Slobodetskii spaces the local existence was shown by Solonnikov \cite{Sol}. The local-in-time existence, and the global-in-time existence for small data, in the Hilbert space setting trace back to Valli \cite{Valli}. The optimal regularity of local-in-time solutions was obtained by Charve and Danchin \cite{ChaDa}. Global solutions with initial data close to a rest state were obtained by Matsumura and Nishida \cite{MatNi1,MatNi2} in the whole space and in exterior domains. The case of a bounded domain was treated in the work of Valli and Zajaczkowski \cite{VaZa}.
The global-in-time existence of a weak solution is much more recent and goes back to the mid-1990s. The existence is known due to the nowadays standard Feireisl--Lions theory -- we refer to \cite{FeNoPe} and to \cite{Lions}.
However, only a few articles deal with the system for general $\mu$ and $\lambda$. In particular, Mamontov \cite{Mamontov1}, \cite{Mamontov2} considered an exponentially growing viscosity and an isothermal pressure; for that case the global existence of a weak solution was shown. Recently, Abbatiello, Feireisl and Novotn\'y \cite{AbFeNo} provided a proof of the existence of a so-called dissipative solution -- a notion of solution which is weaker than that of a weak solution. The existence of a weak solution is still open. The main obstacle seems to be the lack of compactness of the velocity $u$, which is caused by an insufficient control of $\pat u$ in (and near) vacuum regions.
Let us mention that problems of this type were studied in the incompressible case already by Ladyzhenskaya \cite{Lad} and then extensively by the group of J. Ne\v cas \cite{Bel1, Bel2, MNR}. Further, basic properties of weak or measure-valued solutions of incompressible non-Newtonian fluids are described in \cite{MNRR,BeBl}. Concerning the compressible case, a first attempt to solve this problem can be found in the work of Ne\v casov\' a and Novotn\' y \cite{MNN}, where the existence of a measure-valued solution was shown.
The concept of maximal regularity is classical and the main achievements of the abstract theory go back to the work of Ladyzhenskaya, Uraltseva and Solonnikov \cite{LUS}, Da Prato and Grisvard \cite{DG}, Amann \cite{Amann2} and Pr{\"u}ss \cite{P}. The notion of $\mathcal R$-sectoriality was introduced by Cl\'ement and Pr\"uss, see \cite{CP}. The fundamental result of Weis \cite{W} on the equivalence between maximal regularity, described in terms of vector-valued Fourier multipliers, and $\mathcal R$-sectoriality was a breakthrough for applications.
The Weis multiplier theorem allows one to deduce maximal $L^p$ regularity for problems connected with fluid dynamics. We refer to \cite{DeHiPr} and \cite{KuWe} for a comprehensive description of the method.
The maximal $L^p$ regularity result for incompressible fluids can be found in the work of Shibata and Shimizu \cite{SHS}; for the non-Newtonian incompressible situation it was established in the work of Bothe and Pr\"uss \cite{BP}. Maximal regularity for the compressible case was shown by Enomoto and Shibata \cite{EnSh}.
The goal of this paper is to use the Weis multiplier theorem to show the short-time existence of a strong solution in the $L^p$ setting for a non-Newtonian compressible fluid.
Let us introduce our main result.
\begin{Theorem}\label{Thm:Main}
Let $\mu \in\mathcal C^3([0,\infty))$ and $\lambda \in \mathcal C^2(\mathbf{R})$ satisfy $\mu(s) + 2\mu'(s) s >0$ for all $s\geq 0$ and $\lambda(r) + \lambda'(r)r >0$ for all $r\in\mathbf{R}$. Let, moreover, $\pi \in \mathcal C^2([0,\infty))$, $q>d$ and $p\in (1,\infty)$ be given. Then for every $u_0\in W^{2,q}(\Omega)$ and $\rho_0\in W^{1,q}(\Omega)$ with $\frac 1{\rho_0}\in L^\infty(\Omega)$ there is $T>0$ such that there exists
\begin{equation*}
(\rho,u)\in L^p(0,T;W^{1,q}(\Omega))\times L^p(0,T;W^{2,q}(\Omega))
\end{equation*}
with
\begin{equation*}
(\pat \rho,\pat u)\in L^p(0,T;W^{1,q}(\Omega))\times L^p(0,T;L^q(\Omega))
\end{equation*}
which satisfies \eqref{SystemClass}.\label{thm.main}
\end{Theorem}
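The structural hypotheses on $\mu$ and $\lambda$ are easy to test for concrete constitutive laws. As an illustration (our addition; the power-law family below is a hypothetical example, not taken from the text), for $\mu(s)=(1+s)^q$ one has $\mu(s)+2\mu'(s)s=(1+s)^{q-1}\bigl(1+(1+2q)s\bigr)$, so the hypothesis of Theorem \ref{Thm:Main} on $\mu$ holds precisely when $q\geq -\frac12$; the following Python sketch samples the condition numerically:

```python
# Hypothetical power-law viscosity mu(s) = (1 + s)**q (not from the paper);
# a constant lambda > 0 trivially satisfies lambda(r) + lambda'(r) r > 0.
def mu(s, q):  return (1.0 + s) ** q
def dmu(s, q): return q * (1.0 + s) ** (q - 1.0)   # mu'

def condition_holds(q, smax=1e4, steps=10_000):
    # checks mu(s) + 2 mu'(s) s > 0 on a sample of [0, smax]
    return all(mu(s, q) + 2.0 * dmu(s, q) * s > 0.0
               for s in (smax * i / steps for i in range(steps + 1)))

assert condition_holds(0.5)       # shear-thickening exponent
assert condition_holds(-0.4)      # shear-thinning, still admissible
assert not condition_holds(-0.6)  # fails: 1 + (1 + 2q)s <= 0 for large s
print("ok")
```

Of course, the sampling only gives evidence on a finite range; the closed-form factorization above settles the condition for all $s\geq 0$.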
The proof of the main result relies on several steps. The first step is the use of Lagrangian coordinates, which removes the convective term.
Secondly, we linearize the resulting system
and establish the $L^p$--$L^q$ regularity of its solutions. This is discussed in Section \ref{Lpreg} and is obtained by means of the Weis multiplier theorem.
The resulting regularity of the velocity $u$ allows us to deduce appropriate bounds,
and one may then use the Banach fixed-point argument to deduce the existence of a strong solution provided the time $T>0$ is small enough. This is described in Section \ref{Bfp}.
The remaining part of this introductory section is devoted to necessary preliminary statements.
\subsection{Notation}
We denote by $\mathbb{I}_d$ the $d\times d$ identity matrix. For a $d\times d$-matrix valued mapping $G$, $\sym G$ denotes the symmetric part of $G$, i.e., $\sym G=\frac{1}{2}\left(G+G^\top\right)$, and $G^{D}$ is the traceless part of $G$, i.e., $G^{D}=G-\frac{1}{d}\tr G\,\mathbb{I}_d$. If $u$ is a mapping with values in $\mathbf{R}^d$, then $\mathbb{D} u=\sym(\nabla u)$ is the symmetric part of the gradient and $\mathbb{D}^{D} u$ stands for the traceless part of $\mathbb{D} u$.
For purposes of this paper we denote by $\mathcal{V}^{p,q}(Q_T)$ the following function space
\begin{equation*}
\mathcal{V}^{p,q}(Q_T)=L^p(0,T;W^{2,q}(\Omega))\cap W^{1,p}(0,T;L^q(\Omega))
\end{equation*}
with the norm $\|\cdot\|_{\mathcal{V}^{p,q}(Q_T)}=\|\cdot\|_{L^p(0,T;W^{2,q}(\Omega))}+\|\cdot\|_{W^{1,p}(0,T;L^q(\Omega))}$.
Starting from definition \eqref{SDef}, using the fact that $\tr\mathbb{D}^{D} u=0$ and the symmetry of $\mathbb{D}^{D} u$, we have
\begin{equation*}
\begin{split}
(\dvr \mathcal{S})_j=&2\mu(|\mathbb{D}^{D} u|^2)\sum_{l=1}^d\partial_l \left((\mathbb{D} u)_{jl} -\frac{1}{d}\delta_{jl}\dvr u\right)\\
&+4\mu'(|\mathbb{D}^{D} u|^2)\sum_{k,l,m=1}^d(\mathbb{D}^{D} u)_{jk}(\mathbb{D}^{D} u)_{lm}\partial_k\left((\mathbb{D} u)_{lm}-\frac{1}{d}\delta_{lm}\dvr u\right)\\
&+\sum_{l=1}^d\delta_{jl}\left(\lambda(\dvr u)+\lambda '(\dvr u)\dvr u\right)\partial_l\dvr u\\
=&\mu(|\mathbb{D}^{D} u|^2)\sum_{l=1}^d(\partial^2_l u_j+\partial_j\partial_l u_l)+2\mu'(|\mathbb{D}^{D} u|^2)\sum_{k,l,m=1}^d(\mathbb{D}^{D} u)_{jk}(\mathbb{D}^{D} u)_{lm}\left(\partial_k\partial_m u_l+\partial_k\partial_lu_m\right)\\
&-\frac{2}{d}\mu(|\mathbb{D}^{D} u|^2)\partial_j\dvr u+\left(\lambda(\dvr u)+\lambda'(\dvr u)\dvr u\right)\partial_j \dvr u\\
=&\mu(|\mathbb{D}^{D} u|^2)\sum_{l=1}^d(\partial^2_l u_j+\partial_j\partial_l u_l)+4\mu'(|\mathbb{D}^{D} u|^2)\sum_{k,l,m=1}^d(\mathbb{D}^{D} u)_{jl}(\mathbb{D}^{D} u)_{km}\partial_l\partial_m u_k\\
&-\frac{2}{d}\mu(|\mathbb{D}^{D} u|^2)\partial_j\dvr u+\left(\lambda(\dvr u)+\lambda'(\dvr u)\dvr u\right)\partial_j \dvr u.
\end{split}
\end{equation*}
Hence we deduce that
\begin{equation}\label{eq:DivS}
(\dvr \mathcal{S})_j=\sum_{k,l,m=1}^d a^{lm}_{jk}(\mathbb Du)\partial_m\partial_l u_k,
\end{equation}
where
\begin{equation}\label{eq:ACoef}
\begin{split}
a^{lm}_{jk}(\mathbb Du)=&\mu(|\mathbb{D}^{D} u|^2)\left(\delta_{jk}\delta_{lm}+\delta_{jm}\delta_{kl}\right)+4\mu'(|\mathbb{D}^{D} u|^2)(\mathbb{D}^{D} u)_{jl}(\mathbb{D}^{D} u)_{km}\\
&+\left(\lambda(\dvr u)+\lambda'(\dvr u)\dvr u-\frac{2}{d}\mu(|\mathbb{D}^{D} u|^2)\right)\delta_{km}\delta_{jl}.
\end{split}
\end{equation}
We note that $a_{jk}^{lm}$ possesses the following symmetries
\begin{equation}\label{eq:a.symmetry}
a_{jk}^{lm}=a_{kj}^{lm}=a^{jk}_{lm}=a^{jm}_{lk}=a^{lk}_{jm}\text{ for all }j,k,l,m=1,\dots,d.
\end{equation}
We define for a given $u\in C^1(\overline{\Omega})^d$
the quasilinear differential operator $\mathcal{A}$ by
\begin{equation}\label{eq:def.A}
\left(\mathcal{A}(\mathbb Du)v\right)_j=\sum_{k,l,m=1}^da^{lm}_{jk}(\mathbb Du)\partial_l\partial_m v_k,\quad j=1,\dots,d,
\end{equation}
whose ellipticity is ensured by certain conditions on $\mu$, $\lambda$ and their derivatives, as we now show.
Considering $\xi\in\mathbf{R}^{d\times d}_{sym}$ we get
\begin{equation*}
\begin{split}
\sum_{j,k,l,m=1}^da^{lm}_{jk}\xi_{jl}\xi_{km}=& \mu(|\mathbb{D}^{D} u|^2)\sum_{j,k,l,m=1}^d\left(\delta_{jk}\delta_{lm}+\delta_{jm}\delta_{kl}\right)\xi_{jl}\xi_{km}+4\mu'(|\mathbb{D}^{D} u|^2)\Bigl(\sum_{j,l=1}^d(\mathbb{D}^{D} u)_{jl}\xi_{jl}\Bigr)^2\\
&+\left(\lambda(\dvr u)+\lambda'(\dvr u)\dvr u-\frac{2}{d}\mu(|\mathbb{D}^{D} u|^2)\right)\sum_{j,l=1}^d\xi_{jj}\xi_{ll}.
\end{split}
\end{equation*}
Hence we obtain
\begin{equation*}
\begin{split}
\sum_{j,k,l,m=1}^da^{lm}_{jk}\xi_{jl}\xi_{km}=&2\mu(|\mathbb{D}^{D} u|^2)|\xi|^2+4\mu'(|\mathbb{D}^{D} u|^2)|\mathbb{D}^{D} u\cdot\xi|^2\\
&+\left(\lambda(\dvr u)+\lambda'(\dvr u)\dvr u-\frac{2}{d}\mu(|\mathbb{D}^{D} u|^2)\right)(\tr \xi)^2.
\end{split}
\end{equation*}
Applying the decomposition $\xi=\xi^D+\frac{\tr\xi}{d}\mathbb{I}_d$, in particular the fact that $\xi^D\cdot\mathbb{I}_d=0$, we arrive at
\begin{equation}\label{BilFormDet}
\sum_{j,k,l,m=1}^da^{lm}_{jk}\xi_{jl}\xi_{km}=2\mu(|\mathbb{D}^{D} u|^2)|\xi^D|^2+4\mu'(|\mathbb{D}^{D} u|^2)|\mathbb{D}^{D} u\cdot\xi^D|^2+\left(\lambda(\dvr u)+\lambda'(\dvr u)\dvr u\right)(\tr \xi)^2.
\end{equation}
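The algebraic identity \eqref{BilFormDet} can be checked numerically against the coefficients \eqref{eq:ACoef}. The following Python sketch (our addition; the particular choices $\mu(s)=2+s$ and $\lambda(r)=1+r$ are arbitrary smooth test functions, not from the text) evaluates both sides for random symmetric $\mathbb{D}u$ and $\xi$ in dimension $d=3$:

```python
import random

random.seed(0)
d = 3

# Hypothetical smooth coefficients (not from the paper)
mu   = lambda s: 2.0 + s
dmu  = lambda s: 1.0          # mu'
lam  = lambda r: 1.0 + r
dlam = lambda r: 1.0          # lambda'

def dev(M):                   # traceless part M^D
    t = sum(M[i][i] for i in range(d)) / d
    return [[M[i][j] - (t if i == j else 0.0) for j in range(d)] for i in range(d)]

def frob(A, B):               # Frobenius product A . B
    return sum(A[i][j] * B[i][j] for i in range(d) for j in range(d))

def rand_sym():               # random symmetric matrix
    M = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(d)]
    return [[(M[i][j] + M[j][i]) / 2 for j in range(d)] for i in range(d)]

Du, xi = rand_sym(), rand_sym()
D = dev(Du)                          # D^D u
s = frob(D, D)                       # |D^D u|^2
r = sum(Du[i][i] for i in range(d))  # div u
c = lam(r) + dlam(r) * r - 2.0 * mu(s) / d

def a(j, k, l, m):                   # coefficients from (eq:ACoef)
    return (mu(s) * ((j == k) * (l == m) + (j == m) * (k == l))
            + 4.0 * dmu(s) * D[j][l] * D[k][m] + c * (k == m) * (j == l))

lhs = sum(a(j, k, l, m) * xi[j][l] * xi[k][m]
          for j in range(d) for k in range(d) for l in range(d) for m in range(d))
xD = dev(xi)
tr = sum(xi[i][i] for i in range(d))
rhs = 2.0 * mu(s) * frob(xD, xD) + 4.0 * dmu(s) * frob(D, xD) ** 2 \
      + (lam(r) + dlam(r) * r) * tr ** 2
assert abs(lhs - rhs) < 1e-9
print("ok")
```

The check exploits that $\mathbb{D}^{D}u\cdot\xi=\mathbb{D}^{D}u\cdot\xi^{D}$, since $\mathbb{D}^{D}u$ is traceless.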
We deal with the notion of strong ellipticity of the operator $\mathcal{A}(\mathbb Du)$, which means that there exists $C_{el}(\mathbb Du)>0$ such that
\begin{equation}\label{StrongEll}
\sum_{j,k,l,m=1}^da^{lm}_{jk}\xi_{jl}\xi_{km}\geq C_{el}(\mathbb Du)|\xi|^2\text{ holds for all }\xi\in\mathbf{R}^{d\times d}_{sym}.
\end{equation}
Let us investigate necessary conditions on $\mu$ and $\lambda$ related to the strong ellipticity of $\mathcal{A}(\mathbb Du)$.
We first choose $g\in\mathbf{R}^d$ as an eigenvector of $\mathbb{D}^{D} u$ and $h\in\mathbf{R}^d$ perpendicular to $g$. We consider the symmetric matrix $(\xi)_{jk}=\frac{1}{2}(g_jh_k+g_kh_j)$, $j,k=1,\ldots,d$; then $\xi$ is traceless, so $\xi^D=\xi$, and $\mathbb{D}^{D} u\cdot\xi=(\mathbb{D}^{D} u\,g)\cdot h=0$ since $g$ is an eigenvector of $\mathbb{D}^{D} u$ and $h\perp g$. Hence we obtain from \eqref{BilFormDet} combined with \eqref{StrongEll}
\begin{equation*}
2\mu(|\mathbb{D}^{D} u|^2)|\xi|^2\geq C_{el}(\mathbb Du)|\xi|^2.
\end{equation*}
Hence one concludes that
\begin{equation*}
2\mu(s)\geq C_{el}(\mathbb Du)\text{ for all }s\in[0,\|\mathbb{D}^{D} u\|^2_{L^\infty(\Omega)}].
\end{equation*}
As $u\in C^1(\overline\Omega)^d$ can be chosen arbitrarily, we get
\begin{equation}\label{MuCond1}
\mu(s)> 0\text{ for any }s\geq 0.
\end{equation}
In order to get the next condition involving $\mu$, we set $\xi=\mathbb{D}^{D} u$ in \eqref{StrongEll}, which together with \eqref{BilFormDet} implies
\begin{equation*}
2\mu(|\mathbb{D}^{D} u|^2)|\mathbb{D}^{D} u|^2+4\mu'(|\mathbb{D}^{D} u|^2)|\mathbb{D}^{D} u|^2|\mathbb{D}^{D} u|^2\geq C_{el}(\mathbb Du)|\mathbb{D}^{D} u|^2.
\end{equation*}
Hence one concludes that
\begin{equation*}
2\mu(s)+4\mu'(s)s\geq C_{el}(\mathbb Du)\text{ for any }s\in [0,\|\mathbb{D}^{D} u\|^2_{L^\infty(\Omega)}]
\end{equation*}
and finally
\begin{equation}\label{MuCond2}
\mu(s)+2\mu'(s)s> 0\text{ for any }s\geq 0
\end{equation}
by repeating the arguments leading to \eqref{MuCond1}. To determine a condition on $\lambda$ we choose $\xi=\mathbb{I}_d$ in \eqref{StrongEll}. Since $(\mathbb{I}_d)^D=0$, we obtain from \eqref{BilFormDet} that
\begin{equation*}
\left(\lambda(\dvr u)+\lambda'(\dvr u)\dvr u\right)d^2\geq C_{el}(\mathbb Du)d.
\end{equation*}
Repeating the arguments leading to \eqref{MuCond1}, \eqref{MuCond2} respectively, we get
\begin{equation*}
\lambda(r)+\lambda'(r)r\geq \frac{C_{el}(\mathbb Du)}{d}\text{ for all }r\in[-\|\dvr u\|_{L^\infty(\Omega)},\|\dvr u\|_{L^\infty(\Omega)}]
\end{equation*}
and then
\begin{equation}\label{LambdaCond}
\lambda(r)+\lambda'(r)r>0\text{ for all }r\in\mathbf{R}.
\end{equation}
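Conditions \eqref{MuCond1}, \eqref{MuCond2} and \eqref{LambdaCond} can be checked directly for concrete constitutive laws. As a quick numerical sanity check (the power-law viscosity below is purely illustrative; the paper keeps $\mu$ and $\lambda$ abstract), one may sample both $\mu$-conditions on a grid:

```python
import numpy as np

# Illustrative shear viscosity mu(s) = mu0 * (1 + s)^((r - 2)/2), a Carreau-type
# law; the parameters mu0 and r are hypothetical, not taken from the text.
mu0, r = 1.0, 1.5
a = (r - 2.0) / 2.0

def mu(s):
    return mu0 * (1.0 + s) ** a

def mu_prime(s):
    # exact derivative of mu with respect to s
    return mu0 * a * (1.0 + s) ** (a - 1.0)

s = np.linspace(0.0, 100.0, 10_001)
print(np.all(mu(s) > 0))                          # condition (MuCond1)
print(np.all(mu(s) + 2.0 * mu_prime(s) * s > 0))  # condition (MuCond2)
```

For a constant bulk viscosity $\lambda\equiv\lambda_0$, condition \eqref{LambdaCond} reduces simply to $\lambda_0>0$.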
We note that conditions \eqref{MuCond1}, \eqref{MuCond2} and \eqref{LambdaCond} are also sufficient for the strong ellipticity of $\mathcal{A}(\mathbb Du)$. If $\mu'(|\mathbb{D}Dev u|^2)\geq 0$, the second term on the right hand side of \eqref{BilFormDet} is nonnegative and we obtain
\begin{equation}\label{AuxStepEl1}
\sum_{j,k,l,m = 1}^d a^{lm}_{jk}\xi_{jl}\xi_{km}\geq 2\mu(|\mathbb{D}Dev u|^2)|\xi^D|^2+(\lambda(\dvr u)+\lambda'(\dvr u)\dvr u)(\tr\xi)^2.
\end{equation}
From the continuity of $\mu$ and of the function $r\mapsto\lambda(r)+\lambda'(r)r$ and from the assumption $u\in C^1(\overline\Omega)^d$ we conclude the existence of $s_{min}\in[0,\|\mathbb{D}Dev u\|^2_{L^\infty(\Omega)}]$ and $r_{min}\in[-\|\dvr u\|_{L^\infty(\Omega)},\|\dvr u\|_{L^\infty(\Omega)}]$ such that $\mu(s)\geq \mu(s_{min})>0$ for all $s\in[0,\|\mathbb{D}Dev u\|^2_{L^\infty(\Omega)}]$ and $\lambda(r)+\lambda'(r)r\geq \lambda(r_{min})+\lambda'(r_{min})r_{min}>0$ for all $r\in[-\|\dvr u\|_{L^\infty(\Omega)},\|\dvr u\|_{L^\infty(\Omega)}]$. It follows from \eqref{AuxStepEl1} that
\begin{equation}\label{AuxStepEl2}
\sum_{j,k,l,m=1}^d a^{lm}_{jk}\xi_{jl}\xi_{km}\geq 2\mu(s_{min})|\xi^D|^2+(\lambda(r_{min})+\lambda'(r_{min})r_{min})(\tr\xi)^2.
\end{equation}
Moreover, since one can easily check that the mapping $\xi\mapsto \sqrt{|\xi^D|^2+(\tr \xi)^2}$ is an equivalent norm on $\mathbf{R}^{d\times d}$, the ellipticity of $\mathcal{A}(\mathbb Du)$ follows from \eqref{AuxStepEl2} by setting $C_{el}=\min\{2\mu(s_{min}),\lambda(r_{min})+\lambda'(r_{min})r_{min}\}$.
If $\mu'(|\mathbb{D}Dev u|^2)< 0$ one applies the Cauchy--Schwarz inequality in \eqref{BilFormDet} to obtain
\begin{multline}\label{AuxStepEl3}
\sum_{j,k,l,m=1}^da^{lm}_{jk}\xi_{jl}\xi_{km}\geq\\ 2\mu(|\mathbb{D}Dev u|^2)|\xi^D|^2+4\mu'(|\mathbb{D}Dev u|^2)|\mathbb{D}Dev u|^2|\xi^D|^2+\left(\lambda(\dvr u)+\lambda'(\dvr u)\dvr u\right)(\tr \xi)^2.
\end{multline}
The continuity of $s\mapsto \mu(s)+2\mu'(s)s$ and the assumption $u\in C^1(\overline\Omega)^d$ imply the existence of $\tilde s_{min}\in[0,\|\mathbb{D}Dev u\|^2_{L^\infty(\Omega)}]$ such that $\mu(s)+2\mu'(s)s\geq \mu(\tilde s_{min})+2\mu'(\tilde s_{min})\tilde s_{min}>0$ for all $s\in[0,\|\mathbb{D}Dev u\|^2_{L^\infty(\Omega)}]$. Going back to \eqref{AuxStepEl3} the ellipticity of $\mathcal{A}(\mathbb Du)$ follows by setting $C_{el}=\min\{\mu(\tilde s_{min})+2\mu'(\tilde s_{min})\tilde s_{min},\lambda(r_{min})+\lambda'(r_{min})r_{min}\}$.
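The norm equivalence invoked in the case $\mu'(|\mathbb{D}Dev u|^2)\geq 0$ follows from the orthogonal splitting $|\xi|^2=|\xi^D|^2+(\tr\xi)^2/d$, which yields $|\xi|\leq\sqrt{|\xi^D|^2+(\tr\xi)^2}\leq\sqrt d\,|\xi|$. A brief numerical check of these two bounds (an illustration only) reads:

```python
import numpy as np

d = 3
rng = np.random.default_rng(0)

def dev(xi):
    # deviatoric (trace-free) part xi^D = xi - (tr xi / d) I_d
    return xi - np.trace(xi) / d * np.eye(d)

for _ in range(1_000):
    xi = rng.standard_normal((d, d))
    frob = np.linalg.norm(xi)  # Frobenius norm |xi|
    n = np.sqrt(np.linalg.norm(dev(xi)) ** 2 + np.trace(xi) ** 2)
    assert frob <= n + 1e-12               # lower equivalence bound
    assert n <= np.sqrt(d) * frob + 1e-12  # upper equivalence bound
print("norm equivalence verified on random samples")
```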
\section{Local-in-time well-posedness of the compressible Navier--Stokes system}
\label{Bfp}
In this section we present a proof of Theorem \ref{Thm:Main}. First, we transform system \eqref{SystemClass} into Lagrangian coordinates. Then a linearization of the transformed system is derived. Next, employing Theorem \ref{thm:regularita} we show that the solution operator to the linearized system is a contraction on a suitably chosen function space. Finally, by the Banach fixed point theorem we conclude the local-in-time well-posedness of the transformed system and of \eqref{SystemClass} accordingly.
\subsection{System in Lagrangian coordinates}
We define the Lagrangian coordinates as
\begin{equation}\label{LCoord}
\begin{split}
X_u(t,y) & = y + \int_0^t u(s,X_u(s,y)) \ {\rm d}s.
\end{split}
\end{equation}
A generic function $f=f(t,x):(0,T)\times\Omega\to \mathbb R$ fulfills
\begin{equation*}
\begin{split}
\partial_t f(t, X_u(t,y)) &= \partial_1 f(t, X_u(t,y)) + \nabla_x f(t, X_u(t,y)) \cdot u\\
\nabla_y f(t,X_u(t,y)) & = \nabla_x f(t, X_u(t,y)) \nabla_y X_u (t,y)\\
\nabla_x f & = \nabla_y f (\nabla_y X_u)^{-1}.
\end{split}
\end{equation*}
The Jacobi matrix of the transformation $X_u$ is $\mathbb I_d+\int_0^t\nabla_y u(s,X_u(s,y))\ds$. We tacitly assume the invertibility of this matrix, which is ensured if
\begin{equation}\label{SigmaCond}
\sup_{t\in(0,T)}\left\|\int_0^t\nabla_y u(s,\cdot)\ds\right\|_{L^\infty(\Omega)}< \sigma
\end{equation}
for some small number $\sigma$. Since we will work with functions that possess Lipschitz regularity with respect to space variables, the latter condition is fulfilled if $T$ is chosen suitably small.
In what follows we use the notation $E_u:= \mathbb I_d - (\nabla X_u)^{-1}$. Let us note that we get from \eqref{LCoord}
\begin{equation}\label{EExpr}
E_u(t,y)=W\left(\int_0^t\nabla u(s,X_u(s,y))\ds\right),
\end{equation}
where $W$ is a $d\times d$--matrix valued mapping that is smooth with respect to matrices $B\in\mathbf{R}^{d\times d}$ with $|B|<2\sigma$ and $W(0)=0$.
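Concretely, $W(B)=\mathbb I_d-(\mathbb I_d+B)^{-1}=(\mathbb I_d+B)^{-1}B$, so for $\|B\|<1$ the Neumann series gives $W(B)=B-B^2+B^3-\cdots$, whence $W(0)=0$ and $\|W(B)\|\leq\|B\|/(1-\|B\|)$. A small numerical illustration (with a hypothetical spectral-norm bound standing in for $\sigma$):

```python
import numpy as np

d, sigma = 3, 0.25  # illustrative; any sigma < 1/2 keeps |B| < 1 below
rng = np.random.default_rng(1)

for _ in range(500):
    B = rng.standard_normal((d, d))
    # rescale so that the spectral norm of B stays below sigma
    B *= sigma * rng.uniform(0.0, 1.0) / max(np.linalg.norm(B, 2), 1e-12)
    W = np.eye(d) - np.linalg.inv(np.eye(d) + B)  # W(B) = I - (I + B)^{-1}
    nB = np.linalg.norm(B, 2)
    # Neumann-series bound |W(B)| <= |B| / (1 - |B|)
    assert np.linalg.norm(W, 2) <= nB / (1.0 - nB) + 1e-10
print("Neumann bound verified")
```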
We define $\tilde \varrho (t,y) = \varrho(t,X_u(t,y))$ and $\tilde u (t,y)= u(t,X_u(t,y))$ and we rewrite \eqref{SystemClass} as
\begin{equation}\label{eq:NS.Lagrange}
\begin{split}
\pat \tilde\varrho + \tilde\varrho \dvr_y \tilde u &= G(\tilde \varrho, \tilde u)\\
\tilde\varrho \pat \tilde u + \nabla_y \pi(\tilde \varrho) - \dvr_y \mathcal{S}(\mathbb{D}Dev_y \tilde u) &= F(\tilde \varrho,\tilde u)
\end{split}
\end{equation}
where
\begin{equation*}
\begin{split}
G(\tilde \varrho,\tilde u ) &= - \nabla \tilde u \cdot E_{\tilde u}\tilde \varrho\\
F(\tilde \varrho, \tilde u) & = \nabla_y \pi(\tilde\varrho) E_{\tilde u} + \dvr_y\tilde{\mathcal{S}}(\tilde {u}) - \dvr_y \mathcal{S}(\mathbb{D}Dev_y\tilde u) - \tr\big(\tilde {\mathcal S}(\tilde u)E_{\tilde u}\big)\\
\mathcal{S}(\mathbb{D}Dev_y\tilde u)&=\mu(|\mathbb{D}Dev_y\tilde u|^2)\mathbb{D}Dev_y\tilde u+\lambda(\dvr_y \tilde u)\dvr_y\tilde u\mathbb{I}_d\\
\tilde{\mathcal{S}}(\tilde u)&=\mu(|\mathcal G(\tilde u)|^2)\mathcal G(\tilde u)+\lambda(\mathcal{D}(\tilde u))\mathcal D(\tilde u)\mathbb{I}_d\\
\mathcal{G}(\tilde u)&=\mathbb{D}Dev_y\tilde u - \sym (\nabla_y \tilde u E_{\tilde u})^{D},\ \mathcal D(\tilde u)=\dvr_y \tilde u-\nabla_y \tilde u\cdot E_{\tilde u}.
\end{split}
\end{equation*}
\subsection{Linearized system}
Let $\tilde\varrho = \varrho_* + \theta_0 + \theta$, where $\varrho_*$ is a constant and $\theta_0 = \varrho_0 - \varrho_*$ is independent of $t$. Recall that both $\varrho_*$ and $\theta_0$ are given. Consequently, \eqref{eq:NS.Lagrange} becomes (recall \eqref{eq:def.A})
\begin{equation}
\begin{split}\label{eq:linearized}
\pat \theta + (\varrho_* + \theta_0) \dvr_y (\tilde u - u_0)& = \mathscr G (\theta,\tilde u)\\
(\varrho_* + \theta_0) \pat \tilde u - \mathcal A(\mathbb D_y u_0)(\mathbb D_y(\tilde u-u_0)) + \pi'(\varrho_* + \theta_0)\nabla_y \theta & = \mathscr F(\theta,\tilde u)
\end{split}
\end{equation}
where
\begin{equation}\label{RHSExpr}
\begin{split}
\mathscr G (\theta,\tilde u) =& G(\varrho_* + \theta_0 + \theta,\tilde u) - \theta \dvr_y \tilde u - (\varrho_* + \theta_0)\dvr_y u_0\\
\mathscr F (\theta,\tilde u) =& F(\varrho_* + \theta_0 + \theta, \tilde u) - \theta \partial_t \tilde u + \dvr_y \mathcal{S}(\mathbb{D}Dev_y\tilde u) - \mathcal A(\mathbb D_yu_0)(\mathbb D_y(\tilde u-u_0)) + \pi'(\varrho_* + \theta_0) \nabla_y \theta_0 \\&
+ \left(\pi'(\varrho_* + \theta_0 + \theta) - \pi'(\varrho_* + \theta_0)\right)\nabla_y(\theta_0 + \theta)
\end{split}
\end{equation}
and the system is equipped with the initial conditions $\theta(0,\cdot)=0$, $\tilde u(0,\cdot)=u_0$.
In order to shorten the notation we omit the subscript $y$ in the rest of this section.
\subsection{The fixed-point argument}
We define
\begin{align*}
H_{T,M}=\{&(\vartheta,w)\in W^{1,p}(0,T;W^{1,q}(\Omega))\times \mathcal V^{p,q}(Q_T):\
\vartheta(0)=0, w(0)=u_0,\ \\
&[\vartheta,w]_{T,M}=\|\vartheta\|_{W^{1,p}(0,T;W^{1,q}(\Omega))}+\|w-u_0\|_{\mathcal{V}^{p,q}(Q_T)}\leq M\}
\end{align*}
and an operator $\Phi$ as
\begin{equation*}
\Phi(\vartheta,w) = (\theta, \tilde u)
\end{equation*}
where $\theta\in W^{1,p}(0,T;W^{1,q}(\Omega))$, $\tilde u\in L^p(0,T;W^{2,q}(\Omega)^d)\cap W^{1,p}(0,T;L^q(\Omega)^d)$ is a solution to
\begin{equation*}
\begin{split}
\pat \theta + (\varrho_* + \theta_0) \dvr (\tilde u -u_0)&= \mathscr G(\vartheta,w)\\
(\varrho_* + \theta_0)\pat \tilde u - \mathcal A(\mathbb Du_0)(\mathbb D(\tilde u-u_0)) + \pi'(\varrho_* + \theta_0)\nabla\theta & = \mathscr F(\vartheta,w)\\
\theta(0)=0,\ \tilde u(0)=u_0.
\end{split}
\end{equation*}
In order to apply the Banach fixed-point theorem, we need to justify that for certain values of $T$ and $M$ the operator $\Phi$ is a contraction on $H_{T,M}$.
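The contraction scheme can be illustrated on a toy example, unrelated to the actual operator $\Phi$: iterating an affine map on $\mathbf R^2$ with Lipschitz constant below one converges, from any starting point, to the unique fixed point guaranteed by the Banach theorem.

```python
import numpy as np

# Toy contraction phi(x) = A x + b with ||A|| < 1; all names are illustrative.
A = np.array([[0.3, 0.2],
              [0.1, 0.25]])
b = np.array([1.0, -0.5])

def phi(x):
    return A @ x + b

x = np.zeros(2)
for _ in range(200):  # Picard iteration x_{n+1} = phi(x_n)
    x = phi(x)

# the limit is the unique fixed point x = phi(x), here (I - A)^{-1} b
assert np.allclose(x, phi(x))
assert np.allclose(x, np.linalg.solve(np.eye(2) - A, b))
print("fixed point:", x)
```

In the proof below the role of the ball is played by $H_{T,M}$, and the contraction property is obtained by making $T$ and $M$ small in the estimates that follow.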
Next we introduce several estimates. We recall the embedding inequality
\begin{equation}\label{EmbIneq}
\sup_{t\in[0,T]}\|u\|_{W^{1,\infty}(\Omega)}\leq c\left(\|u\|^p_{L^p(0,T;W^{2,q}(\Omega))}+\|u\|^p_{W^{1,p}(0,T;L^q(\Omega))}\right)^\frac{1}{p},
\end{equation}
valid for $u\in\mathcal V^{p,q}(Q_T)$, $p\in(1,\infty)$ and $q>d$; see \cite[p.~10]{Ta} for details. For an arbitrary $w\in B_M=\{v\in\mathcal V^{p,q}(Q_T):v(0)=u_0,\ \|v-u_0\|_{\mathcal V^{p,q}(Q_T)}\leq M\}$ it follows that
\begin{equation*}
\|w\|_{\mathcal V^{p,q}(Q_T)}\in[T^\frac{1}{p}\|u_0\|_{W^{2,q}(\Omega)}-M,T^\frac{1}{p}\|u_0\|_{W^{2,q}(\Omega)}+M].
\end{equation*}
Accordingly, we have $\|w\|_{\mathcal V^{p,q}(Q_T)}\leq M+LT^\frac{1}{p}$, where
\begin{equation*}
L=\|\theta_0\|_{W^{1,q}(\Omega)}+\|u_0\|_{W^{2,q}(\Omega)}.
\end{equation*}
Employing \eqref{EmbIneq} we deduce that
\begin{equation}\label{SymGDivBound}
\|\mathbb{D}Dev w\|_{L^\infty(0,T;L^\infty(\Omega))}+\|\dvr w\|_{L^\infty(0,T;L^\infty(\Omega))}\leq c\|w\|_{\mathcal V^{p,q}(Q_T)}\leq c(M+LT^\frac{1}{p}).
\end{equation}
By the Sobolev embedding theorem and the H\"older inequality we infer
\begin{equation*}
\left\|\int_0^T\nabla w(s,\cdot)\ds\right\|_{L^\infty(\Omega)}\leq \int_0^T\|w(s,\cdot)\|_{W^{2,q}(\Omega)}\ds\leq cT^\frac{1}{p'}\|w\|_{L^p(0,T;W^{2,q}(\Omega))}.
\end{equation*}
Hence the condition \eqref{SigmaCond} is fulfilled for any $w\in H_{T,M}$ assuming $T$ is suitably small.
Moreover, it follows from \eqref{EExpr} and the paragraph below it that
\begin{equation}\label{EEst}
\|E_w(t)\|_{W^{k,q}(\Omega)}\leq c\left\|\int_0^t\nabla w\ {\rm d}s\right\|_{W^{k,q}(\Omega)}\leq c\int_0^t\|w\|_{W^{k+1,q}(\Omega)}\ {\rm d}s,\ k=0,1.
\end{equation}
The Sobolev embedding then yields (since $q>d$)
\begin{equation}\label{SupEEst}
\|E_w\|_{L^\infty(0,T;L^\infty(\Omega))}\leq cT^\frac{1}{p'}\|w\|_{L^p(0,T;W^{2,q}(\Omega))}
\end{equation}
and we use H\"older inequality to deduce
\begin{equation}\label{ELPQEst}
\|E_w\|_{L^p(0,T;W^{1,q}(\Omega))}\leq c\left(\int_0^Tt^\frac{p}{p'}\|\nabla w\|^p_{L^p(0,T;W^{1,q}(\Omega))}\ {\rm d}t\right)^\frac{1}{p}\leq cT\|w\|_{L^p(0,T;W^{2,q}(\Omega))}.
\end{equation}
By \eqref{EEst} and \eqref{EmbIneq} we infer
\begin{align*}
\|\mathcal{G}(w)\|_{L^\infty(0,T;L^\infty(\Omega))}\leq &c\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}(1+\|E_{w}\|_{L^\infty(0,T;L^\infty(\Omega))})\\
\leq &c\|w\|_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w\|_{L^p(0,T;W^{2,q}(\Omega))})
\end{align*}
and thus
\begin{equation}\label{CrlGEst}
\|\mathcal{G}(w)\|_{L^\infty(0,T;L^\infty(\Omega))}\leq c(M+LT^\frac{1}{p})(1+T^\frac{1}{p'}(M+LT^\frac{1}{p}))
\end{equation}
due to \eqref{EmbIneq}. One similarly deduces
\begin{equation}\label{CrlDEst}
\|\mathcal{D}(w)\|_{L^\infty(0,T;L^\infty(\Omega))}\leq c(M+LT^\frac{1}{p})(1+T^\frac{1}{p'}(M+LT^\frac{1}{p})).
\end{equation}
From now on we restrict ourselves to $T\leq 1$ and fix $R>c(1+L^2)$, where $c$ is the maximum of constants from \eqref{SymGDivBound}, \eqref{CrlGEst} and \eqref{CrlDEst}. Moreover, we assume that $M$ fulfills $R\geq c(1+L+M)^2$ and define the quantity $K$ as follows
\begin{align*}
K=\sup_{s\in[0,R^2]}&\sum_{j=0}^3|\mu^{(j)}(s)|+\sup_{r\in[-R,R]}\sum_{j=0}^2|\lambda^{(j)}(r)|.
\end{align*}
Obviously, the assumed smoothness of functions $\mu$ and $\lambda$ ensures that $K$ is finite.
Furthermore, we define
\begin{equation*}
\Pi=\sup_{r\in[-\tilde cM,\tilde c M]} \sum_{j=0}^2\|\pi^{(j)}(\rho_*+\theta_0+r)\|_{L^\infty(Q_T)}
\end{equation*}
where $\tilde c$ is such that $\|\theta\|_{L^\infty(0,T;L^\infty(\Omega))}\leq \tilde c M$ for all $(\theta, w)\in H_{T,M}$ (see also \eqref{ThetaEst}).
The fact that $\Pi$ is finite follows as $\rho_*+\theta_0$ is a continuous function by the Sobolev embedding and the assumption $\rho_*+\theta_0\in W^{1,q}(\Omega)$, $q>d$.
First, we check that $\Phi:H_{T,M}\to H_{T,M}$ if $T$ and $M$ are chosen appropriately. To this end we fix $(\vartheta,w)\in H_{T,M}$ and estimate $\mathscr G (\vartheta,w)$, $\mathscr F (\vartheta,w)$ respectively.
As $\vartheta(0)=0$, we get for $t\in[0,T]$ using the H\"older inequality
\begin{equation*}
\|\vartheta(t)\|_{W^{1,q}(\Omega)}\leq \int_0^t\|\partial_t\vartheta\|_{W^{1,q}(\Omega)}\ {\rm d}s\leq T^\frac{1}{p'}\|\partial_t\vartheta\|_{L^p(0,T;W^{1,q}(\Omega))}
\end{equation*}
and consequently
\begin{equation}\label{ThetaEst}
\|\vartheta\|_{L^\infty(0,T;W^{1,q}(\Omega))}\leq T^\frac{1}{p'}\|\vartheta\|_{W^{1,p}(0,T;W^{1,q}(\Omega))}.
\end{equation}
Let us focus on the estimate of $\mathscr G (\vartheta,w)$. Using the Sobolev embedding, \eqref{EEst}, \eqref{ThetaEst} and the H\"older inequality it follows that
\begin{equation}\label{RhsGEst}
\begin{split}
\|\nabla w\cdot E_w(\rho_*+\theta_0+\vartheta)\|_{W^{1,q}(\Omega)}&\leq c\|w\|_{W^{2,q}(\Omega)}\|E_w\|_{W^{1,q}}\|\rho_*+\theta_0+\vartheta\|_{W^{1,q}(\Omega)}\\
&\leq c \|w\|_{W^{2,q}(\Omega)}T^\frac{1}{p'}\|w\|_{L^p(0,T;W^{2,q}(\Omega))}\left(\|\rho_*+\theta_0\|_{W^{1,q}(\Omega)}+\|\vartheta\|_{W^{1,q}(\Omega)}\right)\\
&\leq c(M+L)^2T^\frac{1}{p'}\|w\|_{W^{2,q}(\Omega)}
\end{split}
\end{equation}
and
\begin{equation}\label{RhsGEst2}
\|\vartheta \dvr w\|_{W^{1,q}(\Omega)}\leq c\|\vartheta\|_{W^{1,q}(\Omega)}\|w\|_{W^{2,q}(\Omega)}\leq cT^\frac{1}{p'}M\|w\|_{W^{2,q}(\Omega)}.
\end{equation}
In order to proceed with the bound of $\mathscr F(\vartheta, w)$ we begin with some preparatory work. The assumed smoothness of $\mu$ and $\lambda$ yields by the mean value theorem
\begin{equation}\label{MuDiff}
\begin{split}
\mu^{(k)}(|A|^2)-\mu^{(k)}(|B|^2)=&\int_0^1 \frac{{\rm d}}{{\rm d}s}\mu^{(k)}(|sA+(1-s)B|^2)\ds\\
=&2\int_0^1 \mu^{(k+1)}(|sA+(1-s)B|^2)(sA+(1-s)B)\cdot(A-B)\ds,\quad k=0,1,2,
\end{split}
\end{equation}
\begin{equation}\label{MuPrDiff}
\begin{split}
&\mu'(|A|^2)|A|^2-\mu'(|B|^2)|B|^2=\int_0^1 \frac{{\rm d}}{{\rm d}s}\left(\mu'(|sA+(1-s)B|^2)|sA+(1-s)B|^2\right)\ds\\
&=2\int_0^1 (sA+(1-s)B)\cdot(A-B)\left(\mu''(|sA+(1-s)B|^2)|sA+(1-s)B|^2+\mu'(|sA+(1-s)B|^2)\right)\ds,
\end{split}
\end{equation}
\begin{equation}\label{MuDivDiff}
\begin{split}
\mu(|A|^2)\dvr A-\mu(|B|^2)\dvr B=&\int_0^1 \frac{{\rm d}}{{\rm d}s} \left(\mu(|sA+(1-s)B|^2)\dvr (sA+(1-s)B)\right)\ds\\
=&\int_0^1 2\mu'(|sA+(1-s)B|^2)(sA+(1-s)B)\cdot(A-B)\dvr (sA+(1-s)B)\\&+\mu(|sA+(1-s)B|^2)\dvr(A-B)\ds
\end{equation}
and
\begin{equation}\label{MuPrDivDiff}
\begin{split}
\mu'(&|A|^2)|A|^2\dvr A-\mu'(|B|^2)|B|^2\dvr B\\
=&\int_0^1 \frac{{\rm d}}{{\rm d}s} \left(\mu'(|sA+(1-s)B|^2)|sA+(1-s)B|^2\dvr (sA+(1-s)B)\right)\ds\\
=&\int_0^1 2(sA+(1-s)B)\cdot(A-B)\left(\mu''(|sA+(1-s)B|^2)|sA+(1-s)B|^2+\mu'(|sA+(1-s)B|^2)\right)\dvr(sA+(1-s)B)\\
&+\mu'(|sA+(1-s)B|^2)|sA+(1-s)B|^2\dvr(A-B)\ds
\end{split}
\end{equation}
assuming that $A,B$ are sufficiently smooth $d\times d$ matrix-valued functions.
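Identities of this type can be verified numerically for a concrete smooth viscosity; as an illustration, the check below confirms \eqref{MuDiff} with $k=0$ for the purely illustrative choice $\mu(t)=e^{-t}$ and random matrices $A,B$:

```python
import numpy as np

rng = np.random.default_rng(2)
mu = lambda t: np.exp(-t)     # illustrative smooth mu; the text keeps mu abstract
mu_p = lambda t: -np.exp(-t)  # mu'

A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

lhs = mu(np.sum(A * A)) - mu(np.sum(B * B))

# composite trapezoidal rule for the s-integral on the right hand side of (MuDiff)
s = np.linspace(0.0, 1.0, 20_001)
C = s[:, None, None] * A + (1.0 - s[:, None, None]) * B
f = 2.0 * mu_p(np.sum(C * C, axis=(1, 2))) * np.sum(C * (A - B), axis=(1, 2))
rhs = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))
assert abs(lhs - rhs) < 1e-6
print("identity (MuDiff), k = 0, verified")
```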
Moreover, it follows that
\begin{equation}\label{LambdaDiff}
\lambda(a)-\lambda(b)=\int_0^1 \lambda'(sa+(1-s)b)(a-b)\ds,
\end{equation}
\begin{equation*}
\begin{split}
\lambda'(a)a-\lambda'(b)b=&\int_0^1 \frac{{\rm d}}{{\rm d}s}\left(\lambda'(sa+(1-s)b)(sa+(1-s)b)\right)\ds\\=&\int_0^1\left( \lambda'(sa+(1-s)b)+\lambda''(sa+(1-s)b)(sa+(1-s)b)\right)(a-b)\ds,
\end{split}
\end{equation*}
\begin{equation}\label{LambdaNabDiff1}
\begin{split}
\lambda(a)\nabla a-\lambda(b)\nabla b=&\int_0^1 \frac{{\rm d}}{{\rm d}s}\left(\lambda(sa+(1-s)b)\nabla (sa+(1-s)b)\right)\ds\\
=& \int_0^1\lambda'(sa+(1-s)b)(a-b)\nabla (sa+(1-s)b) + \lambda(sa+(1-s)b)\nabla(a-b)\ds
\end{split}
\end{equation}
\begin{equation}\label{LambdaNabDiff2}
\begin{split}
\lambda'(a)a\nabla a-\lambda'(b)b\nabla b=&\int_0^1 \frac{{\rm d}}{{\rm d}s}\left(\lambda'(sa+(1-s)b)(sa+(1-s)b)\nabla (sa+(1-s)b)\right)\ds\\
=&\int_0^1 \lambda''(sa+(1-s)b)(a-b)(sa+(1-s)b)\nabla (sa+(1-s)b)\\
&+\lambda'(sa+(1-s)b)(a-b)\nabla (sa+(1-s)b)\\
&+\lambda'(sa+(1-s)b)(sa+(1-s)b)\nabla (a-b)\ds
\end{split}
\end{equation}
for arbitrary sufficiently smooth real-valued functions $a,b$.
Recalling \eqref{eq:ACoef} we get using \eqref{MuDiff}, \eqref{MuPrDiff}, \eqref{LambdaDiff}, \eqref{LambdaNabDiff1}, \eqref{LambdaNabDiff2}, \eqref{EmbIneq} and the Jensen inequality
\begin{equation}\label{ACoefDifEst}
\begin{split}
&\|a_{jk}^{lm}(\mathbb Du)-a_{jk}^{lm}(\mathbb Dv)\|_{L^\infty(0,T;L^\infty(\Omega))}\\&\leq K\int_0^1\left(1+\|s\mathbb{D}Dev u+(1-s)\mathbb{D}Dev v\|^2_{L^\infty(0,T;L^\infty(\Omega))}\right)\|s\mathbb{D}Dev u+(1-s)\mathbb{D}Dev v\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&\times\|\mathbb{D}Dev (u-v)\|_{L^\infty(0,T;L^\infty(\Omega))}+(1+\|s\dvr u+(1-s)\dvr v\|_{L^\infty(0,T;L^\infty(\Omega))})\|\dvr(u-v)\|_{L^\infty(0,T;L^\infty(\Omega))}\ds\\
&\leq cK(1+\|u\|_{\mathcal V^{p,q}(Q_T)}+\|v\|_{\mathcal V^{p,q}(Q_T)}+\|u\|^2_{\mathcal V^{p,q}(Q_T)}+\|v\|^2_{\mathcal V^{p,q}(Q_T)})\|u-v\|_{\mathcal V^{p,q}(Q_T)},
\end{split}
\end{equation}
whenever $u,v\in \mathcal V^{p,q}(Q_T)$.
We can proceed with the bound of \eqref{RHSExpr}$_2$. First, we estimate using \eqref{CrlGEst}, \eqref{CrlDEst} and \eqref{SupEEst}
\begin{equation}\label{TrSEEst}
\begin{split}
\|\tr(\tilde{\mathcal{S}}(w)E_{w})\|_{L^p(0,T;L^q(\Omega))}\leq&K \left(\|\mathcal G(w)\|_{L^\infty(0,T;L^\infty(\Omega))}+\|\mathcal D(w)\|_{L^\infty(0,T;L^\infty(\Omega))}\right)\|E_{w}\|_{L^p(0,T;L^q(\Omega))}\\
\leq& cKT(M+L)^2(1+(M+L)).
\end{split}
\end{equation}
Then we continue by expanding the divergence to arrive at
\begin{equation*}
\begin{split}
&\dvr(\tilde{\mathcal S}(w)-\mathcal{S}(\mathbb{D}Dev w))=\mu(|\mathcal G(w)|^2)\dvr\mathcal G(w)-\mu(|\mathbb{D}Dev w|^2)\dvr\mathbb{D}Dev w\\&+2\mu'(|\mathcal G(w)|^2)|\mathcal G(w)|^2\dvr\mathcal G(w)-2\mu'(|\mathbb{D}Dev w|^2)|\mathbb{D}Dev w|^2\dvr\mathbb{D}Dev w\\
&+\lambda(\mathcal D(w))\nabla\mathcal D(w)-\lambda(\dvr w)\nabla\dvr w\\
&+\lambda'(\mathcal D(w))\mathcal D(w)\nabla\mathcal D(w)-\lambda'(\dvr w)\dvr w\nabla\dvr w=\sum_{j=1}^4I_j.
\end{split}
\end{equation*}
We estimate each $I_j$ separately. We observe that denoting
\begin{equation*}
\mathcal G_s=s(\mathbb{D}Dev w-\sym(\nabla wE_w)^D)+(1-s)\mathbb{D}Dev w
\end{equation*}
it follows by \eqref{MuDivDiff} that
\begin{align*}
I_1=-\int_0^1 2\mu'(|\mathcal G_s|^2)\mathcal G_s\cdot\sym(\nabla wE_w)^D\dvr\mathcal G_s+\mu(|\mathcal G_s|^2)\dvr\sym(\nabla wE_w)^D\ds.
\end{align*}
Further we employ the estimates
\begin{equation}\label{GSEst}
\begin{split}
\|\mathcal G_s\|_{L^\infty(0,T;L^\infty(\Omega))}\leq& c\|\mathbb{D}Dev w\|_{L^\infty(0,T;L^\infty(\Omega))}+\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\|E_w\|_{L^\infty(0,T;L^\infty(\Omega))}
\\
\leq& c\|w\|_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w\|_{\mathcal{V}^{p,q}(Q_T)}),
\end{split}
\end{equation}
\begin{equation}\label{DivGSEst}
\begin{split}
\|\dvr\mathcal G_s \|_{L^p(0,T;L^q(\Omega))}\leq& c\|w\|_{L^p(0,T;W^{2,q}(\Omega))}(1+\|E_{w}\|_{L^\infty(0,T;L^\infty(\Omega))})+\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\|\nabla E_w\|_{L^p(0,T;L^q(\Omega))}\\
\leq&c\|w\|_{\mathcal V^{p,q}(Q_T)}(1+(T^\frac{1}{p'}+T)\|w\|_{\mathcal V^{p,q}(Q_T)}),
\end{split}
\end{equation}
\begin{equation}\label{DivSymEst}
\begin{split}
\|\dvr\sym(\nabla wE_w)^D\|_{L^p(0,T;L^q(\Omega))}\leq&c\|\nabla^2 w\|_{L^p(0,T;L^q(\Omega))}\|E_w\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&+\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\|\nabla E_w\|_{L^p(0,T;L^q(\Omega))}\\
\leq&c\|w\|^2_{\mathcal V^{p,q}(Q_T)}(T^\frac{1}{p'}+T)
\end{split}
\end{equation}
that follow by \eqref{EEst}, \eqref{ELPQEst} and \eqref{EmbIneq}.
Employing the Jensen inequality, \eqref{GSEst}, \eqref{DivGSEst} and \eqref{DivSymEst} we infer
\begin{equation}\label{I1Est}
\begin{split}
\|&I_1\|_{L^p(0,T;L^q(\Omega))}\\
\leq&cK\|w\|^2_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w\|_{\mathcal V^{p,q}(Q_T)})\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\|E_w\|_{L^\infty(0,T;L^\infty(\Omega))}(1+(T^\frac{1}{p'}+T)\|w\|_{\mathcal V^{p,q}(Q_T)})\\
&+cK\|w\|^2_{\mathcal V^{p,q}(Q_T)}(T^\frac{1}{p'}+T)\\
\leq& cK \|w\|^2_{\mathcal V^{p,q}(Q_T)}\left((1+T^\frac{1}{p'}\|w\|_{\mathcal V^{p,q}(Q_T)})T^\frac{1}{p'}\|w\|^2_{\mathcal V^{p,q}(Q_T)} (1+(T^\frac{1}{p'}+T)\|w\|_{\mathcal V^{p,q}(Q_T)})+T^\frac{1}{p'}+T\right)\\
\leq& cK(M+L)^2\left((1+(M+L))^2T^\frac{1}{p'}(M+L)^2+T^\frac{1}{p'}+T\right),
\end{split}
\end{equation}
where the Sobolev embedding, interpolation inequality \eqref{EmbIneq} and \eqref{ELPQEst} were also employed.
By \eqref{MuPrDivDiff} we have
\begin{multline*}
I_2 =-\int_0^1 2\left(\mu''(|\mathcal G_s|^2)|\mathcal G_s|^2+\mu'(|\mathcal G_s|^2)\right)\mathcal G_s\cdot\sym(\nabla wE_w)^D\dvr\mathcal G_s\\+\mu'(|\mathcal G_s|^2)|\mathcal G_s|^2\dvr\sym(\nabla wE_w)^D\ds.
\end{multline*}
Then using \eqref{GSEst}, \eqref{DivGSEst} and \eqref{DivSymEst} we obtain
\begin{equation}\label{I2Est}
\begin{split}
\|&I_2\|_{L^p(0,T;L^q(\Omega))}\\
\leq &cK\left((\|w\|^2_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w\|_{\mathcal V^{p,q}(Q_T)})^2+1)\|w\|_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w\|_{\mathcal V^{p,q}(Q_T)})\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\right.\\
&\left.\times \|E_w\|_{L^\infty(0,T;L^\infty(\Omega))}\|w\|_{\mathcal V^{p,q}(Q_T)}(1+(T^\frac{1}{p'}+T)\|w\|_{\mathcal V^{p,q}(Q_T)})+\|w\|^4_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w\|_{\mathcal V^{p,q}(Q_T)})(T^\frac{1}{p'}+T)\right)\\
\leq& cK\left(((M+L)^2(1+(M+L)^2)+1)(M+L)^4(1+(M+L))T^\frac{1}{p'}(1+(M+L))\right.\\
&\left.+(M+L)^4(1+(M+L))(T^\frac{1}{p'}+T)\right).
\end{split}
\end{equation}
Denoting
\begin{equation*}
\mathcal D_s=s(\dvr w-\nabla w\cdot E_w)+(1-s)\dvr w
\end{equation*}
we obtain by \eqref{LambdaNabDiff1}
\begin{align*}
I_3=-\int_0^1 \lambda'(\mathcal D_s)\nabla w\cdot E_w\nabla\mathcal D_s+\lambda(\mathcal D_s)\nabla(\nabla w\cdot E_w)\ds.
\end{align*}
Moreover, one has by \eqref{EmbIneq} and \eqref{EEst}
\begin{equation}\label{DSEst}
\begin{split}
\|\mathcal D_s\|_{L^\infty(0,T;L^\infty(\Omega))}\leq& c\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}(1+\|E_w\|_{L^\infty(0,T;L^\infty(\Omega))})\\
\leq& c\|w\|_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w\|_{\mathcal V^{p,q}(Q_T)})
\end{split}
\end{equation}
and
\begin{equation}\label{NabDSEst}
\begin{split}
\|\nabla\mathcal D_s\|_{L^p(0,T;L^q(\Omega))}\leq& c\|\nabla^2w\|_{L^p(0,T;L^q(\Omega))}+\|\nabla^2w\|_{L^p(0,T;L^q(\Omega))}\|E_w\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&+\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\|\nabla E_w\|_{L^p(0,T;L^q(\Omega))}\\
\leq& c\|w\|_{\mathcal V^{p,q}(Q_T)}\left(1+(T^\frac{1}{p'}+T)\|w\|_{\mathcal V^{p,q}(Q_T)}\right).
\end{split}
\end{equation}
Then it follows by \eqref{EmbIneq}, \eqref{EEst}, \eqref{DSEst} and \eqref{NabDSEst} that
\begin{equation}\label{I3Est}
\begin{split}
\|I_3\|_{L^p(0,T;L^q(\Omega))}\leq &cK\left(\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\|E_w\|_{L^\infty(0,T;L^\infty(\Omega))}\|\nabla \mathcal D_s\|_{L^p(0,T;L^q(\Omega))}\right.\\
&\left.+\|\nabla^2 w\|_{L^p(0,T;L^q(\Omega))}\|E_w\|_{L^\infty(0,T;L^\infty(\Omega))}+\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\|\nabla E_w\|_{L^p(0,T;L^q(\Omega))}\right)\\
\leq &cK\left(T^\frac{1}{p'}\|w\|^3_{\mathcal V^{p,q}(Q_T)}\left(1+(T^\frac{1}{p'}+T)\|w\|_{\mathcal V^{p,q}(Q_T)}\right)+(T^\frac{1}{p'}+T)\|w\|^2_{\mathcal V^{p,q}(Q_T)}\right)\\
\leq&cK\left(T^\frac{1}{p'}(M+L)^3\left(1+(M+L)\right)+(T^\frac{1}{p'}+T)(M+L)^2\right).
\end{split}
\end{equation}
By \eqref{LambdaNabDiff2} we obtain
\begin{align*}
I_4=&-\int_0^1 \lambda''(\mathcal D_s)\nabla w\cdot E_w\mathcal D_s\nabla\mathcal D_s+\lambda'(\mathcal D_s)\nabla w\cdot E_w\nabla\mathcal D_s\\
&+\lambda'(\mathcal D_s)\mathcal D_s\nabla(\nabla w\cdot E_w)\ds.
\end{align*}
Finally, we get by \eqref{CrlDEst} and \eqref{EmbIneq}
\begin{equation}\label{I4Est}
\begin{split}
\|&I_4\|_{L^p(0,T;L^q(\Omega))}\\
\leq& cK \left(\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\|E_{w}\|_{L^\infty(0,T;L^\infty(\Omega))}(1+\|\mathcal D_s\|_{L^\infty(0,T;L^\infty(\Omega))})\|\nabla\mathcal D_s\|_{L^p(0,T;L^q(\Omega))}\right.\\
&\left.+\|\mathcal D_s\|_{L^\infty(0,T;L^\infty(\Omega))}(\|\nabla^2w\|_{L^p(0,T;L^q(\Omega))}\|E_w\|_{L^\infty(0,T;L^\infty(\Omega))}+\|\nabla w\|_{L^\infty(0,T;L^\infty(\Omega))}\|\nabla E_w\|_{L^p(0,T;L^q(\Omega))})\right)\\
\leq&cK\left(T^\frac{1}{p'}\|w\|^4_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w\|_{\mathcal V^{p,q}(Q_T)})\left(1+(T^\frac{1}{p'}+T)\|w\|_{\mathcal V^{p,q}(Q_T)}\right)\right.\\
&\left.+(T^\frac{1}{p'}+T)\|w\|^3_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w\|_{\mathcal V^{p,q}(Q_T)})\right)\\
\leq&cK\left(T^\frac{1}{p'}(M+L)^4\left(1+(M+L)\right)^2+(T^\frac{1}{p'}+T)(M+L)^3(1+(M+L))\right).
\end{split}
\end{equation}
In order to proceed with the estimate of $\mathscr F(\vartheta,w)$ we have
\begin{equation}\label{PrDiffEst2}
\begin{split}
&\|\pi'(\rho_*+\theta_0+\vartheta)\nabla(\theta_0+\vartheta)E_w\|_{L^p(0,T;L^q(\Omega))}\leq \Pi\|\nabla(\theta_0+\vartheta)\|_{L^p(0,T;L^q(\Omega))}\|E_{w}\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&\leq c\Pi(L+M)T^\frac{1}{p'}(M+L).
\end{split}
\end{equation}
Applying \eqref{ThetaEst} and the Sobolev embedding we obtain
\begin{equation}\label{TDerEst}
\|\vartheta\partial_t w\|_{L^p(0,T;L^q(\Omega))}\leq \|\vartheta\|_{L^\infty(0,T;L^\infty(\Omega))}\|\partial_t w\|_{L^p(0,T;L^q(\Omega))}\leq c T^\frac{1}{p'}M(M+L).
\end{equation}
One immediately has
\begin{equation}\label{PrDiffEst3}
\|\pi'(\rho_*+\theta_0)\nabla \theta_0\|_{L^p(0,T;L^q(\Omega))}\leq c T^\frac{1}{p}\Pi L.
\end{equation}
By the assumed smoothness of $\pi$, the Sobolev embedding and \eqref{ThetaEst} we get
\begin{equation}\label{PrDiffEst4}
\begin{split}
\|(\pi'(\rho_*+\theta_0+\vartheta)-\pi'(\rho_*+\theta_0))\nabla(\theta_0+ \vartheta)\|_{L^p(0,T;L^q(\Omega))}&\leq \Pi\|\vartheta\|_{L^\infty(0,T;L^\infty(\Omega))}\|\nabla(\theta_0+\vartheta)\|_{L^p(0,T;L^q(\Omega))}\\
&\leq c\Pi T^\frac{1}{p'}M(L+M).
\end{split}
\end{equation}
It remains to estimate the norm of the difference $\dvr \mathcal{S}(\mathbb{D}Dev w)-\mathcal{A}(\mathbb D u_0)(\mathbb D(w-u_0))$. To this end, recalling \eqref{eq:DivS} and \eqref{eq:def.A} we obtain
\begin{equation}\label{LinearizDiff}
\begin{split}
\|\dvr \mathcal{S}(\mathbb{D}Dev w)-\mathcal{A}(\mathbb D u_0)(\mathbb D(w-u_0))\|_{L^p(0,T;L^q(\Omega))}\leq&\|\sum_{j,k,l,m=1}^d(a_{jk}^{lm}(\mathbb D w)-a_{jk}^{lm}(\mathbb D u_0))\partial_l\partial_m (w-u_0)\|_{L^p(0,T;L^q(\Omega))}\\&+\|\sum_{j,k,l,m=1}^da_{jk}^{lm}(\mathbb D w)\partial_l\partial_mu_0\|_{L^p(0,T;L^q(\Omega))}.
\end{split}
\end{equation}
By \eqref{eq:ACoef} and \eqref{ACoefDifEst} it follows that
\begin{align*}
&\|a_{jk}^{lm}(\mathbb D w)-a_{jk}^{lm}(\mathbb D u_0)\|_{L^\infty(0,T;L^\infty(\Omega))}
\\&\leq cK(1+\|w\|_{\mathcal V^{p,q}(Q_T)}+\|u_0\|_{W^{2,q}(\Omega)}+\|w\|^2_{\mathcal V^{p,q}(Q_T)}+\|u_0\|^2_{W^{2,q}(\Omega)})\|w-u_0\|_{\mathcal V^{p,q}(Q_T)}.
\end{align*}
We use the above estimate and \eqref{EmbIneq} in \eqref{LinearizDiff} to deduce
\begin{equation}\label{LastDiffEst}
\begin{split}
\|\dvr \mathcal{S}(\mathbb{D}Dev w)-&\mathcal{A}(\mathbb Du_0)(\mathbb D(w-u_0))\|_{L^p(0,T;L^q(\Omega))}\leq c K(1+L+L^2+M+M^2)M^2\\
&+cK(1+(M+L)+(M+L)^2)LT^\frac{1}{p}.
\end{split}
\end{equation}
We summarize \eqref{RhsGEst}, \eqref{RhsGEst2}, \eqref{TrSEEst}, \eqref{I1Est}, \eqref{I2Est}, \eqref{I3Est}, \eqref{I4Est}, \eqref{TDerEst}, \eqref{PrDiffEst2}, \eqref{PrDiffEst3}, \eqref{PrDiffEst4} and \eqref{LastDiffEst} to conclude
\begin{multline*}
\|\mathscr G(\vartheta,w)\|_{L^p(0,T;W^{1,q}(\Omega))}+\|\mathscr F(\vartheta,w)\|_{L^p(0,T;L^q(\Omega))}\\
\leq c\left(\sum_{j=1}^{N_1} K^{k_j}L^{l_j}M^{m_j}\Pi^{|k_j-1|}T^{\alpha_j}+\sum_{j=1}^{N_2} K^{n_j}L^{o_j}M^{p_j} + T\right),
\end{multline*}
where $\alpha_j >0$, $l_j,m_j,o_j\in\mathbf{N}\cup\{0\}$, $p_j\in \mathbf{N}\setminus\{1\}$ and $k_j,n_j\in\{0,1\}$.
On the right hand side, $M$ appears either raised to some nonnegative power in a product with a positive power of $T$, or raised to the second or higher power without a factor of $T$. As a result, for every $c>0$ we can choose $M$, and consequently also $T$, in such a way that
$$\|\mathscr G(\vartheta,w)\|_{L^p(0,T;W^{1,q}(\Omega))}+\|\mathscr F(\vartheta,w)\|_{L^p(0,T;L^q(\Omega))}\leq c M.
$$
Theorem \ref{thm:regularita} then yields $\Phi:H_{T,M}\to H_{T,M}$ for these appropriately chosen $T$ and $M$.
In order to verify that $\Phi$ is a contraction on $H_{T,M}$ we fix arbitrary pairs $(\vartheta^1,w^1), (\vartheta^2,w^2)\in H_{T,M}$. Next we note that the difference $(\theta^1-\theta^2,\tilde u^1-\tilde u^2)$ of solutions corresponding to $(\vartheta^1,w^1)$, $(\vartheta^2,w^2)$ respectively, solves the equations
\begin{equation}\label{eq:LinearizedDiff}
\begin{split}
\partial_t (\theta^1-\theta^2)+(\varrho_*+\theta_0)\dvr(\tilde u^1-\tilde u^2)=&\mathscr{G}(\vartheta^1,w^1)-\mathscr{G}(\vartheta^2,w^2),\\
(\varrho_*+\theta_0)\partial_t(\tilde u^1-\tilde u^2)-\mathcal{A}(\mathbb D u_0)(\mathbb D(\tilde u^1-\tilde u^2))+\pi'(\varrho_*+\theta_0)\nabla(\theta^1-\theta^2)=&\mathscr{F}(\vartheta^1,w^1)-\mathscr{F}(\vartheta^2,w^2).
\end{split}
\end{equation}
We begin with the estimate of the difference on the right hand side of \eqref{eq:LinearizedDiff}$_1$. Employing \eqref{ThetaEst}, the Sobolev embedding and \eqref{EmbIneq} it follows that
\begin{equation}\label{RhsGDiff1}
\begin{split}
\|\vartheta^1\dvr w^1-\vartheta^2\dvr w^2\|_{L^p(0,T;W^{1,q}(\Omega))}\leq& c\|\vartheta^1-\vartheta^2\|_{L^\infty(0,T;L^\infty(\Omega))}\|w^1\|_{L^p(0,T;W^{2,q}(\Omega))}\\&+c\|\vartheta^2\|_{L^\infty(0,T;L^\infty(\Omega))}\|w^1-w^2\|_{L^p(0,T;W^{2,q}(\Omega))}\\
\leq &cT^\frac{1}{p'}(M+L)\|\vartheta^1-\vartheta^2\|_{L^{\infty}(0,T;L^{\infty}(\Omega))}\\
&+cT^\frac{1}{p'}M\|w^1-w^2\|_{L^p(0,T;W^{2,q}(\Omega))}
\end{split}
\end{equation}
as well as
\begin{equation}\label{RhsGDiff2}
\begin{split}
&\|\nabla w^1\cdot E_{w^1}(\rho_*+\theta_0+\vartheta^1)-\nabla w^2\cdot E_{w^2}(\rho_*+\theta_0+\vartheta^2)\|_{L^p(0,T;W^{1,q}(\Omega))}\\
&\leq \|w^1-w^2\|_{L^p(0,T;W^{2,q}(\Omega))}\|E_{w^1}\|_{L^\infty(0,T;L^\infty(\Omega))}\|\rho_*+\theta_0+\vartheta^1\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&+\|w^2\|_{L^\infty(0,T;W^{1,\infty}(\Omega))}\|E_{w^1}-E_{w^2}\|_{L^p(0,T;W^{1,q}(\Omega))}\|\rho_*+\theta_0+\vartheta^1\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&+\|w^2\|_{L^\infty(0,T;W^{1,\infty}(\Omega))}\|E_{w^2}\|_{L^\infty(0,T;L^\infty(\Omega))}\|\vartheta^1-\vartheta^2\|_{L^p(0,T;W^{1,q}(\Omega))}\\
&\leq cMT^\frac{1}{p'}\|w^1-w^2\|_{L^p(0,T;W^{2,q}(\Omega))}(L+M)+c(M+L)^2T\|w^1-w^2\|_{L^p(0,T;W^{2,q}(\Omega))}\\
&+c(M+L)MT^\frac{1}{p'}\|\vartheta^1-\vartheta^2\|_{L^p(0,T;W^{1,q}(\Omega))}
\end{split}
\end{equation}
where \eqref{ELPQEst} was also applied.
Next we focus on the estimate of the difference on the right hand side in \eqref{eq:LinearizedDiff}$_2$. By the assumed smoothness of $\pi$ we get
\begin{equation}\label{PrDiffEst1}
\begin{split}
\|\nabla\pi(\rho_*+\theta_0+\vartheta^1)E_{w^1}&-\nabla\pi(\rho_*+\theta_0+\vartheta^2)E_{w^2}\|_{L^p(0,T;L^q(\Omega))}\\
&\leq \Pi\|\vartheta^1-\vartheta^2\|_{L^p(0,T;L^q(\Omega))}\|E_{w^1}\|_{L^\infty(0,T;L^\infty(\Omega))}+\Pi\|E_{w^1}-E_{w^2}\|_{L^p(0,T;L^q(\Omega))}\\
&\leq c\Pi(T^\frac{1}{p'}(M+L)+T)[\vartheta^1-\vartheta^2,w^1-w^2]_{T,M}.
\end{split}
\end{equation}
In order to proceed, we observe that by \eqref{SupEEst} and \eqref{ELPQEst}
\begin{align*}
\|&\mathcal G(w^1)-\mathcal G(w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\\
\leq& \|\mathbb{D}Dev w^1-\mathbb{D}Dev w^2\|_{L^\infty(0,T;L^\infty(\Omega))}+\|\sym(\nabla(w^1-w^2)E_{w^1})^D\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&+\|\sym(\nabla w^2E_{w^1-w^2})^D\|_{L^\infty(0,T;L^\infty(\Omega))}\\
\leq &c\|w^1-w^2\|_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w^1\|_{L^p(0,T;W^{2,q}(\Omega))}+T\|w^2\|_{L^\infty(0,T;W^{1,\infty}(\Omega))}).
\end{align*}
Similarly, we deduce
\begin{align*}
\|&\mathcal D(w^1)-\mathcal D(w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&\leq\|\dvr (w^1-w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}+\|\nabla(w^1-w^2)\cdot E_{w^1}\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&+\|\nabla w^2\cdot E_{w^1-w^2}\|_{L^\infty(0,T;L^\infty(\Omega))}\\
\leq& c\|w^1-w^2\|_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}\|w^1\|_{L^p(0,T;W^{2,q}(\Omega))}+T\|w^2\|_{L^\infty(0,T;W^{1,\infty}(\Omega))}).
\end{align*}
We employ the latter two estimates, \eqref{CrlGEst}, \eqref{CrlDEst}, \eqref{MuDiff} and \eqref{EmbIneq} to obtain
\begin{equation}\label{TrSEDiffEst}
\begin{split}
\|&\tr\tilde{\mathcal{S}}(w^1)E_{w^1}-\tr\tilde{\mathcal{S}}(w^2)E_{w^2}\|_{L^p(0,T;L^q(\Omega))}\\
\leq&\left(\|\mu(|\mathcal G(w^1)|^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\|\mathcal G(w^1)-\mathcal G(w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\right.\\
&\left.+\|\lambda(\mathcal D (w^1))\|_{L^\infty(0,T;L^\infty(\Omega))}\|\mathcal D (w^1)-\mathcal D(w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\right.\\
&\left.+\|\mu(|\mathcal G(w^1)|^2)-\mu(|\mathcal{G}(w^2)|^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\|\mathcal G(w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\right.\\
&\left.+\|\lambda(\mathcal D (w^1))-\lambda(\mathcal D(w^2))\|_{L^\infty(0,T;L^\infty(\Omega))}\|\mathcal D(w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\right)\|E_{w^1}\|_{L^p(0,T;L^q(\Omega))}\\
&+\left(\|\mu(|\mathcal{G}(w^2)|^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\|\mathcal G(w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\right.\\
&\left.+\|\lambda(\mathcal D(w^2))\|_{L^\infty(0,T;L^\infty(\Omega))}\|\mathcal D(w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\right)\|E_{w^1-w^2}\|_{L^p(0,T;L^q(\Omega))}\\
\leq& cK \left(1+(M+L)+(M+L)^2(1+(M+L))^2\right.\\
&\left.+(M+L)(1+(M+L))\right)T(M+L)\|w^1-w^2\|_{\mathcal V^{p,q}(Q_T)}.
\end{split}
\end{equation}
We continue with expanding the divergence to obtain
\begin{align*}
\dvr& \left(\tilde{\mathcal S}(w^1)-\tilde{\mathcal S}(w^2)-(\mathcal S(\mathbb{D}Dev w^1)-\mathcal S(\mathbb{D}Dev w^2))\right)\\
=&\left(\mu(|\mathcal{G}(w^1)|^2)\dvr\mathcal{G}(w^1)-\mu(|\mathbb{D}Dev w^1|^2)\dvr\mathbb{D}Dev w^1\right.\\
&\left.-\left(\mu(|\mathcal{G}(w^2)|^2)\dvr\mathcal{G}(w^2)-\mu(|\mathbb{D}Dev w^2|^2)\dvr\mathbb{D}Dev w^2\right)\right)\\
&+2\left(\mu'(|\mathcal G(w^1)|^2)|\mathcal G(w^1)|^2\dvr\mathcal{G}(w^1)-\mu'(|\mathbb{D}Dev w^1|^2)|\mathbb{D}Dev w^1|^2\dvr\mathbb{D}Dev w^1\right.\\
&\left.-\mu'(|\mathcal G(w^2)|^2)|\mathcal G(w^2)|^2\dvr\mathcal{G}(w^2)+\mu'(|\mathbb{D}Dev w^2|^2)|\mathbb{D}Dev w^2|^2\dvr\mathbb{D}Dev w^2\right)\\
&+\left(\lambda(\mathcal D(w^1))\nabla\mathcal{D}(w^1)-\lambda(\dvr w^1)\nabla\dvr w^1\right.\\
&\left.+\lambda(\mathcal D(w^2))\nabla\mathcal{D}(w^2)-\lambda(\dvr w^2)\nabla\dvr w^2\right)\\
&+\left(\lambda'(\mathcal{D}(w^1))\mathcal D(w^1)\nabla\mathcal D(w^1)-\lambda'(\dvr w^1)\dvr w^1\nabla\dvr w^1\right.\\
&-\left.\lambda'(\mathcal{D}(w^2))\mathcal D(w^2)\nabla\mathcal D(w^2)+\lambda'(\dvr w^2)\dvr w^2\nabla\dvr w^2\right)=\sum_{j=1}^4I_j.
\end{align*}
We estimate the norm of each $I_j$ separately. First, denoting
\begin{equation*}
\mathcal G^j_s=s\mathcal{G}(w^j)+(1-s)\mathbb{D}Dev w^j,\quad j=1,2,
\end{equation*}
we obtain by \eqref{MuDiff}, \eqref{MuDivDiff} and \eqref{MuPrDivDiff}
\begin{align*}
I_1=&-\int_0^1(\mu'(|\mathcal{G}^1_s|^2)-\mu'(|\mathcal{G}^2_s|^2))\mathcal{G}^1_s\cdot\sym(\nabla w^1E_{w^1})^D\dvr\mathcal G^1_s\\
&+\mu'(|\mathcal G^2_s|^2)(\mathcal{G}^1_s-\mathcal G^2_s)\cdot\sym(\nabla w^1E_{w^1})^D\dvr\mathcal G^1_s\\
&+\mu'(|\mathcal G^2_s|^2)\mathcal{G}^2_s\cdot\sym(\nabla(w^1-w^2)E_{w^1})^D\dvr\mathcal G^1_s\\
&+\mu'(|\mathcal G^2_s|^2)\mathcal{G}^2_s\cdot\sym(\nabla w^2E_{w^1-w^2})^D\dvr\mathcal G^1_s\\
&+\mu'(|\mathcal G^2_s|^2)\mathcal{G}^2_s\cdot\sym(\nabla w^2E_{w^2})^D\dvr(\mathcal G^1_s-\mathcal{G}^2_s)\\
&+(\mu(|\mathcal{G}^1_s|^2)-\mu(|\mathcal{G}^2_s|^2))\dvr\sym(\nabla w^1E_{w^1})^D\\
&+\mu(|\mathcal{G}^2_s|^2)\dvr\sym(\nabla (w^1-w^2)E_{w^1})^D\\
&+\mu(|\mathcal{G}^2_s|^2)\dvr\sym(\nabla w^2E_{w^1-w^2})^D\ds
\end{align*}
and
\begin{align*}
I_2=&-\int_0^1 2\left((\mu''(|\mathcal G^1_s|^2)-\mu''(|\mathcal G^2_s|^2))|\mathcal G^1_s|^2+\mu'(|\mathcal G^1_s|^2)-\mu'(|\mathcal G^2_s|^2)\right)\\
&\mathcal G^1_s\cdot\sym(\nabla w^1E_{w^1})^D\dvr\mathcal G^1_s+2\mu''(|\mathcal G^2_s|^2)(|\mathcal G^1_s|^2-|\mathcal G^2_s|^2)\mathcal G^1_s\cdot\sym(\nabla w^1E_{w^1})^D\dvr\mathcal G^1_s\\
&+2\left(\mu''(|\mathcal G^2_s|^2)|\mathcal G^2_s|^2+\mu'(|\mathcal G^2_s|^2)\right)\left(\mathcal G^1_s-\mathcal G^2_s\right)\cdot\sym(\nabla w^1E_{w^1})^D\dvr\mathcal G^1_s\\
&+2\left(\mu''(|\mathcal G^2_s|^2)|\mathcal G^2_s|^2+\mu'(|\mathcal G^2_s|^2)\right)\mathcal G^2_s\cdot\sym(\nabla(w^1-w^2)E_{w^1})^D\dvr\mathcal G^1_s\\
&+2\left(\mu''(|\mathcal G^2_s|^2)|\mathcal G^2_s|^2+\mu'(|\mathcal G^2_s|^2)\right)\mathcal G^2_s\cdot\sym(\nabla w^2E_{w^1-w^2})^D\dvr\mathcal G^1_s\\
&+2\left(\mu''(|\mathcal G^2_s|^2)|\mathcal G^2_s|^2+\mu'(|\mathcal G^2_s|^2)\right)\mathcal G^2_s\cdot\sym(\nabla w^2E_{w^2})^D\dvr(\mathcal G^1_s-\mathcal G^2_s)\\
&+(\mu(|\mathcal G^1_s|^2)-\mu(|\mathcal G^2_s|^2))|\mathcal G^1_s|^2\dvr\sym(\nabla w^1E_{w^1})^D\\
&+\mu(|\mathcal G^2_s|^2)(|\mathcal G^1_s|^2-|\mathcal G^2_s|^2)\dvr\sym(\nabla w^1E_{w^1})^D\\
&+\mu(|\mathcal G^2_s|^2)|\mathcal G^2_s|^2\dvr\sym(\nabla (w^1-w^2)E_{w^1})^D\\
&+\mu(|\mathcal G^2_s|^2)|\mathcal G^2_s|^2\dvr\sym(\nabla w^2E_{w^1-w^2})^D.
\end{align*}
Then denoting
\begin{equation*}
\mathcal D^j_s=s\mathcal D(w^j)+(1-s)\dvr w^j,\quad j=1,2,
\end{equation*}
we obtain using \eqref{LambdaNabDiff1}
\begin{align*}
I_3=&-\int_0^1 \left(\lambda'(\mathcal D^1_s)-\lambda'(\mathcal D^2_s)\right)\nabla w^1\cdot E_{w^1}\nabla\mathcal D^1_s+\lambda'(\mathcal D^2_s)\nabla (w^1-w^2)\cdot E_{w^1}\nabla\mathcal D^1_s\\
&+\lambda'(\mathcal D^2_s)\nabla w^2\cdot E_{w^1-w^2}\nabla\mathcal D^1_s+\lambda'(\mathcal D^2_s)\nabla w^2\cdot E_{w^2}\nabla(\mathcal D^1_s-\mathcal D^2_s)\\
&+(\lambda(\mathcal D^1_s)-\lambda(\mathcal D^2_s))\nabla(\nabla w^1\cdot E_{w^1})+\lambda(\mathcal D_s^2)\nabla(\nabla(w^1-w^2)\cdot E_{w^1})
\\
&+\lambda(\mathcal D_s^2)\nabla(\nabla w^2\cdot E_{w^1-w^2})\ds
\end{align*}
and
\begin{align*}
I_4=&-\int_0^1 \left(\lambda''(\mathcal D^1_s)-\lambda''(\mathcal D^2_s)\right)\nabla w^1\cdot E_{w^1}\mathcal D^1_s\nabla\mathcal D^1_s+\lambda''(\mathcal D^2_s)\nabla(w^1-w^2)\cdot E_{w^1}\mathcal D^1_s\nabla\mathcal D^1_s\\
&+\lambda''(\mathcal D^2_s)\nabla w^2\cdot E_{w^1-w^2}\mathcal D^1_s\nabla\mathcal D^1_s+\lambda''(\mathcal D^2_s)\nabla w^2\cdot E_{w^2}(\mathcal D^1_s-\mathcal D^2_s)\nabla\mathcal D^1_s\\
&+\lambda''(\mathcal D^2_s)\nabla w^2\cdot E_{w^2}\mathcal D^2_s\nabla(\mathcal D^1_s-\mathcal D^2_s)+\left(\lambda'(\mathcal D^1_s)-\lambda'(\mathcal D^2_s)\right)\mathcal D^1_s\nabla(\nabla w^1\cdot E_{w^1})\\
&+\lambda'(\mathcal D^2_s)\left(\mathcal D^1_s-\mathcal D^2_s\right)\nabla(\nabla w^1\cdot E_{w^1})+\lambda'(\mathcal D^2_s)\mathcal D^2_s\nabla(\nabla(w^1-w^2)\cdot E_{w^1})\\
&+\lambda'(\mathcal D^2_s)\mathcal D^2_s\nabla(\nabla w^2\cdot E_{w^1-w^2})\ds.
\end{align*}
We also have the estimates
\begin{align*}
\|\mathcal G^1_s&-\mathcal G^2_s\|_{L^\infty(0,T;L^\infty(\Omega))}\\
\leq&
\|\mathbb{D}Dev(w^1-w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}+\|\nabla(w^1-w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\|E_{w^1}\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&+\|\nabla w^2\|_{L^\infty(0,T;L^\infty(\Omega))}\|E_{w^1-w^2}\|_{L^\infty(0,T;L^\infty(\Omega))}\\
\leq&c\|w^1-w^2\|_{\mathcal V^{p,q}(Q_T)}(1+T^\frac{1}{p'}(\|w^1\|_{L^p(0,T;W^{2,q}(\Omega))}+\|w^2\|_{\mathcal V^{p,q}(Q_T)}))
\end{align*}
and
\begin{align*}
\|\dvr&(\mathcal G^1_s-\mathcal G^2_s)\|_{L^p(0,T;L^q(\Omega))}\\
\leq& c\|\nabla^2(w^1-w^2)\|_{L^p(0,T;L^q(\Omega))}(1+\|E_{w^2}\|_{L^\infty(0,T;L^\infty(\Omega))})\\
&+c\|\nabla(w^1-w^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\|\nabla E_{w^2}\|_{L^p(0,T;L^q(\Omega))}\\
&+\|\nabla^2w^1\|_{L^p(0,T;L^q(\Omega))}\|E_{w^2-w^1}\|_{L^\infty(0,T;L^\infty(\Omega))}\\
&+\|\nabla w^1\|_{L^\infty(0,T;L^\infty(\Omega))}\|\nabla E_{w^2-w^1}\|_{L^p(0,T;L^q(\Omega))}\\
\leq&c\|w^1-w^2\|_{\mathcal V^{p,q}(Q_T)}(1+(T^\frac{1}{p'}+T)(\|w^1\|_{\mathcal V^{p,q}(Q_T)}+\|w^2\|_{\mathcal{V}^{p,q}(Q_T)}))
\end{align*}
by \eqref{EEst} and \eqref{EmbIneq}. By the previous two estimates along with \eqref{GSEst}, \eqref{DivGSEst}, \eqref{DivSymEst}, \eqref{DSEst} and \eqref{NabDSEst} in straightforward manipulations similar to the ones in the boundedness proof we infer that
\begin{equation}\label{AuxSum}
\sum_{j=1}^4\|I_j\|_{L^p(0,T;L^q(\Omega))}\leq \|w^1-w^2\|_{\mathcal V^{p,q}(Q_T)}cK\sum_{j=1}^{\tilde N} M^{p_j}L^{r_j}T^{\kappa_j},\ p_j,r_j\in\mathbf{N}\cup \{0\},\kappa_j>0.
\end{equation}
We omit the detailed presentation of the proof of the latter inequality in order to keep the length of the paper in a reasonable limit.
We infer in a straightforward way
\begin{equation}\label{TDerDiffEst}
\begin{split}
\|\vartheta^1\partial_t w^1-\vartheta^2 \partial_t w^2\|_{L^p(0,T;L^q(\Omega))}\leq& \|\vartheta^1-\vartheta^2\|_{L^\infty(0,T;L^\infty(\Omega))}\|\partial_t w^1\|_{L^p(0,T;L^q(\Omega))}\\&+\|\vartheta^2\|_{L^\infty(0,T;L^\infty(\Omega))}\|\partial_t w^1-\partial_t w^2\|_{L^p(0,T;L^q(\Omega))}\\
\leq &cT^\frac{1}{p'}M\left(\|\vartheta^1-\vartheta^2\|_{W^{1,p}(0,T;W^{1,q}(\Omega))}+\|\partial_t (w^1-w^2)\|_{L^p(0,T;L^q(\Omega))}\right)
\end{split}
\end{equation}
and
\begin{equation}\label{PrDiff3Est}
\begin{split}
&\|(\pi'(\rho_*+\theta_0+\vartheta^1)-\pi'(\rho_*+\theta_0))\nabla(\theta_0+\vartheta^1)-(\pi'(\rho_*+\theta_0+\vartheta^2)-\pi'(\rho_*+\theta_0))\nabla(\theta_0+\vartheta^2)\|_{L^p(0,T;L^q(\Omega))}\\
&\leq \Pi\|\vartheta^1-\vartheta^2\|_{L^\infty(0,T;L^\infty(\Omega))}\|\theta_0+\vartheta^1\|_{L^p(0,T;W^{1,q}(\Omega))}+2\Pi\|\vartheta^1-\vartheta^2\|_{L^p(0,T;W^{1,q}(\Omega))}
\\
&\leq c\Pi T^\frac{1}{p'}(L+M+1)\|\vartheta^1-\vartheta^2\|_{W^{1,p}(0,T;W^{1,q}(\Omega))}.
\end{split}
\end{equation}
Applying \eqref{ACoefDifEst} and \eqref{EmbIneq} we have
\begin{equation}\label{LTermDiffEst}
\begin{split}
\|&\dvr(\mathcal S(\mathbb{D}Dev w^1)-\mathcal S(\mathbb{D}Dev w^2))-\mathcal A(\mathbb Du_0)(\mathbb D(w^1-w^2))\|_{L^p(0,T;L^q(\Omega))}\\
=&\|\mathcal{A}(\mathbb Dw^1)(\mathbb Dw^1)-\mathcal A(\mathbb Dw^2)(\mathbb Dw^2)-\mathcal A(\mathbb Du_0)(\mathbb D(w^1-w^2))\|_{L^p(0,T;L^q(\Omega))}\\
\leq& \|\mathcal{A}(\mathbb Dw^1)(\mathbb D(w^1-w^2))-\mathcal A(\mathbb Du_0)(\mathbb D(w^1-w^2))\|_{L^p(0,T;L^q(\Omega))}\\
&+\|\mathcal{A}(\mathbb Dw^1)(\mathbb Dw^2)-\mathcal A(\mathbb Dw^2)(\mathbb Dw^2) \|_{L^p(0,T;L^q(\Omega))}\\
\leq&\sum_{j,k,l,m=1}^d\left(\|a_{jk}^{lm}(\mathbb Dw^1)-a_{jk}^{lm}(\mathbb Du_0)\|_{L^\infty(0,T;L^\infty(\Omega))}\|\partial_k\partial_l(w^1-w^2)\|_{L^p(0,T;L^q(\Omega))}\right.\\
&\left.+\|a_{jk}^{lm}(\mathbb Dw^1)-a_{jk}^{lm}(\mathbb Dw^2)\|_{L^\infty(0,T;L^\infty(\Omega))}\|\partial_k\partial_l w^2\|_{L^p(0,T;L^q(\Omega))}\right)\\
\leq &cK(1+\|w^1\|_{L^\infty(0,T;W^{1,\infty}(\Omega))}+\|u_0\|_{W^{1,\infty}(\Omega)}+\|w^1\|^2_{L^\infty(0,T;W^{1,\infty}(\Omega))}+\|u_0\|^2_{W^{1,\infty}(\Omega)})\\
&\times\|w^1-u_0\|_{L^\infty(0,T;W^{1,\infty}(\Omega))}\|w^1-w^2\|_{L^p(0,T;W^{2,q}(\Omega))}\\
&+cK(1+\|w^1\|_{L^\infty(0,T;W^{1,\infty}(\Omega))}+\|w^2\|_{L^\infty(0,T;W^{1,\infty}(\Omega))}+\|w^1\|^2_{L^\infty(0,T;W^{1,\infty}(\Omega))}+\|w^2\|^2_{L^\infty(0,T;W^{1,\infty}(\Omega))})\\
&\times\|w^1-w^2\|_{L^\infty(0,T;W^{1,\infty}(\Omega))}\|w^2\|_{L^p(0,T;W^{2,q}(\Omega))}\\
\leq& cK(1+(M+L)+L+(M+L)^2+L^2)M\|w^1-w^2\|_{\mathcal V^{p,q}(Q_T)}\\
&+cK(M+L)(1+(M+L)+(M+L)^2)\|w^1-w^2\|_{\mathcal V^{p,q}(Q_T)}.
\end{split}
\end{equation}
Taking into account \eqref{RhsGDiff1}, \eqref{RhsGDiff2}, \eqref{PrDiffEst1}, \eqref{TrSEDiffEst}, \eqref{AuxSum}, \eqref{TDerDiffEst},
\eqref{PrDiff3Est} and \eqref{LTermDiffEst} we conclude
\begin{equation*}
\begin{split}
&\|\mathscr G(\vartheta^1,w^1)-\mathscr G(\vartheta^2,w^2)\|_{L^p(0,T;W^{1,q}(\Omega))}+\|\mathscr F(\vartheta^1,w^1)-\mathscr F(\vartheta^2,w^2)\|_{L^p(0,T;L^q(\Omega))}\\
&\leq [\vartheta^1-\vartheta^2,w^1-w^2]_{T,M}c\left(\sum_{j=1}^{Q_1} K^{k_j}L^{l_j}M^{m_j}\Pi^{|k_j-1|}T^{\beta_j}+\sum_{j=1}^{Q_1} K^{n_j}L^{o_j}M^{p_j}\right),
\end{split}
\end{equation*}
where $l_j,m_j,o_j\in\mathbf{N}\cup\{0\}$, $\beta_j>0$, $p_j\in\mathbf{N}\setminus\{1\}$ and $k_j,n_j\in\{0,1\}$.
As before, $T$ and $M$ can be chosen such that the sum in the bracket on the right hand side is smaller than any prescribed $c>0$. Consequently, $T$ and $M$ can be chosen such that $\Phi$ is a contraction according to Theorem \ref{thm:regularita}, and the Banach fixed-point theorem then yields the validity of Theorem \ref{thm.main}.
It only remains to prove Theorem \ref{thm:regularita}. This is the content of the next section.
\section{$L^p$ regularity of the linearized system}
\label{Lpreg}
The main aim of this section is to prove the following regularity theorem for \eqref{eq:linearized}. For simplicity, we denote $u = \tilde u - u_0$ throughout this section.
\begin{Theorem}\label{thm:regularita} Let $\theta_0,\ \nabla u_0\in BUC(\Omega)$\footnote{Here $BUC$ denotes the space of bounded and uniformly continuous functions. Note that $W^{1,q}(\Omega)\hookrightarrow BUC(\Omega)$ whenever $q>d$.} and let $\varrho_*+\theta_0\in L^\infty(\Omega)$ and $(\varrho_* + \theta_0)^{-1}\in L^\infty(\Omega)$. Then for every $(\mathscr G, \mathscr F)\in L^p(0,T;W^{1,q}(\Omega))\times L^p(0,T;L^q(\Omega))$ the solution $(\theta, u)$ to \eqref{eq:linearized} satisfies
\begin{equation*}
\| u\|_{\mathcal V^{p,q}(Q_T)} + \|\theta\|_{W^{1,p}(0,T;W^{1,q}(\Omega))}
\leq c\left(\|\mathscr G\|_{L^p(0,T;W^{1,q}(\Omega))} + \|\mathscr F\|_{L^p(0,T;L^q(\Omega))}\right)
\end{equation*}
where $c$ is a constant independent of $\theta,\ \tilde u$ and the right hand side.
\end{Theorem}
In order to reach this goal we use a semigroup approach, which may be found, e.g., in \cite{DeHiPr} or in \cite{EnSh}. In particular, equation \eqref{eq:linearized} can be rewritten as
\begin{equation}\label{eq:A.operator}
\partial_t \left(\begin{matrix} \theta\\ u\end{matrix}\right) - \mathscr A \left(\begin{matrix}\theta \\ u \end{matrix}\right) = \left(\begin{matrix} \mathscr G\\ \mathscr F\end{matrix}\right)
\end{equation}
for $\mathscr A$ defined as
$$
\mathscr A\left(\begin{matrix}\theta\\ u\end{matrix}\right) = \left(\begin{matrix} - (\varrho_* + \theta_0)\dvr u \\ \frac 1{\varrho_*+\theta_0}\mathcal A(\mathbb D u_0)(\mathbb D u) - \frac 1 {\varrho_* + \theta_0} \pi'(\varrho_* + \theta_0)\nabla \theta\end{matrix}\right).$$
The equation \eqref{eq:A.operator} is equipped with zero initial conditions, i.e. $\theta(0,\cdot) \equiv 0$ and $u(0,\cdot) \equiv 0$.
Note that here we write $\mathscr F$ instead of $\frac 1{\varrho_*+\theta_0}\mathscr F$. We can afford this inaccuracy since $\frac 1{\varrho_* + \theta_0}$ and $\varrho_* + \theta_0$ are bounded in $L^\infty$ in all applications and thus
\begin{equation*}
\frac1{\varrho_* + \theta_0}\mathscr F \in L^p(0,T;L^q(\Omega)) \Leftrightarrow \mathscr F \in L^p(0,T;L^q(\Omega)).
\end{equation*}
The resolvent problem for this equation reads
\begin{equation}\label{eq:lin.operator}
\lambda\left(\begin{matrix} \theta\\ u\end{matrix}\right) - \mathscr A \left(\begin{matrix}\theta\\ u\end{matrix}\right) = \left(\begin{matrix}\mathscr G\\\mathscr F\end{matrix}\right).
\end{equation}
First, we recall the definition of $\mathcal R-$boundedness (see \cite[Definition 2.1]{EnSh}):
\begin{Definition}
A family $\mathcal T\subset \mathcal L(X,Y)$ (i.e., a family of bounded linear operators from $X$ to $Y$) is called $\mathcal R-$bounded if there exist constants $C>0$ and $p\in [1,\infty)$ such that for each $n\in \mathbb N$, $T_j\in \mathcal T$, $f_j\in X$ ($j\in\{1,\ldots, n\}$) and for all sequences $\{r_j(v)\}_{j=1}^n$ of independent, symmetric, $\{-1,1\}$-valued random variables on $[0,1]$ there holds
$$
\int_0^1 \left\|\sum_{j=1}^n r_j(v) T_j f_j\right\|_Y^p\ {\rm d}v \leq C\int_0^1 \left\|\sum_{j=1}^n r_j(v) f_j \right\|_X^p\ {\rm d}v.
$$
The smallest such $C$ is called the $\mathcal R-$bound and is denoted by $\mathcal R_{\mathcal L(X,Y)} (\mathcal T)$.
\end{Definition}
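As a simple illustration (well known, and not needed in the sequel): if $X$ and $Y$ are Hilbert spaces and $p=2$, the random variables $r_j$ are orthonormal in $L^2(0,1)$, whence
\begin{equation*}
\int_0^1 \left\|\sum_{j=1}^n r_j(v) T_j f_j\right\|_Y^2\ {\rm d}v = \sum_{j=1}^n \|T_j f_j\|_Y^2\leq \sup_{T\in\mathcal T}\|T\|^2_{\mathcal L(X,Y)}\int_0^1 \left\|\sum_{j=1}^n r_j(v) f_j\right\|_X^2\ {\rm d}v,
\end{equation*}
so in this case $\mathcal R-$boundedness coincides with uniform boundedness in the operator norm; in general Banach spaces it is a strictly stronger property.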
We aim to prove the $\mathcal R$--sectoriality of the operator $\mathscr A$, i.e., to show that $\mathscr R_\lambda = \lambda(\lambda \mathbb I_d - \mathscr A)^{-1}$ is $\mathcal R-$bounded for all $\lambda\in \Sigma_{\beta,\nu}$ for some $\beta\in (0,\frac \pi 2)$ and $\nu>0$. Here
$$
\Sigma_{\beta,\nu} = \{\mu\in \mathbb C:\ |{\rm arg}(\mu-\nu)|\leq \pi-\beta\}.
$$
In the last part of this section we show that this $\mathcal R-$boundedness is sufficient for the $L^p$-$L^q$ regularity. To reach this goal, we use the following theorem, which is due to Weis.
\begin{Theorem} Let $X$ and $Y$ be two $UMD$ spaces and $p\in (1,\infty)$. Let $M$ be a function in $C^{1}(\mathbb R\setminus \{0\},\mathcal L(X,Y))$ such that\label{thm.weis}
\begin{equation*}
\begin{split}
&\mathcal R_{\mathcal L(X,Y)} \left(\{M(\tau),\ \tau \in \mathbb R\setminus \{0\}\}\right) =\kappa_0<\infty\\
& \mathcal R_{\mathcal L(X,Y)}\left(\{\tau M'(\tau),\ \tau\in \mathbb R\setminus\{0\} \}\right) = \kappa_1<\infty.
\end{split}
\end{equation*}
Then the operator $T_M$ defined as $T_M\Phi = \mathcal F^{-1}[M\mathcal F(\Phi)]$ extends to a bounded linear operator from $L^p(\mathbb R,X)$ into $L^p(\mathbb R,Y)$. Moreover,
\begin{equation*}
\|T_M\|_{\mathcal L(L^p(\mathbb R,X),L^p(\mathbb R,Y))} \leq c (\kappa_0 + \kappa_1)
\end{equation*}
where $c>0$ is independent of $p,\ X,\ Y,\ \kappa_0$ and $\kappa_1$.
\end{Theorem}
Here $UMD$ stands for spaces with the unconditional martingale difference property. Their properties may be found in \cite{burkholder} or in \cite{francia}. For our purposes it is enough to know that $L^p$ and $W^{k,p}$ are $UMD$ spaces once $p\in (1,\infty).$
Since we assume $\lambda\neq 0$ in \eqref{eq:lin.operator}, we deduce
\begin{equation*}
\theta = \frac 1\lambda \left(\mathscr G - (\varrho_* + \theta_0) \dvr u\right).
\end{equation*}
Consequently, we may rewrite the resolvent problem as
\begin{equation}\label{eq:hvezdicka}
\lambda u - \mathcal B_\lambda (u) = F_\lambda\qquad \mbox{ on }\Omega
\end{equation}
where
\begin{equation*}
\begin{split}
&\mathcal B_\lambda(u) = \frac 1{\varrho_*+\theta_0} \mathcal A(\mathbb D u_0)(\mathbb D u) + \pi'(\varrho_* + \theta_0) \frac 1\lambda \nabla\dvr u\\
&F_\lambda = T_\lambda\left(\begin{matrix}\mathscr G\\ \mathscr F\end{matrix}\right) := \mathscr F - \frac 1\lambda \nabla \mathscr G.
\end{split}
\end{equation*}
We claim the following.
\begin{Theorem}
\label{thm.non-constant} Let $\mathcal U_\lambda:L^q(\Omega)\mapsto W^{2,q}(\Omega)$ be a solution operator to \eqref{eq:hvezdicka}. Then
\begin{equation*}
\begin{split}
\mathcal R_{L^q(\Omega)} \left(\left\{\lambda \mathcal U_\lambda, \lambda \in \Sigma_{\beta,\nu}\right\}\right) &\leq c\\
\mathcal R_{L^q(\Omega)} \left(\left\{|\lambda|^{1/2} \partial_j\mathcal U_\lambda, \lambda \in \Sigma_{\beta,\nu}\right\}\right) &\leq c\\
\mathcal R_{L^q(\Omega)} \left(\left\{\partial_j\partial_k \mathcal U_\lambda, \lambda \in \Sigma_{\beta,\nu}\right\}\right) &\leq c
\end{split}
\end{equation*}
for every $j, k \in \{1,\ldots,d\}$, every $q\in (1,\infty)$ and for some $\beta\in (0,\pi/2)$ and $\nu>0$.
\end{Theorem}
Note that the $\mathcal R-$boundedness of $\mathscr R_\lambda$ is a consequence of this Theorem and Proposition \ref{pro.2.16.ensh}. Indeed, recall that $\mathscr R_\lambda$ has two components. In particular, $\mathscr R_{\lambda 1} \left(\begin{matrix}\mathscr G\\ \mathscr F\end{matrix}\right) = \lambda\theta$ and $\mathscr R_{\lambda 2} \left(\begin{matrix} \mathscr G\\ \mathscr F\end{matrix}\right) = \lambda u$. Both of them can be written as a composition and sum of certain operators, namely
\begin{equation*}
\begin{split}
\mathscr R_{\lambda 1} \left(\begin{matrix} \mathscr G\\ \mathscr F\end{matrix}\right) &= \left( -(\varrho_* + \theta_0) \dvr \mathcal U_\lambda\circ T_\lambda\left(\begin{matrix}\mathscr G\\\mathscr F\end{matrix}\right) + \mathscr G\right)\\
\mathscr R_{\lambda 2} \left(\begin{matrix}\mathscr G\\ \mathscr F\end{matrix}\right) & = \left(\lambda \mathcal U_\lambda \circ T_\lambda \right) \left(\begin{matrix} \mathscr G\\ \mathscr F\end{matrix}\right).
\end{split}
\end{equation*}
Proposition \ref{pro.2.16.ensh} then yields the claim.
In order to show Theorem \ref{thm.non-constant} we first prove its version with constant coefficients.
\begin{Theorem}\label{Thm:ConstCase}
Let $\mathscr B_\lambda(u):W^{2,q}(\mathbb R^d)\mapsto L^{q}(\mathbb R^d)$ be an operator defined as
\begin{equation*}
(\mathscr B_\lambda(u))_j = \frac 1{\gamma_1} \sum_{k,l,m = 1}^da_{jk}^{lm} (|D|) \partial_l\partial_m u_k + \frac{\gamma_2}{\lambda\gamma_1}\sum_{l=1}^d\partial_j\partial_l u_l
\end{equation*}
where $a$ is defined by \eqref{eq:ACoef} and $D\in \mathbb R^{d\times d}_{{\rm sym}}$, $\gamma_1,\ \gamma_2\in \mathbb R^+$ are arbitrary. Then the solution operator $\mathscr U_\lambda: L^q(\mathbb R^d)\mapsto W^{2,q}(\mathbb R^d)$, $\mathscr U_\lambda: f\mapsto u$ where $u$ solves
\begin{equation}\label{eq:const.coef}
\lambda u - \mathscr B_\lambda u = f
\end{equation}
in $\mathbb R^d$ fulfills
\begin{equation*}
\begin{split}
\mathcal R_{L^q(\mathbb R^d)} \left(\left\{\lambda \mathscr U_\lambda, \lambda \in \Sigma_{\beta,\nu}\right\}\right) &\leq c\\
\mathcal R_{L^q(\mathbb R^d)} \left(\left\{|\lambda|^{1/2} \partial_j\mathscr U_\lambda, \lambda \in \Sigma_{\beta,\nu}\right\}\right) &\leq c\\
\mathcal R_{L^q(\mathbb R^d)} \left(\left\{\partial_j\partial_k \mathscr U_\lambda, \lambda \in \Sigma_{\beta,\nu}\right\}\right) &\leq c
\end{split}
\end{equation*}
for every $j, k \in \{1,\ldots,d\}$, every $q\in (1,\infty)$ and for some $\beta\in (0,\pi/2)$ and $\nu>0$.
\end{Theorem}
\begin{proof}
We apply the Fourier transform to rewrite \eqref{eq:const.coef} as
\begin{equation*}
\lambda \hat u_m + \frac 1{\gamma_1} \sum_{j,k,l=1}^da_{mj}^{kl} (|D|) \xi_k\xi_l \hat u_j + \frac{\gamma_2}{\lambda\gamma_1} \xi_m\sum_{l=1}^d\xi_l \hat u_l = \hat f_m
\end{equation*}
where
\begin{equation*}
\hat u(\xi) := \mathcal F(u)(\xi) = \int_{\mathbb R^d} e^{-i\xi\cdot\eta} u(\eta) \ {\rm d}\eta.
\end{equation*}
Set
\begin{equation*}
\begin{split}
E(\xi)_{mj} &= \sum_{k,l=1}^d\frac 1{\gamma_1} a_{mj}^{kl} (|D|) \xi_k\xi_l\\
E(\xi, \lambda)_{ml} &= \frac{\gamma_2}{\lambda \gamma_1} \xi_m\xi_l.
\end{split}
\end{equation*}
Naturally,
\begin{equation*}
u = \mathcal F^{-1} \left((\lambda \mathbb I_d + E(\xi) + E(\lambda,\xi))^{-1}\hat f\right)
\end{equation*}
assuming $(\lambda \mathbb I_d + E(\xi) + E(\lambda,\xi))^{-1}$ exists -- this is discussed below. Further,
\begin{equation}\label{eq:symboly}
\begin{split}
\lambda u &= \mathcal F^{-1}\left(\lambda (\lambda \mathbb I_d + E(\xi) + E(\lambda,\xi))^{-1}\hat f\right)\\
|\lambda|^{\frac 12} \partial_j u & = \mathcal F^{-1} \left(|\lambda|^{\frac 12} i \xi_j(\lambda \mathbb I_d + E(\xi) + E(\lambda,\xi))^{-1}\hat f\right)\\
\partial_j\partial_k u & = \mathcal F^{-1} \left( - \xi_j\xi_k(\lambda \mathbb I_d + E(\xi) + E(\lambda,\xi))^{-1}\hat f\right).
\end{split}
\end{equation}
According to Theorem \ref{thm.3.3.ensh} it suffices to show that the multipliers appearing in \eqref{eq:symboly} satisfy
\begin{equation}\label{eq:multipl}
|\partial_\xi^\alpha m(\lambda,\xi)|\leq c_\alpha |\xi|^{-|\alpha|}
\end{equation}
for all multi-indices $\alpha \in\mathbb N_0^d$ and for all $\lambda \in \Sigma_{\beta,\nu}$ where $\beta$ and $\nu$ are appropriately chosen. \\
First, consider a matrix $\lambda \mathbb I_d + E(\xi)$. Since $E$ is elliptic and symmetric (see \eqref{StrongEll} and \eqref{eq:a.symmetry}) there is an orthogonal $\xi-$dependent matrix $O$ such that
\begin{equation*}
E(\xi) = O \left(\begin{matrix}\mu_1 & 0 & 0\\ 0& \mu_2 & 0\\ 0 & 0& \mu_3 \end{matrix}\right) O^T.
\end{equation*}
We restrict ourselves here to the case of $3\times 3$--matrices for clarity of the presentation.
We consider without loss of generality $\mu_3\geq\mu_2\geq \mu_1 \geq c |\xi|^2$ (here $c$ in fact depends on $D$; however, we assume in applications that $D$ is always bounded by some constant depending on $u_0$). Consequently,
\begin{equation*}
\lambda \mathbb I_d + E(\xi) = O \left(\begin{matrix} \lambda + \mu_1 & 0 & 0\\ 0 & \lambda + \mu_2 & 0 \\ 0 & 0 & \lambda + \mu_3\end{matrix}\right) O^T
\end{equation*}
and there exists $(\lambda \mathbb I_d + E(\xi))^{-1}$ which may be written as
\begin{equation*}
(\lambda \mathbb I_d + E(\xi))^{-1} = O \left(\begin{matrix} \frac 1{\lambda + \mu_1} & 0 & 0\\ 0 & \frac1{\lambda + \mu_2} & 0 \\ 0 & 0 & \frac{1}{\lambda + \mu_3}\end{matrix}\right) O^T.
\end{equation*}
We use Lemma \ref{lem.3.1.ensh} and the orthogonality of $O$ to claim
\begin{equation*}
|(\lambda \mathbb I_d + E(\xi))^{-1}|\leq \frac{c(\nu)}{|\lambda| + |\xi|^2}
\end{equation*}
in an operator norm. Further
\begin{equation*}
(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1} = (\lambda \mathbb I_d + E(\xi))^{-1}(\mathbb I_d + (\lambda \mathbb I_d + E(\xi))^{-1} E(\xi,\lambda))^{-1}
\end{equation*}
and, from definition,
\begin{equation*}
|(\lambda \mathbb I_d + E(\xi))^{-1} E(\xi,\lambda)|\leq \frac{\gamma_2}{|\lambda|} \frac{c(\nu) |\xi|^2}{|\lambda| + c|\xi|^2}\leq \frac12
\end{equation*}
assuming $|\lambda|$ is sufficiently large. This can be ensured by a proper choice of $\nu$ and $\beta$. For this choice, $\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda)$ is invertible and
\begin{equation*}
\left|(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1}\right|\leq \frac{c(\nu)}{|\lambda| + |\xi|^2}.
\end{equation*}
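To illustrate the derivative bounds that follow, consider the first-order case. Writing $M(\lambda,\xi) := \lambda\mathbb I_d + E(\xi) + E(\xi,\lambda)$, one has
\begin{equation*}
\partial_{\xi_j} M(\lambda,\xi)^{-1} = -M(\lambda,\xi)^{-1}\left(\partial_{\xi_j}(E(\xi)+E(\xi,\lambda))\right)M(\lambda,\xi)^{-1}.
\end{equation*}
Since $E(\xi)$ and $E(\xi,\lambda)$ are quadratic in $\xi$, and the factor $\frac{\gamma_2}{\lambda\gamma_1}$ is bounded on $\Sigma_{\beta,\nu}$ after the above choice of $\nu$, we get $|\partial_{\xi_j}(E(\xi)+E(\xi,\lambda))|\leq c|\xi|$ and therefore
\begin{equation*}
\left|\partial_{\xi_j} M(\lambda,\xi)^{-1}\right|\leq c|\xi|\left(\frac{c(\nu)}{|\lambda|+|\xi|^2}\right)^2,
\end{equation*}
which is the case $|\alpha|=1$ of the estimate proved in the next lemma.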
To proceed further, we prove the following lemma:
\begin{Lemma}
Let $|\lambda|>\lambda_0>0$. There exist coefficients $a_{j,n}$ depending on $\lambda_0$ such that
\begin{equation}\label{eq:odhad.na.derivaci}
\left|\partial^\alpha_\xi (\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1}\right| \leq \sum_{j=0}^{\left\lfloor \frac n2 \right\rfloor} a_{j,n} |\xi|^{n-2j} \left(\frac{c(\nu)}{|\lambda| + |\xi|^2}\right)^{1+n-j}
\end{equation}
for all multi-indices $\alpha$ with $|\alpha| = n$.
\end{Lemma}
\begin{proof}
During this proof, we assume that all matrix multiplications appearing in the proof are commutative. This allows us to simplify the notation and does not affect the validity of the main claim. Using this assumption, we are going to prove
\begin{equation}\label{eq:indukce}
\partial^\alpha_\xi(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1} = \sum_{j=0}^{\left\lfloor \frac n2\right\rfloor} a_{j,n} \left(E'(\xi) + E'(\xi,\lambda)\right)^{n-2j} \left(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda)\right)^{-1-n+j}.
\end{equation}
Here we also use another simplification: $E'(\xi)$ denotes an arbitrary first partial derivative, and the bracket $\left(E'(\xi) + E'(\xi,\lambda)\right)^{n-2j}$ should be read as the product of $n-2j$ first derivatives, not necessarily with respect to the same coordinate. We also assume that $a_{j,n}$ is independent of $\lambda$ as long as $|\lambda|$ stays away from zero.
Clearly, \eqref{eq:odhad.na.derivaci} follows directly from \eqref{eq:indukce}. We prove \eqref{eq:indukce} by induction. Let $n=0$; then
\begin{equation*}
\partial^0 (\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1} = (\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1}.
\end{equation*}
Now let \eqref{eq:indukce} hold for a certain $n\in \mathbb N$ and let $\alpha$ be a multi-index of length $|\alpha| = n+1$. Then
\begin{equation*}
\partial^\alpha_\xi(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1} = \left(\partial^{\alpha'}_\xi(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1}\right)'
\end{equation*}
for a certain multi-index $\alpha'$ with $|\alpha'| = n$. Since $E(\xi)$ and $E(\xi,\lambda)$ are second order polynomials in $\xi$, $E''(\xi)$ and $E''(\xi,\lambda)$ are constants which may be absorbed into $a_{j,n}$. We apply one partial derivative to the right hand side of \eqref{eq:indukce}. For $j\neq \frac n2$ we get (throughout this proof we use the letter $a$ to denote a constant which may vary from line to line but possesses the same dependencies as $a_{j,n}$)
\begin{multline*}
\left(\left(E'(\xi) + E'(\xi,\lambda)\right)^{n-2j} \left(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda)\right)^{-1-n+j}\right)' =\\
a\left(E'(\xi) + E'(\xi,\lambda)\right)^{n-2j-1}\left(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda)\right)^{-1-n+j}\\
+ a \left(E'(\xi) + E'(\xi,\lambda)\right)^{n-2j+1} \left(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda)\right)^{-2-n+j}.
\end{multline*}
Both terms can be found on the right hand side of relation \eqref{eq:indukce} assuming $|\alpha|= n+1$.
Now let $j=\frac n2$. Then
\begin{multline*}
\left((E'(\xi) + E'(\xi,\lambda))^0 \left(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda)\right)^{-1-\frac n2}\right)'\\
= \left(E'(\xi) + E'(\xi,\lambda)\right) \left(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda)\right)^{-2-\frac n2}\\
= \left(E'(\xi) + E'(\xi,\lambda)\right) \left(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda)\right)^{-1-(n+1) + \left\lfloor \frac{n+1}{2}\right\rfloor}
\end{multline*}
and this term is also on the right hand side of \eqref{eq:indukce} assuming $|\alpha|= n+1$.
\end{proof}
We use this lemma to infer
\begin{equation*}
|\partial^\alpha_\xi (\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1}|\leq c_\alpha\left(|\lambda|^{1/2} + |\xi|\right)^{-2-|\alpha|}.
\end{equation*}
An easy calculation then yields the validity of \eqref{eq:multipl}.
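For instance, for the second-order symbol $m(\lambda,\xi) = -\xi_j\xi_k(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1}$ and $\alpha=0$ this gives
\begin{equation*}
|m(\lambda,\xi)|\leq c|\xi|^2\left(|\lambda|^{1/2}+|\xi|\right)^{-2}\leq c,
\end{equation*}
while for $|\alpha|\geq 1$ the Leibniz rule distributes the derivatives between the polynomial prefactor (each derivative lowering its degree by one) and the inverse (each derivative contributing, by the above bound, a factor $(|\lambda|^{1/2}+|\xi|)^{-1}$), so that every resulting term is bounded by $c|\xi|^{-|\alpha|}$. The multipliers $\lambda(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1}$ and $|\lambda|^{\frac12}i\xi_j(\lambda \mathbb I_d + E(\xi) + E(\xi,\lambda))^{-1}$ are treated in the same way.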
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm.non-constant}] Recall that, by a standard partition of unity argument, for all $\varepsilon>0$ there is a covering $\{B_j\}_{j=1}^\infty$ of the torus $\Omega$ such that
\begin{itemize}
\item $\bigcup_{j=1}^\infty B_j \supset \Omega$.
\item There are functions $\zeta_j$ and $\tilde \zeta_j$ such that $\operatorname{supp} \zeta_j \subset B_j$, $\tilde \zeta_j = 1$ on $\operatorname{supp} \zeta_j$, $\operatorname{supp} \tilde \zeta_j\subset 2B_j$, $\|\zeta_j\|_{W^{2,\infty}} + \|\tilde\zeta_j\|_{W^{2,\infty}} \leq c$ with $c$ independent of $j$, and $\sum_{j=1}^\infty \zeta_j = 1$ on $\Omega$.
\item Since $\nabla u_0, \theta_0\in BUC$, we may assume the following property: for every $j\in \mathbb N$
\begin{equation*}
|\mathbb Du_0(x) - \mathbb Du_0(x_j)| + |\theta_0(x) - \theta_0(x_j)| <\varepsilon
\end{equation*}
for all $x\in B_j$. Here $x_j$ is the center of a ball $B_j$.
\end{itemize}
We emphasize that $\varepsilon>0$ will be chosen later.
For every $j$ we solve an equation
\begin{multline}\label{eq:lokalni}
\lambda u_j - \frac1{\varrho_* + \theta_0(x_j)} \mathcal A(\mathbb D u_0(x_j))(\mathbb D u_j) - \pi'(\varrho_* + \theta_0(x_j))\frac 1\lambda \nabla \dvr u_j \\
= \tilde \zeta_j\left(F + \left(\frac 1{\varrho_* + \theta_0(x)} \mathcal A(\mathbb D u_0(x))(\mathbb D u_j) - \frac 1{\varrho_* + \theta_0(x_j)} \mathcal A(\mathbb D u_0(x_j))(\mathbb D u_j) \right)\right. \\
+\left.\left(\pi'(\varrho_* + \theta_0(x)) - \pi'(\varrho_* + \theta_0(x_j))\right)\frac 1\lambda \nabla \dvr u_j\right).
\end{multline}
Since $\mathbb D u_0$ and $\theta_0$ are bounded and uniformly continuous functions, \eqref{eq:lokalni} can be written as
\begin{equation}\label{eq:lokal.operator}
\lambda u_j - \mathscr B_\lambda u_j = \tilde \zeta_j F + \tilde\zeta_j \mathscr P_{\lambda j} u_j
\end{equation}
where $\mathscr P_{\lambda j}:W^{2,q}(\mathbb R^d)\mapsto L^q(\mathbb R^d)$ is a linear operator bounded independently of $j$. Note also that $\mathscr B_\lambda + \mathscr P_{\lambda j} = \mathcal B_\lambda$ on $B_j$.
We claim that $\lambda(\lambda \mathbb I_d - \mathscr B_\lambda - \mathscr P_{\lambda j})^{-1}$, $|\lambda|^{\frac 12} \partial_k(\lambda \mathbb I_d - \mathscr B_\lambda - \mathscr P_{\lambda j})^{-1}$ and $\partial_k\partial_l(\lambda \mathbb I_d - \mathscr B_\lambda - \mathscr P_{\lambda j})^{-1}$ are $\mathcal R-$bounded on $\Sigma_{\beta,\nu}$ assuming $\varepsilon$ is small enough. To justify this claim it is enough to repeat the proof of \cite[Proposition 4.2]{DeHiPr}. First, recall that $\mathscr P_{\lambda j}$ is $\mathcal R-$bounded on $\Sigma_{\beta,\nu}$ by $c\varepsilon$ according to Proposition \ref{pro.2.13.ensh}. The same proposition also yields that $(\lambda \mathbb I_d - \mathscr B_\lambda)^{-1}$ is $\mathcal R-$bounded, since $\lambda(\lambda\mathbb I_d-\mathscr B_\lambda)^{-1}$ is $\mathcal R-$bounded on $\Sigma_{\beta,\nu}$ by Theorem~\ref{Thm:ConstCase}. We have
\begin{multline*}
\lambda(\lambda \mathbb I_d -\mathscr B_\lambda - \mathscr P_{\lambda j})^{-1} = \lambda (\lambda \mathbb I_d - \mathscr B_\lambda)^{-1} (\mathbb I_d - \mathscr P_{\lambda j}(\lambda \mathbb I_d - \mathscr B_{\lambda})^{-1})^{-1}\\
= \lambda (\lambda \mathbb I_d - \mathscr B_\lambda)^{-1} \sum_{n=0}^\infty (\mathscr P_{\lambda j} (\lambda\mathbb I_d - \mathscr B_\lambda)^{-1})^n.
\end{multline*}
We use Proposition \ref{pro.2.16.ensh} to deduce
\begin{equation*}
\mathcal R\left\{\lambda (\lambda \mathbb I_d - \mathscr B_\lambda)^{-1} ( \mathscr P_{\lambda j} (\lambda\mathbb I_d - \mathscr B_\lambda)^{-1})^n\right\} \leq \mathcal R\left\{ \lambda (\lambda \mathbb I_d - \mathscr B_\lambda)^{-1}\right\}\left( c\varepsilon \mathcal R\left\{(\lambda \mathbb I_d - \mathscr B_\lambda)^{-1}\right\}\right)^{n}
\end{equation*}
Now it is enough to take $\varepsilon$ such that $\gamma:= c\varepsilon \mathcal R\left\{(\lambda \mathbb I_d - \mathscr B_\lambda)^{-1}\right\}$ is less than $1$. This allows us to conclude that
\begin{equation*}
\mathcal R\left\{\lambda(\lambda \mathbb I_d -\mathscr B_\lambda - \mathscr P_{\lambda j})^{-1}\right\}\leq \mathcal R\left\{\lambda(\lambda \mathbb I_d -\mathscr B_\lambda)^{-1}\right\}\frac 1{1-\gamma}.
\end{equation*}
The same bound can be deduced for the remaining two operators.
Let $\mathcal T_j(\lambda)$ be a solution operator to \eqref{eq:lokal.operator}. We define
\begin{equation*}
\mathcal W(\lambda) (f) = \sum_{j=1}^\infty \zeta_j \mathcal T_j(\lambda)(\tilde \zeta_j f).
\end{equation*}
Then $u = \mathcal W(\lambda) (f)$ fulfills
\begin{equation*}
\lambda u - \mathcal B_\lambda u = f - \mathscr K_\lambda(f)
\end{equation*}
where $\mathscr K_\lambda:L^q(\Omega)\mapsto L^q(\Omega)$ is defined as $\mathscr K_\lambda(f) = \sum_{j=1}^\infty\mathcal B_\lambda (\zeta_j u_j) - \zeta_j \mathcal B_\lambda (u_j)$ and $u_j = \mathcal T_j(\lambda)(\tilde\zeta_j f)$.
The operator $\mathscr K_\lambda$ may be written as
\begin{multline}\label{eq:definice.k}
\mathscr K_\lambda = \frac 1{\varrho_* + \theta_0}\sum_{j=1}^\infty\sum_{k,l,n=1}^d a_{mn}^{kl}(\mathbb D u_0) \left(\partial_k\partial_l \zeta_j u_{j,n} + \partial_l \zeta_j \partial_k u_{j,n} + \partial_k \zeta_j \partial_l u_{j,n}\right)\\
+ \sum_{j=1}^\infty\pi'(\varrho_* + \theta_0) \frac 1\lambda \left(\nabla \zeta_j \dvr u_j + \nabla^2 \zeta_j u_j + \nabla \zeta_j \nabla u_j\right)
\end{multline}
and $\mathscr K_\lambda$ is in particular $\mathcal R-$bounded on $\Sigma_{\beta,\nu'}$. Indeed, we show this boundedness for one term on the right-hand side of \eqref{eq:definice.k}, since the others can be treated in the same way.
\begin{multline*}
\int_0^1 \left\|\sum_{l=1}^n r_l(v) \left(\sum_{j=1}^\infty (\nabla \zeta_j)\dvr \mathcal T_j(\lambda_l)(\tilde\zeta_j f)\frac{\pi'(\varrho_* + \theta_0)}{\lambda}\right) \right\|^q_{L^q(\Omega)}\ {\rm d}v\\
\leq c \sum_{j=1}^\infty \int_0^1 \left\|\sum_{l=1}^n r_l(v)(\nabla \zeta_j) \dvr \mathcal T_j(\lambda_l)(\tilde\zeta_j f)\right\|^q_{L^q(B_j\cap \Omega)}\ {\rm d}v\\
\leq c \sum_{j=1}^\infty \int_0^1 \left\|\sum_{l=1}^n r_l(v)|\lambda_l|^{-1/2} |\lambda_l|^{1/2} \dvr \mathcal T_j(\lambda_l)(\tilde\zeta_j f)\right\|^q_{L^q(B_j\cap \Omega)}\ {\rm d}v\\
\leq c \nu'^{-1/2}\sum_{j=1}^\infty \int_0^1 \left\|\sum_{l=1}^n r_l(v) |\lambda_l|^{1/2} \dvr \mathcal T_j(\lambda_l)(\tilde\zeta_j f)\right\|^q_{L^q(B_j\cap \Omega)}\ {\rm d}v\\
\leq c \nu'^{-1/2}\sum_{j=1}^\infty \int_0^1 \left\|\sum_{l=1}^n r_l(v) f\right\|^q_{L^q(B_j\cap \Omega)}\ {\rm d}v.
\end{multline*}
Consequently, the $\mathcal R-$bound of $\mathscr K_\lambda$ is $c\nu'^{-1/2}$, and by a proper choice of $\nu'$ it is less than or equal to $\frac 12$. We deduce that
\begin{equation*}
\mathcal R_{\mathcal L(L^q)}\{(\mathbb I_d-\mathscr K_\lambda)^{-1}, \lambda \in \Sigma_{\beta,\nu'}\}\leq 2.
\end{equation*}
See also Proposition \ref{pro.2.16.ensh}.
We thus have $u = \mathcal W(\lambda)\circ (\mathbb I_d-\mathscr K_\lambda)^{-1} (f)$, and this solution operator fulfills all the required properties.
Note also that the solution to \eqref{eq:hvezdicka} is unique. This follows from the ellipticity of the operator $-\mathcal B_\lambda$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:regularita}]
Due to Theorem \ref{thm.non-constant} we have
\begin{equation}\label{LResRBound}
\lambda(\lambda \mathbb I_d - \mathscr A)^{-1}\ \mathcal R-\text{bounded for all }\lambda \in \Sigma_{\beta,\nu}.
\end{equation}
In particular, $\lambda(\lambda \mathbb I_d -\mathscr A)^{-1}$ is bounded, and by the Hille--Yosida theorem (see \cite[Theorem 3.1, Chapter 1]{viorel}) the operator $\mathscr A$ generates a semigroup of class $C^0$. We denote this semigroup by $\mathscr T$, i.e.
\begin{equation*}
\pat \mathscr T -\mathscr A\mathscr T = 0.
\end{equation*}
We set $U(t) = \int_0^t \mathscr T(t-s) F(s)\ {\rm d}s$, where $F = (\mathscr G, \mathscr F)$, and we immediately get
\begin{equation}\label{eq:trik.s.nu}
\|U\|_{L^p(0,T;(W^{1,q}(\Omega)\times L^q(\Omega)))} \leq c \|F\|_{L^p(0,T;W^{1,q}(\Omega)\times L^q(\Omega))}
\end{equation}
where $c$ depends also on the time interval $(0,T)$. Recall that $U$ has two components, which we denote by $\theta$ and $u$. Moreover, $U$ solves
\begin{equation*}
\pat U - \mathscr A U = F
\end{equation*}
which is another form of \eqref{eq:A.operator}.
Clearly, $U$ solves also
\begin{equation*}
\pat U + 2\nu U - \mathscr A U = F + 2\nu U.
\end{equation*}
We set $G = F + 2\nu U$ and we extend the definition of $G$ and $U$ in such a way that $U(t) = 0$, $G(t) = 0$ for all $t<0$.
We may write
\begin{equation*}
U(t) = \int_{-\infty}^\infty e^{(-2\nu\mathbb I_d + \mathscr A)(t-s)}\chi_{[0,\infty)}(t-s) G(s) \ {\rm d}s = \left(e^{(-2\nu\mathbb I_d +\mathscr A)(s)} \chi_{[0,\infty)}(s)\right)* \left(G(s)\right) (t).
\end{equation*}
By standard rules for the Fourier transform we get
\begin{equation*}
\widehat{\pat U} = i\xi \left((-i\xi-2\nu)\mathbb I_d + \mathscr A\right)^{-1} \widehat G
\end{equation*}
and $M(\xi)=i\xi\left((-i\xi-2\nu)\mathbb I_d+\mathscr A\right)^{-1}$ is the corresponding multiplier as mentioned in Theorem \ref{thm.weis}. Indeed, using \eqref{LResRBound}, Proposition \ref{pro.2.13.ensh} and Proposition \ref{pro.2.16.ensh} we infer that $M(\xi)$ and $\xi M'(\xi)$ are $\mathcal{R}$--bounded and Theorem \ref{thm.weis} implies
\begin{equation*}
\|\pat U\|_{L^p(0,T;W^{1,q}(\Omega)\times L^q(\Omega))} \leq c\|G\|_{L^p(0,T;W^{1,q}(\Omega)\times L^q(\Omega))} \leq c \|F\|_{L^p (0,T;(W^{1,q}(\Omega)\times L^q(\Omega)))}
\end{equation*}
having \eqref{eq:trik.s.nu} in mind. Consequently,
\begin{equation}\label{eq:prvni.vysledek}
\|\theta\|_{W^{1,p}(0,T;W^{1,q}(\Omega))} + \| u\|_{W^{1,p}(0,T;L^q(\Omega))} \leq c\left(\|\mathscr G\|_{L^p(0,T;W^{1,q}(\Omega))} + \|\mathscr F\|_{L^p(0,T;W^{1,q}(\Omega))}\right).
\end{equation}
Note also that $\theta,u$ solve
\begin{equation*}
2\nu\left(\begin{matrix}\theta\\ u\end{matrix}\right) - \mathscr A \left(\begin{matrix} \theta\\ u \end{matrix}\right) = \left(\begin{matrix} \mathscr G \\ \mathscr F\end{matrix}\right) - \pat \left(\begin{matrix} \theta \\ u \end{matrix}\right) + 2\nu\left(\begin{matrix}\theta\\ u \end{matrix} \right).
\end{equation*}
Theorem \ref{thm.non-constant} yields that $\partial_j\partial_k \lambda^{-1}\mathcal R_{\lambda 2}$ is bounded on $\Sigma_{\beta,\nu}$ for every $j,k= 1,\ldots,d$, and thus we deduce (with the help of \eqref{eq:prvni.vysledek})
\begin{equation*}
\|u\|_{L^p(W^{2,q})}\leq c\left(\|\mathscr G\|_{L^p(W^{1,q})} + \|\mathscr F\|_{L^p(W^{1,q})}\right).
\end{equation*}
This completes the proof.
\end{proof}
\section{Appendix}
Here we present several results from other sources for the reader's convenience.
\begin{Theorem}\label{thm.3.3.ensh}
[Theorem 3.3 in \cite{EnSh}] Let $1<q<\infty$ and let $\Lambda\subset \mathbb C$. Let $m(\lambda,\xi)$ be a function defined on $\Lambda \times (\mathbb R^d\setminus \{0\})$ such that for any multi-index $\alpha \in \mathbb N^d_0$ there exists a constant $C_\alpha$ depending on $\alpha$ and $\Lambda$ such that
$$ |\partial^\alpha_\xi m(\lambda,\xi)|\leq C_\alpha |\xi|^{-|\alpha|}
$$
for any $(\lambda,\xi)\in \Lambda\times (\mathbb R^d\setminus \{0\})$. Let $K_\lambda$ be an operator defined by $K_\lambda f = \mathcal F^{-1}_\xi [m(\lambda,\xi)\hat f(\xi)]$. Then, the set $\{K_\lambda|\lambda\in \Lambda\}$ is $\mathcal R-$bounded on $\mathcal L(L^q(\mathbb R^d))$ and
$$
\mathcal R_{\mathcal L(L^q(\mathbb R^d))} (\{K_\lambda | \lambda\in\Lambda\}) \leq C_{q,d} \max_{|\alpha|\leq d} C_\alpha.
$$
\end{Theorem}
\begin{Lemma}
[Lemma 3.1 in \cite{EnSh}] \label{lem.3.1.ensh} Let $0<\beta<\pi/2$ and $\nu>0$. For any $\lambda\in \Sigma_{\beta,\nu}$ we have
$$
|\lambda + |\xi|^2| \geq \sin(\beta/2) (|\lambda| + |\xi|^2).
$$
\end{Lemma}
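As a sanity check, the elementary inequality in Lemma \ref{lem.3.1.ensh} can be verified numerically. The sketch below is our own illustration, not part of \cite{EnSh}; it assumes the standard sector $\Sigma_{\beta,\nu}=\{\lambda\in\mathbb C\setminus\{0\}\,:\,|\arg\lambda|\le \pi-\beta,\ |\lambda|\ge\nu\}$, and the lower bound $\nu$ plays no role in the inequality itself.

```python
import cmath
import math

def sector_bound_holds(beta, r, x, theta):
    # lambda = r * e^{i*theta} with |theta| <= pi - beta; x plays the role of |xi|^2 >= 0
    lam = r * cmath.exp(1j * theta)
    # small tolerance: equality is attained on the ray arg(lambda) = pi - beta when r = x
    return abs(lam + x) >= math.sin(beta / 2) * (r + x) - 1e-12

beta = math.pi / 4
rays = [k * (math.pi - beta) / 20 for k in range(21)]   # sweep arguments up to the sector boundary
assert all(sector_bound_holds(beta, r, x, theta)
           for theta in rays
           for r in (0.1, 1.0, 10.0)
           for x in (0.0, 0.5, 1.0, 10.0))
```

A short computation shows why this works: on the worst ray $\arg\lambda = \pi-\beta$ one has $|\lambda+x|^2 - \sin^2(\beta/2)(|\lambda|+x)^2 = \cos^2(\beta/2)\,(|\lambda|-x)^2 \ge 0$.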
\begin{Proposition}
[Proposition 2.13 in \cite{EnSh}] \label{pro.2.13.ensh} Let $D\subset \mathbb R^d$ be a domain and let $\Lambda$ be a domain in $\mathbb C$. Let $m(\lambda)$ be a bounded function on $\Lambda$ and let $M_m(\lambda):L^q(D)\mapsto L^q(D)$ be defined as $M_m(\lambda) f = m(\lambda)f$. Then
$$
\mathcal R_{\mathcal L(L^q(D))} (\{M_m(\lambda)|\lambda\in \Lambda\}) \leq C_{d,q,D} \|m\|_{L^\infty(\Lambda)}.
$$
\end{Proposition}
\begin{Proposition}
[Proposition 2.16 in \cite{EnSh} or Proposition 3.4 in \cite{DeHiPr}] \label{pro.2.16.ensh} \begin{enumerate}\item Let $X$ and $Y$ be Banach spaces and let $\mathcal T$ and $\mathcal S$ be $\mathcal R-$bounded families on $\mathcal L(X,Y)$. Then $\mathcal T+\mathcal S = \{T+S| T\in \mathcal T, S\in \mathcal S\}$ is also $\mathcal R-$bounded on $\mathcal L(X,Y)$ and
$$
\mathcal R_{\mathcal L(X,Y)} (\mathcal T + \mathcal S)\leq \mathcal R_{\mathcal L(X,Y)} (\mathcal T) + \mathcal R_{\mathcal L(X,Y)} (\mathcal S).
$$
\item Let $X,Y$ and $Z$ be Banach spaces and let $\mathcal T$ and $\mathcal S$ be $\mathcal R-$bounded families on $\mathcal L(X,Y)$ and $\mathcal L(Y,Z)$ respectively. Then $\mathcal S\mathcal T = \{ST| T\in \mathcal T, S\in \mathcal S\}$ is $\mathcal R-$bounded on $\mathcal L(X,Z)$ and
$$
\mathcal R_{\mathcal L(X,Z)}(\mathcal S\mathcal T) \leq \mathcal R_{\mathcal L(X,Y)} (\mathcal T) \mathcal R_{\mathcal L(Y,Z)}(\mathcal S).
$$
\end{enumerate}
\end{Proposition}
{\bf Acknowledgement}: The research of \v{S}.N. and V.M. leading to these results has received funding from the Czech Science Foundation (GA\v CR), project GA19-04243S, and in the framework of RVO: 67985840. The research of M.K. was supported by RVO: 67985840.
\end{document} |
\begin{document}
\title{$GI$-graphs and their groups\\[+12pt] }
\author{
Marston D.E. Conder
\\[+3pt]
{\normalsize Department of Mathematics, University of Auckland,}\\
{\normalsize Private Bag 92019, Auckland 1142, New Zealand} \\[+3pt]
{\normalsize [email protected]}\\[+6pt]
\and
Toma\v{z} Pisanski
\\[+3pt]
{\normalsize Faculty of Mathematics and Physics, University of Ljubljana,} \\
{\normalsize Jadranska 19, 1000 Ljubljana, Slovenia} \\[+3pt]
{\normalsize [email protected]}\\[+6pt]
\and
Arjana \v{Z}itnik
\\[+3pt]
{\normalsize Faculty of Mathematics and Physics, University of Ljubljana,} \\
{\normalsize Jadranska 19, 1000 Ljubljana, Slovenia} \\[+3pt]
{\normalsize [email protected]}
}
\date{10 July 2012}
\maketitle
\begin{abstract}
The class of generalized Petersen graphs was introduced by Coxeter in the 1950s.
Frucht, Graver and Watkins determined the automorphism groups of
generalized Petersen graphs in 1971, and much later, Nedela and \v Skoviera
and (independently) Lovre\v ci\v c-Sara\v zin characterised those which are
Cayley graphs.
In this paper we extend the class of generalized Petersen graphs to a class
of {\em $GI$-graphs}.
For any positive integer $n$ and any sequence $j_0,j_1,\dots,j_{t-1}$ of integers mod $n$,
the $GI$-graph $GI(n;j_0,j_1,\dots,j_{t-1})$ is a $(t\!+\!1)$-valent graph
on the vertex set $\mathbb{Z}_t \times \mathbb{Z}_n$, with edges of two kinds:
\begin{itemize}
\item an edge from $(s,v)$ to $(s',v)$, for all distinct $s,s' \in \mathbb{Z}_{t}$
and all $v \in \mathbb{Z}_n$,
\item edges from $(s,v)$ to $(s,v + j_s)$ and $(s,v - j_s)$, for all $s \in \mathbb{Z}_{t}$ and $v \in \mathbb{Z}_n$.
\end{itemize}
\noindent
By classifying different kinds of automorphisms,
we describe the automorphism group of each $GI$-graph,
and determine which $GI$-graphs are vertex-transitive and which are Cayley graphs.
A $GI$-graph can be edge-transitive only when $t \leq 3$,
or equivalently, for valence at most $4$.
We present a unit-distance drawing of the remarkable graph $GI(7;1,2,3)$.
\noindent
{\bf Keywords}: $GI$-graph, generalized Petersen graph,
vertex-transitive graph, edge-transitive graph, circulant graph,
automorphism group, wreath product, unit-distance graph.\\
\noindent
{\bf Mathematics Subject Classification (2010)}:
20B25,
05E18,
05C75.
\end{abstract}
\section{Introduction}
\label{intro}
Trivalent graphs (also known as cubic graphs) form an extensively studied class of graphs.
Among them, the Petersen graph is one of the most important
finite graphs, constructible in many ways, and a minimal counterexample
to many conjectures in graph theory.
The Petersen graph is the initial member of a family of graphs $G(n,k)$,
known today as {\em generalized Petersen graphs}, which have similar constructions.
Generalized Petersen graphs were first introduced by Coxeter \cite{Coxeter} in 1950,
and were named in 1969 by Watkins \cite{Watkins}.
A standard visualization of a generalized Petersen graph consists of
two types of vertices: half of them belong to an outer rim, and the other half belong
to an inner rim; and there are three types of edges: those in the outer rim, those in
the inner rim, and the `spokes', which form a $1$-factor between the inner rim and the outer rim.
The outer rim is always a cycle, while the inner rim may consist of several isomorphic cycles.
A generalized Petersen graph $G(n,k)$ is given by two parameters $n$ and $k$,
where $n$ is the number of vertices in each rim, and $k$ is the `span' of the inner rim
(which is the distance on the outer rim between the neighbours of two adjacent vertices
on the inner rim).
The family $G(n,k)$ contains some very important graphs.
Among others of particular interest are the $n$-prism $G(n,1)$, the D\"{u}rer graph $G(6,2)$,
the M\"{o}bius-Kantor graph $G(8,3)$, the dodecahedron $G(10,2)$,
the Desargues graph $G(10,3)$, the Nauru graph $G(12,5)$, and of course the Petersen graph
itself, which is $G(5,2)$.
Generalized Petersen graphs possess a number of interesting properties.
For example, $G(n,k)$ is vertex-transitive if and only if either
$n = 10$ and $k = 2$, or $k^2 \equiv \pm 1$ mod $n\,$ \cite{Frucht},
and a Cayley graph if and only if $k^2 \equiv 1$ mod $n\,$ \cite{Lovrecic1, NedelaSkoviera},
and arc-transitive only in the following seven cases:
$(n,k) = (4,1)$, $(5,2)$, $(8,3)$, $(10, 2)$, $(10, 3)$, $(12, 5)$ or $(24, 5)\,$ \cite{Frucht}.
If we want to maintain the symmetry between the two rims, then another parameter has to
be introduced, allowing the span on the outer rim to be different from 1. This gives the
definition of an {\em $I$-graph}.
The family of $I$-graphs was introduced in 1988 in the Foster Census \cite{Foster}.
For some time this family failed to attract the attention of many researchers,
possibly due to the fact that among all $I$-graphs, the only ones that are vertex-transitive
are the generalized Petersen graphs \cite{igraphs,LovrecicMarusic}.
Still, necessary and sufficient conditions for testing whether or not two $I$-graphs are
isomorphic were determined in \cite{igraphs,HPZ2}, and these were used to enumerate
all $I$-graphs in \cite{Petkovsek}.
Also in \cite{HPZ2} it was shown that all generalized Petersen graphs are
unit-distance graphs, by representing them as isomorphic $I$-graphs.
Furthermore, in \cite{igraphs} it was shown that the automorphism group of a connected
$I$-graph $I(n,j,k)$ that is not a generalized Petersen graph is either dihedral or
a group with presentation
\begin{equation*}
\Gamma = \langle\, \rho, \tau, \varphi \mid
\rho^n = \tau^2 = \varphi^2 = 1, \,
\rho\tau\rho = \tau, \,
\varphi\tau\varphi = \tau, \,
\varphi\rho\varphi = \rho^a \, \rangle
\end{equation*}
for some $a \in \mathbb{Z}_n$,
and that among all $I$-graphs, only the
generalized Petersen graphs can be vertex-transitive or edge-transitive.
In this paper we further generalize both of these families of graphs;
we call the new graphs \emph{generalized $I$-graphs}, or simply {\em $GI$-graphs}.
We determine the group of automorphisms of any $GI$-graph. Moreover,
we completely characterize the edge-transitive graphs, the vertex-transitive graphs
and the Cayley graphs among the class of $GI$-graphs.
At the end of the paper we briefly discuss the problem of unit-distance realizations of $GI$-graphs.
This problem has been solved for $I$-graphs in \cite{HPZ1}.
We found a remarkable new example of a 4-valent unit-distance graph,
namely $GI(7;1,2,3)$, which is a Cayley graph on 21 vertices for the group $\mathbb{Z}_7 \rtimes \mathbb{Z}_3$.
Let us note that ours is not the only possible generalization.
For instance, see \cite{Lovrecic} for another approach,
which is not much different from ours.
The basic difference is that our approach
uses complete graphs, while the approach by Lovre\v ci\v c-Sara\v zin,
Pacco and Previtali in \cite{Lovrecic} uses cycles;
their construction coincides with ours for $t \le 3$, but not for larger $t$.
We acknowledge the use of {\sc Magma} \cite{Magma} in constructing and analysing
examples of $GI$-graphs, and helping us to see patterns and test conjectures
that led to many of the observations made and proved in this paper.
\section{Definition of $GI$-graphs and their properties}
\label{defn-props}
For positive integers $n$ and $t$ with $n \ge 3$,
let $(j_0,j_1, \dots , j_{t-1})$ be any sequence of integers
such that $0 < j_k < n$ and $j_k \ne n/2$, for $0 \le k < t$. \\[-8pt]
Then we define $GI(n;j_0,j_1, \dots , j_{t-1})$ to be the graph
with vertex set $\mathbb{Z}_t \times \mathbb{Z}_n$, and with edges of two types:
\\[+4pt]
\begin{tabular}{ll}
(a)\hskip -7pt ${}$ & an edge from $(s,v)$ to $(s',v)$, for all distinct $s,s' \in \mathbb{Z}_{t}$ and all $v \in \mathbb{Z}_n$, \\[+2pt]
(b)\hskip -7pt ${}$ & edges from $(s,v)$ to $(s,v + j_s)$ and $(s,v - j_s)$, for all $s\in \mathbb{Z}_{t}$ and all $v \in \mathbb{Z}_n$. \\[+4pt]
\end{tabular}
This definition gives us an infinite family of graphs, which we call \emph{$GI$-graphs}.
The graph $GI(n;j_0,j_1, \dots , j_{t-1})$ has $nt$ vertices, and is regular of valence $(t-1)+2 = t+1.$
Edges of type (a) are called the \emph{spoke edges},
while those of type (b) are called the \emph{layer edges}.
Also for each $s \in \mathbb{Z}_t$ the set $L_s = \{(s,v) : v \in \mathbb{Z}_n\}$
is called a {\em layer}, and
for each $v \in \mathbb{Z}_n$ the set $S_v = \{(s,v) : s \in \mathbb{Z}_t\}$
is called a {\em spoke}.
We observe that the induced subgraph
on each spoke is a complete graph $K_t$ of order $t$.
On the other hand, the induced subgraph on the layer $L_s$ is a union of $d_s$ cycles
of length $n/d_s$, where $d_s =\gcd(n,j_s)$. \\[-8pt]
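To make the definition concrete, here is a short Python sketch (an illustration of ours; the function names are not from any standard package) that builds the edge set of $GI(n;j_0,\dots,j_{t-1})$ and checks the valence and edge counts stated above.

```python
from itertools import combinations

def gi_edges(n, J):
    """Edge set of GI(n; j_0, ..., j_{t-1}); vertices are pairs (s, v) in Z_t x Z_n."""
    t, E = len(J), set()
    for v in range(n):                        # spoke edges: a K_t on each spoke S_v
        for s, s2 in combinations(range(t), 2):
            E.add(frozenset({(s, v), (s2, v)}))
    for s, j in enumerate(J):                 # layer edges: (s, v) -- (s, v + j_s)
        for v in range(n):
            E.add(frozenset({(s, v), (s, (v + j) % n)}))
    return E

def degree(E, x):
    return sum(1 for e in E if x in e)

# GI(5; 1, 2) = G(5, 2): 10 vertices, 15 edges, regular of valence t + 1 = 3
E = gi_edges(5, [1, 2])
assert len(E) == 15
assert all(degree(E, (s, v)) == 3 for s in range(2) for v in range(5))

# GI(6; 1, 1, 2): 18 vertices, valence 4, hence 18 * 4 / 2 = 36 edges
assert len(gi_edges(6, [1, 1, 2])) == 36
```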
In the case $t=1$, the graph $GI(n;j_0)$ is simply a union
of disjoint isomorphic cycles of length $n/\gcd(n,j_0)$.
In the case $ t = 2$, we have $I$-graphs; for example,
$GI(n;1,j)$ is a generalized Petersen graph, for every $j$,
and in particular, $GI(5;1,2)$ is the dodecahedral graph
(the $1$-skeleton of a dodecahedron).
Some other examples are illustrated in Figure~\ref{fig:examplesGIgraphs}.
\begin{figure}
\caption{$GI$-graphs $GI(6;2,2)$, $GI(6;1,1,2)$, and $GI(6;2,1,2)$.}
\label{fig:examplesGIgraphs}
\end{figure}
Note that taking $j_k^\prime = \pm j_k$ for all $k$ gives a $GI$-graph
$GI(n;j_0^\prime,j_1^\prime, \dots , j_{t-1}^\prime)$
that is exactly the same as
$GI(n;j_0,j_1, \dots , j_{t-1})$.
Similarly, any permutation of
$j_0, j_1, \dots, j_{t-1}$ gives a $GI$-graph isomorphic to
$GI(n;j_0,j_1, \dots , j_{t-1})$.
Therefore we will usually assume that $0 < j_k < n/2$ for all $k$,
and that $j_0 \le j_1 \le \dots \le j_{t-1}$.
In this case, we say that the $GI$-graph
$GI(n;j_0,j_1, \dots , j_{t-1})$ is in \emph{standard form}.
The following theorem gives a partial answer to the problem of
distinguishing between two $GI$-graphs.
\begin{Proposition}
\label{thm:multiply}
Suppose that none of $j_0,j_1,\dots,j_{t-1}$ is congruent to $0$ or $n/2$ modulo $n$, and
$a$ is a unit in $\mathbb{Z}_n$.
Then the graph $GI(n;aj_0, aj_1, \dots, aj_{t-1})$ is isomorphic to
the graph $GI(n;j_0,j_1, \dots, j_{t-1})$.
\end{Proposition}
\begin{proof}
Since $a$ is coprime to $n$,
the numbers $av$, for $0 \le v < n$, are all distinct in $\mathbb{Z}_n$,
and so we can label the vertices of $GI(n;aj_0, aj_1, \dots, aj_{t-1})$
as ordered pairs $(s,av)$ for $s \in \mathbb{Z}_t$ and $v \in \mathbb{Z}_n$.
Now define a mapping
$\varphi : V(GI(n;aj_0,aj_1, \dots, aj_{t-1})) \to V(GI(n;j_0,j_1, \dots, j_{t-1}))$
by setting $ \varphi((s,av) ) = (s,v)$ for all $s \in \mathbb{Z}_t$ and all $v \in \mathbb{Z}_n$.
This is clearly a bijection, and since a vertex $(s,av)$ in $GI(n;aj_0, aj_1, \dots, aj_{t-1})$
is adjacent to $(s',av)$ for each $s' \in \mathbb{Z}_t \setminus \{s\}$
and to $(s,av \pm aj_s) = (s,a(v \pm j_s))$, it is easy to see that
$\varphi$ preserves adjacency, and is therefore a graph isomorphism.
\end{proof}
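Proposition~\ref{thm:multiply} is easy to test computationally. The following sketch (ours, for illustration) rebuilds the edge sets and checks that the map $\varphi((s,av))=(s,v)$, i.e.\ multiplication of the second coordinate by $b=a^{-1}$, carries $GI(n;aJ)$ onto $GI(n;J)$.

```python
from itertools import combinations

def gi_edges(n, J):
    # illustrative helper: edge set of GI(n; J), vertices are pairs (s, v)
    t, E = len(J), set()
    for v in range(n):
        for s, s2 in combinations(range(t), 2):
            E.add(frozenset({(s, v), (s2, v)}))
    for s, j in enumerate(J):
        for v in range(n):
            E.add(frozenset({(s, v), (s, (v + j) % n)}))
    return E

def unit_isomorphism_holds(n, J, a):
    # phi(s, w) = (s, b*w) with b = a^{-1} mod n, so that phi(s, a*v) = (s, v)
    b = pow(a, -1, n)                          # requires gcd(a, n) = 1
    phi = lambda sv: (sv[0], (b * sv[1]) % n)
    aJ = [(a * j) % n for j in J]
    return {frozenset(map(phi, e)) for e in gi_edges(n, aJ)} == gi_edges(n, J)

# every unit a of Z_7 gives GI(7; a*1, a*2, a*3) isomorphic to GI(7; 1, 2, 3)
assert all(unit_isomorphism_holds(7, [1, 2, 3], a) for a in range(1, 7))
```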
We may collect the parameters $j_s$ into a multiset $J$, and then use the abbreviation
$GI(n; J)$ for the graph $GI(n;j_0,j_1, \dots, j_{t-1})$.
Also we will say that the multiset $J$ is in \emph{canonical form} if it is lexicographically first
among all the multisets that give isomorphic copies of $GI(n; J)$ via Proposition~\ref{thm:multiply}.
We now list some other properties of $GI$-graphs.
Proposition~\ref{thm:spokes} shows that the spoke edges are easy to recognise when $t > 3$.
\begin{Proposition}
\label{thm:factors}
The graph $GI(n;j_0,j_1, \dots , j_{t-1})$ admits a factorization into
a $(t-1)$-factor $nK_t$ and a $2$-factor $($namely the spokes and the layers$)$.
\end{Proposition}
\begin{Proposition}
\label{thm:spokes}
An edge of a $GI$-graph with $4$ or more layers is a spoke edge
if and only if it belongs to some clique of size $4$.
\end{Proposition}
\begin{proof}
No edge between two vertices in the same layer can lie in a $K_4$ subgraph,
because the subgraph induced on each layer is a union of cycles,
and no two spokes between two different layers can have a common vertex.
\end{proof}
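Proposition~\ref{thm:spokes} can be checked directly on small examples. The sketch below (illustrative code of ours) tests, for a $GI$-graph with four layers, that every spoke edge lies in a $4$-clique and no layer edge does.

```python
from itertools import combinations

def gi_edges(n, J):
    # illustrative helper: edge set of GI(n; J), vertices are pairs (s, v)
    t, E = len(J), set()
    for v in range(n):
        for s, s2 in combinations(range(t), 2):
            E.add(frozenset({(s, v), (s2, v)}))
    for s, j in enumerate(J):
        for v in range(n):
            E.add(frozenset({(s, v), (s, (v + j) % n)}))
    return E

def lies_in_k4(E, e):
    # does the edge e lie in a clique of size 4?
    u, v = tuple(e)
    verts = {x for f in E for x in f}
    common = [w for w in verts
              if frozenset({u, w}) in E and frozenset({v, w}) in E]
    return any(frozenset({a, b}) in E for a, b in combinations(common, 2))

E = gi_edges(7, [1, 2, 3, 3])                         # four layers, so t = 4
spokes = {e for e in E if len({v for (_, v) in e}) == 1}
assert all(lies_in_k4(E, e) for e in spokes)          # every spoke edge is in a K_4
assert not any(lies_in_k4(E, e) for e in E - spokes)  # no layer edge is
```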
\begin{Proposition}
\label{thm:connectedGI}
Let $d=\gcd(n,j_0,j_1,\dots, j_{t-1})$. Then the graph
$GI(n;j_0,j_1, \dots , j_{t-1})$ is a disjoint union of $d$ copies of
$GI(n/d;j_0/d,j_1/d, \dots , j_{t-1}/d)$. In particular, the
graph $GI(n;j_0,j_1, \dots , j_{t-1})$ is connected if and only if $d = 1$.
\end{Proposition}
\begin{proof}
First observe that the edges of every spoke make up a clique (of order $t$),
so the graph is connected if and only if every two spokes are connected via
the layer edges. Now there exists an edge between two spokes $S_u$ and $S_v$
whenever $v-u$ is a multiple of $j_s$ for some $s$, and hence a path of
length $2$ between $S_u$ and $S_v$ whenever $v-u$ is a $\mathbb{Z}_n$-linear
combination of some $j_s$ and $j_{s'}$, and so on. Thus $S_u$ and $S_v$ lie
in the same connected component of the graph if and only if $v-u$ is expressible
(mod $n$) as a $\mathbb{Z}$-linear combination of $j_0,j_1,\dots, j_{t-1}$,
say $v-u = cn + c_{0}j_{0} + c_{1}j_{1} + \dots + c_{t-1} j_{t-1}$
for some $c_0,c_1,\dots, c_{t-1} \in \mathbb{Z}$. By B\'ezout's identity, this occurs if and
only if $v-u$ is a multiple of $\gcd(n,j_0,j_1,\dots, j_{t-1}) = d$.
It follows that the graph has $d$ components, each containing a set of spokes
$S_v$ with $v = u+jd$ for fixed $u$ and variable $j$, that is, with subscripts
differing by multiples of $d$. Finally, since
$(v-u)/d = c(n/d)+ c_{0}(j_{0}/d) + c_{1}(j_{1}/d) + \dots + c_{t-1}(j_{t-1}/d)$,
it is easy to see that each component is isomorphic to $GI(n/d;j_0/d,j_1/d, \dots , j_{t-1}/d)$.
\end{proof}
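Proposition~\ref{thm:connectedGI} can likewise be verified computationally; the sketch below (ours) counts connected components by depth-first search and compares the result with $\gcd(n,j_0,j_1,\dots,j_{t-1})$.

```python
from functools import reduce
from itertools import combinations
from math import gcd

def gi_edges(n, J):
    # illustrative helper: edge set of GI(n; J), vertices are pairs (s, v)
    t, E = len(J), set()
    for v in range(n):
        for s, s2 in combinations(range(t), 2):
            E.add(frozenset({(s, v), (s2, v)}))
    for s, j in enumerate(J):
        for v in range(n):
            E.add(frozenset({(s, v), (s, (v + j) % n)}))
    return E

def n_components(n, J):
    # count connected components by an iterative depth-first search
    adj = {(s, v): set() for s in range(len(J)) for v in range(n)}
    for e in gi_edges(n, J):
        u, v = tuple(e)
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), 0
    for x in adj:
        if x not in seen:
            comps += 1
            stack = [x]
            while stack:
                y = stack.pop()
                if y not in seen:
                    seen.add(y)
                    stack.extend(adj[y] - seen)
    return comps

for n, J in [(7, [1, 2, 3]), (12, [2, 4]), (10, [2, 4]), (12, [3, 3])]:
    assert n_components(n, J) == reduce(gcd, J, n)
```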
Finally, note that the restriction of a $GI$-graph to any proper subset of its layers
gives rise to another $GI$-graph.
In particular, if $J$ and $K$ are multisets with $J \subseteq K$,
then $GI(n;J)$ is an induced subgraph of $GI(n;K)$.
\section{Automorphisms of $GI$-graphs}
\label{automs}
In this section, we consider the possible automorphisms of a $GI$-graph
$X=GI(n;J)$, where $J =\{j_0,j_1, \dots , j_{t-1}\}$ is any multiset.
If $X$ is disconnected, then since all connected components of $X$
are isomorphic to each other (by Proposition \ref{thm:connectedGI}),
we may simply reduce this to the consideration of the automorphisms of
a connected component of $X$ (and then find the automorphism group
using a theorem of Frucht \cite{Frucht0}, cf. \cite{Harary}).
Hence from now on, we will assume that $X$ is connected.
The set of edges of $X=GI(n;J)$ may be partitioned into spoke edges and layer edges,
and we will call this partition of edges the \emph{fundamental edge-partition} of $X$.
We know that the graph induced on the spoke edges is a collection of complete graphs,
and that the graph induced on the layer edges is a collection of cycles
(with each cycle belonging to a single layer, but with a layer being composed
of two or more cycles of the same length $n/\gcd(n,j_s)$ if the corresponding
element $j_s$ of $J$ is not a unit mod $n$).
We will say that an automorphism of $X$ \emph{respects the fundamental
edge-partition} if it takes spoke edges to spoke edges, and layer
edges to layer edges.
Any automorphism of $X$ that does not respect the fundamental edge-partition
(and so takes some layer edge to a spoke edge, and some spoke edge to a layer edge)
will be called \emph{skew}.
\begin{Theorem}
\label{theorem:skewautom}
Let $X$ be a connected $GI$-graph with $t$ layers, where $t \ge 2$.
If $X$ has a skew automorphism, then either $\,t = 2$ and $X$ is isomorphic to one
of the seven special generalized Petersen graphs
$G(4,1)$, $G(5,2)$, $G(8,3)$, $G(10, 2)$, $G(10, 3)$, $G(12, 5)$ and $G(24, 5)$,
or $\,t = 3$ and $X$ is isomorphic to $GI(3;1,1,1)$.
Moreover, each of these eight graphs is arc-transitive
$($and is therefore both vertex-transitive and edge-transitive$)$.
\end{Theorem}
\begin{proof}
First, if $t > 3$ then no layer edge lies in a clique of size $t$, but every spoke edge does,
and therefore no automorphism can map a spoke edge to a layer edge. Thus $t \le 3$.
Next, suppose $t = 3$, and let $\varphi$ be an automorphism taking an edge $e$ of some
spoke $S_v$ to an edge $e'$ of some layer $L_s$.
Since every edge of a spoke lies in a triangle, namely the spoke itself,
it follows that $\varphi$ must take the whole spoke $S_v = \{(0,v),(1,v),(2,v)\}$ containing
$e$ to some triangle containing the layer edge $e'$, and then the other two edges of the
triangle $\{\varphi(0,v),\varphi(1,v),\varphi(2,v)\}$ must be edges from the same layer as $e'$,
namely $L_s$. It follows that $j_s = n/3$. But then since each of the images
$\varphi(0,v),\varphi(1,v),\varphi(2,v)$ lies in two triangles (namely a spoke and
a triangle in $L_s$), each of the vertices $(0,v),(1,v)$ and $(2,v)$ must similarly lie in
two triangles, and it follows that all three layers contain a triangle, so $j_0 = j_1 = j_2 = n/3$.
In particular, $\gcd(j_0,j_1,j_2) = n/3$, and by connectedness, Proposition~\ref{thm:connectedGI}
implies $n/3 = 1$, so $n = 3$ and $j_0 = j_1 = j_2 = 1$.
Thus $X$ is $GI(3;1,1,1)$, which is well-known to be arc-transitive
(see \cite{Lovrecic}, for example).
Finally, for the case $t =2$, everything we need was proved in \cite{Frucht} and \cite{igraphs}.
\end{proof}
\begin{Corollary}
\label{corollary:edgetransitive}
Every edge-transitive connected $GI$-graph is isomorphic to one of the eight graphs
listed in Theorem~{\em\ref{theorem:skewautom}}.
\end{Corollary}
Hence from now on, we will consider only the automorphisms that respect the
fundamental edge-partition.
There are three special classes of such automorphisms:
\\[+4pt]
\begin{tabular}{cl}
(1) & ${}$\hskip -8pt automorphisms that preserve every layer \\[+2pt]
(2) & ${}$\hskip -8pt automorphisms that preserve every spoke \\[+2pt]
(3) & ${}$\hskip -8pt automorphisms that permute both the layers and the spokes non-trivially.\\[+6pt]
\end{tabular}
\\
\noindent
We will consider particular cases of automorphisms of these types below.
Define mappings $\rho:V(X) \to V(X)$ and $\tau: V(X) \to V(X)$ given by
\begin{equation*}
\rho(s,v) = (s,v+1)
\ \ \
\mbox{and}
\ \ \
\tau(s,v) = (s,-v) \quad \hbox{ for all } s \in \mathbb{Z}_t \hbox{ and all } v \in \mathbb{Z}_n. \tag{$\dagger$}
\end{equation*}
Clearly these are automorphisms of $X$ of type (1), permuting the vertices in each
layer. Indeed $\rho$ can be viewed as a rotation (of order $n$), and $\tau$ as a
reflection (of order $2$), and it follows that the automorphism group of $X$
contains a dihedral subgroup of order $2n$, generated by $\rho$ and $\tau$.
These $2n$ automorphisms are all of type (1), and all of them respect the fundamental
edge-partition of $X$.
Next, if two of the members of the multiset $J$ are equal,
say $j_{s_1}=j_{s_2}$ for $s_1 \ne s_2$, then we have an automorphism
$\lambda_{i,s_1,s_2}$ that exchanges two corresponding cycles of the layers
$L_{s_1}$ and $L_{s_2}$, but preserves every spoke. These automorphisms are of type (2).
\begin{Proposition}
\label{propn:mix_layers}
Suppose $j_{s_1}=j_{s_2}$ where $s_1 \ne s_2$, and define $d=\gcd(n,j_{s_1}) = \gcd(n,j_{s_2})$.
Then for each $i \in \mathbb{Z}_d$, the mapping $\lambda_{i,s_1,s_2}:V(X) \to V(X)$ given by
$$\lambda_{i,s_1,s_2}(s,v)=\left\{ \begin{array}{ll}
(s_2,v) & \mbox{if} \ \ s=s_1 \ \ \mbox{and} \ \ v \equiv i \ \ \mbox{\rm mod} \ d \\
(s_1,v) & \mbox{if} \ \ s=s_2 \ \ \mbox{and} \ \ v \equiv i \ \ \mbox{\rm mod} \ d \\
\,(s,v) & \mbox{otherwise}
\end{array} \right.
$$
is an automorphism of $X$, which respects the fundamental edge-partition,
and preserves all layers other than $L_{s_1}$ and $L_{s_2}$.
\end{Proposition}
\begin{proof}
This is obviously a permutation of $V(X)$, preserving adjacency.
Moreover, it is also clear that $\lambda_{i,s_1,s_2}$ preserves every spoke $S_v$,
and exchanges one of the cycles in layer $L_{s_1}$ with the corresponding
cycle in layer $L_{s_2}$, while preserving all other layer cycles.
\end{proof}
\begin{Corollary}
\label{cor:exchange_layers}
Suppose $j_{s_1}=j_{s_2}$ where $s_1 \ne s_2$, and define $d=\gcd(n,j_{s_1}) = \gcd(n,j_{s_2})$. \\[+1pt]
Then the product
$$\lambda_{s_1,s_2} := \lambda_{0,s_1,s_2}\lambda_{1,s_1,s_2}\dots \lambda_{d-1,s_1,s_2}$$
is an automorphism of $X$ that respects the fundamental edge-partition,
and exchanges layers $L_{s_1}$ and $L_{s_2}$, while preserving every other layer.
\end{Corollary}
There is another family of automorphisms exchanging layers that exists in
some situations; but these automorphisms do not preserve spokes,
and so they are of type (3):
\begin{Proposition}
\label{propn:change_layers}
Let $a$ be any unit in $\mathbb{Z}_n$ with the property that $aJ =\{\pm j_0, \pm j_1, \dots, \pm j_{t-1}\}$,
and then let $\alpha: \mathbb{Z}_t \to \mathbb{Z}_t$ be any bijection with the property that
$j_{\alpha(s)} = \pm a j_s$ for all $s \in \mathbb{Z}_t$.\\[+1pt]
Then the mapping $\sigma_a: V(X) \to V(X)$ given by
$$
\sigma_a(s,v) = (\alpha(s),a v) \ \ \hbox{ for all } s \in \mathbb{Z}_t \, \hbox{ and all } v \in \mathbb{Z}_n
$$
is an automorphism of $X$ that respects the fundamental edge-partition.
\end{Proposition}
\begin{Remarks}
Note that the mapping $\alpha$ is not uniquely determined
if there exist distinct $s_1$ and $s_2$ for which $j_{s_1}= \pm j_{s_2}$,
but we can always define the mapping $\alpha$ so that it is a bijection
(and satisfies $j_{\alpha(s)} = \pm a j_s$ for all $s \in \mathbb{Z}_t$).
Indeed $\alpha$ is uniquely determined if we require that $\alpha(s_1) < \alpha(s_2)$
whenever $s_1 < s_2$ and $j_{s_1}= \pm j_{s_2}$.
On the other hand, $\sigma_a$ is not defined when the
condition $aJ =\{\pm j_0, \pm j_1, \dots, \pm j_{t-1}\}$ fails
(or equivalently, when $a(J \cup -J) \ne J \cup -J$).
Note also that $\sigma_1$ is the identity automorphism, while $\sigma_{-1}$ is the
automorphism $\tau$ defined earlier, since for $a = -1$ we may take $\alpha$ as
the identity permutation and then $\sigma_{-1}(s,v) = (s,-v) = \tau(s,v)$ for every vertex $(s,v)$.
\end{Remarks}
\begin{proof}
First, let $b$ be the multiplicative inverse of $a$ in $\mathbb{Z}_n^{\,*}$.
Then for any $(s,v) \in \mathbb{Z}_t \times \mathbb{Z}_n$, we have
$\sigma_a(\alpha^{-1}(s),bv)=(\alpha(\alpha^{-1}(s)),abv)=(s,v)$,
and therefore $\sigma_a$ is surjective.
Since $V(X)$ is finite, it follows that $\sigma_a$ is a permutation.
Also $\sigma_a$ preserves edges; indeed, it respects the fundamental edge-partition,
because it takes each neighbour $(s^\prime,v)$ of the vertex $(s,v)$ in the spoke $S_v$
to the neighbour $(\alpha(s^\prime), av)$ of the vertex $(\alpha(s), av)$ in the spoke $S_{av}$,
and takes the two neighbours $(s,v \pm j_s)$ of the vertex $(s,v)$ in the layer $L_s$
to the two neighbours $\sigma_a(s,v \pm j_s)=(\alpha(s),a (v \pm j_s))=(\alpha(s), a v \pm a j_s)
=(\alpha(s),a v \pm j_{\alpha(s)})$ of the vertex $(\alpha(s), av)$ in the layer $L_{\alpha(s)}$.
\end{proof}
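As a quick sanity check, the following Python sketch (illustrative only, not part of the proof) verifies that each map $\sigma_a$ is an automorphism of the small example $GI(7;1,2,3)$, using the conventions above: each spoke $S_v$ is a complete graph on $\{(s,v) : s \in \mathbb{Z}_t\}$, and layer $L_s$ steps by $j_s$.

```python
from math import gcd

def gi_edges(n, J):
    """Edge set of GI(n; J): spoke S_v is a clique on {(s, v) : s}, and
    layer L_s joins (s, v) to (s, v + j_s) mod n."""
    t = len(J)
    E = set()
    for v in range(n):
        for s1 in range(t):
            for s2 in range(s1 + 1, t):
                E.add(frozenset({(s1, v), (s2, v)}))
    for s in range(t):
        for v in range(n):
            E.add(frozenset({(s, v), (s, (v + J[s]) % n)}))
    return E

def rep(x, n):
    """The representative of the pair {x, -x} mod n lying in 0..n//2."""
    return min(x % n, (-x) % n)

def sigma(n, J, a):
    """sigma_a(s, v) = (alpha(s), a*v), where alpha(s) is the layer whose
    step is congruent to +-(a * j_s) mod n (well-defined when J is a set)."""
    return lambda s, v: (J.index(rep(a * J[s], n)), (a * v) % n)

def is_automorphism(n, J, f):
    E = gi_edges(n, J)
    return {frozenset({f(*u), f(*w)}) for u, w in map(tuple, E)} == E

# units a with a(J u -J) = J u -J, written via +- representatives
n, J = 7, [1, 2, 3]
valid_a = [a for a in range(1, n) if gcd(a, n) == 1
           and sorted(rep(a * j, n) for j in J) == J]
```

Here $J \cup -J$ is all of $\mathbb{Z}_7^{\,*}$, so every unit $a$ qualifies, and the check confirms that all six maps $\sigma_a$ preserve the edge set.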
In the remaining part of this section we will show that if the $GI$-graph $X$ is connected,
then the automorphisms described above and their products give all of the automorphisms of $X$
that respect the fundamental edge-partition.
For this we require two technical Lemmas, the proofs of which are obvious.
\begin{Lemma}
\label{lemma:spokespoke}
Let $X$ be a connected $GI$-graph with at least two layers.
Then every automorphism of $X$ that preserves spoke edges
must permute the spokes $($like blocks of imprimitivity$)$.
\end{Lemma}
\begin{Lemma} \label{lemma:layers}
Every automorphism of a $GI$-graph that respects the fundamental edge-partition
must permute the layer cycles.
\end{Lemma}
It will also be helpful to relate the automorphisms of a $GI$-graph to the automorphisms
of the corresponding circulant graph.
Let $S$ be a subset of $\mathbb{Z}_n$ such that $S=-S$ and $0 \not \in S$.
Then the \emph{circulant graph} $\mathcal{C}irc(n;S)$ is defined as the graph with
vertex set $\mathbb{Z}_n$, such that vertices $u$ and $v$ are adjacent precisely
when $u-v \equiv a$ mod $n$ for some $a \in S$.
Equivalently, this is the Cayley graph for $\mathbb{Z}_n$ given by the subset $S$.
Note that $\mathcal{C}irc(n;S)$ is connected if and only if $S$ additively generates $\mathbb{Z}_n$,
that is, if and only if some linear combination of the members of $S$ is $1$ mod $n$.
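This connectivity criterion amounts to $\gcd(n, s_1, \dots, s_c) = 1$, and is easy to test mechanically; the Python sketch below compares a direct search with the gcd condition on a few small cases.

```python
from math import gcd
from functools import reduce

def circulant_adj(n, S):
    """Adjacency lists of Circ(n; S), with S symmetrised to S u -S."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for a in S:
            adj[v].add((v + a) % n)
            adj[v].add((v - a) % n)
    return adj

def is_connected(adj):
    seen, stack = {0}, [0]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return len(seen) == len(adj)

def gcd_criterion(n, S):
    """Circ(n; S) is connected iff gcd(n, s_1, ..., s_c) = 1."""
    return reduce(gcd, S, n) == 1
```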
Now suppose that $S=\{s_1, \dots, s_c\}$, and that $\Gamma=\mathcal{C}irc(n;S)$ is connected.
For $1 \le i \le c$, let $G_{i,1},G_{i,2}, \ldots, G_{i,k_i}$ be the distinct cosets
of the cyclic subgroup $G_{i,1}=\langle s_i \rangle$ in $G=\langle S\rangle$.
Then we can form a partition $\mathcal C = \{ C_{ij} \}$
of the edges of $\Gamma$, where
$$
C_{ij} =\{\,\{g,g+s_i\}: \, g \in G_{i,j} \,\} \quad \hbox{ for } \, 1 \le j \le k_i \, \hbox{ and } \,1 \le i \le c.
$$
Notice that each part $C_{ij}$ of $\mathcal C$ consists of precisely the edges
of a cycle formed by adding multiples of the single element $s_i$ of $S$ to
a member of the coset $G_{i,j}$.
We say that an automorphism $\varphi$ of $\Gamma$ \emph{respects the partition} $\mathcal C$
if $\varphi(C_{ij}) \in \mathcal C$ for every $C_{ij} \in \mathcal C$.
We have the following, thanks to Joy Morris:
\begin{Theorem}
\label{thm:joymorris}
Suppose the circulant graph $\,\Gamma=\mathcal{C}irc(n;S)$ is connected.
If $\psi$ is an automorphism of $\Gamma$ which fixes the vertex $0$ and
respects the partition $\mathcal C = \{ C_{ij} \}$, then $\psi$ is induced
by some automorphism of $\mathbb{Z}_{n}$ --- that is, there exists a unit $a \in \mathbb{Z}_n$
with the property that $\, \psi(x)=ax\,$ for every $x \in \mathbb{Z}_n$
$($and in particular, $aS = S)$.
\end{Theorem}
For a proof (by induction on $|S|$), see \cite{Morris}.
To apply it, we associate with our graph $X= GI(n;J)$ the circulant graph $Y = \mathcal{C}irc(n;S \cup -S)$,
where $S$ is the underlying set of $J$. Note that the projection
$\eta : V(X) \to V(Y)$ given by $\eta(s,v) = v$ takes every layer edge $\{(s,v), (s,v + j_s)\}$
of $X$ to the edge $\{v,v + j_s\}$ of $Y$, and hence gives a graph homomorphism from
the subgraph of $X$ induced on layer edges onto the graph $Y$.
\begin{Proposition}
\label{propn:XtoY}
Every automorphism of $X= GI(n;J)$ that preserves the set of spoke edges
induces an automorphism of $Y = \mathcal{C}irc(n;S \cup -S)$ that respects the partition $\mathcal C = \{ C_{ij} \}$.
\end{Proposition}
\begin{proof}
Any such automorphism $\varphi$ induces a permutation on the set of spokes of $X$,
and hence under the above projection $\eta$, induces an automorphism of $Y$, say $\psi$.
Moreover, since $\varphi$ preserves the layer edges, it must permute the layer cycles
among themselves, and it follows that $\psi$ respects the partition $\mathcal C = \{ C_{ij} \}$.
\end{proof}
\begin{Corollary}
\label{cor:preserve_layercycles}
Suppose $X$ is connected. Then every automorphism of $X=GI(n;J)$ that
respects the fundamental edge-partition of $X$ is expressible as a product of powers of
the rotation $\rho$, the reflection $\tau$, and the automorphisms $\lambda_{i,s_1,s_2}$
and $\sigma_a$ defined in Proposition~{\em\ref{propn:mix_layers}} and
Proposition~{\em\ref{propn:change_layers}}.
\end{Corollary}
\begin{proof}
First, any such automorphism $\varphi$ induces a permutation on the set of spokes of $X$,
and so by multiplying by a suitable element of the dihedral group of order $2n$
generated by $\rho$ and $\tau$, we may replace $\varphi$ by an automorphism
$\varphi'$ that respects the fundamental edge-partition of $X$, and preserves
the spoke $S_0$. In particular, $\varphi'$ induces an
automorphism of $Y = \mathcal{C}irc(n;S \cup -S)$ that fixes the vertex $0$.
By Theorem~\ref{thm:joymorris}, this automorphism of $Y$ is induced
by multiplication by some unit $a \in \mathbb{Z}_n$, and then by multiplying by the
inverse of $\sigma_a$ we may replace $\varphi'$ by an automorphism
$\varphi''$ that preserves all of the spokes $S_v$.
Finally, since $\varphi''$ preserves all of the spokes and also permutes the
layer cycles among themselves, $\varphi''$ is expressible as a product of the
automorphisms $\lambda_{i,s_1,s_2}$ defined in Proposition~\ref{propn:mix_layers}.
\end{proof}
As a special case, we have also the following, for the automorphisms that preserve layers:
\begin{Corollary}
\label{cor:preserve_layers}
Suppose $X$ is connected. Then any automorphism of $X=GI(n;J)$ that
takes layers to layers is a product of powers of the rotation $\rho$, the reflection $\tau$,
and the automorphisms $\lambda_{s_1,s_2}$ and $\sigma_a$ defined
in Corollary~{\em\ref{cor:exchange_layers}} and Proposition~{\em\ref{propn:change_layers}}.
\end{Corollary}
\section{Automorphism groups of $GI$-graphs}
\label{automgps}
Now that we know all possible automorphisms of a $GI$-graph, it is not difficult
to determine their number, and construct the automorphism groups in many cases.
We will sometimes use $F(n;J)$ to denote the number of automorphisms of $GI(n;J)$,
and $A(n;J)$ to denote the automorphism group of $GI(n;J)$.
The automorphism group $A(n;J)$ of $GI(n;J)$ always contains a dihedral subgroup
of order $2n$, generated by the rotation $\rho$ and the reflection $\tau$,
defined in ($\dagger$) in the previous section (before Proposition~\ref{propn:mix_layers}).
Note that the relations $\rho^n = \tau^2 = (\rho\tau)^2 = 1$ hold, with the third of
these being equivalent to $\tau\rho\tau = \rho^{-1}$.
We split the consideration of $F(n;J)$ and $A(n;J)$ into four cases, below.
\subsection{The disconnected case}
\label{subs:disconnected}
Let $d = \gcd(n,J)$. Then $GI(n;J)$ is the disjoint union of $d$ isomorphic
copies of $GI(n;J/d)$.
This reduces the computation of $\Aut(X)$ to the case of connected $GI$-graphs.
In particular,
we have
\begin{equation*} \label{eqn:autdis}
A(n;J) \ \cong \ A(n;J/d) \wr {\rm Sym}(d)
\end{equation*}
so $\Aut(GI(n;J))$ is the wreath product of $\Aut(GI(n;J/d))$ by the symmetric group ${\rm Sym}(d)$ of degree $d$,
and therefore
\begin{equation*} \label{eqn:numdis}
F(n;J) = |\Aut(GI(n;J))| = d! \,(F(n;J/d))^d.
\end{equation*}
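For instance (in a Python sketch, with spokes taken as cliques as above), $GI(6;2,2)$ has $d = \gcd(6,2,2) = 2$, and a direct component search confirms that it splits into two copies of the $6$-vertex graph $GI(3;1,1)$.

```python
from math import gcd
from functools import reduce

def gi_adj(n, J):
    """Adjacency lists of GI(n; J): clique spokes plus step-j_s layer edges."""
    t = len(J)
    adj = {(s, v): set() for s in range(t) for v in range(n)}
    for v in range(n):
        for s1 in range(t):
            for s2 in range(t):
                if s1 != s2:
                    adj[(s1, v)].add((s2, v))
    for s in range(t):
        for v in range(n):
            adj[(s, v)].add((s, (v + J[s]) % n))
            adj[(s, v)].add((s, (v - J[s]) % n))
    return adj

def component_sizes(adj):
    seen, sizes = set(), []
    for v in adj:
        if v in seen:
            continue
        comp, stack = {v}, [v]
        while stack:
            for w in adj[stack.pop()] - comp:
                comp.add(w)
                stack.append(w)
        seen |= comp
        sizes.append(len(comp))
    return sorted(sizes)

d = reduce(gcd, [2, 2], 6)          # d = gcd(6, 2, 2) = 2
```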
\subsection{The edge-transitive case}
\label{subs:ET}
The eight connected edge-transitive $GI$-graphs were given in Theorem~\ref{theorem:skewautom}.
Seven of them are generalized Petersen graphs, with $J = \{1,k\}$ for some non-zero $k \in \mathbb{Z}_n$,
and their automorphism groups are known --- see \cite{Frucht} or \cite{Lovrecic} for example.
For each of these seven graphs, all of which are cubic, there is an automorphism $\mu$ of order $3$
that fixes the vertex $(0,0)$ and induces a $3$-cycle on its neighbours $(1,0)$, $(0,1)$ and $(0,n-1)$.
In particular, this automorphism $\mu$ takes the spoke edge $\{(0,0),(1,0)\}$ to the layer edge $\{(0,0),(0,1)\}$,
and its effect on the other vertices is easily determined.
In the cases $(n,k) = (4,1)$, $(8,3)$, $(12,5)$ and $(24,5)$, where $n \equiv 0$ mod $4$
and $k^2 \equiv 1$ mod $n$, the three automorphisms $\rho$, $\tau$ and $\mu$ generate $A(n;J)$
and satisfy the defining relations
$$
\rho^n = \tau^2 = \mu^3 = (\rho\tau)^2 = (\rho\mu)^2 = (\tau\mu)^2 = [\rho^4,\mu] = 1
$$
for a group of order $12n$ which we may denote for the time being as $\Gamma(n,k)$,
although strictly speaking, the second parameter $k$ is not necessary.
Similarly in the case $(n,k) = (10,2)$, the three automorphisms $\rho$, $\tau$ and $\mu$
generate $A(n;J)$, which has order $12n$,
but they satisfy different defining relations, with the relation $[\rho^4,\mu] = 1$
replaced by $\mu\rho^{-1}\mu\rho^{2}\mu^{-1}\rho^{2}\tau = 1$.
In the other two cases (namely $(n,k) = (5,2)$ and $(10,3)$), the
automorphisms $\rho$, $\tau$ and $\mu$ generate a subgroup of index $2$ in $A(n;J)$,
which has order $24n$.
In summary, the automorphism groups of the eight connected edge-transitive $GI$-graphs and
their orders can be described as below:
\begin{equation*} \label{eqn:autet}
\begin{array}{rclcrcl}
\Aut(GI(4;1,1))& \cong &\Gamma(4,1) \ \cong \ S_4 \times \mathbb{Z}_2 & \quad & F(4;1,1)& = &48 \\[+2pt]
\Aut(GI(5;1,2))& \cong &S_5 & \quad & F(5;1,2)& = &120 \\[+2pt]
\Aut(GI(8;1,3))& \cong &\Gamma(8,3) & \quad & F(8;1,3)& = &96 \\[+2pt]
\Aut(GI(10;1,2))& \cong &A_5 \times \mathbb{Z}_2 & \quad & F(10;1,2)& = &120 \\[+2pt]
\Aut(GI(10;1,3))& \cong &S_5 \times \mathbb{Z}_2 & \quad & F(10;1,3)& = &240 \\[+2pt]
\Aut(GI(12;1,5))& \cong &\Gamma(12,5) & \quad & F(12;1,5)& = &144 \\[+2pt]
\Aut(GI(24;1,5))& \cong &\Gamma(24,5)& \quad & F(24;1,5)& = &288 \\[+2pt]
\Aut(GI(3;1,1,1))& \cong &(D_6 \times D_6) \rtimes \mathbb{Z}_2 & \quad & F(3;1,1,1)& = &72.
\end{equation*}
See \cite{Frucht} and/or \cite{Lovrecic} for further details.
\subsection{The case where $J$ is a set (with no repetitions)}
\label{subs:set}
Suppose $J$ is a set (and not a multiset), in standard form, and let $X = GI(n;J)$.
If $X$ is not connected, then sub-section~\ref{subs:disconnected} applies,
while if $X$ is connected and edge-transitive, then sub-section~\ref{subs:ET} applies,
so we will suppose that $X$ is connected but not edge-transitive.
Then by Corollary \ref{cor:preserve_layers}, we know that the automorphism group of $X$
is generated by the automorphisms $\rho$, $\tau$ and the set $\{\sigma_a : \, a \in A\}$,
where
$$
A = \{\,a \in \mathbb{Z}_n^{\,*}\ | \ a(J\cup-J) = J\cup -J\, \}.
$$
It is easy to see that $A$ is a subgroup of $\mathbb{Z}_n^{\,*}$.
Moreover, since $\sigma_1$ is trivial, $\sigma_{-1}=\tau$,
and $\sigma_a \sigma_b=\sigma_{ab}$ for all $a,b \in A$,
the set $S = \{\sigma_a : \, a \in A\}$ is a subgroup of $\Aut(X)$, isomorphic to $A$.
In particular, $S$ is abelian.
It is also easy to see that if composition of functions is read from left to right,
and $\alpha$ is the bijection satisfying $j_{\alpha(s)} = \pm a j_s$ for all $s \in \mathbb{Z}_t$,
then
$$(\rho \sigma_a)(s,v) = \sigma_a(s,v+1) = (\alpha(s),a(v+1)) = (\alpha(s),av+a) = \rho^a(\alpha(s),av)
= (\sigma_a \rho^a)(s,v)$$
for every vertex $(s,v)$, and so $\rho \sigma_a = \sigma_a \rho^a$ for all $a \in A$.
Rearranging, we have $\sigma_a^{-1}\rho \sigma_a = \rho^a$ for all $a \in A$,
which shows that every element of $S$ normalizes the cyclic subgroup of order $n$
generated by the rotation $\rho$.
Finally, again since $\tau = \sigma_{-1} \in S$, this implies that the automorphism group
of $X = GI(n;J)$ is a semi-direct product:
$$
A(n;J) \ = \ \langle\, \{\rho\} \cup S \,\rangle \ \cong \ \langle \rho\rangle \rtimes S \ \cong \ C_n \rtimes A,
\quad \hbox{ of order } \, F(n;J) = n|A|.
$$
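As a concrete check of this count: for $GI(7;\{1,2,3\})$ we have $J \cup -J = \mathbb{Z}_7^{\,*}$, so $A = \mathbb{Z}_7^{\,*}$ and the formula predicts $F(7;J) = 7 \cdot 6 = 42$. The Python sketch below (an illustrative verification, not a proof) closes $\{\rho\} \cup \{\sigma_a : a \in A\}$ under composition and confirms that the resulting permutation group has exactly $42$ elements.

```python
from math import gcd

n, J = 7, [1, 2, 3]
V = [(s, v) for s in range(len(J)) for v in range(n)]

def rep(x):
    """Representative of {x, -x} mod n lying in 0..n//2."""
    return min(x % n, (-x) % n)

def sigma(a):
    """sigma_a(s, v) = (alpha(s), a*v), with alpha(s) the layer of step +-(a j_s)."""
    return lambda s, v: (J.index(rep(a * J[s])), (a * v) % n)

rho = lambda s, v: (s, (v + 1) % n)
A = [a for a in range(1, n) if gcd(a, n) == 1
     and sorted(rep(a * j) for j in J) == J]

# close {rho} u {sigma_a : a in A} under composition, as tables on V
idx = {u: i for i, u in enumerate(V)}
tables = {tuple(idx[f(*u)] for u in V) for f in [rho] + [sigma(a) for a in A]}
gens = list(tables)
frontier = list(tables)
while frontier:
    p = frontier.pop()
    for q in gens:
        r = tuple(p[i] for i in q)   # the composite "first q, then p"
        if r not in tables:
            tables.add(r)
            frontier.append(r)
```

The closure has $42 = n\,|A|$ elements, in agreement with the formula; and since $GI(7;1,2,3)$ is connected and not edge-transitive, the result above says this generated group is in fact all of $\Aut(X)$.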
\subsection{The general case}
\label{subs:general}
In this sub-section we deal with all remaining possibilities, in which $J$ is a multiset
with repeated elements, in standard form, and $X = GI(n;J)$ is connected but not edge-transitive.
Here we need two new sets of parameters, namely the multiplicity $m_j$ in $J$ of each element $j$
from the underlying set of $J$ (that is, the number of $s \in \mathbb{Z}_t$ for which $j_s = j$),
and $d_j = \gcd(n,j)$ for all such $j$.
Also we need the set $B$ of all $a \in \mathbb{Z}_n^{\,*}$ with the property that
$aJ =\{\pm j_0, \pm j_1, \dots, \pm j_{t-1}\}$.
Note that this is always a subgroup of $\mathbb{Z}_n^{\,*}$, but is not always the same as the
subgroup $A = \{\,a \in \mathbb{Z}_n^{\,*}\ | \ a(J\cup-J) = J\cup -J\, \}$
that we took in the previous sub-section, since the multiplicities of $j$ and $\pm aj$ in $J$
might not be the same for some $a \in A$, but clearly they must be the same for every $a \in B$.
Now by Corollary~\ref{cor:preserve_layercycles} we know that the automorphism
group of $X$ is generated by the automorphisms $\rho$ and $\tau$,
the automorphisms $\sigma_a$ for $a \in B$ (as defined
in Proposition~\ref{propn:change_layers}), and the automorphisms
$\lambda_{i,s,s^\prime}$ (as defined in Proposition~\ref{propn:mix_layers}) that mix cycles.
Just as in the previous case, the set $S = \{\sigma_a : \, a \in B\}$ is a subgroup
of $\Aut(X)$, isomorphic to the subgroup $B$ of $\mathbb{Z}_n^{\,*}$.
Again also we have $\sigma_a^{-1}\rho \sigma_a = \rho^a$ for all $a \in B$,
and so every element of $S$ normalizes the cyclic subgroup of order $n$
generated by the rotation $\rho$.
Next, for each $j \in J$, define $\Omega_j = \{ s \in \mathbb{Z}_t \ | \ j_s = j\,\}$,
which is a set of size $m_j$, and for the time being, let $d = d_j = \gcd(n,j)$.
Also define $\Omega_{ji} = \{ (s,v) \in V(X) \ | \ s \in \Omega_j, \ v \equiv i \ {\rm mod} \ d \,\}$,
for $j \in J$ and $i \in \mathbb{Z}_d$ (where $d = \gcd(n,j)$).
Note that $|\Omega_{ji}| = m_{j} n/d$, because $\Omega_{ji}$ is like a strip
of vertices across $m_j$ layers of $X$, containing the $n/d$ vertices of one cycle
from each of these layers.
By Proposition~\ref{propn:mix_layers}, for every two distinct $s_1,s_2$ in $\Omega_j$
and every $i \in \mathbb{Z}_d$,
there exists an involutory automorphism $\lambda_{i,s_1,s_2}$ that exchanges one of the
$d$ cycles from layer $L_{s_1}$ with the corresponding cycle from layer $L_{s_2}$,
and preserves every spoke.
This automorphism induces a transposition on the set of $m_j$ layer cycles
containing the vertices of the set $\Omega_{ji}$. If we let the pair $\{s_1,s_2\}$ vary,
we get all such transpositions, and hence for fixed $i \in \mathbb{Z}_d$, the automorphisms
$\lambda_{i,s_1,s_2}$ with $s_1,s_2$ in $\Omega_j$ generate a subgroup $T_i$ isomorphic
to the symmetric group ${\rm Sym}(m_j)$, acting with $n/d$ orbits of length $m_j$ on $\Omega_{ji}$
and fixing all other vertices.
Moreover, for any two distinct $i_1,i_2$ in $\mathbb{Z}_d$, the elements of $T_{i_1}$ and $T_{i_2}$
move disjoint sets of vertices (namely $\Omega_{ji_1}$ and $\Omega_{ji_2}$), and hence
commute with each other. Hence the subgroup $T_j$ generated by all of the automorphisms
$\lambda_{i,s_1,s_2}$ with $s_1,s_2$ in $\Omega_j$ is isomorphic to the direct
product of $d$ copies of ${\rm Sym}(m_j)$, one for each value of $i$ in $\mathbb{Z}_d$.
Similarly, for any two distinct $j,j'$ in $J$, the corresponding subgroups $T_{j}$
and $T_{j'}$ move disjoint sets of vertices (from disjoint sets of layers of $X$),
and hence commute with each other, so the subgroup $N$ generated by the set of
all of the automorphisms $\lambda_{i,s_1,s_2}$ is a direct product
$\Pi_{j \in J\,} T_j \cong \Pi_{j \in J\,} (S_{m_j})^{d_{j}}$, of order $\Pi_{j \in J\,} ({m_j}!)^{d_{j}}$.
On the other hand, for fixed $s_1$ and $s_2$ in $\Omega_j$, we have
$$
\rho^{-1}{\lambda_{i,s_1,s_2\,}}{\rho} = \lambda_{i+1,s_1,s_2}
\quad \ \hbox{ and } \ \quad
\tau^{-1}{\lambda_{i,s_1,s_2\,}}{\tau} = \lambda_{-i,s_1,s_2}
\quad \ \hbox{ for all } \, i \in \mathbb{Z}_d,
$$
so the automorphisms $\lambda_{i,s_1,s_2}$ are permuted among themselves in
a cycle under conjugation by the rotation $\rho$,
and fixed or interchanged in pairs under conjugation by the reflection $\tau$.
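These conjugation relations can be confirmed mechanically. The Python fragment below checks them on $GI(6;2,2)$, where $d = 2$, taking $\lambda_{i,0,1}$ to be the map that swaps $(0,v)$ and $(1,v)$ exactly when $v \equiv i$ mod $2$ (our reading of Proposition~\ref{propn:mix_layers}), and composing functions left to right as earlier.

```python
n, d = 6, 2          # GI(6; 2, 2): two layers of step 2, and d = gcd(6, 2) = 2
V = [(s, v) for s in range(2) for v in range(n)]

rho     = lambda s, v: (s, (v + 1) % n)
rho_inv = lambda s, v: (s, (v - 1) % n)
tau     = lambda s, v: (s, (-v) % n)           # tau is its own inverse

def lam(i):
    """lambda_{i,0,1}: swap (0, v) <-> (1, v) exactly when v = i mod d."""
    return lambda s, v: (1 - s, v) if v % d == i % d else (s, v)

def seq(*fs):
    """Left-to-right composition: apply fs[0] first, then fs[1], and so on."""
    def h(s, v):
        for f in fs:
            s, v = f(s, v)
        return (s, v)
    return h

def same(f, g):
    return all(f(*u) == g(*u) for u in V)
```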
Finally if $a \in B\setminus \{\pm 1\}$,
and $j'$ is the element of (the underlying set of) $J$ congruent to $\pm aj$ mod $n$,
then the automorphism $\sigma_a$ defined in Proposition~\ref{propn:change_layers}
takes the layers $L_s$ for $s \in \Omega_j$ to the layers $L_{s'}$ for $s' \in \Omega_{j'}$,
and conjugates the subgroup $T_j$ (generated by those $\lambda_{i,s_1,s_2}$
with $s_1,s_2$ in $\Omega_j$) to the corresponding subgroup $T_{j'}$.
Hence $\sigma_a$ normalises the subgroup $N = \Pi_{j \in J\,} T_j$.
Thus $N$ is normalised by $\rho$ and $\tau$ ($=\sigma_{-1}$) and all the other $\sigma_a$,
and is therefore normal in $\Aut(X)$. It follows that
$$
A(n;J) \ = \ \langle\, N \cup \{\rho\} \cup S \,\rangle \ \cong \ N \rtimes \langle \rho\rangle \rtimes S
\ \cong \ \prod_{j \in J\,} (S_{m_j})^{d_{j}} \rtimes C_n \rtimes B,
$$
of order
$$
F(n;J) \ = \ n\,|B|\prod_{j \in J} (m_{j}!)^{d_{j}},
$$
where the products are taken over all $j$ from the underlying set of $J$,
without multiplicities.
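As a small numerical check (in Python, under the same conventions, and taking $\lambda_{0,0,1}$ to swap the two layers pointwise): for $GI(5;1,1)$, the pentagonal prism, we have $B = \{\pm 1\}$, $m_1 = 2$ and $d_1 = \gcd(5,1) = 1$, so the formula predicts $F(5;1,1) = 5 \cdot 2 \cdot 2! = 20$; and indeed the prism's automorphism group is $D_5 \times \mathbb{Z}_2$, of order $20$.

```python
n, J = 5, [1, 1]
V = [(s, v) for s in range(len(J)) for v in range(n)]

rho = lambda s, v: (s, (v + 1) % n)            # rotation
tau = lambda s, v: (s, (-v) % n)               # reflection
lam = lambda s, v: (1 - s, v)                  # lambda_{0,0,1}: d_1 = 1, so it
                                               # swaps the two layer cycles

# close the generators under composition, as permutation tables on V
idx = {u: i for i, u in enumerate(V)}
tables = {tuple(idx[f(*u)] for u in V) for f in (rho, tau, lam)}
gens = list(tables)
frontier = list(tables)
while frontier:
    p = frontier.pop()
    for q in gens:
        r = tuple(p[i] for i in q)             # "first q, then p"
        if r not in tables:
            tables.add(r)
            frontier.append(r)
```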
\subsection{Summary}
\label{subs:summary}
Combining the results from the four sub-sections above gives an algorithm
for computing the automorphism groups of $GI$-graphs, and their orders, in general.
\section{Vertex-transitive $GI$-graphs}
\label{vertextrans}
In this section we consider further symmetry properties of $GI$-graphs.
By Corollary \ref{corollary:edgetransitive}, we know there are only eight different
connected edge-transitive $GI$-graphs $GI(n;J)$ having two or more layers.
In particular, there are no such graphs with four or more layers.
In contrast, we will show that there are several vertex-transitive $GI$-graphs,
by giving a classification of them.
Note that the graph $GI(n;J)$ will be vertex-transitive if we are able to permute
the layers of $GI(n;J)$ transitively among themselves.
Now for each unit $a \in \mathbb{Z}_n^{\,*}$, consider multiplication of the (multi)set $J$ by $a$.
If this preserves $J$ up to sign (that is, if $aJ =\{\pm j_0, \pm j_1, \dots, \pm j_{t-1}\}$ as a multiset), then
by Proposition~\ref{propn:change_layers} it gives an automorphism $\sigma_a$ of $GI(n;J)$, permuting the layers.
The graph $GI(n;J)$ will be vertex-transitive if the group generated by all
such $\sigma_a$ acts transitively on the layers.
\begin{Theorem}
\label{thm:vtsubgroup}
Let $J$ be any subset of $\mathbb{Z}_n^{\,*}$ with the two properties that {\em (a)} $J \cap -J = \emptyset$,
and {\em (b)} $J \cup -J$ is a $($multiplicative$)$ subgroup of $\mathbb{Z}_n^{\,*}$.
Then $GI(n;J)$ is vertex-transitive.
\end{Theorem}
\begin{proof}
Since $J \cup -J$ is a subgroup of $\mathbb{Z}_n^{\,*}$ (not containing $0$), multiplication by any $a \in J$
maps $J \cup -J$ bijectively to itself, so $aJ =\{\pm j_0, \pm j_1, \dots, \pm j_{t-1}\}$,
and hence Proposition~\ref{propn:change_layers} gives an automorphism $\sigma_a$ of $GI(n;J)$.
Moreover, for any $a,b \in J$ there exists $c \in J$ such that $ac=\pm b$ in $\mathbb{Z}_n$,
and in this case, the automorphism $\sigma_c$ takes any layer $s$ with $j_s = a$
to a layer $s'$ with $j_{s'} = \pm b$.
It follows that the group generated by $\{\,\sigma_a\! : a\in J \,\}$ acts transitively
on the layers of $GI(n;J)$, and hence that the group generated by $\{\rho\} \cup \{\,\sigma_a\! : a\in J \,\}$
acts transitively on the vertices of $GI(n;J)$.
\end{proof}
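For example, with $n = 13$ and $J = \{1,5\}$ the hypotheses hold, since $J \cup -J = \{\pm 1, \pm 5\}$ is the subgroup $\langle 5 \rangle$ of $\mathbb{Z}_{13}^{\,*}$ (note $5^2 \equiv -1$ mod $13$). The Python sketch below verifies vertex-transitivity directly, by checking that the orbit of $(0,0)$ under $\rho$ and $\sigma_5$ is the whole vertex set of $GI(13;1,5)$.

```python
n, J = 13, [1, 5]
V = [(s, v) for s in range(len(J)) for v in range(n)]

def rep(x):
    """Representative of {x, -x} mod n lying in 0..n//2."""
    return min(x % n, (-x) % n)

rho = lambda s, v: (s, (v + 1) % n)
# sigma_5: here rep(5*1) = 5 (layer 1) and rep(5*5) = rep(12) = 1 (layer 0),
# so alpha swaps the two layers
sigma5 = lambda s, v: (J.index(rep(5 * J[s])), (5 * v) % n)

# orbit of (0, 0) under repeated application of the two generators
orbit, stack = {(0, 0)}, [(0, 0)]
while stack:
    u = stack.pop()
    for f in (rho, sigma5):
        w = f(*u)
        if w not in orbit:
            orbit.add(w)
            stack.append(w)
```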
\begin{Corollary}
\label{corollary:VT}
Let $A$ be any subgroup of the multiplicative group $\mathbb{Z}_n^{\,*}$
containing an element of $\mathbb{Z}_n\setminus \{\pm 1\}$.
If $-1 \in A$, then take $J = A \cap \{1,2,\dots, \lfloor \frac{n-1}{2} \rfloor \}$
$($so that $A = J \cup -J)$,
while if $-1 \not\in A$, let $J = A$.
Then $GI(n;J)$ is vertex-transitive.
Hence for every integer $n > 6$, there exists at least one vertex-transitive $GI$-graph
of the form $GI(n;J)$ for some $J$ with $|J| > 1$.
\end{Corollary}
Note that the above requires $\phi(n) = |\mathbb{Z}_n^{\,*}|$ to be at least $4$,
so that $n > 4$ and $n \ne 6$, in order for there to be at least two layers.
A sub-family consists of those for which $A$ is the cyclic
subgroup $\{1,r,r^2, \ldots, r^{t-1}\}$ generated by a single
unit $r \in \mathbb{Z}_n^{\,*}\setminus \{\pm 1\}$.
An example is given in Figure~\ref{fig:twoGIgraphs}(b), with $n = 7$ and $r = 2$
(and $2^2 \equiv 4 \equiv -3$ mod $7$).
\begin{figure}
\caption{The graph $GI(5;1,1,2)$ in (a) has 5-cycles as its three layers but is not
vertex-transitive, while the graph $GI(7;1,2,3)$ in (b) is vertex-transitive and has two edge
orbits.}
\label{fig:twoGIgraphs}
\end{figure}
Next, we say that a subset $J=\{j_0,j_1, \dots, j_{t-1}\} $ of $\mathbb{Z}_n$ is \emph{primitive}
if $1 \in J$ and $j_i \ne \pm j_k$ whenever $i \ne k$.
Also we say that the graph $GI(n;J)$ is \emph{primitive} if $J$ is a primitive subset of $\mathbb{Z}_n$.
Note that any such graph is connected, since $1 \in J$.
\begin{Theorem}
\label{thm:primVT}
A primitive $GI$-graph $GI(n;J)$ is vertex-transitive if and only if
either $J \,\cup \,-J$ is a $($multiplicative$)$ subgroup of $\mathbb{Z}_n^{\,*}$,
or $n=10$ and $J=\{1,2\}$.
\end{Theorem}
\begin{proof}
First, it was shown in \cite{Frucht} that $GI(10;1,2)$ is vertex-transitive.
Also by Theorem \ref{thm:vtsubgroup}, we know that $GI(n;J)$ is
vertex-transitive when $J \cup -J$ is a subgroup of $\mathbb{Z}_n^{\,*}$.
Conversely, suppose that $X=GI(n;J)$ is a primitive vertex-transitive
$GI$-graph, other than $GI(10;1,2)$.
We have to show that $J \cup -J$ is a subgroup of $\mathbb{Z}_n^{\,*}$.
Since $X$ is primitive, we have $1 \in J$,
and without loss of generality we may assume that $j_0=1$.
By Theorem~\ref{theorem:skewautom}, we know that if $X$ has a skew automorphism,
then either $t = 2$ and $(n,j_1) = (4,1)$, $(5,2)$, $(8,3)$, $(10, 2)$, $(10, 3)$, $(12, 5)$
or $(24, 5)$, or $t = 3$ and $(n,j_1,j_2)= (3,1,1)$.
It is easy to see that $J \cup -J$ is a subgroup of $\mathbb{Z}_n^{\,*}$ in all of these
cases except $(n,t,j_0,j_1) = (10,2,1, 2)$. Hence we may assume
that $X$ has no skew automorphism, and therefore every automorphism of $X$
preserves the fundamental edge-partition.
Now because $X$ is vertex-transitive, and the layer $L_0$ is a single $n$-cycle,
it follows that all the layers of $X$ must be cycles, and so every element of $J$ must be
coprime to $n$, and therefore a unit mod $n$.
In particular, there are no automorphisms that `mix' cycles from different layers.
In fact, since $J$ is primitive, $J \cup -J$ contains $2t$ distinct elements,
and it follows that $X$ has no automorphisms of the form given in
Proposition~\ref{propn:mix_layers} or Corollary~\ref{cor:exchange_layers}.
Hence (by Theorem~\ref{thm:joymorris}) the only automorphisms that preserve the spoke
$S_0$ are the automorphisms $\sigma_a$ given in Corollary \ref{cor:preserve_layers}.
But for any $x \in J$ (say $x = j_s$), by vertex-transitivity there exists an
automorphism of $X$ that maps the vertex $(0,0)$ to the vertex $(s,0)$,
and this must be one of the automorphisms $\sigma_a$, where $a$ is a unit in $\mathbb{Z}_n$
and $a(J \cup -J) = J \cup -J$.
In particular, since $\sigma_a$ takes $(0,v)$ to $(\alpha(0),a v)$ for all $v \in \mathbb{Z}_n$,
we have $\alpha(0) = s$ and therefore $x = j_s = j_{\alpha(0)} = \pm a j_0 = \pm a$,
which gives $x(J \cup -J) = \pm a(J \cup -J) = \pm (J \cup -J) = J \cup -J$, for every $x \in J$.
Thus $J \cup -J$ is closed under multiplication, and by finiteness (and the fact that
every element of $J$ is a unit mod $n$), it follows that $J \cup -J$ is a subgroup of
$\mathbb{Z}_n^{\,*}$, as required.
\end{proof}
We will now find some other examples, and show that every vertex-transitive $GI$-graph has a special form.
To do that, we introduce some more notation:
we denote by $[k]J$ the concatenation of $k$ copies of the multiset $J$.
Note that this may involve a non-standard ordering of the elements of $[k]J$,
but it makes the proofs of some things in this and the next section easier to explain
--- specifically, Theorem~\ref{thm:classvt} and Lemmas~\ref{lem:multipleCayley}
and~\ref{lem:doublenonCayley}.
\begin{Theorem}
\label{thm:classvt}
Let $X=GI(n;J)$ be any connected vertex-transitive $GI$-graph.
Then
\\[+2pt]
{\em (a)} If $\Aut(X)$ has a vertex-transitive subgroup that preserves the
fundamental edge-partition of $X$,
then $GI(n;[k]J)$ is vertex-transitive for every positive integer $k$.
\\[+2pt]
{\em (b)} All elements in $J \cup -J$ have the same multiplicity, say $k_0$,
and $($so conversely$)$ the graph $X = GI(n;J)$ is isomorphic to $GI(n;[k_0]J_0)$
for some primitive subset $J_0$ of $\mathbb{Z}_n$, such that $GI(n;J_0)$ is vertex-transitive.
\end{Theorem}
\begin{proof}
Let $X = GI(n;J) = GI(n; j_0,j_1,...,j_{t-1})$, and let $Y = GI(n;[k]J)$.
Note that the vertex-set of $Y$ is $\mathbb{Z}_{kt} \times \mathbb{Z}_n$,
and we can write $[k]J = (j_0,j_1,...,j_{kt-1})$, where $j_c = j_d$
whenever $c \equiv d$ mod $t$, and accordingly,
we can write each member $s$ of $\mathbb{Z}_{kt}$ in the form $at+b$
where $a \in \mathbb{Z}_k$ and $b \in \mathbb{Z}_t$.
Also note that any permutation $f$ of $\mathbb{Z}_k$ gives rise to a corresponding
permutation $\widetilde{f}$ of $\mathbb{Z}_{kt}$, defined by setting
$\widetilde{f}(at+b) = f(a)t+b$ for all $a \in \mathbb{Z}_k$ and all $b \in \mathbb{Z}_t$,
and in fact gives rise to an automorphism $\theta = \theta_f$ of $Y = GI(n;[k]J)$, defined by
\\[-14pt]
\begin{center}
$ \theta_f(at+b,v) = (\widetilde{f}(at+b),v) = (f(a)t+b,v)
\quad \mbox{for all} \ a \in \mathbb{Z}_k, \ b \in \mathbb{Z}_t \, \mbox{ and }\, v \in \mathbb{Z}_n.$
\\[+6pt]
\end{center}
It is easy to see that $\theta_f$ preserves the edges of each spoke $S_v$,
and permutes the layers among themselves.
In fact $\theta_f$ takes $L_{at+b}$ to $L_{f(a)t+b}$ for all $a \in \mathbb{Z}_k$ and all $b \in \mathbb{Z}_t$,
and hence $\theta_f$ preserves each of the sets $\{L_s : s \in \mathbb{Z}_{kt} \, | \, s \equiv b \ {\rm mod} \ t\,\}$ for $b \in \mathbb{Z}_t$.
It follows that given any two layers $L_c = \{(c,v) : v \in \mathbb{Z}_n\}$ and
$L_d = \{(d,v) : v \in \mathbb{Z}_n\}$ with $c \equiv d$ mod $t$, there exists an
automorphism $\theta$ of $Y$ taking $L_c$ to $L_d$.
In particular, since $\Aut(Y)$ is transitive on vertices of each layer
(as is the automorphism group of every $GI$-graph), we find that $\Aut(Y)$ has
at most $t$ orbits on vertices of $Y$.
We can now prove (a), by extending certain automorphisms of $X$ to automorphisms
of $Y$ that make it vertex-transitive.
Let $\xi$ be any automorphism of $X$ that respects the fundamental edge-partition.
Define a permutation
$\pi = \pi_{\xi}$ of the vertex set of $Y$ by letting
$$
\pi(at+b,v) = (at+c,w) \quad \mbox{ whenever } \, \xi(b,v) = (c,w),
$$
for all $a \in \mathbb{Z}_k$, all $b \in \mathbb{Z}_t$, and all $v \in \mathbb{Z}_n$.
If $e$ is a spoke edge of $Y$, say from $(at+b,v)$ to $(a't+b',v)$,
and $(c,w) = \xi(b,v)$, then since $\xi$ takes spoke edges to spoke edges in $X$,
we see that $\xi(b',v) = (c',w)$ for some $c' \in \mathbb{Z}_t$, and so by definition
$\pi(a't+b',v) = (a't+c',w)$, which is a neighbour of $(at+c,w)$.
Thus $\pi$ takes the edge $e$ to the spoke edge in $Y$ from $(at+c,w)$ to $(a't+c',w)$.
Similarly, if $e$ is a layer edge of $Y$, say from $(at+b,v)$ to $(at+b,z)$,
with $z = v+j_b$ (since $j_d = j_{d'}$ whenever $d \equiv d'$ mod $t$),
and $(c,w) = \xi(b,v)$, then since $\xi$ permutes the layers of $X$,
we know that $\xi$ takes the neighbour $(b,z) = (b,v+j_b)$
of $(b,v)$ on the same layer of $X$ as $(b,v)$ to a neighbour of $(c,w)$ on
the same layer of $X$ as $(c,w)$, namely $(c,w \pm j_c)$.
Hence by definition, $\pi(at+b,z) = (at+c,w \pm j_c)$,
which is a neighbour of $(at+c,w)$ in $Y$ because $j_{at+c} = j_c$.
Thus $\pi$ takes $e$ to a layer edge from $(at+c,w)$
to $(at+c,w \pm j_c)$ in $Y$.
In particular, since $\pi$ preserves both the set of all spoke edges of $Y$
and the set of all layer edges of $Y$, we find that $\pi = \pi_{\xi}$ is an automorphism of $Y$.
Moreover, since $\xi$ can be chosen to take any layer of $X$ to any other layer of $X$,
it follows that the subgroup of $\Aut(Y)$ generated by the automorphisms $\theta_f$
and $\pi_{\xi}$ found above is transitive on layers of $Y$, and hence $Y$ is vertex-transitive.
Next we prove (b), namely that all elements in $J \cup -J$ have the same multiplicity,
say $k_0$, and $X$ is isomorphic to $GI(n;[k_0]J_0)$ for some
primitive subset $J_0$ of $\mathbb{Z}_n$ such that $GI(n;J_0)$ is vertex-transitive.
If $X$ is edge-transitive, then by Theorem~\ref{theorem:skewautom} we have
$(n;J) = (4;1,1)$, $(5;1,2)$, $(8;1,3)$, $(10;1,2)$, $(10;1,3)$, $(12;1,5)$, $(24;1,5)$
or $(3;1,1,1)$. In the first case, we can take $k_0 = 2$ and $J_0 = \{1\}$, and
observe that $GI(n;J_0) = GI(4;1)$, which is simply a $4$-cycle, and vertex-transitive.
Similarly, in the last case, we can take $k_0 = 3$ and $J_0 = \{1\}$, and
observe that $GI(n;J_0) = GI(3;1)$, which is a $3$-cycle, and vertex-transitive.
In all the other six cases, we can take $k_0 = 1$ and $J_0 = J$, and note that
$X = GI(n;J)$ itself is vertex-transitive. Thus (b) holds in all eight cases, and so from
now on, we may assume that $X$ is not edge-transitive, and hence that
every automorphism of $X$ respects the fundamental edge-partition.
This implies that $\Aut(X)$ is transitive on the layers of $X$, and it follows that
all the layer cycles have the same length, so $\gcd(n,j_s)=\gcd(n,j_0)$ for all $s \in \mathbb{Z}_t$.
But on the other hand, $X = GI(n;J)$ is connected, so $\gcd(n,j_0,j_1,\dots,j_{t-1})=1$.
Thus $\gcd(n,j_s) = 1$ for all $s \in \mathbb{Z}_t$.
In particular, there exists $a \in \mathbb{Z}_n^{\,*}$ such that $1 = aj_0 \in aJ$.
Now by Theorem~\ref{thm:multiply}, the graph $X = GI(n;J)$ is isomorphic to $GI(n;aJ)$,
and therefore we can replace $J$ by $aJ$, or more simply, suppose that $1 \in J$.
If all the elements of $J$ are the same, then $X = GI(n;J)$
is isomorphic to $GI(n; [t]\{1\})$, and then since the set $\{1\}$ is primitive
and $GI(n; 1)$ is simply an $n$-cycle, again (b) holds.
So now suppose that not all elements of $J$ are the same.
For any two distinct $j_i, j_s \in J$, there must be an automorphism $\sigma_a$
that takes layer $L_i$ to layer $L_s$, by Corollary \ref{cor:preserve_layers}.
In this case $a(J \cup -J) = J \cup -J$, by definition of $\sigma_a$,
and therefore the multiplicities of $j_i$ and $j_s$ are the same.
Hence all elements of $J \cup -J$ have the same multiplicity, say $k_0$.
In particular, $J=[k_0]J_0$ where $J_0$ is the underlying set of $J$,
and $X$ is isomorphic to $GI(n;[k_0]J_0)$.
The set $J_0$ is primitive since it contains 1 and all of its elements are distinct.
To finish the proof, all we have to do is show that $GI(n;J_0)$ is vertex-transitive.
But that is easy: for any two distinct $j_i,j_s \in J_0$, we know that
there exists an automorphism $\sigma_a$ of $X$ taking layer $L_i$ of $X$ to layer $L_s$
of $X$, and $a(J \cup -J) = J \cup -J$; it then follows that $a(J_0 \cup -J_0)=J_0 \cup -J_0$,
and therefore $\sigma_a$ induces an automorphism of $GI(n;J_0)$ that takes
layer $L_i$ of $GI(n;J_0)$ to layer $L_s$ of $GI(n;J_0)$, as required.
\end{proof}
Note that the theorem above applies only to connected $GI$-graphs.
Disconnected vertex-transitive $GI$-graphs are just disjoint unions of connected
vertex-transitive $GI$-graphs, and can be dealt with accordingly.
We finish this section with observations about the graphs $GI(5;1,2)$
and $GI(10;1,2)$.
The Petersen graph $GI(5;1,2)$ is vertex-transitive, and its automorphism group
acts transitively on the two layers; in fact so does a subgroup of order 20 which
preserves the set of its ten layer edges.
By Theorem \ref{thm:classvt}, it follows that every $GI$-graph of the
form $GI(5;1,2,1,2,\ldots,1,2)$ is vertex-transitive.
On the other hand, the automorphism group of the dodecahedral
graph $GI(10;1,2)$ has no layer-transitive subgroup preserving the
set of layer edges (and the set of spoke edges), and so the above theorem
does not apply to it.
In fact $GI(10;[k]\{1,2\})$ is not vertex-transitive for any $k > 1$: since $2$ is
not a unit mod $10$, the automorphism group has two orbits on layers.
The graph $GI(10;1,2)$ is the only such exception, since for every other
vertex-transitive $GI$-graph $X$, either $\Aut(X)$ itself preserves the
fundamental edge-partition, or $X$ is edge-transitive and is then
one of the other seven graphs in Theorem~\ref{theorem:skewautom},
and for each of those, the subgroup of $\Aut(X)$ preserving the fundamental
edge-partition is layer-transitive.
\section{Cayley $GI$-graphs}
\label{cayley}
In this section we characterise the $GI$-graphs that are Cayley graphs.
First, a Cayley graph ${\rm Cay}(G,S)$ is a graph whose vertices can be labelled
with the elements of some group $G$, and whose edges correspond to multiplication
by the elements of some subset $S$ or their inverses. In particular, the edges of
${\rm Cay}(G,S)$ may be taken as the pairs $\{g,sg\}$ for all $g \in G$ and all $s \in S$,
and then the group $G$ acts naturally as a group of automorphisms of ${\rm Cay}(G,S)$
by right multiplication. This action is transitive on vertices, indeed regular on vertices:
for any ordered pair $(u,v)$ of vertices, there is a unique element of $G$ taking $u$
to $v$ (namely $g = u^{-1}v$).
Alternatively, a Cayley graph is any (regular) graph $X$ whose automorphism group
has a subgroup $G$ that acts regularly on vertices. In that case, any particular
vertex can be labelled with the identity element of $G$, and the subset $S$ can be taken
as the set of all $s \in G$ taking that vertex to one of its neighbours.
Note that under both definitions, the Cayley graph is connected if and only if the
set $S$ generates the group $G$. Also note that every Cayley graph is vertex-transitive
(by definition), and that every non-trivial element of the subgroup $G$ fixes no vertices
of the graph.
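The two equivalent definitions above are easy to exercise computationally. The following minimal sketch (our illustration, not from the paper) builds a small Cayley graph as an edge set and checks that right multiplication acts as a vertex-regular group of automorphisms; the choice $G = \mathbb{Z}_8$ with $S = \{1,2\}$ is an arbitrary small example.

```python
# Hypothetical small example: G = Z_8 under addition, S = {1, 2}.
n = 8
S = {1, 2}
edges = {frozenset((g, (g + s) % n)) for g in range(n) for s in S}

# Right multiplication (here: translation) by any h in G preserves the edge set.
for h in range(n):
    shifted = {frozenset(((a + h) % n, (b + h) % n)) for a, b in map(tuple, edges)}
    assert shifted == edges

# The action is regular on vertices: exactly one translation takes u to v,
# namely h = v - u.
u, v = 3, 6
assert [h for h in range(n) if (u + h) % n == v] == [(v - u) % n]
```

Connectedness of this example corresponds to the fact that $S = \{1,2\}$ generates $\mathbb{Z}_8$.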
Now suppose $X= GI(n;J)$ is a vertex-transitive $GI$-graph.
We will assume that $X$ is connected, because if it is not, then it is simply a disjoint
union of isomorphic copies of a connected smaller example.
In particular, by Theorem~\ref{thm:classvt}, we know that either $J$ is primitive
(and $X$ is one of the graphs given by Theorem~\ref{thm:primVT}),
or all elements in $J \cup -J$ have the same multiplicity $k_0 > 1$ and then $X$ is isomorphic
to $GI(n;[k_0]J_0)$ for some primitive subset $J_0$ of $\mathbb{Z}_n$ such that $GI(n;J_0)$ is
vertex-transitive.
Also we will suppose that $X$ is not $GI(10;1,2)$, for reasons related
to Theorem~\ref{thm:primVT}.
In fact, of the seven generalized Petersen graphs among the eight edge-transitive
$GI$-graphs listed in Theorem~\ref{theorem:skewautom}, it is known by the main
result of \cite{NedelaSkoviera} or \cite{Lovrecic1} that $G(4,1)$, $G(8,3)$, $G(12, 5)$
and $G(24, 5)$ are Cayley graphs, while $G(5,2)$, $G(10, 2)$ and $G(10, 3)$ are not.
Most of this (and the fact that the eighth edge-transitive graph $GI(3;1,1,1)$ is a Cayley graph)
will actually follow from what we prove below.
Consider the case where $J$ is primitive (as we defined in Section~\ref{vertextrans}).
In this case, $J \,\cup \,-J$ is a subgroup of $\mathbb{Z}_n^{\,*}$
under multiplication, and also $|\!\Aut(X)| = n|J \cup -J| = 2n|J| = 2|V(X)|$.
Hence if $G$ is a subgroup of $\Aut(X)$ that acts regularly on vertices of $X$,
then $G$ is a subgroup of index $2$ in $\Aut(X)$.
On the other hand, $G$ cannot contain the element $\tau$, since $\tau$ is
a non-trivial automorphism with fixed points (namely the vertices $(s,0)$ for all $s$),
and it follows that $G$ must be generated by the rotation $\rho$ and some
subgroup of index $2$ in $\{\sigma_a : \, a \in J \cup -J\}$ not containing $\sigma_{-1} = \tau$.
The latter has to be of the form $\{\sigma_a : \, a \in K\}$ for some subgroup
$K$ of $J \cup -J$, such that $-1 \notin K$.
Conversely, if $K$ is a subgroup of index $2$ in $J \cup -J$
not containing $-1$, then the group generated by $\{\sigma_a : \, a \in K\}$ permutes
the layers of $X$ transitively, and so the subgroup generated
by $\{\rho\} \cup \{\sigma_a : \, a \in K\}$ acts regularly on $V(X)$.
Thus we have the following:
\begin{Proposition}
\label{thm:primCayley}
If $J$ is primitive and $J \cup -J$ is a multiplicative subgroup of $\mathbb{Z}_n^{\,*},$
then $GI(n;J)$ is a Cayley graph if and only if $J \cup -J$
has a subgroup of index $2$ that does not contain $-1$.
\end{Proposition}
Note that this gives infinitely many examples of $GI$-graphs that are Cayley graphs,
including those where $n$ is a prime congruent to $3$ mod $4$ and $J$ is the
subgroup of all squares in $\mathbb{Z}_n^{\,*}$.
On the other hand, it also gives infinitely many vertex-transitive $GI$-graphs that are not Cayley
graphs, including those where $n$ is a prime congruent to $1$ mod $4$
and $J \, \cup\, -J = \mathbb{Z}_n^{\,*} = \mathbb{Z}_n\!\setminus\!\{0\}$.
This theorem also shows that among the six primitive $GI$-graphs that
are edge-transitive, $GI(8;1,3)$, $GI(12;1,5)$ and $GI(24;1,5)$ are Cayley graphs,
while $GI(5;1,2)$ and $GI(10;1,3)$ are not.
(The graph $GI(10;1,2)$ is not a Cayley graph, for other reasons.)
Next, consider the more general case, where $X=GI(n;J)$ is connected and
vertex-transitive.
In this case, by Theorem~\ref{thm:classvt},
we know that all elements in $J \cup -J$ have the same multiplicity $k_0$,
and $X$ is isomorphic to $GI(n;[k_0]J_0)$ for some primitive subset $J_0$
of $\mathbb{Z}_n$, such that $GI(n;J_0)$ is vertex-transitive.
Also by what we found in Section~\ref{vertextrans} and
sub-section~\ref{subs:general} of Section~\ref{automgps},
we have $d_j := \gcd(n,j) = 1$ for all $j \in J_0$, and therefore
$$
|\!\Aut(X)| = n|J_0 \cup -J_0| \prod_{j \in J_0} (k_0!)^{d_j}
= 2n|J_0| (k_0!)^{|J_0|}.
$$
We will find the following helpful, and to state it, we will refer to the
automorphism $\rho$ of each $GI$-graph $GI(n;J)$ as its {\em standard rotation},
and sometimes denote it by $\rho_J$.
\begin{Lemma}
\label{lem:multipleCayley}
If $GI(n;J)$ has a vertex-regular subgroup containing the standard rotation,
then so does $GI(n;[k]J)$ for every integer $k > 1$.
\end{Lemma}
\begin{proof}
Let $X = GI(n;J)$ and $Y = GI(n;[k]J)$, and let $\rho$ ($= \rho_J$) be the
standard rotation for $X$. Also let $\{\rho\} \cup S$ be a generating set
for a vertex-regular subgroup of $\Aut(X)$.
Note that $\Aut(X)$ is layer-transitive on $X$, since $X$ is not $GI(10;1,2)$.
Now by multiplying elements of $S$
by powers of $\rho$ if necessary, we may assume that $\langle S \rangle$
induces a regular permutation group on the set of layers of $X$.
In particular, $\langle S\rangle$ has order $|J|$.
Next, for each $\xi \in S$, the automorphism $\pi_\xi$ defined in the proof
of Theorem~\ref{thm:classvt} acts fixed-point-freely on $Y = GI(n;[k]J)$,
and it follows that the set $\{\pi_\xi : \xi \in S\}$ generates a subgroup of order $|J|$
that permutes the layers of $Y = GI(n;[k]J)$ in $|J|$ blocks of size $k$.
Also if $f$ is the $k$-cycle $f = (1,2,\dots,k)$ in ${\rm Sym}(k)$, then the
automorphism $\theta_f$ defined in the proof of Theorem~\ref{thm:classvt}
induces a $k$-cycle on each of those $|J|$ layer-blocks.
Finally, $\theta_f$ commutes with $\rho_{[k]J}$ and all the $\pi_\xi$
(for $\xi \in S$), so the subgroup generated by $\rho_{[k]J}$, $\theta_f$
and all the $\pi_\xi$ has order $nk|J|$, and acts regularly on the vertices of $Y$,
as required.
\end{proof}
Note that this shows, for example, that both of the remaining two edge-transitive
$GI$-graphs $GI(4;1,1)$ and $GI(3;1,1,1)$ are Cayley graphs.
Somewhat surprisingly, we also have the following:
\begin{Lemma}
\label{lem:doublenonCayley}
If $J$ is primitive and both $GI(n;J)$ and $GI(n;[2]J)$ are vertex-transitive, then $GI(n;[2]J)$
is always a Cayley graph, and so is $GI(n;[k]J)$ for every even integer $k > 1$.
\end{Lemma}
\begin{proof}
First, if $GI(n;J)$ is a Cayley graph, then this follows from Lemma~\ref{lem:multipleCayley},
so we will assume that $GI(n;J)$ is not a Cayley graph.
Also because $GI(n;[2]J)$ is vertex-transitive, we know that $X \ne GI(10;1,2)$,
and so $J \cup -J$ is a subgroup of $\mathbb{Z}_n^{\,*}$, by Theorem~\ref{thm:primVT}.
On the other hand, by Proposition~\ref{thm:primCayley}, we know that $J \cup -J$ has no subgroup
of index $2$ that excludes $-1$.
Hence we can write $J \cup -J$ as $U \times W$, where $U$ is a cyclic $2$-subgroup
containing $-1$ and of order $q = 2^e$ for some $e > 1$, and $W$ is complementary
to $U$, and of order $2t/q$. Also let $u$ be a generator of $U$, so that $u^{q/2} = -1$.
Now consider the automorphisms of $Y = GI(n;[2]J)$.
For each $a \in J \cup -J = U \times W$, without loss of generality we will choose the
associated bijection $\alpha: \mathbb{Z}_{2t} \to \mathbb{Z}_{2t}$ to be the `duplicate' of the
corresponding natural bijection from $\mathbb{Z}_{t}$ to $\mathbb{Z}_{t}$, namely so that
$\alpha$ takes $s$ to $s'$, and $s+t$ to $s'+t$, whenever
$j_{s'} = j_{s'+t} = \pm a j_{s} = \pm a j_{s+t}$ (for $0 \le s < t$).
For the moment, suppose that $W$ is trivial, so that $U = J \cup -J$.
Then the automorphism $\sigma_u$ is not semi-regular,
because the vertex $(0,0)$ lies in a cycle of length $q/2$ consisting of
all $(s,0)$ with $0 \le s < t$ and $\pm j_{s} = u^i$ for some $i$, while the vertex $(0,1)$
lies in a cycle of length $q$ consisting of all $(s,u^i)$ such that $0 \le s < t$ and $\pm j_{s} = u^i$,
for $0 \le i < q$.
Hence in particular, the subgroup generated by $\rho$ and $\sigma_u$ has order
$nq = 2nt$, but cannot be vertex-regular (since the $(q/2)$th power of $\sigma_u$ is
a non-trivial element with fixed points).
On the other hand, we can multiply $\sigma_u$ by $\lambda_{0,t}$, which
interchanges vertices $(0,v)$ and $(t,v)$, for all $v \in \mathbb{Z}_n$, and find that
$\sigma_{u}\lambda_{0,t}$ is a semi-regular element of order $q$, with
$n$ cycles of length $q$. (The vertex $(0,0)$ lies in a cycle of length $q = 2t$
consisting of all $(s',0)$ with $\pm j_{s'} = u^i$ for some $i$, while the vertex $(0,1)$
lies in a cycle of length $q$ consisting of all $(s,u^i)$ such that $0 \le s < t$ and $\pm j_{s} = u^i$
for even $i$, and all $(s+t,u^i)$ such that $0 \le s < t$ and $\pm j_{s} = u^i$ for odd $i$;
the cycles containing the other vertices $(s',1)$ have a similar form.)
It follows that the subgroup generated by $\rho$ and $\sigma_{u}\lambda_{0,t}$
has order $nq = 2nt$, and is transitive on vertices, and hence is vertex-regular,
so that $GI(n;[2]J)$ is a Cayley graph.
When $W$ is non-trivial, the elements $\sigma_w$ for all $w$ in $W$
(or simply all $w$ from a generating set for $W$) induce a regular permutation
group on the layers $L_s$ for which $\pm j_s \in W$, and it follows that
the subgroup generated by $\rho$ and $\sigma_{u}\lambda_{0,t}$ and these
$\sigma_w$ acts regularly on the vertices of $Y$, again making $GI(n;[2]J)$
a Cayley graph.
Finally, for any even integer $k > 2$, we find that $GI(n;[k]J) = GI(n;[k/2][2]J)$ is
a Cayley graph, by applying Lemma~\ref{lem:multipleCayley} with $[2]J$ in
place of $J$, and $k/2$ in place of $k$.
\end{proof}
On the other hand, the same does not hold when $k$ is odd:
\begin{Lemma}
\label{lem:oddnonCayley}
If $J$ is primitive and $GI(n;J)$ is vertex-transitive but
not a Cayley graph, then $GI(n;[k]J)$ is not a Cayley graph for any odd integer $k > 1$.
\end{Lemma}
\begin{proof}
Assume the contrary, so that $X = GI(n;J)$ is vertex-transitive
and not a Cayley graph, but $Y = GI(n;[k]J)$ is a Cayley graph, for some odd $k$.
Then we know that $X \ne GI(10;1,2)$, since $Y$ is vertex-transitive,
and so $J \cup -J$ is a subgroup of $\mathbb{Z}_n^{\,*}$, by Theorem~\ref{thm:primVT}.
On the other hand, since $X$ is not a Cayley graph, Proposition~\ref{thm:primCayley}
tells us that $J \cup -J$ has no subgroup of index $2$ that
excludes $-1$, and therefore $J \cup -J$ contains an element $u$ of (multiplicative)
order $4m$ for some $m$, with $u^{2m} = -1$.
Also by Theorem~\ref{theorem:skewautom}, we know that $Y$ is not edge-transitive,
and so $\Aut(Y)$ preserves the fundamental edge-partition of $Y$, and hence every
subgroup of $\Aut(Y)$ permutes the layers of $Y$ among themselves.
Now let $R$ be a vertex-regular subgroup of $\Aut(Y)$, and take $b = u^m$, which has
order $4$, with $b^2 = -1$ in $\mathbb{Z}_n^{\,*}$.
Next, choose $i$ such that $j_i = \pm b$ (noting that such an $i$ must exist because
$b$ lies in the subgroup $J \cup -J$).
Then by vertex-transitivity of $R$, there exists some automorphism $\theta$ of $Y$ taking
the vertex $(0,0)$ to the vertex $(i,0)$. Moreover, by our knowledge of the structure
of $\Aut(Y)$ from Section~\ref{automgps} and the fact that all of the automorphisms
$\lambda_{s_1,s_2}$ and $\sigma_a$ preserve the spoke $S_0$, it follows that
$\theta = w\sigma_b$ or $w\sigma_{-b}$ for some $w$ in the subgroup $N$ generated
by the set of all of the automorphisms $\lambda_{s_1,s_2}$.
Since $R$ acts regularly on vertices, every non-trivial automorphism in $R$ has to be
semi-regular. In particular, $\theta$ is semi-regular, as is its square
\\[-14pt]
\begin{center}
$\theta^2 = (w\sigma_{\pm b})^2 = w(\sigma_{\pm b}w\sigma_{\pm b}^{\ -1}) \sigma_{(\pm b)^2}
= w'\sigma_{-1} = w'\tau$,
\\[-10pt]
\end{center}
where $w' = w(\sigma_{\pm b}w\sigma_{\pm b}^{\ -1}) \in N$.
Both $w'$ and $\tau$ preserve the spoke $S_0$, and therefore so does $w'\tau$,
and thus $w'\tau$ acts semi-regularly on $S_0$. But also $\tau$ fixes every
vertex $(s,0)$ of $S_0$, and so $w'$ itself acts semi-regularly on $S_0$.
Furthermore, since every element of $N$ preserves the set $\{L_0,L_t,\dots,L_{(k-1)t}\}$
of $k$ layers corresponding to the occurrences of $1$ in $J$,
it follows that both $w'\tau$ and $w'$ act semi-regularly on the set $K = \{(rt,0) : 0 \le r < k\}$.
In particular, the cycles of the permutation induced by $w'$ on $K$
must all have the same length, say $\ell$. Note that $w'$ is non-trivial, for otherwise
$w'\tau = \tau$, which is not semi-regular on vertices (because it has fixed points),
and therefore $\ell > 1$. But also $\ell$ must divide $k$, so $\ell$ is odd.
Now consider any $\ell$-cycle of $w'$ on $K$, say $((s_1,0),(s_2,0), \dots, (s_\ell,0))$.
Because $\tau$ fixes every vertex of $K$, this is also a cycle of $w'\tau$,
and hence all cycles of $w'\tau$ have length $\ell$.
Also by definition of the elements generating $N$
(as defined in Proposition~\ref{propn:mix_layers}),
we know that $((s_1,1),(s_2,1), \dots, (s_\ell,1))$ must be a cycle of $w'$.
But now the cycle of $w'\tau$ containing the vertex $(s_1,1)$ is
\\[-14pt]
\begin{center}
$((s_1,1),(s_2,-1),(s_3,1), \dots,(s_{\ell-1},-1),(s_{\ell},1),
(s_1,-1),(s_2,1),(s_3,-1), \dots,(s_{\ell},-1)),$
\\[-10pt]
\end{center}
which has length $2\ell$, and this contradicts the fact that $w'\tau$ is semi-regular.
\end{proof}
Putting together Proposition~\ref{thm:primCayley} and Lemmas~\ref{lem:multipleCayley},
\ref{lem:doublenonCayley} and~\ref{lem:oddnonCayley}, we have the following:
\begin{Theorem}
\label{thm:nonprimCayley}
If $X=GI(n;J)$ is connected, then $X$ is a Cayley graph if and only if
\\[+2pt]
{\em (a)} $J$ is primitive, and $J \cup -J$ is a multiplicative subgroup of $\mathbb{Z}_n^{\,*},$
with a subgroup of index $2$ that does not contain $-1$, or
\\[+2pt]
{\em (b)}
$X = GI(n;J)$ is isomorphic to $GI(n;[k_0]J_0)$
for some primitive subset $J_0$ of $\mathbb{Z}_n$ and some integer $k_0 > 1$,
such that either $GI(n;J_0)$ is a Cayley graph,
or $k_0$ is even and $GI(n;J_0)$ is vertex-transitive but is not the dodecahedral graph $GI(10;1,2)$.
\end{Theorem}
\section{Additional remarks}
\label{conclusion}
The family of $GI$-graphs forms a natural generalisation of the Petersen graph.
Our initial studies of $GI$-graphs have shown that this family
is indeed very interesting and deserves further consideration.
These graphs are also related to circulant graphs \cite{Morris}. Through that
relationship, we were able to solve the puzzle of what appeared to be
unstructured automorphisms of $GI$-graphs, and this enabled us to find their
automorphism groups and classify those that are vertex-transitive or Cayley graphs.
Let us mention also the problem of unit-distance drawings of $GI$-graphs.
A graph is a \emph{unit-distance graph} if it can be drawn in the plane
such that all of its edges have the same length.
In \cite{HPZ1}, it was shown that all $I$-graphs are unit-distance graphs.
On the other hand, obviously no $GI$-graph with four or more layers can be a unit-distance
graph, since it contains a $K_4$ as a subgraph, which itself is not a unit-distance
graph. Hence the only open case of interest is the sub-class of $GI$-graphs
having three layers.
For each $k \in \mathbb{Z}_n$, the graph $GI(n; k,k,k)$ is a cartesian product of two cycles
and is therefore a unit-distance graph by \cite[Theorem 3.4]{HorvatPisanski2}.
We know of only one other connected example that is a unit-distance graph,
and it is remarkable.
This is the graph $GI(7;1,2,3)$, which is shown
in Figure~\ref{fig:UDGIgraphs}. The vertices can be drawn
equidistantly on three concentric circles with radii
$$R_1=\frac{1}{2\sin(\pi/7)}, \ \ \
R_2=\frac{1}{2\sin(2\pi/7)}, \ \ \ \mbox{and} \ \ \
R_3=\frac{1}{2\sin(3\pi/7)},
$$
and the two smaller circles rotated through angles of $\pi/3$ and
$-\pi/3$ with respect to the largest circle.
One can then verify that all edges have the same length $1$.
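This verification is a short numerical computation. The sketch below (ours, not from the paper) places layer $s$ on the circle of radius $R_{s+1}$ with the stated rotations of $\pm\pi/3$ for the two smaller circles, and checks that all $42$ edges of $GI(7;1,2,3)$ have length $1$ up to floating-point error.

```python
import math

n, J = 7, [1, 2, 3]
R = [1 / (2 * math.sin(j * math.pi / n)) for j in J]   # radii R_1, R_2, R_3
phi = [0.0, math.pi / 3, -math.pi / 3]                 # rotations of the circles

def pos(s, v):
    """Coordinates of vertex (s, v): v-th point on the circle for layer s."""
    a = 2 * math.pi * v / n + phi[s]
    return (R[s] * math.cos(a), R[s] * math.sin(a))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

lengths = []
for s in range(3):
    for v in range(n):
        lengths.append(dist(pos(s, v), pos(s, (v + J[s]) % n)))   # layer edge
        for t in range(s + 1, 3):
            lengths.append(dist(pos(s, v), pos(t, v)))            # spoke edge

assert len(lengths) == 42
assert all(abs(L - 1) < 1e-9 for L in lengths)
```

The layer edges have length $1$ by the choice of radii (a chord subtending angle $2\pi j/7$ on a circle of radius $1/(2\sin(j\pi/7))$ has length exactly $1$); the spoke edges come out to length $1$ through trigonometric identities satisfied by $\cos(\pi/7)$.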
\begin{figure}
\caption{The graph $GI(7;1,2,3)$ as a unit-distance graph.}
\label{fig:UDGIgraphs}
\end{figure}
The graph $GI(7;1,2,3)$ is a Cayley graph for the non-abelian group
of order $21$, namely $\mathbb{Z}_7 \rtimes_{2\,} \mathbb{Z}_3$, which has presentation
$\langle \, a,b \, | \ a^7 = b^3 = 1, \, b^{-1}ab = a^2 \, \rangle$.
Its girth is 3 but it contains no cycles of length 4.
This means that its Kronecker cover (see \cite{ImrichPisanski}) has girth 6 and is
a Levi graph \cite{Coxeter} of a self-polar, point- and line-transitive but not
flag-transitive combinatorial $(21_4)$-configuration.
The resulting configuration is different from the configuration of Gr\"unbaum
and Rigby \cite{GR}, since the latter configuration is flag-transitive but
the one obtained from $GI(7;1,2,3)$ is not.
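The girth claims for $GI(7;1,2,3)$ are also easy to verify by machine: a graph contains a $4$-cycle exactly when some pair of distinct vertices has at least two common neighbours. A brute-force check (our sketch, not from the paper):

```python
from itertools import combinations

n, J = 7, [1, 2, 3]
V = [(s, v) for s in range(3) for v in range(n)]

def adjacent(p, q):
    (s, v), (t, w) = p, q
    if s == t:                                  # layer edge of step J[s]
        return (v - w) % n in {J[s], n - J[s]}
    return v == w                               # spoke edge

nbrs = {p: {q for q in V if q != p and adjacent(p, q)} for p in V}

# Girth 3: each spoke {(0,v),(1,v),(2,v)} is a triangle.
assert all(q in nbrs[p]
           for p, q in combinations([(0, 0), (1, 0), (2, 0)], 2))

# No 4-cycles: no two distinct vertices have two or more common neighbours.
assert not any(len(nbrs[p] & nbrs[q]) >= 2 for p, q in combinations(V, 2))
```

The graph is $4$-regular (two layer neighbours and two spoke neighbours per vertex), so the check runs over only $\binom{21}{2}$ pairs.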
\end{document} |
\begin{document}
\title{Stochastic Least-squares Petrov--Galerkin Method for Parameterized Linear Systems\thanks{This work was supported by the U.S. Department of Energy Office of Advanced Scientific Computing Research, Applied Mathematics program under award DE-SC0009301 and by the U.S. National Science Foundation under grant DMS-1418754.}}
\begin{abstract}
We consider the numerical solution of parameterized linear
systems where the system matrix, the solution, and the right-hand side are
parameterized by a set of uncertain input parameters. We explore spectral
methods in which the solutions are approximated in a chosen finite-dimensional subspace. It has been shown that the stochastic Galerkin projection technique fails to minimize any measure of the solution error \cite{mugler2013convergence}.
As a remedy for this, we propose a novel stochastic least-squares Petrov--Galerkin (LSPG) method. The proposed method is
optimal in the sense that it produces the solution that minimizes a weighted
$\ell^2$-norm of the residual over all solutions in a given
finite-dimensional subspace. Moreover, the method
can be adapted to minimize the solution error in different weighted
$\ell^2$-norms by simply applying a weighting function within the least-squares
formulation. In addition, a goal-oriented semi-norm induced by an
output quantity of interest can be minimized by defining a weighting function
as a linear functional of the solution. We establish optimality and error
bounds for the proposed method, and extensive numerical experiments show that
the weighted LSPG method outperforms other spectral methods in
minimizing the corresponding target weighted norms.
\end{abstract}
\begin{keywords}
stochastic Galerkin, least-squares Petrov--Galerkin projection, residual minimization, spectral projection
\end{keywords}
\begin{AMS}
35R60, 60H15, 60H35, 65C20, 65N30, 93E24
\end{AMS}
\section{Introduction} \label{sec:intro}
Forward uncertainty propagation for parameterized linear systems is important
in a range of applications to characterize the effects of uncertainties on the
output of computational models. Such parameterized linear systems arise in
many important problems in science and engineering, including
stochastic partial differential equations (SPDEs) where uncertain input parameters are modeled as a set of
random variables (e.g., diffusion/ground water flow simulations where
diffusivity/permeability is modeled as a random field \cite{holden1996stochastic, zhang2001stochastic}). It has
been shown \cite{elman2011assessment} that intrusive methods (e.g., stochastic Galerkin
\cite{babuska2004galerkin, deb2001solution, ghanem2003stochastic,
matthies2005galerkin, xiu2002modeling}) for uncertainty propagation can lead
to smaller errors for a fixed basis dimension, compared with non-intrusive
methods \cite{xiu2010numerical} (e.g., sampling-based methods \cite{barth2011multi,
graham2011quasi, kuo2012quasi}, stochastic collocation
\cite{babuvska2007stochastic, nobile2008sparse}).
The stochastic Galerkin method combined with generalized polynomial chaos
(gPC) expansions \cite{xiu2002wiener} seeks a polynomial approximation of the
numerical solution in the stochastic domain by enforcing a Galerkin
orthogonality condition, i.e., the residual of the parameterized linear system
is forced to be orthogonal to the span of the stochastic polynomial basis with
respect to an inner product associated with an underlying probability measure.
The Galerkin projection scheme is popular for its simplicity (i.e., the trial
and test bases are the same) and its optimality in terms of minimizing an energy norm of solution
errors when the underlying PDE operator is symmetric positive definite. In many applications, however, it has been shown that the stochastic Galerkin method does not exhibit any optimality property \cite{mugler2013convergence}. That is, it does not produce solutions that minimize any measure of the solution error. In such cases, the stochastic Galerkin method can lead to poor approximations and non-convergent behavior.
To address this issue, we propose a novel optimal projection technique, which we refer to as the stochastic least-squares Petrov--Galerkin (LSPG) method. Inspired by the successes of LSPG methods in nonlinear model reduction \cite{CarlbergGappy, carlberg2013gnat, carlbergGalDiscOpt}, finite element methods \cite{bochev1998finite, bochev2009least,jiang1990least}, and iterative linear solvers (e.g., GMRES, GCR) \cite{saadBook}, we propose, as an alternative to enforcing the Galerkin orthogonality condition, to directly minimize the residual of a parameterized linear system over the stochastic domain in a (weighted) $\ell^2$-norm. The stochastic LSPG method produces an optimal solution for a given stochastic subspace and guarantees that the $\ell^2$-norm of the residual monotonically decreases as the stochastic basis is enriched. In addition to producing monotonically convergent approximations as measured in the chosen weighted $\ell^2$-norm, the method can also be adapted to target output quantities of interest (QoI); this can be accomplished by employing a weighted $\ell^2$-norm used for least-squares minimization that coincides with the $\ell^2$-(semi)norm of the error in the chosen QoI.
In addition to proposing the stochastic LSPG method, this study shows that specific choices of weighting functions lead to equivalence between the stochastic LSPG method and both the stochastic Galerkin
method and the pseudo-spectral method \cite{xiu2007efficient, xiu2010numerical}.
We demonstrate the effectiveness of this method with extensive numerical
experiments on various SPDEs. The results show that the proposed LSPG
technique significantly outperforms the stochastic Galerkin method when the solution
error is measured in different weighted $\ell^2$-norms. We also show that the
proposed method can effectively minimize the error in target QoIs.
An outline of the paper is as follows. Section \ref{sec:spectral_methods} formulates parameterized linear algebraic systems and reviews conventional spectral approaches for computing numerical solutions. Section \ref{sec:slspg} develops a residual minimization formulation based on least-squares methods and its adaptation to the stochastic LSPG method. We also provide proofs of optimality and monotonic convergence behavior of the proposed method. Section \ref{sec:analysis} provides error analysis for stochastic LSPG methods. Section \ref{sec:results} demonstrates the efficiency and the effectiveness of the proposed methods by testing them on various benchmark problems. Finally, Section \ref{sec:conclusion} outlines some conclusions.
\section{Spectral methods for parameterized linear systems}\label{sec:spectral_methods}
We begin by introducing a mathematical formulation of parameterized linear
systems and briefly reviewing the stochastic Galerkin and the
pseudo-spectral methods, which are spectral methods for approximating the numerical solutions of such systems.
\subsection{Problem formulation}\label{sec:problem_form}
Consider a parameterized linear system
\begin{equation}\label{eq:param_sys}
A(\xi) u(\xi) = \RHS{(\xi)},
\end{equation}
where $A:\Gamma\rightarrow\RR{n_x\times n_x}$, and $u,b: \Gamma\rightarrow\RR{n_x}$.
The system is parameterized by a set of stochastic input parameters
$\xi(\omega) \equiv \{ \xi_1(\omega),\ldots,\xi_{n_\xi}(\omega)\}$. Here, $\omega\in\Omega$
is an elementary event in a probability space $(\Omega,\mathcal F, P)$ and
the stochastic domain is denoted by $\Gamma \equiv \prod_{i=1}^{{n_\xi}}
\Gamma_i$ where $\xi_i: \Omega \rightarrow \Gamma_i$. We are interested in
computing a spectral approximation of the numerical solution $u(\xi)$ in an
${n_\psi}$-dimensional
subspace $S_{{n_\psi}}$ spanned by a finite set of polynomials
$\{\psi_i(\xi)\}_{i=1}^{{n_\psi}}$ such that
$S_{{n_\psi}} \equiv \Span{\psi_i}_{i=1}^{{n_\psi}}\subseteq L^2(\Gamma)$, i.e.,
\begin{equation}\label{eq:spectral_approx}
u(\xi) \approx \tilde{u}(\xi) = \sum_{i=1}^{{n_\psi}} \bar{u}_i \psi_i(\xi)=(\psi^T(\xi)\otimes I_{n_x})\bar u,
\end{equation}
where $\{\bar{u}_i\}_{i=1}^{{n_\psi}}$
with $\bar{u}_i \in \mathbb{R}^{n_x}$ are unknown coefficient vectors,
$\bar{u} \equiv [\bar u_1^T\ \cdots\ \bar u_{{n_\psi}}^T]^T \in \mathbb{R}^{n_x {n_\psi}}$ is the vertical concatenation of these coefficient vectors, $\psi \equiv [\psi_1\ \cdots\ \psi_{{n_\psi}}]^T\in\RR{{n_\psi}}$ is a concatenation of the polynomial basis, $\otimes$ denotes the Kronecker product, and $I_{n_x}$ denotes the identity matrix of dimension $n_x$. Note
that $\tilde{u}\in(S_{{n_\psi}})^{n_x}$. Typically, the ``stochastic'' basis $\{\psi_i\}$ consists of products of univariate polynomials: $\psi_i \equiv \psi_{\alpha(i)} \equiv \prod_{k=1}^{{n_\xi}} \pi_{\alpha_k(i)}(\xi_k)$ where $\{\pi_{\alpha_k(i)}\}_{k=1}^{{n_\xi}} $ are univariate polynomials, ${\alpha}(i) = (\alpha_1(i),\cdots,\alpha_{{n_\xi}}(i)) \in \mathbb{N}^{{n_\xi}}_0$ is a multi-index and $\alpha_k$ represents the degree of a polynomial in $\xi_k$. The dimension of the stochastic subspace
${n_\psi}$ depends on the number of random variables ${n_\xi}$, the maximum polynomial
degree $p$, and a construction of the polynomial space (e.g., a total-degree
space that contains polynomials with total degree up to $p$, $\sum_{k=1}^{{n_\xi}}\alpha_k(i) \leq p$). By substituting
$u(\xi)$ with $\tilde{u}(\xi)$ in \eqref{eq:param_sys}, the residual can be defined
as
\begin{align}\label{eq:residual_def}
r(\bar{u};\xi) &\coloneqq \RHS{(\xi)}- A(\xi) \sum_{i=1}^{{n_\psi}} \bar{u}_i \psi_i(\xi)=\RHS{(\xi)} - (\psi^T(\xi) \otimes A(\xi)) \bar{u},
\end{align}
where $\psi^T(\cdot) \otimes A(\cdot): \Gamma \rightarrow \mathbb{R}^{n_x \times {n_\psi} n_x}$.
It follows from \eqref{eq:spectral_approx} and \eqref{eq:residual_def} that our goal now is to compute the unknown coefficients $\{\bar{u}_i\}_{i=1}^{{n_\psi}}$ of the solution expansion.
We briefly review two conventional approaches for doing so: the stochastic Galerkin method and the pseudo-spectral method. In the following, $\rho \equiv \rho(\xi)$ denotes an underlying measure of the stochastic space $\Gamma$ and
\begin{align}
\langle g, h \rangle_\rho &\equiv \int_\Gamma g(\xi) h(\xi) \rho(\xi) d\xi, \label{eq:def_inner_prod}\\
E[ g ] &\equiv \int_\Gamma g(\xi) \rho(\xi) d\xi, \label{eq:def_expectation}
\end{align}
define an inner product between scalar-valued functions $g(\xi)$ and $h(\xi)$ with respect to $\rho(\xi)$ and the expectation of $g(\xi)$, respectively. The $\ell^2$-norm of a vector-valued function $v(\xi) \in \RR{n_x}$ is defined as
\begin{align}\label{eq:def_l2_norm}
\| v \|_2^2 & \equiv \sum_{i=1} ^{n_x} \int_\Gamma v^2_i(\xi) \rho(\xi)d\xi = E[v^T v] .
\end{align}
Typically, the polynomial basis is constructed to be orthogonal in the $\langle\cdot,\cdot\rangle_\rho$ inner product, i.e., $\langle\psi_i,\psi_j\rangle_\rho = \prod_{k=1}^{{n_\xi}} \langle \pi_{\alpha_k(i)}, \pi_{\alpha_k(j)} \rangle_{\rho_k} = \delta_{ij}$,
where $\delta_{ij}$ denotes the Kronecker delta.
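For concreteness, here is a small numerical illustration (ours, not part of the paper) of such an orthonormal basis: normalized Legendre polynomials $\psi_i = \sqrt{2i+1}\,P_i$ for a single parameter $\xi$ uniform on $[-1,1]$ (so $\rho = 1/2$), with the inner products \eqref{eq:def_inner_prod} evaluated by Gauss--Legendre quadrature.

```python
import numpy as np

# Gauss-Legendre rule on [-1, 1]; halve the weights so that the uniform
# density rho(xi) = 1/2 integrates to 1.
pts, wts = np.polynomial.legendre.leggauss(10)
wts = wts / 2

def psi(i, x):
    """Orthonormal Legendre polynomial psi_i = sqrt(2i+1) * P_i."""
    c = np.zeros(i + 1); c[i] = 1.0
    return np.sqrt(2 * i + 1) * np.polynomial.legendre.legval(x, c)

# Gram matrix <psi_i, psi_j>_rho should be the identity.
G = np.array([[np.sum(wts * psi(i, pts) * psi(j, pts)) for j in range(4)]
              for i in range(4)])
assert np.allclose(G, np.eye(4))
```

The ten-point rule is exact here because each integrand is a polynomial of degree at most $6$.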
\subsection{Stochastic Galerkin method} \label{sec:SG_method}
The stochastic Galerkin method computes the unknown coefficients $\{\bar{u}_i\}_{i=1}^{{n_\psi}}$ of $\tilde{u}(\xi)$ in \eqref{eq:spectral_approx} by imposing orthogonality of the residual \eqref{eq:residual_def} with respect to the inner product $\langle \cdot,\cdot\rangle_\rho$ in the subspace $S_{{n_\psi}}$. This Galerkin orthogonality condition can be expressed as follows: Find $\bar{u}^\text{SG}\in\RR{n_x{n_\psi}}$ such that
\begin{equation}\label{eq:galerkin_condition}
\langle r_i(\bar{u}^\text{SG}), \psi_j \rangle_{\rho} = E[ r_i(\bar{u}^\text{SG}) \psi_j ] = 0, \qquad i=1,\ldots,n_x,\ j = 1,\ldots,{n_\psi},
\end{equation}
where $r\equiv [r_1\ \cdots\ r_{n_x}]^T$. The condition \eqref{eq:galerkin_condition} can be represented in matrix notation as
\begin{equation}\label{eq:galerkin_matrix_form}
E[ \psi \otimes r(\bar{u}^\text{SG}) ] = 0.
\end{equation}
From the definition of the residual \eqref{eq:residual_def}, this gives a system of linear equations
\begin{equation}\label{eq:sg_linear_sys}
E[\psi \psi^T \otimes A] \bar{u}^\text{SG} = E[\psi \otimes \RHS{}],
\end{equation}
of dimension $n_x {n_\psi}$. This yields an algebraic expression for the stochastic-Galerkin approximation
\begin{equation}\label{eq:sg_linear_sys_proj}
\tilde u^\text{SG}(\xi) = (\psi(\xi)^T\otimes I_{n_x})E[\psi\psi^T\otimes A]^{-1}E[\psi\otimes A u].
\end{equation}
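As a concrete illustration, the SG system \eqref{eq:sg_linear_sys} can be assembled by quadrature. The following sketch uses a hypothetical $2\times 2$ model problem $A(\xi)=A_0+\xi A_1$ with $\xi\sim N(0,1)$, a deterministic right-hand side, and an orthonormal Hermite basis; these choices are illustrative assumptions, not the finite-element systems used in the experiments.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Hypothetical 2x2 model problem A(xi) = A0 + xi*A1, xi ~ N(0,1),
# deterministic right-hand side f (illustrative only).
A0 = np.array([[3.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.1, 0.0], [0.0, 0.2]])
f = np.array([1.0, 2.0])
n_x, n_psi = 2, 3

def psi(xi):
    # Orthonormal probabilists' Hermite basis of degree <= 2
    return np.array([1.0, xi, (xi**2 - 1.0) / np.sqrt(2.0)])

# Gauss-Hermite rule; normalizing the weights turns sums into expectations
nodes, w = hermegauss(30)
w = w / w.sum()

# Assemble E[psi psi^T (x) A] and E[psi (x) f] by quadrature
K = np.zeros((n_x * n_psi, n_x * n_psi))
b = np.zeros(n_x * n_psi)
for xi_k, w_k in zip(nodes, w):
    A = A0 + xi_k * A1
    P = psi(xi_k)
    K += w_k * np.kron(np.outer(P, P), A)
    b += w_k * np.kron(P, f)

u_sg = np.linalg.solve(K, b)  # stacked SG coefficients (length n_x*n_psi)

def u_tilde(xi):
    # Evaluate the SG approximation (psi(xi)^T (x) I_{n_x}) u_sg
    return np.kron(psi(xi), np.eye(n_x)) @ u_sg
```

The Galerkin condition \eqref{eq:galerkin_condition} can then be checked by verifying that $E[\psi\otimes r(\bar u^\text{SG})]$ vanishes to machine precision.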
If $A(\xi)$ is symmetric positive definite, the solution
of linear system \eqref{eq:sg_linear_sys} minimizes the solution error
$\errorsol{x} \equiv u - x$ in the $A(\xi)$-induced energy norm $\| v \|_A^2 \equiv E[ v^T A v]$, i.e.,
\begin{equation}\label{eq:stochGalOpt}
\tilde{u}^\text{SG}(\xi) = \underset{x\in (S_{{n_\psi}})^{n_x}}{\arg\min}\|\errorsol{x}\|_A^2.
\end{equation}
If $A(\xi)$ is not symmetric positive definite, however, the stochastic-Galerkin approximation does not, in general, minimize any measure
of the solution error.
\subsection{Pseudo-spectral method} \label{sec:PS_method}
The pseudo-spectral method directly approximates the unknown coefficients $\{\bar{u}_i\}_{i=1}^{{n_\psi}}$ of $\tilde{u}(\xi)$ in \eqref{eq:spectral_approx} by exploiting orthogonality of the polynomial basis $\{ \psi_i(\xi)\}_{i=1}^{{n_\psi}}$. That is, the coefficients $\bar{u}_i$ can be obtained by projecting the numerical solution $u(\xi)$ onto the orthogonal polynomial basis as
\begin{equation}\label{eq:ps_projection}
\bar{u}_i^\text{PS} = E[ u \psi_i ], \quad i=1,\ldots,{n_\psi},
\end{equation}
which can be expressed as
\begin{equation}
\bar{u}^\text{PS} = E[ \psi \otimes A^{-1} \RHS{}],
\end{equation}
or equivalently
\begin{equation}\label{eq:ps_linear_sys_proj}
\tilde u^\text{PS}(\xi) = (\psi(\xi)^T\otimes I_{n_x})E[\psi\otimes u].
\end{equation}
The associated optimality property of the approximation, which can be derived from optimality of orthogonal projection, is
\begin{equation}\label{eq:spectralProjOpt}
\tilde{u}^\text{PS}(\xi) = \underset{x\in (S_{{n_\psi}})^{n_x}}{\arg\min}\|\errorsol{x}\|_2^2.
\end{equation}
In practice, the coefficients $\{\bar{u}^\text{PS}_i\}_{i=1}^{{n_\psi}}$ are approximated via numerical quadrature as
\begin{align}\label{eq:ps_proj_quad}
\bar{u}_i^\text{PS} = E[ u \psi_i ]\approx\sum_{k=1}^{n_q} u(\xi^{(k)}) \psi_i(\xi^{(k)}) w_k= \sum_{k=1}^{n_q} \left( A^{-1}(\xi^{(k)}) f(\xi^{(k)}) \right) \psi_i(\xi^{(k)}) w_k,
\end{align}
where $\{(\xi^{(k)}, w_k)\}_{k=1}^{n_q}$ are the quadrature points and weights.
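For instance, with a Gauss--Hermite rule for a normally distributed $\xi$, the projection \eqref{eq:ps_proj_quad} can be computed as in the following sketch (a hypothetical $2\times 2$ model problem $A(\xi)=A_0+\xi A_1$; the experiments use adaptive quadrature instead).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Hypothetical 2x2 model problem A(xi) = A0 + xi*A1, xi ~ N(0,1).
A0 = np.array([[3.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.1, 0.0], [0.0, 0.2]])
f = np.array([1.0, 2.0])

def psi(xi):  # orthonormal Hermite basis, degree <= 2
    return np.array([1.0, xi, (xi**2 - 1.0) / np.sqrt(2.0)])

nodes, w = hermegauss(40)
w = w / w.sum()

# Pseudo-spectral coefficients u_bar_i = E[u psi_i] with u(xi) = A(xi)^{-1} f
u_bar = np.zeros((3, 2))
for xi_k, w_k in zip(nodes, w):
    u_k = np.linalg.solve(A0 + xi_k * A1, f)
    u_bar += w_k * np.outer(psi(xi_k), u_k)

def u_ps(xi):
    # Evaluate the pseudo-spectral approximation sum_i u_bar_i psi_i(xi)
    return psi(xi) @ u_bar
```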
While stochastic Galerkin leads to an optimal approximation \eqref{eq:stochGalOpt} under certain conditions and pseudo-spectral projection minimizes the $\ell^2$-norm of the solution error \eqref{eq:spectralProjOpt}, neither approach provides the flexibility to tailor the optimality properties of the approximation. This may be important in applications where, for example, minimizing the error in a quantity of interest is desired. To address this, we propose a general optimization-based framework for spectral methods that enables the choice of a targeted weighted $\ell^2$-norm in which the solution error is minimized.
\section{Stochastic least-squares Petrov--Galerkin method}\label{sec:slspg}
As a starting point, we propose a residual-minimizing formulation that computes the coefficients $\bar{u}$ by directly minimizing the $\ell^2$-norm of the residual, i.e.,
\begin{align}\label{eq:min_res_L2_first}
\begin{split}
\tilde u^\text{LSPG}(\xi) &= \underset{x \in (S_{{n_\psi}})^{n_x}}{\arg\min}{ \| f - Ax\|_2^2 }
=
\underset{x \in (S_{{n_\psi}})^{n_x}}{\arg\min}{ \| \errorsol{x}\|_{A^TA}^2 },
\end{split}
\end{align}
where $\|v\|_{A^TA}^2\equiv E[v^TA^TAv]$. Thus, the $\ell^2$-norm of the residual is equivalent to a weighted $\ell^2$-norm of the solution error.
Using \eqref{eq:spectral_approx} and
\eqref{eq:residual_def}, we have
\begin{equation}\label{eq:min_res_L2}
\bar{u}^\text{LSPG} = \underset{\bar{x} \in \RR{n_x {n_\psi}}}{\arg\min}{ \| r(\bar{x})\|_2^2 }.
\end{equation}
The definition of the residual \eqref{eq:residual_def} allows the objective function in \eqref{eq:min_res_L2} to be written in quadratic form as
\begin{align}\label{eq:res_quadratic}
\begin{split}
\| r( \bar{x}) \|_2^2 &= \| f- (\psi^T \otimes A) \bar{x} \|_2^2 = \bar{x}^T E[\psi\psi^T \otimes A^T A ] \bar{x} - 2 E[ \psi \otimes A^T f]^T \bar{x} + E[f^T f].
\end{split}
\end{align}
Noting that the mapping $\bar x\mapsto \| r( \bar{x}) \|_2^2$ is convex, the (unique) solution $\bar{u}^\text{LSPG}$ to \eqref{eq:min_res_L2} is a stationary point of $\| r( \bar{x}) \|_2^2$ and thus satisfies
\begin{equation}\label{eq:res_NE}
E[\psi \psi^T \otimes A^TA] \bar{u}^\text{LSPG} = E[\psi \otimes A^T f],
\end{equation}
which can be interpreted as the normal-equations form of the linear least-squares problem \eqref{eq:min_res_L2}.
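A minimal sketch of assembling and solving these normal equations by quadrature (again a hypothetical $2\times 2$ model problem $A(\xi)=A_0+\xi A_1$ with $\xi\sim N(0,1)$, not the finite-element systems used later):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Hypothetical 2x2 model problem (not the FE systems of the experiments).
A0 = np.array([[3.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.1, 0.0], [0.0, 0.2]])
f = np.array([1.0, 2.0])
n_x, n_psi = 2, 3

def psi(xi):
    return np.array([1.0, xi, (xi**2 - 1.0) / np.sqrt(2.0)])

nodes, w = hermegauss(40)
w = w / w.sum()

# Normal equations E[psi psi^T (x) A^T A] u = E[psi (x) A^T f]
K = np.zeros((n_x * n_psi, n_x * n_psi))
b = np.zeros(n_x * n_psi)
for xi_k, w_k in zip(nodes, w):
    A = A0 + xi_k * A1
    P = psi(xi_k)
    K += w_k * np.kron(np.outer(P, P), A.T @ A)
    b += w_k * np.kron(P, A.T @ f)

u_lspg = np.linalg.solve(K, b)
```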
Consider a generalization of this idea that minimizes the solution error in a
targeted weighted $\ell^2$-norm by choosing a specific weighting function.
Let us define a weighting function $M(\xi) \equiv M_\xi(\xi) \otimes M_x(\xi) $, where $M_\xi: \Gamma \rightarrow \mathbb{R}$ and $M_x: \Gamma \rightarrow \mathbb{R}^{n_x \times n_x}$. Then, the stochastic LSPG method can be written as
\begin{align}\label{eq:min_res_L2_first_weight}
\begin{split}
\tilde u^{\text{LSPG}(M)}(\xi) &= \underset{x \in (S_{{n_\psi}})^{n_x}}{\arg\min}{ \| M(\RHS{} - Ax)\|_2^2 }
=
\underset{x \in (S_{{n_\psi}})^{n_x}}{\arg\min}{ \| \errorsol{x}\|_{A^TM^TMA}^2 },
\end{split}
\end{align}
with $\|v\|_{A^TM^TMA}^2\equiv E[v^TA^TM^TMAv]= E[M_\xi^2\,
(M_xAv)^T M_x Av]$. Algebraically, this is equivalent to
\begin{align}\label{eq:stochastic_lspg_algebraic}
\begin{split}
\bar{u}^{\text{LSPG}(M)} &= \underset{{\bar{x} \in \RR{n_x {n_\psi}} }}{\arg\min}{\| M r( \bar{x}) \|_2^2}
= \underset{{\bar{x} \in \RR{n_x {n_\psi}} }}{\arg\min}{ \| (M_\xi \otimes M_x) (1 \otimes \RHS{}- \left( \psi^T \otimes A \right) \bar{x}) \|_{2}^2}\\
&= \underset{{\bar{x} \in \RR{n_x {n_\psi}} }}{\arg\min}{ \| M_\xi \otimes (M_x \RHS{}) - \left( (M_\xi \psi^T) \otimes (M_x A) \right) \bar{x} \|_{2}^2}.
\end{split}
\end{align}
We will restrict our attention to the case $M_\xi(\xi) = 1$ and denote $M_x(\xi)$ by $M(\xi)$ for simplicity. Now, the algebraic stochastic LSPG problem \eqref{eq:stochastic_lspg_algebraic} simplifies to
\begin{align}\label{eq:min_wlspg}
\bar{u}^{\text{LSPG}(M)} &= \underset{{\bar{x} \in \RR{n_x {n_\psi}} }}{\arg\min}{\| M r( \bar{x}) \|_2^2} = \underset{{\bar{x} \in \RR{n_x {n_\psi}} }}{\arg\min}{\| M \RHS{} - (\psi^T \otimes M A) \bar{x} \|_2^2}.
\end{align}
The objective function in \eqref{eq:min_wlspg} can be written in quadratic form as
\begin{align}\label{eq:wlspg_quad_form}
\begin{split}
\| Mr(\bar x)\|_2^2 = &\bar{x}^T E[ (\psi \psi^T \otimes A^TM^T MA)] \bar{x}
- 2 (E[ \psi \otimes A^TM^TMf ])^T \bar{x} + E[\RHS{}^TM^TM\RHS{}].
\end{split}
\end{align}
As before, because the mapping $\bar x\mapsto \|Mr(\bar x)\|_2^2$ is convex, the unique solution $\bar{u}^{\text{LSPG}(M)}$ of \eqref{eq:min_wlspg} corresponds to a stationary point of $\|Mr(\bar x)\|_2^2$ and thus satisfies
\begin{equation}\label{eq:wlspg_NE}
E[ \psi \psi^T \otimes A^TM^T MA] \bar{u}^{\text{LSPG}(M)} = E[ \psi \otimes A^T M^TMf ],
\end{equation}
which is the normal-equations form of the linear least-squares problem \eqref{eq:min_wlspg}.
This yields the following algebraic expression for the stochastic-LSPG approximation:
\begin{equation}\label{eq:lspg_linear_sys_proj}
\tilde u^{\text{LSPG}(M)}(\xi) = (\psi(\xi)^T\otimes I_{n_x})E[\psi\psi^T\otimes A^TM^TMA]^{-1}E[\psi\otimes A^TM^TMA u].
\end{equation}
\textbf{Petrov--Galerkin projection}. Another way of interpreting the normal equations \eqref{eq:wlspg_NE} is that the
(weighted) residual $M(\xi)r(\bar u^{\text{LSPG}(M)}; \xi) $ is
enforced to be orthogonal to the subspace spanned by the optimal test basis
$\{\phi_i\}_{i=1}^{{n_\psi}}$ with $\phi_i(\xi)\coloneqq \psi_i(\xi) \otimes
M(\xi)A(\xi)$ and $\Span{\phi_i}_{i=1}^{{n_\psi}}\subseteq L^2(\Gamma)$. That is, this projection is precisely the (least-squares) {Petrov--Galerkin}
projection,
\begin{equation}\label{eq:lspg_projection}
E[ \phi^T ( M\RHS{} - (\psi^T \otimes MA)\bar{u}^{\text{LSPG}(M)} )] = 0,
\end{equation}
where $\phi(\xi)\equiv[\phi_1(\xi)\ \cdots\ \phi_{{n_\psi}}(\xi)]$.
\textbf{Monotonic convergence}. The stochastic least-squares Petrov--Galerkin method is monotonically convergent. That is, as the trial subspace $S_{{n_\psi}}$ is enriched (by adding polynomials to the basis), the optimal value of the convex objective function $\| M r(\bar{u}^{\text{LSPG}(M)})\|_2^2$ monotonically decreases. This is apparent from the LSPG optimization problem \eqref{eq:min_res_L2_first_weight}: Defining
\begin{align}\label{eq:min_res_L2_first_weight_superspace}
\begin{split}
\tilde u^{\text{LSPG}(M)}_{+}(\xi) &= \underset{x \in (S_{{n_\psi}+1})^{n_x}}{\arg\min}{ \| M(\RHS{} - Ax)\|_2^2 },
\end{split}
\end{align}
we have $\| M(\RHS{} - A\tilde u^{\text{LSPG}(M)}_{+})\|_2^2\leq \| M(\RHS{} - A\tilde u^{\text{LSPG}(M)})\|_2^2$ (and $\|u - \tilde u^{\text{LSPG}(M)}_{+}\|_{A^TM^TMA}$ $\leq \|u - \tilde u^{\text{LSPG}(M)}\|_{A^TM^TMA}$) whenever $S_{{n_\psi}}\subseteq S_{{n_\psi}+1}$.
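This monotonicity can be observed numerically. The sketch below (a hypothetical $2\times 2$ model problem with $M = I$, an illustrative assumption) solves the normal equations on nested Hermite bases and evaluates the optimal objective value:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Monotonic convergence of the LSPG objective under basis enrichment,
# illustrated with M = I on a hypothetical 2x2 model problem.
A0 = np.array([[3.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.1, 0.0], [0.0, 0.2]])
f = np.array([1.0, 2.0])
n_x = 2
nodes, w = hermegauss(30)
w = w / w.sum()

def psi(xi, n_psi):
    # First n_psi orthonormal probabilists' Hermite polynomials
    H = [1.0, xi, (xi**2 - 1.0) / np.sqrt(2.0), (xi**3 - 3.0 * xi) / np.sqrt(6.0)]
    return np.array(H[:n_psi])

def lspg_objective(n_psi):
    K = np.zeros((n_x * n_psi, n_x * n_psi))
    b = np.zeros(n_x * n_psi)
    c = f @ f                          # E[f^T f] (f deterministic here)
    for xi_k, w_k in zip(nodes, w):
        A = A0 + xi_k * A1
        P = psi(xi_k, n_psi)
        K += w_k * np.kron(np.outer(P, P), A.T @ A)
        b += w_k * np.kron(P, A.T @ f)
    u = np.linalg.solve(K, b)
    return u @ K @ u - 2.0 * b @ u + c  # optimal || r(u) ||_2^2

vals = [lspg_objective(n) for n in (1, 2, 3, 4)]
```

Each enrichment of the basis can only decrease (never increase) the optimal residual norm, since the smaller trial space is contained in the larger one.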
\textbf{Weighting strategies}. Different choices of weighting function $M(\xi)$ allow LSPG to minimize different measures of the error. We focus on four particular choices:
\begin{enumerate}
\item $M(\xi) = C^{-1}(\xi)$, where $C(\xi)$ is a Cholesky factor of $A(\xi)$,
i.e., $A(\xi) = C(\xi)C^T(\xi)$. This factorization (with $C$ invertible) exists if and only if $A$
is symmetric positive definite. In this case, LSPG minimizes the energy
norm of the solution error $\|\errorsol{{x}} \|_A^2 \equiv \| C^{-1} r(\bar{x}) \|_2^2$ ($= \| \errorsol{(\psi^T \otimes I_{n_x})\bar{x}}\|_A^2$) and is mathematically equivalent to the stochastic Galerkin method described in Section \ref{sec:SG_method}, i.e., $\tilde u^{\text{LSPG}(C^{-1})}=\tilde u^\text{SG}$. This can be seen by comparing \eqref{eq:stochGalOpt} and \eqref{eq:min_res_L2_first_weight} with $M=C^{-1}$, as $A^TM^TMA = A$ in this case.
\item $M(\xi) = I_{n_x}$, where $I_{n_x}$ is the identity matrix of dimension
$n_x$. In this case, LSPG minimizes the $\ell^2$-norm of the residual, $\| \errorsol{x}\|_{A^TA}^2\equiv \|r(\bar{x}) \|_2^2$.
\item $M(\xi) = A^{-1}(\xi)$. In this case, LSPG minimizes the $\ell^2$-norm of the solution error $\| \errorsol{x} \|_2^2 \equiv \| A^{-1} r(\bar x)\|_2^2$. This is mathematically equivalent to the pseudo-spectral method described in Section \ref{sec:PS_method}, i.e., $\tilde u^{\text{LSPG}(A^{-1})}=\tilde u^\text{PS}$, which can be seen by comparing \eqref{eq:spectralProjOpt} and \eqref{eq:min_res_L2_first_weight} with $M=A^{-1}$.
\item $M(\xi) = F(\xi)A^{-1}(\xi)$, where $F: \Gamma \rightarrow \mathbb{R}^{n_{o} \times
n_x}$ defines a vector of output quantities of interest that are linear functionals of the solution. In this case, LSPG minimizes the $\ell^2$-norm of the error in the output quantities of interest, $\|F \errorsol{x} \|_2^2 \equiv \| FA^{-1}r(\bar{x}) \|_2^2$.
\end{enumerate}
We again emphasize that two particular choices of the weighting function $M(\xi)$ lead to equivalence between LSPG and existing spectral-projection methods (stochastic Galerkin and pseudo-spectral projection), i.e.,
\begin{equation}\label{eq:equivalence}
\tilde u^{\text{LSPG}(C^{-1})}=\tilde u^\text{SG},\quad\quad \tilde u^{\text{LSPG}(A^{-1})}=\tilde u^\text{PS},
\end{equation}
where the first equality is valid (i.e., the Cholesky decomposition $A(\xi) = C(\xi)C^T(\xi)$ with invertible $C$ can be computed) if and only if $A$ is symmetric positive definite.
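These equivalences can be verified numerically. The following sketch (hypothetical symmetric positive definite model problem, with the weighting $M$ supplied as a function of the matrix $A(\xi)$) solves the weighted normal equations for $M=C^{-1}$ and $M=A^{-1}$:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Numerical check of the equivalences on a hypothetical SPD model problem:
# M = C^{-1} reproduces stochastic Galerkin; M = A^{-1} reproduces
# pseudo-spectral projection.
A0 = np.array([[3.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.1, 0.0], [0.0, 0.2]])
f = np.array([1.0, 2.0])
n_x, n_psi = 2, 3
nodes, w = hermegauss(20)
w = w / w.sum()

def psi(xi):
    return np.array([1.0, xi, (xi**2 - 1.0) / np.sqrt(2.0)])

def lspg(M_of):
    # Solve E[psi psi^T (x) A^T M^T M A] u = E[psi (x) A^T M^T M f]
    K = np.zeros((n_x * n_psi, n_x * n_psi))
    b = np.zeros(n_x * n_psi)
    for xi_k, w_k in zip(nodes, w):
        A = A0 + xi_k * A1
        M = M_of(A)
        MA = M @ A
        P = psi(xi_k)
        K += w_k * np.kron(np.outer(P, P), MA.T @ MA)
        b += w_k * np.kron(P, MA.T @ (M @ f))
    return np.linalg.solve(K, b)

u_sg_like = lspg(lambda A: np.linalg.inv(np.linalg.cholesky(A)))  # M = C^{-1}
u_ps_like = lspg(np.linalg.inv)                                   # M = A^{-1}
```

With $M=C^{-1}$ the assembled system reduces, node by node, to the SG system, since $(MA)^TMA = CC^T = A$; with $M=A^{-1}$ the Gram matrix reduces to the identity and the right-hand side to the pseudo-spectral coefficients.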
Table \ref{tab:wlspg_method_table} summarizes the target quantities to minimize (i.e., $\| \errorsol{x} \|_\Theta^2 \equiv E[ \errorsol{x}^T \Theta \errorsol{x}])$, the corresponding LSPG weighting functions, and the method names LSPG($\Theta$).
\begin{table}[htbp]
\begin{center}\footnotesize
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{4pt}
\caption{Different choices for the LSPG weighting function}\label{tab:wlspg_method_table}
\begin{tabular}{| l | l | l | l |}
\hline
\multicolumn{2}{|c|}{Quantity minimized by LSPG} & \multirow{2}{*}{Weighting function} & \multirow{2}{*}{Method name}\\
\cline{1-2}
Quantity & Expression & & \\
\hline
Energy norm of error & $\| \errorsol{x} \|_A^2$ & $M(\xi)=C^{-1}(\xi)$ & LSPG($A$)\\
$\ell^2$-norm of residual & $\|\errorsol{x}\|_{A^TA}^2$ & $M(\xi) = I_{n_x}$ & LSPG($A^TA$) \\
$\ell^2$-norm of solution error & $\| \errorsol{x} \|_2^2$ & $M(\xi) = A^{-1}(\xi)$ & LSPG($2$)\\
$\ell^2$-norm of error in quantities of interest & $\|F\errorsol{x}\|_{2}^2$ & $M(\xi) = F(\xi)A^{-1}(\xi)$ & LSPG($F^TF$) \\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Error analysis}\label{sec:analysis}
If an approximation satisfies an optimal-projection condition
\begin{equation}
\tilde u = \underset{x \in (S_{{n_\psi}})^{n_x}}{\arg\min}{ \|
\errorsol{x}\|_{\Theta}^2 },
\end{equation}
then
\begin{equation}
\| \errorsol{\tilde u}\|_{\Theta}^2 = \min_{x \in (S_{{n_\psi}})^{n_x}}\| \errorsol{x}\|_{\Theta}^2.
\end{equation}
Using norm equivalence
\begin{equation}\label{eq:norm_equivalence}
\|x\|_{\Theta'}^2\leq C\|x\|_{\Theta}^2,
\end{equation}
we
can characterize the solution error $e(\tilde u)$ in any alternative norm $\Theta'$
as
\begin{gather}
\| \errorsol{\tilde u}\|_{\Theta'}^2 \leq C\min_{x \in (S_{{n_\psi}})^{n_x}}\| \errorsol{x}\|_{\Theta}^2.
\end{gather}
Thus, the error in an alternative norm $\Theta'$ is controlled by the
optimal objective-function value
$\min_{x \in (S_{{n_\psi}})^{n_x}}\|\errorsol{x}\|_\Theta^2$ (which can be made small if the
trial space admits accurate solutions) and the stability constant $C$.
Table \ref{tab:norm_equivalence} reports norm-equivalence constants for the
norms considered in this work. Here,
we have defined
\begin{equation}
\sigma_\text{min}(M) \equiv \inf_{x\in(L^2(\Gamma))^{n_x}}\|Mx\|_2/\|x\|_2,\quad
\sigma_\text{max}(M) \equiv \sup_{x\in(L^2(\Gamma))^{n_x}}\|Mx\|_2/\|x\|_2.
\end{equation}
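In practice, $\sigma_\text{min}$ and $\sigma_\text{max}$ must be estimated. A simple sampling sketch (hypothetical parameterized matrix on a truncated parameter range, an illustrative assumption; for a pointwise multiplication operator, the singular values over $(L^2(\Gamma))^{n_x}$ reduce to extrema over $\xi$ of the matrix singular values):

```python
import numpy as np

# Sampling estimate of the stability constants for a hypothetical model
# A(xi) = A0 + xi*A1 on a truncated parameter range.
A0 = np.array([[3.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.1, 0.0], [0.0, 0.2]])

xis = np.linspace(-5.0, 5.0, 201)
svals = np.array([np.linalg.svd(A0 + x * A1, compute_uv=False) for x in xis])
sig_max, sig_min = svals.max(), svals.min()

# Example constants from the table: Theta = A with Theta' = A^T A gives
# C = sigma_max(A); Theta = A with Theta' = 2 gives C = 1/sigma_min(A).
C_res = sig_max
C_sol = 1.0 / sig_min
```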
\begin{table}[ht]
\begin{center}\footnotesize
\renewcommand{\arraystretch}{1.5}
\setlength{\tabcolsep}{4pt}
\caption{Stability constant $C$ in \eqref{eq:norm_equivalence}}
\begin{tabular}{|l|c|c|c|c|}
\hline
&$\Theta'=A$&$\Theta'=A^TA$&$\Theta'=2$ & $\Theta'=F^TF$\\
\hline
$\Theta=A$& 1& $\sigma_\text{max}(A)$& $\frac{1}{\sigma_\text{min}(A)}$& $\frac{\sigma_\text{max}(F)^2}{\sigma_\text{min}(A)}$\\
$\Theta=A^TA$& $\frac{1}{\sigma_\text{min}(A)}$& 1& $\frac{1}{\sigma_\text{min}(A)^2}$& $\frac{\sigma_\text{max}(F)^2}{\sigma_\text{min}(A)^2}$\\
$\Theta=2$& $\sigma_\text{max}(A)$& $\sigma_\text{max}(A)^2$& 1& $\sigma_\text{max}(F)^2$\\
$\Theta=F^TF$& $\frac{\sigma_\text{max}(A)}{\sigma_\text{min}(F)^2}$
& $\frac{\sigma_\text{max}(A)^2}{\sigma_\text{min}(F)^2}$ & $\frac{1}{\sigma_\text{min}(F)^2}$& 1\\
\hline
\end{tabular}
\label{tab:norm_equivalence}
\end{center}
\end{table}
\noindent This exposes several interesting conclusions. First, if $n_o
< n_x$, then the null space of $F$ is nontrivial and so $\sigma_\text{min}(F)
= 0$. This implies that LSPG($F^TF$), for which $\Theta=F^TF$, has
an unbounded stability constant $C$ when the solution error is measured in the other norms, i.e.,
for $\Theta' = A$, $\Theta' = A^TA$, and $\Theta' = 2$. It has
controlled errors only for $\Theta' = F^TF$, in which case $C=1$.
Second, note that for problems with small
$\sigma_\text{min}(A)$, the $\ell^2$-norm of the error in the quantities of interest may
be large for LSPG($A$) or LSPG($A^TA$), while it remains well behaved for LSPG($2$) and
LSPG($F^TF$).
\section{Numerical experiments}\label{sec:results}
This section explores the performance of the LSPG
methods for solving elliptic SPDEs parameterized by one random variable (i.e., ${n_\xi}=1$). The maximum polynomial degree used in the stochastic
space $S_{{n_\psi}}$ is $p$; thus, the dimension of $S_{{n_\psi}}$ is
${n_\psi} = p +1$. In physical space, the SPDE is defined over a two-dimensional rectangular
bounded domain $D$, and it is discretized using the finite element
method with bilinear ($Q_1$) elements as implemented in the
Incompressible Flow and Iterative Solver Software (IFISS) package
\cite{ifiss}. Sixteen elements are employed in each dimension, leading to $n_x
= 225 = 15^2$ degrees of freedom excluding boundary nodes. All numerical experiments are performed on an Intel 3.1 GHz i7 CPU, 16 GB RAM using \textsc{Matlab} R2015a.
\textbf{Measuring weighted $\ell^2$-norms}. For all LSPG methods, the weighted
$\ell^2$-norms can be measured by evaluating the expectations in the quadratic
form of the objective function shown in \eqref{eq:wlspg_quad_form}.
This requires evaluation of three expectations
\begin{equation}
\|M r(\bar{x}) \|_2^2 \coloneqq \bar{x}^T T_1 \bar{x} - 2 T_2^T \bar{x} + T_3,
\end{equation}
with
\begin{align}
T_1 \coloneqq & E[ (\psi \psi^T \otimes A^TM^T MA)] \in \mathbb{R}^{n_x {n_\psi} \times n_x {n_\psi}},\label{eq:T1_exp}\\
T_2 \coloneqq & E[ \psi \otimes A^T M^T M \RHS{} ] \in \mathbb{R}^{n_x {n_\psi}},\label{eq:T2_exp}\\
T_3 \coloneqq & E[ \RHS{}^TM^T M\RHS{}] \in \mathbb{R}.\label{eq:T3_exp}
\end{align}
Note that $T_3$ does not depend on the stochastic-space dimension ${n_\psi}$.
These quantities can be evaluated by
numerical quadrature or analytically if closed-form expressions for those
expectations exist. Unless otherwise specified, we compute these quantities using the {\tt integral} function in \textsc{Matlab}, which performs
adaptive numerical quadrature based on the 15-point Gauss--Kronrod quadrature formula \cite{shampine2008vectorized}.
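The following sketch evaluates $T_1$, $T_2$, and $T_3$ with a fixed Gauss--Hermite rule instead of adaptive quadrature (a hypothetical $2\times 2$ model with $M=I$; any pointwise weighting $M(\xi)$ can be substituted):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

# Quadrature evaluation of T1, T2, T3 for a hypothetical 2x2 model
# with M = I (any pointwise weighting M(xi) can be substituted below).
A0 = np.array([[3.0, 1.0], [1.0, 3.0]])
A1 = np.array([[0.1, 0.0], [0.0, 0.2]])
f = np.array([1.0, 2.0])
n_x, n_psi = 2, 3
nodes, w = hermegauss(40)
w = w / w.sum()

def psi(xi):
    return np.array([1.0, xi, (xi**2 - 1.0) / np.sqrt(2.0)])

T1 = np.zeros((n_x * n_psi, n_x * n_psi))
T2 = np.zeros(n_x * n_psi)
T3 = 0.0
for xi_k, w_k in zip(nodes, w):
    M = np.eye(n_x)                  # weighting function evaluated at xi_k
    MA = M @ (A0 + xi_k * A1)
    Mf = M @ f
    P = psi(xi_k)
    T1 += w_k * np.kron(np.outer(P, P), MA.T @ MA)
    T2 += w_k * np.kron(P, MA.T @ Mf)
    T3 += w_k * (Mf @ Mf)

def weighted_resnorm2(x_bar):
    # || M r(x_bar) ||_2^2 in the quadratic form of the objective
    return x_bar @ T1 @ x_bar - 2.0 * T2 @ x_bar + T3
```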
\textbf{Error measures}.
In the experiments, we assess the error in approximate solutions computed
by the various spectral-projection techniques via
four relative error measures (see Table \ref{tab:wlspg_method_table}):
\begin{align}\label{eq:error_measures}
\eta_r(x) \coloneqq\frac{\|\errorsol{x}\|_{A^TA}^2}{\|f\|_2^2},\quad
\eta_e(x) \coloneqq\frac{\|\errorsol{x}\|_2^2}{\|u\|_2^2},\quad
\eta_A(x) \coloneqq\frac{\|\errorsol{x}\|_A^2}{\|u\|_A^2},\quad
\eta_Q(x) \coloneqq\frac{\|F\errorsol{x}\|_2^2}{\|Fu\|_2^2}.
\end{align}
\subsection{Stochastic diffusion problems}\label{sec:diff}
Consider the steady-state stochastic diffusion equation with homogeneous boundary conditions,
\begin{equation}\label{eq:str_diff}
\left\{
\begin{array}{r l l}
-\nabla \cdot (a(x,\xi) \nabla u(x,\xi) ) &= f(x,\xi) &\text{ in } D \times \Gamma\\
u(x,\xi) &= 0 &\text{ on } \partial D \times \Gamma,
\end{array}
\right.
\end{equation}
where the diffusivity $a(x,\xi)$ is a random field and $D=[0, 1] \times [0,
1]$. The random field $a(x,\xi)$ is specified as an exponential of a truncated
Karhunen--Lo\`eve (KL) expansion \cite{loeve1978probability} with covariance kernel
$C({x},\, {y}) \equiv \sigma^2 \exp \left ( - \frac{|x_1 - y_1|}{c} -
\frac{|x_2 - y_2|}{c} \right)$,
where $c$ is the correlation length, i.e.,
\begin{equation} \label{eq:def_rf}
a(x,\xi) \equiv \exp( \mu + \sigma a_1(x) \xi),
\end{equation}
where $\{\mu, \sigma^2\}$ are the mean and variance of $\log a$ and $a_1(x)$ is the
first eigenfunction in the KL expansion.
After applying the spatial
(finite-element) discretization, the problem can be reformulated as a parameterized linear system
of the form \eqref{eq:param_sys},
where
$A(\xi)$ is a parameterized stiffness matrix obtained from the weak form of
the problem whose $(i,j)$-element is $[A(\xi)]_{ij} = \int_D
a(x,\xi)\, \nabla \varphi_i(x) \cdot \nabla \varphi_j(x)\, dx$ (with $\{ \varphi_i \}$
standard finite element basis functions) and $\RHS{(\xi)}$ is a parameterized
right-hand side whose $i$th element is $[\RHS{(\xi)}]_i = \int_D f(x,\xi) \varphi_i(x)\,
dx$.
Note that $A(\xi)$ is symmetric positive definite for this problem; thus
LSPG($A$) is a valid projection scheme (the Cholesky factorization
$A(\xi) = C(\xi)C(\xi)^T$ exists) and coincides with stochastic Galerkin
projection.
\textbf{Output quantities of interest}. We consider $n_o$ output
quantities of interest ($F(\xi)u(\xi) \in \RR{n_o}$) that are random linear functionals of the solution and $F(\xi)$ is of dimension $n_o \times n_x$ having the form:
\begin{itemize}
\item[(1)] $F_1(\xi) \coloneqq g(\xi)\times G$ with $G\sim U (0,\,1)^{n_o \times n_x}$: each entry of $G$ is drawn uniformly from $[0, 1]$ and $g(\xi)$ is a scalar-valued function of $\xi$. The resulting output QoI, $F_1(\xi)u(\xi)$, is a vector-valued function of dimension $n_o$.
\item[(2)] $F_2(\xi)\coloneqq \RHS{(\xi)}^T \bar{M} $: $\bar{M}$ is a mass matrix defined via $[\bar{M}]_{ij} \equiv \int_D \varphi_i(x) \varphi_j(x)dx$. The output QoI is a scalar-valued function $F_2(\xi)u(\xi) = \RHS{(\xi)}^T \bar{M} u(\xi) $, which approximates a spatial average $\frac{1}{| D |}\int_D f(x,\xi) u(x,\xi) dx$.
\end{itemize}
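A minimal sketch of the two output functionals (hypothetical sizes; the mass matrix below is a placeholder, not the finite-element $\bar{M}$):

```python
import numpy as np

# Sketch of the two output functionals with hypothetical dimensions.
rng = np.random.default_rng(0)
n_x, n_o = 4, 2
G = rng.uniform(0.0, 1.0, size=(n_o, n_x))   # entries drawn from U(0, 1)
Mbar = np.eye(n_x) / n_x                     # stand-in mass matrix

def F1(xi):
    # F1(xi) = g(xi) * G with, e.g., g(xi) = sin(xi)
    return np.sin(xi) * G

def F2(xi, f_xi):
    # F2(xi) = f(xi)^T Mbar, a 1 x n_x row functional
    return (f_xi @ Mbar)[None, :]

u = rng.standard_normal(n_x)                 # stand-in solution vector
q1 = F1(0.3) @ u                             # vector-valued QoI (length n_o)
q2 = F2(0.3, np.ones(n_x)) @ u               # scalar-valued QoI
```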
\begin{figure}
\caption{Relative error measures versus polynomial
degree for diffusion problem 1: lognormal random coefficient and
deterministic forcing. Note that each LSPG method performs best in the error measure it
minimizes.}
\label{fig:easy_pvsq_sub3}
\label{fig:easy_pvsq_sub1}
\label{fig:easy_pvsq}
\end{figure}
\subsubsection{Diffusion problem 1: Lognormal random coefficient and
deterministic forcing}\label{sec:easy_example}
In this example, we take $\xi$ in \eqref{eq:def_rf} to follow a standard normal distribution (i.e., $\rho(\xi) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{\xi^2}{2}\right)$ and $\xi \in (-\infty, \infty)$) and $f(x,\xi) = 1$ is deterministic. Because $\xi$ is normally
distributed, normalized Hermite polynomials (orthogonal with respect to $\langle\cdot,\cdot\rangle_\rho$) are used as polynomial basis $\{\psi_i(\xi)\}_{i=1}^{{n_\psi}} $.
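Such an orthonormal Hermite basis can be generated by the standard three-term recurrence; a sketch:

```python
import numpy as np

# Orthonormal probabilists' Hermite basis via the three-term recurrence
#   phi_{n+1}(x) = (x*phi_n(x) - sqrt(n)*phi_{n-1}(x)) / sqrt(n+1),
# orthonormal with respect to the standard normal density.
def hermite_basis(xi, n_psi):
    phi = np.empty(n_psi)
    phi[0] = 1.0
    if n_psi > 1:
        phi[1] = xi
    for n in range(1, n_psi - 1):
        phi[n + 1] = (xi * phi[n] - np.sqrt(n) * phi[n - 1]) / np.sqrt(n + 1)
    return phi
```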
\begin{figure}
\caption{Pareto front of relative error measures versus wall time for varying polynomial degree $p$ for diffusion problem 1: lognormal random coefficient and deterministic forcing.}
\label{fig:easy_quad}
\end{figure}
Figure \ref{fig:easy_pvsq} reports the relative errors
\eqref{eq:error_measures} associated with solutions computed using four LSPG methods
(LSPG($A$), LSPG($A^TA$), LSPG($2$), and LSPG($F^TF$)) for varying
polynomial degree $p$. Here, we consider the random output QoI, i.e., $F=F_1$, $n_o=100$, and $g(\xi) = \xi$.
This result
shows that three methods (LSPG($A$), LSPG($A^TA$), and LSPG($2$))
monotonically converge in all four error measures, whereas LSPG($F^TF$) does
not. This is an artifact of rank deficiency in $F_1$, which leads to
$\sigma_\text{min}(F_1) = 0$; as a result, all stability constants $C$ for which
$\Theta = F^TF$ in Table \ref{tab:norm_equivalence} are unbounded, implying
lack of error control.
Figure \ref{fig:easy_pvsq} also shows that each LSPG method
minimizes its targeted error measure for a given
stochastic-subspace dimension (e.g., LSPG($A^TA$) minimizes the $\ell^2$-norm of the residual); this is also evident from Table
\ref{tab:norm_equivalence}, as the stability constant attains its minimum
value ($C=1$) for $\Theta = \Theta'$. Table \ref{tab:norm_equiv_for_diff1}
reports the actual stability constants for this problem, which
explain the behavior of the LSPG methods. For example, the first column of
Table \ref{tab:norm_equiv_for_diff1} shows that the stability constant
increases in the order LSPG($A$), LSPG($A^TA$), LSPG($2$), LSPG($F^TF$), which is
reflected in Figure \ref{fig:easy_pvsq_sub3}.
\begin{table}[ht]
\begin{center}\footnotesize
\renewcommand{\arraystretch}{1.3}
\setlength{\tabcolsep}{4pt}
\caption{Stability constant $C$ of Diffusion problem 1}
\begin{tabular}{|l|r|r|r|r|}
\hline
&$\Theta' = A$&$\Theta' = A^TA$&$\Theta' = 2$ & $\Theta' = F^TF$\\
\hline
$\Theta = A$ & 1 & 26.43 & 2.06 & 11644.22\\
$\Theta = A^TA$ & 2.06 & 1 & 4.25 & 24013.48 \\
$\Theta = 2$ & 26.43 & 698.53 & 1 & 5646.32\\
$\Theta = F^TF$ & $\infty$ & $\infty$ & $\infty$ & 1\\
\hline
\end{tabular}
\label{tab:norm_equiv_for_diff1}
\end{center}
\end{table}
The results in Figure \ref{fig:easy_pvsq} do not account for computational costs. This point is addressed in Figure \ref{fig:easy_quad}, which shows the relative errors as a function of CPU time.
As we would like to devise a method that minimizes both the error and computational time, we examine a Pareto front (black dotted line) in each error measure. For a fixed value of $p$,
LSPG($2$) is the fastest method because it does not require the solution of a coupled system of linear equations of dimension $n_x {n_\psi}$, which is required by the other three LSPG methods (LSPG($A$), LSPG($A^TA$), and LSPG($F^TF$)).
As a result, pseudo-spectral projection (LSPG($2$)) generally
yields the best overall performance in practice, even when it produces larger
errors than the other methods for a fixed value of $p$.
Also, for a fixed value of $p$, LSPG($A$) is faster than LSPG($A^TA$) because the stiffness matrix
$A(\xi)$ obtained from the finite element discretization is sparser
than $A^T(\xi)A(\xi)$. That is, the number of nonzero entries to be
evaluated in the numerical quadrature is smaller for LSPG($A$) than
for LSPG($A^TA$), and exploiting this sparsity structure
makes LSPG($A$) faster than LSPG($A^TA$).
\subsubsection{Diffusion problem 2: Lognormal random coefficient and random
forcing}\label{sec:hard_example}
This example uses the same random field $a(x,\xi)$
\eqref{eq:def_rf}, but instead employs a random forcing term\footnote{In \cite{mugler2013convergence}, it was shown that stochastic Galerkin solutions of an analytic problem
$a(\xi)u(\xi)=f(\xi)$ with this type of forcing are divergent in the $\ell^2$-norm
of solution errors as $p$ increases.} $f(x,\xi) =
\exp(\xi)|\xi-1|$. Again, $\xi$ follows a standard normal distribution and
normalized Hermite polynomials are used as polynomial basis.
We consider the second output QoI, $F=F_2$.
As shown in Figure \ref{fig:sg_fail},
the stochastic Galerkin method fails to converge monotonically in three error
measures as the
stochastic polynomial basis is enriched. In fact, it exhibits monotonic
convergence only in the error measure it minimizes (for which monotonic
convergence is guaranteed).
\begin{figure}
\caption{Relative errors versus polynomial degree for
stochastic Galerkin (i.e., LSPG($A$)) for diffusion problem 2: lognormal random coefficient and random forcing.}
\label{fig:sg_fail}
\end{figure}
\begin{figure}
\caption{Pareto front of relative error measures versus
wall time for varying polynomial degree $p$ for diffusion problem 2: lognormal random coefficient and random forcing}
\label{fig:hard_quad}
\end{figure}
Figure \ref{fig:hard_quad} shows that this trend applies to other methods as
well when effectiveness is viewed with respect to CPU time; each technique exhibits monotonic convergence in its tailored error
measure only. Moreover, the Pareto fronts (black dotted lines) in each
subgraph of Figure \ref{fig:hard_quad} show that the LSPG method tailored for
a particular error measure is Pareto optimal in terms of minimizing the error
and the computational wall time. In the next experiments, we examine goal-oriented LSPG($F^TF$) for a varying number of output quantities of interest $n_o$ and its effect on the stability constant $C$. Figure \ref{fig:hard_quad_qoi_three} reports three error measures computed
using all four LSPG methods. For LSPG($F^TF$), the first linear functional
$F=F_1$ is applied with $g(\xi) = \sin(\xi)$ and a varying number of outputs
$n_o \in \{100, 150, 200, 225\}$. When
$n_o=225$, LSPG($F^TF$) and LSPG($2$) behave similarly in all three
weighted $\ell^2$-norms. This is because when $n_o=225=n_x$,
$\sigma_\text{min}(F) > 0$, so the stability constants $C$ for $\Theta =
F^TF$ in Table \ref{tab:norm_equivalence} are bounded. Figure \ref{fig:hard_quad_qoi_qoi} reports relative errors in the quantity of
interest $\eta_Q$ associated with linear functionals $F=F_1$ for two different functions $g(\xi)$,
$g_1(\xi) = \sin(\xi)$ and $g_2(\xi) = \xi$. Note that LSPG($A$) and
LSPG($A^TA$) fail to converge, whereas LSPG($2$) and LSPG($F^TF$) converge; this can be explained by the stability constants $C$ in Table \ref{tab:norm_equivalence}, where $\sigma_\text{max}(A) = 26.43$ and $\sigma_\text{min}(A) = 0.48$ for the linear operator $A(\xi)$ of this problem. LSPG($F^TF$) converges monotonically and, as expected, produces the smallest error (for a
fixed polynomial degree $p$) of all the methods.
\begin{figure}
\caption{Relative error measures versus polynomial degree for
a varying dimension $n_o$ of the output matrix $F=F_1$ for diffusion problem 2: lognormal random coefficient and random forcing. Note that LSPG($F^TF$) behaves similarly to LSPG($2$) when $n_o = n_x$.}
\label{fig:hard_quad_qoi_three}
\end{figure}
\begin{figure}
\caption{Relative error in the output QoI for diffusion problem 2: lognormal random coefficient and random forcing, for linear functionals $F=F_1$ with (a) $g_1(\xi) = \sin(\xi)$ and (b) $g_2(\xi) = \xi$, where $G \in [0,1]^{100\times n_x}$.}
\label{fig:hard_quad_qoi_qoi}
\end{figure}
\subsubsection{Diffusion problem 3: Gamma random coefficient and random
forcing}\label{sec:gamma_example}
This section considers a stochastic diffusion problem parameterized by a random variable with a Gamma distribution, where $a(x,\xi) \equiv \exp(1 + 0.25 a_1(x) \xi + 0.01 \sin(\xi))$,
the density is $\rho(\xi) \equiv \frac{\xi^{\alpha} \exp(-\xi)}{\bar{\Gamma}(\alpha+1)}$ with $\bar{\Gamma}$ the Gamma function, $\xi \in [0, \infty)$, and $\alpha=0.5$. Normalized Laguerre polynomials (which are orthogonal with respect to $\langle \cdot, \cdot \rangle_\rho$) are used as the polynomial basis. We consider a random forcing
term $f(x,\xi) =
\log_{10}(\xi) |\xi - 1|$ and the second QoI $F(\xi) = F_2(\xi) = \RHS{(\xi)}^T
\bar{M}$. Note that numerical quadrature is the only option for computing the expectations arising in this problem.
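A Gauss rule adapted to this Gamma weight can be constructed from the recurrence coefficients of the generalized Laguerre polynomials via the Golub--Welsch procedure; a sketch (numpy only, with $\alpha=0.5$ as in this section):

```python
import numpy as np

# Golub-Welsch construction of a Gauss rule for the Gamma density
# rho(xi) = xi^alpha * exp(-xi) / Gamma(alpha+1), so that
# E[g] ~= sum_k g(xi_k) w_k with the normalized density absorbed into w.
def gauss_gamma(n, alpha=0.5):
    k = np.arange(n)
    diag = 2.0 * k + alpha + 1.0              # generalized Laguerre
    off = np.sqrt(k[1:] * (k[1:] + alpha))    # recurrence coefficients
    J = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    xi, V = np.linalg.eigh(J)                 # nodes = eigenvalues of J
    w = V[0, :] ** 2                          # weights sum to one
    return xi, w

xi_q, w_q = gauss_gamma(20, alpha=0.5)
```

The rule integrates polynomials of degree up to $2n-1$ exactly against the Gamma density, so low-order moments of the $\xi$ distribution are recovered exactly.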
Figure \ref{fig:gamma_synthetic_quad} shows the
results of solving the problem with the four different LSPG methods. Again,
each version of LSPG monotonically decreases its corresponding target weighted
$\ell^2$-norm as the stochastic basis is enriched. Further, each LSPG method
is Pareto optimal in terms of minimizing its targeted error measure and the
computational wall time.
\begin{figure}
\caption{Pareto front of relative error measures versus wall time for varying polynomial degree $p$ for diffusion problem 3: Gamma random coefficient and random
forcing. Note that each method is Pareto optimal in terms of minimizing its targeted error measure and computational wall time.}
\label{fig:gamma_synthetic_quad}
\end{figure}
\subsection{Stochastic convection-diffusion problem: Lognormal random coefficient and
deterministic forcing}\label{sec:cd}
We now consider a non-self-adjoint example, the steady-state convection-diffusion equation
\begin{equation}
\label{eq:cd_strong}
\left\{
\begin{array}{r l l}
-\epsilon \nabla \cdot (a({x},\,\xi) \nabla u({x},\,\xi) ) + \vec{w} \cdot \nabla u({x},\,\xi) &= f({x},\,\xi) &\text{ in } D \times \Gamma,
\\
u({x},\,\xi) &= g_D({x}) &\text{ on } \partial D \times \Gamma
\end{array}
\right.
\end{equation}
where $D=[-1,\,1]\times[-1,\,1]$
, $\epsilon$ is the viscosity parameter, and $u$ satisfies inhomogeneous Dirichlet boundary conditions
\begin{equation}
g_D({x},\,y) =
\left\{
\begin{array}{c l}
0 &\text{ on } \left(\{-1\}\times[-1,1]\right) \cup \left([-1,1]\times\{1\}\right) \cup \left([-1,0]\times\{-1\}\right),\\
1 &\text{ on } \left(\{1\}\times[-1,1]\right) \cup \left([0,1]\times\{-1\}\right).
\end{array}
\right.
\end{equation}
The inflow boundary consists of the bottom and the right portions of $\partial
D$, $\left([-1,1]\times\{-1\}\right) \cup \left(\{1\}\times[-1,1]\right)$ \cite{elman2014finite}. We consider a zero forcing
term $f(x,\xi) =0$ and a constant convection velocity $\vec{w} \equiv (-\sin \frac{\pi}{6}, \cos \frac{\pi}{6})$. We consider the convection-dominated case (i.e., $\epsilon = \frac{1}{200}$).
\begin{figure}
\caption{Pareto front of relative error measures versus wall time for varying polynomial degree $p$ for stochastic convection-diffusion problem: lognormal random coefficient and deterministic forcing term.}
\label{fig:cd_easy}
\end{figure}
For the spatial discretization, we use essentially the same finite element discretization as above (bilinear $Q_1$ elements), applied to the weak formulation of \eqref{eq:cd_strong}. In addition, we use the streamline-diffusion method \cite{brooks1982streamline} to stabilize the discretization in elements with a large mesh Peclet number. (See \cite{elman2014finite}, Ch.~8 for details.) This spatial discretization leads to a parameterized linear system of the form \eqref{eq:param_sys} with
\begin{equation}\label{eq:param_sys_cd}
A(\xi) = \epsilon D(a(\xi); \xi) + C(\xi) + S(\xi),
\end{equation}
where $D(a(\xi);\xi)$, $C(\xi)$ and $S(\xi)$ are the diffusion term, the
convection term, and the streamline-diffusion term, respectively, and $[\RHS{(\xi)}]_i = \int_D f(x,\xi) \varphi_i (x) dx$. For this
numerical experiment, the number of degrees of freedom in the spatial domain is
$n_x=225$ (15 nodes in each spatial dimension), excluding boundary nodes. For LSPG(Q), the first linear function
$F=F_1$ is applied with $n_o=100$ outputs and $g(\xi) = \exp(\xi)|\xi-1|$.
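As a rough sanity check (ours, with an assumed uniform mesh spacing $h=2/16$ for the $15\times15$ interior grid and the standard element Peclet number $\mathrm{Pe}_h = \|\vec{w}\| h / (2\epsilon)$), one can confirm that this configuration is indeed convection-dominated at the element level, which is why streamline-diffusion stabilization is used:

```python
import math

# Assumed values: 15x15 interior grid on [-1, 1]^2 gives h = 2/16;
# the convection velocity has unit magnitude and eps = 1/200.
h = 2.0 / 16.0
w = (-math.sin(math.pi / 6.0), math.cos(math.pi / 6.0))
w_norm = math.hypot(*w)            # = 1
eps = 1.0 / 200.0
peclet = w_norm * h / (2.0 * eps)  # element Peclet number
print(peclet)                      # 12.5 >> 1: stabilization is warranted
```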
Figure \ref{fig:cd_easy} shows the numerical results computed using the stochastic
Galerkin method and three LSPG methods (LSPG(I), LSPG(PS), LSPG(Q)). Note that
the operator $A(\xi)$ is not symmetric positive-definite in this case; thus
LSPG($A$) is not a valid projection scheme (the Cholesky factorization $A(\xi)
= C(\xi)C(\xi)^T$ does not exist and the energy norm of the solution error
$\| \errorsol{x}\|_A^2$ cannot be defined), and stochastic
Galerkin does not minimize any measure of the solution error. These results show that pseudo-spectral projection is Pareto optimal for achieving relatively larger error measures; this is because of its relatively low cost since, in contrast to the other methods, it does not require the solution of a coupled linear system of dimension $n_x{n_\psi}$. In addition, the stochastic Galerkin projection is not Pareto optimal for any of the examples; this is caused by the lack of
optimality of stochastic Galerkin in this case and highlights the significant benefit of
optimal spectral projection, which is offered by the stochastic LSPG method. In
addition, the residual $\eta_r$ and solution error $\eta_e$ incurred by
LSPG(Q) are uncontrolled, because $n_o< n_x$ and thus $\sigma_\text{min}(F) = 0$.
Finally, note that each LSPG method is Pareto optimal for small errors in
its targeted error measure.
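The degeneracy $\sigma_\text{min}(F)=0$ for $n_o<n_x$ can be seen concretely: any wide matrix has a nontrivial null space, and solution-error components along it are invisible to the output quantity. A small illustration (ours), with a random stand-in for $F$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_o, n_x = 3, 5                      # fewer outputs than unknowns: n_o < n_x
F = rng.standard_normal((n_o, n_x))  # random stand-in for the output operator F

# With a full SVD, the trailing rows of Vt span the null space of F.
_, s, Vt = np.linalg.svd(F, full_matrices=True)
v = Vt[-1]                            # unit-norm direction with F v = 0
print(len(s), np.linalg.norm(F @ v))  # only n_o nonzero singular values; F v ~ 0
```

Error components along `v` leave the output $F x$ unchanged, so they cannot be controlled by minimizing the output error.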
\subsection{Numerical experiment with analytic computations}
For the results presented above, expected values were computed using numerical
quadrature (using the \textsc{Matlab} function {\tt integral}). This is a
practical and general approach for numerically computing the required
integrals of \eqref{eq:T1_exp}--\eqref{eq:T3_exp}, and is the only option when
analytic computations are not available (as in Section
\ref{sec:gamma_example}). In this section, we briefly discuss how the costs
change if analytic methods based on closed-form integration exist and are used
for these integrals. Note that, in general, analytic computations are
unavailable, for example, if the random variables have finite support (e.g., truncated Gaussian random variables as shown in \cite{ullmann2012efficient}).
\textbf{Computing $T_1$}. Analytic computation of $T_1$ is possible if either $E[A^TMMA \psi_l]$ or $E[MA \psi_l]$ can be evaluated analytically. For LSPG(SG) and LSPG(I), if $E[A \psi_l]$ can be evaluated analytically so that the following gPC expansion can be obtained,
\begin{equation}\label{eq:pce_for_rf}
A(\xi) = \sum_{l=1}^{\infty} A_l \psi_l (\xi), \quad A_l \equiv E[ A \psi_l],
\end{equation}
where $A_l \in \mathbb{R}^{n_x \times n_x}$, then $T_1$ can be computed analytically. Replacing $A(\xi)$ with the series of \eqref{eq:pce_for_rf} for LSPG(SG) ($M(\xi)=C^{-1}(\xi)$) and LSPG(I) ($M(\xi)=I_{n_x}$) yields
\begin{align}\label{eq:T1_sg}
T_1^{\text{SG}} = \sum_{l=1}^{n_a} E \left[ \psi\psi^T \otimes \left( A_l \psi_l \right)\right] &= \sum_{l=1}^{n_a} E [ \psi\psi^T \psi_l \otimes A_l ],
\end{align}
and
\begin{align}\label{eq:T1_lspg}
T_1^{\text{LSPG}} = E[ \psi \psi^T \otimes \sum_{k=1}^{n_a} \sum_{l=1}^{n_a}\left(
A_k \psi_k \right)^T \left( A_l \psi_l \right)] = \sum_{k=1}^{n_a}
\sum_{l=1}^{n_a} E[ \psi \psi^T \psi_k \psi_l \otimes A_k^T A_l ],
\end{align}
where the expectations of triple or quadruple products of the polynomial basis (i.e., $E[\psi_i \psi_j \psi_k]$ and $E[\psi_i \psi_j \psi_k \psi_l]$) can be computed analytically.
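For instance, with normalized probabilists' Hermite polynomials and a standard Gaussian variable, these product expectations can be evaluated to machine precision by Gauss--Hermite quadrature, which is exact for polynomial integrands of the relevant degree. A Python sketch (our illustration, not the paper's code):

```python
import math

import numpy as np

# Gauss-Hermite quadrature for the probabilists' weight exp(-x^2/2):
nodes, weights = np.polynomial.hermite_e.hermegauss(30)
weights = weights / np.sqrt(2.0 * np.pi)  # normalize so weights sum to 1

def psi(n, x):
    # Normalized probabilists' Hermite polynomial: He_n / sqrt(n!)
    c = np.zeros(n + 1)
    c[n] = 1.0
    return np.polynomial.hermite_e.hermeval(x, c) / math.sqrt(math.factorial(n))

def E_triple(i, j, k):
    # E[psi_i psi_j psi_k] for a standard normal variable
    return float(np.sum(weights * psi(i, nodes) * psi(j, nodes) * psi(k, nodes)))

print(E_triple(1, 1, 2))  # sqrt(2): E[x * x * (x^2-1)/sqrt(2)] = (3-1)/sqrt(2)
print(E_triple(1, 1, 3))  # 0 by parity
```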
For LSPG(PS), an analytic computation of $T_1$ is straightforward because $M(\xi) A(\xi) = I_{n_x}$ and, thus,
\begin{align}
T_1^{\text{PS}} = E[ \psi\psi^T \otimes I_{n_x} ] = I_{n_x {n_\psi}}.
\end{align}
Similarly, analytic computation of $T_1$ is possible for LSPG(Q) if there
exists a closed-form expression for $E[F \psi_l]$ or $E[F^TF \psi_l]$, which again is in general not available.
\textbf{Computing $T_2$}. Analytic computation of $T_2$ can be performed in a similar way. If the random function $\RHS{(\xi)}$ can be represented using a gPC expansion,
\begin{equation}\label{eq:f_pce}
\RHS(\xi) = \sum_{l=1}^{n_b} \RHS{}_l \psi_l(\xi), \quad \RHS{}_l \equiv E[ \RHS{} \psi_l ],
\end{equation}
then, for LSPG(SG) and LSPG(I), $T_2$ can be evaluated analytically by computing
expectations of double or triple products of the polynomial basis (i.e.,
$E[\psi_i \psi_j ]$ and $E[\psi_i \psi_j \psi_k]$). For LSPG(PS) and LSPG(Q),
however, an analytic computation of $T_2$ is typically unavailable because a closed-form expression for $A^{-1}(\xi)$ does not exist.
\begin{figure}
\caption{Pareto front of relative error measures versus wall time for varying polynomial degree $p$ for diffusion problem 2: Lognormal random coefficient and random
forcing. Analytic computations are used as much as possible to evaluate expectations.}
\label{fig:hard_analytic}
\end{figure}
We examine the impact of these observations on the cost of solving the problem studied in Section
\ref{sec:hard_example}: the steady-state stochastic diffusion equation \eqref{eq:str_diff} with lognormal random field $a(x,\xi)$ as in \eqref{eq:def_rf} and random forcing $f(x,\xi) = \exp(\xi)|\xi-1|$.
Figure \ref{fig:hard_analytic} reports results for this problem when expectations are computed analytically. For LSPG(SG), analytic computation of the expectations $\{T_i\}_{i=1}^3$ requires fewer terms than for LSPG(I). In fact, comparing \eqref{eq:T1_sg} and \eqref{eq:T1_lspg} shows that computing $T_1^{\text{LSPG}}$ requires computing and assembling $n_a^2$ terms, whereas computing $T_1^{\text{SG}}$ involves only $n_a$ terms. Additionally, the quantities $\{ A_k^T A_l \}_{k,l = 1}^{n_a}$ appearing in the terms of $T_1^{\text{LSPG}}$ in \eqref{eq:T1_lspg} are typically denser than their counterparts $\{A_k\}_{k=1}^{n_a}$ appearing in \eqref{eq:T1_sg}, as the sparsity pattern of $\{A_k\}_{k=1}^{n_a}$ is identical to that of the finite element stiffness matrices. As a result, LSPG(SG) is Pareto optimal for small computational wall times for every error metric considered. When the polynomial degree $p$ is small, LSPG(SG) is computationally faster than LSPG(PS), as LSPG(PS) requires the solution
of $A(\xi^{(k)})u(\xi^{(k)}) = f(\xi^{(k)})$ at each quadrature point and cannot exploit analytic computation. As the stochastic basis is enriched, however, each tailored LSPG method outperforms the other LSPG methods in minimizing its corresponding target error measure.
\section{Conclusion} \label{sec:conclusion}
In this work, we have proposed a general framework for optimal spectral projection
wherein the solution error can be minimized in weighted $\ell^2$-norms of
interest. In particular, we propose two new methods that minimize the
$\ell^2$-norm of the residual and the $\ell^2$-norm of the error in an output
quantity of interest. Further, we showed that when the linear operator is symmetric positive
definite, stochastic Galerkin is a
particular instance of the proposed methodology for a specific choice of
weighted $\ell^2$-norm. Similarly, pseudo-spectral projection is a particular case of
the method for a specific choice of weighted $\ell^2$-norm.
Key results from the numerical experiments include:
\begin{itemize}
\item For a fixed stochastic subspace, each LSPG method minimizes its targeted
error measure (Figure \ref{fig:easy_pvsq}).
\item For a fixed computational cost, each LSPG method often minimizes its
targeted error measure (Figures \ref{fig:hard_quad},
\ref{fig:gamma_synthetic_quad}). However, this does
not always hold, especially for smaller computational costs (and smaller
stochastic-subspace dimensions) when larger errors
are acceptable. In
particular, pseudo-spectral projection (LSPG(PS)) is often significantly less
expensive than other methods for a fixed
stochastic subspace, as it does not require solving a coupled linear system of
dimension $n_x{n_\psi}$ (Figures \ref{fig:easy_quad}, \ref{fig:cd_easy}). Alternatively, when analytic computations are possible,
stochastic Galerkin (LSPG(SG)) may be significantly less expensive than other methods for a fixed
stochastic subspace (Figure \ref{fig:hard_analytic}).
\item Goal-oriented LSPG(Q) can have uncontrolled errors in error
measures that deviate from the output-oriented error measure $\eta_Q$ when the
linear operator $F$ has more columns $n_x$ than rows $n_o$ (Figure
\ref{fig:hard_quad_qoi_three}). This is because the minimum singular value
is zero in this case
(i.e., $\sigma_\text{min}(F)=0$), which leads to unbounded stability
constants in other error measures (Table \ref{tab:norm_equivalence}).
\item Stochastic Galerkin often leads to divergence in different error
measures (Figure \ref{fig:sg_fail}). In this case, applying LSPG with the appropriate
targeted error measure can significantly improve accuracy (Figure
\ref{fig:hard_quad}).
\end{itemize}
Future work includes developing efficient sparse solvers for the stochastic LSPG methods and extending the methods to parameterized nonlinear systems.
\end{document} |
\begin{document}
\title[On Stable embeddability of partitions]{On stable embeddability of partitions}
\author{Dongseok KIM}
\address{Department of Mathematics \\ Kyungpook National University \\ Taegu 702-201 Korea }
\email{[email protected], [email protected]}
\thanks{}
\author{Jaeun Lee}
\address{Department of Mathematics\\ Yeungnam University\\ Kyongsan, 712-749, Korea }
\email{[email protected]}
\thanks{The first author was supported in part by KRF Grant
M02-2004-000-20044-0. The second author was supported in part by
Com$^2$Mac-KOSEF} \subjclass[2000]{Primary 05A17; Secondary 94A99}
\begin{abstract}
Several natural partial orders on integral partitions, such as the
embeddability, the stable embeddability, the bulk embeddability and
the supermajorization, raise in the quantum computation, bin-packing
and matrix analysis. We find the implications between these partial
orders. For integral partitions whose entries are all powers of a
fixed number $p$, we show that the embeddability is completely
determined by the supermajorization order and we find an algorithm
to determine the stable embeddability.
\end{abstract}
\maketitle
\section{Introduction}
A \emph{partition} $\lambda$ is a finite sequence of nonincreasing
positive real numbers, denoted by $\lambda=[\lambda_1, \lambda_2,
\ldots, \lambda_n]$ where $\lambda_i\ge \lambda_j$ for all $i\le j$.
$\lambda_i$ is called an \emph{entry} of $\lambda$. A partition
$\lambda$ is an \emph{integral} partition if all $\lambda_i \in
\mathbb{N}$. Throughout the article, we assume all partitions are
integral unless we state differently. Let $\lambda=[\lambda_1,
\lambda_2, \ldots, \lambda_m]$, $\mu=[\mu_1,\mu_2, \ldots, \mu_n]$
be two partitions. We can naturally define an \emph{addition} of two
partitions, $\lambda + \mu$ by a reordered juxtaposition, a
\emph{product} of two partitions, $\lambda\times\mu$ by
$[\lambda_i\cdot \mu_j]$ and a scalar multiplication,
$\alpha\lambda$ by $[\alpha\cdot \lambda_i]$. We denote
$\overset{n}{\overbrace{\lambda\times\lambda\times
\ldots\times\lambda}}$ by $\lambda^{\times n}$. We recall
definitions of partial orders on partitions. For more terms and
notations, we refer to \cite{bhatia:gtm, Stanley:enumerative2}. A
partition $\lambda$ \emph{supermajorizes} a partition $\mu$, or
$\lambda \succcurlyeq_S \mu$, if for every $x \in \mathbb{N}$
$$\sum_{\lambda_i\ge x} \lambda_i \ge \sum_{\mu_j\ge x} \mu_j.$$
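Since the partial sums $\sum_{\lambda_i\ge x}\lambda_i$ only change as $x$ crosses an entry value, the supermajorization order can be decided by checking finitely many thresholds. A small Python sketch (our illustration, not part of the original text):

```python
def supermajorizes(lam, mu):
    """Return True if lam supermajorizes mu: for every threshold x, the sum of
    lam-entries >= x dominates the sum of mu-entries >= x. Both sums are step
    functions that only change at entry values, so those thresholds suffice."""
    thresholds = set(lam) | set(mu)
    return all(sum(a for a in lam if a >= x) >= sum(b for b in mu if b >= x)
               for x in thresholds)

print(supermajorizes([5, 3], [4, 2, 2]))  # True
# mu_1 = [4, 1, ..., 1] does not supermajorize lambda_1 = [2, 2, 2, 2]:
print(supermajorizes([4] + [1] * 8, [2, 2, 2, 2]))  # False (fails at x = 2)
```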
A partition $\lambda$ \emph{embeds} into $\mu$ if there exists a map
$\varphi : \{1, 2, \ldots, m\} \rightarrow \{1, 2, \ldots, n\}$ such
that
$$\sum_{i\in\varphi^{-1}(j)} \lambda_i \le \mu_j$$
for all $j$, denoted by $\lambda\hookrightarrow\mu$. This embedding problem can
be interpreted as a \emph{bin-packing problem} by replacing the
entries of a partition $\lambda$ by the sizes of the blocks and the
entries of a partition $\mu$ by the sizes of the bins. It is well
known that the question of whether $\lambda$ embeds into $\mu$ is
computable but NP-hard.
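For small partitions, embeddability can still be decided by exhaustive search over all assignment maps $\varphi$; the exponential running time of the following Python sketch (ours) reflects the NP-hardness just mentioned:

```python
from itertools import product

def embeds(lam, mu):
    """Brute-force test of whether lam embeds into mu: try every map phi
    from blocks of lam to bins of mu and check the capacity constraints."""
    m, n = len(lam), len(mu)
    for phi in product(range(n), repeat=m):
        loads = [0] * n
        for i, j in enumerate(phi):
            loads[j] += lam[i]
        if all(loads[j] <= mu[j] for j in range(n)):
            return True
    return False

print(embeds([2, 2, 2, 2], [4, 4]))  # True: two blocks per bin
print(embeds([4, 2, 2], [5, 3]))     # False
```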
Kuperberg found an interesting embeddability, $\lambda$
\emph{bulk-embeds} into $\mu$, or $\lambda {\overset{b}\hookrightarrow} \mu$, if for every
rational $\epsilon > 0$, there exists an $N$ such that
$\lambda^{\times N} \hookrightarrow \mu^{\times
N(1+\epsilon)}$~\cite{Kuperberg:hybrid}. He showed the following
theorem.
\begin{thm}\rm{~\cite{Kuperberg:hybrid}} Let $\lambda$ and $\mu$ be two
partitions. Then $\lambda {\overset{b}\hookrightarrow} \mu$ if and only if
$$||\lambda||_p \le ||\mu||_p$$
for all $p \in [1,\infty]$. \label{th:embed}
\end{thm}
He also showed the following implications,
\begin{align}
\lambda \hookrightarrow \mu \Longrightarrow\lambda \preccurlyeq_S \mu
\Longrightarrow \lambda {\overset{b}\hookrightarrow} \mu, \nonumber\\
\lambda{\overset{b}\hookrightarrow} \mu \not\Longrightarrow\lambda \preccurlyeq_S \mu
\not\Longrightarrow \lambda\hookrightarrow\mu. \label{kup}
\end{align}
One can consider a partition as the capacity of a quantum memory
\cite{NV:majorization}. Kuperberg introduced a stable embeddability
in the presence of an auxiliary memory \cite{Kuperberg:hybrid}. A
partition $\lambda$ \emph{stably embeds} into a partition $\mu$ if
there exists a partition $\nu$ such that
$\lambda\times\nu\hookrightarrow\mu\times\nu$, denoted by $\lambda{\overset{s}\hookrightarrow} \mu$.
He then asked about the relation between the stable embeddability and the
supermajorization order. We answer this question and compare these
embeddabilities in Section \ref{compa}. A complete classification of
the stable embeddability is unknown. Since the sizes of
classical memories are all powers of $2$, it is natural to study the
case where all entries of partitions are powers of a fixed positive
integer $p$. For these partitions, we find that the embeddability is
completely determined by the supermajorization order. We also find
an algorithm to determine the stable embeddability in
Section~\ref{stable}.
\section{Comparison of embeddabilities}
For partitions $\lambda, \mu$, we find the following diagram about
the implications of these embeddabilities.
$$
\begin{matrix}
\lambda\hookrightarrow\mu & \Longrightarrow & \lambda\preccurlyeq_S\mu \\
\Downarrow & & \Downarrow \\ \lambda{\overset{s}\hookrightarrow}\mu & \Longrightarrow &
\lambda{\overset{b}\hookrightarrow}\mu
\end{matrix}
$$
The converses of all these implications are false. We provide
counterexamples in Example~\ref{counterexample}. Moreover, there is
no relation between the stable embeddability and the
supermajorization order, which addresses the question raised
in~\cite{Kuperberg:hybrid}. For these counterexamples, we need to
establish a few facts about these embeddabilities. One can see that if
$\lambda\hookrightarrow\mu$, then $||\lambda||_p\le ||\mu||_p$ for all $p\in
[1,\infty].$
\begin{thm} Let $\lambda, \mu$ be partitions. If $\lambda\neq\mu$
and $\lambda \hookrightarrow\mu$, then $||\lambda||_p < ||\mu||_p$ for all
$1<p<\infty$. \label{st}
\end{thm}
\begin{proof}
Let
$$\lambda=[a_1, a_2, \ldots, a_l],~~ \mu
= [b_1, b_2, \ldots, b_m].$$
We will prove this by contradiction. Suppose $\lambda\hookrightarrow \mu$,
$\lambda\neq\mu$ and $||\lambda||_p = ||\mu||_p$ for some
$1<p<\infty$. Then there exists a map $\varphi : \{1, 2, \ldots, l\}
\rightarrow \{1, 2, \ldots, m\}$ presenting the embedding. We divide
cases by the sizes of $l, m$. If $l>m$, then there exist $i_1, i_2$
and $j$ such that $\{i_1, i_2\}\subset \varphi^{-1}(j)$. Since
$$\alpha^p+\beta^p < (\alpha+\beta)^p$$ for all $p> 1$ and nonzero
$\alpha, \beta$, we have
\begin{align}
a_{i_1} +a_{i_2} \le b_j \Longrightarrow a_{i_1}^p+a_{i_2}^p < b_j^p
\Longrightarrow ||\lambda||_p^p < ||\mu||_p^p . \label{inject}
\end{align}
If $l<m$, then there is $k$ such that $\varphi^{-1}(k)=\emptyset$
and hence
\begin{align}
\sum_i (a_{i})^p \le (\sum_j (b_j)^p) - (b_k)^p \Longrightarrow
||\lambda||_p^p < ||\mu||_p^p .\label{surject}
\end{align}
If $l=m$, then there exists $j$ such that $a_k=b_k$ for all $k<j$
and $a_j\neq b_j$, because $\lambda\neq\mu$.
Obviously $a_j < b_j$. Since $\lambda\hookrightarrow
\mu$, either two or more boxes embed into the box of
size $b_j$, or a part of the box of size $b_j$ has not been used. If
two or more boxes of $\lambda$ embed into the box of
size $b_j$, then we find a contradiction by equation~\ref{inject}.
If a part of the box of size $b_j$ has not been used, then we find a
contradiction by equation~\ref{surject}. \end{proof}
The following corollary shows the essentiality of $\epsilon$ in
Theorem~\ref{th:embed}.
\begin{cor} Let $\lambda, \mu$ be partitions.
If $||\lambda||_p = ||\mu||_p$ for some $1<p<\infty$ and
$\lambda\neq \mu$, then $\lambda^{\times n}{\not\hookrightarrow} \mu^{\times n}$ for
all $n$. \label{strict}
\end{cor}
\begin{proof}
Suppose $\lambda^{\times n}\hookrightarrow \mu^{\times n}$ for some $n$. Since
$\lambda\neq \mu$, we find $\lambda^{\times n}\neq \mu^{\times n}$.
By Theorem \ref{st} if $\lambda^{\times n}\neq\mu^{\times n}$ and
$\lambda^{\times n}\hookrightarrow \mu^{\times n}$ for some $n$, then
$||\lambda^{\times n}||_p < ||\mu^{\times n}||_p$ for all
$1<p<\infty$. But one can observe that for any partition $\lambda$,
$$||\lambda^{\times n}||_p=(||\lambda||_p)^n.$$
Thus we find a contradiction that for all $1<p<\infty$,
$$||\lambda||_p < ||\mu||_p.$$
\end{proof}
\begin{cor} Let $\lambda, \mu$ be two partitions.
If $||\lambda||_p = ||\mu||_p$ for some $1<p<\infty$ and
$\lambda\neq \mu$, then $\lambda{\overset{s}{\not\hookrightarrow}} \mu$. \label{cor:proper}
\end{cor}
\begin{proof}
Suppose $\lambda{\overset{s}\hookrightarrow} \mu$. There exists a partition $\nu$ such that
$\lambda\times \nu \hookrightarrow \mu\times\nu$. Since $\lambda\times \nu \neq
\mu\times\nu$ by Theorem~\ref{st}, for all $1<p<\infty$
$$||\lambda\times \nu||_p < ||\mu\times\nu||_p.$$
One can easily see that $||\lambda||_p = ||\mu||_p$ for some
$1<p<\infty$ implies that for the same $p$,
$$||\lambda\times \nu||_p = ||\lambda||_p||\nu||_p = ||\mu||_p||\nu||_p =||\mu\times\nu||_p.$$
Therefore, $\lambda{\overset{s}{\not\hookrightarrow}}\mu$.
\end{proof}
\begin{exa}
Let $\lambda_1=[2,2,2,2]$, $\lambda_2=[8, 8, 8, 8, 4, 4, 4, 4]$,
$\lambda_3=[4,2,2]$, $\mu_1=[4, \overset{8}{\overbrace{1, 1, \ldots,
1}}]$, $\mu_2=[3,3,3]$, $\mu_3=[16,\overset{16}{\overbrace{2, 2,
\ldots, 2}},\overset{16}{\overbrace{1, 1, \ldots, 1}}]$ and
$\mu_4=[5,3]$. Then $\lambda_1{\overset{s}\hookrightarrow}\mu_1$ but $\lambda_1\not\preccurlyeq_S\mu_1$
and $\lambda_1{\not\hookrightarrow}\mu_1$. $\lambda_1\preccurlyeq_S\mu_2$ but $\lambda_1
{\not\hookrightarrow} \mu_2$. $\lambda_2{\overset{b}\hookrightarrow}\mu_3$ but $\lambda_2{\overset{s}{\not\hookrightarrow}}\mu_3$ and
$\lambda_2\not\preccurlyeq_S\mu_3$. $\lambda_3\preccurlyeq_S\mu_4$ but
$\lambda_3{\overset{s}{\not\hookrightarrow}}\mu_4$.\label{counterexample}
\end{exa}
\begin{proof}
If we set $\nu=[2,1,1]$, we get
$$\lambda_1\times\nu=[4,4,4,4,\overset{8}{\overbrace{2, 2, \ldots,
2}}]~~ \mathrm{and} ~~
\mu_1\times\nu=[8,4,4,\overset{8}{\overbrace{2, 2, \ldots, 2}},
\overset{16}{\overbrace{1, 1, \ldots, 1}}].$$ Then one can see that
$\lambda_1{\overset{s}\hookrightarrow}\mu_1$. Since $$\sum_{(\lambda_1)_i\ge 2}
(\lambda_1)_i =8
> 4 =\sum_{(\mu_1)_j\ge 2} (\mu_1)_j,$$ we see $\lambda_1\not\preccurlyeq_S\mu_1$.
It is clear that $\lambda_1{\not\hookrightarrow}\mu_1$. To show $\lambda_2{\overset{b}\hookrightarrow}\mu_3$,
one can check that
$$||\lambda_2||_p \le ||\mu_3||_p$$ for all $p\in[1,\infty]$ and
the equality holds at
$$p=\frac{\ln(1 + \sqrt{5})}{\ln 2}>1.$$
Since $\lambda_2 \neq \mu_3$, we find that $\lambda_2 {\overset{s}{\not\hookrightarrow}} \mu_3$
by Corollary~\ref{cor:proper}. Clearly $\lambda_3\preccurlyeq_S\mu_4$.
Suppose $\lambda_3{\overset{s}\hookrightarrow}\mu_4$; then there exists a partition
$\nu=[\nu_1, \nu_2, \ldots, \nu_n]$ such that $\lambda_3\times\nu \hookrightarrow \mu_4\times\nu$. Let
$p$ be the power of $2$ in the prime factorization of the greatest
common divisor $(\nu_1, \nu_2, \ldots, \nu_n)$ of $\nu_1, \nu_2,
\ldots, \nu_n$. First we look at the entries of $\lambda_3\times\nu$:
all these entries are multiples of $2^{p+1}$. Since
$||\lambda_3\times\nu||_1=||\mu_4\times\nu||_1$, there will be no
space in $\mu_4\times \nu$ which was not used in the embedding, i.e.,
for all $j$,
$$\sum_{i\in\varphi^{-1}(j)} (\lambda_3\times\nu)_i = (\mu_4\times\nu)_j.$$
Therefore, all entries of $\mu_4\times\nu$ have to be multiples of
$2^{p+1}$. Since all entries of $\mu_4$ are odd numbers, $\nu_i$ has
to be a multiple of $2^{p+1}$ and so does the greatest common
divisor of $\nu_1, \nu_2, \ldots, \nu_n$. It contradicts the
hypothesis of $p$. All others should be straightforward.
\end{proof}
\label{compa}
\section{Stable embeddability}
Let $\lambda, \mu$ be two partitions. Let us consider the following
algorithm, which is called the \emph{first fit}
algorithm~\cite{AM:firstrandom}: starting from $\lambda_1$,
place it into any entry of $\mu$ in which it fits, and then repeat this
step for $\lambda_2$, and so on. Usually this is not an efficient
algorithm~\cite{johnson:bestvsfirst}. It is obvious that if the
first fit algorithm works, then $\lambda\hookrightarrow \mu$. The converse is
not true in general, but with some conditions on $\lambda$ we can
show that it determines the embeddability of $\lambda$ into $\mu$.
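A minimal Python sketch (ours) of this first-fit procedure, processing blocks largest first and always choosing the first bin with enough remaining capacity (the theorem below allows any fitting bin). The second call shows the failure that can occur without extra hypotheses on $\lambda$, even though an embedding exists ($3+3\hookrightarrow 6$ and $4\hookrightarrow 4$):

```python
def first_fit(lam, mu):
    """Place each block of lam (largest first) into the first bin of mu with
    enough remaining capacity. Return the leftover capacities, or None if
    some block does not fit."""
    remaining = list(mu)
    for block in sorted(lam, reverse=True):
        for j, cap in enumerate(remaining):
            if block <= cap:
                remaining[j] -= block
                break
        else:
            return None
    return remaining

# The entries of [8, 4, 2, 2] form a divisibility chain, the setting of the
# theorem below, so first fit succeeds whenever an embedding exists:
print(first_fit([8, 4, 2, 2], [9, 8]))  # [1, 0]: success
print(first_fit([4, 3, 3], [6, 4]))     # None, although an embedding exists
```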
\begin{thm}
Let $\lambda=[\lambda_1,\lambda_2, \ldots, \lambda_s], \mu =[\mu_1$,
$\mu_2$, $\ldots $, $\mu_t]$ be partitions with
$\lambda_i|\lambda_j$ for all $i\ge j$. If $\lambda\hookrightarrow \mu$, then
the first fit algorithm works. \label{dividable}
\end{thm}
\begin{proof}
Let us induct on $s$. It is trivial for $s=1$ because this is the
first step of the algorithm. Suppose this is true for $s=n$, we look
at the case $\lambda=[\lambda_1,\lambda_2, \ldots,\lambda_{n+1}]$.
Since $\lambda\hookrightarrow \mu$, there is a map $\varphi : \{ 1, 2, \ldots ,
n+1\} \rightarrow \{1, 2, \ldots, t\}$ which represents the
embedding of $\lambda$ into $\mu$, let us denote $\varphi(1)=j$.
Then we will construct another embedding representing map $\psi$
after we decide where we put $\lambda_1$, say $\mu_k$, $i.e.,$ $\psi
(1)=k$. To construct $\psi$, let us compare $\varphi(1)$ and
$\psi(1)$. If $\varphi(1)=\psi(1)$, we pick $\psi = \varphi$. If
$\varphi(1)=j\neq k=\psi(1)$, first we need to prove that there
exists a subset $P$ of $\varphi^{-1}(k) =\{\lambda_{i_1},
\lambda_{i_2}, \ldots , \lambda_{i_l}\}$ such that
$$\sum_{j\in P} \lambda_j \le \lambda_1 \hskip 1cm \mathrm{and} \hskip 1cm
\sum_{j\in P^c} \lambda_j \le \mu_k-\lambda_1.$$ First we divide
cases by the sizes of $\lambda_1$ and $\lambda_2$. If
$\lambda_1=\lambda_2$, then we pick $P=\{2\}$. If $\lambda_1
\neq\lambda_2$, then $\lambda_1>\lambda_2$. If
$$\sum_{j=2}^{l} \lambda_j \le \lambda_1,$$ we can pick $P=\{2, 3,
\ldots , l\}$. Otherwise, there exists an integer $m$ such that
$$ \sum_{j=2}^{m} \lambda_j\le \lambda_1 < \sum_{j=2}^{m+1}\lambda_j
= (\sum_{j=2}^{m}\lambda_j) + \lambda_{m+1}.$$ If we divide by
$\lambda_{m+1}$, we have
$$ \sum_{j=2}^{m} \frac{\lambda_j}{\lambda_{m+1}}\le
\frac{\lambda_1}{\lambda_{m+1}} <
\sum_{j=2}^{m}\frac{\lambda_j}{\lambda_{m+1}} + 1.$$ Since
$\lambda_i|\lambda_j$ for all $i\ge j$, all these three numbers are
integers, so the first two have to be the same. We choose $P=\{2, 3,
\ldots , m\}$. Once we have such a $P$, we can define
$$\psi(i) = \left\{
\begin{array}{cl}
k & ~~\mathrm{if}~~ i=1, \\
j & ~~\mathrm{if}~~ i \in \varphi^{-1}(k)\cap P, \\
\varphi (i) & ~~\mathrm{if}~~ i\not\in\varphi^{-1}(k)
\cup\{1\} ~~\mathrm{or}~~ i\in \varphi^{-1}(k)\cap P^{c}.
\end{array} \right.
$$
Then $\psi|_{\tilde\lambda}$ shows $\tilde\lambda=[\lambda_2,\ldots,
\lambda_{n+1}] \hookrightarrow \tilde\mu=[\mu_1, \ldots,
\mu_k-\lambda_1,\ldots, \mu_t]$. By the induction hypothesis, the
first fit algorithm works.
\end{proof}
Let $\mathcal{P}$ be the set of all partitions whose entries are all
powers of a fixed number $p$. For these partitions, we can show
that the supermajorization order completely determines the embeddability.
Instead of the standard notation, we can use
$$\lambda=[a_0, a_1, a_2, \ldots, a_s]_p$$
where $a_i$ is the number of entries $p^i$.
\begin{thm} Let $\lambda, \mu$ be partitions in $\mathcal{P}$.
$\lambda\hookrightarrow\mu$ if and only if
$\lambda\preccurlyeq_S \mu$. \label{superpp}
\end{thm}
\begin{proof}
We only need to show that if $\lambda\preccurlyeq_S\mu$, then $\lambda\hookrightarrow\mu$,
because of equation \ref{kup}. Suppose $\lambda\preccurlyeq_S\mu$. Let
$$\lambda=[ a_0, a_1,a_2,
\ldots, a_s]_p, ~~ \mu=[b_0, b_1, b_2, \ldots, b_t]_p.$$ Without
loss of generality, we assume $a_s\neq 0\neq b_t$. Obviously $s\le
t$. We induct on the number of the boxes of $\lambda$, say $k$. If
$k=1$, then $a_s=1$ and
$$
1p^s=\lambda_{\ge p^s} \le \mu_{\ge p^s}=\sum_{j=s}^{t}b_jp^{j}
$$
implies $\lambda\hookrightarrow\mu$. For nonzero $a_s$, we pick a box of size
$p^s$, put it into a box of size $p^t$ in $\mu$. Then for $\lambda$
we subtract $1$ from $a_s$ and for $\mu$, we subtract $1$ from $b_t$
and distribute the remainder $p^t-p^s$ in base $p$ into $\mu$.
One can observe that all these numbers which have been distributed
are bigger than or equal to $p^s$. Thus resulting partitions still
have the same supermajorization order. By the induction hypothesis,
we find an embedding of $\lambda'=[a_0, a_1, \ldots, a_{s-1},
a_s-1]_p$ into $\mu'=[b_0', b_1', \ldots, b_{t-1}', b_t-1]_p$. But,
it is easy to recover an embedding of $\lambda$ into $\mu$.
\end{proof}
Now we look at the stable embeddability for partitions in
$\mathcal{P}$.
\begin{thm}
Let $\lambda, \mu$ be partitions in $\mathcal{P}$. If $\lambda{\overset{s}\hookrightarrow}
\mu$, then there exists a partition $\nu$ in $\mathcal{P}$ such that
$\lambda\times\nu \hookrightarrow \mu\times\nu$. \label{Cpower}
\end{thm}
\begin{proof}
Suppose $\lambda{\overset{s}\hookrightarrow} \mu$; then there is a partition $\nu=[c_1, c_2, \ldots,
c_k]$ such that $\lambda\times\nu \hookrightarrow \mu\times\nu$.
We can uniquely write each $c_j$ in base $p$ as
$$c_j= c_{j,0} p^0 + c_{j,1}p^1 + c_{j,2} p^{2} + \ldots + c_{j,l(j)}
p^{l(j)}$$ where the $c_{j,i}$ are nonnegative integers less than $p$
and $ c_{j,l(j)}\neq 0$. Using these expressions we can subdivide
$\nu$ to get a refinement $$\tilde\nu=[\sum_{j} c_{j,0}, \sum_{j}
c_{j,1}, \ldots , \sum_j c_{j,i}, \ldots, \sum_{j} c_{j, t}]_p$$
where the sum runs over all nonzero $c_{j,i}$ for each $i$. If the
boxes $\sum[ c_{i_k}\times p^{j_k}]$ of $\lambda\otimes\nu$ were
embedded into $[c_m\times p^{m'}]$ in $\mu\otimes\nu$, we can show
that the refinement of $\sum[ c_{i_k}\times p^{j_k}]$ can be
embedded in the refinement of $[c_m\times p^{m'}]$. Precisely, if
$$p^{j_1}c_{i_1}+ p^{j_2}c_{i_2} + \ldots + p^{j_n}c_{i_n} \le
p^{m'}c_m$$ where $j_1\le j_2 \le \ldots \le j_n$, $c_{i_t}\neq 0$
and
$$c_{i_t}=c_{i_t,0}p^{0} + c_{i_t,1}p^{1} + \ldots + c_{i_t,l(i_t)}
p^{l(i_t)}$$ for all $t$, then
$$\sum_{\alpha =1}^{n}\sum_{\beta =0}^{l(i_\alpha)}
[c_{i_{\alpha},\beta}\times p^{j_{\alpha}+\beta}]\hookrightarrow
\sum_{\gamma=0}^{l(m)}[c_{m,\gamma}\times p^{m'+\gamma}].$$
First we look at the case, $n = 1$. If $p^{j_1}c_{i_1} \le p^{m'}
c_m$, one can easily see that
$$\sum_{\beta}[c_{i_1,\beta}\times p^{j_1+\beta}]\preccurlyeq_S
\sum_{\gamma}[c_{m,\gamma}\times p^{m'+\gamma}]$$ because we are
comparing two integers in base $p$. By Theorem~\ref{superpp},
$$\sum_{\beta}[c_{i_1,\beta}\times p^{j_1+\beta}]\hookrightarrow
\sum_{\gamma}[c_{m,\gamma}\times p^{m'+\gamma}].$$
For the case $n>1$, we view the integer $$\sum_{\alpha
=1}^{n}\sum_{\beta =0}^{l(i_\alpha)} c_{i_{\alpha},\beta}\times
p^{j_{\alpha}+\beta}$$ as a sum of the integers $$\sum_{\beta
=0}^{l(i_\alpha)} c_{i_{\alpha},\beta}\times p^{j_{\alpha}+\beta}$$ in
base $p$, which reduces the problem to the case $n=1$. Keeping
track of the additions, we can recover the embedding
$$\sum_{\alpha =1}^{n}\sum_{\beta =0}^{l(i_\alpha)}
[c_{i_{\alpha},\beta}\times p^{j_{\alpha}+\beta}]\hookrightarrow
\sum_{\gamma=0}^{l(m)}[c_{m,\gamma}\times p^{m'+\gamma}].$$
Moreover, this process does not involve the other terms. Therefore,
we can rewrite $\nu$ in the desired shape.
\end{proof}
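The refinement $\tilde\nu$ used in this proof is computed by expanding each entry of $\nu$ in base $p$ and collecting the digits. A short sketch of this step (our own helper, not the paper's notation):

```python
# Refinement step from the proof of Theorem "Cpower": each entry c_j of nu
# is written in base p, and nu is subdivided into the partition
# [sum_j c_{j,0}, sum_j c_{j,1}, ...]_p, whose i-th count collects the
# base-p digits c_{j,i} of all entries of nu.

def refine(nu, p):
    """Return the counts [a_0, a_1, ...] of the refinement tilde-nu,
    where a_i is the number of boxes of size p**i."""
    counts = []
    for c in nu:
        i = 0
        while c > 0:
            c, digit = divmod(c, p)
            if i >= len(counts):
                counts.append(0)
            counts[i] += digit
            i += 1
    return counts
```

For example, with $p=2$ the partition $\nu=[5,6]$ refines to one box of size $1$, one of size $2$, and two of size $4$, i.e. the counts $[1,1,2]_2$; the total number of unit boxes, $5+6=11$, is preserved.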
\begin{cor}
Let $\lambda=[a_i]_p, \mu=[b_i]_p$ be partitions in $\mathcal{P}$,
and fix an index $i$. Let $\tilde\lambda,$ $\tilde\mu$ be the partitions
obtained from $\lambda,$ $\mu$ by replacing $a_i$ and $b_i$ with
$a_i-\mathrm{Min}\{ a_i, b_i\}$ and $b_i-\mathrm{Min}\{ a_i, b_i\}$,
respectively (so that at least one of the two new entries is $0$).
Then $\lambda{\overset{s}\hookrightarrow} \mu$ if and only if
$\tilde\lambda{\overset{s}\hookrightarrow} \tilde\mu$. \label{shape}
\end{cor}
\begin{proof}
We assume $a_i\le b_i$ for a fixed $i$. Suppose $\lambda{\overset{s}\hookrightarrow} \mu$.
By Theorem~\ref{Cpower}, we can find
$$\nu=[c_0, c_{1}, \ldots, c_n]_p$$
in $\mathcal{P}$, where $c_k$ is the number of boxes of size $p^k$. Now
$\lambda\otimes\nu\hookrightarrow \mu\otimes\nu$, and since $\lambda\otimes\nu,
\mu\otimes\nu$ satisfy the hypothesis of Theorem~\ref{dividable}, we
can use the first fit algorithm. We first place all boxes whose sizes are
bigger than $p^i\times p^n$. Then we consider the $a_i\cdot c_n$ boxes
of size $p^i\times p^n$ in $\lambda\otimes\nu$. Since none of the boxes of
size $p^i\times p^n$ in $\mu\otimes\nu$ were used in the previous
steps, we can put these into boxes of size $p^i\times p^n$ of
$\mu\otimes\nu$. Then we place the rest of the boxes of size $p^{n+i}$.
We repeat the same process for the next $a_i\cdot c_{n-1}$ boxes
of size $p^i\times p^{n-1}$ in $\lambda\otimes\nu$. This embedding
sends all boxes $p^{i}\otimes \nu$ of $\lambda\otimes\nu$ into boxes
$p^{i}\otimes \nu$ of $\mu\otimes\nu$. Thus, $\tilde\lambda\otimes\nu\hookrightarrow
\tilde\mu\otimes\nu$. The converse is obvious.
\end{proof}
\subsection{An algorithm to determine the stable embeddability}
We introduce an algorithm to determine $\nu$. Let $\lambda, \mu$ be
partitions in $\mathcal{P}$, i.e., $\lambda=[a_0, a_1, \ldots,
a_n]_p$ and $\mu=[b_0, b_{1}, \ldots, b_m]_p,$ where $a_i, b_i$ are
the numbers of boxes of size $p^i$ in $\lambda, \mu$ respectively. By
Theorem~\ref{superpp}, we can decide whether $\lambda$ can be
embedded in $\mu$ or not. Before we apply the algorithm, we modify
the shape of $\lambda, \mu$ by Corollary~\ref{shape} so that $a_i$
and $b_i$ are never both nonzero. If $a_n\neq 0\neq b_m$ and
$m<n$, then $\lambda$ cannot be stably embedded into $\mu$. For
convenience, we will assume that $p$ is $2$, that $\nu$ is a rational
partition whose entries are non-positive powers of $2$, and that $c_k$
is the number of boxes in $\nu$ of size $2^{-k}$.
Initially, we start with $c_0=1$. There are $a_n$ boxes of size
$2^n$ in $\lambda\times[c_0 \times 1]$ and no boxes of size
$2^n$ in $\mu\times[c_0\times 1]$. But there is room for
$$b_m \times 2^{m-n} +b_{m-1} \times 2^{m-n-1} + \ldots + b_{n+1}\times 2$$
boxes of size $2^n$ in $\mu\times[c_0\times 1]$. If
$$b_m \times
2^{m-n} + b_{m-1} \times 2^{m-n-1} + \ldots + b_{n+1}\times 2 \ge
a_n,$$ we set $c_1$ to zero and keep the difference, say $M$, for the
next step. Otherwise we set $$c_1 =\lceil \frac{a_n - (b_m
\times 2^{m-n} + b_{m-1} \times 2^{m-n-1} + \ldots + b_{n+1}\times
2)}{b_m} \rceil$$ and $M=0$, where $\lceil x \rceil$ is the smallest
integer bigger than or equal to $x$. Then we look at
$\lambda\times[c_0\times 1, c_1\times\frac{1}{2}]$ and
$\mu\times[c_0\times 1,c_1\times\frac{1}{2}]$. We have $a_{n-1}\cdot
c_0 + a_n\cdot c_1$ boxes of size $2^{n-1}$ in
$\lambda\times[c_0\times 1, c_1\times\frac{1}{2}]$, which we compare with
$$
2\times M+ c_1\times ( b_m\times 2^{m-n} + b_{m-1} \times 2^{m-n-1}
+ \ldots + b_{n+1}\times 2) + b_{n-1}\cdot c_0
$$
and we repeat exactly the same process. For $N\ge m$, we find $c_{N+1}$
by comparing the two terms $$\alpha= b_{m-1}\times c_{N+m-1} +
b_{m-2}\times c_{N+m-2} + \ldots + b_0\times c_{N} + 2\times M$$ and
$$\beta=a_{n}\times c_{N+n} + a_{n-1}\times c_{N+n-1} + \ldots +
a_{0}\times c_{N},$$ because these numbers count exactly how many
boxes of size $2^{-N}$ there are in the products
$$\lambda\times[c_0\times 1, c_1\times\frac{1}{2},
\ldots, c_N\times\frac{1}{2^{N}}]$$ and $$\mu\times[c_0\times1,
c_1\times\frac{1}{2}, \ldots, c_N\times\frac{1}{2^N}],$$ where $M$ is
the number of boxes left over from the previous step. Then
$c_{N+1}$ is $\lceil (\beta-\alpha)/b_m \rceil$ if $\beta-\alpha > 0$
(and we set $M=0$), and $0$ otherwise (in which case we set
$M= \alpha-\beta$). Then we compare the next biggest boxes. We stop if
we get $n$ consecutive $0$'s among the $c_i$. Let $N$ be the largest
integer such that $c_N$ is non-zero. We repeat the process starting with
$c_0=(b_m)^{N+1}$. One can easily see that we no longer have to use
$\lceil\,\cdot\,\rceil$ because $(b_m)^{N+1-k}\mid c_k$ for all $0\le k \le
N+1$. Finally we multiply by $2^N$ to make $\nu$ an integral partition.
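A crude complement to this algorithm is a brute-force search for a witness $\nu$ of bounded length. The sketch below is our own helper, not the paper's program: since every entry of $\lambda\otimes\nu$ and $\mu\otimes\nu$ is again a power of $p$, Theorem~\ref{superpp} reduces the test $\lambda\otimes\nu\hookrightarrow\mu\otimes\nu$ to a supermajorization check, compared threshold by threshold.

```python
# Brute-force search for a stable-embeddability witness nu in P:
# nu is encoded by counts [c_0, c_1, ...], c_k boxes of size p**k.
# The product of two partitions in P stays in P, so Theorem "superpp"
# decides lambda (x) nu -> mu (x) nu via supermajorization.
from itertools import product as cartesian

def supermajorized(lam, mu):
    thresholds = sorted(set(lam) | set(mu))
    return all(sum(x for x in lam if x >= t) <= sum(y for y in mu if y >= t)
               for t in thresholds)

def box_product(lam, nu):
    """Entries of the product partition lambda (x) nu."""
    return [a * c for a, c in cartesian(lam, nu)]

def find_witness(lam, mu, p, max_len=4, max_count=4):
    """Search small nu = [c_0, ...]_p witnessing lambda (x) nu -> mu (x) nu;
    return the counts of a witness, or None if none exists in range."""
    for counts in cartesian(range(max_count + 1), repeat=max_len):
        if not any(counts):
            continue
        nu = [p**k for k, c in enumerate(counts) for _ in range(c)]
        if supermajorized(box_product(lam, nu), box_product(mu, nu)):
            return list(counts)
    return None
```

For example, with $p=2$ the pair $\lambda=[2,2]$, $\mu=[4]$ admits a witness, whereas $\lambda=[4]$, $\mu=[2,2]$ admits none, consistently with $\|\lambda\|_\infty=4>2=\|\mu\|_\infty$.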
To compare the optimality of such $\nu$'s, we define the
\emph{length} of $\nu=[c_0, c_1, \ldots, c_n]_p$ to be $n+1$, where
$c_0\neq 0 \neq c_n$. For the given $\lambda, \mu$ we collect all
$\nu \in \mathcal{P}$ with $\lambda\otimes \nu\hookrightarrow
\mu\otimes \nu$ into a set ${\mathcal T}$. Then we define a partial order on ${\mathcal T}$
by the lexicographic order on
$$( \mathrm{length}\hskip .2cm \mathrm{of}\hskip .2cm \widehat\lambda(\nu),
\frac{c_1}{c_0}, \ldots, \frac{c_n}{c_0}).$$ Moreover, ${\mathcal T}$ is
closed under addition, tensor product and scalar multiplication.
\begin{thm}
1) The algorithm stops in finite time if and only if
$\lambda{\overset{s}\hookrightarrow}\mu$.
2) Let $D$ be a partition which is obtained from the algorithm. Then
$D$ is a minimal element with respect to the partial order we
defined on ${\mathcal T}$. \label{optimal}
\end{thm}
\begin{proof}
We want to show that if $\lambda{\overset{s}\hookrightarrow}\mu$, then the algorithm must
stop after finitely many steps, and that the partition found by the
algorithm has the smallest length. Since $\lambda{\overset{s}\hookrightarrow}\mu$, ${\mathcal T}$ is nonempty and we
find a minimal element in ${\mathcal T}$, say $$\tau=[t_0, t_1, \ldots,
t_l]_p.$$ First we assume the existence, i.e., that the algorithm
gives us an integral partition
$$\nu=[c_0, c_1, \ldots, c_m]_p.$$ By the minimality,
we have $l\le m$; but the process itself provides $l\ge m$. We
compare $$c_0\tau=[c_0\cdot t_0, c_0\cdot t_1, \ldots, c_0\cdot
t_l]_p$$ and $$t_0 \nu=[t_0\cdot c_0, t_0\cdot c_1, \ldots, t_0\cdot
c_m]_p.$$ Suppose $c_0\tau\neq t_0\nu$. Then there is a $j$ such that
$$c_0\cdot t_j < t_0\cdot c_j.$$ But this obviously contradicts the
process of the algorithm. Therefore, $c_0\tau = t_0\nu$, and this
also proves the existence.
\end{proof}
\label{stable}
\section{Discussions}
\label{discussion}
\subsection{Algebraic embeddabilities.}
Let ${\mathcal A}$ be a finite dimensional semisimple algebra over an
algebraically closed field $K$. By a simple application of the
Wedderburn--Artin theorem, we can decompose ${\mathcal A}$ into a direct sum
of matrix algebras. From a direct sum of matrix algebras ${\mathcal A}$, we
can find a unique integral partition $\lambda$, denoted by
$\lambda({\mathcal A})$. Conversely, to an integral partition $\lambda$, one can assign
a direct sum of matrix algebras
$${\mathcal A}(\lambda) = \bigoplus_{i=1}^{m} {\mathcal M}_{\lambda_i},$$
where ${\mathcal M}_{\lambda_i}$ is the algebra of all $\lambda_i\times
\lambda_i$ matrices over $K$. For integral partitions, one can see
that $\lambda\hookrightarrow\mu$ if and only if ${\mathcal A}(\lambda)$ embeds into
${\mathcal A}(\mu)$ as a $K$-algebra. All the other partial orders can be
naturally defined for direct sums of matrix algebras. The question
of embeddability between algebraic objects such as groups,
rings and modules is a long-standing difficult one. For
some algebraic objects, such as sets and vector spaces, the question is
straightforward. Embeddability between modules over a
complex simple Lie algebra is completely determined by the
Littlewood--Richardson rule and Schur's lemma. The authors have made
some progress on stable embeddability, where the product is replaced by the
tensor product, between modules over a complex simple Lie
algebra \cite{lie}. The stable embeddability between other algebraic
objects should be an interesting question.
\subsection{Analytic embeddabilities.}
Let $\lambda, \mu$ be partitions in $\mathcal{P}$. The algorithm we
defined in Section \ref{stable} leads us to a new embeddability:
$\lambda$ \emph{weakly stably embeds} into $\mu$, denoted by
$\lambda{\overset{w. s}{\hookrightarrow}}\mu$, if there exists a rational partition $\nu$ of
infinite length such that all entries of $\nu$ are nonpositive
powers of the fixed number $p$ and
$$\sum_{i=0}^{\infty} c_i p^{-i} <\infty,$$
where $c_i$ is the number of entries $p^{-i}$. One can see that
\begin{eqnarray}
\begin{matrix}
\lambda{\overset{s}\hookrightarrow}\mu & \Longrightarrow & \lambda{\overset{w. s}{\hookrightarrow}}\mu & \Longrightarrow & \lambda{\overset{b}\hookrightarrow}\mu \\
& & & & \Updownarrow \\
& & ||\lambda||_p < ||\mu||_p, ~ \forall p\in(1,\infty) &
\Longrightarrow & ||\lambda||_p \le ||\mu||_p,~ \forall
p\in[1,\infty]
\end{matrix}
\label{ws}
\end{eqnarray}
It is not known whether the converses of the first row of equation
\eqref{ws} are true for partitions in $\mathcal{P}$. The authors
have written a program that performs the algorithm described in
Section~\ref{stable} to test whether $||\lambda||_p < ||\mu||_p$ for all
$p\in(1,\infty)$, together with equality for $p=1$ and $p=\infty$, implies
$\lambda{\overset{s}\hookrightarrow}\mu$. We have not found an answer yet.
\end{document} |
\begin{document}
\date{}
\title{Geometry of a weak para-$f$-structure}
\begin{abstract}
We study the geometry of the weak almost para-$f$-structure and its satellites.
This allows us to produce totally geodesic foliations and Killing vector fields, and
also to take a fresh look at the para-$f$-structure introduced by A.\,Bucki and A.\,Miernowski.
We demonstrate this by generalizing several known results on almost para-$f$-manifolds.
First, we express the covariant derivative of $f$ using a new tensor on a metric weak para-$f$-structure,
then we prove that on a weak para-${\cal K}$-manifold the characteristic vector fields are Killing and $\ker f$ defines a totally geodesic foliation.
Next, we show that a para-${\cal S}$-structure is rigid (i.e., a weak para-${\cal S}$-structure is a para-${\cal S}$-structure),
and that a metric weak para-$f$-structure with parallel tensor $f$ reduces to a weak para-${\cal C}$-structure.
We obtain corollaries for $p=1$, i.e., for a weak almost paracontact~structure.
\vskip1.5mm\noindent
\textbf{Keywords}: para-$f$-structure; distribution; totally geodesic foliation; Killing vector field
\vskip1.5mm
\noindent
\textbf{Mathematics Subject Classifications (2010)} 53C15, 53C25, 53D15
\end{abstract}
\section*{Introduction}
A distribution (or a foliation, associated with integrable distribution) on a pseudo-Riemannian manifold
is \textit{totally geodesic} if any geodesic of a manifold that is tangent to the distribution at one point is tangent to it at all points.
Such foliations have the simplest extrinsic geometry of the leaves and appear in Riemannian geometry, e.g., in the theory of $\mathfrak{g}$-foliations,
as kernels of degenerate tensors, e.g., \cite{AM-1995,FP-2017}.
We are motivated by the problem of finding structures on manifolds, which lead to totally geodesic foliations and Killing vector fields,
see~\cite{fip}.
A well-known source of totally geodesic foliations
is
a~para-$f$-structure on a smooth manifold $M^{2n+p}$, defined
using (1,1)-tensor field $f$ satisfying $f^3 = f$ and having constant rank $2n$, see \cite{BN-1985,m1976}.
The~paracontact geometry (a counterpart to contact geometry)
is a higher-dimensional analog of almost product ($p=0$) \cite{g1967} and almost paracontact ($p=1$) structures \cite{CFG-survey}.
A~para-$f$-structure with $p=2$ arises in the study of hypersurfaces in almost contact manifolds, e.g., \cite{BL-69}.
Interest in para-Sasakian manifolds is due to their connection with para-K\"{a}hler manifolds and their role in mathematical~physics.
If there exists a set of vector fields $\xi_1, \ldots , \xi_p$ with certain properties, then $M^{2n+p}$ is said to
have a para-$f$-structure with complemented frames.
In this case, the tangent bundle $TM$ splits into three complementary subbundles:
$\pm1$-eigen-distributi\-ons for $f$ composing a $2n$-dimensional distribution $f(TM)$ and a $p$-dimensional distribution $\ker f$
(the kernel of $f$).
In \cite{RWo-2}, we introduced the ``weak" metric structures that generalize
an $f$-structure and a para-$f$-structure, and allow us to take a fresh look at the classical theory.
In~\cite{Rov-arxiv}, we studied the geometry of a weak $f$-structure and its satellites, which are analogs of ${\mathcal K}$-, ${\mathcal S}$- and ${\mathcal C}$-manifolds.
In~this paper, using a similar approach, we study geometry of a weak para-$f$-structure and its important cases
related to a pseudo-Riemannian manifold endowed with a totally geodesic foliation.
A~natu\-ral question arises: {how rich are weak para-$f$-structures compared to the classical ones}?
We~study this question for weak analogs of para-${\mathcal K}$-, para-${\mathcal S}$- and para-${\mathcal C}$- structures.
The proofs of main results use the properties of new tensors, as well as the constructions required in the classical~case.
The~theory presented here can be used to deepen our knowledge of pseudo-Riemannian geometry of
manifolds equipped with distributions.
This article consists of an introduction and five sections.
In Section~\ref{sec:1}, we discuss the properties of ``weak" metric structures generalizing some classes of para-$f$-manifolds.
In Section~\ref{sec:2} we express the covariant derivative of $f$ of a weak para-$f$-structure using a new tensor
and show that on a weak para-${\mathcal K}$-manifold the characteristic vector fields are Killing and $\ker f$ defines a totally geodesic foliation.
Also, for a weak almost para-${\mathcal C}$-structure and a weak almost para-${\mathcal S}$-structure, $\ker f$ defines a totally geodesic foliation.
In Section~\ref{sec:3a}, we apply to weak almost para-${\mathcal S}$-manifolds the tensor~$h$ and prove stability of some known results.
In Section~\ref{sec:3} we complete the result in \cite{RWo-2} and prove the rigidity theorem that a weak para-${\mathcal S}$-structure is a para-${\mathcal S}$-structure.
In Section~\ref{sec:4}, we show that a weak para-$f$-structure with parallel tensor $f$ reduces to a weak para-${\mathcal C}$-structure,
we also give an example of such a structure.
\section{Preliminaries}
\label{sec:1}
Here, we describe ``weak" metric structures generalizing certain classes of para-$f$-manifolds and discuss their properties.
A \textit{weak para-$f$-structure} on a smooth manifold $M^{\,2n+p}$ is defined by a $(1,1)$-tensor field $f$ of rank $2\,n$
and a~nonsingular $(1,1)$-tensor field $Q$ satisfying, see \cite{RWo-2},
\begin{equation}\label{E-fQ-1}
f^3 - fQ = 0,\qquad
Q\,\xi=\xi\quad (\xi\in\ker f).
\end{equation}
If $\ker f=\{X\in TM: f(X)=0\}$ is parallelizable, then we fix vector fields $\xi_i\ (1\le i\le p)$, which span $\ker f$,
and their dual one-forms $\eta^i$. We get a~\textit{weak almost para-$f$-structure}
(a weak almost paracontact structure for $p=1$), see~\cite{RWo-2},
\begin{equation}\label{2.1}
f^2 = Q -\sum\nolimits_{i}\eta^i\otimes\xi_i, \quad \eta^i(\xi_j)=\delta^i_j \,.
\end{equation}
Using \eqref{2.1} we get $f(TM)=\bigcap_{i}\ker\eta^i$ and that $f(TM)$ is $f$-invariant, i.e.,
\begin{equation}\label{2.1-D}
{f} X\in f(TM),\quad X\in f(TM).
\end{equation}
By \eqref{2.1}-\eqref{2.1-D}, $f(TM)$ is invariant for $Q$.
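As a quick numerical sanity check of \eqref{E-fQ-1} and \eqref{2.1}, one can test a concrete low-dimensional example. The following toy computation is our own illustration (not from the paper): on $\mathbb{R}^3$ with $p=1$, $n=1$, we take an $f$ of rank $2$, $\xi=e_3$, $\eta=dz$, and define $Q$ by \eqref{2.1}.

```python
# Toy verification of (E-fQ-1) and (2.1) for a hypothetical example with
# p = 1, n = 1 on R^3: f has rank 2 = 2n, xi spans ker f, and
# Q := f^2 + eta (x) xi, so that (2.1) holds by construction.
import numpy as np

f = np.array([[0., 2., 0.],
              [3., 0., 0.],
              [0., 0., 0.]])          # rank 2 = 2n
xi = np.array([0., 0., 1.])           # spans ker f
eta = np.array([0., 0., 1.])          # dual one-form, eta(xi) = 1

Q = f @ f + np.outer(xi, eta)         # (2.1): f^2 = Q - eta (x) xi

assert np.allclose(f @ f @ f, f @ Q)  # (E-fQ-1): f^3 - fQ = 0
assert np.allclose(Q @ xi, xi)        # (E-fQ-1): Q xi = xi
assert np.linalg.matrix_rank(f) == 2
```

Here $Q=\operatorname{diag}(6,6,1)$ is nonsingular, as required of a weak structure, and $\eta\circ f=0$ since the last row of $f$ vanishes.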
A weak almost $f$-structure is called \textit{normal} if the following tensor
(known for $Q={\rm id}_{TM}$, e.g., \cite{FP-2017}) is identically~zero:
\begin{align}\label{2.6X}
N^{\,(1)}(X,Y) = [{f},{f}](X,Y) - 2\sum\nolimits_{i} d\eta^i(X,Y)\,\xi_i .
\end{align}
The Nijenhuis torsion
of ${f}$ and the exterior derivative
of $\eta^i$ are given~by
\begin{align}\label{2.5}
[{f},{f}](X,Y) & = {f}^2 [X,Y] + [{f} X, {f} Y] - {f}[{f} X,Y] - {f}[X,{f} Y],\ X,Y\in\mathfrak{X}_M , \\
\label{3.3A}
d\eta^i(X,Y) & = \frac12\,\{X(\eta^i(Y)) - Y(\eta^i(X)) - \eta^i([X,Y])\},\quad X,Y\in\mathfrak{X}_M .
\end{align}
\begin{remark}\rm
A differential $k$-\textit{form} on a smooth manifold $M$ is a skew-symmetric tensor field
$\omega$ of type $(0, k)$. According to the conventions of
\cite{KN-69},
\begin{eqnarray}\label{eq:extdiff}
\nonumber
& d\omega ({X}_1, \ldots , {X}_{k+1}) = \frac1{k+1}\sum\nolimits_{\,i=1}^{k+1} (-1)^{i+1} {X}_i(\omega({X}_1, \ldots , \widehat{{X}}_i\ldots, {X}_{k+1}))\\
& +\sum\nolimits_{\,i<j}(-1)^{i+j}\,\omega ([{X}_i, {X}_j], {X}_1, \ldots,\widehat{{X}}_i,\ldots,\widehat{{X}}_j, \ldots, {X}_{k+1}),
\end{eqnarray}
where ${X}_1,\ldots, {X}_{k+1}\in\mathfrak{X}_M$ and $\,\widehat{\cdot}\,$ denotes the
operator of omission, defines a $(k+1)$-form $d\omega$ -- the \textit{exterior differential} of $\omega$.
Thus, \eqref{eq:extdiff} with $k=1$ gives~\eqref{3.3A}.
\end{remark}
If there exists a pseudo-Riemannian metric $g$ such that
\begin{align}\label{2.2}
g({f} X,{f} Y)= -g(X,Q\,Y) +\sum\nolimits_{i}
\eta^i(X)\,\eta^i(Y),\quad X,Y\in\mathfrak{X}_M,
\end{align}
then $({f},Q,\xi_i,\eta^i,g)$ is called a {\it metric weak para-$f$-structure},
$M({f},Q,\xi_i,\eta^i,g)$ is called a \textit{metric weak para-$f$-manifold}, and $g$ is called a \textit{compatible metric}.
Putting $Y=\xi_i$ in \eqref{2.2} and using \eqref{E-fQ-1}, we get
$g(X,\xi_i) = \eta^i(X)$,
thus, $f(TM)\,\bot\,\ker f$ and $\{\xi_i\}$ is an orthonormal frame of $\ker f$.
\begin{remark}\rm
According to \cite{RWo-2}, a weak almost para-$f$-structure admits a compatible pseudo-Riemannian metric if ${f}$
admits a skew-symmetric representation, i.e., for any $x\in M$ there exist a neighborhood $U_x\subset M$ and a~frame $\{e_k\}$ on $U_x$,
for which ${f}$ has a skew-symmetric matrix.
\end{remark}
The following statement is well-known for the case of $Q={\rm id}_{TM}$.
\begin{proposition}
{\rm (a)}
For a weak almost para-$f$-structure the following hold:
\[
{f}\,\xi_i=0,\quad \eta^i\circ{f}=0,\quad \eta^i\circ Q=\eta^i\quad (1\le i\le p),\quad [Q,\,{f}]=0.
\]
{\rm (b)}
For a metric weak almost para-$f$-structure
the tensor ${f}$ is skew-symmetric and the tensor $Q$ is self-adjoint, i.e.,
\begin{equation}\label{E-Q2-g}
g({f} X, Y) = -g(X, {f} Y),\quad
g(QX,Y)=g(X,QY).
\end{equation}
\end{proposition}
\begin{proof}
(a) By \eqref{E-fQ-1} and \eqref{2.1}, ${f}^2\xi_i=0$.
Applying \eqref{E-fQ-1} to $f\xi_i$, we get $f\xi_i=0$.
To show $\eta^i\circ{f}=0$, note that $\eta^i({f}\,\xi_i)=\eta^i(0)=0$, and, using \eqref{2.1-D}, we get $\eta^i({f} X)=0$ for $X\in f(TM)$.
Next, using \eqref{2.1} and ${f}(Q\,\xi_i) = {f}\,\xi_i=0$, we get
\begin{align*}
{f}^3 X = {f}({f}^2 X) = {f}\,QX -\sum\nolimits_{i}\eta^i(X)\,{f}\xi_i = {f}\,QX,\\
{f}^3 X = {f}^2({f} X) = Q\,{f} X -\sum\nolimits_{i}\eta^i({f} X)\,\xi_i = Q\,{f} X
\end{align*}
for any $X\in f(TM)$. This and $[Q,\,{f}]\,\xi_i=0$ provide $[Q,\,{f}]=Q\,{f} - {f} Q = 0$.
(b) By~\eqref{2.2}, the~restriction $Q_{|\,f(TM)}$ is self-adjoint. This and \eqref{E-fQ-1} provide (\ref{E-Q2-g}b).
For any $Y\in f(TM)$ there is $\tilde Y\in f(TM)$ such that ${f}Y=\tilde Y$.
From \eqref{2.1} and \eqref{2.2} with $X\in f(TM)$ and $\tilde Y$ we get
\begin{eqnarray*}
g(fX,\tilde Y) = g(fX, fY) \overset{\eqref{2.2}}= -g(X, QY) \overset{\eqref{2.1}}
= -g(X, f^2 Y) = -g(X, f\tilde Y),
\end{eqnarray*}
and (\ref{E-Q2-g}a) follows.
\end{proof}
\begin{remark}\rm
For a weak almost para-$f$-structure, the tangent bundle
decomposes as
$TM=f(TM)\oplus\ker f$, where $\ker f$ is a $p$-dimensional characte\-ristic distribution;
moreover,
if we assume that the symmetric tensor $Q$ is positive definite,
then $f(TM)$ decomposes into the sum of two $n$-dimensional subbundles: $f(TM)={\mathcal D}_+\oplus{\mathcal D}_-$,
corresponding to positive and negative eigenvalues of $f$,
and in this case we get
$TM={\mathcal D}_+\oplus{\mathcal D}_-\oplus\ker f$.
\end{remark}
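The splitting $f(TM)={\mathcal D}_+\oplus{\mathcal D}_-$ can be seen numerically in a toy example (our own illustration, reusing a hypothetical rank-2 tensor with $Q$ positive definite): since $f^2=Q$ on $f(TM)$, the nonzero eigenvalues of $f$ come in a positive/negative pair, while $\ker f$ contributes the zero eigenvalue.

```python
# Eigenvalue picture of the splitting TM = D_+ (+) D_- (+) ker f for a
# toy rank-2 tensor on R^3: f^2 = Q = diag(6, 6, 1) restricted to f(TM)
# gives eigenvalues +sqrt(6), -sqrt(6); the kernel gives eigenvalue 0.
import numpy as np

f = np.array([[0., 2., 0.],
              [3., 0., 0.],
              [0., 0., 0.]])
vals = np.sort(np.linalg.eigvals(f).real)

assert vals[0] < 0 < vals[2]          # one negative, one positive eigenvalue
assert abs(vals[1]) < 1e-9            # zero eigenvalue from ker f
assert np.isclose(vals[2], np.sqrt(6.)) and np.isclose(vals[0], -np.sqrt(6.))
```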
Define the difference
tensor $\widetilde{Q}$ (vanishing on a para-$f$-structure) by
\[
\widetilde{Q} = Q - {\rm id}_{TM}.
\]
By the above, $\widetilde{Q}\,\xi_i=0$ and $[\tilde{Q},{f}]=0$.
We can rewrite \eqref{2.5} in terms of the Levi-Civita connection $\nabla$ as
\begin{align}\label{4.NN}
[{f},{f}](X,Y) = ({f}\nabla_Y{f} - \nabla_{{f} Y}{f}) X - ({f}\nabla_X{f} - \nabla_{{f} X}{f}) Y;
\end{align}
in particular, since ${f}\,\xi_i=0$,
\begin{align}\label{4.NNxi}
[{f},{f}](X,\xi_i)= {f}(\nabla_{\xi_i}{f})X +\nabla_{{f} X}\,\xi_i -{f}\,\nabla_{X}\,\xi_i, \quad X\in \mathfrak{X}_M .
\end{align}
The {fundamental $2$-form} $\Phi$ on $M({f},Q,\xi_i,\eta^i,g)$ is defined by
\begin{align*}
\Phi(X,Y)=g(X,{f} Y),\quad X,Y\in\mathfrak{X}_M.
\end{align*}
Since $\eta^1\wedge\ldots\wedge\eta^p\wedge\Phi^n\ne0$,
a metric weak para-${f}$-manifold is orientable.
\begin{definition}\rm
A metric weak para-$f$-structure
$({f},Q,\xi_i,\eta^i,g)$ is called a \textit{weak para-${\mathcal K}$-structure} if it is normal and the form $\Phi$ is closed, i.e., $d \Phi=0$.
We define two subclasses of weak para-${\mathcal K}$-manifolds as follows:
\textit{weak para-${\mathcal C}$-manifolds} if $d\eta^i = 0$ for any $i$, and \textit{weak para-${\mathcal S}$-manifolds}~if
\begin{align}\label{2.3}
d\eta^i = \Phi,\quad 1\le i\le p .
\end{align}
Omitting the normality condition, we get the following: a metric weak para-$f$-structure
is called
(i)~a \textit{weak almost para-${\mathcal S}$-structure} if \eqref{2.3} is valid;
(ii)~a \textit{weak almost para-${\mathcal C}$-structure}
if $\Phi$ and $\eta^i$ are closed forms.
\end{definition}
For $p=1$, weak para-${\mathcal C}$- and weak para-${\mathcal S}$- manifolds reduce to weak
para-cosymplectic manifolds and weak
para-Sasakian manifolds, respectively.
Recall the formulas with the Lie derivative $\pounds_{Z}$ in the $Z$-direction and $X,Y\in\mathfrak{X}_M$:
\begin{eqnarray}\label{3.3B}
(\pounds_{Z}{f})X & = & [Z, {f} X] - {f} [Z, X],\\
\label{3.3C}
(\pounds_{Z}\,\eta^j)X & = & Z(\eta^j(X)) - \eta^j([Z, X]) , \\
\label{3.7}
\nonumber
(\pounds_{Z}\,g)(X,Y) &= & Z(g(X,Y)) - g([Z, X], Y) - g(X, [Z,Y])\\
& = & g(\nabla_{X}\,Z, Y) + g(\nabla_{Y}\,Z, X).
\end{eqnarray}
The following tensors are known in the theory of para-$f$-manifolds, e.g., \cite{FP-2017}:
\begin{align}
\label{2.7X}
N^{\,(2)}_i(X,Y) &= (\pounds_{{f} X}\,\eta^i)Y - (\pounds_{{f} Y}\,\eta^i)X \overset{\eqref{3.3A}}= 2\,d\eta^i({f} X,Y) - 2\,d\eta^i({f} Y,X), \\
\label{2.8X}
N^{\,(3)}_i(X) &= (\pounds_{\xi_i}{f})X \overset{\eqref{3.3B}}= [\xi_i, {f} X] - {f} [\xi_i, X],\\
\label{2.9X}
N^{\,(4)}_{ij}(X) &= (\pounds_{\xi_i}\,\eta^j)X \overset{\eqref{3.3C}}= \xi_i(\eta^j(X)) - \eta^j([\xi_i, X])
= 2\,d\eta^j(\xi_i, X).
\end{align}
For $p=1$, the tensors \eqref{2.7X}--\eqref{2.9X} reduce to the following tensors on (weak) almost paracontact manifolds:
$N^{\,(2)}(X,Y) = (\pounds_{\varphi X}\,\eta)Y - (\pounds_{\varphi Y}\,\eta)X, \
N^{\,(3)} = \pounds_{\xi}\,\varphi,\
N^{\,(4)} = \pounds_{\xi}\,\eta$ .
\begin{remark}\rm
Let $M^{2n+p}(f,Q,\xi_i,\eta^i)$ be a framed weak para-$f$-manifold.
Consider the product manifold $\bar M = M^{2n+p}\times\mathbb{R}^p$, where $\mathbb{R}^p$ is a Euclidean space
with a basis $\partial_1,\ldots,\partial_p$, and define tensor fields $\bar f$ and $\bar Q$ on $\bar M$ putting
\begin{align*}
\bar f(X, \sum a^i\partial_i) = (fX +\sum a^i\xi_i, \sum \eta^j(X)\partial_j),
\quad
\bar Q(X, \sum a^i\partial_i) = (QX, \sum a^i\partial_i) .
\end{align*}
Hence, $\bar f(X,0)=(fX,0)$, $\bar Q(X,0)=(QX,0)$ for $X\in f(TM)$,
$\bar f(\xi_i,0)=(0,\partial_i)$, $\bar Q(\xi_i,0)=(\xi_i,0)$ and
$\bar f(0,\partial_i)=(\xi_i,0)$, $\bar Q(0,\partial_i)=(0,\partial_i)$.
Then it is easy to verify that $\bar f^{\,2}=\bar Q$. The tensors $N^{\,(i)}\ (i=1,2,3,4)$ appear when we use
the integrability condition $[\bar f, \bar f]=0$ of $\bar f$ to express the normality condition
of a weak almost para-$f$-structure.
\end{remark}
\section{The geometry of a metric weak para-$f$-structure}
\label{sec:2}
Here, we study the geometry of the characteristic distribution $\ker f$,
supplement the sequence of tensors \eqref{2.6X} and \eqref{2.7X}--\eqref{2.9X}
with a new tensor $N^{\,(5)}$ and calculate the covariant derivative of $f$ on a metric weak para-$f$-structure.
A distribution ${\mathcal D}\subset TM$ is \textit{totally geodesic} if and only if
its second fundamental form vanishes, i.e., $\nabla_X Y+\nabla_Y X\in{\mathcal D}$ for any vector fields $X,Y\in{\mathcal D}$ --
this is the case when {any geodesic of $M$ that is tangent to ${\mathcal D}$ at one point is tangent to ${\mathcal D}$ at all its points}.
Any integrable and totally geodesic distribution determines a totally geodesic foliation.
A foliation, whose orthogonal distribution is totally geodesic, is said to be a Riemannian foliation.
For example, a foliation is Riemannian if it is invariant under transformations (isometries) generated by Killing vector fields.
Note that $X = X^\top + X^\bot$, where $X^\top$ is the projection of the vector $X\in TM$ onto $f(TM)$,
and $X^\bot = \sum\nolimits_{i}\eta^i(X)\,\xi_i$.
The next statement generalizes
\cite[Proposition~3]{FP-2017},
i.e., $Q={\rm id}_{ TM}$.
\begin{proposition}\label{thm6.1}
Let a metric weak para-$f$-structure be normal. Then $N^{\,(3)}_i$ and $N^{\,(4)}_{ij}$ vanish~and
\begin{align}\label{3.1KK}
N^{\,(2)}_i(X,Y) =\eta^i([\widetilde{Q} X,\,{f} Y]);
\end{align}
moreover, the characteristic distribution $\ker f$ is totally geodesic.
\end{proposition}
\begin{proof}
Assume $N^{\,(1)}(X,Y)=0$ for any $X,Y\in TM$. Taking $\xi_i$ instead of $Y$ and using the formula of Nijenhuis tensor \eqref{2.5}, we~get
\begin{eqnarray}\label{3.11}
0 & =& [{f},{f}](X,\xi_i) - 2\sum\nolimits_{j} d\eta^j(X,\xi_i)\,\xi_j \notag\\
& =& {f}^2[X,\xi_i] - {f}[{f} X,\xi_i] - 2\sum\nolimits_{j} d\eta^j(X,\xi_i)\,\xi_j.
\end{eqnarray}
For the scalar product of \eqref{3.11} with $\xi_j$, using
${f}\,\xi_i=0$, we~get
\begin{align}\label{3.11A}
d\eta^j(\xi_i,\,\cdot)=0;
\end{align}
hence, $N^{\,(4)}_{ij}=0$, see \eqref{2.9X}.
Next, combining \eqref{3.11} and \eqref{3.11A}, we get
\begin{align*}
0 = [{f},{f}](X,\xi_i) = {f}^2[X,\xi_i] - {f}[{f} X,\xi_i] = {f}\,(\pounds_{\xi_i}{f})X .
\end{align*}
Applying ${f}$ and using \eqref{2.1} and $\eta^i\circ{f}=0$, we achieve
\begin{eqnarray}\label{3.14}
\nonumber
0 &=& {f}^2 (\pounds_{\xi_i}{f})X
= Q(\pounds_{\xi_i}{f})X - \sum\nolimits_{j}\eta^j((\pounds_{\xi_i}{f})X)\,\xi_j \\
&=& Q(\pounds_{\xi_i}{f})X - \sum\nolimits_{j}\eta^j([\xi_i,{f} X])\,\xi_j.
\end{eqnarray}
Further, \eqref{3.11A} and \eqref{3.3A} yield
\begin{align}\label{3.11B}
0=2\,d\eta^j({f} X, \xi_i)
=({f} X)(\eta^j(\xi_i)) - \xi_i(\eta^j({f} X)) - \eta^j([{f} X, \xi_i])
=\eta^j([\xi_i, {f} X]).
\end{align}
Since $Q$ is non-singular, from \eqref{3.14}--\eqref{3.11B} we get $\pounds_{\xi_i}{f}=0$, i.e., $N^{\,(3)}_i=0$, see~\eqref{2.8X}.
Replacing $X$ by ${f} X$ in our assumption $N^{\,(1)}=0$ and using \eqref{2.5} and \eqref{3.3A}, we get
\begin{align}\label{2.6}
0 &= g([{f},{f}]({f} X,Y) - 2\sum\nolimits_{j} d\eta^j({f} X,Y)\,\xi_j,\ \xi_i) \notag\\
&= g([{f}^2 X,{f} Y],\xi_i) - ({f} X)(\eta^i(Y)) + \eta^i([{f} X,Y]) ,\quad 1\le i\le p.
\end{align}
Using \eqref{2.1} and
$[{f} Y, \eta^j(X) \xi_j] = ({f} Y)(\eta^j(X))\, \xi_j + \eta^j(X)[{f} Y, \xi_j]$, we rewrite \eqref{2.6}~as
\begin{equation*}
0 = \eta^i([QX, {f} Y]) -\sum \eta^j(X)\,\eta^i([\xi_j, {f} Y])
+ {f} Y(\eta^i(X)) - {f} X(\eta^i(Y)) + \eta^i([{f} X,Y]).
\end{equation*}
Since \eqref{3.11B} gives $\eta^i([{f} Y, \xi_j])=0$, the above equation becomes
\begin{align}\label{2.9}
\eta^i([QX, {f} Y]) + ({f} Y)(\eta^i(X)) - ({f} X)(\eta^i(Y)) + \eta^i([{f} X,Y]) = 0.
\end{align}
Finally, combining \eqref{2.9} with \eqref{2.7X}, we get \eqref{3.1KK}.
Using the identity
\begin{align}\label{3.Ld}
\pounds_{\xi_i}=\iota_{{\xi_i}}\,d + d\,\iota_{{\xi_i}},
\end{align}
from \eqref{3.11A} and $\eta^i(\xi_j)=\delta^i_j$ we obtain
$\pounds_{\xi_i}\,\eta^j = d (\eta^j(\xi_i)) + \iota_{\xi_i}\, d\eta^j = 0$.
On the other hand, by \eqref{3.3C} we have
\[
(\pounds_{\xi_i}\,\eta^j)X= g(X,\nabla_{\xi_i}\,\xi_j)+g(\nabla_{X}\,\xi_i,\,\xi_j),\quad
X\in\mathfrak{X}_M.
\]
Symmetrizing this and using $\pounds_{\xi_i}\,\eta^j =0$ and $g(\xi_i,\, \xi_j)=\delta_{ij}$ yield
\begin{align}\label{3.30}
\nabla_{\xi_i}\,\xi_j+\nabla_{\xi_j}\,\xi_i =0,
\end{align}
thus, the distribution $\ker f$ is totally geodesic.
\end{proof}
Recall the co-boundary formula for exterior derivative $d$ on a $2$-form $\Phi$,
\begin{eqnarray}\label{3.3}
\nonumber
d\Phi(X,Y,Z) & =& \frac{1}{3}\,\big\{X\,\Phi(Y,Z) + Y\,\Phi(Z,X) + Z\,\Phi(X,Y) \\
&& -\Phi([X,Y],Z) - \Phi([Z,X],Y) - \Phi([Y,Z],X)\big\}.
\end{eqnarray}
By direct calculation we get the following:
\begin{align}\label{3.9A}
(\pounds_{\xi_i}\,\Phi)(X,Y) = (\pounds_{\xi_i}\,g)(X, {f}Y) + g(X,(\pounds_{\xi_i}{f})Y) .
\end{align}
The following result generalizes \cite[Proposition~4]{FP-2017}.
\begin{theorem}\label{C-K}
On a weak para-${\mathcal K}$-manifold the vector fields $\xi_1,\ldots,\xi_p$ are Killing and
\begin{align}\label{6.1e}
\nabla_{\xi_i}\,\xi_j = 0,\quad 1\le i,j\le p ;
\end{align}
thus,
$\ker f$ is integrable and defines a totally geodesic Riemannian foliation with flat leaves.
\end{theorem}
\begin{proof}
By Proposition~\ref{thm6.1}, the distribution $\ker f$ is totally geodesic, see \eqref{3.30}, and $N^{\,(3)}_i=\pounds_{\xi_i}{f}=0$.
Using $\iota_{{\xi_i}}\Phi=0$ and condition $d\Phi=0$ in the identity \eqref{3.Ld},
we get $\pounds_{\xi_i}\Phi=0$. Thus, from \eqref{3.9A} we obtain $(\pounds_{\xi_i}\,g)(X, {f}Y)=0$.
To show $\pounds_{\xi_i}\,g=0$, we will examine $(\pounds_{\xi_i}\,g)(fX, \xi_j)$ and $(\pounds_{\xi_i}\,g)(\xi_k, \xi_j)$.
Using $\pounds_{\xi_i}\,\eta^j =0$,
we get
\[
(\pounds_{\xi_i}\,g)(fX, \xi_j)=(\pounds_{\xi_i}\,\eta^j)fX -g(fX, [\xi_i,\xi_j])=-g(fX, [\xi_i,\xi_j])=0.
\]
Using \eqref{3.30}, we get
$(\pounds_{\xi_i}\,g)(\xi_k, \xi_j)= -g(\xi_i, \nabla_{\xi_k}\,\xi_j+\nabla_{\xi_j}\,\xi_k) = 0$.
Thus, $\xi_i$ is a Killing vector field, i.e., $\pounds_{\xi_i} g=0$.
By $d\Phi(X,\xi_i,\xi_j)=0$ and \eqref{3.3} we obtain $g([\xi_i,\xi_j], fX)=0$, i.e., $\ker f$ is integrable.
From this and \eqref{3.30} we get $\nabla_{\xi_k}\,\xi_j=0$; thus, the sectional curvature is $K(\xi_i,\xi_j)=0$.
\end{proof}
\begin{theorem}\label{thm6.2}
For a weak almost para-${\mathcal S}$-structure,
we get $N^{\,(2)}_i=N^{\,(4)}_{ij}=0$ and
\begin{equation}\label{E-N1}
(N^{\,(1)}(X,Y))^\bot = 2\,g(X, f\widetilde{Q} Y)\,\bar\xi \,;
\end{equation}
moreover, $N^{\,(3)}_i$ vanishes if and only if $\,\xi_i$ is a Killing vector field.
\end{theorem}
\begin{proof} Applying \eqref{2.3} in \eqref{2.7X} and using skew-symmetry of ${f}$ we get $N^{\,(2)}_i=0$.
Equation \eqref{2.3} with $Y=\xi_i$ yields $d\eta^j(X,\xi_i)=g(X,{f}\,\xi_i)=0$ for any $X\in\mathfrak{X}_M$; thus, we get \eqref{3.11A},
i.e., $N^{\,(4)}_{ij}=0$.
Using \eqref{2.3} and
\[
g([f,f](X,Y), \xi_i) = g([fX,fY], \xi_i) = -2\,d\eta^i(fX,fY) = -2\,\Phi(fX, fY)
\]
for all $i$, we also calculate
\begin{eqnarray*}
& \frac12\,g(N^{\,(1)}(X,Y), \xi_i) = -d\eta^i(fX,fY) - g(\sum\nolimits_{j}d\eta^j(X,Y)\,\xi_j, \xi_i) \\
&= -\Phi(fX, fY) -\Phi(X, Y) = g(X, (f^3-f)Y) = g(X, \widetilde{Q} f Y),
\end{eqnarray*}
that proves \eqref{E-N1}.
Next, invoking \eqref{2.3} in the equality
\begin{align*}
(\pounds_{\xi_i}\,d\eta^j)(X,Y) = \xi_i(d\eta^j(X,Y)) - d\eta^j([\xi_i,X], Y) - d\eta^j(X,[\xi_i,Y]),
\end{align*}
and using \eqref{3.7}, we obtain for all $i,j$
\begin{align}\label{3.9}
(\pounds_{\xi_i}\,d\eta^j)(X,Y) = (\pounds_{\xi_i}\,g)(X, {f}Y) + g(X,(\pounds_{\xi_i}{f})Y).
\end{align}
Since $\pounds_V=\iota_{V}\circ d+d\circ\iota_{V}$, the exterior derivative $d$ commutes with the Lie derivative, i.e., $d\circ\pounds_V = \pounds_V\circ d$; hence, as in the proof of Theorem~\ref{C-K}, each $d\eta^j$ is invariant under the action of $\xi_i$, i.e., $\pounds_{\xi_i}\,d\eta^j=0$.
Therefore, \eqref{3.9} implies that $\xi_i$ is a Killing vector field if and only if $N^{\,(3)}_i=0$.
\end{proof}
\begin{theorem}\label{thm6.2C}
For a weak almost para-${\mathcal C}$-structure, we get $N^{\,(2)}_i=N^{\,(4)}_{ij}=0$, $N^{\,(1)}=[{f},{f}]$, and
\eqref{6.1e};
thus, the distribution $\ker f$ is tangent to a totally geodesic foliation with the sectional curvature $K(\xi_i,\xi_j)=0$.
Moreover, $N^{\,(3)}_i=0$ if and only if $\,\xi_i$ is a Killing vector~field.
\end{theorem}
\begin{proof}
By \eqref{2.7X} and \eqref{2.9X} and since $d\eta^i=0$, the tensors $N^{\,(2)}_i$ and $N^{\,(4)}_{ij}$ vanish on a weak almost para-${\mathcal C}$-structure.
Moreover, by \eqref{2.6X} and \eqref{3.9}, respectively, the tensor $N^{\,(1)}$ coincides with $[f,f]$,
and $N^{\,(3)}_i=\pounds_{\xi_i}{f}\ (1\le i\le p)$ vanish if and only if each $\xi_i$ is a Killing~vector.
From the equalities
\begin{align*}
3\,d\Phi(X,\xi_i,\xi_j) = g([\xi_i,\xi_j], fX), \qquad
2\,d\eta^k(\xi_j, \xi_i) = g([\xi_i,\xi_j],\xi_k)
\end{align*}
and conditions $d\Phi=0$ and $d\eta^i=0$ we obtain
\begin{align}\label{6.1d}
[\xi_i, \xi_j] & = 0,\quad 1\le i,j\le p .
\end{align}
Next, from $d\eta^i=0$ and the equality
\[
2\,d\eta^i(\xi_j,X)+2\,d\eta^j(\xi_i,X) = g(\nabla_{\xi_i}\,\xi_j+\nabla_{\xi_j}\,\xi_i, X)
\]
we obtain \eqref{3.30}: $\nabla_{\xi_i}\,\xi_j+\nabla_{\xi_j}\,\xi_i=0$. From this and \eqref{6.1d} we get \eqref{6.1e}.
\end{proof}
We will express $\nabla_{X}{f}$ using a new tensor on a metric weak para-$f$-structure.
The following assertion
generalizes \cite[Proposition~1]{FP-2017}.
\begin{proposition}\label{lem6.1}
For a metric weak para-$f$-structure
we get
\begin{eqnarray}\label{3.1}
& 2\,g((\nabla_{X}{f})Y,Z) = -3\,d\Phi(X,{f} Y,{f} Z) - 3\, d\Phi(X,Y,Z) - g(N^{\,(1)}(Y,Z),{f} X)\notag\\
& +\sum\nolimits_{i}\big( N^{\,(2)}_i(Y,Z)\,\eta^i(X) + 2\,d\eta^i({f} Y,X)\,\eta^i(Z) - 2\,d\eta^i({f} Z,X)\,\eta^i(Y)\big)\notag\\
& + N^{\,(5)}(X,Y,Z),
\end{eqnarray}
where the tensor $N^{\,(5)}(X,Y,Z)$, which is skew-symmetric in $Y$ and $Z$, is defined by
\begin{eqnarray*}
N^{\,(5)}(X,Y,Z) &=& ({f} Z)\,(g(X, \widetilde{Q}Y)) -({f} Y)\,(g(X, \widetilde{Q}Z)) +g([X, {f} Z], \widetilde{Q}Y)\\
&&-\,g([X,{f} Y], \widetilde{Q}Z) + g([Y,{f} Z] -[Z, {f} Y] - {f}[Y,Z],\ \widetilde{Q} X).
\end{eqnarray*}
\end{proposition}
\begin{proof}
Using
the skew-symmetry of ${f}$, one can compute
\begin{eqnarray}\label{3.4}
& 2\,g((\nabla_{X}{f})Y,Z) = 2\,g(\nabla_{X}({f} Y),Z) + 2\,g( \nabla_{X}Y,{f} Z) \notag\\
& = X\,g({f} Y,Z) + ({f} Y)\,g(X,Z) - Z\,g(X,{f} Y) \notag\\
& +\, g([X,{f} Y],Z) +g([Z,X],{f} Y) - g([{f} Y,Z],X) \notag\\
& +\, X\,g(Y,{f} Z) + Y\,g(X,{f} Z) - ({f} Z)\,g(X,Y)\notag\\
& +\, g([X,Y],{f} Z) + g([{f} Z,X],Y) - g([Y,{f} Z],X).
\end{eqnarray}
Using \eqref{2.2}, we obtain
\begin{align}\label{XZ}
\notag
g(X,Z) &= -\Phi({f} X, Z) -g(X,\widetilde{Q} Z) +\sum\nolimits_{i}\big(\eta^i(X)\,\eta^i(Z) +\eta^i(X)\,\eta^i(\widetilde{Q} Z)\big)\\
&= -\Phi({f} X, Z) + \sum\nolimits_{i}\eta^i(X)\,\eta^i(Z) - g(X, \widetilde{Q}Z).
\end{align}
Thus, in view of the skew-symmetry of ${f}$ and applying \eqref{XZ} six times, \eqref{3.4} can be written~as
\begin{align*}
& 2\,g((\nabla_{X}{f})Y,Z) = X\,\Phi(Y, Z)
+({f} Y)\,\big(-\Phi({f} X, {Z})+\sum\nolimits_{i}\eta^i(X)\,\eta^i(Z) \big) \\
& - ({f} Y)\,g(X,\widetilde{Q}Z) - Z\,\Phi(X,Y) \\
& +\Phi([X,{f} Y],{f} {Z}) + \sum\nolimits_{i}\eta^i([X,{f} Y])\eta^i(Z) - g([X,{f} Y],\widetilde{Q}Z) +\Phi([Z,X],Y) \notag\\
& -\Phi([{f} Y,Z],{f} {X}) - \sum\nolimits_{i}\eta^i([{f} Y,Z])\,\eta^i(X) + g([{f} Y, Z], \widetilde{Q}X) + X\,\Phi(Y,Z) \\
& +Y\,\Phi(X,Z) - ({f} Z)\,\big(-\Phi({f} X, {Y}) + \sum\nolimits_{i}\eta^i(X)\,\eta^i(Y)\big) + ({f} Z) g(X, \widetilde{Q}Y) \\
& +\Phi([X,Y],Z) + g({f}[-{f} Z,X],{f} {Y}) + \sum\nolimits_{i}\eta^i([{f} Z,X])\eta^i(Y) - g([{f} Z,X],\widetilde{Q}Y)\\
& +g({f}[Y,{f} Z],{f} {X}) - \sum\nolimits_{i}\eta^i([Y,{f} Z])\,\eta^i(X) + g([Y,{f} Z], \widetilde{Q}X) .
\end{align*}
We also have
\begin{eqnarray*}
g(N^{\,(1)}(Y,Z),{f} X) = g({f}^2 [Y,Z] + [{f} Y, {f} Z] - {f}[{f} Y,Z] - {f}[Y,{f} Z], {f} X)\\
= - g({f}[Y,Z], \widetilde{Q} X) + g([{f} Y, {f} Z] - {f}[{f} Y,Z] - {f}[Y,{f} Z] - [Y,Z], {f} X).
\end{eqnarray*}
From this and \eqref{3.3} we get the required result.
\end{proof}
\begin{remark}\rm
For particular values of the tensor $N^{\,(5)}$ we get
\begin{eqnarray}\label{KK}
\nonumber
N^{\,(5)}(X,\xi_i,Z) & = & - N^{\,(5)}(X, Z, \xi_i) = g( N^{\,(3)}_i(Z),\, \widetilde{Q} X),\\
\nonumber
N^{\,(5)}(\xi_i,Y,Z) &=& g([\xi_i, {f} Z], \widetilde{Q}Y) -g([\xi_i,{f} Y], \widetilde{Q}Z),\\
N^{\,(5)}(\xi_i,Y,\xi_j) &=& N^{\,(5)}(\xi_i,\xi_j, Y) =0.
\end{eqnarray}
\end{remark}
We will discuss the meaning of $\nabla_{X}{f}$ for weak almost para-${\mathcal S}$- and weak para-${\mathcal K}$-structures.
The following corollary of Proposition~\ref{lem6.1} and Theorem~\ref{thm6.2}
generalizes well-known results with $Q={\rm id}_{TM}$.
\begin{corollary}\label{cor3.1}
For a weak almost para-${\mathcal S}$-structure we get
\begin{align}\label{3.1A}
\nonumber
2\,g((\nabla_{X}{f})Y,Z) & = - g(N^{\,(1)}(Y,Z),{f} X) +2\,g(fX,fY)\,\bar\eta(Z) \\
& -2\,g(fX,fZ)\,\bar\eta(Y) + N^{\,(5)}(X,Y,Z),
\end{align}
where $\bar\eta=\sum\nolimits_{i}\eta^i$.
In particular, taking $X=\xi_i$ and then $Y=\xi_j$ in \eqref{3.1A}, we get
\begin{align}\label{3.1AA}
2\,g((\nabla_{\xi_i}{f})Y,Z) &= N^{\,(5)}(\xi_i,Y,Z) ,\quad 1\le i\le p,
\end{align}
and \eqref{6.1e}; thus, the characteristic distribution
is tangent to a totally geodesic foliation with flat leaves.
\end{corollary}
\begin{proof}
According to Theorem~\ref{thm6.2}, for a weak almost para-${\mathcal S}$-structure we have
$d\eta^i = \Phi$ and $N^{\,(2)}_i= N^{\,(4)}_{ij}=0$.
Thus, invoking \eqref{2.3} and using Theorem~\ref{thm6.2} in \eqref{3.1}, we get \eqref{3.1A}.
From \eqref{3.1AA} with $Y=\xi_j$ we get $g(f\nabla_{\xi_i}\,\xi_j, Z)=0$,
thus $\nabla_{\xi_i}\,\xi_j\in\ker f$.~Also,
\[
\eta^k([\xi_i,\xi_j])= -2\,d\eta^k(\xi_i,\xi_j) =-2\,g(\xi_i, f\xi_j)=0;
\]
hence, $[\xi_i,\xi_j]=0$, i.e., $\nabla_{\xi_i}\,\xi_j=\nabla_{\xi_j}\,\xi_i$.
Finally, from $g(\xi_j,\xi_k)=\delta_{jk}$, using the covariant derivative with respect to $\xi_i$ and the above equality,
we get $\nabla_{\xi_i}\,\xi_j\in f(TM)$. This together with $\nabla_{\xi_i}\,\xi_j\in\ker f$ proves \eqref{6.1e}.
\end{proof}
\section{The tensor field $h$}
\label{sec:3a}
Here, for a weak almost para-${\mathcal S}$-manifold, we study the tensor field $h=(h_1,\ldots,h_p)$, where
$h_i=\frac{1}{2}\, N^{\,(3)}_i = \frac{1}{2}\,\pounds_{\xi_i}{f}$ .
By Theorem~\ref{thm6.2}, $h_i=0$ if and only if $\xi_i$ is a Killing field.
First, we calculate
\begin{align}\label{4.2}
\nonumber
& (\pounds_{\xi_i}{f})X \overset{\eqref{3.3B}} = \nabla_{\xi_i}({f} X) - \nabla_{{f} X}\,\xi_i - {f}(\nabla_{\xi_i}X - \nabla_{X}\,\xi_i) \\
&\ = (\nabla_{\xi_i}{f})X - \nabla_{{f} X}\,\xi_i + {f}\nabla_X\,\xi_i.
\end{align}
For $X=\xi_j$ in \eqref{4.2}, using
$g((\nabla_{\xi_i}{f})\,\xi_j,Z)=\frac12 N^{\,(5)}(\xi_i,\xi_j,Z)=0$, see \eqref{3.1AA},
and $\nabla_{\xi_i}\,\xi_j=0$, see Corollary~\ref{cor3.1}, we get
\begin{align}\label{4.2b}
h_i\,\xi_j = 0.
\end{align}
The following result generalizes the fact that for an almost para-${\mathcal S}$-structure, each tensor $h_i$ is self-adjoint and commutes with ${f}$.
\begin{proposition}
For a weak almost para-${\mathcal S}$-structure, the tensor $h_i$ and its conjugate $h_i^*$ satisfy
\begin{eqnarray}\label{E-31}
g((h_i-h_i^*)X, Y) &=& \frac{1}{2}\,N^{\,(5)}(\xi_i, X, Y),\\
\label{E-30b}
\nabla\,\xi_i &=&
Q^{-1} {f}\, h^*_i - f , \\
\label{E-31A}
h_i{f}+{f}\, h_i &=& -\frac12\,\pounds_{\xi_i}\widetilde{Q}.
\end{eqnarray}
\end{proposition}
\begin{proof}
(i) The scalar product of \eqref{4.2} with $Y$,
using \eqref{3.1AA}, gives
\begin{align}\label{4.3}
g((\pounds_{\xi_i}{f})X,Y) &= N^{\,(5)}(\xi_i, X, Y) + g({f}\nabla_{X}\,\xi_i - \nabla_{{f} X}\,\xi_i,\ Y).
\end{align}
Similarly,
\begin{align}\label{4.3b}
g((\pounds_{\xi_i}{f})Y,X) &=
N^{\,(5)}(\xi_i, Y, X) +g({f}\nabla_{Y}\,\xi_i - \nabla_{{f} Y}\,\xi_i,\ X).
\end{align}
Using \eqref{2.7X} and $(fX)(\eta^i(Y)) -(fY)(\eta^i(X))\equiv0$
(this vanishes if either $X$ or $Y$ equals $\xi_j$ and also for $X$ and $Y$ in~$f(TM)$), we get
$N^{\,(2)}_i (X,Y) = \eta^i([f Y, X]-[f X, Y])$.
Thus, the difference of \eqref{4.3} and \eqref{4.3b} gives
\begin{align*}
2\,g((h_i-h_i^*)X,Y) = N^{\,(5)}(\xi_i, X, Y) - N^{\,(2)}_i (X,Y).
\end{align*}
From this and equality $N^{\,(2)}_i=0$ (see Theorem~\ref{thm6.2}) we get \eqref{E-31}.
(ii) From Corollary \ref{cor3.1} with $Y=\xi_i$, we find
\begin{align}\label{4.4}
g((\nabla_{X}{f})\xi_i,Z) &= -\frac12\,g(N^{\,(1)}(\xi_i,Z),{f} X)
-g({f} X, {f} Z)
+ \frac12\,N^{\,(5)}(X,\xi_i,Z).
\end{align}
Note that $\frac 12\,N^{\,(5)}(X,\xi_i,Z)= g(h_i Z, \widetilde Q X)$, see \eqref{KK}.
By \eqref{2.5} with $Y=\xi_i$, we get
\begin{align}\label{2.5B}
[{f},{f}](X,\xi_i) = {f}^2 [X,\xi_i] - {f}[{f} X,\xi_i] = f N^{\,(3)}_i (X).
\end{align}
Using \eqref{2.2}, \eqref{3.3B} and \eqref{2.5B}, we calculate
\begin{align}
\label{4.4A}
g([{f},{f}](\xi_i,Z),{f} X)&= g({f}^2\,[\xi_i,Z] - {f}[\xi_i,{f} Z],{f} X) = - g({f}(\pounds_{\xi_i}{f})Z,{f} X)\notag\\
&= g((\pounds_{\xi_i}{f})Z,QX) -\sum\nolimits_{j}\eta^j(X)\,\eta^j((\pounds_{\xi_i}{f})Z) .
\end{align}
From \eqref{2.3} we have
$g([X,\xi_i], \xi_k) = 2\,d\eta^k(\xi_i,X)=2\,\Phi(\xi_i,X)=0$.
By \eqref{6.1e}, we get
$g(\nabla_X\,\xi_i, \xi_k) = g(\nabla_{\xi_i}X, \xi_k) = -g(\nabla_{\xi_i}\xi_k, X) = 0$ for $X\in f(TM)$,
thus
\begin{align}\label{E-30-xi}
g(\nabla_{X}\,\xi_i,\ \xi_k) = 0,\quad X\in TM,\ 1\le i,k \le p .
\end{align}
Using \eqref{4.2}, we get
\begin{align}\label{3.1A3}
2\,g((\nabla_{\xi_i}{f})Y,\xi_j) \overset{\eqref{3.1AA}}= N^{\,(5)}(\xi_i,Y,\xi_j) \overset{\eqref{KK}}=0 .
\end{align}
From \eqref{4.2}, \eqref{E-30-xi} and \eqref{3.1A3} we get
\begin{align}\label{4.5}
g((\pounds_{\xi_i}{f})X,\xi_j) = -g(\nabla_{{f} X}\,\xi_i,\xi_j) = 0.
\end{align}
Since ${f}\,\xi_i=0$, we find
\begin{align}\label{4.5A}
(\nabla_{X}{f})\,\xi_i = -{f}\,\nabla_{X}\,\xi_i.
\end{align}
Thus, combining \eqref{4.4}, \eqref{4.4A} and \eqref{4.5}, we find
\begin{eqnarray}\label{4.6}
\nonumber
& -g({f}\,\nabla_{X}\xi_i, Z) = g(X,QZ) - g(h_iZ,QX) - \sum\nolimits_{j}\eta^j(X)\eta^j(Z) + g(h_i Z, \widetilde Q X) \\
& = g(h_iZ, X) + g(X,QZ) - \sum\nolimits_{j}\eta^j(X)\,\eta^j(Z) + g(h_i Z, \widetilde Q X) .
\end{eqnarray}
Replacing $Z$ by ${f} Z$ in \eqref{4.6} and using \eqref{2.1}, \eqref{E-30-xi} and ${f}\,\xi_i=0$, we achieve \eqref{E-30b}:
\begin{align*}
g(Q\,\nabla_{X}\,\xi_i, Z) = g(({f} Q -h_i{f}) Z, X)
= g( {f}( h^*_i - Q) X, Z) .
\end{align*}
(iii) Using \eqref{2.1}, we obtain
\begin{eqnarray*}
& {f}\nabla_{\xi_i}{f} +(\nabla_{\xi_i}{f}){f} = \nabla_{\xi_i}\,({f}^2)
= \nabla_{\xi_i}\widetilde{Q} -\nabla_{\xi_i}(\sum\nolimits_{j}\eta^j\otimes \xi_j) ,
\end{eqnarray*}
where in view of \eqref{6.1e}, we get $\nabla_{\xi_i}(\sum\nolimits_{j}\eta^j\otimes \xi_j)=0$.
From the above and \eqref{4.2}, we get \eqref{E-31A}:
\begin{eqnarray*}
&& 2(h_i{f}+{f} h_i)X = {f}(\pounds_{\xi_i}{f})X +(\pounds_{\xi_i}{f}){f} X \\
&& = {f}(\nabla_{\xi_i}{f})X +(\nabla_{\xi_i}{f}){f} X +{f}^2\nabla_X\,\xi_i -\nabla_{{f}^2 X}\,\xi_i \\
&& = -(\nabla_{\xi_i}\widetilde{Q})X -\widetilde{Q}\nabla_X\,\xi_i+\nabla_{\widetilde{Q}X}\,\xi_i
+\sum\nolimits_{j}\big(g(\nabla_X\,\xi_i, \xi_j)\,\xi_j -g(X, \xi_j)\nabla_{\xi_j}\,\xi_i\big) \\
&& = [\widetilde{Q}X, \xi_i] - \widetilde{Q}\,[X, \xi_i]
= -(\pounds_{\xi_i}\widetilde{Q})X .
\end{eqnarray*}
We used \eqref{6.1e} and
\eqref{E-30-xi}
to show $\sum\nolimits_{j}\big(g(\nabla_X\,\xi_i, \xi_j)\,\xi_j -g(X, \xi_j)\nabla_{\xi_j}\,\xi_i\big)=0$.
\end{proof}
\begin{remark}\rm
For a weak almost para-${\mathcal S}$-structure, using \eqref{3.1A3}, we find
\[
2\,g(h_i X, \xi_j)=-g(\nabla_{f X}\,\xi_i, \xi_j) \overset{\eqref{E-30-xi}}= 0;
\]
thus,
the distribution
$f(TM)$ is invariant under $h_i$; moreover, $h^*_i\,\xi_j = 0$, see also \eqref{4.2b}.
\end{remark}
The next statement follows from Propositions~\ref{thm6.1} and \ref{lem6.1}.
\begin{corollary}
For a weak para-${\mathcal K}$-structure, we have
\begin{eqnarray}\label{3.1K}
\nonumber
&& 2\,g((\nabla_{X}{f})Y,Z) =
\sum\nolimits_{i}\big( 2\,d\eta^i({f} Y,X)\,\eta^i(Z) - 2\,d\eta^i({f} Z,X)\,\eta^i(Y) \\
&& +\,\eta^i([\widetilde{Q}Y,\,{f} Z])\,\eta^i(X) \big)
+ N^{\,(5)}(X,Y,Z).
\end{eqnarray}
In particular, using \eqref{E-31} with $h_i=0$, we get
$2\,g((\nabla_{\xi_i}{f})Y,Z) = \eta^i([\widetilde{Q} Y,\,{f} Z])$ for $1\le i\le p$.
\end{corollary}
\section{The rigidity of a para-${\mathcal S}$-structure}
\label{sec:3}
An important class of metric para-$f$-manifolds is given by para-${\mathcal S}$-manifolds.
Here, we study a wider class of weak para-${\mathcal S}$-manifolds and prove the rigidity theorem for para-${\mathcal S}$-manifolds.
\begin{proposition}
For a weak para-${\mathcal S}$-structure we get
\begin{eqnarray}\label{4.10}
\nonumber
& g((\nabla_{X}{f})Y,Z) = g(QX,Z)\,\bar\eta(Y) - g(QX,Y)\,\bar\eta(Z) +\frac{1}{2}\, N^{\,(5)}(X,Y,Z) \\
& -\sum\nolimits_{j} \eta^j(X)\big(\bar\eta(Y)\eta^j(Z) - \eta^j(Y)\bar\eta(Z)\big) .
\end{eqnarray}
\end{proposition}
\begin{proof} Since $({f},Q,\xi_i,\eta^i,g)$ is a metric
weak $f$-structure with $N^{\,(1)}=0$, by Corollary~\ref{cor3.1}, we get~\eqref{4.10}.
\end{proof}
\begin{remark}\rm
Using $Y=\xi_i$ in \eqref{4.10}, we get $f\nabla_{X}\,\xi_i = -f^2X - \frac{1}{2}\,(N^{\,(5)}(X,\xi_i,\,\cdot))^\flat$,
which gene\-ralizes the equality $\nabla_{X}\,\xi_i=-fX$ for a para-${\mathcal S}$-structure, e.g., \cite{FP-2017}.
\end{remark}
It was shown in \cite{RWo-2} that
a weak almost para-${\mathcal S}$-structure with positive partial Ricci curvature can be deformed to an almost para-${\mathcal S}$-structure.
The~main result in this section is the following rigidity theorem.
\begin{theorem}\label{T-4.1}
A metric weak para-$f$-structure
is a weak para-${\mathcal S}$-structure if and only if it is a para-${\mathcal S}$-structure.
\end{theorem}
\begin{proof}
Let $({f},Q,\xi_i,\eta^i,g)$ be a weak para-${\mathcal S}$-structure.
Since $N^{\,(1)}=0$, by Proposition~\ref{thm6.1}, we get $N^{\,(3)}_i=0$.
By \eqref{KK}, we then obtain $N^{\,(5)}(\cdot\,,\xi_i,\,\cdot\,)=0$.
Recall that $\tilde{Q}X = Q X - X$ and $\eta^j(\widetilde{Q} X)=0$.
Using the above and $Y=\xi_i$ in \eqref{4.10}, we~get
\begin{align}\label{4.12}
\nonumber
& g((\nabla_{X}{f})\,\xi_i,Z) = g(QX,Z) -\eta^i(Q X)\,\bar\eta(Z)
+\sum\nolimits_{j} \eta^j(X)\big(\eta^j(Z) - \delta^j_i\,\bar\eta(Z)\big)\\
\nonumber
& = g(Q X^\top, Z) +\sum\nolimits_{j} \eta^j(Z)\big(\eta^j(Q X) - \eta^i(Q X)\big)
-\sum\nolimits_{j} \eta^j(Z)\big(\eta^j(X) - \eta^i(X)\big)\\
& = g(Q X^\top, Z) +\sum\nolimits_{j} \eta^j(Z)\big(\eta^j(\widetilde{Q} X) - \eta^i(\widetilde{Q} X)\big)
= g(Q X^\top, Z) .
\end{align}
Using \eqref{4.5A}, we rewrite \eqref{4.12} as $g(\nabla_{X}\,\xi_i,{f} Z) = g(Q X^\top,Z)$.
By the above and \eqref{2.1}, we find
\begin{align}\label{4.14}
g(\nabla_X\,\xi_i +{f} X^\top, \,{f}\,Z) = 0.
\end{align}
Since ${f}$ is skew-symmetric, applying \eqref{4.10} with $Z=\xi_i$ in \eqref{4.NN}, we obtain
\begin{eqnarray}\label{4.17}
&& g( [{f},{f}](X,Y),\xi_i) = g([{f} X, {f} Y], \xi_i) = g((\nabla_{{f} X}{f})Y, \xi_i) - g((\nabla_{{f} Y}{f})X, \xi_i) \notag\\
&&\quad = g(Q\,{f} Y,X) - g(Q\,{f} Y, \xi_i)\,\bar\eta(X) -g(Q\,{f} X,Y) +g(Q\,{f} X, \xi_i)\,\bar\eta(Y) .
\end{eqnarray}
Recall that $[Q,\,{f}]=0$ and $f\,\xi_i=0$. Thus, \eqref{4.17} yields for all $i$,
\begin{align*}
g( [{f},{f}](X,Y), \xi_i) = 2\,g(QX,{f} Y) .
\end{align*}
From this, using the definition of $N^{\,(1)}$, we get for all $i$,
\begin{align}\label{4.18}
g(N^{\,(1)}(X,Y), \xi_i) = 2\,g(\widetilde{Q} X, {f} Y) .
\end{align}
From $N^{\,(1)}=0$ and \eqref{4.18} we get $g(\widetilde{Q} X, {f} Y)=0$ for all $X,Y\in \mathfrak{X}_M$; thus, $\widetilde Q=0$.
\end{proof}
For a weak almost para-${\mathcal S}$-structure all $\xi_i$ are Killing if and only if $h=0$, see Theorem~\ref{thm6.2}.
The equality $h=0$ holds for a weak para-${\mathcal S}$-structure since it is true for a para-${\mathcal S}$-structure, see Theorem~\ref{T-4.1}.
We~will prove this property of a weak para-${\mathcal S}$-structure directly.
\begin{corollary}
For a weak para-${\mathcal S}$-structure, $\xi_1,\ldots,\xi_p$ are Killing vector fields; moreover, $\ker f$
is integrable and
defines a Riemannian totally geodesic foliation.
\end{corollary}
\begin{proof}
In view of \eqref{4.5A} and $\bar\eta(\xi_i)=1$, Eq. \eqref{4.10} with $Y=\xi_i$ becomes
\begin{align}\label{4.6A}
g(\nabla_{X}\,\xi_i, {f} Z) = -\eta^i(X)\,\bar\eta(Z) +g(X,QZ) + \frac 12\,N^{\,(5)}(X,\xi_i,Z) .
\end{align}
Combining \eqref{4.6} and \eqref{4.6A}, and using \eqref{E-30-xi},
we achieve for all $i$ and $X,Z$,
\begin{align*}
g(h_iZ,QX) = \sum\nolimits_{j}\eta^j(X)\,\eta^j(Z) -\eta^i(X)\,\bar\eta(Z),
\end{align*}
which implies $hZ=0$ for $Z\in f(TM)$ (since $Q$ is nonsingular).
This and \eqref{4.2b} yield $h=0$.
By~Theorem~\ref{thm6.2}, $\ker f$ defines a totally geodesic foliation. Since $\xi_i$ is a Killing field,
we~get
\[
0 = (\pounds_{\xi_i}\,g)(X,Y) = g(\nabla_{X}\,\xi_i, Y) + g(\nabla_{Y}\,\xi_i, X)
= -g(\nabla_{X} Y + \nabla_{Y} X,\ \xi_i)
\]
for all $i$ and $X,Y\bot\,\ker f$. Thus, $f(TM)$ is totally geodesic, i.e., $\ker f$ defines a Riemannian foliation.
\end{proof}
For $p=1$, from Theorem~\ref{T-4.1} we have the following
\begin{corollary}
A weak almost paracontact metric structure on $M^{2n+1}$ is a weak para-Sasakian structure if and only if it is a para-Sasakian structure,
i.e., a normal weak paracontact metric structure, on $M^{2n+1}$.
\end{corollary}
\section{The characteristic of a weak para-${\mathcal C}$-structure}
\label{sec:4}
An important class of metric para-$f$-manifolds is given by para-${\mathcal C}$-mani\-folds.
Recall that $\nabla_{X}\,\xi_i=0$ holds on para-${\mathcal C}$-manifolds.
\begin{proposition}
Let $({f},Q,\xi_i,\eta^i,g)$ be a weak
para-${\mathcal C}$-structure. Then
\begin{align}\label{6.1}
& 2\,g((\nabla_{X}{f})Y,Z) = N^{\,(5)}(X,Y,Z),\\
\label{6.1b}
& 0 = N^{\,(5)}(X,Y,Z) + N^{\,(5)}(Y,Z,X) + N^{\,(5)}(Z, X, Y) ,\\
\label{6.1c}
& 0 = N^{\,(5)}({f} X,Y,Z) + N^{\,(5)}({f} Y,Z,X) + N^{\,(5)}({f} Z, X, Y) .
\end{align}
Using \eqref{6.1} with $Y=\xi_i$ and \eqref{2.1}, we get
\begin{align*}
g(\nabla_{X}\,\xi_i,\,Q Z) = -\frac12\,N^{\,(5)}(X,\xi_i,{f} Z).
\end{align*}
\end{proposition}
\begin{proof}
For a weak almost para-${\mathcal C}$-structure $({f},Q,\xi_i,\eta^i,g)$, using Theorem~\ref{thm6.2C}, from \eqref{3.1}
we get
\begin{equation}\label{6.1a}
2\,g((\nabla_{X}{f})Y,Z)= - g([{f},{f}](Y,Z),{f} X) + N^{\,(5)}(X,Y,Z).
\end{equation}
From \eqref{6.1a}, using condition $[{f},{f}]=0$ we get \eqref{6.1}.
Using \eqref{3.3} and \eqref{6.1}, we write
\[
0 = 3\,d\Phi(X,Y,Z) = g((\nabla_X\,{f})Z,Y) +g((\nabla_Y\,{f})X, Z) +g((\nabla_Z\,{f})Y, X);
\]
hence, \eqref{6.1b} is true.
Using \eqref{4.NN}, \eqref{6.1} and the skew-symmetry of ${f}$, we obtain
\begin{align*}
0 & = 2\,g([{f},{f}](X,Y),Z) \\
& = N^{\,(5)}(X, Y, {f} Z) + N^{\,(5)}({f} X, Y, Z)
- N^{\,(5)}(Y, X, {f} Z) - N^{\,(5)}({f} Y,X,Z) .
\end{align*}
This and \eqref{6.1b} with $X$ replaced by ${f} X$ provide \eqref{6.1c}.
\end{proof}
Recall that $X^\bot = \sum\nolimits_{i}\eta^i(X)\,\xi_i$.
Consider a weaker condition than \eqref{6.1d}:
\begin{align}\label{E-xi31}
[\xi_i,\xi_j]^\bot =0,\quad 1\le i,j\le p.
\end{align}
In the following theorem, we characterize weak para-${\mathcal C}$-manifolds in a wider class of metric weak para-$f$-manifolds
using the condition $\nabla{f}=0$.
\begin{theorem}\label{thm6.2D}
A metric weak para-$f$-structure with $\nabla{f}=0$ and \eqref{E-xi31} is a~weak para-${\mathcal C}$-structure with $N^{\,(5)}=0$.
\end{theorem}
\begin{proof}
Using condition $\nabla{f}=0$, from \eqref{4.NN} we obtain $[{f},{f}]=0$.
Hence, from \eqref{2.6X} we get $N^{\,(1)}(X,Y)=-2\,\sum\nolimits_{i} d\eta^i(X,Y)\,\xi_i$,
and from \eqref{4.NNxi} we obtain
\begin{align}\label{E-cond1}
\nabla_{{f} X}\,\xi_i - {f}\,\nabla_{X}\,\xi_i = 0,\quad X\in \mathfrak{X}_M.
\end{align}
From \eqref{3.3}, we calculate
\[
3\,d\Phi(X,Y,Z) = g((\nabla_{X}{f})Z, Y) + g((\nabla_{Y}{f})X,Z) + g((\nabla_{Z}{f})Y,X);
\]
hence, using condition $\nabla{f}=0$ again, we get $d\Phi=0$. Next,
$N^{\,(2)}_i(Y,\xi_j) = -\eta^i([{f} Y,\xi_j]) = g(\xi_j, {f}\nabla_{\xi_i} Y) =0$.
Setting $Z=\xi_j$ in \eqref{3.1} and using the condition $\nabla{f}=0$ and the properties
$d\Phi=0$, $N^{\,(2)}_i(Y,\xi_j)=0$ and $N^{\,(1)}(X,Y)=-2\sum\nolimits_{i} d\eta^i(X,Y)\,\xi_i$, we find
$0 = 2\,d\eta^j({f} Y, X) - N^{\,(5)}(X,\xi_j, Y)$.
By~\eqref{KK} and~\eqref{E-cond1},
\[
N^{\,(5)}(X,\xi_j, Y) = g([\xi_j,{f} Y] -{f}[\xi_j,Y],\, \widetilde{Q} X)
= g(\nabla_{{f} Y}\,\xi_j - {f}\,\nabla_{Y}\,\xi_j,\, \widetilde{Q} X) = 0;
\]
hence, $d\eta^j({f} Y, X)=0$. From this and $g([\xi_i,\xi_j],\xi_k)=2\,d\eta^k(\xi_j, \xi_i)=0$ we get $d\eta^j=0$.
By the above, $N^{\,(1)}=0$. Thus, $({f},Q,\xi_i,\eta^i,g)$ is a weak para-${\mathcal C}$-structure.
Finally, from \eqref{6.1} and condition $\nabla{f}=0$ we get $N^{\,(5)}=0$.
\end{proof}
\begin{corollary}
A normal metric weak para-$f$-structure with
$\nabla f=0$
is a~weak para-${\mathcal C}$-structure with
$N^{\,(5)}=0$.
\end{corollary}
\begin{proof} By $N^{\,(1)}=0$, we get $d\eta^i =0$ for all $i$.
As in Theorem~\ref{thm6.2D}, we get $d\Phi=0$.
\end{proof}
\begin{example}\rm
Let $M$ be a $2n$-dimensional smooth manifold and $\tilde{f}:TM\to TM$ an endomorphism of rank $2n$ such that
$\nabla\tilde{f}=0$.
To construct a weak para-${\mathcal C}$-structure on $M\times\mathbb{R}^p$
(or $M\times \mathbb{T}^p$, where
$\mathbb{T}^p$ is a $p$-dimensional flat torus),
take any point $(x, t_1,\ldots,t_p)$
and set $\xi_i = (0, d/dt_i)$, $\eta^i =(0, dt_i)$~and
\[
{f}(X, Y) = (\tilde{f} X,\, 0),\quad
Q(X, Y) = (\tilde{f}^{\,2} X,\, Y),
\]
where $X\in T_xM$ and $Y=\sum_i Y^i\xi_i$ is tangent to the second factor $\mathbb{R}^p$ (or $\mathbb{T}^p$).
Then \eqref{2.1} holds and Theorem~\ref{thm6.2D} can be used.
\end{example}
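As a sanity check, the structure equation for this product construction can be verified numerically in block-matrix form. The sketch below is our own; it assumes \eqref{2.1} reads $f^2 = Q - \sum_i \eta^i\otimes\xi_i$, and the concrete choice of $\tilde{f}$ is a toy example:

```python
import numpy as np

# Block-matrix check of f^2 = Q - sum_i eta^i (x) xi_i on M x R^p with M = R^{2n}.
# tilde_f is an arbitrary endomorphism of T_xM (toy choice; generically rank 2n).
n, p = 2, 2
rng = np.random.default_rng(0)
tf = rng.standard_normal((2 * n, 2 * n))

dim = 2 * n + p
f = np.zeros((dim, dim)); f[:2 * n, :2 * n] = tf         # f(X,Y) = (tilde_f X, 0)
Q = np.eye(dim);          Q[:2 * n, :2 * n] = tf @ tf    # Q(X,Y) = (tilde_f^2 X, Y)
P = np.zeros((dim, dim)); P[2 * n:, 2 * n:] = np.eye(p)  # sum_i eta^i (x) xi_i

assert np.allclose(f @ f, Q - P)  # the structure equation holds
```

Since $f$, $Q$ and the projector $\sum_i\eta^i\otimes\xi_i$ are all block-diagonal with respect to the splitting $T_xM\oplus\mathbb{R}^p$, the identity reduces to $\tilde{f}^{\,2}=\tilde{f}^{\,2}$ on the first block and $0=\mathrm{id}-\mathrm{id}$ on the second.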
For $p=1$, from Theorem~\ref{thm6.2D} we have the following
\begin{corollary}
Any weak almost paracontact structure $(\varphi,Q,\xi,\eta,g)$ with the property $\nabla\varphi=0$
is a~weak para-cosymplectic structure.
\end{corollary}
\end{document} |
\begin{document}
\twocolumn[
\icmltitle{CRFL: Certifiably Robust Federated Learning against Backdoor Attacks}
\begin{icmlauthorlist}
\icmlauthor{Chulin Xie}{uiuc}
\icmlauthor{Minghao Chen}{zju}
\icmlauthor{Pin-Yu Chen}{ibm}
\icmlauthor{Bo Li}{uiuc}
\end{icmlauthorlist}
\icmlaffiliation{uiuc}{University of Illinois at Urbana-Champaign}
\icmlaffiliation{zju}{Zhejiang University}
\icmlaffiliation{ibm}{IBM Research}
\icmlcorrespondingauthor{Chulin Xie}{[email protected]}
\icmlcorrespondingauthor{Pin-Yu Chen}{[email protected]}
\icmlcorrespondingauthor{Bo Li}{[email protected]}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in
]
\printAffiliationsAndNotice{}
\begin{abstract}
Federated Learning (FL), as a distributed learning paradigm that aggregates information from diverse clients to train a shared global model, has demonstrated great success.
However, malicious clients can perform poisoning attacks and model replacement to introduce backdoors into the trained global model.
Although there have been intensive studies designing robust aggregation methods and empirical robust federated training protocols against backdoors, existing approaches lack \textit{robustness certification}.
This paper provides the first general framework, Certifiably Robust Federated Learning (CRFL), to train certifiably robust FL models against backdoors.
Our method exploits clipping and smoothing on model parameters to control the global model smoothness, which yields a sample-wise robustness certification on backdoors with limited magnitude.
Our certification also specifies its relation to federated learning parameters, such as the instance-level poisoning ratio, the number of attackers, and the number of training iterations.
Practically, we conduct comprehensive experiments across a range of federated datasets, and provide the first benchmark for certified robustness against backdoor attacks in federated learning. Our code is publicly available at \href{https://github.com/AI-secure/CRFL}{https://github.com/AI-secure/CRFL}.
\end{abstract}
\section{Introduction}
\label{submission}
Federated learning (FL) has been widely applied to different applications given its high efficiency and privacy-preserving properties~\cite{smith2017federated,mcmahan2016communication,zhao2018federated}.
However, recent studies show that it is easy for a local client to add adversarial perturbations such as ``backdoors'' during training to compromise the final aggregated model~\cite{bhagoji2018analyzing,bagdasaryan2020backdoor,wang2020attackthetails,xie2019dba}. Such attacks raise serious security concerns and have become a roadblock to the real-world deployment of federated learning.
Although there have been intensive studies on robust FL by designing robust aggregation methods~\cite{fung2020limitations,pillutla2019robustrfa,fu2019attack,blanchard2017machinekrum,el2018hidden,chen2017distributed,yin2018byzantine}, developing empirically robust federated training protocols (e.g., gradient clipping~\cite{sun2019can}, leveraging noisy perturbation~\cite{sun2019can} and additional evaluation during training~\cite{andreina2020baffle}), current defense approaches lack robustness guarantees against the backdoor attacks under certain conditions. To the best of our knowledge, certified robustness analysis and algorithms for FL against backdoor attacks remain elusive.
\begin{figure}
\caption{Overview of certifiably robust federated learning (CRFL)}
\label{fig:teaser}
\end{figure}
To bridge this gap, in this work we propose a certifiably robust federated learning (CRFL) framework as illustrated in Figure~\ref{fig:teaser}.
In particular, during \textit{training}, we allow the local agent to update their model parameters to the center server, and the server will: 1) aggregate the collected model updates, 2) clip the norm of the aggregated model parameters, 3) add a random noise to the clipped model, and finally 4) send the new model parameters back to each agent.
Note that all of the operations are conducted on the server side to reduce the load on local clients and to prevent manipulation by malicious clients.
During \textit{testing}, the server will smooth the final global model with randomized parameter smoothing and make the final prediction based on the parameter-smoothed model.
Using CRFL, we theoretically prove that the trained global model would be certifiably robust against backdoors as long as the backdoor is within our certified bound. To obtain such robustness certification, we first quantify the closeness of models aggregated in each step by viewing this process as a Markov Kernel~\cite{asoodeh2020differentially,makur2019informationphdmit,polyanskiy2015dissipation,polyanskiy2017strong}. We then leverage the model closeness together with the parameter smoothing procedure to certify the final prediction.
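The server-side training steps above can be sketched as follows. This is our own minimal illustration, not the paper's implementation; function and parameter names such as `crfl_server_round`, `clip_norm`, and `sigma` are our assumptions:

```python
import numpy as np

def crfl_server_round(client_models, weights, clip_norm, sigma, rng):
    """One CRFL training round on the server side (sketch):
    1) aggregate client model parameters, 2) clip the aggregated norm,
    3) add Gaussian noise, 4) return the new global model."""
    # 1) weighted aggregation of the collected model parameters
    w = sum(p_i * w_i for p_i, w_i in zip(weights, client_models))
    # 2) clip the parameter norm to at most clip_norm
    w = w / max(1.0, np.linalg.norm(w) / clip_norm)
    # 3) perturb with isotropic Gaussian noise
    return w + rng.normal(0.0, sigma, size=w.shape)
```

At test time, the analogous parameter-smoothing step draws several noisy copies of the final global model and aggregates their predictions to produce the certified output.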
Empirically, we conduct extensive evaluations on MNIST, EMNIST, and financial datasets to evaluate the certified robustness of CRFL and study how FL parameters affect certified robustness.
\underline{\bf Technical Contributions.} In this paper, we take the \textit{first} step
towards providing certified robustness for FL against backdoor attacks.
We make contributions on both theoretical
and empirical fronts.
\begin{itemize}[leftmargin=*,itemsep=-0.5mm]
\item We propose the first certifiably robust federated learning (CRFL) framework against backdoor attacks.
\item Theoretically, we analyze the training dynamics of the aggregated model via Markov Kernel, and propose parameter smoothing for model inference. Altogether, we prove the certified robustness of CRFL.
\item We conduct extensive experiments on MNIST, EMNIST, and financial datasets to show the effect of different FL parameters (e.g. poisoning ratio, number of attackers, and training iterations) on certified robustness.
\end{itemize}
\section{Related work}
\paragraph{Backdoor Attacks on Federated Learning}
The goal of backdoor attacks against federated learning is to train strong poisoned local models and submit malicious model updates to the central server, so as to mislead the global model~\cite{bhagoji2018analyzing}.
\cite{bagdasaryan2020backdoor} studies the model replacement approach, where the attacker scales malicious model updates to replace the global model with local backdoored one.
\cite{xie2019dba} exploit the decentralized nature of federated learning and propose a distributed backdoor attack.
\paragraph{Robust Federated Learning}
In order to nullify the effects of attacks while aggregating client updates, a number of robust aggregation algorithms have been proposed for distributed learning~\cite{fung2020limitations,pillutla2019robustrfa,fu2019attack,blanchard2017machinekrum,el2018hidden,chen2017distributed,yin2018byzantine}. These methods either identify and down-weight the malicious updates through certain distance or similarity metrics, or estimate a true ``center'' of the received model updates rather than taking a weighted average. However, many of these methods assume that the data distribution is i.i.d.\ across the distributed clients, which is often not the case in the FL setting.
Other defenses employ robust federated training protocols that mitigate poisoning attacks during training.
\cite{andreina2020baffle} incorporates an additional validation phase to each round of FL to detect backdoor.
\cite{sun2019can} show that clipping the norm of model updates and adding Gaussian noise can mitigate backdoor attacks that are based on the model replacement paradigm. None of these provides certified robustness guarantees.
A concurrent work~\cite{cao2021provably} proposes Ensemble FL for provable secure FL against malicious clients, which requires training \emph{hundreds of} FL models and focuses on client-level certification. Our work allows standard FL protocol, and our certification is applicable to feature, sample, and client levels.
\section{Preliminaries}
\label{sec_prelim}
\subsection{Federated Averaging}
\paragraph{Learning Objective}
Let the model parameters be denoted by $w\in \mathbb{R}^d$. We consider the following distributed optimization problem:
$
\min_{ w\in \mathbb{R}^d} \{F( w) \triangleq \sum_{i=1}^N p_i F_i( w) \ \},
$
where $N$ is the number of clients, and $p_i$ is the aggregation weight of the $i$-th client such that $p_i\ge 0$ and $\sum_{i=1}^N p_i=1$.
Suppose the $i$-th client holds $n_i$ training samples in its local dataset $S_i = \{z_{1}^i, z_{2}^i,\ldots, z_{n_i}^i\}$. The local objective $F_i(\cdot)$ is defined by
$
F_i( w) \triangleq \frac{1}{n_i} \sum_{j=1}^{n_i} \ell( w; z_j^i),
$
where $\ell(\cdot;\cdot)$ is a given learning loss function.
\paragraph{One Round of Federated Learning (Periodic Averaging SGD)}
In federated learning, the clients are able to perform multiple local iterations to update the local models~\cite{mcmahan2017communication}. So we formulate the SGD problem in FL as Periodic Averaging SGD~\cite{wang2019cooperative,li2019convergence}.
Specifically, at round $t$, the central server first sends the current global model $w_{t-1}$ to all clients.
Second, every client $i$ initializes its local model $ w_{(t-1)\tau_i}^i = w_{t-1}$ and then performs $\tau_i~( \tau_i \ge 1)$ local updates, such that
$w_{s}^i \gets w_{s-1}^i - \eta_i g_i( w_{s-1}^i; \xi_{s-1}^i),~s= (t-1)\tau_i+1, (t-1)\tau_i+2, \ldots, t\tau_i,$
where $\eta_i$ is the learning rate, $\xi_{s}^i \subset S_i$ are randomly sampled mini-batches with batch size $n_{B_i}$, and $g_i( w; \xi^i) = \frac{1}{n_{B_i}} \sum_{z_j^i \in \xi^i } \nabla \ell ( w; z_j^i) $ denotes the stochastic gradient.
The local clients send the local model updates $w_{t\tau_i}^i - w_{t-1}$ to the server.
Finally, the server aggregates over the local model updates into the new global model $ w_{t}$ such that
$ w_{t} \gets w_{t-1} + \sum_{i=1}^N p_i (w_{t\tau_i}^i - w_{t-1}) .$
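One round of periodic-averaging SGD can be sketched in a few lines of NumPy. This is a toy illustration with hypothetical names (\texttt{fedavg\_round}, \texttt{grad\_fn}), not the authors' implementation; the local loop and the server aggregation follow the equations above.

```python
import numpy as np

def fedavg_round(w_global, client_data, p, eta, tau, grad_fn, rng):
    """One round of periodic-averaging SGD (toy sketch).

    w_global: current global parameters w_{t-1} (shape (d,))
    client_data: list of per-client sample arrays S_i
    p: aggregation weights (sum to 1); eta, tau: per-client step size / #local steps
    grad_fn(w, batch): stochastic gradient of the local loss
    """
    updates = []
    for i, S_i in enumerate(client_data):
        w = w_global.copy()                      # local init: w^i <- w_{t-1}
        for _ in range(tau[i]):                  # tau_i local SGD steps
            batch = S_i[rng.choice(len(S_i), size=min(4, len(S_i)), replace=False)]
            w = w - eta[i] * grad_fn(w, batch)
        updates.append(w - w_global)             # client sends w^i_{t*tau_i} - w_{t-1}
    # server: w_t <- w_{t-1} + sum_i p_i * (w^i_{t*tau_i} - w_{t-1})
    return w_global + sum(p_i * u for p_i, u in zip(p, updates))
```

With the toy quadratic loss $\ell(w;z)=\frac{1}{2}\|w-z\|^2$, repeated rounds drive the global model toward the weighted mean of the clients' data, which makes the fixed point easy to check.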
\subsection{Threat Model}\label{subsection:threat_model}
The goal of a backdoor attack is to inject a backdoor pattern during training such that any test input carrying this pattern will be misclassified as the target label~\cite{gu2019badnets}. In FL, backdoor attacks manipulate local models to simultaneously fit the main task and the backdoor task, so that the global model behaves normally on untampered data samples while achieving a high attack success rate on backdoored ones.
We consider backdoor attacks via the model replacement approach~\cite{bagdasaryan2020backdoor}, where the attackers train local models on poisoned datasets and scale the malicious updates before sending them to the server.
Suppose there are $R$ adversarial clients out of $N$ clients. We assume that each of them attacks only once, and that they perform the model replacement attack together at the same round $\mathsf {t_{adv}}$. Such a distributed yet coordinated backdoor attack is shown to be effective in~\cite{xie2019dba}.
Let $D:=\{S_1,S_2,\ldots,S_N \}$ be the union of original benign local datasets in all clients.
For a data sample $z_j^i:=\{x_j^i,y_j^i\}$ in $S_i$,
we denote its backdoored version as
${z'}_j^i:=\{x_j^i+{{\delta_i}_x}, y_j^i+{{\delta_i}_y}\}$,
and the backdoor as $\delta_i :=\{{\delta_i}_x,{\delta_i}_y \}$. We assume an adversarial client $i$ has $q_i$ backdoored samples in its local dataset ${S'}_i$ of size $n_i$.
Let $D':=\{{S'}_1,\ldots, {S'}_{R-1}, {S'}_{R}, {S}_{R+1},\ldots,S_N \}$ be the union of local datasets in the adversarial round $\mathsf {t_{adv}}$.
Then we have
$D'= D+ \{\{\delta_i\}_{j=1}^{q_i}\}_{i=1}^R. $
Before $\mathsf {t_{adv}}$, the adversarial clients train the local model using original benign datasets.
When $t=\mathsf {t_{adv}}$, for adversarial client $i$, each local iteration is trained on the backdoored local dataset $S'_i$ such that
${w'}_{s}^i \gets {w'}_{s-1}^i - \eta_i g_i( {w'}_{s-1}^i; {\xi'}_{s-1}^i),~s=(t-1)\tau_i+1,(t-1)\tau_i+2,\ldots,t\tau_i,$
where $w'$ is the malicious model parameters, ${\xi'}^i$ is the mini-batch sampled from $S'_i$ and the local model is initialized as $ {w'}_{(t-1)\tau_i}^i = w_{t-1}$.
Following~\cite{bagdasaryan2020backdoor}, we assume the attacker adds a fixed number $q_{B_i}$ of backdoored samples to each training batch, so that the mini-batch gradient is ${g}_i(w'; {\xi'}^i) = \frac{1}{n_{B_i}} (\sum_{j=1}^{q_{B_i}} \nabla \ell ( w'; {z'}_j^i) + \sum_{j=q_{B_i}+1}^{n_{B_i}} \nabla \ell ( w'; {z}_j^i) ) $. The poison ratio of dataset $S'_i$ is $q_{B_i}/n_{B_i} = q_{i}/n_{i}$.
Since each local iteration updates the local model on backdoored mini-batch samples, more local iterations drive the local model ${w'}_{s}^i$ farther from its counterpart ${w}_{s}^i$ in the benign training process.
Then the adversarial clients scale their malicious local updates before submitting to the server. Let the scale factor be $\gamma_i$ for $i$-th adversarial client, then the scaled update is
$
{\gamma_i}( {w'}_{\mathsf {t_{adv}}\tau_i}^i - {w}_{\mathsf {t_{adv}}-1}).
$
The server aggregates over the malicious and benign updates into an infected global model ${w'}_t$ such that
$ {w'}_{t} \gets w_{t-1} + \sum_{i=1}^R p_i \gamma_i ({w'}_{t\tau_i}^i - w_{t-1}) + \sum_{i=R+1}^N p_i (w_{t\tau_i}^i - w_{t-1}). $
In fact, even though the adversarial clients attack only at round $\mathsf {t_{adv}}$ and use their original benign datasets in later rounds ($t > \mathsf {t_{adv}}$), the global model is already infected from round $\mathsf {t_{adv}}$ onward, so we continue to denote the global model parameters by ${w'}_t$ in later rounds.
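The scaled aggregation step of the threat model can be made concrete in a short sketch. The function name \texttt{aggregate\_with\_attack} is ours; the update rule is a direct transcription of the aggregation equation above, and the test below illustrates why a single attacker choosing $\gamma_i = 1/p_i$ against converged (near-zero) benign updates replaces the global model with its backdoored local model.

```python
import numpy as np

def aggregate_with_attack(w_prev, malicious_updates, benign_updates, p, gamma):
    """Server aggregation at round t_adv (illustrative sketch):
    w'_t = w_{t-1} + sum_{i<=R} p_i * gamma_i * u'_i + sum_{i>R} p_i * u_i,
    where u'_i (u_i) are the malicious (benign) local model updates."""
    R = len(malicious_updates)
    w = w_prev.astype(float).copy()
    for i, u in enumerate(malicious_updates):   # scaled malicious updates
        w += p[i] * gamma[i] * u
    for j, u in enumerate(benign_updates):      # ordinary benign updates
        w += p[R + j] * u
    return w
```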
\section{Methodology}
In this section, we introduce our proposed framework CRFL{}, which is composed of a training-time subroutine (Algorithm~\ref{algo:parameters_perturbation}) and a test-time subroutine (Algorithm~\ref{alg:certify_parameters_perturbation}) for achieving certified robustness.
\subsection{CRFL Training: Clipping and Perturbing}
During training, at round $t=1,\ldots, T-1$, local clients update their models,
and the server performs aggregation.
Then, in our training protocol, the server clips the model parameters,
$\mathrm{Clip}_{\rho_t}(w_{t}) \gets w_{t}/ \max(1, \frac{\|w_{t}\|}{\rho_t})$, so that their norm is bounded by $\rho_t$,
and then adds isotropic Gaussian noise $\epsilon_t \sim \mathcal N(0,\sigma_t^2\bf I) $ directly to the aggregated global model parameters (coordinate-wise noise):
$ \widetilde w_{t} \gets \mathrm{Clip}_{\rho_t}({w_{t}}) + \epsilon_t.$
Throughout this paper, $\|\cdot\| $ denotes the $\ell_2$ norm $\|\cdot\|_2$.
In the next round $t+1$, client $i$ initializes its local model with the noisy global model: $ w_{t\tau_i}^i \gets \widetilde w_{t}$.
In the final round $T$, we only clip the global model parameters.
The procedure is summarized in Algorithm \ref{algo:parameters_perturbation} and denoted by
$\mathcal{M}$, which outputs the global model parameters $\mathrm{Clip}_{\rho_T}(w_{T})$. Then we define
$
\mathcal{M}(D) := \mathrm{Clip}_{\rho_T}(w_{T}).
$
\begin{algorithm}[ht]
\caption{\small Federated averaging with parameters clipping and perturbing}
\label{algo:parameters_perturbation}
\scalebox{0.85}{
\begin{minipage}{1.0\linewidth}
\begin{algorithmic}
\STATE {\bfseries Server's input:} initial model parameters $w_0$, $\widetilde w_0 \gets w_0$
\STATE {\bfseries Client $i$'s input:} local dataset $S_i$ and learning rate $\eta_i$
\FOR{each round $t=1, \ldots, T$}
\STATE The server sends $\widetilde w_{t-1}$ to the $i$-th client
\FOR{client $i=1, 2, \ldots, N$ in parallel}
\STATE initialize local model $w_{(t-1)\tau_i} ^i \gets \widetilde w_{t-1} $
\FOR{local iteration $ s =(t-1)\tau_i+1, \ldots, t\tau_i $}
\STATE compute mini-batch gradient ${g_i}(\cdot;\cdot)$
\STATE $w_{s}^i \gets w_{s-1}^i - \eta_i g_i(w_{s-1}^i; \xi_{s-1}^i) $
\ENDFOR
\STATE The $i$-th client sends $w_{t\tau_i}^i - \widetilde w_{t-1}$ to the server
\ENDFOR
\STATE The server updates the model parameters
\STATE $w_t \gets \widetilde w_{t-1}+ \sum \limits_{i = 1}^N p_i (w_{t\tau_i}^i-\widetilde w_{t-1}) $
\STATE The server clips the model parameters
\STATE $\mathrm{Clip}_{\rho_t}(w_{t}) \gets w_{t}/ \max(1, \frac{\|w_{t}\|_2}{\rho_t})$
\STATE The server adds noise
\IF{$t \le T-1$}
\STATE $\epsilon_t \gets$ \text{a sample drawn from } $\mathcal N(0,\sigma_t^2\bf I)$
\STATE $\widetilde w_{t} \gets \mathrm{Clip}_{\rho_t}(w_{t}) + \epsilon_t$
\ENDIF
\ENDFOR
\STATE {\bfseries Output:} Clipped global model parameters $\mathrm{Clip}_{\rho_T}(w_{T})$
\end{algorithmic}
\end{minipage}
}
\end{algorithm}
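The server-side clip-and-perturb step of Algorithm~\ref{algo:parameters_perturbation} amounts to two lines of NumPy. This is a minimal sketch (function names are ours) of the operations $\mathrm{Clip}_{\rho_t}(\cdot)$ and the Gaussian perturbation, not the authors' code.

```python
import numpy as np

def clip_params(w, rho):
    """Clip_rho(w) = w / max(1, ||w||_2 / rho): projects w onto the L2 ball of radius rho."""
    return w / max(1.0, np.linalg.norm(w) / rho)

def clip_and_perturb(w, rho, sigma, rng):
    """Server-side CRFL training step: clip the aggregated global model,
    then add isotropic Gaussian noise N(0, sigma^2 I) coordinate-wise."""
    return clip_params(w, rho) + rng.normal(0.0, sigma, size=w.shape)
```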
\subsection{CRFL Testing: Parameter Smoothing}
\paragraph{Smoothed Classifiers}
We study multi-class classification models and define a classifier $h:\mathcal {(W,X)} \rightarrow \mathcal{Y} $ with a finite label set $ \mathcal{Y}=\{1,\ldots, C\}$, where $C$ denotes the number of classes.
We extend the randomized smoothing method~\cite{cohen2019certifiedrandsmoothing} to \emph{parameter smoothing} for constructing a new, ``smoothed'' classifier $h_s$ from an arbitrary base classifier $h$. The robustness properties can be verified using the smoothed classifier $h_s$.
Given the model parameters $w$ of $h$, when queried at a test sample $x_{test}$, we first take a majority vote over the predictions of the base classifier $h$ on random model parameters drawn from a probability distribution $\mu$, i.e., the smoothing measure, to obtain the ``votes'' ${H_s^{c}}(w;x_{test})$ for each class $c \in \mathcal{Y}$. The label returned by the smoothed classifier $h_s$ is then the most probable label among all classes (the majority vote winner). Formally,
\begin{equation} \label{eq:define_smoothed_cls}
\begin{aligned}
&{h_s}(w;x_{test})= \arg \max_{c\in \mathcal{Y} } {H_s^{c}}(w;x_{test}), \\
& \text{where } {H_s^{c}}(w;x_{test})= \mathbb{P}_{W\sim \mu(w)}[h(W;x_{test})=c].
\end{aligned}
\end{equation}
To be aligned with the training time Gaussian noise (perturbing), we also adopt Gaussian smoothing measures $\mu(w) = \mathcal N(w,{\sigma_T}^2\bf I)$ during testing time. In practice, the exact value of the probability $p_c=\mathbb{P}_{W\sim \mu(w)}[h(W;x_{test})=c]$ for label $c$ is difficult to obtain for neural networks, and hence we resort to Monte Carlo estimation~\cite{cohen2019certifiedrandsmoothing,lecuyer2019certifieddp} to get its approximation $\hat p_c$.
At round $t=T$, given the clipped aggregated global model $\mathrm{Clip}_{\rho_T}({w_T})$, we add Gaussian noise $\epsilon_T^k \sim \mathcal N(0,\sigma_T^2\bf I)$ for $M$ times to get $M$ sets of noisy model parameters ($M$ Monte Carlo samples for estimation), such that
$\widetilde w_T^k \gets \mathrm{Clip}_{\rho_T}({w_T}) + \epsilon_T^k,~k=1, 2, \ldots, M.$
In Algorithm~\ref{alg:certify_parameters_perturbation},
the function $\mathrm{ GetCounts }$ runs the classifier with each set of noisy model parameters $\widetilde w_T^k$ on one test sample $x_{test}$ and returns a vector of class counts.
Then we take the most probable class $\hat c_A$ and the runner-up class $\hat c_B$ to calculate the corresponding $\hat p_A$ and $\hat p_B$.
The function $\mathrm{CalculateBound}$ calibrates the empirical estimates so that the probability of $h_s$ returning an incorrect label is bounded by $\alpha$. Given the error tolerance $\alpha$, we use Hoeffding's inequality~\cite{hoeffding1994probability} to compute a lower bound $\underline{p_A}$ on the probability $H_s^{c_A}(w;x_{test})$ and an upper bound $\overline{p_B}$ on the probability $H_s^{c_B}(w;x_{test})$ according to
$\underline{p_A}= \hat p_A - \sqrt{\frac{\log(1/\alpha)}{2M}}$, $\overline{p_B}= \hat p_B + \sqrt{\frac{\log(1/\alpha)}{2M}}$, where $M$ is the number of Monte Carlo samples.
We leave the function $\mathrm{CalculateRadius}$ to be defined with our main results in later sections and we will analyze the robustness properties of the model trained and tested under our framework CRFL{}.
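The Hoeffding calibration step can be transcribed directly (a minimal sketch; the function name mirrors $\mathrm{CalculateBound}$ but the clamping to $[0,1]$ is our own defensive addition):

```python
import math

def calculate_bound(p_a_hat, p_b_hat, m, alpha):
    """Hoeffding bound with error tolerance alpha over m Monte Carlo votes:
    returns (lower bound on p_A, upper bound on p_B), clamped to [0, 1]."""
    slack = math.sqrt(math.log(1.0 / alpha) / (2.0 * m))
    return max(0.0, p_a_hat - slack), min(1.0, p_b_hat + slack)
```

More Monte Carlo samples shrink the slack term and hence tighten both bounds.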
\begin{algorithm}[ht]
\caption{\small Certification of parameters smoothing}
\label{alg:certify_parameters_perturbation}
\scalebox{0.85}{
\begin{minipage}{1.0\linewidth}
\begin{algorithmic}
\STATE {\bfseries Input:} a test sample $x_{test}$ with true label $y_{test}$, the global model parameters $\mathrm{Clip}_{\rho_T}(w_{T})$, the classifier $h(\cdot, \cdot)$
\FOR{$k=1, 2, \ldots, M$}
\STATE $\epsilon_T^k \gets $ \text{a sample drawn from } $\mathcal N(0,\sigma_T^2\bf I)$
\STATE $\widetilde w_{T}^{k} = \mathrm{Clip}_{\rho_T}(w_{T}) + \epsilon_T^k$
\ENDFOR
\STATE Calculate empirical estimation of $p_A,p_B$ for $x_{test}$
\STATE $ \texttt{counts} \gets \mathrm{ GetCounts }(x_{test}, \{ \widetilde w_{T}^{1}, \ldots, \widetilde w_{T}^{M}\} )$
\STATE $\hat c_A , \hat c_B \gets $ top two indices in $\texttt{counts}$
\STATE $\hat p_A, \hat p_B \gets \texttt{counts}[\hat c_A]/M, \texttt{counts}[\hat c_B]/M $
\STATE Calculate lower and upper bounds of $p_A,p_B$
\STATE $ \underline{p_A} , \overline{p_B} \gets \mathrm{ CalculateBound }$($\hat p_A$, $\hat p_B$, $N, \alpha$)
\IF{$ \underline{p_A} > \overline{p_B} $}
\STATE $\mathsf{RAD} =\mathrm{CalculateRadius}(\underline{p_A} , \overline{p_B})$
\STATE {\bfseries Output:} Prediction $\hat c_A$ and certified radius $\mathsf{RAD}$
\ELSE
\STATE {\bfseries Output:} $\texttt{ABSTAIN}$ and 0
\ENDIF
\end{algorithmic}
\end{minipage}
}
\end{algorithm}
\paragraph{Comparison with Certifiably Robust Models in Centralized Setting}
Our method is different from previous certifiably robust models in centralized learning against evasion attacks~\cite{cohen2019certifiedrandsmoothing} and backdoors~\cite{weber2020rab}.
Once the $M$ noisy models (at round $T$, with $\sigma_T$) are generated, they are fixed and used for every test sample at test time, just like RAB~\cite{weber2020rab} in the centralized setting. However, RAB actually trains $M$ models using $M$ noise-corrupted datasets, while we train just one model through FL and finally generate $M$ noise-corrupted copies of it.
For every test sample, randomized smoothing~\cite{cohen2019certifiedrandsmoothing} generates $M$ noisy samples. Suppose the test set size is $m$. Then during testing, randomized smoothing performs $m \cdot M$ noise additions on test samples, whereas CRFL performs only $M$ noise additions on the trained model. To the best of our knowledge, this is the first work to study \textit{parameter} smoothing rather than input smoothing, an open problem motivated by the FL scenario, since the server directly aggregates over the model parameters.
\section{Certified Robustness of CRFL}
\subsection{Pointwise Certified Robustness}
\paragraph{Goal of Certification}
In the context of data poisoning in federated learning, the goal is to protect the global model against adversarial modifications made to the local training sets of distributed clients. Thus, the goal of certifiable robustness in federated learning is, for each test point, to return a prediction as well as a certificate that the prediction would not change had some features in (part of) the local training data of certain clients been modified.
Following our threat model in Section~\ref{subsection:threat_model} and our training protocol in Algorithm~\ref{algo:parameters_perturbation}, we define the trained global model
$
\mathcal{M}(D') := \mathrm{Clip}_{\rho_T}({w'}_{T}).
$
For an FL training process exposed to the model replacement attack, when the distance between $D'$ (backdoored dataset) and $D$ (clean dataset) is below a certain threshold (i.e., the magnitude of $\{\{\delta_i\}_{j=1}^{q_i}\}_{i=1}^R$ is bounded), we can certify that $\mathcal{M}(D')$ is ``close'' to $\mathcal{M}(D)$ and thus robust to backdoors.
The rationale lies in the fact that we perform clipping and noise perturbation on the model parameters to control the global model deviation during training.
During testing, intuitively, under the Gaussian smoothing measure $\mu$ described in Algorithm~\ref{alg:certify_parameters_perturbation}, for two close distributions $\mu(\mathcal{M}(D'))$ and $\mu(\mathcal{M}(D))$, we expect that even though the class probabilities ${H_s^{c}}(\mathcal{M}(D');x_{test})$ and ${H_s^{c}}(\mathcal{M}(D);x_{test})$ may not be equal, the returned most likely labels ${h_s}(\mathcal{M}(D');x_{test})$ and ${h_s}(\mathcal{M}(D);x_{test})$ should be consistent.
In summary, we aim to develop a robustness certificate by studying under what condition on $\{\{\delta_i\}_{j=1}^{q_i}\}_{i=1}^R$ the prediction for a test sample is consistent between the smoothed FL models trained on $D$ and $D'$ separately, i.e., ${h_s}(\mathcal{M}(D');x_{test})= {h_s}(\mathcal{M}(D);x_{test})$.
To put forth our certified robustness analysis, we make the following assumptions on the loss function of all clients.
Then we present our main theorem and explain its derivation through \textit{model closeness} and \textit{parameter smoothing}.
Throughout this paper, we denote $ \nabla_w \ell(w;z)$ as $ \nabla \ell(w;z)$ for simplicity.
\begin{assumption}[Convexity and Smoothness] \label{assumption:Smoothness}
The loss function $\ell(w;z)$ is $\beta$-smooth,
i.e., $\forall w_1,w_2$,
$$
\| \nabla \ell(w_1;z) - \nabla \ell(w_2;z)\| \leq \beta \| w_1 - w_2\|.
$$
In addition, the loss function $\ell(w;z)$ is convex. Then co-coercivity of the gradient states:
\begin{align*}
& \| \nabla \ell(w_1;z) -\nabla \ell(w_2;z)\|^2 \\ &\leq \beta \langle w_{1} - w_{2},\nabla \ell(w_1;z)- \nabla \ell(w_2;z) \rangle.
\end{align*}
\end{assumption}
\begin{assumption}[Lipschitz Gradient w.r.t. Data] \label{assumption:data_lipschitz}
The gradient $\nabla_w \ell(w;z)$ is $L_{\mathcal Z}$-Lipschitz with respect to the argument $z$ in the norm $\|\cdot\|$, i.e., $\forall z_1,z_2$,
\begin{equation}
\| \nabla \ell(w;z_1) - \nabla \ell(w;z_2)\| \leq L_{\mathcal Z} \| z_1 - z_2\|. \nonumber
\end{equation}
\end{assumption}
\begin{assumption}\label{assumption:fl_system_train_test}
The whole FL system follows Algorithm~\ref{algo:parameters_perturbation} to train and Algorithm~\ref{alg:certify_parameters_perturbation} to test.
\end{assumption}
The convexity and smoothness assumptions are common in the analysis of distributed SGD \cite{li2019convergence,wang2019cooperative}. We also assume a Lipschitz gradient w.r.t.\ the data, which is used in~\cite{fallah2020personalized,reisizadeh2020robustdistributionshifts} to analyze heterogeneous data distributions across clients.
\paragraph{Main Results}\label{sec:main_results}
\begin{restatable}[General Robustness Condition]{theorem}{thmrobustnessthreelevel}
\label{theorem_robustness_3_level}
Let $h_s$ be defined as in Eq.~\ref{eq:define_smoothed_cls}. When $\eta_i \le \frac{1}{\beta}$ and Assumptions~\ref{assumption:Smoothness},~\ref{assumption:data_lipschitz}, and~\ref{assumption:fl_system_train_test} hold, suppose $c_A \in \mathcal{Y} $ and $\underline{p_A}, \overline{p_B} \in [0,1]$ satisfy
\begin{equation}
{H_s^{c_A}}(\mathcal{M}(D');x_{test}) \ge \underline{p_A} \ge \overline{p_B} \ge \max_{c\ne c_A} {H_s^{c}}(\mathcal{M}(D');x_{test}), \nonumber
\end{equation}
then if
\begin{small}
\begin{equation}
\begin{aligned}
&R\sum_{i=1}^R \left(p_i \gamma_i \tau_i \eta_i \frac{{{q_B}_i}}{{{n_B}_i}} \|\delta_i\|\right)^2 \\
&\le \frac{-\log \left( 1- (\sqrt{\underline{p_A}} - \sqrt{\overline{p_B} })^2 \right) \sigma_{\mathsf {t_{adv}}}^2 }{2 L_{\mathcal Z}^2 \prod\limits_{t=\mathsf {t_{adv}}+1}^{T} \left(2\Phi \left (\frac{\rho_t }{\sigma_{t}}\right)-1 \right)}, \nonumber \\
\end{aligned}
\end{equation}
\end{small}
it is guaranteed that
\begin{equation}
{h_s}(\mathcal{M}(D');x_{test}) = {h_s}(\mathcal{M}(D);x_{test}) = c_A, \nonumber
\end{equation}
where $\Phi$ is the cumulative distribution function (CDF) of the standard Gaussian and the other parameters are defined in Section \ref{sec_prelim}.
\end{restatable}
In practice, since the server does not know whether the global model in the current FL system is poisoned, we assume the model is already backdoored and derive the condition under which its prediction is certifiably consistent with the prediction of the clean model.
Our certification operates at three levels: \textit{feature}, \textit{sample}, and \textit{client}.
If the magnitude of the backdoor is upper bounded for every attacker, we can restate Theorem~\ref{theorem_robustness_3_level} as the following corollary.
\begin{restatable}[Robustness Condition in Feature Level]{corollary}{corrobustpixellevel}
\label{corollary_robustness_pixel_level}
Consider the same setting as in Theorem \ref{theorem_robustness_3_level}, and further assume an identical backdoor magnitude $\|\delta\|= \|\delta_i\| $ for $i=1,\ldots,R$.
Suppose $c_A \in \mathcal{Y} $ and $\underline{p_A}, \overline{p_B} \in [0,1]$ satisfy
$$
{H_s^{c_A}}(\mathcal{M}(D');x_{test}) \ge \underline{p_A} \ge \overline{p_B} \ge \max_{c\ne c_A} {H_s^{c}}(\mathcal{M}(D');x_{test}),
$$
then $ {h_s}(\mathcal{M}(D');x_{test})= {h_s}(\mathcal{M}(D);x_{test}) = c_A$ for all $\|\delta\| < \mathsf{RAD}$, where
\begin{small}
\begin{equation}\label{eq:certified_radius_R}
\begin{aligned}
\mathsf{RAD} = \sqrt{\frac{-\log \left( 1- (\sqrt{\underline{p_A}} - \sqrt{\overline{p_B} })^2 \right) \sigma_{\mathsf {t_{adv}}}^2 }{2 R L_{\mathcal Z}^2 \sum\limits_{i=1}^R ( p_i \gamma_i \tau_i \eta_i \frac{{{q_B}_i}}{{{n_B}_i}})^2 \prod\limits_{t=\mathsf {t_{adv}}+1}^{T} \left(2\Phi \left (\frac{\rho_t }{\sigma_{t}}\right)-1 \right) }} \\
\end{aligned}
\end{equation}
\end{small}
\end{restatable}
The function $\mathrm{CalculateRadius}$ in our Algorithm~\ref{alg:certify_parameters_perturbation} can calculate the certified radius $\mathsf{RAD}$ according to Corollary~\ref{corollary_robustness_pixel_level}.
We now make several remarks about Corollary~\ref{corollary_robustness_pixel_level} and {will verify them in our experiments}: 1) The noise level $\sigma_t$ and the parameter norm clipping threshold $\rho_t$ are hyper-parameters that can be adjusted to control the robustness-accuracy trade-off. For instance, the certified radius $\mathsf{RAD}$ is large when: $\sigma_t$ is high; $\rho_t$ is small; the margin between $\underline{p_A}$ and $\overline{p_B}$ is large; the number of attackers $R$ is small; the poison ratio $\frac{{{q_B}_i}}{{{n_B}_i}}$ is small; the scale factor $\gamma_i$ is small; the aggregation weight $p_i$ of the attackers is small; the number of local iterations $\tau_i$ is small; and the local learning rate $\eta_i$ is small. 2) Since $0 \le 2\Phi(\cdot) -1 \le 1$, the certified radius $\mathsf{RAD}$ goes to $\infty$ as $ T \rightarrow \infty$ whenever $\Phi(\cdot)<1$. Intuitively, benign fine-tuning after the backdoor injection round $\mathsf {t_{adv}}$ mitigates the poisoning effect; with infinitely many rounds of such fine-tuning, the model can tolerate backdoors of arbitrarily large magnitude. 3) In practice, we note that the product in the denominator may not approach $0$ due to numerical issues, which we will verify in the experiments section. 4) A large number of clients $N$ decreases the aggregation weights $p_i$ of the attackers, so the model can tolerate backdoors of larger magnitude, resulting in a higher $\mathsf{RAD}$. 5) For general neural networks, efficient computation of the Lipschitz gradient constant (w.r.t.\ the data input) is an open question, especially when the data dimension is high. We provide a closed-form expression for $ L_{\mathcal Z}$ under some constraints next.
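The certified radius of Eq.~\ref{eq:certified_radius_R} is straightforward to compute. The following is a direct transcription (function and argument names are our own); the test checks two properties stated above: $\mathsf{RAD}$ scales linearly with $\sigma_{\mathsf{t_{adv}}}$, and it vanishes when the vote margin vanishes.

```python
import math

def gauss_cdf(x):
    """Standard Gaussian CDF Phi(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def certified_radius(pA_lo, pB_hi, sigma_adv, clip_noise_pairs, R, L_Z,
                     p, gamma, tau, eta, poison_ratio):
    """RAD from the feature-level corollary. p, gamma, tau, eta, poison_ratio are
    per-attacker lists of length R; clip_noise_pairs lists (rho_t, sigma_t) for
    t = t_adv+1, ..., T."""
    if pA_lo <= pB_hi:
        return 0.0
    num = -math.log(1.0 - (math.sqrt(pA_lo) - math.sqrt(pB_hi)) ** 2) * sigma_adv ** 2
    s = sum((p[i] * gamma[i] * tau[i] * eta[i] * poison_ratio[i]) ** 2 for i in range(R))
    prod = 1.0
    for rho_t, sigma_t in clip_noise_pairs:
        prod *= 2.0 * gauss_cdf(rho_t / sigma_t) - 1.0
    return math.sqrt(num / (2.0 * R * L_Z ** 2 * s * prod))
```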
As mentioned in Section~\ref{subsection:threat_model}, the backdoor for a data sample $z_j^i$ includes both the backdoor pattern $\delta_{i_x}$ and the adversarial target label flipping $\delta_{i_y}$. In Assumption \ref{assumption:data_lipschitz} we define $ L_{\mathcal Z}$ with $z=\{x,y\}$ (the concatenation of $x$ and $y$) to certify against both backdoor patterns and label-flipping effects. Without loss of generality, here we focus on backdoor patterns, considering bounded model parameters in Lemma~\ref{lm:L_z_for_multiclass_logistic_regression}, which provides a closed-form expression for $ L_{\mathcal Z}$ in the case of multi-class logistic regression. Applying $ L_{\mathcal Z}$ from Lemma~\ref{lm:L_z_for_multiclass_logistic_regression} to Theorem~\ref{theorem_robustness_3_level} shows that the prediction for a test sample is independent of the backdoor pattern, so the backdoor pattern is disentangled from the adversarial target label.
\begin{restatable}[]{lemma}{lemmalzforlogreg}
\label{lm:L_z_for_multiclass_logistic_regression}
Given the upper bound on model parameters norm, i.e., $\|w\| \leq \rho$, and two data samples $z_1$ and $z_2$ with $x_1\neq x_2$ ($y_1=y_2$), for multi-class logistic regression (i.e., one linear layer followed by a softmax function and trained by cross-entropy loss), its Lipschitz gradient constant w.r.t data is $ L_{\mathcal Z} = \sqrt{2+2\rho + \rho^2} $. That is,
\begin{equation}
\| \nabla \ell(w;z_1) - \nabla \ell(w;z_2)\| \leq \sqrt{2+2\rho + \rho^2} \| z_1 - z_2\|. \nonumber
\end{equation}
\end{restatable}
The proof of Lemma~\ref{lm:L_z_for_multiclass_logistic_regression} is provided in Appendix \ref{sec:proof l_z}.
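The claimed constant can be spot-checked numerically. The sketch below (an empirical sanity check of our own, not a proof) evaluates the cross-entropy gradient of a one-layer softmax model on random unit-norm inputs with $\|W\|_F \le \rho$ and $y_1 = y_2$, as in the lemma, and verifies the Lipschitz inequality against $L_{\mathcal Z} = \sqrt{2+2\rho+\rho^2}$.

```python
import numpy as np

def ce_grad(W, x, y):
    """nabla_W of cross-entropy for multi-class logistic regression:
    (softmax(W x) - e_y) x^T."""
    logits = W @ x
    q = np.exp(logits - logits.max())
    q /= q.sum()
    q[y] -= 1.0
    return np.outer(q, x)

def lipschitz_holds(rho=1.0, trials=200, seed=0):
    """Check || grad(W; x1, y) - grad(W; x2, y) || <= L_Z * || x1 - x2 ||
    on random unit-norm inputs with ||W||_F <= rho."""
    rng = np.random.default_rng(seed)
    L_Z = np.sqrt(2.0 + 2.0 * rho + rho ** 2)
    for _ in range(trials):
        W = rng.normal(size=(5, 8))
        W *= rho / max(rho, np.linalg.norm(W))   # enforce ||W||_F <= rho
        x1, x2 = rng.normal(size=8), rng.normal(size=8)
        x1, x2 = x1 / np.linalg.norm(x1), x2 / np.linalg.norm(x2)
        y = int(rng.integers(5))
        lhs = np.linalg.norm(ce_grad(W, x1, y) - ce_grad(W, x2, y))
        if lhs > L_Z * np.linalg.norm(x1 - x2) + 1e-9:
            return False
    return True
```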
In order to formally derive the main theorem, we need two key results. We first quantify the closeness between the FL trained models $\mathcal{M}(D')$ and $\mathcal{M}(D)$ using a Markov Kernel{}, and then connect model closeness to prediction consistency through parameter smoothing.
\subsection{Model Closeness}
As described in Algorithm \ref{algo:parameters_perturbation}, owing to the Gaussian noise perturbation mechanism, in each round the global model can be viewed as a random vector with the Gaussian smoothing measure $\mu$.
We use
the \textit{f}-divergence{} between $\mu(\mathcal{M}(D'))$ and $\mu(\mathcal{M}(D))$ as a statistical distance for measuring model closeness of the final FL model.
Based on the data post-processing inequality, when we interpret each round of CRFL{} as a probability transition kernel, i.e., a Markov Kernel{}, the contraction coefficient of Markov Kernel{} can help bound the divergence over multiple training rounds of FL.
Let $f:(0,\infty) \rightarrow \mathbb{R}$ be a convex function with $f(1)=0$, $\mu$ and $\nu$ be two probability distributions. Then the \textit{f}-divergence{} is defined as $D_f(\mu || \nu) = E_{W \sim \nu }[f(\frac{\mu(W)}{\nu(W)})].$
Common choices of \textit{f}-divergence{} include total variation ($f(x)=\frac{1}{2}|x-1|$) and Kullback-Leibler (KL) divergence ($f(x)=x \log x$).
The data processing inequality \cite{raginsky2016strong, polyanskiy2015dissipation,polyanskiy2017strong} for the relative entropy states that, for any convex function $f$ and any probability transition kernel (Markov Kernel{}), $D_f(\mu K||\nu K) \le D_f(\mu ||\nu)$, where $\mu K$ denotes the push-forward of $\mu$ by $K$, i.e., $\mu K = \int \mu(dW) K(W)$.
In other words, $D_f(\mu || \nu)$ decreases by post-processing via $K$.
\cite{asoodeh2020differentially} extends this inequality to analyze SGD.
In our setting,
all the operations in one round of CRFL{}, including SGD, clipping, and noise perturbation, are incorporated into a Markov Kernel{}. We note that in the single-round attack setting, the adversarial clients use clean datasets to train their local models after $\mathsf {t_{adv}}$, so the Markov operator is the same as in the benign training process. Therefore the \textit{f}-divergence{} between the two global models of interest (backdoored and benign) decreases over rounds, as characterized by a contraction coefficient defined in Appendix \ref{sec_app_modelclossness}.
We quantify this contraction property of the Markov Kernel{} for each round with the help of two server-side hyperparameters, the model parameter norm clipping threshold $\rho_t$ and the noise level $\sigma_t$, and finally bound the \textit{f}-divergence{} of the global models at round $T$.
Although our analysis applies to general \textit{f}-divergence{}s, we use the KL divergence{} as an instantiation to measure model closeness here.
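Since the smoothing measure is an isotropic Gaussian with shared covariance, the KL divergence{} between the two smoothed models has the familiar closed form $\|w_1 - w_2\|^2 / (2\sigma^2)$, which is the quantity the theorem below bounds. A minimal sketch (the helper name is ours):

```python
import numpy as np

def kl_isotropic_gaussians(w1, w2, sigma):
    """KL( N(w1, sigma^2 I) || N(w2, sigma^2 I) ) = ||w1 - w2||^2 / (2 sigma^2),
    the standard closed form for equal isotropic covariances."""
    diff = np.asarray(w1, dtype=float) - np.asarray(w2, dtype=float)
    return float(np.sum(diff ** 2)) / (2.0 * sigma ** 2)
```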
\begin{restatable}[]{theorem}{thmdivergenceroundtmain}
\label{therom:divergence_round_T_main}
When $\eta_i \le \frac{1}{\beta}$ and Assumptions~\ref{assumption:Smoothness},~\ref{assumption:data_lipschitz}, and~\ref{assumption:fl_system_train_test} hold, the KL divergence{} between $\mu(\mathcal{M}(D))$ and $\mu(\mathcal{M}(D'))$ with $\mu(w) = \mathcal N(w,{\sigma_T}^2\bf I)$ is bounded as:
\begin{small}
\begin{equation}
\begin{split}
&D_{KL}( \mu(\mathcal{M}(D)) || \mu(\mathcal{M}(D')) ) \\
&\le \frac{ 2R\sum_{i=1}^R\left
(p_i \gamma_i \tau_i \eta_i \frac{{{q_B}_i}}{{{n_B}_i}} L_{\mathcal Z} \|\delta_i\| \right)^2 }{\sigma_{\mathsf {t_{adv}}}^2} \prod_{t=\mathsf {t_{adv}}+1}^{T} \left(2\Phi \left (\frac{\rho_t }{\sigma_{t}}\right)-1 \right) \nonumber
\end{split}
\end{equation}
\end{small}
\end{restatable}
The proof is provided in the Appendix \ref{sec_app_modelclossness}.
\subsection{Parameter Smoothing}
We connect model closeness to prediction consistency via the following theorem, which shows that the smoothed classifier $h_s$ is robustly certified at $\mu(w')$ with respect to a bounded KL divergence{} $D_{KL}(\mu(w)\,||\,\mu(w'))\le \epsilon$.
\begin{theorem}\label{theorem:param_smoothing}
Let $h_s$ be defined as in Eq.~\ref{eq:define_smoothed_cls}. Suppose $c_A \in \mathcal{Y} $ and $\underline{p_A}, \overline{p_B} \in [0,1]$ satisfy
\begin{equation}
{H_s^{c_A}}(w';x_{test}) \ge \underline{p_A} \ge \overline{p_B} \ge \max_{c\ne c_A} {H_s^{c}}(w';x_{test}), \nonumber
\end{equation}
then ${h_s}(w';x_{test}) = {h_s}(w;x_{test}) = c_A$ for all ${w}$ such that $D_{KL}(\mu(w)\,||\,\mu(w'))\le \epsilon$, where
\begin{equation}
\begin{aligned}
\epsilon = - \log \Big(1-(\sqrt{\underline{p_A}} - \sqrt{\overline{p_B}})^2\Big) \nonumber
\end{aligned}
\end{equation}
\end{theorem}
The proof is provided in the Appendix \ref{sec_app_param_smothing}.
Finally, combining Theorems~\ref{therom:divergence_round_T_main} and~\ref{theorem:param_smoothing} yields our main Theorem \ref{theorem_robustness_3_level}. In detail,
Theorem~\ref{therom:divergence_round_T_main} states that, under our CRFL{} framework, $D_{KL}( \mu(\mathcal{M}(D)) || \mu(\mathcal{M}(D')))$ is bounded by a quantity that depends on the difference between $D$ and $D'$. Theorem~\ref{theorem:param_smoothing} states that for a test sample $x_{test}$, as long as the KL divergence{} is smaller than $- \log (1-(\sqrt{\underline{p_A}} - \sqrt{\overline{p_B}})^2) $, the prediction of the poisoned smoothed classifier $h_s$ built upon the base classifier with model parameters $\mathcal{M}(D')$ is consistent with the prediction of $h_s$ built upon $\mathcal{M}(D)$. Therefore, we derive the condition on $D$ and $D'$ in Theorem~\ref{theorem_robustness_3_level} under which $D_{KL}( \mu(\mathcal{M}(D)) || \mu(\mathcal{M}(D'))) \leq - \log (1-(\sqrt{\underline{p_A}} - \sqrt{\overline{p_B}})^2) $. This condition also shows that $h_s$ built upon the model parameters $\mathcal{M}(D')$ is certifiably robust.
\paragraph{Defending against Other Potential Attacks}
Here we discuss how our method may generalize to defend against other training-time attacks.
1) Our method can naturally extend to \emph{fixed-frequency} attack by applying our analysis for each attack period. In particular, we can repeatedly apply our Theorem~\ref{therom:divergence_round_T_main} to analyze model closeness for each attack period, and the different initializations of each period can be bounded based on its last period. Then Theorem~\ref{theorem:param_smoothing} can be applied to connect model closeness to certify the prediction consistency.
2) \cite{wang2020attackthetails} introduce edge-case adversarial training samples to force the model to misclassify inputs on the tail of the input distribution. The edge-case attack essentially conducts a special semantic attack~\cite{bagdasaryan2020backdoor} by selecting rare images instead of directly adding backdoor patterns. It is possible to apply our framework against such attacks by viewing them as whole-\emph{sample} manipulation.
\paragraph{Comparison with Differentially Private Federated Learning}
In order to protect the privacy of each client, differentially private federated learning (DPFL) mechanisms are proposed~\cite{geyer2017differentially, mcmahan2018learning, agarwal2018cpsgd} to ensure that the learned FL model is essentially unchanged when one individual client is modified.
Compared with DPFL, our method has several fundamental differences and addresses additional challenges: 1) Mechanisms: DPFL approaches add training-time noise to provide a privacy guarantee, while ours adds smoothing noise during training and testing to provide certified robustness against data poisoning. In general, the noise added in CRFL does not need to be as large as the noise DPFL requires for a \textit{strong} privacy guarantee, and therefore CRFL preserves higher model utility.
2) Certification goals: DPFL approaches provide a client-level privacy guarantee for the learned model parameters, while in CRFL the robustness guarantee is derived for certified pointwise prediction, which can be at the feature, sample, and client levels. 3) Technical contributions: DPFL approaches derive DP guarantees via DP composition theorems~\cite{dwork2014algorithmic, abadi2016deep}, while we quantify the global model deviation via Markov Kernels and verify the robustness properties of the smoothed model via parameter smoothing.
\section{Experiments}
In our experiments, the attackers perform the model replacement attack at round $\mathsf {t_{adv}}$ during our CRFL{} training, and the server performs parameter smoothing on a possibly backdoored FL model at round $T$ to calculate the certified radius $\mathsf{RAD}$ for each test sample based on Corollary~\ref{corollary_robustness_pixel_level}.
Specifically, we evaluate the effect of the training-time noise $\sigma_t$; the attacker's ability, which includes the number of attackers $R$, the poison ratio $\frac{q_{B_i}}{n_{B_i}}$, and the scale factor $\gamma_i$; the robust aggregation protocol; the number of total clients $N$; and the number of training rounds $T$. Moreover, we evaluate the model closeness empirically to justify Theorem~\ref{therom:divergence_round_T_main}.
\subsection{Experiment Setup}
We focus on multi-class logistic regression (one linear layer with softmax function and cross-entropy loss), which is a convex classification problem.
We train the FL system following our CRFL{} framework with three datasets: Lending Club Loan Data (LOAN{})~\citep{loandataset}, MNIST{}~\cite{lecun-mnisthandwrittendigit-2010}, and EMNIST{}~\cite{cohen2017emnist}.
We refer the readers to Appendix \ref{sec:app_exp_details} for more details about the datasets, parameter setups and attack setting.
We train the FL global model until convergence and then use our certification in Algorithm~\ref{alg:certify_parameters_perturbation} for robustness evaluation.
The metrics of interest are \emph{certified rate{}} and \emph{certified accuracy{}}. Given a test set of size $m$, for the $i$-th test sample, the ground truth label is $y_i$, and the output prediction is either $c_i$ with certified radius $\mathsf{RAD}_i$, or $c_i=\texttt{ABSTAIN}$ with $\mathsf{RAD}_i=0$.
Then we calculate the
\textbf{certified rate{}} at $r$ as $\frac{1}{m}\sum_{i=1}^m \mathbb{1}\{\mathsf{RAD}_i \ge r\}$,
and the \textbf{certified accuracy{}} at $r$ as $\frac{1}{m} \sum_{i=1}^m\mathbb{1}\{c_i=y_i \text{ and } \mathsf{RAD}_i \ge r\}$.
The certified rate{} is the fraction of the test set that can be certified at radius $\mathsf{RAD} \ge r$, which reveals how consistent the possibly backdoored classifier's predictions are with the clean classifier's predictions.
The certified accuracy{} is the fraction of the test set for which the possibly backdoored classifier makes correct and consistent predictions with the clean model.
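The two metrics above can be computed directly from the per-sample certification outputs; a minimal Python sketch (function name ours):

```python
def certified_metrics(preds, labels, radii, r):
    """Certified rate and certified accuracy at radius r for m test samples.
    preds[i] is the certified class c_i, or "ABSTAIN" (then radii[i] = 0)."""
    m = len(labels)
    # certified rate: fraction of samples whose certified radius reaches r
    rate = sum(rad >= r for rad in radii) / m
    # certified accuracy: certified at r AND the certified class is correct
    acc = sum(c == y and rad >= r
              for c, y, rad in zip(preds, labels, radii)) / m
    return rate, acc
```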
In the displayed figures, there is a critical radius beyond which the certified accuracy{} and certified rate{} drop to zero.
Since each test sample has its own certified radius $\mathsf{RAD}_i$, this critical value is the threshold beyond which no sample has a larger radius, similar to the findings in \cite{cohen2019certifiedrandsmoothing}.
We certified 10000/5000/10000 samples from the LOAN{}/MNIST{}/EMNIST{} test sets. In all experiments, unless otherwise stated, we use $\sigma_T=0.01$ to generate $M=1000$ noisy models in the parameter smoothing procedure, and use the error tolerance $\alpha=0.001$.
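The exact certification procedure is given in Algorithm~\ref{alg:certify_parameters_perturbation}; the sketch below only illustrates the Monte Carlo smoothing step with these $M$ and $\alpha$ values. A simple Hoeffding bound stands in for the exact confidence interval of the algorithm, and \texttt{base\_classifier} is a hypothetical stand-in for the trained model:

```python
import math
import random
from collections import Counter

def smoothed_predict(base_classifier, w, x_test,
                     sigma_T=0.01, M=1000, alpha=0.001, seed=0):
    """Sketch of parameter smoothing: sample M models w + N(0, sigma_T^2 I),
    take the majority vote of their predictions on x_test, and lower-bound
    the top-class probability p_A (Hoeffding bound, error tolerance alpha)."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(M):
        noisy_w = [wi + rng.gauss(0.0, sigma_T) for wi in w]
        votes[base_classifier(noisy_w, x_test)] += 1
    c_hat, n_hat = votes.most_common(1)[0]
    p_a_lower = n_hat / M - math.sqrt(math.log(1.0 / alpha) / (2.0 * M))
    if p_a_lower <= 0.5:          # majority vote too weak: abstain, RAD = 0
        return "ABSTAIN", 0.0
    return c_hat, p_a_lower
```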
In our experiments, we adopt the expression of $L_{\mathcal Z}$ in Lemma~\ref{lm:L_z_for_multiclass_logistic_regression}. $L_{\mathcal Z}$ can be generalized to other poisoning settings by specifying $z_1, z_2$ in Assumption~\ref{assumption:data_lipschitz} under the case of ``$x_1\neq x_2$ and $y_1 \neq y_2$'' or ``$x_1 = x_2$ and $y_1 \neq y_2$''.
\subsection{Experiment Results}
We only change one factor in each experiment and keep others the same as the experiment setup. We plot the certified accuracy{} and certified rate{} on the clean test set, and report the results on the backdoored test set in Appendix~\ref{sec:app_exp_details}.
\begin{figure}
\caption{Certified accuracy and certified rate on MNIST{} with different training noise $\sigma$.}
\label{fig:training_sigma_noise}
\end{figure}
\paragraph{Effect of Training Time Noise}
Since we aim to defend against backdoor attacks, the training-time noise $\sigma$ ($\sigma=\sigma_t, t<T$) in Algorithm~\ref{algo:parameters_perturbation} is more essential than $\sigma_T$ in parameter smoothing (Algorithm~\ref{alg:certify_parameters_perturbation}). The reason is that $\sigma$ can nullify the malicious model updates at an early stage.
Figure~\ref{fig:training_sigma_noise} plots the certified accuracy{} and certified rate{} attained by training FL system with different $\sigma$.
In Figure~\ref{fig:training_sigma_noise}(a), when $\sigma$ is high, the certified rate{} is high at every $r$ and a large radius can be certified. Figures~\ref{fig:training_sigma_noise}(b)(c)(d) show that a large radius can be certified, but at low accuracy; thus the parameter noise $\sigma$ controls the trade-off between certifiability and accuracy, which echoes the property of evasion-attack certification~\cite{cohen2019certifiedrandsmoothing}.
Comparing the solid line with the dashed line for each color, we can see that the parameter smoothing with $\sigma_T$ does not hurt the accuracy much.
\paragraph{Effect of Attacker Ability}
From the attackers' perspective, a larger number of attackers $R$, a larger poison ratio $\frac{q_{B_i}}{n_{B_i}}$, and a larger scale factor $\gamma_i$ result in a stronger attack. Figures~\ref{fig:poison_ratio}, \ref{fig:gamma}, and \ref{fig:number_of_attackers} show that, on the three datasets, the stronger the attack, the smaller the radius that can be certified. We show in Appendix~\ref{ap:more_results_clean_testset} that, after training a sufficient number of rounds with clean datasets after $\mathsf {t_{adv}}$, the certified radius is not sensitive to the attack timing $\mathsf {t_{adv}}$.
\begin{figure}
\caption{Certified accuracy with different poison ratio on MNIST{}.}
\label{fig:poison_ratio}
\end{figure}
\begin{figure}
\caption{Certified accuracy with different scaling factor $\gamma$ on LOAN{}.}
\label{fig:gamma}
\end{figure}
\begin{figure}
\caption{Certified accuracy with different number of attackers $R$ on MNIST{}.}
\label{fig:number_of_attackers}
\end{figure}
\begin{figure}
\caption{Certified accuracy on MNIST{} and EMNIST{} under robust aggregation RFA.}
\label{fig:robustagg}
\end{figure}
\paragraph{Effect of Robust Aggregation}
Our CRFL{} can be used to assess different robust aggregation rules. Figure~\ref{fig:robustagg} presents the certified accuracy{} on MNIST{} and EMNIST{} as $R$ is varied, when our CRFL{} adopts the robust aggregation algorithm RFA~\cite{pillutla2019robustrfa}, which detects outliers and down-weights the malicious updates during aggregation.
Comparing FedAvg in Figure~\ref{fig:number_of_attackers} with RFA in Figure~\ref{fig:robustagg} (note that the magnitude of the x-axis differs), we observe that a very large radius can be certified under RFA. This is because the attacker is assigned very low aggregation weights $p_i$, which enter our bound in Eq.~\ref{eq:certified_radius_R}. Our certified radius reveals that RFA is much more robust than FedAvg, which shows the potential of the certified radius as an evaluation metric for the robustness of other robust aggregation rules.
\begin{figure}
\caption{Certified accuracy on MNIST{} with different number of total clients $N$.}
\label{fig:total_clients}
\end{figure}
\paragraph{Effect of Client Number}
Distributed learning across a large number of clients is an important property of FL. Figure~\ref{fig:total_clients} shows that a large radius can be certified when $N$ is large (i.e., more clients can tolerate a larger backdoor magnitude), because a larger $N$ decreases the aggregation weights $p_i$ of the attackers. Moreover, the backdoor effect can be mitigated by more benign model updates during training.
\begin{figure}
\caption{Certified accuracy with different number of training rounds $T$ on LOAN{}.}
\label{fig:cer_acc_T}
\end{figure}
\begin{figure}
\caption{(a) The $\ell_2$ distance of the global models between the backdoored training process and the benign training process. (b) Numerical analysis of the standard Gaussian CDF $\Phi(\cdot)$.}
\label{fig:global_l2distance_numerical_issue}
\end{figure}
\paragraph{Effect of Training Rounds}
According to Figure~\ref{fig:cer_acc_T}, the certified accuracy{} is higher when $T$ is larger.
However, the largest radius that can be certified for the test set does not increase. We note that this is due to numerical issues of the standard Gaussian CDF $\Phi(\cdot)$.
As mentioned in Section~\ref{sec:main_results}, the continued multiplication in the denominator of Eq.~\ref{eq:certified_radius_R} does not reach $0$ in practice; otherwise the certified radius $\mathsf{RAD}$ would go to $\infty$ as $T \rightarrow \infty$, since $0 \le 2\Phi(\rho/\sigma) -1 \le 1$.
To verify this argument, we fix $\underline{p_A}$ and $\overline{p_B}$ to $0.7$ and $0.1$, use default values for the other parameters, and study the relationship between $\rho/\sigma$, $T$, and $\mathsf{RAD}$ in Figure~\ref{fig:global_l2distance_numerical_issue}(b). When $\rho/\sigma$ is larger than a certain threshold, the certified radius $\mathsf{RAD}$ changes little as $T$ increases. If one wishes to increase $T$ to improve the certified radius, we suggest keeping $\rho/\sigma$ below this threshold.
The increased certified accuracy{} at large $T$ in Figure~\ref{fig:cer_acc_T} can be attributed to improved model performance up to convergence, which widens the margin $\underline{p_A} - \overline{p_B}$.
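The saturation argument is easy to reproduce in double-precision arithmetic: the per-round factor $2\Phi(\rho/\sigma)-1$ equals $\mathrm{erf}(\rho/(\sigma\sqrt{2}))$, which rounds to exactly $1.0$ once $\rho/\sigma$ is large, so multiplying in further rounds no longer changes the certified radius. A short check (function name ours):

```python
import math

def tv_term(rho_over_sigma: float) -> float:
    """2*Phi(x) - 1 for the standard Gaussian CDF Phi, computed via erf."""
    return math.erf(rho_over_sigma / math.sqrt(2.0))

# Below the threshold the per-round factor is strictly < 1, so the
# T-fold product in the denominator still decays as T grows ...
assert tv_term(1.0) < 1.0
# ... but for a large ratio the factor rounds to exactly 1.0 in float64,
# and increasing T no longer changes the certified radius.
assert tv_term(9.0) == 1.0
```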
We also study the error tolerance $\alpha$ and the number of noisy models $M$ in Appendix \ref{ap:more_results_clean_testset}. Larger $M$ yields larger certified radius, and the certified radius is not very sensitive to $\alpha$.
\paragraph{Empirical Evaluation on Model Closeness}
Our theorems are derived from an analysis relative to a ``virtual'' benign training process. Empirically, we train such an FL global model under the benign training process and compare the $\ell_2$ distance between the clean global model and the backdoored global model at every round. In Figure~\ref{fig:global_l2distance_numerical_issue}(a), one attacker performs the model replacement attack on MNIST{} at round $\mathsf {t_{adv}}=\{20,40,60\}$, respectively. We observe that the plotted $\ell_2$ distance decreases over the FL training rounds after $\mathsf {t_{adv}}$, which echoes our assumption that, because all clients behave normally and use their clean local datasets to purify the global model after $\mathsf {t_{adv}}$, the global models of the two training processes become close. This observation also supports the model closeness statement in Theorem~\ref{therom:divergence_round_T_main}.
\section{Conclusion}
This paper establishes the first framework (CRFL) for certifiably robust federated learning against backdoor attacks. CRFL employs model parameter clipping and perturbing during training, and uses model parameter smoothing during testing, to certify conditions under which a backdoored model gives predictions consistent with those of an oracle clean model.
Our theoretical analysis characterizes the relation between certified robustness and federated learning parameters, which are empirically verified on three different datasets.
\onecolumn
\appendix
\section*{Appendix}
The Appendix is organized as follows:
\begin{itemize}[leftmargin=*,itemsep=-0.5mm]
\item Appendix~\ref{sec:app_exp_details} provides more details on the experimental setup for training, presents the effects of Monte Carlo estimation and attack timing, and reports the results on the backdoored test set.
\item Appendix~\ref{sec_app_modelclossness} provides proofs for our Theorem~\ref{therom:divergence_round_T_main} and Lemma~\ref{lm:L_z_for_multiclass_logistic_regression} related to model closeness.
\item Appendix~\ref{sec_app_param_smothing} gives proofs for our Theorem~\ref{theorem:param_smoothing} related to the parameter smoothing.
\end{itemize}
\section{Experimental Details} \label{sec:app_exp_details}
\subsection{More Details on Experiment Setup for Training}
We focus on multi-class logistic regression (one linear layer with softmax function and cross-entropy loss), which is a convex classification problem.
We train the FL system following our CRFL{} framework with three datasets: Lending Club Loan Data (LOAN{})~\citep{loandataset}, MNIST{}~\cite{lecun-mnisthandwrittendigit-2010}, and EMNIST{}~\cite{cohen2017emnist}. The financial dataset LOAN{} is a tabular dataset that contains the current loan status (Current, Late, Fully Paid, etc.) and latest payment information, which can be used for loan status prediction. It consists of 1,808,534 data samples, which we divide among the 51 US states, each of which represents a client in FL; hence the data distribution is non-i.i.d. 80\% of the data samples are used for training and the rest for testing. EMNIST{} is an extended MNIST dataset that contains 10 digits and 37 letters. For the two image datasets, we split the training data among FL clients in an i.i.d. manner. The data description and other parameter setups are summarized in Table~\ref{tb: Dataset description}. For these datasets, the local learning rate $\eta_i$ is 0.001 for all clients. The server uses an adaptive norm-clipping threshold $\rho_t$ that increases over time so that the normal learning ability of the model is preserved (described in Table~\ref{tb: Dataset description}), and sets the fixed training noise level $\sigma_t=0.01$ ($t<T$). When the clipping threshold is not a fixed value, $L_{\mathcal Z}$ is calculated based on $\rho_{\mathsf {t_{adv}}}$ following Lemma~\ref{lm:L_z_for_multiclass_logistic_regression} for our experiments.
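The server-side clip-and-perturb step (the operations $\mathrm{Clip}_{\rho_t}$ and the additive noise in the notation table) and the MNIST{} clipping schedule from Table~\ref{tb: Dataset description} can be sketched as follows, for a single flattened parameter vector (function names ours):

```python
import math
import random

def clip_and_perturb(w, rho, sigma, rng=None):
    """Server step: Clip_rho(w) = w / max(1, ||w|| / rho), then add
    elementwise Gaussian noise N(0, sigma^2) to the clipped parameters."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(wi * wi for wi in w))
    scale = 1.0 / max(1.0, norm / rho)
    return [wi * scale + rng.gauss(0.0, sigma) for wi in w]

def rho_mnist(t: int) -> float:
    """Adaptive clipping threshold for MNIST from the table: rho_t = 0.1 t + 2."""
    return 0.1 * t + 2.0
```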
\begin{wrapfigure}{r}{0.15\textwidth}
\centering
\begin{subfigure}{\linewidth}
\centering
\includegraphics[width=0.1\textwidth]{experiments/mnist_backdoor_pattern.png}
\end{subfigure}
\caption{Backdoor pattern for image datasets}
\label{fig:backdoor_pattern}
\end{wrapfigure}
Regarding the attack setting, by default we set $R=1$; if there are more adversarial clients, we use the same parameter setup for all of them. For the pixel-pattern backdoor in MNIST{} and EMNIST{}, the attackers add the backdoor pattern (see Figure~\ref{fig:backdoor_pattern} for an example) to images and swap the label of any sample with such a pattern to the target label, which is ``digit 0''. Similarly, for the preprocessed\footnote{We preprocess LOAN{} by dropping the features which are not numerical and cannot be one-hot encoded, and then normalizing the remaining 90 features so that the value of each feature is between 0 and 1.} LOAN{} dataset, the attackers increase the values of two features (i.e., num\_tl\_120dpd\_2m, num\_tl\_90g\_dpd\_24m) as a backdoor pattern, and swap the label to ``Does not meet the credit policy. Status:Fully Paid''.
Since we adopt Lemma~\ref{lm:L_z_for_multiclass_logistic_regression} for our experiments, we focus on the backdoor pattern $\|\delta_i\|=\|\delta_{i_x}\|$.
The magnitude of the backdoor pattern in every example is $\|\delta_i\|=0.1$ on all three datasets. Every attacker's batch is mixed with correctly labeled data and such backdoored data with poison ratio $q_{B_i} / n_{B_i}$.
We train the FL global model until convergence and then use our certification in Algorithm~\ref{alg:certify_parameters_perturbation} for robustness evaluation.
\begin{table}[htbp]
\centering
\scalebox{0.85}{
\begin{tabular}{l c c c c c c c c c}
\toprule
Dataset& Classes & $\#$Training samples & Features & $N$ & $q_{B_i} / n_{B_i}$ & $\tau_i$ & $\gamma_i$ & $\mathsf {t_{adv}}$& $\rho_t$ \\
\midrule
LOAN{}& 9& 1446827 & 91 & 20 & 40/800 & 143 & 10 & 6 & 0.025t+2 \\
MNIST{}& 10& 60000 & 784 & 20 & 5/100 & 30 & 10& 10 & 0.1t+2\\
EMNIST{}& 47& 697932 & 784 & 50 & 5/200 & 70 & 20& 10 & 0.25t+4\\
\bottomrule
\end{tabular}}
\caption{Dataset description and parameters}
\label{tb: Dataset description}
\end{table}
\subsection{More Experimental Results on Clean Test Set}
\label{ap:more_results_clean_testset}
\paragraph{Effect of Monte Carlo estimation}
Recall that we use $M$ and $\alpha$ when calculating the lower bound $\underline{p_A}$ and the upper bound $\overline{p_B}$.
Figure~\ref{fig:M_alpha} (left) shows that a larger number $M$ of noisy models used for certification results in a larger certified radius. Figure~\ref{fig:M_alpha} (middle) shows that the certified radius is smaller when the error tolerance $\alpha$ is smaller, but overall the certified accuracy{} is not very sensitive to $\alpha$.
\begin{figure}
\caption{Left: Certified accuracy on MNIST{} with different number of noisy models $M$. Middle: Certified accuracy with different error tolerance $\alpha$. Right: Certified accuracy with different attack timing $\mathsf{t_{adv}}$.}
\label{fig:M_alpha}
\end{figure}
\paragraph{Effect of Attack Timing $\mathsf {t_{adv}}$}
For Figure~\ref{fig:M_alpha} (right), we use a strong attack ($\gamma$=100, $R$=2) and report the certified accuracy with different $\mathsf {t_{adv}}$.
As described in Table~\ref{tb: Dataset description}, $\rho_{\mathsf {t_{adv}}}$ increases with $\mathsf {t_{adv}}$, and $L_{\mathcal Z}$ is calculated based on $\rho_{\mathsf {t_{adv}}}$. To control variables, we use the same, loose $L_{\mathcal Z}$, calculated based on $\rho_{44}$, for all $\mathsf {t_{adv}}=10,20,40,43,44$.
The results show that the certified radius is not sensitive to the attack timing $\mathsf {t_{adv}}$ after training sufficient number of rounds with clean datasets after $\mathsf {t_{adv}}$.
\subsection{Experimental Results on Backdoored Test Set}
In this section, we report the certified accuracy{} on the backdoored test set. For every test sample, the backdoor pattern is added to the input while the label is kept correct. As shown in Figures~\ref{fig:cer_acc_backdoor_mnist_attack_ability} and \ref{fig:cer_acc_backdoor_mnist_others}, the results are similar to those on the clean test set.
\begin{figure}
\caption{Certified accuracy with different attack ability (a)(c)(d) and certified accuracy under robust aggregation RFA~\cite{pillutla2019robustrfa}.}
\label{fig:cer_acc_backdoor_mnist_attack_ability}
\end{figure}
\begin{figure}
\caption{Certified accuracy with different $\sigma$ (a), $N$ (b) and $T$ (c) on MNIST{}.}
\label{fig:cer_acc_backdoor_mnist_others}
\end{figure}
\section{Proofs of Model Closeness}
\label{sec_app_modelclossness}
In this section,
we present preliminaries on \textit{f}-divergence{}, define the model closeness problem, and then provide detailed proofs for Theorem~\ref{therom:divergence_round_T_main} and Lemma~\ref{lm:L_z_for_multiclass_logistic_regression}, which relate to model closeness.
Let us list the notations used in the paper and the Appendix in Table~\ref{tb: notations}.
\begin{table}[htbp]
\scalebox{0.7}{
\begin{tabular}{l c}
\toprule
Notation & Description \\
\midrule
$\mathcal{M}(\cdot)$ & the training protocol in Algorithm~\ref{alg:certify_parameters_perturbation} \\
$z_j^i:=\{x_j^i,y_j^i\}$ & $j$-th data sample at client $i$ with input $x_j^i$ and label $y_j^i$\\
${z'}_j^i:=\{x_j^i+{{\delta_i}_x}, y_j^i+{{\delta_i}_y}\}$ & backdoored version of $z_j^i$ where ${\delta_i}_x$ is input backdoor pattern and ${\delta_i}_y$ is label flipping effect \\
$D:=\{S_1,S_2,\ldots,S_N \}$ & Clean training dataset, the union of clean local dataset of $N$ clients\\
$D'= D+ \{\{\delta_i\}_{j=1}^{q_i}\}_{i=1}^R. $ & poisoned training dataset in round $\mathsf {t_{adv}}$ with $R$ attackers and $q_i$ poisoned samples in $i$-th attacker's local dataset\\
$\mathcal{M}(D)$& the clipped global model obtained from $\mathcal{M}$ using $D$\\
$\mathcal{M}(D')$ & the clipped global model obtained from $\mathcal{M}$ that uses $D'$ at round $\mathsf {t_{adv}}$ and uses $D$ at round $t \neq \mathsf {t_{adv}}$\\
$g_i(w)= g_i( w; \xi^i) $ & local gradients at client $i$ w.r.t $w$ with clean batch $\xi^i$\\
${g'}_i(w)={g}_i(w; {\xi'}^i) $ & local gradients at client $i$ w.r.t $w$ with poisoned batch ${\xi'}^i$\\
$\mathcal{B}^i \triangleq {g'}_i(w) - g_i(w) $ & the difference between poisoned local gradient and benign local gradient w.r.t same model parameters $w$ \\
$w_{s}^i$& client $i$'s local model parameters at local iteration $s$ \\
$w_{t} \gets \widetilde w_{t-1}+ \sum \limits_{i = 1}^N p_i (w_{t\tau_i}^i-\widetilde w_{t-1}) $ & aggregated global model at round $t$\\
$ \mathrm{Clip}_{\rho_t}(w_{t}) \gets w_{t}/ \max(1, \frac{\|w_{t}\|}{\rho_t})$ & clipped global model with model parameters norm threshold $\rho_t$ at round $t$\\
$\widetilde w_{t} \gets \mathrm{Clip}_{\rho_t}(w_{t}) + \epsilon_t$ & global model at round $t$ that is perturbed by noise $\epsilon_t$ \\
$h_s$ & the smoothed classifier transferred from the base classifier $h$\\
$p_c={H_s^{c}}(w;x_{test})= \mathbb{P}_{W\sim \mu(w)}[h(W;x_{test})=c]$ & the probability (the majority votes) of class $c$ for the given $w$ and $x_{test}$\\
${h_s}(w;x_{test})= \arg \max_{c\in \mathcal{Y} } {H_s^{c}}(w;x_{test})$ &the mostly probable label among all classes (the majority vote winner) for the given $w$ and $x_{test}$ \\
\bottomrule
\end{tabular}
}
\caption{Table of notations}
\label{tb: notations}
\centering
\end{table}
Throughout this paper, ``benign training process'' is the process that trains with clean dataset $D$ for $T$ rounds and outputs $\mathcal{M}(D)$; ``backdoored training process'' is the process that trains with poisoned dataset $D'$ at round $\mathsf {t_{adv}}$, trains with original clean dataset when $t\neq \mathsf {t_{adv}}$, and outputs $\mathcal{M}(D')$.
\subsection{Preliminaries on \textit{f}-divergence{}} \label{sec:f-div preliminaris}
Let $f:(0,\infty) \rightarrow \mathbb{R}$ be a convex function with $f(1)=0$, and let $\nu$ and $\rho$ be two probability distributions. The \textit{f}-divergence{} is defined as
\begin{equation}
D_f(\nu || \rho) = E_{W \sim \rho }\left[f(\frac{\nu(W)}{\rho(W)})\right].
\end{equation}
Common \textit{f}-divergence{}s include the total variation distance, $f(x)=\frac{1}{2}|x-1|$, and the Kullback--Leibler (KL) divergence, $f(x)=x \log x$.
\begin{lemma} \label{lemma:divergence_guassian}
For $m_1, m_2 \in \mathbb{R}^d$ and $\sigma > 0$, let $\mathcal N_1$ and $\mathcal N_2$ denote the Gaussian distributions $\mathcal N(m_1,\sigma^2I)$ and $\mathcal N(m_2,\sigma^2I)$, respectively. Then,
\begin{align}
D_{KL}(\mathcal N_1 || \mathcal N_2)= \frac{\|m_2-m_1\|^2}{2\sigma^2},
\end{align}
\begin{align}
D_{TV}(\mathcal N_1 || \mathcal N_2)= 2\Phi \left( \frac{\|m_2-m_1\|}{2\sigma} \right)-1,
\end{align}
where $\Phi$ is the CDF of the standard Gaussian distribution.
\end{lemma}
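The KL identity in the lemma is easy to sanity-check numerically: a Monte Carlo estimate of $E_{W \sim \mathcal N_1}[\log p_1(W) - \log p_2(W)]$ matches the closed form $\|m_2-m_1\|^2/(2\sigma^2)$ (function names ours, 1-dimensional case):

```python
import random

def kl_gaussians_closed_form(m1, m2, sigma):
    """D_KL(N(m1, sigma^2) || N(m2, sigma^2)) = ||m2 - m1||^2 / (2 sigma^2)."""
    return (m2 - m1) ** 2 / (2.0 * sigma ** 2)

def kl_gaussians_monte_carlo(m1, m2, sigma, n=200_000, seed=0):
    """Estimate E_{W ~ N(m1, sigma^2)}[log p1(W) - log p2(W)] by sampling."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        w = rng.gauss(m1, sigma)
        # For equal variances, log p1(w) - log p2(w) simplifies to
        # ((w - m2)^2 - (w - m1)^2) / (2 sigma^2).
        total += ((w - m2) ** 2 - (w - m1) ** 2) / (2.0 * sigma ** 2)
    return total / n
```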
The well-known data processing inequality~\cite{polyanskiy2015dissipation} for the relative entropy states that, for any convex function $f$ and any stochastic transformation (probability transition kernel), i.e., Markov Kernel{} $K$, we have
$$
D_f(\nu K||\rho K) \le D_f(\nu ||\rho),
$$
where $\nu K$ denotes the push-forward of $\nu$ by $K$, i.e., $\nu K = \int \nu(dW) K(W)$. In other words, $D_f(\nu || \rho)$ decreases under post-processing. \cite{asoodeh2020differentially} extends this to machine learning, where the operations in a Markov Kernel{} include one step of Stochastic Gradient Descent (SGD).
To capture this effect, the noisiness of a Markov operator~\cite{raginsky2016strong} for \textit{f}-divergence{} is quantified by the contraction coefficient~\cite{asoodeh2020differentially}, defined as
\begin{equation}\label{eq:contraction_def}
\eta_{f}(K) := \sup \limits_{\nu,\rho; {D_f(\nu||\rho)\ne 0}}\frac{D_f(\nu K||\rho K)}{D_f(\nu ||\rho)}.
\end{equation}
\begin{lemma}[Two-point characterization of Total variation~\cite{dobrushin1956central}] \label{lm:two_point_tv}
The supremum in the definition of $\eta_{TV}(K)$ can be restricted to point masses:
\begin{equation}
\eta_{TV}(K) = \sup \limits_{y_1,y_2} D_{TV}(K(y_1)||K(y_2)).
\end{equation}
\end{lemma}
\begin{lemma}[$\eta_{TV}(K)$ Upper Bound~\cite{makur2019informationphdmit}] \label{lm:contraction_TV_upper_bound}
For any \textit{f}-divergence{}, we have
\begin{equation}
\eta_{f}(K) \leq \eta_{TV}(K)
\end{equation}
\end{lemma}
\subsection{Problem Definition}
As described in Algorithm \ref{algo:parameters_perturbation}, due to the Gaussian noise perturbation mechanism, in each iteration the global model can be viewed as a random vector with the Gaussian smoothing measure $\mu$.
We use
the \textit{f}-divergence{} between $\mu(\mathcal{M}(D'))$ and $\mu(\mathcal{M}(D))$ as a statistical distance for measuring model closeness.
According to the data post-processing inequality, when we interpret each round of CRFL{} as a probability transition kernel, i.e., a Markov Kernel{}, the contraction coefficient of Markov Kernel{} can help bound the divergence over multiple training rounds of FL.
\paragraph{Iteration as Markov Kernel{}}
We identify each iteration as a Markov Kernel{}. At iteration $t$, the central server produces the new model by $ \widetilde w_{t} \gets \mathrm{Clip}_{\rho_t}\left( w_{t} \right)+ \epsilon_t$, where $w_{t}$ is the aggregated model. We denote $w_{t} = \Psi_t ( \widetilde w_{t-1} ) $, and
\begin{equation}
\widetilde w_{t} \gets \mathrm{Clip}_{\rho_t}\left( \Psi_t\left(\widetilde w_{t-1}\right) \right)+ \epsilon_t,
\end{equation}
where
\begin{equation}
\Psi_t(\widetilde w_{t-1}) \triangleq \widetilde w_{t-1}- \sum_{i=1}^N p_i \eta_i \sum_{s=(t-1)\tau_i+1}^{t\tau_i} g_i\left( w_{s-1}^i; \xi_{s-1}^i\right)
\end{equation}
is the federated learning SGD process and the local model is initialized as $ w_{(t-1)\tau_i}^i \gets \widetilde w_{t-1}$.
Therefore, iteration $t$ can be realized by $K_t$, a Markov Kernel{} associated with the mapping $ \widetilde w_{t-1} \rightarrow \mathrm{Clip}_{\rho_t} (\Psi_t(\widetilde w_{t-1})) + \epsilon_t$. $K_t$ receives $\widetilde w_{t-1}$ and then generates $\widetilde w_{t}$. Let $\mu_t$ denote the distribution of the global model ${\widetilde w_{t}}$; since $\widetilde w_{t-1}\sim \mu_{t-1}$, we have
$
\mu_{t} =\int \mu_{t-1}(dy)K_t(y).
$
\paragraph{Model Replacement Attack at $\mathsf {t_{adv}}$ }
We define the backdoored federated learning SGD process ${\Psi'}_t$ at round $t=\mathsf {t_{adv}}$ as
\begin{align}
{\Psi'}_{t}(\widetilde w_{t-1})&\triangleq \widetilde w_{t-1} - \sum_{i=1}^{R} p_i \gamma_i \eta_i \sum_{s=(t-1)\tau_i+1}^{t\tau_i} g_i\left( {w'}_{s-1}^i; {\xi'}_{s-1}^i\right)
- \sum_{j=R+1}^N p_j \eta_j \sum_{s=(t-1)\tau_j+1}^{t\tau_j} g_j\left( {w}_{s-1}^j; {\xi}_{s-1}^j\right)
\end{align}
where the local model is initialized as $ {w'}_{(t-1)\tau_i}^i \gets \widetilde w_{t-1}$. We then define the corresponding Markov Kernel{} $K'_{t}$ associated with the mapping $ \widetilde w_{t-1} \rightarrow \mathrm{Clip}_{\rho_t}( {\Psi'}_t(\widetilde w_{t-1}))+ \epsilon_t$. Through aggregation, the global model is influenced by the adversarial clients. Let $\mu'_{t}$ denote the distribution of the backdoored global model ${\widetilde w'_{t}}$; since $\widetilde w_{t-1}\sim \mu_{t-1}$, we have
$
\mu'_{t} =\int \mu_{t-1}(dy)K'_t(y).
$
\paragraph{After Model Replacement Attack} After $\mathsf {t_{adv}}$, all clients use their original clean datasets to update their local models. However, from round $\mathsf {t_{adv}}$ on, the global model in the backdoored training process already differs from the one in the benign training process, so it is difficult to analyze it through distributed SGD. Therefore, we use the Markov Kernel{} to quantify the poisoning effect. When $t > \mathsf {t_{adv}}$, we have $\widetilde w'_{t-1}\sim \mu'_{t-1}$, and then
$
\mu'_{t} =\int \mu'_{t-1}(dy)K_t(y).
$
Because the clean datasets are used in both the clean and the backdoored training processes when $t > \mathsf {t_{adv}}$, the Markov Kernel{} $K_t$ is the same. We define the contraction coefficient~\cite{asoodeh2020differentially} as:
\begin{equation}\label{eq:define_eta_f_sup_fl}
\eta_{f}(K_t):= \sup \limits_{\mu_{t-1},\mu'_{t-1}; \atop {D_f(\mu_{t-1}\|\mu'_{t-1})\ne 0}}\frac{D_f(\mu_{t-1} K_t\|\mu'_{t-1} K_t)}{D_f(\mu_{t-1} \|\mu'_{t-1})}.
\end{equation}
Therefore, $\eta_{f}(K_t)$ serves as an upper bound for the actual ratio
$
\frac{D_f(\mu_{t} \|\mu'_{t})}{D_f(\mu_{t-1} \|\mu'_{t-1})}.
$
Then we write the model closeness $D_f(\mu_T \| \mu_T' )$ as:
\begin{align} \label{eq_df_modelclosess}
D_f(\mu_T \| \mu_T' ) &= D_f(\mu_{\mathsf {t_{adv}}} \| \mu'_{\mathsf {t_{adv}}} ) \frac{D_f(\mu_{\mathsf {t_{adv}}+1} \| \mu'_{\mathsf {t_{adv}}+1} ) }{D_f(\mu_{\mathsf {t_{adv}}} \| \mu'_{\mathsf {t_{adv}}} )} \cdots \frac{D_f(\mu_T \| \mu_T' )}{D_f(\mu_{T-1} \| \mu_{T-1}' )} \nonumber \\
&\le D_f(\mu_{\mathsf {t_{adv}}} \| \mu'_{\mathsf {t_{adv}}} )\prod_{t=\mathsf {t_{adv}}+1}^{T} \eta_{f}(K_t).
\end{align}
We will compute $D_f(\mu_{\mathsf {t_{adv}}} \| \mu'_{\mathsf {t_{adv}}} )$ and $\eta_{f}(K_t)$ respectively in the following sections.
\subsection{Analysis for $t=\mathsf {t_{adv}}$}
We would like to bound the divergence of the global model at round $\mathsf {t_{adv}}$ between the benign training process and the backdoored training process, i.e., $D_f(\mu_{\mathsf {t_{adv}}} \| \mu'_{\mathsf {t_{adv}}} )$. We consider the KL divergence. Based on the KL divergence between two Gaussian distributions in Lemma~\ref{lemma:divergence_guassian} and Assumption~\ref{assumption:fl_system_train_test}, we have
\begin{align} \label{eq:kldiv_tadv_overview}
D_{KL}(\mu_{\mathsf {t_{adv}}}\|\mu'_{\mathsf {t_{adv}}}) &= D_{KL}\left( \mathcal N\left(\mathrm{Clip}_{\rho_\mathsf {t_{adv}}} \left( w_{\mathsf {t_{adv}}} \right),\sigma_{\mathsf {t_{adv}}}^2\bf I \right) \| \mathcal N\left(\mathrm{Clip}_{\rho_\mathsf {t_{adv}}} \left( w'_{\mathsf {t_{adv}}} \right),\sigma_{\mathsf {t_{adv}}}^2\bf I\right)\right) \nonumber \\
&= \frac{\left\| \mathrm{Clip}_{\rho_\mathsf {t_{adv}}} ( w_{\mathsf {t_{adv}}} ) -\mathrm{Clip}_{\rho_\mathsf {t_{adv}}} \left( w'_{\mathsf {t_{adv}}}\right) \right\|^2}{2\sigma_{\mathsf {t_{adv}}}^2} \nonumber \\
&\le \frac{\left\| w_{\mathsf {t_{adv}}} - w'_{\mathsf {t_{adv}}} \right\|^2}{2\sigma_{\mathsf {t_{adv}}}^2}.
\end{align}
\paragraph{Accumulated Effect in Local Iterations}
In order to bound $\left\| w_{\mathsf {t_{adv}}} - w'_{\mathsf {t_{adv}}} \right\|^2$, we look at the local iterations $ s =(t-1)\tau_i+1, (t-1)\tau_i+2, \ldots, t\tau_i $ of adversarial client $i$ for the benign training process and the backdoored training process.
We write ${\underline{s}} = s- (\mathsf {t_{adv}}-1)\tau_i$, $\underline{s}=1,2,\ldots, \tau_i$, for simplicity.
We denote $\Delta_{\underline{s}}^i \triangleq w_{\underline{s}}^i -{w'}_{\underline{s}}^i$. Note that $\Delta_0^i =0$, because at the start of round $\mathsf {t_{adv}}$ the initial local model is the same benign global model $ w_{(\mathsf {t_{adv}}-1) \tau_i}^i = {w'}_{(\mathsf {t_{adv}}-1) \tau_i}^i = \widetilde w_{\mathsf {t_{adv}}-1} $ for all clients $i \in [N]$ in both the benign and the backdoored training processes.
For simplicity, we will use $g_i(w), {g'}_i(w)$ instead of $g_i(w; \xi), g_i(w; \xi')$ in the rest of this section.
We denote $\mathcal{B}^i \triangleq {g'}_i(w) - g_i(w) $.
\begin{lemma}\label{lm:delta s and delta s-1}
Under Assumption~\ref{assumption:Smoothness} and the condition $\eta_i \le \frac{1}{\beta}$, for ${\underline{s}} \in [1, \tau_i ]$, we have
\begin{equation}
\begin{aligned}
{\Delta_{{\underline{s}}+1}^i}^2 \le {\Delta_{\underline{s}}^i}^2 + 2\eta_i \left\| \mathcal{B}^i \right\| \Delta_{\underline{s}}^i + 2 \eta_i^2 \left \| \mathcal{B}^i \right \|^2.
\end{aligned}
\end{equation}
\end{lemma}
We defer the proof to Section~\ref{sec:proof_for_some_lemmas}.
Lemma~\ref{lm:delta s and delta s-1} states that the deviation at each local iteration builds on the deviation $\Delta_{\underline{s}}^i$ from the previous iteration.
\begin{lemma}\label{lm:delta s}
Based on Lemma~\ref{lm:delta s and delta s-1}, under Assumption~\ref{assumption:Smoothness} and the condition $\eta_i \le \frac{1}{\beta}$, for ${\underline{s}} \in [1, \tau_i ]$, we have
\begin{equation}
\begin{aligned}
\Delta_{\underline{s}}^i \le 2\eta_i \left\| \mathcal{B}^i \right\| {\underline{s}}.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
We prove it by induction~\cite{zhang2017efficient}.
Since $\Delta_0^i=0$, we have $\Delta_1^i \le \sqrt{ 2 \eta_i^2 \left \| \mathcal{B}^i \right \|^2 } \le 2\eta_i \left\| \mathcal{B}^i \right\|$, so the claim $ \Delta_{\underline{s}}^i \le 2\eta_i \left\| \mathcal{B}^i \right\| \underline{s} $ holds for $\underline{s} =1$. Suppose the claim holds for some $\underline{s} $; we verify it for $\underline{s}+1$:
\begin{equation}
\begin{aligned}
{\Delta_{\underline{s}+1}^i}^2 &\le 4\eta_i^2 \left\| \mathcal{B}^i \right\|^2 \underline{s}^2+ 4\eta_i^2 \left\| \mathcal{B}^i \right\|^2 \underline{s} + 2 \eta_i^2 \left \| \mathcal{B}^i \right \|^2 \\
&\le \eta_i^2 \left\| \mathcal{B}^i \right\|^2 (4\underline{s}^2+ 8\underline{s}+4) \\
&= 4\eta_i^2 \left\| \mathcal{B}^i \right\|^2 (\underline{s}+1)^2.
\end{aligned}
\end{equation}
Hence $ \Delta_{\underline{s}}^i \le 2\eta_i \left\| \mathcal{B}^i \right\| \underline{s} $ also holds for ${\underline{s}}+1$, which completes the induction.
\end{proof}
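The recursion of Lemma~\ref{lm:delta s and delta s-1} and the linear bound of Lemma~\ref{lm:delta s} can be checked numerically by iterating the worst case (equality) of the recursion; the values of $\eta_i$ and $\|\mathcal{B}^i\|$ below are hypothetical:

```python
import math

eta, B = 0.05, 3.0   # hypothetical learning rate eta_i and gradient gap ||B^i||
delta_sq = 0.0       # Delta_0^i = 0: both processes start from the same model
for s in range(1, 51):
    delta = math.sqrt(delta_sq)
    # worst case: the recursion of Lemma holds with equality
    delta_sq = delta_sq + 2 * eta * B * delta + 2 * eta ** 2 * B ** 2
    assert math.sqrt(delta_sq) <= 2 * eta * B * s   # claimed linear bound
```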
Lemma~\ref{lm:delta s} states that the deviation accumulates over the local iterations: the larger the number of local iterations $\tau_i$, the larger the deviation $\Delta_{{\tau_i}}^i$.
Next, we provide the upper bound for $\|\mathcal{B}^i\| $.
\begin{lemma}\label{lm:batch l_z lipz}
Under Assumption~\ref{assumption:data_lipschitz} on the Lipschitz continuity of the gradient with respect to the data, when adversarial client $i$ has ${q_B}_i$ backdoored samples in a batch of size ${n_B}_i$, we have
\begin{equation}
\begin{aligned}
\|\mathcal{B}^i\| \le \frac{{{q_B}_i}}{{{n_B}_i}} L_{\mathcal Z} \|\delta_i\|.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
\begin{align*}
\left \|\mathcal{B}^i \right\| &= \left\| {g'}_i(w) - g_i(w) \right \| \\
&= \left \| \frac{1}{n_{B_i}} \left(\sum_{j=1}^{q_{B_i}} \nabla \ell ( w; {z'}_j^i) + \sum_{j=q_{B_i}+1}^{n_{B_i}} \nabla \ell ( w; {z}_j^i) \right) - \frac{1}{n_{B_i}} \sum_{j=1}^{n_{B_i}} \nabla \ell ( w; z_j^i) \right \| \\
&= \left \| \frac{1}{n_{B_i}} \sum_{j=1}^{q_{B_i}} \left(\nabla \ell ( w; {z'}_j^i) - \nabla \ell ( w; {z}_j^i) \right)\right \| \\
&\le \frac{1}{n_{B_i}} \sum_{j=1}^{q_{B_i}} \left \| \nabla \ell ( w; {z'}_j^i) - \nabla \ell ( w; {z}_j^i) \right \| \le \frac{L_{\mathcal Z}}{n_{B_i}} \sum_{j=1}^{q_{B_i}} \left \| {z'}_j^i - {z}_j^i \right \| \\
&= \frac{{{q_B}_i}}{{{n_B}_i}} L_{\mathcal Z} \left \|\delta_i\right\|.
\end{align*}
\end{proof}
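A minimal numerical illustration of the lemma, using a toy quadratic loss $\ell(w;z)=\frac{1}{2}\|w-z\|^2$ for which $L_{\mathcal Z}=1$ (all data values below are hypothetical); the bound holds with equality when every poisoned sample carries the same trigger $\delta_i$:

```python
import math

n_B, q_B = 8, 3                 # batch size and number of poisoned samples
delta = [0.2, -0.1]             # trigger perturbation added to poisoned samples
z = [[float(j), float(-j)] for j in range(n_B)]          # clean batch
z_p = [[zi + d for zi, d in zip(z[j], delta)] if j < q_B else z[j]
       for j in range(n_B)]     # first q_B samples carry the trigger
w = [0.5, 0.5]

def batch_grad(w, batch):
    """Average gradient of l(w; z) = 0.5*||w - z||^2, i.e. mean of (w - z)."""
    g = [0.0] * len(w)
    for zj in batch:
        for k in range(len(w)):
            g[k] += (w[k] - zj[k]) / len(batch)
    return g

g_clean, g_poison = batch_grad(w, z), batch_grad(w, z_p)
gap = math.sqrt(sum((a - b) ** 2 for a, b in zip(g_poison, g_clean)))
bound = (q_B / n_B) * 1.0 * math.sqrt(sum(d ** 2 for d in delta))  # L_Z = 1
assert gap <= bound + 1e-12
```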
\paragraph{Scaling and Aggregation}
Let $\gamma_i$ be the scale factor of the $i$-th adversarial client; the scaled malicious local update is then
$\gamma_i ( {w'}_{\mathsf {t_{adv}}\tau_i}^i - \widetilde {w}_{\mathsf {t_{adv}}-1}).$
We assume that in the benign setting (a virtual training process used only for analysis; no such model is actually trained), this client also scales its clean local update as
$
\gamma_i( {w}_{\mathsf {t_{adv}}\tau_i}^i - \widetilde {w}_{\mathsf {t_{adv}}-1}),
$
which can be expanded as
$
- \eta_i \gamma_i \sum_{s=(\mathsf {t_{adv}}-1)\tau_i+1}^{\mathsf {t_{adv}}\tau_i} g_i\left( w_{s-1}^i; \xi_{s-1}^i\right).
$
This assumption does not hurt the global model performance in the virtual benign setting: since the local learning objective is benign, scaling the update is equivalent to scaling the local learning rate $\eta_i \gets \eta_i \gamma_i $.
After aggregation, the deviation between the global model parameters of the benign and the backdoored training processes can be bounded. Note that the benign clients' local model updates cancel out since they are identical in the two training processes.
\begin{lemma}\label{lm:scaled_local_diff}
The deviation between the aggregated global model in the benign training process and the global model in the backdoored training process at round $\mathsf {t_{adv}}$ satisfies
\begin{equation}
\begin{aligned}
\| w_{\mathsf {t_{adv}}} - w'_{\mathsf {t_{adv}}} \|^2 \le R \sum_{i=1}^R (\gamma_i p_i \Delta_{ {\tau_i}}^i )^2.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}[Proof]
\begin{align*}
&\left\| w_{\mathsf {t_{adv}}} - w'_{\mathsf {t_{adv}}} \right\|^2 \\
&= \left\| \sum_{i=1}^R p_i \gamma_i ({w}_{\mathsf {t_{adv}} \tau_i}^i - \widetilde{w}_{\mathsf {t_{adv}}-1}) - \sum_{i=1}^R p_i \gamma_i ({w'}_{\mathsf {t_{adv}} \tau_i}^i - \widetilde{w}_{\mathsf {t_{adv}}-1}) \right \|^2\\
&= \left\| \sum_{i=1}^R p_i \gamma_i \left( {w}_{\mathsf {t_{adv}}\tau_i}^i- {w'}_{\mathsf {t_{adv}}\tau_i}^i \right) \right \|^2\\
&\le R \sum_{i=1}^R p_i^2 \gamma_i^2 \left\| {w}_{\mathsf {t_{adv}}\tau_i}^i- {w'}_{\mathsf {t_{adv}}\tau_i}^i \right \|^2
= R \sum_{i=1}^R \left ( p_i \gamma_i \Delta_{{\tau_i}}^i \right )^2,
\end{align*}
where we use the inequality $\|\sum_{i=1}^R a_i \|^2 \leq R \sum_{i=1}^R \| a_i \|^2 $, a consequence of the Cauchy--Schwarz inequality.
\end{proof}
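The norm inequality used in the last step, $\|\sum_{i=1}^R a_i\|^2 \le R\sum_{i=1}^R \|a_i\|^2$, can be spot-checked on random vectors:

```python
import random

rng = random.Random(1)
R, d = 5, 4
vecs = [[rng.uniform(-1.0, 1.0) for _ in range(d)] for _ in range(R)]
# LHS: squared norm of the sum of the R vectors
lhs = sum(s ** 2 for s in (sum(v[k] for v in vecs) for k in range(d)))
# RHS: R times the sum of the squared norms
rhs = R * sum(sum(x ** 2 for x in v) for v in vecs)
assert lhs <= rhs + 1e-12
```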
\begin{lemma} \label{lemma_kl_tadv}
Under Assumption~\ref{assumption:Smoothness},~\ref{assumption:data_lipschitz},~\ref{assumption:fl_system_train_test} and the condition $\eta_i \le \frac{1}{\beta}$, we have
\begin{equation}
D_{KL}(\mu_{\mathsf {t_{adv}}}\|\mu'_{\mathsf {t_{adv}}}) \le \frac{ 2R \sum_{i=1}^R \left(p_i \gamma_i \tau_i \eta_i \frac{{{q_B}_i}}{{{n_B}_i}} L_{\mathcal Z} \|\delta_i\| \right)^2 }{\sigma_{\mathsf {t_{adv}}}^2}.
\end{equation}
\end{lemma}
\begin{proof}
Plugging Lemma \ref{lm:delta s} and Lemma \ref{lm:batch l_z lipz} into Lemma \ref{lm:scaled_local_diff}, we have:
\begin{equation} \label{eq:tadv_global_model_diff}
\left\| w_{\mathsf {t_{adv}}} - w'_{\mathsf {t_{adv}}} \right\|^2 \le R \sum_{i=1}^R ( 2 p_i \gamma_i \tau_i \eta_i \frac{{{q_B}_i}}{{{n_B}_i}} L_{\mathcal Z} \|\delta_i\| ) ^2.
\end{equation}
Plugging Eq.~\ref{eq:tadv_global_model_diff} into Eq.~\ref{eq:kldiv_tadv_overview} completes the proof: the divergence of the noisy global model parameters between the benign and the backdoored training processes at round $\mathsf {t_{adv}}$ is bounded as claimed.
\end{proof}
\subsection{Analysis for $t>\mathsf {t_{adv}}$}
Now we focus on the contraction coefficient $\eta_{f}(K_t) $ for $t > \mathsf {t_{adv}}$.
\begin{lemma}\label{lm:eta_tv}
Based on Lemma~\ref{lemma:divergence_guassian} and~\ref{lm:two_point_tv}, under Assumption~\ref{assumption:fl_system_train_test}, we have
\begin{equation}
\eta_{TV}(K_t) \le 2\Phi \left (\frac{\rho_t }{\sigma_t}\right)-1.
\end{equation}
\end{lemma}
\begin{proof}[Proof]
\begin{align*}
&\eta_{TV}(K_t) := \sup \limits_{w_1,w_2 \in W } D_{TV}(K_t(w_1)\|K_t(w_2))\\
&\le \sup \limits_{w_1,w_2 \in W } D_{TV}\Bigg( \mathcal N\Big(\mathrm{Clip}_{\rho_t} (\Psi( w_1)),\sigma_t^2{\bf I} \Big) \| \mathcal N\Big(\mathrm{Clip}_{\rho_t} (\Psi( w_2)),\sigma_t^2{\bf I}\Big)\Bigg) \\
&= \sup \limits_{w_3,w_4 \in ball(\rho_t) } D_{TV}\Bigg( \mathcal N\Big( w_3,\sigma_t^2 {\bf I} \Big) \| \mathcal N\Big(w_4,\sigma_t^2\bf I\Big)\Bigg) \\
&= \sup \limits_{w_3,w_4 \in ball(\rho_t) } 2\Phi \left (\frac{\|w_3 - w_4\| }{2\sigma_t}\right)-1\\
&= 2\Phi \left (\frac{\rho_t }{\sigma_t}\right)-1. \tag*{ \textit{ $\triangleright$\ the norm of the clipped model parameters is bounded by $\rho_t$}}
\end{align*}
\end{proof}
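The closed form $2\Phi(\rho_t/\sigma_t)-1$ can be verified against a direct numerical integration of the total variation distance between two one-dimensional Gaussians whose means sit at the opposite ends of the clipping ball; the constants below are illustrative:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def tv_numeric(a, b, sigma, lo=-10.0, hi=10.0, n=100_001):
    """TV distance 0.5 * integral |p - q| dx between N(a, s^2) and N(b, s^2),
    computed with the trapezoid rule."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        p = math.exp(-((x - a) ** 2) / (2 * sigma ** 2))
        q = math.exp(-((x - b) ** 2) / (2 * sigma ** 2))
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * abs(p - q) * h
    return 0.5 * total / (sigma * math.sqrt(2 * math.pi))

rho, sigma = 1.0, 0.7
# worst case: means at the two ends of the clipping ball, i.e. distance 2*rho
closed_form = 2 * phi(rho / sigma) - 1
assert abs(tv_numeric(-rho, rho, sigma) - closed_form) < 1e-5
```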
Finally, we bound the divergence of the global model at round $T$. We restate Theorem~\ref{therom:divergence_round_T_main} here.
\begingroup
\def\thetheorem{\ref{therom:divergence_round_T_main}}
\begin{theorem}
When $\eta_i \le \frac{1}{\beta}$ and Assumptions~\ref{assumption:Smoothness},~\ref{assumption:data_lipschitz}, and~\ref{assumption:fl_system_train_test} hold, the KL divergence{} between $\mu(\mathcal{M}(D))$ and $\mu(\mathcal{M}(D'))$ with $\mu(w) = \mathcal N(w,{\sigma_T}^2\bf I)$ is bounded as:
\begin{align*}
D_{KL}( \mu(\mathcal{M}(D)) || \mu(\mathcal{M}(D')) ) \le \frac{ 2R\sum_{i=1}^R\left
(p_i \gamma_i \tau_i \eta_i \frac{{{q_B}_i}}{{{n_B}_i}} L_{\mathcal Z} \|\delta_i\| \right)^2 }{\sigma_{\mathsf {t_{adv}}}^2} \prod_{t=\mathsf {t_{adv}}+1}^{T} \left(2\Phi \left (\frac{\rho_t }{\sigma_{t}}\right)-1 \right).
\end{align*}
\end{theorem}
\addtocounter{theorem}{-1}
\endgroup
\begin{proof}[Proof]
\begin{align*}
&D_{KL}( \mu(\mathcal{M}(D)) || \mu(\mathcal{M}(D')) ) = D_{KL}(\mu_T || \mu_T' ) \\
&\le D_{KL}(\mu_{\mathsf {t_{adv}}} || \mu'_{\mathsf {t_{adv}}} )\prod_{t=\mathsf {t_{adv}}+1}^{T} \eta_{KL}(K_t) \tag*{ \textit{ $\triangleright$\ because of Eq.~\ref{eq_df_modelclosess}}}\\
&\le D_{KL}(\mu_{\mathsf {t_{adv}}} || \mu'_{\mathsf {t_{adv}}} )\prod_{t=\mathsf {t_{adv}}+1}^{T} \eta_{TV}(K_t) \tag*{ \textit{ $\triangleright$\ because of Lemma ~\ref{lm:contraction_TV_upper_bound}}}\\
& \le \frac{ 2R \sum_{i=1}^R \left
(p_i \gamma_i \tau_i \eta_i\left\| \mathcal{B}^i \right\| \right)^2 }{\sigma_{\mathsf {t_{adv}}}^2} \prod_{t=\mathsf {t_{adv}}+1}^{T} \left(2\Phi \left (\frac{\rho_t }{\sigma_{t}}\right)-1 \right) \tag*{ \textit{ $\triangleright$\ because of Lemma~\ref{lemma_kl_tadv} and \ref{lm:eta_tv}}}\\
& \le \frac{ 2R\sum_{i=1}^R\left
(p_i \gamma_i \tau_i \eta_i \frac{{{q_B}_i}}{{{n_B}_i}} L_{\mathcal Z} \|\delta_i\| \right)^2 }{\sigma_{\mathsf {t_{adv}}}^2} \prod_{t=\mathsf {t_{adv}}+1}^{T} \left(2\Phi \left (\frac{\rho_t }{\sigma_{t}}\right)-1 \right) \tag*{ \textit{ $\triangleright$\ because of Lemma~\ref{lm:batch l_z lipz}}}.
\end{align*}
\end{proof}
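A sketch of how the final bound can be evaluated numerically: the divergence injected at round $\mathsf{t_{adv}}$ is multiplied by a TV contraction coefficient for every later round. Every parameter value below is hypothetical:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def divergence_bound(R, p, gamma, tau, eta, q_over_n, L_Z, delta_norm,
                     sigma_adv, rhos, sigmas):
    """KL bound of the Theorem: initial divergence at round t_adv, shrunk by
    the per-round contraction coefficient 2*Phi(rho_t/sigma_t) - 1."""
    init = 2 * R * sum((p[i] * gamma[i] * tau[i] * eta[i] * q_over_n[i]
                        * L_Z * delta_norm[i]) ** 2 for i in range(R))
    init /= sigma_adv ** 2
    contraction = 1.0
    for rho, sig in zip(rhos, sigmas):
        contraction *= 2 * phi(rho / sig) - 1
    return init * contraction

# hypothetical parameter values, for illustration only
bound = divergence_bound(R=2, p=[0.1, 0.1], gamma=[5, 5], tau=[4, 4],
                         eta=[0.01, 0.01], q_over_n=[0.25, 0.25], L_Z=2.0,
                         delta_norm=[0.5, 0.5], sigma_adv=0.05,
                         rhos=[1.0] * 10, sigmas=[0.5] * 10)
assert 0 < bound < 1
```

Larger per-round noise $\sigma_t$ or tighter clipping $\rho_t$ shrinks each contraction factor and hence the final bound.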
\subsection{Proof of Lemma~\ref{lm:delta s and delta s-1} } \label{sec:proof_for_some_lemmas}
We first introduce a new lemma, which will be used to prove Lemma~\ref{lm:delta s and delta s-1}.
\begin{lemma}\label{lm:smoothness and convex loss}
Under Assumption~\ref{assumption:Smoothness} on convexity and smoothness, we have
\begin{equation}
\begin{aligned}
\left \| g_i\left(w_{\underline{s}}^i\right) -{g'}_i\left({w'}_{\underline{s}}^i\right) \right \|^2 \le 2 \beta \left \langle \Delta_{\underline{s}}^i, g_i\left(w_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right) \right \rangle
+ 2 \left \| {g'}_i\left({w'}_{\underline{s}}^i\right)-g_i\left({w'}_{\underline{s}}^i\right)\right \|^2.
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}
\begin{align*}
&\left \| g_i\left(w_{\underline{s}}^i\right) -{g'}_i\left({w'}_{\underline{s}}^i\right) \right \|^2 \\
&= \left \| \left[g_i\left(w_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right)\right] -\left[{g'}_i\left({w'}_{\underline{s}}^i\right)-g_i\left({w'}_{\underline{s}}^i\right)\right] \right \|^2 \\
&\le 2 \left \| g_i\left(w_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right) \right\|^2 + 2 \left \| {g'}_i\left({w'}_{\underline{s}}^i\right)-g_i\left({w'}_{\underline{s}}^i\right)\right \|^2 \\
&\le 2 \beta \left \langle \Delta_{\underline{s}}^i, g_i\left(w_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right) \right \rangle + 2 \left \| {g'}_i\left({w'}_{\underline{s}}^i\right)-g_i\left({w'}_{\underline{s}}^i\right)\right \|^2. \tag*{ \textit{ $\triangleright$\ because of Assumption ~\ref{assumption:Smoothness}}}
\end{align*}
\end{proof}
Next we provide the proof of Lemma~\ref{lm:delta s and delta s-1}.
\begin{proof}[Proof of Lemma~\ref{lm:delta s and delta s-1}]
When $\eta_i \le \frac{1}{\beta}$,
\begin{small}
\begin{align*}
&{\Delta_{{\underline{s}}+1}^i}^2 \triangleq \left \| w_{\underline{s}+1}^i - {w'}_{\underline{s}+1}^i \right \|^2 \\
&= \left \| (w_{\underline{s}}^i - {w'}_{\underline{s}}^i) - \eta_i\left[ g_i\left(w_{\underline{s}}^i\right) -{g'}_i\left({w'}_{\underline{s}}^i\right)\right] \right \|^2 \\
&= {\Delta_{\underline{s}}^i}^2 + \eta_i^2 \left \| g_i\left(w_{\underline{s}}^i\right) -{g'}_i\left({w'}_{\underline{s}}^i\right) \right \|^2 - 2\eta_i \left \langle w_{\underline{s}}^i - {w'}_{\underline{s}}^i , g_i\left(w_{\underline{s}}^i\right) -{g'}_i\left({w'}_{\underline{s}}^i\right) \right \rangle \\
&= {\Delta_{\underline{s}}^i}^2 + \eta_i^2 \left \| g_i\left(w_{\underline{s}}^i\right) -{g'}_i\left({w'}_{\underline{s}}^i\right) \right \|^2 + 2\eta_i \left \langle w_{\underline{s}}^i - {w'}_{\underline{s}}^i , {g'}_i\left({w'}_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right) \right \rangle
- 2\eta_i \left \langle w_{\underline{s}}^i - {w'}_{\underline{s}}^i , g_i\left(w_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right) \right \rangle \\
&\le {\Delta_{\underline{s}}^i}^2 + 2 \eta_i^2 \left \| {g'}_i\left({w'}_{\underline{s}}^i\right)-g_i\left({w'}_{\underline{s}}^i\right)\right \|^2 + 2\eta_i \left \langle w_{\underline{s}}^i - {w'}_{\underline{s}}^i, {g'}_i\left({w'}_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right) \right \rangle
+ (2 \beta \eta_i^2 -2\eta_i) \left \langle w_{\underline{s}}^i - {w'}_{\underline{s}}^i , g_i\left(w_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right) \right \rangle \tag*{ \textit{ $\triangleright$\ because of Lemma~\ref{lm:smoothness and convex loss}}} \\
&\le {\Delta_{\underline{s}}^i}^2 + 2 \eta_i^2 \left \| {g'}_i\left({w'}_{\underline{s}}^i\right)-g_i\left({w'}_{\underline{s}}^i\right)\right \|^2 + 2\eta_i \left \langle w_{\underline{s}}^i - {w'}_{\underline{s}}^i , {g'}_i\left({w'}_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right) \right \rangle \tag*{ \textit{ $\triangleright$\ because of $\eta_i \le \frac{1}{\beta}$ }} \\
&\le {\Delta_{\underline{s}}^i}^2 + 2 \eta_i^2 \left \| {g'}_i\left({w'}_{\underline{s}}^i\right)-g_i\left({w'}_{\underline{s}}^i\right)\right \|^2 + 2\eta_i \Delta_{\underline{s}}^i \left\| {g'}_i\left({w'}_{\underline{s}}^i\right) - g_i\left({w'}_{\underline{s}}^i\right) \right\| \tag*{ \textit{ $\triangleright$\ because of $ \langle a,b \rangle \le \|a\|\|b\| $ }} \\
&= {\Delta_{\underline{s}}^i}^2 + 2\eta_i \left\| \mathcal{B}^i \right\| \Delta_{\underline{s}}^i + 2 \eta_i^2 \left \| \mathcal{B}^i \right \|^2
\tag*{ \textit{ $\triangleright$\ because of the definition $\mathcal{B}^i \triangleq {g'}_i(w) - g_i(w) $ }}.
\end{align*}
\end{small}
\end{proof}
\subsection{Proof of Lemma~\ref{lm:L_z_for_multiclass_logistic_regression}} \label{sec:proof l_z}
We first restate our Lemma~\ref{lm:L_z_for_multiclass_logistic_regression} here and then provide the detailed proof.
\lemmalzforlogreg*
\begin{proof}
Given model parameters $W$ of one linear layer, data samples $z=\{x,y\}$ and $z'=\{x',y\}$, we denote their loss as $\ell(W;z)$ and $\ell(W;z')$, where $x \in \mathbb { R }^{1\times d_x}$, $W \in \mathbb { R }^{d_x\times C}$. $Y \in \mathbb { R }^{1\times C}$ is a one-hot vector for $C$ classes where $Y_i = \mathbb{1}\{i=y\}$.
For $x$, we denote $xW$ as the output of the linear layer, $P_i(x)= \mathrm{softmax}(xW)_i$ as the normalized probability for class $i$ (the output of the softmax function). The cross-entropy loss is calculated as
\begin{align}
\ell(x) = - \sum_i Y_i \log P_i(x) = - \sum_i Y_i \log \mathrm{softmax}(xW)_i.
\end{align}
We define $G\in \mathbb { R }^{d_x\times C}$ as the gradient for one sample:
\begin{align}
G(x)=\nabla \ell(W;\{x,y\})=\frac{d\ell}{dW}(x) = x^{\top}(P(x)-Y),
\end{align}
and, similarly,
\begin{align}
G(x') =\nabla \ell(W;\{x',y\}) =\frac{d\ell}{dW}(x') = x'^{\top}(P(x')-Y).
\end{align}
According to the mean value theorem \cite{rudin1976principles}, for a continuous vector-valued function $f:[a,b]\to\mathbb { R }^k$ differentiable on $(a,b)$, there exists $c \in (a,b)$ such that
\begin{align}
\frac{\|f(b)-f(a)\|}{b-a} \le \|f'(c)\|.
\end{align}
Assume $x$ is normalized so that $\|x\| \le 1$ (a common dataset pre-processing step). Defining $G_l(t) = G(x'+t(x-x')),\ t\in[0,1]$, the mean value theorem gives
\begin{align*}
\|G(x)-G(x')\| &= \left \|G_l(1)-G_l(0) \right \| \\
& \le \left\|\frac{dG_l}{dt}(t_0)\right \| (1-0) \\
& = \left\|\frac{dG}{dx}(\xi)\odot(x-x')\right \| \\
& \le \left\|\frac{dG}{dx}(\xi)\right\|\left \|x-x' \right \|
\end{align*}
where $\xi=x'+t_0(x-x'), t_0\in[0,1]$, $\frac{dG}{dx}(\xi)$ is a three-dimensional tensor and $\odot$ denotes the corresponding tensor contraction.
We reduce the computation to two-dimensional matrices for simplicity. Let $G_i$ denote the $i$-th column of the matrix $G$ (the gradient w.r.t.\ $W_i$). Let $\mathbf{1}_i$ denote the row vector whose $i$-th element is 1 and whose other elements are 0. We have
\begin{small}
\begin{align*}
&\|G(x)-G(x')\| \\
&\le \left \|\frac{dG}{dx}(\xi) \right \|\left \|x-x' \right \| \\
&= \sqrt{\sum_i^C \left \|\frac{dG_i}{dx}(\xi)\right \|^2}\left \|x-x' \right \| \\
& = \sqrt{\sum_i^C \left \| \frac{dx^{\top}(P_i-Y_i)}{dx}(\xi) \right \|^2}\left \|x-x' \right \| \tag*{ \textit{ $\triangleright$\ as $G_i(x) = x^\top(P_i(x)-Y_i)$ }} \\
& = \sqrt{\sum_i^C \left \|\frac{dx^{\top}}{dx}(\xi)(P_i-Y_i)+x^{\top}\frac{d(P_i-Y_i)}{dx}(\xi) \right \|^2}\left \|x-x' \right \| \\
& = \sqrt{\sum_i^C \left \|(P_i(\xi)-Y_i) I+x^{\top} (P_i(\xi)\mathbf{1}_i-P_i(\xi)P(\xi)) W^{\top} \right \|^2} \left \|x-x' \right \| \tag*{ \textit{ $\triangleright$\ as $\frac{d(P_i-Y_i)}{dx} = \frac{d\mathrm{softmax}(xW)_i}{dx} = (P_i\mathbf{1}_i-P_iP) W^{\top} $ }} \\
& \le \sqrt{\sum_i^C \|(P_i-Y_i)\|^2+2\|(P_i-Y_i)\| \|x^{\top} (P_i\mathbf{1}_i-P_iP) W^{\top}\|+\|x^{\top} (P_i\mathbf{1}_i-P_iP) W^{\top}\|^2}\left \|x-x' \right \| \tag*{ \textit{ $\triangleright$\ denote $P_i$ as $P_i(\xi)$ for simplicity} } \\
& \le \sqrt{\sum_i^C\|(P_i-Y_i)\|+2\|x^{\top} (P_i\mathbf{1}_i-P_iP) W^{\top}\|+\|x^{\top} (P_i\mathbf{1}_i-P_iP) W^{\top}\|^2}\left \|x-x' \right \| \tag*{ \textit{ $\triangleright$\ as $\|(P_i-Y_i)\| \le 1$ }} \\
& \le \sqrt{\sum_i^C \|(P_i-Y_i)\|+2P_i\|x\| \|(\mathbf{1}_i-P) W^{\top}\|+P_i^2\|x\|^2 \|(\mathbf{1}_i-P) W^{\top}\|^2}\left \|x-x' \right \| \\
& \le \sqrt{\sum_i^C \|(P_i-Y_i)\|+2P_i \| W\|+P_i^2 \|W\|^2}\left \|x-x' \right \| \tag*{ \textit{ $\triangleright$\ as $\|x\| \le 1$ and $0\le P_i \le 1$ }} \\
& \le \sqrt{\sum_i^C\|(P_i-Y_i)\|+2P_i\rho+P_i^2 \rho^2}\left \|x-x' \right \|, \tag*{ \textit{ $\triangleright$\ as $\|W\| \le \rho$ }} \\
& \le \sqrt{2+2\rho + \rho^2}\left \|x-x' \right \| .
\end{align*}
\end{small}
\end{proof}
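The Lipschitz constant $\sqrt{2+2\rho+\rho^2}$ can be spot-checked empirically for a small random one-layer softmax model, sampling inputs inside the unit ball as the derivation assumes $\|x\|\le 1$; the dimensions and weight ranges below are arbitrary:

```python
import math
import random

def softmax(a):
    m = max(a)
    e = [math.exp(v - m) for v in a]
    s = sum(e)
    return [v / s for v in e]

def ce_grad(W, x, y):
    """Cross-entropy gradient w.r.t. W for a one-layer model: G = x^T (P - Y)."""
    d, C = len(W), len(W[0])
    logits = [sum(x[k] * W[k][c] for k in range(d)) for c in range(C)]
    P = softmax(logits)
    return [[x[k] * (P[c] - (1.0 if c == y else 0.0)) for c in range(C)]
            for k in range(d)]

def fro(M):
    """Frobenius norm."""
    return math.sqrt(sum(v * v for row in M for v in row))

rng = random.Random(0)
d, C = 6, 4
W = [[rng.uniform(-0.3, 0.3) for _ in range(C)] for _ in range(d)]
rho = fro(W)
L_Z = math.sqrt(2 + 2 * rho + rho ** 2)

max_ratio = 0.0
for _ in range(100):
    # sample inputs with ||x|| <= 1, as assumed by the derivation
    x = [rng.random() / math.sqrt(d) for _ in range(d)]
    xp = [rng.random() / math.sqrt(d) for _ in range(d)]
    y = rng.randrange(C)
    gap = fro([[a - b for a, b in zip(r1, r2)]
               for r1, r2 in zip(ce_grad(W, x, y), ce_grad(W, xp, y))])
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, xp)))
    if dist > 1e-9:
        max_ratio = max(max_ratio, gap / dist)
assert max_ratio <= L_Z
```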
\section{Proofs of Parameter Smoothing}\label{sec_app_param_smothing}
In this section, we present our parameter smoothing framework for a general \textit{f}-divergence{} and give a closed-form certificate for the KL divergence{}, which corresponds to the proof of Theorem~\ref{theorem:param_smoothing}.
\subsection{General Framework for Robustness Certification}
Consider a classifier $h:\mathcal {(W,X)} \rightarrow \mathcal{Y} $. The output of the classifier depends on both the test input and the model parameters (i.e., model weights) of the classifier. In the testing phase, the model weights $w$ are fixed, just like $x_{test}$, so they can be seen as an argument of the classifier $h$.
For example, in a one-linear-layer model, $h (w; x_{test} )= \mathrm{softmax} (w \times x_{test})$, where $\times$ is the multiplication operation; in a one-conv-layer model, $h (w; x_{test} )= \mathrm{softmax} (w \circledast x_{test})$ where $\circledast$ is the convolution operation. In a model with multiple layers, the expression of model prediction $h (w; x_{test} )$ also holds, where $w$ consists of the weights from all layers.
To the best of our knowledge, this is the first work to study \textit{parameter} smoothing on $w$ rather than input smoothing on $x_{test}$.
We want to verify the robustness of the smoothed multi-class classifier. Recall that we smooth the classifier $h:\mathcal {(W,X)} \rightarrow \mathcal{Y} $ with a finite label set $ \mathcal{Y} $ using a smoothing measure $\mu: \mathcal{W} \mapsto \mathcal{P(W)}$. The resulting randomly smoothed classifier $h_s$ is
\begin{equation}
h_s(w;x_{test})= \arg \max_{c\in \mathcal{Y}} \mathbb{P}_{W\sim \mu(w)}[h(W;x_{test})=c]
\end{equation}
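In practice, $h_s$ is approximated by Monte Carlo sampling over the smoothing measure; a minimal sketch with a hypothetical toy base classifier (the sign of an inner product):

```python
import random
from collections import Counter

def smoothed_predict(h, w, x_test, sigma, n=1000, seed=0):
    """Monte Carlo approximation of h_s: sample parameters W ~ N(w, sigma^2 I)
    and return the majority vote of the base classifier's predictions."""
    rng = random.Random(seed)
    votes = Counter()
    for _ in range(n):
        w_s = [wi + rng.gauss(0.0, sigma) for wi in w]
        votes[h(w_s, x_test)] += 1
    return votes.most_common(1)[0][0]

# toy base classifier: sign of <w, x_test>
h = lambda w, x: int(sum(a * b for a, b in zip(w, x)) > 0)
assert smoothed_predict(h, [1.0, 1.0], [2.0, 2.0], sigma=0.1) == 1
```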
Our goal is to certify that the prediction $ h_s(w;x_{test})$ is robust to model parameters perturbations of size at most $\epsilon$ measured by some distance function $d$, i.e.,
\begin{equation} \label{eq:goal}
h_s(w';x_{test})= h_s(w;x_{test}) \text{ $\forall{w'}$ such that $d(w,w')\le \epsilon$}
\end{equation}
We assume $\mathcal W \subseteq \mathbb{R}^d $ (a $d$-dimensional model parameter space). Our framework involves a reference measure $\rho=\mu(w)$, the set of perturbed distributions $\mathcal{D}_{w,\epsilon}=\{\mu(w'): d(w,w')\le \epsilon \}$, and a set of specifications $\phi: \mathcal {(W,X)} \rightarrow \mathcal{Z} \subseteq \mathbb{R}$. Specifically, let $c= h_s(w;x_{test})$. Since we work with multi-class classification, for every pair of classes $\{c, c'\}$ with $c'\in \mathcal{Y} \setminus \{c\}$ we need a specification $\phi$, a generic function over the model parameter space whose robustness we want to verify. Following \cite{dvijotham2020framework}, for every $c'\in \mathcal{Y} \setminus \{c\}$, we define a specification $\phi_{c, c'}: \mathcal {(W,X)} \mapsto \{-1, 0, +1\}$ as follows:
\begin{equation} \label{eq:specification_c_c'}
\phi_{c, c'} (w) =
\begin{cases}
+1 & \text{if $h(w;x_{test})=c$}\\
-1 & \text{if $h(w;x_{test})=c'$}\\
0 & \text{otherwise}
\end{cases}
\end{equation}
where we denote $\phi_{c, c'} (w;x_{test})$ as $\phi_{c, c'} (w)$ for simplicity.
\begin{proposition} \label{eq:goal_specification}
The smoothed classifier $h_s$ is robustly certified, i.e., Eq.~\ref{eq:goal} holds, if and only if for every $c'\in \mathcal{Y} \setminus \{c\}$, $\phi_{c, c'}$ is robustly certified at $\mu(w)$ w.r.t $\mathcal{D}_{w,\epsilon}$.
Verifying that a given specification $\phi$ is robustly certified is equivalent to checking if the optimal value of the following optimization problem is non-negative:
\begin{equation}
OPT(\phi, \rho, \mathcal{D}_{w,\epsilon}) := \min_{\nu \in \mathcal{D}_{w,\epsilon}} \mathbb{E}_{W'\sim \nu}(\phi(W'))
\end{equation}
\end{proposition}
\begin{proof}
Note that for any perturbed distribution $\nu \in \mathcal{D}_{w,\epsilon}$, according to the definition of expectation and Eq.~\ref{eq:specification_c_c'}, we have
\begin{equation}
\begin{aligned}
\mathbb{E}_{W' \sim \nu}[\phi_{c, c'}(W')] = \mathbb{P}_{W' \sim \nu}[h(W'; x_{test})=c] - \mathbb{P}_{W' \sim \nu}[h(W'; x_{test})=c'].
\end{aligned}
\end{equation}
Therefore, $\mathbb{E}_{W' \sim \nu}[\phi_{c, c'} (W')] \ge 0$ for all $c'\in \mathcal{Y} \setminus \{c\}$ is equivalent to $c = \arg \max_{y\in \mathcal{Y}} \mathbb{P}_{W'\sim \nu}[h(W'; x_{test})=y]$. For $\nu = \mu(w')$, this means that $h_s(w'; x_{test}) = c$.
In other words, $\mathbb{E}_{W' \sim \nu}[\phi_{c, c'}(W')] \ge 0$ for all $c'\in \mathcal{Y} \setminus \{c\}$ and all $\nu = \mu(w') \subset \mathcal{D}_{w,\epsilon}$ if and only if $h_s(w';x_{test})=c$ for all $w'$ such that $d(w,w')\le \epsilon$, proving the required robustness certificate.
\end{proof}
Then we define the certification problem\footnote{It is called information-limited robust certification in \cite{dvijotham2020framework} for input smoothing.}:
\begin{definition}\label{def:the class of specifications}
Given a reference distribution $\rho \in \mathcal{P(W)}$, probabilities $p_A$ ,$p_B$ that satisfy $p_A, p_B \ge 0 $, $p_A + p_B \le 1$,
we define the class of specifications $S$:
\begin{equation}
S = \{\phi : \mathcal {(W,X)} \mapsto \{-1, 0, +1\} \text{ s.t. } \mathbb{P}_{W\sim \rho}[\phi(W) = +1] \ge p_A,
\mathbb{P}_{W\sim \rho}[\phi(W) = -1] \le p_B \}
\end{equation}
\end{definition}
Given the above definition of $S$, we can rewrite Proposition \ref{eq:goal_specification} as:
\begin{proposition} \label{proposition_S}
The smoothed classifier $h_s$ is robustly certified, i.e., Eq.~\ref{eq:goal} holds, if and only if $S$ is robustly certified at $\mu(w)$ w.r.t $\mathcal{D}_{w,\epsilon}$.
Verifying that $S$ is robustly certified is equivalent to checking if the condition $\mathbb{E}_{W' \sim \nu}[\phi (W')] \ge 0$ holds for all $\nu \in \mathcal{D}_{w,\epsilon}$ and $\phi \in S$.
\end{proposition}
We need to provide guarantees that hold simultaneously over a whole class of specifications ($\phi_{c, c'} $ for all $c'\in \mathcal{Y} \setminus \{c\}$). In fact, $p_A$ can be seen as the ``votes'' for the top-one class $c$, and $p_B$ can be seen as the ``votes'' for the runner-up class.
We note that the function $f(\cdot)$ used in the \textit{f}-divergence{} is convex. As shown in \cite{dvijotham2020framework} (but for input smoothing), for perturbation sets $\mathcal{D}_{w,\epsilon}=\{\mu(w'): d(w,w')\le \epsilon \}=\{\nu: D_f(\nu\| \mu(w))\le \epsilon \}$ specified by an \textit{f}-divergence{} bound $\epsilon$, this certification task can be solved efficiently using convex optimization.
\begin{theorem}\label{theorem:nonnegative_opt}
Let $D_f$ be \textit{f}-divergence{}, $\epsilon$ be the divergence constraint, $S$, $p_A , p_B$ be as in Definition~\ref{def:the class of specifications}.
The smoothed classifier $h_s$ is robustly certified at reference distribution $\rho$ with respect to $\mathcal{D}_{w,\epsilon}=\{\nu: D_f(\nu\| \rho)\le \epsilon \}$ if and only if the optimal value of the following convex optimization problem is non-negative:
\begin{equation}
\begin{aligned}
\max_{\lambda \ge 0, \kappa} \kappa - \lambda \epsilon
- p_A f_\lambda^*(\kappa - 1)
- p_B f_\lambda^*(\kappa + 1)
- (1- p_A - p_B) f_\lambda^*(\kappa) \ge 0
\end{aligned}
\end{equation}
\end{theorem}
\begin{proof}[Proof]
We prove the theorem according to Proposition~\ref{proposition_S}.
Let $\rho(W)$ be the clean model parameter distribution, $\nu(W)$ the perturbed model parameter distribution, and $r(W)=\frac{\nu(W)}{\rho(W)}$ the likelihood ratio.
We have
\begin{equation}
\begin{aligned}
\mathbb{E}_{W\sim \nu} [\phi(W)] &= \mathbb{E}_{W\sim \rho}[r(W)\phi(W)],\\
D_f(\nu \|\rho) &= \mathbb{E}_{W\sim \rho}[f(r(W))], \\
\mathbb{E}_{W \sim \rho}[r(W)]&=1.
\end{aligned}
\end{equation}
The third condition is obtained using the fact that $\nu$ is a probability measure. The optimization over $\nu$, which is equivalent to optimizing over $r$, can be written as
\begin{equation} \label{eq: opt-problem}
\begin{aligned}
\min_{r\ge 0} &~ \mathbb{E}_{W\sim \rho}[r(W)\phi(W)] \\
s.t. &~\mathbb{E}_{W\sim \rho}[f(r(W))]\le \epsilon, \mathbb{E}_{W \sim \rho}[r(W)]=1
\end{aligned}
\end{equation}
We solve the optimization using Lagrangian duality as follows. We first dualize the constraints on $r$ \cite{dvijotham2020framework} to obtain
\begin{equation} \label{eq:middle-dualize-problem}
\begin{aligned}
& \min_{r\ge 0} \mathbb{E}_{W\sim \rho}[r(W)\phi(W)] + \lambda (\mathbb{E}_{W\sim \rho}[f(r(W))] - \epsilon) + \kappa ( 1- \mathbb{E}_{W \sim \rho}[r(W)])\\
&= \min_{r\ge 0} \mathbb{E}_{W\sim \rho}[r(W)\phi(W)+ \lambda f(r(W)) - \kappa r(W) ] + \kappa - \lambda \epsilon \\
&= \kappa - \lambda \epsilon - \mathbb{E}_{W\sim \rho} [\max_{r\ge 0} \kappa r(W) - r(W)\phi(W)-\lambda f(r(W)) ]\\
&= \kappa - \lambda \epsilon - \mathbb{E}_{W\sim \rho} [\max_{r\ge 0} r(W) (\kappa - \phi(W))-\lambda f(r(W)) ]\\
&= \kappa - \lambda \epsilon - \mathbb{E}_{W\sim \rho} [\max_{r\ge 0} r(W) (\kappa - \phi(W))- f_\lambda(r(W)) ]\\
&= \kappa - \lambda \epsilon - \mathbb{E}_{W\sim \rho} [ f_\lambda^*(\kappa - \phi(W)) ]
\end{aligned}
\end{equation}
where $f_\lambda^*(u)= \max_{v\ge 0} (uv-f_\lambda(v)), f_\lambda(v) = \lambda f(v) $.
By strong duality, maximizing the final expression in Eq.~\ref{eq:middle-dualize-problem} with respect to $\lambda \ge 0, \kappa$ achieves the optimal value in Eq.~\ref{eq: opt-problem}. If the optimal value is non-negative, the specification $S$ is robustly certified.
\begin{equation}
\begin{aligned}
\max_{\lambda \ge 0, \kappa} \kappa - \lambda \epsilon - \mathbb{E}_{W\sim \rho} [ f_\lambda^*(\kappa - \phi(W)) ]
\end{aligned}
\end{equation}
We can plug in $p_A,p_B$ defined in Definition~\ref{def:the class of specifications}:
\begin{equation}
\begin{aligned}
\max_{\lambda \ge 0, \kappa} \kappa - \lambda \epsilon
- p_A f_\lambda^*(\kappa - 1)
- p_B f_\lambda^*(\kappa + 1)
- ( 1- p_A - p_B) f_\lambda^*(\kappa)
\end{aligned}
\end{equation}
where $p_A = \mathbb{P}_{W\sim \rho}[\phi(W) = +1]$,
$p_B = \mathbb{P}_{W\sim \rho}[\phi(W) = -1]$,
$ 1- p_A - p_B = \mathbb{P}_{W\sim \rho}[\phi(W) =0] $.
\end{proof}
\begin{remark}
Note that our differences from \cite{dvijotham2020framework} are in two aspects: (1)~Our certification is with respect to the smoothing scheme on model parameters $W$; (2)~We concretize the corresponding Theorem 2 in \cite{dvijotham2020framework} by the explicit constraints on $p_A$, $p_B$.
\end{remark}
\subsection{Closed-form Certificate for KL Divergence}
We instantiate Theorem~\ref{theorem:nonnegative_opt} with the KL divergence{}.
\begin{lemma}\label{lemma:KL closed-form certificate}
Let $D_{KL}$ be the KL divergence{}, $\epsilon$ be the divergence constraint, $S$, $p_A , p_B$ be as in Definition~\ref{def:the class of specifications}.
The smoothed classifier $h_s$ is robustly certified at reference distribution $\rho$ with respect to $\mathcal{D}_{w,\epsilon}=\{\nu: D_{KL}(\nu\| \rho)\le \epsilon \}$ if and only if:
\begin{equation}
\begin{aligned}
\epsilon \le - \log \Big(1-(\sqrt{p_A} - \sqrt{p_B})^2\Big)
\end{aligned}
\end{equation}
\end{lemma}
\begin{proof}[Proof for Lemma~\ref{lemma:KL closed-form certificate}]
The function $f(u) = u\log u$ for the KL divergence{} is convex with $f(1)=0$; we then have
$$
f_\lambda^*(u)= \max_{v\ge 0} (uv- \lambda f(v)) = \max_{v\ge 0}(uv- \lambda v \log(v)).
$$
Setting the derivative with respect to $v$ to 0 and solving for $v$, we obtain
$v = \exp \left(\frac{u-\lambda}{\lambda} \right)$ for $\lambda > 0 $.
So we have
\begin{equation}
f_\lambda^*(u) = \lambda \exp \left(\frac{u}{\lambda}-1 \right).
\end{equation}
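The conjugate just derived can be cross-checked against a brute-force grid maximization of $uv-\lambda v\log v$; the test points below are arbitrary:

```python
import math

def f_lambda_star(u, lam):
    """Closed form derived above: f*_lam(u) = lam * exp(u/lam - 1)."""
    return lam * math.exp(u / lam - 1.0)

def f_lambda_star_grid(u, lam, v_max=20.0, n=200_000):
    """Brute-force max_{v >= 0} (u*v - lam*v*log v) on a grid
    (the limit v -> 0 gives objective 0, used as the initial best)."""
    best = 0.0
    for i in range(1, n + 1):
        v = v_max * i / n
        best = max(best, u * v - lam * v * math.log(v))
    return best

for u, lam in [(0.3, 1.0), (-0.5, 2.0), (1.2, 0.8)]:
    assert abs(f_lambda_star(u, lam) - f_lambda_star_grid(u, lam)) < 1e-3
```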
Suppose we have a bound on the KL divergence $D_f(\nu \|\rho)\le \epsilon$, then we want that the optimal certificate is non-negative:
\begin{equation} \label{eq:opt_certifate_kl}
\begin{aligned}
\max_{\lambda > 0, \kappa} \Bigg( \kappa - \lambda \epsilon - p_A \lambda \exp \left(\frac{\kappa - 1}{\lambda}-1 \right) - p_B \lambda \exp \left(\frac{\kappa + 1}{\lambda}-1 \right) - (1-p_A-p_B) \lambda \exp \left(\frac{\kappa}{\lambda}-1 \right) \Bigg)\ge 0.
\end{aligned}
\end{equation}
Substituting $y=\kappa/\lambda$ and $z= \frac{1}{\lambda}$ (so $z>0$),
we can rewrite Eq.~\ref{eq:opt_certifate_kl} as:
\begin{equation} \label{eq:opt_certifate_kl_substitute}
\begin{aligned}
\max_{z > 0, y } \Bigg( \frac{1}{z} \Big( y - \epsilon - p_A \exp( y - z -1) - p_B \exp(y + z -1) - (1-p_A-p_B) \exp( y -1) \Big) \Bigg)\ge 0.
\end{aligned}
\end{equation}
Because $\frac{1}{z}$ is positive, we can drop the factor $\frac{1}{z}$ and rewrite our goal as:
\begin{equation} \label{eq:opt_certifate_kl_substitute_simple}
\begin{aligned}
\max_{z > 0, y } \Bigg( y - \epsilon - p_A \exp( y - z -1) - p_B \exp(y + z -1) - (1-p_A-p_B) \exp( y -1) \Bigg)\ge 0.
\end{aligned}
\end{equation}
Setting the derivative of the LHS with respect to $z$ to 0 and solving for $z$, we obtain
\begin{equation}
\begin{aligned}
p_A \exp( y - z -1) - p_B \exp(y + z -1) &=0\\
z &= \log(\sqrt{\frac{p_A}{p_B}}).
\end{aligned}
\end{equation}
Thus the LHS of Eq.~\ref{eq:opt_certifate_kl_substitute_simple} reduces to
\begin{equation}
\begin{aligned}
\max_{y } \Bigg( y - \epsilon- \Big(1-(\sqrt{p_A} - \sqrt{p_B})^2\Big) \exp( y -1) \Bigg).
\end{aligned}
\end{equation}
Setting the derivative with respect to $y$ to 0 and solving for $y$, we obtain
\begin{equation}
\begin{aligned}
1 - \Big(1-(\sqrt{p_A} - \sqrt{p_B})^2\Big) \exp( y -1) &= 0\\
y&= 1- \log \Big(1-(\sqrt{p_A} - \sqrt{p_B})^2\Big).
\end{aligned}
\end{equation}
Now the LHS of Eq.~\ref{eq:opt_certifate_kl_substitute_simple} reduces to
\begin{equation}
\begin{aligned}
- \log \Big(1-(\sqrt{p_A} - \sqrt{p_B})^2\Big) - \epsilon.
\end{aligned}
\end{equation}
For this quantity to be non-negative, we need
\begin{equation}
\begin{aligned}
\epsilon \le - \log \Big(1-(\sqrt{p_A} - \sqrt{p_B})^2\Big).
\end{aligned}
\end{equation}
This completes the proof.
\end{proof}
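The closed form above is easy to check numerically. The sketch below (the probability values are arbitrary and purely illustrative) evaluates the objective of Eq.~\ref{eq:opt_certifate_kl_substitute_simple} at the stationary point $(y^*, z^*)$ derived in the proof, confirms that at $\epsilon = - \log \big(1-(\sqrt{p_A} - \sqrt{p_B})^2\big)$ the certificate is exactly tight, and verifies via a coarse grid search that no other $(y,z)$ does better (the objective is concave in $(y,z)$):

```python
import math

def certificate_gap(p_a, p_b, eps, y, z):
    """LHS of the simplified certificate condition, for dual variables (y, z)."""
    return (y - eps
            - p_a * math.exp(y - z - 1)
            - p_b * math.exp(y + z - 1)
            - (1 - p_a - p_b) * math.exp(y - 1))

p_a, p_b = 0.7, 0.2              # arbitrary illustrative class probabilities
eps_star = -math.log(1 - (math.sqrt(p_a) - math.sqrt(p_b)) ** 2)

# Optimizers derived in the proof.
z_star = math.log(math.sqrt(p_a / p_b))
y_star = 1 - math.log(1 - (math.sqrt(p_a) - math.sqrt(p_b)) ** 2)

# At (y*, z*) with eps = eps_star the certificate is exactly tight.
assert abs(certificate_gap(p_a, p_b, eps_star, y_star, z_star)) < 1e-12

# The objective is concave in (y, z), so a coarse grid search around the
# stationary point should never beat it.
best = max(certificate_gap(p_a, p_b, eps_star, y_star + 0.01 * i, z_star + 0.01 * j)
           for i in range(-200, 201)
           for j in range(-200, 201)
           if z_star + 0.01 * j > 0)
assert best <= 1e-12
```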
\begin{remark}
The key technical points are the following: 1) We multiply both sides of Eq.~\ref{eq:opt_certifate_kl_substitute} by $z$ to obtain Eq.~\ref{eq:opt_certifate_kl_substitute_simple}; otherwise the derivative of the LHS of Eq.~\ref{eq:opt_certifate_kl_substitute} cannot be calculated directly. Moreover, the substitution $y=\kappa/\lambda$, $z= \frac{1}{\lambda}$ makes the optimization problem much easier to solve.
2) \cite{dvijotham2020framework} does not directly provide a proof for the KL divergence: they prove the certificate for the R\'enyi divergence and then regard KL as a special case of the R\'enyi divergence.
\end{remark}
Finally, we restate Theorem~\ref{theorem:param_smoothing} here.
\begingroup
\def\thetheorem{\ref{theorem:param_smoothing}}
\begin{theorem}
Let $h_s$ be defined as in Eq.~\ref{eq:define_smoothed_cls}. Suppose $c_A \in \mathcal{Y} $ and $\underline{p_A}, \overline{p_B} \in [0,1]$ satisfy
\begin{equation}
{H_s^{c_A}}(w';x_{test}) \ge \underline{p_A} \ge \overline{p_B} \ge \max_{c\ne c_A} {H_s^{c}}(w';x_{test}), \nonumber
\end{equation}
then ${h_s}(w';x_{test}) = {h_s}(w;x_{test}) = c_A$ for all ${w}$ such that $D_{KL}(\mu(w),\mu(w'))\le \epsilon$, where
\begin{equation}
\begin{aligned}
\epsilon = - \log \Big(1-(\sqrt{\underline{p_A}} - \sqrt{\overline{p_B}})^2\Big) \nonumber
\end{aligned}
\end{equation}
\end{theorem}
\addtocounter{theorem}{-1}
\endgroup
\begin{proof}
We use Lemma~\ref{lemma:KL closed-form certificate} to prove Theorem~\ref{theorem:param_smoothing}.
In practice, since the server does not know whether the global model in the current FL system is poisoned, we assume the model is already backdoored and derive the condition under which its prediction is certifiably consistent with the prediction of the clean model. Therefore, the reference distribution is $\rho = \mu(w')$ and $\nu = \mu(w)$. Moreover, ${H_s^{c_A}}(w';x_{test}) \ge \underline{p_A}$ is equivalent to $ \mathbb{P}_{W\sim \rho}[\phi(W) = +1] \ge p_A$, and $ \max_{c\ne c_A} {H_s^{c}}(w';x_{test}) \le \overline{p_B} $ is equivalent to $\mathbb{P}_{W\sim \rho}[\phi(W) = -1] \le p_B$. Rewriting Lemma~\ref{lemma:KL closed-form certificate} then yields Theorem~\ref{theorem:param_smoothing}.
\end{proof}
\end{document}
\begin{document}
\begin{center}
{\bf{THE NONCOMMUTATIVE $\ell_1-\ell_2$ INEQUALITY FOR HILBERT C*-MODULES AND THE EXACT CONSTANT}}\\
K. MAHESH KRISHNA AND P. SAM JOHNSON \\
Department of Mathematical and Computational Sciences\\
National Institute of Technology Karnataka (NITK), Surathkal\\
Mangaluru 575 025, India \\
Emails: [email protected], [email protected]
Date: \today\\
\end{center}
\hrule
\textbf{Abstract}: Let $\mathcal{A}$ be a unital C*-algebra. The theory of Hilbert C*-modules then tells us that
\begin{align*}
\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\leq \sqrt{n} \left(\sum_{i=1}^{n}a_ia_i^*\right)^\frac{1}{2}, \quad \forall n \in \mathbb{N}, \forall a_1, \dots, a_n \in \mathcal{A}.
\end{align*}
By modifying arguments of Botelho-Andrade, Casazza, Cheng, and Tran given in 2019, for a certain tuple $x=(a_1, \dots, a_n) \in \mathcal{A}^n$, we give a method to compute a positive element $c_x$ in the C*-algebra $\mathcal{A}$ such that the equality
\begin{align*}
\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}=c_x \sqrt{n} \left(\sum_{i=1}^{n}a_ia_i^*\right)^\frac{1}{2}
\end{align*}
holds. We give an application involving the integral of G. G. Kasparov. We also derive the formula for the exact constant in the continuous $\ell_1-\ell_2$ inequality.\\
\textbf{Keywords}: C*-algebra, Hilbert C*-module, Hilbert space.\\
\textbf{Mathematics Subject Classification (2020)}: 46L05, 46L08, 46C05.
\section{Introduction}
Let $\mathbb{K}=\mathbb{C}$ or $\mathbb{R}$ and $x \in \mathbb{K}^n$. The universally known $\ell_1-\ell_2$ inequality for Hilbert spaces states that $\|x\|_1\leq \sqrt{n}\|x\|_2.$ In 2019, Botelho-Andrade, Casazza, Cheng, and Tran
\cite{BOTELHOCASAZZACHENGTRAN} gave a characterization which allows one to compute a constant $c_x$, for a given $x$, such that $\|x\|_1= c_x \sqrt{n}\|x\|_2.$ We first recall this result.
\begin{definition}\cite{BOTELHOCASAZZACHENGTRAN}\label{CASAZZADEFINITION}
A vector $x=\frac{1}{\sqrt{n}}(c_1, \dots, c_n) \in \mathbb{K}^n$ is said to be a constant modulus vector if $|c_i|=1$, for all $i=1,\dots, n$.
\end{definition}
\begin{theorem}\cite{BOTELHOCASAZZACHENGTRAN}\label{CASAZZARESULT}
Let $x=(a_1, \dots, a_n) \in \mathbb{K}^n$. The following are equivalent.
\begin{enumerate}[\upshape(i)]
\item We have
\begin{align*}
\|x\|_1=\left(1-\frac{c_x}{2}\right) \sqrt{n} \|x\|_2.
\end{align*}
\item We have
\begin{align*}
\sum_{i=1}^{n}\left|\frac{|a_i|}{\|x\|_2}-\frac{1}{\sqrt{n}}\right|^2=c_x.
\end{align*}
\item The infimum of the distance from $\frac{x}{\|x\|_2}$ to the constant modulus vectors is $\sqrt{c_x}$.
\end{enumerate}
In particular,
\begin{align*}
\|x\|_1\leq \sqrt{s}\|x\|_2 \iff \left(1-\frac{c_x}{2}\right) \sqrt{n} \leq \sqrt{s} \iff 1-\frac{c_x}{2} \leq \sqrt{\frac{s}{n}} .
\end{align*}
\end{theorem}
Theorem \ref{CASAZZARESULT} says that as long as we have an equality connecting the one-norm and the two-norm, the constant can be determined using the two-norm and the dimension of the space. Further, it also helps to find the distance from $\frac{x}{\|x\|_2}$ to certain types of vectors (constant modulus vectors). This result has found uses in nonlinear diffusion and diffusion state distances \cite{MAGGIONIMURPHY, COWENDEVKOTA}. A variation of Theorem \ref{CASAZZARESULT} which concerns subspaces is the following.
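The equivalence of (i) and (ii) in Theorem \ref{CASAZZARESULT} amounts to the algebraic identity $c_x = 2 - \frac{2\|x\|_1}{\sqrt{n}\|x\|_2}$, which can be confirmed numerically. The following short Python sketch (the test vector is arbitrary and purely illustrative) computes $c_x$ from condition (ii) and verifies the exact-constant equality in condition (i):

```python
import math

x = [3.0, -1.0, 2.0, 0.5]        # an arbitrary nonzero vector in R^4
n = len(x)
norm2 = math.sqrt(sum(a * a for a in x))
norm1 = sum(abs(a) for a in x)

# c_x computed from condition (ii) of the theorem.
c_x = sum((abs(a) / norm2 - 1 / math.sqrt(n)) ** 2 for a in x)

# Condition (i): the exact constant in the l1-l2 inequality.
assert abs(norm1 - (1 - c_x / 2) * math.sqrt(n) * norm2) < 1e-12
```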
\begin{theorem}\cite{BOTELHOCASAZZACHENGTRAN}\label{CASAZZAPRO}
Let $W$ be a subspace of $\mathbb{K}^n$ and let $ P:\mathbb{K}^n \to W$ be the orthogonal projection onto $W$. Then the following are equivalent.
\begin{enumerate}[\upshape(i)]
\item For every unit vector $x \in W$,
$
\|x\|_1\leq \left(1-\frac{c_x}{2}\right)\sqrt{n}.
$
\item The distance of any unit vector in $ W$ to any constant modulus vector $ x \in W$ is greater than or equal to $\sqrt{c_x}$.
\item For every constant modulus vector $x \in W$,
$
\|Px\|_2\leq 1-\frac{c_x}{2}.
$
\end{enumerate}
\end{theorem}
We organize this paper as follows. In Section \ref{MODULES}, we obtain a result (Theorem \ref{GENERAL}) in the context of Hilbert C*-modules, which is similar to the first two implications of Theorem \ref{CASAZZARESULT}. A partial result (Proposition \ref{PREVIOUS}) is obtained which corresponds to (iii) in Theorem \ref{CASAZZARESULT}. In Section \ref{CONTINUOUSSECION}, we derive results similar to Theorems \ref{CASAZZARESULT} and \ref{CASAZZAPRO}, namely Theorems \ref{IMP} and \ref{PRETHM}, respectively, for the function space $\mathcal{L}^2(X)$ whenever $\mu(X)<\infty$.
\section{The noncommutative $\ell_1-\ell_2$ inequality for Hilbert C*-modules and the exact constant}\label{MODULES}
Let $\mathcal{A}$ be a unital C*-algebra. Then the space $\mathcal{A}^n$ becomes a (left) Hilbert C*-module over the C*-algebra $\mathcal{A}$ w.r.t. the inner product
\begin{align*}
\langle x, y\rangle\coloneqq \sum_{i=1}^{n}a_ib_i^*, \quad \forall x=(a_1, \dots, a_n), y=(b_1, \dots, b_n) \in \mathcal{A}^n
\end{align*}
and the norm
\begin{align*}
\|x\|\coloneqq \|\langle x, x\rangle\|^\frac{1}{2}=\left\|\sum_{i=1}^{n}a_ia_i^*\right\|^\frac{1}{2}, \quad \forall x=(a_1, \dots, a_n) \in \mathcal{A}^n
\end{align*}
(see \cite{LANCE, PASCHKE} for Hilbert C*-modules).
Let $a_1, \dots, a_n \in \mathcal{A}$ and let
\begin{align*}
x=((a_1a_1^*)^\frac{1}{2}, \dots, (a_na_n^*)^\frac{1}{2}), \quad y=(1, \dots, 1)\in \mathcal{A}^n.
\end{align*}
By applying the Cauchy-Schwarz inequality in Hilbert C*-modules (Proposition 1.1 in \cite{LANCE}) to this pair we get
\begin{align*}
\left(\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right)^2\leq n \sum_{i=1}^{n}a_ia_i^*, \quad \forall n \in \mathbb{N}, \forall a_1, \dots, a_n \in \mathcal{A}.
\end{align*}
By taking the C*-algebraic square root (see Theorem 1.4.11 in \cite{LIN}) we get
\begin{align}\label{ELL1TWO}
\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\leq \sqrt{n} \left(\sum_{i=1}^{n}a_ia_i^*\right)^\frac{1}{2}, \quad \forall n \in \mathbb{N}, \forall a_1, \dots, a_n \in \mathcal{A}.
\end{align}
We call Inequality (\ref{ELL1TWO}) the noncommutative $\ell_1-\ell_2$ inequality for Hilbert C*-modules. A standard result in C*-algebra theory is that an element $a\in \mathcal{A}$ is positive if and only if $a=bb^*$ for some $b\in \mathcal{A}$. Thus Inequality (\ref{ELL1TWO}) can also be written as
\begin{align*}
\sum_{i=1}^{n}a_ia_i^*\leq \sqrt{n} \left(\sum_{i=1}^{n}(a_ia_i^*)^2\right)^\frac{1}{2}, \quad \forall n \in \mathbb{N}, \forall a_1, \dots, a_n \in \mathcal{A}.
\end{align*}
Note that Inequality (\ref{ELL1TWO}) is the $\ell_1-\ell_2$ inequality for Hilbert spaces whenever the C*-algebra is the field of scalars.
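Inequality (\ref{ELL1TWO}) can be tested numerically in the C*-algebra of $d \times d$ complex matrices, where the order is the Loewner order on Hermitian matrices. The following Python sketch (using NumPy; the matrices are random and purely illustrative) checks that the difference of the two sides has nonnegative spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)

def psd_sqrt(a):
    """Positive square root of a Hermitian PSD matrix via spectral decomposition."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

d, n = 4, 5                      # arbitrary illustrative sizes
mats = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
        for _ in range(n)]

lhs = sum(psd_sqrt(a @ a.conj().T) for a in mats)
rhs = np.sqrt(n) * psd_sqrt(sum(a @ a.conj().T for a in mats))

# In M_d(C) positivity is the Loewner order, so the inequality says
# rhs - lhs has nonnegative spectrum (up to rounding).
assert np.linalg.eigvalsh(rhs - lhs).min() > -1e-9
```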
\begin{theorem}\label{GENERAL}
Let $x=(a_1, \dots, a_n) \in \mathcal{A}^n$ be such that $\langle x, x \rangle $ is invertible. The following are equivalent.
\begin{enumerate}[\upshape(i)]
\item We have
\begin{align*}
\left(\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right)\langle x, x \rangle^\frac{1}{2}+\langle x, x \rangle^\frac{1}{2}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}=\sqrt{n}\langle x, x \rangle^\frac{1}{2}(2-c_x)\langle x, x \rangle^\frac{1}{2}.
\end{align*}
\item We have
\begin{align*}
\sum_{i=1}^{n}\left(\langle x, x \rangle^\frac{-1}{2}(a_ia_i^*)^\frac{1}{2}-\frac{1}{\sqrt{n}}\right)\left(\langle x, x \rangle^\frac{-1}{2}(a_ia_i^*)^\frac{1}{2}-\frac{1}{\sqrt{n}}\right)^*=c_x.
\end{align*}
\end{enumerate}
\end{theorem}
\begin{proof}
We expand and see that
\begin{align*}
&\sum_{i=1}^{n}\left(\langle x, x \rangle^\frac{-1}{2}(a_ia_i^*)^\frac{1}{2}-\frac{1}{\sqrt{n}}\right)\left(\langle x, x \rangle^\frac{-1}{2}(a_ia_i^*)^\frac{1}{2}-\frac{1}{\sqrt{n}}\right)^*\\
&=\sum_{i=1}^{n}\left(\langle x, x \rangle^\frac{-1}{2}(a_ia_i^*)^\frac{1}{2}(a_ia_i^*)^\frac{1}{2}\langle x, x \rangle^\frac{-1}{2}+\frac{1}{n}-\frac{\langle x, x \rangle^\frac{-1}{2}(a_ia_i^*)^\frac{1}{2}}{\sqrt{n}}-\frac{(a_ia_i^*)^\frac{1}{2}\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}\right)\\
&=\langle x, x \rangle^\frac{-1}{2}\left(\sum_{i=1}^{n}a_ia_i^*\right)\langle x, x \rangle^\frac{-1}{2}+1-\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}-\left(\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right)\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}\\
&=2-\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}-\left(\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right)\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}=c_x
\end{align*}
if and only if
\begin{align*}
\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}+\left(\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right)\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}=2-c_x
\end{align*}
if and only if
\begin{align*}
\left(\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right)\langle x, x \rangle^\frac{1}{2}+\langle x, x \rangle^\frac{1}{2}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}=\sqrt{n}\langle x, x \rangle^\frac{1}{2}(2-c_x)\langle x, x \rangle^\frac{1}{2}.
\end{align*}
\end{proof}
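The computation in the proof of Theorem \ref{GENERAL} can also be checked numerically in a matrix algebra. The Python sketch below (using NumPy; random complex matrices, purely illustrative, for which $\langle x, x\rangle$ is almost surely invertible) computes $c_x$ from condition (ii) and verifies the identity in condition (i):

```python
import numpy as np

rng = np.random.default_rng(1)

def psd_sqrt(a):
    """Positive square root of a Hermitian PSD matrix."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(np.clip(w, 0, None))) @ v.conj().T

d, n = 3, 4                      # arbitrary illustrative sizes
mats = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
        for _ in range(n)]
eye = np.eye(d)

gram = sum(a @ a.conj().T for a in mats)        # <x, x>, invertible here
g_half = psd_sqrt(gram)
g_inv_half = np.linalg.inv(g_half)
s = sum(psd_sqrt(a @ a.conj().T) for a in mats)

# c_x computed from condition (ii).
terms = [g_inv_half @ psd_sqrt(a @ a.conj().T) - eye / np.sqrt(n) for a in mats]
c_x = sum(t @ t.conj().T for t in terms)

# Condition (i) of the theorem.
lhs = s @ g_half + g_half @ s
rhs = np.sqrt(n) * g_half @ (2 * eye - c_x) @ g_half
assert np.allclose(lhs, rhs)
```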
A particular case of Theorem \ref{GENERAL} which is very similar to Theorem \ref{CASAZZARESULT} is the following.
\begin{corollary}
Let $x=(a_1, \dots, a_n) \in \mathcal{A}^n$ be such that $\langle x, x \rangle $ is invertible and commutes with $\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}$. The following are equivalent.
\begin{enumerate}[\upshape(i)]
\item We have
\begin{align*}
\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}=\left(1-\frac{c_x}{2}\right) \sqrt{n} \langle x, x \rangle^\frac{1}{2}=\sqrt{n} \langle x, x \rangle^\frac{1}{2}\left(1-\frac{c_x}{2}\right).
\end{align*}
\item We have
\begin{align*}
\sum_{i=1}^{n}\left(\langle x, x \rangle^\frac{-1}{2}(a_ia_i^*)^\frac{1}{2}-\frac{1}{\sqrt{n}}\right)\left(\langle x, x \rangle^\frac{-1}{2}(a_ia_i^*)^\frac{1}{2}-\frac{1}{\sqrt{n}}\right)^*=c_x.
\end{align*}
\end{enumerate}
In particular,
\begin{align*}
\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\leq \sqrt{s} \langle x, x \rangle^\frac{1}{2} \iff \left(1-\frac{c_x}{2}\right) \sqrt{n} \leq \sqrt{s} \iff 1-\frac{c_x}{2} \leq \sqrt{\frac{s}{n}} .
\end{align*}
\end{corollary}
We next derive a result which gives a one-sided implication of Theorem \ref{CASAZZARESULT}. For this, we need to generalize Definition \ref{CASAZZADEFINITION}.
\begin{definition}
A vector $x=\frac{1}{\sqrt{n}}(c_1, \dots, c_n) \in \mathcal{A}^n$ is said to be a constant modulus vector if $c_ic_i^*=1$, for all $i=1,\dots, n$.
\end{definition}
Recall that an element $a$ in a unital C*-algebra $\mathcal{A}$ is said to be an isometry if $a^*a=1$. Thus a vector is a constant modulus vector if and only if the adjoint of each of its coordinates is an isometry up to a scalar.
\begin{proposition}\label{PREVIOUS}
Let $x=(a_1, \dots, a_n) \in \mathcal{A}^n$ be such that $a_ia_i^*$ is invertible for each $i$. Define
\begin{align*}
c_x \coloneqq 2-\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}-\left(\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right)\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}.
\end{align*}
Then the infimum of the distance from $\langle x, x \rangle^\frac{-1}{2}x$ to the constant modulus vectors is less than or equal to $\sqrt{\|c_x\|}$.
\end{proposition}
\begin{proof}
Note that the condition that $a_ia_i^*$ is invertible for each $i$ implies that $\langle x, x \rangle $ is invertible. Now consider the vector $\frac{1}{\sqrt{n}}((a_1a_1^*)^\frac{-1}{2}a_1, \dots, (a_na_n^*)^\frac{-1}{2}a_n)$, which is a constant modulus vector. Using the definition of the infimum and expanding, we get
\begin{align*}
&\inf\left\{\|\langle x, x \rangle^\frac{-1}{2}x-y\|: y=\frac{1}{\sqrt{n}}(c_1, \dots, c_n) \in \mathcal{A}^n \text{ is constant modulus vector}\right\}\\
&~= \inf\left\{\left\|\sum_{i=1}^{n}\left(\langle x, x \rangle^\frac{-1}{2}a_i-\frac{c_i}{\sqrt{n}}\right)\left(\langle x, x \rangle^\frac{-1}{2}a_i-\frac{c_i}{\sqrt{n}}\right)^*\right\|^\frac{1}{2}:c_i \in \mathcal{A}, c_ic_i^*=1,i=1,\dots, n\right\}\\
&~\leq \left\|\sum_{i=1}^{n}\left(\langle x, x \rangle^\frac{-1}{2}a_i-\frac{(a_ia_i^*)^\frac{-1}{2}a_i}{\sqrt{n}}\right)\left(\langle x, x \rangle^\frac{-1}{2}a_i-\frac{(a_ia_i^*)^\frac{-1}{2}a_i}{\sqrt{n}}\right)^*\right\|^\frac{1}{2}\\
&~\leq\left\|\sum_{i=1}^{n}\langle x, x \rangle^\frac{-1}{2}a_ia_i^*\langle x, x \rangle^\frac{-1}{2}-\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}-\left(\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right)\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}+1\right\|^\frac{1}{2}\\
&~=\left\|2-\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}-\left(\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right)\frac{\langle x, x \rangle^\frac{-1}{2}}{\sqrt{n}}\right\|^\frac{1}{2}=\|c_x\|^\frac{1}{2}.
\end{align*}
\end{proof}
Proposition \ref{PREVIOUS} and Theorem \ref{CASAZZARESULT} lead to the following question: does the converse of Proposition \ref{PREVIOUS} hold? We see that when $n=1$, $c_x=0$ and hence the converse holds. It is not known whether it holds for $n\geq2$. Next we derive a result which concerns the $\ell_1-\ell_2$ inequality for submodules of Hilbert C*-modules.
\begin{proposition}\label{NEXT}
Let $\mathcal{N}$ be a submodule of $\mathcal{A}^n$ and $x \in \mathcal{N}$ be a vector such that $\langle x, x \rangle =1$.
If the distance of $x$ to the constant modulus vectors is greater than or equal to $c_x$, then
\begin{align*}
c_x \leq \left\|2-\frac{2}{\sqrt{n}}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right\|^\frac{1}{2}.
\end{align*}
\end{proposition}
\begin{proof}
By doing a similar calculation as in the proof of Proposition \ref{PREVIOUS} we get that
\begin{align*}
c_x &\leq \inf\left\{\|x-y\|: y=\frac{1}{\sqrt{n}}(c_1, \dots, c_n) \in \mathcal{A}^n \text{ is a constant modulus vector}\right\}\\
&= \inf\left\{\left\|\sum_{i=1}^{n}\left(a_i-\frac{c_i}{\sqrt{n}}\right)\left(a_i-\frac{c_i}{\sqrt{n}}\right)^*\right\|^\frac{1}{2}:c_i \in \mathcal{A},c_ic_i^*=1,i=1,\dots, n\right\}\\
&\leq \left\|2-\frac{2}{\sqrt{n}}\sum_{i=1}^{n}(a_ia_i^*)^\frac{1}{2}\right\|^\frac{1}{2}.
\end{align*}
\end{proof}
Again, a look at Proposition \ref{NEXT} and Theorem 2.4 in \cite{BOTELHOCASAZZACHENGTRAN} leads to the following question: does the converse of Proposition \ref{NEXT} hold?\\
In the spirit of Theorem 3.1 in \cite{BOTELHOCASAZZACHENGTRAN}, we next give an application involving the integral of G. G. Kasparov. For this we need some concepts.\\
Let $G$ be a compact Lie group and $\mu$ be the left Haar measure on $G$ such that $\mu(G)=1$ (see \cite{SEPANSKI}). If $f, g:G \to \mathcal{A}$ are continuous functions, then we define
\begin{align*}
\langle f, g \rangle \coloneqq \int_{G}f(x)g(x)^*\,d\mu(x),
\end{align*}
where the integral is in the sense of G. G. Kasparov
(see
\cite{KASPAROV, MANUILOVTROITSKY}). Now we can state the result.
\begin{theorem}
Let $G$ be a compact Lie group, $\mu(G)=1$, $f:G \to \mathcal{A}$ be continuous, $f(x)\geq 0$, $\forall x \in G$ and $\langle f, f \rangle=1$. The following are equivalent.
\begin{enumerate}[\upshape(i)]
\item We have
$
\int_{G}f(x)\,d\mu(x)=1-\frac{c}{2}.
$
\item We have
$
\langle f-1, f-1 \rangle =c.
$
\end{enumerate}
\end{theorem}
\begin{proof}
Consider
$
4=\langle f-1, f-1 \rangle+\langle f+1, f+1\rangle
=\langle f-1, f-1 \rangle +\langle f, f \rangle+\langle 1, 1 \rangle+2\int_{G}f(x)\,d\mu(x)
=\langle f-1, f-1 \rangle +2+2\int_{G}f(x)\,d\mu(x)
$
which implies $\langle f-1, f-1 \rangle=2-2\int_{G}f(x)\,d\mu(x).$ The conclusion follows by taking $c=\langle f-1, f-1 \rangle=2-2\int_{G}f(x)\,d\mu(x).$
\end{proof}
\section{Exact constant for the continuous $\ell_1-\ell_2$ inequality}\label{CONTINUOUSSECION}
Let $X$ be a measure space with finite measure. The continuous Cauchy-Schwarz inequality tells us that $\|f\|_1\leq \sqrt{\mu(X)}\|f\|_2$. Given $f\in \mathcal{L}^2(X)$, we now derive a method to compute the exact constant in the equality $\|f\|_1= c_f \sqrt{\mu(X)}\|f\|_2$. For this, we reformulate Definition \ref{CASAZZADEFINITION}.
\begin{definition}\label{DEFCON}
A function $f\in \mathcal{L}^2(X)$ is said to be a constant modulus function if $|f(x)|=\frac{1}{\sqrt{\mu(X)}}, \forall x \in X$.
\end{definition}
Definition \ref{DEFCON} says that a function is a constant modulus function if its image lies in the circle of radius $\frac{1}{\sqrt{\mu(X)}}$ centered at the origin.
\begin{theorem}\label{IMP}
For $f\in \mathcal{L}^2(X)$, the following are equivalent.
\begin{enumerate}[\upshape(i)]
\item We have
\begin{align*}
\|f\|_1= \left(1-\frac{c_f}{2}\right) \sqrt{\mu(X)} \|f\|_2.
\end{align*}
\item We have
\begin{align*}
\int_{X}\left|\frac{|f(x)|}{\|f\|_2}-\frac{1}{\sqrt{\mu(X)}}\right|^2\,d\mu(x)=c_f.
\end{align*}
\item The infimum of the distance from $\frac{f}{\|f\|_2}$ to the constant modulus functions is $\sqrt{c_f}$.
\end{enumerate}
In particular,
\begin{align*}
\|f\|_1\leq \sqrt{s} \|f\|_2 \iff \left(1-\frac{c_f}{2}\right) \sqrt{\mu(X)} \leq \sqrt{s} \iff 1-\frac{c_f}{2} \leq \sqrt{\frac{s}{\mu(X)}} .
\end{align*}
\end{theorem}
\begin{proof}
(i) $\iff$ (ii) Starting from the integral in (ii) we see that
\begin{align*}
\int_{X}\left|\frac{|f(x)|}{\|f\|_2}-\frac{1}{\sqrt{\mu(X)}}\right|^2\,d\mu(x)&=\frac{1}{\|f\|_2^2}\int_{X}|f(x)|^2\,d\mu(x)+\frac{1}{\mu(X)}\int_{X}\,d\mu(x)\\
&\quad-2\frac{1}{\|f\|_2\sqrt{\mu(X)}}\int_{X}|f(x)|\,d\mu(x)\\
&=2\left(1-\frac{1}{\|f\|_2\sqrt{\mu(X)}}\int_{X}|f(x)|\,d\mu(x)\right)\\
&=c_f
\end{align*}
if and only if
\begin{align*}
\frac{1}{\|f\|_2\sqrt{\mu(X)}}\int_{X}|f(x)|\,d\mu(x)=1-\frac{c_f}{2}
\end{align*}
if and only if
\begin{align*}
\int_{X}|f(x)|\,d\mu(x)=\left(1-\frac{c_f}{2}\right)\|f\|_2\sqrt{\mu(X)}.
\end{align*}
(i) $\iff$ (iii) This follows from the calculation
\begin{align*}
&\inf\left\{\left\|\frac{f}{\|f\|_2}-g\right\|_2: g \in \mathcal{L}^2(X) \text{ is constant modulus function}\right\}\\
&~=\inf\left\{\left(\int_{X}\left|\frac{f(x)}{\|f\|_2}-g(x)\right|^2\,d\mu(x)\right)^\frac{1}{2}: g \in \mathcal{L}^2(X) \text{ is a constant modulus function}\right\}\\
&~=\inf\bigg\{\left(\int_{X}\left|\frac{f(x)}{\|f\|_2}\right|^2\,d\mu(x)+\int_{X}|g(x)|^2\,d\mu(x)-\frac{2}{\|f\|_2}\text{Re}\left(\int_{X}f(x)\overline{g(x)}\,d\mu(x)\right)\right)^\frac{1}{2}:\\
&\quad \quad g \in \mathcal{L}^2(X) \text{ is a constant modulus function}\bigg\}\\
&~=\inf\bigg\{\left(1+1-\frac{2}{\|f\|_2}\text{Re}\left(\int_{X}f(x)\overline{g(x)}\,d\mu(x)\right)\right)^\frac{1}{2}:
g \in \mathcal{L}^2(X) \text{ is a constant modulus function}\bigg\}\\
&~=\left(2-\frac{2}{\sqrt{\mu(X)}\|f\|_2}\int_{X}|f(x)|\,d\mu(x)\right)^\frac{1}{2}.
\end{align*}
\end{proof}
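Since the equivalence of (i) and (ii) in Theorem \ref{IMP} is an algebraic identity valid for any finite measure, it holds exactly for a discrete approximation of $X=[0,1]$, with no quadrature error entering. A short Python sketch (the test function is arbitrary and purely illustrative):

```python
import math

# Discretize X = [0, 1]; the identity holds exactly for any finite measure,
# in particular for this discrete one, so no quadrature error enters.
big_n = 1000
w = 1.0 / big_n                                 # weight of each atom
f = [((k + 0.5) / big_n) ** 2 - 0.3 for k in range(big_n)]   # arbitrary f

mu = w * big_n                                  # total measure, = 1 up to rounding
norm2 = math.sqrt(sum(w * v * v for v in f))
norm1 = sum(w * abs(v) for v in f)

# c_f from condition (ii) of the theorem.
c_f = sum(w * (abs(v) / norm2 - 1 / math.sqrt(mu)) ** 2 for v in f)

# Condition (i): the exact constant in the continuous l1-l2 inequality.
assert abs(norm1 - (1 - c_f / 2) * math.sqrt(mu) * norm2) < 1e-9
```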
To obtain further results we need a result whose proof follows from a routine argument using the Hilbert projection theorem.
\begin{theorem}\label{IMPROVEDPROJECTION}
Let $\mathcal{K}$ be a closed subspace of a Hilbert space $\mathcal{H}$ and let $ P:\mathcal{H} \to \mathcal{K}$ be the orthogonal projection onto $\mathcal{K}$. Then for each $ h \in \mathcal{H}$ with $Ph\neq0$, $\frac{Ph}{\|Ph\|}$ is the closest unit vector in $\mathcal{K}$ to $h$.
\end{theorem}
We now use Theorem \ref{IMPROVEDPROJECTION} to obtain relations between closed subspaces of $ \mathcal{L}^2(X)$ and the continuous $\ell_1-\ell_2$ inequality.
\begin{theorem}\label{PRETHM}
Let $W$ be a closed subspace of $ \mathcal{L}^2(X)$ and let $ P:\mathcal{L}^2(X) \to W$ be the orthogonal projection onto $W$. Then the following are equivalent.
\begin{enumerate}[\upshape(i)]
\item For every unit vector $f \in W$,
$
\|f\|_1\leq \left(1-\frac{c_f}{2}\right)\sqrt{\mu(X)}.
$
\item The distance of any unit vector in $ W$ to any constant modulus function $ f \in W$ is greater than or equal to $\sqrt{c_f}$.
\item For every constant modulus function $f \in W$,
$
\|Pf\|_2\leq 1-\frac{c_f}{2}.
$
\end{enumerate}
\end{theorem}
\begin{proof}
(i) $\iff$ (ii) We do a similar calculation as in the proof of Theorem \ref{IMP} and get
\begin{align*}
&\inf\left\{\left\|f-g\right\|_2: g \in \mathcal{L}^2(X) \text{ is a constant modulus function}\right\}\\
&~=\inf\left\{\left(\int_{X}\left|f(x)-g(x)\right|^2\,d\mu(x)\right)^\frac{1}{2}: g \in \mathcal{L}^2(X) \text{ is a constant modulus function}\right\}\\
&~=\inf\bigg\{\left(\int_{X}\left|f(x)\right|^2\,d\mu(x)+\int_{X}|g(x)|^2\,d\mu(x)-2\text{Re}\left(\int_{X}f(x)\overline{g(x)}\,d\mu(x)\right)\right)^\frac{1}{2}:\\
&\quad \quad g \in \mathcal{L}^2(X) \text{ is a constant modulus function}\bigg\}\\
&~=\inf\bigg\{\left(1+1-2\text{Re}\left(\int_{X}f(x)\overline{g(x)}\,d\mu(x)\right)\right)^\frac{1}{2}:
g \in \mathcal{L}^2(X) \text{ is a constant modulus function}\bigg\}\\
&~=\left(2-\frac{2}{\sqrt{\mu(X)}}\int_{X}|f(x)|\,d\mu(x)\right)^\frac{1}{2}.
\end{align*}
Therefore
\begin{align*}
\sqrt{c_f } \leq \left(2-\frac{2}{\sqrt{\mu(X)}}\int_{X}|f(x)|\,d\mu(x)\right)^\frac{1}{2}
\end{align*}
if and only if
\begin{align*}
\|f\|_1=\int_{X}|f(x)|\,d\mu(x)\leq \left(1-\frac{c_f}{2}\right)\sqrt{\mu(X)} .
\end{align*}
(ii) $\iff$ (iii) Let $ f \in W$ be a constant modulus function.
In view of Theorem \ref{IMPROVEDPROJECTION} we calculate
\begin{align*}
\left\|\frac{Pf}{\|Pf\|}-f\right\|^2&=1+1-\left \langle \frac{Pf}{\|Pf\|}, f\right\rangle -\left \langle f, \frac{Pf}{\|Pf\|}\right\rangle=2-\left \langle \frac{P^2f}{\|Pf\|}, f\right\rangle -\left \langle f, \frac{P^2f}{\|Pf\|}\right\rangle\\
&=2-2\|Pf\|.
\end{align*}
Therefore
\begin{align*}
\sqrt{c_f} \leq \left\|\frac{Pf}{\|Pf\|}-f\right\| \text{\quad if and only if \quad } \|Pf\| \leq 1-\frac{c_f}{2}.
\end{align*}
\end{proof}
\end{document}
\begin{document}
\title{
Inequalities for trace norms of $2 \times 2$ block matrices}
\begin{abstract}
This paper derives an inequality relating the $p$-norm of a positive $2 \times 2$ block
matrix to the $p$-norm of the $2 \times 2$ matrix obtained by replacing each
block by its $p$-norm. The inequality had been known for integer values of $p$,
so the main contribution here is the extension to all values $p \geq 1$.
In a special case the result reproduces Hanner's inequality.
A weaker inequality which applies also to non-positive matrices is
presented. As an application in quantum information theory,
the inequality is used to obtain some results concerning
maximal $p$-norms of product channels.
\end{abstract}
\pagebreak
\section{Introduction and statement of results}
Quantum information theory has raised some interesting mathematical questions
about completely positive trace preserving maps.
Such maps describe the evolution of open quantum systems, or quantum
systems in the presence of noise \cite{BS}.
Many of these questions are related to the quantum entropy of states,
and the associated notion of the trace norm, or $p$-norm, of a state.
In one case \cite{K1} the investigation of the additivity question for product channels
(which will be explained in Section 5) led
to an inequality for $p$-norms of positive $2 \times 2$ block matrices for integer values of $p$.
The present paper is devoted to showing that this inequality extends to non-integer values of
$p$. Some implications of this result for the additivity question are
presented, as well as a somewhat weaker inequality which applies to
all $2 \times 2$ block matrices.
The inequality for positive matrices turns out to be closely related to Hanner's inequality \cite{Ha},
which itself relates to the uniform convexity of the matrix spaces $C_p$
(these matrix spaces are the non-commutative versions of the function spaces $L_p$).
The precise relation between these results will be described after the statements of Theorem \ref{thm1}
and Theorem \ref{thm2} below. Hanner's inequality and uniform convexity
for $C_p$ were first established by Tomczak-Jaegermann \cite{T-J} for special values
of $p$, and later proved for all $p \geq 1$ by Ball, Carlen and Lieb \cite{BCL}.
Many of the ideas and methods used in the proofs
of Theorems \ref{thm1} and \ref{thm2} in this paper are taken from
the paper by Ball, Carlen and Lieb. The heart of the proof of Theorem \ref{thm1} is the convexity
result presented below in Lemma \ref{lemma3}, which extends a result used by Hanner \cite{Ha}
in his original paper.
Let $M$ be a $2n \times 2n$ positive semi-definite matrix. It can be written
in the block form
\begin{eqnarray}\label{def:M}
M = \pmatrix{ X & Y \cr Y^{*} & Z}
\end{eqnarray}
where $X,Y,Z$ are $n \times n$ matrices. The condition $M \geq 0$ requires that
$X \geq 0$ and $Z \geq 0$, and also that
$Y = X^{1/2} R Z^{1/2}$ where $R$ is a contraction.
Recall that the $p$-norm of a matrix $A$ is defined as
\begin{eqnarray}
|| A ||_p = \bigg( {\rm Tr} ( A^{*} A)^{p/2} \bigg)^{1/p}
\end{eqnarray}
Define the $2 \times 2$ matrix
\begin{eqnarray}\label{def:m}
m = \pmatrix{ || X ||_p & || Y ||_p \cr || Y ||_p & || Z ||_p }
\end{eqnarray}
From H\"older's inequality it follows that
\begin{eqnarray}
|| Y ||_p = || X^{1/2} \, R Z^{1/2} ||_p \leq || X ||_p^{1/2} \,\, || Z ||_p^{1/2}
\end{eqnarray}
which implies that $m \geq 0$ also.
\begin{thm}\label{thm1}
Let $M$ and $m$ be defined as in (\ref{def:M}) and (\ref{def:m}).
The following inequalities hold:
\par\noindent {\bf a)} for $1 \leq p \leq 2$,
\begin{eqnarray}\label{thm1.1}
|| M ||_p \geq || m ||_p
\end{eqnarray}
\par\noindent {\bf b)} for $2 \leq p \leq \infty$,
\begin{eqnarray}\label{thm1.2}
|| M ||_p \leq || m ||_p
\end{eqnarray}
\end{thm}
Theorem \ref{thm1} is easily proved for integer values of $p$ using
H\"older's inequality (see \cite{K1} for details). In the case where
$X=Z$ and $Y=Y^{*}$, the norms of $M$ and $m$ simplify in the following way:
\begin{eqnarray}
|| M ||_{p}^p & = & || X + Y ||_{p}^p + || X - Y ||_{p}^p \\
|| m ||_{p}^p & = & \bigg( || X ||_p + || Y ||_p \bigg)^p
+ \bigg| || X ||_p - || Y ||_p \bigg|^p
\end{eqnarray}
With these substitutions, the inequalities (\ref{thm1.1}) and (\ref{thm1.2})
are seen to be special cases of
Hanner's inequality \cite{Ha} for the matrix spaces $C_p$. As mentioned above,
Hanner's inequality for $C_p$ was proved by Tomczak-Jaegermann \cite{T-J} for special values
of $p$, and later proved for all $p \geq 1$ by Ball, Carlen and Lieb \cite{BCL}.
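Although the proof for non-integer $p$ occupies most of the paper, the statement of Theorem \ref{thm1} is easy to test numerically. The following Python sketch (using NumPy; the block matrix is random and purely illustrative) compares $\|M\|_p$ with $\|m\|_p$ for several values of $p$ on either side of $2$:

```python
import numpy as np

rng = np.random.default_rng(2)

def p_norm(a, p):
    """Schatten p-norm, computed from the singular values."""
    return float(np.sum(np.linalg.svd(a, compute_uv=False) ** p) ** (1.0 / p))

n = 3
b = rng.standard_normal((2 * n, 2 * n)) + 1j * rng.standard_normal((2 * n, 2 * n))
big_m = b @ b.conj().T                          # random positive 2n x 2n matrix M
x, y, z = big_m[:n, :n], big_m[:n, n:], big_m[n:, n:]

for p in [1.0, 1.3, 1.8, 2.0, 2.5, 4.7]:
    small_m = np.array([[p_norm(x, p), p_norm(y, p)],
                        [p_norm(y, p), p_norm(z, p)]])   # the 2 x 2 matrix m
    if p <= 2:
        assert p_norm(big_m, p) >= p_norm(small_m, p) - 1e-10   # part (a)
    else:
        assert p_norm(big_m, p) <= p_norm(small_m, p) + 1e-10   # part (b)
```

At $p=1$ and $p=2$ the two sides agree exactly, consistent with the equality cases visible from the trace and Hilbert-Schmidt norms.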
The next Theorem presents a weaker pair of
inequalities which hold for all $2 \times 2$ block matrices.
\begin{thm}\label{thm2}
Let $X$, $Y$, $Z$, $W$ be complex $n \times n$ matrices.
Define the $2 \times 2$ symmetric matrix
\begin{eqnarray}\label{def:alpha}
\alpha = \pmatrix{||X||_p & \Big( {1 \over 2} ||Y||_p^{p} + {1 \over 2} ||W||_p^{p} \Big)^{1/p} \cr
\Big( {1 \over 2} ||Y||_p^{p} + {1 \over 2} ||W||_p^{p} \Big)^{1/p} & ||Z||_p}
\end{eqnarray}
The following inequalities hold:
\par\noindent {\bf a)} for $1 \leq p \leq 2$,
\begin{eqnarray}\label{thm2.1}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p \geq
2^{1/p} \bigg[ {p-1 \over 2}\,\, {\rm Tr} (\alpha^2) + {2 - p \over 4}\,\, ({\rm Tr} \alpha )^2 \bigg]^{1/2}
\end{eqnarray}
\par\noindent {\bf b)} for $2 \leq p \leq \infty$,
\begin{eqnarray}\label{thm2.2}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p \leq
2^{1/p} \bigg[ {p-1 \over 2}\,\, {\rm Tr} (\alpha^2) + {2 - p \over 4}\,\, ({\rm Tr} \alpha )^2 \bigg]^{1/2}
\end{eqnarray}
\end{thm}
Again considering the special case where $X = X^{*} = Z$ and $Y = Y^{*} = W$, the right side
of (\ref{thm2.1}) and (\ref{thm2.2}) becomes
\begin{eqnarray}
2^{1/p} \bigg[ ||X||_p^2 + (p-1) \,\, ||Y||_p^2 \bigg]^{1/2}
\end{eqnarray}
The inequalities in this case were derived in \cite{BCL}, and used to establish the
2-uniform convexity (with best constant) of the space $C_p$.
When the block matrix $M$ on the left side
of (\ref{thm2.1}) is positive and defined as in (\ref{def:M}), the
inequality can be easily derived from Theorem \ref{thm1}, as follows.
Observe that in this case
\begin{eqnarray}\label{new.m}
|| m ||_p = \bigg( (u + v)^p + (u - v)^p \bigg)^{1/p}
\end{eqnarray}
where
\begin{eqnarray}
u & = & {||X||_p + ||Z||_p \over 2} \\
v & = & \Bigg[ \Bigg({||X||_p - ||Z||_p \over 2}\Bigg)^2 + ||Y||_p^2 \Bigg]^{1/2}
\end{eqnarray}
Gross's two-point inequality \cite{G} states that for all
real numbers $a$ and $b$, and all $1 \leq p \leq 2$,
\begin{eqnarray}\label{Gross}
\bigg( |a+b|^p + |a-b|^p \bigg)^{1/p} \geq 2^{1/p}
\bigg( a^2 + (p-1) \,b^2 \bigg)^{1/2}
\end{eqnarray}
Applying Gross's inequality to the right side of (\ref{new.m}) and using
(\ref{thm1.1}) immediately
gives (\ref{thm2.1}). In section 3 we prove Theorem \ref{thm2} in the general case
(where positivity is not assumed) by
using some very non-trivial results from the paper \cite{BCL}.
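Gross's two-point inequality (\ref{Gross}) is likewise easy to verify numerically; the sketch below (in Python, with an arbitrary grid of test values) checks it for several real $a$, $b$ and $1 \leq p \leq 2$, with equality at $p=2$:

```python
import math

def gross_gap(a, b, p):
    """(|a+b|^p + |a-b|^p)^(1/p) minus the bound 2^(1/p) (a^2 + (p-1) b^2)^(1/2)."""
    lhs = (abs(a + b) ** p + abs(a - b) ** p) ** (1.0 / p)
    rhs = 2 ** (1.0 / p) * (a * a + (p - 1) * b * b) ** 0.5
    return lhs - rhs

# The gap should be nonnegative for all 1 <= p <= 2, vanishing at p = 2.
for p in [1.0, 1.2, 1.5, 1.9, 2.0]:
    for a in [-2.0, -0.5, 0.0, 1.0, 3.0]:
        for b in [-1.5, 0.0, 0.7, 2.0]:
            assert gross_gap(a, b, p) >= -1e-12
```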
Most of the new work in this paper goes into the proof of
Theorem \ref{thm1}, part (a). The proof has three main ingredients: for convenience we state them as
separate lemmas here. The first ingredient is a slight modification of a convexity result
from \cite{BCL}.
\begin{lemma}\label{lemma2}
Let $M = \pmatrix{X & Y \cr Y^{*} & Z} \geq 0$ where $X,Y,Z$ are $n \times n$ matrices.
For fixed $Y$, and for $1 \leq p \leq 2$, the function
\begin{eqnarray}
(X,Z) \longmapsto {\rm Tr} M^p - {\rm Tr} X^p - {\rm Tr} Z^p
\end{eqnarray}
is jointly convex in $X$ and $Z$.
\end{lemma}
The second ingredient extends a convexity result of Hanner \cite{Ha}
to the case of positive $2 \times 2$ matrices
with positive coefficients.
\begin{lemma}\label{lemma3}
Let $A = \pmatrix{a & c \cr c & b} > 0$ where $a,b,c \geq 0$.
For $1 \leq p \leq 2$, the function
\begin{eqnarray}\label{def:g}
g(A) = {\rm Tr} \pmatrix{a^{1/p} & c^{1/p} \cr c^{1/p} & b^{1/p}}^p
\end{eqnarray}
is convex in $A$.
\end{lemma}
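Since $g$ is homogeneous of degree one, the convexity asserted in Lemma \ref{lemma3} is equivalent to subadditivity, $g(A+B) \leq g(A) + g(B)$, which is how the lemma is used later. A numerical spot-check of subadditivity, added here as an illustration (Python with NumPy assumed; \texttt{rand\_A} is a sampling helper introduced for this sketch):

```python
import numpy as np

def g(A, p):
    """g(A) = Tr [[a^(1/p), c^(1/p)], [c^(1/p), b^(1/p)]]^p for A = [[a,c],[c,b]] > 0."""
    a, c, b = A[0, 0], A[0, 1], A[1, 1]
    root = np.array([[a ** (1 / p), c ** (1 / p)],
                     [c ** (1 / p), b ** (1 / p)]])
    return float(np.sum(np.linalg.eigvalsh(root) ** p))

def rand_A(rng):
    a, b = rng.uniform(0.5, 2.0, size=2)
    c = 0.95 * rng.uniform() * np.sqrt(a * b)  # keeps [[a,c],[c,b]] positive definite
    return np.array([[a, c], [c, b]])

rng = np.random.default_rng(1)
p = 1.5
gaps = []
for _ in range(200):
    A, B = rand_A(rng), rand_A(rng)
    gaps.append(g(A, p) + g(B, p) - g(A + B, p))
print(min(gaps) >= -1e-10)
```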
The third ingredient is a monotonicity result for positive $2 \times 2$ matrices.
\begin{lemma}\label{lemma4}
Let $A = \pmatrix{a & c \cr c & b} > 0$ where $a,b,c \geq 0$.
For fixed $c$, and for $1 \leq p \leq 2$, the function
\begin{eqnarray}\label{def:h}
(a,b) \longmapsto {\rm Tr} A^{p} - a^{p} - b^{p}
\end{eqnarray}
is decreasing in $a$ and $b$.
\end{lemma}
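The monotonicity in Lemma \ref{lemma4} can likewise be probed numerically; the sketch below (an editorial illustration, Python with NumPy assumed, with an ad hoc choice of $b$, $c$, $p$ and grid) scans $a$ with $b$ and $c$ fixed and checks that ${\rm Tr}\, A^p - a^p - b^p$ is non-increasing:

```python
import numpy as np

def excess(a, b, c, p):
    """Tr A^p - a^p - b^p for A = [[a, c], [c, b]]."""
    w = np.linalg.eigvalsh(np.array([[a, c], [c, b]]))
    return float(np.sum(w ** p)) - a ** p - b ** p

p, b, c = 1.5, 2.0, 1.0                 # a*b > c^2 throughout, so A > 0
vals = [excess(a, b, c, p) for a in np.linspace(1.0, 10.0, 50)]
print(all(x >= y - 1e-12 for x, y in zip(vals, vals[1:])))
```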
The paper is organised as follows. In Section 2 we present the proof of Theorem \ref{thm1}
using Lemmas \ref{lemma2}, \ref{lemma3} and \ref{lemma4}. Section 3 contains the proof of
Theorem \ref{thm2}, which is mostly a straightforward adaptation of the proof
of the uniform convexity result in \cite{BCL}.
Lemmas \ref{lemma2}, \ref{lemma3} and \ref{lemma4} are proved in Section 4,
and Section 5 describes an application of Theorem \ref{thm1} in Quantum Information Theory.
\section{Proof of Theorem \ref{thm1}}
Many of the ideas in this proof are taken from the proof of Hanner's inequality
in \cite{BCL}. First, we borrow the duality argument from Section IV of that paper
to show that part (b) follows from part (a). For $p \geq 2$ define $q \leq 2$ to be
its conjugate index. Then there is a $2n \times 2n$ matrix $K$ satisfying
$ || K ||_q = 1$ such that
\begin{eqnarray}
|| M ||_p = \sup_{L: || L ||_q =1} | \, {\rm Tr} ( L M ) \, | = {\rm Tr} ( K M )
\end{eqnarray}
The positivity of $M$ means that $K$ can be assumed to be positive.
Let
\begin{eqnarray}
K = \pmatrix{A & C \cr C^{*} & B} \geq 0
\end{eqnarray}
then
\begin{eqnarray}\label{a->b}
{\rm Tr} ( K M ) & = & {\rm Tr} (A X) + {\rm Tr} ( C Y^{*} ) + {\rm Tr} (C^{*} Y ) + {\rm Tr} ( B Z ) \\ \nonumber
& \leq & || A ||_q \, || X||_p + 2 ||C||_q \, ||Y||_p + ||B||_q \, ||Z||_p \\ \nonumber
& = & {\rm Tr} \pmatrix{||A||_q & ||C||_q \cr ||C||_q & ||B||_q} m \\ \nonumber
& \leq & \bigg|\bigg| \pmatrix{||A||_q & ||C||_q \cr ||C||_q & ||B||_q} \bigg|\bigg|_q \, ||m||_p \\
\nonumber & \leq & ||K||_q \, ||m||_p \\ \nonumber
& = & ||m||_p
\end{eqnarray}
The first and second inequalities are applications of H\"older's inequality,
and the last inequality uses part (a) of Theorem \ref{thm1}.
Next we turn to the proof of part (a) of Theorem \ref{thm1}.
The inequality becomes an equality at the
values $p=1,2$,
so we will assume henceforth that $1 < p < 2$. Using the singular value
decomposition we can write
\begin{eqnarray}
Y = U D V^{*}
\end{eqnarray}
where $U,V$ are unitary matrices and $D \geq 0$ is diagonal. Unitary invariance of the
$p$ norm implies that
\begin{eqnarray}
||M||_p = \bigg|\bigg| \pmatrix{U^{*} X U & D \cr D & V^{*} Z V} \bigg|\bigg|_p
\end{eqnarray}
and also that $||X||_p = ||U^{*} X U||_p$,
$||Z||_p = ||V^{*} Z V||_p$ and $||Y||_p = ||D||_p$.
So without loss of generality we will assume henceforth that $Y$ is diagonal and
non-negative.
Next we use a diagonalization argument from Section III of \cite{BCL}.
Let $U_1, \dots, U_{2^n}$ denote the $2^n$ diagonal $n \times n$
matrices with diagonal entries $\pm 1$. Then for any
$n \times n$ matrix $A$ we have
\begin{eqnarray}
A_d = \sum_{i=1}^{2^n} 2^{-n} \,\, U_i A U_{i}^{*}
\end{eqnarray}
where $A_d$ is the diagonal part of $A$. Since $Y$ is diagonal this implies that
\begin{eqnarray}\label{av1}
\sum_{i=1}^{2^n} 2^{-n} \pmatrix{U_i & 0 \cr 0 & U_i} \pmatrix{X & Y \cr Y & Z}
\pmatrix{U_{i}^{*} & 0 \cr 0 & U_{i}^{*}}
= \pmatrix{X_d & Y \cr Y & Z_d}
\end{eqnarray}
and by the same reasoning
\begin{eqnarray}\label{av2}
\sum_{i=1}^{2^n} 2^{-n} \pmatrix{U_i & 0 \cr 0 & U_i} \pmatrix{X & 0 \cr 0 & Z}
\pmatrix{U_{i}^{*} & 0 \cr 0 & U_{i}^{*}}
= \pmatrix{X_d & 0 \cr 0 & Z_d}
\end{eqnarray}
Now we combine (\ref{av1}) and (\ref{av2}) with the convexity result Lemma \ref{lemma2},
which gives
\begin{eqnarray}\label{ineq1}
{\rm Tr} \pmatrix{X & Y \cr Y & Z}^p - {\rm Tr} \pmatrix{X & 0 \cr 0 & Z}^p
\geq
{\rm Tr} \pmatrix{X_d & Y \cr Y & Z_d}^p - {\rm Tr} \pmatrix{X_d & 0 \cr 0 & Z_d}^p
\end{eqnarray}
The matrices $X_d,Y,Z_d$ are all diagonal with non-negative entries. Denote these
entries by $(x_1, \dots, x_n)$, $(y_1, \dots, y_n)$ and $(z_1, \dots, z_n)$
respectively. Then
\begin{eqnarray}\label{eqn1}
{\rm Tr} \pmatrix{X_d & Y \cr Y & Z_d}^p =
\sum_{i=1}^n {\rm Tr} \pmatrix{x_i & y_i \cr y_i & z_i}^p
\end{eqnarray}
Now for $i=1,\dots,n$ define
\begin{eqnarray}
a_i = x_i^p, \quad b_i = z_i^p, \quad c_i = y_i^p
\end{eqnarray}
and introduce the $2 \times 2$ matrices
\begin{eqnarray}
A_i = \pmatrix{a_i & c_i \cr c_i & b_i}
\end{eqnarray}
It follows that
\begin{eqnarray}\label{p-norms}
||X_d||_p & = & (a_1 + \cdots + a_n)^{1/p} \\ \nonumber
||Y||_p & = & (c_1 + \cdots + c_n)^{1/p} \\ \nonumber
||Z_d||_p & = & (b_1 + \cdots + b_n)^{1/p}
\end{eqnarray}
and the definition (\ref{def:g}) implies that
\begin{eqnarray}\label{eqn3}
{\rm Tr} \pmatrix{||X_d||_p & ||Y||_p \cr ||Y||_p & ||Z_d||_p}^p
= g(A_1 + \cdots + A_n)
\end{eqnarray}
Furthermore (\ref{eqn1}) implies that
\begin{eqnarray}\label{eqn2}
{\rm Tr} \pmatrix{X_d & Y \cr Y & Z_d}^p =
g(A_1) + \cdots + g(A_n)
\end{eqnarray}
Also, for any positive number $k$ we have $g(k A) = k g(A)$. Combining this with the
convexity result Lemma \ref{lemma3} gives
\begin{eqnarray}
g(A_1 + \cdots + A_n) \leq g(A_1) + \cdots + g(A_n),
\end{eqnarray}
which from (\ref{eqn2}) and (\ref{eqn3}) implies that
\begin{eqnarray}\label{ineq2}
{\rm Tr} \pmatrix{X_d & Y \cr Y & Z_d}^p \geq
{\rm Tr} \pmatrix{||X_d||_p & ||Y||_p \cr ||Y||_p & ||Z_d||_p}^p
\end{eqnarray}
Combining (\ref{ineq1}) with (\ref{ineq2}) gives
\begin{eqnarray}\label{ineq3}
& {\rm Tr} & \pmatrix{X & Y \cr Y & Z}^p - {\rm Tr} \pmatrix{X & 0 \cr 0 & Z}^p \\ \nonumber
\geq
& {\rm Tr} & \pmatrix{||X_d||_p & ||Y||_p \cr ||Y||_p & ||Z_d||_p}^p -
{\rm Tr} \pmatrix{||X_d||_p & 0 \cr 0 & ||Z_d||_p}^p
\end{eqnarray}
Furthermore
\begin{eqnarray}
||X_d||_p \leq ||X||_p, \quad\quad
||Z_d||_p \leq ||Z||_p
\end{eqnarray}
Applying Lemma \ref{lemma4} to the right side of (\ref{ineq3}) shows that
\begin{eqnarray}\label{ineq4}
& {\rm Tr} & \pmatrix{||X_d||_p & ||Y||_p \cr ||Y||_p & ||Z_d||_p}^p -
{\rm Tr} \pmatrix{||X_d||_p & 0 \cr 0 & ||Z_d||_p}^p \\ \nonumber
\geq & {\rm Tr} & \pmatrix{||X||_p & ||Y||_p \cr ||Y||_p & ||Z||_p}^p -
{\rm Tr} \pmatrix{||X||_p & 0 \cr 0 & ||Z||_p}^p
\end{eqnarray}
Furthermore
\begin{eqnarray}
{\rm Tr} \pmatrix{X & 0 \cr 0 & Z}^p =
{\rm Tr} \pmatrix{||X||_p & 0 \cr 0 & ||Z||_p}^p
\end{eqnarray}
and therefore (\ref{ineq3}) and (\ref{ineq4})
imply part (a) of Theorem \ref{thm1}.
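A random spot-check of Theorem \ref{thm1} is straightforward; the following sketch (an illustration added in editing, Python with NumPy assumed, with \texttt{schatten} a helper for the Schatten $p$-norm) tests part (a) at $p = 3/2$ and part (b) at $p = 3$ on a random positive block matrix:

```python
import numpy as np

def schatten(A, p):
    """Schatten p-norm: (sum of singular values^p)^(1/p)."""
    return float(np.sum(np.linalg.svd(A, compute_uv=False) ** p) ** (1.0 / p))

rng = np.random.default_rng(2)
n = 3
R = rng.normal(size=(2 * n, 2 * n)) + 1j * rng.normal(size=(2 * n, 2 * n))
M = R @ R.conj().T                      # random positive 2n x 2n matrix
X, Y, Z = M[:n, :n], M[:n, n:], M[n:, n:]

def small_norm(p):
    """|| m ||_p for the 2 x 2 matrix m of block norms."""
    m = np.array([[schatten(X, p), schatten(Y, p)],
                  [schatten(Y, p), schatten(Z, p)]])
    return schatten(m, p)

print(schatten(M, 1.5) >= small_norm(1.5) - 1e-10)   # part (a), 1 <= p <= 2
print(schatten(M, 3.0) <= small_norm(3.0) + 1e-10)   # part (b), p >= 2
```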
\section{Proof of Theorem \ref{thm2}}
This proof follows very closely the methods in Section III of \cite{BCL}.
First we use a duality argument to deduce (\ref{thm2.2}) from (\ref{thm2.1}).
Let $p \geq 2$ and let $q$ be the index conjugate to $p$. Then
it follows as in (\ref{a->b}) that there is a matrix $K = \pmatrix{
A & C \cr D & B}$ such that $||K||_q =1$ and
\begin{eqnarray}\label{3.1}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p & = &
{\rm Tr} \, K \, \pmatrix{X & Y \cr W & Z} \\ \nonumber
& = & {\rm Tr} \bigg( AX + CW + DY + BZ \bigg)
\end{eqnarray}
Define
\begin{eqnarray}
a = ||A||_q, \quad b = ||B||_q, \quad c = \Big({1 \over 2}||C||_q^q +
{1 \over 2}||D||_q^q \Big)^{1/q}
\end{eqnarray}
and similarly
\begin{eqnarray}\label{def:x}
x = ||X||_p, \quad z = ||Z||_p, \quad y = \Big({1 \over 2}||Y||_p^p +
{1 \over 2}||W||_p^p \Big)^{1/p}
\end{eqnarray}
Then applying H\"older's inequality to (\ref{3.1}) gives
\begin{eqnarray}\label{3.2}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p \leq
ax + bz + 2 c y
\end{eqnarray}
This is rewritten as
\begin{eqnarray}\label{3.3}
ax + bz + 2 c y & = & 2 \Big({a+b \over 2}\Big) \Big({x+z \over 2}\Big)
+ 2 \Big({a-b \over 2}\Big) \Big({x-z \over 2}\Big) + 2 c y \\ \nonumber
& = & 2 \Big({a+b \over 2}\Big) \Big({x+z \over 2}\Big) \\ \nonumber
& + & 2 \Big({ q-1}\Big)^{1/2}
\Big({a-b \over 2}\Big) \Big({1 \over q-1}\Big)^{1/2} \Big({x-z \over
2}\Big) \\ \nonumber
& + & 2 \Big({q-1}\Big)^{1/2} c \Big({1 \over q-1}\Big)^{1/2} y
\end{eqnarray}
Now we apply the Cauchy-Schwarz inequality to the right side
of (\ref{3.3}); the result is
\begin{eqnarray}\label{3.3a}
ax + bz + 2 c y
& \leq & 2 \, \bigg[ \Big({a+b \over 2}\Big)^2 + (q-1) \Big({a-b \over 2}\Big)^2 + (q-1) c^2 \bigg]^{1/2}
\nonumber \\
& \times & \bigg[ \Big({x+z \over 2}\Big)^2 + {1 \over q-1} \Big({x-z \over 2}\Big)^2 + {1 \over q-1} y^2
\bigg]^{1/2}
\end{eqnarray}
Furthermore,
\begin{eqnarray}
\Big({a+b \over 2}\Big)^2 + (q-1) \Big({a-b \over 2}\Big)^2 + (q-1) c^2
= {q-1 \over 2}\,\, {\rm Tr} (k^2) + {2 - q \over 4}\,\, ({\rm Tr} k )^2
\end{eqnarray}
where $k$ is the $2 \times 2$ matrix
\begin{eqnarray}
k = \pmatrix{a & c \cr
c & b}
\end{eqnarray}
Since $q \leq 2$, (\ref{thm2.1}) implies that
\begin{eqnarray}\label{3.3b}
\bigg[ {q-1 \over 2}\,\, {\rm Tr} (k^2) + {2 - q \over 4}\,\, ({\rm Tr} k )^2 \bigg]^{1/2} & \leq &
2^{-1/q} \,\, \bigg|\bigg| \pmatrix{A & C \cr D & B} \bigg|\bigg|_q \nonumber \\
& = & 2^{-1/q} \, || K ||_q \nonumber \\
& = & 2^{-1/q}
\end{eqnarray}
Combining (\ref{3.2}), (\ref{3.3a}) and (\ref{3.3b}) gives
\begin{eqnarray}
\bigg|\bigg| \pmatrix{X & Y \cr W & Z} \bigg|\bigg|_p & \leq &
2^{1-1/q} \,\, \bigg[ \Big({x+z \over 2}\Big)^2 + {1 \over q-1} \Big({x-z \over 2}\Big)^2 +
{1 \over q-1} y^2 \bigg]^{1/2} \nonumber
\\ & = &
2^{1/p} \,\, \bigg[ \Big({x+z \over 2}\Big)^2 + (p-1) \Big({x-z \over 2}\Big)^2 + (p-1) y^2 \bigg]^{1/2}
\nonumber \\
& = & 2^{1/p} \bigg[ {p-1 \over 2}\,\, {\rm Tr} (\alpha^2) + {2 - p \over 4}\,\, ({\rm Tr} \alpha )^2 \bigg]^{1/2}
\end{eqnarray}
where $\alpha$ was defined in (\ref{def:alpha}), and this proves (\ref{thm2.2}).
Suppose now that $1 \leq p \leq 2$.
The first step in the proof of (\ref{thm2.1}) is to reduce the result to the case where the matrix is
self-adjoint. This is done by modifying an argument from section III of
\cite{BCL}. Given $X$, $Y$, $W$ and $Z$ define the matrices
\begin{eqnarray}
J = \pmatrix{X & Y \cr W & Z}
\end{eqnarray}
and
\begin{eqnarray}
L = \pmatrix{0 & X & 0 & Y \cr X^{*} & 0 & W^{*} & 0 \cr
0 & W & 0 & Z \cr Y^{*} & 0 & Z^{*} & 0}
\end{eqnarray}
Then $L = L^{*}$ and furthermore
\begin{eqnarray}\label{3.4}
{\rm Tr} | L |^p = {\rm Tr} (L^{*} L)^{p/2} = {\rm Tr} (J^{*} J)^{p/2} + {\rm Tr} (J J^{*})^{p/2} =
2 \, {\rm Tr} |J|^p
\end{eqnarray}
Assuming that (\ref{thm2.1}) holds for self-adjoint matrices, it implies that
\begin{eqnarray}\label{3.5}
|| L ||_p \geq 2^{1/p} \,\,
\bigg[ {p-1 \over 2}\,\, {\rm Tr} (\beta^2) + {2 - p \over 4}\,\, ({\rm Tr} \beta )^2 \bigg]^{1/2}
\end{eqnarray}
where $\beta$ is given by
\begin{eqnarray}\label{3.6}
\beta = \pmatrix{2^{1/p} \,||X||_p & \Big( ||Y||_p^{p} + ||W||_p^{p} \Big)^{1/p} \cr
\Big( ||Y||_p^{p} + ||W||_p^{p} \Big)^{1/p} & 2^{1/p} \,||Z||_p}
\end{eqnarray}
Comparing with (\ref{def:alpha}) shows that $\beta = 2^{1/p} \alpha$, and hence
(\ref{3.4}) and (\ref{3.5}) imply (\ref{thm2.1}).
The self-adjoint case will be handled by modifying slightly a very non-trivial
proof in section III of the paper \cite{BCL}.
For convenience we state the hard part of the proof in \cite{BCL} as a separate lemma
here, and refer the reader to the original source for its proof.
\begin{lemma}\label{lemma5} {\rm [Ball, Carlen and Lieb]}
Let $A$ and $B$ be self-adjoint $n \times n$ matrices,
with $A$ non-singular, and suppose that $1 \leq p \leq 2$.
Then
\begin{eqnarray}
{d^2 \over d r^2} \bigg( {\rm Tr} |A + r B|^p \bigg)^{2/p} \bigg|_{r=0}
\geq 2 (p-1) \bigg( {\rm Tr} |B|^p \bigg)^{2/p}
\end{eqnarray}
\end{lemma}
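Lemma \ref{lemma5} can be checked by finite differences; the sketch below (editorial, Python with NumPy assumed; the random instance, step size and tolerance are ad hoc choices) compares a centered second difference of $\big({\rm Tr}\,|A + rB|^p\big)^{2/p}$ at $r = 0$ against the stated lower bound:

```python
import numpy as np

def f(A, B, r, p):
    """(Tr |A + r B|^p)^(2/p) for self-adjoint A, B."""
    w = np.linalg.eigvalsh(A + r * B)
    return float(np.sum(np.abs(w) ** p) ** (2.0 / p))

rng = np.random.default_rng(3)
n, p, h = 4, 1.5, 1e-4
H = rng.normal(size=(n, n)); A = H + H.T + 6.0 * np.eye(n)  # self-adjoint, nonsingular
H = rng.normal(size=(n, n)); B = H + H.T

second = (f(A, B, h, p) - 2.0 * f(A, B, 0.0, p) + f(A, B, -h, p)) / h ** 2
bound = 2.0 * (p - 1.0) * f(B, B, 0.0, p)   # 2 (p-1) (Tr |B|^p)^(2/p)
print(second >= bound - 1e-3)
```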
Now suppose that $X$, $Y$ and $Z$ are $n \times n$ complex matrices with
$X$ and $Z$ self-adjoint. Define
\begin{eqnarray}
F = \pmatrix{X & 0 \cr 0 & Z}, \quad
G = \pmatrix{0 & Y \cr Y^{*} & 0}
\end{eqnarray}
Using the notation introduced in (\ref{def:x}), the goal is to show that
\begin{eqnarray}\label{3.7}
\bigg( {\rm Tr} |F + r G|^{p} \bigg)^{2/p} \geq 2^{2/p} \,
\bigg[ \Big({x+z \over 2}\Big)^2 + (p-1) \Big({x-z \over 2}\Big)^2 + (p-1) r^2 y^2 \bigg]
\end{eqnarray}
at the value $r=1$, where now $y = ||Y||_p$. First, it is easy to show
that (\ref{3.7}) holds at $r=0$: in this case the left side is
$(x^p + z^p)^{2/p}$, and Gross's two-point inequality (\ref{Gross}) implies that
\begin{eqnarray}
(x^p + z^p)^{2/p} \geq 2^{2/p} \Big[ \Big({x+z \over 2}\Big)^2 +
(p-1) \Big({x-z \over 2}\Big)^2
\Big]
\end{eqnarray}
Second, both sides of (\ref{3.7}) are even functions of $r$ (the left side because
the matrices $F + r G$ and $F - r G$ have the same spectrum), hence the
derivatives of both sides vanish at $r=0$.
Therefore it is sufficient to prove that
\begin{eqnarray}\label{3.8}
{d^2 \over d r^2} \bigg( {\rm Tr} |F + r G|^{p} \bigg)^{2/p} \geq 2^{2/p} \,
2 (p-1) y^2 = 2 (p-1) \bigg( {\rm Tr} |G|^p \bigg)^{2/p}
\end{eqnarray}
for all $0 \leq r \leq 1$. The inequality (\ref{3.8}) is established by
the following argument (again borrowed from \cite{BCL}).
By continuity, it can be assumed that the ranges
of $F$ and $G$ span all of ${\bf C}^{2n}$ (recall that $X$, $Y$, $Z$ are $n \times n$
matrices) and therefore that $F + r G$ is non-singular at all but possibly
$2n$ values of $r$ in the interval $0 \leq r \leq 1$. By continuity again it
is sufficient to establish (\ref{3.8}) at these non-singular values.
Let $r_0$ be such a non-singular value, and let $A = F + r_0 G$ and $B = G$.
Then at $r=r_0$, (\ref{3.8}) becomes
\begin{eqnarray}\label{3.9}
{d^2 \over d r^2} \bigg( {\rm Tr} |A + r B|^p \bigg)^{2/p} \bigg|_{r=0}
\geq 2 (p-1) \bigg( {\rm Tr} |B|^p \bigg)^{2/p}
\end{eqnarray}
But this is exactly the statement of Lemma \ref{lemma5}, hence (\ref{thm2.1}) is
proved.
\section{Proofs of Lemmas}
\subsection{Proof of Lemma \ref{lemma2}}
This result is a slight modification of a convexity result proved
in Section IV of \cite{BCL}. For a positive matrix $M = \pmatrix{X & Y \cr Y^{*} & Z} \geq 0$,
define $M_d = \pmatrix{X & 0 \cr 0 & Z} \geq 0$
and $F = M - M_d$. Let
\begin{eqnarray}
D = \pmatrix{D_1 & 0 \cr 0 & D_2} = D^{*}
\end{eqnarray}
be a block diagonal self-adjoint matrix, and define
\begin{eqnarray*}
\phi(s) & = & {\rm Tr} (M + s D)^p - {\rm Tr} (M_d + s D)^p \\
& = & {\rm Tr} (M_d + F + s D)^p - {\rm Tr} (M_d + s D)^p
\end{eqnarray*}
Then for $1 \leq p \leq 2$ the second derivative of $\phi$ has the following integral
representation (see \cite{BCL} for details):
\begin{eqnarray}\label{phi-der}
{\phi}''(0) & = & p {\gamma}_p \int_{0}^{\infty}
t^{p-1} {\rm Tr} \bigg( {1 \over t + M_d + F} D {1 \over t + M_d + F} D -
{1 \over t + M_d} D {1 \over t + M_d} D \bigg) d t \nonumber \\
&&
\end{eqnarray}
for some constant $\gamma_p$.
Furthermore, the matrices $M_d + F + s D$ and $M_d - F + s D$ have the
same spectrum, hence (\ref{phi-der}) can be written
\begin{eqnarray}\label{phi-der2}
{\phi}''(0) = {p \over 2} {\gamma}_p \int_{0}^{\infty}
t^{p-1} {\rm Tr} \bigg( {1 \over t + M_d + F} & D & {1 \over t + M_d + F} \,\, D \\ \nonumber
+ {1 \over t + M_d - F} & D & {1 \over t + M_d - F}\,\, D \\ \nonumber
- 2 \,\,{1 \over t + M_d} & D & {1 \over t + M_d}\,\, D \bigg) d t
\end{eqnarray}
Ball, Carlen and Lieb \cite{BCL} proved that for $t \geq 0$, and
for any self-adjoint matrix $A$, the map
\begin{eqnarray}
X \longmapsto {\rm Tr} {1 \over t + X} A {1 \over t + X} A
\end{eqnarray}
is convex on the set of positive
matrices. Applying this to (\ref{phi-der2}) with $X = M_d$ and $A = D$
shows that ${\phi}''(0) \geq 0$, which is the convexity result
in Lemma \ref{lemma2}.
\subsection{Proof of Lemma \ref{lemma3}}
Since $g$ is homogeneous it is sufficient to prove that
\begin{eqnarray}
g(A + B) \leq g(A) + g(B)
\end{eqnarray}
for any $A,B$ of the specified form. To prove this,
it is sufficient to show that
\begin{eqnarray}
{d \over dt} g(A + t B) |_{t=0} \leq g(B)
\end{eqnarray}
for any $A,B$. Let
\begin{eqnarray}
A = \pmatrix{a & c \cr c & b},\quad
B = \pmatrix{x & y \cr y & z}
\end{eqnarray}
Define
\begin{eqnarray}
M = \pmatrix{a^{1/p} & c^{1/p} \cr c^{1/p} & b^{1/p}},\quad
L = \pmatrix{a^{(1-p)/p} x & c^{(1-p)/p} y \cr
c^{(1-p)/p} y & b^{(1-p)/p} z}
\end{eqnarray}
Then
\begin{eqnarray}\label{der1}
{d \over dt} g(A + t B) |_{t=0} = {\rm Tr} M^{p-1} \, L
\end{eqnarray}
The idea of the proof is to maximise the right side of (\ref{der1}) as
a function of $M$, and show that the maximum is achieved when
$A$ and $B$ are proportional, in which case the bound is an equality.
This will be done by
explicitly finding the critical points of ${\rm Tr} M^{p-1} \, L$.
To this end write the spectral decomposition of $M$ in the form
\begin{eqnarray}
M = \pmatrix{a^{1/p} & c^{1/p} \cr c^{1/p} & b^{1/p}} = \lambda P_1 + \mu P_2
\end{eqnarray}
where $P_i$ are projectors onto the normalised eigenvectors of $M$,
and $\lambda, \mu$ are the eigenvalues (notice that the
positivity of $A$ and $B$ implies that both $M$ and $L$ are also
positive). If we assume that
$\lambda \geq \mu$ then for some $0 \leq t \leq 1$ we have
\begin{eqnarray}
a^{1/p} & = & \lambda t + \mu (1-t) \\
c^{1/p} & = & \sqrt{t(1-t)}(\lambda - \mu) \\
b^{1/p} & = & \lambda (1-t) + \mu t
\end{eqnarray}
Furthermore it also follows that
\begin{eqnarray}
M^{p-1} = \pmatrix{k_{11} & k_{12} \cr k_{12} & k_{22}} = {\lambda}^{p-1} P_1 +
{\mu}^{p-1} P_2
\end{eqnarray}
where
\begin{eqnarray}
k_{11} & = & {\lambda}^{p-1} t + {\mu}^{p-1} (1-t) \\
k_{12} & = & \sqrt{t(1-t)}({\lambda}^{p-1} - {\mu}^{p-1}) \\
k_{22} & = & {\lambda}^{p-1} (1-t) + {\mu}^{p-1} t
\end{eqnarray}
Substituting into (\ref{der1}) gives
\begin{eqnarray}\label{der2}
{\rm Tr} M^{p-1} \, L = k_{11} a^{(1-p)/p} x + 2 k_{12} c^{(1-p)/p} y
+ k_{22} b^{(1-p)/p} z
\end{eqnarray}
Equation (\ref{der2}) is invariant under a rescaling of $M$.
Define
\begin{eqnarray}
h = {\mu \over \lambda}, \quad\quad 0 \leq h \leq 1
\end{eqnarray}
then (\ref{der2}) is a function of $t$ and $h$, and can be written as
\begin{eqnarray}
{\rm Tr} M^{p-1} \, L = F(t,h) = F_1 (t,h) x + F_2 (t,h) y + F_3 (t,h) z
\end{eqnarray}
where
\begin{eqnarray}
F_1(t,h) & = & {t + (1-t) h^{p-1} \over (t + (1-t) h)^{p-1}} \\
F_2(t,h) & = & 2 \bigg(t(1-t)\bigg)^{1 - p/2} {1 - h^{p-1} \over
(1-h)^{p-1}} \\
F_3(t,h) & = & F_1(1-t,h)
\end{eqnarray}
The goal is to maximise $F(t,h)$ over $t$ and $h$. Define
\begin{eqnarray}
G & = & \Big(t + (1-t) h\Big) \Big(1 - h^{p-1}\Big) - (p-1) (1-h)
\Big(t + (1-t) h^{p-1}\Big) \\
H & = & \Big((1-t) + t h\Big) \Big(1 - h^{p-1}\Big) - (p-1) (1-h)
\Big((1-t) + t h^{p-1}\Big)
\end{eqnarray}
and also let
\begin{eqnarray}
\xi & = & x \Big(t + (1-t) h\Big)^{-p} \\
\eta & = & y (1-h)^{-p} \, \Big(t(1-t)\Big)^{-p/2} \\
\zeta & = & z \Big(1-t + t h\Big)^{-p}
\end{eqnarray}
Then explicit calculation shows that
\begin{eqnarray}
{\partial F \over \partial t} = G \xi - (G-H) \eta - H \zeta
\end{eqnarray}
and
\begin{eqnarray}
{\partial F \over \partial h} = - t(1-t) (p-1) (1 - h^{p-2})
(\xi - 2 \eta + \zeta)
\end{eqnarray}
The critical equations are
\begin{eqnarray}\label{crit}
{\partial F \over \partial t} = {\partial F \over \partial h} = 0
\end{eqnarray}
One obvious set of solutions is obtained when $t=0$ or $t=1$, or $h=1$.
In all of these cases, the matrix $M$ must be
diagonal, in which case (\ref{der1}) implies
\begin{eqnarray}
{\rm Tr} M^{p-1} \, L = {\rm Tr} B = {\rm Tr} \pmatrix{x^{1/p} & 0 \cr
0 & z^{1/p}}^p \leq g(B)
\end{eqnarray}
and this establishes the result. If $0 < t < 1$ and $h < 1$, the critical
equations can be written
\begin{eqnarray}\label{crit2}
G (\xi - \eta) & = & H(\zeta - \eta) \nonumber \\
\xi - \eta & = & - (\zeta - \eta)
\end{eqnarray}
It is easy to show that $h < 1$ implies that $G >0$ and $H > 0$,
hence the solution of (\ref{crit2}) satisfies $\xi = \eta = \zeta$.
In this case $M$ must be proportional
to the matrix
\begin{eqnarray}
\pmatrix{x^{1/p} & y^{1/p} \cr y^{1/p} & z^{1/p}}
\end{eqnarray}
and substituting into (\ref{der1}) then gives
\begin{eqnarray}
{\rm Tr} M^{p-1} \, L = g(B)
\end{eqnarray}
which proves the result.
\subsection{Proof of Lemma \ref{lemma4}}
By the convexity result Lemma \ref{lemma3}, it is sufficient to prove that
the function $(a,b) \mapsto {\rm Tr} A^{p} - a^{p} - b^{p}$ is decreasing as $a, b \rightarrow \infty$.
For $a \gg 1$, and for $1 < p < 2$, easy estimates show that
\begin{eqnarray}
{\rm Tr} A^{p} - a^{p} - b^{p} \simeq p c^2 a^{p-2}
\end{eqnarray}
which is indeed decreasing. Similarly for $b$.
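The asymptotic estimate is easy to confirm numerically; the sketch below (an editorial illustration, Python with NumPy assumed) checks that the ratio of ${\rm Tr}\, A^p - a^p - b^p$ to $p\, c^2 a^{p-2}$ approaches $1$ as $a \to \infty$:

```python
import numpy as np

def excess(a, b, c, p):
    """Tr A^p - a^p - b^p for A = [[a, c], [c, b]]."""
    w = np.linalg.eigvalsh(np.array([[a, c], [c, b]]))
    return float(np.sum(w ** p)) - a ** p - b ** p

p, b, c = 1.5, 1.0, 1.0
ratios = [excess(a, b, c, p) / (p * c * c * a ** (p - 2.0))
          for a in (1e2, 1e4, 1e6)]
print(abs(ratios[-1] - 1.0) < 1e-2)
```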
\section{Application to qubit maps}
Quantum information theory has generated an interesting conjecture concerning
completely positive maps on matrix algebras.
Let $\Phi$ be a completely positive trace-preserving (CPTP) map
on the algebra of $n \times n$ matrices.
The minimal entropy of $\Phi$ is defined by
\begin{eqnarray}\label{def:Smin}
S_{\rm min}(\Phi) = \inf_{\rho} S(\Phi(\rho))
\end{eqnarray}
where $S$ is the von Neumann entropy and the $\inf$ runs over $n \times n$
density matrices (satisfying $\rho \geq 0$ and ${\rm Tr} \rho = 1$).
Minimal entropy is conjectured to be additive for product maps, that is,
it is conjectured that
\begin{eqnarray}\label{conj1}
S_{\rm min}(\Phi_1 \otimes \Phi_2) = S_{\rm min}(\Phi_1) + S_{\rm min}(\Phi_2)
\end{eqnarray}
for any pair of CPTP maps $\Phi_1$ and $\Phi_2$. The conjecture (\ref{conj1}) has been
established in some special cases \cite{S}, \cite{K2} but a general proof remains elusive.
For related reasons,
Amosov, Holevo and Werner \cite{AHW} defined the maximal $p$-norm for a CPTP map to be
\begin{eqnarray}\label{def:nu}
{\nu}_{p}(\Phi) = \sup_{\rho} || \Phi(\rho) ||_p
\end{eqnarray}
where the $\sup$ runs again over density matrices. They conjectured that this quantity
is multiplicative for product maps, that is
\begin{eqnarray}\label{AHW}
{\nu}_{p}(\Phi_1 \otimes \Phi_2) = {\nu}_{p}(\Phi_1) \,\, {\nu}_{p}(\Phi_2)
\end{eqnarray}
Holevo and Werner later discovered a family of counterexamples to this conjecture
for $p \geq 4.79$,
using maps which act on $3 \times 3$ or higher
dimensional matrices \cite{WH}. The conjecture remains open
if at least one of the pair is a qubit map
(which acts on $2 \times 2$ matrices) or if $p \leq 4$.
As an application of Theorem \ref{thm1}, we now show that it implies the
result (\ref{AHW}) in one special case, namely when $\Phi_1$ is the qubit
depolarizing channel and $p \geq 2$. This result was
derived previously using a lengthier argument \cite{K2}, and the purpose of
this presentation is to explore an alternative method which may allow
new approaches to the additivity problem. Indeed,
the method shown below can be easily extended to cover all unital qubit channels
and even some non-unital qubit maps, thus extending the results in \cite{K1}
which were derived for integer values of $p$.
Unfortunately, the restriction to $p \geq 2$ does not allow any conclusions to be drawn about
additivity of minimal entropy.
The depolarizing channel $\Delta$ acts on a state
$\rho = \pmatrix{a & c \cr \overline{c} & b}$ by
\begin{eqnarray}
\Delta(\rho) = \lambda \rho + {1 - \lambda \over 2} I =
\pmatrix{{\lambda}_{+} a + {\lambda}_{-} b & \lambda c \cr
\lambda \overline{c} & {\lambda}_{-} a + {\lambda}_{+} b}
\end{eqnarray}
where $\lambda$ is a real parameter and
${\lambda}_{\pm} = (1 \pm \lambda)/2$.
We will suppose here that $0 \leq \lambda \leq 1$.
The maximal $p$-norm of $\Delta$ is easily computed to be
\begin{eqnarray}
{\nu}_{p}(\Delta) = \Bigg( \Big({1 + \lambda \over 2}\Big)^p +
\Big({1 - \lambda \over 2}\Big)^p \Bigg)^{1/p}
\end{eqnarray}
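This formula reflects the fact that every pure input saturates the supremum: for a pure state $\rho$, the output $\Delta(\rho)$ has eigenvalues $(1 \pm \lambda)/2$. A numerical confirmation (an editorial sketch, Python with NumPy assumed; the values of $\lambda$ and $p$ are arbitrary):

```python
import numpy as np

def delta(rho, lam):
    """Qubit depolarizing channel: lam * rho + (1 - lam)/2 * Tr(rho) * I."""
    return lam * rho + (1.0 - lam) / 2.0 * np.trace(rho) * np.eye(2)

def schatten(A, p):
    return float(np.sum(np.linalg.svd(A, compute_uv=False) ** p) ** (1.0 / p))

lam, p = 0.6, 3.0
formula = (((1 + lam) / 2) ** p + ((1 - lam) / 2) ** p) ** (1.0 / p)

rng = np.random.default_rng(4)
best = 0.0
for _ in range(200):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    v /= np.linalg.norm(v)
    rho = np.outer(v, v.conj())          # random pure-state density matrix
    best = max(best, schatten(delta(rho, lam), p))

print(abs(best - formula) < 1e-6)
```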
Now consider a positive $2n \times 2n$ matrix $M$:
\begin{eqnarray}
M = \pmatrix{A & C \cr C^{*} & B}
\end{eqnarray}
The map $\Delta \otimes I$ acts on $M$ via
\begin{eqnarray}
(\Delta \otimes I) (M) =
\pmatrix{{\lambda}_{+} A + {\lambda}_{-} B & \lambda C \cr
\lambda C^{*} & {\lambda}_{-} A + {\lambda}_{+} B}
\end{eqnarray}
Let $p \geq 2$, and
let $q \leq 2$ be the index conjugate to $p$. Then
as explained at the start of section 2,
there is a positive $2n \times 2n$ matrix $K$ satisfying
$|| K ||_q = 1$ such that
\begin{eqnarray}\label{eqn5}
|| (\Delta \otimes I) (M) ||_p = {\rm Tr} \bigg(K (\Delta \otimes I) (M) \bigg)
\end{eqnarray}
Following the methods used in (\ref{a->b}), this leads to
\begin{eqnarray}
{\rm Tr} \bigg(K (\Delta \otimes I) (M) \bigg) & \leq &
\bigg|\bigg| \pmatrix{
{\lambda}_{+} ||A||_p + {\lambda}_{-} ||B||_p & \lambda ||C||_p \cr
\lambda ||C||_p & {\lambda}_{-} ||A||_p + {\lambda}_{+} ||B||_p}
\bigg|\bigg|_{p} \nonumber \\
& = & || \Delta(m) ||_p
\end{eqnarray}
where $m$ is the $2 \times 2$ matrix
\begin{eqnarray}
m = \pmatrix{||A||_p & ||C||_p \cr
||C||_p & ||B||_p}
\end{eqnarray}
By definition of the $p$-norm this implies
\begin{eqnarray}\label{nextineq}
|| (\Delta \otimes I) (M) ||_p \leq {\nu}_{p}(\Delta) \,\, \Big( ||A||_p + ||B||_p \Big)
\end{eqnarray}
Now let $\rho$ be a $2n \times 2n$ density matrix,
\begin{eqnarray}
\rho = \pmatrix{\rho_{11} & \rho_{12} \cr \rho_{21} & \rho_{22}}
\end{eqnarray}
and consider the case where $M = (I \otimes \Phi)(\rho)$ and $\Phi$ is some other
channel, so that $(\Delta \otimes I) (M) = (\Delta \otimes \Phi)(\rho)$. Then
\begin{eqnarray}
A = \Phi(\rho_{11}), \quad B = \Phi(\rho_{22})
\end{eqnarray}
and hence
\begin{eqnarray}
||A||_p + ||B||_p \leq \nu_{p}(\Phi) \,\, {\rm Tr} (\rho_{11} + \rho_{22})
= \nu_{p}(\Phi)
\end{eqnarray}
Therefore (\ref{nextineq}) implies that
\begin{eqnarray}\label{ineq5}
|| (\Delta \otimes \Phi) (\rho) ||_p \leq {\nu}_{p}(\Delta) \, \nu_{p}(\Phi)
\end{eqnarray}
Since (\ref{ineq5}) is valid for all $\rho$, we get
\begin{eqnarray}
{\nu}_{p}(\Delta \otimes \Phi) \leq
{\nu}_{p}(\Delta) \, \nu_{p}(\Phi)
\end{eqnarray}
and this establishes the result (\ref{AHW}), since the inequality in the other direction
follows by restricting to product states.
{\bf Acknowledgements}
This work was supported in part by
National Science Foundation Grant DMS--0101205.
\begin{thebibliography}{~~}
\bibitem{AHW} G.G. Amosov, A.S. Holevo, and R.F. Werner,
``On Some Additivity Problems in Quantum Information Theory'',
{\em Problems in Information Transmission},
{\bf 36}, 305 -- 313 (2000).
\bibitem{BCL} K. Ball, E. Carlen and E. Lieb,
``Sharp uniform convexity and smoothness inequalities for trace
norms'', {\em Invent. math.} {\bf 115}, 463 -- 482 (1994).
\bibitem{BS} C. H. Bennett and P.W. Shor,
``Quantum Information Theory''
{\em IEEE Trans. Info. Theory} {\bf 44}, 2724--2748 (1998).
\bibitem{G} L. Gross,
``Logarithmic Sobolev inequalities'',
{\em Am. Jour. Math.} {\bf 97}, 1061 -- 1083 (1975).
\bibitem{Ha} O. Hanner,
``On the uniform convexity of $L^p$ and $l^p$'',
{\em Ark. Math.} {\bf 3}, 239 -- 244 (1958).
\bibitem{K1} C. King,
``Maximization of capacity and $l_p$ norms for some
product channels'', {\em Jour. Math. Phys.} {\bf 43}, no. 3,
1247 -- 1260 (2002).
\bibitem{K2} C. King,
``Additivity for unital qubit channels'',
{\em Jour. Math. Phys.} {\bf 43}, no. 10, 4641 -- 4653 (2002).
\bibitem{S} P. W. Shor,
``Additivity of the classical capacity of entanglement-breaking quantum channels'',
{\em Jour. Math. Phys.} {\bf 43}, no. 9, 4334 -- 4340 (2002).
\bibitem{T-J} N. Tomczak-Jaegermann,
``The moduli of smoothness and convexity and Rademacher
averages of trace classes $S_p$'', {\em Studia Math.} {\bf 50},
163 -- 182 (1974).
\bibitem{WH} R. F. Werner and A. S. Holevo,
``Counterexample to an additivity conjecture for output purity
of quantum channels'',
{\em Jour. Math. Phys.} {\bf 43}, no. 9, 4353 -- 4357 (2002).
\end{thebibliography}
\end{document}
\begin{document}
\title{Entanglement Dynamics in Three Qubit $X$-States}
\author{Yaakov S. Weinstein}
\affiliation{Quantum Information Science Group, {\sc Mitre},
260 Industrial Way West, Eatontown, NJ 07224, USA}
\begin{abstract}
I explore the entanglement dynamics of a three qubit system in an initial $X$-state
undergoing decoherence, including the possible exhibition of entanglement sudden death (ESD).
To quantify entanglement I utilize negativity measures and make use of appropriate
entanglement witnesses. The negativity results are then extended to $X$-states
with an arbitrary number of qubits. I also demonstrate non-standard behavior of the
tri-partite negativity entanglement metric: its sudden appearance after some amount
of decoherence, followed quickly by its disappearance. Finally, I solve for a lower bound on
the three qubit $X$-state concurrence, demonstrate when this bound goes to zero,
and outline simplifications for the calculation of higher order $X$-state concurrences.
\end{abstract}
\pacs{03.67.Mn, 03.67.Bg, 03.67.Pp}
\maketitle
\section{Introduction}
Entanglement is a quantum mechanical phenomenon in which quantum systems exhibit
correlations above and beyond what is classically possible. As such it is a crucial
resource for many aspects of quantum information processing
including quantum computation, quantum cryptography and communications, and
quantum metrology \cite{book}. Due to its fundamental, and increasingly practical,
importance there is a growing body of literature dedicated to studies of entanglement.
Nevertheless, many aspects of entanglement, especially
multi-partite entanglement and its evolution, are in need of further
exploration \cite{HHH}.
The unavoidable degradation of entanglement due to decoherence has severely
hampered experimental attempts to realize quantum information protocols.
Decoherence is a result of unwanted interactions between the system of interest
and its environment. Highly entangled, and thus highly non-classical, states
may be severely corrupted by decoherence \cite{Dur}. This is especially
troubling as these states tend to be the most potentially useful for quantum information
protocols. An extreme manifestation of the detrimental effects of decoherence on
entanglement is entanglement sudden death (ESD), in which decoherence causes
a complete loss of entanglement in a finite time \cite{DH,YE1} despite the fact that
the system coherence goes to zero only asymptotically. Much has been written about this
aspect of entanglement for bi-partite systems and there have been several initial
experimental studies of this phenomenon \cite{expt}. Fewer studies look at
ESD, and specifically ESD with respect to multi-partite entanglement, in multi-partite systems
\cite{ACCAD,LRLSR,YYE,BYW,YSW2}.
A class of two qubit states that are generally known to exhibit ESD are the so called
$X$-states \cite{xstate}, so named due to the pattern of non-zero density
matrix elements. These states play an important role in a number of physical
systems \cite{Xexpt}, and allow for easy calculation of certain entanglement measures.
In this paper, I explore the entanglement dynamics of three qubit
$X$-states in dephasing and depolarizing environments as a function of decoherence strength.
Previous studies of three qubit $X$-shaped states utilize more restrictive sets of states:
GHZ-diagonal states \cite{GS} and generalized GHZ-diagonal states \cite{A}. Other papers have
examined specific examples of three qubit $X$-state entanglement including the effects of
dephasing on a three-qubit quantum error correction protocol \cite{YSW1}.
To quantify entanglement within the three qubit systems I utilize the negativity,
$N_j$, defined as the most negative eigenvalue of the partial
transpose of the density matrix \cite{neg} with respect to qubit $j$. This
provides three distinct entanglement measures. As a pure tri-partite
entanglement metric for mixed states I will use the tri-partite negativity, $N^{(3)}$
which is simply the third root of the product of the negativities with respect to each of
three qubits \cite{SGA}, $N^{(3)} \equiv (N_1N_2N_3)^{1/3}$. A mixed state with non-zero
$N^{(3)}$ is distillable to a GHZ state. It is important to note the existence of bound
entanglement which may be present even if all negativity measures in a three qubit system
are equal to zero. Thus, when I refer to ESD of a state with respect to given negativity metrics
this should not be confused with separability of the state. Nevertheless, besides general interest
in the behavior of these entanglement metrics, the disappearance of negativity plays an important
role in quantum information protocols in that it indicates that the entanglement of the state is
not distillable.
The physical significance of $X$-states mentioned above demands an efficient means of
experimentally determining the presence of entanglement. This can be accomplished via `entanglement witnesses.'
Three qubit states can be separated into four broad categories: separable (in all three qubits),
biseparable, and two types of locally inequivalent tri-partite entangled states (GHZ
and W-type) \cite{DVC}. Reference \cite{ABLS} provides a similar classification scheme for
mixed states, each class including within it the previous classes. These are separable (S)
states, bi-separable (B) states, W states, and GHZ states, which together encompass the complete
set of three qubit states.
Entanglement witnesses are used to determine in which class a given state belongs.
These observables give a positive or zero expectation value for
all states of a given class and negative expectation values for at least one
state in a higher ({\it i.e.}~more inclusive) class. I will make use of specific
entanglement witnesses \cite{ABLS} that will identify whether a state is in the
GHZ$\backslash$W class ({\it i.e.}~a state in the GHZ class but not in the W class),
in which case the state has experimentally observable GHZ-type tri-partite entanglement.
Though the use of entanglement witnesses cannot guarantee that entanglement is not present, it does
give experimental bounds on whether the entanglement can be observed.
The results presented in this paper are (i) the analytical determination of various negativity measures
for $X$-states of an arbitrary number of qubits including how the negativity evolves under decoherence,
(ii) the demonstration that negativity disappears in finite time for $X$-states subject to different
types of decoherence and the (analytical and numerical) determination
of the decoherence strength when this occurs, (iii) the analytical calculation of the expectation value of
$X$-states undergoing decoherence with respect to appropriate entanglement witnesses, (iv) the demonstration
of the sudden appearance, only at non-zero decoherence strength, in some $X$-states of the tri-partite
negativity, and (v) the calculation of a bound on the three qubit concurrence for $X$-states and the
description of how this can be extended to more qubits. In addition, I prove in
the Appendix that the set of generalized GHZ-diagonal states do not cover all possible $X$-states.
\section{Three Qubit $X$-States}
There are a number of classes of three qubit states whose entanglement properties
have been studied and whose non-zero density matrix elements form an $X$ shape:
\begin{equation}
\rho_X(a_j,b_j,c_j) = \left(
\begin{array}{cccccccc}
a_1 & 0 & 0 & 0 & 0 & 0 & 0 & c_1\\
0 & a_2 & 0 & 0 & 0 & 0 & c_2 & 0\\
0 & 0 & a_3 & 0 & 0 & c_3 & 0 & 0\\
0 & 0 & 0 & a_4 & c_4 & 0 & 0 & 0\\
0 & 0 & 0 & c_4^* & b_4 & 0 & 0 & 0\\
0 & 0 & c_3^* & 0 & 0 & b_3 & 0 & 0\\
0 & c_2^* & 0 & 0 & 0 & 0 & b_2 & 0\\
c_1^* & 0 & 0 & 0 & 0 & 0 & 0 & b_1\\
\end{array}
\right)
\label{xstate}
\end{equation}
where $j = 1,...,4$. The most basic is a pure three qubit GHZ state with wavefunction
$|\psi_k^{\pm}\rangle = \frac{1}{\sqrt{2}}(|k\rangle\pm|\overline{k}\rangle)$, where
$k$ is a three bit binary number between zero and seven and $\overline{k}$ is the result of
flipping each bit of $k$. The density matrix of this state is $\rho_X(1/2_j,1/2_j,\pm1/2_j)$.
Mixed states that are diagonal in the basis of these eight states form the
set of GHZ-diagonal states studied in \cite{GS}. The basis states have coefficients
$\sqrt{\lambda_k^{\pm}}$ for all $0 \leq k \leq 3$, the squares of which sum to one.
The density matrix elements of these states are thus
$a_j = b_j = \lambda_k^{+}+\lambda_k^{-}$ and $c_j = \lambda_k^{+}-\lambda_k^{-}$.
A generalized GHZ state is a non-maximally entangled state of the form:
\begin{equation}
|\psi_k^{\pm}(\alpha,\beta)\rangle = \alpha|k\rangle\pm\beta|\overline{k}\rangle.
\end{equation}
The density matrix of this state is $\rho_X(|\alpha|^2_j,|\beta|^2_j,\pm\alpha\beta^*_j)$
for $j = 1,...,4$. An incoherent mixture of generalized GHZ states, where $k$ now ranges from 0 to 7
(as opposed to 0 to 3 as used in \cite{GS}), forms a generalized GHZ-diagonal state.
These states are studied in \cite{A} and have the form:
\begin{equation}
\rho = \sum_{k = 0}^7\lambda_k^+|\psi_k^+(\alpha,\beta)\rangle\langle\psi_k^+(\alpha,\beta)|+\lambda_k^-|\psi_k^-(\alpha,\beta)\rangle\langle\psi_k^-(\alpha,\beta)|.
\end{equation}
For these states the density matrix elements are as follows:
\begin{eqnarray}
a_j &=& |\alpha|^2(\lambda_k^{+}+\lambda_k^{-}) + |\beta|^2(\lambda_{\overline{k}}^{+}+\lambda_{\overline{k}}^{-}) \\
b_j &=& |\beta|^2(\lambda_k^{+}+\lambda_k^{-}) + |\alpha|^2(\lambda_{\overline{k}}^{+}+\lambda_{\overline{k}}^{-}) \\
c_j &=& \alpha\beta^*(\lambda_k^{+}-\lambda_k^{-}) + \alpha^*\beta(\lambda_{\overline{k}}^{+}-\lambda_{\overline{k}}^{-})
\end{eqnarray}
for $1 \leq j \leq N/2$ and $k = j - 1$. However, as shown in the Appendix,
generalized GHZ-diagonal states do not include all possible $X$-states. This is due to the restriction
of constant $\alpha$ and $\beta$ for all contributing generalized GHZ states.
In this paper I consider $X$-states that are completely general, limited only by the restriction
that the state is a proper density matrix. For convenience, I will refer to the four density matrix elements
$a_j, b_j, c_j$ and $c_j^*$ of the X-state as a GHZ-type state. At most, four GHZ-type states contribute to
each three qubit $X$-state.
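As a concrete numerical illustration (a sketch added here, not part of the original analysis), the $X$-state of Eq.~\ref{xstate} can be assembled and checked against the density-matrix conditions as follows:

```python
import numpy as np

def x_state(a, b, c):
    """Assemble the 8x8 three-qubit X-state of Eq. (xstate) from the
    parameters a_j, b_j, c_j (here indexed j = 0..3 for j = 1..4)."""
    rho = np.zeros((8, 8), dtype=complex)
    for j in range(4):
        rho[j, j] = a[j]                   # a_1..a_4 down the upper diagonal
        rho[7 - j, 7 - j] = b[j]           # b_4..b_1 on the lower diagonal
        rho[j, 7 - j] = c[j]               # c_j on the anti-diagonal
        rho[7 - j, j] = np.conj(c[j])      # c_j^* below it
    return rho

# Example: the GHZ state |psi_0^+> corresponds to a_1 = b_1 = c_1 = 1/2
rho = x_state([0.5, 0, 0, 0], [0.5, 0, 0, 0], [0.5, 0, 0, 0])
assert np.isclose(np.trace(rho), 1.0)
assert np.allclose(rho, rho.conj().T)               # Hermitian
assert np.linalg.eigvalsh(rho).min() >= -1e-12      # positive semidefinite
```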
\section{X-State Entanglement}
An $X$-state is a mixed state that can be written as a sum of GHZ-type states.
When a partial trace is taken over any one of the three qubits of an $X$-state the resulting
two qubit matrix is diagonal. This demonstrates that the entanglement of an $X$-state is either
tri-partite or biseparable but not completely separable. Before calculating any specific
entanglement metric and studying its decay in a given decohering environment, we note that
an upper bound on the entanglement decay was derived in \cite{A} for a number of different
decohering environments. Though these bounds were calculated for the more limited generalized
GHZ-diagonal states they appear to be appropriate to the states studied in this work. However,
these upper bounds go to zero only in the limit of complete decoherence. Thus, the states never
exhibit entanglement sudden death for any entanglement metric and the bounds cannot be used to
study the ESD phenomenon. Below, I explore specific entanglement metrics for which I provide
analytical solutions to exactly calculate the decoherence strength at which the $X$-states
exhibit ESD for the given entanglement metrics. While ESD of these metrics cannot guarantee
separability of the $X$-state it does provide important information concerning distillability
and the ability to determine the presence of entanglement.
To calculate the negativity of a three qubit $X$-state we take the eigenvalues
of the partial transpose of the density matrix with respect to one of the qubits.
These 24 eigenvalues (8 for each possible partial transpose) are all of the form:
\begin{equation}
E_{ij} = \frac{1}{2}\left(a_j+b_j\pm\sqrt{(a_j-b_j)^2+4|c_i|^2}\right)
\label{XEigs}
\end{equation}
for all $i,j = 1,...,4$ and $i\neq j$. From these eigenvalues
one can see how the negativity detects the entanglement of an $X$-state.
Let us first assume a GHZ-type state with the only non-zero elements
$a_j, b_j, c_j$ and $c_j^*$. The eigenvalues which utilize elements
$a_j$ and $b_j$ cannot be negative (since $a_j+b_j = 1$ and $c_i = 0$ for
all $i\neq j$). An additional three eigenvalues will be equal to $-|c_j|$,
demonstrating the entanglement in the system. $X$-states
that are sums of two GHZ type states have non-zero elements
$a_i, a_j, b_i, b_j, c_i, c_j, c_i^*, c_j^*$. Such states will again
have negative eigenvalues $-|c_i|$, $-|c_j|$ and two additional possibly negative eigenvalues
$\frac{1}{2}(a_k+b_k-\sqrt{(a_k-b_k)^2+4|c_\ell|^2})$ where $k,\ell = i,j$ and
$k \neq \ell$. As more density matrix elements of the $X$-state are filled up
the eigenvalues tend to have the form of these latter two eigenvalues.
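The eigenvalue structure above can be checked numerically. The following sketch (illustrative, using numpy) computes the partial transpose of a three-qubit density matrix with respect to a chosen qubit and verifies, for the GHZ state ($a_1 = b_1 = c_1 = 1/2$), that the most negative eigenvalue equals $-|c_1|$ for every choice of transposed qubit, as stated:

```python
import numpy as np

def partial_transpose(rho, qubit):
    """Partial transpose of an 8x8 three-qubit density matrix with
    respect to one qubit (qubit = 0, 1 or 2)."""
    r = rho.reshape(2, 2, 2, 2, 2, 2)     # axes: (i1,i2,i3, j1,j2,j3)
    r = np.swapaxes(r, qubit, qubit + 3)  # transpose that subsystem only
    return r.reshape(8, 8)

# GHZ-type X-state with a_1 = b_1 = c_1 = 1/2 (all other elements zero)
rho = np.zeros((8, 8))
rho[0, 0] = rho[7, 7] = rho[0, 7] = rho[7, 0] = 0.5

for q in range(3):
    ev = np.linalg.eigvalsh(partial_transpose(rho, q))
    # Eq. (XEigs) with a_j = b_j = 0 for j != 1 gives eigenvalues ±|c_1|
    assert np.isclose(ev.min(), -0.5)
```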
\subsection{Dephasing Environment}
We now look at the entanglement evolution of the three qubit $X$-states with no
interaction between the qubits, in an independent qubit dephasing environment
noting the exhibition of ESD with respect to the negativity. The independent qubit dephasing
environment is fully described by the Kraus operators
\begin{equation}
K_1 = \left(
\begin{array}{cc}
1 & 0 \\
0 & \sqrt{1-p} \\
\end{array}
\right); \;\;\;\;
K_2 = \left(
\begin{array}{cc}
0 & 0 \\
0 & \sqrt{p} \\
\end{array}
\right),
\end{equation}
where the dephasing parameter $p$ can also be written in a time-dependent fashion,
$p = 1-\exp(-\kappa t)$. When all three qubits undergo dephasing we have eight
Kraus operators each of the form
$A_l = (K_i\otimes K_j\otimes K_k)$ where $l = 1,2,...,8$ and $i,j,k = 1,2$.
The effect of a dephasing environment of strength $p$ on an $X$-state is to
reduce the anti-diagonal elements of the density matrix by a factor
$(1-p)^{3/2}$ while leaving the diagonal elements constant. To calculate the
negativity we look at the eigenvalues of the $X$-state after taking
the partial transpose with respect to the desired subsystem. The relevant
eigenvalues are now of the form:
\begin{equation}
\frac{1}{2}\left(a_j+b_j-\sqrt{(a_j-b_j)^2+4|c_i|^2(1-p)^3}\right).
\label{ZEigs}
\end{equation}
The eigenvalues go to zero when:
\begin{equation}
p = 1-\frac{(a_j b_j)^{1/3}}{|c_i|^{2/3}}.
\label{pEq1}
\end{equation}
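These expressions can be cross-checked against a direct simulation of the channel. The snippet below (an illustrative sketch) applies the eight three-qubit dephasing Kraus operators and confirms that the anti-diagonal elements are scaled by $(1-p)^{3/2}$ while the diagonal is unchanged:

```python
import numpy as np

def dephase3(rho, p):
    """Independent dephasing of strength p on each of three qubits,
    built from the single-qubit Kraus operators K1, K2 of the text."""
    K = [np.diag([1.0, np.sqrt(1 - p)]), np.diag([0.0, np.sqrt(p)])]
    out = np.zeros((8, 8), dtype=complex)
    for Ki in K:
        for Kj in K:
            for Kk in K:
                A = np.kron(np.kron(Ki, Kj), Kk)
                out += A @ rho @ A.conj().T
    return out

# GHZ-type X-state: a_1 = b_1 = c_1 = 1/2
rho = np.zeros((8, 8), dtype=complex)
rho[0, 0] = rho[7, 7] = rho[0, 7] = rho[7, 0] = 0.5

p = 0.36
rp = dephase3(rho, p)
assert np.isclose(rp[0, 7], 0.5 * (1 - p)**1.5)   # anti-diagonal damped
assert np.allclose(np.diag(rp), np.diag(rho))     # diagonal untouched
```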
Based on the above, it is easy to see that ESD with respect to negativity is not
exhibited by $X$-states made up of a single GHZ-type state (with non-zero elements
$a_j$ and $b_j$ satisfying $a_j + b_j = 1$, and $c_j$): the negativity with respect to any one qubit, and thus the
tri-partite negativity as well, is simply $-|c_j|(1-p)^{3/2}$. When the $X$-state is
a mixture of GHZ type states ESD may be exhibited.
Fig. \ref{fig1} shows a sample $X$-state that is the sum of two GHZ-type
states that exhibits ESD with respect to the negativity of the third qubit, $N_3$.
However, the state does not exhibit ESD with respect to $N_1$ and $N_2$: these remain negative for all
$p < 1$. The state thus exhibits ESD with respect to the tri-partite negativity at the same
dephasing strength as $N_3$. For stronger dephasing no tri-partite entanglement is detected.
\begin{figure}
\caption{(Color online) Eigenvalues of the partially transposed $X$-state density matrix,
as a function of dephasing strength $p$, for a sample $X$-state composed of two GHZ-type states.}
\label{fig1}
\end{figure}
The above can be compared to the experimental detection capabilities of entanglement witnesses.
Appropriate entanglement witnesses for $X$-states are of the sort:
\begin{equation}
W_k = \frac{3}{4}\openone - |GHZ(k)\rangle\langle GHZ(k)|
\end{equation}
where $|GHZ(k)\rangle = \frac{1}{\sqrt{2}}(|k\rangle+|\overline{k}\rangle)$.
For an $X$-state consisting of a single GHZ-type
state the entanglement witness $W_k$ gives $\rm{Tr}[W_k\rho] = \frac{1}{4}(1-4(1-p)^{3/2}|c_k|)$.
Thus, $W_k$ loses its ability to detect entanglement at dephasing strength
$p = 1-\frac{1}{2^{4/3}|c_k|^{2/3}}$.
The maximum occurs for $|c_k| = 1/2$, in which case the entanglement is no longer detected
at $p = 1-\frac{1}{2^{2/3}}$. For general $X$-states the entanglement witnesses give the
following:
following:
\begin{eqnarray}
\rm{Tr}[W_{\ell}\rho] &=& \frac{1}{4}(3(a_i+b_i+a_j+b_j+a_k+b_k) \nonumber\\
&+& a_{\ell}+b_{\ell}-4(1-p)^{3/2}|c_\ell|).
\end{eqnarray}
This can be solved for the exact value of $p$ at which the entanglement is no longer detected:
\begin{equation}
p = 1-\frac{(3(a_i+b_i+a_j+b_j+a_k+b_k)+a_{\ell}+b_{\ell})^{2/3}}{2^{4/3}|c_{\ell}|^{2/3}}.
\end{equation}
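The witness calculation can also be sketched numerically. The following illustrative check (added here, not the author's code) constructs $W_0$ for $|GHZ(0)\rangle$, applies it to a dephased single-GHZ-type $X$-state, and confirms both the expectation value $\frac{1}{4}(1-4(1-p)^{3/2}|c_k|)$ and the detection threshold $p = 1-\frac{1}{2^{4/3}|c_k|^{2/3}}$:

```python
import numpy as np

# Witness W_0 = (3/4)*I - |GHZ(0)><GHZ(0)|, with |GHZ(0)> = (|000>+|111>)/sqrt(2)
ghz = np.zeros(8); ghz[0] = ghz[7] = 1/np.sqrt(2)
W = 0.75*np.eye(8) - np.outer(ghz, ghz)

def rho_dephased(c, p):
    """Single-GHZ-type X-state (a_1 = b_1 = 1/2, c_1 = c) after dephasing:
    the anti-diagonal is scaled by (1-p)^{3/2}."""
    r = np.zeros((8, 8))
    r[0, 0] = r[7, 7] = 0.5
    r[0, 7] = r[7, 0] = c*(1 - p)**1.5
    return r

c = 0.5
for p in (0.0, 0.2, 0.9):
    tr = np.trace(W @ rho_dephased(c, p))
    assert np.isclose(tr, 0.25*(1 - 4*(1 - p)**1.5*c))

# Detection threshold: Tr[W rho] = 0 exactly at p = 1 - 1/(2^{4/3} c^{2/3})
p_star = 1 - 1/(2**(4/3) * c**(2/3))
assert np.isclose(np.trace(W @ rho_dephased(c, p_star)), 0.0)
```

For $c = 1/2$ the threshold evaluates to $p_\star = 1 - 2^{-2/3}$, consistent with the maximum quoted in the text.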
I note that which of the above witnesses is most sensitive may depend on the initial state and
the decoherence strength, and can be determined via a minimization process. What is important
is that the witnesses detect purely tri-partite entanglement, excluding states that are
biseparable but not completely separable.
\subsection{Depolarizing Environment}
I now look at an independent qubit depolarizing environment and, as above, explore the
entanglement evolution of the three qubit $X$-states. The Kraus
operators for this environment are:
\begin{equation}
K_1 = \sqrt{1-\frac{3p}{4}}\,\openone, \qquad
K_w = \frac{\sqrt{p}}{2}\sigma_w,
\end{equation}
where $\sigma_w$ are the Pauli spin operators,
$w = x,y,z$ and $p$ is now the depolarizing strength. The depolarizing environment
affects both the anti-diagonal and diagonal elements of the density matrix. The anti-diagonal
elements are simply reduced by a factor of $(1-p)^3$. The diagonal element $a_i$ becomes:
\begin{eqnarray}
a_i^{\prime} &=& a_i(1-\frac{3p}{2}+\frac{3p^2}{4}-\frac{p^3}{8}) \nonumber\\
&+& (a_j+a_k+b_{\ell})(\frac{p}{2}-\frac{p^2}{2}+\frac{p^3}{8}) \nonumber\\
&+& b_i\frac{p^3}{8}+(a_{\ell}+b_j+b_k)(\frac{p^2}{4}-\frac{p^3}{8})
\end{eqnarray}
where $(i,\ell) = (1,4)$ and $(j,k) = (2,3)$, or vice versa. For the $b_i^{\prime}$ elements
simply replace each term $b_l$ with $a_l$ and each $a_l$ with $b_l$, for $l = 1,...,4$.
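The map above can be verified against a direct Kraus-operator simulation. In this illustrative sketch the depolarizing channel is applied to a GHZ-type state; for $a_1 = b_1 = 1/2$ (all other elements zero) the analytic expression for $a_1^{\prime}$ reduces to $\frac{1}{2}[(1-p/2)^3 + (p/2)^3]$, which the simulation reproduces:

```python
import numpy as np

def depolarize3(rho, p):
    """Independent depolarizing channel of strength p on each of three
    qubits, using the Kraus operators K_1 and K_w of the text."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]])
    sz = np.diag([1.0, -1.0]).astype(complex)
    K = [np.sqrt(1 - 3*p/4)*np.eye(2, dtype=complex)]
    K += [np.sqrt(p)/2*s for s in (sx, sy, sz)]
    out = np.zeros((8, 8), dtype=complex)
    for Ka in K:
        for Kb in K:
            for Kc in K:
                A = np.kron(np.kron(Ka, Kb), Kc)
                out += A @ rho @ A.conj().T
    return out

# GHZ-type state a_1 = b_1 = c_1 = 1/2
rho = np.zeros((8, 8), dtype=complex)
rho[0, 0] = rho[7, 7] = rho[0, 7] = rho[7, 0] = 0.5

p = 0.4
rp = depolarize3(rho, p)
assert np.isclose(rp[0, 7], 0.5*(1 - p)**3)                 # anti-diagonal
assert np.isclose(rp[0, 0], 0.5*((1 - p/2)**3 + (p/2)**3))  # diagonal a_1'
```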
Since the decohering environment preserves the $X$ shape of the density matrix the eigenvalues
of the partially transposed density matrix follow Eq. \ref{XEigs} and the critical value of $p$
for which a given eigenvalue goes from negative to positive can be analytically determined.
When the initial density matrix is composed of only one GHZ-type state eigenvalues of the
partially transposed density matrix are the same for each qubit and the state exhibits ESD
at the same depolarizing strength for all $N_j$ and $N^{(3)}$.
When the $X$-state density matrix is a mixture of multiple GHZ-type states ESD
may be exhibited with respect to specific negativity measures at different
depolarizing strengths. Figure \ref{fig2}
shows the lowest eigenvalue of the partially transposed density matrix with respect to each
of the three qubits for a sample $X$-state composed of a mixture of GHZ-type states
as a function of decoherence strength. One eigenvalue is always positive
({\it i.e.} indicating zero negativity) and two of the eigenvalues cross zero
({\it i.e.} the state undergoes ESD with respect to the single qubit negativities)
at different decoherence strengths. For low values of $p$ two of the lowest eigenvalues are negative
demonstrating the presence of entanglement. However, there is no measurable GHZ distillable tri-partite
entanglement as measured by $N^{(3)}$. For slightly higher values of $p$ there is a small region
for which only one of the eigenvalues is negative. Now $N^{(3)}$ becomes negative showing
a sudden {\it birth} of (GHZ distillable) tri-partite negativity. As $p$ increases further, the state exhibits
ESD with respect to all single qubit negativities and $N^{(3)}$ (and $N_2$) becomes positive.
This sort of $N^{(3)}$ behavior indicates that there is only a small region of decoherence strengths
(which does not include $p = 0$) for which we can be sure there exists GHZ-distillable entanglement.
Such behavior, going from positive to negative and back, cannot occur when the $X$-state is composed
of only two GHZ-type states. This is because two of the single qubit negativities are equal: the partial
transpose with respect to either of two of the qubits yields the same set of eigenvalues. The sign of $N^{(3)}$ is thus
determined solely by the eigenvalues of the partially transposed state with respect to the third qubit. An example of
a state exhibiting the sudden birth of $N^{(3)}$ entanglement followed by an exhibition of ESD
is portrayed in the inset of Fig. \ref{fig2}.
\begin{figure}
\caption{(Color online) Lowest eigenvalues of the partially transposed $X$-state density matrix,
with respect to each of the three qubits, as a function of depolarizing strength $p$, for a sample
$X$-state composed of a mixture of GHZ-type states. Inset: a state exhibiting the sudden birth
and subsequent death of $N^{(3)}$.}
\label{fig2}
\end{figure}
To test for the presence of purely tri-partite entanglement in the depolarizing system we look
at the expectation value of the state with an entanglement witness. Using the witnesses
defined above we note that witness $W_j$ for a state depolarized with a strength $p$ gives:
\begin{eqnarray}
\rm{Tr}[W_j\rho] &=& \frac{1}{8}(p^2-2p+6)(\sum_{i \neq j} a_i+b_i) \nonumber\\
&-& \frac{1}{8}(3p^2-6p-2)(a_j+b_j)+(p-1)^3c_j.
\end{eqnarray}
This equation can then be solved for the critical decoherence strength at which entanglement
will no longer be detected.
\section{$X$-States with More Qubits}
The particular matrix structure of the $X$-state allows us to extend our results beyond three
qubits. A matrix with non-zero elements in an $X$ shape can be block diagonalized with blocks of
size $2\times 2$. Assuming the $X$-matrix elements along the diagonal are $d_1,...,d_N$,
and the elements along the anti-diagonal are $e_1,...,e_N$ (starting at the top right), the $m$th
$2\times 2$ block along the diagonal has elements:
\begin{equation}
A_m = \left(
\begin{array}{cc}
d_m & e_m \\
e_{N-m+1} & d_{N-m+1} \\
\end{array}
\right);
\label{BDiag}
\end{equation}
where $1 \leq m \leq N/2$. Thus, the eigenvalues of an $X$-shaped matrix of any dimension
are simply the eigenvalues of the $2\times 2$ blocks, which are
\begin{equation}
\frac{1}{2}(d_m+d_{N-m+1}\pm\sqrt{(d_m-d_{N-m+1})^2+4e_me_{N-m+1}}).
\label{Eig2}
\end{equation}
When calculating the negativity a partial transpose of the density matrix must be taken. This has the effect
of rearranging only the elements along the anti-diagonal while preserving the $X$ shape. Thus, the eigenvalues
of the partial transpose of an $X$-state of any dimension have the form of Eq. \ref{XEigs} and the negativity
is easily calculated.
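A quick numerical check of the block structure (an illustrative sketch, restricted to a real symmetric $X$-matrix so that $e_m e_{N-m+1} = e_m^2$): the eigenvalues obtained from the $2\times 2$ blocks of Eq.~\ref{BDiag} agree with direct diagonalization of the full matrix.

```python
import numpy as np

def x_matrix_eigs(d, e):
    """Eigenvalues of an N x N X-shaped matrix with diagonal d_1..d_N and
    anti-diagonal e_1..e_N (top right to bottom left), via the 2x2 blocks
    of Eq. (BDiag)."""
    d, e = np.asarray(d, float), np.asarray(e, float)
    N = len(d)
    eigs = []
    for m in range(N // 2):
        s = d[m] + d[N - 1 - m]
        q = (d[m] - d[N - 1 - m])**2 + 4*e[m]*e[N - 1 - m]
        eigs += [0.5*(s + np.sqrt(q)), 0.5*(s - np.sqrt(q))]
    return np.sort(eigs)

# Compare with direct diagonalization of a random symmetric X-matrix
rng = np.random.default_rng(0)
N = 8
d = rng.random(N)
e_half = rng.random(N // 2)
e = np.concatenate([e_half, e_half[::-1]])   # symmetric anti-diagonal
M = np.diag(d) + np.fliplr(np.diag(e))
assert np.allclose(x_matrix_eigs(d, e), np.sort(np.linalg.eigvalsh(M)))
```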
\section{Three-Qubit Concurrence}
In this section I derive an explicit expression for a lower bound of
three qubit mixed state concurrence as defined in \cite{LFW} for $X$-states.
A general expression for a lower bound on the three-qubit concurrence is:
\begin{equation}
\tau_3 = \sqrt{\frac{1}{3}\sum^6_{\ell=1}\left[(C_{\ell}^{12|3})^2+(C_{\ell}^{13|2})^2+(C_{\ell}^{23|1})^2\right]}.
\end{equation}
Each of the bi-partite concurrence terms $C^{ij|k}_\ell$ ($\ell = 1,...,6$ for each of the three partitions) is given by:
\begin{equation}
C_{\ell} = \rm{max}\{0,\sqrt{\lambda^1_\ell}-\sqrt{\lambda^2_\ell}-\sqrt{\lambda^3_\ell}-\sqrt{\lambda^4_\ell}\},
\end{equation}
where $\lambda^l_\ell$ are the non-zero eigenvalues of $\tilde{\rho} = \rho S_\ell^{ij|k}\rho^*S_\ell^{ij|k}$
in descending order. The operators $S_\ell^{ij|k}$ are given by $S_\ell^{ij|k} = L_\ell^{ij}\otimes L_0^k$
where $L_\ell^{ij}$ is one of six generators of the group SO(4) operating on qubits $i,j$, and $L_0^k$
is the generator of SO(2), the Pauli matrix $\sigma_y$, operating on qubit $k$. This lower bound on mixed
state concurrence has been calculated for some simple $X$-states in \cite{SF}. Here I look to extend these
results and note where the lower bound goes to zero. Once the lower bound does go to zero, there is no longer
a guarantee that entanglement is present.
For initial density matrices that are $X$-states only six of the 18 contributing terms to the three
qubit concurrence are non-zero. More specifically, only the two SO(4) generators with elements on
the anti-diagonal contribute to each of the three bi-partite concurrences. I will refer to the
SO(4) generator with anti-diagonal $(-1,0,0,1)$ as $L_1^{ij}$, and the SO(4) generator with
anti-diagonal $(0,-1,1,0)$ as $L_2^{ij}$ . For $X$-states the four eigenvalues that make up
each of the six terms are of the form:
\begin{equation}
\lambda^{ij|k}_{\ell,m_{\pm}} = a_mb_m+|c_m|^2\pm2\sqrt{a_mb_m|c_m|^2}
\end{equation}
for two different values of $m$. For the partition $12|3$ and SO(4) generator $\ell = 1$ the contributing
terms have $m = 1,2$. For the generator $\ell = 2, m = 3,4$. Similarly, for the partition $23|1$ we find
$\ell = 1, m = 2,3$ and $\ell = 2, m = 1,4$. Finally, for the $13|2$ partition $\ell = 1, m = 2,4$ and
$\ell = 2, m = 1,3$. Given these eigenvalues the three-concurrence can be easily computed.
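The bookkeeping above can be organized in a short routine. The sketch below (an illustration added here, with $m$ shifted to start at zero) implements the eigenvalue formula and the $(\ell, m)$ pairings directly; for the maximally entangled GHZ state it returns $\tau_3 = 1$.

```python
import numpy as np

def C_term(lams):
    """max(0, sqrt(l1) - sqrt(l2) - sqrt(l3) - sqrt(l4)),
    with the eigenvalues sorted in descending order."""
    s = np.sqrt(np.sort(np.asarray(lams))[::-1])
    return max(0.0, s[0] - s[1] - s[2] - s[3])

def tau3_x_state(a, b, c):
    """Lower bound tau_3 for a three-qubit X-state from the eigenvalue
    formula lambda_{m,±} = a_m b_m + |c_m|^2 ± 2 sqrt(a_m b_m) |c_m|
    and the (l, m) pairings listed in the text."""
    def lam(m):
        base = a[m]*b[m] + abs(c[m])**2
        root = 2*np.sqrt(a[m]*b[m])*abs(c[m])
        return [base + root, base - root]
    # (m1, m2) pairs: 12|3 -> (1,2),(3,4); 23|1 -> (2,3),(1,4);
    # 13|2 -> (2,4),(1,3); all shifted down by one for 0-indexing
    pairings = [(0, 1), (2, 3), (1, 2), (0, 3), (1, 3), (0, 2)]
    total = sum(C_term(lam(m1) + lam(m2))**2 for m1, m2 in pairings)
    return np.sqrt(total/3)

# Maximally entangled GHZ state: a_1 = b_1 = c_1 = 1/2
assert np.isclose(tau3_x_state([0.5,0,0,0], [0.5,0,0,0], [0.5,0,0,0]), 1.0)
```

Replacing $c_1$ by $c_1(1-p)^{3/2}$ reproduces the dephasing behavior discussed in the next subsection: the bound stays strictly positive for all $p < 1$.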
\subsection{Dephasing Environment}
For $X$-states composed of only one GHZ-type state an exact calculation for $\tau_3$
in a dephasing environment yields the maximum between 0 and:
\begin{equation}
\sqrt{a_i-a_i^2+c_i(-\omega_i+2\gamma_i^{\frac{1}{2}})} - \sqrt{a_i-a_i^2+c_i(-\omega_i-2\gamma_i^{\frac{1}{2}})}
\label{DephaseEig}
\end{equation}
where,
\begin{eqnarray}
\omega_i &=& c_i(p-1)^3 \nonumber\\
\gamma_i &=& a_i(a_i-1)(p-1)^3.
\end{eqnarray}
Eq. \ref{DephaseEig} goes to zero only in the limit of $p\rightarrow 1$. Thus, the lower bound cannot
go to zero for a GHZ-type state, some entanglement will always be present. In fact, the lower bound
cannot go to zero unless the $X$-state is composed of four GHZ-type states. This is because $\tau_3$
is a summation of terms and can go to zero only if each term goes to zero. Each one of these
(six) terms consists of four eigenvalues, two for each of two $m$ values. If the eigenvalues
of one of the $m$ values are zero (which will happen if $c_i$ and $a_i$ or $b_i = 0$) the term
will have the form of Eq. \ref{DephaseEig}. Thus, that term, if not initially zero, will remain
non-zero until $p = 1$. This behavior is demonstrated in Fig. \ref{fig3} and should be contrasted
with the negativity and tri-partite negativity measures. The negativity of $X$-states can go to
zero in a dephasing channel when the $X$-state is composed of only two GHZ-type
states. The reason for this is that the negativity can be defined with respect to only one of the
qubits (for example $N_3$) which may go to zero while negativity measures with respect to the other
qubits do not. The three qubit concurrence, however, is a sum over all terms and therefore cannot go
to zero unless each bi-partite concurrence term goes to zero.
\subsection{Depolarizing Environment}
The bound on the three-qubit concurrence for $X$-states composed of one GHZ-type state in a
depolarizing environment gives the maximum between 0 and:
\begin{equation}
\frac{q_i}{4}-\sqrt{\omega^2_i-\frac{\gamma^{\prime}_i}{64}-
\frac{\sqrt{\omega^2_i\gamma^{\prime}_i}}{4}} + \sqrt{\omega^2_i-\frac{\gamma^{\prime}_i}{64}+
\frac{\sqrt{\omega^2_i\gamma^{\prime}_i}}{4}}
\end{equation}
where,
\begin{eqnarray}
q_i &=& (p-2)p\sqrt{4(p-1)^2(a_i-a_i^2)-p(p-2)} \\
\gamma^{\prime}_i &=& (a_i(p-2)^3+(a_i-1)p^3)((a_i-1)(p-2)^3+a_ip^3). \nonumber
\end{eqnarray}
$\tau_3$ for this state is shown in Fig. \ref{fig3} for an initial state $a_i = b_i = 1/2$.
For the depolarizing environment $\tau_3$ can go to zero even for $X$-states composed of
only one GHZ-type state.
\begin{figure}
\caption{(Color online) Top row: Three qubit concurrence for $X$-states composed of one GHZ-type
state with $a_i = b_i = 1/2$. Left: in a dephasing environment $\tau_3$ never goes to zero.
Right: in a depolarizing environment $\tau_3$ goes to zero at a finite depolarizing strength.}
\label{fig3}
\end{figure}
The matrix $\tilde{\rho}$ for an $X$-state density matrix with any of the six SO(4)
generators retains its $X$ shape, having four diagonal and four anti-diagonal elements
(and thus only four non-zero eigenvalues, as noted in \cite{LFW}). The $X$ is
`balanced' when $S_\ell^{ij|k}$ is anti-diagonal. I use the term `balanced' as follows:
any diagonal $X$-matrix element $d_m$ that is non-zero has a non-zero counterpart
element $e_m$, where I have used the notation of Eq. \ref{BDiag}.
An unbalanced $X$-matrix will have non-zero elements whose counterparts are zero.
When a balanced $X$-matrix is block diagonalized non-zero $2\times2$ blocks will have
four non-zero elements leading to non-degenerate eigenvalues like those of Eq. \ref{Eig2}.
Block diagonalized unbalanced $X$-matrices will have a zero in one of the off-diagonal elements
of the $2\times2$ diagonal block as can be noted from Eq. \ref{BDiag}. In the
unbalanced case the eigenvalues are then simply the diagonal elements of the block
which are of the form $a_ib_j,a_ib_j,a_kb_l,a_kb_l$. These elements (eigenvalues) are
degenerate and thus these bi-partite concurrence terms equal zero.
\subsection{$n$-Qubit Concurrence}
As mentioned above, only the two SO(4) generators with elements on the anti-diagonal,
contribute to the three qubit concurrence. This is because only these two
generators have anti-diagonal elements which lead to balanced $X$-matrices
$\tilde{\rho}$. The other generators lead to unbalanced $\tilde{\rho}$ matrices whose
eigenvalues are simply its diagonal elements. The eigenvalues are each doubly degenerate
meaning that these terms will not contribute to the concurrence.
The above allows us to simplify calculations for higher qubit concurrences of
$X$-states. The only terms necessary to calculate are those that utilize anti-diagonal
$S_\ell^{ij...|k\ell...}$ matrices. Thus, for four qubits there would be four terms from the
$\rm{SO(4)}\otimes\rm{SO(4)}$ generators for each of the three balanced partitions
(two qubits on each side of the partition) and an additional four terms from the
$\rm{SO(8)}\otimes\rm{SO(2)}$ generators for each of the four unbalanced partitions
(partitions of three and one qubit). This gives a total of 28 terms which
should significantly simplify these calculations.
\section{Conclusions}
In this paper I have studied the entanglement dynamics for three qubit $X$-states in
both dephasing and depolarizing environments. To do this I have analytically calculated the eigenvalues
of partial transposes of the $X$-states which allows for easy determination of the negativity. Since the
dephasing and depolarizing environments retain the density matrix $X$ shape one can calculate which initial
states will exhibit ESD with respect to the negativity measures and at what decoherence strength.
I noted that the tri-partite negativity, a tri-partite entanglement measure which is sufficient to
ensure GHZ distillability, can exhibit non-standard behavior for certain $X$-states: its appearance
only at non-zero decoherence strength followed by its sudden disappearance. In addition, I explored
the detection capability of entanglement witnesses sensitive to tri-partite entanglement. As with
the negativity, the expectation values of the $X$-state with respect to the entanglement witnesses can
be solved analytically and are vital in assessing potential experimental studies.
These results are extended to systems made of arbitrary numbers of qubits. Finally, I analytically
solved for the relevant terms of a lower bound on the three-qubit concurrence for an $X$-state, and demonstrated
when it goes to zero in dephasing and depolarizing environments. This method may be useful for calculating
concurrences for larger numbers of qubits.
It is a pleasure to thank G. Gilbert and S. Pappas for helpful feedback
and acknowledge support from the MITRE Innovation Program under MIP grant \#20MSR053.
\appendix
\section{General X-States}
As mentioned in the main part of the paper, a previously studied set of states with an $X$
shaped density matrix is the generalized GHZ-diagonal states \cite{A}. In this Appendix
I prove that generalized GHZ-diagonal states do not include all possible X-states by constructing an explicit
state with an X-shaped density matrix that is not part of the aforementioned set.
Generalized GHZ-diagonal states with $n$ qubits have the form
\begin{equation}
\rho = \sum_{k = 0}^{N-1}\lambda_k^+|\psi_k^+(\alpha,\beta)\rangle\langle\psi_k^+(\alpha,\beta)|+\lambda_k^-|\psi_k^-(\alpha,\beta)\rangle\langle\psi_k^-(\alpha,\beta)|,
\end{equation}
where $N = 2^n$ is the Hilbert space dimension. The density matrix elements of these states using
the notation of Eq.~\ref{xstate} are as follows:
\begin{eqnarray}
a_j &=& |\alpha|^2(\lambda_k^{+}+\lambda_k^{-}) + |\beta|^2(\lambda_{\overline{k}}^{+}+\lambda_{\overline{k}}^{-}) \\
b_j &=& |\beta|^2(\lambda_k^{+}+\lambda_k^{-}) + |\alpha|^2(\lambda_{\overline{k}}^{+}+\lambda_{\overline{k}}^{-}) \\
c_j &=& \alpha\beta^*(\lambda_k^{+}-\lambda_k^{-}) + \alpha^*\beta(\lambda_{\overline{k}}^{+}-\lambda_{\overline{k}}^{-})
\end{eqnarray}
for $1 < j < N/2$ and $k = j - 1$.
We now construct an $X$-state that is not a generalized GHZ-diagonal state.
Let us set $c_j = 0$. There are then three possible solutions for Eq.~A4:
\begin{description}
\item [$\alpha = 0$ or $\beta = 0$] {~\\
If either of these is true then all other $c_m$ for $m \neq j$ must also equal zero. }
\item [$\lambda_{k}^+ = \lambda_{k}^-$ \emph{and} $\lambda_{\overline{k}}^+ = \lambda_{\overline{k}}^-$] {~ \\
If this is true $a_j = 2(|\alpha|^2\lambda_{k} + |\beta|^2\lambda_{\overline{k}})$ and
$b_j = 2(|\beta|^2\lambda_{k} + |\alpha|^2\lambda_{\overline{k}})$ where
$\lambda_{k} = \lambda_{k}^+ = \lambda_{k}^-$. Therefore, if in addition
$b_j = 0$ (which would require $\lambda_{k} = \lambda_{\overline{k}} = 0$),
$a_j$ must equal zero.}
\item [$(\lambda_{k}^{+}-\lambda_{k}^{-}) = \frac{\alpha^*\beta}{\alpha\beta^*}(\lambda_{\overline{k}}^{-}-\lambda_{\overline{k}}^{+})$] {~\\
Let $\alpha^*\beta = re^{i\theta}$ where $r,\theta$ are real. Then the fraction $\frac{\alpha^*\beta}{\alpha\beta^*} =
e^{2i\theta}$. As mentioned in the main part of the paper, the coefficients $\lambda$ are all real forcing
$e^{2i\theta}$ to be real and $\theta = 0,m\pi$ for all integers $m$. Therefore, $\alpha^*\beta$ and
$\alpha\beta^*$ must both be purely real or purely imaginary.}
\end{description}
We can now explicitly construct a two qubit $X$-state that is not part of the set of generalized
GHZ-diagonal states by setting $c_1 = 0$ and violating each of the three conditions listed above.
Such a state can have the form:
\begin{equation}
\rho_C = \left(
\begin{array}{cccc}
a_1 & 0 & 0 & 0 \\
0 & a_2 & re^{i\phi} & 0 \\
0 & re^{-i\phi} & b_2 & 0 \\
0 & 0 & 0 & 0 \\
\end{array}
\right).
\label{vstate}
\end{equation}
where $r,\phi$ are real and $r^2 < a_2b_2$ guarantees the density matrix has
positive eigenvalues. In addition, $\rho_C$ must be trace 1, $a_1+a_2+b_2 = 1$,
and its purity must be $a_1^2 + a_2^2 + b_2^2 +2r^2 \leq 1$.
In Eq. \ref{vstate} $c_1 = 0$, yet $c_2 \neq 0$, indicating that $\alpha,\beta\neq 0$.
Furthermore, $b_1 = 0$ while $a_1 \neq 0$, indicating that $\lambda_0^+ = \lambda_0^-$
and $\lambda_3^+ = \lambda_3^-$ cannot both be true. Finally, $c_2 = re^{i\phi}$ need not be
real nor purely imaginary indicating that
$(\lambda_0^{+}-\lambda_0^{-}) \neq \frac{\alpha^*\beta}{\alpha\beta^*}(\lambda_3^{-}-\lambda_3^{+})$.
Thus, the state $\rho_C$ is not part of the set of generalized GHZ-diagonal states though
it certainly is an $X$-state.
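As an illustrative check (with hypothetical parameter values chosen only to satisfy the stated constraints), one can verify numerically that such a $\rho_C$ is a valid density matrix and is entangled:

```python
import numpy as np

# Hypothetical values satisfying a1 + a2 + b2 = 1 and r^2 < a2*b2
a1, a2, b2, r, phi = 0.2, 0.4, 0.4, 0.3, np.pi/5

rho = np.array([[a1, 0, 0, 0],
                [0, a2, r*np.exp(1j*phi), 0],
                [0, r*np.exp(-1j*phi), b2, 0],
                [0, 0, 0, 0]], dtype=complex)

assert np.isclose(np.trace(rho), 1.0)
assert np.linalg.eigvalsh(rho).min() >= -1e-12     # valid density matrix
assert a1**2 + a2**2 + b2**2 + 2*r**2 <= 1         # purity constraint

# The partial transpose w.r.t. the second qubit has a negative
# eigenvalue, so rho_C is entangled
pt = rho.reshape(2, 2, 2, 2).swapaxes(1, 3).reshape(4, 4)
assert np.linalg.eigvalsh(pt).min() < 0
```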
\end{document} |
\begin{document}
\title{On the Notion of Proposition in Classical and Quantum Mechanics}
\author{C. Garola}
\address{Dipartimento di Fisica dell'Universit\`a and Sezione INFN, \\
Via per Arnesano, 73100 Lecce, Italy \\
E-mail: [email protected]}
\author{S. Sozzo}
\address{Dipartimento di Fisica dell'Universit\`a and Sezione INFN, \\
Via per Arnesano, 73100 Lecce, Italy \\
E-mail: [email protected]}
\maketitle
\abstracts{The term \emph{proposition} usually denotes in quantum mechanics (QM) an element of (standard) quantum logic (QL). Within the orthodox interpretation of QM the propositions of QL cannot be associated with sentences of a language stating properties of individual samples of a physical system, since properties are \emph{nonobjective} in QM. This makes the interpretation of propositions problematical. The difficulty can be removed by adopting the objective interpretation of QM proposed by one of the authors (\emph{semantic realism}, or \emph{SR}, interpretation). In this case, a unified perspective can be adopted for QM and classical mechanics (CM), and a simple first order predicate calculus ${\mathcal L}(x)$ with Tarskian semantics can be constructed such that one can associate a \emph{physical proposition} (\emph{i.e.}, a set of physical states) with every sentence of ${\mathcal L}(x)$. The set ${\mathcal P}^{f}$ of all physical propositions is partially ordered and contains a subset ${\mathcal P}^{f}_{T}$ of \emph{testable physical propositions} whose order structure depends on the criteria of testability established by the physical theory. In particular, ${\mathcal P}^{f}_{T}$ turns out to be a Boolean lattice in CM, while it can be identified with QL in QM. Hence the propositions of QL can be associated with sentences of ${\mathcal L}(x)$, or also with the sentences of a suitable quantum language ${\mathcal L}_{TQ}(x)$, and the structure of QL characterizes the notion of testability in QM. One can then show that the notion of \emph{quantum truth} does not conflict with the classical notion of truth within this perspective. Furthermore, the interpretation of QL propounded here proves to be equivalent to a previous pragmatic interpretation worked out by one of the authors, and can be embodied within a more general perspective which considers states as first order predicates of a broader language with a Kripkean semantics.}
\section{Introduction}
It is often maintained in the literature on the foundations of quantum mechanics (QM) that the lattice of propositions of quantum logic (QL)\footnote{For the sake of brevity, we simply call \emph{quantum logic} here the formal structure that is called in the literature \emph{concrete}, or \emph{standard}, (\emph{sharp}) \emph{quantum logic},\cite{dcgg04} together with its standard physical interpretation.} is a logical calculus which is different from the classical logical calculus and specific to QM (see Ref. 1 for a review on this subject up to the early seventies; for a more recent perspective, together with an updated bibliography, see, \emph{e.g.}, Refs. 2 and 3). Yet many scholars do not accept this view and argue that QL is a mathematical structure with a physical interpretation, not a new logic (for an explicit statement of this position see, \emph{e.g.}, Ref. 4).
In our opinion, the unsettled quarrel between the positions above finds its roots in a specific feature of the standard interpretation of QM, that is, \emph{nonobjectivity} of physical properties. Because of this feature, there are sentences attributing physical properties to samples of a given physical system that are meaningful or meaningless (\emph{i.e.}, have or do not have a truth value, respectively) depending on the state of the object, and also sentences that are meaningless in any case, even if they belong to the natural language of physics (a well-known example of these is the statement ``the particle $x$ has position $\vec r$ and momentum $\vec p$ at time $t$''). Hence the propositions of QL cannot be connected in a direct way with sentences of this kind, following standard procedures in classical logic (CL), which makes their logical interpretation problematical (in particular, QL seems to introduce a new mysterious concept of \emph{quantum truth}\cite{bo1})\footnote{A relatively recent investigation of the concept of proposition was carried out by R\'{e}dei\cite{r98}. Within R\'{e}dei's analysis physical properties, or sentences about probabilities of properties, are directly taken as elementary sentences of a logical language, and propositions are identified with equivalence classes of (elementary or complex) sentences, each class containing all sentences which are equivalent with respect to a quantum concept of truth. Our analysis here considers a different kind of elementary sentences and introduces various kinds of propositions. The lattice of R\'{e}dei's propositions is then isomorphic, in QM, to the lattice of all \emph{testable physical propositions} introduced here (Sec. 6).}.
The above difficulties cannot be removed as long as nonobjectivity is maintained to be an unavoidable feature of QM. Nevertheless most physicists accept nonobjectivity, basing this acceptance on well-known no--go theorems (the most famous of which are probably Bell's\cite{b64,m93} and Bell--Kochen--Specker's\cite{m93}$^{-}$\cite{ks67}). It has been proven in a number of papers by one of the authors, however, that these theorems, which are mathematically well established, rest on assumptions which follow from implicitly adopting an epistemological position which is suitable for classical physics but contrasts with the operational philosophy of QM.\cite{gs96b}$^{-}$\cite{ga05} To be precise, they assume the simultaneous validity of a set of empirical physical laws in which the observables that appear in some laws are incompatible with the observables that appear in other laws, so that it is impossible, according to QM, to check whether all the laws of the set hold simultaneously. This suggests that the simultaneous validity assumption should be dropped in QM: but, then, the no--go theorems cannot be proved. It follows that the nonobjectivity of physical properties can no longer be classified as a logical necessity, but only as a (legitimate) interpretational choice, and alternative interpretations of QM in which objectivity of properties is restored become possible. An interpretation of this kind has then been constructed by one of us, together with other authors (\emph{semantic realism}, or \emph{SR}, interpretation\cite{g99}$^{-}$\cite{g02,gp04,g91,gs96a}). The SR interpretation preserves the mathematical apparatus and the statistical interpretation of QM, and yet considers every elementary sentence attributing a physical property to a given individual physical object as meaningful (though its truth value may be empirically accessible or not, depending on the state of the object).
Because of objectivity, the SR interpretation avoids the difficulties of the standard interpretation pointed out above, so that physical propositions can be introduced in QM associating them to sentences of a suitable classical predicate calculus. This allows us to propound in this paper a general scheme based on classical logic for the introduction of physical propositions in physical theories, which can then be particularized to classical mechanics (CM) and to QM. Our scheme explains, in particular, how QL can be obtained by using a testability criterion for selecting a suitable subset in the set of all physical propositions, and shows that a notion of quantum truth can be derived from the classical notion of truth as correspondence (as explicated rigorously by Tarski's semantic theory\cite{t44,t56}).
In order to favour a better understanding of the above results, let us describe the content of the present paper in more detail.
In Sec. 2 we construct a classical first order predicate calculus ${\mathcal L}(x)$, with monadic predicates and one individual variable only, in which a classical (Tarskian) notion of truth is adopted, and associate a family of \emph{individual propositions}, parametrized by the interpretations of the variable, with every (open) sentence of ${\mathcal L}(x)$.
In Sec. 3 we define \emph{physical propositions}, introduce the truth value \emph{certainly true} on ${\mathcal L}(x)$ (which adds without contradiction to the standard values \emph{true}/\emph{false}), and study some properties of the poset $({\mathcal P}^f, \subseteq)$ of all physical propositions.
In Sec. 4 we conclude the general part of the paper by introducing the subset ${\mathcal P}_{T}^f \subseteq {\mathcal P}^f$ of all \emph{testable} physical propositions, which is basic for the analysis of measurement processes in the framework of specific physical theories (such as CM and QM).
In Sec. 5 we specialize the notions introduced in the previous sections to CM. We show that, if suitable axioms (which are justified by the intended interpretation) are introduced, the concepts of individual proposition, physical proposition and testable physical proposition can be identified, which provides a very simple scheme that explains why people usually say that ``classical mechanics follows classical logic'' (which is however a misleading statement in our opinion).
In Sec. 6 we show that the different kinds of propositions introduced in the general part cannot be identified in QM, and introduce some specific axioms which are supported by the broad existing literature on QL. These allow us to construct a quantum language ${\mathcal L}_{TQ}(x)$, based on ${\mathcal L}(x)$, which is such that the set of all physical propositions associated with its sentences coincides with ${\mathcal P}_{T}^f$ and can be identified with the set of all propositions of QL. It follows that every proposition of QL can be associated with a sentence of a suitable first order predicate calculus, as in classical logic, and that the set of all propositions of QL is selected on the basis of a criterion of testability, which is typically physical and shows the empirical character of the lattice structure of QL.
In Sec. 7 we use the interpretation provided in Sec. 6 in order to look deeper into the concept of `quantum truth'. We show that this concept directly follows in our approach from the concept of \emph{certainly true} introduced in the general part, hence it does not conflict with the classical concept of truth. This provides a satisfactory unification of notions that are usually regarded as incompatible.
In Sec. 8 we discuss the relations between the \emph{semantical} interpretation of QL provided in Sec. 6 and the \emph{pragmatic} interpretation propounded by one of us in a recent paper.\cite{g05} We show that the two interpretations can easily be translated into each other, and that they are intuitively equivalent.
In Sec. 9 we briefly comment on our approach from a general logical perspective. We note that individual and physical propositions can be considered as propositions in a standard sense in CL if states are considered as \emph{possible worlds} (\emph{modal interpretation} of QL). This interpretation is however problematical, and we briefly sketch a possible alternative which refers to the broader language introduced by one of us, together with other authors, in some previous papers.\cite{g91,gs96a}
\section{The language ${\mathcal L}(x)$} \label{linguaggio}
The formal language that we want to construct in this section is a simplified and modified version of the more general language introduced in some previous papers\cite{g91,gs96a} with the aim of formalizing a sublanguage of the observative language of QM.
The alphabet of ${\mathcal L}(x)$ consists of an individual variable $x$, a set ${\mathcal E}= \{ E, F, \ldots \}$ of monadic predicates called \emph{properties}, a set $\{ \lnot, \land, \lor \}$ of logical connectives and a set $\{ (\, , \,) \}$ of auxiliary signs.
The formation rules for sentences, or well-formed formulas (wffs), of ${\mathcal L}(x)$ are the standard (recursive) formation rules for wffs of a classical first order predicate calculus, in which $\lnot$, $\land$, $\lor$ denote negation, conjunction and disjunction, respectively. We denote by $\phi(x)$ the set of all wffs of ${\mathcal L}(x)$, and by ${\mathcal E}(x)$ the set of all \emph{elementary} sentences (or \emph{atomic} wffs) of ${\mathcal L}(x)$.
The semantics of ${\mathcal L}(x)$ consists of a family of Tarskian semantics parametrized by a set $\mathcal S$ of \emph{states}. Every $S \in \mathcal S$ is associated with a universe ${\mathcal U}_S$ of \emph{physical objects}. An \emph{interpretation} of the variable $x$ is a mapping $\rho:(x,S) \in \{ x \}\times {\mathcal S} \longrightarrow \rho_S(x) \in {\mathcal U}_S$. For every $S \in \mathcal S$ and $E \in \mathcal E$, an extension $ext_{S} E \subseteq {\mathcal U}_S$ is defined. The atomic wff $E(x)$ is \emph{true} in the state $S$ for the interpretation $\rho$ iff $\rho_S(x) \in ext_{S} E$, \emph{false} otherwise. The truth value of molecular wffs of ${\mathcal L}(x)$ is then defined following standard (recursive) truth rules in Tarskian semantics. For every interpretation $\rho$ and state $S$, we call \emph{assignment function} the mapping $\sigma_S^\rho : \phi(x) \longrightarrow \{ T,F \}$ (where $T$ stands for \emph{true} and $F$ for \emph{false}) which associates a truth value with every wff of ${\mathcal L}(x)$ following the truth rules mentioned above.
The \emph{intended interpretation} of ${\mathcal L}(x)$ is anticipated by the terminology that we have adopted. States are defined operationally as classes of physically equivalent preparation procedures (briefly, \emph{preparations}) and properties as classes of physically equivalent (ideal) registration procedures (briefly, \emph{registrations})\footnote{The notion of physical equivalence is not trivial and requires a careful analysis of the notions of preparation and (ideal) registration procedure.\cite{gs96a} We do not insist on this issue here for the sake of brevity.}. The universe ${\mathcal U}_S$ consists of samples of a prefixed physical system $\Omega$ prepared according to any preparation in $S$. Whenever an interpretation $\rho$ and a state $S$ are given, an elementary sentence, say $E(x)$, of ${\mathcal L}(x)$ states a (physical) property $E$ of the physical object $\rho_S(x) \in {\mathcal U}_S$ (by abuse of language, we often avoid mentioning the interpretation $\rho$ in the following, and briefly say that $E(x)$ attributes the property $E$ to the physical object $x$ in the state $S$).
It must be stressed that the intended interpretation of ${\mathcal L}(x)$ provided here implies that the semantics of ${\mathcal L}(x)$ is incompatible with QM whenever the standard interpretation of QM is adopted. Indeed, within this interpretation QM is maintained to be a semantically \emph{nonobjective} (or \emph{contextual}) theory, which implies that the extension $ext_S E$ is not defined for every property $E$. Hence the general scheme for propositions in physical theories propounded in this paper is based on an explicit acceptance of the SR interpretation of QM mentioned in the Introduction, which is semantically objective (we have already noted in the Introduction that the possibility of such an interpretation follows from a criticism of the implicit assumptions underlying the no--go theorems that should prove that QM is necessarily a nonobjective theory).
It must also be stressed that the operational definition of properties as classes of registrations makes every elementary wff $E(x) \in \phi(x)$ \emph{testable}, in the sense that a physical procedure exists that, under specified physical conditions, allows one to check empirically the truth value of $E(x)$. Yet, it is important to observe that this check does not reduce in all theories to registering a physical object $x$ in the state $S$ by means of a registration in $E$. There are indeed physical theories, such as QM, in which the registration usually modifies the state $S$ in an unpredictable way, so that the obtained result refers to the state after the registration, not to $S$. In these theories the empirical accessibility of the truth values of $E(x)$ is then restricted to a proper subset of states which depends on $E$ (see Sec. 7).
Let us introduce now some further definitions. Firstly, two binary relations of \emph{logical preorder} $\le$ and \emph{logical equivalence} $\equiv$ can be defined on $\phi(x)$ by following standard procedures in classical logic, \emph{i.e.}, by setting, for every $\alpha(x),\beta(x)\in \phi(x)$,
\begin{displaymath}
\alpha(x) \le \beta(x) \quad \textrm{\emph{iff}}
\end{displaymath}
\begin{displaymath}
\textrm{for every} \, \, \rho \in \mathcal R, \, S \in \mathcal S,
\sigma_S^{\rho}(\alpha(x))=T \, \, \, \textrm{implies} \, \, \, \sigma_S^{\rho}(\beta(x))=T,
\end{displaymath}
and
\begin{displaymath}
\alpha(x) \equiv \beta(x) \quad \textrm{\emph{iff}} \quad \alpha(x) \le \beta(x) \, \, \textrm{and} \, \, \beta(x) \le \alpha(x).
\end{displaymath}
It is then easy to see that the partially ordered set (briefly, \emph{poset}) \hspace*{-1mm} $(\phi(x)/_{\equiv},\hspace*{-1mm} \le)$ (where $\le$ denotes, by abuse of language, the order canonically induced on $\phi(x)/_{\equiv}$ by the preorder $\le$ defined on $\phi(x)$) is a Boolean lattice (the \emph{Lindenbaum--Tarski algebra} of ${\mathcal L}(x)$).
Secondly, let $\mathcal R$ be the set of all possible interpretations of $x$. Then, we associate an \emph{individual proposition} $p_{\alpha(x)}^{\rho}$ with every pair $(\rho, \alpha(x)) \in {\mathcal R} \times \phi(x)$, defined as follows.
\begin{equation}
p_{\alpha(x)}^{\rho}= \{ S \in {\mathcal S} \quad | \quad \sigma_S^{\rho}(\alpha(x))=T \}.
\end{equation}
The definition of $p_{\alpha(x)}^{\rho}$ implies that
\begin{displaymath}
\sigma_S^{\rho}(\alpha(x))=T \quad \textrm{\emph{iff}} \quad S \in p_{\alpha(x)}^{\rho}.
\end{displaymath}
Furthermore, one easily gets that, for every elementary wff $E(x) \in \phi(x)$,
\begin{equation}
p_{E(x)}^{\rho}= \{ S \in {\mathcal S} \quad | \quad \rho_S(x) \in ext_S E \},
\end{equation}
while for every $\alpha(x), \beta(x) \in \phi(x)$ one gets
\begin{eqnarray}
p_{\lnot\alpha(x)}^{\rho}= \mathcal S \setminus p_{\alpha(x)}^{\rho}, \label{not} \\
p_{\alpha(x) \land \beta(x)}^{\rho}= p_{\alpha(x)}^{\rho} \cap p_{\beta(x)}^{\rho}, \label{and} \\
p_{\alpha(x) \lor \beta(x)}^{\rho}= p_{\alpha(x)}^{\rho} \cup p_{\beta(x)}^{\rho} \label{or}
\end{eqnarray}
(where $\setminus$, $\cap$, $\cup$ denote set--theoretical subtraction, intersection and union, respectively). Let $\subseteq$ denote set--theoretical inclusion and let ${\mathcal P}^{\rho}$ be the set of all individual propositions associated with sentences of $\phi(x)$ whenever $\rho$ is fixed. Then, Eqs. (\ref{not}), (\ref{and}) and (\ref{or}) imply that the poset $({\mathcal P}^{\rho}, \subseteq)$ is also a Boolean lattice.
Thirdly, by using the definitions of logical order, logical equivalence and proposition introduced above, we get
\begin{displaymath}
\alpha(x) \le \beta(x) \quad \textrm{\emph{iff}} \quad \textrm{for every} \, \, \rho \in {\mathcal R}, \, p_{\alpha(x)}^{\rho} \subseteq p_{\beta(x)}^{\rho},
\end{displaymath}
\begin{displaymath}
\alpha(x) \equiv \beta(x) \quad \textrm{\emph{iff}} \quad \textrm{for every} \, \, \rho \in {\mathcal R}, \, p_{\alpha(x)}^{\rho} = p_{\beta(x)}^{\rho},
\end{displaymath}
which show that the logical relations on $\phi(x)$ imply set--theoretical relations on every set ${\mathcal P}^{\rho}$ of individual propositions.
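The Tarskian machinery above lends itself to a finite computational illustration. The following Python sketch is purely illustrative and not part of the formalism: the states, universes, extensions and all names (\texttt{STATES}, \texttt{EXT}, \texttt{truth}, \texttt{individual\_prop}) are hypothetical choices of ours. It encodes wffs as nested tuples, implements the assignment function $\sigma_S^{\rho}$ recursively, and checks that $\alpha(x) \mapsto p_{\alpha(x)}^{\rho}$ sends $\lnot$, $\land$, $\lor$ to set--theoretical complement, intersection and union, as stated above.

```python
# Toy finite model of the semantics of L(x): two states, two physical
# objects per state, two properties. All data are hypothetical.
STATES = {"S1", "S2"}
UNIVERSE = {"S1": {"a1", "a2"}, "S2": {"b1", "b2"}}
# Extensions ext_S E: which objects in U_S possess property E.
EXT = {
    ("E", "S1"): {"a1"}, ("E", "S2"): {"b1", "b2"},
    ("F", "S1"): set(),  ("F", "S2"): {"b1"},
}

def truth(wff, S, rho):
    """Assignment function sigma_S^rho; wffs are nested tuples."""
    op = wff[0]
    if op == "atom":                 # E(x) is true iff rho_S(x) in ext_S E
        return rho[S] in EXT[(wff[1], S)]
    if op == "not":
        return not truth(wff[1], S, rho)
    if op == "and":
        return truth(wff[1], S, rho) and truth(wff[2], S, rho)
    if op == "or":
        return truth(wff[1], S, rho) or truth(wff[2], S, rho)

def individual_prop(wff, rho):
    """p_alpha^rho: the set of states in which alpha(x) is true under rho."""
    return {S for S in STATES if truth(wff, S, rho)}

rho = {"S1": "a1", "S2": "b1"}       # one interpretation of the variable x
E, F = ("atom", "E"), ("atom", "F")
# The map alpha -> p_alpha^rho is a Boolean homomorphism:
assert individual_prop(("not", E), rho) == STATES - individual_prop(E, rho)
assert individual_prop(("and", E, F), rho) == individual_prop(E, rho) & individual_prop(F, rho)
assert individual_prop(("or", E, F), rho) == individual_prop(E, rho) | individual_prop(F, rho)
```

Since the truth recursion mirrors the set operations exactly, the three assertions hold for any finite model of this kind, not only for the data chosen here.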
\section{The poset of physical propositions} \label{proposizioni}
The intended interpretation of ${\mathcal L}(x)$ introduced in Sec. 2 suggests to associate a set of states with every sentence of ${\mathcal L}(x)$, to be precise the set of states which make this sentence true whatever the interpretation of the variable may be. Thus, for every $\alpha(x) \in \phi(x)$ we define a \emph{physical proposition} $p_{\alpha(x)}^{f}$, as follows.
\begin{equation} \label{pfalpha}
p_{\alpha(x)}^{f}= \{ S \in {\mathcal S} \, \, | \, \, \forall \rho \in {\mathcal R}, \, \sigma_{S}^{\rho}(\alpha(x))= T \}.
\end{equation}
By using the definitions introduced in Sec. 2, we then get
\begin{equation} \label{propositions}
p_{\alpha(x)}^{f}= \{ S \in {\mathcal S} \, \, | \, \, \forall \rho \in {\mathcal R}, \, S \in p_{\alpha(x)}^{\rho} \}= \cap_{\rho} p_{\alpha(x)}^{\rho}.
\end{equation}
We denote by ${\mathcal P}^{f}$ the set of all physical propositions associated with wffs of ${\mathcal L}(x)$, that is, we put
\begin{equation} \label{propositionset}
{\mathcal P}^{f}= \{ p_{\alpha(x)}^{f} \, \, | \, \, \alpha(x) \in \phi(x) \}.
\end{equation}
For every $\alpha(x) \in \phi(x)$ we can now introduce the notion of ``true with certainty'' by setting:
\begin{displaymath}
\alpha(x) \, \, \textrm{is \emph{certainly true in $S$}} \quad \textrm{\emph{iff}} \quad S \in p_{\alpha(x)}^{f}.
\end{displaymath}
The new notion thus follows from the standard notion of truth introduced in Sec. 2 and applies to open wffs of $\phi(x)$ independently of any interpretation of the variable $x$ (we note explicitly that we do not introduce here a notion of \emph{certainly false in $S$}: whenever $S \notin p_{\alpha(x)}^{f}$, we simply say that $\alpha(x)$ is not certainly true in $S$).
The definition of physical proposition associated with a wff $\alpha(x) \in \phi(x)$ also allows us to introduce the new binary relations of \emph{physical preorder} and \emph{physical equivalence} on $\phi(x)$. For every $\alpha(x), \beta(x) \in \phi(x)$, we put
\begin{displaymath}
\alpha(x) \prec \beta(x) \quad \textrm{\emph{iff}} \quad p_{\alpha(x)}^{f} \subseteq p_{\beta(x)}^{f},
\end{displaymath}
\begin{displaymath}
\alpha(x) \approx \beta(x) \quad \textrm{\emph{iff}} \quad p_{\alpha(x)}^{f} = p_{\beta(x)}^{f}.
\end{displaymath}
By comparing the definitions of $\prec$ and $\approx$ with the definitions of $\le$ and $\equiv$, respectively, one gets
\begin{displaymath}
\alpha(x) \le \beta(x) \quad \textrm{\emph{implies}} \quad \alpha(x) \prec \beta(x),
\end{displaymath}
\begin{displaymath}
\alpha(x) \equiv \beta(x) \quad \textrm{\emph{implies}} \quad \alpha(x) \approx \beta(x),
\end{displaymath}
that is, logical preorder implies physical preorder and logical equivalence implies physical equivalence. The converse implications do not hold in general, in the sense that one cannot prove that they hold without introducing further assumptions. We return to this issue in Secs. 5 and 6.
Let us come now to the set ${\mathcal P}^f$ of all physical propositions. This set is obviously partially ordered by set--theoretical inclusion, but the properties of the poset $({\mathcal P}^f,\subseteq)$ depend on the specific physical theory that is considered. In particular, one cannot generally assert that $({\mathcal P}^f,\subseteq)$ is a Boolean lattice, as $({\mathcal P}^{\rho},\subseteq)$ is. However, some weaker features of it can be established. Indeed, let $\alpha(x),\beta(x) \in \phi(x)$. Then, the following statements hold.
\vskip 2mm
(i) $p_{\lnot\alpha(x)}^{f} \subseteq {\mathcal S} \setminus p_{\alpha(x)}^{f}$,
(ii) $p_{\alpha(x) \land \beta(x)}^{f}= p_{\alpha(x)}^{f} \cap p_{\beta(x)}^{f}$,
(iii) $p_{\alpha(x) \lor \beta(x)}^{f} \supseteq p_{\alpha(x)}^{f} \cup p_{\beta(x)}^{f}$
\vskip 2mm
\noindent
(note that, generally, neither ${\mathcal S} \setminus p_{\alpha(x)}^{f}$ nor $p_{\alpha(x)}^{f} \cup p_{\beta(x)}^{f}$ belong to ${\mathcal P}^f$; statement (ii) shows instead that $ p_{\alpha(x)}^{f} \cap p_{\beta(x)}^{f}$ belongs to ${\mathcal P}^f$).
Let us prove statements (i), (ii) and (iii). By using Eqs. (\ref{propositions}) and (\ref{not}), we get
\begin{displaymath}
p_{\lnot\alpha(x)}^{f}= \cap_\rho p_{\lnot\alpha(x)}^{\rho} = \cap_{\rho} ({\mathcal S} \setminus p_{\alpha(x)}^{\rho}) \subseteq {\mathcal S} \setminus \cap_\rho p_{\alpha(x)}^{\rho} ={\mathcal S} \setminus p_{\alpha(x)}^{f}.
\end{displaymath}
Furthermore, by using Eqs. (\ref{propositions}) and (\ref{and}), we get
\begin{displaymath}
p_{\alpha(x) \land \beta(x)}^{f}= \cap_\rho p_{\alpha(x) \land \beta(x)}^{\rho}= \cap_\rho (p_{\alpha(x)}^{\rho} \cap p_{\beta(x)}^{\rho})=
\end{displaymath}
\begin{displaymath}
=( \cap_\rho p_{\alpha(x)}^{\rho}) \cap ( \cap_\rho p_{\beta(x)}^{\rho})= p_{\alpha(x)}^{f} \cap p_{\beta(x)}^{f}.
\end{displaymath}
Finally, by using Eqs. (\ref{propositions}) and (\ref{or}), we get
\begin{displaymath}
p_{\alpha(x) \lor \beta(x)}^{f}= \cap_\rho p_{\alpha(x) \lor \beta(x)}^{\rho}= \cap_\rho (p_{\alpha(x)}^{\rho} \cup p_{\beta(x)}^{\rho}) \supseteq
\end{displaymath}
\begin{displaymath}
\supseteq( \cap_\rho p_{\alpha(x)}^{\rho}) \cup ( \cap_\rho p_{\beta(x)}^{\rho})= p_{\alpha(x)}^{f} \cup p_{\beta(x)}^{f}.\end{displaymath}
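Statements (i)--(iii) can also be checked mechanically on a finite model by enumerating all interpretations $\rho$. The Python sketch below is again a hypothetical illustration of ours (all names and data are assumptions, not part of the formalism): it computes $p_{\alpha(x)}^{f}=\cap_{\rho}\, p_{\alpha(x)}^{\rho}$ by brute force and verifies the inclusion in (i) (strict in this model), the equality in (ii) and the inclusion in (iii).

```python
# Finite check of statements (i)-(iii) for physical propositions
# p^f_alpha = intersection over all interpretations rho of p^rho_alpha.
from itertools import product

STATES = ["S1", "S2"]
UNIVERSE = {"S1": ["a1", "a2"], "S2": ["b1"]}
EXT = {("E", "S1"): {"a1"}, ("E", "S2"): {"b1"},
       ("F", "S1"): {"a1", "a2"}, ("F", "S2"): set()}
# All interpretations rho: one object chosen in each universe.
RHOS = [dict(zip(STATES, choice))
        for choice in product(*(UNIVERSE[S] for S in STATES))]

def truth(wff, S, rho):
    op = wff[0]
    if op == "atom": return rho[S] in EXT[(wff[1], S)]
    if op == "not":  return not truth(wff[1], S, rho)
    if op == "and":  return truth(wff[1], S, rho) and truth(wff[2], S, rho)
    if op == "or":   return truth(wff[1], S, rho) or truth(wff[2], S, rho)

def pf(wff):
    """p_alpha^f: states where alpha(x) is true for EVERY interpretation."""
    return {S for S in STATES if all(truth(wff, S, r) for r in RHOS)}

E, F = ("atom", "E"), ("atom", "F")
# (i)  p^f_{not alpha} is contained in S \ p^f_alpha (strictly, here):
assert pf(("not", E)) <= set(STATES) - pf(E)
# (ii) p^f_{alpha and beta} equals the intersection:
assert pf(("and", E, F)) == pf(E) & pf(F)
# (iii) p^f_{alpha or beta} contains the union:
assert pf(("or", E, F)) >= pf(E) | pf(F)
```

In this model $p_{\lnot E(x)}^{f}=\emptyset$ while ${\mathcal S}\setminus p_{E(x)}^{f}=\{S_1\}$, showing that the inclusion in (i) can indeed be strict.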
To close this section, we note that the definitions of $\prec$ and $\approx$ on $\phi(x)$ imply that the poset $(\phi(x)/_{\approx}, \prec)$ (where $\prec$ denotes, by abuse of language, the order canonically induced on $\phi(x)/_{\approx}$ by the preorder $\prec$ defined on $\phi(x)$) is order--isomorphic to $({\mathcal P}^f,\subseteq)$.
\section{The general notion of testability} \label{testabilita}
The intended physical interpretation of ${\mathcal L}(x)$ suggests that a sentence of ${\mathcal L}(x)$ can be classified as \emph{empirically decidable}, or \emph{testable}, iff it can be associated with a registration procedure that allows one (under physical conditions to be carefully specified, see Sec. 2) to determine its truth value whenever an interpretation $\rho$ of the variable $x$ is given. Since all elementary sentences are testable, one is thus led to define the subset $\phi_T(x) \subseteq \phi(x)$ of all \emph{testable wffs} of $\phi(x)$ as follows.
\begin{equation} \label{testable}
\phi_T(x)= \{ \alpha(x) \in \phi(x) \, \, | \, \, \exists E_{\alpha} \in {\mathcal E} \, : \, \alpha(x) \equiv E_{\alpha}(x) \}.
\end{equation}
The subset ${\mathcal P}_{T}^{f} \subseteq {\mathcal P}^{f}$ of all physical propositions associated with wffs of $\phi_T(x)$ will then be called \emph{the set of all testable physical propositions}. More formally,
\begin{equation}
{\mathcal P}_{T}^{f}= \{ p_{\alpha(x)}^{f}\in {\mathcal P}^{f} \, \, | \, \, \alpha(x) \in \phi_T(x) \}.
\end{equation}
Of course, $\phi_T(x)$ is preordered by the restrictions to it of the preorders $\le$ and $\prec$ defined on $\phi(x)$. For the sake of simplicity, we will denote preorders and equivalence relations on $\phi_T(x)$ by the same symbols used to denote them on $\phi(x)$. Hence, the logical preorder $\le$ implies the physical preorder $\prec$, and the logical equivalence $\equiv$ implies the physical equivalence $\approx$ also on $\phi_T(x)$. We thus get two preordered sets, $(\phi_T(x), \le)$ and $(\phi_T(x), \prec)$, and two posets $(\phi_T(x)/_{\equiv}, \le)$ and $(\phi_T(x)/_{\approx}, \prec)$. The latter, in particular, is order--isomorphic to $({\mathcal P}_{T}^{f}, \subseteq)$.
We shall see in the next sections some further characterizations of the foregoing posets within the framework of specific theories.
\section{Classical mechanics (CM)} \label{meccanica classica}
It is well known that in classical mechanics (CM) all physical objects in a given state $S$ possess the same properties. This feature of CM can be formalized here by introducing the following assumption.
\noindent
\textbf{CMS.} \emph{For every $S \in \mathcal S$ and $E \in \mathcal E$, either $ext_{S} E={\mathcal U}_S$ or $ext_{S} E=\emptyset$.}
\noindent
It follows from assumption CMS that, for every interpretation $\rho \in \mathcal R$, $\rho_{S}(x)\in ext_S E$ iff $ext_{S} E={\mathcal U}_S$, and $\rho_{S}(x)\notin ext_S E$ iff $ext_{S} E=\emptyset$. Therefore, the assignment function $\sigma_S^{\rho}$ does not depend on the specific interpretation $\rho$. More explicitly, for every interpretation $\rho$ and state $S$,
\begin{displaymath}
\left \{ \begin{array}{c}
\begin{tabular}{ccc}
$\sigma^{\rho}_S(E(x))=T$ & \emph{iff} & $ext_S E= {\mathcal U}_S$ \\
$\sigma^{\rho}_S(E(x))=F$ & \emph{iff} & $ext_S E=\emptyset$
\end{tabular}
\end{array} \right.,
\end{displaymath}
\begin{displaymath}
\left \{ \begin{array}{c}
\begin{tabular}{ccc}
$\sigma^{\rho}_S(E(x) \land F(x))=T$ & \emph{iff} & $ext_S E= {\mathcal U}_S=ext_S F$ \\
$\sigma^{\rho}_S(E(x) \land F(x))=F$ & \emph{iff} & $ext_S E \ne ext_S F$
\end{tabular}
\end{array} \right.
\end{displaymath}
(where $E,F \in \mathcal E$), etc.
Since $\sigma_S^{\rho}$ does not depend on $\rho$, neither does the individual proposition $p_{\alpha(x)}^{\rho}$, and we can omit the index $\rho$ in both symbols. Thus, for every $\rho \in \mathcal R$, the individual proposition associated with $\alpha(x) \in \phi(x)$ is given by
\begin{equation}
p_{\alpha(x)}= \{ S \in {\mathcal S} : \sigma_{S}(\alpha(x))=T \}.
\end{equation}
More explicitly, we have
\begin{equation}
p_{E(x)}= \{ S \in {\mathcal S} : ext_S E= {\mathcal U}_{S} \},
\end{equation}
\begin{equation}
p_{E(x) \land F(x)}= \{ S \in {\mathcal S} : ext_S E= {\mathcal U}_{S}= ext_S F \}= p_{E(x)} \cap p_{F(x)},
\end{equation}
etc.
The set ${\mathcal P}^{\rho}$ of all individual propositions associated with wffs of ${\mathcal L}(x)$ obviously does not depend on $\rho$, and will be simply denoted by $\mathcal P$. Because of the above specific features, the general notions introduced in Secs. 2, 3, 4 particularize in CM as follows.
For every $\alpha(x), \beta(x) \in \phi(x)$, and $S \in \mathcal S$,
\begin{center}
\begin{tabular}{lll}
$\sigma_S(\alpha(x))=T$ & \emph{iff} & $\quad S \in p_{\alpha(x)}$, \\
$\alpha(x) \le \beta(x)$ & \emph{iff} & $p_{\alpha(x)} \subseteq p_{\beta(x)}$, \\
$\alpha(x) \equiv \beta(x)$ & \emph{iff} & $p_{\alpha(x)} = p_{\beta(x)}$.
\end{tabular}
\end{center}
It also follows from the general case that the Lindenbaum--Tarski algebra $(\phi(x)/_{\equiv}, \le)$ of ${\mathcal L}(x)$ is isomorphic to the Boolean lattice of individual propositions $({\mathcal P}, \subseteq)$, so that the two lattices can be identified.
Coming to physical propositions, we get, for every $\alpha(x) \in \phi(x)$,
\begin{equation}
p_{\alpha(x)}^f=p_{\alpha(x)},
\end{equation}
and, therefore, ${\mathcal P}^f=\mathcal P$. Thus, the set of all physical propositions coincides in CM with the set of all individual propositions, and the notions of \emph{true} and \emph{certainly true} also coincide.
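Assumption CMS and its consequences can be illustrated on the same kind of finite toy model used above (all data and names below are hypothetical assumptions of ours): when every extension is either the whole universe or empty, the individual proposition $p_{\alpha(x)}^{\rho}$ computed below is the same for every interpretation $\rho$ and coincides with $p_{\alpha(x)}^{f}$.

```python
# Finite check of assumption CMS: extensions are all of U_S or empty,
# hence truth values are rho-independent and p_alpha^f = p_alpha.
from itertools import product

STATES = ["S1", "S2"]
UNIVERSE = {"S1": ["a1", "a2"], "S2": ["b1", "b2"]}
# CMS: ext_S E is either U_S or the empty set.
EXT = {("E", "S1"): set(UNIVERSE["S1"]), ("E", "S2"): set(),
       ("F", "S1"): set(),               ("F", "S2"): set(UNIVERSE["S2"])}
RHOS = [dict(zip(STATES, c)) for c in product(*(UNIVERSE[S] for S in STATES))]

def truth(wff, S, rho):
    op = wff[0]
    if op == "atom": return rho[S] in EXT[(wff[1], S)]
    if op == "not":  return not truth(wff[1], S, rho)
    if op == "and":  return truth(wff[1], S, rho) and truth(wff[2], S, rho)
    if op == "or":   return truth(wff[1], S, rho) or truth(wff[2], S, rho)

def p_rho(wff, rho): return {S for S in STATES if truth(wff, S, rho)}
def pf(wff): return {S for S in STATES if all(truth(wff, S, r) for r in RHOS)}

for wff in [("atom", "E"), ("not", ("atom", "F")),
            ("and", ("atom", "E"), ("atom", "F")),
            ("or", ("atom", "E"), ("atom", "F"))]:
    props = {frozenset(p_rho(wff, r)) for r in RHOS}
    assert len(props) == 1                    # sigma_S^rho independent of rho
    assert pf(wff) == set(next(iter(props)))  # physical = individual prop.
```

Dropping CMS (as in the model of the previous section) makes the first assertion fail, which is exactly the situation of QM discussed in Sec. 6.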
Furthermore, the intended physical interpretation suggests that every sentence of the language ${\mathcal L}(x)$ is testable in CM. This motivates the following assumption.
\noindent
\textbf{CMT.} \emph{The set of all testable sentences of the language ${\mathcal L}(x)$ coincides in CM with the set of all sentences of ${\mathcal L}(x)$, that is, $\phi_T(x)=\phi(x)$ in CM.}
\noindent
Assumption CMT implies that ${\mathcal P}_T^f={\mathcal P}^f=\mathcal P$, whence
\begin{displaymath}
({\mathcal P}_{T}^f, \subseteq)= ({\mathcal P}, \subseteq).
\end{displaymath}
More explicitly, the poset of all testable physical propositions of a physical system $\Omega$ coincides with the poset of all individual propositions of its language ${\mathcal L}(x)$, and has the structure of a Boolean lattice. This result explains, in particular, the common statement in the literature that ``the logic of a classical mechanical system is a classical propositional logic''.\cite{r98} This statement is however misleading in our opinion, since it ignores the conceptual difference between individual, physical and testable physical propositions, that coincide in CM only because of assumptions CMS and CMT.
\section{Quantum mechanics (QM)} \label{meccanica quantistica}
We have stressed in Sec. 2 that our semantics (hence the general scheme in Secs. 2, 3 and 4) is unsuitable for QM whenever the standard interpretation of this theory is accepted. As anticipated in the Introduction and in Sec. 2, we therefore adopt in the present paper the SR interpretation of QM worked out by one of the authors, together with other authors, in a series of articles,\cite{g99}$^{-}$\cite{g02,gp04,g91,gs96a} according to which $ext_{S} E$ can be defined in every physical situation (we show in Sec. 7 that the new perspective also allows us to elucidate the concept of quantum truth underlying the standard interpretation of QM). At variance with CM, it may then occur in QM that $\emptyset \ne ext_{S} E \ne {\mathcal U}_S$, so that the assignment function $\sigma_{S}^{\rho}$ generally depends on the interpretation $\rho$. The formulas written down for the general case cannot be simplified as in Sec. 5. In particular, ${\mathcal P}^{f} \ne {\mathcal P}^{\rho}$, assumptions CMS and CMT do not hold, and ${\mathcal P}_{T}^{f} \subset {\mathcal P}^{f}$.
In order to discuss how the general case particularizes when QM is considered, let us briefly recall the mathematical representations of physical systems, states and properties within this theory.
Let $\Omega$ be a physical system. Then, $\Omega $ is associated with a separable Hilbert space $\mathcal H$ over the field of complex numbers. Let us denote by $(\mathcal{L(H)},\subseteq)$ the poset of all closed subspaces of $\mathcal H $, partially ordered by set--theoretical inclusion, and let ${\mathcal A} \subset \mathcal{L(H)}$ be the set of all one--dimensional subspaces of $\mathcal H $. Then (in the absence of superselection rules) a mapping
\begin{equation} \label{statiatomi}
\varphi:S\in {\mathcal S} \longrightarrow \varphi (S)\in \mathcal{A}
\end{equation}
exists which maps bijectively the set $\mathcal{S}$ of all pure states of $\Omega$ onto $\mathcal{A}$ (for the sake of simplicity, we will not consider mixed states in this paper, so that we understand the word \emph{pure} in the following)\footnote{It follows easily that every pure state S can also be represented by any vector $|\psi \rangle \in \varphi (S)\in \mathcal{A}$, which is the standard representation adopted in elementary QM. Moreover, a pure state $S$ is usually represented by an (orthogonal) projection operator on $\varphi (S)$ in more advanced QM. However, the representation $\varphi $ introduced here is more suitable for our purposes in the present paper.}. In addition, a mapping
\begin{equation} \label{proprietaproiettori}
\chi :E\in \mathcal{E}\longrightarrow \chi (E)\in \mathcal{L(H)}
\end{equation}
exists which maps bijectively the set $\mathcal E$ of all properties of $\Omega $ onto $\mathcal{L(H)}$.
The poset $(\mathcal{L(H)},\subseteq)$ is characterized by a set of mathematical properties. In particular, it is a complete, orthocomplemented, weakly modular, atomic lattice which satisfies the covering law.\cite{m63}$^{-}$\cite{bc81} We denote by $^{\bot }$, $\Cap $ and $\Cup $ orthocomplementation, meet and join, respectively, in $(\mathcal{L(H)}, \subseteq)$ (it is important to observe that $\Cap $ coincides with the set--theoretical intersection $\cap $ of subspaces of $\mathcal{L(H)}$, while $^{\bot }$ does not generally coincide with the set--theoretical complementation $^{\prime}$, nor does $\Cup$ coincide with the set--theoretical union $\cup $). Furthermore, we note that $\mathcal{A}$ obviously coincides with the set of all atoms of $(\mathcal{L(H)},\subseteq)$.
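As a concrete numerical illustration of these caveats (our own sketch, using subspaces of $\mathbb{R}^3$ rather than of a complex Hilbert space; the helper names are ours), one can check that the join of two atoms strictly contains their set-theoretical union, and that the set-theoretical complement of a subspace is larger than its orthocomplement:

```python
import numpy as np

def contains(B, v):
    """True iff v lies in the subspace spanned by the orthonormal columns of B."""
    return np.allclose(B @ (B.T @ v), v)

e1, e2, e3 = np.eye(3)
L1 = e1.reshape(3, 1)     # atom: line along e1
L2 = e2.reshape(3, 1)     # atom: line along e2

# Lattice join of L1 and L2: the closed span of their union (the e1-e2 plane).
U, s, _ = np.linalg.svd(np.hstack([L1, L2]), full_matrices=False)
J = U[:, s > 1e-10]

v = e1 + e2
assert contains(J, v)                                   # v belongs to the join ...
assert not contains(L1, v) and not contains(L2, v)      # ... but not to the set union

# Orthocomplement of L1 (the e2-e3 plane) vs set-theoretical complement:
P_perp = np.eye(3) - L1 @ L1.T          # projector onto the orthocomplement of L1
w = e1 + e3                             # outside L1, hence in its set complement,
assert not np.allclose(P_perp @ w, w)   # yet not in its orthocomplement
```

So the vector $w$ witnesses that $^{\prime}$ and $^{\bot}$ differ, while $v$ witnesses that $\Cup$ and $\cup$ differ.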
Let us denote by $\prec $ the order induced on $\mathcal{E}$, via the bijective representation $\chi $, by the order $\subseteq$ defined on $\mathcal{L(H)}$. Then, the poset $(\mathcal{E},\prec )$ is order--isomorphic to $(\mathcal{L(H)},\subseteq)$, hence it is characterized by the same mathematical properties characterizing $(\mathcal{L(H)},\subseteq)$. In particular, the unary operation induced on it, via $\chi $, by the orthocomplementation defined on $(\mathcal{L(H)},\subseteq)$, is an orthocomplementation, and $(\mathcal{E},\prec )$ is an orthomodular (\emph{i.e.}, orthocomplemented and weakly modular) lattice, usually called \emph{the lattice of properties} of $\Omega $. By abuse of language, we denote the lattice operations on $(\mathcal{E},\prec )$ by the same symbols used above in order to denote the corresponding lattice operations on $(\mathcal{L(H)},\subseteq)$.
Orthomodular lattices are said to characterize semantically \emph{orthomodular QLs }in the literature.\cite{dcgg04} The lattice of properties $(\mathcal{E},\prec )$ is a less general structure in QM, since it inherits a number of further properties from $(\mathcal{L(H)},\subseteq)$, and can be identified with the concrete, or standard, sharp QL mentioned in Sec. 1 (simply called QL here for the sake of brevity).
A further lattice, isomorphic to $(\mathcal{E},\prec)$, will be used in the following. In order to introduce it, let us consider the mapping
\begin{equation}
\theta :E\in \mathcal{E}\longrightarrow \mathcal{S}_{E}=\{S\in \mathcal{S}
\mid \varphi (S)\subseteq \chi (E)\}\in \mathcal{L(S)},
\end{equation}
where $\mathcal{L(S)}=\{\mathcal{S}_{E}\mid E\in \mathcal{E}\}$ is the range of $\theta$, and generally is a proper subset of the power set $\mathcal{P(S)}$ of $\mathcal{S}$. The poset $(\mathcal{L(S)},\subseteq )$ is order--isomorphic to $(\mathcal{L(H)},\subseteq )$, hence to $(\mathcal{E},\prec )$, since $\varphi $ and $\chi $ are bijective, so that $\theta$ is bijective and order--preserving. Therefore $(\mathcal{L(S)},\subseteq)$ is characterized by the same mathematical properties characterizing $(\mathcal{E},\prec )$. In particular, the unary operation induced on it, via $\theta$, by the orthocomplementation defined on $(\mathcal{E},\prec )$, is an orthocomplementation, and $(\mathcal{L(S)},\subseteq )$ is an orthomodular
lattice. We denote orthocomplementation, meet and join on $(\mathcal{L(S)},\subseteq )$ by the same symbols $^{\bot }$, $\Cap $, and $\Cup $, respectively, that we have used in order to denote the corresponding operations on $(\mathcal{L(H)},\subseteq )$ and $(\mathcal{E},\prec )$, and call $(\mathcal{L(S)},\subseteq )$ \emph{the lattice of closed subsets of} $\mathcal{S}$ (the word \emph{closed} refers here to the fact that, for every $\mathcal{S}_{E}\in $ $\mathcal{L(S)}$, $(\mathcal{S}_{E}^{\bot})^{\bot }=\mathcal{S}_{E}$). We also note that the operation $\Cap $ coincides with the set--theoretical intersection $\cap $ on $\mathcal{L(S)}$ because of the analogous result holding in $(\mathcal{L(H)},\subseteq)$\footnote{Whenever the dimension of $\mathcal H$ is finite, the lattice $(\mathcal{L(H)},\subseteq)$ and/or the lattice $(\mathcal{L(S)},\subseteq)$ can be identified with Birkhoff and von Neumann's modular lattice of \emph{experimental propositions}, which was introduced in the 1936 paper that started the research on QL.\cite{bvn36} This identification is impossible if the dimension of $\mathcal H$ is not finite, since $(\mathcal{L(H)},\subseteq)$ and $(\mathcal{L(S)},\subseteq)$ are weakly modular but not modular in this case. Birkhoff and von Neumann's requirement of modularity has deep roots in von Neumann's concept of probability in QM according to some authors.\cite{r98}}.
Based on the above definitions, we now introduce the following assumption.
\noindent
\textbf{QMT.} \emph{The poset $({\mathcal P}_{T}^{f}, \subseteq)$ of all testable physical propositions associated with statements of $\phi_{T}(x)$ (equivalently, with atomic statements of ${\mathcal L}(x)$) coincides in QM with the lattice $({\mathcal L}({\mathcal S}), \subseteq)$ of all closed subsets of $\mathcal S$.}
\noindent
Assumption QMT is intuitively natural, and can be justified by using the standard statistical interpretation of QM. We do not insist on this topic here for the sake of brevity. We note instead that assumption QMT implies that the posets $(\phi_{T}(x)/_{\approx} , \prec)$ and $({\mathcal P}_{T}^{f}, \subseteq)$, on one side, and the lattices $({\mathcal L}({\mathcal S}), \subseteq)$, $(\mathcal{L(H)},\subseteq)$, $({\mathcal E}, \prec)$ on the other side, are order--isomorphic. Therefore also the operations of meet, join and orthocomplementation on $(\phi_{T}(x)/_{\approx} , \prec)$ and $({\mathcal P}_{T}^{f}, \subseteq)$ will be denoted by the symbols $\doublecap$, $\doublecup$ and $^{\perp}$, respectively. The link between these operations and set--theoretical intersection, union and complementation in the set $\mathcal S$ of all states can be established as follows. For every $\alpha(x),\beta(x) \in \phi_{T}(x)$,
\begin{eqnarray}
(p_{\alpha(x)}^{f})^{\perp} \subseteq {\mathcal S} \setminus p_{\alpha(x)}^{f}, \label{lnot} \\
p_{\alpha(x)}^{f} \cap p_{\beta(x)}^{f}=p_{\alpha(x)}^{f} \doublecap p_{\beta(x)}^{f}, \label{land} \\
p_{\alpha(x)}^{f} \cup p_{\beta(x)}^{f} \subseteq p_{\alpha(x)}^{f} \doublecup p_{\beta(x)}^{f} \label{lor}.
\end{eqnarray}
The isomorphisms above allow one to recover QL as a quotient algebra of sentences of ${\mathcal L}(x)$. However, they also make it intuitively clear that associating the properties (or `propositions') of QL with sentences of ${\mathcal L}(x)$ is not trivial. Indeed, the association requires selecting testable wffs of $\phi(x)$, grouping them into classes of physical rather than logical equivalence, adopting assumption QMT, and, finally, identifying $({\mathcal L}({\mathcal S}), \subseteq)$ with $({\mathcal E}, \prec)$.
The isomorphisms above also suggest looking deeper into the links existing between the logical operations defined on $\phi(x)$ and the lattice operations of QL. To this end, let us note that statements (i), (ii) and (iii) in Sec. 3, if compared with Eqs. (\ref{lnot}), (\ref{land}) and (\ref{lor}), respectively, yield, for every $\alpha(x), \beta(x) \in \phi_T(x)$,
\begin{eqnarray}
p_{\lnot\alpha(x)}^{f} \subseteq {\mathcal S} \setminus p_{\alpha(x)}^{f} \supseteq ( p_{\alpha(x)}^{f})^{\perp}, \label{qnot} \\
p_{\alpha(x) \land \beta(x)}^{f}= p_{\alpha(x)}^{f} \cap p_{\beta(x)}^{f}= p_{\alpha(x)}^{f} \Cap p_{\beta(x)}^{f}, \label{qand} \\
p_{\alpha(x) \lor \beta(x)}^{f} \supseteq p_{\alpha(x)}^{f} \cup p_{\beta(x)}^{f} \subseteq p_{\alpha(x)}^{f} \Cup p_{\beta(x)}^{f}. \label{qor}
\end{eqnarray}
Eq. (\ref{qand}) shows that, if $\alpha(x)$ and $\beta(x)$ belong to $\phi_T(x)$, then $\alpha(x) \land \beta(x)$ belongs to $\phi_T(x)$, and establishes a strong connection between the connective $\land$ of ${\mathcal L}(x)$ and the lattice operation $\Cap$ of QL. Eqs. (\ref{qnot}) and (\ref{qor}) establish instead only weak connections between the connectives $\lnot$ and $\lor$, from one side, and the lattice operations $^{\perp}$ and $\Cup$, from the other side. Hence, no simple structural correspondence can be established between ${\mathcal L}(x)$ and QL.
One can, however, obtain a more satisfactory correspondence between the sentences of a suitable language and the `propositions' of QL by using a fragment of ${\mathcal L}(x)$ in order to construct a new quantum language ${\mathcal L}_{TQ}(x)$, as follows.
First of all, we consider two properties $E,F \in \mathcal E$ and observe that, since the mapping $\chi$ introduced in Eq. (\ref{proprietaproiettori}) is bijective, $E$ and $F$ coincide whenever they are represented by the same subspace of ${\mathcal L}({\mathcal H})$. This implies that the following sequence of equivalences holds.
\begin{displaymath}
p_{E(x)}^{f}=p_{F(x)}^{f} \quad \textrm{\emph{iff}} \quad E=F \quad \textrm{\emph{iff}} \quad E(x) \approx F(x) \quad \textrm{\emph{iff}} \quad E(x) \equiv F(x).
\end{displaymath}
It follows in particular that every equivalence class of $\phi_{T}(x)/_{\approx}$ contains one and only one atomic wff of ${\mathcal L}(x)$. Since the set ${\mathcal E}(x)$ of all atomic wffs of ${\mathcal L}(x)$ (Sec. 2) belongs to $\phi_T(x)$, we conclude that the correspondence that maps every $\alpha(x) \in \phi_T(x)$ onto the atomic wff $E_{\alpha}(x)$, the existence of which is guaranteed by Eq. (\ref{testable}), is a surjective mapping. Moreover, this mapping maps all physically equivalent wffs of $\phi_T(x)$ onto the same atomic wff of ${\mathcal E}(x)$.
Secondly, let us consider the set $\phi_{\land}(x)$ of all wffs of ${\mathcal L}(x)$ which either are atomic or contain the connective $\land$ only. Because of Eq. (\ref{qand}), the proposition associated with a wff $\alpha(x) \land \beta(x)$ of this kind belongs to ${\mathcal P}_T^{f}$, hence $\alpha(x) \land \beta(x)$ belongs to $\phi_T(x)$, so that $\phi_{\land}(x) \subseteq \phi_T(x)$. Then, let us introduce a new connective $\lnot_{Q}$ (\emph{quantum negation}) which can be applied (repeatedly) to wffs of $\phi_{\land}(x)$ following standard formation rules for negation connectives. We thus obtain a new formal language ${\mathcal L}_{TQ}(x)$, whose set of wffs will be denoted by $\phi_{TQ}(x)$. We adopt the semantic rules introduced in Sec. 2 for all wffs of $\phi_{\land}(x)\subseteq \phi_{TQ}(x)$, and complete the semantics of ${\mathcal L}_{TQ}(x)$ by means of the following rule.
\noindent
\textbf{QN.} \emph{Let $\alpha(x)\in \phi_{TQ}(x)$ and let a wff $E_{\alpha}(x) \in {\mathcal E}(x)$ exist such that $\alpha(x)$ is true} iff \emph{$E_{\alpha}(x)$ is true. Then, $\lnot_{Q} \alpha(x)$ is true} iff \emph{$E_{\alpha}^{\perp}(x)$ is true.}
\noindent
It is easy to see that rule QN implies that for every $\alpha(x) \in \phi_{TQ}(x)$, an elementary wff $E_{\alpha}(x)$ exists such that $\alpha(x)$ is true iff $E_{\alpha}(x)$ is true. This conclusion has the following immediate consequences.
(i) One can define, for every interpretation $\rho$ of the variable $x$ and state $S$, an assignment function $\tau_{S}^{\rho}:\phi_{TQ}(x) \longrightarrow \{ T,F \}$. Hence, a logical preorder and a logical equivalence relation (that we still denote by the symbols $\le$ and $\equiv$, respectively, by abuse of language) can be defined on $\phi_{TQ}(x)$ by using the definitions in Sec. 2 with $\phi_{TQ}(x)$ in place of $\phi(x)$ and $\tau_{S}^{\rho}$ in place of $\sigma_{S}^{\rho}$.
(ii) One can associate a physical proposition with every $\alpha(x) \in \phi_{TQ}(x)$ by using Eq. (\ref{pfalpha}) with $\tau_{S}^{\rho}$ in place of $\sigma_{S}^{\rho}$. Hence a physical preorder and a physical equivalence relation (that we still denote by the symbols $\prec$ and $\approx$, respectively, by abuse of language) can be defined on $\phi_{TQ}(x)$ by using the definitions in Sec. 3 with $\phi_{TQ}(x)$ in place of $\phi(x)$ (one can also show that $\approx$ coincides with $\equiv$ on $\phi_{TQ}(x)$).
(iii) The notion of testability introduced in Sec. 4 can be extended to ${\mathcal L}_{TQ}(x)$ by using Eq. (\ref{testable}) with $\phi_{TQ}(x)$ in place of $\phi(x)$, obtaining that all wffs of $\phi_{TQ}(x)$ are testable. Hence, the set of all physical propositions associated with wffs of $\phi_{TQ}(x)$ coincides with ${\mathcal P}_T^{f}$.
It follows from (ii) and (iii) that $(\phi_{TQ}(x)/_{\approx}, \prec)$ is isomorphic to the lattice $({\mathcal P}_T^{f}, \subseteq)$, so that these two order structures can be identified.
The set of connectives defined on ${\mathcal L}_{TQ}(x)$ can now be enriched by introducing derived connectives. In particular, a \emph{quantum join} can be defined by setting, for every $\alpha(x),\beta(x) \in \phi_{TQ}(x)$,
\begin{equation}
\alpha(x) \lor_Q \beta(x)=\lnot_{Q}(\lnot_{Q}\alpha(x) \land \lnot_{Q}\beta(x)).
\end{equation}
It is then easy to show that the following equalities hold.
\begin{eqnarray}
p_{\lnot_{Q} \alpha(x)}^{f}=(p_{\alpha(x)}^{f})^{\perp}, \\
p_{\alpha(x) \land \beta(x)}^{f}=p_{\alpha(x)}^{f} \doublecap p_{\beta(x)}^{f}, \\
p_{\alpha(x) \lor_{Q} \beta(x)}^{f}=p_{\alpha(x)}^{f} \doublecup p_{\beta(x)}^{f}.
\end{eqnarray}
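These equalities can be checked numerically in a finite-dimensional real model. The sketch below is ours (the helpers `orth`, `perp`, `meet` and `proj` are illustrative, not notation from the text); it verifies that the De Morgan-style definition of $\lor_Q$ reproduces the lattice join of two one-dimensional subspaces:

```python
import numpy as np

TOL = 1e-10

def orth(M):
    """Orthonormal basis for the column space of M."""
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > TOL]

def perp(B, n):
    """Orthocomplement in R^n of the subspace with orthonormal basis B."""
    if B.shape[1] == 0:
        return np.eye(n)
    U, s, _ = np.linalg.svd(B, full_matrices=True)
    return U[:, int(np.sum(s > TOL)):]

def meet(A, B, n):
    """Lattice meet: vectors left fixed by both projectors (the intersection)."""
    M = np.vstack([np.eye(n) - A @ A.T, np.eye(n) - B @ B.T])
    _, s, Vt = np.linalg.svd(M)
    return Vt[int(np.sum(s > TOL)):].T

def proj(B):
    """Orthogonal projector onto the subspace with orthonormal basis B."""
    return B @ B.T

n = 3
A = orth(np.array([[1.0], [0.0], [0.0]]))   # line spanned by e1
B = orth(np.array([[1.0], [1.0], [1.0]]))   # line spanned by (1, 1, 1)

join_direct = orth(np.hstack([A, B]))                     # lattice join = closed span
join_demorgan = perp(meet(perp(A, n), perp(B, n), n), n)  # not_Q(not_Q A  meet  not_Q B)

# Two bases span the same subspace iff they give the same orthogonal projector.
assert np.allclose(proj(join_direct), proj(join_demorgan))
```

The same helpers also confirm that `meet` coincides with plain set-theoretical intersection, in line with the remark made for $(\mathcal{L(H)},\subseteq)$ above.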
The equations above establish a strong connection between the logical operations defined on $\phi_{TQ}(x)$ and the lattice operations of QL. Hence, a structural correspondence exists between ${\mathcal L}_{TQ}(x)$ and QL, and the latter can be recovered within our general scheme also by first considering the set of all elementary wffs of ${\mathcal L}(x)$, and then constructing ${\mathcal L}_{TQ}(x)$ and the quotient algebra $(\phi_{TQ}(x)/_{\approx}, \prec)$. It is now apparent that \emph{the semantic rules for quantum connectives have an empirical character} (they depend on the mathematical representation of states and properties in QM and on assumption QMT) and that \emph{they coexist with the semantic rules for classical connectives in our approach} (the deep reason for this is, of course, our adoption of the SR interpretation of QM). In our opinion, these conclusions are relevant, since they deepen and formalize a new perspective on QL that has been propounded in some previous papers\cite{gs96b}$^{-}$\cite{gs96a} and is completely different from the standard viewpoint about this kind of logic.
To conclude, let us observe that a further derived connective $\xrightarrow[Q]{}$ can be introduced in $\phi_{TQ}(x)$ by setting, for every $\alpha(x),\beta(x)\in \phi_{TQ}(x)$,
\begin{equation}
\alpha(x)\xrightarrow[Q]{} \beta(x)=(\lnot_{Q} \alpha(x)) \lor_{Q} (\alpha(x) \land \beta(x)).
\end{equation}
One can thus recover within ${\mathcal L}_{TQ}(x)$ the \emph{Sasaki hook}, the role of which is largely discussed in the literature on QL.\cite{r98,dcgg04,bc81}
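As a minimal sanity check (our own sketch, not part of the original argument): for commuting properties, modeled here as $0/1$ vectors standing in for diagonal projectors so that the lattice operations act elementwise, the Sasaki hook reduces to classical material implication:

```python
import numpy as np

# Commuting properties modeled as 0/1 vectors (diagonal projectors);
# the lattice operations then act elementwise.
def perp(e):          # orthocomplementation
    return 1 - e

def meet(e, f):       # lattice meet
    return e * f

def join_q(e, f):     # quantum join, defined De Morgan-style from not_Q and meet
    return perp(meet(perp(e), perp(f)))

def sasaki(e, f):     # Sasaki hook: E ->_Q F = (not_Q E) v_Q (E meet F)
    return join_q(perp(e), meet(e, f))

E = np.array([1, 1, 0, 0])
F = np.array([1, 0, 1, 0])
# In this commuting case the hook coincides with material implication (not E) or F.
assert np.array_equal(sasaki(E, F), np.maximum(1 - E, F))
```

For non-commuting subspaces the hook genuinely differs from material implication, which is precisely what the QL literature cited above analyzes.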
\section{Quantum truth}
The general notion of \emph{certainly true} introduced in Sec. 3 is defined for all wffs of ${\mathcal L}(x)$. Yet, according to our approach, only wffs of $\phi_{T}(x)$ can be associated with empirical procedures which allow one to check whether they are certainly true or not. Whenever $\alpha(x) \in \phi_{T}(x)$, the notion of \emph{certainly true} can be worked out in order to define a verificationist notion of \emph{quantum truth} (\emph{Q--truth}) in QM, as follows.
\noindent
\textbf{QT}. \emph{Let $\alpha(x) \in \phi_{T}(x)$. Then, we put:}
\emph{$\alpha(x)$ is} Q--true \emph{in $S \in \mathcal S$} iff \emph{$S \in p_{\alpha(x)}^{f}$;}
\emph{$\alpha(x)$ is} Q--false \emph{in $S \in \mathcal S$} iff \emph{$S \in (p_{\alpha(x)}^{f})^{\perp}$;}
\emph{$\alpha(x)$ has no Q--truth value in $S \in \mathcal S$ (equivalently, $\alpha(x)$ is} Q--indeterminate \emph{in $S$)} iff \emph{$S \in {\mathcal S} \setminus (p_{\alpha(x)}^{f} \cup (p_{\alpha(x)}^{f})^{\perp})$}.
\noindent
It obviously follows from definition QT that $\alpha(x)$ is Q--true in $S$ iff it is certainly true in $S$.
Definition QT can be physically justified by using the analysis of the notion of truth in QM recently provided by ourselves\cite{gs04} and subsequently deepened by one of us.\cite{g05} We only note here that it is equivalent to defining a wff $\alpha(x)\in \phi(x)$ as Q--true (Q--false) in $S$ iff:
(i) $\alpha(x)$ is testable;
(ii) $\alpha(x)$ can be tested and found to be true (false) on the physical object $x$ without altering the state $S$ of $x$.
The proof of the equivalence of the two definitions is rather simple but requires some use of the laws of QM (see again Refs. 21 and 26).
It is apparent that the notions of truth and Q--truth coexist in our approach. Indeed, a wff $\alpha(x) \in \phi(x)$ is Q--true (Q--false) for a given state $S$ of the physical system iff it belongs to $\phi_{T}(x)$ and it is true (false) independently of the interpretation of the variable $x$ (equivalently, iff it belongs to $\phi_{T}(x)$ and can be empirically proved to be true or false without altering the state $S$ of $x$). This realizes an \emph{integrated perspective}, according to which the classical and the quantum conception of truth are not mutually incompatible.\cite{g05,gs04,g06} However, definition QT introduces the notion of Q--truth on a fragment only (the set $\phi_T(x) \subset \phi(x)$) of the language ${\mathcal L}(x)$. If one wants to introduce this notion on the set of all wffs of a suitable quantum language, one can refer to the language ${\mathcal L}_{TQ}(x)$ constructed at the end of Sec. 6. Then, all wffs of $\phi_{TQ}(x)$ are testable, and definition QT can be applied in order to define Q--truth on ${\mathcal L}_{TQ}(x)$ by simply substituting $\phi_{TQ}(x)$ for $\phi_{T}(x)$ in it. Again, classical truth and Q--truth may coexist on ${\mathcal L}_{TQ}(x)$ in our approach.
Let us close this section by commenting briefly on the notion of truth within the standard interpretation of QM. Whenever this interpretation is adopted, the languages ${\mathcal L}(x)$ and ${\mathcal L}_{TQ}(x)$ can still be formally introduced, but no classical semantics can be defined on them because of the impossibility of defining, for every $S \in \mathcal S$ and $E \in \mathcal E$, $ext_{S} E$ (see Sec. 2). One can still define, however, a notion of Q--truth for ${\mathcal L}_{TQ}(x)$. Indeed, one can firstly introduce a mapping $\chi: \alpha(x) \in \phi_{TQ}(x) \longrightarrow E_{\alpha} \in \mathcal E$ by means of recursive rules, as follows.
\begin{displaymath}
\begin{tabular}{ll}
For every atomic wff $E(x) \in {\mathcal E}(x)$, & $\chi(E(x))=E$, \\
For every $\alpha(x) \in \phi_{TQ}(x)$, & $\chi(\lnot_{Q}\alpha(x))=E_{\alpha}^{\perp}$, \\
For every $\alpha(x),\beta(x) \in \phi_{TQ}(x)$, & $\chi(\alpha(x) \land \beta(x))=E_{\alpha} \Cap E_{\beta}$. \\
\end{tabular}
\end{displaymath}
Then, one can associate a physical proposition $p_{\alpha(x)}^{f} \in {\mathcal L}({\mathcal S})$ with every $\alpha(x) \in \phi_{TQ}(x)$ by setting $p_{\alpha(x)}^{f}=\theta(E_{\alpha})$. Finally, one can define Q--truth on $\phi_{TQ}(x)$ by means of definition QT, independently of any classical definition of truth.
It is apparent that the above notion of Q--truth can be identified with the (verificationist\cite{gs04}) quantum notion of truth whose peculiar features have been widely explored in the literature on QL (in particular, a \emph{tertium non datur} principle does not hold in ${\mathcal L}_{TQ}(x)$). Hence, the interpretation of QL as a new way of reasoning which is typical of QM seems legitimate. But this widespread opinion is highly problematical. Indeed, whenever $S$ is given, some wffs of $\phi_{TQ}(x)$ have a truth value and some do not, quantum connectives are not truth--functional, and the notion of truth appears rather elusive and mysterious.\cite{bo1} Accepting our general perspective provides instead a reinterpretation of the notion of truth underlying the standard interpretation of QM, reconciling it with classical truth, and allows one to avoid the paradoxes following from the simultaneous (usually implicit) adoption of two incompatible notions of truth (classical and quantum).
\section{The pragmatic interpretation of QL}
The definition of \emph{Q--true in $S$} as \emph{certainly true in $S$} for wffs of $\phi_{T}(x)$ in Sec. 7 suggests, intuitively, that the assertion of a sentence $\alpha(x)$ of $\phi_{T}(x)$ should be considered justified in $S$ whenever $\alpha(x)$ is Q--true in $S$, unjustified otherwise. This informal definition can be formalized by introducing the assertion sign $\vdash$ and setting
\begin{center}
$\vdash \alpha(x)$ is \emph{justified} (\emph{unjustified}) in $S$ \emph{iff} \\
$\alpha(x)$ is Q--true (not Q--true) in $S$.
\end{center}
The set of all elementary wffs of $\phi_{T}(x)$, each preceded by the assertion sign $\vdash$, can be identified with the set of all elementary assertive formulas of the quantum pragmatic language ${\mathcal L}_{Q}^{P}$ introduced by one of the authors in a recent paper\cite{g05} in order to provide a pragmatic interpretation of QL\footnote{It must be noted that the pragmatic interpretation of QL has some advantages with respect to the interpretation propounded in Sec. 6. In particular, it is independent of the interpretation of QM that is accepted (standard or SR), while our interpretation in this paper follows from adopting a classical notion of truth, hence from accepting the SR interpretation of QM.}. The set $\psi_{A}^{Q}$ of all \emph{assertive formulas} (afs) of ${\mathcal L}_{Q}^{P}$ is made up of all aforesaid elementary afs, plus all formulas obtained by recursively applying the \emph{pragmatic connectives} $N$, $K$, $A$ to elementary afs. For every $S \in \mathcal S$ a \emph{pragmatic evaluation function} $\pi_S$ is defined which assigns a \emph{justification value} (justified/unjustified) to every af of $\psi_{A}^{Q}$ and allows one to introduce on $\psi_A^Q$ a preorder $\prec$ and an equivalence relation $\approx$ following standard procedures. More importantly, a \emph{p--decidable sublanguage} ${\mathcal L}_{QD}^{P}$ of ${\mathcal L}_{Q}^{P}$ can be constructed whose set $\phi_{AD}^{Q}$ of afs consists of a suitable subset of all afs of $\psi_{A}^{Q}$ which have a justification value that can be determined by means of empirical procedures of proof (in particular, all elementary afs of $\psi_A^{Q}$ belong to $\phi_{AD}^{Q}$). ${\mathcal L}_{QD}^{P}$ can then be compared with the quantum language ${\mathcal L}_{TQ}(x)$ introduced at the end of Sec. 6 by constructing a one--to--one mapping $\tau$ of $\phi_{TQ}(x)$ onto $\phi_{AD}^{Q}$, as follows.
\begin{displaymath}
\begin{tabular}{ll}
For every $E(x) \in \phi_{TQ}(x)$, & $\tau (E(x))=\vdash E(x)$, \\
For every $\alpha (x) \in \phi_{TQ}(x)$, & $\tau (\lnot_{Q} \alpha(x))=N\vdash \alpha(x)$, \\
For every $\alpha (x), \beta(x) \in \phi_{TQ}(x)$, & $\tau(\alpha(x) \land \beta(x))=\vdash \alpha(x) K \vdash \beta(x)$, \\
For every $\alpha (x), \beta(x) \in \phi_{TQ}(x)$, & $\tau(\alpha(x) \lor_{Q} \beta(x))=\vdash \alpha(x) A \vdash \beta(x)$.
\end{tabular}
\end{displaymath}
Indeed, it is rather easy to show (we do not provide an explicit proof here for the sake of brevity) that the mapping $\tau$ preserves the preorder $\prec$ and the equivalence relation $\approx$ (in the sense that $\alpha(x) \prec \beta(x)$ iff $\tau(\alpha(x)) \prec \tau(\beta(x))$, and $\alpha(x) \approx \beta(x)$ iff $\tau(\alpha(x)) \approx \tau(\beta(x))$). Moreover, the wff $\alpha(x) \in \phi_{TQ}(x)$ is Q--true iff the af $\tau(\alpha(x)) \in \phi_{AD}^{Q}$ is justified, which translates a semantic concept (\emph{Q--true}) defined on the language ${\mathcal L}_{TQ}(x)$ into a pragmatic concept (\emph{justified}) defined on the pragmatic language ${\mathcal L}_{QD}^{P}$. Bearing in mind our comments at the end of Sec. 6, we can summarize these results by saying that QL can be interpreted as a theory of the notion of testability in QM from a semantic viewpoint, a theory of the notion of empirical justification in QM from a pragmatic viewpoint. The two interpretations can be connected, via the mapping $\tau$, in such a way that \emph{Q--true} transforms into \emph{justified}, which is intuitively satisfactory.
\section{Physical propositions and possible worlds}
The formal language ${\mathcal L}(x)$ introduced in Sec. 2 is exceedingly simple from a syntactical viewpoint, even if it is very useful in order to illustrate what physicists actually do when dealing with QL. Its syntactical simplicity has forced us, however, to set up a somewhat complicated semantics, in which, in particular, states are formally treated as possible worlds of a Kripke--like semantics. A less intuitive but logically more satisfactory approach would provide an extended syntactical apparatus while simplifying the semantics. This could be done by enriching the alphabet of ${\mathcal L}(x)$ in two ways:
(i) adding a universal quantifier (with standard semantics);
(ii) adding the set of states as a new class of monadic predicates of ${\mathcal L}(x)$.
Let us comment briefly on these possible extensions of ${\mathcal L}(x)$. Firstly, let (i) only be introduced. Then, a family of individual propositions $p_{\alpha(x)}^{\rho}$ (one for every interpretation $\rho$) can be associated with the wff $\alpha(x)$, and the proposition $p_{(\forall x) \alpha(x)}= \cap_{\rho} p_{\alpha(x)}^{\rho}$ can be associated with the quantified wff $(\forall x) \alpha(x)$. Hence, we get
\begin{displaymath}
p_{\alpha(x)}^{f}=p_{(\forall x) \alpha(x)}
\end{displaymath}
which provides a satisfactory interpretation of the physical propositions introduced in Sec. 3 and of the related notion of certainly true.
Secondly, let us note that considering states as possible worlds is a common practice in QL,\cite{dcgg04} but it does not fit well with the standard logical interpretation of possible worlds. In order to avoid this problem, one could introduce (ii), as one of us has done, together with other authors, in several papers.\cite{g91,gs96a} In this case, states are not considered possible worlds, and propositions as defined in the present paper are not propositions in the standard logical sense (rather, an `individual proposition' associated with a wff $\alpha(x)$ is the set of all states which make a sentence of the form $S(x) \rightarrow \alpha(x)$ true in a given interpretation of $x$, while a `physical proposition' is a set of `certainly yes' states which make a sentence of the form $(\forall x) (S(x) \rightarrow \alpha(x))$ true). We do not insist here on this more general scheme, and limit ourselves to observing that it is compatible with a standard Kripkean semantics, which can be enriched by introducing \emph{physical laboratories} in order to characterize the truth mode of empirical physical laws in more detail and connect the notions of probability and frequency.\cite{g91,gs96a} Yet, of course, an approach of this kind would make the interpretation of QL that we have discussed in this paper much less direct and straightforward.
\end{document} |
\begin{document}
\title{Grape Cold Hardiness Prediction via Multi-Task Learning}
\begin{abstract}
Cold temperatures during fall and spring have the potential to cause frost damage to grapevines and other fruit plants, which can significantly decrease harvest yields. To help prevent these losses, farmers deploy expensive frost mitigation measures such as sprinklers, heaters, and wind machines when they judge that damage may occur. This judgment, however, is challenging because the cold hardiness of plants changes throughout the dormancy period and it is difficult to directly measure. This has led scientists to develop cold hardiness prediction models that can be tuned to different grape cultivars based on laborious field measurement data. In this paper, we study whether deep-learning models can improve cold hardiness prediction for grapes based on data that has been collected over a 30-year time period. A key challenge is that the amount of data per cultivar is highly variable, with some cultivars having only a small amount. For this purpose, we investigate the use of multi-task learning to leverage data across cultivars in order to improve prediction performance for individual cultivars. We evaluate a number of multi-task learning approaches and show that the highest performing approach is able to significantly improve over learning for single cultivars and outperforms the current state-of-the-art scientific model for most cultivars.
\end{abstract}
\section*{Introduction}
The ability of grapevines to survive cold temperatures during fall, winter, and spring is known as Cold Hardiness ($H_c$). Cold hardiness in grapes and other plants is dynamic in nature, with a predictable seasonal trend. It is low at the beginning of fall, when the plant has not yet acclimatized, and peaks during mid-winter when the plant reaches acclimation. As spring arrives, the plant deacclimatizes and its cold hardiness decreases to the low summer levels. This means that during the fall and spring, when cold hardiness is low, unusually cold temperatures can be lethal, especially during sudden frost events.
To mitigate lethal damage due to cold temperatures, farmers can deploy expensive preemptive methods such as wind machines, sprinklers, and heaters to raise the air temperature. However, the decision of when to invest in expensive mitigation depends on knowledge of the current unknown cold hardiness. While cold hardiness can be measured, it requires expertise and expensive equipment, which farmers rarely have. Thus, farmers rely on estimates of cold hardiness derived from a combination of experience and scientific models. This highlights the need for accurate data-centric models for cold-hardiness prediction.
Current state-of-the-art cold hardiness models (e.g. \cite{ferguson_modeling_2014}) use a biological basis to obtain a parameterized model that can be tuned for different grape cultivars using cold-hardiness data. While reasonably effective, these models are relatively simple and only use ambient temperature as input. Rather, cold hardiness likely depends on multiple weather factors (e.g. humidity and precipitation) in complex ways \cite{mills_cold-hardiness_2006} that are not fully captured by current scientific models. This raises the question of whether modern machine learning methods can improve on current models via their increased expressiveness and ability to consume richer inputs.
In this work, we evaluate the use of Recurrent Neural Networks (RNNs) for predicting cold hardiness based on time series weather data. A key challenge is that ground-truth cold-hardiness data is quite limited in comparison with many applications of deep learning. In our experiments, we find that for some grape cultivars, where there is significant data, RNNs can be quite accurate and outperform a current state-of-the-art model. However, for cultivars with more limited data, the RNNs can perform poorly. This raises the question of whether we can leverage data across multiple cultivars to improve the prediction performance for cultivars with limited data.
Our main contributions are: 1) framing this multi-cultivar learning problem as multi-task learning, and 2) proposing and evaluating a variety of multi-task RNN models on real-world data collected from over twenty cultivars, with data amounts ranging from 34 down to just 4 seasons. Our results show that multi-task learning is able to significantly outperform learning from the data of a single cultivar alone and very often outperforms the state-of-the-art scientific model. We aim to deploy a model resulting from this work on an existing weather network for trial use by grape farmers.
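The shared-encoder idea behind such multi-task models can be sketched as follows. This is a minimal, untrained numpy forward pass with hypothetical dimensions and weights; the trained architectures evaluated in this paper may differ:

```python
import numpy as np

class MultiTaskRNN:
    """Shared vanilla-RNN encoder over daily weather features, with one
    linear output head per grape cultivar (illustrative sketch only)."""

    def __init__(self, n_features, n_hidden, n_cultivars, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(n_hidden)
        self.Wx = rng.normal(0.0, scale, (n_hidden, n_features))      # shared
        self.Wh = rng.normal(0.0, scale, (n_hidden, n_hidden))        # shared
        self.heads = rng.normal(0.0, scale, (n_cultivars, n_hidden))  # per-cultivar

    def forward(self, x, cultivar):
        """x: (T, n_features) season of daily weather -> (T,) LTE50 estimates."""
        h = np.zeros(self.Wh.shape[0])
        preds = []
        for xt in x:
            h = np.tanh(self.Wx @ xt + self.Wh @ h)   # shared recurrent dynamics
            preds.append(self.heads[cultivar] @ h)    # cultivar-specific readout
        return np.array(preds)

model = MultiTaskRNN(n_features=5, n_hidden=16, n_cultivars=3)
season = np.random.default_rng(1).normal(size=(90, 5))   # 90 days of 5 features
y0 = model.forward(season, cultivar=0)
y1 = model.forward(season, cultivar=1)
assert y0.shape == (90,) and not np.allclose(y0, y1)     # shared trunk, distinct heads
```

The point of the design is that the recurrent trunk is fit on data pooled across all cultivars, so cultivars with only a few seasons of data still benefit, while the small per-cultivar heads capture cultivar-specific offsets.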
\begin{figure}
\caption{$LTE_{50}$ values for three cultivars and temperature ranges throughout a dormant season.}
\label{fig:LTE-example-seasons}
\end{figure}
\section*{Background}
The \emph{cold hardiness} of a plant characterizes its ability to resist injury during exposure to low temperatures. In this work, we will focus on grapevine cold hardiness, where injury corresponds to lethal bud freezing, which decreases crop yield. In order to quantify how grapevine cold hardiness varies throughout the dormancy period, scientists use differential thermal analysis (DTA). DTA results in a measurement of the lethal temperatures at which 10\%, 50\%, and 90\% of the bud population die/freeze, which are denoted by $LTE_{10}$, $LTE_{50}$, and $LTE_{90}$ respectively \cite{mills_cold-hardiness_2006}. Figure \ref{fig:LTE-example-seasons} shows the $LTE_{50}$ values for three cultivars and temperature ranges throughout a dormant season.
Since this measurement process requires expensive, specialized equipment and expertise, scientists have used collected data to develop grape cold-hardiness models, which aim to estimate the lethal temperatures based on only historical temperature data. The current state-of-the-art model, developed by \citeauthor{ferguson_dynamic_2011} (\citeyear{ferguson_dynamic_2011}), integrates plant biology concepts to find a relation between daily temperatures and changes in cold hardiness. Intuitively, the \emph{Ferguson model} computes the daily change in cold hardiness (e.g. as measured by $LTE_{50}$) based on the day's accumulated thermal time (being above or below certain temperature thresholds) weighted by coefficients that vary with the stage of dormancy. The model has a small number of parameters, e.g. thresholds for thermal times, which can be tuned for a particular cultivar. Tuning was done by performing a brute-force grid search over the parameter space to identify the parameter settings that resulted in the most accurate predictions. While this model has produced promising results and is in use by growers, it has limited expressiveness (only a handful of parameters) and only uses daily temperature data as input, rather than also factoring in other influential weather measurements (e.g. humidity and precipitation).
The limitations of the current scientific models raise the question of whether we can improve cold-hardiness prediction through the use of modern deep learning models. On one hand, such black-box deep models can be much more expressive and can easily incorporate additional weather data as input. On the other hand, the cold-hardiness data set sizes are relatively small from a deep-learning perspective, which may limit the potential benefits. The remainder of the paper explores this question. Below we first describe the cold-hardiness data sets used in our work followed by a description and evaluation of our deep learning approaches.
\section*{Cold Hardiness Datasets}
The cold hardiness of endo- and ecodormant primary buds from up to 30 genetically diverse cultivars/genotypes of field-grown grapevines has been measured since 1988 in the laboratory of the WSU Irrigated Agriculture Research and Extension Center (IAREC) in Prosser, WA (46.29°N latitude; -119.74°W longitude). In the vineyards of the IAREC, the WSU-Roza Research Farm, Prosser, WA (46.25°N latitude; -119.73°W longitude), and in the cultivar collection of Ste. Michelle Wine Estates, Paterson, WA (45.96°N latitude; -119.61°W longitude), cane samples containing dormant buds were collected daily, weekly, or at 2-week intervals from leaf fall in autumn to bud swell in spring. These two phenological events typically occurred in October and in April, respectively \cite{ferguson_dynamic_2011, ferguson_modeling_2014}.
All samples were analyzed with DTA to record ground truth for $LTE_{10}, LTE_{50}$, and $LTE_{90}$ measurements of cold hardiness.
Additionally, meteorological/environmental daily data from the closest on-site weather station to each vineyard (cultivar) was obtained using the API provided by AgWeatherNet \cite{AgWeatherNet}. The three stations used are Prosser.NE (46.25°N latitude; -119.74°W longitude), Roza.2 (46.25°N latitude; -119.73°W longitude), and Paterson.E (45.94°N latitude; -119.49°W longitude).
The result is a continually growing dataset for each cultivar that contains a varying number of seasons of daily weather data along with cold-hardiness LTE labels for the days that samples were collected. Following prior work we consider \emph{a season} to extend from September 7th to May 15th, which is a conservative interval that should almost always contain the full dormancy period. Our experiments involve cultivars with data sets ranging from 34 to 4 seasons.
\begin{table}[htp]
\centering
\fontsize{10}{10}\selectfont
\resizebox{1\columnwidth}{!}{
\begin{tabular}{ |l|l|r|r| }
\hline
\multicolumn{1}{|c|}{\textbf{Cultivar}} &\makecell{\textbf{LTE} \\ \textbf{Data Seasons}} &\makecell{\textbf{LTE Total} \\ \textbf{Years of Data}} &\makecell{\textbf{LTE Total} \\ \textbf{Samples}} \\\hline
Barbera &2006-2022 &14 &151\\\hline
Cabernet Franc &2005-2012 &4 &35\\\hline
Cabernet Sauvignon &1988-2022 &34 &829\\\hline
Chardonnay &1996-2022 &26 &783\\\hline
Chenin Blanc &1988-2022 &18 &193\\\hline
Concord &1988-2022 &27 &484\\\hline
Gewurztraminer &2005-2016 &9 &101\\\hline
Grenache &2006-2022 &14 &151\\\hline
Lemberger &2006-2016 &6 &60\\\hline
Malbec &2004-2022 &17 &261\\\hline
Merlot &1996-2022 &26 &897\\\hline
Mourvedre &2005-2022 &12 &133\\\hline
Nebbiolo &2006-2022 &14 &152\\\hline
Pinot Gris &2003-2022 &17 &190\\\hline
Riesling &1988-2022 &34 &636\\\hline
Sangiovese &2005-2022 &15 &165\\\hline
Sauvignon Blanc &2006-2022 &12 &140\\\hline
Semillon &2006-2022 &13 &201\\\hline
Syrah &1999-2022 &23 &486\\\hline
Viognier &1999-2022 &18 &206\\\hline
Zinfandel &2006-2022 &14 &150\\\hline
\end{tabular}
}
\caption{Summary of cultivars' LTE data collection.}
\label{tab:data-description}
\end{table}
{\bf Cultivar Dataset Details.}
Table \ref{tab:data-description} shows a summary of the number of years of data collected for selected cultivars.
The dataset for a given cultivar contains a row for each day of all data-collection seasons. Note, since cold hardiness was not measured on each day of a season, some rows do not contain LTE data. Below we highlight the key information contained in each row used by our models.
\begin{itemize}
\item DATE: The date of the weather observation.
\item AWN\_STATION: The closest AgWeatherNet station from where the environmental readings are taken.
\item LTE values (when available): $LTE_{10}$, $LTE_{50}$, $LTE_{90}$. In degrees Celsius.
\item MIN\_AT, AVG\_AT, MAX\_AT: Minimum, average, and maximum air temperature observed at 1.5 meters above the ground. In degrees Celsius.
\item MEAN\_AT: $(MIN\_AT + MAX\_AT)/2$. In degrees Celsius.\footnote{This is recorded since it is the temperature measure used by the scientific model.}
\item MIN\_RH, AVG\_RH, MAX\_RH: Minimum, average, and maximum relative humidity value observed at 1.5 meters above the ground. In percent.
\item MIN\_DEWPT, AVG\_DEWPT, MAX\_DEWPT: Minimum, average, and maximum dew point (temperature the air needs to be cooled to in order to achieve relative humidity). In degrees Celsius.
\item P\_INCHES: Observed sum of precipitation for the daily period. In inches.
\item WS\_MPH, MAX\_WS\_MPH: Average and maximum observed wind speed at 1.5 meters above the ground for the daily period. In Miles Per Hour.
\end{itemize}
\section*{Deep Cold-Hardiness Models and Training}
\begin{figure*}
\caption{Network Architectures. FC denotes fully connected layers and GRU denotes Gated Recurrent Unit. a) The RNN backbone is used to process weather data sequences $(x_t)$. b) The single-task model with a single prediction head for a single cultivar. c) Multi-Head Model which has a prediction head for each cultivar allowing backbone features to be shared. d) Task Embedding Model, which combines the weather data features with a learned task embedding for each cultivar before entering the backbone network. }
\label{fig:model-diagrams}
\end{figure*}
Given the availability of cold-hardiness data, we can formulate cold-hardiness prediction as a sequence prediction problem. We will use $i$ to index the different grape cultivars with $N_i$ denoting the number of seasons available for cultivar $i$. The sequence data for season $k$ of cultivar $i$ is denoted by $S_{i,k}$ and has the form $S_{i,k} = (x_1, y_1, x_2, y_2, \ldots, x_H, y_H)$, where $x_t$ is the weather data for day $t$, $y_t$ is the ground truth LTE data for day $t$, and $H$ is the number of days per season. Recall that $y_t$ is not measured on each day of a season (e.g. measured every two weeks) and hence for days where the LTE measurements are unavailable $y_t = N/A$. Finally, the data set for cultivar $i$ is denoted by $D_i = \{S_{i,k} \;|\; k\in \{1,\ldots,N_i\}\}$.
Given a data set $D_i$ our learning goal is to produce a model $M_i$ that can take as input a sequence of daily weather measurements $(x_1,x_2,\ldots, x_t)$ up to a particular day $t$ and produce a sequence of predicted LTE estimates $(\hat{y}_1,\hat{y}_2,\ldots,\hat{y}_t)$ for cultivar $i$. Typically, a farm manager will be most interested in the estimate $\hat{y}_t$. This estimate can then be compared to the low-temperature forecast for that day to help decide whether to prepare for frost mitigation measures. The key question of this work is to evaluate whether modern deep learning methods can provide farm managers with improved predictions compared to the current state-of-the-art cold-hardiness models.
We will refer to the problem of learning $M_i$ based on only $D_i$ as \emph{single-task learning (STL)}, which is the general framework used for the vast majority of deep learning applications. Importantly, the performance of STL is significantly influenced by the amount of available training data, which according to Table \ref{tab:data-description} varies widely across the different cultivars. Thus, we might expect STL performance for cultivars with small datasets to suffer in comparison to those with large datasets. To address this issue, we consider the \emph{multi-task learning (MTL)} framework \cite{mtlfirstpaper, mtlsurvey1, mtlsurvey2,mtlsurvey3}, which involves learning a predictive model for cultivar $i$ using a combined dataset of all cultivars $D = \{D_1, D_2, \ldots, D_C\}$, where $C$ is the number of cultivars for which we have data. Intuitively, MTL offers the potential to identify common structures among the multiple learning tasks (i.e. cultivars) in order to improve performance for individual cultivars, especially those with limited data.
Below, we first introduce the STL deep model that we developed, which will serve as our deep-learning baseline for cold-hardiness prediction. Next, we introduce two frameworks for modifying that model to support MTL. Finally, we describe certain details of the training strategy used in our experiments. To the best of our knowledge, this is the first work that has considered deep models for cold-hardiness prediction in both the STL and MTL settings.
\subsection*{Single-Task Model}
Our basic STL model makes causal LTE predictions by sequentially processing a weather data sequence $x_1,x_2,\ldots, x_t$ and at each step outputting the corresponding LTE estimate. For this purpose we use a recurrent neural network (RNN) model \cite{RNN}, which is a widely used model for sequence data. The RNN backbone used by both our STL and MTL models is illustrated in Figure \ref{fig:model-diagrams}a), which we denote by $f_{\theta}$ with parameters $\theta$. The backbone network begins with two Fully Connected (FC) layers, followed by a Gated Recurrent Unit (GRU) layer \cite{GRU}, which is followed by another FC layer. Our STL model, shown in Figure \ref{fig:model-diagrams}b), simply feeds daily weather data $x_t$ into the first FC layer as input and adds an additional FC layer to produce the final LTE prediction output. Intuitively, the GRU unit, through its recurrent connection, is able to build a latent-state representation of the sequence data that has been processed so far. For our cold-hardiness problem, this representation should capture information about the weather history which is useful for predicting LTE. In some sense, the latent state can be thought of as implicitly approximating the internal state of the plant as it evolves during dormancy. As described below, each STL model $M_i$ is trained independently on its cultivar-specific dataset $D_i$.
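As an illustration only, here is a dependency-free sketch of this FC--FC--GRU--FC pipeline, with toy dimensions and constant placeholder weights (the real model uses 12 input features and the much larger layer widths described later; none of these weight values come from the paper):

```python
import math

# Toy dims so the sketch runs standalone: 2 weather features, FC widths 3,
# GRU hidden size 2. All weights are constant placeholders for illustration.
def const(rows, cols, v):
    return [[v] * cols for _ in range(rows)]

def affine(W, b, x):
    # W x + b for a list-of-lists matrix W and list vectors b, x.
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def relu(v):
    return [max(0.0, vi) for vi in v]

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-vi)) for vi in v]

def gru_cell(x, h, p):
    # One GRU step (PyTorch convention: h' = (1 - z) * n + z * h).
    zeros = [0.0] * len(h)
    z = sigmoid([a + b for a, b in zip(affine(p["Wz"], p["bz"], x),
                                       affine(p["Uz"], zeros, h))])
    r = sigmoid([a + b for a, b in zip(affine(p["Wr"], p["br"], x),
                                       affine(p["Ur"], zeros, h))])
    rh = [ri * hi for ri, hi in zip(r, h)]
    n = [math.tanh(a + b) for a, b in zip(affine(p["Wn"], p["bn"], x),
                                          affine(p["Un"], zeros, rh))]
    return [(1.0 - zi) * ni + zi * hi for zi, ni, hi in zip(z, n, h)]

def stl_forward(xs):
    """Causal LTE estimates: FC -> FC -> GRU -> FC -> linear head, per day."""
    h = [0.0, 0.0]  # GRU hidden state, evolves over the season
    W1, b1 = const(3, 2, 0.1), [0.0] * 3
    W2, b2 = const(3, 3, 0.1), [0.0] * 3
    gp = {"Wz": const(2, 3, 0.1), "Uz": const(2, 2, 0.1), "bz": [0.0] * 2,
          "Wr": const(2, 3, 0.1), "Ur": const(2, 2, 0.1), "br": [0.0] * 2,
          "Wn": const(2, 3, 0.1), "Un": const(2, 2, 0.1), "bn": [0.0] * 2}
    W3, b3 = const(2, 2, 0.1), [0.0] * 2
    wh, bh = [0.1, 0.1], 0.0  # final prediction head
    preds = []
    for x in xs:  # one causal prediction per day
        z = relu(affine(W2, b2, relu(affine(W1, b1, x))))
        h = gru_cell(z, h, gp)
        f = relu(affine(W3, b3, h))
        preds.append(sum(wi * fi for wi, fi in zip(wh, f)) + bh)
    return preds
```

The point of the sketch is the causal structure: the hidden state $h$ is the only carrier of weather history from one day to the next.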
\subsection*{Multi-Task Models}
We consider two types of MTL models that directly extend the RNN backbone of Figure \ref{fig:model-diagrams}a), the multi-head model and the task embedding model.
{\bf Multi-Head Model.} The multi-head model is perhaps the most straightforward approach to MTL and has been quite successful in prior work when tasks are highly related \cite{mtlfirstpaper}. As illustrated in Figure \ref{fig:model-diagrams}c), the multi-head model is identical to the STL model, except that it adds $C$ parallel cultivar-specific fully-connected layers to the backbone (i.e. prediction heads). Each prediction head is responsible for producing the LTE prediction for its designated cultivar. This model allows the cultivars to share the features produced by the RNN backbone, with each cultivar-specific output simply being a linear combination of the shared features. Intuitively, if there are common underlying features that are useful across cultivars, then this architecture allows those to emerge based on the combined set of data. Thus, cultivars with small amounts of data can leverage those useful features and simply need to tune a set of linear weights based on the available data. We abbreviate this model as \textbf{MultiH} in future sections.
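To make the head-per-cultivar idea concrete, here is a minimal sketch in which one day's shared backbone features are combined with per-cultivar linear heads (all weights and feature values below are made up for illustration):

```python
# Shared backbone features feed C parallel linear heads, one per cultivar.
def predict_lte(features, heads, cultivar):
    """LTE prediction = cultivar-specific linear combination of shared features."""
    w, b = heads[cultivar]
    return sum(wi * fi for wi, fi in zip(w, features)) + b

# Hypothetical trained heads (weights, bias) for two cultivars.
heads = {
    "Riesling":           ([0.5, -1.0, 0.2], -10.0),
    "Cabernet Sauvignon": ([0.3, -0.8, 0.1], -12.0),
}
shared = [1.0, 2.0, 0.5]  # backbone output for one day, shared by all cultivars
```

Only the few numbers in each head are cultivar-specific; everything upstream of `shared` is learned from the combined data of all cultivars.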
\begin{table*}[tb]
\centering\fontsize{9}{9}\selectfont
\resizebox{1.33\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
\thead{Cultivar} & \thead{MultE} & \thead{ConcatE} & \thead{AddE} & \thead{MultiH} & \thead{Single} & \thead{Ferguson} \\ \hline
Barbera & 1.92 & 1.50 & 2.07 & 1.89 & 4.22 & 1.78 \\ \hline
Cabernet Franc & 4.84 & 2.36 & 3.49 & 2.39 & 4.00 & 1.45 \\ \hline
Cabernet Sauvignon & 2.93 & 1.75 & 1.82 & 2.27 & 3.43 & 1.83 \\ \hline
Chardonnay & 1.33 & 1.46 & 1.44 & 1.40 & 1.60 & 1.79 \\ \hline
Chenin Blanc & 1.85 & 1.51 & 1.57 & 1.45 & 2.47 & 2.27 \\ \hline
Concord & 2.33 & 2.42 & 2.32 & 1.98 & 2.61 & 2.02 \\ \hline
Gewurztraminer & 1.97 & 1.40 & 1.66 & 1.20 & 2.70 & 1.84 \\ \hline
Grenache & 3.07 & 1.86 & 2.17 & 1.79 & 2.86 & 1.92 \\ \hline
Lemberger & 3.01 & 1.65 & 2.24 & 1.49 & 3.23 & 2.21 \\ \hline
Malbec & 1.80 & 1.32 & 1.32 & 0.96 & 1.71 & 1.66 \\ \hline
Merlot & 1.74 & 1.53 & 1.39 & 1.53 & 1.66 & 1.55 \\ \hline
Mourvedre & 1.84 & 1.65 & 1.70 & 1.56 & 2.25 & 1.83 \\ \hline
Nebbiolo & 2.36 & 1.58 & 1.87 & 1.24 & 2.48 & 1.80 \\ \hline
Pinot Gris & 2.07 & 1.61 & 1.63 & 1.61 & 2.04 & 2.02 \\ \hline
Riesling & 2.80 & 1.47 & 1.77 & 1.97 & 3.63 & 1.55 \\ \hline
Sangiovese & 1.65 & 1.73 & 1.71 & 1.40 & 1.84 & 1.61 \\ \hline
Sauvignon Blanc & 1.33 & 1.43 & 1.52 & 1.22 & 1.71 & 1.42 \\ \hline
Semillon & 2.37 & 1.67 & 1.42 & 1.75 & 3.58 & 1.50 \\ \hline
Syrah & 1.22 & 1.22 & 1.28 & 1.29 & 1.57 & 1.25 \\ \hline
Viognier & 3.90 & 1.75 & 2.30 & 2.28 & 4.16 & 1.36 \\ \hline
Zinfandel & 3.10 & 1.45 & 1.56 & 1.60 & 2.64 & 1.90 \\ \hline
\end{tabular}
}
\caption{Comparison of the performance of proposed MTL methods with STL and the existing state-of-the-art method. Note that the performance is measured in terms of Root Mean Squared Error.}
\label{tab:mainresults}
\end{table*}
{\bf Task Embedding Models.} Our proposed Task Embedding models are motivated by thinking about current scientific models and how they address multiple tasks. The Ferguson model, for example, has a fixed structure, based on scientific knowledge, but a small number of parameters that can be tuned for each cultivar. Our task embedding model aims to generalize this concept by having a neural network learn both the structure of the model that accepts task-specific parameters as well as learning the parameters of each cultivar. Note that the cultivar parameters and model structure will not have a clear scientific interpretation due to the black-box nature of deep models. The trade-off for interpretability is the potential for better performance due to increased expressive power.
Specifically, our proposed Task Embedding models, as shown in Figure \ref{fig:model-diagrams}d), are similar in spirit to context-sensitive neural networks \cite{csnn,taskembedding1,taskembedding2}, where a task-specific context is provided as additional input to the neural network with only a single output being computed.
We obtain this task-specific context by encoding the task as a one-hot vector and finding a corresponding mapping using a differentiable embedding layer. We explore different ways of incorporating the obtained task embedding, via element-wise Addition, Concatenation, and element-wise Multiplication. We abbreviate these models with \textbf{AddE}, \textbf{ConcatE}, \textbf{MultE} in future sections.
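A minimal sketch of the three combination operators (illustrative only, not the exact implementation; note that AddE and MultE require the embedding dimension to match the feature dimension, while ConcatE grows it):

```python
def combine(x, e, mode):
    """Combine daily weather features x with a cultivar task embedding e."""
    if mode == "add":     # AddE: element-wise addition
        return [xi + ei for xi, ei in zip(x, e)]
    if mode == "mult":    # MultE: element-wise multiplication
        return [xi * ei for xi, ei in zip(x, e)]
    if mode == "concat":  # ConcatE: concatenation
        return x + e
    raise ValueError(mode)
```

The combined vector is what enters the shared RNN backbone, so a single set of backbone weights can behave differently per cultivar.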
\subsection*{Model and Training Details}
We construct our dataset by selecting the dormant season data for all cultivars. Missing features are filled in by linear interpolation. We discard seasons that have $<10\%$ valid LTE readings. We only include seasons where at least $90\%$ of temperature data is not missing. Missing LTE label readings are not interpolated; instead, the missing LTE labels' losses are masked during the training and evaluation process. We choose 2 seasons for each cultivar as our test set. We run three trials of training for all our experiments with different train/test splits and average the performance over the three trials.
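The preprocessing steps above can be sketched as follows. How gaps at a season's boundary are filled (held at the nearest observed value) is our assumption, not stated in the text:

```python
def interpolate(values):
    """Linearly fill internal None gaps in a daily feature series."""
    vals = list(values)
    i = 0
    while i < len(vals):
        if vals[i] is None:
            j = i
            while j < len(vals) and vals[j] is None:
                j += 1  # find the end of this gap
            if i == 0 or j == len(vals):  # leading/trailing gap: hold nearest value
                fill = vals[j] if i == 0 else vals[i - 1]
                for k in range(i, j):
                    vals[k] = fill
            else:  # interior gap: linear interpolation between the endpoints
                lo, hi = vals[i - 1], vals[j]
                for k in range(i, j):
                    vals[k] = lo + (hi - lo) * (k - i + 1) / (j - i + 1)
            i = j
        else:
            i += 1
    return vals

def keep_season(temps, ltes):
    """Apply the two filters: >=10% valid LTE labels and >=90% non-missing temps."""
    lte_ok = sum(v is not None for v in ltes) / len(ltes) >= 0.10
    temp_ok = sum(v is not None for v in temps) / len(temps) >= 0.90
    return lte_ok and temp_ok
```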
We rely on the following weather features for learning our models: Temperature, Humidity, Dew Point, Precipitation, and Wind Speed. Our models output predictions for $LTE_{10}$, $LTE_{50}$, and $LTE_{90}$, which are optimized simultaneously, helping in inductive transfer. We focus exclusively on the model's predictive power for $LTE_{50}$ in this work. We use the Mean Squared Error (MSE) as our loss function and the Root Mean Squared Error (RMSE) as our performance metric.
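The masked metric can be made concrete with a small sketch, using None to stand in for days without an LTE measurement:

```python
import math

def masked_rmse(preds, labels):
    """RMSE over labeled days only; unlabeled days (None) are masked out."""
    errs = [(p - y) ** 2 for p, y in zip(preds, labels) if y is not None]
    return math.sqrt(sum(errs) / len(errs))
```

For example, `masked_rmse([1.0, 2.0, 3.0], [1.5, None, 2.0])` averages only the two labeled days, giving $\sqrt{(0.25 + 1.0)/2} \approx 0.79$.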
We use Adam \cite{adam} as the optimizer for our training process, with a learning rate of 0.001 and a batch size of 12 randomly shuffled seasons. We train all our models for 400 epochs. The input features have a dimensionality of 12. The output dimensionalities of the linear layers of the RNN backbone are 1024, 2048, and 1024, respectively. The GRU has a hidden state and internal memory of dimensionality 2048.
\section*{Experiments}
In this section, we present our main empirical results. Our experiments involve 21 cultivars from Table \ref{tab:data-description}. In particular, we removed any cultivar that has less than 4 years of data and removed cultivars for which the Ferguson model results were unavailable for comparison.
{\bf Multi-Task Versus Single-Task Learning.} Table \ref{tab:mainresults} shows the root mean squared error (RMSE) of the multi-task models, single-task model, and Ferguson model for all 21 cultivars. The first observation is that, with the exception of Chardonnay, the single-task model never outperforms the state-of-the-art Ferguson model. For cultivars with small amounts of data, the single-task model is often dramatically worse than Ferguson, while for cultivars with larger datasets it is close to Ferguson's performance.
The second observation is that for each cultivar, with rare exceptions, the 4 multi-task models all outperform the corresponding single-task model. As expected, the improvement tends to be most pronounced for the smaller dataset cultivars. This shows that multitask learning is indeed able to identify and exploit common structures among the different cultivars, leading to improved generalization. Among the MTL methods, the MultiHead approach consistently outperforms other methods. Among the task embedding approaches, the concatenation approach performs best.
We observe, with the exception of Cabernet Franc and Viognier, that our approaches outperform the Ferguson model, the MultiHead or concat task embedding approaches being the best-performing models for most cultivars. The gap in the performance is more dramatic in cultivars with low data, such as Lemberger and Gewurztraminer.
\begin{table}[t]
\centering \fontsize{9}{9}\selectfont
\resizebox{1\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\thead{Cultivar} & \thead{2} & \thead{5} & \thead{10} & \thead{20} & \thead{All} \\ \hline
Riesling (MTL) & 2.25 & 2.16 & 2.10 & 1.71 & 1.97 \\ \hline
Riesling (STL) & 4.59 & 4.40 & 3.66 & 3.41 & 3.63 \\ \hline
Cabernet Sauvignon (MTL) & 2.02 & 2.16 & 2.27 & 2.07 & 2.27 \\ \hline
Cabernet Sauvignon (STL) & 2.68 & 3.24 & 3.68 & 2.91 & 3.43 \\ \hline
Merlot (MTL) & 1.69 & 1.54 & 1.55 & 1.41 & 1.53 \\ \hline
Merlot (STL) & 2.26 & 1.99 & 1.83 & 1.67 & 1.66 \\ \hline
\end{tabular}
}
\caption{Measuring the impact of varying the dataset size for chosen cultivars. The experiment is conducted for both STL and MTL. We choose 2, 5, 10, and 20 seasons as reasonable choices to evaluate. The performance is measured in terms of RMSE.}
\label{tab:datasetsize}
\end{table}
\begin{table}[t]
\centering \fontsize{9}{9}\selectfont
\resizebox{0.98\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\thead{Cultivar} & \thead{High} & \thead{Low} & \thead{Mix} & \thead{All} & \thead{Single} \\ \hline
Riesling & 1.95 & & 1.70 & 1.97 & 3.63 \\ \hline
Cabernet Sauvignon & 2.49 & & 2.28 & 2.27 & 3.43 \\ \hline
Chardonnay & 1.33 & & 1.41 & 1.40 & 1.60 \\ \hline
Concord & 2.00 & & 1.90 & 1.98 & 2.61 \\ \hline
Merlot & 1.46 & & 1.39 & 1.53 & 1.66 \\ \hline
Syrah & 1.13 & & & 1.29 & 1.57 \\ \hline
Chenin Blanc & 1.60 & & & 1.45 & 2.47 \\ \hline
Viognier & 2.60 & & & 2.28 & 4.16 \\ \hline
Malbec & 1.12 & & & 0.96 & 1.71 \\ \hline
Pinot Gris & 1.48 & & & 1.61 & 2.04 \\ \hline
Barbera & & 2.93 & & 1.89 & 4.22 \\ \hline
Grenache & & 2.04 & & 1.79 & 2.86 \\ \hline
Nebbiolo & & 1.85 & & 1.24 & 2.48 \\ \hline
Zinfandel & & 2.07 & & 1.60 & 2.64 \\ \hline
Semillon & & 2.47 & & 1.75 & 3.58 \\ \hline
Mourvedre & & 1.93 & 1.75 & 1.56 & 2.25 \\ \hline
Sauvignon Blanc & & 1.65 & 1.27 & 1.22 & 1.71 \\ \hline
Gewurztraminer & & 1.83 & 1.35 & 1.20 & 2.70 \\ \hline
Lemberger & & 2.24 & 1.50 & 1.49 & 3.23 \\ \hline
Cabernet Franc & & 2.91 & 2.13 & 2.39 & 4.00 \\ \hline
\end{tabular}
}
\caption{Measuring the impact of choosing a subset of available tasks for MTL and how it fares against choosing all tasks.}
\label{tab:tasksubset}
\end{table}
{\bf Impact of Task Dataset Size.} Table \ref{tab:datasetsize} shows the performance of MTL (MultiHead) and STL models for three cultivars (Riesling, Merlot, Cabernet Sauvignon) with $\sim$30 seasons of data, where we artificially restrict training to a subset of seasons. We train both the MultiHead model and STL models in this setup. As expected for STL, with the exception of Cabernet Sauvignon, introducing more seasons of data helps performance up to a point.\footnote{Note that there is a consistent decrease in performance when going from 20 seasons to ALL. The reasons for this remain to be explored; however, it is likely due to the influence of a small number of unusual seasons.}
Interestingly we see that an MTL model trained on just 2 or 5 seasons of data for a cultivar outperforms an STL model using all of that cultivar's data. This reflects the fact that MTL is indeed able to leverage the information present in other cultivars to learn a good model for that specific cultivar. In a sense, the data from other cultivars appear to be as valuable as tens of seasons of data for a specific cultivar.
{\bf Impact of the Number of Tasks.} The goal here is to understand how different subsets of tasks impact the performance of an MTL model. We select three subsets of our tasks: the 10 tasks with the most data, the 10 tasks with the least data, and 10 tasks with a mix of high and low amounts of data. We train the MultiHead architecture for this experiment.
Table \ref{tab:tasksubset} presents the cultivars in order of largest to smallest datasets. Each column corresponds to the subset of cultivars used in each experiment. Interestingly, we observe that an MTL model trained on any of our chosen subsets always outperforms single-task models for all cultivars.
For cultivars with relatively higher amounts of data, surprisingly, it is better to choose a mix of high- and low-data cultivars to get a better-performing model. Including all the cultivars for training does not lead to consistent gains for all cultivars with high amounts of data. The reasons for this observation require further analysis and experimentation.
For cultivars with lower amounts of data, again, choosing a mix of high- and low-data cultivars leads to a better-performing model. Here, including all the cultivars for training does lead to consistent gains over choosing a subset.
{\bf Impact of Training Setting.} Here, we consider a different training setting, Transfer Learning \cite{transferlearning}, where a new cultivar arrives and is incorporated into the model without access to past data.
This is in contrast to MTL where all datasets are accessible at training time. We consider finetuning as a straightforward approach to transfer learning. Finetuning, in the case of MultiHead refers to replacing the task-specific final layers with a newly initialized layer for the new task. For the task embedding approaches, finetuning refers to learning the coefficients of a linear combination of existing task embeddings.
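As an illustration of the embedding-based finetuning, here is a toy sketch that fits the mixing coefficients by plain gradient descent. The frozen embeddings and the target are made up; the real procedure fits the coefficients against the new cultivar's LTE loss through the frozen backbone:

```python
# Frozen task embeddings of already-trained cultivars (hypothetical values).
E = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
# Stand-in target for the signal provided by the new cultivar's data.
target = [0.5, 1.5]

def mix(alpha):
    """New cultivar's embedding = sum_i alpha_i * E[i]."""
    return [sum(a * e[d] for a, e in zip(alpha, E)) for d in range(len(E[0]))]

def loss(alpha):
    m = mix(alpha)
    return sum((mi - ti) ** 2 for mi, ti in zip(m, target))

# Only alpha is learned; the embeddings E stay frozen.
alpha = [0.0, 0.0, 0.0]
lr = 0.1
for _ in range(200):
    m = mix(alpha)
    grad = [sum(2 * (m[d] - target[d]) * E[i][d] for d in range(len(target)))
            for i in range(len(alpha))]
    alpha = [a - lr * g for a, g in zip(alpha, grad)]
```

The analogous MultiHead finetuning would instead reinitialize and train a fresh prediction head while keeping the backbone fixed.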
Table \ref{tab:trainingsetting} shows the RMSE metrics for the finetuning paradigm for the different proposed methods relative to their corresponding RMSE metrics for MTL from Table \ref{tab:mainresults}.
In the case of the MultiHead architecture, we observe that finetuning is on par with MTL. This seems to indicate that there are no tasks that hurt the MTL training process.
For the Task Embedding approaches, however, finetuning does worse than MTL for most cultivars with the Concatenate and Additive variants. For the Multiplicative variant, we see marginal to substantial gains in performance.
\begin{table}[t]
\centering \fontsize{9}{9}\selectfont
\resizebox{0.99\columnwidth}{!}{
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{\textbf{Cultivar}} &\makecell{\textbf{ConcatE} \\ \textbf{FT}} &\makecell{\textbf{MultE} \\ \textbf{FT}} &\makecell{\textbf{AddE} \\ \textbf{FT}} &\makecell{\textbf{MultiH} \\ \textbf{FT}}\\ \hline
Barbera & -1.02 & 0.04 & -1.69 & 0.01 \\ \hline
Cabernet Franc & -1.50 & 2.41 & -2.28 & -0.05 \\ \hline
Cabernet Sauvignon & -1.02 & 0.64 & -1.29 & -0.01 \\ \hline
Chardonnay & -1.14 & 0.04 & -3.01 & 0.11 \\ \hline
Chenin Blanc & -0.74 & 0.35 & -2.96 & -0.05 \\ \hline
Concord & -3.38 & 0.11 & -2.96 & -0.24 \\ \hline
Gewurztraminer & -1.58 & 0.51 & -2.66 & -0.25 \\ \hline
Grenache & -0.32 & 1.27 & -1.69 & -0.01 \\ \hline
Lemberger & -2.53 & 1.37 & -0.43 & -0.16 \\ \hline
Malbec & -2.78 & 0.77 & -1.66 & -0.07 \\ \hline
Merlot & -0.84 & 0.32 & -2.32 & 0.11 \\ \hline
Mourvedre & -1.28 & 0.22 & -1.58 & -0.07 \\ \hline
Nebbiolo & -2.51 & 0.72 & -1.31 & -0.41 \\ \hline
Pinot Gris & -1.14 & 0.53 & -2.44 & 0.08 \\ \hline
Riesling & -1.78 & 1.14 & -1.00 & 0.31 \\ \hline
Sangiovese & -0.96 & 0.31 & -1.73 & 0.05 \\ \hline
Sauvignon Blanc & -1.22 & 0.09 & -0.23 & -0.02 \\ \hline
Semillon & -1.46 & 0.97 & -3.08 & 0.35 \\ \hline
Syrah & -0.97 & -0.06 & -1.87 & 0.00 \\ \hline
Viognier & -1.25 & 2.13 & -1.89 & 0.52 \\ \hline
Zinfandel & -2.24 & 1.31 & -1.75 & -0.19 \\ \hline\hline
\textbf{Median} & -1.25 & 0.53 & -1.75 & -0.01 \\ \hline
\textbf{Mean} & -1.51 & 0.72 & -1.90 & 0.00 \\ \hline
\end{tabular}
}
\caption{Comparing Transfer Learning with Multi-Task Learning. Note that the performance is relative to the corresponding MTL counterparts in Table \ref{tab:mainresults}. If a term is positive, transfer learning does better than MTL in that case. We abbreviate finetuning as FT in the column names.}
\label{tab:trainingsetting}
\end{table}
\section*{Path to Deployment}
Farmers use AgWeatherNet \cite{AgWeatherNet} and WSU Viticulture and Enology \cite{viticulture_wsu} websites to monitor cold hardiness through the deployment of the Ferguson model and publication of real LTE values, respectively. Our goal is to finalize the MTL-based models proposed in this paper and deploy them onto AgWeatherNet for the 2022-2023 season for beta testing.
\section*{Conclusion}
We showed that multi-task learning is an effective approach to predicting cold hardiness for grapevines. In particular, our model consistently outperforms the state-of-the-art scientific model without relying on expert domain knowledge. This model will be deployed on an existing weather network for the 2022-2023 season. In the future, we plan to apply these ideas to cold-hardiness prediction for other crops, such as cherries and apples. In addition, we plan to investigate the utility of MTL for other agriculture-related problems with limited data.
\end{document} |
\begin{document}
\titlerunning{Complexity of inexact Proximal Point under Holderian Growth}
\title{Complexity of Inexact Proximal Point Algorithm for minimizing convex functions with Holderian Growth
\thanks{Andrei Patrascu was supported by a grant of the Romanian Ministry of Education and Research, CNCS - UEFISCDI, project number PN-III-P1-1.1-PD-2019-1123, within PNCDI III.
Paul Irofti was supported by a grant of the Romanian Ministry of Education and Research, CNCS - UEFISCDI, project number PN-III-P1-1.1-PD-2019-0825, within PNCDI III.}
}
\author{Andrei Patrascu \and
Paul Irofti }
\institute{A. Patrascu \and P. Irofti \at Research Center for Logic, Optimization and Security (LOS), \\
Department of Computer Science, Faculty of Mathematics and Computer Science, University of Bucharest, Academiei 14, Bucharest, Romania \\
\email{[email protected], [email protected]}
}
\date{Received: date / Accepted: date}
\maketitle
\begin{abstract}
\noindent Several decades ago, the Proximal Point Algorithm (PPA) began to attract long-lasting interest from both the abstract operator theory and numerical optimization communities. Even in modern applications, researchers still use proximal minimization theory to design scalable algorithms that overcome nonsmoothness. Remarkable works such as \cite{Fer:91,Ber:82constrained,Ber:89parallel,Tom:11} established tight relations between the convergence behaviour of PPA and the regularity of the objective function. In this manuscript we derive the nonasymptotic iteration complexity of exact and inexact PPA for minimizing convex functions under $\gamma-$Holderian growth: $\BigO{\log(1/\epsilon)}$ (for $\gamma \in [1,2]$) and $\BigO{1/\epsilon^{\gamma - 2}}$ (for $\gamma > 2$). In particular, we recover well-known results on PPA: finite convergence for sharp minima and linear convergence for quadratic growth, even in the presence of deterministic noise. Moreover, when a simple Proximal Subgradient Method is recurrently called as an inner routine for computing each IPPA iterate, novel computational complexity bounds are obtained for Restarting Inexact PPA. Our numerical tests show improvements over existing restarting versions of the Subgradient Method.
\keywords{Inexact proximal point, weak sharp minima, Holderian growth, finite termination.}
\end{abstract}
\section{Introduction}\label{sec:intro}
The problem of interest of our paper formulates as the following convex nonsmooth minimization:
\begin{align}\label{problem_of_interest}
F^* = \min_{x \in \mathbf{R}^n} \;\{ F(x) := f(x) + \psi(x) \}.
\end{align}
Here we assume that $f: \mathbf{R}^n \mapsto \mathbf{R}$ is convex and $\psi: \mathbf{R}^n \mapsto (-\infty,\infty]$ is convex, lower semicontinuous and proximable. By a proximable function we mean one whose proximal mapping is computable in closed form or in linear time. The above model covers many applications, of which we briefly mention compressed sensing \cite{BecTeb:09fista}, sparse risk minimization \cite{Lin:18,XiaLin:14} and graph-regularized models \cite{YanEla:16}.
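For reference, the exact PPA applied to \eqref{problem_of_interest} generates the standard iterates (stepsizes denoted here by $\mu_k > 0$; the notation is ours, and the inexact variant IPPA computes this minimizer only approximately):

```latex
x_{k+1} = \operatorname{prox}_{\mu_k F}(x_k)
        := \arg\min_{z \in \mathbf{R}^n} \left\{ F(z) + \frac{1}{2\mu_k} \|z - x_k\|^2 \right\}.
```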
Dating back to the 1960s, the classical Subgradient Methods (SGM) \cite{Sho:62,Sho:64,Pol:67SM1,Pol:69SM2,Pol:87book} established $\BigO{1/\epsilon^2}$ iteration complexity for minimizing a convex function up to finding $\hat{x}$ such that $f(\hat{x}) - f^* \le \epsilon$. Although this complexity order is unimprovable for the class of convex functions, particular growth or error-bound properties can be exploited to obtain better orders. Error bounds and regularity conditions have a long history in optimization, systems of inequalities and projection methods: \cite{Ant:94,BurFer:93,Fer:91,Pol:67SM1,Pol:69SM2,Pol:87book,BolNgu:17,Hu:16,Luo:93}.
Particularly, in the seminal works \cite{Pol:78,Pol:87book} SGM is proved to converge linearly towards weakly sharp minima of $F$. The set of optimal solutions $X^*$ is a set of weak sharp minima (WSM) if there exists $\sigma_F > 0$ such that
\begin{align*}
WSM: \qquad F(x) - F^* \ge \sigma_F \text{dist}_{X^*}(x), \quad \forall x \in \text{dom} F.
\end{align*}
Acceleration of other first-order algorithms has been proved under WSM in subsequent works such as \cite{Ant:94,BurFer:93,Dav:18,Rou:20}. Besides acceleration, \cite[Section 5.2.3]{Pol:87book} introduces the ``superstability" of sharp optimal solutions $X^*$: under small perturbations of the objective function $F$, a subset of the weak sharp minima $X^*$ remains optimal for the perturbed model. The superstability of WSM was used in \cite{Pol:78,Ned:10} to show the robustness of inexact SGM. In short, even with persistently perturbed subgradients of small magnitude at each iteration, the resulting perturbed SGM still converges linearly to $X^*$. In line with these results, we also show in our manuscript that a similar robustness holds for proximal point methods under WSM.
Other recent works such as \cite{Yan:18,Joh:20,Nec:19,Lu:20,Kor:76,Tom:11,Li:12,Hu:16,Fre:18,Luo:93,Gil:12,Ren:14,BolNgu:17,JudNes:14uniform} look at a suite of different growth regimes besides WSM and use them to improve the complexity of first-order algorithms. Particularly, in our paper we are interested in the $\gamma-$Holderian growth: for $\gamma \ge 1$,
\begin{align*}
\gamma-HG: \qquad F(x) - F^* \ge \sigma_F \text{dist}_{X^*}^{\gamma}(x), \quad \forall x \in \text{dom} F.
\end{align*}
Note that $\gamma-$HG is equivalent to the Kurdyka–Łojasiewicz (KL)
inequality for convex, closed, and proper functions, as shown in \cite{BolNgu:17}.
It includes the class of uniformly convex functions analyzed in \cite{JudNes:14uniform} and,
obviously, it covers the sharp minima WSM, for $\gamma = 1$.
The Quadratic Growth (QG) case, covered by $\gamma = 2$, was analyzed in a large suite of previous works \cite{Lu:20,Yan:18,Luo:93,Nec:19}; although it is weaker than strong convexity, it can be essentially exploited (besides Lipschitz gradient continuity) to show $\BigO{\log(1/\epsilon)}$ complexity of proximal gradient methods. Our analysis recovers similar complexity orders under the same particular assumptions.
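As a toy one-dimensional illustration of the distinction between these growth regimes (our own example): $f(x) = x^2$ satisfies quadratic growth with $\sigma_F = 1$ around $X^* = \{0\}$, but admits no sharp-minima constant, since the ratio $(f(x)-f^*)/\text{dist}_{X^*}(x)$ vanishes at the optimum:

```python
import numpy as np

xs = np.linspace(1e-3, 1.0, 1000)
# Quadratic growth (gamma = 2) holds for f(x) = x^2 with sigma_F = 1 ...
qg_holds = bool(np.all(xs**2 >= 1.0 * xs**2 - 1e-15))
# ... but any candidate sharp-minima (gamma = 1) constant must satisfy
# sigma_F <= (f(x) - f*) / |x| = |x| for all x, forcing sigma_F -> 0.
sharp_candidate = float((xs**2 / xs).min())
print(qg_holds, sharp_candidate)
```

This is why the sharp case ($\gamma = 1$) and the quadratic case ($\gamma = 2$) lead to qualitatively different rates below.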
\noindent Some recent works \cite{Yan:18,Joh:20,Gil:12,Fre:18,Ren:14} developed restarted SGM schemes for minimizing convex functions under $\gamma$-HG or WSM, and analyzed their theoretical convergence and its natural dependence on the growth moduli $\gamma$ and $\sigma_F$. The Restarted SubGradient (RSG) method of \cite{Yan:18} and the Decaying Stepsize - SubGradient (DS-SG) method of \cite{Joh:20} present iteration complexity estimates of $\BigO{\log(1/\epsilon)}$ under WSM and of $\BigO{\frac{1}{\epsilon^{2(\gamma-1)}}}$ under $\gamma-$(HG) in order to attain $\text{dist}_{X^*}(x) \le \epsilon$. These bounds are optimal for functions with bounded gradients, as observed in \cite{NemNes:85}. Most SGM schemes depend to various degrees on knowledge of problem information. For instance, RSG and DS-SG rely on lower bounds of the optimal value $F^*$ and on knowledge of the parameters $\sigma_F, \gamma$ or other Lipschitz constants. Restarting is introduced in order to avoid the exact estimation of the modulus $\sigma_F$. Our schemes also allow estimates of the problem moduli $\gamma, \sigma_F$, covering the cases when these are not known. In the best case, when the estimates are close to the true parameters, similar complexity estimates $\BigO{\frac{1}{\epsilon^{2(\gamma-1)}}}$ are provided in terms of subgradient evaluations. Moreover, by exploiting additional smooth structure we further obtain lower estimates.
The work of \cite{JudNes:14uniform} approaches the constrained model, i.e.\ $\psi$ is the indicator function of a closed convex set, and assumes $\gamma-$uniform convexity:
\begin{align*}
f(\alpha x + (1-\alpha) y) \le \alpha f(x) +& (1-\alpha)f(y) \\
& - \frac{1}{2}\sigma_f \alpha(1-\alpha)[\alpha^{\gamma - 1} + (1-\alpha)^{\gamma - 1}]\norm{x-y}^{\gamma},
\end{align*}
for all feasible $x,y$ and $\gamma \ge 2$. The authors obtain optimal complexity bounds
$\BigO{\sigma_f^{-2/\gamma} \epsilon^{-2(\gamma - 1)}}$ when the subgradients of $f$ are bounded. Moreover, their restarting technique is adaptive to the growth modulus $\gamma$ and parameter $\sigma_F$, up to a fixed number of iterations.
Inherent to all SGMs, the complexity results of these works essentially require the boundedness of the subgradients, which is often natural for nondifferentiable functions. However, plenty of convex objective functions coming from risk minimization, sparse regression or machine learning present, besides their particular growth, a certain degree of smoothness which is not compatible with the bounded subgradients assumption. Enclosing the feasible domain in a ball is an artificial remedy used to keep the subgradients bounded, which however might burden the implementation with additional uncertain tuning heuristics. Our analysis shows how to exploit smoothness in order to improve the complexity estimates.
The analysis of \cite{Rou:20} investigates the effect of restarting over the optimal first-order schemes under $\gamma$-HG and $\nu$-Holder smoothness, starting from results of \cite{NemNes:85}. For $\psi =0$, $\epsilon-$suboptimality is reached after $\BigO{\log(1/\epsilon)}$ accelerated gradient iterations if $\nabla F$ is Lipschitz continuous and $2-$Holder growth holds, or after $\BigO{ 1/\epsilon^{\frac{\gamma-2}{2}} }$ iterations when the growth modulus is larger than $2$. In general, if $\nabla F$ is $\nu-$Holder continuous, they restart the Universal Gradient Method and obtain an overall complexity of $\BigO{\log(1/\epsilon)}$ if $\gamma = \nu$, or $\BigO{ 1/\epsilon^{\frac{2(\gamma-\nu-1)}{2\nu-1}} }$ if $\gamma > \nu$. Although these estimates are unimprovable and better than ours, in general the implementation of the optimal schemes requires complete knowledge of growth and smoothness parameters.
\noindent Several decades ago the \textit{Proximal Point Algorithm (PPA)} started to gain much attraction from both the abstract operator theory and numerical optimization communities. Even in modern applications, where large-scale nonsmooth optimization arises recurrently, practitioners still draw inspiration from proximal minimization theory to design scalable algorithmic techniques that overcome nonsmoothness. The powerful PPA iteration consists mainly of the recursive evaluation of the proximal operator associated to the objective function. The proximal mapping is based on the infimal convolution with a metric function, often chosen to be the squared Euclidean norm:
$ \text{prox}_{\mu}^F(x) := \arg\min_z F(z) + \frac{1}{2\mu}\norm{z - x}^2.$
The Proximal Point recursion:
\begin{align*}
x^{k+1} = \text{prox}_{\mu}^F(x^k).
\end{align*}
became famous in the optimization community when \cite{Roc:76,Roc:76augmented} and \cite{Ber:82constrained,Ber:89parallel} revealed its connection to various multiplier methods for constrained minimization; see also \cite{Nes:21,Nes:21b,Nem:04,Gul:91,Gul:92}. Remarkable works have shown that growth regularity is a key factor in the iteration complexity of PPA.
Finite convergence of the exact PPA under WSM is proved by \cite{BurFer:93,Fer:91,Ant:94}.
Furthermore, an extensive convergence analysis of the exact PPA and the Augmented Lagrangian algorithm under $\gamma-$(HG) can be found in \cite{Ber:89parallel,Kor:76}. Although the results and analysis are of remarkable generality, they are of an asymptotic nature (see \cite{Tom:11}). A nonasymptotic analysis is found in \cite{Tom:11}, where the equivalence between a Dual Augmented Lagrangian algorithm and a variable stepsize PPA is established. The authors analyze sparse learning models of the form: $\min_{x \in \mathbf{R}^n} \; f(Ax) + \psi(x),$
where $f$ is twice differentiable with Lipschitz continuous gradient, $A$ is a linear operator and $\psi$ is a convex nonsmooth regularizer. Under $\gamma-$Holderian growth with $\gamma \in [1,2]$, they show a nonasymptotic superlinear convergence rate of the exact PPA with exponentially increasing stepsize.
For the inexact variant they retain a slightly weaker superlinear convergence rate.
The progress, from the asymptotic analysis of \cite{Roc:76,Kor:76} to a nonasymptotic one, is remarkable due to the simplicity of the arguments. However, a convergence rate for inexact PPA (IPPA) could become irrelevant without quantifying the local computational effort spent on each iteration, since one inexact iteration of PPA requires the approximate solution of a regularized optimization problem. Among the remarkable references on inexact versions of various proximal algorithms are \cite{Mon:13,Sol:00,Sol:01,Sol:01b,Sol:99,Sol:99b,Lin:15,Lin:18,Mai:19,Shulgin:21,Nes:21,Nes:21b}.
We mention that a small portion of the results on the WSM case contained in this manuscript has recently been published by the authors in \cite{PatIroLett:22}. However, we include it in the present manuscript for the sake of completeness.
\noindent \textbf{Contributions}. We list further our main contributions:
\noindent \textit{Inexact PPA under $\gamma-$(HG)}. We provide nonasymptotic iteration complexity bounds for IPPA to solve \eqref{problem_of_interest} under $\gamma-$HG, with $\gamma \ge 1$. In particular, we obtain $\BigO{\log(1/\epsilon)}$ for $\gamma \in [1,2]$ and, in the case of the best parameter choice, $\BigO{1/\epsilon^{\gamma - 2}}$ for $\gamma > 2$, to attain $\epsilon$ distance to the optimal set. All these bounds require only convexity of the objective function $F$ and are independent of any bounded gradient or smoothness assumptions. We could not find these nonasymptotic estimates in the literature for general $\gamma \ge 1$.
\noindent \textit{Restarting}. We further analyze the complexity bounds of restarted IPPA, which facilitates the derivation of better computational complexity estimates than the non-restarted IPPA. The complexity estimates have similar orders for the restarted and non-restarted algorithms for all $\gamma \ge 1$.
\noindent \textit{Total computational complexity}.
We derive total complexity, including the inner computational effort spent at each IPPA iteration, in terms of number of inner (proximal) gradient evaluations.
If $f$ has $\nu-$Holder continuous gradients, we obtain that, in the case of the best parameter choice, the following are necessary:
\begin{align*}
[\gamma = 1+\nu] \qquad & \BigO{\log(1/\epsilon)} \\
[\nu = 1] \qquad & \BigO{1/\epsilon^{\gamma - 2}}\\
[\nu = 0] \qquad & \BigO{1/\epsilon^{2(\gamma - 1)}}
\end{align*}
proximal (sub)gradient evaluations to reach $\epsilon$ distance to the optimal set. As we discuss in section \ref{sec:total_complexity}, the total complexity depends on various restarting parameters.
\noindent \textit{Experiments}. Our numerical experiments confirm the better behaviour of restarted IPPA, which uses an inner subgradient method routine, in comparison with two other restarting strategies for the classical Subgradient Method. We performed our tests on several polyhedral learning models that include Graph SVM and Matrix Completion, using synthetic and real data.
\subsection{Notations and preliminaries}\label{sec:prelim}
\noindent Now we introduce the main notations of our manuscript. For $x,y \in \mathbf{R}^n$ denote the scalar product $\langle x,y \rangle = x^T y$ and Euclidean norm by $\|x\|=\sqrt{x^T x}$. The projection operator onto set $X$ is denoted by $\pi_X$ and the distance from $x$ to the set $X$ is denoted $\text{dist}_X(x) = \min_{z \in X} \norm{x-z}$. The indicator function of $Q$ is denoted by $\iota_{Q}$.
Given a function $h$, by $h^{(k)}$ we denote the $k$-fold composition $h^{(k)}(x):= \underbrace{\left(h \circ h \circ \cdots \circ h\right) }_{k \; \text{times}} (x)$.
We use $\partial h(x)$ for the subdifferential set and $h'(x)$ for a subgradient of $h$ at $x$. In the differentiable case, when $\partial h$ is a singleton, $\nabla h$ will occasionally be used instead of $h'$. By $X^*$ we denote the optimal set associated to \eqref{problem_of_interest} and by an \textit{$\epsilon-$suboptimal point} we understand a point $x$ that satisfies $\text{dist}_{X^*}(x) \le \epsilon$.
\noindent A function $f$ is called $\sigma-$strongly convex if the following relation holds:
$$ f(x) \ge f(y) + \langle f'(y), x - y\rangle + \frac{\sigma}{2}\norm{x-y}^2 \qquad \forall x,y \in \mathbf{R}^n. $$
Let $\nu \in [0,1]$; we say that a differentiable function $f$ has $\nu-$Holder continuous gradient with constant $L>0$ if:
\begin{align*}
\norm{f'(x) - f'(y)} \le L\norm{x-y}^{\nu} \quad
\forall x,y \in \mathbf{R}^n.
\end{align*}
\noindent Notice that when $\nu = 0$, Holder continuity describes nonsmooth functions with bounded (sub)gradients, i.e. $\norm{f'(x)} \le L $ for all $x \in \text{dom}(f)$, while $\nu = 1$ reduces to $L-$Lipschitz gradient continuity.
\noindent Given a convex function $f$, we denote its Moreau envelope \cite{Roc:76,Ber:89parallel,Roc:76augmented} by $f_{\mu}$ and its proximal operator by $\text{prox}_{\mu}^f(x)$, defined by:
\begin{align*}
f_{\mu}(x) & = \min\limits_{z} \; f(z) + \frac{1}{2\mu}\norm{z - x}^2 \\
\text{prox}_{\mu}^f(x) &= \arg\min_z \; f(z) + \frac{1}{2\mu}\norm{z - x}^2. \end{align*}
We recall the nonexpansiveness property of the proximal mapping \cite{Roc:76}:
\begin{align}\label{rel:prox_nonexpansiveness}
\norm{ \text{prox}_{\mu}^f(x) - \text{prox}_{\mu}^f(y) } \le \norm{x-y} \quad \forall x,y \in \text{dom}(f).
\end{align}
Basic arguments from \cite{Roc:76,Ber:89parallel,Roc:76augmented} show that the gradient $\nabla f_{\mu}$ is Lipschitz continuous with constant $\frac{1}{\mu}$ and satisfies:
\begin{align}\label{rel:smooth_grad_to_nonsmooth_grad}
\nabla f_{\mu}(x) = \frac{1}{\mu}\left( x - \text{prox}_{\mu}^f(x)\right) \in \partial f(\text{prox}_{\mu}^f(x)).
\end{align}
In the differentiable case, obviously $\nabla f_{\mu}(x) = \nabla f(\text{prox}_{\mu}^f(x))$.
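Relation \eqref{rel:smooth_grad_to_nonsmooth_grad} can be sanity-checked numerically on the toy function $f(z) = |z|$, whose proximal operator is one-dimensional soft-thresholding (our own illustration, with arbitrarily chosen test points):

```python
import numpy as np

def prox_abs(x, mu):
    """prox of f(z) = |z| with stepsize mu: 1-D soft-thresholding."""
    return np.sign(x) * max(abs(x) - mu, 0.0)

def grad_envelope(x, mu):
    """nabla f_mu(x) computed via (x - prox_mu^f(x)) / mu."""
    return (x - prox_abs(x, mu)) / mu

mu = 0.5
ok = True
for x in (-2.0, -0.2, 0.3, 1.7):
    g, p = grad_envelope(x, mu), prox_abs(x, mu)
    # g must lie in the subdifferential of |.| at p: {sign(p)} if p != 0,
    # and the interval [-1, 1] if p == 0.
    ok &= abs(g) <= 1.0 + 1e-12
    if p != 0.0:
        ok &= abs(g - np.sign(p)) < 1e-12
print(ok)
```

This also makes the Lipschitz constant $\frac{1}{\mu}$ of $\nabla f_{\mu}$ visible: for $|x| \le \mu$ the envelope gradient is exactly $x/\mu$.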
\noindent \textit{Paper structure}.
In section \ref{sec:Holder_growth} we analyze how the growth properties of $F$ are inherited by $F_{\mu}$. The key relations on $F_{\mu}$ become the basis of the complexity analysis.
In section \ref{sec:IPPA} we define the iteration of the inexact Proximal Point algorithm and discuss its stopping criterion. The iteration complexity is presented in section \ref{sec:IPPA_complexity} for both the exact and inexact cases. Subsequently, the restarted IPPA is defined and its complexity is presented. Finally, in section \ref{sec:total_complexity} we quantify the complexity of IPPA in terms of proximal (sub)gradient iterations and compare it with other results. In the last section we compare our scheme with state-of-the-art restarting subgradient algorithms.
\section{Holderian growth and Moreau envelopes}\label{sec:Holder_growth}
\noindent As discussed in the introduction, $\gamma$-HG relates tightly to widely known regularity properties such as WSM \cite{Yan:18,Pol:78,Pol:87book,BurFer:93,Fer:91,Ant:94,Dav:18}, Quadratic Growth (QG) \cite{Yan:18,Lu:20,Nec:19}, Error Bounds \cite{Luo:93} and the Kurdyka-Lojasiewicz inequality \cite{BolNgu:17,Yan:18}.
\noindent Next we show how the Moreau envelope of a given convex function inherits its growth properties over its entire domain, except on a certain neighborhood of the optimal set. Recall that $\min_x F(x) = \min_x F_{\mu}(x)$.
\begin{lemma}\label{lemma:deterministic_moreau_growth}
Let $F$ be a convex function and let $\gamma-$(HG) hold. Then the Moreau envelope $F_{\mu}$ satisfies the relations presented below.
\noindent $(i)$ Let $\gamma = 1$ (WSM):
\begin{align*}
F_{\mu}(x) - F^* \ge H_{\sigma^2_F\mu}(\sigma_F \text{dist}_{X^*}(x)),
\end{align*}
where $H_{\tau}(s) = \begin{cases}s - \frac{\tau}{2}, & s > \tau \\
\frac{1}{2\tau}s^2, & s \le \tau
\end{cases}$ is the Huber function.
\noindent $(ii)$ Let $\gamma = 2$:
\begin{align*}
F_{\mu}(x) - F^* \ge \frac{\sigma_F}{1 + 2\sigma_F \mu}\text{dist}_{X^*}^2(x).
\end{align*}
\noindent $(iii)$ For all $\gamma \ge 1$:
\begin{align*}
F_{\mu}(x) - F^* \ge\varphi(\gamma) \min \left\{ \sigma_F\text{dist}^{\gamma}_{X^*}(x), \frac{1}{2\mu}\text{dist}^{2}_{X^*}(x) \right\},
\end{align*}
where $\varphi(\gamma) = \min_{\lambda \in [0,1]} \lambda^{\gamma} + (1-\lambda)^2$.
\end{lemma}
\begin{proof}
By using $\gamma$-HG, we get:
\begin{align}
F_{\mu}(x) - F^*
& = \min_z F(z) - F^* + \frac{1}{2\mu}\norm{z-x}^2 \nonumber \\
& \ge \min_z \sigma_F\text{dist}_{X^*}^{\gamma}(z) + \frac{1}{2\mu}\norm{z-x}^2 \nonumber \\
& = \min_{z,y \in X^*} \sigma_F\norm{z-y}^{\gamma} + \frac{1}{2\mu}\norm{z-x}^2. \label{eq:eq:prelim_Moreau_growth}
\end{align}
The minimizer of \eqref{eq:eq:prelim_Moreau_growth} in $z$, denoted $z(x)$, satisfies the following optimality condition: $\sigma_F\gamma \frac{z(x) - y}{\norm{z(x) - y}^{2 - \gamma}} + \frac{1}{\mu}\left(z(x) - x \right) = 0$, which implies that
\begin{align}\label{eq:solution_Holder_subproblem}
z(x) = \frac{\norm{z(x)-y}^{2-\gamma}}{\norm{z(x)-y}^{2-\gamma} + \sigma_F \mu \gamma} x + \frac{\sigma_F \mu \gamma}{\norm{z(x)-y}^{2-\gamma} + \sigma_F \mu \gamma} y.
\end{align}
$(i)$ For a function with sharp minima ($\gamma = 1$) it is easy to see that $(z(x) - y) \left[1 + \frac{\sigma_F \mu}{\norm{z(x) - y}} \right] = x - y$. Taking norms on both sides gives $\norm{z(x)- y} = \max\{0, \norm{x-y} - \sigma_F \mu\}$, and \eqref{eq:solution_Holder_subproblem} becomes:
$$ z(x) = \begin{cases} y + \left( 1 - \frac{\sigma_F\mu}{\norm{x-y}} \right) (x-y), & \norm{x-y} > \sigma_F \mu\\
y, & \norm{x-y} \le \sigma_F \mu \end{cases}$$
By replacing this form of $z(x)$ into \eqref{eq:eq:prelim_Moreau_growth} we obtain our first result:
\begin{align*}
F_{\mu}(x) - F^*
& \ge \min_{y \in X^*}
\begin{cases}\sigma_F \norm{x-y} - \frac{\mu\sigma_F^2}{2}, & \norm{x-y} > \sigma_F\mu \\
\frac{1}{2\mu}\norm{y-x}^2, & \norm{x-y} \le \sigma_F\mu
\end{cases} \\
& \ge
\begin{equation}gin{cases}\sigma_F \text{dist}_{X^*}(x) - \frac{\mu\sigma^2_F}{2}, & \text{dist}_{X^*}(x) > \sigma_F \mu \\
\frac{1}{2\mu}\text{dist}_{X^*}(x)^2, & \text{dist}_{X^*}(x) \le \sigma_F \mu
\end{cases}.
\end{align*}
\noindent $(ii)$ For quadratic growth, \eqref{eq:solution_Holder_subproblem} reduces to $z(x) = \frac{1}{1 + 2\sigma_F \mu} x + \frac{2\sigma_F \mu }{1 + 2\sigma_F \mu} y$, which by \eqref{eq:eq:prelim_Moreau_growth} leads to:
\begin{align*}
F_{\mu}(x) - F^*
& \ge \min_{y \in X^*} \frac{\sigma_F}{1 + 2\sigma_F \mu}\norm{y-x}^2 = \frac{\sigma_F}{1 + 2\sigma_F \mu}\text{dist}_{X^*}^2(x).
\end{align*}
\noindent $(iii)$ Lastly, from \eqref{eq:solution_Holder_subproblem} we see that $z(x)$ lies on the segment $[x,y]$, i.e. $z(x) = \lambda x + (1-\lambda)y$ for some $\lambda \in [0,1]$. Using this in \eqref{eq:eq:prelim_Moreau_growth} we equivalently have:
\begin{align*}
F_{\mu}(x) - F^* & \ge \min_{y \in X^*, \lambda \in [0,1], z = \lambda x + (1-\lambda)y} \sigma_F\norm{z-y}^{\gamma} + \frac{1}{2\mu}\norm{z-x}^2 \\
& = \min_{y \in X^*, \lambda \in [0,1]} \sigma_F\lambda^{\gamma}\norm{x-y}^{\gamma} + \frac{(1-\lambda)^2}{2\mu}\norm{x-y}^2 \\
&= \min_{\lambda \in [0,1]} \sigma_F\lambda^{\gamma}\text{dist}^{\gamma}_{X^*}(x) + \frac{(1-\lambda)^2}{2\mu}\text{dist}^{2}_{X^*}(x) \\
& \ge \min \left\{ \sigma_F\text{dist}^{\gamma}_{X^*}(x), \frac{1}{2\mu}\text{dist}^{2}_{X^*}(x) \right\} \min_{\lambda \in [0,1]} \lambda^{\gamma} + (1-\lambda)^2.
\end{align*}
\end{proof}
\noindent It is interesting to remark that the Moreau envelope $F_{\mu}$ inherits a growth landscape similar to that of $F$ outside a given neighborhood of the optimal set. For instance, under WSM, outside the tube $\mathcal{N}(\sigma_F\mu) = \{x \in \mathbf{R}^n: \; \text{dist}_{X^*}(x) \le \sigma_F\mu \}$, the Moreau envelope $F_{\mu}$ grows sharply.
Inside $\mathcal{N}(\sigma_F \mu)$ it grows quadratically, which, unlike for the objective function $F$, allows the gradient to become small near the optimal set. This separation of growth regimes suggests that first-order algorithms minimizing $F_{\mu}$ would reach the region $\mathcal{N}(\sigma_F\mu)$ very fast, allowing large steps in a first phase, and subsequently slow down in the vicinity of the optimal set.
This discussion extends to general growths with $\gamma > 1$, where a similar separation of behaviours holds for appropriate neighborhoods. Note that when $F$ has quadratic growth with constant $\sigma_F$, the envelope $F_{\mu}$ also satisfies a quadratic growth, with the smaller modulus $\frac{\sigma_F}{1 + 2\sigma_F \mu}$.
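In the special one-dimensional case $F(z) = \sigma_F|z|$ (so $X^* = \{0\}$), the bound of Lemma \ref{lemma:deterministic_moreau_growth}$(i)$ is in fact attained with equality; the following toy check (our own, computing the envelope by grid minimization) verifies this numerically:

```python
import numpy as np

def huber(s, tau):
    """The Huber function H_tau from Lemma (i)."""
    return s - tau / 2.0 if s > tau else s**2 / (2.0 * tau)

def envelope_sharp_1d(x, sigma, mu):
    """Moreau envelope of F(z) = sigma*|z| (X* = {0}), by grid minimization;
    the grid minimum slightly overestimates the true envelope value."""
    zs = np.linspace(-5.0, 5.0, 200001)
    return float(np.min(sigma * np.abs(zs) + (zs - x) ** 2 / (2.0 * mu)))

sigma, mu = 1.5, 0.4
# gap = F_mu(x) - F^* - H_{sigma^2 mu}(sigma * dist(x)); nonnegative and tiny
gap = min(envelope_sharp_1d(x, sigma, mu) - huber(sigma * abs(x), sigma**2 * mu)
          for x in (0.1, 0.5, 2.0))
print(gap)
```

Points with $|x| \le \sigma\mu$ exercise the quadratic branch of the Huber bound, and points outside the tube exercise the sharp branch.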
\begin{remark}
It will be useful in the subsequent sections to recall the connection between Holderian growth and Holderian error bound under convexity.
\noindent Observe that by a simple use of convexity in $\gamma$-HG, we obtain for all $x \in \text{dom} F$:
\begin{align}\label{Holder_error_bound_steps}
\sigma_F \text{dist}_{X^*}^{\gamma}(x) \le F(x) - F^* \le \langle F'(x), x - \pi_{X^*}(x) \rangle \le \norm{F'(x)}\text{dist}_{X^*}(x),
\end{align}
which immediately turns into the following error bound:
\begin{align}\label{Holder_error_bound}
\sigma_F \text{dist}_{X^*}^{\gamma-1}(x) \le \norm{F'(x)} \qquad \forall x \in \text{dom} F.
\end{align}
Under WSM, by replacing $x$ with $\text{prox}_{\mu}^F(x)$ in \eqref{Holder_error_bound} and using property \eqref{rel:smooth_grad_to_nonsmooth_grad}, which states that $ \nabla F_{\mu}(x) \in \partial F(\text{prox}_{\mu}^F(x))$, we also obtain a similar bound on $\norm{\nabla F_{\mu}(\cdot)}$ at non-optimal points:
\begin{align}\label{gamma1_Env_Holder_error_bound}
\sigma_F \le \norm{\nabla F_{\mu}(x)} \quad \forall x \notin X^*.
\end{align}
This is the traditional key relation behind the finite convergence of PPA. Under $\gamma-$HG, starting from Lemma \ref{lemma:deterministic_moreau_growth}$(iii)$, using convexity of $F_{\mu}$ and following inequalities similar to \eqref{Holder_error_bound_steps}, another error bound is obtained:
\begin{align}\label{gamma_all_Env_Holder_error_bound}
\text{dist}_{X^*}(x) \le \Max{ \left[ \frac{1}{\sigma_F \varphi(\gamma)}\norm{\nabla F_{\mu}(x)} \right]^{\frac{1}{\gamma - 1}} , \frac{2\mu}{\varphi(\gamma)}\norm{\nabla F_{\mu}(x)} } \quad \forall x.
\end{align}
\end{remark}
\noindent In the following section we start the analysis of exact and inexact PPA. In line with classical results on subgradient algorithms going back to \cite{Pol:78}, we illustrate the robustness induced by the weak sharp minima regularity.
\section{Inexact Proximal Point algorithm}\label{sec:IPPA}
\noindent The basic exact PPA iteration is briefly described as
\begin{align*}
x^{k+1} = \text{prox}_{\mu}^F(x^k).
\end{align*}
Recall that by \eqref{rel:smooth_grad_to_nonsmooth_grad} one can express $\nabla F_{\mu}(x^k) = \frac{1}{\mu}(x^k - \text{prox}_{\mu}^F(x^k))$, which makes PPA equivalent to the constant stepsize Gradient Method (GM) iteration:
\begin{align}\label{Gradient_Method_PPA}
x^{k+1} = x^k - \mu \nabla F_{\mu}(x^k).
\end{align}
Since our reasoning below borrows simple arguments from the classical GM analysis, we will henceforth use \eqref{Gradient_Method_PPA} to express PPA.
It is realistic not to rely on the exact $\text{prox}_{\mu}^F(x^k)$, but on an approximation of it to a fixed accuracy. Using such an approximation, one can immediately form an approximate gradient $\nabla_{\delta} F_{\mu}(x^k)$ and interpret IPPA as an inexact Gradient Method. Let $x \in \text{dom} F$; then a basic $\delta-$approximation of $\nabla F_{\mu}(x)$ is
\begin{align}\label{inexactness_criterion}
\nabla_{\delta} F_{\mu}(x):=\frac{1}{\mu}\left(x - \tilde{z} \right), \quad \text{where} \quad \norm{\tilde{z} - \text{prox}_{\mu}^F(x)} \le \delta.
\end{align}
\noindent Other works such as \cite{Roc:76,Sal:12,Sol:00,Sol:01} promote similar approximation measures for inexact first-order methods.
\noindent Now we present the basic IPPA scheme with constant stepsize.
\begin{algorithm}
\caption{Inexact Proximal Point Algorithm($x^0,\mu,\{\delta_k\}_{k \ge 0}, \epsilon$)}
Initialize $\; k: = 0$\\
\While{ $\norm{\nabla_{\delta_k} F_{\mu}(x^k)} > \epsilon$ or $\delta_k > \frac{\epsilon}{\mu}$}{
$\text{Given $x^k$ compute} \;\; x^{k+1} \; \text{such that}: \; \norm{x^{k+1} - \text{prox}_{\mu}^F(x^k)} \le \delta_k$ \\
$k := k+1 $ \\
}
Return $x^k$
\end{algorithm}
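The scheme above can be mocked up in a few lines on the sharp-minima instance $F(x) = \sigma_F\norm{x}_1$, replacing the inexact prox oracle by exact soft-thresholding plus an artificial perturbation of norm $\delta_k = \delta$ (a toy Python sketch with our own parameter choices, anticipating the persistent-noise regime analyzed below):

```python
import numpy as np

rng = np.random.default_rng(0)

def inexact_prox_l1(x, sigma, mu, delta):
    # delta-approximate prox of F(z) = sigma*||z||_1: the exact prox is
    # coordinatewise soft-thresholding; we add an artificial error of norm delta.
    exact = np.sign(x) * np.maximum(np.abs(x) - sigma * mu, 0.0)
    noise = rng.normal(size=x.shape)
    return exact + delta * noise / np.linalg.norm(noise)

sigma, mu, delta = 1.0, 0.5, 0.1      # persistent noise level delta < mu*sigma
x = np.array([3.0, -2.0, 1.0])
for _ in range(20):                   # IPPA with constant stepsize mu
    x = inexact_prox_l1(x, sigma, mu, delta)
print(np.linalg.norm(x))              # settles at the noise floor delta
```

Each iteration reduces the distance to $X^* = \{0\}$ by roughly $\mu\sigma_F - \delta$ until the iterates stagnate at distance $\delta$, which is exactly the behaviour quantified in the next section.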
\noindent There already exists a variety of relative and absolute stopping rules in the literature for the class of first-order methods \cite{Tom:11,Sal:12,Pol:87book,Hum:05}.
However, since bounding the norm of the gradient is commonly regarded as one of the simplest optimality measures, we use the following:
\begin{align}\label{stopping_criterion}
\norm{\nabla_{\delta_k} F_{\mu}(x^k)} < \epsilon \quad \text{and} \quad \delta_k \le \frac{\epsilon}{\mu},
\end{align}
which is computable by the nature of the iteration. The next result relates \eqref{stopping_criterion} to the distance to the optimal set.
\begin{lemma}\label{lemma:morenvgrad_bound_relating_with_distopt}
Let $\mu,\delta > 0$ and assume $x^k \notin X^*$. Then:
\noindent $(i)$ Let $\gamma = 1$, if $\norm{\nabla_{\delta} F_{\mu}(x^k)} + \frac{\delta}{\mu}< \sigma_F$, then
\begin{align}\label{direct_distance_bound}
\text{dist}_{X^*}(x^k) \le
\mu \norm{\nabla_{\delta} F_{\mu}(x^k)} + \delta \quad \text{and} \quad \text{prox}_{\mu}^F(x^k) = \pi_{X^*}(x^k).
\end{align}
\noindent $(ii)$ Let $\gamma > 1$, then
\begin{align*}
\text{dist}_{X^*}(x^k) \le
\Max{\left[ \frac{\mu \norm{\nabla_{\delta} F_{\mu}(x^k)} + \delta}{\mu\sigma_F \varphi(\gamma)} \right]^{\frac{1}{\gamma - 1}}, \frac{2\mu \norm{\nabla_{\delta} F_{\mu}(x^k)} + \delta}{\varphi(\gamma)}}.
\end{align*}
\end{lemma}
\begin{proof}
Assume $\norm{\nabla_{\delta} F_{\mu}(x)} \le \epsilon$. Observe that on one hand, by the triangle inequality that: $\norm{\nabla F_{\mu}(x)} \le \norm{\nabla_{\delta} F_{\mu}(x)} + \frac{\delta}{\mu} \le \epsilon + \frac{\delta}{\mu} =: \tilde{\epsilon}$. On the other hand,
one can easily derive:
\begin{align}
\norm{\nabla F_{\mu}(x)}
\le \sqrt{\frac{2}{\mu}[F_{\mu}(x)-F^*]} & \le \sqrt{\frac{2}{\mu}[F(\pi_{X^*}(x)) + \frac{1}{2\mu}\norm{x-\pi_{X^*}(x)}^2-F^*]} \nonumber\\
& =
\frac{1}{\mu}\text{dist}_{X^*}(x). \label{upperbound_morenvgrad}
\end{align}
\noindent Now let $\gamma = 1$. Based on $\nabla F_{\mu}(x) \in \partial F(\text{prox}_{\mu}^F(x))$, for nonoptimal $\text{prox}_{\mu}^F(x)$ the bound \eqref{gamma1_Env_Holder_error_bound} guarantees $ \norm{\nabla F_{\mu}(x)} = \norm{F'(\text{prox}_{\mu}^F(x))} \ge \sigma_F $. Therefore, if $x$ yields $\tilde{\epsilon} < \sigma_F$, and implicitly $\norm{\nabla F_{\mu}(x)} < \sigma_F$, the contradiction with the previous lower bound imposes that $\text{prox}_{\mu}^F(x) \in X^*$. Moreover, in this case, since $ \norm{x - \text{prox}_{\mu}^F(x)} = \mu \norm{\nabla F_{\mu}(x)} \overset{\eqref{upperbound_morenvgrad}}{\le} \text{dist}_{X^*}(x)$, obviously $\text{prox}_{\mu}^F(x) = \pi_{X^*}(x)$. In summary, a sufficiently small approximate gradient norm $\norm{\nabla_{\delta} F_{\mu}(x)} < \sigma_F - \frac{\delta}{\mu}$ also certifies a small distance to the optimal set, $\text{dist}_{X^*}(x) \le \mu\tilde{\epsilon}$.
\noindent Let $\gamma > 1$. Then our assumption $\norm{\nabla F_{\mu}(x)} \le \tilde{\epsilon}$, together with \eqref{upperbound_morenvgrad} and \eqref{Holder_error_bound}, implies the error bound:
\begin{align*}
\text{dist}_{X^*}(x) \le \Max{\left[ \frac{\tilde{\epsilon}}{\sigma_F \varphi(\gamma)} \right]^{\frac{1}{\gamma - 1}}, \frac{2\mu \tilde{\epsilon}}{\varphi(\gamma)}},
\end{align*}
which confirms the last result.
\end{proof}
\noindent If $X^*$ is a set of weak sharp minima, the above lemma states that, for sufficiently small $\delta$ and $\norm{\nabla_{\delta } F_{\mu}(x^k)}$, one can directly guarantee that $x^k$ is close to $X^*$. From another viewpoint, Lemma \ref{lemma:morenvgrad_bound_relating_with_distopt} also suggests, in the WSM case, that a sufficiently large $\mu > \frac{\text{dist}_{X^*}(x^0)}{\sigma_F}$ provides $\text{prox}_{\mu}^F(x^0) = \pi_{X^*}(x^0)$, so a $\delta-$optimal solution is obtained as the output of the first IPPA iteration. A similar result stated in \cite{Fer:91} guarantees the existence of a sufficiently large smoothing value $\mu$ which makes PPA converge in a single (exact) iteration.
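This one-step phenomenon is easy to observe on a toy sharp instance (our own illustration): $F(z) = |z|$ has $\sigma_F = 1$ and $X^* = \{0\}$, so any stepsize $\mu \ge \text{dist}_{X^*}(x^0)$ makes the first exact prox step land on the optimal set:

```python
import numpy as np

def prox_abs(x, mu):
    # prox of F(z) = |z| with stepsize mu (1-D soft-thresholding)
    return np.sign(x) * max(abs(x) - mu, 0.0)

# F(z) = |z| is sharp with sigma_F = 1 and X* = {0}.  Choosing
# mu >= dist_{X*}(x0) / sigma_F makes the very first exact PPA step optimal.
x0 = 3.7
mu = 4.0            # mu > |x0| = dist_{X*}(x0)
x1 = prox_abs(x0, mu)
print(x1)           # prox_mu^F(x0) = pi_{X*}(x0) = 0
```

Of course, such a large $\mu$ makes the prox subproblem essentially as hard as the original problem, which is why the total-complexity analysis must balance $\mu$ against the inner computational effort.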
\section{Iteration complexity of IPPA}\label{sec:IPPA_complexity}
For reasons that will become clear later, note that some of the results below are given for constant inexactness noise $\delta_k = \delta$. The Holderian growth leads naturally to recurrences on the residual distance to the optimal set, which allow us to give direct complexity bounds.
\begin{theorem}\label{th:IPPconvergence}
Let $F$ be convex and $\gamma-$HG hold.
\noindent $(i)$ Under sharp minima ($\gamma = 1$), let $\{\delta_k\}_{k \ge 0}$ be nonincreasing and assume $\text{dist}_{X^*}(x^{0}) \ge \mu \sigma_F$; then
\begin{align*}
\text{dist}_{X^*}(x^{k})
\le \max\left\{\text{dist}_{X^*}(x^{0}) - \sum\limits_{i=0}^{k-1}(\mu\sigma_F - \delta_i), \delta_{k-1} \right\}.
\end{align*}
\noindent $(ii)$ Under quadratic growth ($\gamma = 2$), let $\sum\limits_{i \ge 0} \delta_i < \Gamma < \infty$; then:
\begin{align*}
\text{dist}_{X^*}(x^{k})
\le \left[\frac{1}{1 + 2\mu \sigma_F}\right]^{\frac{k-4}{4}} ( \text{dist}_{X^*}(x^{0}) + \Gamma ) + \left(1 + \frac{1}{\sqrt{1+2\mu\sigma_F}-1}\right) \delta_{\lceil \frac{k}{2} \rceil + 1} .
\end{align*}
\noindent $(iii)$ Let $\gamma-$HG hold.
Define
\begin{align*}
h(r) =
\begin{cases}
\Max{r - \frac{\mu\varphi(\gamma)\sigma_F}{2} r^{\gamma-1}, \frac{1+\sqrt{1-\varphi(\gamma)}}{2}r }, & \text{if} \; \gamma \in (1,2) \\
\Max{r - \frac{\mu\varphi(\gamma)\sigma_F}{2} r^{\gamma-1}, \frac{1+\sqrt{1-\varphi(\gamma)}}{2}r ,\left( 1- \frac{1}{\gamma - 1}\right)r}, & \text{if} \; \gamma > 2.
\end{cases}
\end{align*}
Then the following convergence rate holds:
\begin{align*}
\text{dist}_{X^*}(x^k)
&\le
\max\left\{h^{(k)}(\text{dist}_{X^*}(x^0)), \bar{\delta}_k \right\}
\end{align*}
where
$\bar{\delta}_k = \Max{ h(\bar{\delta}_{k-1}), \left(\frac{2\delta_k}{\mu\varphi(\gamma)\sigma_F}\right)^{\frac{1}{\gamma -1}},\frac{2\delta_k}{1-\sqrt{1-\varphi(\gamma)}} }.$
\end{theorem}
\begin{proof}
The proof can be found in the appendix.
\end{proof}
\noindent Note that, in general, all the convergence rates of Theorem \ref{th:IPPconvergence} depend on two terms: the first illustrates the reduction of the initial distance to the optimum, while the second reflects the accuracy of the approximate gradient. Therefore, after a finite number of IPPA iterations (for $\gamma = 1$), or at most $\mathcal{O}(\log(1/\epsilon))$ of them (for $\gamma > 1$), the evolution of the inner accuracy $\delta_k$ becomes the main bottleneck of the convergence process.
The above theorem provides an abstract insight into how fast the accuracy $\{\delta_k\}_{k \ge 0}$ should decay in order for $\{x^k\}_{k \ge 0}$ to attain the best rate towards the optimal set in terms of $\text{dist}_{X^*}(x^k)$. In short, $(i)$ shows that for WSM a recurrent constant decrease of $\text{dist}_{X^*}(x^k)$ is established only if $\delta_k < \mu\sigma_F$, while the noise $\delta_k$ need not vanish. This aspect is discussed in more detail below. The last parts $(ii)$ and $(iii)$ suggest that a linear decay of $\delta_k$ (for $\gamma = 2$) and, respectively, $\delta_{k+1} = h(\delta_k)$ for general $\gamma > 1$, ensure the fastest convergence of the residual.
\noindent Several previous works, such as \cite{Ned:10,Pol:78,Pol:87book}, analyzed perturbed and incremental SGM algorithms, under WSM, that use noisy estimates $G_k$ of the subgradient $F'(x^k)$. A surprising common conclusion of these works is that under a sufficiently low persistent noise:
$0< \norm{F'(x^k) - G_k} < \sigma_F,$ for all $k \ge 0$,
SGM still converges linearly to the optimal set. Although IPPA is based on similar approximate first-order information, notice that the smoothed objective $F_{\mu}$ does not satisfy the pure WSM property. However, our next result states that, in a similar context of small but persistent noise of magnitude at most $\frac{\delta}{\mu}$, IPPA attains $\delta$-accuracy after a finite number of iterations.
\begin{corrolary}\label{cor:complexity_sharp_minima}
Let $\delta_k = \delta < \mu\sigma_F$ and $\gamma = 1$. Then after
\begin{align*}
K = \left\lceil \frac{\text{dist}_{X^*}(x^0)}{\mu\sigma_F - \delta}\right\rceil
\end{align*}
IPPA iterations, $x^K$ satisfies $\text{prox}_{\mu}^F(x^K) = \pi_{X^*}(x^K)$ and $\text{dist}_{X^*}(x^K) \le \delta$.
\end{corrolary}
\begin{proof}
From Theorem \ref{th:IPPconvergence}$(i)$ we obtain directly:
\begin{align*}
\text{dist}_{X^*}(x^{k})
\le \max\left\{\text{dist}_{X^*}(x^{0}) - k(\mu\sigma_F - \delta), \delta \right\},
\end{align*}
which means that after at most $K$ iterations $x^k$ reaches $\text{dist}_{X^*}(x^{K}) \le \delta < \mu\sigma_F$. Lastly, the same reasoning as in Lemma \ref{lemma:morenvgrad_bound_relating_with_distopt}, based on relations \eqref{upperbound_morenvgrad} and \eqref{gamma1_Env_Holder_error_bound}, leads to $\text{prox}_{\mu}^F(x^K) = \pi_{X^*}(x^K)$.
\end{proof}
\noindent To conclude, if the noise magnitude $\frac{\delta}{\mu}$ is below the threshold $\sigma_F$, or equivalently $0< \delta^{\nabla} :=\norm{\nabla F_{\mu}(x^k) - \nabla_{\delta} F_{\mu}(x^k)} < \sigma_F$,
then after a finite number of iterations IPPA reaches distance $\delta$ to $X^*$.
We see that under sufficiently low persistent noise, IPPA still guarantees convergence to the optimal set, assuming the existence of an inner routine that computes each iteration. In other words, ``noisy'' Proximal Point algorithms share stability properties similar to those of the noisy Subgradient schemes from \cite{Ned:10,Pol:78,Pol:87book}.
This discussion can be extended to general decreasing $\{\delta_k\}_{k \ge 0}$.
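\noindent To make the finite-convergence phenomenon of Corollary \ref{cor:complexity_sharp_minima} concrete, the following sketch (hypothetical Python, for illustration only; the problem data are of our choosing) runs IPPA on the sharp function $F(x) = |x|$, for which $\sigma_F = 1$ and the exact proximal map is soft-thresholding. Inexactness is modeled as an adversarial perturbation of magnitude $\delta$ added to each exact proximal step, so the noise is persistent and never vanishes.

```python
import math

def soft_threshold(x, mu):
    # exact proximal map of F(x) = |x|: prox_mu(x) = sign(x) * max(|x| - mu, 0)
    return math.copysign(max(abs(x) - mu, 0.0), x)

def inexact_ppa(x0, mu, delta, num_iters):
    # IPPA with persistent inexactness: each step is the exact prox plus an
    # adversarial error of magnitude delta pushing the iterate away from 0
    x = x0
    for _ in range(num_iters):
        y = soft_threshold(x, mu)
        x = y + delta * (math.copysign(1.0, x) if y != 0 else 1.0)
    return x

sigma_F = 1.0                                 # sharp-minimum modulus of |x|
x0, mu, delta = 10.0, 1.0, 0.1                # persistent noise delta < mu * sigma_F
K = math.ceil(x0 / (mu * sigma_F - delta))    # iteration bound from the corollary
xK = inexact_ppa(x0, mu, delta, K)
print(K, abs(xK))                             # distance to X* stays within ~delta
```

Despite the noise never decaying, the iterate enters (and stays in) a $\delta$-neighborhood of $X^*$ after the predicted finite number of steps.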
\noindent We show next that Theorem \ref{th:IPPconvergence} covers well-known results on exact PPA.
\begin{corrolary}\label{cor:complexity_for_exact_case}
Let $\{x^k\}_{k \ge 0}$ be the sequence of exact PPA, i.e. $\delta_k = 0$. Denoting $r_0 = \text{dist}_{X^*}(x^0)$, an $\epsilon$-suboptimal iterate is attained, i.e. $\text{dist}_{X^*}(x^k) \le \epsilon$, after $\mathcal{K}_e(\gamma,\epsilon)$ iterations, where:
\begin{align*}
& \text{WSM}: \qquad \qquad \mathcal{K}_e(1,\epsilon) = \left\lceil \frac{r_0 - \epsilon}{\mu\sigma_F}\right\rceil \\
& \text{QG}: \qquad \mathcal{K}_e(2,\epsilon) = \mathcal{O}\left( \frac{1}{\mu\sigma_F} \log\left(\frac{r_0}{\epsilon}\right) \right)
\end{align*}
For $\gamma \in (1,2)$, let $\mathcal{T} = \frac{r_0 }{(\mu \varphi(\gamma)\sigma_F )^{\frac{1}{2-\gamma}}} $. Then
\begin{align*}
\mathcal{K}_e(\gamma,\epsilon) =
\begin{cases}
\mathcal{O}\left( \Min{ \mathcal{T}^{2-\gamma} \log\left( \frac{r_0 }{ \epsilon } \right) , \mathcal{T} }\right), &\text{if} \; \epsilon \ge (\mu\varphi(\gamma)\sigma_F )^{\frac{1}{2-\gamma} } \\
\mathcal{O}\left( \log\left( \Min{r_0, (2\mu\sigma_F)^{\frac{1}{2-\gamma}}} / \epsilon \right) \right)
&\text{if} \; \epsilon < (\mu\varphi(\gamma)\sigma_F )^{\frac{1}{2-\gamma} }\\
\end{cases}
\end{align*}
\noindent For $\gamma > 2$
\begin{align*}
\mathcal{K}_e(\gamma,\epsilon) = \BigO{ \frac{1}{\epsilon^{\gamma - 2}} }.
\end{align*}
\end{corrolary}
\begin{proof}
The first two estimates are derived immediately from Theorem \ref{th:IPPconvergence} $(i)$ and $(ii)$. For $\gamma \in (1,2)$, we substituted $\alpha = \frac{\mu\varphi(\gamma)\sigma_F}{2}$ and $\beta = \frac{1 + \sqrt{1-\varphi(\gamma)}}{2}$ into Corollary \ref{corr:exact_complexity} and obtained an estimate for our exact case. To refine the complexity order, we majorized some constants using $(2\beta \mu\sigma_F)^{\frac{1}{2-\gamma}} \le (2 \mu\sigma_F)^{\frac{1}{2-\gamma}}$ and $1 - \sqrt{1-\varphi(\gamma)} < 1$. For $\gamma > 2$, we substituted the same $\alpha$ as before and $\hat{\beta} = \Max{ \frac{1 + \sqrt{1-\varphi(\gamma)}}{2}, 1 - \frac{1}{\gamma - 1} }$ into Corollary \ref{corr:exact_complexity} to get the last estimate.
\end{proof}
\noindent The finite convergence of the exact PPA, under WSM, dates back to \cite{Fer:91,BurFer:93,Ant:94,Ber:82constrained}.
Since PPA is simply a gradient descent iteration on the Moreau envelope, its iteration complexity under QG ($\gamma = 2$) shares the typical dependence on the condition number $\frac{1}{\mu\sigma_F}$.
The Holder growth regime $\gamma \in (1,2)$ behaves similarly to the sharp minima case: fast convergence outside a neighborhood of the optimum, which increases with $\mu$.
\noindent A simple argument on the tightness of the bounds for the case $\gamma > 2$ can be found in \cite{BauDao:15}. Indeed, Douglas-Rachford, PPA and Alternating Projections algorithms were analyzed in \cite{BauDao:15} for particular univariate functions. The authors proved that PPA requires $\BigO{1/\epsilon^{\gamma -2}}$ iterations to minimize the particular objective $F(x) = \frac{1}{\gamma}|x|^{\gamma}$ (when $\gamma > 2$) up to $\epsilon$ tolerance.
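\noindent This slow-convergence behavior is easy to reproduce numerically. The following sketch (hypothetical Python, illustration only) runs exact PPA on $F(x) = \frac{1}{4}|x|^{4}$, i.e. $\gamma = 4$; the one-dimensional proximal subproblem reduces to the scalar equation $\mu z^{3} + z = x$, which we solve by bisection. The iterates decay only at a sublinear rate, roughly $x^k \approx (2\mu k)^{-1/2}$, consistent with the $\BigO{1/\epsilon^{\gamma-2}}$ bound discussed above.

```python
def prox_quartic(x, mu, tol=1e-13):
    # prox of F(z) = |z|^4 / 4: the optimality condition is mu*z^3 + z = x.
    # For x >= 0 the unique root lies in [0, x]; locate it by bisection.
    lo, hi = 0.0, x
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mu * mid**3 + mid < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mu, x = 1.0, 1.0
traj = [x]
for _ in range(1000):          # exact PPA: x^{k+1} = prox_mu^F(x^k)
    x = prox_quartic(x, mu)
    traj.append(x)
print(traj[10], traj[100], traj[1000])  # slow, roughly 1/sqrt(2k) decay
```

After $1000$ exact proximal steps the iterate is still only at distance about $0.02$ from the minimizer, in stark contrast to the linear rates of the $\gamma \le 2$ regimes.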
\begin{corrolary}\label{cor:complexity_for_inexact_case}
Under the assumptions of Corollary \ref{cor:complexity_for_exact_case}, recall the notation $\mathcal{K}_e(\gamma,\epsilon)$ for the exact case. The complexity of IPPA$(x^0,\mu,\{\delta_k\})$ to attain an $\epsilon$-suboptimal point is:
\begin{align*}
& \text{WSM}: \left[\delta_k \le \frac{\delta_0}{2^k} \right] \qquad \hspace{1.5cm} \BigO{\Max{\mathcal{K}_e(1,\epsilon)+\frac{\delta_0}{\mu\sigma_F}, \log\left( \frac{\delta_0}{\epsilon}\right)} }\\
& \text{QG}: \left[\delta_k \le \frac{\delta_0}{2^k}\right] \qquad \hspace{3cm} \Max{ \mathcal{O}\left( \mathcal{K}_e(2,\epsilon) \right),1}\\
& \text{HG}:\left[\gamma \in (1,2), \delta_k \le \frac{\delta_0}{2^{k}} \right] \;\; \hspace{2.8cm} \mathcal{O}\left( \mathcal{K}_e(\gamma,\epsilon)\right)\\
& \text{HG}: \left[\gamma > 2, \delta_k = \left( 1 - \frac{1}{\gamma - 1}\right)^{k(\gamma - 1)} \delta_{0}\right] \qquad \mathcal{O}\left( \mathcal{K}_e(\gamma,\epsilon)\right).
\end{align*}
\end{corrolary}
\begin{proof}
The first two estimates follow immediately from Theorem \ref{th:IPPconvergence} $(i)$ and $(ii)$. We provide details for the other two cases.
For $\gamma \in (1,2)$ we use the same notations as in the proof of Theorem \ref{th:IPPconvergence} (given in the appendix). There, the key quantities which determine the decrease rate of $\text{dist}_{X^*}(x^k)$ are the nondecreasing function $h$ and the accuracy $\hat{\delta}_k = \Max{\left( \frac{2\delta_k}{\mu\sigma_F \varphi(\gamma)} \right)^{\frac{1}{\gamma-1}}, \frac{2\delta_k}{1 - \sqrt{1-\varphi(\gamma)}}}$. First recall that
\begin{align}\label{varphi_bounds}
\frac{1}{2} = \varphi(2) \le \varphi(\gamma) \le \varphi(1) = \frac{3}{4},
\end{align}
which implies that for any $\delta \ge 0$
\begin{align}\label{rel:low_bound_h}
\frac{\delta}{2} \overset{\eqref{varphi_bounds}}{\le} \frac{1 + \sqrt{1-\varphi(\gamma)}}{2}\delta \le h(\delta).
\end{align}
Recalling that $\bar{\delta}_k = \Max{\hat{\delta}_{k}, h(\bar{\delta}_{k-1})}$, then by Theorem \ref{th:IPPconvergence}$(iii)$ we have:
\begin{align}\label{rel:IPPconvergence_reloaded}
\text{dist}_{X^*}(x^k)
&\le
\max\left\{h^{(k)}(\text{dist}_{X^*}(x^0)), \bar{\delta}_k \right\}
\end{align}
By taking $\delta_k = \frac{\delta_{k-1}}{2}$, we get $ \hat{\delta}_{k} =
\Max{\frac{1}{2^{\frac{1}{\gamma - 1}}}\left( \frac{2\delta_{k-1}}{\mu\sigma_F \varphi(\gamma)} \right)^{\frac{1}{\gamma-1}}, \frac{1}{2}\frac{2\delta_{k-1}}{1 - \sqrt{1-\varphi(\gamma)}}} \le \frac{\hat{\delta}_{k-1}}{2} \overset{\eqref{rel:low_bound_h}}{\le} h(\hat{\delta}_{k-1})$. By this recurrence, the monotonicity of $h$ and $\hat{\delta}_0 = \bar{\delta}_0$, we derive:
\begin{align*}
\bar{\delta}_k
& \le \Max{h(\hat{\delta}_{k-1}), h(\bar{\delta}_{k-1})} = h(\bar{\delta}_{k-1}) \\
& \le h^{(k)}(\hat{\delta}_{0}).
\end{align*}
Finally, plugging this key bound into \eqref{rel:IPPconvergence_reloaded}, we get:
\begin{align*}
\text{dist}_{X^*}(x^k)
& \le \Max{h^{(k)}\left(\text{dist}_{X^*}(x^0)\right), h^{(k)}\left(\hat{\delta}_{0}\right)}\\
& \le h^{(k)}\left(\Max{\text{dist}_{X^*}(x^0), \hat{\delta}_{0}}\right),
\end{align*}
where for the last inequality we used the fact that, since $h$ is nondecreasing, so is $h^{(k)}$.
Finally, by applying Theorem \ref{th:central_recurrence} we get our result.
Now let $\gamma >2$. By redefining $h$ as in Theorem \ref{th:IPPconvergence}, observe that
\begin{align}\label{rel:low_bound_h_hat}
\delta \left( 1 - \frac{1}{\gamma - 1}\right)\le h(\delta).
\end{align}
Take $\delta_k = \left( 1 - \frac{1}{\gamma - 1}\right)^{\gamma - 1} \delta_{k-1}$; then
\begin{align*}
\hat{\delta}_{k} & =
\Max{\left( 1 - \frac{1}{\gamma - 1}\right) \left( \frac{2\delta_{k-1}}{\mu\sigma_F \varphi(\gamma)} \right)^{\frac{1}{\gamma-1}}, \left( 1 - \frac{1}{\gamma - 1}\right)^{\gamma - 1} \frac{2\delta_{k-1}}{1 - \sqrt{1-\varphi(\gamma)}}} \\
& \le \left( 1 - \frac{1}{\gamma - 1}\right) \hat{\delta}_{k-1} \overset{\eqref{rel:low_bound_h_hat}}{\le} h(\hat{\delta}_{k-1}).
\end{align*}
We have shown in the proof of Theorem \ref{th:IPPconvergence} that this variant of $h$ is also nondecreasing; thus, using the same reasoning as in the case $\gamma \in (1,2)$, we obtain the above result.
\end{proof}
\noindent Overall, if the noise vanishes sufficiently fast (e.g. a linear decay with factor $\frac{1}{2}$ in the case $\gamma \le 2$), then the complexity of IPPA has the same order as that of PPA. However, a fast-vanishing noise $\delta_k$ requires an efficient inner routine that computes $x^k$. Now suppose $F$ is nonsmooth and the inner routine is a simple classical Subgradient Method. Recall that minimizing a strongly convex function $f$ up to $\delta$ accuracy, i.e. $f(x) - f^* \le \delta$, takes $\BigO{\frac{1}{\delta}}$ subgradient iterations \cite{JudNes:14uniform}. By using strong convexity, i.e. $\frac{\sigma_f}{2}\norm{x-x^*}^2 \le f(x) - f^* \le \delta$, in order to approach a minimizer at distance $\delta$, SGM requires $\BigO{\frac{1}{\delta^2}}$ iterations.
Therefore, taking into account that the cost of a single IPPA iteration is $\BigO{\frac{1}{\delta_k^2}}$, the naive direct count of the total number of subgradient evaluations, $\BigO{\sum\limits_{k=0}^T \frac{1}{\delta_k^2}}$, necessary to obtain $\text{dist}_{X^*}(x^T) \le \epsilon$, yields a worse estimate than the known optimal $\BigO{\frac{1}{\epsilon^{2(\gamma-1)}}}$. However, the restarted variant of IPPA overcomes this issue.
\section{Restarted Inexact Proximal Point Algorithm}\label{sec:RIPPA}
\noindent The Restarted IPPA (RIPPA) consists of simple repeated calls of IPPA combined with a multiplicative update of the parameters. Observe that RIPPA is completely independent of the problem constants.
\begin{algorithm}
\caption{Restarted IPPA ($x^0,\mu_0,\delta_0,\rho$)}
Initialize $\delta^{\nabla}_0 := \delta_0, t:=0$\\
\While{stopping criterion}{
$\text{Call IPPA} \; \text{to compute:} \;\; x^{t+1} = IPPA(x^t, \mu_t, \{\delta_k := \delta_t\}_{k \ge 0}, 5\delta_t^{\nabla} )$ \\
$\text{Update:} \;\; \mu_{t+1} = 2\mu_t, \; \delta^{\nabla}_{t+1} = \frac{\delta^{\nabla}_{t}}{2^{\rho}} , \; \delta_{t+1} = \mu_{t+1}\delta^{\nabla}_{t+1}$ \\
$\; t := t+1$\\
}
\end{algorithm}
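\noindent As a quick sanity check on the parameter schedule (a hypothetical Python sketch, with names of our choosing): since $\mu_{t+1} = 2\mu_t$ and $\delta^{\nabla}_{t+1} = 2^{-\rho}\delta^{\nabla}_t$, the inner tolerances $\delta_t = \mu_t\delta^{\nabla}_t$ decay geometrically with factor $2^{1-\rho}$, so for $\rho > 1$ they vanish even though $\mu_t$ grows.

```python
def rippa_schedule(mu0, dgrad0, rho, num_epochs):
    # Reproduce the parameter updates of RIPPA:
    # mu_{t+1} = 2 * mu_t, dgrad_{t+1} = dgrad_t / 2**rho, delta_t = mu_t * dgrad_t
    mu, dgrad = mu0, dgrad0
    schedule = []
    for _ in range(num_epochs):
        schedule.append((mu, dgrad, mu * dgrad))
        mu, dgrad = 2.0 * mu, dgrad / 2.0**rho
    return schedule

sched = rippa_schedule(mu0=1.0, dgrad0=0.5, rho=2.0, num_epochs=6)
ratios = [b[2] / a[2] for a, b in zip(sched, sched[1:])]
print(ratios)  # each delta_t shrinks by the factor 2**(1 - rho) = 0.5 here
```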
As usual in the context of restarting, we call each outer iteration $t$ an epoch. The stopping criterion can optionally be based on a fixed number of epochs or on the reduction of the gradient norm \eqref{stopping_criterion}.
Denote $K_0 = \lceil \frac{\text{dist}_{X^*}(x^0)}{\mu_0\delta_0} \rceil $.
\begin{theorem}\label{th:RIPP_complexity}
Let $\delta_0,\mu_0$ be positive constants and $\rho > 1$. Then the sequence $\{x^t\}_{t\ge 0}$ generated by $RIPPA(x^0,\mu_0,\delta_0,\rho)$ attains $\text{dist}_{X^*}(x^t) \le \epsilon$ after $\mathcal{T}_{IPP}(\gamma,\epsilon)$ iterations.
\noindent Let $\gamma = 1$ and assume $\epsilon < \mu_0\sigma_F$ and $\text{dist}_{X^*}(x^0) \ge \mu_0\sigma_F$, then
\begin{align*}
\mathcal{T}_{IPP}(1,\epsilon) = \frac{1}{\rho - 1} \log\left( \frac{\mu_0 \delta_0}{\epsilon} \right) + \mathcal{T}_{ct},
\end{align*}
where $\mathcal{T}_{ct} = K_0\lceil \frac{1}{\rho}\log\left( \frac{12\delta_0}{\sigma_F} \right) \rceil .$
In particular, if $\delta_0 < \mu_0\sigma_F$ then $RIPPA(x^0,\mu_0,\delta_0,0)$ reaches the $\epsilon-$suboptimality within $\BigO{\log\left( \frac{\mu_0\sigma_F}{\epsilon}\right)}$ iterations.
\noindent Let $\gamma = 2$, then
\begin{align*}
\mathcal{T}_{IPP}(2,\epsilon) = \BigO{\frac{1}{\rho}\log \left( \frac{\delta_0}{\epsilon}\right) } + K_0.
\end{align*}
\noindent Let $\gamma \in (1,2)$, then:
\begin{align*}
\mathcal{T}_{IPP}(\gamma,\epsilon) = \BigO{\Max{ \frac{\gamma-1}{\rho}\log \left( \frac{\delta_0}{\epsilon}\right), \frac{1}{\rho-1}\log \left( \frac{\mu_0\delta_0}{\epsilon}\right)}} + K_0.
\end{align*}
Otherwise, for $\gamma > 2$
\begin{align*}
\mathcal{T}_{IPP}(\gamma,\epsilon) = \BigO{\left( \frac{\delta_0}{\epsilon}\right)^{ \left[\left(1 -\frac{1}{\rho} \right)(\gamma - 1) - 1 \right]\Max{ 1 , \frac{1}{(1 - 1/\rho)(\gamma - 1)} } } } + K_0.
\end{align*}
\end{theorem}
\begin{proof}
In this proof we use the notation $K_t$ for the number of iterations in the $t$-th epoch, chosen large enough that the stopping criterion is satisfied. We denote by $x^{k,t}$ the $k$-th IPPA iterate during the $t$-th epoch. Recall from Theorem \ref{th:general_IPP_complexity} that
\begin{align}\label{rel:epoch_length}
K_t = \left \lceil \frac{\text{dist}_{X^*}(x^{t})}{ \delta_t} \right\rceil
\end{align}
is sufficient to guarantee $\norm{\nabla_{\delta_{t}}F_{\mu_t}(\hat{x}^{K_t,t})} \le 5\delta_{t}^{\nabla}$ and thus the end of the $t$-th epoch.
Furthermore, by the triangle inequality
\begin{align}\label{rel:lower_bound_noisygrad}
\norm{\nabla F_{\mu_t}(x^{t+1})} - \delta_t^{\nabla} \le \norm{\nabla_{\delta_t} F_{\mu_t}(x^{t+1})} \le 5\delta_t^{\nabla},
\end{align}
which implies that at the end of the $t$-th epoch we also have $\norm{\nabla F_{\mu_t}(x^{t+1})} \le 6\delta_t^{\nabla}$.
\noindent Let WSM hold and recall assumption $\text{dist}_{X^*}(x^0) \ge \mu_0\sigma_F$.
For sufficiently large $t$ we show that restarting loses any effect and the stopping criterion of epoch $t$ is satisfied after a single iteration. We separate the analysis into two stages: the first stage covers the epochs that produce $x^{t+1}$ satisfying $\norm{ \nabla F_{\mu_t}(x^{t+1})} > \sigma_F$. The second one covers the remaining epochs, in which the gradient norms decrease below the threshold $\sigma_F$, i.e. $\norm{ \nabla F_{\mu_t}(x^{t+1})} \le \sigma_F$.
In the first stage, the stopping rule $\norm{ \nabla F_{\mu_t}(x^{t+1})} \le \delta_t^{\nabla}$ limits its length to at most $T_1 = \left \lceil\frac{1}{\rho}\log \left( \frac{12\delta_0}{\sigma_F}\right) \right\rceil$ epochs.
The total number of iterations in this stage is bounded by: $\sum_{t = 0}^{T_1}K_t \le T_1K_0 $.
For the second stage when $\norm{\nabla F_{\mu_{t-1}}(x^t)} < \sigma_F$, Lemma \ref{lemma:morenvgrad_bound_relating_with_distopt} states that $\text{prox}_{\mu,F}(x^t) = \pi_{X^*}(x^t)$ and thus we have
\begin{align*}
\text{dist}_{X^*}(x^t) = \mu_{t-1}\norm{\nabla F_{\mu_{t-1}}(x^t)} & \le \mu_{t-1} \delta_{t-1}^{\nabla} = \delta_{t-1} \\
& < 2\mu_{t-1}\sigma_F = \mu_t\sigma_F.
\end{align*}
Therefore, by Theorem \ref{th:IPPconvergence}
\begin{align*}
\text{dist}_{X^*}(x^{t+1})
& \le \max \left\{\text{dist}_{X^*}(x^t) - K_{t}(\mu_{t}\sigma_F-\delta_t),\delta_t \right\} \\
& \le \max \left\{\mu_{t}\sigma_F - K_t (\mu_t\sigma_F- \delta_t),\delta_t \right\}
\end{align*}
which means that after a single iteration, i.e. $K_t = 1$, it is guaranteed that $\text{dist}_{X^*}(x^{t+1}) \le \delta_t$. In this phase, the output of IPPA is in fact the only point produced in the $t$-th epoch, and the necessary number of epochs (or, equivalently, of IPPA iterations) is $T_2 = \BigO{\frac{1}{\rho-1}\log\left( \frac{\delta_{T_1}}{\epsilon}\right)}$.
\noindent Let $\gamma = 2$. Lemma \ref{lemma:deterministic_moreau_growth}$(ii)$ and the convexity of $F_{\mu}$ yield $ \frac{\sigma_F}{1+2\sigma_F\mu}\text{dist}_{X^*}^2(x) \le \langle \nabla F_{\mu}(x), x - \pi_{X^*}(x)\rangle \le \norm{\nabla F_{\mu}(x)}\text{dist}_{X^*}(x)$. Using the inequality formed by the first and last terms, together with relation \eqref{rel:lower_bound_noisygrad}, we obtain at the end of the $(t-1)$-th epoch:
\begin{align*}
\text{dist}_{X^*}(x^{t}) \le \mu_{t-1} \norm{\nabla F_{\mu_{t-1}}(x^t)} \left( \frac{1}{ \mu_{t-1} \sigma_F} + 2 \right) \le 6\delta_{t-1}\left( \frac{1}{ \mu_{t-1} \sigma_F} + 2 \right),
\end{align*}
suggesting that the necessary number of epochs is
$T = \BigO{\frac{1}{\rho}\log \left( \frac{\delta_0}{\epsilon}\right)}.$
This fact allows us to refine $K_t$ in \eqref{rel:epoch_length} as
$K_t = \left\lceil 3 \cdot 2^{\rho}\left( 2 + \frac{1}{\mu_{t-1}\sigma_F}\right) \right\rceil \quad \forall t \ge 1. $
Since $K_t$ is bounded, then the total number of IPPA iterations has the order $\sum_{t = 0}^{T-1}K_t = K_0 + \BigO{T}.$
\noindent Let $\gamma > 1$. Similarly to the previous two cases, the bound $\norm{\nabla F_{\mu_{t-1}}(x^t)} \le 6\delta_{t-1}^{\nabla}$ guaranteed by the $(t-1)$-th epoch further implies
\begin{align*}
\text{dist}_{X^*}(x^{t}) \overset{\eqref{gamma_all_Env_Holder_error_bound}}{\le} \Max{ \frac{12\delta_{t-1}}{\varphi(\gamma)} , \left[ \frac{ 6\delta_{t-1}^{\nabla} }{\varphi(\gamma)\sigma_F} \right]^{\frac{1}{\gamma-1}} },
\end{align*}
which suggests that the maximal number of epochs is
\begin{align*}
T = \BigO{\Max{ \frac{\gamma-1}{\rho}\log \left( \frac{\delta_0}{\epsilon}\right), \frac{1}{\rho-1}\log \left( \frac{\mu_0\delta_0}{\epsilon}\right)}}.
\end{align*}
Now, $K_t$ of \eqref{rel:epoch_length} becomes
\begin{align}
K_t &= \left\lceil \Max{ \frac{3\cdot 2^{\rho+1}}{\varphi(\gamma)} , D 2^{t\left[ (\rho - 1) - \frac{\rho}{\gamma-1}\right] + \frac{\rho}{\gamma-1} } } \right\rceil \quad \forall t \ge 1, \nonumber
\end{align}
where $D = \left(\frac{6\delta_0}{\sigma_F \varphi(\gamma)} \right)^{2/\gamma} \frac{1}{\mu_0\delta_0}$. For $\gamma \le 2$, $K_t$ is bounded; for $\gamma > 2$ we further estimate
the total number of IPPA iterations by summing:
\begin{align}
\sum_{t = 0}^{T}K_t
& = K_0 + \sum_{t = 1}^{T} \left\lceil \Max{ \frac{3\cdot 2^{\rho+1}}{\varphi(\gamma)} , D 2^{t\left[ (\rho - 1) - \frac{\rho}{\gamma-1}\right] + \frac{\rho}{\gamma-1} } } \right\rceil \\
& \le K_0 + T + \Max{ \frac{3\cdot 2^{\rho+1}}{\varphi(\gamma)}T , D2^{\frac{\rho}{\gamma-1}} \sum_{t = 1}^{T} 2^{t\left[ (\rho - 1) - \frac{\rho}{\gamma-1}\right] } } \nonumber
\end{align}
Finally,
$\sum_{t = 1}^{T} 2^{t\left[ (\rho - 1) - \frac{\rho}{\gamma-1}\right] }
= \BigO{\left( \frac{\delta_0}{\epsilon}\right)^{ \Max{ \left(1 -\frac{1}{\rho} \right)(\gamma - 1) - 1 , 1 - \frac{\rho}{(\rho - 1)(\gamma - 1)} } } }
$.
\end{proof}
\begin{remark}
Notice that for any $\gamma \in [1,2]$, logarithmic complexity $\BigO{\log(1/\epsilon)}$ is obtained.
When $\gamma > 2$, the above estimate shortens to
\begin{align*}
\BigO{\left( \frac{1}{\epsilon}\right)^{(\zeta - 1)\Max{1,\frac{1}{\zeta}}}},
\end{align*}
where $\zeta = (\gamma - 1)\left( 1 - \frac{1}{\rho}\right)$.
In particular, if $\rho \le \frac{\gamma - 1}{\gamma - 2}$, then all epochs reduce to length $1$ and the total number of IPPA iterations reduces to the same order as in the exact case:
\begin{align*}
\BigO{\left( \frac{1}{\epsilon}\right)^{\gamma - 2} }.
\end{align*}
\end{remark}
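\noindent The threshold $\rho = \frac{\gamma-1}{\gamma-2}$ in the remark is precisely the choice that makes $\zeta = 1$ and hence annihilates the exponent $(\zeta - 1)\Max{1,\frac{1}{\zeta}}$; the short numerical check below confirms this (hypothetical Python, illustration only).

```python
def zeta(gamma, rho):
    # zeta = (gamma - 1) * (1 - 1/rho), as defined in the remark
    return (gamma - 1.0) * (1.0 - 1.0 / rho)

for gamma in (2.5, 3.0, 4.0, 10.0):
    rho = (gamma - 1.0) / (gamma - 2.0)
    # with this rho the exponent (zeta - 1) * max(1, 1/zeta) vanishes
    print(gamma, zeta(gamma, rho))
```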
\section{Inner Proximal Subgradient Method routine}\label{sec:total_complexity}
Although the influence of the growth modulus on the behaviour of IPPA is evident, all complexity estimates derived in the previous sections assume the existence of an oracle computing an approximate proximal mapping:
\begin{align}\label{the_subproblem}
x^{k+1} \approx \arg\min\limits_{z \in \mathbf{R}^n} \; F(z) + \frac{1}{2\mu}\norm{z - x^k}^2.
\end{align}
Therefore in the rest of this section we focus on the solution of the following inner minimization problem:
\begin{align*}
\min\limits_{z \in \mathbf{R}^n} \; f(z) + \psi(z) + \frac{1}{2\mu}\norm{z - x}^2.
\end{align*}
In most situations, despite the regularization term, this problem is not trivial and one should select an appropriate routine that computes $\{x^k\}_{k \ge 0}$.
For instance, variance-reduced or accelerated proximal first-order methods were employed in \cite{Lin:15,Lin:18,Lu:20,Luo:93} as inner routines and theoretical guarantees were provided. Also, a Conjugate Gradient-based Newton method was used in \cite{Tom:11} under a twice-differentiability assumption on $f$. We limit our analysis to gradient-type routines and leave other accelerated or higher-order methods, which typically improve the performance of their classical counterparts, for future work.
In this section we evaluate the computational complexity of IPPA in terms of number of proximal gradient iterations. The basic routine for solving \eqref{the_subproblem}, that we analyze below, is the Proximal (sub)Gradient Method. Notice that when $f$ is nonsmooth with bounded subgradients, we consider only the case when $\psi$ is an indicator function for a simple, closed convex set. In this situation, PsGM becomes a simple projected subgradient scheme with constant stepsize that solves \eqref{the_subproblem}.
\begin{algorithm}
\caption{Proximal subGradient Method (PsGM) ($z^0,x^k, \alpha, \mu, N$)}
\For{$\ell = 0, \cdots, N-1 $}{
$z^{\ell+1} = \text{prox}_{\alpha}^\psi \left(z^\ell - \alpha \left( f'(z^\ell) + \frac{1}{\mu}(z^\ell - x^k)\right) \right) $ \\
}
$\text{Output:} \; z^N$
\end{algorithm}
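\noindent For concreteness, a minimal sketch of PsGM (hypothetical Python; the problem data are of our choosing) applied to one proximal subproblem with $f(z) = \frac{1}{2}(z-3)^2$, $\psi = |\cdot|$, center $x = 0$ and $\mu = 1$: the fixed point of the iteration is the exact proximal point $z^\star = 1$, so the subproblem solution is recovered up to the prescribed inner accuracy.

```python
def soft(v, t):
    # proximal map of t * |.| (soft-thresholding)
    return (abs(v) - t) * (1.0 if v > 0 else -1.0) if abs(v) > t else 0.0

def psgm(z0, x, alpha, mu, num_iters, grad_f, prox_psi):
    # Proximal (sub)Gradient Method on z -> f(z) + psi(z) + ||z - x||^2 / (2 mu)
    z = z0
    for _ in range(num_iters):
        z = prox_psi(z - alpha * (grad_f(z) + (z - x) / mu), alpha)
    return z

# subproblem: f(z) = (z - 3)^2 / 2, psi(z) = |z|, center x = 0, mu = 1;
# the smooth part has Lipschitz constant 2, so alpha = 0.25 is a safe stepsize
z = psgm(z0=0.0, x=0.0, alpha=0.25, mu=1.0, num_iters=200,
         grad_f=lambda z: z - 3.0, prox_psi=soft)
print(z)  # converges to the proximal point z* = 1
```

The regularization term $\frac{1}{2\mu}\norm{z-x}^2$ makes the smooth part of the subproblem strongly convex, which is exactly why the inner routine enjoys the linear rate exploited in the complexity counts above.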
Through a natural combination of the outer guiding IPP iteration and the inner PsGM scheme, we further derive the total complexity of proximal-based restartation Subgradient Method in terms of subgradient oracle calls.
\begin{algorithm}\label{algorithm:IPP-SGM}
\caption{Restarted Inexact Proximal Point - SubGradient Method (RIPP-PsGM) ($x^0, \delta_0, \mu_0, \rho, q, L_f, T$)}
Initialize: $t := 0, \alpha_0 := \frac{\mu_0}{2}, N_0 := \Max{8\log(L_f/\delta_0)+1,\rho-1} $ \\
\For{ $t:=0, \cdots, T-1$}{
$x^{0,t} : = x^t, k: = 0$\\
\Do{ $\norm{x^{k-1,t} - x^{k-2,t}} > \mu_t\delta_t$ }{
$ x^{k+1,t} := PsGM(x^{k,t}, x^{k,t}, \alpha_t, \mu_t, N_t)$ \\
$k:=k+1$ \\
}
$x^{t+1}:= x^{k-1,t}$\\
{$\alpha_{t+1} := \alpha_{t}2^{-q}, \; N_{t + 1} := N_t 2^{q+1}$\\
$\mu_{t+1}:= 2\mu_t, \delta_{t+1}:= 2^{-\rho}\delta_t $\\
}
}
Return $x^{T}, \delta_T$ \\
\hrule
\textbf{Postprocessing}($\tilde{x}^0,\beta_0,\mu,N,K$) \\
\For{ $k:=0, \cdots, K-1$}{
$ \tilde{x}^{k+1} := PsGM(\tilde{x}^{k}, \tilde{x}^{k}, \beta_k, \mu, N)$ \\
$\beta_{k+1} := \beta_{k}/2$}
Return $\tilde{x}^{K}$
\end{algorithm}
\begin{theorem}\label{th:main_computational_theorem}
Let the assumptions of Theorem \ref{th:Holder_gradients} hold, and let $\rho > 1$ and $\delta_0 \ge 2L_f$. Then the following assertions hold:
\noindent $(i)$ Let $x^f$ be generated as follows:
\begin{align*}
\{x^T, \delta_T\} &= \textbf{RIPP-PsGM}(x^0,\delta_0,\mu_0,\rho,2\rho - 1,L_f,T) \\
x^f &= \textbf{Postprocessing}(x^T,\delta_T^2/L_f^2 ,\mu_T,\lceil 2(L_f/\delta_T)^2 \rceil,\lceil\log(\delta_T/\epsilon)\rceil).
\end{align*}
For $T \ge \left\lceil \frac{1}{\rho} \log(12\delta_0/\sigma_F) \right \rceil$, the final output $x^f$ satisfies $\text{dist}_{X^*}(x^f) \le \epsilon$ after
$\BigO{\log\left( 1 / \epsilon \right) }$
PsGM iterations.
\noindent $(ii)$ Moreover, if $f$ has $\nu-$Holder continuous gradients with constant $L_f$ and $\nu \le \gamma - 1$, assume $\delta_0 \ge (2L_f^2)^{\frac{1}{2(1-\nu)}} \mu_0^{\frac{\nu}{1-\nu}}$ and
$\frac{q-1}{\rho - 1} \ge 2(1-\nu)$.
If $T = \BigO{ \Max{ \frac{\gamma-1}{\rho}\log(\mu_0\delta_0/\epsilon), \frac{1}{\rho-1}\log(\mu_0\delta_0/\epsilon) } }$, then the output $x^T:=RIPP-PsGM(x^0,\delta_0,\mu_0,\rho,q,L_f,T)$ satisfies $\text{dist}_{X^*}(x^T) \le \epsilon$ within a total cost of:
\begin{align*}
\BigO{ 1/\epsilon^{ \Max{ \gamma - 2 + \frac{q}{\rho}(\gamma - 1) , (\gamma-2)\frac{\rho}{(\rho - 1)(\gamma - 1)} + \frac{q}{\rho - 1} } }}
\end{align*}
PsGM iterations.
\end{theorem}
\begin{proof}
We keep the same notations as in the proof of Theorem \ref{th:RIPP_complexity} and, with a slight abuse of notation, set $\delta_t^{\nabla}:=\delta_t$.
By assumption $\delta_0 \ge 2L_f$ we observe that $ (4\alpha_0\mu_0L_f^2)^{\frac{1}{2}} \le 2\mu_0L_f \le \mu_0 \delta_0$.
Since $\alpha_t = \alpha_02^{-(2\rho-1)t}$, then the inequality $4\alpha_t\mu_tL_f^2 \le (\delta_t)^{2} $ recursively holds for all $t \ge 0$.
This last inequality allows Theorem \ref{th:Holder_gradients} to establish that, at the $t$-th epoch,
\begin{align*}
[t = 0] &\quad N_0 = \left\lceil 8\log\left( \frac{ \norm{\nabla F_{\mu_0}(x^0)} }{\delta_0} \right) \right\rceil \\
[t > 0] & \quad N_t = \left\lceil 4\cdot 2^{2\rho t} \log\left( \frac{ \mu_{t-1}\delta_{t-1}}{\mu_t\delta_{t}} \right)\right\rceil =
\left\lceil 4(\rho - 1)2^{2\rho t}\right\rceil
\end{align*}
PsGM iterations suffice. Lastly, we compute the total computational cost by summing over all $N_t$.
Recall that at the end of the $t$-th epoch RIPPA guarantees that $\norm{\nabla F_{\mu_t}(x^{t+1})} \le \delta_t^{\nabla}$. After $T = \left \lceil \frac{1}{\rho} \log\left( \frac{\delta_0}{\sigma_F} \right)\right \rceil$ epochs, amounting to a total of:
\begin{align*}
\mathcal{T}_{1} = \sum\limits_{t = 0}^{T - 1} N_t K_{t}
& = N_0 K_{0} + K_{0}\sum\limits_{t = 1}^{T-1} \left\lceil 4(\rho - 1)2^{2\rho t}\right\rceil
\\
&= \BigO{K_{0} 2^{2\rho T}} = \BigO{K_{0} \left(\frac{\delta_0}{\sigma_F} \right)^{2} }.
\end{align*}
PsGM iterations, one obtains $x^{T}$ satisfying $\text{dist}_{X^*}(x^{T}) \le \mu_{T-1}\delta_T \le \mu_{T-1}\sigma_F$. Now we evaluate the final postprocessing loop, which aims to bring the iterate $x^T$ into the $\epsilon$-suboptimality region. By Theorem \ref{th:Holder_gradients}, a single call of $PsGM \left(x^{T},x^{T},\beta_0, \mu_{T-1}, \left\lceil 2\left(\frac{L_f}{\delta_T} \right)^2 \right\rceil \right)$ guarantees that $\text{dist}_{X^*}(\tilde{x}^{1}) \le \frac{\beta_0 L_f^2}{2\sigma_F} \le \frac{\beta_0 L_f^2}{2\delta_T} = \frac{\delta_T}{2}$.
In general, if $\text{dist}_{X^*}(\tilde{x}^{k}) \le \frac{\beta_{k-1} L_f^2}{2 \sigma_F} \le \frac{\beta_{k-1} L_f^2}{2 \delta_{T}} $, Theorem \ref{th:Holder_gradients} specifies that after $\left \lceil 2\left(\frac{\text{dist}_{X^*}(\tilde{x}^{k})}{\beta_{k} L_f} \right)^2 \right \rceil \overset{\text{Theorem} \; \ref{th:Holder_gradients}}{\le}
\left\lceil 2\left(\frac{L_f}{\delta_{T}} \right)^2 \right \rceil $ routine iterations, the output $\tilde{x}^{k+1}$ satisfies $\text{dist}_{X^*}(\tilde{x}^{k+1}) \le \frac{\beta_k L_f^2}{2 \sigma_F} \le \frac{\beta_k L_f^2}{2 \delta_{T}}$. Therefore, by setting $N =\left\lceil 2\left(\frac{L_f}{\delta_T} \right)^2 \right\rceil$ and $K = \lceil\log(\delta_T/\epsilon)\rceil$, and by running the final procedure \textbf{Postprocessing}($x^T,\delta_T^2/L_f^2,\mu_{T-1},N,K$), the ``second phase'' loop produces $\text{dist}_{X^*}(x^{K}) \le \epsilon$. The total cost of the ``second phase'' loop can be easily computed as
$$\mathcal{T}_2 = K N = \left\lceil 2\left(\frac{L_f}{\delta_T} \right)^2 \right\rceil \left\lceil\log \left(\frac{\delta_T}{\epsilon}\right)\right\rceil.$$
Lastly, taking into account that $\delta_t = \BigO{\sigma_F}$, the total complexity has the order:
\begin{align*}
\mathcal{T}_1 + \mathcal{T}_2 = \BigO{ K_{0} \left(\frac{\delta_0}{\sigma_F} \right)^{2} + \left(\frac{L_f}{\sigma_F} \right)^2 \log \left(\frac{\sigma_F}{\epsilon}\right)},
\end{align*}
which confirms the first part of the above result.
\noindent Now we prove the second part of the result. By assumption $\delta_0 \ge (2L_f^2)^{\frac{1}{2(1-\nu)}} \mu_0^{\frac{\nu}{1-\nu}} $ we observe that:
\begin{align}\label{rel:induction_initial_step}
(4\alpha_0\mu_0L_f^2)^{\frac{1}{2(1-\nu)}} \le \mu_0 \delta_0.
\end{align}
Further, we show that for appropriate stepsize choices $\alpha_t$, the inequality $4\alpha_t\mu_tL_f^2 \le (\mu_t\delta_t)^{2(1-\nu)} $ recursively holds for all $t \ge 0$ under the initial condition \eqref{rel:induction_initial_step}. Indeed, let $2(\rho - 1)(1-\nu) \le q-1$; then
\begin{align}
4\alpha_t \mu_t L_f^2
& =
\frac{2\mu_{0}^2 L_f^2}{2^{(q-1)t}}
\overset{\eqref{rel:induction_initial_step}}{\le} \frac{(\mu_0\delta_0)^{2(1-\nu)} }{2^{2(\rho-1)(1-\nu)t}} = (\mu_t \delta_t)^{2(1-\nu)}. \label{rel:constant_inner_condition}
\end{align}
The inequality \eqref{rel:constant_inner_condition} allows Theorem \ref{th:Holder_gradients} to establish the necessary inner complexity for each IPPA iteration. By the bounds of Theorem \ref{th:Holder_gradients}, at the $t$-th epoch,
\begin{align*}
[t = 0] &\quad N_0 = \left\lceil 8\log\left( \frac{ \norm{\nabla F_{\mu_0}(x^0)} }{\delta_0} \right) \right\rceil \\
[t > 0] & \quad N_t = \left\lceil 4\cdot 2^{(q+1)t} \log\left( \frac{\mu_{t-1} \delta_{t-1}}{\mu_t\delta_t} \right)\right\rceil =
\left\lceil 4(\rho - 1)2^{(q+1)t}\right\rceil
\end{align*}
PsGM iterations suffice. We keep the same notations as in Theorem \ref{th:RIPP_complexity}.
\noindent Let $\gamma > 1$ and recall $T = \BigO{ \Max{ \frac{\gamma-1}{\rho}\log(\mu_0\delta_0/\epsilon), \frac{1}{\rho-1}\log(\mu_0\delta_0/\epsilon) } }.$ By following a similar reasoning, we require:
\begin{align*}
\sum\limits_{t = 0}^{T - 1} N_t K_{t}
& = N_0 K_{0} + \BigO{
\sum_{t = 1}^{T} 2^{t\left[ (\rho - 1) - \frac{\rho}{\gamma-1} + q + 1\right] }
} \\
& = N_0 K_{0}+ \BigO{ 2^{T\left[ (\rho - 1) - \frac{\rho}{\gamma-1} + q + 1\right] }}.
\end{align*}
Let $\zeta = \frac{\rho - 1}{\rho}(\gamma - 1)$. If $\zeta \ge 1$, then the exponent of the last term becomes:
\begin{align*}
T\left[ (\rho - 1) - \frac{\rho}{\gamma-1} + q + 1\right]
& = \Max{ \frac{\gamma - 1}{\rho}, \frac{1}{\rho - 1} } \left[ (\rho - 1) - \frac{\rho}{\gamma-1} + q + 1\right] \\
& = \gamma - 2 + \frac{q}{\rho}(\gamma - 1).
\end{align*}
Otherwise, if $\zeta < 1$ then the respective exponent turns into:
\begin{align*}
T\left[ (\rho - 1) - \frac{\rho}{\gamma-1} + q + 1\right]
& = \frac{\rho}{\rho-1}\frac{\gamma - 2}{\gamma - 1} + \frac{q}{\rho - 1}.
\end{align*}
\end{proof}
\begin{remark}
Although the bound on the number of epochs in RIPP-PsGM depends mildly on $\gamma$, a sufficiently large value of $T$ ensures that the result holds. To extract some important particular bounds hidden in the above complexity estimates (of Theorem \ref{th:main_computational_theorem}), we analyze different choices of the input parameters $(\rho,q)$, which are summarized in Table 1.
\noindent Assume $\nu$ is known, $q = 1 + 2(1-\nu)(\rho - 1)$ and denote $\zeta = \left(1 - \frac{1}{\rho} \right)(\gamma - 1)$
\begin{align}
[\gamma = 1 + \nu ] & \quad \BigO{ 1/\epsilon^{ (3-2\gamma)(\zeta - 1)\Max{1,\frac{1}{\zeta}} }} \label{rel:known_nu_estimates1}\\
[\gamma > 1 + \nu ] & \quad\BigO{ 1/\epsilon^{ \left[ 2(\gamma - \nu - 1) + (1-2\nu)(\zeta - 1) \right]\Max{1,\frac{1}{\zeta}} }}. \label{rel:known_nu_estimates2}
\end{align}
\noindent Under knowledge of $\gamma$, setting $\rho = \frac{\gamma - 1}{\gamma - 2}$ yields $\zeta = 1$, and \eqref{rel:known_nu_estimates1} becomes $\BigO{\log(1/\epsilon)}$. When $\gamma < 2$, a sufficiently large $\rho$ simplifies \eqref{rel:known_nu_estimates2} into $\BigO{1/\epsilon^{3-2\nu - \frac{1}{\gamma - 1} }}$.
Given any $\nu \in [1/2,1]$ and $\gamma > 2$, similarly for $\rho \to \infty$ the estimate \eqref{rel:known_nu_estimates2} reduces to $\BigO{1/\epsilon^{(3-2\nu)(\gamma - 1)-1}}$.
\noindent In the particular smooth case $\nu = 1$, bounds \eqref{rel:known_nu_estimates1}-\eqref{rel:known_nu_estimates2} become:
\begin{align*}
[\gamma = 2 ] & \quad \BigO{ 1/\epsilon^{ \Max{\frac{1}{\rho},\frac{1}{\rho-1}} }} \\
[\gamma > 2 ] & \quad\BigO{ 1/\epsilon^{ \left[\gamma - 2 + \frac{\gamma - 1}{\rho} \right]\Max{1,\frac{1}{\zeta}} }}.
\end{align*}
For large values $\rho \ge \log(1/\epsilon)$, the first bound becomes $\BigO{ \log(1/\epsilon) }$, while the second reduces to $\BigO{ 1/\epsilon^{\gamma - 2} }$ when $\rho \ge (\gamma - 1) \log(1/\epsilon)$.
\noindent In the bounded-gradients case $\nu = 0$ the estimates reduce to
\begin{align}
[\gamma = 1 ] & \quad \BigO{ \log(1/\epsilon) } \label{rel:no_info_estimates1}\\
[\gamma > 1 ] & \quad\BigO{ 1/\epsilon^{ \left[2(\gamma - 1) + \zeta - 1 \right]\Max{1,\frac{1}{\zeta}} }}. \label{rel:no_info_estimates2}
\end{align}
First observe that when the main parameters $\sigma_F,L_f,\gamma$ are known, then $\zeta = 1$ and we recover the same iteration complexity, in terms of the number of subgradient evaluations, as in the literature \cite{Yan:18,Joh:20,Rou:20,Fre:18}.
The last estimate \eqref{rel:no_info_estimates2} holds when RIPP-PsGM performs a sufficiently large number of epochs, with no availability of the problem parameters $\sigma_F, \nu, \gamma$.
\end{remark}
\noindent The optimal complexity estimates $\BigO{\epsilon^{-\frac{2(\gamma - \nu-1)}{3(1+\nu) - 2} } }$, in terms of the number of (sub)gradient evaluations, are derived in \cite{NemNes:85,Rou:20} for accelerated first-order methods under $\nu$-Holder smoothness and $\gamma$-HG. These optimal estimates require full availability of the problem information: $\gamma, \sigma_F, \nu, L_F$.
\begin{table}[h!]
\begin{center}
\begin{tabular}{c|c|c|c|c}
\textbf{Knowledge} & \textbf{DS-SG}\cite{Joh:20} & \textbf{RSG}\cite{Yan:18} & \textbf{Restarted UGM}\cite{NemNes:85} & \textbf{IPPA-PsGM} \\
\hline
$\sigma_F, \gamma, L_f (\nu = 0)$ &
$\BigO{\epsilon^{-2(\gamma - 1)}}$ &
$\BigO{\epsilon^{-2(\gamma - 1)}}$ &
$\BigO{\epsilon^{-2(\gamma - 1)}}$ &
$\BigO{\epsilon^{-2(\gamma - 1)}}$ \\
$\sigma_F, \gamma, L_f, \nu > 0 $ &
- &
- &
$\BigO{\epsilon^{-\frac{2(\gamma - \nu-1)}{3(1+\nu) - 2}}}$ &
\eqref{rel:known_nu_estimates1}/\eqref{rel:known_nu_estimates2} \\
$\gamma, L_f$ &
- &
- &
$\BigO{\epsilon^{-2(\gamma - 1)}}$ &
$\BigO{\epsilon^{-2(\gamma - 1)}}$ \\
$\nu, L_f$ &
- &
- &
- &
\eqref{rel:known_nu_estimates1}/\eqref{rel:known_nu_estimates2} \\
$L_f$ &
- &
- &
- &
\eqref{rel:no_info_estimates2}
\end{tabular}
\caption{Comparison of complexity estimates under various degrees of knowledge of the problem information}
\label{tab:comparison}
\end{center}
\end{table}
\section{Numerical simulations}\label{sec:numerical}
In this section we evaluate RIPP-PsGM by applying it to real-world applications often encountered in machine learning tasks.
The algorithm and its applications are publicly available online~\footnote{https://github.com/pirofti/IPPA}.
Unless stated otherwise,
we perform as many epochs (restarts) as needed until the objective is within $\varepsilon_0=0.5$ of the CVX-computed optimum.
The current objective value is computed within each inner PsGM iteration. All the models under consideration satisfy the WSM property, so the implementation of PsGM reduces to the (projected) subgradient method.
We thank the authors of the methods we compare with for providing their code for reproducing the experiments.
We made no modifications to the algorithms or their specific parameters.
Following their implementation, and as is common in the literature,
we also report the minimum error obtained over the iterations thus far.
All our experiments were implemented in Python 3.10.2 under ArchLinux (Linux version 5.16.15-arch1-1) and executed on an AMD Ryzen Threadripper PRO 3955WX with 16-Cores and 512GB of system memory.
\subsection{Robust $\ell_1$ Least Squares}
We start with the least-squares (LS) problem in the $\ell_1$ setting.
This form deviates from standard LS by imposing an $\ell_1$-norm on the objective and
by constraining the solution sparsity through the parameter $\tau$ bounding its $\ell_1$-norm.
Our goal is to analyze the effect of the data dimensions and of the problem and IPPA parameters
on the total number of iterations.
\begin{align*}
\min\limits_{x \in \mathbf{R}^n} \; & \; \norm{Ax - b}_1 \\
\text{s.t.} & \;\; \norm{x}_1 \le \tau
\end{align*}
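To make the setup concrete, a minimal projected-subgradient sketch for this problem is given below (a plain PsGM loop with a diminishing step, not the released RIPP-PsGM implementation; the data, step size, and iteration count are illustrative):

```python
import numpy as np

def project_l1_ball(x, tau):
    """Euclidean projection onto the l1-ball {x : ||x||_1 <= tau}."""
    if np.abs(x).sum() <= tau:
        return x
    u = np.sort(np.abs(x))[::-1]                      # sorted magnitudes, descending
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, x.size + 1) > css - tau)[0][-1]
    theta = (css[k] - tau) / (k + 1.0)
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def l1_ls_psgm(A, b, tau, iters=2000, step=0.05):
    """Projected subgradient iterations for min ||Ax - b||_1 s.t. ||x||_1 <= tau."""
    x = np.zeros(A.shape[1])
    best = np.abs(A @ x - b).sum()
    for k in range(iters):
        g = A.T @ np.sign(A @ x - b)                  # subgradient of ||Ax - b||_1
        x = project_l1_ball(x - step / np.sqrt(k + 1.0) * g, tau)
        best = min(best, np.abs(A @ x - b).sum())
    return x, best
```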
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.31\textwidth]{rippa-rho.pdf}\label{fig:rippa_rho}}
\subfigure[]{\includegraphics[width=0.31\textwidth]{rippa-mn.pdf}\label{fig:rippa_mn}}
\subfigure[]{\includegraphics[width=0.31\textwidth]{rippa-tau.pdf}\label{fig:rippa_tau}}
\caption{Total number of inner iterations needed for various parametrizations.
(a) Varying $\rho$.
(b) Varied problem dimensions where we set $m=n$.
(c) Varying $\tau$}
\label{fig:l1ls}
\end{figure}
Our first experiment from Figure \ref{fig:rippa_rho} investigates the effect of the $\rho$ parameter on the unconstrained $\ell_1$-LS formulation ($\tau=\infty$) on a small $50\times 20$ problem.
In our experiment we start with $\mu=0.1$, run 9 epochs, and vary $\rho$ from 1.005 to 1.1 in steps of 0.005.
In Figure~\ref{fig:rippa_mn}, we repeat the same experiment with fixed $\rho=1.005$ now, but with varied problem dimensions starting from 10 up to 200 in increments of 5 where we set both dimensions equal ($m=n$).
Finally, in Figure~\ref{fig:rippa_tau}, we study the effect of the problem-specific parameter on
the total number of iterations. Although only mild effects are noticed in the beginning, we can see a sudden burst past $\tau=3.4$. Note that this is specific to $\ell_1$-LS and not to IPPA in general, as we will see in the following section.
\subsection{Graph Support Vector Machines}
Graph SVM adds a graph-guided lasso regularization to the standard SVM averaged hinge-loss objective
and extends the $\ell_1$-SVM formulation through the factorization $Mx$ where $M$ is the weighted graph adjacency matrix, i.e.
\begin{align}
\min\limits_{x \in \mathbf{R}^n} \; & \; \frac{1}{m}\sum_{i=1}^m \max \{0, 1 - y_i a_i^T x \} + \tau\norm{Mx}_1
\label{graphsvm}
\end{align}
where $a_i \in \mathbf{R}^n$ and $y_i \in \{\pm1\}$ are the $i$-th data point and its label, respectively.
When $M=I_n$, the Regularized Sparse $\ell_1$-SVM formulation is recovered.
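For concreteness, the objective \eqref{graphsvm} and one of its subgradients can be evaluated as in the sketch below (illustrative helper functions, not part of the released code):

```python
import numpy as np

def graphsvm_objective(x, A, y, M, tau):
    """Averaged hinge loss plus graph-guided lasso from the formulation above."""
    margins = 1.0 - y * (A @ x)           # rows of A are the data points a_i
    return np.maximum(0.0, margins).mean() + tau * np.abs(M @ x).sum()

def graphsvm_subgradient(x, A, y, M, tau):
    """One subgradient of the objective at x."""
    margins = 1.0 - y * (A @ x)
    active = (margins > 0).astype(float)  # hinge term is active where 1 - y_i a_i^T x > 0
    g_hinge = -A.T @ (active * y) / y.size
    return g_hinge + tau * M.T @ np.sign(M @ x)
```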
\begin{figure}
\centering
Synthetic \\
\subfigure[]{\includegraphics[width=0.49\textwidth]{graphsvm-F_synth-iters.pdf}}
\subfigure[]{\raisebox{3mm}{\includegraphics[width=0.49\textwidth]{graphsvm-F_synth-time.pdf}}}
20newsgroups \\
\subfigure[]{\includegraphics[width=0.49\textwidth]{graphsvm-F_20news-iters.pdf}}
\subfigure[]{\raisebox{3mm}{\includegraphics[width=0.49\textwidth]{graphsvm-F_20news-time.pdf}}}
$\ell_1$-SVM ($M = I_n$) \\
\subfigure[]{\includegraphics[width=0.49\textwidth]{graphsvm-F_l1svm-iters.pdf}}
\subfigure[]{\raisebox{2mm}{\includegraphics[width=0.49\textwidth]{graphsvm-F_l1svm-time.pdf}}}
\caption{GraphSVM experiments on synthetic data, first row, on the 20newsgroups data-set, second row, and with $M=I_n$ on the third row.
(a), (c), (e): Objective error evolution across inner iterations.
(b), (d), (f): Objective error evolution across inner iterations measured in CPU time.
The error is displayed in logarithmic scale.}
\label{fig:graphsvm}
\end{figure}
In Figure~\ref{fig:graphsvm} we present multiple experiments on different data and across different $\tau$ parametrizations. Figures \ref{fig:graphsvm} (a) and (b) compare RIPP-PsGM with R\textsuperscript{2}SG from \cite{Yan:18} on synthetic random data $\{C,X,M\}$ drawn from the standard normal distribution.
The same initialization and starting point $x_0$ was used for all methods.
We use $m=100$ measurements of $n=512$ samples $x$
with initial parameters $\mu=0.1$, $\rho=1.0005$ and $\tau=1$ which we execute for 15 epochs.
We repeat the experiment in Figure \ref{fig:graphsvm} (c) and (d), but this time on real-data
from the 20newsgroup data-set\footnote{https://cs.nyu.edu/~roweis/data.html}
following the experiment from \cite{Yan:18},
with parameters $\mu=0.1$, $\rho=1.005$, and $\tau=3$.
Here we find similar behaviour for both methods as in the synthetic case. In Figure \ref{fig:graphsvm} (e) and (f),
we repeat the experiment by setting $M=I_n$ in \eqref{graphsvm}, thus recovering the regularized $\ell_1$-SVM formulation.
We notice here that RIPP-PsGM maintains its position ahead of R\textsuperscript{2}SG and that the error drop is similar to the 20newsgroup experiment.
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.31\textwidth]{rippa-graphsvm-rho.pdf}\label{fig:rippa_graphsvm_rho}}
\subfigure[]{\includegraphics[width=0.31\textwidth]{rippa-graphsvm-mn.pdf}\label{fig:rippa_graphsvm_mn}}
\subfigure[]{\includegraphics[width=0.31\textwidth]{rippa-graphsvm-tau-0.5.pdf}\label{fig:rippa_graphsvm_tau}}
\caption{GraphSVM: Total number of inner iterations needed for various parametrizations.
(a) Varying $\rho$.
(b) Varied problem dimensions where we set $m=n$.
(c) Varied $\tau$.}
\label{fig:graphsvm_params}
\end{figure}
Furthermore, Figure~\ref{fig:graphsvm_params} revisits the experiments from the $\ell_1$-LS section in order to study the effect of the data dimensions and of the problem parameters on the total number of required inner iterations.
The results for $\rho$ and the data dimensions are as expected: as they grow, the iteration counts increase almost linearly.
For the GraphSVM-specific parameter $\tau$,
we find the results are opposite to those of $\ell_1$-LS;
the problem is harder to solve when $\tau$ is small.
\begin{figure}[ht]
\centering
\subfigure[]{\includegraphics[width=0.49\textwidth]{l1svm-iters.pdf}}
\subfigure[]{\raisebox{5mm}{\includegraphics[width=0.5\textwidth]{l1svm-iters-zoom.pdf}}}
\caption{Sparse $\ell_1$-SVM: (a) Objective error evolution across inner iterations.
(b) Zoomed objective error evolution.}
\label{fig:l1svm}
\end{figure}
\noindent In order to compare with DS\textsuperscript{2}SG~\cite{Joh:20} and R\textsuperscript{2}SG, we follow the Sparse SVM experiment from \cite{Joh:20}, which sets $M=I_n$, removes the regularization $\tau\norm{x}_1$ from \eqref{graphsvm} and instead
adds it as a constraint $\{x: \norm{x}_1 \le \tau\}$.
We used the parameters $\mu=0.001$, $\rho=1.0005$, and $\tau=0.4$ and plotted the results in Figure~\ref{fig:l1svm}.
Note that the starting point is the same for all three methods, with a quick initial drop as can be seen in Figure~\ref{fig:l1svm} (a).
Execution times were almost identical and took around 0.15s.
To investigate the differences in convergence behaviour,
we zoom in after the first few iterations in the 4 panels of Figure~\ref{fig:l1svm} (b).
In the first panel we show the curves for all methods together and in the other three we see the individual curves for each.
Our experiments showed that this is the general behaviour for $\tau>0.4$:
DS\textsuperscript{2}SG has a slightly sharper drop,
with RIPP-PsGM following in closely with a staircase behaviour
while R\textsuperscript{2}SG takes a few more iterations to reach convergence.
For smaller values of $\tau$ we found that RIPP-PsGM reaches the solution in just 1--5 iterations within
$10^{-9}$ precision,
while the others lag behind for a few hundred iterations.
\subsection{Matrix Completion for Movie Recommendation}
In this section,
the problem of matrix completion is applied
to the standard movie recommendation challenge which recovers a full user-rating matrix $X$ from the partial observations $Y$
corresponding to the $N$ known user-movie ratings pairs.
\begin{align*}
\min\limits_{X \in \mathbf{R}^{m\times n}} \; & \; \frac{1}{N}\sum_{(i,j)\in\Sigma} |X_{ij} - Y_{ij}| + \tau\norm{X}_\star
\end{align*}
where $\Sigma$ is the set of observed user-movie pairs and $N = |\Sigma|$. Solving this completes the matrix $X$ based on the known sparse matrix $Y$ while maintaining a low rank.
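A minimal sketch evaluating this objective (the nuclear norm computed via SVD; function names are illustrative):

```python
import numpy as np

def mc_objective(X, Y, mask, tau):
    """(1/N) * sum over observed (i,j) of |X_ij - Y_ij| plus tau * nuclear norm."""
    N = mask.sum()                                       # N = |Sigma|
    data_fit = np.abs((X - Y)[mask]).sum() / N
    nuclear = np.linalg.svd(X, compute_uv=False).sum()   # ||X||_* = sum of singular values
    return data_fit + tau * nuclear
```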
\begin{figure}[t]
\centering
\subfigure[]{\includegraphics[width=0.49\textwidth]{ipp-mc-iters-inner.pdf}}
\subfigure[]{\includegraphics[width=0.49\textwidth]{ipp-mc-time-inner.pdf}}
\caption{Matrix completion: (a) Objective error evolution across inner iterations.
(b) Objective error evolution across inner iterations measured in CPU time.}
\label{fig:mc}
\end{figure}
In Figure~\ref{fig:mc} we reproduce the experiment from \cite{Yan:18}
with parameters $\mu=0.1$, $\rho=1.005$, $\tau=3$
on a synthetic database with 50 movies and 20 users
filled with 250 i.i.d. randomly chosen ratings from 1 to 5.
We let R\textsuperscript{2}SG run for a few more iterations to show that no further progress is made.
\section{Appendix A}
\begin{lemma}\label{lemma:first_sequence}
Let the sequence $\{u_k\}_{k \ge 0}$ satisfy $u_{k+1} \le \alpha_k u_k + \beta_k$, where $\alpha_k \in [0,1)$, $\{\beta_k\}_{k \ge 0}$ is nonincreasing and $\sum\limits_{i=0}^{\infty} \beta_i \le \Gamma$. Then the following bound holds:
\begin{align*}
u_k \le u_0 \prod\limits_{j=0}^k \alpha_j + \Gamma \prod\limits_{j=\lceil k/2 \rceil + 1}^k \alpha_j + \max\limits_{\lceil k/2 \rceil +1 \le i \le k}\frac{\beta_i}{1-\alpha_i}.
\end{align*}
Moreover, if $\alpha_k = \alpha \in [0,1)$ then:
$\;\; u_k \le \alpha^{(k-4)/2}(u_0 + \Gamma)+ \frac{\beta_{\lceil k/2 \rceil +1}}{1-\alpha}.$
\end{lemma}
\begin{proof}[of Lemma \ref{lemma:first_sequence}]
By using a simple induction we get:
\begin{align*}
u_{k+1}
&\le \alpha_k u_k + \beta_k \le u_0 \prod\limits_{j=0}^k \alpha_j + \sum\limits_{i=0}^k \beta_i \prod\limits_{j=i+1}^k \alpha_j \\
& = u_0 \prod\limits_{j=0}^k \alpha_j + \sum\limits_{i=0}^{\lceil k/2 \rceil} \beta_i \prod\limits_{j=i+1}^k \alpha_j + \sum\limits_{i=\lceil k/2 \rceil+1}^k \beta_i \prod\limits_{j=i+1}^k \alpha_j
\end{align*}
\begin{align*}
& \le u_0 \prod\limits_{j=0}^k \alpha_j + \Gamma \prod\limits_{j=\lceil k/2 \rceil + 1}^k \alpha_j + \sum\limits_{i=\lceil k/2 \rceil+1}^k \frac{\beta_i}{1-\alpha_i}(1-\alpha_i) \prod\limits_{j=i+1}^k \alpha_j \\
& \le u_0 \prod\limits_{j=0}^k \alpha_j + \Gamma \prod\limits_{j=\lceil k/2 \rceil + 1}^k \alpha_j + \max\limits_{\lceil k/2 \rceil +1 \le i \le k}\frac{\beta_i}{1-\alpha_i} \sum\limits_{i=\lceil k/2 \rceil+1}^k (1-\alpha_i) \prod\limits_{j=i+1}^k \alpha_j\\
& \le u_0 \prod\limits_{j=0}^k \alpha_j + \Gamma \prod\limits_{j = \lceil k/2 \rceil + 1}^k \alpha_j + \max\limits_{\lceil k/2 \rceil +1 \le i \le k}\frac{\beta_i}{1-\alpha_i}.
\end{align*}
\end{proof}
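The bound of Lemma \ref{lemma:first_sequence} can be stress-tested numerically on the recursion taken with equality, which is the worst case (illustrative parameters; the indexing follows the proof, where the bound holds for $u_{k+1}$ after $k+1$ steps):

```python
import math

def lemma_bound_ok(u0=1.0, alpha=0.5, K=20):
    """Check the general bound of the lemma for constant alpha in [0,1) and
    nonincreasing, summable beta_k (here beta_k = 0.5**k)."""
    betas = [0.5 ** i for i in range(2 * K)]   # nonincreasing, sum(betas) <= Gamma
    Gamma = sum(betas)
    u = u0
    ok = True
    for k in range(K):
        u = alpha * u + betas[k]               # worst case: equality in the recursion
        m = math.ceil(k / 2) + 1               # index ceil(k/2)+1 from the lemma
        bound = (u0 * alpha ** (k + 1)                         # u0 * prod_{j=0}^k alpha
                 + Gamma * alpha ** (k - math.ceil(k / 2))     # Gamma * prod tail
                 + betas[m] / (1 - alpha))                     # max beta_i / (1-alpha_i)
        ok = ok and (u <= bound + 1e-12)
    return ok
```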
\begin{proof}[of Corollary \ref{cor:complexity_for_exact_case}]
\noindent
By taking $\delta_k = 0$ in Lemma \ref{lemma:decrease}, the first two complexity estimates follow straightforwardly. For the third estimate, let $\alpha >0$, $\beta \in [0,1]$ and let the sequence $\{d^k\}_{k \ge 0}$ satisfy:
\begin{align*}
d^{k+1} \le \max\{d^k - \alpha, \beta d^k \}.
\end{align*}
Observe that as long as $d^k \ge \frac{\alpha}{1-\beta}$, the recursion stays in a finite convergence regime:
\begin{align*}
d^{k+1} \le d^k - \alpha \le d^0 - k\alpha.
\end{align*}
Assume that at iteration $\hat{k}$ the sequence enters the region of diameter $\frac{\alpha}{1-\beta}$, i.e. $d^{\hat{k}} \le \frac{\alpha}{1-\beta}$. Then at most $K_1 = \left\lceil\frac{d^0}{\alpha} - \frac{1}{1-\beta} \right\rceil$ iterations are performed in the finite convergence regime. Inside the region, the regime changes to linear convergence:
\begin{align*}
d^{\hat{k}+k} \le \beta d^{\hat{k}+k-1} \le \beta^k d^{\hat{k}} \le \beta^k \frac{\alpha}{1 - \beta}.
\end{align*}
The necessary number of iterations of the linear convergence regime is bounded by $K_2 = \left\lceil\frac{1}{1-\beta}\log\left( \frac{\alpha}{(1-\beta)\epsilon}\right) \right\rceil$. Therefore, the total complexity $K_1 + K_2$ is deduced for our problem once we replace $\alpha = \mu\sigma_F\varphi(\gamma)$ and $\beta = \sqrt{1 - \varphi(\gamma)}$.
\end{proof}
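The two-regime argument above is easy to check numerically: iterating $d^{k+1} = \max\{d^k - \alpha, \beta d^k\}$ reaches the threshold $\epsilon$ within $K_1 + K_2$ steps (the parameter values below are hypothetical):

```python
import math

def iterations_to_eps(d0, alpha, beta, eps):
    """Iterate d <- max(d - alpha, beta * d) until d <= eps; return the count."""
    d, k = d0, 0
    while d > eps:
        d = max(d - alpha, beta * d)
        k += 1
    return k

def predicted_bound(d0, alpha, beta, eps):
    """K1 + K2 from the proof: finite-decrease regime, then linear regime."""
    K1 = math.ceil(d0 / alpha - 1.0 / (1.0 - beta))
    K2 = math.ceil(1.0 / (1.0 - beta) * math.log(alpha / ((1.0 - beta) * eps)))
    return K1 + K2
```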
\begin{lemma}\label{max_lemma}
Let $a_1 \ge \cdots \ge a_n$ be $n$ real numbers, then the following relation holds:
\begin{align*}
\max\left\{ 0, a_n, a_n + a_{n-1}, \cdots, \sum\limits_{j = 1}^n a_j \right\} = \max \left\{0,\sum\limits_{j = 1}^n a_j \right\}.
\end{align*}
\end{lemma}
\begin{proof}
Since we have: $$\max\left\{ 0, a_n, \cdots, \sum\limits_{j = 1}^n a_j \right\} = \max\left\{ \max\{0, a_n\}, \cdots, \max \left\{0,\sum\limits_{j = 1}^n a_j \right\} \right\}, $$ it is sufficient to show that for any $k > 1$:
\begin{align}\label{max_lemma:key_ineq}
\max\left\{ 0, \sum\limits_{j = k}^n a_j \right\} \le \max\left\{0, \sum\limits_{j = k-1}^n a_j\right\}.
\end{align}
Indeed, if $a_{k-1} \ge 0$ then \eqref{max_lemma:key_ineq} follows straightforwardly. Suppose $a_{k-1} < 0$; then by monotonicity $a_j < 0$ for all $j \ge k-1$, and thus $\sum\limits_{j = k}^n a_j < 0$. In this case it is obvious that $\max\left\{ 0, \sum\limits_{j = k}^n a_j \right\} = \max\left\{0, \sum\limits_{j = k-1}^n a_j\right\} = 0$, which confirms the claimed inequality.
\end{proof}
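Lemma \ref{max_lemma} admits a direct computational check over all suffix sums (integer inputs keep the comparisons exact):

```python
def lemma_max_ok(a):
    """For a nonincreasing list a_1 >= ... >= a_n, the maximum over 0 and all
    suffix sums a_n, a_n + a_{n-1}, ..., a_1 + ... + a_n equals max(0, sum(a))."""
    a = sorted(a, reverse=True)                             # enforce a_1 >= ... >= a_n
    suffix_sums = [sum(a[k:]) for k in range(len(a) + 1)]   # includes the empty sum 0
    return max(suffix_sums) == max(0, sum(a))
```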
\section{Appendix B}
\begin{theorem}\label{th:general_IPP_convrate}
Let $\{x^k\}_{k \ge 0}$ be the sequence generated by IPPA with inexactness criterion \eqref{inexactness_criterion}; then the following relation holds:
\begin{align*}
\text{dist}_{X^*}(x^{k+1}) & \le \text{dist}_{X^*}(x^k) - \mu \frac{ F_{\mu}(x^k) - F^* }{\text{dist}_{X^*}(x^k)} + \delta_k.
\end{align*}
Moreover, assume constant accuracy $\delta_k = \delta $. Then after at most:
\begin{align*}
\left\lceil \frac{\text{dist}_{X^*}(x^0)}{\delta} \right\rceil
\end{align*}
iterations, a point $\tilde{x} \in \{x^0, \cdots, x^k\}$ satisfies $\norm{\nabla_{\delta} F_{\mu}(\tilde{x})} \le \frac{4\delta}{\mu}$ and $ \text{dist}_{X^*}(\tilde{x}) \le \text{dist}_{X^*}(x^{0})$.
\end{theorem}
\begin{proof}
By convexity of $F$, for any $z$ we derive:
\begin{align}
& \norm{\text{prox}_{\mu}^F(x^k) - z}^2 = \norm{x^k - z}^2 + 2\langle \text{prox}_{\mu}^F(x^k) - x^k, x^k - z\rangle + \norm{\text{prox}_{\mu}^F(x^k) - x^k}^2 \nonumber\\
& = \norm{x^k - z}^2 - 2\mu\langle \nabla F(\text{prox}_{\mu}^F(x^k)), \text{prox}_{\mu}^F(x^k) -z\rangle - \norm{\text{prox}_{\mu}^F(x^k) - x^k}^2 \nonumber\\
& \le \norm{x^k - z}^2 - 2\mu \left( F(\text{prox}_{\mu}^F(x^k)) - F(z) + \frac{1}{2\mu}\norm{\text{prox}_{\mu}^F(x^k) - x^k }^2 \right) \nonumber\\
& = \norm{x^k - z}^2 - 2\mu \left( F_{\mu}(x^k) - F(z) \right).\label{eq:0acc_recurrence}
\end{align}
By the triangle inequality we derive:
\begin{align}\label{eq:triangle}
\norm{x^{k+1} - z} &\le \norm{\text{prox}_{\mu}^F(x^k) - z} + \norm{\text{prox}_{\mu}^F(x^k) - x^{k+1}} \nonumber\\
&\le \norm{\text{prox}_{\mu}^F(x^k) - z} + \delta.
\end{align}
Finally, by taking $ z = \pi_{X^*}(x^k) $, we obtain:
\begin{align}
\text{dist}_{X^*}(x^{k+1}) &\le \norm{x^{k+1} - \pi_{X^*}(x^k)} \overset{\eqref{eq:triangle}}{\le} \norm{\text{prox}_{\mu}^F(x^k) - \pi_{X^*}(x^k)} + \delta \nonumber\\
&\overset{\eqref{eq:0acc_recurrence}}{\le} \sqrt{\text{dist}_{X^*}^2(x^k) - 2\mu \left( F_{\mu}(x^k) - F^* \right)} + \delta \label{rel:decrease_distopt_IPPA} \\
&\le \text{dist}_{X^*}(x^k)\sqrt{1 - 2\mu \frac{ F_{\mu}(x^k) - F^* }{\text{dist}_{X^*}^2(x^k)} } + \delta \nonumber\\
&\le \text{dist}_{X^*}(x^k)\left(1 - \mu \frac{ F_{\mu}(x^k) - F^* }{\text{dist}_{X^*}^2(x^k)} \right) + \delta, \nonumber
\end{align}
where in the last inequality we used the fact $\sqrt{1-2a}\le 1-a$.
The last inequality leads to the first part from our result:
\begin{align}\label{rel:prelim_recurrence_(B)}
\text{dist}_{X^*}(x^{k+1}) & \le \text{dist}_{X^*}(x^k) - \mu \frac{ F_{\mu}(x^k) - F^* }{\text{dist}_{X^*}(x^k)} + \delta_k.
\end{align}
Assume that
\begin{align}\label{rel:Fmu_assumption}
\frac{ F_{\mu}(x^0) - F^* }{\text{dist}_{X^*}(x^0)} \ge \frac{\delta}{\mu}
\end{align}
and denote $K = \min\{k \ge 0 \;:\; \text{dist}_{X^*}(x^{k+1}) \ge \text{dist}_{X^*}(x^k) \}$. Then \eqref{rel:prelim_recurrence_(B)} has two consequences. First, obviously for all $k < K$:
\begin{align*}
F_{\mu}(x^k) - F^* \le \frac{1}{\mu}\left(\text{dist}_{X^*}^2(x^{k}) - \text{dist}_{X^*}^2(x^{k+1}) \right) + \frac{\delta}{\mu}\text{dist}_{X^*}(x^0).
\end{align*}
By further summing over the history we obtain:
\begin{align}\label{rel:1/k_MG_rate}
F_{\mu}(\hat{x}^k) - F^* \le \min\limits_{0 \le i \le k} F_{\mu}(x^i) - F^* & \le \frac{1}{k+1}\sum\limits_{i=0}^k F_{\mu}(x^i) - F^* \nonumber\\
&\le \frac{\text{dist}_{X^*}^2(x^{0}) }{\mu (k+1)} + \frac{\delta}{\mu}\text{dist}_{X^*}(x^0).
\end{align}
\noindent Second, since $K$ is the first iteration at which the residual optimal distance increases, then $\text{dist}_{X^*}(x^K) \le \text{dist}_{X^*}(x^{K-1}) \le \cdots \le \text{dist}_{X^*}(x^0)$ and \eqref{rel:prelim_recurrence_(B)} guarantees:
\begin{align*}
F_{\mu}(x^K) - F^* \le \frac{\delta}{\mu}\text{dist}_{X^*}(x^K) \le \frac{\delta}{\mu}\text{dist}_{X^*}(x^0).
\end{align*}
By unifying both cases we conclude that after at most $K_{\delta} = \left\lceil \frac{\text{dist}_{X^*}(x^0)}{\delta} \right\rceil$ iterations the threshold $F_{\mu}(x^{K_{\delta}}) - F^* \le \frac{2\delta}{\mu}\text{dist}_{X^*}(x^0)$ is reached. Notice that if \eqref{rel:Fmu_assumption} does not hold, then $K_{\delta} = 0$.
\noindent Now we use the same arguments as in \cite[Sec. I]{Nes:12Optima} to bound the norm of the gradients. Observe that the Lipschitz-gradient property of $F_{\mu}$ leads to:
\begin{align}
& F_{\mu}(\hat{x}^{k+1}) \le F_{\mu}(\hat{x}^k - \mu \nabla_{\delta} F(\hat{x}^k)) \nonumber \\
& = F_{\mu}(\hat{x}^k) - \mu \langle \nabla F_{\mu}(\hat{x}^k), \nabla_{\delta} F_{\mu}(\hat{x}^k) \rangle + \frac{\mu}{2}\norm{\nabla_{\delta} F_{\mu}(\hat{x}^k)}^2 \nonumber\\
& = F_{\mu}(\hat{x}^k) + \mu \langle \nabla_{\delta} F_{\mu}(\hat{x}^k) - \nabla F_{\mu}(\hat{x}^k), \nabla_{\delta} F_{\mu}(\hat{x}^k) \rangle - \frac{\mu}{2}\norm{\nabla_{\delta} F_{\mu}(\hat{x}^k)}^2 \nonumber\\
& = F_{\mu}(\hat{x}^k) - \frac{\mu}{4}\norm{\nabla_{\delta} F_{\mu}(\hat{x}^k)}^2 + \mu \langle \nabla_{\delta} F_{\mu}(\hat{x}^k) - \nabla F_{\mu}(\hat{x}^k), \nabla_{\delta} F_{\mu}(\hat{x}^k) \rangle - \frac{\mu}{4}\norm{\nabla_{\delta} F_{\mu}(\hat{x}^k)}^2 \nonumber\\
& \le F_{\mu}(\hat{x}^k) - \frac{\mu}{4}\norm{\nabla_{\delta} F_{\mu}(\hat{x}^k)}^2 + \mu \norm{\nabla_{\delta} F_{\mu}(\hat{x}^k) - \nabla F_{\mu}(\hat{x}^k)}^2 \nonumber\\
& = \! F_{\mu}(\hat{x}^k) \!-\! \frac{\mu}{4}\norm{\nabla_{\delta} F_{\mu}(\hat{x}^k)}^2 \!+\! \frac{\delta^2}{\mu} \! = \! F_{\mu}(\hat{x}^{k/2}) - \frac{k\mu}{8}\norm{\nabla_{\delta} F_{\mu}(\hat{x}^{k/2})}^2 + \frac{k\delta^2}{2\mu}. \label{rel:inexact_descent_lemma}
\end{align}
By using \eqref{rel:1/k_MG_rate} into \eqref{rel:inexact_descent_lemma}, then for $k \ge K_{\delta}$
\begin{align*}
\norm{\nabla_{\delta} F_{\mu}(\hat{x}^k)}^2
& \le \frac{4 (F_{\mu}(\hat{x}^k) - F^*)}{k\mu} + \frac{\delta^2}{\mu} \le \frac{8 \text{dist}_{X^*}(x^0)\delta}{k\mu^2} + \frac{4\delta^2}{\mu^2} \\
& \le \frac{8 \delta^2}{\mu^2} + \frac{4\delta^2}{\mu^2} = \frac{12\delta^2}{\mu^2}.
\end{align*}
\end{proof}
\begin{lemma}\label{lemma:decrease}
Let $\gamma$-HG hold for the objective function $F$. Then the IPPA sequence $\{x^k\}_{k \ge 0}$ with variable accuracies $\delta_k$ satisfies the following recurrences:
\noindent $(i)$ Under weak sharp minima $\gamma = 1$
\begin{align*}
\text{dist}_{X^*}(x^{k+1}) \le \max\left\{ \text{dist}_{X^*}(x^k) - \mu\sigma_F, 0 \right\} + \delta_k
\end{align*}
\noindent $(ii)$ Under quadratic growth $\gamma = 2$
\begin{align*}
\text{dist}_{X^*}(x^{k+1}) \le \frac{1}{\sqrt{1 + 2\mu \sigma_F}} \text{dist}_{X^*}(x^k)+ \delta_k
\end{align*}
\noindent $(iii)$ Under general Holderian growth $\gamma \ge 1$
\begin{align*}
\text{dist}_{X^*}(x^{k+1}) \le \max\left\{ \text{dist}_{X^*}(x^k) - \mu \varphi(\gamma) \sigma_F \text{dist}_{X^*}^{\gamma-1}(x^k), \left(1 - \frac{\varphi(\gamma)}{2}\right)\text{dist}_{X^*}(x^k) \right\} + \delta_k,
\end{align*}
\end{lemma}
\begin{proof}
$(i)$ Assume $\text{dist}_{X^*}(x^k) > \sigma_F \mu$; then from (the proof of) Theorem \ref{th:general_IPP_convrate} and Lemma \ref{lemma:deterministic_moreau_growth}:
\begin{align*}
\text{dist}_{X^*}(x^{k+1})
&\le \sqrt{\text{dist}_{X^*}^2(x^k) - 2\mu \left( F_{\mu}(x^k) - F^*\right)} + \delta_k \\
&\le \sqrt{\text{dist}_{X^*}^2(x^k) - 2\mu \left( \sigma_F\text{dist}_{X^*}(x^k)- \frac{\sigma^2_F\mu}{2}\right) } + \delta_k \\
& = \sqrt{\left( \text{dist}_{X^*}(x^k) - \mu \sigma_F\right)^2 } + \delta_k = \text{dist}_{X^*}(x^k) - \left(\mu \sigma_F - \delta_k \right).
\end{align*}
In short,
\begin{align*}
\text{dist}_{X^*}(x^{k+1}) \le \begin{cases} \text{dist}_{X^*}(x^k) - (\mu\sigma_F - \delta_k), & \text{if} \; \text{dist}_{X^*}(x^k)> \sigma_F\mu \\
\delta_k, & \text{if} \; \text{dist}_{X^*}(x^k) \le \sigma_F\mu \end{cases}
\end{align*}
\noindent $(ii)$ By using the same relations in the case $\gamma = 2$, we obtain:
\begin{align*}
\text{dist}_{X^*}(x^{k+1})
& \le \sqrt{\text{dist}_{X^*}^2(x^k) - 2\mu \left( F_{\mu}(x^k) - F^*\right)} + \delta_k \\
&\le \sqrt{\text{dist}_{X^*}^2(x^k) - \frac{2\mu\sigma_F}{1 + 2\mu\sigma_F} \text{dist}_{X^*}^2(x^k) } + \delta_k = \frac{1}{\sqrt{1 + 2\mu\sigma_F}} \text{dist}_{X^*}(x^k)+ \delta_k.
\end{align*}
\noindent $(iii)$ Under Holderian growth, similarly Theorem \ref{th:general_IPP_convrate} and Lemma \ref{lemma:deterministic_moreau_growth} lead to:
\begin{align*}
& \text{dist}_{X^*}(x^{k+1})
\le \text{dist}_{X^*}(x^k) - \mu \frac{ F_{\mu}(x^k) - F^*}{\text{dist}_{X^*}(x^k)} + \delta_k \\
&\le \text{dist}_{X^*}(x^k) - \mu \varphi(\gamma) \min\left\{ \sigma_F \text{dist}_{X^*}^{\gamma-1}(x^k), \frac{1}{2\mu}\text{dist}_{X^*}(x^k) \right\} + \delta_k \\
& = \max\left\{ \text{dist}_{X^*}(x^k) - \mu \varphi(\gamma) \sigma_F \text{dist}_{X^*}^{\gamma-1}(x^k), (1 - \varphi(\gamma)/2)\text{dist}_{X^*}(x^k) \right\} + \delta_k.
\end{align*}
\end{proof}
\begin{theorem} \label{th:central_recurrence}
Let $\alpha,\rho>0$, $\beta \in (0,1)$ and $h(r) = \max\{ r - \alpha r^{\rho}, \beta r \}$. Then the sequence $r_{k+1} = h(r_k)$
satisfies:
\noindent $(i)$ For $\rho \in (0,1)$:
\begin{align}\label{r_rate_rho<1}
r_{k} &\le
\begin{cases}
\left(1 - \frac{\alpha}{2r_0^{1-\rho}} \right)^k \left[r_0 - k \frac{\alpha}{2} \left( \frac{\alpha}{1-\beta}\right)^{\frac{\rho}{1-\rho}} \right], & \text{if} \;\; r_k > \left( \frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}}
\\
\beta^{k-k_0-1}\left( \frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}} , & \text{if} \;\; r_k \le \left( \frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}}.
\end{cases}
\end{align}
\noindent $(ii)$ For $\rho \ge 1$:
\begin{align}\label{r_rate_rho>1}
r_k \le
\begin{cases}
\hat{\beta}^k r_0, &\text{if} \; r_k > \left( \frac{1-\hat{\beta}}{\alpha}\right)^{\frac{1}{\rho - 1}}\\
\left[\frac{1}{\frac{1}{\min\{r_0^{\rho - 1}, \frac{1 - \hat{\beta}}{\alpha} \}} + (\rho-1) (k-k_0) \alpha}\right]^{\frac{1}{\rho-1}} , &\text{if} \; r_k \le \left( \frac{1-\hat{\beta}}{\alpha}\right)^{\frac{1}{\rho - 1}},
\end{cases}
\end{align}
where $k_0 = \min\left\{k \ge 0 \;:\; r_k \le \left( \frac{1-\hat{\beta}}{\alpha}\right)^{\frac{1}{\rho - 1}} \right\}$ and $\hat{\beta} = \max\left\{\beta, 1- 1/\rho \right\}$.
\end{theorem}
\begin{proof}
Denote $g(r) = r - \alpha r^{\rho}$.
\noindent \textbf{Consider $\rho \in (0,1)$.} In this case $g$ is nondecreasing, and thus $h$ is also nondecreasing; we have:
\begin{align*}
r_{k+1} &=
\begin{cases}
r_k - \alpha r_k^{\rho}, & \text{if} \;\; r_k > \left( \frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}} \\
\beta r_k, & \text{if} \;\; r_k \le \left( \frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}}
\end{cases}
\end{align*}
Observe that if $r_k > \left( \frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}}$ then, by using the monotonicity of $r_k$, we can further derive another bound:
\begin{align*}
r_{k+1} &
\le r_k - \frac{\alpha}{2} r_k^{\rho} - \frac{\alpha}{2} r_k^{\rho}
\le \left(1 - \frac{\alpha}{2r_k^{1-\rho}} \right) r_k - \frac{\alpha}{2}\left( \frac{\alpha}{1-\beta}\right)^{\frac{\rho}{1-\rho}} \\
&\le \left(1 - \frac{\alpha}{2r_0^{1-\rho}} \right) r_k - \frac{\alpha}{2}\left( \frac{\alpha}{1-\beta}\right)^{\frac{\rho}{1-\rho}}.
\end{align*}
Any sequence $u_k$ satisfying the recurrence $u_{k+1} \le (1-\xi)u_k - c$ can be further bounded as: $u_{k+1} \le (1-\xi)^k u_0 - c\sum_{i=0}^{k-1} (1-\xi)^i \le (1-\xi)^k u_0 - c\sum_{i=0}^{k-1} (1-\xi)^k = (1-\xi)^k [u_0 - k c].$ Thus, applying similar arguments to our sequence $r_k$, we refine the above bound as follows:
\begin{align*}
r_{k+1} &\le
\begin{cases}
\left(1 - \frac{\alpha}{2r_0^{1-\rho}} \right)^k \left[r_0 - k \frac{\alpha}{2} \left( \frac{\alpha}{1-\beta}\right)^{\frac{\rho}{1-\rho}} \right], & \text{if} \;\; r_k > \left( \frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}} \\
\beta^{k-k_0}\min\left\{r_0,\left( \frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}}\right\} , & \text{if} \;\; r_k \le \left( \frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}}.
\end{cases}
\end{align*}
\noindent \textbf{Now consider $\rho > 1$.} In this case, on one hand, the function $g$ is nondecreasing only on $\left(0,\left( \frac{1}{\alpha \rho} \right)^{\frac{1}{\rho -1}} \right]$. On the other hand, for $r \ge \left( \frac{1}{\alpha \rho} \right)^{\frac{1}{\rho -1}}$ it is easy to see that $g(r) \le \left(1- \frac{1}{\rho} \right)r$. These two observations lead to:
\begin{align*}
h(r) & = \max\{r - \alpha r^{\rho},\beta r\} \le \max \left\{r - \alpha r^{\rho},\left(1- \frac{1}{\rho} \right)r, \beta r \right\} \\
& = \max \left\{r - \alpha r^{\rho}, \hat{\beta} r \right\}:= \hat{h}(r),
\end{align*}
where $\hat{\beta} = \max\left\{1- \frac{1}{\rho}, \beta \right\} $. Since $\hat{h}$ is nondecreasing, $r_k \le \hat{h}^{(k)}(r_0)$. To determine the explicit convergence rate of $r_k$, based on \cite[Lemma 6, Section 2.2]{Pol:78} we make a last observation:
\begin{align}\label{rel:Poliak}
g^{(k)}(r) \le \frac{r}{\left(1 + (\rho-1) r^{\rho-1} k \alpha \right)^{\frac{1}{\rho-1}}} \le \left[\frac{1}{\frac{1}{r^{\rho - 1}} + (\rho-1) k \alpha }\right]^{\frac{1}{\rho-1}}.
\end{align}
Using this final bound, we are able to deduce the explicit convergence rate:
\begin{align*}
r_k \le \hat{h}^{(k)}(r_0)
&\le
\begin{cases}
\hat{\beta}^k r_0, &\text{if} \; r_k > \left( \frac{1-\hat{\beta}}{\alpha}\right)^{\frac{1}{\rho - 1}}\\
g^{(k-k_0)}(r_{k_0}) , &\text{if} \; r_k \le \left( \frac{1-\hat{\beta}}{\alpha}\right)^{\frac{1}{\rho - 1}}
\end{cases} \nonumber\\
&\overset{\eqref{rel:Poliak}}{\le}
\begin{cases}
\hat{\beta}^k r_0, &\text{if} \; r_k > \left( \frac{1-\hat{\beta}}{\alpha}\right)^{\frac{1}{\rho - 1}}\\
\left[\frac{1}{\frac{1}{\min\{r_0^{\rho - 1}, \frac{1 - \hat{\beta}}{\alpha} \}} + (\rho-1) (k-k_0) \alpha}\right]^{\frac{1}{\rho-1}} , &\text{if} \; r_k \le \left( \frac{1-\hat{\beta}}{\alpha}\right)^{\frac{1}{\rho - 1}}.
\end{cases}
\end{align*}
\end{proof}
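Before moving to the exact complexity estimates, the two regimes above can be observed on a toy instance (a minimal numerical sketch; the parameters $\alpha=0.1$, $\beta=0.5$, $r_0=1$ and the exponents $\rho=1/2$, $\rho=2$ are our own choices): for $\rho<1$ the iteration becomes exactly linear once $r_k$ drops below $\tau=\left(\frac{\alpha}{1-\beta}\right)^{\frac{1}{1-\rho}}$, while for $\rho=2$ it obeys the Polyak-type bound \eqref{rel:Poliak}.

```python
# Iterate r_{k+1} = max(r_k - alpha*r_k^rho, beta*r_k) in the two regimes.
alpha, beta, r0 = 0.1, 0.5, 1.0

# rho < 1: once r_k drops below tau, the beta-branch takes over and the
# decrease is exactly linear.
rho = 0.5
tau = (alpha / (1 - beta)) ** (1 / (1 - rho))  # = 0.04 here
r = r0
for _ in range(200):
    r_new = max(r - alpha * r ** rho, beta * r)
    if r <= tau:
        assert abs(r_new - beta * r) < 1e-15  # pure linear phase
    r = r_new
assert r <= 1e-6

# rho = 2: Polyak-type sublinear bound r_k <= r0 / (1 + alpha*r0*k).
rho = 2.0
r = r0
for k in range(1, 1000):
    r = max(r - alpha * r ** rho, beta * r)
    assert r <= r0 / (1 + alpha * r0 * k) + 1e-12
```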
\begin{corrolary}\label{corr:exact_complexity}
Under the assumptions of Theorem \ref{th:central_recurrence}, let $r_{k+1} = h(r_k)$ and $\epsilon > 0$. The sequence $r_k$ attains the threshold $r_k \le \epsilon $ after the following number of iterations:
\noindent $(i)$ For $\rho \in (0,1)$:
\begin{align}\label{exact_complexity_r_[1,2]}
K \ge \Min{ \frac{2r_0^{1-\rho}}{\alpha} \log \left(\frac{r_0}{\max\{\epsilon, \tau^{\rho}\alpha/2\}} \right), \frac{2r_0}{\tau^{\rho}\alpha} } + \frac{1}{\beta} \log\left( \frac{\Min{r_0,\tau}}{\epsilon} \right)
\end{align}
\noindent $(ii)$ For $\rho \ge 1$:
\begin{align}\label{exact_complexity_r_[2,infty]}
K \ge\frac{1}{\hat{\beta}} \log\left( \frac{r_0}{\tau(\hat{\beta})} \right) + \frac{1}{(\rho-1)\alpha} \left( \frac{1}{\epsilon^{\rho-1}} - \frac{1}{\Min{r_0,\tau(\hat{\beta})}^{\rho-1}} \right),
\end{align}
where $\tau (\beta) = \left(\frac{\alpha}{1-\beta} \right)^{\frac{1}{1-\rho}}$.
\end{corrolary}
\begin{proof}
$(i)$ Let $\rho \in (0,1)$. In the first regime of \eqref{r_rate_rho<1}, when $r_k > \tau(\beta)$, at most
\begin{align}
K_1^{(0,1)} \ge \Min{ \frac{2r_0^{1-\rho}}{\alpha} \log \left(\frac{r_0}{\max\{\epsilon, \tau^{\rho}\alpha/2\}} \right), \frac{2r_0}{\tau^{\rho}\alpha} }
\end{align}
iterations are necessary, while the second regime, i.e. $r_k \le \tau(\beta)$, lasts at most
\begin{align}
K_2^{(0,1)} \ge \frac{1}{\beta} \log\left( \frac{\Min{r_0,\tau}}{\epsilon} \right)
\end{align}
iterations before reaching $r_k \le \epsilon$. An upper bound on the total number of iterations is $K_1^{(0,1)} + K_2^{(0,1)}$.
\noindent $(ii)$ Let $ \rho > 1 $. Similarly, the first regime, when $r_k > \tau(\hat{\beta})$, has a maximal length of
$K_1^{(1,\infty)} \ge \frac{1}{\hat{\beta}} \log\left( \frac{r_0}{\tau(\hat{\beta})} \right).$
The second regime, while $r_k \le \tau(\hat{\beta})$, requires at most
$K_2^{(1,\infty)} \ge \frac{1}{(\rho-1)\alpha} \left( \frac{1}{\epsilon^{\rho-1}} - \frac{1}{\Min{r_0,\tau(\hat{\beta})}^{\rho-1}} \right) $ iterations to get $r_k \le \epsilon$.
\end{proof}
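The estimate \eqref{exact_complexity_r_[2,infty]} can be checked numerically in the simple case $r_0 \le \tau(\hat{\beta})$, where the first (linear) phase is empty (a standalone sketch; the constants $\rho=2$, $\alpha=0.1$, $\beta=0.5$, $r_0=1$, $\epsilon=10^{-2}$ are our own choices):

```python
import math

# Corollary (ii), case r0 <= tau(beta_hat): the predicted number of iterations
# K = (1/((rho-1)*alpha)) * (1/eps^(rho-1) - 1/min(r0, tau)^(rho-1))
# should suffice to reach r_K <= eps.
alpha, beta, rho, r0, eps = 0.1, 0.5, 2.0, 1.0, 1e-2
beta_hat = max(1 - 1 / rho, beta)                  # = 0.5
tau = (alpha / (1 - beta_hat)) ** (1 / (1 - rho))  # = 5.0, so r0 <= tau
K = math.ceil((1 / ((rho - 1) * alpha))
              * (1 / eps ** (rho - 1) - 1 / min(r0, tau) ** (rho - 1)))

r, iters = r0, 0
while r > eps:
    r = max(r - alpha * r ** rho, beta * r)
    iters += 1
assert iters <= K, (iters, K)
```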
\begin{lemma}\label{lemma:prelim_conv_rate}
Let $\alpha, \rho>0$ and $\beta \in (0,1)$. Let the sequences $\{r_k\}_{k \ge 0}$, $\{\delta_k\}_{k \ge 0}$ satisfy the recurrence:
\begin{align*}
r_{k+1} \le \max\{r_k -\alpha r_k^{\rho}, \beta r_k \}+ \delta_k.
\end{align*}
For $\rho \in (0,1)$, let $h(r) = \max\left\{r - \frac{\alpha}{2} r^{\rho} , \frac{1+\beta}{2} r \right\}$; then:
$$r_k \le \max\left\{h^{(k)}(r_0),h^{(k-1)}\left(\hat{\delta}_1 \right), \cdots, h\left(\hat{\delta}_{k-1} \right),\hat{\delta}_{k} \right\}.$$
For $\rho \ge 1$, let $\hat{h}(r)=\max\left\{h(r), \left( 1- \frac{1}{\rho}\right) r \right\}$; then:
$$r_k \le \max\left\{\hat{h}^{(k)}(r_0),\hat{h}^{(k-1)}\left(\hat{\delta}_1 \right), \cdots, \hat{h}\left(\hat{\delta}_{k-1} \right),\hat{\delta}_{k} \right\},$$
where $\hat{\delta}_k = \max\left\{\left(\frac{2\delta_k}{\alpha}\right)^{\frac{1}{\rho}},\frac{2\delta_k}{1-\beta} \right\}$.
\end{lemma}
\begin{proof}
Starting from the recurrence we get:
\begin{align*}
r_{k+1}
& \le \max\{r_k -\alpha r_k^{\rho}, \beta r_k \}+ \delta_k = \max\{r_k -\alpha r_k^{\rho} + \delta_k, \beta r_k+ \delta_k \} \\
& = \max\left\{r_k - \frac{\alpha}{2} r_k^{\rho} + \left(\delta_k - \frac{\alpha}{2} r_k^{\rho}\right), \frac{1+\beta}{2} r_k+ \left(\delta_k - \frac{1-\beta}{2}r_k\right) \right\}.
\end{align*}
If $\delta_k \le \min\left\{\frac{\alpha}{2} r_k^{\rho},\frac{1-\beta}{2}r_k\right\} $, or equivalently $r_k \ge \max\left\{\left(\frac{2\delta_k}{\alpha}\right)^{\frac{1}{\rho}},\frac{2\delta_k}{1-\beta}\right\}$, then we recover the recurrence:
\begin{align}\label{rel:max1}
r_{k+1}
& \le \max\left\{r_k - \frac{\alpha}{2} r_k^{\rho} , \frac{1+\beta}{2} r_k \right\}.
\end{align}
Otherwise, clearly
\begin{align}\label{rel:max2}
r_k \le \max\left\{\left(\frac{2\delta_{k}}{\alpha}\right)^{\frac{1}{\rho}},\frac{2\delta_{k}}{1-\beta}\right\}.
\end{align}
By combining both bounds \eqref{rel:max1} and \eqref{rel:max2}, we obtain:
\begin{align}\label{rel:combined_bounds_rho_(0,1)}
r_{k+1}
& \le \max\left\{r_k - \frac{\alpha}{2} r_k^{\rho} , \frac{1+\beta}{2} r_k, \left(\frac{2\delta_{k+1}}{\alpha}\right)^{\frac{1}{\rho}},\frac{2\delta_{k+1}}{1-\beta} \right\}.
\end{align}
Denote $h(r) = \max\left\{r - \frac{\alpha}{2} r^{\rho} , \frac{1+\beta}{2} r \right\}$ and $\hat{\delta}_k = \max\left\{\left(\frac{2\delta_k}{\alpha}\right)^{\frac{1}{\rho}},\frac{2\delta_k}{1-\beta} \right\}$. For $\rho \in (0,1)$, since both functions $ r \mapsto r - \frac{\alpha}{2} r^{\rho}$ and $r \mapsto \frac{1+\beta}{2} r $ are nondecreasing, $h$ is nondecreasing.
This fact allows us to apply the following induction to \eqref{rel:combined_bounds_rho_(0,1)}:
\begin{align}
r_{k+1}
&\le \max\left\{h(r_k), \hat{\delta}_{k+1}\right\} \le \max\left\{h \left( \Max{h(r_{k-1}), \hat{\delta}_{k}} \right), \hat{\delta}_{k+1}\right\} \nonumber\\
&\le \max\left\{h(h(r_{k-1})),h\left(\hat{\delta}_k \right),\hat{\delta}_{k+1}\right\} \nonumber\\
& \cdots \nonumber\\
&\le \max\left\{h^{(k+1)}(r_0),h^{(k)}\left(\hat{\delta}_1 \right), \cdots, h\left(\hat{\delta}_k \right),\hat{\delta}_{k+1} \right\}. \label{rel:composition_induction}
\end{align}
In the second case, when $\rho \ge 1$, the corresponding recurrence function $\hat{h}(r) = \max\left\{r - \frac{\alpha}{2} r^{\rho} , \left( 1- \frac{1}{\rho}\right) r, \frac{1+\beta}{2} r \right\}$ is again nondecreasing. Indeed, here $ r \mapsto r - \alpha r^{\rho}$ is nondecreasing only when $r \le \left( \frac{1}{\alpha \rho}\right)^{\frac{1}{\rho-1}}$. However, if $r > \left( \frac{1}{\alpha \rho}\right)^{\frac{1}{\rho-1}}$, then $\hat{h}(r) = \max\left\{1- \frac{1}{\rho}, \frac{1+\beta}{2} \right\}r$, which is also nondecreasing. Thus we get our claim. The monotonicity of $\hat{h}$ and the majorization $\hat{h}(r) \ge h(r)$ allow us to obtain, by a similar induction, an analogue of relation \eqref{rel:composition_induction} which holds with $\hat{h}$.
\end{proof}
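The conclusion of the lemma can be probed numerically: under a constant perturbation $\delta_k \equiv \delta$ the term $h^{(k)}(r_0)$ vanishes and the iterates should settle below $\hat{\delta}$ (a standalone sketch; $\rho=1/2$, $\alpha=0.1$, $\beta=0.5$, $\delta=10^{-4}$ are our own choices):

```python
# Worst-case instance of the perturbed recurrence
# r_{k+1} = max(r_k - alpha*r_k^rho, beta*r_k) + delta.
alpha, beta, rho, delta, r0 = 0.1, 0.5, 0.5, 1e-4, 1.0
delta_hat = max((2 * delta / alpha) ** (1 / rho), 2 * delta / (1 - beta))  # = 4e-4

r = r0
for _ in range(300):
    r = max(r - alpha * r ** rho, beta * r) + delta
# h^(k)(r0) -> 0, so the lemma predicts that r_k eventually stays below delta_hat.
assert r <= delta_hat * (1 + 1e-9), (r, delta_hat)
```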
\begin{proof}[Proof of Theorem \ref{th:IPPconvergence}]
\noindent $(i)$ Denote $r_{k} = \text{dist}_{X^*}(x^k)$. Since $\delta_k \le \delta_{k-1}$, by unrolling the recurrence in Lemma \ref{lemma:decrease} we get:
\begin{align}
r_{k+1}
&\le \max\left\{r_k - (\mu\sigma_F - \delta_k), \delta_k \right\} \nonumber \\
&\le \max\left\{r_{k-1} - [2\mu\sigma_F - \delta_k- \delta_{k-1}], \delta_k + \delta_{k-1} - \mu\sigma_F, \delta_k \right\} \nonumber \\
&\le \max\left\{r_{0} - \sum\limits_{i=0}^{k}(\mu\sigma_F - \delta_i), \delta_k + \max\left\{0, \delta_{k-1} - \mu\sigma_F, \cdots, \sum\limits_{i=0}^{k-1} (\delta_{i} - \mu\sigma_F) \right\} \right\} \label{main_recurrences}
\end{align}
\noindent By using Lemma \ref{max_lemma}, \eqref{main_recurrences} can be refined as:
\begin{align*}
r_{k+1}
&\le \max\left\{r_{0} - \sum\limits_{i=0}^{k}(\mu\sigma_F - \delta_i), \delta_k + \max\left\{0, \delta_{k-1} - \mu\sigma_F, \cdots, \sum\limits_{i=0}^{k-1} (\delta_{i} - \mu\sigma_F) \right\} \right\} \\
& \overset{\text{Lemma} \; \ref{max_lemma}}{\le} \max\left\{r_{0} - \sum\limits_{i=0}^{k}(\mu\sigma_F - \delta_i), \delta_k + \max\left\{0, \sum\limits_{i=0}^{k-1} \delta_{i} - \mu\sigma_F \right\} \right\} \\
& \le \max\left\{r_{0} - \sum\limits_{i=0}^{k}(\mu\sigma_F - \delta_i), \max\left\{\delta_k, \mu\sigma_F+ \sum\limits_{i=0}^{k} \delta_{i} - \mu\sigma_F \right\} \right\} \\
&= \max\left\{\max\{r_{0},\mu\sigma_F\} - \sum\limits_{i=0}^{k}(\mu\sigma_F - \delta_i), \delta_k \right\}.
\end{align*}
\noindent $(ii)$ Denote $\theta = \frac{1}{(1 + 2 \sigma_F \mu)^{1/2}}$. From Lemmas \ref{lemma:decrease} and \ref{lemma:first_sequence} we derive that:
\begin{align*}
\text{dist}_{X^*}(x^{k})
\le \theta \text{dist}_{X^*}(x^{k-1}) + \delta_{k-1}
\overset{\text{Lemma} \; \ref{lemma:first_sequence}}{\le} \theta^{\frac{k-4}{2}} \left(\text{dist}_{X^*}(x^{0}) + \Gamma\right) + \frac{\delta_{\lceil k/2 \rceil + 1}}{1-\theta}.
\end{align*}
\noindent $(iii)$ First consider $\gamma \in [1,2)$ and let $h(r) = \Max{r - \frac{\mu\varphi(\gamma)\sigma_F}{2} r^{\gamma-1}, \frac{1+\sqrt{1-\varphi(\gamma)}}{2}r }$. Then by Lemmas \ref{lemma:decrease} and \ref{lemma:prelim_conv_rate}, we have that:
\begin{align}\label{rel:prelim_conv_rate_1}
r_{k+1}
&\le \max\left\{h^{(k)}(r_0),h^{(k-1)}\left(\hat{\delta}_1 \right), \cdots, h\left(\hat{\delta}_{k-1} \right),\hat{\delta}_{k} \right\},
\end{align}
where $\hat{\delta}_k = \max\left\{\left(\frac{2\delta_k}{\mu\varphi(\gamma)\sigma_F}\right)^{\frac{1}{\rho}},\frac{2\delta_k}{1-\sqrt{1-\varphi(\gamma)}} \right\}$, with $\rho = \gamma - 1$ in Lemma \ref{lemma:prelim_conv_rate}.
Let $u_k = h^{(k)}(u_0)$ with $u_0 = r_0$, and let $\bar{\delta}_k = \Max{\hat{\delta}_k, h(\bar{\delta}_{k-1})}$. Then, since $h$ is nondecreasing, we get:
\begin{align*}
r_{k+1}
\le \max\left\{h^{(k)}(r_0),h^{(k-1)}\left(\hat{\delta}_1 \right), \cdots, h\left(\hat{\delta}_{k-1} \right),\hat{\delta}_{k} \right\} = \Max{u_k,\bar{\delta}_k }.
\end{align*}
Finally, by using the convergence rate upper bounds from Theorem \ref{th:central_recurrence}, we can determine the convergence rate order of $u_k$.
We can appeal to a similar argument when $\gamma \ge 2$, by using the nondecreasing function $\hat{h}(r) = \max\left\{h(r), \left( 1- \frac{1}{\rho}\right)r \right\}$ instead of $h$.
\end{proof}
\begin{theorem}\label{th:Holder_gradients}
\label{th:complexity_inner}
Let the function $f$ have $\nu$-H\"older continuous gradients with constant $L_f$ and $\nu \in [0,1]$. Also let $\mu > 0$, $\alpha \le \Min{ \frac{\mu}{2}, \frac{\delta^{2(1-\nu)}}{4\mu L_f^2} }$, $z^0 \in \text{dom}(\psi)$ and
\begin{align}\label{PsGM_routine_inner_complexity}
N \ge \left \lceil \frac{4\mu}{\alpha}\log\left( \frac{ \norm{z^0-\text{prox}_{\mu}^F(x)} }{\delta}\right) \right \rceil;
\end{align}
then PsGM($z^0,x, \alpha, \mu, N$) outputs $z^N$ such that $\norm{z^{N} - \text{prox}_{\mu}^F(x)} \le \delta$.
\noindent Moreover, assume in particular that $\nu = 0$ and $F$ satisfies WSM with constant $\sigma_F$. Also let $\alpha \in (0, \mu/2 ]$, let $Q$ be a closed convex feasible set and $\psi = \iota_Q$ its indicator function.
If
\begin{align}\label{PsGM_routine_const_complexity}
\text{dist}_{X^*}(x) \le \mu\sigma_F \qquad \text{and} \qquad N \ge \left \lceil 2\left(\frac{\text{dist}_{X^*}(x)}{\alpha L_f} \right)^2 \right \rceil
\end{align}
then PsGM($x,x, \alpha, \mu, N$) outputs $z^N$ satisfying $\text{dist}_{X^*}(z^N) \le \frac{\alpha L_f^2}{2\sigma_F}$.
\end{theorem}
\begin{proof}
For brevity we omit the counter $k$ and denote $z(x) := \text{prox}_{\mu}^F(x), z^+ := \text{prox}_{\mu}^\psi\left(z - \alpha \left[f'(z) +\frac{1}{\mu}(z - x) \right] \right)$. Recall the optimality condition:
\begin{align}\label{rel:strong_convexity_inner}
z(x) = \text{prox}_{\mu}^\psi\left(z(x) - \alpha \left[f'(z(x)) +\frac{1}{\mu}(z(x)-x) \right] \right).
\end{align}
By using the $\nu$-H\"older continuity we get:
\begin{align}\label{rel:bound_holdgrad}
\norm{f'(z) -f'(z(x))} \le L_f \norm{z-z(x)}^{\nu} \quad \forall z.
\end{align}
Then the following recurrence holds:
\begin{align}
& \norm{z^+-z(x)}^2 \nonumber\\
& \!\! \overset{\eqref{rel:strong_convexity_inner}}{=} \!\!\left\| \text{prox}_{\mu}^\psi \!\!\left( z\!\! - \!\!\alpha \left[f'(z) \!+\! \frac{1}{\mu}(z\!\!-\!\!x) \right] \right) \!\!-\!\! \text{prox}_{\mu}^\psi \left( z(x) \!\!-\!\! \alpha \left[f'(z(x))\! +\! \frac{1}{\mu}(z(x) \!\!-\!\! x) \right] \right) \right\|^2 \nonumber\\
& \le \left\| \left(1 - \frac{\alpha}{\mu}\right)(z-z(x)) + \alpha[f'(z(x)) - f'(z)] \right\|^2 \nonumber\\
&= \left(1 - \frac{\alpha}{\mu}\right)^2\norm{z - z(x)}^2 - 2\alpha \left(1 - \frac{\alpha}{\mu}\right) \langle f'(z) - f'(z(x)), z - z(x)\rangle \nonumber\\
& \hspace{6cm} + \alpha^2\norm{f'(z) -f'(z(x)) }^2 \label{rel:subgrad_prelim_recurrence}\\
&\overset{\eqref{rel:bound_holdgrad}}{\le} \left(1 - \frac{\alpha}{\mu}\right)^2\norm{z - z(x)}^2 + \alpha^2 L_f^2\norm{ z-z(x)}^{2\nu}.\nonumber
\end{align}
Obviously, a small stepsize $\alpha < \mu$ yields $\left(1 - \frac{\alpha}{\mu}\right)^2 \le 1 - \frac{\alpha}{\mu}$.
If the squared residual is dominant, i.e.
\begin{align}\label{rel:linconv_zone}
\norm{z - z(x)} \ge \delta \ge \left(2\alpha\mu L_f^2 \right)^{\frac{1}{2(1-\nu)}},
\end{align}
then:
\begin{align}\label{rel:GMinner_linconv}
\norm{z^+ - z(x)}^2 \le \left(1 - \frac{\alpha}{2\mu} \right) \norm{z-z(x)}^2.
\end{align}
By \eqref{rel:linconv_zone}, this linear decrease of the residual stops when $ \norm{z-z(x)} \le \delta$, which occurs after at most
$\left \lceil \frac{4\mu}{\alpha}\log\left( \frac{ \norm{z^0-z(x)} }{ \delta } \right) \right \rceil$ PsGM iterations.
\noindent To show the second part of our result, we recall that the first assumption in \eqref{PsGM_routine_const_complexity} ensures $z(x) = \pi_{X^*}(x)$.
By using \eqref{rel:subgrad_prelim_recurrence} with chosen subgradient $f'(x^*) = 0$, then the following recurrence is obtained:
\begin{align}
\norm{z^{\ell+1} \!-\! x^*}^2
&\!=\! \left(1 \!-\! \frac{\alpha}{\mu}\right)^2\norm{z^\ell \!-\! x^*}^2 \!-\! 2\alpha \left(1 \!-\! \frac{\alpha}{\mu}\right) \langle f'(z^\ell), z^\ell \!-\! x^*\rangle \!+\! \alpha^2\norm{f'(z^\ell)}^2 \nonumber\\
& \le \norm{z^\ell - x^*}^2 - 2\alpha \left(1 - \frac{\alpha}{\mu}\right)\sigma_F \text{dist}_{X^*}(z^\ell) + \alpha^2 L_f^2 \nonumber\\
& \le \norm{z^\ell - x^*}^2 - \alpha \sigma_F \text{dist}_{X^*}(z^\ell) + \alpha^2 L_f^2, \label{rel:reccurence_wsm}
\end{align}
where $x^* = \pi_{X^*}(x)$ and in the last inequality we used $\langle f'(z^\ell), z^\ell - x^*\rangle \ge F(z^\ell) - F^* \ge \sigma_F \text{dist}_{X^*}(z^\ell) $. If $\text{dist}_{X^*}(z^0) = \text{dist}_{X^*}(x) > \frac{\alpha L_f^2}{2\sigma_F}$, then as long as $\text{dist}_{X^*}(z^\ell) > \frac{\alpha L_f^2}{2\sigma_F}$, \eqref{rel:reccurence_wsm} turns into:
\begin{align}
\text{dist}_{X^*}^2(z^{\ell+1}) \le \norm{z^{\ell+1} - x^*}^2 & \le \norm{z^\ell - x^*}^2 - \frac{\left(\alpha L_f\right)^2}{2} \nonumber\\
& \le \text{dist}_{X^*}(x)^2 - \ell \frac{\left(\alpha L_f\right)^2}{2}.
\end{align}
To unify both cases, we further express the recurrence as:
\begin{align}
\text{dist}_{X^*}(z^{\ell+1})^2 \le \Max{ \text{dist}_{X^*}(x)^2 - \ell\frac{\left(\alpha L_f\right)^2}{2}, \frac{\alpha^2 L_f^4}{4\sigma_F^2} },
\end{align}
which confirms our above result.
\end{proof}
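The first part of the theorem can be illustrated on a one-dimensional instance (a standalone sketch, not the paper's experiments; we take $f(z)=|z|$, so $\nu=0$ and $L_f=1$, $\psi \equiv 0$, and $\text{prox}_{\mu}^F(x)$ is the soft-thresholding map; all constants are our own choices):

```python
import math

def psgm(z, x, alpha, mu, N, subgrad):
    """Proximal subgradient method on z -> f(z) + |z - x|^2 / (2*mu).
    Since psi = 0, the outer prox is the identity."""
    for _ in range(N):
        z = z - alpha * (subgrad(z) + (z - x) / mu)
    return z

# f(z) = |z|: nu = 0, L_f = 1; prox_mu^F(x) = sign(x) * max(|x| - mu, 0).
mu, L_f, delta, x, z0 = 1.0, 1.0, 0.05, 2.0, 0.0
prox = math.copysign(max(abs(x) - mu, 0.0), x)       # = 1.0 here
alpha = min(mu / 2, delta ** 2 / (4 * mu * L_f ** 2))
N = math.ceil((4 * mu / alpha) * math.log(abs(z0 - prox) / delta))
zN = psgm(z0, x, alpha, mu, N,
          lambda z: math.copysign(1.0, z) if z else 0.0)
assert abs(zN - prox) <= delta, (zN, prox, N)
```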
\begin{remark}
As the above theorem states, when the sequence $x^t$ is sufficiently close to the solution set, computing $x^{t+1}$ requires a number of PsGM iterations that depends on $\text{dist}_{X^*}(z^0)$. In other words, the estimate from \eqref{PsGM_routine_const_complexity} can be further reduced through a good initialization or a restarting technique. Such a restart, in the neighborhood of the optimum, is exploited by Algorithm \ref{algorithm:IPP-SGM} below.
\end{remark}
\end{document}
\begin{document}
\title{A Note on Counting Dependency Trees}
\begin{abstract}
We apply the symbolic method to deduce a functional equation that the generating function of the counting sequence of dependency trees must satisfy. Then we use the Lagrange inversion theorem to obtain a concrete expression for the counting sequence. We apply the famous Stirling's approximation to get an asymptotic approximation of the counting sequence. At last, we discuss additive parameters of dependency trees.
\end{abstract}
\keywords{dependency trees, counting, analytic combinatorics, symbolic method}
\section{Introduction}
Dependency is a one-to-one correspondence: for every element (e.g. word or morph) in the sentence, there is exactly one node in the structure of that sentence that corresponds to that element. The result of this one-to-one correspondence is that dependency grammars are word (or morph) grammars. All that exist are the elements and the dependencies that connect the elements into a structure. The structure can be represented by dependency trees (cf. [2]).
Counting dependency trees is a foundation for average-case analysis of algorithms that process dependency grammars. Hu et al. found a recurrence formula counting dependency trees in [1]. Marco Kuhlmann gave a closed form for the counting sequence of dependency trees in [3].
In this note, we apply the symbolic method to deduce a counting formula for dependency trees, and we use the famous Stirling's approximation to obtain an asymptotic approximation of the counting sequence.
\section{Preliminaries}
Dependency trees can be defined recursively: a dependency tree is either (i) a single-node tree, or (ii) a root node with several sub-dependency trees on its left and right, respectively.
The symbolic method can be considered as a set of intuitive combinatorial constructions that immediately translate into equations that the associated generating functions must satisfy. The symbolic method provides translation tools for a large number of combinatorial constructions on combinatorial classes. Here, we just introduce some elementary translation tools. The advanced symbolic method and extensive applications can be found in Chapter 5 of [4] and Part A of [5].
To specify combinatorial classes, we make use of neutral objects $\epsilon$ of size 0 and the neutral class $\mathcal{E}$ that contains a single neutral object.
Then we introduce three simple operations on combinatorial classes.
Given two classes $\mathcal{A}$ and $\mathcal{B}$ of combinatorial objects, we can build new classes as follows:
$$
\begin{aligned}
& \mathcal{A} + \mathcal{B} \ \text{is the class consisting of disjoint copies of the members of} \ \mathcal{A} \ \text{and} \ \mathcal{B}, \\
& \mathcal{A} \times \mathcal{B} \ \text{is the class of ordered pairs of objects, one from} \ \mathcal{A} \ \text{and one from} \ \mathcal{B}, \\
& \text{and} \ SEQ(\mathcal{A}) \ \text{is the class} \ \epsilon + \mathcal{A} + \mathcal{A} \times \mathcal{A} + \mathcal{A} \times \mathcal{A} \times \mathcal{A} + \dots \ \text{.}
\end{aligned}
$$
Following elementary symbolic method provides a simple correspondence between operations in combinatorial constructions and their associated generating functions.
\begin{theorem}
\textbf{(Symbolic method)}.
Let $\mathcal{A}$ and $\mathcal{B}$ be unlabelled classes of combinatorial objects.
If $A(z)$ is the generating function that enumerates $\mathcal{A}$
and $B(z)$ is the generating function that enumerates $\mathcal{B}$, then
$$
\begin{aligned}
A(z) + B(z) \ & \text{is the generating function that enumerates} \ \mathcal{A} + \mathcal{B} \\
A(z)B(z) \ & \text{is the generating function that enumerates} \ \mathcal{A} \times \mathcal{B} \\
\frac{1}{1 - A(z)} \ & \text{is the generating function that enumerates} \ SEQ(\mathcal{A}).
\end{aligned}
$$
\end{theorem}
The Lagrange inversion theorem is of particular importance for tree enumeration. The theorem allows us to extract coefficients from generating functions that are implicitly defined through functional equations.
\begin{theorem}
\textbf{(Lagrange inversion theorem)}. Suppose that a generating function $A(z)=\sum_{k \geqslant 0} {a_k z^k} $ satisfies the functional equation
$z=f(A(z))$ , where $f(z)$ satisfies $f(0)=0$ and $f'(0) \neq 0$. Then
$$
\begin{aligned}
a_n \equiv \left [ z^n \right ] A(z) = \frac{1}{n} \left [ u^{n-1} \right ] {\left (\frac{u}{f(u)} \right )}^n.
\end{aligned}
$$
\end{theorem}
\section{Counting Dependency Trees}
Let $\mathcal{T}$ be the combinatorial class of dependency trees. The definition of dependency trees implies the construction of the corresponding combinatorial class
$$
\begin{aligned}
\mathcal{T} = SEQ(\mathcal{T}) \times \circ \times SEQ(\mathcal{T})
\end{aligned},
$$
where $\circ$ denotes the root node of a dependency tree. Let $T(z)$ be the generating function of $\mathcal{T}$; the generating function of a single node is $z$. The symbolic method translates the construction into the functional equation
$$
\begin{aligned}
T(z) = \frac{1}{1-T(z)} \cdot z \cdot \frac{1}{1-T(z)}
\end{aligned}.
$$
With the help of \textbf{WolframAlpha}, we can get an exact expression for $T(z)$,
$$
T(z)= \frac{1}{3}\left( \frac{\sqrt[3]{3\sqrt{3} \sqrt{27z^2-4z} +27z -2} }{\sqrt[3]{2}} + \frac{\sqrt[3]{2}}{\sqrt[3]{3\sqrt{3} \sqrt{27z^2-4z} +27z -2} } +2\right) .
$$
However, it is difficult to extract the coefficients of the generating function from the expression above. Fortunately, the Lagrange inversion theorem can be applied. Through elementary algebra, we obtain $T(z)(1-T(z))^2=z$. Letting $f(u)=u(1-u)^2$, we have
$$
\begin{aligned}
t_n\equiv\left [ z^n \right ] T(z) & = \frac{1}{n} \left [ u^{n-1} \right ] \frac{1}{(1-u)^{2n}} \\
& = \frac{1}{n} \left [ u^{n-1} \right ] \sum_{k \geqslant 0} {\binom{k+2n-1}{k} u^k} \\
& = \frac{1}{n} \binom{n-1+2n-1}{n-1} \\
& = \frac{1}{n} \binom{3n-2}{n-1} .
\end{aligned}
$$
The sequence is \textbf{OEIS A006013}(cf. [3]).
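The closed form can be cross-checked by computing the coefficients of $T(z)$ directly from the functional equation $T(z) = z/(1-T(z))^2$ via truncated fixed-point iteration (a standalone sketch; the verification script is our own device, not part of the derivation):

```python
from math import comb

N = 11  # compute coefficients t_1, ..., t_10

def mul(a, b):
    """Product of two polynomials truncated at degree N-1."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

def inv_one_minus(t):
    """Series of 1/(1-t) for t with t[0] = 0, truncated at degree N-1."""
    g = [0] * N
    g[0] = 1
    for n in range(1, N):
        g[n] = sum(t[k] * g[n - k] for k in range(1, n + 1))
    return g

T = [0] * N
for _ in range(N):                 # each pass fixes one more coefficient
    g = inv_one_minus(T)
    T = [0] + mul(g, g)[: N - 1]   # T <- z * (1/(1-T))^2

assert T[1:7] == [1, 2, 7, 30, 143, 728]          # OEIS A006013
assert all(T[n] == comb(3 * n - 2, n - 1) // n for n in range(1, N))
```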
Then, with Stirling's approximation, we can obtain an approximation of $t_n$:
$$
\begin{aligned}
t_n & = \frac {n(2n)\,(3n)!} {n \,(3n)(3n-1)\,n!\,(2n)!} \\
& \sim \frac{2}{9n} \frac{ \sqrt{6 \pi n} \left ( 3n / e \right ) ^{3n} }
{\sqrt{2 \pi n} \left ( n / e \right ) ^{n} \sqrt{4 \pi n} \left ( 2n / e \right ) ^{2n} } \\
& = \frac {1} { \sqrt {27\pi} n^{3/2}} \left( \frac {27} {4} \right) ^n .
\end{aligned}
$$
Thus, we can conjecture that $z_0=4/27$ is the dominant singularity of the generating function $T(z)$.
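The quality of the approximation is easy to check numerically (our own quick verification, with $n=50$ as an arbitrary choice):

```python
import math

def t_exact(n):
    # t_n = (1/n) * binom(3n-2, n-1), always an integer
    return math.comb(3 * n - 2, n - 1) // n

def t_approx(n):
    # t_n ~ (27/4)^n / (sqrt(27*pi) * n^(3/2))
    return (27.0 / 4.0) ** n / (math.sqrt(27 * math.pi) * n ** 1.5)

ratio = t_exact(50) / t_approx(50)
assert abs(ratio - 1) < 0.01  # within 1% already at n = 50
```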
\section{Additive Parameters of Dependency Trees}
We can define an additive parameter to be any parameter whose cost function satisfies the linear recursive schema
$$
c(t)=e(t)+ \sum_{r \in t_s} c(r) ,
$$
where the sum is over all the subtrees of the root of $t$. The function $e$ is called the toll function. Let $C(z)$ be the cumulative generating function of an additive tree parameter $c(t)$ for dependency trees, and let $E(z)$ be the cumulative generating function of the associated toll function $e(t)$. We can deduce the relation between these two functions:
$$
\begin{aligned}
C(z) &\equiv \sum_{t \in \mathcal{T}} {c(t)z^{\left| t \right| }} \\
&= \sum_{t \in \mathcal{T}} {e(t)z^{\left| t \right| }} + \sum_{t \in \mathcal{T}} z^{\left| t \right| } \sum_{r \in t_s} c(r) \\
&= E(z) + \sum_{k \geqslant 0} (k+1) \sum_{t_1 \in \mathcal{T}} \cdots \sum_{t_k \in \mathcal{T}} \left( c(t_1) + \cdots + c(t_k) \right) z^{\left| t_1 \right| + \cdots + \left| t_k \right| +1} \\
&= E(z) + z\sum_{k \geqslant 0} (k+1) k C(z) T^{k-1}(z) \\
&= E(z) + \frac{2zC(z)}{ \left( 1- T(z) \right) ^3} .
\end{aligned}
$$
Thus, we obtain the equation of cumulative generating functions:
$$
C(z)= \frac{E(z)}{1-2z/\left( 1- T(z) \right) ^3} =E(z) \left( \frac{1-T(z)}{1-3T(z)} \right) .
$$
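As a consistency check, take the toll function $e(t) \equiv 1$, so that $c(t)=\left| t \right|$, $E(z)=T(z)$ and $C(z)=zT'(z)$; the equation then reads $zT'(z) = T(z)\left(1-T(z)\right)/\left(1-3T(z)\right)$, which can be verified on the first coefficients (a standalone sketch; the script is our own verification device):

```python
from math import comb

N = 12
t = [0] + [comb(3 * n - 2, n - 1) // n for n in range(1, N)]  # T(z) coefficients

def mul(a, b):
    """Product of two polynomials truncated at degree N-1."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def inv_one_minus(s):
    """Series of 1/(1-s) for s with s[0] = 0, truncated at degree N-1."""
    g = [1] + [0] * (N - 1)
    for n in range(1, N):
        g[n] = sum(s[k] * g[n - k] for k in range(1, n + 1))
    return g

# Right-hand side: T*(1-T)/(1-3T) = (T - T^2) * 1/(1-3T).
rhs = mul([a - b for a, b in zip(t, mul(t, t))],
          inv_one_minus([3 * a for a in t]))
# Left-hand side: z*T'(z) has n-th coefficient n*t_n.
assert all(rhs[n] == n * t[n] for n in range(N))
```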
\end{document}
\begin{document}
\renewcommand{\baselinestretch}{1.2}
\thispagestyle{empty}
\title[Automorphisms of non-singular nilpotent Lie algebras]{Automorphisms of non-singular nilpotent Lie algebras}
\author[Kaplan and Tiraboschi]{Aroldo Kaplan}
\address{\noindent
Facultad de Matem\'atica, Astronom\'\i a y F\'\i sica, Universidad
Nacional de C\'ordoba. CIEM -- CONICET. (5000) Ciudad
Universitaria, C\'ordoba, Argentina}
\email{(kaplan, tirabo)@famaf.unc.edu.ar}
\author[]{Alejandro Tiraboschi}
\thanks{This work was partially supported by CONICET, ANPCyT and Secyt (UNC) (Argentina)}
\subjclass{17B30,16W25}
\date{\today}
\begin{abstract} For a real, non-singular, 2-step nilpotent Lie algebra $\mathfrak{n}$, the group $\operatorname{Aut}(\mathfrak{n})/\operatorname{Aut}_0(\mathfrak{n})$, where $\operatorname{Aut}_0(\mathfrak{n})$ is the group of automorphisms which act trivially on the center, is the direct product of a compact group with the 1-dimensional group of dilations. Maximality of some automorphism groups of $\mathfrak{n}$ follows and is related to how close $\mathfrak{n}$ is to being of Heisenberg type. For example, at least when the dimension of the center is two, $\dim \operatorname{Aut}(\mathfrak{n})$ is maximal if and only if $\mathfrak{n}$ is of type $H$. The connection with fat distributions is discussed.
\end{abstract}
\maketitle
\begin{section}{Introduction}\label{section1} A 2-step nilpotent real Lie algebra $\mathfrak{n}$ with center $\mathfrak{z}$ is called {\it non-singular} \cite{E} if $\ad x: \mathfrak{n}\rightarrow \mathfrak{z}$ is onto for any $x\notin\mathfrak{z}$. Equivalently, it is given by a vector-valued antisymmetric form
$$[\ ,\ ]: \mathfrak{v}\times \mathfrak{v}\rightarrow \mathfrak{z},$$
$\mathfrak{v}=\mathfrak{n}/\mathfrak{z}$, such that the 2-forms $\lambda([u,v])$ on $\mathfrak{v}$ are non-degenerate for all $\lambda\in\mathfrak{z}^*$, $\lambda\not=0$.
We shall call such Lie algebras {\it fat algebras} for short, since they are the nilpotentizations, or symbols, of fat vector distributions.
While for $m=1$ there is only one fat algebra up to isomorphism, for $m\geq 2$ there is an uncountable number of isomorphism classes, and for $m\geq 3$ they form a wild set.
The group of automorphisms $\operatorname{Aut}(\mathfrak{n})$ is the semidirect product of the group $G(\mathfrak{n})$ of graded automorphisms of $\mathfrak{n}=\mathfrak{v}\oplus\mathfrak{z}$ with the abelian group $\Hom(\mathfrak{v},\mathfrak{z})$ times the group of dilations $(t,t^2)$. Hence, we concentrate on $G(\mathfrak{n})$.
We prove that there is an exact sequence
$$
1\rightarrow G_0 \rightarrow G \rightarrow O(m)
$$
where $G_0$ is the subgroup of $G $ of elements that act trivially on the center and $m$ is the dimension of this center.
In other words, there are positive metrics (inner products) on $\mathfrak{z}$ which are invariant under all of $\operatorname{Aut}(\mathfrak{n})$. If a metric $g$ is also given on $\mathfrak{v}$, as in the case of the nilpotentization of a subriemannian structure, we also consider the subgroups $K_0$, $K $, of graded automorphisms that leave $g$ invariant, which define a compatible exact sequence
$$
1\rightarrow K_0 \rightarrow K \rightarrow O(m).
$$
Next, we compute the terms in this sequence and the images $G/G_0$ and $K/K_0$, proving that the exactness of
$$
1\rightarrow \Lie(K_0) \rightarrow \Lie(K) \rightarrow \mathfrak{so}(m)\rightarrow 1
$$
is equivalent to $\mathfrak{n}$ being of Heisenberg type, while the exactness of
$$
1\rightarrow \Lie(G_0) \rightarrow \Lie( G) \rightarrow \mathfrak{so}(m)\rightarrow 1
$$
is strictly more general. As to $G_0(\mathfrak{n})$, we describe it in detail for the case $m=2$, leading to a proof that, at least in that case, $\dim \operatorname{Aut}(\mathfrak{n})$ is maximal if and only if $\mathfrak{n}$ is of Heisenberg type.
In the last section we explain the connection with the Equivalence Problem for fat subriemannian distributions.
Algebras of Heisenberg type, or {\it $H$-type}, arise as follows \cite{K}. If $\mathfrak{v}$ is a real unitary module over the Clifford algebra ${\mathbb C}l(\mathfrak{z})$ associated to a quadratic form on $\mathfrak{z}$, the identity
$$\langle z,[u,v]\rangle_\mathfrak{z} = \langle z\cdot u,v\rangle_\mathfrak{v}$$
with $z\in \mathfrak{z}\subset {\mathbb C}l(\mathfrak{z})$, $u,v\in \mathfrak{v}$,
defines a fat $[\ ,\ ]: \mathfrak{v}\times \mathfrak{v}\rightarrow \mathfrak{z}$. Alternatively, they are characterized by possessing a positive-definite metric such that the operator $z\cdot$ defined by the above equation satisfies $ z\cdot(z\cdot v)= - |z|^2 v$.
It follows from Adams' theorem on frames on spheres \cite{H} that for any fat algebra there is an $H$-type algebra with the same $\dim\mathfrak{z}$ and $\dim\mathfrak{v}$. That these were, in some sense, the most symmetric was expected from the properties of their sublaplacians \cite{BTV} \cite{CGN} \cite{GV} \cite{K}, but we found no explicit statements in this regard.
Finally, although the arguments below can be made more intrinsic, matrices are emphasized because they can be fed easily into MAGMA for the application of the methods of \cite{DG}.
\end{section}
\begin{section}{Automorphisms of fat algebras}\label{section2} Let $\mathfrak{n}$ be a 2-step Lie algebra with center $\mathfrak{z}$ and let $\mathfrak{v = n/z}$, so that
\begin{equation}\label{I}
\mathfrak{n}\cong \mathfrak{v}\oplus \mathfrak{z}
\end{equation}
and the Lie algebra structure is encoded into the map
$$
[\ ,\ ]: \Lambda^2\mathfrak{v}\rightarrow \mathfrak{z}.
$$
Let $n=\dim \mathfrak{v}$ and $m=\dim \mathfrak{z}$. Relative to a basis compatible with (\ref{I}), the bracket becomes an $\mathbb{R}^m$-valued antisymmetric form on $\mathbb{R}^n$ and an automorphism is a matrix of the form
$$\begin{pmatrix} a & 0 \\ c & b \end{pmatrix},\qquad a\in GL(n),\ b\in GL(m),\ c\in M_{n\times m}(\mathbb R)$$
such that
$$b([u,v])=[au,av].$$
$\operatorname{Aut}(\mathfrak{n})$ always contains the normal subgroup $\mathfrak{D(n)}$ of dilations and translations
$$\begin{pmatrix} tI_n & 0 \\ c & t^2I_m \end{pmatrix},\qquad t\in \mathbb{R}^*, \ c\in M_{n\times m}(\mathbb R).$$
Let
$$G=G(\mathfrak{n})=\{\begin{pmatrix} a & 0\\ 0 & b \end{pmatrix},\ a\in SL(n),\ b\in GL(m),\ b([u,v])=[au,av]\}.$$
Then $\operatorname{Aut}(\mathfrak{n})$ is the semidirect product of $G(\mathfrak{n})$ with $\mathfrak{D(n)}$.
Let
$$G_0= G_0(\mathfrak{n}) = \{\begin{pmatrix} a & 0\\ 0 & I_m \end{pmatrix},\ a\in SL(n),\ [au,av]=[u,v] \}, $$
the subgroup of automorphisms that act trivially on the center. These are Lie groups, $G_0$ is normal in $G$, and the quotient group
$$G/G_0 $$
can be identified with the group of $b\in GL(\mathfrak{z})$ such that
$ b([u,v])=[au,av] $ for some $a\in SL(\mathfrak{v})$. Obviously,
\begin{equation}\label{igualdad}
\dim \operatorname{Aut}(\mathfrak{n}) = nm+1 + \dim(G/G_0) + \dim(G_0).
\end{equation}
\
\begin{theorem}\label{th2.1} Let $\mathfrak{n}$ be a fat algebra with center $\mathfrak{z}$. Then there is a positive definite metric on $\mathfrak{z}$ invariant under $G(\mathfrak{n})$.
\end{theorem}
\begin{proof}
Fix arbitrary positive inner products on $\mathfrak{v}$ and $\mathfrak{z}$. For $z\in \mathfrak{z}$, $u,v\in \mathfrak{v}$
$$(T_zu,v)_\mathfrak{v} = (z,[u,v])_\mathfrak{z}$$
defines a linear map $z\mapsto T_z$ from $\mathfrak{z}$ to $\operatorname{End}(\mathfrak{v})$. Clearly,
$$\mathfrak{n}\ \text{fat}\ \Leftrightarrow\ T_z\in GL(\mathfrak{v}) \ \forall z\neq 0.$$
Hence the hypothesis ensures that the Pfaffian
$$P(z)=\det(T_z)$$
is non-zero on $\mathfrak{z}\setminus \{0\}$. This is a homogeneous polynomial of degree $n$, so it satisfies
\begin{equation}\label{I'}k \|z\|^n\leq |P(z)| \leq K\|z\|^n
\end{equation}
where $k,K$ are the minimum and maximum values of $|P|$ on the unit sphere, which are positive.
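For example, if $\mathfrak{n}$ is the $(2r+1)$-dimensional Heisenberg algebra with its standard inner products (so $m=1$, $n=2r$), then $T_z=zJ$ with $J$ the standard complex structure on $\mathbb{R}^{2r}$, and
$$P(z)=\det(zJ)=z^{2r},$$
so the two bounds hold with $k=K=1$.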
Let now $g_{a,b} :=\begin{pmatrix} a & 0\\ 0 & b \end{pmatrix}\in \operatorname{Aut}(\mathfrak{n})$. Then
$$T_{ b^{\tt t} z}=a^{\tt t} T_za$$
because
$(T_{ b^{\tt t} z}u,v)_\mathfrak{v} = ( b^{\tt t} z,[u,v])_\mathfrak{z} = ( z,b([u,v]))_\mathfrak{z} = ( z,[au,av])_\mathfrak{z} = (T_zau,av)_\mathfrak{v} = ( a^{\tt t} T_zau,v)_\mathfrak{v}.$
Consequently
$$
P( b^{\tt t} z) = (\det a)^2 P(z).
$$
In particular, if $g\in G$ then $P( b^{\tt t} z) = P(z)$.
This implies
$$
k \|b^{\tt t} z\|^n\leq |P(b^{\tt t} z)|= |P(z)| \leq K\|z\|^n
$$
for all $z$, therefore
$\|b\| \leq \sqrt[n]{K/k}$.
The group of $b\in GL(\mathfrak{z})$ such that $g_{a,b}\in \operatorname{Aut}(\mathfrak{n})$ for some $a\in SL(\mathfrak{v})$ is therefore bounded in $\operatorname{End}( \mathbb{R}^m)$. Its closure is a compact Lie subgroup of $GL(\mathfrak{z})$, necessarily contained in $O(\mathfrak{z})$ for some positive definite metric.
\end{proof}
From now on $\mathfrak{z}$ will be assumed to be endowed with such an invariant metric.
If a metric $g$ on $\mathfrak{v}$ is also fixed, as in the case of the nilpotentization of a subriemannian structure, define
the groups
$$ K= K(\mathfrak{n},g) = \{\begin{pmatrix} a & 0\\ 0 & b \end{pmatrix},\ a\in SO(\mathfrak{v}),\ b\in O(\mathfrak{z}),\ [au,av]=b[u,v] \}$$
$$ K_0=K_0(\mathfrak{n},g) = \{\begin{pmatrix} a & 0\\ 0 & I \end{pmatrix},\ a\in SO(\mathfrak{v}),\ [au,av]=[u,v] \}.$$
Let $\mathfrak{g,g_0,k,k_0}$ be the Lie algebras of $G,G_0,K,K_0$ respectively. Then there is the commutative diagram with exact rows
\begin{equation*}\label{ancla}
\begindc{0}[3]
\obj(10,20)[A11]{$0$}
\obj(20,20)[A12]{$\mathfrak{g}_0$}
\obj(30,20)[A13]{$\mathfrak{g}$}
\obj(42,20)[A14]{$\mathfrak{so}(m)$}
\obj(10,10)[A21]{$0$}
\obj(20,10)[A22]{$\mathfrak{k}_0$}
\obj(30,10)[A23]{$\mathfrak{k}$}
\obj(42,10)[A24]{$ \mathfrak{so}(m)$}
\mor{A11}{A12}{}
\mor{A21}{A22}{}
\mor{A22}{A12}{}[\atleft,\solidarrow]
\mor{A12}{A13}{}
\mor{A13}{A14}{}
\mor{A22}{A23}{}
\mor{A23}{A24}{}
\mor{A23}{A13}{}
\mor{A24}{A14}{}
\enddc
\end{equation*}
where the vertical arrows are the inclusions.
Below we prove that the bottom sequence extends to
$$0\rightarrow \mathfrak{k}_0\ \rightarrow \mathfrak{k}\rightarrow \mathfrak{so}(m)\rightarrow 0$$
if and only if $\mathfrak{n}$ is type $H$. This is not the case for the top one: the condition that
$$0\rightarrow \mathfrak{g}_0\ \rightarrow \mathfrak{g}\rightarrow \mathfrak{so}(m)\rightarrow 0$$
is exact defines a class of fat algebras strictly larger than type $H$. We describe it in the next section for $m=2$.
\begin{proposition}\label{theorem2.2} Let $\mathfrak{n}=\mathfrak{v}+ \mathfrak{z}$ be an algebra of type $H$. There is a metric on $\mathfrak{z}$ such that
$\mathfrak{g}/\mathfrak{g}_0 \cong \mathfrak{so}(m)$.
\end{proposition}
\begin{proof}
There is an inner product on $\mathfrak{v}$ such that the maps $J_z:=T_z$ satisfy the Canonical Anticommutation Relations
$$J_w J_z+J_zJ_w = -2\langle z,w\rangle I.$$
For $\|z\|=1$ let $r_z\in O(\mathfrak{z})$ be the reflection through the hyperplane orthogonal to $z$ and $J_z\in SL(\mathfrak{v})$ be as above. Then
$$g_{(J_z,-r_z)}=\begin{pmatrix} J_z & 0\\ 0 & -r_z \end{pmatrix}\in \operatorname{Aut}(\mathfrak{n}).$$
Indeed,
\begin{align*}
(w,[J_zu,J_zv])&= (J_w J_zu,J_zv ) = (-J_zJ_w u -2(z,w)u,J_zv)\\
&= -( J_zJ_w u ,J_zv)-2(z,w)(u,J_zv) = ( J_w u ,J_zJ_zv)+2(z,w)(J_zu,v)\\
&= - ( J_w u , v)+2(z,w)(J_zu,v)= ( J_{-w +2(z,w)z}u,v)\\
&= ( -w +2(z,w)z,[u,v])= (-r_z(w),[u,v])\\
&= (w ,-r_z([u,v])),
\end{align*}
so that
$$-r_z([u,v])= [J_zu,J_zv].$$
The Lie group generated by the elements $-r_z$, $\|z\|=1$, has finite index in
$O(\mathfrak{z})$.
\end{proof}
\begin{corollary}\label{corollary2.3} Let $\mathfrak{n}$ be a fat algebra with center of dimension $m$. Then
$$\dim(K/K_0) \leq \dim (G/G_0) \leq m(m-1)/2 $$
with equality achieved for any type $H$ algebra of the same dimension with center of the same dimension.
\end{corollary}
\
Since $\operatorname{Aut}(\mathfrak{n})/\operatorname{Aut}_0(\mathfrak{n}) = (G/G_0) \times (\text{dilations})$, one obtains
\
\begin{corollary} Let $\mathfrak{n}$ be a fat algebra with center of dimension $m$. Then
$$ \dim(\operatorname{Aut}(\mathfrak{n})/\operatorname{Aut}_0(\mathfrak{n})) \leq 1+ m(m-1)/2,$$
with equality achieved for any type $H$ algebra of the same dimension and with center of the same dimension.
\end{corollary}
\
A converse for Corollary \ref{corollary2.3} is
\
\begin{theorem} If $\mathfrak{n}$ is fat with center of dimension $m$ and
$$\dim( K/K_0 )= m(m-1)/2$$
for some metric on $\mathfrak{v}$, then $\mathfrak{n}$
is of type $H$.
\end{theorem}
\begin{proof}
The hypothesis implies that $ \mathfrak{k}/\mathfrak{k}_0= \mathfrak{g}/\mathfrak{g}_0\cong \mathfrak{so}(m)$, so that
$K/K_0$ acts transitively on the unit sphere $\{z\in\mathfrak{z} : |z|=1\}$. For $\begin{pmatrix} a & 0\\ 0 & b \end{pmatrix}$ in this group,
$-T_{bz} = aT_{z}a^{-1}$, hence $T_{bz}^2 = aT_{z}^2a^{-1}$. Since $T_z$ is invertible, we can choose the metric such that $T_{z_0}^2=-I$ for any given $z_0$. Therefore $T_z^2=-I$ for all $|z|=1$, which implies the assertion.
\end{proof}
\
\
Maximal dimension means there are isomorphisms
$$\operatorname{Lie}(K/K_0) = \operatorname{Lie}(G/G_0) \cong \mathfrak{so} (m).$$
Therefore the simply connected covers are isomorphic:
$Spin(m) \cong \widetilde{(G/G_0)_e}.$
The induced homomorphism
$$Spin(m) \rightarrow (G/G_0)_e$$
may or may not extend to a homomorphism
$$Pin(m) \rightarrow G/G_0 .$$
If it does extend, the extension may or may not be injective; when it is injective, it is an isomorphism.
Therefore, among the algebras for which $\dim(G/G_0)$ is maximal, those for which
$Pin(m) \cong G/G_0$
can be regarded as the most symmetric.
\begin{theorem}\label{thcliff} Suppose $\mathfrak{n}$ is a 2-step graded algebra such that $\operatorname{Aut}(\mathfrak{n})$ contains a copy of $Pin(m)$ inducing the standard action on $\mathfrak{z}$. Then
$\mathfrak{n}$ is type $H$.
\end{theorem}
\begin{proof}
The assumption implies that there is a linear map $\mathfrak{z}\rightarrow \operatorname{End}(\mathfrak{v})$, denoted by $z\mapsto J_z$ such that
$J_z^2 = - |z|^2I$ for all $z$ and
$$[J_zu,J_zv]= r_z([u,v])$$
for $u,v\in \mathfrak{v}$, $z\in \mathfrak{z}$, $|z|=1$, where $r_z$ is the reflection in $\mathfrak{z}$ with respect to the line spanned by $z$. $Pin(m)$ is the group generated by the $J_z$'s with $\|z\|=1$; it acts linearly on $\mathfrak{v}$ and is compact. Fix a metric on $\mathfrak{v}$ invariant under it.
We get, as in the proof of Theorem \ref{th2.1}, that if $\begin{pmatrix} a & 0\\ 0 & b \end{pmatrix}\in \operatorname{Aut}(\mathfrak{n})$, then
$$T_{ b^{\tt t} z}=a^{\tt t} T_za.$$
In particular:
$$T_{ r_x(z)}=J_x^{\tt t} T_z J_x = -J_x T_z J_x.$$
If $x=z$, we get $T_z = -J_zT_zJ_z$, thus $T_zJ_z = -J_z^{-1}T_z = J_zT_z$. If $x\perp z$, we get $T_z = J_xT_zJ_x$, thus $T_zJ_x = J_x^{-1}T_z = -J_xT_z$. It follows that $T_z^2$ commutes with $J_z$ and with $J_w$, $w\perp z$.
Now, let $z\in\mathfrak{z}$ and $w\perp z$. Let $R_w(t)$ be the rotation by the angle $2t$ from $z$ towards $w$. Then $R_w(t) = r_zr_{w(t)}$, with $w(t) = \cos(t)z + \sin(t)w$. It follows that
$$
\begin{pmatrix} J_zJ_{w(t)} & 0\\ 0 & R_w(t) \end{pmatrix}
$$
is an orthogonal automorphism and, therefore, satisfies
$$
T_{R_w(t)z} = (J_zJ_{w(t)})^{\tt t} T_z (J_zJ_{w(t)}).
$$
Since $(J_zJ_{w(t)})^{\tt t} = (J_zJ_{w(t)})^{-1}$,
$$
T_{R_w(t)z}^2 = (J_zJ_{w(t)})^{\tt t} T_{z}^2 (J_zJ_{w(t)}) = J_{w(t)}J_zT_{z}^2J_zJ_{w(t)}.
$$
Since $T_z^2$ commutes with $J_z$ and $J_w$,
\begin{equation}\label{t2}
T_{R_w(t)z}^2 = T_{z}^2J_{w(t)}J_zJ_zJ_{w(t)} = -T_{z}^2J_{w(t)}J_{w(t)}.
\end{equation}
But $J_{w(t)}^2 = -I$, so that
(\ref{t2}) implies that
$$
T_{R_w(t)z}^2 = T_{z}^2.
$$
For every unit $z' \in \mathfrak{z}$ we can choose $w\in \mathfrak{z}$ and $t \in \mathbb R$ such that $R_w(t)z = z'$, so we get
$$
T_{z'}^2 = T_{z}^2, \quad \text{ for all } z' \in \mathfrak{z}, |z'| =1.
$$
The anti-symmetry of the bracket implies that $T_z$ is skew-symmetric. Rescaling the scalar product on $\mathfrak{v}$ we obtain
that $T_{z}^2 = -I$, so $T_{z'}^2 = -I$ for all $z' \in \mathfrak{z}$, $|z'| =1$. Therefore $\mathfrak{n}$ is type $H$.
\end{proof}
\end{section}
\begin{section}{The case of center of dimension 2}\label{section3}
In this section we compute the groups $G,G_0, G/G_0$ in the case $m=2$. The various types are parametrized by pairs
$$(\mathbf{c},\mathbf{r})
\in (\mathbb{U}^\ell /SL(2,\mathbb{R}))
\times \mathbb{Z}_+^\ell $$
where $\mathbb{U}$ is the upper-half plane and $4\sum_j r_j
= \dim \mathfrak{n}-2$.
As a corollary we conclude that $\dim\operatorname{Aut}(\mathfrak{n})$ is maximal if and only if $\mathfrak{n}$ is of type $H$. These are the
complex Heisenberg algebras of various dimensions regarded as real Lie algebras.
First we recall the normal form for fat algebras with $m=2$ deduced from \cite{LT}. Given $c= a + bi\in \mathbb{C} $, let
$$
Z(c)= \begin{pmatrix} a &b \\ -b &a \end{pmatrix}.
$$
If $r\in \mathbb{Z}_+$, set
$$
A(c,r) = \begin{pmatrix} Z(c) & & & \\ I_2 & Z(c) & & \\& & \ddots & \\ & & I_2 & Z(c)\end{pmatrix}
$$
a $2r\times 2r$-matrix. If $\mathbf{c}=(c_1,...,c_\ell)\in \mathbb{C}^\ell$ and $\mathbf{r}=(r_1,...,r_\ell)\in \mathbb{Z}_+^\ell$, set
$$
A(\mathbf{c},\mathbf{r}) = \begin{pmatrix} A(c_1,r_1)& & & \\ & A(c_2,r_2) & & \\& & \ddots & \\ & & & A(c_\ell,r_\ell) \end{pmatrix}
$$
which is a $2s\times 2s$ matrix, $s=r_1+...+r_\ell$.
Let now $\phi, \psi_{(\mathbf{c},\mathbf{r})}$ be the 2-forms on $\mathbb{R}^{4s}$ whose matrices in the standard basis are
\begin{equation}\label{psi} [\phi]=\begin{pmatrix} 0_{ }&- I_{ 2s } \\I_{2s } & 0_{ } \end{pmatrix}\qquad
[\psi_{(\mathbf{c},\mathbf{r})}]= \begin{pmatrix} 0_{ }& A(\mathbf{c},\mathbf{r}) \\-A^{\tt t}(\mathbf{c},\mathbf{r}) & 0_{ } \end{pmatrix}.
\end{equation}
Then
$$
[u,v]_{(\mathbf{c},\mathbf{r})} = (\phi (u,v),\psi_{(\mathbf{c},\mathbf{r})} (u,v))=\langle u,[\phi]v\rangle e_1 + \langle u,[\psi_{(\mathbf{c},\mathbf{r})}]v\rangle e_2
$$
is an $\mathbb{R}^{2}$-valued antisymmetric 2-form on $\mathbb{R}^{4s}$. Let
$$
\mathfrak{n}_{(\mathbf{c},\mathbf{r})} = \mathbb{R}^{4s}\oplus \mathbb{R}^2
$$
be the corresponding Lie algebra.
Define $M_{(\mathbf{c},\mathbf{r})}\in\operatorname{End}(\mathfrak{v})$ by
$$
\phi(M_{(\mathbf{c},\mathbf{r})}u,v) = \psi_{(\mathbf{c},\mathbf{r})}(u,v),
$$
whose matrix is
$$
[M_{(\mathbf{c},\mathbf{r})}] = \begin{pmatrix} -A_{(\mathbf{c},\mathbf{r})}^{\tt t}& 0 \\ 0 & -A_{(\mathbf{c},\mathbf{r})}\end{pmatrix}.
$$
Then we have
\begin{equation}\leftarrowbel{corchete2}
[u,v]_{(\mathbf{c},\mathbf{r})} = \phi (u,v)e_1+\phi(M_{(\mathbf{c},\mathbf{r})}u,v)e_2 ,\text{ for } u,v \in \mathbb R^{4s}.
\end{equation}
One can deduce from \cite{LT}:
\begin{proposition}\label{theorem3.1}
\begin{enumerate}
\item[(a)] Every fat algebra with center of dimension 2 is isomorphic to some $\mathfrak{n}_{(\mathbf{c},\mathbf{r})}$ with $\mathbf{c} \in \mathbb{U}^\ell$.
\item[(b)] Two of these are isomorphic if and only if the $\mathbf{r}$'s coincide up to permutations and the $\mathbf{c}$'s differ by some M\"obius transformation acting componentwise.
\item[(c)] $\mathfrak{n}_{(\mathbf{c},\mathbf{r})}$ is of type $H$ if and only if $\mathbf{c}=(c,\ldots,c)$ and $\mathbf{r}=(1,\ldots,1)$.
\end{enumerate}
\end{proposition}
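For instance, $\mathfrak{n}_{(i,1)}=\mathbb{R}^4\oplus\mathbb{R}^2$ is the $6$-dimensional complex Heisenberg algebra, which is of type $H$, while the single $4\times 4$ block
$$A(i,2)=\begin{pmatrix} Z(i) & 0\\ I_2 & Z(i)\end{pmatrix}$$
gives a $10$-dimensional fat algebra $\mathfrak{n}_{(i,2)}$ which, by (c), is not of type $H$.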
Let now
$$\mathfrak{n} = \mathfrak{n}_{(\mathbf{c},\mathbf{r})}$$
be fat and let $G=G(\mathfrak{n})$, etc.
We denote by $\hat{\mathfrak{n}} $ the algebra obtained by replacing the matrices $A(c,r)$ by their semisimple parts and setting all $c_j=\sqrt{-1}$. The resulting $\hat A(c,r)$ consists of blocks $\begin{pmatrix} 0 & 1 \\-1 & 0 \end{pmatrix}$ along the diagonal, and $\hat{\mathfrak{n}} $ is isomorphic to the $H$-type algebra ${\mathfrak{n}}_{((i,\ldots,i),(1,\ldots,1))}$. The correspondence
$$\mathfrak{n}\mapsto \mathfrak{\hat n}$$
is functorial and seems extendable inductively to fat algebras with center of any dimension, although here we will maintain the assumption $m=2$.
\medbreak
\begin{lemma}\label{theorem3.2}
$
G_0(\mathfrak{n}) \subset G_0(\mathfrak{\hat n})
$ and $\dim\operatorname{Aut}( \mathfrak{n})\leq \dim\operatorname{Aut}( \hat{\mathfrak{n}})$.
\end{lemma}
\begin{proof}
Let $\phi$, $\psi$, $M_{(\mathbf{c},\mathbf{r})}\in\operatorname{End}(\mathfrak{v})$ be as above, so that
$$
\phi(M_{(\mathbf{c},\mathbf{r})}u,v) = \psi_{(\mathbf{c},\mathbf{r})}(u,v).
$$
By formula (\ref{corchete2}), $g \in G_0( \mathfrak{n}_{(\mathbf{c},\mathbf{r})})$ if and only if
$$
\phi(u,v) =\phi (gu,gv),\qquad \phi(M_{(\mathbf{c},\mathbf{r})}u,v)=\phi(M_{(\mathbf{c},\mathbf{r})}gu,gv)= \phi(g^{-1}M_{(\mathbf{c},\mathbf{r})}gu,v),
$$
i.e., if and only if $g\in Sp(\phi)$ and commutes with $M_{(\mathbf{c},\mathbf{r})}$. In particular it commutes with the semisimple part $\hat M_{(\mathbf{c},\mathbf{r})}$.
This is conjugate to a matrix having blocks $ Z(c)= \begin{pmatrix} \Re(c) &\Im(c) \\ -\Im(c) &\Re(c) \end{pmatrix} $ for various $c\in \mathbb{C}$ along the diagonal, and zeros elsewhere. Every matrix commuting with such a matrix will surely commute with the one having all $c=i$. It follows that
$g$ also preserves $\phi(\hat M_{(\mathbf{c},\mathbf{r})}u,v)$ and, therefore, it is an automorphism of $\hat{\mathfrak{n}}$ as well. Thus,
$$
G_0( \mathfrak{n} )\subset G_0( \hat{\mathfrak{n}}).
$$
From Corollary \ref{corollary2.3},
$
\dim(G( \mathfrak{n})/G_0( \mathfrak{n} )) \le \dim(G( \hat{\mathfrak{n}} )/ G_0( \hat{\mathfrak{n}} )),
$
and therefore
$
\dim G( \mathfrak{n} ) = \dim(G( \mathfrak{n} )/G_0( \mathfrak{n} )) +\dim G_0( \mathfrak{n} ) \le \dim(G( \hat{\mathfrak{n}})/ G_0( \hat{\mathfrak{n}}))+ \dim G_0( \hat{\mathfrak{n}} ) = \dim G( \hat{\mathfrak{n}}).
$
Formula (\ref{igualdad}) implies
$\dim \operatorname{Aut}(\mathfrak{n})\leq \dim \operatorname{Aut}(\mathfrak{\hat n})$, as claimed.
\end{proof}
\renewcommand\b{\mathbf }
\
Next we will describe $\mathfrak g_0(\mathfrak{n}_{(c,r)})$ for $c\in \mathbb{U}$ and $r\in \mathbb{Z}_+$, i.e., the case when the matrices $A$ consist of a single block.
Since $c$ is $SL(2,\mathbb{R})$-equivalent to $i$, it is enough to take $c=i$. Define the $2\times 2$-matrices
$$
\b 1 = \begin{pmatrix} 1&0\\0&1\end{pmatrix},\quad \b i = \begin{pmatrix} 0&-1\\1&0\end{pmatrix}, \quad \b x=\begin{pmatrix} 0&1\\1&0\end{pmatrix},\quad
\b y = \begin{pmatrix} -1&0\\0&1\end{pmatrix},
$$
and let $M_r(\mathbb R\langle \b 1,\b i\rangle)$ and $M_r(\mathbb R\langle \b x,\b y\rangle)$ denote the real vector spaces of $r \times r$ matrices with coefficients in the span of $\b 1,\b i$ and $ \b x,\b y $ respectively. Then the vector space
$$
\mathcal R(r) = \left\{\begin{pmatrix} A&B\\ C&D\end{pmatrix}:\; A,D \in M_r(\mathbb R\langle \b 1,\b i\rangle), \;B,C \in M_r(\mathbb R\langle \b x,\b y\rangle) \right\},
$$
is actually a matrix algebra.
Note that
$$
\b 1^{\tt t} =\b 1,\quad \b i^{\tt t} = -\b i,\quad \b x^{\tt t} = \b x,\quad \b y^{\tt t} =\b y.
$$
Letting
$A^{\tt t}$ denote the transpose of an $\mathbb R$-matrix and $A^t$, $A^*$ the transpose and conjugate transpose of $\mathbb R[\b i,\b x,\b y]$-matrices, one obtains
$$
A^{\tt t} = A^*
$$ for $A \in M_r(\mathbb R\langle \b 1,\b i\rangle)$ while
$$A^{\tt t} = A^t $$
for $A \in M_r(\mathbb R\langle \b x,\b y\rangle)$.
With the notation
$$J_1 = [\phi]\qquad J_2 = [\psi_{((i,\ldots,i),(1,\ldots,1))}],$$
$$
\mathfrak g_0(\hat{\mathfrak n}) = \left\{ X \in M_{4r}(\mathbb R): J_1X + X^{\tt t}J_1 = 0, J_2X + X^{\tt t}J_2 = 0\right\}.
$$
From \cite{S} we know that
$$\mathfrak g_0(\hat{\mathfrak n})\cong \mathfrak{sp}(r,\mathbb C)^{\mathbb{R}}.$$
Changing basis,
$$
\mathfrak g_0(\hat{\mathfrak n}) = \left\{X \in \mathcal R(r): \; J_1X + X^{\tt t}J_1 = 0, \; J_2X + X^{\tt t}J_2 = 0\right\}
$$
where
$$
J_1 = \begin{pmatrix} 0&I_r\\-I_r&0\end{pmatrix}, \quad
J_2 = \begin{pmatrix} 0&\mathbf{i}I_r\\\mathbf{i}I_r&0\end{pmatrix}.
$$
This gives an alternative description of this algebra:
$$
\mathfrak g_0(\hat{\mathfrak n}) = \left\{\begin{pmatrix} A&B\\ C&-A^*\end{pmatrix}: \ A \in M_r(\mathbb R\langle \b 1,\b i\rangle),\ B,C \in M_r(\mathbb R\langle \b x,\b y\rangle),\ B^t=B,\ C^t=C\right\}.
$$
We now restrict our attention to matrices $\begin{pmatrix} A&B\\ C&-A^*\end{pmatrix}$ in $\mathfrak g_0(\hat{\mathfrak n})$ where $A,B,C$ have the respective forms
\begin{align*}
\begin{pmatrix} a_{1} & a_{2} &\cdots &a_{r} \\0 & \ddots &\ddots& \vdots \\\vdots &\ddots&\ddots& a_{2}\\0 & \cdots &0 &a_{1}\end{pmatrix}
\qquad \begin{pmatrix} b_{1} &\cdots& b_{r-1} &b_{r} \\ \vdots& \iddots &\iddots& 0 \\b_{r-1} &\iddots&\iddots&\vdots\\b_{r} & 0 &\cdots &0\end{pmatrix}
\qquad \begin{pmatrix} 0 &\cdots& 0 &c_{1 } \\\vdots & \iddots &\iddots& c_{2} \\0 &\iddots&\iddots& \vdots\\c_{1} & c_{2} &\cdots &c_{r}\end{pmatrix}
\end{align*}
with coefficients in $M_2(\mathbb{R})$. Let $\b A_k=\begin{pmatrix} A&0\\0&-A^*\end{pmatrix}$ be the matrix with $a_{k} =\b 1$ and zeros elsewhere, and $\b A'_k$ the matrix of the same form but with $a_{k} =\b i$ and zeros elsewhere. Similarly, let $\b B_k$ (resp. $\b C_k$) be the matrix
$\begin{pmatrix}0&B\\0&0\end{pmatrix}$ (resp., $\begin{pmatrix} 0&0\\ C&0\end{pmatrix}$)
with $b_{k}$ (resp. $c_{k}$) equal to $\b x$ and zeros elsewhere, and $\b B'_k$ (resp. $\b C'_k$) with $b_{k}$ (resp. $c_{k}$) equal to $\b y$ and zeros elsewhere.
\begin{theorem} \label{th3.3}Let $\mathfrak{n} = \mathfrak{n}_{({c},{r})}$, $(c,r) \in \mathbb{U}\times \mathbb{Z}_+$, and regard
$\mathfrak g_0(\mathfrak n)$ as a subalgebra of $\mathfrak{gl}(\mathfrak v)$. Then,
\begin{enumerate}
\item $\mathfrak g_0(\mathfrak n)$ is the $\mathbb R$-span of $\b A_i,\b A'_i,\b B_i,\b B'_i,\b C_i,\b C'_i$ for $1 \le i \le r$.
\item The semisimple part of ${\mathfrak g}_0(\mathfrak n) $ is the span of $\b A_1,\b A'_1,\b B_1,\b B'_1,\b C_1,\b C'_1$.
\item The solvable radical is the span of $\b A_i,\b A'_i,\b B_i,\b B'_i,\b C_i,\b C'_i$ with $1 <i\le r$.
\end{enumerate}
In particular, the $\mathbb R$-dimension of ${\mathfrak g}_0(\mathfrak{n})$ is equal to $6r$ and the semisimple part of ${\mathfrak g}_0(\mathfrak n) $ is isomorphic to $\mathfrak{sp}(1,\mathbb C)$.
\end{theorem}
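For instance, for $r=1$ we have $N=0$, the three equations in the proof below are vacuous, and
$$\mathfrak g_0(\mathfrak n)=\mathfrak g_0(\hat{\mathfrak n})=\left\{\begin{pmatrix} A&B\\ C&-A^*\end{pmatrix}:\ A\in\mathbb{R}\langle\b 1,\b i\rangle,\ B,C\in\mathbb{R}\langle\b x,\b y\rangle\right\}\cong\mathfrak{sp}(1,\mathbb C)^{\mathbb{R}},$$
of real dimension $6$, in accordance with the theorem.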
\begin{proof} It is enough to consider the case $\mathfrak{n} = \mathfrak{n}_{({i},{r})}$.
Let $T_2 = [\psi_{(i,r)}]$ and write
$T_2 = J_2 + N_2$ where
$$
N_2 = \begin{pmatrix} 0&N\\-N^t&0\end{pmatrix}, \text{ with } N = \begin{pmatrix} 0 &\cdots &0 &0 \\ 1 & \ddots & 0& 0 \\ &\ddots&\ddots& \\ 0 & \cdots &1 &0\end{pmatrix}.
$$
From Lemma \ref{theorem3.2},
$
{\mathfrak g}_0(\mathfrak n) = \left\{ X \in \mathfrak g_0(\hat{\mathfrak n}):\ T_2X + X^{\tt t}T_2 = 0\right\}
$. As $\mathfrak g_0(\mathfrak n) \subset \mathfrak g_0(\hat{\mathfrak n})$ one obtains
$$
{\mathfrak g}_0(\mathfrak n) = \left\{ X \in \mathfrak g_0(\hat{\mathfrak n}) :N_2X + X^{\tt t}N_2 = 0\right\}.
$$
The conditions on $\begin{pmatrix} A&B\\ C&-A^*\end{pmatrix} \in {\mathfrak g}_0(\mathfrak n)$ are, explicitly,
\begin{align}
0 &= NC -C^tN^t = NC - (NC)^t \label{(1)} \\
0 &= N^tA - AN^t \label{(3)} \\
0 &= N^tB - B^tN = N^tB -( N^tB)^t. \label{(4)}
\end{align}
For the first equation, note that $NC$ is symmetric if and only if $c_{i,j+1} = c_{j,i+1}$ and $c_{1,j}=0$ for $i,j<r$. Since $C$ is symmetric, $c_{i,j+1} = c_{j,i+1} =c_{i+1,j}$ and $c_{1,j}=0$ for $i,j<r$. We conclude:
if $i+j =k \le r$, then $c_{i,j} = c_{i,k-i} = c_{i-1,k-i +1} = c_{i-2,k-i+2}=\cdots =c_{1,k-1} =0$;
if $i+j =k > r$, then $c_{i,j} = c_{i,k-i} = c_{i+1,k-i -1} = c_{i+2,k-i-2}=\cdots = c_{r,k-r}$.
Thus, the strict upper antidiagonals are zero and each lower antidiagonal has all its elements equal.
For the second equation, note that $N^t$ and $A$ must commute. This is equivalent to $a_{i,j} = a_{t,s}$ when $j-i=s-t$ and $a_{i,1}= 0$ for $i>1$.
The first condition implies that each diagonal has all its elements equal, while the second implies that the strict lower diagonals are zero.
Equation (\ref{(4)}) is analogous to equation (\ref{(1)}): the condition that $N^tB$ be symmetric is equivalent to each antidiagonal having all its elements equal and the strict lower antidiagonals being $0$.
From all this we conclude that ${\mathfrak g}_0(\mathfrak n)$ is the span of $\b A_i,\b A'_i,\b B_i,\b B'_i,\b C_i,\b C'_i$ with $1 \le i \le r$, and (1) follows.
(2) and (3) follow from (1) and the explicit form of the matrices $\b A_i,\b A'_i,\b B_i,\b B'_i,\b C_i,\b C'_i$.
\end{proof}
\begin{corollary}[of the proof]
Let $\mathfrak{n}$ be fat. Then $\dim(\mathfrak g_0(\mathfrak n))$ is maximal if and only if $\mathfrak{n}$ is of $H$-type.
\end{corollary}
\begin{proof}
Let $(\mathbf{c},\mathbf{r}) =((c_1,\ldots,c_l),(r_1,\ldots,r_l))$ be such that $\mathfrak{n}=\mathfrak{n}_{(\mathbf{c},\mathbf{r})}$.
We know that $\mathfrak g_0({\mathfrak n}) \subset \mathfrak g_0(\hat{\mathfrak n})$. If $c_i \not= c_j$ for some $i,j$, then there is no intertwining operator between
the blocks corresponding to these invariants, so $\mathfrak g_0({\mathfrak n}) \not= \mathfrak g_0(\hat{\mathfrak n})$.
When $c_1=c_2=\cdots=c_l$ we may take $c_j =i$ for all $j$. Let $r = \sum r_i$. In this case any $\begin{pmatrix} A&B\\ C&-A^*\end{pmatrix} \in \mathfrak g_0(\mathfrak n)$ must satisfy the equations
(\ref{(1)}), (\ref{(3)}), (\ref{(4)}), but with $N$ such that the coefficients $n_{j+1,j}$ are $0$ or $\b 1$. Suppose now that $\mathfrak{n}$ is not of $H$-type; then some $n_{j+1,j}$ is equal to $\b 1$. We may assume that $n_{21} = \b 1$; let $A \in M_r(\mathbb R\langle \b 1,\b i\rangle)$ be the matrix with $a_{12} = \b 1$ and zeros elsewhere. Then
$$
X = \begin{pmatrix} A&0\\0&-A^*\end{pmatrix}
$$
belongs to $\mathfrak g_0(\hat{\mathfrak n})$ but is not in $\mathfrak g_0({\mathfrak n})$.
\end{proof}
It can be shown in general that the semisimple part of $\mathfrak g_0(\mathfrak n)$ is isomorphic to $\oplus_i \mathfrak{sp}(m_i,\mathbb C)$, where $m_i$ is the multiplicity of the pair $(c_i,r_i)$ in $(\mathbf{c},\mathbf{r})$.
\
In the case $m=2$, $\mathfrak{g/g}_0$ is either $0$ or isomorphic to $\mathfrak{so}(2)$.
\begin{theorem} $ \mathfrak g(\mathfrak n)/\mathfrak g_0(\mathfrak n) \cong \mathfrak{so}(2) $ if $c_1=\cdots = c_\ell$, and $0$ otherwise.
\end{theorem}
\begin{proof}
$ \mathfrak{g}/\mathfrak{g}_0 $ is a compact subalgebra of $\mathfrak{gl}(2) $, hence of the form $g \mathfrak{so}(2) g^{-1}$ for some $g\in SL(2,\mathbb{R})$, and it is nonzero if and only if there exists $X\in \mathfrak{sl}(\mathfrak{v})$ such that, in the notation of the proof of Theorem \ref{th3.3},
$$
\begin{pmatrix} X&0\\0&g\mathbf{i}g^{-1}\end{pmatrix}
$$
is a derivation of $\mathfrak{n}$. For $g=\mathbf{1}$, if $T_1,T_2$ correspond to the standard basis of $\mathfrak{z}$, the equations for $X$ become
$$
\text{\em (a)}\quad T_1 X + X^{\tt t} T_1 = T_2, \qquad \text{\em (b)}\quad T_2 X + X^{\tt t} T_2 = -T_1
$$
In normal form, and for a single block $A(i,r)$,
$$
T_1=J_1 = \begin{pmatrix} 0&I_r\\-I_r&0\end{pmatrix}, \quad
T_2 = \begin{pmatrix} 0_{ }&\b iI_r +N \\\b iI_r -N^{\tt t} & 0_{ } \end{pmatrix}.
$$
We decompose
$$
T_2 = J_2+N_2,\qquad \mathrm{with}\qquad J_2 = \begin{pmatrix} 0&\b iI_r\\\b iI_r&0\end{pmatrix},\ N_2=
\begin{pmatrix} 0_{ }& N \\ -N^{\tt t} & 0_{ } \end{pmatrix}
$$
and regard $J_1,J_2,T_1,T_2, N_2$ as matrices with coefficients in $M_2(\mathbb R)$. Note that $J_1,J_2$ correspond to $\mathfrak{\hat n}$, of type $H$.
Let
$$
Y_0 = \begin{pmatrix}
0& & & & & \\
0&2\b i& & & & \\
 &\b 1&4 \b i& & & \\
 & &2\b 1&6\b i& & \\
 & & &\ddots&\ddots& \\
 & & & &(r-2)\b 1&2(r-1) \b i\end{pmatrix},
$$
with all unmarked entries equal to zero.
A straightforward calculation shows that
$$
X_0= \begin{pmatrix} -Y_0^{\tt t} &0\\ 0& -Y_0^{\tt t}+ \b iI_r +N\end{pmatrix}
$$
is a solution of (a), (b). We conclude that
$$
\begin{pmatrix}X_0 &0\\ 0&\b i\end{pmatrix}
$$
is a derivation of $\mathfrak{n}_{(i,r)}$, which lies in $\mathfrak{g}(\mathfrak{n}_{(i,r)})$ but not in $\mathfrak{g}_0(\mathfrak{n}_{(i,r)})$.
For any $c\in \mathbb{U}$, $\mathfrak{n}_{(c,r)}\cong \mathfrak{n}_{(i,r)}$, hence they have the same $\mathfrak{g/g}_0$ up to isomorphism. In fact, for any $g\in SL(2,\mathbb{R})$, the algebra $\mathfrak{n}_{(g\cdot i,r)}$ has a derivation of the form
$$
\begin{pmatrix}X &0\\ 0&g\b ig^{-1}\end{pmatrix}.
$$
For a fixed $g$, these $X$ are unique modulo $\mathfrak{g}_0$ and come in normal form. Clearly, $c$ determines the $2\times 2$ matrix $ g\b ig^{-1}$ and the complex number $g\cdot i$.
In the case of an arbitrary fat $\mathfrak{n}_{(\mathbf{c},\mathbf{r})}$, each block $(c_k,r_k)$ determines a corresponding
$X_k$ such that
$$
\begin{pmatrix}X_k &0\\ 0& g_k\b i g^{-1}_k\end{pmatrix}
$$
is a derivation of $\mathfrak{n}_{(c_k,r_k)}$. If $\mathfrak{n}_{(\mathbf{c},\mathbf{r})}$ has a derivation in $\mathfrak{g}$ that is not in $\mathfrak{g}_0$, then it must have one which is a combination of these, acting on $\mathfrak{v}$ as $X_1+ X_2 + \cdots$. This forces
all the $ g_k\b ig_k^{-1}$ to be the same and all the $c_i$ to be the same. The converse is clear.
\end{proof}
In particular, all algebras $\mathfrak{n}_{(\mathbf{c},\mathbf{r})}$ with $c_1=\cdots=c_\ell$ and some $r_i>1$ maximize the dimension of $\mathfrak{g/g}_0$, but they are not of type $H$.
Lauret pointed out to us that there are non-type-$H$ algebras such that $ \mathfrak g(\mathfrak n)/\mathfrak g_0(\mathfrak n) \neq 0 $. Independently, Oscari proved that this holds whenever the $c_i$'s all agree.
\end{section}
\begin{section}{Fat distributions}\label{section4} Let $D$ be a smooth vector distribution on a smooth manifold $M$, i.e., a subbundle of the tangent bundle $T(M)$. Its nilpotentization, or symbol, is the bundle on $M$ with fiber
$$N^D(M)_p = \bigoplus_j D^{(j)}_p / D^{(j-1)}_p $$ where
$D^{(1)}_p=D_p$ and $D^{(j+1)}_p = D^{(j)}_p + [\Gamma(D), \Gamma(D^{(j)})]_p$. The Lie bracket in $\Gamma(T(M))$ induces
a graded nilpotent Lie algebra structure on each fiber of $N^D(M)$. If $D^{(j)} =T(M)$ for some $j$, $D$ is called completely non-integrable. If $D^{(2)} =T(M)$,
the nilpotentization is 2-step, which in the notation of the previous section, is
$$\mathfrak{n}_p= N^D(M)_p = D_p \oplus \frac{D_p+[\Gamma(D), \Gamma(D)]_p}{D_p} = \mathfrak{v}_p+\mathfrak{z}_p.$$
It is also easy to see that $D$ is fat in the sense of Weinstein \cite{M} if and only if $\mathfrak{n}_p= \mathfrak{v}_p+\mathfrak{z}_p$ is non-singular, i.e., fat in the sense defined in Section 1.
A subriemannian metric $g$ defined on $D$ determines a metric on $\mathfrak{v}$. On $\mathfrak{z}$ we put a metric $\sigma$ invariant under $G$.
Let $\{\phi_1,...,\phi_m; \psi_1,...,\psi_n\}$ be a coframe on $M$ such that
$$D=\cap \ker \phi_i,$$
with $\{\phi_1,...,\phi_m\}$ and $\{\psi_1,...,\psi_n\}$ orthonormal with respect to $g+\sigma$. Define $T_z\in \operatorname{End}(D)$ as before, by
$$ \sigma(z,[u,v]) = g(T_zu,v).$$
Then $D$ is fat if and only if $T_z$ is invertible for all non-zero $z\in \mathfrak{z}$. The structure equations for the coframe can be written
$$d\phi_k \equiv \sum_i (T_k \psi_i)\wedge \psi_i \qquad \mathrm{mod}\ (\phi_\ell)$$
with the $T_k$'s having the property that any non-zero linear combination of them is invertible. This is deduced from the fact that if $u,v\in \mathfrak{v}$, then $d\phi (u,v) = - \phi([u,v])$, since $u(\phi(v)) = u (0)=0$. The $d\psi$'s are essentially arbitrary.
Let now $M$ be the simply connected Lie group with a fat Lie algebra $\mathfrak{n}$, and let $D$ be the left-invariant distribution on $M$ such that $D_e=\mathfrak{v}$. For a left-invariant coframe, the structure equations take the form
$$d\phi_k = \sum_i (J_k \psi_i)\wedge \psi_i, \qquad d\psi_i =0 $$
where $J_1,...,J_m$ are anticommuting complex structures on $D$.
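For instance, for $m=1$, with the convention that $J=J_1$ acts on the coframe by $J\psi_{2i-1}=\psi_{2i}$, $J\psi_{2i}=-\psi_{2i-1}$, the structure equation reduces to the familiar equation of the contact form on the Heisenberg group:
$$d\phi_1=\sum_i\big(\psi_{2i}\wedge\psi_{2i-1}-\psi_{2i-1}\wedge\psi_{2i}\big)=-2\sum_i\psi_{2i-1}\wedge\psi_{2i}.$$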
The results from the previous sections lead one to consider fat distributions satisfying
$$ d\phi_k = \sum_i (J_k \psi_i)\wedge \psi_i \qquad \mathrm{mod}\ (\phi_\ell) \leqno{(4.1)}$$
where the $J_k$ are sections of $\operatorname{End}(T(M)^*)$ satisfying the Canonical Anticommutation Relations
$$J_iJ_j + J_jJ_i = -2 \delta_{ij}I.$$
The Equivalence Problem for these systems has been discussed for distributions with growth vector $(2n,2n+1)$, $(4n,4n+3)$ and $(8,15)$. In these cases $\mathfrak{n}$ is parabolic, i.e., isomorphic to the Iwasawa subalgebra of a real semisimple Lie algebra $\mathfrak{g}$ of real rank one. The Tanaka \cite{T} subriemannian prolongation of such an algebra is $\mathfrak{g}$, while in the non-parabolic case
it is just
$$\mathfrak{n}+ \mathfrak{k(n)}+ \mathfrak{a}(\mathfrak{n})$$
where $\mathfrak{a}(\mathfrak{n})$ is the 1-dimensional Lie algebra of dilations \cite{Su}. In this case, Tanaka's theorem implies that, in the notation of \cite{Z}, the first pseudo G-structure $P^0$ already carries a canonical frame.
As this paper was being written, E. van Erp pointed out to us his article \cite{Er}, where fat distributions are called polycontact and those satisfying (4.1) arise by imposing a compatible conformal structure.
\end{section}
\vskip .3cm
\begin{center} \bf Acknowledgments\end{center}
We wish to thank Professor J. Vargas for helpful discussions and M. Subils for pointing out a mistake in a previous version of this paper.
\end{document} |
\begin{document}
\title{Elliptic problems with mixed nonlinearities and potentials singular at the origin and at the boundary of the domain}
\author[B. Bieganowski]{Bartosz Bieganowski}
\address[B. Bieganowski]{\newline\indent
Faculty of Mathematics, Informatics and Mechanics, \newline\indent
University of Warsaw, \newline\indent
ul. Banacha 2, 02-097 Warsaw, Poland}
\email{\href{mailto:[email protected]}{[email protected]}}
\author[A. Konysz]{Adam Konysz}
\address[A. Konysz]{\newline\indent Faculty of Mathematics and Computer Science, \newline\indent Nicolaus Copernicus University, \newline\indent ul. Chopina 12/18, 87-100 Toru\'n, Poland}
\email{\href{mailto:[email protected]}{[email protected]}}
\date{\today}
\begin{abstract}
We are interested in the following Dirichlet problem
$$
\left\{ \begin{array}{ll}
-\Delta u + \lambda u - \mu \frac{u}{|x|^2} - \nu \frac{u}{\mathrm{dist}(x,\mathbb{R}^N \setminus \Omega)^2} = f(x,u) & \quad \mbox{in } \Omega \\
u = 0 & \quad \mbox{on } \partial \Omega,
\end{array} \right.
$$
on a bounded domain $\Omega \subset \mathbb{R}^N$ with $0 \in \Omega$. We assume that the nonlinear part is superlinear on some closed subset $K \subset \Omega$ and asymptotically linear on $\Omega \setminus K$. We find a
solution with the energy bounded by a certain min-max level, and infinitely many solutions provided that $f$ is odd in $u$. Moreover, we also study the multiplicity of solutions to the associated normalized problem.
\noindent \textbf{Keywords:} variational methods, singular potential, nonlinear Schr\"odinger equation, multiplicity of solutions
\noindent \textbf{AMS Subject Classification:} 35Q55, 35A15, 35J20, 58E05
\end{abstract}
\maketitle
\pagestyle{myheadings} \markboth{\underline{B. Bieganowski, A. Konysz}}{
\underline{Elliptic problems with singularities and mixed nonlinearities}}
\section{Introduction}
We are interested in the problem
\begin{equation}\label{eq}
\left\{ \begin{array}{ll}
-\Delta u + \lambda u - \mu \frac{u}{|x|^2} - \nu \frac{u}{\mathrm{dist}(x,\mathbb{R}^N \setminus \Omega)^2} = f(x,u) & \quad \mbox{in } \Omega \\
u = 0 & \quad \mbox{on } \partial \Omega,
\end{array} \right.
\end{equation}
where $\lambda, \mu, \nu \in \mathbb{R}$ are real parameters, $f : \Omega \times \mathbb{R} \rightarrow \mathbb{R}$, $\Omega \subset \mathbb{R}^N$ is a bounded domain with $0 \in \Omega$, and $K \subset \Omega$ is a closed set with $| \mathrm{int}\, K | > 0$.
Semilinear problems of general form
$$
-\Deltalta u = h(x,u)
$$
appear when one looks for stationary states of time-dependent problems, such as the \textit{heat equation} $\frac{\partial u}{\partial t} - \Delta u = h(x,u)$ or the \textit{wave equation} $\frac{\partial^2 u}{ \partial t^2} - \Delta u = h(x,u)$. In nonlinear optics one studies the \textit{nonlinear Schr\"odinger equation}
\betagin{equation}\label{eq:schroed}
\mathbf{i} \frac{\partialrtial \Psi}{\partialrtial t} + \Deltalta \Psi = h(x, |\Psi|) \Psi, \quad (t,x) \in \mathbb{R} \times \Omegaega
\end{equation}
and looking for standing waves $\Psi(t,x) = e^{i\lambda t} u(x)$ then leads to a semilinear problem.
The time-dependent equation \eqref{eq:schroed} appears in physical models both in the case of bounded domains $\Omega$ (\cite{Fibich, Fibich2, Zuazua}) and in the case $\Omega = \mathbb{R}^N$ (\cite{Dorfler, Nie}). Two points of view on \eqref{eq} are possible: either $\lambda$ is prescribed, or it is considered part of the unknown. In the latter case a natural additional condition is the prescribed mass $\int_\Omega u^2 \, dx$. In this paper we consider both cases, namely we look for solutions to the unconstrained problem \eqref{eq} as well as to the constrained one, see \eqref{eq:normalized} below.
The equation \eqref{eq} (and systems of such equations) on bounded domains has been studied in the presence of bounded potentials \cite{B2} and of potentials singular at the origin \cite{Gao}; see also \cite{Felli, GuoMed, Kostenko} for the case of unbounded $\Omega$. Its constrained counterpart without the potential has been studied e.g. in \cite{Noris,PV}, where \eqref{eq:normalized} with $f(x,u)=|u|^{p-2}u$ and $\nu=\mu=0$ was studied in the mass-subcritical, mass-critical and mass-supercritical cases. In this paper we are interested in the presence of a potential
$$
V(x) = -\frac{\mu}{|x|^2} - \frac{\nu}{\mathrm{dist}\,(x, \mathbb{R}^N \setminus \Omegaega)^2}
$$
which is singular inside $\Omega$ as well as on the whole boundary $\partial \Omega$. We mention here that Schr\"odinger operators have been studied with potentials singular at a point of the boundary \cite{Chen}, as well as with potentials singular on the whole boundary \cite{Tai, Tai2}.
We assume that $\Omegaega$ is a domain satisfying the following condition
\betagin{enumerate}
\item[(C)] $-\Deltalta d \geq 0$ in $\Omegaega$, in the sense of distributions, where $d(x) := \mathrm{dist} (x, \mathbb{R}^N \setminus \Omegaega)$.
\end{enumerate}
This condition allows us to study the singular potential by means of Hardy-type inequalities (Section \ref{sect:2}). As we will see in Section \ref{sect:2} (see Proposition \ref{prop:convex}), any convex domain $\Omegaega$ satisfies (C).
We impose the following condition on parameters appearing in the problem
\betagin{enumerate}
\item[(N)] $\mu,\nu \geq 0$, $\frac{\mu}{(N-2)^2} + \nu < \frac{1}{4}$, $N \geq 3$.
\end{enumerate}
On the nonlinear part of \eqref{eq} we impose the following assumptions.
\betagin{enumerate}
\item[(F1)] $f : \Omegaega \times \mathbb{R} \rightarrow \mathbb{R}$ is a Carath\'eodory function and there is $2 < p < 2^*$ such that
$$
|f(x,u)| \lesssim 1+|u|^{p-1}, \quad u \in \mathbb{R}, \ x \in \Omegaega
$$
\item[(F2)] $f(x,u) = o(u)$ uniformly in $x \in \Omega$ as $u \to 0$;
\item[(F3)] $F(x,u)/|u|^2 \to +\infty$ as $|u| \to +\infty$ for $x \in K$, where $F(x,u) := \int_0^u f(x,s) \, ds$;
\item[(F4)] $f(x,u)/|u|$ is nondecreasing on $(-\infty, 0)$ and on $(0,\infty)$;
\item[(F5)] $f(x,u)=\Theta(x)u$ for sufficiently large $|u|$ and $x \in \Omega \setminus K$, where $\Theta \in L^\infty(\Omega \setminus K)$.
\end{enumerate}
A simple example satisfying all foregoing conditions is the following
$$
f(x,u) = \left\{ \betagin{array}{ll}
\Gammamma(x) |u|^{p-2}u, & \quad x \in K, \\
\frac{|u|^2}{1+|u|^2} u \chi_{|u| \leq 1} + \frac12 u \chi_{|u| > 1}, & \quad x \in \Omega \setminus K,
\end{array} \right.
$$
where $\Gamma\in L^\infty(K)$ is a function with $\Gamma > 0$ a.e. on $K$.
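One can check the conditions directly for this example. For instance, on $K$ we have $F(x,u) = \frac{\Gamma(x)}{p}|u|^p$, so
$$
\frac{F(x,u)}{|u|^2} = \frac{\Gamma(x)}{p} |u|^{p-2} \to +\infty \quad \mbox{as } |u| \to +\infty
$$
whenever $\Gamma(x) > 0$, which is (F3), while on $\Omega \setminus K$ we have $f(x,u) = \frac12 u$ for $|u| > 1$, so (F5) holds with $\Theta \equiv \frac12$.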
To show the boundedness of minimizing sequences to the problem \eqref{eq} we impose the following abstract condition
\betagin{enumerate}
\item[(A)] $-\lambda$ is not an eigenvalue of $-\Deltalta - \frac{\mu}{|x|^2} - \frac{\nu}{d(x)^2} - \Theta(x)$ with Dirichlet boundary conditions on $L^2(\Omegaega \setminus K)$.
\end{enumerate}
As we will see in Section \ref{sect:2}, (A) is satisfied if e.g. $\lambda \geq |\Theta|_\infty$ (cf. Theorem \ref{th:spectr}).
\betagin{Th}\label{th:main1}
Suppose that (C), (N), (F1)--(F5), (A) are satisfied and $\lambda \geq 0$. Then there is a nontrivial weak solution to \eqref{eq} with the energy level $c$ satisfying \eqref{gsl}.
\end{Th}
\betagin{Th}\label{th:main2}
Suppose that (C), (N), (F1)--(F5), (A) hold, $\lambda \geq 0$, and $f$ is odd in $u \in \mathbb{R}$. Then there are infinitely many weak solutions to \eqref{eq}.
\end{Th}
In the last section we also study the normalized problem
\betagin{equation}\label{eq:normalized}
\left\{ \betagin{array}{ll}
-\Deltalta u + \lambda u - \mu \frac{u}{|x|^2} - \nu \frac{u}{\mathrm{dist}(x, \mathbb{R}^N\setminus\Omegaega)^2} = f(x,u) & \quad \mbox{in } \Omegaega \\
u = 0 & \quad \mbox{on } \partialrtial \Omegaega, \\
\int_\Omegaega u^2 \, dx = \rho > 0,
\end{array} \right.
\end{equation}
where $\rho > 0$ is fixed and $(\lambda, u) \in \mathbb{R} \times H^1_0(\Omega)$ is the unknown. We obtain the following multiplicity result in the so-called \textit{mass-subcritical case}.
\betagin{Th}\label{main:3}
Suppose that (C), (N), (F1) hold with $p < 2_* := 2+\frac{4}{N}$, and $f$ is odd in $u \in \mathbb{R}$. Then there are infinitely many weak solutions to \eqref{eq:normalized}.
\end{Th}
In what follows, $\lesssim$ denotes an inequality that holds up to a multiplicative constant. Moreover, $C$ denotes a generic positive constant which may vary from one line to another.
\section{The domain \texorpdfstring{$\Omegaega$}{Ω} and the singular Schr\"odinger operator}\label{sect:2}
We recall that if $A \subset \mathbb{R}^N$ is a closed, nonempty set, we can define the distance function $\mathrm{dist} (\cdot, A) : \mathbb{R}^N \rightarrow [0,\infty)$ by
$$
\mathrm{dist}(x, A) := \inf_{y \in A} |x-y|, \quad x \in \mathbb{R}^N.
$$
We collect the following properties of the distance function:
\betagin{itemize}
\item[(i)] $|\mathrm{dist}(x, A) - \mathrm{dist}(x',A)| \leq |x-x'|$ for all $x,x' \in \mathbb{R}^N$,
\item[(ii)] if $x \in \mathbb{R}^N \setminus A$, then $\mathrm{dist}(\cdot, A)$ is differentiable at $x$ if and only if there is a unique $y \in A$ such that $\mathrm{dist}(x,A) = |x-y|$, and in that case $\nabla \mathrm{dist}(x, A) = \frac{x-y}{|x-y|}$.
\end{itemize}
Now we consider $A := \mathbb{R}^N \setminus \Omega$, which is a closed subset of $\mathbb{R}^N$; recall that we denote $d(x) = \mathrm{dist} (x, \mathbb{R}^N \setminus \Omega)$. Due to Rademacher's theorem (\cite[Theorem 2.2.1]{Ziemer}) and (i), $d$ is differentiable almost everywhere, and by (ii), $|\nabla d| = 1$ almost everywhere. Recall that assumption (C) says that
$$
-\Deltalta d \geq 0 \mbox{ in } \Omegaega
$$
holds in the sense of distributions. We note the following fact.
\betagin{Prop}\label{prop:convex}
If $\Omegaega \subset \mathbb{R}^N$ is a convex domain in $\mathbb{R}^N$, then $d \big|_\Omegaega$ is concave and satisfies (C).
\end{Prop}
\betagin{proof}
First note that $d \big|_\Omega : \Omega \rightarrow [0,\infty)$ is a concave function. Indeed, fix any $x,y \in \Omega$ and $\alpha \in [0,1]$, and let $z = \alpha x + (1-\alpha)y$. Choose $z_0 \in \partial \Omega$ such that $d(z) = |z-z_0|$. Let $T_{z_0} := z_0 + \mathrm{span} \{ z-z_0 \}^\perp$ be the affine hyperplane through $z_0$ orthogonal to $z-z_0$; by convexity of $\Omega$, it is a supporting hyperplane, so $\mathbb{R}^N \setminus \Omega$ contains the closed half-space on the other side of $T_{z_0}$. Define $x_0, y_0$ as the orthogonal projections of $x, y$ onto $T_{z_0}$; then $|x-x_0| \geq d(x)$ and $|y-y_0| \geq d(y)$, and hence
$$
d(z) = |z-z_0| = \alpha |x-x_0| + (1-\alpha)|y-y_0| \geq \alpha d(x) + (1-\alpha) d(y),
$$
which completes the proof of concavity. Moreover, since $d$ is concave on $\Omega$, by \cite[Theorem 6.8]{Ev} there is a nonnegative Radon measure $\sigma$ on $\Omega$ satisfying
$$
-\Delta d = \sigma \quad \mbox{in the sense of distributions,}
$$
namely
$$
\int_{\Omega} \nabla d \cdot \nabla \varphi \, dx = \int_\Omega \varphi \, d\sigma \quad \mbox{for } \varphi \in {\mathcal C}_0^\infty (\Omega).
$$
Clearly, for $\varphi \geq 0$ we get
$$
\int_{\Omegaega} \nabla d \cdot \nabla \varphi \, dx \geq 0
$$
and condition (C) holds.
\end{proof}
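Let us also note that for the ball $\Omega = B_R(0)$ condition (C) can be verified by a direct computation: here $d(x) = R - |x|$, and for $x \neq 0$
$$
-\Delta d(x) = \Delta |x| = \frac{N-1}{|x|} \geq 0;
$$
since $N \geq 3$, the function $\frac{N-1}{|x|}$ is locally integrable and one checks that $-\Delta d = \frac{N-1}{|x|}$ holds on all of $\Omega$ in the sense of distributions.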
To study singular terms in \eqref{eq} we recall the following Hardy-type inequalities. If $u \in H^1_0 (\Omegaega)$, where $\Omegaega$ is a domain in $\mathbb{R}^N$ with finite Lebesgue measure and $0 \in \Omegaega$, then (see \cite{BV})
\betagin{equation}\label{hardy1}
\frac{(N-2)^2}{4} \int_\Omegaega \frac{u^2}{|x|^2} \, dx \leq \int_\Omegaega |\nabla u|^2 \, dx.
\end{equation}
Now let $\Omegaega \subset \mathbb{R}^N$ be a bounded domain satisfying (C). Then, for $u \in H^1_0(\Omegaega)$, the following Hardy inequality involving the distance function holds (see \cite{BFT})
\betagin{equation}\label{hardy2}
\frac14 \int_\Omegaega \frac{u^2}{d(x)^2} \, dx \leq \int_{\Omegaega} |\nabla u|^2 \, dx.
\end{equation}
We consider the operator $\mathcal{A} := -\Delta - \frac{\mu}{|x|^2} - \frac{\nu}{d(x)^2} - \Theta(x)$ on $L^2 (\Omega \setminus K)$ with Dirichlet boundary conditions; its domain is ${\mathcal D}({\mathcal A}) := H^2 (\Omega \setminus K) \cap H^1_0 (\Omega \setminus K)$.
\betagin{Th}\label{th:spectr}
The operator ${\mathcal A} : {\mathcal D}({\mathcal A}) \subset L^2 (\Omega \setminus K) \rightarrow L^2 (\Omega \setminus K)$ is elliptic and self-adjoint on $L^2 (\Omega \setminus K)$, and has compact resolvent. Moreover the spectrum satisfies $\sigma({\mathcal A}) \subset (-|\Theta|_\infty, +\infty)$ and consists of eigenvalues $-|\Theta|_\infty < \lambda_1 \leq \lambda_2 \leq \ldots$ with $\lambda_n \to +\infty$ as $n \to +\infty$.
\end{Th}
\betagin{proof}
It is well-known that ${\mathcal A}$ is a densely defined, closed operator on $L^2(\Omega \setminus K)$, and it is easy to check that ${\mathcal A}$ is self-adjoint. The compactness of its resolvent follows from the Rellich-Kondrachov theorem. Hence its spectrum consists of eigenvalues $\lambda_n$ with $\lambda_n \to +\infty$. To see that $\sigma({\mathcal A}) \subset (-|\Theta|_\infty, +\infty)$, suppose that $\lambda$ is an eigenvalue of ${\mathcal A}$ with an associated eigenfunction $u \in H^1_0(\Omega \setminus K)$. We may treat $u$ as a function in $H^1_0(\Omega)$ vanishing on $K$. Then, using \eqref{hardy1}, \eqref{hardy2} and (N),
\begin{align*}
\lambda \int_{\Omega \setminus K} u^2 \, dx &\geq - \int_\Omega \left( |\nabla u|^2 - \mu \frac{u^2}{|x|^2} - \nu \frac{u^2}{d(x)^2} \right) dx + \lambda \int_{\Omega \setminus K} u^2 \, dx \\ &= -\int_{\Omega \setminus K} \Theta(x) u^2 \, dx \geq - |\Theta|_\infty \int_{\Omega \setminus K} u^2 \, dx.
\end{align*}
Hence $\lambda \geq -|\Theta|_\infty$. To show that $\lambda \neq -|\Theta|_\infty$, suppose by contradiction that there is $u \in H^1_0 (\Omegaega \setminus K)$ with
$$
\int_{\Omegaega \setminus K} |\nabla u|^2 - \mu \frac{u^2}{|x|^2} - \nu \frac{u^2}{d(x)^2} - \Theta(x) u^2 \, dx = -|\Theta|_\infty \int_{\Omegaega \setminus K} u^2 \, dx.
$$
Thus
$$
\int_{\Omegaega \setminus K} |\nabla u|^2 - \mu \frac{u^2}{|x|^2} - \nu \frac{u^2}{d(x)^2} \, dx = \int_{\Omegaega \setminus K} (\Theta(x) - |\Theta|_\infty) u^2 \, dx \leq 0.
$$
However, using \eqref{hardy1} and \eqref{hardy2} we get
$$
\int_{\Omegaega \setminus K} |\nabla u|^2 - \mu \frac{u^2}{|x|^2} - \nu \frac{u^2}{d(x)^2} \, dx \geq \left(1 - \frac{4\mu}{(N-2)^2} - 4 \nu \right) \int_{\Omegaega \setminus K} |\nabla u|^2 \, dx.
$$
Hence, since $1 - \frac{4\mu}{(N-2)^2} - 4\nu > 0$ by (N), we get $\int_{\Omega \setminus K} |\nabla u|^2 \, dx = 0$ and $u = 0$, which is a contradiction. Hence $\sigma({\mathcal A}) \subset (-|\Theta|_\infty, +\infty)$.
\end{proof}
\section{Variational setting and critical point theory}
Suppose that $(E, \| \cdot \|)$ is a Hilbert space and ${\mathcal J} : E \rightarrow \mathbb{R}$ is a nonlinear functional of the general form
$$
{\mathcal J}(u) = \frac12 \|u\|^2 - {\mathcal I}(u),
$$
where ${\mathcal I}$ is of ${\mathcal C}^1$ class and ${\mathcal I}(0)=0$. We introduce the so-called \textit{Nehari manifold}
$$
{\mathcal N} := \{ u \in E \setminus \{ 0 \} \ : \ {\mathcal J}'(u)(u) = 0 \}.
$$
Observe that ${\mathcal I}'(u)(u) > 0$ on ${\mathcal N}$. Indeed,
$$
0 = {\mathcal J}'(u)(u) = \|u\|^2 - {\mathcal I}'(u)(u), \quad u \in {\mathcal N},
$$
so ${\mathcal I}'(u)(u) = \|u\|^2 > 0$, since $u \neq 0$.
To utilize the mountain pass approach, we consider the following space of paths
$$
\Gammamma := \{ \gammamma \in {\mathcal C} ([0,1], E) \ : \ \gammamma(0) = 0, \ \|\gammamma(1)\| > r, \ {\mathcal J}(\gammamma(1)) < 0 \}
$$
and the following mountain pass level
$$
c := \inf_{\gammamma \in \Gammamma} \sup_{t \in [0,1]} {\mathcal J}(\gammamma(t)).
$$
Moreover, for a closed vector subspace $Q \subset E$ (as in condition (J2) below) we set
$$
\Gamma_Q := \Gamma \cap {\mathcal C}([0,1],Q).
$$
We state an abstract theorem which is a combination of \cite[Theorem 5.1]{B} and \cite[Theorem 2.1]{BM-Indiana}. The proof is a straightforward modification of the proofs of the aforementioned theorems; however, we include it here for the reader's convenience.
\betagin{Th}\label{abstract}
Suppose that
\betagin{itemize}
\item[(J1)] there is $r > 0$ such that
$$
a := \inf_{\|u\|=r} {\mathcal J}(u) > 0;
$$
\item[(J2)] there is a closed vector subspace $Q \subset E$ such that $\frac{{\mathcal I} (t_n u_n)}{t_n^2} \to +\infty$ whenever $t_n \to +\infty$, $u_n \in Q$ and $u_n \to u \neq 0$;
\item[(J3)] for all $t > 0$ and $u \in {\mathcal N}$ there holds
$$
\frac{t^2-1}{2} {\mathcal I}'(u)(u) - {\mathcal I}(tu) + {\mathcal I}(u) \leq 0.
$$
\end{itemize}
Then $\Gamma_Q \neq \emptyset$, ${\mathcal N} \cap Q \neq \emptyset$ and
\betagin{equation}\label{gsl}
0 < \inf_{\|u\|=r} {\mathcal J}(u) \leq c \leq \inf_{\gamma \in \Gamma_Q} \sup_{t \in [0,1]} {\mathcal J}(\gamma(t)) = \inf_{{\mathcal N} \cap Q} {\mathcal J} = \inf_{u \in Q \setminus \{0\}} \sup_{t > 0} {\mathcal J}(tu).
\end{equation}
Moreover there is a Cerami sequence for ${\mathcal J}$ on the level $c$, i.e. a sequence $\{ u_n \}_n \subset E$ such that
$$
{\mathcal J}(u_n) \to c, \quad (1+\|u_n\|) {\mathcal J}'(u_n) \to 0.
$$
\end{Th}
\betagin{proof}
Observe that there exists $v \in Q \setminus \{0\}$ with $\|v\| > r$ such that ${\mathcal J}(v) < 0$. Indeed, fix $u \in Q \setminus \{0\}$; by (J2),
\betagin{equation}\label{infty}
\frac{{\mathcal J}(tu)}{t^2} = \frac12 \|u\|^2 - \frac{{\mathcal I}(tu)}{t^2} \to - \infty \quad \mbox{as } t \to +\infty
\end{equation}
and we may take $v := t u$ for sufficiently large $t > 0$. In particular, the family of paths $\Gamma_Q$ is nonempty. Moreover, ${\mathcal J}(tu) \to 0$ as $t \to 0^+$, and for $t = \frac{r}{\|u\|} > 0$ we get ${\mathcal J}(tu) > 0$ by (J1). Hence, taking \eqref{infty} into account, the map $(0,+\infty) \ni t \mapsto {\mathcal J}(tu) \in \mathbb{R}$ attains a local maximum at some $t > 0$, which is a critical point of this map, and then $tu \in {\mathcal N}$. Hence ${\mathcal N} \cap Q \neq \emptyset$. Suppose that $u \in {\mathcal N} \cap Q$. Then, from (J3),
\betagin{equation*}
{\mathcal J}(tu) = {\mathcal J}(tu) - \frac{t^2-1}{2} {\mathcal J}'(u)(u) \leq {\mathcal J}(u)
\end{equation*}
and therefore $u$ is a maximizer (not necessarily unique) of ${\mathcal J}$ on $\mathbb{R}_+ u := \{ su \ : \ s > 0 \}$. Hence, for any $u \in {\mathcal N}\cap Q$ there are $0 < t_{\min} (u) \leq 1 \leq t_{\max}(u)$ such that $t u \in {\mathcal N}\cap Q$ for any $t \in [t_{\min}(u), t_{\max} (u)]$ and
$$
[t_{\min}(u), t_{\max} (u)] \ni t \mapsto {\mathcal J}(tu) \in \mathbb{R}
$$
is constant. Moreover, ${\mathcal J}'(tu)(u) > 0$ for $t \in (0, t_{\min}(u))$ and ${\mathcal J}'(tu)(u) < 0$ for $t \in (t_{\max} (u), +\infty)$; hence $Q \setminus {\mathcal N}$ consists of two connected components and any path $\gamma \in \Gamma_Q$ intersects ${\mathcal N}\cap Q$. Thus
$$
\inf_{\gammamma \in \Gammamma_Q} \sup_{t \in [0,1]} {\mathcal J}(\gammamma(t)) \geq \inf_{{\mathcal N}\cap Q} {\mathcal J}.
$$
Since
$$
\inf_{{\mathcal N}\cap Q} {\mathcal J} = \inf_{u \in Q \setminus \{0\}} \sup_{t > 0} {\mathcal J}(tu)
$$
it follows, in view of (J1), that
$$
c = \inf_{\gammamma \in \Gammamma} \sup_{t \in [0,1]} {\mathcal J}(\gammamma(t)) \leq \inf_{\gammamma \in \Gammamma_Q} \sup_{t \in [0,1]} {\mathcal J}(\gammamma(t)) = \inf_{{\mathcal N} \cap Q} {\mathcal J} = \inf_{u \in Q \setminus \{0\}} \sup_{t > 0} {\mathcal J}(tu).
$$
The existence of a Cerami sequence follows from the mountain pass theorem.
\end{proof}
To study the multiplicity of solutions we will recall the symmetric mountain pass theorem. We consider the following condition
\betagin{enumerate}
\item[(J4)] there exists a sequence of subspaces $\widetilde{E}_1 \subset \widetilde{E}_2 \subset \ldots \subset E$ such that $\dim \widetilde{E}_k=k$ for every $k \geq 1$, and there are radii $R_k > 0$ such that $\sup \{ {\mathcal J}(u) \ : \ u \in \widetilde{E}_k, \ \|u\|\geq R_k \} \leq 0$.
\end{enumerate}
Then, the following theorem holds.
\begin{Th}[{\cite[Corollary 2.9]{AmbrRab}}, {\cite[Theorem 9.12]{Rabinowitz}}]\label{multipl}
Suppose that ${\mathcal J}$, as above, is even and satisfies (J1), (J4) and the Palais-Smale condition (namely, any Palais-Smale sequence for ${\mathcal J}$ contains a convergent subsequence). Then ${\mathcal J}$ has an unbounded sequence of critical values.
\end{Th}
We work in the usual Sobolev space $H^1_0(\Omega)$, i.e. the completion of ${\mathcal C}_0^\infty(\Omega)$ with respect to the norm
$$
\| u \|_{H^1} := \left( \int_{\Omegaega} |\nabla u|^2 + u^2 \, dx \right)^{1/2}.
$$
Define the bilinear form $B : H^1_0(\Omegaega) \times H^1_0 (\Omegaega) \rightarrow \mathbb{R}$ by
$$
B(u,v) := \int_{\Omegaega} \nabla u \cdot \nabla v + \lambda uv \, dx - \mu \int_{\Omegaega} \frac{uv}{|x|^2} \, dx - \nu \int_\Omegaega \frac{uv}{d(x)^2} \, dx, \quad u,v \in H^1_0(\Omegaega).
$$
\begin{Lem}
Suppose that (C) and (N) hold and $\lambda \geq 0$. Then $B$ defines an inner product on $H^1_0 (\Omega)$. Moreover, the associated norm is equivalent to the usual one.
\end{Lem}
\betagin{proof}
To check that $B$ is positive-definite we utilize \eqref{hardy1}, \eqref{hardy2}, (N) and $\lambda \geq 0$ to get
\begin{align*}
B(u,u)& = \int_{\Omega} |\nabla u |^2 + \lambda u^2 \, dx - \mu \int_{\Omega} \frac{u^2}{|x|^2} \, dx - \nu \int_\Omega \frac{u^2}{d(x)^2} \, dx\\
&\geq \left(1 - \frac{4 \mu}{(N-2)^2} - 4\nu \right) \int_\Omega |\nabla u|^2 \, dx + \lambda \int_\Omega u^2 \, dx \geq \left(1 - \frac{4 \mu}{(N-2)^2} - 4\nu \right) \int_\Omega |\nabla u|^2 \, dx
\end{align*}
and the statement follows from the Poincar\'e inequality. Moreover, from
$$
\int_\Omegaega |\nabla u|^2 + \lambda u^2 \, dx \geq B(u,u) \geq \left(1 - \frac{4 \mu}{(N-2)^2} - 4\nu \right) \int_\Omegaega |\nabla u|^2 \, dx
$$
it follows that $B$ generates a norm on $H^1_0(\Omega)$ equivalent to the standard one.
\end{proof}
Let $\| \cdot \|$ denote the norm generated by $B$, namely
$$
\|u\| := \sqrt{B(u,u)}, \quad u \in H^1_0(\Omegaega).
$$
Then we can define the energy functional ${\mathcal J} : H^1_0 (\Omegaega) \rightarrow \mathbb{R}$ by
\betagin{equation}\label{eq:J}
{\mathcal J}(u) := \frac12 \|u\|^2 - \int_\Omegaega F(x,u) \, dx,
\end{equation}
where $F(x,u) := \int_0^u f(x,s) \, ds$ is as in (F3). It is well-known that under (F1), (F2) the functional is of ${\mathcal C}^1$ class and
$$
{\mathcal J}'(u)(v) = B(u,v) - \int_\Omegaega f(x,u)v \, dx, \quad u,v \in H^1_0(\Omegaega).
$$
Hence, its critical points are weak solutions to \eqref{eq}.
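Explicitly, $u \in H^1_0(\Omega)$ is a weak solution to \eqref{eq} provided that
$$
B(u,v) = \int_{\Omega} \nabla u \cdot \nabla v + \lambda uv \, dx - \mu \int_{\Omega} \frac{uv}{|x|^2} \, dx - \nu \int_\Omega \frac{uv}{d(x)^2} \, dx = \int_\Omega f(x,u)v \, dx \quad \mbox{for all } v \in H^1_0(\Omega).
$$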
\section{Verification of (J1)--(J4)}
Observe that (F1) and (F2) imply that for every $\varepsilon > 0$ one can find $C_\varepsilon > 0$ such that
$$
|f(x,u)| \leq \varepsilon |u| + C_{\varepsilon}|u|^{p-1}, \quad u \in \mathbb{R}, \ x \in \Omega.
$$
A similar inequality then follows for $F$, namely
\begin{equation}\label{eq:F-eps}
|F(x,u)| \leq \varepsilon u^2 + C_\varepsilon |u|^p.
\end{equation}
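Indeed, it suffices to integrate the bound on $f$:
$$
|F(x,u)| \leq \int_0^{|u|} \left( \varepsilon s + C_\varepsilon s^{p-1} \right) ds = \frac{\varepsilon}{2} u^2 + \frac{C_\varepsilon}{p} |u|^p \leq \varepsilon u^2 + C_\varepsilon |u|^p.
$$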
We also note that if in addition (F2) and (F4) hold, then $F(x,u) \geq 0$. Recall that the functional ${\mathcal J}$ is defined by \eqref{eq:J}.
\betagin{Lem}\label{lem:assump}
Suppose that (C), (N), (F1)--(F5) hold. Then ${\mathcal J}$ satisfies (J1)--(J3) in Theorem \ref{abstract} and (J4) in Theorem \ref{multipl}.
\end{Lem}
\begin{proof}
\begin{enumerate}
\item[(J1)] Using \eqref{eq:F-eps} and Sobolev embeddings we obtain
$$
\int_\Omega F(x,u)\,dx\leq \varepsilon|u|_2^2+C_\varepsilon |u|_p^p\lesssim \varepsilon\|u\|^2+C_\varepsilon\|u\|^p.
$$
Hence we can choose $\varepsilon>0$ and $r>0$ such that
$$
\int_\Omega F(x,u)\,dx\leq \frac{1}{4}\|u\|^2
$$
for all $\|u\|\leq r$. Then we get
\begin{align*}
\mathcal{J}(u) &= \frac{1}{2}\|u\|^2-\int_\Omega F(x,u)\,dx \geq \frac{1}{4}\|u\|^2=\frac{r^2}{4}>0
\end{align*}
for all $\|u\|=r$.
\item[(J2)] Let $Q := H^1_0 (\mathrm{int}\, K)$. Let $t_n\to+\infty$, $u_n \in Q$ and $u_n\to u\neq 0$. Then from Fatou's lemma and (F3)
$$
\frac{\mathcal{I}(t_nu_n)}{t_n^2}=\frac{\int_K F(x,t_nu_n)\,dx}{t_n^2} \to+\infty \quad \mbox{as } n \to +\infty.
$$
\item[(J3)] Fix $u \in {\mathcal N}$. Define
$$
(0,\infty) \ni t \mapsto \varphi(t):=\frac{t^2-1}{2}\mathcal{I}'(u)(u)-\mathcal{I}(tu)+\mathcal{I}(u)\in\mathbb{R}.
$$
Note that $\varphi(1)=0$. Moreover
\begin{align*}
\varphi'(t)&=t \mathcal{I}'(u)(u)-\mathcal{I}'(tu)(u)=\int_\Omega f(x,u)tu\,dx-\int_\Omega f(x,tu)u\,dx.
\end{align*}
Suppose that $t\in(0,1)$. Then (F4) implies that for a.e. $x \in \Omega$, $f(x,tu)u \leq tf(x,u)u$, and therefore $\varphi'(t) \geq 0$. Similarly $\varphi'(t)\leq 0$ for $t>1$, which implies that $\varphi(t)\leq\varphi(1)=0$ for all $t>0$.
\item[(J4)] Let $\widetilde{E} \subset H^1_0(\mathrm{int}\,K) \subset H^1_0(\Omega)$ be a finite-dimensional subspace. Note that on $\widetilde{E}$ all norms are equivalent. Suppose, by contradiction, that there is a sequence $(u_n) \subset \widetilde{E}$ such that $\|u_n\| \to +\infty$ and ${\mathcal J}(u_n) > 0$. Let $w_n(x) := u_n(x) / \|u_n\|$. It is clear that $\|w_n\|=1$ and, since $\widetilde{E}$ is finite-dimensional, there is (up to a subsequence) $w \in \widetilde{E} \setminus \{0\}$ such that $\|w_n-w\| \to 0$. In particular $|\mathrm{supp}\, (w) \cap K| > 0$. Then, for a.e. $x \in \mathrm{supp}\, (w) \cap K$ we have that
$$
u_n(x)^2 = \|u_n\|^2 w_n(x)^2 \to +\infty.
$$
Hence, by Fatou's lemma and (F3)
$$
0 < \frac{{\mathcal J}(u_n)}{\|u_n\|^2} = \frac12 - \int_{\Omega} \frac{F(x,u_n)}{\|u_n\|^2} \, dx \leq \frac12 - \int_{\mathrm{supp}\,(w) \cap K} \frac{F(x,u_n)}{u_n^2} w_n^2 \, dx \to -\infty,
$$
which is a contradiction.
\end{enumerate}
\end{proof}
\section{Cerami sequences and proofs of main theorems}
\betagin{Lem}\label{lem:bdd}
Any Cerami sequence for ${\mathcal J}$ is bounded.
\end{Lem}
\betagin{proof}
Let $(u_n)$ be a Cerami sequence for ${\mathcal J}$ and suppose, by contradiction, that $\|u_n\|\to+\infty$ up to a subsequence. We define $v_n := \frac{u_n}{\|u_n\|}$. Then $\|v_n\|=1$ and, up to a subsequence, $v_n \rightharpoonup v_0$ in $H^1_0(\Omega)$. From compact Sobolev embeddings, $v_n \to v_0$ in $L^2(\Omega)$, in $L^p (\Omega)$ and almost everywhere.
We consider three cases.
\betagin{itemize}
\item Suppose that $v_0 = 0$. The inequality in condition (J3) in fact holds for all $u \in H^1_0(\Omega)$ and $t>0$ (the verification in Lemma \ref{lem:assump} did not use $u \in {\mathcal N}$), and it implies that
$$
{\mathcal J}(u) \geq {\mathcal J}(tu) - \frac{t^2-1}{2} {\mathcal J}'(u)(u).
$$
Taking $\frac{t}{\|u_n\|}$ in place of $t$ and $u_n$ in place of $u$, we obtain that
$$
{\mathcal J}(u_n) \geq {\mathcal J}\left(\frac{t}{\|u_n\|}u_n\right) - \frac{\frac{t^2}{\|u_n\|^2}-1}{2} {\mathcal J}'(u_n)(u_n) = {\mathcal J}(t v_n) + o(1).
$$
Hence
$$
{\mathcal J}(u_n) \geq \frac{t^2}{2} - \int_\Omega F(x,tv_n) \, dx + o(1).
$$
Moreover, by \eqref{eq:F-eps} and $v_n \to 0$ in $L^2(\Omega)$ and $L^p(\Omega)$,
$$
\left| \int_\Omega F(x,tv_n) \, dx \right| \leq \varepsilon t^2 \int_\Omega |v_n|^2 \, dx + C_\varepsilon t^p \int_\Omega |v_n|^p \, dx \to 0 \quad \mbox{as } n \to \infty.
$$
Thus
$$
c + o(1) = {\mathcal J}(u_n) \geq \frac{t^2}{2} + o(1)
$$
for any $t > 0$, a contradiction.
\item Now we suppose that $v_0 \neq 0$ and $|\mathrm{supp}\, v_0 \cap K | > 0$. Then, by Fatou's lemma and (F3),
$$
o(1)=\frac{{\mathcal J}(u_n)}{\|u_n\|^2} = \frac12 - \int_\Omega \frac{F(x,u_n)}{\|u_n\|^2} \, dx = \frac12 - \int_\Omega \frac{F(x,u_n)}{u_n^2} v_n^2 \, dx \leq \frac12 - \int_{\mathrm{supp}\, v_0 \cap K} \frac{F(x,u_n)}{u_n^2} v_n^2 \, dx \to -\infty,
$$
a contradiction.
\item Suppose that $v_0 \neq 0$ and $|\mathrm{supp}\, v_0 \cap K| = 0$, i.e. $v_0$ vanishes a.e. on $K$. Fix $\varphi \in {\mathcal C}_0^\infty (\Omega \setminus K)$ and note that
$$
o(1) = {\mathcal J}'(u_n)(\varphi) = \langle u_n, \varphi \rangle - \int_{\Omega} f(x,u_n) \varphi \, dx.
$$
Observe that
\begin{align*}
\int_{\Omega} f(x,u_n)\varphi \, dx = \|u_n\| \int_{\Omega} \frac{f(x,u_n)}{u_n} v_n \varphi \, dx = \|u_n\| \left( \int_{\mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0} \frac{f(x,u_n)}{u_n} v_n \varphi \, dx + o(1) \right).
\end{align*}
For a.e. $x \in \mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0$ we get that $|u_n(x)| = |v_n(x)| \|u_n\| \to +\infty$. Fix such an $x$; then from (F5), for sufficiently large $n$, $f(x,u_n(x)) = \Theta(x) u_n(x)$, and thus
$$
\frac{f(x,u_n(x))}{u_n(x)} v_n(x) \varphi(x) = \Theta(x) v_n(x) \varphi(x) \to \Theta(x) v_0(x) \varphi(x)
$$
a.e. on $\mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0$. Combining (F4) and (F5) we also get that
$$
\left| \frac{f(x,u_n(x))}{u_n(x)} \right|^2 \leq |\Theta|_\infty^2,
$$
so $\frac{f(\cdot ,u_n)}{u_n} \to \Theta$ in $L^2(\mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0)$ by the Lebesgue dominated convergence theorem. Thus, by the H\"older inequality,
$$
\int_{\mathrm{supp}\, \varphi \cap \mathrm{supp}\, v_0} \frac{f(x,u_n)}{u_n} v_n \varphi \, dx \to \int_{\Omega} \Theta(x) v_0 \varphi \, dx.
$$
Dividing the first identity by $\|u_n\|$ and using $v_n \rightharpoonup v_0$, we obtain
$$
\langle v_0, \varphi \rangle = \int_\Omega \Theta(x) v_0 \varphi \, dx.
$$
In particular, $-\lambda$ is an eigenvalue of the operator $-\Delta - \frac{\mu}{|x|^2} - \frac{\nu}{d(x)^2} - \Theta(x)$ with Dirichlet boundary conditions on $\Omega \setminus K$, which contradicts (A).
\end{itemize}
\end{proof}
\betagin{proof}[Proof of Theorem \ref{th:main1}]
Let $(u_n)$ be the Cerami sequence at the level $c$ given by Theorem \ref{abstract} (its assumptions are verified in Lemma \ref{lem:assump}). Since $(u_n)$ is bounded by Lemma \ref{lem:bdd}, we have the following convergences (up to a subsequence):
\begin{align*}
u_n\rightharpoonup u_0 \quad & \mbox{in } H^1_0(\Omega),\\
u_n\to u_0 \quad & \mbox{in } L^2(\Omega) \mbox{ and in } L^p(\Omega),\\
u_n\to u_0\quad & \mbox{a.e. on }\Omega.
\end{align*}
Hence, for any $\varphi\in {\mathcal C}^\infty_0(\Omegaega)$,
$$
{\mathcal J}'(u_n)(\varphi)-{\mathcal J}'(u_0)(\varphi)=\langle u_n-u_0,\varphi\rangle-\int_\Omega \left(f(x,u_n)-f(x,u_0)\right)\varphi\,dx\to 0.
$$
Indeed, the weak convergence of $(u_n)$ implies that
$$
\langle u_n-u_0,\varphi\rangle\to 0,
$$
and we use the Vitali convergence theorem to show that
$$
\int_\Omega \left(f(x,u_n)-f(x,u_0)\right)\varphi\,dx\to 0.
$$
For this we need to check the uniform integrability of the family $\left\{ \left(f(x,u_n)-f(x,u_0)\right)\varphi \right\}_n$. Using (F1) and Lemma \ref{lem:bdd} we obtain that for any measurable set $E \subset \Omega$
\begin{align*}
\int_E|f(x,u_n)-f(x,u_0)||\varphi|\,dx &\leq \int_E|f(x,u_n)\varphi|\,dx+\int_E|f(x,u_0)\varphi|\,dx\\
&\lesssim \int_E|\varphi|\,dx+\int_E|u_n|^{p-1}|\varphi|\,dx+\int_E|\varphi|\,dx+\int_E|u_0|^{p-1}|\varphi|\,dx\\
&\lesssim |\varphi \chi_E|_1+ |u_n|_p^{p-1} |\varphi \chi_E|_p +|u_0|_p^{p-1} |\varphi \chi_E|_p \\
&\lesssim |\varphi \chi_E|_1+|\varphi \chi_E|_p.
\end{align*}
Then, for any $\varepsilon > 0$, we can choose $\delta > 0$ small enough that
$$
\int_E|f(x,u_n)-f(x,u_0)||\varphi|\,dx < \varepsilon
$$
whenever $|E|<\delta$. Hence ${\mathcal J}'(u_n)(\varphi)\to{\mathcal J}'(u_0)(\varphi)$, and consequently ${\mathcal J}'(u_0)=0$, i.e. $u_0$ is a weak solution to \eqref{eq}. It remains to check that $u_0 \neq 0$. Suppose that $u_0 = 0$; then $u_n \to 0$ in $L^2(\Omega)$ and $L^p(\Omega)$, so
$$
\|u_n\|^2 = {\mathcal J}'(u_n)(u_n) + \int_\Omega f(x,u_n) u_n \, dx \to 0,
$$
since $\left| \int_\Omega f(x,u_n)u_n \, dx \right| \leq \varepsilon |u_n|_2^2 + C_\varepsilon |u_n|_p^p \to 0$ and ${\mathcal J}'(u_n)(u_n) \to 0$ for the Cerami sequence. Similarly $\int_\Omega F(x,u_n)\,dx \to 0$, and therefore ${\mathcal J}(u_n) \to 0$, which contradicts ${\mathcal J}(u_n) \to c > 0$ (see \eqref{gsl}).
\end{proof}
\betagin{proof}[Proof of Theorem \ref{th:main2}]
The statement follows directly from Theorem \ref{multipl} and Lemma \ref{lem:assump}.
\end{proof}
\section{Multiple solutions to the mass-subcritical normalized problem}
In what follows we are interested in the normalized problem \eqref{eq:normalized}, where $\lambda$ is no longer prescribed and is part of the unknown $(\lambda, u) \in \mathbb{R} \times H^1_0(\Omega)$. Solutions are then critical points of the energy functional
$$
{\mathcal J}_0 (u) := \frac12 |\nabla u|_2^2 - \frac{\mu}{2} \int_\Omegaega \frac{u^2}{|x|^2} \, dx - \frac{\nu}{2} \int_\Omegaega \frac{u^2}{d(x)^2} \, dx - \int_\Omegaega F(x,u) \, dx
$$
restricted to the $L^2$-sphere in $H^1_0(\Omegaega)$
$$
{\mathcal S} := \left\{ u \in H^1_0(\Omegaega) \ : \ \int_\Omegaega u^2 \, dx = \rho \right\}
$$
and $\lambda$ arises as a Lagrange multiplier.
We recall the well-known Gagliardo-Nirenberg inequality
\betagin{equation}\label{gn-ineq}
|u|_p \leq C_{p,N} |\nabla u|_2^{\deltalta_p} |u|_2^{1-\deltalta_p}, \quad u \in H^1_0(\Omegaega),
\end{equation}
where $\deltalta_p := N \left(\frac12 - \frac{1}{p} \right)$ and $C_{p,N} > 0$ is the optimal constant.
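For instance, for $N = 3$ one has $2_* = 2 + \frac43 = \frac{10}{3}$, and for the mass-subcritical exponent $p = 3 < 2_*$ we get
$$
\delta_p = 3\left( \frac12 - \frac13 \right) = \frac12, \qquad \delta_p p = \frac32 < 2,
$$
so the right-hand side of \eqref{gn-ineq} raised to the power $p$ grows subquadratically in $|\nabla u|_2$; this is the mechanism behind the coercivity in Lemma \ref{coercive}.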
\betagin{Lem}\label{coercive}
${\mathcal J}_0$ is coercive and bounded from below on ${\mathcal S}$.
\end{Lem}
\betagin{proof}
Using (F1), \eqref{hardy1}, \eqref{hardy2} and \eqref{gn-ineq}, together with $|u|_2^2 = \rho$ on ${\mathcal S}$, we obtain
\begin{align*}
{\mathcal J}_0(u) &= \frac12 |\nabla u|_2^2 - \frac{\mu}{2} \int_\Omega \frac{u^2}{|x|^2} \, dx - \frac{\nu}{2} \int_\Omega \frac{u^2}{d(x)^2} \, dx - \int_\Omega F(x,u) \, dx \\
&\geq \frac12 \left(1 - \frac{4\mu}{(N-2)^2} - 4\nu \right) |\nabla u|_2^2 -C_1 |\Omega| - C_1|u|_p^p \\
&\geq \frac12 \left(1 - \frac{4\mu}{(N-2)^2} - 4\nu \right) |\nabla u|_2^2 - C_1|\Omega|- C\left(|\nabla u|_2^{\delta_p} |u|_2^{1-\delta_p} \right)^p\\
&\geq \frac12 \left(1 - \frac{4\mu}{(N-2)^2} - 4\nu \right) |\nabla u|_2^2 -C_1|\Omega| - C|\nabla u|_2^{\delta_p p},
\end{align*}
where
$$
\deltalta_p p = N \left( \frac12 - \frac1p \right) p = N \left( \frac{p}{2}-1 \right) < N \cdot \frac{2}{N} = 2.
$$
Since $\delta_p p < 2$, it follows that ${\mathcal J}_0$ is coercive and bounded from below on ${\mathcal S}$.
\end{proof}
\betagin{Lem}
${\mathcal J}_0$ satisfies the Palais-Smale condition on ${\mathcal S}$, i.e. any Palais-Smale sequence for ${\mathcal J}_0 |_{\mathcal S}$ has a convergent subsequence.
\end{Lem}
\begin{proof}
In this section, with a slight abuse of notation, we write $\|u\|^2 := |\nabla u|_2^2 - \mu \int_\Omega \frac{u^2}{|x|^2} \, dx - \nu \int_\Omega \frac{u^2}{d(x)^2} \, dx$ and denote by $\langle \cdot , \cdot \rangle$ the corresponding bilinear form; by \eqref{hardy1}, \eqref{hardy2}, (N) and the Poincar\'e inequality this is a norm on $H^1_0(\Omega)$ equivalent to the usual one. Let $(u_n) \subset {\mathcal S}$ be a Palais-Smale sequence for ${\mathcal J}_0 |_{\mathcal S}$. Then Lemma \ref{coercive} implies that $(u_n)$ is bounded in $H_0^1 (\Omega)$. Hence we may assume that (up to a subsequence)
\betagin{align*}
u_n \rightharpoonup u \quad & \mbox{in } H^1_0(\Omegaega), \\
u_n \to u \quad & \mbox{in } L^p (\Omegaega), \\
u_n \to u \quad & \mbox{a.e. on } \Omegaega.
\end{align*}
Moreover
$$
{\mathcal J}_0'(u_n) + \lambda_n u_n \to 0 \quad \mbox{in } H^{-1}(\Omegaega) := (H^1_0 (\Omegaega))^*
$$
for some $\lambda_n \in \mathbb{R}$. In particular
$$
{\mathcal J}_0'(u_n)(u_n) + \lambda_n |u_n|_2^2 \to 0.
$$
Note that
$$
\lambda_n = - \frac{{\mathcal J}_0'(u_n)(u_n)}{|u_n|_2^2} + o(1)= - \frac{\|u_n\|^2 - \int_\Omegaega f(x,u_n)u_n \, dx}{|u_n|_2^2} + o(1).
$$
Observe that, from (F1)
$$
\left| \int_\Omegaega f(x,u_n)u_n \, dx \right| \lesssim 1 + |u_n|_p^p \lesssim 1.
$$
Therefore $(\lambda_n) \subset \mathbb{R}$ is bounded, and (up to a subsequence) $\lambda_n \to \lambda_0$. Therefore, up to a subsequence,
\betagin{align*}
o(1) &= {\mathcal J}_0'(u_n)(u_n) + \lambda_n |u_n|_2^2 - {\mathcal J}_0'(u_n)(u) - \lambda_n \int_{\Omegaega} u_n u \, dx \\
&= {\mathcal J}_0'(u_n)(u_n) - {\mathcal J}_0'(u_n)(u) + \lambda_n \int_{\Omegaega} u_n (u_n-u) \, dx \\
&= \|u_n\|^2 - \langle u_n, u \rangle - \int_{\Omegaega} f(x,u_n) (u_n-u) \, dx + \lambda_n \int_\Omegaega u_n (u_n-u) \, dx.
\end{align*}
It is clear that $\langle u_n, u\rangle \to \|u\|^2$ and that
$$
\left| \lambda_n \int_\Omega u_n (u_n-u) \, dx \right| \lesssim |u_n|_2 |u_n-u|_2 \to 0.
$$
Moreover, from (F1)
\begin{align*}
\left| \int_\Omega f(x,u_n)(u_n-u) \, dx \right| \lesssim |u_n-u|_2 + |u_n|_p^{p-1} |u_n-u|_p \to 0.
\end{align*}
Hence $\|u_n\| \to \|u\|$ and therefore $u_n \to u$ in $H^1_0(\Omega)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main:3}]
From \cite[Theorem II.5.7]{Str} we obtain that ${\mathcal J}_0$ has at least $\hat{\gamma}({\mathcal S})$ critical points, where
$$
\hat{\gamma} ({\mathcal S}) := \sup \{ \gamma(K) \ : \ K \subset {\mathcal S} \mbox{ symmetric and compact} \}
$$
and $\gamma$ denotes the Krasnoselskii genus for symmetric and compact sets. We will show that $\hat{\gamma}({\mathcal S}) = +\infty$. Indeed, fix $k \in \mathbb{N}$. It is sufficient to construct a symmetric and compact set $K \subset {\mathcal S}$ with $\gamma(K) = k$. Choose functions $w_1, w_2, \ldots, w_k \in {\mathcal C}_0^\infty (\Omega) \cap {\mathcal S}$ with pairwise disjoint supports, namely $w_i w_j = 0$ for $i \neq j$. Now we set
$$
K := \left\{ \sum_{i=1}^k \alpha_i w_i \in {\mathcal S} \ : \ \sum_{i=1}^k \alpha_i^2 = 1 \right\}.
$$
It is clear that $K \subset {\mathcal S}$ is symmetric and compact. We will show that $\gamma(K) = k$. In what follows $S^{m-1}$ denotes the $(m-1)$-dimensional sphere in $\mathbb{R}^m$ of radius $1$ centered at the origin. Note that $h : K \rightarrow S^{k-1}$ given by
$$
K \ni \sum_{i=1}^k \alpha_i w_i \mapsto h \left( \sum_{i=1}^k \alpha_i w_i \right) := (\alpha_1, \ldots, \alpha_k) \in S^{k-1}
$$
is an odd homeomorphism. Hence $\gamma(K) \leq k$. Suppose by contradiction that $\gamma(K) < k$. Then there is a continuous and odd function $\widetilde{h} : K \rightarrow S^{\gamma(K) -1}$. However, $\widetilde{h} \circ h^{-1} : S^{k-1} \rightarrow S^{\gamma(K)-1}$ is an odd, continuous map, which contradicts the Borsuk-Ulam theorem \cite[Proposition II.5.2]{Str}, \cite[Theorem D.17]{Willem}. Hence $\gamma(K) = k$.
\end{proof}
\end{document} |
\begin{document}
\begin{abstract}
In this text, we generalize \v Cech cohomology to sheaves $\mathcal F$ with values in blue $B$-modules where $B$ is a blueprint with $-1$. If $X$ is an object of the underlying site, then the cohomology sets $H^l(X,\mathcal F)$ turn out to be blue $B$-modules. For a locally free $\mathcal O_X$-module $\mathcal F$ on a monoidal scheme $X$, we prove that $H^l(X,\mathcal F)^+=H^l(X^+,\mathcal F^+)$ where $X^+$ is the scheme associated with $X$ and $\mathcal F^+$ is the locally free $\mathcal O_{X^+}$-module associated with $\mathcal F$.
In an appendix, we show that the naive generalization of cohomology as a right derived functor is infinite-dimensional for the projective line over $\mathbb F_1$.
\end{abstract}
\title{\v Cech cohomology over $\Funsq$}
\section*{Introduction}
\label{section: introduction}
\noindent
While many standard methods in algebraic geometry carry over readily to ${\mathbb F}un$-geometry, other methods resist a straightforward generalization since essential properties from usual algebraic geometry fail to hold or produce unexpected results.
Sheaf cohomology with values in categories over ${\mathbb F}un$ belongs to the latter class of theories. Though methods from homological algebra generalize without great difficulties to injective resolutions of sheaves on ${\mathbb F}un$-schemes (see \cite{Deitmar11b}), the derived cohomology sets are larger than one would expect. For instance, the first cohomology set $H^1(X,{\mathcal O}_{X})$ of the projective line $X={\mathbb P}^1_{\mathbb F}un$ over ${\mathbb F}un$ is of infinite rank over ${\mathbb F}un$, cf.\ Appendix \ref{appendix: cohomology of p1}.
There have been some ad hoc observations for the projective line ${\mathbb P}^1_{\mathbb F}un$ in \cite{Connes-Consani10a}, for which \v Cech cohomology works well as long as the chosen covering consists of at most two open sets. For larger coverings, however, it is not clear how to make sense of the alternating sums in the definition of \v Cech cohomology.\footnote{During the time of writing, Jaiung Jun has published his preprint \cite{Jun15} on \v Cech cohomology for semirings. His method of double complexes might be applicable to the setting of this paper.}
This problem resolves naturally for sheaves over ${\mathbb F}unsq$, since ${\mathbb F}unsq$ contains an additive inverse $-1$ of $1$, i.e.\ it bears a relation $1+(-1)=0$. This leads naturally to the theory of blueprints, which deals with multiplicative monoids that come together with certain additive relations that might be weaker than an addition.
The aim of this paper is to define \v Cech cohomology for sheaves with values in blue $B$-modules where $B$ is a blueprint with $-1$, and to show that this leads to a meaningful theory.
We calculate the cohomology of a monoidal scheme $X$ in terms of a comparison with the cohomology of its associated scheme $X^+$, which is also denoted by ``$X\otimes_{\mathbb F}un{\mathbb Z}$'' in the literature. For this comparison, we impose the following mild technical hypothesis on an open covering $\{U_i\}_{i\in {\mathcal I}}$ of $X$.
\noindent\textbf{Hypothesis (H):} For all finite subsets $J\subset I$ of ${\mathcal I}$, the restriction map
\[
\res_{U_J,U_I}:{\mathcal O}_X(U_J)\to{\mathcal O}_X(U_I)
\]
is injective.
The following is Theorem \ref{thm: comparison of cech cohomology} of the main text.
\begin{thmA}
Let $X$ be a monoidal scheme over $B$ that admits a finite covering $\{U_i\}$ satisfying Hypothesis (H) such that the ${\mathcal O}_X(U_i)$ are monoid blueprints over $B$. Then we have for every locally free sheaf ${\mathcal F}$ on $X$ that
\[
H^l(X,{\mathcal F})^+ \ = \ H^l(X^+,{\mathcal F}^+).
\]
\end{thmA}
Note that the class of monoidal schemes with a covering satisfying Hypothesis (H) contains, in particular, a model for every toric variety. Therefore the results of this paper might be helpful for calculations of sheaf cohomology for toric varieties, cf.\ Remark \ref{rem: cohomology for toric varieties}.
In the first part of this paper, we define \v Cech cohomology for sheaves of blue $B$-modules on an arbitrary site. We choose this general formulation because it is applicable to arithmetic questions like the \'etale cohomology of the compactification $\overline{{\mathbb S}pec {\mathbb Z}}$ of the arithmetic line; see \cite{L14} for a model of $\overline{{\mathbb S}pec {\mathbb Z}}$ and some ideas towards such a theory.
In the second part of this paper, we introduce the notion of monoidal schemes over a blueprint $B$, which extends the notion of monoidal schemes from ${\mathbb F}un$ to any blueprint, and we discuss the notion of locally free sheaves. In a final section, we formulate and prove our main result Theorem A.
Since there are several introductions to blueprints and blue schemes, we do not provide another one in this text, but refer the reader to the literature where necessary. As a general reference, we suggest the overview paper \cite{L13}. In particular, the reader will find the definition of a blueprint in section II.1.1, and the definition of a blue $B$-module in section II.6.1 of that paper.
\part{\v Cech cohomology over ${\mathbb F}unsq$}
\label{part: cech cohomology}
\section{Definition for a fixed covering}
\label{section: definition for a fixed covering}
\noindent
In this part of the paper, we consider a site ${\mathcal T}$ and an object $X$ of this site. We assume that ${\mathcal T}$ contains fibre products, so that we have a notion of covering families ${\mathcal U}=\{U_i\}_{i\in {\mathcal I}}$ of $X$. We will define \v Cech cohomology for $X$ with values in sheaves of blue $B$-modules where $B$ is a blueprint with $-1$, which can also be thought of as an ${\mathbb F}unsq$-algebra.
Throughout this part of the paper, we fix the site ${\mathcal T}$ and the object $X$. For this section, we also fix the covering family ${\mathcal U}$ and aim to define the \v Cech cohomology $H^l(X,{\mathcal F};{\mathcal U})$ w.r.t.\ ${\mathcal U}$.
A blueprint with $-1$ is a blueprint that has an element $-1$ that satisfies the additive relation $1+(-1)\equiv0$. This element is necessarily unique, which means that there is a unique blueprint morphism ${\mathbb F}unsq\to B$ from
\[
{\mathbb F}unsq \ = \ {\mathcal{B}lpr}genquot{\{0,1,-1\}} {1+(-1)\equiv0}
\]
to $B$. By multiplying the defining relation for $-1$ with an arbitrary element $a$ of $B$, we see that $-a=(-1)\cdot a$ is an \emph{additive inverse of $a$}, i.e.\ it satisfies the relation $a+(-a)\equiv0$.
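The last assertion can be unwound in a single display, using only that pre-additions are stable under multiplication:

```latex
% Multiplying 1 + (-1) \equiv 0 by a and using multiplicativity of the
% pre-addition gives
\[
  a \ + \ (-1)\cdot a \ \equiv \ a\cdot\bigl(1+(-1)\bigr) \ \equiv \ a\cdot 0 \ \equiv \ 0 ,
\]
% which is the relation a + (-a) \equiv 0 with -a := (-1) \cdot a.
```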
Let ${\mathcal M}od_B^{\textup{bl}}$ be the category of blue $B$-modules and ${\mathcal F}$ a sheaf on ${\mathcal T}$ with values in ${\mathcal M}od_B^{\textup{bl}}$. Let ${\mathcal U}=\{U_i\}_{i\in{\mathcal I}}$ be a covering family of $X$ where ${\mathcal I}$ is a totally ordered index set.
\begin{df}
For $l\geq 0$, we denote by ${\mathcal I}_l$ the family of all subsets $I$ of ${\mathcal I}$ with cardinality $l+1$. For such a subset, we write $I=(i_0,\dotsc,i_l)$ if $I=\{i_0,\dotsc,i_l\}$ and $i_0<\dotsb<i_l$. We define
\[
U_I \ = \ U_{i_0}\times_X\dotsb\times_X U_{i_l} \qquad\text{and}\qquad {\mathcal F}_I \ = \ {\mathcal F}(U_I),
\]
which is a blue $B$-module. Given $I\in{\mathcal I}_l$ and $k\in\{0,\dotsc,l\}$, we denote by $I^k$ the set $\{i_0,\dotsc,\widehat{i_k},\dotsc,i_l\}$. The canonical projection $U_I\to U_{I^k}$ onto all factors but $U_{i_k}$ defines a morphism
\[
\partial_{k,I}^{(l)}: \ {\mathcal F}_{I^k} \ \longrightarrow \ {\mathcal F}_I.
\]
If we define
\[
{\mathcal C}^l \ = \ \prod_{I\in{\mathcal I}_l}{\mathcal F}_I
\]
then the morphisms $\partial_{k,I}^{(l)}$ for varying $I$ define a morphism
\[
\partial_k^{(l)}: \ {\mathcal C}^{l-1} \ \longrightarrow \ {\mathcal C}^{l}
\]
for every $k=0,\dotsc,l$. The \emph{\v Cech complex of ${\mathcal U}$ with values in ${\mathcal F}$} is the cosimplicial blue $B$-module
\[
{\mathcal C}^\bullet \ = \ {\mathcal C}^\bullet(X,{\mathcal F};{\mathcal U}) \ = \ \Biggl( \xymatrix{{\mathcal C}^0 \ar@<0.5ex>[r]^{\partial_0^{(1)}}\ar@<-0.5ex>[r]_{\partial_1^{(1)}} & {\mathcal C}^1 \ar@<1ex>[r]^{\partial_0^{(2)}}\ar[r]\ar@<-1ex>[r]_{\partial_2^{(2)}} & {\mathcal C}^2 \ar@<1.5ex>[r]\ar@<0.5ex>[r]\ar@<-0.5ex>[r]\ar@<-1.5ex>[r] & {\mathcal C}^3 \quad\dotsb \quad} \Biggr).
\]
\end{df}
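To fix ideas, the construction can be spelled out in the smallest nontrivial case; this is only an illustration, not part of the formal development. For a covering by two opens $U_0,U_1$, so that ${\mathcal I}=\{0,1\}$, one gets:

```latex
% Illustration for {\mathcal U} = \{U_0, U_1\}: here {\mathcal I}_0 consists of
% the singletons \{0\} and \{1\}, {\mathcal I}_1 = \{\{0,1\}\}, and {\mathcal I}_l
% is empty for l \geq 2, so that
\[
  {\mathcal C}^0 \ = \ {\mathcal F}(U_0)\times{\mathcal F}(U_1),
  \qquad
  {\mathcal C}^1 \ = \ {\mathcal F}(U_0\times_X U_1),
  \qquad
  {\mathcal C}^l \ = \ 0 \quad\text{for } l\geq 2,
\]
% and the two coboundary maps \partial_0^{(1)} and \partial_1^{(1)} are
% induced by the projections U_0 \times_X U_1 \to U_1 and
% U_0 \times_X U_1 \to U_0, respectively.
```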
\begin{rem} \label{rem: total ceh complex}
In practice, the index set ${\mathcal I}$ is often finite. Then the \v Cech complex is finite since ${\mathcal C}^l$ is the empty product, i.e.\ ${\mathcal C}^l=0$, if $l\geq\#{\mathcal I}$.
This cosimplicial set is often called the \emph{ordered \v Cech complex} in the literature, in contrast to the \emph{total \v Cech complex ${\mathcal C}_\textup{tot}^\bullet$} with ${\mathcal C}_{\textup{tot}}^l=\prod{\mathcal F}_{\{i_0,\dotsc,i_l\}}$ where the product is taken over all elements $(i_0,\dotsc,i_l)\in{\mathcal I}^{l+1}$ without any assumption on the ordering or distinctness of the $i_k$'s.
\end{rem}
\begin{df}
Let ${\mathcal C}^\bullet$ be a cosimplicial blue $B$-module. The \emph{set of $l$-cocycles of ${\mathcal C}^\bullet$} is
\[
{\mathcal Z}^l \ = \ {\mathcal Z}^l({\mathcal C}^\bullet) \ = \ \biggl\{\ x\in{\mathcal C}^l \ \biggl|\ \sum_{k=0}^{l+1} (-1)^k\partial_k^{(l+1)}(x) \equiv 0 \ \bigg\}
\]
which we consider as a full blue $B$-submodule of ${\mathcal C}^l$, i.e.\ the pre-addition of ${\mathcal Z}^l$ is the restriction of the pre-addition of ${\mathcal C}^l$ to ${\mathcal Z}^l$. The \emph{set of $l$-coboundaries} is
\[
{\mathcal B}^l \ = \ {\mathcal B}^l({\mathcal C}^\bullet) \ = \ \biggl\{\ x\in{\mathcal C}^l \ \biggl|\ \exists y=\sum_i y_i \in({\mathcal C}^{l-1})^+\text{ such that }x \equiv \sum_i\sum_{k=0}^{l} (-1)^k\partial_k^{(l)}(y_i) \ \bigg\},
\]
which is considered as a full blue $B$-submodule of ${\mathcal C}^l$. For the case $l=0$, we use ${\mathcal C}^{-1}=\{0\}$.
If ${\mathcal C}^\bullet={\mathcal C}^\bullet(X,{\mathcal F};{\mathcal U})$, then we also write ${\mathcal Z}^l(X,{\mathcal F};{\mathcal U}) \ = \ {\mathcal Z}^l({\mathcal C}^\bullet)$ and ${\mathcal B}^l(X,{\mathcal F};{\mathcal U}) \ = \ {\mathcal B}^l({\mathcal C}^\bullet)$. In this case, we have
\[
{\mathcal Z}^l(X,{\mathcal F};{\mathcal U}) \ = \ \biggl\{\ (a_I)\in \prod_{I\in{\mathcal I}_l}{\mathcal F}_I \ \biggl|\ \forall J\in{\mathcal I}_{l+1},\quad \sum_{k=0}^{l+1} (-1)^k\partial_{k,J}^{(l+1)}(a_{J^k}) \equiv 0 \ \bigg\}
\]
and
\[
{\mathcal B}^l(X,{\mathcal F};{\mathcal U}) \ = \ \biggl\{\ (a_I)\in \prod_{I\in{\mathcal I}_l}{\mathcal F}_I \ \biggl|\ \exists (b_J)\in \prod_{J\in{\mathcal I}_{l-1}}{\mathcal F}_J^+,\quad \forall I\in{\mathcal I}_{l},\ \ a_I \equiv \sum_{k=0}^{l} (-1)^k\partial_{k,I}^{(l)}(b_{I^k})\ \bigg\}.
\]
Here we define $\partial_{k,I}^{(l)}(b_{J})=\sum_j\partial_{k,I}^{(l)}(b_{J,j})$ for $b_{J}=\sum_j b_{J,j}\in{\mathcal F}_J^+$ and $J=I^k$.
\end{df}
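To make the definitions concrete, we spell out the first nontrivial instance of the cocycle condition; this is merely an unwinding of the formulas above.

```latex
% For l = 1 and J = (j_0, j_1, j_2) \in {\mathcal I}_2, the cocycle condition
% on (a_I) \in {\mathcal C}^1 reads
\[
  \partial_{0,J}^{(2)}(a_{(j_1,j_2)}) \ - \ \partial_{1,J}^{(2)}(a_{(j_0,j_2)})
  \ + \ \partial_{2,J}^{(2)}(a_{(j_0,j_1)}) \ \equiv \ 0 ,
\]
% i.e. the familiar relation a_{j_1 j_2} - a_{j_0 j_2} + a_{j_0 j_1} \equiv 0
% after restriction to U_J. Note that it is the additive inverse -1 in B
% that makes this alternating sum meaningful.
```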
\begin{lemma}
For every $l\geq0$, we have ${\mathcal B}^l({\mathcal C}^\bullet)\subset {\mathcal Z}^l({\mathcal C}^\bullet)$.
\end{lemma}
\begin{proof}
Let $(a_I)\in{\mathcal B}^l({\mathcal C}^\bullet)$, i.e.\ there is an element $(b_J)\in({\mathcal C}^{l-1})^+$ such that
\[
a_I \ \equiv \ \sum_{k=0}^{l} (-1)^k\partial_k^{(l)}(b_{I^k})
\]
for all $I\in{\mathcal I}_l$. Then we have for every $L\in{\mathcal I}_{l+1}$
\[
\sum_k (-1)^k\partial_k^{(l+1)}(a_{L^k}) \ \equiv \ \sum_{k}\sum_{k'\neq k} (-1)^k \ \partial_{k}^{(l+1)}\circ\partial_{k'}^{(l)}\,\Bigl((-1)^{k'+\epsilon}b_{L^{k,k'}}\Bigr)
\]
where $\epsilon=0$ if $k'<k$ and $\epsilon=1$ if $k'>k$, and $L^{k,k'}$ is obtained from $L$ by removing its $k$-th and $k'$-th elements. Since $\partial_{k}^{(l+1)}\circ\partial_{k'}^{(l)}=\partial_{k'}^{(l+1)}\circ\partial_{k}^{(l)}$, the above sum equals
\[
\sum_{k'< k} (-1)^{k+k'}\ \partial_{k}^{(l+1)}\circ\partial_{k'}^{(l)}\,\bigl(b_{L^{k,k'}}\bigr) \quad + \quad \sum_{k< k'} (-1)^{k+k'+1}\ \partial_{k}^{(l+1)}\circ\partial_{k'}^{(l)}\,\bigl(b_{L^{k,k'}}\bigr) \quad \equiv \quad 0,
\]
which shows that $(a_I)\in{\mathcal Z}^l({\mathcal C}^\bullet)$.
\end{proof}
\begin{df}
The \emph{$l$-th \v Cech cohomology of $X$ w.r.t\ ${\mathcal U}$ and with values in ${\mathcal F}$} is defined as the quotient
\[
H^l({\mathcal C}^\bullet) \ = \ {\mathcal Z}^l({\mathcal C}^\bullet)\ / \ {\mathcal B}^l({\mathcal C}^\bullet)
\]
of blue $B$-modules. If ${\mathcal C}^\bullet={\mathcal C}^\bullet(X,{\mathcal F};{\mathcal U})$, then we also write $H^l(X,{\mathcal F};{\mathcal U}) \ = \ H^l({\mathcal C}^\bullet)$.
\end{df}
Recall that a morphism $\Psi: {\mathcal C}^\bullet\to{\mathcal D}^\bullet$ of cosimplicial blue $B$-modules is a collection of morphisms $\psi_l:{\mathcal C}^l\to{\mathcal D}^l$ of blue $B$-modules for all $l\geq0$ that commute with the respective coboundary maps $\partial_k^{(l)}$ of ${\mathcal C}^\bullet$ and ${\mathcal D}^\bullet$, i.e.\ $\partial_{k}^{(l)}\circ\psi_{l-1}=\psi_l\circ\partial_{k}^{(l)}$ for all $l\geq 1$ and $0\leq k\leq l$.
\begin{lemma}\label{lemma: morphisms induce maps between cohomology}
Let $\Psi: {\mathcal C}^\bullet\to{\mathcal D}^\bullet$ be a morphism of cosimplicial blue $B$-modules. Then
\[
\psi_l({\mathcal Z}^l({\mathcal C}^\bullet))\subset{\mathcal Z}^l({\mathcal D}^\bullet) \qquad \text{and} \qquad \psi_l({\mathcal B}^l({\mathcal C}^\bullet))\subset{\mathcal B}^l({\mathcal D}^\bullet).
\]
Consequently, $\Psi$ induces a morphism
\[
H^l({\mathcal C}^\bullet) \ = \ {\mathcal Z}^l({\mathcal C}^\bullet) \ / \ {\mathcal B}^l({\mathcal C}^\bullet) \quad \longrightarrow \quad {\mathcal Z}^l({\mathcal D}^\bullet) \ / \ {\mathcal B}^l({\mathcal D}^\bullet) \ = \ H^l({\mathcal D}^\bullet)
\]
for every $l\geq0$.
\end{lemma}
\begin{proof}
Let $x\in{\mathcal Z}^l({\mathcal C}^\bullet)$, i.e.\ $\sum(-1)^k\partial_k^{(l+1)}(x)\equiv0$. Then $\psi_l(x)\in{\mathcal D}^l$ satisfies
\[
\sum_{k=0}^{l+1} \ (-1)^k \ \partial_k^{(l+1)}(\psi_l(x)) \quad \equiv \quad \sum_{k=0}^{l+1} \ (-1)^k \ \psi_{l+1}\bigl(\partial_k^{(l+1)}(x)\bigr) \quad \equiv \quad \psi_{l+1}(0) \quad \equiv \quad 0.
\]
This shows that $\psi_l(x)\in{\mathcal Z}^l({\mathcal D}^\bullet)$. Let $x\in{\mathcal B}^l({\mathcal C}^\bullet)$, i.e.\ there exists a $y \in({\mathcal C}^{l-1})^+$ with $x\equiv\sum(-1)^k\partial_k^{(l)}(y)$. Then we have
\[
\psi_l(x) \quad \equiv \quad \sum(-1)^k\psi_l\Bigl(\partial_k^{(l)}(y)\Bigr) \quad \equiv \quad \sum(-1)^k\partial_k^{(l)}\bigl(\psi_{l-1}(y)\bigr),
\]
which shows that $\psi_l(x)\in{\mathcal B}^l({\mathcal D}^\bullet)$.
\end{proof}
Next, we prove that the \v Cech cohomology w.r.t.\ ${\mathcal U}$ does not depend on the ordering of the index set ${\mathcal I}$. Note that the definition of the \v Cech complex ${\mathcal C}^\bullet$ is independent of the ordering of ${\mathcal I}$.
\begin{prop}
For $l\geq0$, the subsets ${\mathcal B}^l$ and ${\mathcal Z}^l$ of ${\mathcal C}^l$ are independent of the ordering of ${\mathcal I}$. Consequently, $H^l(X,{\mathcal F};{\mathcal U})$ does not depend on the ordering of ${\mathcal I}$.
\end{prop}
\begin{proof}
The usual argument works in this context: we show that ${\mathcal C}^\bullet$ is isomorphic to the \emph{alternating \v Cech complex ${\mathcal C}_\textup{alt}^\bullet$} as a cosimplicial blue $B$-module. The blue $B$-module ${\mathcal C}^l_\textup{alt}$ consists of all elements $(a_I)$ of ${\mathcal C}^l_\textup{tot}$ that satisfy
\[
a_I \ = \ 0
\]
if $I=(i_0,\dotsc,i_l)$ with $i_k=i_{k'}$ for some $k\neq k'$, and
\[
a_{\sigma I} \ = \ \sign(\sigma) \, a_{I}
\]
for a permutation $\sigma\in S_{l+1}$ and $\sigma I=(i_{\sigma(0)},\dotsc,i_{\sigma(l)})$. This defines a cosimplicial subset ${\mathcal C}^\bullet_\textup{alt} $ of ${\mathcal C}^\bullet_\textup{tot}$. Consider the following morphisms of blue $B$-modules
\[
\begin{array}{cccc}
\pi: & {\mathcal C}^l_\textup{alt} & \longrightarrow & {\mathcal C}^l \\
& (a_I)_{I\in{\mathcal I}^{l+1}} & \longmapsto & (a_I)_{I\in{\mathcal I}_{l}}
\end{array}
\]
and
\[
\begin{array}{cccc}
\iota: & {\mathcal C}^l & \longrightarrow & {\mathcal C}^l_\textup{alt} \\
& (a_I)_{I\in{\mathcal I}_l} & \longmapsto & (\widetilde{a_I})_{I\in{\mathcal I}^{l+1}}
\end{array}
\]
with $\widetilde{a_I}=\sign(\sigma)a_{\sigma I}$ if $\sigma I\in{\mathcal I}_l$ and $\widetilde{a_I}=0$ if $I=(i_0,\dotsc,i_l)$ with $i_k=i_{k'}$ for some $k\neq k'$. As in the usual case of \v Cech cohomology with values in abelian categories, it is easily verified that $\iota$ and $\pi$ are mutually inverse isomorphisms.
If $\widetilde{\mathcal I}$ is the index set ${\mathcal I}$ with a different ordering and $\tilde\pi:{\mathcal C}^\bullet_\textup{alt}\to{\mathcal C}^\bullet$ is the isomorphism with respect to this ordering, then the automorphism $\tilde\pi\circ\iota:{\mathcal C}^\bullet\to{\mathcal C}^\bullet$ sends the set ${\mathcal Z}^l$ of $l$-cocycles w.r.t.\ the ordering of ${\mathcal I}$ to the set $\widetilde{\mathcal Z}^l$ of $l$-cocycles w.r.t.\ the ordering of $\widetilde{\mathcal I}$. More precisely, $\tilde\pi\circ\iota$ sends $a_I$ to $\sign(\sigma)a_{\sigma I}$ where $\sigma$ is the permutation such that $\sigma I$ is ordered w.r.t.\ the ordering of ${\mathcal I}$. Since $B$ is a blueprint with $-1$, we see that $\widetilde{\mathcal Z}^l={\mathcal Z}^l$.
Similarly, $\tilde\pi\circ\iota$ restricts to an automorphism of ${\mathcal B}^l$. This shows the claim of the proposition.
\end{proof}
\section{Refinements}
\label{section: refinements}
\noindent
In this section, we show that the \v Cech cohomology $H^l(X,{\mathcal F};{\mathcal U})$ is functorial in refinements, so that we can form the colimit $H^l(X,{\mathcal F})=\colim H^l(X,{\mathcal F};{\mathcal U})$, which no longer depends on the choice of a covering family of $X$.
\begin{df}
A \emph{refinement of a covering family ${\mathcal U}=\{U_i\}_{i\in{\mathcal I}}$} is a covering family ${\mathcal V}=\{V_j\}_{j\in{\mathcal J}}$ together with a map $\varphi:{\mathcal J}\to{\mathcal I}$ and a morphism $\varphi_j:V_j\to U_{\varphi(j)}$ for every $j\in{\mathcal J}$. We write $\Phi:{\mathcal V}\to {\mathcal U}$ for such a refinement.
\end{df}
Given a refinement $\Phi:{\mathcal V}\to {\mathcal U}$ of ${\mathcal U}$, we get induced maps $\varphi:{\mathcal J}_l\to{\mathcal I}_l$ that send $J=\{j_0,\dotsc,j_l\}$ to $\varphi(J)=\{\varphi(j_0),\dotsc,\varphi(j_l)\}$ and morphisms
\[
\varphi_J: \quad V_J \ = \ V_{j_0}\times_X \dotsb \times_X V_{j_l} \quad \longrightarrow \quad U_{\varphi(j_0)}\times_X \dotsb \times_X U_{\varphi(j_l)} \ = \ U_{\varphi(J)}
\]
for every $J\in{\mathcal J}_l$ and $l\geq 0$. This defines, in turn, a morphism $\psi_l:{\mathcal C}^l(X,{\mathcal F};{\mathcal U})\to{\mathcal C}^l(X,{\mathcal F};{\mathcal V})$ for every $l\geq0$. The morphisms $\psi_l$ commute with the respective coboundary morphisms $\partial_k^{(l)}$ of ${\mathcal C}^\bullet(X,{\mathcal F};{\mathcal V})$ and ${\mathcal C}^\bullet(X,{\mathcal F};{\mathcal U})$. Thus $\Phi:{\mathcal V}\to{\mathcal U}$ induces a morphism $\Psi:{\mathcal C}^\bullet(X,{\mathcal F};{\mathcal U})\to{\mathcal C}^\bullet(X,{\mathcal F};{\mathcal V})$ of cosimplicial blue $B$-modules, which maps cocycles to cocycles and coboundaries to coboundaries. This means that we get a morphism
\[
\Psi: \quad H^l(X,{\mathcal F};{\mathcal U}) \quad \longrightarrow \quad H^l(X,{\mathcal F};{\mathcal V})
\]
from the \v Cech cohomology w.r.t.\ ${\mathcal U}$ to the \v Cech cohomology w.r.t.\ ${\mathcal V}$.
\begin{df}
The \emph{\v Cech cohomology of $X$ with values in ${\mathcal F}$} is defined as the colimit
\[
H^l(X,{\mathcal F}) \ = \ \colim\ H^l(X,{\mathcal F};{\mathcal U})
\]
over the system of all covering families ${\mathcal U}$ of $X$ together with all refinements $\Phi:{\mathcal V}\to{\mathcal U}$ of covering families.
\end{df}
\part{Cohomology of monoidal schemes}
\label{part: monoidal schemes}
\section{Monoidal schemes over a blueprint}
\label{section: monoidal schemes}
\noindent
Monoidal schemes a.k.a.\ monoid schemes a.k.a.\ ${\mathbb F}un$-schemes (in the sense of Deitmar, \cite{Deitmar05}, or To\"en and Vaqui\'e, \cite{Toen-Vaquie09}) form the core of ${\mathbb F}un$-geometry in the sense that they appear as a natural subclass in every approach towards ${\mathbb F}un$-schemes.
In this section, we introduce monoidal schemes over a blueprint $B$ as certain blue schemes over $B$. Note that we refer to the notion of blue schemes from \cite{L15}, which can be seen as an improvement of the original definition in terms of prime ideals, as contained in \cite{L13}. If $B$ happens to be a global blueprint, e.g.\ a monoid, a ring or a blue field, then both definitions give rise to an equivalent theory. In this case, one can also adopt the viewpoint of To\"en and Vaqui\'e in \cite{Toen-Vaquie09}, which yields yet another theory in general.
To start with, we adapt the concept of a semigroup ring to the context of blueprints. By a \emph{monoid}, we mean a commutative and associative semigroup with neutral element $1$ and absorbing element $0$. All monoids will be written multiplicatively. A monoid morphism is a multiplicative map that sends $1$ to $1$ and $0$ to $0$.
Let $B={\mathcal{B}lpr}quot{A}{{\mathcal R}}$ be a blueprint and $M$ a monoid. The \emph{monoid blueprint of $M$ over $B$} is the blueprint $B[M]={\mathcal{B}lpr}quot{A_M}{{\mathcal R}_M}$ that is defined as follows. The monoid $A_M$ is the smash product $A\wedge M$, which is the quotient of $A\times M$ by the equivalence relation that is generated by the relations $(a,0)\sim (b,0)$ and $(0,m)\sim(0,n)$ with $a,b\in A$ and $m,n\in M$. The pre-addition ${\mathcal R}_M$ is generated by the set of additive relations
\[
\bigl\{ \ \sum(a_i,1)\equiv\sum(b_j,1) \ \bigl| \ \sum a_i\equiv\sum b_j\text{ in }B \ \bigr\}.
\]
Note that $B[M]$ has the universal property that any pair of a blueprint morphism $B\to C$ and a monoid morphism $M\to C$ extends uniquely to a blueprint morphism $B[M]\to C$. Note further that as a blue $B$-module, $B[M]$ is isomorphic to \mbox{$\bigvee_{m\in M-\{0\}} B\cdot m$}.
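As a basic example (our illustration, using only the definitions above), take $B={\mathbb F}unsq$ and the free monoid with zero on one generator:

```latex
% Let M = \{0\} \cup \{ t^n \mid n \geq 0 \} be the free monoid with zero
% on one generator t. The underlying monoid of {\mathbb F}unsq[M] is then
% \{0\} \cup \{ \pm t^n \mid n \geq 0 \}, with pre-addition generated by
\[
  t^n \ + \ (-t^n) \ \equiv \ 0 \qquad (n\geq 0),
\]
% and, in accordance with the last remark, as a blue {\mathbb F}unsq-module
\[
  {\mathbb F}unsq[M] \ \simeq \ \bigvee_{n\geq 0} \ {\mathbb F}unsq\cdot t^n .
\]
```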
\begin{df}
Let $B$ be a blueprint. A \emph{monoidal scheme over $B$} is a blue scheme $X$ that has an open affine covering $\{U_i\}_{i\in{\mathcal I}}$ such that
\begin{enumerate}
\item for every $i\in{\mathcal I}$, there is a monoid $M_i$ and an isomorphism ${\mathcal O}_X(U_i)\simeq B[M_i]$ of blueprints;
\item for every $i,j\in{\mathcal I}$, the intersection $U_i\cap U_j$ is covered by affine opens of the form $V_{i,j,k}={\mathbb S}pec B[N_{i,j,k}]$ for some monoids $N_{i,j,k}$ such that the restriction map $\res:B[M_i]\to B[N_{i,j,k}]$ is the localization at some multiplicative subset $S_{i,k}$ of $M_i$, i.e.\ $N_{i,j,k}=S^{-1}_{i,k} M_i$; and the same holds true for the restriction map $\res:B[M_j]\to B[N_{i,j,k}]$.
\end{enumerate}
\end{df}
\begin{rem}
Note that a monoidal scheme over ${\mathbb F}un$ is nothing else than a monoidal scheme in the usual sense. One can extend the method from \cite{Deitmar08} to show that $X^+_{\mathbb Z}$ is a toric variety over the ring $B^+_{\mathbb Z}$ if $X$ is a connected, separated, integral and torsion-free monoidal scheme of finite type over $B$.
\end{rem}
\begin{prop}\label{prop: monoidal schemes are defined over f1}
Let $X$ be a blue scheme over $B$. Then $X$ is monoidal over $B$ if and only if there is a monoidal scheme $X_{\mathbb F}un$ over ${\mathbb F}un$ such that $X$ is isomorphic to $X_{\mathbb F}un\times_{{\mathbb S}pec{\mathbb F}un}{\mathbb S}pec B$.
\end{prop}
\begin{proof}
Since $B[M]\simeq{\mathbb F}un[M]\otimes_{\mathbb F}un B$, it is clear that the base extension $X_{\mathbb F}un\times_{{\mathbb S}pec{\mathbb F}un}{\mathbb S}pec B$ of a monoidal scheme $X_{\mathbb F}un$ to $B$ is monoidal over $B$.
To prove the other direction of the equivalence, assume that $X$ has a covering $\{U_i\}$ with $U_i={\mathbb S}pec B[M_i]$ for certain monoids $M_i$. The pairwise intersections $U_i\cap U_j$ have coverings $\{V_{i,j,k}\}$ with $V_{i,j,k}={\mathbb S}pec B[N_{i,j,k}]$ where each monoid $N_{i,j,k}$ is a localization of both $M_i$ and $M_j$, i.e.\
\[
B[N_{i,j,k}] \quad = \quad B[S_{i,k}^{-1}M_i]\quad = \quad B[S_{j,k}^{-1}M_j]
\]
for certain multiplicative subsets $S_{i,k}$ of $M_i$ and $S_{j,k}$ of $M_j$. If ${\mathcal D}$ is the diagram of all $U_i$ and $V_{i,j,k}$ together with the inclusions $V_{i,j,k}\to U_i$ and $V_{i,j,k}\to U_j$, then $X$ is the colimit of ${\mathcal D}$.
We define the affine monoidal schemes $U_{i,{\mathbb F}un}={\mathbb S}pec{\mathbb F}un[M_i]$ and $V_{i,j,k,{\mathbb F}un}={\mathbb S}pec{\mathbb F}un[N_{i,j,k}]$. The colimit of the resulting diagram ${\mathcal D}_{\mathbb F}un$ defines a blue scheme $X_{\mathbb F}un$, which is monoidal over ${\mathbb F}un$ since $\{U_{i,{\mathbb F}un}\}$ is a covering of $X_{\mathbb F}un$. It is clear from the construction that $X\simeq X_{\mathbb F}un\times_{{\mathbb S}pec{\mathbb F}un}{\mathbb S}pec B$. This finishes the proof of the proposition.
\end{proof}
Recall that a blue scheme $X$ is \emph{separated over ${\mathbb F}un$} if the diagonal morphism $\Delta: X\to X\times X$ is a closed immersion. An important consequence is that the intersection of two affine subschemes of a separated blue scheme is affine.
\begin{cor}\label{cor: intersections of opens in monoidal schemes}
Let $X$ be a separated monoidal scheme over ${\mathbb F}un$ and $X_B=X\otimes_{\mathbb F}un B$ its base extension to $B$. Consider two open affine subsets $U_1$ and $U_2$ of $X_B$ such that ${\mathcal O}_{X_B}(U_1)$ and ${\mathcal O}_{X_B}(U_2)$ are monoid blueprints over $B$. Then ${\mathcal O}_{X_B}(U_1\cap U_2)$ is a monoid blueprint over $B$.
\end{cor}
\begin{proof}
Let $M_1$ and $M_2$ be monoids such that ${\mathcal O}_{X_B}(U_i)\simeq B[M_i]$ for $i=1,2$. By Proposition \ref{prop: monoidal schemes are defined over f1}, there are open affine subschemes $V_1$ and $V_2$ of $X$ such that ${\mathcal O}_X(V_i)\simeq{\mathbb F}un[M_i]$ for $i=1,2$. We have
\[
U_1 \ \cap \ U_2 \ = \ U_1 \, \times_{X_B} \, U_2 \ = \ \bigl( V_1 \, \times_X \, V_2 \bigr) \, \otimes_{\mathbb F}un \, B \ = \ \bigl( V_1 \, \cap \ V_2 \bigr) \, \otimes_{\mathbb F}un \ B.
\]
Since $X$ is separated over ${\mathbb F}un$, the intersection $V_0=V_1\cap V_2$ is affine. By \cite[Thm.\ 30]{Vezzani12}, there is a monoid $M_0$ that is a localization of both $M_1$ and $M_2$ such that ${\mathcal O}_X(V_0)\simeq{\mathbb F}un[M_0]$. Since the intersection $U_0=U_1\cap U_2$ is isomorphic to the base extension of $V_0$ to $B$, we have ${\mathcal O}_{X_B}(U_0)\simeq B[M_0]$. This proves the corollary.
\end{proof}
For monoidal schemes over blue fields we can conclude the following.
\begin{prop}\label{prop: monoidal schemes over blue fields}
If $B$ is a blue field and $X$ is a monoidal scheme over $B$, then every open subset $U$ of $X$ has an open affine covering $\{U_i\}$ with $U_i={\mathbb S}pec B[M_i]$ for certain monoids $M_i$.
\end{prop}
\begin{proof}
Let $U$ be an open subset of $X$ and $\{V_j\}$ an open affine covering with ${\mathcal O}_X(V_j)=B[N_j]$. Then the intersections $U\cap V_j$ can be covered by subsets $W_{j,k}$ such that ${\mathcal O}_X(W_{j,k})$ is a localization of $B[N_j]$. Since $B$ is a blue field, we have for every multiplicative subset $S$ of $B[N_j]$ that
\[
S^{-1}B[N_j] \ = \ T^{-1}B[N_j] \ = \ B[T^{-1}N_j] \qquad \text{where}\qquad T \ = \ S\,\cap\, \{ \,1\cdot a\,|\, a\in N_j\,\}.
\]
Thus $U$ is covered by the $W_{j,k}$ and ${\mathcal O}_X(W_{j,k})$ are monoid blueprints over $B$, which proves the proposition.
\end{proof}
\begin{ex}
The proposition is not true over an arbitrary blueprint $B$ since the localizations of $B$ are in general not monoid blueprints. For instance, consider the integers $B={\mathbb Z}$. Then ${\mathbb S}pec{\mathbb Z}$ is a monoidal scheme over ${\mathbb Z}$, but every nonempty proper open subset is of the form ${\mathbb S}pec {\mathbb Z}[d^{-1}]$ for some integer $d\geq 2$, and ${\mathbb Z}[d^{-1}]$ is not a monoid blueprint over ${\mathbb Z}$ since we have
\[
\underbrace{d^{-1}+\dotsb+d^{-1}}_{d\text{ times}} \ \equiv \ 1.
\]
Note further that even if $B$ is a blue field, not every open subset $U$ of a monoidal scheme $X$ over $B$ is isomorphic to the spectrum of a monoid blueprint over $B$. For instance, consider $X={\mathbb S}pec(B\times B)={\mathbb S}pec B\amalg{\mathbb S}pec B$, which is monoidal over $B$ since it is covered by two copies of ${\mathbb S}pec B$. However, $B\times B$ is not a monoid blueprint over $B$ since it contains the additive relation
\[
(1,0)+(0,1) \ \equiv \ (1,1).
\]
\end{ex}
\section{Locally free sheaves}
\label{section: locally free sheaves}
\noindent
Let $B$ be a blueprint and $M$ a blue $B$-module.
\begin{df}
A \emph{basis for $M$} is a subset $\beta$ of $M$ such that
\begin{enumerate}
\item for all $m\in M$, there are elements $a_1,\dotsc,a_r\in B$ and pairwise distinct elements $b_1,\dotsc,b_r\in \beta$ such that
\[
m \ \equiv \ \sum_{i=1}^r \ a_i b_i.
\]
\item If
\[
\sum_{i=1}^r \ a_i b_i \ \equiv \ \sum_{j=1}^s \ a'_j b'_j
\]
for $a_1,\dotsc,a_r,a'_1,\dotsc,a'_s\in B$ and pairwise distinct elements $b_1,\dotsc,b_r,b'_1,\dotsc,b'_s\in \beta$, then $a_1=\dotsb=a_r=a'_1=\dotsb=a'_s=0$.
\end{enumerate}
A blue $B$-module is \emph{freely generated} if it has a basis. A blue $B$-module is \emph{free} if it is isomorphic to $\bigvee_{b\in\beta}B\cdot b$ for some set $\beta$.
\end{df}
Note that $\beta$ is a basis for the free blue module $\bigvee_{b\in\beta}B\cdot b$. Thus a free module is freely generated. The larger class of freely generated modules can be classified as follows.
\begin{lemma}
Let $M$ be a blue $B$-module with basis $\beta$.
\begin{enumerate}
\item There is a unique isomorphism from $M$ onto a blue submodule of
\[
\bigoplus_{b\in\beta} \ B \cdot b \quad = \quad \bigl\{ \ (m_b)\in\prod_{b\in\beta}B \cdot b \ \bigl| \ m_b=0\text{ for all but finitely many }b\ \bigr\}
\]
that maps $b\in\beta$ to $1\cdot b$. Conversely, any blue $B$-submodule of $\bigoplus_{b\in\beta}B\cdot b$ that contains $\beta$ is freely generated by $\beta$.
\item Any two bases of $M$ have the same cardinality.
\end{enumerate}
\end{lemma}
\begin{proof}
Since every $m\in M$ is a unique linear combination $m\equiv\sum m_b b$ of the basis elements $b\in\beta$, where all but finitely many $m_b\in B$ are $0$, the only possible morphism $\Phi:M\to\bigoplus_{b\in\beta}B\cdot b$ sends $m$ to $(m_b b)$. Since there are no additive relations between the different basis elements by condition (2) of the definition of a basis, the map $\Phi$ is indeed a morphism of blue $B$-modules. It is clearly injective and thus defines an isomorphism onto its image. The latter claim of part (1) of the lemma is obvious.
The embedding $\Phi:M\to \bigoplus_{b\in\beta}B\cdot b$ defines an isomorphism $\Phi^+:M^+\to \bigoplus_{b\in\beta}B^+\cdot b$ of blue $B^+$-modules, which have the same basis $\beta$. Thus we obtain an isomorphism $\Phi_{\mathbb Z}^+:M^+_{\mathbb Z}\to \bigoplus_{b\in\beta}B_{\mathbb Z}^+\cdot b$ of free $B_{\mathbb Z}^+$-modules with basis $\beta$. Since any two bases of a free module over a ring have the same cardinality, the same holds for the freely generated blue $B$-module $M$.
\end{proof}
\begin{df}
Let $M$ be a freely generated blue $B$-module with basis $\beta$. The \emph{rank ${\textup{rk}}_B M$ of $M$ over $B$} is defined as the cardinality of $\beta$.
\end{df}
Let $X$ be a blue scheme over $B$ and $\beta$ a (possibly infinite) set of cardinality $r$. In this part of the paper, a sheaf on $X$ is a sheaf on the small Zariski site of $X$.
\begin{df}
A \emph{locally free sheaf of rank $r$} on $X$ is a sheaf ${\mathcal F}$ on $X$ in blue $B$-modules that has an open affine covering $\{U_i\}_{i\in{\mathcal I}}$ with the following properties:
\begin{enumerate}
\item\label{part1} if $B_i={\mathcal O}_X(U_i)$, then for every $i\in{\mathcal I}$,
\[
{\mathcal F}(U_i) \quad \simeq \quad \bigvee_{b\in\beta} B_i\cdot b\,;
\]
\item\label{part2} for every $i\in{\mathcal I}$ and every open subset $V$ of $U_i$ with ${\mathcal O}_X(V)=S_V^{-1}B_i$ for some multiplicative subset $S_V$ of $B_i$, there is an isomorphism ${\mathcal F}(V)\simeq\bigvee_{b\in\beta} S_V^{-1}B_i\cdot b$ such that the restriction map
\[
\res_{U_i,V}:\quad \bigvee_{b\in\beta} B_i\cdot b \quad \longrightarrow \quad \bigvee_{b\in\beta} S_V^{-1}B_i\cdot b
\]
corresponds to the localization of each component $B_i\cdot b$ at $S_V$.
\end{enumerate}
We call a covering $\{U_i\}$ of $X$ that satisfies properties \eqref{part1} and \eqref{part2} a \emph{trivialization of ${\mathcal F}$}.
\end{df}
Note that the \emph{localizations} $V$ of the $U_i$, i.e.\ open subsets of the form $V=\mathrm{Spec}\, S_V^{-1} B_i$ for some multiplicative subset $S_V$ of $B_i$, form a basis for the topology of $X$. Thus a locally free sheaf ${\mathcal F}$ is uniquely determined by a trivialization $\{U_i\}$ together with the restriction maps to subsets of the form $V=\mathrm{Spec}\, S_V^{-1} B_i$.
\begin{rem}
There is an obvious notion of a quasi-coherent sheaf on $X$ (cf.\ \cite{CLS12} for the case of monoidal schemes). Property \eqref{part2} is automatically satisfied if ${\mathcal F}$ is quasi-coherent. In other words, a quasi-coherent sheaf is locally free if and only if there are a set $\beta$ and an open affine covering $\{U_i\}$ of $X$ such that ${\mathcal F}(U_i) \simeq \bigvee_{b\in\beta} B_i\cdot b$ for all $i$.
\end{rem}
\begin{ex}
The sheaf $\bigvee_{b\in\beta}{\mathcal O}_X$ that sends an open subset $U$ of $X$ to $\bigvee_{b\in\beta}{\mathcal O}_X(U)$, together with the obvious restriction maps, is locally free of rank $r=\#\beta$. It is called the \emph{trivial locally free sheaf of rank $r$}.
\end{ex}
We construct the base extension of a locally free sheaf to rings. Let ${\mathcal F}$ be a locally free sheaf on $X$ of rank $r$ and ${\mathcal U}$ the family of all open affine subsets of $X$ such that ${\mathcal F}(U)\simeq\bigvee_{b\in\beta} {\mathcal O}_X(U)\cdot b$ together with all inclusion maps. By properties \eqref{part1} and \eqref{part2}, $X$ is the colimit of ${\mathcal U}$. If ${\mathcal U}_{\mathbb Z}^+$ denotes the family of all $U_{\mathbb Z}^+$ for $U$ in ${\mathcal U}$ and all inclusion maps $U_{\mathbb Z}^+\to V_{\mathbb Z}^+$ whenever $U\to V$ is in ${\mathcal U}$, then $X_{\mathbb Z}^+$ is the colimit of ${\mathcal U}_{\mathbb Z}^+$.
We define ${\mathcal F}_{\mathbb Z}^+(U_{\mathbb Z}^+)=\bigl({\mathcal F}(U)\bigr)_{\mathbb Z}^+$ for all $U$ in ${\mathcal U}$ and we obtain restriction morphisms $\res:{\mathcal F}(U)_{\mathbb Z}^+\to{\mathcal F}(V)_{\mathbb Z}^+$ for every inclusion $V\to U$ in ${\mathcal U}$. Since localizations commute with base extensions to rings, i.e.\ $(S^{-1}B)^+_{\mathbb Z} =S^{-1}(B_{\mathbb Z}^+)$, the values ${\mathcal F}_{\mathbb Z}^+(U_{\mathbb Z}^+)$ for $U$ in ${\mathcal U}$ glue together to a uniquely determined sheaf ${\mathcal F}_{\mathbb Z}^+$ on $X_{\mathbb Z}^+$. Since
\[
\Bigl( \ \bigvee_{b\in\beta} B_U \cdot b \Bigr)^+_{\mathbb Z} \quad = \quad \bigoplus_{b\in\beta} \ B_{U,{\mathbb Z}}^+ \cdot b,
\]
the sheaf ${\mathcal F}_{\mathbb Z}^+$ is locally free on $X_{\mathbb Z}^+$ as a sheaf with values in $B_{\mathbb Z}^+$-modules.
\section{{\v C}ech cohomology of monoidal schemes}
\label{section: cech cohomology of monoidal schemes}
\noindent
In this section, we prove the comparison result for the cohomology of locally free sheaves on monoidal schemes with the cohomology of their base extensions to rings.
Let $B$ be a blueprint with $-1$ and $X$ a monoidal scheme over $B$. Let $\beta$ be a set of cardinality $r$ and ${\mathcal F}$ a locally free sheaf of rank $r$ on $X$. A trivialization ${\mathcal U}=\{U_i\}_{i\in{\mathcal I}}$ of ${\mathcal F}$ is \emph{finite} if ${\mathcal I}$ is a finite set. It is \emph{monoidal} if the coordinate blueprints $B_i={\mathcal O}_X(U_i)$ are monoid blueprints of the form $B[M_i]$ over $B$.
We employ the notation from Part \ref{part: cech cohomology} of the paper. We assume that ${\mathcal I}$ is totally ordered and denote by ${\mathcal I}_l$ the set of subsets $I$ of ${\mathcal I}$ of cardinality $l+1$, which inherits an ordering from ${\mathcal I}$. We write $I=(i_0,\dotsc,i_l)$ if $I=\{i_0,\dotsc,i_l\}$ and $i_0<\dotsb<i_l$. For $I\in{\mathcal I}_l$, we define
\[
U_I \ = \ \bigcap_{i\in I} \ U_{i} \ , \qquad B_I \ = \ {\mathcal O}_X(U_I) \qquad \text{and} \qquad {\mathcal F}_I \ = \ {\mathcal F}(U_I).
\]
Let ${\mathcal C}^l=\prod_{I\in{\mathcal I}_l}{\mathcal F}_I$ and ${\mathcal C}^\bullet={\mathcal C}^\bullet(X,{\mathcal F};{\mathcal U})$ the \v Cech complex of $X$ w.r.t.\ ${\mathcal U}$ and with values in ${\mathcal F}$. We denote the coboundary maps as usual by $\partial_k^{(l)}:{\mathcal C}^{l-1}\to{\mathcal C}^l$. We state the following hypothesis on $X$ and ${\mathcal U}=\{U_i\}_{i\in{\mathcal I}}$.
\noindent\textbf{Hypothesis (H):} For all finite subsets $J\subset I$ of ${\mathcal I}$, the restriction map
\[
\res_{U_J,U_I}:{\mathcal O}_X(U_J)\to{\mathcal O}_X(U_I)
\]
is injective.
\begin{rem}
Recall that a blueprint $B$ is \emph{integral} if every non-zero element $a\in B$ acts injectively on $B$ by multiplication. A blue scheme is \emph{integral} if the coordinate blueprint of every open affine subscheme $U$ of $X$ is integral. If $X$ is integral, then (H) is satisfied for all open affine coverings ${\mathcal U}$ of $X$.
\end{rem}
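As a basic illustration (a routine check, not taken from the text), Hypothesis (H) holds for the canonical two-element covering of the projective line over $B$:

```latex
% Illustration (not from the text): Hypothesis (H) for the canonical
% covering U_1, U_2 of the projective line over B.
\[
  {\mathcal O}(U_1) \ = \ B[T], \qquad
  {\mathcal O}(U_2) \ = \ B[T^{-1}], \qquad
  {\mathcal O}(U_1\cap U_2) \ = \ B[T^{\pm 1}],
\]
```

Both restriction maps are the inclusions $B[T]\hookrightarrow B[T^{\pm1}]$ and $B[T^{-1}]\hookrightarrow B[T^{\pm1}]$, which are injective, so (H) is satisfied for this covering.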
Since $B$ is with $-1$, we have that $B^+$ is a ring, i.e.\ $B^+=B^+_{\mathbb Z}$. Similarly, $X^+=X^+_{\mathbb Z}$ is a scheme over $B^+$ and ${\mathcal F}^+$ is a locally free sheaf on $X^+$ in $B^+$-modules. Defining ${\mathcal U}^+$ as the collection of all affine opens $U_i^+$ of $X^+$, we obtain a trivialization of ${\mathcal F}^+$ and can form the \v Cech complex ${\mathcal C}^\bullet(X^+,{\mathcal F}^+;{\mathcal U}^+)$ of $X^+$ w.r.t.\ ${\mathcal U}^+$ and values in ${\mathcal F}^+$. Then the subsets ${\mathcal Z}^l(X^+,{\mathcal F}^+;{\mathcal U}^+)$ and ${\mathcal B}^l(X^+,{\mathcal F}^+;{\mathcal U}^+)$ are $B^+$-modules, and so is $H^l(X^+,{\mathcal F}^+;{\mathcal U}^+)$.
There is a canonical morphism ${\mathcal C}^\bullet(X,{\mathcal F};{\mathcal U}) \longrightarrow {\mathcal C}^\bullet(X^+,{\mathcal F}^+;{\mathcal U}^+)$ of cosimplicial blue $B$-modules, which is injective in each degree since all blue $B$-modules ${\mathcal C}^l(X,{\mathcal F};{\mathcal U})$ are with $-1$. Thus we can consider ${\mathcal Z}^l(X,{\mathcal F};{\mathcal U})$ as a subset of ${\mathcal Z}^l(X^+,{\mathcal F}^+;{\mathcal U}^+)$ and ${\mathcal B}^l(X,{\mathcal F};{\mathcal U})$ as a subset of ${\mathcal B}^l(X^+,{\mathcal F}^+;{\mathcal U}^+)$ for every $l\geq 0$. This induces a morphism
\[
H^l(X,{\mathcal F};{\mathcal U}) \quad \longrightarrow \quad H^l(X^+,{\mathcal F}^+;{\mathcal U}^+)
\]
of blue $B$-modules.
\begin{thm}\label{thm: comparison for a fixed covering}
Let $X$ be a monoidal scheme over $B$, ${\mathcal F}$ a locally free sheaf on $X$ and ${\mathcal U}=\{U_i\}_{i\in{\mathcal I}}$ a finite monoidal trivialization of ${\mathcal F}$ that satisfies Hypothesis (H). Then
\[
{\mathcal Z}^l(X,{\mathcal F},{\mathcal U})^+ \ = \ {\mathcal Z}^l(X^+,{\mathcal F}^+,{\mathcal U}^+) \qquad \text{and} \qquad {\mathcal B}^l(X,{\mathcal F},{\mathcal U})^+ \ = \ {\mathcal B}^l(X^+,{\mathcal F}^+,{\mathcal U}^+)
\]
for every $l\geq0$. Consequently, we have
\[
H^l(X,{\mathcal F},{\mathcal U})^+ \ = \ H^l(X^+,{\mathcal F}^+,{\mathcal U}^+).
\]
\end{thm}
\begin{proof}
We will establish the following two lemmas in order to prove Theorem \ref{thm: comparison for a fixed covering}. In the proofs of these lemmas, we will make use of the usual \v Cech chain complex
\begin{multline*}
{\mathcal C}^0(X^+,{\mathcal F}^+;{\mathcal U}^+) \quad \stackrel{d^1}\longrightarrow \quad {\mathcal C}^1(X^+,{\mathcal F}^+;{\mathcal U}^+) \quad \longrightarrow \quad \dotsb \\
\dotsb \quad \longrightarrow \quad {\mathcal C}^{l-1}(X^+,{\mathcal F}^+;{\mathcal U}^+) \quad \stackrel{d^l}\longrightarrow \quad {\mathcal C}^l(X^+,{\mathcal F}^+;{\mathcal U}^+) \quad \longrightarrow \quad \dotsb
\end{multline*}
where the differentials $d^l=\sum_{i=0}^l (-1)^i \partial_i^{(l)}$ are the alternating sums of the respective restriction maps.
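For orientation, here is a sketch (not taken from the text) of the smallest case: for a covering by two ordered opens $U_0,U_1$ with overlap $U_{01}=U_0\cap U_1$, the complex reduces to

```latex
% Sketch (not from the text): the Cech complex for a two-element covering.
\[
  {\mathcal F}(U_0)\times{\mathcal F}(U_1)
  \ \stackrel{d^1}\longrightarrow \
  {\mathcal F}(U_{01}),
  \qquad
  d^1(m_0,m_1) \ = \ \res_{U_1,U_{01}}(m_1) \ - \ \res_{U_0,U_{01}}(m_0),
\]
```

so a $0$-cocycle is a pair of sections agreeing on the overlap, and $H^1$ measures the failure of a section on $U_{01}$ to be such a difference; the difference makes sense since $B$ is with $-1$.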
\begin{lemma}\label{lemma: base extension of coboundary blueprints}
${\mathcal B}^l(X,{\mathcal F},{\mathcal U})^+ \ = \ {\mathcal B}^l(X^+,{\mathcal F}^+,{\mathcal U}^+)$.
\end{lemma}
\begin{proof}
Since ${\mathcal U}$ is finite, a set of generators for ${\mathcal B}^l(X^+,{\mathcal F}^+,{\mathcal U}^+)$ is given by the images of the vectors $x_{a,b,J}=(0,\dotsc,0,a\cdot b,0,\dotsc,0)$ with $J\in{\mathcal I}_{l-1}$, $a\in B_J$ and $b\in\beta$. The image of such a vector is of the form $(d^l(x_{a,b,J})_I)_{I\in{\mathcal I}_l}$. Since $x_{a,b,J}$ has only one non-trivial component, we have $d^l(x_{a,b,J})_I=\pm\partial_k^{(l)}(x_{a,b,J})_I$ for some $k$. Therefore, the image of $x_{a,b,J}$ in ${\mathcal C}^l(X^+,{\mathcal F}^+;{\mathcal U}^+)$ is contained in ${\mathcal C}^l(X,{\mathcal F};{\mathcal U})$. Since ${\mathcal B}^l(X,{\mathcal F},{\mathcal U})={\mathcal B}^l(X^+,{\mathcal F}^+,{\mathcal U}^+)\cap{\mathcal C}^l(X,{\mathcal F};{\mathcal U})$, the lemma follows.
\end{proof}
\begin{lemma}\label{lemma: base extension of cocycle blueprints}
${\mathcal Z}^l(X,{\mathcal F},{\mathcal U})^+ \ = \ {\mathcal Z}^l(X^+,{\mathcal F}^+,{\mathcal U}^+)$.
\end{lemma}
\begin{proof}
Let $B_\eta=\colim B_I$ be the colimit of the blueprints $B_I$ for finite subsets $I$ of ${\mathcal I}$. By Hypothesis (H), the canonical inclusions $B_I\to B_\eta$ are injective for all finite $I\subset{\mathcal I}$. Since $\partial_k^{(l)}$ extends to a $B_{I^k}^+$-linear map
\[
\partial_k^{(l),+}: \quad {\mathcal F}_{I^k}^+ \ \simeq \ \bigoplus_{b\in\beta} \ B_{I^k}^+\cdot b \quad \longrightarrow \quad \bigoplus_{b\in\beta} \ B_I^+\cdot b \ \simeq \ {\mathcal F}_I^+,
\]
the \v Cech chain complex
\[
{\mathcal C}^0(X^+,{\mathcal F}^+;{\mathcal U}^+) \quad \stackrel{d^1}\longrightarrow \quad {\mathcal C}^1(X^+,{\mathcal F}^+;{\mathcal U}^+) \quad \longrightarrow \quad \dotsb
\]
with ${\mathcal C}^l(X^+,{\mathcal F}^+;{\mathcal U}^+)=\prod_{I\in{\mathcal I}_l}\bigoplus_{b\in\beta}B_I^+\cdot b$ defines a chain complex
\[
{\mathcal C}^{0,+}_\eta \quad \stackrel{d^1}\longrightarrow \quad {\mathcal C}^{1,+}_\eta \quad \longrightarrow \quad \dotsb
\]
with ${\mathcal C}^{l,+}_\eta=\prod_{I\in{\mathcal I}_l}\bigoplus_{b\in\beta}B_\eta^+\cdot b$. This chain complex is the \v Cech chain complex of the affine scheme $X_\eta^+=\mathrm{Spec}\, B_\eta^+$ w.r.t.\ the covering ${\mathcal U}_\eta^+=\{U_{i,\eta}^+\}_{i\in{\mathcal I}}$ where $U_{i,\eta}^+=X_\eta^+$ and with values in the locally free sheaf ${\mathcal F}_\eta^+$ associated to the $B_\eta^+$-module $F_\eta^+=\colim {\mathcal F}^+(U_I^+)$, where the colimit is taken over all finite $I\subset{\mathcal I}$.
Since the cohomology of coherent sheaves on affine schemes is concentrated in degree $0$, we have
\begin{align*}
{\mathcal Z}^0(X_\eta^+,{\mathcal F}^+_\eta;{\mathcal U}_\eta^+) \quad &= \quad F_\eta^+ &&\text{and} \\
{\mathcal Z}^l(X_\eta^+,{\mathcal F}^+_\eta;{\mathcal U}_\eta^+) \quad &= \quad {\mathcal B}^l(X_\eta^+,{\mathcal F}^+_\eta;{\mathcal U}_\eta^+) &&\text{for}\quad l>0.
\end{align*}
Since $F_\eta^+=\bigl(\bigvee_{b\in\beta} B_\eta\cdot b\bigr)^+$ is generated by elements in ${\mathcal Z}^0(X,{\mathcal F};{\mathcal U})=\bigvee_{b\in\beta} B\cdot b$ as a blue $B_\eta$-module, the claim of the lemma follows for $l=0$.
For $l>0$, we can apply Lemma \ref{lemma: base extension of coboundary blueprints} to $X_\eta=\mathrm{Spec}\, B_\eta$, the locally free sheaf associated with $F_\eta=\colim {\mathcal O}_X(U_I)$ and ${\mathcal U}_\eta=\{U_{i,\eta}\}_{i\in{\mathcal I}}$ with $U_{i,\eta}=X_\eta$ and get
\[
{\mathcal B}^l(X_\eta^+,{\mathcal F}^+_\eta;{\mathcal U}_\eta^+) \quad = \quad {\mathcal B}^l(X_\eta,{\mathcal F}_\eta;{\mathcal U}_\eta)^+.
\]
Since
\[
{\mathcal B}^l(X_\eta,{\mathcal F}_\eta;{\mathcal U}_\eta) \quad \subset \quad {\mathcal Z}^l(X_\eta,{\mathcal F}_\eta;{\mathcal U}_\eta) \quad \subset \quad {\mathcal Z}^l(X_\eta^+,{\mathcal F}^+_\eta;{\mathcal U}_\eta^+),
\]
we conclude that ${\mathcal Z}^l(X_\eta,{\mathcal F}_\eta;{\mathcal U}_\eta)^+={\mathcal Z}^l(X_\eta^+,{\mathcal F}^+_\eta;{\mathcal U}_\eta^+)$. Therefore
\begin{align*}
{\mathcal Z}^l(X,{\mathcal F};{\mathcal U})^+ \quad &= \quad \Bigl( \ {\mathcal C}^l(X,{\mathcal F};{\mathcal U}) \quad \cap \quad {\mathcal Z}^l(X_\eta,{\mathcal F}_\eta;{\mathcal U}_\eta) \ \Bigr)^+ \\
\quad &= \quad {\mathcal C}^l(X,{\mathcal F};{\mathcal U})^+ \quad \cap \quad {\mathcal Z}^l(X_\eta,{\mathcal F}_\eta;{\mathcal U}_\eta)^+ \\
\quad &= \quad {\mathcal C}^l(X^+,{\mathcal F}^+;{\mathcal U}^+) \quad \cap \quad {\mathcal Z}^l(X_\eta^+,{\mathcal F}_\eta^+;{\mathcal U}_\eta^+) \\
\quad &= \quad {\mathcal Z}^l(X^+,{\mathcal F}^+;{\mathcal U}^+)
\end{align*}
as desired.
\end{proof}
Since taking quotients commutes with the base extension to rings, we have that
\begin{multline*}
H^l(X,{\mathcal F};{\mathcal U})^+ \ = \ {\mathcal Z}^l(X,{\mathcal F};{\mathcal U})^+ / {\mathcal B}^l(X,{\mathcal F};{\mathcal U})^+ \ = \\ {\mathcal Z}^l(X^+,{\mathcal F}^+;{\mathcal U}^+) / {\mathcal B}^l(X^+,{\mathcal F}^+;{\mathcal U}^+) \ = \ H^l(X^+,{\mathcal F}^+;{\mathcal U}^+),
\end{multline*}
which proves Theorem \ref{thm: comparison for a fixed covering}.
\end{proof}
\begin{thm}\label{thm: comparison of cech cohomology}
Let $X$ be a monoidal scheme over $B$ that admits a finite covering $\{U_i\}$ satisfying Hypothesis (H) such that the ${\mathcal O}_X(U_i)$ are monoid blueprints over $B$. Then for every locally free sheaf ${\mathcal F}$ on $X$ we have
\[
H^l(X,{\mathcal F})^+ \ = \ H^l(X^+,{\mathcal F}^+).
\]
\end{thm}
\begin{proof}
Let ${\mathcal U}$ be a covering of $X$ satisfying Hypothesis (H) and ${\mathcal F}$ a locally free sheaf on $X$. Then there is a finite refinement ${\mathcal V}$ of ${\mathcal U}$ that satisfies all conditions of Theorem \ref{thm: comparison for a fixed covering}. Since we can choose ${\mathcal U}$ itself arbitrarily fine, the coverings ${\mathcal V}$ that satisfy the hypotheses of Theorem \ref{thm: comparison for a fixed covering} form a cofinal system in the category of all finite coverings of $X$ together with refinements. Since $X$ is quasi-compact, the ${\mathcal V}$ are cofinal in the category of all coverings of $X$.
Therefore the colimit of the cohomology blueprints $H^l(X,{\mathcal F};{\mathcal V})$ over all coverings ${\mathcal V}$ that satisfy Theorem \ref{thm: comparison for a fixed covering} equals $H^l(X,{\mathcal F})$. For the same reasons, the colimit of the cohomology groups $H^l(X^+,{\mathcal F}^+,{\mathcal V}^+)$ over all such ${\mathcal V}$ equals $H^l(X^+,{\mathcal F}^+)$. Since $(-)^+$ commutes with filtered colimits, this establishes the claim of the theorem.
\end{proof}
\begin{ex}[Line bundles on projective space]
Let $B$ be a blueprint with $-1$ and ${\mathcal O}(d)$ the twisted sheaf on ${\mathbb P}^n_B$. If $d\geq0$, then the cohomology $H^\ast({\mathbb P}^{n,+}_B,{\mathcal O}(d)^+)$ is concentrated in degree $0$. Therefore $H^0({\mathbb P}^n_B,{\mathcal O}(d))$ is the only non-trivial cohomology of ${\mathbb P}^n_B$ with values in ${\mathcal O}(d)$. It is clear that $H^0({\mathbb P}^n_B,{\mathcal O}(d))$ equals the blue $B$-module of global sections of ${\mathcal O}(d)$, which is a free $B$-module of rank ${\textup{rk}}\; H^0({\mathbb P}^{n,+}_B,{\mathcal O}(d)^+)$.
For $-n\leq d\leq -1$, the cohomology $H^\ast({\mathbb P}^n_B,{\mathcal O}(d))$ is trivial. If $d\leq -n-1$, then the cohomology $H^\ast({\mathbb P}^{n,+}_B,{\mathcal O}(d)^+)$ is concentrated in degree $n$. Therefore $H^n({\mathbb P}^n_B,{\mathcal O}(d))$ is the only non-trivial cohomology of ${\mathbb P}^n_B$ with values in ${\mathcal O}(d)$. If ${\mathcal U}=\{U_i\}_{i\in{\mathcal I}}$ is the canonical atlas of ${\mathbb P}^n_B$, then we have $H^n({\mathbb P}^n_B,{\mathcal O}(d))=H^n({\mathbb P}^n_B,{\mathcal O}(d);{\mathcal U})$ by comparison with the compatible situation for ${\mathbb P}^{n,+}_B$ and the canonical covering ${\mathcal U}^+$. Therefore we have ${\mathcal Z}^n({\mathbb P}^n_B,{\mathcal O}(d))={\mathcal O}(d)(U_{\mathcal I})$, and ${\mathcal B}^n({\mathbb P}^n_B,{\mathcal O}(d))$ is generated by the images $d^n(x_{a,b,J})\in{\mathcal O}(d)(U_{\mathcal I})$ (cf.\ the proof of Lemma \ref{lemma: base extension of coboundary blueprints}). Therefore $H^n({\mathbb P}^n_B,{\mathcal O}(d))$ is a free blue $B$-module of rank ${\textup{rk}}\; H^n({\mathbb P}^{n,+}_B,{\mathcal O}(d)^+)$.
\end{ex}
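To make the ranks in the preceding example explicit, recall the standard monomial count (a routine verification, not taken from the text):

```latex
% Routine monomial count (not from the text) for the ranks in the
% preceding example.
\[
  {\textup{rk}}\; H^0({\mathbb P}^{n,+}_B,{\mathcal O}(d)^+)
  \ = \ \#\bigl\{\,T_0^{a_0}\dotsm T_n^{a_n} \ \bigm| \ a_i\geq 0,\ \textstyle\sum_i a_i=d\,\bigr\}
  \ = \ \binom{n+d}{n}
\]
for $d\geq 0$, and
\[
  {\textup{rk}}\; H^n({\mathbb P}^{n,+}_B,{\mathcal O}(d)^+)
  \ = \ \#\bigl\{\,T_0^{a_0}\dotsm T_n^{a_n} \ \bigm| \ a_i\leq -1,\ \textstyle\sum_i a_i=d\,\bigr\}
  \ = \ \binom{-d-1}{n}
\]
for $d\leq -n-1$.
```

The substitution $b_i=-a_i-1$ turns the second count into the number of monomials of degree $-d-n-1$ in $n+1$ variables, which yields the stated binomial coefficient.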
In more complicated examples, too, we found that the cohomology blueprints are free over the base blueprint. We therefore pose the following problem.
\begin{question*}
Let $B$ be a blueprint with $-1$ and $X$ a quasi-compact monoidal scheme over $B$ that admits an open affine covering satisfying Hypothesis (H). Is it true that $H^l(X,{\mathcal F})$ is a free blue $B$-module for every locally free sheaf ${\mathcal F}$?
\end{question*}
\begin{rem}[Sheaf cohomology for toric varieties] \label{rem: cohomology for toric varieties}
We conclude this text with the following remark on possible applications to the computation of sheaf cohomology for toric varieties.
Every toric variety ${\mathcal X}$ over the ring $B^+$ admits a monoidal model $X$ over $B$, i.e.\ a monoidal scheme $X$ over $B$ such that ${\mathcal X}\simeq X^+$ as a $B^+$-scheme. The maximal open affine covering of $X$ satisfies Hypothesis (H) since the restriction maps correspond to inclusions of subsemigroups of the ambient character lattice of the toric variety.
Since the \v Cech cohomology for monoidal schemes is amenable to explicit calculation due to their rigid structure, Theorem \ref{thm: comparison of cech cohomology} yields an application for calculations of sheaf cohomology over toric varieties.
The drawback is, however, that only a very limited class of locally free sheaves over toric varieties can be defined over a monoidal model. Namely, the rigid structure of the wedge product implies that every locally free sheaf ${\mathcal F}$ on a monoidal scheme $X$ over a blueprint $B$ decomposes into the wedge product $\bigvee {\mathcal L}_i$ of line bundles.
This means that the only locally free sheaves on toric varieties to which our methods apply are (direct sums of) line bundles. There exists an algorithm to calculate the cohomology of toric line bundles, as conjectured in \cite{BJRR10} and proven independently in \cite{Roschy-Rahn10} and \cite{Jow11}. The method of this algorithm seems to be quite different from the approach of our text, and it would be interesting to understand the precise relationship.
\end{rem}
\appendix
\section{Cohomology of {${\mathbb P}^1$} via injective resolutions}
\label{appendix: cohomology of p1}
\noindent
In this section, we mimic the methods of homological algebra and injective resolutions to calculate the cohomology $H^i_\textup{hom}(X,{\mathcal O}_X)$ of the projective line $X={\mathbb P}^1_{{\mathbb F}_1}$. While $H_\textup{hom}^0(X,{\mathcal O}_X)$ equals the global sections of ${\mathcal O}_X$, it turns out that $H_\textup{hom}^1(X,{\mathcal O}_X)$ is of infinite rank over ${\mathbb F}_1$.
Note that the following calculations also apply to the projective line over ${\mathbb F}_{1^2}$, which shows that $H^i_\textup{hom}(X,{\mathcal F})$ differs from the cohomology blueprints $H^i(X,{\mathcal F})$ considered in the main text of this paper.
Deitmar has given in \cite{Deitmar11b} a rigorous treatment of cohomology via injective resolutions for sheaves in so-called \emph{belian} categories. This applies, in particular, to sheaves on ${\mathbb P}^1_{{\mathbb F}_1}$ in pointed ${\mathbb F}_1$-modules (also known as pointed ${\mathbb F}_1$-sets). Note that the general hypotheses of \cite{Deitmar11b} are not satisfied by the category of blue $B$-modules.
To emphasize that we abandon any additive structure in the discussion that follows, we avoid mentioning blueprints, but employ the language of monoids and monoidal schemes.
Let $A={\mathbb F}_1[T]$ be the coordinate monoid of ${\mathbb A}^1_{{\mathbb F}_1}$. All of the $A$-modules in the following are pointed $A$-modules (following the terminology of \cite{Deitmar11b}), and we denote the base point generally by $\ast$. Let $F=\{T^i\}_{i\geq0}\cup\{\ast\}$ be the free module over $A$ of rank $1$, $I=\{T^i\}_{i\in{\mathbb Z}}\cup\{\ast\}$ and $J=\{T^i\}_{i<0}\cup\{\ast\}$. Then both $I$ and $J$ are injective $A$-modules. Let $G={\mathbb F}_1[T^{\pm1}]$ be the ``quotient monoid'' of $A$. Then the corresponding localizations of $I$ and $J$ are $I$ itself resp.\ $0=\{\ast\}$, which are both injective $G$-modules.
The topological space of $X={\mathbb P}^1_{{\mathbb F}_1}$ has three points; namely, two closed points $x_1,x_2$ and one generic point $x_0$. It can be covered by two opens $U_i=\{x_0,x_i\}$ ($i=1,2$), which are both isomorphic to ${\mathbb A}^1_{{\mathbb F}_1}$ and which intersect in $U_0=\{x_0\}$. The coordinate monoids of these opens are respectively ${\mathcal O}_X(U_1)\simeq {\mathcal O}_X(U_2)\simeq A$ and ${\mathcal O}_X(U_0)\simeq G$, where ${\mathcal O}_X$ is the structure sheaf of $X$.
We define the injective sheaf ${\mathcal I}_0$ over $X$ by ${\mathcal I}_0(U_i)=I$ for $i=0,1,2$ together with the identity maps $\textup{id}:I\to I$ as restriction maps. We define the injective sheaf ${\mathcal I}_1$ over $X$ by ${\mathcal I}_1(U_i)=J$ for $i=1,2$ and ${\mathcal I}_1(U_0)=0$ together with the trivial maps $0:J\to 0$ as restriction maps.
It is easily seen that the structure sheaf ${\mathcal O}_X$ of $X$ has an injective resolution of the form
\[
0 \ \longrightarrow \ {\mathcal O}_X \ \longrightarrow \ {\mathcal I}_0 \ \longrightarrow \ {\mathcal I}_1 \ \longrightarrow \ 0.
\]
Taking stalks at $x_1$ or at $x_2$ yields the exact sequence
\[
0 \ \longrightarrow \ F \ \longrightarrow \ I \ \longrightarrow \ J \ \longrightarrow \ 0
\]
of $A$-modules. Taking stalks at $x_0$ yields the exact sequence
\[
0 \ \longrightarrow \ I \ \longrightarrow \ I \ \longrightarrow \ 0 \ \longrightarrow \ 0
\]
of $G$-modules.
The next step is to apply $\Hom({\mathcal O}_X,-)$ to the given injective resolution of ${\mathcal O}_X$. A morphism $\varphi:{\mathcal O}_X\to {\mathcal I}_0$ is determined by the image $\varphi_{x_0}(T^0)\in {\mathcal I}_{0,x_0}=I$ of $T^0\in {\mathcal O}_{X,x_0}=G$. Thus $\Hom({\mathcal O}_X,{\mathcal I}_0)\simeq I$ (as $A$-module, or even as $G$-module).
A morphism $\psi:{\mathcal O}_X\to {\mathcal I}_1$ is given by two $A$-module maps \[\psi_i: {\mathcal O}_{X,x_i}=F \to J={\mathcal I}_{1,x_i}\] ($i=1,2$), which do not have to satisfy any relation since the restriction maps of ${\mathcal I}_1$ are trivial. Thus $\Hom({\mathcal O}_X,{\mathcal I}_1)\simeq J\times J$ (as $A$-module). Note that, a priori, these homomorphism sets are merely ${\mathbb F}_1$-modules, but the richer structure as $A$-modules makes it easier to study the induced morphism $\Phi:\Hom({\mathcal O}_X,{\mathcal I}_0)\to \Hom({\mathcal O}_X,{\mathcal I}_1)$, which is the only non-trivial map in the complex
\[
0 \ \longrightarrow \ \Hom({\mathcal O}_X,{\mathcal I}_0) \ \stackrel{\Phi}\longrightarrow \ \Hom({\mathcal O}_X,{\mathcal I}_1) \ \longrightarrow \ 0.
\]
We define the cohomology groups $H_\textup{hom}^i({\mathbb P}^1_{{\mathbb F}_1},{\mathcal O}_X)$ as the cohomology of this complex.
The kernel of $\Phi$ consists of the trivial morphism and the morphism $\varphi:{\mathcal O}_X\to{\mathcal I}_0$ that is characterized by $\varphi_{x_0}(T^0)=T^0$. Thus $H_\textup{hom}^0({\mathbb P}^1_{{\mathbb F}_1},{\mathcal O}_X)=\{\ast,\varphi\}$ is a $1$-dimensional ${\mathbb F}_1$-vector space, in accordance with the analogous result for the sheaf cohomology of ${\mathbb P}^1$ over a ring.
The image of $\Phi$ consists of the morphisms $\psi:{\mathcal O}_X\to{\mathcal I}_1$ for which either $\psi_1$ or $\psi_2$ is trivial. Thus $\im\Phi=J\vee J\subset J\times J$ (as $A$-modules). Consequently $H_\textup{hom}^1({\mathbb P}^1,{\mathcal O}_X)=(J\times J)/(J\vee J)$ is an infinite-dimensional ${\mathbb F}_1$-vector space. This is in stark contrast to the situation over a ring, where $H_\textup{hom}^1({\mathbb P}^1,{\mathcal O}_X)=0$.
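Concretely (a routine check, not part of the original text), the quotient can be described as follows:

```latex
% Routine check (not from the original text): an explicit basis of
% (J x J)/(J v J).
\[
  J\vee J \ = \ \bigl(J\times\{\ast\}\bigr)\cup\bigl(\{\ast\}\times J\bigr)
  \ \subset \ J\times J,
\]
```

so the quotient $(J\times J)/(J\vee J)$ identifies every pair with a trivial coordinate with the base point, while the pairs $(T^{-i},T^{-j})$ with $i,j\geq1$ remain pairwise distinct. Hence $H_\textup{hom}^1({\mathbb P}^1,{\mathcal O}_X)$ has the infinite basis $\{(T^{-i},T^{-j})\mid i,j\geq1\}$ as an ${\mathbb F}_1$-vector space.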
\begin{rem}
The above calculation can also be used to calculate $H_\textup{hom}^i({\mathbb P}^1,{\mathcal O}(n))$ for the twists ${\mathcal O}(n)$ of the structure sheaf, which yields the expected outcome for $H_\textup{hom}^0$, namely, an ${\mathbb F}_1$-vector space of dimension $n+1$ if $n\geq0$ and $0$ if $n<0$, but which yields, again, an infinite-dimensional ${\mathbb F}_1$-vector space $H_\textup{hom}^1({\mathbb P}^1,{\mathcal O}(n))$.
\end{rem}
\begin{rem}
As explained to the second author by Anton Deitmar, this does not contradict Theorem 2.7.1 in \cite{Deitmar11b}, which implies that the rank of the cohomology over ${\mathbb F}un$ is at most the rank of the corresponding cohomology over ${\mathbb Z}$. The reason is that the base extension of the twisted sheaf ${\mathcal O}(n)$ to ${\mathbb Z}$ (in the sense of \cite{Deitmar11b}) is not the twisted sheaf on the projective line over ${\mathbb Z}$, but a sheaf on ${\mathbb P}^1_{\mathbb Z}$ that is not of finite type.
To explain: in \cite{Deitmar11b}, the base extension of a sheaf ${\mathcal F}$ on an ${\mathbb F}_1$-scheme $X$ to the associated scheme $X_{\mathbb Z}$ is defined as the pullback $\pi^\ast{\mathcal F}$ along the base extension map $\pi: X_{\mathbb Z}\to X$, not tensored with the structure sheaf of $X_{\mathbb Z}$. This differs from the sheaf ${\mathcal F}^+_{\mathbb Z}$ considered in this text.
\end{rem}
\end{document} |
\begin{document}
\title{\huge\textbf{On piecewise hyperdefinable groups}}
\begin{abstract} The aim of this paper is to generalise and improve two of the main model{\hyp}theoretic results of ``Stable group theory and approximate subgroups'' by E. Hrushovski to the context of piecewise hyperdefinable sets. The first one is the existence of Lie models. The second one is the Stabilizer Theorem. In the process, a systematic study of the structure of piecewise hyperdefinable sets is developed. In particular, we show the most significant properties of their logic topologies.
\end{abstract}
\section*{Introduction}
Various enlightening results were found in \cite{hrushovski2011stable} with significant consequences on model theory and additive combinatorics. Two of them are particularly relevant: the Stabilizer Theorem and the existence of Lie models.
The Stabilizer Theorem \cite[Theorem 3.5]{hrushovski2011stable}, subsequently improved in \cite[Theorem 2.12]{montenegro2018stabilizers}, was originally itself a generalisation of the classical Stabilizer Theorem for stable and simple groups (see for example \cite[Section 4.5]{wagner2010simple}), replacing the stability and simplicity hypotheses with measure{\hyp}theoretic ones. Here, we extend that theorem to piecewise hyperdefinable groups in Section 3. Once one has the right definition of dividing and forking for piecewise hyperdefinable sets, the original proof of \cite{hrushovski2011stable}, and its improved version of \cite{montenegro2018stabilizers}, can be naturally adapted. However, we also manage to simplify the proof in such a way that we get a slightly stronger result.
In \cite{hrushovski2011stable}, Hrushovski studied piecewise definable groups generated by near{\hyp}subgroups, i.e. approximate subgroups satisfying a kind of measure{\hyp}theoretic condition. In particular, he worked with ultraproducts of finite approximate subgroups. In that context, Hrushovski proved that there exist some Lie groups, named \emph{Lie models}\footnote{In \cite{hrushovski2011stable} they are simply called \emph{associated Lie groups}. The term \emph{Lie model} was later introduced in \cite{breuillard2012structure}.}, deeply connected to the model{\hyp}theoretic structure of the piecewise definable group \cite[Theorem 4.2]{hrushovski2011stable}. Furthermore, among all these Lie groups, Hrushovski focused on the minimal one, showing its uniqueness and its independence of expansions of the language.
Here, in Section 2, we improve these results by defining the more general notion of \emph{Lie core}. Then, we prove the existence of Lie cores for any piecewise hyperdefinable group with a generic piece and the uniqueness of the minimal Lie core. In the process, we adapt the classical model{\hyp}theoretic components (with parameters) $G^0$, $G^{00}$ and $G^{000}$ for piecewise hyperdefinable groups --- our definitions extend some particular cases already studied (e.g. \cite{hrushovski2022amenability}). We also introduce a new component $G^{\mathrm{ap}}$; $G^{\mathrm{ap}}$ is the smallest possible kernel of a continuous projection to a locally compact topological group without non{\hyp}trivial compact normal subgroups. We use these components to show that the minimal Lie core is precisely $\rfrac{G^0}{G^{\mathrm{ap}}}$. Using this canonical presentation of the minimal Lie core, we conclude that the minimal Lie core is piecewise $0${\hyp}hyperdefinable and independent of expansions of the language.
Hyperdefinable sets, originally introduced in \cite{hart2000coordinatisation}, are quotients of $\bigwedge${\hyp}definable (read infinitely{\hyp}definable or type{\hyp}definable) sets over $\bigwedge${\hyp}definable equivalence relations. Hyperdefinable sets have been already well studied by different authors (e.g. \cite{wagner2010simple} and \cite{kim2014simplicity}). Here we extend this study to piecewise hyperdefinable sets.
A piecewise hyperdefinable set is a strict direct limit of hyperdefinable sets. We are interested in the piecewise hyperdefinable sets as elementary objects in themselves that inherit an underlying model{\hyp}theoretic structure. In particular, we are principally interested in studying the natural logic topologies that generalise the usual Stone topology and can be defined for any piecewise hyperdefinable set.
We will discuss the general theory of piecewise hyperdefinable sets in Section 1, elaborating the basic theory that we need for the rest of the paper. Mainly, we study their basic topological properties such as compactness, local compactness, normality, Hausdorffness and metrizability, and also their relations with the quotient, product and subspace topologies. All this study leads naturally to the definition of \emph{locally hyperdefinable sets}, which is in fact one of the main notions of this document.
The original motivation of this paper comes from the study of rough approximate subgroups. Rough approximate subgroups are the natural generalisation of approximate subgroups in which we also allow a small thickening of the cosets. This generalisation is particularly natural in the context of metric groups, where the thickenings are given by balls. In \cite{tao2008product} (see also \cite{tao2014metric}), Tao showed that a significant part of the basic theory of approximate subgroups extends to rough approximate subgroups of metric groups via a discretisation process. This generalisation to the metric setting has recently found interesting applications in \cite{gowers2020partial}.
It is a well{\hyp}known fact in first{\hyp}order model theory that, when working with metric spaces, non{\hyp}standard real numbers appear as soon as we have saturation. To solve this issue, we have to deal with some kind of continuous logic. Piecewise hyperdefinable sets provide a natural way of doing so. Roughly speaking, the idea behind continuous logic is to restrict the universe to the bounded elements and to quotient out by infinitesimals via the standard part function, as can be done using piecewise hyperdefinable sets.
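As a concrete illustration of this idea (a standard example, supplied here for orientation rather than taken from the text), the standard part map on a saturated real closed field fits exactly this pattern:

```latex
% Standard example: in a saturated real closed field
% $\mathfrak{R}\succ\mathbb{R}$, the bounded elements form a countable
% increasing union of definable pieces, and infinitesimal closeness is
% a $\bigwedge$-definable equivalence relation:
\[
\mathrm{Fin}(\mathfrak{R})=\bigcup_{n\in\mathbb{N}}\{x:|x|\leq n\},
\qquad
x\sim y \;\iff\; \bigwedge_{n\geq 1} |x-y|\leq \tfrac{1}{n}.
\]
% Each piece $\rfrac{\{x:|x|\leq n\}}{\sim}$ is hyperdefinable, so
% $\rfrac{\mathrm{Fin}(\mathfrak{R})}{\sim}$ is piecewise hyperdefinable,
% and the quotient map is precisely the standard part function onto
% $\mathbb{R}$.
```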
Hrushovski already indicated in unpublished work that, using piecewise hyperdefinable groups, it should be possible to extend some of the results of \cite{hrushovski2011stable} to the context of metric groups. The aim of this paper is to provide the abstract foundations for such results, with a view towards eventual applications to combinatorics. We conclude the paper by giving the natural generalisation of the Lie Model Theorem to rough approximate subgroups. Applications of this result to the case of metric groups will be studied in a future paper.
We study piecewise hyperdefinable sets in general in Section 1, focussing on the properties of their logic topologies. The most important results of this section come after the introduction of locally hyperdefinable sets in Section 1.4. Section 2 is the core of this paper and is devoted to the general study of piecewise hyperdefinable groups. The first fundamental result of the section is Theorem \ref{t:generic set lemma}, in which we show that piecewise hyperdefinable groups satisfying a natural combinatorial condition are locally hyperdefinable. In Section 2.3, we define the model{\hyp}theoretic components of piecewise hyperdefinable groups, proving the existence of $G^{\mathrm{ap}}$ in Theorem \ref{t:the aperiodic component}. Finally, we focus on the study of Lie cores, proving their existence (Theorem \ref{t:logic existence of lie core}) and the uniqueness of the minimal one (Theorem \ref{t:logic uniqueness of minimal lie core}), giving a canonical representation of the minimal one in terms of the model{\hyp}theoretic components (Theorem \ref{t:canonical representation of the minimal lie core}), and showing its independence of expansions of the language (Corollary \ref{c:independence of expansions of the minimal lie core}). Section 3 is devoted to the Stabilizer Theorem for piecewise hyperdefinable groups, which is split across Theorem \ref{t:stabilizer theorem 1}, Corollary \ref{c:stabilizer theorem mos b2} and Theorem \ref{t:stabilizer theorem 2} --- the latter being the standard statement of the Stabilizer Theorem. We conclude the paper by stating the Rough Lie Model Theorem \ref{t:rough lie model}, which generalises Hrushovski's Lie Model Theorem to the case of rough approximate subgroups.
{\textnormal{\textbf{Notations and conventions:}}} From now on, unless otherwise stated, fix a many{\hyp}sorted first{\hyp}order language, $\mathscr{L}$, and an infinite $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous $\mathscr{L}${\hyp}structure, $\mathfrak{M}$, with $\kappa>|\mathscr{L}|$ a strong limit cardinal. We say that a subset of $\mathfrak{M}$ is \emph{small} if its cardinality is smaller than $\kappa$.
We would like to remark that our assumptions on $\mathfrak{M}$ are far stronger than needed. While saturation is fundamental, strong homogeneity is actually irrelevant. Also, assuming that $\kappa$ is a strong limit cardinal is much more than necessary. Most of the results and arguments work when we only assume $\kappa>|\mathscr{L}|$. The only significant exceptions are Sections 2.3 and 2.4, where we need to assume $\kappa> 2^{|A|+|\mathscr{L}|}$, with $A$ the set of parameters we are using.
Contrary to common practice, we consider the logic topologies on the models themselves, rather than on the spaces of types. This has the unpleasant consequence of having to work with non{\hyp}$\mathrm{T}_0$ topological spaces. In particular, we need to study normality and local compactness without Hausdorffness.
In any case, whether taken on the model or on the space of types, these two topologies are in fact the ``same''. Indeed, the spaces of types are obtained from the logic topologies on the model after saturation (i.e. compactification) as the quotient by the equivalence relation of being topologically indistinguishable (i.e. the \emph{Kolmogorov quotient}). In particular, the canonical projections ${\mathrm{tp}}_A:\ a\mapsto {\mathrm{tp}}(a/A)$ are continuous, open and closed maps\footnote{In fact, $X\mapsto\{{\mathrm{tp}}(x/A)\mathrel{:} x\in X\}$ is an isomorphism (in the sense of lattices of sets) between the topologies. Also, ${\mathrm{tp}}^{-1}_A[{\mathrm{tp}}_A[X]]=X$ for any $A${\hyp}invariant set.} from the $A${\hyp}logic topologies to the spaces of types over $A$. Moreover, based on this, we will show that piecewise hyperdefinable sets with their logic topologies generalise type spaces, the latter being the particular case of $\mathrm{T}_0$ logic topologies.
We choose to work with logic topologies on the models for three reasons. The first is that we save one unneeded step (the Kolmogorov quotient). The second is that the spaces of types of products are not the products of the spaces of types, while we want to be able to prove and naturally use Proposition \ref{p:product logic topology}, which says that the global logic topology of the product is the product topology of the global logic topologies. The last, related to the previous one, is that the space of types of a group does not usually preserve the group structure, which would make the statement of the Isomorphism Theorem \ref{t:isomorphism theorem} for piecewise hyperdefinable groups, fundamental in Section 2, awkward when expressed through the spaces of types.
\rule{0.5em}{0.5em} From now on, except when otherwise stated, we use $a,b,\ldots$ and $A,B,\ldots$ to denote hyperimaginaries and small sets of hyperimaginaries respectively, while $a^*,a^{**},\ldots$ and $A^*,A^{**},\ldots$ denote associated representatives. If we use $a^*$ or $A^*$ without mention of $a$ or $A$, we mean real elements. To avoid technical issues, we assume that $A$ contains a subset $A_{\mathrm{re}}\subseteq A$ of real elements such that every element of $A$ is hyperimaginary over $A_{\mathrm{re}}$.
\rule{0.5em}{0.5em} We use $x,y,z,\ldots$ to denote variables. Except when otherwise stated, variables are always finite tuples of (single) variables. Sometimes, to simplify the notation, we also allow infinite tuples of variables. We write $\overline{x},\overline{y},\overline{z},\ldots$ to denote infinite tuples of variables. If in some exceptional place we need to consider single variables, we will explicitly indicate it.
\rule{0.5em}{0.5em} Except when otherwise stated, elements are finite tuples of (single) elements. We write $\overline{a},\overline{b},\ldots$ and $\overline{a}^*,\overline{b}^*,\ldots$ to indicate when we consider infinite tuples of elements. In this paper, we do not study piecewise hyperdefinable sets of infinite tuples. It seems that our results could be generalised in that direction by also taking inverse limits.
\rule{0.5em}{0.5em} By a type we always mean a complete type. The Stone space of types contained in the set $X$ with parameters $A^*$ is denoted by $\mathbf{S}_{X}(A^*)$. The set of formulas of $\mathscr{L}$ with parameters in $A^*$ and variables in the (possibly infinite) tuple $\overline{x}$ is denoted by $\mathrm{For}^{\bar{x}}(\mathscr{L}(A^*))$. The cardinality of the language is the cardinality of its set of formulas; $|\mathscr{L}|\coloneqq |\mathrm{For}(\mathscr{L})|$. In particular, $|\mathscr{L}|$ is always at least $\aleph_0$.
\rule{0.5em}{0.5em} Here, definable means definable with parameters. Also, $\bigwedge${\hyp}definable means infinitely{\hyp}definable (or type{\hyp}definable) over a small set of parameters. Similarly, $\bigvee${\hyp}definable means coinfinitely{\hyp}definable (or cotype{\hyp}definable) over a small set of parameters. To indicate that we use parameters from $A$, we write $A${\hyp}definable, $\bigwedge_A${\hyp}definable and $\bigvee_A${\hyp}definable. We also use cardinals and cardinal inequalities. In that case, the subscript should be read as an anonymous set of parameters whose size satisfies the indicated condition. For example, $\bigwedge_{<\omega}${\hyp}definable means $\bigwedge_A${\hyp}definable for some subset $A$ with $|A|<\omega$. The same notation will be naturally used for hyperdefinable, piecewise hyperdefinable and piecewise $\bigwedge${\hyp}definable sets.
\rule{0.5em}{0.5em} Following the terminology of \cite{tent2012course}, a $\kappa${\hyp}homogeneous structure is a structure such that every partial elementary internal map defined on a subset with cardinality less than $\kappa$ can be extended to any further element, while a strongly $\kappa${\hyp}homogeneous structure is a structure such that every partial elementary internal map defined on a subset with cardinality less than $\kappa$ can be extended to an automorphism.
\rule{0.5em}{0.5em} Let $R\subseteq X\times Y$ be a set{\hyp}theoretic binary relation. For $x\in X$, we write $R(x)\coloneqq \{y\in Y\mathrel{:} (x,y)\in R\}$. For a subset $V\subseteq X$, we write $R[V]\coloneqq\{y\in Y\mathrel{:} \exists x\in V\ (x,y)\in R\}$. We also write $R^{-1}\coloneqq\{(y,x)\mathrel{:} (x,y)\in R\}$, so $R^{-1}(y)\coloneqq \{x\in X\mathrel{:} (x,y)\in R\}$ for $y\in Y$ and $R^{-1}[W]\coloneqq \{x\in X\mathrel{:} \exists y\in W\ (x,y)\in R\}$ for $W\subseteq Y$. We denote the image and preimage functions between the power sets by ${\mathrm{Im}}\ R:\ V\mapsto R[V]$ and ${\mathrm{Im}}^{-1}R:\ W\mapsto R^{-1}[W]$. Most of the time, this notation is used for partial functions, which are always identified with their graphs --- note that $f^{-1}$ is only a function when $f$ is invertible.
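For instance (a toy example of ours, not from the text), taking $R=\{(1,a),(1,b),(2,b)\}\subseteq\{1,2\}\times\{a,b,c\}$, this notation gives:

```latex
\[
R(1)=\{a,b\},\qquad R[\{1,2\}]=\{a,b\},\qquad
R^{-1}(b)=\{1,2\},\qquad R^{-1}[\{c\}]=\emptyset.
\]
```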
\rule{0.5em}{0.5em} Cartesian projections are denoted by ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}$, quotient maps are denoted by ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}$, quotient homomorphisms in groups are denoted by $\pi$, inclusion maps are denoted by ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}$ and identity maps are denoted by ${\mathrm{id}}$.
\rule{0.5em}{0.5em} A lattice of sets is a family of sets closed under finite unions and intersections. A complete algebra on a set $X$ is a family of subsets of $X$ closed under complements and arbitrary unions.
\rule{0.5em}{0.5em} The class of ordinals is denoted by ${\mathbf{\mathbbm{O}\mathrm{n}}}$. The cardinal of a set $X$ is written $|X|$.
\rule{0.5em}{0.5em} We use product notation for groups. Also, unless otherwise stated, we consider the group acting on itself on the left. In particular, by a coset we mean a left coset. A subset $X$ of a group is called symmetric if $1\in X=X^{-1}$. For subsets $X$ and $Y$ of a group, we write $XY$ for the set of pairwise products, and abbreviate $X^n\coloneqq XX^{n-1}$ and $X^{-n}\coloneqq (X^{-1})^n$ for $n\in\mathbb{N}$. We say that $X$ normalises $Y$ if $x^{-1}Yx\subseteq Y$ for every $x\in X$.
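To fix ideas with this product notation (recalling the standard definition of an approximate subgroup, which underlies the whole paper), a symmetric subset $X$ of a group is a \emph{$K${\hyp}approximate subgroup} when $X^2$ is covered by $K$ cosets of $X$:

```latex
\[
X^2 \subseteq FX \quad\text{for some finite } F \text{ with } |F|\leq K.
\]
% For example, $X=\{-N,\dots,N\}\subseteq(\mathbb{Z},+)$ is symmetric and
% $X+X=\{-2N,\dots,2N\}=(-N+X)\cup(N+X)$, so $X$ is a $2$-approximate
% subgroup with $F=\{-N,N\}$.
```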
\rule{0.5em}{0.5em} By a Lie group we always mean here a finite{\hyp}dimensional real Lie group.
\section{Piecewise hyperdefinable sets}
\subsection{Hyperdefinable sets}
Let $A^*$ be a small set of parameters. An \emph{$A^*${\hyp}hyperdefinable} set is a quotient $P=\rfrac{X}{E}$ where $X$ is a non{\hyp}empty $\bigwedge_{A^*}${\hyp}definable set and $E$ is an $\bigwedge_{A^*}${\hyp}definable equivalence relation. If we do not indicate the set of parameters, we mean that it is hyperdefinable for some small set of parameters. Write ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P:\ X\rightarrow P$ for the \emph{quotient map} given by $x\mapsto [x]_E\coloneqq \rfrac{x}{E}\coloneqq E(x)=\{x'\in X\mathrel{:} (x,x')\in E\}$. The elements of $A${\hyp}hyperdefinable sets are called \emph{hyperimaginaries over $A$}. Given a hyperimaginary element $a$, a \emph{representative} of $a$ is an element $a^*\in a$ of the structure such that ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P(a^*)=a$. The elements of the structure will be called \emph{real}.
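As a concrete example (standard, supplied by us): in a saturated model $G$ of $\mathrm{Th}(\mathbb{Z},+)$, the intersection of the definable congruence relations is an $\bigwedge_\emptyset${\hyp}definable equivalence relation, yielding a hyperdefinable quotient:

```latex
\[
X=G,
\qquad
E=\bigcap_{n\geq 1}\{(x,y) : \exists z\ (x-y=nz)\}.
\]
% Here $P=\rfrac{X}{E}$ is $\emptyset$-hyperdefinable and, by saturation,
% can be identified with the profinite completion $\widehat{\mathbb{Z}}$;
% its logic topology is the usual profinite topology.
```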
An \emph{$\bigwedge_{A^*}${\hyp}definable} subset, $V\subseteq P$, is a subset such that ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V]$ is $\bigwedge_{A^*}${\hyp}definable in $X$. We will say that a partial type defines $V\subseteq P$ if it defines ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V]$. If $V\subseteq P$ is a non{\hyp}empty $\bigwedge${\hyp}definable set, after declaring the parameters, we will write $\underline{V}$ to denote a partial type defining ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V]$. The following basic proposition is the starting point to study hyperdefinable sets.
\begin{lem}[Correspondence Lemma] \label{l:correspondence lemma} Let $P=\rfrac{X}{E}$ be an $A^*${\hyp}hyperdefinable set. Then, the image by ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$ of any $\bigwedge_{A^*}${\hyp}definable subset of $X$ is an $\bigwedge_{A^*}${\hyp}definable subset of $P$. Moreover, the preimage function ${\mathrm{Im}}^{-1}{\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$ is an isomorphism, whose inverse is ${\mathrm{Im}}\ {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$, between the lattice of $\bigwedge_{A^*}${\hyp}definable subsets of $P$ and the lattice of $\bigwedge_{A^*}${\hyp}definable subsets of $X$ closed under $E$.
\end{lem}
The main part of this lemma can be further generalised to Lemma \ref{l:infinite definable functions with real parameters}. For this, note firstly that, given two $A^*${\hyp}hyperdefinable sets $P=\rfrac{X}{E}$ and $Q=\rfrac{Y}{F}$, the Cartesian product $P\times Q$ is canonically identified with the hyperdefinable set $\rfrac{X\times Y}{E\hat{\times} F}$ via $([x]_E,[y]_F)\mapsto [x,y]_{E\hat{\times} F}$, where $E\hat{\times} F\coloneqq\{((x,y),(x',y'))\mathrel{:} (x,x')\in E$, $(y,y')\in F\}$. Then, we can talk about $\bigwedge_{A^*}${\hyp}definable relations and partial functions.
\begin{ejems} The inclusion ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}:\ V\rightarrow P$ of an $\bigwedge_{A^*}${\hyp}definable set $V$ is $\bigwedge_{A^*}${\hyp}definable. The Cartesian projections ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_P:\ P\times Q\rightarrow P$ and ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_Q:\ P\times Q\rightarrow Q$ are $\bigwedge_{A^*}${\hyp}definable. Also, the quotient map ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P:\ X\rightarrow P$ is $\bigwedge_{A^*}${\hyp}definable.
\end{ejems}
\begin{lem} \label{l:infinite definable functions with real parameters}
Let $P=\rfrac{X}{E}$ and $Q=\rfrac{Y}{F}$ be two $A^*${\hyp}hyperdefinable sets and $f$ an $\bigwedge_{A^*}${\hyp}definable partial function from $P$ to $Q$. Then, for any $\bigwedge_{A^*}${\hyp}definable sets $V\subseteq P$ and $W\subseteq Q$, $f[V]$ and $f^{-1}[W]$ are $\bigwedge_{A^*}${\hyp}definable.
\begin{dem} It is enough to check that the projection maps satisfy the lemma. It is trivial that ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}^{-1}_P[V]=V\times Q$ is $\bigwedge_{A^*}${\hyp}definable for any $\bigwedge_{A^*}${\hyp}definable subset $V$. On the other hand, if $V\subseteq P\times Q$ is $\bigwedge_{A^*}${\hyp}definable, by compactness, $\Sigma(x)=\{\exists y\mathrel{} \bigwedge \Delta(x,y)\mathrel{:} \Delta\subseteq \underline{V}\mbox{ finite}\}$ defines ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[{\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_P[V]]$. \mathbb{Q}ED
\end{dem}
\end{lem}
\begin{obs} Let $f:\ P\rightarrow Q$ and $g:\ Q\rightarrow R$ be functions. As subsets of $P\times R$, we have $g\circ f={\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_{P\times R}[(f\times R)\cap (P\times g)]$, where ${\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_{P\times R}:\ P\times Q\times R\rightarrow P\times R$ is the natural projection. Thus, compositions of $\bigwedge_{A^*}${\hyp}definable partial functions are also $\bigwedge_{A^*}${\hyp}definable partial functions.
\end{obs}
A \emph{type over $A^*$, or $A^*${\hyp}type}, in $P$ is an $\bigwedge_{A^*}${\hyp}definable subset of $P$ which is $\subset${\hyp}minimal in the family of non{\hyp}empty $\bigwedge_{A^*}${\hyp}definable subsets. For $a\in P$, we write ${\mathrm{tp}}(a/A^*)$ for the type over $A^*$ containing $a$. As the lattice of $\bigwedge_{A^*}${\hyp}definable sets is closed under arbitrary intersections, the type of a hyperimaginary element always exists.
As usual, for infinite tuples $\overline{a}=(a_t)_{t\in T}$ and $\overline{b}=(b_t)_{t\in T}$ of hyperimaginaries over $A^*$, we write ${\mathrm{tp}}(\overline{a}/A^*)={\mathrm{tp}}(\overline{b}/A^*)$ to mean that ${\mathrm{tp}}(\overline{a}_{\mid T_0}/A^*)={\mathrm{tp}}(\overline{b}_{\mid T_0}/A^*)$ for any $T_0\subseteq T$ finite.
\begin{lem} \label{l:types of hyperimaginaries}
Let $P$ be an $A^*${\hyp}hyperdefinable set, $a\in P$ and $a^*\in a$. Then, ${\mathrm{tp}}(a/A^*)={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P[{\mathrm{tp}}(a^*/A^*)]$. In particular, $b\in{\mathrm{tp}}(a/A^*)$ if and only if there is $b^*\in b$ such that ${\mathrm{tp}}(a^*/A^*)={\mathrm{tp}}(b^*/A^*)$. In other words, ${\mathrm{tp}}(a/A^*)$ is the orbit of $a$ under the action of $\mathrm{Aut}(\mathfrak{M}/A^*)$ on $P$.
\begin{dem} By the Correspondence Lemma \ref{l:correspondence lemma} and minimality. \mathbb{Q}ED
\end{dem}
\end{lem}
Let $P$ be $A^*${\hyp}hyperdefinable. A subset $V\subseteq P$ is \emph{$A^*${\hyp}invariant} if ${\mathrm{tp}}(a/A^*)\subseteq V$ for any $a\in V$. As immediate corollaries of Lemma \ref{l:types of hyperimaginaries} we get the following results:
\begin{coro} \label{c:invariance}
Let $P$ be $A^*${\hyp}hyperdefinable and $V\subseteq P$. Then, $V$ is $A^*${\hyp}invariant if and only if it is setwise invariant under the action of $\mathrm{Aut}(\mathfrak{M}/A^*)$ on $P$, if and only if ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}[V]$ is $A^*${\hyp}invariant.
\begin{dem} Obvious by Lemma \ref{l:types of hyperimaginaries}.
\mathbb{Q}ED
\end{dem}
\end{coro}
\begin{coro} \label{c:parameters of definition}
Let $P$ be $A^*${\hyp}hyperdefinable and $V\subseteq P$ an $\bigwedge${\hyp}definable subset. Then, $V$ is $\bigwedge_{A^*}${\hyp}definable if and only if it is $A^*${\hyp}invariant.
\end{coro}
We also want to be able to use hyperimaginary parameters. To do so, we need to redefine the previous notions. Let $A$ be a set of hyperimaginaries. We start by defining types over $A$ and $A${\hyp}invariance for real elements. Then, we can define what an $A${\hyp}hyperdefinable set, an $A${\hyp}invariant set, an $\bigwedge_A${\hyp}definable set and a type over $A$ are for hyperimaginaries.
Since we assume that every element in $A$ is hyperimaginary over $A_{\mathrm{re}}$, the group $\mathrm{Aut}(\mathfrak{M}/A_{\mathrm{re}})$ naturally acts on the elements of $A$, i.e. $\sigma(a)$ makes sense for $a\in A$ and $\sigma\in\mathrm{Aut}(\mathfrak{M}/A_{\mathrm{re}})$. The \emph{group of automorphisms $\mathrm{Aut}(\mathfrak{M}/A)$ pointwise fixing $A$} is defined as the subgroup of elements of $\mathrm{Aut}(\mathfrak{M}/A_{\mathrm{re}})$ fixing each element of $A$. Note that, obviously, $\mathrm{Aut}(\mathfrak{M}/A^*)\leq \mathrm{Aut}(\mathfrak{M}/A)$ for any set of representatives $A^*$ of $A$.
Let $a^*$ be a tuple of real elements. The type of $a^*$ over $A$ is the set ${\mathrm{tp}}(a^*/A)\coloneqq\{b^*\mathrel{:} {\mathrm{tp}}(a^*,A)={\mathrm{tp}}(b^*,A)\}$. Equivalently, by Lemma \ref{l:types of hyperimaginaries}, ${\mathrm{tp}}(a^*/A)$ is the orbit of $a^*$ under $\mathrm{Aut}(\mathfrak{M}/A)$. A set of real elements $X$ is \emph{$A${\hyp}invariant} if we have ${\mathrm{tp}}(a^*/A)\subseteq X$ for any $a^*\in X$. Equivalently, $X$ is $A${\hyp}invariant if and only if it is setwise invariant under the action of $\mathrm{Aut}(\mathfrak{M}/A)$. We say that a hyperdefinable set $P=\rfrac{X}{E}$ is \emph{$A${\hyp}hyperdefinable} if $X$ and $E$ are $A${\hyp}invariant. Note that, as $\mathrm{Aut}(\mathfrak{M}/A^*)\leq \mathrm{Aut}(\mathfrak{M}/A)$, an $A${\hyp}hyperdefinable set is, in particular, $A^*${\hyp}hyperdefinable for any set of representatives $A^*$ of $A$. If $P$ is $A${\hyp}hyperdefinable, it follows that $\mathrm{Aut}(\mathfrak{M}/A)$ naturally acts on $P$.
A subset of an $A${\hyp}hyperdefinable set is \emph{$A${\hyp}invariant} if its preimage by the quotient map is $A${\hyp}invariant. Equivalently, it is $A${\hyp}invariant if and only if it is setwise invariant under the action of $\mathrm{Aut}(\mathfrak{M}/A)$. By Corollary \ref{c:invariance}, as $\mathrm{Aut}(\mathfrak{M}/A^*)\leq \mathrm{Aut}(\mathfrak{M}/A)$, every $A${\hyp}invariant set is in particular $A^*${\hyp}invariant for any set of representatives. An \emph{$\bigwedge_A${\hyp}definable} subset is an $A${\hyp}invariant $\bigwedge${\hyp}definable subset. By Corollary \ref{c:parameters of definition}, every $\bigwedge_A${\hyp}definable set is $\bigwedge_{A^*}${\hyp}definable for any set of representatives. Note that the $A${\hyp}invariant subsets of $P$ form a complete algebra of subsets of $P$. Thus, $\bigwedge_A${\hyp}definable sets are a lattice of sets closed under arbitrary intersections.
\begin{lem} \label{l:infinite definable functions}
Let $P=\rfrac{X}{E}$ and $Q=\rfrac{Y}{F}$ be two $A${\hyp}hyperdefinable sets and $f$ an $\bigwedge_A${\hyp}definable partial function. Then, for any $\bigwedge_A${\hyp}definable sets $V\subseteq P$ and $W\subseteq Q$, $f[V]$ and $f^{-1}[W]$ are $\bigwedge_A${\hyp}definable.
\begin{dem} By Lemma \ref{l:infinite definable functions with real parameters}, $f[V]$ and $f^{-1}[W]$ are $\bigwedge${\hyp}definable. As $f$ is $A${\hyp}invariant, $f(\sigma(a))=\sigma(f(a))$ for $a\in P$ and $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$, so $f[V]$ and $f^{-1}[W]$ are $A${\hyp}invariant. \mathbb{Q}ED
\end{dem}
\end{lem}
A \emph{type over $A$, or $A${\hyp}type}, is a $\subset${\hyp}minimal non{\hyp}empty $\bigwedge_A${\hyp}definable subset. For $a\in P$, we write ${\mathrm{tp}}(a/A)$ for the type over $A$ containing $a$. As the lattice of $\bigwedge_{A}${\hyp}definable sets is closed under arbitrary intersections, the type of a hyperimaginary element always exists. As usual, for infinite tuples $\overline{a}=(a_t)_{t\in T}$ and $\overline{b}=(b_t)_{t\in T}$, we write ${\mathrm{tp}}(\overline{a}/A)={\mathrm{tp}}(\overline{b}/A)$ to mean that ${\mathrm{tp}}(\overline{a}_{\mid T_0}/A)={\mathrm{tp}}(\overline{b}_{\mid T_0}/A)$ for any $T_0\subseteq T$ finite.
\begin{lem} \label{l:types and infinite definable functions}
Let $P,Q$ be $A${\hyp}hyperdefinable sets, $f:\ P\rightarrow Q$ an $\bigwedge_{A}${\hyp}definable function and $a\in P$. Then, $f[{\mathrm{tp}}(a/A)]={\mathrm{tp}}(f(a)/A)$.
\begin{dem} By Lemma \ref{l:infinite definable functions}, we have $f[{\mathrm{tp}}(a/A)]$ and $f^{-1}[{\mathrm{tp}}(f(a)/A)]$ are $\bigwedge_A${\hyp}definable. Then, ${\mathrm{tp}}(a/A)\subseteq f^{-1}[{\mathrm{tp}}(f(a)/A)]$ and ${\mathrm{tp}}(f(a)/A)\subseteq f[{\mathrm{tp}}(a/A)]$ by minimality. Thus, $f[{\mathrm{tp}}(a/A)]={\mathrm{tp}}(f(a)/A)$. \mathbb{Q}ED
\end{dem}
\end{lem}
As an immediate corollary we get the following result:
\begin{coro}\label{c:types over hyperimaginaries}
Let $P$ be an $A${\hyp}hyperdefinable set, $a\in P$ and $a^*\in a$. Then, ${\mathrm{tp}}(a/A)={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P[{\mathrm{tp}}(a^*/A)]$. In particular, ${\mathrm{tp}}(a/A)$ is the orbit of $a$ under the action of $\mathrm{Aut}(\mathfrak{M}/A)$ on $P$. In other words, for any set of representatives $A^*$ of $A$, we have $b\in{\mathrm{tp}}(a/A)$ if and only if there are $b^{**}\in b$ and a set of representatives $A^{**}$ of $A$ such that ${\mathrm{tp}}(a^*,A^*)={\mathrm{tp}}(b^{**},A^{**})$.
Consequently, $V$ is $A${\hyp}invariant if and only if ${\mathrm{tp}}(a/A)\subseteq V$ for any $a\in V$.
\begin{dem} Clear by Lemma \ref{l:types and infinite definable functions} and Lemma \ref{l:types of hyperimaginaries}.
\mathbb{Q}ED
\end{dem}
\end{coro}
We explain now how to substitute hyperimaginary parameters. Let $P$ be $A${\hyp}hyperdefinable, $\overline{b}$ a small set of hyperimaginaries over $A$ and $V\subseteq P$ an $\bigwedge_{\overline{b}}${\hyp}definable subset. Let $\overline{c}$ be such that ${\mathrm{tp}}(\overline{b}/A)={\mathrm{tp}}(\overline{c}/A)$. The set \emph{$V(\overline{c})$ given from $V$ by replacing $\overline{b}$ by $\overline{c}$} is the set $\sigma[V]$ for $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$ such that $\sigma(\overline{b})=\overline{c}$. Note that this does not depend on the choice of $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$.
Alternatively, we present a more explicit methodology using \emph{uniform definitions} that does not require the use of automorphisms. Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable set. We say that a partial type $\Sigma(x,A^*)$ is \emph{weakly uniform on $P$ over $A$} if $\Sigma(x,A^{**})$ defines the same set on $P$ for any set of representatives $A^{**}$ of $A$, i.e. $\rfrac{\Sigma(\mathfrak{M},A^*)}{E}=\rfrac{\Sigma(\mathfrak{M},A^{**})}{E}$. We say that $\Sigma(x,A^*)$ is \emph{uniform on $P$ over $A$} if $\Sigma(x,A^*)\cap \mathrm{For}^x(\mathscr{L}(B^*))$ is weakly uniform on $P$ over $B$ for any $B^*\subseteq A^*$ such that $P$ is still $B${\hyp}hyperdefinable and $A_{\mathrm{re}}\subseteq B$. Let $V\subseteq P$ be $\bigwedge_A${\hyp}definable. A \emph{uniform definition of $V$ over $A$} is a partial type $\underline{V}$ uniform on $P$ over $A$ that defines $V$.
\begin{lem} \label{l:uniform definition}
Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable set and $V\subseteq P$ a non{\hyp}empty $\bigwedge_A${\hyp}definable set. Then, there is a uniform definition of $V$ over $A$.
\begin{dem} Write $A=\{a_i\}_{i\in I}$ and $a_i\in Q_i=\rfrac{Y_i}{F_i}$. Pick $A^*$ representatives of $A$ and let $\Sigma(x,A^*)$ be any partial type defining $V$ over $A^*$. Write $\underline{F}=\bigwedge_i \underline{F}_i$ and $\Gamma={\mathrm{tp}}(A^*)$. Using saturation, take the partial type $\underline{V}(x,A^*)$ expressing $\exists \overline{y}\mathrel{} \Sigma(x,\overline{y})\wedge \underline{F}(\overline{y},A^*)\wedge \Gamma(\overline{y})$. By Corollary \ref{c:types over hyperimaginaries}, it is a uniform definition of $V$ over $A$. \mathbb{Q}ED
\end{dem}
\end{lem}
\begin{lem}\label{l:substitution and uniform definitions}
Let $P$ be an $A${\hyp}hyperdefinable set and $V\subseteq P$ be a non{\hyp}empty $\bigwedge_{\overline{b}}${\hyp}definable subset with $\overline{b}$ a small set of hyperimaginaries over $A$ and $A\subseteq \overline{b}$. Let $\underline{V}(x,\overline{b}^*)$ be a uniform definition of $V$. Let $\overline{c}$ be such that ${\mathrm{tp}}(\overline{b}/A)={\mathrm{tp}}(\overline{c}/A)$ and $\overline{c}^*$ be representatives. Then, $\underline{V}(x,\overline{c}^*)$ is a uniform definition of $V(\overline{c})$.
\begin{dem} Take $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$ with $\sigma(\overline{b})=\overline{c}$, so $\overline{b}^{**}=\sigma^{-1}(\overline{c}^*)$ is a set of representatives of $\overline{b}$. As $\underline{V}(x,\overline{b}^*)$ is a uniform definition of $V$ on $P$ over $\overline{b}$, we have that $\underline{V}(x,\overline{b}^{**})$ also defines $V$. Consequently, $V(\overline{c})$ is defined by $\underline{V}(x,\overline{c}^*)$. Now, take $\overline{c}_0\subseteq\overline{c}$ such that $P$ is still $\overline{c}_0${\hyp}hyperdefinable and say $\overline{c}_0=\sigma(\overline{b}_0)$. Then, $P$ is still $\overline{b}_0${\hyp}hyperdefinable and, as $\underline{V}(x,\overline{b}^*)$ is uniform on $P$ over $\overline{b}$, we have that $\underline{W}(x,\overline{b}^{**}_0)=\underline{V}(x,\overline{b}^{**})\cap \mathrm{For}(\mathscr{L}(\overline{b}^{**}_0))$ is weakly uniform on $P$ over $\overline{b}_0$. Therefore, as ${\mathrm{tp}}(\overline{c}^*_0/A)={\mathrm{tp}}(\overline{b}^{**}_0/A)$, $\underline{W}(x,\overline{c}^{*}_0)$ is weakly uniform on $P$ over $\overline{c}_0$. \mathbb{Q}ED
\end{dem}
\end{lem}
\begin{lem} \label{l:uniform definition finite}
Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets and $b\in Q$. Then, for any $\bigwedge_{A,b}${\hyp}definable subset $V\subseteq P$ there is an $\bigwedge_A${\hyp}definable set $W\subseteq Q\times P$ such that $V(c)=W(c)\coloneqq {\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_P[W\cap (\{c\}\times P)]$ for any $c\in {\mathrm{tp}}(b/A)$.
\begin{dem} Take $\Sigma(A^*,b^*,x)$ a uniform definition of $V$ on $P$ over $A$. Then, by Lemma \ref{l:substitution and uniform definitions}, $\Sigma(A^*,y,x)$ defines an $\bigwedge_A${\hyp}definable subset $W$ of $Q\times P$ such that $V(c)=W(c)\coloneqq {\mathop{\mbox{\normalfont{\large{\Fontauri{p}}}}}}_P[W\cap (\{c\}\times P)]$ for any $c\in{\mathrm{tp}}(b/A)$. \mathbb{Q}ED
\end{dem}
\end{lem}
\begin{lem} \label{l:equivalence relation of equality of types}
Let $P$ be an $A${\hyp}hyperdefinable set. Then,
\[\Delta_P(A)=\{(a,b)\in P\times P\mathrel{:} {\mathrm{tp}}(a/A)={\mathrm{tp}}(b/A)\}\]
is an $\bigwedge_{A}${\hyp}definable equivalence relation. Furthermore, it has a uniform definition $\underline{\Delta_P}(A^*)$ such that $\underline{\Delta_P}(A^*)\cap \mathrm{For}(\mathscr{L}(B^*))$ defines $\Delta_P(B)$ for any subset $B\subseteq A$ such that $P$ is $B${\hyp}hyperdefinable.
\end{lem}
\subsection{The logic topologies of hyperdefinable sets}
Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable set. The \emph{$A${\hyp}logic topology} of $P$ is the one given by taking as closed sets the $\bigwedge_A${\hyp}definable subsets of $P$. In particular, by Lemma \ref{l:infinite definable functions}, the $A${\hyp}logic topology of $P$ is the quotient topology of the $A${\hyp}logic topology of $X$.
\begin{prop}
Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets and $f:\ P\rightarrow Q$ an $\bigwedge_A${\hyp}definable function. Then, $f$ is a continuous and closed function between the $A${\hyp}logic topologies.
\begin{dem} By Lemma \ref{l:infinite definable functions}. \mathbb{Q}ED
\end{dem}
\end{prop}
\begin{prop}\label{p:topology hyperdefinables} Let $P$ be an $A${\hyp}hyperdefinable set. Then, the $A${\hyp}logic topology of $P$ is compact and normal (i.e. any two disjoint closed sets can be separated by open sets). The closure of a point $a\in P$ is ${\mathrm{tp}}(a/A)$, so the properties $\mathrm{T}_0$, $\mathrm{T}_1$ and $\mathrm{T}_2$ are equivalent to ${\mathrm{tp}}(a/A)=\{a\}$ for all $a\in P$.
\begin{dem} Compactness follows from saturation. For normality we should note that the image of a normal space by a continuous closed function is always normal. Therefore, using ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$, from normality of the $A^*${\hyp}logic topology of $X$ we conclude the normality of the $A^*${\hyp}logic topology of $P$. Using that $a\mapsto \rfrac{a}{\Delta_P(A)}$ is continuous and closed from the $A^*${\hyp}logic topology of $P$ to the $A${\hyp}logic topology of $\rfrac{P}{\Delta_P(A)}$, we get normality of the latter. From there, we trivially conclude normality of the $A${\hyp}logic topology of $P$.
Finally, as the closure of a point is its type by definition, $\mathrm{T}_1$ is equivalent to ${\mathrm{tp}}(a/A)=\{a\}$ for every $a\in P$. By Corollary \ref{c:types over hyperimaginaries}, $a\in{\mathrm{tp}}(b/A)$ if and only if ${\mathrm{tp}}(a/A)={\mathrm{tp}}(b/A)$, so $b\in \overline{\{a\}}$ if and only if $a\in \overline{\{b\}}$ --- this topological property is sometimes called $\mathrm{R}_0$. Therefore, $\mathrm{T}_0$ is also equivalent to ${\mathrm{tp}}(a/A)=\{a\}$ for every $a\in P$. By normality, $\mathrm{T}_1$ and $\mathrm{T}_2$ are equivalent. \mathbb{Q}ED
\end{dem}
\end{prop}
\begin{prop} \label{p:uniqueness of hausdorff logic topologies}
Let $P$ be an $A${\hyp}hyperdefinable set and $A\subseteq A'$. If the $A${\hyp}logic topology of $P$ is Hausdorff, then the $A${\hyp}logic topology and the $A'${\hyp}logic topology are equal. Thus, there is at most one Hausdorff logic topology and, if it exists, it will be called the \emph{global logic topology}.
\end{prop}
\begin{obs} Furthermore, the global logic topologies are preserved by expansion of the language. Indeed, let $P$ be a hyperdefinable set whose $A${\hyp}logic topology is Hausdorff and $\mathfrak{M}'$ a $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous expansion of $\mathfrak{M}$. Then, obviously, $\mathrm{id}:\ P\rightarrow P$ is a continuous bijection from the $A${\hyp}logic topology in $\mathfrak{M}'$ to the $A${\hyp}logic topology in $\mathfrak{M}$. As both are compact and the second one is Hausdorff, we conclude that both are the same.
\end{obs}
Let $P$ be an $A${\hyp}hyperdefinable set. Assuming that $\kappa> 2^{|A|+|\mathscr{L}|}$, either $|P|\geq \kappa$ or $|P|\leq 2^{|A|+|\mathscr{L}|}$. Then, $P$ has a global logic topology if and only if $P$ is small, if and only if $|P|\leq 2^{|A|+|\mathscr{L}|}$. Furthermore, write $\mathrm{bdd}(A)$ for the set of \emph{bounded hyperimaginaries over $A$}, i.e. hyperimaginaries $a$ over $A$ such that $|{\mathrm{tp}}(a/A)|<\kappa$. Note that $|\mathrm{bdd}(A)|\leq 2^{|A|+|\mathscr{L}|}$. It follows that an $A${\hyp}hyperdefinable set $P$ has a global logic topology if and only if the $\mathrm{bdd}(A)${\hyp}logic topology is the global logic topology.
\begin{prop} \label{p:product logic topology}
Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets. Then, the $A${\hyp}logic topology in $P\times Q$ is at least as fine as the product topology of the $A${\hyp}logic topologies. Furthermore, $P\times Q$ has a global logic topology if and only if $P$ and $Q$ do, in which case the global logic topology of $P\times Q$ is the product topology of the global logic topologies of $P$ and $Q$.
\end{prop}
\subsection{Piecewise hyperdefinable sets}
A \emph{piecewise $A${\hyp}hyperdefinable} set is a strict direct limit of $A${\hyp}hyperdefinable sets with $\bigwedge_A${\hyp}definable inclusions. In other words, a piecewise $A${\hyp}hyperdefinable set is a direct limit
\[P\coloneqq \underrightarrow{\lim}_{(I,\prec)}(P_i,\varphi_{ji})_{j\succeq i}=\rfrac{\bigsqcup_I P_i}{\sim_P}\]
of a direct system of $A${\hyp}hyperdefinable sets and $1${\hyp}to{\hyp}$1$ $\bigwedge_A${\hyp}definable functions $\varphi_{ji}:\ P_i\rightarrow P_j$.
Recall that $(I,\prec)$ is a directed ordered set and, for each $i,j,k\in I$ with $i\preceq j\preceq k$, $\varphi_{kj}\circ\varphi_{ji}=\varphi_{ki}$ and $\varphi_{ii}=\mathrm{id}$. Also, recall that $\sim_{P}$ is the equivalence relation on $\bigsqcup_I P_i$ defined by $x\sim_{P}y$ for $x\in P_i$ and $y\in P_j$ if and only if there is some $k\in I$ with $i\preceq k$ and $j\preceq k$ and $\varphi_{ki}(x)=\varphi_{kj}(y)$. Note that, in fact, as we consider only direct systems where the functions $\varphi_{ji}$ are $1${\hyp}to{\hyp}$1$, the equivalence relation is given by $x\sim_{P}y$ with $x\in P_i$ and $y\in P_j$ if and only if $\varphi_{ki}(x)=\varphi_{kj}(y)$ for every $k\in I$ such that $i\preceq k$ and $j\preceq k$.
The \emph{pieces} of $P$ are the subsets $\rfrac{P_i}{\sim_{P}}$. The \emph{canonical inclusions} are the maps ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}:\ P_i\rightarrow \rfrac{P_i}{\sim_P}\subseteq P$ given by $a\mapsto [a]_{\sim_P}$ for $a\in P_i$.
The \emph{cofinality} $\mathrm{cf}(P)$ of $P$ is the cofinality of $I$, which is the minimal ordinal $\alpha$ from which there is a function $f:\ \alpha\rightarrow I$ such that for every $i\in I$ there is $\xi\in \alpha$ with $i\preceq f(\xi)$. We say that $P$ is \emph{countably} piecewise hyperdefinable if it has countable cofinality. From now on, we always assume that $\mathrm{cf}(P)<\kappa$.
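To fix ideas, the following is a sketch of a standard illustration of these definitions (it is not taken from the text; the identification with $\mathbb{R}$ via the standard part map is the usual one, and we assume $\kappa>2^{\aleph_0}$):

```latex
% Illustrative sketch (standard example, not from the text):
% the real line as a countably piecewise hyperdefinable set.
Let $\mathfrak{M}$ be a $\kappa$-saturated real closed field extending
$\mathbb{R}$. For $n\in\mathbb{N}$, let $X_n=\{x\mathrel{:} |x|\leq n\}$ and
let $E$ be the $\bigwedge$-definable equivalence relation
\[ x\mathrel{E}y \quad\Longleftrightarrow\quad
   |x-y|\leq \tfrac{1}{m}\ \text{ for all } m\geq 1, \]
i.e. ``$x-y$ is infinitesimal''. Each $P_n\coloneqq\rfrac{X_n}{E_{\mid X_n}}$
is a hyperdefinable set, identified with $[-n,n]\subseteq\mathbb{R}$ via the
standard part map, and the inclusions $P_n\rightarrow P_{n+1}$ are
$1${\hyp}to{\hyp}$1$ and $\bigwedge$-definable. Hence
$P=\underrightarrow{\lim}\,P_n\cong\mathbb{R}$ is a countably piecewise
hyperdefinable set whose pieces are the intervals $[-n,n]$; over the
parameter set $\mathbb{R}$, the logic topology on each piece is the usual
euclidean topology.
```

In this sketch the cofinality of $P$ is $\omega$, so $P$ is countably piecewise hyperdefinable.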
A \emph{piecewise $\bigwedge_{A}${\hyp}definable} subset of $P$ is a subset $V\subseteq P$ such that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}^{\ -1}_{P_i}[V]$ is $\bigwedge_{A}${\hyp}definable in $P_i$ for each $i\in I$. An \emph{$\bigwedge_A${\hyp}definable} subset of $P$ is a piecewise $\bigwedge_A${\hyp}definable subset contained in some piece. If $V\subseteq P$ is a non{\hyp}empty $\bigwedge${\hyp}definable set, after fixing a piece $\rfrac{P_i}{\sim_P}$ containing it and declaring the parameters, we will write $\underline{V}$ to denote a partial type defining it in $P_i$. In that case, we will also say that $\underline{V}$ defines $V$. If $a\in P$ is an element, after fixing a piece $\rfrac{P_i}{\sim_P}$ containing it, we will say that $a^*$ is a representative of $a$ if ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}({\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i}(a^*))=a$, where ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i}$ is the quotient map of $P_i$ as hyperdefinable set.
\begin{obs} The set of piecewise $\bigwedge_A${\hyp}definable subsets is a lattice of sets closed under arbitrary intersections, and the collection of $\bigwedge_A${\hyp}definable subsets is the ideal of that lattice generated by the pieces.\end{obs}
Note that, in the previous definitions, piecewise hyperdefinable sets are not only sets but sets together with a particular structure. This structure is given by the lattices of piecewise $\bigwedge${\hyp}definable subsets and the ideals of $\bigwedge${\hyp}definable subsets. It is very important to remember this as the same set could be represented as a piecewise hyperdefinable set in several different ways --- see Example \ref{e:example 1}.
By the strictness condition we get the following fundamental lemma.
\begin{lem}[Correspondence Lemma]\label{l:correspondence lemma in pieces}
Let $P$ be piecewise $A${\hyp}hyperdefinable and $\rfrac{P_i}{\sim_P}$ a piece of $P$. Then, ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}:\ P_i\rightarrow \rfrac{P_i}{\sim_P}$ is a bijection. Furthermore, $V\subseteq \rfrac{P_i}{\sim_P}$ is $\bigwedge_A${\hyp}definable as subset of $P$ if and only if ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}^{\ -1}[V]$ is $\bigwedge_A${\hyp}definable as subset of $P_i$.
\begin{dem} Obviously ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}:\ P_i\rightarrow\rfrac{P_i}{\sim_P}$ is a bijection by the strictness condition. Say $V\subseteq \rfrac{P_i}{\sim_P}$. Using again the strictness condition, for any $j,k\in I$ with $i,j\preceq k$, we have that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}^{\ -1}_{P_j}[V]=\varphi_{kj}^{-1}[{\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}^{\ -1}_{P_k}[V]]=\varphi_{kj}^{-1}[\varphi_{ki}[{\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}^{\ -1}_{P_i}[V]]]$. Therefore, by $\bigwedge_A${\hyp}definability of the maps and Lemma \ref{l:infinite definable functions}, we conclude that $V$ is $\bigwedge_A${\hyp}definable if and only if ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}^{\ -1}[V]$ is $\bigwedge_A${\hyp}definable as subset of $P_i$. \mathbb{Q}ED
\end{dem}
\end{lem}
The Correspondence Lemma \ref{l:correspondence lemma in pieces} says that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}$ is a true identification between $P_i$ and $\rfrac{P_i}{\sim_P}$ in terms of the model{\hyp}theoretic structure. In other words, it says that pieces of piecewise hyperdefinable sets are, indeed, hyperdefinable. From now on, slightly abusing notation, we make no distinction between $P_i$ and $\rfrac{P_i}{\sim_P}$, i.e. $P_i\coloneqq\rfrac{P_i}{\sim_P}$ and ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i}\coloneqq {\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}\circ {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i}$.
\begin{coro} \label{c:infinite definable in piecewise hyperdefinable}
Let $P=\underrightarrow{\lim}_I P_i$ be piecewise $A${\hyp}hyperdefinable. Then, a subset $V\subseteq P$ is piecewise $\bigwedge_A${\hyp}definable if and only if $V\cap P_i$ is $\bigwedge_A${\hyp}definable as subset of $P$ for each $i\in I$.
\end{coro}
\begin{obs}\label{o:remark ideal infinite definable sets} As every piece is $\bigwedge${\hyp}definable, the lattice of piecewise $\bigwedge_A${\hyp}definable subsets can be recovered from the ideal of $\bigwedge_A${\hyp}definable subsets. In other words, the structure of $P$ is completely determined by the ideals of $\bigwedge${\hyp}definable sets. However, in general, the lattice of piecewise $\bigwedge_A${\hyp}definable subsets does not determine the ideal of $\bigwedge_A${\hyp}definable subsets --- see Example \ref{e:example 1}.
\end{obs}
A \emph{type over $A$, or $A${\hyp}type}, is a $\subset${\hyp}minimal non{\hyp}empty piecewise $\bigwedge_A${\hyp}definable subset. The \emph{type of $a\in P$ over $A$}, ${\mathrm{tp}}(a/A)$, is the minimal piecewise $\bigwedge_A${\hyp}definable subset of $P$ containing $a$. Since $a\in P_i$ for some $i\in I$ and pieces are $\bigwedge_A${\hyp}definable, ${\mathrm{tp}}(a/A)$ is actually $\bigwedge_A${\hyp}definable.
Let $R\subseteq P\times Q$ be a binary relation between two piecewise hyperdefinable sets. We say that $R$ is \emph{piecewise bounded (or piecewise continuous)} if the image of any piece of $P$ is contained in some piece of $Q$. We say that it is \emph{piecewise proper} if the preimage of any piece of $Q$ is contained in some piece of $P$. We use this terminology in particular for partial functions. To simplify the terminology, we often omit reiterative uses of ``piecewise'' when they happen. So, for example, we say ``a piecewise bounded and proper $\bigwedge${\hyp}definable function'' instead of ``a piecewise bounded and piecewise proper piecewise $\bigwedge${\hyp}definable function''.
Given two piecewise $A${\hyp}hyperdefinable sets $P=\underrightarrow{\lim}_{I}P_i$ and $Q=\underrightarrow{\lim}_{J}Q_j$, the Cartesian product $P\times Q$ is canonically identified with $\underrightarrow{\lim}_{I\times J}P_i\times Q_j$ via $([x]_{P},[y]_{Q})\mapsto [x,y]_{P\times Q}$, where $(x,y)\sim_{P\times Q}(x',y')$ if and only if $x\sim_Px'$ and $y\sim_Q y'$. Thus, we say that a binary relation $R$ between $P$ and $Q$ is piecewise $\bigwedge_A${\hyp}definable if it is so as subset of the Cartesian product.
\begin{lem} \label{l:types and piecewise infinite definable functions}
Let $P$ and $Q$ be piecewise $A${\hyp}hyperdefinable sets, $f:\ P\rightarrow Q$ a piecewise $\bigwedge_{A}${\hyp}definable function and $a\in P$. Then, $f[{\mathrm{tp}}(a/A)]={\mathrm{tp}}(f(a)/A)$.
\begin{dem} Clear from Lemma \ref{l:types and infinite definable functions}. \mathbb{Q}ED
\end{dem}
\end{lem}
\begin{prop} \label{p:piecewise infinite definable functions}
Let $P$ and $Q$ be two piecewise $A${\hyp}hyperdefinable sets and $f$ a piecewise $\bigwedge_A${\hyp}definable partial function from $P$ to $Q$. Then:
{\textnormal{\textbf{(1)}}} If $f$ is piecewise bounded, images of $\bigwedge_A${\hyp}definable sets are $\bigwedge_A${\hyp}definable, and preimages of piecewise $\bigwedge_A${\hyp}definable sets are piecewise $\bigwedge_A${\hyp}definable.
{\textnormal{\textbf{(2)}}} If $f$ is piecewise proper, images of piecewise $\bigwedge_A${\hyp}definable sets are piecewise $\bigwedge_A${\hyp}definable, and preimages of $\bigwedge_A${\hyp}definable sets are $\bigwedge_A${\hyp}definable.
\begin{dem} Both are quite similar, so let us show only {\textnormal{\textbf{(1)}}}. For $i\in I$ and $j\in J$ with $f[P_i]\subseteq Q_j$, consider $f_{ji}:\ P_i\rightarrow Q_j$ given by the restriction of $f$. As $f$ is piecewise $\bigwedge_A${\hyp}definable, each $f_{ji}$ is $\bigwedge_A${\hyp}definable. Given an $\bigwedge_A${\hyp}definable subset $V\subseteq P_i$, we have that $f[V]=f_{ji}[V]$, concluding that the image of $V$ is $\bigwedge_A${\hyp}definable in $Q$ by Lemma \ref{l:infinite definable functions} and the Correspondence Lemma \ref{l:correspondence lemma in pieces}. On the other hand, given a piecewise $\bigwedge_A${\hyp}definable subset $W\subseteq Q$, we have that $f^{-1}[W]\cap P_i=f^{-1}_{ji}[W\cap Q_j]$, so $f^{-1}[W]$ is piecewise $\bigwedge_A${\hyp}definable in $P$ by Lemma \ref{l:infinite definable functions}. \mathbb{Q}ED
\end{dem}
\end{prop}
An \emph{isomorphism of piecewise $A${\hyp}hyperdefinable sets} is a piecewise bounded and proper $\bigwedge_A${\hyp}definable bijection. In that case, we will say that $P$ and $Q$ are \emph{isomorphic over $A$}.
\subsection{The logic topologies of piecewise hyperdefinable sets}
The \emph{$A${\hyp}logic topology} of $P$ is the respective direct limit topology. In other words, a subset of $P$ is closed if and only if it is piecewise $\bigwedge_A${\hyp}definable. By the Correspondence Lemma \ref{l:correspondence lemma in pieces}, each piece is compact and, further, every $\bigwedge_A${\hyp}definable subset is compact.
As in the case of hyperdefinable sets, $\overline{\{a\}}={\mathrm{tp}}(a/A)$ for any $a\in P$. Thus, the properties $\mathrm{T}_0$ and $\mathrm{T}_1$ are equivalent. If the $A${\hyp}logic topology is $\mathrm{T}_1$, for any other small set of hyperimaginary parameters $A'$ containing $A$, the logic topologies over $A$ and over $A'$ are the same. Thus, there is at most one $\mathrm{T}_1$ logic topology on $P$ and, if it exists, it is called the \emph{global logic topology}. It follows that $P$ has a global logic topology if and only if every piece has size at most $2^{|A|+|\mathscr{L}|}$, if and only if the $\mathrm{bdd}(A)${\hyp}logic topology is the global logic topology. In particular, if $P$ has a global logic topology, then $|P|\leq 2^{|A|+|\mathscr{L}|}+\mathrm{cf}(P)$. Thus, assuming $2^{|A|+|\mathscr{L}|}+\mathrm{cf}(P)<\kappa$, we conclude that $P$ has a global logic topology if and only if it is small.
There are still some topological properties that we want to extend from hyperdefinable sets to piecewise hyperdefinable sets. Ideally, we would like to show that these topologies are locally compact, normal and satisfy $\mathrm{T}_1\Leftrightarrow \mathrm{T}_2$. Also, we would like to show that they behave well under taking finite products. In general, these properties may fail --- see Example \ref{e:example 4}, Example \ref{e:example 6} and Example \ref{e:example no topological group} for some counterexamples. The rest of the subsection is dedicated to giving natural sufficient conditions for these properties.
\
In general, for a topological space $X$, a \emph{covering} of $X$ is a set $\mathcal{C}\subseteq \mathcal{P}(X)$ such that $\bigcup\mathcal{C}=X$, whose elements are called \emph{pieces}. A covering $\mathcal{C}$ is \emph{coherent} when, for every $U\subseteq X$, $U$ is open in $X$ if and only if $U\cap P$ is open in $P$ for each $P\in\mathcal{C}$. Equivalently, $\mathcal{C}$ is coherent when, for every $V\subseteq X$, $V$ is closed in $X$ if and only if $V\cap P$ is closed in $P$ for each $P\in\mathcal{C}$. For example, $X$ is \emph{compactly generated} if the family of all the compact subsets is a coherent covering.
We say that a covering is \emph{local} if for every point of $X$ there is a piece that is a neighbourhood of it. The following topological results are straightforward:
\begin{lem}[Local coverings] \label{l:local coherent}
Let $X$ be a topological space and $\mathcal{C}$ a local covering. Then, $\mathcal{C}$ is coherent.
\end{lem}
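For contrast, here is a sketch of a standard topological example (not taken from the text) of a covering by closed pieces that is neither local nor coherent:

```latex
% Standard counterexample: a non-local covering need not be coherent.
Let $X=[0,1]$ with the euclidean topology and consider the covering
\[ \mathcal{C}=\{\{0\}\}\cup
   \{[\tfrac{1}{n+1},\tfrac{1}{n}]\mathrel{:} n\geq 1\} \]
by compact pieces. No piece is a neighbourhood of $0$, so $\mathcal{C}$ is
not local. It is not coherent either: $V=\{\tfrac{1}{n}\mathrel{:} n\geq 1\}$
meets every piece in a finite, hence closed, set, yet $V$ itself is not
closed in $X$, as $0\in\overline{V}\setminus V$.
```

By Lemma \ref{l:local coherent}, this failure cannot occur for local coverings.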
\begin{lem} \label{l:lemma normal coverings}
Let $X$ be a topological space and $\{P_n\}_{n\in\mathbb{N}}$ a closed coherent covering with $P_n\subseteq P_{n+1}$ for each $n\in\mathbb{N}$. Suppose that $P_n$ is normal for each $n\in\mathbb{N}$. Then, $X$ is normal.
\begin{dem} Let $V$ and $W$ be closed disjoint subsets of $X$. Take the continuous map $g:\ V\sqcup W\rightarrow\{-1,1\}$ given by $g_{\mid V}=1$ and $g_{\mid W}=-1$. By recursion, using Tietze's Theorem \cite[Theorem 35.1]{munkres1999topology}, we construct a chain $(f_n)_{n\in\mathbb{N}}$ of continuous maps $f_n:\ P_n\rightarrow [-1,1]$, each one extending $f_{n-1}$ and $g_{\mid (V\sqcup W)\cap P_n}$. Taking $f=\bigcup f_n$, which is continuous by coherence of the covering, we get a continuous map separating $V$ and $W$. \mathbb{Q}ED
\end{dem}
\end{lem}
In the case of a piecewise hyperdefinable set $P$, we have a directed closed and compact coherent covering $\{P_i\}_{i\in I}$. In particular, it follows that logic topologies are always compactly generated. If $P$ is countably piecewise hyperdefinable, then it is $\sigma${\hyp}compact (i.e. a countable union of compact sets). Furthermore, by Lemma \ref{l:lemma normal coverings}, we get normality:
\begin{prop} \label{p:normal countably piecewise hyperdefinable} Let $P$ be a countably piecewise $A${\hyp}hyperdefinable set. Then, $P$ is normal with the $A${\hyp}logic topology. In particular, global logic topologies of countably piecewise hyperdefinable sets are Hausdorff.
\end{prop}
Say that a piecewise hyperdefinable set $P$ is \emph{locally $A${\hyp}hyperdefinable} if its covering is local in the $A${\hyp}logic topology, i.e. if for every point of $P$ there is an $\bigwedge_A${\hyp}definable set which is a neighbourhood of it. Say that $P$ is locally hyperdefinable if it is so for some small set of parameters.
\begin{obs} \emph{Piecewise definable} sets are the special case of piecewise hyperdefinable sets when we only consider strict direct limits of definable sets with definable maps. Definable sets are always open in the logic topology so, following the terminology of this paper, every piecewise definable set is trivially locally (hyper)definable. The distinction between these two notions only appears in the general context of piecewise hyperdefinable sets. In particular, note that our terminology is consistent with the typical use of the terms ``piecewise definable'' and ``locally definable'' as synonyms in the literature.
\end{obs}
\begin{prop} \label{p:mapping locally hyperdefinable sets}
Let $P$ and $Q$ be piecewise $A${\hyp}hyperdefinable sets. Assume that $P$ is locally $A${\hyp}hyperdefinable. Let $f:\ P\rightarrow Q$ be a piecewise bounded and proper $\bigwedge_A${\hyp}definable onto map. Then, $Q$ is locally $A${\hyp}hyperdefinable.
\begin{dem} Pick $y\in Q$. By Proposition \ref{p:piecewise infinite definable functions}, $f^{-1}[{\mathrm{tp}}(y/A)]$ is $\bigwedge_A${\hyp}definable, so compact in the $A${\hyp}logic topology of $P$. As $P$ is locally $A${\hyp}hyperdefinable, covering this compact set by finitely many $\bigwedge_A${\hyp}definable neighbourhoods, we find an open subset $U$ in the $A${\hyp}logic topology and an $\bigwedge_A${\hyp}definable set $V$ such that $f^{-1}[{\mathrm{tp}}(y/A)]\subseteq U\subseteq V$; in particular, $P\setminus U$ is piecewise $\bigwedge_A${\hyp}definable. By Proposition \ref{p:piecewise infinite definable functions}, $f[V]$ is $\bigwedge_A${\hyp}definable, and $f[P\setminus U]$ is piecewise $\bigwedge_A${\hyp}definable. As $f^{-1}[{\mathrm{tp}}(y/A)]\subseteq U$, ${\mathrm{tp}}(y/A)\cap f[P\setminus U]=\emptyset$. Also, as $f$ is onto, $Q=f[V\cup (P\setminus U)]= f[V]\cup f[P\setminus U]$. Therefore, $y\in {\mathrm{tp}}(y/A)\subseteq Q\setminus f[P\setminus U]\subseteq f[V]$, concluding that $f[V]$ is an $\bigwedge_A${\hyp}definable neighbourhood of $y$ in the $A${\hyp}logic topology. As $y$ is arbitrary, we conclude that $Q$ is locally $A${\hyp}hyperdefinable. \mathbb{Q}ED
\end{dem}
\end{prop}
For locally hyperdefinable sets, we have very good control of the logic topology.
\begin{prop}\label{p:locally compact locally hyperdefinable}
Let $P$ be a locally $A${\hyp}hyperdefinable set. Then, $P$ is locally closed compact in the $A${\hyp}logic topology, i.e. every point has a local base of closed compact neighbourhoods.
\begin{dem} Say $P=\underrightarrow{\lim}\, P_i$. Pick $x\in P$ and $U$ open neighbourhood of $x$ in the $A${\hyp}logic topology. Take a piece $P_i$ and $U_0$ such that $x\in U_0\subseteq P_i$ with $U_0$ open in the $A${\hyp}logic topology of $P$. Then, $U_1\coloneqq U\cap U_0$ is an open neighbourhood of $x$ in the $A${\hyp}logic topology of $P$. Note that ${\mathrm{tp}}(x/A)\subseteq U_1\subseteq P_i$, so ${\mathrm{tp}}(x/A)$ and $P_i\setminus U_1$ are disjoint closed subsets of $P_i$. By Proposition \ref{p:topology hyperdefinables}, $P_i$ is normal, so there are ${\mathrm{tp}}(x/A)\subseteq U'$ and $P_i\setminus U_1\subseteq P_i\setminus V'$ with $U'\cap (P_i\setminus V')=\emptyset$ such that $P_i\setminus U'$ and $V'$ are $\bigwedge_A${\hyp}definable in $P_i$. Therefore, $x\in{\mathrm{tp}}(x/A)\subseteq U'\subseteq V'\subseteq U_1\subseteq P_i$. Now, since $U_1$ is open in the $A${\hyp}logic topology of $P$ and $U'\subseteq U_1$ is open in the subspace topology of $U_1$, we conclude that $U'$ is open in the $A${\hyp}logic topology of $P$. Hence, $V'$ is an $\bigwedge_A${\hyp}definable neighbourhood of $x$ in the $A${\hyp}logic topology contained in $U$. As $x$ and $U$ are arbitrary, we conclude that $P$ is locally closed compact. \mathbb{Q}ED
\end{dem}
\end{prop}
\begin{prop} \label{p:compact locally hyperdefinable}
Let $P$ be a locally $A${\hyp}hyperdefinable set. Then, every compact subset of $P$ in the $A${\hyp}logic topology is contained in the interior of some piece.
\begin{dem} Clear from the definition. \mathbb{Q}ED
\end{dem}
\end{prop}
\begin{prop} \label{p:hausdorff locally hyperdefinable}
Let $P$ be a locally $A${\hyp}hyperdefinable set. Then, any two closed compact disjoint subsets in the $A${\hyp}logic topology are separated by open sets. In particular, it is $\mathrm{T}_1$ if and only if it is $\mathrm{T}_2$.
\begin{dem} Say $P=\underrightarrow{\lim}\, P_i$. Let $K_1$ and $K_2$ be two disjoint closed compact subsets of $P$ in the $A${\hyp}logic topology. By Proposition \ref{p:compact locally hyperdefinable}, there is a piece $P_i$ and an open subset $U$ of $P$ such that $K_1,K_2\subseteq U\subseteq P_i$. By normality of $P_i$, there are disjoint open subsets $U_1$ and $U_2$ in the $A${\hyp}logic topology of $P_i$ such that $K_1\subseteq U_1$ and $K_2\subseteq U_2$. Thus, $U\cap U_1$ and $U\cap U_2$ are disjoint open subsets in the subspace topology of $U$ separating $K_1$ and $K_2$. As $U$ is open in the $A${\hyp}logic topology of $P$, we conclude that $U\cap U_1$ and $U\cap U_2$ are disjoint open subsets in the $A${\hyp}logic topology of $P$ separating $K_1$ and $K_2$. \mathbb{Q}ED
\end{dem}
\end{prop}
Recall that a function between topological spaces is \emph{proper} if the preimage of every compact set is compact. For instance, every closed function with compact fibres is proper \cite[Theorem 3.7.2]{engelking1989general}.
\begin{prop} \label{p:functions logic topology}
Let $P$ and $Q$ be piecewise $A${\hyp}hyperdefinable sets and $f:\ P\rightarrow Q$ a piecewise $\bigwedge_A${\hyp}definable function.
{\textnormal{\textbf{(1)}}} If $f$ is piecewise bounded, then $f$ is continuous between the $A${\hyp}logic topologies.
{\textnormal{\textbf{(2)}}} If $f$ is piecewise proper, then $f$ is closed and has compact fibres between the $A${\hyp}logic topologies. In particular, it is proper.
{\textnormal{\textbf{(3)}}} If $f$ is an isomorphism of piecewise $A${\hyp}hyperdefinable sets, then $f$ is a homeomorphism between the $A${\hyp}logic topologies.
{\textnormal{\textbf{(4)}}} If $Q$ is locally $A${\hyp}hyperdefinable, then $f$ is continuous between the $A${\hyp}logic topologies if and only if it is piecewise bounded.
{\textnormal{\textbf{(5)}}} If $P$ is locally $A${\hyp}hyperdefinable, then $f$ is proper between the $A${\hyp}logic topologies if and only if it is piecewise proper.
{\textnormal{\textbf{(6)}}} If $P$ and $Q$ are locally $A${\hyp}hyperdefinable, then $f$ is an isomorphism of piecewise $A${\hyp}hyperdefinable sets if and only if it is a homeomorphism between the $A${\hyp}logic topologies.
\begin{dem} Point \textbf{(1)} is given by Proposition \ref{p:piecewise infinite definable functions}(1).
Point \textbf{(2)}: closedness is given by Proposition \ref{p:piecewise infinite definable functions}(2). On the other hand, for any point $a\in Q$, $f^{-1}(a)\subseteq f^{-1}[{\mathrm{tp}}(a/A)]$ and $f^{-1}[{\mathrm{tp}}(a/A)]$ is $\bigwedge_A${\hyp}definable, so compact. As $f$ is $A${\hyp}invariant and ${\mathrm{tp}}(a/A)$ is the orbit of $a$ under $\mathrm{Aut}(\mathfrak{M}/A)$ by Corollary \ref{c:types over hyperimaginaries}, it follows that $f^{-1}[{\mathrm{tp}}(a/A)]$ is the orbit of $f^{-1}(a)$ under $\mathrm{Aut}(\mathfrak{M}/A)$, i.e. the smallest $A${\hyp}invariant set containing $f^{-1}(a)$. Consequently, any open covering of $f^{-1}(a)$ in the $A${\hyp}logic topology is also a covering of $f^{-1}[{\mathrm{tp}}(a/A)]$, so $f^{-1}(a)$ is compact too.
Point \textbf{(3)} is given by points \textbf{(1)} and \textbf{(2)}. Point \textbf{(4)} is given by \textbf{(1)} and Proposition \ref{p:compact locally hyperdefinable}. Point \textbf{(5)} is given by \textbf{(2)} and Proposition \ref{p:compact locally hyperdefinable}. Point \textbf{(6)} is given by points \textbf{(4)} and \textbf{(5)}.\mathbb{Q}ED
\end{dem}
\end{prop}
\begin{prop} \label{p:functions logic topology 2}
Let $P$ be a piecewise $A${\hyp}hyperdefinable set and $Q$ a locally $A${\hyp}hyperdefinable set whose $A${\hyp}logic topology is its global logic topology. Let $f:\ P\rightarrow Q$ be a function. Then, $f$ is continuous between the $A${\hyp}logic topologies if and only if $f$ is a piecewise bounded $\bigwedge_A${\hyp}definable function.
\begin{dem} By the Closed Graph Theorem \cite[Exercise 8, Section 26]{munkres1999topology}, Proposition \ref{p:functions logic topology} and Proposition \ref{p:compact locally hyperdefinable}. \mathbb{Q}ED
\end{dem}
\end{prop}
By Proposition \ref{p:functions logic topology}(1), Cartesian projection maps are continuous between the logic topologies. Therefore, the $A${\hyp}logic topology of a finite product of piecewise $A${\hyp}hyperdefinable sets is at least as fine as the product topology of the $A${\hyp}logic topologies. In the case of locally hyperdefinable sets with global logic topologies, the two topologies coincide.
\begin{prop} \label{p:product of locally hyperdefinable sets}
Let $P$ and $Q$ be locally hyperdefinable sets with global logic topologies. Then, $P\times Q$ is a locally hyperdefinable set and its product topology is its global logic topology.
\begin{dem} It suffices to note that, for any two topological spaces $X$ and $Y$ with respective local coverings $\mathcal{C}$ and $\mathcal{D}$, $\mathcal{C}\times \mathcal{D}\coloneqq \{P\times Q\mathrel{:} P\in\mathcal{C},\ Q\in\mathcal{D}\}$ is a local covering of the product topology. \mathbb{Q}ED
\end{dem}
\end{prop}
Similarly, in the case of countably piecewise hyperdefinable sets with global logic topologies, we also conclude that they coincide.
\begin{prop} \label{p:product of countably piecewise hyperdefinable sets} Let $P$ and $Q$ be two countably piecewise hyperdefinable sets with global logic topologies. Then, $P\times Q$ is a countably piecewise hyperdefinable set and its product topology is its global logic topology.
\begin{dem} Say $P=\underrightarrow{\lim}\, P_n$ and $Q=\underrightarrow{\lim}\, Q_n$. Let $\Gamma\subseteq P\times Q$ be closed in the global logic topology and $(a,b)\notin \Gamma$ arbitrary; re{\hyp}indexing if necessary, we may assume $a\in P_0$ and $b\in Q_0$. We recursively define a sequence $(U_n,V_n)_{n\in\mathbb{N}}$ such that $a\in U_0$, $b\in V_0$, $U_n\subseteq U_{n+1}$, $V_n\subseteq V_{n+1}$, $U_n\subseteq P_n$ is open in $P_n$, $V_n\subseteq Q_n$ is open in $Q_n$ and $(\overline{U}_n\times \overline{V}_n)\cap \Gamma=\emptyset$.
For any $n\in\mathbb{N}$, note that the product topology and the global logic topology in $P_n\times Q_n$ coincide. As $(a,b)$ and $\Gamma\cap (P_0\times Q_0)$ are disjoint and closed, by normality in $P_0\times Q_0$, we can find $U_0\subseteq P_0$ open in $P_0$ and $V_0\subseteq Q_0$ open in $Q_0$ such that $(a,b)\in U_0\times V_0$ and $(\overline{U}_0\times\overline{V}_0)\cap \Gamma=\emptyset$. Now, suppose $U_n$ and $V_n$ are defined. By normality of $P_{n+1}\times Q_{n+1}$, for any $(x,y)\in \overline{U}_n\times \overline{V}_n$, we can find $U^{xy}_{n+1}\subseteq P_{n+1}$ open in $P_{n+1}$ and $V^{xy}_{n+1}\subseteq Q_{n+1}$ open in $Q_{n+1}$ such that $x\in U^{xy}_{n+1}$, $y\in V^{xy}_{n+1}$ and $(\overline{U}^{xy}_{n+1}\times \overline{V}^{xy}_{n+1})\cap \Gamma=\emptyset$. As $\overline{V}_n$ is compact, for each $x\in \overline{U}_n$ there is $F_x\subseteq \overline{V}_n$ finite such that $\overline{V}_n\subseteq \bigcup_{y\in F_x}V^{xy}_{n+1}$. Take $V^x_{n+1}=\bigcup_{y\in F_x}V^{xy}_{n+1}$ and $U^x_{n+1}=\bigcap_{y\in F_x}U^{xy}_{n+1}$. Then, $U^x_{n+1}\subseteq P_{n+1}$ is open in $P_{n+1}$, $V^x_{n+1}\subseteq Q_{n+1}$ is open in $Q_{n+1}$, $x\in U^x_{n+1}$, $\overline{V}_n\subseteq V^x_{n+1}$ and $(\overline{U}^x_{n+1}\times \overline{V}^x_{n+1})\cap \Gamma=\emptyset$. As $\overline{U}_n$ is compact, there is $F\subseteq \overline{U}_n$ finite such that $\overline{U}_n\subseteq \bigcup_{x\in F}U^x_{n+1}$. Take $U_{n+1}=\bigcup_{x\in F}U^x_{n+1}$ and $V_{n+1}=\bigcap_{x\in F} V^x_{n+1}$. Then, $U_{n+1}\subseteq P_{n+1}$ is open in $P_{n+1}$, $V_{n+1}\subseteq Q_{n+1}$ is open in $Q_{n+1}$, $\overline{U}_n\subseteq U_{n+1}$, $\overline{V}_n\subseteq V_{n+1}$ and $(\overline{U}_{n+1}\times \overline{V}_{n+1})\cap\Gamma=\emptyset$.
Let $U=\bigcup_{n\in\mathbb{N}} U_n$ and $V=\bigcup_{n\in\mathbb{N}} V_n$. For $n\in\mathbb{N}$, we have $U\cap P_n=\bigcup_{m\geq n} (U_m\cap P_n)$ and $V\cap Q_n=\bigcup_{m\geq n}(V_m\cap Q_n)$. As $P_n\subseteq P_m$ and $Q_n\subseteq Q_m$ for $m>n$, we get that $U_m\cap P_n$ is open in $P_n$ and $V_m\cap Q_n$ is open in $Q_n$, so $U\cap P_n$ is open in $P_n$ and $V\cap Q_n$ is open in $Q_n$ for each $n\in\mathbb{N}$. Therefore, $U\times V$ is open in the product topology and $(a,b)\in U\times V$ with $(U\times V)\cap \Gamma=\emptyset$. As $(a,b)\notin \Gamma$ is arbitrary, $\Gamma$ is closed in the product topology. As $\Gamma$ is arbitrary, we conclude that the global logic topology in $P\times Q$ is the same as the product topology. \mathbb{Q}ED
\end{dem}
\end{prop}
Let $P=\underrightarrow{\lim}\, P_i$ be a piecewise $A${\hyp}hyperdefinable set and $V\subseteq P$ a non{\hyp}empty piecewise $\bigwedge_A${\hyp}definable subset. Note that the subspace topology of $V$ inherited from the $A${\hyp}logic topology of $P$ is the $A${\hyp}logic topology of $V$ given as the piecewise $A${\hyp}hyperdefinable set $\underrightarrow{\lim}\ V\cap P_i$. We conclude by showing that local hyperdefinability is hereditary.
\begin{prop} \label{p:subspace locally hyperdefinable}
Let $P=\underrightarrow{\lim}\, P_i$ be a locally $A${\hyp}hyperdefinable set and $V\subseteq P$ a piecewise $\bigwedge_A${\hyp}definable subset. Then, $V$ with the induced piecewise hyperdefinable substructure $\underrightarrow{\lim}\ V\cap P_i$ is a locally $A${\hyp}hyperdefinable set.
\begin{dem} It suffices to note that, for any topological space $X$ with local covering $\mathcal{C}$ and any $Y\subseteq X$, $\mathcal{C}_{\mid Y}=\{C\cap Y\mathrel{:} C\in\mathcal{C}\}$ is a local covering of the subspace topology. \mathbb{Q}ED
\end{dem}
\end{prop}
\subsection{Spaces of types}
Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable set and $F\subseteq P\times P$ an $\bigwedge_A${\hyp}definable equivalence relation in $P$. Then, $\rfrac{P}{F}$ is actually identified with the $A${\hyp}hyperdefinable set $\rfrac{X}{{\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{P\times P}[F]}$ via the canonical bijection
\[\eta:\ [x]_{{\mathop{\mbox{\normalfont{\small{\Fontauri{q}}}}}}^{-1}_{P\times P}[F]}\mapsto [[x]_P]_F.\]
Furthermore, by the definition of the quotient topologies, this map is a homeomorphism.
Let $P=\underrightarrow{\lim}_{(I,\prec)}(P_i,\varphi_{ji})_{j\succeq i}$ be a piecewise $A${\hyp}hyperdefinable set. Write $P_i=\rfrac{X_i}{E_i}$. Let $F\subseteq P\times P$ be a piecewise $\bigwedge_A${\hyp}definable equivalence relation. Write $F_i\coloneqq F_{\mid P_i\times P_i}$. Define $\widetilde{\varphi}_{ji}:\ \rfrac{P_i}{F_i}\rightarrow \rfrac{P_j}{F_j}$ by $\widetilde{\varphi}_{ji}([x]_{F_i})=[\varphi_{ji}(x)]_{F_j}$. Clearly, these are well{\hyp}defined, $\bigwedge_A${\hyp}definable, $1${\hyp}to{\hyp}$1$ functions satisfying $\widetilde{\varphi}_{ii}=\mathrm{id}$ and $\widetilde{\varphi}_{ki}=\widetilde{\varphi}_{kj}\circ\widetilde{\varphi}_{ji}$. Then, $\rfrac{P}{F}$ canonically carries the piecewise $A${\hyp}hyperdefinable structure given by
\[\rfrac{P}{F}\coloneqq \underrightarrow{\lim}_{(I,\prec)}(\rfrac{P_i}{F_i},\widetilde{\varphi}_{ji})_{j\succeq i},\]
via the map $\eta:\ [{\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i}(x)]_F\mapsto {\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{P_i/F_i}([x]_{F_i})$ for $i$ such that $x\in P_i$. Furthermore, topologically, by the definitions of the quotient and the direct limit topologies, this bijection is clearly a homeomorphism.
Let $P=\rfrac{X}{E}$ be an $A^*${\hyp}hyperdefinable set. Consider the space of types $\mathbf{S}_X(A^*)=\{{\mathrm{tp}}(x/A^*)\mathrel{:} x\in X\}$ with the usual topology. We define the equivalence relation $\sim_E$ in $\mathbf{S}_X(A^*)$ as $p\sim_Eq$ if and only if there are $E${\hyp}equivalent realisations of $p(x)$ and $q(y)$. The \emph{space of types} of $P$ over $A^*$ is the space $\mathbf{S}_P(A^*)\coloneqq \rfrac{\mathbf{S}_X(A^*)}{\sim_E}$ with the quotient topology.
On the other hand, by Lemma \ref{l:equivalence relation of equality of types}, $\rfrac{P}{\Delta_P(A^*)}=\{{\mathrm{tp}}(a/A^*)\mathrel{:} a\in P\}$ is an $A^*${\hyp}hyperdefinable set and has its $A^*${\hyp}logic topology.
\begin{prop} \label{p:space of types}
Let $P=\rfrac{X}{E}$ be an $A^*${\hyp}hyperdefinable set. Then, $\rfrac{P}{\Delta_P(A^*)}$ and $\mathbf{S}_P(A^*)$ are homeomorphic.
\begin{dem}
Consider the map
\[\begin{array}{lccc} f:& \mathbf{S}_{P}(A^*)&\rightarrow & \rfrac{P}{\Delta_P(A^*)}\\ & \left[{\mathrm{tp}}(a^*/A^*)\right]_{\sim_{E}}&\mapsto& {\mathrm{tp}}([a^*]_{E}/A^*).\end{array}\]
By saturation, it is well{\hyp}defined. Clearly, it is onto. It is $1${\hyp}to{\hyp}$1$ by Lemma \ref{l:types of hyperimaginaries}. By the definition of the quotient topology, it is clear that $f$ is continuous. As $\mathbf{S}_{X}(A^*)$ is a compact topological space, $\mathbf{S}_{P}(A^*)$ is also compact. On the other hand, $\rfrac{P}{\Delta_P(A^*)}$ is a Hausdorff space. Then, as the domain is compact, the image is Hausdorff and $f$ is a continuous bijection, we conclude that $f$ is a homeomorphism. \mathbb{Q}ED
\end{dem}
\end{prop}
Therefore, for hyperimaginary parameters, we define the space of types of $P$ over $A$, $\mathbf{S}_P(A)$, as $\rfrac{P}{\Delta_P(A)}$ with its global logic topology.
\begin{obs} By Lemma \ref{l:infinite definable functions} and Lemma \ref{l:types and infinite definable functions}, any $\bigwedge_A${\hyp}definable function $f:\ P\rightarrow Q$ induces a continuous closed map between the spaces of types.
\end{obs}
Similarly, let $P=\underrightarrow{\lim}_{I}P_i$ be a piecewise $A^*${\hyp}hyperdefinable set. For each $i\in I$, we have the space of types $\mathbf{S}_{P_i}(A^*)=\{{\mathrm{tp}}(a/A^*)\mathrel{:} a\in P_i\}$. Given $i,j\in I$ with $i\preceq j$, the map $\varphi_{ji}:\ P_i\rightarrow P_j$ induces a continuous and closed $1${\hyp}to{\hyp}$1$ function ${\mathrm{Im}}\ \varphi_{ji}:\ \mathbf{S}_{P_i}(A^*)\rightarrow \mathbf{S}_{P_j}(A^*)$ given by ${\mathrm{tp}}(a/A^*)\mapsto {\mathrm{tp}}(\varphi_{ji}(a)/A^*)$. Clearly, ${\mathrm{Im}}\ \varphi_{kj}\circ{\mathrm{Im}}\ \varphi_{ji}={\mathrm{Im}}\ \varphi_{ki}$ and ${\mathrm{Im}}\ \varphi_{ii}=\mathrm{id}$. Therefore, we have a topological direct sequence $(\mathbf{S}_{P_i}(A^*),{\mathrm{Im}}\ \varphi_{ji})_{j\succeq i}$. The \emph{space of types} of $P$ over $A^*$ is then the direct limit topological space
\[\mathbf{S}_{P}(A^*)\coloneqq \underrightarrow{\lim}_I\mathbf{S}_{P_i}(A^*).\]
By Proposition \ref{p:space of types}, we have
\[\rfrac{P}{\Delta_{P}(A^*)}=\underrightarrow{\lim}\,\rfrac{P_i}{\Delta_{P_i}(A^*)}\cong\underrightarrow{\lim}\,\mathbf{S}_{P_i}(A^*)=\mathbf{S}_{P}(A^*).\]
Therefore, for a piecewise hyperdefinable set $P$, as in the case of hyperdefinable sets, we also define the \emph{space of types} of $P$ over hyperimaginary parameters $A$, $\mathbf{S}_P(A)$, as $\rfrac{P}{\Delta_{P}(A)}$ with its global logic topology. Moreover, we say that a piecewise hyperdefinable set $S$ is a \emph{space of types} if it has a global logic topology.
Let $X$ be a topological space. Recall that two points are \emph{topologically indistinguishable} if they have the same neighbourhoods. A \emph{Kolmogorov map} is a surjective, continuous and closed map $k:\ X\rightarrow Y$ between topological spaces such that $k(a)$ and $k(b)$ are topologically indistinguishable if and only if $a$ and $b$ are topologically indistinguishable. We will use the following basic characterisation --- see \cite{pirttimaki2019survey} for more details:
\begin{lem} \label{l:kolmogorov maps}
Let $X$ and $Y$ be topological spaces and $k:\ X\rightarrow Y$ a function. Then, the following are equivalent:
{\textnormal{\textbf{(1)}}} $k$ is a Kolmogorov map.
{\textnormal{\textbf{(2)}}} ${\mathrm{Im}}\, k$ is a lattice isomorphism between the topologies with inverse ${\mathrm{Im}}^{-1}k$.
\end{lem}
\begin{obs} When $Y$ is $\mathrm{T}_0$, we call a Kolmogorov map $k:\ X\rightarrow Y$ the \emph{Kolmogorov quotient} of $X$. In that case, up to a homeomorphism, $Y$ is the quotient space $\rfrac{X}{\sim}$, where $\sim$ is the topologically indistinguishable equivalence relation, and $k$ is the respective quotient map.
\end{obs}
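As an illustration of the remark above (on an ad hoc four-point space, not an object from the text), the Kolmogorov quotient of a finite topological space can be computed directly by grouping points with the same open neighbourhoods:

```python
# A four-point space in which b and c have exactly the same open neighbourhoods.
X = {"a", "b", "c", "d"}
opens = [set(), {"a"}, {"b", "c"}, {"a", "b", "c"}, X]

def nbhd_filter(x):
    """Indices of the open sets containing x."""
    return frozenset(i for i, U in enumerate(opens) if x in U)

# Group points by topological indistinguishability.
classes = {}
for x in X:
    classes.setdefault(nbhd_filter(x), set()).add(x)

quotient = list(classes.values())
print(sorted(sorted(c) for c in quotient))  # b and c collapse to a single point
```

The quotient space here is $\mathrm{T}_0$ by construction, since distinct classes have distinct neighbourhood filters.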
As discussed in the introduction, it is useful to note that the quotient map ${\mathrm{tp}}_A:\ P\rightarrow \mathbf{S}_{P}(A)$ given by ${\mathrm{tp}}_A:\ a\mapsto {\mathrm{tp}}(a/A)=\rfrac{a}{\Delta_P(A)}$ is the Kolmogorov quotient between the $A${\hyp}logic topologies. By definition, it also satisfies that ${\mathrm{tp}}^{-1}_A[{\mathrm{tp}}_A[V]]=V$ for any $A${\hyp}invariant set $V$. Thus, when studying $P$, it is typically possible to transfer the discussion to $\mathbf{S}_P(A)$ via ${\mathrm{tp}}_A$, argue there, and then lift the conclusions back to $P$ via ${\mathrm{tp}}^{-1}_A$. Following this procedure, one can usually assume without loss of generality that $P$ is $\mathrm{T}_0$. This technique will be illustrated in the following subsection --- see Theorem \ref{t:uniform structure} and Metrisation Theorem \ref{t:metrisation theorem}.
\begin{obs} By Proposition \ref{p:functions logic topology}(1) and Lemma \ref{l:types and piecewise infinite definable functions}, any piecewise bounded $\bigwedge_A${\hyp}definable function induces a continuous map between the spaces of $A${\hyp}types.
\end{obs}
\subsection[Metrisation results]{Metrisation results \footnote{This section is inspired by results from \cite{ben2005uncountable}.}}
A \emph{uniform space} is a pair $(X,\Phi)$ formed by a non{\hyp}empty set $X$ and a filter $\Phi$ of binary relations in $X$ satisfying that, for every $U\in\Phi$,
\[\begin{array}{ll}
\mathbf{(i)}& \Delta\subseteq U,\\
\mathbf{(ii)}& U^{-1}\in \Phi \mathrm{\ and\ }\\
\mathbf{(iii)}& \mathrm{there\ is\ }V\in \Phi\mathrm{\ such\ that\ }V\circ V\subseteq U;
\end{array}\]
where $\Delta\coloneqq\{(x,x)\mathrel{:} x\in X\}$ is the diagonal (equality) relation, $U^{-1}\coloneqq\{(y,x)\mathrel{:} (x,y)\in U\}$ and $W\circ V\coloneqq\{(x,z)\mathrel{:} \exists y\mathrel{} (x,y)\in V,\ (y,z)\in W\}$. The filter $\Phi$ is called the \emph{uniform structure} of the uniform space $(X,\Phi)$.
A \emph{uniformity base} of $(X,\Phi)$ is a filter base of $\Phi$. Note that a filter base $\mathcal{B}$ of reflexive binary relations on $X$ is a uniformity base of some uniform structure if and only if, for any $U\in\mathcal{B}$, there are $V,W\in \mathcal{B}$ such that $V\circ V\subseteq U$ and $W\subseteq U^{-1}$.
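As a concrete sanity check (not part of the theory developed here), the three axioms and the base criterion can be verified mechanically on a finite metric space; the point set and radii below are arbitrary illustrative choices.

```python
from itertools import product

# Toy finite "metric space": four points on the line, absolute-value metric.
X = [0.0, 0.5, 1.0, 2.5]
def d(x, y):
    return abs(x - y)

def entourage(eps):
    """U_eps = {(x, y) : d(x, y) < eps}, a basic entourage of the metric uniformity."""
    return {(x, y) for x, y in product(X, X) if d(x, y) < eps}

def compose(W, V):
    """W o V = {(x, z) : there is y with (x, y) in V and (y, z) in W}."""
    return {(x, z) for (x, y) in V for (y2, z) in W if y == y2}

diagonal = {(x, x) for x in X}
U = entourage(1.2)
V = entourage(0.6)  # half the radius, to witness axiom (iii)

assert diagonal <= U                  # (i): entourages contain the diagonal
assert {(y, x) for (x, y) in U} == U  # (ii): U is symmetric, so U^{-1} is in the base
assert compose(V, V) <= U             # (iii): V o V lies in U, by the triangle inequality
print("uniformity-base axioms verified on the toy example")
```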
Uniform spaces generalise (pseudo{\hyp})metric spaces: every pseudo{\hyp}metric $\rho$ induces a uniform space given by the uniformity base $\{\rho^{-1}[0,\varepsilon)\mathrel{:} \varepsilon\in\mathbb{R}_{>0}\}$. We say that a uniform structure $\Phi$ is \emph{pseudo{\hyp}metrisable} if it arises in this way, that is, if there is a pseudo{\hyp}metric $\rho$ such that $\{\rho^{-1}[0,\varepsilon)\mathrel{:} \varepsilon\in\mathbb{R}_{>0}\}$ is a uniformity base of $\Phi$.
As in the case of metric spaces, every uniform space has a topology given by the system of local bases of neighbourhoods $\{U(a)\mathrel{:} U\in \Phi\}_{a\in X}$, where $U(a)\coloneqq\{b\mathrel{:} (a,b)\in U\}$. We say that a topology $\mathcal{T}$ \emph{admits a uniform structure} if there is a uniform structure $\Phi$ on $X$ with uniform topology $\mathcal{T}$. Note that a topological space could admit many different uniform structures. Uniform structures inducing the same topology are called \emph{equivalent}.
\begin{obs} Uniform structures are the natural abstract context in which to study uniform continuity, uniform convergence, Cauchy sequences and completeness. It is important to note that two equivalent uniform structures may differ in these respects. For example, a function that is uniformly continuous with respect to one uniform structure might fail to be uniformly continuous with respect to another equivalent one.
\end{obs}
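A standard instance of this phenomenon, included here purely for illustration, is $\mathbb{R}$ with the usual metric versus the equivalent metric $d(x,y)=|\arctan x-\arctan y|$: both induce the usual topology, but the sequence $(n)_{n\in\mathbb{N}}$ is Cauchy only for the second. A numerical sketch:

```python
from math import atan

# Two metrics on R inducing the same topology but inequivalent uniform behaviour.
def d_usual(x, y):
    return abs(x - y)

def d_arctan(x, y):
    return abs(atan(x) - atan(y))

# The sequence x_n = n is Cauchy for d_arctan (arctan n converges to pi/2)
# but clearly not for d_usual: compare the spread of a far-out tail.
tail = range(1000, 1100)
spread_arctan = max(d_arctan(m, n) for m in tail for n in tail)
spread_usual = max(d_usual(m, n) for m in tail for n in tail)
print(spread_arctan < 1e-3, spread_usual >= 99)
```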
Recall that a topological space $X$ is \emph{functionally regular} if, for any point $x\in X$ and any closed set $V\subseteq X$ such that $x\notin V$, there is a continuous function $f:\ X\rightarrow [0,1]$ such that $f(x)=0$ and $f_{\mid V}=1$. It is a well{\hyp}known fact from the theory of uniform spaces that a topological space admits a uniform structure if and only if it is functionally regular --- see \cite[Theorem 38.2]{willard1970general} for a proof. Hence, we get the following general result for countably piecewise $A${\hyp}hyperdefinable sets.
\begin{prop} \label{p:uniformity of countably piecewise hyperdefinable sets}
Every countably piecewise $A${\hyp}hyperdefinable set $P$ with its $A${\hyp}logic topology admits a uniform structure.
\begin{dem} We know that $P$ is normal by Proposition \ref{p:normal countably piecewise hyperdefinable}. Let $V$ be piecewise $\bigwedge_A${\hyp}definable and $a\in P\setminus V$. As $\overline{\{a\}}={\mathrm{tp}}(a/A)$, we have that $\overline{\{a\}}$ and $V$ are disjoint. By normality, using Urysohn's Lemma \cite[Theorem 33.1]{munkres1999topology}, we conclude that $P$ is functionally regular; hence it admits a uniform structure. \mathbb{Q}ED
\end{dem}
\end{prop}
Our aim now is to give a better description of the uniform structure in each piece. In other words, we want to give an explicit uniformity base for the logic topology.
Let $P$ be an $A${\hyp}hyperdefinable set. We say that an $\bigwedge_A${\hyp}definable binary relation $\varepsilon\subseteq P\times P$ is \emph{positive} if $\Delta_P(A)\subseteq \mathring{\varepsilon}$, where the interior is taken in the product topology of the $A${\hyp}logic topology. Write $\mathcal{E}_P(A)$ for the set of all positive $\bigwedge_A${\hyp}definable binary relations of $P$. It may seem odd that we take the interior in the product topology; the following lemma explains why.
\begin{lem} Let $P$ be an $A${\hyp}hyperdefinable set and $\pi:\ P\times P\rightarrow \mathbf{S}_P(A)\times\mathbf{S}_P(A)$ the quotient map $(a,b)\mapsto ({\mathrm{tp}}(a/A),{\mathrm{tp}}(b/A))$. Then, for any $\varepsilon\in\mathcal{E}_P(A)$, $\pi[\varepsilon]\in\mathcal{E}_{\mathbf{S}_P(A)}(A)$, and for any $\varepsilon'\in \mathcal{E}_{\mathbf{S}_P(A)}(A)$, $\pi^{-1}[\varepsilon']\in \mathcal{E}_P(A)$.
\end{lem}
\begin{obs} When the $A${\hyp}logic topology is the global topology, $\Delta_P(A)$ is precisely the diagonal $\Delta$ and $\mathcal{E}_P(A)$ is the set of closed neighbourhoods of $\Delta$ in the global logic topology of $P\times P$, which coincides with the product topology by Proposition \ref{p:product logic topology}. This in particular applies to $\mathbf{S}_P(A)$.
\end{obs}
We now prove the main result of this subsection:
\begin{teo} \label{t:uniform structure}
Let $P$ be an $A${\hyp}hyperdefinable set. Then, $P$ with the $A${\hyp}logic topology admits a unique uniform structure and $\mathcal{E}_P(A)$ is a uniformity base of it.
\begin{dem} We start by proving the proposition for $\mathbf{S}_P(A)$ rather than $P$, so consider the set $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ of positive $\bigwedge_A${\hyp}definable binary relations on $\mathbf{S}_P(A)$. By \cite[Theorem 36.19]{willard1970general}, since $\mathbf{S}_P(A)$ with the global logic topology is compact and Hausdorff, it admits one and only one uniform structure, which is precisely the filter of neighbourhoods of the diagonal in the product topology. Since the global logic topology and the product topology on $\mathbf{S}_P(A)\times\mathbf{S}_P(A)$ coincide by Proposition \ref{p:product logic topology}, $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is the collection of all closed neighbourhoods of the diagonal. Thus, by normality of $\mathbf{S}_P(A)\times\mathbf{S}_P(A)$ and closedness of $\Delta$ (which follows from Hausdorffness of $\mathbf{S}_P(A)$), the filter generated by $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is exactly the collection of all neighbourhoods of $\Delta$, which is precisely the unique uniform structure admitted by $\mathbf{S}_P(A)$. Hence, $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is a base of the unique uniform structure of $\mathbf{S}_P(A)$.
We now use the quotient maps ${\mathrm{tp}}_A:\ P\rightarrow \mathbf{S}_P(A)$ and $\tau:\ P\times P\rightarrow\mathbf{S}_P(A)\times\mathbf{S}_P(A)$ given by ${\mathrm{tp}}_A(a)={\mathrm{tp}}(a/A)$ and $\tau(a,b)=({\mathrm{tp}}(a/A),{\mathrm{tp}}(b/A))$ to extend the result to $P$.
Let $\Phi$ be any uniform structure on $P$ inducing the $A${\hyp}logic topology. Note that at least one $\Phi$ exists by Proposition \ref{p:uniformity of countably piecewise hyperdefinable sets}. Consider $\tau\Phi\coloneqq \{\tau[U]\mathrel{:} U\in \Phi\}$. Obviously, $\tau\Phi$ is a filter. Also, $\tau[U^{-1}]=\tau[U]^{-1}$ and $\tau[U]\circ\tau[U]\subseteq \tau[U\circ U\circ U]$ for any $U\in\Phi$, so $\tau\Phi$ is a uniform structure on $\mathbf{S}_P(A)$. On the other hand, for any $a\in P$, ${\mathrm{tp}}_A[U(a)]\subseteq\tau[U]({\mathrm{tp}}(a/A))\subseteq {\mathrm{tp}}_A[U\circ U(a)]$. Thus, by the properties of ${\mathrm{tp}}_A$, $\tau\Phi$ induces the global logic topology on $\mathbf{S}_P(A)$. As $\mathbf{S}_P(A)$ with the global logic topology only admits one uniform structure, it follows that $\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is a uniformity base of $\tau\Phi$. In particular, we have $\tau^{-1}\mathcal{E}_{\mathbf{S}_P(A)}(A)\subseteq \Phi$, where $\tau^{-1}\mathcal{E}_{\mathbf{S}_P(A)}(A)\coloneqq \{\tau^{-1}[\varepsilon]\mathrel{:} \varepsilon\in\mathcal{E}_{\mathbf{S}_P(A)}(A)\}$. On the other hand, for $U\in\Phi$, take $V\in\Phi$ such that $V\circ V\circ V\subseteq U$ and find $\varepsilon\in\mathcal{E}_{\mathbf{S}_P(A)}(A)$ such that $\varepsilon\subseteq \tau[V]$. Therefore, $\tau^{-1}[\varepsilon]\subseteq \tau^{-1}[\tau[V]]=\Delta_P(A)\circ V\circ\Delta_P(A)\subseteq V\circ V\circ V\subseteq U$. Hence, $\tau^{-1}\mathcal{E}_{\mathbf{S}_P(A)}(A)$ is a uniformity base of $\Phi$, concluding uniqueness.
We now show that $\mathcal{E}_P(A)$ is a uniformity base of $\Phi$. Since $\tau^{-1}[\varepsilon]\in\mathcal{E}_P(A)$ for any $\varepsilon\in \mathcal{E}_{\mathbf{S}_P(A)}(A)$, it is enough to show that for any $\varepsilon\in \mathcal{E}_P(A)$ there is $\varepsilon'\in\mathcal{E}_{\mathbf{S}_P(A)}(A)$ such that $\tau^{-1}[\varepsilon']\subseteq \varepsilon$. Take $\varepsilon''\in \mathcal{E}_P(A)$ such that $\varepsilon''\circ\varepsilon''\circ\varepsilon''\subseteq \varepsilon$ and set $\varepsilon'=\tau[\varepsilon'']$. Then, $\tau^{-1}[\varepsilon']=\Delta_P(A)\circ\varepsilon''\circ\Delta_P(A)\subseteq \varepsilon$. \mathbb{Q}ED
\end{dem}
\end{teo}
Using compactness, we can get an even smaller uniformity base:
\begin{lem} \label{l:size uniformity base} Let $P$ be an $A${\hyp}hyperdefinable set. There is a family $\{\varepsilon_i\}_{i<|A|+|\mathscr{L}|}$ of $\bigwedge_A${\hyp}definable positive binary relations on $P$ which is a uniformity base of $P$ with the $A${\hyp}logic topology.
\begin{dem} Say $\underline{\Delta}_P(A)=\bigwedge_{i< |A|+|\mathscr{L}|} \varphi_i$ with $\varphi_i\in\mathrm{For}(\mathscr{L}(A^*))$. For each $i<|A|+|\mathscr{L}|$, write $E_i\coloneqq {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P\times P}[\varphi_i(\mathfrak{M})]$ and $U_i\coloneqq P\times P\setminus {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P\times P}[\neg\varphi_i(\mathfrak{M})]$. Since ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P\times P}^{-1}[\Delta_P(A)]\cap \neg\varphi_i(\mathfrak{M})=\emptyset$, $\Delta_P(A)\cap {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P\times P}[\neg\varphi_i(\mathfrak{M})]=\emptyset$, so $\Delta_P(A)\subseteq U_i\subseteq E_i$ where $E_i$ is closed and $U_i$ is open in the $A${\hyp}logic topology. By compactness of $P\times P$ in the $A${\hyp}logic topology, we can take $\varepsilon_i\in \mathcal{E}_P(A)$ such that $\varepsilon_i\subseteq U_i\subseteq E_i$. It follows that $\{\varepsilon_i\}_{i<|A|+|\mathscr{L}|}$ is a sequence of $\bigwedge_A${\hyp}definable positive binary relations on $P$ such that $\Delta_P(A)=\bigcap_{i<|A|+|\mathscr{L}|}\varepsilon_i$. Now we show that $\{\varepsilon_i\}_{i<|A|+|\mathscr{L}|}$ is a uniformity base. Take $\varepsilon\in \mathcal{E}_P(A)$ and $U\subseteq \varepsilon$ open in the product topology of $P\times P$ such that $\bigcap_{i<|A|+|\mathscr{L}|} \varepsilon_i=\Delta_P(A)\subseteq U\subseteq \varepsilon$. Since $U$ is open in the product topology, it is also open in the $A${\hyp}logic topology. Then, by compactness, there is $i<|A|+|\mathscr{L}|$ such that $\varepsilon_i\subseteq U\subseteq \varepsilon$, concluding that $\{\varepsilon_i\}_{i<|A|+|\mathscr{L}|}$ is a uniformity base of $P$ with the $A${\hyp}logic topology.\mathbb{Q}ED
\end{dem}
\end{lem}
Recall that a uniform structure is pseudo{\hyp}metrisable if and only if it has a countable uniformity base --- see \cite[Theorem 38.3]{willard1970general}. Therefore, we get the following metrisation results:
\begin{coro} Let $P$ be an $A${\hyp}hyperdefinable set. Then, $P$ with its $A${\hyp}logic topology is pseudo{\hyp}metrisable if and only if there is a countable family $\{\varepsilon_n\}_{n\in\mathbb{N}}$ of $\bigwedge_A${\hyp}definable positive binary relations on $P$ such that $\Delta_P(A)=\bigcap_{n\in \mathbb{N}} \varepsilon_n$. In particular, $P$ with the $A${\hyp}logic topology is pseudo{\hyp}metrisable if $\mathscr{L}$ and $A$ are countable.
\end{coro}
\begin{teo}[Metrisation Theorem] \label{t:metrisation theorem} Let $P$ be a locally $A${\hyp}hyperdefinable set of countable cofinality. Then, $P$ with the $A${\hyp}logic topology is pseudo{\hyp}metrisable if and only if each piece is pseudo{\hyp}metrisable. In particular, $P$ with the $A${\hyp}logic topology is pseudo{\hyp}metrisable if $\mathscr{L}$ and $A$ are countable.
\begin{dem} Assume that each piece is pseudo{\hyp}metrisable. Taking the quotient map $a\mapsto {\mathrm{tp}}(a/A)$, we get that $\mathbf{S}_P(A)$ is locally metrisable and $\sigma${\hyp}compact. By $\sigma${\hyp}compactness, $\mathbf{S}_P(A)$ is trivially Lindel\"of (i.e. every open cover has a countable subcover). By Proposition \ref{p:normal countably piecewise hyperdefinable}, $\mathbf{S}_P(A)$ is normal and Hausdorff, so it is in particular regular (i.e. any closed set and any point outside it can be separated by open sets). Therefore, by \cite[Theorem 41.5]{munkres1999topology}, $\mathbf{S}_P(A)$ is paracompact (i.e. every open cover has a locally finite open refinement). By Smirnov's Metrisation Theorem \cite[Theorem 42.1]{munkres1999topology}, we conclude that it is metrisable. Taking the composition with the quotient map $a\mapsto {\mathrm{tp}}(a/A)$, we conclude that $P$ with the $A${\hyp}logic topology is pseudo{\hyp}metrisable. \mathbb{Q}ED
\end{dem}
\end{teo}
Using uniformities, it is also easy to find small dense subsets:
\begin{coro} Let $P$ be a piecewise $A${\hyp}hyperdefinable set. Then, there is a subset $D\subseteq P$ with $|D|\leq |A|+|\mathscr{L}|+\mathrm{cf}(P)$ which is dense in the $A${\hyp}logic topology. In particular, when $A$ and $\mathscr{L}$ are countable, every countably piecewise $A${\hyp}hyperdefinable set is separable (i.e. has a countable dense subset) with the $A${\hyp}logic topology.
\begin{dem} Say $P=\underrightarrow{\lim}_{i<\mathrm{cf}(P)}\, P_i$. By Lemma \ref{l:size uniformity base}, for each $i$, there is a uniformity base $\mathcal{B}_i\subseteq \mathcal{E}_{P_i}(A)$ with $|\mathcal{B}_i|\leq |A|+|\mathscr{L}|$. By compactness, for each $\varepsilon\in \mathcal{B}_i$, there is $D_{\varepsilon,i}\subseteq P_i$ finite such that $P_i\subseteq\bigcup_{a\in D_{\varepsilon,i}} \varepsilon(a)$. Take $D=\bigcup_{i<\mathrm{cf}(P)} \bigcup_{\varepsilon\in\mathcal{B}_i} D_{\varepsilon,i}$, so $|D|\leq |A|+|\mathscr{L}|+\mathrm{cf}(P)$. Let $U$ be non{\hyp}empty open. Then, $\varepsilon(a)\subseteq U\cap P_i$ for some $a\in P$, $i<\mathrm{cf}(P)$ and $\varepsilon\in\mathcal{B}_i$. Find $\varepsilon_0\in\mathcal{B}_i$ with $\varepsilon_0^{-1}\subseteq \varepsilon$ and $d\in D_{\varepsilon_0,i}\subseteq D$ with $a\in\varepsilon_0(d)$, so $d\in\varepsilon(a)\subseteq U$, concluding $D\cap U\neq \emptyset$. As $U$ is arbitrary, $D$ is dense.
\mathbb{Q}ED
\end{dem}
\end{coro}
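The proof above extracts a dense set from finite covers by entourage balls. The same union-of-finite-nets construction can be run numerically on a sample of $[0,1]$; the sample, the seed and the net radii are arbitrary illustrative choices.

```python
import random

def eps_net(points, eps):
    """Greedy finite eps-net: kept points are pairwise >= eps apart, and every
    input point lies within eps of some kept point (the role of D_{eps,i})."""
    net = []
    for p in points:
        if all(abs(p - q) >= eps for q in net):
            net.append(p)
    return net

random.seed(0)
sample = [random.random() for _ in range(200)]  # stand-in for a compact piece

D = set()
for k in range(1, 6):                 # union over the entourages eps = 2^-k
    D.update(eps_net(sample, 2 ** -k))

# D is 2^-5-dense in the sample, mirroring the union-of-finite-covers argument.
assert all(min(abs(p - q) for q in D) < 2 ** -5 for p in sample)
print(len(D), "net points suffice")
```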
Arguing similarly for global logic topologies, we can reduce the number of parameters from $2^{|A|+|\mathscr{L}|}$ to $|A|+|\mathscr{L}|+\mathrm{cf}(P)$.
\begin{prop}\label{p:reduce number of parameters} Let $P$ be a piecewise $A${\hyp}hyperdefinable set with a global logic topology. Then, there is $B\subseteq P\cup A$ with $|B|\leq |A|+|\mathscr{L}|+\mathrm{cf}(P)$ such that the $B${\hyp}logic topology of $P$ is its global logic topology.
\begin{dem} Say $P=\underrightarrow{\lim}_{i\in I}\, P_i$ and $P_i=\rfrac{X_i}{E_i}$ with $|I|=\mathrm{cf}(P)$. Write $\lambda=|\mathscr{L}|+|A|+\mathrm{cf}(P)$. For $i\in I$, $\Delta_i\coloneqq \{(x,x)\mathrel{:} x\in P_i\}$ is $\bigwedge_{A^*}${\hyp}definable as $\underline{\Delta}_i=\underline{E}_i$. Say $\underline{\Delta}_i=\bigwedge_{j\in \lambda} \varphi_j(x,y)$ with $\varphi_j(x,y)\in\mathrm{For}(\mathscr{L}(A^*))$. Write $V_j\coloneqq {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i\times P_i}[\varphi_j(\mathfrak{M})]$ and $U_j\coloneqq P_i\times P_i\setminus {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_{P_i\times P_i}[\neg\varphi_j(\mathfrak{M})]$. Then, $\Delta_i\subseteq U_j\subseteq V_j$ where $V_j$ is closed and $U_j$ is open in the global logic topology of $P_i\times P_i$. By compactness of $P_i\times P_i$, there is a positive $\bigwedge${\hyp}definable binary relation $\varepsilon_j$ such that $\varepsilon_j\subseteq U_j\subseteq V_j$. It follows that $\mathcal{B}_i\coloneqq \{\varepsilon_j\}_{j\in \lambda}$ forms a uniformity base of $P_i$ with the global logic topology.
By compactness of $P_i$, for each $\varepsilon\in \mathcal{B}_i$, there is $D_{\varepsilon,i}\subseteq P_i$ finite such that $P_i\subseteq\bigcup_{a\in D_{\varepsilon,i}} \varepsilon(a)$. Take $D_i=\bigcup_{\varepsilon\in\mathcal{B}_i} D_{\varepsilon,i}$ and $D=\bigcup_{i\in I}D_i$, so $|D|\leq \lambda$. It follows that $D_i$ is dense in the global logic topology of $P_i$.
Take $B=D\cup A\subseteq \mathrm{bdd}(A)$, so $|B|\leq \lambda$. We claim that the $B${\hyp}logic topology of $P$ is its global logic topology. Indeed, take arbitrary $a\in P$ and $\sigma\in \mathrm{Aut}(\mathfrak{M}/B)$ and, aiming for a contradiction, suppose $\sigma(a)\neq a$, so $\sigma^{-1}(a)\neq a$. Pick $i\in I$ such that $a\in P_i$. Note that $\sigma[P_i]=P_i$. By Hausdorffness of $P_i$ in the global logic topology, there are $U$ and $V$ such that $a\in U\subseteq V\subseteq P_i$ with $\sigma^{-1}(a)\notin V$, where $U$ is open and $V$ is closed in the global logic topology of $P_i$. Now, $D_i\cap U=\sigma[D_i\cap U]\subseteq \sigma[V]$. As $U$ is open and $D_i$ is dense in $P_i$, $U\cap D_i$ is dense in $U$. As $\sigma[V]$ is closed in the global logic topology, we conclude $a\in U\subseteq \sigma[V]$. Therefore, $\sigma^{-1}(a)\in V$, a contradiction. \mathbb{Q}ED
\end{dem}
\end{prop}
\subsection{Examples}
Most of the examples of locally hyperdefinable sets come from the following two basic remarks, which were, in fact, already known. First, note that any piecewise $A${\hyp}definable set is trivially locally $A${\hyp}definable. Secondly, note that if $P$ is locally $A${\hyp}hyperdefinable and $E$ is a piecewise bounded $\bigwedge_A${\hyp}definable equivalence relation on $P$, then $\rfrac{P}{E}$ is locally hyperdefinable by Proposition \ref{p:mapping locally hyperdefinable sets}.
\begin{ejem}\label{e:example 1} A classical example is the field of real numbers $\mathbb{R}$ with its usual topology, which is a locally hyperdefinable set of countable cofinality in the theory of real closed fields. It is explicitly given as $\rfrac{O(1)}{o(1)}$ with $O(1)=\bigcup_{n\in\mathbb{N}} [-n,n]$ and $o(1)=\bigcap_{n\in\mathbb{N}} [-\rfrac{1}{n},\rfrac{1}{n}]$. Furthermore, up to isomorphism of piecewise hyperdefinable sets, $\rfrac{O(1)}{o(1)}$ is the unique representation of $\mathbb{R}$ as a locally hyperdefinable set. Indeed, by Proposition \ref{p:functions logic topology}(8), any two locally hyperdefinable sets homeomorphic with the logic topologies are isomorphic as piecewise hyperdefinable sets.
However, note that the real numbers with the usual topology can be represented as a piecewise hyperdefinable set (non{\hyp}locally hyperdefinable) in other non{\hyp}isomorphic ways. For instance, consider the direct system of all compact subsets of $\rfrac{O(1)}{o(1)}$ with empty interior in the global logic topology with the natural inclusion maps. Using that the topology is first{\hyp}countable, it is easy to note that this is a coherent covering. Therefore, the resulting direct limit with the global logic topology is a piecewise hyperdefinable set homeomorphic to $\mathbb{R}$ with the usual topology. However, it is not locally hyperdefinable, so it is not isomorphic to $\rfrac{O(1)}{o(1)}$.
\end{ejem}
\begin{ejem} \label{e:example 2}
More generally, any topological manifold $X$ (i.e. a locally Euclidean Hausdorff topological space) is a locally hyperdefinable set in the usual theory of real closed fields. Indeed, for any $m$, $\mathbb{R}^m$ is locally hyperdefinable, so every compact subset of $\mathbb{R}^m$ is hyperdefinable, concluding that the compact charts of $X$ are hyperdefinable. Now, by Proposition \ref{p:functions logic topology 2}, the chart{\hyp}changing maps are $\bigwedge${\hyp}definable, so the finite unions of compact chart neighbourhoods are hyperdefinable. It follows then that the whole manifold is locally hyperdefinable. If it is also second countable, then it has countable cofinality too. \end{ejem}
\begin{ejem} \label{e:example 3}
As $\mathbb{Q}$ with the usual metric topology is not locally compact, it cannot be given as a locally hyperdefinable set. However, it is possible to give it as a piecewise hyperdefinable set in the theory of real closed fields. Indeed, using that $\mathbb{Q}$ is first countable, it is clear that $\mathbb{Q}$ is compactly generated. Every compact subset of $\mathbb{Q}$ is a hyperdefinable set, being $\bigwedge${\hyp}definable in $\mathbb{R}=\rfrac{O(1)}{o(1)}$. In other words, $\mathbb{Q}$ is the direct limit of all the $\bigwedge${\hyp}definable subsets of $\mathbb{R}=\rfrac{O(1)}{o(1)}$ contained in $\mathbb{Q}$ with the standard inclusion maps. Note that it has uncountable cofinality.
\end{ejem}
\begin{ejem} \label{e:example 4}
More generally, any first countable Hausdorff topological space $X$ can be given as a piecewise hyperdefinable set with a global logic topology in the theory of real closed fields. Indeed, let $\mathcal{C}$ be the family of countable compact subsets of $X$. As $X$ is first countable, $\mathcal{C}$ is coherent. Now, by the Mazurkiewicz{\hyp}Sierpi\'{n}ski Theorem \cite[Theorem 4]{milliet2011remark}, every countable compact Hausdorff topological space is homeomorphic to a countable successor ordinal with the order topology. On the other hand, by induction, we easily see that every countable ordinal with the order topology is homeomorphic to a subset of $\mathbb{Q}$. Therefore, for every $A\in \mathcal{C}$, there is a compact subset $P_A\subseteq \mathbb{Q}$ such that $A$ with the subspace topology is homeomorphic to $P_A$ with the subspace topology. For each $A\in\mathcal{C}$, pick a homeomorphism $\eta_A:\ A\rightarrow P_A$. For $A,B\in \mathcal{C}$ with $A\subseteq B$, take $\varphi_{BA}=\eta_B\circ \eta_A^{-1}:\ P_{A}\rightarrow P_{B}$. Now, as noted in the previous example, every compact subset of $\mathbb{Q}$ is homeomorphic to a hyperdefinable set with a global logic topology in the theory of real closed fields. Also, for $A,B\in\mathcal{C}$ with $A\subseteq B$, $\varphi_{BA}$ is continuous, so it is $\bigwedge${\hyp}definable by Proposition \ref{p:functions logic topology 2}. Then, we conclude that $X$ is homeomorphic to $\underrightarrow{\lim}_{A\in \mathcal{C}}\, P_A$ with the global logic topology.
\end{ejem}
\begin{ejem} \label{e:example 5} For a countably piecewise hyperdefinable set that is not locally hyperdefinable, consider the infinite countable rose, i.e. the infinite countable bouquet of circles. This is $\rfrac{\mathbb{R}}{\sim}$ with the equivalence relation $x\sim y\Leftrightarrow x,y\in\mathbb{Z}\vee x=y$. Note that $\mathbb{R}$ is countably piecewise hyperdefinable and the relation $\sim$ is piecewise $\bigwedge${\hyp}definable in the theory of real closed fields. Thus, the infinite countable rose is a countably piecewise hyperdefinable set. It is not locally hyperdefinable and not first countable, so it is not pseudo{\hyp}metrisable.
\end{ejem}
\begin{ejem} \label{e:example 6}
For a piecewise hyperdefinable set which is not normal, consider $\mathbb{R}$ with the rational sequence topology. For $i\in\mathbb{R}$, pick a sequence $(i_n)_{n\in\mathbb{N}}$ of rational numbers converging to $i$. When $i\in \mathbb{Q}$, take $i_n=i$ for every $n\in\mathbb{N}$. Take $P_i=\{i_n\mathrel{:} n\in\mathbb{N}\}\cup\{i\}$ for $i\in\mathbb{R}$. For a finite subset $I\subseteq \mathbb{R}$, take $P_I=\bigcup_{i\in I} P_i$. Note that, for any finite $I\subseteq \mathbb{R}$, $P_I$ is compact in $\mathbb{R}$ with the usual topology, so each $P_I$ is hyperdefinable in the language of real closed fields. Take $P=\underrightarrow{\lim}\ P_I$ with the usual inclusion maps. We now check that $P$ with the global logic topology is homeomorphic to $\mathbb{R}$ with the rational sequence topology, i.e. the topology given by the local bases of open neighbourhoods $U_n(i)\coloneqq \{i_k\mathrel{:} k\geq n\}\cup \{i\}$ for $i\in\mathbb{R}$.
Note first that $U_n(i)$ is open in $P$ for each $i\in\mathbb{R}$ and $n\in\mathbb{N}$. Obviously, $U_n(i)\cap P_i$ is open in $P_i$. For $j\neq i$, as $(i_k)_{k\in\mathbb{N}}$ converges to $i$ and $(j_k)_{k\in\mathbb{N}}$ converges to $j$, we conclude that there is $m\in\mathbb{N}$ such that $U_n(i)\cap P_j\subseteq \{i_k\mathrel{:} k\leq m\}$. Thus, $U_n(i)\cap P_j$ is a finite set of isolated points of $P_j$, hence open in $P_j$.
On the other hand, suppose $U\subseteq P$ is open in $P$. Take $i\in U$. As $U\cap P_i$ is open in $P_i$, there is $n\in\mathbb{N}$ such that $U_n(i)\subseteq U\cap P_i$, so $U_n(i)\subseteq U$. As $i\in U$ is arbitrary, we conclude that $U$ is open in the rational sequence topology.
By Jones' Lemma \cite[Lemma 15.2]{willard1970general}, $P$ is not normal. Indeed, $\mathbb{Q}$ is dense, $\mathbb{R}\setminus \mathbb{Q}$ is closed and discrete, and $|\mathbb{R}\setminus\mathbb{Q}|\geq 2^{|\mathbb{Q}|}$.
\end{ejem}
For a counterexample where the product of the global logic topologies is not the global logic topology on the product, see Example \ref{e:example no topological group}. We know of no counterexample of a piecewise hyperdefinable set with a non{\hyp}Hausdorff global logic topology, nor of a countably piecewise $A${\hyp}hyperdefinable set that is not locally $A${\hyp}hyperdefinable but has a locally compact $A${\hyp}logic topology.
\section{Piecewise hyperdefinable groups}
In this section we study the particular case of piecewise hyperdefinable groups. Our main aim is to find necessary and sufficient conditions for them to be locally compact topological groups with the logic topology. We then discuss how to extend the classical Gleason{\hyp}Yamabe Theorem and some related results to piecewise hyperdefinable groups. We start by recalling some fundamental facts about topological groups.
\subsection{Preliminaries on topological groups}
Recall that a \emph{topological group} is a Hausdorff topological space with a group structure whose group operations are continuous.
\begin{obss} \label{o:remarks topological groups} Let $G$ be a topological group. The following are the basic fundamental facts that we need:
{\textnormal{\textbf{(1)}}} Let $H\trianglelefteq G$ be closed. Then, $\rfrac{G}{H}$ is a topological group and $\pi_{G/H}:\ G\rightarrow \rfrac{G}{H}$ is a continuous and open surjective homomorphism. Furthermore, if $H$ is compact, then $\pi_{G/H}$ is also a closed map and has compact fibres. In particular, it is proper by \cite[Theorem 3.7.2]{engelking1989general}.
{\textnormal{\textbf{(2)}}} Let $H\leq G$ be an open subgroup. Then, $H$ is also closed.
{\textnormal{\textbf{(3)}}} The connected component $G^0$ of the identity is a normal closed subgroup of $G$. If $G$ is locally connected (e.g. a Lie group), then $G^0$ is also open.
{\textnormal{\textbf{(4)}}} {\textnormal{\textbf{(Closed Isomorphism Theorem)}}} Let $f:\ G\rightarrow H$ be a continuous and closed surjective homomorphism between topological groups. Then, for any closed subgroup $S\trianglelefteq K\coloneqq \ker(f)$ with $S\trianglelefteq G$, the map $f_S:\ \rfrac{G}{S}\rightarrow H$ defined by $f=f_S\circ\pi_{G/S}$ is a continuous, closed and open homomorphism. In particular, $f_K:\ \rfrac{G}{K}\rightarrow H$ is an isomorphism of topological groups and $f$ is an open map.
\end{obss}
In the theory of topological groups, a \emph{Yamabe pair} of a topological group $G$ is a pair $(K,H)$ with $K\trianglelefteq H\leq G$ such that $K$ is compact, $H$ is open and $L=\rfrac{H}{K}$ is a finite{\hyp}dimensional Lie group. We say that $H$ is the \emph{domain}, $K$ is the \emph{kernel} and $L$ is the \emph{Lie core}. We write $\pi_{H/K}:\ H\rightarrow L$ for the quotient map. A Lie group is a \emph{Lie core} of $G$ if it is isomorphic, as a topological group, to the Lie core of some Yamabe pair of $G$.
\begin{obs} Let $G$ be a topological group and suppose that it has a Yamabe pair $(K,H)$ with Lie core $L$. By Remark \ref{o:remarks topological groups}(1), $\pi_{H/K}:\ H\rightarrow L$ is a continuous, open, closed and proper surjective group homomorphism. In particular, as $L$ is locally compact, $H$ must be a locally compact topological group too. Since $H$ is open in $G$, we conclude that $G$ is locally compact as well.
\end{obs}
The following celebrated theorem, which states that every locally compact topological group has Lie cores, is usually considered the solution to Hilbert's fifth problem.
\begin{teo}[Gleason{\hyp}Yamabe] \label{t:gleason yamabe}
Let $G$ be a locally compact topological group and $U\subseteq G$ a neighbourhood of the identity. Then, $G$ has a Yamabe pair $(K,H)$ with $K\subseteq U$. In particular, a topological group has a Lie core if and only if it is locally compact.
\begin{dem} The original papers are \cite{gleason1951structure} and \cite{yamabe1953generalization}. A complete proof can be found in \cite{tao2014hilbert}. Model{\hyp}theoretic treatments can be found in \cite{hirschfeld1990nonstandard} and \cite{van2015hilbert}. \QED
\end{dem}
\end{teo}
In this paper we mainly use this classical version of the Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe}. Alternatively, we can use the following variation, proved in \cite[Theorem 1.25]{carolino2015structure}, which provides extra control over the parameters. Recall that two subsets of a group are \emph{$k${\hyp}commensurable} if $k$ left translates of each one cover the other. Recall that a \emph{$k${\hyp}approximate subgroup} is a symmetric subset $X$ which is $k${\hyp}commensurable with its set of pairwise products $X^2$.
\begin{teo}[Gleason{\hyp}Yamabe{\hyp}Carolino Theorem] \label{t:gleason yamabe carolino}
There are functions $c:\ \mathbb{N}\rightarrow \mathbb{N}$ and $d:\ \mathbb{N}\rightarrow\mathbb{N}$ such that the following holds:
Let $G$ be a locally compact topological group and $U\subseteq G$ an open precompact $k${\hyp}approximate subgroup for some $k\in\mathbb{N}$. Then, $G$ has a Yamabe pair $(K,H)$ with $K\subseteq U^4$ such that $\rfrac{H}{K}$ is a Lie group of dimension at most $d(k)$ and $H\cap U^4$ generates $H$ and is $c(k)${\hyp}commensurable to $U$.
\end{teo}
A Yamabe pair $(K',H')$ is \emph{smaller than or equal to} $(K,H)$ if $K\trianglelefteq K'\trianglelefteq H'\leq H$. A Yamabe pair is \emph{minimal} if it has no smaller ones. A Lie core is \emph{minimal} if it is the Lie core of some minimal Yamabe pair. Equivalently, define an \emph{aperiodic} topological group to be a topological group without non{\hyp}trivial compact normal subgroups. Then, by Remark \ref{o:remarks topological groups}(1), a Lie core is minimal if and only if it is an aperiodic connected Lie core. The following basic lemma implies that every Yamabe pair has a smaller or equal minimal Yamabe pair.
\begin{lem} \label{l:maximal compact normal subgroup}
Every connected Lie group has a unique maximal compact normal subgroup.
\begin{dem} By the Cartan{\hyp}Iwasawa{\hyp}Malcev Theorem \cite[Chapter XV Theorem 3.1]{hochschild1965structure}, there is a maximal compact subgroup $T$, and every compact subgroup is contained in a conjugate of it. Hence, $\bigcap_{g\in G} gTg^{-1}$ is the unique maximal compact normal subgroup. \QED
\end{dem}
\end{lem}
\begin{obs} A different proof of the previous result, not using the Cartan{\hyp}Iwasawa{\hyp}Malcev Theorem, was given in \cite{hrushovski2011stable}.
\end{obs}
\begin{coro} \label{c:minimal yamabe pairs} Let $G$ be a topological group and $(K_1,H_1)$ a Yamabe pair. Then, there is a minimal Yamabe pair $(K,H)$ smaller than or equal to $(K_1,H_1)$. Furthermore, for any clopen subset $U$ containing $K_1$, we have $H\subseteq U^2$.
\begin{dem} Write $\pi_1\coloneqq \pi_{H_1/K_1}:\ H_1\rightarrow L_1$ for the quotient map to the Lie core of $(K_1,H_1)$. Let $\widetilde{L}\subseteq L_1$ be the topological connected component of the identity. As Lie groups are locally connected, $\widetilde{L}$ is open by Remark \ref{o:remarks topological groups}(3). Let $\widetilde{K}\trianglelefteq \widetilde{L}$ be its maximal compact normal subgroup, given by Lemma \ref{l:maximal compact normal subgroup}. Take $H\coloneqq \pi^{-1}_1[\widetilde{L}]$ and $K\coloneqq \pi^{-1}_1[\widetilde{K}]$. Then, by Remark \ref{o:remarks topological groups}(1) and the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), $(K,H)$ is a minimal Yamabe pair of $G$ smaller than or equal to $(K_1,H_1)$. Finally, if $U$ is clopen and $K_1\subseteq U$, $\pi_1[U]$ is clopen with $1\in \pi_1[U]$ as $\pi_1$ is open and closed by Remark \ref{o:remarks topological groups}(1). Thus, $\widetilde{L}\subseteq \pi_1[U]$ as $\widetilde{L}$ is connected, concluding that $H\subseteq UK_1\subseteq U^2$. \QED
\end{dem}
\end{coro}
Two Yamabe pairs $(K,H)$ and $(K',H')$ of $G$ with Lie cores $\pi\coloneqq \pi_{H/K}:\ H\rightarrow L$ and $\pi'\coloneqq \pi_{H'/K'}:\ H'\rightarrow L'$ are \emph{equivalent} if the map $\eta:\ \pi(h)\mapsto \pi'(h)$ for $h\in H\cap H'$ is a well{\hyp}defined isomorphism of topological groups between $L$ and $L'$. Equivalently, by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), $(K,H)$ and $(K',H')$ are equivalent if and only if $H\cap K'\subseteq K$, $H'\cap K\subseteq K'$, $(H\cap H')K=H$ and $(H\cap H')K'=H'$. It follows that minimal Yamabe pairs are unique up to equivalence:
\begin{lem} \label{l:lemma uniqueness minimal lie core} Let $G$ be a locally compact topological group and $(K_1,H_1)$ and $(K_2,H_2)$ two minimal Yamabe pairs with Lie cores $\pi_1\coloneqq \pi_{H_1/K_1}:\ H_1\rightarrow L_1$ and $\pi_2\coloneqq \pi_{H_2/K_2}:\ H_2\rightarrow L_2$:
{\textnormal{\textbf{(1)}}} Let $H'\leq H_1$ be an open subgroup. Then, $(K_1\cap H',H')$ is a minimal Yamabe pair of $G$ equivalent to $(K_1,H_1)$ and $[K_1:K_1\cap H']$ is finite.
{\textnormal{\textbf{(2)}}} $K_1\subseteq K_2\Leftrightarrow H_1\subseteq H_2\Leftrightarrow K_2\cap H_1=K_1$. In particular, $K_1=K_2$ if and only if $H_1=H_2$.
{\textnormal{\textbf{(3)}}} $(K_1\cap K_2,H_1\cap H_2)$ is a minimal Yamabe pair with $K_1\cap H_2=K_1\cap K_2=K_2\cap H_1$. In particular, $[K_1:K_1\cap K_2]$ and $[K_2:K_1\cap K_2]$ are finite.
\begin{dem}
{\textnormal{\textbf{(1)}}} By connectedness, $\pi_1[H']=L_1$. Thus, by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), we conclude that $(K_1\cap H',H')$ is a minimal Yamabe pair of $G$ equivalent to $(K_1,H_1)$. Finally, as $K_1\cap H'$ is an open subset of $K_1$, by compactness, $[K_1:K_1\cap H']$ is finite.
{\textnormal{\textbf{(2)}}} We already have $H_1\subseteq H_2\Rightarrow K_2\cap H_1=K_1\Rightarrow K_1\subseteq K_2$ by {\textnormal{\textbf{(1)}}}. On the other hand, by connectedness, $\pi_1[H_1\cap H_2]=L_1$. If $K_1\subseteq K_2$, then $K_1\leq H_1\cap H_2$, so $H_1=\pi^{-1}_1[\pi_1[H_1\cap H_2]]=H_1\cap H_2\subseteq H_2$.
{\textnormal{\textbf{(3)}}} By {\textnormal{\textbf{(1)}}}, $(K_1\cap H_2,H_1\cap H_2)$ and $(K_2\cap H_1,H_2\cap H_1)$ are minimal Yamabe pairs with $[K_1:K_1\cap H_2]$ and $[K_2:H_1\cap K_2]$ finite. By point {\textnormal{\textbf{(2)}}}, $K_1\cap K_2=K_1\cap H_2=K_2\cap H_1$. \QED
\end{dem}
\end{lem}
As an immediate consequence of the previous lemma, we get the following corollary:
\begin{coro} \label{c:uniqueness minimal lie core} Every locally compact topological group has a unique minimal Yamabe pair up to equivalence.
\end{coro}
The previous uniqueness statement implies that the minimal Lie core $L$ is unique up to isomorphism of topological groups, but it is far stronger than that. Indeed, it also says that there is a \emph{global minimal Lie core map} extending all the minimal Yamabe pairs which is unique up to isomorphisms of $L$:
\begin{prop}\label{p:global minimal lie core map}
Let $G$ be a locally compact topological group and $L$ its minimal Lie core. Let $D_L$ be the union of all the domains of minimal Yamabe pairs of $G$. Then, there is a map $\pi_L:\ D_L\rightarrow L$ such that, for any minimal Yamabe pair $(K,H)$ of $G$, $\pi_{L\mid H}$ is a continuous, closed, open and proper surjective homomorphism with kernel $K$. Furthermore, $\pi_L$ is unique up to isomorphisms of $L$.
\begin{dem} Let $\mathcal{Y}$ be the set of all minimal Yamabe pairs of $G$, fix any minimal Yamabe pair $(K_0,H_0)\in \mathcal{Y}$ and set $L=\rfrac{H_0}{K_0}$. Now, for any minimal Yamabe pair $(K,H)$, we define $\pi_{L\mid H}\coloneqq \eta_{(K,H)}\circ \pi_{H/K}:\ H\rightarrow L$ where $\eta_{(K,H)}:\ \rfrac{H}{K}\rightarrow L$ is the canonical isomorphism given by the equivalence between $(K,H)$ and $(K_0,H_0)$. Take any $(K,H),(K',H')\in \mathcal{Y}$. By Lemma \ref{l:lemma uniqueness minimal lie core} and Corollary \ref{c:uniqueness minimal lie core}, $(K\cap K',H\cap H')$ is a minimal Yamabe pair and equivalent to $(K,H)$, $(K',H')$ and $(K_0,H_0)$. For any $h\in H\cap H'$, as $(K\cap K',H\cap H')$ is equivalent to $(K_0,H_0)$, there is $h_0\in H\cap H'\cap H_0$ such that $h^{-1}h_0\in K\cap K'\cap K_0$. Thus, $\pi_{L\mid H}(h)=\pi_{L\mid H}(h_0)=\pi_{H_0/K_0}(h_0)=\pi_{L\mid H'}(h_0)=\pi_{L\mid H'}(h)$. Take $D_L=\bigcup_{\mathcal{Y}} H$ and define $\pi_L=\bigcup_{\mathcal{Y}} \pi_{L\mid H}$. By Remark \ref{o:remarks topological groups}(1), we get a global map $\pi_L:\ D_L\rightarrow L$ such that $\pi_{L\mid H}:\ H\rightarrow L$ is a continuous, closed, open and proper surjective homomorphism with kernel $K$ for each minimal Yamabe pair $(K,H)$.
Suppose $\pi'_L:\ D_L\rightarrow L$ is any other map such that $\pi'_{L\mid H}$ is a continuous, closed, open and proper surjective group homomorphism with kernel $K$ for any minimal Yamabe pair $(K,H)$. Then, by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), we get an isomorphism $\eta:\ \rfrac{H_0}{K_0}\rightarrow L$ such that $\pi'_{L\mid H_0}=\eta\circ\pi_{L\mid H_0}$. Now, for any $(K,H)\in\mathcal{Y}$ and $g\in H$, there is $g_0\in H\cap H_0$ such that $g\in g_0K$. Then, $\pi'_L(g)=\pi'_L(g_0)=\eta\circ\pi_L(g_0)=\eta\circ\pi_L(g)$, concluding that $\pi'_L=\eta\circ\pi_L$. \QED
\end{dem}
\end{prop}
Note that $D_L=\mathrm{Dom}(\pi_L)$ is the union of all the domains of minimal Yamabe pairs of $G$ and $\ker(\pi_L)\coloneqq \pi_L^{-1}(1)$ is the union of all the kernels of minimal Yamabe pairs of $G$. Consequently, $D_L$ and $\ker(\pi_L)$ are invariant under any automorphism of $G$ as a topological group. In particular, both are normal sets (i.e. conjugate invariant sets).
Among all the minimal Yamabe pairs, it is natural to look for the ones with maximal domain. We have the following criterion. \begin{prop} \label{p:minimal yamabe pairs with maximal domain} Let $G$ be a locally compact topological group and $(K,H)$ a minimal Yamabe pair of $G$. Let $K'$ be a compact subgroup of $G$ with $K\leq K'$ such that $H$ normalises $K'$ (i.e. $hK'=K'h$ for any $h\in H$). Then, $K=H\cap K'$, $[K':K]$ is finite and $(K',H')$ is a minimal Yamabe pair of $G$ with $H'=K'H$. Furthermore, $H'$ is a finite union of cosets of $H$.
In particular, $(K,H)$ is a minimal Yamabe pair with maximal domain if and only if there is no compact subgroup $K'\leq G$ normalised by $H$ with $K< K'$.
\begin{dem} Clearly, $K\leq K'\cap H\trianglelefteq H$ is compact. Then, as $K$ is the maximal compact normal subgroup of $H$, we conclude $K'\cap H=K$ and $\rfrac{H}{K'\cap H}=\rfrac{H}{K}=L$. As $K=K'\cap H$ is open in the compact group $K'$, $[K':K]$ is finite. Take $\Delta\subseteq K'$ finite such that $K'=\Delta K$. Note that $\Delta H=K'H$ is a clopen subgroup and $K'\trianglelefteq K'H$. Write $H'=\Delta H$. Then, $\pi_{H'/K'\ \mid H}:\ H\rightarrow \rfrac{H'}{K'}$ is a continuous and closed onto homomorphism. Therefore, by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)), $\rfrac{H'}{K'}$ and $\rfrac{H}{K'\cap H}=L$ are isomorphic. This shows that $(K',H')$ is also a minimal Yamabe pair of $G$. Using also Lemma \ref{l:lemma uniqueness minimal lie core}(1,2), we conclude that this gives a necessary and sufficient condition for the maximality of the domain. \QED
\end{dem}
\end{prop}
Similarly, it is natural to look at minimal Yamabe pairs with minimal kernel. In this case, this question is related to the connected component of $G$.
Recall that the \emph{quasicomponent} of a point in a topological space is the intersection of all its clopen neighbourhoods. By definition, quasicomponents are closed sets containing the connected components. In locally connected spaces, connected components are clopen, so quasicomponents and connected components coincide. Similarly, in every compact Hausdorff space, connected components and quasicomponents coincide \cite[Lemma 29.6]{willard1970general}. In general, however, they may be different --- even for locally compact Hausdorff topological spaces.
In a topological group $G$, as the inversion, the conjugations and the translations are homeomorphisms, the connected component $G^0$ and the quasicomponent $G^{\mathrm{qs}}$ of the identity are both normal closed subgroups of $G$. When $G$ is locally compact, it is a well{\hyp}known fact that $G^0=G^{\mathrm{qs}}$ is the intersection of all the open subgroups of $G$ \cite[Theorem 2.1.4(b)]{dikranjan1990topological}.
Hence, we conclude the following criterion for the existence of a minimal Yamabe pair with minimal kernel.
\begin{prop} \label{p:minimal yamabe pair with minimal domain}
Let $G$ be a locally compact topological group. Then, there is a minimal Yamabe pair with minimal kernel if and only if $G^0$ is open (i.e. $G$ is locally connected). Furthermore, in that case, for any other minimal Yamabe pair $(K,H)$, $(K\cap G^0,G^0)$ is the minimal Yamabe pair of $G$ with minimal kernel.
\begin{dem} Suppose that $(K,H)$ is a minimal Yamabe pair with minimal kernel. As $H$ is clopen by Remark \ref{o:remarks topological groups}(2), we have that $G^0\subseteq H$. On the other hand, for any other open subgroup $H'\leq G$, by Lemma \ref{l:lemma uniqueness minimal lie core}(1), we have that $(K\cap H',H\cap H')$ is a minimal Yamabe pair. As $(K,H)$ is the one with minimal kernel, it follows that $H\cap H'=H$, so $H\subseteq H'$. As $G^0$ is the intersection of all the open subgroups, we conclude that $G^0=H$. Conversely, suppose that $G^0$ is open. Then, by Lemma \ref{l:lemma uniqueness minimal lie core}(1), for any minimal Yamabe pair $(K,H)$, we have that $(K\cap G^0,G^0)$ is a minimal Yamabe pair. Thus, for any minimal Yamabe pairs $(K,H)$ and $(K',H')$, we have that $(K\cap G^0,G^0)$ and $(K'\cap G^0,G^0)$ are minimal Yamabe pairs, and so $K\cap G^0=K'\cap G^0$ by Lemma \ref{l:lemma uniqueness minimal lie core}(2). Therefore, $(K\cap G^0,G^0)$ is the minimal Yamabe pair with minimal kernel. \QED
\end{dem}
\end{prop}
In general, even if $G^0$ is not open, a similar conclusion is ``asymptotically'' true:
\begin{prop} \label{p:asymptotic minimal yamabe pair with minimal domain}
Let $G$ be a locally compact topological group and $L$ its minimal Lie core. Then, the restriction to $G^0$ of the global minimal Lie core map $\pi_{L\mid G^0}:\ G^{0}\rightarrow L$ is a continuous, open, closed and proper surjective group homomorphism.
\begin{dem} Take $(K,H)$ minimal Yamabe pair and $\pi_{L\mid H}:\ H\rightarrow L$. By Proposition \ref{p:global minimal lie core map}, $\pi_{L\mid H}$ is a continuous, open, closed and proper surjective group homomorphism. By definition, $G^0\leq H$. Thus, consider the restriction $\pi_{L\mid G^0}:\ G^0\rightarrow L$. As $G^0$ is a closed subgroup, $\pi_{L\mid G^0}$ is also a continuous, closed and proper group homomorphism. It remains to show that it is onto and open. Let $b\in L$. We want to show that $\pi^{-1}_{L\mid H}(b)\cap G^0\neq \emptyset$. Let $H'\leq G$ be a clopen subgroup such that $H'\leq H$. Then, $\pi_{L\mid H}[H']$ is clopen in $L$. As $L$ is connected, we get that $\pi^{-1}_{L\mid H}(b)\cap H'\neq \emptyset$. Since $\pi^{-1}_{L\mid H}(b)$ is compact and $H'$ is arbitrary, we conclude that $\pi^{-1}_{L\mid H}(b)\cap G^0\neq \emptyset$. Therefore, $\pi_{L\mid G^0}:\ G^0\rightarrow L$ is onto. We conclude that it is also open by the Closed Isomorphism Theorem (Remark \ref{o:remarks topological groups}(4)). \QED
\end{dem}
\end{prop}
\subsection{Local compactness and generic pieces}
A \emph{piecewise $A${\hyp}hyperdefinable group} is a group whose universe is piecewise $A${\hyp}hyperdefinable and whose operations are piecewise bounded $\bigwedge_A${\hyp}definable.
\begin{ejem}
Let $G$ be a definable group and $X\subseteq G$ a symmetric definable subset. Then, the subgroup $H\leq G$ generated by $X$ is a countably piecewise definable group. If $K\trianglelefteq H$ is a piecewise $\bigwedge${\hyp}definable normal subgroup, the quotient $\rfrac{H}{K}=\underrightarrow{\lim} \rfrac{X^n}{K}$ is a countably piecewise hyperdefinable group too. If $K\subseteq X^n$ for some $n$, then $\rfrac{H}{K}$ is also locally hyperdefinable. This corresponds to the case studied in \cite{hrushovski2011stable}.
\end{ejem}
\begin{obs} Piecewise $\bigwedge${\hyp}definable subgroups of piecewise hyperdefinable groups are piecewise hyperdefinable groups. The quotient of a piecewise hyperdefinable group by a normal piecewise $\bigwedge${\hyp}definable subgroup is a piecewise hyperdefinable group.
\end{obs}
Note that the group operations are continuous between the logic topologies by Proposition \ref{p:functions logic topology}(1). However, the product topology and the logic topology may differ, so piecewise hyperdefinable groups with the logic topologies do not need to be topological groups.
\begin{prop} \label{p:small piecewise hyperdefinable groups and translations} Let $G$ be a piecewise hyperdefinable group with a global logic topology. Then, every translation is a homeomorphism in its global logic topology.
\begin{dem} Trivial by Proposition \ref{p:functions logic topology}(1) and Proposition \ref{p:uniqueness of hausdorff logic topologies}. \QED
\end{dem}
\end{prop}
\begin{obs} Groups with a $\mathrm{T}_1$ topology such that every translation is continuous are called \emph{semitopological groups} --- see \cite{husain2018introduction} for an introduction to semitopological groups.
\end{obs}
\begin{teo}\label{t:countably piecewise hyperdefinable groups and topological groups}
Let $G$ be a countably piecewise hyperdefinable group with a global logic topology. Then, $G$ is a topological group with the global logic topology.
\begin{dem} Clear by Proposition \ref{p:functions logic topology}(1) and Proposition \ref{p:product of countably piecewise hyperdefinable sets}. \QED
\end{dem}
\end{teo}
\begin{ejem} \label{e:example no topological group}
We now give an example of a piecewise hyperdefinable group with a global logic topology that is not a topological group. We simply adapt the fundamental example given in \cite[Example 1.2]{tatsuuma1998group} to the setting of piecewise hyperdefinable groups.
First, recall that $\mathbb{Q}^n$ with the usual topology is piecewise hyperdefinable with a global logic topology in the theory of real closed fields by Example \ref{e:example 4}. Now, the inclusion $\psi_{n,m}:\ \mathbb{Q}^m\rightarrow \mathbb{Q}^n$ given by $\psi_{n,m}(x)=(x,0,\ldots,0)$ for $n>m$ is a piecewise bounded $\bigwedge${\hyp}definable $1${\hyp}to{\hyp}$1$ map. Also, the set of pairwise sums of two compact countable subsets of $\mathbb{Q}^n$ is a compact countable subset of $\mathbb{Q}^n$, so $+$ is a piecewise bounded $\bigwedge${\hyp}definable map. Then, $\bigoplus_{\mathbb{N}}\mathbb{Q}=\underrightarrow{\lim}\, \mathbb{Q}^n$ is a piecewise hyperdefinable group. We claim that $\bigoplus_{\mathbb{N}} \mathbb{Q}$ is not a topological group. Consider the set $U=\{x\mathrel{:} |x_j|<|\cos(jx_0)|\mathrm{\ for\ }j\in\mathbb{N}_{>0}\}$. As $x_0\in\mathbb{Q}$ for any $x\in \bigoplus_{\mathbb{N}} \mathbb{Q}$, we have $\cos(jx_0)\neq 0$, so $U$ is an open neighbourhood of $0$. However, there is no open neighbourhood $V$ of $0$ such that $V+V\subseteq U$, concluding that $\bigoplus_{\mathbb{N}}\mathbb{Q}$ is not a topological group. Aiming at a contradiction, suppose otherwise; take $V$ an open neighbourhood of $0$ such that $V+V\subseteq U$. As $V$ is an open neighbourhood of $0$, there is $\varepsilon_0\in\mathbb{R}_{>0}$ such that $\{x\mathrel{:} |x_0|<\varepsilon_0 \mathrm{\ and\ }x_i=0\mathrm{\ for\ }i\in\mathbb{N}_{>0}\}\subseteq V$. Take $n\in\mathbb{N}_{>0}$ such that $2n\varepsilon_0>\pi$. There is then $\varepsilon_1\in\mathbb{R}_{>0}$ such that $\{x\mathrel{:} |x_n|<\varepsilon_1\mathrm{\ and\ }x_i=0\mathrm{\ for\ }i\neq n\}\subseteq V$. Hence, $\{x\mathrel{:} |x_0|<\varepsilon_0,\ |x_n|<\varepsilon_1\mathrm{\ and\ }x_i=0\mathrm{\ for\ }i\in\mathbb{N}\setminus \{0,n\}\}\subseteq V+V\subseteq U$. 
In particular, $(-\varepsilon_0,\varepsilon_0)_{\mathbb{Q}}\times(-\varepsilon_1,\varepsilon_1)_{\mathbb{Q}}\subseteq \{(x_0,x_1)\in \mathbb{Q}\times\mathbb{Q}\mathrel{:} |x_1|<|\cos(nx_0)|\}$. However, this is impossible when $2n\varepsilon_0>\pi$: in that case $\pi/(2n)<\varepsilon_0$, so there are rationals $x_0\in(-\varepsilon_0,\varepsilon_0)$ with $nx_0$ arbitrarily close to $\pi/2$, making $|\cos(nx_0)|$ smaller than $\varepsilon_1$, getting a contradiction.
\end{ejem}
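The failure at the end of the previous example can also be checked numerically. The sketch below is our own illustration, not part of the formal argument; the function name \verb|box_fits| and the concrete choices $\varepsilon_0=1$, $n=2$ (so that $2n\varepsilon_0=4>\pi$) are ours. It approximates $\pi/(2n)$, where $\cos(nx)$ vanishes, by a rational $x_0$ and checks that the box cannot fit inside $U$:

```python
import math
from fractions import Fraction

def box_fits(n, eps0, eps1, max_denominator=10**6):
    """Return False if we find a rational x0 in (-eps0, eps0) with
    |cos(n*x0)| < eps1, i.e. a witness that the box
    (-eps0, eps0) x (-eps1, eps1) is NOT contained in
    {(x0, x1) : |x1| < |cos(n*x0)|}."""
    # cos(n*x) vanishes at x = pi/(2n); approximate it by a rational.
    x0 = Fraction(math.pi / (2 * n)).limit_denominator(max_denominator)
    if abs(x0) < eps0 and abs(math.cos(n * float(x0))) < eps1:
        return False  # witness found: the box does not fit
    return True

# With eps0 = 1 and n = 2 we have 2*n*eps0 = 4 > pi, and the box fails
# to fit no matter how small eps1 is chosen.
assert not box_fits(2, 1.0, 0.5)
assert not box_fits(2, 1.0, 1e-4)
# By contrast, when n*eps0 is small the required rational x0 lies
# outside (-eps0, eps0), and no witness is found.
assert box_fits(1, 0.1, 0.5)
```

This only witnesses the obstruction for one choice of $n$ and $\varepsilon_0$; the argument in the example handles all choices uniformly.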
\begin{teo} \label{t:piecewise hyperdefinable and local compactness}
Let $G$ be a locally hyperdefinable group with a global logic topology. Then, $G$ is a locally compact topological group with this topology.
Furthermore, a countably piecewise hyperdefinable group $G$ is a locally compact topological group with some logic topology if and only if this logic topology is the global logic topology and $G$ is locally hyperdefinable.
\begin{dem} We know that the global logic topology of $G$ is locally compact by Proposition \ref{p:locally compact locally hyperdefinable} and Hausdorff by Proposition \ref{p:hausdorff locally hyperdefinable}. By Proposition \ref{p:product of locally hyperdefinable sets} and Proposition \ref{p:functions logic topology}(1), we conclude that $G$ is a locally compact topological group.
By definition, a logic topology is $\mathrm{T}_1$ if and only if it is the global logic topology. On the other hand, assuming $G=\underrightarrow{\lim}_{n\in\mathbb{N}}G_n$, by Baire's Category Theorem \cite[Theorem 48.2]{munkres1999topology}, if $G$ is locally compact Hausdorff, there are $h$ and $n\in \mathbb{N}$ such that $G_n$ is a neighbourhood of $h$. Thus, for any $g\in G$, $gh^{-1}G_n$ is an $\bigwedge${\hyp}definable neighbourhood of $g$. \mathbb{Q}ED
\end{dem}
\end{teo}
\begin{ejem} \label{e:example no locally compact topological group} We give an example of a countably piecewise hyperdefinable group with a global logic topology which is not locally hyperdefinable; this is the infinite countable direct sum of circles with the inductive topology. Denote the unit circle by $\mathbb{S}\coloneqq\{ x\in \mathbb{C}\mathrel{:} |x|=1\}$, which is hyperdefinable in the theory of real closed fields as the quotient of the usual definable circle by the infinitesimals. For $n>m$, we take the map $\psi_{n,m}:\ \mathbb{S}^m\rightarrow \mathbb{S}^n$ given by $\psi_{n,m}(x)=(x,1,\ldots,1)$. Then, $\bigoplus_{\mathbb{N}} \mathbb{S}\coloneqq \underrightarrow{\lim}\, \mathbb{S}^n$ is a countably piecewise hyperdefinable group with a global logic topology that is not locally hyperdefinable.
A local base of open neighbourhoods of the identity in the global logic topology of $\bigoplus_{\mathbb{N}} \mathbb{S}$ is the family of subsets $U_\varepsilon\coloneqq\{x\mathrel{:} \mathrm{d}_{\mathbb{S}}(x_i,1)<\varepsilon_i\mathrm{\ for\ }i\in\mathbb{N}\}$ for sequences $\varepsilon=(\varepsilon_i)_{i\in\mathbb{N}}$ with $\varepsilon_i\in (0,1]$, where $\mathrm{d}_{\mathbb{S}}$ is the normalised usual distance in the unit circle. Indeed, suppose $U$ is an open neighbourhood of $1$ in $\bigoplus_{\mathbb{N}} \mathbb{S}$. By using compactness in $\mathbb{S}^n$ for each $n\in\mathbb{N}$, we recursively find a sequence $\varepsilon=(\varepsilon_i)_{i\in\mathbb{N}}$ with $\varepsilon_i\in (0,1]$ such that $\{x\mathrel{:} \mathrm{d}_{\mathbb{S}}(x_i,1)\leq \varepsilon_i\mathrm{\ for\ }i\leq n\}\subseteq U\cap \mathbb{S}^n$.
\end{ejem}
Unfortunately, proving that a particular piecewise hyperdefinable set is locally hyperdefinable may be truly hard, as it requires checking a property of a topological space that we understand only vaguely. Until now, the only method available to show that a piecewise hyperdefinable set is locally hyperdefinable relies on Proposition \ref{p:mapping locally hyperdefinable sets} and the fact that piecewise definable sets are trivially locally hyperdefinable (i.e. piecewise definable and locally definable are the same). Sometimes that is not enough. To solve this problem, we introduce generic sets.
Let $G$ be a piecewise hyperdefinable group. A \emph{generic} subset is an $\bigwedge${\hyp}definable subset $V$ such that, for any other $\bigwedge${\hyp}definable subset $W$, $[W:V]\coloneqq\min\{|\Delta|\mathrel{:} W\subseteq \Delta V\}$ is finite, i.e. there is a finite $\Delta\subseteq G$ with $W\subseteq \Delta V$. Obviously, if $V$ is a generic set and $V\subseteq W$ for $\bigwedge${\hyp}definable $W$, then $W$ is also a generic set. In other words, if there are generic sets, the generic sets form an upper set of the family of $\bigwedge${\hyp}definable subsets. Hence, if there is a generic set, then there is in particular a generic piece, i.e. a piece which is a generic set.
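As a toy illustration of the index $[W:V]$ (our own example, not from the text): take $G=\mathbb{Z}=\underrightarrow{\lim}\,[-n,n]$ as a piecewise definable group. Every piece $[-n,n]$ with $n\geq 1$ is generic, since any bounded interval is covered by finitely many of its translates. The sketch below computes $[W:V]$ by brute force over a (hypothetical, but here sufficient) finite candidate set of translates:

```python
from itertools import combinations

def covering_index(W, V, candidates):
    """Smallest number of translates a + V (a drawn from candidates)
    needed to cover W; this equals [W:V] when the candidate set is
    large enough to contain an optimal choice of translates."""
    for k in range(1, len(candidates) + 1):
        for delta in combinations(candidates, k):
            if W <= {a + v for a in delta for v in V}:
                return k
    return None  # W is not covered by any translates from candidates

V = {-1, 0, 1}            # the piece [-1, 1] of Z
W = set(range(-5, 6))     # the piece [-5, 5]
cands = range(-6, 7)
# Each translate of V covers 3 integers and |W| = 11, so [W:V] = 4.
assert covering_index(W, V, cands) == 4
```

Since every piece of $\mathbb{Z}$ has finite index over $[-1,1]$ in this sense, $[-1,1]$ is a generic piece.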
The following theorem is a generalisation of an unpublished example due to Hrushovski.
\begin{teo}[Generic Set Lemma] \label{t:generic set lemma}
Let $G$ be a piecewise hyperdefinable group and $V$ a symmetric generic set. Then, for each $n\in\mathbb{N}$, $V^{n+2}$ is a neighbourhood of $V^n$ in some logic topology. In particular, if $G$ has a generic piece, then $G$ is locally hyperdefinable. Furthermore, when $G$ is small, $G$ is locally hyperdefinable if and only if it has a generic piece.
\begin{dem}Using that $V$ is generic, find a well{\hyp}ordered sequence $(a_\xi)_{\xi\in \alpha}$ in $G$ such that, for every $\bigwedge${\hyp}definable subset $W\subseteq G$, there is $\Delta_W\subseteq \alpha$ finite with $W\subseteq \bigcup_{\xi\in \Delta_W}a_\xi V$. Let $A$ be a set of parameters containing $\{a_\xi\}_{\xi\in\alpha}$ and such that $G=\underrightarrow{\lim}\ G_i$ is a piecewise $A${\hyp}hyperdefinable group and $V$ is $\bigwedge_A${\hyp}definable. From now on, we work on the $A${\hyp}logic topology. We want to show that $V^{n+2}$ is a neighbourhood of $V^n$. Let $\Delta\subseteq \alpha$ be finite and minimal such that $V^{n+4}\subseteq V^{n+2}\cup \bigcup_{\xi\in \Delta}a_\xi V$. Let $U=V^{n+4}\setminus \bigcup_{\xi\in \Delta}a_\xi V$. Obviously, $U\subseteq V^{n+2}$ and $U$ is open in $V^{n+4}$. Note also that $V^n\subseteq U$; otherwise, taking $a\in V^n\setminus U$, there is $\xi\in\Delta$ such that $a\in a_{\xi}V$, so $a_{\xi}\in V^{n+1}$ and $a_{\xi}V\subseteq V^{n+2}$, contradicting minimality of $\Delta$. Similarly, for each piece $G_i$ such that $V^{n+4}\subseteq G_i$, pick a finite and minimal subset $\Delta_i\subseteq \alpha$ such that $G_i\subseteq V^{n+2}\cup \bigcup_{\xi\in \Delta_i}a_\xi V$ and $\Delta\subseteq \Delta_i$. Define $U_i=G_i\setminus \bigcup_{\xi\in\Delta_i}a_\xi V$. Again, it is clear that $U_i\subseteq V^{n+2}\subseteq V^{n+4}$ and $U_i$ is open in $G_i$. Also, by minimality of $\Delta_i$, it follows that $V^n\subseteq U_i$.
We claim that $U_i=U$ for any $i\in I$ such that $V^{n+4}\subseteq G_i$. It is clear by definition that $U_i\subseteq U$. On the other hand, take $a\in V^{n+2}\setminus U_i$. As $a\notin U_i$ and $a\in V^{n+2}\subseteq V^{n+4}\subseteq G_i$, there is $\xi\in\Delta_i$ such that $a\in a_\xi V$. As $a\in V^{n+2}$, $a_\xi\in a\cdot V\subseteq V^{n+3}$, concluding $a_\xi V\subseteq V^{n+4}$. Then, by minimality of $\Delta_i$, it follows that $\xi\in \Delta$, so $a\notin U$. That shows that $U$ is open in $G$, so $V^{n+2}$ is a neighbourhood of $V^n$.
In particular, $V^3$ is a neighbourhood of $V$. As $G=\bigcup_{\xi\in\alpha} a_{\xi}V$, for every $a\in G$ there is $\xi\in\alpha$ such that $a\in a_\xi V\subseteq a_\xi U\subseteq a_\xi V^3$ with $a_\xi U$ open in the $A${\hyp}logic topology. Thus, $a_\xi V^3$ is an $\bigwedge_A${\hyp}definable neighbourhood of $a$ in the $A${\hyp}logic topology, concluding that $G$ is locally $A${\hyp}hyperdefinable.
On the other hand, if $G$ is locally hyperdefinable and small, it is a locally compact topological group by Theorem \ref{t:piecewise hyperdefinable and local compactness}. Therefore, the identity is in the interior of some piece of $G$. Every $\bigwedge${\hyp}definable subset $W$ of $G$ is compact, and so covered by finitely many translates of this piece. As $W$ is arbitrary, this piece is generic. \QED
\end{dem}
\end{teo}
If $G$ has a generic subset, so has $\rfrac{G}{K}$ for $K\trianglelefteq G$ piecewise $\bigwedge${\hyp}definable. Therefore, by the Generic Set Lemma (Theorem \ref{t:generic set lemma}), when $G$ has a generic subset, $\rfrac{G}{K}$ is a locally hyperdefinable group for any piecewise $\bigwedge${\hyp}definable normal subgroup $K\trianglelefteq G$.
A \emph{$T${\hyp}rough $k${\hyp}approximate subgroup} of a group $G$ is a subset $X\subseteq G$ such that $X^2\subseteq \Delta XT$ for some $\Delta\subseteq G$ with $|\Delta|\leq k$, where $k\in\mathbb{N}_{>0}$ and $1\in T\subseteq G$. In particular, a \emph{$k${\hyp}approximate subgroup} is a $1${\hyp}rough $k${\hyp}approximate subgroup.
It is clear from the definitions that every symmetric generic set is in particular an approximate subgroup. Conversely, any $\bigwedge${\hyp}definable approximate subgroup is a generic set of the piecewise hyperdefinable group that it generates. Thus, we can understand generic sets as a strengthening of $\bigwedge${\hyp}definable approximate subgroups.
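\begin{obs} For illustration, consider the following standard example, which is not needed in the sequel. Take $G=(\mathbb{Z},+)$ and $X=[-N,N]$ for some $N\in\mathbb{N}_{>0}$. Then $X$ is a symmetric $2${\hyp}approximate subgroup, since
\[X+X=[-2N,2N]=(-N+X)\cup(N+X),\]
i.e. $X^2\subseteq \Delta X$ with $\Delta=\{-N,N\}$. Moreover, each piece $X^n=[-nN,nN]$ of the piecewise group $\underrightarrow{\lim}\, X^n$ generated by $X$ is covered by the $2n-1$ translates $kN+X$ for $-(n-1)\leq k\leq n-1$.
\end{obs}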
The next corollary follows from Theorem \ref{t:piecewise hyperdefinable and local compactness} and the Generic Set Lemma (Theorem \ref{t:generic set lemma}) as a particular case:
\begin{coro} \label{c:corollary locally hyperdefinable and generic piece} Let $G=\underrightarrow{\lim}\, X^n$ be a piecewise hyperdefinable group generated by an $\bigwedge${\hyp}definable symmetric set $X$ and $T\trianglelefteq G$ be a normal piecewise $\bigwedge${\hyp}definable subgroup of small index. Then, $\rfrac{G}{T}$ is a locally compact topological group if and only if $X^n$ is a $T${\hyp}rough approximate subgroup for some $n$. In particular, if $T$ is $\bigwedge${\hyp}definable, $\rfrac{G}{T}$ is a locally compact topological group if and only if $X^n$ is an approximate subgroup for some $n$.
\end{coro}
An \emph{isomorphism} of piecewise $A${\hyp}hyperdefinable groups is an isomorphism of groups which is also an isomorphism of piecewise $A${\hyp}hyperdefinable sets.
\begin{teo}[Isomorphism Theorem] \label{t:isomorphism theorem}
Let $f:\ G\rightarrow H$ be an onto piecewise bounded and proper $\bigwedge_A${\hyp}definable homomorphism of piecewise $A${\hyp}hyperdefinable groups. Then, for each $\bigwedge_A${\hyp}definable subgroup $S\leq K\coloneqq \ker(f)$ with $S\trianglelefteq G$, there is a unique map $\widetilde{f}_S:\ \rfrac{G}{S}\rightarrow H$ such that $f=\widetilde{f}_S\circ \pi_{G/S}$. This map $\widetilde{f}_S$ is a piecewise bounded and proper $\bigwedge_A${\hyp}definable homomorphism of groups with kernel $\rfrac{K}{S}$. In particular, $\widetilde{f}_K:\ \rfrac{G}{K}\rightarrow H$ is a piecewise $\bigwedge_A${\hyp}definable isomorphism. Furthermore, if $G$ has a global logic topology, then $f$ is an open map between the global logic topologies.
\begin{dem} Existence and uniqueness are given by the usual Isomorphism Theorem. Say $G=\underrightarrow{\lim}\, G_i$ and $H=\underrightarrow{\lim}\, H_j$. We get that $\widetilde{f}_S$ is obviously piecewise $\bigwedge_A${\hyp}definable as, for any pieces $G_i$ and $H_j$, ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{(\rfrac{G_i}{S})\times H_j}[\widetilde{f}_S]={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{G_i\times H_j}[f]$. It is trivially piecewise bounded and proper as $f$ and $\pi_{G/S}$ are so. Assuming that $G$ has a global logic topology, one{\hyp}side translations are continuous. Since, by Proposition \ref{p:functions logic topology}(3), $\widetilde{f}_K$ is a piecewise $\bigwedge_A${\hyp}definable homeomorphism, it is in particular an open map. Since $\pi^{-1}_{G/K} [\pi_{G/K}[U]]=KU=\bigcup_{x\in K} xU$, we see that $\pi_{G/K}$ is open. Therefore, $f=\widetilde{f}_K\circ\pi_{G/K}$ is open too.\mathbb{Q}ED
\end{dem}
\end{teo}
{\textbf{Digression on Machado's Closed Approximate Subgroups Theorem and de Saxc\'{e}'s Product Theorem:}} The Generic Set Lemma (Theorem \ref{t:generic set lemma}) can be rewritten in purely topological terms. Stated in this form, the result was likely already (at least partially) known in the theory of topological groups. Recall that a semitopological group is a group with a $\mathrm{T}_1$ topology such that left and right translations are continuous.
\begin{teo} \label{t:generics}
Let $G$ be a semitopological group with a coherent covering $\mathcal{C}$ by closed symmetric subsets such that, for any $A,B\in\mathcal{C}$, there is $C\in\mathcal{C}$ with $AB\subseteq C$. Let $V$ be a \emph{generic} piece, i.e. an element $V\in\mathcal{C}$ such that $[W:V]$ is finite for every $W\in \mathcal{C}$. Then, $V$ has non{\hyp}empty interior. In particular, if $V$ is compact Hausdorff with the subspace topology, then $G$ is a locally compact topological group. Furthermore, if $\mathcal{C}$ is a countable covering by compact Hausdorff subsets, then $G$ is locally compact Hausdorff if and only if $\mathcal{C}$ contains a generic piece.
\begin{dem} Mimicking the proof of Theorem \ref{t:generic set lemma}, we show that $V^2$ is a neighbourhood of the identity, and so $V$ has non{\hyp}empty interior as finitely many translates of it cover $V^2$. If $V$ is compact Hausdorff, as left translations are homeomorphisms, we get that every point has a compact Hausdorff neighbourhood. Therefore, $G$ is locally compact Hausdorff, so a topological group by Ellis's Theorem \cite[Theorem 2]{ellis1957locally}. The ``furthermore'' part is an immediate consequence of Baire's Category Theorem \cite[Theorem 48.2]{munkres1999topology}. \mathbb{Q}ED
\end{dem}
\end{teo}
As a corollary we get the following notable Closed Approximate Subgroups Theorem, which was first proved by Machado in \cite[Theorem 1.4]{machado2021good}. Recall that the \emph{commensurator} in $G$ of an approximate subgroup $\Lambda\subseteq G$ is the subset $\mathrm{Comm}(\Lambda)=\{g\in G\mathrel{:} g^{-1}\Lambda g\mathrm{\ and\ }\Lambda$ are commensurable$\}$. The commensurator was first introduced by Hrushovski in \cite{hrushovski2022beyond}. The following is one of the most fundamental results about the commensurator:
\begin{lem}{\textnormal{\textbf{\cite[Lemma 5.1]{hrushovski2022beyond}}}} \label{l:commensurator} Let $G$ be a group and $\Lambda\subseteq G$ an approximate subgroup. Then, $\mathrm{Comm}(\Lambda)=\bigcup \{\Lambda'\subseteq G$ approximate $\mathrm{subgroup}\mathrel{:} \Lambda'$ and $\Lambda$ are commensurable$\}$ and $\mathrm{Comm}(\Lambda)\leq G$.
\end{lem}
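\begin{obs} As a trivial but instructive illustration: if $G$ is abelian, then $g^{-1}\Lambda g=\Lambda$ for every $g\in G$, so $\mathrm{Comm}(\Lambda)=G$ for any approximate subgroup $\Lambda\subseteq G$; for instance, $\mathrm{Comm}([-N,N])=\mathbb{Z}$. The commensurator only becomes a non{\hyp}trivial invariant in the non{\hyp}abelian setting.
\end{obs}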
\begin{coro}[Machado's Closed Approximate Subgroups Theorem] \label{c:machado}
Let $G$ be a locally compact topological group and $X$ a closed approximate subgroup. Take $\Lambda=\overline{X^2\cap K^2}$ where $K$ is a compact symmetric neighbourhood of the identity. Then, $\mathrm{Comm}(\Lambda)$, with the direct limit topology given by $\Omega=\{\Lambda'\mathrel{:} \Lambda'$ compact approximate subgroup commensurable to $\Lambda\}$, is a locally compact topological group such that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}:\ \mathrm{Comm}(\Lambda)\rightarrow G$ is a continuous $1${\hyp}to{\hyp}$1$ group homomorphism, $X\subseteq \mathrm{Comm}(\Lambda)$, ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}_{\mid X}$ is a homeomorphism and $X$ has non{\hyp}empty interior. Furthermore, if $G$ is a Lie group, then $\mathrm{Comm}(\Lambda)$ is a Lie group too.
\begin{dem} As $\Lambda$ is compact, for any $B$ commensurable to $\Lambda$, we have that $\overline{B}$ is compact and commensurable to $\Lambda$. Thus, $\Omega$ is a covering of $\mathrm{Comm}(\Lambda)$ by Lemma \ref{l:commensurator}. By construction, $\Lambda$ is generic in $\Omega$. Also, by an easy computation, if $\Lambda_1$ and $\Lambda_2$ are commensurable approximate subgroups, then $\Lambda_1\Lambda_2\cup \Lambda_2\Lambda_1$ is an approximate subgroup commensurable to $\Lambda_1$ and to $\Lambda_2$. Thus, by Theorem \ref{t:generics}, $\mathrm{Comm}(\Lambda)$ with the direct limit topology is a locally compact topological group such that ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}:\ \mathrm{Comm}(\Lambda)\rightarrow G$ is a continuous proper $1${\hyp}to{\hyp}$1$ group homomorphism and $\Lambda$ has non{\hyp}empty interior. As any two compact neighbourhoods of the identity are commensurable, by \cite[Lemma 2.2, Lemma 2.3]{machado2021good}, we get that $X^2\cap (K')^2\subseteq \mathrm{Comm}(\Lambda)$ for any compact neighbourhood of the identity $K'$, concluding that $X\subseteq X^2\subseteq \mathrm{Comm}(\Lambda)$. Since $\Lambda\subseteq X^2$ has non{\hyp}empty interior, we get that $X^2$ has non{\hyp}empty interior and so $X$ has non{\hyp}empty interior. If $Y\subseteq X$ is closed in $G$, by continuity of ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}$, it is closed in $\mathrm{Comm}(\Lambda)$. On the other hand, if $Y\subseteq X$ is closed in $\mathrm{Comm}(\Lambda)$, then $Y\cap (K')^2=Y\cap \overline{X^2\cap (K')^2}$ is closed in $\overline{X^2\cap (K')^2}$ (and so in $G$) for any compact symmetric neighbourhood of the identity $K'$ of $G$. By local compactness of $G$, the family of compact symmetric neighbourhoods of the identity forms a local covering of $G$. Thus, by the Local Covering Lemma \ref{l:local coherent}, $Y$ is closed in $G$. 
Since $X$ is closed, we conclude that the subspace topologies of $X$ in $G$ and in $\mathrm{Comm}(\Lambda)$ coincide. Finally, if $G$ is a Lie group, $\mathrm{Comm}(\Lambda)$ is a Lie group by \cite[Chapter III, \S 8.2, Corollary 1]{bourbaki1975lie}. \mathbb{Q}ED
\end{dem}
\end{coro}
\begin{obs}{\textnormal{\textbf{\cite[Theorem 4.1]{machado2021good}}}} If $X$ is compact, we do not need to assume that $G$ is locally compact; in that case, we can simply take $\Lambda=X$.
\end{obs}
Corollary \ref{c:machado} implies the following remarkable result by using Poguntke's Theorem \cite[Theorem 3.3]{poguntke1994dense}. Recall that a connected Lie group is \emph{semi{\hyp}simple} if it has no non{\hyp}trivial connected solvable normal subgroups.
\begin{coro}\label{c:poguntke} Let $G$ be a semi{\hyp}simple Lie group and $X$ a closed approximate subgroup with empty interior. Then, $X$ is contained in a closed subgroup $H\lneq G$. In particular, the subgroup generated by $X$ is not dense.
\begin{dem} By Machado's Closed Approximate Subgroups Theorem (Corollary \ref{c:machado}), there is a subgroup $C\leq G$ containing $X$ that admits a Lie group structure such that $X$ has non{\hyp}empty interior and the map ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}:\ C\rightarrow G$ is a $1${\hyp}to{\hyp}$1$ continuous group homomorphism. By \cite[Theorem 3.3]{poguntke1994dense} and \cite[Proposition 6.5]{lee2002introduction}, as $G$ is semi{\hyp}simple, ${\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}$ has dense image if and only if it is actually an isomorphism. The latter is impossible: an isomorphism would in particular be a homeomorphism, while $X$ has non{\hyp}empty interior in $C$ but empty interior in $G$. Hence, $H\coloneqq\overline{{\mathop{\mbox{\normalfont{\Large{\calligra{i\,}}}}}}[C]}$ is a proper closed subgroup of $G$ containing $X$, and in particular the group generated by $X$ cannot be dense in $G$.\mathbb{Q}ED
\end{dem}
\end{coro}
As a consequence, by an easy ultraproduct argument, we get the following corollary, which provides a variation of de Saxc\'{e}'s Product Theorem \cite[Theorem 1]{desaxce2014product} valid for semi{\hyp}simple Lie groups. For a metric space, write $\overline{\mathbb{D}}_r(x)$ for the closed ball of radius $r$ centred at $x$, and write $\mathrm{N}_r(X)$ for the maximum size of a finite $r${\hyp}separated subset of $X$ (or $\infty$ if no such maximum exists).
\begin{coro}[A Product Theorem for Semisimple Lie Groups] \label{c:de saxce} Let $G$ be a semi{\hyp}simple Lie group and $U$ a compact neighbourhood of the identity. Take some left invariant metric. Let $\sigma<\mathrm{dim}(G)$ and $C,k,s\in \mathbb{N}$. Then, there is $m\in\mathbb{N}$ such that, for any $\overline{\mathbb{D}}_{2^{-m}}(1)${\hyp}rough $k${\hyp}approximate subgroup $X\subseteq U$ satisfying $\mathrm{N}_{2^{-i}}(X)\leq C2^{i\sigma}$ for each $i\leq m$, there is a closed subgroup $H\lneq G$ with $X\subseteq \overline{\mathbb{D}}_{2^{-s}}(H)$.
\begin{dem} Aiming at a contradiction, suppose otherwise. Then, we have a counterexample $X_m$ for each $m\in\mathbb{N}$. Take an ultraproduct in the sense of (unbounded) continuous logic, i.e. take an ultraproduct, take the subgroup generated by $U$ and quotient by the infinitesimals. By compactness of $U$, we then end up with a subset $X\subseteq U$ of $G$. By {\L}o\'{s}'s Theorem, $X$ is a $k${\hyp}approximate subgroup and satisfies $\mathrm{N}_{2^{-i}}(X)\leq C\cdot 2^{i\sigma}$ for every $i\in\mathbb{N}$. Then, $\dim(X)\leq \sigma<\dim(G)$, where $\dim$ denotes the (large) inductive dimension, by \cite[Theorem VII 2]{hurewicz2015dimension} and \cite[Eq.3.17 p.46]{falconer1990fractal}. In particular, $X$ has empty interior in $G$ by \cite[Corollary 1 of Theorem IV 3]{hurewicz2015dimension}. By Corollary \ref{c:poguntke}, as $G$ is semi{\hyp}simple, the closure of the subgroup generated by $X$ is a proper closed subgroup $H\lneq G$ containing $X$. Thus, by {\L}o\'{s}'s Theorem, there is some $m$ such that $X_m\subseteq \overline{\mathbb{D}}_{2^{-s}}(H)$, getting a contradiction with our initial assumption. \mathbb{Q}ED
\end{dem}
\end{coro}
\subsection{Model{\hyp}theoretic components}
We now define some model{\hyp}theoretic components for piecewise hyperdefinable groups. Let $G$ be a piecewise $A${\hyp}hyperdefinable group.
The \emph{invariant component of $G$ over $A$} is
\[G^{000}_A\coloneqq\bigcap\left\{H\leq G\mathrel{:} H\mbox{ is }A\mbox{{\hyp}invariant with }[G:H]\mbox{ small}\right\}.\]
The \emph{infinitesimal component of $G$ over $A$} is
\[G^{00}_A\coloneqq\bigcap\left\{H\leq G\mathrel{:} H\mbox{ is p/w. }{\textstyle{\bigwedge}}\mbox{{\hyp}def. with } G^{000}_A\leq H\right\}.\]
The \emph{connected component of $G$ over $A$} is
\[G^0_A\coloneqq\bigcap \left\{ H\leq G\mathrel{:} H\ \mathrm{and}\ G\setminus H \mbox{ are p/w. }{\textstyle{\bigwedge}}\mbox{{\hyp}def. with }G^{000}_A\leq H\right\}.\]
Obviously, $G^{000}_A\leq G^{00}_A\leq G^0_A\leq G$.
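\begin{obs} A standard example to keep in mind: if $G$ is the group of realisations of $(\mathbb{Z},+)$ in a monster model of its theory, then, as the theory is stable, all three components coincide:
\[G^{000}_\emptyset=G^{00}_\emptyset=G^{0}_\emptyset=\bigcap_{n\in\mathbb{N}_{>0}}nG,\]
and $\rfrac{G}{G^0_\emptyset}\cong \widehat{\mathbb{Z}}$, the profinite completion of $\mathbb{Z}$, with the logic topology corresponding to the profinite topology.
\end{obs}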
\begin{lem} \label{l:normal subgroup of small index}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group and $T\leq G$ be an $A${\hyp}invariant subgroup of small index. Then, there is a unique maximal normal subgroup $\widetilde{T}\trianglelefteq G$ contained in $T$. Furthermore, $\widetilde{T}$ is $A${\hyp}invariant and has small index. Moreover, $\widetilde{T}$ is piecewise $\bigwedge${\hyp}definable when $T$ is so.
\begin{dem} Take $\widetilde{T}=\bigcap_{g\in G} T^{g}$, the normal core of $T$ in $G$. As $T\leq \mathrm{N}_G(T)$, this is in fact an intersection of at most $[G:T]$ conjugates of $T$, so $\widetilde{T}$ has small index. It is clearly the unique maximal normal subgroup of $G$ contained in $T$. Finally, since $T$ is $A${\hyp}invariant, the family of its conjugates is $A${\hyp}invariant, so $\widetilde{T}$ is $A${\hyp}invariant; and, being a small intersection of piecewise $\bigwedge${\hyp}definable subgroups, $\widetilde{T}$ is piecewise $\bigwedge${\hyp}definable when $T$ is so. \mathbb{Q}ED
\end{dem}
\end{lem}
Let $B$ be a set of parameters with $A\subseteq B$. The \emph{$B${\hyp}logic topology} in $\rfrac{G}{G^{000}_A}$ is the one such that $V\subseteq \rfrac{G}{G^{000}_A}$ is closed if and only if $\pi^{-1}_{G/G^{000}_A}[V]$ is piecewise $\bigwedge_B${\hyp}definable. The global logic topology in $\rfrac{G}{G^{000}_A}$ is the one such that $V\subseteq \rfrac{G}{G^{000}_A}$ is closed if and only if $\pi^{-1}_{G/G^{000}_A}[V]$ is piecewise $\bigwedge${\hyp}definable.
\begin{teo}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group. Then:
{\textnormal{\textbf{(1)}}} $G^{000}_A$ is an $A${\hyp}invariant normal subgroup of $G$ and $[G:G^{000}_A]$ is small. In fact, $[V:G^{000}_A]\leq 2^{|A|+|\mathscr{L}|}$ for any $\bigwedge${\hyp}definable subset $V\subseteq G$.
{\textnormal{\textbf{(2)}}} Let $B$ be a small set of parameters with $A\subseteq B$. The inversion map is continuous on $\rfrac{G}{G^{000}_A}$ with the $B${\hyp}logic topology.
{\textnormal{\textbf{(3)}}} The global logic topology on $\rfrac{G}{G^{000}_A}$ coincides with the $B${\hyp}logic topology for every small $B$ containing $A$ and a set of representatives of $\rfrac{G}{G^{000}_A}$. Every translation map is continuous on $\rfrac{G}{G^{000}_A}$ with the global logic topology.
{\textnormal{\textbf{(4)}}} $G^{00}_A$ is a piecewise $\bigwedge_A${\hyp}definable normal subgroup of $G$. Furthermore, $\rfrac{G^{00}_A}{G^{000}_A}$ is the closure of the identity in the global logic topology of $\rfrac{G}{G^{000}_A}$.
{\textnormal{\textbf{(5)}}} Let $\pi:\ \rfrac{G}{G^{000}_A}\rightarrow \rfrac{G}{G^{00}_A}$ be the natural quotient map given by $\pi_{G/G^{00}_A}=\pi\circ\pi_{G/G^{000}_A}$. Then, $\pi$ is a Kolmogorov map between the $B${\hyp}logic topologies for any $B$ small with $A\subseteq B$. In particular, it is the Kolmogorov quotient between the global logic topologies. If $\rfrac{G}{G^{00}_A}$ is a topological group, then the group operations in $\rfrac{G}{G^{000}_A}$ are continuous.
{\textnormal{\textbf{(6)}}} $G^0_A$ is a piecewise $\bigwedge_A${\hyp}definable normal subgroup of $G$.
{\textnormal{\textbf{(7)}}} $\rfrac{G}{G^{00}_A}$ is a locally compact topological group whenever $G$ has a generic piece modulo $G^{00}_A$. In that case, $\rfrac{G^0_A}{G^{00}_A}$ is the connected component of $\rfrac{G}{G^{00}_A}$ in the global logic topology and $\rfrac{G^0_A}{G^{000}_A}$ is the connected component of $\rfrac{G}{G^{000}_A}$ in the global logic topology.
\begin{dem} {\textnormal{\textbf{(1)}}} Trivially, $G^{000}_A$ is $A${\hyp}invariant and $[G:G^{000}_A]$ is small. Using Lemma \ref{l:normal subgroup of small index}, it is obvious that $G^{000}_A$ is a normal subgroup. Finally, recall that, for real elements, an $A^*${\hyp}invariant equivalence relation on a definable set with a small amount of equivalence classes has at most $2^{|A^*|+|\mathscr{L}|}$ equivalence classes --- indeed, such a relation is coarser than having the same type over an elementary substructure containing $A^*$. The same holds for $A${\hyp}invariant equivalence relations on hyperdefinable sets. Consequently, we actually have $[V:G^{000}_A]\leq 2^{|A|+|\mathscr{L}|}$ for any $\bigwedge${\hyp}definable $V$.
{\textnormal{\textbf{(2)}}} Trivial.
{\textnormal{\textbf{(3)}}} Take $B$ small containing $A$ and a set of representatives of $\rfrac{G}{G^{000}_A}$. We have that, for any $V\subseteq \rfrac{G}{G^{000}_A}$, $\pi^{-1}_{G/G^{000}_A}[V]$ is $B${\hyp}invariant.
In particular, the global logic topology on $\rfrac{G}{G^{000}_A}$ is the $B${\hyp}logic topology, so it is a well{\hyp}defined topology. Finally, as $\pi_{G/G^{000}_A}$ is a homomorphism and every translation map is piecewise bounded $\bigwedge${\hyp}definable, we conclude, by Proposition \ref{p:functions logic topology}(1), that every translation map is continuous in $\rfrac{G}{G^{000}_A}$ with the global logic topology.
{\textnormal{\textbf{(4)}}} Let $\widehat{V}$ be the closure of the identity on the global logic topology of $\rfrac{G}{G^{000}_A}$ and $V=\pi^{-1}_{G/G^{000}_A}[\widehat{V}]$. Obviously, $V\subseteq G^{00}_A$. On the other hand, as translations are continuous by point {\textnormal{\textbf{(3)}}}, it follows that $\widehat{V}^{-1}\widehat{V}\subseteq \widehat{V}$ and $\bar{g}\widehat{V}\bar{g}^{-1}\subseteq \widehat{V}$ for any $\bar{g}\in\rfrac{G}{G^{000}_A}$. Thus, $V$ is a normal subgroup, concluding $G^{00}_A=V$. In particular, $G^{00}_A$ is a piecewise $\bigwedge_A${\hyp}definable normal subgroup.
{\textnormal{\textbf{(5)}}} As translations are continuous, for any $\bar{g}\in\rfrac{G}{G^{000}_A}$, the closure of $\bar{g}$ in the global logic topology is $\bar{g}\cdot\rfrac{G^{00}_A}{G^{000}_A}$. Thus, $\pi$ is the Kolmogorov quotient between the global logic topologies.
By Lemma \ref{l:kolmogorov maps}, ${\mathrm{Im}}\, \pi$ is a lattice isomorphism between the global logic topologies with inverse ${\mathrm{Im}}^{-1}\pi$. Now, as $\pi$ is $A${\hyp}invariant, it follows that ${\mathrm{Im}}\, \pi$ is a lattice isomorphism between the $B${\hyp}logic topologies with inverse ${\mathrm{Im}}^{-1}\pi$. Therefore, by Lemma \ref{l:kolmogorov maps}, we conclude that $\pi$ is a Kolmogorov map between the $B${\hyp}logic topologies.
Now, suppose $\rfrac{G}{G^{00}_A}$ is a topological group. Take $U\subseteq \rfrac{G}{G^{000}_A}$ open and $\bar{g}_1\bar{g}_2\in U$. Then, $\pi(\bar{g}_1)\pi(\bar{g}_2)\in \pi[U]$. As $\pi[U]$ is open, there are $\widehat{U}_1$ and $\widehat{U}_2$ open in $\rfrac{G}{G^{000}_A}$ such that $\pi(\bar{g}_1)\in \widehat{U}_1$ and $\pi(\bar{g}_2)\in \widehat{U}_2$ and $\widehat{U}_1\widehat{U}_2\subseteq \pi[U]$. Write $U_1=\pi^{-1}[\widehat{U}_1]$ and $U_2=\pi^{-1}[\widehat{U}_2]$. By Lemma \ref{l:kolmogorov maps}, $\pi^{-1}[\pi[U]]=U$. Then, $U_1U_2\subseteq U$ with $\bar{g}_1\in U_1$, $\bar{g}_2\in U_2$ and $U_1$ and $U_2$ open. As $U$, $\bar{g}_1$ and $\bar{g}_2$ are arbitrary, we conclude that the product operation is continuous.
{\textnormal{\textbf{(6)}}} For any $H\leq G$ with $G^{000}_A\leq H$ such that $H$ and $G\setminus H$ are piecewise $\bigwedge${\hyp}definable, we have that $\rfrac{H}{G^{00}_A}$ is a clopen subgroup in the global logic topology. Hence, $\rfrac{G^0_A}{G^{00}_A}$ is the intersection of all the clopen subgroups of $\rfrac{G}{G^{00}_A}$ in the global logic topology. It follows that $\rfrac{G^0_A}{G^{00}_A}$ is closed in the global logic topology, so $G^0_A$ is piecewise $\bigwedge${\hyp}definable. As it is $A${\hyp}invariant, we conclude that $G^0_A$ is piecewise $\bigwedge_A${\hyp}definable. Finally, as every translation is a homeomorphism in $\rfrac{G}{G^{00}_A}$, we conclude that $\rfrac{G^0_A}{G^{00}_A}$ is a normal subgroup of $\rfrac{G}{G^{00}_A}$. Thus, $G^0_A\trianglelefteq G$.
{\textnormal{\textbf{(7)}}} By the Generic Set Lemma (Theorem \ref{t:generic set lemma}) and Theorem \ref{t:piecewise hyperdefinable and local compactness}, $\rfrac{G}{G^{00}_A}$ with the global logic topology is a locally compact topological group if it has a generic piece. In that case, by \cite[Theorem 2.1.4(b)]{dikranjan1990topological}, $\rfrac{G^0_A}{G^{00}_A}$ is the connected component of $\rfrac{G}{G^{00}_A}$. By point \textbf{(5)}, $\rfrac{G^0_A}{G^{000}_A}$ is the connected component of $\rfrac{G}{G^{000}_A}$. \mathbb{Q}ED
\end{dem}
\end{teo}
We now define a final, special model{\hyp}theoretic component which has no analogue in the definable or hyperdefinable case. I want to thank Hrushovski for all his help, via private conversations, in relation to this result. Recall that an aperiodic topological group is a topological group that has no non{\hyp}trivial compact normal subgroups.
Let $G$ be a piecewise hyperdefinable group with a generic piece. The \emph{aperiodic component} $G^{\mathrm{ap}}$ of $G$ is the smallest piecewise $\bigwedge${\hyp}definable normal subgroup of $G$ with small index such that $\rfrac{G}{G^{\mathrm{ap}}}$ is an aperiodic locally compact topological group with its global logic topology.
\begin{lem} \label{l:lemma aperiodic} Let $G$ be an aperiodic locally compact topological group which is the union of $\lambda$ compact subsets with $\lambda\geq \aleph_0$. Then, $|G|\leq (\lambda+2^{\aleph_0})^{\lambda}$.
\begin{dem} By Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe} and Corollary \ref{c:minimal yamabe pairs}, there is an open subgroup $H\leq G$ and a compact subgroup $K\trianglelefteq H$ such that $\rfrac{H}{K}$ is an aperiodic connected Lie group. Thus, $[H:K]\leq 2^{\aleph_0}$. On the other hand, as $G$ is a union of $\lambda$ compact subsets, $[G:H]\leq \lambda$, so $[G:K]\leq \lambda+2^{\aleph_0}$. Now, take $\{a_i\}_{i\in \lambda}$ a set of representatives of $\rfrac{G}{H}$ and write $K_i\coloneqq K^{a_i}=a_iKa_i^{-1}$ for each $i$. As $K\trianglelefteq H$, for any $g\in G$, we have $K^g=K_i$ for some $i\in \lambda$. Note that $[G:K_i]\leq \lambda+ 2^{\aleph_0}$, so $[G:\bigcap_{i\in \lambda} K_i]\leq (\lambda+ 2^{\aleph_0})^{\lambda}$. As $G$ is aperiodic, $\bigcap_{i\in \lambda} K_i=1$, so we conclude $|G|\leq (\lambda+ 2^{\aleph_0})^{\lambda}$. \mathbb{Q}ED
\end{dem}
\end{lem}
\begin{teo} \label{t:the aperiodic component} Let $G$ be a piecewise $0${\hyp}hyperdefinable group with a generic piece. Then, $G^{\mathrm{ap}}$ exists, is piecewise $\bigwedge_0${\hyp}definable and does not change by expansions of the language.
\begin{dem} Let $\lambda=\mathrm{cf}(G)$ and $\tau=(\lambda+2^{\aleph_0})^{\lambda}$. By assumption, $\kappa$ is a strong limit cardinal with $\kappa>\lambda+2^{\aleph_0}$, and so $\kappa>\tau^+$. For any small subset of parameters $A$, let $\mathcal{H}_A$ be the family of normal subgroups $H\trianglelefteq G$ of small index which are piecewise $\bigwedge_{\leq \tau}${\hyp}definable with parameters from $A$ (i.e. piecewise $\bigwedge_B${\hyp}definable for some $B\subseteq A$ with $|B|\leq \tau$) such that $\rfrac{G}{H}$, with its global logic topology, is an aperiodic locally compact topological group.
{\textnormal{\textbf{Claim:}}} For any $A$ small, $\mathcal{H}_A$ is closed under arbitrary intersections. Furthermore, for any $\mathcal{F}\subseteq \mathcal{H}_A$ there is $\mathcal{F}_0\subseteq \mathcal{F}$ with $|\mathcal{F}_0|\leq \tau$ and $\bigcap \mathcal{F}_0=\bigcap\mathcal{F}$.
{\textnormal{\textbf{Proof of claim:}}} Take $\mathcal{F}\subseteq \mathcal{H}_A$. Then, $H\coloneqq \bigcap \mathcal{F}$ is a piecewise $\bigwedge_A${\hyp}definable normal subgroup of $G$ of small index. As $G$ has a generic piece, $\rfrac{G}{H}$ has a generic piece as well, so $\rfrac{G}{H}$ is a locally compact topological group with its global logic topology by the Generic Set Lemma (Theorem \ref{t:generic set lemma}) and Theorem \ref{t:piecewise hyperdefinable and local compactness}. Take $K\leq G$ such that $\rfrac{K}{H}\trianglelefteq \rfrac{G}{H}$ is a compact normal subgroup in its global logic topology. Take $F\in\mathcal{F}$ arbitrary. We know that $\pi:\ \rfrac{G}{H}\rightarrow\rfrac{G}{F}$ is a continuous homomorphism between the global logic topologies, so the image $\rfrac{KF}{F}$ of $\rfrac{K}{H}$ is a compact normal subgroup of $\rfrac{G}{F}$. As $\rfrac{G}{F}$ is aperiodic, we conclude that $KF=F$, i.e. $K\leq F$. As $F\in\mathcal{F}$ is arbitrary, we conclude that $K\subseteq\bigcap \mathcal{F}=H$, so $\rfrac{G}{H}$ is aperiodic. Since $\rfrac{G}{H}$ is an aperiodic locally compact topological group, by Lemma \ref{l:lemma aperiodic}, $[G:H]\leq \tau$.
If $|\mathcal{F}|\leq \tau$, then $H$ is piecewise $\bigwedge_{\leq \tau}${\hyp}definable, so $H\in\mathcal{H}_A$. Now, we claim that there is $\mathcal{F}_0\subseteq \mathcal{F}$ with $|\mathcal{F}_0|\leq \tau$ such that $\bigcap \mathcal{F}_0=H$. Indeed, suppose otherwise; then we can recursively find a sequence $(F_i)_{i\in\tau^+}$ of elements of $\mathcal{F}$ such that $H<\bigcap_{i\in \alpha} F_i\subsetneq \bigcap_{i\in \beta}F_i$ for $\beta<\alpha<\tau^+$. Thus, we can find a sequence $(g_i)_{i\in \tau^+}$ such that $g_i\in \rfrac{F_j}{H}$ for $j\leq i$ and $g_i\notin \rfrac{F_{i+1}}{H}$; as these elements are pairwise distinct, this contradicts $[G:H]\leq \tau$. \qed
Take a $\tau^+${\hyp}saturated elementary substructure $\mathfrak{N}\preceq \mathfrak{M}$. Set $G^{\mathrm{ap}}=\bigcap \mathcal{H}_N$, so $G^{\mathrm{ap}}\in \mathcal{H}_N$. By $\tau^+${\hyp}saturation of $\mathfrak{N}$, we have that $G^{\mathrm{ap}}$ is $0${\hyp}invariant, so it is piecewise $\bigwedge_0${\hyp}definable. As $G^{\mathrm{ap}}$ is $0${\hyp}invariant and $\mathfrak{N}$ is $\tau^+${\hyp}saturated, we conclude that $G^{\mathrm{ap}}=\bigcap\mathcal{H}_A$ for any $A$. Therefore, $G^{\mathrm{ap}}$ is the smallest piecewise $\bigwedge_{\leq \tau}${\hyp}definable normal subgroup of $G$ with small index such that $\rfrac{G}{G^{\mathrm{ap}}}$ is an aperiodic locally compact topological group with its global logic topology.
Now, we show that $G^{\mathrm{ap}}$ does not change by expansions of the language. In any $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous $\mathscr{L}'/\mathscr{L}${\hyp}expansion $\mathfrak{M}'$ of an elementary extension of $\mathfrak{M}$, we find a piecewise $\bigwedge_0${\hyp}definable normal subgroup $G^{\mathrm{ap}}_{\mathscr{L}'}\trianglelefteq G$ which is the smallest $\bigwedge_{\leq \tau}${\hyp}definable normal subgroup of $G$ of small index such that $\rfrac{G}{G^{\mathrm{ap}}_{\mathscr{L}'}}$ is an aperiodic locally compact topological group with its global logic topology. Since $[G:G^{\mathrm{ap}}_{\mathscr{L}'}]\leq\tau$ by Lemma \ref{l:lemma aperiodic}, there is a large enough $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous $\mathscr{L}_0/\mathscr{L}${\hyp}expansion $\mathfrak{M}_0$ of an elementary extension of $\mathfrak{M}$ such that, for any further $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous $\mathscr{L}_1/\mathscr{L}_0${\hyp}expansion $\mathfrak{M}_1$ of an elementary extension of $\mathfrak{M}_0$, we have that $G^{\mathrm{ap}}_0(\mathfrak{M}_1)=G^{\mathrm{ap}}_1$ --- where $G^{\mathrm{ap}}_1$ is computed in $\mathfrak{M}_1$, $G^{\mathrm{ap}}_0$ is computed in $\mathfrak{M}_0$ and $G^{\mathrm{ap}}_0(\mathfrak{M}_1)$ is the realisation of $G^{\mathrm{ap}}_0$ in ${\mathfrak{M}_1}_{\mid \mathscr{L}_0}$.
Take this base expansion $\mathfrak{M}_0$ of an elementary extension of $\mathfrak{M}$, with language $\mathscr{L}_0$. By replacing $\mathfrak{M}_0$ by an elementary extension, we can assume that the $\mathscr{L}${\hyp}reduct ${\mathfrak{M}_0}_{\mid \mathscr{L}}$ of $\mathfrak{M}_0$ is a $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous elementary extension of $\mathfrak{M}$. Let $G^{\mathrm{ap}}_0$ be computed in $\mathfrak{M}_0$. Take $\alpha\in\mathrm{Aut}({\mathfrak{M}_0}_{\mid\mathscr{L}})$. Consider the language $\mathscr{L}_1$ expanding $\mathscr{L}_0$ by adding a new symbol $\alpha x$ of the same sort that $x$ for each symbol $x$ of $\mathscr{L}_0$ which is not in $\mathscr{L}$. Take the $\mathscr{L}_1${\hyp}expansion $\mathfrak{M}'_1$ of $\mathfrak{M}_0$ given by interpreting $(\alpha x)^{\mathfrak{M}'_1}=\alpha( x^{\mathfrak{M}_0})$. Let $\mathfrak{M}_1$ be a $\kappa${\hyp}saturated and strongly $\kappa${\hyp}homogeneous elementary extension of $\mathfrak{M}'_1$. Let $G^{\mathrm{ap}}_0(\mathfrak{M}_1)$ be the realisation of $G^{\mathrm{ap}}_0$ in ${\mathfrak{M}_1}_{\mid\mathscr{L}_0}$, $G^{\mathrm{ap}}_1$ be computed in $\mathfrak{M}_1$ and $G^{\mathrm{ap}}_\alpha$ be computed in ${\mathfrak{M}_1}_{\mid \mathscr{L}_1\setminus (\mathscr{L}_0\setminus \mathscr{L})}$. By the choice of $\mathfrak{M}_0$, we know that $G^{\mathrm{ap}}_0(\mathfrak{M}_1)=G^{\mathrm{ap}}_1\subseteq G^{\mathrm{ap}}_0\cap G^{\mathrm{ap}}_\alpha$. Since $G^{\mathrm{ap}}_\alpha(\mathfrak{M}_0)$ is precisely $\alpha(G^{\mathrm{ap}}_0)$, we get that $G^{\mathrm{ap}}_0\subseteq \alpha(G^{\mathrm{ap}}_0)$. Therefore, as $\alpha$ is arbitrary, we conclude that $G^{\mathrm{ap}}_0$ is $\mathrm{Aut}({\mathfrak{M}_0}_{\mid\mathscr{L}})${\hyp}invariant. By Beth's Definability Theorem \cite[Exercise 6.1.4]{tent2012course}, we conclude that $G^{\mathrm{ap}}_0$ is already piecewise $\bigwedge_0${\hyp}definable in $\mathscr{L}$. 
Therefore, $G^{\mathrm{ap}}_0=G^{\mathrm{ap}}$ and $G^{\mathrm{ap}}$ does not change by expansions of the language.
Finally, by invariance under arbitrary expansions of the language, we have that $G^{\mathrm{ap}}$ is the smallest piecewise $\bigwedge${\hyp}definable normal subgroup of $G$ of small index such that $\rfrac{G}{G^{\mathrm{ap}}}$ is an aperiodic locally compact topological group with its global logic topology.
\mathbb{Q}ED
\end{dem}
\end{teo}
\subsection{Lie cores}
Now, we adapt the results about Lie cores to piecewise hyperdefinable groups.
Let $G$ be a piecewise $A${\hyp}hyperdefinable group. An \emph{$A${\hyp}Yamabe pair} of $G$ is a pair $(K,H)$ of subgroups $K\trianglelefteq H\leq G$ which is a Yamabe pair modulo $G^{00}_A$ for the global logic topology of $\rfrac{G}{G^{00}_A}$ and $\rfrac{K}{G^{00}_A}$ is $\bigwedge${\hyp}definable. In other words, it is a pair satisfying the following three properties:
\begin{enumerate}[(i)]
\item $H\leq G$ is a piecewise $\bigwedge${\hyp}definable subgroup whose complement is also piecewise $\bigwedge${\hyp}definable.
\item $K\trianglelefteq H$ is a piecewise $\bigwedge${\hyp}definable normal subgroup of $H$ such that $G^{00}_A\leq K$ and $\rfrac{K}{G^{00}_A}$ is $\bigwedge${\hyp}definable.
\item $L\coloneqq \rfrac{H}{K}$ with the respective global logic topology is a finite dimensional Lie group.
\end{enumerate}
We say that $H$ is the \emph{domain}, $K$ is the \emph{kernel} and $L$ is the \emph{Lie core}. Write $\pi\coloneqq\pi_{H/K}:\ H\rightarrow L$ and $\widetilde{\pi}\coloneqq\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_A}\rightarrow L$ for the quotient maps with $\pi=\widetilde{\pi}\circ\pi_{G/G^{00}_A}$ where $\pi_{G/G^{00}_A}:\ G\rightarrow \rfrac{G}{G^{00}_A}$. We say that a Lie group is an \emph{$A${\hyp}Lie core} of $G$ if it is isomorphic, as Lie group, to the Lie core of some $A${\hyp}Yamabe pair of $G$.
\begin{obss} \label{o:remarks lie core} Let $G$ be a piecewise $A${\hyp}hyperdefinable group. Let $(K,H)$ be an $A${\hyp}Yamabe pair of $G$ and $\pi\coloneqq \pi_{H/K}:\ H\rightarrow L$ its Lie core and $\widetilde{\pi}\coloneqq\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_A}\rightarrow L$ such that $\pi=\widetilde{\pi}\circ\pi_{G/G^{00}_A}$.
{\textnormal{\textbf{(1)}}} The condition $G^{00}\leq K$ essentially says that $[G:H]<\kappa$. Indeed, as $\rfrac{H}{K}$ is Hausdorff, we already have that $[H:K]<\kappa$. Therefore, if $[G:H]<\kappa$, we conclude that $[G:K]<\kappa$, so $G^{00}\leq K$ (for some set of parameters). Thus, the condition of working modulo $G^{00}$ says that $H$ is large in some sense.
On the other hand, the condition that $\rfrac{K}{G^{00}_A}$ is $\bigwedge${\hyp}definable may seem superfluous. Indeed, as $(\rfrac{K}{G^{00}_A},\rfrac{H}{G^{00}_A})$ is a Yamabe pair, $\rfrac{K}{G^{00}_A}$ is compact. If $\rfrac{G}{G^{00}_A}$ has a generic piece (as we will assume in the rest of the section), this is enough to conclude that $\rfrac{K}{G^{00}_A}$ is $\bigwedge${\hyp}definable. However, in general, $\rfrac{G}{G^{00}_A}$ may have compact sets that are not $\bigwedge${\hyp}definable, so we need to add this condition.
{\textnormal{\textbf{(2)}}} Note that $\widetilde{\pi}$ is a piecewise bounded and proper $\bigwedge${\hyp}definable map. Thus, by Proposition \ref{p:functions logic topology}, $\widetilde{\pi}$ is continuous and closed between the global logic topologies. By the Isomorphism Theorem \ref{t:isomorphism theorem}, since $\widetilde{\pi}$ is an onto homomorphism, we have that $\rfrac{(H/G^{00}_A)}{(K/G^{00}_A)}$ and $\rfrac{H}{K}$ are isomorphic as piecewise hyperdefinable groups and $\widetilde{\pi}$ is also an open map between the global logic topologies. Finally, as it has compact fibres, $\widetilde{\pi}$ is also proper \cite[Theorem 3.7.2]{engelking1989general}. In sum, $\widetilde{\pi}$ deeply connects the logic topology of $\rfrac{G}{G^{00}_A}$ and the geometry of $L$.
When $G^{00}_A$ is $\bigwedge_A${\hyp}definable, $\pi$ is also a piecewise bounded and proper $\bigwedge${\hyp}definable surjective homomorphism. Consequently, $\pi$ is a continuous, closed and proper map between the logic topologies (with enough parameters), concluding that the relation between $\rfrac{G}{G^{00}_A}$ and $L$ can be mostly lifted into a relation between $G$ and $L$. In that special case, we say that $\pi:\ H\rightarrow L$ is a \emph{Lie model} of $G$. In general, however, $G^{00}_A$ is only piecewise $\bigwedge_A${\hyp}definable and $\pi$ is only piecewise bounded --- we could even have $G=G^{00}_A$ (e.g. take $G=\bigcup_{n\in\mathbb{N}} [-a_n,a_n]$ with $a_n\in o(a_{n+1})$ for each $n\in\mathbb{N}$ in the theory of real closed fields). In Section 3, we give sufficient conditions to show that $G^{00}_A$ is $\bigwedge_A${\hyp}definable.
{\textnormal{\textbf{(3)}}} $L$ is a minimal Lie core if and only if it is an aperiodic connected Lie group. In that case, $\rfrac{K}{G^{00}_A}$ is the maximal compact normal subgroup of $\rfrac{H}{G^{00}_A}$, so it is its maximal $\bigwedge${\hyp}definable normal subgroup. In particular, every automorphism over $A$ leaving $H$ invariant leaves $K$ invariant.
{\textnormal{\textbf{(4)}}} As $L$ is locally compact and $\widetilde{\pi}$ is proper and continuous, we conclude that $\rfrac{H}{G^{00}_A}$ is locally compact. Since $\rfrac{H}{G^{00}_A}$ is open, we conclude that $\rfrac{G}{G^{00}_A}$ is locally compact with the global logic topology. Thus, if $G$ is a countably piecewise hyperdefinable group, by Theorem \ref{t:countably piecewise hyperdefinable groups and topological groups}, Theorem \ref{t:piecewise hyperdefinable and local compactness} and the Generic Set Lemma (Theorem \ref{t:generic set lemma}), we conclude that $G$ has a generic piece modulo $G^{00}_A$. For that reason, we will assume from now on that $G$ has a generic piece modulo $G^{00}_A$.
{\textnormal{\textbf{(5)}}} Note that our definitions of Lie core and Lie model extend the notion of Lie model used in \cite{hrushovski2011stable}. Suppose that $G$ is a piecewise $A${\hyp}definable group and $G^{00}_A$ is $\bigwedge_A${\hyp}definable. Then, $\pi$ is proper and continuous so, for any $\Gamma\subseteq U\subseteq L$ with $\Gamma$ compact and $U$ open, $\pi^{-1}[\Gamma]$ is $\bigwedge${\hyp}definable and $\pi^{-1}[U]$ is $\bigvee${\hyp}definable. Hence, there is a definable subset $D\subseteq G$ such that
\[\pi^{-1}[\Gamma]\subseteq D\subseteq \pi^{-1}[U].\]
As $L$ is first countable, it is metrisable by Birkhoff{\hyp}Kakutani Theorem \cite[Theorem 1.5.2]{tao2014hilbert}. Thus, every closed set in $L$ is $\mathrm{G_{\delta}}$ \cite[Example 2, page 249]{munkres1999topology}. In particular, it follows that the preimage of any compact set is $\bigwedge_{\omega}${\hyp}definable. When $L$ is connected, it is also second countable, so the preimage of every open set is $\bigvee_{\omega}${\hyp}definable.
\end{obss}
\begin{teo} \label{t:logic existence of lie core} Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece modulo $G^{00}_A$. Then, $G$ has an $A${\hyp}Lie core. If $G$ is countably piecewise hyperdefinable, this condition is also necessary.
Furthermore, for any $U$ such that $\rfrac{U}{G^{00}_A}$ is an open neighbourhood of the identity in the global logic topology of $\rfrac{G}{G^{00}_A}$, there is an $A${\hyp}Yamabe pair $(K,H)$ of $G$ with $K\subseteq UG^{00}_A$.
\begin{dem} ($\Rightarrow$) From Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe}, using the Generic Set Lemma (Theorem \ref{t:generic set lemma}), Theorem \ref{t:piecewise hyperdefinable and local compactness} and Proposition \ref{p:compact locally hyperdefinable}. ($\Leftarrow$) We have explained the necessity of the generic piece assumption in Remark \ref{o:remarks lie core}(4).
\QED
\end{dem}
\end{teo}
\begin{teo} \label{t:logic existence of minimal lie core}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece modulo $G^{00}_A$ and $(K_1,H_1)$ an $A${\hyp}Yamabe pair of $G$. Then, $G$ has a minimal $A${\hyp}Yamabe pair $(K,H)$ smaller than or equal to $(K_1,H_1)$. Furthermore, $H\subseteq U^2$ for any $U$ containing $K_1$ such that $\rfrac{U}{G^{00}_A}$ is a clopen neighbourhood of $\rfrac{K_1}{G^{00}_A}$ in the global logic topology.
\begin{dem} Clear from Corollary \ref{c:minimal yamabe pairs}, using the Generic Set Lemma (Theorem \ref{t:generic set lemma}), Theorem \ref{t:piecewise hyperdefinable and local compactness} and Proposition \ref{p:compact locally hyperdefinable}.
\QED
\end{dem}
\end{teo}
Applying also Corollary \ref{c:uniqueness minimal lie core}, we conclude the following result.
\begin{teo} \label{t:logic uniqueness of minimal lie core}
Let $G$ be a piecewise hyperdefinable group with a generic piece modulo $G^{00}_A$. Then, $G$ has a unique minimal $A${\hyp}Yamabe pair up to equivalence.
\end{teo}
By Proposition \ref{p:global minimal lie core map}, we get a \emph{global minimal Lie core map} $\widetilde{\pi}_L:\ \rfrac{D_L}{G^{00}_A}\rightarrow L$ extending all the minimal $A${\hyp}Yamabe pairs, which is unique up to isomorphisms of $L$. Let $\pi_L:\ D_L\rightarrow L$ be the map given by $\pi_L=\widetilde{\pi}_L\circ \pi_{G/G^{00}_A\ \mid D_L}$. Here, $D_L$ is the union of all the domains of minimal $A${\hyp}Yamabe pairs and $\ker(\pi_L)\coloneqq \pi_L^{-1}(1)$ is the union of all the kernels of minimal $A${\hyp}Yamabe pairs. Consequently, $D_L$ and $\ker(\pi_L)$ are $A${\hyp}invariant.
\begin{obs} As noted in \cite{hrushovski2011stable}, the uniqueness of the minimal Lie core is achieved at a price. Indeed, while Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe} gives us Yamabe pairs of arbitrarily small kernel, we have lost the control over the kernel in Theorem \ref{t:logic existence of minimal lie core}. If we do not care about uniqueness (as in \cite{breuillard2012structure} or \cite{massicot2015approximate}), it could be better just to apply Theorem \ref{t:logic existence of lie core} to find a Yamabe pair $(K,H)$ with arbitrarily small kernel. Also, it may be natural to apply Gleason{\hyp}Yamabe{\hyp}Carolino Theorem \ref{t:gleason yamabe carolino} rather than Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe} to get some extra control on some parameters.
\end{obs}
In Proposition \ref{p:minimal yamabe pairs with maximal domain}, we gave a criterion to find minimal Yamabe pairs with maximal domain in topological groups. Applying it modulo $G^{00}_A$, this result can be easily adapted to piecewise hyperdefinable groups.
Similarly, we can easily adapt Proposition \ref{p:minimal yamabe pair with minimal domain} and Proposition \ref{p:asymptotic minimal yamabe pair with minimal domain} to the context of piecewise hyperdefinable groups by applying them modulo $G^{00}_A$. In particular, it follows that the minimal $A${\hyp}Lie core is piecewise $A${\hyp}hyperdefinable:
\begin{prop} \label{p:connected component and minimal lie core}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece modulo $G^{00}_A$ and let $L$ be the minimal $A${\hyp}Lie core of $G$. Then, the restriction to $G^0_A$ of the global minimal Lie core map $\pi_{L\mid G^0_A}:\ G^0_A\rightarrow L$ is a piecewise bounded $\bigwedge${\hyp}definable surjective group homomorphism. Furthermore, we conclude that $L\cong \rfrac{G^0_A}{\ker(\pi_{L\mid G^0_A})}$.
\end{prop}
The previous Proposition \ref{p:connected component and minimal lie core} gives us a canonical presentation of the minimal $A${\hyp}Lie core of $G$ as the piecewise $A${\hyp}hyperdefinable group $\rfrac{G^0_A}{K}$ with $K\coloneqq\ker(\pi_{L\mid G^0_A})$. Similarly, we get a canonical $A${\hyp}invariant presentation of the global minimal Lie core map $\pi_L:\ D_L\rightarrow L$ by taking $\pi_{L\mid G^0_A}=\pi_{G^0_A/K}$. We now give a more precise description of $K\coloneqq\ker(\pi_{L\mid G^0_A})$ using $G^{\mathrm{ap}}$.
\begin{lem} \label{l:minimal yamabe pair and aperiodic component} Let $G$ be a piecewise $0${\hyp}hyperdefinable group with a generic piece and $G^{\mathrm{ap}}$ the aperiodic component of $G$. Let $(K,H)$ be a minimal $A${\hyp}Yamabe pair of $G$ with Lie core $\pi_{H/K}:\ H\rightarrow L$. Then, $G^{\mathrm{ap}}\cap H\leq K$.
\begin{dem} Let $B$ be the small set of parameters with $A\subseteq B$ such that the $B${\hyp}logic topology of $\rfrac{G}{G^{00}_A}$ is its global logic topology. Denote by $\mathrm{cl}_B$ the closure in the $B${\hyp}logic topology of $G$. We define by recursion the sequence $(J_\alpha)_{\alpha\in{\mathbf{\mathbbm{O}\mathrm{n}}}}$ of piecewise $\bigwedge_B${\hyp}definable normal subgroups of $G$ given by $J_0=G^{00}_A$, $J_{\gamma}=\mathrm{cl}_B(\bigcup_{i\in \gamma} J_i)$ for $\gamma$ limit and $J_{\alpha+1}=\mathrm{cl}_B(\bigcup \mathcal{K}_\alpha)$ where $\mathcal{K}_\alpha$ is the family of piecewise $\bigwedge${\hyp}definable normal subgroups which are $\bigwedge${\hyp}definable modulo $J_\alpha$.
{\textnormal{\textbf{Claim:}}} For $\alpha_0\in{\mathbf{\mathbbm{O}\mathrm{n}}}$ large enough, $J_\alpha=G^{\mathrm{ap}}$ for any $\alpha\geq \alpha_0$.
{\textnormal{\textbf{Proof of claim:}}} We prove inductively that $J_\alpha\subseteq G^{\mathrm{ap}}$ for any $\alpha\in{\mathbf{\mathbbm{O}\mathrm{n}}}$. Obviously, $J_0\subseteq G^{\mathrm{ap}}$. For $\gamma$ limit, assuming that $J_i\subseteq G^{\mathrm{ap}}$ for each $i\in \gamma$, as $G^{\mathrm{ap}}$ is piecewise $\bigwedge_B${\hyp}definable, we get that $J_\gamma\subseteq G^{\mathrm{ap}}$. Finally, assuming that $J_\alpha\subseteq G^{\mathrm{ap}}$, we have a piecewise bounded $\bigwedge${\hyp}definable onto homomorphism $\pi:\ \rfrac{G}{J_{\alpha}}\rightarrow \rfrac{G}{G^{\mathrm{ap}}}$. As $\rfrac{G}{G^{\mathrm{ap}}}$ is aperiodic, we have that every piecewise $\bigwedge${\hyp}definable normal subgroup of $G$ which is $\bigwedge${\hyp}definable modulo $J_{\alpha}$ must be contained in $G^{\mathrm{ap}}$. Therefore, $\bigcup \mathcal{K}_\alpha\subseteq G^{\mathrm{ap}}$, concluding that $J_{\alpha+1}\subseteq G^{\mathrm{ap}}$.
On the other hand, as there are only a small number of piecewise $\bigwedge_B${\hyp}definable normal subgroups of $G$, for large $\alpha_0$, we must have $J_{\alpha_0+1}=J_{\alpha_0}$. In particular, that means that $\rfrac{G}{J_{\alpha_0}}$ is an aperiodic locally compact topological group with its global logic topology, so $G^{\mathrm{ap}}\leq J_{\alpha_0}$. Thus, $G^{\mathrm{ap}}=J_\beta$ for $\beta\geq \alpha_0$. \ $\scriptscriptstyle{\square}$
Now, we prove by induction that, for any $\alpha\in {\mathbf{\mathbbm{O}\mathrm{n}}}$, we have $J_\alpha\cap H\leq K$. Obviously, $J_0\cap H\leq K$. For $\gamma$ limit, assuming that $J_i\cap H\leq K$ for any $i\in \gamma$, we have $H\cap \bigcup_{i\in\gamma} J_i\leq K$. Therefore, $H\cap J_{\gamma}=\mathrm{cl}_B(H\cap \bigcup_{i\in\gamma}J_i)\subseteq K$, as $H$ is clopen and $K$ is closed in the $B${\hyp}logic topology. Finally, assuming $J_\alpha\cap H\leq K$, we have a piecewise bounded $\bigwedge${\hyp}definable onto homomorphism $\pi:\ \rfrac{H}{J_\alpha}\rightarrow L=\rfrac{H}{K}$. As $L$ is aperiodic, we have that $H\cap \bigcup\mathcal{K}_\alpha\subseteq K$. Therefore, $H\cap J_{\alpha+1}=\mathrm{cl}_B(H\cap \bigcup \mathcal{K}_\alpha)\subseteq K$, as $H$ is clopen and $K$ is closed in the $B${\hyp}logic topology. In particular, by the claim, we conclude that $G^{\mathrm{ap}}\cap H\leq K$. \QED
\end{dem}
\end{lem}
\begin{teo} \label{t:canonical representation of the minimal lie core} Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece. Let $L$ be the minimal $A${\hyp}Lie core of $G$ and $\pi_{L\mid G^0_A}:\ G^0_A\rightarrow L$ the restriction to $G^0_A$ of the global minimal Lie core map of $L$. Then, $G^0_A\cap G^{\mathrm{ap}}=\ker(\pi_{L\mid G^0_A})$. In particular, $L\cong\rfrac{G^0_A}{G^{\mathrm{ap}}}$.
\begin{dem} By Lemma \ref{l:minimal yamabe pair and aperiodic component}, we know that $G^{\mathrm{ap}}\cap G^0_A\leq \ker(\pi_{L\mid G^0_A})$. On the other hand, $\ker(\pi_{L\mid G^0_A})=G^0_A\cap \pi_L^{-1}(1)\trianglelefteq G$ is $\bigwedge${\hyp}definable modulo $G^{00}_A\leq G^{\mathrm{ap}}$. Therefore, $\rfrac{\ker(\pi_{L\mid G^0_A})}{G^{\mathrm{ap}}}$ is a compact normal subgroup of $\rfrac{G}{G^{\mathrm{ap}}}$. As it is aperiodic, we conclude that $\ker(\pi_{L\mid G^0_A})\leq G^{\mathrm{ap}}$, so $\ker(\pi_{L\mid G^0_A})=G^{\mathrm{ap}}\cap G^0_A$. \QED
\end{dem}
\end{teo}
In sum, for any piecewise $A${\hyp}hyperdefinable group $G$ with a generic piece, we have the following structure in terms of the components $G^0_A$, $G^{00}_A$ and $G^{\mathrm{ap}}$:
\begin{enumerate}
\item $\rfrac{G^{\mathrm{ap}}\cap G^0_A}{G^{00}_A}$ is a compact topological group.
\item $\rfrac{G^0_A}{G^{\mathrm{ap}}}$ is an aperiodic connected Lie group.
\item $\rfrac{G}{G^0_A}$ is a totally disconnected locally compact topological group, i.e.\ a \emph{locally profinite group}.
\end{enumerate}
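For illustration, consider two standard examples in a $\kappa${\hyp}saturated real closed field $R$ (we only sketch the computations). For $G=\bigcup_{n\in\mathbb{N}}[-n,n]$, one has $G^{00}=G^{\mathrm{ap}}=\mathfrak{m}$, the subgroup of infinitesimals, and $G^0=G$, so the compact and locally profinite parts are trivial and the Lie part is $\rfrac{G^0}{G^{\mathrm{ap}}}\cong(\mathbb{R},+)$. By contrast, for the definable group $G=\mathrm{SO}_2(R)$, one has $G^0=G^{\mathrm{ap}}=G$ and $\rfrac{G}{G^{00}}\cong S^1$, so the Lie and locally profinite parts are trivial and the compact part is $\rfrac{G^{\mathrm{ap}}\cap G^0}{G^{00}}\cong S^1$.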
Unfortunately, if $G^{00}_A=G^{\mathrm{ap}}=G^0_A=G$, all the previous results say nothing about $G$. In the following section, we extend the Stabilizer Theorem to the context of piecewise hyperdefinable groups. This theorem gives sufficient conditions to conclude that, with enough parameters, $G^{00}$ is $\bigwedge${\hyp}definable. As we pointed out at the beginning of the section, in this particular situation, the minimal Lie core gives very precise information about $G$ since the quotient homomorphism is piecewise bounded and proper.
\
We now note that the minimal Lie core is independent of the parameters. Furthermore, as $G^{\mathrm{ap}}$ is independent of expansions of the language, so is the minimal Lie core.
\begin{coro} \label{c:independence of expansions of the minimal lie core} Let $G$ be a piecewise $0${\hyp}hyperdefinable group with a generic piece. Then, the minimal $A${\hyp}Lie core of $G$ is isomorphic to the minimal $0${\hyp}Lie core of $G$. Furthermore, the minimal Lie core of $G$ does not change under expansions of the language.
\begin{dem} By Lemma \ref{l:minimal yamabe pair and aperiodic component}, we have that the minimal $A${\hyp}Lie core of $G$ is isomorphic to the minimal Lie core of $\rfrac{G}{G^{\mathrm{ap}}}$. As the latter does not depend on parameters or expansions of the language, we conclude.
\QED
\end{dem}
\end{coro}
The \emph{$A${\hyp}Lie rank} $\mathrm{Lrank}_A(G)$ of a piecewise hyperdefinable group $G$ with a generic piece modulo $G^{00}_A$ is the dimension of its minimal $A${\hyp}Lie core. As a consequence of Theorem \ref{t:logic uniqueness of minimal lie core}, $\mathrm{Lrank}_A(G)$ is a well{\hyp}defined invariant. Note that, by Corollary \ref{c:independence of expansions of the minimal lie core}, we have that $\mathrm{Lrank}(G)\coloneqq\mathrm{Lrank}_A(G)$ does not depend on the parameters when $G$ has a generic piece.
\begin{prop} \label{p:lie rank} Let $G$ be a piecewise $A${\hyp}hyperdefinable group with a generic piece modulo $G^{00}_A$ and $N\trianglelefteq G$ a piecewise $\bigwedge_A${\hyp}definable normal subgroup of small index. Then,
\[\mathrm{Lrank}_A(G)\geq \mathrm{Lrank}_A(G/N)+\mathrm{Lrank}_A(N).\]
More precisely, let $L$ and $L_{G/N}$ be the minimal $A${\hyp}Lie cores of $G$ and $\rfrac{G}{N}$ respectively, presented with the canonical piecewise $A${\hyp}hyperdefinable structures given by Proposition \ref{p:connected component and minimal lie core}. Let $\pi_L$ and $\pi_{L_{G/N}}$ be their canonical global minimal Lie core maps. Let $(K,H)$ be a minimal $A${\hyp}Yamabe pair of $G$. Then:
{\textnormal{\textbf{(1)}}} $(K\cap N,H\cap N)$ is an $A${\hyp}Yamabe pair of $N$ with Lie core $\rfrac{H\cap N}{K\cap N}\cong L_N\coloneqq\pi_{L\mid H}[N]$. The connected component of $L_N$ is aperiodic so, in particular, $\mathrm{Lrank}_A(N)=\mathrm{dim}(L_N)$.
{\textnormal{\textbf{(2)}}} There is an $\bigwedge${\hyp}definable normal subgroup $T\trianglelefteq \rfrac{H}{N}$ with $\rfrac{K}{N}\trianglelefteq T$ such that $(T,\rfrac{H}{N})$ is a minimal $A${\hyp}Yamabe pair of $\rfrac{G}{N}$.
{\textnormal{\textbf{(3)}}} There is a piecewise bounded $\bigwedge_A${\hyp}definable surjective group homomorphism $\psi:\ L\rightarrow L_{G/N}$ such that $\psi\circ \pi_L=\pi_{L_{G/N}}\circ \pi_{G/N}$ (on the domain of $\pi_L$).
\begin{dem} {\textnormal{\textbf{(1)}}} Firstly, note that $G^{00}_A\leq N$, so we get $N^{00}_A=G^{00}_A$. We have that $\rfrac{H\cap N}{K\cap N}\cong \rfrac{H\cap N}{K}\cong L_N$. Note that $\rfrac{H\cap N}{G^{00}_A}$ is a closed normal subgroup of $\rfrac{H}{G^{00}_A}$. As $\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_A}\rightarrow \rfrac{H}{K}$ is closed between the global logic topologies, we get that $L_N\cong \rfrac{H\cap N}{K}\cong \rfrac{((H\cap N)/G^{00}_A)}{(K/G^{00}_A)}$ is closed in $L\cong \rfrac{H}{K}\cong \rfrac{(H/G^{00}_A)}{(K/G^{00}_A)}$. Therefore, $L_N$ is a closed normal subgroup of $L$, concluding that $L_N$ is a Lie group. Then, $(K\cap N,H\cap N)$ is an $A${\hyp}Yamabe pair of $N$ with Lie core (isomorphic to) $L_N$. Let $L_N^0$ be the connected component of $L_N$. By Lemma \ref{l:maximal compact normal subgroup}, $L^0_N$ has a unique maximal compact normal subgroup $K_{N}$. Thus, by uniqueness, $K_N$ is characteristic in $L^0_N$, which is characteristic in $L_N$. Therefore, $K_N$ is a compact normal subgroup of $L$, so it is trivial by aperiodicity of $L$. In other words, $L^0_N$ is aperiodic. Thus, $L^0_N$ is the minimal $A${\hyp}Lie core of $N$. In particular, $\mathrm{Lrank}_A(N)=\mathrm{dim}(L_N)$.
{\textnormal{\textbf{(2)}}} By \textbf{(1)}, we know that $L_N\trianglelefteq L$ is a closed subgroup, so $L_0\coloneqq\rfrac{L}{L_N}$ is a Lie group too. As $L$ is connected, $L_0$ is connected too. By Lemma \ref{l:maximal compact normal subgroup}, there is a maximal compact normal subgroup $T_0\trianglelefteq L_0$. Then, $\rfrac{L_0}{T_0}$ is a connected aperiodic Lie group.
Since $\rfrac{G}{G^{00}_A}$ is a topological group, we know that the quotient homomorphism \[\pi_{(G/G^{00}_A)/(N/G^{00}_A)}:\ \rfrac{G}{G^{00}_A}\rightarrow \rfrac{(G/G^{00}_A)}{(N/G^{00}_A)}\cong \rfrac{G}{N}\] is an open map. Therefore, $\rfrac{H}{N}$ is an open subgroup of $\rfrac{G}{N}$. Now, $\rfrac{G}{N}$ is small and contains a generic set, so $\rfrac{G}{N}$ is a locally compact topological group by the Generic Set Lemma (Theorem \ref{t:generic set lemma}) and Theorem \ref{t:piecewise hyperdefinable and local compactness}. Thus, $\rfrac{H}{N}$ is clopen. Consider $\phi_0:\ \rfrac{H}{N}\rightarrow L_0$ given by $\phi_0\circ\pi_{(G/N)\mid H}=\pi_{L/L_N}\circ\pi_{L\mid H}$. It is clear that $\phi_0$ is a piecewise bounded $\bigwedge${\hyp}definable surjective group homomorphism with kernel $\rfrac{K}{N}$, which is $\bigwedge${\hyp}definable. Therefore, it is also piecewise proper. By the Isomorphism Theorem \ref{t:isomorphism theorem}, it follows that $(\rfrac{K}{N},\rfrac{H}{N})$ is an $A${\hyp}Yamabe pair of $\rfrac{G}{N}$ with Lie core (isomorphic to) $L_0$.
As $L_0$ is locally hyperdefinable, $T_0$ is $\bigwedge${\hyp}definable. Thus, $\pi_{L_0/T_0}:\ L_0\rightarrow \rfrac{L_0}{T_0}$ is a piecewise bounded and proper $\bigwedge${\hyp}definable surjective group homomorphism. Take $\phi=\pi_{L_0/T_0}\circ\phi_0:\ \rfrac{H}{N}\rightarrow \rfrac{L_0}{T_0}$. Then, $\phi$ is a piecewise bounded and proper $\bigwedge${\hyp}definable surjective group homomorphism. By the Isomorphism Theorem \ref{t:isomorphism theorem}, we conclude that $\rfrac{L_0}{T_0}\cong \rfrac{(H/N)}{T}$ where $T\coloneqq \ker(\phi)$ is an $\bigwedge${\hyp}definable normal subgroup of $\rfrac{H}{N}$ with $\rfrac{K}{N}\trianglelefteq T$. Consequently, $(T,\rfrac{H}{N})$ is a minimal $A${\hyp}Yamabe pair of $\rfrac{G}{N}$ with Lie core (isomorphic to) $\rfrac{L_0}{T_0}\cong L_{G/N}$.
{\textnormal{\textbf{(3)}}} By the Isomorphism Theorem \ref{t:isomorphism theorem}, take an isomorphism $\eta:\ \rfrac{L_0}{T_0}\rightarrow L_{G/N}$ such that $\pi_{L_{G/N}\mid (\rfrac{H}{N})}=\eta\circ\phi$. Consider $\psi=\eta \circ \pi_{L_0/T_0}\circ\pi_{L/L_N}:\ L\rightarrow L_{G/N}$ --- note that, a priori, the definition of $\psi$ depends on $(K,H)$. Obviously, $\psi$ is a piecewise bounded $\bigwedge${\hyp}definable onto group homomorphism. Also, $\psi\circ \pi_{L\mid H}=\pi_{L_{G/N}\mid (\rfrac{H}{N})}\circ \pi_{(G/N)\mid H}$.
Let $(K',H')$ be any other minimal $A${\hyp}Yamabe pair of $G$. For $h'\in H'$, there is $h\in H\cap H'$ such that $\pi_L(h')=\pi_L(h)$, i.e. $h^{-1}h'\in K'$. By point \textbf{(2)}, it follows that $\pi_{L_{G/N}}(\pi_{G/N}(h))=\pi_{L_{G/N}}(\pi_{G/N}(h'))$. Thus, $\psi(\pi_L(h'))=\psi(\pi_L(h))=\pi_{L_{G/N}}(\pi_{G/N}(h))=\pi_{L_{G/N}}(\pi_{G/N}(h'))$. Therefore, $\psi\circ\pi_L=\pi_{L_{G/N}}\circ\pi_{G/N}$ on the domain of $\pi_L$.
It remains to show that $\psi$ is $A${\hyp}invariant. Take $\sigma\in\mathrm{Aut}(\mathfrak{M}/A)$ and $x\in L$. Take $h$ such that $\pi_L(h)=x$. Using that $\pi_L$, $\pi_{L_{G/N}}$ and $\pi_{G/N}$ are $A${\hyp}invariant, we get that \[\psi(\sigma(x))=\psi\circ \pi_L(\sigma(h))=\pi_{L_{G/N}}\circ\pi_{G/N}(\sigma(h))=\sigma(\pi_{L_{G/N}}\circ\pi_{G/N}(h))=\sigma(\psi(x)).\]
Finally, putting everything together, we conclude that
\[\mathrm{Lrank}_A(G/N)=\mathrm{dim}(L_{G/N})\leq \mathrm{dim}(L_0)=\mathrm{Lrank}_A(G)-\mathrm{Lrank}_A(N).\]
\QED
\end{dem}
\end{prop}
The following proposition is a direct consequence of the results of \cite{jing2021nonabelian}. I am very grateful to Chieu{\hyp}Minh for telling me about it.
\begin{prop}[An{\hyp}Jing{\hyp}Tran{\hyp}Zhang bound] Let $G$ be a piecewise hyperdefinable group and $X$ a symmetric generic set of $G$. Then, $\mathrm{Lrank}(G)\leq 12 \log_2(k)^2$ where $k=[X^2:X]$.
\begin{dem}
By working modulo $G^{00}$, we may assume that $G$ is small. Let $(K,H)$ be a minimal Yamabe pair of $G$ and $\pi\coloneqq \pi_{H/K}: H \rightarrow L$ the minimal Lie core. By the Generic Set Lemma (Theorem \ref{t:generic set lemma}), we have that $X^2$ is a symmetric compact neighbourhood of the identity in the global logic topology. Thus, as $H$ is open, $Y=H\cap X^2$ is also a symmetric compact neighbourhood of the identity in the global logic topology. By \cite[Lemma 2.3]{machado2021good}, $Y$ is a $k^3${\hyp}approximate subgroup. As $\pi$ is a continuous and open homomorphism, $\pi[Y]$ is a compact neighbourhood of the identity and a $k^3${\hyp}approximate subgroup. As it is a neighbourhood of the identity, it has positive Haar measure, so the general Brunn{\hyp}Minkowski Inequality \cite[Theorem 1.1]{jing2021nonabelian} applies and we get that $2 \leq k^{\rfrac{3}{\alpha}}$ with $\alpha\coloneqq d-m-h$, where $d=\dim(L)$, $m=\max\{\dim(\Gamma)\mathrel{:} \Gamma\leq L \mathrm{\ compact}\}$ and $h$ is the helix dimension of $L$. On the other hand, by \cite[Corollary 2.15]{jing2021nonabelian}, $h\leq \rfrac{n}{3}$ with $n\coloneqq d-m$, so $2n\leq 9\log_2(k)$. Finally, as $L$ has no compact normal subgroups, by \cite[Fact 3.6, Lemma 3.9]{an2021small}, we conclude that $d\leq \frac{n(n+1)}{2}$, so \[d\leq \frac{1}{8}\lfloor 9\log_2(k)\rfloor^2+\frac{1}{4}\lfloor 9\log_2(k)\rfloor\leq 12\log_2(k)^2.\]
\QED
\end{dem}
\end{prop}
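As a quick sanity check of the constants in the last display, take $k=4$: then $\lfloor 9\log_2(k)\rfloor=18$ and the bound reads
\[d\leq \frac{18^2}{8}+\frac{18}{4}=45\leq 48=12\log_2(4)^2.\]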
\textbf{Breuillard{\hyp}Green{\hyp}Tao Theorem.} Let $\mathcal{A}$ be a particular class of piecewise hyperdefinable groups and $L$ a finite dimensional Lie group. It is then natural to study the \emph{direct problem} for $\mathcal{A}$ or (dually) the \emph{inverse problem} for $L$:
\begin{center}
What are the Lie cores of elements of $\mathcal{A}$?
Is $L$ a Lie core of an element of $\mathcal{A}$?
\end{center}
The classification theorem by Breuillard, Green and Tao for finite approximate subgroups \cite{breuillard2012structure} can be interpreted as an answer to the direct problem for the class of piecewise definable subgroups generated by pseudo{\hyp}finite definable approximate subgroups. Indeed, it can be restated to say that Lie cores of piecewise definable subgroups generated by pseudo{\hyp}finite definable approximate subgroups are nilpotent \cite[Proposition 9.6]{breuillard2012structure}. We consider it interesting to discuss the direct and inverse problems in general. In fact, one may expect that solutions of these questions would yield classification results similar to the one of \cite{breuillard2012structure}.
The following easy examples show that the inverse problem is trivially solved for some basic classes of piecewise hyperdefinable groups. In particular, these examples show that no general classification result for the Lie cores analogous to the one of \cite{breuillard2012structure} could be found in those cases.
\begin{ejems} Any Lie group $L$ is a Lie core of some piecewise hyperdefinable group in the theory of real closed fields. Indeed, $L$ is in particular a second countable manifold, so it is a locally hyperdefinable subset of countable cofinality. By Proposition \ref{p:functions logic topology 2}, the group operations are piecewise bounded $\bigwedge${\hyp}definable, so $L$ is a locally hyperdefinable group. Clearly, $L$ is its own Lie core.
For a slightly more explicit construction, note that any connected Lie group $L$ is the Lie core of some piecewise definable group generated by a definable approximate subgroup. Indeed, connected Lie groups are metrisable by Birkhoff{\hyp}Kakutani Theorem \cite[Theorem 1.5.2]{tao2014hilbert}, so take a left invariant metric $d$ for $L$. As $L$ is locally compact, we may assume that the closed unit ball $\overline{\mathbb{D}}$ is a compact symmetric neighbourhood of the identity. Consider the structure of $L$ with the language of groups, a sort for $\mathbb{R}$ with the language of ordered rings and a function symbol for the metric $d$. Let $L'$ be an $|L|${\hyp}saturated elementary extension of it. Consider the subgroup $H\leq L'$ generated by the closed unit ball, and let $E=\{(a,b)\mathrel{:} d(a,b)<\rfrac{1}{n}\mathrm{\ for\ any\ }n\in \mathbb{N}\}$. Clearly, $\overline{\mathbb{D}}$ is a definable approximate subgroup and $H$ is the piecewise definable group generated by it. Then, $L$ is a Lie core of $H$, as, in fact, we have $\rfrac{H}{E}\cong L$.
In the case of a linear connected Lie group $L$, we can combine both examples. Indeed, in that case, in the real numbers, $L$ is piecewise definable and its metric and group operations are definable, so, after saturation, we just need to take the piecewise definable group generated by the closed unit ball and quotient out by the infinitesimals as in the previous example.
\end{ejems}
\section{Stabilizer Theorem}
In this section, we aim to extend the Stabilizer Theorem \cite[Theorem 3.5]{hrushovski2011stable} to piecewise hyperdefinable groups. The main point of this theorem is that it provides sufficient conditions to conclude that $G^{00}$ is $\bigwedge${\hyp}definable. As we already noted in the previous section, this result has significant consequences for the projection $\pi_L$ of the minimal Lie core.
To prove the Stabilizer Theorem, we need first to extend the model{\hyp}theoretic notions of dividing, forking and stable relation to piecewise hyperdefinable sets. Once these model{\hyp}theoretic notions have been properly defined for piecewise hyperdefinable groups, adapting the original proof of the Stabilizer Theorem is straightforward.
Forking and dividing for hyperimaginaries have already been well studied by many authors (e.g. \cite{hart2000coordinatisation}, \cite{wagner2010simple} and \cite{kim2014simplicity}). Here, in the first subsection, we rewrite the definitions of dividing and forking for hyperdefinable sets in a slightly different way which we find more natural from the point of view of this paper. After that, it is trivial to extend the definition to the context of piecewise hyperdefinable sets.
\subsection{Dividing and forking}
Let $(P_t)_{t\in T}$ be $A${\hyp}hyperdefinable sets. Write $P\coloneqq \prod P_t$. For infinite tuples $\overline{a},\overline{b}\in P$ of hyperimaginaries, we write ${\mathrm{tp}}(\overline{a}/A)={\mathrm{tp}}(\overline{b}/A)$ to mean that ${\mathrm{tp}}(\overline{a}_{\mid T_0}/A)={\mathrm{tp}}(\overline{b}_{\mid T_0}/A)$ for any $T_0\subseteq T$ finite.
Let $(I,<)$ be a linear order. An \emph{$A${\hyp}indiscernible} sequence in $P$ indexed by $(I,<)$ is a sequence $(\overline{a}_i)_{i\in I}$ of hyperimaginary tuples $\overline{a}_i\in P$ such that, for any $n\in \mathbb{N}$ and any $i_1<\cdots<i_n$ and $j_1<\cdots<j_n$,
\[{\mathrm{tp}}(\overline{a}_{i_1},\ldots,\overline{a}_{i_n}/A)={\mathrm{tp}}(\overline{a}_{j_1},\ldots,\overline{a}_{j_n}/A).\]
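A basic first{\hyp}order example: in a $\kappa${\hyp}saturated model of the theory of dense linear orders without endpoints, every strictly increasing sequence $(a_i)_{i\in I}$ of singletons is $\emptyset${\hyp}indiscernible, since, by quantifier elimination, the type of a finite tuple is determined by the order relations among its entries.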
\begin{lem}[Standard Lemma] \label{l:standard lemma for hyperimaginaries} Let $P=\prod_{t\in T}P_t$ be a product of $A${\hyp}hyperdefinable sets and $(I,<)$ and $(J,<)$ two infinite linear orders, with $|T|^+ +|J|\leq\kappa$. Then, given a sequence $(\overline{a}_i)_{i\in I}$ of hyperimaginary tuples in $P$, for any set of representatives $A^*$, there is a sequence $(\overline{b}_j)_{j\in J}$ of hyperimaginary tuples in $P$ with an $A^*${\hyp}indiscernible sequence of representatives $(\overline{b}^*_j)_{j\in J}$ such that
\[({\overline{b}_{j_1}}_{\mid T_0},\ldots,{\overline{b}_{j_n}}_{\mid T_0})\in W\]
for any $n\in \mathbb{N}$, any $j_1<\cdots<j_n$, any $T_0\subseteq T$ finite and any $\bigwedge_{A}${\hyp}definable set $W\subseteq \left(\prod_{t\in T_0} P_t\right)^n$ such that $({\overline{a}_{i_1}}_{\mid T_0},\ldots,{\overline{a}_{i_n}}_{\mid T_0})\in W$ for every $i_1<\cdots<i_n$.
\begin{dem} Trivial from the classical Ehrenfeucht{\hyp}Mostowski Standard Lemma, proved using Ramsey's Theorem \cite[Theorem 5.1.5]{tent2012course}. \QED
\end{dem}
\end{lem}
\begin{coro} \label{c:indiscernible hyperimaginaries}
Let $P=\prod_{t\in T}P_t$ be a product of $A${\hyp}hyperdefinable sets and $(I,<)$ an infinite linear order, with $|T|^+ +|I|\leq\kappa$. Then, a sequence $(\overline{b}_i)_{i\in I}$ of hyperimaginary tuples in $P$ is $A${\hyp}indiscernible if and only if, for some set of representatives $A^*$ of $A$, there is an $A^*${\hyp}indiscernible sequence of representatives $(\overline{b}^*_i)_{i\in I}$.
Furthermore, if $(\overline{b}_i)_{i\in I}$ is $A${\hyp}indiscernible, for any set of representatives $A^*$ there is an $A^{**}${\hyp}indiscernible sequence of representatives $(\overline{b}^*_i)_{i\in I}$ of $(\overline{b}_i)_{i\in I}$ where $A^{**}$ is another set of representatives of $A$ with ${\mathrm{tp}}(A^*)={\mathrm{tp}}(A^{**})$.
\begin{dem} By the Standard Lemma \ref{l:standard lemma for hyperimaginaries} and Corollary \ref{c:types over hyperimaginaries}. \mathbb{Q}ED
\end{dem}
\end{coro}
Let $P=\rfrac{X}{E}$ be an $A${\hyp}hyperdefinable set and ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P:\ X\rightarrow P$ its quotient map. An $\bigwedge${\hyp}definable subset $V\subseteq P$ \emph{divides over $A$} if and only if ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V]$ divides over $A^*$ for some set of representatives $A^*$ of $A$.
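The following classical first{\hyp}order example illustrates dividing.
\begin{ejem} In the theory of an equivalence relation $E$ with infinitely many infinite classes, the formula $xEb$ divides over $\emptyset$: take an $\emptyset${\hyp}indiscernible sequence $(b_i)_{i\in\omega}$ of pairwise $E${\hyp}inequivalent elements realising ${\mathrm{tp}}(b/\emptyset)$. Then $\{xEb_i\}_{i\in\omega}$ is $2${\hyp}inconsistent, as
\[xEb_i\wedge xEb_j\models b_iEb_j\]
for $i\neq j$.
\end{ejem}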
\begin{lem} \label{l:finite character of dividing}
Let $P$ be an $A${\hyp}hyperdefinable set and $V\subseteq P$ an $\bigwedge_{A,B}${\hyp}definable subset. Then, $V$ divides over $A$ if and only if there are a finite tuple $b$ from $B$ and an $\bigwedge_{A,b}${\hyp}definable subset $W\subseteq P$ such that $V\subseteq W$ and $W$ divides over $A$.
\begin{dem} One direction is trivial. Let us check the other. Take a uniform definition $\underline{V}$ of $V$, which exists by Lemma \ref{l:uniform definition}. Take representatives $A^*$ of $A$ such that $\underline{V}(x,A^*,B^*)$ divides over $A^*$. There is then a finite tuple $b$ from $B$ such that $\underline{W}(x,A^*,b^*)\coloneqq \underline{V}(x,A^*,B^*)\cap \mathrm{For}^x(\mathscr{L}(A^*,b^*))$ divides over $A^*$. Now, as $\underline{V}$ is a uniform definition, $W$ is $\bigwedge_{A,b}${\hyp}definable. Obviously, $V\subseteq W$ and $W$ divides over $A$.
\mathbb{Q}ED
\end{dem}
\end{lem}
\begin{lem} \label{l:dividing}
Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets. Let $V\subseteq Q\times P$ be an $\bigwedge_{A}${\hyp}definable set and $b\in Q$. Then, $V(b)$ divides over $A$ if and only if there is an $A${\hyp}indiscernible sequence $(b_i)_{i\in \omega}$ of hyperimaginaries from $Q$ such that ${\mathrm{tp}}(b_0/A)={\mathrm{tp}}(b/A)$ and $\bigcap_{i\in \omega} V(b_i)=\emptyset$.
Furthermore, $V(b)$ divides over $A$ if and only if for any set of representatives $A^*$ of $A$ there is another set of representatives $A^{**}$ such that ${\mathrm{tp}}(A^*)={\mathrm{tp}}(A^{**})$ and $V(b)$ divides over $A^{**}$.
\begin{dem} Assume $V(b)$ divides over $A$. Take a uniform definition $\underline{V}$ of $V$, which exists by Lemma \ref{l:uniform definition}. Take representatives $A^*$ of $A$ such that $\underline{V}(x,A^*,b^*)$ divides over $A^*$. There is then an $A^*${\hyp}indiscernible sequence $(b^*_i)_{i\in \omega}$ such that ${\mathrm{tp}}(b^*_0/A^*)={\mathrm{tp}}(b^*/A^*)$ and $\bigcup_{i\in\omega}\underline{V}(x,A^*,b^*_i)$ is not finitely satisfiable. Then, $(b_i)_{i\in \omega}$ given by $b_i={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_Q(b^*_i)$ is $A${\hyp}indiscernible and ${\mathrm{tp}}(b_0/A)={\mathrm{tp}}(b/A)$. Also, ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P\left[\bigcap_{i\in \omega} V(b_i)\right]=\bigcap_{i\in \omega} {\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V(b_i)]=\bigcap_{i\in\omega}\underline{V}(\mathfrak{M},A^*,b^*_i)=\emptyset$, so $\bigcap_{i\in\omega} V(b_i)=\emptyset$ by surjectivity of ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}_P$.
On the other hand, assume there is an $A${\hyp}indiscernible sequence $(b_i)_{i\in \omega}$ in $Q$ such that ${\mathrm{tp}}(b_0/A)={\mathrm{tp}}(b/A)$ and $\bigcap_{i\in \omega} V(b_i)=\emptyset$. Take a uniform definition $\underline{V}$ of $V$, which exists by Lemma \ref{l:uniform definition}. By Corollary \ref{c:indiscernible hyperimaginaries}, there is an $A^{*}${\hyp}indiscernible sequence $(b^*_i)_{i\in \omega}$ of representatives of $(b_i)_{i\in \omega}$ with $A^{*}$ representatives of $A$. Now, as ${\mathrm{tp}}(b_0/A)={\mathrm{tp}}(b/A)$, by Corollary \ref{c:types over hyperimaginaries}, there is a representative $b^{**}\in b$ and a set of representatives $A^{**}$ of $A$ such that ${\mathrm{tp}}(b^*_0, A^{*})={\mathrm{tp}}(b^{**},A^{**})$. Take $\sigma\in\mathrm{Aut}(\mathfrak{M})$ mapping $(b_0^*,A^*)$ to $(b^{**},A^{**})$, and write $b'_i\coloneqq \sigma(b_i)$ and ${b'}^*_i\coloneqq \sigma(b^*_i)$ for $i\in\omega$. Then, $({b'}^*_i)_{i\in\omega}$ is an $A^{**}${\hyp}indiscernible sequence with ${b'}^*_0=b^{**}$. For this sequence, it follows that
$\emptyset={\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P\left[\bigcap_{i\in \omega}V(b'_i)\right]=\bigcap_{i\in\omega}{\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_P[V(b'_i)]=\bigcap_{i\in \omega}\underline{V}(\mathfrak{M},A^{**},{b'}^{*}_i),$
concluding that $\underline{V}(x,A^{**},b^{**})$ divides over $A^{**}$. In particular, $V(b)$ divides over $A$.
For the ``furthermore part'', note that in the previous paragraph ${\mathrm{tp}}(A^*)$ can be chosen arbitrarily by Corollary \ref{c:indiscernible hyperimaginaries}. \mathbb{Q}ED
\end{dem}
\end{lem}
Combining both lemmas, we conclude that the definition given here is equivalent to the one studied previously by other authors (e.g. \cite{hart2000coordinatisation}, \cite{wagner2010simple} and \cite{kim2014simplicity}). It is now straightforward to prove the following basic lemma.
\begin{lem} \label{l:dividing and functions}
Let $P$ and $Q$ be $A${\hyp}hyperdefinable sets. Let $V\subseteq P$ be $\bigwedge${\hyp}definable and $f:\ P\rightarrow Q$ a $1${\hyp}to{\hyp}$1$ $\bigwedge_A${\hyp}definable function. Then, $f[V]$ divides over $A$ provided that $V$ divides over $A$.
\end{lem}
Let $P$ be an $A${\hyp}hyperdefinable set. The \emph{forking ideal} $\mathfrak{f}_P(A)$ of $P$ over $A$ is the ideal of $\bigwedge${\hyp}definable subsets of $P$ generated by the ones dividing over $A$. An $\bigwedge${\hyp}definable subset of $P$ \emph{forks over $A$} if it is in that forking ideal.
\begin{obs} Trivially, we have that $V$ forks over $A$ implies that $\underline{V}$ forks over $A$. In the case of simple theories, the converse also holds as forking and dividing are the same \cite[Proposition 3.2.7]{wagner2010simple}. In general, however, the converse is not true. For instance, let $\mathfrak{M}$ be the monster model of the theory of dense circular orders in the usual language, let $M$ be the whole $1${\hyp}ary universe of $\mathfrak{M}$ and $E$ the trivial equivalence relation given by $xEy$ for every $x,y\in M$. Obviously, $\rfrac{M}{E}$ is a singleton, so it does not fork over $\emptyset$. However, $M$ forks over $\emptyset$.
\end{obs}
Let $P$ be a piecewise $A${\hyp}hyperdefinable set. An $\bigwedge${\hyp}definable set $V\subseteq P$ \emph{divides} over $A$ if it divides over $A$ as a subset of some/any piece $P_i$ containing $V$. Note that, by Lemma \ref{l:dividing and functions}, this is well{\hyp}defined. An $\bigwedge${\hyp}definable subset $V$ of $P$ \emph{forks} over $A$ if and only if $V$ forks over $A$ as a subset of some/any piece $P_i$ containing $V$. The \emph{forking ideal} $\mathfrak{f}_{P}(A)$ of $P$ over $A$ is the family of $\bigwedge${\hyp}definable subsets of $P$ forking over $A$. Clearly, $\mathfrak{f}_P(A)$ is the ideal of $\bigwedge${\hyp}definable subsets of $P$ generated by the ones dividing over $A$.
\subsection{Ideals}
From now on, $\lambda$ is a cardinal with $\kappa\geq\lambda>|\mathscr{L}|+|A|$.
Let $P$ be a piecewise $A${\hyp}hyperdefinable set and $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets in $P$. We say that $\mu$ is \emph{$A${\hyp}invariant} if it is invariant under $\mathrm{Aut}(\mathfrak{M}/A)$, i.e. $W(\overline{b})\in\mu$ implies $W(\overline{b}')\in\mu$ for any $\overline{b}'$ with ${\mathrm{tp}}(\overline{b}/A)={\mathrm{tp}}(\overline{b}'/A)$ and any $\bigwedge_{\bar{b}}${\hyp}definable subset $W(\overline{b})\subseteq P$ with $|\overline{b}|<\lambda$. We say that an $\bigwedge_{<\lambda}${\hyp}definable subset is \emph{$\mu${\hyp}negligible} if it is in $\mu$ and that it is \emph{$\mu${\hyp}wide} if it is not in $\mu$. We say that $\mu$ is \emph{locally atomic} if, for any wide $\bigwedge_B${\hyp}definable subset $V$ with $|B|<\lambda$, there is $a\in V$ such that ${\mathrm{tp}}(a/B)$ is wide.
\begin{obs} For $X\subseteq P$, write $\mu_{\mid X}\coloneqq\{W\in \mu\mathrel{:} W\subseteq X\}$. Clearly, $\mu_{\mid X}$ is an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $X$. Say $P=\underrightarrow{\lim}\, P_i$; then write $\mu_i\coloneqq \mu_{\mid P_i}$ for each piece. It follows from the definitions that $\mu$ is $A${\hyp}invariant if and only if each $\mu_i$ is so. Similarly, it is locally atomic if and only if each $\mu_i$ is so.
\end{obs}
We say that an $\bigwedge_A${\hyp}definable subset $V$ is \emph{$A${\hyp}medium} if for any $\bigwedge_{\bar{b}}${\hyp}definable subset $W(\overline{b})\subseteq V$ with $|\bar{b}|<\lambda$, we have
\[W(\overline{b})\in \mu\Leftrightarrow W(\overline{b}_0)\cap W(\overline{b}_1)\in\mu\]
for any $A${\hyp}indiscernible sequence $(\overline{b}_i)_{i\in\omega}$ realising ${\mathrm{tp}}(\overline{b}/A)$. We say that an $\bigwedge_{<\lambda}${\hyp}definable subset $V$ is $A${\hyp}medium if there is an $A${\hyp}medium $\bigwedge_A${\hyp}definable subset $V_0$ with $V\subseteq V_0$. We say that $V$ is \emph{strictly $A${\hyp}medium} if it is wide and $A${\hyp}medium. We say that $\mu$ is \emph{$A${\hyp}medium\footnote{We use the terminology of \cite{montenegro2018stabilizers}. In \cite{hrushovski2011stable}, it is said that the ideal has the $S_1$ property.}} if every $\bigwedge_{<\lambda}${\hyp}definable subset is $A${\hyp}medium for $\mu$.
\begin{obs} Note that, by definition, if $\mu$ is $A${\hyp}medium, $\mu$ is in particular $A${\hyp}invariant.
Note that the family of $A${\hyp}medium sets of $\mu$ is an $A${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets.
\end{obs}
\begin{ejems} {\textnormal{\textbf{(1)}}} The forking ideal over $A$ is an $A${\hyp}invariant ideal of $\bigwedge${\hyp}definable subsets. In simple theories, the forking ideal over $A$ is locally atomic and $A${\hyp}medium. Indeed, it is locally atomic by the extension property of forking \cite[Theorem 3.2.8]{wagner2010simple}. On the other hand, suppose $V(\overline{b})$ is $\bigwedge_{\overline{b}}${\hyp}definable and $V(\overline{b}_0)\cap V(\overline{b}_1)$ forks over $A$ with $(\overline{b}_i)_{i\in\omega}$ an $A${\hyp}indiscernible sequence realising ${\mathrm{tp}}(\overline{b}/A)$. Then, by simplicity, $V(\overline{b}_0)\cap V(\overline{b}_1)$ divides over $A$, so $\bigcap^n_{i=0} V(\overline{b}_{2i})\cap V(\overline{b}_{2i+1})=\emptyset$ for some $n\in\mathbb{N}$. Thus, $V(\overline{b})$ divides over $A$, so it forks over $A$. As $V$ is arbitrary, we conclude that the forking ideal over $A$ is $A${\hyp}medium.
{\textnormal{\textbf{(2)}}} If we have an $A${\hyp}invariant measure on the lattice of $\bigwedge_{<\lambda}${\hyp}definable sets, the ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of measure zero is an $A${\hyp}invariant ideal. In this case, every $\bigwedge_A${\hyp}definable subset of finite measure is $A${\hyp}medium.
\end{ejems}
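The last claim in Example {\textnormal{\textbf{(2)}}} follows from a standard almost{\hyp}disjointness argument.
\begin{obs} Let $\mathrm{m}$ be the measure, $V$ an $\bigwedge_A${\hyp}definable subset with $\mathrm{m}(V)<\infty$, $W(\overline{b})\subseteq V$ and $(\overline{b}_i)_{i\in\omega}$ an $A${\hyp}indiscernible sequence realising ${\mathrm{tp}}(\overline{b}/A)$. If $\mathrm{m}(W(\overline{b}_0)\cap W(\overline{b}_1))=0$, then, by $A${\hyp}invariance and indiscernibility, the sets $W(\overline{b}_i)$ all have the same measure $r$ and all their pairwise intersections are null, so by inclusion{\hyp}exclusion
\[n\cdot r\leq \mathrm{m}\left(\bigcup_{i<n}W(\overline{b}_i)\right)\leq \mathrm{m}(V)<\infty\]
for every $n\in\mathbb{N}$. Hence $r=0$, i.e. $W(\overline{b})$ is negligible. The converse implication is immediate, as $W(\overline{b}_0)\cap W(\overline{b}_1)\subseteq W(\overline{b}_0)$.
\end{obs}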
\begin{lem}{\textnormal{\textbf{\cite[Lemma 2.9]{hrushovski2011stable}}}} \label{l:s1 and forking}
Let $P$ be a piecewise $A${\hyp}hyperdefinable set and $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets on $P$. Let $V\subseteq P$ be a strictly $A${\hyp}medium $\bigwedge_{<\lambda}${\hyp}definable subset. Then, $V$ does not fork over $A$.
\begin{dem} Take an $A${\hyp}medium $\bigwedge_A${\hyp}definable set $V_0$ such that $V\subseteq V_0$. Suppose that $V$ forks over $A$. Then, there are $\bigwedge${\hyp}definable subsets $W'_1,\ldots,W'_n$ dividing over $A$ such that $V\subseteq\bigcup_i W'_i$. Applying Lemma \ref{l:finite character of dividing}, we may assume that each $W'_i$ is $\bigwedge_{<\lambda}${\hyp}definable. Now, $V\subseteq \bigcup_i (W'_i\cap V_0)$ and each $W'_i\cap V_0$ divides over $A$. Write $W_i(\overline{b})\coloneqq W'_i\cap V_0$. By Lemma \ref{l:dividing}, there is an $A${\hyp}indiscernible sequence $(\overline{b}_j)_{j\in\omega}$ such that $\bigcap_{j\in\omega}W_i(\overline{b}_j)=\emptyset$. Hence, $\bigcap^k_{j=0}W_i(\overline{b}_j)=\emptyset\in \mu$ for some $k$, concluding that $W_i(\overline{b})\in \mu$ by $A${\hyp}mediumness of $V_0$. Hence, $V\in\mu$, contradicting that $V$ is wide. \mathbb{Q}ED
\end{dem}
\end{lem}
\begin{lem}\label{l:translation of medium}
Let $\mathfrak{N}\preceq \mathfrak{M}$ with $|N|<\lambda$ and $G=\underrightarrow{\lim}\, G_k$ be a piecewise $N${\hyp}hyperdefinable group. Let $\mu$ be an $N${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$. Let $W$ and $U$ be non{\hyp}empty $\bigwedge_N${\hyp}definable subsets of $G$. If $\mu$ is invariant under left translations and $U\cdot W$ is $N${\hyp}medium, then $W$ is $N${\hyp}medium too. Similarly, if $\mu$ is invariant under right translations and $W\cdot U$ is $N${\hyp}medium, then $W$ is $N${\hyp}medium too.
\begin{dem} Let $X(\overline{b})\subseteq W$ be $\bigwedge_{\bar{b}}${\hyp}definable with $|\overline{b}|<\lambda$. Take a sequence $(\overline{b}_i)_{i\in\omega}$ of realisations of ${\mathrm{tp}}(\overline{b}/N)$ with an $N${\hyp}indiscernible sequence of representatives $(\overline{b}^*_i)_{i\in\omega}$, and write $X_i\coloneqq X(\overline{b}_i)$. Fix $k$ such that $U\subseteq G_k$ and take $\underline{U}$ defining $U$. As $\mathfrak{N}\preceq\mathfrak{M}$, $\underline{U}$ is finitely satisfiable in $N$, so $\underline{U}$ does not fork over $N$ \cite[Lemma 7.1.10]{tent2012course}. Thus, $\underline{U}$ has a non{\hyp}forking extension to a complete type $p$ over $N,\overline{b}^*_0$. As $p$ does not fork over $N$, it does not divide over $N$. By \cite[Lemma 7.1.5]{tent2012course}, there is $a^*$ realising $p$ such that $(\overline{b}^*_i)_{i\in \omega}$ is $N,a^*${\hyp}indiscernible. As $a^*$ realises $p$, in particular, $a\in U$.
Suppose $\mu$ is invariant under left translations and $U\cdot W$ is $N${\hyp}medium. Then, we get that $a\cdot X_0\cap a\cdot X_1\in \mu$ if and only if $a\cdot X_0\in\mu$. Therefore, provided left translational invariance,
\[X_0\cap X_1\in \mu\Leftrightarrow a\cdot X_0\cap a\cdot X_1\in \mu\Leftrightarrow a\cdot X_0\in\mu\Leftrightarrow X_0\in\mu.\]
We similarly prove the case with right translations.
\mathbb{Q}ED
\end{dem}
\end{lem}
\subsection{Stable relations}
Let $P$ and $Q$ be piecewise $A${\hyp}hyperdefinable sets and $V,W\subseteq P\times Q$ disjoint $A${\hyp}invariant subsets. We say that $V$ and $W$ are \emph{unstably separated over $A$} if there is an infinite $A${\hyp}indiscernible sequence $(a_i,b_i)_{i\in \omega}$ such that $(a_0,b_1)\in V$ and $(a_1,b_0)\in W$. We say that they are \emph{stably separated over $A$} if they are not unstably separated. We say that an $A${\hyp}invariant binary relation $R\subseteq P\times Q$ is \emph{stable over $A$} if $R$ and $R^c$ are stably separated over $A$.
\begin{obss} {\textnormal{\textbf{(1)}}} Clearly, being stably separated over $A$ is invariant under piecewise $\bigwedge_A${\hyp}definable isomorphisms.
{\textnormal{\textbf{(2)}}} Note that $V$ and $W$ are stably separated over $A$ if and only if $V\cap (P_i\times Q_j)$ and $W\cap (P_i\times Q_j)$ are stably separated over $A$ for any pieces $P_i$ and $Q_j$.
{\textnormal{\textbf{(3)}}} Let $P$ and $Q$ be $A${\hyp}hyperdefinable. By Corollary \ref{c:indiscernible hyperimaginaries}, $V$ and $W$ are stably separated over $A$ if and only if ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{P\times Q}[V]$ and ${\mathop{\mbox{\normalfont{\large{\Fontauri{q}}}}}}^{-1}_{P\times Q}[W]$ are stably separated over $A^*$ for any set of representatives $A^*$ of $A$. In particular, if $V$ and $W$ are $\bigwedge_A${\hyp}definable, we have that $V$ and $W$ are stably separated over $A$ if and only if $\underline{V}$ and $\underline{W}$ are stably separated over $A^*$ for any set of representatives $A^*$ of $A$ and any partial types $\underline{V}$ and $\underline{W}$ defining $V$ and $W$.
{\textnormal{\textbf{(4)}}} Note that, by the symmetry of stability, $R$ is stable over $A$ if and only if $R^c$ is stable over $A$. \end{obss}
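The following classical first{\hyp}order example shows an unstable relation.
\begin{ejem} In the theory of dense linear orders without endpoints, the relation $R=\{(a,b)\mathrel{:} a<b\}$ is not stable over $\emptyset$: for any strictly increasing sequence $(c_i)_{i\in\omega}$, the sequence of pairs $(c_i,c_i)_{i\in\omega}$ is $\emptyset${\hyp}indiscernible, and $(c_0,c_1)\in R$ while $(c_1,c_0)\in R^c$. Hence $R$ and $R^c$ are unstably separated over $\emptyset$.
\end{ejem}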
We need the following two lemmas from \cite{hrushovski2011stable} for the Stabilizer Theorem. Their original proofs can be easily adapted.
Recall that we say that a partial type $\Sigma(x,y)$ over $A^*$ divides over $A^*$ \emph{with respect to} a global $A^*${\hyp}invariant type $\widehat{q}(y)$ if there is a sequence $(b^*_i)_{i\in \omega}$ such that $b^*_i\models \widehat{q}_{\mid A^*,b^*_0,\ldots,b^*_{i-1}}$ and $\bigcup_{i\in\omega} \Sigma(x,b^*_i)$ is inconsistent.
\begin{lem}{\textnormal{\textbf{\cite[Lemma 2.2]{hrushovski2011stable}}}}
Let $P=\rfrac{X}{E}$ and $Q=\rfrac{Y}{F}$ be $A^*${\hyp}hyperdefinable sets and $\widehat{q}$ a global $A^*${\hyp}invariant type with $\underline{Y}\in\widehat{q}$. Let $W_1,W_2\subseteq P\times Q$ be $\bigwedge_{A^*}${\hyp}definable sets stably separated over $A^*$. Assume that $a\in P$ is such that $\underline{W_2(a)}\subseteq \widehat{q}$. Then, $V=W_1\cap \left({\mathrm{tp}}(a/A^*)\times Q\right)$ divides over $A^*$. Furthermore, $\underline{V}$ divides over $A^*$ with respect to $\widehat{q}$.
\end{lem}
For sets $V$ and $W$, write
\[V\times_{\mathrm{nf}(A)}W\coloneqq\{(a,b)\in V\times W\mathrel{:} {\mathrm{tp}}(b/aA)\mbox{ does not fork over }A\}.\]
Similarly, $V\times_{\mathrm{ndiv}(A)}W$ is defined with dividing in place of forking, while $V\hbox{\hss $\prescript{}{\mathrm{nf}(A)}{\times}$ \hss} W$ and $V\hbox{\hss $\prescript{}{\mathrm{ndiv}(A)}{\times}$ \hss} W$ are defined with the corresponding condition on ${\mathrm{tp}}(a/bA)$ instead.
\begin{lem}{\textnormal{\textbf{\cite[Lemma 2.3]{hrushovski2011stable}}}} \label{l:fundamental lemma for the stabilizer theorem}
Let $P$ and $Q$ be piecewise $A^*${\hyp}hyperdefinable sets and $R\subseteq P\times Q$ a stable binary relation over $A^*$. Let $p\subseteq P$ and $q\subseteq Q$ be types over $A^*$. Assume that there is an $A^*${\hyp}invariant global type $\widehat{q}$ extending a partial type $\underline{q}$ over $A^*$ defining $q$.
\textnormal{\textbf{(1)}} Take $a\in p$ and $b\in q$ such that $(a,b)\in R$. Suppose that there are representatives $a^*\in a$ and $b^*\in b$ such that $b^*\models\widehat{q}_{\mid A^*,a^*}$. Then, we have that $(a',b)\in R$ for any $a'\in p$ such that ${\mathrm{tp}}(a'/A^*,b)$ does not divide over $A^*$.
\textnormal{\textbf{(2)}} Take $a',a\in p$ and $b\in q$. Suppose that ${\mathrm{tp}}(a/A^*,b)$ and ${\mathrm{tp}}(a'/A^*,b)$ do not divide over $A^*$. Then, $(a,b)\in R$ if and only if $(a',b)\in R$.
\textnormal{\textbf{(3)}} Assume that there is also an $A^*${\hyp}invariant global type $\widehat{p}$ extending a partial type $\underline{p}$ over $A^*$ defining $p$. Then, the following conditions are equivalent:
\[\begin{array}{llll}
\mbox{{\textbf{i.}}}& p\times_{\mathrm{ndiv}(A^*)}q\cap R\neq \emptyset & \mbox{{\textbf{v.}}}& p\hbox{\hss $\prescript{}{\mathrm{ndiv}(A^*)}{\times}$ \hss} q\cap R\neq \emptyset\\
\mbox{{\textbf{ii.}}}& p\times_{\mathrm{nf}(A^*)}q\cap R\neq \emptyset & \mbox{{\textbf{vi.}}}& p\hbox{\hss $\prescript{}{\mathrm{nf}(A^*)}{\times}$ \hss} q\cap R\neq \emptyset\\
\mbox{{\textbf{iii.}}}& p\times_{\mathrm{nf}(A^*)}q\subseteq R &\mbox{{\textbf{vii.}}} & p\hbox{\hss $\prescript{}{\mathrm{nf}(A^*)}{\times}$ \hss} q\subseteq R\\
\mbox{{\textbf{iv.}}}& p\times_{\mathrm{ndiv}(A^*)}q\subseteq R &\mbox{{\textbf{viii.}}} & p\hbox{\hss $\prescript{}{\mathrm{ndiv}(A^*)}{\times}$ \hss} q\subseteq R\end{array}\]
\end{lem}
\subsection{Stabilizer Theorem}
From now on, $\lambda$ is a cardinal with $\kappa\geq\lambda>|\mathscr{L}|+|A|$.
Let $G$ be a piecewise $A${\hyp}hyperdefinable group and $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$. Let $V,W\subseteq G$ be two $\bigwedge_{<\lambda}${\hyp}definable subsets. The \emph{(left) $\mu${\hyp}stabilizer of $V$ with respect to $W$} is the set ${\mathrm{St}}_\mu(V,W)=\{a\in G\mathrel{:} a^{-1}\cdot V\cap W\notin \mu\}$, and the \emph{(left) $\mu${\hyp}stabilizer group of $V$ with respect to $W$} is the subgroup ${\mathrm{Stab}}_\mu(V,W)$ of $G$ generated by ${\mathrm{St}}_\mu(V,W)$. Write ${\mathrm{St}}_\mu(V)\coloneqq {\mathrm{St}}_\mu(V,V)$ and ${\mathrm{Stab}}_\mu(V)\coloneqq {\mathrm{Stab}}_\mu(V,V)$. We omit the subscript $\mu$ if there is no confusion.
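As a degenerate sanity check of these definitions, consider the trivial ideal.
\begin{ejem} Let $\mu=\{\emptyset\}$ and $V=W=H$ for an $\bigwedge${\hyp}definable subgroup $H\leq G$. Then
\[{\mathrm{St}}_\mu(H)=\{a\in G\mathrel{:} a^{-1}\cdot H\cap H\neq\emptyset\}=H\cdot H^{-1}=H,\]
so ${\mathrm{Stab}}_\mu(H)=H$.
\end{ejem}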
\begin{lem} \label{l:basic remarks}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group, $\mu$ an $A${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ and $V$ and $W$ two $\bigwedge_A${\hyp}definable wide subsets. Then:
{\textnormal{\textbf{(1)}}} Take any $B\supseteq A$ with $|B|<\lambda$. If $\mu$ is locally atomic, $g\in {\mathrm{St}}(V,W)$ if and only if there are $a\in V$ and $b\in W$ such that $g=ab^{-1}$ with ${\mathrm{tp}}(b/B,g)\notin \mu$.
{\textnormal{\textbf{(2)}}} If $\mu$ is invariant under left translations, then ${\mathrm{St}}(V,W)^{-1}={\mathrm{St}}(W,V)$. In particular, ${\mathrm{St}}(V)$ is a symmetric subset.
\end{lem}
We say that a subset $X\subseteq G$ is \emph{stable over $A$} if the relation $\{(a,b)\mathrel{:} a^{-1}b\in X\}$ is stable over $A$.
\begin{ejem}{\textnormal{\textbf{\cite[Lemma 2.10]{hrushovski2011stable} \& \cite[Lemma 2.8]{montenegro2018stabilizers}}}} \label{e:mos 2.8} Let $\mu$ be an $A${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of a piecewise $A${\hyp}hyperdefinable group. If $\mu$ is invariant under left translations and $V$ and $W$ are $A${\hyp}medium $\bigwedge_A${\hyp}definable subsets, then ${\mathrm{St}}(V,W)$ is stable over $A$.
\end{ejem}
\begin{obs} Since ${\scriptstyle \bullet}^{-1}:\ G\rightarrow G$ is a piecewise $\bigwedge_A${\hyp}definable isomorphism, $X$ is stable over $A$ if and only if $X^{-1}$ is stable over $A$. Also, $X$ is stable over $A$ if and only if $\{(a,b)\mathrel{:} ab\in X\}$ is stable over $A$, if and only if $\{(a,b)\mathrel{:} ab^{-1}\in X\}$ is stable over $A$.
\end{obs}
For subsets $V$ and $W$ of a piecewise hyperdefinable group $G$, write
\[V\cdot_{\mathrm{nf}(A)}W\coloneqq \{a\cdot b\mathrel{:} (a,b)\in V\times_{\mathrm{nf}(A)}W\}.\]
Similarly, $V\cdot_{\mathrm{ndiv}(A)}W$, $V\hbox{\hss $\prescript{}{\mathrm{nf}(A)}{\cdot}$ \hss} W$ and $V\hbox{\hss $\prescript{}{\mathrm{ndiv}(A)}{\cdot}$ \hss} W$.
\begin{lem} \label{l:lemma stabilizer}
Let $G$ be a piecewise $A^*${\hyp}hyperdefinable group and $\mu$ a locally atomic $A^*${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ which is also invariant under left translations. Let $p$ be an $A^*${\hyp}medium $A^*${\hyp}type and $X\subseteq G$ a stable subset over $A^*$ containing $p$. Suppose that there is an $A^*${\hyp}invariant global type $\widehat{p}$ extending a partial type $\underline{p}$ over $A^*$ defining $p$. Then, ${\mathrm{Stab}}(p)\subseteq X\cdot X^{-1}$. Furthermore, ${\mathrm{Stab}}(p)\cdot_{\mathrm{nf}(A^*)} p\subseteq X$.
\begin{dem} We may assume that $p$ is strictly $A^*${\hyp}medium; otherwise, ${\mathrm{St}}(p)=\emptyset$ and ${\mathrm{Stab}}(p)=\{1\}$, so the lemma holds trivially. Since $\mu$ is invariant under left translations, ${\mathrm{St}}(p)^{-1}={\mathrm{St}}(p)$ by Lemma \ref{l:basic remarks}(2). Thus, by definition, ${\mathrm{Stab}}(p)=\bigcup^{\infty}_{n=0} {\mathrm{St}}(p)^n$. We prove by induction on $n$ that $b\cdot c\in X$ for $b\in{\mathrm{St}}(p)^n$ and $c\in p$ such that ${\mathrm{tp}}(c/A^*,b)$ does not fork over $A^*$.
By hypothesis, $p\subseteq X$, so we are done for $n=0$. Assume it is true for $n-1$ with $n\in\mathbb{N}_{>0}$. Take $b=b_1\cdot \cdots\cdot b_n$ with $b_i\in {\mathrm{St}}(p)$ for each $i\in\{1,\ldots,n\}$ and $c\in p$ such that ${\mathrm{tp}}(c/A^*,b)$ does not fork over $A^*$. We want to prove that $b\cdot c\in X$.
As $b_n\in{\mathrm{St}}(p)$, by Lemma \ref{l:basic remarks}(1), there is $c'\in p$ such that $b_n\cdot c'\in p$ and ${\mathrm{tp}}(c'/A^*,b_1,\ldots,b_n)\notin \mu$. In particular, as $p$ is $A^*${\hyp}medium, ${\mathrm{tp}}(c'/A^*,b_1,\ldots,b_n)$ does not fork over $A^*$ by Lemma \ref{l:s1 and forking}. Hence, ${\mathrm{tp}}(c'/A^*,b)$ and ${\mathrm{tp}}(b_n\cdot c'/A^*,b_1,\ldots,b_{n-1})$ do not fork over $A^*$. By induction hypothesis, $b\cdot c'=(b_1\cdots b_{n-1})\cdot b_n\cdot c'\in X$. Since $(b,c')\in {\mathrm{tp}}(b/A^*)\times_{\mathrm{nf}(A^*)}p$, by Lemma \ref{l:fundamental lemma for the stabilizer theorem}(2), we conclude that $b\cdot c\in X$ too.
Now, for any $b\in {\mathrm{Stab}}(p)$ we can find $c\in p$ such that ${\mathrm{tp}}(c/A^*,b)$ does not fork over $A^*$ --- namely, choose $c^*\models \widehat{p}_{\mid A^*,b^*}$. Therefore, we conclude that ${\mathrm{Stab}}(p)\subseteq X\cdot p^{-1}\subseteq X\cdot X^{-1}$.\mathbb{Q}ED
\end{dem}
\end{lem}
As an immediate corollary we get the following result:
\begin{coro} Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$. Let $G$ be a piecewise $N${\hyp}hyperdefinable group and $\mu$ a locally atomic $N${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets invariant under left translations. Let $p$ be an $N${\hyp}medium $N${\hyp}type with $p\subseteq {\mathrm{St}}(p)$. Then, ${\mathrm{Stab}}(p)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2$.
\begin{dem} If $p\subseteq {\mathrm{St}}(p)$, by Lemma \ref{l:basic remarks}, we get that ${\mathrm{St}}(p)\subseteq p\cdot p^{-1}\subseteq {\mathrm{St}}(p)^2$. By Example \ref{e:mos 2.8}, ${\mathrm{St}}(p)$ is a stable subset. Hence, by Lemma \ref{l:lemma stabilizer}, taking $\widehat{p}$ a coheir of $\underline{p}$, we conclude that ${\mathrm{Stab}}(p)\subseteq {\mathrm{St}}(p)^2\subseteq (p\cdot p^{-1})^2\subseteq {\mathrm{Stab}}(p)$.\mathbb{Q}ED
\end{dem}
\end{coro}
\begin{lem}{\textnormal{\textbf{\cite[Lemma 2.11]{montenegro2018stabilizers}}}} \label{l:mos 2.11}
Let $G$ be a piecewise $A^*${\hyp}hyperdefinable group and $\mu$ an $A^*${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ which is also invariant under left translations. Let $p$ be an $A^*${\hyp}type and $W$ a strictly $A^*${\hyp}medium $\bigwedge_{A^*}${\hyp}definable subset such that $p^{-1}W$ is $A^*${\hyp}medium too. Suppose that there is an $A^*${\hyp}invariant global type $\widehat{p}$ extending a partial type $\underline{p}$ over $A^*$ defining $p$. Then, $p\cdot_{\mathrm{nf}(A^*)} p^{-1}\subseteq {\mathrm{St}}(W)$.
\begin{dem} Take $(a^*_i)_{i\in \omega}$ such that $a^*_i\models \widehat{p}_{\mid A^*,a^*_0,\ldots,a^*_{i-1}}$. Then, $(a_i)_{i\in \omega}$ is an $A^*${\hyp}indiscernible sequence of realisations of $p$ with ${\mathrm{tp}}(a_1/A^*,a_0)$ non{\hyp}forking. As $\mu$ is invariant under left translations, $a^{-1}_0W$ is wide. Since $p^{-1}W$ is $A^*${\hyp}medium, we conclude that $a^{-1}_0W\cap a^{-1}_1W\not\in\mu$. Thus, by invariance under left translations, $a_0a^{-1}_1\in {\mathrm{St}}(W)$. By Lemma \ref{l:fundamental lemma for the stabilizer theorem}(2) and Example \ref{e:mos 2.8}, we conclude that $p\cdot_{\mathrm{nf}(A^*)}p^{-1}\subseteq {\mathrm{St}}(W)$. \mathbb{Q}ED
\end{dem}
\end{lem}
For two subsets $X$ and $Y$ of a group $G$, recall that $[X:Y]\coloneqq \min\{|\Delta|\mathrel{:} X\subseteq \Delta Y\}$. Note that $[X^{-1}:Y^{-1}]=\min\{|\Delta|\mathrel{:} X\subseteq Y\Delta\}$.
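The identity just noted can be checked directly:
\[X^{-1}\subseteq \Delta\cdot Y^{-1}\ \Leftrightarrow\ X\subseteq \left(\Delta\cdot Y^{-1}\right)^{-1}=Y\cdot\Delta^{-1},\]
and $|\Delta|=|\Delta^{-1}|$, so taking the minimum over $\Delta$ gives $[X^{-1}:Y^{-1}]=\min\{|\Delta|\mathrel{:} X\subseteq Y\Delta\}$.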
\begin{lem} \label{l:index lemma 1}
Let $G$ be a piecewise $A^*${\hyp}hyperdefinable group and $S\leq G$ an $\bigwedge_{A^*}${\hyp}definable subgroup. Let $p\subseteq G$ be an $A^*${\hyp}type such that there is an $A^*${\hyp}invariant global type $\widehat{p}$ extending a partial type $\underline{p}$ defining $p$ over $A^*$.
\textnormal{\textbf{(1)}} If $[p:S]$ is small, then $[p:S]=1$ and $p^{-1}\cdot p\subseteq S$.
\textnormal{\textbf{(2)}} If $[p^{-1}:S]$ is small, then $[p^{-1}:S]=1$ and $p\cdot p^{-1}\subseteq S$.
\begin{dem} \textnormal{\textbf{(1)}} Let $(a_i)_{i\in \alpha}$ be a sequence in $p$ such that $\alpha=[p:S]$, $a_jS\cap a_iS=\emptyset$ for $i\neq j$ and $p\subseteq \bigcup_{i\in\alpha}a_iS$. Suppose that there is no $i\in\alpha$ such that $\underline{(a_iS)\cap p}$ is contained in $\widehat{p}$. Then, there are formulas $\varphi_i\in \underline{(a_iS)\cap p}$ for each $i\in\alpha$ such that $\{\neg\varphi_i\}_{i\in\alpha}\subseteq \widehat{p}$. In particular, $\{\neg\varphi_i\}_{i\in\alpha}\cup\underline{p}$ is finitely satisfiable. Thus, by saturation, there is $c\in p$ such that $c\notin \bigcup_{i\in \alpha} a_iS$, getting a contradiction. That concludes that there is $i\in \alpha$ such that $\underline{(a_iS)\cap p}\subseteq \widehat{p}$. Since $a_iS\cap a_jS=\emptyset$ for any $i\neq j$, we conclude that there is just one such $i\in\alpha$. Since $\widehat{p}$ and $S$ are $A^*${\hyp}invariant, we get that $a_iS$ is $A^*${\hyp}invariant too. Indeed, take $\sigma\in\mathrm{Aut}(\mathfrak{M}/A^*)$ arbitrary. Then, $\sigma[\underline{(a_iS)\cap p}]\subseteq \widehat{p}$ defines $(\sigma(a_i)S)\cap p$, concluding that $\sigma(a_i)S=a_iS$. As $\sigma$ is arbitrary, we conclude that $a_iS$ is $A^*${\hyp}invariant. Thus, $a_iS$ is $\bigwedge_{A^*}${\hyp}definable by Corollary \ref{c:parameters of definition}. As $(a_iS)\cap p\neq \emptyset$ and $p$, being a complete type over $A^*$, is minimal among non{\hyp}empty $A^*${\hyp}invariant sets, $p\subseteq a_iS$. So, $[p:S]=1$ and $p^{-1}\cdot p\subseteq S$.
\textnormal{\textbf{(2)}} Analogous to point \textnormal{\textbf{(1)}}, but now working with right cosets --- as $[p^{-1}:S]=\min\{|\Delta|\mathrel{:} p\subseteq S\Delta\}$.
\mathbb{Q}ED
\end{dem}
\end{lem}
\begin{lem} \label{l:index lemma 2}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group, $S\leq G$ an $\bigwedge_A${\hyp}definable subgroup and $V\subseteq G$ an $\bigwedge_A${\hyp}definable set. Suppose $[V:S]$ is not small. Then, there is an $A${\hyp}indiscernible sequence $(a_i)_{i\in\omega}$ in $V$ such that $a_i\cdot S\cap a_j\cdot S=\emptyset$ for $i\neq j$.
\begin{dem}
Let $\mathbf{G}$, $\mathbf{S}$ and $\mathbf{V}$ be the interpretations in the monster model $\boldsymbol{\mathfrak{C}}$, and fix a set of representatives $A^*$ of $A$. Since $[V:S]$ is not small and $\boldsymbol{\mathfrak{C}}$ is the monster model, $[\mathbf{V}: \mathbf{S}]\notin{\mathbf{\mathbbm{O}\mathrm{n}}}$. Recall that there is $\tau\in{\mathbf{\mathbbm{O}\mathrm{n}}}$ depending only on $|A^*|$ such that, for any sequence $(a^*_i)_{i\in\tau}$ of elements in $\boldsymbol{\mathfrak{C}}$, there is an $A^*${\hyp}indiscernible sequence $(b^*_j)_{j\in\omega}$ of elements in $\boldsymbol{\mathfrak{C}}$ with the property that, for any $j_1<\cdots<j_n$, ${\mathrm{tp}}(a^*_{i_1},\ldots,a^*_{i_n}/A^*)={\mathrm{tp}}(b^*_{j_1},\ldots,b^*_{j_n}/A^*)$ for some $i_1<\cdots<i_n$ --- see \cite[Lemma 7.2.12]{tent2012course}. In particular, take $(a_i)_{i\in \tau}$ a sequence of hyperimaginaries of $\mathbf{V}$ such that $a_i\cdot \mathbf{S}\cap a_j\cdot \mathbf{S}=\emptyset$ for each $i\neq j$. Let $(a^*_i)_{i\in\tau}$ be representatives of $(a_i)_{i\in \tau}$. Then, there is an $A^*${\hyp}indiscernible sequence $(\widetilde{b}^*_j)_{j\in\omega}$ such that ${\mathrm{tp}}(\widetilde{b}^*_0,\widetilde{b}^*_1/A^*)={\mathrm{tp}}(a^*_{j'},a^*_{j''}/A^*)$ for some $j'<j''$. Now, by $\kappa${\hyp}saturation of $\mathfrak{M}$, we can find elements $(b^*_j)_{j\in\omega}$ in $\mathfrak{M}$ realising the same type as $(\widetilde{b}^*_j)_{j\in\omega}$ over $A^*$. So, the projections $b_j\in V$ form an $A${\hyp}indiscernible sequence $(b_j)_{j\in\omega}$ in $V$ such that $b_i\cdot S\cap b_j\cdot S=\emptyset$ for $i\neq j$. \mathbb{Q}ED
\end{dem}
\end{lem}
We now prove the Stabilizer Theorem for piecewise hyperdefinable groups. Below, Corollary \ref{c:stabilizer theorem mos b2} corresponds to \cite[Theorem 2.12 (B2)]{montenegro2018stabilizers}; Theorem \ref{t:stabilizer theorem 2} corresponds to \cite[Theorem 3.5]{hrushovski2011stable} and \cite[Theorem 2.12 (B1)]{montenegro2018stabilizers}; and Theorem \ref{t:stabilizer theorem 2}(c) corresponds to \cite[Proposition 2.14]{montenegro2018stabilizers}.
\begin{teo} \label{t:stabilizer theorem 1} Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$. Let $G$ be a piecewise $N${\hyp}hyperdefinable group and $\mu$ a locally atomic $N${\hyp}invariant ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets invariant under left translations. Let $p\subseteq G$ be an $N${\hyp}medium $N${\hyp}type. Assume that there is a wide $N${\hyp}type $q\subseteq {\mathrm{St}}(p)$ such that $p^{-1}\cdot q$ is $N${\hyp}medium. Then, ${\mathrm{Stab}}(p)={\mathrm{Stab}}(q)={\mathrm{St}}(p)^2={\mathrm{St}}(q)^4=(p\cdot p^{-1})^2$ is a wide $\bigwedge_N${\hyp}definable subgroup of $G$ without proper $\bigwedge_N${\hyp}definable subgroups of small index. Furthermore:
{\textnormal{\textbf{(a)}}} ${\mathrm{Stab}}(q)\cdot_{\mathrm{nf}(N)}q\subseteq {\mathrm{St}}(p)$.
{\textnormal{\textbf{(b)}}} Every strictly $N${\hyp}medium $N${\hyp}type of ${\mathrm{Stab}}(p)$ lies in ${\mathrm{St}}(p)$.
\begin{dem} We take $\widehat{p}$ and $\widehat{q}$ to be coheirs of $\underline{p}$ and $\underline{q}$ respectively. By Lemma \ref{l:translation of medium}, $q$ is $N${\hyp}medium. By Lemma \ref{l:mos 2.11}, $p\cdot_{\mathrm{nf}(N)}p^{-1}\subseteq {\mathrm{St}}(q)$. Then, ${\mathrm{St}}(p)\subseteq p\cdot p^{-1}\subseteq {\mathrm{St}}(q)^2$. Indeed, for any $a,b\in p$ we can find $c\in p$ such that ${\mathrm{tp}}(c/N,a,b)$ does not fork over $N$ --- simply take $c^*$ realising $\widehat{p}_{\mid N,a^*,b^*}$. Thus, $ac^{-1},bc^{-1}\in {\mathrm{St}}(q)$, concluding that $ab^{-1}\in{\mathrm{St}}(q)^2$ by Lemma \ref{l:basic remarks}(2).
Since $q\subseteq {\mathrm{St}}(p)$, ${\mathrm{Stab}}(q)\cdot_{\mathrm{nf}(N)} q\subseteq {\mathrm{St}}(p)$ by Lemma \ref{l:lemma stabilizer} and Example \ref{e:mos 2.8}. In particular, ${\mathrm{Stab}}(q)\subseteq {\mathrm{St}}(p)^2$ by Lemma \ref{l:basic remarks}(2). Thus, ${\mathrm{Stab}}(p)={\mathrm{Stab}}(q)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2={\mathrm{St}}(q)^4$ is an $\bigwedge_N${\hyp}definable subgroup. Since $q\subseteq {\mathrm{St}}(p)\subseteq {\mathrm{Stab}}(p)$, we have that ${\mathrm{Stab}}(p)$ is wide.
Take an $\bigwedge_N${\hyp}definable subgroup $T\leq {\mathrm{Stab}}(p)$ such that $[{\mathrm{Stab}}(p):T]$ is small. Since $p\cdot p^{-1}\subseteq {\mathrm{Stab}}(p)$, $[p^{-1}:T]$ is also small. By Lemma \ref{l:index lemma 1}(2), $p\cdot p^{-1}\subseteq T$. Therefore, ${\mathrm{Stab}}(p)=(p\cdot p^{-1})^2\subseteq T$. In other words, ${\mathrm{Stab}}(p)$ does not have proper $\bigwedge_N${\hyp}definable subgroups of small index.
Finally, we prove property {\textnormal{\textbf{(b)}}}. Take $r\subset {\mathrm{Stab}}(p)$ a strictly $N${\hyp}medium $N${\hyp}type. Set $c\in q$. Since $\mu$ is locally atomic, there is $b\in r$ such that ${\mathrm{tp}}(b/N,c)\notin \mu$. By Lemma \ref{l:s1 and forking}, ${\mathrm{tp}}(b/N,c)$ does not fork over $N$. Then, by Lemma \ref{l:dividing and functions} and Lemma \ref{l:types and infinite definable functions}, ${\mathrm{tp}}(b^{-1}c^{-1}/N,c)$ does not fork over $N$. Since $c,b\in {\mathrm{Stab}}(q)$, we have $b^{-1}c^{-1}\in{\mathrm{Stab}}(q)$. Write $r'={\mathrm{tp}}(b^{-1}c^{-1}/N)\subseteq {\mathrm{Stab}}(q)$. By {\textnormal{\textbf{(a)}}}, $r'\cdot_{\mathrm{nf}(N)}q\subseteq {\mathrm{St}}(p)$. By Lemma \ref{l:fundamental lemma for the stabilizer theorem} and Example \ref{e:mos 2.8}, we conclude that $b^{-1}=b^{-1}\cdot c^{-1}\cdot c\in {\mathrm{St}}(p)$, so $b\in {\mathrm{St}}(p)$. Since $r$ is a type over $N$, we conclude $r\subseteq {\mathrm{St}}(p)$. \mathbb{Q}ED
\end{dem}
\end{teo}
\begin{coro} \label{c:stabilizer theorem mos b2}
Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$ and $G$ a piecewise $N${\hyp}hyperdefinable group. Let $\mu$ be an $N${\hyp}invariant locally atomic ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ invariant under left and right translations. Let $p$ be a wide type over $N$ and assume that $p^{-1}\cdot p\cdot p^{-1}$ is $N${\hyp}medium. Then, ${\mathrm{Stab}}(p)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2$ is a wide $\bigwedge_N${\hyp}definable subgroup of $G$ without proper $\bigwedge_N${\hyp}definable subgroups of small index such that every strictly $N${\hyp}medium $N${\hyp}type of ${\mathrm{Stab}}(p)$ lies in ${\mathrm{St}}(p)$.
\begin{dem} Take $a\in p$ arbitrary. By the local atomic property, there is $b\in p$ such that ${\mathrm{tp}}(b/N,a)$ is wide. By Lemma \ref{l:translation of medium}, $p\cdot p^{-1}$ is $N${\hyp}medium. Again by Lemma \ref{l:translation of medium} but now using invariance under right translations, we get that $p$ is $N${\hyp}medium. By Lemma \ref{l:s1 and forking}, it follows that ${\mathrm{tp}}(b/N,a)$ does not fork over $N$. Consider the $N${\hyp}type $q={\mathrm{tp}}(ba^{-1}/N)\subseteq p\prescript{}{\mathrm{nf}(N)}{\cdot} p^{-1}$.
By invariance under right translations, we know that $q$ is a wide $\bigwedge_N${\hyp}definable set. As $q\subseteq p\cdot p^{-1}$ with $p\cdot p^{-1}$ $N${\hyp}medium, we have that $q$ is $N${\hyp}medium too. Also, note that $p^{-1}\cdot q\subseteq p^{-1}\cdot p\cdot p^{-1}$, so $p^{-1}\cdot q$ is $N${\hyp}medium too. On the other hand, by Lemma \ref{l:translation of medium} using invariance under right translations, $p^{-1}\cdot p$ is $N${\hyp}medium, so $p\cdot_{\mathrm{nf}(N)}p^{-1}\subseteq {\mathrm{St}}(p)$ by Lemma \ref{l:mos 2.11}. By Example \ref{e:mos 2.8} and Lemma \ref{l:fundamental lemma for the stabilizer theorem}(3), $p\prescript{}{\mathrm{nf}(N)}{\cdot} p^{-1}\subseteq {\mathrm{St}}(p)$ too, so $q\subseteq {\mathrm{St}}(p)$. By Theorem \ref{t:stabilizer theorem 1}, we conclude that ${\mathrm{Stab}}(p)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2$ is a wide $\bigwedge_N${\hyp}definable subgroup of $G$ without proper $\bigwedge_N${\hyp}definable subgroups of small index such that every strictly $N${\hyp}medium $N${\hyp}type of ${\mathrm{Stab}}(p)$ lies in ${\mathrm{St}}(p)$. \mathbb{Q}ED
\end{dem}
\end{coro}
\begin{teo}[Stabilizer Theorem] \label{t:stabilizer theorem 2}
Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$ and $G=\underrightarrow{\lim}\, X^n$ a piecewise $N${\hyp}hyperdefinable group generated by a symmetric $\bigwedge_N${\hyp}definable subset $X$. Let $\mu$ be an $N${\hyp}invariant locally atomic ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ invariant under left translations. Let $p\subseteq X$ be a wide type over $N$ and assume that $X^3$ is $N${\hyp}medium. Then, ${\mathrm{Stab}}(p)={\mathrm{St}}(p)^2=(p\cdot p^{-1})^2$ is a wide and $N${\hyp}medium $\bigwedge_N${\hyp}definable normal subgroup of small index of $G$ without proper $\bigwedge_N${\hyp}definable subgroups of small index. Furthermore:
{\textnormal{\textbf{(a)}}} $p\cdot p\cdot p^{-1}=p\cdot {\mathrm{Stab}}(p)$ is a left coset of ${\mathrm{Stab}}(p)$.
{\textnormal{\textbf{(b)}}} Every wide $N${\hyp}type of ${\mathrm{Stab}}(p)$ is contained in ${\mathrm{St}}(p)$.
{\textnormal{\textbf{(c)}}} Assume $\mu$ is also invariant under right translations. Then, $p\cdot p^{-1}\cdot p={\mathrm{Stab}}(p)\cdot p$ is a right coset of ${\mathrm{Stab}}(p)$.
\begin{dem} Take $a\in p$ arbitrary. By the local atomic property, we find $b\in p$ such that ${\mathrm{tp}}(b/N,a)$ is wide. By Lemma \ref{l:translation of medium}, $X$ is $N${\hyp}medium. As ${\mathrm{tp}}(b/N,a)\subseteq p\subseteq X$, we conclude that ${\mathrm{tp}}(b/N,a)$ is $N${\hyp}medium, so ${\mathrm{tp}}(b/N,a)$ does not fork over $N$ by Lemma \ref{l:s1 and forking}. Also, ${\mathrm{tp}}(a^{-1}b/N,a)=a^{-1}\cdot {\mathrm{tp}}(b/N,a)$ is wide by left invariance of $\mu$, so $q={\mathrm{tp}}(a^{-1}b/N)\subseteq p^{-1}\cdot_{\mathrm{nf}(N)}p$ is wide. By Lemma \ref{l:translation of medium}, $X^2$ is $N${\hyp}medium, so $p^2$ is $N${\hyp}medium too. Using any coheir extending $\underline{p^{-1}}$, we get $p^{-1}\cdot_{\mathrm{nf}(N)}p\subseteq {\mathrm{St}}(p)$ by Lemma \ref{l:mos 2.11}. In particular, $q\subseteq {\mathrm{St}}(p)$. As $X$ is symmetric, $p^{-1}\cdot q\subseteq p^{-1}\cdot p^{-1}\cdot p\subseteq X^3$, so $p^{-1}\cdot q$ is $N${\hyp}medium. Thus, by Theorem \ref{t:stabilizer theorem 1}, ${\mathrm{Stab}}(p)={\mathrm{Stab}}(q)={\mathrm{St}}(p)^2={\mathrm{St}}(q)^4=(p\cdot p^{-1})^2$ is a wide $\bigwedge_N${\hyp}definable subgroup without proper $\bigwedge_N${\hyp}definable subgroups of small index and every strictly $N${\hyp}medium type of ${\mathrm{Stab}}(p)$ is contained in ${\mathrm{St}}(p)$. Also, ${\mathrm{Stab}}(q)\cdot_{\mathrm{nf}(N)}q\subseteq {\mathrm{St}}(p)$.
We show that it is a normal subgroup of small index. First of all, note that $[X^2:{\mathrm{Stab}}(p)]$ is small. Otherwise, by Lemma \ref{l:index lemma 2}, there is an $N${\hyp}indiscernible sequence $(b_j)_{j\in\omega}$ in $X^2$ such that $b_i\cdot {\mathrm{Stab}}(p)\cap b_j\cdot {\mathrm{Stab}}(p)=\emptyset$ for $i\neq j$. Take $d\in p^{-1}$ arbitrary. As $p\cdot p^{-1}\subseteq {\mathrm{Stab}}(p)$, we get $(b_i\cdot p\cdot d)\cap (b_j\cdot p\cdot d)=\emptyset$ for $i\neq j$, so $b_i\cdot p\cap b_j\cdot p=\emptyset$ for $i\neq j$. As $X^3$ is $N${\hyp}medium, that implies $b_0\cdot p\in\mu$, so $p\in\mu$ by invariance under left translations, contradicting our hypotheses. For any $c\in X$, we have $[{\mathrm{tp}}(c/N):{\mathrm{Stab}}(p)]=1$ by Lemma \ref{l:index lemma 1}(1), so ${\mathrm{tp}}(c/N)\cdot {\mathrm{Stab}}(p)\cdot {\mathrm{tp}}(c/N)^{-1}=c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}$ is $\bigwedge_N${\hyp}definable. As $X$ is symmetric, we also get that $[p^{-1}:c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}]=[p^{-1}\cdot c:{\mathrm{Stab}}(p)]\leq [X^2:{\mathrm{Stab}}(p)]$ is small, so $p\cdot p^{-1}\subseteq c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}$ by Lemma \ref{l:index lemma 1}(2). Therefore, ${\mathrm{Stab}}(p)\subseteq c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}$. As $c\in X$ is arbitrary and $X$ is symmetric, ${\mathrm{Stab}}(p)=c\cdot {\mathrm{Stab}}(p)\cdot c^{-1}$. Thus, $X\subseteq \mathrm{N}_G({\mathrm{Stab}}(p))$, concluding ${\mathrm{Stab}}(p)\trianglelefteq G$. Since we have proved that $[X:{\mathrm{Stab}}(p)]$ is small, we conclude that ${\mathrm{Stab}}(p)$ has small index by normality.
We show now property {\textnormal{\textbf{(a)}}}, i.e. $p\cdot p\cdot p^{-1}=p\cdot {\mathrm{Stab}}(p)=y\cdot{\mathrm{Stab}}(p)$ for any $y\in p$. As $(p\cdot p^{-1})^2={\mathrm{Stab}}(p)$, we have $p\cdot p\cdot p^{-1}\subseteq p\cdot {\mathrm{Stab}}(p)$. As $[p:{\mathrm{Stab}}(p)]\leq [X:{\mathrm{Stab}}(p)]$ is small, we get $[p:{\mathrm{Stab}}(p)]=1$ by Lemma \ref{l:index lemma 1}(1). Thus, $p\cdot {\mathrm{Stab}}(p)=y\cdot {\mathrm{Stab}}(p)$. On the other hand, take $x\in y\cdot {\mathrm{Stab}}(p)$ arbitrary, so $x^{-1}=c\cdot y^{-1}$ with $c\in{\mathrm{Stab}}(p)={\mathrm{Stab}}(q)$. By definition of $q$, we know that $q={\mathrm{tp}}(a^{-1}b/N)$ with ${\mathrm{tp}}(b/N,a)$ wide and $a,b\in p$. Take $z_0$ such that ${\mathrm{tp}}(z_0,y/N)={\mathrm{tp}}(b,a/N)$. By $N${\hyp}invariance of $\mu$, ${\mathrm{tp}}(z_0/N,y)$ is wide. By the local atomic property, we can find $z\in {\mathrm{tp}}(z_0/N,y)$ such that ${\mathrm{tp}}(z/N,y,c)$ is wide. Thus, ${\mathrm{tp}}(y^{-1}z/N,c)$ is wide by invariance under left translations. By Lemma \ref{l:s1 and forking}, ${\mathrm{tp}}(y^{-1}z/N,c)$ does not fork over $N$. Thus, $x^{-1}\cdot z=c\cdot y^{-1}z\in {\mathrm{Stab}}(q)\cdot_{\mathrm{nf}(N)}q\subseteq {\mathrm{St}}(p)$. In particular, we conclude $x^{-1}\in {\mathrm{St}}(p)\cdot p^{-1}\subseteq p\cdot p^{-1}\cdot p^{-1}$, so $x\in p\cdot p\cdot p^{-1}$.
As $X$ is symmetric, $p\cdot {\mathrm{Stab}}(p)\subseteq X^3$, so $p\cdot {\mathrm{Stab}}(p)$ is $N${\hyp}medium. By Lemma \ref{l:translation of medium}, we conclude that ${\mathrm{Stab}}(p)$ is $N${\hyp}medium. Thus, we conclude property \textnormal{\textbf{(b)}}.
Finally, it remains to prove property \textnormal{\textbf{(c)}}. In other words, assuming that $\mu$ is also invariant under right translations, we want to show that $p\cdot p^{-1}\cdot p={\mathrm{Stab}}(p)\cdot p={\mathrm{Stab}}(p)\cdot y$ for any $y\in p$. Take $a\in p$ arbitrary and, using the local atomic property, find $b\in p$ such that ${\mathrm{tp}}(b/N,a)$ is wide. As $p$ is $N${\hyp}medium, by Lemma \ref{l:s1 and forking}, ${\mathrm{tp}}(b/N,a)$ does not fork over $N$. Consider $q_2={\mathrm{tp}}(ba^{-1}/N)\subseteq p\prescript{}{\mathrm{nf}(N)}{\cdot} p^{-1}$. Then, by invariance under right translations, $q_2$ is a wide $\bigwedge_N${\hyp}definable set. As $q_2\subseteq X^2$ and $p^{-1}\cdot q_2\subseteq X^3$, we conclude that $q_2$ and $p^{-1}\cdot q_2$ are $N${\hyp}medium. As $p\cdot_{\mathrm{nf}(N)} p^{-1}\subseteq {\mathrm{St}}(p)$ by Lemma \ref{l:lemma stabilizer}, we get using Example \ref{e:mos 2.8} and Lemma \ref{l:fundamental lemma for the stabilizer theorem}(3) that $p\prescript{}{\mathrm{nf}(N)}{\cdot} p^{-1}\subseteq {\mathrm{St}}(p)$ too, so $q_2\subseteq {\mathrm{St}}(p)$. By Theorem \ref{t:stabilizer theorem 1}, we conclude that ${\mathrm{Stab}}(p)={\mathrm{Stab}}(q_2)$ and ${\mathrm{Stab}}(q_2)\cdot_{\mathrm{nf}(N)}q_2\subseteq {\mathrm{St}}(p)$. Take $x\in {\mathrm{Stab}}(p)\cdot y$ arbitrary, so $x=cy$ with $c\in{\mathrm{Stab}}(p)={\mathrm{Stab}}(q_2)$ and $y\in p$. Find $z_0$ such that ${\mathrm{tp}}(z_0,y/N)={\mathrm{tp}}(a,b/N)$, so ${\mathrm{tp}}(y/N,z_0)$ is wide. Using the local atomic property, we may find $y_1\in {\mathrm{tp}}(y/N,z_0)$ such that ${\mathrm{tp}}(y_1/N,z_0,c)$ is wide. Take $z$ such that ${\mathrm{tp}}(z,y/N,c)={\mathrm{tp}}(z_0,y_1/N,c)$. Thus, $yz^{-1}\in q_2$ and ${\mathrm{tp}}(y/N,z,c)$ is wide. In particular, as $X^2$ is $N${\hyp}medium, by Lemma \ref{l:s1 and forking}, ${\mathrm{tp}}(yz^{-1}/N,c)$ does not fork over $N$. 
Thus, $x\cdot z^{-1}=c\cdot yz^{-1}\in {\mathrm{Stab}}(q_2)\cdot_{\mathrm{nf}(N)}q_2\subseteq {\mathrm{St}}(p)$, concluding $x\in p\cdot p^{-1}\cdot p$. As $x$ is arbitrary, ${\mathrm{Stab}}(p)\cdot y\subseteq p\cdot p^{-1}\cdot p$. As $p\cdot p^{-1}\cdot p\subseteq {\mathrm{Stab}}(p)\cdot y$, we get $p\cdot p^{-1}\cdot p={\mathrm{Stab}}(p)\cdot p={\mathrm{Stab}}(p)\cdot y$ for any $y\in p$. \mathbb{Q}ED
\end{dem}
\end{teo}
\begin{obss} \label{o:remarks stabilizer}
{\textnormal{\textbf{(1)}}} The main improvement of Theorem \ref{t:stabilizer theorem 2} with respect to the original formulations studied in \cite{hrushovski2011stable} and \cite{montenegro2018stabilizers} is that we do not require invariance of $\mu$ under right translations. In both papers, the authors mainly considered ideals invariant under two{\hyp}sided translations, but they already anticipated that the hypothesis of invariance under right translations could be weakened, perhaps at the cost of a few properties of ${\mathrm{Stab}}(p)$. We have shown that, in fact, it can be eliminated entirely without significant consequences. Indeed, we only use invariance under right translations for \textnormal{\textbf{(c)}}, and this is essentially replaced by \textnormal{\textbf{(a)}}.
{\textnormal{\textbf{(2)}}} Although in \cite[Theorem 2.12]{montenegro2018stabilizers} it was assumed that $X^4$ is $N${\hyp}medium, we only need $N${\hyp}mediumness of $X^3$.
{\textnormal{\textbf{(3)}}} Note that the locally atomic property is mostly used only for subsets of $p$. The only time when we apply the locally atomic property for other subsets is in the proof of Theorem \ref{t:stabilizer theorem 1} to get property {\textnormal{\textbf{(b)}}}. Thus, when $\mu$ only satisfies the locally atomic property for subsets of $p$, we only miss property {\textnormal{\textbf{(b)}}}.
For property {\textnormal{\textbf{(b)}}}, we have used the locally atomic property for subsets of ${\mathrm{Stab}}(p)=(p\cdot p^{-1})^2$. However, in fact, it suffices to assume it only for subsets of $p\cdot p\cdot p^{-1}$. Indeed, by Theorem \ref{t:stabilizer theorem 2}{\textnormal{\textbf{(a)}}} and invariance under left translations, if $V\subseteq {\mathrm{Stab}}(p)$ is $\bigwedge_B${\hyp}definable and wide, then $y\cdot V\subseteq p\cdot p\cdot p^{-1}$ is $\bigwedge_{B,y}${\hyp}definable and wide for any $y\in p$. By the locally atomic property on $p\cdot p\cdot p^{-1}$, there is $b\in V$ such that ${\mathrm{tp}}(yb/B,y)$ is wide, concluding that ${\mathrm{tp}}(b/B)$ is wide by invariance under left translations.
{\textnormal{\textbf{(4)}}} It suffices to have $\mu$ defined only on $X^3$, as, in that case, we may extend the ideal by taking the one generated by the finite unions of left translates of elements of $\mu$. As $\mu$ is invariant under left{\hyp}translations within $X^3$ (i.e. $V\in\mu$ if and only if $gV\in\mu$ for any $V\subseteq X^3$ and $g\in G$ such that $gV\subseteq X^3$), this extension coincides with $\mu$ inside of $X^3$, so we can apply the Stabilizer Theorem \ref{t:stabilizer theorem 2} by Remark \ref{o:remarks stabilizer}(3).
\end{obss}
\begin{prop}{\textnormal{\textbf{\cite[Corollary 3.11]{hrushovski2011stable}\&\cite[Proposition 2.13]{montenegro2018stabilizers}}}} Let $\mathfrak{N}\prec\mathfrak{M}$ with $|N|<\lambda$ and $G=\underrightarrow{\lim}\, X^n$ a piecewise $N${\hyp}hyperdefinable group generated by a symmetric $\bigwedge_N${\hyp}definable subset $X$. Let $\mu$ be an $N${\hyp}invariant locally atomic ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets of $G$ invariant under left translations such that $X$ is wide and $X^3$ is $N${\hyp}medium. Then, $\mu$ is $N${\hyp}medium on $G$.
\begin{dem} By the locally atomic property, there is a wide $N${\hyp}type $p\subseteq X$. By the Stabilizer Theorem \ref{t:stabilizer theorem 2}, $S\coloneqq {\mathrm{Stab}}(p)=(p\cdot p^{-1})^2$ is an $N${\hyp}medium normal subgroup of small index. Let $Y(\overline{b})\subseteq G$ be $\bigwedge_{N,\bar{b}}${\hyp}definable with $|\overline{b}|<\lambda$ small. Let $(\overline{b}_i)_{i\in\omega}$ be an $N${\hyp}indiscernible sequence with ${\mathrm{tp}}(\overline{b}/N)={\mathrm{tp}}(\overline{b}_i/N)$. Suppose that $Y(\overline{b})\notin \mu$. By the locally atomic property of $\mu$, there is a wide $N,\overline{b}${\hyp}type $q(\overline{b})\subseteq Y(\overline{b})$. By Lemma \ref{l:index lemma 1}(1), there is $a$ such that $q(\overline{b})\subseteq a\cdot S$. Let $(\overline{b}'_i)_{i\in \omega}$ be an $N,a^*${\hyp}indiscernible sequence realising ${\mathrm{tp}}((\overline{b}_i)_{i\in\omega}/N)$. By invariance under left translations, $a^{-1}q(\overline{b})\notin \mu$. As $S$ is $N${\hyp}medium, $a^{-1}q(\overline{b}'_0)\cap a^{-1}q(\overline{b}'_1)\notin \mu$. Thus, by $N${\hyp}invariance and invariance under left translations, we get that $q(\overline{b}_0)\cap q(\overline{b}_1)\notin \mu$, concluding $Y(\overline{b}_0)\cap Y(\overline{b}_1)\notin \mu$. \mathbb{Q}ED
\end{dem}
\end{prop}
\subsection{Near{\hyp}subgroups}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group and $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets invariant under left translations with $\kappa\geq \lambda>|\mathscr{L}|+|A|$. A \emph{$\mu${\hyp}near{\hyp}subgroup} of $G$ over $A$ is a wide $\bigwedge_A${\hyp}definable symmetric set $X$ generating $G$ such that $\mu_{\mid X^3}$ is locally atomic and $A${\hyp}medium. We say that $X$ is a near{\hyp}subgroup if it is a $\mu${\hyp}near{\hyp}subgroup for some ideal.
\begin{teo} \label{t:stabilizer nearsubgroups}
Let $G$ be a piecewise $A${\hyp}hyperdefinable group, $\mu$ an ideal of $\bigwedge_{<\lambda}${\hyp}definable subsets and $X$ a $\mu${\hyp}near{\hyp}subgroup over $A$. Then, there is a wide $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable normal subgroup of small index $S$ contained in $X^4$ and contained in every $\bigwedge_A${\hyp}definable subgroup of small index. Furthermore, $S=(p\cdot p^{-1})^2$ and $ppp^{-1}=pS=yS$ for any $y\in p$, where $p\subseteq X$ is a wide type over an elementary substructure of size $|A|+|\mathscr{L}|$.
\begin{dem} By the L\"{o}wenheim{\hyp}Skolem Theorem \cite[Theorem 2.3.1]{tent2012course}, we find an elementary substructure $\mathfrak{N}\preceq \mathfrak{M}$ containing $A^*$ with $|N|=|A^*|+|\mathscr{L}|<\lambda$. As $A^*\subseteq N$, $\mu_{\mid X^3}$ is $N${\hyp}medium. By the locally atomic property, there is a wide $N${\hyp}type $p\subseteq X$. Applying the Stabilizer Theorem \ref{t:stabilizer theorem 2} (and Remark \ref{o:remarks stabilizer}(3)), we conclude that there is a wide $\bigwedge_N${\hyp}definable normal subgroup $S\trianglelefteq G$ of small index which does not have proper $\bigwedge_N${\hyp}definable subgroups of small index. Furthermore, $S={\mathrm{Stab}}(p)=(p\cdot p^{-1})^2\subseteq X^4$ and $ppp^{-1}=pS=yS$ for any $y\in p$. As $A^*\subseteq N$, we conclude. \mathbb{Q}ED
\end{dem}
\end{teo}
\begin{teo}\label{t:general lie model}
Let $G=\underrightarrow{\lim}\, X^n$ be a piecewise $A${\hyp}hyperdefinable group generated by a near{\hyp}subgroup $X$ over $A$. Assume that $X^n$ is an approximate subgroup for some $n\in\mathbb{N}$. Then, $G$ has a connected Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $K\subseteq X^{2n+4}$ such that
{\textnormal{\textbf{(1)}}} $H\cap X^{2n}$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and commensurable to $X^n$,
{\textnormal{\textbf{(2)}}} $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity in $L$,
{\textnormal{\textbf{(3)}}} $H$ is generated by $H\cap X^{2n+4}$, and
{\textnormal{\textbf{(4)}}} $\pi_{H/K}$ is continuous and proper from the logic topology using $|A|+|\mathscr{L}|$ many parameters.
\begin{dem} As $X$ is a near{\hyp}subgroup, by Theorem \ref{t:stabilizer nearsubgroups}, $G^{00}_N\subseteq X^4$ with $|N|\leq |A|+|\mathscr{L}|$. As $X^n$ is an approximate subgroup, $\rfrac{X^n}{G^{00}_N}$ is an approximate subgroup. Thus, $\rfrac{X^{2n}}{G^{00}_N}$ is a neighbourhood of the identity by the Generic Set Lemma (Theorem \ref{t:generic set lemma}). Then, by Theorem \ref{t:logic existence of lie core}, we can find a connected Lie core $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $K\subseteq X^{2n}G^{00}_N\subseteq X^{2n+4}$.
{\textbf{(1)}} Since $\rfrac{X^{n}}{G^{00}_N}$ is compact and $\rfrac{H}{G^{00}_N}$ is an open subgroup in the global logic topology, we conclude that $\left[\rfrac{X^n}{G^{00}_N}:\rfrac{H}{G^{00}_N}\right]$ is finite. As $G^{00}_N\leq H$, we get that $[X^n:H]$ is finite. Thus, by \cite[Lemma 2.2, Lemma 2.3]{machado2021good}, $H\cap X^{2n}$ is an approximate subgroup commensurable to $X^{n}$.
{\textbf{(2)}} As $\rfrac{X^{2n}}{G^{00}_N}$ is a compact neighbourhood of the identity and $\rfrac{H}{G^{00}_N}$ is an open subgroup in the global logic topology, we conclude that $\rfrac{H\cap X^{2n}}{G^{00}_N}$ is a compact neighbourhood of the identity. Recall that $\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_N}\rightarrow L$ given by $\pi_{H/K}=\widetilde{\pi}_{H/K}\circ\pi_{G/G^{00}_N}$ is a continuous, closed, open and proper onto group homomorphism. Thus, we conclude that $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity.
{\textbf{(3)}} Since $L$ is connected and $\pi_{H/K}[X^{2n}\cap H]$ is a neighbourhood of the identity, $\pi_{H/K}[H\cap X^{2n}]$ generates $L$. Therefore, we have that $\pi^{-1}_{H/K}[\pi_{H/K}[H\cap X^{2n}]]=(H\cap X^{2n})\cdot K$ generates $H$. Now, $K\subseteq H\cap X^{2n+4}$, so $(H\cap X^{2n})\cdot K\subseteq (H\cap X^{2n+4})^2$, concluding that $H\cap X^{2n+4}$ generates $H$.
{\textbf{(4)}} By Proposition \ref{p:reduce number of parameters}, the global logic topology of $\rfrac{G}{G^{00}_N}$ is given using $|A|+|\mathscr{L}|$ many parameters, so $H\cap X^{2n}$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and $\pi_{H/K}$ is continuous and closed using $|A|+|\mathscr{L}|$ many parameters.
\mathbb{Q}ED
\end{dem}
\end{teo}
Alternatively, using Gleason{\hyp}Yamabe{\hyp}Carolino Theorem \ref{t:gleason yamabe carolino} rather than Gleason{\hyp}Yamabe Theorem \ref{t:gleason yamabe}, we get the following variation, which provides extra control over some of the parameters.
\begin{teo}\label{t:general lie model carolino}
There are functions $c:\ \mathbb{N}\rightarrow \mathbb{N}$ and $d:\ \mathbb{N}\rightarrow\mathbb{N}$ such that the following holds for any $\kappa${\hyp}saturated $\mathscr{L}${\hyp}structure $\mathfrak{M}$ and any set of parameters $A$ with $\kappa>|\mathscr{L}|+|A|$:
Let $G=\underrightarrow{\lim}\, X^n$ be a piecewise $A${\hyp}hyperdefinable group generated by a near{\hyp}subgroup $X$ over $A$. Assume that $X^n$ is a $k${\hyp}approximate subgroup for some $n$ and $k$. Then, $G$ has a Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $K\subseteq X^{12n+4}$ and $\dim(L)\leq d(k)$ such that
{\textnormal{\textbf{(1)}}} $H\cap X^{2n}$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and $c(k)${\hyp}commensurable to $X^n$,
{\textnormal{\textbf{(2)}}} $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity in $L$,
{\textnormal{\textbf{(3)}}} $H$ is generated by $H\cap X^{12n+4}$, and
{\textnormal{\textbf{(4)}}} $\pi_{H/K}$ is continuous and proper from the logic topology using $|A|+|\mathscr{L}|$ many parameters.
\begin{dem} As $X$ is a near{\hyp}subgroup, by Theorem \ref{t:stabilizer nearsubgroups}, $G^{00}_N\subseteq X^4$ with $|N|\leq |A|+|\mathscr{L}|$. As $X^n$ is a $k${\hyp}approximate subgroup, $\rfrac{X^n}{G^{00}_N}$ is an approximate subgroup. Thus, $\rfrac{X^{2n}}{G^{00}_N}$ is a neighbourhood of the identity by the Generic Set Lemma (Theorem \ref{t:generic set lemma}), and, in particular, $\rfrac{G}{G^{00}_N}$ is a locally compact topological group with the global logic topology. Thus, $\rfrac{X^n}{G^{00}_N}$ is contained in the interior, $\rfrac{U}{G^{00}_N}$, of $\rfrac{X^{3n}}{G^{00}_N}$ in the global logic topology. Now, we obviously have that $\rfrac{U^2}{G^{00}_N}\subseteq \rfrac{X^{6n}}{G^{00}_N}$, which is covered by $k^5$ left translates of $\rfrac{X^n}{G^{00}_N}\subseteq \rfrac{U}{G^{00}_N}$. Hence, $\rfrac{U}{G^{00}_N}$ is an open precompact $k^5${\hyp}approximate subgroup.
Let $c_0$ and $d_0$ be the functions provided by Gleason{\hyp}Yamabe{\hyp}Carolino Theorem \ref{t:gleason yamabe carolino}. Applying Theorem \ref{t:gleason yamabe carolino}, we get a Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $G^{00}_N\leq K\subseteq U^4\subseteq X^{12n}G^{00}_N\subseteq X^{12n+4}$ and $\dim(L)\leq d(k)\coloneqq d_0(k^5)$ such that $H\cap U^4$ generates $H$ and is $c_0(k^5)${\hyp}commensurable to $U$. Thus, $H\cap X^{12n+4}$ generates $H$ and $c_0(k^5)$ left translates of $H\cap X^{12n+4}$ cover $X^n$. As $X^n$ is a $k${\hyp}approximate subgroup, $k^{15}$ left translates of $X^n$ cover $X^{16n}\supseteq H\cap X^{12n+4}$. By \cite[Lemma 2.2]{machado2021good}, we get that $k^{15}$ left translates of $H\cap X^{2n}$ cover $H\cap X^{12n+4}$, so $H\cap X^{2n}$ and $X^n$ are $c(k)${\hyp}commensurable, where $c(k)\coloneqq c_0(k^5)k^{15}$.
Now, $\rfrac{H\cap X^{2n}}{G^{00}_N}$ is a compact neighbourhood of the identity in the global logic topology. Recall that $\widetilde{\pi}_{H/K}:\ \rfrac{H}{G^{00}_N}\rightarrow L$ given by $\pi_{H/K}=\widetilde{\pi}_{H/K}\circ\pi_{G/G^{00}_N}$ is a continuous, closed, open and proper onto group homomorphism. Thus, $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity of $L$.
By Proposition \ref{p:reduce number of parameters}, the global logic topology of $\rfrac{G}{G^{00}_N}$ is given using $|A|+|\mathscr{L}|$ many parameters, so $H\cap X^{2n}$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and $\pi_{H/K}$ is continuous and proper using $|A|+|\mathscr{L}|$ many parameters.
\mathbb{Q}ED
\end{dem}
\end{teo}
We conclude by applying Theorem \ref{t:general lie model} to the case of rough approximate subgroups. Recall that a \emph{$T${\hyp}rough $k${\hyp}approximate subgroup} of a group $G$ is a subset $X\subseteq G$ such that $X^2\subseteq \Delta XT$ with $|\Delta|\leq k\in\mathbb{N}_{>0}$ and $1\in T\subseteq G$.
Let $G$ be a group and $X\subseteq G$ a symmetric subset. For some fixed $n$, assume that $X^n$ is a $T_i${\hyp}rough $k${\hyp}approximate subgroup for a sequence $(T_i)_{i\in\mathbb{N}}$ of thickenings such that
\textbf{(a)} it decreases in doubling scales, i.e. $T_{i+1}T^{-1}_{i+1}\subseteq T_i$ for each $i\in\mathbb{N}$, and
\textbf{(b)} it is asymptotically normalised by $X$, i.e. $x^{-1}T_{i+1}x\subseteq T_i$ for each $x\in X$ and $i\in \mathbb{N}$.
Assuming saturation and $\bigwedge${\hyp}definability, we have that $T=\bigcap T_i$ is an $\bigwedge${\hyp}definable subgroup of $G$ normalised by $X$ and $X^n$ is a $T${\hyp}rough $k${\hyp}approximate subgroup.
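The verification that $T=\bigcap T_i$ is a subgroup normalised by $X$ is routine; for convenience, here is a sketch using only conditions (a), (b) and the fact that $1\in T_i$ for each $i$:
```latex
% T = \bigcap_i T_i is nonempty since 1 \in T_i for every i.
% For s,t \in T and x \in X:
\begin{align*}
st^{-1} &\in T_{i+1}T^{-1}_{i+1} \subseteq T_i \quad\text{for every } i,
  &&\text{so } st^{-1}\in T \text{ by (a)};\\
x^{-1}tx &\in x^{-1}T_{i+1}x \subseteq T_i \quad\text{for every } i,
  &&\text{so } x^{-1}tx\in T \text{ by (b)}.
\end{align*}
```
Hence $T$ is a subgroup with $X\subseteq \mathrm{N}_G(T)$, and $X^2\subseteq \Delta X T_i$ for each $i$ yields $X^2\subseteq \Delta X T$ after intersecting over $i$ (using saturation to keep $\Delta$ fixed).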
\begin{teo}[Rough Lie Model Theorem] \label{t:rough lie model}
Let $G$ be an $A${\hyp}definable group, $T\leq G$ an $\bigwedge_{A}${\hyp}definable subgroup and $X\subseteq G$ a symmetric $\bigwedge_{A}${\hyp}definable subset. Write $\widetilde{G}$ for the subgroup generated by $X$. Assume that $X$ normalises $T$, $\rfrac{X}{\kern 0.1em T}$ is a near{\hyp}subgroup of $\rfrac{\widetilde{G}}{\kern 0.1em T}$ and $X^n$ is a $T${\hyp}rough approximate subgroup for some $n$. Then, $\widetilde{G}T\leq G$ has a connected Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $T\subseteq K\subseteq X^{2n+4}T$ such that
\textnormal{\textbf{(1)}} $H\cap X^{2n}T$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and commensurable to $X^nT$,
\textnormal{\textbf{(2)}} $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity in $L$,
\textnormal{\textbf{(3)}} $H$ is generated by $H\cap X^{2n+4}T$, and
\textnormal{\textbf{(4)}} $\pi_{H/K}$ is continuous and proper from the logic topology using $|A|+|\mathscr{L}|$ many parameters.
\end{teo}
Alternatively, using Theorem \ref{t:general lie model carolino}:
\begin{teo}[Rough Lie model Theorem, version 2] \label{t:rough lie model carolino}
There are functions $c:\ \mathbb{N}\rightarrow \mathbb{N}$ and $d:\ \mathbb{N}\rightarrow\mathbb{N}$ such that the following holds for any $\kappa${\hyp}saturated $\mathscr{L}${\hyp}structure $\mathfrak{M}$ and any set of parameters $A$ with $\kappa>|\mathscr{L}|+|A|$:
Let $G$ be an $A${\hyp}definable group, $T\leq G$ an $\bigwedge_{A}${\hyp}definable subgroup and $X\subseteq G$ a symmetric $\bigwedge_{A}${\hyp}definable subset. Write $\widetilde{G}$ for the subgroup generated by $X$. Assume that $X$ normalises $T$, $\rfrac{X}{\kern 0.1em T}$ is a near{\hyp}subgroup of $\rfrac{\widetilde{G}}{\kern 0.1em T}$ and $X^n$ is a $T${\hyp}rough $k${\hyp}approximate subgroup for some $n$ and $k$. Then, $\widetilde{G}T\leq G$ has a Lie model $\pi_{H/K}:\ H\rightarrow L=\rfrac{H}{K}$ with $T\subseteq K\subseteq X^{12n+4}T$ and $\dim(L)\leq d(k)$ such that
\textnormal{\textbf{(1)}} $H\cap X^{2n}T$ is $\bigwedge_{|A|+|\mathscr{L}|}${\hyp}definable and $c(k)${\hyp}commensurable to $X^nT$,
\textnormal{\textbf{(2)}} $\pi_{H/K}[H\cap X^{2n}]$ is a compact neighbourhood of the identity in $L$,
\textnormal{\textbf{(3)}} $H$ is generated by $H\cap X^{12n+4}T$, and
\textnormal{\textbf{(4)}} $\pi_{H/K}$ is continuous and proper from a logic topology using $|A|+|\mathscr{L}|$ many parameters.
\end{teo}
\end{document} |
\begin{document}
\title{Tighter monogamy relations in multipartite systems}
\author{Zhi-Xiang Jin
\thanks{Corresponding author: [email protected]}\\
School of Mathematical Sciences, Capital Normal University, \\ Beijing 100048, China\\
\and Jun Li
\thanks{Corresponding author: [email protected]}\\
School of Mathematical Sciences, Capital Normal University, \\Beijing 100048, China\\
\and Tao Li
\thanks{Corresponding author: [email protected]}\\
School of Science, Beijing Technology and Business University, \\Beijing 100048, China\\
\and Shao-Ming Fei
\thanks{Corresponding author: [email protected]}\\
School of Mathematical Sciences, Capital Normal University, \\Beijing 100048, China\\
Max-Planck-Institute for Mathematics in the Sciences, \\Leipzig 04103, Germany
}
\maketitle
\begin{abstract}
Monogamy relations characterize the distributions of entanglement in multipartite systems. We investigate monogamy relations related to the concurrence $C$, the entanglement of formation $E$, negativity $N_c$ and Tsallis-$q$ entanglement $T_q$. We derive monogamy relations for the $\alpha$th power of entanglement that are tighter than the existing ones for some classes of quantum states. Detailed examples are presented.
\end{abstract}
\section{INTRODUCTION}
Due to the essential roles played in quantum communication and quantum information processing, quantum entanglement \cite{1,2,3,4,5,6,7,8} has been the subject of many studies in recent years. The study of quantum entanglement from various viewpoints has been a very active area and has led to many impressive results. As one of the fundamental differences between quantum and classical correlations, an essential property of entanglement is that a quantum system entangled with one of the other subsystems limits its entanglement with the remaining ones. The monogamy relations characterize the distribution of entanglement in multipartite quantum systems. Moreover, the monogamy property has emerged as a key ingredient in the security analysis of quantum key distribution \cite{9}.
For a tripartite system $A$, $B$, and $C$, the usual monogamy of an entanglement measure $\mathcal{E}$ implies that the entanglement between $A$ and $BC$ satisfies $\mathcal{E}_{A|BC}\geqslant \mathcal{E}_{AB} +\mathcal{E}_{AC}$ \cite{10}. However, such monogamy relations are not always satisfied by all entanglement measures for all quantum states. In fact, it has been shown that the squared concurrence $C^2$ \cite{11,12} and the squared entanglement of formation $E^2$ \cite{13} satisfy the monogamy relations for multiqubit states. The monogamy inequality was further generalized to various entanglement measures such as continuous-variable entanglement \cite{14,15,16}, squashed entanglement \cite{10,17,18}, entanglement negativity \cite{19,20,21,22,23}, Tsallis-$q$ entanglement \cite{24,25}, and R\'enyi entanglement \cite{26,27,28}.
In this paper, we derive monogamy inequalities which are tighter than all the existing ones, in terms of the concurrence $C$, the entanglement of formation $E$, negativity $N_c$, and Tsallis-$q$ entanglement $T_q$.
\section{TIGHTER MONOGAMY RELATIONS FOR CONCURRENCE}
We first consider the monogamy inequalities satisfied by the concurrence. Let $\mathds{H}_X$ denote a discrete finite-dimensional complex vector space associated with a quantum subsystem $X$. For a bipartite pure state $|\psi\rangle_{AB}$ in vector space $\mathds{H}_A\otimes \mathds{H}_B$, the concurrence is given by \cite{29,30,31}
\begin{equation}
\label{CD} C(|\psi\rangle_{AB})=\sqrt{2\left[1-\mathrm{Tr}(\rho_A^2)\right]},
\end{equation}
where $\rho_A$ is the reduced density matrix by tracing over the subsystem $B$, $\rho_A=\mathrm{Tr}_B(|\psi\rangle_{AB}\langle\psi|)$. The concurrence for a bipartite mixed state $\rho_{AB}$ is defined by the convex roof extension
\begin{equation}
C(\rho_{AB})=\min_{\{p_i,|\psi_i\rangle\}}\sum_ip_iC(|\psi_i\rangle),
\end{equation}
where the minimum is taken over all possible decompositions of $\rho_{AB}=\sum\limits_{i}p_i|\psi_i\rangle\langle\psi_i|$, with $p_i\geqslant0$, $\sum\limits_{i}p_i=1$ and $|\psi_i\rangle\in \mathds{H}_A\otimes \mathds{H}_B$.
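As an illustrative aside (not part of the original text), the pure-state concurrence above is straightforward to compute from the reduced density matrix; the following Python/NumPy sketch does so for two small examples.

```python
import numpy as np

def concurrence_pure(psi, dim_a, dim_b):
    """C(|psi>) = sqrt(2[1 - Tr(rho_A^2)]) for a bipartite pure state."""
    m = np.asarray(psi, dtype=complex).reshape(dim_a, dim_b)
    rho_a = m @ m.conj().T                    # rho_A = Tr_B |psi><psi|
    purity = np.real(np.trace(rho_a @ rho_a))
    return np.sqrt(max(2.0 * (1.0 - purity), 0.0))

# Bell state (|00>+|11>)/sqrt(2) is maximally entangled: C = 1.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
print(concurrence_pure(bell, 2, 2))           # ~1.0
# A product state |00> is unentangled: C = 0.
print(concurrence_pure([1, 0, 0, 0], 2, 2))   # 0.0
```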
For a tripartite state $|\psi\rangle_{ABC}$, the concurrence of assistance is defined by \cite{32,33}
\begin{eqnarray*}
C_a(|\psi\rangle_{ABC})\equiv C_a(\rho_{AB})=\max\limits_{\{p_i,|\psi_i\rangle\}}\sum_ip_iC(|\psi_i\rangle),
\end{eqnarray*}
where the maximum is taken over all possible decompositions of $\rho_{AB}=\mathrm{Tr}_C(|\psi\rangle_{ABC}\langle\psi|)=\sum\limits_{i}p_i|\psi_i\rangle_{AB}\langle\psi_i|$. When $\rho_{AB}=|\psi\rangle_{AB}\langle\psi|$ is a pure state, one has $C(|\psi\rangle_{AB})=C_a(\rho_{AB})$.
For an $N$-qubit state $\rho_{AB_1\cdots B_{N-1}}\in \mathds{H}_A\otimes \mathds{H}_{B_1}\otimes\cdots\otimes \mathds{H}_{B_{N-1}}$, the concurrence $C(\rho_{A|B_1\cdots B_{N-1}})$ of the state, viewed as a bipartite state under the partition $A$ and $B_1,B_2,\cdots, B_{N-1}$, satisfies \cite{34}
\begin{eqnarray}\label{mo1}
&&C^{\alpha}(\rho_{A|B_1B_2\cdots B_{N-1}})\nonumber \\
&&\geqslant C^{\alpha}(\rho_{AB_1})+C^{\alpha}(\rho_{AB_2})+\cdots+C^{\alpha}(\rho_{AB_{N-1}}),
\end{eqnarray}
for $\alpha\geqslant2$, where $\rho_{AB_i}=\mathrm{Tr}_{B_1\cdots B_{i-1}B_{i+1}\cdots B_{N-1}}(\rho_{AB_1\cdots B_{N-1}})$. The relation ($\ref{mo1}$) was further improved: for $\alpha\geqslant2$, if $C(\rho_{AB_{i}})\geqslant C(\rho_{A|B_{i+1}\cdots B_{N-1}})$ for $i= 1,2, \cdots, m$ and $C(\rho_{AB_{j}})\leqslant C(\rho_{A|B_{j+1}\cdots B_{N-1}})$ for $j= m+1, \cdots, N-2$, $\forall 1 \leqslant m\leqslant N-3$, $N\geqslant 4$, then \cite{35},
\begin{eqnarray}\label{mo2}
&&C^\alpha(\rho_{A|B_1B_2\cdots B_{N-1}})\geqslant \nonumber\\
&&C^\alpha(\rho_{AB_1})+\frac{\alpha}{2} C^\alpha(\rho_{AB_2})+\cdots+\left(\frac{\alpha}{2}\right)^{m-1}C^\alpha(\rho_{AB_m})\nonumber\\
&&+\left(\frac{\alpha}{2}\right)^{m+1}\left[C^\alpha(\rho_{AB_{m+1}})+\cdots+C^\alpha(\rho_{AB_{N-2}})\right]\nonumber\\
&&+\left(\frac{\alpha}{2}\right)^{m}C^\alpha(\rho_{AB_{N-1}})
\end{eqnarray}
and for all $\alpha<0$,
\begin{eqnarray}\label{l2}
\setlength{\belowdisplayskip}{3pt}
&&C^\alpha(\rho_{A|B_1B_2\cdots B_{N-1}})< \nonumber \\
&&K[C^\alpha(\rho_{AB_1})+C^\alpha(\rho_{AB_2})+\cdots+C^\alpha(\rho_{AB_{N-1}})],
\end{eqnarray}
where $K=\frac{1}{N-1}$.
In the following, we show that these monogamy inequalities satisfied by the concurrence can be further refined and made even tighter. For convenience, we denote by $C_{AB_i}=C(\rho_{AB_i})$ the
concurrence of $\rho_{AB_i}$ and set $C_{A|B_1B_2\cdots B_{N-1}}=C(\rho_{A|B_1 \cdots B_{N-1}})$. We first introduce two lemmas.
\textit{Lemma 1}. For any real numbers $t$ and $x$ with $0\leqslant t \leqslant 1$ and $x\in [1, \infty)$, we have $(1+t)^x\geqslant 1+(2^{x}-1)t^x$.
\textit{Proof}. Let $f(x,y)=(1+y)^x-y^x$ with $x\geqslant 1,~y\geqslant 1$; then $\frac{\partial f}{\partial y}=x[(1+y)^{x-1}-y^{x-1}]\geqslant 0$. Therefore, $f(x,y)$ is an increasing function of $y$, i.e., $f(x,y)\geqslant f(x,1)=2^x-1$. Setting $y=\frac{1}{t}$, $0<t\leqslant 1$, we obtain $(1+\frac{1}{t})^x-(\frac{1}{t})^x\geqslant 2^x-1$; multiplying both sides by $t^x$ gives $(1+t)^x\geqslant 1+(2^x-1)t^x$. When $t=0$, the inequality is trivial.~~~~~~~~~~~~ $\blacksquare$
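Lemma 1 can also be spot-checked numerically; the snippet below (our own sanity check, not from the paper) samples random pairs $(t,x)$ and counts violations of the inequality.

```python
import numpy as np

# Sample random (t, x) with 0 <= t <= 1 and x >= 1 and count violations of
# (1 + t)^x >= 1 + (2^x - 1) t^x   (Lemma 1), up to floating-point slack.
rng = np.random.default_rng(0)
t = rng.uniform(0.0, 1.0, 10000)
x = rng.uniform(1.0, 10.0, 10000)
violations = int(np.sum((1 + t)**x < 1 + (2**x - 1) * t**x - 1e-9))
print(violations)   # 0
```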
\textit{Lemma 2}. For any $2\otimes2\otimes2^{n-2}$ mixed state $\rho\in \mathds{H}_A\otimes \mathds{H}_{B}\otimes \mathds{H}_{C}$, if $C_{AB}\geqslant C_{AC}$, we have
\begin{equation}\label{le2}
C^\alpha_{A|BC}\geqslant C^\alpha_{AB}+(2^{\frac{\alpha}{2}}-1)C^\alpha_{AC},
\end{equation}
for all $\alpha\geqslant2$.
\textit{Proof}. It has been shown that $C^2_{A|BC}\geqslant C^2_{AB}+C^2_{AC}$ for arbitrary $2\otimes2\otimes2^{n-2}$ tripartite state $\rho_{ABC}$ \cite{11,37}. Then, if $C_{AB}\geqslant C_{AC}$, we have
\begin{eqnarray*}
C^\alpha_{A|BC}&&\geqslant (C^2_{AB}+C^2_{AC})^{\frac{\alpha}{2}}\\
&&=C^\alpha_{AB}\left(1+\frac{C^2_{AC}}{C^2_{AB}}\right)^{\frac{\alpha}{2}} \\
&& \geqslant C^\alpha_{AB}\left[1+(2^{\frac{\alpha}{2}}-1)\left(\frac{C^2_{AC}}{C^2_{AB}}\right)^{\frac{\alpha}{2}}\right]\\
&&=C^\alpha_{AB}+(2^{\frac{\alpha}{2}}-1)C^\alpha_{AC}
\end{eqnarray*}
where the second inequality is due to Lemma 1. As the subsystems $B$ and $C$ play symmetric roles here, we may assume $C_{AB}\geqslant C_{AC}$ without loss of generality. Moreover,
if $C_{AB}=0$ then $C_{AB}=C_{AC}=0$, so the lower bound becomes trivially zero.~~~~~~~~~~~~$\blacksquare$
From Lemma 2, we have the following theorem.
\textit{Theorem 1}. For an $N$-qubit mixed state, if ${C_{AB_i}}\geqslant {C_{A|B_{i+1}\cdots B_{N-1}}}$ for $i=1, 2, \cdots, m$, and
${C_{AB_j}}\leq {C_{A|B_{j+1}\cdots B_{N-1}}}$ for $j=m+1,\cdots,N-2$,
$\forall$ $1\leq m\leq N-3$, $N\geqslant 4$, then we have
\begin{eqnarray}\label{th1}
&&C^\alpha_{A|B_1B_2\cdots B_{N-1}} \nonumber \\
&&~~~\geqslant C^\alpha_{AB_1}+(2^{\frac{\alpha}{2}}-1) C^\alpha_{AB_2}+\cdots+(2^{\frac{\alpha}{2}}-1)^{m-1}C^\alpha_{AB_m}\nonumber\\
&&~~~~~~+(2^{\frac{\alpha}{2}}-1)^{m+1}(C^\alpha_{AB_{m+1}}+\cdots+C^\alpha_{AB_{N-2}}) \nonumber\\
&&~~~~~~+(2^{\frac{\alpha}{2}}-1)^{m}C^\alpha_{AB_{N-1}}
\end{eqnarray}
for all $\alpha\geqslant2$.
\textit{Proof}. From the inequality (\ref{le2}), we have
\small
\begin{eqnarray}\label{pf11}
&&C^{\alpha}_{A|B_1B_2\cdots B_{N-1}}\nonumber\\
&&\geqslant C^{\alpha}_{AB_1}+(2^{\frac{\alpha}{2}}-1)C^{\alpha}_{A|B_2\cdots B_{N-1}}\nonumber\\
&&\geqslant C^{\alpha}_{AB_1}+(2^{\frac{\alpha}{2}}-1)C^{\alpha}_{AB_2}
+(2^{\frac{\alpha}{2}}-1)^2C^{\alpha}_{A|B_3\cdots B_{N-1}}\nonumber\\
&& \geqslant \cdots\nonumber\\
&&\geqslant C^{\alpha}_{AB_1}+(2^{\frac{\alpha}{2}}-1)C^{\alpha}_{AB_2}+\cdots+(2^{\frac{\alpha}{2}}-1)^{m-1}C^{\alpha}_{AB_m}\nonumber\\
&&~~~~+(2^{\frac{\alpha}{2}}-1)^m C^{\alpha}_{A|B_{m+1}\cdots B_{N-1}}.
\end{eqnarray}
\normalsize
Similarly, as ${C_{AB_j}}\leqslant {C_{A|B_{j+1}\cdots B_{N-1}}}$ for $j=m+1,\cdots,N-2$, we get
\begin{eqnarray}\label{pf12}
&&C^{\alpha}_{A|B_{m+1}\cdots B_{N-1}} \nonumber\\
&&~~~\geqslant (2^{\frac{\alpha}{2}}-1)C^{\alpha}_{AB_{m+1}}+C^{\alpha}_{A|B_{m+2}\cdots B_{N-1}}\nonumber\\
&&~~~\geqslant (2^{\frac{\alpha}{2}}-1)(C^{\alpha}_{AB_{m+1}}+\cdots+C^{\alpha}_{AB_{N-2}})\nonumber\\
&&~~~~~~~+C^{\alpha}_{AB_{N-1}}.
\end{eqnarray}
Combining (\ref{pf11}) and (\ref{pf12}), we have Theorem 1. ~~~~~~~~~~$\blacksquare$
Since $(2^{\frac{\alpha}{2}}-1)^m\geqslant (\alpha/2)^m$ for $\alpha\geqslant 2$ and all $1\leq m\leq N-3$, our formula (\ref{th1}) in Theorem 1 gives a tighter monogamy relation, with a larger lower bound than (\ref{mo1}) and (\ref{mo2}). In Theorem 1, we have assumed that
some ${C_{AB_i}}\geqslant {C_{A|B_{i+1}\cdots B_{N-1}}}$ and some
${C_{AB_j}}\leq {C_{A|B_{j+1}\cdots B_{N-1}}}$ for the $2\otimes2\otimes\cdots\otimes2$ mixed state $\rho\in \mathds{H}_A\otimes \mathds{H}_{B_1}\otimes\cdots\otimes \mathds{H}_{{B_{N-1}}}$. If all ${C_{AB_i}}\geqslant {C_{A|B_{i+1}\cdots B_{N-1}}}$ for $i=1, 2, \cdots, N-2$, then we have the following conclusion:
\textit{Theorem 2}. If ${C_{AB_i}}\geqslant {C_{A|B_{i+1}\cdots B_{N-1}}}$ for all $i=1, 2, \cdots, N-2$, then we have
\small
\begin{eqnarray}\label{Co}
&&C^\alpha_{A|B_1\cdots B_{N-1}} \nonumber\\
&&~~~~~~\geqslant C^\alpha_{AB_1}+(2^{\frac{\alpha}{2}}-1) C^\alpha_{AB_2}+\cdots+(2^{\frac{\alpha}{2}}-1)^{N-3}C^\alpha_{AB_{N-2}}\nonumber\\
&&~~~~~~~~~+(2^{\frac{\alpha}{2}}-1)^{N-2}C^\alpha_{AB_{N-1}}.
\end{eqnarray}
\normalsize
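The improvement of Theorems 1 and 2 over (\ref{mo2}) rests on the elementary inequality $2^{\alpha/2}-1\geqslant \alpha/2$ for $\alpha\geqslant 2$, hence $(2^{\alpha/2}-1)^m\geqslant(\alpha/2)^m$ for every $m\geqslant 1$. A quick numerical confirmation (ours, for illustration only):

```python
import numpy as np

# Check 2^(alpha/2) - 1 >= alpha/2 on a grid of alpha in [2, 20];
# equality holds at alpha = 2 and the gap grows with alpha.
alphas = np.linspace(2, 20, 1000)
ok = bool(np.all(2**(alphas / 2) - 1 >= alphas / 2))
print(ok)   # True
```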
{\it Example 1}. Let us consider the three-qubit state $|\psi\rangle$ in the generalized Schmidt decomposition form \cite{38,39},
\begin{eqnarray}\label{ex1}
|\psi\rangle&=&\lambda_0|000\rangle+\lambda_1e^{i{\varphi}}|100\rangle+\lambda_2|101\rangle \nonumber\\
&&+\lambda_3|110\rangle+\lambda_4|111\rangle,
\end{eqnarray}
where $\lambda_i\geqslant0,~i=0,1,2,3,4$ and $\sum\limits_{i=0}\limits^4\lambda_i^2=1.$
From the definition of concurrence, we have $C_{A|BC}=2\lambda_0\sqrt{\lambda_2^2+\lambda_3^2+\lambda_4^2}$, $C_{AB}=2\lambda_0\lambda_2$, and $C_{AC}=2\lambda_0\lambda_3$. Setting $\lambda_{0}=\lambda_{1}=\frac{1}{2}$ and $\lambda_{2}=\lambda_{3}=\lambda_{4}=\frac{\sqrt{6}}{6}$, one has $C_{A|BC}=\frac{\sqrt{2}}{2}$ and $C_{AB}=C_{AC}=\frac{\sqrt{6}}{6}$, so that $C_{A|BC}^{\alpha}=(\frac{\sqrt{2}}{2})^{\alpha}$, $C_{AB}^{\alpha}+C_{AC}^{\alpha}=2(\frac{\sqrt{6}}{6})^{\alpha}$, $C_{AB}^{\alpha}+\frac{\alpha}{2}C_{AC}^{\alpha}=(1+\frac{\alpha}{2})(\frac{\sqrt{6}}{6})^{\alpha}$, and $C_{AB}^{\alpha}+(2^{\frac{\alpha}{2}}-1)C_{AC}^{\alpha}=2^{\frac{\alpha}{2}}(\frac{\sqrt{6}}{6})^{\alpha}$.
One can see that our result is better than the results in \cite{34} and \cite{35} for $\alpha\geqslant2$; see Fig.~1.
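The bound comparison in Example 1 can be reproduced with a few lines of Python (an illustrative check that is not part of the original paper):

```python
import numpy as np

# Example 1 parameters: lam0 = lam1 = 1/2, lam2 = lam3 = lam4 = sqrt(6)/6.
l0, l2 = 0.5, np.sqrt(6) / 6
C_ABC = 2 * l0 * np.sqrt(3 * l2**2)    # = sqrt(2)/2
C_AB = C_AC = 2 * l0 * l2              # = sqrt(6)/6
eps = 1e-12
for alpha in np.linspace(2, 10, 9):
    bound34 = C_AB**alpha + C_AC**alpha                        # bound of Ref. [34]
    bound35 = C_AB**alpha + (alpha / 2) * C_AC**alpha          # bound of Ref. [35]
    ours = C_AB**alpha + (2**(alpha / 2) - 1) * C_AC**alpha    # Theorem 1 bound
    assert C_ABC**alpha + eps >= ours >= bound35 - eps
    assert bound35 >= bound34 - eps
print("our bound is the largest for every sampled alpha >= 2")
```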
\begin{figure}
\caption{Concurrence of $|\psi\rangle$ in Example 1 and its lower bounds, plotted as functions of $\alpha$. The black solid line is the concurrence, the red dashed line is the lower bound from our result, and the blue dotted and green dot-dashed lines are the lower bounds from the results in \cite{34} and \cite{35}, respectively.}
\end{figure}
\section{TIGHTER MONOGAMY RELATIONS FOR EOF}
The entanglement of formation (EOF) \cite{40,41} is a well-defined important measure of entanglement for bipartite systems. Let $\mathds{H}_A$ and $\mathds{H}_B$ be $m$- and $n$- dimensional $(m\leqslant n)$ vector spaces, respectively. The EOF of a pure state $|\psi\rangle\in \mathds{H}_A\otimes \mathds{H}_B$ is defined by
\begin{equation}\label{SEA}
E(|\psi\rangle)=S(\rho_A),
\end{equation}
where $\rho_A=\mathrm{Tr}_B(|\psi\rangle\langle\psi|)$ and $S(\rho)=-\mathrm{Tr}(\rho \log_2\rho).$ For a bipartite mixed state $\rho_{AB}\in \mathds{H}_A\otimes \mathds{H}_B$, the entanglement of formation is given by
\begin{equation}\label{SEB}
E(\rho_{AB})=\min_{\{p_i,|\psi_i\rangle\}}\sum\limits_ip_iE(|\psi_i\rangle)
\end{equation}
with the minimum taking over all possible pure-state decompositions of $\rho_{AB}$.
Denote $f(x)=H\left(\frac{1+\sqrt{1-x}}{2}\right)$, where $H(x)=-x\log_2(x)-(1-x)\log_2(1-x)$. From (\ref{SEA}) and (\ref{SEB}), one has $E(|\psi\rangle)=f\left(C^2(|\psi\rangle)\right)$ for $2\otimes m~(m\geqslant2)$ pure state $|\psi\rangle$, and $E(\rho)=f\left(C^2(\rho)\right)$ for two-qubit mixed state $\rho$ \cite{42}. It is obvious that $f(x)$ is a monotonically increasing function for $0\leqslant x\leqslant1$. $f(x)$ satisfies the following relations:
\begin{equation}\label{F2}
f^{\sqrt{2}}(x^2+y^2)\geqslant f^{\sqrt{2}}(x^2)+f^{\sqrt{2}}(y^2),
\end{equation}
where $f^{\sqrt{2}}(x^2+y^2)=[f(x^2+y^2)]^{\sqrt{2}}.$
It has been shown that the EOF does not satisfy the inequality $E_{AB}+E_{AC}\leq E_{A|BC}$ \cite{43}. In \cite{44}, the authors showed that the EOF, as a monotonic function, satisfies $E^2(C^2_{A|B_1B_2\cdots B_{N-1}})\geqslant E^2(\sum_{i=1}^{N-1}C^2_{AB_i})$. For $N$-qubit systems, one has \cite{34}
\begin{equation}\label{eof1}
E^\alpha_{A|B_1B_2\cdots B_{N-1}}\geqslant E^\alpha_{AB_1}+E^\alpha_{AB_2}+\cdots+E^\alpha_{AB_{N-1}}
\end{equation}
for $\alpha\geqslant\sqrt{2}$, where $E_{A|B_1B_2\cdots B_{N-1}}$ is the entanglement of formation of $\rho$ in bipartite partition $A|B_1B_2\cdots B_{N-1}$, and $E_{AB_i}$, $i=1,2,\cdots,N-1$, is the EOF of the mixed states $\rho_{AB_i}=\mathrm{Tr}_{B_1B_2\cdots B_{i-1},B_{i+1}\cdots B_{N-1}}(\rho)$. It is further proved that for $\alpha\geqslant\sqrt{2}$, if $C_{AB_{i}}\geqslant C_{A|B_{i+1}\cdots B_{N-1}}$ for $i=1,2,\cdots,m$ and $C_{AB_{j}}\leqslant C_{A|B_{j+1}\cdots B_{N-1}}$ for $j= m+1,\cdots,N-2$, $\forall 1 \leqslant m \leqslant N-3$, $N \geqslant 4$, then \cite{35}
\begin{eqnarray}\label{eof2}
&&E^\alpha_{A|B_1B_2\cdots B_{N-1}}\nonumber\\
&&~~~~~~\geqslant E^\alpha_{AB_1}+({\alpha}/{\sqrt{2}}) E^\alpha_{AB_2}+\cdots+({\alpha}/{\sqrt{2}})^{m-1}E^\alpha_{AB_m}\nonumber\\
&&~~~~~~~~~+({\alpha}/{\sqrt{2}})^{m+1}(E^\alpha_{AB_{m+1}}+\cdots+E^\alpha_{AB_{N-2}})\nonumber\\
&&~~~~~~~~~+({\alpha}/{\sqrt{2}})^{m}E^\alpha_{AB_{N-1}}.
\end{eqnarray}
In fact, we can prove the following more general result.
\textit{Theorem 3}. For any $N$-qubit mixed state $\rho\in \mathds{H}_A\otimes \mathds{H}_{B_1}\otimes\cdots\otimes \mathds{H}_{{B_{N-1}}}$, if
${C_{AB_i}}\geqslant {C_{A|B_{i+1}\cdots B_{N-1}}}$ for $i=1, 2, \cdots, m$, and
${C_{AB_j}}\leqslant {C_{A|B_{j+1}\cdots B_{N-1}}}$ for $j=m+1,\cdots,N-2$, $\forall$ $1\leqslant m\leqslant N-3$, $N\geqslant 4$, the entanglement of formation $E(\rho)$ satisfies
\begin{eqnarray}\label{th3}
&&~~~E^\alpha_{A|B_1B_2\cdots B_{N-1}}\nonumber\\
&&~~~~~~\geqslant E^\alpha_{AB_1}+(2^{t}-1) E^\alpha_{AB_2}+\cdots+(2^{t}-1)^{m-1}E^\alpha_{AB_m}\nonumber\\
&&~~~~~~~~~+(2^{t}-1)^{m+1}(E^\alpha_{AB_{m+1}}+\cdots+E^\alpha_{AB_{N-2}})\nonumber\\
&&~~~~~~~~~+(2^{t}-1)^{m}E^\alpha_{AB_{N-1}},
\end{eqnarray}
for $\alpha\geqslant\sqrt{2}$, where $t={\alpha}/{\sqrt{2}}$.
\textit{Proof}. For $\alpha\geqslant\sqrt{2}$, we have
\begin{eqnarray}\label{FA}
f^{{\alpha}}(x^2+y^2)&&=\left(f^{\sqrt{2}}(x^2+y^2)\right)^t \nonumber\\
&&\geqslant \left(f^{\sqrt{2}}(x^2)+f^{\sqrt{2}}(y^2)\right)^t \nonumber\\
&&\geqslant \left(f^{\sqrt{2}}(x^2)\right)^t+(2^{t}-1)\left(f^{\sqrt{2}}(y^2)\right)^t\nonumber\\
&& = f^{\alpha}(x^2)+(2^{t}-1)f^{\alpha}(y^2),
\end{eqnarray}
where the first inequality is due to the inequality (\ref{F2}), and the second inequality is obtained from a similar consideration in the proof of the second inequality in (\ref{le2}).
Let $\rho=\sum\limits_ip_i|\psi_i\rangle\langle\psi_i|\in \mathds{H}_A\otimes \mathds{H}_{B_1}\otimes\cdots\otimes \mathds{H}_{B_{N-1}}$ be the optimal decomposition of $E_{A|B_1B_2\cdots B_{N-1}}(\rho)$ for the $N$-qubit mixed state $\rho$; then we have
\begin{eqnarray*}\label{ED}
E_{A|B_1B_2\cdots B_{N-1}}(\rho)&&=\sum_ip_iE_{A|B_1B_2\cdots B_{N-1}}(|\psi_i\rangle)\nonumber\\
&&=\sum_ip_if\left(C^2_{A|B_1B_2\cdots B_{N-1}}(|\psi_i\rangle)\right)\nonumber\\
&&\geqslant f\left(\sum_ip_iC^2_{A|B_1B_2\cdots B_{N-1}}(|\psi_i\rangle)\right)\nonumber\\
&&\geqslant f\left(\left[\sum_ip_iC_{A|B_1B_2\cdots B_{N-1}}(|\psi_i\rangle)\right]^2\right)\nonumber\\
&& \geqslant f\left(C^2_{A|B_1B_2\cdots B_{N-1}}(\rho)\right),\nonumber
\end{eqnarray*}
where the first inequality is due to the fact that $f(x)$ is a convex function. The second inequality is due to the Cauchy-Schwarz inequality: $(\sum\limits_ix_i^2)^{\frac{1}{2}}(\sum\limits_iy_i^2)^{\frac{1}{2}}\geqslant\sum\limits_ix_iy_i$, with $x_i=\sqrt{p_i}$ and $y_i=\sqrt{p_i}C_{A|B_1B_2\cdots B_{N-1}}(|\psi_i\rangle)$. Due to the definition of concurrence and that $f(x)$ is a monotonically increasing function, we obtain the third inequality. Therefore, we have
\small
\begin{flalign*}
&E^\alpha_{A|B_1B_2\cdots B_{N-1}}(\rho)\nonumber\\
&~~~\geqslant f^\alpha(C^2_{AB_1}+C^2_{AB_2}+\cdots+C^2_{AB_{N-1}})\nonumber\\
&~~~\geqslant f^{\alpha}(C^2_{AB_1})+(2^{t}-1) f^{\alpha}(C^2_{AB_2})+\cdots+(2^{t}-1)^{m-1} f^{\alpha}(C^2_{AB_m})\nonumber\\
&~~~~~~~+(2^{t}-1)^{m+1}(f^{\alpha}(C^2_{AB_{m+1}})+\cdots+f^{\alpha}(C^2_{AB_{N-2}}))\nonumber\\
&~~~~~~~+(2^{t}-1)^{m}f^{\alpha}(C^2_{AB_{N-1}})\nonumber\\
&~~~=E^\alpha_{AB_1}+(2^{t}-1) E^\alpha_{AB_2}+\cdots+(2^{t}-1)^{m-1}E^\alpha_{AB_m}\nonumber\\
&~~~~~~~+(2^{t}-1)^{m+1}(E^\alpha_{AB_{m+1}}+\cdots+E^\alpha_{AB_{N-2}})\nonumber\\
&~~~~~~~+(2^{t}-1)^{m}E^\alpha_{AB_{N-1}},\nonumber
\end{flalign*}
\normalsize
where we have used the monogamy inequality in (\ref{mo1}) for $N$-qubit states $\rho$ to obtain the first inequality. By using (\ref{FA}) and the similar consideration in the proof of Theorem 1, we get the second inequality. Since for any $2\otimes2$ quantum state $\rho_{AB_i}$, $E(\rho_{AB_i})=f\left[C^2(\rho_{AB_i})\right]$, one gets the last equality. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$\blacksquare$
Since $2^{{\alpha}/{\sqrt{2}}}-1\geqslant{\alpha}/{\sqrt{2}}$ for $\alpha\geqslant\sqrt{2}$, (\ref{th3}) is obviously tighter than (\ref{eof1}) and (\ref{eof2}). Moreover, similar to the concurrence, for the case that ${C_{AB_i}}\geqslant {C_{A|B_{i+1}\cdots B_{N-1}}}$ for all $i=1, 2, \cdots, N-2$, we have a simpler tighter monogamy relation for the entanglement of formation:
\textit{Theorem 4}.
If ${C_{AB_i}}\geqslant {C_{A|B_{i+1}\cdots B_{N-1}}}$ for all $i=1, 2, \cdots, N-2$, we have
\begin{eqnarray}\label{th4}
E^\alpha_{A|B_1B_2\cdots B_{N-1}}&&\geqslant E^\alpha_{AB_1}+(2^{{\alpha}/{\sqrt{2}}}-1) E^\alpha_{AB_2}+\cdots\nonumber\\
&&~~~~+(2^{{\alpha}/{\sqrt{2}}}-1)^{N-2}E^\alpha_{AB_{N-1}}
\end{eqnarray}
for $\alpha\geqslant\sqrt{2}$.
{\it Example 2}. Let us consider the $W$ state, $|W\rangle=\frac{1}{\sqrt{3}}(|100\rangle+|010\rangle+|001\rangle).$ We have $E_{AB}=E_{AC}=0.550048$ and $E_{A|BC}=0.918296$, and then $E_{A|BC}^{\alpha}=(0.918296)^{\alpha}$, $E^{\alpha}_{AB}+E^{\alpha}_{AC}=2(0.550048)^{\alpha}$, $E^{\alpha}_{AB}+\frac{\alpha}{\sqrt{2}}E^{\alpha}_{AC}=(1+\frac{\alpha}{\sqrt{2}})(0.550048)^{\alpha}$, $E^{\alpha}_{AB}+(2^{\frac{\alpha}{\sqrt{2}}}-1)E^{\alpha}_{AC}=2^{\frac{\alpha}{\sqrt{2}}}(0.550048)^{\alpha}$. It is easily verified that our results are better than the results in \cite{34} and \cite{35} for $\alpha\geqslant\sqrt{2}$; see Fig.~2.
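The numbers quoted for the $W$ state can be reproduced from $E=f(C^2)$; the short script below (our own check, not from the original) also verifies the Theorem 4 bound at $\alpha=\sqrt{2}$.

```python
import numpy as np

def f(x):
    """E = f(C^2) with f(x) = H((1 + sqrt(1 - x))/2), H the binary entropy."""
    p = (1 + np.sqrt(1 - x)) / 2
    return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

# W state: C_AB = C_AC = 2/3 and C_{A|BC} = 2*sqrt(2)/3.
E_AB = f((2 / 3)**2)                  # ~0.550048
E_ABC = f((2 * np.sqrt(2) / 3)**2)    # = H(1/3) ~0.918296
alpha = np.sqrt(2)
ours = 2**(alpha / np.sqrt(2)) * E_AB**alpha   # Theorem 4 bound with E_AB = E_AC
print(E_ABC**alpha >= ours)                    # True
```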
\begin{figure}
\caption{EOF of the state $|W\rangle$ in Example 2 and its lower bounds, plotted as functions of $\alpha$. The black solid line is the EOF, the red dashed line is the lower bound from our result, and the blue dotted and green dot-dashed lines are the lower bounds from the results in \cite{34} and \cite{35}, respectively.}
\label{2}
\end{figure}
\section{TIGHTER MONOGAMY RELATIONS FOR NEGATIVITY }
Another well-known quantifier of bipartite entanglement is the negativity. Given a bipartite state $\rho_{AB}$ in $\mathds{H}_A\otimes \mathds{H}_B$, the negativity is defined by \cite{45} $N(\rho_{AB})=(||\rho_{AB}^{T_A}||-1)/2$, where $\rho_{AB}^{T_A}$ is the partial transpose with respect to the subsystem $A$, and $||X||$ denotes the trace norm of $X$, i.e., $||X||=\mathrm{Tr}\sqrt{XX^\dag}$. Negativity is a computable measure of entanglement and is a convex function of $\rho_{AB}$. It vanishes if and only if $\rho_{AB}$ is separable for the $2\otimes2$ and $2\otimes3$ systems \cite{46}. For the purpose of discussion, we use the following definition of negativity: $N(\rho_{AB})=||\rho_{AB}^{T_A}||-1$.
For any bipartite pure state $|\psi\rangle_{AB}$, the negativity $ N(\rho_{AB})$ is given by
$N(|\psi\rangle_{AB})=2\sum\limits_{i<j}\sqrt{\lambda_i\lambda_j}=(\mathrm{Tr}\sqrt{\rho_A})^2-1$,
where $\lambda_i$ are the eigenvalues for the reduced density matrix of $|\psi\rangle_{AB}$. For a mixed state $\rho_{AB}$, the convex-roof extended negativity (CREN) is defined as
\begin{equation}\label{nc}
N_c(\rho_{AB})=\mathrm{min}\sum_ip_iN(|\psi_i\rangle_{AB}),
\end{equation}
where the minimum is taken over all possible pure-state decompositions $\{p_i,~|\psi_i\rangle_{AB}\}$ of $\rho_{AB}$. CREN gives a perfect discrimination between positive-partial-transpose bound entangled states and separable states in any bipartite quantum system \cite{47,48}.
Let us consider the relation between CREN and concurrence. For any bipartite pure state $|\psi\rangle_{AB}$ in a $d\otimes d$ quantum system with Schmidt rank 2,
$|\psi\rangle_{AB}=\sqrt{\lambda_0}|00\rangle+\sqrt{\lambda_1}|11\rangle$,
one has
$N(|\psi\rangle_{AB})=\parallel|\psi\rangle\langle\psi|^{T_B}\parallel-1=2\sqrt{\lambda_0\lambda_1}
=\sqrt{2(1-\mathrm{Tr}\rho_A^2)}=C(|\psi\rangle_{AB})$. In other words, negativity is equivalent to concurrence for any pure state with Schmidt rank 2, and consequently it follows that for any two-qubit mixed state $\rho_{AB}=\sum p_i|\psi_i\rangle_{AB}\langle\psi_i|$,
\begin{eqnarray}\label{N1}
N_c(\rho_{AB})&&=\mathrm{min}\sum_ip_iN(|\psi_i\rangle_{AB})\\ \nonumber
&&=\mathrm{min}\sum_ip_iC(|\psi_i\rangle_{AB})\\ \nonumber
&&=C(\rho_{AB}).
\end{eqnarray}
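The identity $N_c(\rho_{AB})=C(\rho_{AB})$ for Schmidt-rank-2 pure states is easy to confirm numerically. The sketch below (illustrative, not from the paper) computes the negativity through the partial transpose and compares it with $2\sqrt{\lambda_0\lambda_1}$.

```python
import numpy as np

def negativity(rho, da, db):
    """N(rho) = ||rho^{T_A}||_1 - 1, with the normalization used in this paper."""
    r = np.asarray(rho, dtype=complex).reshape(da, db, da, db)
    r_ta = r.transpose(2, 1, 0, 3).reshape(da * db, da * db)  # partial transpose on A
    return np.linalg.svd(r_ta, compute_uv=False).sum() - 1.0  # trace norm - 1

# Schmidt-rank-2 state sqrt(l0)|00> + sqrt(l1)|11>:  N = C = 2*sqrt(l0*l1).
l0, l1 = 0.3, 0.7
psi = np.zeros(4)
psi[0], psi[3] = np.sqrt(l0), np.sqrt(l1)
rho = np.outer(psi, psi)
N = negativity(rho, 2, 2)
print(abs(N - 2 * np.sqrt(l0 * l1)) < 1e-9)   # True
```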
By considerations similar to those for the concurrence, we obtain the following result.
\textit{Theorem 5}. For any $N$-qubit state $\rho\in \mathds{H}_A\otimes \mathds{H}_{B_1}\otimes\cdots\otimes \mathds{H}_{B_{N-1}}$, if
${N_c}_{AB_i}\geqslant {N_c}_{A|B_{i+1}\cdots B_{N-1}}$ for $i=1, 2, \cdots, m$, and
${N_c}_{AB_j}\leqslant {N_c}_{A|B_{j+1}\cdots B_{N-1}}$ for $j=m+1,\cdots,N-2$,
$\forall$ $1\leqslant m \leqslant N-3$, $N\geqslant 4$, we have
\begin{eqnarray}\label{la6}
&&~~~~~~{N^\alpha_c}_{A|B_1B_2\cdots B_{N-1}} \nonumber\\
&&~~~~~~\geqslant {N^\alpha_c}_{AB_1}+(2^{\frac{\alpha}{2}}-1) {N^\alpha_c}_{AB_2}+\cdots+(2^{\frac{\alpha}{2}}-1)^{m-1}{N^\alpha_c}_{AB_m}\nonumber\\
&&~~~~~~~~~+(2^{\frac{\alpha}{2}}-1)^{m+1}({N^\alpha_c}_{AB_{m+1}}
+\cdots+{N^\alpha_c}_{AB_{N-2}})\nonumber\\
&&~~~~~~~~~+(2^{\frac{\alpha}{2}}-1)^{m}{N^\alpha_c}_{AB_{N-1}}
\end{eqnarray}
for all $\alpha\geqslant2$.
In Theorem 5 we have assumed that
some ${N_c}_{AB_i}\geqslant {N_c}_{A|B_{i+1}\cdots B_{N-1}}$ and some
${N_c}_{AB_j}\leq {N_c}_{A|B_{j+1}\cdots B_{N-1}}$ for the $2\otimes2\otimes\cdots\otimes2$ mixed state $\rho\in \mathds{H}_A\otimes \mathds{H}_{B_1}\otimes\cdots\otimes \mathds{H}_{{B_{N-1}}}$.
If all ${N_c}_{AB_i}\geqslant {N_c}_{A|B_{i+1}\cdots B_{N-1}}$ for $i=1, 2, \cdots, N-2$, then we have
the following conclusion:
\textit{Theorem 6}.
If ${N_c}_{AB_i}\geqslant {N_c}_{A|B_{i+1}\cdots B_{N-1}}$ for all $i=1, 2, \cdots, N-2$, then we have
\begin{eqnarray}\label{}
{N_c}^\alpha_{A|B_1\cdots B_{N-1}}&&\geqslant {N_c}^\alpha_{AB_1}+(2^{\frac{\alpha}{2}}-1) {N_c}^\alpha_{AB_2}+\cdots\nonumber\\
&&~~~~+(2^{\frac{\alpha}{2}}-1)^{N-2}{N_c}^\alpha_{AB_{N-1}}.
\end{eqnarray}
\textit{Example 3}. Let us consider again the three-qubit state $|\psi\rangle$ in (\ref{ex1}). From the definition of CREN, we have ${N_c}_{A|BC} = 2 \lambda_{0}\sqrt{\lambda_{2}^{2}+\lambda_{3}^{2}+\lambda_{4}^{2}}$, ${N_c}_{AB}=2\lambda_0 \lambda_2$, and ${N_c}_{AC}=2\lambda_0 \lambda_3$. Set $\lambda_0=\lambda_1=\lambda_2=\lambda_3=\lambda_4=\frac{\sqrt{5}}{5}$. One gets ${N_c}^\alpha_{A|BC}=(\frac{2\sqrt{3}}{5})^{\alpha}$, ${N_c}^\alpha_{AB}+{N_c}^\alpha_{AC}=2(\frac{2}{5})^{\alpha}$, ${N_c}^\alpha_{AB}+\frac{\alpha}{2}{N_c}^\alpha_{AC}=(1+\frac{\alpha}{2})(\frac{2}{5})^{\alpha}$, and ${N_c}^\alpha_{AB}+(2^{\frac{\alpha}{2}}-1){N_c}^\alpha_{AC}=2^{\frac{\alpha}{2}}(\frac{2}{5})^{\alpha}$. One can see that our result is better than the results in \cite{34} and \cite{36} for $\alpha\geqslant2$; see Fig.~3.
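The arithmetic of Example 3 can be verified directly (an illustrative check, not part of the original):

```python
import numpy as np

# Example 3: all Schmidt coefficients equal to sqrt(5)/5 = 1/sqrt(5).
lam = np.sqrt(5) / 5
Nc_ABC = 2 * lam * np.sqrt(3 * lam**2)   # = 2*sqrt(3)/5
Nc_AB = Nc_AC = 2 * lam**2               # = 2/5
for alpha in [2.0, 3.0, 5.0]:
    ours = Nc_AB**alpha + (2**(alpha / 2) - 1) * Nc_AC**alpha  # = 2^(a/2) (2/5)^a
    assert Nc_ABC**alpha >= ours >= Nc_AB**alpha + (alpha / 2) * Nc_AC**alpha
print("Theorem 5 bound verified for Example 3")
```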
\begin{figure}
\caption{CREN of $|\psi\rangle$ in Example 3 and its lower bounds, plotted as functions of $\alpha$. The black solid line is the CREN, the red dashed line is the lower bound from our result, and the blue dotted and green dot-dashed lines are the lower bounds from the results in \cite{34} and \cite{36}, respectively.}
\end{figure}
\section{Tighter monogamy relations for Tsallis-q entanglement}
For a bipartite pure state $|\psi\rangle_{AB}$, the Tsallis-$q$ entanglement is defined by \cite{24}
\begin{eqnarray}\label{tq}
T_q(|\psi\rangle_{AB})=S_q(\rho_A)=\frac{1}{q-1}(1-\mathrm{tr}\rho_A^q),
\end{eqnarray}
for any $q > 0$ and $q \ne 1$. As $q$ tends to 1, $T_q(\rho)$ converges to the von Neumann entropy: $\lim_{q\to1} T_q(\rho)=-\mathrm{Tr}(\rho\ln\rho)$. For a bipartite mixed state $\rho_{AB}$, the Tsallis-$q$ entanglement is defined via the convex-roof extension,
\begin{equation}
T_q(\rho_{AB})=\min_{\{p_i,|\psi_i\rangle\}}\sum_ip_iT_q(|\psi_i\rangle_{AB}),
\end{equation}
with the minimum taken over all possible pure-state decompositions of $\rho_{AB}$.
In \cite{49}, the author proved an analytic relationship between the Tsallis-$q$ entanglement and the concurrence for $\frac{5-\sqrt{13}}{2}\leqslant q\leqslant \frac{5+\sqrt{13}}{2}$:
\begin{eqnarray}\label{an1}
T_q(|\psi\rangle_{AB})=\textsl{g}_q(C^2(|\psi\rangle_{AB})),
\end{eqnarray}
where the function $\textsl{g}_q(x)$ is defined as
\small
\begin{eqnarray}\label{an2}
\textsl{g}_q(x)=\frac{1}{q-1}\left[1-\left(\frac{1+\sqrt{1-x}}{2}\right)^q-\left(\frac{1-\sqrt{1-x}}{2}\right)^q\right].
\end{eqnarray}
\normalsize
It has been shown in \cite{24} that $T_q(|\psi\rangle)=\textsl{g}_q\left(C^2(|\psi\rangle)\right)$ for any $2\otimes m~(m\geqslant2)$ pure state $|\psi\rangle$, and $T_q(\rho)=\textsl{g}_q\left(C^2(\rho)\right)$ for any two-qubit mixed state $\rho$. Hence, (\ref{an1}) holds for any $q$ such that $\textsl{g}_q(x)$ in (\ref{an2}) is monotonically increasing and convex. In particular, $\textsl{g}_q(x)$ satisfies the following relation for $2\leqslant q\leqslant 3$:
\begin{eqnarray}\label{gx}
\textsl{g}_q(x^2+y^2)\geqslant \textsl{g}_q(x^2)+\textsl{g}_q(y^2).
\end{eqnarray}
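The superadditivity $\textsl{g}_q(x^2+y^2)\geqslant \textsl{g}_q(x^2)+\textsl{g}_q(y^2)$ for $2\leqslant q\leqslant 3$ can be probed numerically; below is our own spot-check (not from the paper) by random sampling.

```python
import numpy as np

def g_q(x, q):
    """g_q relating Tsallis-q entanglement to the squared concurrence."""
    s = np.sqrt(1 - x)
    return (1 - ((1 + s) / 2)**q - ((1 - s) / 2)**q) / (q - 1)

# Count violations of g_q(x^2 + y^2) >= g_q(x^2) + g_q(y^2) for 2 <= q <= 3
# on the physical domain x^2 + y^2 <= 1, allowing floating-point slack.
rng = np.random.default_rng(1)
bad = 0
for _ in range(5000):
    q = rng.uniform(2.0, 3.0)
    x, y = rng.uniform(0, 1, 2)
    if x * x + y * y <= 1.0:
        if g_q(x * x + y * y, q) < g_q(x * x, q) + g_q(y * y, q) - 1e-10:
            bad += 1
print(bad)   # 0
```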
The Tsallis-$q$ entanglement satisfies \cite{24}
\begin{eqnarray}\label{stq1}
{T_q}_{A|B_1B_2\cdots B_{N-1}}\geqslant\sum_{i=1}^{N-1}{T_q}_{AB_i},
\end{eqnarray}
for $2\leqslant q\leqslant 3$. It is further proved in \cite{49} that
\begin{eqnarray}\label{stq2}
{T_q^2}_{A|B_1B_2\cdots B_{N-1}}\geqslant\sum_{i=1}^{N-1}{T_q^2}_{AB_i},
\end{eqnarray}
with $\frac{5-\sqrt{13}}{2}\leqslant q\leqslant \frac{5+\sqrt{13}}{2}$.
In fact, we can prove the following more general result.
\textit{Theorem 7}. For an arbitrary $N$-qubit mixed state $\rho_{AB_1\cdots B_{N-1}}$, if
${C_{AB_i}}\geqslant {C_{A|B_{i+1}\cdots B_{N-1}}}$ for $i=1, 2, \cdots, m$, and
${C_{AB_j}}\leqslant {C_{A|B_{j+1}\cdots B_{N-1}}}$ for $j=m+1,\cdots,N-2$,
$\forall$ $1\leqslant m\leqslant N-3$, $N\geqslant 4$, then the $\alpha$th power of the Tsallis-$q$ entanglement satisfies the monogamy relation
\begin{eqnarray}\label{th6}
&&{T_q}^\alpha_{A|B_1B_2\cdots B_{N-1}}\nonumber\\
&&~~~\geqslant{T_q}^\alpha_{AB_1}+(2^{{\alpha}}-1) {T_q}^\alpha_{AB_2}+\cdots+(2^{{\alpha}}-1)^{m-1}{T_q}^\alpha_{AB_m}\nonumber\\
&&~~~~~~+(2^{{\alpha}}-1)^{m+1}({T_q}^\alpha_{AB_{m+1}}+\cdots+{T_q}^\alpha_{AB_{N-2}})\nonumber\\
&&~~~~~~+(2^{{\alpha}}-1)^{m}{T_q}^\alpha_{AB_{N-1}},
\end{eqnarray}
\normalsize
where $\alpha\geqslant 1$, ${T_q}_{A|B_1B_2\cdots B_{N-1}}$ quantifies the Tsallis-$q$ entanglement in the partition $A|B_1B_2\cdots B_{N-1}$ and ${T_q}_{AB_i}$ quantifies that in two-qubit subsystem $AB_i$ with $2\leqslant q\leqslant 3$.
\textit{Proof}. For $\alpha\geqslant1$, we have
\begin{eqnarray}\label{pf61}
\textsl{g}_q^{{\alpha}}(x^2+y^2)
&&\geqslant \left(\textsl{g}_q(x^2)+\textsl{g}_q(y^2)\right)^{\alpha} \nonumber \\
&&\geqslant \textsl{g}_q^{\alpha}(x^2)+(2^{\alpha}-1)\textsl{g}_q^{\alpha}(y^2),
\end{eqnarray}
where the first inequality is due to the inequality (\ref{gx}), and the second inequality is obtained from a similar consideration in the proof of the second inequality in (\ref{le2}).
Let $\rho=\sum\limits_ip_i|\psi_i\rangle\langle\psi_i|\in \mathds{H}_A\otimes \mathds{H}_{B_1}\otimes\cdots\otimes \mathds{H}_{B_{N-1}}$ be the optimal decomposition for the $N$-qubit mixed state $\rho$; then we have
\begin{eqnarray}\label{pf62}
&&{T_q}_{A|B_1B_2\cdots B_{N-1}}(\rho)\nonumber\\
&&=\sum_ip_iT_q(|\psi_i\rangle_{A|B_1B_2\cdots B_{N-1}})\nonumber\\
&&=\sum_ip_i\textsl{g}_q\left[C^2_{A|B_1B_2\cdots B_{N-1}}(|\psi_i\rangle)\right]\nonumber\\
&&\geqslant \textsl{g}_q\left[\sum_ip_iC^2_{A|B_1B_2\cdots B_{N-1}}(|\psi_i\rangle)\right]\nonumber\\
&&\geqslant \textsl{g}_q\left(\left[\sum_ip_iC_{A|B_1B_2\cdots B_{N-1}}(|\psi_i\rangle)\right]^2\right)\nonumber\\
&&=\textsl{g}_q\left[C^2_{A|B_1B_2\cdots B_{N-1}}(\rho)\right],\nonumber\\
\end{eqnarray}
where the first inequality is due to the fact that $\textsl{g}_q(x)$ is a convex function. The second inequality is due to the Cauchy-Schwarz inequality: $(\sum\limits_ix_i^2)^{\frac{1}{2}}(\sum\limits_iy_i^2)^{\frac{1}{2}}\geqslant\sum\limits_ix_iy_i$, with $x_i=\sqrt{p_i}$ and $y_i=\sqrt{p_i}C_{A|B_1B_2\cdots B_{N-1}}(|\psi_i\rangle)$. Due to the definition of Tsallis-$q$ entanglement and that $\textsl{g}_q(x)$ is a monotonically increasing function, we obtain the third inequality.
Therefore, we have
\small
\begin{eqnarray}\label{pf63}
&&{T_q^\alpha}_{A|B_1B_2\cdots B_{N-1}}(\rho)\nonumber\\
&&~~~\geqslant \textsl{g}_q^\alpha\left[\sum_iC^2(\rho_{AB_i})\right]\nonumber\\
&&~~~\geqslant {\textsl{g}_q}^\alpha(C^2_{AB_1})+(2^{\alpha}-1) {\textsl{g}_q}^\alpha(C^2_{AB_2})+\cdots\nonumber\\
&&~~~~~~~+(2^{\alpha}-1)^{m-1}{\textsl{g}_q}^\alpha(C^2_{AB_m})\nonumber\\
&&~~~~~~~+(2^{\alpha}-1)^{m+1}\left({\textsl{g}_q}^\alpha(C^2_{AB_{m+1}})+\cdots+{\textsl{g}_q}^\alpha(C^2_{AB_{N-2}})\right)\nonumber\\
&&~~~~~~~+(2^{\alpha}-1)^{m}{\textsl{g}_q}^\alpha(C^2_{AB_{N-1}})\nonumber\\
&&~~~= {T_q}^\alpha_{AB_1}+(2^{\alpha}-1) {T_q}^\alpha_{AB_2}+\cdots+(2^{\alpha}-1)^{m-1}{T_q}^\alpha_{AB_m}\nonumber\\
&&~~~~~~~+(2^{\alpha}-1)^{m+1}({T_q}^\alpha_{AB_{m+1}}+\cdots+{T_q}^\alpha_{AB_{N-2}})\nonumber\\
&&~~~~~~~+(2^{\alpha}-1)^{m}{T_q}^\alpha_{AB_{N-1}},
\end{eqnarray}
\normalsize
where we have used the monogamy inequality in (\ref{mo1}) for $N$-qubit states $\rho$ to obtain the first
inequality. By using (\ref{pf61}) and the similar consideration in the proof of Theorem 1, we get the second
inequality. Since for any $2\otimes2$ quantum state $\rho_{AB_i}$, $T_q(\rho_{AB_i})=\textsl{g}_q\left[C^2(\rho_{AB_i})\right]$, one gets the last equality. ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~$\blacksquare$
\textit{Example 4}. Let us consider again the three-qubit state $|\psi\rangle$ in (\ref{ex1}). From the definition of Tsallis-$q$ entanglement, for $q=2$ we have ${T_2}_{A|BC} = 2 \lambda_{0}^2(\lambda_{2}^{2}+\lambda_{3}^{2}+\lambda_{4}^{2})$, ${T_2}_{AB}=2\lambda_0^2 \lambda_2^2$, and ${T_2}_{AC}=2\lambda_0^2 \lambda_3^2$. Set $\lambda_0=\lambda_1=\lambda_2=\lambda_3=\lambda_4=\frac{\sqrt{5}}{5}$. One gets ${T_2}^\alpha_{A|BC}=(\frac{6}{25})^{\alpha}$, ${T_2}^\alpha_{AB}+{T_2}^\alpha_{AC}=2(\frac{2}{25})^{\alpha}$, and ${T_2}^\alpha_{AB}+(2^{\alpha}-1){T_2}^\alpha_{AC}=2^{\alpha}(\frac{2}{25})^{\alpha}$. One can see that our result is better than that in \cite{34} for $\alpha\geqslant2$; see Fig.~4.
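The values in Example 4 follow from $T_2=\textsl{g}_2(C^2)=C^2/2$ and can be checked in a few lines (our own illustrative verification):

```python
import numpy as np

# Example 4 with q = 2: T_2 = g_2(C^2) = C^2 / 2.
lam = np.sqrt(5) / 5
T2_ABC = 2 * lam**2 * (3 * lam**2)    # = 6/25
T2_AB = T2_AC = 2 * lam**2 * lam**2   # = 2/25
for alpha in [1.0, 2.0, 4.0]:
    ours = T2_AB**alpha + (2**alpha - 1) * T2_AC**alpha   # = 2^alpha (2/25)^alpha
    old = T2_AB**alpha + T2_AC**alpha
    assert T2_ABC**alpha >= ours >= old
print("Theorem 7 bound verified for Example 4")
```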
\begin{figure}
\caption{Tsallis-$2$ entanglement of $|\psi\rangle$ in Example 4 and its lower bounds, plotted as functions of $\alpha$. The black solid line is the Tsallis-$2$ entanglement, the green dot-dashed line is the lower bound from our result, and the blue dotted line is the lower bound from the result in \cite{24}.}
\end{figure}
\section{conclusion}
Entanglement monogamy is a fundamental property of multipartite entangled states. We have presented monogamy relations related to the $\alpha$th power of the concurrence $C$, the entanglement of formation $E$, the negativity $N_c$, and the Tsallis-$q$ entanglement $T_q$, which are tighter, at least for some classes of quantum states, than the existing entanglement monogamy relations for $\alpha > 2$, $\alpha > \sqrt{2}$, $\alpha > 2$, and $\alpha > 1$, respectively. The necessary conditions for our inequalities to be strictly tighter can be seen from our monogamy relations. For instance, (8) is tighter than the existing ones for $\alpha > 2$ for all quantum states in which at least one of the $C_{AB_i}$ ($i=2,\cdots,N-1$) is nonzero, which excludes the fully separable states that have no entanglement distributed among the subsystems. Another case with $C_{AB_i}=0$ for all $i=2,\cdots,N-1$ is the $N$-qubit GHZ state \cite{50}, which is genuinely multipartite entangled. However, for the genuinely entangled $N$-qubit $W$ state \cite{51}, one has $C_{AB_i}=\frac{2}{N}$, $i=2,\cdots,N-1$. In general, most states have at least one nonzero $C_{AB_i}$ ($i=2,\cdots,N-1$).
Monogamy relations characterize the distributions of entanglement in multipartite systems. Tighter monogamy relations imply finer characterizations of the entanglement distribution. Our approach may also be used to further study the monogamy properties related to other quantum correlations.
\noindent{\bf Acknowledgments}\, \, This work is supported by the NSF of China under Grant No. 11675113 and by the Research Foundation for Youth Scholars of Beijing Technology and Business University (QNJJ2017-03).
\end{document} |
\begin{document}
\begin{abstract}
We recall the definition of classical polar varieties, as well as those of affine and projective reciprocal polar varieties. The latter are defined with respect to a non-degenerate quadric, which gives us a notion of orthogonality. In particular we relate the reciprocal polar varieties to the ``Euclidean geometry'' of projective space. The Euclidean distance degree and the degree of the focal loci can be expressed in terms of the ranks, i.e., the degrees of the classical polar varieties, and hence these characters can also be found for singular varieties, when one can express the ranks in terms of the singularities.
\keywords{Schubert varieties, polar varieties, reciprocal polar varieties, Euclidean normal bundle, Euclidean distance degree, focal locus.}
\end{abstract}
\title{Polar varieties revisited}
\section{Introduction}
The theory of polars and polar varieties has played an
important role in the quest for understanding and classifying projective
varieties. Their use in the definition of projective invariants is the
very
basis for the geometric approach to the theory of characteristic classes, such
as Todd classes and Chern classes. In particular this approach gives a way of defining Chern classes for \emph{singular} projective varieties (see e.g. \cite{Piene1,Piene3}).
The \emph{local} polar varieties were used by L\^e--Teissier, Merle and others in the study of singularities.
More recently, polar varieties have been applied to study
the topology of real affine varieties and to find real
solutions of polynomial equations
by Bank et al., Safey El Din--Schost, and Mork--Piene
\cite{Bank,Bank2,Bank3,Bank4,Safey,Mork,MorkPiene},
to complexity questions by B\" urgisser--Lotz \cite{BurgL},
to foliations by Soares \cite{Soares} and others, to
focal loci and caustics of reflection by Catanese and Trifogli \cite{C-T,T} and
Josse--P\`ene \cite{JP}, to
Euclidean distance degree by Draisma et al. \cite{ED}.
In this note I will explore the relation of polar and reciprocal polar varieties of possibly singular varieties to the Euclidean normal bundle, the Euclidean distance degree, and the focal loci. For simplicity, we work over an algebraically closed field of characteristic $0$.
\section{Classical polar varieties}
Let $\mathbb G(m,n)$ denote the Grassmann variety of $m$-dimensional linear subspaces of $\mathbb P^n$.
Let $L_k\subset \mathbb P^n$ be a linear subspace of dimension $n-m+k-2$.
Consider the special Schubert variety
\[\Sigma(L_k):=\{W\in \mathbb G(m,n) \,|\, \dim W\cap L_{k}\ge k-1\}.\]
It has a natural structure as a determinantal scheme and is of codimension $k$ in $\mathbb G(m,n)$.
As is well known, if $L_k'$ is another linear subspace of dimension $n-m+k-2$, then $\Sigma(L_k')$ and $\Sigma(L_k)$ are projectively equivalent (in particular, their rational equivalence classes are equal). So we will often just write $\Sigma_k$ for such a variety.
\begin{example}
The case $m=1$, $n=3$. For $\mathbb G(1,3)=\{\text{lines in } \mathbb P^3\}$, the special Schubert varieties are
\begin{align*}
\Sigma_1&=\{\text{lines meeting a given line}\} \\
\Sigma_2&=\{\text{lines in a given plane}\}.
\end{align*}
\end{example}
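The codimension-one condition in this example can be made concrete with Pl\"ucker coordinates: two lines in $\mathbb P^3$ meet exactly when the bilinear Pl\"ucker pairing of their coordinates vanishes, so $\Sigma_1$ is a hyperplane section of the Pl\"ucker quadric. A small numerical sketch (the specific lines below are illustrative choices, not from the text):

```python
import numpy as np

def pluecker(p, q):
    """Pluecker coordinates (p01,p02,p03,p12,p13,p23) of the line through p, q."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.array([p[i] * q[j] - p[j] * q[i]
                     for i in range(4) for j in range(i + 1, 4)])

def pairing(a, b):
    # Bilinear pairing whose vanishing characterizes incidence of two lines:
    # a01*b23 - a02*b13 + a03*b12 + a12*b03 - a13*b02 + a23*b01.
    return (a[0] * b[5] - a[1] * b[4] + a[2] * b[3]
            + a[3] * b[2] - a[4] * b[1] + a[5] * b[0])

L1 = pluecker([1, 0, 0, 0], [0, 1, 0, 0])   # line x2 = x3 = 0
L2 = pluecker([1, 0, 0, 0], [0, 0, 1, 0])   # meets L1 at (1:0:0:0)
L3 = pluecker([0, 0, 1, 0], [1, 1, 0, 1])   # a line skew to L1
print(pairing(L1, L2))   # 0.0: the lines meet
print(pairing(L1, L3))   # nonzero: the lines are skew
```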
\begin{example}
The case $m=2$, $n=5$.
For $\mathbb G(2,5)=\{\text{planes in } \mathbb P^5\}$, the special Schubert varieties are
\begin{align*}
\Sigma_1&=\{\text{planes meeting a given plane}\} \\
\Sigma_2&=\{\text{planes intersecting a given 3-space in a line}\}\\
\Sigma_3&=\{\text{planes contained in a given hyperplane}\}
\end{align*}
\end{example}
More general Schubert varieties are defined similarly, by giving conditions with respect to flags of linear subspaces (see e.g. \cite{KL}). For example, in Example 1, we could consider
\[\Sigma_{0,2}=\{\text{lines in a given plane through a given point in the plane}\}.\]
Let $X\subset \mathbb P^n$ be a (possibly singular) variety of dimension $m$. The \emph{Gauss map} $\gamma\colon X\dashrightarrow \mathbb G(m,n)$ is the rational map that sends a nonsingular point $P\in X_{\rm sm}$ to the (projective) tangent space $T_PX$, considered as a point in the Grassmann variety. More precisely, if $V=H^0(\mathbb P^n,\mathcal O_{\mathbb P^n}(1))$, then $\gamma$ is given on $X_{\rm sm}:=X\setminus \operatorname{Sing} X$ by the restriction of the quotient
\[V_X\to \mathcal P_X^1(\mathcal L),\] where $\mathcal P_X^1(\mathcal L)$ denotes the bundle of 1st principal parts of the line bundle $\mathcal L:=\mathcal O_{\mathbb P^n}(1)|_X$. Note that restricted to $X_{\rm sm}$ the sheaf of principal parts is locally free with rank $m+1$, and that the fibers (over $X_{\rm sm}$) of $\mathbb P(\mathcal P_X^1(\mathcal L)) \subset X\times \mathbb P^n$ over $X$ (with respect to projection on the first factor) define the (projective) tangent spaces of $X$.
The \emph{polar varieties} of $X\subset \mathbb P^n$ are the closures of the inverse images
\[M_k:=\overline{\gamma|_{X_{\rm sm}}^{-1}\Sigma_k}\]
of the special Schubert varieties via the Gauss map \cite[p.~252]{Piene1}.
In the situation of Example 1,
let $X\subset \mathbb P^3$ be a curve. Then $M_1$ is the set of nonsingular points $P\in X$ such that the tangent line $T_PX$ meets a given line, i.e., it is the set of ramification points of the linear projection map $X\to \mathbb P^1$.
In Example 2, let $X\subset \mathbb P^5$ be a surface. Then $M_1$ is the ramification locus of the projection map $X\to \mathbb P^2$ with center a plane, and
$M_2$ consists of points $P\in X_{\rm sm}$ such that the tangent plane $T_PX$ intersects a given $3$-space in a line. One could also consider more general polar varieties, corresponding to general Schubert varieties, like the set of points $P$ such that $T_PX$ meets a given line; this is the ramification locus of the linear projection of $X$ to $\mathbb P^3$ with the line as center. Note that the cardinality of this 0-dimensional variety is equal to the degree of the tangent developable of $X$.
\section{Polar classes and Chern classes}
It was noted classically that the polar varieties are invariant under linear projections and sections. Therefore the \emph{polar classes}, i.e., their rational equivalence classes, are \emph{projective invariants} of the variety. Already Noether, Segre and Zeuthen observed that certain integral combinations of the polar classes of surfaces are \emph{intrinsic} invariants, i.e., they depend only on the surface, not on the given projective embedding. This was pursued by Todd and Severi, who used the polar classes to define what are now called the Chern classes. The formula is the following:
\[c_k(X)=\sum_{i=0}^k (-1)^{i}\binom{m+1-i}{k-i}[M_i]h^{k-i},\]
where $h$ denotes the class of a hyperplane section. Since this expression makes sense also for singular varieties, it gives a definition of Chern classes for singular projective varieties, called the \emph{Chern--Mather} classes (see e.g. \cite{Piene3}). The formula can be inverted to express the polar classes in terms of Chern classes. When the variety $X$ is nonsingular, this is just the expression coming from the fact that in this case,
\[[M_k]=c_k(\mathcal P_X^1(\mathcal L)).\]
Using the canonical exact sequence
\[0\to \Omega_X^1\otimes \mathcal L \to \mathcal P_X^1(\mathcal L) \to \mathcal L \to 0,\]
we compute, with $c_i(X)=c_i({\Omega_X^1}^\vee)=(-1)^ic_i(\Omega_X^1)$,
\[[M_k]=\sum_{i=0}^k (-1)^{k-i}\binom{m+1-k+i}{i}c_{k-i}(X)h^i.\]
When $X$ is singular, we replace $X$ by its Nash transform $\overline X$, i.e., $\nu\colon\overline X\to X$ is the smallest proper modification of $X$ such that $\gamma$ extends to a morphism $\overline \gamma\colon \overline X \to \mathbb G(m,n)$. If we denote by $V_{\overline X} \to\mathcal P$ the locally free quotient corresponding to $\overline \gamma$, then it follows from the definition that we have
\[ [M_k]=\overline \gamma_*c_k(\mathcal P).\]
In many cases, this allows us to find formulas for the degrees of the polar classes in terms of the singularities of $X$, see \cite{Piene1,Piene2,Piene3}.
\section{Reciprocal polar varieties}
In \cite{Bank3,Bank4} Bank et al. introduced what they called \emph{dual} polar varieties. These varieties were further studied in \cite{Mork,MorkPiene} under the name of \emph{reciprocal} polar varieties. They are defined with respect to a non-degenerate quadric in the ambient projective space, and sometimes with respect to the choice of a hyperplane at infinity.
Let $Q\subset \mathbb P^n$ be a non-degenerate quadric. Then $Q$ induces a \emph{polarity}, classically called a \emph{reciprocation}, between points and hyperplanes in $\mathbb P^n$. The \emph{polar} hyperplane $P^\perp$ of $P$ is the linear span of the points on $Q$ such that the tangent hyperplane to $Q$ at that point contains $P$:
\[P=(b_0:\cdots :b_n)\mapsto P^\perp \colon \sum_{i=0}^n b_i\frac{\partial q}{\partial X_i}=0.\]
The map $P \mapsto P^\perp$ is the isomorphism $\mathbb P^n\to ( \mathbb P^n)^\vee$ given by the symmetric bilinear form associated with the quadratic polynomial $q$ defining $Q$. The point $P^\perp$ in the dual projective space represents a hyperplane in $\mathbb P^n$. If $L\subset \mathbb P^n$ is a linear space of dimension $r$, then $L^\perp \subset (\mathbb P^n)^\vee$ gives an $(n-r-1)$-dimensional linear subspace $(L^\perp)^\vee \subset \mathbb P^n$, which we will, by abuse of notation, also denote by $L^\perp$.
Note that if $H$ is a hyperplane, its \emph{pole} $H^\perp$ is the intersection of the tangent hyperplanes to $Q$ at the points of intersection with $H$. The map $H\mapsto H^\perp$ is the inverse of the above map $P\mapsto P^\perp$.
If the quadric is $q=\sum_{i=0}^n X_i^2$, then the polar of
\[P=(b_0:\cdots :b_n)\]
is the hyperplane
\[P^\perp: b_0X_0+\ldots +b_nX_n=0.\]
Let $X\subset \mathbb P^n$ be a (possibly singular) variety of dimension $m$. Let
$K_i\subset \mathbb P^n$ be a linear subspace of dimension $i+n-m-1$. Recall \cite[Def. 2.1, p.~104]{MorkPiene}, \cite[p.~527]{Bank3} the
definition of the $i$-th reciprocal polar variety of $X$ (with respect to $K_i$):
$W_{K_i}^{\perp}(X)$, $1\leq i \leq m$,
is the Zariski
closure of the set
\begin{displaymath}
\{ P\in X_{\rm sm}\setminus K_i^{\perp}\,|\, T_P X \not\pitchfork
\langle P,K_i^{\perp}\rangle ^{\perp}\},
\end{displaymath}
where $T_PX$ denotes the tangent space to $X$ at the point $P$ and
the notation $M\not\pitchfork N$ means that the two linear spaces $M$, $N$ are not transversal, i.e., that their linear span $\langle M,N\rangle$ is not equal to the whole ambient space $\mathbb P^n$.
For general $K_i$, the $i$th reciprocal polar variety has codimension $i$.
The condition $T_P X \not\pitchfork \langle P,K_i^{\perp}\rangle ^{\perp}$ is equivalent to the condition $\dim (T_PX\cap \langle P,K_i^{\perp}\rangle ^{\perp})\ge i-1$, and $T_PX\cap \langle P,K_i^{\perp}\rangle ^{\perp}$ is equal to $T_PX\cap P^\perp\cap K_i$, or to $\langle T_PX^\perp,P\rangle ^\perp\cap K_i$, or to $\langle\langle T_PX^\perp,P\rangle,K_i^\perp \rangle ^\perp$. That the dimension of the latter space is greater than or equal to $i-1$ is equivalent to
$\dim \langle \langle T_PX^\perp,P\rangle,K_i^\perp \rangle \le n-i$, or to $\dim (\langle T_PX^\perp,P\rangle \cap K_i^\perp)\ge 0$.
Thus we obtain the following description of the $i$th polar variety:
\[W_{K_i}^{\perp}(X)=\overline{\{P\in X_{\rm sm}\setminus K_i^{\perp}\,|\, \dim (\langle T_PX^\perp,P\rangle \cap K_i^\perp)\ge 0 \}}.\]
In the case $i=m$, $K_m$ is a hyperplane, so $K_m^\perp$ is a point. Assuming this point does not lie on $X$, the $m$th reciprocal polar variety of $X$ with respect to $K_m$ is the (finite) set of nonsingular points
\[W_{K_m}^\perp(X)=\{P\in X_{\rm sm} \, | \, K_m^\perp \in \langle T_PX^\perp,P\rangle \}.\]
Let $H_\infty =Z(x_0) \subset \mathbb P^n$ denote ``the hyperplane at infinity'', set $\mathbb A^n =\mathbb P^n \setminus H_\infty$, and consider the affine part $Y:=X\cap \mathbb A^n$ of $X$. Let $L=K_i\subseteq H_\infty$ be a linear space of dimension $i+n-m-1$. Define the \emph{affine reciprocal polar variety} to be the affine part of the reciprocal polar variety:
\[W^\perp_L(Y):=W^\perp_L(X)\cap \mathbb A^n.\]
The linear variety $\langle P,L^\perp\rangle^\perp$ is contained in the hyperplane $H_\infty$, so we can consider the affine cone $I_{P,L^\perp}$ of $\langle P,L^\perp\rangle^\perp$ as a linear variety in the affine space $\mathbb A^n$. Then the affine reciprocal polar variety can be written as
\[W^\perp_L(Y)=\overline{\{P\in Y_{\rm sm}\setminus L^\perp \,|\, t_PY\not\pitchfork I_{P,L^\perp}\}},\]
where $t_PY$ denotes the affine tangent space at $P$, translated to the origin.
Assume $X=Z(F_1,\ldots,F_r)$ (for some $r\ge n-m$). Consider the case $L=K_m=H_\infty$. Then we see that $W^\perp_{H_\infty}(Y)$ is the closure of the set of smooth points of $Y$ where the $(n-m+1)$-minors of the matrix
\[
\left(\begin{array}{ccc}
\frac{\partial q}{\partial x_1} &\cdots & \frac{\partial q}{\partial x_n}\\
\frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_n}\\
\vdots & \vdots &\vdots\\
\frac{\partial F_r}{\partial x_1} & \cdots & \frac{\partial F_r}{\partial x_n}
\end{array}\right)
\]
vanish.
This generalizes \cite[Prop. 3.2.5]{MorkPiene}.
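As an illustration of the minor condition (a hypothetical example, not taken from the text): for the ellipse $F=x^2/4+y^2-1$ in the affine plane, with the quadric $q=\sum x_i^2$ dehomogenized at $x_0=1$, the single $2\times 2$ minor can be computed symbolically:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Illustrative choices: the ellipse F = 0, and the affine restriction
# q_aff = 1 + x^2 + y^2 of the quadric q = x0^2 + x1^2 + x2^2.
F = x**2 / 4 + y**2 - 1
q_aff = 1 + x**2 + y**2

# On the smooth locus, W^perp_{H_inf}(Y) is cut out by the
# (n - m + 1)-minors of the matrix with rows (dq/dx_i) and (dF_j/dx_i);
# here n = 2, m = 1, so a single 2x2 determinant.
M = sp.Matrix([
    [sp.diff(q_aff, x), sp.diff(q_aff, y)],
    [sp.diff(F, x),     sp.diff(F, y)],
])
minor = sp.expand(M.det())

# Intersect the determinant locus with the curve F = 0.
sols = sp.solve([F, minor], [x, y])
print(minor)   # 3*x*y
print(sols)    # the four axis points of the ellipse
```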
\begin{example}
Assume $q$ is given by $q(x_0,x_1,\ldots,x_n)=x_0^2+\sum_{i=1}^n(x_i-a_ix_0)^2$ for some $a_1,\ldots,a_n$ such that $\sum_{i=1}^n a_i\neq 1$. Then $q$ restricts to (essentially) the square of the Euclidean distance function on $\mathbb A^n=\mathbb P^n \setminus H_\infty$, namely to $\sum_{i=1}^n(x_i-a_i)^2+1$. The affine reciprocal polar variety is given (on smooth points) by the vanishing of the $(n-m+1)$-minors of the matrix
\[
\left(\begin{array}{ccc}
x_1-a_1&\cdots & x_n-a_n\\
\frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_n}\\
\vdots & \vdots &\vdots\\
\frac{\partial F_r}{\partial x_1} & \cdots & \frac{\partial F_r}{\partial x_n}
\end{array}\right)
\]
just as in \cite[(2.1)]{ED}.
\end{example}
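In the same spirit, restricting the minor to the curve produces the critical equation of the Euclidean distance function. For instance (an illustrative choice, not from the text), for the parabola $y=x^2$ and a generic data point $(a,b)$, the critical equation is a cubic, recovering the known ED degree $3$ of the parabola:

```python
import sympy as sp

x, y, a, b = sp.symbols('x y a b')

# Hypothetical illustration: Y is the parabola F = y - x^2 = 0, and
# (a, b) is a generic data point; the matrix of Example 3 has rows
# (x - a, y - b) and (F_x, F_y).
F = y - x**2
M = sp.Matrix([
    [x - a,         y - b],
    [sp.diff(F, x), sp.diff(F, y)],
])
crit = M.det()   # equals (x - a) + 2*x*(y - b)

# Restricting to the curve (y = x^2) gives a univariate polynomial in x.
crit_on_curve = sp.expand(crit.subs(y, x**2))
print(sp.degree(crit_on_curve, x))
```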
\section{Euclidean normal bundles}
Consider a variety $X\subset \mathbb P^n$
and let $Q\subset \mathbb P^n$ be a non-degenerate quadric. As we saw in the previous section, the quadric induces a polarity on $\mathbb P^n$, which can be viewed as an orthogonality, like what one has in a Euclidean space. In \cite{C-T} this was used to define
\emph{Euclidean normal spaces} at each point $P\in X_{\rm sm}$. Actually they considered a non-degenerate quadric in a hyperplane at $\infty$, essentially as we saw in the case of affine reciprocal polar varieties. Here we shall consider the orthogonality on all of $\mathbb P^n$, and we define the normal space at a smooth point $P$ as follows:
\[N_PX:=\langle {T_P X}^\perp,P \rangle.\]
We shall now see that, by passing to the Nash modification of $X$, the Euclidean normal spaces are the fibers of a projective bundle.
The Nash modification $\nu\colon\overline X\to X$ is the ``smallest'' proper, birational map such that the pullback of the cotangent sheaf $\Omega^1_X$ of $X$ admits a locally free quotient of rank $m$. This is equivalent to $\nu^*\mathcal P^1_X(1)$ admitting a locally free quotient of rank $m+1$. Denote this quotient by $\mathcal P$, and let $\mathcal K$ denote the kernel of the surjection $\mathcal O_{\overline X}^{n+1} \to \mathcal P$. Thus $\mathcal K$ is a modification of the conormal sheaf $\mathcal N_{X/\mathbb P^n}$ of $X$ in $\mathbb P^n$ twisted by $\mathcal O_X(1)$.
The quadric $Q$ gives an isomorphism $\mathcal O_X^{n+1} \cong (\mathcal O_X^{n+1})^\vee$, hence we get a quotient $\mathcal O_{\overline X}^{n+1} \to \mathcal K^\vee $, whose (projective) fibers are the spaces $ T_P X^\perp$. Adding
the point map $\mathcal O_X^{n+1} \to \mathcal O_X(1)$, we get a surjection
\[\mathcal O_{\overline X}^{n+1} \to \mathcal E:=\mathcal K^\vee \oplus \mathcal O_{\overline X}(1),\]
whose (projective) fibers are the Euclidean normal spaces $N_PX$. Indeed, $\mathbb P(\mathcal E)\subset {\overline X}\times \mathbb P^n$, and the fibers of the structure map $\mathbb P(\mathcal E)\to {\overline X}$ above smooth points of $X$ are the spaces $N_PX\subset \mathbb P^n$ defined above. We call $\mathcal E$ the \emph{Euclidean normal bundle} of $X$ with respect to $Q$ (cf. \cite{C-T} and \cite{ED}).
Let $p\colon \mathbb P(\mathcal E) \to {\overline X}$ denote the structure map, and let $q\colon \mathbb P(\mathcal E) \to \mathbb P^n$ denote the projection on the second factor. The map $q$ is called the \emph{endpoint map} (for an explanation of the name, see \cite{C-T}).
Let $B\in \mathbb P^n$ be a (general) point. Then
\[ p(q^{-1}(B)) = \{ P\in X \, |\, B\in \langle T_PX^\perp,P \rangle \}.\]
Letting $L:=B^\perp$, so that $B=L^\perp$, we see that
\[ p(q^{-1}(B)) = \{ P\in X \, |\, L^\perp\in \langle T_PX^\perp,P \rangle \} = W_L^\perp(X)\]
is a reciprocal polar variety. In particular,
the degree of $q$ is just the degree of the reciprocal polar variety.
\begin{example}
Assume $X\subset \mathbb P^2$ is a (general) plane curve of degree $d$.
The \emph{reciprocal} polar variety is the intersection of the curve with its reciprocal polar, which has degree $d$, so $q$ has degree $d^2$.
\end{example}
In \cite{ED} the degree of the endpoint map $q\colon \mathbb P(\mathcal E) \to \mathbb P^n$ was called the \emph{Euclidean distance degree} of $X$:
\[ {\rm ED}\deg X=p_*c_1(\mathcal O_{\mathbb P(\mathcal E)}(1))^n=s_m(\mathcal E),\]
where $m=\dim X$ and $s_m$ denotes the $m$th Segre class. The reason for the name is the relationship to computing critical points for the distance function in the Euclidean setting. We refer to \cite{ED} for many more details. In the case of curves and surfaces (and in a slightly different setting), the degree of $q$ is called the normal class in \cite{JP2}.
Since $\mathcal E=\mathcal K^\vee \oplus \mathcal O_{\overline X}(1)$, we get \[s(\mathcal E)=s(\mathcal K^\vee)s(\mathcal O_{\overline X}(1))=c(\mathcal P)c(\mathcal O_{\overline X}(-1))^{-1}.\]
Therefore,
\[s_m(\mathcal E)=\sum_{i=0}^m c_{i}(\mathcal P)c_1(\mathcal O_{\overline X}(1))^{m-i}.\]
Since $c_{i}(\mathcal P)c_1(\mathcal O_{\overline X}(1))^{m-i}$ is the degree $\mu_i$ of the $i$th polar variety $[M_i]$ of $X$ \cite[p.~256]{Piene1}, we conclude (cf. \cite[Thm.~5.4]{ED}):
\[ {\rm ED}\deg X=\sum_{i=0}^m \mu_i.\]
The $\mu_i$ are called the \emph{ranks} (or classes) of $X$. Note that $\mu_0$ is the degree of $X$ and $\mu_{n-1}$ is the degree of the dual variety $X^\vee$ (provided the dimension of $X^\vee$ is $n-1$). It is known (see \cite[Prop. 3.6, p.~266]{Piene1}, \cite[3.3]{U}, and \cite[(4), p.~189]{K}) that the $i$th rank of $X$ is equal to the $(n-1-i)$th rank of the dual variety $X^\vee$ of $X$. As observed in \cite[Thm.~5.4]{ED}, it follows that the Euclidean distance degree of $X$ is equal to that of $X^\vee$.
Moreover, whenever we have formulas for the ranks $\mu_i$, we thus get formulas for ${\rm ED}\deg X$.
\begin{example}
If $X\subset \mathbb P^n$ is a smooth hypersurface of degree $\mu_0=d$, then $\mu_i=d(d-1)^i$, hence in this case (cf. \cite[(7.1)]{ED})
\[{\rm ED}\deg X=d\sum_{i=0}^{n-1} (d-1)^i= \frac{d((d-1)^n-1)}{d-2}.
\]
If $X$ has only isolated singularities, then only $\mu_{n-1}$ is affected, and we get, from Teissier's formula \cite[Cor. 1.5, p.~320]{Teissier} and the Pl\"ucker formula for hypersurfaces with isolated singularities (\cite[II.3, p.~46]{Teissier2} and \cite[Cor. 4.2.1, p.~60]{Laumon}),
\[{\rm ED}\deg X= \frac{d((d-1)^n-1)}{d-2}-\sum_{P\in {\rm Sing}(X)}(\mu_P^{(n)}+\mu_P^{(n-1)}),
\]
where $\mu_P^{(n)}$ is the Milnor number and $\mu_P^{(n-1)}$ is the sectional Milnor number of $X$ at $P$.
\end{example}
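The closed form above can be checked directly against the rank sum; a small script (ours, not the paper's) confirming the identity for a range of degrees and dimensions:

```python
# For a smooth hypersurface of degree d in P^n the ranks are
# mu_i = d*(d-1)^i, and their sum should equal the closed form
# d*((d-1)^n - 1)/(d - 2).
def ed_degree_smooth_hypersurface(d: int, n: int) -> int:
    """ED degree as the sum of the ranks mu_0, ..., mu_{n-1}."""
    return sum(d * (d - 1)**i for i in range(n))

for d in range(3, 8):
    for n in range(2, 6):
        closed_form = d * ((d - 1)**n - 1) // (d - 2)   # exact division
        assert ed_degree_smooth_hypersurface(d, n) == closed_form
```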
\begin{example}
Assume $X\subset \mathbb P^3$ is a generic projection of a smooth surface of degree $\mu_0=d$, so that $X$ has \emph{ordinary} singularities: a double curve of degree $\epsilon$, $t$ triple points, and $\nu_2$ pinch points. Then (using the formulas for $\mu_1$ and $\mu_2$ given in \cite[p.~18]{Piene2})
\[{\rm ED}\deg X=\mu_0+\mu_1+\mu_2=d^3-d^2+d-(3d -2)\epsilon - 3t-2\nu_2.\]
Further examples can be deduced from results in \cite{Piene1}, \cite{Piene2}, and \cite{Piene3}.
\end{example}
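As a consistency check (ours, not the paper's): for a smooth surface in $\mathbb P^3$, i.e., $\epsilon=t=\nu_2=0$, the formula of this example should reduce to the smooth-hypersurface value of the previous example:

```python
import sympy as sp

d = sp.symbols('d')

# epsilon = t = nu2 = 0 in the ordinary-singularities formula:
ordinary_sing = d**3 - d**2 + d
# Smooth-hypersurface ED degree in P^3:
smooth_hyp = d * ((d - 1)**3 - 1) / (d - 2)

assert sp.simplify(ordinary_sing - smooth_hyp) == 0
```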
\section{Focal loci}
The \emph{focal locus} (see e.g. \cite{C-T} for an explanation of the name, or \cite{ED}, where it is denoted the \emph{ED discriminant}) is the branch locus $\Sigma_X$ of the endpoint map $q\colon \mathbb P(\mathcal E)\to \mathbb P^n$. More precisely, let $R_X$ denote the ramification locus of $q$; by definition, $R_X$ is the subscheme of $\mathbb P(\mathcal E)$ given on the smooth locus $\mathbb P(\mathcal E)_{\rm sm}$ by the $0$th Fitting ideal $F^0(\Omega^1_{\mathbb P(\mathcal E)/{\mathbb P^n}|_{\mathbb P(\mathcal E)_{\rm sm}}})$. The focal locus $\Sigma_X$ is the closure of the image $q(R_X)$.
Recall that we have, on the Nash modification $\nu\colon \overline X\to X$, the exact sequence
\[0\to \mathcal K\to \mathcal O_{\overline X}^{n+1} \to \mathcal P\to 0,\]
where $\mathcal K$ and $\mathcal P$ are the Nash bundles of the sheaves $\mathcal N_{X/\mathbb P^n}(1)$ and $\mathcal P_X^1(1)$ respectively, and that $\mathcal E=\mathcal K^\vee \oplus \mathcal O_{\overline X}(1)$.
Let $Z\to \overline X$ be a resolution of singularities, and, by abuse of notation, denote by $\mathcal K$, $\mathcal P$, $\mathcal E$ also their pullbacks to $Z$. The class $[R_X]$ of the closure of the ramification locus of $q\colon \mathbb P(\mathcal E) \to \mathbb P^n$ is given by
\[[R_X]=c_1(\Omega^1_{\mathbb P(\mathcal E)})-q^*c_1(\Omega^1_{\mathbb P^n})=c_1(\Omega^1_{\mathbb P(\mathcal E)})+(n+1)c_1(\mathcal O_{\mathbb P(\mathcal E)}(1)).\]
Using the exact sequences
\[0\to p^*\Omega^1_Z\to \Omega^1_{\mathbb P(\mathcal E)}\to \Omega^1_{\mathbb P(\mathcal E)/Z} \to 0\]
and
\[0\to \Omega^1_{\mathbb P(\mathcal E)/Z} \to p^*\mathcal E\otimes \mathcal O_{\mathbb P(\mathcal E)}(-1) \to \mathcal O_{\mathbb P(\mathcal E)} \to 0 \]
we find
\[[R_X] =p^*\bigl(c_1(\Omega_Z^1)+c_1(\mathcal P)+c_1(\mathcal O_Z(1))\bigr)+mc_1(\mathcal O_{\mathbb P(\mathcal E)}(1)).\]
Therefore the degree of $R_X$ with respect to the map $q$ is given by
\[\deg R_X=\bigl(c_1(\Omega_Z^1)+c_1(\mathcal P)+c_1(\mathcal O_Z(1))\bigr)p_* c_1(\mathcal O_{\mathbb P(\mathcal E)}(1))^{n-1}+mp_*c_1(\mathcal O_{\mathbb P(\mathcal E)}(1))^{n}.\]
In terms of Segre classes of $\mathcal E$ this gives
\[\deg R_X=\bigl(c_1(\Omega_Z^1)+c_1(\mathcal P)+c_1(\mathcal O_Z(1))\bigr)s_{m-1}(\mathcal E)+ms_m(\mathcal E),\]
which gives, since $c_1(\Omega^1_Z)=c_1(\mathcal P^1_Z(1))-(m+1)c_1(\mathcal O_Z(1))$,
\[\deg R_X=\bigl(c_1(\mathcal P^1_Z(1))+c_1(\mathcal P)-mc_1(\mathcal O_Z(1))\bigr)s_{m-1}(\mathcal E)+ms_m(\mathcal E).\]
Now $s_m(\mathcal E)=\sum_{i=0}^m\mu_i$ and $s_{m-1}(\mathcal E)c_1(\mathcal O_Z(1))=\sum_{i=0}^{m-1}\mu_i$, hence
\[\deg R_X=\bigl(c_1(\mathcal P^1_Z(1))+c_1(\mathcal P)\bigr)s_{m-1}(\mathcal E)-m\sum_{i=0}^{m-1}\mu_i+m\sum_{i=0}^m\mu_i,\]
hence
\[\deg R_X=\bigl(c_1(\mathcal P_Z^1(1))+c_1(\mathcal P)\bigr)s_{m-1}(\mathcal E)+m\mu_m.\]
In the special case when $X\subset \mathbb P^n$ is a hypersurface ($m=n-1$),
we know by \cite[Cor.~3.4]{Piene1} that
\[c_1(\mathcal P_Z^1(1))=c_1(\mathcal P)+c_1(\mathcal R_{Z/X}^{-1}),\]
where $\mathcal R_{Z/X}=F^0(\Omega^1_{Z/X})$ is the (invertible) ramification ideal of $Z\to X$. Hence we get
\[\deg R_X=\bigl(2c_1(\mathcal P)+c_1(\mathcal R_{Z/X}^{-1})\bigr)s_{n-2}(\mathcal E)+(n-1)\mu_{n-1},\]
or
\[\deg R_X=\bigl(2c_1(\mathcal P)+c_1(\mathcal R_{Z/X}^{-1})\bigr)\sum_{i=0}^{n-2}c_i(\mathcal P)c_1(\mathcal O_Z(1))^{n-2-i}+(n-1)\mu_{n-1}.\]
\begin{example}
Let $X\subset \mathbb P^2$ be a plane curve of degree $\mu_0$ and class $\mu_1=c_1(\mathcal P)$. Then
\[ \deg R_X=2c_1(\mathcal P)+\kappa +\mu_1=3\mu_1+\kappa,\]
where $\kappa$ is the ``total number of cusps'' of $X$. Note that, again by \cite[Cor.~3.4]{Piene1}, $3\mu_1+\kappa = 3\mu_0+\iota$, where $\iota$ is the ``total number of inflection points'' of $X$. But the degree $\mu_0(X)$ of $X$ is equal to the class $\mu_1(X^\vee)$ of the dual curve $X^\vee$, and the number of inflection points $\iota(X)$ of $X$ is equal to the number of cusps $\kappa(X^\vee)$ of $X^\vee$. This shows that the degree of the focal locus, or ED discriminant, of the dual curve is equal to that of $X$:
\[ \deg R_{X^\vee}=3\mu_1(X^\vee)+\kappa(X^\vee)=3\mu_0+\iota=\deg R_X.\]
The focal locus of a plane curve is also known as the \emph{evolute} or the \emph{caustic by reflection}. So, provided the maps $R_X\to \Sigma_X$ and $R_{X^\vee}\to \Sigma_{X^\vee}$ are birational, we have shown that the degree of the evolute of $X$ is equal to the degree of the evolute of the dual curve $X^\vee$. For more on evolutes, see \cite{JP}. In the case that $X$ is a ``Pl\"ucker curve'' of degree $d=\mu_0$ and having only $\delta$ nodes and $\kappa$ ordinary cusps as singularities, and $\iota$ ordinary inflection points, then the classical formula, due to Salmon, is
\[\deg R_X = 3d(d-1)-6\delta -8\kappa.\]
Since in this case $\mu_1=d(d-1)-2\delta - 3\kappa$, this checks with our formula. Moreover, since the number of inflection points is $\iota=3d(d-2)-6\delta - 8\kappa$, $\deg R_{X^\vee}=3d+\iota=3d(d-1)-6\delta -8\kappa =\deg R_X$, as it should.
\end{example}
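The bookkeeping in this example can be verified symbolically (a sketch using only the stated Pl\"ucker-type formulas):

```python
import sympy as sp

d, delta, kappa = sp.symbols('d delta kappa')

# For a Pluecker curve with delta nodes and kappa ordinary cusps:
mu1 = d * (d - 1) - 2 * delta - 3 * kappa          # the class
iota = 3 * d * (d - 2) - 6 * delta - 8 * kappa     # inflection points
salmon = 3 * d * (d - 1) - 6 * delta - 8 * kappa   # Salmon's evolute degree

# deg R_X = 3*mu1 + kappa and deg R_{X^dual} = 3*mu0 + iota (mu0 = d)
# both agree with Salmon's formula.
assert sp.expand(3 * mu1 + kappa - salmon) == 0
assert sp.expand(3 * d + iota - salmon) == 0
```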
If $X$ is smooth, we have $Z={\overline X}=X$, $\mathcal E=\mathcal N_{X/\mathbb P^n}(1)^\vee \oplus \mathcal O_X(1)$, and $s(\mathcal N_{X/\mathbb P^n}(1)^\vee)=c(\mathcal P^1_X(1))$. Hence we can compute the class of $R_X$ in terms of the Chern classes of $X$ and $\mathcal O_X(1)$.
We get
\[\deg R_X=2\bigl(c_1(\Omega_X^1)\sum_{i=0}^{m-1}c_i(\mathcal P^1_X(1))c_1(\mathcal O_X(1))^{m-1-i}+(m+1)\sum_{i=0}^{m-1}\mu_i\bigr) +m\mu_m.\]
Since $c_i(\mathcal P^1_X(1))=\sum_{j=0}^{i}\binom{m+1-i+j}{j}c_{i-j}(\Omega_X^1)c_1(\mathcal O_X(1))^j$, and since the $\mu_i$'s can be expressed in terms of the Chern numbers $c_{m-j}(\Omega_X^1)c_1(\mathcal O_X(1))^j$, we see that $\deg R_X$ can also be expressed in terms of these Chern numbers and the Chern numbers $c_1(\Omega_X^1)c_{m-1-j}(\Omega_X^1)c_1(\mathcal O_X(1))^j$.
\begin{example}
Assume $X\subset \mathbb P^n$ is a smooth curve of degree $d$. Then
\[\deg R_X= 2(2g-2)+4\mu_0+\mu_1=2(2g-2)+4d+2d+2g-2=6(d+g-1),\]
as in \cite[Ex. 7.11]{ED}.
\end{example}
\begin{example}
Let $X\subset \mathbb P^n$ be a smooth surface of degree $\mu_0=d$. Then, as in \cite[Section 5]{C-T} we get:
\[\deg R_X=2(15d+9c_1(\Omega_X^1)c_1(\mathcal O_X(1))+c_1(\Omega_X^1)^2+c_2(\Omega_X^1)).\]
\end{example}
\begin{example}
Let $X\subset \mathbb P^n$ be a general hypersurface ($m=n-1$) of degree $\mu_0$. It is known that in this case $R_X\to \Sigma_X$ is birational \cite[Thm.~2]{T}.
Since $c_1(\Omega_X^1)=(\mu_0-n-1)c_1(\mathcal O_X(1))$ we get
\[\deg \Sigma_X=\deg R_X=(2\mu_0-n-1)s_{n-2}(\mathcal E)c_1(\mathcal O_X(1))+(n-1)s_{n-1}(\mathcal E).\]
Hence
\[ \deg \Sigma_X=(2\mu_0-n-1)\sum_{i=0}^{n-2} \mu_i +(n-1)\sum_{i=0}^{n-1} \mu_i=(n-1)\mu_{n-1}+2(\mu_0-1)\sum_{i=0}^{n-2}\mu_i.\]
For a smooth hypersurface of degree $d$ in $\mathbb P^n$, we have $\mu_i=d(d-1)^i$. Hence
\[ \deg \Sigma_X=(n-1)d(d-1)^{n-1}+2d(d-1)((d-1)^{n-1}-1)(d-2)^{-1},\]
which checks with the formula found in \cite[Thm.~2]{T}.
\end{example}
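The agreement claimed here can be confirmed by specializing the rank sum to $\mu_i=d(d-1)^i$ for a few small $n$ (a symbolic sketch, not from the paper):

```python
import sympy as sp

d = sp.symbols('d')

for n_val in range(2, 6):
    mus = [d * (d - 1)**i for i in range(n_val)]   # mu_i = d*(d-1)^i
    # deg Sigma_X = (n-1)*mu_{n-1} + 2*(mu_0 - 1)*sum_{i<=n-2} mu_i:
    rank_sum = (n_val - 1) * mus[-1] + 2 * (d - 1) * sum(mus[:-1])
    # Closed form from the text:
    closed = (n_val - 1) * d * (d - 1)**(n_val - 1) \
        + 2 * d * (d - 1) * ((d - 1)**(n_val - 1) - 1) / (d - 2)
    assert sp.simplify(rank_sum - closed) == 0
```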
\end{document} |
\begin{document}
\title{Fixed-Endpoint Optimal Control of Bilinear Ensemble Systems\thanks{This work was
supported in part by the National Science Foundation under the awards CMMI-1301148, CMMI-1462796, and ECCS-1509342.}}
\slugger{mms}{xxxx}{xx}{x}{x--x}
\begin{abstract}
Optimal control of bilinear systems has been a well-studied subject in the area of mathematical control. However, techniques for solving emerging optimal control problems involving an ensemble of structurally identical bilinear systems are underdeveloped. In this work, we develop an iterative method to effectively and systematically solve these challenging optimal ensemble control problems, in which the bilinear ensemble system is represented as a time-varying linear ensemble system at each iteration and the optimal ensemble control law is then obtained by the singular value expansion of the input-to-state operator that describes the dynamics of the linear ensemble system. We examine the convergence of the developed iterative procedure and pose optimality conditions for the convergent solution. We also provide examples of practical control designs in magnetic resonance to demonstrate the applicability and robustness of the developed iterative method.
\end{abstract}
\begin{keywords}
Ensemble control, iterative methods, sweep method, fixed-endpoint problems, bilinear systems, optimality conditions, magnetic resonance.
\end{keywords}
\begin{AMS}\end{AMS}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{S. WANG AND J.-S. LI}{FIXED-ENDPOINT CONTROL FOR BILINEAR ENSEMBLES}
\section{Introduction}
Newly emerging fields in science and engineering, such as systems neuroscience, synchronization engineering, and quantum science and technology, give rise to new classes of optimal control problems that involve underactuated manipulation of individual and collective behavior of dynamic units in a large ensemble. Representative examples include neural stimulation for alleviating the symptoms of neurological disorders such as Parkinson's disease, where a population of neurons in the brain is affected by a small number of electrodes \cite{Ching2013a};
pulse designs for exciting and transporting quantum systems between desired states, where an ensemble of quantum systems is driven by a single or multiple pulses in a pulse sequence \cite{Brent06, Li_PNAS11}; and the engineering of dynamical structures for complex oscillator networks, where
sequential patterns of a network of nonlinear rhythmic elements are created and altered by a mild global waveform \cite{Kiss:2007:1886-1889}.
Solving these nontraditional and large-scale underactuated control problems requires the development of systematic and computationally tractable and effective methods.
Among these emerging control problems, in this paper, we will study fixed-endpoint optimal control problems involving bilinear ensemble systems, which arise from the domain of quantum control \cite{Chen14} and appear in a variety of other different fields, such as cancer chemotherapy \cite{Schaettler2002} and robotics \cite{Becker12}. The control of bilinear systems has been a well-studied subject in the area of mathematical control. From Pontryagin's maximum principle to spectral collocation methods, a wide variety of theoretical and computational methods have been developed to solve optimal control problems of bilinear systems \cite{Aganovic1994, Mohler2000}. In particular, the numerical methods are in principle categorized into direct, e.g., pseudospectral methods \cite{Gong06,Ross04}, and indirect approaches, e.g., indirect transcription method \cite{Rao09} and
shooting methods \cite{Bertolazzi2005}. Implementing these existing numerical methods to solve optimal control problems involving an ensemble, i.e., a large number (finitely or infinitely many) or a parameterized family, of bilinear systems
may encounter low efficiency, slow convergence, and instability issues, because most of these methods rely on suitable discretization of the continuous-time dynamics into a large-scale nonlinear program (LSNLP). In addition, the global constraint for such an optimal ensemble control problem, in which each individual system receives the same control input, makes the discretized LSNLP very restrictive and intractable to solve or even to find a feasible solution \cite{Li_TAC12_QCP}.
On the other hand, optimal control problems involving a linear system, or a linear ensemble system, are often computationally tractable and analytically solvable for many special cases, such as the linear quadratic regulator (LQR) \cite{Brockett70} and the minimum-energy control of harmonic oscillator ensembles \cite{Li_TAC11}. This suggests a bypass to solve optimal control problems of bilinear ensemble systems through solving that of linear ensemble systems and motivates the development of the iterative method in this work.
The central idea is to represent the bilinear ensemble system as a linear ensemble system at each iteration, and then feasibly calculate the optimal control and trajectory for each iteration until a convergent solution is found. Iterative methods have been introduced and adopted to deal with diverse control design problems, including the free-endpoint quadratic optimal control of bilinear systems \cite{Hofer1988} and optimal state tracking for nonlinear systems \cite{Cimen2004}, while fixed-endpoint problems, as well as the emerging problems of controlling bilinear ensemble systems, remain unexplored.
In this paper, we combine the idea of the aforementioned iterative method with our previous work on optimal control of linear ensemble systems to construct an iterative algorithm for solving optimal control problems involving a time-invariant bilinear ensemble system of the form,
\begin{eqnarray*}
\frac{d}{dt}{X(t,\b)}=A(\b)X(t,\b)+B(\b)u(t)+\Big(\sum_{i=1}^m u_i(t) B_i(\b)\Big) X(t,\b),
\end{eqnarray*}
where $X=(x_1,\ldots,x_n)^T\in M\subset\mathbb{R}^n$ denotes the state, $\b\in K\subset\mathbb{R}^d$ with $K$ compact and $d$ a positive integer, $u(t)=(u_1(t),\ldots,u_m(t))^T \in\mathbb{R}^m$ is the control, and the matrices $A(\b)\in\mathbb{R}^{n\times n}$, $B(\b)\in\mathbb{R}^{n\times m}$, and $B_i(\b)\in\mathbb{R}^{n\times n}$, $i=1,\ldots,m$, for $\b\in K$.
This paper is structured as follows. In the next section, we present the developed iterative method for fixed-endpoint optimal control of a time-invariant bilinear system, where we introduce a sweep method that accounts for
the terminal condition based on the notion of flow mapping from optimal control theory.
In Section \ref{sec:convergence}, we examine the convergence of the iterative method using the fixed-point theorem. In Section \ref{sec:optimality}, we propose the conditions for global optimality of the convergent solution. Then, in Section \ref{sec:ensemble}, we extend the developed iterative method to solve optimal control problems involving bilinear ensemble systems and show the convergence of the method. Finally, examples and simulations of practical control design problems are illustrated in Section \ref{sec:examples} to demonstrate the applicability and robustness of the developed iterative procedure.
\section{Iterative method for optimal control of bilinear systems}
\label{sec:single_bilinear}
We start by considering a fixed-endpoint, finite-time, quadratic optimal control problem involving a time-invariant bilinear system of the form
\begin{align}
\min &\quad J=\frac{1}{2}\int_0^{t_f} \Big[x^T(t)Qx(t)+u^T(t)Ru(t)\Big] \, dt, \nonumber\\
\label{eq:oc1}
{\rm s.t.} &\quad \dot{x}=Ax+Bu+\Big[\sum_{i=1}^m u_i B_i\Big]x, \tag{P1}\\
&\quad x(0)=x_0, \quad x(t_f)=x_f, \nonumber
\end{align}
where $x(t)\in\mathbb{R}^n$ is the state and $u(t)\in\mathbb{R}^m$ is the control; $A\in\mathbb{R}^{n\times n}$, $B_i\in\mathbb{R}^{n\times n}$, and $B\in\mathbb{R}^{n\times m}$ are constant matrices; $R\in\mathbb{R}^{m\times m}\succ 0$ is positive definite and $Q\in\mathbb{R}^{n\times n}\succeq 0$ is positive semi-definite; and $x_0,x_f\in\mathbb{R}^n$ are the initial and the desired terminal state, respectively. We first represent the time-invariant bilinear system in \eqref{eq:oc1} as a time-varying linear system,
\begin{eqnarray}
\label{eq:linear}
\dot{x}(t)=Ax+Bu+\left[\sum_{j=1}^n x_j(t) N_j\right]u,
\end{eqnarray}
in which we write the bilinear term $\big(\sum_{i=1}^m u_iB_i\big)x=\big(\sum_{j=1}^n x_jN_j\big) u$ with $x_j$ the $j^{th}$ element of $x$, $N_j\in\mathbb{R}^{n\times m}$ for $j=1,\dots,n$, and $u=(u_1,\ldots,u_m)^T\in\mathbb{R}^m$. Then, we solve this optimal control problem by Pontryagin's maximum principle. The Hamiltonian of this problem is
\begin{equation}
H(x,u,\lambda)=\frac{1}{2}(x^TQx+u^TRu)+\lambda^T \Big\{Ax+\Big[B+(\sum_{j=1}^n x_jN_j)\Big] u\Big\},
\end{equation}
where $\lambda(t)\in\mathbb{R}^n$ is the co-state vector. The optimal control is then obtained by the necessary condition, $\frac{\partial H}{\partial u}=0$, given by
\begin{equation}
\label{eq:u}
u^* = -R^{-1}\Big( B+\sum_{j=1}^n x_jN_j\Big)^T\lambda,
\end{equation}
and the optimal trajectory of the state $x$ and the co-state $\lambda$ satisfy, for $t\in[0,t_f]$,
\begin{align}
\label{eq:state}
\dot{x}_i & = \big[Ax\big]_i-\Big[\big(B+\sum_{j=1}^n x_jN_j\big)R^{-1}\big(B+\sum_{j=1}^n x_jN_j\big)^T\lambda\Big]_i, \\
\label{eq:costate}
\dot{\lambda}_i & = -\big[Qx\big]_i-\big[A^T\lambda\big]_i+\lambda^T\Big\{N_iR^{-1}\big(B+\sum_{j=1}^n x_jN_j\big)^T+\big(B+\sum_{j=1}^n x_jN_j\big)R^{-1}N_i^T\Big\}\lambda
\end{align}
with the boundary conditions $x(0)=x_0$ and $x(t_f)=x_f$, where $x_i$, $\l_i$ and $[\,\cdot\,]_i$, $i=1,\dots,n$, are the $i^{th}$ component of the associated vectors. By the following change of variables,
\begin{align}
\label{eq:A}
& \tilde{A}_{ij} = A_{ij}-\Big[\Big(N_j R^{-1} \big(B+\sum_{l=1}^n x_lN_l\big)^T+\big(B+\sum_{l=1}^n x_lN_l\big) R^{-1} N_j^T\Big)\lambda\Big]_i, \\
\label{eq:B}
&\tilde{B}R^{-1}\tilde{B}^T = BR^{-1}B^T-\big(\sum_{j=1}^n x_jN_j\big)R^{-1}\big(\sum_{j=1}^n x_jN_j\big)^T,\\
\label{eq:Q}
&\tilde{Q}=Q,
\end{align}
we can rewrite \eqref{eq:state} and \eqref{eq:costate} into the form
\begin{align}
\label{eq:state1}
\dot{x} &= \tilde{A}x-\tilde{B}R^{-1}\tilde{B}^T\lambda, \quad x(0)=x_0, \quad x(t_f) = x_f, \\
\label{eq:costate1}
\dot{\lambda} &= -\tilde{Q}x-\tilde{A}^T\lambda,
\end{align}
which coincides with the canonical form of the state and co-state equations characterizing the optimal trajectories for the analogous optimal control problem involving the linear system $\dot{x}=\tilde{A}x+\tilde{B}u$ \cite{Schaettler13}. In this way, the optimal state and co-state trajectories for the optimal control problem \eqref{eq:oc1} involving a time-invariant bilinear system
are now expressed in terms of the equations related to a time-varying linear system as in \eqref{eq:state1} and \eqref{eq:costate1}.
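The two identities underlying this representation can be verified numerically: the rewriting $\big(\sum_{i} u_iB_i\big)x=\big(\sum_{j} x_jN_j\big)u$, where the $i^{th}$ column of $N_j$ is the $j^{th}$ column of $B_i$, and the change of variables \eqref{eq:A}--\eqref{eq:B}. The following sketch (Python with random matrices; the dimensions and data are illustrative, not from the paper) checks both:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Bi = [rng.standard_normal((n, n)) for _ in range(m)]  # bilinear matrices B_i
R = np.eye(m)
Rinv = np.linalg.inv(R)

# N_j collects, as its i-th column, the j-th column of B_i, since
# [sum_i u_i B_i x]_k = sum_j x_j sum_i (B_i)_{kj} u_i = [sum_j x_j N_j u]_k
N = [np.column_stack([Bi[i][:, j] for i in range(m)]) for j in range(n)]

x = rng.standard_normal(n)
u = rng.standard_normal(m)
lam = rng.standard_normal(n)

lhs = sum(u[i] * Bi[i] for i in range(m)) @ x
rhs = sum(x[j] * N[j] for j in range(n)) @ u
print(np.allclose(lhs, rhs))  # the two writings of the bilinear term agree

# Change of variables: with Sigma = sum_j x_j N_j, the closed-loop drift
# Atilde x - (B R^-1 B^T - Sigma R^-1 Sigma^T) lam
# should equal A x - (B + Sigma) R^-1 (B + Sigma)^T lam.
Sig = sum(x[j] * N[j] for j in range(n))
Atilde_x = A @ x - sum(
    x[j] * (N[j] @ Rinv @ (B + Sig).T + (B + Sig) @ Rinv @ N[j].T) @ lam
    for j in range(n)
)
lhs2 = Atilde_x - (B @ Rinv @ B.T - Sig @ Rinv @ Sig.T) @ lam
rhs2 = A @ x - (B + Sig) @ Rinv @ (B + Sig).T @ lam
print(np.allclose(lhs2, rhs2))
```

The second check makes the sign in \eqref{eq:B} concrete: the cross terms of $(B+\Sigma)R^{-1}(B+\Sigma)^T$ are absorbed into $\tilde{A}$, leaving the difference $BR^{-1}B^T-\Sigma R^{-1}\Sigma^T$.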
Using this ``linear-system representation'' together with the Sweep method \cite{Schaettler13, Bryson75}, we will solve the optimal control problem \eqref{eq:oc1} in an iterative manner. Specifically, we will consider at each iteration the fixed-endpoint linear quadratic optimal control problem,
\begin{align}
\min &\quad J=\frac{1}{2}\int_0^{t_f} \Big[(x^{(k+1)})^T(t)Qx^{(k+1)}(t)+(u^{(k+1)})^T(t)Ru^{(k+1)}(t)\Big] \, dt, \nonumber\\
\label{eq:oc2}
{\rm s.t.} &\quad
\dot{x}^{(k+1)}(t)=\tilde{A}^{(k)} x^{(k+1)}+\tilde{B}^{(k)} u^{(k+1)} \tag{P2}\\
&\quad x^{(k+1)}(0)=x_0, \quad x^{(k+1)}(t_f)=x_f, \nonumber
\end{align}
by treating the previous trajectory $x^{(k)}$ as a known quantity, where $k\in\mathbb{N}$ denotes the iteration. In the following sections, we will introduce the Sweep method and present the iterative procedure.
\subsection{Sweep method for fixed-endpoint problems}
\label{sec:sweep}
Observe that in \eqref{eq:state1} and \eqref{eq:costate1} there are two boundary conditions for the state $x$ but none for the co-state $\lambda$. Solving such a two-point boundary value problem generally requires specialized computational methods, such as shooting methods, which involve intensive numerical optimization. Here, we adopt the idea of the Sweep method
by letting
\begin{equation}
\label{eq:lam}
\lambda(t) = K(t)x(t)+S(t)\nu,
\end{equation}
with $\l(t_f)=\nu$, where $K(t),~S(t)\in\mathbb{R}^{n\times n}$ and $\nu$ is the multiplier, a constant associated with the terminal constraint $\psi$, which in this case is $\psi(x(t_f))=x(t_f)-x_f$. From the transversality condition in Pontryagin's maximum principle, we know that $K(t_f)=0$ because there is no terminal cost and $S(t_f)=\frac{\partial\psi}{\partial x}\big|_{x(t_f)} = I$. Moreover, if $K$ is chosen to satisfy the Riccati equation
\begin{eqnarray}
\label{eq:kRiccati}
\dot{K}(t)=-Q -\tilde{A}^TK(t)-K(t)\tilde{A}+K(t)\tilde{B}R^{-1}\tilde{B}^TK(t),
\end{eqnarray}
with the terminal condition $K(t_f)=0$, then $S$ satisfies the matrix differential equation
\begin{eqnarray} \label{eq:s}
\dot{S}(t) = -(\tilde{A}^T- K(t)\tilde{B}R^{-1}\tilde{B}^T)S(t),
\end{eqnarray}
with the terminal condition $S(t_f)=I$, by taking the time derivative of \eqref{eq:lam} and using \eqref{eq:state1}, \eqref{eq:costate1} and \eqref{eq:kRiccati}. In addition, in order to fulfill the terminal condition $x(t_f)=x_f$, the multiplier $\nu$ associated with $\psi$ must satisfy
\begin{eqnarray}
\label{eq:endpoint}
x_f = S^T(t)x(t)+P(t)\nu
\end{eqnarray}
for all $t\in [0,t_f]$, where $P(t)\in\mathbb{R}^{n\times n}$ obeys
the matrix differential equation
\begin{align}
\label{eq:pode}
\dot{P}(t)-S^T(t)\tilde{B}R^{-1}\tilde{B}^TS(t)=0,
\end{align}
with the terminal condition $P(t_f)=0$. It follows from \eqref{eq:endpoint} using $t=0$ that
\begin{equation}
\nu=\big[P(0)\big]^{-1} \big[x_f-S^T(0)x_0\big],
\end{equation}
provided $P(t)$ is invertible for $t\in[0,t_f]$. More details about the Sweep method based on the notion of flow mapping are provided in Appendix \ref{appd:sweep}.
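The sweep construction can be sanity-checked numerically. The sketch below (Python; the controllable double-integrator pair, weights, and horizon are hypothetical stand-ins for $\tilde{A}$, $\tilde{B}$, $Q$, $R$ at one iteration) integrates \eqref{eq:kRiccati}, \eqref{eq:s}, and \eqref{eq:pode} backward from $t_f$, computes $\nu$, and then verifies on a forward pass of \eqref{eq:state1}--\eqref{eq:costate1} that the terminal constraint $x(t_f)=x_f$ is met:

```python
import numpy as np

# Hypothetical data: double integrator standing in for (Atilde, Btilde)
n = 2
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
Q = np.eye(n)
R = np.array([[1.]]); Rinv = np.linalg.inv(R)
BRB = B @ Rinv @ B.T
x0 = np.array([0., 0.]); xf = np.array([1., 0.]); tf = 1.0

def rk4(f, y, t, h, steps):
    # fixed-step RK4; h < 0 integrates backward in time
    for _ in range(steps):
        k1 = f(t, y); k2 = f(t + h/2, y + h/2*k1)
        k3 = f(t + h/2, y + h/2*k2); k4 = f(t + h, y + h*k3)
        y = y + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return y

def unpack(y):
    return (y[:n*n].reshape(n, n), y[n*n:2*n*n].reshape(n, n),
            y[2*n*n:].reshape(n, n))

def sweep_rhs(t, y):
    K, S, P = unpack(y)
    dK = -Q - A.T @ K - K @ A + K @ BRB @ K   # Riccati equation for K
    dS = -(A.T - K @ BRB) @ S                 # equation for S
    dP = S.T @ BRB @ S                        # equation for P
    return np.concatenate([dK.ravel(), dS.ravel(), dP.ravel()])

N = 2000
yT = np.concatenate([np.zeros(n*n), np.eye(n).ravel(), np.zeros(n*n)])  # K(tf)=0, S(tf)=I, P(tf)=0
K0, S0, P0 = unpack(rk4(sweep_rhs, yT, tf, -tf/N, N))  # sweep backward to t = 0

nu = np.linalg.solve(P0, xf - S0.T @ x0)  # from x_f = S^T(0) x0 + P(0) nu
lam0 = K0 @ x0 + S0 @ nu                  # lambda(0) = K(0) x0 + S(0) nu

def ham_rhs(t, z):
    x, lam = z[:n], z[n:]
    return np.concatenate([A @ x - BRB @ lam, -Q @ x - A.T @ lam])  # canonical equations

zf = rk4(ham_rhs, np.concatenate([x0, lam0]), 0.0, tf/N, N)
print(np.linalg.norm(zf[:n] - xf))  # terminal error; the sweep enforces x(tf) = xf
```

Note that $P(0)$ is negative definite here (since $\dot{P}\succeq 0$ and $P(t_f)=0$), but invertible, as the controllability of the pair guarantees.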
\subsection{Iteration procedure}
\label{sec:iterative}
The optimal solution of the problem \eqref{eq:oc1} is characterized by the homogeneous time-varying linear system described in \eqref{eq:state1} and \eqref{eq:costate1}, and we will solve for $x$ and $\l$ via an iterative procedure, which is based on analytical expressions and requires no numerical optimizations. To proceed, we write \eqref{eq:state1} and \eqref{eq:costate1} as the iteration equations,
\begin{align}
\label{eq:x_k}
& \dot{x}^{(k+1)} = \tilde{A}^{(k)}x^{(k+1)} - \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T\lambda^{(k+1)},\\
\label{eq:lambda_k}
& \dot{\lambda}^{(k+1)}=- \tilde{Q}^{(k)}x^{(k+1)} -(\tilde{A}^{(k)})^T\lambda^{(k+1)},
\end{align}
with identical boundary conditions $x^{(k+1)}(0)=x_0$ and $x^{(k+1)}(t_f)=x_f$ for all $k=0,1,2,\ldots$, where $\tilde{A}^{(k)}$, $\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T$, and $\tilde{Q}^{(k)}$ are defined according to
\eqref{eq:A}, \eqref{eq:B}, and \eqref{eq:Q}, by
\begin{align}
\label{eq:Ak}
& \tilde{A}_{ij}^{(k)} = A_{ij}-\Big[\Big(N_j R^{-1} \big(B+\sum_{l=1}^n x_l^{(k)}N_l\big)^T+\big(B+\sum_{l=1}^n x_l^{(k)}N_l\big) R^{-1} N_j^T\Big)\lambda^{(k)}\Big]_i, \\
\label{eq:Bk}
&\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T = BR^{-1}B^T-\big(\sum_{j=1}^n x_j^{(k)}N_j\big)R^{-1}\big(\sum_{j=1}^n x_j^{(k)}N_j\big)^T,\\
\label{eq:Qk}
&\tilde{Q}^{(k)}=Q.
\end{align}
Applying the Sweep method introduced in Section \ref{sec:sweep}, we let $\lambda^{(k+1)}(t) = K^{(k+1)}(t)x^{(k+1)}(t)+S^{(k+1)}(t)\nu^{(k+1)}$ for $t\in[0,t_f]$, where $K^{(k+1)}$ satisfies the Riccati equation
\begin{eqnarray}
\label{eq:K}
\dot{K}^{(k+1)}=-Q^{(k)}-K^{(k+1)}\tilde{A}^{(k)}-(\tilde{A}^{(k)})^TK^{(k+1)}+K^{(k+1)}\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^TK^{(k+1)},
\end{eqnarray}
with the boundary condition $K^{(k+1)}(t_f)=0$, and $S^{(k+1)}$ follows
\begin{align}
\label{eq:Sk}
\dot{S}^{(k+1)}=-\Big[(\tilde{A}^{(k)})^T-K^{(k+1)}\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T\Big] S^{(k+1)},\quad S^{(k+1)}(t_f)=I.
\end{align}
Moreover, the multiplier $\nu^{(k+1)}$ satisfies
\begin{equation}
\label{eq:nuk}
\nu^{(k+1)}=\big[P^{(k+1)}(0)\big]^{-1} \big[x_f-(S^{(k+1)})^T(0)x_0\big],
\end{equation}
where $P^{(\cdot)}(t)\in\mathbb{R}^{n\times n}$ is invertible (see Lemma \ref{lem:P_inverse} in Section \ref{sec:convergence}) and satisfies the dynamic equation
\begin{align}
\label{eq:Pk}
\dot{P}^{(k+1)} = (S^{(k+1)})^T\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^TS^{(k+1)}
\end{align}
with the terminal condition $P^{(k+1)}(t_f)=0$. Then, the optimal control \eqref{eq:u} for the original Problem \eqref{eq:oc1} can be expressed as
\begin{equation}
\label{eq:ocuk}
u^*(t) = -R^{-1}\Big[B+\sum_{j=1}^n x^*_j(t)N_j\Big]^T [K^*(t)x^*(t)+S^*(t)\nu^*],
\end{equation}
if this iterative procedure is convergent, where $x^{(k)}\rightarrow x^*$, $K^{(k)}\rightarrow K^*$, and $S^{(k)}\nu^{(k)}\rightarrow S^*\nu^*$.
\begin{remark}
\label{rmk:initialization}
The iterative method can be initialized conveniently by using the optimal control of the system involving only the linear part of the bilinear system in \eqref{eq:oc1}, i.e., the LQR control. That is, $({x}^{(0)}(t),\l^{(0)}(t))$ is taken to be the solution to the homogeneous system
\begin{align*}
& \dot{x}^{(0)}=Ax^{(0)}-BR^{-1}B^T \lambda^{(0)}, \quad x^{(0)}(0)=x_0,\quad x^{(0)}(t_f)=x_f,\\
& \dot{\lambda}^{(0)}=-A^T \lambda^{(0)}.
\end{align*}
However, the linear system $\dot{x}=Ax+Bu$ may be uncontrollable so that the desired transfer between $x_0$ and $x_f$ is impossible and the LQR solution does not exist. In such a case, any state trajectory with the endpoints $x_0$ and $x_f$ can be a feasible initial trajectory $x^{(0)}(t)$ of the iterative procedure.
\end{remark}
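When the linear part is controllable, this initialization is explicit: the minimum-energy transfer of the linear part is given in closed form by the ($R$-weighted) controllability Gramian. A minimal sketch (Python; the double-integrator data are hypothetical, and the nilpotency of $A$ gives the matrix exponential in closed form):

```python
import numpy as np

# LQR-type initialization: minimum-energy transfer for the linear part (A, B).
# Hypothetical double integrator; A^2 = 0, so expm(A t) = I + A t exactly.
A = np.array([[0., 1.], [0., 0.]])
B = np.array([[0.], [1.]])
R = np.array([[1.]]); Rinv = np.linalg.inv(R)
x0 = np.array([1., 0.]); xf = np.array([0., 1.]); tf = 1.0

def expA(t):
    return np.eye(2) + A * t  # closed-form matrix exponential (A is nilpotent)

# R-weighted controllability Gramian W = int_0^tf e^{As} B R^{-1} B^T e^{A^T s} ds (trapezoid)
s = np.linspace(0.0, tf, 2001)
G = np.array([expA(si) @ B @ Rinv @ B.T @ expA(si).T for si in s])
W = np.sum(0.5 * (G[1:] + G[:-1]) * np.diff(s)[:, None, None], axis=0)

eta = np.linalg.solve(W, xf - expA(tf) @ x0)

def u0(t):
    # open-loop minimum-energy control steering x0 to xf over [0, tf]
    return Rinv @ B.T @ expA(tf - t).T @ eta

# forward RK4 on dx = A x + B u0(t); the endpoint should reproduce xf
x, tcur, N = x0.copy(), 0.0, 2000
h = tf / N
f = lambda tt, xx: A @ xx + B @ u0(tt)
for _ in range(N):
    k1 = f(tcur, x); k2 = f(tcur + h/2, x + h/2*k1)
    k3 = f(tcur + h/2, x + h/2*k2); k4 = f(tcur + h, x + h*k3)
    x = x + h/6*(k1 + 2*k2 + 2*k3 + k4)
    tcur += h
print(np.linalg.norm(x - xf))  # small terminal error
```

The resulting trajectory $x^{(0)}(t)$, together with $\l^{(0)}(t)=e^{A^T(t_f-t)}\,(-\eta)$, serves as the starting point of the iteration.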
\subsection{A special case: minimum-energy control of bilinear systems}
\label{eq:minimum_energy}
Before analyzing the convergence of the iterative method, we illustrate the procedure using the example of minimum-energy control of bilinear systems, which is a special case of Problem \eqref{eq:oc1} with $Q=0$. Consider the following fixed-endpoint optimal control problem,
\begin{align}
\min &\quad J=\frac{1}{2}\int_0^{t_f} u^T(t)Ru(t) \, dt, \nonumber\\
\label{eq:ocme}
{\rm s.t.} &\quad \dot{x}(t)=Ax+Bu+\left[\sum_{j=1}^n x_j(t) N_j\right]u, \tag{P3} \\
&\quad x(0)=x_0, \quad x(t_f)=x_f. \nonumber
\end{align}
The Hamiltonian of this
problem is $H(x,u,\lambda)=\frac{1}{2}u^TRu+\lambda^T[Ax+Bu+(\sum_{j=1}^n x_jN_j) u]$, where $\lambda(t)\in\mathbb{R}^n$ is the co-state vector. The optimal control is of the form as in \eqref{eq:u} and the optimal state and co-state trajectories satisfy \eqref{eq:state1} and \eqref{eq:costate1}, respectively, with $Q=0$. The respective iteration equations follow \eqref{eq:x_k} and \eqref{eq:lambda_k} with $Q^{(k)}=0$ for all $k=0,1,2,\ldots$.
Following the iterative method presented in Section \ref{sec:iterative}, we represent the costate $\lambda^{(k+1)}(t)=K^{(k+1)}(t)x^{(k+1)}(t)+S^{(k+1)}(t)\nu^{(k+1)}$, $t\in[0,t_f]$, and the matrix $K^{(k+1)}(t)\in\mathbb{R}^{n\times n}$ satisfies the Riccati equation,
\begin{align}
\label{eq:kme}
\dot{K}^{(k+1)} =& -K^{(k+1)}\tilde{A}^{(k)}-(\tilde{A}^{(k)})^TK^{(k+1)} + K^{(k+1)}\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^TK^{(k+1)},
\end{align}
with the terminal condition $K^{(k+1)}(t_f)=0$, which has the trivial solution, $K^{(k+1)}(t)\equiv 0$, $\forall\, k=0,1,2,\ldots$, and for $t\in [0,t_f]$. This gives
\begin{eqnarray}
\label{eq:lambda_min_energy}
\lambda^{(k+1)}(t)=S^{(k+1)}(t)\nu^{(k+1)},
\end{eqnarray}
and $S^{(k+1)}$ satisfies
\begin{eqnarray}
\label{eq:S(k+1)}
\dot{S}^{(k+1)} = -(\tilde{A}^{(k)})^TS^{(k+1)}, \quad S^{(k+1)}(t_f)=I.
\end{eqnarray}
In addition,
the multiplier associated with the terminal constraint is expressed as in \eqref{eq:nuk}. Combining \eqref{eq:lambda_min_energy} with \eqref{eq:u} gives the minimum-energy control at the $(k+1)^{th}$ iteration,
\begin{equation}
\label{eq:ocume}
(u^*)^{(k+1)}(t) = -R^{-1}\Big[B+\sum_{j=1}^n x^{(k+1)}_jN_j\Big]^T S^{(k+1)}\nu^{(k+1)}.
\end{equation}
Note that the auxiliary variable $P^{(k)}(t)\in\mathbb{R}^{n\times n}$ at each iteration satisfies \eqref{eq:Pk}, and thus $$P^{(k+1)}(0)=-\Phi_{\tilde{A}^{(k)}}(t_f,0) W^{(k+1)} \Phi^T_{\tilde{A}^{(k)}}(t_f,0),$$
where $\Phi^T_{\tilde{A}^{(k)}}(t_f,t)=\Phi_{-(\tilde{A}^{(k)})^T}(t,t_f)$ is the transition matrix for the homogeneous equation \eqref{eq:S(k+1)}
and
$$W^{(k+1)}=\int_0^{t_f} \Phi_{\tilde{A}^{(k)}}(0,\s) \tilde{B}^{(k)} R^{-1} (\tilde{B}^{(k)})^T \Phi^T_{\tilde{A}^{(k)}} (0,\s)d\s$$ is the controllability Gramian for the time-varying linear system as in Problem \eqref{eq:ocme}, or, equivalently, as in \eqref{eq:state1} and \eqref{eq:costate1} with $Q=0$. Moreover, the closed-loop expression in \eqref{eq:ocume} is consistent with the open-loop expression of the minimum-energy control in terms of the controllability Gramian, that is,
\begin{eqnarray}
\label{eq:u*}
(u^*)^{(k+1)}= R^{-1} \big[B+\sum_{j=1}^n x^{(k+1)}_jN_j\big]^T\Phi^T_{\tilde{A}^{(k)}}(0,t)\big(W^{(k+1)}\big)^{-1}\xi^{(k)},
\end{eqnarray}
where $\xi^{(k)}=\Phi_{\tilde{A}^{(k)}} (0,t_f) x_f -x_0$.
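The minimum-energy iteration can be sketched end to end in the scalar case $n=m=1$, where $S^{(k+1)}(t)=\exp\big(\int_t^{t_f}\tilde{A}^{(k)}(\sigma)\,d\sigma\big)$ is available in closed form. The following Python sketch uses hypothetical coefficients (the bilinear coefficient is kept small so that $\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T$ remains positive along the trajectory) and checks that the converged iterate meets the endpoint:

```python
import numpy as np

# Scalar bilinear system dx/dt = a*x + b*u + nb*x*u (coefficients hypothetical;
# here n = m = 1, so N_1 = nb and the formulas of Section 2.2 become scalar).
a, b, nb = 0.0, 1.0, 0.2
R, x0, xf, tf = 1.0, 0.0, 1.0, 1.0
t = np.linspace(0.0, tf, 801)

def tail_integral(f):
    # trapezoidal I(t_i) = integral from t_i to tf of f
    seg = 0.5 * (f[1:] + f[:-1]) * np.diff(t)
    return np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])

x_k = x0 + (xf - x0) * t / tf   # feasible initial trajectory (cf. the initialization remark)
lam_k = np.zeros_like(t)
for it in range(50):
    Ak = a - 2.0 * nb * (b + nb * x_k) * lam_k / R     # scalar form of A tilde
    BRBk = (b**2 - (nb * x_k)**2) / R                  # scalar form of Btilde R^-1 Btilde^T
    S = np.exp(tail_integral(Ak))                      # dS/dt = -Ak S, S(tf) = 1
    g = S**2 * BRBk
    P0 = -np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))  # P(0) = -int_0^tf S^2 BRBk
    nu = (xf - S[0] * x0) / P0
    lam = S * nu          # lambda^{(k+1)} = S^{(k+1)} nu^{(k+1)}  (K = 0 since Q = 0)
    Blam = BRBk * lam
    # forward RK4 on dx = Ak x - BRBk lam, interpolating the grid coefficients
    x = np.empty_like(t); x[0] = x0
    for i in range(len(t) - 1):
        h = t[i + 1] - t[i]
        f = lambda ti, xi: np.interp(ti, t, Ak) * xi - np.interp(ti, t, Blam)
        k1 = f(t[i], x[i]); k2 = f(t[i] + h/2, x[i] + h/2*k1)
        k3 = f(t[i] + h/2, x[i] + h/2*k2); k4 = f(t[i] + h, x[i] + h*k3)
        x[i + 1] = x[i] + h/6*(k1 + 2*k2 + 2*k3 + k4)
    step = np.max(np.abs(x - x_k))
    x_k, lam_k = x, lam
    if step < 1e-10:
        break
u = -(b + nb * x_k) * lam_k / R   # converged minimum-energy control
print(abs(x_k[-1] - xf), step)
```

By construction each iterate already satisfies $x^{(k+1)}(t_f)=x_f$ up to discretization error; what the iteration refines is the consistency between the frozen coefficients $(\tilde{A}^{(k)},\tilde{B}^{(k)})$ and the trajectory they generate.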
\section{Convergence of the Iterative Method}
\label{sec:convergence}
Following the iterative algorithm described in Section \ref{sec:iterative}, we expect to find the optimal control for Problem \eqref{eq:oc1}, provided the iterations are convergent. In this section, we show that the convergence of this algorithm hinges on the controllability of the linear system considered at each iteration and on the choice of the weight matrix $R$.
In Section \ref{sec:ensemble}, we will extend this iterative method to solve optimal control problems involving bilinear ensemble systems.
To facilitate the proof, we introduce the following mathematical tools. Consider the Banach spaces $\mathcal{X}\doteq C([0,t_f];\, \mathbb{R}^n)$, $\mathcal{Y}\doteq C([0,t_f];\, \mathbb{R}^{n\times n})$, and $\mathcal{Z}\doteq C([0,t_f];\, \mathbb{R}^n)$, equipped with the norms
\begin{align}
\label{eq:norm1}
\| x \|_{\alpha} &= \sup_{t\in[0,t_f]} \big[\|x(t)\|\exp(-\alpha t)\big], & \text{for}\ \ x\in\mathcal{X}, \\
\label{eq:norm2}
\| y \|_{\alpha} &= \sup_{t\in[0,t_f]} \big[\|y(t)\|\exp(-\alpha(t_f-t))\big], & \text{for}\ \ y\in\mathcal{Y}, \\
\label{eq:norm3}
\| z \|_{\a} &= \sup_{t\in[0,t_f]}\big[\|z(t)\|\exp(-\alpha(t_f-t))\big], & \text{for}\ \ z\in\mathcal{Z},
\end{align}
in which $\|v\|=\sum_{i=1}^n |v_i|$ for $v\in\mathbb{R}^n$ and $\|D\| = \max_{1\leq j\leq n} \sum_{i=1}^n |D_{ij}|$ for $D\in \mathbb{R}^{n\times n}$; the parameter $\alpha$ serves as an additional degree of freedom to control the rate of convergence \cite{Hofer1988}. On these spaces, we define the operators $T_1:\mathcal{X} \times \mathcal{Y} \times \mathcal{Z} \rightarrow \mathcal{X}$, $T_2: \mathcal{X} \times \mathcal{Y} \times \mathcal{Z} \rightarrow \mathcal{Y}$, and $T_3:\mathcal{X} \times \mathcal{Y} \times \mathcal{Z} \rightarrow \mathcal{Z}$ that characterize the dynamics of $x\in\mathcal{X}$, $K\in\mathcal{Y}$, and $S\nu\in\mathcal{Z}$ as described in Section \ref{sec:iterative}, given by
\begin{align}
\frac{d}{dt}T_1[x,K,S\nu](t) &= \tilde{A}(x(t),K(t),S(t)\nu)T_1[x,K,S\nu](t)-\tilde{B}(x(t))R^{-1}\tilde{B}^T(x(t))T_3[x,K,S\nu](t) \nonumber \\
\label{eq:T1}
& -\tilde{B}(x(t))R^{-1}\tilde{B}^T(x(t))T_2[x,K,S\nu](t)T_1[x,K,S\nu](t), \\
T_1[x,K,S\nu](0) &= x_0 \nonumber \\
\frac{d}{dt}T_2[x,K,S\nu](t) &= - Q + T_2[x,K,S\nu](t)\tilde{B}(x(t))R^{-1}\tilde{B}^T(x(t))T_2[x,K,S\nu](t) \nonumber \\
\label{eq:T2}
& - T_2[x,K,S\nu](t)\tilde{A}(x(t),K(t),S(t)\nu)- \tilde{A}^T(x(t),K(t),S(t)\nu)T_2[x,K,S\nu](t) \\
T_2[x,K,S\nu](t_f) &= 0, \nonumber \\
\frac{d}{dt}T_3[x,K,S\nu](t) &= -\Big[ \tilde{A}^T(x(t),K(t),S(t)\nu) - T_2[x,K,S\nu](t)\tilde{B}(x(t))R^{-1}\tilde{B}^T(x(t))\Big] \cdot \nonumber \\
\label{eq:T3}
&\quad T_3[x,K,S\nu](t), \\
T_3[x,K,S\nu](t_f) &= \nu(T_1[x,K,S\nu],T_2[x,K,S\nu],T_3[x,K,S\nu]) \nonumber
\end{align}
where $\nu(T_1[x,K,S\nu],T_2[x,K,S\nu],T_3[x,K,S\nu])$ is the multiplier satisfying \eqref{eq:nuk}. With these definitions
and the following lemma, the convergence of the iterative method can be established using the fixed-point theorem.
\begin{lemma}
\label{lem:P_inverse}
The matrix $P^{(k+1)}(t)$ as in \eqref{eq:Pk} is nonsingular over $t\in [0,t_f]$ at each iteration $k$ if and only if the time-varying linear system in Problem \eqref{eq:oc2} is controllable over $[0,t_f]$ \cite{Schaettler13}.
\end{lemma}
{\it Proof:} See Appendix \ref{appd:Pnonsingular}.
$\Box$
{\theorem
\label{thm:convergence}
Consider the iterative method with the iterations evolving according to
\begin{align}
\label{eq:T1k}
x^{(k+1)}(t) &= T_1[x^{(k)},K^{(k)},S^{(k)}\nu^{(k)}](t), \\
\label{eq:T2k}
K^{(k+1)}(t) &= T_2[x^{(k)},K^{(k)},S^{(k)}\nu^{(k)}](t), \\
\label{eq:T3k}
S^{(k+1)}(t)\nu^{(k+1)} &= T_3[x^{(k)},K^{(k)},S^{(k)}\nu^{(k)}](t),
\end{align}
where the operators $T_1$, $T_2$, and $T_3$ are defined in \eqref{eq:T1}, \eqref{eq:T2}, and \eqref{eq:T3}, respectively. If at each iteration $k$ the linear system as in \eqref{eq:oc2} is controllable, then $T_1$, $T_2$, and $T_3$ are contractive. Furthermore, starting with a triple of feasible trajectories $(x^{(0)},K^{(0)},S^{(0)}\nu^{(0)})$, the iteration procedure is convergent, and the sequences $x^{(k)}$, $K^{(k)}$ and $S^{(k)}\nu^{(k)}$ converge to the unique fixed points, $x^*$, $K^*$, and $(S\nu)^*$, respectively.
}
{\it Proof:}
Because the linear system in \eqref{eq:oc2} is controllable at each iteration $k$, by Lemma \ref{lem:P_inverse} the matrix $P^{(k+1)}$ defined in \eqref{eq:Pk} is invertible and hence the multiplier $\nu^{(k+1)}$ expressed in \eqref{eq:nuk} is well-defined. Then, we have, at time $t_f$, $S^{(k+1)}(t_f)\nu^{(k+1)} = T_3[x^{(k)},K^{(k)},S^{(k)}\nu^{(k)}](t_f)=\nu^{(k+1)}$, since $S^{(k+1)}(t_f)=I$.
From \eqref{eq:Ak} and \eqref{eq:Bk}, for each fixed $t\in [0,t_f]$, we obtain the bounds
\begin{align*}
\|\tilde{A}^{(k+1)}-\tilde{A}^{(k)}\| &\leq \Big[\sum_{i=1}^n \|G_i \| ^2 \Big] ^{1/2} \| \lambda^{(k+1)}-\lambda^{(k)}\| \\
& + \Big[\sum_{i,j=1}^n \|H_{ij} \| ^2 \Big] ^{1/2} \Big\{ \| \lambda^{(k+1)} \| \| x^{(k+1)}-x^{(k)} \| + \| x^{(k)} \| \| \lambda^{(k+1)}-\lambda^{(k)}\| \Big\}, \\
\|\tilde{B}^{(k+1)}R^{-1}(\tilde{B}&^{(k+1)})^T-\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T\|\leq\Big[\sum_{i,j=1}^n \|H_{ij} \| ^2 \Big] ^{1/2} \| (x^{(k+1)})^2 - (x^{(k)})^2 \|,
\end{align*}
where $G_i = N_i R^{-1} B^T + B R^{-1} N^T_i$ and $H_{ij}=N_i R^{-1} N^T_j + N_j R^{-1} N^T_i$, and from \eqref{eq:Qk}, we have $\tilde{Q}^{(k+1)}=\tilde{Q}^{(k)}$ for all $k=0,1,2,\ldots$. Substituting \eqref{eq:lambda_k} into the above inequalities, we can write these bounds
in terms of $\|x^{(k+1)}-x^{(k)}\|$, $\|K^{(k+1)}-K^{(k)}\|$ and $\|S^{(k+1)}\nu^{(k+1)}-S^{(k)}\nu^{(k)}\|$, given by
\begin{align}
\label{eq:dA}
\|\tilde{A}^{(k+1)}-\tilde{A}^{(k)}&\| \leq \Big\{ \Big[\sum_{i=1}^n \|G_i \| ^2 \Big] ^{1/2} + \|x^{(k)} \|\Big[\sum_{i,j=1}^n \|H_{ij} \| ^2 \Big] ^{1/2} \Big\} \, \cdot \nonumber \\
& \big\{ \| K^{(k+1)} \|\|x^{(k+1)}-x^{(k)}\| + \|K^{(k+1)}-K^{(k)}\| \|x^{(k)} \| + \|S^{(k+1)}\nu^{(k+1)}-S^{(k)}\nu^{(k)}\| \big\} \\
& + \Big[\sum_{i,j=1}^n \|H_{ij} \| ^2 \Big] ^{1/2} \| K^{(k+1)} \| \| x^{(k+1)} \| + \| S^{(k+1)}\nu^{(k+1)} \| \| x^{(k+1)}-x^{(k)} \|, \nonumber\\
\label{eq:dB}
\|\tilde{B}^{(k+1)}R^{-1} (\tilde{B} & ^{(k+1)})^T-\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T\|\leq\Big[\sum_{i,j=1}^n \|H_{ij} \|^2 \Big]^{1/2} \cdot \nonumber \\
& \qquad\qquad\qquad\qquad\qquad\quad\Big\{\| x^{(k+1)} \| + \| x^{(k)} \| \Big\} \| x^{(k+1)}-x^{(k)} \|.
\end{align}
In addition, the solution to \eqref{eq:Sk} is given by
\begin{align}
\nonumber
S^{(k+1)}(t) & = \Phi_{-[(\tilde{A}^{(k)})^T - K^{(k+1)}\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T]}(t,t_f)\, S^{(k+1)}(t_f) \\
\label{eq:transition}
& = \Phi^T_{[\tilde{A}^{(k)} - \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^TK^{(k+1)}]}(t_f,t),
\end{align}
where $\Phi_{(.)}$ denotes the transition matrix associated with the homogeneous system \eqref{eq:Sk} and $S^{(k+1)}(t_f)=I$. Then, we have
\begin{align}
\label{eq:dS}
\|S^{(k+1)}(t)-S^{(k)}(t)\| & \leq \int_t^{t_f} \|\big[ (S^{(k+1)})^T(t)\big] ^{-1}\|\|(S^{(k+1)})^T(\s)\|\Big[\| \tilde{A}^{(k)}(\sigma)-\tilde{A}^{(k-1)}(\sigma) \| \nonumber\\
& + \| \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T(\sigma)-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\sigma) \| \|K^{(k+1)}(\s)\| \\
& + \|\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\sigma) \|\| K^{(k+1)}(\sigma)-K^{(k)}(\sigma) \|\Big] \|S^{(k)}(\s)\|d\sigma. \nonumber
\end{align}
From the Riccati equation for $K^{(k)}$ described in \eqref{eq:K}, we can write the differential equation for the difference $K^{(k+1)}-K^{(k)}$ as
\begin{align}
\dfrac{d}{dt} (K^{(k+1)}- K^{(k)})& =- (K^{(k+1)}- K^{(k)})\Big[\tilde{A}^{(k)} - \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T K^{(k+1)} \Big] \nonumber\\
\label{eq:Kd}
& - \Big[\tilde{A}^{(k)} - \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T K^{(k+1)} \Big]^T (K^{(k+1)}- K^{(k)}) \\
& - K^{(k)}(\tilde{A}^{(k)}-\tilde{A}^{(k-1)}) - (\tilde{A}^{(k)}-\tilde{A}^{(k-1)})^T K^{(k+1)} \nonumber \\
& + K^{(k)}( \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T ) K^{(k+1)}, \nonumber
\end{align}
with the terminal condition $K^{(k+1)}(t_f)-K^{(k)}(t_f)=0$. Applying the variation of constants formula, backward in time from $t=t_f$, to \eqref{eq:Kd} and employing \eqref{eq:transition} yield
\begin{align*}
K^{(k+1)}(t)- & K^{(k)}(t) = (S^{(k)})^T(t)\Big\{ \int_t^{t_f} \Big[(S^{(k)})^T(\s)\Big]^{-1} \Big[K^{(k)}(\s)\big(\tilde{A}^{(k)}(\s)-\tilde{A}^{(k-1)}(\s)\big) \\
& +\big(\tilde{A}^{(k)}(\s)-\tilde{A}^{(k-1)}(\s)\big)^T K^{(k+1)}(\s)-K^{(k)}(\s)\cdot \\
& \big(\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T\big) K^{(k+1)}(\s)\Big] \Big[S^{(k+1)}(\s)\Big]^{-1} d\sigma\Big\} S^{(k+1)}(t),
\end{align*}
which results in
\begin{align}
\label{eq:dK}
\|K^{(k+1)}(t)-K^{(k)}(t)\| & \leq \int_t^{t_f} \Big[\beta_1 \|\tilde{A}^{(k)}(\s)-\tilde{A}^{(k-1)}(\s)\| \nonumber \\
& +\beta_2 \|\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T(\s)-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\s)\| \Big] d\s,
\end{align}
where $\beta_1$ and $\beta_2$ are both finite time-varying coefficients (see Appendix \ref{appd:betas}).
Similarly, from \eqref{eq:x_k} and \eqref{eq:lambda_k}, we can write the differential equation for $(x^{(k+1)}-x^{(k)})$, that is,
\begin{align}
\frac{d}{dt} (x^{(k+1)}- x^{(k)}) &= \Big[\tilde{A}^{(k)} - \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^TK^{(k+1)} \Big] (x^{(k+1)}-x^{(k)}) \nonumber\\
&+ \Big\{ (\tilde{A}^{(k)}-\tilde{A}^{(k-1)}) - \left( \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T \right) K^{(k+1)} \nonumber\\
\label{eq:xd}
& - \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T (K^{(k+1)}-K^{(k)})\Big\} x^{(k)} \\
& - \left( \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T \right) S^{(k+1)}\nu^{(k+1)} \nonumber \\
& - \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T \big(S^{(k+1)}\nu^{(k+1)}-S^{(k)}\nu^{(k)}\big), \nonumber
\end{align}
with the terminal condition $x^{(k+1)}(t_f)-x^{(k)}(t_f)=0$. Applying the variation of constants formula to \eqref{eq:xd} yields,
\begin{align*}
x^{(k+1)}(t)-x^{(k)}(t) &= \Big[ (S^{(k+1)})^T(t)\Big]^{-1}\int_0^t (S^{(k+1)})^T(\s)\Big\{ \Big[\big(\tilde{A}^{(k)}(\s)-\tilde{A}^{(k-1)}(\s)\big) \\
& - \big(\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T(\sigma)-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\sigma)\big) K^{(k+1)}(\sigma) \\
& - \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T (\sigma) \big(K^{(k+1)}(\sigma)-K^{(k)}(\sigma)\big) \Big] x^{(k)}(\sigma) \\
& - \left( \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T \right) S^{(k+1)}\nu^{(k+1)} \nonumber \\
& - \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(S^{(k+1)}\nu^{(k+1)}-S^{(k)}\nu^{(k)}) \Big\} d\s.
\end{align*}
It follows that
\begin{align}
\| x^{(k+1)}(t)-x^{(k)}(t) \| & \leq \int_0^t \Big[ \beta_3 \| \tilde{A}^{(k)}(\sigma)-\tilde{A}^{(k-1)}(\sigma) \| + \beta_4 \| K^{(k+1)}(\sigma) - K^{(k)}(\sigma) \| \nonumber \\
\label{eq:dx}
& + \beta_5 \|\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T(\sigma)-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T (\s) \| \\
& + \beta_6 \|S^{(k+1)}\nu^{(k+1)}(\sigma)-S^{(k)}\nu^{(k)}(\sigma)\| \Big] d\sigma, \nonumber
\end{align}
where $\beta_3$, $\beta_4$, $\beta_5$ and $\beta_6$ are all finite time-varying coefficients (see Appendix \ref{appd:betas}). Furthermore, since $\nu^{(k+1)}$ is a constant within each iteration $k$, from \eqref{eq:Sk} we can write
$$\dfrac{d}{dt}(S^{(k+1)}\nu^{(k+1)}) = -\Big[(\tilde{A}^{(k)})^T - K^{(k+1)}\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T\Big] S^{(k+1)}\nu^{(k+1)},$$
with the terminal condition $S^{(k+1)}(t_f)\nu^{(k+1)} = \nu^{(k+1)}$. This allows us to write
\begin{align*}
\dfrac{d}{dt}(S^{(k+1)}\nu^{(k+1)}-S^{(k)}\nu^{(k)}) &= -\Big[\tilde{A}^{(k)} - \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^TK^{(k+1)} \Big]^T (S^{(k+1)}\nu^{(k+1)}-S^{(k)}\nu^{(k)})\\
& -\Big\{ (\tilde{A}^{(k)}-\tilde{A}^{(k-1)}) - \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T (K^{(k+1)}-K^{(k)})\nonumber\\
& -\left( \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T \right) K^{(k+1)} \Big\} ^T S^{(k)}\nu^{(k)},
\end{align*}
with the terminal condition $S^{(k+1)}(t_f)\nu^{(k+1)}-S^{(k)}(t_f)\nu^{(k)} = \nu^{(k+1)}-\nu^{(k)}$, and then
\begin{align*}
S^{(k+1)}(t)\nu^{(k+1)} &- S^{(k)}(t)\nu^{(k)} = \Big[ (S^{(k+1)})^T(t)\Big]^{-1} \Big\{(\nu^{(k+1)}-\nu^{(k)}) \\
&- \int_t^{t_f} (S^{(k+1)})^T(\s) \Big[(\tilde{A}^{(k)}-\tilde{A}^{(k-1)}) - \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T (K^{(k+1)}-K^{(k)})\nonumber\\
&- \left( \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T \right) K^{(k+1)}\Big]^T S^{(k)}\nu^{(k)} d\sigma \Big\}.
\end{align*}
From \eqref{eq:nuk} we may obtain
\begin{align}
\label{eq:dnu}
\nonumber
\|\nu^{(k+1)}-\nu^{(k)}\| & \leq \|(P^{(k+1)})^{-1}\| \, \|P^{(k+1)}(t)-P^{(k)}(t)\| \, \|\nu^{(k)}\| \\
& + \|(P^{(k+1)})^{-1}\| \, \|(S^{(k+1)})^T(t)-(S^{(k)})^T(t)\| \, \|x_0\|,
\end{align}
in which, by evolving \eqref{eq:Pk} backward in time from $t=t_f$, the difference $P^{(k+1)}(t)-P^{(k)}(t)$ satisfies
\begin{align}
\nonumber
P^{(k+1)}(t)-P^{(k)}(t) &= -\int_t^{t_f} \bigg[\big(S^{(k+1)}+S^{(k)}\big) \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T \big(S^{(k+1)}(\s)-S^{(k)}(\s)\big) \\
\label{eq:dP}
& + S^{(k+1)}\Big(\tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)}(\s))^T-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)}(\s))^T\Big)S^{(k)} \bigg] d\s.
\end{align}
Using \eqref{eq:dnu} and \eqref{eq:dP}, we obtain
\begin{align}
\| S^{(k+1)}(t)\nu^{(k+1)}-S^{(k)}(t)\nu^{(k)} \| & \leq \int_t^{t_f} \Big[\beta_7 \, \| \tilde{A}^{(k)}(\sigma)-\tilde{A}^{(k-1)}(\sigma) \| \nonumber\\
\label{eq:dcostate1}
& + \b_8 \, \| \tilde{B}^{(k)}R^{-1}(\tilde{B}^{(k)})^T(\sigma)-\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\sigma) \| \\
& + \b_9 \, \| K^{(k+1)}(\sigma)-K^{(k)}(\sigma) \| \Big] d\sigma, \nonumber
\end{align}
where $\beta_7$, $\beta_8$ and $\beta_9$ are finite time-varying coefficients (see Appendix \ref{appd:betas}).
Combining the bounds in \eqref{eq:dA}, \eqref{eq:dB}, \eqref{eq:dK}, \eqref{eq:dx}, and \eqref{eq:dcostate1}, and using the definitions of the operators $T_1$, $T_2$, and $T_3$ in \eqref{eq:T1}, \eqref{eq:T2} and \eqref{eq:T3}, respectively, we arrive at the following inequality, which holds component-wise:
\begin{align}
\label{eq:contraction}
& \left[\begin{array}{c}\| T_1[x^{(k)},K^{(k)},S^{(k)}\nu^{(k)}]-T_1[x^{(k-1)},K^{(k-1)},S^{(k-1)}\nu^{(k-1)}]\|_{\a} \\
\| T_2[x^{(k)},K^{(k)},S^{(k)}\nu^{(k)}]-T_2[x^{(k-1)},K^{(k-1)},S^{(k-1)}\nu^{(k-1)}]\|_{\a} \\
\|T_3[x^{(k)},K^{(k)},S^{(k)}\nu^{(k)}]-T_3[x^{(k-1)},K^{(k-1)},S^{(k-1)}\nu^{(k-1)}]\|_{\a} \end{array}\right] \nonumber \\
& \leq M \left[\begin{array}{c}\|x^{(k)}-x^{(k-1)}\|_{\a} \\
\|K^{(k)}-K^{(k-1)}\|_{\a} \\
\|S^{(k)}\nu^{(k)}-S^{(k-1)}\nu^{(k-1)}\|_{\a} \end{array}\right],
\end{align}
where $M\in\mathbb{R}^{3\times 3}$ is a matrix whose elements all depend on $R^{-1}$ (see Appendix \ref{appd:M}). As a result, the eigenvalues of $M$ can be placed within the unit circle by choosing a sufficiently large $R$. Therefore, the operators $T_1$, $T_2$ and $T_3$ are contractive, and the fixed-point theorem \cite{Brockett70} guarantees the convergence of the iterative procedure to the unique fixed points, i.e., $x^{(k)}\rightarrow x^*$, $K^{(k)}\rightarrow K^*$, and $S^{(k)}\nu^{(k)}\rightarrow (S\nu)^*$.
Note that the choice of $R$ determines the magnitude of the eigenvalues of $M$ and thus can also be used to improve the convergence rate of the iterative procedure.
$\Box$
\begin{remark}[Optimality of the Convergent Solution]
\label{rmk:necessary}
The convergence of $x^{(k)}\rightarrow x^*$, $K^{(k)}\rightarrow K^*$, and $S^{(k)}\nu^{(k)}\rightarrow (S\nu)^*$ immediately leads to $\lambda^{(k)}\rightarrow \lambda^*$ by \eqref{eq:lam} and by the continuity of all the variables involved. This in turn guarantees that the fixed points $x^*$ and $\lambda^*$ resulting from the iterative procedure are the solutions to \eqref{eq:state1} and \eqref{eq:costate1} with the convergent $\tilde{A}$ and $\tilde{B}R^{-1}\tilde{B}^T$, denoted $\tilde{A}^*$ and $\tilde{B}^*R^{-1}(\tilde{B}^*)^T$, obtained from \eqref{eq:A} and \eqref{eq:B},
respectively. This implies that the convergent solution pair $(x^*,\lambda^*)$ satisfies the necessary optimality condition, and thus the convergent control $u^*$ is a candidate for the optimal control for Problem \eqref{eq:oc1}.
\end{remark}
\section{Global Optimality of the Convergent Solution}
\label{sec:optimality}
We have shown in Remark \ref{rmk:necessary} that the convergent optimal control $u^*$ generated by the iterative procedure satisfies the necessary optimality condition. In this section, we further show that $u^*$ is the unique global optimal control under appropriate assumptions on the value function associated with Problem \eqref{eq:oc2}.
\subsection{Optimality of the solution at each iteration}
\label{sec:sufficient}
For each iteration $k$, \eqref{eq:oc2} is a time-dependent problem with a specified time horizon, and the optimal control satisfies the Hamilton-Jacobi-Bellman (HJB) equation, given by
\begin{align}
\label{eq:HJBlqr}
\dfrac{\partial V^{(k)}}{\partial t}(t,x^{(k)}) + \min_{u\in \mathcal{U}} \Big\{ \dfrac{\partial V^{(k)}}{\partial x^{(k)}}(t,x^{(k)})^T (\tilde{A}^{(k-1)}x^{(k)}+\tilde{B}^{(k-1)}u)+\dfrac{1}{2} [(x^{(k)})^TQx^{(k)}+u^TRu]\Big\}\equiv 0,
\end{align}
with the boundary condition $V^{(k)}(t_f,x^{(k)})=0$, where $V^{(k)}$ is the value function and $\mathcal{U}$ is the set of all admissible controls.
Since the matrix $R\in \mathbb{R}^{m\times m}$ is positive definite, the function to be minimized in \eqref{eq:HJBlqr} is strictly convex in the control variable $u$. As a result, the minimization problem in \eqref{eq:HJBlqr} has a unique solution given by the stationary point, which satisfies $(\tilde{B}^{(k-1)})^T \dfrac{\partial V^{(k)}}{\partial x^{(k)}}(t,x^{(k)}) + Ru_*^{(k)}(t)=0$, or, equivalently,
\begin{align}
\label{eq:optimalu}
u_*^{(k)}(t) = -R^{-1}(\tilde{B}^{(k-1)})^T \dfrac{\partial V^{(k)}}{\partial x^{(k)}}(t,x^{(k)}).
\end{align}
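That the stationary point is the unique minimizer follows from strict convexity in $u$. As a quick numerical sanity check (with randomly generated stand-in matrices, not quantities from the paper), the candidate $u_* = -R^{-1}\tilde{B}^Tp$, where $p$ stands in for the gradient $\partial V/\partial x$, beats nearby perturbations for any positive definite $R$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
B = rng.standard_normal((n, m))      # stand-in for the frozen B~^{(k-1)}
p = rng.standard_normal(n)           # stand-in for the gradient dV/dx
G = rng.standard_normal((m, m))
R = G @ G.T + m * np.eye(m)          # a positive definite R

def u_terms(u):
    """The u-dependent part of the minimand in the HJB equation."""
    return p @ (B @ u) + 0.5 * u @ R @ u

u_star = -np.linalg.solve(R, B.T @ p)    # u* = -R^{-1} B^T dV/dx

# The stationary point beats random perturbations of itself.
for _ in range(100):
    u = u_star + 0.1 * rng.standard_normal(m)
    assert u_terms(u) >= u_terms(u_star)
```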
Substituting \eqref{eq:optimalu} into
\eqref{eq:HJBlqr} gives a first-order nonlinear partial differential equation,
\begin{align}
\dfrac{\partial V^{(k)}}{\partial t}(t,x^{(k)}) &+ \dfrac{1}{2} \Big[ \dfrac{\partial V^{(k)}}{\partial x^{(k)}}(t,x^{(k)})^T\tilde{A}^{(k-1)}x^{(k)} +(x^{(k)})^T(\tilde{A}^{(k-1)})^T\dfrac{\partial V^{(k)}}{\partial x^{(k)}}(t,x^{(k)})\Big] \nonumber \\
\label{eq:fopde}
& + \dfrac{1}{2} (x^{(k)})^TQx^{(k)}-\dfrac{1}{2} \dfrac{\partial V^{(k)}}{\partial x^{(k)}}(t,x^{(k)})^T \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T \dfrac{\partial V^{(k)}}{\partial x^{(k)}}(t,x^{(k)})\equiv 0.
\end{align}
Due to the quadratic and symmetric nature of \eqref{eq:fopde}, we consider the value function of the form,
\begin{align}
\label{eq:Vfunc}
V^{(k)}(t,x) = \dfrac{1}{2}x^TK^{(k)}x + x^TS^{(k)}\nu^{(k)} +\dfrac{1}{2}(\nu^{(k)})^TP^{(k)}\nu^{(k)},
\end{align}
with the boundary condition $V^{(k)}(t_f,x^{(k)})=0$, where $K^{(k)}(t),~S^{(k)}(t),~P^{(k)}(t)\in \mathbb{R}^{n\times n}$ for all $t\in [0,t_f]$ and $\nu^{(k)}\in\mathbb{R}^n$. It is straightforward to verify that $(V^{(k)}(t,x^{(k)}),u_*^{(k)})$
is a classical solution to the HJB equation \eqref{eq:fopde} if the matrices $K^{(k)}(t)$, $S^{(k)}(t)$, and $P^{(k)}(t)$ satisfy the matrix differential equations \eqref{eq:kRiccati}, \eqref{eq:s} and \eqref{eq:pode}, respectively, with the respective boundary conditions $K^{(k)}(t_f)=0$, $S^{(k)}(t_f)=I$, and $P^{(k)}(t_f)=0$. Note that the value function $V^{(k)}(t,x^{(k)})$ in \eqref{eq:Vfunc} is continuously differentiable on its domain and extends continuously onto the terminal manifold $\mathcal{N}^{(k)} = \{ x^{(k)}(t_f) = x_f \}$.
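For concreteness, the backward sweep for $K^{(k)}$ can be sketched numerically. The snippet below integrates a matrix Riccati equation of the form $\dot{K} = -Q - K\tilde{A} - \tilde{A}^TK + K\tilde{B}R^{-1}\tilde{B}^TK$ (the form is inferred from the derivation in the Appendix) backward from the terminal condition $K(t_f)=0$, for illustrative stand-in data only:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (stand-in) data for one iteration of the procedure.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # frozen A~^{(k-1)}
B = np.array([[0.0], [1.0]])              # frozen B~^{(k-1)}
Q, R, tf = np.eye(2), np.array([[1.0]]), 1.0
Rinv = np.linalg.inv(R)

def riccati_rhs(t, k_flat):
    """dK/dt = -Q - K A - A^T K + K B R^{-1} B^T K."""
    K = k_flat.reshape(2, 2)
    dK = -Q - K @ A - A.T @ K + K @ B @ Rinv @ B.T @ K
    return dK.ravel()

# Integrate backward in time from the terminal condition K(tf) = 0.
sol = solve_ivp(riccati_rhs, (tf, 0.0), np.zeros(4), rtol=1e-10, atol=1e-12)
K0 = sol.y[:, -1].reshape(2, 2)
print(K0)   # K(0): symmetric, positive semidefinite
```

Symmetry of $K$ is preserved by the flow since the right-hand side is symmetric whenever $K$ is, consistent with the quadratic value function ansatz above.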
\subsection{Global optimality of the convergent solution}
\label{sec:global}
We showed in Section \ref{sec:sufficient} that the control $u_*^{(k)}$ presented in \eqref{eq:optimalu} is the global optimum for the $k^{th}$ iteration in Problem \eqref{eq:oc2}. Here, we will show that the convergent solution $u^*$ of $u_*^{(k)}$, i.e., $u_*^{(k)}\rightarrow u^*$,
for the linear problem \eqref{eq:oc2} is indeed the global optimal control for the original bilinear problem \eqref{eq:oc1} under some regularity conditions on the value function associated with \eqref{eq:oc2} expressed in \eqref{eq:Vfunc}.
\begin{theorem}
\label{thm:globalsufficient}
Consider the iterative method applied to Problem \eqref{eq:oc2}, and suppose that at each iteration $k$ the linear system as in \eqref{eq:oc2} is controllable. Let $u^*$ be the convergent solution of the optimal control sequence $\{u_*^{(k)}\}$ generated by the iterative procedure for $k\in\mathbb{N}$, i.e., $u_*^{(k)}\rightarrow u^*$, and let $V^*$ be the corresponding convergent value function defined in \eqref{eq:Vfunc}, i.e., $V^{(k)} \rightarrow V^*$. If (i) $V^*\in C^1$, and $\dfrac{\partial V^*}{\partial t}$ and $\dfrac{\partial V^*}{\partial x}$ are Lipschitz continuous; and (ii) there exist real-valued $L_1$ functions, $g(t)$ and $h_i(x)$, $i = 1,2,\ldots,n$, i.e., $g \in L_1([0,t_f])$ and $h_i \in L_1(M)$ where $M \subset \mathbb{R}^n$, such that $\Big|\dfrac{\partial V^{(k)}}{\partial t}(t,x^{(k)}(t))\Big| \leq g(t)$ for all $k\in\mathbb{N}$ and $t\in [0,t_f]$, and for each component $i$, $\Big| \Big[\dfrac{\partial V^{(k)}}{\partial x}(t,x)\Big]_i\Big| \leq h_i(x)$ for all $k\in\mathbb{N}$ and for all $x=x^{(k)}(t)\in\mathbb{R}^n$, then $u^*$ is a global optimum for the original Problem \eqref{eq:oc1}.
\end{theorem}
{\it Proof:}
First of all, the conditions in (i) guarantee the existence of the optimal control. Because the linear system as in \eqref{eq:oc2} is controllable at each iteration $k$, there exist unique fixed points for the sequences $x^{(k)}$, $K^{(k)}$, and $S^{(k)}\nu^{(k)}$ such that $x^{(k)}\rightarrow x^*$, $K^{(k)}\rightarrow K^*$ and $S^{(k)}\nu^{(k)}\rightarrow (S\nu)^*$ by Theorem \ref{thm:convergence}. It follows that the partial derivatives of $V^{(k)}$ with respect to $t$ and $x$ are convergent, denoted $\dfrac{\partial V^{(k)}}{\partial t}(t,x) \rightarrow V_t(t,x)$ and $\dfrac{\partial V^{(k)}}{\partial x}(t,x)\rightarrow V_x(t,x)$, where
\begin{align*}
V_t(t,x^*) &= \frac{1}{2}(x^*)^T \Big[ - Q - (\tilde{A}^*)^TK^* - K^*\tilde{A}^* + K^*\tilde{B}^* R^{-1}(\tilde{B}^*)^TK^*\Big] x^* \\
&+ (x^*)^T \Big[ -(\tilde{A}^*)^T - K^*\tilde{B}^* R^{-1}(\tilde{B}^*)^T \Big]S^* \nu^* + \dfrac{1}{2} (\nu^*)^T (S^*)^T \tilde{B}^* R^{-1}(\tilde{B}^*)^TS^* \nu^*,\\
V_x(t,x^*) &= K^*x^* +S^* \nu^*= \lambda^*,
\end{align*}
in which $\tilde{A}^*$ and $\tilde{B}^*R^{-1}(\tilde{B}^*)^T$ are the limits of $\tilde{A}^{(k)}$ and $\tilde{B}^{(k)} R^{-1}(\tilde{B}^{(k)})^T$, respectively,
following \eqref{eq:Ak} and \eqref{eq:Bk}. Because $\dfrac{\partial V^{(k)}}{\partial t}(t,x^{(k)}(t))$ and $\dfrac{\partial V^{(k)}}{\partial x}(t,x^{(k)}(t))$ are dominated by $g(t)$ and $h(x)=(h_1,\ldots,h_n)^T$, respectively, by the Lebesgue Dominated Convergence theorem, we have
\begin{align}
\label{eq:V*1}
\lim_{k\rightarrow \infty} V^{(k)}(t,x) &= \lim_{k\rightarrow \infty}\int_0^t \dfrac{\partial V^{(k)}}{\partial \s}(\s,x) d\s=\int_0^t \lim_{k\rightarrow \infty}\dfrac{\partial V^{(k)}}{\partial \s}(\s,x) d\s= \int_0^t V_\s(\s,x) d\s, \\
\label{eq:V*2}
\lim_{k\rightarrow \infty} V^{(k)}(t,x) &= \lim_{k\rightarrow \infty}\int_{x(0)}^{x(t)} \dfrac{\partial V^{(k)}}{\partial x}(t,x) dx =\int_{x(0)}^{x(t)} \lim_{k\rightarrow \infty}\dfrac{\partial V^{(k)}}{\partial x}(t,x) dx = \int_{x(0)}^{x(t)} V_x (t,x) dx,
\end{align}
where $\lim_{k\rightarrow \infty} V^{(k)}(t,x)=V^* (t,x)$ by assumption.
Because $V^*$ is continuously differentiable with respect to both $t$ and $x$, we obtain from \eqref{eq:V*1} and \eqref{eq:V*2} the partial derivatives
\begin{align}
\label{eq:direvative}
\dfrac{\partial V^*}{\partial t}(t,x) = V_t (t,x),\quad \dfrac{\partial V^*}{\partial x}(t,x) = V_x (t,x).
\end{align}
In addition, due to the convergence of the iterative procedure, \eqref{eq:fopde} is convergent to
\begin{align*}
V_t(t,x^*) &+ V_x(t,x^*)^T\tilde{A}^*x^* + \dfrac{1}{2} (x^*)^TQx^* - \frac{1}{2} V_x(t,x^*)^T \tilde{B}^*R^{-1}(\tilde{B}^*)^T V_x(t,x^*)\equiv 0,
\end{align*}
which, by employing \eqref{eq:direvative} and $V_x(t,x^*) = \lambda^*$, can be rewritten as
\begin{align*}
\dfrac{\partial V^*}{\partial t}(t,x^*) + \dfrac{\partial V^*}{\partial x}(t,x^*)^T (\tilde{A}^*x^*-\tilde{B}^*R^{-1}(\tilde{B}^*)^T\lambda^*)+\dfrac{1}{2}\big[(x^*)^TQx^*+(\lambda^*)^T \tilde{B}^*R^{-1}(\tilde{B}^*)^T\lambda^*\big]\equiv 0,
\end{align*}
with the boundary condition $V^*(t_f,x^*)=0$. Because the convergent solution pair $(x^*,\lambda^*)$ satisfies the necessary condition (see Remark \ref{rmk:necessary}), the above equation is equivalent to, by \eqref{eq:optimalu}, \eqref{eq:state1}, \eqref{eq:costate1}, and \eqref{eq:linear},
\begin{align}
\label{eq:HJBu}
\dfrac{\partial V^*}{\partial t}(t,x^*) + \dfrac{\partial V^*}{\partial x}(t,x^*)^T \Big[Ax^*+\Big(B+\sum_{i=1}^n x^*_i N_i\Big) u^*\Big]+\dfrac{1}{2} [(x^*)^TQx^*+(u^*)^TRu^* ]\equiv 0.
\end{align}
Since $V^*$ is differentiable, according to the dynamic programming principle \cite{Schaettler13}, the quantity on the left-hand side of \eqref{eq:HJBu}, with $u^*$ replaced by any control $u$ in the admissible control set $\mathcal{U}\subset\mathbb{R}^m$, is non-negative. It follows that $(V^*,u^*)$ is a solution to the HJB equation of the original Problem \eqref{eq:oc1}, that is,
\begin{align}
\label{eq:HJBbilinear}
\dfrac{\partial V}{\partial t}(t,x) + \min_{u\in\mathcal{U}} \Big\{ \dfrac{\partial V}{\partial x}(t,x)^T \Big[Ax+\Big(B+\sum_{i=1}^n x_i N_i\Big) u\Big]+\dfrac{1}{2} [x^TQx+u^TRu ] \Big\} \equiv 0,
\end{align}
with the boundary condition $V(t_f,x)=0$.
Furthermore, the optimal control $u^*$ is global and unique, since the minimization in \eqref{eq:HJBbilinear} is over a strictly convex quadratic function of $u$, and $u^*$ is of the form expressed in \eqref{eq:ocuk}.
$\Box$
\section{Optimal Control of Bilinear Ensemble Systems}
\label{sec:ensemble}
The iterative method presented in Sections \ref{sec:iterative} and \ref{sec:convergence} can be directly extended to deal with optimal control problems involving a bilinear ensemble system. Consider the minimum-energy control problem for steering a time-invariant bilinear ensemble system, indexed by the parameter $\b$ varying on a compact set $K\subset\mathbb{R}^d$, given by
\begin{eqnarray}
\label{eq:bilinear_ensemble}
\frac{d}{dt}{X(t,\b)}=A(\b)X(t,\b)+B(\b)u+\Big(\sum_{i=1}^m u_i(t) B_i(\b)\Big) X(t,\b),
\end{eqnarray}
where $X=(x_1,\ldots,x_n)^T\in M\subset\mathbb{R}^n$ denotes the state, $\b\in K$, $u:[0,T]\to\mathbb{R}^m$ is the control; the matrices $A(\b)\in\mathbb{R}^{n\times n}$, $B(\b)\in\mathbb{R}^{n\times m}$, and $B_i(\b)\in\mathbb{R}^{n\times n}$, $i=1,\ldots,m$, for $\b\in K$.
Following the iterative procedure developed in Section \ref{sec:iterative}, we represent the time-invariant bilinear ensemble system in \eqref{eq:bilinear_ensemble} as an iteration equation and formulate the minimum-energy optimal ensemble control problem as
\begin{align}
\min & \quad J=\frac{1}{2}\int_0^{t_f} (u^{(k)})^T(t)Ru^{(k)}(t)\,dt, \nonumber\\
\label{eq:oc4}
{\rm s.t.} & \quad \frac{d}{dt}{X^{(k)}(t,\b)}=\tilde{A}^{(k-1)}(t,\b)X^{(k)}(t,\b)+\tilde{B}^{(k-1)}(t,\b) u^{(k)}, \tag{P4}\\
& \quad X^{(k)}(0,\b)=X_0(\b),\quad X^{(k)}(t_f,\b)=X_f(\b), \nonumber
\end{align}
which involves a time-varying linear ensemble system. We consider this linear ensemble system in a Hilbert space setting; that is, the elements of the matrices $\tilde{A}^{(k-1)}(t,\b)\in\mathbb{R}^{n\times n}$ and $\tilde{B}^{(k-1)}(t,\b)\in\mathbb{R}^{n\times m}$, defined analogously as in \eqref{eq:Ak} and \eqref{eq:Bk}, are real-valued $L_{\infty}$ and $L_2$ functions, respectively, over the space $D=[0,T]\times K$, denoted $\tilde{A}^{(k-1)}\in L_{\infty}^{n\times n}(D)$ and $\tilde{B}^{(k-1)}\in L_2^{n\times m}(D)$, with $X_0,X_f\in L_2^n(K)$ and $R\in\mathbb{R}^{m\times m}\succ 0$.
By the variation of constants formula, the ensemble control law that steers the system in \eqref{eq:oc4} between $X_0(\b)$ and $X_f(\b)$ at time $t_f$ satisfies, for each iteration $k$, the integral equation $(L^{(k)}u^{(k)})(\b)=\xi^{(k)}(\b)$,
where
\begin{eqnarray}
\label{eq:xi}
\xi^{(k)}(\b)=\Phi^{(k-1)}(0,t_f,\b)X_f(\b)-X_0(\b),
\end{eqnarray}
$\Phi^{(k-1)}(t,0,\b)$ is the transition matrix associated with $\tilde{A}^{(k-1)}(t,\b)$, and the linear operator $L^{(k)}$ is compact \cite{Li_TAC11} and is defined by
\begin{eqnarray}
\label{eq:Lk}
(L^{(k)}u)(\b)=\int_0^T\Phi^{(k-1)}(0,\s,\b)\tilde{B}^{(k-1)}(\s,\b) u(\s)d\s.
\end{eqnarray}
\begin{theorem}
\label{thm:convergence_ensemble}
Consider the optimal ensemble control problem \eqref{eq:oc4}. Let $(\s_n^{(k)},\mu_n^{(k)},\nu_n^{(k)})$ be a singular system of the operator $L^{(k)}$ defined in \eqref{eq:Lk}. The iterative procedure described according to \eqref{eq:T1} and \eqref{eq:T2} is convergent if the conditions
\begin{align}
(i)\ \sum_{n=1}^{\infty}\frac{|\langle\xi^{(k)},\nu_n^{(k)}\rangle|^2}{(\s_n^{(k)})^2}<\infty, \quad (ii)\ \xi^{(k)}\in\overline{\mathcal{R}(L^{(k)})}
\end{align}
hold at each iteration $k\in\mathbb{N}$,
where $\xi^{(k)}$ is defined in \eqref{eq:xi}, $\overline{\mathcal{R}(L^{(k)})}$ denotes the closure of the range space of $L^{(k)}$, and $\langle \xi,\nu\rangle=\int_K \xi^T\nu d\b$ is the inner product defined in $L_2^n(K)$. Furthermore, starting with a feasible initial ensemble trajectory $X^{(0)}(t,\b)$ for Problem \eqref{eq:oc4}, the sequences $X^{(k)}$ and $u^{(k)}$, as in \eqref{eq:x_k} and \eqref{eq:lambda_k}, generated by the iterative method converge to the unique fixed points $X^*$ and $u^*$, respectively.
\end{theorem}
{\it Proof:} Since the conditions (i) and (ii) hold for the operator $L^{(k)}$ at each iteration $k$, the time-varying linear ensemble system in \eqref{eq:oc4} obtained at each iteration $k$ is ensemble controllable,
namely, there exists a $u^{(k)}\in L_2^m([0,t_f])$ that steers the ensemble from $X_0(\b)$ to $X_f(\b)$ at time $t_f<\infty$. Moreover, the minimum-energy control that completes this transfer is an infinite weighted sum of the singular functions of $L^{(k)}$, i.e., $u^{(k)}=\sum_{n=1}^{\infty}\frac{1}{\s^{(k)}_n}\langle\xi^{(k)},\nu^{(k)}_n\rangle\mu^{(k)}_n$,
and, for any $\e>0$, the truncated optimal control of $u^{(k)}$, i.e.,
\begin{eqnarray}
\label{eq:uk_N}
u^{(k)}_N=\sum_{n=1}^{N^{(k)}(\e)}\frac{1}{\s^{(k)}_n}\langle\xi^{(k)},\nu^{(k)}_n\rangle\mu^{(k)}_n,
\end{eqnarray}
drives the ensemble from $X_0(\b)$ to an $\e$-neighborhood of $X_f(\b)$, denoted $\mathcal{B}_\e(X_f)$, satisfying $\|X^{(k)}(t_f,\b)-X_f(\b)\|_2<\e$, where the positive integer $N^{(k)}(\e)$ depends on $\e>0$ \cite{Li_TAC11}. In addition, we denote the optimal trajectories corresponding to the controls $u^{(k)}$ and $u_N^{(k)}$ by $X^{(k)}$ and $X_N^{(k)}$, respectively. Then, according to Theorem \ref{thm:convergence}, the iterative procedure applied to Problem \eqref{eq:oc4} converges to the optimal control and optimal trajectory pair $(X^*,u^*)$, i.e., $X^{(k)}\rightarrow X^*$ and $u^{(k)}\rightarrow u^*$.
However, the iterations
are evolved based on the linear ensemble system formed by the ``truncated trajectory'' $X_N^{(k)}$, given by
\begin{eqnarray}
\label{eq:truncated_sys}
\frac{d}{dt}{\hat{X}^{(k+1)}(t,\b)}=\tilde{A}_N^{(k)}(t,\b)\hat{X}^{(k+1)}(t,\b)+\tilde{B}_N^{(k)}(t,\b) \hat{u}^{(k+1)},
\end{eqnarray}
where $\tilde{A}_N^{(k)}(t,\b)=\tilde{A}(X_N^{(k)}(t,\b))$ and $\tilde{B}_N^{(k)}(t,\b)=\tilde{B}(X_N^{(k)}(t,\b))$ depend on $X_N^{(k)}$.
Therefore, it remains to show that, at each iteration $k$, the ensemble system in \eqref{eq:truncated_sys} is ensemble controllable and, furthermore, that the optimal control and optimal trajectory obtained based on this iteration equation converge to $u^*$ and $X^*$, respectively, as $k\rightarrow\infty$.
Now, let $L_N^{(k+1)}:L_2^{m}([0,t_f])\rightarrow L_2^n(K)$ be the operator
defined by
\begin{eqnarray}
\label{eq:LNk}
\Big(L_N^{(k+1)}u\Big)(\b)=\int_0^T\Phi_N^{(k)}(0,\s,\b)\tilde{B}_N^{(k)}(\s,\b) u(\s)d\s,
\end{eqnarray}
where $\Phi_N^{(k)}(t,0,\b)$ is the transition matrix associated with $\tilde{A}_N^{(k)}(t,\b)$ and $u\in L_2^m([0,t_f])$. By Condition (i),
$u^{(k)}_N$ converges to $u^{(k)}$ uniformly \cite{Li_TAC11} with
\begin{eqnarray}
\label{eq:uniform1}
\|u^{(k)}-u^{(k)}_N\|_2^2 = \sum_{n = N+1}^{\infty } \dfrac{1}{(\s^{(k)}_n)^2}| \langle\xi^{(k)},\nu^{(k)}_n\rangle |^2 \rightarrow 0,\quad\text{as}\quad N\rightarrow\infty,
\end{eqnarray}
and $L^{(k)}$ is compact (see Appendix \ref{appd:linear_ensemble}) so that
\begin{eqnarray}
\label{eq:uniform2}
\|L^{(k)}u^{(k)}-L^{(k)}u_N^{(k)}\|_2^2 = \sum_{n = N+1}^{\infty } | \langle\xi^{(k)},\nu^{(k)}_n\rangle |^2 \rightarrow 0, \quad\text{as}\quad N\rightarrow\infty.
\end{eqnarray}
It follows that $\|X^{(k)}-X^{(k)}_N\|_2^2\rightarrow 0$ as $N\rightarrow\infty$ and, consequently, we have
\begin{eqnarray}
\label{eq:ANBN}
\|\tilde{A}^{(k)}-\tilde{A}^{(k)}_N\|_2^2\rightarrow 0,\quad \|\tilde{B}^{(k)}-\tilde{B}^{(k)}_N\|_2^2\rightarrow 0,\quad\text{as}\quad N\rightarrow\infty;
\end{eqnarray}
which implies that at each iteration $k$, the system \eqref{eq:truncated_sys} is also ensemble controllable for sufficiently large $N$.
Next, let $\hat{u}^{(k+1)}$ be the minimum-energy control that steers the system in \eqref{eq:truncated_sys} at each iteration from $X_0(\b)$ to $\mathcal{B}_\e(X_f)$, which is characterized by $(L_N^{(k+1)}\hat{u}^{(k+1)})(\b)=\hat{\xi}^{(k+1)}(\b)$, where $\hat{\xi}^{(k+1)}=\Phi_N^{(k)}(0,t_f,\b)X_f(\b)-X_0(\b)$. Also, we recall that $\Phi_N^{(k)}(t,0,\b)$ and $\Phi^{(k)}(t,0,\b)$
are the respective transition matrices associated with $\tilde{A}_N^{(k)}(t,\b)\in L_{\infty}^{n\times n}(D)$ and $\tilde{A}^{(k)}(t,\b)\in L_{\infty}^{n\times n}(D)$, and thus $\|\Phi^{(k)}(t,0,\b)-\Phi^{(k)}_N(t,0,\b)\|_2^2\rightarrow 0$ as $N\rightarrow\infty$. This guarantees $\|\Phi^{(k)}(0,t,\b)-\Phi^{(k)}_N(0,t,\b)\|_2^2\rightarrow 0$ as $N\rightarrow\infty$ since $\|\Phi_N^{(k)}(t,0,\b)\|$ and $\| \Phi^{(k)}(t,0,\b)\|$ are both bounded. Then, we have
\begin{eqnarray*}
\| L^{(k+1)}u^{(k+1)}-L^{(k+1)}_N\hat{u}^{(k+1)}\|_2 = \| \xi^{(k+1)}- \hat{\xi}^{(k+1)} \|_2 \leq \|\Phi^{(k)}-\Phi_N^{(k)}\|_2 \|X_f\|_2\rightarrow 0
\end{eqnarray*}
as $N\rightarrow\infty$ since $X_f\in L_2^n(K)$. This leads to $\|X^{(k+1)}-\hat{X}^{(k+1)}\|_2^2\rightarrow 0$ as $N\rightarrow\infty$, where $\hat{X}^{(k+1)}$ is the trajectory resulting from $\hat{u}^{(k+1)}$. Furthermore, the property $\|\Phi^{(k)}-\Phi^{(k)}_N\|_2\rightarrow 0$, together with \eqref{eq:LNk} and \eqref{eq:ANBN}, gives
\begin{eqnarray}
\label{eq:LN_L}
\|L^{(k+1)}_N-L^{(k+1)}\|_2 \rightarrow 0,\quad\text{as}\quad N\rightarrow\infty.
\end{eqnarray}
Because $L^{(k)}(u^{(k)}-\hat{u}^{(k)})=L^{(k)}u^{(k)}-L_N^{(k)}\hat{u}^{(k)}+(L_N^{(k)}-L^{(k)})\hat{u}^{(k)}=\xi^{(k)}-\hat{\xi}^{(k)}+(L_N^{(k)}-L^{(k)})\hat{u}^{(k)}$,
we obtain, as $N\rightarrow\infty$,
\begin{eqnarray*}
\| u^{(k+1)}-\hat{u}^{(k+1)}\|_2 \leq \frac{\|X_f\|_2}{\|L^{(k+1)}\|_2}\|\Phi^{(k)}-\Phi_N^{(k)}\|_2 + \frac{\|\hat{u}^{(k+1)}\|_2}{\|L^{(k+1)}\|_2}\|L^{(k+1)}_N-L^{(k+1)}\|_2\rightarrow 0,
\end{eqnarray*}
by \eqref{eq:LN_L} and by the facts $X_f\in L_2^n(K)$, $\hat{u}^{(k)}\in L_2^m([0,t_f])$, and $\|L^{(k+1)}\|_2 >0$ due to controllability of the system in \eqref{eq:oc4}.
Similar to \eqref{eq:uniform1} and \eqref{eq:uniform2}, we have uniform convergence properties for $\hat{u}_N^{(k)}$, the truncated control of $\hat{u}^{(k)}$, and for $L_N^{(k)}$ such that $\|\hat{u}^{(k)}-\hat{u}^{(k)}_N\|_2^2\rightarrow 0$ and $\| L^{(k)}_N\hat{u}^{(k)}-L^{(k)}_N\hat{u}^{(k)}_N\|_2^2\rightarrow 0$ as $N\rightarrow\infty$. Then, the triangle inequality gives
\begin{eqnarray}
\label{eq:uk_u^N}
\|u^{(k)}-\hat{u}_N^{(k)}\|_2 \leq \|u^{(k)}-\hat{u}^{(k)}\|_2 +\|\hat{u}^{(k)}-\hat{u}_N^{(k)}\|_2 \rightarrow 0,
\end{eqnarray}
and hence
\begin{eqnarray}
\label{eq:Lu_LNuN}
\|L^{(k)}u^{(k)}-L^{(k)}_N\hat{u}^{(k)}_N\|_2 \leq \|L^{(k)}\|_2\, \|u^{(k)}-\hat{u}_N^{(k)}\|_2 + \|L^{(k)}-L_N^{(k)}\|_2\, \|\hat{u}_N^{(k)}\|_2,
\end{eqnarray}
as $N\rightarrow\infty$, which guarantees that, at each iteration $k$, the trajectory $\hat{X}_N^{(k)}$, resulting from $\hat{u}^{(k)}_N$, converges to the trajectory $X^{(k)}$ resulting from $u^{(k)}$, i.e.,
\begin{eqnarray}
\label{eq:x_N}
\|X^{(k)}-\hat{X}_{N}^{(k)}\|_2^2\rightarrow 0, \quad \text{as}\quad N\rightarrow \infty.
\end{eqnarray}
In addition, since $X^{(k)}\rightarrow X^*$ and $u^{(k)}\rightarrow u^*$ as $k\rightarrow\infty$, we have
\begin{eqnarray}
\label{eq:uu*}
\|\hat{u}_N^{(k)}-u^*\|_2 \leq \|\hat{u}_N^{(k)}-u^{(k)}\|_2 +\|u^{(k)}-u^*\|_2 \rightarrow 0
\end{eqnarray}
as $k,~N\rightarrow\infty$ by \eqref{eq:uk_u^N}, as well as $\| L^{(k)}-L^*\|_2\rightarrow 0$ as $k\rightarrow\infty$, where $L^*$ is the operator defined with respect to the convergent solutions $X^*$ and $\l^*$, given by
\begin{eqnarray}
\label{eq:L}
(L^*u)(\b)=\int_0^T\Phi^*(0,\s,\b)\tilde{B}\big(X^*(\s,\b)\big)u(\s)d\s,
\end{eqnarray}
in which $\Phi^*(t,0,\b)$ is the transition matrix associated with $\tilde{A}(X^*(t,\b))$. This then gives
\begin{eqnarray}
\label{eq:LuLu*}
\|L^{(k)}u^{(k)}-L^*u^*\|_2 \leq \|L^{(k)}\|_2 \|u^{(k)}-u^*\|_2 +\|L^{(k)}-L^*\|_2 \|u^*\|_2 \rightarrow 0,
\end{eqnarray}
as $k\rightarrow\infty$. Finally, combining \eqref{eq:Lu_LNuN} and \eqref{eq:LuLu*} and applying the triangle inequality yield
$$\|L^*u^*-L^{(k)}_N\hat{u}^{(k)}_N\|_2\rightarrow 0 \text{\quad as \quad} k,N\rightarrow\infty ,$$
which implies that $\|X^*-\hat{X}_N^{(k)}\|\rightarrow 0$ as $k,N\rightarrow\infty$. This together with \eqref{eq:uu*} concludes the convergence of the sequences $\{\hat{u}_N^{(k)}\}$ and $\{\hat{X}_N^{(k)}\}$, generated by the iterative method, to $u^*$ and $X^*$, respectively, i.e., the minimum-energy ensemble control law and the optimal ensemble trajectory that satisfy the necessary optimality condition.
$\Box$
\section{Examples and Numerical Simulations}
\label{sec:examples}
In this section, we apply the developed iterative algorithm to solve optimal control problems involving single and ensemble bilinear systems, including the well-known Bloch system that models the evolution of two-level quantum systems \cite{Li_PNAS11}.
Ensemble control of Bloch systems is key to many applications in quantum control, such as nuclear magnetic resonance (NMR)
spectroscopy and imaging (MRI), quantum computation and quantum information processing \cite{Li_PRA06, Mabuchi05}, as well as quantum optics \cite{Silver85}.
\begin{example}[Population Control in Socioeconomics]
\label{ex:population}
\rm We consider a simple but representative bilinear system arising in socioeconomics, which models the dynamics of population growth, simplified from Gibson's population transfer model \cite{Gibson1972} and given by $\frac{dx}{dt} = ux$,
where $x\in\mathbb{R}$ represents the number of domestic laborers and $u\in\mathbb{R}$ denotes the immigration attractiveness multiplier. Putting this into the canonical form presented in \eqref{eq:linear}, we have $A=0$, $B=0$, and $N=1$. We consider the design of the optimal control $u$ that reduces the domestic laborer population by two-thirds, i.e., from $x(0)=1$ to $x(t_f)=1/3$, in $t_f=2$, while minimizing the cost functional $J=\int_0^2 (x^2+u^2)dt$. The optimal control obtained by the iterative method is shown in Figure \ref{fig:PControl} and the resulting optimal trajectory is displayed in Figure \ref{fig:PState}.
\end{example}
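As an independent cross-check of this example (not the paper's iterative method), one can discretize the control and solve the same problem by direct transcription; a minimal sketch, with an endpoint penalty standing in for the hard constraint $x(t_f)=1/3$:

```python
import numpy as np
from scipy.optimize import minimize

# dx/dt = u x, x(0) = 1, target x(2) = 1/3, J = int_0^2 (x^2 + u^2) dt.
N, tf = 100, 2.0
dt = tf / N

def rollout(u):
    x = np.empty(N + 1)
    x[0] = 1.0
    for i in range(N):                 # forward Euler on dx/dt = u x
        x[i + 1] = x[i] + dt * u[i] * x[i]
    return x

def cost(u):
    x = rollout(u)
    J = np.sum(x[:-1] ** 2 + u ** 2) * dt
    return J + 1e3 * (x[-1] - 1.0 / 3.0) ** 2   # soft endpoint constraint

u0 = np.full(N, np.log(1.0 / 3.0) / tf)   # constant control reaching 1/3
res = minimize(cost, u0, method="L-BFGS-B")
xf = rollout(res.x)[-1]
print(res.fun, xf)   # xf should sit close to 1/3
```

The penalty weight and grid size here are illustrative choices; tightening them pushes the endpoint closer to the prescribed $x(t_f)$.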
\begin{figure}
\caption{\subref{fig:PControl} The optimal control obtained by the iterative method and \subref{fig:PState} the resulting optimal trajectory for the population control problem in Example \ref{ex:population}.}
\label{fig:PControl}
\label{fig:PState}
\label{fig:PopulationSingle}
\end{figure}
\begin{figure}
\caption{\subref{fig:singlecontrol} The convergent minimum-energy control steering the spin system from $x_0$ to $x_f$ and \subref{fig:singletrajectory} the resulting optimal trajectory in Example \ref{ex:bloch}.}
\label{fig:singlecontrol}
\label{fig:singletrajectory}
\label{fig:blochsingle}
\end{figure}
\begin{example}[Excitation of a Two-Level System]
\label{ex:bloch}
\rm A canonical example of optimal control of bilinear ensemble systems in quantum control is the optimal pulse design for the excitation of a collection of two-level systems \cite{Li_PNAS11}, in which the dynamics of a quantum
ensemble obeys the Bloch equations, and optimal pulses (controls) that steer the ensemble between states of interest are pursued. The Bloch equations form a bilinear control system evolving on the special orthogonal group SO(3), given by
\begin{align}
\label{eq:blochsys}
\frac{d}{dt} \begin{bmatrix}
x_1 \\ x_2 \\ x_3
\end{bmatrix} = \begin{bmatrix}
0 & -\omega & u_1 \\
\omega & 0 & -u_2 \\
-u_1 & u_2 & 0 \end{bmatrix}
\begin{bmatrix}
x_1 \\ x_2 \\ x_3
\end{bmatrix},
\end{align}
where $x=(x_1,x_2,x_3)^T$ denotes the bulk magnetization of the spins, $\w$ denotes the Larmor frequency of the spins, and $u_1$ and $u_2$ are the radio-frequency fields applied on the $y$ and the $x$ direction, respectively \cite{Li_TAC09}. A common control task is to drive the system from the equilibrium state $x_0 = (0,0,1)^T$ to an excited state on the transverse plane, e.g., $x_f=(1,0,0)^T$, and, in particular, achieving the desired state transfer with minimum-energy is of practical importance \cite{Li_PNAS11}.
Here, we consider exciting a spin system with the Larmor frequency $\w=0.5$, and first rewrite the Bloch system in the canonical form as presented in \eqref{eq:linear} with
$$A = \begin{bmatrix}
0 & -\omega & 0 \\
\omega & 0 & 0 \\
0 & 0 & 0
\end{bmatrix}, \quad B=0;
\quad N_1 = \begin{bmatrix}
0 & 0 \\
0 & 0 \\
-1 & 0
\end{bmatrix}, \quad
N_2 = \begin{bmatrix}
0 & 0 \\
0 & 0 \\
0 & 1
\end{bmatrix}, \quad
N_3 = \begin{bmatrix}
1 & 0 \\
0 & -1 \\
0 & 0
\end{bmatrix},$$
and apply the iterative method described in Section \ref{sec:iterative} to find the minimum-energy control. Figure \ref{fig:singlecontrol} illustrates the convergent minimum-energy control that steers the spin system from $x_0$ to $x_f$ at $t_f=1$ and minimizes $J=\int_0^1 (u_1^2+u_2^2) dt$, and the resulting trajectory is shown in Figure \ref{fig:singletrajectory}. The optimal control and the trajectory converge in 17 iterations, starting from an initial trajectory $x^{(0)}$ joining the endpoints $x_0$ and $x_f$ along the shortest path, given the stopping criterion $\|x(t_f)-x_f\|\leq 10^{-5}$.
\end{example}
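A structural property worth verifying numerically is that the Bloch dynamics in \eqref{eq:blochsys} evolve on the sphere: the system matrix is skew-symmetric for any controls, so the magnetization norm $\|x(t)\|$ is conserved. A short sketch with arbitrary (non-optimal) test pulses:

```python
import numpy as np
from scipy.integrate import solve_ivp

omega = 0.5
u1 = lambda t: np.cos(2.0 * t)        # arbitrary test pulses, not optimal
u2 = lambda t: 0.5 * np.sin(3.0 * t)

def bloch(t, x):
    # Skew-symmetric system matrix of the Bloch equations.
    Om = np.array([[0.0,    -omega,  u1(t)],
                   [omega,   0.0,   -u2(t)],
                   [-u1(t),  u2(t),  0.0]])
    return Om @ x

x0 = np.array([0.0, 0.0, 1.0])        # equilibrium state
sol = solve_ivp(bloch, (0.0, 1.0), x0, rtol=1e-10, atol=1e-12)
norms = np.linalg.norm(sol.y, axis=0)
print(np.allclose(norms, 1.0, atol=1e-6))
```

Any control pair $(u_1,u_2)$ leaves the norm invariant, which is why the target states in the examples lie on the unit sphere.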
\begin{example}[Excitation of an Ensemble of Two-Level Systems]
\label{ex:bloch_ensemble}
\rm Here, we apply the iterative method to design a minimum-energy broadband excitation ($\pi/2$) pulse that steers an ensemble of spin systems modeled in \eqref{eq:blochsys}, with the ensemble state defined as $X(t,\w)=(x(t,\w),y(t,\w),z(t,\w))^T$ for $\w\in [-1,1]$, from $X(0,\w)=(0,0,1)^T$ to $X(t_f,\w)=(1,0,0)^T$, where $t_f=10$ is the pulse duration. At each iteration $k$, the minimum-energy ensemble control law is calculated using a singular-value-decomposition (SVD) based algorithm \cite{Li_ACC12_SVD}, by which the input-to-state operator $L^{(k)}_N$ as in \eqref{eq:LNk}, which characterizes the evolution of the system dynamics, is approximated by a matrix of finite rank, owing to the compactness of this operator. Then, the singular values, $\s_n^{(k)}$, and the singular vectors, $\mu_n^{(k)}$ and $\nu_n^{(k)}$, are calculated using the SVD to synthesize the optimal ensemble control expressed in \eqref{eq:uk_N} \cite{Li_ACC12_SVD}. The convergent minimum-energy ensemble control law, i.e., the minimum-energy broadband $\pi/2$ pulse, is illustrated in Figure \ref{fig:ensemblecontrol}, and the performance, i.e., the $x$-component of the final state $X(t_f,\w)$, $t_f=10$, for 141 spin systems is shown in Figure \ref{fig:ensembletrajectory}. The iterative algorithm converges in 152 iterations
given the stopping criterion $\|X(t_f,\w)-X_f(\w)\|\leq 10^{-5}$.
\end{example}
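The SVD synthesis in \eqref{eq:uk_N} has a simple finite-dimensional analogue. Below, a random matrix stands in for the finite-rank approximation of the compact operator $L^{(k)}_N$; the truncated minimum-norm solution is assembled from the leading singular triples, and the transfer residual shrinks as more terms are kept (all data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((20, 200))   # finite-rank stand-in for L^{(k)}_N
xi = rng.standard_normal(20)         # stand-in transfer requirement

# L = U diag(s) Vt: columns of U ~ nu_n, rows of Vt ~ mu_n, s ~ sigma_n.
U, s, Vt = np.linalg.svd(L, full_matrices=False)

def u_trunc(N):
    """Truncated minimum-norm control: sum_{n<=N} (1/s_n) <xi, nu_n> mu_n."""
    coeffs = (U.T @ xi)[:N] / s[:N]
    return Vt[:N].T @ coeffs

residuals = [np.linalg.norm(L @ u_trunc(N) - xi) for N in (5, 10, 20)]
print(residuals)   # non-increasing; essentially zero at full rank
```

The trade-off visible here is the one the truncation level $N^{(k)}(\e)$ controls in the proof: fewer terms mean lower control energy but a larger terminal error.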
\begin{figure}
\caption{\subref{fig:ensemblecontrol} The convergent minimum-energy broadband $\pi/2$ pulse and \subref{fig:ensembletrajectory} the $x$-component of the final state $X(t_f,\w)$, $t_f=10$, for 141 spin systems in Example \ref{ex:bloch_ensemble}.}
\label{fig:ensemblecontrol}
\label{fig:ensembletrajectory}
\label{fig:blochensemble}
\end{figure}
\section{Conclusion}
We develop an iterative method for solving fixed-endpoint optimal control problems involving time-invariant bilinear and bilinear ensemble systems. We analyze the convergence of the iterative procedure using the contraction mapping principle and the fixed-point theorem. The central idea of our approach is to represent the time-invariant bilinear ensemble system as a time-varying linear ensemble system and then to show, in an iterative manner, that the optimal control of the original bilinear (ensemble) system is the convergent optimal control of the associated linear (ensemble) system. In addition, we establish conditions for the global optimality of the convergent solution. Finally, we demonstrate the effectiveness and applicability of the constructed iterative method through several examples involving the control of population growth in socioeconomics and the design of broadband pulses for exciting an ensemble of two-level systems, which is essential to many applications in quantum control.
\section{Appendix}
\subsection{Sweep method and the notion of flow mapping}
We illustrate the idea of the developed Sweep method for dealing with fixed-endpoint optimal control problems presented in Section \ref{sec:sweep} using the notion of flow mapping \cite{Schaettler13} and show the connection between the non-singularity of the matrix $P$ defined in \eqref{eq:pode} and the controllability of the system in \eqref{eq:oc2}.
\subsubsection{Flow mapping in optimal control}
\label{appd:sweep}
Consider the optimal control problem parameterized by $p$, given by
\begin{align}
\label{eq:oc_flow}
\min &\quad J=\int_0^{t_f} L(t,x(t,p),u(t,p)) \, dt, \nonumber\\
{\rm s.t.} &\quad \dot{x}(t,p)=f(t,x(t,p),u(t,p)), \tag{P5}
\end{align}
where $p\in\mathbb{R}$ represents the perturbation of extremals. Let $\mathcal{E}$ be a $C^r$-parameterized family of extremals for Problem \eqref{eq:oc_flow}, and suppose that $F:E\rightarrow G$ is the flow restricted to a subset $E$ of the $(t,p)$-space, which is a $C^{1,r}$-diffeomorphism onto an open subset $G\subset \mathbb{R}\times \mathbb{R}^n$ of the $(t,x)$-space. Let $C$ be the cost-to-go function for Problem \eqref{eq:oc_flow}; then the value function $V^{\mathcal{E}}:G\rightarrow \mathbb{R}$ of $\mathcal{E}$ defined by
$V^{\mathcal{E}} = C\circ F^{-1}$,
is continuously differentiable in $(t,x)$ and $r$-times continuously differentiable in $x$ for any fixed $t$ (see Theorem 5.2.1 in \cite{Schaettler13}). In addition, the function $u_*:G\rightarrow \mathbb{R}$ defined by $u_* = u\circ F^{-1}$
is an admissible feedback control that is continuous and $r$-times continuously differentiable in $x$ for any fixed $t$. The diagrams below illustrate the relation of the mappings defined above.
\begin{center}
\qquad $\xymatrix{
E \ar[d]^F \ar[r]^C &{\mathbb{R}}\\
G \ar[ru]_{V^{\mathcal{E}}}}$
\qquad \qquad \qquad $\xymatrix{
E \ar[d]^F \ar[r]^u &{\mathbb{R}}\\
G \ar[ru]_{u_*}}$
\end{center}
\noindent Together, the pair $(V^{\mathcal{E}},u_*)$ is a classical solution to the Hamilton-Jacobi-Bellman equation, and the following identities hold for all $p$,
\begin{align*}
\dfrac{\partial V^{\mathcal{E}}}{\partial t}(t, x(t,p)) & = -H(t,\l(t,p),x(t,p),u(t,p)), \\
\dfrac{\partial V^{\mathcal{E}}}{\partial x}(t, x(t,p)) & = \l(t,p),
\end{align*}
where $H$ is the Hamiltonian associated with Problem \eqref{eq:oc_flow} and $\lambda$ is the co-state of $x$.
$V^{\mathcal{E}}$ is $(r+1)$-times continuously differentiable in $x$ on $G$ because $\mathcal{E}$ is nicely $C^r$-parameterized, and then we have $\dfrac{\partial^2 V^{\mathcal{E}}}{\partial x^2}(t, x(t,p)) = \dfrac{\partial \l}{\partial p}(t,p)\Big(\dfrac{\partial x}{\partial p}(t,p)\Big)^{-1}$,
provided that $\dfrac{\partial x}{\partial p}(t,p)$ is invertible.
\subsubsection{Sweep method derived based on the flow mapping}
Now, consider the parameterized optimal control problem associated with Problem \eqref{eq:oc2}
given by
\begin{align}
\min &\quad J=\frac{1}{2}\int_0^{t_f} \Big[x^T(t,p)Qx(t,p)+u^T(t,p)Ru(t,p)\Big] \, dt, \nonumber\\
\label{eq:oc6}
{\rm s.t.} &\quad
\dot{x}(t,p)=\tilde{A}x(t,p)+\tilde{B}u(t,p) \tag{${\rm P2}_p$}\\
&\quad x(0,p)=x_0, \quad x(t_f,p)=x_f, \nonumber
\end{align}
Differentiating the state and co-state dynamics with respect to $p$, we then have, for this parameterized LQR problem,
\begin{align*}
\dfrac{d}{dt}\Big(\dfrac{\partial x}{\partial p}(t,p)\Big) &= \tilde{A}\dfrac{\partial x}{\partial p}(t,p)-\tilde{B}R^{-1}\tilde{B}^T\dfrac{\partial \l}{\partial p}(t,p), \\
\dfrac{d}{dt}\Big(\dfrac{\partial \l}{\partial p}(t,p)\Big) &= -Q\dfrac{\partial x}{\partial p}(t,p)-\tilde{A}^T\dfrac{\partial \l}{\partial p}(t,p).
\end{align*}
Defining
\begin{eqnarray}
\label{eq:Delta}
\Delta (t,p) = \dfrac{\partial \l}{\partial p}(t,p) - K(t,p)\dfrac{\partial x}{\partial p}(t,p)-S(t,p)\dfrac{\partial \nu}{\partial p}(p)
\end{eqnarray}
with $\Delta (t_f,p) = 0$, where $K$ and $S$ satisfy the matrix differential equations \eqref{eq:kRiccati} and \eqref{eq:s} with the terminal conditions $K(t_f,p)=0\in \mathbb{R}^{n\times n}$ and $S(t_f,p)=I\in \mathbb{R}^{n\times n}$, respectively. Differentiating \eqref{eq:Delta} with respect to $t$ yields
\begin{align*}
\dot{\Delta}(t,p) & = \dfrac{d}{dt}\Big(\dfrac{\partial \l}{\partial p}\Big) - \dot{K}\dfrac{\partial x}{\partial p} - K\dfrac{d}{dt}\Big(\dfrac{\partial x}{\partial p}\Big) - \dot{S}\dfrac{\partial \nu}{\partial p}(p) \\
& = -Q\dfrac{\partial x}{\partial p} - \tilde{A}^T\dfrac{\partial \l}{\partial p} - (-Q-K\tilde{A}-\tilde{A}^TK+K\tilde{B}R^{-1}\tilde{B}^TK)\dfrac{\partial x}{\partial p} \\
& - K \Big(\tilde{A}\dfrac{\partial x}{\partial p}-\tilde{B}R^{-1}\tilde{B}^T\dfrac{\partial \l}{\partial p}\Big) + [\tilde{A}^T-K^T\tilde{B}R^{-1}\tilde{B}^T]S\dfrac{\partial \nu}{\partial p}(p) \\
& = -[\tilde{A}^T-K^T\tilde{B}R^{-1}\tilde{B}^T]\Delta(t,p).
\end{align*}
This gives $\Delta(t,p) \equiv 0$, since $\Delta (t_f,p) = 0$, and hence guarantees, from \eqref{eq:Delta}, that $\dfrac{\partial \l}{\partial p}(t,p) = K(t,p)\dfrac{\partial x}{\partial p}(t,p)+S(t,p)\dfrac{\partial \nu}{\partial p}(p)$,
which we adopted to define \eqref{eq:lam}.
In order to fulfill the terminal condition $x(t_f,p)=x_f$ at time $t_f$, we introduce the auxiliary variables $O(t,p),~P(t,p)\in\mathbb{R}^{n\times n}$, and set
\begin{eqnarray}
\label{eq:end}
x_f = O(t,p)x(t,p)+P(t,p)\nu(p).
\end{eqnarray}
Clearly, at $t=t_f$, we need $O(t_f,p)=I$ and $P(t_f,p)=0$. Taking the time derivative on both sides of \eqref{eq:end}, we get $0=\dot{O}(t,p)x(t,p)+O(t,p)\dot{x}(t,p)+\dot{P}(t,p)\nu(p)$, which results in
\begin{align*}
\big\{\dot{O}(t,p)+O(t,p)[\tilde{A}-\tilde{B}R^{-1}\tilde{B}^TK(t,p)]\big\}x(t,p)+[\dot{P}(t,p)-O(t,p)\tilde{B}R^{-1}\tilde{B}^TS(t,p)]\nu(p)=0.
\end{align*}
Since this must hold for all $x$ and $\nu$, we have
\begin{align}
\label{eq:Odot}
\dot{O}(t,p)+O(t,p)[\tilde{A}-\tilde{B}R^{-1}\tilde{B}^TK(t,p)]=0 \quad &\text{ with}\quad O(t_f,p)=I, \\
\label{eq:Pdot}
\dot{P}(t,p)-O(t,p)\tilde{B}R^{-1}\tilde{B}^TS(t,p)=0 \quad &\text{ with}\quad P(t_f,p)=0.
\end{align}
Observe from \eqref{eq:Odot} that $O^T(t,p)$ satisfies, for all $p$, the same equation as $S(t)$ in \eqref{eq:s} with the same terminal condition $O^T(t_f,p)=I$; hence, $O(t,p)=S^T(t,p)$ for $t\in [0,t_f]$. Then, we can rewrite \eqref{eq:Pdot} as
\begin{align}
\dot{P}(t,p)-S^T(t,p)\tilde{B}R^{-1}\tilde{B}^TS(t,p)=0
\end{align}
for $t\in [0,t_f]$ with the terminal condition $P(t_f,p)=0$.
The multiplier associated with the terminal constraint can be expressed, by \eqref{eq:end}, as $\nu(p)=\big[P(t,p)\big]^{-1} \big[x_f-S^T(t,p)x(t,p)\big]$, provided $P(t,p)$ is invertible for $t\in[0,t_f]$. Since $\nu(p)$ is a constant for fixed $p$, this identity holds for all $t\in [0,t_f]$; plugging in $t=0$ expresses $\nu(p)$ in terms of the initial and terminal states:
\begin{equation}
\label{eq:nu_p}
\nu(p)=\big[P(0,p)\big]^{-1} \big[x_f-S^T(0,p)x_0\big].
\end{equation}
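As an illustration, the sweep above can be prototyped numerically: integrate $K$, $S$, and $P$ backwards from their terminal conditions, recover $\nu$ from \eqref{eq:nu_p}, and simulate the closed loop forward. The following Python sketch uses our own naming and a simple explicit Euler discretization; it is not taken from the paper.

```python
import numpy as np

def sweep_lqr(A, B, Q, R, x0, xf, tf, steps=2000):
    """Backward sweep for the fixed-endpoint LQR problem (illustrative sketch).

    Integrates K, S, P backwards from their terminal conditions with explicit
    Euler, recovers the terminal-constraint multiplier
    nu = P(0)^{-1} (xf - S(0)^T x0), and simulates the state forward under
    u = -R^{-1} B^T (K x + S nu).
    """
    n = A.shape[0]
    dt = tf / steps
    Rinv = np.linalg.inv(R)
    # Terminal conditions: K(tf) = 0, S(tf) = I, P(tf) = 0.
    K, S, P = np.zeros((n, n)), np.eye(n), np.zeros((n, n))
    Ks, Ss = [K.copy()], [S.copy()]
    for _ in range(steps):
        BRB = B @ Rinv @ B.T
        Kdot = -Q - K @ A - A.T @ K + K @ BRB @ K
        Sdot = -(A.T - K @ BRB) @ S
        Pdot = S.T @ BRB @ S
        K, S, P = K - dt * Kdot, S - dt * Sdot, P - dt * Pdot  # step backwards
        Ks.append(K.copy()); Ss.append(S.copy())
    Ks.reverse(); Ss.reverse()  # index i now corresponds to t = i * dt
    nu = np.linalg.solve(P, xf - S.T @ x0)  # P and S now hold their t = 0 values
    x = x0.astype(float).copy()
    for i in range(steps):  # forward closed-loop simulation
        lam = Ks[i] @ x + Ss[i] @ nu
        x = x + dt * (A @ x - B @ (Rinv @ (B.T @ lam)))
    return nu, x
```

For the scalar system $\dot{x}=u$ with $Q=0$, $R=1$, $t_f=1$, this recovers $\nu = x_0 - x_f$ and the straight-line transfer from $x_0$ to $x_f$, matching the analytic solution.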
\subsubsection{Controllability of the system in \eqref{eq:oc6} and non-singularity of the matrix $P$}
\label{appd:Pnonsingular}
The following two lemmas will illustrate that controllability of the system in \eqref{eq:oc6} guarantees non-singularity of the matrix $P$, so that $\nu(p)$ in \eqref{eq:nu_p} is well defined.
\begin{lemma}
\label{lem:Psingular}
The matrix $P(\tau,p)$ is singular if and only if there exists a nontrivial solution $\mu =\mu(t,p)$ of the linear adjoint equation
$\dot{\mu} = -\mu \dfrac{\partial f}{\partial x} (t,x(t,p),u(t,p))$
with a terminal value $\mu(t_f,p)$ perpendicular to the terminal manifold at $x(t_f,p)$ such that
$\mu (t,p)\dfrac{\partial f}{\partial u} (t,x(t,p),u(t,p)) \equiv 0$
on the interval $[\tau, t_f]$, where $\dot{x}(t,p) = f(t,x(t,p),u(t,p))$ \cite{Schaettler13}.
\end{lemma}
\begin{lemma}
\label{lem:Pcontrollable}
A time-varying linear system $\dot{x}=A(t)x+B(t)u$ is controllable over an interval $[\tau, t_f]$ if and only if for every nontrivial solution $\mu$ of the adjoint equation $\dot{\mu} = -\mu A(t)$, the function $\mu(t)B(t)$ does not vanish identically on the interval $[\tau, t_f]$. It is completely controllable if this holds for every subinterval of $[\tau, t_f]$ \cite{Schaettler13}.
\end{lemma}
By Lemma \ref{lem:Pcontrollable}, the linear system $\dot{x}=\tilde{A}(t)x+\tilde{B}(t)u$ in Problem \eqref{eq:oc6} is controllable over an interval $[t, t_f]$ if and only if for every nontrivial solution $\mu$ of the adjoint equation $\dot{\mu} = -\mu \tilde{A}(t)$, the function $\mu(t)\tilde{B}(t)$ does not vanish identically on $[t, t_f]$, which by Lemma \ref{lem:Psingular} is equivalent to the non-singularity of $P(t,p)$ for all $t \in [0, t_f]$. In particular, for $t=0$, the matrix $P(0,p)$ is invertible if and only if the linear system $\dot{x}=\tilde{A}(t)x+\tilde{B}(t)u$ in Problem \eqref{eq:oc6} is controllable over the interval $[0, t_f]$.
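For time-invariant pairs $(\tilde{A},\tilde{B})$, the adjoint-based criterion of Lemma \ref{lem:Pcontrollable} reduces to the familiar Kalman rank test, which is easy to check numerically. The following short sketch is our illustration, not part of the paper.

```python
import numpy as np

def controllable(A, B):
    """Kalman rank test for a time-invariant pair (A, B): the system is
    controllable iff [B, AB, ..., A^{n-1}B] has full row rank n.  For LTI
    systems this is equivalent to the adjoint-based criterion."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks)) == n
```

For example, the double integrator ($\dot x_1 = x_2$, $\dot x_2 = u$) passes the test, while a system with a decoupled unactuated mode fails it.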
\subsection{The $\b$-coefficients in Theorem \ref{thm:convergence}}
\label{appd:betas}
The time-varying coefficients $\b_i(\s)$, $i=1,\ldots,9$, in Theorem \ref{thm:convergence} are finite and described below, where $\sigma\in [0,t]$ for $0\leq t\leq t_f$:
\begin{align*}
\beta_1 & = \|(S^{(k)})^T(t)\| \|\Big[(S^{(k)})^T(\s)\Big]^{-1}\|\Big( \|K^{(k)}(\s)\| + \|K^{(k+1)}(\s)\| \Big) \|\Big[S^{(k+1)}(\s)\Big]^{-1} \| \|S^{(k+1)}(t)\|, \\
\beta_2 & = \|(S^{(k)})^T(t)\| \|\Big[(S^{(k)})^T(\s)\Big]^{-1}\| \|K^{(k)}(\s)\| \|K^{(k+1)}(\s)\| \|\Big[S^{(k+1)}(\s)\Big]^{-1} \| \|S^{(k+1)}(t)\|, \\
\beta_3 & = \|\Big[ (S^{(k+1)})^T(t)\Big] ^{-1}\| \|(S^{(k+1)})^T(\s)\| \|x^{(k)}(\sigma)\|, \\
\beta_4 & = \|\Big[ (S^{(k+1)})^T(t)\Big] ^{-1}\| \|(S^{(k+1)})^T(\s)\| \| \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T (\sigma) \| \|x^{(k)}(\sigma)\|, \\
\beta_5 & = \|\Big[ (S^{(k+1)})^T(t)\Big] ^{-1}\| \|(S^{(k+1)})^T(\s)\| \Big[ \|K^{(k+1)}(\sigma)\| \|x^{(k)}(\sigma)\| + \|S^{(k+1)}\nu^{(k+1)}\| \Big],
\end{align*}
\begin{align*}
\beta_6 & = \|\Big[ (S^{(k+1)})^T(t)\Big] ^{-1}\| \|(S^{(k+1)})^T(\s)\| \| \tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T (\sigma) \|, \\
\beta_7 & = \Big\{ \|(P^{(k+1)})^{-1}\|\Big[ (\|S^{(k+1)}\| + \|S^{(k)}\|)\|\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\s)\|\|\nu^{(k)}\|+\|x_0\| \Big] + \|\nu^{(k)}\| \Big\} \\
&\ \cdot \|\Big[ (S^{(k+1)})^T(t)\Big] ^{-1}\| \|(S^{(k+1)})^T(\s)\|\|(S^{(k)})^T(\s)\|, \\
\beta_8 & = \Big\{ \|(P^{(k+1)})^{-1}\|\Big[ (\|S^{(k+1)}\| + \|S^{(k)}\|)\|\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\s)\|\|\nu^{(k)}\|+\|x_0\| \Big] + \|\nu^{(k)}\| \Big\} \\
&\ \cdot \|\Big[ (S^{(k+1)})^T(t)\Big] ^{-1}\| \|(S^{(k+1)})^T(\s)\| \|K^{(k+1)}(\s)\| \|(S^{(k)})^T(\s)\| \\
&\ + \|(P^{(k+1)})^{-1}\|\|S^{(k+1)}(\s)\| \|S^{(k)}(\s)\|\|\nu^{(k)}\|, \\
\beta_9 & = \Big\{ \|(P^{(k+1)})^{-1}\|\Big[ (\|S^{(k+1)}\| + \|S^{(k)}\|)\|\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\s)\|\|\nu^{(k)}\|+\|x_0\| \Big] + \|\nu^{(k)}\| \Big\} \\
&\ \cdot \|\Big[ (S^{(k+1)})^T(t)\Big] ^{-1}\| \|(S^{(k+1)})^T(\s)\|\|\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\s)\| \|(S^{(k)})^T(\s)\|.
\end{align*}
\subsection{The entries of the matrix $M$ in \eqref{eq:contraction}}
\label{appd:M}
Let $p=\sqrt{\sum_{i=1}^n \|G_i \| ^2}$ and $q=\sqrt{\sum_{i,j=1}^n \|H_{ij} \| ^2}$, where $G_i$ and $H_{ij}$ are defined as in \eqref{eq:dA} and \eqref{eq:dB}. The entries of the matrix $M$ in \eqref{eq:contraction} satisfy the relations
\begin{align*}
m_{11} & \varpropto \beta_3 \Big[ (p+\|x^{(k-1)} \|q)\|K^{(k)} \| + q(\|K^{(k)}\|\|x^{(k)}\|+\|S^{(k)}\nu^{(k)}\|) \Big] + \beta_5 q \Big[\|x^{(k)}\|+\|x^{(k-1)}\| \Big], \\
m_{12} & \varpropto \beta_3 (p+\|x^{(k-1)} \|q)\|x^{(k-1)} \| + \beta_4, \\
m_{13} & \varpropto \beta_3 (p+\|x^{(k-1)} \|q) + \beta_6, \\
m_{21} & \varpropto \beta_1 \Big[ (p+\|x^{(k-1)} \|q)\|K^{(k)} \| + q(\|K^{(k)}\|\|x^{(k)}\|+\|S^{(k)}\nu^{(k)}\|) \Big] + \beta_2 q \Big[\|x^{(k)}\|+\|x^{(k-1)}\| \Big], \\
m_{22} & \varpropto \beta_1 (p+\|x^{(k-1)} \|q)\|x^{(k-1)} \|, \\
m_{23} & \varpropto \beta_1 (p+\|x^{(k-1)} \|q), \\
m_{31} & \varpropto \beta_7 \Big[ (p+\|x^{(k-1)} \|q)\|K^{(k)} \| + q(\|K^{(k)}\|\|x^{(k)}\|+\|S^{(k)}\nu^{(k)}\|) \Big] + \beta_8 q \Big[\|x^{(k)}\|+\|x^{(k-1)}\| \Big], \\
m_{32} & \varpropto \beta_7 (p+\|x^{(k-1)} \|q)\|x^{(k-1)} \| + \beta_9, \\
m_{33} & \varpropto \beta_7 (p+\|x^{(k-1)} \|q),
\end{align*}
where $\varpropto$ denotes proportionality.
Because both $p$ and $q$ are related to $R^{-1}$, they can be made sufficiently small by choosing $R$ large enough, for example, $R=\g I$ with $\g\gg 1$, where $I\in\mathbb{R}^{m\times m}$ is the identity matrix. In addition, for $m_{12}$, $m_{13}$, and $m_{32}$, the coefficients $\beta_4$, $\beta_6$, and $\beta_9$ defined in Appendix \ref{appd:betas} involve the factor $\|\tilde{B}^{(k-1)}R^{-1}(\tilde{B}^{(k-1)})^T(\s)\|$, $\s\in [0,t_f]$, which can also be made arbitrarily small by adjusting $R$, more specifically, by choosing $R$ with large eigenvalues. Thus, each entry of $M$ can be made sufficiently small through the choice of $R$.
\subsection{Singular value expansion for compact operators}
\label{appd:linear_ensemble}
\begin{theorem}[Singular value expansion \cite{Gohberg}]
\label{thm:singular}
Let $Y$ and $Z$ be Hilbert spaces, $K:Y\rightarrow Z$ be a compact operator and $\{(\sigma_n,\mu_n,\nu_n)\ |\ n\in\Delta\}$ be a singular system for $K$. Then
$$Ky=\sum_{n\in \Delta }\sigma_n \langle y,\mu_n \rangle \nu_n,\quad K^* z=\sum_{n\in \Delta } \sigma_n \langle z,\nu_n \rangle \mu_n,$$
for all $y\in Y$, $z\in Z$. In particular, if $K_n y=\sum_{j=1}^{n}\sigma_j \langle y,\mu_j \rangle \nu_j$ for $y\in Y$, and $K$ is of infinite rank, namely, $\Delta=\mathbb{N}$, then $\|K-K_n\|\leq\sup_{j>n}\sigma_j\rightarrow 0$ as $n\rightarrow\infty$.
\end{theorem}
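A finite-dimensional illustration of Theorem \ref{thm:singular}: for a matrix, the singular system is the SVD, and the error of the rank-$n$ truncation in the operator (spectral) norm equals the $(n+1)$-st singular value. The following numerical check is our addition.

```python
import numpy as np

# For a matrix K with SVD K = U diag(sigma) V^T, the rank-n truncation
# K_n = sum_{j<=n} sigma_j u_j v_j^T satisfies ||K - K_n||_2 = sigma_{n+1}.
rng = np.random.default_rng(0)
K = rng.standard_normal((8, 6))
U, sigma, Vt = np.linalg.svd(K, full_matrices=False)
n = 3
Kn = (U[:, :n] * sigma[:n]) @ Vt[:n, :]   # rank-n truncated expansion
err = np.linalg.norm(K - Kn, 2)           # spectral-norm truncation error
assert np.isclose(err, sigma[n])          # equals the (n+1)-st singular value
```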
\section{Bibliography}
\end{document} |
\begin{document}
\title{Scheduling with Complete Multipartite Incompatibility Graph on Parallel Machines
{
\thanks{This work was supported by Polish National Science Center 2018/31/B/ST6/01294 grant and Gda\'nsk University of Technology, grant no. POWR.03.02.00-IP.08-00-DOK/16.}
}}
\author{\IEEEauthorblockN{Tytus Pikies}
\IEEEauthorblockA{\textit{Dept. of Algorithms and System Modeling} \\
\textit{Gda\'nsk University of Technology}\\
Gda\'nsk, Poland \\
[email protected]}
\and \IEEEauthorblockN{Krzysztof~Turowski}
\IEEEauthorblockA{\textit{Theoretical Computer Science Dept.} \\
\textit{Jagiellonian University}\\
Krak\'ow, Poland \\
[email protected]}
\and \IEEEauthorblockN{Marek Kubale}
\IEEEauthorblockA{\textit{Dept. of Algorithms and System Modeling} \\
\textit{Gda\'nsk University of Technology}\\
Gda\'nsk, Poland \\
[email protected]}
}
\maketitle
\begin{abstract}
In this paper we consider the problem of scheduling on parallel machines with a presence of incompatibilities between jobs.
The incompatibility relation can be modeled as a complete multipartite graph in which each edge denotes a pair of jobs that cannot be scheduled on the same machine.
Our research stems from the work of Bodlaender et al. \cite{bodlaender1994scheduling,BodlaenderJOnTheComplexity1993}.
In particular, we pursue the line investigated partially by Mallek et al. \cite{mallek2019}, where the graph is complete multipartite so each machine can do jobs only from one partition.
We provide several results concerning schedules, optimal or approximate with respect to the two most popular criteria of optimality: $C_{max}$ (makespan) and $\sum C_j$ (total completion time).
We consider a variety of machine types in our paper: identical, uniform and unrelated.
Our results delimit the easy (polynomial) and \textnormal{\sffamily NP-hard}\xspace problems within these constraints.
We also provide algorithms, either polynomial exact algorithms for easy problems, or algorithms with a guaranteed constant worst-case approximation ratio or even in some cases a \textnormal{\sffamily PTAS}\xspace for the harder ones.
In particular, we fill the gap on research for the problem of finding a schedule with the smallest $\sum C_j$ on uniform machines.
We address this problem by developing a linear programming relaxation technique with an appropriate rounding, which to our knowledge is a novelty for this criterion in the considered setting.
\end{abstract}
\begin{IEEEkeywords}
job scheduling, uniform machines, makespan, total completion time, approximation schemes, NP-hardness, incompatibility graph
\end{IEEEkeywords}
\section{Introduction}
\subsection{An example application}
Imagine that we are treating some people ill with contagious diseases.
There are quarantine units containing people ill with a particular disease waiting to receive some medical services.
We also have a set of nurses.
We would like the nurses to perform the services in a way that no nurse will travel between different quarantine units, to avoid spreading of the diseases.
Also, we would like to provide to each patient the required services, which correspond to the time to be spent by a nurse.
Consider two sample goals:
The first might be to lift the quarantine in all units as fast as possible.
The second might be to minimize the average time of a patient waiting and treatment.
The problem can be easily modeled as a scheduling problem in our model.
The jobs are the medical services to be performed.
The division of jobs into partitions of the incompatibility graph is the division of the tasks into the quarantine units.
The machines are the nurses.
The sample goals correspond to $C_{max}$ and $\sum C_j$ criteria, respectively.
This is only a single example of an application of \emph{scheduling with incompatibility graph on parallel machines}.
\subsection{Notation and the problems description}
We follow the notation and definitions from \cite{brucker1999scheduling}, with necessary extensions.
Let the set of jobs be $J = \{j_1, \ldots, j_n\}$ and the set of machines be $M = \{m_1, \ldots, m_m\}$.
We denote the processing requirements of the jobs $j_1, \ldots, j_n$ as $p_1, \ldots, p_n$.
Now let us define a function $p: J \times M \rightarrow \mathbb{N}$, which assigns a time needed to process a given job for a given machine.
We distinguish three main types of machines, in the ascending order of generality:
\begin{itemize}
\item \emph{identical} -- when $p(j_i, m) = p_i$ for all $j_i \in J$, $m \in M$,
\item \emph{uniform} -- when there exists a function $s: M \rightarrow \mathbb{Q}_+$, in this case $p(j_i, m) = \frac{p_i}{s(m)}$ for any $j_i \in J$, $m \in M$,
\item \emph{unrelated} -- when there exists $s: J \times M \rightarrow \mathbb{Q}_+$, which assigns
$p(j_i, m) = \frac{p_i}{s(j_i, m)}$ for any $j_i \in J$, $m \in M$.
\end{itemize}
The incompatibilities between jobs form a relation that can be represented as a simple graph $G = (J, E)$, where $J$ is the set of jobs, and $\{j_1, j_2\}$ belongs to $E$ iff $j_1$ and $j_2$ are incompatible.
In this paper we consider complete multipartite graphs, i.e. graphs whose vertex set may be split into disjoint independent sets $J_1, \ldots, J_k$ (called \emph{partitions} of the graph), such that every two vertices in different partitions are joined by an edge.
Due to the fact that the structure is simple, we omit the edges and we identify the graph with the partition of the jobs.
We differentiate between the cases when the number of the partitions is fixed, and when it is not the case.
In the first case we denote the graph as $G = \completekpartite{k}$, and in the second as $G = \textit{complete multipartite}$.
A schedule $S$ is an assignment of jobs to machines and starting times.
Hence if $S(j) = (m, t)$, then $j$ is executed on the machine $m$ in the time interval $[t, t + p(j, m))$ and $t + p(j, m) = C_j$ is the completion time of $j$ in $S$.
No two jobs may be executed at the same time on any machine.
Moreover, no two jobs which are connected by an edge in the incompatibility graph may be scheduled on the same machine.
By $C_{max}(S)$ we denote maximum $C_j$ in $S$ over all jobs.
By $\sum C_j(S)$ we denote sum of completion times of jobs in $S$.
These are two criteria of optimality of a schedule commonly considered in the literature.
Note that in both cases we are interested in finding or approximating the minimum value of respective measure.
Interestingly, an assignment of jobs to machines is sufficient to determine the values of these measures in any reasonable schedule consistent with this assignment.
By reasonable we mean a schedule in which there are no unnecessary delays between processed jobs and, in the case of the $\sum C_j$ criterion, the jobs forming the load on each machine are in the optimal order given by Smith's Rule \cite{SmithRule}.
Given such an assignment, the ordering of the jobs either has no impact on the $C_{max}$ criterion, or is in a sense determined in the case of the $\sum C_j$ criterion.
We use the well-known three-field notation of \cite{lawler1982recent}.
We are interested in problems described by $\alpha|\beta|\gamma$, where
\begin{itemize}
\item $\alpha$ is $P$ (identical machines), $Q$ (uniform machines) or $R$ (unrelated machines),
\item $\beta$ contains either $G = \textit{complete multipartite}$ or $G = \completekpartite{k}$, or some additional constraints, e.g. $p_j = 1$ (unit jobs only),
\item $\gamma$ is either $C_{max}$ or $\sum C_j$.
\end{itemize}
\subsection{An overview of previous work}
We recall that $P||C_{max}$ is \textnormal{\sffamily NP-hard}\xspace even for two machines \cite{gareyJ1979computers}.
However, $Q||C_{max}$ (and therefore $P||C_{max}$ as well) does admit a \textnormal{\sffamily PTAS}\xspace \cite{hochbaum1988}.
Moreover, $R_{m}||C_{max}$ admits a \textnormal{\sffamily FPTAS}\xspace \cite{horowitz1976}.
There is a $(2 - \frac{1}{m})$-approximation algorithm for $R||C_{max}$ \cite{shchepin2005optimal}; however, there is no polynomial algorithm with approximation ratio better than $\frac{3}{2}$, unless $\textnormal{\sffamily P}\xspace = \textnormal{\sffamily NP}\xspace$ \cite{lenstra1990approximation}.
On the other hand, $Q|p_j = 1|C_{max}$ and $Q||\sum C_j$ (with $P|p_j = 1|C_{max}$ and $P||\sum C_j$ as their special cases) can be solved in $\textnormal{O}(\min\{n + m \log{m}, n \log{m}\})$ \cite{dessouky1990scheduling} and $\textnormal{O}(n \log{n})$ \cite[p. 133--134]{brucker1999scheduling} time, respectively.
$R||\sum C_j$ can be regarded as a special case of an assignment problem \cite{bruno1974scheduling}, which can be solved in polynomial time.
The problem of scheduling with incompatible jobs for identical machines was introduced by Bodlaender et al. in \cite{bodlaender1994scheduling}.
They provided a series of polynomial time approximation algorithms for $P|G = \kcolorable{k}|C_{max}$.
For bipartite graphs they showed that $P|G = \textit{bipartite}|C_{max}$ has a polynomial $2$-approximation algorithm, and this approximation ratio is the best possible unless $\textnormal{\sffamily P}\xspace = \textnormal{\sffamily NP}\xspace$.
They also proved that there exists an \textnormal{\sffamily FPTAS}\xspace in the case when the number of machines is fixed and $G$ has constant treewidth.
The special case $P|G, p_j = 1|C_{max}$ was treated extensively in the literature under the name \textnormal{\sffamily Bounded Independent Sets}\xspace: for given $m$ and $t$, determine whether $G$ can be partitioned into at most $t$ independent sets with at most $m$ vertices in each.
More generally, $P|G|C_{max}$ is equivalent to a weighted version of \textnormal{\sffamily Bounded Independent Sets}\xspace.
We note also that $P|G, p_j = 1|C_{max}$ is closely tied to \textnormal{\sffamily Mutual Exclusion Scheduling}\xspace, where we are looking for a schedule in which no two jobs connected by an edge in $G$ are executed at the same time.
For the unrestricted number of machines it is the case that $P|G, p_j = 1|C_{max}$ has a polynomial algorithm for a certain class of graphs $G$ if and only if \textnormal{\sffamily Mutual Exclusion Scheduling}\xspace has a polynomial algorithm for the same class of graphs.
Taking all this into account, there are known polynomial algorithms for solving $P|G, p_j = 1|C_{max}$ when $G$ is restricted to the following classes: forests \cite{baker1996mutual}, split graphs \cite{lonc1991complexity}, complements of bipartite graphs and complements of interval graphs \cite{bodlaender1995restrictions}.
However, the problem remains \textnormal{\sffamily NP-hard}\xspace when $G$ is restricted to bipartite graphs (even for $3$ machines), interval graphs and cographs \cite{bodlaender1995restrictions}.
Recently another line of research was established for $G$ equal to a collection of cliques (bags) in \cite{das2017minimizing}.
The authors considered $C_{max}$ criterion and presented a \textnormal{\sffamily PTAS}\xspace for identical machines together with $(\log{n})^{1/4 - \epsilon}$-inapproximability result for unrelated machines.
They also provided an $8$-approximate algorithm for unrelated machines with additional constraints.
This approach was further pursued in \cite{grage2019eptas}, where an \textnormal{\sffamily EPTAS}\xspace for identical machines case was presented.
The last result is a construction of \textnormal{\sffamily PTAS}\xspace for uniform machines with some additional restrictions on machine speeds and bag sizes \cite{page2020makespan}.
Unfortunately, the case of complete multipartite incompatibility graph was not studied so extensively.
It may be inferred from \cite{bodlaender1994scheduling} that for $P|G = \textit{complete multipartite}|C_{max}$ there exists a \textnormal{\sffamily PTAS}\xspace, which can be easily extended to $\textnormal{\sffamily EPTAS}\xspace$; and that there is a polynomial time algorithm for $P|G = \textit{complete multipartite}, p_j = 1|C_{max}$.
In the case of uniform machines, Mallek et al. \cite{mallek2019} proved that $Q|G = \completekpartite{2}, p_j = 1|C_{max}$ is \textnormal{\sffamily NP-hard}\xspace, but it may be solved in $\textnormal{O}(n)$ time when the number of machines is fixed.
Moreover, they gave an $\textnormal{O}(m n + m^2 \log{m})$ algorithm for the particular case $Q|G = \textit{star}, p_j = 1|C_{max}$.
However, their result implicitly assumed that the number of jobs $n$ is encoded in binary on $\log{n}$ bits (thus making the size of schedules exponential in terms of the input size), not -- as is customarily assumed -- in unary.
In this paper we provide several results: first, we prove that $P|G = \textit{complete multipartite}|\sum C_j$, unlike its $C_{max}$ counterpart, can be solved in polynomial time.
Next, we show that $Q|G = \textit{complete multipartite}, p_j =1|\sum C_j$ is \textnormal{\sffamily Strongly NP-hard}\xspace and that the same holds for the $C_{max}$ criterion.
However, it turns out that both $Q|G = \completekpartite{k}, p_j=1|\sum C_j$ and $Q|G = \completekpartite{k}, p_j = 1|C_{max}$ admit polynomial time algorithms.
Also, we propose $2$-approximation and $4$-approximation algorithms for $Q|G = \textit{complete multipartite},p_j = 1|C_{max}$ and $Q|G = \textit{complete multipartite},p_j = 1|\sum C_j$, respectively.
The first of our two main results is a $4$-approximate algorithm for $Q|G = \completekpartite{k}|\sum C_j$, based on a linear programming technique.
The second one is a \textnormal{\sffamily PTAS}\xspace for $Q|G = \textit{complete multipartite}, p_j=1|C_{max}$.
We conclude by showing that the solutions for $R|G|C_{max}$, or for $R|G|\sum C_j$, cannot be approximated within any fixed constant, even when $G = \completekpartite{2}$ and there are only two processing times.
\section{Identical machines}
We recall that $P|G = \textit{complete multipartite}|C_{max}$ is \textnormal{\sffamily NP-hard}\xspace as a generalization of $P||C_{max}$, but it admits an \textnormal{\sffamily EPTAS}\xspace \cite{bodlaender1994scheduling}.
Focusing our attention on $\sum C_j$ let us define what we mean by a \emph{greedy assignment} of machines to partitions:
\begin{enumerate}
\item assign to each partition a single machine,
\item assign remaining machines one by one to the partitions in a way that it decreases $\sum C_j$ as much as possible.
\end{enumerate}
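For concreteness, the greedy method can be sketched as follows, using the closed-form optimal $\sum C_j$ on $i$ identical machines (sort the jobs nonincreasingly; the $r$-th job receives multiplier $\lceil r/i \rceil$). The code and its names are our illustration, not part of the paper.

```python
import math

def opt_sum_cj(jobs, i):
    """Optimal sum of completion times for `jobs` on i identical machines:
    sort nonincreasingly; the r-th job (1-based) gets multiplier ceil(r/i)."""
    jobs = sorted(jobs, reverse=True)
    return sum(p * math.ceil(r / i) for r, p in enumerate(jobs, 1))

def greedy_schedule(partitions, m):
    """Greedy machine allocation for P|G=complete multipartite|sum C_j:
    each partition first gets one machine; every remaining machine goes to
    the partition where it decreases sum C_j the most."""
    k = len(partitions)
    assert m >= k, "need at least one machine per partition"
    alloc = [1] * k
    for _ in range(m - k):
        gains = [opt_sum_cj(partitions[i], alloc[i])
                 - opt_sum_cj(partitions[i], alloc[i] + 1) for i in range(k)]
        alloc[gains.index(max(gains))] += 1
    return alloc, sum(opt_sum_cj(partitions[i], alloc[i]) for i in range(k))
```

For instance, with partitions $\{3,2\}$ and $\{5\}$ and $m=3$ machines, the extra machine goes to the first partition (marginal gain $7-5=2$ versus $0$), giving total $\sum C_j = 10$.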
To see why this approach works we need the following lemma.
\begin{lemma}
\label{lemma:nonincreasing_returns}
For any set of jobs $J$, let $S_i$ be an optimal schedule for the instance of $P_i||\sum C_j$ (i.e., on $i$ identical machines) determined by $J$.
Then $\sum C_j(S_1) - \sum C_j(S_2) \ge \sum C_j(S_2) - \sum C_j(S_3) \ge \ldots \ge \sum C_j(S_{m-1}) - \sum C_j(S_{m})$.
\end{lemma}
\begin{proof}
Assume for simplicity that $n$ is divisible by $i(i+1)(i+2)$.
If this is not the case, then we add dummy jobs with $p_j = 0$; obviously, this does not increase $\sum C_j$.
Fix the ordering of jobs with respect to nonincreasing processing times.
Now we may associate with each job its multiplier corresponding to the position in the reversed order on its machine.
If a job $j_i$ has a multiplier $l$, then it contributes $l p_i$ to $\sum C_j$, and it is scheduled as the $l$-th last job on a machine.
Now think of the multipliers in the terms of blocks of size $i+1$.
For $S_i$ the multipliers with respect to job order are:
\begin{gather*}
\underbrace{1,\ldots, 1, 1, 2}_{\text{The first block}} ; \underbrace{2, \ldots, 2, 3, 3}_{\text{The second block}} ; \ldots ; \underbrace{i, \ldots, i+1, i+1, i+1}_{\text{The (i)-th block}} ; \ldots
\end{gather*}
For $S_{i + 1}$ the multipliers are:
\begin{gather*}
\underbrace{1,\ldots, 1, 1, 1}_{\text{The first block}} ; \underbrace{2, \ldots, 2, 2, 2}_{\text{The second block}} ; \ldots ; \underbrace{i, \ldots, i, i, i}_{\text{The (i)-th block}} ; \ldots
\end{gather*}
For $S_{i + 2}$ the multipliers are:
\begin{gather*}
\underbrace{1, 1, 1, \ldots, 1}_{\text{The first block}} ; \underbrace{1, 2, 2, \ldots, 2}_{\text{The second block}} ; \ldots ; \underbrace{i-1, \ldots, i-1, i, i}_{\text{The (i)-th block}} ; \ldots
\end{gather*}
Also, let the sum of multipliers of the $k$-th block in $S_{i}$ be $s_k^i$.
By some algebraic manipulations we prove that
\begin{align*}
s_k^i & = (i+1)k + k + \floor*{(k-1) / i}, \\
s_k^{i+1} & = (i+1)k, \\
s_k^{i+2} & = (i+1)k - k + \floor*{(k + i + 1) / (i+2)}.
\end{align*}
It follows directly that $s_{k-1}^i - s_{k-1}^{i+1} \ge s_{k}^{i+1} - s_{k}^{i+2}$, for $k \ge 2$.
The smallest processing time in the $k$-th block is at least $p_{(i+1)k} \ge p_{(i+1)k + 1}$, therefore the contribution of the $k$-th block to $\sum C_j(S_{i}) - \sum C_j(S_{i+1})$ is at least $p_{(i+1)k + 1} (s_k^i - s_k^{i+1})$.
Similarly, the largest processing time in the $(k+1)$-th block is at most $p_{(i+1)k + 1}$, so the contribution of the $(k+1)$-th block to $\sum C_j(S_{i+1}) - \sum C_j(S_{i+2})$ is at most $p_{(i+1)k+1} (s_{k+1}^{i+1} - s_{k+1}^{i+2})$.
Thus, by the inequality above, the contribution of the $(k + 1)$-th block to $\sum C_j(S_{i+1}) - \sum C_j(S_{i+2})$ is at most the contribution of the $k$-th block to $\sum C_j(S_i) - \sum C_j(S_{i+1})$, for all $k \ge 1$.
Also, the first block does not contribute to $\sum C_j(S_{i+1}) - \sum C_j(S_{i+2})$, which proves the lemma.
\end{proof}
\begin{corollary}
For a given instance of the problem \break $P|G = \textit{complete multipartite}|\sum C_j$ a schedule constructed by the greedy method has optimal $\sum C_j$.
\end{corollary}
\begin{proof}
Let $S_{alg}$ and $S_{opt}$ be the greedy and optimal schedules, respectively.
If the numbers of machines assigned to each of the partitions are equal in $S_{alg}$ and $S_{opt}$, then the theorem obviously holds.
Assume that there is a partition $J_i$ that has more machines assigned in $S_{opt}$ than in $S_{alg}$.
It means that there is also a partition $J_j$ that has fewer machines assigned in $S_{opt}$ than in $S_{alg}$.
Let us construct a new schedule from $S_{opt}$ by assigning one more machine to $J_j$ and one fewer to $J_i$.
By \cref{lemma:nonincreasing_returns}, combined with the fact that the greedy method always assigns a machine where the marginal decrease of $\sum C_j$ is largest, the decrease of $\sum C_j$ on partition $J_j$ is no less than the increase on partition $J_i$.
Repeating this exchange transforms $S_{opt}$ into $S_{alg}$ without increasing $\sum C_j$; hence, the claim follows.
\end{proof}
\section{Uniform machines}
It turns out that for an arbitrary number of partitions the problem is hard, even when all jobs have equal length:
\begin{theorem}
\label{theorem:nphcomplete}
$Q|G = \textit{complete multipartite}, p_j=1|\sum C_j$ is $\textnormal{\sffamily Strongly NP-hard}\xspace$.
\end{theorem}
\begin{proof}
We proceed by reducing \textnormal{\sffamily Strongly NP-complete}\xspace \textnormal{\sffamily 3-Partition}\xspace \cite{gareyJ1979computers} to our problem.
Recall that an instance of \textnormal{\sffamily 3-Partition}\xspace is $(A, b, s)$, where $A$ is a set of $3 m$ elements, $b$ is a bound value, and $s$ is a size function such that for each $a \in A$, $\frac{b}{4} < s(a) < \frac{b}{2}$ and $\sum_{a \in A} s(a) = m b$.
The question is whether $A$ can be partitioned into disjoint sets $A_1, \ldots, A_m$, such that $\forall_{1 \le i \le m} \sum_{a \in A_i} s(a) = b$.
For any $(A, b, s)$ we let $G = (J_1 \cup \ldots \cup J_m, E) = \completekpartite{m}$, where $|J_i| = b$ for all $i = 1, 2, \ldots, m$.
Moreover, let $M = \{m_1, \ldots, m_{3 m}\}$ with speeds $s(m_i) = s(a_i)$.
Finally, let the limit value be $\sum C_j = \frac{m (b + 3)}{2}$.
Suppose now that an instance $(A, b, s)$ admits a $3$-partition and let the sets be $A_1$, \ldots, $A_m$.
Then if $a_i \in A_j$, we assign exactly $s(a_i)$ jobs from $J_j$ to the machine $m_i$.
Since for every $i$ it holds that $\sum_{a \in A_i} s(a) = b$, we know that all jobs are assigned.
Moreover, we never violate the incompatibility graph conditions, as we assign to any machine only jobs from a single partition.
By assigning $s(a_i)$ jobs to a machine $m_i$ we ensure that
\begin{align*}
\sum C_j = \sum_{i = 1}^{3 m} \frac{\binom{s(a_i) + 1}{2}}{s(m_i)} = \sum_{i = 1}^{3 m} \frac{s(a_i) + 1}{2} = \frac{m b + 3 m}{2} = \frac{m (b + 3)}{2}.
\end{align*}
Conversely, suppose that we find a schedule $S$ with $\sum C_j \le \frac{m (b + 3)}{2}$.
Now let $l_i$ be the number of jobs assigned to $m_i$.
Let us consider the following quantity:
\begin{align*}
X & := \sum_{i = 1}^{3 m} \binom{l_i + 1}{2} \frac{1}{s(m_i)} - \frac{m (b + 3)}{2} \\
& = \frac{1}{2} \sum_{i = 1}^{3 m} \frac{l_i + s(m_i) + 1}{s(m_i)} (l_i - s(m_i)).
\end{align*}
$X$ is the difference between $\sum C_j (S)$ and the $\sum C_j$ of a schedule in which each machine $m_i$ is assigned $s(m_i)$ jobs.
Now, we note that $\sum_{i = 1}^{3 m} (l_i - s(m_i)) = 0$ as every job is assigned somewhere. Moreover,
\begin{align*}
l_i + s(m_i) + 1 > 2 s(m_i) & \text{\quad if $l_i \ge s(m_i)$,} \\
l_i + s(m_i) + 1 \le 2 s(m_i) & \text{\quad if $l_i < s(m_i)$.}
\end{align*}
By combining the last two facts we note that: every element $l_i - s(m_i) \ge 0$ in $X$ gets multiplied by some number greater than $2$, and every $l_i - s(m_i) < 0$ gets multiplied by some number not greater than $2$.
Therefore $\sum_{i = 1}^m (l_i - s(m_i)) = 0$ implies $X \ge 0$.
Moreover, if there exists any element, such that $l_i - s(m_i) > 0$, then $X > 0$.
However, a schedule with $\sum C_j \le \frac{m (b + 3)}{2}$ satisfies $X \le 0$, therefore it holds that $X = 0$ and $l_i = s(m_i)$ for all machines.
Each machine has jobs from exactly one partition assigned.
Let $M_j$ be the set of machines on which the jobs from $J_j$ are executed.
By the previous argument a machine $m_i$ has exactly $s(m_i)$ jobs assigned in $S$.
By this and the bounds $\frac{b}{4} < s(a) < \frac{b}{2}$ for $a \in A$, we have $|M_j| = 3$.
Since the correctness of the schedule guarantees that all jobs are covered by some machines, we know that the division into $M_1$, $M_2$, \ldots, $M_m$ corresponds to a partition.
\end{proof}
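The sign argument above can be checked numerically. The following Python sketch (with arbitrarily chosen illustrative speeds) brute-forces every load vector with $\sum l_i = \sum s(m_i)$ and confirms that $X \ge 0$, with equality exactly when $l_i = s(m_i)$ for all $i$:

```python
from itertools import product
from math import comb

def X(loads, speeds):
    """X from the proof: sum C_j of a schedule with l_i unit jobs on the
    machine of speed s(m_i), minus the cost of the balanced schedule."""
    return sum(comb(l + 1, 2) / s for l, s in zip(loads, speeds)) \
         - sum(comb(s + 1, 2) / s for s in speeds)

speeds = [2, 3, 4]                     # arbitrary illustrative speeds
total = sum(speeds)
# Enumerate every distribution of `total` unit jobs over the machines.
for loads in product(range(total + 1), repeat=len(speeds)):
    if sum(loads) != total:
        continue
    if list(loads) == speeds:
        assert X(loads, speeds) == 0   # X = 0 exactly when l_i = s(m_i)
    else:
        assert X(loads, speeds) > 0
```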
\begin{theorem}
$Q|G = \textit{complete multipartite}, p_j=1|C_{max}$ is $\textnormal{\sffamily Strongly NP-hard}\xspace$.
\end{theorem}
\begin{proof}
The proof is almost identical to that of \cref{theorem:nphcomplete}, only using $C_{max}$ as the criterion and $1$ as the bound.
\end{proof}
When the number of partitions is fixed, we show that there are polynomial algorithms for solving the respective problems.
\begin{theorem}
There exists a $\textnormal{O}(m n^{k + 1} \log(m n))$ algorithm for $Q|G = \completekpartite{k}, p_j=1|C_{max}$.
\end{theorem}
\begin{proof}
We adopt the Hochbaum-Shmoys framework \cite{hochbaum1988}, i.e. we guess $C_{max}$ of a schedule and check whether this is a feasible value.
There are at most $\textnormal{O}(m n)$ candidate values of $C_{max}$, since the optimum is determined by the number of jobs loaded on a single machine; using binary search, only $\textnormal{O}(\log(m n))$ of them have to be checked.
Now assume that we check a single possible value of $C_{max}$.
Fix any ordering of the machines.
We store whether there is a feasible assignment of the first $0 \le l \le m$ machines that leaves $a_i$ unassigned jobs in partition $J_i$.
In each step there is thus a set of at most $\textnormal{O}(n^k)$ tuples $(a_1, \ldots, a_k)$ describing the remaining jobs.
We start with $(|J_1|, \ldots, |J_k|)$ -- if no machines are used, all jobs are unassigned.
For $l \ge 1$ we take any tuple $(a_1, \ldots, a_k)$ from the previous iteration.
We try all possible assignments of $m_l$ to the partitions, then we try all feasible assignments of the remaining jobs to the machine.
This produces an updated tuple, determining some feasible assignment of the first $l$ machines.
For each $(a_1, \ldots, a_k)$ we construct at most $kn$ updated tuples, each in time $\textnormal{O}(1)$, so the work is bounded by $\textnormal{O}(k n^{k + 1})$.
Note that we need to store only one copy of each distinct $(a_1, \ldots, a_k)$.
After considering $m$ machines it is sufficient to check if the tuple $(0, 0, \ldots, 0)$ is feasible.
Clearly the total running time for a single guess of $C_{max}$ is $\textnormal{O}(mkn^{k + 1})$.
\end{proof}
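The feasibility test from the proof can be sketched as follows; `feasible` is a hypothetical helper name, and we assume unit jobs with integer speeds and makespan guess for simplicity:

```python
def feasible(part_sizes, speeds, T):
    """Sketch of the feasibility test for a guessed makespan T (unit jobs).
    A state is the tuple of still-unassigned jobs per partition; a machine
    of speed s may run up to floor(s*T) jobs, all from a single partition."""
    states = {tuple(part_sizes)}
    for s in speeds:
        cap = int(s * T)                   # capacity within time T
        nxt = set(states)                  # the machine may stay idle
        for st in states:
            for i, a in enumerate(st):
                for take in range(1, min(cap, a) + 1):
                    new = list(st)
                    new[i] = a - take
                    nxt.add(tuple(new))
        states = nxt
    return tuple([0] * len(part_sizes)) in states
```

A guess $T$ is accepted exactly when the all-zero tuple is reachable after processing all machines.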
\begin{theorem}
There exists a $\textnormal{O}(m n^{k + 1})$ algorithm for $Q|G = \completekpartite{k}|\sum C_j$.
\end{theorem}
\begin{proof}
Let us process machines in any fixed ordering.
Let the state of a partial assignment be identified by a tuple $(a_1, \ldots, a_k, c)$, where $a_i$ denotes the number of jobs of $J_i$ remaining to be scheduled and $c$ denotes $\sum C_j$ of the jobs scheduled so far.
Assume that two partial assignments $P_1$ and $P_2$ on the first $m'$ machines are described by the same tuple $(a_1, \ldots, a_k)$ and by two values $c_1$ and $c_2$, respectively.
If $c_2 \ge c_1$, then any extension of $P_2$ to the first $m'' > m'$ machines cannot be better than the exact same extension of $P_1$.
Therefore, for any $(a_1, \ldots, a_k)$ it is sufficient to store only the tuple $(a_1, \ldots, a_k, c)$ with the smallest $c$.
We may proceed with a dynamic programming similar to the one used for $C_{max}$.
That is, we start with a single state $(|J_1|, \ldots, |J_k|, 0)$.
In the $l$-th step ($l = 1, \ldots, m$) we take states $(a_1, \ldots, a_k, c)$, corresponding to feasible assignments for the first $l - 1$ machines.
We try all $k$ possible assignments of $m_l$ to partitions.
If $m_l$ is assigned to $J_i$, for some $i$, then we try all choices of the number $n' \in \{0, \ldots, a_i\}$ of remaining jobs from $J_i$.
Such a choice together with the assignment of $m_l$ determines an assignment of $n'$ unassigned jobs to $m_l$.
If the constructed tuple $(a_1', \ldots, a_k', c')$ has a value $c'$ no smaller than that of an already stored tuple with the same $(a_1', \ldots, a_k')$, we do not store it.
Finally, after considering all machines we obtain exactly one tuple of the form $(0, \ldots, 0, c)$, which determines the optimal schedule.
At each step there are at most $n^k$ states since for every $(a_1, \ldots, a_k)$ we store only the smallest $c$.
There are up to $n$ possible new assignments generated from each state. Each try requires $\textnormal{O}(k)$ time.
Therefore, it is clear that for any fixed $k$ the time complexity of the algorithm is $\textnormal{O}(m n^{k + 1})$.
\end{proof}
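The dynamic program above can be sketched as follows, assuming unit processing times for simplicity (with unit jobs, $n'$ jobs on a machine of speed $s$ contribute $\binom{n'+1}{2}/s$ to $\sum C_j$); the function name `min_sum_cj` is ours:

```python
from math import comb

def min_sum_cj(part_sizes, speeds):
    """DP sketch with unit jobs: a state maps the tuple of unassigned jobs
    per partition to the smallest total completion time achieving it."""
    best = {tuple(part_sizes): 0.0}
    for s in speeds:
        nxt = dict(best)                     # the machine may stay idle
        for st, c in best.items():
            for i, a in enumerate(st):
                for take in range(1, a + 1):
                    new = list(st)
                    new[i] = a - take
                    key = tuple(new)
                    cost = c + comb(take + 1, 2) / s
                    if cost < nxt.get(key, float("inf")):
                        nxt[key] = cost      # keep only the smallest c
        best = nxt
    return best.get(tuple([0] * len(part_sizes)))
```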
\begin{lemma}
\label{lemma:GeometricSumForSigma}
Let $J$ be any set of jobs.
Let $M$ be a set of uniform machines with respective speeds $s(m_1) = s(m_2) = 2$, $s(m_i) = 2^{i - 1}$ for $3 \le i \le m$.
Then it holds that $\sum C_j$ of an optimal schedule of $J$ on $M$ is at least as large as the optimal $\sum C_j$ for $J$ on a single machine with speed $2^m$.
\end{lemma}
\begin{proof}
It is sufficient to observe that by replacing two machines $m'$ and $m''$ with $s(m') = s(m'') = 2^i$ with one machine $m$ with $s(m) = 2^{i + 1}$ we never increase the total completion time of the optimal schedule.
Without loss of generality assume that there are exactly $k$ jobs assigned to each of $m'$ and $m''$, as we may always pad the schedules with jobs with $p_j = 0$ at the beginning.
The contribution of $m'$ and $m''$ to the total completion time is equal to
$
\sum C_j(m', m'') = \sum_{j = 1}^{k} \frac{j p'_j}{2^i} + \sum_{j = 1}^{k} \frac{j p''_j}{2^i}
$
where $p'_j$ and $p''_j$ are the $j$-th last jobs scheduled on $m'$ and $m''$, respectively.
Now consider the set of machines in which $m$ replaces $m'$ and $m''$, together with the schedule in which all jobs from $m'$ and $m''$ are scheduled on $m$ in an interleaved manner: the first job is the first one from $m'$, the second is the first one from $m''$, and so on.
Then the contribution of $m$ to the total completion time is equal to
$
\sum C_j(m) = \sum_{j = 1}^{k} \frac{(2 j - 1) p'_j}{2^{i + 1}} + \sum_{j = 1}^{k} \frac{2 j p''_j}{2^{i + 1}} \le \sum C_j(m', m'').
$
Since this schedule is clearly feasible for the new set of machines, we conclude that the inequality also holds for the optimal schedule on this set of machines.
\end{proof}
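The interleaving argument can be verified on random instances. The sketch below compares the cost of two speed-$s$ machines against the merged speed-$2s$ machine, using the $(2j-1)$-th-last and $2j$-th-last positions from the proof:

```python
import random

def sum_cj(jobs_last_to_first, speed):
    """Total completion time when the j-th entry (1-indexed) is the j-th
    last job run on the machine: sum_j j * p_j / speed."""
    return sum(j * p / speed for j, p in enumerate(jobs_last_to_first, 1))

random.seed(0)
for _ in range(200):
    k = random.randint(1, 6)
    s = 2 ** random.randint(1, 4)
    a = [random.randint(0, 9) for _ in range(k)]   # j-th last jobs on m'
    b = [random.randint(0, 9) for _ in range(k)]   # j-th last jobs on m''
    two = sum_cj(a, s) + sum_cj(b, s)
    # Interleave on one machine of speed 2s: a_j becomes the (2j-1)-th
    # last job and b_j the 2j-th last, matching the bound in the lemma.
    merged = sum((2 * j - 1) * p for j, p in enumerate(a, 1)) / (2 * s) \
           + sum(2 * j * p for j, p in enumerate(b, 1)) / (2 * s)
    assert merged <= two + 1e-9
```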
Using \cref{lemma:GeometricSumForSigma}, exhaustive search, linear programming, and rounding we were able to prove \cref{theorem:4apx}.
Roughly speaking, the main idea is that we may guess, for each partition, the speed of its fastest machines.
Then we construct a linear (possibly fractional) relaxation of the assignment of the machines to the partitions and round it to an integral one.
The rounding consists of rounding up the number of machines of each speed assigned to a partition.
Knowing the speed of the fastest machine assigned to a partition in $S_{opt}$, and using the previous lemma, we may schedule the jobs assigned to fractional machines on a machine of the highest speed in the partition, increasing the total completion time by a factor of at most $2$.
This together with the rounding proves the following theorem.
\begin{algorithm}
\begin{algorithmic}[1]
\Require $J = (J_1, \ldots, J_k)$, $M = \{m_1, \ldots, m_m\}$.
\State Round the speeds of the machines up to the nearest power of $2$.
\State Let the nonempty groups of machines, ordered by speed, be $M_1, \ldots, M_{l}$.
\State For each of the partitions, guess the speed of the machine with the highest speed and the number of the machines of this speed assigned.
Let the indices of their speed groups be $s_1', \ldots, s_k'$ and numbers be $n_1' ,\ldots, n_k'$, respectively.
Dismiss the guesses that are infeasible.
\State Solve the following linear program.
\State Let variables be:
\begin{itemize}
\item $n_{pr, tp}$, where $pr \in \{1, \ldots, k\}$, $tp \in \{1, \ldots, l\}$.
\item $x_{jb, lr, tp}$, where $jb \in J_1 \cup \ldots \cup J_k$, $lr \in \{1, \ldots, n\}$, $tp \in \{1, \ldots, l\}$.
\end{itemize}
\State Let the conditions be:
\begin{align}
\sum_{pr} n_{pr, tp} = |M_{tp}| & & \forall{tp} \label{cond:MachinesInTotalRight} \\
n_{pr, tp} = 0 & & \forall{pr} \forall{tp > s_{pr}'} \label{cond:MachinesForParitionRight0}\\
0 \le n_{pr, tp} \le |M_{tp}| - \sum_{i \in \{i \mid s_i' = tp\}}n_i' & & \forall{pr} \forall{tp < s_{pr}'} \label{cond:MachinesForParitionRightI}\\
n_{pr}' = n_{pr, tp} & & \forall{pr}, tp = s_{pr}' \label{cond:MachinesForParitionRightII} \\
\sum_{lr, tp} x_{jb, lr, tp} = 1 & & \forall{jb} \label{cond:EachJobAssigned} \\
\sum_{jb \in J_{pr}} x_{jb, lr, tp} \le n_{pr, tp} & & \forall{pr} \forall{tp} \forall{lr} \label{cond:NotMoreJobsInLayerThanMachines} \\
0 \le x_{jb, lr, tp} & & \forall{jb} \forall{lr} \forall{tp} \label{cond:NonNegativeJobs} \\
0 \le n_{pr, tp} & & \forall{pr} \forall{tp} \label{cond:NonNegativeMachines}
\end{align}
\State Let the cost function be: $\sum_{jb, lr, tp} x_{jb, lr, tp} \cdot lr \cdot p(jb) \cdot \frac{1}{s(tp)}$, where $p(jb)$ is the processing requirement of job $jb$, $s(tp)$ is the speed factor of machine of type $tp$.
\State Solve the jobs assignment for each partition separately using the optimal solution of LP.
\end{algorithmic}
\caption{$4$-approximate algorithm for the problem \break $Q|G = \completekpartite{k}|\sum C_j$ }
\label{algorithm:4ApxForSumCost}
\end{algorithm}
\begin{theorem}
\label{theorem:4apx}
There exists a $4$-approximation algorithm for $Q|G = \completekpartite{k}|\sum C_j$.
\end{theorem}
\begin{proof}
Consider \cref{algorithm:4ApxForSumCost}.
First notice that the proposed program is a linear relaxation of the scheduling problem.
Precisely, $n_{pr, tp}$ means how many machines from a group $tp$ are assigned to the partition $pr$; $x_{jb, lr, tp}$ means what part of a job $jb$ is assigned as the $lr$-th last on a machine of type $tp$.
Notice that jobs assigned to machines of a given type form layers, i.e. a job assigned as the last contributes its processing time once, the second-to-last contributes it twice, etc.
About the conditions:
\begin{itemize}
\item Condition \eqref{cond:MachinesInTotalRight} guarantees that all the machines are assigned, fractionally at worst.
\item Condition \eqref{cond:MachinesForParitionRight0} provides that no machine with speed higher than maximum possible (i.e. guessed) is assigned to the partition.
\item Condition \eqref{cond:MachinesForParitionRightI} guarantees that each of the partitions can be given any number of not preassigned (not assigned by guessing) machines of a given type.
\item Condition \eqref{cond:MachinesForParitionRightII} guarantees that the given number of machines of the guessed type are assigned to a given partition as the fastest ones.
\item Condition \eqref{cond:EachJobAssigned} ensures that any job is assigned completely, in a fractional way at worst.
\item Condition \eqref{cond:NotMoreJobsInLayerThanMachines} guarantees that for a given layer, partition, and machine type no more jobs are assigned than there are machines of this type assigned to the partition.
\end{itemize}
The cost function corresponds to an observation that a job $jb$ assigned as the $l$-th last on the machine of type $tp$ contributes exactly $\frac{l \cdot p(jb)}{s(tp)}$ to $\sum C_j$.
An optimal solution to LP $(x^*, n^*)$ corresponds to a fractional assignment of machines to the partitions.
We now construct for each partition $J_i$ separately a \emph{partition fractional scheduling} problem in the following way:
\begin{itemize}
\item A new set of variables $y_{jb,lr,m}$ indicating a fractional assignment of $jb \in J_i$ as the $lr$-th last job on machine $m \in M'$.
\item A cost function $\sum_{jb,lr,m} y_{jb,lr,m} \cdot \frac{lr \cdot p(jb)}{s(m)}$.
\item The conditions:
\begin{itemize}
\item $\forall_{jb, lr, m}\; y_{jb,lr,m} \ge 0$,
\item $\forall_{jb} \sum_{lr, m} y_{jb,lr,m} = 1$ -- each job has to be assigned completely,
\item $\forall_{lr, m} \sum_{jb} y_{jb, lr, m} \le 1$ -- each layer on each machine cannot contain more than a full job in total.
\end{itemize}
\end{itemize}
Here the set of machines $M'$ consists of exactly $\ceil{n^*_{i, tp}}$ machines for each $1 \le tp \le l$.
Hence, for each type we add at most one ``virtual'' machine due to rounding, except for the machines of the highest speed in each partition, which were preassigned exactly.
Now we rearrange the jobs within layers across machines of the same speed to construct a feasible solution to partition fractional scheduling.
Fix any $lr$ and $tp$, and let $Y = \{y_{jb, lr, m} \colon s(m) = tp, jb \in J_i\}$.
We \emph{redistribute} the values $x_{jb, lr, tp}$ over $Y$ greedily: we set $y_{jb, lr, m} = x_{jb, lr, tp}$ for consecutive machines $m$ of speed $tp$.
If such an assignment would set some variable $y_{jb, lr, m} = y'$ such that $\sum_{jb} y_{jb, lr, m} > 1$, then we set $y_{jb, lr, m} = x_{jb, lr, tp} - (y' - 1)$ instead, and we continue with the next machine of speed $tp$ and the unassigned fraction of $x_{jb, lr, tp}$.
Notice that by condition \eqref{cond:NotMoreJobsInLayerThanMachines} we have $\forall_{lr, tp}\ \sum_{jb} x_{jb,lr,tp} \le n^*_{i, tp} \le \ceil{n^*_{i, tp}}$, the number of machines of type $tp$ in $M'$, so the redistribution always succeeds.
Since we only rearrange jobs preserving their layers the cost of $y$ in partition fractional scheduling is equal to contribution of variables from $J_i$ to the cost of $x^*$ in LP.
Hence an optimal solution can have only at most this cost.
Let us model this linear program as a flow network.
Precisely, we construct:
\begin{itemize}
\item a set of vertices $V = J_i \cup ( M' \times \{1, \ldots, |J_i|\})$,
\item a set of arcs $J_i \times ( M' \times \{1, \ldots, |J_i|\})$ with capacity $1$ each, and with the cost of the flow through an arc $(jb, (m, lr))$ equal to $lr \cdot p(jb) \cdot \frac{1}{s(m)}$.
\end{itemize}
Any fractional solution corresponds to a fractional flow in the network, i.e. the value of $y_{jb, lr, m}$ is exactly the flow through the arc $(jb, (m, lr))$.
We know that e.g. Successive Shortest Path Algorithm \cite{NetworkFlows} finds an integral minimum cost flow in the network corresponding directly to partition fractional scheduling.
We know that this solution is at least as good as the solution for the general relaxation of the scheduling problem.
We can treat the flow as an assignment for all the jobs in $J_i$.
Due to the rounding of $n^*$ it may use some virtual machines, but by \cref{lemma:GeometricSumForSigma} we can replace them with a single virtual machine with a speed no greater than the speed of the fastest machine.
Finally, by moving all jobs from this virtual machine and any fastest machine onto that fastest machine, we increase the $\sum C_j$ of these jobs by a factor of at most $2$.
This together with the rounding of the machine speeds allows us to bound the approximation ratio by $4$.
\end{proof}
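The integral solution obtained from the flow network pairs jobs with (machine, layer) slots of cost $lr \cdot p(jb)/s(m)$. For a single partition, an equivalent sketch greedily matches the largest jobs with the smallest available coefficients $lr/s(m)$ (by the rearrangement inequality, and because coefficients grow with the layer, the smallest coefficients always form per-machine prefixes); the function name `schedule_partition` is ours:

```python
import heapq

def schedule_partition(jobs, speeds):
    """The j-th last job on a machine of speed s contributes j*p/s to
    sum C_j, so pair the largest jobs with the smallest coefficients j/s."""
    # Heap of (coefficient, machine index, next layer on that machine).
    heap = [(1.0 / s, i, 1) for i, s in enumerate(speeds)]
    heapq.heapify(heap)
    total = 0.0
    assignment = []                      # (job, machine index, layer)
    for p in sorted(jobs, reverse=True):
        coeff, i, layer = heapq.heappop(heap)
        total += coeff * p
        assignment.append((p, i, layer))
        heapq.heappush(heap, ((layer + 1) / speeds[i], i, layer + 1))
    return total, assignment
```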
\subsection{Simple Algorithms for Unit Time Jobs}
In this subsection we sketch outlines of two methods: a $2$-approximation algorithm for the problem $Q|G = \textit{complete multipartite}, p_j=1|C_{max}$ and a $4$-approximation algorithm for $Q|G = \textit{complete multipartite}, p_j=1|\sum C_j$.
It is more convenient to express the algorithms in terms of covering the parts of $G$ by capacities of the machines.
Let us first sketch briefly the necessary notation.
Let $J = \{J_1, \ldots, J_k\}$ be the parts.
Let $M = \{m_1, \ldots, m_m\}$ be the machines.
Let $c: M \rightarrow \mathbb{N}_+$ be a function giving \emph{capacities} of the machines.
By a \emph{\cover} we mean any subset of the machines.
For \cover $M' \subseteq M$, a part $J_j$, and $c$ we define the \emph{cover ratio} $CR^c_j(M') = \frac{\min\{|J_j|, \sum_{m_i \in M'} c(m_i)\}}{|J_j|}$.
We say that a \cover $M' \subseteq M$ is an \cover[$\alpha$] for $J_j$ under $c$ if $CR^c_j(M') \ge \alpha$.
A \cover $M' \subseteq M$ is said to be an exact \cover for $J_j$ under $c$ if $CR^c_j(M') = 1$.
By a \emph{\covering} of $J' \subseteq J$ we mean a partial function $F: M \rightharpoonup J'$, i.e. a function which does not necessarily assign all the machines.
We say that a \covering $F$ of $J'$ is an \covering[$\alpha$] for $J'$ under $c$ if $\forall_{J_j \in J'}F^{-1}(J_j)$ is an \cover[$\alpha$] of $J_j$.
Similarly, a \covering $F$ of $J'$ is an \emph{exact \covering} for $J'$ under $c$ if $\forall_{J_j \in J'}F^{-1}(J_j)$ is an exact \cover of $J_j$.
When defining particular \covers (\coverings) we usually omit the function $c$; it is clearly stated in the context of a given \cover (\covering).
Also, we sometimes do not explicitly specify $J'$ if it is clear from the context.
Now, let us proceed to the algorithms that we propose and the proofs of the quality of solutions produced by the algorithms.
\begin{algorithm}
\begin{algorithmic}[1]
\Procedure{Greedy-Covering}{$J = \{J_1, \ldots, J_k\}, M = \{m_1, \ldots, m_m\}, c$}
\State Let $F \leftarrow \emptyset$
\State Let $j \leftarrow 1$
\For{$i=1, \ldots, m$}
\State $F \leftarrow F \cup (m_i, J_j)$
\IfThen{$CR_j^c(F^{-1}(J_j)) \ge \frac{1}{2}$}{$j \gets j + 1$}
\EndFor
\IfThenElse{$F$ is a \covering[$\frac{1}{2}$] for $J$}{\Return $F$;}{\Return \mathbb{N}O}
\EndProcedure
\end{algorithmic}
\caption
{
Let $J$ be the parts ordered by their sizes.
Let $M$ be the machines ordered by their capacities $c$.
The following procedure either verifies that there is no exact \covering for the instance or constructs a \covering[$\frac{1}{2}$].
}
\label{algorithm:two-covering-verifier}
\end{algorithm}
\begin{lemma}
\label{lemma:coveringInHalf}
For a given $(J, M, c)$ \Cref{algorithm:two-covering-verifier} either verifies that there is no exact \covering or it constructs a \covering[$\frac{1}{2}$].
The algorithm works in $\textnormal{O}(n + m)$ time.
\end{lemma}
\begin{proof}
First, let us assume that for the given $(J, M, c)$ there exists an exact \covering $F_{opt}$.
We establish the following invariant under this assumption: during the execution of \cref{algorithm:two-covering-verifier}, after constructing a \covering[$\frac{1}{2}$] $F$ for $J_1, \ldots, J_j$, the sum of capacities of $M \setminus F^{-1}(\{J_1, \ldots, J_j\})$ is at least $\sum_{j' = j + 1}^k |J_{j'}|$.
First, let us prove a particular case: $\sum_{i \in [m]} c(m_i) = n$, i.e. that the instance is \emph{tight}.
Before the assignment of the first machine the invariant obviously holds.
Now suppose that the invariant holds for all values $1, \ldots, j - 1$ and let us use a few of the remaining unassigned machines to construct a \cover for $J_j$.
\begin{itemize}
\item
Assume that for the first machine $m_i$ assigned to $J_j$ in $F$ the following inequality holds: $c(m_i) \le |J_j|$.
Then the invariant is preserved after the assignment, since by assigning the sequence of machines $m_i, \ldots, m_{i''}$ to $J_j$ the sum of the remaining capacities was decreased by $\sum_{i' = i}^{i''} c(m_{i'}) \le |J_j|$ and $\sum_{j' = j}^k |J_{j'}|$ was decreased by $|J_j|$.
\item
Assume the opposite, that $c(m_i) > |J_j|$ holds for the first $m_i$ used for $J_j$.
Note that $J_j$ is covered by exactly one machine in $F$.
Observe that the assignment of any machine $m_{i'}$ with $i' \le i$ to any part $J_{j'}$ with $j' \ge j$ in $F_{opt}$ would force $F_{opt}^{-1}(J_{j'})$ to have total capacity greater than $|J_{j'}|$, which is impossible, because the instance is tight.
This means that $F_{opt}^{-1}(\{J_j, \ldots, J_k\})$ consists only of machines of smaller capacity, which are in $M \setminus F^{-1}(\{J_j, \ldots, J_k\})$.
Hence, the invariant again holds.
\end{itemize}
Finally, consider the general case $\sum_{i \in [m]} c(m_i) \ge \sum_{j \in [k]} |J_j|$.
In such case let us consider an arbitrary exact \covering $F_{opt}$.
If there are some machines unassigned, then let us assign them arbitrarily.
Now, $F_{opt}$ would be also an exact \covering if for any $j$ the number of jobs in $J_j$ was exactly $\sum_{i: F(m_i) = J_j} c(m_i)$.
Let us modify the instance, that is let us assume that it consists of parts $J' = \{J_1' ,\ldots, J_k'\}$ where $J_j'$ is of cardinality $\sum_{i: F(m_i) = J_j} c(m_i)$.
Let us denote the modified instance, perhaps after re-sorting the parts, by $(M, (J_1', \ldots, J_k'), c)$.
By the observation for the tight case, the algorithm would succeed in constructing a \covering[$\frac{1}{2}$] $F'$ for $J'$.
Note that for any $j \in [k]$ we have $|J_j'| \ge |J_j|$, due to the fact that no part became smaller.
Observe that since the algorithm succeeds in constructing $F'$, it also succeeds in constructing a \covering[$\frac{1}{2}$] $F$ for $(J_1, \ldots, J_k)$.
This is by an observation (a formal proof of which we omit) that for any $j \in [k]$ $M \setminus F'^{-1}(\{J_1', \ldots, J_j'\}) \subseteq M \setminus F^{-1}(\{J_1, \ldots, J_j\})$.
And vice versa, if the algorithm returns \mathbb{N}O, there is no exact \covering.
\end{proof}
\begin{example}
In order to illustrate the reasoning for the general case, consider:
\begin{itemize}
\item A set of machines $M = \{m_1, m_2, m_3, m_4, m_5, m_6\}$.
\item The function $c$ with values equal to $7, 4, 4, 3, 3, 2$, for $m_1, m_2, m_3, m_4, m_5, m_6$, respectively.
\item Two sets of parts $J' = (J_1', J_2', J_3')$, where the parts have sizes $10, 8, 5$, respectively; and $J = (J_1, J_2, J_3)$, where the parts have sizes $7, 6, 4$, respectively.
\end{itemize}
The \covering produced for $J'$ is $m_1; m_2, m_3; m_4$ and the \covering produced for $J$ is $m_1; m_2; m_3$.
\end{example}
\begin{theorem}
\label{thm:Q-cmax-2apx}
There exists $2$-approximation algorithm for $Q|G = \textit{complete multipartite}, p_j=1|C_{max}$ running in $\textnormal{O}(m \log m + n \log{n})$ time.
\end{theorem}
\begin{proof}
Assume that we suspect that there exists a schedule for an instance $(J, M)$ of $Q|G = \textit{complete multipartite}, p_j=1|C_{max}$ within time $T$.
Let us calculate the capacities of the machines, i.e. $c(m_i) = \floor{s(m_i) \cdot T}$.
If the assumption is true, then there exists an exact \covering.
Now we apply \cref{algorithm:two-covering-verifier} to $(J, M, c)$ and we may get two results:
\begin{enumerate}
\item the algorithm returned $F$ -- a \covering[$\frac{1}{2}$] of $J$,
\item or the algorithm returned \mathbb{N}O, which guarantees that there is no exact \covering within the time $T$.
\end{enumerate}
By \Cref{lemma:coveringInHalf} the second case cannot occur if there exists a schedule in the time $T$.
Now, let us take \covering[$\frac{1}{2}$] $F$ for this $T$.
We translate it to a schedule as follows: for $i = 1, \ldots, m$ if $F(m_i) = J_j$, we schedule up to $2 c(m_i)$ jobs from $J_j$ on $m_i$.
In total, the space on the machines assigned to $J_j$ in the time $2T$ ensures that all jobs are scheduled.
Moreover, each machine gets jobs only from a single part.
Finally, it is clear that the makespan of this schedule is at most $2T$.
Let $C_{max}^*$ be $C_{max}$ of an optimum schedule.
Observe that $C_{max}^*$ is determined by the number of jobs $n' \in [n]$ assigned to some machine $m_i$.
This means that we have only $\textnormal{O}(mn)$ candidates for $C_{max}^*$.
By checking the candidates we can find the smallest $T$ for which there exists a \covering[$\frac{1}{2}$] $F$.
Using $F$ it is easy to construct a schedule with $C_{max}$ equal to at most $2T \le 2 C_{max}^*$.
Initially we have to sort the machines and parts, which can be done in $\textnormal{O}(m \log m)$ and $\textnormal{O}(n \log n)$ time, respectively.
From this point on we can assume that $m \le n$; otherwise we can always discard all but the $n$ fastest machines without affecting the optimal solution.
By this we have $\textnormal{O}(\log{n})$ iterations of binary search over candidates for $T$.
Each application of \cref{algorithm:two-covering-verifier} requires $\textnormal{O}(n)$ time, also by $m \le n$.
Clearly, the sketched $2$-approximation algorithm requires $\textnormal{O}(m \log m + n \log n)$ time.
\end{proof}
\begin{theorem}
\label{thm:Q-sumcost-4apx}
There exists a $4$-approximation algorithm for $Q|G = \textit{complete multipartite}, p_j =1|\sum C_j$ running in $\textnormal{O}(m^2n^3\log m)$ time.
\end{theorem}
\begin{proof}
Assume that the parts and machines are sorted in order of their nonincreasing sizes and speeds, respectively.
Suppose that we knew in advance the numbers of jobs $c_1, \ldots, c_m$ assigned to the machines $m_1, \ldots, m_m$ in some optimal schedule.
Without loss of generality, we could assume that if $s_1 \ge \ldots \ge s_m$, then also $c_1 \ge \ldots \ge c_m$.
Observe that the values $c_i$ can also be interpreted as capacities of the machines.
In this case we could apply \cref{algorithm:two-covering-verifier} to $(J, M, c)$ and obtain $F$ -- a \covering[$\frac{1}{2}$] of $J$.
We could translate $F$ to a schedule as follows: if $F(m_i) = J_j$, then assign up to $2c_i$ jobs from $J_j$ to $m_i$.
In total, the capacity of the machines assigned to $J_j$ is at least $\frac{1}{2}|J_j|$ so every job would be scheduled.
Now observe that $m_i$ in the optimal schedule contributes exactly $\binom{c_i + 1}{2} \frac{1}{s(m_i)}$ to $\sum C_j$, but in constructed schedule it would contribute at most $\binom{2 c_i + 1}{2} \frac{1}{s(m_i)} \le 4 \binom{c_i + 1}{2} \frac{1}{s(m_i)}$.
So this would be schedule with $\sum C_j$ at most $4$ times the optimum.
Unfortunately, by \cref{theorem:nphcomplete}, it is NP-hard to obtain such information.
However, observe that in $F$ the assignment of the machines to the parts would be ordered.
That is, $J_1$ gets $n_1$ machines of biggest capacity, $J_2$ gets next $n_2$ machines of biggest capacity, etc.
This is by the observation that in \cref{algorithm:two-covering-verifier} the machines are considered in a fixed order, determined by the order of capacities, which w.l.o.g. is determined by the order of speeds.
Let us call by \emph{ordered \covering} any such \covering.
This observation allows us to construct a \covering that corresponds to a schedule with $\sum C_j$ at most $4$ times the optimum without knowledge of $c$.
We proceed by using dynamic programming over all ordered \coverings.
Precisely, let the states of this program be defined by $(j, i, cost, F)$, where $F$ is an ordered \covering in which the first $i$ machines are assigned to the first $j$ parts, and whose associated schedule has the lowest $\sum C_j$ among all schedules corresponding to an ordered \covering of the first $i$ machines to the first $j$ parts.
Notice that $(1, 1, cost_1, F_1), \ldots, (1, m-(k-1), cost_{m-(k-1)}, F_{m-(k-1)})$ are well defined.
Precisely, $cost_i$ is the total completion time of the jobs from $J_1$ scheduled on $i$ fastest machines, $F_i = \bigcup_{i' \in [i]} \{(m_{i'}, J_1)\}$.
For $k' \ge 2$, $m-(k-k') \ge m' \ge k'$ and $m'-1 \ge m'' \ge k'-1$, construct $(k', m', cost, F')$ as the best with respect to $\sum C_j$ ordered \covering corresponding to $(k'-1, m'', cost'', F'')$ and the assignment of $m_{m''+1}, \ldots, m_{m'}$ to $J_{k'}$.
Every such assignment is feasible.
Moreover, for any $1 \le k' \le k$ and $k' \le m' \le m-(k-k')$ it holds that there is no ordered \covering of $\{J_1, \ldots, J_{k'}\}$ using $\{m_1, \ldots, m_{m'}\}$ with a smaller $\sum C_j$ of the associated schedule than the $cost$ following from $(k', m', cost, F)$.
For $k' = 1$ and any $m'$ it obviously holds.
Consider a counter-example with the minimum number of the parts and the minimum number of the machines.
In this case, let there be an ordered \covering $F_{opt}$ associated with some schedule of minimum $\sum C_j$ defined by the numbers of the machines assigned to $J_1, \ldots, J_k$, and let these numbers be $n_1, \ldots, n_k$, respectively.
Consider an ordered \covering $F_{alg}$ determined by $(k-1, n_1 + \ldots + n_{k-1}, cost, F')$ and by the assignment of $M_k = \bigcup_{i = n_1 + \ldots + n_{k-1} + 1}^{n_1 + \ldots + n_{k}} m_i$ to $J_k$.
Notice that the contributions to $\sum C_j$ of the $J_k$ scheduled on $M_k$ are equal in the schedule associated with $F_{opt}$ and in the schedule associated with $F_{alg}$.
This means that there is a minimum counter-example on $k-1$ parts and $n_1 + \ldots + n_{k-1}$ machines.
The sorting of parts and machines can be done in $\textnormal{O}(n)$ and $\textnormal{O}(m \log m)$ time, respectively.
After this operation we can assume that $m \le n$.
If $|M| < |J|$, then no schedule can exist.
At each step of our dynamic program there are at most $mn$ states, since for every $(j, i)$ we store only the smallest $cost$.
There are up to $m$ possible new \coverings generated from each state, each requiring $\textnormal{O}(n \log m )$ time to generate.
Therefore each step requires $\textnormal{O}(m^2n^2\log m)$ operations and the total running time of the algorithm is $\textnormal{O}(km^2n^2\log m) = \textnormal{O}(m^2n^3\log m)$.
\end{proof}
\subsection{A PTAS for $Q|G = \textit{complete multipartite}, p_j=1|C_{max}$}
Now let us return to $Q|G = \textit{complete multipartite}, p_j=1|C_{max}$ problem.
We can significantly improve on \Cref{thm:Q-cmax-2apx} and construct a PTAS inspired by the ideas of the PTAS for \textnormal{\sffamily Machine Covering}\xspace \citep{azar1998approximation}.
\begin{algorithm}
\begin{algorithmic}[1]
\Procedure{PTAS}{$J$, $M$, $T$, $\epsilon$}
\State $\epsilon \leftarrow \min\{\frac{1}{2}, \epsilon\}$
\State Calculate rounded capacities $c^*(m)$ for all $m \in M$
\State $l_{min} \gets \left\lceil 3 \log_{1 + \epsilon}\frac{1}{\epsilon}\right\rceil + 1$
\State Split $J$ into ranges $\{P_l\}_{l = 0}^{l_{max}}$
\State Find $SV^{l_{\min}+1}$ for $(J, M, c^*)$ (see \Cref{algorithm:prePTASCmax} and \Cref{lemma:preHeartOfPTAS})
\For{$l = l_{min} + 1, \ldots, l_{max}$}
\For {$sv \in SV^{l}$}
\State Generate $CSV(sv)$ (see \Cref{algorithm:interPTASCmax,algorithm:PTASCmax}, \Cref{theorem:interPTASproperties,lemma:HeartOfPTAS-good})
\EndFor
\State Find $SV^{l+1}$ as a subset of $\bigcup_{sv \in SV^{l}} CSV(sv)$ (see \Cref{lem:cutting-sv})
\EndFor
\IfThen{$SV^{l_{max}+1} = \emptyset$}{\Return \mathbb{N}O}
\State Pick any $sv \in SV^{l_{max}+1}$ with its respective \cover[$(1-\epsilon)$]ing $F$
\State \Return a schedule $S$ constructed from $F$
\EndProcedure
\end{algorithmic}
\caption{
The main part of the PTAS for $Q|G = \textit{complete multipartite}, p_j=1|C_{max}$.
The following algorithm either verifies that there is no schedule of length at most $T$ or it constructs a schedule with $C_{max}$ close to $T$.
}
\label{algorithm:ptas-highlevel}
\end{algorithm}
A high-level overview of the algorithm is presented as \Cref{algorithm:ptas-highlevel}.
As previously, we state the algorithm in terms of constructing an approximate \covering.
As in the case of the application of the $2$-approximation algorithm for $Q|G = \textit{complete multipartite}, p_j = 1|C_{max}$, we state the algorithm in the framework of \citet{hochbaum1988}.
That is, we assume that a guess of a value $T$ is given such that there exists a schedule with $C_{max} \le T$.
Naturally, such a schedule corresponds to an exact \covering of $J$ by $M$ under capacities given by $c(m_i) = \floor{s(m_i) \cdot T}$.
The method that we propose either verifies that the guess is incorrect, i.e. that there is no exact \covering in capacities determined by the time $T$, or it constructs a \covering with capacities determined by the time $T$ that can be transformed into a schedule with $C_{max}$ near $T$.
By applying the method over a set of candidate makespans we find the smallest $T$ such that there exists a schedule with makespan close to $T$.
For any fixed guess of $T$ we can distinguish the following basic steps of the algorithm:
\begin{enumerate}
\item Applying some preprocessing, in particular to divide parts into ranges and to calculate rounded capacities of the machines.
\item Finding a set of vectors $SV^{l_{min}+1}$ such that at least one vector in the set describes machines that are not assigned to small parts in some exact \covering and each vector in the set describes an exact \covering of small parts.
\item Applying iteratively a procedure consisting of two steps:
\begin{enumerate}
\item The first step is to find for each $sv^l \in SV^{l}$ a set of candidate state vectors $CSV(sv^l)$ such that it contains at least one good state vector (a term defined later) if $sv^l$ is a good state vector.
\item The second step is to calculate $SV^{l+1}$ as a subset of $\bigcup_{sv \in SV^{l}} CSV(sv)$ such that the constructed set contains at least one good state vector.
\end{enumerate}
\item Constructing a schedule with $C_{max} \le T(1 + 7 \epsilon)$ using a nice \cover[$(1-\epsilon)$]ing, corresponding to a vector in $SV^{l_{max}+1}$ -- the set $SV^{l_{max}+1}$ is nonempty provided that there exists a schedule with $C_{max} \le T$.
\end{enumerate}
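The outer guess-and-verify loop can be sketched as follows (an illustrative Python sketch; \texttt{verify} is a hypothetical callback standing in for steps 1--4 above, assumed monotone in $T$):

```python
def smallest_feasible_makespan(candidates, verify):
    """Binary-search the sorted candidate makespans for the smallest T
    that the verification procedure accepts."""
    lo, hi = 0, len(candidates) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if verify(candidates[mid]):
            best = candidates[mid]  # feasible: try a smaller T
            hi = mid - 1
        else:
            lo = mid + 1            # infeasible: increase T
    return best
```

Monotonicity of \texttt{verify} is an assumption of the sketch; the text only requires applying the method over a set of candidate makespans.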
\subsubsection{Basic definitions}
In order to prove the result formally, we introduce suitable notation and a few notions tailored to our problem.
As previously, for any fixed $T$ let us define the \emph{capacity} of $m \in M$ by $c(m) = \floor{s(m) \cdot T}$.
Let us also define the \emph{rounded capacity} of $m$, denoted $c^*(m)$, as $c(m)$ rounded up to the nearest value of the form $\floor{(1 + \epsilon)^i}$.
Clearly, $c(m) \le c^*(m) \le (1 + \epsilon) c(m)$ so for convenience from now on we will refer to rounded capacities exclusively, and the \covers are constructed with respect to $c^*$.
Now, we group parts into sets (also called \emph{ranges}) $P_l = \{J_k\colon |J_k| \in [\floor{(1 + \epsilon)^{l}}, \floor{(1 + \epsilon)^{l + 1}})\}$ and we will consider these ranges in order of increasing $l$.
For convenience, let $l_{max}$ be the largest value such that its range is nonempty.
Next, given $c^*$ and $l$ we divide the machines into several types:
\begin{enumerate}
\item \emph{tiny} -- with $c^*(m_i) < \epsilon^{-2}$,
\item \emph{small} -- with $\epsilon^{-2} \le c^*(m_i) < \epsilon (1 + \epsilon)^{l}$,
\item \emph{average} -- with $\max\{\epsilon (1 + \epsilon)^{l}, \epsilon^{-2}\} \le c^*(m_i) < \floor{(1 + \epsilon)^{l + 1}}$,
\item \emph{large} -- with $\max\{\floor{(1 + \epsilon)^{l + 1}}, \epsilon^{-2}\} \le c^*(m_i)$.
\end{enumerate}
Note that this division depends on the given $l$.
For clarity of the notation we sometimes write that $m$ is \smallmachine[$l$] (\averagemachine[$l$]) /\largemachine[$l$]/ to denote that $m$ is small (average) /large/ with respect to $l$ under $c^*$.
Sometimes we do not use the $l$ explicitly when stating that some machine is small, average, etc., but it is always given implicitly.
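A minimal sketch of the rounding and classification above (Python; the concrete values of $\epsilon$, $T$ and the speeds are illustrative only):

```python
import math

def rounded_capacity(speed, T, eps):
    """c(m) = floor(s(m) * T), rounded up to the nearest floor((1+eps)^i)."""
    c = math.floor(speed * T)
    i = 0
    while math.floor((1 + eps) ** i) < c:
        i += 1
    return math.floor((1 + eps) ** i)

def machine_type(c_star, l, eps):
    """Classify a machine of rounded capacity c_star with respect to range l."""
    if c_star < eps ** -2:
        return "tiny"
    if c_star < eps * (1 + eps) ** l:
        return "small"
    if c_star < math.floor((1 + eps) ** (l + 1)):
        return "average"
    return "large"
```

The sequential tests in \texttt{machine\_type} reproduce the $\max\{\cdot\}$ lower bounds in the definitions of average and large machines.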
We use the notation of \covers and \coverings used in the description of \cref{algorithm:two-covering-verifier}.
However, we have to add a few other types of \covers and a few other types of \coverings.
For a part $J_k$ a set $M'\subseteq M$ is a \emph{tiny exact \cover} if it is an exact \cover and $M'$ consists of tiny machines alone.
Also, we say that $M' \subseteq M$ is a \emph{slack \exactcover} of a part $J_j$ in $P_l$ when it is an exact \cover of $J_j$, $M'$ consists of \smallmachine[$l$] machines or tiny machines, and there is at least one \smallmachine[$l$] machine in $M'$.
Also, for a \cover $M' \subseteq M$ of $J_j \in P_l$ that contains at least one \smallmachine[$l$] or \averagemachine[$l$] machine, by \emph{slack capacity} we mean the total capacity of all \smallmachine[$l$] and tiny\xspace machines in $M'$.
We define two \covers $M'$ and $M''$ to be \emph{equivalent} under a capacity function $c^*$ if there is a bijection $f: M' \rightarrow M''$ such that $c^*(m) = c^*(f(m))$ for every $m \in M'$.
Hence, for a set $M'$ of machines of equal capacity under $c^*$ there are $|M'|+1$ nonequivalent subsets of $M'$.
Due to the rounding we have only at most $d_{average} = \floor{\log_{1 + \epsilon}(\frac{1}{\epsilon})} + 1$ distinct capacities for average machines, regardless of $l$.
Their capacities are equal to $\floor{(1+ \epsilon)^{l - d_{average} + 1}}, \ldots, \floor{(1+\epsilon)^{l}}$ -- since it is easy to check that $\floor{(1+ \epsilon)^{l - d_{average}}} < \epsilon (1 + \epsilon)^{l}$.
Similarly we have only at most $d_{tiny} = \ceil{\log_{1+\epsilon}\frac{1}{\epsilon^2}}$ distinct capacities of tiny machines (the maximum number is of the form $(1+\epsilon)^{\ceil{2\log_{1+\epsilon}\epsilon^{-1}}-1}$, the numbers are counted from $0$).
We write ``at most'' due to the fact that when $\epsilon$ is small, then a few values $(1+\epsilon)^i$ for small $i$ may be rounded to the same integer, hence there is no reason to duplicate entries.
To avoid unnecessary details we assume that there are always $d_{average}$ distinct capacities of average machines and $d_{tiny}$ distinct capacities of tiny machines.
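For concreteness, the two counts can be computed from the formulas in the text (a sketch; the value of $\epsilon$ is illustrative):

```python
import math

def capacity_counts(eps):
    """Upper bounds on the number of distinct rounded capacities of
    average machines (d_average) and tiny machines (d_tiny)."""
    d_average = math.floor(math.log(1 / eps, 1 + eps)) + 1
    d_tiny = math.ceil(math.log(1 / eps ** 2, 1 + eps))
    return d_average, d_tiny
```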
\subsubsection{State vectors}
The crucial concept for our algorithm and its proof is the \emph{state vector} for the $l$-th range with the fields:
\[
(M_{exact}; M_{slack}, n_{small}; M_{average}; M_{large}; F).
\]
The meanings of the fields are as follows:
\begin{itemize}
\item $M_{exact}$ -- a set of unassigned tiny\xspace machines designed to form exact \covers for some parts;
\item $M_{slack}$ -- a set, disjoint with $M_{exact}$, of unassigned tiny\xspace and \smallmachine[$l$] machines;
\item $n_{small}$ -- the number of \smallmachine[$l$] machines in $M_{slack}$;
\item $M_{average}$ -- a set of unassigned \averagemachine[$l$] machines;
\item $M_{large}$ -- a set of unassigned \largemachine[$l$] machines;
\item $F$ -- a \cover[$(1-\epsilon)$]ing for $P_0 \cup \ldots \cup P_{l-1}$.
\end{itemize}
First of all, the set $M_{exact}$ is necessary in our construction to guarantee that even parts of high cardinality can be covered exactly by tiny\xspace machines without excessive spending of machine capacity.
Otherwise the intuition is clear; we would like to track the unassigned machines with a suitable precision: high enough that for the next ranges we can find a nice \cover[$(1-\epsilon)$]ing (a term defined later), but spending a polynomial amount of time.
Also, keep in mind that we would like to track the unassigned machines with respect to the equivalence relation defined above.
In particular, there can be $\textnormal{O}(m^{d_{tiny}})$ nonequivalent sets $M_{exact}$, $\textnormal{O}(m^{d_{average}})$ nonequivalent sets $M_{average}$, and it is enough to consider $\textnormal{O}(m)$ nonequivalent sets $M_{large}$.
The last claim is justified by the observations in the next subsection.
For clarity, for a state vector $sv^l$ we use expressions like $\sv^l.M_{exact}$ or $\sv^l.n_{small}$ to refer to the values of its fields.
Also, we denote the set of vectors as $SV^l$ to emphasize that it consists of state vectors for $l$-th range.
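For reference, the record can be sketched as a small data class (Python; representing machine sets as tuples of capacities, i.e., up to the equivalence relation, is our illustrative choice):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StateVector:
    """State vector for the l-th range; field names follow the text."""
    M_exact: tuple    # capacities of tiny machines reserved for exact covers
    M_slack: tuple    # capacities of unassigned tiny and small machines
    n_small: int      # number of small machines in M_slack
    M_average: tuple  # capacities of unassigned average machines
    M_large: tuple    # capacities of unassigned large machines
    F: tuple = ()     # partial (1-eps)-covering built so far

    def signature(self):
        """Equivalence key used when merging/trimming sets of vectors."""
        return (self.M_exact, self.n_small, self.M_average, self.M_large)
```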
\subsubsection{Good vectors and $\epsilon$-approximate coverings}
Let us consider parts in a non-decreasing order of their sizes.
Assume that there exists an exact \covering $F$ of $J$.
Then, if $F^{-1}(J_j)$ for any $J_j \in P_l$ contains a large machine, then we may assume that $F^{-1}(J_j)$ consists of exactly one large machine $m$ -- we can simply drop the other machines.
Also, we may assume that $m$ is the smallest large machine assigned to $J_j$ or a later part.
If this is not the case, then we can do as follows.
\begin{itemize}
\item As long as there is a \largemachine[$l$] machine $m'$ of smaller capacity that is unassigned, then exchange $m$ with $m'$.
\item As long as there is a \largemachine[$l$] machine $m'$ of smaller capacity used for a later part then exchange $m$ with $m'$.
\end{itemize}
Let us fix any such exact \covering and to differentiate it from other \coverings let us denote it as the \emph{optimal \covering}, or symbolically as $F_{opt}$.
Now, with respect to the optimal \covering we can define two additional types of the machines.
For convenience we define $m$ to be \emph{tiny-exact\xspace} if in the optimal \covering $F_{opt}$ the machine $m$ is assigned and the set $F_{opt}^{-1}(F_{opt}(m))$ is a tiny exact \cover; otherwise we call $m$ a \emph{tiny-non-exact\xspace} machine.
We use the optimal \covering to form conditions for desirable state vectors at each step of the algorithm.
We say that a state vector $sv^l$ is \emph{good} if:
\begin{itemize}
\item $\sv^l.M_{exact}$ is equivalent to the tiny-exact\xspace machines in $M \setminus F_{opt}^{-1}(P_0 \cup \ldots \cup P_{l-1})$,
\item $\sv^l.M_{large}$ is equivalent to the \largemachine[$l$] machines in $M \setminus F_{opt}^{-1}(P_0 \cup \ldots \cup P_{l-1})$,
\item $\sv^l.M_{average}$ is equivalent to the \averagemachine[$l$] machines in $M \setminus F_{opt}^{-1}(P_0 \cup \ldots \cup P_{l-1})$,
\item $\sv^l.n_{small}$ is at least the number of parts in $\setssum{P}{l}{l_{max}}$ that are covered by slack \exactcover in $F_{opt}$ and \emph{where the fastest machine assigned in $F_{opt}$ is \smallmachine[$l$]},
\item The capacity of $M_{slack}$ is at least the capacity of the \smallmachine[$l$] and tiny-non-exact\xspace machines in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{l-1})$.
\end{itemize}
Intuitively, we would like a good state vector $\sv^l$ to describe the set of unassigned machines similar to $M \setminus F_{opt}^{-1}(\setssum{P}{0}{l-1})$.
Most importantly, the condition on $n_{small}$, as we will see, is designed to guarantee that the number of \smallmachine[$l$] machines unassigned is at least as big as the number of parts in $\setssum{P}{l}{l_{max}}$ covered by slack \exactcover for which the fastest machine assigned in the optimal \covering is \smallmachine[$l$].
We use the condition to guarantee that each such part can be covered by a \cover similar to a slack \exactcover.
We search the space of feasible \coverings for a \cover[$(1-\epsilon)$]ing which is \emph{nice}.
Formally, $M'$ is a \emph{nice \cover[$(1-\epsilon)$]} of a part $J_i \in P_l$ if:
\begin{itemize}
\item $M'$ is an exact \cover of $J_i$, i.e., $\sum_{m \in M'} c^*(m) \ge |J_i|$;
\item or $M'$ is a \emph{relatively almost exact \cover} of $J_i$, i.e., $\sum_{m \in M'} c^*(m) \ge (1 - \epsilon) |J_i|$ and $c^*(m) > \epsilon^{-1}$ for all $m \in M'$;
\item or $M'$ is an \emph{absolutely almost exact \cover} of $J_i$, i.e., $\sum_{m \in M'} c^*(m) \ge |J_i| - \epsilon^{-1}$ and $c^*(m) > \epsilon^{-2}$ for some $m \in M'$.
\end{itemize}
We call a \covering $F$ of $P_l = \{J_1, \ldots, J_{|P_l|}\}$ a \emph{nice \cover[$(1-\epsilon)$]ing} if for any $i \in [|P_l|]$ the set $F^{-1}(J_i)$ is a nice \cover[$(1-\epsilon)$] of $J_i$.
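The three alternatives can be tested directly; a sketch (Python, with machine capacities passed as a plain list, which is an illustrative simplification):

```python
def is_nice_cover(caps, size, eps):
    """Decide whether machines with rounded capacities `caps` form a
    nice (1-eps)-cover of a part of cardinality `size`."""
    total = sum(caps)
    if total >= size:                       # exact cover
        return True
    if total >= (1 - eps) * size and all(c > 1 / eps for c in caps):
        return True                         # relatively almost exact cover
    if total >= size - 1 / eps and any(c > eps ** -2 for c in caps):
        return True                         # absolutely almost exact cover
    return False
```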
Using a nice \cover[$(1-\epsilon)$]ing it is easy to construct a schedule, perhaps increasing $T$ a bit.
To present the desirable property more clearly let us consider the following lemma.
\begin{lemma}
\label{lemma:epsilontoexactcovering}
Assume that for an instance $(J, M, c^*, \epsilon )$, where $c^*$ is an integer-valued function and $\epsilon \in (0, 1)$, there is a nice \cover[$(1-\epsilon)$]ing $F$.
Then, $F$ is an exact \covering of $J$ with $M$ under $c' = \floor{c^*\left(\frac{1}{1-\epsilon} + \epsilon \right)}$.
\end{lemma}
\begin{proof}
For $J_i \in P_l$ consider what type of cover $F^{-1}(J_i)$ is:
\begin{itemize}
\item If $F^{-1}(J_i)$ is an exact \cover, then there is nothing to prove.
\item Assume that all machines in $F^{-1}(J_i)$ have capacity at least $\epsilon^{-1}$.
In this case the \cover[$(1-\epsilon)$] has to be (at least) a relatively almost exact \cover.
Therefore, we can multiply $c^*$ by $\frac{1}{1 - \epsilon}$ to increase the total capacity of $F^{-1}(J_i)$ by a factor of $\frac{1}{1 - \epsilon}$.
The total capacity of $F^{-1}(J_i)$ is at least $|J_i|$ after the operation.
However, this could lead to some fractional capacities, so additionally we would like the capacities to be rounded up to the nearest integer.
Summing up, we would like to replace $c^*(m)$ by $\ceil{\frac{c^*(m)}{1 - \epsilon}}$.
This can be done e.g. by using $\floor{c^* \left(\frac{1}{1 - \epsilon} + \epsilon\right)}$ instead of $c^*$ due to the fact that
\begin{align*}
\ceil*{\frac{c^*(m)}{1 - \epsilon}}
\le \floor*{\frac{c^*(m)}{1 - \epsilon}} + 1
\le \floor*{c^*(m) \left(\frac{1}{1 - \epsilon} + \epsilon\right)}
\end{align*}
where we used the property that $c^*(m) > \epsilon^{-1}$ for all machines used to cover $J_i$.
\item
Assume that at least one machine with capacity less than $\epsilon^{-1}$ is present in $F^{-1}(J_i)$.
However, this means that the total capacity of $F^{-1}(J_i)$ is at least $|J_i| - \frac{1}{\epsilon}$, by the requirement that the \cover[$(1-\epsilon)$] has to be an absolutely almost exact \cover in such a case.
Moreover, since $c^*$ is integer-valued and $|J_i|$ is an integer, the missing capacity is in fact at most $\floor{\frac{1}{\epsilon}}$.
Additionally, $F^{-1}(J_i)$ contains at least one machine $m_i$ with capacity $c^*(m_i) \ge \epsilon^{-2}$.
Thus, it is sufficient to use $\floor{c^*(1 + \epsilon)}$ instead of $c^*$, because we get $\floor{c^*(m_i)(1 + \epsilon)} \ge \floor{c^*(m_i) + \frac{1}{\epsilon}} = c^*(m_i) + \floor{\frac{1}{\epsilon}}$ -- the additional capacity of $m_i$ brings the total capacity of cover to at least $|J_i|$.
\end{itemize}
Overall, it is sufficient to pick the scaling according to the worst possible case.
Hence, every nice \cover[$(1-\epsilon)$]ing of $J$ with respect to $c^*$ is an exact \covering with respect to $\floor*{c^*(\frac{1}{1-\epsilon} + \epsilon)}$.
\end{proof}
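The capacity inflation in the lemma can be sanity-checked numerically; the sketch below verifies $\lceil c/(1-\epsilon)\rceil \le \lfloor c(\frac{1}{1-\epsilon}+\epsilon)\rfloor$ for a range of integer capacities $c > \epsilon^{-1}$ (the choice of $\epsilon$ and the range are illustrative):

```python
import math

def scaled_capacity(c, eps):
    """The inflated capacity floor(c * (1/(1-eps) + eps)) from the lemma."""
    return math.floor(c * (1 / (1 - eps) + eps))

# For integer capacities c > 1/eps the inflation dominates ceil(c/(1-eps)),
# which is the inequality used in the relatively-almost-exact case.
eps = 0.25
for c in range(int(1 / eps) + 1, 200):
    assert math.ceil(c / (1 - eps)) <= scaled_capacity(c, eps)
```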
\subsubsection{Parts with small sizes and slow machines}
Let us turn our attention to the first nontrivial part of the \Cref{algorithm:ptas-highlevel}: finding $SV^{l_{min}+1}$ -- a set of state vectors for $(l_{min} +1)$-th range guaranteed to contain a good state vector, under the condition that $C_{max}^* \le T$.
This set is our starting point for further iterations.
At first, let us describe the idea of \Cref{algorithm:prePTASCmax}: we split the machines into slow and fast ones and perform a dynamic programming in order to find all combinations of slow and fast machines which can be used to construct an exact \covering of $J' = \bigcup_{i = 1}^{l_{min}} P_i$.
By the discussion of the optimal \covering, for any part from $J'$ we use only the unused fast machine with the smallest capacity.
\begin{algorithm}[H]
\begin{algorithmic}[1]
\Procedure{Find-$SV^{l_{min}+1}$}{$J$, $M$, $c^*$}
\State Sort $J$ according to their cardinalities.
\State $l_{min} \gets \lceil 3 \log_{1+\epsilon}\frac{1}{\epsilon}\rceil + 1$
\TwoLinesFor{$i = 0, 1, \ldots, l_{min}$}{$M_i \gets \{m\colon c^*(m) = \floor{(1 + \epsilon)^i}\}$}
\Comment{assign each $m \in M$ to minimum feasible $M_i$}
\State $J' \gets \{J_k\colon |J_k| < \floor{(1+\epsilon)^{l_{min} + 1}}\}$
\State $M_{fast} \gets M \setminus \bigcup_{i = 1}^{l_{min}} M_i$
\State $S_0 \gets \{(M_0, \ldots, M_{l_{min}}; M_{fast}; \emptyset)\}$
\For{$i = 1, \ldots, |J'|$}
\State $S_{i} \gets \emptyset$.
\For{\textbf{each} $s = (M_0, \ldots, M_{l_{min}}; M_{fast}, F) \in S_{i-1}$}
\For{\textbf{each} $M_0' \subseteq M_0, \ldots, M_{l_{min}}' \subseteq M_{l_{min}}$\footnotemark[2]}
\If{$\sum_{j=0}^{l_{min}}|M_j'| \cdot \floor{(1 + \epsilon)^{j}} \ge |J_{i}|$}
\State Let $F' := F \cup (M_0', J_i) \cup \ldots \cup (M_{l_{min}}', J_i)$
\State Add $(M_0 \setminus M_0', \ldots, M_{l_{min}} \setminus M_{l_{min}}', M_{fast}; F')$ to $S_{i}$
\EndIf
\EndFor
\If{$M_{fast} \neq \emptyset$}
\State Let $m$ be the slowest machine in $M_{fast}$
\State Add $(M_0, \ldots, M_{l_{min}}, M_{fast} \setminus \{m\}; F \cup (m, J_i))$ to $S_{i}$
\EndIf
\EndFor
\State Trim $S_{i}$
\EndFor
\State $SV^{l_{min} + 1} \gets \emptyset$
\For{\textbf{each} $s = (M_0, \ldots, M_{l_{min}}; M_{fast}; F) \in S_{|J'|}$}
\State $M_{exact} \leftarrow \emptyset$, $M_{unused} \leftarrow \emptyset$
\For{$M_0' \subseteq M_0, \ldots, M_{d_{tiny}}' \subseteq M_{d_{tiny}}$\footnotemark[2]}
\OneLineFor{$i = 0, \ldots, d_{tiny}$}{ $M_{exact}\leftarrow M_{exact} \cup M_i'$,\;$M_{unused} \leftarrow M_{unused} \cup (M_i \setminus M_i')$}
\OneLineFor{$i = d_{tiny}+1, \ldots, l_{min}$}{$M_{unused} \leftarrow M_{unused} \cup M_i$}
\State Transform $M_{unused}$ to $M_{slack}, M_{average}$ and $M_{large}$ for range $l_{\min}+1$
\State Calculate $n_{small}$ and $c$ from $M_{slack}$
\State Add $(M_{exact}; M_{slack}, n_{small}; M_{average}; M_{large}; F)$ to $SV^{l_{\min}+1}$
\EndFor
\EndFor
\State \Return The set $SV^{l_{min}+1}$ after trimming
\EndProcedure
\end{algorithmic}
\caption{An algorithm calculating the set $SV^{l_{\min}+1}$.}
\label{algorithm:prePTASCmax}
\end{algorithm}
\footnotetext[2]{Taking into account equivalence relation.}
Formally, the properties of the algorithm are summarized in the following lemma:
\begin{lemma}
\label{lemma:preHeartOfPTAS}
Let $l_{min} = \lceil 3 \log_{1+\epsilon}\frac{1}{\epsilon}\rceil + 1$.
\Cref{algorithm:prePTASCmax} finds a set of state vectors $SV^{l_{min}+1}$ with the following properties:
\begin{enumerate}[label=\textnormal{(\roman*)}]
\item\label{preHeartOfPTAS-good} $SV^{l_{min}+1}$ contains at least one good state vector if there is an exact \covering of $J$.
\item\label{preHeartOfPTAS-cover} Every $sv^{l_{min}+1} \in SV^{l_{min}+1}$ contains an exact \covering of $\setssum{P}{0}{l_{min}}$.
\item\label{preHeartOfPTAS-size} $|SV^{l_{min}+1}| = \textnormal{O}(m^{d_{tiny} + d_{average} + 2})$ and it can be computed in polynomial time.
\end{enumerate}
\end{lemma}
\begin{proof}
First of all, let us clarify the trimming operation of the sets $S_{i}$ and $SV^{l_{min} + 1}$.
In the case of the former set it means: for each nonequivalent set $M_0', M_1', \ldots, M_{l_{min}}'; M_{large}$ preserve only one tuple in $S_i$.
In the case of the latter set it means: for each nonequivalent set $M_{exact}, M_{average}, M_{large}$, and a number $n_{small} \in \setk{m}$ preserve only the tuple $(M_{exact}; M_{slack}, n_{small}; M_{average}; M_{large})$ in $SV^{l_{min} + 1}$ where the capacity of $M_{slack}$ is maximum, if any such tuple exists at all.
Let the optimal \covering be given as $F_{opt}$.
Let us prove that each $S_i$ contains a tuple representing a set of machines equivalent to $M \setminus F_{opt}^{-1}(J_1 \cup \ldots \cup J_i)$.
This claim is obviously true for $i = 0$.
Assume that the claim is true for $S_{i-1}$, we prove that it holds for $S_{i}$.
Consider the tuple $s$ representing a set equivalent to $M \setminus F_{opt}^{-1}(J_1 \cup \ldots \cup J_{i-1})$.
\begin{itemize}
\item
Assume that $F_{opt}^{-1}(J_{i})$ consists of slow machines, that is, $F_{opt}^{-1}(J_{i}) \subseteq \setssum{M}{0}{l_{min}}$.
In this case the second nested {\bfseries for} loop generates all subsets of slow machines which are exact \cover for $J_i$, hence in particular a set equivalent to $F_{opt}^{-1}(J_{i})$.
\item
Assume that $F_{opt}^{-1}(J_{i})$ consists of a single fast machine.
Observe that the {\bfseries if} statement generates an exact \cover by a fast machine.
\end{itemize}
Thus in either case there exists a tuple $s$ in $S_i$ representing the set equivalent to $M \setminus F_{opt}^{-1}(J_1 \cup \ldots \cup J_{i})$.
As a consequence, there exists a tuple $s \in SV^{l_{min}+1}$ that represents the machines equivalent to $M \setminus F_{opt}^{-1}(J')$, which establishes \ref{preHeartOfPTAS-good}.
An observation that for any $i \in [l_{min}+1]$ each tuple $s \in S_i$ contains an exact \covering of $\setssum{P}{0}{i-1}$ establishes \ref{preHeartOfPTAS-cover}.
The observation can be directly inferred from \cref{algorithm:prePTASCmax}.
In order to prove \ref{preHeartOfPTAS-size} we start by noting that $|S_i| \le (m + 1)^{l_{min} + 2}$ as any coordinate of any vector $s \in S_i$ can be expressed as a value from $\{0, \ldots, m\}$ (remember that we identify sets equivalent under $c^*$) and the number of coordinates of $s$ is equal to $l_{min} + 2$.
Moreover, each $s \in S_{i - 1}$ generates at most $(m + 1)^{l_{min} + 1} + 1$ potential elements in $S_i$.
Checking if the generated element corresponds to a feasible covering of $J_i$ and copying the \covering of $J_1, \ldots, J_i$ can be done in $\textnormal{O}(m)$ time.
Thus the total time complexity of constructing $S_i$ from $S_{i - 1}$ is also $\textnormal{O}(m^{2l_{min} + 4})$.
The trimming operation can be done in time $\textnormal{O}(m^{2l_{min} + 4})$, due to the fact that the target set has size $\textnormal{O}((m + 1)^{l_{min} + 2})$ and the number of entries that have to be visited is $\textnormal{O}(m^{2l_{min} + 3})$ and the entries are of length $\textnormal{O}(m)$.
These observations follow from the fact that $\subseteq$ is taken with respect to the equivalence relation, hence only the number of the machines taken from each group matters.
Since $|J'| \le n$, finding $S_{|J'|}$ requires $\textnormal{O}(nm^{2l_{min} + 4})$ time.
The transformation of every $s \in S_{|J'|}$ to a set of its corresponding state vectors in $\sv^{l_{min}+1}$ is simple.
The first step is composed of two parts.
The first is to divide the set of unassigned machines into $M_{exact}$, the tiny\xspace machines designed to form exact \covers, and the others (there are $\textnormal{O}(m^{d_{tiny}})$ nonequivalent partitions).
After the division, the second step is to turn the divided sets to a state vector for $l_{min}+1$ directly using the definition of the state vector.
Again the division and construction can be done in total time $\textnormal{O}(m^{d_{tiny} + 1})$.
Hence for all the tuples this gives time $\textnormal{O}(m^{l_{min} + d_{tiny} + 3})$ and the total number of tuples constructed is $\textnormal{O}(m^{l_{min} + d_{tiny} + 2})$.
After the construction the set of tuples is trimmed so that for each nonequivalent $(M_{exact}; n_{small}; M_{average}; M_{large})$ (at most $\textnormal{O}(m ^{d_{tiny} + 1 + d_{average} + 1})$ entries) only the entry with the largest capacity of $M_{slack}$ is preserved.
Together this gives time $\textnormal{O}(n \cdot m^{2l_{min} +4} + m^{l_{min} + d_{tiny} + 3} + m^{d_{tiny} + d_{average} + 3}) = \textnormal{O}(nm^{2l_{min} + 4})$.
This requires a polynomial time in $n$ and $m$ for each element of $S_{|J'|}$, thus establishing the result.
\end{proof}
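The trimming steps used above (keep one representative per equivalence class, preferring maximum slack capacity) share a common pattern, sketched below; \texttt{signature} and \texttt{slack\_capacity} are hypothetical callbacks extracting the equivalence key and the slack capacity of a tuple:

```python
def trim(tuples, signature, slack_capacity):
    """Keep, for each equivalence signature, only the tuple whose
    slack capacity is maximum."""
    best = {}
    for t in tuples:
        key = signature(t)
        if key not in best or slack_capacity(t) > slack_capacity(best[key]):
            best[key] = t
    return list(best.values())
```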
\subsubsection{Finding a good state vector for range $l+1$ using a good state vector for range $l$}
Now we proceed to the essence of the algorithm: generating and merging sets of state vectors after constructing a covering of $P_l$ based on state vectors for range $l$.
During generation of sets of \emph{candidate state vectors} for every $sv^{l} \in SV^{l}$ (denoted as $CSV(sv^{l})$) three invariants are preserved:
\begin{enumerate}[label=\textnormal{(\roman*)}]
\item If $sv^{l}$ contains a nice \cover[$(1-\epsilon)$]ing for $P_0 \cup \ldots \cup P_{l - 1}$, then its every candidate state vector contains a nice \cover[$(1-\epsilon)$]ing for $P_0 \cup \ldots \cup P_l$.
\item If $sv^{l}$ was good, then at least one state vector among its candidate state vectors is good.
\item For any $sv^{l} \in SV^{l}$ the cardinality of $CSV(sv^{l})$ is polynomial with respect to $n$ and $m$.
\end{enumerate}
By these invariants, it is sufficient to merge all $CSV(sv^{l})$ into $SV^{l+1}$ in such a way that if at least one $CSV(sv^{l})$ contains a good vector, then $SV^{l+1}$ also contains a good vector.
The key ideas for generation of all candidate state vectors from a given state vector $\sv^{l}$, presented in \Cref{algorithm:PTASCmax}, are as follows:
\begin{enumerate}
\item
The first is to consider all nonequivalent sets of tiny\xspace machines reserved for tiny exact \covers, \averagemachine[$l$] machines, and \largemachine[$l$] machines.
By checking all possibilities one has to match sets equivalent to those present in $F_{opt}^{-1}(P_l)$.
\item
The second is to consider all possible values of two other numbers, checking of course whether they are compatible with $\sv^l$:
\begin{itemize}
\item The number of \emph{machines} which are \emph{\smallmachine[$l$]} and \emph{used} for slack \exactcovers of parts in $P_l$ in $F_{opt}$ (hence everywhere below denoted as $msu$) as the fastest machines,
\item The number of \emph{machines} which are \emph{\smallmachine[$l$]} and \emph{transferred} (hence everywhere below denoted as $mst$) and used for slack \exactcovers of parts in $\setssum{P}{l+1}{l_{max}}$ in $F_{opt}$ as the fastest machines.
In the algorithm we reserve them to guarantee that some state vector constructed after constructing \covers for $P_l$ is good.
\end{itemize}
A correct guess of the last value guarantees that if in the optimal \covering in further ranges there is some number of parts that are covered by slack \exactcovers where the fastest machine is \smallmachine[$l$], then the set of unused machines allows us to construct a slack \cover[$(1-\epsilon)$] for this number of parts.
\item
The last idea is to verify (using \Cref{algorithm:interPTASCmax}) whether the guess is feasible.
\end{enumerate}
The intuition behind \Cref{algorithm:interPTASCmax} can be summed up in the following two sentences.
Assume that for a given range $P_l$ there is a given amount of resources and a given number of parts that have to be covered by slack \exactcovers.
Then there exists an algorithm which:
\begin{itemize}
\item either calculates a lower bound on minimum total capacity of slack machines required in any exact \covering of $P_l$ under specified resources and conditions, and constructs a nice \cover[$(1-\epsilon)$]ing of $P_l$ using such an amount of slack capacity;
\item or it proves that no exact \covering of $P_l$ with the provided resources exists.
\end{itemize}
Let us start with the description of the key procedure.
\begin{algorithm}[H]
\algnewcommand{\LeftComment}[1]{\Statex \hspace{\algorithmicindent}\(\triangleright\) #1}
\begin{algorithmic}[1]
\Procedure{Check-Covering}{$M_{exact}^*$, $M_{slack}^*$, $n_{msu}^*$, $M_{average}^*$, $M_{large}^*$, $P_l = \{J_1, \ldots, J_{|P_l|}\}$}
\State Let $M_{msu}^*$ be the set of $n_{msu}^*$ fastest \smallmachine[$l$] machines in $M_{slack}^*$
\State $S_0 \leftarrow \{(M_{exact}^*, M_{slack}^* \setminus M_{msu}^*, M_{msu}^*, M_{average}^*, M_{large}^*, \emptyset)\}$
\For {$i = 1, \ldots, |P_l|$}
\State $S_i = \emptyset$
\For{$s = (M_{exact}, M_{slack}, M_{msu}, M_{average}, M_{large}, F) \in S_{i-1}$}
\LeftComment{\textbf{Case I}: tiny exact \cover:}
\For{\textbf{each} $M_{exact}' \subseteq M_{exact}$}
\IfThen{$M_{exact}'$ is an exact \cover of $J_i$}{add $s \setminus M_{exact}'$\footnotemark\; to $S_i$ }
\EndFor
\LeftComment{\textbf{Case II}: a single \largemachine[$l$] machine as an exact \cover:}
\IfThen{$m \in M_{large}$}{add $s \setminus \{m\}$ to $S_i$}
\LeftComment{\textbf{Case III}: \cover[$(1-\epsilon)$] consisting of \averagemachine[$l$] and slack machines:}
\For{\textbf{each} nonempty $M_{average}' \subseteq M_{average}$}
\parState{Let $M_{slack}'$ be the maximal (inclusion-wise) set of fastest machines from $M_{slack}$ such~that $\sum_{m \in M_{slack}' \cup M_{average}'} c^*(m) \le |J_i|$}
\IfThenTwoLines{$M_{average}'\cup M_{slack}'$ is a \cover[$(1-\epsilon)$] of $J_i$}{Add $s \setminus (M_{average}'\cup M_{slack}')$ to $S_i$}
\EndFor
\LeftComment{\textbf{Case IV}: slack \cover[$(1-\epsilon)$]:}
\State Let $m_{msu}$ be the fastest machine from $M_{msu}$
\parState{Let $M_{slack}'$ be the maximal (inclusion-wise) set of fastest machines from $M_{slack}$ such~that $\sum_{m \in M_{slack}' \cup \{m_{msu}\}} c^*(m) \le |J_i|$}
\IfThenTwoLines{$\{m_{msu}\} \cup M_{slack}'$ is a \cover[$(1-\epsilon)$] of $J_i$}{Add $s \setminus (\{m_{msu}\} \cup M_{slack}')$ to $S_i$}
\EndFor
\parState{Trim $S_i$}
\EndFor
\IfThenElseTwoLines{there exists $s = (\emptyset, M_{slack}^{**}, \emptyset, \emptyset, \emptyset, F)$ in $S_{|P_l|}$}{\Return $s$ where $\sum_{m \in s.M_{slack}^{**}} c^*(m)$ is maximum}{\Return \texttt{NO}}
\EndProcedure
\end{algorithmic}
\caption
{
The following algorithm either proves that there is no exact \covering for a range $P_l$, or calculates a nice \cover[$(1-\epsilon)$]ing for this range.
}
\label{algorithm:interPTASCmax}
\end{algorithm}
\footnotetext{As a shorthand, $s \setminus M$ denotes a tuple $s = (M_{exact}, M_{slack}, M_{msu}, M_{average}, M_{large}, F)$ with machines from $M$ removed and where $\bigcup_{m \in M}(m, J_i)$ is added to $F$.}
\begin{lemma}
Let the $l$-th range of parts $P_l = (J_1, \ldots, J_{|P_l|})$, where $l \ge l_{min}+1$, be given.
Let a number $n_{msu}^*$ be given.
Let also the following sets of distinct machines be given:
\begin{itemize}
\item $M_{exact}^*$ -- a set of tiny\xspace machines that can be used for tiny exact \covers exclusively;
\item $M_{slack}^*$ -- a set of tiny\xspace and \smallmachine[$l$] machines containing at least $n_{msu}^*$ \smallmachine[$l$] machines;
\item $M_{average}^*$ -- a set of \averagemachine[$l$] machines;
\item $M_{large}^*$ -- a set of \largemachine[$l$] machines.
\end{itemize}
Then \cref{algorithm:interPTASCmax}:
\begin{enumerate}
\item\label{theorem:interPTASproperties:exactcover}
Either determines that there is no exact \covering $F_{opt}$ for $P_l$ with machines equivalent to $M_{exact}^*$ forming tiny exact \covers, \averagemachine[$l$] machines equivalent to $M_{average}^*$ assigned, and $|M_{large}^*|$ \largemachine[$l$] machines assigned, and assigning an amount of slack capacity bounded by the amount given in $M_{slack}^*$ in such a way that $n_{msu}^*$ parts are covered by slack \exactcovers.
\item
Or it calculates a nice \cover[$(1-\epsilon)$]ing of $P_l$: with an assignment of $M_{exact}^*, M_{average}^*, M_{large}^*$, where $n_{msu}^*$ parts are covered by slack \cover[$(1-\epsilon)$]s, and the amount of slack capacity assigned is bounded by the capacity of $M_{slack}^*$.
Moreover, the amount of slack capacity assigned is a lower bound on the slack capacity used in any exact \covering of $P_l$ described in (\ref{theorem:interPTASproperties:exactcover}), provided that any such exact \covering exists.
\end{enumerate}
\label{theorem:interPTASproperties}
\end{lemma}
\begin{proof}
First let us clarify what it means to trim $S_i$.
It means that the algorithm considers all nonequivalent sets $M_{exact}, M_{average}, M_{large}$ and the number of machines in $M_{msu}$.
For each unique quadruple considered it preserves only a tuple with the largest capacity of $M_{slack}$.
Assume that there exists an exact \covering $F_{opt}$ of $P_l$ with the advertised properties.
Let the set of machines assigned to $P_l$ in $F_{opt}$ be denoted as $M$ (keep in mind that all the machines have to be assigned).
By a slight abuse of the notation we use the same symbols as for the optimal \covering and for the set of machines.
Keep in mind that the sets $M_{exact}^*$, $M_{average}^*$ and $M_{large}^*$ have equivalent sets in $M$.
This might not be the case for $M_{slack}^*$ -- in the case of this set we are only interested in the total capacity of the machines in the set.
Moreover, the capacity of tiny-non-exact\xspace and \smallmachine[$l$] machines in $M$ may be less than the capacity of $M_{slack}^*$.
In particular, let us assume that the capacity of the tiny-non-exact\xspace and \smallmachine[$l$] machines in $M$ is the least possible under the specified conditions.
We analyze $F_{opt}$ part by part, considering what the set of machines not yet assigned looks like.
To prove the theorem we prove the following invariant.
For every $i \in [|P_l|]$ in $S_i$ there exists a tuple $s_i = (M_{exact}; M_{slack}, M_{msu}; M_{average}; M_{large}; F)$ such that:
\begin{itemize}
\item The set $M_{exact}$ is equivalent to the machines forming tiny exact \covers in $M \setminus F_{opt}^{-1}(\setssum{J}{1}{i})$.
\item The set $M_{average}$ is equivalent to the average machines in $M \setminus F_{opt}^{-1}(\setssum{J}{1}{i})$.
\item The number $|M_{large}|$ is exactly equal to the number of large machines in $M \setminus F_{opt}^{-1}(\setssum{J}{1}{i})$.
\item $|M_{msu}|$ is exactly equal to the number of parts $J_{j > i}$ in $P_l$ that are covered by slack \exactcover in $F_{opt}$.
\item The capacity of $M_{slack}^* \setminus (M_{slack} \cup M_{msu})$ (that is, the slack capacity assigned by the algorithm) is at most the slack capacity in $F_{opt}^{-1}(\setssum{J}{1}{i})$.
\end{itemize}
Moreover each $S_i$ consists of tuples containing a nice \cover[$(1-\epsilon)$]ing of $J_1, \ldots, J_i$.
For $i = 0$ it is trivially true, since the \cover[$(1-\epsilon)$]ing constructed is empty and $M \setminus F_{opt}^{-1}(\emptyset) = M$.
Assume that the invariant holds for $i-1$ and let $s_{i-1}$ be the corresponding tuple.
Consider what $F_{opt}^{-1}(J_i)$ looks like:
\begin{description}
\item[(Case I)] It consists of tiny machines only.
In this case the algorithm also constructs an exact \cover for $J_i$ using a set of machines equivalent to $F_{opt}^{-1}(J_i)$.
\item[(Case II)] It consists of one \largemachine[$l$] machine.
In this case the algorithm also constructs an exact \cover for $J_i$ consisting of one large machine.
\item[(Case III)] It consists of \averagemachine[$l$], and perhaps some tiny\xspace and \smallmachine[$l$] machines.
In this case the algorithm constructs a nice \cover[$(1-\epsilon)$] for $J_i$ with an equivalent set of \averagemachine[$l$] machines and some set of \smallmachine[$l$] and tiny\xspace machines.
\item[(Case IV)] It consists of tiny\xspace and \smallmachine[$l$] machines (in particular, it contains at least one \smallmachine[$l$] machine).
The algorithm also produces a nice \cover[$(1-\epsilon)$] for $J_i$ consisting of a machine from $M_{msu}$ and some \smallmachine[$l$] and tiny\xspace machines.
\end{description}
Observe that the algorithm produces exact \cover in the first two cases.
In Cases III and IV it produces a relatively almost exact \cover (perhaps even an exact \cover) when only machines with capacity greater than $\epsilon^{-1}$ are assigned, or it produces an absolutely almost exact \cover (perhaps even an exact \cover) when smaller machines are assigned.
The fact that in Case III a nice \cover[$(1-\epsilon)$]ing is constructed follows from the fact that the algorithm is applied from range $P_{l_{min}+1}$.
Recall that $l_{min} = \ceil{3 \log_{1+\epsilon}\frac{1}{\epsilon}} + 1$ and $d_{average} = \ceil{\log_{1+\epsilon}\frac{1}{\epsilon}} + 1$.
Thus, for any $l > l_{min}$ we know that $\epsilon^{-2} < (1+\epsilon)^{l - d_{average}}$ -- hence any \averagemachine[$l$] machine has capacity greater than $\epsilon^{-2}$.
Finally, the fact that in Case IV a nice \cover[$(1-\epsilon)$]ing is constructed follows from the observation that all the machines in $M_{slack}$ that are \smallmachine[$l$], hence also those reserved in $M_{msu}$, have capacity greater than $\epsilon^{-2}$.
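The capacity bound for \averagemachine[$l$] machines used above can also be checked numerically. The following sketch (illustrative only, not part of the proof) verifies the inequality $\epsilon^{-2} < (1+\epsilon)^{l - d_{average}}$ at the smallest relevant value $l = l_{min} + 1$ for several sample values of $\epsilon$; it then holds for all larger $l$ as well.

```python
import math

# Numeric sanity check (outside the proof) of the claim that for any
# l > l_min every average-l machine has capacity greater than eps^{-2}.
# We use l_min = ceil(3 log_{1+eps}(1/eps)) + 1 and
# d_average = ceil(log_{1+eps}(1/eps)) + 1, as in the text.
def check(eps: float) -> bool:
    log_base = lambda x: math.log(x) / math.log(1 + eps)
    l_min = math.ceil(3 * log_base(1 / eps)) + 1
    d_average = math.ceil(log_base(1 / eps)) + 1
    l = l_min + 1  # smallest l with l > l_min
    return eps ** -2 < (1 + eps) ** (l - d_average)

assert all(check(e) for e in [0.5, 0.25, 0.1, 0.05, 0.01])
```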
In the last two cases no capacity of tiny\xspace and \smallmachine[$l$] machines is wasted: the parts are large compared to the capacities of machines of those types, and the machines are assigned in such a way that the algorithm never overfills the required capacity.
Moreover, in both cases the \cover can be constructed.
That is, despite the fact that some slack is stored in $M_{msu}$ and is potentially unavailable, the amount of slack is still sufficient.
In Case III:
\begin{itemize}
\item Either there are no more parts covered by slack \exactcovers in $F_{opt}$.
In this case $M_{slack}$ contains the necessary slack by the inductive assumption.
\item Or there are more parts covered by slack \exactcovers in $F_{opt}$; in this case each machine in $M_{msu}$ corresponds to a part that is covered by a slack \exactcover later in $F_{opt}$.
For each part that is covered by a slack \exactcover in $F_{opt}$ but is not covered yet, ``almost all'' of the slack capacity used to cover it in $F_{opt}$ is stored in $M_{slack}$.
\end{itemize}
Case IV is similar to Case III:
\begin{itemize}
\item Either $J_i$ is the last part covered by slack \exactcover in $F_{opt}$.
Then, by induction the capacity in $M_{slack} \cup M_{msu}$ is sufficient to form a \cover for $J_i$.
\item Or there are more parts covered by slack \exactcover in $F_{opt}$; in this case every machine left in $M_{msu}$ certifies that there is more than enough slack.
\end{itemize}
In each case the desired tuple exists before the trimming, and it is easy to observe that the trimming rule preserves a tuple with no lower remaining slack capacity.
Due to this, we are sure that $s_i$ has at least as much capacity in $M_{slack} \cup M_{msu}$ as the capacity of \smallmachine[$l$] and tiny-non-exact\xspace machines present in $M \setminus F_{opt}^{-1}(\{J_1, \ldots, J_i\})$.
Therefore, the invariant holds for $i$ as well.
Finally, consider the tuple $(\emptyset, M_{slack}^{**}, \emptyset, \emptyset, \emptyset, F)$, where $M_{slack}^{**}$ has the maximum capacity, present after constructing \covers for all parts.
By the invariant it has to be the case that the capacity of $M_{slack}^* \setminus M_{slack}^{**}$ is at most the slack capacity assigned to $P_l$ in $F_{opt}$.
\end{proof}
\begin{corollary}
\cref{algorithm:interPTASCmax} is polynomial time.
\end{corollary}
\begin{proof}
The first loop makes $\textnormal{O}(n)$ iterations.
The second loop is over a set of $\textnormal{O}(m^{d_{tiny} + d_{average} + 2})$ entries.
Then the following cases are considered.
In Case I there is a loop over $\textnormal{O}(m^{d_{tiny}})$ entries.
Case II has complexity $\textnormal{O}(1)$.
In Case III there is an outer loop over $\textnormal{O}(m^{d_{average}})$ entries and an inner loop over $\textnormal{O}(m)$ entries.
In Case IV there is only a loop over $\textnormal{O}(m)$ entries.
The entries have size $\textnormal{O}(m)$, due to the fact that we have to store the constructed \covering.
The trimming rule simply proceeds over all produced entries (a set of $\textnormal{O}(m^{2d_{tiny} + 2d_{average} + 3})$ entries of size $\textnormal{O}(m)$) and produces the set that again has $\textnormal{O}(m^{d_{tiny} + d_{average} + 2})$ entries.
Together this gives time complexity $\textnormal{O}(nm^{2d_{tiny} + 2d_{average} + 4})$.
Hence, the algorithm is polynomial time.
\end{proof}
\begin{algorithm}[H]
\begin{algorithmic}[1]
\Procedure{Generate-Candidate-State-Vectors}{$\sv^l \in SV^{l}$}
\State $CSV' \gets \emptyset$
\For{\textbf{each} $M_{exact}^* \subseteq \sv^l.M_{exact}$, $M_{average}^* \subseteq \sv^l.M_{average}$, $M_{large}^* \subseteq \sv^l.M_{large}$}
\For{\textbf{each} $n_{msu}^*$, $n_{mst}$}
\IfThen{$n_{msu}^*, n_{mst}$ inconsistent with $\sv^l$}{\textbf{continue}}
\State Let $M_{mst}$ be the $n_{mst}$ machines from $\sv^l.M_{slack}$ of biggest capacity
\State Apply \Cref{algorithm:interPTASCmax} for $(M_{exact}^*, \sv^l.M_{slack} \setminus M_{mst}, n_{msu}^*, M_{average}^*, M_{large}^*, P_l)$
\If{\Cref{algorithm:interPTASCmax} returned a valid $M_{slack}^{**}$ and covering $F$}
\State $csv' \gets \sv^l \setminus M_{exact}^* \cup (\sv^l.M_{slack} \setminus (M_{slack}^{**} \cup M_{mst})) \cup M_{average}^* \cup M_{large}^*$\footnotemark
\State Add $csv'$ to $CSV'$
\EndIf
\EndFor
\EndFor
\State Transform $CSV'$ to $CSV$, i.e. to state vectors for the $(l+1)$-th range
\State \Return $CSV$
\EndProcedure
\end{algorithmic}
\caption{The algorithm generating a set of candidate state vectors for a given state vector}
\label{algorithm:PTASCmax}
\end{algorithm}
\footnotetext{As a shorthand, $s \setminus M$ denotes the tuple $s = (M_{exact}, M_{slack}, M_{msu}, M_{average}, M_{large}, F)$ with the machines from $M$ removed from $s$, and where $F$ is added to $s.F$.}
\begin{lemma}
\label{lemma:HeartOfPTAS-good}
If $sv^{l} \in SV^{l}$ is a good state vector, then \Cref{algorithm:PTASCmax} returns a set of vectors $CSV(sv^{l})$ such that at least one $csv^{l} \in CSV(sv^{l})$ is also a good state vector.
Moreover, if each $sv^{l}$ contains an \cover[$(1-\epsilon)$]ing of $\setssum{P}{0}{l}$, then each $csv^l \in CSV^{l}(sv^l)$ contains an \cover[$(1-\epsilon)$]ing of $\setssum{P}{0}{l+1}$.
\end{lemma}
\begin{proof}
Let us consider the optimal \covering of $J$.
Let the number of parts in $P_l$ that are covered by slack \exactcover in the optimal \covering be exactly equal to $n_{msu}^*$.
There are also $n_{mst}$ parts in $\setssum{P}{l+1}{l_{max}}$ covered by slack \exactcover consisting of machines that are tiny\xspace or \smallmachine[$l$].
In particular, this means that there are at least $n_{mst} + n_{msu}^*$ machines in $M \setminus F_{opt}^{-1}(P_0 \cup \ldots \cup P_{l-1})$ that are \smallmachine[$l$].
This is because each of the mentioned $n_{mst}$ parts can only be covered by the tiny-non-exact\xspace and \smallmachine[$l$] machines $M'$ present in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{l-1})$.
This also means that the capacity of tiny-non-exact\xspace and \smallmachine[$l$] machines that is assigned to parts in $P_l$ in the optimal \covering is upper bounded by the capacity of tiny-non-exact\xspace and \smallmachine[$l$] machines present in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{l-1})$ with the capacity of $M'$ (lower bounded by $n_{mst} \cdot \floor{(1+\epsilon)^{l+1}}$) removed.
\Cref{algorithm:PTASCmax} tries every possible combination of $M_{exact}^*, M_{average}^*, M_{large}^*$.
Hence, we are certain that at some point the algorithm proceeds with sets $M_{exact}^*, M_{average}^*, M_{large}^*$ equivalent to the ones assigned to $P_l$ in the optimal \covering.
Moreover, since it tries every value of $n_{msu}^*$ (from $0$ to $m$) we are certain to go through the iteration in which $n_{msu}^*$ is equal to the number of parts in $P_l$ covered by slack \exactcover in the optimal \covering.
Finally, since it tries every value of $n_{mst}$ (again from $0$ to $m$) we are sure to go through an iteration such that $n_{mst}$ is exactly equal to the value derived from the optimal \covering.
Since the vector $\sv^l$ is good, we must have $\sv^l.n_{small} \ge n_{msu}^* + n_{mst}$; hence the algorithm proceeds with the values for which the inequality holds.
Hence, the amount of slack available in $\sv^l.M_{slack} \setminus M_{mst}$ is at least the capacity of tiny-non-exact\xspace and \smallmachine[$l$] machines in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{l-1})$ (because the vector is good) with the capacity of $M_{mst}$ (upper bounded by $n_{mst} \cdot \epsilon\floor{(1+\epsilon)^{l+1}}$) removed.
Moreover, the number of \smallmachine[$l$] machines in $\sv^l.M_{slack} \setminus M_{mst}$ is at least $n_{msu}^*$, again by the fact that the vector is good.
By this we know that we can apply \cref{algorithm:interPTASCmax} to construct a nice \cover[$(1-\epsilon)$]ing of $P_l$ and calculate a lower bound on capacity of tiny-non-exact\xspace and \smallmachine[$l$] machines that has to be assigned in any exact \covering (hence in particular in the optimal \covering) on $P_l$.
This means that after the execution the algorithm constructs a tuple representing unassigned machines such that:
\begin{itemize}
\item It has a set of tiny\xspace machines reserved for tiny exact \covers that is equivalent to the set of tiny-exact\xspace machines in $M \setminus F_{opt}^{-1}(P_0 \cup \ldots \cup P_l)$.
Similarly, it has equivalent sets of \averagemachine[$l$] and \largemachine[$l$] machines.
\item It has at least $n_{mst}$ unassigned machines that are \smallmachine[$l$].
Moreover, the number of unassigned machines that are \smallmachine[$(l+1)$] and \averagemachine[$l$] is exactly (due to the guess) the number of machines that are \smallmachine[$(l+1)$] and \averagemachine[$l$] in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{l})$.
\item It has the amount of capacity remaining (in $M_{slack}^{**} \cup M_{mst}$) that is at least equal to the total capacity of tiny-non-exact\xspace and \smallmachine[$l$] machines in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{l})$.
\end{itemize}
After the transformation of vectors the unassigned machines that are \smallmachine[$(l+1)$] but are not \smallmachine[$l$] contribute exactly the same capacity as machines from $M \setminus F_{opt}^{-1}(\setssum{P}{0}{l})$ that are \smallmachine[$(l+1)$] but are not \smallmachine[$l$].
Clearly, such a tuple $csv' \in CSV'$ is transformed into a good tuple $csv \in CSV$.
Each tuple in $CSV$ corresponds to an \cover[$(1-\epsilon)$]ing of $\setssum{P}{0}{l}$, by the properties of \cref{theorem:interPTASproperties} and the assumption that each tuple of $SV^{l}$ contains an \cover[$(1-\epsilon)$]ing of $\setssum{P}{0}{l-1}$.
\end{proof}
As a side note, observe that we did not include any trimming rules here.
Hence, the sets $CSV$ may contain duplicated entries.
However, this does not matter in light of \cref{lem:cutting-sv}, considered further on.
\begin{corollary}
\Cref{algorithm:PTASCmax} is polynomial time and produces a set of size $\textnormal{O}(n^2m^{d_{tiny} + d_{average} + 2})$.
\end{corollary}
\begin{proof}
The first loop is over $\textnormal{O}(m^{d_{tiny} + d_{average} + 2})$ entries.
The second loop is over $\textnormal{O}(n^2)$ entries.
Checking whether the guess is feasible can be done in $\textnormal{O}(1)$ time.
Taking subsets of $n_{mst}$ machines can be done in $\textnormal{O}(m)$ time.
\Cref{algorithm:interPTASCmax} has $\textnormal{O}(nm^{2d_{tiny} + 2d_{average} + 4})$ time complexity.
Construction of new tuples can be done in $\textnormal{O}(m)$ time.
Transformation can be done in $\textnormal{O}(m)$ time.
Together this gives $\textnormal{O}(n^3 m^{3d_{tiny} + 3d_{average} + 6})$ time.
The produced set has cardinality $\textnormal{O}(n^2 m^{d_{tiny} + d_{average} + 2})$ entries of size $\textnormal{O}(m)$.
\end{proof}
Now, for every $l = l_{min} + 1, \ldots, l_{max}$, let $SV^{l}$ be defined as a subset of $\bigcup_{sv \in SV^{l - 1}} CSV(sv)$ as follows: for every $M_{exact}$, $M_{average}$, $M_{large}$ (there are $\textnormal{O}(m^{d_{tiny}})$, $\textnormal{O}(m^{d_{average}})$, and $\textnormal{O}(m)$ nonequivalent sets, respectively) and every $n_{small} \in \setk{m}$, we keep only the state vector $(M_{exact}; M_{slack}, n_{small}; M_{average}; M_{large})$ with the biggest capacity of $M_{slack}$ machines (with ties broken arbitrarily).
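The trimming rule above can be sketched as follows. This is an illustrative sketch only, with hypothetical data representations (state vectors as dictionaries, multisets as capacity lists); it is not the paper's implementation.

```python
# Sketch of the trimming rule producing SV^l: for every combination of
# M_exact, n_small, M_average and M_large, keep only a state vector with
# the biggest total capacity of M_slack (ties broken arbitrarily).
def trim(candidates):
    best = {}
    for sv in candidates:
        # Multisets are keyed by sorted capacity tuples, since machines
        # of equal capacity are interchangeable.
        key = (tuple(sorted(sv["M_exact"])), sv["n_small"],
               tuple(sorted(sv["M_average"])), tuple(sorted(sv["M_large"])))
        if key not in best or sum(sv["M_slack"]) > sum(best[key]["M_slack"]):
            best[key] = sv
    return list(best.values())

svs = [
    {"M_exact": [3, 3], "n_small": 2, "M_average": [17], "M_large": [], "M_slack": [3, 7]},
    {"M_exact": [3, 3], "n_small": 2, "M_average": [17], "M_large": [], "M_slack": [11, 7]},
]
# Both candidates share the same key, so only the one with more slack survives.
assert trim(svs)[0]["M_slack"] == [11, 7]
```

One pass over the candidate set suffices, which matches the single-pass trimming cost used in the complexity analysis.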
\begin{lemma}
\label{lem:cutting-sv}
For any $l \in \{l_{min} + 1, \ldots, l_{max}\}$:
\begin{enumerate}[label=\textnormal{(\roman*)}]
\item If $SV^{l - 1}$ has at least one good state vector, then $SV^{l}$ also has at least one good state vector.
\item If any vector in $SV^{l - 1}$ contains a nice \cover[$(1-\epsilon)$]ing of $P_0 \cup \ldots \cup P_{l-1}$, then any vector in $SV^{l}$ contains a nice \cover[$(1-\epsilon)$]ing of $P_0 \cup \ldots \cup P_l$.
\item The set $\bigcup_{sv \in SV^{l - 1}} CSV(sv)$ has polynomial size and can be calculated in polynomial time.
\item $SV^{l}$ has $\textnormal{O}(m^{d_{tiny} + d_{average} + 2})$ state vectors for every $l = l_{min} + 1, \ldots, l_{max}$.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove the first property, observe that if $sv \in SV^{l - 1}$ is a good state vector, then $CSV(sv)$ contains at least one good state vector, by \Cref{lemma:HeartOfPTAS-good}.
Now it is sufficient to note that for every good state vector $sv' \in \bigcup_{sv \in SV^{l-1}} CSV(sv)$ that is not in $SV^{l}$ there has to be another state vector $sv'' \in SV^{l}$ with the same values of $M_{exact}$, $n_{small}$, $M_{average}$ and $M_{large}$ but at least as large a capacity of $M_{slack}$, so $sv''$ has to be a good state vector as well.
The second property follows directly from \cref{lemma:HeartOfPTAS-good}.
By the construction, there is at most one state vector in $SV^{l}$ for each $M_{exact}$, $n_{small}$, $M_{average}$ and $M_{large}$.
Thus the cardinality of $SV^{l}$ is $\textnormal{O}(m^{d_{tiny} + d_{average} +2})$.
Hence, the number of produced candidate state vectors is $\textnormal{O}(n^2m^{d_{tiny} + d_{average} + 2})$ for each vector.
Therefore, before the trimming the produced set has cardinality $\textnormal{O}(n^2m^{2d_{tiny} + 2d_{average} + 4})$.
Also, they are produced in total time $\textnormal{O}(m^{d_{tiny} + d_{average} + 2} \cdot n^{3}m^{3d_{tiny} + 3d_{average} + 6}) = \textnormal{O}(n^{3}m^{4d_{tiny} + 4d_{average} + 8})$.
Hence, similarly to the previous cases, the trimming can be done by passing the constructed set $\bigcup_{sv \in SV^{l - 1}} CSV(sv)$ of $\textnormal{O}(n^2m^{2d_{tiny} + 2d_{average} + 4})$ entries of size $\textnormal{O}(m)$ once.
Together this gives the total time required to produce $SV^{l}$ from $SV^{l-1}$ equal to $\textnormal{O}(n^{3}m^{4d_{tiny} + 4d_{average} + 8})$.
This proves the last two points.
\end{proof}
The following conclusion follows directly from \cref{lemma:preHeartOfPTAS} and \cref{lem:cutting-sv}.
\begin{corollary}
\label{lem:best-sv}
If there is an exact \covering of $J$, then $SV^{l_{max}+1}$ is nonempty.
Any state vector from $SV^{l_{max}+1}$ contains an \cover[$(1-\epsilon)$]ing of $J$.
\end{corollary}
Summing up all the observations, we obtain the following theorem.
\begin{theorem}
\label{thm:killer-ptas}
There exists a PTAS for $Q|G = \textit{complete multipartite}, p_j=1|C_{max}$.
\end{theorem}
\begin{proof}
To avoid unnecessary details we always execute the presented algorithms with $\epsilon \le \frac{1}{2}$.
Assume that there exists an exact \covering within time $T$.
Under such assumptions $SV^{l_{max}+1}$ has to contain at least one good state vector with some nice \cover[$(1-\epsilon)$]ing $F$, by \Cref{lem:best-sv}.
Observe that if we multiply $T$ by $\left(\frac{1}{1-\epsilon} + \epsilon\right)$, then the new capacities of the form $\floor*{c^*\left(\frac{1}{1-\epsilon} + \epsilon\right)}$ guarantee that $F$ is an exact \covering, by \cref{lemma:epsilontoexactcovering}.
Finally, since we used the rounded capacities $c^*(m)$, we need to get back to capacities $c(m)$.
By the fact that $c(m) \le c^*(m) \le (1 + \epsilon) c(m)$ for all $m \in M$, if there is an exact \covering of $J$ for the rounded capacities with respect to $T \left(\frac{1}{1 - \epsilon} + \epsilon\right)$, then it is an exact \covering of $J$ under the true capacities for $T (1 + \epsilon) \left(\frac{1}{1 - \epsilon} + \epsilon\right)$.
To complete the proof of the approximation ratio, we note that for all $\epsilon \in (0, \frac{1}{2}]$ we have
\begin{align*}
(1 + \epsilon) \left(\frac{1}{1 - \epsilon} + \epsilon\right) \le (1 + \epsilon) (1 + 3 \epsilon) < (1 + 7 \epsilon),
\end{align*}
so our algorithm is a $(1 + 7 \epsilon)$-approximation algorithm.
The complexity of the algorithm is polynomial in $n$ and $m$.
\Cref{lemma:preHeartOfPTAS} establishes that $SV^{l_{min}+1}$ can be found in time $\textnormal{O}(nm^{2l_{min} + 4})$.
By \Cref{theorem:interPTASproperties}, \Cref{lemma:HeartOfPTAS-good}, and \Cref{lem:cutting-sv}, each $SV^l$ for $l = l_{min} + 1, \ldots, l_{max}$ can be found in $\textnormal{O}(n^3m^{4d_{tiny} + 4d_{average} + 8})$ time.
Clearly, the number of nonempty ranges between $P_{l_{min}+1}$ and $P_{l_{max}+1}$ is at most $n$, and we can even optimize the algorithm to iterate over nonempty ranges only.
Together this gives an algorithm of time complexity $\textnormal{O}(nm^{2l_{min} + 4} + n^4m^{4d_{tiny} + 4d_{average} + 8}) = \textnormal{O}(n^4m^{4d_{tiny} + 4d_{average} + 8})= \textnormal{O}(n^4m^{12\ceil{\log_{1+\epsilon}\frac{1}{\epsilon}} + 12})$ for a given $T$.
By combining this with a binary search over possible values of $T$ (there are $\textnormal{O}(mn)$ candidates in total) we complete the proof, obtaining the overall time complexity $\textnormal{O}(\log (nm) \cdot n^4m^{12\ceil{\log_{1+\epsilon}\frac{1}{\epsilon}} + 12})$.
\end{proof}
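The chain of inequalities used for the approximation ratio can be verified numerically. The following sketch (a sanity check outside the proof) tests $(1+\epsilon)\left(\frac{1}{1-\epsilon}+\epsilon\right) \le (1+\epsilon)(1+3\epsilon) < 1+7\epsilon$ over a grid of $\epsilon \in (0, \frac{1}{2}]$.

```python
# Numeric check of the approximation-ratio inequalities: for eps in (0, 1/2],
#   (1 + eps) * (1/(1 - eps) + eps) <= (1 + eps) * (1 + 3*eps) < 1 + 7*eps.
def ratio_ok(eps: float) -> bool:
    lhs = (1 + eps) * (1 / (1 - eps) + eps)
    mid = (1 + eps) * (1 + 3 * eps)
    return lhs <= mid < 1 + 7 * eps

# Check on a fine grid of eps values up to and including 1/2.
assert all(ratio_ok(k / 1000) for k in range(1, 501))
```

The first inequality is tight exactly at $\epsilon = \frac{1}{2}$, which is why the proof restricts the algorithm to $\epsilon \le \frac{1}{2}$.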
\begin{example}
\begin{table}[H]
\centering
\begin{tabularx}{\textwidth}{| C |c c c c || c c c c c c c |}
\hline
Group & $M_{0}$ & $M_{1}$ & $M_{2}$ & $M_{3}$ & $M_{4}$ & $M_{5}$ & $M_{6}$ & $M_{7}$ & $M_{8}$ & $M_{9}$ & $M_{10}$ \\
Capacity &$1$ & $1$ & $2$ & $3$ & $5$ & $7$ & $11$ & $17$ & $25$ & $38$ & $57$ \\
\hline
Number &$0$ & $0$ &$0$ & $39$ & $0$ & $2$ & $4$ & $2$ & $1$ & $0$ & $1$ \\
\hline
\end{tabularx}
\caption
{
By $M_i$ we denote the set of machines of rounded capacity $\floor{(1+\epsilon)^{i}}$; here $\epsilon = 0.5$.
The set of tiny machines is additionally separated by a vertical line.
}
\label{tab:machines}
\qquad
\begin{tabularx}{\textwidth}{| C | c c | | c c c c | c c | }
\hline
Range & $P_1$ & $P_7$ & \multicolumn{4}{ c |}{$P_8$} & \multicolumn{2}{c|}{$P_9$} \\
Sizes in & $[1, 2)$ & $[17, 25)$ & \multicolumn{4}{ c | }{$[25,38)$} & \multicolumn{2}{c|}{$[38,57)$} \\
Part & $J_{1}$ & $J_{2}$ & $J_{3}$ & $J_{4}$ & $J_{5}$ & $J_{6}$ & $J_{7}$ & $J_{8}$ \\
Size & $3$ & $20$ & $25$ & $26$ & $36$ & $37$ & $50$ & $51$ \\
\hline
The optimal \covering & $3$ & $25$ & $3^9$ & $57$ & $3^7, 17$ & $3^9, 11$ & $3, 7^2, 11^3$ & $3^{12},17$\\
\hline
An \cover[$(1-\epsilon)$]ing I& $3$ & $25$ & $57$ & $3^9$ & $11^3$ & $3, 7^2, 17$ & $3^{11}, 17$ & $ 3^{13}, 11$ \\
\hline
An \cover[$(1-\epsilon)$]ing II& $3$ & $25$ & $57$ & $3^9$ & $11, 17$ & $11, 17$ & $3^{8},7^2,11$ & $3^{13},11$ \\
\hline
\end{tabularx}
\caption
{
Parts, their cardinalities, a sample exact \covering referred to as the optimal \covering, a \cover[$(1-\epsilon)$]ing I corresponding to the optimal \covering, and a \cover[$(1-\epsilon)$]ing II unrelated to the optimal \covering.
The parts that are small, i.e., those in the first $l_{min}$ ranges, are separated by a double line.
}
\label{tab:parts}
\qquad
\begin{tabularx}{\textwidth}{| C | c c c c | c c c | c c | c | }
\hline
Multiset of machines & \multicolumn{4}{ c |}{$M_{exact}$} & \multicolumn{3}{c |}{$M_{slack}$} & \multicolumn{2}{c|}{$M_{average}$} & \multicolumn{1}{ c |}{$M_{large}$} \\
Field & $n_0$ & $n_1$ & $n_2$ & $n_3$ & $M_{slack}$ & $n_{small}$ & $c$ & $n_1$ & $n_2$ & $n_{large}$ \\
\hline
Capacities for $l=8$ & $1 $ & $1$ & $2$ & $3$ & $\le 11$ & - & - & $17$ & $25$ & $\ge 38$ \\
$M \setminus F_{opt}^{-1}(\setssum{P}{0}{7})$ & $0$ & $0$ & $0$ & $9$ & $3^{29},7^2,11^4$ & $2$ & $145$ & $2$ & $1$ & $1$ \\
$\sv^8$ & $0$ & $0$ & $0$ & $9$ & $3^{29},7^2,11^4$ & $6$ & $145$ & $2$ & $1$ & $1$ \\
\hline
Capacities for $l=9$ & $1 $ & $1$ & $2$ & $3$ & $\le 17$ & - & - & $25$ & $38$ & $\ge 57$ \\
$M \setminus F_{opt}^{-1}(\setssum{P}{0}{8})$ & $0$ & $0$ & $0$ & $0$ & $3^{13}, 7^2, 11^3, 17$ & $2$ & $103$ & $1$ & $0$ & $0$ \\
$\sv[a]^9$ & $0$ & $0$ & $0$ & $0$ & $3^{28}, 11, 17$ & $2$ & $112$ & $1$ & $0$ & $0$ \\
$\sv[b]^9$ & $0$ & $0$ & $0$ & $0$ & $3^{28}, 7^2, 11^2$ & $4$ & $120$ & $1$ & $0$ & $0$ \\
\hline
\end{tabularx}
\caption
{
Here the sets $M_{exact}, M_{average}$ and $M_{large}$ are represented by the numbers of machines of each capacity, due to the observation that machines of the same capacity are equivalent.
The ``vectors'' $M \setminus F_{opt}^{-1}(\setssum{P}{0}{7})$ and $M \setminus F_{opt}^{-1}(\setssum{P}{0}{8})$ were constructed for convenience; they describe the optimal \covering presented in \cref{tab:parts}.
That is, they characterize exactly $M_{exact}$, $M_{average}$ and $M_{large}$ of a good vector and give lower bounds on $n_{small}$ and capacity of $M_{slack}$.
$n_{small}$ for $F_{opt}$ represents the $2$ parts in ranges $P_8$ or later for which a \smallmachine[$8$] machine (i.e. small from perspective of the $8$-th range) is the fastest machine.
Notice that $F_{opt}^{-1}(J_8)$ is a slack \exactcover, but its fastest machine is \smallmachine[$9$] and not \smallmachine[$8$].
Finally, observe that both $\sv[a]^9$ and $\sv[b]^9$ are good, despite the fact that they have fewer \smallmachine[$9$] machines available than there are in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{8})$.
Moreover, during construction of $\sv[b]^9$ both $n_{msu}^*$ and $n_{mst}$ were guessed incorrectly; despite this a good vector was constructed.
However, only the existence of $\sv[a]^9$ is guaranteed by \cref{lemma:HeartOfPTAS-good}.
}
\label{tab:bv}
\end{table}
As an example, consider the data given by:
\begin{itemize}
\item A precision parameter $\epsilon = 0.5$.
\item Guessed $T = 1$, determining the presented capacities.
\item A set of machines given in \cref{tab:machines}.
The machines are grouped into sets of respective cardinalities.
To avoid introducing an excessive number of symbols, we identify the machines with their capacities and refer to them by these numbers only.
\item A set of parts grouped into ranges given in \cref{tab:parts}.
\item A sample exact \covering chosen to be the optimal \covering $F_{opt}$.
\item Good vectors $\sv^8$, $\sv[a]^9$, and $\sv[b]^9$ given in \cref{tab:bv}.
\end{itemize}
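The rounded capacities listed in \cref{tab:machines} can be reproduced directly from the rounding rule. The following sketch (illustrative only) recomputes the capacity $\floor{(1+\epsilon)^i}$ of each group $M_i$ for $\epsilon = 0.5$.

```python
import math

# Recompute the rounded capacities of groups M_0, ..., M_10 in the example:
# the capacity of a machine in group M_i is floor((1 + eps)^i), with eps = 0.5.
eps = 0.5
capacities = [math.floor((1 + eps) ** i) for i in range(11)]
assert capacities == [1, 1, 2, 3, 5, 7, 11, 17, 25, 38, 57]
```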
Observe that here $l_{min} = 7$, hence $\setssum{P}{0}{7}$ are covered by exact \covers using \cref{algorithm:prePTASCmax}, and a potentially good vector $\sv^8$ is constructed.
As presented in \cref{tab:parts}, notice that $F_{opt}^{-1}(J_3)$, $F_{opt}^{-1}(J_4)$, $F_{opt}^{-1}(J_5)$, and $F_{opt}^{-1}(J_6)$ are, respectively: a set of tiny\xspace machines; a \largemachine[$8$] machine; \averagemachine[$8$], tiny-non-exact\xspace, and \smallmachine[$8$] machines; and tiny\xspace and \smallmachine[$8$] machines.
Moreover $F_{opt}^{-1}(J_7)$ and $F_{opt}^{-1}(J_8)$ are slack \exactcovers.
However, in $F_{opt}^{-1}(J_7)$ the fastest machine is \smallmachine[$8$].
This means that at least one \smallmachine[$8$] is in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{8})$.
Moreover, it means that the capacity of tiny-non-exact\xspace and \smallmachine[$8$] machines in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{8})$ is at least $38$, by the definition of range.
Hence, in at least one iteration \cref{algorithm:PTASCmax} considers a good state vector; for example $\sv^8$, presented in \cref{tab:bv}.
Moreover, the tiny\xspace exact, \averagemachine[$8$], and \largemachine[$8$] machines that are to be assigned to parts in $P_8$ are equivalent to those assigned in the optimal \covering; in the example $3^9, 17, 57$, respectively.
The algorithm guesses the value $n_{mst}$, the number of parts covered by slack \exactcover in $P_9$ or later range (here only $P_9$) \emph{where the fastest machines are \smallmachine[$8$] machines}; here it is equal to $1$.
Moreover, the number of parts in $P_8$ that are covered by slack \exactcover is guessed; in the example $n_{msu} = 1$.
Observe that using the guesses the algorithm constructs the set $M_{mst}$ (in the example $M_{mst} = \{11\}$).
The amount of capacity reserved in $M_{mst}$ is much less than the capacity of tiny-non-exact\xspace and \smallmachine[$8$] machines assigned in $F_{opt}^{-1}(P_9)$.
Hence, in at least one iteration \cref{algorithm:interPTASCmax} is applied with an equivalent set of tiny\xspace machines reserved for tiny exact \covers, equivalent \largemachine[$8$] and \averagemachine[$8$] machines, a sufficient number of \smallmachine[$8$] machines (at least $n_{msu}$), and at least the same amount of capacity in $M_{msu}$ as the capacity of tiny-non-exact\xspace and \smallmachine[$8$] machines in $F_{opt}^{-1}(P_8)$.
Therefore, \cref{algorithm:PTASCmax} has to return a good vector, perhaps $\sv[a]^9$.
Observe that $\sv[a]^9$ may have fewer \smallmachine[$9$] machines than are present in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{8})$.
However, by reserving $M_{mst}$ and guessing the \averagemachine[$8$] and \smallmachine[$9$] machines in $M \setminus F_{opt}^{-1}(\setssum{P}{0}{8})$, the number of \smallmachine[$9$] machines is still enough to construct slack \cover[$(1-\epsilon)$]s for every part covered by slack \exactcovers in the optimal \covering in $\setssum{P}{9}{l_{max}}$.
Of course, during the execution it might be the case that a superior good state vector is constructed, for example $\sv[b]^9$, even from guesses not corresponding to the optimal \covering.
By \cref{theorem:interPTASproperties} it might be used as well to construct an \cover[$(1-\epsilon)$]ing in further ranges.
Considering the constructed \cover[$(1-\epsilon)$]ing, observe that $J_5$ is covered by a slack \cover[$(1-\epsilon)$] and it is a relatively almost exact \cover.
Notice that $J_7$ and $J_8$ are covered by slack \cover[$(1-\epsilon)$]s and they are absolutely almost exact \covers.
\end{example}
\section{Unrelated machines}
We prove that no good approximation algorithm is possible in the case of unrelated machines.
\begin{comment}
\begin{theorem}
\label{thm:R-sigmacj}
$R|G = \completekpartite{2}| \sum C_j$ is \textnormal{\sffamily Strongly NP-hard}\xspace even if $s_{i,j} \in \{s_1, s_2\}$ with $s_1 > s_2$.
\end{theorem}
\begin{proof}
We proceed by reducing \textnormal{\sffamily 3-Dimensional Matching}\xspace, a problem well known to be \textnormal{\sffamily NP-hard}\xspace \cite{garey-johnson}, to our problem.
Recall that an instance of \textnormal{\sffamily 3-Dimensional Matching}\xspace is a tuple $(W, X, Y, M, q)$, where $W$, $X$, $Y$ are disjoint sets with $|W| = |X| = |Y| = q$ and $M \subseteq W \times X \times Y$.
We ask if there exists $M' \subseteq M$ with $|M'| = q$ such that for any distinct $(w_1, x_1, y_1), (w_2, x_2, y_2) \in M'$ we have $w_1 \neq w_2$, $x_1 \neq x_2$ and $y_1 \neq y_2$.
For any instance of \textnormal{\sffamily 3-Dimensional Matching}\xspace we construct an instance of the scheduling problem as follows.
By an abuse of notation, let the set of the machines be equal to $M$.
Construct two sets of jobs, each with processing requirement equal to $1$.
\begin{itemize}
\item $J_1 = W \cup X \cup Y$ with $p_{j,m} = p_1$ if $j \in J_1$ is included in $m \in M$, otherwise $p_{j,m} = p_2$,
\item $J_2$ with $3 |M| - 3 q$ jobs with $p_{j,m} = p_1$ for every $j \in J_2$ and $m \in M$.
\end{itemize}
Construct a speed function:
\begin{equation}
s(j,m) = \begin{cases} s_1 & \text{if } j \in m \text{ or } j \in J_2, \\ s_2 & \text{otherwise.} \end{cases}
\end{equation}
Now $G$ is defined as a complete bipartite graph with partitions $J_1$ and $J_2$.
We ask whether there exists a schedule with $\sum C_j \le \frac{6 |M|}{s_1}$.
If there is a required matching $M'$, then we can schedule any $j \in J_1$ on $m \in M'$, such that $j \in m$.
Also, we may schedule all jobs from $J_2$ on remaining $|M| - q$ machines by putting $3$ jobs on each machine.
This way each job has its processing time equal to $p_1$ and there are exactly $3$ jobs on each machine, therefore this schedule achieves $\sum C_j = \frac{6 |M|}{s_1}$.
Now suppose we have a schedule with $\sum C_j \le \frac{6 |M|}{s_1}$.
Let $c_i$ be the number of machines which in this schedule have exactly $i$ jobs on them -- so $\sum_{i = 0}^n i c_i = 3 |M|$ and $\sum_{i = 0}^n c_i = |M|$, as there are in total $3 |M|$ jobs and $|M|$ machines.
Since $s_1 > s_2$, the total completion time of this schedule may be bounded from below as:
\begin{align}
\label{eq:bound}
\sum C_j & \ge \sum_{i = 0}^n \binom{i + 1}{2} \frac{c_i}{s_1}
= \sum_{i = 0}^n \left(\binom{i - 2}{2} + 3 i - 3\right) \frac{c_i}{s_1} \nonumber\\
& = \sum_{i = 0}^n \binom{i - 2}{2} \frac{c_i}{s_1} + \frac{6 |M|}{s_1} \ge \frac{6 |M|}{s_1},
\end{align}
since $\binom{i - 2}{2}$ is non-negative for all $i = 0, 1, 2, \ldots$
Since by assumption $\sum C_j \le \frac{6 |M|}{s_1}$, it has to be the case that $\sum_{i = 0}^n \binom{i - 2}{2} c_i = 0$.
However, for any $i \in \{0, \ldots, n\} \setminus \{2, 3\}$ it is true that $\binom{i - 2}{2} > 0$ so it implies that $c_i = 0$.
This in turn entails that $c_2 = 0$: if any machine had fewer than $3$ jobs assigned, some other machine would have to get more than $3$ jobs, which would require $c_i > 0$ for some $i \ge 4$.
Therefore it has to be the case that $c_3 = |M|$, that is, every machine got exactly $3$ jobs.
Finally, this means that jobs from $J_1$ had to be scheduled on exactly $q$ machines.
Moreover, the processing time of each job was $\frac{1}{s_1}$ -- otherwise the first inequality from (\ref{eq:bound}) would be strict.
Therefore, each such machine corresponds to a triple of jobs from $J_1$, and thus also to an edge from $M$; by construction, the set of these edges forms a 3-dimensional matching.
\end{proof}
\begin{theorem}
\label{thm:R-cmax}
$R|G = \completekpartite{2}| C_{max}$ is \textnormal{\sffamily Strongly NP-hard}\xspace even if $s_{i,j} \in \{s_1, s_2\}$ with $s_1 > s_2$.
\end{theorem}
\begin{proof}
The proof is almost identical to the one above, the only difference is that we use $C_{max} \le \frac{3}{s_1}$ bound instead of $\sum C_j \le \frac{6 |M|}{s_1}$.
\end{proof}
\end{comment}
\begin{theorem}
There is no constant approximation ratio algorithm for $R|G = \completekpartite{2}|\sum C_j$ \break $(R|G = \completekpartite{2}|C_{max})$.
\end{theorem}
\begin{proof}
Assume that there is a $d$-approximate algorithm for the $R|G = \completekpartite{2}|\sum C_j$ problem ($R|G = \completekpartite{2}|C_{max}$).
Consider an instance of \textnormal{\sffamily 3-SAT}\xspace with the set of variables $V$ and the set of clauses $C$, where for each $v \in V$ there are at most $5$ clauses containing $v$.
This version is still \textnormal{\sffamily NP-complete}\xspace \cite{gareyJ1979computers}.
We construct the scheduling instance as follows: let $M = \{v^T, v^F: v \in V\}$.
Let also $G = \completekpartite{2}$ with partitions $J_1 = \{j_{v, 1}: v \in V\} \cup \{j_c: c \in C\}$ and $J_2 = \{j_{v, 2}: v \in V\}$.
Hence $n = 2|V| + |C| \le 7|V|$, by $|C| \le 5|V|$.
Let $p_j = 1$ for all jobs.
Let $s_1 \ge 1$ be a value determined by the instance of \textnormal{\sffamily 3-SAT}\xspace, but polynomially bounded by the size of the instance.
Let now $s(j_{v, 1}, v^T) = s(j_{v, 2}, v^T) = s(j_{v, 1}, v^F) = s(j_{v, 2}, v^F) = s_1$, for any $v \in V$.
For any $c \in C$, let $s(j_c, v^T) = s_1$ if the literal $v$ appears in $c$, and $s(j_c, v^F) = s_1$ if the literal $\neg{v}$ appears in $c$.
Set all other $s(j, m)$ to $1$.
Consider an instance of the scheduling problem corresponding to an instance of \textnormal{\sffamily 3-SAT}\xspace with answer \texttt{YES}.
Then we can schedule $J$ on the machines according to a satisfying valuation, in the following way:
if $v$ has value $T$ then we assign $v^T$ to $J_1$ and $v^F$ to $J_2$; otherwise we assign $v^F$ to $J_1$ and $v^T$ to $J_2$.
Hence every job in $J_1$, and similarly every job in $J_2$, can be processed with speed $s_1$.
Hence, $\sum C_j \le \binom{n+1}{2}\frac{1}{s_1} \le \frac{49|V|^2}{s_1}$ ($C_{max} \le \frac{n}{s_1} \le \frac{7 |V|}{s_1}$), for an optimal schedule.
Hence it is sufficient to set $s_1 = 49 d |V|^2 + 1$ ($s_1 = 7 d |V| + 1$) to prove the theorem, as we now show.
On the other hand, assume that the answer for an instance of \textnormal{\sffamily 3-SAT}\xspace is \texttt{NO}.
Suppose, for contradiction, that there exists a schedule with $\sum C_j < 1$ ($C_{max} < 1$).
If for some $v \in V$ there were a partition such that neither $v^T$ nor $v^F$ processes jobs from it, then the job $j_{v,1}$ or $j_{v,2}$ would have to be processed with speed $1$, so $\sum C_j \ge 1$ ($C_{max} \ge 1$), a contradiction.
Similarly, if some $j_c \in J_1$ were assigned to a machine $m$ with $s(j_c, m) = 1$, we would also clearly have a contradiction.
Hence each $j_c \in J_1$ is assigned to a machine corresponding to a valuation of a variable fulfilling $c$; the machine assignment therefore encodes a fulfilling valuation, a contradiction.
Hence for such an instance every schedule satisfies $\sum C_j \ge 1$ ($C_{max} \ge 1$).
By using the $d$-approximate algorithm on an instance of the scheduling problem corresponding to a \texttt{YES} instance of \textnormal{\sffamily 3-SAT}\xspace, we would obtain a schedule with $\sum C_j \le d \cdot \frac{49|V|^2}{s_1} < 1$ ($C_{max} \le d \cdot \frac{7|V|}{s_1} < 1$).
On the other hand, for an instance corresponding to a \texttt{NO} instance there is no schedule with $\sum C_j < 1$ ($C_{max} < 1$); hence the algorithm would decide \textnormal{\sffamily 3-SAT}\xspace in polynomial time.
\end{proof}
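The instance construction in the proof above is mechanical enough to sketch in code. The following is an illustrative sketch only; the helper name \texttt{build\_instance} and the tuple encodings of machines, jobs and literals are our own conventions, not taken from the text or any cited work.

```python
# Illustrative sketch of the reduction from 3-SAT in the proof above.
# A clause is a list of literals; a literal is a pair (variable, is_positive).
# The helper name build_instance and the encodings are our own conventions.

def build_instance(variables, clauses, d):
    s1 = 49 * d * len(variables) ** 2 + 1          # the choice made in the proof
    machines = [(v, sign) for v in variables for sign in ("T", "F")]
    J1 = [("j1", v) for v in variables] + [("jc", i) for i in range(len(clauses))]
    J2 = [("j2", v) for v in variables]

    def speed(job, machine):
        mv, msign = machine
        kind, key = job
        if kind in ("j1", "j2") and key == mv:     # j_{v,1}, j_{v,2} fast on v^T, v^F
            return s1
        if kind == "jc":                           # j_c fast on its satisfying literals
            for v, positive in clauses[key]:
                if v == mv and positive == (msign == "T"):
                    return s1
        return 1                                   # all remaining speeds are 1

    return machines, J1, J2, speed
```

Under a satisfying valuation, assigning $v^T$ to $J_1$ when $v = T$ (and $v^F$ otherwise) lets every job run at speed $s_1$, which is exactly the \texttt{YES} case of the argument.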
\begin{comment}
\begin{corollary}
It is \textnormal{\sffamily NP-complete}\xspace to decide if there exists a schedule for $P|G = \completekpartite{2}, \mathcal{M}_j|-$.
\end{corollary}
Hence, as the last result, let us consider a restricted assignment of the machines to the partitions, motivated by the fact that the $\mathcal{M}_j$ case is not in \textnormal{\sffamily APX}\xspace.
\begin{theorem}
$P|G = \textit{complete multipartite}, \mathcal{M}_{J_i}|C_{max}$ is in \textnormal{\sffamily EPTAS}\xspace.
\end{theorem}
\begin{proof}
We use exactly the same technique as in \cite{bodlaender1994scheduling}.
We can check whether an allocation is feasible by a simple matching technique.
\end{proof}
\begin{theorem}
$P|G = \textit{complete multipartite}, \mathcal{M}_{J_i}|\sum C_j$ is in \textnormal{\sffamily P}\xspace.
\end{theorem}
\begin{proof}
For this problem we construct the following network.
Construct a source vertex $s$.
For each partition $J_i$ construct a set of $m$ vertices,
denoted $v_i^1, \ldots, v_i^m$.
Construct edges $(s, v_i^k)$, each with capacity $1$.
Let $cost(s, v_i^1) = \infty$ for every $i$.
Let $cost(s, v_i^k) = diff$ for every $i$ and $k > 1$, where $diff$ is the difference between the cost of the schedule of $J_i$ on $k-1$ machines and on $k$ machines.
By \cref{Lemma:MinimizationOfDiffs} the consecutive differences are non-increasing.
Consider an optimal allocation of the machines.
Notice that the cost of the allocation can be presented as the cost of the schedule in which each partition gets exactly one machine, minus a sum of savings.
The savings are accrued by successively allocating additional machines to the partitions.
The allocation that realizes the savings of maximum total value is optimal.
Using such a schedule we construct a cost matching in the graph.
As a side note, notice that the construction treats the allocation of the first machine in a special way.
If there is no feasible schedule, the sum of savings is clearly strictly less than $k \cdot \infty$.
Now consider the matching with maximum value.
If the value is less than $k \cdot \infty$, then there can be no valid schedule.
Otherwise, it corresponds exactly to an allocation of the machines.
\end{proof}
\section{More Powerful Identical Machines}
In \cite{jansen2020approximation} Jansen et al.\ presented a PTAS for a similar problem, i.e.\ a problem where each of the identical machines can process jobs not from $1$, but from $c$ different classes.
The problem $P(c-relaxed)| G = \textit{complete multipartite} | C_{\max}$ admits a $\textnormal{\sffamily PTAS}\xspace$ with time complexity $\textnormal{O}(n^{\textnormal{O}(\frac{1}{\epsilon^8}\log(\frac{1}{\epsilon}))}\log(m)\log(p_{\max}))$.
The problem is of course strongly \textnormal{\sffamily NP-hard}\xspace, due to the fact that $P||C_{\max}$ is strongly \textnormal{\sffamily NP-hard}\xspace.
However, as we have proved, this is not the case for unit time jobs, because both $P(c = 1)|G = \textit{complete multipartite}, p_j = 1|C_{\max}$ and $P(c=1)|G = \textit{complete multipartite}, p_j = 1|\Sigma C_j$ are solvable in polynomial time.
Hence, as a side note, we prove the following.
\begin{theorem}
$P(c-relaxed) | G = \textit{complete multipartite}, p_j = 1 | C_{\max}$ is strongly \textnormal{\sffamily NP-hard}\xspace, even when $c$ is a constant greater than $1$.
\end{theorem}
\begin{proof}
Let $(A = \{a_1, \ldots, a_{3n}\}, b)$ be an instance of $\textnormal{\sffamily 3-Partition}\xspace$.
That is, let $A$ be a set of $3n$ numbers such that $\frac{1}{4} b < a < \frac{1}{2} b$ for every $a \in A$.
The question is whether $A$ can be divided into $n$ disjoint sets such that the numbers in each of those sets sum to $b$.
Our reduction depends on $c$; we analyze two cases.
First let us assume that $c = 2$.
Let there be $3n$ partitions, $J_1, \ldots, J_{3n}$, corresponding to $a_1, \ldots, a_{3n}$, i.e. having the number of jobs equal to $a_1, \ldots, a_{3n}$, respectively.
Let there be $n$ partitions, $J_{3n+1}, \ldots, J_{4n}$ each with $2b$ jobs.
There are $3n$ machines.
The schedule length limit is $b$.
Assume that the answer for the instance of \textnormal{\sffamily 3-Partition}\xspace is yes.
Hence, let $\{\{a_1', a_2', a_3'\}, \ldots, \{a_{3n -2}', a_{3n-1}', a_{3n}'\}\}$ be the partition into the required sets.
For $m_1$ assign partition corresponding to $a_1'$ and $b - a_1'$ jobs from $J_{3n+1}$,
for $m_2$ assign partition corresponding to $a_2'$ and $b - a_2'$ jobs from $J_{3n+1}$,
for $m_3$ assign partition corresponding to $a_3'$ and $b - a_3'$ jobs from $J_{3n+1}$,
for $m_4$ assign partition corresponding to $a_4'$ and $b - a_4'$ jobs from $J_{3n+2}$,
etc.
Notice that each of the machines is assigned exactly $b$ jobs.
Now, assume that there is a schedule $S$ with $C_{max} = b$.
Notice that each of the partitions corresponding to $a_1, \ldots, a_{3n}$ has to have at least one machine assigned.
If a machine has jobs from the partition corresponding to $a_i$, then it cannot have any job from the partition corresponding to $a_{i'}$, due to the bounds on the sizes of the numbers; hence it has to have jobs from some $J_{j > 3n}$ assigned.
Let $S^{-1}(J_i)$, for $i > 3n$, be the set of the machines that process the jobs from $J_i$.
Assume that $|S^{-1}(J_i)| = 2$ for some $i > 3n$.
In this case these machines can only process the jobs from $J_i$, as otherwise the schedule would have to be longer; this is impossible due to the observations about $J_1, \ldots, J_{3n}$.
Now assume that $|S^{-1}(J_i)| \ge 4$ for some $i > 3n$.
Then there is some partition $J_{i'}$ with $i' > 3n$ such that $|S^{-1}(J_{i'})| \le 2$; this is again impossible.
Hence for every $J_i$ with $i > 3n$ there are exactly $3$ machines $m_a, m_b, m_c$ processing jobs from it.
As the total number of jobs processed by $m_a$, $m_b$ and $m_c$ and coming from $J_i$ is $2b$, the numbers corresponding to the small partitions of these machines have to sum to $b$.
In the second case let us assume that $c \ge 3$.
In this case let $G$ consist of $cn$ partitions.
Let partitions $J_1, \ldots, J_{3n}$ have $a_1, \ldots, a_{3n}$ vertices respectively.
Let partitions $J_{3n+1}, \ldots, J_{cn}$ have $b$ vertices each.
There are $n$ machines.
The question is if there exists a schedule with length $(c-2)b$.
We show that \textnormal{\sffamily 3-Partition}\xspace reduces to $P(c-relaxed) | G = \textit{complete multipartite}, p_j = 1 | C_{\max}$.
Let us assume that the answer to the instance of \textnormal{\sffamily 3-Partition}\xspace is yes.
Then there exists a division into $n$ sets with the described property.
For a set $\{a_{i}, a_{j}, a_{k}\}$ we assign to $m_1$ the jobs from the partitions corresponding to $a_{i}, a_{j}, a_{k}$ and the jobs from $J_{3n+1}, J_{4n+1}, \ldots, J_{(c-1)n+1}$.
For the other machines we proceed similarly.
It is easy to see that this schedule covers all the jobs and has makespan equal to $(c-2)b$.
Now let us assume that there exists a schedule with makespan equal to $(c-2)b$.
It means that each of the machines has to have exactly this number of jobs assigned.
First notice that each of the machines has to have exactly $c$ different partitions exclusively assigned.
Notice also that each of the machines has to have exactly $3$ partitions corresponding to some $3$ numbers from $A$ assigned.
If it had fewer (and hence more from the dummy partitions), then the makespan would have to be longer.
If it had more than $3$, then some other machine would have to have fewer, and by a similar reasoning this leads to a contradiction.
Hence the total number of jobs from the partitions corresponding to the numbers from $A$ has to be equal to $b$ for each of the machines.
Hence the answer to the instance of \textnormal{\sffamily 3-Partition}\xspace is yes.
\end{proof}
It is easy to see that a similar reasoning holds for $\Sigma C_j$.
\begin{corollary}
$P(c-relaxed) | G = \textit{complete multipartite}, p_j = 1 | \Sigma C_j$ is strongly \textnormal{\sffamily NP-hard}\xspace, even when $c$ is a constant greater than $1$.
\end{corollary}
In the literature \cite{DBLP:journals/siamcomp/BaevRS08} there is a reduction from \textnormal{\sffamily Partition}\xspace to a similar problem, namely $2$-HDP; however, it starts from the weakly \textnormal{\sffamily NP-hard}\xspace \textnormal{\sffamily Partition}\xspace problem, and it is not polynomial under unary encoding.
\end{comment}
\end{document}
\begin{document}
\title{A Bernstein type result for graphical self-shrinkers in $\mathbb{R}^{4}$}
\author{Hengyu Zhou}
\address[H. ~Z.]{Department of Mathematics, Sun Yat-sen University, 510275,
Guangzhou, P. R. China}
\email{[email protected]}
\begin{abstract}
Self-shrinkers are important geometric objects in the evolution of mean curvature flows, while the Bernstein theorem is one of the most profound results in minimal surface theory. We prove a Bernstein type result for {graphical self-shrinker} surfaces of {co-dimension} two in $\mathbb{R}^4$. Namely, under certain natural conditions on the Jacobian of a smooth map from $\mathbb{R}^2$ to $\mathbb{R}^2$, we show that a self-shrinker which is the graph of this map must be a plane through the origin. The proof relies on the derivation of structure equations of graphical {self-shrinker}s in terms of {parallel form}s and on the existence of certain positive functions on
self-shrinkers related to these Jacobian conditions.
\end{abstract}
\maketitle
\section{Introduction}
A smooth submanifold $\Sigma^n$ in $\mathbb{R}^{n+k}$ is a {\it {self-shrinker}} if the equation
\begin{equation}\label{eq:stru}
\vec{H}+\F{1}{2}\vec{F}^{\bot}=0
\end{equation}
holds at every point of $\Sigma^n$ with position vector $\vec{F}$. Here $\vec{H}$ is the {mean curvature} vector of $\Sigma^{n}$ and $\bot$ denotes the projection of
$\vec{F}$ onto the {normal bundle} of $\Sigma^{n}$.\\
\indent
Self-shrinkers are important in the study of mean curvature flows for at least two reasons. First, if $\Sigma$ is a {self-shrinker}, it is easily checked that
\begin{equation*}
\Sigma_{t}=\sqrt{-t}\Sigma,\quad -\infty < t <0
\end{equation*}
is a solution to the mean curvature flow. Hence {self-shrinker}s are self-similar solutions to
the mean curvature flow. On the other hand, by Huisken (\cite{Hui90}) the blow-ups around a type I singularity converge weakly to nontrivial {self-shrinker}s after rescaling and choosing subsequences. Because of the parabolic maximum principle, the finite time singularity of mean curvature flows for initial compact hypersurfaces is unavoidable. Therefore it is desirable to classify {self-shrinker}s under various geometric conditions.
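\indent For concreteness, we recall a classical example (included here purely as an illustration, not as a result of this paper): the round sphere of radius $\sqrt{2n}$ satisfies the self-shrinker equation \eqref{eq:stru}.

```latex
% Classical example: the round sphere $S^{n}(R)\subset\mathbb{R}^{n+1}$ centered at
% the origin. With outward unit normal $\nu=\vec{F}/R$ one has $\vec{F}^{\bot}=R\nu$
% and $\vec{H}=-\frac{n}{R}\nu$, so
\vec{H}+\frac{1}{2}\vec{F}^{\bot}=\Big(-\frac{n}{R}+\frac{R}{2}\Big)\nu,
% which vanishes if and only if $R=\sqrt{2n}$.
```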
\subsection{Motivation}
The rigidity of graphical minimal submanifolds in Euclidean spaces is summarized as the {Bernstein} theorem. In this subsection we always assume that $f$ is a smooth map from $\mathbb{R}^{n}$ into $\mathbb{R}^{k}$ and $\Sigma =(x,f(x))$ is the graph of $f$. Let $Df$ denote the gradient of $f$. The {Bernstein} theorem states that if $\Sigma$ is minimal, then $\Sigma$ is totally geodesic under the following conditions:
\begin{enumerate}
\item $n\leq 7$ and $k=1$ by \cite{SJ68};
\item any $n$ and $k=1$ with $|Df|=o(\sqrt{|x|^2+|f|^2})$ as $|x|\rightarrow \infty$ by \cite{EH90};
\item $n=2$ and any $k$ with $|Df|\leq C$ by \cite{CO67};
\item $n=3$ and any $k$ with $|Df|\leq C$ by \cite{FD80};
\item any $n\geq 2$ and $k\geq 2$ with more restrictive conditions (see Remark \ref{rm:minimal}) by \cite{WMT03} (see also \cite{JX99}, \cite{JXY13}).
\end{enumerate}
On the other hand, a self-shrinker is minimal in the weighted Euclidean space $(\mathbb{R}^{n+k}, e^{-\F{|x|^{2}}{2n}}dx^{2})$, where $dx^{2}$ is the standard Euclidean metric (\cite{CM12}). Interest in Bernstein type results for graphical self-shrinkers was revived by the works of Ecker-Huisken (\cite{EH89}) and Wang (\cite{WL11}). They showed that a graphical {self-shrinker} $\Sigma$ with {co-dimension} one is a hyperplane through $0$, without any restriction on the dimension $n$. Combining this with the historical results above, we are naturally interested in the rigidity of \emph{graphical {self-shrinker} surfaces} with {co-dimension} $k\geq 2$.\\
\indent There are two main difficulties in studying graphical self-shrinker surfaces with higher codimension. First, the techniques for minimal submanifolds in Euclidean space are generally not available. In the case of {self-shrinker} surfaces, there are no analogues of the harmonic functions (\cite{CO67}) or of the monotonicity formula for the tangent cone at infinity in minimal surface theory (\cite{SJ68}, \cite{FD80}, also \S 17 in \cite{SL83}). Second, the contrast
between hypersurfaces and higher co-dimensional submanifolds is another obstacle to the study of self-shrinkers with higher {co-dimension}. In the hypersurface case the normal bundle is trivial and the mean curvature is a scalar function. In the higher {co-dimension}
case the normal bundle can be highly non-trivial. In general the computations related to the mean curvature and the second fundamental form in this situation are very involved, except in a few cases. \\
\indent However, recent progress on {self-shrinker}s and graphical mean curvature flows provides new tools to overcome these obstacles under some conditions. By \cite{CZ13} and \cite{DX13} there is a {polynomial volume growth property} for complete, \emph{properly} immersed {self-shrinker}s (Definition \ref{pvgp}). With this property, the integration technique gives good estimates if there are well-behaved structure equations satisfied by {self-shrinker}s (Lemma \ref{pro:est}). On the other hand, graphical {self-shrinker}s satisfy many structure equations in terms of parallel forms (Theorems \ref{thm:sk:eq} and \ref{pro:Laplacian}). This approach is inspired by the works of \cite{WMT01, WMT02} and \cite{TW04}, which investigate graphical mean curvature flows of arbitrary codimension in product manifolds; there the authors obtained evolution equations of the Hodge star of parallel forms along mean curvature flows (see Remark \ref{rm:star}). \\
\indent The contribution of this paper is to apply the theory of parallel forms to the study of graphical self-shrinkers. In $\mathbb{R}^{4}$ both the dimension and the codimension of a graphical self-shrinker surface are two. Then $\mathbb{R}^{4}$ provides four parallel 2-forms reflecting various properties of a graphical self-shrinker surface, which is explained in the next subsection.
\subsection{Statement of the main result}
Suppose $f=(f_{1}(x_1,x_2), f_{2}(x_1,x_2))$ is a smooth map from $\mathbb{R}^{2}$ into $\mathbb{R}^{2}$. Then its Jacobian $J_f$ is given by
$$J_{f}=\F{\partial f_{1}}{\partial x_{1}}\F{\partial f_{2}}{\partial x_{2}}-\F{\partial f_{1}}{\partial x_{2}}\F{\partial f_{2}}{\partial x_{1}}.$$ The main result of this note is given as follows.
\begin{theorem}\label{main}
Suppose $f:\mathbb{R}^2\rightarrow \mathbb{R}^2$ is a smooth map whose Jacobian $J_f$ satisfies (1): $J_f >-1$ for all $x$, or (2): $J_f<1$ for all $x$. If its graph is a self-shrinker in $\mathbb{R}^4$, then its graph is a two-dimensional plane through $0$.
\end{theorem}
\begin{Rem} Notice that in our setting the {co-dimension} of $\Sigma$ is two.
\end{Rem}
\begin{Rem} In \cite{DW10}, the authors studied the graphical self-shrinker of $f(x)$ in $\mathbb{R}^{n+m}$ where $f:\mathbb{R}^n\rightarrow \mathbb{R}^m$. While their approach is very promising in arbitrary codimension, it requires that the eigenvalues $\{\lambda_k\}$ of the map $f$ satisfy $|\lambda_i\lambda_j|\leq 1$ for $i\neq j$. Notice that in Theorem \ref{main} we have $|J_f|=|\lambda_1\lambda_2|$ and $n=m=2$.\\
\indent Our paper relaxes the conditions on the eigenvalues for graphical self-shrinkers, since we are able to use the special geometry of four dimensions to prove our main results. This geometry includes the existence of the two parallel forms $dx_1\wedge dx_2$ and $dx_3\wedge dx_4$. See \S 3.1.\\
\indent It seems very difficult to generalize our approach to even higher codimensional cases without further geometric inputs.
\end{Rem}
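\indent As an elementary consistency check (our own remark, included only as an illustration), the conclusion of Theorem \ref{main} is realized by linear maps: planes through the origin are themselves self-shrinkers.

```latex
% For a linear map $f(x)=Ax$ the graph $\Sigma=\{(x,Ax):x\in\mathbb{R}^{2}\}$ is a
% plane through $0$. Every position vector $\vec{F}$ is tangent to $\Sigma$, hence
\vec{F}^{\bot}=0,\qquad \vec{H}=0,
% so \eqref{eq:stru} holds trivially. Moreover $J_{f}=\det A$ is constant, so
% condition (1) (resp.\ (2)) only excludes $\det A\le -1$ (resp.\ $\det A\ge 1$).
```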
A special case of Theorem \ref{main} is that if $f$ is a diffeomorphism of $\mathbb{R}^2$ then the {graphical self-shrinker} of $f$ in $\mathbb{R}^{4}$ is totally geodesic. One can compare this theorem with the results of \cite{HSV09, HSV11}, whose authors obtained the rigidity of graphical minimal surfaces in $\mathbb{R}^{4}$ assuming a bounded Jacobian.\\
\indent Let us explain conditions (1) and (2) in more detail. Let $\Sigma$ be the {graphical self-shrinker} in Theorem ~\ref{main}. We
take $(x_{1},x_{2}, x_{3}, x_{4})$ as the coordinates of $\mathbb{R}^{4}=\mathbb{R}^{2}\times \mathbb{R}^{2}$. Let
$\eta_{1}=dx_{1}\wedge dx_{2}$, $\eta_{2}=dx_3\wedge dx_4$, $\eta' = \eta_1+\eta_2$ and
$\eta''=\eta_1 -\eta_2$. First we choose a proper orientation on $\Sigma$
such that $\ast\eta_{1}>0$ (Def. \ref{Def:star}). A direct computation shows that $\ast\eta'=(1+J_{f})\ast\eta_{1}$ and $\ast\eta''=(1-J_{f})\ast\eta_{1}$
(Lemma \ref{lm:hodge-star}). Then conditions (1) and (2) correspond to $\ast\eta'>0$ and $\ast\eta''>0$ respectively. When
{\it both} conditions (1) and (2) are satisfied, we have $|J_f| < 1$, which means the map $f$ is area-decreasing. In \cite{WMT02, TW04}, assuming $f$ is area-decreasing together with additional curvature conditions, the authors showed that the mean curvature flow of the graph of such a smooth map stays
graphical and exists for all time.
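The identities $\ast\eta'=(1+J_{f})\ast\eta_{1}$ and $\ast\eta''=(1-J_{f})\ast\eta_{1}$ can be sanity-checked numerically using the graph frame $X_{i}=\partial_{x_{i}}+\partial_{i}f_{1}\,\partial_{x_{3}}+\partial_{i}f_{2}\,\partial_{x_{4}}$. The sketch below is our own illustration; the sample map $f(x_1,x_2)=(x_1x_2,\,x_1-x_2^2)$ and the helper name are arbitrary choices, not from the paper.

```python
# Numerical sanity check (illustration only) of *eta' = (1 + J_f)*eta_1 and
# *eta'' = (1 - J_f)*eta_1 at one point of the graph of the arbitrary sample
# map f(x1, x2) = (x1*x2, x1 - x2**2).
import math

def star_values(x1, x2):
    # Exact partial derivatives of f = (f1, f2).
    d1f1, d2f1 = x2, x1            # f1 = x1 * x2
    d1f2, d2f2 = 1.0, -2.0 * x2    # f2 = x1 - x2**2
    # Tangent frame of the graph: X_i = d/dx_i + d_i f1 d/dx3 + d_i f2 d/dx4.
    X1 = (1.0, 0.0, d1f1, d1f2)
    X2 = (0.0, 1.0, d2f1, d2f2)
    g11 = sum(a * a for a in X1)
    g22 = sum(a * a for a in X2)
    g12 = sum(a * b for a, b in zip(X1, X2))
    vol = math.sqrt(g11 * g22 - g12 * g12)              # sqrt(det g)
    star_eta1 = (X1[0] * X2[1] - X1[1] * X2[0]) / vol   # *(dx1 ^ dx2)
    star_eta2 = (X1[2] * X2[3] - X1[3] * X2[2]) / vol   # *(dx3 ^ dx4)
    Jf = d1f1 * d2f2 - d2f1 * d1f2                      # Jacobian of f
    return star_eta1, star_eta2, Jf

e1, e2, Jf = star_values(0.3, 0.7)
assert math.isclose(e1 + e2, (1 + Jf) * e1)   # *eta'  = (1 + J_f) * eta_1
assert math.isclose(e1 - e2, (1 - Jf) * e1)   # *eta'' = (1 - J_f) * eta_1
```

Since $\eta_{1}(X_{1},X_{2})=1$ and $\eta_{2}(X_{1},X_{2})=J_{f}$, the check is exact up to floating-point rounding.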
Note that the usual {maximum principle} does not apply to our non-compact submanifold. Our main technical tool to treat
this problem is Lemma ~\ref{pro:est}, where we use a cutoff function and apply the Divergence Theorem:
a technique also used in \cite{WL11} for the case of hypersurfaces. The crucial condition is the {polynomial volume growth property} of {graphical self-shrinker}s. It may be of independent interest and
we state it in a more general formulation (Lemma \ref{pro:est}).
\subsection{Plan of the paper}
In \S2 we discuss the {parallel form} and the geometry of {graphical self-shrinker}s. The structure equation of {self-shrinker}s in terms of
{parallel form}s is summarized in Theorem \ref{thm:sk:eq}. In \S3 we apply Theorem \ref{thm:sk:eq} to the cases
of $*\eta'$ and $*\eta''$. For example in Theorem \ref{pro:Laplacian} we derive that
$*\eta' =\eta' (e_{1}, e_{2})$ satisfies the equation
\begin{equation}\label{eq:key}
\Delta(*\eta') + *\eta'((h^{3}_{1k}- h^{4}_{2k})^{2}+(h^{4}_{1k}+h^{3}_{2k})^{2})-\F{1}{2}\langle\vec{F},
\nabla(*\eta') \rangle =0,
\end{equation}
where $h^{\alpha}_{ij}$ are the components of the {second fundamental form} and $\Delta$ ($\nabla$) is the Laplacian (covariant derivative) of $\Sigma$.
With the {polynomial volume growth property}, Lemma ~\ref{pro:est} implies that
\begin{equation*}
(h^{3}_{1k}- h^{4}_{2k})^{2}+(h^{4}_{1k}+h^{3}_{2k})^{2}\equiv 0,
\end{equation*}
if $*\eta'$ is a positive function. This implies that the {graphical self-shrinker} is minimal. A similar conclusion can be achieved for
$*\eta''$. We then show that it is actually {totally geodesic}.
\section{Parallel Forms}\label{se:paral}
The {parallel form}s in Euclidean space play a fundamental role in this paper. We will record many structure equations
of {self-shrinker}s in terms of {parallel form}s. These equations can be quite general. We will present these results for submanifolds of arbitrary (co-)dimension in general Riemannian manifolds in \S2.1 and then restrict to {self-shrinker}s in
Euclidean spaces in \S2.2.
\subsection{Parallel forms and their Hodge star}
We will adopt the notation of \cite{WMT08}, \cite{LL11}. Assume that $N^n$ is a smooth
$n$-dimensional submanifold in a Riemannian manifold $M^{n+k}$ of dimension $n+k$. We denote an orthonormal basis of the tangent bundle of $N$ by $\lk e_{i}\rk_{i=1}^{n}$ and denote an orthonormal basis of the normal bundle of $N$ by $\lk e_{\alpha}\rk_{\alpha=n+1}^{n+k}$. The Riemann
curvature tensor of $M$ is defined by
\begin{equation*}
R(X, Y)Z=-\bar{\nabla}_{X}\bar{\nabla}_{Y}Z+\bar{\nabla}_{Y}\bar{\nabla}_{X}Z+\bar{\nabla}_{[X, Y]}Z,
\end{equation*}
for smooth vector fields $X, Y$ and $Z$. The {second fundamental form} $A$ and the {mean curvature} vector $\vec{H}$ are defined as
\begin{align}
A(e_{i}, e_{j})&=(\bar{\nabla}_{e_{i}}e_{j})^{\bot}=h^{\alpha}_{ij}e_{\alpha}\\
\vec{H} &=(\bar{\nabla}_{e_{i}}e_{i})^{\bot}=h^{\alpha}_{ii}e_{\alpha}=h^{\alpha}e_{\alpha}.
\end{align}
Here we used Einstein notation and $h^{\alpha}=h^{\alpha}_{ii}$.
Let $\nabla$ be the covariant derivative of $\Sigma$ {with respect to} the induced metric. Then $\nabla^{\bot} A$ can be written
as follows:
\begin{equation}
\nabla^{\bot}_{e_{k}}A(e_{i}, e_{j})=h^{\alpha}_{ij,k} e_{\alpha}.
\end{equation}
Note that $h^{\alpha}_{ij,k}$ is not equal to $e_{k}(h^{\alpha}_{ij})$ unless $\Sigma$ is a hypersurface. In fact
we have
\begin{lem} \label{lm:cd} $h^{\alpha}_{ij,k}$ takes the following form:
\begin{equation}\label{eq:derf}
h^{\alpha}_{ij,k} = e_{k}(h^{\alpha}_{ij})+h^{\beta}_{ij}\langle e_{\alpha}, \bar{\nabla}_{e_{k}}e_{\beta}\rangle
-C_{ki}^{l}h^{\alpha}_{lj}-C_{kj}^{l}h_{li}^{\alpha},
\end{equation}
where $\nabla_{e_{i}}e_{j}=C_{ij}^{k}e_{k}$.
\end{lem}
\begin{proof}
By its definition
\begin{equation*}
h^{\alpha}_{ij,k}=\langle \nabla^{\bot}_{e_{k}}A(e_{i}, e_{j}), e_{\alpha}\rangle.
\end{equation*}
The conclusion follows from expanding $\nabla^{\bot}_{e_{k}}A(e_{i},e_{j})$
\begin{align*}
h^{\alpha}_{ij,k}&=\langle \bar{\nabla}_{e_{k}}(A(e_{i},e_{j})), e_{\alpha}\rangle
-\langle A(\nabla_{e_{k}}e_{i}, e_{j}), e_{\alpha}\rangle - \langle A(e_{i},\nabla_{e_{k}}e_{j}), e_{\alpha}\rangle\\
&=\langle \bar{\nabla}_{e_{k}}(h^{\beta}_{ij}e_{\beta}), e_{\alpha}\rangle
-C_{ki}^{l}h^{\alpha}_{lj}- C_{kj}^{l}h_{li}^{\alpha}\\
&= e_{k}(h^{\alpha}_{ij})+h^{\beta}_{ij}\langle e_{\alpha}, \bar{\nabla}_{e_{k}}e_{\beta}\rangle
-C_{ki}^{l}h^{\alpha}_{lj}-C_{kj}^{l}h_{li}^{\alpha}.
\end{align*}
\end{proof}
For later calculation we recall that the
Codazzi equation is
\begin{equation}\label{eq:cdz}
R_{\alpha ikj} = h^{\alpha}_{ij,k}-h^{\alpha}_{ik,j},
\end{equation}
where $R_{\alpha ikj}=R(e_{\alpha}, e_{i}, e_{k}, e_{j})$.
\begin{Def}\label{Def:star} An
$n$-form $\Omega$ is called {\it parallel} if $\bar{\nabla} \Omega = 0$ where $\bar{\nabla} $ is the covariant derivative
of $M$. \\
\indent The Hodge star $\ast\Omega$ on $N$ is
defined by
\begin{equation}\label{star}
\ast\Omega=\F{\Omega(X_{1}, \cdots, X_{n})}{\sqrt{\det(g_{ij})}}
\end{equation}
where $\{X_1, \cdots, X_n\}$ is a local frame on $N$ and $g_{ij}=\langle X_i, X_j\rangle$.
\end{Def}
\begin{Rem}\label{rm:star} We denote by $M$ the product manifold $N_1\times N_2$ and by $\Omega$ the volume form of $N_1$. Then $\Omega$ is a parallel form in $M$.
If $N$ is a graphical manifold over $N_1$, then $*\Omega >0$ on $N$ for an appropriate orientation. For example the graphical {self-shrinker} $\Sigma$ in \S 1.2 satisfies that $*\Omega >0$ on $\Sigma$ where $\Omega$ is $dx_1\wedge \cdots \wedge dx_n$.\\
\indent A crucial observation is that $\ast\Omega$ is independent of the frame
$\{X_{1},\cdots, X_{n}\}$ up to a fixed orientation. This fact greatly simplifies our calculation. When $\{X_1, \cdots, X_n\}$ is an orthonormal frame $\{e_{1},\cdots, e_{n}\}$, $\ast\Omega=\Omega(e_1, \cdots, e_n)$.\\
\indent The evolution equation of $*\Omega$ along mean curvature flows is a key ingredient of \cite{WMT02}.
\end{Rem}
\begin{Rem}\label{rm:minimal} In \cite{WMT03} the author proved the following: suppose $\Sigma=(x,f(x))$ is minimal, where $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{k}$, and there exist $0<\delta<1$ and $K>0$ such that $|\lambda_i\lambda_j|\leq 1-\delta$ and $*\Omega> K$; then $\Sigma$ is affine linear. \\
\indent Here $\{\lambda_i\}_{i=1}^{n}$ are the eigenvalues of $df$ and $\Omega$ is $dx_1\wedge\cdots\wedge dx_n$.
\end{Rem}
The following equation \eqref{Lop} first appeared as equation (3.4) in \cite{WMT02} in the proof
of the evolution equation of $\ast\Omega$ along the mean curvature flow. We provide a proof for the sake of
completeness.
\begin{pro} \label{pro:1} Let $N^n$ be a smooth submanifold of $M^{n+k}$. Suppose $\Omega$ is a
parallel $n$-form and $R$ is the Riemann curvature tensor of $M$. Then
$\ast\Omega=\Omega(e_{1},\cdots, e_{n})$ satisfies the following equation:
\begin{equation}\label{Lop}
\Delta(\ast\Omega)=-\sum_{i,k}(h^{\alpha}_{ik})^{2}\ast\Omega+\sum_{i}(h^{\alpha}_{,i}+
\sum_{k}R_{\alpha kik })\Omega_{i\alpha}
+2\sum_{i<j,k}h^{\alpha}_{ik}h^{\beta}_{jk}\Omega_{i\alpha,j\beta}.
\end{equation}
Here $\Delta$ denotes the Laplacian on $N$ {with respect to} the induced metric, and
$h^{\alpha}_{,k}=h^{\alpha}_{ii,k}$. In the second group of terms, $\Omega_{i\alpha}=\Omega(\hat{e}_{1},\cdots, \hat{e}_{n})$ with $\hat{e}_{s}=e_{s}$ for $s\neq i$ and $\hat{e}_{s}=e_{\alpha}$ for $s=i$. In the last group of terms, $\Omega_{i\alpha, j\beta}=\Omega(\hat{e}_{1},\cdots, \hat{e}_{n})$ with $\hat{e}_{s}=e_{s}$ for $s\neq i,j$, $\hat{e}_{s}=e_{\alpha}$ for $s=i$ and $\hat{e}_{s}=e_{\beta}$ for $s=j$.
\end{pro}
\begin{proof}
Recall that $\nabla$ and $\bar{\nabla}$ are the covariant derivatives of $N$ and $M$ respectively. Fix
a point $p$ on $\Sigma$ and assume that $\lk e_{1},\cdots, e_{n}\rk$ is normal at $p$ {with respect to} $\nabla$.
Lemma ~\ref{lm:cd} implies that
\begin{equation}\label{eq:drn}
\nabla_{e_{i}}e_{j}(p)=0,\quad h^{\alpha}_{ij,k}(p)=e_{k}(h^{\alpha}_{ij})(p)
+h^{\beta}_{ij}\langle e_{\alpha}, \bar{\nabla}_{e_{k}}e_{\beta}\rangle(p).
\end{equation}
Since $\bar{\nabla}\Omega = 0$, we have
\begin{align}
\nabla_{e_{k}}(\ast\Omega)&=\Omega(\bar{\nabla}_{e_{k}}e_{1}, \cdots, e_{n})+\cdots+\Omega(e_{1},\cdots,\bar{\nabla}_{e_{k}}e_{n})\notag\\
&=\sum_{i}h^{\alpha}_{ik}\Omega_{i\alpha}.\label{eq:grt}
\end{align}
For $\nabla_{e_{k}}\nabla_{e_{k}}(\ast\Omega)$ we get
\begin{equation}\label{eq:sk2}
\nabla_{e_{k}}\nabla_{e_{k}}(\ast\Omega)=\sum_{i}e_{k}(h^{\alpha}_{ik})\Omega_{i\alpha}
+\sum_{i}h^{\alpha}_{ik}e_{k}(\Omega_{i\alpha}).
\end{equation}
The second term in \eqref{eq:sk2} can be computed as
\begin{align*}
\sum_{i}h^{\alpha}_{ik}e_{k}(\Omega_{i\alpha})&=\sum_{i} h^{\alpha}_{ik}\Omega(e_{1},\cdots, \bar{\nabla}_{e_{k}}e_{\alpha},\cdots, e_{n})+2\sum_{i<j}h^{\alpha}_{ik}h^{\beta}_{jk}\Omega_{i\alpha, j\beta}\notag\\
&=\sum_{i,\alpha}-(h^{\alpha}_{ik})^{2}\ast\Omega+ h^{\beta}_{ik}\langle e_{\alpha}, \bar{\nabla}_{e_{k}}e_{\beta}\rangle \Omega_{i\alpha} +2\sum_{i<j}h^{\alpha}_{ik}h^{\beta}_{jk}\Omega_{i\alpha, j\beta}.
\end{align*}
Plugging this into \eqref{eq:sk2} yields that
\begin{align}
\nabla_{e_{k}}\nabla_{e_{k}}(\ast\Omega)
&=-\sum_{i,\alpha}(h^{\alpha}_{ik})^{2}\ast\Omega+2\sum_{i<j}h^{\alpha}_{ik}h^{\beta}_{jk}\Omega_{i\alpha,j\beta}+\sum_{i}h^{\alpha}_{ki,k}\Omega_{i\alpha}\notag\\
&=-\sum_{i,\alpha}(h^{\alpha}_{ik})^{2}\ast\Omega+2\sum_{i<j}h^{\alpha}_{ik}h^{\beta}_{jk}\Omega_{i\alpha,j\beta}+\sum_{i}(h^{\alpha}_{kk,i}+R_{\alpha kik})\Omega_{i\alpha}.\notag
\end{align}
In view of \eqref{eq:drn} and \eqref{eq:cdz} we can finally conclude that
\begin{align*}
\Delta(\ast\Omega(p))&=\nabla_{e_{k}}\nabla_{e_{k}}(\ast\Omega)(p)-\nabla_{\nabla_{e_{k}}e_{k}}(\ast\Omega)(p) \\
&=-\sum_{i,k,\alpha}(h^{\alpha}_{ik})^{2}\ast\Omega+2\sum_{i<j,k}h^{\alpha}_{ik}h^{\beta}_{jk}\Omega_{i\alpha, j\beta}+\sum_{i}(h^{\alpha}_{,i}+\sum_{k}R_{\alpha kik})\Omega_{i\alpha}.
\end{align*}
This is the conclusion.
\end{proof}
\subsection{Self-shrinkers in {Euclidean space}}
In this subsection we only consider the case where $M^{n+k}$ is Euclidean space and $N^n$ is a {self-shrinker}.
\begin{lem} \label{lm:slm} Let $\Omega$ be a parallel $n$-form in $\mathbb{R}^{n+k}$. Suppose $N^n$ is an $n$-dimensional {self-shrinker}
in $\mathbb{R}^{n+k}$. Using the notation in Proposition \ref{pro:1} we have
\begin{equation}
\sum_{i} \Omega_{i\alpha} h^{\alpha}_{,i}=\F{1}{2}\langle\vec{F}, \nabla(\ast\Omega) \rangle
\end{equation}
where $\vec{F}$ is the position vector of any point on $N^n$.
\end{lem}
\begin{proof}
As in \eqref{eq:drn} we assume that $\{e_{1},\cdots, e_{n}\}$ is normal at $p$. From \eqref{eq:grt}
we compute $\nabla(\ast\Omega)$ as follows:
\begin{equation*}
\nabla (\ast\Omega) =\nabla_{e_{k}}(\ast\Omega)e_{k}=(\sum_{i}h^{\alpha}_{ik}\Omega_{i\alpha})e_{k}.
\end{equation*}
This leads to
\begin{equation} \label{eq:sk3}
\F{1}{2}\langle\vec{F},\nabla(\ast\Omega) \rangle=\F{1}{2}\langle\vec{F},e_{k}\rangle(\sum_{i}h^{\alpha}_{ki}\Omega_{i\alpha}).
\end{equation}
Recall that $\vec{H} =h^{\alpha}e_{\alpha}$. Then $h^{\alpha}=-\F{1}{2}\langle\vec{F}, e_{\alpha}\rangle$ since
$\vec{H}+\F{1}{2}\vec{F}^{\bot}=0$. Taking the derivative of $h^{\alpha}$ {with respect to} $e_{i}$ we get
\begin{align}
e_{i}(h^{\alpha})&=\F{1}{2}h^{\alpha}_{ik}\langle\vec{F},e_{k}\rangle-\F{1}{2}\langle\vec{F},e_{\beta}\rangle\langle \bar{\nabla}_{e_{i}}e_{\alpha}, e_{\beta}\rangle\notag\\
&=\F{1}{2}h_{ik}^\alpha \langle \vec{F}, e_k\rangle+h^\beta \langle \bar{\nabla}_{e_i}e_\alpha, e_\beta\rangle\notag\\
&=\F{1}{2}h^{\alpha}_{ik}\langle\vec{F},e_{k}\rangle-h^{\beta} \langle \bar{\nabla}_{e_{i}}e_{\beta}, e_{\alpha}\rangle.\label{eq:cmp}
\end{align}
Here we applied $\langle \bar{\nabla}_{e_i}e_\alpha, e_\beta\rangle=-\langle \bar{\nabla}_{e_i}e_{\beta},e_\alpha\rangle$.
Since we assume that $\nabla_{e_{i}}e_{j}(p)=0$, \eqref{eq:derf} yields that $h^{\alpha}_{kk,i}(p)=e_{i}(h^{\alpha}_{kk})(p)+h^{\beta}_{kk} \langle \bar{\nabla}_{e_{i}}e_{\beta}, e_{\alpha}\rangle(p)$. Then we conclude that
\begin{equation}q
h^{\alpha}_{,i}(p)=e_{i}(h^{\alpha})(p)+h^{\begin{equation}ta} \langle \bar{\nabla}_{e_{i}}e_{\begin{equation}ta}, e_{\alpha}\rangle(p).
evolution equationq
Comparing the above with \eqref{eq:cmp} we get $ h^{\alpha}_{,i}(p)=\F{1}{2}h^{\alpha}_{ik}\langle\vec{F},e_{k}\rangle(p)$. The lemma follows from combining this with \eqref{eq:sk3}.
\end{proof}
Using Proposition \ref{pro:1} and Lemma \ref{lm:slm} we obtain a series of structure equations for self-shrinkers
in terms of parallel forms.
\begin{theorem}[Structure Equation]\label{thm:sk:eq}
In $\mathbb{R}^{n+k}$ suppose $\Sigma$ is an $n$-dimensional self-shrinker. Let $\Omega$ be a parallel $n$-form; then
$\ast\Omega=\Omega(e_{1}, \cdots, e_{n})$ satisfies
\begin{align}
\Delta (\ast\Omega) +(h^{\alpha}_{ik})^{2}\ast\Omega
-2\sum_{i< j} \Omega_{i\alpha,j\beta}h^{\alpha}_{ik}h^{\beta}_{jk}-
\F{1}{2}\langle\vec{F},\nabla(\ast\Omega)\rangle=0,\label{eq:sk}
\end{align}
where $\vec{F}$ is the position vector of the point on $\Sigma$ and $\Omega_{i\alpha, j\beta}=\Omega(\hat{e}_{1},\cdots, \hat{e}_{n})$ with $\hat{e}_{s}=e_{s}$ for $s\neq i,j$, $\hat{e}_{s}=e_{\alpha}$ for $s=i$ and $\hat{e}_{s}=e_{\beta}$ for $s=j$.
\end{theorem}
This theorem enables us to extract various information about self-shrinkers from different parallel forms. We will apply this idea
to our particular situation in the next section.
\section{Graphical self-shrinkers in $\mathbb{R}^4$}
From this section on we will focus on graphical self-shrinkers in Euclidean space. The structure equations of
graphical self-shrinkers will be derived in Theorem \ref{pro:Laplacian}. The polynomial volume growth property plays an essential role in Lemma~\ref{pro:est}, which is our main technical tool.
\subsection{Structure equations for graphical self-shrinkers in $\mathbb{R}^{4}$} We consider the following four different
parallel 2-forms in $\mathbb{R}^4$:
\begin{align}
&\eta_{1}=dx_{1}\wedge dx_{2}, &\eta'=dx_{1}\wedge dx_{2}+dx_{3}\wedge dx_{4},\notag\\
&\eta_{2}=dx_{3}\wedge dx_{4}, &\eta''=dx_{1}\wedge dx_{2}-dx_{3}\wedge dx_{4}.\label{def:coor}
\end{align}
Recall that for a smooth map $f=(f_{1}(x_{1}, x_{2}),f_{2}(x_{1}, x_{2}))$ its Jacobian $J_{f}$ is
\begin{equation}
J_{f}=\F{\partial f_{1}}{\partial x_{1}}\F{\partial f_{2}}{\partial x_{2}}-\F{\partial f_{1}}{\partial x_{2}}\F{\partial f_{2}}{\partial x_{1}}.
\end{equation}
\begin{lem} \label{lm:hodge-star} Suppose $\Sigma =(x, f(x))$ where $f:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}$ is a smooth map. Then on $\Sigma$ it holds that
$$
*\eta_{2}= J_{f}\,*\eta_{1}.
$$
\end{lem}
\begin{proof} Notice that $*\eta_1$ and $*\eta_2$ are independent of the choice of the local frame. Denote by $e_1= \F{\partial }{\partial x_1}+\F{\partial f_1}{\partial x_1}\F{\partial}{\partial x_3}+\F{\partial f_2}{\partial x_1}\F{\partial}{\partial x_4}$, $e_2= \F{\partial }{\partial x_2}+\F{\partial f_1}{\partial x_2}\F{\partial}{\partial x_3}+\F{\partial f_2}{\partial x_2}\F{\partial}{\partial x_4}$ and $g_{ij}=\langle e_i, e_j \rangle$. Then
\begin{align}
*\eta_2&=\F{dx_3\wedge dx_4(e_1, e_2)}{\sqrt{\det(g_{ij})}}\notag\\
&=\F{J_f}{\sqrt{\det(g_{ij})}}\notag\\
&=J_f*\eta_1. \notag
\end{align}
\end{proof}
The above lemma alone is not enough to exploit the structure equations in Theorem \ref{thm:sk:eq}.
We need further information about the local structure of $\Sigma$ at each point.
\begin{lem} Assume $f:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}$ is a smooth map. Denote by $df$ the differential of $f$. Then for any point $x$:
\begin{enumerate}
\item There exist oriented orthonormal bases $\{a_{1}, a_{2}\}$ and $\{a_{3}, a_{4}\}$ in $T_{x}\mathbb{R}^{2}$ and $T_{f(x)}\mathbb{R}^{2}$ respectively such that
\begin{equation}\label{eq:lin_map}
df(a_{1})=\lambda_{1}a_{3},\qquad df(a_{2})=\lambda_{2}a_{4}.
\end{equation}
Here `oriented' means $dx_{i}\wedge dx_{i+1}(a_{i}, a_{i+1})=1$ for $i=1,3$.
\item Moreover we have $\lambda_{1}\lambda_{2}=J_{f}$.
\end{enumerate}
\end{lem}
\begin{proof} Fix a point $x$. First we prove the existence claim in (1). By the Singular Value Decomposition Theorem (p.~291 in \cite{ST07}) there exist two $2\times 2$ orthogonal matrices $Q_1, Q_2$ such that
$$
\begin{pmatrix}
\F{\partial f_{1}}{\partial x_{1}} & \F{\partial f_{2}}{\partial x_{1}}\\
\F{\partial f_{1}}{\partial x_{2}} & \F{\partial f_{2}}{\partial x_{2}}
\end{pmatrix}
= Q_1\begin{pmatrix}
\lambda'_{1} & 0 \\
0& \lambda'_{2}
\end{pmatrix} Q_2
$$
with $\lambda'_{1},\lambda'_2\geq 0$. \\
\indent Let $\lambda_{1}=\det(Q_1)\det(Q_2)\lambda'_{1}$, $\lambda_{2}=\lambda'_{2}$, $A =Q_1\,\mathrm{diag}(\det(Q_1),1)$ and $B=\mathrm{diag}(\det(Q_2),1)\,Q_2$. Then $A$ and $B$ are orthogonal with $\det(A)=\det(B)=1$ and
\begin{equation}\label{eq:cord}
\begin{pmatrix}
\F{\partial f_{1}}{\partial x_{1}} & \F{\partial f_{2}}{\partial x_{1}}\\
\F{\partial f_{1}}{\partial x_{2}} & \F{\partial f_{2}}{\partial x_{2}}
\end{pmatrix}
= A \begin{pmatrix}
\lambda_{1} & 0 \\
0& \lambda_{2}
\end{pmatrix} B.
\end{equation}
We consider the new bases $(a_{1},a_{2})^{T} =A^T(\F{\partial}{\partial x_{1}}, \F{\partial }{\partial x_{2}})^{T}$ and $(a_{3}, a_{4})^{T}=B(\F{\partial}{\partial x_{3}}, \F{\partial }{\partial x_{4}})^{T}$; then $dx_{1}\wedge dx_{2}(a_1, a_2)=1$ and $dx_{3}\wedge dx_{4}(a_3, a_4)=1$ ($A^T$ is the transpose of $A$). Moreover \eqref{eq:cord} implies that
$$
df(a_1, a_2)^{T}=\begin{pmatrix}
\lambda_{1} &0 \\
0 &\lambda_{2}
\end{pmatrix} (a_{3}, a_{4})^T.
$$
This proves (1).\\
\indent According to \eqref{eq:cord} we have $J_f=\det(A)\lambda_1\lambda_2 \det(B)=\lambda_1\lambda_2$, which proves (2). The proof is complete.
\end{proof}
\begin{Rem} The conclusion (2) does not depend on the special choice of $\{a_i\}_{i=1}^4$ which satisfies \eqref{eq:lin_map}.
\end{Rem}
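As a numerical sanity check of the lemma, one concrete way to arrange $\det(A)=\det(B)=1$ is to absorb the signs $\det(Q_i)$ into diagonal factors. The sketch below does this with NumPy for a hypothetical sample Jacobian (entries chosen arbitrarily, with $J_f<0$ here).

```python
import numpy as np

# Oriented singular value decomposition of a hypothetical sample Jacobian Df.
Df = np.array([[0.3, 1.4], [1.1, 0.2]])
Q1, s, Q2 = np.linalg.svd(Df)                 # Df = Q1 @ diag(s) @ Q2, s >= 0
d1, d2 = np.linalg.det(Q1), np.linalg.det(Q2)
lam1, lam2 = d1 * d2 * s[0], s[1]             # signed singular values
A = Q1 @ np.diag([d1, 1.0])                   # rotations: det(A) = det(B) = 1
B = np.diag([d2, 1.0]) @ Q2
assert np.isclose(np.linalg.det(A), 1.0) and np.isclose(np.linalg.det(B), 1.0)
assert np.allclose(Df, A @ np.diag([lam1, lam2]) @ B)   # oriented decomposition
assert np.isclose(lam1 * lam2, np.linalg.det(Df))       # conclusion (2)
```

The same check passes for any $2\times 2$ input, since the sign bookkeeping is independent of NumPy's SVD sign conventions.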
With these bases we construct the following local frame for later use.
\begin{Def}\label{def:cor}
Fix a point $p=(x, f(x))$ on $\Sigma$. We construct a special orthonormal basis $\{e_{1},e_{2}\}$
of the tangent bundle $T\Sigma$ and $\{e_{3}, e_{4}\}$ of the normal bundle $N\Sigma$ as follows. At the point $p$ we have for $i=1,2$:
\begin{align}
e_{i}=\F{1}{\sqrt{1+\lambda^{2}_{i}}}(a_{i}+\lambda_{i}a_{2+i}),\quad
e_{2+i}=\F{1}{\sqrt{1+\lambda^{2}_{i}}}(a_{2+i}-\lambda_{i}a_{i}),\label{eq:cd}
\end{align}
where $\{a_{1}, a_2, a_3, a_4\}$ are from \eqref{eq:lin_map}.
\end{Def}
For a parallel 2-form $\Omega$ we have $\ast\Omega=\Omega(e_{1},e_{2})$. Applying \eqref{eq:cd}
and $\lambda_{1}\lambda_{2}=J_{f}$, direct computations show that $*\eta_{1},*\eta_{2}, *\eta'$ and
$*\eta''$ take the following form:
\begin{align}
*\eta_{1}&=\F{1}{\sqrt{(1+\lambda_{1}^{2})(1+\lambda_{2}^{2})}} > 0, \label{eta1}\\
*\eta_{2}&=\F{\lambda_{1}\lambda_{2}}{\sqrt{(1+\lambda_{1}^{2})(1+\lambda_{2}^{2})}},\\
*\eta'&=(1+J_{f})(*\eta_{1}),\label{eta-prime}\\
*\eta''&=(1-J_{f})(*\eta_{1}).\label{eq:ineta}
\end{align}
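These four formulas can be checked numerically: the sketch below computes $*\eta_1$ and $*\eta_2$ directly from a coordinate frame of the graph and compares them with the closed forms in terms of the signed singular values; the sample Jacobian (rows are the partials in $x_1, x_2$) is our own illustrative choice, not from the text.

```python
import numpy as np

def hodge_duals(Df):
    # Coordinate tangent vectors of the graph (x, f(x)) in R^4.
    E1 = np.array([1.0, 0.0, Df[0, 0], Df[0, 1]])
    E2 = np.array([0.0, 1.0, Df[1, 0], Df[1, 1]])
    g = np.array([[E1 @ E1, E1 @ E2], [E2 @ E1, E2 @ E2]])
    vol = np.sqrt(np.linalg.det(g))
    eta1 = (E1[0] * E2[1] - E1[1] * E2[0]) / vol    # *eta_1 = dx1^dx2(e_1,e_2)
    eta2 = (E1[2] * E2[3] - E1[3] * E2[2]) / vol    # *eta_2 = dx3^dx4(e_1,e_2)
    return eta1, eta2

Df = np.array([[0.8, -0.5], [1.3, 0.6]])
Jf = np.linalg.det(Df)
s = np.linalg.svd(Df, compute_uv=False)
lam1, lam2 = np.sign(Jf) * s[0], s[1]               # lam1 * lam2 = J_f
denom = np.sqrt((1 + lam1**2) * (1 + lam2**2))
eta1, eta2 = hodge_duals(Df)
assert np.isclose(eta2, Jf * eta1)                  # *eta_2 = J_f * eta_1
assert np.isclose(eta1, 1 / denom) and np.isclose(eta2, lam1 * lam2 / denom)
assert np.isclose(eta1 + eta2, (1 + Jf) * eta1)     # *eta' = (1 + J_f) * eta_1
```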
Now we can state the structure equations for graphical self-shrinkers in $\mathbb{R}^4$:
\begin{theorem}\label{pro:Laplacian}
Suppose $f:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}$ is a smooth map and $\Sigma =(x, f(x))$ is a graphical self-shrinker in $\mathbb{R}^{4}$.
Using the notation in Definition~\ref{def:cor} we have
\begin{align}
\Delta (*\eta_1)&+*\eta_{1}(h^{\alpha}_{ik})^{2}
-2*\eta_{2}(h^{3}_{1k}h^{4}_{2k}-h^{4}_{1k}h^{3}_{2k})-\F{1}{2}\langle\vec{F},\nabla(*\eta_1)\rangle=0;\label{eq:lp:1}\\
\Delta(*\eta_{2})&+*\eta_{2}(h^{\alpha}_{ik})^{2}-2*\eta_{1}(h^{3}_{1k}h^{4}_{2k}
-h^{4}_{1k}h^{3}_{2k})-\F{1}{2}\langle\vec{F},\nabla(*\eta_{2})\rangle=0;\label{eq:lp:2}\\
\Delta(*\eta')&+*\eta'((h^{3}_{1k}-h^{4}_{2k})^{2}+(h^{4}_{1k}
+h^{3}_{2k})^{2})-\F{1}{2}\langle\vec{F},\nabla(*\eta')\rangle=0;\label{eq:lp:3}\\
\Delta(*\eta'')&+*\eta''((h^{3}_{1k}+h^{4}_{2k})^{2}+(h^{4}_{1k}-h^{3}_{2k})^{2})
-\F{1}{2}\langle\vec{F},\nabla(*\eta'')\rangle=0,\label{eq:lp:4}
\end{align}
where $h^{\alpha}_{ij}=\langle \bar{\nabla}_{e_{i}}e_{j}, e_{\alpha}\rangle$ are the components of the second fundamental form of $\Sigma$, and $\Delta$ and $\nabla$ are the Laplacian and the covariant derivative of $\Sigma$ respectively.
\end{theorem}
\begin{proof} First we consider the equation \eqref{eq:lp:1}. Applying the frame in \eqref{eq:cd}, the third term in
Theorem~\ref{thm:sk:eq} becomes:
\begin{align*}
2(\eta_{1})_{i\alpha, j\beta}h^{\alpha}_{ik}h^{\beta}_{jk}&=2 dx_{1}\wedge
dx_{2}(e_{3}, e_{4})(h^{3}_{1k}h^{4}_{2k}-h^{3}_{2k}h^{4}_{1k})\\
&=2\F{\lambda_{1}\lambda_{2}}{\sqrt{(1+\lambda_{1}^{2})
(1+\lambda_{2}^{2})}}(h^{3}_{1k}h^{4}_{2k}-h^{4}_{1k}h^{3}_{2k})\\
&=2*\eta_{2}(h^{3}_{1k}h^{4}_{2k}-h^{4}_{1k}h^{3}_{2k}).
\end{align*}
Here in the second line we used the fact that $dx_{1}\wedge dx_{2}(a_{1}, a_{2})=1$.
Plugging this into \eqref{eq:sk}, we obtain \eqref{eq:lp:1}. \\
\indent Similarly we obtain that
\begin{align}
2(\eta_{2})_{i\alpha, j\beta}h^{\alpha}_{ik}h^{\beta}_{jk}&=2 dx_{3}\wedge dx_{4}(e_{3}, e_{4})(h^{3}_{1k}h^{4}_{2k}-h^{3}_{2k}h^{4}_{1k})\notag\\
&=2\F{1}{\sqrt{(1+\lambda_{1}^{2})(1+\lambda_{2}^{2})}}(h^{3}_{1k}h^{4}_{2k}-h^{4}_{1k}h^{3}_{2k})\notag\\
&=2*\eta_{1}(h^{3}_{1k}h^{4}_{2k}-h^{4}_{1k}h^{3}_{2k}).\label{eq:if}
\end{align}
Here in the second line we used the fact that $dx_{3}\wedge dx_{4}(a_{3}, a_{4})=1$. Then
\eqref{eq:lp:2} follows from plugging \eqref{eq:if} into \eqref{eq:sk}.\\
\indent To show \eqref{eq:lp:3} we observe that $*\eta'=*\eta_1+*\eta_2$. Then adding \eqref{eq:lp:1} and \eqref{eq:lp:2} we obtain that
$$
\Delta (*\eta')+*\eta'\{\sum_{\alpha=3}^4\sum_{i,k=1}^2(h^\alpha_{ik})^2-2\sum_{k=1}^2(h^3_{1k}h^4_{2k}-h^4_{1k}h^3_{2k})\}-\F{1}{2}\langle \vec{F},\nabla(*\eta')\rangle=0.
$$
Thus \eqref{eq:lp:3} follows from the identity
$$
\sum_{\alpha=3}^4\sum_{i,k=1}^2(h^\alpha_{ik})^2-2\sum_{k=1}^2(h^3_{1k}h^4_{2k}-h^4_{1k}h^3_{2k})=\sum_{k=1}^2(h^3_{1k}-h^4_{2k})^2+(h^4_{1k}+h^3_{2k})^2.
$$
With a similar derivation we can show \eqref{eq:lp:4}. The proof is complete.
\end{proof}
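The identity used in the last step is purely algebraic and holds for arbitrary coefficients $h^{\alpha}_{ik}$ (it does not even need the symmetry of the second fundamental form); a quick numerical check with random hypothetical values is:

```python
import numpy as np

# h[a, i, k] plays the role of h^{a+3}_{(i+1)(k+1)}; values are random.
rng = np.random.default_rng(0)
h = rng.standard_normal((2, 2, 2))
lhs = (h**2).sum() - 2 * sum(h[0, 0, k] * h[1, 1, k] - h[1, 0, k] * h[0, 1, k]
                             for k in range(2))
rhs = sum((h[0, 0, k] - h[1, 1, k])**2 + (h[1, 0, k] + h[0, 1, k])**2
          for k in range(2))
assert np.isclose(lhs, rhs)   # expanding the squares reproduces the left side
```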
\subsection{Volume growth for self-shrinkers}
We will state our main analytic tool in a more general setting since it may be of independent interest. In
this subsection we consider $n$-dimensional graphical self-shrinkers in $\mathbb{R}^{n+k}$.
\begin{Def}\label{pvgp}
Let $N^n$ be a complete, immersed $n$-dimensional submanifold in $\mathbb{R}^{n+k}$. We say $N$ has
the polynomial volume growth property if there is a constant $C>0$ such that for any $r\geq 1$
\begin{equation*}
\int_{N\cap B_{r}(0)} d vol \leq C r^n,
\end{equation*}
where $B_{r}(0)$ is the ball in $\mathbb{R}^{n+k}$ centered at $0$ with radius $r$.
\end{Def}
Recently \cite{CZ13} and \cite{DX13}
showed that the polynomial volume growth property holds automatically under the following condition, without any restriction on
dimension or codimension.
\begin{theorem}[\cite{CZ13, DX13}] \label{thm:vgrowth}
If $N^n$ is an $n$-dimensional complete, \textit{properly} immersed self-shrinker in $\mathbb{R}^{n+k}$, then it
satisfies the polynomial volume growth property.
\end{theorem}
\begin{Rem} \label{rm:vgth}
The properness assumption cannot be removed. See Remark 4.1 in \cite{CZ13}.
\end{Rem}
Notice that any graphical self-shrinker in Euclidean space is embedded, complete and proper. Thus we have the following conclusion.
\begin{cor}
Let $\Sigma =(x, f(x))$ be a smooth graphical self-shrinker in $\mathbb{R}^{4}$ where
$f:\mathbb{R}^{2}\rightarrow \mathbb{R}^{2}$ is a smooth map. Then $\Sigma$ has the polynomial volume growth property.
\end{cor}
The following lemma is crucial for our argument:
\begin{lem} \label{pro:est}
Let $N^n \subset \mathbb{R}^{n+k}$ be a complete, immersed smooth $n$-dimensional submanifold
with at most polynomial volume growth. Suppose $g$ is a positive function and $K$ is a nonnegative function satisfying
\begin{equation} \langlebel{eq:beq}
0\geq \Delta g -\F{1}{2}\langle\vec{F}, \nabla g\rangle + Kg,
\end{equation}
where $\Delta$ and $\nabla$ are the Laplacian and the covariant derivative of
$N^n$ respectively, and $\vec{F}$ is the position vector of $N^n$. Then $g$ is a positive constant and $K\equiv 0$.
\end{lem}
\begin{proof} Fix $r\geq 1$. We denote by $\phi$ a compactly supported smooth function in $\mathbb{R}^{n+k}$
such that $\phi\equiv 1$ on $B_{r}(0)$ and $\phi\equiv 0$ outside of $B_{r+1}(0)$ with
$|\nabla \phi|\leq |D\phi|\leq 2$. Here $D\phi$ and $\nabla \phi$ are the gradient of $\phi$ in
$\mathbb{R}^{n+k}$ and $N^n$ respectively.
Since $g$ is positive, let $u=\log g$. Then the inequality \eqref{eq:beq} becomes
\begin{equation*}
0\geq \Delta u -\F{1}{2} \langle \vec{F}, \nabla u\rangle +(K+|\nabla u|^{2}).
\end{equation*}
Multiplying the above inequality by $\phi^{2}\ex$ and integrating it on $N^n$ we get
\begin{align}
0&\geq \int_{N}\phi^{2}div_{N}(\ex \nabla u) + \int_{N}\phi^{2}\ex(K+|\nabla u|^{2})\notag\\
&= -\int_{N} 2\phi\langle\nabla \phi, \nabla u\rangle \ex + \int_{N}\phi^{2}\ex(K+|\nabla u|^{2})\notag\\
&\geq -\int_{N}2|\nabla\phi|^{2}\ex +\int_{N}\phi^{2}\ex(K+\F{|\nabla u|^{2}}{2}).\label{eq:kt}
\end{align}
In \eqref{eq:kt} we used the inequality
\begin{equation*}
|2\phi\langle \nabla \phi, \nabla u\rangle |\leq \F{\phi^{2}|\nabla u|^{2}}{2}+2|\nabla\phi|^{2}.
\end{equation*}
Now we estimate, using the condition that $|\nabla \phi|\leq |D\phi|\leq 2$,
\begin{align*}
\int_{N\cap B_{r}(0)}\ex(K+\F{|\nabla u|^{2}}{2})&\leq \int_{N}\phi^{2}\ex(K+\F{|\nabla u|^{2}}{2})\\
&\leq \int_{N}2|\nabla\phi|^{2}\ex \quad\text{ by \eqref{eq:kt}}\\
&\leq 8\int_{N\cap (B_{r+1}(0)\backslash B_{r}(0))}\ex\\
&\leq 8C (r+1)^{n}e^{-\F{r^{2}}{4}}.
\end{align*}
In the last line we use the fact that the submanifold $N^n$ has the {polynomial volume growth property}.
Letting $r$ go to infinity we obtain that
\begin{equation*}
\int_{N}\ex(K +\frac{|\nabla u|^2}{2})\leq 0.
\end{equation*}
Since $K$ is nonnegative, we have $K\equiv \nabla u \equiv 0$. Therefore $g$ is a positive constant.
\end{proof}
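The cut-off argument above hinges on the decay of $C(r+1)^{n}e^{-r^{2}/4}$ as $r\to\infty$; a quick numerical look, with hypothetical sample values $n=2$ and $C=1$, illustrates how fast this bound vanishes.

```python
import math

# C (r+1)^n exp(-r^2/4) for a few radii; polynomial growth loses to the Gaussian.
n, C = 2, 1.0
vals = [C * (r + 1) ** n * math.exp(-r ** 2 / 4) for r in (1, 5, 10, 20, 40)]
assert all(a > b for a, b in zip(vals, vals[1:]))   # strictly decreasing here
assert vals[-1] < 1e-100                            # negligible already at r = 40
```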
\subsection{The proof of Theorem \ref{main}}
We now specialize to graphical self-shrinker surfaces in $\mathbb{R}^4$ and prove Theorem \ref{main}.
\begin{proof}[Proof of Theorem \ref{main}]
We claim that $\Sigma$ is minimal under the assumptions. We prove this case by case.
\subsubsection*{Assuming Condition (1):} the equations \eqref{eta1} and \eqref{eta-prime} imply
that $\ast\eta'$ has the same sign as $1+ J_{f}$. Hence $\ast\eta'$ is a positive function. Moreover
from \eqref{eq:lp:3} $\ast\eta'$ satisfies
\begin{equation*}
\Delta(\ast\eta')+(\ast\eta')((h^{3}_{1k}-h^{4}_{2k})^{2}+(h^{4}_{1k}+h^{3}_{2k})^{2})
-\F{1}{2}\langle\vec{F},\nabla(\ast\eta')\rangle=0.
\end{equation*}
Here $\vec{F}=(x,f(x))$. Since $\Sigma$ has the polynomial volume growth property, using Lemma \ref{pro:est} we conclude that
\begin{equation*}
(h^{3}_{1k}-h^{4}_{2k})^{2}+(h^{4}_{1k}+h^{3}_{2k})^{2}\equiv 0.
\end{equation*}
We then obtain
\begin{align}
h^{3}_{11}&=h^{4}_{21}, \quad h^{3}_{22}=-h^{4}_{12},\label{eq:m1}\\
h^{4}_{11}&=-h^{3}_{21},\quad h^{4}_{22}=h^{3}_{12}.\label{eq:m2}
\end{align}
Then $\vec{H}=( h^{3}_{11}+h^{3}_{22})e_{3}+(h^{4}_{11}+h^{4}_{22})e_{4}\equiv 0$. So $\Sigma$ is a
minimal surface.
\subsubsection*{Assuming Condition (2):} this is similar to the above case. \eqref{eta1} and
\eqref{eq:ineta} imply that $\ast\eta''$ has the same sign as $1-J_{f}$. Thus $\ast\eta''$ is positive.
From \eqref{eq:lp:4} it also satisfies
\begin{equation*}
\Delta(\ast\eta'')+(\ast\eta'')((h^{3}_{1k}+h^{4}_{2k})^{2}+(h^{4}_{1k}-h^{3}_{2k})^{2})
-\F{1}{2}\langle\vec{F},\nabla(\ast\eta'')\rangle=0.
\end{equation*}
Again we apply Lemma \ref{pro:est} to find that
\begin{equation*}
(h^{3}_{1k}+h^{4}_{2k})^{2}+(h^{4}_{1k}-h^{3}_{2k})^{2}\equiv 0.
\end{equation*}
Then we have
\begin{align}
h^{3}_{11}&=-h^{4}_{21}, \quad h^{3}_{22}=h^{4}_{12},\label{eq:m3}\\
h^{4}_{11}&=h^{3}_{21},\quad h^{4}_{22}=-h^{3}_{12}.\label{eq:m4}
\end{align}
Therefore we arrive at:
\begin{equation*}
\vec{H}=( h^{3}_{11}+h^{3}_{22})e_{3}+(h^{4}_{11}+h^{4}_{22})e_{4}\equiv 0,
\end{equation*}
which also means $\Sigma$ is minimal.\\
\indent Now $\Sigma$ is a graphical self-shrinker and minimal. From \eqref{eq:stru} we have $\vec{F}^{\bot}\equiv 0$
for any point $\vec{F}$ on $\Sigma$. For any normal unit vector $e_{\alpha}$ in the normal bundle of
$\Sigma$, we have
\begin{equation}\label{F}
\langle\vec{F}, e_{\alpha}\rangle\equiv 0.
\end{equation}
Taking the derivative of \eqref{F} with respect to $e_{i}$ for $i=1,2$, we obtain
\begin{align*}
&\langle\vec{F}, e_{1}\rangle h^{\alpha}_{11}+\langle\vec{F}, e_{2}\rangle h^{\alpha}_{12}=0,\\
&\langle\vec{F}, e_{1}\rangle h^{\alpha}_{21}+\langle\vec{F}, e_{2}\rangle h^{\alpha}_{22}=0.
\end{align*}
Now assume $\vec{F}\neq 0$. Since $\vec{F}^{\bot}=0$, $(\langle \vec{F}, e_{1}\rangle, \langle\vec{F}, e_{2}\rangle)\neq (0, 0)$. Since this homogeneous linear system has a nontrivial solution, we conclude that
\begin{equation}\label{det}
h^{\alpha}_{11}h^{\alpha}_{22}-(h^{\alpha}_{12})^{2}=0.
\end{equation}
The minimality implies $h^{\alpha}_{11}=-h^{\alpha}_{22}$. Hence \eqref{det} becomes
$-(h^{\alpha}_{11})^{2}=(h^{\alpha}_{12})^{2}$. We find that $h^{\alpha}_{ij}=0$ for $i, j = 1,2$. Therefore $\Sigma$ is totally geodesic except possibly at the point where $\vec{F}= 0$. Since $\Sigma$ is a graph, there is at most one point on $\Sigma$ such that $\vec{F} =0$. By the continuity of the second fundamental form, $\Sigma$ is totally geodesic everywhere.\\
\indent Now $\Sigma$ is a plane. If $0$ is not on this plane, we can find a point $\vec{F}_0$ in the plane which is nearest to $0$. It is easy to see that $\vec{F}_0=\vec{F}^{\bot}_0\neq 0$. This gives a contradiction because $\vec{F}^{\bot}=-2\vec{H}\equiv 0$. We complete the proof.
\end{proof}
\end{document} |
\begin{document}
\title{Envy-freeness and maximum Nash welfare}
\begin{abstract}
We study fair allocation of resources consisting of both divisible and indivisible goods to agents with additive valuations.
Recently, a fairness notion called envy-freeness for mixed goods (EFM) has been introduced for this setting.
EFM is a natural combination of the classic fairness notions: envy-freeness for divisible goods and envy-freeness up to one good for indivisible goods.
When only divisible or only indivisible goods exist, it is known that an allocation that achieves the maximum Nash welfare (MNW) satisfies the corresponding classic fairness notion.
On the other hand, for mixed goods, an MNW allocation does not necessarily entail EFM.
In this paper, we formally prove that an MNW allocation for mixed goods is envy-free up to one (indivisible) good for mixed goods.
\end{abstract}
\section{Introduction}
Fair allocation of finite goods has been attracting attention for decades from many research communities such as economics and computer science.
Most of the literature on fair allocation can be categorized into two types.
One is the research that deals with heterogeneous and infinitely divisible goods.
In this case, the fair allocation problem is also called cake cutting by merging all goods into one cake.
The other deals with indivisible goods.
In this paper, we focus on the case where agents' valuations over divisible goods are non-atomic, and those over indivisible goods are additive.
A standard fairness notion is \emph{envy-freeness} (EF)~\cite{Foley}.
This property requires that no agent prefers another agent's bundle to her own bundle.
It is well known that for divisible goods, there exists an envy-free allocation such that all goods are distributed~\cite{Varian}, and such an allocation can be found by a discrete and bounded protocol~\cite{Aziz2016}.
On the other hand, for indivisible goods, such an envy-free allocation may not exist.
For example, when there is a single good and two agents, either agent can receive the good, and the agent with no good envies the other.
Thus, an approximate fairness notion called \emph{envy-freeness up to one good} (EF1) has been proposed~\cite{Budi11a,Lipton2004}.
An allocation is said to be EF1 if we can eliminate the envy of any agent towards another one by removing some good from the envied agent's bundle.
It is known that an EF1 allocation always exists and can be found in polynomial time~\cite{Lipton2004} under additive valuations.
Another prominent fairness notion is based on the Nash welfare, which is the product of all positive utilities (equivalently, the sum of logarithm of positive utilities) in an allocation.
An allocation is said to be a \emph{maximum Nash welfare} (MNW) allocation if it maximizes the number of agents with positive utilities among all allocations and subject to that, it maximizes the Nash welfare.
While maximizing the Nash welfare aims at social optimality, it also attains fairness among individuals.
It is known that every MNW allocation is EF for divisible goods~\cite{Varian, Segal-Halevi2019}, and EF1 for indivisible goods~\cite{Caragiannis2019} under additive valuations.
Moreover, an MNW allocation satisfies an efficiency notion called \emph{Pareto optimality} (PO).
Recently, fair allocation of a mix of divisible goods and indivisible ones has been introduced~\cite{Bei2021}.
In a real-world application, such a situation often occurs; for example, an inheritance may contain divisible goods (e.g., land and money) and indivisible goods (e.g., cars and houses).
In an allocation of such mixed goods, neither EF nor EF1 is a reasonable goal.
Since this situation includes the fair allocation of indivisible goods, EF is too strong a requirement.
On the other hand, EF1 is too weak for divisible goods and may lead to an unfair allocation.
Thus, Bei et al.~\cite{Bei2021} introduced a fairness notion called \emph{envy-freeness for mixed goods} (EFM), which is a generalization of EF and EF1.
Intuitively, in an EFM allocation, no agent envies an agent whose bundle consists only of indivisible goods by more than one indivisible good; moreover, no agent envies an agent whose bundle contains a positive fraction of a divisible good.
Contrary to expectation, an MNW allocation for mixed goods is incompatible even with a weaker variant of EFM~\cite{Bei2021}.
However, Caragiannis et al.~\cite{Caragiannis2019} mentioned that an MNW allocation for mixed goods is \emph{envy-free up to one good for mixed goods} (EF1M), that is, each agent $i$ does not envy another agent $j$ after some indivisible good is removed from the bundle of agent $j$.
They informally claimed that their proof to show that MNW implies EF1 works by regarding a divisible good as a set of $k$ indivisible goods and letting $k$ go to infinity.
In fact, when we see divisible goods in this way,
EF for divisible goods coincides with EF1 in the sense that any envy can be eliminated by removing an infinitely small indivisible good.
Thus, EF1M requires each agent to use the EF1 criterion regardless of the composition of another agent's bundle.
On the other hand, in an EFM allocation, when agent $j$ receives indivisible goods and a positive fraction of a divisible good, the envy of other agents must be eliminated by removing any ``indivisible'' good belonging to agent $j$.
This requirement is exactly the \emph{envy-freeness up to any good} (EFX), which is stronger than EF1.
Therefore, the EF1M is a natural and essential notion in correspondence with the MNW.
The problem is that an explicit proof was not provided in the paper~\cite{Caragiannis2019}.
Taking the limit is not a trivial task, particularly when we allow general non-atomic valuations.
In this paper, we formally prove that an MNW allocation is EF1M and PO under additive valuations over indivisible goods.
In fact, we do this in a different way from that described in the paper~\cite{Caragiannis2019}, which explicitly treats divisible goods as infinitely small indivisible goods.
Our proof is a simple and natural combination of proofs for divisible goods~\cite{Segal-Halevi2019} and indivisible goods~\cite{Caragiannis2019}.
As a corollary, we can prove that there always exists an EF1M and PO allocation.
\subsection{Related work}
As mentioned above, there is an EF allocation for divisible goods, and an EF1 allocation for indivisible goods.
It is also known that there exists an EFX allocation for special cases~\cite{Plaut2020,Chaudhury2020,Mahara2020,Mahara2021,Amanatidis2021}.
However, the existence is open even for four agents with additive valuations.
There also exist many papers that aim to find a fair allocation, and the problem of finding an MNW allocation is one of the most studied topics in this area.
For homogeneous divisible goods, this problem is a special case of finding a market equilibrium in Fisher's market model and is formulated as the convex program of Eisenberg and Gale~\cite{Eisenberg1959}~(see, e.g., the textbook~\cite{textbook} for details).
Later, a combinatorial polynomial-time algorithm was proposed~\cite{Devanur2008}, followed by strongly polynomial-time ones~\cite{Orlin2010, Vegh2016}.
For indivisible goods, the problem is APX-hard even for additive valuations~\cite{Lee2017}.
Thus, approximation algorithms have been extensively studied~\cite{Cole2015,Cole2017,Garg2018,Anari2018}.
The problem is also known to be polynomial-time solvable for special valuations such as binary additive valuations~\cite{Barman2018} and, more generally, matroid rank functions~\cite{Benabbou2021}.
\section{Preliminaries}\label{sec:preliminaries}
In this section, we describe the fair allocation problem.
Let $N = \{1,2,\dots, n\}$ be the set of $n$ agents.
Let $M$ be the set of $m$ indivisible goods, and $C$ be the set of $\ell$ heterogeneous divisible goods.
We denote by $E=M \cup C$ the set of \emph{mixed} goods.
We can regard the $j$th divisible good as an interval $[\frac{j-1}{\ell}, \frac{j}{\ell})$.
Thus, we may identify the entire set of divisible goods with one interval $C=[0,1)$, which we call a \emph{cake}.
Let $\mathcal{C}$ be the family of Borel sets of $C$.
We call a Borel set of $C$ a piece.
Each agent $i$ has a valuation function $u_i \colon 2^M \cup \mathcal{C} \to \mathbb{R}_+$.
For ease of notation, we write $u_i(g)=u_i(\{g\})$ for all indivisible goods $g\in M$.
We assume the following properties for valuation functions.
(i) Each $u_i$ is \emph{additive} over indivisible goods: $u_i(M')=\sum_{g\in M'} u_i(g)$ for every $M' \subseteq M$.
(ii) The valuation functions over the cake are non-atomic: any point in $C$ has value $0$ for all agents.
(iii) The valuation is countably additive: the valuation of a piece which is a countable union of disjoint subintervals is the sum of the valuation of the subintervals.
In this paper, we consider allocating all goods.
Without loss of generality, we assume that all agents have a positive value for some good, and every good receives a positive value from some agent.
An allocation of indivisible goods in $M$ is an ordered partition $\mathcal{M}=(M_1,\ldots, M_n)$ of $M$ such that agent $i$ receives the bundle $M_i \subseteq M$.
Similarly, an allocation of the cake $C$ is a partition $\mathcal{C}=(C_1,\ldots,C_n)$ of $C$ such that agent $i$ receives the piece $C_i \in \mathcal{C}$.
An allocation of the mixed goods $E$ is defined as $\mathcal{A}=(A_1,\dots, A_n)$ such that $A_i=M_i\cup C_i$ is a bundle for agent $i$.
The \emph{utility} of agent $i$ in an allocation $\mathcal{A}$ is the sum of the valuations of $M_i$ and $C_i$; for simplicity, we write the utility of agent $i$ as $u_i(A_i)=u_i(M_i)+u_i(C_i)$.
We provide definitions of the fairness notions used in this paper.
\begin{definition}
An allocation $\mathcal{A}$ for mixed goods is said to be \emph{envy-free} (EF) if it satisfies $u_i(A_i)\geq u_i(A_j)$ for any agents $i, j\in N$.
\end{definition}
\begin{definition}
An allocation $\mathcal{M}$ for indivisible goods is said to be \emph{envy-free up to one good} (EF1) if it satisfies for any agents $i, j\in N$, either $u_i(M_i)\geq u_i(M_j)$ or $u_i(M_i) \geq u_i(M_j\setminus \{g\})$ for some $g\in M_j$.
\end{definition}
\begin{definition}[Bei et al.~\cite{Bei2021}]
An allocation $\mathcal{A}$ for mixed goods is said to be \emph{envy-free for mixed goods} (EFM) if it satisfies the following condition for any agents $i, j \in N$:
\begin{itemize}
\item if $C_j = \emptyset$, then $u_i(A_i)\geq u_i(A_j)$
or $u_i(A_i) \geq u_i(A_j\setminus \{g\})$ for some indivisible good $g\in A_j$,
\item otherwise (i.e., $C_j\neq \emptyset$), $u_i(A_i)\geq u_i(A_j)$.
\end{itemize}
\end{definition}
\begin{definition}[Bei et al.~\cite{Bei2021}]
An allocation $\mathcal{A}$ for mixed goods is said to be \emph{weak envy-free for mixed goods} (weak EFM) if it satisfies the following condition for any agents $i, j \in N$:
\begin{itemize}
\item if $M_j\neq \emptyset$, and additionally either $C_j=\emptyset$ or $u_i(C_j)=0$, then
$u_i(A_i) \geq u_i(A_j\setminus \{g\})$ for some indivisible good $g\in A_j$,
\item otherwise, $u_i(A_i)\geq u_i(A_j)$.
\end{itemize}
\end{definition}
For an allocation $\mathcal{A}$, the \emph{Nash welfare} is defined as
\[
NW(\mathcal{A})=\prod_{i\in N:\, u_i(A_i) >0} u_i(A_i).
\]
\begin{definition}
An allocation $\mathcal{A}$ is said to be a \emph{maximum Nash welfare} (MNW) allocation if the number of agents with positive utilities is maximized among all allocations, and subject to that, the Nash welfare is maximized.
\end{definition}
Besides fairness, an efficient allocation is also desirable.
We focus on the Pareto optimality as an efficiency notion.
\begin{definition}
For an allocation $\mathcal{A}$ for mixed goods,
another allocation $\mathcal{A}'$ is said to \emph{Pareto dominate} $\mathcal{A}$ if it satisfies that $u_i(A'_i) \geq u_i(A_i)$ for all agents $i\in N$ and some agent is strictly better off.
An allocation $\mathcal{A}$ is called \emph{Pareto optimal} (PO) if no allocation Pareto dominates $\mathcal{A}$.
\end{definition}
We introduce the new fairness notion mentioned in the paper~\cite{Caragiannis2019}.
This requires each agent to use the EF1 criterion to compare her bundle with others.
\begin{definition}
An allocation $\mathcal{A}$ for mixed goods is said to be \emph{envy-free up to one good for mixed goods} (EF1M) if it satisfies the following condition for any agents $i, j\in N$:
\begin{itemize}
\item if $M_j=\emptyset$, then $u_i(A_i)\geq u_i(A_j)$,
\item otherwise, $u_i(A_i)\geq u_i(A_j)$ or $u_i(A_i) \geq u_i(A_j\setminus \{g\})$ for some indivisible good $g\in A_j$.
\end{itemize}
\end{definition}
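To make the definition concrete, here is a small checker for the EF1M condition under additive valuations; the data encoding (value tables and bundle sets) is our own illustrative sketch, not from the paper.

```python
def is_ef1m(good_vals, M_alloc, cake_vals):
    """good_vals[i][g]: agent i's value for indivisible good g.
    M_alloc[j]: set of indivisible goods held by agent j.
    cake_vals[i][j]: agent i's value for agent j's piece of the cake."""
    n = len(good_vals)

    def u(i, j):  # agent i's value for agent j's whole bundle
        return sum(good_vals[i][g] for g in M_alloc[j]) + cake_vals[i][j]

    for i in range(n):
        for j in range(n):
            if u(i, i) >= u(i, j):
                continue                      # no envy toward j
            if not M_alloc[j]:
                return False                  # j holds only cake: full EF required
            if u(i, i) < u(i, j) - max(good_vals[i][g] for g in M_alloc[j]):
                return False                  # no single-good removal kills the envy
    return True

# Two indivisible goods, no cake: the diagonal allocation is EF1M.
assert is_ef1m([[2, 1], [1, 2]], [{0}, {1}], [[0, 0], [0, 0]])
```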
By the definitions, EF1M is weaker than weak EFM.
Indeed, let $\mathcal{A}$ be a weak EFM allocation, and choose two agents $i, j$ arbitrarily.
If $M_j \neq \emptyset$ and either $C_j=\emptyset$ or $u_i(C_j)=0$, then the latter part of the condition of EF1M is satisfied.
Otherwise, since agent $i$ does not envy agent $j$, the condition of EF1M is satisfied.
Therefore, $\mathcal{A}$ is EF1M.
To illustrate the difference between EF1M and (weak) EFM, let us see the example given by Bei et al.~\cite{Bei2021} that shows MNW and weak EFM are incompatible.
\begin{example}[Bei et al.~\cite{Bei2021}]
Consider the following instance: there are two agents, two indivisible goods $g_1, g_2$, and a homogeneous cake $C$.
The valuation of agent $1$ is defined as $u_1(g_1)=u_1(g_2)=0.4$ and $u_1(C)=0.2$, and that of agent $2$ is defined as $u_2(g_1)=u_2(g_2)=0.499$ and $u_2(C)=0.002$.
Then the allocation $\mathcal{A}=(\{g_1, C\}, \{g_2\})$ is, up to exchanging $g_1$ and $g_2$, the unique MNW allocation.
This allocation is not weak EFM because $A_1$ contains a piece of the cake but agent $2$ envies agent $1$.
On the other hand, $\mathcal{A}$ is EF1M because removing the good $g_1$ from $A_1$ eliminates the envy of agent $2$.
\end{example}
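Since the cake in this example is homogeneous, every allocation is described by the goods' owners plus a single cake-split fraction, so a small grid search can confirm the claim numerically (a sketch; agent $0$ below plays the role of agent $1$ in the example, and the grid resolution is arbitrary):

```python
import itertools

good_util = [[0.4, 0.4], [0.499, 0.499]]   # values for g1, g2
cake_util = [0.2, 0.002]                   # value of the whole homogeneous cake

best_key, best_owner, best_x = None, None, None
for owner in itertools.product([0, 1], repeat=2):   # who receives g1, g2
    for k in range(1001):
        x = k / 1000                                # fraction of cake to agent 0
        frac = [x, 1 - x]
        u = [sum(good_util[i][g] for g in (0, 1) if owner[g] == i)
             + frac[i] * cake_util[i] for i in (0, 1)]
        nash = 1.0
        for v in u:
            if v > 0:
                nash *= v
        key = (sum(v > 0 for v in u), nash)         # lexicographic MNW objective
        if best_key is None or key > best_key:
            best_key, best_owner, best_x = key, owner, x
```

The search returns one good plus the entire cake for one agent and the other good for the other agent, with Nash welfare $0.6\times 0.499=0.2994$, as claimed.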
Since EFM is partially based on the EFX criterion, we can define a stronger variant of EFM, also mentioned by Bei et al.~\cite{Bei2021}.
\begin{definition}
An allocation $\mathcal{A}$ for mixed goods is said to be \emph{envy-free up to any good for mixed goods} (EFXM) if it satisfies the following condition for any agents $i, j\in N$:
\begin{itemize}
\item if $C_j = \emptyset$, then $u_i(A_i)\geq u_i(A_j)$ or $u_i(A_i) \geq u_i(A_j\setminus \{g\})$ for any indivisible good $g\in A_j$,
\item otherwise, $u_i(A_i)\geq u_i(A_j)$.
\end{itemize}
\end{definition}
It is not difficult to see that any EFXM allocation is EFM.
Hence, we can summarize the relationships between notions as
$$
\text{EF} \Rightarrow \text{EFXM} \Rightarrow \text{EFM} \Rightarrow \text{weak EFM} \Rightarrow \text{EF1M}.
$$
Bei et al.~\cite{Bei2021} also mentioned that we can find an EFXM allocation by using their EFM algorithm whenever an EFX allocation for indivisible goods can be found.
\section{Main result}\label{sec:main}
In this section, we show that every MNW allocation for mixed goods is EF1M.
To prove this, we use the following lemma, which is also used in~\cite{Segal-Halevi2019} to show that an MNW allocation of the cake is EF.
\begin{lemma}[Stromquist and Woodall~\cite{Stromquist1985}]\label{lem:sw}
Let $i,j$ be two agents.
Let $H \subseteq [0, 1)$ be a piece such that $u_i(H)>0$ and $u_j(H) > 0$.
Then, for every $z \in [0,1]$, there exists $H^z \subseteq H$ such that
\[
u_i(H^z)=z\cdot u_i(H) \quad \text{and} \quad u_j(H^z)=z\cdot u_j(H).
\]
\end{lemma}
\begin{theorem}\label{thm:main}
When all agents have additive valuations over indivisible goods, every MNW allocation $\mathcal{A}$ for mixed goods is EF1M and PO.
\end{theorem}
\begin{proof}
We first observe that an MNW allocation $\mathcal{A}$ is PO: the existence of an allocation that Pareto dominates $\mathcal{A}$ would contradict the choice of $\mathcal{A}$.
We may also assume that every agent has a positive valuation for each indivisible good and piece in her own bundle; otherwise, giving a zero-valued good to another agent who wants it would increase the number of agents with positive utility or the Nash welfare, again contradicting the choice of $\mathcal{A}$.
If there are only indivisible goods, then an MNW allocation is EF1~\cite{Caragiannis2019}.
Thus we consider the case where the cake is present.
Suppose that some agent $i\in N$ envies agent $j$ and the condition of EF1M is not satisfied for these agents.
First, assume that $M_j=\emptyset$ but $u_i(A_i)<u_i(A_j)$.
Then $C_j \neq \emptyset$.
We note that $u_j(C_j)>0$ by the above assumption, and that $u_i(A_i) > 0$, since otherwise transferring to agent $i$ a small piece of $C_j$ that she values positively would increase the number of agents with positive utilities.
Choose a number $z\in (0,1)$ such that $z\cdot u_i(C_j) < u_i(C_j)-u_i(A_i)$.
By Lemma~\ref{lem:sw} with $H=C_j$, there exists
a piece $C' \subseteq C_j$ such that
$u_i(C')=z\cdot u_i(C_j)$ and $u_j(C')=z\cdot u_j(C_j)$.
Thus, we have
\begin{align*}
u_i(A_i) + u_i(C') < u_i(C_j) \quad \text{and} \quad \frac{u_j(C')}{u_i(C')} = \frac{u_j(C_j)}{u_i(C_j)}.
\end{align*}
Let $\mathcal{A}'$ be an allocation such that $A'_i=A_i\cup C'$, $A'_j=A_j\setminus C'$ and $A'_k=A_k$ for each $k\neq i,j$.
Then, we observe that the number of agents with positive utilities remains the same, and $NW(\mathcal{A}') > NW(\mathcal{A})$ because
\begin{align*}
&\frac{1}{\prod_{k\neq i,j: \, u_k(A_k)>0}u_k(A_k)}(NW(\mathcal{A}')-NW(\mathcal{A})) \\
&= (u_i(A_i)+u_i(C'))(u_j(A_j)-u_j(C'))-u_i(A_i)u_j(A_j) \\
&=u_i(C') \cdot \left(u_j(A_j)-(u_i(A_i)+u_i(C'))\frac{u_j(C')}{u_i(C')} \right) \\
&>u_i(C')\cdot \left(u_j(A_j)-u_i(A_j)\frac{u_j(C_j)}{u_i(C_j)} \right)\\
&=u_i(C')\cdot \left(u_j(A_j)-u_i(A_j)\frac{u_j(C_j)}{u_i(A_j)} \right)
= 0.
\end{align*}
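As a sanity check of the displayed computation, one can plug in concrete (hypothetical) numbers satisfying the hypotheses of this case and verify that the transfer of $C'$ increases the Nash welfare:

```python
# Hypothetical instance for the case M_j = empty: agent i envies agent j.
ui_Ai, ui_Cj = 0.3, 0.5          # u_i(A_i) < u_i(C_j) = u_i(A_j)
uj_Aj = uj_Cj = 0.6              # M_j empty, so A_j is just the piece C_j
z = 0.3                          # z * u_i(C_j) = 0.15 < u_i(C_j) - u_i(A_i) = 0.2
ui_C, uj_C = z * ui_Cj, z * uj_Cj   # the lemma scales both utilities by z

old_nw = ui_Ai * uj_Aj                        # contribution of i and j to NW(A)
new_nw = (ui_Ai + ui_C) * (uj_Aj - uj_C)      # after transferring C' to agent i
assert new_nw > old_nw                        # the Nash welfare strictly increases
```

Here `new_nw - old_nw` also matches the factored form $u_i(C')\,(u_j(A_j)-(u_i(A_i)+u_i(C'))\,u_j(C')/u_i(C'))$ from the display above.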
This contradicts the assumption that $\mathcal{A}$ is an MNW allocation.
Next, we assume that $M_j\neq\emptyset$ but $u_i(A_i)<u_i(A_j\setminus \{g\})$ for every indivisible good $g\in M_j$.
If $u_i(M_j)=0$, then we have $u_i(A_i)<u_i(A_j)=u_i(C_j)$, and we can see that giving some piece of $C_j$ to agent $i$ increases the Nash welfare in the same way as in the case above.
Thus, in the remainder, we may assume that $u_i(M_j)>0$; that is, there exists a good $g \in M_j$ with $u_i(g)>0$.
Let us choose arbitrarily $g^* \in \argmin_{g\in M_j:\, u_i(g)>0}\frac{u_j(g)}{u_i(g)}$.
By the assumption, $A_j \setminus \{g^*\}$ contains a piece or an indivisible good, and the Pareto optimality of $\mathcal{A}$ implies that $u_j(A_j\setminus \{g^*\}) >0$.
\paragraph{Case 1:} Suppose that $u_i(C_j)=0$.
By the choice of $g^*$, we have
$$0 \leq \frac{u_j(g^*)}{u_i(g^*)} \leq \frac{\sum_{g\in M_j}u_j(g)+u_j(C_j)}{\sum_{g\in M_j}u_i(g)} = \frac{u_j(A_j)}{u_i(A_j)}.$$
Let $\mathcal{A}'$ be an allocation such that $A'_i=A_i\cup \{g^*\}$, $A'_j=A_j\setminus \{g^*\}$ and $A'_k=A_k$ for $k\neq i,j$.
We note that $u_j(A_j\setminus\{g^*\}) > 0$ since $\mathcal{A}$ is PO and $u_i(A_j\setminus\{g^*\}) > 0$, and hence the number of agents with positive utilities remains the same.
It holds that
\begin{align*}
&\frac{1}{\prod_{k\neq i,j: \, u_k(A_k)>0}u_k(A_k)}(NW(\mathcal{A}')-NW(\mathcal{A})) \\
&= (u_i(A_i)+u_i(g^*))(u_j(A_j)-u_j(g^*))-u_i(A_i)u_j(A_j) \\
&=u_i(g^*) \cdot \left(u_j(A_j)-(u_i(A_i)+u_i(g^*))\frac{u_j(g^*)}{u_i(g^*)} \right) \\
&>u_i(g^*)\cdot \left(u_j(A_j)-u_i(A_j)\frac{u_j(g^*)}{u_i(g^*)} \right)\\
&\geq u_i(g^*)\cdot \left(u_j(A_j)-u_i(A_j)\frac{u_j(A_j)}{u_i(A_j)} \right)= 0,
\end{align*}
where the strict inequality holds because $u_i(A_i) < u_i(A_j)-u_i(g^*)$.
Therefore, $NW(\mathcal{A}')>NW(\mathcal{A})$.
\paragraph{Case 2:} Suppose that $u_i(C_j)> 0$ and $\frac{u_j(g^*)}{u_i(g^*)} \leq \frac{u_j(C_j)}{u_i(C_j)}$.
Similarly, we have
$$0 \leq \frac{u_j(g^*)}{u_i(g^*)} \leq \frac{\sum_{g\in M_j}u_j(g)+u_j(C_j)}{\sum_{g\in M_j}u_i(g)+u_i(C_j)} = \frac{u_j(A_j)}{u_i(A_j)}.$$
Let $\mathcal{A}'$ be an allocation such that $A'_i=A_i\cup \{g^*\}$, $A'_j=A_j\setminus \{g^*\}$ and $A'_k=A_k$ for $k\neq i,j$.
Then, $NW(\mathcal{A}')>NW(\mathcal{A})$ by the same discussion as above.
\paragraph{Case 3:} Suppose that $u_i(C_j)> 0$ and $\frac{u_j(g^*)}{u_i(g^*)} > \frac{u_j(C_j)}{u_i(C_j)}$.
By the PO of $\mathcal{A}$, we have $u_j(C_j)>0$.
Lemma~\ref{lem:sw} with $H=C_j$ implies that we can choose a number $z\in (0,1)$ and a piece $C'\subsetneq C_j$ such that
\begin{align}\label{eq:piece}
u_i(C')=z\cdot u_i(C_j) < u_i(A_j)-u_i(A_i)
\end{align}
and $u_j(C')=z\cdot u_j(C_j)$.
Let $\mathcal{A}'$ be an allocation such that $A'_i=A_i\cup C'$, $A'_j=A_j\setminus C'$ and $A'_k=A_k$ for $k\neq i,j$.
We note that $u_j(A'_j)>0$.
By the choice of $C'$ and the assumption $\frac{u_j(g^*)}{u_i(g^*)} > \frac{u_j(C_j)}{u_i(C_j)}$, we have
$$0 \leq \frac{u_j(C')}{u_i(C')} = \frac{u_j(C_j)}{u_i(C_j)} \leq \frac{\sum_{g\in M_j}u_j(g)+u_j(C_j)}{\sum_{g\in M_j}u_i(g)+u_i(C_j)} = \frac{u_j(A_j)}{u_i(A_j)}.$$
Therefore, by a similar argument to the case when $M_j=\emptyset$,
\begin{align*}
&\frac{1}{\prod_{k\neq i,j:\, u_k(A_k)>0}u_k(A_k)}(NW(\mathcal{A}')-NW(\mathcal{A})) \\
&= (u_i(A_i)+u_i(C'))(u_j(A_j)-u_j(C'))-u_i(A_i)u_j(A_j) \\
&=u_i(C') \cdot \left(u_j(A_j)-(u_i(A_i)+u_i(C'))\frac{u_j(C')}{u_i(C')} \right) \\
&>u_i(C')\cdot \left(u_j(A_j)-u_i(A_j)\frac{u_j(C')}{u_i(C')} \right) \geq 0,
\end{align*}
which implies that $NW(\mathcal{A}')>NW(\mathcal{A})$.
Here, the piece $C'$ chosen via Lemma~\ref{lem:sw} must satisfy~\eqref{eq:piece} to ensure the strict inequality.
Therefore, we derive a contradiction for each case.
This completes the proof.
\end{proof}
Theorem~\ref{thm:main} implies that there exists an EF1M and PO allocation if an MNW allocation exists.
The following lemma indicates that this is the case.
\begin{lemma}[Segal-Halevi and Sziklai~\cite{Segal-Halevi2019}]\label{lem:MNW}
When all agents have non-atomic valuations over the cake, there always exists an MNW allocation for the cake.
\end{lemma}
For each PO allocation $\mathcal{M}$ of indivisible goods, we can observe that there exists an MNW allocation under $\mathcal{M}$ as follows.
We set $u_j(g)=0$ for every good $g\in M$ and every agent $j$ other than the one who receives $g$ in $\mathcal{M}$.
Then, we regard all indivisible goods as homogeneous divisible goods (to ensure that the valuation functions are non-atomic).
Considering only agents $i$ with $u_i(M_i)>0$ or $u_i(C)>0$, Lemma~\ref{lem:MNW} implies that there exists an MNW allocation $\mathcal{A}$.
By the Pareto optimality of an MNW allocation, every indivisible good $g$ is fully allocated to the agent $i$ such that $g \in M_i$.
Thus, $\mathcal{A}$ is an MNW allocation under the allocation $\mathcal{M}$ of indivisible goods.
We see that there exists an MNW allocation for mixed goods by enumerating all possible allocations of indivisible goods.
\begin{corollary}
When all agents have non-atomic valuations over the cake and additive valuations over indivisible goods, there always exists an EF1M and PO allocation for mixed goods.
\end{corollary}
We also note that not every EF1M allocation is PO, since EFM and PO are incompatible, as pointed out by Bei et al.~\cite{Bei2021}.
\section{Computation of EF1M allocations}
In this section, we mention that an EF1M allocation can be found in finite steps in the Robertson-Webb model~\cite{Robertson1998}.
This is in contrast to the fact that it remains open whether an EFM allocation can be found in finite steps.
Let $\mathcal{C}$ be an EF allocation of $C$, and let $\mathcal{M}$ be an EF1 allocation of $M$.
These can be found in finite steps.
Let $\mathcal{A}$ be an allocation defined by $A_i = M_i \cup C_i$ ($i \in N$).
For any agents $i, j$: if $M_j=\emptyset$, then $u_i(A_i)\geq u_i(C_i) \geq u_i(C_j) = u_i(A_j)$. If $M_j \neq \emptyset$ and $u_i(A_i)<u_i(A_j)$, then $u_i(M_i)< u_i(M_j)$ (since $u_i(C_i)\geq u_i(C_j)$), so there exists an indivisible good $g \in M_j$ such that $u_i(M_i) \geq u_i(M_j\setminus \{g\})$, which implies that
\begin{align*}
u_i(A_i) &= u_i(C_i)+u_i(M_i) \\
&\geq u_i(C_j)+u_i(M_j \setminus \{g\}) = u_i(A_j \setminus \{g\}).
\end{align*}
Therefore, $\mathcal{A}$ is EF1M.
However, an allocation obtained in this way may not attain the maximum Nash welfare.
For example, consider an instance with a homogeneous cake $C$, a single indivisible good $g$, and two agents, each of whom values both $C$ and $g$ at $1$.
An EF allocation of $C$ gives half of the cake to each agent.
In the resulting EF1M allocation, one agent receives $g$ and half of $C$, the other receives half of $C$, and the Nash welfare is $1.5 \times 0.5=0.75$.
On the other hand, the MNW allocation is to allocate the whole of $C$ to one agent and $g$ to the other agent, whose Nash welfare is $1 \times 1 = 1$.
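This comparison can be checked in a few lines (agent indices and the cake-split grid below are illustrative):

```python
# Agent 0 receives g; both agents value g and the whole cake C at 1 each.
nw_combined = (1 + 0.5) * 0.5          # EF cake split + EF1 goods: 1.5 * 0.5

# Grid over the fraction x of the cake also given to agent 0:
# Nash welfare is (1 + x) * (1 - x), maximized at x = 0.
best = max(((1 + x) * (1 - x), x) for x in [k / 100 for k in range(101)])
assert best[0] > nw_combined           # giving the whole cake to the other agent wins
```

The optimum $x=0$ corresponds exactly to the MNW allocation described above, with Nash welfare $1$.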
\section{Conclusion}
In this paper, we formally proved that an MNW allocation for mixed goods is always EF1M and PO.
We also mentioned that there always exists an EF1M and PO allocation, and discussed computation of an EF1M allocation.
One future work is to construct an algorithm to find an MNW allocation for mixed goods.
Since truthfulness was out of the scope of this paper, it is also interesting future work to design a truthful mechanism that computes an EFM or EF1M allocation.
\section*{Acknowledgments.}
We would like to thank Yasushi Kawase for the comments to improve the clarity of this paper.
The second author was supported by JSPS KAKENHI Grant Numbers JP17K12646, JP21K17708, and JP21H03397, Japan.
\end{document} |
\begin{document}
\title{\bf Hartman's effect as evidence of quantum non-spatiality
}
\author{Massimiliano Sassoli de Bianchi
\\
\normalsize\itshape
Center Leo Apostel for Interdisciplinary Studies, \\ \itshape
Vrije Universiteit Brussel, 1050 Brussels, Belgium
\\
\normalsize
E-Mail:
\url{[email protected]}
}
\date{}
\maketitle
\begin{abstract}
\noindent We analyze the tunneling phenomenon from the viewpoint of Hartman's effect, showing that one cannot describe a tunneling entity as ``passing through'' the potential barrier, hence, one must renounce viewing it as being permanently present in space. In other words, Hartman's effect appears to be a strong indicator of quantum non-spatiality.
\end{abstract}
{\bf Keywords:} Tunneling, Hartman's effect, Time-delay, Non-spatiality
\\
\noindent Hartman's effect is the theoretical \cite{Hartman1962} and experimental \cite{Enders1992,Enders1993,Spielmann1994,Longhi2001} observation that the time-delay experienced by a quantum entity tunneling through a potential barrier is independent of its width, in the limit where the latter becomes increasingly larger.\footnote{In this article, we assume that the obtained experimental results have essentially confirmed the theoretical predictions, hence, we will not go into the analysis of possible criticisms about what the experiments would have actually shown; see for example \cite{Winful2006,Sokolovski2018} for two critical views.}
If we think of the tunneling process in classical terms, i.e., imagining the tunneling entity as a localized corpuscle moving from left to right (we limit our discussion to a single dimension of space), then for a sufficiently spatially extended barrier its motion becomes apparently superluminal: in order to account for the very large time-advance (i.e., negative time-delay) accumulated with respect to a free reference particle with the same incoming speed, it is necessary to attribute to the interacting entity an effective velocity within the barrier that tends to infinity as the barrier width increases.
More precisely, let $2a$ be the width of the potential barrier corresponding to the region where it has constant height $V_0$, with $2(a+b)$ the length of the overall interval where it has its support, including the two transition regions (see Fig.~\ref{figure1}). Then Hartman's effect is typically expressed as the following asymptotic behavior for the transmission time-delay:
\begin{equation}
\tau_{\rm tr}=-{2a\over v}\left[1 + {\rm O}\left({1\over a}\right)\right],
\label{Hartman}
\end{equation}
where $v$ is the incoming velocity (here assumed to be positive, since the entity comes from the left). By definition of time-delay, and still reasoning here as if we were dealing with an entity of a corpuscular nature, the time $T_{\rm tr}$ the particle spends within the barrier region is given by the time-delay (\ref{Hartman}) plus the time a free reference particle of same incoming velocity spends in that same region, i.e., $T_{\rm tr}=\tau_{\rm tr}+{2(a+b)\over v}$. Hence, as $a\to\infty$, $T_{\rm tr}$ tends to a constant, which means that the effective average velocity inside the barrier, $v_{\rm eff}={2(a+b)\over T_{\rm tr}}$, tends to infinity as $a\to\infty$.
\begin{figure}
\caption{A potential barrier whose region of constant height $V_0$ has width $2a$, while the left and right transition regions each have width $b$, hence $V(x)$ has its support in the interval $[-a-b,a+b]$, contained in a larger interval $[-R,R]$, $R>a+b$, which is the one considered for the calculation of the time-delay. If the energy $E={1\over 2}mv^2$ of the incoming entity is below the barrier height $V_0$, the classical turning point is at $x=-a-s_0$, where $E=V_0h(s_0)$.}
\label{figure1}
\end{figure}
The probability for a classical corpuscle of mass $m$ and velocity $v=\sqrt{2E/m}$, with $E<V_0$, to be transmitted through the barrier is of course zero; hence interpreting the tunneling phenomenon using a classical analogy is not correct, since only quantum entities can be detected on the right-hand side of the barrier with non-zero probability when the incoming energy $E$ is strictly below the barrier height $V_0$. But even if we renounce viewing the tunneling entity as a corpuscle, the question of the effective velocity of the process remains. How can a quantum entity, impinging on the barrier from the left, reappear almost instantly to its right, so that its effective velocity $v_{\rm eff}$ becomes arbitrarily high?
We know that superluminal phenomena exist and pose no problems in physics. A famous example is the speed of the shadow of an object \cite{Leblond2006}, which is not limited by any physical principle and therefore has no upper limit. Tunneling also appears to be a superluminal phenomenon, but, like the shadow, it cannot be used to produce faster-than-light signaling, considering in particular that the tunneling probability is very small.\footnote{The analysis of the notion of signaling is beyond the scope of this article. Here we limit ourselves to observing that the existence of superluminal (effective) velocities does not necessarily imply the possibility of achieving superluminal signaling; see for instance the discussion in \cite{Dumont2020}.} Nevertheless, to what extent can we say that the tunneling process corresponds to a propagation through space at an effective speed greater than the speed of light?
One of the main criticisms of attributing a superluminal velocity to the tunneling entity is that there would be no uncontroversial notion of tunneling time \cite{Hauge1989,Muga2008,Sassoli2012}, hence one cannot meaningfully talk of the tunneling entity's speed in the barrier region. In this regard, we can observe that there is no intrinsic arrival time operator in quantum mechanics \cite{Allcock1969,Muga2008}. However, the notion of sojourn time (also called dwell time), defined in terms of probabilities of presence, remains valid also in quantum mechanics, i.e., it can be associated with a bona fide self-adjoint operator \cite{Jaworski1989}. But to associate a time-delay to a tunneling entity, one needs a more specific notion of conditional time-delay, which in turn requires a notion of conditional sojourn time, that is, a sojourn time conditional on the fact that the entity is ultimately observed in a given state (in our case, a transmitted state).
Unfortunately, one cannot define a quantum conditional sojourn time operator, providing a meaningful answer to the question of how much time, on average, an entity has spent inside a given region of space, say a region of radius $R$ incorporating the potential barrier (see Fig.~\ref{figure1}), conditional on the fact that it is ultimately observed in a given state. This is because it is a composite question requiring to jointly answer two incompatible questions: one about the presence of the entity inside the region of interest, and one about knowing if it will be ultimately transmitted or reflected. Nevertheless, this does not mean that the definition of a meaningful notion of conditional time-delay would not be possible. Indeed, as explained in \cite{Sassoli1993,Sassoli2012}, time-delay is defined as the difference of two sojourn times -- the interaction and free sojourn times -- in the limit where the radius $R$ of the localization region tends to infinity. In this limit, the region covers the entire space, hence the question about the presence of the entity inside of it becomes compatible with the question of whether it will be transmitted or reflected. Without going into details, this means that an unphysical negative joint probability distribution (associated with two incompatible questions) becomes positive in the time-delay limit $R\to\infty$, and therefore recovers a consistent probabilistic interpretation. As a consequence, contrary to the notion of transmission sojourn time, one can still make sense of a notion of transmission time-delay \cite{Sassoli1993,Sassoli2012,Jaworski1988}, and more generally of angular time-delay \cite{Bolle1976}, compatibly with those approaches that use a direct analysis of the evolution of the wave packets \cite{Hauge1989,Jaworski1988,Sassoli2012}.
It must be said, however, that regardless of the difficulties posed by the theoretical definition of the notion of a quantum time-delay, associated with an entity that is ultimately observed to be transmitted, from an experimental point of view the time-delays associated with the tunneling process are accessible and measured, and it is therefore necessary to be able to explain why they are superluminal in nature, when through classical, or semiclassical, reasoning we associate such processes with an effective spatial velocity $v_{\rm eff}>c$.
That being said, we now want to get to the heart of our analysis, which is to take the notion of quantum (transmission and reflection) time-delay very seriously and use the tunneling phenomenon to extract information about the nature of a quantum entity. Our reasoning uses three ingredients: (1) a careful analysis of the process of a classical entity which is reflected from the barrier; (2) the semiclassical approximation; and (3) the remarkable properties of the quantum $S$-matrix, which will allow us to deduce that what happens for reflection also happens for the transmission (tunneling) process.
Our starting point is the calculation of the reflection time-delay experienced by a classical corpuscle interacting with the potential barrier (see Fig.~\ref{figure1}):
\begin{equation}
\label{potential}
V(x)=V_0
\begin{cases}
1 & |x| < a,\\
h(|x|-a) & a\leq |x|\leq a+b,\\
0 & |x| > a+b,
\end{cases}
\end{equation}
where $a>0$, and $h(s)$ is a smooth monotone decreasing function with support in the interval $[0, b]$, $b>0$, with $h(0) = 1$ and $h(b)=0$, describing the switching on and off of the barrier in space. A classical particle of energy $E<V_0$ is always reflected back from the potential barrier. To calculate the associated time-delay, one needs to calculate the difference between the time $T_{\rm re}^{\rm cl}$ it spends inside a region of radius $R>a+b$, and the time $T^{\rm cl}_0={2R\over v}$ spent in that same region by a free reference particle of same energy $E={1\over 2}mv^2$, then take the limit $R\to\infty$, which will of course be trivial here, since the potential is compactly supported:
\begin{equation}
\tau_{\rm re}^{\rm cl}=\lim_{R\to\infty}(T_{\rm re}^{\rm cl} -T^{\rm cl}_0)
\label{time-delay-classical}
\end{equation}
More precisely, a simple calculation gives for the classical (reflection) time-delay \cite{Narnhofer1981,Sassoli2000}:
\begin{equation}
\tau_{\rm re}^{\rm cl}=\left[2\int_{-b}^{-s_0}{ds\over v(s)}-{2b\over v}\right]+\left[0-{2a\over v}\right]
\label{time-classical}
\end{equation}
where $v(s)=\sqrt{2(E-V_0h(-s))/m}$ and $s_0$ is such that $E-V_0h(s_0)=0$, so that $x=-a-s_0$ is the extreme point reached by the particle inside the potential region, before being reflected back; see Fig.~\ref{figure1}.
Let us explain the physical content of (\ref{time-classical}). We know that the time spent by the free reference particle inside the potential region is ${2(a+b)\over v}$. The term ${2a\over v}$ describes the time spent in the region where the potential is constant, and the term ${2b\over v}$ the time spent in the two transition regions, where there is a non-zero gradient.\footnote{One can also think of the free reference particle as a particle that is elastically reflected at the origin. In both cases, it will have to cross the variable gradient transition region twice: once going in and once coming out.} Now, since the incoming energy $E$ is below the maximum height $V_0$ of the potential barrier, the particle cannot explore its region of length $2a$ where it is constant, hence the time it spends inside that region is exactly zero, which is the reason for the $0$-term indicated in the second bracket in (\ref{time-classical}), which quantifies the time-delay produced by the zero-gradient part of the potential. On the other hand, the first bracket in (\ref{time-classical}) is the contribution to the time-delay due to the transition region, which is explored twice by the reflected particle: first when it moves from left to right, until it reaches the extreme point of its movement at $x=-a-s_0$, and then when it returns the way it came, having reversed its motion. Clearly, the expression in the first bracket does not depend on the barrier width $2a$, but only on the detail of the transition region, hence we have the following asymptotic form:
\begin{equation}
\tau_{\rm re}^{\rm cl}=-{2a\over v}\left[1 + {\rm O}\left({1\over a}\right)\right].
\label{Hartman-ref}
\end{equation}
We can see that (\ref{Hartman-ref}) is identical to (\ref{Hartman}). However, unlike in (\ref{Hartman}), there is no mystery here in the observed asymptotic behavior. It comes from the second bracket in (\ref{time-classical}), which is a consequence of the fact that the reflected particle never enters the (constant height) barrier region of width $2a$. Note also that it would be meaningless to use (\ref{Hartman-ref}) to speak of an effective velocity of the reflected particle inside the barrier region of width $2a$, as it is clear that it never enters that region.
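Equation (\ref{time-classical}) can also be checked numerically for a concrete transition profile. Taking the (hypothetical) linear ramp $h(u)=1-u/b$, one can verify that the transition integral evaluates in closed form to $mvb/V_0$ per traversal, so that $\tau_{\rm re}^{\rm cl}=2mvb/V_0-2b/v-2a/v$; a direct midpoint quadrature reproduces this (units $m=1$ below):

```python
import math

def classical_reflection_delay(E, V0, a, b, m=1.0, n=200_000):
    """Midpoint-rule evaluation of eq. (time-classical) for the hypothetical
    linear ramp profile h(u) = 1 - u/b."""
    v = math.sqrt(2 * E / m)
    s0 = b * (1 - E / V0)              # turning point: E = V0 * h(s0)
    lo, hi = -b, -s0
    step = (hi - lo) / n
    total = 0.0
    for i in range(n):
        s = lo + (i + 0.5) * step
        vs = math.sqrt(2 * (E - V0 * (1 + s / b)) / m)   # v(s) of the text
        total += step / vs
    return (2 * total - 2 * b / v) - 2 * a / v
```

For large $a$ the result is dominated by the $-2a/v$ term, reproducing the asymptotic form (\ref{Hartman-ref}).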
What is the relevance of the above to the problem of interpreting Hartman's effect in quantum tunneling? As we said, two additional ingredients will allow us to make the connection: the fact that classical time-delays correspond to the semiclassical approximations of their quantum mechanical analogues \cite{Narnhofer1981,Berry1982,Fedoriouk1987,MartinSassoli1994}, and the existence of a remarkable relation between the transmission and reflection time-delays in quantum mechanics, which follows from the unitarity of the scattering matrix $S$ \cite{SassoliDiVentra1995}. Let us start by observing how the latter affects the quantum time-delays. For a one-dimensional problem, since there are only two directions, the scattering matrix is the $2\times 2$ matrix:
\begin{equation}
S=
\left[ \begin{array}{cc}
{\cal T} & {\cal R} \\
{\cal L} & {\cal T} \end{array} \right],
\label{ScatteringMatrix}
\end{equation}
where ${\cal T}$ is the transmission amplitude and ${\cal L}$ and ${\cal R}$ the reflection amplitudes from the left and right, respectively (for a given energy $E$). It is then straightforward to observe that the unitarity relation $S^\dagger S = I$ implies $|{\cal T}|^2 +|{\cal R}|^2 = |{\cal T}|^2 + |{\cal L}|^2 = 1$ (probability conservation), as well as the relation ${\cal T}^*{\cal R} + {\cal L}^*{\cal T} =0$, which can be equivalently written (using $|{\cal R}| = |{\cal L}|$) as the phase relation:
\begin{equation}
\alpha_{\cal T}+ {\pi\over 2} ={1\over 2}(\alpha_{\cal L} +\alpha_{\cal R}) \mod\pi.
\label{phase}
\end{equation}
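For completeness, here is the short derivation of (\ref{phase}): writing ${\cal T}=|{\cal T}|e^{i\alpha_{\cal T}}$, ${\cal R}=|{\cal R}|e^{i\alpha_{\cal R}}$ and ${\cal L}=|{\cal L}|e^{i\alpha_{\cal L}}$, the relation ${\cal T}^*{\cal R} + {\cal L}^*{\cal T} =0$, together with $|{\cal R}|=|{\cal L}|$, gives
\begin{equation*}
e^{i(\alpha_{\cal R}-\alpha_{\cal T})}=-e^{i(\alpha_{\cal T}-\alpha_{\cal L})},
\end{equation*}
hence $\alpha_{\cal R}-\alpha_{\cal T}=\alpha_{\cal T}-\alpha_{\cal L}+\pi \mod 2\pi$, which rearranges to (\ref{phase}).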
Since the quantum mechanical transmission and reflection time-delays for a given energy $E$ are given by the reduced Planck constant $\hbar$ times the energy derivatives of the phases of the transmission and reflection amplitudes, respectively \cite{Eisenbud1948,Wigner1955,Smith1960,Sassoli2012}, i.e., $\tau_{\rm tr}=\hbar {d\alpha_{\cal T}\over dE}$, $\tau_{\rm re}^{\rm left}=\hbar {d\alpha_{\cal L}\over dE}$, $\tau_{\rm re}^{\rm right}=\hbar {d\alpha_{\cal R}\over dE}$, from (\ref{phase}) we obtain the remarkable relation:
\begin{equation}
\tau_{\rm tr} ={1\over 2}(\tau_{\rm re}^{\rm left} +\tau_{\rm re}^{\rm right}).
\label{time-relation}
\end{equation}
In other words, the transmission time-delay is the arithmetic average of the reflection time-delays from the left and from the right, and since we have considered in our analysis a symmetric potential barrier, (\ref{time-relation}) simply becomes:
\begin{equation}
\tau_{\rm tr} = \tau_{\rm re}.
\label{time-relation2}
\end{equation}
Note that (\ref{time-relation}) and (\ref{time-relation2}) are not approximations, but exact equalities, expressing a fundamental difference between quantum mechanics and classical mechanics. Indeed, in classical mechanics, for a given non-zero incoming energy $E$, the particle is always either transmitted or reflected, so relations of the above kind would be meaningless for a classical corpuscle. On the other hand, in quantum mechanics both outcomes, transmission and reflection, are always possible, hence transmission and reflection time-delays can be jointly defined for a same incoming energy $E$, and the unitarity of the evolution process then forces them to be equal, in the sense of (\ref{time-relation}) and (\ref{time-relation2}).
Now comes the last step of our reasoning. As we mentioned already, the classical expression (\ref{Hartman-ref}) also holds for a quantum process, in the (semiclassical) regime where the reflection of the incoming waves at the potential barrier boundaries can be disregarded, which is generally the case if the de Broglie wavelength $\lambda$ of the incoming entity is sufficiently small compared to the size $b$ of the gradient of the potential in the transition regions, i.e., if $\lambda \ll b$, and this all the more so if the function $h(s)$ describing the transition regions is highly differentiable \cite{Narnhofer1981,Berry1982,Fedoriouk1987,MartinSassoli1994}.
In view of the above, it follows that equations (\ref{time-classical})-(\ref{Hartman-ref}) are good approximations for the quantum mechanical transmission time-delays, and this in itself constitutes a derivation of Hartman's effect valid for potential barriers of general shape, when the barrier's transition regions vary over distances much larger than the de Broglie wavelength of the incoming entity. However, what interests us is not the derivation in itself, but the insight it allows us to gain into the temporal aspects of the tunneling phenomenon.
Hartman's effect (\ref{Hartman}), when viewed from the perspective of a reflected entity (\ref{Hartman-ref}), is just the consequence of the fact that the latter does not penetrate the entire barrier width, but only interacts with its external transition region. Hence, in its race with a free reference entity, it has to travel a much shorter distance (roughly, shorter by $2a$), which explains its considerable time-advance (or negative time-delay). From the exact equality (\ref{time-relation2}), it then follows that, mutatis mutandis, the same must be true for the tunneling entity. Indeed, the transmission and reflection time-delays being necessarily always identical, the mechanisms that are at their origin must also be always the same, i.e., also the time-advance experienced by the tunneling entity must be due to the fact that, in its race, it avoids altogether the energetically forbidden region of width $2a$, and this in exactly the same way it is avoided by the reflected entity. Schematically, we thus have the situation depicted in Fig.~\ref{figure2}.
\begin{figure}
\caption{Unlike the reference entity that evolves freely, the potential region for which $E-V(x)<0$ remains inaccessible both to the reflected and the transmitted (tunneling) entities. Outside of this inaccessible region, the scattering entities interact for the same amount of time, on average, with the transition regions, whether they are ultimately reflected or transmitted. In the first case, they interact twice with the left transition region (assuming they come from the left), in the second case they interact once with both transition regions, which are here identical being the barrier symmetric. For non-symmetric barriers, the same applies if one considers an average over the processes where the entities approach the barrier both from the left and from the right, as per (\ref{time-relation}).}
\label{figure2}
\end{figure}
At this point, if we take the above analysis seriously, we are faced with a serious interpretational problem. Just as it would be absurd to think of the reflected entity as ``passing through'' the potential barrier, the same applies to the tunneling one. The difficulty with the latter is that it approaches the potential barrier from the left, then reappears on its right-hand side, while in between there is an inaccessible region that is not crossed (otherwise the crossing would affect the resulting time-delay, making it different from the reflection time-delay), and there are no other paths from the left to the right of the potential barrier. One might thus rightly ask: how can a spatial entity moving in space suddenly skip an entire region of it, of arbitrary size, as if it did not exist, and instantly reappear on the other side of it? What we want to emphasize is that it is one thing to think of an entity that, for unclear reasons, would be able to move through a potential region with an arbitrary effective velocity, and quite another to observe that in its evolution it is as if that region did not exist, as our analysis strongly suggests.
In our view, the conundrum has only one solution: we have to think of a quantum entity not as a spatial entity, permanently present in space, but as a non-spatial entity, only ephemerally present in space. Indeed, a non-spatial entity can in principle actualize its presence in different regions of space, independently of the existence of inaccessible regions separating them. Note that the notion of non-spatiality was first introduced by Diederik Aerts in the late eighties \cite{Aerts1990} and has been discussed in a number of works (see \cite{Aerts1998,Aerts1999} and the references cited therein); since then, other authors have recognized the need for its introduction in order to fully explain quantum mechanics \cite{Christiaens2003,Kastner2012,Sassoli2021}. In a nutshell, non-spatiality is the observation that \cite{Aerts1999}: ``Reality is not contained within space. Space is a momentaneous crystallization of a theatre for reality where the motions and interactions of the macroscopic material and energetic entities take place. But other entities -- like quantum entities for example -- `take place' outside space, or -- and this would be another way of saying the same thing -- within a space that is not the three-dimensional Euclidean space.''
The analysis we have presented here adds to numerous other analyses of quantum phenomena suggesting that our reality would be fundamentally non-spatial \cite{Sassoli2021}. To give just two examples, quantum measurements and the Born rule can be explained by introducing ``hidden'' interactions that are genuinely non-spatial \cite{asdb2014}; also, the phenomenon of quantum entanglement, when understood as correlations created by ``hidden'' connections, again requires the latter to be non-spatial in nature \cite{asdb2016}. In this article, we offered what we think is new evidence in favor of the non-spatiality hypothesis, resulting from the analysis of the quantum tunneling phenomenon and the so-called Hartman effect.\footnote{Note that non-spatiality also forces us to update our interpretation of the very notion of sojourn time of a quantum entity in a given region of space, in the sense that a quantum sojourn time should no longer be understood as an actual time, but as a measure of the total availability of a non-spatial entity in participating in a process of actualization of its spatial localization \cite{Sassoli2012b}.}
We conclude by observing that once we take seriously the notion of non-spatiality (without falling into the trap of assuming that non-spatiality rhymes with non-reality), an interesting question imposes itself: What is the nature of a non-spatial entity, and how does spatiality emerge from non-spatiality? A possible answer, which is still a work in progress, is contained in the so-called `conceptuality interpretation of quantum mechanics'; we refer the interested reader to \cite{Aertsetal2018} and the references cited therein.
\end{document} |
\begin{document}
\title{Reduction Theorems for Hybrid Dynamical Systems}
\author{Manfredi Maggiore, Mario Sassano, Luca Zaccarian
\thanks{This research was carried out while M. Maggiore
was on sabbatical leave at the Laboratoire des Signaux et
Syst\`emes, CentraleSup\'elec, Gif sur Yvette, France.
M. Maggiore was supported by the Natural Sciences
and Engineering Research Council of Canada (NSERC).
L. Zaccarian is supported in
part by grant PowerLyap funded by CaRiTRo.}
\thanks{M. Maggiore is with the Department
of Electrical and Computer Engineering, University of Toronto,
Toronto, ON, M5S3G4 Canada \texttt{[email protected]}}
\thanks{M. Sassano is with the Dipartimento di
Ingegneria Civile e Ingegneria Informatica, ``Tor Vergata'', Via del
Politecnico 1, 00133, Rome, Italy \texttt{[email protected]}}
\thanks{L. Zaccarian is with
LAAS-CNRS, Universit\'e de Toulouse, CNRS, Toulouse, France and with Dipartimento di
Ingegneria Industriale, University of Trento, Italy \texttt{[email protected]}}
}
\maketitle
\begin{abstract}
This paper presents reduction theorems for stability, attractivity,
and asymptotic stability of compact subsets of the state space of a
hybrid dynamical system. Given two closed sets $\Gamma_1
\subset \Gamma_2 \subset \mathbb Re^n$, with $\Gamma_1$ compact, the theorems
presented in this paper give conditions under which a qualitative
property of $\Gamma_1$ that holds relative to $\Gamma_2$ (stability,
attractivity, or asymptotic stability) can be guaranteed to also hold
relative to the state space of the hybrid system. As a consequence of
these results, sufficient conditions are presented for the stability
of compact sets in cascade-connected hybrid systems. We also present a
result for hybrid systems with outputs that converge to zero along
solutions. If such a system enjoys a detectability property with
respect to a set $\Gamma_1$, then $\Gamma_1$ is globally
attractive. The theory of this paper is used to develop a hybrid
estimator for the period of oscillation of a sinusoidal signal.
\end{abstract}
\section{Introduction}
\IEEEPARstart{O}{ver} the past ten to fifteen years, research in hybrid dynamical systems theory
has intensified following the work of A.R. Teel and co-authors
(e.g.,~\cite{GoeSanTee09,GoeSanTee12}), which unified previous results
under a common framework, and produced a comprehensive theory of
stability and robustness. In the framework
of~\cite{GoeSanTee09,GoeSanTee12}, a hybrid system is a dynamical
system whose solutions can flow and jump, whereby flows are modelled by
differential inclusions, and jumps are modelled by update
maps. Motivated by the fact that many challenging control
specifications can be cast as problems of set stabilization, the
stability of sets plays a central role in hybrid systems theory.
For continuous nonlinear systems, a useful way to assess whether a
closed subset of the state space is asymptotically stable is to
exploit hierarchical decompositions of the stability problem. To
illustrate this fact, consider the continuous-time cascade-connected
system
\begin{equation}\label{eq:sys}
\begin{aligned}
&\dot x^1 = f_1(x^1,x^2) \\
&\dot x^2 = f_2(x^2),
\end{aligned}
\end{equation}
with state $(x^1,x^2) \in \mathbb Re^{n_1} \times \mathbb Re^{n_2}$, and assume that
$f_1(0,0)=0$, $f_2(0)=0$. To determine whether or not the equilibrium
$(x^1,x^2) = (0,0)$ is asymptotically stable for~\eqref{eq:sys}, one
may equivalently determine whether or not $x^1=0$ is asymptotically
stable for $\dot x^1 = f_1(x^1,0)$ and $x^2=0$ is asymptotically
stable for $\dot x^2 = f_2(x^2)$ (see,
e.g.,~\cite{Vid80,SeiSua90}). This way the stability problem is
decomposed into two simpler sub-problems.
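As a concrete (purely illustrative) instance of this decomposition, the sketch below picks the hypothetical linear maps $f_1(x^1,x^2) = -x^1 + x^2$ and $f_2(x^2) = -x^2$, for which each sub-problem is immediate, and checks numerically that trajectories of the cascade approach the equilibrium $(0,0)$; the cited results supply the actual proof.

```python
def simulate_cascade(x1, x2, dt=1e-3, steps=20000):
    """Forward-Euler integration of the cascade
        dx1/dt = f1(x1, x2),  dx2/dt = f2(x2)
    for the illustrative choices f1 = -x1 + x2 and f2 = -x2,
    each of whose origins is asymptotically stable."""
    for _ in range(steps):
        x1, x2 = x1 + dt * (-x1 + x2), x2 + dt * (-x2)
    return x1, x2

# a trajectory from (1, -2) decays toward the equilibrium (0, 0)
x1f, x2f = simulate_cascade(1.0, -2.0)
```
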
For dynamical systems that do not possess the cascade-connected
structure~\eqref{eq:sys}, the generalization of the
decomposition just described is the focus of the so-called {\em
reduction problem}, originally formulated by P. Seibert
in~\cite{Sei69,Sei70}. Consider a dynamical system on $\mathbb Re^n$ and two
closed sets $\Gamma_1 \subset \Gamma_2 \subset \mathbb Re^n$. Assume that
$\Gamma_1$ is either stable, attractive, or asymptotically stable
relative to $\Gamma_2$, i.e., when solutions are initialized on
$\Gamma_2$. What additional properties should hold in order that
$\Gamma_1$ be, respectively, stable, attractive, or asymptotically
stable? The global version of this reduction problem is formulated
analogously.
For continuous dynamical
systems,
the reduction problem was solved in~\cite{SeiFlo95} for the case when
$\Gamma_1$ is compact, and in~\cite{ElHMag13} when
$\Gamma_1$ is a closed set. In particular, the work in~\cite{ElHMag13}
linked the reduction problem with a hierarchical control design
viewpoint, in which a hierarchy of control specifications corresponds
to a sequence of sets $\Gamma_1 \subset \cdots \subset \Gamma_l$ to be
stabilized. The design technique of backstepping can be regarded as
one such hierarchical control design problem. Other relevant
literature for the reduction problem is found
in~\cite{Kal99,IggKalOut96}.
In the context of hybrid dynamical systems, the reduction problem is
just as important as its counterpart for continuous nonlinear systems.
To illustrate this fact, we mention three application areas of the theorems
presented in this paper. Additional theoretical implications are
discussed in Section~\ref{sec:reduction_problem}.
Recent literature on stabilization of hybrid limit cycles for bipedal
robots~(e.g.,~\cite{PleGriWesAbb03}) relies on the stabilization of a
set $\Gamma_2$ (the so-called hybrid zero dynamics) on which
the robot satisfies ``virtual constraints.'' The key idea in this
literature is that, with an appropriate design, one may ensure the
existence of a hybrid limit cycle, $\Gamma_1 \subset \Gamma_2$,
corresponding to stable walking for the dynamics of the robot on the
set $\Gamma_2$. In this context, the theorems presented in this paper
can be used to show that the hybrid limit cycle is asymptotically
stable for the closed-loop hybrid system, without Lyapunov analysis.
Furthermore, as we show in Section~\ref{sec:example}, the problem of
estimating the unknown frequency of a sinusoidal signal can be cast as
a reduction problem involving three sets $\Gamma_1 \subset \Gamma_2
\subset \Gamma_3$. More generally, we envision that the theorems in
this paper may be applied to hybrid estimation problems, as was already done in \cite{BisoffiAuto17}, whose proof would be simplified by the results of this paper.
Finally, it was shown in~\cite{InvernizziACC18,MichielettoIFAC17,RozMag14} that for underactuated VTOL
vehicles, leveraging reduction theorems one may partition the position
control problem into a hierarchy of two control specifications:
position control for a point-mass, and attitude tracking. Reduction
theorems for hybrid dynamical systems enable employing the
hybrid attitude trackers in~\cite{MayhewTAC11}, allowing one to
generalize the results in~\cite{InvernizziACC18,RozMag14} and obtain global asymptotic
position stabilization and tracking.
{\em Contributions of this paper.} The goal of this paper is to extend
the reduction theorems for continuous dynamical systems found
in~\cite{SeiFlo95,ElHMag13} to the context of hybrid systems modelled
in the framework of~\cite{GoeSanTee09,GoeSanTee12}. We assume
throughout that $\Gamma_1$ is a compact set and develop reduction
theorems for stability of $\Gamma_1$
(Theorem~\ref{thm:reduction_stability}), local/global attractivity of
$\Gamma_1$ (Theorem~\ref{thm:reduction_attractivity}), and
local/global asymptotic stability of $\Gamma_1$
(Theorem~\ref{thm:reduction_asy_stability}). The conditions of the
reduction theorem for asymptotic stability are necessary and
sufficient. Our results generalize the reduction theorems found
in \cite[Corollary 19]{GoeSanTee09} and \cite[Corollary
7.24]{GoeSanTee12}, which were used in \cite{Tee10} to develop a
local hybrid separation principle.
We explore a number of consequences of our reduction theorems. In
Proposition~\ref{prop:stability_cascades} we present a novel result
characterizing the asymptotic stability of compact sets for
cascade-connected hybrid systems. In
Proposition~\ref{prop:detectability} we consider a hybrid system with
an output function, and present conditions guaranteeing that
boundedness of solutions and convergence of the output to zero imply
attractivity of a subset of the zero level set of the output. These
conditions give rise to a notion of detectability for hybrid systems
that had already been investigated in slightly different form
in~\cite{SanGoeTee07}. Finally, in the spirit of the hierarchical
control viewpoint introduced in~\cite{ElHMag13}, we present a
recursive reduction theorem (Theorem~\ref{thm:recursive_reduction}) in
which we consider a chain of closed sets $\Gamma_1 \subset \cdots
\subset \Gamma_l \subset \mathbb Re^n$, with $\Gamma_1$ compact, and we
deduce the asymptotic stability of $\Gamma_1$ from the asymptotic
stability of $\Gamma_i$ relative to $\Gamma_{i+1}$ for all $i$.
Finally, the theory developed in this paper is applied to the problem
of estimating the frequency of oscillation of a sinusoidal
signal. Here, the hierarchical viewpoint simplifies an otherwise
difficult problem by decomposing it into three separate sub-problems
involving a chain of sets $\Gamma_1 \subset \Gamma_2 \subset
\Gamma_3$.
{\em Organization of the paper.} In Section~\ref{sec:prelim} we
present the class of hybrid systems considered in this paper and
various notions of stability of sets. The concepts of this section
originate in~\cite{GoeSanTee09,GoeSanTee12,SeiFlo95,ElHMag13}. In
Section~\ref{sec:reduction_problem} we formulate the reduction problem
and its recursive version, and discuss links with the stability of
cascade-connected hybrid systems and the output zeroing problem with
detectability. In Section~\ref{sec:main_results} we present novel
reduction theorems for hybrid systems and their proofs. The results of
Section~\ref{sec:main_results} are employed in
Section~\ref{sec:example} to design an estimator for the frequency of
oscillation of a sinusoidal signal. Finally, in
Section~\ref{sec:conclusion} we make concluding remarks.
{\em Notation.}
We denote the set of positive real numbers by $\mathbb Re_{>0}$, and the set
of nonnegative real numbers by $\mathbb Re_{\geq 0}$. We let $\mathbb{S}^1$ denote
the set of real numbers modulo $2\pi$. If $x \in \mathbb Re^n$, we denote by
$|x|$ the Euclidean norm of $x$, i.e., $|x| = (x^\top x)^{1/2}$. We
denote by $\mathbb{B}$ the closed unit ball in $\mathbb Re^n$, i.e.,
$\mathbb{B}:=\{x \in \mathbb Re^n: |x| \leq 1\}$. If $\Gamma\subset \mathbb Re^n$ and $x
\in \mathbb Re^n$, we denote by $|x|_\Gamma$ the point-to-set distance of $x$
to $\Gamma$, i.e., $|x|_\Gamma = \inf_{y \in \Gamma} |x-y|$. If
$c>0$, we let $B_c(\Gamma) := \{x \in \mathbb Re^n : |x|_\Gamma < c\}$, and
$\bar B_c(\Gamma) :=\{x \in \mathbb Re^n : |x|_\Gamma \leq c\}$. If $U$ is a
subset of $\mathbb Re^n$, we denote by $\bar U$ its closure and by
$\mathrm{int}\, U$ its interior. Given two subsets $U$ and $V$ of $\mathbb Re^n$, we denote
their Minkowski sum
by $U + V:= \{ u+v:\; u \in U, v\in V\}$. The empty set is denoted by $\emptyset$.
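The set-theoretic notation just introduced can be mirrored by a small executable sketch, in which finite point samples stand in for the sets (all function names here are ours, not the paper's):

```python
def dist_to_set(x, Gamma):
    """Point-to-set distance |x|_Gamma = inf_{y in Gamma} |x - y|,
    with Gamma represented by a finite sample of points in R^n."""
    return min(sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5
               for y in Gamma)

def minkowski_sum(U, V):
    """Minkowski sum U + V = {u + v : u in U, v in V} of finite sets."""
    return {tuple(ui + vi for ui, vi in zip(u, v)) for u in U for v in V}

d = dist_to_set((3.0, 4.0), [(0.0, 0.0), (3.0, 0.0)])   # distance to the nearest sample
S = minkowski_sum({(0, 0), (1, 0)}, {(0, 0), (0, 1)})   # unit-square vertices
```
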
\section{Preliminary notions}
\label{sec:prelim}
In this paper we use the notion of hybrid system defined
in~\cite{GoeSanTee09,GoeSanTee12} and some notions of set stability
presented in \cite{ElHMag13}. In this section we review the essential
definitions that are required in our development.
Following~\cite{GoeSanTee09,GoeSanTee12}, a hybrid system is a 4-tuple
${\mathcal H} = (C,F,D,G)$ satisfying the following
\noindent
{\bf Basic Assumptions (\hspace*{-.5ex}\cite{GoeSanTee09,GoeSanTee12})}
\begin{enumerate}[{A}1)]
\item $C$ and $D$ are closed subsets of $\mathbb Re^n$.
\item $F: \mathbb Re^n \ensuremath{\rightrightarrows} \mathbb Re^n$ is outer semicontinuous, locally bounded
on $C$, and such that $F(x)$ is nonempty and convex for each $x \in
C$.
\item $G :\mathbb Re^n \ensuremath{\rightrightarrows} \mathbb Re^n$ is outer semicontinuous, locally bounded
on $D$, and such that $G(x)$ is nonempty for each $x \in D$.
\end{enumerate}
A {\em hybrid time domain} is a subset of $\mathbb Re_{\geq 0} \times \mathbb{N}$
which is the union of infinitely many sets $[t_j,t_{j+1}] \times
\{j\}$, $j\in \mathbb{N}$, or of finitely many such sets, with the last one
of the form $ [t_j,t_{j+1}] \times \{j\}$, $[t_j,t_{j+1}) \times
\{j\}$, or $[t_j,\infty) \times \{j\}$. A {\em hybrid arc} is a
function $x: \mathsf{dom}\xspace(x) \to \mathbb Re^n$, where $\mathsf{dom}\xspace(x)$ is a hybrid time
domain, such that for each $j$, the function $t \mapsto x(t,j)$ is
locally absolutely continuous on the interval $I_j=\{t: (t,j) \in
\mathsf{dom}\xspace(x)\}$. A {\em solution} of ${\mathcal H}$ is a hybrid arc $x : \mathsf{dom}\xspace(x)
\to \mathbb Re^n$ satisfying the following two conditions.
\noindent
{\em Flow condition.} For each $j \in \mathbb{N}$ such that $I_j$ has
nonempty interior,
\[
\begin{array}{ll}
\dot x(t,j) \in F(x(t,j)) &\text{ for almost all } t \in I_j, \\
x(t,j) \in C & \text{ for all } t \in [\min I_j,\sup I_j).
\end{array}
\]
{\em Jump condition.} For each $(t,j) \in \mathsf{dom}\xspace(x)$ such that $(t,j+1)
\in \mathsf{dom}\xspace(x)$,
\[
\begin{aligned}
& x(t,j+1) \in G(x(t,j)), \\
& x(t,j) \in D.
\end{aligned}
\]
A solution of ${\mathcal H}$ is {\em maximal} if it cannot be extended.
In this paper we will only consider maximal solutions, and
therefore the adjective ``maximal'' will be omitted in what follows.
If $(t_1,j_1), (t_2, j_2) \in \mathsf{dom}\xspace(x)$ and $t_1 \leq t_2, j_1 \leq
j_2$, then we write $(t_1,j_1) \preceq (t_2,j_2)$. If at least one
inequality is strict, then we write $(t_1,j_1) \prec (t_2,j_2)$.
A solution $x$ is {\em complete} if $\mathsf{dom}\xspace(x)$ is unbounded or,
equivalently, if there exists a sequence $\{ (t_i,j_i)\}_{i \in \mathbb{N}}
\subset \mathsf{dom}\xspace(x)$ such that $t_i + j_i \to \infty$ as $i \to \infty$.
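To illustrate the flow and jump conditions, the following sketch simulates a standard bouncing-ball hybrid system (an example of ours, not taken from this paper; all constants are illustrative): the flow set is $C=\{h\geq 0\}$ with flow map $(h,v)\mapsto (v,-g)$, and the jump set is $D=\{h=0,\ v\leq 0\}$ with jump map $v^+=-\lambda v$. Jumps increment $j$ while consuming no ordinary time $t$, exactly as in a hybrid time domain.

```python
def bouncing_ball(h, v, g=9.81, lam=0.8, dt=1e-4, t_end=2.0):
    """Minimal Euler simulator of a bouncing ball as a hybrid system
    H = (C, F, D, G).  The jump set D = {h = 0, v <= 0} is checked
    numerically as h <= 0 with v strictly negative (a guard against a
    degenerate zero-velocity loop).  Returns the hybrid time instants
    (t, j) at which jumps occur."""
    t, j, jumps = 0.0, 0, []
    while t < t_end:
        if h <= 0.0 and v < 0.0:           # jump condition: v+ = -lam * v
            v = -lam * v
            j += 1
            jumps.append((t, j))           # ordinary time t does not advance
        else:                              # flow condition: one Euler step
            h, v = h + dt * v, v - dt * g
            t += dt
    return jumps

# dropped from height 1 m with zero velocity: three impacts before t = 2 s
jumps = bouncing_ball(1.0, 0.0)
```
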
The set of all maximal solutions of ${\mathcal H}$ originating from $x_0 \in
\mathbb Re^n$ is denoted ${\mathcal S_{\cH}}(x_0)$. If $U \subset \mathbb Re^n$, then
\[
{\mathcal S_{\cH}}(U) :=\bigcup_{x_0 \in U} {\mathcal S_{\cH}}(x_0).
\]
We let ${\mathcal S_{\cH}}:={\mathcal S_{\cH}}(\mathbb Re^n)$. The {\em range} of a hybrid arc $x:\mathsf{dom}\xspace(x)
\to \mathbb Re^n$ is the set
\[
\mathsf{rge}\xspace(x) :=\big\{y \in \mathbb Re^n: \big(\exists (t,j) \in \mathsf{dom}\xspace(x) \big)
\ y=x(t,j)\big\}.
\]
If $U \subset \mathbb Re^n$, we define
\[
\mathsf{rge}\xspace({\mathcal S_{\cH}}(U)):= \bigcup_{x_0 \in U} \mathsf{rge}\xspace\big( {\mathcal S_{\cH}}(x_0) \big).
\]
\begin{definition}[Forward invariance]
A set $\Gamma \subset \mathbb Re^n$ is {\em strongly forward invariant} for
${\mathcal H}$ if
\[
\mathsf{rge}\xspace({\mathcal S_{\cH}}(\Gamma)) \subset \Gamma.
\]
In other words, every solution of ${\mathcal H}$ starting in $\Gamma$ remains
in $\Gamma$.
The set $\Gamma$ is {\em weakly forward invariant} if for
every $x_0 \in \Gamma$ there exists a complete $x \in {\mathcal S_{\cH}}(x_0)$ such
that $x(t,j) \in \Gamma$ for all $(t,j) \in \mathsf{dom}\xspace(x)$.
\end{definition}
If $\Gamma\subset \mathbb Re^n$ is closed, then the {\em restriction} of
${\mathcal H}$ to $\Gamma$ is the hybrid system ${\mathcal H}|_\Gamma :=(C\cap \Gamma,
F,D\cap \Gamma,G)$. Whenever $\Gamma$ is strongly forward invariant,
solutions that start in $\Gamma$ cannot flow out or jump out of
$\Gamma$. Thus, in this specific case, restricting ${\mathcal H}$ to $\Gamma$
corresponds to considering only solutions to ${\mathcal H}$ originating in
$\Gamma$, i.e., ${\cal S}_{{\mathcal H}|_\Gamma} = {\mathcal S_{\cH}}(\Gamma)$.
\begin{definition}[Stability and attractivity]\label{defn:stability_attractivity}
Let $\Gamma \subset \mathbb Re^n$ be compact.
\begin{itemize}
\item $\Gamma$ is {\em stable} for ${\mathcal H}$ if for every $\varepsilon >0$ there
exists $\delta>0$ such that
\[
\mathsf{rge}\xspace({\mathcal S_{\cH}}(B_\delta(\Gamma))) \subset B_\varepsilon(\Gamma).
\]
\item The {\em basin of attraction} of $\Gamma$ is the largest
set
of points $p \in \mathbb Re^n$ such that each $x \in {\mathcal S_{\cH}}(p)$ is bounded and,
if $x$ is complete, then $|x(t,j)|_\Gamma \to 0$ as $t+j \to \infty$,
$(t,j) \in \mathsf{dom}\xspace(x)$.
\item $\Gamma$ is {\em attractive} for ${\mathcal H}$ if the basin of
attraction of $\Gamma$ contains $\Gamma$ in its interior.
\item $\Gamma$ is {\em globally attractive} for ${\mathcal H}$ if its
basin of attraction is $\mathbb Re^n$.
\item $\Gamma$ is {\em asymptotically stable} for ${\mathcal H}$ if it is
stable and attractive, and $\Gamma$ is {\em globally asymptotically
stable} if it is stable and globally attractive.
\end{itemize}
Let $\Gamma \subset \mathbb Re^n$ be closed.
\begin{itemize}
\item $\Gamma$ is {\em stable} for ${\mathcal H}$ if for every $\varepsilon>0$ there
exists an open set $U$ containing $\Gamma$ such that
\[
\mathsf{rge}\xspace({\mathcal S_{\cH}}(U)) \subset B_\varepsilon(\Gamma).
\]
\item The {\em basin of attraction} of $\Gamma$ is the largest set of
points $p \in \mathbb Re^n$ such that for each $x \in {\mathcal S_{\cH}}(p)$,
$|x|_\Gamma$ is bounded and, if $x$ is complete, then
$|x(t,j)|_\Gamma \to 0$ as $t+j \to \infty$, $(t,j) \in \mathsf{dom}\xspace(x)$.
\item $\Gamma$ is {\em attractive} if the basin of attraction of
$\Gamma$ contains $\Gamma$ in its interior.
\item $\Gamma$ is {\em globally attractive} if its basin of
attraction is $\mathbb Re^n$.
\item $\Gamma$ is {\em asymptotically stable} if it is stable and
attractive, and {\em globally asymptotically stable} if it is stable
and globally attractive.
\end{itemize}
\end{definition}
\begin{remark}\label{rem:triviality_of_definitions}
When $C \cup D$ is closed, the properties of stability and
attractivity hold trivially for compact sets $\Gamma$ on which there
are no solutions. More precisely, if $\Gamma \subset \mathbb Re^n \setminus
(C \cup D)$, then $\Gamma$ is automatically stable and attractive (and
hence asymptotically stable). Moreover, all points outside $C \cup D$
trivially belong to its basin of attraction.
\end{remark}
\begin{remark}
In~\cite[Definition 7.1]{GoeSanTee12}, the notions of attractivity and
asymptotic stability of compact sets defined above are referred to as
local pre-attractivity and local pre-asymptotic stability. The prefix
``pre'' refers to the fact that the attraction property is only
assumed to hold for complete solutions. Recent literature on hybrid
systems has dropped this prefix, and in this paper we follow the same
convention.
\end{remark}
\begin{remark}
For the case of closed, non-compact sets,~\cite{GoeSanTee12} adopts
notions of uniform global stability, uniform global pre-attractivity,
and uniform global pre-asymptotic stability (see~\cite[Definition
3.6]{GoeSanTee12}) that are stronger than the notions presented in
Definition~\ref{defn:stability_attractivity}, but they allow the
authors of~\cite{GoeSanTee12} to give Lyapunov characterizations of
asymptotic stability. In this paper we use weaker definitions to
obtain more general results. Specifically, the results of this paper
whose assumptions concern asymptotic stability of closed sets
(assumptions (ii) and (ii') in Corollary~\ref{cor:asy_stability},
assumptions (i) and (i') in Theorem~\ref{thm:recursive_reduction})
continue to hold when the stronger stability properties
of~\cite{GoeSanTee12} are satisfied.
To illustrate the differences between the above mentioned stability
and attractivity notions for closed sets, in~\cite[Definition
3.6]{GoeSanTee12} the uniform global stability property requires
that for every $\varepsilon>0$, the open set $U$ of
Definition~\ref{defn:stability_attractivity} be of the form
$B_\delta(\Gamma)$, i.e., a neighborhood of $\Gamma$ of constant
diameter, hence the adjective ``uniform.'' Moreover, \cite[Definition
3.6]{GoeSanTee12} requires that $\delta \to \infty$ as $\varepsilon \to
\infty$, hence the adjective ``global.'' On the other hand,
Definition~\ref{defn:stability_attractivity} only requires the
existence of a neighborhood $U$ of $\Gamma$, not necessarily of
constant diameter, and without the ``global'' requirement. In
particular, the diameter of $U$ may shrink to zero near points of
$\Gamma$ that are infinitely far from the origin, even as $\varepsilon \to
\infty$. Similarly, the notion of uniform global pre-attractivity
in~\cite[Definition 3.6]{GoeSanTee12} is much stronger than that of
global attractivity in Definition~\ref{defn:stability_attractivity},
for it requires solutions not only to converge to $\Gamma$, but to do
so with a rate of convergence which is uniform over sets of initial
conditions of the form $B_r(\Gamma)$.
\end{remark}
\begin{definition}[Local stability and attractivity near a set]
\label{def:local_properties}
Consider two sets $\Gamma_1 \subset \Gamma_2 \subset \mathbb Re^n$, and
assume that $\Gamma_1$ is compact.
The set $\Gamma_2$ is {\em locally stable near} $\Gamma_1$ for ${\mathcal H}$
if there exists $r >0$ such that the following property holds. For
every $\varepsilon>0$, there exists $\delta>0$ such that, for each $x \in
{\mathcal S_{\cH}}(B_\delta(\Gamma_1))$ and for each $(t,j) \in \mathsf{dom}\xspace(x)$, it holds
that if $x(s,k) \in B_r(\Gamma_1)$ for all $(s,k) \in \mathsf{dom}\xspace(x)$ with
$(s,k) \preceq (t,j)$, then $x(t,j) \in B_\varepsilon(\Gamma_2)$. The set
$\Gamma_2$ is {\em locally attractive near $\Gamma_1$} for ${\mathcal H}$ if
there exists $r>0$ such that $B_r(\Gamma_1)$ is contained in the basin
of attraction of $\Gamma_2$.
\end{definition}
\begin{remark}\label{rem:local_stability}
The notions in Definition~\ref{def:local_properties} originate
in~\cite{SeiFlo95}. It is an easy consequence of the definition,
and it is shown rigorously in the proof of
Theorem~\ref{thm:reduction_asy_stability}, that local stability of
$\Gamma_2$ near $\Gamma_1$ is a necessary condition for $\Gamma_1$
to be stable. In particular, if $\Gamma_1$ is stable, then
$\Gamma_2$ is locally stable near $\Gamma_1$ for arbitrary
values\footnote{For this reason, in~\cite{ElHMag13}, local stability
of $\Gamma_2$ near $\Gamma_1$ is defined by requiring that the
property holds for any $r>0$.} of $r>0$. Moreover, local
attractivity of $\Gamma_2$ near $\Gamma_1$ is a necessary condition
for $\Gamma_1$ to be attractive. Finally, it is easily seen that if
$\Gamma_2$ is stable, then $\Gamma_2$ is locally stable near
$\Gamma_1$, thus local stability of $\Gamma_2$ near $\Gamma_1$ is a
necessary condition for both the stability of $\Gamma_1$ and the
stability of $\Gamma_2$.
\end{remark}
\begin{figure}
\caption{An illustration of local stability of $\Gamma_2$ near
$\Gamma_1$. Continuous lines denote flow, while dashed lines denote
jumps. All solutions starting sufficiently close to $\Gamma_1$
remain close to $\Gamma_2$ so long as they remain in
$B_r(\Gamma_1)$. In the figure, the solution from $x_1$ remains in
$B_r(\Gamma_1)$ and therefore also in $B_\varepsilon(\Gamma_2)$. The
solution from $x_2$ jumps out of $B_r(\Gamma_1)$, then jumps out of
$B_\varepsilon(\Gamma_2)$. The solution from $x_3$ flows out of
$B_r(\Gamma_1)$, then flows out of $B_\varepsilon(\Gamma_2)$. Finally, the
solution from $x_4$ jumps out of $B_r(\Gamma_1)$, then flows out of
$B_\varepsilon(\Gamma_2)$.}
\label{fig:local_stab}
\end{figure}
According to Definition~\ref{def:local_properties}, the set $\Gamma_2$
is locally attractive near $\Gamma_1$ if all solutions starting near
$\Gamma_1$ converge to $\Gamma_2$. Thus $\Gamma_2$ might be locally
attractive near $\Gamma_1$ even when it is not attractive in the sense
of Definition~\ref{defn:stability_attractivity}. On the other hand,
the set $\Gamma_2$ is locally stable near $\Gamma_1$ if solutions
starting close to $\Gamma_1$ remain close to $\Gamma_2$ so long as
they are not too far from $\Gamma_1$. This notion is illustrated in
Figure~\ref{fig:local_stab}.
\begin{definition}[Relative properties]
Consider two closed sets $\Gamma_1 \subset \Gamma_2 \subset \mathbb Re^n$.
We say that $\Gamma_1$ is, respectively, stable, (globally)
attractive, or (globally) asymptotically stable {\em relative to}
$\Gamma_2$ if $\Gamma_1$ is stable, (globally) attractive, or
(globally) asymptotically stable for ${\mathcal H}|_{\Gamma_2}$.
\end{definition}
\begin{example}
To illustrate the definition, consider the linear time-invariant
system
\[
\begin{aligned}
&\dot x_1 = -x_1 \\
&\dot x_2 = x_2,
\end{aligned}
\]
and the sets $\Gamma_1 =\{(0,0)\}$, $\Gamma_2
=\{(x_1,x_2):x_2=0\}$. Even though $\Gamma_1$ is an unstable
equilibrium, $\Gamma_1$ is globally asymptotically stable relative to
$\Gamma_2$. Now consider the planar system expressed in polar
coordinates $(\rho,\theta) \in \mathbb Re_{>0} \times \mathbb{S}^1$ as
\[
\begin{aligned}
& \dot \theta = \sin^2(\theta/2)+(1-\rho)^2 \\
& \dot \rho=0.
\end{aligned}
\]
Let $\Gamma_1$ be the point on the unit circle $\Gamma_1 =
\{(\theta,\rho): \theta=0, \rho=1\}$, and $\Gamma_2$ be the unit
circle, $\Gamma_2 = \{(\theta,\rho): \rho=1\}$. On $\Gamma_2$, the
motion is described by $\dot \theta = \sin^2(\theta/2)$. We see that
$\dot \theta \geq 0$, and $\dot \theta =0$ if and only if $\theta = 0$
modulo $2\pi$. Thus $\Gamma_1$ is globally attractive relative to
$\Gamma_2$, even though it is not an attractive equilibrium.
\end{example}
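The relative-attractivity claim in the second example can also be checked numerically. The following sketch (Euler integration; step size and horizon are arbitrary choices of ours) integrates the on-circle dynamics $\dot\theta = \sin^2(\theta/2)$ and shows $\theta$ increasing monotonically toward $2\pi$, i.e., toward $\Gamma_1 = \{\theta = 0 \bmod 2\pi,\ \rho=1\}$.

```python
import math

def flow_on_circle(theta, t_end=1000.0, dt=0.01):
    """Euler integration of theta' = sin^2(theta/2), the dynamics of
    the example restricted to the unit circle rho = 1.  Since
    theta' >= 0 with equality only at theta = 0 mod 2*pi, every
    trajectory creeps toward Gamma_1 (slowly: theta ~ 2*pi - 4/t)."""
    t = 0.0
    while t < t_end:
        theta += dt * math.sin(theta / 2.0) ** 2
        t += dt
    return theta

# starting from theta = 0.5, the state approaches 2*pi from below
th = flow_on_circle(0.5)
```

Note that the convergence is only asymptotic and not uniform in the initial condition, which is consistent with $\Gamma_1$ being globally attractive relative to $\Gamma_2$ without being stable.
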
The next two results will be useful in the sequel (see also~\cite[Proposition 3.32]{GoeSanTee12}).
\begin{lemma}\label{lem:restrictions_inherit_stability}
For a hybrid system ${\mathcal H}:=(C,F,D,G)$, if $\Gamma_1 \subset \mathbb Re^n$ is a
closed set which is, respectively, stable, attractive, or globally
attractive for ${\mathcal H}$, then for any closed set ${\Gamma_2} \subset
\mathbb Re^n$, $\Gamma_1$ is, respectively, stable, attractive, or globally
attractive for ${\mathcal H}|_{\Gamma_2}$.
\end{lemma}
\begin{proof}
The result is a consequence of the fact that each solution of ${\mathcal H}|_{\Gamma_2}$
is also a solution of ${\mathcal H}$.
\end{proof}
The next result is a partial converse to
Lemma~\ref{lem:restrictions_inherit_stability}.
\begin{lemma}\label{lem:restrictions_imply_attractivity}
For a hybrid system ${\mathcal H}:=(C,F,D,G)$,
if $\Gamma_1 \subset {\Gamma_2} \subset \mathbb Re^n$ are two closed sets such that
$\Gamma_1$ is compact and $\Gamma_1 \subset \mathrm{int}\, {\Gamma_2}$, then:
\begin{enumerate}[(a)]
\item $\Gamma_1$ is stable for ${\mathcal H}$ if and only if it is stable for
${\mathcal H}|_{\Gamma_2}$.
\item If $\Gamma_1$ is stable for ${\mathcal H}$, then $\Gamma_1$ is attractive for
${\mathcal H}$ if and only if $\Gamma_1$ is attractive for ${\mathcal H}|_{\Gamma_2}$.
\end{enumerate}
\end{lemma}
\begin{proof}
{\em Part (a). } By Lemma~\ref{lem:restrictions_inherit_stability}, if
$\Gamma_1$ is stable for ${\mathcal H}$, then it is also stable for
${\mathcal H}|_{\Gamma_2}$. Next assume that $\Gamma_1$ is stable for
${\mathcal H}|_{\Gamma_2}$. Since $\Gamma_1$ is compact and contained in the
interior of $\Gamma_2$, there exists $r>0$ such that $B_r(\Gamma_1)
\subset \Gamma_2$. For any $\varepsilon>0$, let
$\varepsilon':=\min(\varepsilon,r)$. By the definition of stability of
$\Gamma_1$, there exists $\delta>0$ such that
\begin{equation}\label{eq:dummy}
\mathsf{rge}\xspace({\cal S}_{{\mathcal H}|_{\Gamma_2}}(B_\delta(\Gamma_1))) \subset
B_{\varepsilon'}(\Gamma_1).
\end{equation}
Since $B_{\varepsilon'}(\Gamma_1) \subset B_r(\Gamma_1) \subset
\Gamma_2$, we have that solutions of ${\mathcal H}$ and ${\mathcal H}|_{\Gamma_2}$
originating in $B_\delta(\Gamma_1)$ coincide, i.e.,
\begin{equation}\label{eq:dummy2}
{\cal S}_{{\mathcal H}|_{\Gamma_2}}(B_\delta(\Gamma_1)) =
{\mathcal S_{\cH}}(B_\delta(\Gamma_1)).
\end{equation}
Substituting~\eqref{eq:dummy2} into~\eqref{eq:dummy} and using the
fact that $\varepsilon'\leq \varepsilon$ we get
\[
\mathsf{rge}\xspace({\mathcal S_{\cH}}(B_\delta(\Gamma_1))) \subset
B_{\varepsilon'}(\Gamma_1) \subset B_\varepsilon(\Gamma_1),
\]
which proves that $\Gamma_1$ is stable for ${\mathcal H}$.
{\em Part (b).}
By Lemma~\ref{lem:restrictions_inherit_stability}, if $\Gamma_1$ is
attractive for ${\mathcal H}$ then it is also attractive for
${\mathcal H}|_{\Gamma_2}$. For the converse, assume that $\Gamma_1$ is
attractive for ${\mathcal H}|_{\Gamma_2}$. Since $\Gamma_1$ is compact and
contained in the interior of ${\Gamma_2}$, there exists
$\varepsilon>0$ such that $B_\varepsilon(\Gamma_1) \subset
{\Gamma_2}$. Since $\Gamma_1$ is stable for ${\mathcal H}$, there exists
$\delta>0$ such that
\[
\mathsf{rge}\xspace({\mathcal S_{\cH}}(B_\delta(\Gamma_1))) \subset B_\varepsilon(\Gamma_1) \subset {\Gamma_2}.
\]
The above implies that solutions of ${\mathcal H}$ and ${\mathcal H}|_{\Gamma_2}$ originating in
$B_\delta(\Gamma_1)$ coincide, i.e.,
\begin{equation}\label{eq:solutions_coincide}
{\mathcal S_{\cH}}(B_\delta(\Gamma_1)) = {\cal S}_{{\mathcal H}|_{\Gamma_2}}(B_\delta(\Gamma_1)).
\end{equation}
Since $\Gamma_1$ is attractive for ${\mathcal H}|_{\Gamma_2}$, the basin of
attraction of $\Gamma_1$ is a neighborhood of $\Gamma_1$, and
therefore there exists $\mathrm{d}lta>0$ small enough to
ensure~\eqref{eq:solutions_coincide} and to ensure that
$B_\mathrm{d}lta(\Gamma_1)$ is contained in the basin of
attraction. By~\eqref{eq:solutions_coincide}, $B_\mathrm{d}lta(\Gamma_1)$ is
also contained in the basin of attraction of $\Gamma_1$ for system
${\mathcal H}$, from which it follows that $\Gamma_1$ is attractive for ${\mathcal H}$.
\end{proof}
\section{The reduction problem}\label{sec:reduction_problem}
In this section we formulate the reduction problem, discuss its
relevance, and present two theoretical applications: the stability of
compact sets for cascade-connected hybrid systems, and a result
concerning global attractivity of compact sets for hybrid systems with
outputs that converge to zero.
\begin{problem*}{Reduction Problem} Consider a hybrid system ${\mathcal H}$ satisfying
the Basic Assumptions, and two sets
$\Gamma_1 \subset \Gamma_2 \subset \mathbb Re^n$, with $\Gamma_1$ compact and
$\Gamma_2$ closed. Suppose that $\Gamma_1$ enjoys property $P$
relative to $\Gamma_2$, where $P \in \{$stability, attractivity,
global attractivity, asymptotic stability, global asymptotic
stability$\}$. We seek conditions under which property $P$ holds
relative to $\mathbb Re^n$.
\end{problem*}
As mentioned in the introduction, this problem was first formulated by
Paul Seibert in 1969--1970~\cite{Sei69,Sei70}. The solution in the
context of hybrid systems is presented in
Theorems~\ref{thm:reduction_stability},~\ref{thm:reduction_attractivity},~\ref{thm:reduction_asy_stability}
in the next section.
To illustrate the reduction problem, suppose we wish to determine
whether a compact set $\Gamma_1$ is asymptotically stable, and suppose
that $\Gamma_1$ is contained in a closed set $\Gamma_2$, as
illustrated in Figure~\ref{fig:reduction}. In the reduction framework,
the stability question is decomposed into two parts: (1) Determine
whether $\Gamma_1$ is asymptotically stable relative to $\Gamma_2$;
(2) determine whether $\Gamma_2$ satisfies additional suitable
properties (Theorem~\ref{thm:reduction_asy_stability} in
Section~\ref{sec:main_results} states precisely the required
properties). In some cases, these two questions might be easier to
answer than the original one, particularly when $\Gamma_2$ is
strongly forward invariant, since in this case question (1) would
typically involve a hybrid system on a state space of lower
dimension. This sort of decomposition occurs frequently in control
theory, either for convenience or for structural necessity, as we now
illustrate.
\begin{figure}
\caption{Illustration of the reduction problem when $\Gamma_2$ is
strongly forward invariant.}
\label{fig:reduction}
\end{figure}
In the context of control systems, the sets $\Gamma_1 \subset
\Gamma_2$ might represent two control specifications organized
hierarchically: the specification associated with set $\Gamma_2$ has
higher priority than that associated with set $\Gamma_1$. Here, the
reduction problem stems from the decomposition of the control design
into two steps: meeting the high-priority specification first, i.e.,
stabilize $\Gamma_2$; then, assuming that the high-priority
specification has been achieved, meet the low-priority specification,
i.e., stabilize $\Gamma_1$ relative to $\Gamma_2$. This point of view
is developed in~\cite{ElHMag13}, and has been applied to the
almost-global stabilization of VTOL vehicles~\cite{RozMag14},
distributed control~\cite{ElHMag13_2,ThuConHu16}, virtual holonomic
constraints~\cite{MagCon13},
robotics~\cite{MohRezMagPet15,OttDieAlb15}, and static or dynamic
allocation of nonlinear redundant actuators~\cite{SassanoEJC16}.
Similar ideas have also been adopted in~\cite{MichielettoIFAC17},
where the concept of local stability near a set, introduced in
Definition~\ref{def:local_properties}, is key to ruling out situations
where the feedback stabilizer may generate solutions that blow up to
infinity. In the hybrid context, the hierarchical viewpoint described
above has been adopted in~\cite{BisoffiAuto17} to deal with unknown
jump times in hybrid observation of periodic hybrid exosystems, while
discrete-time results are used in the proof of GAS reported
in~\cite{AlessandriAuto18} for so-called stubborn observers in
discrete time. In the case of more than two control specifications,
one has the following.
\begin{problem*}{Recursive Reduction Problem}
Consider a hybrid system ${\mathcal H}$ satisfying the Basic Assumptions, and
$l$ closed sets $\Gamma_1 \subset \cdots \subset \Gamma_l \subset
\mathbb Re^n$, with $\Gamma_1$ compact, and set $\Gamma_{l+1}:=\mathbb Re^n$. Suppose that $\Gamma_i$ enjoys
property $P$ relative to $\Gamma_{i+1}$ for all $i \in \{1,\ldots,
l\}$, where $P \in \{$stability, attractivity, global attractivity,
asymptotic stability, global asymptotic stability$\}$. We seek
conditions under which the set $\Gamma_1$ enjoys property $P$ relative
to $\mathbb Re^n$.
\end{problem*}
The solution of this problem is found in
Theorem~\ref{thm:recursive_reduction} in the next section. It is
shown in~\cite{ElHMag13} that the backstepping stabilization technique
can be recast as a recursive reduction problem.
As mentioned earlier, the reduction problem may emerge from structural
considerations, such as when the hybrid system is the cascade
interconnection of two subsystems.
\vspace*{2ex}
\noindent
{\bf Cascade-connected hybrid systems.} Consider a hybrid system ${\mathcal H}
= (C,F,D,G)$, where $C = C_1 \times C_2 \subset \mathbb Re^{n_1} \times
\mathbb Re^{n_2}$, $D = D_1 \times D_2 \subset \mathbb Re^{n_1} \times \mathbb Re^{n_2}$
are closed sets, and $F: \mathbb Re^{n_1+n_2} \ensuremath{\rightrightarrows} \mathbb Re^{n_1+n_2}$, $G:
\mathbb Re^{n_1+n_2} \ensuremath{\rightrightarrows} \mathbb Re^{n_1+n_2}$ are maps satisfying the Basic
Assumptions. Suppose that $F$ and $G$ have the upper triangular
structure
\begin{equation}\label{eq:upper_triangular}
F(x^1,x^2) = \begin{bmatrix} F_1(x^1,x^2) \\ F_2(x^2)
\end{bmatrix}, \ G(x^1,x^2) = \begin{bmatrix} G_1(x^1,x^2) \\ G_2(x^2)
\end{bmatrix},
\end{equation}
where $(x^1,x^2) \in \mathbb Re^{n_1} \times \mathbb Re^{n_2}$. Define $\hat F_1
:\mathbb Re^{n_1} \ensuremath{\rightrightarrows} \mathbb Re^{n_1}$ and $\hat G_1 : \mathbb Re^{n_1} \ensuremath{\rightrightarrows} \mathbb Re^{n_1}$
as
\begin{equation}\label{eq:upper_triangular2}
\hat F_1(x^1):=F_1(x^1,0), \ \hat G_1(x^1):=G_1(x^1,0).
\end{equation}
With these definitions, we can view ${\mathcal H}$ as the cascade connection of
the hybrid systems
\[
{\mathcal H}_1= (C_1,\hat F_1,D_1,\hat G_1), \ {\mathcal H}_2 = (C_2,F_2,D_2,G_2),
\]
with ${\mathcal H}_2$ driving ${\mathcal H}_1$. The following result is a corollary of
Theorem~\ref{thm:reduction_asy_stability} in
Section~\ref{sec:main_results}. It generalizes to the hybrid setting
classical results for continuous time-invariant dynamical systems in,
e.g.,~\cite{Vid80,SeiSua90}. Using
Theorems~\ref{thm:reduction_stability}
and~\ref{thm:reduction_attractivity}, one may formulate analogous
results for the properties of attractivity and stability.
\begin{proposition}\label{prop:stability_cascades}
Consider the hybrid system ${\mathcal H}:=(C_1 \times C_2, F, D_1\times
D_2,G)$, with maps $F, G$ given in~\eqref{eq:upper_triangular}, and
the two hybrid subsystems ${\mathcal H}_1 := (C_1,\hat F_1, D_1,\hat G_1)$ and
${\mathcal H}_2:=(C_2,F_2,D_2,G_2)$ satisfying the Basic Assumptions, with maps
$\hat F_1,\hat G_1$ given in~\eqref{eq:upper_triangular2}. Let $\hat
\Gamma_1 \subset \mathbb Re^{n_1}$ be a compact set, and denote
\begin{equation}\label{eq:cascade:Gamma1}
\Gamma_1 = \{(x^1,x^2) \in \mathbb Re^{n_1} \times \mathbb Re^{n_2} : x^1 \in \hat
\Gamma_1, \, x^2=0\}.
\end{equation}
Suppose that $0 \in C_2 \cup D_2$. Then the following statements hold:
\begin{enumerate}[(i)]
\item $\Gamma_1$ is asymptotically stable for ${\mathcal H}$ if $\hat \Gamma_1$
is asymptotically stable for ${\mathcal H}_1$ and $0 \in \mathbb Re^{n_2}$ is
asymptotically stable for ${\mathcal H}_2$.
\item $\Gamma_1$ is globally asymptotically stable for ${\mathcal H}$ if $\hat
\Gamma_1$ is globally asymptotically stable for ${\mathcal H}_1$, $0 \in
\mathbb Re^{n_2}$ is globally asymptotically stable for ${\mathcal H}_2$, and all
solutions of ${\mathcal H}$ are bounded.
\end{enumerate}
\end{proposition}
The result above is obtained from
Theorem~\ref{thm:reduction_asy_stability} in
Section~\ref{sec:main_results} setting $\Gamma_1$ as
in~\eqref{eq:cascade:Gamma1}, and $\Gamma_2 = \{(x^1,x^2) \in
\mathbb Re^{n_1} \times \mathbb Re^{n_2} : x^2=0\}$. The restriction
${\mathcal H}|_{\Gamma_2}$ is given by
\[
\left.{\mathcal H}\right|_{\Gamma_2}= \left(C_1 \times
\{0\}, \begin{bmatrix} F_1(x^1,0) \\ F_2(0) \end{bmatrix}, D_1 \times
\{0\}, \begin{bmatrix} G_1(x^1,0) \\ G_2(0) \end{bmatrix} \right),
\]
from which it is straightforward to see that $\Gamma_1$ is (globally)
asymptotically stable relative to $\Gamma_2$ if and only if $\hat
\Gamma_1$ is (globally) asymptotically stable for ${\mathcal H}_1$. It is also
clear that if $0 \in \mathbb Re^{n_2}$ is (globally) asymptotically stable
for ${\mathcal H}_2$, then $\Gamma_2$ is (globally) asymptotically stable for
${\mathcal H}$.
The converse, however, is not true. Namely, the (global)
asymptotic stability of $\Gamma_2$ for ${\mathcal H}$ does not imply that $0
\in \mathbb Re^{n_2}$ is (globally) asymptotically stable for ${\mathcal H}_2$, which
is why Proposition~\ref{prop:stability_cascades} states only
sufficient conditions. The reason is that the set of hybrid arcs
$x^2(t,j)$ generated by solutions of ${\mathcal H}$ is generally smaller than
the set of solutions of ${\mathcal H}_2$. This phenomenon is illustrated in the
next example.
\begin{example}\label{example:cascade}
Consider the cascade connected system ${\mathcal H}=(C_1 \times C_2, F, D_1
\times D_2, G)$, with $C_1 = \{1\}$, $C_2 = \mathbb Re$, $D_1=D_2
=\emptyset$, and
\[
F(x_1,x_2) = \begin{bmatrix} 1 \\ x_2
\end{bmatrix}.
\]
All solutions of ${\mathcal H}$ have the form $(1,x_2(0,0))$, and are defined
only at $(t,j)=(0,0)$. Since the origin $(x_1,x_2)=(0,0)$ is not
contained in $C \cup D$, it is trivially asymptotically stable for
${\mathcal H}$ (see Remark~\ref{rem:triviality_of_definitions}). Moreover,
there are no complete solutions, and all solutions are constant, hence
bounded, which implies that the basin of attraction of the origin is
the entire $\mathbb Re^2$. Hence the origin is globally asymptotically stable
for ${\mathcal H}$. On the other hand, ${\mathcal H}_2$ is the linear time-invariant
continuous-time system on
$\mathbb Re$ with dynamics $\dot x_2 = x_2$, clearly unstable. This example
shows that the condition, in
Proposition~\ref{prop:stability_cascades}, that $0$ be (globally)
asymptotically stable for ${\mathcal H}_2$ is not necessary.
\end{example}
Proposition \ref{prop:stability_cascades} is to be compared to
\cite[Theorem 1]{Tee10}, where the author presents an analogous
result for a different kind of cascaded hybrid system. The notion of
cascaded hybrid system used in
Proposition~\ref{prop:stability_cascades} is one in which a jump is
possible only if the states $x^1$ and $x^2$ are simultaneously in
their respective jump sets, $D_1$ and $D_2$, and a jump event
involves both states, simultaneously. On the other hand, the notion
of cascaded hybrid system proposed in~\cite{Tee10} is one in which
jumps of $x^1$ and $x^2$ occur independently of one another, so that
when $x^1$ jumps nontrivially, $x^2$ remains constant, and vice versa. Moreover,
in~\cite{Tee10} the jump and flow sets are not expressed as
Cartesian products of sets in the state spaces of the two
subsystems.
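To complement Proposition~\ref{prop:stability_cascades}, the following informal Python sketch simulates a purely continuous special case of the cascade (no jumps, $D_1=D_2=\emptyset$, $n_1=n_2=1$). The specific vector fields, horizon, and step size are illustrative choices of ours, not taken from the references.

```python
def simulate_cascade(x1, x2, T=20.0, dt=1e-3):
    """Forward-Euler simulation of an upper-triangular (cascade) flow:
        dx1/dt = -x1 + x2   (subsystem H1, driven by x2)
        dx2/dt = -x2        (driving subsystem H2)
    Vector fields and step size are illustrative choices.
    """
    for _ in range(int(T / dt)):
        x1, x2 = x1 + dt * (-x1 + x2), x2 + dt * (-x2)
    return x1, x2

# Here H1 with x2 frozen at 0 is dx1/dt = -x1 (globally asymptotically
# stable at 0), H2 is globally asymptotically stable at 0, and all
# solutions are bounded, so part (ii) of the proposition predicts
# convergence of the full cascade to the origin.
x1, x2 = simulate_cascade(5.0, -3.0)
print(x1, x2)
```

Both states indeed settle near the origin, consistent with part (ii) of the proposition applied to this jump-free special case.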
Another circumstance in which the reduction problem plays a prominent
role is the notion of detectability for systems with outputs.
\vspace*{2ex}
\noindent
{\bf Output zeroing with detectability.}
Consider a hybrid system ${\mathcal H}$ satisfying the Basic Assumptions, with
a continuous output function $h: \mathbb Re^n \to \mathbb Re^k$, and let $\Gamma_1$
be a compact, strongly forward invariant subset of $h^{-1}(0)$.
Assume that all solutions on $\Gamma_1$ are complete. Suppose that
all $x \in {\mathcal S_{\cH}}$ are bounded. Under what circumstances does the
property $h(x(t,j)) \to 0$ for all complete $x \in {\mathcal S_{\cH}}$ imply that
$\Gamma_1$ is globally attractive? This question arises in the
context of passivity-based stabilization of
equilibria~\cite{ByrIsiWil91} and closed sets~\cite{ElHMag10} for
continuous control systems. In the hybrid systems setting, a similar
question arises when using virtual constraints to stabilize hybrid
limit cycles for biped robots
(e.g.,~\cite{PleGriWesAbb03,WesGriKod03,WesGriCheChoMor07}). In this
case the zero level set of the output function is the virtual
constraint.
Let $\Gamma_2$ denote the maximal weakly forward invariant subset
contained in $h^{-1}(0)$.
Using the sequential compactness of the space of solutions of
${\mathcal H}$~\cite[Theorem 4.4]{GoeTee06}, one can show that the closure of a
weakly forward invariant set is weakly forward invariant. This fact
and the maximality of $\Gamma_2$ imply that $\Gamma_2$ is closed.
Furthermore, since $\Gamma_1$ is strongly forward invariant, contained
in $h^{-1}(0)$, and all solutions on it are complete, necessarily
$\Gamma_1 \subset \Gamma_2$. It turns out (see the proof of
Proposition~\ref{prop:detectability} below) that any bounded complete
solution $x$ such that $h(x(t,j)) \to 0$ converges to $\Gamma_2$.
In light of the discussion above, the question we asked earlier can be
recast as a reduction problem: Suppose that $\Gamma_2$ is globally
attractive. What stability properties should $\Gamma_1$ satisfy
relative to $\Gamma_2$ in order to ensure that $\Gamma_1$ is globally
attractive for ${\mathcal H}$? The answer, provided by
Theorem~\ref{thm:reduction_attractivity} in
Section~\ref{sec:main_results}, is that $\Gamma_1$ should be globally
asymptotically stable relative to $\Gamma_2$ (attractivity is not
enough, as shown in Example~\ref{ex:attractivity:counterexample}
below).
Following\footnote{In~\cite{SanGoeTee07}, the authors adopt a
different definition of detectability, one that requires $\Gamma_1$
to be globally attractive, instead of globally asymptotically
stable, relative to $\Gamma_2$. When they employ this property,
however, they make the extra assumption that $\Gamma_1$ be stable
relative to $\Gamma_2$.}~\cite{ElHMag10}, the hybrid system ${\mathcal H}$ is
said to be {\em $\Gamma_1$-detectable from $h$} if $\Gamma_1$ is
globally asymptotically stable relative to $\Gamma_2$, where
$\Gamma_2$ is the maximal weakly forward invariant subset contained in
$h^{-1}(0)$.
Using the reduction theorem for attractivity in
Section~\ref{sec:main_results}
(Theorem~\ref{thm:reduction_attractivity}), we get the answer to the
foregoing output zeroing question.
\begin{proposition}\label{prop:detectability}
Let ${\mathcal H}$ be a hybrid system satisfying the Basic Assumptions, $h :
\mathbb Re^n \to \mathbb Re^k$ a continuous function, and $\Gamma_1 \subset
h^{-1}(0)$ be a compact set which is strongly forward invariant for
${\mathcal H}$, such that all solutions from $\Gamma_1$ are complete. If 1)
${\mathcal H}$ is $\Gamma_1$-detectable from $h$, 2) each $x \in {\mathcal S_{\cH}}$ is
bounded, and 3) all complete $x \in {\mathcal S_{\cH}}$ are such that $h(x(t,j)) \to
0$, then $\Gamma_1$ is globally attractive.
\end{proposition}
\begin{proof}
Let $\Gamma_2$ be the maximal weakly forward invariant subset of
$h^{-1}(0)$. This set is closed by sequential compactness of the
space of solutions of ${\mathcal H}$~\cite[Theorem 4.4]{GoeTee06}. By
assumption, any $x \in {\mathcal S_{\cH}}$ is bounded. If $x \in {\mathcal S_{\cH}}$ is complete,
by~\cite[Lemma~3.3]{SanGoeTee07}, the positive limit set $\Omega(x)$
is nonempty, compact, and weakly invariant. Moreover, $\Omega(x)$ is
the smallest closed set approached by $x$. Since $h(x(t,j)) \to 0$ and
$h$ is continuous, $\Omega(x) \subset h^{-1}(0)$. Since $\Omega(x)$ is
weakly forward invariant and contained in $h^{-1}(0)$, necessarily
$\Omega(x) \subset \Gamma_2$. Thus $\Gamma_2$ is globally attractive
for ${\mathcal H}$. Since $\Gamma_1$ is strongly forward invariant, contained
in $h^{-1}(0)$, and on it all solutions are complete, $\Gamma_1$ is
contained in $\Gamma_2$, the maximal set with these properties. By
the $\Gamma_1$-detectability assumption, $\Gamma_1$ is globally
asymptotically stable relative to $\Gamma_2$. By
Theorem~\ref{thm:reduction_attractivity}, we conclude that $\Gamma_1$
is globally attractive.
\end{proof}
\section{Main results}\label{sec:main_results}
In this section we solve the reduction problem, presenting reduction
theorems for stability, (global) attractivity, and (global) asymptotic
stability. We also present the solution of the recursive reduction
problem for the property of asymptotic stability.
\begin{theorem}[Reduction theorem for
stability]\label{thm:reduction_stability} For a hybrid system ${\mathcal H}$
satisfying the Basic Assumptions, consider two sets $\Gamma_1
\subset \Gamma_2 \subset \mathbb Re^n$, with $\Gamma_1$ compact and
$\Gamma_2$ closed.
If
\begin{enumerate}[(i)]
\item $\Gamma_1$ is asymptotically stable relative to $\Gamma_2$,
\item $\Gamma_2$ is locally stable near $\Gamma_1$,
\end{enumerate}
then $\Gamma_1$ is stable for ${\mathcal H}$.
\end{theorem}
\begin{remark}
As argued in Remark~\ref{rem:local_stability}, local stability of
$\Gamma_2$ near $\Gamma_1$ (assumption (ii)) is a necessary condition
in Theorem~\ref{thm:reduction_stability}. In place of this
assumption, one may use the stronger assumption that $\Gamma_2$ be
stable, which might be easier to check in practice but is not a
necessary condition (see for example system \eqref{eq:forremark} in Example~\ref{ex:forremark}).
There are situations, however, when the local stability property is
essential and emerges quite naturally from the context of the
problem. This occurs, for instance, when solutions far from $\Gamma_1$
but near $\Gamma_2$ have finite escape times. For examples of such
situations, refer to~\cite{GreMasMag17}
and~\cite{MichielettoIFAC17}.
\end{remark}
\begin{proof}
Hypotheses (i) and (ii) imply that there exists a scalar $r>0$
such that:
\begin{itemize}
\item[(a)] Set $\Gamma_1$ is globally asymptotically stable for system
$\mathcal{H}_{r,0} := (C \cap \Gamma_2 \cap \bar B_r(\Gamma_1), F, D
\cap \Gamma_2 \cap \bar B_r(\Gamma_1),G)$,
\item[(b)]
Defining the system $\mathcal{H}_r := {\mathcal H}|_{\bar B_r(\Gamma_1)}$,
for each $\varepsilon>0$
there exists $\delta>0$ such that all solutions of $\mathcal{H}_r$ satisfy
$$
|x(0,0)|_{\Gamma_1} \leq \delta \; \mathbb Rightarrow \; |x(t,j)|_{\Gamma_2} \leq \varepsilon, \; \forall (t,j) \in \mathsf{dom}\xspace(x).
$$
\end{itemize}
Since $\Gamma_1$ is contained in the interior of $\bar B_r(\Gamma_1)$,
by Lemma~\ref{lem:restrictions_imply_attractivity}, to prove stability
of $\Gamma_1$ for ${\mathcal H}$ it suffices to prove stability of
$\Gamma_1$ for the system $\mathcal{H}_r$ introduced in (b).
The rest of the proof follows steps similar to those in the
proof of stability reported in \cite[Corollary 7.24]{GoeSanTee12}.
From item (a) and due to \cite[Theorem 7.12]{GoeSanTee12}, there exists a
class $\mathcal{KL}$ bound $\beta \in \mathcal{KL}$ and, due to
\cite[Lemma 7.20]{GoeSanTee12} applied with a constant perturbation
function $x \mapsto \rho(x) =\bar \rho$ and with ${\mathcal U}=\real^n$, for each
$\varepsilon >0$ there exists $\bar \rho>0$ such that defining
\begin{equation}
\label{eq:rhobarsets}
\begin{array}{rcl}
C_{\bar \rho,r} &:=& C \cap \bar B_{\bar \rho}(\Gamma_2)\cap
\bar B_r(\Gamma_1) \\
&\subset&
\{x \in \mathbb Re^n:\; (x+ \bar \rho \mathbb{B}) \cap (C \cap \Gamma_2 \cap \bar B_r(\Gamma_1)) \neq \emptyset \} \\
D_{\bar \rho,r} &:=& D \cap \bar B_{\bar \rho}(\Gamma_2)\cap
\bar B_r(\Gamma_1) \\
&\subset&
\{x \in \mathbb Re^n:\; (x+ \bar \rho \mathbb{B}) \cap (D \cap \Gamma_2 \cap \bar B_r(\Gamma_1)) \neq \emptyset \}
\end{array}
\end{equation}
and introducing system ${\mathcal H}_{\bar \rho,r} := (C_{\bar \rho,r},F ,D_{\bar \rho,r} ,G)$, we have\footnote{Note that
for a constant perturbation $\rho(x) = \bar \rho$ the inflated flow
and jump sets in \cite[Definition~6.27]{GoeSanTee12} are exactly $\bar
\rho$ inflations of the original ones.}
\begin{equation}
\begin{array}{l}
|x(t,j)|_{\Gamma_1} \leq \beta( |x(0,0)|_{\Gamma_1}, t+j) +\frac{\varepsilon}{2}, \\
\hspace{3cm} \forall (t,j) \in \mathsf{dom}\xspace(x),
\forall x \in {\mathcal S}_{{\mathcal H}_{\bar \rho,r}}
\end{array}
\label{eq:KLbound}
\end{equation}
Let $\varepsilon >0$ be given. Let $\bar \rho>0$ be such that (\ref{eq:KLbound}) holds. Due to item (b) above, there exists a small enough $\delta >0$ such that $\beta(\delta, 0) \leq \frac{\varepsilon}{2}$ and
\begin{equation}
(x \in {\mathcal S}_{\mathcal{H}_r}, |x(0,0)|_{\Gamma_1} \! \leq \delta) \mathbb Rightarrow |x(t,j)|_{\Gamma_2}\! \leq \bar \rho, \; \forall (t,j)\! \in\! \mathsf{dom}\xspace(x).
\label{eq:rhobar_sols}
\end{equation}
Then
the solutions considered in (\ref{eq:rhobar_sols}) are also solutions
of ${\mathcal H}_{\bar \rho,r}$, because they remain in $\bar B_{\bar
\rho}(\Gamma_2)$. Applying (\ref{eq:KLbound}) to these solutions, we get
\begin{equation}
|x(t,j)|_{\Gamma_1} \leq \beta(\delta, 0) +\frac{\varepsilon}{2}\leq
\frac{\varepsilon}{2} +\frac{\varepsilon}{2} = \varepsilon, \;\forall (t,j) \in \mathsf{dom}\xspace(x) ,
\end{equation}
which completes the proof.
\end{proof}
\begin{example}
\label{ex:forremark}
Assumption (i) in the above theorem cannot be replaced by the weaker
requirement that $\Gamma_1$ be stable relative to $\Gamma_2$. To
illustrate this fact, consider the linear time-invariant system
\[
\begin{aligned}
& \dot x_1 =x_2 \\
& \dot x_2 =0,
\end{aligned}
\]
with $\Gamma_1 = \{(0,0)\}$ and $\Gamma_2 = \{(x_1,x_2):
x_2=0\}$. Although $\Gamma_1$ is stable relative to $\Gamma_2$ and
$\Gamma_2$ is stable, $\Gamma_1$ is an unstable equilibrium. On the
other hand, consider the system
\[
\begin{aligned}
& \dot x_1 =-x_1 +x_2 \\
& \dot x_2 =0,
\end{aligned}
\]
with the same definitions of $\Gamma_1$ and $\Gamma_2$. Now $\Gamma_1$
is asymptotically stable relative to $\Gamma_2$, and $\Gamma_2$ is
stable. As predicted by Theorem~\ref{thm:reduction_stability},
$\Gamma_1$ is a stable equilibrium. Finally, let $\sigma: \mathbb Re \to [0,
1]$ be a $C^1$ function such that $\sigma(s)=0$ for $|s|\leq 1$ and
$\sigma(s) =1$ for $|s| \geq 2$, and consider the system
\begin{align}
\begin{array}{l}
\dot x_1 =-x_1 (1-\sigma(x_1))+x_2^2 \\[.1cm]
\dot x_2 =\sigma(x_1)x_2,
\end{array}
\label{eq:forremark}
\end{align}
with the earlier definitions of $\Gamma_1$ and $\Gamma_2$. One can see
that $\Gamma_1$ is asymptotically stable relative to $\Gamma_2$, and
$\Gamma_2$ is unstable. For the former property, note that the motion
on $\Gamma_2$ is described by $\dot x_1 = -x_1(1-\sigma(x_1))$, a
$C^1$ differential equation which near $\{x_1=0\}$ reduces to $\dot
x_1=-x_1$. To see that $\Gamma_2$ is an unstable set, note that if
$x_1(0) \geq 2$, then $x_1(t) \geq x_1(0)$ and $\dot x_2 =
x_2$. Namely, solutions move away from $\Gamma_2$. On the other hand,
$\Gamma_2$ is locally stable near $\Gamma_1$, because as long as
$|x_1|\leq 1$, $\dot x_2 = 0$. By
Theorem~\ref{thm:reduction_stability}, $\Gamma_1$ is a stable
equilibrium.
\end{example}
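The qualitative claims about system \eqref{eq:forremark} can also be checked numerically. The sketch below is an informal illustration of ours: the particular cubic-smoothstep realization of $\sigma$, the forward-Euler scheme, and all parameter values are our own choices, constrained only by the properties of $\sigma$ stated above.

```python
def sigma(s):
    # One concrete C^1 choice: 0 for |s| <= 1, 1 for |s| >= 2,
    # cubic smoothstep in between (an assumption of this sketch).
    a = min(max(abs(s) - 1.0, 0.0), 1.0)
    return a * a * (3.0 - 2.0 * a)

def simulate(x1, x2, T=10.0, dt=1e-3):
    # Forward-Euler integration of the planar system:
    #   dx1/dt = -x1 (1 - sigma(x1)) + x2^2,   dx2/dt = sigma(x1) x2
    for _ in range(int(T / dt)):
        dx1 = -x1 * (1.0 - sigma(x1)) + x2 * x2
        dx2 = sigma(x1) * x2
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2

# Near Gamma_1: |x1| stays <= 1, so sigma = 0, x2 is frozen, and x1
# settles near the small value x2^2.
a1, a2 = simulate(0.5, 0.05)

# Far from Gamma_1 but close to Gamma_2 = {x2 = 0}: x1(0) >= 2 keeps
# sigma = 1, so dx2/dt = x2 and the solution escapes from Gamma_2.
b1, b2 = simulate(2.5, 0.01)
print(a1, a2, b2)
```

The first run illustrates local stability of $\Gamma_2$ near $\Gamma_1$ ($x_2$ does not move while $|x_1|\leq 1$); the second illustrates the instability of $\Gamma_2$ for initial conditions with $x_1(0)\geq 2$.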
\begin{theorem}[Reduction theorem for
attractivity]\label{thm:reduction_attractivity} For a hybrid system
${\mathcal H}$ satisfying the Basic Assumptions, consider two sets $\Gamma_1
\subset \Gamma_2 \subset \mathbb Re^n$, with $\Gamma_1$ compact and
$\Gamma_2$ closed. Assume that
\begin{enumerate}[(i)]
\item $\Gamma_1$ is globally asymptotically stable relative to
$\Gamma_2$,
\item $\Gamma_2$ is globally attractive.
\end{enumerate}
Then the basin of attraction of $\Gamma_1$ is the set
\begin{equation}
{\cal B} := \{x_0
\in \mathbb Re^n : \text{all } x \in {\mathcal S_{\cH}}(x_0) \text{ are bounded}\}.
\label{eq:Bset}
\end{equation}
In particular, if ${\cal B}$ contains $\Gamma_1$ in its
interior, then $\Gamma_1$ is attractive. If all solutions of ${\mathcal H}$ are
bounded, then $\Gamma_1$ is globally attractive.
\end{theorem}
\begin{proof}
By definition, any bounded noncomplete solution belongs to the basin
of attraction of $\Gamma_1$. The proof then amounts to showing that
any bounded and complete solution $x \in {\mathcal S_{\cH}}$ converges to
$\Gamma_1$, so that all points in the set ${\cal B}$ defined in \eqref{eq:Bset} are
contained in the basin of attraction of $\Gamma_1$. Conversely, any solution from a point in the
basin of attraction of $\Gamma_1$ is bounded by definition, so the point
belongs to ${\cal B}$.
Hypothesis (i) corresponds to the following fact:
\begin{itemize}
\item[(a)] Set $\Gamma_1$ is globally asymptotically stable for system
${\mathcal H}|_{\Gamma_2} := (C \cap \Gamma_2, F, D \cap \Gamma_2,G)$.
\end{itemize}
The rest of the proof follows similar steps to the proof of
attractivity reported in \cite[Corollary~7.24]{GoeSanTee12}. Given
any bounded and complete solution $x\in {\mathcal S_{\cH}}$, define $M :=
\max_{(t,j) \in \mathsf{dom}\xspace(x)} |x(t,j)|_{\Gamma_1}$.
Convergence of $x$ to $\Gamma_1$ is established by showing that for
each $\varepsilon>0$, there exists $T\geq 0$ such that
\begin{equation}
|x(t,j)|_{\Gamma_1} \leq \varepsilon, \quad \forall (t,j) \in \mathsf{dom}\xspace (x): t+j \geq T.
\label{eq:conv_x}
\end{equation}
From item (a) above, and applying \cite[Theorem~7.12]{GoeSanTee12},
there exists a uniform class $\mathcal{KL}$ bound $\beta \in
\mathcal{KL}$ on the solutions to system ${\mathcal H}|_{\Gamma_2}$. Fix an
arbitrary $\varepsilon>0$. To establish (\ref{eq:conv_x}), due to
\cite[Lemma~7.20]{GoeSanTee12} applied to ${\mathcal H}|_{\Gamma_2}$ with
$\mathcal{U}= \real^n$, with a constant perturbation function $x
\mapsto \rho(x) =\bar \rho$ and with the compact set $K = \bar
B_M(\Gamma_1)$ (to be used in the definition of semiglobal practical
$\mathcal{KL}$ asymptotic stability of \cite[Definition~7.18]{GoeSanTee12}),
there exists a small enough $\bar \rho>0$ such that
defining~\footnote{Note that the set inclusions in
\eqref{eq:rhobarsets_attr} always hold for a small enough $\bar
\rho$. Indeed, even in the peculiar case when $C \cap \Gamma_2$ is
empty, since $C$ and $\Gamma_2$ are closed, it is possible to pick
$\bar \rho$ small enough so that $C \cap \bar B_{\bar
\rho}(\Gamma_2)$ is empty too, and then the inclusions
\eqref{eq:rhobarsets_attr} hold because both sides are empty
sets. Similar arguments apply when $D \cap \Gamma_2$ is empty. }
\begin{equation}
\label{eq:rhobarsets_attr}
\begin{array}{rcl}
C_{\bar \rho} \!\!&:=&\!\! \bar
B_M(\Gamma_1) \; \cap\; C \cap \bar B_{\bar \rho}(\Gamma_2) \\
&\subset& \!\!
\bar B_M(\Gamma_1) \cap
\{x \in \mathbb Re^n:\; (x+ \bar \rho \mathbb{B}) \cap (C \cap \Gamma_2) \neq \emptyset \} \\
D_{\bar \rho}\!\! &:=& \!\! \bar
B_M(\Gamma_1) \; \cap\; D \cap \bar B_{\bar \rho}(\Gamma_2) \\
&\subset& \!\!
\bar B_M(\Gamma_1) \cap
\{x \in \mathbb Re^n:\; (x+ \bar \rho \mathbb{B}) \cap (D \cap \Gamma_2) \neq \emptyset \}
\end{array}
\end{equation}
and introducing system
${\mathcal H}_{\bar \rho} := (C_{\bar \rho},F ,D_{\bar \rho} ,G)$,
we have
\begin{align}
\label{eq:KLbound_attr}
|\bar x(t,j)|_{\Gamma_1} &\leq \beta( |\bar x(0,0)|_{\Gamma_1}, t+j) +\frac{\varepsilon}{2} \\
\nonumber
&\leq \beta( M, t+j) +\frac{\varepsilon}{2}, \;\; \forall (t,j) \in \mathsf{dom}\xspace(\bar x), \forall \bar x \in {\mathcal S}_{{\mathcal H}_{\bar \rho}}.
\end{align}
Define now $T_2>0$ satisfying $\beta( M,T_2 ) \leq \frac{\varepsilon}{2}$, and obtain:
\begin{equation}
\label{eq:conv_bound}
\bar x \in {\mathcal S}_{{\mathcal H}_{\bar \rho}} \quad \mathbb Rightarrow \quad
|\bar x(t,j)|_{\Gamma_1} \leq \varepsilon, \forall (t,j) \in \mathsf{dom}\xspace(\bar x): t+j \geq T_2.
\end{equation}
Moreover, from hypothesis (ii), there exists $T_1>0$ such that
$|x(t,j)|_{\Gamma_2} \leq \bar \rho$ for all $(t,j) \in \mathsf{dom}\xspace (x)$ satisfying $t+j \geq T_1$. As a consequence, the tail of solution $x$ (after $t+j \geq T_1$) is a solution to ${\mathcal H}_{\bar \rho}$. By virtue of (\ref{eq:conv_bound}), equation (\ref{eq:conv_x}) is established with $T= T_1+T_2$ and the proof is completed.
\end{proof}
\begin{example} \label{ex:circles}
Consider a hybrid system with continuous states $x=(x_1,x_2,x_3)
\in \mathbb Re^3$ and a discrete state $q \in \{1,-1\}$. The
dynamics are defined as
\[
\begin{aligned}
& \dot x_1 = q x_2 &\quad&x_1^+= x_1\\
& \dot x_2 = -q x_1 &&x_2^+=x_2 \\
& \dot x_3 = x_1^2 -x_3 &&x_3^+=x_3/2 \\
& \dot q =0, &&q^+ =-q ,
\end{aligned}
\]
and the flow and jump sets are selected as closed sets ensuring that along
flowing solutions
we have $x_1(t,j) > 0 \mathbb Rightarrow q(t,j) = 1$ and
$x_1(t,j) < 0 \mathbb Rightarrow q(t,j) = -1$. To this end, when the
solution hits the set $\{x_1=0\}$, the discrete state is toggled, $q^+
= -q$, and the state $x_3$ is halved, $x_3^+ = x_3/2$.
In particular, we select
\[
\begin{aligned}
& C= \{(x,q): x_1 \geq 0, q=1\} \cup \{(x,q): x_1
\leq 0, q =-1\}, \\
& D=\{(x,q): x_1 =0, q=1\} \cup \{(x,q): x_1 = 0, q =-1\}.
\end{aligned}
\]
For any flowing solution starting in $C$, the states $(x_1,x_2)$
describe an arc of a circle centered at $(x_1,x_2)=(0,0)$. The
direction of motion
is clockwise on the half-space $x_1> 0$, and counter-clockwise on $x_1
<0$. Each solution reaches the set $\{(x,q): x_1=0\}$ in finite
time. On this set, the only complete solutions are Zeno, namely, the
discrete state $q$ persistently toggles. The set
\[
\Gamma_2:=\{(x,q): x_1=0\}
\]
is, therefore, globally attractive for ${\mathcal H}$. It is, however,
unstable, as solutions of the $(x_1,x_2)$-subsystem starting
arbitrarily close to $\Gamma_2$ with $x_2>0$ evolve along arcs of
circles that move away from $\Gamma_2$. On $\Gamma_2$, the flow is
described by the differential equation $\dot x_3 = -x_3$, while the
jumps are described by the difference equation $x_3^+ = x_3 /2$. Thus
the $x_2$ axis
\[
\Gamma_1 :=\{(x,q) \in \Gamma_2 : x_3=0\}
\]
is globally asymptotically stable relative to $\Gamma_2$. Since the states
$(x_1,x_2)$ are bounded, so is the $x_3$ state. By
Theorem~\ref{thm:reduction_attractivity}, $\Gamma_1$ is globally
attractive for ${\mathcal H}$. On the other hand, $\Gamma_1$ is unstable for ${\mathcal H}$.
\end{example}
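A crude event-driven simulation of this example illustrates the convergence to $\Gamma_1$. The discretization below is our own: forward Euler for the flow, a jump triggered whenever flowing would leave $C$, the post-flow state snapped to $\{x_1=0\}$, and the Zeno accumulation truncated after a fixed number of jumps.

```python
def simulate(x1, x2, x3, q, T=3.0, dt=1e-4, max_jumps=60):
    """Event-driven forward-Euler sketch of the example above.

    Flow:  dx1 = q x2, dx2 = -q x1, dx3 = x1^2 - x3, dq = 0.
    Jump (at x1 = 0): q -> -q, x3 -> x3/2.
    Flowing is permitted only while staying in C, i.e. x1 >= 0 when
    q = 1 and x1 <= 0 when q = -1; otherwise the solution jumps.
    """
    t, jumps = 0.0, 0
    while t < T and jumps < max_jumps:
        fx1 = q * x2
        can_flow = (x1 > 0 or fx1 >= 0) if q == 1 else (x1 < 0 or fx1 <= 0)
        if can_flow:
            x1 += dt * fx1
            x2 += dt * (-q * x1)
            x3 += dt * (x1 * x1 - x3)
            t += dt
        else:
            # jump at the boundary {x1 = 0}: toggle q and halve x3
            q, x3, x1, jumps = -q, x3 / 2.0, 0.0, jumps + 1
    return x1, x2, x3, jumps

# From (x1, x2, x3) = (1, 0, 1) with q = 1, the planar state flows
# clockwise to (0, -1) in time pi/2; there the solution is Zeno: q
# toggles at every jump while x3 is halved, so x3 -> 0 and the state
# approaches Gamma_1 = {x1 = 0, x3 = 0}.
x1, x2, x3, jumps = simulate(1.0, 0.0, 1.0, 1)
print(x1, x2, x3, jumps)
```

The run ends on the Zeno set with $x_1\approx 0$, $x_2\approx -1$, and $x_3$ halved down to a negligible value, matching the discussion of $\Gamma_1$ and $\Gamma_2$ above.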
\begin{example}\label{ex:attractivity:counterexample}
In
Theorem~\ref{thm:reduction_attractivity}, one may not replace
assumption (i) by the weaker requirement that $\Gamma_1$ be attractive
relative to $\Gamma_2$. We illustrate this fact with an example taken
from~\cite{ElH11}. Consider the smooth differential equation
\[
\begin{aligned}
&\dot{x}_1=(x_2^2+x_3^2)(-x_2)\\
&\dot{x}_2=(x_2^2+x_3^2)(x_1)\\
&\dot{x}_3=-x_3^3,
\end{aligned}
\]
and the sets $\Gamma_1=\{(x_1,x_2,x_3):x_2=x_3=0\}$ and
$\Gamma_2=\{(x_1,x_2,x_3):x_3=0\}$. One can see that $\Gamma_2$ is
globally asymptotically stable, and the motion on $\Gamma_2$ is
described by the system
\[
\begin{aligned}
&\dot{x}_1=-x_2 (x_2^2)\\ &\dot{x}_2=x_1 (x_2^2).
\end{aligned}
\]
On $\Gamma_1\subset \Gamma_2$, every point is an equilibrium. Phase
curves on $\Gamma_2$ off of $\Gamma_1$ are concentric semicircles
$\{x_1^2+x_2^2=c\}$, and therefore $\Gamma_1$ is a global, but
unstable, attractor relative to $\Gamma_2$. As shown in
Figure~\ref{fig:ce}, for initial conditions not in $\Gamma_2$ the
trajectories are bounded and their positive limit set is a circle
inside $\Gamma_2$ which intersects $\Gamma_1$ at equilibrium
points. Thus $\Gamma_1$ is not attractive.
\begin{figure}
\caption{Example~\ref{ex:attractivity:counterexample}: for initial
conditions not in $\Gamma_2$, the positive limit set is a circle inside
$\Gamma_2$ intersecting $\Gamma_1$ at equilibrium points.}
\label{fig:ce}
\end{figure}
\end{example}
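As a numerical sanity check (our sketch, not part of~\cite{ElH11}), one can integrate the vector field of Example~\ref{ex:attractivity:counterexample} and observe that $x_1^2+x_2^2$ is exactly conserved along solutions while $x_3\to 0$, so trajectories approach a circle inside $\Gamma_2$ rather than the set $\Gamma_1$:

```python
import math

def f(x):
    # Vector field of the counterexample: (x1, x2) rotates with angular
    # speed x2^2 + x3^2, while x3 obeys x3' = -x3^3.
    x1, x2, x3 = x
    s = x2 * x2 + x3 * x3
    return (-s * x2, s * x1, -x3 ** 3)

def rk4_step(x, dt):
    # One classical Runge-Kutta 4 step.
    k1 = f(x)
    k2 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k1)))
    k3 = f(tuple(xi + 0.5 * dt * ki for xi, ki in zip(x, k2)))
    k4 = f(tuple(xi + dt * ki for xi, ki in zip(x, k3)))
    return tuple(xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

x = (1.0, 0.0, 0.5)          # initial condition off Gamma_2
r0 = math.hypot(x[0], x[1])  # x1^2 + x2^2 is a conserved quantity
dt, T = 0.01, 200.0
for _ in range(int(T / dt)):
    x = rk4_step(x, dt)
```

After integration $x_3$ has decayed (following $x_3(t)=x_3(0)/\sqrt{1+2x_3(0)^2 t}$) while the radius in the $(x_1,x_2)$-plane is unchanged, confirming that the limit set is a circle and $\Gamma_1$ is not attractive.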
\begin{theorem}[Reduction theorem for asymptotic stability]
\label{thm:reduction_asy_stability}
For a hybrid system ${\mathcal H}$ satisfying the Basic Assumptions, consider
two sets $\Gamma_1 \subset \Gamma_2 \subset \mathbb{R}^n$, with $\Gamma_1$
compact and $\Gamma_2$ closed. Then $\Gamma_1$ is asymptotically
stable if, and only if,
\begin{enumerate}[(i)]
\item $\Gamma_1$ is asymptotically stable relative to $\Gamma_2$,
\item $\Gamma_2$ is locally stable near $\Gamma_1$,
\item $\Gamma_2$ is locally attractive near $\Gamma_1$.
\end{enumerate}
Moreover, $\Gamma_1$ is globally asymptotically stable for ${\mathcal H}$ if,
and only if,
\begin{enumerate}[(i')]
\item $\Gamma_1$ is globally asymptotically stable relative to
$\Gamma_2$,
\item $\Gamma_2$ is locally stable near $\Gamma_1$,
\item $\Gamma_2$ is globally attractive,
\item all solutions of ${\mathcal H}$ are bounded.
\end{enumerate}
\end{theorem}
\begin{proof}
$(\Leftarrow)$ We begin by proving the local version of the theorem.
By assumption (i), there exists $r>0$ such that $\Gamma_1$ is globally
asymptotically stable relative to the set $\Gamma_{2,r} :=\Gamma_2
\cap \bar B_r(\Gamma_1)$ for ${\mathcal H}$. By
Lemma~\ref{lem:restrictions_inherit_stability}, the same property
holds for the restriction ${\mathcal H}_r:={\mathcal H}|_{\bar B_r(\Gamma_1)}$.
By assumption (iii) and by making, if necessary, $r$ smaller,
$\Gamma_{2,r}$ is globally attractive for ${\mathcal H}_r$.
By Theorem~\ref{thm:reduction_attractivity}, the basin of attraction
of $\Gamma_1$ for ${\mathcal H}_r$ is the set of initial conditions from which
solutions of ${\mathcal H}_r$ are bounded. Since the flow and jump sets of
${\mathcal H}_r$ are compact, all solutions of ${\mathcal H}_r$ are bounded, and thus
$\Gamma_1$ is attractive for ${\mathcal H}_r$.
Assumptions (i) and (ii) and Theorem~\ref{thm:reduction_stability}
imply that $\Gamma_1$ is stable for ${\mathcal H}$. Since $\Gamma_1$ is
contained in the interior of $\bar B_r(\Gamma_1)$, by
Lemma~\ref{lem:restrictions_imply_attractivity} the attractivity of
$\Gamma_1$ for ${\mathcal H}_r$ implies the attractivity of $\Gamma_1$ for
${\mathcal H}$. Thus $\Gamma_1$ is asymptotically stable for ${\mathcal H}$.
For the global version, it suffices to notice that assumptions (i'),
(iii'), and (iv') imply, by Theorem~\ref{thm:reduction_attractivity},
that $\Gamma_1$ is globally attractive for ${\mathcal H}$.
$(\Rightarrow)$ Suppose that $\Gamma_1$ is asymptotically
stable. By Lemma~\ref{lem:restrictions_inherit_stability}, $\Gamma_1$
is asymptotically stable for
${\mathcal H}|_{\Gamma_2}$, and thus condition (i) holds.
By~\cite[Proposition~6.4]{GoeTee06}, the basin of attraction of
$\Gamma_1$ is an open set ${\cal B}$ containing $\Gamma_1$, each solution
$x \in {\mathcal S_{\cH}}({\cal B})$ is bounded and, if it is complete, it converges to
$\Gamma_1$. Since $\Gamma_1 \subset \Gamma_2$, such a solution
converges to $\Gamma_2$ as well. Thus the basin of attraction of
$\Gamma_2$ contains ${\cal B}$, proving that $\Gamma_2$ is locally
attractive near $\Gamma_1$ and condition (iii) holds. To prove that
$\Gamma_2$ is locally stable near $\Gamma_1$, let $r>0$ and $\varepsilon>0$
be arbitrary. Since $\Gamma_1$ is stable, there exists $\delta>0$ such
that each $x \in {\mathcal S_{\cH}}(B_\delta(\Gamma_1))$ remains in
$B_\varepsilon(\Gamma_1)$ for all hybrid times in its hybrid time
domain. Since $\Gamma_1 \subset \Gamma_2$, $B_\varepsilon(\Gamma_1) \subset
B_\varepsilon(\Gamma_2)$. Thus each $x \in {\mathcal S_{\cH}}(B_\delta(\Gamma_1))$ remains
in $B_\varepsilon(\Gamma_2)$ for all hybrid times in its hybrid time
domain. In particular, it also does so for all the hybrid times for
which it remains in $B_r(\Gamma_1)$. This proves that condition (ii)
holds.
Suppose that $\Gamma_1$ is globally asymptotically stable. The proof
that conditions (i'), (ii'), (iii') hold is a straightforward
adaptation of the arguments presented above. Since $\Gamma_1$ is
globally attractive, its basin of attraction is $\mathbb{R}^n$. Since
$\Gamma_1$ is compact, by definition all solutions originating in its
basin of attraction are bounded. Thus condition (iv') holds.
\end{proof}
Theorems~\ref{thm:reduction_stability}
and~\ref{thm:reduction_asy_stability} generalize to the hybrid setting
analogous results for continuous systems in~\cite{SeiFlo95,ElHMag13,Son89b}.
The following corollary is of particular interest.
\begin{corollary}\label{cor:asy_stability}
For a hybrid system ${\mathcal H}$ satisfying the Basic Assumptions, consider
two sets $\Gamma_1 \subset \Gamma_2 \subset \mathbb{R}^n$, with $\Gamma_1$
compact and $\Gamma_2$ closed. If
\begin{enumerate}[(i)]
\item $\Gamma_1$ is asymptotically stable relative to $\Gamma_2$,
\item $\Gamma_2$ is asymptotically stable,
\end{enumerate}
then $\Gamma_1$ is asymptotically stable. Moreover, if
\begin{enumerate}[(i')]
\item $\Gamma_1$ is globally asymptotically stable relative to $\Gamma_2$,
\item $\Gamma_2$ is globally asymptotically stable,
\end{enumerate}
then $\Gamma_1$ is asymptotically stable with basin of attraction
given by the set of initial conditions from which all solutions are
bounded. In particular, if all solutions are bounded, then $\Gamma_1$
is globally asymptotically stable.
\end{corollary}
\begin{proof}
If $\Gamma_2$ is asymptotically stable then $\Gamma_2$ is locally
attractive near $\Gamma_1$. Moreover, for each $\varepsilon>0$ there exists
an open set $U$ containing $\Gamma_2$ such that each $x \in {\mathcal S_{\cH}}(U)$
remains in $B_\varepsilon(\Gamma_2)$ for all hybrid times in its hybrid time
domain. Since $\Gamma_1 \subset \Gamma_2$, $\Gamma_1$ is contained in
$U$. Since $\Gamma_1$ is compact, there exists $\delta>0$ such that
$B_\delta(\Gamma_1) \subset U$. Thus each solution $x \in
{\mathcal S_{\cH}}(B_\delta(\Gamma_1))$ remains in $B_\varepsilon(\Gamma_2)$ for all hybrid
times in its hybrid time domain, implying that $\Gamma_2$ is locally
stable near $\Gamma_1$. By Theorem~\ref{thm:reduction_asy_stability},
$\Gamma_1$ is asymptotically stable. An analogous argument holds for
the global version of the corollary.
\end{proof}
If in
Theorems~\ref{thm:reduction_stability},~\ref{thm:reduction_attractivity},
and~\ref{thm:reduction_asy_stability} one replaces $\mathbb{R}^n$ by a closed
subset ${\mathcal X}$ of $\mathbb Re^n$, then the conclusions of the theorems hold
relative to ${\mathcal X}$, for one can apply the theorems to the restriction
${\mathcal H}|_{{\mathcal X}}$. This allows one to apply the theorems inductively to
finite sequences of nested subsets $\Gamma_1 \subset \cdots \subset
\Gamma_l$ to solve the recursive reduction problem.
\begin{theorem}[Recursive reduction theorem for asymptotic
stability] \label{thm:recursive_reduction} For a hybrid system ${\mathcal H}$
satisfying the Basic Assumptions, consider $l$ sets $\Gamma_1
\subset \cdots \subset \Gamma_l \subset \Gamma_{l+1}:=\mathbb{R}^n$, with
$\Gamma_1$ compact and all $\Gamma_i$ closed. If
\begin{enumerate}[(i)]
\item $\Gamma_i$ is asymptotically stable relative to $\Gamma_{i+1}$,
$i=1,\ldots,l$,
\end{enumerate}
then $\Gamma_1$ is asymptotically stable for ${\mathcal H}$. On the other hand, if
\begin{enumerate}[(i')]
\item $\Gamma_i$ is globally asymptotically stable relative to $\Gamma_{i+1}$,
$i=1,\ldots,l$,
\item all $x \in {\mathcal S_{\cH}}$ are bounded,
\end{enumerate}
then $\Gamma_1$ is globally asymptotically stable for ${\mathcal H}$.
\end{theorem}
Analogous statements hold, {\em mutatis mutandis}, for the properties
of stability and attractivity
(see~\cite[Proposition~14]{ElHMag13}). The proof of the theorem above
is contained in that of~\cite[Proposition~14]{ElHMag13} and is
therefore omitted.
\section{Adaptive hybrid observer for uncertain internal models}
\label{sec:example}
Consider an LTI system described by equations of the form
\begin{subequations}\label{eq:exosystem_nominal}
\begin{align}
\dot \chi &= \left[\begin{array}{cc} 0 & -\omega \\ \omega & 0 \end{array} \right]\chi := S \chi, \\
y &= \left[ \begin{array}{cc} 1 & 0 \end{array} \right]\chi := H \chi,
\end{align}
\end{subequations}
\noindent with $\omega \in \mathbb{R}$ not precisely known; lower and upper
bounds, however, are assumed to be available, namely $\omega_m < \omega < \omega_M$,
$\omega_m, \omega_M \in \mathbb{R}_{+}$. Note that (\ref{eq:exosystem_nominal}) can be considered a hybrid
system with empty jump set and jump map. Suppose in addition
that the norm of the initial condition $\chi(0,0)$ is upper and lower
bounded, namely $\chi_{m} \leq |\chi(0,0)| \leq \chi_{M}$,
for some known positive constants $\chi_{m}$ and $\chi_{M}$. By the nature of the
dynamics in (\ref{eq:exosystem_nominal}), the bounds above imply the existence
of a compact set $\mathcal{W} := \{\chi \in \mathbb{R}^2 : |\chi| \in [\chi_m, \chi_M]\}$ that is
strongly forward invariant for (\ref{eq:exosystem_nominal}) and where solutions to (\ref{eq:exosystem_nominal})
are constrained to evolve.
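Since $S$ is skew-symmetric, the flow of (\ref{eq:exosystem_nominal}) is a rotation by $\omega t$, which is what makes the annulus $\mathcal{W}$ strongly forward invariant. This can be verified with a short script (the function name is ours, and the closed-form flow is used in place of a numerical integrator):

```python
import math

def exo_flow(chi0, omega, t):
    # Closed-form flow of chi' = S chi with S = [[0, -omega], [omega, 0]]:
    # e^{S t} is a rotation by omega*t, so |chi(t)| = |chi(0)| for all t.
    c, s = math.cos(omega * t), math.sin(omega * t)
    return (c * chi0[0] - s * chi0[1], s * chi0[0] + c * chi0[1])

omega, chi0 = 1.5, (2.0, 0.0)
# Sample the norm along the trajectory: it stays on the circle |chi| = 2.
norms = [math.hypot(*exo_flow(chi0, omega, 0.1 * k)) for k in range(100)]
```

The norm is constant and the trajectory returns to $\chi(0)$ after one period $2\pi/\omega$, consistent with the period-estimation objective below.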
The objective of this section is to estimate the period of oscillation,
namely $2\pi/\omega$ with $\omega$ unknown, and to (asymptotically)
reconstruct the state of the system (\ref{eq:exosystem_nominal}) from
the measured output $y$. It is shown that this task can be reformulated
in terms of the results discussed in the previous sections.
Towards this end, let
\begin{equation}\label{eq:hyb_estimator}
\left\{ \!\!\! \begin{array}{ccl}
\dot{\hat{\chi}} \!\!\! &=& \!\!\! \hat{S}(T)\hat{\chi}+\hat{L}(T)(y-H\hat{\chi}), \\
\dot q \!\!\! &=& \!\!\! 0, \\
\dot T \!\!\! &=& \!\!\! 0, \\
\dot \tau \!\!\! &=& \!\!\! 1,
\end{array}
\right. \!\!
\left\{ \!\!\! \begin{array}{ccl}
\hat{\chi}^{+} \!\!\!\! &=& \!\!\! \hat{\chi}, \\
q^{+} \!\!\!\! &=& \!\!\! {\rm sign}(y), \\
T^{+} \!\!\!\! &=& \!\!\! \lambda T + (1-\lambda)2\tau, \\
\tau^{+} \!\!\!\! &=& \!\!\! 0,
\end{array} \right.
\end{equation}
\noindent with $\lambda \in [0,1)$, denote the \emph{flow} and \emph{jump} maps, respectively,
of the proposed hybrid estimator, where the matrices $\hat{S}(T)$ and $\hat{L}(T)$ are defined as
\begin{equation}\label{eq:hatS}
\hat{S}(T) := \left[ \begin{array}{cc} 0 & -\dfrac{2 \pi}{T} \\ \dfrac{2 \pi}{T} & 0 \end{array} \right], \hspace{0.5cm}
\hat{L}(T) := \left[ \begin{array}{c} \dfrac{4 \pi}{T} \\[12pt] 0 \end{array} \right]\,,
\end{equation}
\noindent which are such that $(\hat{S}(T)-\hat{L}(T)H)$ is Hurwitz. Note that
the lower bound $T_m$ on $T$ specified below guarantees that the matrix $\hat{S}(T)$ is well-defined.
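The Hurwitz property is easy to double-check: with the gains in \eqref{eq:hatS}, the closed-loop matrix $\hat{S}(T)-\hat{L}(T)H$ has a double eigenvalue at $-2\pi/T$, negative for every admissible $T$ (a small Python sketch; the helper name is ours):

```python
import cmath, math

def closed_loop_eigs(T):
    # Eigenvalues of S_hat(T) - L_hat(T)*H = [[-4*pi/T, -2*pi/T],
    #                                         [ 2*pi/T,       0]],
    # computed from trace and determinant of the 2x2 matrix.
    tr = -4.0 * math.pi / T              # trace
    det = 4.0 * math.pi ** 2 / T ** 2    # determinant
    disc = cmath.sqrt(tr * tr - 4.0 * det)  # vanishes: repeated eigenvalue
    return ((tr + disc) / 2.0, (tr - disc) / 2.0)

# Both eigenvalues equal -2*pi/T, so the matrix is Hurwitz uniformly on
# any interval [T_m, T_M] with T_m > 0.
eigs = [closed_loop_eigs(T) for T in (1.0, 2.5, 10.0)]
```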
Intuitively, the rationale behind the definition of flow and jump sets for the hybrid
estimator given below is that the system is forced to jump whenever the sign of the
logic variable $q$ is different from the sign of the output $y$.
Therefore, the periodicity of the output implies that
$\tau$ is eventually upper bounded by the value $\bar{\tau} = \pi/\omega_m$. Moreover,
note
that the lower and upper bounds on $\omega$ induce
similar bounds on the possible values of $T$, namely $2 \pi/\omega_M = T_{m} < T < T_{M} = 2 \pi/\omega_m$.
Denoting by $\Xi$ the space where state $\xi:= (\chi,\hat{\chi},q,T,\tau)$ evolves,
\begin{equation*}
\Xi:=\mathcal{W} \times \mathbb{R}^{2} \times \{-1,1\} \times [T_m, T_M] \times [0, \pi/\omega_m],
\end{equation*}
the closed-loop system (\ref{eq:exosystem_nominal})-(\ref{eq:hyb_estimator}) is then completed by the flow set
\begin{equation}\label{eq:flow_set}
\mathcal{C} := \{ (\chi,\hat{\chi},q,T,\tau) \in \Xi: qy \geq -\sigma \}\,,
\end{equation}
\noindent and by the jump set
\begin{equation}\label{eq:jump_set}
\mathcal{D} := \{ (\chi, \hat{\chi},q,T,\tau) \in \Xi: |y|\geq \sigma, qy \leq -\sigma \}
\end{equation}
\noindent for some $\sigma > 0$ that should be selected
smaller than $\chi_m$ to guarantee that the output trajectory, under the assumptions for the
initial conditions of (\ref{eq:exosystem_nominal}), intersects the line $qy = -\sigma$.
Note that $\mathcal{C}$ and $\mathcal{D}$ depend only on the output $y$.
\begin{figure}
\caption{The white \emph{doughnut} represents the strongly forward
invariant set $\mathcal{W}$ in which the trajectories of
(\ref{eq:exosystem_nominal}) evolve.}
\label{fig:Trajectories}
\end{figure}
Adopting the notation introduced in the previous sections,
define the functions $\mathfrak{h} : \mathbb{R}^{2} \rightarrow \{-1,1\}$
as
\begin{equation}\label{eq:h_frak}
\mathfrak{h}(\chi) := \left\{
\begin{array}{rcr}
-1, & {\rm if} & \chi_1 \geq \sigma \hspace{0.2cm} \lor \hspace{0.2cm} (|\chi_1|<\sigma \wedge \chi_2 > 0) \\
1, & {\rm if} & \chi_1 \leq -\sigma \hspace{0.2cm} \lor \hspace{0.2cm} (|\chi_1|<\sigma \wedge \chi_2 < 0)
\end{array}
\right.
\end{equation}
\noindent and $\varrho : \mathbb{R}^2 \times \mathbb{R} \rightarrow \mathbb{R}$
as $\varrho(\chi,\tau) := H e^{S\left(\pi/\omega -\tau \right)}\chi - \mathfrak{h}(\chi)\sigma$, which is constant along
flowing solutions because
\begin{equation}\label{eq:rho_const}
\dot{\varrho}(\chi,\tau) = -H e^{S\left(\pi/\omega -\tau \right)} S\chi + H e^{S\left(\pi/\omega-\tau \right)} \dot{\chi} = 0\,,
\end{equation}
\noindent which is zero if and only if $\tau$ is suitably synchronized with $\chi$, namely such that
$\tau = \pi/\omega$ at the next jump: this in turn guarantees that $T^{+} = 2 \pi/ \omega$ at the next jump provided that also $T = 2 \pi/ \omega$.
Then,
consider the sets
\begin{equation}\label{eq:Gamma_3_IM}
\begin{split}
\Gamma_3 := \Big \{ \xi & \in \Xi: \varrho(\chi,\tau)=0 \Big \}\,,
\end{split}
\end{equation}
\begin{equation}\label{eq:Gamma_2_IM}
\begin{split}
\Gamma_2 := \Big \{ \xi & \in \Gamma_3: T = \dfrac{2 \pi}{\omega}\Big \}
\end{split}
\end{equation}
\noindent and
\begin{equation}\label{eq:Gamma_1_IM}
\begin{split}
\Gamma_1 := \Big \{ \xi & \in \Gamma_2: \chi = \hat{\chi} \Big \}
\end{split}
\end{equation}
\noindent with $\xi := (\chi,\hat{\chi},q,T,\tau)$, which clearly satisfy $\Gamma_1 \subset \Gamma_2 \subset \Gamma_3$.
Roughly speaking, on the set $\Gamma_1$ the state $\hat{\chi}$ of the
hybrid estimator (\ref{eq:hyb_estimator}) is perfectly synchronized
with that of system (\ref{eq:exosystem_nominal}), $\Gamma_2$ consists of the set of states that ensure $T^{+} = 2\pi/\omega$
at the next jump, while $\Gamma_3$ prescribes the correct value of the
initial timer $\tau$, depending on the initial phase of $\chi$, such that
at jumps $\tau$ coincides with $\pi/\omega$. Note that $\Gamma_1$ is compact, by the hypothesis
on $\mathcal{W}$, while $\Gamma_2$ and $\Gamma_3$ are closed.
Let us now show GAS of $\Gamma_1$ by using reduction theorems. To this end,
we apply the recursive version of Theorem~\ref{thm:reduction_asy_stability} given in Theorem~\ref{thm:recursive_reduction}. In particular, we show
GAS of $\Gamma_1$ relative to $\Gamma_2$, GAS of $\Gamma_2$ relative to $\Gamma_3$,
GAS of $\Gamma_3$ and finally boundedness of solutions.
To begin with, it can be shown
that $\Gamma_1$ is globally asymptotically stable relative to $\Gamma_2$.
In fact, let $\eta_1 = \chi - \hat{\chi}$ denote
the estimation error; owing to the trivial jumps
of $\chi$ and $\hat{\chi}$, its dynamics restricted to $\Gamma_2$ is described by the hybrid system defined by the flow dynamics
\begin{equation}\label{eq:e_dot}
\dot \eta_1 = S\chi - \hat{S}(T)\hat{\chi} - \hat{L}(T)H\eta_1 = (S-\hat{L}(T)H)\eta_1 \,,
\end{equation}
\noindent which is obtained by considering that, on the set
$\Gamma_2$, $\hat{S}(T) = S$, for $\xi \in \mathcal{C}$, and the jump
dynamics $\eta_1^+ = \eta_1$ for $\xi \in \mathcal{D}$. The claim
follows by recalling that $\hat{L}(T)$ is such that $(S-\hat{L}(T)H)$
is Hurwitz and by \emph{persistent flowing} conditions of stability
\cite[Proposition~3.27]{GoeSanTee12}.
\begin{figure}
\caption{Top graph: time histories of the function $y$ generated by
(\ref{eq:exosystem_nominal}) (solid) and of the state $q$ (dashed).
Middle graph: time history of the estimate $T$. Bottom graph: time
histories of $\hat{\chi}_1$ and $\hat{\chi}_2$ (solid) and of $\chi_1$
and $\chi_2$ (dashed).}
\label{fig:Estimates}
\end{figure}
Moreover, $\Gamma_2$ is globally asymptotically stable relative to $\Gamma_3$.
To show this, let $\eta_2 = T - 2\pi/\omega$ and recall that
all the trajectories of (\ref{eq:hyb_estimator}) that remain in $\Gamma_3$
are characterized by the property that $\tau = \pi/\omega$ at the time of jump.
Therefore, the dynamics of $\eta_2$ restricted to $\Gamma_3$
is described by the hybrid system defined by the flow dynamics $\dot{\eta}_2 = 0$,
for $\xi \in \mathcal{C}$ and the jump dynamics
\begin{equation}\label{eq:eta2_plus}
\eta_2^{+} = T^{+} - \dfrac{2\pi}{\omega} = \lambda\left(T-\dfrac{2\pi}{\omega} \right) = \lambda \eta_2 \,,
\end{equation}
\noindent for $\xi \in \mathcal{D}$. Asymptotic stability of
$\Gamma_2$ relative to $\Gamma_3$ then follows by \emph{persistent
jumping} stability conditions \cite[Proposition~3.24]{GoeSanTee12},
which applies because $\sigma < \chi_m$, and by recalling that $0 \leq
\lambda < 1$. In addition, global attractivity of $\Gamma_3$ can be
shown by relying on the fact that $\tau(t_2,1)$, namely the value of
$\tau$ before the second jump, is equal to $\pi/\omega$, hence
implying that $\varrho(\chi(t,k),\tau(t,k)) = 0$ for $(t,k) \in {\rm
dom} \, \varrho$ with $k>1$. Stability of $\Gamma_3$, on the other
hand, follows by noting that a perturbation $\delta$ on $\tau(0,0)$
with respect to the values in $\Gamma_3$, i.e., values that satisfy
$\varrho(\chi,\tau) = 0$, results in $\tau(t_1,0) = \pi/\omega +
\varepsilon(\delta)$, with $\varepsilon$ a class-$\mathcal{K}$
function of $\delta$.
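The geometric contraction of $\eta_2$ in \eqref{eq:eta2_plus} is easy to reproduce numerically (parameter values as in the simulations below; the script is our sketch):

```python
import math

lam, omega = 0.5, 1.5
T_star = 2.0 * math.pi / omega       # true period 2*pi/omega
T = 2.5                              # initial guess T(0,0)
errors = [abs(T - T_star)]
for _ in range(10):
    # Jump dynamics (eq:eta2_plus) on Gamma_3: since tau = pi/omega at
    # each jump, T+ = lam*T + (1-lam)*2*pi/omega, so eta_2+ = lam*eta_2.
    T = lam * T + (1.0 - lam) * T_star
    errors.append(abs(T - T_star))
```

The error sequence decays exactly as $\lambda^k|\eta_2(0)|$, as predicted.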
Finally, boundedness of the trajectories of the state $\chi$ and of $q$, $T$, and $\tau$
follows from the existence of the strongly forward invariant set $\mathcal{W}$ (determined by
the lower and upper bounds $\chi_m$ and $\chi_M$) and from the definition of the flow and jump
sets, respectively. Therefore, to conclude global asymptotic stability of the set $\Gamma_1$
it only remains to show that the trajectories of $\hat{\chi}$ are bounded. Towards this end,
recall the flow dynamics of $\hat{\chi}$ in (\ref{eq:hyb_estimator}), namely
\begin{equation}\label{eq:dyn_hatchi}
\dot{\hat{\chi}} = (\hat{S}(T)-\hat{L}(T)H)\hat{\chi} + \hat{L}(T)H\chi := M(T)\hat{\chi} + \hat{L}(T)H\chi\,,
\end{equation}
\noindent with $M(T)$, and its derivative with respect to $T$, uniformly bounded in $T$, since
$T \in [T_m, T_M]$, and Hurwitz uniformly in $T$ by definition of $\hat{L}(T)$, whereas
the jump dynamics is described by $\hat{\chi}^{+} = \hat{\chi}$. Thus, by applying
\cite[Lemma~5.12]{khalil_book}, it follows that there exists a unique positive definite solution
$P(T)$ to the Lyapunov equation $P(T)M(T)+M(T)^{\top}P(T) = -I$, with the additional property
that $c_1 |\hat{\chi}|^2 \leq \hat{\chi}^{\top}P(T)\hat{\chi} \leq c_2 |\hat{\chi}|^2$, for some
positive constants $c_1$ and $c_2$.
Boundedness of the trajectories of $\hat{\chi}$ then follows by standard manipulations
on the time derivative of the functions $\hat{\chi}^{\top}P(T)\hat{\chi}$ along
the trajectories of (\ref{eq:dyn_hatchi}) and by noting that $\hat{L}(T)$ is uniformly
bounded, by the definition of $\hat{L}$ and of $T$, and by recalling
that $|\chi|$ is uniformly bounded by definition of the strongly forward invariant compact set $\mathcal{W}$.
In the following numerical simulations, we suppose that $\omega = 1.5$ and
we let $\sigma = 0.25$ and $\lambda = 0.5$. Moreover,
we let $\chi(0,0) = [2,\,0]'$ and $\hat{\chi}(0,0) = [0,\,0]'$, while the remaining components of the
estimator are initialized as $q(0,0) = 1$, $T(0,0) = 2.5$ and $\tau(0,0) = 0$.
The top graph of Figure~\ref{fig:Estimates} depicts the time histories of the function
$y$ generated by (\ref{eq:exosystem_nominal}) and of the state $q(t,k)$, solid and dashed lines,
respectively. The middle graph of Figure~\ref{fig:Estimates} shows the time histories of the estimate
$T(t,k)$, converging to the correct value of the period of oscillation $2 \pi/ \omega$, while the bottom
graph displays the time histories of $\hat{\chi}_{1}(t,k)$ (dark) and $\hat{\chi}_{2}(t,k)$ (gray), solid lines,
converging to the actual states $\chi_{1}(t,k)$ and $\chi_{2}(t,k)$, dashed lines.
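The period-estimation mechanism can be reproduced with a simplified script (our sketch, not the code used for the figures: the exosystem is replaced by its closed-form output $y(t)=|\chi(0,0)|\cos(\omega t)$, the observer state $\hat{\chi}$ is omitted, and jumps are detected on a fixed time grid):

```python
import math

omega, lam, sigma = 1.5, 0.5, 0.25   # values used in the simulations
A = 2.0                              # |chi(0,0)|, so y(t) = A*cos(omega*t)
q, T, tau = 1, 2.5, 0.0              # q(0,0), T(0,0), tau(0,0)
dt, t = 1e-4, 0.0
while t < 50.0:
    y = A * math.cos(omega * t)
    if q * y <= -sigma and abs(y) >= sigma:   # jump set (eq:jump_set)
        T = lam * T + (1.0 - lam) * 2.0 * tau # jump map for T
        q = 1 if y > 0 else -1                # q+ = sign(y)
        tau = 0.0                             # tau+ = 0
    else:                                     # flow: tau' = 1
        tau += dt
        t += dt
```

After the first jump, consecutive jumps are separated by exactly half a period $\pi/\omega$, so $T$ converges geometrically (rate $\lambda$) to $2\pi/\omega \approx 4.1888$, matching the middle graph of Figure~\ref{fig:Estimates}.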
\section{Conclusion} \label{sec:conclusion}
In this paper we presented three reduction theorems for stability,
local/global attractivity, and local/global asymptotic stability of
compact sets for hybrid dynamical systems, along with a number of
their consequences. The proofs of these results rely crucially on the
${\cal KL}$ characterization of robustness of asymptotic stability of
compact sets found in~\cite[Theorem~7.12]{GoeSanTee12}. A different proof
technique is possible which generalizes the proofs found
in~\cite{ElHMag13}. As a future research direction, we conjecture
that, similarly to what was done in~\cite{ElHMag13} for continuous
dynamical systems, it may be possible to state reduction theorems for
hybrid systems in which the set $\Gamma_1$ is only assumed to be
closed, not necessarily bounded.
In addition to the applications listed in the introduction, the
reduction theorems presented in this paper may be employed to
generalize the position control laws for VTOL vehicles presented
in~\cite{RozMag14,MichielettoIFAC17}, by replacing continuous
attitude stabilizers with hybrid ones, such as the one found
in~\cite{MayhewTAC11}. Furthermore, the results of this paper may
be used to generalize the allocation techniques
of~\cite{SassanoEJC16}, possibly following similar ideas to those
in~\cite{SerraniAuto15}.
\section*{Acknowledgments}
The authors wish to thank Andy Teel for fruitful discussions and
Antonio Lor\'ia for making their research collaboration possible.
\newif\ifbiblio
\bibliofalse
\ifbiblio
\begin{IEEEbiography}
[{\includegraphics[width=1.1in,height=1.25in,clip,keepaspectratio]{maggiore.eps}}]
{Manfredi Maggiore} was born in Genoa, Italy. He received the ``Laurea''
degree in Electronic Engineering in 1996 from the University of Genoa
and the PhD degree in Electrical Engineering from the Ohio State
University, USA, in 2000. Since 2000 he has been with the Edward
S. Rogers Sr. Department of Electrical and Computer Engineering,
University of Toronto, Canada, where he is currently Professor. He has
been a visiting Professor at the University of Bologna (2007-2008),
and the Laboratoire des Signaux et Syst\`emes, Ecole CentraleSup\'elec
(2015-2016). His research focuses on mathematical nonlinear control,
and relies on methods from dynamical systems theory and differential
geometry.
\end{IEEEbiography}
\begin{IEEEbiography}
[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{MS.eps}}]
{Mario Sassano}
was born in Rome, Italy, in 1985.
He received the B.S degree in Automation Systems
Engineering and the M.S degree in Systems and
Control Engineering from the University of Rome
``La Sapienza'', Italy, in 2006 and 2008, respectively.
In 2012 he was awarded a Ph.D. degree by Imperial
College London, UK, where he had been a Research
Assistant in the Department of Electrical and Electronic
Engineering since 2009. Currently he is an
Assistant Professor at the University of Rome ``Tor
Vergata'', Italy. His research interests are focused
on nonlinear observer design, optimal control and differential game theory
with applications to mechatronical systems and output regulation for hybrid
systems. He is Associate Editor of the IEEE CSS Conference Editorial Board
and of the EUCA Conference Editorial Board.
\end{IEEEbiography}
\begin{IEEEbiography}
[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{luca_zacc.eps}}]
{Luca Zaccarian} (SM '09 -- F '16) received the Laurea and the Ph.D. degrees from the University of Roma Tor Vergata (Italy), where he was Assistant Professor in control engineering from 2000 to 2006 and then Associate Professor. Since 2011 he has been Directeur de Recherche at the LAAS-CNRS, Toulouse (France), and since 2013 he has held a part-time associate professor position at the University of Trento, Italy. Luca Zaccarian's main research interests include analysis and design of nonlinear and hybrid control systems, and modeling and control of mechatronic systems. He is currently a member of the EUCA-CEB and an associate editor for the IFAC journal Automatica. He was a recipient of the 2001 O. Hugo Schuck Best Paper Award given by the American Automatic Control Council.
\end{IEEEbiography}
\fi
\end{document}
\begin{document}
\title{Inverse medium scattering for a nonlinear Helmholtz equation}
\author{Roland Griesmaier\footnote{Institut f\"ur
Angewandte und Numerische Mathematik,
Karlsruher Institut f\"ur Technologie, Englerstr.~2,
76131 Karlsruhe, Germany ({\tt [email protected]},
{\tt [email protected]})}\,,
Marvin Kn\"oller\footnotemark[1]\,,
and Rainer Mandel\footnote{Institut f\"ur Analysis,
Karlsruher Institut f\"ur Technologie, Englerstr.~2,
76131 Karlsruhe, Germany ({\tt [email protected]}).}
}
\date{\today}
\maketitle
\begin{abstract}
We discuss a time-harmonic inverse scattering problem for a
nonlinear Helmholtz equation with compactly supported inhomogeneous
scattering objects that are described by a nonlinear refractive
index in unbounded free space.
Assuming the knowledge of a nonlinear far field operator, which maps
Herglotz incident waves to the far field patterns
of corresponding solutions of the nonlinear scattering problem, we
show that the nonlinear index of refraction is uniquely determined.
We also generalize two reconstruction methods, a factorization
method and a monotonicity method, to recover the support of such
nonlinear scattering objects.
Numerical results illustrate our theoretical findings.
\end{abstract}
{\small\noindent
Mathematics subject classifications (MSC2010): 35R30, (65N21)
\\\noindent
Keywords: Inverse scattering, nonlinear Helmholtz equation,
uniqueness, factorization method, monotonicity method
\\\noindent
Short title: Nonlinear inverse medium scattering
}
\section{Introduction}
\label{sec:Introduction}
The linear Helmholtz equation is used to model the propagation
of sound waves or electromagnetic waves of small amplitude in
inhomogeneous isotropic media in the time-harmonic regime (see, e.g.,
\cite{ColKre19}).
However, if the magnitudes are large, then intensity-dependent
material laws might be required, and nonlinear Helmholtz equations
are often more appropriate.
A prominent example is the class of Kerr-type nonlinear media (see,
e.g.,~\cite{Boy08,MolNew04} for the physical background).
Optical Kerr effects are studied in various applications in laser
optics (see, e.g., \cite{AdaChaPay89,Che16}), both from a theoretical
and an applied point of view.
In this theoretical study we consider an inverse medium scattering
problem for a class of nonlinear Helmholtz equations that covers for
instance generalized Kerr-type nonlinear media of
arbitrary order.
To begin with, we discuss the well-posedness of the direct scattering
problem.
We consider compactly supported scatterers that are described by a
nonlinear refractive index, which we basically assume to be well
approximated by a linear refractive index at low intensities.
Rewriting the scattering problem in terms of a nonlinear
Lippmann-Schwinger equation we use a contraction argument together
with resolvent estimates for the linearized problem to establish the
existence and uniqueness of solutions for incident waves that are
sufficiently small relative to the size of the nonlinearity.
Here it is important to note that the parameters in nonlinear
material laws are usually extremely small (see, e.g.,
\cite[p.~212]{Boy08}), which means that this assumption does not
rule out incident fields of rather large intensity.
As a byproduct we also give a priori estimates for the solution of
the nonlinear scattering problem as well as estimates for the
linearization error, which are instrumental for the rest of the work.
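The role of the smallness assumption can be illustrated on a scalar caricature of the nonlinear Lippmann-Schwinger fixed-point equation (our toy model, not the actual integral operator; the names `iterate`, `ui`, `c` are illustrative): for a Kerr-type cubic term, the map $u \mapsto u^i + c\,u^3$ contracts near the solution for small incident data but the iteration diverges for large data.

```python
def iterate(ui, c, n=100):
    # Naive fixed-point iteration for the scalar model u = ui + c*u^3,
    # mimicking the contraction argument: the map contracts where
    # |3*c*u^2| < 1, i.e., for sufficiently small data ui.
    u = ui
    for _ in range(n):
        u = ui + c * u ** 3
        if abs(u) > 1e6:
            return None  # iteration blew up: no small fixed point reached
    return u

small = iterate(0.5, 0.1)  # "small incident field": converges
large = iterate(2.0, 0.1)  # "large incident field": diverges
```

In the scattering setting the constant playing the role of $c$ is extremely small, so "small" data can still have large physical intensity, as noted above.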
The main reason for considering incident waves that are small relative
to the size of the nonlinearity here is that we later use
linearization techniques to solve the corresponding inverse problem.
However, we note that a more general existence result for
the direct scattering problem that avoids any smallness assumption on
the incident field has recently been established in \cite{CheEveWet21}
(see also \cite{Gut04,Man19}).
We define a nonlinear far field operator that maps
densities of Herglotz incident fields to the far field patterns
of the corresponding solutions of the direct scattering problem.
In the linear case such far field operators are used to describe the
scattering process for infinitely many incident fields, and their
properties have been widely studied (see, e.g., \cite{ColKre19}).
Similar to \cite{Lec11} (see also \cite{KirGri08} for the linear case)
we derive a factorization of this operator into three simpler
operators.
Here it is important to note that only the second operator in this
factorization is nonlinear.
We derive estimates for the corresponding linearization error.
Restricting the discussion to a class of generalized Kerr-type
nonlinearities of arbitrary order, we then turn to the associated
inverse scattering problem.
We show that the knowledge of the nonlinear far field operator
uniquely determines the nonlinear refractive index.
This generalizes earlier results for the inverse medium scattering
problem for nonlinear Helmholtz equations from \cite{Fur20,Jal04}.
In comparison to these works we consider a less regular and more
general class of nonlinear refractive indices.
Our proof relies on linearization to determine the terms in the
generalized Kerr-type nonlinearity recursively, and it uses the
classical uniqueness result for the corresponding linear inverse
medium scattering problem (see, e.g.,
\cite{Buk08,Nac88,Nov88,Ram88}).
Recently, a uniqueness proof that avoids the use of the linear result
has been established for a more regular class of power-type
nonlinearities than considered here in
\cite{FeiOks20,LasLiiLinSal21,HarLin22}.
Earlier uniqueness results for semilinear elliptic inverse problems
have, e.g., been obtained in \cite{ImaYam13,IsaNac95,IsaSyl94,Sun10}.
Furthermore, inverse scattering problems for nonlinear Schr\"odinger
equations, which are closely related to the nonlinear Helmholtz
equations considered in this work, have been studied using different
techniques than those applied in this work in
\cite{HarSer14,Ser08,Ser12,SerHar08,SerHarFot12,SerSan10}.
We also generalize two popular methods for shape reconstruction for
inverse scattering problems, the factorization method and the
monotonicity method, to the nonlinear scattering problem.
A related factorization method has been discussed in~\cite{Lec11} for
a class of weakly scattering objects and for scattering objects with
small nonlinearity of linear growth.
In comparison to this work we consider a larger class of
nonlinearities without any smallness assumption on the nonlinearity,
but on the other hand we assume that the incident fields are
sufficiently small relative to the size of the nonlinearity.
For linear scattering problems the factorization method has originally
been developed in \cite{Kir98,Kir99,Kir00} (see also \cite{ColKir96}
and the monographs \cite{CakCol14,KirGri08}).
Using estimates for the linearization error we show that the
inf-criterion from \cite{Kir00} can be extended to the nonlinear
case considered in this work.
However, since the far field operator is nonlinear, the efficient
numerical implementation of this criterion using spectral theory that
is used for the linear scattering problem no longer applies.
Instead we have to solve a nonlinear constrained optimization problem
for each sampling point to decide whether it belongs to the support of
the nonlinear scatterer or not.
This leads to a numerical scheme that is considerably more time
consuming than the traditional scheme for the linear case.
The situation is similar for the nonlinear monotonicity method.
For linear scattering problems monotonicity based reconstruction
methods have been proposed in
\cite{AlbGri20,GriHar18,HarPohSal19b,HarPohSal19a}.
Using linearization techniques we show that the method can be extended
to the nonlinear case considered in this work.
Again the tools from spectral theory that have been used for the
numerical implementation of the monotonicity criteria in
\cite{AlbGri20,GriHar18} are not available for the nonlinear
scattering problem.
However, we show that there is a close connection between the
nonlinear monotonicity based shape characterization and the
inf-criterion for the nonlinear factorization method, which we exploit
to implement the nonlinear monotonicity based reconstruction method in
terms of a similar constrained optimization problem as for the
nonlinear factorization method.
We consider a numerical example with a scattering object that is
described by a third-order nonlinear refractive index using optical
coefficients for glass from \cite{Boy08}.
Since the nonlinear part of the refractive index is extremely small,
we work with incident fields of very high intensity such that there
is a significant nonlinear contribution in the scattered field.
Both the forward solver, which is based on the same fixed point
iteration for the nonlinear Lippmann-Schwinger equation that we use
to analyze the direct scattering problem, and the reconstruction
methods work well.
This suggests that the smallness assumptions on the intensity of the
incident fields that we have to make in our theoretical results are
not too restrictive.
The article is organized as follows.
In Section~\ref{sec:Setting} we introduce the nonlinear scattering
problem, and we establish existence and uniqueness of solutions for
the direct scattering problem.
In Section~\ref{sec:InverseProblem} we turn to the inverse scattering
problem to recover the nonlinear refractive index from observations of
the corresponding nonlinear far field operator.
Focusing on a class of generalized Kerr-type nonlinearities, we show
that this inverse problem has a unique solution.
In Sections~\ref{sec:FactorizationMethod} and
\ref{sec:MonotonicityMethod} we derive and analyze a nonlinear
factorization method and a nonlinear monotonicity method for
reconstructing the support of nonlinear scatterers.
In Section~\ref{sec:NumericalExamples} we provide numerical examples.
\section{The nonlinear scattering problem}
\label{sec:Setting}
The nonlinear wave equation
\begin{equation*}
\frac{\partial^2\psi}{\partial t^2}(t,x) - \Delta \psi(t,x)
\,=\, h(x,\psi(t,x)) \,,
\qquad (t,x) \in \field{R}\times\field{R}d \,,
\end{equation*}
is used to model the interaction of acoustic or electromagnetic waves
with a compactly supported inhomogeneous penetrable scattering object
with nonlinear response in $d$-dimensional free space for $d=2,3$.
In the following we restrict the discussion to nonlinearities of the
form
\begin{equation*}
h(x,\psi(t,x))
\,=\, k^2 q(x,|\psi(t,x)|)\psi(t,x) \,,
\qquad (t,x) \in \field{R}\times\field{R}d \,,
\end{equation*}
where $q: \field{R}d\times\field{R}\to\field{R}$ is real-valued.
Specifying a \emph{wave number} $k>0$, the time-periodic ansatz
\begin{equation*}
\psi(t,x)
\,=\, e^{-\mathrm{i} k t} u(x) \,, \qquad (t,x) \in \field{R}\times\field{R}d \,,
\end{equation*}
gives the nonlinear Helmholtz equation
\begin{equation*}
\Delta u + k^2 u
\,=\, - k^2 q(x,|u|) u \,, \qquad x\in\field{R}d \,.
\end{equation*}
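Indeed, since $|e^{-\mathrm{i} k t}|=1$, the ansatz satisfies
$|\psi(t,x)|=|u(x)|$, and substituting it into the nonlinear wave
equation yields
\begin{equation*}
-k^2 e^{-\mathrm{i} k t} u(x) - e^{-\mathrm{i} k t} \Delta u(x)
\,=\, k^2 q(x,|u(x)|)\, e^{-\mathrm{i} k t} u(x) \,,
\qquad (t,x) \in \field{R}\times\field{R}d \,,
\end{equation*}
which after multiplication by $-e^{\mathrm{i} k t}$ is the nonlinear
Helmholtz equation above.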
Denoting by $n^2 := 1+q$ the associated nonlinear
\emph{refractive index}, we make the following general assumptions
throughout this work.
\begin{assumption}
\label{ass:Index1}
The nonlinear contrast function $q\in{L^{\infty}}(\field{R}d\times\field{R})$ shall satisfy
\begin{itemize}
\item[(i)] $\supp(q) \subseteq \overline D\times\field{R}$ for some bounded open
set $D\subset\field{R}d$,
\item[(ii)] $q(x,0)=0$ for a.e.\ $x\in\field{R}d$,
\item[(iii)] and there exist $q_0\in{L^{\infty}}Rd$ with
$\essinf q_0>-1$ and $\supp(q_0)\subseteq \ol{D}$,
and a parameter~$\alpha>0$ such that for
any $z_1,z_2\in\field{C}$ with $|z_1|,|z_2|\leq 1$,
\begin{equation}
\label{eq:Assumption3}
\bigl\|q(\,\cdot\,,|z_1|)z_1-q(\,\cdot\,,|z_2|)z_2-q_0(z_1-z_2)\bigr\|_{{L^{\infty}}Rd}
\,\leq\, C_q (|z_1|^\alpha + |z_2|^\alpha) |z_1-z_2| \,.
\end{equation}
\end{itemize}
\end{assumption}
For later reference we note that \eqref{eq:Assumption3} with $z_1=z$
and $z_2=0$ implies
\begin{equation}
\label{eq:Assumption3a}
\bigl\|q(\,\cdot\,,|z|)z-q_0 z\bigr\|_{{L^{\infty}}Rd}
\,\leq\, C_q |z|^{1+\alpha} \qquad
\text{for any } z\in\field{C} \,,\; |z|\leq 1 \,.
\end{equation}
\begin{example}
\label{exa:generalizedKerr}
An example for a nonlinear material law that satisfies
Assumption~\ref{ass:Index1} is the
\emph{generalized Kerr-type material law}
\begin{equation}
\label{eq:generalizedKerr}
q(x,|z|)
\,=\, q_0(x) + \sum_{l=1}^L q_l(x)|z|^{\alpha_l} \,,
\qquad x\in\field{R}d \,,\; z\in\field{C} \,,
\end{equation}
for $q_0,\ldots,q_L\in {L^{\infty}}Rd$ with support in $\ol{D}$, where
the lowest order term satisfies $\essinf q_0>-1$, and the
exponents fulfill $0<\alpha_1<\cdots<\alpha_L<\infty$.
In this case condition~\eqref{eq:Assumption3} is satisfied for
$\alpha=\alpha_1$ and $C_q=C\sum_{l=1}^L\|q_l\|_{L^{\infty}}D$ with a
constant $C>0$ depending only on the exponents $\alpha_1,\ldots,\alpha_L$.
For the special case when~$L=1$ and $\alpha_1=2$ this gives the
well-known Kerr nonlinearity (see, e.g., \cite{Boy08,MolNew04}).
$\lozenge$
\end{example}
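To verify condition \eqref{eq:Assumption3} in
Example~\ref{exa:generalizedKerr}, one can use the elementary
inequality
\begin{equation*}
\bigl| |z_1|^{\beta} z_1 - |z_2|^{\beta} z_2 \bigr|
\,\leq\, C_\beta \bigl( |z_1|^{\beta} + |z_2|^{\beta} \bigr) |z_1-z_2| \,,
\qquad z_1,z_2\in\field{C} \,,
\end{equation*}
for $\beta>0$ with a constant $C_\beta>0$
(cf.\ Lemma~\ref{lmm:UsefulEstimate} below).
Since $|z_j|^{\alpha_l}\leq |z_j|^{\alpha_1}$ for $|z_j|\leq 1$ and
$l=1,\ldots,L$, applying this inequality with $\beta=\alpha_l$ to each
term of \eqref{eq:generalizedKerr} and summing over $l$ gives, for
$|z_1|,|z_2|\leq 1$,
\begin{equation*}
\bigl\|q(\,\cdot\,,|z_1|)z_1-q(\,\cdot\,,|z_2|)z_2-q_0(z_1-z_2)\bigr\|_{{L^{\infty}}Rd}
\,\leq\, \sum_{l=1}^L C_{\alpha_l} \|q_l\|_{L^{\infty}}D
\bigl( |z_1|^{\alpha_1} + |z_2|^{\alpha_1} \bigr) |z_1-z_2| \,.
\end{equation*}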
We suppose that the wave motion is caused by an \emph{incident field}
$u^i$ satisfying the linear Helmholtz equation
\begin{subequations}
\label{eq:ScatteringProblem}
\begin{equation}
\label{eq:uIncident}
\Delta u^i + k^2 u^i \,=\, 0 \qquad \text{in } \field{R}d \,.
\end{equation}
The scattering problem that we consider consists in determining the
\emph{total field} $u=u^i+u^s$ such that
\begin{equation}
\label{eq:uTotal}
\Delta u + k^2 n^2(\,\cdot\,,|u|) u
\,=\, 0 \qquad \text{in } \field{R}d \,,
\end{equation}
where the \emph{scattered field} $u^s$ satisfies the
\emph{Sommerfeld radiation condition}
\begin{equation}
\label{eq:Sommerfeld}
\lim_{r\to\infty} r^{\frac{d-1}{2}} \Bigl(
\frac{\partial u^s}{\partial r}(x) - \mathrm{i} k u^s(x) \Bigr)
\,=\, 0 \,, \qquad r=|x| \,,
\end{equation}
\end{subequations}
uniformly with respect to all directions $x/|x|\in{S^{d-1}}$.
\begin{remark}
Throughout this work (nonlinear) Helmholtz equations are to be
understood in the strong sense.
For instance, $u\in H^2_{\mathrm{loc}}(\field{R}d)$ is a solution to \eqref{eq:uTotal}
if and only if it satisfies the equation almost everywhere in
$\field{R}d$, with the Laplacian understood in the weak sense.
Elliptic regularity results show that $u^i$ is smooth throughout
$\field{R}d$, and that $u$ and thus also $u^s$ are smooth in
$\field{R}d\setminus\ol{D}$.
In particular the radiation condition \eqref{eq:Sommerfeld} is
well-defined.
As usual we call a solution to a (nonlinear) Helmholtz equation on
an unbounded domain that satisfies the Sommerfeld radiation
condition a \emph{radiating solution}.~
$\lozenge$
\end{remark}
Next we show that the scattering problem \eqref{eq:ScatteringProblem}
is equivalent to the problem of solving the
\emph{nonlinear Lippmann-Schwinger equation}
\begin{equation}
\label{eq:LippmannSchwinger}
u(x)
\,=\, u^i(x) + k^2 \int_D \Phi_k(x-y) q(y,|u(y)|) u(y) \, \dif y \,,
\qquad x\in D\,,
\end{equation}
in ${L^{\infty}}D$.
Here $\Phi_k$ is the outgoing free space fundamental solution to the
Helmholtz equation, i.e., for $x,y\in\field{R}d$, $x\not=y$, we have
$\Phi_k(x-y)=(\mathrm{i}/4)\, H^{(1)}_0(k|x-y|)$ if $d=2$ and
$\Phi_k(x-y)=e^{\mathrm{i} k|x-y|}/(4\pi|x-y|)$ if $d=3$.
The arguments that we use to prove this are the same as in the linear
case (see, e.g., \cite[Thm.~7.12]{Kir21}).
\begin{lemma}
\label{lmm:LippmannSchwinger}
If $u\in H^2_{\mathrm{loc}}(\field{R}d)$ is a solution of \eqref{eq:ScatteringProblem},
then $u|_{D}$ is a solution of \eqref{eq:LippmannSchwinger}.
Conversely, if $u\in{L^{\infty}}D$ is a solution of
\eqref{eq:LippmannSchwinger}, then $u$ can be extended to a solution
$u\in H^2_{\mathrm{loc}}(\field{R}d)$ of \eqref{eq:ScatteringProblem}.
\end{lemma}
\begin{proof}
Let $u\in H^2_{\mathrm{loc}}(\field{R}d)$ be a solution of
\eqref{eq:ScatteringProblem}.
Then $q(\,\cdot\,,|u|) u|_{D}\in {L^{\infty}}D$, and the volume potential
$v := k^2 \Phi_k\ast (q(\,\cdot\,,|u|) u) \in H^2_{\mathrm{loc}}(\field{R}d)$ is a radiating
solution of
\begin{equation}
\label{eq:ProofLS1}
\Delta v + k^2 v
\,=\, - k^2 q(\,\cdot\,,|u|) u \qquad \text{in } \field{R}d
\end{equation}
(see, e.g., \cite[Thm.~7.11]{Kir21}).
Accordingly, $u^s-v$ is a radiating solution of
$\Delta(u^s-v)+k^2(u^s-v)=0$ in~$\field{R}d$.
Thus $v = u^s$ (see, e.g., \cite[p.~24]{ColKre19}), which proves the
first part.
Conversely, let $u\in {L^{\infty}}D$ be a solution of
\eqref{eq:LippmannSchwinger}.
Defining $v := k^2 \Phi_k\ast (q(\,\cdot\,,|u|) u)$ in $\field{R}d$, we find that
$u=u^i+v$ in $D$.
Moreover, $v \in H^2_{\mathrm{loc}}(\field{R}d)$ satisfies \eqref{eq:ProofLS1}, and if
we extend $u$ by $u^i+v$ to all of $\field{R}d$, then $u$ solves
\eqref{eq:ScatteringProblem}.
\end{proof}
In the following we consider this problem for more general source
terms and study radiating solutions $v\in H^2_{\mathrm{loc}}(\field{R}d)$ of
\begin{equation}
\label{eq:GeneralSP}
\Delta v + k^2 v
\,=\, -k^2 q(\,\cdot\,,|v+f|) (v+f)
\qquad \text{in } \field{R}d \,,
\end{equation}
where $f\in {L^{\infty}}D$.
In this situation, $f$ represents the incident field and $v$ the
corresponding scattered field.
As in Lemma~\ref{lmm:LippmannSchwinger} we find that this is
equivalent to the problem of solving the nonlinear integral equation
\begin{equation}
\label{eq:GeneralLS}
v(x)
\,=\, k^2 \int_{D} \Phi_k(x-y) q(y,|v(y)+f(y)|) (v(y)+f(y)) \, \dif y \,,
\qquad x\in D \,,
\end{equation}
in ${L^{\infty}}D$.
\begin{remark}
In the linear case, i.e., when $q=q_0$, the scattering problem
\eqref{eq:GeneralSP} reduces to
\begin{equation}
\label{eq:LinearSP}
\Delta v_0 + k^2 v_0
\,=\, -k^2 q_0 (v_0+f)
\qquad \text{in } \field{R}d \,,
\end{equation}
and the corresponding linear Lippmann-Schwinger equation reads
\begin{equation}
\label{eq:LinearLS}
v_0(x)
\,=\, k^2 \int_{D} \Phi_k(x-y) q_0(y) (v_0(y)+f(y)) \, \dif y \,,
\qquad x\in D\,.
\end{equation}
We note that $I-k^2\Phi_k\ast (q_0\,\cdot\,)$ is an isomorphism
on~${L^2}D$ (see \cite[Thm.~7.13]{Kir21} for the corresponding
result in the case when $D$ is a ball ${B_R(0)}$) as well as
on~${L^{\infty}}D$.
For the latter we recall that $k^2\Phi_k\ast (q_0\,\cdot\,)$ maps
${L^{\infty}}D$ into $H^2({B_R(0)})$ for ${B_R(0)}$ containing $D$, which embeds
continuously into ${L^{\infty}}D$.
In particular we have
\begin{subequations}
\label{eq:EstLS}
\begin{align}
\bigl\| \bigl( I-k^2\Phi_k\ast (q_0\,\cdot\,) \bigr)^{-1} g \bigr\|_{L^2}D
&\,\leq\, C_{LS,2} \| g \|_{L^2}D \,,
&&g\in {L^2}D \,,
\label{eq:EstLSLtwo} \\
\bigl\| \bigl( I-k^2\Phi_k\ast (q_0\,\cdot\,) \bigr)^{-1} g \bigr\|_{L^{\infty}}D
&\,\leq\, C_{LS,\infty} \| g \|_{L^{\infty}}D \,,
&&g\in {L^{\infty}}D \,.
\label{eq:EstLSLinfty}
\end{align}
\end{subequations}
Accordingly, the unique solution $v_0$ of \eqref{eq:LinearLS} is
given by
\begin{equation}
\label{eq:DefV0}
v_0
\,=\, \bigl( I-k^2\Phi_k\ast (q_0\,\cdot\,) \bigr)^{-1}
\bigl( k^2 \Phi_k\ast (q_0 f) \bigr)
\qquad \text{in } D\,,
\end{equation}
and we denote by $V_0$ the linear operator that maps $f$ to $v_0$.
The solution $v_0$ can be extended by the right hand side of
\eqref{eq:LinearLS} to a radiating solution of \eqref{eq:LinearSP}
in all of~$\field{R}d$, which we also denote by $v_0=V_0f$.
For later reference we note that
\eqref{eq:EstLS} implies
\begin{subequations}
\label{eq:V0Estimate}
\begin{align}
\|V_0f\|_{L^2}D
&\,\leq\, C_{V_0,2} \|f\|_{L^2}D \,,
&&f\in {L^2}D \,,
\label{eq:V0EstimateLtwo} \\
\|V_0f\|_{L^{\infty}}D
&\,\leq\, C_{V_0,\infty} \|f\|_{L^{\infty}}D \,,
&&f\in {L^{\infty}}D \,,
\label{eq:V0EstimateLinfty}
\end{align}
\end{subequations}
where
$C_{V_0,\infty}=k^2C_{LS,\infty}\|\Phi_k\|_{L^1(B_{2R}(0))}\|q_0\|_{L^{\infty}}D$
and
$C_{V_0,2}=k^2C_{LS,2}\|\Phi_k\|_{L^1(B_{2R}(0))}\|q_0\|_{L^{\infty}}D$.
Here and in the following $R>0$ is chosen such that~$D\subseteq{B_R(0)}$.
$\lozenge$
\end{remark}
In Proposition~\ref{pro:Wellposedness} below we establish
well-posedness of \eqref{eq:GeneralSP}.
Writing
\begin{equation*}
U_\delta
\,:=\, \bigl\{ v\in {L^{\infty}}D \,\big|\,
\|v\|_{L^{\infty}}D \leq \delta \bigr\} \,,
\qquad \delta>0 \,,
\end{equation*}
we show that for any $f\in U_{\delta}$ with $\delta>0$ sufficiently
small there exists a unique solution $v$ of~\eqref{eq:GeneralLS} in
${L^{\infty}}D$ such that the difference $w:=v-v_0$ with $v_0$ from
\eqref{eq:DefV0} satisfies $w\in U_{\delta}$.
We call this~$v$ the unique small solution of \eqref{eq:GeneralLS}.
Denoting by $V$ the nonlinear operator that maps $f$ to~$v$, we shall
see that $V$ is Fr\'{e}chet-differentiable at zero and $V'(0)=V_0$.
The mere existence of such an operator is well-known; see, for
instance, \cite[Thm.~1.2]{CheEveWet21}, \cite[Thm.~1]{Gut04}, or
\cite[Thm.~1]{Man19}.
\begin{proposition}
\label{pro:Wellposedness}
Suppose that Assumption~\ref{ass:Index1} is satisfied.
There exists $\delta > 0$ such that for any given $f \in U_{\delta}$
the nonlinear integral equation \eqref{eq:GeneralLS} has a unique
solution ${v = V(f) \in {L^{\infty}}D}$ satisfying $v-V_0f \in U_{\delta}$, and
there exists a constant\footnote{Throughout $C$ denotes a generic
constant, the values of which might change from line to line.}
$C>0$ such that, for all such~$f$,
\begin{subequations}
\label{eq:VEstimates}
\begin{align}
\|V(f) \|_{L^{\infty}}D
&\,\leq\, C \|f\|_{L^{\infty}}D \,,
\label{eq:VStabilityLinfty}\\
\|V(f) \|_{L^2}D
&\,\leq\, C \|f\|_{L^2}D \,,
\label{eq:VStabilityLtwo}\\
\|V(f)- V_0f \|_{L^{\infty}}D
&\,\leq\, C \|f\|_{L^{\infty}}D^{1+\alpha} \,,
\label{eq:VHoelderLinfty}\\
\|V(f)- V_0f \|_{L^2}D
&\,\leq\, C \|f\|_{L^{\infty}}D^{\alpha} \|f\|_{L^2}D \,.
\label{eq:VHoelderLtwo}
\end{align}
\end{subequations}
\end{proposition}
\begin{remark}
\label{rem:Wellposedness}
The proof of Proposition~\ref{pro:Wellposedness} below shows that
the bound $\delta>0$ has to be chosen such that the product
$C_q\delta$, where $C_q$ is the constant bounding the nonlinearity
in Assumption~\ref{ass:Index1}, is sufficiently small.
This means that there is a tradeoff between the size of the
nonlinearity and the intensity of the incident fields and
scattered fields that are covered by this well-posedness result.
$\lozenge$
\end{remark}
\begin{proof}[Proof of Proposition~\ref{pro:Wellposedness}]
For any given $f\in {L^{\infty}}(D)$ let $v_0:=V_0f\in{L^{\infty}}D$ as in
\eqref{eq:DefV0}.
Then, $v\in{L^{\infty}}D$ solves~\eqref{eq:GeneralLS} if and only if
$w := v-v_0$ satisfies
\begin{equation*}
w - k^2\Phi_k \ast (q_0w)
\,=\,
k^2\Phi_k\ast \bigl( q_N(\,\cdot\,,|w+v_0+f|)(w+v_0+f) \bigr)
\qquad \text{in } D \,,
\end{equation*}
where $q_N := q-q_0$ denotes the nonlinear part of the contrast
function.
This is equivalent to $w$ being a fixed point of the nonlinear map
$G: {L^{\infty}}D \to {L^{\infty}}D$,
\begin{equation}
\label{eq:DefG}
G(w)
\,:=\, \bigl(I-k^2\Phi_k \ast (q_0\,\cdot\,)\bigr)^{-1}
\Bigl(k^2\Phi_k\ast \bigl( q_N(\,\cdot\,,|w+v_0+f|)(w+v_0+f) \bigr) \Bigr) \,.
\end{equation}
Using \eqref{eq:EstLSLinfty}, Young's inequality,
\eqref{eq:Assumption3a}, and \eqref{eq:V0EstimateLinfty} we have for
any $f\in U_{\delta}$ and $w\in U_{\delta}$ that
\begin{equation*}
\begin{split}
\|G(w)\|_{L^{\infty}}D
&\,\leq\, C_{LS,\infty} \bigl\|
k^2\Phi_k\ast
\bigl(q_N(\,\cdot\,,|w+v_0+f|)(w+v_0+f)\bigr) \bigr\|_{L^{\infty}}D \\
&\,\leq\, k^2 C_{LS,\infty} \|\Phi_k\|_{L^1(B_{2R}(0))}
\bigl\|q_N(\,\cdot\,,|w+v_0+f|)(w+v_0+f)\bigr\|_{L^{\infty}}D \\
&\,\leq\, k^2 C_{LS,\infty} \|\Phi_k\|_{L^1(B_{2R}(0))}
C_q \|w+v_0+f\|_{L^{\infty}}D^{1+\alpha} \\
&\,\leq\, k^2 C_{LS,\infty} \|\Phi_k\|_{L^1(B_{2R}(0))}
C_q \bigl( \delta + (C_{V_0,\infty}+1)\delta \bigr)^{1+\alpha} \,.
\end{split}
\end{equation*}
Here, $R>0$ was chosen such that $D\subseteq{B_R(0)}$.
Similarly, applying \eqref{eq:Assumption3} we obtain for
any~$f\in U_{\delta}$ and $w_1,w_2\in U_{\delta}$ that
\begin{equation*}
\begin{split}
&\|G(w_1)-G(w_2)\|_{L^{\infty}}D\\
&\,\leq\, k^2 C_{LS,\infty} \|\Phi_k\|_{L^1(B_{2R}(0))}
C_q \bigl(\|w_1+v_0+f\|_{L^{\infty}}D^\alpha
+ \|w_2+v_0+f\|_{L^{\infty}}D^\alpha \bigr)
\|w_1-w_2\|_{L^{\infty}}D \\
&\,\leq\, k^2 C_{LS,\infty} \|\Phi_k\|_{L^1(B_{2R}(0))}
C_q\, 2 \bigl(\delta + (C_{V_0,\infty}+1)\delta \bigr)^\alpha
\|w_1-w_2\|_{L^{\infty}}D\,.
\end{split}
\end{equation*}
Choosing $\delta > 0$ such that the product $C_q\delta$ is
sufficiently small, we find that
\begin{equation*}
\|G(w)\|_{L^{\infty}}D \,\leq\, \delta \,,\qquad
\|G(w_1)-G(w_2)\|_{L^{\infty}}D
\,\leq\, \frac{1}{2}\|w_1-w_2\|_{L^{\infty}}D \,.
\end{equation*}
So $G:U_{\delta}\to U_{\delta} $ is a contraction, and Banach's
fixed point theorem yields the existence of a uniquely determined
fixed point $w\in U_{\delta}$ of $G$ such that $v=V(f):=w+V_0f$
solves \eqref{eq:GeneralLS}.
It remains to show~\eqref{eq:VStabilityLinfty}--\eqref{eq:VHoelderLtwo}.
This follows from \eqref{eq:DefG}, \eqref{eq:EstLSLinfty}, Young's
inequality, and \eqref{eq:Assumption3a} because
\begin{equation}
\label{eq:ProofWellposedness7}
\begin{split}
&\|V(f)-V_0f\|_{L^{\infty}}D
\,=\, \|G(V(f)-V_0f)\|_{L^{\infty}}D \\
&\,=\, \bigl\| \bigl(I-k^2\Phi_k \ast (q_0\,\cdot\,)\bigr)^{-1}
\Bigl(k^2\Phi_k\ast \bigl( q_N(\,\cdot\,,|V(f)+f|)(V(f)+f) \bigr) \Bigr)
\bigr\|_{L^{\infty}}D \\
&\,\leq\, k^2 C_{LS,\infty} \|\Phi_k\|_{L^1(B_{2R}(0))}
\bigl\|q_N(\,\cdot\,,|V(f)+f|)(V(f)+f)\bigr\|_{L^{\infty}}D \\
&\,\leq\, k^2 C_{LS,\infty} \|\Phi_k\|_{L^1(B_{2R}(0))}
C_q \|V(f)+f\|_{L^{\infty}}D^{1+\alpha} \\
&\,\leq\, k^2 C_{LS,\infty} \|\Phi_k\|_{L^1(B_{2R}(0))}
C_q \bigl( \|V(f)-V_0f\|_{L^{\infty}}D+\|V_0f+f\|_{L^{\infty}}D \bigr)^\alpha
\|V(f)+f\|_{L^{\infty}}D \,.
\end{split}
\end{equation}
Hence, \eqref{eq:V0EstimateLinfty} yields
\begin{equation*}
\begin{split}
\|V(f)\|_{L^{\infty}}D
&\,\leq\, C_{V_0,\infty}\|f\|_{L^{\infty}}D\\
&\phantom{\,\leq\,}
+ k^2 C_{LS,\infty} \|\Phi_k\|_{L^1(B_{2R}(0))}
C_q \bigl(\delta+(C_{V_0,\infty}+1)\delta)^\alpha
\bigl(\|V(f)\|_{L^{\infty}}D +\|f\|_{L^{\infty}}D\bigr) \,.
\end{split}
\end{equation*}
Given that $C_q\delta$ is sufficiently small as in the first part
of the proof, we thus obtain~\eqref{eq:VStabilityLinfty}.
Therewith, \eqref{eq:ProofWellposedness7}
shows~\eqref{eq:VHoelderLinfty}.
Finally, using \eqref{eq:DefG}, \eqref{eq:EstLSLtwo}, Young's
inequality, and \eqref{eq:Assumption3a} we get
\begin{equation*}
\begin{split}
&\|V(f)-V_0f\|_{L^2}D\\
&\,=\, \|G(V(f)-V_0f)\|_{L^2}D\\
&\,=\, \bigl\| \bigl(I-k^2\Phi_k \ast (q_0\,\cdot\,)\bigr)^{-1}
\Bigl(k^2\Phi_k\ast \bigl( q_N(\,\cdot\,,|V(f)+f|)(V(f)+f) \bigr) \Bigr)
\bigr\|_{L^2}D \\
&\,\leq\, k^2 C_{LS,2} \|\Phi_k\|_{L^1(B_{2R}(0))}
\bigl\|q_N(\,\cdot\,,|V(f)+f|)(V(f)+f)\bigr\|_{L^2}D \\
&\,\leq\, k^2 C_{LS,2} \|\Phi_k\|_{L^1(B_{2R}(0))}
C_q \||V(f)+f|^{1+\alpha}\|_{L^2}D \\
&\,\leq\, k^2 C_{LS,2} \|\Phi_k\|_{L^1(B_{2R}(0))}
C_q \|V(f)+f\|_{L^{\infty}}D^\alpha \|V(f)+f\|_{L^2}D \,.
\end{split}
\end{equation*}
Proceeding as before, this implies \eqref{eq:VStabilityLtwo} when
$C_q\delta$ is sufficiently small, and thus
also~\eqref{eq:VHoelderLtwo}.
\end{proof}
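The contraction argument in the proof immediately suggests a numerical
forward solver: iterate the fixed point map until the update falls
below a tolerance. The following Python sketch illustrates this for a
hypothetical one-dimensional analogue of \eqref{eq:LippmannSchwinger}
with a Kerr-type nonlinearity, using the outgoing one-dimensional
Green's function $\Phi_k(x)=\mathrm{i} e^{\mathrm{i} k |x|}/(2k)$ and a plain
Riemann sum quadrature; all parameter values are illustrative
assumptions, and this toy scheme is not the two- and three-dimensional
solver used in Section~\ref{sec:NumericalExamples}.

```python
import numpy as np

# Hypothetical 1D analogue of the nonlinear Lippmann-Schwinger equation
#   u = u_inc + k^2 \int_D Phi_k(x - y) q(y, |u(y)|) u(y) dy,
# with the outgoing 1D Green's function Phi_k(x) = i e^{i k |x|} / (2 k)
# and a Kerr-type contrast q(y, |z|) = q0(y) + q2(y) |z|^2 on D = (-1, 1).
# All parameter values below are illustrative.

k, N = 2.0, 400
y, h = np.linspace(-1.0, 1.0, N, retstep=True)
q0 = 0.2 * np.ones(N)                       # linear part of the contrast
q2 = 0.1 * np.ones(N)                       # Kerr nonlinearity (alpha = 2)
Phi = 1j * np.exp(1j * k * np.abs(y[:, None] - y[None, :])) / (2.0 * k)

eps = 0.05                                  # small incident amplitude
u_inc = eps * np.exp(1j * k * y)            # incident plane wave on D

# Banach fixed point iteration:  u  <-  u_inc + k^2 h Phi @ (q(|u|) u)
u = u_inc.copy()
for it in range(200):
    q_of_u = q0 + q2 * np.abs(u) ** 2
    u_new = u_inc + k**2 * h * (Phi @ (q_of_u * u))
    change = np.max(np.abs(u_new - u))
    u = u_new
    if change < 1e-12:
        break

# Check that the final iterate satisfies the integral equation.
q_of_u = q0 + q2 * np.abs(u) ** 2
residual = np.max(np.abs(u - u_inc - k**2 * h * (Phi @ (q_of_u * u))))
print(it + 1, change, residual)
```

With these parameters the crude contraction estimate from the proof is
roughly $k^2\|\Phi_k\|_{L^1}\|q_0\|_\infty\approx 0.4$, so the
iteration converges geometrically.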
After extending the right hand side of \eqref{eq:GeneralLS} to all
of~$\field{R}d$, Proposition~\ref{pro:Wellposedness} guarantees the existence
of a unique small radiating solution of the generalized scattering
problem~\eqref{eq:GeneralSP} for any $f\in{L^{\infty}}D$ that is
sufficiently small.
We denote this extension by $v=V(f)$ as well.
In particular, Proposition~\ref{pro:Wellposedness} tells us that for
all ${L^{\infty}}D$-small incoming waves $u^i$ we have a unique
small solution $u = u^i + V(u^i|_{D})$ of the nonlinear forward
problem~\eqref{eq:ScatteringProblem}.
Here small means
that~$\|V(u^i|_{D})-V_0(u^i|_{D})\|_{L^{\infty}}D \leq \delta$ with
$\delta>0$ from Proposition~\ref{pro:Wellposedness}.
Substituting the far field asymptotics of the fundamental solution
(see, e.g., \cite[p.~24 and p.~89]{ColKre19}) into the extension of
the integral representation \eqref{eq:GeneralLS} to all of $\field{R}d$, we
obtain the following result.
\begin{proposition}
\label{pro:Farfield}
Suppose that Assumption~\ref{ass:Index1} is satisfied, let
$\delta>0$ be as in Proposition~\ref{pro:Wellposedness}, and let
$f\in U_{\delta}$.
Then the extension of the unique small solution
$v = V(f)$ of \eqref{eq:GeneralLS} to all of $\field{R}d$
has the asymptotic behavior
\begin{equation*}
v(x)
\,=\, C_d e^{\mathrm{i} k|x|}|x|^{\frac{1-d}{2}} v^\infty(\widehat{x})
+ O(|x|^{-\frac{d+1}{2}}) \,, \qquad |x|\to\infty \,,
\end{equation*}
uniformly in all directions $\widehat{x}:=x/|x|\in{S^{d-1}}$, where
\begin{equation*}
C_d \,=\, e^{\mathrm{i}\pi/4} /\sqrt{8\pi k} \quad\text{if } d=2
\qquad\text{and}\qquad
C_d \,=\, 1/(4\pi) \quad\text{if } d=3 \,.
\end{equation*}
The \emph{far field pattern} $v^\infty = (V(f))^\infty \in {L^2}Sd$
is given by
\begin{equation}
\label{eq:Farfield}
v^\infty(\widehat{x})
\,=\, k^2 \int_{D} q\bigl(y,|v(y)+f(y)|\bigr)
\bigl(v(y)+f(y)\bigr) e^{-\mathrm{i} k \widehat{x}\cdot y} \, \dif y \,,
\qquad \widehat{x}\in{S^{d-1}} \,.
\end{equation}
\end{proposition}
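In a hypothetical one-dimensional analogue of the scattering problem
this asymptotic factorization is exact: for $|x|>1$ and $y\in
D=(-1,1)$ one has $|x-y|=|x|-\operatorname{sgn}(x)\,y$, so the
scattered field outside $D$ equals the outgoing Green's function
factor times a far field integral of the same form as
\eqref{eq:Farfield}. The following Python sketch, with illustrative
parameters not taken from this work, verifies this identity for a
discretized nonlinear solution.

```python
import numpy as np

# Hypothetical 1D analogue with Phi_k(x) = i e^{ik|x|} / (2k); illustrative data.
# For |x| > 1 and y in D = (-1, 1) we have |x - y| = |x| - sign(x) y, so the
# scattered field outside D factors exactly into the Green's function factor
# i e^{ik|x|} / (2k) times the far field integral.

k, N = 2.0, 400
y, h = np.linspace(-1.0, 1.0, N, retstep=True)
q0, q2 = 0.2 * np.ones(N), 0.1 * np.ones(N)
Phi = 1j * np.exp(1j * k * np.abs(y[:, None] - y[None, :])) / (2.0 * k)

u_inc = 0.05 * np.exp(1j * k * y)
u = u_inc.copy()
for _ in range(200):                         # fixed point iteration on D
    u = u_inc + k**2 * h * (Phi @ ((q0 + q2 * np.abs(u) ** 2) * u))

g = (q0 + q2 * np.abs(u) ** 2) * u           # source density q(., |u|) u
x = 7.3                                      # observation point outside D

v_x = k**2 * h * np.sum(1j * np.exp(1j * k * np.abs(x - y)) / (2.0 * k) * g)
far = k**2 * h * np.sum(np.exp(-1j * k * np.sign(x) * y) * g)
v_factored = 1j * np.exp(1j * k * abs(x)) / (2.0 * k) * far

print(abs(v_x - v_factored))
```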
In the following we restrict the discussion to incident fields
that are superpositions of plane waves.
We define the \emph{Herglotz operator} $H:{L^2}Sd \to {L^2}D$,
\begin{equation}
\label{eq:HerglotzOperator}
(H \psi)(x)
\,:=\, \int_{S^{d-1}} \psi(\theta) e^{\mathrm{i} k x\cdot\theta} \, \dif s(\theta) \,,
\qquad x\in{D} \,,
\end{equation}
and we note that its adjoint $H^*:{L^2}D \to {L^2}Sd$ satisfies
\begin{equation}
\label{eq:HerglotzAdjoint}
(H^*\phi)(\widehat{x})
\,=\, \int_{D} \phi(y) e^{-\mathrm{i} k \widehat{x}\cdot y} \, \dif y \,,
\qquad \widehat{x}\in{S^{d-1}} \,.
\end{equation}
The operators $H$ and $H^*$ are compact.
Observing that
\begin{equation}
\label{eq:BoundHerglotzLinfty}
\|H\psi\|_{L^{\infty}}D
\,\leq\, \omega_{d-1}^{1/2}\|\psi\|_{L^2}Sd \,,
\end{equation}
where $\omega_{d-1}$ denotes the area of the unit sphere, we define
\begin{equation*}
\mathcal{D}(F)
\,:=\, \bigl\{ \psi \in{L^2}Sd \;\big|\;
\|\psi\|_{L^2}Sd \leq \delta / \omega_{d-1}^{1/2} \bigr\} \,,
\end{equation*}
where $\delta > 0$ is as in
Proposition~\ref{pro:Wellposedness}.
Then any $f=Hg$ with $g\in\mathcal{D}(F)$ satisfies $f\in U_{\delta}$, and
the unique small radiating solution $v = V(Hg)$ of
\eqref{eq:GeneralSP} has the far field pattern
$v^\infty = (V(Hg))^\infty$.
Introducing the \emph{nonlinear far field operator}
$F:\mathcal{D}(F)\subseteq{L^2}Sd \to {L^2}Sd$ by
\begin{equation}
\label{eq:FarfieldOperator}
F(g)
\,:=\, (V(Hg))^\infty \,,
\end{equation}
we obtain from \eqref{eq:Farfield} that
\begin{equation*}
F(g)
\,=\, H^* \bigl(
k^2 q(\,\cdot\,,|v + Hg|)(v + Hg) \bigr) \,.
\end{equation*}
These facts are summarized as follows.
\begin{proposition}
\label{pro:NonlinearFactorization}
Suppose that Assumption~\ref{ass:Index1} holds, and let
$g\in\mathcal{D}(F)$.
Then the far field pattern of the unique small radiating solution
$V(f)$ of \eqref{eq:GeneralSP} with $f=Hg$ satisfies
\begin{equation}
\label{eq:Factorization}
F(g)
\,=\, H^* T(Hg) \,,
\end{equation}
where $T: \mathcal{D}(T) \subseteq {L^2}D \to {L^2}D$ is defined by
\begin{equation}
\label{eq:OperatorT}
T(f)(x)
\,=\, k^2q\bigl(x,|V(f)(x)+f(x)|\bigr)(V(f)(x)+f(x)) \,, \qquad
x\in{D} \,.
\end{equation}
Here $\mathcal{D}(T) := \ol{H(\mathcal{D}(F))}$.
\end{proposition}
\begin{remark}
In the linear case when $q=q_0$, the far field operator
$F_0:{L^2}Sd\to{L^2}Sd$ is given by
\begin{equation*}
F_0g
\,:=\, (V_0Hg)^\infty \,.
\end{equation*}
The factorization \eqref{eq:Factorization} reads
\begin{equation*}
F_0g
\,=\, H^*T_0Hg \,, \qquad g\in{L^2}Sd \,,
\end{equation*}
where $T_0: {L^2}D \to {L^2}D$ is defined by
\begin{equation}
\label{eq:DefT0}
T_0f
\,:=\, k^2 q_0(f+V_0f) \,.
\end{equation}
Then \eqref{eq:Assumption3a} implies that, for any $f\in\mathcal{D}(T)$,
\begin{equation*}
\begin{split}
&\|T(f) - T_0f\|_{L^2}D
\,=\, k^2 \bigl\|q(\,\cdot\,,|V(f)+f|)(V(f)+f) - q_0 (V_0f+f)\bigr\|_{L^2}D \\
&\,\leq\, k^2 \bigl\|q(\,\cdot\,,|V(f)+f|)(V(f)+f)-q_0 (V(f)+f)\bigr\|_{L^2}D
+ k^2 \|q_0(V(f)-V_0f)\|_{L^2}D \\
&\,\leq\, k^2C_q \bigl\| |V(f)+f|^{1+\alpha} \bigr\|_{L^2}D
+ k^2 \|q_0\|_{L^{\infty}}D \| V(f)-V_0f \|_{L^2}D \,.
\end{split}
\end{equation*}
Applying \eqref{eq:VHoelderLtwo} and
\eqref{eq:VStabilityLinfty}--\eqref{eq:VStabilityLtwo} gives
\begin{equation}
\label{eq:BoundTMinusT0Ltwo}
\begin{split}
\|T(f) -T_0f\|_{L^2}D
&\,\leq\, k^2C_q\|V(f)+f\|_{L^{\infty}}D^\alpha \|V(f)+f\|_{L^2}D
+ C \|f\|_{L^{\infty}}D^\alpha \|f\|_{L^2}D \\
&\,\leq\, C \|f\|_{L^{\infty}}D^\alpha \|f\|_{L^2}D \,.
\end{split}
\end{equation}
Similarly, using \eqref{eq:VHoelderLinfty} and
\eqref{eq:VStabilityLinfty}, we find that, for any
$f\in\mathcal{D}(T)$,
\begin{equation}
\label{eq:BoundTMinusT0Linfty}
\|T(f) -T_0f\|_{L^{\infty}}D
\,\leq\, C \|f\|_{L^{\infty}}D^{\alpha+1} \,.
\end{equation}
$\lozenge$
\end{remark}
\section{Uniqueness for the inverse scattering problem}
\label{sec:InverseProblem}
In this section we restrict the discussion to generalized Kerr-type
nonlinearities $q$ as in~\eqref{eq:generalizedKerr}.
We show that the knowledge of the nonlinear far field operator
uniquely determines the associated nonlinear refractive index.
A related result has recently been established for a different class
of real analytic nonlinearities in \cite{Fur20}.
\begin{theorem}
\label{thm:UniquenessInverseProblem}
For $j=1,2$ let
\begin{equation}
\label{eq:MaterialLawUniqueness}
q^{(j)}(x,|z|)
\,=\, q_0^{(j)}(x) + \sum_{l=1}^L q_l^{(j)}(x)|z|^{\alpha_l} \,,
\qquad x\in\field{R}d \,,\; z\in\field{C} \,,\; j=1,2 \,,
\end{equation}
be generalized Kerr-type nonlinearities, where
$q_0^{(j)},\ldots,q_L^{(j)}\in {L^{\infty}}Rd$ with support in $\ol{D}$,
the lowest order term satisfies $\essinf q_0^{(j)}>-1$,
and the exponents fulfill $0<\alpha_1<\cdots<\alpha_L<\infty$.
If the associated nonlinear far field operators satisfy
$F^{(1)}=F^{(2)}$, then~$q^{(1)} = q^{(2)}$.
\end{theorem}
\begin{proof}
By linearization around zero we first show that
$q_0^{(1)}=q_0^{(2)}$ in \eqref{eq:MaterialLawUniqueness}.
We consider factorizations of the far field operators
$F^{(j)}=H^* T^{(j)}(H)$, $j=1,2$, as in
Proposition~\ref{pro:NonlinearFactorization}, where $H$ and~$H^*$
are the Herglotz operator and its adjoint from
\eqref{eq:HerglotzOperator} and \eqref{eq:HerglotzAdjoint}, and the
operator $T^{(j)}$ is as in \eqref{eq:OperatorT} with $q$ replaced
by $q^{(j)}$.
Furthermore, we denote by $T_0^{(j)}$ the bounded linear operator
from \eqref{eq:DefT0} with $q_0$ replaced by $q_0^{(j)}$.
Then \eqref{eq:BoundTMinusT0Linfty} shows that
$T^{(j)}(f) = T_0^{(j)} f + O(\|f\|_{L^{\infty}}D^{\alpha_1+1})$
as~$\|f\|_{L^{\infty}}D\to 0$.
Recalling \eqref{eq:BoundHerglotzLinfty}, we obtain from
$F^{(1)}=F^{(2)}$ that
\begin{equation*}
F_0^{(1)}
\,=\, H^* T_0^{(1)} H
\,=\, H^* T_0^{(2)} H
\,=\, F_0^{(2)} \,,
\end{equation*}
where $F_0^{(j)}$ is the linear far field operator corresponding to
the contrast function $q_0^{(j)}$, $j=1,2$.
The uniqueness of solutions to the inverse medium scattering problem
for the linear Helmholtz equation (see, e.g.,
\cite[Thm.~7.28]{Kir21} or \cite{Buk08,Nac88,Nov88,Ram88}) implies
that $q_0^{(1)}=q_0^{(2)}=:q_0$.
In particular we conclude that $T_0^{(1)}=T_0^{(2)}=:T_0$.
To prove the theorem by induction, we now assume
$q_l^{(1)}=q_l^{(2)}=:q_l$ for $l=0,\ldots,m-1$, where
$m\in\{1,\ldots,L\}$.
The nonlinear Lippmann-Schwinger equation \eqref{eq:GeneralLS}
gives, for $f\in U_{\delta}$ and $j=1,2$,
\begin{equation*}
V^{(j)}(f)
\,=\, k^2\Phi_k\ast q^{(j)}\bigl(\,\cdot\,,|V^{(j)}(f)+f|\bigr)(V^{(j)}(f)+f)
\qquad \text{in } {D} \,.
\end{equation*}
Here, $V^{(j)}(f)$ stands for the solution map $V(f)$ from
Proposition~\ref{pro:Wellposedness} with $q$ replaced by $q^{(j)}$.
Setting $\alpha_0:= 0$ and using \eqref{eq:MaterialLawUniqueness} we
obtain that
\begin{equation}
\label{eq:ProofUniqueness7}
\begin{split}
&V^{(1)}(f)-V^{(2)}(f) \\
&\,=\, k^2\Phi_k\ast \bigl(
q^{(1)}(\,\cdot\,,|V^{(1)}(f)+f|)(V^{(1)}(f)+f)
- q^{(2)}(\,\cdot\,,|V^{(2)}(f)+f|)(V^{(2)}(f)+f) \bigr) \\
&\,=\, k^2\Phi_k\ast \biggl( \sum_{l=0}^{m-1} q_l
\Bigl( |V^{(1)}(f)+f|^{\alpha_l}(V^{(1)}(f)+f)
- |V^{(2)}(f)+f|^{\alpha_l}(V^{(2)}(f)+f) \Bigr) \biggr) \\
&\phantom{\,=\,}
+ k^2\Phi_k\ast \biggl( \sum_{l=m}^{L} \Bigl(
q_l^{(1)} |V^{(1)}(f)+f|^{\alpha_l}(V^{(1)}(f)+f)
- q_l^{(2)}|V^{(2)}(f)+f|^{\alpha_l}(V^{(2)}(f)+f)
\Bigr) \biggr)
\end{split}
\end{equation}
in ${D}$.
Applying Lemma~\ref{lmm:UsefulEstimate} and
\eqref{eq:VStabilityLinfty} we find that, for $l=1,\ldots,m-1$,
\begin{equation}
\label{eq:ProofUniqueness8}
\begin{split}
\bigl| |V^{(1)}(f)+f|^{\alpha_l}(V^{(1)}(f)+f)
-& |V^{(2)}(f)+f|^{\alpha_l}(V^{(2)}(f)+f) \bigr|\\
&\,\leq\, C \bigl( |f| + |V^{(1)}(f)| + |V^{(2)}(f)| \bigr)^{\alpha_l}
\bigl| V^{(1)}(f) - V^{(2)}(f) \bigr| \\
&\,\leq\, C \|f\|_{L^{\infty}}D^{\alpha_l}
\bigl| V^{(1)}(f) - V^{(2)}(f) \bigr| \,.
\end{split}
\end{equation}
Accordingly,
\begin{multline*}
\sum_{l=0}^{m-1} q_l \Bigl( |V^{(1)}(f)+f|^{\alpha_l}(V^{(1)}(f)+f)
- |V^{(2)}(f)+f|^{\alpha_l}(V^{(2)}(f)+f) \Bigr) \\
\,=\, {\widetilde q}_{f,m-1} \bigl(V^{(1)}(f) - V^{(2)}(f)\bigr) \,,
\end{multline*}
where ${\widetilde q}_{f,m-1}\in L^\infty(\field{R}d)$ is given by
\begin{equation*}
{\widetilde q}_{f,m-1}
:= q_0
+ \sum_{l=1}^{m-1} \! q_l \frac{
|V^{(1)}(f)+f|^{\alpha_l}(V^{(1)}(f)+f)
- |V^{(2)}(f)+f|^{\alpha_l}(V^{(2)}(f)+f) }
{V^{(1)}(f) - V^{(2)}(f)} {\bf 1}_{V^{(1)}(f) \neq V^{(2)}(f)} \,.
\end{equation*}
We note that ${\widetilde q}_{f,m-1}$ is supported in $\ol{D}$ and
\eqref{eq:ProofUniqueness8} implies that
\begin{equation}
\label{eq:ProofUniqueness9}
\|{\widetilde q}_{f,m-1}-q_0\|_{L^{\infty}}D
\,\leq \, C\|f\|_{L^{\infty}}D^{\alpha_1} \,.
\end{equation}
Hence, for $f\in U_{\delta}$ such that $\|f\|_{L^{\infty}}D$ is
sufficiently small, we conclude from \eqref{eq:EstLSLinfty} that the
operator $I-k^2\Phi_k\ast ({\widetilde q}_{f,m-1}\,\cdot\,):{L^{\infty}}D\to {L^{\infty}}D$
is invertible with a uniform bound for the operator norm of the
inverse (see, e.g., \cite[Thm.~10.1]{Kre14}).
Denoting
\begin{equation}
\label{eq:DefR(f)}
R(f)
\,:=\, \sum_{l=m}^L
\Bigl( q_l^{(1)} |V^{(1)}(f)+f|^{\alpha_l}(V^{(1)}(f)+f)
- q_l^{(2)}|V^{(2)}(f)+f|^{\alpha_l}(V^{(2)}(f)+f) \Bigr)
\end{equation}
we find from \eqref{eq:ProofUniqueness7}, Young's inequality, and
\eqref{eq:VStabilityLinfty} that
\begin{equation}
\label{eq:ProofUniqueness11}
\begin{split}
\bigl\|V^{(1)}(f)-V^{(2)}(f)\bigr\|_{L^{\infty}}D
&\,=\, \bigl\| \bigl( I-k^2\Phi_k\ast ({\widetilde q}_{f,m-1}\,\cdot\,) \bigr)^{-1}
(k^2\Phi_k\ast R(f)) \bigr\|_{L^{\infty}}D \\
&\,\leq\, C \|k^2\Phi_k\ast R(f)\|_{L^{\infty}}D \\
&\,\leq\, C \|\Phi_k\|_{L^1(B_{2R}(0))} \|R(f)\|_{L^{\infty}}D \\
&\,\leq\, C \bigl(
\| V^{(1)}(f) + f \|_{L^{\infty}}D^{\alpha_m+1}
+ \| V^{(2)}(f) + f \|_{L^{\infty}}D^{\alpha_m+1} \bigr) \\
&\,\leq\, C \|f\|_{L^{\infty}}D^{\alpha_m+1} \,.
\end{split}
\end{equation}
Here, $R>0$ was chosen such that $D\subseteq{B_R(0)}$.
Next we want to use \eqref{eq:ProofUniqueness11} in order to deduce
$q_m^{(1)}= q_m^{(2)}$. Set $w_f:= V^{(1)}(f)-V^{(2)}(f)$.
By assumption we know that the far field of $w_f$ vanishes
whenever~$f$ is a sufficiently small Herglotz wave, i.e., when
$f\in U_{\delta}\cap\mathcal{R}(H)$ with $\delta>0$ as in
Proposition~\ref{pro:Wellposedness}.
Moreover, we find as in the proof of
Lemma~\ref{lmm:LippmannSchwinger} that~\eqref{eq:ProofUniqueness7}
implies
\begin{equation*}
\Delta w_f + k^2(1+{\widetilde q}_{f,m-1})w_f
\,=\, -k^2 R(f) \qquad \text{in } \field{R}d \,,
\end{equation*}
in particular $\Delta w_f+k^2w_f= 0$ in $\field{R}d\setminus\ol{{B_R(0)}}$ and
$w_f$ is radiating.
So Rellich's lemma (see, e.g., \cite[Lmm.~2.12]{ColKre19}) gives
$w_f = 0$ in $\field{R}d\setminus\ol{{B_R(0)}}$.
Now let $v\in H^2({B_R(0)})$ be any solution of
$\Delta v+k^2(1+q_0)v= 0$ in ${B_R(0)}$.
Then, for all $f\in U_{\delta}\cap\mathcal{R}(H)$,
\begin{equation*}
\begin{split}
0
&\,=\, \int_{\partial{B_R(0)}} \Bigl( w_f \frac{\partial v}{\partial\nu}
- v\frac{\partial w_f}{\partial\nu} \Bigr) \, \dif s \\
&\,=\, \int_{{B_R(0)}} \bigl( w_f \Delta v - v\Delta w_f \bigr) \, \dif x \\
&\,=\, \int_{{B_R(0)}} \Bigl( w_f \bigl(-k^2(1+q_0)v\bigr)
- v\bigl(-k^2(1+{\widetilde q}_{f,m-1})w_f-k^2R(f)\bigr) \Bigr) \, \dif x \\
&\,=\, \int_{{B_R(0)}} v
\bigl( k^2R(f) + k^2({\widetilde q}_{f,m-1}-q_0)w_f \bigr) \, \dif x \\
&\,=\, \int_{D} v
\bigl( k^2R(f) + k^2({\widetilde q}_{f,m-1}-q_0)w_f \bigr) \, \dif x \,.
\end{split}
\end{equation*}
In the last equality we used that $R(f)$ and ${\widetilde q}_{f,m-1}-q_0$
are supported in $\ol{D}$ by our assumption on the nonlinear
contrast function.
In \eqref{eq:ProofUniqueness11} we found that
$\|w_f\|_{L^{\infty}}D \leq C \|f\|_{L^{\infty}}D^{\alpha_m+1}$,
and combining this with \eqref{eq:ProofUniqueness9} gives
\begin{equation}
\label{eq:OrthogonalityR(f)a}
0
\,=\, \int_{D} v \, R(f) \, \dif x
+ O\bigl(\|f\|_{L^{\infty}}D^{\alpha_m+\alpha_1+1}\bigr)
\qquad \text{as } \|f\|_{L^{\infty}}D \to 0 \,.
\end{equation}
Next we identify the leading order term in $R(f)$.
Using Lemma~\ref{lmm:UsefulEstimate} and \eqref{eq:VStabilityLinfty},
\eqref{eq:VHoelderLinfty} we obtain that, for~$j=1,2$,
\begin{equation}
\label{eq:OrthogonalityR(f)b}
\begin{split}
&\bigl| |V^{(j)}(f)+f|^{\alpha_m}(V^{(j)}(f)+f)
-|V_0f+f|^{\alpha_m}(V_0f+f) \bigr|\\
&\,\leq\, C \bigl( |f| + |V^{(j)}(f)| + |V_0f| \bigr)^{\alpha_m}
\bigl| V^{(j)}(f) - V_0f \bigr| \\
&\,\leq\, C \|f\|_{L^{\infty}}D^{\alpha_m+\alpha_1+1} \,.
\end{split}
\end{equation}
Similarly, we find that, for $j=1,2$ and $l=m+1,\ldots,L$,
\begin{equation}
\label{eq:OrthogonalityR(f)c}
\bigl| |V^{(j)}(f)+f|^{\alpha_l}(V^{(j)}(f)+f) \bigr|
\,\leq\, C \|f\|_{L^{\infty}}D^{\alpha_{m+1}+1} \,.
\end{equation}
Substituting \eqref{eq:DefR(f)} into \eqref{eq:OrthogonalityR(f)a},
and applying
\eqref{eq:OrthogonalityR(f)b}--\eqref{eq:OrthogonalityR(f)c} gives
\begin{equation*}
0
\,=\, \int_{D} v \,
\bigl( q_m^{(1)} - q_m^{(2)} \bigr)
|V_0f+f|^{\alpha_m}(V_0f+f)
\, \dif x
+ O\bigl(\|f\|_{L^{\infty}}D^{\alpha_m+\alpha_1+1}\bigr)
+ O\bigl(\|f\|_{L^{\infty}}D^{\alpha_{m+1}+1}\bigr)
\end{equation*}
as $\|f\|_{L^{\infty}}D\to 0$.
Since $\alpha_1>0$ and $\alpha_{m+1}>\alpha_m$, both error terms are of
higher order than $\|f\|_{L^{\infty}}D^{\alpha_m+1}$.
Replacing $f$ by $tf$ with $t\in(0,1]$, dividing by $t^{\alpha_m+1}$, and
letting $t\to 0$, we hence obtain, for all $f\in U_{\delta}\cap\field{R}cal(H)$,
\begin{equation*}
0
\,=\, \int_{D} v \, \bigl(q_m^{(1)}-q_m^{(2)}\bigr)
|V_0f+f|^{\alpha_m} (V_0f+f) \, \dif x \,.
\end{equation*}
Setting $f=f_1+tf_2$ for $f_1,f_2\in U_{\delta}\cap\field{R}cal(H)$ and
differentiating with respect to $t$ at $t=0$ gives
\begin{equation*}
\begin{split}
0
\,=\, \int_{D} v \bigl(q_m^{(1)}-q_m^{(2)}\bigr)
Z(f_1,f_2) \, \dif x \,,
\end{split}
\end{equation*}
where
\begin{equation*}
\begin{split}
Z(f_1,f_2)
&\,:=\, \Bigl(1+\frac{\alpha_m}{2}\Bigr)|V_0f_1+f_1|^{\alpha_m}
(V_0f_2+f_2)\\
&\phantom{\,:=\,}
+ \frac{\alpha_m}{2} |V_0f_1+f_1|^{\alpha_m-2}(V_0f_1+f_1)^2\,
\ol{(V_0f_2+f_2)} \,.
\end{split}
\end{equation*}
Since $\mathrm{i} f_1 \in U_{\delta}\cap\field{R}cal(H)$, too, we even get
\begin{equation}
\label{eq:Orthogonalityq1minusq2}
\begin{split}
0
&\,=\, \int_{D} v \, \bigl(q_m^{(1)}-q_m^{(2)}\bigr)
\bigl( Z(f_1,f_2) + Z(\mathrm{i} f_1,f_2) \bigr) \, \dif x \\
&\,=\, (2+\alpha_m) \int_{D} v \,
\bigl(q_m^{(1)}-q_m^{(2)}\bigr) |V_0f_1+f_1|^{\alpha_m}
(V_0f_2+f_2) \, \dif x \,.
\end{split}
\end{equation}
Next we recall that the span of all total fields $f+V_0f$ that
correspond to radiating solutions~$V_0f$ of the linear scattering
problem \eqref{eq:LinearSP} with Herglotz incident fields $f=Hg$,
$g\in{L^2}Sd$, is dense in the space of solutions ${\widetilde v}$ of the linear
Helmholtz equation
\begin{equation*}
\Delta {\widetilde v} + k^2(1+q_0){\widetilde v}
\,=\, 0 \qquad \text{in } {B_R(0)}
\end{equation*}
with respect to the ${L^2}BR$-norm where $D\subset {B_R(0)}$ (see
\cite[Thm.~7.24]{Kir21}, where this result has been shown for plane
wave incident fields instead of Herglotz incident fields).
Since $f_1,f_2\in U_{\delta}\cap\field{R}cal(H)$ have been arbitrary in
\eqref{eq:Orthogonalityq1minusq2}, we get for all solutions
$v,{\widetilde v}\in H^2({B_R(0)})$ of $\Delta v + k^2(1+q_0)v = 0$ in ${B_R(0)}$
that
\begin{equation*}
0
\,=\, \int_{D} v {\widetilde v}
\bigl(q_m^{(1)}-q_m^{(2)}\bigr) |V_0f_1+f_1|^{\alpha_m} \, \dif x \,.
\end{equation*}
This gives $(q_m^{(1)}-q_m^{(2)})|V_0f_1+f_1|^{\alpha_m}=0$ for any
$f_1 \in U_{\delta}\cap\field{R}cal(H)$ (see, e.g.,
\cite[Thm.~7.27]{Kir21} or \cite{Buk08,Nac88,Nov88,Ram88}).
From this we infer $(q_m^{(1)}-q_m^{(2)})(V_0f_1+f_1)=0$ for any
$f_1 \in U_{\delta}\cap\field{R}cal(H)$ and thus
\begin{equation*}
\int_D (q_m^{(1)}-q_m^{(2)})(V_0f_1+f_1)(V_0f_2+f_2) \, \dif x
\,=\, 0
\end{equation*}
for any given $f_1,f_2 \in U_{\delta}\cap\field{R}cal(H)$.
The density result used above shows
$q_m^{(1)}-q_m^{(2)}=0$ a.e.\ in~$D$, and thus
$q_m^{(1)}=q_m^{(2)}$.
So the claim is proven by induction.
\end{proof}
\section{The nonlinear factorization method}
\label{sec:FactorizationMethod}
In this section we discuss a generalization of the factorization
method to recover the shape of a nonlinear scattering object from
observations of the corresponding nonlinear far field operator.
We consider general nonlinear contrast functions
$q\in L^\infty(\field{R}d\times\field{R})$ as in Section~\ref{sec:Setting}, but here
we make the following slightly stronger assumptions.
\begin{assumption}
\label{ass:Index2}
Let $D$ be open and bounded with Lipschitz boundary such that
$\field{R}d\setminus\ol{D}$ is connected.
Then the nonlinear contrast function $q\in L^\infty(\field{R}d\times\field{R})$
shall satisfy Assumption~\ref{ass:Index1}, and
\begin{itemize}
\item[(i)] $\supp(q) \subseteq \ol{D}\times\field{R}$,
\item[(ii)] $\supp(q_0) = \ol{D}$ with
$q_0 \geq q_{0,\mathrm{min}} > 0$ a.e.\ in $D$ for some $q_{0,\mathrm{min}}>0$,
\item[(iii)] $k^2$ is such that the homogeneous
linear transmission eigenvalue problem to determine
$v,w\in {L^2}D$ with
\begin{align*}
\Delta v + k^2 v
&\,=\, 0 \quad \text{in } D \,,
& v
&\,=\, w \quad \text{on } \partial D \,,\\
\Delta w + k^2 (1+q_0) w
&\,=\, 0 \quad \text{in } D \,,
& \frac{\partial v}{\partial\nu}
&\,=\, \frac{\partial w}{\partial\nu} \quad \text{on } \partial D \,,
\end{align*}
(see, e.g., \cite[Def.~7.21]{Kir21}) has no
nontrivial solution.
\end{itemize}
\end{assumption}
A factorization method for nonlinear weakly scattering objects and for
scattering objects with small nonlinearity of linear growth has
already been discussed in \cite{Lec11}.
In contrast to this work, we consider a larger class of nonlinear
refractive indices without any smallness assumption on the a priori
unknown nonlinearity, but we assume that the incident fields that are
used for the reconstruction are small relative to the size of the
nonlinearity.
Let $\delta>0$ be as in Proposition~\ref{pro:Wellposedness}.
We consider the nonlinear far field operator~$F$
from~\eqref{eq:FarfieldOperator} with the factorization $F=H^*T(H)$
from Proposition~\ref{pro:NonlinearFactorization}.
The next theorem is a nonlinear version of the abstract inf-criterion
of the factorization method to describe the range of $H^*$ in terms
of~$F$.
This result has been established in \cite[Thm.~2.1]{Lec11}.
The proof is essentially the same as in the linear case (see, e.g.,
\cite[Lmm.~7.33]{Kir21}).
\begin{theorem}
\label{thm:AbstractInfCriterion}
Let $X$ and $Y$ be Hilbert spaces, $\rho>0$, and let
\begin{equation*}
\mathcal{F}:\mathcal{D}(\mathcal{F})
\,:=\, \{ g\in X \;|\; \|g\|_X \leq \rho \}
\subseteq X \to X
\end{equation*}
be a nonlinear operator.
We assume that $\mathcal{F} = \mathcal{H}^*\mathbb{T}cal(\mathcal{H})$, where $\mathcal{H}:Y\to X$
is a compact linear operator and $\mathbb{T}cal: \mathcal{D}(\mathbb{T}cal)\subseteq Y\to Y$
with $\mathcal{D}(\mathbb{T}cal) = \ol{\mathcal{H}(\mathcal{D}(\mathcal{F}))}$ satisfies
\begin{equation*}
\|\mathbb{T}cal(\mathcal{H} g)\|_Y
\,\leq\, C_* \|\mathcal{H} g\|_Y
\end{equation*}
and
\begin{equation*}
|\skp{\mathbb{T}cal(\mathcal{H} g)}{\mathcal{H} g}_Y|
\,\geq\, c_* \|\mathcal{H} g\|_Y^2
\end{equation*}
for all $g\in\mathcal{D}(\mathcal{F})$ with $\|g\|_X\leq\rho$ and some
$c_*,C_*>0$.
Then, for any $\phi\in X$, $\phi\not=0$, and
any~$0<{\widetilde\rho}\leq\rho$,
\begin{equation}
\label{eq:InfCharacterization}
\phi \in\field{R}cal(\mathcal{H}^*)
\quad\Longleftrightarrow\quad
\inf \biggl\{
\Bigl| \frac{\skp{\mathcal{F}(g)}{g}_X}{\skp{g}{\phi}_X^2} \Bigr|
\;\bigg|\; g\in \mathcal{D}(\mathcal{F})\subseteq X \,,\; \|g\|_X={\widetilde\rho} \,,\;
\skp{g}{\phi}_X \not=0 \biggr\}
\,>\, 0 \,.
\end{equation}
\end{theorem}
\begin{proof}
Let $0\not=\phi=\mathcal{H}^*\psi\in\field{R}cal(\mathcal{H}^*)$ for some $\psi\in Y$.
Then $\psi\not=0$, and for any $g\in\mathcal{D}(\mathcal{F})\subseteq X$ with
$\|g\|_X={\widetilde\rho}\leq\rho$ and $\skp{g}{\phi}_X\not=0$ we
find that
\begin{equation*}
\begin{split}
|\skp{\mathcal{F}(g)}{g}_X|
&\,=\, |\skp{\mathcal{H}^*\mathbb{T}cal(\mathcal{H} g)}{g}_X|
\,=\, |\skp{\mathbb{T}cal(\mathcal{H} g)}{\mathcal{H} g}_Y|
\,\geq\, c_* \|\mathcal{H} g\|_Y^2\\
&\,=\, \frac{c_*}{\|\psi\|_Y^2} \|\mathcal{H} g\|_Y^2 \|\psi\|_Y^2
\,\geq\, \frac{c_*}{\|\psi\|_Y^2} |\skp{\mathcal{H} g}{\psi}_Y|^2\\
&\,=\, \frac{c_*}{\|\psi\|_Y^2} |\skp{g}{\mathcal{H}^*\psi}_X|^2
\,=\, \frac{c_*}{\|\psi\|_Y^2} |\skp{g}{\phi}_X|^2 \,.
\end{split}
\end{equation*}
Thus we have found a positive lower bound for the infimum in
\eqref{eq:InfCharacterization}.
Now let $0\not=\phi\not\in\field{R}cal(\mathcal{H}^*)$.
We first show that the subspace
$\{ \mathcal{H} g \;|\; g\in X \,,\; \skp{g}{\phi}_X=0 \}$ is
dense in $\field{R}cal(\mathcal{H})$.
Let $\psi\in\field{R}cal(\mathcal{H})$ such that
$0=\skp{\mathcal{H} g}{\psi}_Y=\skp{g}{\mathcal{H}^*\psi}_X$ for all $g\in X$
with $\skp{g}{\phi}_X=0$.
That means $\mathcal{H}^*\psi\in\spann\{\phi\}$, and because
$\phi\not\in\field{R}cal(\mathcal{H}^*)$, we conclude that $\mathcal{H}^*\psi=0$.
Therefore $\psi \in \field{R}cal(\mathcal{H})\cap\field{N}cal(\mathcal{H}^*)$, i.e., $\psi=0$, and we
have shown that
$\{ \mathcal{H} g \;|\; g\in X \,,\; \skp{g}{\phi}_X=0 \}$ is dense in
$\field{R}cal(\mathcal{H})$.
Since $\mathcal{H}\phi/\|\phi\|_X^2\in\field{R}cal(\mathcal{H})$, we can find a sequence
$({\widetilde g}_n)_n \subseteq \{ g \in X \;|\; \skp{g}{\phi}_X=0 \}$ such
that~$\mathcal{H}{\widetilde g}_n\to-\mathcal{H}\phi/\|\phi\|_X^2$.
Setting $\widehat g_n := {\widetilde g}_n+\phi/\|\phi\|_X^2$ this yields
$\skp{\widehat g_n}{\phi}_X=1$ and $\mathcal{H}\widehat g_n\to 0$ as~$n\to\infty$.
Thus, we define
$g_n := {\widetilde\rho}\, \widehat g_n/\|\widehat g_n\|_X \in \mathcal{D}(\mathcal{F})$
to obtain
\begin{equation*}
\Bigl| \frac{\skp{\mathcal{F}(g_n)}{g_n}_X}{\skp{g_n}{\phi}_X^2} \Bigr|
\,=\, \frac{|\skp{\mathbb{T}cal(\mathcal{H} g_n)}{\mathcal{H} g_n}_Y|}
{|\skp{g_n}{\phi}_X|^2}
\,\leq\, \frac{C_* \|\mathcal{H} g_n\|_Y^2}{|\skp{g_n}{\phi}_X|^2}
\,=\, \frac{C_* \|\mathcal{H}\widehat g_n\|_Y^2}{|\skp{\widehat g_n}{\phi}_X|^2}
\to 0 \qquad \text{as } n\to\infty \,,
\end{equation*}
i.e., the infimum in \eqref{eq:InfCharacterization} is zero.
\end{proof}
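In the special case of a linear middle operator $\mathbb{T}cal(y)=y$ (so that $c_*=C_*=1$), the dichotomy in Theorem~\ref{thm:AbstractInfCriterion} can be checked in a small finite dimensional surrogate. The following sketch is purely illustrative; the rank-deficient matrix standing in for $\mathcal{H}$ and the vectors playing the roles of $\psi$ and $\phi$ are ad-hoc choices. The proof's lower bound $c_*|\skp{g}{\phi}_X|^2/\|\psi\|_Y^2$ becomes $1/\|\psi\|^2=1/2$ here, while for $\phi$ outside the range a test vector from the null space of the matrix drives the quotient to zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite-dimensional surrogate: X = C^4, Y = C^3, and a rank-2 matrix in
# the role of Hcal; with Tcal = identity we have Fcal(g) = Hcal^* Hcal g.
H = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0]], dtype=complex)
F = lambda g: H.conj().T @ (H @ g)

psi = np.array([1.0, 1.0, 0.0], dtype=complex)
phi_in = H.conj().T @ psi                                # lies in R(Hcal^*)
phi_out = np.array([0.0, 0.0, 1.0, 0.0], dtype=complex)  # orthogonal to R(Hcal^*)

def ratio(g, phi):
    """|<F(g), g>| / |<g, phi>|^2; scale-invariant here since Tcal is linear."""
    return abs(np.vdot(F(g), g)) / abs(np.vdot(phi, g)) ** 2
```

Sampling random test vectors $g$ confirms that the quotient stays above $1/\|\psi\|^2$ for `phi_in`, whereas it vanishes along the null space for `phi_out`.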
Next we show that the operator $T$ from
Proposition~\ref{pro:NonlinearFactorization} satisfies the assumptions
in Theorem~\ref{thm:AbstractInfCriterion}.
\begin{proposition}
\label{pro:BoundsT}
Suppose that Assumption~\ref{ass:Index2} holds, and let $\delta>0$
be as in Proposition~\ref{pro:Wellposedness}.
Then there are constants $c_*,C_*,C>0$ such that
\begin{subequations}
\label{eq:BoundsT}
\begin{align}
\|T(f)\|_{L^2}D
&\,\leq\, C_* \bigl( 1 + \|f\|_{L^{\infty}}D^\alpha \bigr)
\|f\|_{L^2}D \,, \label{eq:BoundT} \\
|\skp{T(f)}{f}_{L^2}D|
&\,\geq\, c_* \bigl( 1 - C \|f\|_{L^{\infty}}D^\alpha \bigr)
\|f\|_{L^2}D^2 \label{eq:CoercivityT}
\end{align}
\end{subequations}
for all $f\in U_{\delta}$.
\end{proposition}
\begin{proof}
Let $f\in U_{\delta}$.
We first note that \eqref{eq:DefT0} and \eqref{eq:V0EstimateLtwo}
show that
\begin{equation*}
\begin{split}
\|T_0f\|_{L^2}D
\,\leq\, k^2 \|q_0\|_{L^{\infty}}D (1+C_{V_0,2}) \|f\|_{L^2}D \,.
\end{split}
\end{equation*}
Combining this with \eqref{eq:BoundTMinusT0Ltwo} gives
\begin{equation*}
\begin{split}
\|T(f)\|_{L^2}D
&\,\leq\, \|T_0 f\|_{L^2}D + \|T(f)-T_0 f\|_{L^2}D \\
&\,\leq\, k^2 \|q_0\|_{L^{\infty}}D (1+C_{V_0,2}) \|f\|_{L^2}D
+ C \|f\|_{L^{\infty}}D^\alpha \|f\|_{L^2}D\\
&\,\leq\, C_* \bigl( 1 + \|f\|_{L^{\infty}}D^\alpha \bigr)
\|f\|_{L^2}D
\end{split}
\end{equation*}
for some $C_*>0$.
Next let $S_0:{L^2}D\to {L^2}D$ be defined by
\begin{equation*}
S_0\psi
\,:=\, \frac{1}{k^2 q_0}\psi - \Phi_k\ast \psi \,.
\end{equation*}
It has been shown in \cite[Thm.~7.32]{Kir21} that $S_0$ is an
isomorphism with $T_0 =S_0 ^{-1}$, which can be seen
using \eqref{eq:DefT0} and \eqref{eq:DefV0} as follows.
Let $h\in{L^2}D$, then
\begin{equation*}
\begin{split}
S_0 T_0 h
&\,=\, \frac{1}{k^2q_0} T_0 h - \Phi_k\ast (T_0 h)
\,=\, (I+V_0)h - \Phi_k\ast \bigl(k^2q_0(h+V_0h)\bigr) \\
&\,=\, h + \bigl(I - k^2 \Phi_k\ast (q_0\,\cdot\,)\bigr)(V_0h)
- k^2 \Phi_k\ast (q_0h)
\,=\, h \,.
\end{split}
\end{equation*}
If $k^2$ is not an interior transmission eigenvalue then it follows
from \cite[Lmm.~7.35]{Kir21} and the arguments used in the proof of
\cite[Thm.~7.30]{Kir21} that there exists a constant $c_*>0$ such that
\begin{equation*}
| \skp{T_0f}{f}_{L^2}D |
\,=\, | \skp{S_0^{-1}f}{f}_{L^2}D |
\,\geq\, c_* \|f\|_{L^2}D^2
\qquad \text{for all } f\in\ol{\field{R}cal(H)} \,.
\end{equation*}
Accordingly, combining this with \eqref{eq:BoundTMinusT0Ltwo} gives
\begin{equation*}
\begin{split}
\bigl|\skp{T(f)}{f}_{L^2}D\bigr|
&\,\geq\, \bigl|\skp{T_0f}{f}_{L^2}D\bigr|
- \bigl|\skp{T(f)-T_0f}{f}_{L^2}D\bigr| \\
&\,\geq\, \bigl( c_* - C \|f\|_{L^{\infty}}D^\alpha \bigr)
\|f\|_{L^2}D^2
\,=\, c_* \Bigl( 1 - \frac{C}{c_*} \|f\|_{L^{\infty}}D^\alpha \Bigr)
\|f\|_{L^2}D^2 \,,
\end{split}
\end{equation*}
which is \eqref{eq:CoercivityT} after renaming the constant $C$.
\end{proof}
Combining \eqref{eq:BoundsT} with \eqref{eq:HerglotzOperator} and
applying H\"older's inequality gives the following corollary.
\begin{corollary}
\label{cor:BoundsT}
Suppose that Assumption~\ref{ass:Index2} holds.
Then there are constants $c_*,C_*,C>0$ such that
\begin{subequations}
\label{eq:BoundsTH}
\begin{align}
\|T(Hg)\|_{L^2}D
&\,\leq\, C_* \bigl( 1+C\omega_{d-1}^{\alpha/2}\|g\|_{L^2}Sd^\alpha \bigr)
\|Hg\|_{L^2}D \,, \label{eq:BoundTH}\\
|\skp{ T(Hg)}{Hg}_{L^2}D|
&\,\geq\, c_*\bigl( 1-C\omega_{d-1}^{\alpha/2}
\|g\|_{L^2}Sd^\alpha \bigr) \|Hg\|_{L^2}D^2 \label{eq:CoercivityTH}
\end{align}
\end{subequations}
for all $g\in\mathcal{D}(F)$.
\end{corollary}
The following result can be shown analogously to
\cite[Thm.~4.6]{KirGri08}.
\begin{proposition}
\label{pro:RangeCharacterizationHstar}
For any $z\in\field{R}d$ we define the \emph{test function}
$\phi_z\in{L^2}Sd$ by
\begin{equation*}
\phi_z(\widehat{x})
\,:=\, e^{-\mathrm{i} k z\cdot\widehat{x}} \,, \qquad \widehat{x}\in{S^{d-1}} \,.
\end{equation*}
Then $z\in D$ if and only if $\phi_z\in\field{R}cal(H^*)$.
\end{proposition}
Combining the results above, we obtain the main result of this
section.
\begin{theorem}
\label{thm:NonlinearFactorization}
Suppose that Assumption~\ref{ass:Index2} holds, and let $\delta>0$
be as in Proposition~\ref{pro:Wellposedness}.
Let~$C>0$ be the constant in \eqref{eq:CoercivityTH}, and let
\begin{equation*}
\rho
\,:=\, \min \biggl\{
\frac{\delta}{\omega_{d-1}^{1/2}} \,,
\frac{1}{\omega_{d-1}^{1/2}} \Bigl(\frac{1}{2C}\Bigr)^{1/\alpha}
\biggr\} \,.
\end{equation*}
Then, for any $0<{\widetilde\rho}\leq \rho$ and $z\in\field{R}d$,
\begin{equation}
\label{eq:NLFCharacterization}
z\in D
\;\Longleftrightarrow\;
\inf \biggl\{
\Bigl|\frac{\skp{F(g)}{g}_{L^2}Sd}{\skp{g}{\phi_z}_{L^2}Sd^2}\Bigr|
\;\bigg|\; g\in {L^2}Sd \,,\,
\|g\|_{L^2}Sd={\widetilde\rho} \,,\,
\skp{g}{\phi_z}_{L^2}Sd\neq 0 \biggr\}
> 0 \,.
\end{equation}
\end{theorem}
\begin{proof}
By Proposition~\ref{pro:RangeCharacterizationHstar} we know that
$z\in D$ is equivalent to $\phi_z\in\field{R}cal(H^*)$, which, by
Theorem~\ref{thm:AbstractInfCriterion}, is in turn equivalent to
the condition on the right hand side of~\eqref{eq:NLFCharacterization},
provided that the nonlinear far field operator $F$ admits the
factorization $F=H^*T(H)$ for $T$ as in
Theorem~\ref{thm:AbstractInfCriterion}.
This has been shown in Proposition~\ref{pro:NonlinearFactorization}
and Corollary~\ref{cor:BoundsT}.
Note that our choice of $\rho$ guarantees the existence of the far
field operator (see Proposition~\ref{pro:Wellposedness}) as well
as the coercivity estimate in
Corollary~\ref{cor:BoundsT}.
\end{proof}
We will comment on a numerical implementation of this criterion in
Section~\ref{sec:NumericalExamples} below.
For numerical implementations in the linear case we refer, e.g., to
\cite{Kir99,KirGri08}.
\section{The nonlinear monotonicity method}
\label{sec:MonotonicityMethod}
In this section we consider general nonlinear contrast functions
$q\in L^\infty(\field{R}d\times\field{R})$ as in
Section~\ref{sec:FactorizationMethod}, but we waive the assumption on
$k^2$ not being a transmission eigenvalue.
\begin{assumption}
\label{ass:Index3}
Let $D$ be open and bounded with Lipschitz boundary such that
$\field{R}d\setminus\ol{D}$ is connected.
Then the nonlinear contrast function $q\in L^\infty(\field{R}d\times\field{R})$
shall satisfy Assumption~\ref{ass:Index1}, and
\begin{itemize}
\item[(i)] $\supp(q) \subseteq \ol{D}\times\field{R}$,
\item[(ii)] $\supp(q_0) = \ol{D}$ with
$0 < q_{0,\mathrm{min}} \leq q_0 \leq q_{0,\mathrm{max}} < \infty$ a.e.\ in $\ol{D}$ for
some $q_{0,\mathrm{min}},q_{0,\mathrm{max}}>0$.
\end{itemize}
\end{assumption}
Given any open and bounded subset $B\subseteq\field{R}d$, we define the associated
\emph{probing operator} $P_B:{L^2}Sd \to {L^2}Sd$ by
\begin{equation*}
P_Bg \,:=\, k^2 H_B^*H_Bg \,,
\end{equation*}
where $H_B:L^2({S^{d-1}})\to L^2(B)$ and $H_B^*:L^2(B)\to L^2({S^{d-1}})$ are
given as in \eqref{eq:HerglotzOperator} and \eqref{eq:HerglotzAdjoint}
with $D$ replaced by $B$.
Accordingly, we find that for all $g\in{L^2}Sd$,
\begin{equation}
\label{eq:IdentityTB}
\skp{P_Bg}{g}_{L^2}Sd
\,=\, k^2\int_B |H g|^2 \, \dif x
\,=\, k^2 \int_B \biggl|
\int_{S^{d-1}} e^{\mathrm{i} k \theta\cdot x} g(\theta) \, \dif s(\theta) \biggr|^2 \, \dif x \,.
\end{equation}
The operator~$P_B$ is bounded, compact, and self-adjoint.
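A discrete surrogate of $P_B$ makes the identity \eqref{eq:IdentityTB} and the self-adjointness explicit. In the following sketch (for $d=2$; the wave number, the quadrature rule on $S^1$, and the probing patch $B$ are ad-hoc choices) the Herglotz operator is discretized by the trapezoidal rule on the sphere and a midpoint rule on a small square patch, and the adjoint is taken with respect to the corresponding weighted inner products.

```python
import numpy as np

k = 2.0
N = 64                                         # quadrature points on S^1
theta = 2*np.pi*np.arange(N)/N
w = 2*np.pi/N                                  # quadrature weight on S^1
dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)

h = 0.05                                       # cell size in the patch B
gx, gy = np.meshgrid(np.arange(-4, 5)*h + 0.3, np.arange(-4, 5)*h + 0.2)
pts = np.stack([gx.ravel(), gy.ravel()], axis=1)

A = np.exp(1j*k*(pts @ dirs.T))                # A[x, j] = e^{ik theta_j . x}
HB = w*A                                       # discrete H_B : L^2(S^1) -> L^2(B)
HBstar = h**2*A.conj().T                       # adjoint w.r.t. the weighted products
PB = k**2*(HBstar @ HB)                        # discrete probing operator P_B

g = np.random.default_rng(1).standard_normal(N) + 0j
lhs = (w*np.vdot(g, PB @ g)).real              # <P_B g, g>_{L^2(S^1)}
rhs = k**2*h**2*np.sum(np.abs(HB @ g)**2)      # k^2 \int_B |Hg|^2 dx
```

In this discrete model the matrix `PB` is Hermitian and the two sides of \eqref{eq:IdentityTB} agree to machine precision by construction.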
\begin{theorem}
\label{thm:NonlinearMonotonicity}
Suppose that Assumption~\ref{ass:Index3} holds, and let $\delta>0$
be as in Proposition~\ref{pro:Wellposedness}.
Let~$B\subseteq\field{R}d$ be open and bounded, and let
\begin{equation*}
\rho
\,:=\, \min \biggl\{
\frac{\delta}{\omega_{d-1}^{1/2}} \,,
\frac{1}{\omega_{d-1}^{1/2}}
\Bigl( \frac{k^2 q_{0,\mathrm{min}}}{2\,C} \Bigr)^{\frac{1}{\alpha}}
\biggr\} \,,
\end{equation*}
where $C>0$ is the constant from \eqref{eq:BoundTMinusT0Ltwo}.
For any $0<{\widetilde\rho}\leq\rho$ the following characterization
of $D$ holds.
\begin{itemize}
\item[(a)] If $B\subseteq D$, then there exists a finite dimensional
subspace $\mathbb{V}cal\subseteq{L^2}Sd$ such that, for
all~$\beta\leq\frac{q_{0,\mathrm{min}}}{2}$,
\begin{equation*}
\beta \skp{P_Bg}{g}_{L^2}Sd
\,\leq\, \real\bigl(\skp{F(g)}{g}_{L^2}Sd\bigr)
\quad\text{for all } g\in\mathbb{V}calperp \text{ with }
\|g\|_{L^2}Sd = {\widetilde\rho} \,.
\end{equation*}
\item[(b)] If $B\not\subseteq D$, then there is no finite dimensional
subspace $\mathbb{V}cal\subseteq{L^2}Sd$ and no $\beta>0$ such that
\begin{equation*}
\beta \skp{P_Bg}{g}_{L^2}Sd
\,\leq\, \real\bigl(\skp{F(g)}{g}_{L^2}Sd\bigr)
\quad\text{for all } g\in\mathbb{V}calperp \text{ with }
\|g\|_{L^2}Sd = {\widetilde\rho} \,.
\end{equation*}
\end{itemize}
\end{theorem}
\begin{proof}
We consider the factorization of the far field operator
$F=H^*T(H)$ as in \eqref{eq:Factorization}.
Accordingly, the linear far field operator corresponding to the
contrast function $q_0$ satisfies $F_0 = H^* T_0 H$, and we
obtain from \eqref{eq:BoundTMinusT0Ltwo} that, for all
$g\in\mathcal{D}(F)$,
\begin{equation*}
\begin{split}
\real\biggl(\int_{S^{d-1}} g\, \ol{F(g)}\, \dif s\biggr)
&\,=\, \real\biggl(\int_{S^{d-1}} g\, \ol{F_0g}\, \dif s\biggr)
+ \real\biggl(\int_{S^{d-1}} g\, \ol{(F-F_0)(g)}\, \dif s\biggr) \\
&\,\geq\, \real\biggl(\int_{S^{d-1}} g\, \ol{F_0g}\, \dif s\biggr)
- C \|Hg\|_{L^{\infty}}D^\alpha \|Hg\|_{L^2}D^2 \,.
\end{split}
\end{equation*}
Applying \cite[Thm.~3.2]{GriHar18} with $q_1=0$ and $q_2=q$ we
find that there exists a finite dimensional subspace
$\mathbb{V}cal\subseteq{L^2}Sd$ such that, for all $g\in\mathcal{D}(F)\cap\mathbb{V}calperp$,
\begin{equation*}
\begin{split}
\real\biggl(\int_{S^{d-1}} g\, \ol{F(g)}\, \dif s\biggr)
&\,\geq\, k^2 \int_D q_0 |Hg|^2 \, \dif x
- C \|Hg\|_{L^{\infty}}D^\alpha \int_D |Hg|^2 \, \dif x \\
&\,\geq\, k^2 \Bigl( q_{0,\mathrm{min}} - \frac{C\omega_{d-1}^{\alpha/2}}{k^2}
\|g\|_{L^2}Sd^\alpha \Bigr)
\int_D |Hg|^2 \, \dif x \,.
\end{split}
\end{equation*}
If in addition $\|g\|_{L^2}Sd = {\widetilde\rho}\leq\rho$, then the
definition of $\rho$ implies that
\begin{equation*}
\real\biggl(\int_{S^{d-1}} g\, \ol{F(g)}\, \dif s\biggr)
\,\geq\, k^2 \frac{q_{0,\mathrm{min}}}{2} \int_D |Hg|^2 \, \dif x \,.
\end{equation*}
Moreover, if $B\subseteq D$ and $\beta\leq \frac{q_{0,\mathrm{min}}}{2}$, then
\begin{equation*}
\beta \int_{S^{d-1}} g \ol{P_Bg} \, \dif s
\,=\, k^2 \beta \int_B |Hg|^2 \, \dif x
\,\leq\, k^2 \frac{q_{0,\mathrm{min}}}{2} \int_D |Hg|^2 \, \dif x \,,
\end{equation*}
which shows part (a).
We prove part (b) by contradiction.
Let $B\not\subseteq D$, $\beta>0$, and assume that
\begin{equation}
\label{eq:ProofShape1-3}
\beta \skp{P_Bg}{g}_{L^2}Sd
\,\leq\, \real\bigl(\skp{F(g)}{g}_{L^2}Sd\bigr)
\qquad\text{for all } g\in\mathbb{V}cal_1^\perp
\text{ with } \|g\|_{L^2}Sd={\widetilde\rho}
\end{equation}
for some $0<{\widetilde\rho}\leq\rho$ and a finite dimensional
subspace $\mathbb{V}cal_1\subseteq{L^2}Sd$.
Using \eqref{eq:BoundTMinusT0Ltwo} we find that
\begin{equation*}
\begin{split}
\real\biggl(\int_{S^{d-1}} g\, \ol{F(g)}\, \dif s\biggr)
&\,=\, \real\biggl(\int_{S^{d-1}} g\, \ol{F_0g}\, \dif s\biggr)
+ \real\biggl(\int_{S^{d-1}} g\, \ol{(F-F_0)(g)}\, \dif s\biggr) \\
&\,\leq\, \real\biggl(\int_{S^{d-1}} g\, \ol{F_0g}\, \dif s\biggr)
+ C \|Hg\|_{L^{\infty}}D^\alpha \|Hg\|_{L^2}D^2 \,.
\end{split}
\end{equation*}
Applying the monotonicity relation (3.3) in
\cite[Cor.~3.4]{GriHar18} with $q_1=0$ and $q_2=q$, shows that
there exists a finite dimensional subspace $\mathbb{V}cal_2\subseteq{L^2}Sd$ such
that
\begin{equation}
\label{eq:ProofShape1-5}
\real\biggl(\int_{S^{d-1}} g \, \ol{F_0g} \, \dif s\biggr)
\,\leq\,
k^2 \int_D q_0 |V_0Hg|^2\, \dif x
\qquad\text{for all } g\in\mathbb{V}cal_2^\perp \,.
\end{equation}
Combining \eqref{eq:ProofShape1-3}--\eqref{eq:ProofShape1-5}, we
obtain for $\mathbb{V}caltilde := \mathbb{V}cal_1 + \mathbb{V}cal_2$ that, for
all $g\in\mathbb{V}caltilde^\perp$ with $\|g\|_{L^2}Sd = {\widetilde\rho}$,
\begin{equation*}
\begin{split}
k^2 \beta \|Hg\|_{L^2(B)}^2
&\,\leq\, k^2 \int_D q_0 |V_0Hg|^2\, \dif x
+ C \|H g\|_{L^{\infty}}D^\alpha \|H g\|_{L^2}D^2\\
&\,\leq\, k^2 q_{0,\mathrm{max}} \int_D |V_0Hg|^2\, \dif x
+ C \|H g\|_{L^{\infty}}D^\alpha \|H g\|_{L^2}D^2 \,.
\end{split}
\end{equation*}
Applying \cite[Thm.~4.5]{GriHar18} with $q_1=0$ and $q_2=q$, this
implies that there exists a constant~$\field{C}tilde>0$ such that, for all
$g\in\mathbb{V}caltilde^\perp$ with $\|g\|_{L^2}Sd={\widetilde\rho}$,
\begin{equation}
\label{eq:ProofShape1-Contradiction}
\begin{split}
k^2 \beta \|Hg\|_{L^2(B)}^2
&\,\leq\, \bigl( \field{C}tilde k^2 q_{0,\mathrm{max}}
+ C \|Hg\|_{L^{\infty}}D^\alpha \bigr)
\|Hg\|_{L^2}D^2 \\
&\,\leq\, \bigl( \field{C}tilde k^2 q_{0,\mathrm{max}}
+ C \omega_{d-1}^{\alpha/2} \|g\|_{L^2}Sd^\alpha \bigr)
\|Hg\|_{L^2}D^2 \,.
\end{split}
\end{equation}
In the following we denote by $\mathcal{P}_\mathbb{V}cal:{L^2}Sd\to{L^2}Sd$ the
orthogonal projection onto $\mathbb{V}cal$.
Using \cite[Lmm.~4.4]{GriHar18} we obtain as in the proof of
\cite[Thm.~4.1]{GriHar18} a sequence
$(\widetilde{g}_m)_{m\in\field{N}}\subseteq{L^2}Sd$ such that
$\|\widetilde{g}_m\|_{L^2}Sd = \rho/2$, and
\begin{equation*}
\|H\widetilde{g}_m\|_{L^2(B)}
\,\geq\, m \bigl( \|H\widetilde{g}_m\|_{L^2}D
+ \|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd \bigr) \,, \qquad m\in\field{N} \,.
\end{equation*}
Therefore, $g_m:=\widetilde{g}_m-\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\in\mathbb{V}calperp$
and by rescaling $\widetilde{g}_m$ we can assume without loss of
generality that $\|g_m\|_{L^2}Sd = {\widetilde\rho} \leq \rho$.
Accordingly, if $\|H \|\leq 1$, then
\begin{equation*}
\begin{split}
\|Hg_m\|_{L^2(B)}
&\,\geq\, \|H\widetilde{g}_m\|_{L^2(B)}
- \|H_B\|\|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd\\
&\,>\, m \|H\widetilde{g}_m\|_{L^2}D
+ m \|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd
- \|H_B\|\|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd\\
&\,\geq\, m \|Hg_m\|_{L^2}D
+ \bigl( m (1 - \|H \|) - \|H_B\| \bigr)
\|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd \\
&\,\geq\, m \|Hg_m\|_{L^2}D
\end{split}
\end{equation*}
for all $m\in\field{N}$ such that $m \geq \|H_B\| / (1-\|H \|)$.
On the other hand, if $\|H \|>1$, then
\begin{equation*}
\begin{split}
\|Hg_m\|_{L^2(B)}
&\,\geq\, \|H\widetilde{g}_m\|_{L^2(B)}
- \|H_B\|\|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd\\
&\,>\, m \|H\widetilde{g}_m\|_{L^2}D
+ m \|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd
- \|H_B\|\|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd\\
&\,\geq\, \frac{m}{2\|H \|} \|H\widetilde{g}_m\|_{L^2}D
+ m \|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd
- \|H_B\|\|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd\\
&\,\geq\, \frac{m}{2\|H \|} \|Hg_m\|_{L^2}D
+ \Bigl( \frac{m}{2} - \|H_B\| \Bigr)
\|\mathcal{P}_\mathbb{V}cal\widetilde{g}_m\|_{L^2}Sd \\
&\,\geq\, \frac{m}{2\|H\|} \|Hg_m\|_{L^2}D
\end{split}
\end{equation*}
for all $m\in\field{N}$ with $m\geq 2\|H_B\|$.
This contradicts \eqref{eq:ProofShape1-Contradiction}, and we have
shown part (b).
\end{proof}
\begin{remark}[Numerical implementation of
Theorem~\ref{thm:NonlinearMonotonicity}]
\label{rem:FMvsMM}
Considering for any $z\in\field{R}d$ a probing domain $B=B_\varepsilon(z)$ that
is a ball of radius $\varepsilon>0$ around $z$, the identity
\eqref{eq:IdentityTB} gives
\begin{equation*}
\begin{split}
\skp{P_Bg}{g}_{L^2}Sd
&\,=\, k^2 \int_{B_\varepsilon(z)} \biggl|
\int_{S^{d-1}} e^{\mathrm{i} k \theta\cdot z} e^{\mathrm{i} k \theta\cdot (x-z)}
g(\theta) \, \dif s(\theta) \biggr|^2 \, \dif x\\
&\,=\, k^2 |B_\varepsilon(z)|\, \Bigl|
\int_{S^{d-1}} e^{\mathrm{i} k \theta\cdot z} g(\theta) \, \dif s(\theta) \Bigr|^2
+ O\bigl(k^3\varepsilon|B_\varepsilon(z)|\|g\|_{L^2}Sd^2\bigr) \\
&\,=\, k^2 |B_\varepsilon(z)|\, |\skp{g}{\phi_z}_{L^2}Sd|^2
+ O\bigl(k^3\varepsilon|B_\varepsilon(z)|\|g\|_{L^2}Sd^2\bigr) \,,
\end{split}
\end{equation*}
uniformly with respect to $z\in\field{R}d$.
Here we used that $|e^{\mathrm{i} t}-1|\leq |t|$ for $t\in\field{R}$.
If $z\in D$, then part (a) of
Theorem~\ref{thm:NonlinearMonotonicity} implies that there is a
finite dimensional subspace $\mathbb{V}cal\subseteq{L^2}Sd$ such that for all
$\beta\leq\frac{q_{0,\mathrm{min}}}{2}$ and for all
$g\in\mathbb{V}calperp$ with~$\|g\|_{L^2}Sd = {\widetilde\rho}$,
\begin{equation}
\label{eq:FMvsMM1}
\frac{\real\bigl(\skp{F(g)}{g}_{L^2}Sd\bigr)}
{\beta \skp{P_Bg}{g}_{L^2}Sd}
\,=\, \frac{\real\bigl(\skp{F(g)}{g}_{L^2}Sd\bigr)}
{\beta k^2 |B_\varepsilon(z)| |\skp{\phi_z}{g}_{L^2}Sd|^2
+ O\bigl(k^3\varepsilon|B_\varepsilon(z)|\|g\|_{L^2}Sd^2\bigr)}
\,\geq\, 1 \,,
\end{equation}
i.e.,
\begin{equation}
\label{eq:FMvsMM2}
\frac{\real\bigl(\skp{F(g)}{g}_{L^2}Sd\bigr)}
{|\skp{\phi_z}{g}_{L^2}Sd|^2 + O\bigl(k\varepsilon\|g\|_{L^2}Sd^2\bigr)}
\,\geq\, k^2\beta |B_\varepsilon| \,,
\end{equation}
as $\varepsilon\to 0$.
This shows that for any fixed $g\in\mathbb{V}calperp$ with
$\|g\|_{L^2}Sd = {\widetilde\rho}$ and $\skp{g}{\phi_z}_{L^2}Sd\neq 0$ we
can choose $\varepsilon>0$ sufficiently small such that
\begin{equation*}
\frac{\real\bigl(\skp{F(g)}{g}_{L^2}Sd\bigr)}
{|\skp{\phi_z}{g}_{L^2}Sd|^2}
\,\geq\, \frac{k^2\beta |B_\varepsilon|}{2} \,.
\end{equation*}
Similarly, if $z\notin D$, then
part (b) of Theorem~\ref{thm:NonlinearMonotonicity} says that there
is no finite dimensional subspace $\mathbb{W}cal\subseteq{L^2}Sd$ and no $\beta>0$
such that \eqref{eq:FMvsMM1}--\eqref{eq:FMvsMM2} hold for all
$g\in\mathbb{W}calperp$ with $\|g\|_{L^2}Sd = {\widetilde\rho}$ as $\varepsilon\to0$.
Assuming that $\phi_z\notin\mathbb{V}cal$, this says that
\begin{equation}
\label{eq:MonotonicityInfCriterion}
z\in D
\;\Longleftrightarrow\;
\inf \biggl\{
\frac{\real\bigl(\skp{F(g)}{g}_{L^2}Sd\bigr)}
{|\skp{\phi_z}{g}_{L^2}Sd|^2}
\;\bigg|\; g\in \mathbb{V}calperp \,,\,
\|g\|_{L^2}Sd={\widetilde\rho} \,,\,
\skp{g}{\phi_z}_{L^2}Sd\neq 0 \biggr\}
\,>\, 0 \,.
\end{equation}
This is closely related to the inf-criterion from the nonlinear
factorization method in \eqref{eq:NLFCharacterization}.
For the monotonicity criterion we have to restrict the test
functions $g$ to the orthogonal complement of the finite
dimensional subspace $\mathbb{V}cal$, and we assumed that
$\phi_z\notin\mathbb{V}cal$ in the derivation of
\eqref{eq:MonotonicityInfCriterion}, while for the factorization
method we had to assume that $k^2$ is such that the homogeneous
linear transmission eigenvalue problem has no nontrivial
solution.~
$\lozenge$
\end{remark}
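The leading-order expansion of the probing term used in the remark can also be checked numerically. The following sketch (for $d=2$; the values of $k$, $\varepsilon$, $z$, and the density $g$ are ad-hoc choices) compares a polar quadrature of $k^2\int_{B_\varepsilon(z)}|Hg|^2\,\dif x$ with the leading term $k^2|B_\varepsilon(z)|\,|\skp{g}{\phi_z}|^2$, where $\skp{g}{\phi_z}=(Hg)(z)$.

```python
import numpy as np

k, eps = 2.0, 0.01
z = np.array([0.3, 0.2])
N = 128
theta = 2*np.pi*np.arange(N)/N
w = 2*np.pi/N
dirs = np.stack([np.cos(theta), np.sin(theta)], axis=1)
g = np.ones(N, dtype=complex)                 # fixed density on S^1

def Hg(x):
    """Herglotz wave function (Hg)(x) by the trapezoidal rule on S^1."""
    return w*np.sum(np.exp(1j*k*(dirs @ x))*g)

inner = Hg(z)                                 # <g, phi_z>_{L^2(S^1)} = (Hg)(z)

# polar midpoint quadrature of k^2 \int_{B_eps(z)} |Hg|^2 dx
M, L = 20, 40
lhs = 0.0
for ri in (np.arange(M) + 0.5)*eps/M:
    for pj in 2*np.pi*np.arange(L)/L:
        x = z + ri*np.array([np.cos(pj), np.sin(pj)])
        lhs += abs(Hg(x))**2*ri
lhs *= k**2*(eps/M)*(2*np.pi/L)

rhs = k**2*np.pi*eps**2*abs(inner)**2         # leading-order term
```

For the parameters above both quantities agree up to a relative error of order $k\varepsilon$, in line with the $O(k^3\varepsilon|B_\varepsilon(z)|)$ remainder.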
In Section~\ref{sec:NumericalExamples}, we will use
\eqref{eq:MonotonicityInfCriterion} to implement the nonlinear
monotonicity based reconstruction method.
However, since the finite dimensional subspace $\mathbb{V}cal$ that
has to be excluded is a priori unknown, we will neglect this
constraint.
For a numerical implementation in the linear case we refer
to~\cite{GriHar18}.
\section{Numerical examples}
\label{sec:NumericalExamples}
In this section we comment on a numerical implementation of the shape
characterizations in Theorems~\ref{thm:NonlinearFactorization} and
\ref{thm:NonlinearMonotonicity}.
We consider the two-dimensional case only, i.e., $d=2$.
Let $D\subseteq\field{R}two$ be open and bounded with Lipschitz boundary such that
$D\subseteq{B_R(0)}$ for some $R>0$ sufficiently large and $\field{R}two\setminus\ol{D}$
is connected.
We consider a third-order Kerr-type nonlinear material law that is
given by
\begin{equation}
\label{eq:ExaContrast}
q(x,|z|)
\,:=\, q_0(x) + q_1(x)|z|^2 \,,
\qquad x\in\field{R}two \,,\; z\in\field{C} \,,
\end{equation}
where $q_0, q_1 \in {L^{\infty}}(\field{R}two)$ with support in $\ol{D}$ and
$\essinf q_0>-1$.
Accordingly, the scattering problem~\eqref{eq:ScatteringProblem}
consists in determining $u=u^i+u^s$ such that
\begin{equation*}
\Delta u + k^2 \bigl(1 + q_0 + q_1|u|^2\bigr) u
\,=\, 0 \qquad \text{in } \field{R}two \,,
\end{equation*}
and $u^s$ satisfies the Sommerfeld radiation condition.
This fits into the framework of the previous sections.
We evaluate approximate solutions of this nonlinear scattering problem
using a fixed point iteration for the nonlinear Lippmann-Schwinger
equation
\begin{equation*}
u^s(x)
\,=\, k^2 \int_D \Phi_k(x-y)
q(y,|u^i(y)+u^s(y)|) (u^i(y)+u^s(y)) \, \dif y \,,
\qquad x\in [-R,R]^2 \,,
\end{equation*}
as in the proof of Proposition~\ref{pro:Wellposedness}.
Denoting the solution to the linear problem by
\begin{equation}
\label{eq:ExaFPILinearProblem}
u^s_0
\,:=\, \bigl(I-k^2\Phi_k \ast (q_0\,\cdot\,)\bigr)^{-1}
\bigl( k^2\Phi_k\ast (q_0u^i) \bigr)
\qquad \text{on } [-R,R]^2 \,,
\end{equation}
the fixed point iteration determines the difference $w:=u^s-u^s_0$.
Starting with the initial guess~$w_0=0$ on $[-R,R]^2$ we evaluate, for
$\ell=0,1,2,\ldots$,
\begin{equation}
\label{eq:ExaFPIUpdate}
w_{\ell+1}
\,:=\, \bigl(I-k^2\Phi_k \ast (q_0\,\cdot\,)\bigr)^{-1}
\Bigl(k^2\Phi_k\ast \bigl( q_1|w_\ell+u^s_0+u^i|^2(w_\ell+u^s_0+u^i)
\bigr) \Bigr)
\qquad \text{on } [-R,R]^2 \,.
\end{equation}
We have seen in the proof of Proposition~\ref{pro:Wellposedness}
that this fixed point iteration converges whenever the product
$\|q_1\|_{L^{\infty}(D)} \|u^i\|_{L^{\infty}(D)}$ is sufficiently small (see
Remark~\ref{rem:Wellposedness}).
In our numerical example below we stop the fixed point iteration when
\begin{equation}
\label{eq:FPITolerance}
\frac{\|w_{\ell+1}-w_{\ell}\|_{{L^{\infty}}([-R,R]^2)}}
{\|w_{\ell+1}\|_{{L^{\infty}}([-R,R]^2)}}
\,<\, \varepsilon
\end{equation}
for some tolerance $\varepsilon>0$, and we denote the final iterate by
$w_\varepsilon \approx w$.
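The stopping rule \eqref{eq:FPITolerance} can be illustrated by a short Python sketch. This is not our actual implementation: the toy update below is a simple contraction standing in for the discretized iteration \eqref{eq:ExaFPIUpdate}, and all names are illustrative.

```python
def fixed_point(update, w0, eps, max_iter=100):
    """Iterate w_{l+1} = update(w_l) until the relative sup-norm change
    ||w_{l+1} - w_l|| / ||w_{l+1}|| drops below the tolerance eps."""
    w = w0
    for _ in range(max_iter):
        w_next = update(w)
        diff = max(abs(a - b) for a, b in zip(w_next, w))
        size = max(abs(a) for a in w_next)
        w = w_next
        if size > 0 and diff / size < eps:
            break
    return w

# Toy contraction standing in for the linearized update; its fixed
# point is the constant vector with entries 2.
w_eps = fixed_point(lambda w: [0.5 * x + 1.0 for x in w], [0.0, 0.0], 1e-10)
```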
Accordingly, an approximation for the far field pattern~$u^\infty$ can
be evaluated using Proposition~\ref{pro:Farfield} by
\begin{equation*}
u^\infty_\varepsilon(\widehat{x})
\,=\, k^2 \int_D \bigl( q_0(y)
+ q_1(y)|w_\varepsilon(y)+u^s_0(y)+u^i(y)|^2 \bigr)
\bigl(w_\varepsilon(y)+u^s_0(y)+u^i(y)\bigr)
e^{-\mathrm{i} k \widehat{x}\cdot y} \, \dif y \,,
\quad \widehat{x}\in{S^1} \,.
\end{equation*}
In \eqref{eq:ExaFPILinearProblem} and in each step of the fixed point
iteration \eqref{eq:ExaFPIUpdate} we have to solve a linear
Lippmann-Schwinger integral equation.
For this purpose we use the simple cubature method from
\cite[Sec.~2]{Vai00}.
Next we turn to the inverse scattering problem.
We consider an equidistant grid of points
\begin{equation}
\label{eq:Grid}
\triangle
\,=\, \{ z_{ij} = (ih,jh) \;|\;
-J \leq i,j\leq J\} \,\subseteq\, [-R,R]^2
\end{equation}
with step size $h=R/J$ in the region of interest $[-R,R]^2$.
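For concreteness, the grid \eqref{eq:Grid} can be assembled as in the following plain Python sketch (illustrative only; our computations were not performed with this code):

```python
def sampling_grid(R, J):
    """Equidistant grid z_ij = (i*h, j*h) for -J <= i, j <= J with
    step size h = R/J, covering the region of interest [-R, R]^2."""
    h = R / J
    return [(i * h, j * h) for i in range(-J, J + 1) for j in range(-J, J + 1)]

grid = sampling_grid(5.0, 20)  # the parameters used in the example below
```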
For each $z_{ij}\in \triangle$ we approximate a solution to the
minimization problem
\begin{equation}
\label{eq:MinimizationProblemFM}
\text{Minimize } \;
\biggl|\frac{\skp{F(g)}{g}_{L^2(S^1)}}
{\skp{g}{\phi_{z_{ij}}}_{L^2(S^1)}^2}\biggr| \;
\text{ subject to } \;
\|g\|_{L^2(S^1)}={\widetilde\rho} \;
\text{ and } \; \skp{g}{\phi_{z_{ij}}}_{L^2(S^1)}\neq 0
\end{equation}
for the nonlinear factorization method (see
Theorem~\ref{thm:NonlinearFactorization}), and
\begin{equation}
\label{eq:MinimizationProblemMM}
\text{Minimize } \;
\frac{\real\bigl(\skp{F(g)}{g}_{L^2(S^1)}\bigr)}
{|\skp{\phi_{z_{ij}}}{g}_{L^2(S^1)}|^2} \;
\text{ subject to } \;
\|g\|_{L^2(S^1)}={\widetilde\rho} \;
\text{ and } \; \skp{g}{\phi_{z_{ij}}}_{L^2(S^1)}\neq 0
\end{equation}
for the nonlinear monotonicity method (see
Theorem~\ref{thm:NonlinearMonotonicity} and
Remark~\ref{rem:FMvsMM}).
We use a composite trapezoid rule on an equidistant grid of points
\begin{equation}
\label{eq:ObservationGrid}
\{(\cos\phi_m,\sin\phi_m) \;|\;
\phi_m=2\pi m/M \,,\; m=0,\ldots,M-1 \} \subseteq {S^1} \,, \qquad M\in\field{N} \,,
\end{equation}
to approximate the inner products in \eqref{eq:MinimizationProblemFM}
and \eqref{eq:MinimizationProblemMM}, and we discretize the densities
$g\in{L^2}({S^1})$ using a truncated Fourier series expansion
\begin{equation}
\label{eq:DFTg}
g(\cos(t),\sin(t))
\,=\, \sum_{n = -{N}/{2}}^{{N}/{2}-1} \widehat g_n
\frac{1}{\sqrt{2\pi}} e^{\mathrm{i} n t} \,, \qquad t\in[0,2\pi) \,,\;
N/2\in\field{N} \,.
\end{equation}
Accordingly, we minimize \eqref{eq:MinimizationProblemFM} and
\eqref{eq:MinimizationProblemMM} with respect to the finite
dimensional vector of Fourier coefficients
$[\widehat g_{-N/2},\ldots,\widehat g_{N/2-1}]^{\top}\in \field{C}^{N}$.
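The two discretization ingredients can be sketched in Python: evaluating a density from its truncated Fourier coefficients as in \eqref{eq:DFTg}, and approximating $L^2$ inner products on the unit circle by the composite trapezoid rule, which for $2\pi$-periodic integrands coincides with the rectangle rule. This is an illustrative sketch only, with hypothetical function names:

```python
import cmath
import math

def g_from_coeffs(ghat, t):
    """Evaluate the truncated Fourier series at angle t; ghat lists the
    coefficients for n = -N/2, ..., N/2 - 1 (N = len(ghat) even)."""
    N = len(ghat)
    return sum(ghat[n + N // 2] * cmath.exp(1j * n * t) / math.sqrt(2 * math.pi)
               for n in range(-N // 2, N // 2))

def inner_product_S1(u, v, M):
    """Trapezoid-rule approximation of the L^2 inner product of two
    2*pi-periodic functions sampled at M equidistant angles."""
    h = 2 * math.pi / M
    return h * sum(u(h * m) * v(h * m).conjugate() for m in range(M))

# A single active Fourier mode has unit L^2(S^1) norm.
g = lambda t: g_from_coeffs([0.0, 0.0, 1.0, 0.0], t)
norm_sq = inner_product_S1(g, g, 64)
```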
From our theoretical results in
Theorems~\ref{thm:NonlinearFactorization} and
\ref{thm:NonlinearMonotonicity} (see also Remark~\ref{rem:FMvsMM}),
we expect the values of the minima in
\eqref{eq:MinimizationProblemFM} and \eqref{eq:MinimizationProblemMM}
to be close to zero when $z \in \field{R}two\setminus\ol{D}$, and
significantly larger than zero when $z\in D$.
At each grid point $z_{ij}\in\triangle$ we approximate solutions of
\eqref{eq:MinimizationProblemFM} and \eqref{eq:MinimizationProblemMM}
using the interior point method provided by
Matlab's~\texttt{fmincon}.
To find an appropriate initial guess $g_{ij}^{(0)}$ at each sampling
point $z_{ij} \in \triangle$, we first perform a preliminary global
search and evaluate
\begin{equation}
\label{eq:StGuessFM}
g_{ij}^{(0)}
\,:=\, \argmin_{p,\ell,z}
\biggl|\frac{\skp{F(g_{p,\ell,z})}{g_{p,\ell,z}}_{L^2(S^1)}}
{\skp{g_{p,\ell,z}}{\phi_{z_{ij}}}_{L^2(S^1)}^2}\biggr|
\end{equation}
for the optimization problem \eqref{eq:MinimizationProblemFM} and
\begin{equation}
\label{eq:StGuessMM}
g_{ij}^{(0)}
\,:=\, \argmin_{p,\ell,z}
\frac{\real\bigl(\skp{F(g_{p,\ell,z})}{g_{p,\ell,z}}_{L^2(S^1)}\bigr)}
{|\skp{\phi_{z_{ij}}}{g_{p,\ell,z}}_{L^2(S^1)}|^2}
\end{equation}
for the optimization problem \eqref{eq:MinimizationProblemMM}.
Here, $g_{p,\ell,z}\in{L^2}({S^1})$ is given by
\begin{equation*}
g_{p,\ell,z}(\cos(t), \sin(t))
\,=\, {\widetilde\rho}\, \mathrm{i}^p
\frac{1}{\sqrt{2\pi}}e^{\mathrm{i} \ell t}
e^{-\mathrm{i} k (z_1\cos(t) + z_2\sin(t))} \,,
\end{equation*}
and the minimization in \eqref{eq:StGuessFM} and \eqref{eq:StGuessMM}
is over $p=0,1$, $\ell = -N/2,\ldots, N/2-1$, and
${z=(z_1,z_2)\in\triangle}$.
The densities $g_{p,\ell,z}$ generate shifted Herglotz incident fields
$(Hg_\ell)(x-z)$, where~$g_\ell$ has just one active Fourier
mode.
For each $z_{ij}\in\triangle$ we denote the values of the final result
of the optimization by $\mathbb{I}fac(z_{ij})$ for
\eqref{eq:MinimizationProblemFM} and $\mathbb{I}mon(z_{ij})$ for
\eqref{eq:MinimizationProblemMM}.
Color coded plots of these indicator functions should give a
reconstruction of the support $D$ of the scattering object.
\begin{example}
\begin{figure}
\caption{Nonlinear factorization method:
Exact shape of the scattering object (left),
initial guess $\mathbb{I}fac^{(0)}$ (center),
and reconstruction $\mathbb{I}fac$ (right).}
\label{fig:Exa1-1}
\end{figure}
\begin{figure}
\caption{Nonlinear monotonicity method:
Exact shape of the scattering object (left),
initial guess $\mathbb{I}mon^{(0)}$ (center),
and reconstruction $\mathbb{I}mon$ (right).}
\label{fig:Exa1-2}
\end{figure}
We consider a kite shaped scattering object $D$ as shown in the left
plot in Figure~\ref{fig:Exa1-1} and in the left plot in
Figure~\ref{fig:Exa1-2}.
The coefficients in the Kerr-type nonlinear material law in
\eqref{eq:ExaContrast} are determined to be
\begin{equation*}
q_0 \,=\,
\begin{cases}
1.16 &\text{in } D\,,\\
0 &\text{in } \field{R}two\setminus \overline{D} \,,
\end{cases}
\qquad\text{and}\qquad
q_1 \,=\,
\begin{cases}
2.5\cdot 10^{-22} &\text{in } D\,,\\
0 &\text{in } \field{R}two\setminus \overline{D} \,.
\end{cases}
\end{equation*}
These coefficients correspond to fused silica (see Table~4.1.2 on
p.~212 in \cite{Boy08} with $q_0 = n_0^2-1$ and $q_1 = \chi^{(3)}$).
For the wave number in the exterior we choose $k=1$, and the norm
constraint in \eqref{eq:MinimizationProblemFM} and
\eqref{eq:MinimizationProblemMM} is set to
${\widetilde\rho} = 3.0\times 10^{10}$.
A simple rescaling argument shows that we can equivalently work with
\begin{equation*}
u_{\mathrm{resc}} \,:=\, u/\tau \,,\quad
q_{1,\mathrm{resc}} \,:=\, \tau^2 q_1 \,,\quad
\text{and} \quad
{\widetilde\rho}_{\mathrm{resc}} \,:=\, {\widetilde\rho}/\tau \qquad
\text{for any } \tau>0 \,.
\end{equation*}
In the numerical implementation we choose
$\tau = 3.0\times 10^{10}$, i.e.,
$q_{1,\mathrm{resc}} = 0.225 \, \mathbf{1}_{D}$
and~${\widetilde\rho}_{\mathrm{resc}}=1$.
We use a sampling grid $\triangle$ as in \eqref{eq:Grid} with
$R=5$ and $J=20$, i.e., the step size in each direction
is~$h=0.25$.
Furthermore, we choose $M=256$ quadrature nodes in
\eqref{eq:ObservationGrid}, $N=16$ Fourier modes in \eqref{eq:DFTg},
and for the tolerance in \eqref{eq:FPITolerance} we
choose~$\varepsilon=10^{-5}$.
We compute the starting guess for the optimization
\eqref{eq:MinimizationProblemFM} and
\eqref{eq:MinimizationProblemMM} for each sampling point
$z_{ij} \in \triangle$ as in \eqref{eq:StGuessFM} or
\eqref{eq:StGuessMM}.
The corresponding values of the cost functional in
\eqref{eq:MinimizationProblemFM} and
\eqref{eq:MinimizationProblemMM} for each grid point
$z_{ij}\in\triangle$ are denoted by $\mathbb{I}fac^{(0)}(z_{ij})$ and
$\mathbb{I}mon^{(0)}(z_{ij})$, respectively.
Color coded plots of $\mathbb{I}fac^{(0)}$ and $\mathbb{I}mon^{(0)}$ are shown
in~Figure~\ref{fig:Exa1-1} (center) and Figure~\ref{fig:Exa1-2}
(center), respectively.
These give already a reasonable reconstruction of the location of
the nonlinear scattering object.
The dashed lines indicate the exact geometry of the scatterer.
Then we approximate solutions to the optimization problems
\eqref{eq:MinimizationProblemFM} and
\eqref{eq:MinimizationProblemMM} for each sampling point
$z_{ij} \in \triangle$ using Matlab's~\texttt{fmincon} algorithm.
These approximations are denoted by~$\mathbb{I}fac(z_{ij})$ and $\mathbb{I}mon(z_{ij})$,
respectively.
Color coded plots of the indicator functions $\mathbb{I}fac$ and~$\mathbb{I}mon$ for
the nonlinear factorization method and for the nonlinear
monotonicity method are shown in~Figure~\ref{fig:Exa1-1}~(right) and
Figure~\ref{fig:Exa1-2}~(right), respectively.
Again the dashed lines indicate the exact geometry of the scatterer.
The results obtained by the two methods are of similar quality.
A significant improvement of the reconstruction is observed when
compared to the initial guesses.
The shape of the support of the scattering object is nicely
recovered.
$\lozenge$
\end{example}
\section*{Conclusions}
\label{sec:Conclusions}
We have discussed a direct and inverse scattering problem for a class
of nonlinear Helmholtz equations in unbounded free space.
Assuming that the intensities of the incident waves are sufficiently
small relative to the size of the nonlinearity, we have established
existence and uniqueness of solutions to the direct and inverse
scattering problem.
Our analysis relies on linearization techniques and estimates for the
linearization error.
We have also considered extensions of two shape reconstruction
techniques for the inverse scattering problem, and we have provided
numerical examples.
\section*{Acknowledgments}
Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research
Foundation) -- Project-ID 258734477 -- SFB 1173.
\begin{appendix}
\section{Appendix: A useful estimate}
In Lemma~\ref{lmm:UsefulEstimate} below we show a simple estimate
that is used in the proof of
Theorem~\ref{thm:UniquenessInverseProblem}, but that we have not
been able to find in the literature.
\begin{lemma}
\label{lmm:UsefulEstimate}
Let $a,b\in\field{C}$ and $\alpha>0$.
Then,
\begin{equation*}
\bigl| |a|^\alpha a - |b|^\alpha b \bigr|
\,\leq\, 2 (|a|+|b|)^\alpha |a-b| \,.
\end{equation*}
\end{lemma}
\begin{proof}
Without loss of generality we can assume that $|a|\geq |b|> 0$.
Then $t:=b/a\in\field{C}$ satisfies~$0<|t|\leq 1$, and
we are left to show that
\begin{equation}
\label{eq:Inequality-t}
\bigl| 1-|t|^\alpha t \bigr|
\,\leq\, 2 (1+|t|)^\alpha |1-t| \,.
\end{equation}
If $\bigl| 1-|t|^\alpha t \bigr| \leq |1-t|$ or $|t|=1$,
then \eqref{eq:Inequality-t} is clearly satisfied.
Hence, we assume from now on without loss of generality that
$\bigl| 1-|t|^\alpha t \bigr| > |1-t|$ and $0<|t|<1$.
This implies that~$0<\real(t)\leq|t|$, and accordingly
\begin{equation*}
\frac{\bigl| 1-|t|^\alpha t \bigr|^2}{|1-t|^2}
\,=\, \frac{1-2|t|^\alpha\real(t)+|t|^{2\alpha+2}}{1-2\real(t)+|t|^2}
\,\leq\, \frac{1-2|t|^{\alpha+1}+|t|^{2\alpha+2}}{1-2|t|+|t|^2}
\,=\, \frac{\bigl( 1-|t|^{\alpha+1} \bigr)^2}{(1-|t|)^2} \,.
\end{equation*}
Therefore, it suffices to show that
\begin{equation*}
\frac{1-|t|^{1+\alpha}}{1-|t|}
\,\leq\, 2 (1+|t|)^\alpha \,.
\end{equation*}
Let $n:=\lfloor\alpha\rfloor$ and $\beta:=\alpha-n$.
Then,
\begin{equation*}
(1+|t|)^n
\,=\, \sum_{\ell=0}^n \binom{n}{\ell} |t|^\ell
\,\geq\, \sum_{\ell=0}^n |t|^\ell
\,=\, \frac{1-|t|^{n+1}}{1-|t|} \,,
\end{equation*}
and $2(1+|t|)^\beta\geq 1 + |t|^\beta$.
Accordingly, since $|t|^\beta \geq |t|^{n+1}$,
\begin{equation*}
2 (1+|t|)^\alpha
\,\geq\, \frac{1 + |t|^\beta - |t|^{n+1} -|t|^{n+\beta+1}}{1-|t|}
\,\geq\, \frac{1-|t|^{\alpha+1}}{1-|t|} \,.
\end{equation*}
\end{proof}
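The estimate in Lemma~\ref{lmm:UsefulEstimate} is also easily checked numerically. The following Python snippet, included purely as an illustration, samples random complex pairs and a few exponents $\alpha$:

```python
import random

def sides(a, b, alpha):
    """Left- and right-hand side of the estimate
    | |a|^alpha a - |b|^alpha b |  <=  2 (|a| + |b|)^alpha |a - b|."""
    lhs = abs(abs(a) ** alpha * a - abs(b) ** alpha * b)
    rhs = 2 * (abs(a) + abs(b)) ** alpha * abs(a - b)
    return lhs, rhs

random.seed(0)
ok = True
for _ in range(1000):
    a = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    b = complex(random.uniform(-2, 2), random.uniform(-2, 2))
    lhs, rhs = sides(a, b, random.choice([0.5, 1.0, 2.0, 3.7]))
    ok = ok and lhs <= rhs + 1e-12
```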
\end{appendix}
\end{document} |
\begin{document}
\title{Quibbs, a Code Generator\\
for Quantum Gibbs Sampling}
\author{Robert R. Tucci\\
P.O. Box 226\\
Bedford, MA 01730\\
[email protected]}
\date{ \today}
\maketitle
\vskip2cm
\section*{Abstract}
This paper introduces
Quibbs v1.3,
a Java application
available for free. (Source code
included in the distribution.)
Quibbs is a ``code generator" for
quantum Gibbs sampling:
after the user inputs some files that specify
a classical Bayesian network,
Quibbs outputs
a quantum circuit
for performing Gibbs sampling of that
Bayesian network
on a quantum computer. Quibbs
implements an algorithm
described in earlier papers,
that combines
various apple pie techniques such as:
an adaptive fixed-point version of
Grover's algorithm, Szegedy operators,
quantum phase estimation
and quantum multiplexors.
\section{Introduction}
We say a unitary operator
acting on an array of qubits has been
compiled if
it has been expressed
as a Sequence of Elementary
Operations (SEO), where by elementary
operations we mean
1 and 2-qubit operations such
as CNOTs and single-qubit rotations.
SEO's are often represented as quantum circuits.
There exists software, ``general quantum compilers"
(like Qubiter, discussed in Ref.\cite{TucQubiter}),
for compiling arbitrary unitary
operators (operators that have no a priori
known structure).
There also exists
software, ``special purpose
quantum compilers" (like
each of the 7
applications
of QuanSuite,
discussed in Refs.\cite{quantree,quanfou,quanfruit}),
for
compiling unitary operators
that have a very definite, special
structure which is known a priori.
This paper introduces\footnote{The reason
for releasing the first public
version of Quibbs with such an odd
version number is that Quibbs shares
many Java classes with other previous Java
applications of mine
(QuanSuite
discussed in Refs.\cite{quantree,quanfou,quanfruit},
QuSAnn
discussed in Ref.\cite{TucQusann},
and Multiplexor Expander
discussed in Ref.\cite{TucQusann}),
so I have made the
decision to give all these applications
a single unified version
number.}
Quibbs v1.3,
a Java application
available for free. (Source code
included in the distribution.)
Quibbs is a ``code generator" for
quantum Gibbs sampling:
after the user inputs some files that specify
a classical Bayesian network,
Quibbs outputs
a quantum circuit
for performing Gibbs sampling of that
Bayesian network
on a quantum computer.
Quibbs is not really a quantum
compiler (neither general nor special)
because, although it
generates a quantum circuit like
the quantum compilers do,
it doesn't
start with an explicitly
stated unitary matrix
as input.
Quibbs
implements the algorithm
of Tucci discussed in Refs.
\cite{TucGibbs,TucZ, TucGrover}.
The quantum circuit
generated by Quibbs
includes some quantum multiplexors.
The Java application Multiplexor Expander
(see Ref.\cite{TucQusann})
allows the user to replace
each of those multiplexors by
a sequence of more elementary
gates such as multiply
controlled NOTs and qubit rotations.
Multiplexor Expander is also
available for free, including source code.
For an explanation of the mathematical
notation
used in this paper, see some of my previous papers;
for instance, Ref.\cite{notationNAND} Section 2.
Throughout this paper, we will
often refer to an operator $V$.
$V$ is defined by
figure 5 of
Ref.\cite{TucGibbs}.
We will also use the acronym
AFGA (Adaptive Fixed-point Grover Algorithm)
for the algorithm described in Ref.\cite{TucGrover}.
\section{The Control Panel}
\label{sec-control-panel}
Fig.\ref{fig-quibbs-main} shows the
{\bf Control Panel} for Quibbs. This is the
main and only window of Quibbs
(except for the occasional
error message window). This
window is
open if and only if Quibbs is running.
\begin{figure}
\caption{{\bf Control Panel} of Quibbs.}
\label{fig-quibbs-main}
\end{figure}
The {\bf Control Panel}
allows you to enter the following inputs:
\begin{description}
\item[I/O Folder:] Enter in this
text box the name
of a folder. The folder will contain Quibbs' input and
output files for the particular
Bayesian network
that you are currently considering.
The I/O folder must
be in the same directory as
the Quibbs application.
To generate
a quantum circuit,
the I/O folder must contain
the following 3 input files:
\begin{itemize}
\item[(In1)] {\tt parents.txt}
\item[(In2)] {\tt states.txt}
\item[(In3)] {\tt probs.txt}
\end{itemize}
A detailed description of these 3 input files
will be given
in the next section. For this section,
all you need to
know is that: The {\tt parents.txt} file
lists the parent nodes of
each node of the Bayesian net being considered.
The {\tt states.txt} file lists the names of
the states of each node of the Bayesian net.
And the {\tt probs.txt} file gives the probability
matrix for each node of the Bayesian net.
Together, the In1, In2 and In3 files fully
specify the Bayesian network being
considered.
In Fig.\ref{fig-quibbs-main},
``3nodes" is entered in the {\bf I/O Folder}
text box.
A folder called ``3nodes" comes
with the distribution of Quibbs.
It contains, among other
things, In1, In2, In3 files
that specify one possible Bayesian network
with 3 nodes. The Quibbs distribution
also comes with 3 other examples of
I/O folders. These are named
``2nodes", ``4nodeFullyConnected" and
``Asia".
When you press the
{\bf Read Bayesian Net}
button, Quibbs reads
files In1, In2 and In3.
The program then creates data structures
that contain
complete information about
the Bayesian network. Furthermore, Quibbs fills the
scrollable list in the {\bf Starting State} grouping
with information that specifies
``the starting state". The starting state is
one particular instantiation (i.e.,
a particular state for each node) of the
Bayesian network.
Each row of the scrollable list
names a different node, and a particular
state of that node.
For example,
Fig.\ref{fig-quibbs-main} shows the
Quibbs {\bf Control Panel} immediately after
pressing the {\bf Read Bayesian Net} button.
In this example, the Bayesian net
read in has 3 nodes called
$A, B$ and $C$, and the starting state has node
$A$ in state $a1$, node $B$ in state $b1$
and node $C$ in state $c1$.
Suppose $A$ is a node of the
Bayesian net being considered. And suppose
$A$ has $N_A$ states. Quibbs will
give each state of node $A$
a ``decimal name"; that is, a
number from 0 through $N_A-1$.
The ``binary name" of a state
is the binary representation
of its decimal name.
As shown in Fig.\ref{fig-quibbs-main},
the scrollable list
of the {\bf Control Panel}
gives not only the ``english name"
of the state of each node, but
also
the binary and decimal names of that state.
For example,
Fig.\ref{fig-quibbs-main}
informs us that state $a1$ of node $A$
has binary name $(00)$ and decimal name $0$.
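The decimal-to-binary naming can be reproduced with a few lines of Python. This is an illustration, not Quibbs source code; we assume the binary name is padded to $\lceil\log_2 N_A\rceil$ bits, which matches the example above, where a 3-state node gets 2-bit names.

```python
import math

def binary_name(decimal_name, num_states):
    """Binary name of a node state: the binary representation of its
    decimal name, padded (assumption) to ceil(log2(num_states)) bits,
    the number of bits needed to label num_states states."""
    width = max(1, math.ceil(math.log2(num_states)))
    return format(decimal_name, "0{}b".format(width))
```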
If you press the {\bf Random Start} button,
the starting state
inside the scrollable list
is changed to a randomly
generated one. Alternative, you can
choose a specific state for each node of
the Bayesian net by using the {\bf Node State Menu},
the menu immediately to the left of
the {\bf Random Start} button.
To use the {\bf Node State Menu},
you select the particular row of
the scrollable list that you want
to change. The {\bf Node State Menu}
mutates to reflect your row selection
in the scrollable list. You
can choose from the menu a particular node state.
When you do so, the selected row
in the scrollable list changes to reflect
your menu choice.
When you press the
{\bf Do a Pre-run} button,
Quibbs both reads and writes files:
it reads files In1 and In2 (but not In3,
so if In3 is not included in the I/O folder,
this button still works), and
it writes the following files, whose
content will be described later:
\begin{itemize}
\item {\tt probsF.txt}
\item {\tt probsT.txt}
\item {\tt blankets.txt}
\item {\tt nits.txt}
\end{itemize}
\item[Number of Probe Bits (for each PE step):]
This is the parameter $a=1,2,3,\ldots$
for the operator $V$.
\item[Number of Phase Estimation (PE) Steps:]
This is the parameter $c=1,2,3,\ldots$
for the operator $V$.
\item[Maximum Number of Grover Steps:]
Quibbs will stop iterating the AFGA if
it reaches this number
of iterations.
\item[Gamma Tolerance (degs):]
This is an angle
given in degrees.
Quibbs will stop iterating the AFGA
if the absolute value of
$\gamma_j$ becomes smaller than this tolerance.
($\gamma_j$
is an angle in AFGA that tends to zero
as the iteration index $j$ tends to infinity.
$\gamma_j$ quantifies
how close the AFGA is to reaching the
target state).
\item[Delta Lambda (degs):]
This is the angle $\Delta\lambda$ of
AFGA,
given in degrees.
\end{description}
Once Quibbs has
successfully read files In1, In2 and In3,
and once you have filled
all the text boxes in the {\bf Inputs}
grouping, you can
successfully press the
{\bf Write Q. Circuit Files} button.
This will cause Quibbs to write
the following output files within
the I/O folder:
\begin{itemize}
\item[(Out1)] {\tt quibbs\_log.txt}
\item[(Out2)] {\tt quibbs\_eng.txt}
\item[(Out3)] {\tt quibbs\_pic.txt}
\end{itemize}
The contents of these
3 output files will be described
in detail in the next section. For this
section, all you need to know
is that: The {\tt quibbs\_log.txt} file
records all the input and output
parameters that you entered into the
{\bf Control Panel}, so
you won't forget them.
The {\tt quibbs\_eng.txt} file
is an ``in english" description
of a quantum circuit.
And the {\tt quibbs\_pic.txt} file
translates, line for line,
the english description found in
{\tt quibbs\_eng.txt}
into a ``pictorial" description.
Normally, you want to
press the
{\bf Write Q. Circuit Files} button
without check-marking the
{\bf Omit V Gates (diagnostic)} check box.
If you do check-mark it,
you will still generate
files Out1, Out2, Out3,
except that those files will
omit all the gates
that generate the operator $V$,
at every place where it would
normally appear. Viewing the circuit
without its $V$'s is useful for
diagnostic and educational purposes,
but such a circuit is of course
useless for Gibbs sampling
the Bayesian net being considered.
The {\bf Control Panel}
displays the following
output text boxes.
(The {\bf Starting Gamma (degs)}
output text box
and the {\bf Prob. of Starting State}
output text box
are both filled as soon as
a starting state is given in
the inputs. The other output text boxes
are
filled when you press the
{\bf Write Q. Circuit Files} button.)
\begin{description}
\item[Starting Gamma (degs):]
This is $\gamma_0$, the first $\gamma_j$
in AFGA, given in degrees. In the
notations of Ref.\cite{TucGibbs},
and \cite{TucGrover},
\beqa
\gamma_0 &=& \acos(\hat{s'}\cdot\hat{t})=
2 \;\acos(|\av{s'|t}|)\\
&=& 2\;\acos(\sqrt{P(x_0)})
\;,
\eeqa
where $P(x_0)$ is the {\bf Prob. of
Starting State} defined next.
\item[Prob. of Starting State:]
This is the probability
$P(x_0)$ in Ref.\cite{TucGibbs}, where
$P()$ is the full probability
distribution of the
Bayesian net $\rvx$ being considered, and
$x_0$ is the starting value for $\rvx$.
\item[Number of Qubits:] This is the total
number of qubits used by the quantum circuit,
equal to $2\nb+ac$ in the notation of
Ref.\cite{TucGibbs}.
\item[Number of Elementary Operations:]
This is the number of elementary operations
in the output quantum circuit.
If there are no LOOPs, this is
the number of lines in the English File
(see Sec. \ref{sec-eng-file}), which
equals the number of lines in the
Picture File (see Sec. \ref{sec-pic-file}).
For a LOOP (assuming it is not nested
inside a larger LOOP), the
``{\tt LOOP k REPS:$N$}" and
``{\tt NEXT k}" lines are not counted,
whereas the lines between
``{\tt LOOP k REPS:$N$}" and
``{\tt NEXT k}"
are counted $N$ times (because
{\tt REPS:$N$} indicates
$N$ repetitions of the loop body).
Multiplexors expressed as a
single line are counted as a
single elementary operation
(unless, of course, they are inside a LOOP,
in which case they are
counted as many times as the loop body
is repeated).
\end{description}
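The counting convention for LOOPs described above can be made concrete with a short Python sketch (illustrative only, and restricted to non-nested LOOPs):

```python
def count_elementary_ops(english_lines):
    """Count elementary operations in an English File.  The lines
    "LOOP k REPS:N" and "NEXT k" are not counted themselves; every
    line between them is counted N times.  LOOPs must not be nested."""
    count, reps = 0, 1
    for line in english_lines:
        tokens = line.split()
        if tokens[0] == "LOOP":
            reps = int(tokens[2].split(":")[1])  # token "REPS:N"
        elif tokens[0] == "NEXT":
            reps = 1
        else:
            count += reps
    return count

circuit = ["HAD2 AT 0",
           "LOOP 1 REPS:3",
           "SIGX AT 1",
           "ROTZ 10.0 AT 0",
           "NEXT 1",
           "SIGZ AT 2"]
```

For this example the count is $1 + 3\cdot 2 + 1 = 8$.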
\section{Input Files}
\label{sec-in-files}
As explained earlier,
for Quibbs to generate
quantum circuit files,
it needs to first read 3 input files:
the Parents File called {\tt parents.txt},
the States File called {\tt states.txt},
and the Probabilities File called {\tt probs.txt}.
These 3 input files
must be placed inside the I/O folder.
Next we explain
the contents of each of these 3 input files.
\subsection{Parents File}
\begin{figure}
\caption{Parents file in the I/O folder
``3nodes", for a Bayesian net with graph
$A\rarrow B \larrow C$
}
\label{fig-parents}
\end{figure}
Fig.\ref{fig-parents}
shows the Parents File
as found in the folder ``3nodes" which is
included with the Quibbs distribution,
for a Bayesian net with graph
$A\rarrow B \larrow C$.
In this example, nodes $A$ and $C$
have no parents and node $B$ has
parents $A$ and $C$.
In general, a Parents
File must obey the
following rules:
\begin{itemize}
\item Call focus nodes the node names
immediately after a hash.
Focus nodes
in the States, Parents and Probabilities
Files must all be in the same order.
For example,
in the ``3nodes" case,
that order is $A,B,C$.
\item For each focus node, give a hash,
then the name of the focus node,
then a list of parents of
the focus node, separating all of these
with whitespace.
\end{itemize}
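These rules amount to a simple hash-and-whitespace parser. The following Python sketch (not Quibbs' actual Java code) reads a Parents File into a dictionary:

```python
def parse_parents(text):
    """Each '#' starts a new focus node; the first token after the hash
    is the focus node's name and the remaining tokens are its parents."""
    parents = {}
    for chunk in text.split("#")[1:]:
        tokens = chunk.split()
        if tokens:
            parents[tokens[0]] = tokens[1:]
    return parents

# The "3nodes" example for the graph A -> B <- C:
net = parse_parents("# A\n# B A C\n# C\n")
```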
\subsection{States File}
\begin{figure}
\caption{States file in the I/O folder
``3nodes", for a Bayesian net with graph
$A\rarrow B \larrow C$
}
\label{fig-states}
\end{figure}
Fig.
\ref{fig-states}
shows the States File
as found in the folder ``3nodes" which is
included with the Quibbs distribution,
for a Bayesian net with graph
$A\rarrow B \larrow C$.
In this example, node $A$ has 3 states
called $a1,a2$ and $a3$, node $B$ has
2 states called $b1$ and $b2$, and
node $C$ has 2 states called $c1$ and $c2$.
In general, a States
File must obey the
following rules:
\begin{itemize}
\item
Call focus nodes the node names
immediately after a hash.
Focus nodes
in the States, Parents and Probabilities
Files must all be in the same order.
For example,
in the ``3nodes" case,
that order is $A,B,C$.
\item
For each focus node, give a hash,
then the name of the focus node,
then a list of names of the states of the
focus node,
separating all of these
with whitespace.
\end{itemize}
\subsection{Probabilities File}
\label{sec-probs-file}
\begin{figure}
\caption{Probabilities File in the I/O folder
``3nodes", for a Bayesian net with graph
$A\rarrow B \larrow C$
}
\label{fig-probs}
\end{figure}
Fig.\ref{fig-probs}
shows the Probabilities File
as found in the folder ``3nodes" which is
included with the Quibbs distribution,
for a Bayesian net with graph
$A\rarrow B \larrow C$.
In this example, $P_A(a1)=0.2$,
$P_{B|A,C}(b1|a1,c1)=0.7$, etc.
In general, a Probabilities
File must obey the
following rules:
\begin{itemize}
\item
Call focus nodes the node names
immediately after a hash.
Focus nodes
in the States, Parents and Probabilities
Files must all be in the same order.
For example,
in the ``3nodes" case,
that order is $A,B,C$.
\item
For each focus node, give a hash,
then the name of the
focus node, then
the state of the focus node, then the states
of each parent of the focus node,
then the conditional probability of
the focus node conditioned on its parents,
separating all of these
with whitespace.
\item
The order in which the
states of the parents of the focus node
are listed must be
identical to the order in which
the parents of that
focus node are listed in the Parents File.
For example,
in the ``3nodes" case, the Parents
File gives the parents
of node $B$ as $A,C$, in that
order. Hence, in the Probabilities File,
each conditional probability for focus node $B$
is given after giving the states
of nodes $B,A,C$, in that order.
\item
A combination of node states
may be omitted, in which case Quibbs
will interpret that probability to
be zero. For example,
in ``3nodes" case,
if
$$P_{B|A,C}(b1|a2,c2)=0,$$
you could omit a line of the form
\begin{verbatim}
b1 a2 c2 0.0
\end{verbatim}
for the focus node $B$.
\end{itemize}
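The omitted-combination convention amounts to a dictionary lookup with default value zero. The Python sketch below is an illustration only (the exact token layout of a real Probabilities File may differ in detail); it uses the parent counts from the Parents File to group the tokens of each entry:

```python
def parse_probs(text, num_parents):
    """Map (focus_node, focus_state, parent_states...) -> probability.
    num_parents gives the number of parents of each node, as listed
    in the Parents File."""
    probs = {}
    for chunk in text.split("#")[1:]:
        tokens = chunk.split()
        node, rest = tokens[0], tokens[1:]
        group = 1 + num_parents[node] + 1  # focus state, parent states, prob
        for i in range(0, len(rest), group):
            entry = rest[i:i + group]
            probs[(node,) + tuple(entry[:-1])] = float(entry[-1])
    return probs

def lookup(probs, key):
    """Omitted state combinations are interpreted as probability zero."""
    return probs.get(key, 0.0)

# Fragment of the "3nodes" example: P_A(a1) = 0.2, P_{B|A,C}(b1|a1,c1) = 0.7.
table = parse_probs("# A a1 0.2\n# B b1 a1 c1 0.7\n", {"A": 0, "B": 2, "C": 0})
```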
Note that Quibbs can
help you to write a
Probabilities File, by
generating a template
that you can change
according to your
needs. Such templates
can be generated by
means of the {\bf Do a Pre-run}
button. See Sec.\ref{sec-pre-run}
for a detailed explanation of this.
\section{Output Files}
\label{sec-out-files}
As explained earlier, when you press the
{\bf Write Q. Circuit Files} button,
Quibbs writes 3 output files
within the I/O folder:
a Log File called {\tt quibbs\_log.txt},
an English File called {\tt quibbs\_eng.txt},
and a Picture File called {\tt quibbs\_pic.txt}.
Next we explain
the contents of each of these 3 output files.
We also explain
the contents of the various
files generated when you
press the {\bf Do a Pre-run} button.
\subsection{Log File}
\begin{figure}
\caption{
Log File generated by Quibbs
using input files from the
``3nodes" I/O folder.}
\label{fig-quibbs-log}
\end{figure}
Fig.\ref{fig-quibbs-log}
is an example of a Log File.
This example was
generated by
Quibbs using the input files from the
``3nodes" I/O folder.
A Log File
records all the information
found in the
{\bf Control Panel}.
\subsection{English File}\label{sec-eng-file}
\begin{figure}
\caption{
English File generated by
Quibbs (with the {\bf Omit V Gates}
\label{fig-quibbs-eng}
\end{figure}
\begin{figure}
\caption{
English File generated by
Quibbs (with the {\bf Omit V Gates} check box ON).}
\label{fig-quibbsOmit-eng}
\end{figure}
Fig.\ref{fig-quibbs-eng}
(respectively, Fig.\ref{fig-quibbsOmit-eng})
is an example of an English File.
This example was
generated by
Quibbs,
with the {\bf Omit V Gates} feature
OFF (respectively, ON),
in the same run as the
Log File of Fig.\ref{fig-quibbs-log},
and using the input files from the
``3nodes" I/O folder.
An English File
completely specifies the output SEO.
It does so ``in English", thus its name.
Each line represents one elementary operation,
and time increases as we move downwards.
In general, an English File obeys
the following rules:
\begin{itemize}
\item Time grows as we move down the file.
\item Each row
corresponds to one elementary operation.
Each row starts with 4 letters that indicate
the type of elementary operation.
\item For a one-bit operation
acting on a ``target bit" $\alpha$,
the target bit $\alpha$ is
given after the word {\tt AT}.
\item If the one-bit operation is controlled, then
the controls are indicated after the word {\tt IF}.
{\tt T} and {\tt F} stand for
true and false, respectively.
{\tt $\alpha$T} stands for
a control $P_1(\alpha)=n(\alpha)$ at bit $\alpha$.
{\tt $\alpha$F} stands for
a control $P_0(\alpha)=\nbar(\alpha)$ at bit $\alpha$.
\item ``{\tt LOOP k REPS:$N$}" and ``{\tt NEXT k}"
mark the beginning and end of $N$
repetitions. {\tt k} labels the loop. {\tt k} also
equals the line-count number in the English file
(first line is 0)
of the line
``{\tt LOOP k REPS:$N$}".
\item {\tt SWAP
$\alpha$ $\beta$}
stands for the swap (i.e., exchange) operator
$E(\alpha, \beta)$
that swaps
bits $\alpha$ and $\beta$.
\item {\tt PHAS }$\theta^{degs}$ stands for
a phase factor $e^{i \theta^{degs} \frac{\pi}{180}}$.
\item
{\tt P0PH }$\theta^{degs}$ stands for
the one-bit gate
$e^{i P_0 \theta^{degs} \frac{\pi}{180}}$ (note
$P_0=\nbar$).
{\tt P1PH }$\theta^{degs}$ stands for
the one-bit gate
$e^{i P_1 \theta^{degs} \frac{\pi}{180}}$
(note $P_1=n$).
Target bit follows the word {\tt AT}.
\item {\tt SIGX}, {\tt SIGY},
{\tt SIGZ}, {\tt HAD2}
stand for
the Pauli matrices $\sigx, \sigy, \sigz$
and the one-bit Hadamard matrix $H$,
respectively.
Target bit follows the word {\tt AT}.
\item {\tt ROTX}, {\tt ROTY},
{\tt ROTZ}, {\tt ROTN}
stand for one-bit
rotations
with rotation axes in the
directions: $x$, $y$, $z$, and
an arbitrary direction $n$, respectively.
Rotation angles (in degrees) follow
the words {\tt ROTX}, {\tt ROTY},
{\tt ROTZ}, {\tt ROTN}.
Target bit follows the word {\tt AT}.
\item
{\tt MP\_Y} stands for a multiplexor
which performs a one-bit rotation of
a target bit about the
$y$ axis. Target bit follows the word {\tt AT}.
Rotation angles (in degrees) follow
the word {\tt BY}. Multiplexor controls
are specified by $\alpha(k$, where
integer $\alpha$ is the bit position
and integer $k$ is the control's name.
\end{itemize}
Here is a list of examples
showing how to translate the mathematical
notation used in Ref.\cite{notationNAND}
into the English File language:
\begin{center}
\begin{tabular}{|l|l|}
\hline
Mathematical language & English File language\\
\hline
\hline
Loop named 5 with 2 repetitions &
{\tt LOOP 5 REPS: 2}\\
\hline
Next iteration of loop named 5&
{\tt NEXT 5}\\
\hline
$E(1,0)^{\nbar(3)n(2)}$ &
{\tt SWAP 1 0 IF 3F 2T}\\
\hline
$e^{i 42.7 \frac{\pi}{180} \nbar(3)n(2)}$ &
{\tt PHAS 42.7 IF 3F 2T}\\
\hline
$e^{i 42.7 \frac{\pi}{180} \nbar(3)n(2)}$ &
{\tt P0PH 42.7 AT 3 IF 2T}\\
\hline
$e^{i 42.7 \frac{\pi}{180} n(3)n(2)}$ &
{\tt P1PH 42.7 AT 3 IF 2T}\\
\hline
$\sigx(1)^{\nbar(3)n(2)}$ &
{\tt SIGX AT 1 IF 3F 2T}\\
\hline
$\sigy(1)^{\nbar(3)n(2)}$ &
{\tt SIGY AT 1 IF 3F 2T}\\
\hline
$\sigz(1)^{\nbar(3)n(2)}$ &
{\tt SIGZ AT 1 IF 3F 2T}\\
\hline
$H(1)^{\nbar(3)n(2)}$ &
{\tt HAD2 AT 1 IF 3F 2T}\\
\hline
$(e^{i \frac{\pi}{180} 23.7 \sigx(1)})^{\nbar(3)n(2)}$ &
{\tt ROTX 23.7 AT 1 IF 3F 2T}\\
\hline
$(e^{i \frac{\pi}{180} 23.7 \sigy(1)})^{\nbar(3)n(2)}$ &
{\tt ROTY 23.7 AT 1 IF 3F 2T} \\
\hline
$(e^{i \frac{\pi}{180} 23.7 \sigz(1)})^{\nbar(3)n(2)}$ &
{\tt ROTZ 23.7 AT 1 IF 3F 2T}\\
\hline
$(e^{
i \frac{\pi}{180}
[30\sigx(1)+ 40\sigy(1) + 11 \sigz(1)]
})^{\nbar(3)n(2)}$ &
{\tt ROTN 30.0 40.0 11.0 AT 1 IF 3F 2T}\\
\hline
$[e^{i\sum_{b1,b0}\theta_{b_1b_0}\sigy(3)P_{b_1b_0}(2,1)}]^{n(0)}$
&
{\tt MP\_Y AT 3 IF 2(1 1(0 0T BY 30.0 10.5 11.0 83.1}
\\
where $\left\{\begin{array}{l}
\theta_{00}=30.0(\frac{\pi}{180})
\\
\theta_{01}=10.5(\frac{\pi}{180})
\\
\theta_{10}=11.0(\frac{\pi}{180})
\\
\theta_{11}=83.1(\frac{\pi}{180})
\end{array}\right.$
&\;
\\
\hline
\end{tabular}
\end{center}
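Since the row grammar above is completely regular, a gate row of an English File can be parsed mechanically. Below is a minimal illustrative sketch (not part of Quibbs; the function name is our own) that handles the simple gate rows, but not the {\tt LOOP}, {\tt NEXT} or {\tt MP\_Y} rows:

```python
def parse_english_line(line):
    """Parse one English File gate row into (opcode, params, target, controls).

    Controls are returned as (bit, polarity) pairs, where polarity True
    means a T control (P_1 = n) and False an F control (P_0 = nbar).
    """
    tokens = line.split()
    opcode, rest = tokens[0], tokens[1:]
    target, controls, params = None, [], []
    it = iter(rest)
    for tok in it:
        if tok == "AT":
            target = int(next(it))
        elif tok == "IF":
            for ctrl in it:          # remaining tokens look like 3F or 2T
                controls.append((int(ctrl[:-1]), ctrl[-1] == "T"))
        else:
            params.append(tok)       # e.g. rotation angles in degrees
    return opcode, params, target, controls

# A controlled Pauli X, as in the table above.
print(parse_english_line("SIGX AT 1 IF 3F 2T"))
```

Running it on the {\tt SIGX} and {\tt ROTX} rows of the table reproduces the target bit and control list described by the rules above.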
\subsection{ASCII Picture File}\label{sec-pic-file}
\begin{figure}
\caption{
Picture File generated by
Quibbs (with the {\bf Omit V Gates}
feature OFF).}
\label{fig-quibbs-pic}
\end{figure}
\begin{figure}
\caption{
Picture File generated by
Quibbs (with the {\bf Omit V Gates}
feature ON).}
\label{fig-quibbsOmit-pic}
\end{figure}
Fig.\ref{fig-quibbs-pic}
(respectively, Fig.\ref{fig-quibbsOmit-pic})
is an example of a Picture File.
This example was
generated by
Quibbs,
with the {\bf Omit V Gates} feature
OFF (respectively, ON),
in the same run as the
Log File of Fig.\ref{fig-quibbs-log},
and using the input files from the
``3nodes" I/O folder.
A Picture File
partially specifies the output SEO.
It gives an ASCII picture of
the quantum circuit.
Each line represents one elementary operation,
and time increases as we move downwards.
There is a one-to-one correspondence
between the rows of the English File
and the rows of the Picture File.
In general, a Picture File obeys
the following rules:
\begin{itemize}
\item Time grows as we move down the file.
\item Each row
corresponds to one elementary operation.
Columns $1, 5, 9, 13, \ldots$ represent
qubits (or qubit positions). We define the
rightmost qubit as 0. The qubit
immediately to
the left of the rightmost qubit
is 1, etc.
For a one-bit operator
acting on a ``target bit'' $\alpha$,
one places a symbol
of the operator at bit position
$\alpha$.
\item {\tt |} represents a ``qubit world line''
connecting the same qubit at
two consecutive times.
\item {\tt -} represents a wire connecting different
qubits at the same time.
\item {\tt +} represents both {\tt |} and {\tt -}.
\item If the one-bit operation is controlled, then
the controls are indicated
as follows.
{\tt @} at bit position $\alpha$ stands for
a control $n(\alpha)=P_1(\alpha)$.
{\tt 0} at bit position $\alpha$ stands for
a control $\nbar(\alpha)=P_0(\alpha)$.
\item ``{\tt LOOP k REPS:$N$}'' and ``{\tt NEXT k}''
mark the beginning and end of $N$
repetitions. {\tt k} labels the loop. {\tt k} also
equals the line-count number in the Picture File
(first line is 0)
of the line
``{\tt LOOP k REPS:$N$}''.
\item The swap (i.e., exchange) operator
$E(\alpha, \beta)$
is represented by putting
arrow heads {\tt <} and {\tt >} at
bit positions $\alpha$ and $\beta$.
\item A
phase factor $e^{i\theta}$ for
$\theta\in \RR$ is represented by
placing {\tt Ph} at any bit position
which does not already hold a control.
\item The one-bit gate
$e^{i P_0(\alpha)\theta}$ (note
$P_0(\alpha)=\nbar(\alpha)$) for $\theta\in \RR$
is represented by putting {\tt 0P}
at bit position $\alpha$.
\item The one-bit gate
$e^{i P_1(\alpha)\theta}$
(note
$P_1(\alpha)=n(\alpha)$) for $\theta\in \RR$
is represented by putting {\tt @P}
at bit position $\alpha$.
\item One-bit operations
$\sigx(\alpha)$,
$\sigy(\alpha)$,
$\sigz(\alpha)$
and $H(\alpha)$
are represented by placing the letters
{\tt X,Y,Z, H}, respectively,
at bit position $\alpha$.
\item
One-bit rotations
acting on bit $\alpha$,
in the
$x,y,z,n$ directions,
are represented by placing
{\tt Rx,Ry,Rz, R}, respectively,
at bit position $\alpha$.
\item
A multiplexor that rotates
a bit $\tau$ about the $y$ axis
is represented by placing
{\tt Ry}
at bit position $\tau$.
A multiplexor control at bit position $\alpha$
and
named by the integer $k$
is represented by placing
$(k$ at bit position $\alpha$.
\end{itemize}
Here is a list of examples
showing how to translate the mathematical
notation used in Ref.\cite{notationNAND}
into the Picture File language:
\begin{tabular}{|l|l|}
\hline
Mathematical language & Picture File language\\
\hline
\hline
Loop named 5 with 2 repetitions &
{\tt LOOP 5 REPS:2}\\
\hline
Next iteration of loop named 5&
{\tt NEXT 5}\\
\hline
$E(1,0)^{\nbar(3)n(2)}$& {\tt 0---@---<--->} \\
\hline
$e^{i 42.7 \frac{\pi}{180} \nbar(3)n(2)}$ &
{\tt 0---@---+--Ph}\\
\hline
$e^{i 42.7 \frac{\pi}{180} \nbar(3)n(2)}$ &
{\tt 0P--@\ \ \ |\ \ \ |}\\
\hline
$e^{i 42.7 \frac{\pi}{180} n(3)n(2)}$ &
{\tt @P--@\ \ \ |\ \ \ |}\\
\hline
$\sigx(1)^{\nbar(3)n(2)}$& {\tt 0---@---X\ \ \ |} \\
\hline
$\sigy(1)^{\nbar(3)n(2)}$& {\tt 0---@---Y\ \ \ |} \\
\hline
$\sigz(1)^{\nbar(3)n(2)}$& {\tt 0---@---Z\ \ \ |} \\
\hline
$H(1)^{\nbar(3)n(2)}$& {\tt 0---@---H\ \ \ |} \\
\hline
$(e^{i \frac{\pi}{180} 23.7 \sigx(1)})^{\nbar(3)n(2)}$&
{\tt 0---@---Rx\ \ |} \\
\hline
$(e^{i \frac{\pi}{180} 23.7 \sigy(1)})^{\nbar(3)n(2)}$&
{\tt 0---@---Ry\ \ |} \\
\hline
$(e^{i \frac{\pi}{180} 23.7 \sigz(1)})^{\nbar(3)n(2)}$&
{\tt 0---@---Rz\ \ |} \\
\hline
$(e^{
i \frac{\pi}{180}
[30\sigx(1)+ 40\sigy(1) + 11 \sigz(1)]
})^{\nbar(3)n(2)}$&
{\tt 0---@---R\ \ \ |} \\
\hline
$[e^{i\sum_{b1,b0}\theta_{b_1b_0}\sigy(3)P_{b_1b_0}(2,1)}]^{n(0)}$
&
{\tt |\ \ \ Ry--(1--(0--@}
\\
where $\left\{\begin{array}{l}
\theta_{00}=30.0(\frac{\pi}{180})
\\
\theta_{01}=10.5(\frac{\pi}{180})
\\
\theta_{10}=11.0(\frac{\pi}{180})
\\
\theta_{11}=83.1(\frac{\pi}{180})
\end{array}\right.$
&\;
\\
\hline
\end{tabular}
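As a concrete check of the layout rules, here is a small illustrative sketch (not part of Quibbs; names are our own) that renders one Picture File row for a single-character gate symbol with controls:

```python
def picture_row(nbits, target, symbol, controls):
    """Render one Picture File row (single-character gate symbols only).

    controls maps bit -> polarity (True for an @ control, False for a 0
    control).  Qubit q sits at column 4*(nbits-1-q)+1; uninvolved qubits
    show | outside the active span and + where a wire crosses them.
    """
    involved = {target} | set(controls)
    hi, lo = max(involved), min(involved)
    cells = []
    for q in range(nbits - 1, -1, -1):        # leftmost qubit first
        if q == target:
            cells.append(symbol)
        elif q in controls:
            cells.append("@" if controls[q] else "0")
        elif lo < q < hi:
            cells.append("+")                 # wire crossing a word line
        else:
            cells.append("|")
        if q > 0:                             # separator to the next qubit
            cells.append("---" if lo < q <= hi else "   ")
    return "".join(cells)

# X(1)^{nbar(3) n(2)} on 4 qubits, as in the table above:
print(picture_row(4, target=1, symbol="X", controls={3: False, 2: True}))
```

The output matches the {\tt SIGX} row of the table, and moving the target to qubit 0 produces the {\tt +} crossing described in the rules.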
\subsection{Output Files From a Pre-run}
\label{sec-pre-run}
When you
press the {\bf Do a Pre-run} button,
Quibbs writes 4 output files
within the I/O folder:
two Uniform Probabilities Files
called
{\tt probsF.txt} and
{\tt probsT.txt}, a Blankets File
called
{\tt blankets.txt}, and a Nits File called
{\tt nits.txt}.
Next we explain
the contents of each of these 4 output files.
\subsubsection{Uniform Probabilities Files}
\begin{figure}
\caption{
Uniform Probabilities File
(of type {\tt probsF.txt})
generated using input files from the
``3nodes'' I/O folder.}
\label{fig-probsF}
\end{figure}
\begin{figure}
\caption{Uniform Probabilities File
(of type {\tt probsT.txt})
generated using input files from the
``3nodes'' I/O folder.}
\label{fig-probsT}
\end{figure}
Figs.\ref{fig-probsF}
and \ref{fig-probsT}
are both examples of a
Uniform Probabilities File.
Both files were
generated by
Quibbs using the input files from the
``3nodes" I/O folder.
A Uniform Probabilities File
is simply a Probability File,
as defined in Sec.\ref{sec-probs-file},
but of a specific kind
that assigns uniform
values to all conditional
probabilities of the Bayesian net
specified by the Parents File and
States File in the I/O Folder.
Generating a
Uniform
Probabilities File
(either {\tt probsF.txt} or {\tt probsT.txt})
does not require a {\tt probs.txt}
file. One can use a
Uniform Probabilities File
as a template
for a {\tt probs.txt} file.
Just cut and paste the contents of a
Uniform Probabilities File into a new
file called {\tt probs.txt}
and modify its probabilities according
to your needs.
Note that the only
difference between a {\tt probsF.txt}
and a {\tt probsT.txt} file
is that the first (respectively, second) of these
varies the states of a focus
node before (respectively, after)
varying the states of its parents.
\subsubsection{Blankets File}
\begin{figure}
\caption{Blankets File
generated using input files from the
``3nodes" I/O folder.
}
\label{fig-blankets}
\end{figure}
Fig.\ref{fig-blankets}
is an example of a
Blankets File.
This example was
generated by
Quibbs using the input files from the
``3nodes" I/O folder.
The Markov blanket
$MB(i)$ for a node $\rvx_i$
of the classical Bayesian network $\rvx$ is defined
so that
(see section entitled
``Notation and Preliminaries"
in Ref.\cite{TucMetHas})
\beq
P(x_i|x_\noti)=P(x_i|x_{MB(i)})
\;.
\label{eq-mb}
\eeq
It can be shown that
the Markov blanket of a
focus node equals the union
of:
\begin{itemize}
\item the parents
of the focus node
\item
the
children of the focus node
\item
the parents of each child
of the focus node (but
excluding the focus node itself)
\end{itemize}
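Since a Bayesian net is fully specified by each node's parent set, this three-part union is straightforward to compute. The following illustrative sketch (our own code, not Quibbs'; the 3-node net at the end is a hypothetical one in which $A$ and $B$ are both parents of $C$) builds all Markov blankets from a parents dictionary:

```python
def markov_blankets(parents):
    """parents maps each node to the set of its parent nodes.

    Returns a dict mapping each node to its Markov blanket: its parents,
    its children, and its children's other parents.
    """
    nodes = set(parents)
    children = {v: {u for u in nodes if v in parents[u]} for v in nodes}
    blankets = {}
    for v in nodes:
        mb = set(parents[v]) | children[v]
        for child in children[v]:
            mb |= parents[child]            # co-parents of each child
        mb.discard(v)                       # exclude the focus node itself
        blankets[v] = mb
    return blankets

# A 3-node net where A and B are both parents of C:
print(markov_blankets({"A": set(), "B": set(), "C": {"A", "B"}}))
```

In this hypothetical net, node $A$ gets the blanket $\{B, C\}$: $C$ as its child, and $B$ as a co-parent of $C$.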
A Blankets File gives the Markov
blanket for each node of a Bayesian net.
In the example of
Fig.\ref{fig-blankets},
node $A$ has Markov blanket
$\{B,C\}$, etc.
In general, a Blankets File obeys the
following rules:
\begin{itemize}
\item The node names immediately after
a hash are called focus nodes.
\item For each focus node, Quibbs writes a hash,
then the name of the focus node,
then a list of the nodes which form the
Markov blanket
of
the focus node.
\end{itemize}
\subsubsection{Nits File}
\begin{figure}
\caption{Nits File
generated using input files from the
``3nodes" I/O folder.
}
\label{fig-nits}
\end{figure}
Fig.\ref{fig-nits}
is an example of a
Nits File.
This example was
generated by
Quibbs using the input files from the
``3nodes" I/O folder.
The word ``nit'' is a contraction
of the words ``node'' and ``qubit''.
Quibbs assigns to each node
(of the Bayesian net being
considered) its own private set of nits.
We explained in Sec.\ref{sec-control-panel}
how
Quibbs assigns a decimal and a binary name
to each state of a node.
The binary name of a state gives the
states of the nits. For example,
suppose node $A$ has 3 states: $a1$ (binary $00$, decimal $0$),
$a2$ (binary $01$, decimal $1$) and $a3$ (binary $10$, decimal $2$). Then
node $A$ is assigned two private nits,
call them $nit0$ and $nit1$.
When node $A$ is in state
$(b1,b0)$, where $b1$ and $b0$
are either 0 or 1, then
$nit1$ is in state $b1$
and $nit0$ is in state $b0$.
Actually, Quibbs doesn't
give nits an ``English'' name like
$nit0$ and $nit1$. It just calls them
by integers.
Fig.\ref{fig-nits} informs us
that the ``3nodes" Bayesian net
has 4 nits called 0,1,2,3.
Nits 0 and 1 are both owned by
node $A$ (which has 3 states $a1,a2,a3$).
Nit 2 is owned by node $B$ (which has 2 states $b1,b2$).
Nit 3 is owned by node $C$ (which has 2 states $c1,c2$).
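The nit counts above are simple binary arithmetic: a node with $S$ states needs $\max(1, \lceil \log_2 S \rceil)$ private nits, and the binary name of a state lists the states of those nits. An illustrative sketch (our own helper functions, not Quibbs code):

```python
import math

def nits_needed(num_states):
    """Number of private nits a node with num_states states needs."""
    return max(1, math.ceil(math.log2(num_states)))

def nit_states(state, num_states):
    """Binary name of a state, as a tuple (b_{k-1}, ..., b1, b0)."""
    k = nits_needed(num_states)
    return tuple((state >> i) & 1 for i in range(k - 1, -1, -1))

# Node A with 3 states needs 2 nits; in state 2 they hold (1, 0).
print(nits_needed(3), nit_states(2, 3))
```

This reproduces the counts of the ``3nodes'' example: $2 + 1 + 1 = 4$ nits for nodes with $3$, $2$ and $2$ states.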
The original Bayesian net
with the original nodes implies
a new, finer Bayesian net whose nodes
are the nits themselves. Just
like one can define a Markov blanket
for each node of the original Bayesian
net, one can define a Markov blanket
(equal to a particular set
of nits) for each nit.
Fig.\ref{fig-nits} informs us
that for the ``3nodes" Bayesian net,
nit 0 has a nit blanket $\{1,2,3\}$,
etc.
In general, a Nits File obeys the
following rules:
\begin{itemize}
\item The number (a sort of nit name)
immediately after a hash is called
the focus nit.
\item For each focus nit, Quibbs writes
the words ``owner node'' followed by the name
of the owner node of the focus nit.
\item For each focus nit,
Quibbs writes
the words ``blanket nit'' followed by
the Markov blanket of nits
for the focus nit.
\end{itemize}
Why do we care about
node blankets and nit blankets at all?
Quibbs uses a method,
discussed in Ref.\cite{TucGibbs},
of representing
Szegedy operators
using quantum multiplexors.
Quibbs uses nit blankets
to simplify its Szegedy representations
by eliminating certain unnecessary
controls in their multiplexors.
\end{document} |
\begin{document}
\title{Approval-Based Voting with Mixed Goods}
\begin{abstract}
We consider a voting scenario in which the resource to be voted upon may consist of both indivisible and divisible goods.
This setting generalizes both the well-studied model of multiwinner voting and the recently introduced model of cake sharing.
Under approval votes, we propose two variants of the extended justified representation (EJR) notion from multiwinner voting, a stronger one called \emph{EJR for mixed goods (\mbox{EJR-M})} and a weaker one called \emph{EJR up to $1$ (\mbox{EJR-1})}.
We extend three multiwinner voting rules to our setting---GreedyEJR, the method of equal shares (MES), and proportional approval voting (PAV)---and show that while all three generalizations satisfy EJR-1, only the first one provides \mbox{EJR-M}.
In addition, we derive tight bounds on the proportionality degree implied by EJR-M and EJR-1, and investigate the proportionality degree of our proposed rules.
\end{abstract}
\section{Introduction}
In \emph{multiwinner voting}---a ``new challenge for social choice theory'', as \citet{FaliszewskiSkSl17} put it---the goal is to select a subset of candidates of fixed size from a given set based on the voters' preferences.
The candidates could be politicians vying for seats in the parliament, products to be shown on a company website, or places to visit on a school trip.
A common way to elicit preferences from the voters is via the \emph{approval} model, wherein each voter simply specifies the subset of candidates that he or she approves \citep{Kilgour10,LacknerSk22}.
While (approval-based) multiwinner voting has received substantial attention from computational social choice researchers in the past few years, a divisible analog called \emph{cake sharing} was recently introduced by \citet{BeiLuSu22}.
In cake sharing, the candidates correspond to a divisible resource such as time periods for using a facility or files to be stored in cache memory.
Following the famous resource allocation problem of \emph{cake cutting} \citep{RobertsonWe98,Procaccia16}, this divisible resource is referred to as a ``cake''.
In this paper, we study a setting that simultaneously generalizes both multiwinner voting and cake sharing, which we call \emph{(approval-based) voting with mixed goods}.
Specifically, in our setting, the resource may consist of both indivisible and divisible goods.\footnote{Since a ``candidate'' usually refers to an indivisible entity, we use the term ``good'' instead from here on.}
This generality allows our model to capture more features of the resource than either of the previous models.
For example, when reserving time slots, it is possible that some hourly slots must be reserved as a whole, while other slots can be booked fractionally.
Likewise, in cache memory storage, certain files may need to be stored in their entirety, whereas other files can be broken into smaller portions.
Combinations of divisible and indivisible goods have been examined in the context of \emph{fair division}, where the resource is to be divided among interested agents and the entire resource can be allocated \citep{BeiLiLi21,BeiLiLu21,BhaskarSrVa21}.
By contrast, we investigate mixed goods in a \emph{collective choice} context, where only a subset of the resource can be allocated but the allocated resource is collectively shared by all agents.\footnote{We henceforth use the term ``agent'' instead of ``voter''.}
There are multiple criteria that one can use to select a collective subset of resource based on the approval votes.
For example, one could try to optimize the \emph{social welfare}---the sum of the agents' utilities---or the \emph{coverage}---the number of agents who receive nonzero utility.
A representation criterion that has attracted growing interest is \emph{justified representation (JR)} \citep{AzizBrCo17}.
In multiwinner voting, if there are $n$~agents and $k$ (indivisible) goods can be chosen, then JR requires that whenever a group of at least $n/k$ agents approve a common good, some agent in that group must have an approved good in the selected set.
A well-studied strengthening of JR is \emph{extended justified representation (EJR)}, which says that for each positive integer $t$, if a group of at least $t\cdot n/k$ agents approve no fewer than $t$ common goods (such a group is said to be \emph{$t$-cohesive}), some agent in that group must have no fewer than $t$ approved goods in the selected set.
\citet{AzizBrCo17} showed that the \emph{proportional approval voting (PAV)} rule always outputs a set of goods that satisfies EJR.
In cake sharing, Bei et al.~(2022) adapted EJR by imposing the condition for every positive \emph{real number} $t$, and proved that the resulting notion is satisfied by the \emph{maximum Nash welfare (MNW)} rule.\footnote{See Appendix~A in the extended version of their work.
They also noted that JR does not admit a natural analog for cake sharing, since there is no discrete unit of cake.}
Can we unify the two versions of EJR for our generalized setting in such a way that the guaranteed existence is maintained?\footnote{As further evidence for the generality of our setting, we remark that, as Bei et al.~(2022, Sec.~1.2) pointed out, cake sharing itself generalizes another collective choice setting called \emph{fair mixing} \citep{AzizBoMo20}.}
\begin{table*}[!ht]
\centering
\begin{tabular}{c c c c}
\toprule
& \textbf{GreedyEJR-M} & \textbf{Generalized \rulex} & \textbf{Generalized PAV} \\
\midrule
\EJR{M} & \yes & \no & \no \\
\EJR{1} & \yes & \yes & \yes \\
Proportionality degree & $\floor{t} \cdot \left( 1 - \frac{\floor{t} + 1}{2t} \right)$ & $\left[ \frac{t - 2 + 1/t}{2}, \frac{\ceiling{t} + 1}{2} \right]$ & $> t-1$ \\
\midrule
Indivisible-goods EJR & \yes$^*$ & \yes$^*$ & \yes$^*$ \\
Cake EJR & \yes & \yes & \no \\
\midrule
Polynomial-time computation & ? & \yes & \no$^*$ \\
\bottomrule
\end{tabular}
\caption{Overview of our results. Entries marked by an asterisk follow from known results in multiwinner voting. We also show that the proportionality degree implied by EJR-M and EJR-1 is $\floor{t} \cdot \left( 1 - \frac{\floor{t} + 1}{2t} \right)$ and $\frac{t-2+1/t}{2}$, respectively.}
\label{table:summary}
\end{table*}
\subsection{Our Contributions}
In \Cref{sec:EJR-notions}, we introduce two variants of EJR suitable for the mixed-goods setting.
The stronger variant, \emph{EJR for mixed goods (EJR-M)}, imposes the EJR condition for any positive real number $t$ whenever a $t$-cohesive group commonly approves a resource of size \emph{exactly} $t$.
The weaker variant, \emph{EJR up to~$1$ (EJR-1)}, again considers the condition for every positive real number $t$ but only requires that some member of a $t$-cohesive group receives utility greater than $t-1$.
While \mbox{EJR-M} reduces to the corresponding notion of EJR in both multiwinner voting and cake sharing, and therefore offers a unification of both versions, EJR-1 does so only for multiwinner voting.
We then extend three multiwinner voting rules to our setting: GreedyEJR, the method of equal shares (MES), and proportional approval voting (PAV).
We show that \emph{GreedyEJR-M}, our generalization of GreedyEJR, satisfies EJR-M (and therefore EJR-1), which also means that an EJR-M allocation always exists.
On the other hand, we prove that our generalizations of the other two methods provide EJR-1 but not EJR-M.
Furthermore, while \mbox{GreedyEJR-M} and Generalized MES guarantee the cake version of EJR in cake sharing, Generalized PAV does not.
In \Cref{sec:proportionality-degree}, we turn our attention to the concept of \emph{proportionality degree}, which measures the average utility of the agents in a cohesive group \citep{Skowron21}.
We derive tight bounds on the proportionality degree implied by both EJR-M and EJR-1, with the EJR-M bound being slightly higher.
We also investigate the proportionality degree of the three rules from \Cref{sec:EJR-notions}; in particular, we find that Generalized PAV has a significantly higher proportionality degree than both GreedyEJR-M and Generalized MES.
An overview of our results can be found in \Cref{table:summary}.
\section{Preliminaries}
\label{sec:prelim}
Let $N = \{1,2,\dots,n\}$ be the set of agents.
In the mixed-goods setting, the resource $R$ consists of a cake $C = [0, c]$ for some real number $c\ge 0$ and a set of indivisible goods $G = \{g_1,\dots,g_m\}$ for some integer $m\ge 0$.
Assume without loss of generality that $\max(c,m) > 0$.
A \emph{piece of cake} is a union of finitely many disjoint (closed) subintervals of $C$.
Denote by $\ell(I)$ the length of an interval $I$, that is, $\ell([x, y]) \coloneqq y - x$.
For a piece of cake $C'$ consisting of a set of disjoint intervals $\mathcal{I}_{C'}$, we let $\ell(C') \coloneqq \sum_{I \in \mathcal{I}_{C'}} \ell(I)$.
A~\emph{bundle}~$R'$ consists of a (possibly empty) piece of cake $C'\subseteq C$ and a (possibly empty) set of indivisible goods $G'\subseteq G$; the \emph{size} of such a bundle $R'$ is $s(R') \coloneqq \ell(C') + |G'|$.
We sometimes write $R' = (C', G')$ instead of $R' = C'\cup G'$.
We assume that the agents have \emph{approval} preferences (also known as \emph{dichotomous} or \emph{binary}), i.e., each agent $i\in N$ approves a bundle $R_i = (C_i, G_i)$ of the resource.\footnote{Approval preferences can be given explicitly as part of the input for algorithms, so we do not need the cake-cutting query model of \citet{RobertsonWe98}.}
The utility of agent~$i$ for a bundle $R'$ is given by $u_i(R') \coloneqq s(R_i\cap R') = \ell(C_i\cap C') + |G_i\cap G'|$.
Let $\alpha\in (0,c+m]$ be a given parameter, and assume that a bundle~$A$ with $s(A)\le \alpha$ can be chosen and collectively allocated to the agents;\footnote{Instead of the variable $k$ as in multiwinner voting, we use $\alpha$, as this variable may not be an integer in our setting. This is consistent with the notation used by Bei et al.~(2022) for cake sharing.} we also refer to an allocated bundle as an \emph{allocation}.
An \emph{instance} consists of the resource~$R$, the agents~$N$ and their approved bundles $(R_i)_{i\in N}$, and the parameter $\alpha$.
We say that an instance is a \emph{cake instance} if it does not contain indivisible goods (i.e., $m = 0$), and an \emph{indivisible-goods instance} if it does not contain cake (i.e., $c = 0$).
An example instance is shown in \Cref{fig:example-instance}.
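The size and utility definitions above translate directly into code. Here is a small illustrative sketch (our own, not from the paper; cake pieces are represented as lists of disjoint closed intervals $(x, y)$) computing $u_i(R') = \ell(C_i\cap C') + |G_i\cap G'|$:

```python
def intersect_length(piece_a, piece_b):
    """Total length of the intersection of two pieces of cake,
    each given as a list of disjoint closed intervals (x, y)."""
    total = 0.0
    for (a1, a2) in piece_a:
        for (b1, b2) in piece_b:
            total += max(0.0, min(a2, b2) - max(a1, b1))
    return total

def utility(approved_cake, approved_goods, alloc_cake, alloc_goods):
    """u_i(R') = l(C_i ∩ C') + |G_i ∩ G'|."""
    return intersect_length(approved_cake, alloc_cake) + len(
        set(approved_goods) & set(alloc_goods))

# An agent approving cake [0, 1.5] and good g1, facing an allocation
# consisting of cake [1, 2] and goods {g1, g2}:
print(utility([(0.0, 1.5)], {"g1"}, [(1.0, 2.0)], {"g1", "g2"}))
```

Here the agent's utility is $\ell([1, 1.5]) + |\{g_1\}| = 0.5 + 1 = 1.5$.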
\begin{figure}
\caption{A mixed-goods instance with two agents $N = \{1,2\}$.}
\label{fig:example-instance}
\end{figure}
A \emph{mechanism} or \emph{rule} $\mathcal{M}$ maps any instance to an allocation of the resource.
For any property $P$ of allocations, we say that a rule~$\mathcal{M}$ satisfies property $P$ if for every instance, the allocation output by $\mathcal{M}$ satisfies $P$.
An example of a rule is the \emph{maximum Nash welfare (MNW)} rule, which returns an allocation $A$ that maximizes the product $\prod_{i\in N}u_i(A)$ of the agents' utilities.\footnote{Ties can be broken arbitrarily except when the highest possible product is $0$.
In this exceptional case, the MNW rule first gives positive utility to a set of agents of maximal size and then maximizes the product of utilities for the agents in this set.}
\section{EJR Notions and Rules}
\label{sec:EJR-notions}
In order to reason about \emph{extended justified representation (EJR)}, an important concept is that of a cohesive group.
For any positive real number $t$, a set of agents $N^*\subseteq N$ is said to be \emph{$t$-cohesive} if $|N^*| \ge t\cdot n/\alpha$ and $s(\bigcap_{i\in N^*} R_i) \ge t$.
For an indivisible-goods instance, \citet{AzizBrCo17} defined EJR as follows: an allocation~$A$ satisfies EJR if for every positive integer $t$ and every $t$-cohesive group of agents $N^*$, at least one agent in $N^*$ receives utility at least $t$.
Bei et al.~(2022) adapted this axiom to cake sharing by considering every positive real number $t$ instead of only positive integers.\footnote{Note that the indivisible-goods version with positive integers~$t$ may be meaningless in the cake setting, e.g., if the entire cake has length less than $1$. More generally, the restriction to positive integers $t$ is unnatural for cake, as there is no discrete unit of cake.}
To distinguish between these two versions of EJR, as well as from versions for mixed goods that we will define next, we refer to the two versions as \emph{indivisible-goods EJR} and \emph{cake EJR}, respectively.
A first attempt to define EJR for mixed goods is to simply use the cake version.
However, as we will see shortly, the resulting notion is too strong.
Hence, we relax it by lowering the utility threshold.
\begin{definition}[EJR-$\beta$]
\label{def:EJR-beta}
Let $\beta \ge 0$.
Given an instance, an allocation $A$ with $s(A) \le \alpha$ is said to satisfy \emph{extended justified representation up to $\beta$ (EJR-$\beta$)} if for every positive real number $t$ and every $t$-cohesive group of agents $N^*$, it holds that $u_j(A) > t-\beta$ for some $j\in N^*$.\footnote{For $\beta = 1$, \citet{PetersPiSk21} considered a somewhat similar notion called ``EJR up to one project'' in the setting of participatory budgeting with indivisible projects.}
\end{definition}
\begin{proposition}
\label{prop:EJR-beta}
For each constant $\beta \in [0,1)$, there exists an indivisible-goods instance in which no allocation satisfies EJR-$\beta$.
This remains true even if we relax the inequality $u_j(A) > t-\beta$ in \Cref{def:EJR-beta} to $u_j(A) \ge t-\beta$.
\end{proposition}
\begin{proof}
We work with the weaker condition $u_j(A) \ge t-\beta$.
Fix $\beta\in[0,1)$, and choose a rational constant $\beta' \in (\beta, 1)$.
Consider an indivisible-goods instance with integers $n$ and~$\alpha$ such that $\alpha = \beta'\cdot n$, and assume that all agents approve disjoint nonempty subsets $G_i$ of goods.
Each individual agent forms a $\beta'$-cohesive group, so in an EJR-$\beta$ allocation, every agent must receive utility at least $\beta' - \beta > 0$.
Hence, any EJR-$\beta$ allocation necessarily includes at least one good from each approval set $G_i$, and must therefore contain at least $n$ goods in total.
However, since $\alpha = \beta'\cdot n < n$, no allocation can satisfy EJR-$\beta$.
\end{proof}
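To make the counting in the proof concrete, here is the construction instantiated at one illustrative choice of parameters (our own numbers):

```latex
% Illustrative instantiation: beta = 1/2, rational beta' = 3/4,
% n = 4 agents with disjoint nonempty approval sets G_1, ..., G_4,
% and alpha = beta' n = 3.  Each agent alone is beta'-cohesive:
\[
  1 \;\ge\; \beta' \cdot \frac{n}{\alpha}
    = \frac{3}{4}\cdot\frac{4}{3} = 1,
  \qquad
  s(G_i) \;\ge\; 1 \;>\; \beta' = \tfrac34 ,
\]
% so the relaxed EJR-(1/2) condition forces
%   u_i(A) >= beta' - beta = 1/4 > 0  for every agent i,
% requiring one good from each of the 4 disjoint sets,
% i.e. s(A) >= 4 > 3 = alpha.
```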
\Cref{prop:EJR-beta} raises the question of whether EJR-1 can always be satisfied.
We will answer this question in the affirmative in \Cref{sec:GreedyEJR-M}.
Before that, we introduce EJR-M, another variant of EJR tailored to mixed goods.
The intuition behind EJR-M is that a $t$-cohesive group of agents should be able to claim a utility of $t$ for some member only when there exists a commonly approved resource of size \emph{exactly}~$t$.
This rules out such cases as in the proof of \Cref{prop:EJR-beta}, where a group can effectively claim a utility higher than $t$ due to the indivisibility of the goods.
\begin{definition}[EJR-M]
\label{def:EJR-M}
Given an instance, an allocation~$A$ with $s(A) \le \alpha$ is said to satisfy \emph{extended justified representation for mixed goods (EJR-M)} if the following holds:
For every positive real number $t$ and every $t$-cohesive group of agents $N^*$ for which there exists $R^*\subseteq R$ such that $s(R^*) = t$ and $R^*\subseteq R_i$ for all $i\in N^*$, it holds that $u_j(A) \ge t$ for some $j\in N^*$.
\end{definition}
Note that for indivisible-goods instances, the condition $s(R^*) = t$ can only hold for integers $t$, so EJR-M reduces to indivisible-goods EJR.
Likewise, for cake instances, if a group is $t$-cohesive then a commonly approved subset of size exactly $t$ always exists, so EJR-M reduces to cake EJR.
Hence, EJR-M unifies EJR from both settings.
\begin{proposition}
\label{prop:EJR-M-floor}
Let $t$ be a positive real number.
For an EJR-M allocation $A$ and a $t$-cohesive group of agents $N^*$, it holds that $u_j(A) \ge \floor{t}$ for some $j\in N^*$.
\end{proposition}
\begin{proof}
Let $R^* = \bigcap_{i\in N^*}R_i$, so $s(R^*) \ge t$, and let $m^*$ be the number of indivisible goods in $R^*$.
If $m^* \ge \floor{t}$, then by \Cref{def:EJR-M}, there exists $j\in N^*$ such that $u_j(A) \ge \floor{t}$.
Else, $m^* < \floor{t}$, which means that $R^*$ contains a piece of cake of length at least $t - m^*$.
In this case, by considering the $m^*$ indivisible goods and a piece of cake of length exactly $t-m^*$ commonly approved by all agents in $N^*$, \Cref{def:EJR-M} implies the existence of $j\in N^*$ such that $u_j(A) \ge t \ge \floor{t}$.
\end{proof}
Since $\floor{t} > t-1$ for every real number $t$, we have the following corollary.
\begin{corollary}
\label{cor:EJR-M-EJR-1}
EJR-M implies EJR-1.
\end{corollary}
For indivisible-goods instances, EJR-1 reduces to indivisible-goods EJR, since for every positive integer $t$, the smallest integer greater than $t-1$ is $t$.
On the other hand, for cake instances, EJR-1 is weaker than cake EJR.
In the cake setting, Bei et al.~(2022) proved that the MNW rule satisfies cake EJR.
However, in the indivisible-goods setting, the fact that MNW tries to avoid giving utility $0$ to any agent at all costs means that it sometimes attempts to help individual agents at the expense of large deserving groups.
This is formalized in the following proposition.
\begin{proposition}
\label{prop:MNW-EJR}
For any constant $\beta \ge 0$, there exists an indivisible-goods instance in which no MNW allocation satisfies EJR-$\beta$.
\end{proposition}
\begin{proof}
It suffices to prove the statement for every positive integer $\beta$.
Fix a positive integer $\beta$, and let $\gamma = \beta + 2$.
Consider an indivisible-goods instance with $n = \gamma^2 + \gamma$ agents, $m = 2\gamma$ goods, and $\alpha = \gamma+1$.
The first $\gamma^2$ agents all approve goods $g_1,\dots,g_{\gamma}$, while agent $\gamma^2 + i$ only approves good $g_{\gamma + i}$ for $1\le i\le \gamma$.
Notice that the first $\gamma^2$ agents form a $\gamma$-cohesive group, so at least one of them must receive utility no less than $\gamma - \beta = 2$ in an EJR-$\beta$ allocation.
In particular, at least two goods among $g_1,\dots,g_{\gamma}$ must be chosen.
However, every MNW allocation contains $g_{\gamma+1}, g_{\gamma+2}, \dots, g_{2\gamma}$ along with exactly one of $g_1,\dots,g_{\gamma}$.
It follows that no MNW allocation satisfies EJR-$\beta$.
\end{proof}
\subsection{GreedyEJR-M}
\label{sec:GreedyEJR-M}
\Cref{prop:MNW-EJR} implies that the MNW rule cannot guarantee EJR-M or EJR-1 in the indivisible-goods setting, let alone in the mixed-goods setting.
We show next that a greedy approach can be used to achieve these guarantees.
The rule that we use is an adaptation of the \emph{GreedyEJR} rule from the indivisible-goods setting \citep{BredereckFaKa19,PetersPiSk21,ElkindFaIg22}; we therefore call it \emph{GreedyEJR-M} and describe it below.
\begin{framed}
\noindent
\textbf{GreedyEJR-M} \\
\noindent
\emph{Step~1:} Initialize $N' = N$ and $R' = \emptyset$.\\
\noindent
\emph{Step~2:} Let $t^*$ be the largest \emph{nonnegative} real number for which there exist $\emptyset \neq N^*\subseteq N'$ and $R^*\subseteq R$ such that $N^*$ is a $t^*$-cohesive group, $R^*\subseteq R_i$ for all $i\in N^*$, and $s(R^*) = t^*$.
Consider any such pair $(N^*,R^*)$.
Remove $N^*$ from $N'$ and add the part of $R^*$ that is not already in $R'$ to $R'$. \\
\noindent
\emph{Step~3:} If $N' = \emptyset$, return $R'$. Else, go back to Step~2.
\end{framed}
\begin{example}
\label{ex:running}
Consider the instance in \Cref{fig:example-instance}.
We have $n/\alpha = 1$, and Step~2 of GreedyEJR-M chooses $t^* = 1$, along with (as one possibility) $N^* = \{1\}$ and $R^* = \{g_1\}$.
We are left with $N' = \{2\}$, and the next iteration of Step~2 chooses $t^* = 1$, $N^* = \{2\}$, and $R^* = \{g_2\}$.
Finally, the rule returns $R' = \{g_1,g_2\}$.
\end{example}
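To make the rule's mechanics concrete, the following is a brute-force illustrative sketch of GreedyEJR-M restricted to indivisible-goods instances (our own code, not from the paper; for indivisible goods the relevant values of $t^*$ are integers, and we enumerate all agent subsets, so this is practical only for tiny instances):

```python
from itertools import combinations
from math import floor

def greedy_ejr_m_indivisible(approvals, alpha):
    """GreedyEJR-M for indivisible-goods instances, by brute force.

    approvals: list of sets; approvals[i] = goods approved by agent i.
    alpha: size limit on the chosen set of goods.
    """
    n = len(approvals)
    remaining = set(range(n))
    chosen = set()
    while remaining:
        best_t, best_pair = 0, (set(), set())
        for r in range(1, len(remaining) + 1):
            for group in combinations(sorted(remaining), r):
                common = set.intersection(*(approvals[i] for i in group))
                # largest integer t with r >= t*n/alpha and |common| >= t
                t = min(len(common), floor(r * alpha / n))
                if t > best_t:
                    best_t = t
                    best_pair = (set(group), set(sorted(common)[:t]))
        if best_t == 0:
            break   # leftover agents only form 0-cohesive groups
        group, goods = best_pair
        remaining -= group
        chosen |= goods
    return chosen

# Two agents with disjoint approvals {g1} and {g2}, alpha = 2:
print(greedy_ejr_m_indivisible([{"g1"}, {"g2"}], alpha=2))
```

On this hypothetical two-agent instance the sketch removes each agent with $t^* = 1$ and returns $\{g_1, g_2\}$, mirroring the choices in \Cref{ex:running}.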
\begin{theorem}
\label{thm:greedyEJR-M}
The GreedyEJR-M rule satisfies EJR-M (and therefore EJR-1).
\end{theorem}
\begin{proof}
By \Cref{cor:EJR-M-EJR-1}, it suffices to prove the claim for EJR-M.
We break the proof into the following four parts.
\begin{itemize}
\item \emph{The procedure is well-defined.}
To this end, we must show that the largest nonnegative real number $t^*$ in Step~2 always exists.
Observe that for each group of agents $X\subseteq N'$, the set
\begin{equation*}
\begin{split}
T_X \coloneqq \biggl\{\, t \ge 0 \,\biggm|\, &|X| \ge t\cdot \frac{n}{\alpha} \text{ and there exists } \\
&Y\subseteq \bigcap_{i\in X}R_i \text{ with } s(Y) = t \,\biggr\}
\end{split}
\end{equation*}
is a union of a finite number of (possibly degenerate) closed intervals, and is nonempty because $0\in T_X$. Therefore, $T_X$ has a maximum.
The value $t^*$ chosen in Step~2 is then the largest among the maxima of $T_X$ across all $X\subseteq N'$.
\item \emph{The procedure always terminates.}
This is because each iteration of Step~2 removes at least one agent from $N'$.
\item \emph{The procedure returns an allocation $R'$ with $s(R') \le \alpha$.}
Indeed, if an iteration of Step~2 uses value $t^*$, it removes at least $t^*\cdot n/\alpha$ agents from $N'$ and adds a resource of size at most~$t^*$ to $R'$.
Since only $n$ agents can be removed in total, the total size of the resources added to $R'$ across all iterations is at most $\alpha$.
\item \emph{The returned allocation $R'$ satisfies EJR-M.}
Assume for contradiction that for some group $X$, \Cref{def:EJR-M} fails for $X$ and parameter $t$.
Consider the moment just after the procedure removed the last group with parameter $t^* \ge t$ (or the start of the procedure if no such removal occurs).
If no agent in $X$ has been removed by this moment, then $X$ is still contained in $N'$, so a subsequent iteration of Step~2 would have removed $X$ (or some other group) with parameter at least $t$, contradicting our choice of the moment.
Else, some agent $j\in X$ was removed as part of a group with parameter $t^* \ge t$.
In this case, the procedure guarantees that $u_j(R') \ge t^* \ge t$, which means that $X$ satisfies \Cref{def:EJR-M} with parameter $t$, again a contradiction. \qedhere
\end{itemize}
\end{proof}
\subsection{Generalized \RuleX}
Despite the strong representation guarantee provided by GreedyEJR-M, the rule does not admit an obvious polynomial-time implementation.
In the indivisible-goods setting, \citet{PetersSk20} introduced the \emph{Method of Equal Shares\xspace (MES\xspace)}, originally known as \emph{Rule~X}, and showed that it satisfies indivisible-goods \EJR{} and runs in polynomial time.
We now extend their rule to our mixed-goods setting.
At a high level, in \emph{Generalized \rulex}, each agent is given a budget of~$\alpha / n$, which can be spent on buying the resource---each piece of cake has cost equal to its length whereas each indivisible good costs~$1$.
In each step, a piece of cake or an indivisible good that incurs the smallest cost per utility for agents who approve it is chosen, and these agents pay as equally as possible to cover the cost of the chosen resource.
The rule stops once no more cake or indivisible good is affordable.
Note that when the resource consists only of indivisible goods, Generalized \rulex is equivalent to the original MES\xspace of \citet{PetersSk20}.
\begin{framed}
\noindent
\textbf{Generalized \rulex} \\
\noindent
\emph{Step~1:} Initialize $R' = (C', G') = (\emptyset, \emptyset)$ and $b_i = \alpha/n$ for each $i \in N$.\\
\noindent
\emph{Step~2:} Divide the remaining cake~$C$ into intervals $I_1, \dots, I_k$ so that each agent approves each interval either entirely or not at all.
For each interval $ I_j = [x_0, x_1]$, $x \in (x_0, x_1]$, and $\rho \ge 0$, we say that $I_j$ is \emph{$(x, \rho)$-affordable} if
\[
\sum_{i \in N_{I_j}} \min(b_i, (x-x_0)\cdot \rho) = x-x_0,
\]
where $N_{I_j}\subseteq N$ denotes the set of remaining agents who approve $I_j$.
Similarly, for each remaining good $g \in G$ and $\rho \ge 0$, we say that $g$ is \emph{$\rho$-affordable} if
\[
\sum_{i \in N_g} \min(b_i,\rho) = 1,
\]
where $N_g\subseteq N$ denotes the set of remaining agents who approve $g$.
\\
\noindent
\emph{Step~3:} If for every $\rho$, no $\rho$-affordable good or $(x,\rho)$-affordable piece of cake exists, return $R'$.
Else, take either an interval $I_j$ with the smallest $\rho$ along with the largest $x$ such that $I_j$ is $(x, \rho)$-affordable, or a good $g$ with the smallest $\rho$ such that $g$ is $\rho$-affordable, depending on which $\rho$ is smaller.
In the former case, deduct $\min(b_i, (x-x_0)\cdot\rho)$ from $b_i$ for each $i\in N_{I_j}$, and set $C = C \setminus [x_0, x]$ and $C' = C' \cup [x_0, x]$.
In the latter case, deduct $\min(b_i, \rho)$ from $b_i$ for each $i\in N_g$, and set $G = G \setminus \{g\}$ and $G' = G' \cup \{g\}$.
Remove all agents who have run out of budget from $N$, and go back to Step~2.
\end{framed}
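Restricted to unit-cost indivisible goods (where, as noted above, Generalized \rulex coincides with the original MES\xspace), the budgeting logic can be sketched as follows; the helper names are ours, and the cake case, which additionally tracks interval endpoints, is omitted.

```python
def min_rho(budgets):
    """Smallest rho with sum(min(b, rho) for b in budgets) == 1, or None
    if the supporters cannot jointly cover the unit cost."""
    bs = sorted(budgets)
    if sum(bs) < 1:
        return None
    paid = 0.0
    for k, b in enumerate(bs):
        # agents with the k smallest budgets are exhausted; the rest pay rho
        rho = (1 - paid) / (len(bs) - k)
        if rho <= b:
            return rho
        paid += b
    return None

def mes_indivisible(n, approvals, alpha):
    """Sketch of Generalized MES for unit-cost indivisible goods only."""
    budget = [alpha / n] * n
    goods = set().union(*approvals)
    chosen = set()
    while True:
        best = None  # (rho, good, supporters)
        for g in sorted(goods - chosen):
            sup = [i for i in range(n) if g in approvals[i]]
            rho = min_rho([budget[i] for i in sup])
            if rho is not None and (best is None or rho < best[0]):
                best = (rho, g, sup)
        if best is None:
            return chosen  # no remaining good is affordable
        rho, g, sup = best
        for i in sup:
            budget[i] -= min(budget[i], rho)
        chosen.add(g)
```

For example, with four agents of budget $1/2$ each, a good approved by three of them is bought at $\rho = 1/3$, while a good approved by only one agent is never affordable.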
\begin{example}
For the instance in \Cref{fig:example-instance}, each agent starts with a budget of $\alpha/n = 1$.
The first iteration of Step~2 selects the entire cake (with $\rho = 1/2$), and each agent pays $0.9/2 = 0.45$ for this cake.
Since neither agent has enough budget left to buy the indivisible good that she approves (which costs $1$), the procedure terminates with only the cake.
\end{example}
In the instance above, each agent on her own is $1$-cohesive and approves a subset of the resource of size exactly $1$, so the only EJR-M allocation is $\{g_1, g_2\}$.
In particular, the allocation chosen by Generalized MES is not EJR-M.
\begin{proposition}
\label{prop:GMES-EJR-M}
Generalized \rulex does not satisfy EJR-M.
\end{proposition}
Nevertheless, we prove that Generalized \rulex satisfies EJR-1 and, moreover, can be implemented efficiently.
\begin{theorem}
\label{thm:GMES}
Generalized \rulex satisfies EJR-1 and can be implemented in polynomial time.
\end{theorem}
\begin{proof}
First, observe that for each interval $I_j$, provided that $N_{I_j} \ne \emptyset$, the value of $\rho$ chosen in Step~3 will be $\rho = 1/|N_{I_j}|$, and the value of $x$ will be either $x_1$ or the smallest value such that $(x-x_0)\cdot \rho = b_i$ for some $i\in N_{I_j}$, whichever is smaller.
For each indivisible good $g$, the value of $\rho$ can also be computed in polynomial time.\footnote{See Footnote~9 in the extended version of \citet{PetersSk20}'s work.}
After each iteration of Step~3, if the procedure has not terminated, at least one of the following occurs: an entire interval $I_j$ is removed from $C$, an indivisible good $g$ is removed from $G$, or one or more agents run out of budget.
Hence, the procedure can be implemented in polynomial time.
Next, note that whenever a resource of some size $y$ is added to $R'$, the agents together pay a total of $y$.
Since the agents have a total starting budget of $n\cdot(\alpha/n) = \alpha$, Generalized \rulex returns an allocation of size at most $\alpha$.
Finally, we show that the returned allocation $R'$ satisfies EJR-1.
Assume for contradiction that for some $t > 0$, there exists a $t$-cohesive group $N'$ with $u_i(R') \le t-1$ for all $i\in N'$.
If there is still a nonempty piece of cake from $\bigcap_{i \in N'} C_i$ left in $C$ when the procedure terminates, we know that this piece is not affordable, and thus no agent in $N'$ has any budget left.
Similarly, if there is still a good from $\bigcap_{i \in N'} G_i$ left in $G$, it must be that $\sum_{i \in N'} b_i < 1$.
In either case, $\sum_{i \in N'} b_i < 1$ holds at the end of the procedure.
Since $|N'| \ge \frac{tn}{\alpha}$, we have $b_i < \frac{\alpha}{tn}$ for some $i\in N'$.
In particular, agent~$i$ receives a utility of at most $t-1$ but has spent more than $\frac{\alpha}{n} - \frac{\alpha}{tn} = \frac{\alpha}{n} \cdot \frac{t-1}{t}$.
Thus, the cost per utility for $i$ is strictly greater than $\frac{1}{t-1}\left(\frac{\alpha}{n} \cdot \frac{t-1}{t}\right) = \frac{\alpha}{tn}$.
Now, consider the first cake interval or indivisible good added in Step~3 for which the cost per utility for some agent in $N'$ exceeds $\frac{\alpha}{tn}$; the existence of such an interval or good follows from the previous sentence.
Note that the value of $\rho$ in this step must be larger than $\frac{\alpha}{tn}$.
However, since this is the first step in which an agent from $N'$ pays more than $\frac{\alpha}{tn}$ per utility and the utility of each agent in $N'$ is at most $t-1$, each agent in $N'$ must have budget at least $\frac{\alpha}{n} - \frac{\alpha}{tn}\cdot (t-1) = \frac{\alpha}{tn}$ remaining before this step, and there is still a resource of size at least $1$ left in $\bigcap_{i\in N'} R_i$.
Since $|N'| \ge \frac{tn}{\alpha}$, before this step, either there is still an indivisible good from $\bigcap_{i \in N'} G_i$ which is $\rho$-affordable for some $\rho \le \frac{\alpha}{tn}$, or there is still an interval $I_j = [x_0,x_1]$ from $\bigcap_{i \in N'} C_i$ which is $(x_0+\varepsilon, \rho)$-affordable for some $\varepsilon > 0$ and $\rho \le \frac{\alpha}{tn}$.
Either way, this contradicts the fact that Generalized \rulex chooses a resource with $\rho > \frac{\alpha}{tn}$.
\end{proof}
For indivisible-goods instances, Generalized \rulex reduces to the original MES\xspace of \citet{PetersSk20}, which satisfies indivisible-goods EJR.
We prove that the analog holds for cake instances.
\begin{proposition}
\label{prop:GMES-cake-EJR}
For cake instances, Generalized \rulex satisfies cake EJR.
\end{proposition}
\begin{proof}
Consider a cake instance, and assume for contradiction that the returned cake $C'$ does not satisfy cake EJR.
Thus, there exists a real number $t > 0$ and a $t$-cohesive group~$N'$ with $u_i(C') < t$ for all $i \in N'$.
Let $\delta > 0$ be such that $u_i(C') < t-\delta$ for all $i
\in N'$.
Since $(\bigcap_{i \in N'} C_i) \setminus C' \neq \emptyset$ and this piece of cake is not affordable at the end, the budget of all agents in $N'$ must have run out.
Hence, the cost per utility for every agent in $N'$ is strictly greater than $\frac{\alpha}{(t - \delta) n}$.
Now, consider the first cake interval added in Step~3 for which the cost per utility for some agent in $N'$ exceeds $\frac{\alpha}{tn}$; the existence of such an interval follows from the previous sentence since $\frac{\alpha}{(t - \delta) n} > \frac{\alpha}{tn}$.
Note that the value of $\rho$ in this step must be larger than $\frac{\alpha}{tn}$.
However, since this is the first step in which an agent from $N'$ pays strictly more than $\frac{\alpha}{tn}$ per utility and the utility of each agent in $N'$ is at most $t-\delta$, each agent in $N'$ must have budget at least $\frac{\alpha}{n} - \frac{\alpha(t - \delta)}{tn} \ = \frac{\alpha\delta}{tn} > 0$ remaining before this step.
Since $|N'| \ge \frac{tn}{\alpha}$, before this step there is still an interval $I_j = [x_0,x_1]$ from $\bigcap_{i \in N'} C_i$ which is $(x_0+\varepsilon, \rho)$-affordable for some $\varepsilon > 0$ and $\rho \le \frac{\alpha}{tn}$.
This contradicts the fact that Generalized \rulex chooses a resource with $\rho > \frac{\alpha}{tn}$.
\end{proof}
\subsection{Generalized PAV}
In the indivisible-goods setting, a well-studied rule is \emph{proportional approval voting (PAV)}, which chooses an allocation $R'$ that maximizes $\sum_{i\in N}H_{u_i(R')}$, where $H_x \coloneqq 1 + \frac{1}{2} + \dots + \frac{1}{x}$ is the $x$-th harmonic number.
We now show how to generalize PAV to the mixed-goods setting.
To this end, we will use a continuous extension of harmonic numbers due to \citet{Hintze19}, defined as
\[
H_x \coloneqq \sum_{k = 1}^\infty \frac{x}{k(x+k)} = \sum_{k = 1}^\infty \left(\frac{1}{k} - \frac{1}{k+x}\right)
\]
for each real number $x\ge 0$; in particular, these infinite sums converge.
It is clear from the definition that the generalized harmonic numbers indeed extend the original harmonic numbers, and that $H_x > H_y$ for all $x > y \ge 0$.
Moreover, $H_{x+1} - H_x = \frac{1}{x+1}$ for all $x \ge 0$.
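These properties are easy to check numerically. The following Python sketch (ours, for illustration only) truncates the defining series; since the neglected tail beyond $r$ terms is below $x/r$, a moderate $r$ already suffices for the small arguments used here.

```python
def H(x, r=200_000):
    """Truncated generalized harmonic number H_x = sum_{k>=1} x/(k(x+k)).

    The neglected tail is below x/r, so this is accurate to roughly
    1e-5 for x <= 2 -- a rough numerical sketch, not exact arithmetic.
    """
    return sum(x / (k * (x + k)) for k in range(1, r + 1))
```

The truncated values recover $H_1 \approx 1$ and $H_2 \approx 1.5$, satisfy $H_{x+1} - H_x \approx \frac{1}{x+1}$, and confirm the comparison $H_{1.9} + H_{0.9} > H_1 + H_1$ used below.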
\begin{definition}[Generalized PAV]
The \emph{Generalized PAV} rule selects an allocation $R'$ with $s(R') \le \alpha$ that maximizes $\sum_{i\in N} H_{u_i(R')}$.
\end{definition}
For ease of notation, we let $H(R') \coloneqq \sum_{i\in N} H_{u_i(R')}$ for any allocation $R'$, and call $H(R')$ the \emph{GPAV-score} of $R'$.
Given the instance in \Cref{fig:example-instance}, since $H_{1.9} + H_{0.9} > 1.45 + 0.93 > 1 + 1 = H_1 + H_1$, Generalized PAV selects the entire cake together with one of the indivisible goods.
As the only EJR-M allocation in this instance is $\{g_1, g_2\}$, the allocation selected by Generalized PAV is not EJR-M.
\begin{proposition}
\label{prop:GPAV-EJR-M}
Generalized PAV does not satisfy EJR-M.
\end{proposition}
To show that Generalized PAV satisfies EJR-1, we establish a useful lemma on the growth rate of the generalized harmonic numbers.
\begin{lemma}
\label{lem:harmonic-growth}
For any $x \in (0, \infty)$ and $y \in [0,1]$, it holds that
$ H_{x + y} - H_{x} \le \frac{y}{x+y}$.
\end{lemma}
\begin{proof}
First, note that for any positive integer $r$,
\begin{align*}
&\sum_{k = 1}^r \frac{x+y}{k(x+y+k)} - \sum_{k = 1}^r \frac{x}{k(x+k)} \\
&= \sum_{k = 1}^r \left(\frac{1}{k} - \frac{1}{ k + x + y}\right) - \sum_{k = 1}^r \left(\frac{1}{k} - \frac{1}{k + x} \right) \\
&= \sum_{k = 1}^r \left(\frac{1}{k + x} - \frac{1}{k + x + y}\right) \\
&= \sum_{k = 1}^r \frac{y}{(k + x)(k + x + y)} \\
&\le \sum_{k = 1}^r \frac{y}{(k + x + y - 1)(k + x + y)}.
\end{align*}
Hence, we have
\begin{align*}
&H_{x + y} - H_{x} \\
&= \lim_{r \to \infty} \sum_{k = 1}^r \frac{x+y}{k(x+y+k)} - \lim_{r \to \infty} \sum_{k = 1}^r \frac{x}{k(x+k)} \\
&= \lim_{r \to \infty} \left( \sum_{k = 1}^r \frac{x+y}{k(x+y+k)} - \sum_{k = 1}^r \frac{x}{k(x+k)} \right) \\
&\le \lim_{r \to \infty} \sum_{k = 1}^r \frac{y}{(k + x + y - 1)(k + x + y)} \\
&= \sum_{k = 1}^\infty \frac{y}{(k + x + y - 1)(k + x + y)} \\
&= \sum_{k = 1}^\infty \left( \frac{y}{k+x+y-1} - \frac{y}{k+x+y} \right)
= \frac{y}{x+y},
\end{align*}
where the inequality follows from the previous paragraph and the last sum telescopes.
\end{proof}
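As a numerical sanity check of \Cref{lem:harmonic-growth} (ours, not part of the formal argument), one can verify the bound on a small grid of arguments, using a truncated version of the defining series:

```python
def H(x, r=200_000):
    # truncated generalized harmonic number; neglected tail below x/r
    return sum(x / (k * (x + k)) for k in range(1, r + 1))

def lemma_holds(x, y, slack=1e-4):
    """Check H_{x+y} - H_x <= y/(x+y), allowing truncation slack."""
    return H(x + y) - H(x) <= y / (x + y) + slack
```

The bound is tight at $y = 1$, where both sides equal $\frac{1}{x+1}$.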
\begin{theorem}
\label{thm:GPAV}
Generalized PAV satisfies EJR-1.
\end{theorem}
\begin{proof}
Let $R' = (C',G')$ be a Generalized PAV allocation.
By adding a piece of cake approved by no agent to the resource $R$ as well as $R'$ if necessary, we may assume without loss of generality that $s(R') = \alpha$.
Assume also that the cake~$C'$ is represented by the interval $[0,c']$.
Whenever $x+y > c'$, the interval $[x,x+y]$ refers to $[x,c']\cup[0,x+y-c']$, i.e., we cyclically wrap around the cake $C'$.
Suppose for contradiction that for some $t > 0$, there exists a $t$-cohesive group $N'$ with $u_i(R') \le t-1$ for all $i \in N'$.
Hence, there exists either a piece of cake of size $1$ that is approved by all agents in $N'$ but not contained in $R'$, or an indivisible good with the same property.
We assume the latter case; the proof proceeds similarly in the former case.
Denote this good by $g^*$, and let $G'' \coloneqq G' \cup \{g^*\}$ and $R'' \coloneqq (C', G'')$.
We have
\begin{align*}
H(R'') - H(R')
&\ge \sum_{i \in N'} \left(H_{u_i(R') + 1} - H_{u_i(R')}\right) \\
&= \sum_{i \in N'} \frac{1}{u_i(R') + 1} \\
&\ge \frac{\lvert N'\rvert^2}{\sum_{i \in N'}(u_i(R') + 1)} \\
&\ge \frac{\lvert N'\rvert^2}{\lvert N'\rvert\cdot (t-1) + \lvert N'\rvert}
= \frac{|N'|}{t}
\ge \frac{n}{\alpha},
\end{align*}
where the second inequality follows from the inequality of arithmetic and harmonic means and the last inequality from the definition of a $t$-cohesive group.
In other words, adding~$g^*$ increases the GPAV-score of $R'$ by at least $n/\alpha$.
For each good $g\in G$, denote by $N_g\subseteq N$ the set of agents who approve it.
For each $g \in G''$, we have
\begin{align*}
H(R'') - H(R''\setminus \{g\})
&= \sum_{i \in N_g} \left(H_{u_i(R'')} - H_{u_i(R'') - 1}\right) \\
&= \sum_{i \in N_g} \frac{1}{u_i(R'')}.
\end{align*}
Letting $N_+$ consist of the agents~$i$ with $u_i(R'') > 0$, we get
\begin{align}
\sum_{g \in G''} (H(R'') - H(R''\setminus \{g\})) \nonumber
&= \sum_{g \in G''} \sum_{i \in N_g} \frac{1}{u_i(R'')} \nonumber \\
&= \sum_{i \in N_+}\sum_{g \in G'' \cap G_i} \frac{1}{u_i(R'')} \nonumber \\
&= \sum_{i\in N_+}\frac{u_i(G'')}{u_i(R'')} \label{eq:GPAV-difference-good} \\
&\le \sum_{i\in N_+} 1 \le n. \nonumber
\end{align}
If there is a good $g \in G''$ such that $H(R'') - H(R''\setminus \{g\}) < n/\alpha$ (clearly, $g\ne g^*$), we can replace $g$ with~$g^*$ in~$R'$ and obtain a higher GPAV-score, contradicting the definition of~$R'$.
Hence, we may assume that $H(R'') - H(R''\setminus \{g\}) \ge n/\alpha$ for every good $g \in G''$.
It follows that
\[
n \ge \sum_{g \in G''} (H(R'') - H(R''\setminus \{g\})) \ge |G''|\cdot \frac{n}{\alpha}.
\]
Therefore, $|G''|\le \alpha$. Since the total size of $R'$ is $\alpha$ and $|G'| = |G''| - 1$, it follows that $c' \ge 1$.
Now, for any $x\in C'$, it holds that
\begin{align}
H(R'') - &H(R''\setminus [x, x + 1]) \nonumber \\
&= \sum_{i \in N} \left(H_{u_i(R'')} - H_{u_i(R'') - u_i([x, x + 1]) }\right) \nonumber \\
&\le \sum_{i \in N_+} \frac{u_i([x, x + 1])}{u_i(R'')}, \label{eq:GPAV-difference-cake}
\end{align}
where the inequality follows from \Cref{lem:harmonic-growth}.
Using \eqref{eq:GPAV-difference-good} and \eqref{eq:GPAV-difference-cake}, we get
\begin{align}
&\sum_{g \in G''} (H(R'') - H(R''\setminus \{g\})) \nonumber \\
&\qquad+ \int_{C'} (H(R'') - H(R''\setminus [x, x + 1])) \diff x \nonumber \\
&\le \sum_{i\in N_+}\frac{u_i(G'')}{u_i(R'')} + \int_{C'} \left(\sum_{i \in N_+}\frac{u_i([x, x + 1])}{u_i(R'')}\right) \diff x \nonumber \\
&= \sum_{i\in N_+}\frac{u_i(G'')}{u_i(R'')} + \sum_{i\in N_+}\left(\int_{C'} \frac{u_i([x, x + 1])}{u_i(R'')} \diff x\right) \nonumber \\
&= \sum_{i \in N_+} \left[\frac{1}{u_i(R'')}\left(u_i(G'') + \int_{C'} u_i([x, x + 1])\diff x\right) \right] \nonumber \\
&= \sum_{i \in N_+} \left[\frac{1}{u_i(R'')}\left(u_i(G'') + u_i(C')\right)\right] \le \sum_{i\in N} 1 = n. \label{eq:GPAV}
\end{align}
Here, we have $\int_{C'} u_i([x, x + 1])\diff x = u_i(C')$ because
\begin{align*}
\int_{C'} u_i([x,x+1])\diff x
&= \int_{C'} \ell(C_i\cap [x,x+1])\diff x \\
&= \int_{C_i\cap C'}\ell([y-1,y])\diff y \\
&= \int_{C_i\cap C'}1 \diff y \\
&= \ell(C_i\cap C')
= u_i(C'),
\end{align*}
where the second equality holds because a point $y\in C_i$ belongs to the interval $[x,x+1]$ if and only if $x\in[y-1, y]$.
If it were the case that $H(R'') - H(R''\setminus [x, x + 1]) \ge n/\alpha$ for every $x\in C'$, we would have
\begin{align*}
&\sum_{g \in G''} (H(R'') - H(R''\setminus \{g\})) \\
&\qquad+ \int_{C'} (H(R'') - H(R''\setminus [x, x + 1])) \diff x\\
&\ge |G''|\cdot\frac{n}{\alpha} + c'\cdot \frac{n}{\alpha} = (\alpha+1)\cdot \frac{n}{\alpha} > n,
\end{align*}
a contradiction with \eqref{eq:GPAV}.
Thus, it must be that $H(R'') - H(R''\setminus [x, x + 1]) < n/\alpha$ for some $x\in C'$.
By replacing the cake $[x,x+1]$ in $R'$ with the good $g^*$, we therefore obtain a higher GPAV-score than that of $R'$.
This yields the final contradiction and completes the proof.
\end{proof}
In contrast to Generalized \rulex, Generalized PAV does not satisfy EJR in cake sharing.
\begin{proposition}
\label{prop:GPAV-cake-EJR}
For cake instances, Generalized PAV does not satisfy cake EJR.
\end{proposition}
To prove this statement, we use the following proposition.
\begin{proposition}[\citet{BeiLuSu22}]
\label{prop:maximize-EJR-M}
Let $f:\mathbb{R}_{\ge 0}\rightarrow [-\infty,\infty)$ be a strictly increasing function which is differentiable in $(0,\infty)$.
For cake sharing, if a rule that always chooses an allocation~$R'$ maximizing $\sum_{i\in N} f(u_i(R'))$ satisfies cake EJR, then there exists a constant $c$ such that $f'(x) = c/x$ for all $x\in (0,\infty)$.\footnote{This is Theorem~A.8 in the extended version of their work.
Bei et al.~normalized the length of the cake to $1$, but the same proof works in our setting.}
\end{proposition}
\begin{proof}[Proof of \Cref{prop:GPAV-cake-EJR}]
For a positive integer $r$, one can check that the derivative with respect to $x$ of $\sum_{k=1}^r\frac{x}{k(x+k)}$ is $\sum_{k=1}^r\frac{1}{(x+k)^2}$, which converges as $r\rightarrow\infty$.
This means that $H_x = \sum_{k=1}^\infty \frac{x}{k(x+k)}$ is differentiable as a function of~$x$, and its derivative is $\sum_{k=1}^\infty\frac{1}{(x+k)^2}$.
In particular, there is no constant $c$ such that $H'_x = c/x$ for all $x\in (0, \infty)$---for example, this can be seen by observing that, as $x$ approaches $0$ from above, $H'_x$ approaches $\sum_{k = 1}^\infty 1/k^2 = \pi^2/6$ rather than $\infty$.
By \Cref{prop:maximize-EJR-M}, Generalized PAV does not satisfy cake EJR.
\end{proof}
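The limiting behavior used in this proof can also be checked numerically (our sketch; the derivative series is truncated, and the names are ours):

```python
import math

def dH(x, r=200_000):
    """Truncated derivative of the generalized harmonic number:
    H'_x = sum_{k>=1} 1/(x+k)^2, with neglected tail below 1/r."""
    return sum(1 / (x + k) ** 2 for k in range(1, r + 1))
```

As $x$ approaches $0$ from above, $\mathtt{dH}(x)$ approaches $\pi^2/6 \approx 1.645$ rather than diverging like $c/x$, matching the argument above.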
\section{Proportionality Degree}\label{sec:proportionality-degree}
In addition to the axiomatic study of representation in terms of criteria like \EJR{M} and \EJR{1}, another relevant concept for cohesive groups is the \emph{proportionality degree}, which measures the average utility of the agents in each such group \citep{Skowron21}.
In this section, we first derive tight bounds on the proportionality degree implied by \EJR{M} and \EJR{1}, and then investigate the proportionality degree of the rules we studied in \Cref{sec:EJR-notions}.
\begin{definition}[Average satisfaction]
Given an instance and an allocation~$A$, the \emph{average satisfaction} of a group of agents~$N' \subseteq N$ with respect to~$A$ is $\frac{1}{|N'|} \cdot \sum_{i \in N'} u_i(A)$.
\end{definition}
\begin{definition}[Proportionality degree]
Fix a function $f \colon \mathbb{R}_{> 0} \to \mathbb{R}_{\ge 0}$.
A rule~$\mathcal{M}$ has a \emph{proportionality degree} of~$f$ if for each instance~$I$, each allocation~$A$ that $\mathcal{M}$ outputs on $I$, and each $t$-cohesive group of agents~$N^*$, the average satisfaction of~$N^*$ with respect to $A$ is at least~$f(t)$, i.e.,
\[
\frac{1}{|N^*|} \cdot \sum_{i \in N^*} u_i(A) \geq f(t).
\]
\end{definition}
For indivisible goods, \citet{SanchezFernandezElLa17} showed that EJR implies a proportionality degree of $\frac{t-1}{2}$.
\subsection{Proportionality Degree Implied by EJR-M/1}
Our focus in this subsection is to establish tight bounds on the proportionality degree implied by \EJR{M} and \EJR{1}.
Observe that for $t < 1$, a $t$-cohesive group may have an average satisfaction of $0$ in an EJR-M or EJR-1 allocation.
Indeed, if $\alpha = t$ and the resource consists only of a single indivisible good, which is approved by all $n$ agents, then the set of all agents is $t$-cohesive, but the empty allocation is EJR-M and EJR-1.
We therefore assume $t\ge 1$ for our results from here on.
We first show that the proportionality degree implied by EJR-M is $\floor{t} \cdot \left( 1 - \frac{\floor{t} + 1}{2t} \right)$, beginning with the lower bound.
Note that this quantity is roughly $t/2$.
\begin{theorem}\label{thm:EJR-M-LB-average-satisfaction}
Given any instance and any real number \mbox{$t\ge 1$}, let $N^* \subseteq N$ be a $t$-cohesive group and $A$ be an \mbox{EJR-M} allocation.
The average satisfaction of $N^*$ with respect to $A$ is at least $\floor{t} \cdot \left( 1 - \frac{\floor{t} + 1}{2t} \right)$.
\end{theorem}
The high-level idea behind the proof of \Cref{thm:EJR-M-LB-average-satisfaction} is that, given a $t$-cohesive group~$N^*$ and an \EJR{M} allocation, a $\frac{t - \floor{t}}{t}$ fraction of the agents in $N^*$ are guaranteed a utility of at least~$\floor{t}$.
The remaining agents can then be partitioned into $\floor{t}$ disjoint subsets so that each subset consists of a $1/t$ fraction of the agents in $N^*$ and the guaranteed utilities for these subsets drop arithmetically from~$\floor{t}-1$ to~$0$.
\begin{proof}[Proof of \Cref{thm:EJR-M-LB-average-satisfaction}]
For ease of notation, let~$r \coloneqq n/\alpha$ and~$n^* \coloneqq |N^*| \ge \ceiling{tr}$.
Since $N^*$ is $t$-cohesive, by \Cref{prop:EJR-M-floor}, some agent~$i_1 \in N^*$ gets utility at least~$\floor{t}$ from the allocation~$A$.
If $|N^* \setminus \{i_1\}| \geq \floor{t} \cdot r$, then since $N^* \setminus \{i_1\}$ is $\floor{t}$-cohesive, \Cref{prop:EJR-M-floor} implies that another agent~$i_2 \neq i_1$ gets utility at least~$\floor{t}$ from~$A$.
Applying this argument repeatedly, as long as there are at least $\ceiling*{\floor{t} \cdot r}$~agents left, \Cref{prop:EJR-M-floor} implies that one of them gets utility at least~$\floor{t}$.
Let $N'_{\floor{t}}$ consist of the agents with guaranteed utility $\floor{t}$ from this argument, and note that $|N'_{\floor{t}}| = |N^*| - \ceiling{\floor{t} \cdot r} + 1 \ge \ceiling{tr} - \ceiling{\floor{t} \cdot r} + 1$.
Let $\widehat{N} \coloneqq N^* \setminus N'_{\floor{t}}$; we have $|\widehat{N}| = \ceiling{\floor{t} \cdot r} - 1$.
Denote by~$N_{\floor{t}}$ an arbitrary subset of~$N'_{\floor{t}}$ of size exactly $\ceiling{tr} - \ceiling{\floor{t} \cdot r} + 1$.
Now, let us consider the agents in~$\widehat{N}$.
Applying an argument similar to the one in the previous paragraph but using $(\floor{t}-1)$-cohesiveness, we find that $\widehat{N}$ contains at least $\ceiling*{\floor{t} \cdot r} - \ceiling*{(\floor{t} - 1) \cdot r}$ agents with a utility of at least~$\floor{t} - 1$ each; let these agents form~$N_{\floor{t}-1}$.
Continuing inductively, we can partition $\widehat{N}$ into~$\floor{t}$ pairwise disjoint sets $N_{\floor{t} - 1}, N_{\floor{t} - 2}, \dots, N_1, N_0$ such that for each~$j \in \{0, 1, \dots, \floor{t} - 1\}$, every agent in~$N_j$ gets utility at least~$j$ from the allocation~$A$.
For each~$j \in \{1, 2, \dots, \floor{t}\}$, it holds that $\left| \bigcup_{k = 0}^{j-1} N_k \right| = \ceiling{jr} - 1$.
Furthermore, we have
\begin{align*}
j \cdot \ceiling{tr} &\geq j \cdot t \cdot r \\
&= t \cdot (jr + 1) - t \geq t \cdot \ceiling{jr} - t = t \cdot (\ceiling{jr} - 1),
\end{align*}
which implies that
\[
\frac{\left| \bigcup_{k = 0}^{j-1} N_k \right|}{\ceiling{tr}} = \frac{\ceiling{jr} - 1}{\ceiling{tr}} \leq \frac{j}{t}.
\]
Since $\left| \bigcup_{k = 0}^{\floor{t}} N_k \right| = \ceiling{tr}$, it follows that
\begin{align}\label{eq:EJR-M-agents-fraction}
\frac{\left| \bigcup_{k = j}^{\floor{t}} N_k \right|}{\ceiling{tr}}
\geq \frac{t - j}{t}
&= \frac{t - \floor{t}}{t} + \frac{\floor{t} - j}{t} \nonumber \\
&= \frac{t - \floor{t}}{t} + \sum_{k = j}^{\floor{t}-1} \frac{1}{t}.
\end{align}
With this relationship in hand, we can bound the average satisfaction of $N_{\floor{t}} \cup \widehat{N} = \bigcup_{k = 0}^{\floor{t}} N_k$ as
\begin{align*}
&\frac{1}{\left| \bigcup_{k = 0}^{\floor{t}} N_k \right|} \cdot \sum_{i \in \bigcup_{k = 0}^{\floor{t}} N_k} u_i(A) \\
&\quad\geq \frac{1}{\ceiling{tr}} \cdot \left( \sum_{k = 0}^{\floor{t}} \left| N_k \right| \cdot k \right) \\
&\quad= \sum_{k = 0}^{\floor{t}} \frac{\left| N_k \right|}{\ceiling{tr}} \cdot k \\
&\quad= \sum_{d = 1}^{\floor{t}} \sum_{k = d}^{\floor{t}} \frac{\left| N_k \right|}{\ceiling{tr}} \\
&\quad\geq \sum_{d = 1}^{\floor{t}} \left( \frac{t - \floor{t}}{t} + \sum_{k = d}^{\floor{t}-1} \frac{1}{t} \right) \\
&\quad= \frac{t - \floor{t}}{t} \cdot \floor{t} + \sum_{d = 1}^{\floor{t}} \sum_{k = d}^{\floor{t}-1} \frac{1}{t} \\
&\quad= \frac{t - \floor{t}}{t} \cdot \floor{t} + \frac{1}{t} \cdot \frac{\floor{t} \cdot (\floor{t} - 1)}{2} \\
&\quad= \frac{\floor{t}}{t} \cdot \frac{2t - \floor{t} - 1}{2} = \floor{t} \cdot \left( 1 - \frac{\floor{t}+1}{2t} \right),
\end{align*}
where the first inequality holds because each agent in $N_k$ gets utility at least $k$ and the second inequality follows from~\eqref{eq:EJR-M-agents-fraction}.
Since every agent in~$N'_{\floor{t}} \setminus N_{\floor{t}}$ gets utility at least $\floor{t}$, the average satisfaction of $N'_{\floor{t}} \setminus N_{\floor{t}}$ is at least~$\floor{t} \ge \floor{t} \cdot \left( 1 - \frac{\floor{t}+1}{2t} \right)$.
As the average satisfaction of~$N^*$ is a convex combination of the corresponding quantities for $N'_{\floor{t}} \setminus N_{\floor{t}}$ and $N_{\floor{t}} \cup \widehat{N}$, it is at least~$\floor{t} \cdot \left( 1 - \frac{\floor{t}+1}{2t} \right)$, as desired.
\end{proof}
We next give a matching upper bound.
\begin{theorem}\label{thm:EJR-M-UB-average-satisfaction}
For any real numbers $t \geq 1$ and~$\varepsilon > 0$, there exists an instance, a $t$-cohesive group~$N^*$, and an \mbox{EJR-M} allocation~$A$ such that the average satisfaction of $N^*$ with respect to $A$ is at most $\floor{t} \cdot \left( 1 - \frac{\floor{t}+1}{2t} \right) + \varepsilon$.
\end{theorem}
We do not prove \Cref{thm:EJR-M-UB-average-satisfaction} directly, as we will establish a stronger statement later in \Cref{thm:prop-degree-GreedyEJR-M}.
Next, we show that the proportionality degree implied by \EJR{1} is $\frac{t - 2 + 1/t}{2} = \frac{(t-1)^2}{2t}$, which is slightly lower than that implied by \EJR{M} for every $t > 1$ (the two guarantees coincide at $t = 1$).
For the lower bound, we use a similar idea as in \Cref{thm:EJR-M-LB-average-satisfaction}, but we need to be more careful about agents with low utility guarantees.
In particular, even when the guarantee provided by the EJR-1 condition is negative, the actual utility is always nonnegative, so we need to ``round up'' the EJR-1 guarantee appropriately.
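As a numerical illustration (ours, not part of the formal development), the two closed-form guarantees can be tabulated and compared directly:

```python
import math

def ejr_m_degree(t):
    """Proportionality degree implied by EJR-M:
    floor(t) * (1 - (floor(t) + 1) / (2t))."""
    f = math.floor(t)
    return f * (1 - (f + 1) / (2 * t))

def ejr_1_degree(t):
    """Proportionality degree implied by EJR-1: (t-1)^2 / (2t)."""
    return (t - 1) ** 2 / (2 * t)
```

For example, at $t = 2$ the two bounds are $\frac{1}{2}$ and $\frac{1}{4}$, and both behave like $t/2$ as $t$ grows.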
\begin{theorem}\label{thm:EJR-1-LB-average-satisfaction}
Given any instance and any real number \mbox{$t\ge 1$}, let $N^* \subseteq N$ be a $t$-cohesive group and $A$ be an \mbox{EJR-1} allocation.
The average satisfaction of $N^*$ with respect to $A$ is greater than $\frac{t-2+1/t}{2}$.
\end{theorem}
To prove this \namecref{thm:EJR-1-LB-average-satisfaction}, we will use the following \namecref{claim:average}, which provides a lower bound for the average of a nonincreasing and nonnegative sequence with a particular structure.
\begin{claim}\label{claim:average}
Let~$r > 0$ and~$t \geq 1$ be real numbers.
Consider any nonincreasing and nonnegative sequence
\[
t-1, a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}, b_1, b_2, \dots, b_{\ceiling{r}-1},
\]
in which $a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}$ forms an arithmetic subsequence with common difference $-1/r$.
If $t-1 - a_1 \leq 1/r$, then the average of the entire sequence is at least $\frac{t-2+1/t}{2}$.
\end{claim}
\begin{proof}
We start by showing that the average of the subsequence $t-1, a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}$ is at least $\frac{t-1}{2}$.
The bound holds trivially if~$\ceiling{tr} - \ceiling{r} = 0$; we therefore assume that $\ceiling{tr} - \ceiling{r} \geq 1$.
Let us continually decrease each of the numbers $a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}$ by the same amount until (at least) one of the following two cases occurs:
\begin{itemize}
\item \underline{Case~1}:
The difference between~$t-1$ and~$a_1$ becomes~$1/r$, i.e., $a_1 = t-1 - 1/r$.
Note that $a_{\ceiling{tr}-\ceiling{r}}$ is still nonnegative in this case.
\item \underline{Case~2}:
$a_{\ceiling{tr} - \ceiling{r}}$ becomes~$0$.
Note that the difference between~$t-1$ and~$a_1$ is still at most~$1/r$.
\end{itemize}
Clearly, the average of the subsequence in question $t-1, a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}$ does not increase during this process.
Thus, it suffices to show that in each of the above two cases, this average is at least~$\frac{t-1}{2}$ after the process.
\begin{itemize}
\item In Case~1, the subsequence $t-1, a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}$ is now an arithmetic sequence, so its average is $\frac{(t-1) + a_{\ceiling{tr} - \ceiling{r}}}{2} \geq \frac{t-1}{2}$, where the inequality follows from the fact that $a_{\ceiling{tr} - \ceiling{r}} \geq 0$ in this case.
\item In Case~2, consider the arithmetic sequence~$(d_k)_{k = 0}^{\ceiling{tr} - \ceiling{r}}$ with $d_0 = t-1$ and $d_{\ceiling{tr} - \ceiling{r}} = 0$.
Let~$-\beta$ be its common difference, so~$\beta$ is nonnegative.
On the one hand, we have
\[
t - 1 = d_0 = d_{\ceiling{tr} - \ceiling{r}} + \beta \cdot (\ceiling{tr} - \ceiling{r}) = \beta \cdot (\ceiling{tr} - \ceiling{r}).
\]
On the other hand, we have
\begin{align*}
t-1 &= (t-1 - a_1) + a_1 \\
&= (t-1 - a_1) + (\ceiling{tr} - \ceiling{r} - 1) \cdot 1/r \\
&\leq (\ceiling{tr} - \ceiling{r}) \cdot 1/r,
\end{align*}
where the inequality holds because $t-1 - a_1 \leq 1/r$ in Case~2.
As a result, we have $\beta \cdot (\ceiling{tr} - \ceiling{r}) \leq (\ceiling{tr} - \ceiling{r}) \cdot 1/r$, that is, $\beta \leq 1/r$.
Hence, each term of the sequence $t-1, a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}$ is at least as large as the corresponding term of the sequence~$(d_k)_{k = 0}^{\ceiling{tr} - \ceiling{r}}$.
We conclude that the average of $t-1, a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}$ is at least that of $(d_k)_{k = 0}^{\ceiling{tr} - \ceiling{r}}$, which is $\frac{d_0 + d_{\ceiling{tr} - \ceiling{r}}}{2} = \frac{t-1}{2}$.
\end{itemize}
In both cases, we have proven that the average of the sequence $t-1, a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}$ is at least $\frac{t-1}{2}$.
Next, we show that the average of the entire sequence
\[t-1, a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}, b_1, b_2, \dots, b_{\ceiling{r}-1}\]
is at least~$\frac{t-2+1/t}{2}$.
This can be done by taking all $b_i$'s to be $0$ and applying the lower bound on the average of $t-1, a_1, a_2, \dots, a_{\ceiling{tr} - \ceiling{r}}$ that we previously computed:
\begin{align*}
&\frac{1}{1 + (\ceiling{tr}-\ceiling{r}) + (\ceiling{r}-1)} \cdot \frac{t-1}{2} \cdot (1 + (\ceiling{tr} - \ceiling{r})) \\
&\qquad\qquad\qquad\qquad\qquad= \frac{1 + \ceiling{tr} - \ceiling{r}}{\ceiling{tr}} \cdot \frac{t-1}{2} \\
&\qquad\qquad\qquad\qquad\qquad= \left( 1 + \frac{1 - \ceiling{r}}{\ceiling{tr}} \right) \cdot \frac{t-1}{2} \\
&\qquad\qquad\qquad\qquad\qquad\geq \left( 1 + \frac{1 - (r+1)}{\ceiling{tr}} \right) \cdot \frac{t-1}{2} \\
&\qquad\qquad\qquad\qquad\qquad= \left( 1 - \frac{r}{\ceiling{tr}} \right) \cdot \frac{t-1}{2} \\
&\qquad\qquad\qquad\qquad\qquad\geq \left( 1 - \frac{r}{tr} \right) \cdot \frac{t-1}{2} \\
&\qquad\qquad\qquad\qquad\qquad= \frac{(t-1)^2}{2t} = \frac{t-2+1/t}{2}.
\end{align*}
The claim is thus proven.
\end{proof}
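The claim can also be sanity-checked numerically. The following sketch (illustrative only, not part of the formal argument) builds the extremal sequence from the claim for a few pairs $(t, r)$ and verifies that its average is at least $\frac{t-2+1/t}{2}$; the function name is ours.

```python
import math

def claim_average_bound(t, r):
    """Build the sequence from the claim: t-1, followed by
    ceil(t*r) - ceil(r) terms decreasing arithmetically in steps
    of 1/r (clamped at 0 to keep terms nonnegative), followed by
    ceil(r) - 1 zeros.  Return its average and the claimed bound."""
    m = math.ceil(t * r) - math.ceil(r)          # number of a_j terms
    seq = [t - 1]
    seq += [max(0.0, t - 1 - j / r) for j in range(1, m + 1)]
    seq += [0.0] * (math.ceil(r) - 1)            # the b_i terms, set to 0
    avg = sum(seq) / len(seq)                    # len(seq) == ceil(t*r)
    bound = (t - 2 + 1 / t) / 2
    return avg, bound

for t, r in [(2, 3), (1.5, 2), (3, 1.2), (4.7, 5.3)]:
    avg, bound = claim_average_bound(t, r)
    assert avg >= bound - 1e-12, (t, r, avg, bound)
```

For example, with $t = 2$ and $r = 3$ the sequence is $1, \frac{2}{3}, \frac{1}{3}, 0, 0, 0$ with average $\frac{1}{3}$, comfortably above the bound $\frac{1}{4}$.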
We are now ready to establish \Cref{thm:EJR-1-LB-average-satisfaction}.
\begin{proof}[Proof of \Cref{thm:EJR-1-LB-average-satisfaction}]
For notational convenience, let~$r \coloneqq n/\alpha$ and~$n^* \coloneqq |N^*|$.
We have $|N^*| = n^* \geq t r$, where the inequality holds because $N^*$ is $t$-cohesive.
\EJR{1} implies that some agent~$i_1 \in N^*$ gets utility greater than~$t-1$ from the allocation~$A$.
If $|N^* \setminus \{i_1\}| \geq t r$, then since there still exists a subset of the resource of size at least $t$ commonly approved by the agents in $N^* \setminus \{i_1\}$, \EJR{1} implies that another agent~$i_2 \neq i_1$ gets utility greater than~$t-1$ from~$A$.
Applying this argument repeatedly, as long as there are at least~$t \cdot r$ agents left, \EJR{1} implies that one of them gets utility greater than~$t-1$.
Let $N_{t-1}$ consist of the agents with guaranteed utility greater than $t-1$ from this argument.
Let $\widehat{N} \coloneqq N^* \setminus N_{t-1}$ and $\widehat{n} \coloneqq |\widehat{N}|$, and note that $\widehat{n} = \ceiling{tr} - 1$.
Now, let us consider the agents in~$\widehat{N}$.
Since $|\widehat{N}| = \widehat{n} \ge \frac{\widehat{n}}{r} \cdot r$ and $s\left( \bigcap_{i \in \widehat{N}} R_i \right) \geq t > \frac{\widehat{n}}{r}$, the agents in~$\widehat{N}$ form an $\frac{\widehat{n}}{r}$-cohesive group.
By \EJR{1}, some agent in~$\widehat{N}$ gets utility greater than~$\frac{\widehat{n}}{r} - 1$.
Continuing inductively, the guaranteed utility drops arithmetically with a common difference of~$1/r$.
Note also that an agent's actual utility is always nonnegative.
To calculate the average satisfaction of the $t$-cohesive group~$N^*$, we first focus on the agents in $\widehat{N}$ along with agent~$i_1$ discussed earlier in the proof.
The average satisfaction of these agents is
\begin{align*}
\frac{1}{1 + \widehat{n}} &\cdot \left( u_{i_1}(A) + \sum_{i \in \widehat{N}} u_i(A) \right) \\
&> \frac{1}{1 + \widehat{n}} \cdot \left( t-1 + \sum_{i \in \widehat{N}} u_i(A) \right) \\
&> \frac{1}{1 + \widehat{n}} \cdot \left( t-1 + \sum_{j = \ceiling{r}}^{\widehat{n}} \left( \frac{j}{r} - 1 \right) + \sum_{j = 1}^{\ceiling{r} - 1} 0 \right) \\
&\geq \frac{t-2+1/t}{2}.
\end{align*}
The last inequality is due to \Cref{claim:average}: We have a nonincreasing and nonnegative sequence with $t-1$ as the first element, followed by a decreasing arithmetic sequence with $\ceiling{tr} - \ceiling{r}$ terms whose common difference is~$-1/r$ and whose first term, $\frac{\ceiling{tr}-1}{r} - 1$, is at most~$1/r$ away from~$t-1$, and then followed by $\ceiling{r} - 1$ zeros.
The average satisfaction of~$N_{t-1} \setminus \{i_1\}$, if this set is not empty, is greater than~$t-1$.
As the average satisfaction of~$N^*$ is a convex combination of the corresponding quantities for $N_{t-1} \setminus \{i_1\}$ and $\widehat{N} \cup \{i_1\}$, it is greater than~$\frac{t-2+1/t}{2}$, as desired.
\end{proof}
We now derive a matching upper bound.
\begin{theorem}\label{thm:EJR-1-UB-average-satisfaction}
For any real numbers $t \geq 1$ and~$\varepsilon > 0$, there exists an instance, a $t$-cohesive group~$N^*$, and an \mbox{EJR-1} allocation~$A$ such that the average satisfaction of $N^*$ with respect to $A$ is at most $\frac{t-2+1/t}{2} + \varepsilon$.
\end{theorem}
\begin{proof}
Consider a cake instance with a sufficiently large number of agents~$n$ (to be specified later).
Let~$\alpha = t$.
Thus, we have $|N| = n = t \cdot n/\alpha$.
The cake is given by the interval $[0, 2t]$, and the agents' preferences are as follows.
\begin{itemize}
\item Each agent~$i \in \left\{ 1, 2, \dots, \ceiling*{n/\alpha}-1 \right\}$ approves the interval~$[0, t]$.
\item Each agent~$i \in \left\{ \ceiling*{n/\alpha}, \ceiling*{n/\alpha}+1, \dots, n \right\}$ approves the interval~$\left[ 0, t + \frac{i - n/\alpha}{n/\alpha} + \delta \right]$, where~$\delta \in (0,1)$ is sufficiently small (to be specified later).
\end{itemize}
Since all $n$ agents approve the interval~$[0, t]$, they form a $t$-cohesive group~$N$.
We claim that allocation~$A = [t, 2t]$, which has size $t = \alpha$, satisfies \EJR{1}.
Consider a $t'$-cohesive group for some value of $t' > 0$.
If~$t' \in (0, 1)$, the requirement of \EJR{1} is trivially fulfilled.
Since $n = t\cdot n/\alpha$, we may therefore assume that $t'\in [1, t]$.
Consider agent~$\ceiling*{t' \cdot n/\alpha} \in N$, who approves the interval~$\left[ 0, t + \frac{\ceiling*{t' \cdot n/\alpha} - n/\alpha}{n/\alpha} + \delta \right]$; this agent gets utility at least $\frac{\ceiling*{t' \cdot n/\alpha} - n/\alpha}{n/\alpha} + \delta \geq \frac{t' \cdot n/\alpha - n/\alpha}{n/\alpha} + \delta > t' - 1$ from the allocation~$A$.
Since every $t'$-cohesive group contains at least~$\ceiling*{t' \cdot n/\alpha}$ agents, it must contain an agent who gets utility greater than~$t' - 1$ from~$A$.
This means that $A$ satisfies \EJR{1}, as claimed.
The average satisfaction of the $t$-cohesive group~$N$ with respect to the \EJR{1} allocation~$A$ is
\begin{align*}
&\frac{1}{|N|} \cdot \sum_{i \in N} u_i(A) = \frac{1}{n} \cdot \sum_{i = \ceiling{n/\alpha}}^{n} \left( \frac{i - n/\alpha}{n/\alpha} + \delta \right) \\
&= \frac{\alpha}{n^2} \cdot \sum_{i = \ceiling{n/\alpha}}^{n} (i - n/\alpha) + \frac{1}{n} \cdot \sum_{i = \ceiling{n/\alpha}}^{n} \delta \\
&= \frac{\alpha}{n^2} \cdot \frac{(\ceiling{n/\alpha} - n/\alpha) + (n - n/\alpha)}{2} \cdot (n - \ceiling{n/\alpha} + 1) \\
&\quad+ \frac{\delta}{n} \cdot (n - \ceiling{n/\alpha} + 1) \\
&\leq \frac{\alpha}{n^2} \cdot \frac{(n - n/\alpha + 1)^2}{2} + \frac{\delta}{n} \cdot (n - n/\alpha + 1) \\
&= \frac{\alpha - 2 + 1/\alpha}{2} + \frac{\delta \cdot (\alpha - 1)}{\alpha} + \frac{\alpha}{2n^2} + \frac{\alpha - 1 + \delta}{n} \\
&= \frac{t - 2 + 1/t}{2} + \frac{\delta \cdot (t-1)}{t} + \frac{t}{2n^2} + \frac{t-1+\delta}{n},
\end{align*}
where the inequality holds because $\ceiling{n/\alpha} - n/\alpha \le 1$ and $-\ceiling{n/\alpha} \le -n/\alpha$.
Finally, we choose a sufficiently large~$n$ and a sufficiently small~$\delta$ so that $\frac{\delta \cdot (t-1)}{t} + \frac{t}{2n^2} + \frac{t-1+\delta}{n} \leq \varepsilon$; this ensures that the average satisfaction of $N$ is at most $\frac{t -2+1/t}{2} + \varepsilon$, as desired.
\end{proof}
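The upper-bound construction can be instantiated numerically. The sketch below (illustrative only; the function name is ours) computes the average satisfaction of $N$ under the allocation $A = [t, 2t]$ for concrete values of $n$ and $\delta$, and checks it against the bound plus the slack term derived at the end of the proof.

```python
import math

def cake_average(t, n, delta):
    """Average satisfaction of the full group N in the cake instance
    from the proof: alpha = t, cake [0, 2t], allocation A = [t, 2t].
    Agents i < ceil(n/alpha) approve [0, t] and get utility 0 from A;
    agent i >= ceil(n/alpha) gets (i - n/alpha)/(n/alpha) + delta."""
    r = n / t                                    # n / alpha with alpha = t
    total = sum((i - r) / r + delta for i in range(math.ceil(r), n + 1))
    return total / n

t, n, delta = 2.0, 1000, 1e-3
avg = cake_average(t, n, delta)
bound = (t - 2 + 1 / t) / 2
slack = delta * (t - 1) / t + t / (2 * n**2) + (t - 1 + delta) / n
assert avg <= bound + slack + 1e-12
```

With $t = 2$, $n = 1000$ and $\delta = 10^{-3}$ the average is about $0.251$, just above the asymptotic bound $\frac{t-2+1/t}{2} = 0.25$, as the proof predicts.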
\subsection{Proportionality Degree of Specific Rules}
In this subsection, we investigate the proportionality degree of the rules that we studied in \Cref{sec:EJR-notions}.
We begin with GreedyEJR-M.
Since GreedyEJR-M satisfies \EJR{M}, \Cref{thm:EJR-M-LB-average-satisfaction} immediately yields a lower bound.
We derive a matching upper bound, which implies that the proportionality degree of GreedyEJR-M is $\floor{t} \cdot \left( 1 - \frac{\floor{t} + 1}{2 t} \right)$.
\begin{theorem}\label{thm:prop-degree-GreedyEJR-M}
For any real numbers $t \geq 1$ and $\varepsilon > 0$, there exists an instance, a $t$-cohesive group $N^*$, and an allocation $A$ output by GreedyEJR-M such that the average satisfaction of $N^*$ with respect to $A$ is at most $\floor{t} \cdot \left( 1 - \frac{\floor{t} + 1}{2t} \right) + \varepsilon$.
\end{theorem}
We first provide some intuition behind the proof of \Cref{thm:prop-degree-GreedyEJR-M}.
We construct an indivisible-goods instance, make $\alpha$ an integer, and choose $n$ to be a multiple of~$\alpha$.
Our goal is to construct a target $t$-cohesive group of agents~$N^*$ with as small utilities as possible.
Since GreedyEJR-M outputs an \EJR{M} allocation, the largest number of agents in $N^*$ that receive utility $0$---denote the set of these agents by~$N_0$---is $n/\alpha - 1$; otherwise, these agents would form a $1$-cohesive group and cannot all receive utility $0$.
Similarly, among the agents in~$N^* \setminus N_0$, the largest number of agents that receive utility~$1$---denote the set of these agents by $N_1$---is $n/\alpha$, as we do not want $N_0\cup N_1$ to form a $2$-cohesive group.
Continuing inductively, we want to partition $N^*$ into $N_0\cup N_1\cup\dots\cup N_{\floor{t}}$, with the agents in $N_k$ receiving utility exactly~$k$ for each~$k$.
We add dummy agents and goods in order to make sure that, instead of all agents in $N^*$ being satisfied at once by the GreedyEJR-M execution, the agents in $N_{\floor{t}}$ are first satisfied along with some dummy agents via some dummy goods, then those in $N_{\floor{t}-1}$ are satisfied along with other dummy agents via other dummy goods, and so on.
The dummy agents and goods need to be carefully constructed to make this argument work.
\begin{proof}[Proof of \Cref{thm:prop-degree-GreedyEJR-M}]
Let~$\alpha = \frac{\floor{t} \cdot (\floor{t}+1)}{2} + 1 = \frac{\floor{t}^2 + \floor{t} + 2}{2}$, and note that $\alpha$ is an integer.
We will construct an indivisible-goods instance with a sufficiently large number of agents~$n$ (to be specified later), where $n$ is a multiple of~$\alpha$ that is at least $2\alpha$.
Observe that
\begin{align*}
\ceiling*{t \cdot \frac{n}{\alpha}} &= \ceiling*{\floor{t} \cdot \frac{n}{\alpha} + (t-\floor{t}) \cdot \frac{n}{\alpha}} \\
&= \floor{t} \cdot \frac{n}{\alpha} + \ceiling*{(t-\floor{t}) \cdot \frac{n}{\alpha}}.
\end{align*}
Since $t-\floor{t} < 1$, we can choose $n$ large enough so that $\ceiling*{(t-\floor{t}) \cdot \frac{n}{\alpha}} \leq \frac{n}{\alpha} - 1$.
When this holds, we have
\begin{align}
t \cdot \frac{n}{\alpha} &\leq \ceiling*{t \cdot \frac{n}{\alpha}} \nonumber \\
&\leq \floor{t} \cdot \frac{n}{\alpha} + \frac{n}{\alpha} - 1 = (\floor{t}+1) \cdot \frac{n}{\alpha} - 1.
\label{eq:squeeze}
\end{align}
This inequality will help ensure that we can construct a $t$-cohesive group that is not $(\floor{t}+1)$-cohesive.
Note that in an indivisible-goods instance, a $t$-cohesive group must commonly approve at least~$\ceiling{t}$ indivisible goods.
We now describe our instance.
Let~$G = \{g_1, g_2, \dots, g_{\ceiling{t}}\} \cup \left( \bigcup_{k = 1}^{\floor{t}} D^G_k \right)$ be the set of indivisible goods, where for each~$k \in \{1, 2, \dots, \floor{t}\}$, $D^G_k$ contains exactly~$k$ indivisible goods.
In particular, the sets
\[
\{g_1, g_2, \dots, g_{\ceiling{t}}\}, D^G_1, D^G_2, \dots, D^G_{\floor{t}}
\]
are all disjoint.
Our specifications for the agents are slightly different depending on whether $t \geq 2$ or $t\in [1,2)$; we distinguish between the two cases below.
\paragraph{Case~1: $t \geq 2$.}
Recalling that $n/\alpha$ is an integer, we partition the set of all agents~$N$ into the following pairwise disjoint sets:
\begin{align*}
&N_0 = \left\{ 1, 2, \dots, \frac{n}{\alpha}-1 \right\}, \\
&N_1 = \left\{ \frac{n}{\alpha}, \frac{n}{\alpha}+1, \dots, 2 \cdot \frac{n}{\alpha}-1 \right\}, \\
&\qquad\vdots \\
&N_k = \left\{ k \cdot \frac{n}{\alpha}, k \cdot \frac{n}{\alpha}+1, \dots, (k+1) \cdot \frac{n}{\alpha}-1 \right\}, \\
&\qquad\vdots \\
&N_{\floor{t}} = \left\{ \floor{t} \cdot \frac{n}{\alpha}, \floor{t} \cdot \frac{n}{\alpha}+1, \dots, \ceiling*{t \cdot \frac{n}{\alpha}} \right\}, \\
&D_1, D_2, \dots, D_{\floor{t}-1}, D_{\floor{t}},
\end{align*}
where $N^* \coloneqq \bigcup_{k = 0}^{\floor{t}} N_k = \left\{ 1, 2, \dots, \ceiling*{t \cdot \frac{n}{\alpha}} \right\}$ is our target $t$-cohesive group and $\bigcup_{k = 1}^{\floor{t}} D_k$ consists of ``dummy agents''.
More specifically:
\begin{itemize}
\item $D_1$ contains a single agent;
\item For each~$k \in \{2, \dots, \floor{t}-1\}$, $D_k$ contains $(k-1) \cdot \frac{n}{\alpha}$ agents;
\item $D_{\floor{t}}$ contains $\floor{t} \cdot \frac{n}{\alpha} - \left( \ceiling*{t \cdot \frac{n}{\alpha}} - \floor{t} \cdot \frac{n}{\alpha} + 1 \right) = 2 \floor{t} \cdot \frac{n}{\alpha} - \ceiling*{t \cdot \frac{n}{\alpha}} - 1$ agents.
Note that $\left| N_{\floor{t}} \cup D_{\floor{t}} \right| = \floor{t} \cdot \frac{n}{\alpha}$.
\end{itemize}
Note that $D_1$ and $D_{\floor{t}}$ are different sets because $t\ge 2$.
We verify that the total number of agents is indeed $n$:
\begin{align*}
|N| &= \left| \left( \bigcup_{k = 0}^{\floor{t}} N_k \right) \cup \left( \bigcup_{k = 1}^{\floor{t}} D_k \right) \right| \\
&= \left| \left( \bigcup_{k = 0}^{\floor{t}-1} N_k \right) \cup D_1 \cup \left( \bigcup_{k = 2}^{\floor{t}-1} D_k \right) \cup \left( N_{\floor{t}} \cup D_{\floor{t}} \right) \right| \\
&= \left( \floor{t} \cdot \frac{n}{\alpha} - 1 \right) + 1 + \sum_{k = 2}^{\floor{t}-1} (k-1) \cdot \frac{n}{\alpha} + \floor{t} \cdot \frac{n}{\alpha} \\
&= 2 \floor{t} \cdot \frac{n}{\alpha} \\
&\quad+ \frac{n}{\alpha} \cdot \frac{(2 - 1) + ((\floor{t}-1) - 1)}{2}\cdot ((\floor{t}-1) - 2 + 1) \\
&= \frac{n}{\alpha} \cdot \left( 2 \floor{t} + \frac{(\floor{t}-1) \cdot (\floor{t}-2)}{2} \right) \\
&= \frac{n}{\alpha} \cdot \frac{\floor{t}^2 + \floor{t} + 2}{2} \\
&= n.
\end{align*}
The agents' preferences are as follows.
\begin{itemize}
\item The agents in~$N_0$ approve the goods in~$\{g_1, g_2, \dots, g_{\ceiling{t}}\}$.
\item For each~$k \in \{1, 2, \dots, \floor{t}\}$, the agents in~$N_k$ approve the goods in~$\{g_1, g_2, \dots, g_{\ceiling{t}}\} \cup D^G_k$, and the agents in~$D_k$ approve the goods in~$D^G_k$.
\end{itemize}
This completes the description of our instance.
Since $|N^*| = \ceiling*{t \cdot \frac{n}{\alpha}}$, inequality \eqref{eq:squeeze} implies that $N^*$ is not $(\floor{t}+1)$-cohesive.
On the other hand, since the agents in~$N^*$ commonly approve the goods~$g_1, g_2, \dots, g_{\ceiling{t}}$, $N^*$ is $t$-cohesive.
Also, notice that $t^* = \floor{t}$ is the largest integer such that a $t^*$-cohesive group exists in our instance.
We now consider the execution of GreedyEJR-M on the above instance.
Since our instance consists exclusively of indivisible goods, only integers~$t^*$ are relevant for the \EJR{M} condition in each round of GreedyEJR-M.
We claim that GreedyEJR-M can return the allocation~$A = \bigcup_{k = 1}^{\floor{t}} D^G_k$, which has size~$\frac{\floor{t} \cdot (\floor{t}+1)}{2} \leq \alpha$.
Given the instance, at the beginning, $t^* = \floor{t}$ is the largest number such that the \EJR{M} condition is satisfied: it is satisfied with $\left( N_{\floor{t}} \cup D_{\floor{t}}, D^G_{\floor{t}} \right)$, because $\left| N_{\floor{t}} \cup D_{\floor{t}} \right| = \floor{t} \cdot \frac{n}{\alpha}$ and the agents in $N_{\floor{t}} \cup D_{\floor{t}}$ commonly approve all goods in~$D^G_{\floor{t}}$, which has size exactly~$\floor{t}$.\footnote{At this stage, the \EJR{M} condition with $t^* = \floor{t}$ also holds with~$(N^*, \{g_1, g_2, \dots, g_{\floor{t}}\})$.}
GreedyEJR-M removes the agents in~$N_{\floor{t}} \cup D_{\floor{t}}$ and adds the goods in~$D^G_{\floor{t}}$ to the allocation~$A$.
Now, there is no more $\floor{t}$-cohesive group, and $t^* = \floor{t}-1$ becomes the largest number such that the \EJR{M} condition is satisfied: it is satisfied with~$\left( N_{\floor{t}-1} \cup D_{\floor{t}-1}, D^G_{\floor{t}-1} \right)$.
More generally, for each integer~$k$ from~$\floor{t}-1$ down to~$2$, the \EJR{M} condition is satisfied for~$t^* = k$ with~$\left( N_k \cup D_k, D^G_k \right)$ because
\begin{align*}
|N_k \cup D_k|
&= \frac{n}{\alpha} + (k-1) \cdot \frac{n}{\alpha}
= k \cdot \frac{n}{\alpha}
\end{align*}
and the agents in~$N_k \cup D_k$ commonly approve the goods in~$D^G_k$, which has size exactly~$k$; moreover,
\begin{align*}
|N_0\cup N_1\cup\dots\cup N_k| &= (k+1)\cdot\frac{n}{\alpha} - 1 \\
&< (k+1)\cdot\frac{n}{\alpha}.
\end{align*}
Hence, at each stage, GreedyEJR-M can remove the agents in~$N_k \cup D_k$ and add the goods in~$D^G_k$ to the allocation~$A$.
Finally, when only the agents in~$N_0 \cup N_1 \cup D_1$ remain, there is no $2$-cohesive group; however, the \EJR{M} condition is satisfied for~$t^* = 1$ with~$\left( N_1 \cup D_1, D^G_1 \right)$ because $|N_1 \cup D_1| = \frac{n}{\alpha} + 1 \geq \frac{n}{\alpha}$ and the agents in~$N_1 \cup D_1$ commonly approve the single good in~$D^G_1$.
Once $N_1 \cup D_1$ is removed by GreedyEJR-M and $D^G_1$ is added to~$A$, the remaining agents in~$N_0$ do not form a $1$-cohesive group, so GreedyEJR-M terminates.
Note that for every~$k \in \{0, 1, 2, \dots, \floor{t}\}$, each agent in~$N_k$ gets utility exactly~$k$ from the allocation~$A$.
The average satisfaction of $N^*$ with respect to the returned allocation $A$ is
\begin{align*}
&\frac{\sum_{i \in N^*} u_i(A)}{|N^*|} \\
&= \frac{1}{|N^*|} \cdot \sum_{k = 0}^{\floor{t}} |N_k| \cdot k \\
&= \sum_{k = 0}^{\floor{t}-1} \frac{|N_k| \cdot k}{|N^*|} + \frac{\left| N_{\floor{t}} \right| \cdot \floor{t}}{|N^*|} \\
&= \sum_{d = 1}^{\floor{t}-1} \sum_{k = d}^{\floor{t}-1} \frac{|N_k|}{|N^*|} + \frac{\left| N_{\floor{t}} \right| \cdot \floor{t}}{|N^*|} \\
&= \sum_{d = 1}^{\floor{t}-1} \frac{\left| \bigcup_{k = d}^{\floor{t}-1} N_k \right|}{|N^*|} + \frac{\left| N_{\floor{t}} \right| \cdot \floor{t}}{|N^*|} \\
&= \sum_{d = 1}^{\floor{t}-1} \frac{\left( \floor{t} \cdot \frac{n}{\alpha} - 1 \right) - \left( d \cdot \frac{n}{\alpha} - 1 \right)}{\ceiling*{t \cdot \frac{n}{\alpha}}} \\
&\qquad+ \frac{\left( \ceiling*{t \cdot \frac{n}{\alpha}} - \floor{t} \cdot \frac{n}{\alpha} + 1 \right) \cdot \floor{t}}{\ceiling*{t \cdot \frac{n}{\alpha}}} \\
&= \sum_{d = 1}^{\floor{t}-1} \frac{(\floor{t} - d) \cdot \frac{n}{\alpha}}{\ceiling*{t \cdot \frac{n}{\alpha}}} \\
&\qquad+ \frac{\left( \ceiling*{\floor{t} \cdot \frac{n}{\alpha} + (t - \floor{t}) \cdot \frac{n}{\alpha}} - \floor{t} \cdot \frac{n}{\alpha} + 1 \right) \cdot \floor{t}}{\ceiling*{t \cdot \frac{n}{\alpha}}} \\
&\leq \sum_{d = 1}^{\floor{t}-1} \frac{(\floor{t} - d) \cdot \frac{n}{\alpha}}{t \cdot \frac{n}{\alpha}} \\
&\qquad+ \frac{\left( \floor{t} \cdot \frac{n}{\alpha} + \ceiling*{(t - \floor{t}) \cdot \frac{n}{\alpha}} - \floor{t} \cdot \frac{n}{\alpha} + 1 \right) \cdot \floor{t}}{t \cdot \frac{n}{\alpha}} \\
&\leq \frac{1}{t} \cdot \sum_{d = 1}^{\floor{t}-1} (\floor{t} - d) + \frac{\left( (t - \floor{t}) \cdot \frac{n}{\alpha} + 2 \right) \cdot \floor{t}}{t \cdot \frac{n}{\alpha}} \\
&= \frac{1}{t} \cdot \frac{\floor{t} \cdot (\floor{t} - 1)}{2} + \frac{\floor{t} \cdot (t - \floor{t})}{t} + \frac{2 \alpha \floor{t}}{n t} \\
&= \floor{t} \cdot \left( \frac{\floor{t} - 1}{2t} + \frac{t - \floor{t}}{t} \right) + \frac{\floor{t} \cdot (\floor{t}^2 + \floor{t} + 2)}{n t} \\
&= \floor{t} \cdot \left( 1 - \frac{\floor{t} + 1}{2t} \right) + \frac{\floor{t} \cdot (\floor{t}^2 + \floor{t} + 2)}{n t}.
\end{align*}
Choosing a sufficiently large~$n$ so that $\frac{\floor{t} \cdot (\floor{t}^2 + \floor{t} + 2)}{n t} \leq \varepsilon$ completes the proof for the case $t\ge 2$.
\paragraph{Case~2: $1 \leq t < 2$.}
In this case, we have $\floor{t} = 1$ and thus $\alpha = \frac{\floor{t}^2 + \floor{t} + 2}{2} = 2$.
Recalling again that $n / \alpha$ is an integer, we partition the $n$ agents into pairwise disjoint sets as follows:
\begin{align*}
&N_0 = \left\{ 1, \dots, \frac{n}{\alpha} - 1 \right\}, \\
&N_{\floor{t}} = \left\{ \frac{n}{\alpha}, \dots, \ceiling*{t \cdot \frac{n}{\alpha}} \right\}, \\
&D_{\floor{t}},
\end{align*}
where $N^* \coloneqq N_0 \cup N_{\floor{t}} = \left\{ 1, 2, \dots, \ceiling*{t \cdot \frac{n}{\alpha}} \right\}$ is our target $t$-cohesive group and $D_{\floor{t}}$ contains the remaining $n - \ceiling*{t \cdot \frac{n}{\alpha}}$ agents, who are ``dummy agents''.
From \eqref{eq:squeeze}, there is at least one agent in~$D_{\floor{t}}$:
\begin{align*}
n - \ceiling*{t \cdot \frac{n}{\alpha}} &= 2 \cdot \frac{n}{\alpha} - \ceiling*{t \cdot \frac{n}{\alpha}} \\
&= (\floor{t} + 1) \cdot \frac{n}{\alpha} - \ceiling*{t \cdot \frac{n}{\alpha}} \geq 1.
\end{align*}
The set of indivisible goods is $\{g_1, g_2, \dots, g_{\ceiling{t}}\} \cup D_{\floor{t}}^G$, where $D_{\floor{t}}^G$ contains a single good.
The agents' preferences are as follows.
\begin{itemize}
\item The agents in~$N^*$ approve the goods in~$\{g_1, g_2, \dots, g_{\ceiling{t}}\}$.
\item The agents in~$N_{\floor{t}}\cup D_{\floor{t}}$ approve the good in~$D_{\floor{t}}^G$.
\end{itemize}
This completes the description of our instance.
Since the total number of agents is $n = 2\cdot n/\alpha$ but no good is approved by all $n$ agents, there is no $2$-cohesive group.
On the other hand, since the agents in~$N_{\floor{t}} \cup D_{\floor{t}}$ commonly approve the good in~$D_{\floor{t}}^G$ and
\[
\left| N_{\floor{t}} \cup D_{\floor{t}} \right| = n - \left( \frac{n}{\alpha} - 1 \right) \geq \frac{n}{\alpha},
\]
$N_{\floor{t}} \cup D_{\floor{t}}$ is $1$-cohesive.
Thus, GreedyEJR-M can start by identifying the $1$-cohesive group~$N_{\floor{t}} \cup D_{\floor{t}}$ and adding $D_{\floor{t}}^G$ to the allocation~$A$.
Once the agents in~$N_{\floor{t}} \cup D_{\floor{t}}$ are removed by GreedyEJR-M, the remaining agents in~$N_0$ do not form a $1$-cohesive group, so no more good is added to~$A$.
Note that $N^*$ is a $1$-cohesive group, since it has size $\ceiling*{t \cdot \frac{n}{\alpha}} \ge \frac{n}{\alpha}$ and the agents in it commonly approve $g_1$.
The average satisfaction of~$N^*$ with respect to the returned allocation~$A$ is
\begin{align*}
&\frac{\sum_{i \in N^*} u_i(A)}{|N^*|}\\
&\quad= \frac{|N_0| \cdot 0}{|N^*|} + \frac{\left| N_{\floor{t}} \right| \cdot \floor{t}}{|N^*|} \\
&\quad= \frac{\left( \ceiling*{t \cdot \frac{n}{\alpha}} - \floor{t} \cdot \frac{n}{\alpha} + 1 \right) \cdot \floor{t}}{\ceiling*{t \cdot \frac{n}{\alpha}}} \\
&\quad= \frac{\left( \ceiling*{\floor{t} \cdot \frac{n}{\alpha} + (t - \floor{t}) \cdot \frac{n}{\alpha}} - \floor{t} \cdot \frac{n}{\alpha} + 1 \right) \cdot \floor{t}}{\ceiling*{t \cdot \frac{n}{\alpha}}} \\
&\quad\leq \frac{\left( \floor{t} \cdot \frac{n}{\alpha} + \ceiling*{(t - \floor{t}) \cdot \frac{n}{\alpha}} - \floor{t} \cdot \frac{n}{\alpha} + 1 \right) \cdot \floor{t}}{t \cdot \frac{n}{\alpha}} \\
&\quad\leq \frac{\left( (t - \floor{t}) \cdot \frac{n}{\alpha} + 2 \right) \cdot \floor{t}}{t \cdot \frac{n}{\alpha}} \\
&\quad= \frac{\floor{t} \cdot (t - \floor{t})}{t} + \frac{2 \alpha \floor{t}}{n t} \\
&\quad= \floor{t} \cdot \left( 1 - \frac{\floor{t} + 1}{2t} \right) + \frac{2 \alpha \floor{t}}{n t},
\end{align*}
where the last equation follows from $\floor{t} = 1$.
Choosing a sufficiently large~$n$ so that $\frac{2 \alpha \floor{t}}{n t} \leq \varepsilon$ completes the proof for the case $t \in [1, 2)$, and therefore the proof of \Cref{thm:prop-degree-GreedyEJR-M}.
\end{proof}
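As a sanity check on the Case~1 construction (so $t \geq 2$ is assumed), the following sketch recomputes the average satisfaction of $N^*$ from the group sizes defined in the proof and compares it with the stated bound; it is illustrative only, and the function name is ours.

```python
import math

def greedy_case1_average(t, n):
    """Average satisfaction of N* in the Case-1 instance (t >= 2):
    alpha = floor(t)*(floor(t)+1)/2 + 1, and agents in N_k get
    utility exactly k from the returned allocation."""
    ft = math.floor(t)
    alpha = ft * (ft + 1) // 2 + 1
    assert n % alpha == 0                        # n is a multiple of alpha
    m = n // alpha                               # m = n / alpha
    # |N_0| = m - 1, |N_1| = ... = |N_{ft-1}| = m,
    # |N_ft| = ceil(t*m) - ft*m + 1; together they cover N*.
    sizes = [m - 1] + [m] * (ft - 1) + [math.ceil(t * m) - ft * m + 1]
    assert sum(sizes) == math.ceil(t * m)        # |N*| = ceil(t * n / alpha)
    total = sum(k * s for k, s in enumerate(sizes))
    return total / math.ceil(t * m)

t, n = 2.5, 400                                  # alpha = 4 here
avg = greedy_case1_average(t, n)
ft = math.floor(t)
bound = ft * (1 - (ft + 1) / (2 * t))
err = ft * (ft**2 + ft + 2) / (n * t)
assert avg <= bound + err + 1e-12
```

For $t = 2.5$ and $n = 400$ the groups have sizes $99$, $100$ and $51$, giving average $\frac{202}{250} = 0.808$, within the error term of the bound $0.8$.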
In the indivisible-goods setting, \citet[Prop.~A.10]{LacknerSk22} showed that for integers $t$, the proportionality degree of MES is between $\frac{t-1}{2}$ and $\frac{t+1}{2}$.
Since Generalized MES satisfies EJR-1, \Cref{thm:EJR-1-LB-average-satisfaction} implies a lower bound of $\frac{t-2+1/t}{2}$ on its proportionality degree.
On the other hand, since any $t'$-cohesive group is also $t$-cohesive for $t\le t'$, Lackner and Skowron's result implies an upper bound of $\frac{\ceiling{t}+1}{2}$ for Generalized MES.
Finally, we prove that Generalized PAV has a significantly higher proportionality degree than the other two rules that we study.
In doing so, we extend a result of \citet{AzizElHu18} from the indivisible-goods setting.
\begin{theorem}
\label{thm:prop-degree-GPAV}
For any real number $t\ge 1$, the average satisfaction of a $t$-cohesive group with respect to a Generalized PAV allocation is greater than $t-1$.
\end{theorem}
\begin{proof}
The proof is almost identical to that of \Cref{thm:GPAV}.
Let $R'$ be a Generalized PAV allocation, and assume for contradiction that a $t$-cohesive group $N'$ has $\frac{1}{\lvert N'\rvert}\cdot \sum_{i \in N'} u_i(R') \le t-1$.
Since the agents in~$N'$ commonly approve a resource of size at least $t$, if the subset of this resource included in $R'$ has size larger than $t-1$, we would have $\frac{1}{\lvert N'\rvert}\cdot \sum_{i \in N'} u_i(R') > t-1$, a contradiction.
Hence, there exists a resource of size~$1$ approved by all agents in $N'$ but not included in $R'$.
The rest of the proof proceeds in the same way as that of \Cref{thm:GPAV}; in particular, the chain of inequalities starting with $H(R'')-H(R')$ still holds because $\sum_{i\in N'}u_i(R') \le |N'|\cdot(t-1)$.
\end{proof}
We also demonstrate in \Cref{app:tightness} that the bound $t-1$ is almost tight.
\section{Conclusion and Future Work}
In this work, we have initiated the study of approval-based voting with mixed divisible and indivisible goods, which allows us to unify both the well-studied setting of multiwinner voting and the recently introduced setting of cake sharing.
We generalized three important rules from multiwinner voting to our setting, determined their relations to our proposed extensions of the EJR axiom, and investigated their proportionality degree.
While the Generalized MES and Generalized PAV rules only satisfy the weaker axiom of \mbox{EJR-1}, GreedyEJR-M satisfies the stronger axiom of \mbox{EJR-M}, thereby showing that an allocation fulfilling both axioms can be found in every instance.
Nevertheless, an interesting open question is whether there exists a polynomial-time algorithm for computing such an allocation.
Further directions for future work include exploring other axioms such as \emph{proportional justified representation (PJR)} \citep{SanchezFernandezElLa17} and the \emph{core}.
In fact, one can define \mbox{PJR-M} and \mbox{PJR-1} analogously to our \mbox{EJR-M} and \mbox{EJR-1}---since these PJR axioms are weaker than the respective EJR axioms, our positive results on the EJR variants directly carry over to the PJR variants as well.
\section*{Acknowledgments}
This work was partially supported by ARC Laureate Project FL200100204 on ``Trustworthy AI'', by the Singapore Ministry of Education under grant number MOE-T2EP20221-0001, by the Deutsche Forschungsgemeinschaft under
grant BR 4744/2-1 and the Graduiertenkolleg ``Facets of
Complexity'' (GRK 2434), and by an NUS Start-up Grant.
We thank the anonymous reviewers for their valuable feedback.
\appendix
\section{(Almost) Tightness of \Cref{thm:prop-degree-GPAV}}
\label{app:tightness}
\citet{AzizElHu18} showed that, for indivisible-goods instances, the bound $t-1$ on the proportionality degree of PAV is tight for every positive integer $t$.
In fact, they proved a stronger statement that for any integer $t \ge 1$ and real number $\varepsilon > 0$, there exists an instance such that no allocation provides an average satisfaction of at least $t - 1 + \varepsilon$ to every $t$-cohesive group.
We extend their result to our setting by showing that for real numbers $t \ge 1$, the bound $t-1$ is essentially tight for large~$t$; our formal statement can be found below.
\begin{theorem}
For any $t \geq 1$ and $\varepsilon >0$, let $c = t - \floor{t}$ denote the fractional part of $t$.
There exists an indivisible-goods instance in which no allocation provides an average satisfaction of at least $t-1+\frac{c(1-c)}{t} + \varepsilon$ to all $t$-cohesive groups.
\end{theorem}
\begin{proof}
We first assume that $t$ (and therefore $c$) is a rational number.
Let $\gamma$ be a rational number in $(0, \varepsilon)$ such that $\gamma < 1-c$.
Let $c = p/q$ and $\gamma = p'/q$ for some integers $p \geq 0$ and $p', q \geq 1$, and let $k = \floor{t}$.
Our instance is given as follows.
\begin{itemize}
\item The set of agents $N$ can be partitioned into the following pairwise disjoint sets:
\begin{itemize}
\item $N_0$ contains $q\cdot ((k+1)c+k\gamma)$ agents;
\item Each of $N_1, \ldots, N_{k+1}$ contains $q\cdot (1-c-\gamma)$ agents.
\end{itemize}
Therefore, the total number of agents is
\begin{align*}
n &= q\cdot ((k+1)c+k\gamma + (k+1)(1-c-\gamma)) \\
&= q\cdot (k+1-\gamma).
\end{align*}
\item The resource consists of $(k+1)^2$ indivisible goods, which can be partitioned into $k+1$ disjoint sets $G_1, \ldots, G_{k+1}$, each with $k+1$ goods.
\item The agents' preferences are such that for each $i \in \{1, \ldots, k+1\}$, the agents in $M_i \coloneqq N\setminus N_i$ commonly approve the goods in $G_i$.
Note that each set $M_i$ consists of $q\cdot((k+1)c+k\gamma + k(1-c-\gamma)) = q\cdot (k+c) = q\cdot t$ agents.
\item We set $\alpha = k+1-\gamma$.
Since each set $M_i$ consists of $q\cdot t = t\cdot n/\alpha$ agents and these agents commonly approve $|G_i| = k+1 \ge t$ goods, $M_i$ is a $t$-cohesive group.
\end{itemize}
Because $\alpha = k+1-\gamma < k+1$ and our instance consists only of indivisible goods, any feasible allocation~$A$ can include at most $k$ goods.
Since we have $k+1$ sets of goods $G_1, \ldots, G_{k+1}$, at least one of these sets is entirely excluded from~$A$.
Consider an arbitrary allocation~$A$, and assume without loss of generality that $G_1$ is such a set.
We now analyze the average satisfaction of the $t$-cohesive group~$M_1$ with respect to $A$.
Note that for each $i \in \{2,\dots,k+1\}$, any good selected from $G_i$ contributes a total utility of
\begin{align*}
|M_1 \cap M_i|
&= |N \setminus (N_1 \cup N_i)| \\
&= q\cdot (k+1-\gamma - 2(1-c-\gamma)) \\
&= q\cdot(t-(1-c)+\gamma)
\end{align*}
to agents in $M_1$. Therefore, all goods in $A$ combined give a total utility of at most
\[q\cdot k \cdot(t-(1-c)+\gamma) = q[(t-c)(t-(1-c))+k\gamma]\]
to all agents in $M_1$.
It follows that the average satisfaction of $M_1$ with respect to~$A$ is at most
\begin{align*} &\frac{q[(t-c)(t-(1-c))+k\gamma]}{qt} \\
&\quad= \frac{(t-c)(t-(1-c))+k\gamma}{t} \\
&\quad= \frac{(t^2-ct) - (t-c)(1-c)+k\gamma}{t} \\
&\quad= \frac{t^2-ct + ct - t + c(1-c)+k\gamma}{t}\\
&\quad\leq t-1+\frac{c(1-c)}{t}+\gamma < t-1+\frac{c(1-c)}{t}+\varepsilon.\end{align*}
This completes the proof for the case where $t$ is a rational number.
Finally, assume that $t$ is irrational.
Let $t' > t$ and $\varepsilon' < \varepsilon$ be rational numbers such that $\floor{t'} = \floor{t}$ and $t'+ \frac{c'(1-c')}{t'}+ \varepsilon' < t+\frac{c(1-c)}{t}+ \varepsilon$, where $c' = t' - \floor{t'}$; the existence of such a pair $(t',\varepsilon')$ is guaranteed by the fact that, as we increase $t$ slightly, the value of $t + \frac{c(1-c)}{t}$ changes continuously.
From our previous argument, there exists an instance in which no allocation provides an average satisfaction of at least $t'-1 + \frac{c'(1-c')}{t'} + \varepsilon'$ to all $t'$-cohesive groups.
Suppose for contradiction that there is an allocation that provides an average satisfaction of at least $t-1 + \frac{c(1-c)}{t} + \varepsilon$ to all $t$-cohesive groups in this instance.
Because any $t'$-cohesive group is also $t$-cohesive, this allocation would also provide an average satisfaction of at least $t-1 + \frac{c(1-c)}{t} + \varepsilon > t'-1+\frac{c'(1-c')}{t'}+\varepsilon'$ to all $t'$-cohesive groups, a contradiction.
\end{proof}
\end{document}
\begin{document}
\title{The Regularization Theory of the Krylov Iterative Solvers LSQR, CGLS,
LSMR and CGME For Linear Discrete Ill-Posed
Problems hanks{This work was supported in part by
the National Science Foundation of China (No. 11371219)}
\begin{abstract}
For the large-scale linear discrete ill-posed problem $\min\|Ax-b\|$ or $Ax=b$
with $b$ contaminated by a white noise, the Lanczos bidiagonalization based Krylov
solver LSQR and its mathematically equivalent CGLS are
most commonly used. They have intrinsic regularizing effects,
where the number $k$ of iterations plays the role
of the regularization parameter. However, there has been no
answer to the long-standing fundamental concern: {\em for which kinds of
problems can LSQR and CGLS find best possible regularized solutions?}
The concern was actually expressed foresightedly by Bj\"{o}rck and
Eld\'{e}n in 1979.
Here a best possible regularized solution means one at least
as accurate as the best regularized
solution obtained by the truncated singular value decomposition (TSVD) method;
the errors of the latter and of the best possible solution by standard-form
Tikhonov regularization are both of the same order as the worst-case error
and cannot be improved
under the assumption that the solution to the underlying linear compact operator
equation is continuous or its derivative is square integrable.
In this paper we present a detailed analysis of the
regularization of LSQR for severely, moderately and mildly ill-posed problems.
We first consider the case that the singular values of $A$ are simple. We establish
accurate $\sin\Theta$ theorems for the 2-norm distance between the
underlying $k$-dimensional
Krylov subspace and the $k$-dimensional dominant right singular subspace
of $A$. Based on them and some follow-up results, for the first two kinds of
problems, we prove that LSQR finds a best possible regularized solution at
semi-convergence occurring at iteration $k_0$ and the following results hold for
$k=1,2,\ldots,k_0$: (i) the $k$-step Lanczos
bidiagonalization always generates a near best rank $k$ approximation to $A$;
(ii) the $k$ Ritz values always approximate the first $k$ large singular values
in natural order; (iii) the $k$-step LSQR
always captures the $k$ dominant SVD components of $A$. However,
for the third kind of problem, we prove that LSQR cannot find a best possible
regularized solution generally. We derive accurate estimates for
the diagonals and subdiagonals of the bidiagonal matrices generated by
Lanczos bidiagonalization, which can be used to decide if LSQR finds a best possible
regularized solution at semi-convergence. We also analyze the regularization
of the other two Krylov solvers LSMR and CGME, which are
MINRES applied to $A^TAx=A^Tb$ and the CG method applied to
$\min\|AA^Ty-b\|$ with $x=A^Ty$, respectively, proving that the regularizing effects
of LSMR are similar to those of LSQR for each kind of problem and that both are
superior to CGME.
We extend all the results to the case that $A$ has multiple singular values.
Numerical experiments confirm our theory on LSQR.
\end{abstract}
\begin{keywords}
Discrete ill-posed, full or partial regularization, best or near best rank $k$
approximation, TSVD solution, semi-convergence, Lanczos bidiagonalization,
LSQR, CGLS, LSMR, CGME
\end{keywords}
\begin{AMS}
65F22, 65F10, 65F20, 65J20, 65R30, 65R32, 15A18
\end{AMS}
\pagestyle{myheadings}
\thispagestyle{plain}
\markboth{ZHONGXIAO JIA}{REGULARIZATION OF LSQR, CGLS, LSMR AND CGME}
\section{Introduction and Preliminaries}\label{intro}
Consider the linear discrete ill-posed problem
\begin{equation}
\min\limits_{x\in \mathbb{R}^{n}}\|Ax-b\| \mbox{\,\ or \ $Ax=b$,}
\ \ \ A\in \mathbb{R}^{m\times n}, \label{eq1}
\ b\in \mathbb{R}^{m},
\end{equation}
where the norm $\|\cdot\|$ is the 2-norm of a vector or matrix, and
$A$ is extremely ill conditioned with its singular values decaying
to zero without a noticeable gap. \eqref{eq1} mainly arises
from the discretization of the first kind Fredholm integral equation
\begin{equation}\label{eq2}
Kx=(Kx)(s)=\int_{\Omega} k(s,t)x(t)dt=g(s)=g,\ s\in \Omega
\subset\mathbb{R}^q,
\end{equation}
where the kernel $k(s,t)\in L^2({\Omega\times\Omega})$ and
$g(s)$ are known functions, while $x(t)$ is the
unknown function to be sought. If $k(s,t)$ is non-degenerate
and $g(s)$ satisfies the Picard condition, there exists a unique square
integrable solution
$x(t)$; see \cite{engl00,hansen98,hansen10,kirsch,mueller}. Here for brevity
we assume that $s$ and $t$ belong to the same set $\Omega\subset
\mathbb{R}^q$ with $q\geq 1$.
Applications include image deblurring, signal processing, geophysics,
computerized tomography, heat propagation, biomedical and optical imaging,
groundwater modeling, and many others; see, e.g.,
\cite{aster,engl93,engl00,hansen10,ito15,kaipio,kern,kirsch,mueller,
natterer,vogel02}.
The theory and numerical treatments of integral
equations can be found in \cite{kirsch,kythe}.
The right-hand side $b=\hat{b}+e$ is noisy and assumed to be
contaminated by a white noise $e$, caused by measurement, modeling
or discretization errors, where $\hat{b}$
is noise-free and $\|e\|<\|\hat{b}\|$.
Because of the presence of noise $e$ and the extreme
ill-conditioning of $A$, the naive
solution $x_{naive}=A^{\dagger}b$ of \eqref{eq1} bears no relation to
the true solution $x_{true}=A^{\dagger}\hat{b}$, where
$\dagger$ denotes the Moore-Penrose inverse of a matrix.
Therefore, one has to use regularization to extract a
best possible approximation to $x_{true}$.
The most common regularization, in its simplest form, is the
direct standard-form Tikhonov regularization
\begin{equation}\label{tikhonov}
\min\limits_{x\in \mathbb{R}^{n}}{\|Ax-b\|^2+\lambda^2\|x\|^2}
\end{equation}
with $\lambda>0$ the regularization parameter
\cite{phillips,tikhonov63,tikhonov77}.
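Since \eqref{tikhonov} is an ordinary least squares problem for the matrix $A$ stacked on top of $\lambda I$, any least squares solver applies. A minimal numpy sketch (the test matrix, noise level and $\lambda$ are arbitrary illustrative choices, not from the paper):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the stacked system (A; lam*I)."""
    n = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

# An ill-conditioned test problem with rapidly decaying singular values.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((30, 10)))
V, _ = np.linalg.qr(rng.standard_normal((10, 10)))
sigma = np.logspace(0, -8, 10)
A = U @ np.diag(sigma) @ V.T
x_true = V @ sigma**1.5              # coefficients decay faster than sigma_i
b = A @ x_true + 1e-6 * rng.standard_normal(30)

x_reg = tikhonov(A, b, lam=1e-4)
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
err_reg = np.linalg.norm(x_reg - x_true)
err_naive = np.linalg.norm(x_naive - x_true)
print(err_reg, err_naive)            # the naive error is vastly larger
```

The naive solution is destroyed by the components $u_i^Te/\sigma_i$ with tiny $\sigma_i$, whereas the penalty $\lambda^2\|x\|^2$ suppresses exactly those components.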
The solutions to \eqref{eq1} and \eqref{tikhonov} can be fully analyzed by
the singular value decomposition (SVD) of $A$. Let
\begin{equation}\label{eqsvd}
A=U\left(\begin{array}{c} \Sigma \\ \mathbf{0} \end{array}\right) V^{T}
\end{equation}
be the SVD of $A$,
where $U = (u_1,u_2,\ldots,u_m)\in\mathbb{R}^{m\times m}$ and
$V = (v_1,v_2,\ldots,v_n)\in\mathbb{R}^{n\times n}$ are orthogonal,
$\Sigma = \diag (\sigma_1,\sigma_2,
\ldots,\sigma_n)\in\mathbb{R}^{n\times n}$ with the singular values
$\sigma_1>\sigma_2 >\cdots >\sigma_n>0$ assumed to be simple
throughout the paper except Section \ref{multiple}, and the superscript $T$ denotes
the transpose of a matrix or vector. Then
\begin{equation}\label{eq4}
x_{naive}=\sum\limits_{i=1}^{n}\frac{u_i^{T}b}{\sigma_i}v_i =
\sum\limits_{i=1}^{n}\frac{u_i^{T}\hat{b}}{\sigma_i}v_i +
\sum\limits_{i=1}^{n}\frac{u_i^{T}e}{\sigma_i}v_i
=x_{true}+\sum\limits_{i=1}^{n}\frac{u_i^{T}e}{\sigma_i}v_i
\end{equation}
with $\|x_{true}\|=\|A^{\dagger}\hat{b}\|=
\left(\sum_{k=1}^n\frac{|u_k^T\hat{b}|^2}{\sigma_k^2}\right)^{1/2}$.
Throughout the paper, we always assume that $\hat{b}$ satisfies the discrete
Picard condition $\|A^{\dagger}\hat{b}\|\leq C$ with some constant $C$ for $n$
arbitrarily large \cite{aster,gazzola15,hansen90,hansen90b,hansen98,hansen10,kern}.
It is an analog of the Picard condition in the finite dimensional case;
see, e.g., \cite{hansen90}, \cite[p.9]{hansen98},
\cite[p.12]{hansen10} and \cite[p.63]{kern}. This
condition means that, on average, the Fourier coefficients
$|u_i^{T}\hat{b}|$ decay faster than $\sigma_i$ and enables
regularization to compute useful approximations to $x_{true}$, which
results in the following popular model that is used throughout Hansen's books
\cite{hansen98,hansen10} and the current paper:
\begin{equation}\label{picard}
| u_i^T \hat b|=\sigma_i^{1+\beta},\ \ \beta>0,\ i=1,2,\ldots,n,
\end{equation}
where $\beta$ is a model parameter that controls the decay rates of
$| u_i^T \hat b|$.
Hansen \cite[p.68]{hansen10} points out, ``while this is a crude model,
it reflects the overall behavior often found in real problems."
One precise definition of the discrete Picard condition is
$| u_i^T \hat b|=\tau_i\sigma_i^{1+\zeta_i}$ with certain
constants $\tau_i\geq 0,\ \zeta_i>0,\ i=1,2,\ldots,n$. We remark that as long as
the $\tau_i>0$ and the $\zeta_i$ do not differ greatly, such a discrete Picard
condition does not affect our claims; it merely
complicates the derivations and the forms of the results.
The white noise $e$ has a number of attractive properties which
play a critical role in the regularization analysis: Its covariance matrix
is $\eta^2 I$, the expected values ${\cal E}(\|e\|^2)=m \eta^2$ and
${\cal E}((u_i^Te)^2)=\eta^2,\,i=1,2,\ldots,n$, and
$\|e\|\approx \sqrt{m}\eta$ and $|u_i^Te|\approx \eta,\
i=1,2,\ldots,n$; see, e.g., \cite[p.70-1]{hansen98} and \cite[p.41-2]{hansen10}.
The noise $e$ thus affects $u_i^Tb,\ i=1,2,\ldots,n,$ {\em more or less equally}.
With \eqref{picard}, relation \eqref{eq4} shows that for large singular values
$|{u_i^{T}\hat{b}}|/{\sigma_i}$ is dominant relative to
$|u_i^{T}e|/{\sigma_i}$. Once
$| u_i^T \hat b| \leq | u_i^T e|$ from some $i$ onwards, the small singular
values magnify $|u_i^{T}e|/{\sigma_i}$, and the noise
$e$ dominates $| u_i^T b|/\sigma_i$ and must be suppressed. The
transition point $k_0$ is such that
\begin{equation}\label{picard1}
| u_{k_0}^T b|\approx | u_{k_0}^T \hat{b}|> | u_{k_0}^T e|\approx
\eta, \ | u_{k_0+1}^T b|
\approx | u_{k_0+1}^Te|
\approx \eta;
\end{equation}
see \cite[p.42, 98]{hansen10} and a similar description \cite[p.70-1]{hansen98}.
The $\sigma_k$ are then divided into the $k_0$ large ones and the $n-k_0$ small
ones. The truncated SVD (TSVD) method \cite{hansen98,hansen10} computes the TSVD
regularized solutions
\begin{equation}\label{solution}
x^{tsvd}_k=\left\{\begin{array}{ll} \sum\limits_{i=1}^{k}\frac{u_i^{T}b}
{\sigma_i}{v_i}\thickapprox
\sum\limits_{i=1}^{k}\frac{u_i^{T}\hat{b}}
{\sigma_i}{v_i},\ \ \ &k\leq k_0;\\ \sum\limits_{i=1}^{k}\frac{u_i^{T}b}
{\sigma_i}{v_i}\thickapprox
\sum\limits_{i=1}^{k_0}\frac{u_i^{T}\hat{b}}{\sigma_i}{v_i}+
\sum\limits_{i=k_0+1}^{k}\frac{u_i^{T}e}{\sigma_i}{v_i},\ \ \ &k>k_0.
\end{array}\right.
\end{equation}
It is known from \cite[p.70-1]{hansen98} and
\cite[p.86-8,96]{hansen10} that $x_{k_0}^{tsvd}$ is
the best TSVD regularized solution to \eqref{eq1} and balances the
regularization and perturbation errors optimally. The parameter $k$ is a
regularization parameter that determines how many large SVD components of
$A$ are used to compute a regularized solution $x_k^{tsvd}$ to
\eqref{eq1}.
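As an illustration of \eqref{solution} and \eqref{picard1}, one can synthesize a problem obeying the model \eqref{picard}, add white noise of level $\eta$, and observe that the TSVD error is smallest near the transition index $k_0$. A numpy sketch (all sizes and levels are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, eta = 12, 0.5, 1e-6
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.logspace(0, -8, n)                  # rapidly decaying spectrum
A = U @ np.diag(sigma) @ V.T
x_true = V @ sigma**beta                       # gives |u_i^T bhat| = sigma_i^{1+beta}
b = U @ sigma**(1 + beta) + eta * rng.standard_normal(n)

c = U.T @ b                                    # Fourier coefficients u_i^T b
errors = []
for k in range(1, n + 1):
    x_k = V[:, :k] @ (c[:k] / sigma[:k])       # TSVD solution x_k^{tsvd}
    errors.append(np.linalg.norm(x_k - x_true))
errors = np.array(errors)
print("best k:", np.argmin(errors) + 1)        # dips near the transition index
```

Too small a $k$ leaves a large regularization error, too large a $k$ lets the amplified noise terms $u_i^Te/\sigma_i$ dominate; the minimum sits where $|u_i^T\hat b|$ drops below the noise level.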
Let $U_k=(u_1,\ldots,u_k)$, $V_k=(v_1,\ldots,v_k)$ and $\Sigma_k=
{\rm diag}(\sigma_1,\ldots,\sigma_k)$, and define $A_k=U_k\Sigma_k V_k^T$.
Then $A_k$ is the best rank $k$ approximation to $A$ with
$\|A-A_k\|=\sigma_{k+1}$ (cf. \cite[p.12]{bjorck96}), and
$
x_{k}^{tsvd}=A_k^{\dagger}b
$
is the minimum-norm least squares solution to
$$
\min\limits_{x\in \mathbb{R}^{n}}\|A_kx-b\|
$$
that perturbs $A$ to $A_k$ in \eqref{eq1}. This interpretation will be
often exploited later.
The solution $x_{\lambda}$ of the Tikhonov regularization
has a filtered SVD expansion
\begin{equation}\label{eqfilter}
x_{\lambda} = \sum\limits_{i=1}^{n}f_i\frac{u_i^{T}b}{\sigma_i}v_i,
\end{equation}
where the $f_i=\frac{\sigma_i^2}{\sigma_i^2+\lambda^2}$ are called filters.
The TSVD method is a special filtered SVD method, where, in $x_k^{tsvd}$,
we take $f_i=1,\ i=1,2,\ldots,k$ and $f_i=0,\ i=k+1,\ldots,n$. The error
$x_{\lambda}-x_{true}$ can be written as the sum of the regularization and
perturbation errors, and an optimal $\lambda_{opt}$ aims to balance
these two errors and make the sum of their norms minimized
\cite{hansen98,hansen10,kirsch,vogel02}. The best possible regularized
solution $x_{\lambda_{opt}}$ retains the $k_0$ dominant SVD components
and dampens the other $n-k_0$ small SVD components as much as
possible \cite{hansen98,hansen10}. Apparently, the ability to acquire only
the largest SVD components of $A$ is fundamental in solving \eqref{eq1}.
A number of parameter-choice methods have been developed for finding
$\lambda_{opt}$ or $k_0$, such as the discrepancy principle \cite{morozov},
the L-curve criterion, whose use goes back to Miller \cite{miller} and
Lawson and Hanson \cite{lawson} and is termed much later and studied in detail
in \cite{hansen92,hansen93}, and the generalized cross validation
(GCV) \cite{golub79,wahba}; see, e.g.,
\cite{bauer11,hansen98,hansen10,kern,kilmer03,kindermann,neumaier98,reichel13,vogel02}
for numerous comparisons. All parameter-choice methods aim to
make $f_i/\sigma_i$ not small for $i=1,2,\ldots,k_0$
and $f_i/\sigma_i\approx 0$ for $i=k_0+1,\ldots,n$. Each of these methods
has its own merits and disadvantages, and
no one is absolutely reliable for all ill-posed problems.
For example, some of the mentioned parameter-choice methods
may fail to find accurate approximations to $\lambda_{opt}$;
see \cite{hanke96a,vogel96} for an analysis
on the L-curve method and \cite{hansen98} for some other parameter-choice
methods. A further investigation on paramater-choice methods is not our concern
in this paper.
The TSVD method is important in its own right.
It and the standard-form Tikhonov regularization
produce very similar solutions with essentially the minimum
2-norm error, i.e., the worst-case error \cite[p.13]{kirsch};
see \cite{varah79}, \cite{hansen90b}, \cite[p.109-11]{hansen98} and
\cite[Sections 4.2 and 4.4]{hansen10}. Indeed, for a linear compact
equation $Kx=g$ including \eqref{eq2} with the noisy $g$ and true solution
$x_{true}(t)$, under the source condition that its solution $x_{true}(t)
\in {\cal R}(K^*)$ or $x_{true}(t)\in {\cal R}(K^*K)$, the range of
the adjoint $K^*$ of $K$ or
that of $K^*K$, which amounts to assuming that $x_{true}(t)$ or its
derivative is squares integrable, the errors of
the best regularized solutions by the TSVD method and the Tikhonov
regularization are {\em order optimal, i.e., the same order
as the worst-case error} \cite[p.13,18,20,32-40]{kirsch},
\cite[p.90]{natterer} and \cite[p.7-12]{vogel02}. These conclusions
carry over to \eqref{eq1} \cite[p.8]{vogel02}.
Therefore, either of $x_{\lambda_{opt}}$ and $x_{k_0}^{tsvd}$
is a best possible solution to \eqref{eq1} under the above
assumptions and can be taken as standard reference
when assessing the regularizing effects of an iterative
solver. For the sake of clarity, we will take $x_{k_0}^{tsvd}$.
For \eqref{eq1} large, the TSVD method and the Tikhonov regularization
method are generally too demanding, and only iterative regularization
methods are computationally viable. A major class of methods has been
Krylov iterative solvers that project \eqref{eq1} onto a sequence of
low dimensional Krylov subspaces
and compute iterates to approximate $x_{true}$; see, e.g.,
\cite{aster,engl00,gilyazov,hanke95,hansen98,hansen10,kirsch}.
Of Krylov iterative solvers, the CGLS (or CGNR) method,
which implicitly applies the Conjugate Gradient (CG)
method \cite{golub89,hestenes} to the normal equations $A^TAx=A^Tb$
of \eqref{eq1}, and its mathematically equivalent LSQR algorithm \cite{paige82}
have been most commonly used. The Krylov solvers CGME
(or CGNE) \cite{bjorck96,bjorck15,craig,hanke95,hanke01,hps09} and
LSMR \cite{bjorck15,fong} are also choices, which amount to the
CG method applied to $\min\|AA^Ty-b\|$ with $x=A^Ty$ and MINRES \cite{paige75}
applied to $A^TAx=A^Tb$, respectively. These Krylov solvers have been
intensively studied and known to have regularizing
effects \cite{aster,eicke,gilyazov,hanke95,hanke01,hansen98,hansen10,hps09}
and exhibit semi-convergence \cite[p.89]{natterer};
see also \cite[p.314]{bjorck96}, \cite[p.733]{bjorck15},
\cite[p.135]{hansen98} and \cite[p.110]{hansen10}: The iterates
converge to $x_{true}$ and their norms increase steadily,
and the residual norms decrease in an initial stage; then afterwards the
noise $e$ starts to deteriorate the iterates so that they start to diverge
from $x_{true}$ and instead converge to $x_{naive}$,
while their norms increase considerably and the residual norms stabilize.
If we stop at the right time, then, in principle,
we have a regularization method, where the iteration number plays the
role of the regularization parameter.
Semi-convergence is due to the fact that the projected problem starts to
inherit the ill-conditioning of \eqref{eq1} from some iteration
onwards, and the appearance of a small singular
value of the projected problem amplifies the noise considerably.
The regularizing effects of CG type methods were noticed by
Lanczos \cite{lanczos} and were rediscovered in \cite{johnsson,squire,tal}.
Based on these works and motivated by a heuristic explanation on good
numerical results with very few iterations using CGLS
in \cite{johnsson}, and realizing that such an excellent performance
can only be expected if convergence to the regular part
of the solution, i.e., $x_{k_0}^{tsvd}$, takes place before the effects of
ill-posedness show up, on page 13 of \cite{bjorck79},
Bj\"{o}rck and Eld\'{e}n in 1979 foresightedly expressed a fundamental concern
on CGLS (and LSQR): {\em More
research is needed to tell for which problems this approach will work, and
what stopping criterion to choose.} See also \cite[p.145]{hansen98}.
As remarked by Hanke and Hansen \cite{hanke93}, the paper \cite{bjorck79}
was the only extensive survey on algorithmic details until that time,
and a strict proof of the regularizing properties of conjugate gradients is
extremely difficult.
An enormous effort has long been made to the study of
regularizing effects of LSQR and CGLS (cf.
\cite{eicke,engl00,firro97,gilyza86,hanke95,hanke01,hansen98,hansen10,
hps09,kirsch,nemi,nolet,paige06,scales,vorst90}) in the Hilbert
or finite dimensional space setting, but a rigorous regularization theory
of LSQR and CGLS for \eqref{eq1} is still lacking,
and there has been no definitive answer to the above long-standing
fundamental question, and the same is for LSMR and CGME.
For $A$ symmetric, MINRES and MR-II applied to $Ax=b$
directly are alternatives and have been
shown to have regularizing effects \cite{calvetti01,hanke95,hanke96,
hansen10,jensen07,kilmer99}, but MR-II seems preferable since
the noisy $b$ is excluded in the underlying subspace \cite{huang15,jensen07}.
For $A$ nonsymmetric or multiplication with $A^{T}$ difficult to compute,
GMRES and RRGMRES are candidate methods
\cite{baglama07,calvetti02,calvetti02c,neuman12}, and the latter
may be better \cite{jensen07}. The hybrid
approaches based on the Arnoldi process have been first
proposed in \cite{calvetti00b} and studied in
\cite{calvetti01,calvetti03,lewis09,novati13}.
Gazzola and her coauthors \cite{gazzola14}--\cite{gazzola-online}
have described a general framework of the hybrid methods
and presented various Krylov-Tikhonov methods with different parameter-choice
strategies. Unfortunately, unlike LSQR and CGLS, these methods are
highly problem dependent and may not have regularizing effects for general
nonsymmetric ill-posed problems; see, e.g., \cite{jensen07} and \cite[p.126]{hansen10}.
The fundamental cause is that the underlying Krylov subspaces may not favor the
dominant left and right singular subspaces of $A$,
which are desired in solving \eqref{eq1}.
The behavior of ill-posed problems critically depends on the decay rate of
$\sigma_j$. The following characterization of the degree of ill-posedness
of \eqref{eq1} was introduced in \cite{hofmann86}
and has been widely used \cite{aster,engl00,hansen98,hansen10,mueller}:
If $\sigma_j=\mathcal{O}(j^{-\alpha})$, then \eqref{eq1}
is mildly or moderately ill-posed for $\frac{1}{2}<\alpha\le1$ or $\alpha>1$.
If $\sigma_j=\mathcal{O}(\rho^{-j})$ with $\rho>1$,
$j=1,2,\ldots,n$, then \eqref{eq1} is severely ill-posed.
Here for mildly ill-posed problems we add the requirement
$\alpha>\frac{1}{2}$, which does not appear in \cite{hofmann86}
but must be met for $k(s,t)\in L^2({\Omega\times\Omega})$
in \eqref{eq2} \cite{hanke93,hansen98}.
In the one-dimensional case, i.e., $q=1$, \eqref{eq1}
is severely ill-posed with $k(s,t)$ sufficiently smooth, and
it is moderately ill-posed with $\sigma_j=\mathcal{O}(j^{-p-1/2})$,
where $p$ is the highest order of continuous derivatives of
$k(s,t)$; see, e.g., \cite[p.8]{hansen98} and \cite[p.10-11]{hansen10}.
Clearly, the singular values $\sigma_j$ for a
severely ill-posed problem decay at the same rate $\rho^{-1}$,
while those of a moderately or mildly ill-posed problem decay
at the rate $\left(\frac{j}{j+1}\right)^{\alpha}$,
which approaches one more quickly with increasing $j$ for a mildly ill-posed
problem than for a moderately ill-posed problem.
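These decay-rate statements are easy to check numerically on model spectra (the choices $\rho=2$, $\alpha=2$ and $\alpha=0.8$ are arbitrary illustrative values):

```python
import numpy as np

j = np.arange(1, 51, dtype=float)
sev = 2.0 ** -j        # severely ill-posed: sigma_j = rho^{-j}
mod = j ** -2.0        # moderately ill-posed: sigma_j = j^{-alpha}, alpha > 1
mild = j ** -0.8       # mildly ill-posed: 1/2 < alpha <= 1
for name, s in [("severe", sev), ("moderate", mod), ("mild", mild)]:
    ratios = s[1:] / s[:-1]              # consecutive decay ratios
    print(name, ratios[0], ratios[-1])   # constant for severe, -> 1 otherwise
```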
If a regularized solution to \eqref{eq1} is at least as accurate as
$x_{k_0}^{tsvd}$, then it is called a best possible regularized solution.
Given \eqref{eq1}, if the regularized solution of an iterative regularization
solver at semi-convergence is a best possible one,
then, by the words of Bj\"{o}rck and Eld\'{e}n,
the solver {\em works} for the problem and is said to have
the {\em full} regularization. Otherwise, the solver is said to have
the {\em partial} regularization.
Because it has been unknown whether or not LSQR, CGLS, LSMR and CGME
have the full regularization for a given \eqref{eq1},
one commonly combines them with some explicit
regularization, so that the resulting hybrid variants
(hopefully) find best possible regularized solutions
\cite{aster,hansen98,hansen10}. A hybrid CGLS runs CGLS
for several trial regularization parameters $\lambda$ and
picks the best one among the candidates \cite{aster}. Its
disadvantages are
that regularized solutions cannot be updated with different $\lambda$
and there is no guarantee that the selected regularized solution
is a best possible one.
The hybrid LSQR variants have been advocated by Bj\"{o}rck and Eld\'{e}n
\cite{bjorck79} and O'Leary and Simmons \cite{oleary81}, and improved and
developed by Bj\"orck \cite{bjorck88} and Bj\"{o}rck, Grimme and
Van Dooren \cite{bjorck94}.
A hybrid LSQR first projects \eqref{eq1} onto Krylov
subspaces and then regularizes the projected problems explicitly.
It aims to remove the effects
of small Ritz values and expands a Krylov subspace until it
captures the $k_0$ dominant SVD components of $A$
\cite{bjorck88,bjorck94,hanke93,oleary81}.
The hybrid LSQR and CGME have been intensively studied in, e.g.,
\cite{bazan10,bazan14,berisha,chung08,hanke01,hanke93,lewis09,
neuman12,novati13,renaut} and \cite{aster,hansen10,hansen13}.
Within the framework of such hybrid solvers, it is hard to find a near-optimal
regularization parameter \cite{bjorck94,renaut}. More seriously,
as we will elaborate mathematically and numerically in the concluding
section of this paper, it may make no sense to speak of
the regularization of the projected problems and their optimal
regularization parameters since they may actually fail to satisfy
the discrete Picard conditions. In contrast,
if an iterative solver has the full regularization, we stop it after
semi-convergence. Obviously, we cannot emphasize too much
the importance of completely understanding the
regularization of LSQR, CGLS, LSMR and CGME. By the definition of
the full or partial regularization,
we now modify the concern of Bj\"{o}rck and Eld\'{e}n as:
{\em Do LSQR, CGLS, LSMR and CGME have the full or partial regularization for
severely, moderately and mildly ill-posed problems? How to identify
their full or partial regularization in practice?}
In this paper, assuming exact arithmetic, we first focus on LSQR and give a
rigorous analysis of its regularization for severely,
moderately and mildly ill-posed problems. Due to the mathematical equivalence
of CGLS and LSQR, the assertions on the full or partial regularization of
LSQR apply to CGLS as well. We then analyze the regularizing
effects of LSMR and CGME and draw definitive conclusions.
We prove that LSQR has the full regularization for severely and
moderately ill-posed problems once $\rho>1$ and $\alpha>1$ suitably,
and it generally has only the partial regularization for mildly ill-posed
problems. In Section \ref{lsqr}, we describe the
Lanczos bidiagonalization process and LSQR, and make an
introductory analysis. In Section \ref{sine},
we establish accurate $\sin\Theta$ theorems for the 2-norm
distance between the underlying $k$-dimensional Krylov subspace and the
$k$-dimensional dominant right singular subspace of $A$. We then derive
some follow-up results that play a central role in analyzing
the regularization of LSQR. In Section \ref{rankapp},
we prove that a $k$-step Lanczos bidiagonalization always
generates a near best rank $k$ approximation to $A$, and the $k$ Ritz values
always approximate the first $k$ large singular values in natural order,
and no small Ritz value appears for $k=1,2,\ldots,k_0$.
This will show that LSQR has the full regularization.
For mildly ill-posed problems,
we prove that, for some $k\leq k_0$, the $k$ Ritz values generally
do not approximate the first $k$ large singular values in natural order
and LSQR generally has only the partial regularization.
In Section \ref{alphabeta}, we derive bounds for the entries of bidiagonal
matrices generated by Lanczos bidiagonalization, proving how fast they
decay and showing how to use them to reliably identify if
LSQR has the full regularization when
the degree of ill-posedness of \eqref{eq1} is unknown in advance.
Exploiting some of the results on LSQR, we analyze the regularization of
LSMR and CGME and prove that LSMR has similar regularizing effects to LSQR
for each kind of problem and both of them are superior to CGME.
In Section \ref{compare}, we present some perturbation results and
prove that LSQR resembles the TSVD method for severely and moderately
ill-posed problems. In Section \ref{multiple}, with a number of
nontrivial changes and reformulations, we extend all the results to
the case that $A$ has multiple singular values. In Section \ref{numer},
we report numerical experiments to confirm our theory on LSQR. Finally,
we summarize the paper with further remarks in Section \ref{concl}.
Throughout the paper, denote by $\mathcal{K}_{k}(C, w)=
span\{w,Cw,\ldots,C^{k-1}w\}$ the $k$-dimensional Krylov subspace generated
by the matrix $\mathit{C}$ and the vector $\mathit{w}$, and by $I$ and the
bold letter $\mathbf{0}$ the identity matrix
and the zero matrix with orders clear from the context, respectively.
For $B=(b_{ij})$, we define the nonnegative matrix $|B|=(|b_{ij}|)$,
and for $|C|=(|c_{ij}|)$, $|B|\leq |C|$ means
$|b_{ij}|\leq |c_{ij}|$ componentwise.
\section{The LSQR algorithm}\label{lsqr}
LSQR is based on the Lanczos bidiagonalization process, which
computes two orthonormal bases $\{q_1,q_2,\dots,q_k\}$ and
$\{p_1,p_2,\dots,p_{k+1}\}$ of $\mathcal{K}_{k}(A^{T}A,A^{T}b)$ and
$\mathcal{K}_{k+1}(A A^{T},b)$ for $k=1,2,\ldots,n$,
respectively. We describe the process as Algorithm 1.
{\bf Algorithm 1: \ $k$-step Lanczos bidiagonalization process}
\begin{remunerate}
\item Take $ p_1=b/\|b\| \in \mathbb{R}^{m}$, and define $\beta_1{q_0}=0$.
\item For $j=1,2,\ldots,k$
\begin{romannum}
\item
$r = A^{T}p_j - \beta_j{q_{j-1}}$
\item $\alpha_j = \|r\|;q_j = r/\alpha_j$
\item
$ z = Aq_j - \alpha_j{p_{j}}$
\item
$\beta_{j+1} = \|z\|;p_{j+1} = z/\beta_{j+1}.$
\end{romannum}
\end{remunerate}
Algorithm 1 can be written in the matrix form
\begin{align}
AQ_k&=P_{k+1}B_k,\label{eqmform1}\\
A^{T}P_{k+1}&=Q_{k}B_k^T+\alpha_{k+1}q_{k+1}e_{k+1}^{T},\label{eqmform2}
\end{align}
where $e_{k+1}$ denotes the $(k+1)$-th canonical basis vector of
$\mathbb{R}^{k+1}$, $P_{k+1}=(p_1,p_2,\ldots,p_{k+1})$,
$Q_k=(q_1,q_2,\ldots,q_k)$ and
\begin{equation}\label{bk}
B_k = \left(\begin{array}{cccc} \alpha_1 & & &\\ \beta_2 & \alpha_2 & &\\ &
\beta_3 &\ddots & \\& & \ddots & \alpha_{k} \\ & & & \beta_{k+1}
\end{array}\right)\in \mathbb{R}^{(k+1)\times k}.
\end{equation}
It is known from \eqref{eqmform1} that
\begin{equation}\label{Bk}
B_k=P_{k+1}^TAQ_k.
\end{equation}
We remind the reader that the singular values of $B_k$, called the Ritz values of $A$ with
respect to the left and right subspaces $span\{P_{k+1}\}$ and $span\{Q_k\}$,
are all simple. This basic fact will often be used later.
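Algorithm 1 transcribes directly into code. The following numpy sketch (illustrative random data; no reorthogonalization, so it is reliable only for modest $k$ in floating point) also verifies the relation $AQ_k=P_{k+1}B_k$ of \eqref{eqmform1}:

```python
import numpy as np

def lanczos_bidiag(A, b, k):
    """k-step Lanczos bidiagonalization of Algorithm 1 (no reorthogonalization)."""
    m, n = A.shape
    P = np.zeros((m, k + 1)); Q = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k + 1)   # beta[j] holds beta_{j+1}
    P[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        r = A.T @ P[:, j] - beta[j] * (Q[:, j - 1] if j > 0 else 0.0)
        alpha[j] = np.linalg.norm(r); Q[:, j] = r / alpha[j]
        z = A @ Q[:, j] - alpha[j] * P[:, j]
        beta[j + 1] = np.linalg.norm(z); P[:, j + 1] = z / beta[j + 1]
    B = np.vstack([np.diag(alpha), np.zeros(k)])  # (k+1) x k lower bidiagonal
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    return P, Q, B

rng = np.random.default_rng(4)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
P, Q, B = lanczos_bidiag(A, b, k=5)
print(np.max(np.abs(A @ Q - P @ B)))   # verifies A Q_k = P_{k+1} B_k
```

In practical ill-posed computations one adds full or partial reorthogonalization, since the computed $P_{k+1}$ and $Q_k$ quickly lose orthogonality once Ritz values converge.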
At iteration $k$, LSQR solves the problem
$\|Ax^{(k)}-b\|=\min_{x\in \mathcal{K}_k(A^TA,A^Tb)}
\|Ax-b\|$ and computes the iterates $x^{(k)}=Q_ky^{(k)}$ with
\begin{equation}\label{yk}
y^{(k)}=\arg\min\limits_{y\in \mathbb{R}^{k}}\|B_ky-\|b\|e_1^{(k+1)}\|
=\|b\| B_k^{\dagger} e_1^{(k+1)},
\end{equation}
where $e_1^{(k+1)}$ is the first canonical basis vector of $\mathbb{R}^{k+1}$,
and the residual norm $\|Ax^{(k)}-b\|$ decreases monotonically with respect to
$k$. We have $\|Ax^{(k)}-b\|=
\|B_k y^{(k)}-\|b\|e_1^{(k+1)}\|$ and $\|x^{(k)}\|=\|y^{(k)}\|$,
both of which can be cheaply computed.
Note that $\|b\|e_1^{(k+1)}=P_{k+1}^T b$. We have
\begin{equation}\label{xk}
x^{(k)}=Q_k B_k^{\dagger} P_{k+1}^Tb,
\end{equation}
that is, the iterate $x^{(k)}$ by LSQR is the minimum-norm least
squares solution to the perturbed
problem that replaces $A$ in \eqref{eq1} by its rank $k$ approximation
$P_{k+1}B_k Q_k^T$. Recall that the best rank $k$ approximation
$A_k$ to $A$ satisfies $\|A-A_k\|=\sigma_{k+1}$. We can relate
LSQR and the TSVD method from two perspectives. One of them is
to interpret LSQR as solving a nearby problem that perturbs $A_k$ to
$P_{k+1}B_k Q_k^T$, provided that
$P_{k+1}B_k Q_k^T$ is a near best rank $k$ approximation
to $A$ with an approximate accuracy $\sigma_{k+1}$. The other is
to interpret $x_k^{tsvd}$ and $x^{(k)}$ as the solutions to
the two perturbed problems of \eqref{eq1} that replace $A$ by
the rank $k$ approximations with the same quality
to $A$, respectively. Both perspectives lead to the consequence:
the LSQR iterate $x^{(k_0)}$ is as accurate as $x_{k_0}^{tsvd}$
and is thus a best possible regularized solution to \eqref{eq1}, provided that
$P_{k+1}B_k Q_k^T$ is a near best rank $k$
approximation to $A$ with the approximate accuracy $\sigma_{k+1}$
and the $k$ singular values of $B_k$ approximate the first $k$
large ones of $A$ in natural order for $k=1,2,\ldots,k_0$. Otherwise,
as will be clear later, $x^{(k_0)}$ cannot be as accurate as $x_{k_0}^{tsvd}$
if either $P_{k_0+1}B_{k_0} Q_{k_0}^T$ is not a near best rank $k_0$
approximation to $A$ or $B_{k_0}$ has at least one
singular value smaller than $\sigma_{k_0+1}$. We will give a precise
definition of a near best rank $k$ approximation later.
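As a sanity check of \eqref{xk} on illustrative random data (with a small Lanczos bidiagonalization routine included so the snippet is self-contained), the closed form $x^{(k)}=Q_kB_k^{\dagger}P_{k+1}^Tb$ agrees with the minimizer of $\|Ax-b\|$ over $span\{Q_k\}$ computed directly:

```python
import numpy as np

def lanczos_bidiag(A, b, k):  # as in Algorithm 1; no reorthogonalization
    m, n = A.shape
    P = np.zeros((m, k + 1)); Q = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k + 1)
    P[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        r = A.T @ P[:, j] - beta[j] * (Q[:, j - 1] if j > 0 else 0.0)
        alpha[j] = np.linalg.norm(r); Q[:, j] = r / alpha[j]
        z = A @ Q[:, j] - alpha[j] * P[:, j]
        beta[j + 1] = np.linalg.norm(z); P[:, j + 1] = z / beta[j + 1]
    B = np.vstack([np.diag(alpha), np.zeros(k)])
    B[np.arange(1, k + 1), np.arange(k)] = beta[1:]
    return P, Q, B

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 12))
b = rng.standard_normal(30)
k = 4
P, Q, B = lanczos_bidiag(A, b, k)
# Closed form x^{(k)} = Q_k B_k^dagger P_{k+1}^T b ...
x_formula = Q @ (np.linalg.pinv(B) @ (P.T @ b))
# ... versus minimizing ||Ax - b|| directly over span(Q_k).
y = np.linalg.lstsq(A @ Q, b, rcond=None)[0]
x_direct = Q @ y
print(np.max(np.abs(x_formula - x_direct)))
```

In exact arithmetic the two coincide by \eqref{yk}; in floating point they agree to roughly machine precision for small $k$.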
As stated in the introduction, the semi-convergence of LSQR must occur at some
iteration $k$. Under the discrete Picard condition
\eqref{picard}, if semi-convergence
occurs at iteration $k_0$, we are sure that LSQR has the full regularization
because $x^{(k_0)}$ has captured the $k_0$ dominant SVD components of $A$ and
effectively suppressed the other $n-k_0$ SVD components;
if semi-convergence occurs at some iteration
$k<k_0$, then LSQR has only the partial
regularization since it has not yet captured the needed $k_0$ dominant SVD
components of $A$.
\section{$\sin\Theta$ theorems for the distances between $\mathcal{K}_k(A^TA,A^Tb)$
and $span\{V_k\}$ and related results} \label{sine}
Van der Sluis and Van der Vorst \cite{vorst86} prove the following result,
which has been used in Hansen \cite{hansen98} and the references therein
to illustrate and analyze the regularizing effects of LSQR and CGLS. We
will also investigate it further in our paper.
\begin{prop}\label{help}
LSQR with the starting vector $p_1=b/\|b\|$ and CGLS
applied to $A^TAx=A^Tb$ with the starting vector
$x^{(0)}=0$ generate the same iterates
\begin{equation}\label{eqfilter2}
x^{(k)}=\sum\limits_{i=1}^nf_i^{(k)}\frac{u_i^{T}b}{\sigma_i}v_i,\
k=1,2,\ldots,n,
\end{equation}
where
\begin{equation}\label{filter}
f_i^{(k)}=1-\prod\limits_{j=1}^k\frac{(\theta_j^{(k)})^2-\sigma_i^2}
{(\theta_j^{(k)})^2},\ i=1,2,\ldots,n,
\end{equation}
and the $\theta_j^{(k)}$ are the singular values of $B_k$
labeled as $\theta_1^{(k)}>\theta_2^{(k)}>\cdots>\theta_k^{(k)}$.
\end{prop}
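Proposition~\ref{help} can be verified numerically. The sketch below is an illustrative addition (assuming NumPy; the Golub--Kahan bidiagonalization with full reorthogonalization is our own minimal implementation, not a reference one): it forms $B_k$, takes its singular values as the Ritz values $\theta_j^{(k)}$, and checks that the $k$-th LSQR iterate, obtained from the projected least squares problem, agrees with the filtered SVD expansion \eqref{eqfilter2}--\eqref{filter}.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 12, 8, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# Golub-Kahan bidiagonalization with full reorthogonalization:
# A Q_k = P_{k+1} B_k, with B_k (k+1) x k lower bidiagonal.
P, Q = np.zeros((m, k + 1)), np.zeros((n, k + 1))
alphas, betas = [], []
beta1 = np.linalg.norm(b)
P[:, 0] = b / beta1
v = A.T @ P[:, 0]
alphas.append(np.linalg.norm(v))
Q[:, 0] = v / alphas[0]
for j in range(k):
    u = A @ Q[:, j] - alphas[j] * P[:, j]
    u -= P[:, :j + 1] @ (P[:, :j + 1].T @ u)   # reorthogonalize against P
    betas.append(np.linalg.norm(u))
    P[:, j + 1] = u / betas[j]
    v = A.T @ P[:, j + 1] - betas[j] * Q[:, j]
    v -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ v)   # reorthogonalize against Q
    alphas.append(np.linalg.norm(v))
    Q[:, j + 1] = v / alphas[j + 1]

B = np.zeros((k + 1, k))
for j in range(k):
    B[j, j] = alphas[j]
    B[j + 1, j] = betas[j]

# k-th LSQR iterate: x^{(k)} = Q_k y_k, y_k = argmin ||B_k y - beta1 e_1||.
e1 = np.zeros(k + 1)
e1[0] = beta1
y = np.linalg.lstsq(B, e1, rcond=None)[0]
x_lsqr = Q[:, :k] @ y

# Filtered SVD form: the Ritz values theta_j are the singular values of B_k.
U_svd, s, Vt = np.linalg.svd(A, full_matrices=False)
theta = np.linalg.svd(B, compute_uv=False)
f = 1.0 - np.array([np.prod((theta ** 2 - si ** 2) / theta ** 2) for si in s])
x_filt = Vt.T @ (f * (U_svd.T @ b) / s)

rel_err = np.linalg.norm(x_lsqr - x_filt) / np.linalg.norm(x_filt)
```

The two expressions for $x^{(k)}$ agree to roundoff, and the Ritz values lie in $[\sigma_n,\sigma_1]$, as they must.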
\eqref{eqfilter2} shows that $x^{(k)}$ has a filtered SVD expansion of
the form \eqref{eqfilter}. If all the Ritz values $\theta_j^{(k)}$
approximate the first $k$ singular values
$\sigma_j$ of $A$ in natural order, then $f_i^{(k)}\approx 1$ for
$i=1,2,\ldots,k$, while the other $f_i^{(k)}$ decay monotonically to zero for
$i=k+1,k+2,\ldots,n$. If this is the case until $k=k_0$, the $k_0$-step LSQR
has the full regularization and computes a best possible regularized solution
$x^{(k_0)}$. However, if a small Ritz value appears at some
$k\leq k_0$, i.e., $\theta_{k-1}^{(k)}>\sigma_{k_0+1}$ and
$\theta_k^{(k)}\leq\sigma_{k_0+1}$, and if $j^*>k_0+1$ is the smallest integer
with $\sigma_{j^*}<\theta_k^{(k)}$, then the $f_i^{(k)}\in (0,1)$ tend to zero
monotonically for $i=j^*,j^*+1,\ldots,n$; on the other hand,
we have
$$
\prod\limits_{j=1}^k\frac{(\theta_j^{(k)})^2-\sigma_i^2}
{(\theta_j^{(k)})^2}=\frac{(\theta_k^{(k)})^2-\sigma_i^2}
{(\theta_k^{(k)})^2}\prod\limits_{j=1}^{k-1}
\frac{(\theta_j^{(k)})^2-\sigma_i^2}{(\theta_j^{(k)})^2}\leq 0,
\ i=k_0+1,\ldots,j^*-1
$$
since the first factor is non-positive and the second factor is positive.
Then we get $f_i^{(k)}\geq 1,\ i=k_0+1,\ldots,j^*-1$, so that $x^{(k)}$
deteriorates and LSQR has only the partial regularization.
Hansen \cite[p.146-157]{hansen98} summarizes the known results on $f_i^{(k)}$;
a bound for $|f_i^{(k)}-1|,\ i=1,2,\ldots,k$
is given in \cite[p.155]{hansen98}, but no accurate estimate for
the bound is available. As we will see in Section \ref{compare},
the results to be established in this paper can be used for this purpose,
and, more importantly, we will show that the bound in \cite[p.155]{hansen98}
can be sharpened substantially.
The standard $k$-step Lanczos bidiagonalization method computes the
$k$ Ritz values $\theta_j^{(k)}$, which are used to approximate some of
the singular values of $A$, and is mathematically equivalent to the
symmetric Lanczos method for
the eigenvalue problem of $A^TA$ starting with $q_1=A^Tb/\|A^Tb\|$;
see \cite{bjorck96,bjorck15} or \cite{reichel05,jia03,jia10} for
several variations that are based on standard, harmonic,
refined projection \cite{bai,stewart01,vorst02}
or a combination of them. A general convergence
theory of harmonic and refined harmonic projection methods was lacking
in the books \cite{bai,stewart01,vorst02} and was later established
in \cite{jia05}. As is known from \cite{bjorck96,meurant,parlett}, for a general
singular value distribution and a general vector $b$, some of the $k$ Ritz
values become good approximations to the
largest and smallest singular values of $A$ as $k$ increases.
If large singular values are well separated but
small singular values are clustered, large Ritz values
converge fast but small Ritz values converge very slowly.
For \eqref{eq1}, we see from \eqref{eqsvd} and
\eqref{picard} that $A^Tb$ contains more information on dominant right
singular vectors than on the ones corresponding to small singular values.
Therefore, $\mathcal{K}_k(A^TA,A^Tb)$ hopefully contains richer
information on the first $k$ right singular vectors $v_i$ than on the
other $n-k$ ones, at least for $k$ small.
Furthermore, note that $A$ has many small singular values clustered at zero.
Due to these two basic facts, all the Ritz values are expected to
approximate the large singular values of $A$ in natural order until some
iteration $k$, at which a small Ritz value shows up
and the regularized solutions then start to be contaminated by the
noise $e$ dramatically after that iteration. These qualitative
arguments are frequently used to analyze
and elaborate the regularizing effects of LSQR and CGLS; see,
e.g., \cite{aster,hansen98,hansen08,hansen10,hansen13} and the references
therein. Clearly, these arguments are not precise and cannot help us
draw any definitive conclusion on the full or partial
regularization of LSQR. For a severely ill-posed example from
seismic tomography, it is reported in \cite{vorst90} that the desired
convergence of the Ritz values actually holds as long as the discrete
Picard condition is satisfied and there is a good separation
among the large singular values of $A$. Unfortunately,
there has been no mathematical justification of these observations.
A complete understanding of the regularization
of LSQR includes accurate solutions of the following basic problems:
How well or accurately does $\mathcal{K}_{k}(A^{T}A, A^{T}b)$
approximate or capture the $k$-dimensional dominant right singular subspace of
$A$? How accurate is the rank $k$ approximation $P_{k+1}B_kQ_k^T$ to $A$?
Can it be a near best rank $k$ approximation to $A$?
How does the noise level $\|e\|$ affect the approximation accuracy of
$\mathcal{K}_{k}(A^{T}A, A^{T}b)$ and $P_{k+1}B_kQ_k^T$ for $k\leq k_0$ and $k>k_0$,
respectively? What sufficient conditions on $\rho$ and $\alpha$ are needed
to guarantee that $P_{k+1}B_kQ_k^T$ is a near best rank $k$ approximation to $A$?
When do the Ritz values $\theta_i^{(k)}$
approximate $\sigma_i,\ i=1,2,\ldots,k$ in natural order? When does a
small Ritz value appear, i.e., when does $\theta_k^{(k)}<\sigma_{k_0+1}$ hold at some
$k\leq k_0$? We will make a rigorous and detailed analysis of these problems
and some closely related ones, present our results, and draw definitive
assertions on the regularization of LSQR for three kinds of ill-posed
problems.
In terms of the canonical angles $\Theta(\mathcal{X},\mathcal{Y})$ between
two subspaces $\mathcal{X}$ and $\mathcal{Y}$ of the same
dimension \cite[p.43]{stewartsun}, we first present the following $\sin\Theta$
theorem, showing how the $k$-dimensional Krylov subspace
$\mathcal{K}_{k}(A^{T}A, A^{T}b)$ captures or approximates the $k$-dimensional
dominant right singular subspace of $A$ for severely ill-posed problems.
\begin{theorem}\label{thm2}
Let the SVD of $A$ be as \eqref{eqsvd}. Assume that \eqref{eq1} is severely
ill-posed with $\sigma_j=\mathcal{O}(\rho^{-j})$ and $\rho>1$, $j=1,2,\ldots,n$,
and the discrete Picard condition \eqref{picard} is satisfied.
Let $\mathcal{V}_k=span\{V_k\}$ be the $k$-dimensional dominant right singular
subspace of $A$ spanned by the columns of $V_k=(v_1,v_2,\ldots,v_k)$
and $\mathcal{V}_k^R=\mathcal{K}_{k}(A^{T}A, A^{T}b)$. Then for
$k=1,2,\ldots,n-1$ we have
\begin{equation}\label{deltabound}
\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|=
\frac{\|\Delta_k\|}{\sqrt{1+\|\Delta_k\|^2}}
\end{equation}
with $\Delta_k \in \mathbb{R}^{(n-k)\times k}$ to be defined by \eqref{defdelta}
and
\begin{equation}\label{k1}
\|\Delta_1\|\leq \frac{\sigma_{2}}{\sigma_1}\frac{|u_2^Tb|}{|u_1^Tb|}
\left(1+\mathcal{O}(\rho^{-2})\right),
\end{equation}
\begin{equation}\label{eqres1}
\|\Delta_k\|\leq
\frac{\sigma_{k+1}}{\sigma_k}\frac{|u_{k+1}^Tb|}{|u_{k}^Tb|}
\left(1+\mathcal{O}(\rho^{-2})\right)
|L_{k_1}^{(k)}(0)|,\ k=2,3,\ldots,n-1,
\end{equation}
where
\begin{equation}\label{lk}
|L_{k_1}^{(k)}(0)|=\max_{j=1,2,\ldots,k}|L_j^{(k)}(0)|,
\ |L_j^{(k)}(0)|=\prod\limits_{i=1,i\ne j}^k\frac{\sigma_i^2}{|\sigma_j^2-
\sigma_i^2|},\,j=1,2,\ldots,k.
\end{equation}
In particular, we have
\begin{align}
\|\Delta_1\|&\leq\frac{\sigma_2^{2+\beta}}{\sigma_1^{2+\beta}}
\left(1+\mathcal{O}(\rho^{-2})\right), \label{case5}\\
\|\Delta_k\|&\leq\frac{\sigma_{k+1}^{2+\beta}}{\sigma_k^{2+\beta}}
\left(1+\mathcal{O}(\rho^{-2})\right)|L_{k_1}^{(k)}(0)|,\
k=2,3,\ldots,k_0, \label{case1}\\
\|\Delta_k\|&\leq\frac{\sigma_{k+1}}{\sigma_k}\left(1+\mathcal{O}(\rho^{-2})\right)
|L_{k_1}^{(k)}(0)|, \ k=k_0+1,\ldots, n-1.\label{case2}
\end{align}
\end{theorem}
{\em Proof}.
Let $U_n=(u_1,u_2,\ldots,u_n)$ be the matrix whose columns are the
first $n$ left singular vectors of $A$ defined by \eqref{eqsvd}.
Then the Krylov subspace $\mathcal{K}_{k}(\Sigma^2,
\Sigma U_n^Tb)=span\{DT_k\}$ with
\begin{equation*}\label{}
D=\diag(\sigma_i u_i^Tb)\in\mathbb{R}^{n\times n},\ \
T_k=\left(\begin{array}{cccc} 1 &
\sigma_1^2&\ldots & \sigma_1^{2k-2}\\
1 &\sigma_2^2 &\ldots &\sigma_2^{2k-2} \\
\vdots & \vdots&&\vdots\\
1 &\sigma_n^2 &\ldots &\sigma_n^{2k-2}
\end{array}\right).
\end{equation*}
Partition the diagonal matrix $D$ and the matrix $T_k$ as follows:
\begin{equation*}\label{}
D=\left(\begin{array}{cc} D_1 & 0 \\ 0 & D_2 \end{array}\right),\ \ \
T_k=\left(\begin{array}{c} T_{k1} \\ T_{k2} \end{array}\right),
\end{equation*}
where $D_1, T_{k1}\in\mathbb{R}^{k\times k}$. Since $T_{k1}$ is
a Vandermonde matrix and the $\sigma_j$, $j=1,2,\ldots,k$, are assumed to be distinct,
it is nonsingular. Therefore, from $\mathcal{K}_{k}(A^{T}A, A^{T}b)=span\{VDT_k\}$
we have
\begin{equation}\label{kry}
\mathcal{V}_k^R=\mathcal{K}_{k}(A^{T}A, A^{T}b)=span
\left\{V\left(\begin{array}{c} D_1T_{k1} \\ D_2T_{k2} \end{array}\right)\right\}
=span\left\{V\left(\begin{array}{c} I \\ \Delta_k \end{array}\right)\right\},
\end{equation}
where
\begin{equation}\label{defdelta}
\Delta_k=D_2T_{k2}T_{k1}^{-1}D_1^{-1}\in \mathbb{R}^{(n-k)\times k}.
\end{equation}
Write $V=(V_k, V_k^{\perp})$, and define
\begin{equation}\label{zk}
Z_k=V\left(\begin{array}{c} I \\ \Delta_k \end{array}\right)
=V_k+V_k^{\perp}\Delta_k.
\end{equation}
Then $Z_k^TZ_k=I+\Delta_k^T\Delta_k$, and the columns of
$\hat{Z}_k=Z_k(Z_k^TZ_k)^{-\frac{1}{2}}$
form an orthonormal basis of $\mathcal{V}_k^R$. So we get an
orthogonal direct sum decomposition of $\hat{Z}_k$:
\begin{equation}\label{decomp}
\hat{Z}_k=(V_k+V_k^{\perp}\Delta_k)(I+\Delta_k^T\Delta_k)^{-\frac{1}{2}}.
\end{equation}
By definition and \eqref{decomp}, for the matrix 2-norm we obtain
\begin{align}\label{sindef}
\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|
&=\|(V_k^{\perp})^T\hat{Z}_k\|
=\|\Delta_k(I+\Delta_k^T\Delta_k)^{-\frac{1}{2}}\|
=\frac{\|\Delta_k\|}{\sqrt{1+\|\Delta_k\|^2}},
\end{align}
which is \eqref{deltabound}.
Next we estimate $\|\Delta_k\|$. For $k=2,3,\ldots,n-1$,
it is easily justified that the $j$-th column of $T_{k1}^{-1}$ consists of
the coefficients of the $j$-th Lagrange polynomial
\begin{equation*}\label{}
L_j^{(k)}(\lambda)=\prod\limits_{i=1,i\neq j}^k
\frac{\lambda-\sigma_i^2}{\sigma_j^2-\sigma_i^2}
\end{equation*}
that interpolates the elements of the $j$-th canonical basis vector
$e_j^{(k)}\in \mathbb{R}^{k}$ at the abscissas $\sigma_1^2,\sigma_2^2,
\ldots, \sigma_k^2$. Consequently, the $j$-th column of $T_{k2}T_{k1}^{-1}$ is
\begin{equation}\label{tk12i}
T_{k2}T_{k1}^{-1}e_j^{(k)}=(L_j^{(k)}(\sigma_{k+1}^2),\ldots,L_j^{(k)}
(\sigma_{n}^2))^T, \ j=1,2,\ldots,k,
\end{equation}
from which we obtain
\begin{equation}\label{tk12}
T_{k2}T_{k1}^{-1}=\left(\begin{array}{cccc} L_1^{(k)}(\sigma_{k+1}^2)&
L_2^{(k)}(\sigma_{k+1}^2)&\ldots & L_k^{(k)}(\sigma_{k+1}^2)\\
L_1^{(k)}(\sigma_{k+2}^2)&L_2^{(k)}(\sigma_{k+2}^2) &\ldots &
L_k^{(k)}(\sigma_{k+2}^2) \\
\vdots & \vdots&&\vdots\\
L_1^{(k)}(\sigma_{n}^2)&L_2^{(k)}(\sigma_{n}^2) &\ldots &L_k^{(k)}(\sigma_{n}^2)
\end{array}\right)\in \mathbb{R}^{(n-k)\times k}.
\end{equation}
Since $|L_j^{(k)}(\lambda)|$ is monotonically
decreasing for $0\leq \lambda<\sigma_k^2$, it is bounded by $|L_j^{(k)}(0)|$.
With this property and the definition of $L_{k_1}^{(k)}(0)$
by \eqref{lk}, we get
\begin{align}
|\Delta_k|&=|D_2T_{k2}T_{k1}^{-1}D_1^{-1}| \notag \\
&\leq
\left(\begin{array}{cccc}
\frac{\sigma_{k+1}}{\sigma_1}\left|\frac{u_{k+1}^Tb}
{u_1^Tb}\right||L_{k_1}^{(k)}(0)|, &\frac{\sigma_{k+1}}{\sigma_2}
\left|\frac{u_{k+1}^Tb}
{u_2^Tb}\right||L_{k_1}^{(k)}(0)|, &\ldots&\frac{\sigma_{k+1}}{\sigma_k}
\left|\frac{u_{k+1}^Tb}{u_k^Tb}\right||L_{k_1}^{(k)}(0)| \\
\frac{\sigma_{k+2}}{\sigma_1}\left|\frac{ u_{k+2}^Tb}
{u_1^Tb}\right| |L_{k_1}^{(k)}(0)|, &\frac{\sigma_{k+2}}{\sigma_2}
\left|\frac{u_{k+2}^Tb}
{u_2^Tb}\right| |L_{k_1}^{(k)}(0)|,&
\ldots &\frac{\sigma_{k+2}}{\sigma_k}\left|\frac{u_{k+2}^Tb}
{u_k^Tb}\right| |L_{k_1}^{(k)}(0)| \\
\vdots &\vdots & &\vdots\\
\frac{\sigma_n}{\sigma_1}\left|\frac{u_n^Tb}
{u_1^Tb}\right| |L_{k_1}^{(k)}(0)|, &\frac{\sigma_n}{\sigma_2}\left|\frac{u_n^Tb}
{u_2^Tb}\right| |L_{k_1}^{(k)}(0)|,& \ldots &
\frac{\sigma_n}{\sigma_k}\left|\frac{u_n^Tb}{u_k^Tb}\right| |L_{k_1}^{(k)}(0)|
\end{array}
\right) \notag\\
&= |L_{k_1}^{(k)}(0)||\tilde\Delta_k|, \label{amplify}
\end{align}
where
\begin{equation}
|\tilde\Delta_k|=\left|(\sigma_{k+1} u_{k+1}^T b,\sigma_{k+2}u_{k+2}^Tb,
\ldots,\sigma_n u_n^T b)^T
\left(\frac{1}{\sigma_1 u_1^Tb},\frac{1}{\sigma_2 u_2^Tb},\ldots,
\frac{1}{\sigma_k u_k^Tb}\right)\right| \label{delta1}
\end{equation}
is a rank one matrix. Therefore, by $\|C\|\leq \||C|\|$
(cf. \cite[p.53]{stewart98}), we get
\begin{align}
\|\Delta_k\| &\leq \||\Delta_k|\|\leq |L_{k_1}^{(k)}(0)|
\left\||\tilde\Delta_k|\right\| \notag\\
&=|L_{k_1}^{(k)}(0)|\left(\sum_{j=k+1}^n\sigma_j^2| u_j^Tb|^2\right)^{1/2}
\left(\sum_{j=1}^k \frac{1}{\sigma_j^2| u_j^Tb|^2}\right)^{1/2}.
\label{delta2}
\end{align}
By the discrete Picard condition \eqref{picard}, \eqref{picard1} and the
description between them,
for the white noise $e$, it is known from \cite[p.70-1]{hansen98} and
\cite[p.41-2]{hansen10} that
$
| u_j^T b|\approx | u_j^T \hat{b}|=\sigma_j^{1+\beta}
$
decreases as $j$ increases up to $k_0$ and then stabilizes at
$
| u_j^T b|\approx | u_j^T e |\approx \eta \approx \frac{\|e\|}{\sqrt{m}},
$
a small constant, for $j>k_0$. In order to simplify the
derivation and present our results compactly, in terms of these
assumptions and properties, in later proofs we will use the following precise
equalities and inequalities:
\begin{align}
|u_j^T b|&= |u_j^T\hat{b}|=\sigma_j^{1+\beta},\ j=1,2,\ldots,k_0,\label{ideal3}\\
|u_j^T b|&=|u_j^T e|=\eta, \ j=k_0+1,\ldots,n,\label{ideal2}\\
|u_{j+1}^Tb|&\leq |u_j^Tb |,\ j=1,2,\ldots,n-1.\label{ideal}
\end{align}
From \eqref{ideal} and $\sigma_j=\mathcal{O}(\rho^{-j}),\ j=1,2,\ldots,n$,
for $k=1,2,\ldots,n-1$ we obtain
\begin{align}
\left(\sum_{j=k+1}^n\sigma_j^2| u_j^Tb|^2\right)^{1/2}
&= \sigma_{k+1}| u_{k+1}^Tb| \left(\sum_{j=k+1}^n
\frac{\sigma_j^2| u_j^Tb|^2}{\sigma_{k+1}^2| u_{k+1}^Tb|^2}\right)^{1/2}
\notag\\
&\leq \sigma_{k+1}| u_{k+1}^Tb| \left(\sum_{j=k+1}^n
\frac{\sigma_j^2}{\sigma_{k+1}^2}\right)^{1/2} \notag\\
&=\sigma_{k+1}| u_{k+1}^Tb|\left(1+\sum_{j=k+2}^n\mathcal{O}
(\rho^{2(k-j)+2})\right)^{1/2}
\notag \\
&=\sigma_{k+1}| u_{k+1}^Tb|\left(1+\mathcal{O}\left(\sum_{j=k+2}^n
\rho^{2(k-j)+2}\right)\right)^{1/2}
\notag \\
&=\sigma_{k+1}| u_{k+1}^Tb|\left(1+ \mathcal{O}\left(\frac{\rho^{-2}}
{1-\rho^{-2}}\left(1-\rho^{-2(n-k-1)}\right)\right)\right)^{1/2}
\notag \\
&=\sigma_{k+1}| u_{k+1}^Tb| \left(1+\mathcal{O}(\rho^{-2})\right)^{1/2}\notag\\
&=\sigma_{k+1}| u_{k+1}^Tb| \left(1+\mathcal{O}(\rho^{-2})\right)
\label{severe1}
\end{align}
with $1+\mathcal{O}(\rho^{-2})$ replaced by one for $k=n-1$. In a similar manner,
for $k=2,3,\ldots,n-1$, from \eqref{ideal} we get
\begin{align*}
\left(\sum_{j=1}^k \frac{1}{\sigma_j^2| u_j^Tb|^2}\right)^{1/2}
&=\frac{1}{\sigma_k | u_k^T b|}\left(\sum_{j=1}^k\frac{\sigma_k^2| u_k^Tb|^2}
{\sigma_j^2| u_j^Tb|^2}\right)^{1/2}
\leq \frac{1}{\sigma_k | u_k^T b|}\left(\sum_{j=1}^k\frac{\sigma_k^2}
{\sigma_j^2}\right)^{1/2} \\
&=\frac{1}{\sigma_k | u_k^T b|}\left(1+\mathcal{O}\left(\sum_{j=1}^{k-1}
\rho^{2(j-k)}\right)\right)^{1/2} \\
&=\frac{1}{\sigma_k | u_k^T b|}\left(1+\mathcal{O}(\rho^{-2})\right).
\end{align*}
From the above and \eqref{delta2}, we finally obtain
$$
\|\Delta_k\|\leq \frac{\sigma_{k+1}}{\sigma_k}\frac{| u_{k+1}^T b|}
{| u_k^T b|}\left(1+\mathcal{O}(\rho^{-2})\right)|L_{k_1}^{(k)}(0)|,
\ k=2,3,\ldots,n-1,
$$
which proves \eqref{eqres1}.
Note that the Lagrange polynomials $L_j^{(k)}(\lambda)$ require $k\geq 2$, so
we need to treat the case $k=1$ separately:
from \eqref{defdelta} and \eqref{ideal}, observe that
$$
T_{k2}=(1,1,\ldots,1)^T,\ D_2T_{k2}=(\sigma_2u_2^Tb,\sigma_3 u_3^Tb,
\ldots,\sigma_n u_n^Tb)^T,\
T_{k1}^{-1}=1,\ D_1^{-1}=\frac{1}{\sigma_1 u_1^Tb}.
$$
Therefore, we have
\begin{equation}\label{deltaexp}
\Delta_1=(\sigma_2u_2^Tb,\sigma_3 u_3^Tb,
\ldots,\sigma_n u_n^Tb)^T\frac{1}{\sigma_1 u_1^Tb},
\end{equation}
from which and \eqref{severe1} with $k=1$ we directly obtain \eqref{k1}.
In terms of the discrete Picard condition \eqref{picard},
\eqref{picard1}, \eqref{ideal3} and \eqref{ideal2}, we have
\begin{equation}\label{ratio1}
\frac{|u_{k+1}^Tb|}{| u_k^T b|}= \frac{|u_{k+1}^T\hat{b}|}{| u_k^T\hat{b}|}
=\frac{\sigma_{k+1}^{1+\beta}}{\sigma_k^{1+\beta}}, \ k\leq k_0
\end{equation}
and
\begin{equation}\label{ratio2}
\frac{|u_{k+1}^Tb|}{| u_k^T b|}= \frac{|u_{k+1}^T e|}{| u_k^T e|}
=1,\ k>k_0.
\end{equation}
Applying them to \eqref{k1} and \eqref{eqres1} establishes \eqref{case5},
\eqref{case1} and \eqref{case2}, respectively.
\qquad\endproof
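Both the exact identity \eqref{deltabound} and the bound \eqref{case1} can be checked numerically. The sketch below is an illustrative addition (assuming NumPy) on the diagonal model $A=\diag(\sigma_j)$, so that $U=V=I$ and the model assumptions \eqref{ideal3}--\eqref{ideal} hold exactly; the choices $\rho=2$, $\beta=0.5$, $n=12$, $k_0=6$ are ours.

```python
import numpy as np

# Diagonal test model A = diag(sigma) (so U = V = I), sigma_j = rho^{-j}:
rho, beta, n, k0 = 2.0, 0.5, 12, 6
sigma = rho ** -np.arange(1.0, n + 1)
b = sigma ** (1.0 + beta)              # |u_j^T b| = sigma_j^{1+beta} for j <= k0
b[k0:] = sigma[k0] ** (1.0 + beta)     # stabilized noise level eta for j > k0

k = 3
# Delta_k = D_2 T_{k2} T_{k1}^{-1} D_1^{-1} from the proof, computed as a whole:
DT = (sigma * b)[:, None] * np.vander(sigma ** 2, k, increasing=True)
Delta = DT[k:] @ np.linalg.inv(DT[:k])
nD = np.linalg.norm(Delta, 2)
sin_formula = nD / np.sqrt(1.0 + nD ** 2)       # right-hand side of the theorem

# Direct largest canonical angle between span{e_1,...,e_k} and K_k(A^T A, A^T b):
K = np.column_stack([sigma ** (2 * j + 1) * b for j in range(k)])
Zhat = np.linalg.qr(K)[0]
sin_direct = np.linalg.norm(Zhat[k:], 2)        # ||(V_k^perp)^T Z_hat||
```

In this run the two sine values agree to roundoff, and $\|\Delta_k\|$ is below twice $(\sigma_{k+1}/\sigma_k)^{2+\beta}$, consistent with the $1+\mathcal{O}(\rho^{-2})$ factor.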
We next estimate the factor $|L_{k_1}^{(k)}(0)|$ accurately.
\begin{theorem}\label{estlk}
For the severely ill-posed problem and $k=2,3,\ldots,n-1$, we have
\begin{align}
|L_k^{(k)}(0)|&=1+\mathcal{O}(\rho^{-2}), \label{lkkest}\\
|L_j^{(k)}(0)|&=\frac{1+\mathcal{O}(\rho^{-2})}
{\prod\limits_{i=j+1}^k\left(\frac{\sigma_{j}}{\sigma_i}\right)^2}
=\frac{1+\mathcal{O}(\rho^{-2})}{\mathcal{O}(\rho^{(k-j)(k-j+1)})},
\ j=1,2,\ldots,k-1, \label{lj0}\\
|L_{k_1}^{(k)}(0)|&=\max_{j=1,2,\ldots,k}|L_j^{(k)}(0)|
=1+\mathcal{O}(\rho^{-2}). \label{lkk}
\end{align}
\end{theorem}
{\em Proof}.
Exploiting the Taylor series expansion and
$\sigma_i=\mathcal{O}(\rho^{-i})$ for $i=1,2,\ldots,n$,
by definition, for $j=1,2,\ldots,k-1$ we have
\begin{align}
|L_j^{(k)}(0)|&=\prod\limits_{i=1,i\neq j}^k
\left|\frac{\sigma_i^2}{\sigma_i^2-\sigma_j^2}\right|
=\prod\limits_{i=1}^{j-1}\frac{\sigma_i^2}{\sigma_i^2-\sigma_j^2}
\cdot\prod\limits_{i=j+1}^{k}\frac{\sigma_i^2}{\sigma_j^2-\sigma_{i}^2}
\notag\\
& =\prod\limits_{i=1}^{j-1}\frac{1}
{1-\mathcal{O}(\rho^{-2(j-i)})}
\prod\limits_{i=j+1}^{k}\frac{1}
{1-\mathcal{O}(\rho^{-2(i-j)})}\frac{1}
{\prod\limits_{i=j+1}^{k}\mathcal{O}(\rho^{2(i-j)})} \notag\\
&=\frac{\left(1+\sum\limits_{i=1}^j \mathcal{O}(\rho^{-2i})\right)
\left(1+\sum\limits_{i=1}^{k-j+1} \mathcal{O}(\rho^{-2i})\right)}
{\prod\limits_{i=j+1}^{k}\mathcal{O}(\rho^{2(i-j)})} \label{lik}
\end{align}
by absorbing those higher order terms into the two $\mathcal{O}(\cdot)$
in the numerator. For $j=k$, we get
\begin{align*}
|L_k^{(k)}(0)|&=\prod\limits_{i=1}^{k-1}
\left|\frac{\sigma_i^2}{\sigma_i^2-\sigma_{k}^2}\right|
=\prod\limits_{i=1}^{k-1}\frac{1}
{1-\mathcal{O}(\rho^{-2(k-i)})}=
\prod\limits_{i=1}^{k-1}\frac{1}
{1-\mathcal{O}(\rho^{-2i})}\\
&=1+\sum\limits_{i=1}^k \mathcal{O}(\rho^{-2i})
=1+\mathcal{O}\left(\sum\limits_{i=1}^k\rho^{-2i}\right)\\
&=1+ \mathcal{O}\left(\frac{\rho^{-2}}
{1-\rho^{-2}}(1-\rho^{-2k})\right)
=1+\mathcal{O}(\rho^{-2}),
\end{align*}
which is \eqref{lkkest}.
Note that for the numerator of \eqref{lik} we have
$$
1+\sum\limits_{i=1}^j \mathcal{O}(\rho^{-2i})
=1+ \mathcal{O}\left(\sum\limits_{i=1}^j\rho^{-2i}\right)
=1+ \mathcal{O}\left(\frac{\rho^{-2}}
{1-\rho^{-2}}(1-\rho^{-2j})\right),
$$
and
$$
1+\sum\limits_{i=1}^{k-j+1} \mathcal{O}(\rho^{-2i})
=1+ \mathcal{O}\left(\sum\limits_{i=1}^{k-j+1}\rho^{-2i}\right)
=1+ \mathcal{O}\left(\frac{\rho^{-2}}{1-\rho^{-2}}
(1-\rho^{-2(k-j+1)})\right),
$$
whose product for any $k$ is
$$
1+ \mathcal{O}\left(\frac{2\rho^{-2}}{1-\rho^{-2}}\right)
+\mathcal{O}\left(\left(\frac{\rho^{-2}}{1-\rho^{-2}}\right)^2\right)=
1+ \mathcal{O}\left(\frac{2\rho^{-2}}{1-\rho^{-2}}\right)
= 1+\mathcal{O}(\rho^{-2}).
$$
On the other hand, note that the denominator of \eqref{lik} is defined by
$$
\prod\limits_{i=j+1}^k\left(\frac{\sigma_{j}}{\sigma_i}\right)^2
=\prod\limits_{i=j+1}^{k}\mathcal{O}(\rho^{2(i-j)})
=\mathcal{O}((\rho\cdot\rho^2\cdots\rho^{k-j})^2)
=\mathcal{O}(\rho^{(k-j)(k-j+1)}),
$$
which, together with the above estimate
for the numerator of \eqref{lik}, proves \eqref{lj0}.
Notice that
$
\prod\limits_{i=j+1}^k\left(\frac{\sigma_j}{\sigma_i}\right)^2
$
is always {\em bigger than one} for $j=1,2,\ldots,k-1$.
Therefore, for any $k$, combining \eqref{lkkest} and \eqref{lj0}
gives \eqref{lkk}.
\qquad\endproof
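A small numerical check of Theorem~\ref{estlk}, added here for illustration (assuming NumPy; the values $\rho=2$, $k=6$ are our choice): it computes all $|L_j^{(k)}(0)|$ for $\sigma_j=\rho^{-j}$ and confirms that they increase with $j$, attain their maximum at $j=k$ with value $1+\mathcal{O}(\rho^{-2})$, and decay extremely fast for small $j$.

```python
import numpy as np

rho, k = 2.0, 6
s2 = (rho ** -np.arange(1.0, k + 1)) ** 2     # sigma_j^2 for sigma_j = rho^{-j}
# |L_j^{(k)}(0)| = prod_{i != j} sigma_i^2 / |sigma_j^2 - sigma_i^2|
L = np.array([np.prod([s2[i] / abs(s2[j] - s2[i])
                       for i in range(k) if i != j]) for j in range(k)])
```

For $\rho=2$ one finds $|L_k^{(k)}(0)|\approx 1.45$, while $|L_1^{(k)}(0)|$ is of order $\rho^{-(k-1)k}$, in line with \eqref{lj0} and \eqref{lkk}.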
\begin{remark}\label{severerem}
\eqref{lkkest} and \eqref{lkk} have essentially been shown in {\rm \cite{huangjia}}.
Here we have given a general and complete proof. From \eqref{lkk}, we get
\begin{equation}\label{rhoL}
\left(1+\mathcal{O}(\rho^{-2})\right)|L_{k_1}^{(k)}(0)|= 1+
\mathcal{O}(\rho^{-2}), \ k=2,3,\ldots,n-1,
\end{equation}
so the results in Theorem~\ref{thm2} are simplified as
\begin{align}
\|\Delta_k\|&\leq\frac{\sigma_{k+1}^{2+\beta}}{\sigma_k^{2+\beta}}
\left(1+\mathcal{O}(\rho^{-2})\right),\ k=1,2,\ldots, k_0,
\label{case3}\\
\|\Delta_k\|&\leq\frac{\sigma_{k+1}}{\sigma_k}\left(1+\mathcal{O}(\rho^{-2})\right),
\ k=k_0+1,\ldots,n-1. \label{case4}
\end{align}
\end{remark}
\begin{remark}
\eqref{lj0} illustrates that $|L_j^{(k)}(0)|$ grows rapidly as $j$ increases:
the smaller $j$ is, the smaller $|L_j^{(k)}(0)|$ is.
\eqref{case3} and \eqref{case4} indicate that $\mathcal{V}_k^R$ captures
$\mathcal{V}_k$ better for $k\leq k_0$ than for $k>k_0$. That is,
after the transition point $k_0$, the noise $e$ starts to
deteriorate $\mathcal{V}_k^R$ and impairs its ability to capture
$\mathcal{V}_k$.
\end{remark}
In what follows we establish accurate estimates for
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$ for
moderately and mildly ill-posed problems.
\begin{theorem}\label{moderate}
Assume that \eqref{eq1} is moderately ill-posed with $\sigma_j=
\zeta j^{-\alpha},\ j=1,2,\ldots,n$,
where $\alpha>\frac{1}{2}$ and $\zeta>0$ is some constant,
and the other assumptions and notation are the same as in Theorem~\ref{thm2}.
Then \eqref{deltabound} holds with
\begin{align}
\|\Delta_1\|&\leq\frac{|u_2^Tb|}{|u_1^Tb|}\sqrt{\frac{1}{2\alpha-1}},\label{k2}\\
\|\Delta_k\|&\leq\frac{|u_{k+1}^Tb|}{|u_{k}^Tb|}
\sqrt{\frac{k^2}{4\alpha^2-1}+\frac{k}{2\alpha-1}}|L_{k_1}^{(k)}(0)|,\
k=2,3,\ldots,n-1. \label{modera1}
\end{align}
Particularly, we have
\begin{align}
\|\Delta_1\|&\leq \frac{\sigma_2^{1+\beta}}{\sigma_1^{1+\beta}}
\sqrt{\frac{1}{2\alpha-1}},\label{mod1}\\
\|\Delta_k\|&\leq \frac{\sigma_{k+1}^{1+\beta}}{\sigma_k^{1+\beta}}
\sqrt{\frac{k^2}{4\alpha^2-1}+\frac{k}{2\alpha-1}}|L_{k_1}^{(k)}(0)|, \
k=2,3,\ldots, k_0, \label{modera2} \\
\|\Delta_k\|&\leq
\sqrt{\frac{k^2}{4\alpha^2-1}+\frac{k}{2\alpha-1}}|L_{k_1}^{(k)}(0)|,\
k=k_0+1,\ldots, n-1.
\label{modera3}
\end{align}
\end{theorem}
{\em Proof}.
Following the proof of Theorem~\ref{thm2}, we know that $|\Delta_k|\leq
|L_{k_1}^{(k)}(0)||\tilde\Delta_k|$ still holds with
$\tilde{\Delta}_k$ defined by \eqref{delta1}. So we only need to
bound the right-hand side of \eqref{delta2}. For $k=1,2,\ldots, n-1$, from
\eqref{ideal} we get
\begin{align}
\left(\sum_{j=k+1}^n\sigma_j^2| u_j^Tb|^2\right)^{1/2}
&= \sigma_{k+1}| u_{k+1}^Tb| \left(\sum_{j=k+1}^n
\frac{\sigma_j^2| u_j^Tb|^2}{\sigma_{k+1}^2| u_{k+1}^Tb|^2}\right)^{1/2}
\notag\\
&\leq \sigma_{k+1}| u_{k+1}^Tb| \left(\sum_{j=k+1}^n
\frac{\sigma_j^2}{\sigma_{k+1}^2}\right)^{1/2} \notag\\
&= \sigma_{k+1}| u_{k+1}^Tb| \left(\sum_{j=k+1}^n \left(\frac{j}{k+1}
\right)^{-2\alpha}\right)^{1/2} \notag \\
&=\sigma_{k+1}| u_{k+1}^Tb|
\left((k+1)^{2\alpha}\sum_{j=k+1}^n \frac{1}{j^{2\alpha}}\right)^{1/2}
\notag\\
&< \sigma_{k+1}| u_{k+1}^Tb| (k+1)^{\alpha}\left(\int_k^{\infty}
\frac{1}{x^{2\alpha}} dx\right)^{1/2}
\notag \\
&= \sigma_{k+1}| u_{k+1}^Tb|\left(\frac{k+1}{k}\right)^{\alpha}
\sqrt{\frac{k}{2\alpha-1}} \notag\\
&=\sigma_{k+1}| u_{k+1}^Tb|\frac{\sigma_k}{\sigma_{k+1}} \sqrt{\frac{k}
{2\alpha-1}} \notag\\
&=\sigma_k | u_{k+1}^Tb|\sqrt{\frac{k}
{2\alpha-1}}.
\label{modeest}
\end{align}
Since the function $x^{2\alpha}$ with any $\alpha> \frac{1}{2}$
is convex over the interval $[0,1]$, for $k=2,3,\ldots, n-1$, from \eqref{ideal}
we obtain
\begin{align}
\left(\sum_{j=1}^k \frac{1}{\sigma_j^2| u_j^Tb|^2}\right)^{1/2}
&=\frac{1}{\sigma_k | u_k^T b|}\left(\sum_{j=1}^k
\frac{\sigma_k^2| u_k^Tb|^2}
{\sigma_j^2| u_j^Tb|^2}\right)^{1/2}
\leq\frac{1}{\sigma_k | u_k^T b|}\left(\sum_{j=1}^k\frac{\sigma_k^2}
{\sigma_j^2}\right)^{1/2} \notag \\
&=\frac{1}{\sigma_k | u_k^T b|}
\left(\sum_{j=1}^k \left(\frac{j}{k}
\right)^{2\alpha}\right)^{1/2} \notag \\
&=\frac{1}{\sigma_k | u_k^T b|}
\left(k\sum_{j=1}^{k} \frac{1}{k}\left(\frac{j-1}{k}
\right)^{2\alpha}+1\right)^{1/2} \label{sum1} \\
&< \frac{1}{\sigma_k | u_k^T b|} \left(k\int_0^1
x^{2\alpha}dx+1\right)^{1/2}\notag \\
&=\frac{1}{\sigma_k | u_k^T b|} \sqrt{\frac{k}{2\alpha+1}+1}. \label{estimate2}
\end{align}
Substituting the above and \eqref{modeest} into \eqref{delta2} establishes
\eqref{modera1}, from which and \eqref{ratio1}, \eqref{ratio2}
it follows that \eqref{modera2} and \eqref{modera3} hold. For $k=1$,
we still have \eqref{deltaexp}, from which and \eqref{modeest}
we obtain \eqref{k2}. From \eqref{ratio1} and \eqref{k2}
we get \eqref{mod1}.
\qquad\endproof
\begin{remark}
For a purely technical reason and for the sake of precise presentation,
we have used the simplified singular value
model $\sigma_j=\zeta j^{-\alpha}$ to replace the general form
$\sigma_j=\mathcal{O}(j^{-\alpha})$, where the constant in each
$\mathcal{O}(\cdot)$ is implicit. This model,
though simple, reflects the essence of moderately and mildly ill-posed
problems and avoids some troublesome derivations and
non-transparent formulations.
\end{remark}
\begin{remark}
In the spirit of the proof of Theorem~\ref{estlk}, exploiting
the first order Taylor expansion, we have an
estimate
\begin{align}
|L_{k_1}^{(k)}(0)|& \approx |L_k^{(k)}(0)|=
\prod\limits_{i=1}^{k-1}\frac{\sigma_i^2}{\sigma_i^2-\sigma_k^2}
=\prod\limits_{i=1}^{k-1}\frac{1}
{1-(\frac{i}{k})^{2\alpha}} \notag\\
&\approx 1+\sum\limits_{i=1}^{k-1}\left(\frac{i}{k}\right)^{2\alpha}
=1+k\sum\limits_{i=1}^k\frac{1}{k}\left(\frac{i-1}{k}\right)^{2\alpha}
\notag\\
&< 1+k\int_0^1 x^{2\alpha}dx=1+\frac{k}{2\alpha+1},
\label{lkkmoderate}
\end{align}
where the right-hand side of \eqref{lkkmoderate}
increases linearly with respect to $k$.
\end{remark}
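The quality of the first-order estimate \eqref{lkkmoderate} can be probed numerically. The sketch below is our own illustration (assuming NumPy, with $\alpha=2$ as an arbitrary moderate exponent): it evaluates $|L_k^{(k)}(0)|$ exactly for $\sigma_j=\zeta j^{-\alpha}$ and compares it with $1+\frac{k}{2\alpha+1}$; the estimate is close for small $k$ but increasingly underestimates the exact value as $k$ grows, consistent with the lower bound \eqref{lowerbound} derived later.

```python
import numpy as np

def L_kk0(k, alpha):
    # |L_k^{(k)}(0)| = prod_{i=1}^{k-1} 1/(1 - (i/k)^{2 alpha}) for sigma_j = zeta j^{-alpha}
    i = np.arange(1, k)
    return float(np.prod(1.0 / (1.0 - (i / k) ** (2.0 * alpha))))

alpha = 2.0
estimate = lambda k: 1.0 + k / (2.0 * alpha + 1.0)   # first-order estimate
exact5, exact10 = L_kk0(5, alpha), L_kk0(10, alpha)
```

For $\alpha=2$ the exact value at $k=5$ is about $2.0$, matching the estimate almost perfectly, while at $k=10$ the exact value is already several times larger than the estimate.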
\begin{remark}\label{rem3.5}
\eqref{k2} and \eqref{mod1} indicate that $\|\Delta_1\|<1$ is
guaranteed for moderately ill-posed problems with $\alpha>1$.
One might worry that the upper bounds \eqref{k2} and
\eqref{modera1} considerably overestimate $\|\Delta_k\|$ and thus
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$
because, in the proof, we have bounded the sum in \eqref{sum1}
from above by the integral in \eqref{estimate2} nearest to it, which
could be an overestimate for $k=1,2,\ldots,k_0$. This is not the case provided that
$k_0$ is not very small. In fact, since $\alpha>\frac{1}{2}$,
we can bound \eqref{sum1} from below by the integral nearest to it:
\begin{align*}
\frac{1}{\sigma_k | u_k^T b|}\left(k\sum_{j=1}^{k} \frac{1}{k}\left(\frac{j-1}{k}
\right)^{2\alpha}+1\right)^{1/2}
&> \frac{1}{\sigma_k | u_k^T b|} \left(k\int_0^{\frac{k-1}{k}}
x^{2\alpha}dx+1\right)^{1/2}\notag \\
&=\frac{1}{\sigma_k | u_k^T b|} \sqrt{\frac{k}{2\alpha+1}
\left(\frac{k-1}{k}\right)^{2\alpha+1}+1},
\end{align*}
which is close to \eqref{estimate2} once $k\leq k_0$ is not very small.
The smaller $\alpha$, the smaller the difference between
the upper and lower bounds, i.e., the sharper \eqref{estimate2}.
\end{remark}
\begin{remark}
It is easily seen from \eqref{deltabound} that
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$ increases monotonically
with respect to $\|\Delta_k\|$. For $\|\Delta_k\|$ reasonably small and
$\|\Delta_k\|$ large we have
$$
\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|\approx \|\Delta_k\|
\ \mbox{ and } \
\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|\approx 1,
$$
respectively. From \eqref{picard} and \eqref{picard1}, we obtain
$k_0=\lfloor\eta^{-\frac{1}{\alpha(1+\beta)}}\rfloor-1$,
where $\lfloor\cdot\rfloor$ denotes the floor function. For the white noise
$e$, we have $\eta\approx \frac{\|e\|}{\sqrt{m}}$.
As a result, for moderately ill-posed problems with $\alpha>1$,
$k_0$ is typically small and at most modest for a practical noise $e$, whose
relative size $\frac{\|e\|}{\|\hat{b}\|}$ typically ranges from
$10^{-4}$ to $10^{-2}$. This means that for a moderately ill-posed problem
$\|\Delta_k\|$ is at most modest and cannot be large, so that
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$ stays fairly below one.
\end{remark}
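The formula for $k_0$ is easy to evaluate for concrete parameters. The following sketch, added for illustration (the noise level $\eta=10^{-3}$, $\beta=0$, and the exponents $\alpha=2$ and $\alpha=0.8$ are representative values we choose, not data from any experiment), shows that $k_0$ is modest for a moderately ill-posed problem but considerably larger for a mildly ill-posed one:

```python
import math

def transition_index(eta, alpha, beta=0.0):
    # k0 = floor(eta^{-1/(alpha*(1+beta))}) - 1, the Picard transition level
    return math.floor(eta ** (-1.0 / (alpha * (1.0 + beta)))) - 1

eta = 1e-3                                         # eta ~ ||e||/sqrt(m), assumed level
k0_moderate = transition_index(eta, alpha=2.0)     # moderately ill-posed: k0 = 30
k0_mild = transition_index(eta, alpha=0.8)         # mildly ill-posed: k0 = 5622
```

This illustrates why, for the same noise level, a mildly ill-posed problem requires LSQR to capture far more dominant SVD components than a moderately ill-posed one.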
\begin{remark}
For severely ill-posed problems, since all the $\frac{\sigma_{k+1}}{\sigma_k}
\sim \rho^{-1}$, a constant, \eqref{case3} and \eqref{case4}
indicate that $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$
is essentially unchanged for $k=1,2,\ldots,k_0$ and $k=k_0+1,\ldots,n-1$,
respectively, that is, $\mathcal{V}_k^R$ captures $\mathcal{V}_k$ with almost
the same accuracy for $k\leq k_0$ and $k>k_0$, respectively. However,
the situation is different for moderately ill-posed
problems. For them, $\frac{\sigma_{k+1}}{\sigma_k}=
\left(\frac{k}{k+1}\right)^{\alpha}$ increases slowly as $k$ increases, and
the factor $\sqrt{\frac{k^2}{4\alpha^2-1}+\frac{k}{2\alpha-1}}
|L_{k_1}^{(k)}(0)|$ increases as $k$ grows. Therefore, \eqref{modera2} and
\eqref{modera3} illustrate that $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$
increases slowly with $k\leq k_0$ and $k> k_0$, respectively.
This means that $\mathcal{V}_k^R$ may not capture
$\mathcal{V}_k$ as well as it does for severely ill-posed problems as $k$
increases. In particular, starting with some $k>k_0$,
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$ starts to approach one,
which indicates that, for $k$ big, $\mathcal{V}_k^R$ will contain substantial
information on the right singular vectors corresponding to the $n-k$ small
singular values of $A$.
\end{remark}
\begin{remark}\label{mildrem}
For mildly ill-posed problems with $\frac{1}{2}<
\alpha\leq 1$, there are some distinctive features. Note
from \eqref{picard} and \eqref{picard1} that
$k_0$ is now considerably bigger than
that for a severely or moderately ill-posed problem
with the same noise level $\|e\|$ and $\beta$. As a result,
firstly, for $\alpha\leq 1$ and the same $k$, the factor $\frac{\sigma_{k+1}}
{\sigma_{k}}=\left(\frac{k}{k+1}\right)^{\alpha}$ is
bigger than that for the moderately ill-posed problem;
secondly, $\sqrt{\frac{k^2}{4\alpha^2-1}+\frac{k}{2\alpha-1}} \sim k$
if $\alpha\approx 1$ and is much bigger than $k$ and can be
arbitrarily large if $\alpha\approx \frac{1}{2}$; thirdly,
since $\frac{1}{2}<\alpha\leq 1$, for $k\geq 3$ that ensures
$\frac{2\alpha+1}{k}\leq 1$, we have
\begin{align}
|L_{k_1}^{(k)}(0)|&\geq |L_{k}^{(k)}(0)|=
\prod\limits_{i=1}^{k-1}\frac{\sigma_i^2}{\sigma_i^2-\sigma_{k}^2}
=\prod\limits_{i=1}^{k-1}\frac{1}
{1-(\frac{i}{k})^{2\alpha}} \notag\\
&> 1+\sum\limits_{i=1}^{k-1}\left(\frac{i}{k}\right)^{2\alpha}
>1+k\int_0^{\frac{k-1}{k}} x^{2\alpha}dx \notag\\
&=1+\frac{k\left(\frac{k-1}{k}\right)^{2\alpha+1}}{2\alpha+1}
\approx 1+\frac{k}{2\alpha+1}\left(1-\frac{2\alpha+1}{k}\right)=
\frac{k}{2\alpha+1},\label{lowerbound}
\end{align}
which also holds for moderately ill-posed problems and
is bigger than one considerably for $\frac{1}{2}<\alpha\leq 1$ as
$k$ increases up to $k_0$.
Our accurate bound \eqref{modera2} thus becomes increasingly large as $k$
increases up to $k_0$ for mildly ill-posed
problems, so that $\|\Delta_k\|$ is large and
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|\approx 1$ starting with
some $k\leq k_0$. Consequently, $\mathcal{V}_{k_0}^R$ cannot
effectively capture the $k_0$ dominant right singular vectors and
contains substantial information on the right singular vectors
corresponding to the $n-k_0$ small singular values.
\end{remark}
\begin{remark}
In \cite[Thm 2.1]{huangjia}, the authors derived some
bounds for $\|\Delta_k\|$ and
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$. There, without
realizing the crucial fact that $|\Delta_k|$ can be effectively
bounded by a rank one matrix and the key point that
$D_2T_{k2}T_{k1}^{-1}D_1^{-1}$ must be treated as a whole rather
than separately, by \eqref{defdelta} the authors made use of
$$
\|\Delta_k\|\leq \|\Delta_k\|_F=\left\|D_2T_{k2}T_{k1}^{-1}D_1^{-1}\right\|_F
\leq \|D_2\|\left\|T_{k2}T_{k1}^{-1}\right\|_F\left\|D_1^{-1}\right\|
$$
and
$\left\|T_{k2}T_{k1}^{-1}\right\|_F\leq |L_{k_1}^{(k)}(0)|\sqrt{k(n-k)}$
(cf. \eqref{tk12}) to obtain bounds for $\|\Delta_k\|$ and
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$. These bounds are too
pessimistic because of the appearance of the fatal factor $\sqrt{k(n-k)}$,
which ranges from
$\sqrt{2(n-2)}$ to $\frac{n}{2}$ for $k=2,3,\ldots,n-1$, too large an
amplification for large $n$. In contrast, our new estimates, which hold for
both $\|\Delta_k\|$ and $\|\Delta_k\|_F$, are much more accurate and
$\sqrt{k(n-k)}$ has been removed.
\end{remark}
Before proceeding, we tentatively investigate how
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$ affects the smallest Ritz
value $\theta_k^{(k)}$. This problem is of central importance for
understanding the regularizing effects of LSQR. We aim to lead the reader
to a first manifestation that (i) we may have $\theta_k^{(k)}>\sigma_{k+1}$
when $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$ is fairly below one, that is,
no small Ritz value may appear provided that
$\mathcal{V}_k^R$ captures $\mathcal{V}_k$ with only {\em some}
rather than high accuracy, and (ii) we must have
$\theta_k^{(k)}\leq\sigma_{k+1}$, that is, $\theta_k^{(k)}$ cannot approximate
$\sigma_k$ in natural order, meaning that $\theta_k^{(k)}\leq\sigma_{k_0+1}$
no later than iteration $k_0$, once
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$ is sufficiently close to one.
\begin{theorem}\label{initial}
Let $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|^2=1-\varepsilon_k^2$ with
$0< \varepsilon_k< 1$, $k=1,2,\ldots,n-1$, and let the unit-length
$\tilde{q}_k\in\mathcal{V}_k^R$
be a vector that has the smallest acute angle with $span\{V_k^{\perp}\}$, i.e.,
the closest to $span\{V_k^{\perp}\}$, where $V_k^{\perp}$ is the matrix consisting
of the last $n-k$ columns of $V$ defined by \eqref{eqsvd}. Then it holds that
\begin{equation}\label{rqi}
\varepsilon_k^2\sigma_k^2+
(1-\varepsilon_k^2)\sigma_n^2< \tilde{q}_k^TA^TA\tilde{q}_k<
(1-\varepsilon_k^2)\sigma_{k+1}^2+
\varepsilon_k^2\sigma_1^2.
\end{equation}
If $\varepsilon_k\geq \frac{\sigma_{k+1}}{\sigma_k}$,
then
\begin{equation}
\sqrt{\tilde{q}_k^TA^TA\tilde{q}_k}>\sigma_{k+1};
\label{est1}
\end{equation}
if $\varepsilon_k^2\leq\frac{\delta}
{(\frac{\sigma_1}{\sigma_{k+1}})^2-1}$ for a given arbitrarily small
$\delta>0$, then
\begin{equation}\label{thetasigma}
\theta_k^{(k)}<(1+\delta)^{1/2}\sigma_{k+1}.
\end{equation}
\end{theorem}
{\em Proof}.
Since the columns of $Q_k$ generated by Lanczos bidiagonalization form an
orthonormal basis of $\mathcal{V}_k^R$, by definition and the assumption on
$\tilde{q}_k$ we have
\begin{align}
\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|&=\|(V_k^{\perp})^TQ_k\|
=\|V_k^{\perp}(V_k^{\perp})^TQ_k\| \notag\\
&=\max_{\|c\|=1}\|V_k^{\perp}(V_k^{\perp})^TQ_kc\|
=\|V_k^{\perp}(V_k^{\perp})^T Q_kc_k\| \notag\\
&=\|V_k^{\perp}(V_k^{\perp})^T\tilde{q}_k\|=\|(V_k^{\perp})^T\tilde{q}_k\|
=\sqrt{1-\varepsilon_k^2}
\label{qktilde}
\end{align}
with $\tilde{q}_k=Q_kc_k\in\mathcal{V}_k^R$ and $\|c_k\|=1$.
Since $\mathcal{V}_k$ is the orthogonal complement of $span\{V_k^{\perp}\}$,
by definition we know that $\tilde{q}_k\in \mathcal{V}_k^R$ has the largest acute
angle with $\mathcal{V}_k$, that is, it is the vector in $\mathcal{V}_k^R$
that contains the least information on $\mathcal{V}_k$.
Expand $\tilde{q}_k$ as the following orthogonal direct sum decomposition:
\begin{equation}\label{decompqk}
\tilde{q}_k=V_k^{\perp}(V_k^{\perp})^T\tilde{q}_k+V_kV_k^T\tilde{q}_k.
\end{equation}
Then from $\|\tilde{q}_k\|=1$ and \eqref{qktilde} we obtain
\begin{align}\label{angle2}
\|V_k^T\tilde{q}_k\|&=\|V_kV_k^T\tilde{q}_k\|=
\sqrt{1-\|V_k^{\perp}(V_k^{\perp})^T\tilde{q}_k\|^2}=\sqrt{1-(1-\varepsilon_k^2)}
=\varepsilon_k.
\end{align}
From \eqref{decompqk}, we next bound the Rayleigh quotient of $\tilde{q}_k$
with respect to $A^TA$ from below. By the SVD \eqref{eqsvd} of $A$ and
$V=(V_k,V_k^{\perp})$, we partition
$$
\Sigma=\left(\begin{array}{cc}
\Sigma_k &\\
&\Sigma_k^{\perp}
\end{array}
\right),
$$
where $\Sigma_k=\diag(\sigma_1,\sigma_2,\ldots,\sigma_k)$ and
$\Sigma_k^{\perp}=\diag(\sigma_{k+1},\sigma_{k+2},\ldots,\sigma_n)$.
Making use of $A^TAV_k=V_k\Sigma_k^2$ and $A^TAV_k^{\perp}=
V_k^{\perp}(\Sigma_k^{\perp})^2$ as well as $V_k^TV_k^{\perp}=\mathbf{0}$,
we obtain
\begin{align}
\tilde{q}_k^TA^TA\tilde{q}_k&=\left(V_k^{\perp}(V_k^{\perp})^T\tilde{q}_k+V_kV_k^T
\tilde{q}_k\right)^TA^TA \left(V_k^{\perp}(V_k^{\perp})^T\tilde{q}_k+
V_kV_k^T\tilde{q}_k\right) \notag\\
&=\left(\tilde{q}_k^TV_k^{\perp}(V_k^{\perp})^T+\tilde{q}_k^TV_kV_k^T\right)
\left(V_k^{\perp}(\Sigma_k^{\perp})^2(V_k^{\perp})^T\tilde{q}_k+V_k\Sigma_k^2V_k^T
\tilde{q}_k\right) \notag\\
&=\tilde{q}_k^TV_k^{\perp}(\Sigma_k^{\perp})^2(V_k^{\perp})^T\tilde{q}_k
+\tilde{q}_k^TV_k\Sigma_k^2V_k^T\tilde{q}_k. \label{expansion}
\end{align}
Observe that it is impossible for $(V_k^{\perp})^T\tilde{q}_k$ and
$V_k^T\tilde{q}_k$ to be the eigenvectors of $(\Sigma_k^{\perp})^2$
and $\Sigma_k^2$ associated with their respective smallest eigenvalues
$\sigma_n^2$ and $\sigma_k^2$ simultaneously, which are
the $(n-k)$-th canonical vector $e_{n-k}$ of $\mathbb{R}^{n-k}$ and
the $k$-th canonical vector $e_k$ of $\mathbb{R}^{k}$, respectively;
otherwise, we would have $\tilde{q}_k=v_n$ and
$\tilde{q}_k=v_k$ simultaneously, which is impossible as $k<n$. Therefore,
from \eqref{expansion} and \eqref{qktilde}, \eqref{angle2}
we obtain the strict inequality
\begin{align*}
\tilde{q}_k^TA^TA\tilde{q}_k&> \|(V_k^{\perp})^T\tilde{q}_k\|^2
\sigma_n^2+\|V_k^T\tilde{q}_k\|^2\sigma_k^2
=(1-\varepsilon_k^2)\sigma_n^2+\varepsilon_k^2 \sigma_k^2,
\end{align*}
from which it follows that the lower bound of \eqref{rqi} holds. Similarly,
from \eqref{expansion} and \eqref{qktilde}, \eqref{angle2}
we obtain the upper bound of \eqref{rqi}:
$$
\tilde{q}_k^TA^TA\tilde{q}_k <\|(V_k^{\perp})^T\tilde{q}_k\|^2
\|(\Sigma_k^{\perp})^2\|+\|V_k^T\tilde{q}_k\|^2\|\Sigma_k^2\|
=(1-\varepsilon_k^2)\sigma_{k+1}^2+\varepsilon_k^2 \sigma_1^2.
$$
From the lower bound of \eqref{rqi}, we see that if
$\varepsilon_k$ satisfies $\varepsilon_k^2 \sigma_k^2\geq \sigma_{k+1}^2$,
i.e., $\varepsilon_k\geq \frac{\sigma_{k+1}}{\sigma_k}$,
then $\sqrt{\tilde{q}_k^TA^TA\tilde{q}_k}>\sigma_{k+1}$, i.e.,
\eqref{est1} holds.
From \eqref{Bk}, we obtain $B_k^TB_k=Q_k^TA^TAQ_k$.
Note that $(\theta_k^{(k)})^2$ is the smallest eigenvalue
of the symmetric positive definite matrix $B_k^TB_k$.
Therefore, we have
\begin{equation}\label{rqi2}
(\theta_k^{(k)})^2=\min_{\|c\|=1} c^TQ_k^TA^TAQ_kc=
\min_{q\in \mathcal{V}_k^R,\ \|q\|=1} q^TA^TAq
=\hat{q}_k^TA^TA\hat{q}_k,
\end{equation}
where $\hat{q}_k$ is, in fact, the Ritz vector of $A^TA$ from
$\mathcal{V}_k^R$ corresponding to the smallest Ritz value $(\theta_k^{(k)})^2$.
Therefore, for $\tilde{q}_k$ defined in Theorem~\ref{initial} we have
$$
\theta_k^{(k)}\leq \sqrt{\tilde{q}_k^TA^TA\tilde{q}_k},
$$
from which it follows from \eqref{rqi} that
$(\theta_k^{(k)})^2<(1-\varepsilon_k^2)\sigma_{k+1}^2+\varepsilon_k^2 \sigma_1^2$.
As a result, for any given $\delta>0$, requiring
$$
(1-\varepsilon_k^2)\sigma_{k+1}^2+\varepsilon_k^2 \sigma_1^2
\leq (1+\delta)\sigma_{k+1}^2
$$
and solving the resulting inequality for $\varepsilon_k^2$ gives
$\varepsilon_k^2\leq\frac{\delta}{(\frac{\sigma_1}{\sigma_{k+1}})^2-1}$.
Under this condition we have
$(\theta_k^{(k)})^2<(1-\varepsilon_k^2)\sigma_{k+1}^2+\varepsilon_k^2 \sigma_1^2
\leq (1+\delta)\sigma_{k+1}^2$,
i.e., \eqref{thetasigma} holds.
\qquad\endproof
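The Rayleigh-quotient bounds established in the proof above can be illustrated with a small numerical sketch. We take $A$ diagonal, so that $V=I$ and the canonical basis vectors are the right singular vectors; the singular values $\sigma_i=2/i$, the dimensions $n=12$, $k=4$, and the value $\varepsilon_k=0.3$ are purely illustrative choices, not part of the theorem:

```python
import math
import random

random.seed(0)
n, k = 12, 4
sigma = [2.0 / (i + 1) for i in range(n)]   # illustrative model sigma_i = 2 / i
eps = 0.3                                   # plays the role of eps_k

# Build a unit vector q with ||V_k^T q|| = eps and ||(V_k^perp)^T q|| = sqrt(1 - eps^2),
# expressed in the right singular vector basis (A diagonal, V = I).
head = [random.random() for _ in range(k)]
tail = [random.random() for _ in range(n - k)]
hn = math.sqrt(sum(x * x for x in head))
tn = math.sqrt(sum(x * x for x in tail))
q = [eps * x / hn for x in head] + [math.sqrt(1 - eps ** 2) * x / tn for x in tail]

rq = sum(q[i] ** 2 * sigma[i] ** 2 for i in range(n))         # q^T A^T A q
lower = eps ** 2 * sigma[k - 1] ** 2 + (1 - eps ** 2) * sigma[n - 1] ** 2
upper = (1 - eps ** 2) * sigma[k] ** 2 + eps ** 2 * sigma[0] ** 2
assert lower < rq < upper
```

Because the components of $q$ are spread over several coordinates in both subspaces, both inequalities are strict, exactly as in the proof.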
\begin{remark}
We analyze $\theta_k^{(k)}$ for $\varepsilon_k\geq \frac{\sigma_{k+1}}{\sigma_k}$.
A key observation and interpretation is that, in the sense of $\min$
in \eqref{rqi2}, $\hat{q}_k\in \mathcal{V}_k^R$ is the optimal vector that
extracts the least information from $\mathcal{V}_k$ and the richest information
from $span\{V_k^{\perp}\}$.
From Theorem~\ref{initial}, since $\mathcal{V}_k$ is the orthogonal complement
of $span\{V_k^{\perp}\}$, we know that $\tilde{q}_k\in \mathcal{V}_k^R$
has the largest acute angle with $\mathcal{V}_k$, that is,
it contains the least information from $\mathcal{V}_k$ and the richest information
from $span\{V_k^{\perp}\}$. Therefore, $\hat{q}_k$ and $\tilde{q}_k$ have a similar
optimality, so that we have
\begin{equation}\label{approxeq}
\theta_k^{(k)}\approx \sqrt{\tilde{q}_k^TA^TA\tilde{q}_k}.
\end{equation}
Combining this estimate with \eqref{est1}, we may have
$\theta_k^{(k)}>\sigma_{k+1}$ when
$\varepsilon_k\geq \frac{\sigma_{k+1}}{\sigma_k}$.
\end{remark}
\begin{remark}
We inspect the condition $\varepsilon_k\geq\frac{\sigma_{k+1}}
{\sigma_k}$ for \eqref{est1} and get insight into whether or not the true
$\varepsilon_k$ resulting from three kinds of ill-posed problems satisfies
it. For severely ill-posed problems, the lower bound $\frac{\sigma_{k+1}}
{\sigma_k}$ is basically the constant $\rho^{-1}$; for moderately ill-posed
problems with $\alpha>1$, the bound increases with increasing
$k\leq k_0$, and it cannot be close to one provided that $\alpha>1$ suitably
or $k_0$ is not big; for mildly ill-posed problems with $\alpha<1$, the bound
increases faster than it does for moderately ill-posed problems, and
it may well approach one for $k\leq k_0$. Therefore,
the condition for \eqref{est1} requires that
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$
be not close to one for severely and moderately ill-posed problems,
but $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$
must be close to zero for mildly ill-posed problems.
In view of \eqref{deltabound} and
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|^2=1-\varepsilon_k^2$,
we have $\|\Delta_k\|^2=\frac{1-\varepsilon_k^2}{\varepsilon_k^2}$.
Thus, the condition $\varepsilon_k\geq\frac{\sigma_{k+1}}
{\sigma_k}$ for \eqref{est1} amounts to requiring
that $\|\Delta_k\|$ be at most modest and cannot
be large for severely and moderately ill-posed problems but
it must be fairly small for mildly ill-posed problems. Unfortunately,
Theorems~\ref{thm2}--\ref{moderate} and the remarks following them
indicate that $\|\Delta_k\|$ increases as $k$ increases and is generally
large for a mildly ill-posed problem, while it
increases slowly with $k\leq k_0$ for a moderately
ill-posed problem with $\alpha>1$ suitably, and by \eqref{case3}
it is approximately a constant $\rho^{-(2+\beta)}$, which is considerably
smaller than one for a
severely ill-posed problem with $\rho>1$ not close to one.
Consequently, for mildly ill-posed problems,
because the actual $\|\Delta_k\|$ can hardly be small and is generally
large, the true $\varepsilon_k$ is small and may well be
close to zero, so that the condition $\varepsilon_k\geq\frac{\sigma_{k+1}}{\sigma_k}$
generally fails to hold as $k$ increases, while it is satisfied for severely
or moderately ill-posed problems with $\rho>1$ or $\alpha>1$ suitably.
\end{remark}
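To get a concrete feeling for these ratios, the following small Python sketch evaluates $\frac{\sigma_{k+1}}{\sigma_k}$ for the model decays of the three kinds of problems; the parameter choices $\rho=4$, $\alpha=2$ (moderate) and $\alpha=0.7$ (mild) are illustrative only:

```python
# sigma_{k+1} / sigma_k for the model decays (zeta = 1):
# severe: sigma_k = rho^{-k};  moderate/mild: sigma_k = k^{-alpha}.
def ratio_severe(k, rho):
    return rho ** -1.0                  # constant, independent of k

def ratio_power(k, alpha):
    return (k / (k + 1.0)) ** alpha     # increases towards one as k grows

rho, alpha_mod, alpha_mild = 4.0, 2.0, 0.7
for k in (1, 5, 20, 100):
    assert ratio_severe(k, rho) == 0.25
    # the mildly ill-posed ratio is always closer to one than the moderate one
    assert ratio_power(k, alpha_mild) > ratio_power(k, alpha_mod)
assert ratio_power(100, alpha_mild) > 0.99   # essentially one already at k = 100
```

This matches the discussion above: for the severely ill-posed model the ratio stays bounded away from one, while for the mildly ill-posed model it approaches one quickly, making the condition $\varepsilon_k\geq\frac{\sigma_{k+1}}{\sigma_k}$ hard to satisfy.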
\begin{remark}\label{appear}
Estimate \eqref{thetasigma} shows that there is
at least one Ritz value $\theta_k^{(k)}\leq\sigma_{k+1}$ when
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$ is sufficiently close
to one, since we can choose $\delta$ small enough that
$(1+\delta)^{1/2}\sigma_{k+1}$ is arbitrarily close to $\sigma_{k+1}$.
As we have shown, $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$ cannot be
close to one for severely or moderately ill-posed problems with $\rho>1$ or
$\alpha>1$ suitably, but it is generally so for mildly ill-posed problems.
This means that for some $k\leq k_0$ it is very possible to have
$\theta_k^{(k)}\leq\sigma_{k+1}$ for mildly ill-posed problems.
\end{remark}
We must be aware that our above analysis on
$\theta_k^{(k)}>\sigma_{k+1}$ is not rigorous because
we cannot quantify {\em how small}
$\sqrt{\tilde{q}_k^TA^TA\tilde{q}_k}-\theta_k^{(k)}$ is.
From $\theta_k^{(k)}\leq\sqrt{\tilde{q}_k^TA^TA\tilde{q}_k}$, it is apparent
that the condition $\varepsilon_k\geq\frac{\sigma_{k+1}}{\sigma_k}$
may not be sufficient for $\theta_k^{(k)}>\sigma_{k+1}$ though it is so for
$\sqrt{\tilde{q}_k^TA^TA\tilde{q}_k}>\sigma_{k+1}$.
We delay our detailed and rigorous analysis
to Section \ref{rankapp}, where we present a number of in-depth and accurate
results on the key problems stated in the second-to-last paragraph before
Theorem~\ref{thm2}, including the precise behavior of $\theta_k^{(k)}$.
One of the results will be on the sufficient conditions
for $\theta_k^{(k)}>\sigma_{k+1}$, which are satisfied when
certain deterministic and mild restrictions on $\rho$ or $\alpha$
are imposed for severely or moderately ill-posed problems.
However, we will see that $\alpha<1$ for mildly ill-posed problems
never meets the sufficient conditions to be presented there.
Theorems~\ref{thm2}--\ref{moderate} establish necessary
background for answering the fundamental
concern by Bj\"{o}rck and Eld\'{e}n, and their proof approaches
also provide key ingredients for some of the later results.
We next present the following results,
which will play a central role in our later analysis.
\begin{theorem}\label{thm3}
Assume that the discrete Picard condition \eqref{picard} is satisfied,
let $\Delta_k\in \mathbb{R}^{(n-k)\times k}$ be defined as
\eqref{defdelta} and $L_j^{(k)}(0)$ and $L_{k_1}^{(k)}(0)$
be defined as \eqref{lk}, and write
$\Delta_k=(\delta_1,\delta_2,\ldots,\delta_k)$. Then for severely
ill-posed problems and $k=1,2,\ldots,n-1$ we have
\begin{align}
\|\delta_j\|&\leq \frac{\sigma_{k+1}}{\sigma_j}\frac{| u_{k+1}^Tb|}
{| u_j^T b|}\left(1+\mathcal{O}(\rho^{-2})\right)| L_j^{(k)}(0)|,
\ k>1,\ j=1,2,\ldots,k, \label{columndelta} \\
\|\delta_1\|& \leq \frac{\sigma_{2}}{\sigma_1}\frac{| u_2^Tb|}
{| u_1^T b|}\left(1+\mathcal{O}(\rho^{-2})\right),\ k=1
\label{columndelta1}
\end{align}
and
\begin{equation} \label{prodnorm}
\|\Sigma_k\Delta_k^T\|\leq \left\{\begin{array}{ll}
\sigma_{k+1}\frac{| u_{k+1}^Tb|}{| u_k^T b|}
\left(1+\mathcal{O}(\rho^{-2})\right)
& \mbox{ for } 1\leq k\leq k_0,\\
\sigma_{k+1}\sqrt{k-k_0+1}\left(1+\mathcal{O}(\rho^{-2})\right)
& \mbox{ for } k_0<k\leq n-1;
\end{array}
\right.
\end{equation}
for moderately or mildly ill-posed problems with the singular values
$\sigma_j=\zeta j^{-\alpha}$ and $\zeta$ a positive constant we have
\begin{align}
\|\delta_j\|&\leq \frac{\sigma_k}{\sigma_j}\frac{| u_{k+1}^Tb|}
{| u_j^T b|} \sqrt{\frac{k}{2\alpha-1}}|L_j^{(k)}(0)|,
\ k>1,\ j=1,2,\ldots,k, \label{columndelta2} \\
\|\delta_1\|&\leq \frac{| u_2^Tb|}
{| u_1^T b|} \sqrt{\frac{1}{2\alpha-1}},\ k=1 \label{columnnorm}
\end{align}
and
\begin{equation}\label{prodnorm2}
\|\Sigma_k\Delta_k^T\|\leq \left\{\begin{array}{ll}
\sigma_1\frac{| u_2^Tb|}{| u_1^T b|}\sqrt{\frac{1}{2\alpha-1}} &
\mbox{ for } k=1,\\
\sigma_k\frac{| u_{k+1}^Tb|}{| u_k^T b|}\sqrt{\frac{k^2}{4\alpha^2-1}+
\frac{k}{2\alpha-1}}
|L_{k_1}^{(k)}(0)|& \mbox{ for } 1<k\leq k_0, \\
\sigma_k \sqrt{\frac{k k_0}{4\alpha^2-1}+
\frac{k(k-k_0+1)}{2\alpha-1}}|L_{k_1}^{(k)}(0)|
& \mbox{ for } k_0<k\leq n-1.
\end{array}
\right.
\end{equation}
\end{theorem}
{\em Proof}.
From \eqref{defdelta} and \eqref{amplify}, for $j=1,2,\ldots,k$ and $k>1$
we have
\begin{equation}\label{deltaj}
\|\delta_j\|^2\leq |L_j^{(k)}(0)|^2 \sum_{i=k+1}^n\frac{\sigma_{i}^2}
{\sigma_j^2}\frac{| u_i^T b|^2}{| u_j^T b|^2}
\end{equation}
and from \eqref{deltaexp}, for $k=1$ we have
\begin{equation}\label{deltas}
\|\delta_1\|^2=\sum_{i=2}^n\frac{\sigma_{i}^2}
{\sigma_1^2}\frac{| u_i^T b|^2}{| u_1^T b|^2}.
\end{equation}
For severely ill-posed problems, $k=1,2,\ldots,n-1$ and
$j=1,2,\ldots,k$, from \eqref{severe1} we obtain
\begin{align*}
\sum_{i=k+1}^n\frac{\sigma_{i}^2}
{\sigma_j^2}\frac{| u_i^T b|^2}{| u_j^T b|^2}&=
\frac{1}{\sigma_j^2| u_j^Tb|^2}
\sum_{i=k+1}^n\sigma_{i}^2 |u_i^T b|^2 \\
&\leq \frac{\sigma_{k+1}^2}
{\sigma_j^2}\frac{| u_{k+1}^T b|^2}{| u_j^T b|^2}
\left(1+\mathcal{O}(\rho^{-2})\right).
\end{align*}
For moderately or mildly ill-posed problems, $k=1,2,\ldots,n-1$ and
$j=1,2,\ldots,k$, from \eqref{modeest} we obtain
\begin{align*}
\sum_{i=k+1}^n\frac{\sigma_{i}^2}
{\sigma_j^2}\frac{| u_i^T b|^2}{| u_j^T b|^2}
&=\frac{1}{\sigma_j^2| u_j^Tb|^2}
\sum_{i=k+1}^n\sigma_{i}^2| u_i^T b|^2\\
&\leq \frac{\sigma_k^2}
{\sigma_j^2}\frac{| u_{k+1}^T b|^2}{| u_j^T b|^2}
\frac{k}{2\alpha-1}.
\end{align*}
Combining the above with \eqref{deltaj}, \eqref{lkk} and \eqref{rhoL}, we
obtain \eqref{columndelta}, while \eqref{columndelta2} follows
from the above and \eqref{deltaj} directly. For $k=1$, from \eqref{deltas}
and the above we get \eqref{columndelta1} and \eqref{columnnorm}, respectively.
By \eqref{delta1}, for $k>1$ we have
$$
|\Delta_k\Sigma_k|\leq |L_{k_1}^{(k)}(0)|\left|(\sigma_{k+1} u_{k+1}^T b,
\sigma_{k+2}u_{k+2}^Tb,\ldots,\sigma_n u_n^T b)^T
\left(\frac{1}{u_1^Tb},\frac{1}{u_2^Tb},\ldots,
\frac{1}{u_k^Tb}\right)\right|.
$$
Therefore, we get
\begin{align}
\|\Sigma_k\Delta_k^T\|&=\|\Delta_k\Sigma_k\|\leq \left\||\Delta_k\Sigma_k|\right\|
\notag\\
&\leq |L_{k_1}^{(k)}(0)|\left(\sum_{j=k+1}^n\sigma_j^2| u_j^Tb|^2\right)^{1/2}
\left(\sum_{j=1}^k \frac{1}{| u_j^Tb|^2}\right)^{1/2}. \label{sigdel}
\end{align}
By \eqref{deltaexp}, for $k=1$ we have
$$
\|\Delta_1\Sigma_1\|=\left(\sum_{j=2}^n\sigma_j^2| u_j^Tb|^2\right)^{1/2}
\frac{1}{| u_1^Tb|}.
$$
We have derived the bounds \eqref{severe1} and
\eqref{modeest} for $\left(\sum_{j=k+1}^n\sigma_j^2| u_j^Tb|^2\right)^{1/2}$
for severely and moderately or mildly ill-posed problems, respectively, from
which we obtain \eqref{prodnorm} and \eqref{prodnorm2} for $k=1$. In order
to bound $\|\Sigma_k\Delta_k^T\|$ for $k>1$, we need to estimate
$\left(\sum_{j=1}^k\frac{1}{| u_j^Tb|^2}\right)^{1/2}$.
We next carry out this task for severely and moderately or mildly ill-posed
problems, respectively, for each kind of which we consider the cases of
$k\leq k_0$ and $k>k_0$ separately.
Case of $k\leq k_0$ for severely ill-posed problems: From the discrete
Picard condition \eqref{picard} and \eqref{ideal3}, we obtain
\begin{align*}
\sum_{j=1}^k
\frac{1}{| u_j^Tb|^2}&= \frac{1}{| u_k^Tb|^2} \sum_{j=1}^k
\frac{| u_k^Tb|^2}{| u_j^Tb|^2}
=\frac{1}{| u_k^Tb|^2} \left(1+\mathcal{O}\left(\sum_{j=1}^{k-1}\rho^{2(j-k)
(1+\beta)}\right)\right)\\
&=\frac{1}{| u_k^Tb|^2} \left(1+\mathcal{O}(\rho^{-2(1+\beta)}) \right).
\end{align*}
Case of $k> k_0$ for severely ill-posed problems: From \eqref{ideal3} and
\eqref{ideal2}, we obtain
\begin{align*}
\sum_{j=1}^k
\frac{1}{| u_j^Tb|^2}&= \frac{1}{| u_k^Tb|^2}\left( \sum_{j=1}^{k_0}
\frac{| u_k^Tb|^2}{| u_j^Tb|^2}+\sum_{j=k_0+1}^{k}
\frac{| u_k^Tb|^2}{| u_j^Tb|^2}\right)\\
&=\frac{1}{| u_k^Tb|^2}\left(1+\mathcal{O}\left(\sum_{j=1}^{k_0-1}
\rho^{2(j-k_0)(1+\beta)}\right)+k-k_0\right)\\
&=\frac{1}{| u_k^Tb|^2}\left(1+\mathcal{O}(\rho^{-2(1+\beta)})+k-k_0\right).
\end{align*}
Substituting the above two relations for the two cases into \eqref{sigdel} and
combining them with \eqref{severe1} and \eqref{lkk}, we
get \eqref{prodnorm}.
Case of $k\leq k_0$ for moderately or mildly ill-posed problems: From \eqref{ideal3}
we have
\begin{align*}
\sum_{j=1}^k
\frac{1}{| u_j^Tb|^2}&= \frac{1}{| u_k^Tb|^2} \sum_{j=1}^k
\frac{| u_k^Tb|^2}{| u_j^Tb|^2}= \frac{1}{| u_k^Tb|^2}\sum_{j=1}^k
\left(\frac{j}{k}\right)^{2\alpha (1+\beta)}\\
&<\frac{1}{| u_k^Tb|^2} \sum_{j=1}^k
\left(\frac{j}{k}\right)^{2\alpha}
=\frac{1}{| u_k^T b|^2} k\sum_{j=1}^k \frac{1}{k}\left(\frac{j}{k}
\right)^{2\alpha} \notag \\
&< \frac{1}{| u_k^T b|^2} \left(k \int_0^1
x^{2\alpha}dx+1 \right)=\frac{1}{| u_k^Tb|^2}\left(\frac{k}{2\alpha+1}+1\right).
\end{align*}
Case of $k> k_0$ for moderately or mildly ill-posed problems: From \eqref{ideal3} and
\eqref{ideal2} we have
\begin{align*}
\sum_{j=1}^k\frac{1}{| u_j^Tb|^2}
&= \frac{1}{| u_k^Tb|^2} \left(\sum_{j=1}^{k_0}
\frac{| u_k^Tb|^2}{| u_j^Tb|^2}
+\sum_{j=k_0+1}^{k}
\frac{| u_k^Tb|^2}{| u_j^Tb|^2}\right)\\
&= \frac{1}{| u_k^Tb|^2} \left(\sum_{j=1}^{k_0}
\left(\frac{j}{k_0}\right)^{2\alpha (1+\beta)}+k-k_0\right)\\
&<\frac{1}{| u_k^Tb|^2} \left(\sum_{j=1}^{k_0}
\left(\frac{j}{k_0}\right)^{2\alpha}+k-k_0\right)\\
&\leq \frac{1}{| u_k^Tb|^2} \left(\frac{k_0}{2\alpha+1}+1+k-k_0\right).
\end{align*}
Substituting the above two bounds for the two cases into \eqref{sigdel} and
combining them with \eqref{modeest}, we get \eqref{prodnorm2}.
\qquad\endproof
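The elementary integral estimate $\sum_{j=1}^k\left(\frac{j}{k}\right)^{2\alpha}<\frac{k}{2\alpha+1}+1$ used in both cases of the proof above can be confirmed numerically; a quick Python sketch with illustrative values of $k$ and $\alpha$:

```python
def s(k, alpha):
    # sum_{j=1}^{k} (j/k)^{2 alpha}, as in the two cases of the proof
    return sum((j / k) ** (2 * alpha) for j in range(1, k + 1))

# Right-endpoint comparison with k * int_0^1 x^{2 alpha} dx = k / (2 alpha + 1),
# plus 1 for the last term (j = k), gives the strict upper bound.
for alpha in (0.6, 1.0, 2.0):
    for k in (2, 10, 100):
        assert s(k, alpha) < k / (2 * alpha + 1) + 1
```

The slack is at most one: the sum is a right Riemann sum of the increasing integrand, so it exceeds the integral but by less than the final term, which equals one.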
Bounds \eqref{prodnorm} and \eqref{prodnorm2} indicate that $\|\Sigma_k\Delta_k^T\|$
decays swiftly as $k$ increases. As has been seen, we must take some care to
bound $\|\Sigma_k\Delta_k^T\|$ accurately. Indeed,
for $1<k\leq k_0$, if we had simply bounded it by
\begin{equation}\label{rough}
\|\Sigma_k\Delta_k^T\|\leq \|\Sigma_k\|\|\Delta_k^T\|=\sigma_1\|\Delta_k\|,
\end{equation}
the factors $\sigma_{k+1}$
in \eqref{prodnorm} and $\sigma_k$ in \eqref{prodnorm2} would have been replaced by
$\frac{\sigma_1\sigma_{k+1}}{\sigma_k}\approx \sigma_1\rho^{-1}$ and
$\sigma_1$, respectively, by substituting the estimates \eqref{eqres1}
and \eqref{modera1} for $\|\Delta_k\|$ into the above. Such bounds
overestimate $\|\Sigma_k\Delta_k^T\|$ too much as $k$ increases, and
are useless for precisely analyzing the regularization of
LSQR, CGME and LSMR for ill-posed problems since they make it
impossible for us to obtain the predictively accurate results to be presented
in Sections \ref{rankapp}--\ref{compare}.
As a byproduct, we consider a problem that is of interest in its own
right, though its solution will not be used in this paper:
How close to the Krylov subspace $\mathcal{V}_k^R$ is the individual
right singular vector $v_j$ for $j\leq k$ and $k=1,2,\ldots,n-1$? Denote
by $\sin\angle(v_j,\mathcal{V}_k^R)$ the distance between
$v_j$ and $\mathcal{V}_k^R$, which is defined as
$$
\sin\angle(v_j,\mathcal{V}_k^R)=\|(I-\Pi_k)v_j\|=\min_{w\in \mathcal{V}_k^R}
\|v_j-w\|
$$
with $\Pi_k$ the orthogonal projector onto $\mathcal{V}_k^R$. Then
we present the following result.
\begin{theorem}\label{indiv}
Let $\Delta_k=(\delta_1,\delta_2,\ldots,\delta_k)$ be defined by
\eqref{defdelta}. Then for $k=1,2,\ldots,n-1$ and $j=1,2,\ldots,k$ we have
\begin{align}
\frac{\sigma_{\min}(\Delta_k)}{\sqrt{1+\sigma_{\min}^2(\Delta_k)}}
\leq
\sin\angle(v_j,\mathcal{V}_k^R)&\leq
\min\{\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|,\|\delta_j\|\},
\label{errorvj}
\end{align}
where $\sigma_{\min}(\cdot)$ denotes the smallest singular value of
a matrix.
\end{theorem}
{\em Proof}.
We first prove the upper bound of \eqref{errorvj}.
Since the columns of $Z_k$ defined by \eqref{zk} form a basis of
$\mathcal{V}_k^R$, its $j$-th column
$Z_k e_j\in \mathcal{V}_k^R$. As a result, we get
\begin{align*}
\sin\angle(v_j,\mathcal{V}_k^R)&=\min_{w\in \mathcal{V}_k^R}
\|v_j-w\|\leq \|v_j-Z_ke_j\|\\
&=\|v_j-(V_k+V_k^{\perp}\Delta_k)e_j\|=\|v_j-v_j-V_k^{\perp}\delta_j\|\\
&=\|V_k^{\perp}\delta_j\|=\|\delta_j\|.
\end{align*}
Recall from \eqref{decomp} that the columns of $\hat{Z}_k$
form an orthonormal basis of $\mathcal{V}_k^R$, and suppose that
$(\hat{Z}_k,\hat{Z}_k^{\perp})$ is orthogonal. Then the columns of
$\hat{Z}_k^{\perp}$ are an orthonormal basis of the orthogonal complement
of $\mathcal{V}_k^R$ with respect to $\mathbb{R}^n$. Particularly,
$$
\hat{Z}_k^{\perp}=(V_k^{\perp}-V_k\Delta_k^T)(I+\Delta_k\Delta_k^T)^{-\frac{1}{2}}
$$
meets the requirement. By definition, we obtain
\begin{align*}
\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|&=\|(\hat{Z}_k^{\perp})^TV_k\|
=\|\hat{Z}_k^{\perp}(\hat{Z}_k^{\perp})^TV_k\|=\max_{\|c\|=1}
\|\hat{Z}_k^{\perp}(\hat{Z}_k^{\perp})^TV_kc\|\\
&=\max_{\|c\|=1}\|(I-\hat{Z}_k\hat{Z}_k^T)V_kc\|=\max_{\|c\|=1}\|(I-\Pi_k)V_kc\|,
\end{align*}
from which and $v_j=V_ke_j$ it follows that
$$
\sin\angle(v_j,\mathcal{V}_k^R)
\leq \|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|
$$
by taking $c=e_j,\ j=1,2,\ldots,k$. So the upper bound of \eqref{errorvj} holds.
We next derive the lower bound of \eqref{errorvj}. We obtain from above
that
\begin{align*}
\sin\angle(v_j,\mathcal{V}_k^R)&=\|(I-\Pi_k)v_j\|=\|(\hat{Z}_k^{\perp})^Tv_j\|\\
&=\|(I+\Delta_k\Delta_k^T)^{-\frac{1}{2}}\left((V_k^{\perp})^T-
\Delta_kV_k^T\right)v_j\|\\
&=\|(I+\Delta_k\Delta_k^T)^{-\frac{1}{2}}\Delta_k e_j\|\\
&\geq \sigma_{\min}\left((I+\Delta_k\Delta_k^T)^{-\frac{1}{2}}\Delta_k\right)
=\frac{\sigma_{\min}(\Delta_k)}{\sqrt{1+\sigma_{\min}^2(\Delta_k)}}. \qquad\endproof
\end{align*}
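For $k=1$ the bounds \eqref{errorvj} can be verified by hand in a toy setting. The following Python sketch takes $n=2$, $V=I$, and an illustrative $\Delta_1=(\delta)$ with $\delta=0.6$, so that $Z_1=v_1+\delta v_2$ spans $\mathcal{V}_1^R$; in this case the lower bound is attained exactly:

```python
import math

# k = 1, n = 2: Z_1 = v_1 + delta * v_2 spans V_1^R, Delta_1 = (delta).
delta = 0.6
norm = math.sqrt(1 + delta ** 2)
zhat = (1 / norm, delta / norm)             # orthonormal basis of V_1^R

# sin angle(v_1, V_1^R) = || v_1 - zhat zhat^T v_1 ||, with v_1 = (1, 0)
proj = (zhat[0] * zhat[0], zhat[1] * zhat[0])
sin_angle = math.sqrt((1 - proj[0]) ** 2 + proj[1] ** 2)

lower = delta / math.sqrt(1 + delta ** 2)   # sigma_min(Delta_1)/sqrt(1+sigma_min^2)
upper = delta                               # ||delta_1||
assert lower - 1e-12 <= sin_angle <= upper
```

Here $\sin\angle(v_1,\mathcal{V}_1^R)=\frac{\delta}{\sqrt{1+\delta^2}}$ coincides with the lower bound, since for $k=1$ there is a single canonical angle, while the upper bound $\|\delta_1\|=\delta$ is strict.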
We remark that the lower bound in \eqref{errorvj} is just the sine of the
smallest canonical angle between $\mathcal{V}_k$ and $\mathcal{V}_k^R$. Since
$v_j\in \mathcal{V}_k$, it is natural that $\angle(v_j,\mathcal{V}_k^R)$ lies
between the smallest and largest canonical angles of $\mathcal{V}_k$ and $\mathcal{V}_k^R$,
as \eqref{errorvj} indicates. The nontrivial point of the upper bound in
\eqref{errorvj} is that $\sin\angle(v_j,\mathcal{V}_k^R)$ can be
much smaller than $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$,
as indicated by the bounds \eqref{columndelta} and \eqref{columndelta2},
especially for $j$ not close to $k$.
Combining \eqref{errorvj} with \eqref{columndelta} and \eqref{columndelta2},
we see that the smaller $j$, the closer $v_j$ is to $\mathcal{V}_k^R$.
\section{The rank $k$ approximation $P_{k+1}B_kQ_k^T$ to $A$, the Ritz values
$\theta_i^{(k)}$ and the regularization of LSQR}\label{rankapp}
Making use of Theorems~\ref{thm2}--\ref{thm3}, we are able to solve those
key problems stated before Theorem~\ref{thm2} and give definitive
answers to the fundamental concern by Bj\"{o}rck and Eld\'{e}n, proving
that LSQR has the full regularization
for severely or moderately ill-posed problems with $\rho>1$ or
$\alpha>1$ suitably, and that it, in general, has only the partial regularization
for mildly ill-posed problems.
Define
\begin{equation}\label{gammak}
\gamma_k = \|A-P_{k+1}B_kQ_k^T\|,
\end{equation}
which measures the accuracy of the rank $k$ approximation $P_{k+1}B_kQ_k^T$ to $A$
generated by Lanczos bidiagonalization. Recall \eqref{xk} and
the comments following it. It is known that the full
or partial regularization of LSQR depends uniquely on whether or not
$\gamma_k\approx \sigma_{k+1}$ holds, where we will make the precise meaning of `$\approx$'
clear by introducing the definition of near best
rank $k$ approximation to $A$, and on whether or not the $k$ singular values
$\theta_i^{(k)}$, i.e., Ritz values, of $B_k$, approximate
the $k$ large singular values $\sigma_i$ of $A$ in natural order
for $k=1,2,\ldots, k_0$. If both of them hold, LSQR has the full regularization;
if either of them is not satisfied, LSQR has only the partial
regularization.
This section consists of three subsections. In Section \ref{rankaccur}, we
present accurate estimates for $\gamma_k$ for the three kinds of ill-posed
problems under consideration. We prove that, under some reasonable conditions
on $\rho$ or $\alpha$, the matrix $P_{k+1}B_kQ_k^T$
is a near best rank $k$ approximation to $A$. In Section \ref{ritzapprox},
we deepen the results in Section \ref{rankaccur} and show
how the $k$ Ritz values $\theta_i^{(k)}$ behave. We derive the sufficient
conditions on $\rho$ and $\alpha$ for which
they approximate the first $k$ large singular values $\sigma_i$ of $A$
in natural order. In Section \ref{morerank}, we consider general best and near
best rank $k$ approximations to $A$ with respect to the 2-norm. For
$A$ with $\sigma_i=\zeta i^{-\alpha},\ i=1,2,\ldots,n$,
we analyze the nonzero singular values of such a rank
$k$ approximation, and prove that they approximate the first $k$
large singular values of $A$ for $\alpha>1$ suitably but can
fail to do so for $\frac{1}{2}<\alpha \leq 1$. These results will help
understand the regularizing effects of LSQR.
\subsection{The accuracy of rank $k$ approximation $P_{k+1}B_kQ_k^T$ to $A$
and more related}\label{rankaccur}
We first present one of the main results in this paper.
\begin{theorem}\label{main1}
Assume that the discrete Picard condition \eqref{picard} is
satisfied. Then for $k=1,2,\ldots,n-1$ we have
\begin{equation}\label{final}
\sigma_{k+1}\leq \gamma_k\leq \sqrt{1+\eta_k^2}\sigma_{k+1}
\end{equation}
with
\begin{equation} \label{const1}
\eta_k\leq \left\{\begin{array}{ll}
\xi_k\frac{| u_{k+1}^Tb|}{| u_k^T b|}
\left(1+\mathcal{O}(\rho^{-2})\right)
& \mbox{ for } 1\leq k\leq k_0,\\
\xi_k\sqrt{k-k_0+1}\left(1+\mathcal{O}(\rho^{-2})\right)
& \mbox{ for } k_0<k \leq n-1
\end{array}
\right.
\end{equation}
for severely ill-posed problems and
\begin{equation}\label{const2}
\eta_k\leq \left\{\begin{array}{ll}
\xi_1\frac{\sigma_1}{\sigma_2}\frac{| u_2^Tb|}{| u_1^Tb|}
\sqrt{\frac{1}{2\alpha-1}} & \mbox{ for } k=1, \\
\xi_k\frac{\sigma_k}{\sigma_{k+1}}\frac{|u_{k+1}^T b|}{|u_k^T b|}
\sqrt{\frac{k^2}{4\alpha^2-1}+\frac{k}{2\alpha-1}}
|L_{k_1}^{(k)}(0)|& \mbox{ for } 1< k\leq k_0, \\
\xi_k\frac{\sigma_k}{\sigma_{k+1}}\sqrt{\frac{k k_0}{4\alpha^2-1}+
\frac{k(k-k_0+1)}{2\alpha-1}}|L_{k_1}^{(k)}(0)|
& \mbox{ for } k_0<k\leq n-1
\end{array}
\right.
\end{equation}
for moderately or mildly ill-posed problems with $\sigma_j=\zeta j^{-\alpha},\
j=1,2,\ldots,n$, where
$\xi_k=\sqrt{\left(\frac{\|\Delta_k\|}{1+\|\Delta_k\|^2}\right)^2+1}$ for
$\|\Delta_k\|<1$ and $\xi_k\leq\frac{\sqrt{5}}{2}$ for $\|\Delta_k\|\geq 1$
with $\Delta_k$ defined
by \eqref{defdelta}.
\end{theorem}
{\em Proof}.
Since $A_k$ is the best rank $k$ approximation to
$A$ with respect to the 2-norm and $\|A-A_k\|=\sigma_{k+1}$,
the lower bound in \eqref{final} holds. Next we prove the upper bound.
From \eqref{eqmform1}, we obtain
\begin{align}
\gamma_k
&= \|A-P_{k+1}B_kQ_k^T\|= \|A-AQ_kQ_k^T\|= \|A(I-Q_kQ_k^T)\|. \label{gamma2}
\end{align}
From Algorithm 1, \eqref{kry}, \eqref{zk} and \eqref{decomp}, we obtain
$$
\mathcal{V}_k^R
=\mathcal{K}_{k}(A^{T}A,A^{T}b)=span\{Q_k\}=span\{\hat{Z}_k\}
$$
with $Q_k$ and $\hat{Z}_k$ being orthonormal, and the
orthogonal projector onto $\mathcal{V}_k^R$ is thus
\begin{equation}\label{twobasis}
Q_kQ_k^T=\hat{Z}_k\hat{Z}_k^T.
\end{equation}
Keep in mind that $A_k=U_k\Sigma_k V_k^T$. It is direct to justify that
$(U_k\Sigma_k V_k^T)^T(A-U_k\Sigma_k V_k^T)=\mathbf{0}$ for $k=1,2,\ldots,n-1$.
Therefore, exploiting this and noting that $\|I-\hat{Z}_k\hat{Z}_k^T\|=1$ and
$V_k^TV_k^{\perp}=\mathbf{0}$ for $k=1,2,\ldots,n-1$,
we get from \eqref{gamma2}, \eqref{twobasis} and \eqref{decomp} that
\begin{align}
\gamma_k^2 &= \|(A-U_k\Sigma_kV_k^T+U_k\Sigma_kV_k^T)(I-\hat{Z}_k\hat{Z}_k^T)\|^2
\notag\\
&=\max_{\|y\|=1}\|(A-U_k\Sigma_kV_k^T+U_k\Sigma_kV_k^T)
(I-\hat{Z}_k\hat{Z}_k^T)y\|^2 \notag\\
&=\max_{\|y\|=1}\|(A-U_k\Sigma_kV_k^T)(I-\hat{Z}_k\hat{Z}_k^T)y+
U_k\Sigma_kV_k^T(I-\hat{Z}_k\hat{Z}_k^T)y\|^2\notag\\
&=\max_{\|y\|=1}\left(\|(A-U_k\Sigma_kV_k^T)(I-\hat{Z}_k\hat{Z}_k^T)y\|^2+
\| U_k\Sigma_kV_k^T(I-\hat{Z}_k\hat{Z}_k^T)y\|^2\right)\notag\\
&\leq \|(A-U_k\Sigma_kV_k^T)(I-\hat{Z}_k\hat{Z}_k^T)\|^2+
\| U_k\Sigma_kV_k^T(I-\hat{Z}_k\hat{Z}_k^T)\|^2 \notag \\
&\leq \sigma_{k+1}^2+\| \Sigma_kV_k^T(I-\hat{Z}_k\hat{Z}_k^T)\|^2 \notag\\
&= \sigma_{k+1}^2+\|\Sigma_kV_k^T\left(I-(V_k+V_k^{\perp}\Delta_k)(I+
\Delta_k^T\Delta_k)^{-1}(V_k+V_k^{\perp}\Delta_k)^T\right)\|^2\notag\\
&= \sigma_{k+1}^2 + \left\|\Sigma_k\left(V_k^T-(I+
\Delta_k^T\Delta_k)^{-1}(V_k+V_k^{\perp}\Delta_k)^T\right)\right\|^2 \notag\\
&= \sigma_{k+1}^2 + \left\|\Sigma_k(I+
\Delta_k^T\Delta_k)^{-1}\left((I+\Delta_k^T\Delta_k)V_k^T-
\left(V_k+V_k^{\perp}\Delta_k\right)^T\right)\right\|^2 \notag\\
&= \sigma_{k+1}^2+ \|\Sigma_k(I+
\Delta_k^T\Delta_k)^{-1}\left(\Delta_k^T\Delta_kV_k^T-\Delta_k^T
(V_k^{\perp})^T\right)\|^2\notag\\
&= \sigma_{k+1}^2 + \|\Sigma_k(I+
\Delta_k^T\Delta_k)^{-1}\Delta_k^T\Delta_kV_k^T-\Sigma_k(I+
\Delta_k^T\Delta_k)^{-1}\Delta_k^T(V_k^{\perp})^T\|^2 \label{twomatrix}\\
&\leq \sigma_{k+1}^2 +\|\Sigma_k(I+
\Delta_k^T\Delta_k)^{-1}\Delta_k^T\Delta_k\|^2+\|\Sigma_k(I+
\Delta_k^T\Delta_k)^{-1}\Delta_k^T\|^2\notag\\
&=\sigma_{k+1}^2+\epsilon_k^2,
\label{estimate1}
\end{align}
where the last inequality follows by using $V_k^T V_k^{\perp}=\mathbf{0}$ and
the definition of the induced matrix 2-norm to amplify the second term
in \eqref{twomatrix}.
We estimate $\epsilon_k$ accurately below. To this end, we need to use two
key identities and some results related. By the SVD of $\Delta_k$, it is direct
to justify that
\begin{equation}\label{inden1}
(I+
\Delta_k^T\Delta_k)^{-1}\Delta_k^T\Delta_k=\Delta_k^T\Delta_k(I+
\Delta_k^T\Delta_k)^{-1}
\end{equation}
and
\begin{equation}\label{inden2}
(I+
\Delta_k^T\Delta_k)^{-1}\Delta_k^T=\Delta_k^T(I+
\Delta_k\Delta_k^T)^{-1}.
\end{equation}
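Both identities follow from the relation $(I+\Delta_k^T\Delta_k)\Delta_k^T=\Delta_k^T(I+\Delta_k\Delta_k^T)$ and the fact that $\Delta_k^T\Delta_k$ commutes with $(I+\Delta_k^T\Delta_k)^{-1}$. The following Python/NumPy sketch (an illustrative check, not part of the paper, with a random matrix standing in for $\Delta_k$) verifies them numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((7, 4))  # plays the role of Delta_k; any shape works

inv1 = np.linalg.inv(np.eye(4) + D.T @ D)  # (I + D^T D)^{-1}
inv2 = np.linalg.inv(np.eye(7) + D @ D.T)  # (I + D D^T)^{-1}

# Identity (inden1): (I + D^T D)^{-1} D^T D = D^T D (I + D^T D)^{-1}
assert np.allclose(inv1 @ (D.T @ D), (D.T @ D) @ inv1)

# Identity (inden2): (I + D^T D)^{-1} D^T = D^T (I + D D^T)^{-1}
assert np.allclose(inv1 @ D.T, D.T @ inv2)
```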
Define the function $f(\lambda)=\frac{\lambda}{1+\lambda^2}$ with
$\lambda\in [0,\infty)$. Since the derivative
$f^{\prime}(\lambda)=\frac{1-\lambda^2}{(1+\lambda^2)^2}$,
$f(\lambda)$ is monotonically increasing for $\lambda\in [0,1]$
and decreasing for $\lambda\in [1,\infty)$, and the maximum of
$f(\lambda)$ over $\lambda\in [0,\infty)$ is $\frac{1}{2}$, attained at
$\lambda=1$. Based on
these properties and exploiting the SVD of $\Delta_k$, for the matrix 2-norm
we get
\begin{equation}\label{compact}
\|\Delta_k(I+\Delta_k^T\Delta_k)^{-1}\|=\frac{\|\Delta_k\|}{1+\|\Delta_k\|^2}
\end{equation}
for $\|\Delta_k\|<1$ and
\begin{equation}\label{noncomp}
\|\Delta_k(I+\Delta_k^T\Delta_k)^{-1}\|\leq\frac{1}{2}
\end{equation}
for $\|\Delta_k\|\geq 1$ (Note: in this case, since $\Delta_k$ may have at least
one singular value smaller than one, we do not
have an expression like \eqref{compact}). It then follows
from \eqref{estimate1}, \eqref{compact}, \eqref{noncomp}
and $\|(I+\Delta_k\Delta_k^T)^{-1}\|\leq 1$ that
\begin{align}
\epsilon_k^2&=\|\Sigma_k \Delta_k^T\Delta_k(I+
\Delta_k^T\Delta_k)^{-1}\|^2+\|\Sigma_k \Delta_k^T (I+
\Delta_k\Delta_k^T)^{-1}\|^2 \label{separa}\\
&\leq \|\Sigma_k\Delta_k^T\|^2\|\Delta_k(I+\Delta_k^T\Delta_k)^{-1}\|^2+
\|\Sigma_k\Delta_k^T\|^2 \|(I+\Delta_k\Delta_k^T)^{-1}\|^2 \notag\\
&\leq \|\Sigma_k\Delta_k^T\|^2\left(\|\Delta_k
(I+\Delta_k^T\Delta_k)^{-1}\|^2+1\right)\notag\\
&=\|\Sigma_k\Delta_k^T\|^2\left(\left(\frac{\|\Delta_k\|}
{1+\|\Delta_k\|^2}\right)^2+1\right)
=\xi_k^2\|\Sigma_k\Delta_k^T\|^2 \notag
\end{align}
for $\|\Delta_k\|<1$ and
$$
\epsilon_k\leq \|\Sigma_k\Delta_k^T\|\sqrt{\|\Delta_k
(I+\Delta_k^T\Delta_k)^{-1}\|^2+1}=\xi_k\|\Sigma_k\Delta_k^T\|
\leq \frac{\sqrt{5}}{2}\|\Sigma_k\Delta_k^T\|
$$
for $\|\Delta_k\|\geq 1$. Replacing $\|\Sigma_k\Delta_k^T\|$ by
its bounds \eqref{prodnorm} and \eqref{prodnorm2} in the
above, inserting the resulting bounds for $\epsilon_k$
into \eqref{estimate1}, and writing $\epsilon_k=\eta_k\sigma_{k+1}$,
we obtain the upper bound in \eqref{final} with $\eta_k$
satisfying \eqref{const1} and \eqref{const2} for severely and moderately
or mildly ill-posed problems, respectively.
\qquad\endproof
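The two cases \eqref{compact} and \eqref{noncomp} used in the proof are easy to confirm numerically. The following Python/NumPy sketch (a hypothetical check with random matrices, not part of the paper) verifies the exact expression for $\|\Delta_k\|<1$ and the bound $\frac{1}{2}$ for $\|\Delta_k\|\geq 1$:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(D):
    """Spectral norm of D (I + D^T D)^{-1}; its singular values are
    f(delta_i) = delta_i / (1 + delta_i^2) for the singular values of D."""
    k = D.shape[1]
    return np.linalg.norm(D @ np.linalg.inv(np.eye(k) + D.T @ D), 2)

# Case ||D|| < 1: the norm equals ||D|| / (1 + ||D||^2), as in (compact)
D = rng.standard_normal((6, 3))
D *= 0.9 / np.linalg.norm(D, 2)   # rescale so that ||D|| = 0.9 < 1
nD = np.linalg.norm(D, 2)
assert np.isclose(g(D), nD / (1 + nD**2))

# Case ||D|| >= 1: only the bound 1/2 holds, as in (noncomp)
D = rng.standard_normal((6, 3))
D *= 3.0 / np.linalg.norm(D, 2)   # rescale so that ||D|| = 3 >= 1
assert g(D) <= 0.5 + 1e-12
```

The first case uses the monotonicity of $f(\lambda)=\frac{\lambda}{1+\lambda^2}$ on $[0,1]$; the second only uses $\max_\lambda f(\lambda)=\frac{1}{2}$.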
Note from \eqref{ideal3} that
$$
\frac{|u_{k+1}^T b|}{| u_k^T b|}=
\frac{\sigma_{k+1}^{1+\beta}}{\sigma_k^{1+\beta}},\ k\leq k_0.
$$
Therefore, for the right-hand side of \eqref{const2} and $k\leq k_0$
we have
$$
\frac{\sigma_k}{\sigma_{k+1}}\frac{| u_{k+1}^T b|}{| u_k^T b|}=
\left(\frac{\sigma_{k+1}}{\sigma_k}\right)^{\beta}<1.
$$
\begin{remark}\label{decayrate}
For severely ill-posed problems, from \eqref{case3}, \eqref{case4} and
the definition of $\xi_k$ we know that
$$
\xi_k(1+\mathcal{O}(\rho^{-2}))
=1+\mathcal{O}(\rho^{-2})
$$
for both $k\leq k_0$ and $k>k_0$. Therefore, from
\eqref{const1} and \eqref{ideal3}, for $k\leq k_0$ we have
\begin{equation}\label{etak0}
\eta_k\leq \xi_k
\frac{|u_{k+1}^Tb |}{|u_k^T b |}\left(1+\mathcal{O}(\rho^{-2})\right)
=\frac{|u_{k+1}^Tb |}{|u_k^T b |}=
\frac{\sigma_{k+1}^{1+\beta}}{\sigma_k^{1+\beta}}=\mathcal{O}(\rho^{-1-\beta})<1
\end{equation}
by ignoring the smaller term $\mathcal{O}(\rho^{-1-\beta})
\mathcal{O}(\rho^{-2})
=\mathcal{O}(\rho^{-3-\beta})$, and for $k>k_0$ we have
\begin{equation}\label{incres}
\eta_k\leq \xi_k\sqrt{k-k_0+1}\left(1+\mathcal{O}(\rho^{-2})\right)
=\sqrt{k-k_0+1}
\end{equation}
by ignoring the smaller term $\sqrt{k-k_0+1}\mathcal{O}(\rho^{-2})$,
which increases slowly with $k$.
\end{remark}
\begin{remark}
For the moderately or mildly ill-posed problems with
$\sigma_j=\zeta j^{-\alpha}$,
from the derivation of $\eta_k$ and its estimate \eqref{const2},
by comparing \eqref{k2} and \eqref{modera1} with \eqref{const2}, for $k\leq k_0$
we approximately have
\begin{equation}\label{etadelta}
\frac{\sigma_k}{\sigma_{k+1}}\|\Delta_k\|\leq
\eta_k\leq \frac{\sqrt{5}}{2}\frac{\sigma_k}{\sigma_{k+1}}\|\Delta_k\|,
\end{equation}
and for $k>k_0$, from \eqref{lkkmoderate} and \eqref{lowerbound}
we approximately have
\begin{align}
\eta_k&< \frac{\sigma_k}{\sigma_{k+1}}\sqrt{\frac{k k_0}{4\alpha^2-1}+
\frac{k(k-k_0+1)}{2\alpha-1}}|L_{k_1}^{(k)}(0)| \notag\\
&\sim \frac{k^{3/2}\sqrt{k_0}}{(2\alpha+1)\sqrt{{4\alpha^2-1}}}+
\frac{k^{3/2}\sqrt{k-k_0+1}}{(2\alpha+1)\sqrt{{2\alpha-1}}}, \label{asym}
\end{align}
which increases faster than the right-hand side of \eqref{incres} with
respect to $k$.
\end{remark}
\begin{remark}\label{decayrate2}
From \eqref{final}, \eqref{const1} and \eqref{etak0},
for severely ill-posed problems we have
$$
1<\sqrt{1+\eta_k^2}<1+\frac{1}{2}{\eta_k^2}\leq
1+\frac{1}{2}\frac{\sigma_{k+1}^{2(1+\beta)}}{\sigma_k^{2(1+\beta)}}
\sim 1+\frac{1}{2}\rho^{-2(1+\beta)},
$$
and $\gamma_k$ is an accurate approximation to
$\sigma_{k+1}$ for $k\leq k_0$ and marginally less accurate for $k>k_0$.
Thus, the rank $k$ approximation $P_{k+1}B_kQ_k^T$ is as accurate as
the best rank $k$ approximation $A_k$ within the
factor $\sqrt{1+\eta_k^2}\approx 1$ for $k\leq k_0$ and $\rho>1$ suitably.
For moderately ill-posed problems, $\gamma_k$ is still an excellent
approximation to $\sigma_{k+1}$, and the rank $k$ approximation
$P_{k+1}B_kQ_k^T$ is almost as accurate as the best rank $k$ approximation
$A_k$ for $k\leq k_0$. Therefore, $P_{k+1}B_kQ_k^T$ plays the same role as
$A_k$ for these two kinds of ill-posed problems and $k\leq k_0$; it is known
from the clarification in Section \ref{lsqr} that LSQR may have the full
regularization. We will later deepen this theorem and derive
more results, proving that LSQR must have the full regularization for
these two kinds of problems provided that $\rho>1$ and $\alpha>1$ suitably.
For both severely and moderately ill-posed problems, we note that the
situation is less satisfactory for increasing $k>k_0$. However, a possibly
large $\eta_k$ then does no harm to our regularization purpose
since we will prove that, provided that $\rho>1$ and $\alpha>1$ suitably,
LSQR has the full regularization and has already found
a best possible regularized solution at semi-convergence, which occurs at
iteration $k_0$. In that case, we simply stop LSQR
after semi-convergence.
\end{remark}
\begin{remark}\label{mildre}
For mildly ill-posed problems, the situation is fundamentally different.
As clarified in Remark~\ref{mildrem}, we have
$\sqrt{\frac{k^2}{4\alpha^2-1}+\frac{k}{2\alpha-1}}>1$ and
$|L_k^{(k)}(0)|>1$ considerably as $k$ increases up to $k_0$
because $\frac{1}{2}<\alpha\leq 1$,
so that $\eta_k$ substantially exceeds one. This means that
$\gamma_{k_0}$ is substantially bigger than $\sigma_{k_0+1}$ and may
well lie between $\sigma_{k_0}$ and $\sigma_1$, so that
the rank $k_0$ approximation $P_{k_0+1}B_{k_0}Q_{k_0}^T$ is much less accurate
than the best rank $k_0$ approximation $A_{k_0}$ and LSQR has only the partial
regularization.
\end{remark}
\begin{remark}
For a given ill-posed problem, the noise level $\|e\|$ only affects $k_0$ but
has no effect on the overall decay rate of $\gamma_k$.
\end{remark}
\begin{remark}
There are several subtle treatments in the proof of Theorem~\ref{main1}, each of
which turns out to be absolutely necessary. Omitting any one of them
would make it impossible to obtain accurate estimates for $\epsilon_k$
defined by \eqref{estimate1}. The first is the treatment of
$\|U_k\Sigma_kV_k^T(I-\hat{Z}_k\hat{Z}_k^T)\|$.
By the definition of $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$, if we had
amplified it by
$$
\|U_k\Sigma_kV_k^T(I-\hat{Z}_k\hat{Z}_k^T)\|
\leq \|\Sigma_k\|\|V_k^T(I-\hat{Z}_k\hat{Z}_k^T)\|=
\sigma_1\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|,
$$
we would have obtained far too large an overestimate, which is almost a fixed
constant for severely
ill-posed problems and $k=1,2,\ldots,k_0$ and increases with
$k=1,2,\ldots,k_0$ for moderately and mildly ill-posed problems. Such rough
estimates are useless for getting a meaningful bound for $\gamma_k$. The key is to treat
$U_k\Sigma_kV_k^T(I-\hat{Z}_k\hat{Z}_k^T)$ as a
whole rather than separating it in the above way, so that we can
bound its norm accurately. The second is the use of
\eqref{inden1} and \eqref{inden2}. The third is the extraction of
$\|\Sigma_k\Delta_k^T\|$ from \eqref{separa} as a whole rather than
amplifying it to $\|\Sigma_k\|\|\Delta_k\|=\sigma_1\|\Delta_k\|$,
i.e., the fatal overestimate \eqref{rough}.
The fourth is the accurate estimates for $\|\Sigma_k\Delta_k^T\|$;
see \eqref{prodnorm} and \eqref{prodnorm2} in Theorem~\ref{thm3}.
For example, without using \eqref{inden1} and \eqref{inden2}, we would have
no choice but to obtain
\begin{align*}
\epsilon_k^2 &\leq \|\Sigma_k\|^2\|(I+
\Delta_k^T\Delta_k)^{-1}\Delta_k^T\Delta_k\|^2+\|\Sigma_k\|^2\|(I+
\Delta_k^T\Delta_k)^{-1}\Delta_k^T\|^2\\
&=\sigma_1^2\left(\frac{\|\Delta_k\|^2}{1+\|\Delta_k\|^2}\right)^2+\sigma_1^2
\|(I+\Delta_k^T\Delta_k)^{-1}\Delta_k^T\|^2\\
&=\sigma_1^2\left(\frac{\|\Delta_k\|^2}{1+\|\Delta_k\|^2}\right)^2+\sigma_1^2
\|\Delta_k(I+\Delta_k^T\Delta_k)^{-1}\|^2.
\end{align*}
From \eqref{compact}, \eqref{noncomp} and the previous
estimates for $\|\Delta_k\|$, such a bound is too pessimistic
and completely useless in our context; it does not even decrease
and cannot become small as $k$ increases, while
our estimates for $\epsilon_k=\eta_k\sigma_{k+1}$
in Theorem~\ref{main1} are much more accurate and
decay swiftly as $k$ increases, as indicated by \eqref{const1}
and \eqref{const2}.
\end{remark}
In order to prove the full or partial regularization of LSQR for
\eqref{eq1} completely and rigorously, besides Theorem~\ref{main1},
it appears that we need to introduce a precise definition of the near best
rank $k$ approximation
$P_{k+1}B_kQ_k^T$ to $A$, i.e., the precise meaning of
$\gamma_k\approx \sigma_{k+1}$. By definition \eqref{gammak}, the rank
$k$ matrix $P_{k+1}B_kQ_k^T$ is called a near best rank $k$ approximation
to $A$ if it satisfies
\begin{equation}\label{near}
\sigma_{k+1}\leq \gamma_k<\sigma_k \mbox{ and } \gamma_k-\sigma_{k+1}
<\sigma_k-\gamma_k,\mbox{ i.e., } \gamma_k<\frac{\sigma_k+\sigma_{k+1}}{2},
\end{equation}
that is, $\gamma_k$ lies between $\sigma_k$ and $\sigma_{k+1}$ and is closer to
$\sigma_{k+1}$. This definition is natural.
For an ill-posed problem \eqref{eq1}, since there is no
considerable gap between $\sigma_k$ and $\sigma_{k+1}$, the definition means that
$\gamma_k$ must approximate $\sigma_{k+1}$ more accurately as $k$ increases.
We mention in passing that a near best rank $k$ approximation to $A$ from
an ill-posed problem is much more stringent than it is for a matrix from
a (numerically) rank-deficient problem, where the large singular values are
very well separated from the small ones and there is a substantial gap
between the two groups of singular values. In addition, we point out that it may
be much harder to computationally obtain a near best rank $k$ approximation to
the large $A$ from an ill-posed problem than for a
numerically rank-deficient matrix of the same order.
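The criterion \eqref{near} is simple to state programmatically. The following Python sketch (an illustrative helper, not part of the paper) encodes it as a predicate on $\gamma_k$, $\sigma_k$ and $\sigma_{k+1}$:

```python
def is_near_best(gamma_k: float, sigma_k: float, sigma_k1: float) -> bool:
    """Near best rank-k criterion (near):
    sigma_{k+1} <= gamma_k and gamma_k < (sigma_k + sigma_{k+1}) / 2,
    i.e. gamma_k lies between the two singular values, closer to sigma_{k+1}."""
    return sigma_k1 <= gamma_k < 0.5 * (sigma_k + sigma_k1)

# gamma_k slightly above sigma_{k+1}: near best
assert is_near_best(0.105, sigma_k=0.2, sigma_k1=0.1)
# gamma_k closer to sigma_k: not near best
assert not is_near_best(0.19, sigma_k=0.2, sigma_k1=0.1)
```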
Based on Theorem~\ref{main1}, for the severely and moderately or mildly ill-posed
problems with the singular value models $\sigma_k=\zeta\rho^{-k}$ and
$\sigma_k=\zeta k^{-\alpha}$, we next derive the sufficient conditions on
$\rho$ and $\alpha$ that guarantee that $P_{k+1}B_kQ_k^T$ is a near best rank $k$
approximation to $A$ for $k=1,2,\ldots,k_0$. We analyze whether and how the
sufficient conditions are satisfied for the three kinds of ill-posed problems.
\begin{theorem}\label{nearapprox}
For a given \eqref{eq1}, assume that the discrete Picard condition
\eqref{picard} is satisfied. Then, in the sense of \eqref{near},
$P_{k+1}B_kQ_k^T$ is a near best rank $k$ approximation to $A$
for $k=1,2,\ldots,k_0$ if
\begin{equation}\label{condition}
\sqrt{1+\eta_k^2}<\frac{1}{2}\frac{\sigma_k}{\sigma_{k+1}}+\frac{1}{2}.
\end{equation}
For the severely ill-posed problems with $\sigma_k=\zeta\rho^{-k}$ and
the moderately or mildly ill-posed problems with $\sigma_k=\zeta k^{-\alpha}$,
$P_{k+1}B_kQ_k^T$ is a near best rank $k$ approximation to $A$
for $k=1,2,\ldots,k_0$ if $\rho>2$ and $\alpha$ satisfies
\begin{equation}\label{condition1}
2\sqrt{1+\eta_k^2}-1<\left(\frac{k_0+1}{k_0}\right)^{\alpha},
\end{equation}
respectively.
\end{theorem}
{\em Proof}.
By \eqref{final}, we see that $\gamma_k\leq \sqrt{1+\eta_k^2}\sigma_{k+1}$.
Therefore, $P_{k+1}B_kQ_k^T$ is a near best rank $k$ approximation to $A$ in
the sense of \eqref{near} provided that
$$
\sqrt{1+\eta_k^2}\sigma_{k+1}<\sigma_k
$$
and
$$
\sqrt{1+\eta_k^2}\sigma_{k+1}<\frac{\sigma_k+\sigma_{k+1}}{2},
$$
from which \eqref{condition} follows.
From \eqref{etak0}, for the severely ill-posed problems with
$\sigma_k=\zeta\rho^{-k}$ and $\rho>1$ we have
\begin{equation}\label{simp}
\sqrt{1+\eta_k^2}<1+\frac{1}{2}\eta_k^2\leq 1+\frac{1}{2}\rho^{-2(1+\beta)}
<1+\rho^{-1}, \ k=1,2,\ldots,k_0,
\end{equation}
from which it follows that
\begin{align}\label{ampli}
\sqrt{1+\eta_k^2}\sigma_{k+1}
&<(1+\rho^{-1})\sigma_{k+1}.
\end{align}
Since $\sigma_k/\sigma_{k+1}=\rho$, \eqref{condition} holds provided that
$$
1+\rho^{-1}<\frac{1}{2}\rho+\frac{1}{2},
$$
i.e., $\rho^2-\rho-2>0$, solving which for $\rho$ we get $\rho>2$. For the
moderately or mildly ill-posed problems with $\sigma_k=\zeta k^{-\alpha}$,
it is direct from \eqref{condition} to
get
$$
2\sqrt{1+\eta_k^2}-1<\left(\frac{k+1}{k}\right)^{\alpha}.
$$
Since $\left(\frac{k+1}{k}\right)^{\alpha}$ decreases
monotonically as $k$ increases, its minimum over $k=1,2,\ldots,k_0$
is $\left(\frac{k_0+1}{k_0}\right)^{\alpha}$.
Therefore, we obtain \eqref{condition1}.
\qquad\endproof
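As a quick sanity check of the condition $\rho>2$ derived above, the following Python sketch (a hypothetical numerical verification, not part of the proof) evaluates \eqref{condition} with the bound $\eta_k\leq\rho^{-(1+\beta)}$ from \eqref{etak0} over a range of $\rho$ and $\beta$, and confirms that $\rho^2-\rho-2>0$ exactly when $\rho>2$:

```python
import numpy as np

# Severely ill-posed model: sigma_k = zeta * rho^{-k}; by (etak0),
# eta_k <= rho^{-(1+beta)} for k <= k_0.  Condition (condition) then reads
# sqrt(1 + eta_k^2) < rho/2 + 1/2, which the proof reduces to rho > 2.
for rho in [2.01, 3.0, 5.0, 10.0]:
    for beta in [0.0, 0.5, 1.0]:
        eta = rho ** (-(1.0 + beta))
        assert np.sqrt(1 + eta**2) < 0.5 * rho + 0.5

# rho^2 - rho - 2 = (rho - 2)(rho + 1) > 0 exactly when rho > 2 (for rho > 0)
assert all((r**2 - r - 2 > 0) == (r > 2) for r in np.linspace(0.1, 6, 60))
```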
\begin{remark}
Given the noise level $\|e\|$, the discrete Picard condition \eqref{picard}
and \eqref{picard1}, from the bound \eqref{const2} for
$\eta_k,\,k=1,2,\ldots,k_0$, we see that the bigger $\alpha>1$ is, the smaller
$k_0$ and $\eta_k$ are. Therefore,
there must be $\alpha>1$ such that \eqref{condition1} holds.
Here we should point out that it is more suitable to
regard the conditions on $\rho$ and $\alpha$ as an indication that
$\rho$ and $\alpha$ must not be close to one, rather than as precise requirements,
since we have used the larger bound \eqref{simp} and the simplified
models $\sigma_k=\zeta \rho^{-k}$ and $\sigma_k=\zeta k^{-\alpha}$.
\end{remark}
\begin{remark}
For the mildly ill-posed problems with $\sigma_k=\zeta k^{-\alpha}$,
Theorem~\ref{moderate} has shown that
$\|\Delta_k\|$ is generally not small and can be arbitrarily large
for $k=1,2,\ldots,k_0$. From \eqref{etadelta}, we see
that $\eta_k$ has comparable size to $\|\Delta_k\|$. Note that the right-hand
side $\left(\frac{k_0+1}{k_0}\right)^{\alpha}\leq 2$ for
$\frac{1}{2}<\alpha\leq 1$ and any $k_0\geq 1$. Consequently, \eqref{condition1}
cannot be met generally for mildly ill-posed problems. The rare possible
exceptions occur when $k_0$ is very small and $\alpha$ is close to one since,
in that case, $\eta_k$ is not large for $k=1,2,\ldots,k_0$.
So, $P_{k+1}B_kQ_k^T$ is generally not a near best rank $k$ approximation
to $A$ for $k=1,2,\ldots, k_0$ for this kind of problem.
\end{remark}
\subsection{The approximation behavior of the Ritz values $\theta_i^{(k)}$}
\label{ritzapprox}
In this subsection, starting with Theorem~\ref{main1}, we prove that, under
certain sufficient conditions on $\rho$ and $\alpha$ for the severely
and moderately ill-posed problems with the models $\sigma_i=\zeta\rho^{-i}$ and
$\sigma_i=\zeta i^{-\alpha}$, respectively,
the $k$ Ritz values $\theta_i^{(k)}$ approximate the first
$k$ large singular values $\sigma_i$ in natural order for $k=1,2,\ldots,k_0$,
which means that no Ritz value smaller than $\sigma_{k_0+1}$ appears.
Combining this result with Theorem~\ref{nearapprox},
we can draw the definite conclusion that LSQR must have the full
regularization for these two kinds of problems provided that $\rho>1$ and
$\alpha>1$ suitably.
\begin{theorem}\label{ritzvalue}
Assume that \eqref{eq1} is severely ill-posed with
$\sigma_i=\zeta\rho^{-i}$ and $\rho>1$ or moderately ill-posed with
$\sigma_i=\zeta i^{-\alpha}$ and $\alpha>1$,
and the discrete Picard condition \eqref{picard} is
satisfied. Let the Ritz values $\theta_i^{(k)}$ be labeled
as $\theta_1^{(k)}>\theta_2^{(k)}>\cdots>\theta_{k}^{(k)}$.
Then
\begin{align}
0<\sigma_i-\theta_i^{(k)} &\leq \sqrt{1+\eta_k^2}\sigma_{k+1},\
i=1,2,\ldots,k.\label{error}
\end{align}
If $\rho\geq 1+\sqrt{2}$ or $\alpha>1$ satisfies
\begin{equation}\label{condm}
1+\sqrt{1+\eta_{k}^2}<\left(\frac{k_0+1}{k_0}\right)^{\alpha},\
k=1,2,\ldots,k_0,
\end{equation}
then the $k$ Ritz values $\theta_i^{(k)}$ strictly interlace
the first large $k+1$ singular values of $A$ and approximate
the first $k$ large ones in natural order for $k=1,2,\ldots,k_0$:
\begin{align}
\sigma_{i+1}&<\theta_i^{(k)}<\sigma_i,\,i=1,2,\ldots,k,
\label{error2}
\end{align}
meaning that there is no Ritz value $\theta_i^{(k)}$ smaller than $\sigma_{k_0+1}$
for $k=1,2,\ldots, k_0$.
\end{theorem}
{\em Proof}.
Note that for $k=1,2,\ldots,k_0$ the $\theta_i^{(k)},\ i=1,2,\ldots,k$ are
just the nonzero singular values of $P_{k+1}B_kQ_k^T$, whose other $n-k$
singular values are zeros. We write
$$
A=P_{k+1}B_k Q_k^T+(A-P_{k+1}B_k Q_k^T)
$$
with $\|A-P_{k+1}B_k Q_k^T\|=\gamma_k$
by definition \eqref{gammak}. Then by Mirsky's theorem on
singular values \cite[p.204, Thm 4.11]{stewartsun}, we have
\begin{equation}\label{errbound}
| \sigma_i-\theta_i^{(k)}|\leq \gamma_k\leq
\sqrt{1+\eta_k^2}\sigma_{k+1},\ i=1,2,\ldots,k.
\end{equation}
Since the singular values of $A$ are simple and $b$ has components in all the
left singular vectors $u_1,u_2,\ldots, u_n$ of $A$, Lanczos bidiagonalization,
i.e., Algorithm 1, can be run to completion, producing $P_{n+1},\ Q_n$ and
the lower bidiagonal $B_n\in \mathbb{R}^{(n+1)\times n}$ such that
\begin{equation}\label{fulllb}
P^TAQ_n=\left(\begin{array}{c}
B_n\\
\mathbf{0}
\end{array}
\right)
\end{equation}
with the $m\times m$ matrix $P=(P_{n+1},\hat{P})$ and $n\times n$ matrix $Q_n$
orthogonal and all
the $\alpha_i$ and $\beta_{i+1}$, $i=1,2,\ldots,n$, of $B_n$ being positive.
Note that the singular values of $B_k,\ k=1,2,\ldots,n,$
are all simple and that $B_k$ consists of the first $k$ columns of $B_n$
with the last $n-k$ {\em zero} rows deleted. Applying Cauchy's {\em strict}
interlacing theorem \cite[p.198, Corollary 4.4]{stewartsun} to the singular
values of $B_k$ and $B_n$, we have
\begin{align}
\sigma_{n-k+i}< \theta_i^{(k)}&< \sigma_i,\ i=1,2,\ldots,k.
\label{interlace}
\end{align}
Therefore, \eqref{errbound} becomes
\begin{equation}\label{ritzapp}
0< \sigma_i-\theta_i^{(k)}\leq\gamma_{k}\leq
\sqrt{1+\eta_k^2}\sigma_{k+1},\ i=1,2,\ldots,k,
\end{equation}
which proves \eqref{error}.
That is, the $\theta_i^{(k)}$ approximate $\sigma_i$ from below
for $i=1,2,\ldots,k$ with the errors no more than
$\gamma_k\leq \sqrt{1+\eta_k^2}\sigma_{k+1}$.
For $i=1,2,\ldots,k$, notice that $\rho^{-k+i}\leq 1$. Then
from \eqref{ritzapp}, \eqref{simp} and $\sigma_i=\zeta\rho^{-i}$ we obtain
\begin{align*}
\theta_i^{(k)}&\geq \sigma_i-\gamma_k>\sigma_i-
(1+\rho^{-1})\sigma_{k+1}\\
&=\zeta\rho^{-i}-\zeta (1+\rho^{-1})\rho^{-(k+1)}\\
&=\zeta\rho^{-(i+1)}(\rho-(1+\rho^{-1})\rho^{-k+i})\\
&\geq \zeta\rho^{-(i+1)}(\rho-\rho^{-1}-1)\\
&\geq\zeta\rho^{-(i+1)}=\sigma_{i+1},
\end{align*}
provided that $\rho-\rho^{-1}\geq 2$, solving which we get $\rho\geq 1+\sqrt{2}$.
Together with the upper bound of \eqref{interlace}, we have proved \eqref{error2}.
For the moderately ill-posed problems with $\sigma_i=\zeta i^{-\alpha},\
i=1,2,\ldots,k$ and $k=1,2,\ldots,k_0$,
we get
\begin{align*}
\theta_i^{(k)}&\geq \sigma_i-\gamma_k\geq\sigma_i-\sqrt{1+\eta_k^2}
\sigma_{k+1}\\
&=\zeta i^{-\alpha}-\zeta \sqrt{1+\eta_k^2}(k+1)^{-\alpha}\\
&=\zeta (i+1)^{-\alpha}\left(\left(\frac{i+1}{i}\right)^{\alpha}
-\sqrt{1+\eta_k^2}\left(\frac{i+1}{k+1}\right)^{\alpha}\right)\\
&>\zeta (i+1)^{-\alpha}=\sigma_{i+1},
\end{align*}
i.e., \eqref{error2} holds, provided that $\eta_k>0$ and $\alpha>1$ are such that
$$
\left(\frac{i+1}{i}\right)^{\alpha}
-\sqrt{1+\eta_k^2}\left(\frac{i+1}{k+1}\right)^{\alpha}>1,
$$
which means that
$$
\sqrt{1+\eta_k^2}<\left(\left(\frac{i+1}{i}\right)^{\alpha}-1\right)
\left(\frac{k+1}{i+1}\right)^{\alpha}=
\left(\frac{k+1}{i}\right)^{\alpha}-\left(\frac{k+1}{i+1}\right)^{\alpha},\
i=1,2,\ldots,k.
$$
It is easily justified that the above right-hand side monotonically
decreases with respect to $i=1,2,\ldots,k$; its minimum
is attained at $i=k$ and equals $\left(\frac{k+1}{k}\right)^{\alpha}-1$.
Furthermore, since $\left(\frac{k+1}{k}\right)^{\alpha}-1$ decreases
monotonically as $k$ increases, its minimum over $k=1,2,\ldots,k_0$
is $\left(\frac{k_0+1}{k_0}\right)^{\alpha}-1$,
which is just the condition \eqref{condm}.
\qquad\endproof
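To illustrate Theorem~\ref{ritzvalue}, the following Python/NumPy sketch (a hypothetical experiment, not part of the paper) builds a synthetic severely ill-posed matrix with $\sigma_i=\zeta\rho^{-i}$, $\rho=3\geq 1+\sqrt{2}$, and a right-hand side obeying the discrete Picard model $|u_i^Tb|=\sigma_i^{1+\beta}$ with $\beta=\frac{1}{2}$; it then runs $k$ steps of Lanczos bidiagonalization with full reorthogonalization and checks the interlacing of the Ritz values:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, zeta = 12, 3.0, 1.0
sigma = zeta * rho ** -np.arange(1.0, n + 1)          # sigma_i = zeta * rho^{-i}
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(sigma) @ V.T
b = U @ sigma ** 1.5                                   # Picard model, beta = 1/2

def lanczos_bidiag(A, b, k):
    """k steps of Lanczos (Golub-Kahan) bidiagonalization started from b,
    with full reorthogonalization; returns the (k+1) x k lower bidiagonal B_k."""
    m, n = A.shape
    Uk = np.zeros((m, k + 1)); Vk = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    Uk[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        r = A.T @ Uk[:, j]
        r -= Vk[:, :j] @ (Vk[:, :j].T @ r)             # alpha_j v_j = A^T u_j - beta_j v_{j-1}
        B[j, j] = np.linalg.norm(r)                    # alpha_j
        Vk[:, j] = r / B[j, j]
        s = A @ Vk[:, j]
        s -= Uk[:, :j + 1] @ (Uk[:, :j + 1].T @ s)     # beta_{j+1} u_{j+1} = A v_j - alpha_j u_j
        B[j + 1, j] = np.linalg.norm(s)                # beta_{j+1}
        Uk[:, j + 1] = s / B[j + 1, j]
    return B

k = 5
theta = np.linalg.svd(lanczos_bidiag(A, b, k), compute_uv=False)  # Ritz values, descending
# Interlacing as in (error2): sigma_{i+1} < theta_i <= sigma_i (up to rounding)
assert np.all(theta > sigma[1:k + 1])
assert np.all(theta < sigma[:k] * (1 + 1e-8))
```

The strict lower inequality holds with a wide margin here because the singular values decay by the factor $\rho=3$ per step.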
\begin{remark}
Similar to \eqref{condition1},
there must be $\alpha>1$ such that \eqref{condm} holds. Again, we stress
that the conditions on $\rho$ and $\alpha$ should be regarded as an indication that
$\rho$ and $\alpha$ must not be close to one, rather than as precise requirements,
since we have used the amplified \eqref{simp} and the simplified
models $\sigma_i=\zeta \rho^{-i}$ and $\sigma_i=\zeta i^{-\alpha}$.
Comparing Theorem~\ref{nearapprox} with Theorem~\ref{ritzvalue}, we
find that, as far as the severely or moderately ill-posed problems are
concerned, for $k=1,2,\ldots,k_0$ the near best rank $k$ approximation
$P_{k+1}B_kQ_k^T$ essentially means that the singular values
$\theta_i^{(k)}$ of $B_k$ approximate the first $k$
large singular values $\sigma_i$ of $A$ in natural order, provided that
$\rho>1$ or $\alpha>1$ suitably.
\end{remark}
\begin{remark}
Under the conditions of Theorems~\ref{nearapprox}--\ref{ritzvalue},
let us explore how the results in them
depend on $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|$.
\eqref{etak0} and \eqref{case1} indicate that,
for the severely ill-posed problems with $\sigma_k=\zeta\rho^{-k}$,
ignoring higher order small terms,
we have $\eta_k\leq\rho^{-1-\beta}$ and $\|\Delta_k\|\leq \rho^{-2-\beta}<1$
for $k\leq k_0$; for the moderately ill-posed problems with
$\sigma_k=\zeta k^{-\alpha}$, \eqref{etadelta} indicates that $\eta_k$
and $\|\Delta_k\|$ are comparable in size for $k\leq k_0$,
while \eqref{modera2} shows that $\|\Delta_k\|$ is
at most of modest size for $k\leq k_0$. As a result, Theorem~\ref{thm2}
and Theorem~\ref{moderate} demonstrate that
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|<\frac{1}{\sqrt{2}}$ and
$\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|<1$ comfortably for severely and
moderately ill-posed problems, respectively. In other words,
the largest canonical angle between
$\mathcal{V}_k^R$ and $\mathcal{V}_k$ does not exceed $\frac{\pi}{4}$
and is considerably smaller than $\frac{\pi}{2}$
for these two kinds of problems and $k\leq k_0$, respectively.
\end{remark}
\begin{remark}\label{extract}
Theorems~\ref{main1}--\ref{ritzvalue} show that, for $k=1,2,\ldots,k_0$,
the $k$-step Lanczos bidiagonalization is guaranteed to extract or acquire
the first $k$ dominant SVD components for the severely
or moderately ill-posed problems with $\rho>1$ or $\alpha>1$ suitably,
so that LSQR has the full regularization for these two kinds of ill-posed
problems and can obtain best possible regularized solutions $x^{(k_0)}$ at
semi-convergence.
\end{remark}
Let us have a closer look at the regularization of LSQR for mildly ill-posed
problems. We observe that the sufficient condition \eqref{condm} for \eqref{error2}
is never met for this kind of problem because
$
\left(\frac{k_0+1}{k_0}\right)^{\alpha}\leq 2
$
for any $k_0$ and $\frac{1}{2}< \alpha\leq 1$. This indicates that,
for $k=1,2,\ldots,k_0$, the $k$ Ritz values $\theta_i^{(k)}$ may not approximate
the first $k$ large singular values $\sigma_i$ in natural order,
and particularly there is at least one Ritz value
$\theta_{k_0}^{(k_0)}<\sigma_{k_0+1}$, so that $x^{(k_0)}$ is already
deteriorated and cannot be as accurate as the best TSVD solution $x_{k_0}^{tsvd}$,
so that LSQR has only the partial regularization.
We can also make use of Theorem~\ref{initial} to explain the partial
regularization of LSQR: Theorem~\ref{moderate} has shown that
$\|\Delta_k\|$ is generally not small and
may become arbitrarily large as $k$ increases up to $k_0$ for mildly ill-posed
problems, meaning that $\|\sin\Theta(\mathcal{V}_k,\mathcal{V}_k^R)\|\approx 1$,
as the sharp bound \eqref{modera2} indicates, from which it follows that
a small Ritz value $\theta_{k_0}^{(k_0)}<\sigma_{k_0+1}$ generally appears.
\subsection{General best or best rank $k$ approximations to $A$ and their
implications on LSQR}
\label{morerank}
We investigate the general best or near best rank $k$ approximations to $A$ with
$\sigma_k=\zeta k^{-\alpha}$ and $\alpha>\frac{1}{2}$.
We aim to show that, for each such rank
$k$ approximation, its smallest
nonzero singular value may be smaller than $\sigma_{k+1}$ for
$\frac{1}{2}<\alpha\leq 1$, that is, its nonzero singular values
may not approximate the $k$ large singular values of $A$ in natural
order, while the smallest nonzero singular value
of such a rank $k$ approximation is guaranteed to be bigger
than $\sigma_{k+1}$ provided that
$\alpha>1$ suitably. As it will turn out, this can help
us further understand the regularization of LSQR for mildly and moderately
ill-posed problems. Finally, we investigate the behavior of
the Ritz values $\theta_i^{(k)},\ i=1,2,\ldots,k$ when $P_{k+1}B_kQ_k^T$
is not a near best rank $k$ approximation to $A$ for mildly ill-posed problems.
First of all, we point out an intrinsic fact that both the best and near best
rank $k$ approximations to $A$ with respect to the 2-norm are {\em not} unique.
This fact is important for further understanding Theorem~\ref{ritzvalue}.
Let $C_k$ be a best or near best rank
$k$ approximation to $A$ with $\|A-C_k\|=(1+\epsilon)\sigma_{k+1}$ for any
$\epsilon\geq 0$ satisfying $(1+\epsilon)\sigma_{k+1}<\frac{\sigma_k+
\sigma_{k+1}}{2}$ (note that $\epsilon=0$ corresponds to a best rank $k$ approximation),
i.e., $(1+\epsilon)\sigma_{k+1}$ lies between $\sigma_{k+1}$
and $\sigma_k$ and is closer to $\sigma_{k+1}$, by which we get
$$
1+2\epsilon<\frac{\sigma_k}{\sigma_{k+1}}.
$$
It is remarkable that $C_k$ is not unique. For example, among others,
all the
$$
C_k=A_k(\theta,j)=A_k-\sigma_{k+1}U_k
{\rm diag}(\theta(1+\epsilon),\ldots,\theta(1+\epsilon),
\underbrace{(1+\epsilon)}_j,\theta(1+\epsilon),\ldots,\theta(1+\epsilon))
V_k^T
$$
with any $0\leq\theta\leq 1$ and $1\leq j\leq k-1$ form a family of
best or near best rank $k$ approximations to $A$.
The smallest nonzero singular value of $A_k(\theta,j)$ is
$\sigma_k-\theta(1+\epsilon)\sigma_{k+1}$. Since $\sigma_k=\zeta k^{-\alpha}$ and
$
\left(\frac{k+1}{k}\right)^{\alpha}<2
$
for any $k>1$ and $\frac{1}{2}<\alpha\leq 1$,
we obtain
\begin{equation}\label{mildmoderate2}
\sigma_k-\theta(1+\epsilon)\sigma_{k+1}=\sigma_{k+1}
\left(\left(\frac{k+1}{k}\right)^{\alpha}-\theta(1+\epsilon)\right)<\sigma_{k+1}
\end{equation}
for $\theta$ sufficiently close to one.
This shows that $\sigma_k-\theta(1+\epsilon)\sigma_{k+1}$ does not lie between
$\sigma_{k+1}$ and $\sigma_k$ and does not interlace them for $k>1$. In this case, for a
given $\alpha\in (\frac{1}{2},1]$, the bigger $k$ is, the
smaller $\left(\frac{k+1}{k}\right)^{\alpha}-\theta(1+\epsilon)$ is, and the
further $\sigma_k-\theta(1+\epsilon)\sigma_{k+1}$ is from $\sigma_{k+1}$.
On the other hand, for $\theta$ sufficiently small we
have
\begin{equation}\label{mildmoderate}
\left(\frac{k+1}{k}\right)^{\alpha}-\theta(1+\epsilon)>1,
\end{equation}
that is, $\sigma_k-\theta(1+\epsilon)\sigma_{k+1}$ interlaces
$\sigma_{k+1}$ and $\sigma_k$ for $\theta$
sufficiently small.
For $A$ with $\sigma_k=\zeta k^{-\alpha}$ and $\alpha>1$, the situation is
much better since, for any $k$, the requirement \eqref{mildmoderate} is met
for {\em any} $0\leq\theta\leq 1$ provided that $\alpha>1$ suitably,
leading to $\sigma_k-\theta(1+\epsilon)\sigma_{k+1}>\sigma_{k+1}$, meaning
that the smallest singular value
$\sigma_k-\theta(1+\epsilon)\sigma_{k+1}$ of a near best rank $k$ approximation
$A_k(\theta,j)$ interlaces $\sigma_{k+1}$ and $\sigma_k$.
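The non-uniqueness and the failure of natural-order approximation can be observed directly. The following Python/NumPy sketch (a hypothetical experiment, not part of the paper, with the mildly ill-posed model $\sigma_k=\zeta k^{-\alpha}$, $\alpha=0.8$) builds one member $A_k(\theta,j)$ of the family above with $\theta$ close to one and confirms \eqref{mildmoderate2}:

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha, zeta = 10, 0.8, 1.0
sigma = zeta * np.arange(1.0, n + 1) ** -alpha        # mildly ill-posed model
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(sigma) @ V.T

k, j, theta, eps = 5, 2, 0.95, 0.05                    # 1 <= j <= k-1, theta near one
Ak = U[:, :k] @ np.diag(sigma[:k]) @ V[:, :k].T        # best rank-k approximation
d = np.full(k, theta * (1 + eps))
d[j - 1] = 1 + eps                                     # the j-th entry gets (1 + eps)
C = Ak - sigma[k] * (U[:, :k] @ np.diag(d) @ V[:, :k].T)   # C = A_k(theta, j)

# C is a near best rank-k approximation: ||A - C|| = (1 + eps) * sigma_{k+1}
assert np.isclose(np.linalg.norm(A - C, 2), (1 + eps) * sigma[k])

# Its smallest nonzero singular value is sigma_k - theta (1 + eps) sigma_{k+1},
# and for theta close to one it drops below sigma_{k+1} (eq. (mildmoderate2))
smin = np.linalg.svd(C, compute_uv=False)[k - 1]
assert np.isclose(smin, sigma[k - 1] - theta * (1 + eps) * sigma[k])
assert smin < sigma[k]
```

Thus this particular $C_k$ does not approximate the $k$ large singular values of $A$ in natural order, exactly as the worst-case analysis above predicts.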
However, we should be aware that the above analysis is made
for the {\em worst case}: for any best or
near best rank $k$ approximation $C_k$ to $A$,
the minimum of the smallest nonzero singular values of all the $C_k$ is exactly
$\sigma_k-(1+\epsilon)\sigma_{k+1}$. We now prove this. Suppose that
$\sigma_k(C_k)$ is the smallest nonzero singular value of a given such $C_k$.
Then from $\|A-C_k\|=(1+\epsilon)\sigma_{k+1}$,
by the standard perturbation theory we have
$$
|\sigma_k-\sigma_k(C_k)|\leq (1+\epsilon)\sigma_{k+1}.
$$
Clearly, the minimum of all the $\sigma_k(C_k)$ is attained if and
only if equality holds in the above bound, and it is exactly
$\sigma_k-(1+\epsilon)\sigma_{k+1}$. On the other hand, by
construction, we also see that the smallest singular value
$\sigma_k-\theta(1+\epsilon)\sigma_{k+1}$
of $C_k$ is arbitrarily close to or equal to $\sigma_k$ by
taking $\theta$ arbitrarily small or zero, which means that
\eqref{mildmoderate} holds. In this case, we observe from the equality
in \eqref{mildmoderate2} that $\sigma_k-\theta(1+\epsilon)\sigma_{k+1}
>\sigma_{k+1}$ and interlaces $\sigma_{k+1}$ and $\sigma_k$.
As far as LSQR is concerned, notice that the condition \eqref{condm} for the
interlacing property \eqref{error2} is derived by assuming the worst case that
$\sigma_k-\theta_k^{(k)}=\gamma_k\leq\sqrt{1+\eta_k^2}\sigma_{k+1}$, i.e.,
$\theta_k^{(k)}$ is supposed to be the smallest possible nonzero one among
all the $\sigma_k(C_k)$, where $C_k$ belongs to the set of near best
rank $k$ approximations that satisfy $\|A-C_k\|=\gamma_k\leq\sqrt{1+\eta_k^2}
\sigma_{k+1}$. For mildly ill-posed problems, the above arguments indicate
that although in the worst case some of the $k$ Ritz values $\theta_i^{(k)}$
may not approximate the first $k$ large singular values $\sigma_i$
of $A$ in natural order, they may do so in practice
when $P_{k+1}B_kQ_k^T$ is {\em occasionally} a near best
rank $k$ approximation to $A$ for some small $k\leq k_0$.
Unfortunately, as we have shown previously, $P_{k+1}B_kQ_k^T$ is
rarely a near best rank $k$ approximation to $A$ for
mildly ill-posed problems, i.e., $\gamma_k>\sigma_k$ generally.
Recall the second part of Theorem~\ref{initial} and Remark~\ref{appear},
which have shown rigorously that there is at least one Ritz value
$\theta_k^{(k)}<\sigma_{k+1}$ if $\varepsilon_k$
is sufficiently small there, that is, $\eta_k$ or equivalently $\|\Delta_k\|$
is large. This is exactly the case in which $P_{k+1}B_kQ_k^T$ is not a near best
rank $k$ approximation to $A$, so that LSQR has only the partial
regularization.
We can make a further analysis on the behavior of
$\theta_i^{(k)},\ i=1,2,\ldots,k$ when \eqref{eq1} is mildly ill-posed.
Suppose that $\gamma_k\in [\sigma_{j+1},\sigma_j]$ for some $j\leq k$, which
means that $P_{k+1}B_kQ_k^T$ is definitely not a near best rank $k$
approximation to $A$ when $j<k$. Below we derive the smallest
upper bound for $\sigma_j-\gamma_k$ and obtain the biggest
lower bound for $\theta_j^{(k)}$. For $\sigma_j=\zeta j^{-\alpha}$
with $\frac{1}{2}<\alpha\leq 1$ we have
\begin{align*}
\sigma_j-\gamma_k&\leq \sigma_j-\sigma_{j+1}
=\sigma_{j+1}\left(\left(\frac{j+1}{j}\right)^{\alpha}-1\right)\\
&\leq \sigma_{j+1}\left(1+\frac{\alpha}{j}-1\right)\\
&=\frac{\alpha}{j}\sigma_{j+1}=\frac{\alpha}{\zeta j^{1-\alpha}}\zeta
j^{-\alpha}\sigma_{j+1}
=\frac{\alpha}{\zeta j^{1-\alpha}}\zeta^2 j^{-\alpha} (j+1)^{-\alpha}\\
&=
\frac{\alpha}{\zeta j^{1-\alpha}}\zeta \sigma_{j(j+1)}=
\frac{\alpha}{j^{1-\alpha}}\sigma_{j(j+1)},
\end{align*}
in which $\frac{\alpha}{j^{1-\alpha}}<1$
decreases with increasing $j$ for $\alpha<1$ and is one for $\alpha=1$.
Therefore, the smallest
upper bound for $\sigma_j-\gamma_k$ is no more than $\sigma_{j(j+1)}$,
which is smaller than $\sigma_{k+1}$ once $j(j+1)>k$. In view of the above
and \eqref{ritzapp}, for $\gamma_k\in [\sigma_{j+1},\sigma_j]$, since
$\theta_j^{(k)}\geq\sigma_j-\gamma_k$ and this lower bound is at most
$\sigma_{j(j+1)}$, we may have $\theta_j^{(k)}<\sigma_{k+1}$ provided that
$j(j+1)>k$. Moreover, when $\theta_j^{(k)}<\sigma_{k+1}$,
by the labeling rule, there are $k-j+1$ Ritz values
$\theta_j^{(k)},\theta_{j+1}^{(k)},\ldots,\theta_k^{(k)}$ smaller
than $\sigma_{k+1}$. As a result, for $k=k_0$, there are $k_0-j+1$
Ritz values smaller than $\sigma_{k_0+1}$ that deteriorate the LSQR
iterate $x^{(k_0)}$, so that LSQR has only the partial regularization.
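As a sanity check, the chain of inequalities above is easy to verify numerically. The following sketch (our own illustration, with the illustrative choices $\zeta=1$ and $\alpha=0.8$) confirms that $\sigma_j-\sigma_{j+1}\leq \frac{\alpha}{j^{1-\alpha}}\sigma_{j(j+1)}$ for the model $\sigma_j=\zeta j^{-\alpha}$, and that this bound falls below $\sigma_{k+1}$ once $j(j+1)>k$.

```python
# Model singular values sigma_j = zeta * j**(-alpha) with 1/2 < alpha <= 1.
zeta, alpha = 1.0, 0.8   # illustrative values, not from the paper

def sigma(j):
    return zeta * j ** (-alpha)

# Check sigma_j - sigma_{j+1} <= (alpha / j**(1 - alpha)) * sigma_{j(j+1)}.
for j in range(1, 200):
    gap = sigma(j) - sigma(j + 1)
    bound = (alpha / j ** (1 - alpha)) * sigma(j * (j + 1))
    assert gap <= bound * (1 + 1e-12)

# Once j(j+1) > k, the bound drops below sigma_{k+1}.
k, j = 20, 5   # j * (j + 1) = 30 > k
assert (alpha / j ** (1 - alpha)) * sigma(j * (j + 1)) < sigma(k + 1)
```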
\section{Decay rates of $\alpha_k$ and $\beta_{k+1}$ and the regularization of
LSMR and CGME} \label{alphabeta}
In this section, we will present a number of results on the decay rates of
$\alpha_k,\ \beta_{k+1}$ and $\gamma_k$ and on certain other rank
$k$ approximations to $A$ and $A^TA$ constructed by Lanczos bidiagonalization.
The decay rates of $\alpha_k$ and $\beta_{k+1}$ are
particularly useful for practically detecting the degree of
ill-posedness of \eqref{eq1} and identifying the full or partial regularization
of LSQR and LSMR. The results on the new rank $k$
approximations critically determine the full or partial regularization of the
Krylov iterative regularization solvers LSMR \cite{fong} and CGME
\cite{craig,hanke95,hanke01,hps09}. In Section \ref{decay}, we
prove how $\alpha_k$ and $\beta_{k+1}$ decay by relating them to
$\gamma_k$ and the estimates established for it. Then we show how to
exploit the decay rate of $\alpha_k+\beta_{k+1}$ to identify
the degree of ill-posedness of \eqref{eq1} and the regularization of LSQR.
In Section \ref{lsmr}, we prove that the regularization of LSMR resembles that of LSQR for each
of the three kinds of ill-posed problems. In Section \ref{cgme}, we prove that
the regularizing effects of CGME have intrinsic indeterminacy and are inferior
to those of LSQR and LSMR. In Section \ref{rrqr},
we compare LSQR with some standard randomized algorithms \cite{halko11} and
strong rank-revealing QR, i.e., RRQR, factorizations \cite{gu96,hong92},
and show that the former solves
ill-posed problems more accurately than the latter two at no higher cost.
\subsection{Decay rates of $\alpha_k$ and $\beta_{k+1}$ and their practical
use}\label{decay}
We consider how $\alpha_k$ and $\beta_{k+1}$ decay in certain pronounced
manners and show how to use them to identify the full or partial regularization
of LSQR in practice.
\begin{theorem}\label{main2}
With the notation defined previously, the following results hold:
\begin{eqnarray}
\alpha_{k+1}&<&\gamma_k\leq \sqrt{1+\eta_k^2}\sigma_{k+1},
\ k=1,2,\ldots,n-1,\label{alpha}\\
\beta_{k+2}&<& \gamma_k\leq\sqrt{1+\eta_k^2}\sigma_{k+1}, \ k=1,2,\ldots,n-1,
\label{beta}\\
\alpha_{k+1}\beta_{k+2}&\leq &
\frac{\gamma_k^2}{2}\leq
\frac{(1+\eta_k^2)\sigma_{k+1}^2}{2}, \ k=1,2,\ldots,n-1,
\label{prod2}\\
\gamma_{k+1}&<&\gamma_k,\ \ k=1,2,\ldots,n-2. \label{gammamono}
\end{eqnarray}
\end{theorem}
{\em Proof}.
From \eqref{fulllb}, since $P$ and $Q_n$ are orthogonal matrices,
we have
\begin{align}
\gamma_k &=\|A-P_{k+1}B_kQ_k^T\|=\|P^T(A-P_{k+1}B_kQ_k^T)Q_n\| \label{invar}\\
&=
\left\| \left(\begin{array}{c}
B_n \\
\mathbf{0}
\end{array}
\right)-(I,\mathbf{0} )^TB_k (I,\mathbf{0} )\right\|=\|G_k\| \label{gk}
\end{align}
with
\begin{align}\label{gk1}
G_k&=\left(\begin{array}{cccc}
\alpha_{k+1} & & & \\
\beta_{k+2}& \alpha_{k+2} & &\\
&
\beta_{k+3} &\ddots & \\& & \ddots & \alpha_{n} \\
& & & \beta_{n+1}
\end{array}\right)\in \mathbb{R}^{(n-k+1)\times (n-k)}
\end{align}
resulting from deleting the $(k+1)\times k$ leading principal matrix of $B_n$
and the first $k$ zero rows and columns of the resulting matrix.
From the above, for $k=1,2,\ldots,n-1$ we have
\begin{align}
\alpha_{k+1}^2+\beta_{k+2}^2&=\|G_ke_1\|^2\leq \|G_k\|^2=\gamma_k^2,
\label{alphabetasum1}
\end{align}
which shows that $\alpha_{k+1}< \gamma_k$ and $\beta_{k+2}<\gamma_k$
since $\alpha_{k+1}>0$ and $\beta_{k+2}>0$.
So from \eqref{final}, we get \eqref{alpha} and \eqref{beta}. On the other
hand, noting that
\begin{align*}
2\alpha_{k+1}\beta_{k+2}&\leq \alpha_{k+1}^2+\beta_{k+2}^2\leq \gamma_k^2,
\end{align*}
we get \eqref{prod2}.
Note that $\alpha_k>0$ and $\beta_{k+1}>0,\ k=1,2,\ldots,n$.
By $\gamma_k=\|G_k\|$ and \eqref{gk1}, $\gamma_{k+1}=\|G_{k+1}\|$
equals the 2-norm of the submatrix of $G_k$ obtained by deleting its first column.
Applying Cauchy's strict interlacing theorem to the singular values
of this submatrix and $G_k$, we obtain \eqref{gammamono}.
\qquad\endproof
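Theorem~\ref{main2} can be checked numerically. The sketch below is our own illustration, not part of any algorithm in this paper: it runs Golub--Kahan (Lanczos) bidiagonalization with full reorthogonalization on a synthetic matrix with $\sigma_i=i^{-1.5}$ and verifies $\alpha_{k+1}<\gamma_k$, $\beta_{k+2}<\gamma_k$ and $\gamma_{k+1}<\gamma_k$, where $\gamma_k=\|A-P_{k+1}B_kQ_k^T\|$; the function names are hypothetical.

```python
import numpy as np

def lanczos_bidiag(A, b, steps):
    """Golub-Kahan (Lanczos) bidiagonalization of A started from b, with
    full reorthogonalization.  Produces A Q_k = P_{k+1} B_k, where B_k is
    lower bidiagonal with diagonal alpha_1..alpha_k and subdiagonal
    beta_2..beta_{k+1}; here alphas[i] = alpha_{i+1}, betas[i] = beta_{i+2}."""
    m, n = A.shape
    P = np.zeros((m, steps + 1))
    Q = np.zeros((n, steps))
    alphas, betas = [], []
    P[:, 0] = b / np.linalg.norm(b)
    beta, q_prev = 0.0, np.zeros(n)
    for k in range(steps):
        w = A.T @ P[:, k] - beta * q_prev
        w -= Q[:, :k] @ (Q[:, :k].T @ w)          # reorthogonalization
        alpha = np.linalg.norm(w)
        Q[:, k] = w / alpha
        u = A @ Q[:, k] - alpha * P[:, k]
        u -= P[:, :k + 1] @ (P[:, :k + 1].T @ u)  # reorthogonalization
        beta = np.linalg.norm(u)
        P[:, k + 1] = u / beta
        alphas.append(alpha)
        betas.append(beta)
        q_prev = Q[:, k]
    return P, Q, np.array(alphas), np.array(betas)

def gamma(A, P, Q, alphas, betas, k):
    """gamma_k = ||A - P_{k+1} B_k Q_k^T||_2."""
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alphas[:k]
    B[np.arange(1, k + 1), np.arange(k)] = betas[:k]
    return np.linalg.norm(A - P[:, :k + 1] @ B @ Q[:, :k].T, 2)

# Moderately ill-posed synthetic matrix with sigma_i = i^{-1.5}.
rng = np.random.default_rng(0)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = (U * np.arange(1, n + 1, dtype=float) ** -1.5) @ V.T
b = A @ rng.standard_normal(n)

steps = 12
P, Q, a, bta = lanczos_bidiag(A, b, steps)
for k in range(1, steps - 1):
    gk = gamma(A, P, Q, a, bta, k)
    assert a[k] < gk                              # alpha_{k+1} < gamma_k
    assert bta[k] < gk                            # beta_{k+2}  < gamma_k
    assert gamma(A, P, Q, a, bta, k + 1) < gk     # gamma_{k+1} < gamma_k
```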
\begin{remark}
For severely and moderately ill-posed problems, based on
the results in the last section,
\eqref{alpha} and \eqref{beta} show that $\alpha_{k+1}$ and $\beta_{k+2}$
decay as fast as $\sigma_{k+1}$ for $k\leq k_0$ and their decays may become slow
for $k>k_0$. For mildly ill-posed problems,
since $\eta_k$ is generally considerably bigger than one for $k\leq k_0$,
$\alpha_{k+1}$ and $\beta_{k+2}$ cannot generally decay as fast as
$\sigma_{k+1}$, and their decays become slower for $k>k_0$.
Gazzola and his
coauthors \cite{gazzola14,gazzola-online} claim without rigorous proofs
that $\alpha_{k+1}\beta_{k+1}=\mathcal{O}(k\sigma_k^2)$ and
$\alpha_{k+1}\beta_{k+2}=\mathcal{O}(k\sigma_{k+1}^2)$ for severely ill-posed
problems with the constants in $\mathcal{O}(\cdot)$ unknown
(see Proposition 4 of \cite{gazzola-online}), but they do not show how
fast each of them decays; see Proposition 6 of \cite{gazzola-online}.
In contrast, our \eqref{alpha}, \eqref{beta} and \eqref{prod2} are
rigorous and quantitative for all three kinds of ill-posed problems.
In \cite[Corollary 3.1]{gazzola16}, the authors have derived
the product inequality
$$
\prod_{k=1}^l\alpha_{k+1}\beta_{k+1}\leq \prod_{k=1}^l\sigma_k^2,\
l=1,2,\ldots,n-1.
$$
Whether or not this inequality is sharp is unknown, as they point out.
Based on it, they empirically claim that $\alpha_{k+1}\beta_{k+1}$
may decay as fast as $\sigma_k^2$ when the inequality is sharp; conversely,
if it is not sharp, nothing can be said on how fast
$\alpha_{k+1}\beta_{k+1}$ decays.
\end{remark}
We now shed light on \eqref{alpha} and \eqref{beta}.
For a given \eqref{eq1}, its degree of ill-posedness
is either known or unknown. If it is unknown, \eqref{alpha} is of
practical importance and can be exploited to identify whether or not LSQR has
the full regularization without extra cost in an automatic and
reliable way; so can \eqref{beta}.
From the proofs of \eqref{alpha} and \eqref{beta}, we find that
$\alpha_{k+1}$ and $\beta_{k+2}$ are smaller than $\gamma_k$. Since our theory
and analysis in Section \ref{rankapp} have proved that $\gamma_k$
decays as fast as $\sigma_{k+1}$ for severely or moderately ill-posed problems
with $\rho>1$ or $\alpha>1$ suitably and that it decays more slowly than
$\sigma_{k+1}$ for mildly ill-posed problems, the decay rate
of $\sigma_k$ can be judged reliably by that of $\alpha_k$
or $\beta_{k+1}$, or better by that of $\alpha_k+\beta_{k+1}$,
as shown below.
Given \eqref{eq1}, run LSQR until semi-convergence
occurs at iteration $k^*$. Check how $\alpha_k+\beta_{k+1}$ decays as $k$
increases during the process. If, on average, it decays in
an obviously exponential way, then \eqref{eq1} is a severely ill-posed problem.
In this case, LSQR has the full regularization, and semi-convergence means
that we have found a best possible regularized solution. If, on average,
it decays as fast as $k^{-\alpha}$ with $\alpha$ considerably bigger than one, then
\eqref{eq1} is surely a moderately ill-posed problem, and LSQR also has found a
best possible regularized solution at semi-convergence. If, on average,
it decays at most as fast as $k^{-\alpha}$
with $\alpha$ no more than one, then \eqref{eq1} is a mildly ill-posed problem.
Notice that the noise $e$ does not deteriorate regularized solutions until
semi-convergence. Therefore, if a hybrid LSQR is used, then it is more reasonable
and also cheaper to apply regularization to projected problems only
from iteration $k^*+1$ onwards
rather than from the very start, i.e., the first iteration, as done in the hybrid
Lanczos bidiagonalization/Tikhonov regularization scheme \cite{berisha}, until
a best possible regularized solution
is found. For a hybrid LSMR, regularization is applied to the projected
problems generated in LSMR in the same way.
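The identification procedure just described can be sketched as a simple model-selection heuristic. The function below is our own hypothetical illustration (its name, threshold, and fitting choices are not from this paper): given a measured sequence $d_k$ standing in for $\alpha_k+\beta_{k+1}$, it compares a least squares fit of $\log d_k$ against $k$ (exponential decay, severe ill-posedness) with a fit against $\log k$ (power-law decay), and reads off the fitted exponent in the latter case.

```python
import numpy as np

def classify_decay(d, threshold=1.0):
    """Heuristic classifier for a positive decaying sequence d_k (standing
    in for alpha_k + beta_{k+1}): compare a least squares fit of log d_k
    against k (exponential decay) with a fit against log k (power-law
    decay), and inspect the fitted exponent in the power-law case."""
    d = np.asarray(d, dtype=float)
    k = np.arange(1, len(d) + 1)
    logd = np.log(d)
    fit_exp = np.polyfit(k, logd, 1)               # log d_k ~ c - r k
    res_exp = np.sum((np.polyval(fit_exp, k) - logd) ** 2)
    fit_pow = np.polyfit(np.log(k), logd, 1)       # log d_k ~ c - a log k
    res_pow = np.sum((np.polyval(fit_pow, np.log(k)) - logd) ** 2)
    if res_exp < res_pow:
        return "severe"            # exponential decay fits best
    a = -fit_pow[0]                # fitted power-law exponent
    return "moderate" if a > threshold else "mild"

ks = np.arange(1.0, 21.0)
assert classify_decay(0.5 ** ks) == "severe"
assert classify_decay(ks ** -2.0) == "moderate"
assert classify_decay(ks ** -0.7) == "mild"
```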
\subsection{The regularization of LSMR}\label{lsmr}
Based on the previous results, we can rigorously analyze the
regularizing effects of LSMR \cite{fong,bjorck15} and
draw definitive conclusions on its regularization for three kinds of
ill-posed problems.
LSMR is mathematically equivalent to MINRES applied to
$A^TAx=A^Tb$, and its iterate $x_k^{lsmr}$ minimizes
$\|A^T(b-Ax)\|$ over $x\in \mathcal{V}_k^R$,
and the residual norm $\|A^T(b-Ax_k^{lsmr})\|$ decreases
monotonically with respect to $k$. In our notation, noting from Algorithm 1
that $Q_{k+1}^TA^TAQ_k=(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k)^T$ with rank $k$,
it is known from Section 2.2 of \cite{fong} that
\begin{equation}\label{lsmrsolution}
x_k^{lsmr}=Q_ky_k^{lsmr}=Q_k(Q_{k+1}^TA^TAQ_k)^{\dagger}Q_{k+1}^TA^Tb,
\end{equation}
which can be efficiently computed and updated.
So LSMR amounts to solving the modified problem that perturbs the matrix $A^TA$
in $A^TAx=A^Tb$ to its rank $k$ approximation $Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T$,
and the iterate $x_k^{lsmr}$ is the minimum-norm least squares solution to
the modified problem
\begin{equation}\label{lsmrrank}
\min\|Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^Tx-A^Tb\|.
\end{equation}
It is direct to verify that the TSVD solution $x_k^{tsvd}$ is exactly
the minimum-norm least squares solution to the modified
problem $\min\|A_k^TA_kx-A^Tb\|$ that replaces
$A^TA$ by its 2-norm best rank $k$ approximation $A_k^TA_k$ in $A^TAx=A^Tb$.
As a result, the regularization problem for LSMR now becomes that of accurately
estimating $\|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|$,
investigating how close it is to $\sigma_{k+1}^2=\|A^TA-A_k^TA_k\|$, and analyzing
whether or not the singular values of $Q_{k+1}^TA^TAQ_k$ approximate
the $k$ large singular values $\sigma_i^2,\ i=1,2,\ldots,k$ of $A^TA$ in natural
order.
\begin{theorem}\label{aprod}
For LSMR and $k=1,2,\ldots,n-1$, we have
\begin{equation}\label{aproderror}
\gamma_k^2\leq \|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|\leq
\sqrt{1+m_k(\gamma_{k-1}/\gamma_k)^2}\gamma_k^2
\end{equation}
with $0\leq m_k<1$.
\end{theorem}
{\em Proof}.
For the orthogonal matrix $Q_n$ generated by Algorithm 1, noticing that
$\alpha_{n+1}=0$, from \eqref{eqmform1} and \eqref{eqmform2} we obtain
$Q_n^TA^TAQ_n=B_n^TB_n$ and
\begin{align*}
\|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|&=
\|Q_n^T(A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T)Q_n\|\\
&=\|B_n^TB_n-(I,\mathbf{0})^T(B_k^TB_k,\alpha_{k+1}
\beta_{k+1}e_k)^T(I,\mathbf{0})\|\\
&=\|F_k\|,
\end{align*}
where $F_k$ is the $(n-k+1)\times (n-k)$ matrix that is generated by deleting
the $(k+1)\times k$ leading principal matrix of the symmetric tridiagonal
matrix $B_n^TB_n$ and the first $k-1$ zero rows and $k$ zero columns
of the resulting matrix. Note that $B_n^TB_n$ has the diagonals
$\alpha_k^2+\beta_{k+1}^2$, $k=1,2,\ldots,n$ and the super- and sub-diagonals
$\alpha_k\beta_k,\ k=2,3,\ldots,n$. We have
\begin{align}\label{fk}
F_k&=\left(\begin{array}{ccccc}
\alpha_{k+1}\beta_{k+1} & & & &\\
\alpha_{k+1}^2+\beta_{k+2}^2 &\alpha_{k+2}\beta_{k+2} &&&\\
\alpha_{k+2}\beta_{k+2} &\alpha_{k+2}^2+\beta_{k+3}^2&\ddots & &\\
& \alpha_{k+3}\beta_{k+3} & \ddots& & \\
& & &\alpha_{n-1}\beta_{n-1}& \\
& & \ddots& \alpha_{n-1}^2+\beta_n^2 &\alpha_n\beta_n\\
& & &\alpha_n\beta_n&\alpha_n^2+\beta_{n+1}^2\\
\end{array}
\right).
\end{align}
According to \eqref{gk1}, it is straightforward to check that
the $(n-k)\times (n-k)$ symmetric tridiagonal matrix $G_k^TG_k$ is
the submatrix of $F_k$ obtained by deleting its first row $\alpha_{k+1}\beta_{k+1}e_1^T$.
Therefore, we have $\|F_k\|\geq\|G_k^TG_k\|=\|G_k\|^2=\gamma_k^2$
with the last equality being from \eqref{invar} and \eqref{gk}, which proves
the lower bound in \eqref{aproderror}.
On the other hand, noting the strict inequalities
in \eqref{alpha} and \eqref{beta}, since
$$
F_k^TF_k=(G_k^TG_k)^2+\alpha_{k+1}^2\beta_{k+1}^2e_1e_1^T,
$$
from \cite[p.98]{wilkinson} we obtain
$$
\|F_k\|^2=\|G_k\|^4+m^{\prime}_k\alpha_{k+1}^2\beta_{k+1}^2\leq \gamma_k^4+
m_k\gamma_{k-1}^2\gamma_k^2
$$
with $0\leq m^{\prime}_k\leq 1$ and $0\leq m_k<m^{\prime}_k$ if
$m^{\prime}_k>0$, from which the upper bound of \eqref{aproderror} follows
directly.
\qquad\endproof
Recall that LSQR is mathematically equivalent to CGLS, which implicitly applies
the CG method to $A^TAx=A^Tb$. By \eqref{xk}, \eqref{eqmform1},
\eqref{eqmform2} and \eqref{Bk}, noting that $P_{k+1}P_{k+1}^Tb=b$,
we obtain the LSQR iterates
\begin{align*}
x^{(k)}&=Q_kB_k^{\dagger}P_{k+1}^Tb=Q_k(B_k^TB_k)^{-1}B_k^TP_{k+1}^Tb\\
&=Q_k(Q_k^TA^TAQ_k)^{-1}Q_k^TA^TP_{k+1}P_{k+1}^Tb=
Q_k(Q_k^TA^TAQ_k)^{-1}Q_k^TA^Tb,
\end{align*}
which is the minimum-norm least squares solution to the modified problem
\begin{equation}\label{lsqrrank}
\min\|Q_kQ_k^TA^TAQ_kQ_k^T x-A^Tb\|
\end{equation}
that replaces $A^TA$ by its rank $k$ approximation $Q_kQ_k^TA^TAQ_kQ_k^T
=Q_kB_k^TB_kQ_k^T$ in $A^TAx=A^Tb$. As a result, in the sense of solving
$A^TAx=A^Tb$, for LSQR, the accuracy of such rank $k$ approximation
is $\|A^TA-Q_kQ_k^TA^TAQ_kQ_k^T\|$. We can establish the following result,
which relates the approximation accuracy of LSMR to
that of LSQR.
\begin{theorem}\label{lsqrmr}
For the rank $k$ approximations to $A^TA$ defined in \eqref{lsmrrank}
and \eqref{lsqrrank} involved in LSMR and LSQR, we have
\begin{align}\label{lsqrmrest}
\|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|&\leq
\|A^TA-Q_kQ_k^TA^TAQ_kQ_k^T\|.
\end{align}
\end{theorem}
{\proof} Similar to the proof of Theorem~\ref{aprod}, it is direct to verify
that
\begin{align*}
\|A^TA-Q_kQ_k^TA^TAQ_kQ_k^T\|&=
\|Q_n^T(A^TA-Q_kQ_k^TA^TAQ_kQ_k^T)Q_n\|\\
&=\|B_n^TB_n-(I,\mathbf{0})^TB_k^TB_k(I,\mathbf{0})\|\\
&=\|F_k^{\prime}\|,
\end{align*}
where $F_k^{\prime}$ is an $(n-k+1)\times (n-k+1)$ matrix whose
first column is $(0,\alpha_{k+1}\beta_{k+1},\mathbf{0})^T$ and last $n-k$
columns are just the matrix $F_k$ defined by \eqref{fk}. Therefore, we have
$\|F_k\|\leq \|F_k^{\prime}\|$, which is just \eqref{lsqrmrest}.
\qquad\endproof
This theorem indicates that, as far as solving $A^TAx=A^Tb$
is concerned, the rank $k$ approximation in LSMR is at least as accurate as
that in LSQR. However, regarding LSQR applied to \eqref{eq1} directly,
Theorem~\ref{main1} is much more attractive since it not only deals with
the rank $k$ approximation to $A$ directly but also estimates the accuracy of
the rank $k$ approximation in terms
of $\sigma_{k+1}$ more compactly and informatively.
\begin{remark}
According to the results and analysis in Section \ref{rankapp},
we have $\gamma_{k-1}/\gamma_k\sim \rho$ for severely ill-posed problems,
and $\gamma_{k-1}/\gamma_k\sim (k/(k-1))^{\alpha}$ at most for
moderately and mildly ill-posed problems.
In comparison with Theorem~\ref{main1}, noting the form of
the lower and upper bounds of \eqref{aproderror}, we see that
$Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T=Q_{k+1}(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k)^TQ_k^T$
as a rank $k$ approximation to $A^TA$
is basically as accurate as $P_{k+1}B_kQ_k^T$ as a rank $k$ approximation
to $A$.
\end{remark}
\begin{remark}
From \cite[p.33]{stewartsun}, the singular values of
$(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k)^T$ are correspondingly bigger
than those of $B_k^TB_k$, i.e., $(\theta_i^{(k)})^2$. Therefore,
the smallest singular value of $(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k)^T$
is no less than $(\theta_k^{(k)})^2$.
As a result, $(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k)^T$ has no
singular values smaller than $\sigma_{k_0+1}^2$ for $k\leq k_0$,
provided that $\theta_k^{(k)}>\sigma_{k_0+1}$ for
$k\leq k_0$. This means that the noise deteriorates the iterates
$x_k^{lsmr}$ no sooner than it does for the LSQR iterates $x^{(k)}$.
\end{remark}
\begin{remark}
A combination of Theorem~\ref{lsqrmr} and the above two remarks means that the
regularizing effects of LSMR are highly competitive with and not inferior to
those of LSQR for each kind of ill-posed problem under consideration. Consequently,
from the theory of LSQR in Section \ref{rankapp}, we conclude that LSMR has
the full regularization for severely or moderately ill-posed problems with
$\rho>1$ or $\alpha>1$ suitably. However, Theorem~\ref{aprod} indicates
that LSMR generally has only the partial regularization for mildly ill-posed
problems since $\gamma_{k_0}$ is generally bigger than $\sigma_{k_0+1}$
considerably; see Remark~\ref{mildre}.
\end{remark}
\begin{remark}
We can define a near best rank $k$ approximation to $A^TA$ similar to
\eqref{near}. Based on \eqref{aproderror}, if, in LSMR, we simply take
$\|A^TA-Q_{k+1}Q_{k+1}^TA^TAQ_kQ_k^T\|=\gamma_k^2$ for ease of presentation,
we can establish an analog of Theorem~\ref{nearapprox} for LSMR. In the
meantime, completely parallel to the proof of Theorem~\ref{ritzvalue}, we can
also derive an analog of Theorem~\ref{ritzvalue} for LSMR, in which
the sufficient conditions on $\eta_k$ that ensure that the singular values
of $(B_k^TB_k,\alpha_{k+1}\beta_{k+1}e_k)^T$
approximate the first $k$ large singular values of $A^TA$ in natural order
are found to be
$$
2+\eta_k^2<\left(\frac{k_0+1}{k_0}\right)^{2\alpha},\ k=1,2,\ldots,k_0
$$
for $A$ with $\sigma_i=\zeta i^{-\alpha}$.
\end{remark}
\begin{remark}
Since LSMR and LSQR have similar regularizing effects for each kind of ill-posed
problem, we can judge the full or partial regularization
of LSMR by inspecting the decay rate of $\alpha_k+\beta_{k+1}$ with respect
to $k$, as has been done for LSQR.
\end{remark}
In Section \ref{rankapp} we have interpreted LSQR
as solving the modified problem that perturbs $A$ in \eqref{eq1}
to its rank $k$ approximation $P_{k+1}B_kQ_k^T$. The
regularization of LSQR then is up to the accuracy of such rank $k$
approximation to $A$ and how the $k$ large singular values of $A$ are
approximated by the nonzero singular values of $B_k$. We will treat CGME
in the same way later. It would be appealing to treat LSMR in this
more direct way as well. From \eqref{lsmrsolution}, LSMR is also equivalent to
computing the minimum-norm least squares solution to the modified
problem
$$
\min\|\left(Q_k(Q_{k+1}^TA^TAQ_k)^{\dagger}
Q_{k+1}^TA^T\right)^{\dagger} x-b\|,
$$
which perturbs $A$ in \eqref{eq1} to its rank $k$ approximation
$\left(Q_k(Q_{k+1}^TA^TAQ_k)^{\dagger}Q_{k+1}^TA^T\right)^{\dagger}$.
However, an analysis of this formulation appears intractable because
there is no explicit way to
remove the two generalized inverses $\dagger$ in this rank $k$
approximation, which makes it impossible to accurately estimate
$\|A-\left(Q_k(Q_{k+1}^TA^TAQ_k)^{\dagger}Q_{k+1}^TA^T\right)^{\dagger}\|$
in terms of $\sigma_{k+1}$.
\subsection{The other rank $k$ approximations to $A$ generated by Lanczos
bidiagonalization and the regularization of CGME}\label{cgme}
By \eqref{eqmform1} and \eqref{eqmform2}, we get
\begin{align}
P_{k+1}P_{k+1}^TA &= P_{k+1}(B_kQ_k^T+\alpha_{k+1}e_{k+1}q_{k+1}^T)\notag\\
&=P_{k+1}(B_k, \alpha_{k+1}e_{k+1})Q_{k+1}^T \notag\\
&=P_{k+1}\bar{B}_kQ_{k+1}^T, \label{leftbidiag}
\end{align}
where $Q_{k+1}=(Q_k,q_{k+1})$, and $\bar{B}_k=(B_k,\alpha_{k+1}e_{k+1})\in
\mathbb{R}^{(k+1)\times (k+1)}$ is lower bidiagonal with rank $k+1$.
Thus, it follows from \cite{hanke95,hanke01,hps09} that
CGME is the CG method applied to $\min\|AA^Ty-b\|$ and $x=A^Ty$, where
the $k$-th iterate $x_k^{cgme}$ minimizes the error $\|A^{\dagger}b-x\|$, i.e.,
$\|x_{naive}-x\|$, over $x\in \mathcal{V}_k^R$, and the error norm
$\|x_{naive}-x_k^{cgme}\|$ decreases monotonically with respect to $k$.
By Lanczos bidiagonalization, it is known from \cite{hanke95,hanke01,hps09} that
$x_k^{cgme}=Q_ky_k^{cgme}$ with $y_k^{cgme}=\|b\|\bar{B}_{k-1}^{-1}e_1^{(k)}$
and the residual norm $\|Ax_k^{cgme}-b\|=\beta_{k+1}|e_k^T y_k^{cgme}|$
with $e_k$ the $k$-th canonical vector of dimension $k$.
Noting that $\|b\|e_1^{(k)}=P_k^Tb$, we have
\begin{equation}\label{cgmesolution}
x_k^{cgme}=Q_k\bar{B}_{k-1}^{-1}P_k^Tb.
\end{equation}
Therefore, $x_k^{cgme}$ is the minimum-norm least squares solution to the
modified problem that replaces $A$ in \eqref{eq1} by its rank $k$
approximation $P_k\bar{B}_{k-1}Q_k^T=P_kP_k^TA$.
\begin{theorem}\label{cgmeappr}
For the rank $k+1$ approximation $P_{k+1}P_{k+1}^TA$ and the rank $k$ approximation
in CGME, we have
\begin{eqnarray}
\|(I-P_{k+1}P_{k+1}^T)A\|
&\leq& \gamma_k\leq\sqrt{1+\eta_k^2}\sigma_{k+1}, \label{leftsub}\\
\gamma_k<\|A-P_k\bar{B}_{k-1}Q_k^T\|&\leq& \gamma_{k-1}. \label{cgmelowup}
\end{eqnarray}
\end{theorem}
{\proof}
Since $P_{k+1}P_{k+1}^T(I-P_{k+1}P_{k+1}^T)=0$, we obtain
\begin{align}
\gamma_k^2 &=\|A-P_{k+1}B_kQ_k^T\|^2 \notag\\
&=\|P_{k+1}P_{k+1}^TA-P_{k+1}B_kQ_k^T+(I-P_{k+1}P_{k+1}^T)A\|^2\notag\\
&=\max_{\|y\|=1}\|\left(
(P_{k+1}P_{k+1}^TA-P_{k+1}B_kQ_k^T)+(I-P_{k+1}P_{k+1}^T)A\right)y\|^2\notag\\
&=\max_{\|y\|=1} \|P_{k+1}P_{k+1}^T(P_{k+1}P_{k+1}^TA-
P_{k+1}B_kQ_k^T)y+(I-P_{k+1}P_{k+1}^T)Ay\|^2 \notag\\
&=\max_{\|y\|=1}\left(\|P_{k+1}P_{k+1}^T(P_{k+1}P_{k+1}^TA-
P_{k+1}B_kQ_k^T)y\|^2+\|(I-P_{k+1}P_{k+1}^T)Ay\|^2\right)\notag\\
&=\max_{\|y\|=1}\left(\|P_{k+1}(P_{k+1}^TA-B_kQ_k^T)y\|^2+
\|(I-P_{k+1}P_{k+1}^T)Ay\|^2\right)\notag\\
&=\max_{\|y\|=1}\left(\|(P_{k+1}^TA-B_kQ_k^T)y\|^2+
\|(I-P_{k+1}P_{k+1}^T)Ay\|^2\right)\notag\\
&\geq \max_{\|y\|=1} \|(I-P_{k+1}P_{k+1}^T)Ay\|^2 \notag\\
&=\|(I-P_{k+1}P_{k+1}^T)A\|^2,\notag
\end{align}
which, together with \eqref{final}, establishes \eqref{leftsub}.
From \eqref{leftbidiag} and \eqref{leftsub} we obtain
\begin{equation}\label{leftapp}
\|(I-P_{k+1}P_{k+1}^T)A\|=\|A-P_{k+1}\bar{B}_kQ_{k+1}^T\|\leq \gamma_k.
\end{equation}
The upper bound of \eqref{cgmelowup} follows directly from \eqref{leftapp} and
\eqref{gammamono} by noting that
$$
\|A-P_k\bar{B}_{k-1}Q_k^T\|=\|(I-P_kP_k^T)A\|\leq\gamma_{k-1}.
$$
Following the proof of Theorem~\ref{main2}, we obtain
$$
\|A-P_k\bar{B}_{k-1}Q_k^T\|=\|(\beta_{k+1}e_1,G_k)\|
$$
with $G_k$ defined by \eqref{gk1}. It is straightforward to justify
that the singular values of $G_k\in \mathbb{R}^{(n-k+1)\times (n-k)}$ strictly
interlace those of $(\beta_{k+1} e_1,G_k)\in \mathbb{R}^{(n-k+1)\times (n-k+1)}$
by noting that $(\beta_{k+1}e_1,G_k)^T (\beta_{k+1}e_1,G_k)$ is
an {\em unreduced} symmetric tridiagonal matrix, from which and
$\|G_k\|=\gamma_k$ (cf. \eqref{invar} and \eqref{gk}) the
lower bound of \eqref{cgmelowup} follows.
\qquad\endproof
By the definition \eqref{gammak} of $\gamma_k$, this theorem indicates that
$P_k\bar{B}_{k-1}Q_k^T$ is definitely a less accurate
rank $k$ approximation to $A$ than $P_{k+1}B_kQ_k^T$ in LSQR.
Moreover, a combination of it and Theorem~\ref{main1} indicates that
$P_k\bar{B}_{k-1}Q_k^T$ may never be a near best rank $k$ approximation
to $A$ even for severely and moderately ill-posed problems because,
unlike LSQR, there do not exist sufficient conditions on
$\rho>1$ and $\alpha>1$ to meet this requirement.
For mildly ill-posed problems, CGME generally has only the partial
regularization since $\gamma_k$ has been proved to be generally bigger than
$\sigma_{k+1}$ substantially and is rarely close to $\sigma_{k+1}$.
Next we consider the other issue, which is equally as important as
the rank $k$ approximation in CGME: the behavior
of the singular values of $\bar{B}_{k-1}$, denoted by
$\bar{\theta}_i^{(k-1)},\ i=1,2,\ldots,k$
and labeled in decreasing order. Observe that
$\bar{B}_{k-1}$ consists of the first $k$ rows of $B_k$. Since $B_k B_k^T$ is
a $(k+1)\times (k+1)$ unreduced symmetric tridiagonal matrix, whose eigenvalues
are $(\theta_1^{(k)})^2,(\theta_2^{(k)})^2,\ldots,(\theta_k^{(k)})^2,0$,
and $\bar{B}_{k-1}\bar{B}_{k-1}^T$ is the $k\times k$ leading principal
submatrix of $B_k B_k^T$, whose eigenvalues are
$(\bar{\theta}_1^{(k-1)})^2,(\bar{\theta}_2^{(k-1)})^2, \ldots,
(\bar{\theta}_k^{(k-1)})^2$, by the strict interlacing property of eigenvalues,
we obtain
\begin{equation}\label{secondinter}
\theta_1^{(k)}>\bar{\theta}_1^{(k-1)}>\theta_2^{(k)}>
\bar{\theta}_2^{(k-1)}> \cdots >\theta_k^{(k)}> \bar{\theta}_k^{(k-1)}>0, \
k=1,2,\ldots,n.
\end{equation}
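The strict interlacing \eqref{secondinter} is easy to confirm numerically. The sketch below (our own illustration) builds a random unreduced lower bidiagonal $B_k$, takes $\bar{B}_{k-1}$ as its first $k$ rows, and checks $\theta_i^{(k)}>\bar{\theta}_i^{(k-1)}>\theta_{i+1}^{(k)}$ with the convention $\theta_{k+1}^{(k)}=0$.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 8
# Unreduced (k+1) x k lower bidiagonal B_k with positive entries.
B = np.zeros((k + 1, k))
B[np.arange(k), np.arange(k)] = rng.uniform(0.5, 2.0, k)         # alpha_1..alpha_k
B[np.arange(1, k + 1), np.arange(k)] = rng.uniform(0.5, 2.0, k)  # beta_2..beta_{k+1}

theta = np.linalg.svd(B, compute_uv=False)             # theta_1^{(k)} > ... > theta_k^{(k)}
theta_bar = np.linalg.svd(B[:k, :], compute_uv=False)  # singular values of bar{B}_{k-1}

for i in range(k):
    upper = theta[i]
    lower = theta[i + 1] if i + 1 < k else 0.0
    assert upper > theta_bar[i] > lower
```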
On the other hand, note that $\alpha_{n+1}=0$ and
$\bar{\theta}_i^{(n)}=\theta_i^{(n)}=\sigma_i,\ i=1,2,\ldots,n$, i.e.,
the singular values of $\bar{B}_n$ are $\sigma_1,\sigma_2,\ldots,\sigma_n$
and zero, which is denoted by the dummy $\sigma_{n+1}=0$. Since
$\bar{\theta}_{n+1}^{(n)}=\sigma_{n+1}=0$ and the first $k$ rows of $\bar{B}_n$
are $(\bar{B}_{k-1},\mathbf{0})\in \mathbb{R}^{k\times (n+1)}$, whose singular
values are $\bar{\theta}_1^{(k-1)},\ldots,\bar{\theta}_k^{(k-1)}$, by
applying the strict interlacing property
of singular values to $(\bar{B}_{k-1},\mathbf{0})$ and $\bar{B}_n$, for
$k=1,2,\ldots,n-1$ we have
\begin{equation}\label{interbar}
\sigma_{n+1-k+i}<\bar{\theta}_i^{(k-1)}<\sigma_i,\ i=1,2,\ldots,k,
\end{equation}
from which it follows that
\begin{equation}\label{thetak}
0<\bar{\theta}_k^{(k-1)}<\sigma_k.
\end{equation}
Relations \eqref{secondinter} and \eqref{thetak} indicate that,
unlike $\theta_k^{(k)}$ that lies between $\sigma_{k+1}$ and $\sigma_k$
and approximates $\sigma_k$ for severely or moderately ill-posed problems
with $\rho>1$ or $\alpha>1$ suitably (cf. \eqref{error2}), the
lower bound for $\bar{\theta}_k^{(k-1)}$ is simply zero,
and there does not exist a better one for it. This means that
$\bar{\theta}_k^{(k-1)}$ may be much smaller than $\sigma_{k_0+1}$ and actually
it can be arbitrarily small, independently of the degree $\rho$ or $\alpha$ of
ill-posedness. In other words, the size of $\rho$ or $\alpha$ does not have any
intrinsic effects on the lower bound of $\bar{\theta}_k^{(k-1)}$, and
one thus cannot control $\bar{\theta}_k^{(k-1)}$ from below by choosing $\rho$ or
$\alpha$. In the meantime, \eqref{secondinter} tells us that
$\bar{\theta}_k^{(k-1)}<\theta_k^{(k)}$. These facts, together with
Theorem~\ref{cgmeappr}, show that the regularization of
CGME is inferior to that of LSQR and LSMR for each kind of problem.
On the one hand, they mean that
CGME has the partial regularization for mildly ill-posed problems; on the
other hand, the regularizing effects of CGME have indeterminacy
for severely and moderately ill-posed problems, that is,
it may or may not have the full regularization for these
two kinds of problems. Clearly, CGME has the full regularization
only when $P_k\bar{B}_{k-1}Q_k^T$
is as accurate as the rank $k$ approximation $P_{k+1}B_kQ_k^T$ and
$\bar{\theta}_k^{(k-1)}\approx \theta_k^{(k)},\
k=1,2,\ldots,k_0$ for these two kinds of problems with $\rho>1$ and
$\alpha>1$ considerably, but unfortunately there is no
guarantee that these requirements are satisfied mathematically.
The above analysis indicates that CGME itself is not reliable and cannot be
trusted to compute best possible regularized solutions.
In principle, one can detect the full or partial regularization
of CGME as follows: One first exploits the decay rate of
$\alpha_k+\beta_{k+1}$ to identify the degree of ill-posedness of \eqref{eq1}.
If \eqref{eq1} is mildly ill-posed, CGME has only
the partial regularization. If \eqref{eq1} is recognized as severely or moderately
ill-posed, one then needs to do two things to identify the regularization
of CGME: check if $\|A-P_k\bar{B}_{k-1}Q_k^T\|
\approx \|A-P_{k+1}B_kQ_k^T\|$, and compute the singular values of both $B_k$ and
$\bar{B}_{k-1}$ and check if $\bar{\theta}_k^{(k-1)}\approx \theta_k^{(k)}$.
If both hold, CGME has the full regularization; if either of them does not hold,
it has only the partial regularization.
We can informally deduce more features of CGME. For the LSQR iterate $x^{(k)}$,
note that the optimality requirement of CGME means that
$\|x_{naive}-x_k^{cgme}\|\leq \|x_{naive}-x^{(k)}\|$. Since
$$
\|x_{naive}-x_k^{cgme}\|=\|x_{naive}-x_{true}+x_{true}-x_k^{cgme}\|
\leq \|x_{naive}-x_{true}\|+\|x_{true}-x_k^{cgme}\|
$$
and
$$
\|x_{naive}-x^{(k)}\|=\|x_{naive}-x_{true}+x_{true}-x^{(k)}\|
\leq \|x_{naive}-x_{true}\|+\|x_{true}-x^{(k)}\|
$$
with the first terms on the right-hand sides being the same constant,
roughly speaking, we should have
\begin{equation}\label{cgmelsqr}
\|x_{true}-x_k^{cgme}\|\leq \|x_{true}-x^{(k)}\|
\end{equation}
until the semi-convergence of CGME. Keep in mind that
the regularization of CGME is inferior to, or at most
as good as, that of LSQR for each kind of ill-posed problem.
Both $\|x_{true}-x_k^{cgme}\|$ and
$\|x_{true}-x^{(k)}\|$ first decrease until their respective
semi-convergence and then become increasingly large as $k$ increases.
As a result, we deduce that (i) $x_k^{cgme}$ is at least as accurate as
$x^{(k)}$ until the semi-convergence of CGME and (ii)
CGME reaches semi-convergence no later than LSQR;
otherwise, \eqref{cgmelsqr} indicates
that the optimal regularized solution by CGME at semi-convergence
would be more accurate than that by LSQR at semi-convergence, which
contradicts the property that LSQR has better
regularization than CGME. The experiments in \cite{hanke01} justify this
assertion; see Figure 3.1 and Figure 5.2 there.
Next let us return to \eqref{leftapp} and show
how to extract a rank $k$ approximation to $A$ from the
rank $k+1$ approximation $P_{k+1}\bar{B}_kQ_{k+1}^T$ as well as possible.
\begin{theorem}\label{approx}
Let $\bar{C}_k$ be the best rank $k$ approximation to $\bar{B}_k$ with respect
to the 2-norm. Then
\begin{align}
\|A-P_{k+1}\bar{C}_kQ_{k+1}^T\|&\leq \sigma_{k+1}+\gamma_k,\label{lowrank}\\
\|A-P_{k+1}\bar{C}_kQ_{k+1}^T\|&\leq \bar{\theta}_{k+1}^{(k)}+\gamma_k,
\label{better}
\end{align}
where $\bar{\theta}_{k+1}^{(k)}$ is the smallest singular value of $\bar{B}_k$.
\end{theorem}
{\em Proof}. Write $A-P_{k+1}\bar{C}_kQ_{k+1}^T=A-P_{k+1}\bar{B}_kQ_{k+1}^T+
P_{k+1}(\bar{B}_k-\bar{C}_k)Q_{k+1}^T$. Then from \eqref{leftbidiag} we obtain
\begin{align}
\|A-P_{k+1}\bar{C}_kQ_{k+1}^T\| & \leq \|A-P_{k+1}\bar{B}_kQ_{k+1}^T\|+
\|P_{k+1}(\bar{B}_k-\bar{C}_k)Q_{k+1}^T\| \label{barbk}\\
&= \|A-P_{k+1}\bar{B}_kQ_{k+1}^T\|+
\|P_{k+1}P_{k+1}^TA-P_{k+1}\bar{C}_kQ_{k+1}^T\|. \label{decom2}
\end{align}
By the assumption on $\bar{C}_k$ and \eqref{leftbidiag},
$P_{k+1}\bar{C}_kQ_{k+1}^T$ is the best rank $k$ approximation to
$P_{k+1}\bar{B}_kQ_{k+1}^T=P_{k+1}P_{k+1}^TA$. Keep in mind that $A_k$ is
the best rank $k$ approximation to $A$. Since $P_{k+1}P_{k+1}^TA_k$ is a rank
$k$ approximation to $P_{k+1}P_{k+1}^TA$, we get
\begin{align*}
\|P_{k+1}P_{k+1}^TA-P_{k+1}\bar{C}_kQ_{k+1}^T\| &\leq \|P_{k+1}P_{k+1}^T(A-A_k)\|\\
&\leq \|A-A_k\|=\sigma_{k+1},
\end{align*}
from which, \eqref{leftapp} and \eqref{decom2} it follows
that \eqref{lowrank} holds.
Since $P_{k+1}$ and $Q_{k+1}$ have orthonormal columns, by the invariance of the 2-norm, we
obtain
$$
\|P_{k+1}(\bar{B}_k-\bar{C}_k)Q_{k+1}^T\|=\|\bar{B}_k-\bar{C}_k\|
=\bar{\theta}_{k+1}^{(k)},
$$
from which and \eqref{barbk} it follows that \eqref{better} holds.
\qquad\endproof
We point out that \eqref{lowrank} may be conservative since
we have bounded $\|P_{k+1}(\bar{B}_k-\bar{C}_k)Q_{k+1}^T\|$
twice in succession to obtain the bound $\sigma_{k+1}$, which can be a considerable
overestimate. In comparison with \eqref{gammak} and \eqref{final},
the bound \eqref{lowrank} indicates that $P_{k+1}\bar{C}_kQ_{k+1}^T$
may not be as accurate as $P_{k+1}B_kQ_k^T$, but
\eqref{better} illustrates that $P_{k+1}\bar{C}_kQ_{k+1}^T$
can be as accurate as $P_{k+1}B_kQ_k^T$ because
$\bar{\theta}_{k+1}^{(k)}<\theta_{k+1}^{(k+1)}<\sigma_{k+1}$
from \eqref{secondinter} and \eqref{error2}. Moreover,
as we have explained, $\bar{\theta}_{k+1}^{(k)}$ can be arbitrarily small.
If so, $\bar{\theta}_{k+1}^{(k)}$ is negligible in \eqref{better} and
$P_{k+1}\bar{C}_kQ_{k+1}^T$ is at least as accurate as
$P_{k+1}B_kQ_k^T$.
We now present a new but informal analysis to show why
$P_{k+1}\bar{C}_kQ_{k+1}^T$ may be at least as accurate as
$P_{k+1}B_kQ_k^T$ as a rank $k$ approximation
to $A$. Keep in mind that $\bar{\theta}_i^{(k)},\
i=1,2,\ldots,k+1$ are the singular values of $\bar{B}_k$.
Then the singular values of $\bar{C}_k$ are
$\bar{\theta}_i^{(k)},\ i=1,2,\ldots,k$. Since $\alpha_{k+1}>0$ for
$k\leq n-1$, applying the strict interlacing property of singular
values to $B_k$ and $\bar{B}_k$, we have
\begin{equation}\label{bkbarbk}
\bar{\theta}_1^{(k)}>\theta_1^{(k)}>\bar{\theta}_2^{(k)}>\cdots>
\bar{\theta}_k^{(k)}>\theta_k^{(k)}>\bar{\theta}_{k+1}^{(k)}>0, \
k=1,2,\ldots,n-1.
\end{equation}
The above relationships, together with \eqref{ritzapp}, prove that
\begin{equation}\label{ck}
\sigma_i-\bar{\theta}_i^{(k)}<\sigma_i-\theta_i^{(k)}\leq \gamma_k,\,
i=1,2,\ldots,k,
\end{equation}
that is, the $\bar{\theta}_i^{(k)}$ are more accurate than
$\theta_i^{(k)}$ as approximations to $\sigma_i,\ i=1,2,\ldots,k$.
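The strict interlacing relations \eqref{bkbarbk} are easy to check numerically. The following Python sketch (assuming NumPy; the bidiagonal entries $\alpha_i$, $\beta_i$ are randomly generated and purely illustrative) builds a lower bidiagonal $B_k$ and the augmented square $\bar{B}_k$ and verifies the interlacing of their singular values:

```python
import numpy as np

# Illustrative lower bidiagonal B_k (size (k+1) x k) with random positive
# alpha_i (diagonal) and beta_i (subdiagonal), and the augmented square
# matrix bar{B}_k obtained by appending the column alpha_{k+1} e_{k+1}.
rng = np.random.default_rng(0)
k = 5
alphas = rng.uniform(0.5, 2.0, k + 1)
betas = rng.uniform(0.5, 2.0, k)
Bk = np.zeros((k + 1, k))
for i in range(k):
    Bk[i, i] = alphas[i]
    Bk[i + 1, i] = betas[i]
Bbar = np.hstack([Bk, np.zeros((k + 1, 1))])
Bbar[k, k] = alphas[k]

theta = np.linalg.svd(Bk, compute_uv=False)        # theta_1 > ... > theta_k
theta_bar = np.linalg.svd(Bbar, compute_uv=False)  # bar-theta_1 > ... > bar-theta_{k+1}

# Strict interlacing: bar-theta_i > theta_i > bar-theta_{i+1}, i = 1,...,k.
for i in range(k):
    assert theta_bar[i] > theta[i] > theta_bar[i + 1]
assert theta_bar[k] > 0
```

Since both matrices are unreduced bidiagonal matrices with positive entries, the interlacing is strict, which the assertions confirm for this random instance.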
By the standard perturbation theory, note from \eqref{leftapp} that
$$
\sigma_i-\bar{\theta}_i^{(k)}\leq \|A-P_{k+1}\bar{B}_kQ_{k+1}^T\|\leq \gamma_k,
\,i=1,2,\ldots,k+1,
$$
while the singular value differences between $A$ and $P_{k+1}\bar{C}_kQ_{k+1}^T$
are $\sigma_i-\bar{\theta}_i^{(k)},\ i=1,2,\ldots,k$ and
$\sigma_i,\ i=k+1,\ldots,n$, all of which, from \eqref{ck} and \eqref{final},
are no more than $\gamma_k$. Based on these rigorous facts and
the relationship between $C_k$ and $\bar{B}_k$,
it is possible that $\|A-P_{k+1}\bar{C}_kQ_{k+1}^T\|\leq \gamma_k$,
and if it is so, then by definition \eqref{gammak} $P_{k+1}\bar{C}_kQ_{k+1}^T$
is a more accurate rank $k$ approximation to $A$ than $P_{k+1}B_kQ_k^T$ is.
\subsection{A comparison with standard randomized algorithms and RRQR
factorizations}\label{rrqr}
We compare the rank $k$ approximations $P_{k+1}B_kQ_k^T$ and
$P_{k+1}\bar{C}_kQ_{k+1}^T$ obtained by Lanczos bidiagonalization with those
computed by some standard randomized algorithms and RRQR factorizations, and
demonstrate that the former are much more accurate than the latter
for severely and moderately ill-posed problems.
Recall \eqref{gamma2}, and compare \eqref{final}, \eqref{leftsub} and \eqref{lowrank}
or \eqref{better} with the corresponding results
(1.9), (5.6), (6.3) and Theorem 9.3 in \cite{halko11}
for standard randomized algorithms and those on the strong
RRQR factorization \cite{gu96}, where the
constants in front of $\sigma_{k+1}$ are like $\sqrt{kn}$
and $\sqrt{1+4k(n-k)}$, respectively, which are far bigger than one. Within
the framework of the RRQR factorizations, it is known from \cite{hong92} that
the optimal factor of such kind is $\sqrt{k(n-k)+\min\{k,n-k\}}$
but to find corresponding permutations is
an NP-hard problem, whose cost increases exponentially
with $n$; see also \cite[p.298]{bjorck15}. Clearly, the strong
RRQR factorizations are near-optimal within this framework,
and they are well suited to finding a high quality rank $k$ approximation
to a matrix whose $k$ large singular values are much bigger than
its $n-k$ small ones.
Unfortunately, the standard randomized algorithms and RRQR factorizations are
not well suited to solving ill-posed problems: they have regularizing
effects but, in general, cannot find best possible regularized solutions.
The reason is that, since the singular values exhibit no substantial gaps,
the RRQR factorization techniques can hardly find a near best rank $k$
approximation to $A$ in the sense of \eqref{near}, which is vital for
solving \eqref{eq1} to find a best possible regularized solution.
In contrast, for a severely or moderately ill-posed problem with $\rho>1$
or $\alpha>1$ suitably, the rank $k$ approximations $P_{k+1}B_kQ_k^T$ are near
best ones for $k=1,2,\ldots, k_0$ and no singular value smaller than $\sigma_{k_0+1}$
appears. Besides, it is easy to check that the $k$-step
Lanczos bidiagonalization costs fewer flops than the standard randomized
algorithms do for a
sparse $A$, and it is more efficient than the strong RRQR factorization for
a dense $A$, whose cost includes $\mathcal{O}(mnk)$ flops plus the overhead of
searching for permutations.
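To illustrate the claimed near-best rank $k$ approximations, the following Python sketch (assuming NumPy) builds a synthetic severely ill-posed matrix with $\sigma_i=2^{-i}$, a right-hand side obeying a Picard-type decay, and a simplified Golub--Kahan (Lanczos) bidiagonalization with full reorthogonalization; all sizes, the decay model and the constants in the assertions are illustrative assumptions, not statements from our theorems:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
sigma = 2.0 ** -np.arange(1, n + 1)       # severely ill-posed model, rho = 2
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(sigma) @ V.T
b = U @ sigma ** 1.5                      # Picard-type decay |u_i^T b| = sigma_i^{1.5}

def lanczos_bidiag(A, b, k):
    """k-step Golub-Kahan bidiagonalization with full reorthogonalization."""
    m, n = A.shape
    P = np.zeros((m, k + 1)); Q = np.zeros((n, k)); B = np.zeros((k + 1, k))
    P[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        r = A.T @ P[:, j]
        r -= Q[:, :j] @ (Q[:, :j].T @ r)          # reorthogonalize against v's
        B[j, j] = np.linalg.norm(r); Q[:, j] = r / B[j, j]
        p = A @ Q[:, j]
        p -= P[:, :j + 1] @ (P[:, :j + 1].T @ p)  # reorthogonalize against u's
        B[j + 1, j] = np.linalg.norm(p); P[:, j + 1] = p / B[j + 1, j]
    return P, B, Q

k = 8
P, B, Q = lanczos_bidiag(A, b, k)
gamma_k = np.linalg.norm(A - P @ B @ Q.T, 2)
# gamma_k >= sigma_{k+1} always; here it stays within a small multiple of
# sigma_{k+1}, i.e., a near best rank k approximation.
assert 0.99 * sigma[k] <= gamma_k < 5 * sigma[k]
```

The observed ratio $\gamma_k/\sigma_{k+1}$ is a small constant for this example, in contrast to the $\sqrt{kn}$-type factors in the randomized and RRQR bounds discussed above.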
For further developments and recent advances on
randomized algorithms, we refer to Gu's work \cite{gu2015},
where he has considered randomized
algorithms within the subspace iteration framework proposed in
\cite{halko11}, presented a number of
new results and improved the error bounds for
the rank $k$ approximations that are iteratively extracted. Such
approaches may be promising for solving ill-posed problems.
\section{The filters $f_i^{(k)}$ and
a comparison of LSQR and the TSVD method}\label{compare}
Based on Proposition~\ref{help}, exploiting Theorem~\ref{main1},
Theorem~\ref{ritzvalue} and
Theorem~\ref{main2}, we present the following results, which,
from the viewpoint of Tikhonov regularization, explain why
LSQR has the full regularization for severely and moderately ill-posed problems
with $\rho>1$ and $\alpha>1$ suitably and why it generally has only the partial
regularization for mildly ill-posed problems.
\begin{theorem}\label{main3}
For the severely or moderately ill-posed problems with $\rho>1$ or $\alpha>1$,
under the assumptions of Theorem~\ref{ritzvalue}, let
$f_i^{(k)}$ be defined by \eqref{filter}. Then for $k=1,2,\ldots,k_0$ we have
\begin{align}
|f_i^{(k)}-1| &\approx \frac{2\sigma_{k+1}}
{\sigma_i}\left|\prod\limits_{j=1,j\not= i}^{k}
\left(1-\left(\frac{\sigma_i}{\sigma_j}\right)^2\right)\right|,
\ i=1,2,\ldots,k, \label{fi1}\\
f_i^{(k)}&\approx \sigma_i^2\sum\limits_{j=1}^{k}\frac{1}{\sigma_j^2},
\ i=k+1,k+2,\ldots,n. \label{fi2}
\end{align}
\end{theorem}
{\em Proof}.
For $k=1,2,\ldots,k_0$,
it follows from \eqref{filter} that
\begin{equation*}
|f_i^{(k)}-1|=\left|\frac{(\theta_i^{(k)})^2-\sigma_i^2}
{(\theta_i^{(k)})^2}
\prod\limits_{j=1,j\not=i}^{k}\frac{(\theta_j^{(k)})^2-\sigma_i^2}
{(\theta_j^{(k)})^2}\right|, \ i=1,2,\ldots,k.
\end{equation*}
To simplify the presentation and illuminate the essence,
for the severely and moderately ill-posed problems with $\rho>1$ and
$\alpha>1$ suitably, we simply replace $\sqrt{1+\eta_k^2}$ in \eqref{error}
by one, and we replace the denominator of
$\frac{(\theta_i^{(k)})^2-\sigma_i^2}{(\theta_i^{(k)})^2}$
by $\sigma_i^2$. Then by \eqref{error} we approximately have
$$
\sigma_i^2-(\theta_i^{(k)})^2=(\sigma_i-\theta_i^{(k)})
(\sigma_i+\theta_i^{(k)})\approx 2\sigma_{k+1}\sigma_i.
$$
For $j=1,2,\ldots,k$ with $j\not= i$, replace $\theta_j^{(k)}$
approximately by $\sigma_j$. Then \eqref{fi1} follows.
By \eqref{error2}, since $\theta_{k}^{(k)}>
\sigma_i$ for $i=k+1,\ldots,n$,
the factors $\sigma_i/\theta_j^{(k)}$ are smaller than one and decay to
zero with increasing $i$ for each fixed $j\leq k$. Therefore,
for $i=k+1,\ldots,n$ we get
\begin{align*}
f_i^{(k)}&=1-\prod_{j=1}^{k} \left(1-\left(\frac{\sigma_i}{\theta_j^{(k)}}
\right)^2\right) \\
&=1-\left(1-\sum_{j=1}^{k} \left(\frac{\sigma_i}{\theta_j^{(k)}}\right)^2
\right)+\mathcal{O} \left(\frac{\sigma_i^4}{\left(\theta_{k-1}^{(k)}
\theta_{k}^{(k)}\right)^2}\right)\\
&=\sum_{j=1}^{k} \left(\frac{\sigma_i}{\theta_j^{(k)}}\right)^2
+\mathcal{O} \left(\frac{\sigma_i^4}{\left(\theta_{k-1}^{(k)}
\theta_{k}^{(k)}\right)^2}\right).
\end{align*}
Replace $\theta_j^{(k)}$ by its upper bound $\sigma_j,\ j=1,2,\ldots,k$ in
the above, and note that the second term is of higher order relative to
the first term. Then \eqref{fi2} follows.
\qquad\endproof
\begin{remark}
For $k=1,2,\ldots,k_0$,
since $\sigma_i,\, i=1,2,\ldots,k$ are dominant singular values, the factors
$$
\left|\prod\limits_{j=1,j\not= i}^{k}
\left(1-\left(\frac{\sigma_i}{\sigma_j}\right)^2\right)\right|,
\ i=1,2,\ldots,k,
$$
are modest. Consequently, \eqref{fi1} indicates that
the $f_i^{(k)}\approx 1$ with errors
$\mathcal{O}(\sigma_{k+1}/\sigma_i)$ for
$i=1,2,\ldots,k$, while
\eqref{fi2} shows that the $f_i^{(k)}$ are at least as small as
$\sigma_i^2/\sigma_{k}^2$ for $i=k+1,\ldots,n$
and decrease with increasing $i$.
\end{remark}
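The behavior \eqref{fi2} described in the remark can be illustrated numerically. In the following Python sketch (assuming NumPy), the Ritz values $\theta_j^{(k)}$ are replaced by their upper bounds $\sigma_j$, as in the proof, and $\sigma_i=2^{-i}$ models a severely ill-posed spectrum; both choices are illustrative assumptions:

```python
import numpy as np

n, k = 24, 6
sigma = 2.0 ** -np.arange(1, n + 1)           # model spectrum sigma_i = 2^{-i}

# f_i^{(k)} = 1 - prod_{j=1}^{k} (1 - (sigma_i/theta_j)^2) for i = k+1,...,n,
# with theta_j replaced by its upper bound sigma_j as in the proof.
f = np.array([1.0 - np.prod(1.0 - (sigma[i] / sigma[:k]) ** 2)
              for i in range(k, n)])
# First-order approximation from (fi2): sigma_i^2 * sum_{j=1}^{k} 1/sigma_j^2.
approx = sigma[k:] ** 2 * np.sum(1.0 / sigma[:k] ** 2)

assert np.all(f < 0.5)                        # small filter factors for i > k
assert np.all(np.diff(f) < 0)                 # strictly decreasing with i
assert np.all((0.5 * approx < f) & (f < 2 * approx))
```

The filter factors for $i>k$ are small, decrease with $i$, and track the first-order approximation \eqref{fi2} within a modest factor for this model spectrum.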
\begin{remark}
For mildly ill-posed problems and $k=1,2,\ldots,k_0$,
as we have shown in Remark~\ref{appear}
and Section \ref{rankapp}, it is generally the case that
$\theta_{k}^{(k)}<\sigma_{k_0+1}$. Suppose $\theta_{k}^{(k)}>\sigma_{j^*}$
with $j^*>k_0+1$ the smallest such integer. Then we have shown in the paragraph
after Proposition~\ref{help} that $f_i^{(k)}\geq 1, \ i=k_0+1,\ldots, j^*-1$.
As a result, LSQR has only the partial regularization.
\end{remark}
Recall that $\Delta_k=(\delta_1,\delta_2,\ldots,\delta_k)$,
and define $U_k^{o}=(u_{k+1},\ldots,u_n)$.
In terms of \eqref{ideal3}--\eqref{ideal},
Hansen \cite[p.151,155, Theorems 6.4.1-2]{hansen98}
presents the following bounds
\begin{equation}\label{ferror1}
|f_i^{(k)}-1|\leq \frac{\sigma_{k+1}}{\sigma_i}
\frac{\|(U_k^{o})^Tb\|+\sigma_{k+1}\|\Delta_k\|\|x^{(k)}\|}
{| u_i^Tb |}\|\delta_i\|,\ i=1,2,\ldots,k,
\end{equation}
\begin{equation}\label{deltainf}
\|\delta_i\|_{\infty}\leq \frac{\sigma_{k+1}}{\sigma_i}
\frac{| u_{k+1}^T b|}{| u_i^T b|} |L_i^{(k)}(0) |,\ i=1,2,\ldots,k,
\end{equation}
and
\begin{equation}\label{ferror2}
0 \leq f_i^{(k)}\leq \frac{\sigma_i^2}{\sigma_k^2} |L_k^{(k)}(0)|
\sum_{j=1}^k f_j^{(k)},\ \ i=k+1,\ldots,n,
\end{equation}
where $\|\cdot\|_{\infty}$ is the infinity norm of a vector.
We now address a few points on the bounds \eqref{ferror1} and \eqref{ferror2}.
First, there have been no estimates for $\|\Delta_k\|$ and $|L_i^{(k)}(0)|,\
i=1,2,\ldots,k$; second, what we need is $\|\delta_i\|$ rather than
$\|\delta_i\|_{\infty}$, and as is seen from the proof in
\cite[p.151]{hansen98}, it is relatively easy to obtain the accurate bound
\eqref{deltainf} for $\|\delta_i\|_{\infty}$, whereas it is
hard to derive an accurate one for $\|\delta_i\|$.
Lacking accurate estimates, it is unclear how small or large
the bounds \eqref{ferror1} and \eqref{ferror2} are. Moreover, as will
soon become apparent, the
factor $\sigma_{k+1}\|(U_k^{o})^T b\|$ in the numerator of \eqref{ferror1}
may be far too crude an overestimate, so that the bound \eqref{ferror1} is
pessimistic and useless for estimating $|f_i^{(k)}-1|,\
i=1,2,\ldots,k$ and $f_i^{(k)},\ i=k+1,\ldots,n$.
Let us have a closer look at these points.
Obviously, exploiting \eqref{deltainf}, we can only obtain the bounds
\begin{align*}
\|\delta_i\|_{\infty} &\leq \|\delta_i\|\leq \sqrt{n-k}\|\delta_i\|_{\infty},\\
\max_{i=1,2,\ldots,k}\|\delta_i\|&\leq \|\Delta_k\|\leq
\sqrt{k}\max_{i=1,2,\ldots,k}\|\delta_i\|,
\end{align*}
from which it follows that
$$
\max_{i=1,2,\ldots,k}\|\delta_i\|_{\infty}\leq \|\Delta_k\|\leq
\sqrt{k(n-k)}\max_{i=1,2,\ldots,k}\|\delta_i\|_{\infty}.
$$
As a result, the estimates for both $\|\delta_i\|$ and $\|\Delta_k\|$ are too
crude for large $n$ and small $k$. Indeed, as we have seen previously,
deriving accurate estimates for them is much more involved and complicated.
In Theorem~\ref{thm3}, we have derived accurate estimates for
$\|\delta_i\|,\ i=1,2,\ldots,k$; see \eqref{columndelta}, \eqref{columndelta1}
for severely ill-posed problems and \eqref{columndelta2}, \eqref{columnnorm}
for moderately and mildly ill-posed problems. Theorems~\ref{thm2}--\ref{moderate}
have given sharp estimates for $\|\Delta_k\|$ for three kinds of problems,
respectively.
The factor $\sigma_{k+1}\|(U_k^{o})^T b\|$ itself in the numerator
of \eqref{ferror1}, though simple and elegant in form, does not give clear and
quantitative information on its size. As a matter of fact, one must
analyze its size carefully for the two cases
$k\leq k_0$ and $k>k_0$, respectively,
for each kind of ill-posed problem; see the discrete Picard condition
\eqref{picard} and \eqref{picard1}. For each of these two cases, using the proof
approach of Theorems~\ref{thm2}, \ref{moderate} and \ref{thm3},
we can obtain accurate estimates for $\|(U_k^{o})^T b\|=
\left(\sum_{j=k+1}^n |u_j^T b|^2\right)^{1/2}$ for three kinds of
ill-posed problems, respectively. However, the point is that the
factor $\sigma_{k+1}\|(U_k^{o})^T b\|$ results from a substantial
amplification in the derivation. It is seen from the last line
of \cite[p.155]{hansen98} that this factor results from simply bounding it by
$$
\|\Sigma_k^{\perp} (U_k^{(o)})^T b\|\leq \sigma_{k+1} \|(U_k^{(o)})^Tb\|,
$$
where $\Sigma_k^{\perp}={\rm diag} (\sigma_{k+1},\ldots,\sigma_n)$.
In our context, this amplification is fatal, and it is subtle
to obtain sharp bounds for $\|\Sigma_k^{\perp}(U_k^{(o)})^T b\|$.
We observe that $\|\Sigma_k^{\perp} (U_k^{(o)})^T b\|$ is nothing but
the first square root factor in \eqref{delta2}, for which we have
established the accurate estimates \eqref{severe1} and \eqref{modeest}
for severely, moderately and mildly ill-posed problems, respectively,
which hold for $k=1,2,\ldots,n-1$ and are {\em independent} of $n$.
It can be checked that these bounds for $\|\Sigma_k^{\perp} (U_k^{(o)})^T b\|$ are
substantially smaller than $\sigma_{k+1} \|(U_k^{(o)})^Tb\|$.
After the above substantial improvements on \eqref{ferror1} and
\eqref{ferror2}, we can exploit
the accurate bounds for $\|\Delta_k\|$ and $\|\delta_i\|$ in
Theorems~\ref{thm2}--\ref{moderate} and Theorem~\ref{thm3}, as
well as the remarks on them,
to accurately estimate the bounds \eqref{ferror1}
and \eqref{ferror2}. From them we can conclude the full regularization of
LSQR for severely and moderately ill-posed problems with
$\rho>1$ and $\alpha>1$ suitably and its partial regularization
for mildly ill-posed problems.
Making use of some standard perturbation results from Hansen~\cite{hansen10},
we can quantitatively relate LSQR to the TSVD method and analyze the
differences between their corresponding regularized solutions
and between $P_{k+1}B_kQ_k^T x^{(k)}$ and $A_k x_k^{tsvd}$ in predicting
the right-hand side $b$ for $k=1,2,\ldots,k_0$.
\begin{theorem}\label{lsqrtsvd}
For the severely or moderately ill-posed problem~\eqref{eq1},
let $A_k$ be the best rank $k$ approximation to $A$, and assume that
$\|E_k\|=\|P_{k+1}B_kQ_k^T-A_k\|\leq \sigma_k-\sigma_{k+1}$. Then
for $k=1,2,\ldots,k_0$
we have
\begin{align}
\frac{\|x^{(k)}-x_k^{tsvd}\|}{\|x_k^{tsvd}\|}&\leq
\frac{\kappa(A_k)}{1-\epsilon_k}\left(\frac{\|E_k\|}{\|A_k\|}
+\frac{\epsilon_k}{1-\epsilon_k-\hat{\epsilon}_k}
\frac{\|A_kx_k^{tsvd}-b\|}{\|A_k x_k^{tsvd}\|}\right), \label{comperror}
\end{align}
\begin{align}
\frac{\|P_{k+1}B_kQ_k^T x^{(k)}-A_k x_k^{tsvd}\|}{\|b\|}&\leq
\frac{\epsilon_k}{1-\epsilon_k}, \label{compres}
\end{align}
where
$$
\kappa(A_k)=\frac{\sigma_1}{\sigma_k},\ \ \epsilon_k=\frac{\|E_k\|}{\sigma_k},\ \
\hat{\epsilon}_k=\frac{\sigma_{k+1}}{\sigma_k}.
$$
\end{theorem}
{\em Proof}.
For the problem $\min\|A_k x-b\|$ that replaces $A$ by $A_k$
in \eqref{eq1}, we regard the rank $k$ matrix $P_{k+1}B_kQ_k^T$
as a perturbed $A_k$ with the perturbation matrix
$E_k=P_{k+1}B_kQ_k^T-A_k$. Then by the standard perturbation
results on the TSVD solutions \cite[p.65-6]{hansen10}, we obtain
\eqref{comperror} and \eqref{compres} directly.
\qquad\endproof
\begin{remark}
Write $\|E_k\|=\|P_{k+1}B_kQ_k^T-A+A-A_k\|$.
Since the rank $k$ matrices $P_{k+1}B_kQ_k^T$ and
$A_k$ have the $k$ nonzero singular values $\theta_i^{(k)}$ and $\sigma_i$,
$i=1,2,\ldots,k$, respectively, from Mirsky's theorem
\cite[p.204, Theorem 4.11]{stewartsun} we get the bounds
\begin{align}
\max_{i=1,\ldots,k}|\sigma_i-\theta_i^{(k)}|&\leq \|E_k\|=\|A_k-P_{k+1}B_kQ_k^T\|,
\label{first}\\
\max\{\max_{i=1,\ldots,k}|\sigma_i-\theta_i^{(k)}|,
\sigma_{k+1}\}&\leq \|A-P_{k+1}B_kQ_k^T\|=\gamma_k, \label{second}
\end{align}
where the lower bound in \eqref{first} is no more than the one in \eqref{second}.
It is then expected that
$\|E_k\|\leq \gamma_k\approx \sigma_{k+1}$ for severely and
moderately ill-posed problems.
Therefore, we have $\epsilon_k\approx \hat{\epsilon}_k<1$, and
\eqref{compres} indicates that $\|P_{k+1}B_kQ_k^T x^{(k)}-A_k x_k^{tsvd}\|/\|b\|$
is essentially no more than $\epsilon_k$, $k=1,2,\ldots,k_0$.
\end{remark}
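Mirsky's theorem, which underlies the bounds \eqref{first} and \eqref{second}, is easy to confirm numerically. A minimal Python check (assuming NumPy; the matrices are random and purely illustrative):

```python
import numpy as np

# Mirsky's theorem: for matrices A, B of the same size,
# max_i |sigma_i(A) - sigma_i(B)| <= ||A - B||_2.
rng = np.random.default_rng(2)
m, n = 12, 8
A = rng.standard_normal((m, n))
B = rng.standard_normal((m, n))

sA = np.linalg.svd(A, compute_uv=False)
sB = np.linalg.svd(B, compute_uv=False)
gap = np.max(np.abs(sA - sB))        # largest singular value difference
dist = np.linalg.norm(A - B, 2)      # spectral norm of the perturbation
assert gap <= dist + 1e-12
```

The same inequality applied to $A_k$ and $P_{k+1}B_kQ_k^T$ (zero-padded to a common size) gives \eqref{first}, and applied to $A$ and $P_{k+1}B_kQ_k^T$ it gives \eqref{second}.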
\begin{remark}
Since the possibly non-small factor
$$
\frac{\|A_kx_k^{tsvd}-b\|}{\|A_k x_k^{tsvd}\|}
$$
enters the bound \eqref{comperror}, the two regularized solutions $x^{(k)}$ and
$x_k^{tsvd}$ may differ considerably even though $P_{k+1}B_kQ_k^T x^{(k)}$
and $A_k x_k^{tsvd}$ predict the right-hand side $b$ with similar accuracy
for $k=1,2,\ldots,k_0$. This is the case for the inconsistent ill-posed
problem $\min\|Ax-b\|$ with $m>n$, where $\|A_kx_k^{tsvd}-b\|$ decreases
with respect to $k$ until
$$
\|A_{k_0}x_{k_0}^{tsvd}-b\|^2=\|Ax_{k_0}^{tsvd}-b\|^2\approx
\frac{n-k_0}{m}\|e\|^2+\|(I-U_nU_n^T)b\|^2,
$$
with $U_n$ the first $n$ columns of the $m\times m$ left singular vector
matrix $U$ and $\|(I-U_nU_n^T)b\|$
the incompatible part of $b$ lying outside of the range of $A$
(cf. \cite[p.71,88]{hansen10}). Here we remark that the term
$\|(I-U_nU_n^T)b\|$ appears in the relation (4.17) of \cite[p.71]{hansen10}
but is missing in the above right-hand side \cite[p.88]{hansen10}.
For the consistent $Ax=b$, since $\|(I-U_nU_n^T)b\|=0$,
the right-hand side of \eqref{comperror} is approximately
$$
\frac{\sigma_{k_0+1}}{\sigma_{k_0}}\left(1+\frac{\sigma_{1}}{\sigma_{k_0}}
\sqrt{\frac{n-k_0}{m}}\frac{\|e\|}{\|b\|}\right).
$$
We see from the above and \eqref{compres} that two different
regularized solutions can be quite different even if their residual norms
are of very similar sizes, as addressed by Hansen
\cite[p.123-4, Theorem 5.7.1]{hansen98}. However, we point out that
the accuracy of different regularized solutions as approximations to
$x_{true}$ can still be compared: if their error norms have
very comparable sizes, they are equally accurate regularized solutions
to \eqref{eq1}.
\end{remark}
\begin{remark}
Note that $\|E_k\|=\|P_{k+1}B_kQ_k^T-A_k\|\leq \sigma_k-\sigma_{k+1}$
is assumed only for severely and moderately ill-posed problems. As the previous
analysis has indicated, we have $\|E_k\|\approx \gamma_k\approx\sigma_{k+1}$.
As a result, it is easily justified
that this assumption is valid for these two kinds of problems provided
that $\rho>1$ and $\alpha>1$ suitably. However, the assumption fails
to hold for the mildly ill-posed problems with
$\sigma_i=\zeta i^{-\alpha},\ i=1,2,\ldots,n$
and $\frac{1}{2}<\alpha\leq 1$ since, for $k>1$, we have
$$
\sigma_k-\sigma_{k+1}=\sigma_{k+1}
\left(\left(\frac{k+1}{k}\right)^{\alpha}-1\right)<\sigma_{k+1}
\leq \gamma_k\approx \|E_k\|.
$$
\end{remark}
\section{The extension to the case that $A$ has multiple singular values}
\label{multiple}
Previously, under the assumption that the singular values of $A$ are simple,
we have proved our results and analyzed them in detail. Recall
the basic fact that the singular values $\theta_i^{(k)},\ i=1,2,\ldots,k$
of $B_k$ are always simple mathematically, independent of whether
the singular values of $A$ are simple or multiple. In other words,
the Lanczos bidiagonalization process works as if the singular values
of $A$ are simple, and the Ritz values $\theta_i^{(k)},\ i=1,2,\ldots,k$,
are the approximations to some of the {\em distinct} singular values of $A$.
In this section, we will show that, by making a number of suitable and nontrivial
changes and reformulations, our previous results and analysis can
be extended to the case that $A$ has multiple singular values.
Assume that $A$ has $s$ distinct singular values
$\sigma_1>\sigma_2>\cdots>\sigma_s>0$, with $\sigma_i$ of multiplicity $c_i$
and $s\leq n$. In order to treat this case, we need to make a number of
preliminary preparations and necessary modifications or reformulations,
whose details we now give.
First of all, we need to take $b$ into consideration and present a new form
SVD of $A$ by selecting a specific set of left and right singular vectors
corresponding to a multiple singular value $\sigma_i$ of $A$,
so that the discrete Picard condition \eqref{picard} holds for one
particularly chosen left singular vector associated with $\sigma_i$. Specifically,
for the $c_i$ multiple $\sigma_i$, the orthonormal basis of the
corresponding left singular subspace can be chosen so that
$b$ has a nonzero orthogonal projection on just
one unit length left singular vector $u_i$ in the
singular subspace and no components in the remaining $c_i-1$ ones. Precisely,
let the columns of $F_i$ form an orthonormal basis of the left singular
subspace associated with $\sigma_i$, each of which satisfies \eqref{picard}.
Then we take
\begin{equation}\label{newuk}
u_i=\frac{F_iF_i^Tb}{\|F_i^Tb\|},
\end{equation}
where $F_iF_i^T$ is the orthogonal projector onto the
left singular subspace with $\sigma_i$,
and define the corresponding unit length right singular vector by
$v_i = A^T u_i/\sigma_i$. We select the other $c_i-1$ orthonormal
left singular vectors which are orthogonal to $u_i$ and, together with $u_i$,
form the left singular subspace associated with $\sigma_i$, and define the
corresponding unit length right singular vectors in the same way as $v_i$,
which and $v_i$ form an orthonormal basis of the unique right singular
subspace with $\sigma_i$.
After such treatment, we get the desired SVD of $A$. We stress that
$u_i$ defined above is unique since the orthogonal projection of $b$
onto the left singular subspace with $\sigma_i$ is unique
and equal to $F_iF_i^Tb$ for a given orthonormal $F_i$.
We now show that $u_i$ essentially satisfies the discrete Picard
condition \eqref{picard}. To see this, by the Cauchy--Schwarz
inequality, \eqref{newuk} and the assumption that each column of $F_i$
satisfies the discrete Picard condition \eqref{picard}, we get
\begin{equation}\label{estpicard}
|u_i^T \hat{b}|=\frac{| b^TF_iF_i^T \hat{b}|}{\|F_i^Tb\|}\leq \|F_i^T\hat{b}\|
=\sqrt{c_i}\sigma_i^{1+\beta},\ i=1,2,\ldots,s.
\end{equation}
Therefore, the Fourier coefficients $|u_i^T \hat{b}|$, on average, decay faster
than the singular values $\sigma_i,\,i=1,2,\ldots,s$. This is exactly
what the discrete Picard condition means; see the description before
\eqref{picard}. Recall that \eqref{picard} is a simplified model of this
condition. Based on the estimate \eqref{estpicard},
we recover \eqref{picard} by simply resetting \eqref{estpicard} as
\begin{equation}\label{newpicard}
|u_i^T \hat{b}|=\sigma_i^{1+\beta},\ i=1,2,\ldots,s.
\end{equation}
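The construction \eqref{newuk} can be illustrated with a small numerical sketch (Python with NumPy; the subspace dimension and the data are illustrative assumptions). It verifies that $u_i$ is a unit vector, that the projection of $b$ onto the subspace is a multiple of $u_i$, and that every vector of the subspace orthogonal to $u_i$ is orthogonal to $b$:

```python
import numpy as np

# F plays the role of F_i: an orthonormal basis of a c-dimensional left
# singular subspace; u = F F^T b / ||F^T b|| is the adapted singular vector.
rng = np.random.default_rng(3)
m, c = 10, 3
F, _ = np.linalg.qr(rng.standard_normal((m, c)))
b = rng.standard_normal(m)

u = F @ (F.T @ b) / np.linalg.norm(F.T @ b)
assert abs(np.linalg.norm(u) - 1.0) < 1e-12

# The projection of b onto span(F) is exactly (u^T b) u ...
proj = F @ (F.T @ b)
assert np.linalg.norm(proj - (u @ b) * u) < 1e-12

# ... so any vector in span(F) orthogonal to u is orthogonal to b.
w = F[:, 0] - (F[:, 0] @ u) * u
w /= np.linalg.norm(w)
assert abs(w @ b) < 1e-12
```

This confirms that $b$ has a component along the single vector $u_i$ of the subspace and none along the remaining $c_i-1$ orthonormal directions.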
With the help of the SVD of $A$ described above, it is crucial to
observe that $x_k^{tsvd}$ in \eqref{solution} is now the sum consisting of
the first $k$ {\em distinct} dominant SVD components of $A$. Furthermore,
for \eqref{eq1} and such reformulation of \eqref{solution},
the matrix $A$ in them can be equivalently replaced by the new $m\times n$
matrix
\begin{equation}\label{aprime}
A^{\prime}=U\Sigma^{\prime} V^T,
\end{equation}
where $\Sigma^{\prime}={\rm diag}(\sigma_1,\sigma_2,
\ldots,\sigma_s,\mathbf{0})$, $U_s=(u_1,u_2,\ldots,u_s)$ and
$V_s=(v_1,v_2,\ldots,v_s)$ are the first $s$ columns of
$U$ and $V$, respectively, the last $n-s$ columns of $U$ are
the other left singular vectors of $A$ that are orthogonal to $b$
by the construction stated above, and the last $n-s$ columns of $V$ are
the other corresponding right singular vectors of $A$.
Obviously, for the new SVD of $A$ defined above, $A^{\prime}$ is of rank $s$
with the $s$ simple nonzero singular values $\sigma_1,\sigma_2,\ldots,\sigma_s$,
its left and right singular vector matrices $U$ and $V$ are the corresponding
ones of $A$ with proper column exchanges, respectively.
We have $x_{true}=A^{\dagger}\hat{b}=(A^{\prime})^{\dagger}\hat{b}$
and the TSVD regularized solutions $x_k^{tsvd}=(A_k^{\prime})^{\dagger}b$,
where $A_k^{\prime}$ is the best rank $k$ approximation to $A^{\prime}$
with respect to the 2-norm. In addition, we comment that from the discrete Picard
condition
$$
\|A^{\dagger}\hat{b}\|=\|(A^{\prime})^{\dagger}\hat{b}\|
=\left(\sum_{k=1}^s\frac{|u_k^T\hat{b}|^2}{\sigma_k^2}\right)^{1/2}\leq C,
$$
independently of $n$ and $s$,
we can obtain \eqref{newpicard} directly in the same way as done in
the introduction for \eqref{picard}.
Another fundamental change is that the $k$-dimensional dominant right
singular space of $A$ now becomes that of $A^{\prime}$, i.e.,
$\mathcal{V}_k=span\{V_k\}$ with $V_k=(v_1,v_2,\ldots,v_k)$ associated with
the first $k$ large singular values of $A^{\prime}$. It is the subspace of
concern in the case that $A$ has multiple singular values.
We will also denote $U_k=(u_1,u_2,\ldots,u_k)$ and $\mathcal{U}_k=span\{U_k\}$.
As for Krylov subspaces, by the SVD of $A$ and that of
$A^{\prime}$, expanding $b$ as $b=\sum_{j=1}^s\xi_j u_j+(I-U_sU_s^T)b$,
we easily justify
\begin{equation}\label{ata}
\mathcal{K}_k(A^TA, A^Tb)=\mathcal{K}_k((A^{\prime})^TA^{\prime},(A^{\prime})^Tb)
\end{equation}
and
\begin{equation}\label{aat}
\mathcal{K}_k(AA^T, b)=\mathcal{K}_k(A^{\prime}(A^{\prime})^T,b)
\end{equation}
by noting that
\begin{equation}\label{aprimea}
(A^TA)^i A^Tb=\left((A^{\prime})^TA^{\prime}\right)^i(A^{\prime})^Tb
=\sum_{j=1}^s \xi_j \sigma_j^{2i+1}v_j
\end{equation}
for any integer $i\geq 0$ and
\begin{equation}\label{aaprime}
(AA^T)^i b=\left(A^{\prime}(A^{\prime})^T\right)^ib
=\sum_{j=1}^s \xi_j \sigma_j^{2i}u_j
\end{equation}
for any integer $i\geq 1$. Thus, for the given $b$, Lanczos
bidiagonalization works on $A$ exactly as it does on $A^{\prime}$.
That is, \eqref{eqmform1}--\eqref{Bk} generated by Algorithm 1 hold when
$A$ is replaced by $A^{\prime}$, and the $k$ Ritz values $\theta_i^{(k)}$
approximate $k$ nonzero singular values of $A^{\prime}$.
Moreover, \eqref{aprimea} and \eqref{aaprime} indicate
$$
\mathcal{K}_{s+1}((A^{\prime})^TA^{\prime},
(A^{\prime})^Tb)=\mathcal{K}_s((A^{\prime})^TA^{\prime},
(A^{\prime})^Tb),\
\mathcal{K}_{s+2}(A^{\prime}(A^{\prime})^T,b)=
\mathcal{K}_{s+1}(A^{\prime}(A^{\prime})^T,b).
$$
As a result, since $(A^{\prime})^Tb$ has nonzero components in all the
eigenvectors $v_1,v_2,\ldots,v_s$ of $(A^{\prime})^TA^{\prime}$ associated
with its nonzero distinct eigenvalues $\sigma_1^2,\sigma_2^2,\ldots,
\sigma_s^2$, Lanczos bidiagonalization cannot break down until step $s+1$,
and the singular values $\theta_i^{(s)}$ of $B_s$ are exactly the singular
values $\sigma_1,\sigma_2,\ldots,\sigma_s$ of $A^{\prime}$.
At step $s$, Lanczos bidiagonalization on $A$
generates the $(s+1)\times s$ lower bidiagonal matrix
\begin{align}
P_{s+1}^T AQ_s&=P_{s+1}^T A^{\prime} Q_s=B_s \label{lbaprime}
\end{align}
and
\begin{align}
\mathcal{V}_s&=span\{Q_s\},\ \mathcal{U}_s\subset
span\{P_{s+1}\}. \label{uv}
\end{align}
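The breakdown behavior just described can be observed numerically. The following Python sketch (assuming NumPy; the sizes, the four distinct singular values of multiplicity three, and the generic $b$ are illustrative assumptions) runs Golub--Kahan bidiagonalization with full reorthogonalization, observes $\alpha_{s+1}=0$ at step $s+1$, and checks that the singular values of $B_s$ equal the distinct singular values of $A$:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 14, 12
distinct = np.array([4.0, 2.0, 1.0, 0.5])   # s = 4 distinct singular values
s = len(distinct)
sigma = np.repeat(distinct, 3)              # each of multiplicity 3
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(sigma) @ V.T
b = rng.standard_normal(m)                  # generic b, partly outside range(A)

P = [b / np.linalg.norm(b)]; Q = []; alphas = []; betas = []
for j in range(s + 1):
    r = A.T @ P[-1]
    for q in Q:
        r -= (q @ r) * q                    # full reorthogonalization
    alpha = np.linalg.norm(r)
    if alpha < 1e-10:                       # breakdown: alpha_{s+1} = 0
        break
    alphas.append(alpha); Q.append(r / alpha)
    p = A @ Q[-1]
    for u in P:
        p -= (u @ p) * u
    betas.append(np.linalg.norm(p)); P.append(p / betas[-1])

B = np.zeros((len(alphas) + 1, len(alphas)))
for i in range(len(alphas)):
    B[i, i] = alphas[i]; B[i + 1, i] = betas[i]
theta = np.linalg.svd(B, compute_uv=False)
assert len(alphas) == s                     # breakdown occurs at step s+1
assert np.allclose(np.sort(theta), np.sort(distinct), atol=1e-8)
```

The process thus behaves exactly as it would on a matrix with $s$ simple nonzero singular values, as claimed.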
Having done the above, we need to estimate
how $\mathcal{K}_k(A^TA, A^Tb)=\mathcal{K}_k((A^{\prime})^TA^{\prime},
(A^{\prime})^Tb)$ approximates or captures the $k$-dimensional
dominant right subspaces $\mathcal{V}_k$, $k=1,2,\ldots,s-1$.
This is a crucial step and the starting point of all the later
analysis. In what follows let us show how to adapt the beginning part
of the proof of Theorem~\ref{thm2} to the case that $A$ has multiple singular
values.
Observe the Krylov subspace $\mathcal{K}_{k}((\Sigma^{\prime})^2,
\Sigma^{\prime} U^Tb)=span\{\hat{D}\hat{T}_k\}$ with
$$
\hat{D}=\diag(\sigma_1 u_1^Tb,\ldots,\sigma_s u_s^Tb,
\mathbf{0})=\left(\begin{array}{cc}
D& \\
& \mathbf{0}
\end{array}\right)\in\mathbb{R}^{n\times n}
$$
and
$$
\hat{T}_k=\left(\begin{array}{cccc} 1 &
\sigma_1^2&\ldots & \sigma_1^{2k-2}\\
1 &\sigma_2^2 &\ldots &\sigma_2^{2k-2} \\
\vdots & \vdots&&\vdots\\
1 &\sigma_s^2 &\ldots &\sigma_s^{2k-2}\\
0 & 0 &\ldots& 0\\
\vdots & \vdots&&\vdots\\
0 & 0 &\ldots& 0
\end{array}\right)=\left(\begin{array}{c}
T_k\\
\mathbf{0}
\end{array}\right)\in \mathbb{R}^{n\times k}.
$$
Partition the diagonal matrix $D$ and the matrix $T_k$ into the forms
\begin{equation*}
D=\left(\begin{array}{cc} D_1 & 0 \\ 0 & D_2 \end{array}\right)
\in \mathbb{R}^{s\times s},\ \ \
T_k=\left(\begin{array}{c} T_{k1} \\ T_{k2} \end{array}\right)
\in \mathbb{R}^{s\times k},
\end{equation*}
where $D_1, T_{k1}\in\mathbb{R}^{k\times k}$ and
$D_2=\diag(\sigma_{k+1}u_{k+1}^Tb,\ldots,\sigma_s u_s^Tb)$. Since $T_{k1}$ is
a Vandermonde matrix with $\sigma_j$ distinct for $j=1,2,\ldots,k$, it is
nonsingular. Therefore, from
$$
\mathcal{K}_k((A^{\prime})^TA^{\prime},
(A^{\prime})^Tb)=span\{V\hat{D}\hat{T}_k\}
$$
and the structures of $\hat{D}$ and $\hat{T}_k$, we obtain
\begin{equation*}
\mathcal{K}_k((A^{\prime})^TA^{\prime},
(A^{\prime})^Tb)=span\{V_sD T_k\}=span \left\{V_s\left
(\begin{array}{c} D_1T_{k1} \\ D_2T_{k2} \end{array}\right)\right\}
=span\left\{V_s\left(\begin{array}{c} I \\ \Delta_k \end{array}\right)\right\},
\end{equation*}
with
\begin{equation*}
\Delta_k=D_2T_{k2}T_{k1}^{-1}D_1^{-1},
\end{equation*}
meaning that $\mathcal{K}_k((A^{\prime})^TA^{\prime},
(A^{\prime})^Tb)$ is orthogonal to the last $n-s$ columns of $V$.
Write
\begin{equation}\label{parti}
V_s=(V_k, V_k^{\perp}),\ \ V=(V_s,\hat{V}_s),
\end{equation}
and define
\begin{equation*}
Z_k=V_s\left(\begin{array}{c} I \\ \Delta_k \end{array}\right)
=V_k+V_k^{\perp}\Delta_k.
\end{equation*}
Then $Z_k^TZ_k=I+\Delta_k^T\Delta_k$, the
columns of $\hat{Z}_k=Z_k(Z_k^TZ_k)^{-\frac{1}{2}}$
form an orthonormal basis of $\mathcal{K}_k((A^{\prime})^TA^{\prime},
(A^{\prime})^Tb)$, and we get the orthogonal
direct sum decomposition
\begin{equation*}
\hat{Z}_k=(V_k+V_k^{\perp}\Delta_k)(I+\Delta_k^T\Delta_k)^{-\frac{1}{2}}.
\end{equation*}
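The identity $Z_k^TZ_k=I+\Delta_k^T\Delta_k$ and the orthonormality of the columns of $\hat{Z}_k$ can be verified directly; a small Python sketch (assuming NumPy, with a random $\Delta_k$ as an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(5)
n, s, k = 10, 7, 3
Vs, _ = np.linalg.qr(rng.standard_normal((n, s)))   # orthonormal columns
Vk, Vk_perp = Vs[:, :k], Vs[:, k:]
Delta = 0.1 * rng.standard_normal((s - k, k))       # illustrative Delta_k

Z = Vk + Vk_perp @ Delta
G = Z.T @ Z
assert np.allclose(G, np.eye(k) + Delta.T @ Delta)  # Z_k^T Z_k = I + Delta^T Delta

# Inverse square root of the Gram matrix via its eigendecomposition,
# giving hat{Z}_k = Z_k (Z_k^T Z_k)^{-1/2} with orthonormal columns.
w, S = np.linalg.eigh(G)
Zhat = Z @ (S @ np.diag(w ** -0.5) @ S.T)
assert np.allclose(Zhat.T @ Zhat, np.eye(k))
```

Since the eigenvalues of $I+\Delta_k^T\Delta_k$ are at least one, the inverse square root is always well defined.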
Denote $\mathcal{V}_k^R=\mathcal{K}_k((A^{\prime})^TA^{\prime},
(A^{\prime})^Tb)$. For $\|\sin\Theta(\mathcal{V}_k, \mathcal{V}_k^R)\|$,
based on the above, we get \eqref{deltabound} by
replacing $V_k^{\perp}$ in \eqref{sindef} by
$(V_k^{\perp},\hat{V}_s)$ defined as \eqref{parti} and noting
that $\hat{Z}_k$ is orthogonal to $\hat{V}_s$. It is then straightforward
to derive, in completely the same way, the same bounds for
$\|\sin\Theta(\mathcal{V}_k, \mathcal{V}_k^R)\|$ as those established previously.
As for the extension of Theorem~\ref{initial}, by definition and \eqref{parti},
we need to replace $V_k^{\perp}$ in \eqref{qktilde}
by $(V_k^{\perp},\hat{V}_s)$ defined as \eqref{parti}.
The unit-length $\tilde{q}_k\in\mathcal{V}_k^R$ is now a vector that
has the smallest acute angle with $span\{(V_k^{\perp},\hat{V}_s)\}$,
and we modify \eqref{decompqk} as
$$
\tilde{q}_k=\hat{V}_s\hat{V}_s^T\tilde{q}_k
+V_k^{\perp}(V_k^{\perp})^T\tilde{q}_k+V_kV_k^T\tilde{q}_k.
$$
Recall that the columns of $\hat{V}_s$ are the right singular vectors
of $A^{\prime}$ corresponding to zero singular values, so the term
$\hat{V}_s\hat{V}_s^T\tilde{q}_k$ disappears
when forming the Rayleigh quotient of $(A^{\prime})^T A^{\prime}$ with respect
to $\tilde{q}_k$. The proof of Theorem~\ref{initial} then
carries over to $A^{\prime}$, and the
results hold for the case that $A$ has multiple singular values.
Another fundamental change is that, when speaking of a rank $k$ approximation,
we now mean that for $A^{\prime}$. Note that the best rank $k$ approximation
$A^{\prime}_k$ to $A^{\prime}$ is $A^{\prime}_k=U_k\Sigma_k^{\prime}V_k^T$,
$k=1,2,\ldots,s$, where $U_k$ and $V_k$ are defined as before,
and $\Sigma_k^{\prime}={\rm diag}(\sigma_1,\sigma_2,\ldots,\sigma_k)$.
The $k$-step Lanczos bidiagonalization process on $A$ now generates the
rank $k$ approximations $P_{k+1}B_kQ_k^T$ in LSQR and $P_k\bar{B}_{k-1}Q_k^T$
in CGME to $A^{\prime}$ and the rank $k$ approximation
$Q_{k+1}Q_{k+1}^T(A^{\prime})^TA^{\prime}Q_kQ_k^T$ in LSMR
to $(A^{\prime})^TA^{\prime}$, where $P_{k+1}$ and $Q_k$ are
the first $k+1$ and $k$ columns of $P_{s+1}$ and $Q_s$ in \eqref{lbaprime}.
We then need to estimate the approximation accuracy of these rank
$k$ approximations and compare them with that of the best rank $k$
approximations $A_k^{\prime}$ and
$(A^{\prime}_k)^TA^{\prime}_k$, respectively. Meanwhile, for each of these
three rank $k$ approximation matrices, we need to
analyze how its $k$ nonzero singular values approximate $k$ singular
values of $A^{\prime}$ or $(A^{\prime})^T A^{\prime}$.
For the rank $k$ approximation $P_{k+1}B_kQ_k^T$ to $A^{\prime}$
in LSQR, similar to \eqref{gammak}, we define
$$
\gamma_k^{\prime}=\|A^{\prime}-P_{k+1}B_kQ_k^T\|,\ k=1,2,\ldots,s-1.
$$
Then, without any changes but the replacement of the index $n$ by $s$,
all the results in Section \ref{rankapp} and \eqref{leftsub}
in Theorem~\ref{main2} carry over to the multiple singular
value case.
The final important point is how to extend the results presented in
Sections \ref{decay}--\ref{cgme} to the multiple singular
value case. We have to derive three key relations similar
to \eqref{invar}, \eqref{gk} and \eqref{gk1}, whose derivations exploit the
fact that Lanczos bidiagonalization can be run for $n$ steps without breakdown.
In the case that $A$ has multiple singular values,
since Lanczos bidiagonalization on
$A$ must break down at step $s+1$, there are no $P_{n+1}$
and $Q_n$ as in \eqref{invar}. To this end, from \eqref{lbaprime} we augment
$P_{s+1}$ and $Q_s$ to the $m\times m$ and $n\times n$
orthogonal matrices $P=(P_{s+1},\hat{P})$ and $Q=(Q_s,\hat{Q})$,
respectively, from which and \eqref{lbaprime} we obtain
$$
P^TAQ=P^TA^{\prime}Q=\left(\begin{array}{cc}
B_s&\mathbf{0}\\
\mathbf{0}& \mathbf{0}
\end{array}
\right).
$$
Having this relation, like \eqref{invar} and \eqref{gk}, we get
$$
\gamma_k^{\prime}=\|A^{\prime}-P_{k+1}B_kQ_k^T\|
=\|P^T\left(A^{\prime}-P_{k+1}B_kQ_k^T\right)Q\|=\|G_k\|,
$$
where $G_k$ is the bottom right $(s-k+1)\times (s-k)$ submatrix of
$B_s$, similar to $G_k$ in \eqref{gk1}. Then Theorem~\ref{main2}
extends naturally to the multiple singular value case without
any change but the replacement of the index $n$ by $s$, and all the
other results and analysis in Section \ref{decay}--\ref{cgme} carry over
to this case as well. The results in Section \ref{compare} hold without any change
whenever $A$, $A_k$ and the index $n$ are replaced by $A^{\prime}$,
$A^{\prime}_k$ and $s$, respectively.
In summary, based on the above reformulations, changes and preliminary work,
except Section \ref{rrqr}, we have extended all the results and analysis
in Sections \ref{lsqr}--\ref{compare}
to the case that $A$ has multiple singular values, just as we have done
for the simple singular value case. In the analysis, derivations and results,
the index $n$ is often replaced by $s$ whenever needed, and when this is
necessary is clear from the context.
\section{Numerical experiments}\label{numer}
For a number of problems from Hansen's regularization toolbox \cite{hansen07},
Huang and Jia \cite{huangjia} have numerically justified the full
regularization of LSQR for severely and moderately ill-posed problems and its
partial regularization for mildly ill-posed problems, where each $A$ is
$1,024\times 1,024$.
In this section, we report numerical experiments to
confirm our theory and illustrate the full or partial regularization
of LSQR in much more detail. For the first two kinds of problems,
we demonstrate that $\gamma_k,\ \alpha_{k+1}$
and $\beta_{k+2}$ decay as fast as $\sigma_{k+1}$. We compare LSQR
and the hybrid LSQR with the TSVD method applied to projected
problems after semi-convergence. In the experiments, we use the L-curve
criterion, the function $\mathsf{lcorner}$ in \cite{hansen07}, to determine
an actually optimal regularization parameter.
For each of severely and moderately ill-posed problems, we show that the
regularized solution obtained by LSQR at semi-convergence is at least as
accurate as the best TSVD regularized solution, indicating
that LSQR has the full regularization. In the meantime, we
show that the regularized solution obtained by LSQR at semi-convergence
is considerably less accurate than that by the hybrid LSQR for mildly ill-posed
problems, demonstrating that LSQR has only the partial regularization.
As a byproduct, we compare LSQR with GMRES and RRGMRES and illustrate that the
latter ones have no regularizing effects for general nonsymmetric
ill-posed problems.
We choose several ill-posed problems from Hansen's regularization
toolbox \cite{hansen07}, which include
the severely ill-posed problems $\mathsf{shaw,\ wing,\ i\_laplace}$,
the moderately ill-posed problems $\mathsf{heat,\ phillips}$, and
the mildly ill-posed problem $\mathsf{deriv2}$.
All the codes are from \cite{hansen07}, and the problems
arise from discretizations of \eqref{eq2}.
We emphasize that, as far as solving \eqref{eq1} is concerned,
our primary goal consists in justifying the regularizing effects
of iterative solvers for \eqref{eq1}, which are {\em unaffected by the size}
of \eqref{eq1} and depend only on the degree of
ill-posedness, the noise level $\|e\|$ and the actual discrete Picard
condition, provided that the condition number of \eqref{eq1},
measured by the ratio between the largest and smallest singular values
of each $A$, is large enough.
Therefore, for this purpose, as extensively done in the
literature (see, e.g., \cite{hansen98,hansen10} and the references therein
as well as many other papers), it is enough to report the results on small
and/or medium sized discrete ill-posed problems since the condition numbers of
these $A$ are already huge or large, which, in finite precision arithmetic,
are roughly $10^{16}, 10^{8}$ and $10^{6}$ for severely, moderately and
mildly ill-posed problems with $n=256$, respectively.
Indeed, for $n$ large, say, 10,000 or more, we have observed that
LSQR and the hybrid LSQR have the same behavior as for small $n$,
e.g., $n=256$ used in this paper. Another
important reason is that this choice enables us to fully
justify the regularizing effects of LSQR by comparing it with
the TSVD method, which suits only small and/or medium sized problems
because of its computational complexity for large $n$. For each example,
we generate a $256\times 256$ matrix $A$, the true
solution $x_{true}$ and noise-free right-hand side $\hat{b}$.
In order to simulate the noisy data, we generate white noise
vectors $e$ such that the relative noise level
$\varepsilon=\frac{\|e\|}{\|\hat{b}\|}$ takes the values $10^{-2}, 10^{-3}, 10^{-4}$, respectively.
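For illustration, the noise model just described can be sketched in a few lines. This is a hypothetical Python/NumPy sketch (the experiments themselves were run in Matlab); the function name `add_white_noise` and the stand-in right-hand side are ours:

```python
import numpy as np

def add_white_noise(b_hat, epsilon, seed=0):
    """Return b = b_hat + e with relative noise level ||e|| / ||b_hat|| = epsilon."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(b_hat.shape)                      # white Gaussian noise
    e *= epsilon * np.linalg.norm(b_hat) / np.linalg.norm(e)  # rescale to the target level
    return b_hat + e

b_hat = np.ones(256)                  # stand-in for a noise-free right-hand side
b = add_white_noise(b_hat, 1e-3)
rel = np.linalg.norm(b - b_hat) / np.linalg.norm(b_hat)
```

Scaling $e$ by $\varepsilon\|\hat{b}\|/\|e\|$ makes the relative noise level exactly $\varepsilon$, independently of the realization.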
We mention that, to better illustrate the behavior of the hybrid LSQR,
we, in the concluding section, will report some important
observations on $\mathsf{phillips}$ and $\mathsf{deriv2}$ of $n=1,024$ and $10,240$,
whose condition numbers are as large as $1.7\times 10^{15}$ and $1.3\times 10^8$
for $n=10,240$, respectively. To simulate exact arithmetic,
LSQR uses full reorthogonalization in Lanczos bidiagonalization.
All the computations are carried out in Matlab 7.8 with the machine precision
$\epsilon_{\rm mach}= 2.22\times10^{-16}$ under the Microsoft
Windows 7 64-bit system.
\subsection{The accuracy of rank $k$ approximations}
{\bf Example 1}.
This problem $\mathsf{shaw}$ arises from one-dimensional image restoration and is
obtained by discretizing \eqref{eq2} with $[-\frac{\pi}{2}, \frac{\pi}{2}]$ as
the domains of $s$ and $t$, where
\begin{align*}
k(s,t)&=(\cos(s)+\cos(t))^2\left(\frac{\sin(u)}{u}\right)^2,\
u=\pi(\sin(s)+\sin(t)),\\
x(t)&=2\exp(-6(t-0.8)^2)+\exp(-2(t+0.5)^2).
\end{align*}
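As a sketch, the kernel and solution above can be discretized by the midpoint rule as follows. This is an illustrative Python/NumPy sketch only; Hansen's `shaw.m` uses its own quadrature, so the matrix below merely mimics its behavior:

```python
import numpy as np

def shaw_matrix(n=256):
    """Midpoint-rule discretization of the shaw kernel on [-pi/2, pi/2].
    Illustrative sketch only; Hansen's shaw.m uses its own quadrature."""
    h = np.pi / n
    t = -np.pi / 2 + (np.arange(n) + 0.5) * h      # quadrature midpoints
    S, T = np.meshgrid(t, t, indexing="ij")
    u = np.pi * (np.sin(S) + np.sin(T))
    with np.errstate(divide="ignore", invalid="ignore"):
        # removable singularity of (sin u / u)^2 at u = 0 handled explicitly
        sinc2 = np.where(u == 0.0, 1.0, (np.sin(u) / u) ** 2)
    A = h * (np.cos(S) + np.cos(T)) ** 2 * sinc2
    x = 2 * np.exp(-6 * (t - 0.8) ** 2) + np.exp(-2 * (t + 0.5) ** 2)
    return A, x

A, x = shaw_matrix(64)
sigma = np.linalg.svd(A, compute_uv=False)
```

The resulting matrix is symmetric, since the kernel is symmetric in $s$ and $t$, and its singular values decay rapidly, reflecting the severe ill-posedness of $\mathsf{shaw}$.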
{\bf Example 2}. This problem $\mathsf{wing}$ has a discontinuous solution
and is obtained by discretizing \eqref{eq2} with $[0, 1]$ as
the domains of $s$ and $t$, where
\begin{align*}
k(s,t)&=t\exp(-st^2),\ \ \ g(s)=\frac{\exp(-\frac{1}{9}s)-\exp(-\frac{4}{9}s)}{2s},\\
x(t)&=\left\{\begin{array}{ll} 1,\ \ \ &\frac{1}{3}<t<\frac{2}{3};\\ 0,\ \ \
&\mbox{elsewhere}. \end{array}\right.
\end{align*}
The problems $\mathsf{shaw}$ and $\mathsf{wing}$ are severely ill-posed
with the singular values $\sigma_k=\mathcal{O}(e^{-4 k})$
for $\mathsf{shaw}$ and $\sigma_k=\mathcal{O}(e^{-9k})$ for $\mathsf{wing}$,
respectively.
In Figure~\ref{fig1}, we display the decay curves of the $\gamma_k$ for
$\mathsf{shaw}$ with $\varepsilon=10^{-2}, 10^{-3}$ and for $\mathsf{wing}$
with $\varepsilon=10^{-3}, 10^{-4}$, respectively.
We observe that the three curves with different $\varepsilon$ are almost
unchanged. This is in accordance with our Remark~\ref{decayrate}, where
it is stated that the decay rate of $\gamma_k$ is little affected
by noise levels for severely ill-posed problems, since
$\gamma_k$ primarily depends on the decay rate of $\sigma_{k+1}$
and different noise levels only affect the value of
$k_0$ other than the decay rate of $\gamma_k$. In addition,
we have observed that $\gamma_k$ and $\sigma_{k+1}$ decay
until they level off at $\epsilon_{\rm mach}$ due to round-off errors.
Most importantly, the results have clearly confirmed the theory
that $\gamma_k$ decreases as fast as $\sigma_{k+1}$, and we have
$\gamma_k\approx\sigma_{k+1}$, whose decay curves
are almost indistinguishable.
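The quantity $\gamma_k$ can be computed directly from a Lanczos bidiagonalization with full reorthogonalization. The following Python/NumPy sketch does this on a toy matrix with prescribed singular value decay $\sigma_k=e^{-2(k-1)}$ rather than on one of Hansen's test problems; all function names are ours:

```python
import numpy as np

def golub_kahan(A, b, kmax):
    """Lanczos bidiagonalization of A started with p1 = b/||b||,
    with full reorthogonalization (simulating exact arithmetic)."""
    m, n = A.shape
    P = np.zeros((m, kmax + 1))
    Q = np.zeros((n, kmax))
    alpha = np.zeros(kmax)
    beta = np.zeros(kmax + 1)
    P[:, 0] = b / np.linalg.norm(b)
    for k in range(kmax):
        r = A.T @ P[:, k]
        if k > 0:
            r -= beta[k] * Q[:, k - 1]
        r -= Q[:, :k] @ (Q[:, :k].T @ r)          # full reorthogonalization
        alpha[k] = np.linalg.norm(r)
        Q[:, k] = r / alpha[k]
        p = A @ Q[:, k] - alpha[k] * P[:, k]
        p -= P[:, :k + 1] @ (P[:, :k + 1].T @ p)  # full reorthogonalization
        beta[k + 1] = np.linalg.norm(p)
        P[:, k + 1] = p / beta[k + 1]
    return P, Q, alpha, beta

def lower_bidiag(alpha, beta, k):
    """The (k+1) x k lower bidiagonal matrix B_k."""
    B = np.zeros((k + 1, k))
    for i in range(k):
        B[i, i] = alpha[i]
        B[i + 1, i] = beta[i + 1]
    return B

# Toy severely ill-posed matrix with prescribed singular value decay
rng = np.random.default_rng(1)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sig = np.exp(-2.0 * np.arange(n))
A = U @ np.diag(sig) @ V.T
b = A @ np.ones(n) + 1e-8 * rng.standard_normal(n)
P, Q, alpha, beta = golub_kahan(A, b, 8)
k = 4
gamma_k = np.linalg.norm(A - P[:, :k + 1] @ lower_bidiag(alpha, beta, k) @ Q[:, :k].T, 2)
```

By the Eckart--Young theorem $\gamma_k\geq\sigma_{k+1}$ always holds; for such rapid singular value decay, $\gamma_k$ stays close to $\sigma_{k+1}$, in line with the theory discussed above.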
In Figure~\ref{fig2}, we plot the relative errors $\|x^{(k)}
-x_{true}\|/\|x_{true}\|$ with different $\varepsilon$ for
these two problems. As we have seen, LSQR exhibits clear semi-convergence.
Moreover, for a smaller $\varepsilon$, we get a more accurate regularized
solution at the cost of more iterations, as $k_0$ is bigger from \eqref{picard}
and \eqref{picard1}.
\begin{figure}
\caption{(a)-(b): Decay curves of the sequences $\gamma_k$ and
$\sigma_{k+1}$.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{The relative errors $\|x^{(k)}-x_{true}\|/\|x_{true}\|$.}
\label{fig2}
\end{figure}
{\bf Example 3}.
This problem $\mathsf{heat}$ is moderately ill-posed, arises from the inverse
heat equation, and is obtained by discretizing \eqref{eq2}
with $[0, 1]$ as integration interval, where the kernel $k(s,t)=k(s-t)$ with
\begin{equation*}
k(t)=\frac{t^{-3/2}}{2\sqrt{\pi}}\exp\left(-\frac{1}{4t}\right).
\end{equation*}
{\bf Example 4}.
This is the famous problem $\mathsf{phillips}$, which is moderately ill-posed.
It is obtained by discretizing \eqref{eq2}
with $[-6, 6]$ as the domains of $s$ and $t$, where
\begin{align*}
k(s,t)&=\left\{\begin{array}{ll} 1+\cos\left(\frac{\pi(s-t)}{3}\right),
\ \ \ &|s-t|<3,\\ 0,\ \ \ &|s-t|\geq 3, \end{array}\right.
\\
g(s)&=(6-|s|)\left(1+\frac{1}{2}\cos\left(\frac{\pi s}{3}\right)\right)+
\frac{9}{2\pi}\sin\left(\frac{\pi|s|}{3}\right),\\
x(t)&=\left\{\begin{array}{ll} 1+\cos\left(\frac{\pi t}{3}\right),\ \ \ &|t|<3,\\
0,\ \ \ &|t|\geq 3. \end{array}\right.
\end{align*}
\begin{figure}
\caption{(a): Decay curves of the sequences $\gamma_k$ and $\sigma_{k+1}$.}
\label{fig3}
\end{figure}
From Figure~\ref{fig3}, we see that $\gamma_k$ decreases almost as fast as
$\sigma_{k+1}$ for the moderately ill-posed problems $\mathsf{heat}$ and
$\mathsf{phillips}$. However, slightly differently
from the severely ill-posed problems, the $\gamma_k$, though
excellent approximations to $\sigma_{k+1}$, are not quite as accurate.
This is expected, as the constants $\eta_k$ in \eqref{const2} are generally
bigger than those in \eqref{const1} for severely ill-posed problems. Also,
different from Figure~\ref{fig1}, we observe from Figure~\ref{fig3} that
$\gamma_k$ deviates more from $\sigma_{k+1}$ with $k$ increasing, especially
for the problem $\mathsf{phillips}$. This confirms
Remarks~\ref{decayrate}--\ref{decayrate2} on moderately ill-posed problems.
In Figure~\ref{fig4}, we depict the relative errors of $x^{(k)}$, and from them
we observe analogous phenomena to those for severely ill-posed problems.
The only distinction is that LSQR now needs more iterations, i.e.,
a bigger $k_0$ is needed for moderately ill-posed problems with the same
$\varepsilon$, as is seen from \eqref{picard} and \eqref{picard1}.
\begin{figure}
\caption{The relative errors $\|x^{(k)}-x_{true}\|/\|x_{true}\|$.}
\label{fig4}
\end{figure}
{\bf Example 5}.
The mildly ill-posed problem $\mathsf{deriv2}$ is obtained by
discretizing \eqref{eq2} with $[0, 1]$ as the domains of $s$ and $t$,
where the kernel $k(s,t)$ is the
Green's function for the second derivative:
\begin{equation*}\label{}
k(s,t)=\left\{\begin{array}{ll} s(t-1),\ \ \ &s<t;\\ t(s-1),\ \ \
&s\geq t, \end{array}\right.
\end{equation*}
and the solution $x(t)$ and
the right-hand side $g(s)$ are given by
\begin{equation*}\label{}
x(t)=\left\{\begin{array}{ll} t,
\ \ \ &t<\frac{1}{2};\\ 1-t,\ \ \ &t\geq\frac{1}{2}, \end{array}\right. \ \ \
g(s)=\left\{\begin{array}{ll} (4s^3-3s)/24,\ \ \ &s<\frac{1}{2};
\\ (-4s^3+12s^2-9s+1)/24,\ \ \ &s\geq\frac{1}{2}. \end{array}\right.
\end{equation*}
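Since $k(s,t)$ is the Green's function for the second derivative, the data above satisfy $g''(s)=x(s)$, and the relation $\int_0^1 k(s,t)x(t)\,dt=g(s)$ can be checked numerically. A small Python/NumPy consistency check with a plain midpoint rule (Hansen's `deriv2.m` may use a different quadrature, so this is illustrative only):

```python
import numpy as np

n = 200
h = 1.0 / n
t = (np.arange(n) + 0.5) * h                         # midpoints of [0, 1]
S, T = np.meshgrid(t, t, indexing="ij")
K = np.where(S < T, S * (T - 1.0), T * (S - 1.0))    # Green's function kernel k(s,t)
x = np.where(t < 0.5, t, 1.0 - t)                    # true solution
g = np.where(t < 0.5,
             (4 * t**3 - 3 * t) / 24,
             (-4 * t**3 + 12 * t**2 - 9 * t + 1) / 24)  # right-hand side
g_quad = h * (K @ x)                                 # midpoint rule for the integral
err = np.max(np.abs(g_quad - g))                     # O(h^2) discretization error
```

The midpoint-rule error is $O(h^2)$ here, so the discrete residual shrinks as the grid is refined.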
\begin{figure}
\caption{(a)-(b): Decay curves of the partial and complete sequences
$\gamma_k$ and $\sigma_{k+1}$.}
\label{figmild}
\end{figure}
Figure~\ref{figmild} (a)-(b) display
the decay curves of the partial and complete sequences $\gamma_k$ and
$\sigma_{k+1}$, respectively. We see that, different from severely and
moderately ill-posed problems, $\gamma_k$ does not decay so fast as
$\sigma_{k+1}$ and deviates from $\sigma_{k+1}$ significantly.
Recall that Theorem~\ref{main1} holds for mildly
ill-posed problems, where $\eta_k$ defined by \eqref{const2} is considerably
bigger than one. These observations justify our theory and confirm
that the rank $k$ approximations to $A$ generated by Lanczos
bidiagonalization are not as accurate as those for severely and moderately
ill-posed problems.
\subsection{A comparison of LSQR and the hybrid LSQR}
For the severely ill-posed $\mathsf{shaw}$, $\mathsf{wing}$
and the moderately ill-posed $\mathsf{heat}$, $\mathsf{phillips}$,
we compare the regularizing effects of LSQR and the hybrid LSQR with the TSVD
method applied to the projected problems after semi-convergence,
and demonstrate that they compute the same best possible regularized
solution for each problem and LSQR thus
has the full regularization. For the mildly ill-posed problem
$\mathsf{deriv2}$, we show that LSQR has only the partial
regularization and the hybrid LSQR can compute a best possible regularized
solution.
In the sequel, we report the results only for
the noise level $\varepsilon=10^{-3}$. Results for the other two
$\varepsilon$ are analogous and thus omitted unless stated otherwise.
We first have a close look at the severely and moderately ill-posed problems.
Figure~\ref{fig5a} (a)-(b) and Figure~\ref{fig6} (a)-(b) plot
the relative errors of regularized solutions obtained by the two
methods for $\mathsf{shaw,\ wing}$ and
$\mathsf{heat,\ phillips}$. Clearly, we see that for each problem the
relative errors reach the same minimum level. After semi-convergence of LSQR,
the TSVD method applied to projected problems simply stabilizes the
regularized solutions with the minimum error and does not improve
them. This means that LSQR has already found best possible regularized solutions
at semi-convergence and has the full regularization,
so that additional regularization applied to the projected problems
is unnecessary. In practice, we simply stop LSQR after its
semi-convergence for severely and moderately ill-posed problems.
For these four problems, for test purposes we choose
$x_{reg}=\arg\min _{k}\|x^{(k)}-x_{true}\|$ for LSQR, which are just
the iterates obtained by LSQR at semi-convergence.
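The choice $x_{reg}=\arg\min_k\|x^{(k)}-x_{true}\|$ can be mimicked as follows. This hypothetical Python sketch uses SciPy's `lsqr` (which, unlike our experiments, performs no reorthogonalization) on a toy ill-posed matrix rather than one of Hansen's problems; the stopping tolerances are disabled so that exactly $k$ iterations are run:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.exp(-1.5 * np.arange(n))) @ V.T   # toy ill-posed matrix
x_true = V @ np.exp(-0.5 * np.arange(n))             # satisfies a discrete Picard condition
b = A @ x_true
e = rng.standard_normal(n)
e *= 1e-3 * np.linalg.norm(b) / np.linalg.norm(e)    # relative noise level 10^{-3}
b = b + e

errors = []
for k in range(1, 16):
    # disable the stopping tests so that exactly k iterations are run
    xk = lsqr(A, b, atol=0.0, btol=0.0, conlim=0.0, iter_lim=k)[0]
    errors.append(np.linalg.norm(xk - x_true) / np.linalg.norm(x_true))
k_semi = int(np.argmin(errors)) + 1                  # iteration of semi-convergence
x_reg = lsqr(A, b, atol=0.0, btol=0.0, conlim=0.0, iter_lim=k_semi)[0]
```

Plotting `errors` against $k$ typically exhibits the semi-convergence described above: the error first decreases and then deteriorates once the noise is picked up.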
Figure~\ref{fig5a} (c)-(d) and Figure~\ref{fig6} (c)-(d) show that the
regularized solutions $x_{reg}$ are generally excellent approximations to the
true solutions $x_{true}$. The exception is the problem $\mathsf{wing}$, whose
underlying integral equation has a discontinuous solution, so that
the entries of the discrete true solution $x_{true}$ have big jumps,
as depicted in Figure~\ref{fig5a} (d). For it, the regularized solution
$x_{reg}$ deviates from $x_{true}$ considerably and the
relative error is not small. This is because all CG type methods applied to
either \eqref{eq1} or $A^TAx=A^Tb$ or $\min\|AA^Ty-b\|$ with $x=A^Ty$
compute smooth regularized solutions. More insightfully, LSQR and CGLS are
equivalent to implicitly solving the Tikhonov regularization problem
\eqref{tikhonov}, and their regularized solutions are
of the filtered form \eqref{eqfilter2}. It is well known that
the regularization term $\lambda^2\|x\|^2$ in Tikhonov regularization does not
suit for discontinuous solutions. The continuous ill-posed problems with
discontinuous or non-smooth solutions are from numerous important applications,
including linear regression, barcode reading, gravity surveying in geophysics,
image restoration and some others \cite{aster,hansen10,mueller}. For them, a
better alternative is to use the 1-norm $\lambda\|Lx\|_1$ as the regularization
term, which leads to the Total Variation Regularization
\cite{aster,engl00,hansen10,mueller,vogel02} or Errors-in-Variables Modeling
as it is called in \cite{huffel}, where $L\not=I$ is some
$p\times n$ matrix with no restriction on $p$ and is typically taken to be the
discrete approximation to the first or second derivative
operator \cite[Ch.8]{hansen10}.
\begin{figure}
\caption{(a)-(b): The relative errors $\|x^{(k)}-x_{true}\|/\|x_{true}\|$;
(c)-(d): the regularized solutions $x_{reg}$ and the true solutions $x_{true}$.}
\label{fig5a}
\end{figure}
\begin{figure}
\caption{(a)-(b): The relative errors $\|x^{(k)}-x_{true}\|/\|x_{true}\|$;
(c)-(d): the regularized solutions $x_{reg}$ and the true solutions $x_{true}$.}
\label{fig6}
\end{figure}
Now we investigate the behavior of LSQR and the hybrid LSQR for
$\mathsf{deriv2}$. Figure~\ref{figmild2} (a) indicates that
the relative errors of $x^{(k)}$ by the hybrid LSQR reach a considerably
smaller minimum level than those by LSQR, illustrating that LSQR
has only the partial regularization. Precisely, we find that the semi-convergence
of LSQR occurs at iteration $k=4$, but the regularized solution is not acceptable.
The hybrid LSQR uses a larger six dimensional Krylov subspace
$\mathcal{K}_6(A^TA,A^Tb)$ to construct a more accurate regularized solution.
We also choose $x_{reg}=\arg\min _{k}\|x^{(k)}-x_{true}\|$ for LSQR and the
hybrid LSQR, respectively. Figure~\ref{figmild2} (b) indicates that the
best regularized solution by the hybrid LSQR is a considerably better
approximation to $x_{true}$ than that by LSQR, especially in the
non-smooth middle part of $x_{true}$.
\begin{figure}
\caption{(a)-(b): The relative errors $\|x^{(k)}-x_{true}\|/\|x_{true}\|$ for LSQR
and the hybrid LSQR, and the best regularized solutions together with $x_{true}$.}
\label{figmild2}
\end{figure}
\subsection{Decay behavior of $\alpha_k$ and $\beta_{k+1}$}
For the severely ill-posed $\mathsf{shaw, wing}$ and the moderately
ill-posed $\mathsf{heat, phillips}$, we now
illustrate that $\alpha_k$ and $\beta_{k+1}$ decay as fast as the singular
values $\sigma_k$ of $A$. We take the noise level $\varepsilon=10^{-3}$. The results
are similar for $\varepsilon=10^{-2}$ and $10^{-4}$.
Figure~\ref{fig7} illustrates that both $\alpha_k$ and $\beta_{k+1}$
decay as fast as $\sigma_k$, and for $\mathsf{shaw}$ and
$\mathsf{wing}$ all of them decay swiftly and level off at
$\epsilon_{\rm mach}$ due to round-off errors
in finite precision arithmetic. Precisely, they reach
the level of $\epsilon_{\rm mach}$ at $k=22$ and $k=8$ for $\mathsf{shaw}$ and
$\mathsf{wing}$, respectively. Such decay behavior has also
been observed in \cite{bazan14,gazzola14,gazzola-online}, but no theoretical
support was given. These experiments confirm Theorem~\ref{main1} and
Theorem~\ref{main2}, which have proved that $\gamma_k$ decreases as fast as
$\sigma_{k+1}$ and that $\alpha_k$, $\beta_{k+1}$ and $\alpha_k+\beta_{k+1}$
decay as fast as $\sigma_k$.
\begin{figure}
\caption{(a)-(d): Decay curves of the sequences $\alpha_k$,
$\beta_{k+1}$ and $\sigma_k$.}
\label{fig7}
\end{figure}
\subsection{A comparison of LSQR and the TSVD method}
We compare the performance of LSQR and the TSVD method
for the severely ill-posed $\mathsf{shaw,\ wing}$ and moderately ill-posed
$\mathsf{heat,\ phillips}$. We take $\varepsilon=10^{-3}$. For each problem,
we compute the norms of regularized solutions, their relative errors and
the residual norms obtained by the two methods. We plot the L-curves of
the residual norms versus those of regularized solutions in the $\log$-$\log$
scale.
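The TSVD quantities entering this comparison are cheap to form once the SVD is available. A hypothetical Python/NumPy sketch on a toy ill-posed matrix (not one of Hansen's problems; the function name is ours):

```python
import numpy as np

def tsvd_solutions(A, b, kmax):
    """TSVD regularized solutions x_k = sum_{i<=k} (u_i^T b / sigma_i) v_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    c = (U.T @ b) / s
    return [Vt[:k].T @ c[:k] for k in range(1, kmax + 1)]

rng = np.random.default_rng(3)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(np.exp(-1.5 * np.arange(n))) @ V.T   # toy ill-posed matrix
x_true = V @ np.exp(-0.5 * np.arange(n))             # satisfies a discrete Picard condition
b = A @ x_true
e = rng.standard_normal(n)
e *= 1e-3 * np.linalg.norm(b) / np.linalg.norm(e)    # relative noise level 10^{-3}
b = b + e

xs = tsvd_solutions(A, b, 15)
res_norms = [np.linalg.norm(b - A @ x) for x in xs]   # L-curve abscissa
sol_norms = [np.linalg.norm(x) for x in xs]           # L-curve ordinate
errs = [np.linalg.norm(x - x_true) / np.linalg.norm(x_true) for x in xs]
k0 = int(np.argmin(errs)) + 1                         # best truncation level
```

For TSVD the residual norms decrease and the solution norms increase monotonically in $k$, which produces the characteristic L-shape in the $\log$-$\log$ plot, while the relative errors display semi-convergence in $k$.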
Figures~\ref{lsqrtsvd1}--\ref{lsqrtsvd2} indicate LSQR and the TSVD method
behave very similarly for $\mathsf{shaw}$ and $\mathsf{wing}$. They
illustrate that, for $\mathsf{wing}$, the norms of approximate
solutions and the relative errors by the two methods
are almost indistinguishable for the same $k$, and, for $\mathsf{shaw}$, the
residual norms by LSQR decrease more quickly than those by the TSVD method
for $k=1,2,3$ and then the two become almost identical from $k=4$ onwards.
The L-curves tell us that the two methods obtain the best regularized solutions
when $k_0=7$ and $k_0=3$ for $\mathsf{shaw}$ and $\mathsf{wing}$, respectively.
The values of $k_0$ determined by the L-curves are exactly the ones
at which semi-convergence occurs, as indicated by (b) and (c) in
Figures~\ref{lsqrtsvd1}--\ref{lsqrtsvd2}. These results demonstrate that
LSQR has the full regularization and resembles the TSVD method very much.
For each of $\mathsf{heat}$ and $\mathsf{phillips}$,
Figures~\ref{lsqrtsvd3}--\ref{lsqrtsvd4} demonstrate that
the best regularized solution obtained by LSQR is at
least as accurate as, in fact, a little bit more accurate than
that by the TSVD method, and the corresponding
residual norms decrease and drop to at least the same level as those
by the TSVD method. The residual norms by the two methods then stagnate after
the best regularized solutions are found. All these confirm that
LSQR has the full regularization. The fact that the best regularized solutions
by LSQR can be more accurate than the best TSVD solutions is not unusual.
We can explain why. Note that the true solutions $x(t)$ to the
integral equations that generate the problems $\mathsf{heat}$
and $\mathsf{phillips}$ are at least first order
differentiable. It is known that, in the infinite dimensional space setting,
for a linear compact operator equation $Kx=g$,
the TSVD method and the standard-form Tikhonov regularization method have been
shown to be order optimal only when the true solution
is continuous or first order differentiable, and they are not order optimal
for stronger smoothness assumptions on the true solution. In contrast,
CGLS is order optimal, and the smallest error of the iterates is of the
same order as the worst-case error for the arbitrarily smooth
true solution, that is, given the same noise level, the smoother the true
solution is, the more accurate the best regularized solution is. In other
words, for the smoother true solution, the best regularized solution by CGLS
is generally more accurate than the counterpart corresponding to
the continuous or first order differentiable true solution;
see, e.g., \cite[p.187-191]{engl00} and \cite[p.13,34-36,40]{kirsch}.
Consequently, for the discrete \eqref{eq1} resulting from such kind of
continuous compact linear equation, once
the mathematically equivalent LSQR has the full regularization,
its best regularized solution is at least as accurate as and
can be more accurate than the best regularized solution by the TSVD method
or the standard-form Tikhonov regularization method when the true
solution of a continuous compact linear operator
equation is smoother than only continuous or first order differentiable.
From the figures we observe some obvious differences
between moderately and severely ill-posed problems. For $\mathsf{heat}$,
it is seen that the relative errors and
residual norms converge considerably more quickly for the LSQR solutions
than for the TSVD solutions. Figure~\ref{lsqrtsvd3} (b) tells us that
LSQR only uses 12 iterations to find the best regularized
solution and the TSVD method finds the best regularized
solution for $k_0=21$, while the L-curve gives $k=13$ and
$k_0=18$, respectively. Similar differences are observed for
$\mathsf{phillips}$, where Figure~\ref{lsqrtsvd4} (b) indicates
that both LSQR and the TSVD method find the
best regularized solutions at $k_0=7$, while the L-curve shows
that $k_0=8$ for LSQR and $k_0=11$ for the TSVD method. Therefore,
unlike for severely ill-posed problems, the L-curve criterion is not
very reliable to determine correct $k_0$ for moderately
ill-posed problems.
We can observe more. Figure~\ref{lsqrtsvd3} shows that
the TSVD solutions improve little and their residual norms decrease
very slowly for the indices $i=4,5,11,12,18,19,20$. This implies that
the $v_i$ corresponding to these indices $i$ make very little contribution
to the TSVD solutions. This is due to the fact that the Fourier coefficients
$|u_i^T\hat{b}|$ are very small relative to $\sigma_i$ for these indices $i$.
Note that $\mathcal{K}_k(A^TA,A^Tb)$ adapts itself in
an optimal way to the specific right-hand side $b$, while the TSVD method uses
all $v_1,v_2,\ldots,v_k$ to construct a regularized solution,
independent of $b$. Therefore, $\mathcal{K}_k(A^TA,A^Tb)$ picks up only those
SVD components making major contributions to $x_{true}$, so that LSQR may use
fewer iterations than the $k_0$ needed by the TSVD method to capture
those truly needed
dominant SVD components. The fact that LSQR (CGLS) includes fewer SVD components
than the TSVD solution with almost the same accuracy was first noticed
by Hanke \cite{hanke01}.
Generally, for severely and moderately ill-posed problems,
we may deduce that LSQR uses possibly fewer than $k_0$ iterations to compute a
best possible regularized solution if, in practice, some of the
$|u_i^T b|$, $i=1,2,\ldots,k_0$, are considerably bigger than
the corresponding $\sigma_i$ while the others are considerably smaller.
For $\mathsf{phillips}$,
as noted by Hansen \cite[p.32, 123--125]{hansen10}, half of the SVD
components satisfy $u_i^T \hat{b}=v_i^Tx_{true}=0$ for $i$ even, so that only the
odd indexed $v_1,v_3,\ldots$ make contributions to $x_{true}$. This is why
the relative errors and residual norms of TSVD solutions
do not decrease at even indices before $x_{k_0}^{tsvd}$ is found.
\begin{figure}
\caption{Results for the severely ill-posed problem $\mathsf{shaw}$.}
\label{lsqrtsvd1}
\end{figure}
\begin{figure}
\caption{Results for the severely ill-posed problem $\mathsf{wing}$.}
\label{lsqrtsvd2}
\end{figure}
\begin{figure}
\caption{Results for the moderately ill-posed problem $\mathsf{heat}$.}
\label{lsqrtsvd3}
\end{figure}
\begin{figure}
\caption{Results for the moderately ill-posed problem $\mathsf{phillips}$.}
\label{lsqrtsvd4}
\end{figure}
\subsection{A comparison of LSQR and GMRES, RRGMRES}
GMRES applied to solving \eqref{eq1} with $A$ square computes the iterate
$$
x_k^{g}=\|b\|W_k \bar{H}_k^{\dagger}e_1^{(k+1)}
=W_k \bar{H}_k^{\dagger}W_{k+1}^Tb.
$$
The quantity
\begin{align}
\gamma_k^g&=\|A-W_{k+1}\bar{H}_kW_k^T\| \label{gmresgamma}
\end{align}
measures the accuracy of the rank $k$
approximation $W_{k+1}\bar{H}_kW_k^T$ to $A$, where the columns of
$W_k$ and $W_{k+1}$ are orthonormal bases of $\mathcal{K}_k(A,b)$ and
$\mathcal{K}_{k+1}(A,b)$, respectively,
generated by the Arnoldi process starting with $w_1=b/\|b\|$,
and $\bar{H}_k=W_{k+1}^TAW_k$ is the $(k+1)\times k$ upper Hessenberg
matrix. The size of $\gamma_k^g$ reflects the regularizing effects of
GMRES for solving \eqref{eq1}. We should point out that, different from
$\gamma_k$ defined by \eqref{gammak} for LSQR, which has been proved to
decrease monotonically as $k$ increases (cf. \eqref{gammamono}),
mathematically $\gamma_k^g$ has no monotonic property.
Similar to the LSQR iterates $x^{(k)}$ and $\gamma_k$, qualitatively speaking,
if $\gamma_k^g$ decays smoothly in some definitive manner,
then, to some extent, GMRES has regularizing effects;
if they do not decay at all or behave irregularly, then GMRES does not
have regularizing effects and fails to work for \eqref{eq1}.
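The quantity $\gamma_k^g$ is straightforward to evaluate once the Arnoldi factors are formed. A hypothetical Python/NumPy sketch on a toy non-normal matrix with prescribed singular values (names are ours, and the matrix is not one of the test problems):

```python
import numpy as np

def arnoldi(A, b, kmax):
    """Arnoldi process started with w1 = b/||b||; returns W_{k+1} and Hbar_k."""
    n = A.shape[0]
    W = np.zeros((n, kmax + 1))
    H = np.zeros((kmax + 1, kmax))
    W[:, 0] = b / np.linalg.norm(b)
    for k in range(kmax):
        w = A @ W[:, k]
        for i in range(k + 1):              # modified Gram-Schmidt
            H[i, k] = W[:, i] @ w
            w = w - H[i, k] * W[:, i]
        H[k + 1, k] = np.linalg.norm(w)
        W[:, k + 1] = w / H[k + 1, k]
    return W, H

rng = np.random.default_rng(4)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = np.exp(-1.0 * np.arange(n))
A = U @ np.diag(sigma) @ V.T                # generic non-normal ill-posed matrix
b = A @ np.ones(n)
k = 10
W, H = arnoldi(A, b, k)
gamma_g = np.linalg.norm(A - W[:, :k + 1] @ H[:k + 1, :k] @ W[:, :k].T, 2)
```

Since $AW_k=W_{k+1}\bar{H}_k$, we have $\gamma_k^g=\|A(I-W_kW_k^T)\|\leq\sigma_1$, while the Eckart--Young theorem gives $\gamma_k^g\geq\sigma_{k+1}$; for non-normal $A$ there is, in general, no mechanism forcing $\gamma_k^g$ toward the lower bound, in contrast to $\gamma_k$ for LSQR.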
We test GMRES on the general nonsymmetric $\mathsf{heat}$ and
the following Example 6, and compare it with LSQR.
{\bf Example 6}.
Consider the general nonsymmetric ill-posed problem $\mathsf{i\_laplace}$,
which is severely ill-posed and arises from inverse Laplace transformation.
It is obtained by discretizing the first kind Fredholm integral
equation \eqref{eq2} with $[0,\infty)$ as the domains of $s$ and $t$.
The kernel $k(s,t)$, the right-hand side $g(s)$ and the solution $x(t)$ are
given by
\begin{equation*}\label{}
k(s,t)=\exp(-st),\ \ g(s)=\frac{1}{s+1/2},\ \ x(t)=\exp(-t/2).
\end{equation*}
We investigate the regularizing effects of GMRES with $\varepsilon=10^{-3}$.
Let $H_k=(h_{i,j})\in \mathbb{R}^{k\times k}$ denote the upper
Hessenberg matrix obtained by the $k$-step Arnoldi process. We
observe that the $h_{k+1,k}$ decay quickly with $k$ increasing, generally faster
than $\sigma_k$; see Figure~\ref{fig8} (a)-(b).
This phenomenon may lead to the misconception that GMRES has
general regularizing effects. However, this is not the case. In fact,
a small $h_{k+1,k}$ exactly indicates that all the eigenvalues of $H_k$
may approximate some $k$ eigenvalues of $A$ well and the Arnoldi method finds an
approximate $k$-dimensional invariant subspace or eigenspace of $A$
\cite{jia01,saad}. We also refer to \cite{jia95,jia98} for a detailed
convergence analysis of the Arnoldi method. Unfortunately,
for a general nonsymmetric matrix $A$, a
small $h_{k+1,k}$ does not mean that the singular values, i.e., the Ritz values,
of $\bar{H}_k$ are also good approximations to some $k$ singular values of
$A$. As a matter of fact, as our analysis in Section \ref{rankapp} and
Section \ref{alphabeta} has indicated, the accuracy of the singular values
of a rank $k$ approximation matrix, here the singular values of $\bar{H}_k$,
as approximations to the $k$ large singular values of $A$ critically relies on
the size of $\gamma_k^g$ defined by \eqref{gmresgamma} rather than on $h_{k+1,k}$.
Indeed, as indicated by Figure~\ref{fig8} (c)-(d), though
$h_{k+1,k}$ is small, some of the singular values of $\bar{H}_k$ are
very poor approximations to singular values of $A$,
and some of those good approximations are much smaller than $\sigma_{k+1}$
and approximate the singular values of $A$ in disorder
rather than in natural order. It is important to note that, for a
general nonsymmetric or, more rigorously, non-normal $A$,
the $k$-dimensional Krylov subspace $\mathcal{K}_k(A,b)$ that
underlies the Arnoldi process mixes all the left and right singular vectors of
$A$, and the Arnoldi process generally fails to
extract the dominant SVD components and cannot generate a high quality rank
$k$ approximation to $A$, so that GMRES has no good regularizing effects.
\begin{figure}
\caption{(a)-(b): Decay curves of the sequences $h_{k+1,k}$ and $\sigma_k$;
(c)-(d): the singular values of $\bar{H}_k$ and those of $A$.}
\label{fig8}
\end{figure}
Figure~\ref{fig5} (a)-(b) gives more justifications.
We have a few important observations: For the two test problems,
the quantities $\gamma_k$ decay as fast as the $\sigma_{k+1}$ for LSQR,
while the $\gamma_k^g$ diverge quickly from the $\sigma_{k+1}$ for GMRES and
do not exhibit any regular decreasing tendency. For $\mathsf{i\_laplace}$,
the $\gamma_k^g$ decrease very slowly until $k=19$, then basically stabilize
for the next three iterations, and finally start to
increase from $k=22$ onwards. As for $\mathsf{heat}$, the
$\gamma_k^g$ are almost constant from beginning to end.
Since none of the $\gamma_k^g$ is small, they
indicate that the Arnoldi process cannot generate reasonable and meaningful
rank $k$ approximations to $A$ for $k=1,2,\ldots,30$. This is especially true
for $\mathsf{heat}$. Consequently, we are sure that GMRES
fails and does not have regularizing effects
for the two test problems.
We plot $\|x^{(k)}-x_{true}\|/\|x_{true}\|$
by LSQR and $\|x_k^{g}-x_{true}\|/\|x_{true}\|$ by GMRES in
Figure~\ref{fig5} (c)-(d). Obviously, LSQR exhibits semi-convergence,
but GMRES does not and the relative errors obtained by it even increase from
the beginning; see Figure~\ref{fig5} (d). This again demonstrates that
GMRES cannot provide meaningful regularized solutions for these two problems.
Let $x_{reg}=\arg\min _{k}\|x^{(k)}-x_{true}\|$. Figure~\ref{fig5} (e) and
(f) show that LSQR obtains excellent regularized
solutions, while GMRES fails. It is known that
MR-II \cite{fischer96} for $A$ symmetric and
RRGMRES \cite{calvetti00} for $A$ nonsymmetric work on the subspace
$\mathcal{K}_k(A,Ab)$. They were originally designed to solve singular or
inconsistent systems, restricted to
a subspace of the range of $A$, and compute the minimum-norm least squares
solutions when the ranges of $A$ and $A^T$ are identical. However,
for the preferred RRGMRES \cite{neuman12}, we have observed phenomena
similar to those for GMRES, illustrating that RRGMRES does not have
regularizing effects for the test problems.
From these typical experiments, we conclude that GMRES and
RRGMRES are susceptible to failure for general nonsymmetric ill-posed
problems and they are not general-purpose regularization methods.
In fact, as addressed in \cite[p.126]{hansen10} and \cite{jensen07},
GMRES and RRGMRES may only work well when either
the mixing of SVD components is weak or the Krylov basis vectors are
just well suited for the ill-posed problem.
\begin{figure}
\caption{(a)-(b): Decay curves of the sequences $\gamma_k$, $\gamma_k^g$,
denoted by $\gamma_k$-LSQR and $\gamma_k$-GMRES in the figure,
and $\sigma_{k+1}$; (c)-(d): the relative errors of the LSQR and GMRES
iterates; (e)-(f): the regularized solutions $x_{reg}$ and $x_{true}$.}
\label{fig5}
\end{figure}
\section{Conclusions}\label{concl}
For the large-scale ill-posed problem \eqref{eq1}, iterative solvers
are the only viable approaches. Among them, LSQR and CGLS are the most popular
general-purpose solvers, and CGME and LSMR are alternative choices.
They have general regularizing effects and exhibit semi-convergence.
However, if semi-convergence occurs before a solver
captures all the needed dominant SVD components, then the best possible
regularized solution has not yet been found and the solver has only
partial regularization. In this case, hybrid variants have often
been used to compute best possible regularized solutions.
If semi-convergence means that the solver has already found
a best possible regularized solution, it has full regularization,
and we simply stop it at semi-convergence.
We have considered the fundamental open question in depth: Do LSQR, CGLS,
LSMR and CGME have the full or partial regularization for severely, moderately
and mildly ill-posed problems? We have first considered the case that
all the singular values of $A$ are simple. As a key and indispensable step, we
have established accurate bounds for the 2-norm
distances between the underlying $k$ dimensional Krylov subspace and the
$k$ dimensional dominant right singular subspace for the three kinds of
ill-posed problems under consideration. Then we have
provided other absolutely necessary background and ingredients. Based on them,
we have proved that, for severely or moderately ill-posed problems
with $\rho>1$ or $\alpha>1$ suitably, LSQR has the full regularization.
Precisely, for $k\leq k_0$
we have proved that a $k$-step Lanczos bidiagonalization produces a near best
rank $k$ approximation of $A$ and the $k$ Ritz values approximate
the first $k$ large singular values of $A$ in natural order, and
no small Ritz value smaller than $\sigma_{k_0+1}$ appears before LSQR
captures all the needed dominant SVD components, so that
the noise $e$ in $b$ cannot deteriorate regularized
solutions until a best possible regularized solution has been found.
We have shown that LSQR resembles the TSVD method for these
two kinds of problems. For mildly ill-posed problems, we have proved
that LSQR generally has only the partial regularization
since a small Ritz value generally appears before all
the needed dominant SVD components are captured. Since CGLS is mathematically
equivalent to LSQR, our assertions on the full or partial regularization
of LSQR apply to CGLS as well.
We have derived bounds for the diagonals and subdiagonals of bidiagonal
matrices generated by Lanczos bidiagonalization. Particularly,
we have proved that they decay as fast as the singular values of $A$
for severely ill-posed problems or moderately ill-posed problems
with $\rho>1$ or $\alpha>1$ suitably and decay more slowly
than the singular values of $A$ for mildly ill-posed problems.
These bounds are of theoretical and practical importance: they
can be used to identify the degree of ill-posedness at no extra cost
and to decide whether LSQR has the full or only the partial regularization.
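The decay of the bidiagonal entries can be observed directly. The following sketch is a toy example of our own with $\sigma_k=0.65^k$, so the problem is severely ill-posed; it runs Lanczos bidiagonalization with full reorthogonalization and compares the diagonals $\alpha_k$ with the singular values $\sigma_k$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.65 ** np.arange(1, n + 1)     # severely ill-posed spectrum
A = U @ np.diag(sigma) @ V.T
b = A @ (V @ sigma ** 0.2)              # right-hand side satisfying the Picard condition

# Lanczos (Golub-Kahan) bidiagonalization with full reorthogonalization.
k_max = 20
P = np.zeros((n, k_max + 1))
Q = np.zeros((n, k_max))
alphas, betas = [], []
P[:, 0] = b / np.linalg.norm(b)
for j in range(k_max):
    r = A.T @ P[:, j] - (betas[-1] * Q[:, j - 1] if j > 0 else 0.0)
    r -= Q[:, :j] @ (Q[:, :j].T @ r)    # reorthogonalize against previous q's
    alphas.append(np.linalg.norm(r))
    Q[:, j] = r / alphas[-1]
    p = A @ Q[:, j] - alphas[-1] * P[:, j]
    p -= P[:, :j + 1] @ (P[:, :j + 1].T @ p)
    betas.append(np.linalg.norm(p))
    P[:, j + 1] = p / betas[-1]

# For this spectrum, alpha_k decays essentially as fast as sigma_k.
print(np.round(np.array(alphas) / sigma[:k_max], 2))
```

For a mildly ill-posed spectrum such as $\sigma_k=k^{-1}$ the same experiment shows the entries decaying noticeably more slowly than $\sigma_k$, in line with the bounds discussed above.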
Based on some of the results established for LSQR, we have derived accurate
estimates for the accuracy of the rank $k$ approximations to $A$ and
$A^TA$ that are involved in CGME and LSMR, respectively. We have analyzed the
behavior of the smallest singular values of the projected matrices associated
with CGME and LSMR. Using these results, we have shown that
LSMR has the full regularization for severely ill-posed problems and for
moderately ill-posed problems with $\rho>1$ or $\alpha>1$ suitably, and it generally has
only the partial regularization for mildly ill-posed problems. In the meantime,
we have shown that the regularization of CGME has indeterminacy
and is inferior to LSQR and LSMR for each of three kinds of ill-posed problems.
In addition, our results have indicated
that the rank $k$ approximations to $A$ generated by Lanczos
bidiagonalization are substantially more accurate than those obtained
by standard randomized algorithms \cite{halko11}
and the strong RRQR factorizations \cite{gu96}.
With a number of nontrivial modifications and reformulations,
we have shown how to extend all the results obtained
for LSQR, CGME and LSMR to the case that $A$ has multiple singular
values.
We have made detailed and illuminating numerical experiments and confirmed our
theory on LSQR. We have also compared LSQR with GMRES and RRGMRES,
showing that the latter two methods do not have general regularizing
effects and fail to deliver regularized solutions for general
nonsymmetric ill-posed problems. Theoretically, this is due to the fact that
GMRES and RRGMRES may work and have regularizing effects only
for (nearly) symmetric or, more generally,
(nearly) normal ill-posed problems, for which the left and right singular
vectors are (nearly) identical to the eigenvectors of $A$.
Our analysis approach can be adapted to MR-II for symmetric ill-posed problems,
and similar results and assertions
are expected for the three kinds of symmetric ill-posed problems.
Using an approach similar to that in \cite{huangjia},
the authors of \cite{huang15} have made an initial regularization analysis of
MR-II and derived the corresponding $\sin\Theta$ bounds,
which turn out to be substantial overestimates. Our approach is applicable
to the preconditioned CGLS (PCGLS) and LSQR (PLSQR) \cite{hansen98,hansen10}
by exploiting the transformation technique originally proposed
in \cite{bjorck79,elden82} and advocated in \cite{hanke92,hanke93,hansen07}
or the preconditioned MR-II \cite{hansen10,hansen06},
all of which correspond to a general-form Tikhonov regularization involving the
matrix pair $\{A,L\}$, in which the regularization term $\|x\|^2$ is replaced by
$\|Lx\|^2$ with some $p\times n$ matrix $L\not=I$. It
should also be applicable to the mathematically equivalent LSQR
variant \cite{kilmer07} that is based on a joint bidiagonalization of
the matrix pair $\{A,L\}$ that corresponds to the above general-form Tikhonov
regularization. In this setting, the Generalized SVD (GSVD)
of $\{A,L\}$ or the mathematically equivalent SVD of $AL_A^{\dagger}$
will replace the SVD of $A$ to play a central role in analysis, where
$L_A^{\dagger}=\left(I-\left(A(I-L^{\dagger}L)\right)^{\dagger}A\right)L^{\dagger}$
is called the {\em $A$-weighted generalized inverse of $L$}
and $L_A^{\dagger}=L^{-1}$ if $L$ is square and invertible;
see \cite[p.38-40,137-38]{hansen98} and \cite[p.177-183]{hansen10}.
Finally, we comment on hybrid Krylov iterative solvers and
make some remarks that, in our opinion, deserve particular attention.
Because of the lack of a complete regularization theory
for LSQR, in order to find a best possible regularized solution for a
given \eqref{eq1}, one has commonly been using some
hybrid LSQR variants without considering the degree of ill-posedness
of \eqref{eq1}; see, e.g., \cite{aster,hansen98,hansen10} and the related papers
mentioned in the introduction. The hybrid CGME \cite{hanke01} and
CGLS \cite{aster,hansen10} have also been used.
However, Bj\"{o}rck \cite{bjorck94} has noted that the hybrid LSQR variants
are mathematically complicated, and has pointed out that it is hard to find
reasonable regularization parameters and to decide reliably when to stop them.
For a hybrid LSQR variant, or more generally,
for any hybrid Krylov solver that first projects and then
regularizes \cite{hansen98,hansen10}, the situation is
more serious than has been realized. It has long been commonly accepted that
the approach of {\em ``first-regularize-then-project''}
is equivalent to the approach of {\em ``first-project-then-regularize''} and
that they produce the same solution; see Section 6.4 and Figure 6.10
of \cite{hansen10}. This equivalence seems natural. Unfortunately, they
are {\em not} equivalent when solving \eqref{eq1}. Their equivalence requires
the assumption that the same regularization
parameter $\lambda$ in Tikhonov regularization is {\em used}, so that
both of them solve the same problem and
compute the same regularized solution. However, as far as
regularization methods are concerned, the fundamental point is
that each of the two approaches {\em must determine} its own optimal
regularization parameter $\lambda$ which is {\em unknown} in advance.
Mathematically,
for the approach of ``first-regularize-then-project'', there is an optimal
$\lambda$ since \eqref{eq1} satisfies the Picard condition, though its
determination is generally costly and may not be computationally viable
for a large \eqref{eq1}. On the contrary, for the approach of
``first-project-then-regularize'', one must determine its optimal $\lambda$
for each projected problem, so one will have a sequence of optimal
$\lambda$'s. Whether or not they converge to the optimal regularization
parameter of \eqref{tikhonov} is unclear and lacks theoretical evidence.
For discrete regularization parameters in the TSVD method
for \eqref{eq1} and each of the projected problems, the situation is
similar. Unfortunately, for projected problems, their optimal
regularization parameters and their determination may encounter
insurmountable mathematical and numerical difficulties,
as we will clarify below.
As is well known, the Picard condition is an absolutely necessary condition
for the existence of a square integrable solution to a linear compact
operator equation; without it, regularization
would be out of the question; see, e.g., \cite{engl00,kirsch,mueller}. This is
also true for the discrete linear ill-posed problem, where
the discrete Picard condition means that $\|x_{true}\|\leq C$ uniformly with some
(not large) constant $C$ such that regularization is
useful to compute a meaningful approximation to it \cite{hansen98,hansen10}.
Nevertheless, to the best of our knowledge, the discrete Picard conditions
for projected problems arising from LSQR or any other Krylov iterative solver
have been paid little attention until very recently \cite{gazzola15}.
Unfortunately, a fatal problem is that {\em the discrete Picard conditions are
not necessarily satisfied for the projected problems}. In \cite{gazzola15},
taking $e=\mathbf{0}$, i.e., $b=\hat{b}$ {\em noise free}, the authors
have proved that the discrete Picard conditions are satisfied
or inherited for the projected problems
under the {\em absolutely necessary assumption} that the
$k$ Ritz values, i.e., the singular values of the projected matrix at
iteration $k$, approximate the $k$ large singular values of $A$ in natural
order; under this assumption, regularization makes sense and can be used to solve the projected problems.
However, as has been stated in \cite{hansen98,hansen10} and
highlighted in this paper, under such an assumption, Krylov solvers
themselves will find best possible regularized solutions at
semi-convergence, and there is no need to continue iterating and
regularize the projected problems at all, that is, no hybrid variant
is needed. On the other hand, if the
$k$ Ritz values do not approximate the $k$ large singular values of $A$
in natural order and at least one Ritz value smaller than $\sigma_{k_0+1}$
appears at some iteration $k\leq k_0$, the discrete Picard
conditions are {\em essentially not satisfied any longer} for the projected
problems starting from such $k$ onwards. If so, regularization applied to
projected problems is mathematically groundless and numerically may
lead to unavoidable failure.
We take LSQR as an example for a precise statement on the
discrete Picard conditions for projected problems. Recall that, in the
projected problem \eqref{yk}, the noisy right-hand side is
$\|b\|e_1^{(k+1)}=P_{k+1}^Tb$ with $b=\hat{b}+e$ and the noise-free right-hand
side is $P_{k+1}^T\hat{b}$. Then for
$k=1,2,\ldots,n-1$ and for $n$ arbitrarily large and $\sigma_n\rightarrow 0$
(cf. \cite{gazzola15}), the discrete Picard conditions for
the projected problems are
$$
{\sup}_{k,n}\|B_k^{\dagger}P_{k+1}^T\hat{b}\|\leq C
$$
uniformly with some constant $C$. Numerically, for a given $n$ and
$\sigma_n$ arbitrarily close to zero, once
$\|B_k^{\dagger}P_{k+1}^T\hat{b}\|$ is very large for some $k$, then
the discrete Picard condition actually fails for the corresponding
projected problem. In this case, it is hard to apply regularization to the
projected problem and to speak of its optimal regularization parameter,
which does not exist at all in the extreme case
that $\|B_k^{\dagger}P_{k+1}^T\hat{b}\|$ is
unbounded, which amounts to stating that $B_k$ has a
singular value arbitrarily close to zero. As a result,
any regularization applied to the projected problem works poorly. Indeed,
for {\sf phillips} and {\sf deriv2} of order $n=1,024$ and $10,240$,
we have observed that the hybrid LSQR exhibits considerably
{\em erratic} rather than smooth curves of the errors between the regularized
solutions and $x_{true}$ in the damping and stabilizing stage,
so that the hybrid LSQR cannot reliably obtain a best regularized
solution; see Figure~\ref{figerratic}. Actually,
the regularized solutions obtained by the hybrid LSQR after its stabilization
are considerably less
accurate than those by the pure LSQR itself. For {\sf deriv2} of order $n=1,024$,
similar phenomena have also been observed for the hybrid MINRES and
MR-II \cite{huang15}.
\begin{figure}
\caption{(a)-(b): The relative errors $\|x^{(k)}-x_{true}\|/\|x_{true}\|$.}
\label{figerratic}
\end{figure}
The above phenomena are exactly due to the actual failure of the discrete
Picard conditions for the projected problems,
because each of the projected matrices starts to have at least one
singular value {\em considerably} smaller than $\sigma_{k_0+1}$ from some
iteration $k\leq k_0$ onwards, and neither this singular value nor its
corresponding left and right Ritz vectors approximate any singular
triplet of $A$ well. A consequence of this
failure is that it is hard to reliably stop the hybrid variants at
the right iteration in order to ultimately find a best regularized solution.
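In the favorable noise-free case, by contrast, the boundedness of $\|B_k^{\dagger}P_{k+1}^T\hat{b}\|$ can be checked numerically: this quantity equals $\|y_k\|=\|x^{(k)}\|$, which increases monotonically towards $\|x_{true}\|$ for a consistent system. The sketch below is a toy verification of our own (a severely ill-posed problem satisfying the discrete Picard condition), not one of the paper's test problems:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
sigma = 0.65 ** np.arange(1, n + 1)
A = U @ np.diag(sigma) @ V.T
x_true = V @ sigma ** 0.2               # discrete Picard condition holds
b_hat = A @ x_true                      # noise-free right-hand side

# Golub-Kahan bidiagonalization with full reorthogonalization; B_k is the
# (k+1) x k lower bidiagonal projected matrix and P_{k+1}^T b_hat = beta_1 e_1.
k_max = 15
P = np.zeros((n, k_max + 1))
Q = np.zeros((n, k_max))
alphas, betas = [], []
beta1 = np.linalg.norm(b_hat)
P[:, 0] = b_hat / beta1
for j in range(k_max):
    r = A.T @ P[:, j] - (betas[-1] * Q[:, j - 1] if j > 0 else 0.0)
    r -= Q[:, :j] @ (Q[:, :j].T @ r)
    alphas.append(np.linalg.norm(r))
    Q[:, j] = r / alphas[-1]
    p = A @ Q[:, j] - alphas[-1] * P[:, j]
    p -= P[:, :j + 1] @ (P[:, :j + 1].T @ p)
    betas.append(np.linalg.norm(p))
    P[:, j + 1] = p / betas[-1]

norms = []
for k in range(1, k_max + 1):
    B = np.zeros((k + 1, k))
    for i in range(k):
        B[i, i] = alphas[i]
        B[i + 1, i] = betas[i]
    rhs = np.zeros(k + 1)
    rhs[0] = beta1                                  # P_{k+1}^T b_hat
    y = np.linalg.lstsq(B, rhs, rcond=None)[0]      # y_k = B_k^dagger P_{k+1}^T b_hat
    norms.append(float(np.linalg.norm(y)))

print(max(norms), np.linalg.norm(x_true))   # uniformly bounded by ||x_true||
```

With noise present and a small Ritz value appearing early, the same quantities blow up, which is the numerical signature of the failure discussed above.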
Therefore, for mildly ill-posed problems and for moderately ill-posed problems
with $\alpha>1$ not sufficiently large, it is appealing to seek other mathematically solid
and computationally viable variants of LSQR, LSMR and MR-II so that
best possible regularized solutions can be found.
\end{document}
\begin{document}
\thispagestyle{plain}\begin{center}{\LARGE Marginal Likelihood Estimation via Arrogance Sampling }\\
{\large By Benedict Escoto }\\
\end{center}
%\VignetteIndexEntry{Technical paper explaining and justifying technique}
\begin{abstract}
This paper describes a method for estimating the marginal likelihood
or Bayes factors of Bayesian models using non-parametric importance
sampling (``arrogance sampling''). This method can also be used to
compute the normalizing constant of probability distributions.
Because the required inputs are samples from the distribution to be
normalized and the scaled density at those samples, this method may
be a convenient replacement for the harmonic mean estimator. The
method has been implemented in the open source R package
\texttt{margLikArrogance}.
\end{abstract}
\section{Introduction}
When a Bayesian evaluates two competing models or theories, $T_1$ and
$T_2$, having observed a vector of observations $\boldsymbol x$, Bayes' Theorem
determines the posterior ratio of the models' probabilities:
\begin{equation} \label{bayes factor}
\frac{p(T_1|\boldsymbol x)}{p(T_2|\boldsymbol x)} = \frac{p(\boldsymbol x|T_1)}{p(\boldsymbol x|T_2)} \frac{p(T_1)}{p(T_2)}.
\end{equation}
\noindent The quantity $\frac{p(\boldsymbol x|T_1)}{p(\boldsymbol x|T_2)}$ is called a
\emph{Bayes factor} and the quantities $p(\boldsymbol x|T_1)$ and $p(\boldsymbol x|T_2)$
are called the theories' \emph{marginal likelihoods}.
The types of Bayesian models considered in this paper have a fixed
finite number of parameters, each with their own probability function.
If $\boldsymbol \theta$ are parameters for a model $T$, then
\begin{equation} \label{main integral}
p(\boldsymbol x|T) = \int p(\boldsymbol x|\boldsymbol \theta, T) p(\boldsymbol \theta|T) \, d\boldsymbol \theta
= \int p(\boldsymbol x \wedge \boldsymbol \theta|T) \, d\boldsymbol \theta
\end{equation}
\noindent Unfortunately, this integral is difficult to compute in
practice. The purpose of this paper is to describe one method for
estimating it.
Evaluating integral (\ref{main integral}) is sometimes called the
problem of computing normalizing constants. The following formula
shows how $p(\boldsymbol x|T)$ is a normalizing constant.
\begin{equation}
\label{norm constant}
p(\boldsymbol \theta|\boldsymbol x, T) = \frac{p(\boldsymbol \theta \wedge \boldsymbol x|T)}{p(\boldsymbol x|T)}
\end{equation}
\noindent Thus the marginal likelihood $p(\boldsymbol x|T)$ is also the
normalizing constant of the posterior parameter distribution
$p(\boldsymbol \theta|\boldsymbol x, T)$ assuming we are given the density $p(\boldsymbol \theta
\wedge \boldsymbol x|T)$ which is often easy to compute in Bayesian models.
Furthermore, Bayesian statisticians typically produce samples from the
posterior parameter distribution $p(\boldsymbol \theta|\boldsymbol x, T)$ even when not
concerned with theory choice. In this case, computing the marginal
likelihood is equivalent to computing the normalizing constant of a
distribution from which samples and the scaled density at these
samples are available. The method described in this paper takes this
approach.
\section{Review of Literature}
\label{lit review}
Given how basic (\ref{bayes factor}) is, it is perhaps surprising that
there is no easy and definitive way of applying it, even for simple
models. Furthermore, as the dimensionality and complexity of
probability distributions increase, the difficulty of approximation
also increases. The following three techniques for computing Bayes
factors or marginal likelihoods are important but will not be
mentioned further here.
\begin{enumerate}
\item Analytic asymptotic approximations such as Laplace's method,
see for instance Kass and Raftery (1995),
\item Bridge sampling/path sampling/thermodynamic integration
(Gelman and Meng, 1998), and
\item Chib's MCMC approximation (Chib, 1995; Chib and Jeliazkov, 2005).
\end{enumerate}
\noindent Kass and Raftery (1995) is a popular overview of the earlier
literature on Bayes factor computation. All these methods can be very
successful in the right circumstances, and can often handle problems
too complex for the method described here. However, the method of
this paper may still be useful due to its convenience.
The rest of section \ref{lit review} describes three approaches that
are relevant to this paper.
\subsection{Importance Sampling}
\label{imp sampling}
Importance sampling is a technique for reducing the variance of Monte
Carlo integration. This section will note some general facts; see
Owen and Zhou (1998) for more information.
Suppose we are trying to compute the (possibly multidimensional)
integral $I$ of a well-behaved function $f(\boldsymbol \theta)$. Then
\begin{equation*}
I = \int f(\boldsymbol \theta) \, d\boldsymbol \theta =
\int \frac{f(\boldsymbol \theta)}{g(\boldsymbol \theta)} g(\boldsymbol \theta) \, d\boldsymbol \theta
\end{equation*}
\noindent so if $g(\boldsymbol \theta)$ is a probability density function and
$\boldsymbol \theta_i$ are independent samples from it, then
\begin{equation}
\label{imp approx}
I = \mbox{E}_g[f(\boldsymbol \theta)/g(\boldsymbol \theta)] \approx \frac{1}{n} \sum_{i=1}^n \frac{f(\boldsymbol \theta_i)}{g(\boldsymbol \theta_i)} = I_n.
\end{equation}
\noindent $I_n$ is an unbiased approximation to $I$ and by the central
limit theorem will tend to a normal distribution. It has variance
\begin{equation}
\label{imp var}
\mbox{Var}[I_n] = \frac{1}{n} \int \left(\frac{f(\boldsymbol \theta)}{g(\boldsymbol \theta)} - I\right)^2 g(\boldsymbol \theta)\, d\boldsymbol \theta =
\frac{1}{n} \int \frac{(f(\boldsymbol \theta) - Ig(\boldsymbol \theta))^2}{g(\boldsymbol \theta)}\, d\boldsymbol \theta
\end{equation}
\noindent Sometimes $f$ is called the \emph{target} and $g$ is called
the \emph{proposal} distribution.
Assuming that $f$ is non-negative, the minimum variance (of zero!) is
achieved when $g = f / I$---in other words when $g$ is just the
normalized version of $f$. This cannot be done in practice because
normalizing $f$ requires knowing the quantity $I$ that we wanted to
approximate; however (\ref{imp var}) is still important because it
means that the more similar the proposal is to the target, the better
our estimator $I_n$ becomes. In particular, $f$ must go to 0 faster
than $g$ or the estimator will have infinite variance.
To summarize this section:
\begin{enumerate}
\item Importance sampling is a Monte Carlo integration technique
which evaluates the target using samples from a proposal
distribution.
\item The estimator is unbiased, normally distributed, and its
variance (if not 0 or infinity) decreases as $O(n^{-1})$ (using
big-$O$ notation).
\item The closer the proposal is to the target, the better the
estimator. The proposal also needs to have longer tails than the
target.
\end{enumerate}
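As a minimal concrete illustration of (\ref{imp approx}) (our own example, not from the references above), the normalizing constant $\int e^{-\theta^2/2}\,d\theta=\sqrt{2\pi}$ of an unnormalized Gaussian target can be estimated with a wider Gaussian proposal, so that the proposal has the longer tails required above:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

f = lambda t: np.exp(-t ** 2 / 2)     # unnormalized target; I = sqrt(2*pi)
s = 1.5                               # proposal N(0, s^2): longer tails than the target
g = lambda t: np.exp(-t ** 2 / (2 * s ** 2)) / (s * math.sqrt(2 * math.pi))

n = 200_000
theta = rng.normal(0.0, s, size=n)    # i.i.d. samples from the proposal
I_n = float(np.mean(f(theta) / g(theta)))

print(I_n)                            # close to sqrt(2*pi) ~ 2.5066
```

Shrinking $s$ below 1 reverses the tail relationship and the same estimator becomes erratic, which foreshadows the harmonic mean estimator's problem discussed below.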
\subsection{Nonparametric Importance Sampling}
A practical difficulty with importance sampling is choosing the
proposal distribution $g$. Not enough is known about $f$ to
choose an optimal distribution, and if a bad distribution is chosen
the result can have large or even infinite variance. One approach to
the selection of proposal $g$ is to use non-parametric techniques to
build $g$ from samples of $f$. I call this class of techniques
self-importance sampling, or \textbf{arrogance sampling} for short,
because they attempt to sample $f$ from itself without using any
external information. (And also isn't it a bit arrogant to try to
evaluate a complex, multidimensional integral using only the values at
a few points?) The method of this paper falls into this class and
particularly deserves the name because the target and proposal (when
they are both non-zero) have exactly the same values up to a
multiplicative constant.
Two papers which apply nonparametric importance sampling to the
problem of marginal likelihood computation (or computation of
normalizing constants) are Zhang (1996) and Neddermeyer (2009).
Although both authors apply their methods to more general situations,
here I will use the framework suggested by (\ref{norm constant}) and
assume that we can compute $p(\boldsymbol \theta \wedge \boldsymbol x|T)$ for arbitrary
$\boldsymbol \theta$ and also that we can sample from the posterior parameter
distribution $p(\boldsymbol \theta|\boldsymbol x, T)$. The goal is to estimate the
normalizing constant, the marginal likelihood $p(\boldsymbol x|T)$.
Zhang's approach is to build the proposal $g$ using traditional kernel
density estimation. $m$ samples are first drawn from $p(\boldsymbol \theta|\boldsymbol x,
T)$ and used to construct $g$. Then $n$ samples are drawn from $g$
and used to evaluate $p(\boldsymbol x|T)$ as in traditional importance sampling.
This approach is quite intuitive because kernel estimation is a
popular way of approximating an unknown function. Zhang proves that
the variance of his estimator decreases as $O(m^\frac{-4}{4+d}n^{-1})$
where $d$ is the dimensionality of $\boldsymbol \theta$, compared to $O(n^{-1})$
for standard (parametric) importance sampling.
There were, however, a few issues with Zhang's method:
\begin{enumerate}
\item A kernel density estimate is equal to 0 at points far from the
points the kernel estimator was built on. This is a problem
because importance sampling requires the proposal to have longer
tails than the target. This fact forces Zhang to make the
restrictive assumption that $p(\boldsymbol \theta|\boldsymbol x,T)$ has compact
support.
\item It is hard to compute the optimal kernel bandwidth. Zhang
recommends using a plug-in estimator because the function
$p(\boldsymbol \theta \wedge \boldsymbol x|T)$ is available, which is unusual for
kernel estimation problems. Still, bandwidth selection appears to
require significant additional analysis.
\item Finally, although the variance may decrease as
$O(m^\frac{-4}{4+d}n^{-1})$ as $m$ increases, the difficulty of
computing $g(\boldsymbol \theta)$ also increases with $m$, because it
requires searching through the $m$ basis points to find all the
points close to $\boldsymbol \theta$. In multiple dimensions, this problem
is not trivial and may outweigh the $O(m^\frac{-4}{4+d})$ speedup
(in the worst case, practical evaluation of $g(\boldsymbol \theta)$ at a
single point may be $O(m)$). See Zlochin and Baram (2002) for
some discussion of these issues.
\end{enumerate}
\noindent Neddermeyer (2009) uses a similar approach to Zhang and also
achieves a variance of $O(m^\frac{-4}{4+d}n^{-1})$. It improves on
Zhang's approach in two ways relevant to this paper:
\begin{enumerate}
\item The support of $p(\boldsymbol \theta|\boldsymbol x,T)$ is not required to be
compact.
\item Instead of kernel density estimators, linear blend
frequency polynomials (LBFPs) are used. LBFPs are
basically histograms whose density is interpolated between
adjacent bins. As a result, the computation of $g(\boldsymbol \theta)$
requires only finding which bin $\boldsymbol \theta$ is in, and looking up
the histogram value at that and adjacent bins ($2^d$ bins in
total).
\end{enumerate}
As we will see in section \ref{my technique}, the arrogance sampling
described in this paper is similar to the methods of Zhang and
Neddermeyer.
\subsection{Harmonic Mean Estimator}
The harmonic mean estimator is a simple and notorious method for
calculating marginal likelihoods. It is a kind of importance
sampling, except the proposal $g$ is actually the distribution
$p(\boldsymbol \theta|\boldsymbol x, T) = p(\boldsymbol \theta \wedge \boldsymbol x|T) / p(\boldsymbol x|T)$ to be
normalized and the target $f$ is the known distribution
$p(\boldsymbol \theta|T)$. Then if $\boldsymbol \theta_i$ are samples from
$p(\boldsymbol \theta|\boldsymbol x,T)$, we apparently have
\begin{equation*}
1 \approx \frac{1}{n} \sum_{i=1}^{n}
\frac{p(\boldsymbol \theta_i|T)}{p(\boldsymbol \theta_i|\boldsymbol x, T)} = \frac{1}{n} \sum_{i=1}^{n}
\frac{p(\boldsymbol \theta_i|T)}{p(\boldsymbol x|\boldsymbol \theta_i, T)p(\boldsymbol \theta_i|T) /
p(\boldsymbol x|T)}
= \frac{1}{n} \sum_{i=1}^{n} \frac{1}{p(\boldsymbol x|\boldsymbol \theta_i, T) / p(\boldsymbol x|T)}
\end{equation*}
\noindent hence
\begin{equation} \label{hme}
p(\boldsymbol x|T) \stackrel{?}{\approx} \left(\frac{1}{n} \sum_{i=1}^{n}
\frac{1}{p(\boldsymbol x|\boldsymbol \theta_i, T)} \right)^{-1}
\end{equation}
Two advantages of the harmonic mean estimator are that it is simple to
compute and only depends on samples from $p(\boldsymbol \theta|\boldsymbol x, T)$ and the
likelihood $p(\boldsymbol x|\boldsymbol \theta,T)$ at those samples. The main drawback of
the harmonic mean estimator is that it doesn't work---as mentioned
earlier the importance sampling proposal distribution needs to have
longer tails than the target. In this case the target $p(\boldsymbol \theta|T)$
typically has longer tails than the proposal $p(\boldsymbol \theta|\boldsymbol x, T)$ and
thus (\ref{hme}) has infinite variance. Despite not working, the
harmonic mean estimator continues to be popular (Neal, 2008).
\section{Description of Technique}
\label{my technique}
This paper's arrogance sampling technique is a simple method that
applies the nonparametric importance sampling techniques of Zhang and
Neddermeyer in an attempt to develop a method almost as convenient as
the harmonic mean estimator.
The only required inputs are samples $\boldsymbol \theta_i$ from $p(\boldsymbol \theta|\boldsymbol x,
T)$ and the values $p(\boldsymbol \theta_i \wedge \boldsymbol x|T) =
p(x|\boldsymbol \theta_i,T)p(\boldsymbol \theta_i|T)$. This is similar to the harmonic mean
estimator, but perhaps slightly less convenient because $p(\boldsymbol \theta_i
\wedge \boldsymbol x|T)$ is required instead of $p(\boldsymbol x|\boldsymbol \theta_i,T)$.
There are two basic steps:
\begin{enumerate}
\item Take $m$ samples from $p(\boldsymbol \theta|\boldsymbol x, T)$ and using
modified histogram density estimation, construct probability
density function $f(\boldsymbol \theta)$.
\item With $n$ more samples from $p(\boldsymbol \theta|\boldsymbol x, T)$, estimate
$1/p(\boldsymbol x|T)$ via importance sampling with target $f$ and proposal
$p(\boldsymbol \theta|\boldsymbol x, T)$.
\end{enumerate}
\noindent These steps are described in more detail below.
\subsection{Construction of the Histogram}
Of the $N$ total samples $\boldsymbol \theta_i$ from $p(\boldsymbol \theta|\boldsymbol x, T)$, the
first $m$ will be used to make a histogram. The optimal choice of $m$
will be discussed below, but in practice it seems difficult to
determine; an arbitrary rule such as $m = \min(0.2 N, 2 \sqrt{N})$ can
be used.
With a traditional histogram, the only available information is the
location of the sampled points. In this case we also know the
(scaled) heights $p(\boldsymbol \theta \wedge \boldsymbol x|T)$ at each sampled point. We
can use this extra information to improve the fit.
Our ``arrogant'' histogram $f$ is constructed in the same way as a regular
histogram, except that the bin heights are determined not by the number of
points in each bin, but rather by the minimum density over all points
in the bin. If a bin contains no sampled points, then $f(\boldsymbol \theta) =
0$ for $\boldsymbol \theta$ in that bin. Then $f$ is normalized so that $\int
f(\boldsymbol \theta) \,d\boldsymbol \theta = 1$.
To determine our bin width, we can simply and somewhat arbitrarily set
our bin width $h$ so that the histogram is positive for 50\% of the
sampled points from the distribution $p(\boldsymbol \theta|\boldsymbol x, T)$. To
approximate $h$, we can use a small number of samples (say, 40) from
$p(\boldsymbol \theta|\boldsymbol x, T)$ and set $h$ so that $f(\boldsymbol \theta) > 0$ for exactly
half of these samples.
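A minimal sketch of this construction and of the $50\%$ bin-width rule, for a one-dimensional standard normal target of our own choosing (the scaled density here is $e^{-\theta^2/2}$), is:

```python
import numpy as np

rng = np.random.default_rng(0)
q = lambda t: np.exp(-t ** 2 / 2)      # scaled density, known only up to a constant

m = 2000
pts = rng.normal(size=m)               # samples from the distribution to normalize
tune = rng.normal(size=40)             # small tuning set for the bin-width rule

def build_hist(h):
    # Arrogant histogram: bin height = min scaled density over sampled points in the bin.
    heights = {}
    for b, qv in zip(np.floor(pts / h).astype(int), q(pts)):
        heights[b] = min(heights.get(b, np.inf), qv)
    H = 1.0 / (h * sum(heights.values()))   # normalize so that f integrates to 1
    return {b: H * v for b, v in heights.items()}

def f_val(hist, h, t):
    return hist.get(int(np.floor(t / h)), 0.0)

# Scan bin widths; keep the smallest h for which f > 0 on at least
# half of the tuning samples (the 50% rule described above).
for h in np.geomspace(1e-4, 1.0, 200):
    hist = build_hist(h)
    frac = np.mean([f_val(hist, h, t) > 0 for t in tune])
    if frac >= 0.5:
        break

print(h, frac)
```

The dictionary keyed by bin index plays the role of the sparse histogram; looking up $f(\boldsymbol\theta)$ costs a single hash lookup, independent of $m$.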
Figure \ref{histograms} compares the traditional and new histograms
for a one dimensional normal distribution based on 50 samples. The
green rug lines indicate the $50$ sampled points, which are the same
for both histograms. The arrogant histogram's bin width is chosen as above. The
traditional histogram's optimal bin width was determined by Scott's
rule to minimize mean squared error. As the figure shows, the
modified histogram is much smoother for a given bin width, so a
smaller bin width can be used. On the other hand, $f$ will either
equal 0 or have about twice the original density at each point, while
the traditional histogram's density is numerically close to the
original density.
\begin{figure}
\caption{Histogram Comparison}
\label{histograms}
\end{figure}
\subsection{Importance Sampling}
The remaining $n = N - m - 40$ sampled points can be used for
importance sampling. Using equation (\ref{imp approx}) with histogram $f$ as
our target and $p(\boldsymbol \theta|\boldsymbol x, T)$ as the proposal, we have
\begin{equation*}
1 \approx I_n = \frac{1}{n} \sum_{i=1}^n \frac{f(\boldsymbol \theta_i)}{p(\boldsymbol \theta_i|\boldsymbol x, T)}
= \frac{1}{n} \sum_{i=1}^n \frac{f(\boldsymbol \theta_i)}{p(\boldsymbol \theta_i \wedge \boldsymbol x|T) / p(\boldsymbol x|T)}
\end{equation*}
\noindent hence
\begin{equation} \label{arrogance}
p(\boldsymbol x|T) \approx p(\boldsymbol x|T) / I_n = \left( \frac{1}{n} \sum_{i=1}^n
\frac{f(\boldsymbol \theta_i)}{p(\boldsymbol \theta_i \wedge \boldsymbol x|T)} \right)^{-1} = A_n
\end{equation}
\noindent To underscore the self-important/arrogant nature of
this approximation $A_n$, we can rewrite (\ref{arrogance}) as
\begin{equation*}
p(\boldsymbol x|T) \approx H \left( \frac{1}{n} \sum_{i=1}^n
\frac{\mbox{min}\{p(\boldsymbol \theta_j \wedge \boldsymbol x|T): \boldsymbol \theta_j \mbox{ and } \boldsymbol \theta_i \mbox{ are in the same bin}\}}{p(\boldsymbol \theta_i \wedge \boldsymbol x|T)} \right)^{-1}
\end{equation*}
\noindent where $H$ is the histogram normalizing constant. This
equation shows that all the values in the numerator and the
denominator of our importance sampling are from the same distribution
$p(\boldsymbol \theta \wedge \boldsymbol x|T)$.
Note that the histogram $f$ is the target of the importance sampling
and $p(\boldsymbol \theta \wedge \boldsymbol x|T)$ is the proposal. This is backwards from
the usual scheme where the unknown distribution is the target and the
known distribution is the proposal. Instead here the unknown
distribution is the proposal, as in the harmonic mean estimator (see
Robert and Wraith (2009) for another example of this).
As in section \ref{imp sampling}, our approximation of $p(\boldsymbol x|T)^{-1}$
tends to a normal distribution as $n \to \infty$ by the central limit
theorem. This fact can be used to estimate a confidence interval
around $p(\boldsymbol x|T)$.
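To make the scheme concrete, here is a minimal Python sketch of the estimator $A_n$ from (\ref{arrogance}) on a toy conjugate model where the marginal likelihood is known in closed form. The model, sample sizes, and bin count are illustrative assumptions, and this is a sketch of the idea, not the \texttt{margLikArrogance} implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical conjugate model (all names are illustrative):
# prior theta ~ N(0,1), likelihood x|theta ~ N(theta,1), one observation x = 1.
x = 1.0
def joint(theta):  # p(theta ^ x | T), known in closed form here
    return np.exp(-0.5 * theta**2 - 0.5 * (x - theta)**2) / (2 * np.pi)

# Exact posterior N(x/2, 1/2) and exact marginal likelihood N(x; 0, 2).
true_ml = np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)

m, n = 20000, 20000
samples = rng.normal(x / 2, np.sqrt(0.5), size=m + n)
theta_hist, theta_imp = samples[:m], samples[m:]

# Normalized histogram f built from the first m posterior samples.
counts, edges = np.histogram(theta_hist, bins=60)
heights = counts / (m * (edges[1] - edges[0]))  # density; integrates to 1

def f(theta):
    idx = np.searchsorted(edges, theta, side="right") - 1
    ok = (idx >= 0) & (idx < len(heights))
    out = np.zeros_like(theta)
    out[ok] = heights[idx[ok]]
    return out

# Arrogance estimate: A_n = [ (1/n) sum_i f(theta_i) / p(theta_i ^ x | T) ]^{-1}
A_n = 1.0 / np.mean(f(theta_imp) / joint(theta_imp))
print(A_n, true_ml)  # A_n should be close to the exact marginal likelihood
```

Because the histogram integrates to 1 and is supported where the posterior is positive, $A_n$ is consistent for $p(\boldsymbol x|T)$ for any bin width; the bin width only affects the variance.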
\section{Validity of Method}
This section will investigate the performance of the method. First,
note that this method is just an implementation of importance
sampling, so $A_n^{-1}$ should converge to $p(\boldsymbol x|T)^{-1}$ with finite
variance as long as the proposal density $p(\boldsymbol \theta|\boldsymbol x, T)$ exists
and is finite and positive on the compact region where the target
histogram density is positive.
To calculate the speed of convergence we will use equation (\ref{imp
var}) where $f$ is the histogram, $g(\boldsymbol \theta) = p(\boldsymbol \theta|\boldsymbol x, T)$,
and $I = 1$ because the histogram has been normalized. Unless
otherwise noted, we will assume below that $g: \mathbb{R}^d
\rightarrow \mathbb{R}$ is finite, twice differentiable and positive,
and that $\int \frac{\norm{\nabla g(\boldsymbol \theta)}^2}{g(\boldsymbol \theta)}
d\boldsymbol \theta$ is finite.
\subsection{Histogram Bin Width}
One important issue will be how quickly the $d$-dimensional
histogram's selected bin width $h$ goes to 0 as the number of samples
$m \rightarrow \infty$. This section will only offer an intuitive
argument. For any $m$, the histogram will enclose about the same
probability ($\frac{1}{2}$) and will have about the same average
density in a fixed region. Each bin has volume $h^d$, so if $l$ is
the number of bins then $lh^d = O(1)$ and $h \propto l^{-1/d}$.
Furthermore, the distribution of the sampled points converges to the
actual distribution $g(\boldsymbol \theta)$. If $m > O(l)$, an unbounded number
of sampled points would end up in each bin. If $m < O(l)$, then some
bins would have no points in them. Neither of these is possible
because exactly one sampled point is necessary to establish each bin.
Thus $m \propto l$ and $h \propto m^{-1/d}$.
\subsection{Conditional Variance}
Before estimating the convergence rate of $A_n$ we will prove
something about the conditional variance of importance sampling. Let
$A = \{\boldsymbol \theta: f(\boldsymbol \theta) > 0\}$, $\mathbf{1}_A$ be the characteristic
function of $A$, and $q = \int_A g(\boldsymbol \theta) \,d\boldsymbol \theta$. Define
\begin{equation*}
g_A(\boldsymbol \theta) = \left\{
\begin{array}{cl}
g(\boldsymbol \theta)/q & \mbox{ if } \boldsymbol \theta \in A\\
0 & \mbox{ otherwise }
\end{array} \right.
\end{equation*}
\noindent Then $g_A$ is the density of $g$ conditional on $f > 0$.
Define $\mbox{Var}_A$ and $\mbox{E}_A$ to mean the variance and
expectation conditional on $f(\boldsymbol \theta) > 0$. Thus
\begin{eqnarray*}
\mbox{Var}(f(\boldsymbol \theta)/g(\boldsymbol \theta)) &=& \mbox{Var}(\mbox{E}(f(\boldsymbol \theta)
/ g(\boldsymbol \theta) | \mathbf{1}_A)) + \mbox{E}(\mbox{Var}(f(\boldsymbol \theta) / g(\boldsymbol \theta) | \mathbf{1}_A))\\
&=& \mbox{Var}\left(
\begin{array}{cl}
\mbox{E}_A(f(\boldsymbol \theta) / g(\boldsymbol \theta)) & \mbox{ if } \boldsymbol \theta \in A\\
0 & \mbox{ otherwise}\\
\end{array}\right)\\
&& + \,\mbox{E}\left(
\begin{array}{cl}
\mbox{Var}_A(f(\boldsymbol \theta) / g(\boldsymbol \theta)) & \mbox{ if } \boldsymbol \theta \in A\\
0 & \mbox{ otherwise}\\
\end{array}\right)\\
&=& \mbox{Var}\left(
\begin{array}{cl}
1/q & \mbox{ if } \boldsymbol \theta \in A\\
0 & \mbox{ otherwise}\\
\end{array}\right) + q\mbox{Var}_A(f(\boldsymbol \theta) / g(\boldsymbol \theta))\\
&=& (1/q)^2q(1-q) + \frac{1}{q}\mbox{Var}_A(q f(\boldsymbol \theta) / g(\boldsymbol \theta))\\
&=& \frac{1-q}{q} + \frac{1}{q}\mbox{Var}_A(f(\boldsymbol \theta) / g_A(\boldsymbol \theta))
\end{eqnarray*}
We will assume below that $q=\frac{1}{2}$, so that
\begin{equation}
\label{I var}
\mbox{Var}(f(\boldsymbol \theta)/g(\boldsymbol \theta)) = 1 + 2 \mbox{Var}_A(f(\boldsymbol \theta) /
g_A(\boldsymbol \theta))
\end{equation}
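The decomposition above can be checked numerically. In the sketch below (an illustrative example, not taken from the paper) $g$ is standard normal, $A=(0,\infty)$ so that $q=\frac{1}{2}$, and $f$ is the uniform density on $(0,1)$, which is supported inside $A$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative check: g = N(0,1), A = (0, inf) so q = 1/2,
# f = uniform density on (0,1), supported inside A.
theta = rng.normal(size=400_000)
g = np.exp(-0.5 * theta**2) / np.sqrt(2 * np.pi)
f = ((theta > 0) & (theta < 1)).astype(float)  # density of U(0,1)

lhs = np.var(f / g)                 # Var(f/g) under g

inA = theta > 0
gA = 2 * g[inA]                     # g conditioned on A (division by q = 1/2)
var_A = np.var(f[inA] / gA)         # Var_A(f/g_A)
rhs = 1 + 2 * var_A                 # (1-q)/q + (1/q) Var_A  with q = 1/2

print(lhs, rhs)  # the two sides agree up to Monte Carlo error
```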
\subsection{Importance Sampling Convergence}
With $f$, $g$, and $A$ as defined above, $f$ and $g_A$ have the same
domain. Assuming errors in estimating $q$ and normalization errors
are of a lesser order of magnitude, we can treat the histogram heights
as being sampled from $g_A$. Suppose the histogram has $l$ bins
$\{B_j\}$, each with width $h$ and based around the points
$g_A(\boldsymbol \theta_j)$. Then by equation (\ref{imp var}),
\begin{eqnarray*}
\mbox{Var}_A(f(\boldsymbol \theta) / g_A(\boldsymbol \theta)) &=&
\sum_{j=1}^l \int_{B_j} \frac{(f(\boldsymbol \theta) - g_A(\boldsymbol \theta))^2}{g_A(\boldsymbol \theta)}d\boldsymbol \theta\\
&=& \sum_{j=1}^l \int_{B_j} \frac{(g_A(\boldsymbol \theta) + \nabla g_A(\boldsymbol \theta)\cdot(\boldsymbol \theta_j - \boldsymbol \theta) + O((\boldsymbol \theta_j - \boldsymbol \theta)^2) - g_A(\boldsymbol \theta))^2}{g_A(\boldsymbol \theta)}d\boldsymbol \theta\\
&=& \sum_{j=1}^l \int_{B_j} \frac{(\nabla g_A(\boldsymbol \theta)\cdot(\boldsymbol \theta_j - \boldsymbol \theta))^2 + O((\boldsymbol \theta_j - \boldsymbol \theta)^3)}{g_A(\boldsymbol \theta)}d\boldsymbol \theta\\
&\leq& \sum_{j=1}^l \int_{B_j} \frac{\norm{\nabla g_A(\boldsymbol \theta)}^2 h^2}{g_A(\boldsymbol \theta)} d\boldsymbol \theta\\
&=& h^2 \int \frac{\norm{\nabla g_A(\boldsymbol \theta)}^2}{g_A(\boldsymbol \theta)} d\boldsymbol \theta \label{h var}
\end{eqnarray*}
Because $h \propto m^{-1/d}$ where $d$ is the number of dimensions, and
$m$ is the number of samples used to make the histogram,
\begin{equation*}
\mbox{Var}_A(f(\boldsymbol \theta) / g_A(\boldsymbol \theta)) \leq Cm^{-2/d}\\
\end{equation*}
\noindent where $C \propto \int \frac{\norm{\nabla
g_A(\boldsymbol \theta)}^2}{g_A(\boldsymbol \theta)} d\boldsymbol \theta$. Putting this together
with (\ref{I var}), we get
\begin{equation}
\label{final variance}
\mbox{Var}(I_n) = \mbox{Var}(p(\boldsymbol x|T) / A_n) = n^{-1}(1+O(Cm^{-2/d}))
\end{equation}
\section{Implementation Issues}
\subsection{Speed of Convergence}
The variance of $n^{-1}(1+O(Cm^{-2/d}))$ given by (\ref{final
variance}) is asymptotically equal to $n^{-1}$, which is the typical
importance sampling rate. In practice however, the asymptotic results
cannot distinguish useful from impractical estimators. If $Cm^{-2/d}$
is small and $\mbox{Var}(p(\boldsymbol x|T) / A_n) \approx n^{-1}$, then
$p(\boldsymbol x|T)$ can be approximated in only 1000 samples to about $6\% =
\frac{1.96}{\sqrt{1000}}$ with 95\% confidence. For many theory
choice purposes, this is quite sufficient. In problematic cases,
however, the factor $Cm^{-2/d}$ dominates the variance. If
$Cm^{-2/d} \gg 1$, then the convergence rate may in practice be
similar to $n^{-1}m^{-2/d}$. Compare this to the rate of
$n^{-1}m^{-4/(4+d)}$ for the methods proposed by Zhang and
Neddermeyer.
This method also uses simple histograms, instead of a more
sophisticated density estimation method (Zhang uses kernel
estimation, Neddermeyer uses linear blend frequency polynomials).
Although simple histograms converge slower for large $d$ as shown
above, they are much faster to compute for large $d$.
Neddermeyer's LBFP algorithm is quite efficient compared to Zhang's,
but its running time is $O(2^dd^2n^\frac{d+5}{d+4})$. $d$ is a
constant for any fixed problem, but if, say, $d=10$, then the
dimensionality constant multiplies the running time by $2^{10}10^2
\approx 10^5$.
By contrast, this paper's method takes only $O(dm\log(m))$ time
to construct the initial histogram, and an additional
$O(dn\log(m))$ time to do the importance sampling. The main
reason for the difference is that querying a simple histogram can be
done in $\log(m)$ time by computing the bin coordinates and
looking up the bin's height in a tree structure. However, querying an
LBFP requires blending all nearby bins and is thus exponential in $d$.
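A hash map achieves the same effect as the tree lookup described above. The sketch below (illustrative, not the package's data structure) stores a sparse $d$-dimensional histogram keyed by integer bin coordinates, so a query costs $O(d)$ coordinate arithmetic plus one lookup:

```python
import numpy as np

def build_sparse_hist(samples, h):
    """Sparse d-dimensional histogram: bin coordinates -> density height.

    Uses a hash map rather than the tree described in the text, so a query
    is O(d) bin-coordinate arithmetic plus one lookup. (Illustrative sketch.)
    """
    m, d = samples.shape
    hist = {}
    for row in samples:
        key = tuple((row // h).astype(int))
        hist[key] = hist.get(key, 0) + 1
    norm = m * h**d
    return {k: c / norm for k, c in hist.items()}

def query(hist, h, point):
    return hist.get(tuple((point // h).astype(int)), 0.0)

rng = np.random.default_rng(3)
pts = rng.normal(size=(5000, 2))
hist = build_sparse_hist(pts, h=0.5)
# heights are densities: sum(height * h^d) recovers total mass 1
total = sum(hist.values()) * 0.5**2
print(total)  # sums to ~1
```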
\subsection{When $g = 0$}
Our discussion assumed that $g(\boldsymbol \theta) = p(\boldsymbol \theta|\boldsymbol x, T)$ was
always positive. If $g$ goes to 0 where the histogram is positive,
the variance of $A_n^{-1}$ will be infinite. However, this paper's method
can still be used if $g(\boldsymbol \theta)$ is 0 over some well-defined area.
For instance, suppose one dimension $\theta_k$ of $p(\boldsymbol \theta|T)$ is
defined by a gamma distribution, so that $p(\theta_k|T) = 0$ if and
only if $\theta_k \leq 0$. Then we can ensure the variance is not
infinite by checking that the histogram is only defined where
$\theta_k > \epsilon > 0$ for some fixed $\epsilon$.
The \texttt{margLikArrogance} package contains a simple mechanism to
do this. The user may specify a range along each dimension of
$\boldsymbol \theta$ where it is known that $g > 0$. If the histogram is
non-zero outside of this range, the method aborts with an error.
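The following sketch illustrates such a range check; the function name and interface are hypothetical and do not match the \texttt{margLikArrogance} API:

```python
import numpy as np

def check_support(edges_list, known_positive_ranges):
    """Abort if the histogram support leaves the region where g > 0.

    edges_list: per-dimension histogram bin edges.
    known_positive_ranges: per-dimension (lo, hi) where g is known positive.
    (Illustrative only -- not the margLikArrogance interface.)
    """
    for edges, (lo, hi) in zip(edges_list, known_positive_ranges):
        if edges[0] < lo or edges[-1] > hi:
            raise ValueError("histogram support extends to where g may vanish")

# Example: theta_k has a gamma prior, so g = 0 whenever theta_k <= 0.
eps = 1e-6
check_support([np.array([0.5, 1.0, 1.5])], [(eps, np.inf)])      # accepted
try:
    check_support([np.array([-0.1, 0.4, 0.9])], [(eps, np.inf)])
except ValueError:
    print("rejected histogram overlapping the g = 0 region")
```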
Note that the variance of the estimator increases with $\int
\frac{\norm{\nabla \cdot g_A(\boldsymbol \theta)}^2}{g_A(\boldsymbol \theta)} d\boldsymbol \theta$. In
practice the estimator will work well only when $g$ doesn't go to 0
too quickly where the histogram is positive. In these cases the
histogram will be defined well away from any region where $g=0$ and
infinite variance won't be an issue even if $g=0$ somewhere.
\subsection{Bin Shape}
Cubic histogram bins were used above---their widths were fixed at $h$
in each dimension. Although the asymptotic results aren't affected by
the shape of each bin, for usable convergence rates the bins'
dimensions need to compatible with the shape of the high probability
region of $p(\boldsymbol \theta|\boldsymbol x, T)$. Unfortunately, it is difficult to
determine the best bin shapes.
The \texttt{margLikArrogance} package contains a simple workaround: by
default the distribution is first scaled so that the sampled standard
deviation along each dimension is constant. This is equivalent to
setting each bin's width by dimension in proportion to that
dimension's standard deviation. If this simple rule of thumb is
insufficient, the user can scale the sampled values of $p(\boldsymbol \theta|\boldsymbol x,
T)$ manually (and make the corresponding adjustment to the estimate
$A_n$).
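A sketch of this default scaling (illustrative, not the package's code): each dimension is divided by its sample standard deviation before histogramming, and the product of the standard deviations is returned as the Jacobian factor relating densities in the two coordinate systems:

```python
import numpy as np

def standardize(samples):
    """Rescale each dimension to unit sample standard deviation.

    Equivalent to choosing each bin's width per dimension in proportion
    to that dimension's standard deviation. Returns the scaled samples
    and the Jacobian factor prod(s_j). (A sketch, not the package's code.)
    """
    s = samples.std(axis=0)
    return samples / s, float(np.prod(s))

rng = np.random.default_rng(2)
# A hypothetical posterior sample with very different scales per dimension.
raw = rng.normal(size=(10_000, 2)) * np.array([0.01, 100.0])
scaled, jac = standardize(raw)
print(scaled.std(axis=0))  # ~ [1, 1]
```

Joint density values evaluated at the original points must be multiplied by the Jacobian factor so that they are densities in the scaled coordinates before computing $A_n$.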
\section{Conclusion}
This paper has described an ``arrogance sampling'' technique for
computing the marginal likelihood or Bayes factor of a Bayesian model.
It involves using samples from the model's posterior parameter
distribution along with the scaled values of the distribution's
density at those points. These samples are divided into two main
groups: $m$ samples are used to build a histogram; $n$ are used to
importance sample the histogram using the posterior parameter
distribution as the proposal.
This method is simple to implement and runs quickly in
$O(d(m+n)\log(m))$ time. Its asymptotic convergence rate,
$n^{-1}(1+O(Cm^{-2/d}))$, is not remarkable, but in practice
convergence is fast for many problems. Because the required inputs
are similar to those of the harmonic mean estimator, it may be a
convenient replacement for it.
\section*{References}
\begin{enumerate}
\item S. Chib. ``Marginal Likelihood from the Gibbs Output''
\emph{Journal of the American Statistical Association}. Vol 90,
No 432. (1995)
\item S. Chib and I. Jeliazkov. ``Accept-reject Metropolis-Hastings
sampling and marginal likelihood estimation'' \emph{Statistica
Neerlandica}. Vol 59, No 1. (2005)
\item A. Gelman and X. Meng. ``Simulating Normalizing Constants:
From Importance Sampling to Bridge Sampling to Path Sampling''
\emph{Statistical Science}. Vol 13, No 2. (1998)
\item R. Kass and A. Raftery. ``Bayes Factors'' \emph{Journal of the
American Statistical Association}. Vol 90, No 430. (1995)
\item R. Neal. ``The Harmonic Mean of the Likelihood: Worst Monte
Carlo Method Ever''. Blog post,
\texttt{http://radfordneal.wordpress.com/2008/08/17/the-harmonic-mean-of-the-likelihood-worst-monte-carlo-method-ever/}. (2008)
\item J. Neddermeyer. ``Computationally Efficient Nonparametric
Importance Sampling'' \emph{Journal of the American Statistical
Association}. Vol 104, No 486. (2009) arXiv:0805.3591v2
\item A. Owen and Y. Zhou. ``Safe and effective importance
sampling'' \emph{Journal of the American Statistical
Association}. Vol 95, No 449. (2000)
\item C. Robert and D. Wraith. ``Computational methods for Bayesian
model choice'' arXiv:0907.5123v1
\item P. Zhang. ``Nonparametric Importance Sampling'' \emph{Journal
of the American Statistical Association}. Vol 91, No 435. (1996)
\item M. Zlochin and Y. Baram. ``Efficient Nonparametric
Importance Sampling for Bayesian Inference'' \emph{Proceedings
of the 2002 International Joint Conference on Neural Networks}
2498--2502. (2002)
\end{enumerate}
\end{document} |
\begin{document}
\begin{abstract}
In this paper we study the regularity properties of $\Lambda$-minimizers of the capillarity energy in a half space with the wet part constrained to be confined inside a given planar region. Applications to a model for nanowire growth are also provided.
\end{abstract}
\title{Regularity of capillarity droplets with obstacle}
\tableofcontents
\section{Introduction}
Capillarity phenomena occur whenever two or more fluids are situated adjacent to each other and do not mix. The separating interface is usually referred to as a capillary surface. Since the pioneering works by Young and Laplace (see Finn's book \cite{Finnbook} for a historical introduction) these phenomena have been the subject of countless studies in the mathematical and interdisciplinary literature. A modern treatment of the problem is based on Gauss' idea of describing equilibrium configurations as critical points or (local) minimizers of a free energy accounting for the area of the surface separating the fluids and the surrounding media, for the area of the {\it wet region} due to the adhesion between the fluids and the walls of the container, and for the possible presence of external fields acting on the system (such as gravity). The existence of minimizing configurations is easily obtained in the framework of sets of finite perimeter. While the regularity inside the container of such configurations reduces to the more classical study of minimal surfaces, a more specific question is related to the regularity of the contact line between the container and the fluid; see \cite{Taylor77} for the physically relevant three-dimensional case and \cite{De-PhilippisMaggi15} for a wide extension to more general anisotropic energies and to higher dimensions.
In this paper we study the regularity of the contact line for a capillarity problem where the `container' is a half space $H$ and the wet region is constrained to be confined inside a given planar domain $O\subset\partial H$. The motivation for this problem comes from the study of a mathematical model of vapor-liquid-solid
(VLS) nanowire growth considered in the physical literature.
We recall that during VLS growth a
nanoscale liquid drop of catalyst deposited on the flat tip of the solid cylindrical
nanowire feeds its vertical growth. In the experiments it is observed that the sharp edge of the nanowire produces a pinning effect and forces the wet part to remain confined inside the top face of the cylinder and the liquid drop to be contained in the upper half space, see Figure~\ref{figure1}.
\begin{figure}\label{figure1}
\includegraphics[scale=0.71]{Curiotto2}
\caption{As the volume of the liquid drop increases the drop wets larger regions of the nanowire edge, but remains pinned at the top (reproduced from P. Krogstrup et al. \cite{Curiotto})}
\end{figure}
\subsection{Setting of the problem and main results}
Let us start by fixing some notations: We will work in \(\mathbb{R}^{N}\) and we set
$$
H=\{x \in \mathbb{R}^{N}: x_1> 0\}.
$$
Given \(\sigma\in (-1,1)\) we consider for a set of finite perimeter \(E\subset H\) and an open set \(A\) (not necessarily contained in \(H\)) the {\it capillarity energy}
\[
\begin{split}
\mathcal{F}_\sigma(E;A)&=P(E; H\cap A)+\sigma P(E;\partial H\cap A)
\\
&=\mathcal{H}^{N-1}(\partial^* E\cap H \cap A)+\sigma \mathcal{H}^{N-1}(\partial^* E\cap \partial H\cap A)
\end{split}
\]
where \(\partial^* E\) is the reduced boundary of \(E\), $P(E;G)$ is the perimeter of $E$ in $G$ (see the definitions at the beginning of Section~\ref{sec:densityestimates}) and $\mathcal{H}^{N-1}$ stands for the $(N-1)$-dimensional Hausdorff measure. In case \(A=\mathbb{R}^N\) we will simply write \( \mathcal{F}_\sigma(E)\).
We aim to impose a constraint on the {\it wet region} \(\partial^* E\cap \partial H\) of \(E\). To this end we consider a relatively open set \(O\subset \partial H\) and we denote by
\[
\mathcal C_{O}=\bigl\{E\subset H \text{ sets of locally finite perimeter such that \(\partial^* E \cap \partial H\subset \overline O\)}\bigr\}
\]
the class of admissible competitors. We aim at studying the regularity properties of (local) minimizers of the variational problem
\[
\min\{ \mathcal{F}_\sigma(E): E\in \mathcal C_O, |E|=m\}.
\]
Note that classical variational arguments imply that if one assumes that \(M:=\overline{ \partial E\cap H}\) is a smooth manifold with boundary, then any minimizer satisfies the following Euler-Lagrange conditions:
\begin{enumerate}[label=(\roman*)]
\item (\emph{Constant mean curvature}) There exists \(\lambda>0\) such that \(H_{M}=\lambda \) in \(M\cap H\), where $H_M$ is the sum of the principal curvatures of $M$ and more precisely coincides with the tangential divergence of the outer unit normal field $\nu_{E}$ to the boundary of $E$;
\item (\emph{Young's inequality}) \(\nu_{E}\cdot \nu_{H}\ge \sigma\) on \(M\cap \partial H\);
\item (\emph{Young's law inside $O$}) \(\nu_{E}\cdot \nu_{H}= \sigma\) on \((M\cap \partial H)\setminus \partial_{\partial H} O\)\footnote{Here and in the sequel for a set \(U\subset \partial H\) we will denote by \(\partial_{\partial H} U\) its relative boundary in \(\partial H\)}.
\end{enumerate}
Note that (iii) above is the classical Young's law which holds true outside the {\it thin obstacle} $\partial_{\partial H} O$, while (ii) is a global inequality which should hold true on the whole free boundary \(M\cap \partial H\).
As is customary in Geometric Measure Theory, we will remove volume type constraints and we will deal with some perturbed minimality conditions. Note that this allows us to treat several problems at once (volume constraints, potential terms, etc.).
Concerning the regularity of the obstacle we introduce the following class of open subsets of $\partial H$ satisfying a uniform inner and outer ball condition at every point of the (relative) boundary. More precisely, we give the following definition.
\begin{definition}\label{def: BR} Let $R\in (0,+\infty)$. We denote by
$\mathcal{B}_{R}$ the family of all relatively open subsets $O$ of $\partial H$ such that for every $x\in \partial_{\partial H} O$ there exist two $(N-1)$-dimensional relatively open balls $B'$, $B''\subset \partial H$ of radius $R$ such that $B'\subset O$, $B''\subset \partial H\setminus O$, and $\partial_{\partial H} B'\cap \partial_{\partial H} B''=\{x\}$.
Moreover, we set $\mathcal{B}_{\infty}=\cap_{R>0}\mathcal{B}_{R}$. Note that $\mathcal{B}_{\infty}$ is made up of all relatively open half-spaces of $\partial H$ and of $\partial H$ itself.
\end{definition}
\begin{remark}\label{rem:BR}
Note that if $O\in \mathcal{B}_{R}$, then $O$ is of class $C^{1,1}$ with principal curvatures bounded by $\frac{1}{R}$, see \cite{MoMo00, Dalphin}. Therefore, if $O_h$ is a sequence in $\mathcal{B}_{R_h}$ with $R_h\to R\in(0,+\infty]$, then there exists a (not relabelled) subsequence such that $\overline{O_h}\to \overline O$ in the Kuratowski sense, where $O\in\mathcal{B}_{R}$. Moreover, for all $\alpha\in(0,1)$, $O_h\to O$ in $C^{1,\alpha}_{loc}$ in the following sense: given $x\in \partial_{\partial H} O$, there exist an $(N-1)$-dimensional ball $B$ centered at $x$ and $\psi_h,\psi\in C^{1,1}(\mathbb{R}^{N-2})$ such that, up to a rotation in $\partial H$, $\partial_{\partial H} O_h\cap B$ coincides with the graph of $\psi_h $ in $B$, $\partial_{\partial H} O\cap B$ coincides with the graph of $\psi $ in $B$, and $\psi_h\to\psi$ locally in $C^{1,\alpha}(\mathbb{R}^{N-2})$.
\end{remark}
\begin{definition}\label{def:lambdamin}
Given \(\Lambda\geq 0\), \(r_0>0\), and $O\subset\partial H$ a relatively open set, we say that \(E\in \mathcal C_O\) is a \((\Lambda, r_0)\)-minimizer of \(\mathcal{F}_{\sigma}\) with obstacle \(O\) if
\begin{multline*}
\mathcal{F}_\sigma(E;B_{r_0}(x_0))\le \mathcal{F}_\sigma(F;B_{r_0}(x_0))+\Lambda|E\Delta F|
\\
\text{for all \(F\in \mathcal C_O\) such that \(E\Delta F\Subset B_{r_0}(x_0)\)}\,.
\end{multline*}
When $E$ is a $(\Lambda,r_0)$-minimizer (with obstacle $O$) for every $r_0>0$ we will simply say that $E\subset H$ is a \((\Lambda,+\infty)\)-minimizer (with obstacle $O$) or simply a \(\Lambda\)-minimizer (with obstacle $O$) of $\mathcal{F}_{\sigma}$.
\end{definition}
We are in a position to state the main result of the paper, which establishes full regularity in three dimensions and partial regularity in any dimension, with an estimate on the Hausdorff dimension $\text{dim}_{\mathcal H}$ of the singular set.
\begin{theorem}\label{th:reg}
Let $O\subset\partial H$ be a relatively open set of class $C^{1,1}$ and let $E\in\mathcal C_O$ be a $(\Lambda,r_0)$-minimizer with obstacle $O$ of $\mathcal F_\sigma$. Then, the following conclusions hold true:
\begin{itemize}
\item[(i)] if $N=3$, then $\overline{\partial E\cap H}$ is a surface with boundary of class $C^{1,\tau}$ for all $\tau\in(0,\frac12)$;
\item[(ii)] if $N\geq4$, there exists a closed set $\Sigma\subset \overline{\partial E\cap H}\cap\partial H$, with ${\rm dim}_{\mathcal H}(\Sigma)\leq N-4$, such that $\overline{\partial E\cap H}\setminus\Sigma$ is (locally) a $C^{1,\tau}$ hypersurface with boundary for all $\tau\in(0,\frac12)$.
\end{itemize}
Moreover,
\begin{align*}
\nu_{E}\cdot \nu_{H}\ge \sigma \quad &\text{on} \quad M\cap \partial H; \\
\nu_{E}\cdot \nu_{H}= \sigma \quad &\text{on} \quad (M\cap \partial H)\setminus \partial_{\partial H} O,
\end{align*}
where we set $M:= \overline{\partial E\cap H}\setminus\Sigma$.
\end{theorem}
The main new point of the above result is the regularity at the points of the free boundary $\overline{\partial E\cap H}\cap\partial H$ lying on the thin obstacle $\partial_{\partial H}O$. Indeed the full regularity at the boundary points $\overline{\partial E\cap H}\cap O$ for $N=3$ follows from the classical result of \cite{Taylor77}, while the partial regularity for $N\geq4$ follows from the more recent paper \cite{De-PhilippisMaggi15}. The idea of the proof of Theorem~\ref{th:reg} is based on the following dichotomy, inspired by the work of Fern\'andez-Real and Serra~\cite{Fernandez-RealSerra20}, where a thin obstacle problem for the area functional is studied; see also the work of Focardi and Spadaro \cite{focardi-spadaro20} for the non-parametric case. If in an $r$-neighborhood of $x\in\overline{\partial E\cap H}\cap\partial_{\partial H}O$ the boundary $\partial E\cap H$ is contained in a sufficiently thin strip with a slope sufficiently larger than the slope $\alpha(\sigma)$ given by Young's law, then we show by a barrier argument that at scale $r/2$ the free boundary fully coincides with the thin obstacle $\partial_{\partial H}O$. If instead the slope of the strip is sufficiently close to $\alpha(\sigma)$, then a linearization procedure allows us to reduce to a Signorini type problem and to deduce from the decay estimates available for the latter a flatness improvement result at a smaller scale. Iterating this dichotomy argument leads to a {\it boundary $\varepsilon$-regularity} result, see Theorem~\ref{th:epsreg}, which, combined with a suitable monotonicity formula, finally yields the proof of Theorem~\ref{th:reg}. Since the barrier argument requires closeness in the ``uniform norm'', in order to make the dichotomy effective we need to establish a decay of the excess in the same norm.
For this step we follow the ideas introduced by Savin in \cite{Savin} to deal with interior regularity of minimal surfaces, and we rely on a partial Harnack inequality which is obtained by a nontrivial extension of these ideas to the boundary.
The connection with the Signorini problem also explains why we cannot expect better regularity than $C^{1,1/2}$, see for instance \cite{Richardson78}.
\subsection{Applications to a model for nanowire growth}\label{nanosec}
\begin{figure}\label{figure2}
\vskip -1cm
\includegraphics[scale=0.25]{Spencer3}
\includegraphics[scale=0.39]{Spencer4}
\caption{On the left a drop sitting on the tip of a nanowire with hexagonal section. The picture on the right shows the behaviour of the droplet near a corner (courtesy of B. Spencer)}
\end{figure}
As anticipated at the beginning of this introduction, we conclude by applying the above regularity theory to a model of vapor-liquid-solid
(VLS) nanowire growth considered in the physical literature and studied in \cite{FFLM22}.
Following the work of several authors, see the references in \cite{FFLM22}, we consider a continuum framework for
nanowire VLS growth. We model the nanowire as a semi-infinite cylinder $\mathbf{C}=\omega\times(-\infty,0]$, where $\omega\subset\mathbb{R}^2$ is a bounded sufficiently regular domain, and the liquid drop as a set
$E\subset\mathbb{R}^{3}\setminus\mathbf{C}$ of finite perimeter. Typically the observed nanowires have either a regular or a polygonal section.
Given $\sigma\in(-1,1)$ we consider the free energy
\[
J_{\sigma,\omega}(E):=\mathcal{H}^{2}(\partial^{\ast}E\setminus\mathbf{C})+\sigma\mathcal{H}^{2}(\partial^{\ast}E\cap\mathbf{C})\,.
\]
The shape of the liquid drop is then described by (local) minimizers of $J_{\sigma,\omega}$ under a volume constraint. To this aim we say that $E\subset\mathbb{R}^3\setminus\mathbf{C}$ is a {\it volume constrained local minimizer of $J_{\sigma,\omega}$} if there exists $\varepsilon>0$ such that $J_{\sigma,\omega}(E)\leq J_{\sigma,\omega}(F)$ for all $F\subset\mathbb{R}^3\setminus\mathbf{C}$ with $|F|=|E|$ and $|E\Delta F|<\varepsilon$.
Here we study local minimizing configurations corresponding to liquid drops sitting on the top $\mathbf{C}_{top}:=\omega\times\{0\}$ of the cylinder and contained in the upper half space $\mathscr H:=\{x_3>0\}$.
In the case where $\omega$ is a $C^{1,1}$ domain we prove the following regularity result.
\begin{theorem}\label{cinqueuno}
Let $\omega\subset\mathbb{R}^2$ be a bounded domain of class $C^{1,1}$ and let $E\subset\mathscr H$ be a volume constrained local minimizer of $J_{\sigma,\omega}$. Then $\overline{\partial E\cap\mathscr H}$ is a surface with boundary of class $C^{1,\tau}$ for all $\tau\in(0,\frac12)$.
\end{theorem}
In the experiments it is observed that for nanowires with a polygonal section, see for instance \cite{Curiotto}, the liquid drop never wets the corners of the polygon, as illustrated in Figure~\ref{figure2}. Here we give a rigorous proof of this fact when $\sigma<0$ and, for a general $\sigma$, under the additional assumption that the contact between the liquid drop and $\gamma:=\partial\omega\times\{0\}$ is nontangential (see Definition~\ref{nontang}).
\begin{theorem}\label{cinquedue}
Let $\omega\subset\mathbb{R}^2$ be a convex polygon and let $E\subset\mathscr H$ be a volume constrained local minimizer of $J_{\sigma,\omega}$. If $\sigma<0$, then $\overline{\partial E\cap\mathscr H}$ is a surface with boundary of class $C^{1,\tau}$ for all $\tau\in(0,\frac12)$ and the contact line $\overline{\partial E\cap\mathscr H}\cap\mathbf{C}_{top}$ does not contain any vertex of the polygon. If $\sigma\geq0$ the same conclusion holds provided that $E$ has a nontangential contact at all points of $\overline{\partial E\cap\mathscr H}\cap\gamma$.
\end{theorem}
\section{Density estimates and compactness}\label{sec:densityestimates}
Given a Lebesgue measurable set $E\subset\mathbb{R}^N$, we say that $E$ is of locally finite perimeter if there exists an $\mathbb{R}^N$-valued Radon measure $\mu_E$ such that
$$
\int_E\nabla\varphi\,dx=\int_{\mathbb{R}^N}\varphi\,d\mu_E\qquad \text{for all $\varphi\in C^1_c(\mathbb{R}^N)\,.$}
$$
If $G\subset \mathbb{R}^N$ is a Borel set we denote by $P(E; G)= |\mu_E|(G)$ the perimeter of $E$ in $G$. For all relevant definitions and properties of sets of finite perimeter we shall refer to the book \cite{Maggi12}. In the following we denote by $\partial^*E$ the reduced boundary of a set of finite perimeter and by
$\partial^eE$ the {\em essential boundary}, which is defined as
$$
\partial^eE:=\mathbb{R}^N\setminus(E^{(0)}\cup E^{(1)})\,,
$$
where $E^{(0)}$ and $E^{(1)}$ are the sets of points where the density of $E$ is $0$ and $1$, respectively. Since the perimeter measure coincides with the $\mathcal{H}^{N-1}$ measure restricted to the reduced boundary $\partial^*E$, we will often write
$\mathcal{H}^{N-1}(\partial^*E\cap \Omega)$ instead of $P(E;\Omega)$. Note that if $E\subset H$ the characteristic function of $\partial^* E\cap \partial H$ is the trace of \(1_E\) intended as a \(BV\) function (for the definition of trace see \cite[Th. 3.87]{AmbrosioFuscoPallara00}).
In the following, when dealing with a set of locally finite perimeter $E$, we will always assume that $E$ coincides with a precise representative that satisfies the property $\partial E=\overline{\partial^*E}$, see \cite[Remark 16.11]{Maggi12}. A possible choice is given by $E^{(1)}$, for which one may check that
\begin{equation}\label{Euno}
\partial E^{(1)}=\overline{\partial^*E}\,.
\end{equation}
Given a set \(E\) we denote by \(E_{x,r}\) the set \(E_{x,r}:=(E-x)/r\). A ball centered in \(x\) and of radius \(r\) is denoted by \(B_r(x)\), if the center is \(0\) we simply set \(B_r\).
Note that if \(E\) is \((\Lambda, r_0)\)-minimizer of \(\mathcal{F}_{\sigma}\) with obstacle \(O\in \mathcal B_R\) and \(x\in \boldsymbol{p}artialrtial H\), \(r>0\) then
\(E_{x,r}\) is a \((\Lambda r, r_0/r)\) minimizer of \(\mathcal{F}_{\sigma}\) with obstacle \(O_{x,r}\in \mathcal B_{\frac Rr}\). Hence we can assume without loss of generality that \(E\) is \((\Lambda, 1)\)-minimizer with obstacle $O\in \mathcal B_R$, where
\begin{equation}\label{e:basicassumption}
\Lambda +\frac 1R\leq \boldsymbol{c_0}\leq 1\,,
\varepsilonnd{equation}
where \(\boldsymbol{c_0}\) is a small constant, depending only on $\sigma$ and $N$, to be chosen later.
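For the reader's convenience, we record the elementary scaling computation behind this reduction (a routine verification, assuming, as in Definition~\ref{def:lambdamin}, that $\mathcal F_\sigma$ scales like the perimeter): for every $\rho\leq r_0/r$ and every competitor $F$ with $E_{x,r}\boldsymbol{D}elta F\Subset B_\rho(z)$ one has
$$
\mathcal{F}_\sigma(E_{x,r}; B_\rho(z))=r^{1-N}\,\mathcal{F}_\sigma(E; B_{r\rho}(x+rz))\,,\boldsymbol{q}quad |E_{x,r}\boldsymbol{D}elta F|=r^{-N}|E\boldsymbol{D}elta (x+rF)|\,,
$$
so that testing the minimality of $E$ in $B_{r\rho}(x+rz)$ with the competitor $x+rF$ yields
$$
\mathcal{F}_\sigma(E_{x,r}; B_\rho(z))\leq \mathcal{F}_\sigma(F; B_\rho(z))+\Lambda r\,|E_{x,r}\boldsymbol{D}elta F|\,.
$$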
We will say, with a slight abuse of language, that a constant is \varepsilonmph{universal} if it only depends on $\Lambda$, \(\sigma\) and on the dimension.
We start by observing that the constraint on the wet part can be replaced by a suitable penalization. To this end we introduce the following functional
$$
\mathcal{F}^{O}_\sigma(F;A):=P(F; A\setminus \overline O)+\sigma P(F;O\cap A)\,,
$$
where $A\subset\mathbb{R}^N$ is an open set and $F\subset H$ is a set of finite perimeter. If $A=\mathbb{R}^N$ we simply write $\mathcal{F}^{O}_\sigma(F)$.
We also denote by $\mathbf{C}_O$ the semi-infinite cylinder constructed over $O$, that is,
$$
\mathbf{C}_O:=\{x=(x_1,\dots,x_N)\in\mathbb{R}^N: x_1>0 \,\,\text{and}\,\, (x_2,\dots,x_N)\in O\}\,.
$$
\begin{lemma}\label{lm:osta}
Assume that $E\subset H$ is a $(\Lambda, r_0)$-minimizer for $\mathcal F_\sigma$ in the sense of Definition~\ref{def:lambdamin}.
Then, $E$ is a $(\Lambda, r_0)$-minimizer for $\mathcal F^O_\sigma$ without obstacle, that is,
$$
\mathcal{F}^O_\sigma(E;B_{r_0}(x_0))\le \mathcal{F}^O_\sigma(F;B_{r_0}(x_0))+\Lambda|E\boldsymbol{D}elta F| \text{ for all \(F\subset H\) such that \(E\boldsymbol{D}elta F\Subset B_{r_0}(x_0)\)}\,.
$$
\varepsilonnd{lemma}
\begin{proof}
Since the argument is local we may assume without loss of generality that $E$ is a set of finite perimeter.
Let $F$ be as in the statement and let $B$ be an open ball of radius $r_0$ such that $E\boldsymbol{D}elta F\Subset B$ and
$\mathcal{H}^{N-1}(\boldsymbol{p}artial^*F\cap \boldsymbol{p}artial B)=0$, and set $B^+=B\cap H$. For every $\varepsilonps>0$ (sufficiently small) we set
$$
F_\varepsilonps:=[(F\cap\{x_1>\varepsilonps \}\cap B^+)\setminus \mathbf{C}_O]\cup (F\cap \mathbf{C}_O)\cup (F\setminus B^+)\,.
$$
Note that $F_\varepsilonps$ is an admissible competitor in the sense of Definition~\ref{def:lambdamin} and thus
\begin{equation}\label{e:osta1}
\mathcal{F}_\sigma(E)\leq \mathcal{F}_\sigma(F_\varepsilonps)+\Lambda|F_\varepsilonps\boldsymbol{D}elta E|\,.
\varepsilonnd{equation}
Note that for a.e. $\varepsilonps$
\begin{align*}
\mathcal{F}_\sigma(F_\varepsilonps)\leq& P(F; (B^+\cap \{x_1>\varepsilonps\})\setminus \overline{\mathbf{C}}_O)
+\mathcal{H}^{N-1}((F^{(1)}\cap B^+\cap\{x_1=\varepsilonps\})\setminus \overline{\mathbf{C}}_O)\\
&+ P(F; B^+\cap \overline{\mathbf{C}}_O)+P(F; H\setminus B^+)
+\sigma P(F; O)\\
&+ \mathcal{H}^{N-1}(F^{(1)}\cap B^+\cap\boldsymbol{p}artial \mathbf{C}_O\cap \{x_1\leq \varepsilonps\})+
\mathcal{H}^{N-1}(\boldsymbol{p}artial B\cap F^{(1)}\cap \{x_1\leq \varepsilonps\})\,.
\varepsilonnd{align*}
By the continuity of the trace with respect to strict convergence in $BV$ (see \cite[Th.~3.88]{AmbrosioFuscoPallara00}) we have that
$$
\mathcal{H}^{N-1}((F^{(1)}\cap B^+\cap\{x_1=\varepsilonps_n\})\setminus \overline{\mathbf{C}}_O)
\to P(F; \boldsymbol{p}artial H\setminus O)
$$
for a sequence $\varepsilonps_n\to 0^+$. Thus,
\begin{multline*}
\liminf_{\varepsilonps\to0}\mathcal{F}_\sigma(F_{\varepsilonps})\leq P(F; B^+\setminus \overline{\mathbf{C}}_O)+P(F; \boldsymbol{p}artial H\setminus O)+P(F; B^+\cap \overline{\mathbf{C}}_O)\\
+P(F; H\setminus \overline B^+)
+\sigma P(F; O)=\mathcal{F}^O_\sigma(F)\,.
\varepsilonnd{multline*}
The conclusion then follows recalling \varepsilonqref{e:osta1}.
\varepsilonnd{proof}
\begin{remark}\label{rem:lsc}
Observe that $\mathcal F_\sigma(\cdot\,;A)$ is lower semicontinuous with respect to the $L^1_{loc}$ convergence in $\mathbf{C}_O$, see \cite[Prop. 19.1]{Maggi12}. The same proposition can be applied to show that also $\mathcal F^O_\sigma(\cdot\,;A)$ is lower semicontinuous with respect to the $L^1_{loc}$ convergence in $H$. To this aim it is enough to observe that it is possible to construct an open set $U$ of class $C^{1,1}$ contained in $\mathbb{R}^N\setminus\overline H$ such that $\boldsymbol{p}artial U\cap\boldsymbol{p}artial H=\overline O$ and then to apply \cite[Prop. 19.1 and Prop. 1.3]{Maggi12} to $\mathbb{R}^N\setminus\overline U$.
\varepsilonnd{remark}
We now prove some useful volume and perimeter density estimates.
\begin{proposition}\label{p:densityestimates}
Let \(\sigma \in (-1,1)\), let \(E\) be a \((\Lambda,1)\)-minimizer of \(\mathcal{F}_{\sigma}\) with obstacle \(O\in \mathcal B_R\)
and assume that \varepsilonqref{e:basicassumption} is in force.
Then there are universal positive constants \(c_1\) and \(C_1\) such that
\begin{enumerate}[label=\textup{(\roman*)}]
\item for all $x\in H$
$$
P(E; B_r(x))\leq C_1 r^{N-1}\,;
$$
\item for all \(x\in \boldsymbol{p}artial E\)
\[
|E\cap B_r(x)|\geq c_1 |B_r(x)\cap H|\,;
\]
\item for all \(x\in \overline{\boldsymbol{p}artial E\cap H}\)
\[
P(E;B_r(x)\cap H)\geq c_1 r^{N-1}\,;
\]
\item
if $x\in \boldsymbol{p}artial (H\setminus E)$ and $B_{2r}(x)\cap\boldsymbol{p}artial H\subset O$, then
$$
|B_r(x)\setminus E|\geq c_1|B_r(x)\cap H|
$$
\varepsilonnd{enumerate}
for every $r\leq1$. Finally, $E$ is equivalent to an open set, still denoted by $E$, such that $\boldsymbol{p}artial E=\boldsymbol{p}artial^eE$, hence $\mathcal{H}^{N-1}(\boldsymbol{p}artial E\setminus\boldsymbol{p}artial^*E)=0$.
\varepsilonnd{proposition}
\begin{proof} The proof of this proposition follows the lines of \cite[Lemma~2.8]{De-PhilippisMaggi15}; however, some modifications are needed. Given $x\in H$ and $r<1$, we set $m(r):=|E\cap B_r(x)|$. Recall that for a.e. such $r$ we have
$m'(r)=\mathcal{H}^{N-1}(E^{(1)}\cap \boldsymbol{p}artial B_r(x))$ and $\mathcal{H}^{N-1}(\boldsymbol{p}artial^*E\cap \boldsymbol{p}artial B_r(x))=0$. For any such $r$ we set $F:=E\setminus B_r(x)$. Then, using Definition~\ref{def:lambdamin}, we have
\begin{align}
P(E; B_r(x)\cap H)&\leq \mathcal{H}^{N-1}(\boldsymbol{p}artial B_r(x)\cap E^{(1)})+\Lambda|E\cap B_r(x)|+|\sigma|P(E; \boldsymbol{p}artial H\cap B_r(x))\nonumber\\
&\leq C_1 r^{N-1} \label{e:de1}
\varepsilonnd{align}
for a suitable universal constant $C_1$.
Observe now that by an easy application of the divergence theorem we have
$$
P(E; \boldsymbol{p}artial H\cap B_r(x))=P(E\cap B_r(x); \boldsymbol{p}artial H)\leq P(E\cap B_r(x); H)\,.
$$
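In more detail, the divergence theorem argument can be sketched by testing with the constant vector field $e_1$: since $E\cap B_r(x)\subset H$ and $\nu_{E\cap B_r(x)}=-e_1$ $\mathcal{H}^{N-1}$-a.e. on $\boldsymbol{p}artial^*(E\cap B_r(x))\cap\boldsymbol{p}artial H$, one has
$$
0=\int_{E\cap B_r(x)}\boldsymbol{D}iv\, e_1\, dx=\int_{\boldsymbol{p}artial^*(E\cap B_r(x))}e_1\cdot \nu_{E\cap B_r(x)}\, d\mathcal{H}^{N-1}\leq -P(E\cap B_r(x); \boldsymbol{p}artial H)+P(E\cap B_r(x); H)\,,
$$
which gives the inequality above.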
Thus, using also \varepsilonqref{e:de1}, we have
\begin{align}
P(E\cap B_r(x))&=P(E\cap B_r(x); H)+P(E\cap B_r(x); \boldsymbol{p}artialrtial H)\nonumber\\
&\leq 2 P(E\cap B_r(x); H)= 2 P(E; B_r(x)\cap H)+2m'(r)\label{e:de2}\\
&\leq 4 m'(r)+2\Lambda m(r)+2|\sigma| P(E; \boldsymbol{p}artial H\cap B_r(x))\nonumber\\
&\leq 4 m'(r)+2\Lambda m(r)+2|\sigma| P(E\cap B_r(x); H)\nonumber\,.
\varepsilonnd{align}
Comparing the first term in the second line with the fourth line of the previous chain of inequalities we have in particular that
$$
P(E\cap B_r(x); H)\leq \frac{1}{1-|\sigma|}(2m'(r)+\Lambda m(r))\,.
$$
In turn, inserting the above estimate in \varepsilonqref{e:de2} and using the isoperimetric inequality we get
\begin{align*}
N\omega_N^{\frac1N}m(r)^{\frac{N-1}{N}}& \leq P(E\cap B_r(x))\leq \frac{2}{1-|\sigma|}(2m'(r)+\Lambda m(r))\\
&\leq \frac{2}{1-|\sigma|}(2m'(r)+\Lambda r\omega_N^{\frac1N}m(r)^{\frac{N-1}N})\\
&\leq
\frac{2}{1-|\sigma|}(2m'(r)+\boldsymbol{c_0}\omega_N^{\frac1N}m(r)^{\frac{N-1}N})\,,
\varepsilonnd{align*}
where in the last inequality we used \varepsilonqref{e:basicassumption}. Now if $\frac{2\boldsymbol{c_0}}{1-|\sigma|}\leq 1$, then from the previous inequality we get
$$
(N-1)\omega_N^{\frac1N}m(r)^{\frac{N-1}{N}}\leq \frac{4}{1-|\sigma|}m'(r)\,.
$$
Observe now that if in addition $x\in \boldsymbol{p}artial^* E$, then $m(r)>0$ for all $r$ as above. Thus, we may divide the previous inequality by $m(r)^{\frac{N-1}{N}}$ and integrate the resulting differential inequality, obtaining
$$
|E\cap B_r(x)|\geq c_1 |B_r(x)\cap H|\,,
$$
for a suitable positive constant $c_1$ depending only on $N$ and $|\sigma|$.
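For completeness, the integration step behind the last implication can be spelled out as follows: the previous inequality gives, for a.e. $r$,
$$
\frac{d}{dr}\,m(r)^{\frac1N}=\frac{m'(r)}{N\, m(r)^{\frac{N-1}{N}}}\geq \frac{(N-1)(1-|\sigma|)}{4N}\,\omega_N^{\frac1N}\,,
$$
so that, since $m(\rho)>0$ for every $\rho>0$, integrating between $0$ and $r$ we obtain $m(r)^{\frac1N}\geq \frac{(N-1)(1-|\sigma|)}{4N}\,\omega_N^{\frac1N}\, r$, and the claimed volume estimate follows since $|B_r(x)\cap H|\leq \omega_N r^N$.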
To get the lower density estimate on the perimeter, let $x\in \boldsymbol{p}artial^*E\cap H$ and $r<1$. If
$\dist(x, \boldsymbol{p}artial_{\boldsymbol{p}artial H}O)>\frac{r}2$, then either $B_\rho(x)\cap \boldsymbol{p}artial H\subset \boldsymbol{p}artial H\setminus\overline O$ for all $\rho\in (0, \frac{r}2)$ or $B_\rho(x)\cap \boldsymbol{p}artial H\subset O$ for all $\rho\in (0, \frac{r}2)$.
In the first case, $E$ is a
$(\Lambda, r/2)$-minimizer of the standard perimeter in $B_{\frac{r}{2}}(x)$ under the constraint $E\subset H$. Then an easy truncation argument implies that $E$ is also an unconstrained
$(\Lambda, r/2)$-minimizer of the perimeter in the same ball. Then, the lower density estimate on the perimeter follows from classical results (see \cite[Th. 21.11]{Maggi12}).
In the second case, it follows that $H\setminus E$ is a $(\Lambda, r/2)$-minimizer of $\mathcal{F}_{-\sigma}$ in $B_{\frac{r}{2}}(x)$ and thus, arguing as above, we get
$$
|E\cap B_{\frac{r}{2}}(x)|\leq (1-c_1) |B_{\frac{r}{2}}(x)\cap H|
$$
and, in turn, by the relative isoperimetric inequality, setting for every $\tau\in (\frac12, 1)$
$$
\kappa(\tau):=\inf_{t\geq 0}\inf\Bigl\{\frac{P(F; B_1(te_1)\cap H)}{|F|^{\frac{N-1}{N}}}:\, F\subset B_1(te_1)\cap H,\, |F|\leq\tau |B_1(te_1)\cap H|\Bigr\}
$$
we obtain
$$
P(E; B_{\frac{r}{2}}(x)\cap H)\geq \kappa(1-c_1) |E\cap B_{\frac{r}2}(x)|^{\frac{N-1}{N}}\geq c_1 \kappa(1-c_1) |B_{\frac{r}{2}}(x)\cap H|^{\frac{N-1}{N}}\,.
$$
Assume now that $\dist(x, \boldsymbol{p}artial_{\boldsymbol{p}artial H} O)\leq \frac{r}2$. Then, there exists $y=(0, y')\in \boldsymbol{p}artial_{\boldsymbol{p}artial H} O$ such that the half-ball $B_{\frac{r}2}(y)\cap H$ is contained in $B_r(x)\cap H$.
Assume that
$|E\cap B_{\frac{r}2}(y)|\leq (1-\gamma)|B_{\frac{r}2}(y)\cap H|$ for some small $\gamma\in (0, \frac12)$ to be chosen later. By the relative isoperimetric inequality we get
$$
P(E; B_{r}(x)\cap H)\geq \kappa_N\min\{|E\cap B_{r}(x)|, |(H\setminus E)\cap B_{r}(x)|\}^{\frac{N-1}{N}}.
$$
If $\min\{|E\cap B_{r}(x)|, |(H\setminus E)\cap B_{r}(x)|\}=|E\cap B_{r}(x)|$, then (iii) follows from (ii). Otherwise
\begin{align*}
P(E; B_{r}(x)\cap H)&\geq\kappa_N|(H\setminus E)\cap B_{r}(x)|^{\frac{N-1}{N}} \\
&\geq\kappa_N|(H\setminus E)\cap B_{\frac{r}{2}}(y)|^{\frac{N-1}{N}}\geq\kappa_N\Big(\frac{\gamma\omega_N}{2^{N+1}}\Big)^{\frac{N-1}{N}}r^{N-1}\,.
\varepsilonnd{align*}
If instead
\begin{equation}\label{e:de3.5}
|E\cap B_{\frac{r}2}(y)|> (1-\gamma)|B_{\frac{r}2}(y)\cap H|\,,
\varepsilonnd{equation}
then we denote by $\Pi$ the orthogonal projection on $\boldsymbol{p}artial H$ of
$\boldsymbol{p}artial^*E\cap B_{\frac{r}2}(y)\cap H$. Set $D:= \{(0, z'):\,|z'-y'|<\frac{r}{4} \}\setminus O$ and observe that
$$
\mathcal{H}^{N-1}(D)\geq \tilde c\omega_{N-1}\Bigl(\frac{r}4\Bigr)^{N-1}\,,
$$
for a universal constant $\tilde c>0$. Note that in the above inequality we are using the assumption that $O\in \mathcal B_R$, with $\frac1R\leq \boldsymbol{c_0}$ (see \varepsilonqref{e:basicassumption}).
Assume first
that $\mathcal{H}^{N-1}(\Pi\cap D)\geq \gamma \omega_{N-1}\bigl(\frac{r}4\bigr)^{N-1}$. In this case, we have
\begin{equation}\label{e:de4}
P(E; B_{\frac{r}2}(y)\cap H)\geq \gamma \omega_{N-1}\Bigl(\frac{r}4\Bigr)^{N-1}\,.
\varepsilonnd{equation}
If instead $\mathcal{H}^{N-1}(\Pi\cap D)< \gamma \omega_{N-1}\bigl(\frac{r}4\bigr)^{N-1}$, then
$$
|(H\setminus E)\cap B_{\frac{r}2}(y)|\geq\frac{r}{4}\mathcal{H}^{N-1}(D\setminus\Pi)\geq (\tilde c-\gamma)\omega_{N-1}\Bigl(\frac{r}4\Bigr)^{N}\,.
$$
The last inequality contradicts \varepsilonqref{e:de3.5} if $\gamma$ is sufficiently small. This contradiction shows that \varepsilonqref{e:de4} holds under the assumption
\varepsilonqref{e:de3.5}. This completes the proof of (iii).
Property (iv) follows from the volume estimate (ii) recalling that $H\setminus E$ is a $(\Lambda, r_0)$-minimizer for $\mathcal{F}_{-\sigma}$ in $B_{2r}(x)$.
In order to show that $\boldsymbol{p}artial E=\boldsymbol{p}artial^eE$ it is enough to prove that $\boldsymbol{p}artial E\subset\boldsymbol{p}artial^eE$. To this aim fix $x\in\boldsymbol{p}artial E$. If $x\in H$, from the density estimates (ii) and (iv) we have at once that
$x\not\in E^{(0)}\cup E^{(1)}$, that is $x\in\boldsymbol{p}artial^eE$. On the other hand, if $x\in\boldsymbol{p}artial H$, then $|B_r(x)\setminus E|\geq \frac12|B_r(x)|$ and again, recalling (ii), we have that $x\in\boldsymbol{p}artial^eE$. Hence $\mathcal{H}^{N-1}(\boldsymbol{p}artial E\setminus\boldsymbol{p}artial^*E)=\mathcal{H}^{N-1}(\boldsymbol{p}artial^eE\setminus\boldsymbol{p}artial^*E)=0$, where the last equality follows from Theorem~16.2 in \cite{Maggi12}.
Finally, since, see \varepsilonqref{Euno}, $\boldsymbol{p}artial E^{(1)}=\boldsymbol{p}artial E=\boldsymbol{p}artial^eE$, we have that $E^{(1)}\cap \boldsymbol{p}artial E^{(1)}=\varepsilonmptyset$, hence $E^{(1)}$ is an open set.
\varepsilonnd{proof}
We are now ready to prove the following compactness theorem for almost minimizers.
\begin{theorem}\label{th:compactness} Let $\Lambda_h\geq 0$, $r_h, R_h\in [1,+\infty]$ satisfying
$$
\Lambda_h +\frac 1{R_h}\leq \boldsymbol{c_0}\,,
$$
and
$$
\Lambda_h\to \Lambda_0\in [0,+\infty),\, \boldsymbol{q}uad r_h\to r_0\in [1,+\infty], \boldsymbol{q}uad R_h\to R_0\in [1, +\infty]\,,
$$
with $\boldsymbol{c_0}$ as in \varepsilonqref{e:basicassumption}. Let $E_h$ be a $(\Lambda_h, r_h)$-minimizer of $\mathcal F_{\sigma}$
with obstacle $O_h\in \mathcal B_{R_h}$. Then there exist a (not relabelled) subsequence,
a set $O\in \mathcal B_{R_0}$, and a set $E$ of locally finite perimeter, such that $E_h\to E$ in $L^1_{loc}(\mathbb{R}^N)$,
$O_h\to O$ in $C^{1,\alpha}_{loc}$ for all $\alpha\in (0,1)$, with the property that $E$ is a $(\Lambda_0, r_0)$-minimizer of $\mathcal{F}^O_\sigma$.
Moreover,
$$
\mu_{E_h}\wtos \mu_E\,, \boldsymbol{q}uad |\mu_{E_h}|\wtos |\mu_E|\,,
$$
as Radon measures.
In addition, the following Kuratowski-type convergence properties hold:
\[
\begin{split}
&\text{(i)\,\,for every $x\in\boldsymbol{p}artial E$ there exists $x_h\in\boldsymbol{p}artial E_h$ such that $x_h\to x$;} \\
& \text{(ii)\,\,if
$x_h\in \overline{H\cap\boldsymbol{p}artial E_h}$ and $x_h\to x$, then $x\in \overline{H\cap\boldsymbol{p}artial E}$\,.}
\varepsilonnd{split}
\]
Finally, if $\Lambda_0=0$ and $\boldsymbol{p}artial H\setminus \overline{O}$ is connected, then either
$\boldsymbol{p}artial E\cap(\boldsymbol{p}artial H\setminus \overline{O})=\boldsymbol{p}artial H\setminus \overline{O}$ or $\boldsymbol{p}artial E\cap \boldsymbol{p}artial H\subset\overline O$.
In the latter case, we have that $E$ is a $(0, r_0)$-minimizer with obstacle $O$,
$$|\mu_{E_h}|\mathop{\hbox{\vrule height 6pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits H\wtos |\mu_E|\mathop{\hbox{\vrule height 6pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits H$$
as Radon measures in $\mathbb{R}^N$, and
$$
\boldsymbol{p}artial E_h\cap \boldsymbol{p}artial H\to \boldsymbol{p}artial E\cap \boldsymbol{p}artial H\boldsymbol{q}uad\text{ in }L^1_{loc}(\boldsymbol{p}artial H)\,.
$$
\varepsilonnd{theorem}
\begin{proof}
The proof goes along the lines of the proof of \cite[Th. 2.9]{De-PhilippisMaggi15}, with some nontrivial modifications that we explain here.
First of all the existence of a non relabelled sequence $E_h$ converging in $L^1_{loc}(\mathbb{R}^N)$ to some set $E$ of locally finite perimeter and such that $\mu_{E_h}\wtos \mu_E$ as Radon measures follows at once since the sets $E_h$ have locally equibounded perimeters, thanks to Proposition~\ref{p:densityestimates} (i). The convergence, up to a subsequence, of $O_h$ to $O\in\mathcal B_{R_0}$ in $C^{1,\alpha}_{loc}$ follows from Remark~\ref{rem:BR}.
We may assume without loss of generality that $r_0<+\infty$. We now want to show that $E$ is a $(\Lambda_0, r_0)$-minimizer of $\mathcal F^O_\sigma$.
To this end, let us fix a ball $B_{r_0}(x)$, and consider a competitor $F$ for $E$ such that $E\boldsymbol{D}elta F\Subset B_{r_0}(x)$. Note that for a.e. $r<r_0$ such that $E\boldsymbol{D}elta F\Subset B_r(x)$, we have $\mathcal{H}^{N-1}(\boldsymbol{p}artial^*E\cap \boldsymbol{p}artial B_r(x))=\mathcal{H}^{N-1}(\boldsymbol{p}artial^*E_h\cap \boldsymbol{p}artial B_r(x))=0$ for all $h$ and $\mathcal{H}^{N-1}((E\boldsymbol{D}elta E_h)\cap \boldsymbol{p}artial B_r(x))\to 0$. Choose any such $r$ and set $F_h:=(F\cap B_r(x))\cup (E_h\setminus B_r(x))$. Denoting by
$O^\deltalta:=\{x\in \boldsymbol{p}artial H: d_O(x)< -\deltalta\, \textrm{sign}\,\sigma\}$, with $\deltalta>0$ and $d_O$ being the signed distance from $\boldsymbol{p}artial O$ restricted to $\boldsymbol{p}artial H$, and recalling that $E_h$ is a $(\Lambda_h, r_h)$-minimizer of $\mathcal F^{O_h}_\sigma$, we have by Remark~\ref{rem:lsc}, if $\sigma\leq0$,
\begin{align*}
\mathcal F^{O^\deltalta}_\sigma(E; B_r(x))&\leq \liminf_{h}\mathcal F^{O^\deltalta}_\sigma(E_h; B_r (x))\leq \liminf_{h}\mathcal F^{O_h}_\sigma(E_h; B_r(x))\\
&\leq
\limsup_{h}\mathcal F^{O_h}_\sigma(E_h; B_{r_0}(x))\leq \lim_{h}\big[\mathcal F^{O_h}_\sigma(F_h; B_{r_0}(x))+\Lambda_h|F_h\boldsymbol{D}elta E_h|\big]\\
&=\mathcal F^{O}_\sigma(F; B_{r_0}(x))+\Lambda_0|F\boldsymbol{D}elta E| \,,
\varepsilonnd{align*}
where we used the fact that $O_h\subset O^\deltalta$ for $h$ sufficiently large. Instead, if $\sigma>0$,
\begin{align*}
\mathcal F^{O^\deltalta}_\sigma(E; B_r(x))&\leq \liminf_{h}\mathcal F^{O^\deltalta}_\sigma(E_h; B_r (x))\\
&\leq \liminf_{h}\big(\mathcal F^{O_h}_\sigma(E_h; B_r(x))+\mathcal{H}^{N-1}((O_h\setminus O^\deltalta)\cap B_r(x))\big)\\
&\leq
\limsup_{h}\mathcal F^{O_h}_\sigma(E_h; B_{r_0}(x))+\mathcal{H}^{N-1}((O\setminus O^\deltalta)\cap B_r(x))\\
&\leq \lim_{h}\big[\mathcal F^{O_h}_\sigma(F_h; B_{r_0}(x))+\mathcal{H}^{N-1}((O\setminus O^\deltalta)\cap B_r(x))+\Lambda_h|F_h\boldsymbol{D}elta E_h|\big]\\
&=\mathcal F^{O}_\sigma(F; B_{r_0}(x))+\mathcal{H}^{N-1}((O\setminus O^\deltalta)\cap B_r(x))+\Lambda_0|F\boldsymbol{D}elta E| \,.
\varepsilonnd{align*}
Letting $\deltalta\to 0^+$ we have in both cases that
\begin{align*}
\mathcal F^{O}_\sigma(E; B_{r_0}(x))&\leq \liminf_{h}\mathcal F^{O_h}_\sigma(E_h; B_{r_0}(x))\leq
\limsup_{h}\mathcal F^{O_h}_\sigma(E_h; B_{r_0}(x))\\
&\leq \mathcal F^{O}_\sigma(F; B_{r_0}(x))+\Lambda_0|F\boldsymbol{D}elta E|\,.
\varepsilonnd{align*}
Thus,
we have proved that $E$ is a $(\Lambda_0, r_0)$-minimizer of $\mathcal F^O_\sigma$.
Choosing $F=E$ in the previous inequality, we obtain
\begin{equation}\label{e:compact1}
\mathcal F^{O_h}_\sigma(E_h; B_{r_0}(x))\to \mathcal F^{O}_\sigma(E; B_{r_0}(x))\,.
\varepsilonnd{equation}
Assume now that $\sigma\leq 0$ and observe that by the lower semicontinuity of perimeter we get
$$
\begin{aligned}
P(E; B_{r_0}(x)\setminus \overline{O^\deltalta})
&\leq \liminf_{h}
P(E_h; B_{r_0}(x)\setminus \overline{O^\deltalta}) \\
&\leq \liminf_{h}
P(E_h; B_{r_0}(x)\setminus \overline{O_h})
=
\liminf_{h}P(E_h; H\cap B_{r_0}(x))\,.
\varepsilonnd{aligned}
$$
Hence, letting $\deltalta\to 0^+$ we have
\begin{equation}\label{e:compact2}
P(E; B_{r_0}(x)\setminus \overline O )\leq \liminf_{h}P\big(E_h; H\cap B_{r_0}(x)\big)\,.
\varepsilonnd{equation}
On the other hand, by similar arguments
\begin{equation}\label{e:compact2.5}
\begin{aligned}
\limsup_hP(E_h; O_h\cap & B_{r_0}(x))=
\limsup_hP(E_h; \overline{O_h}\cap B_{r_0}(x))\\
&\leq
P(E; \overline{O}\cap B_{r_0}(x))=P(E; {O}\cap B_{r_0}(x))\,.
\varepsilonnd{aligned}
\varepsilonnd{equation}
From the above inequalities, \varepsilonqref{e:compact1}, and the fact that $\sigma\leq0$, we deduce that
\begin{equation}\label{e:compact3}
P(E_h; B_{r_0}(x))\to P(E; B_{r_0}(x))\,.
\varepsilonnd{equation}
From this convergence we deduce that $|\mu_{E_h}|\wtos |\mu_E|$ as Radon measures in $\mathbb{R}^N$.
If $\sigma>0$, we observe that
$$
\sigma P(E_h; O_h\cap B_{r_0}(x))=\sigma \mathcal{H}^{N-1}(O_h\cap B_{r_0}(x))-\sigma P(H\setminus E_h; O_h\cap B_{r_0}(x))\,.
$$
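This identity can be justified by a standard trace argument, which we sketch: since $E_h\subset H$, up to $\mathcal{H}^{N-1}$-negligible sets the points of $\boldsymbol{p}artial H$ where the trace of $1_{E_h}$ equals $1$ form the set $\boldsymbol{p}artial^*E_h\cap \boldsymbol{p}artial H$, while those where it equals $0$ form $\boldsymbol{p}artial^*(H\setminus E_h)\cap \boldsymbol{p}artial H$, and these two sets partition $\boldsymbol{p}artial H$ modulo $\mathcal{H}^{N-1}$-null sets, whence
$$
P(E_h; G)+P(H\setminus E_h; G)=\mathcal{H}^{N-1}(G)\boldsymbol{q}quad\text{for every Borel set }G\subset \boldsymbol{p}artial H\,.
$$
Applying this with $G=O_h\cap B_{r_0}(x)$ and multiplying by $\sigma$ gives the identity above.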
Therefore, arguing as in the proof of \varepsilonqref{e:compact2.5} and recalling \varepsilonqref{e:compact2} we conclude that \varepsilonqref{e:compact3} holds also in this case. The properties (i) and (ii) then follow by standard arguments using also the perimeter density estimate stated in Proposition~\ref{p:densityestimates} (iii), see also Step 4 of the proof of Theorem 2.9 in \cite{De-PhilippisMaggi15}.
Let us now prove the last part of the statement, and thus assume $\Lambda_0=0$ and that $\boldsymbol{p}artial H\setminus \overline{O}$ is connected.
Assume that there exists $x\in \boldsymbol{p}artial E\cap(\boldsymbol{p}artial H\setminus \overline{O})$. Since $E$ is a $(0, r_0)$-minimizer of the standard perimeter in $\mathbb{R}^N\setminus\overline{O}$ and $E$ lies on one side of $\boldsymbol{p}artial H$, we have that $x$ is a regular point of $\boldsymbol{p}artial E$. Indeed, every blow-up is a minimizing cone contained in a half space and thus is a half space (see for instance \cite[Lemma 3]{dm19} for a proof of this well-known fact). In turn this implies the local regularity. Then the strong maximum principle for the mean curvature equation implies that $\boldsymbol{p}artial E$ coincides with $\boldsymbol{p}artial H$ in the connected component of $\boldsymbol{p}artial H\setminus\overline O$ containing $x$, that is, in the whole $\boldsymbol{p}artial H\setminus\overline O$.
To conclude the proof observe now that if $\boldsymbol{p}artial E\cap \boldsymbol{p}artial H\subset \overline O $, then from \varepsilonqref{e:compact1}
and \varepsilonqref{e:compact2.5}, arguing as above, we conclude that
$$
P(E_h; B_r(x)\cap H)\to P(E; B_r(x)\cap H)
$$
and, in turn, $|\mu_{E_h}|\mathop{\hbox{\vrule height 6pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits H\wtos |\mu_E|\mathop{\hbox{\vrule height 6pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits H$ as Radon measures in $\mathbb{R}^N$. Now the very last part of the statement follows from \cite[Th.~3.88]{AmbrosioFuscoPallara00}.
\varepsilonnd{proof}
\section{$\varepsilonps$-regularity}\label{sec:epsreg}
This section is devoted to the proof of the $\varepsilonps$-regularity Theorem~\ref{th:epsreg} for free boundary points lying on the thin obstacle $\boldsymbol{p}artial_{\boldsymbol{p}artial H}O$. The proof is split into two subsections.
\subsection{Partial Harnack inequality}
In the following for any $a<b$ and $\alpha\in\mathbb{R}$ we set
$$
S_{a,b}^\alpha:=\Big\{x\in \mathbb{R}^N:a<x_N-\alpha x_1<b\Big\}.
$$
When $\alpha=\frac{\sigma}{\sqrt{1-\sigma^2}}$ we shall simply write
$$
S_{a,b}:=\Big\{x\in \mathbb{R}^N:a<x_N-\frac{\sigma x_1}{\sqrt{1-\sigma^2}}<b\Big\}.
$$
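We record an elementary geometric observation behind this choice of slope (implicit in the notation above): setting $\alpha=\frac{\sigma}{\sqrt{1-\sigma^2}}$, the hyperplane $\{x_N=\alpha x_1\}$ has unit normal
$$
\nu=\frac{(-\alpha, 0, \dots, 0, 1)}{\sqrt{1+\alpha^2}}\,,\boldsymbol{q}quad \nu\cdot e_1=-\sigma\,,
$$
so that it meets $\boldsymbol{p}artial H$ at a fixed angle whose cosine is $\sigma$, and the Euclidean width of the slab $S_{a,b}$ is $(b-a)/\sqrt{1+\alpha^2}=(b-a)\sqrt{1-\sigma^2}$.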
In the following we shall write a generic point $x\in\mathbb{R}^N$ as $(x',x_N)$ with $x'=(x_1,\dots,x_{N-1})$. With a slight abuse of notation we will also denote by $x'$ the generic point of $\{x_N=0\}\simeq\mathbb{R}^{N-1}$.
For $R>0$, $x'\in\mathbb{R}^{N-1}$ we set $D_R(x')=\{y'\in \mathbb{R}^{N-1}:\, |x'-y'|<R\}$ and $\mathcal C_R(x'):=D_R(x')\times\mathbb{R}$, $D^+_R(x')=D_R(x')\cap\{x_1>0\}$, $\mathcal C^+_R(x')=\mathcal C_R(x')\cap H$. When $x'=0$ we will omit the center.
\begin{lemma}\label{lm:savin}
There exist two universal constants \(\varepsilonps_0\), \(\varepsilonta_0\in (0,1/2)\), with the following properties: If $E\subset H$ is a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$,
such that for $0<r\leq1$
\begin{equation}\label{savin11}
\boldsymbol{p}artial E\cap \mathcal C_{2r}^+\subset S_{a,b}\boldsymbol{q}quad \text{with }(b-a)\leq \varepsilonps_0 r\,,
\varepsilonnd{equation}
\begin{equation}\label{rem:guido1}
\Big\{x_N<\frac{\sigma x_1}{\sqrt{1-\sigma^2}}+a\Big\}\cap\mathcal C_{2r}^+
\subset E\,,
\varepsilonnd{equation}
$\Lambda r^2<\varepsilonta_0(b-a)$ and
\begin{equation}\label{116}
\frac{r^2}{R}\leq \varepsilonta_0(b-a)\,,
\varepsilonnd{equation}
\begin{equation}\label{117}
\boldsymbol{p}artial_{\boldsymbol{p}artial H}O\cap \mathcal C_r\cap S_{a,b}\not=\varepsilonmptyset\,,
\varepsilonnd{equation}
then there exist \(a'\ge a\), \(b'\le b\) with
\[
b'-a'\le (1-\varepsilonta_0) (b-a)
\]
such that
\[
\boldsymbol{p}artial E\cap \mathcal C_{\frac{r}{2}}^+\subset S_{a', b'}.
\]
The same conclusion holds if assumptions \varepsilonqref{116} and \varepsilonqref{117} are replaced by
\begin{equation}\label{118}
\boldsymbol{p}artial_{\boldsymbol{p}artial H}O\cap \mathcal C_{2r}\cap S_{a,b}=\varepsilonmptyset\,.
\varepsilonnd{equation}
\varepsilonnd{lemma}
\begin{remark}\label{rem:guido} We start noticing that if $E\subset H$ is a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$
such that, for $0<r\leq1$, \varepsilonqref{savin11} and \varepsilonqref{rem:guido1} hold, then
\begin{equation}\label{guido1}
\Big\{x_N<\frac{\sigma x_1}{\sqrt{1-\sigma^2}}+a\Big\}\cap\mathcal C_{2r}^+
\subset E\cap \mathcal C_{2r}^+\subset \Big\{x_N<\frac{\sigma x_1}{\sqrt{1-\sigma^2}}+b\Big\}\cap\mathcal C_{2r}^+
\varepsilonnd{equation}
provided $\varepsilonps_0$ is small enough.
Indeed, if the above inclusions were not true, then $E\cap \mathcal C_{2r}^+$ would be contained in $S_{a,b}$, thus violating the volume density estimates in Proposition~\ref{p:densityestimates}.
\varepsilonnd{remark}
We will investigate the consequences of the flatness condition:
$$
\boldsymbol{p}artial E\cap \mathcal C^+_{2r}\subset S_{-\varepsilonps r, \varepsilonps r}\,.
$$
Thanks to Remark~\ref{rem:guido} we may define two functions $u^{\boldsymbol{p}m}:D^+_{2r}\to \mathbb{R}$ as
\[
\begin{split}
u^+(x')&=\max \{ x_N: (x',x_N) \in \boldsymbol{p}artial E\}
\\
u^-(x')&=\min \{ x_N: (x',x_N) \in \boldsymbol{p}artial E\}.
\varepsilonnd{split}
\]
Note that $u^+$ is upper semicontinuous and $u^-$ is lower semicontinuous. In particular we may define for every $x'\in D_{2r}\cap\{x_1=0\}$
$$
u^-(x')=\inf\{\liminf_{h\to\infty}u^-(x_h'):\,x_h'\in D_{2r}^+,\,x_h'\to x'\}
$$
and, similarly, $u^+(x')=\sup\{\limsup_{h}u^+(x_h'):\,x_h'\in D_{2r}^+,\,x_h'\to x'\}$. Observe that $\{(x',x_N):\, x'\in D_{2r}^+,\,x_N<u^-(x')\}\subset E$ and thus from the above definition it follows that
$$
\{(x',x_N):\, x'\in D_{2r}\cap\{x_1=0\},\,x_N<u^-(x')\}\subset \boldsymbol{p}artial E\cap\boldsymbol{p}artial H\,.
$$
In the following we recall the notions of viscosity super- and subsolutions.
\begin{definition}\label{viscoM}
Let $\Omega\subset\mathbb{R}^{d}$ be an open set and let $v:\Omega\to\mathbb{R}$ be a lower (upper) semicontinuous function. Given $\xi_0\in\mathbb{R}^{d}$ and a constant $\gamma\in\mathbb{R}$, we say that $v$ satisfies the inequality
\begin{equation}\label{visco1}
\boldsymbol{D}iv \biggl(\frac{\nabla v+\xi_0}{\sqrt{1+|\nabla v+\xi_0|^2}}\biggr) \leq \gamma \,\,(\geq\gamma)
\varepsilonnd{equation}
in the viscosity sense
if for any function $\varphi\in C^2(\Omega)$ such that $\varphi\leq v$ ($\varphi\geq v$) in a neighborhood of a point $x_0\in\Omega$ and $\varphi(x_0)=v(x_0)$, one has
\begin{equation}\label{visco2}
\boldsymbol{D}iv \biggl(\frac{\nabla\varphi+\xi_0}{\sqrt{1+|\nabla\varphi+\xi_0|^2}}\biggr)(x_0)\leq\gamma\,\,(\geq\gamma)\,.
\varepsilonnd{equation}
Moreover, a lower (upper) semicontinuous function $v$ in $\overline\Omega$ satisfies the Neumann boundary condition
$$
\frac{\nabla v\cdot n}{\sqrt{1+|\nabla v|^2}}\leq \gamma \,\,\,(\geq \gamma)\,, \boldsymbol{q}quad\text{on $\Gamma$}
$$
in the viscosity sense, where $\Gamma$ is a subset of $\boldsymbol{p}artial\Omega$ and $n$ stands for the inner normal to $\boldsymbol{p}artial\Omega$, if for every $\varphi\in C^2(\mathbb{R}^d)$ such that $\varphi\leq v$ ($\varphi\geq v$) in a neighborhood of a point $x_0\in\Gamma$, $\varphi(x_0)=v(x_0)$, one has that
$$
\frac{\nabla\varphi(x_0)\cdot n}{\sqrt{1+|\nabla\varphi(x_0)|^2}}\leq \gamma \,\,\,(\geq \gamma)\,.
$$
\varepsilonnd{definition}
We will also need the following restricted notions of viscosity super- and subsolutions.
\begin{definition}\label{viscorestr}
Let $v:\Omega\to\mathbb{R}$ be a lower (upper) semicontinuous function, $\xi_0\in\mathbb{R}^{d}$ and $\gamma\in\mathbb{R}$. Given $\kappa>0$, we say that $v$ satisfies the inequality \varepsilonqref{visco1} in the $\kappa$-viscosity sense if \varepsilonqref{visco2} holds for any function $\varphi\in C^2(\Omega)$ such that $\varphi\leq v$ ($\varphi\geq v$) in a neighborhood of a point $x_0\in\Omega$, $\varphi(x_0)=v(x_0)$ and $|\nabla\varphi(x_0)|\leq\kappa$.
\varepsilonnd{definition}
We now recall a crucial result, which is essentially contained in \cite{Savin}.
\begin{proposition}\label{prop:savin}
Let $\xi\in\mathbb{R}^{d}$ with $|\xi|\leq M$.
There exist two constants $C_0>1$ and $\mu_0\in(0,1)$, depending only on $M$ and $d$, with the following properties: Let $k$ be a positive integer and $\nu>0$ such that $C_0^k\nu\leq1$ and let $v:\overline B_2\to(0,\infty)$ be a lower semicontinuous function, bounded from above, satisfying
\begin{equation}\label{prop:savin1}
\boldsymbol{D}iv \biggl(\frac{\nabla v+\xi}{\sqrt{1+|\nabla v+\xi|^2}}\biggr) \leq\nu
\varepsilonnd{equation}
in the $(C_0^k\nu)$-viscosity sense.
If there exists a point $x_0\in B_{1/2}$ such that $v(x_0)\leq\nu$, then
$$
|\{v\leq C_0^k\nu\}\cap B_{1}|\geq(1-\mu_0^k)|B_1|\,.
$$
\varepsilonnd{proposition}
Note that in \cite{Savin} condition \varepsilonqref{prop:savin1} is assumed to hold in the usual viscosity sense and not in the $(C_0^k\nu)$-viscosity sense considered here. However, the proof in \cite{Savin} extends to our framework without significant changes; see the Appendix, where we provide the details for the reader's convenience.
We now prove a comparison lemma which is a variant of Lemma 2.12 in \cite{De-PhilippisMaggi15}. To this aim, in the following, given $0<a<r$ we set
$$
D_{r,a}:=D_r\cap\{|x_1|<a\}\boldsymbol{q}uad\text{and} \boldsymbol{q}uad D_{r,a}^+:=D^+_r\cap\{x_1<a\}\,.
$$
\begin{lemma}\label{lm:DPCM}
Let $E\subset H$ be a $0$-minimizer of $\mathcal{F}_{-\sigma}$ with obstacle $O\in\mathcal B_R$ for some $R>0$, let $0<\varepsilonta<r$ and let $u_0\in C^2(\overline{D_{r,\varepsilonta}^+})$ be such that
\begin{equation}\label{DPCM1}
\begin{split}
\boldsymbol{D}iv\bigg(\frac{\nabla u_0}{\sqrt{1+|\nabla u_0|^2}}\bigg)\geq0\boldsymbol{q}quad&\text{in $D_{r,\varepsilonta}^+,$} \\
\frac{\boldsymbol{p}artial_1 u_0}{\sqrt{1+|\nabla u_0|^2}}\geq\sigma\boldsymbol{q}quad &\text{on $\boldsymbol{p}artial D_{r,\varepsilonta}^+\cap\{x_1=0\}.$}
\varepsilonnd{split}
\varepsilonnd{equation}
Assume also that $E$ is bounded from below,
\begin{equation}\label{DPCM2}
E\cap[(\boldsymbol{p}artial D_{r,\varepsilonta}^+\cap\{x_1>0\})\times\mathbb{R}]\subset\{(x',x_N)\in(\boldsymbol{p}artial D_{r,\varepsilonta}^+\cap\{x_1>0\})\times\mathbb{R}:\,x_N\geq u_0(x')\}
\varepsilonnd{equation}
and that
\begin{equation}\label{DPCM3}
\mathcal{H}^{N-1}\big(\boldsymbol{p}artial E\cap[(\boldsymbol{p}artial D_{r,\varepsilonta}^+\cap\{x_1>0\})\times\mathbb{R}]\big)=0\,.
\varepsilonnd{equation}
Then, $$
E\cap(D_{r,\varepsilonta}^+\times\mathbb{R})\subset\{(x',x_N)\in D_{r,\varepsilonta}^+\times\mathbb{R}:\,x_N\geq u_0(x')\}\,.
$$
\varepsilonnd{lemma}
\begin{proof} We adapt the argument of \cite[Lemma~2.12]{De-PhilippisMaggi15}. Denote
$$
C:=D_{r,\eta}^+\times\mathbb{R}
$$
and set
$$
F^{\pm}:=\{(x', x_N)\in C:\, x_N\gtrless u_0(x')\}\,.
$$
Consider the competitor given by
$$
G:=(E\setminus C)\cup (E\cap F^+)\,,
$$
which is an admissible compact perturbation of $E$ since $E$ is bounded from below. From the minimality of $E$ we then obtain
\begin{equation}\label{compa1}
\begin{split}
\mathcal{H}^{N-1}(\partial E&\cap F^-)+\mathcal{H}^{N-1}(\partial E\cap \partial F^-\cap C\cap\{\nu_E=\nu_{F^-}\})\\
&\quad-\sigma\mathcal{H}^{N-1}(\partial E\cap \partial F^-\cap \partial H)\\
&\leq \mathcal{H}^{N-1}(E\cap \partial F^-\cap C)\,.
\end{split}
\end{equation}
Denote
$$
X(x):=\frac{1}{\sqrt{1+|\nabla u_0(x')|^2}}(\nabla u_0(x'), -1)\,,
$$
and observe that $\operatorname{div} X\geq 0$ in $C$, thanks to the first assumption in \eqref{DPCM1}. Then by the Divergence Theorem we obtain
\[
\begin{split}
0\leq\int_{E\cap F^-}\operatorname{div} X\, dx&=\int_{\partial E\cap F^-}X\cdot\nu_E\, d\mathcal{H}^{N-1}+ \int_{\partial F^-\cap E\cap C}X\cdot\nu_{F^-}\, d\mathcal{H}^{N-1}\\
&+\int_{\partial E\cap \partial F^-\cap C\cap \{\nu_E=\nu_{F^-}\}}X\cdot\nu_E\, d\mathcal{H}^{N-1}-\int_{\partial E\cap \partial F^-\cap \partial H}X\cdot e_1\, d\mathcal{H}^{N-1}\,.
\end{split}
\]
Observing that $X\cdot\nu_{F^-}=-1$ on $\partial F^-\cap C$, from the previous inequality and the second assumption in \eqref{DPCM1} we get
\[
\begin{split}
\sigma \mathcal{H}^{N-1} (\partial E\cap \partial F^-\cap \partial H)&\leq \int_{\partial E\cap \partial F^-\cap \partial H} \frac{\partial_1 u_0}{\sqrt{1+|\nabla u_0|^2}}\, d\mathcal{H}^{N-1}\\
&\leq \int_{\partial E\cap F^-}X\cdot\nu_E\, d\mathcal{H}^{N-1}+\int_{\partial E\cap \partial F^-\cap C\cap \{\nu_E=\nu_{F^-}\}}X\cdot\nu_E\, d\mathcal{H}^{N-1}\\
&\quad - \mathcal{H}^{N-1}(\partial F^-\cap E\cap C)\\
&\leq \mathcal{H}^{N-1}(\partial E\cap F^-)+\mathcal{H}^{N-1}(\partial E\cap \partial F^-\cap C\cap \{\nu_E=\nu_{F^-}\})\\
&\quad- \mathcal{H}^{N-1}(\partial F^-\cap E\cap C)\\
&\leq \sigma \mathcal{H}^{N-1} (\partial E\cap \partial F^-\cap \partial H)\,,
\end{split}
\]
where the last inequality follows from \eqref{compa1}. Thus all the inequalities above are equalities and, in turn, $X\cdot\nu_E=1$ on $\partial E\cap F^-$. We now conclude by applying the Divergence Theorem in $E\cap F^-$ to the vector field $Y(x):=(x', x'\cdot \nabla u_0(x'))$. Since $Y\cdot \nu_{E\cap F^-}=0$ $\mathcal{H}^{N-1}$-a.e.\ on $\partial E\cap F^-$ and on $\partial F^-\cap C$, recalling \eqref{DPCM2} and \eqref{DPCM3}, we obtain
$$
(N-1)|E\cap F^-|=\int_{E\cap F^-}\operatorname{div} Y\, dx=0\,,
$$
and the conclusion follows.
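Here we used two elementary facts, which we record for the reader's convenience: $Y$ is independent of the $x_N$-variable, so that $\operatorname{div} Y=N-1$, and on $\partial E\cap F^-$ one has $\nu_E=X$, whence
\[
Y\cdot\nu_E=\frac{x'\cdot\nabla u_0(x')-x'\cdot\nabla u_0(x')}{\sqrt{1+|\nabla u_0(x')|^2}}=0\,;
\]
similarly $Y\cdot\nu_{F^-}=0$ on $\partial F^-\cap C$, since there $X\cdot\nu_{F^-}=-1$, i.e.\ $\nu_{F^-}=-X$.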
\end{proof}
\begin{lemma}\label{pace}
Let $E\subset H$ be a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$. Assume also that \eqref{guido1} holds for some $a, b$, with $\frac{\sigma}{\sqrt{1-\sigma^2}}$ replaced by $\alpha$ for some $\alpha\in\mathbb{R}$ and with $r=1$, so that the functions $u^{\pm}$ are well defined on $D^+_2$. Then the functions $u^+$, $u^-$
satisfy
\begin{equation}\label{irene-1}
\begin{cases}
\displaystyle\operatorname{div} \bigg(\frac{\nabla u^+}{\sqrt{1+|\nabla u^+|^2}}\bigg) \geq-\Lambda \,\,\,\, \text{in $D^+_2$,} &
\cr
\displaystyle\operatorname{div} \biggl(\frac{\nabla u^-}{\sqrt{1+|\nabla u^-|^2}}\biggr) \leq\Lambda \,\,\,\, \text{in $D^+_2$,} &
\cr
\displaystyle\frac{\partial_1 u^+}{\sqrt{1+|\nabla u^+|^2}}\geq\sigma \,\,\,\,\text{on $\Gamma$,} &
\cr
\displaystyle\frac{\partial_1 u^-}{\sqrt{1+|\nabla u^-|^2}}\leq\sigma \,\,\,\,\text{on $\Gamma\cap\{x':\,(x',u^-(x'))\in O\}$,} &
\end{cases}
\end{equation}
in the viscosity sense, where $\Gamma:= D_2\cap\{x_1=0\}$.
\end{lemma}
\begin{proof}
{\bf Step 1.} We first prove the statement for $u^-$.
We fix $x'_0\in D^+_2$ and take a function $\varphi\in C^2(D^+_2)$ such that $x'_0$ is a strict minimum point for $u^- -\varphi$ in $D^+_2$ and $u^-(x'_0)=\varphi(x'_0)$.
Assume by contradiction that
\begin{equation}\label{irene0}
\operatorname{div} \biggl(\frac{\nabla \varphi}{\sqrt{1+|\nabla \varphi|^2}}\biggr)> \Lambda
\end{equation}
in a neighborhood of $x'_0$. For $\eta>0$ set
$$
E_\eta:=E\cup \{x\in E^c:\, x_N<\varphi(x')+\eta\}
$$
and note that, if $\eta$ is sufficiently small, $E_\eta$ is an admissible competitor for $E$.
Note that by $\Lambda$-minimality, for all but countably many such $\eta$,
\begin{align}
&\mathcal{F}_\sigma (E; \mathcal C_2) - \mathcal{F}_\sigma (E_\eta; \mathcal C_2)\nonumber \\
&=
\mathcal{H}^{N-1}(\partial E\cap \{x_N< \varphi(x')+\eta\})-\mathcal{H}^{N-1}(E^c\cap \{x_N
= \varphi(x')+\eta\})\leq \Lambda |E_\eta\setminus E|\,.\label{irene1}
\end{align}
On the other hand, setting
$$
X:=\biggl(\frac{\nabla \varphi}{\sqrt{1+|\nabla \varphi|^2}}, -\frac{1}{\sqrt{1+|\nabla \varphi|^2}}\biggr)\,,
$$
by \eqref{irene0} and \eqref{irene1} we have, if $\eta$ is small enough,
\begin{align*}
&\Lambda |E_\eta\setminus E|<\int_{E_\eta\setminus E} \operatorname{div} X\, dx\\
&=
-\int_{\partial E\cap \{x_N< \varphi(x')+\eta\}} X\cdot \nu_{E}\, d\mathcal{H}^{N-1}+
\int_{ E^c\cap \{x_N= \varphi(x')+\eta\}} X\cdot \nu_{E_\eta}\, d\mathcal{H}^{N-1}\\
&\leq \mathcal{H}^{N-1}(\partial E\cap \{x_N< \varphi(x')+\eta\})-\mathcal{H}^{N-1}(E^c\cap \{x_N
= \varphi(x')+\eta\})\leq \Lambda |E_\eta\setminus E|\,,
\end{align*}
which yields a contradiction.

{\bf Step 2.}
Let $\varphi\in C^2( D_2)$ be such that $\varphi\leq u^-$ in a neighborhood of $x'_0\in D_2\cap\{x_1=0\}$ in $D^+_2$, $\varphi(x'_0)=u^-(x'_0)$ and $(x'_0,u^-(x'_0))\in O$, so that this point lies strictly below the boundary of the obstacle. We claim that
\begin{equation}\label{decay2}
\frac{\partial_1\varphi(x'_0)}{\sqrt{1+|\nabla\varphi(x'_0)|^2}}\leq \sigma\,.
\end{equation}
We argue by contradiction, assuming that
\begin{equation}\label{decay3}
\frac{\partial_1\varphi(x'_0)}{\sqrt{1+|\nabla\varphi(x'_0)|^2}}> \sigma\,.
\end{equation}
In this case we set
\[
E_n:=n\big(E-(x'_0, u^-(x'_0))\big),\quad\! \varphi_n(x'):=n\Big[\varphi\Big(x_0'+\frac{x'}{n}\Big)-\varphi(x_0')\Big], \quad\! O_n:=n\big(O-(x'_0, u^-(x'_0))\big)\,.
\]
Observe that $E_n$ is a $(\Lambda/n, n)$-minimizer of $\mathcal{F}_\sigma$ with obstacle $O_n$. By Theorem~\ref{th:compactness} we may assume that, up to a subsequence (not relabelled), $E_n\to E_\infty$ in $L^1_{loc}(\mathbb{R}^N)$, where $E_\infty$ is a $0$-minimizer of $\mathcal{F}_\sigma$ (with obstacle $\partial H$). Moreover, from property (ii) of Theorem~\ref{th:compactness} we have that
\begin{equation}\label{zeroin}
0\in \overline{\partial E_\infty\cap H}\,.
\end{equation}
Note also that $\varphi_n\to \varphi_\infty$ locally uniformly, where $\varphi_\infty(x'):=\nabla \varphi(x'_0)\cdot x'$, and that
\begin{equation}\label{decay6}
E_\infty\supset\{(x', x_N)\in H:\, x_N<\varphi_\infty(x')\}\,.
\end{equation}
Set now, for $\varepsilon>0$ small (to be chosen),
$$
\psi_\varepsilon(x'):=\varphi_\infty(x')-\varepsilon x_1-\frac{\varepsilon^2}2|x'|^2+\frac\varepsilon2 x_1^2\,.
$$
A direct calculation shows that
$$
\frac1\varepsilon\operatorname{div} \biggl(\frac{\nabla \psi_\varepsilon}{\sqrt{1+|\nabla \psi_\varepsilon|^2}}\biggr)\to \frac{1+|\nabla \varphi(x'_0)|^2-| \partial_1\varphi(x'_0)|^2}{\big(1+|\nabla \varphi(x'_0)|^2\big)^{3/2}}>0\,,
$$
locally uniformly. Therefore, we may fix $\varepsilon>0$ so small that
\begin{equation}\label{decay4}
\operatorname{div} \biggl(\frac{\nabla \psi_\varepsilon}{\sqrt{1+|\nabla \psi_\varepsilon|^2}}\biggr)>0\quad \text{in $\overline D_1^+$}
\end{equation}
and that, recalling \eqref{decay3},
\begin{equation}\label{decay5}
\frac{\partial_1 \psi_\varepsilon}{\sqrt{1+|\nabla \psi_\varepsilon|^2}}>\sigma \quad\text{on $\partial D_1^+\cap\{x_1=0\}$.}
\end{equation}
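For completeness, we sketch the direct calculation behind the limit above. Since $\psi_\varepsilon$ differs from the affine function $\varphi_\infty$ by the quadratic correction $-\varepsilon x_1-\frac{\varepsilon^2}{2}|x'|^2+\frac{\varepsilon}{2}x_1^2$, we have
\[
D^2\psi_\varepsilon=\varepsilon\, e_1\otimes e_1-\varepsilon^2\,\mathrm{Id}\,,\qquad \Delta\psi_\varepsilon=\varepsilon-(N-1)\varepsilon^2\,,\qquad \nabla\psi_\varepsilon\to\nabla\varphi(x'_0)\ \text{locally uniformly,}
\]
so that, by the identity
\[
\operatorname{div}\biggl(\frac{\nabla \psi_\varepsilon}{\sqrt{1+|\nabla \psi_\varepsilon|^2}}\biggr)=\frac{\Delta\psi_\varepsilon\,\big(1+|\nabla\psi_\varepsilon|^2\big)-D^2\psi_\varepsilon\,\nabla\psi_\varepsilon\cdot\nabla\psi_\varepsilon}{\big(1+|\nabla\psi_\varepsilon|^2\big)^{3/2}}
\]
and $D^2\psi_\varepsilon\,\nabla\psi_\varepsilon\cdot\nabla\psi_\varepsilon=\varepsilon(\partial_1\psi_\varepsilon)^2-\varepsilon^2|\nabla\psi_\varepsilon|^2$, dividing by $\varepsilon$ and letting $\varepsilon\to0$ gives the stated limit.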
Observe that we may now choose $\eta, r\in (0,1)$ such that
\begin{equation}\label{psieta}
\psi_\varepsilon<\varphi_{\infty}\quad\text{on }\partial D_{r,\eta}\quad\text{and}\quad \mathcal{H}^{N-1}\big(\partial E_\infty \cap[(\partial D_{r,\eta}^+\cap\{x_1>0\})\times\mathbb{R}]\big)=0\,.
\end{equation}
Finally, let $w\in C_{c}^{\infty}(D_{r,\eta})$, with $w(0)>0$, and note that by \eqref{decay4} and \eqref{decay5} we may choose $\delta\in (0, 1)$ so small that,
setting $\psi_{\varepsilon, \delta}:=\psi_\varepsilon+\delta w$, we have
\begin{equation}\label{decay7.1}
\begin{split}
& \operatorname{div} \biggl(\frac{\nabla \psi_{\varepsilon, \delta}}{\sqrt{1+|\nabla \psi_{\varepsilon, \delta}|^2}}\biggr)>0\quad \text{in $ D_{r,\eta}^+$,}\\
& \frac{\partial_1 \psi_{\varepsilon,\delta}}{\sqrt{1+|\nabla \psi_{\varepsilon,\delta}|^2}}>\sigma \quad\text{on $\partial D_{r,\eta}^+\cap\{x_1=0\}$.}
\end{split}
\end{equation}
Recall that $H\setminus E_\infty$ is a $0$-minimizer of $\mathcal{F}_{-\sigma}$. This minimality property is a consequence of the fact that $E_\infty$ is a $0$-minimizer of $\mathcal{F}_{\sigma}$, which in turn follows from the assumption that $(x'_0,u^-(x'_0))$ does not touch the boundary of $O$. Moreover, by \eqref{decay6} and \eqref{psieta}, we have
$$
(H\setminus E_\infty) \cap[(\partial D_{r,\eta}^+\cap\{x_1>0\})\times\mathbb{R}]\subset\{(x',x_N)\in(\partial D_{r,\eta}^+\cap\{x_1>0\})\times\mathbb{R}:\,x_N\geq \psi_{\varepsilon,\delta}(x')\}\,.
$$
Therefore, taking into account also \eqref{decay7.1}, we can apply Lemma~\ref{lm:DPCM}, with $E$, $u_0$ replaced by $H\setminus E_\infty$, $\psi_{\varepsilon,\delta}$, respectively, to conclude that
$$
(H\setminus E_\infty) \cap(D_{r,\eta}^+\times\mathbb{R})\subset\{(x',x_N)\in D_{r,\eta}^+\times\mathbb{R}:\,x_N\geq \psi_{\varepsilon,\delta}(x')\}\,.
$$
In particular, since $\psi_{\varepsilon,\delta}(0)>0$, the latter inclusion contradicts \eqref{zeroin}. This concludes the proof of \eqref{decay2}.

{\bf Step 3.} Concerning the statement for $u^+$, the case $x'_0\in D^+_2$ is proved with the same argument used in Step~1. When $x'_0\in D_2\cap\{x_1=0\}$ we argue instead as in Step~2, replacing $E$ by $\widetilde E:=\{(x',-x_N):\, (x',x_N)\in E\}$, with the only difference that at the end of the proof we apply Lemma~\ref{lm:DPCM} to $\widetilde E_\infty$ instead of $H\setminus \widetilde E_\infty$.
\end{proof}
In the following, for $x'\in D^+_{2r}$ we set
$$
u^{\pm}_\sigma(x'):=u^{\pm}(x')-\frac{\sigma x_1}{\sqrt{1-\sigma^2}}\,.
$$
\begin{lemma}\label{lm:decay}
There exist two universal constants $C_1>1$ and $\mu_1\in(0,1)$ with the following property: Let $E\subset H$ be a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$. Assume that \eqref{guido1} holds for $a=-\varepsilon$, $b=\varepsilon$, $\varepsilon\in (0, 1)$, $r=1$, so that the functions $u^{\pm}$ are well defined on $D^+_2$. Assume also that $\Lambda<\eta \varepsilon$ with $\eta \in (0, 1)$ and
$$
O\cap S_{-\varepsilon, \varepsilon}\cap\mathcal C_2=\{x\in S_{-\varepsilon, \varepsilon}:\, x_1=0,\, x_N<\psi(x_2, \dots, x_{N-1})\}\cap\mathcal C_2\,,
$$
where $\psi\in C^{1,1}(\partial H\cap D_2)$ with
$$
\operatorname{div} \biggl(\frac{\nabla \psi+\xi_0}{\sqrt{1+|\nabla \psi+\xi_0|^2}}\biggr)\leq \eta\varepsilon\,,\qquad \xi_0:=\frac{\sigma}{\sqrt{1-\sigma^2}}e_1\,.
$$
Then the following holds:
\begin{enumerate}[\textup{(\roman*)}]
\item If there exists \(\bar x'\in D_{1/2}^+\) such that \(u^+_\sigma(\bar x')\ge (1-\eta)\varepsilon\), then
\begin{multline*}
\mathcal{H}^{N-1}(\{x'\in D_{1}^+: u^+_\sigma(x')\ge (1-C_1^k\eta)\varepsilon \})\ge (1-\mu_1^k)\mathcal{H}^{N-1} (D^+_1)
\\
\text{for all \(k\ge 1\) such that \(C_1^k\sqrt{\eta\varepsilon}\le 1\)}\,.
\end{multline*}
\vskip 0.2cm
\item If there exists \(\bar x'\in D_{1/2}^+\) such that \(u^-_\sigma(\bar x')\le -(1-\eta)\varepsilon\), then
\begin{multline*}
\mathcal{H}^{N-1}(\{x'\in D_{1}^+: \min\{u^-_\sigma(x'), \psi(x')\}\leq -(1-C_1^k\eta)\varepsilon \})\ge (1-\mu_1^k)\mathcal{H}^{N-1} (D^+_1)
\\
\text{for all \(k\ge 1\) such that \(C_1^k\sqrt{\eta\varepsilon}\le 1\)}\,,
\end{multline*}
where we have set $\psi(x')=\psi(x_2,\dots,x_{N-1})$ when $x_1>0$.
\end{enumerate}
\end{lemma}
\begin{proof}
We start by proving (ii). To this aim we split the proof into three steps.
\par\noindent
{\bf Step 1.} Set $w^-(x'):=\min\{u^-(x'),\psi(x')+\xi_0\cdot x'\}=\min\big\{u^-(x'),\psi(x')+\frac{\sigma x_1}{\sqrt{1-\sigma^2}}\big\}$.
We claim that $w^-$ satisfies
$$
\operatorname{div} \biggl(\frac{\nabla w^-}{\sqrt{1+|\nabla w^-|^2}}\biggr)\leq {\eta\varepsilon}\qquad\text{ in }D^+_2
$$
in the viscosity sense.
To this aim assume that $x'_0\in D^+_2$ is a strict minimum point for $w^- -\varphi$ in $D^+_2$, with $\varphi\in C^2(D^+_2)$ and $w^-(x'_0)=\varphi(x'_0)$. If $w^-(x'_0)=\psi(x'_0)+\xi_0\cdot x'_0$, then clearly $x'_0$ is also a minimum point of $x'\mapsto \psi(x')+\xi_0\cdot x'-\varphi(x')$ and thus
$$
\operatorname{div} \biggl(\frac{\nabla \varphi}{\sqrt{1+|\nabla \varphi |^2}}\biggr)(x'_0)\leq
\operatorname{div} \biggl(\frac{\nabla \psi+\xi_0}{\sqrt{1+|\nabla \psi+\xi_0|^2}}\biggr)(x'_0)\leq{\eta\varepsilon}\,.
$$
If otherwise $w^-(x'_0)=u^-(x'_0)$, the claim follows from Lemma~\ref{pace}, since $\Lambda<\eta\varepsilon$.

{\bf Step 2.}
We now denote by $w_\sigma^-$ the function defined on $D_2$ obtained by even reflection of
$$
\min\{u^-_\sigma, \psi\}-L_kx_1,\qquad L_k:=\Big(\frac{\sigma^+}{2\sqrt{1-\sigma^2}}+1\Big)C_1^{2k}\eta^2\varepsilon^2\,,
$$
with respect to $\partial H$, where $C_1>1$ will be chosen later and $k$ is an integer such that $C_1^k\sqrt{\eta\varepsilon}\leq1$. We claim that $w_\sigma^-$ satisfies the inequality
\begin{equation}\label{decay7}
\operatorname{div} \biggl(\frac{\nabla w^-_\sigma+\xi_0+L_ke_1}{\sqrt{1+|\nabla w^-_\sigma+\xi_0+L_ke_1|^2}}\biggr)\leq \eta\varepsilon\quad\text{ in }D_2
\end{equation}
in the $(C_1^k\eta\varepsilon)$-viscosity sense, see Definition~\ref{viscorestr}.
To this aim let $\varphi\in C^2(D_2)$ be such that $w_\sigma^- -\varphi$ has a minimum at $x'_0\in D_2$ and $|\nabla \varphi(x'_0)|\leq C_1^k\eta\varepsilon$.
We first show that $x'_0\not \in \partial H$. Indeed, assume by contradiction the opposite and assume in addition that
$u^-_\sigma(x'_0)<\psi(x'_0)$, that is, $u^-(x'_0)<\psi(x'_0)$. Then, by Lemma~\ref{pace}, using $\varphi+\big(\frac{\sigma }{\sqrt{1-\sigma^2}}+L_k\big)x_1$ as a test function, we infer that
$$
\frac{\partial_1\varphi(x'_0)+A_k}{\sqrt{1+(\partial_1\varphi(x'_0)+A_k)^2+|\nabla'\varphi(x'_0)|^2}}\leq\sigma\,,
$$
where we set $A_k:=\frac{\sigma }{\sqrt{1-\sigma^2}}+L_k$ and $\nabla'\varphi:=\nabla\varphi-(\partial_1\varphi)e_1$. From this inequality, using that $|\nabla'\varphi|\leq C_1^k\eta\varepsilon$, we easily get
$$
\partial_1\varphi(x'_0)+A_k\leq\frac{\sigma }{\sqrt{1-\sigma^2}}+\frac{\sigma^+ }{\sqrt{1-\sigma^2}}\frac{C_1^{2k}\eta^2\varepsilon^2}{2}\,.
$$
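This step can be verified directly (a short sketch, for the reader's convenience): set $t:=\partial_1\varphi(x'_0)+A_k$ and $s:=|\nabla'\varphi(x'_0)|$. The previous inequality yields $t\le\sigma\sqrt{1+t^2+s^2}$ and hence, solving for $t$,
\[
t\le \frac{\sigma}{\sqrt{1-\sigma^2}}\,\sqrt{1+s^2}\le \frac{\sigma}{\sqrt{1-\sigma^2}}+\frac{\sigma^+}{\sqrt{1-\sigma^2}}\,\frac{s^2}{2}\,,
\]
where we used $\sqrt{1+s^2}\le 1+\frac{s^2}{2}$ when $\sigma\ge0$ (for $\sigma<0$ the second term vanishes and the first bound is immediate). The claim follows since $s\le C_1^k\eta\varepsilon$.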
In turn, the last inequality implies that $\partial_1\varphi(x'_0)<0$.
On the other hand, the symmetric argument in $D_2^-$ shows that
$\partial_1\varphi (x'_0)>0$, thus leading to a contradiction.
If instead $u^-(x'_0)=\psi(x'_0)$, i.e., $u^-_\sigma(x'_0)=\psi(x'_0)$, then $x'_0$ is a minimum point for $\psi- L_kx_1
-\varphi$ in $\overline D^+_2$ and thus, in particular,
$$
\partial_1\varphi (x'_0)\leq\partial_1\psi(x'_0)-L_k=-L_k<0.
$$
Arguing symmetrically in $D^-_2$ we also get $\partial_1\varphi (x'_0)>0$, which is again a contradiction.
Thus $x'_0\in D^+_2\cup D^-_2$, and the fact that $w_\sigma^-$ satisfies \eqref{decay7} in the viscosity sense now follows easily from Step 1, since on $D^+_2$ we have $w^-_\sigma(x')=w^-(x')-\big(\frac{\sigma}{\sqrt{1-\sigma^2}}+L_k\big)x_1$.

\noindent
{\bf Step 3.}
Observe that from our assumptions we have $-\varepsilon\leq\min\{u^-_\sigma, \psi\}\leq\varepsilon$. Thus
$0<w_\sigma^- +\varepsilon+2L_k$ in $D_2$. Moreover, by assumption we have
$$
w_\sigma^- (\bar x')+\varepsilon+2L_k\leq\Big(1+\Big(\frac{\sigma^+}{\sqrt{1-\sigma^2}}+2\Big)C_1^{2k}\eta\varepsilon\Big)\eta\varepsilon\leq\Big(3+\frac{\sigma^+}{\sqrt{1-\sigma^2}}\Big)\eta\varepsilon=:\nu\,.
$$
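Indeed (a verification sketch): since $\bar x_1\ge0$ and, by assumption, $u^-_\sigma(\bar x')\le-(1-\eta)\varepsilon$, we have
\[
w_\sigma^-(\bar x')+\varepsilon+2L_k\le u^-_\sigma(\bar x')+\varepsilon+2L_k\le \eta\varepsilon+2L_k\,,
\]
and $2L_k=\big(\frac{\sigma^+}{\sqrt{1-\sigma^2}}+2\big)C_1^{2k}\eta^2\varepsilon^2$, while $C_1^{2k}\eta\varepsilon\le1$ thanks to the assumption $C_1^k\sqrt{\eta\varepsilon}\le1$.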
Therefore, from Step 2 and Proposition~\ref{prop:savin}, applied with $d=N-1$, we have that
$$
\mathcal{H}^{N-1}(\{w_\sigma^- +\varepsilon+2L_k\leq C_0^k\nu\}\cap D_1)\ge (1-\mu_0^k) \mathcal{H}^{N-1}(D_1)\,,
$$
provided that $C_0^k\nu\leq C_1^k\eta\varepsilon\leq1$, where $\mu_0$ and $C_0$ are the constants provided by Proposition~\ref{prop:savin} corresponding to $M=\frac{|\sigma|}{\sqrt{1-\sigma^2}}+\frac{\sigma^+}{2\sqrt{1-\sigma^2}}+1$. Note that the inequality
$C_0^k\nu\leq C_1^k\eta\varepsilon$ is satisfied if we take $C_1\geq C_0(3+\sigma^+(1-\sigma^2)^{-1/2})$. Thus we finally have
$$
\mathcal{H}^{N-1}(\{ \min\{u^-_\sigma, \psi\}\leq -\varepsilon+L_kx_1-2L_k+C_0^k\nu \}\cap D_1^+)\ge (1-\mu_0^k)\mathcal{H}^{N-1} (D^+_1)\,,
$$
from which conclusion (ii) follows, since $L_kx_1-2L_k+C_0^k\nu\leq C_1^k\eta\varepsilon$ in $D^+_1$.

Concerning the proof of (i), we argue as in the previous steps with $w^-$ replaced by $-u^+$ and $w^-_\sigma$ replaced by the even reflection of $-u^+_\sigma-L_k x_1$.
\end{proof}
\begin{remark}\label{rm:stupida}
Note that if $\partial_{\partial H}O\cap \mathcal C_2\cap S_{-\varepsilon,\varepsilon}=\emptyset$, the conclusion of Lemma~\ref{lm:decay} holds with $\min\{u^-_\sigma,\psi\}$ replaced by $u^-_\sigma$.
\end{remark}
\begin{remark}\label{rm:decay}
Observe that the following interior version of the previous lemma holds:
Let $\kappa$ be a positive number. There exist a constant $C_1>1$ and $\mu_1\in(0,1)$, depending only on $\kappa$, with the following property: if $E\subset H$ is a $(\Lambda,1)$-minimizer of the perimeter in $\mathbb{R}^N$ such that
$$
\partial E\cap \mathcal C_2(y')\subset S_{-\varepsilon, \varepsilon}^\alpha\qquad\text{and}\qquad
\{x_N<\alpha x_1-\varepsilon\}\cap\mathcal C_2(y')\subset E\,,
$$
for some $y'=(y_1,\dots,y_{N-1})$ with $y_1\geq2$,
$\alpha\in[-\kappa,\kappa]$, $\varepsilon\in (0, 1)$, and $\Lambda<\eta \varepsilon$ with $\eta \in (0, 1)$, then the following holds:
\begin{enumerate}[\textup{(\roman*)}]
\item If there exists \(\bar x'\in D_{1/2}(y')\) such that \(u^+(\bar x')-\alpha\bar x_1\ge (1-\eta)\varepsilon\), then
\begin{multline*}
\mathcal{H}^{N-1}(\{x'\in D_{1}(y'): u^+(x')-\alpha x_1\ge (1-C_1^k\eta)\varepsilon \})\ge (1-\mu_1^k)\mathcal{H}^{N-1} (D^+_1)
\\
\text{for all \(k\ge 1\) such that \(C_1^k\sqrt{\eta\varepsilon}\le 1\)}.
\end{multline*}
\vskip 0.2cm
\item If there exists \(\bar x'\in D_{1/2}(y')\) such that \(u^-(\bar x')-\alpha\bar x_1\le -(1-\eta)\varepsilon\), then
\begin{multline*}
\mathcal{H}^{N-1}(\{x'\in D_{1}(y'): u^-(x')-\alpha x_1\leq -(1-C_1^k\eta)\varepsilon \})\ge (1-\mu_1^k)\mathcal{H}^{N-1} (D^+_1)
\\
\text{for all \(k\ge 1\) such that \(C_1^k\sqrt{\eta\varepsilon}\le 1\)}\,.
\end{multline*}
\end{enumerate}
This clearly follows from the interior version of the previous arguments. We only remark that the uniformity of the estimates with respect to $\alpha$ varying in a bounded interval relies on the estimates provided by Proposition~\ref{prop:savin}, which are uniform with respect to $\xi$ varying in a bounded set.
\end{remark}
We also need the following lemma, which relies on the classical interior regularity theory for $(\Lambda,r_0)$-minimizers of the perimeter.
\begin{lemma}\label{covid}
For every $\delta\in(0,1)$ and $\kappa>0$ there exists $\varepsilon>0$ such that if $E$ is a $(\Lambda,1)$-minimizer of the perimeter in $\mathcal C^+_2$ with $\Lambda\leq1$ satisfying
$$
\partial E\cap \mathcal C_2^+\subset S_{-\varepsilon,\varepsilon}^\alpha,
$$
with $\alpha\in[-\kappa,\kappa]$,
then the corresponding functions $u^+$ and $u^-$ coincide in $D^+_1$ outside a set of measure less than $\delta$.
\end{lemma}
\begin{proof}
It is enough to observe the following: if $\delta'\in(0,1)$, $\varepsilon_n>0$ is a sequence converging to zero and $E_n$ is a sequence of $(\Lambda_n,r_n)$-minimizers with $\Lambda_n+\frac{1}{r_n}\leq2$ and $\partial E_n\cap \mathcal C_2^+\subset S_{-\varepsilon_n,\varepsilon_n}^{\alpha_n}$, where $\alpha_n\in[-\kappa,\kappa]$ and $\alpha_n\to\alpha$, then by classical regularity results $\partial E_n\cap \mathcal C^+_1\cap\{x_1>\delta'\}$ converges in $C^1$ to the plane $\{x_N-\alpha x_1=0\}\cap \mathcal C^+_1\cap\{x_1>\delta'\}$.
\end{proof}
\begin{lemma}\label{apparte}
For every $\delta>0$ there exists $\eta=\eta(\delta)\in(0,1/2)$ with the following property: Let $O\in\mathcal B_R$ be such that
\begin{equation}\label{apparte1}
\partial H\cap\{x_N=-\varepsilon r\}\cap \mathcal C_{2r}\subset {\overline O} \,,
\end{equation}
for some $\varepsilon\in(0,1]$ and some $r>0$ with $\frac{r}{R}\leq\eta(\delta)\varepsilon$. Assume also that
there exists a point $x'\in \partial_{\partial H}O\cap \mathcal C_r\cap\{-\varepsilon r\leq x_N\leq\varepsilon r\}$. Then the connected component of $\partial_{\partial H}O\cap \mathcal C_{2r}$ containing $x'$ is the graph of a function $\psi\in C^{1,1}(\partial H\cap D_{2r})$ such that
$$
\|D\psi\|_{L^\infty(\partial H\cap D_{2r})}\leq\varepsilon,\quad r\|D^2\psi\|_{L^\infty(\partial H\cap D_{2r})}\leq\delta\varepsilon\,.
$$
\end{lemma}
\begin{proof} By rescaling, it is enough to prove the statement for $r=1$.
Observe that, under our assumptions, if $\eta(\delta)$ is sufficiently small (hence $R$ is large), then the connected component of $\partial_{\partial H}O\cap \mathcal C_{2}$ containing $x'$ is the graph of a function in $C^{1,1}(\partial H\cap D_{2})$.
We argue by contradiction, assuming that there exist a sequence $\varepsilon_h\in(0,1]$ and a sequence $O_h\in\mathcal B_{R_h}$ with $\frac{1}{R_h}\leq\frac{\varepsilon_h}{h}$, such that $\partial H\cap\{x_N=-\varepsilon_h\}\cap \mathcal C_2\subset \overline O_h$ for all $h$, and that there exist $x'_h\in \partial_{\partial H}O_h\cap \mathcal C_1\cap\{-\varepsilon_h\leq x_N\leq\varepsilon_h\}$ such that
\begin{equation}\label{apparte2}
\|D\psi_h\|_{L^\infty(\partial H\cap D_{2})}>\varepsilon_h\quad\text{or}\quad\|D^2\psi_h\|_{L^\infty(\partial H\cap D_{2})}>\varepsilon_h\delta\,,
\end{equation}
where the graph of $\psi_h\in C^{1,1}(\partial H\cap D_{2})$ describes the connected component of $\partial_{\partial H}O_h\cap \mathcal C_2$ containing $x'_h$.
We now set $\widetilde O_h:=\Phi_h(O_h)$, where $\Phi_h(x):=(x_1,\dots,x_{N-1},\varepsilon_h^{-1}x_N)$. Since $\Phi_h$ maps balls of radius $1$ into ellipsoids with maximal semiaxis of length $\frac{1}{\varepsilon_h}$, it is easy to check that $\widetilde O_h\in\mathcal B_{\varepsilon_hR_h}$. Without loss of generality we may assume that $\Phi_h(x'_h)$ converges to a point $\bar x'\in\partial H\cap\overline{\mathcal C}_1$ with $-1\leq\bar x_N\leq1$.
Then, recalling that $\varepsilon_hR_h\to+\infty$, so that the uniform inner and outer ball conditions for $\widetilde O_h$ hold for larger and larger radii, by a compactness argument we have that the connected components of $\partial_{\partial H} \widetilde O_h$ containing $x'_h$ converge locally (up to a subsequence) in the Hausdorff sense to an $(N-2)$-dimensional plane $\pi$ passing through $\bar x'$ and such that $\pi\cap\mathcal C_2\subset\{x_N\geq-1\}$. Note that the latter inclusion follows from \eqref{apparte1} applied to $\widetilde O_h$ with $\varepsilon=1$. Note also that this inclusion, together with the fact that $\bar x'_N\leq1$, yields that the slope of $\pi$ is strictly less than $1$. Hence the functions $\frac{\psi_h}{\varepsilon_h}$ converge locally uniformly (actually in $C^{1,1}$) to an affine function whose gradient has norm strictly less than $1$. This contradicts \eqref{apparte2} for $h$ sufficiently large.
\end{proof}
\begin{proof}[Proof of \cref{lm:savin}]
By a simple rescaling argument it is enough to prove the statement for $r=1$.
Up to renaming the coordinates we can assume that \(a+b=0\), so that if we set \(\varepsilonps=(b-a)/2\) our assumption becomes
\[
\boldsymbol{p}artialrtial E\cap \mathcal C^+_{2}\subset S_{-\varepsilonps, \varepsilonps}.
\]
Let \(0< \varepsilonta_0<1\) to be fixed in an universal way. If \(\sup_{D_{1/2}^+} u^+_\sigma\le \varepsilonps-\varepsilonta_0 \varepsilonps\) (resp. if \(\inf_{D_{1/2}^+} u^-_\sigma\ge -\varepsilonps+\varepsilonta_0 \varepsilonps\)) we are done by choosing \(a'=a=-\varepsilonps\) and \(b'=\varepsilonps-\varepsilonta_0 \varepsilonps=b-\varepsilonta_0(b-a)/2\) (resp. \(b'=b=\varepsilonps\) and \(a'=-\varepsilonps+\varepsilonta_0 \varepsilonps=a+\varepsilonta_0(b-a)/2\)).
Hence we can assume by contradiction that there are \(\bar x', \hat x' \in D_{1/2}^+\) such that
\begin{equation}\label{e:trieste}
u^+_\sigma(\bar x') > \varepsilonps-\varepsilonta_0 \varepsilonps\boldsymbol{q}quad\text{and}\boldsymbol{q}quad u_\sigma^-(\hat x')< -\varepsilonps+\varepsilonta_0 \varepsilonps.
\varepsilonnd{equation}
Assume that \varepsilonqref{117} holds. Then there exists $\bar y'\in\boldsymbol{p}artialrtial_{\boldsymbol{p}artialrtial H}O\cap \mathcal C_1\cap\{-\varepsilonps<y_N<\varepsilonps\}$. Then, we may apply Lemma~\ref{apparte} to conclude that the connected component of $\boldsymbol{p}artial_{\boldsymbol{p}artial H}O\cap \mathcal C_2$ containing $\bar y'$ is the graph of a function $\boldsymbol{p}si\in C^{1,1}(\boldsymbol{p}artialrtial H\cap D_2)$, where $\|D^2\boldsymbol{p}si\|_{L^\infty(\boldsymbol{p}artialrtial H\cap D_2)}\le\deltalta\varepsilonps$ with $\deltalta=\deltalta(\sigma)$ so small that
$$
\boldsymbol{D}iv \biggl(\frac{\nabla \boldsymbol{p}si+\xi_0}{\sqrt{1+|\nabla \boldsymbol{p}si+\xi_0|^2}}\biggl)\leq \varepsilonta_0\varepsilonps\,.
$$
This is certainly true provided that $O\in\mathcal B_R$ with $\frac{1}{R}\leq\varepsilonta(\deltalta(\sigma))\varepsilonps$. Thus we can apply Lemma~\ref{lm:decay} with $k_0$ large to infer that
\[
\mathcal{H}^{N-1}(\{x'\in D^+_1: u^+_\sigma(x') \ge (1-C_1^{k_0}\varepsilonta_0)\varepsilonps \})\ge (1-\mu_1^{k_0})\mathcal{H}^{N-1} (D^+_1)\ge \frac{3}{4}\mathcal{H}^{N-1} (D^+_1)
\]
and
\begin{multline*}
\mathcal{H}^{N-1}(\{x'\in D_1^+: \min\{u_\sigma^-,\boldsymbol{p}si\}\le -(1-C_1^{k_0}\varepsilonta_0)\varepsilonps \})\\
\ge (1-\mu_1^{k_0})\mathcal{H}^{N-1} (D^+_1)\ge \frac{3}{4}\mathcal{H}^{N-1} (D^+_1),
\varepsilonnd{multline*}
provided that $C_1^{k_0}\sqrt{\varepsilonta_0}\leq1$.
If \(\boldsymbol{p}si\ge -(1-C^{k_0}_1\varepsilonta_0)\varepsilonps\) in $D_1\cap\boldsymbol{p}artialrtial H$ we are done since from Lemma~\ref{covid} we may assume that $\mathcal{H}^{N-1}(\{u^+\not=u^-\})<\mathcal{H}^{N-1} (D^+_1)/4$ and therefore from the two previous inequalities we get
\[
\mathcal{H}^{N-1}(\{x'\in D^+_1: u^+_\sigma(x') \ge (1-C_1^{k_0}\varepsilonta_0)\varepsilonps \}\cap\{u^+=u^-\})> \frac{1}{2}\mathcal{H}^{N-1} (D^+_1)
\]
and
\[
\mathcal{H}^{N-1}(\{x'\in D_1^+: \min\{u_\sigma^-,\boldsymbol{p}si\}\le -(1-C_1^{k_0}\varepsilonta_0)\varepsilonps \}\cap\{u^+=u^-\})> \frac{1}{2}\mathcal{H}^{N-1} (D^+_1)
\]
which is impossible.
Hence we only have to deal with the case where there exists a point in $D_1\cap\boldsymbol{p}artial H$ at which $\boldsymbol{p}si\le -(1-C^{k_0}_1\varepsilonta_0)\varepsilonps\le -3\varepsilonps/4\) (provided \(\varepsilonta_0\) is small). Thus, up to replacing $O$ with $O+\frac{3\varepsilonps}{4}e_N$, we may apply Lemma~\ref{apparte} in $\mathcal C_2$ with $\varepsilonps$ replaced by $\varepsilonps/4$ (and taking $\varepsilonta(\deltalta)$ smaller if needed) to conclude that $\|\nabla\boldsymbol{p}si\|_{L^\infty(D_2)}\leq\frac{\varepsilonps}{4}$. In turn this estimate implies that $\boldsymbol{p}si\leq0$ in $D_2\cap \boldsymbol{p}artialrtial H$.
Let $\Omegaega\subset \mathbb{R}^{N-1}$ be a smooth set such that $D^+_{4/3}\subset \Omegaega\subset D^+_{3/2}$ and consider the solution to the following problem:
\[
\begin{cases}
\boldsymbol{D}elta w_\mu(1+|\xi_0|^2)-D_{11}w_\mu|\xi_0|^2=-\mu\boldsymbol{q}quad &\text{in \(\Omegaega\),}
\\
w_\mu=\varphi\boldsymbol{q}quad &\text{on \(\boldsymbol{p}artialrtial \Omegaega\),}
\varepsilonnd{cases}
\]
where $\varphi$ is a smooth function such that $\varphi\varepsilonquiv 1$ on $\boldsymbol{p}artialrtial \Omegaega\cap H$ and $\varphi\varepsilonquiv 1/4$ on $D_1\cap \boldsymbol{p}artialrtial H$, and $\varphi\geq 1/4$ elsewhere on $\boldsymbol{p}artialrtial \Omegaega$ and $\mu>0$ is to be chosen. Note that $w_\mu$ is smooth on $\overline \Omegaega$ and that it converges in $C^2(\overline \Omegaega)$, as $\mu\to0$, to the function $w_0$ such that $w_0=\varphi$ on $\boldsymbol{p}artialrtial\Omegaega$ and
$$
\boldsymbol{D}elta w_0(1+|\xi_0|^2)-D_{11}w_0|\xi_0|^2=0\boldsymbol{q}quad \text{in \(\Omegaega\),}
$$
By the maximum principle there exists $\tau\in (0,1/2)$ such that $\max_{\overline D_1^+} w_0\leq 1-2\tau$. Therefore there exists $\mu_0>0$ such that
\begin{equation}\label{eq:tau}
\max_{\overline D_1^+} w_{\mu_0}\leq 1-\tau\,.
\varepsilonnd{equation}
We now claim that for $\varepsilon\leq \varepsilon_0$, with $\varepsilon_0$ depending only on $w_0$, the function $v_\varepsilon:=\varepsilon w_{\mu_0}$ satisfies
\begin{multline*}
\operatorname{div} \biggl(\frac{\nabla v_\varepsilon+\xi_0}{\sqrt{1+|\nabla v_\varepsilon+\xi_0|^2}}\biggr)\\
=\frac{\varepsilon}{(1+|\varepsilon\nabla w_{\mu_0}+\xi_0|^2)^{\frac32}}\big[\Delta w_{\mu_0} (1+|\varepsilon\nabla w_{\mu_0}+\xi_0|^2)
-D^2w_{\mu_0}(\varepsilon\nabla w_{\mu_0}+\xi_0)(\varepsilon\nabla w_{\mu_0}+\xi_0)\big]\\
\leq \varepsilon\Big(-\mu_0+C\varepsilon(1+\|w_0\|_{C^2(\overline \Omega)}^3)\Big)<-\frac{\varepsilon\mu_0}{2}< -2\eta_0\varepsilon\,,
\end{multline*}
with $C>0$ universal, provided $\varepsilon_0$ and $\eta_0$ are chosen small enough.
By our assumptions, \(u^+_\sigma\le v_\varepsilon\) on \(\partial \Omega\). Thus, recalling the first inequality in \eqref{irene-1} and the assumption on $\Lambda$, we get that
$$
\operatorname{div} \biggl(\frac{\nabla u^+_\sigma+\xi_0}{\sqrt{1+|\nabla u^+_\sigma+\xi_0|^2}}\biggr)\geq- 2\eta_0\varepsilon
$$
in the viscosity sense. By the comparison principle we conclude that
\(u^+_\sigma\le v_\varepsilon\) in \(\Omega\). In turn, recalling \eqref{eq:tau}, we infer
\[
\sup_{D_1^+} u^+_\sigma\le \sup_{D_1^+} v_\varepsilon\le (1-\tau) \varepsilon\,.
\]
This is in contradiction with \eqref{e:trieste} if \(\eta_0<\tau\).
Finally, if \eqref{118} holds, we may argue as before taking $\psi\equiv\varepsilon$.
\end{proof}
\begin{lemma}\label{lm:savinglobal}
There exist \(\varepsilon_1\), \(\eta_1\in (0,1/2)\) universal with the following property: if $E\subset H$ is a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$,
such that
\begin{equation}\label{savinglobal1}
\partial E\cap \mathcal C_{2r}^+(x')\subset S_{a,b}\qquad \text{with }b-a\leq \varepsilon_1r\,,
\end{equation}
\begin{equation}\label{rem:guido1bis}
\Big\{x_N<\frac{\sigma x_1}{\sqrt{1-\sigma^2}}+a\Big\}\cap\mathcal C_{2r}^+(x')
\subset E\,,
\end{equation}
for some $x'\in\mathbb{R}^{N-1}$ with $x_1\geq0$, $0<r\leq1$,
$\Lambda r^2<\eta_1 (b-a)$ and
$$
\frac{r^2}{R}\leq \eta_1(b-a)\,,
$$
then there exist \(a'\ge a\), \(b'\le b\) with
\[
b'-a'\le (1-\eta_1) (b-a)
\]
such that
\[
\partial E\cap \mathcal C^+_{\frac{r}{32}}(x')\subset S_{a', b'}.
\]
\end{lemma}
\begin{proof}
We start by observing that a simplified version of the arguments in the proof of Lemma~\ref{lm:savin}, see Remark~\ref{rm:decay} (or, alternatively, the standard interior regularity results for $(\Lambda,r_0)$-minimizers of the perimeter), leads to the following interior version of the lemma: there exist $\varepsilon_1\leq\varepsilon_0$, $\eta_1\leq\eta_0$, such that if \eqref{savinglobal1} and \eqref{rem:guido1bis} hold with $x_1\geq2r$ and $\Lambda r^2<\eta_1 (b-a)$, then there exist \(a'\ge a\), \(b'\le b\) with
$
b'-a'\le (1-\eta_1) (b-a)
$
such that
\begin{equation}\label{savinglobal2}
\partial E\cap \mathcal C_{\frac{r}2}(x')\subset S_{a', b'}.
\end{equation}
Therefore it is enough to prove the statement in the case $0< x_1<2r$. By rescaling, we may assume $r=1$.
If $0< x_1\leq\frac{1}{8}$, we may apply Lemma~\ref{lm:savin} with the origin replaced by the point $\bar x'=(0,x_2,\dots,x_{N-1})$ and $r=\frac12$, provided $\eta_1$ is sufficiently small, thus getting $\partial E\cap \mathcal C^+_{\frac{1}{4}}(\bar x')\subset S_{a', b'}$, hence in particular $\partial E\cap \mathcal C^+_{\frac{1}{8}}(x')\subset S_{a', b'}$.
Finally, if $x_1\geq1/8$ we simply apply the interior estimate \eqref{savinglobal2} with $r=\frac{1}{16}$.
\end{proof}
\begin{remark}\label{rm:savinglobal}
Let $\kappa>0$ be given. Observe that there exist possibly smaller $\varepsilon_1,\eta_1$, depending on $\kappa$, such that if $D_{2r}(x')\subset\{x_1>0\}$ and
$$
\partial E\cap \mathcal C_{2r}(x')\subset S_{a,b}^\alpha\qquad \text{with }b-a\leq \varepsilon_1r\,,
$$
for some $\alpha\in(-\kappa,\kappa)$ and $\Lambda r^2<\eta_1 (b-a)$, then
\[
\partial E\cap \mathcal C_{\frac{r}{32}}(x')\subset S_{a', b'}^\alpha,
\]
with
\[
b'-a'\le (1-\eta_1) (b-a)\,.
\]
Indeed, this follows by the same arguments as in the proof of Lemma~\ref{lm:savin}, taking into account Remark~\ref{rm:decay} and Lemma~\ref{covid}, which holds uniformly with respect to $\alpha\in[-\kappa,\kappa]$.
\end{remark}
\subsection{Flatness improvement and $\varepsilon$-regularity} We start with the following crucial barrier argument, which forces the solution to coincide with the boundary of the obstacle whenever the solution is too close to a plane that does not satisfy the exact optimality condition; compare with \cite{Fernandez-RealSerra20} for a similar argument.
\begin{lemma}\label{lm:barrier}
There exist universal constants \(\varepsilon_2, M_0>0\) with the following property. Assume that $E\subset H$ is a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$. Assume also that
\[
\overline{\partial E \cap H} \cap \mathcal C_{2r}\subset \{ \abs{x_N-\alpha x_1}< \varepsilon r\},\quad \{x_N<\alpha x_1-\varepsilon r\}\cap\mathcal C_{2r}^+
\subset E\,,
\]
for some \(0<\varepsilon \le \varepsilon_2\), $0<r\leq1$ with $\Lambda r<\varepsilon$ and $\frac{r}{R}\leq\eta(1)\varepsilon$ (where $\eta(1)$ is as in Lemma~\ref{apparte}), and for some $\alpha\ge M_0\varepsilon+\frac{\sigma}{\sqrt{1-\sigma^2}}$. Then
\[
\overline{\partial E \cap H}\cap \partial H\cap\mathcal C_{r/2}=\partial_{\partial H}O\cap\mathcal C_{r/2}\cap\{|x_N|<\varepsilon r\}.
\]
\end{lemma}
\begin{proof} Without loss of generality, by a rescaling argument, we may assume $r=1$.
We start by assuming that there exists $\bar y'\in\partial_{\partial H}O\cap\mathcal C_1\cap S_{-\varepsilon,\varepsilon}$. From the assumption on $O$ we get, thanks to Lemma~\ref{apparte}, that the connected component of $\partial_{\partial H}O\cap\mathcal C_{2}$ containing $\bar y'$ is the graph of a $C^{1,1}$ function $\psi$ such that
$\|D\psi\|_{L^\infty(\partial H\cap D_{2})}\leq\varepsilon$ and $\|D^2\psi\|_{L^\infty(\partial H\cap D_{2})}\leq\varepsilon$.
In particular
\begin{equation}\label{barrier0.5}
\|\psi\|_{L^\infty(\partial H\cap D_{1})}\leq3\varepsilon\,.
\end{equation}
As said, the proof is obtained via a barrier construction. To this end we fix $z'=(0,z_2,\dots,z_{N-1})$ with $|z'|\leq1/2$ and assume by contradiction that $u^-(z')<\psi(z')$. We define for $x'\in\mathbb{R}^{N-1}$
\[
w(x')=\beta x_1+\gamma\varepsilon (L x_1^2-|x'-z'|^2)+\psi(z')+\nabla\psi(z')\cdot(x'-z')+c
\]
where
\[
\beta= \frac{M_0\varepsilon}{2}+\frac{\sigma}{\sqrt{1-\sigma^2}}\,,
\]
$L$ and $\gamma$ are positive constants to be chosen,
and $c$ is the largest constant such that $w\le u^-$ in $\overline D_{1/2}^+(z')$, so that $w$ touches $u^-$ from below.
Observe that $c\leq0$, since $ w(z')=\psi(z')+c\leq u^-(z')\leq\psi(z')$, and that
\begin{equation}\label{barrier1-}
\frac1\varepsilon\operatorname{div} \biggl(\frac{\nabla w}{\sqrt{1+|\nabla w|^2}}\biggr)\to 2\gamma\frac{(L-N+1)(1+|\xi_0|^2)-(L-1)|\xi_0|^2}{(1+|\xi_0|^2)^{3/2}}
\end{equation}
uniformly in $D_2$ as $\varepsilon\to0$, where $\xi_0=\sigma e_1/\sqrt{1-\sigma^2}$. Therefore, if $L$ is chosen so large that the right-hand side of \eqref{barrier1-} is bigger than $2$, there exists $\varepsilon_2$ depending on $M_0$ such that if $0<\varepsilon<\varepsilon_2$ then
\begin{equation}\label{barrier1}
\operatorname{div} \biggl(\frac{\nabla w}{\sqrt{1+|\nabla w|^2}}\biggr)>\varepsilon\,.
\end{equation}
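For the reader's convenience, we sketch the elementary computation behind \eqref{barrier1-}; recall that $|\nabla\psi(z')|\le\varepsilon$, and here $\mathrm{Id}$ denotes the identity matrix on $\mathbb{R}^{N-1}$:
\[
\nabla w(x')=\big(\beta+2\gamma\varepsilon Lx_1\big)e_1-2\gamma\varepsilon(x'-z')+\nabla\psi(z')\longrightarrow\xi_0,
\qquad
D^2w=2\gamma\varepsilon\big(L\,e_1\otimes e_1-\mathrm{Id}\big),
\]
uniformly in $D_2$ as $\varepsilon\to0$. Hence $\Delta w=2\gamma\varepsilon(L-N+1)$ and $D^2w\,\xi_0\cdot\xi_0=2\gamma\varepsilon(L-1)|\xi_0|^2$, so that
\[
\frac1\varepsilon\operatorname{div} \biggl(\frac{\nabla w}{\sqrt{1+|\nabla w|^2}}\biggr)
=\frac1\varepsilon\,\frac{\Delta w\,(1+|\nabla w|^2)-D^2w\,\nabla w\cdot\nabla w}{(1+|\nabla w|^2)^{3/2}}
\longrightarrow 2\gamma\,\frac{(L-N+1)(1+|\xi_0|^2)-(L-1)|\xi_0|^2}{(1+|\xi_0|^2)^{3/2}}\,.
\]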
Observe that if $x'\in\partial D_{1/2}(z')\cap\{x_1\geq0\}$, then
$$
w(x')\leq\beta x_1+\frac{\gamma L\varepsilon}{2} x_1-\frac{\gamma}{4}\varepsilon+\psi(z')+\nabla\psi(z')\cdot(x'-z')< \alpha x_1-\varepsilon\leq u^-(x')\,,
$$
provided $M_0>\gamma L$ and $\gamma\geq18$, where in the last inequality we used \eqref{barrier0.5}. We claim that $w$ does not touch $u^-$ at a point $x_0\in D^+_{\frac12}(z')$. Indeed, if this were the case, by Lemma~\ref{pace}, recalling that $\Lambda$ is by assumption smaller than $\varepsilon$, we would get that
$$
\operatorname{div} \biggl(\frac{\nabla w}{\sqrt{1+|\nabla w|^2}}\biggr)(x_0)\leq\varepsilon\,,
$$
a contradiction to \eqref{barrier1}.
Thus we may conclude that $w$ touches $u^-$ at a point $\overline x'\in\{x_1=0\}\cap\partial D^+_{\frac12}(z')$. Assume by contradiction that $u^-(\overline x')<\psi(\overline x')$; then from the last inequality in \eqref{irene-1}, observing that
$$
|\nabla w(\overline x')-\partial_1w(\overline x') \,e_1|\leq C\varepsilon\,,
$$
for a constant depending only on $N$, we have
$$
\partial_1w(\overline x')\leq \frac{\sigma}{\sqrt{1-\sigma^2}}+\frac{\sigma^+}{\sqrt{1-\sigma^2}}\frac{C^2\varepsilon^2}{2}\,,
$$
which is impossible, provided that $M_0$ is sufficiently large, since
$$
\partial_1w(\overline x')=\frac{M_0\varepsilon}{2}+\frac{\sigma}{\sqrt{1-\sigma^2}}\,.
$$
Therefore, at the touching point we have
$$
0=u^-(\overline x')-w(\overline x')=\psi(\overline x')-w(\overline x')\,,
$$
hence
$$
\psi(\overline x')=\psi(z')+\nabla\psi(z')\cdot(\overline x'-z')+c-\gamma\varepsilon|\overline x'-z'|^2\,.
$$
On the other hand, recalling that $|D^2\psi|\leq \varepsilon$, we have
$$
\psi(\overline x')\geq\psi(z')+\nabla\psi(z')\cdot(\overline x'-z')-\varepsilon|\overline x'-z'|^2\,.
$$
Combining the two inequalities we get that $0\geq c\geq(\gamma-1)\varepsilon|\overline x'-z'|^2$. Therefore $\overline x'=z'$, and hence $\psi(z')=u^-(z')$, contradicting our assumption that $u^-(z')<\psi(z')$.
If instead $\partial_{\partial H}O\cap\mathcal C_1\cap S_{-\varepsilon,\varepsilon}=\emptyset$, we may repeat the same argument as before with $\psi$ replaced by $\tilde\psi\equiv\varepsilon$, obtaining that $u^\pm=\tilde\psi=\varepsilon$ on $\partial H\cap\mathcal C_{1/2}$, which is impossible by assumption.
\end{proof}
\begin{remark}\label{rm:barrier}
Note that the assumptions of the previous lemma force the obstacle $\psi$ to satisfy the inequality
$|\psi|<\varepsilon r $ in $\partial H\cap \mathcal C_{r/2}$.
\end{remark}
\begin{lemma}\label{itera}
For all \(\tau\in (0,1/2)\), \(M>0\), there exist constants \(\lambda_0=\lambda_0(M,\tau)>0\), \(C_2=C_2(M,\tau)>0\) such that for all \(\lambda\in (0, \lambda_0)\) one can find \( \varepsilon_3=\varepsilon_3(M,\tau, \lambda)>0\), \( \eta_2=\eta_2(M,\tau, \lambda)>0\) with the following property: Assume $E\subset H$ is a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$. Assume also that $0\in\overline{\partial E\cap H}$ and
$$
\overline{\partial E\cap H} \cap \mathcal C_r\subset \{ \abs{x_N-\alpha x_1}< \varepsilon r\}, \quad \{x_N<\alpha x_1-\varepsilon r\}\cap\mathcal C_{r}^+\subset E\,,
$$
for some \(0<\varepsilon \le \varepsilon_3\), $0<r\leq1$ with $\Lambda r<\eta_2\varepsilon $ and $\frac{r}{R}\leq \eta_2\varepsilon$, and for some
\(\frac{\sigma}{\sqrt{1-\sigma^2}}\le \alpha \le M \varepsilon +\frac{\sigma}{\sqrt{1-\sigma^2}}\).
Then there exist \(\bar \alpha\ge\frac{\sigma}{\sqrt{1-\sigma^2}} \) and a rotation \(R\) about the \(x_1\) axis, with \(\|R-\Id\|\le C_2\varepsilon\) and \(|\bar \alpha-\alpha|\le C_2 \varepsilon\), such that
\[
R( \overline{\partial E\cap H}) \cap\mathcal C_{\lambda r}\subset \{ \abs{x_N-\bar\alpha x_1}< \lambda^{1+\tau} \varepsilon r\}.
\]
\end{lemma}
\begin{proof} By rescaling it is enough to show the statement for $r=1$.
We argue by contradiction and assume that there exist \(\tau \) and \(M\), sequences \(\varepsilon_n\to 0\), \(\frac{\sigma}{\sqrt{1-\sigma^2}}\le \alpha_n \le M\varepsilon_n+\frac{\sigma}{\sqrt{1-\sigma^2}}\), and $(\Lambda_n,1)$-minimizers \(E_n\) with obstacles $O_n\in\mathcal B_{R_n}$, such that $0\in\overline{\partial E_n\cap H}$, $\overline{\partial E_n\cap H} \cap \mathcal C_1\subset \{ \abs{x_N-\alpha_n x_1}< \varepsilon_n\}$, $\{x_N<\alpha_n x_1-\varepsilon_n\}\cap\mathcal C_{1}^+\subset E_n$,
$\Lambda_n\leq\frac{\varepsilon_n}{n}$ and $\frac{1}{R_n}\leq\frac{\varepsilon_n}{n}$, for which the conclusion fails for all \(\bar \alpha \), \(R\) with \(|\bar \alpha-\alpha_n|\le C_2 \varepsilon_n\), \(\|R-\Id\|\le C_2\varepsilon_n\), and for a certain \(\lambda \le \lambda_0\). We will show that for a suitable choice of \(C_2\) and \(\lambda_0\), depending only on \(M\), \(\tau\) and the dimension, this leads to a contradiction.
We claim that there exist constants $C>0$ and $\beta\in(0,1]$, with $C$ depending only on $M$ and $\beta$ universal, and a sequence $\gamma_n>0$ with $\gamma_n\to0$, such that for all $x',y'\in \overline D^+_{\frac12}$ with $|x'-y'|\geq\gamma_n$,
\begin{equation}\label{itera0}
|u_{n,\sigma}^{\pm}(x')-u_{n,\sigma}^{\pm}(y')|\leq C\varepsilon_n|x'-y'|^\beta,\qquad |u_{n,\sigma}^{\pm}(x')-u_{n,\sigma}^{\mp}(y')|\leq C\varepsilon_n|x'-y'|^\beta\,,
\end{equation}
where $u^{\pm}_{n,\sigma}$ are the functions defined as $u^\pm_{\sigma}$ with $E$ replaced by $E_n$.
To prove the claim we fix $x'\in\overline D^+_{\frac12}$. Note that our assumptions imply that if $0<\varrho\leq\frac12$, then
\[
\overline{\partial E_n\cap H} \cap\mathcal C_{\varrho}(x')\subset S_{a,b}\,,
\]
with $b-a= 2(1+M)\varepsilon_n$.
We now apply Lemma~\ref{lm:savinglobal} to conclude that for all $y'\in \overline D^+_{\frac{\varrho}{64}}(x')$
\begin{equation}\label{itera1}
|u^{\pm}_{n,\sigma}(x')-u^{\pm}_{n,\sigma}(y')| <2(1+M)(1-\eta_1)\varepsilon_n,\qquad|u^{\pm}_{n,\sigma}(x')-u^{\mp}_{n,\sigma}(y')|< 2(1+M)(1-\eta_1)\varepsilon_n,
\end{equation}
provided that
\begin{equation}\label{itera2}
4(1+M)\varepsilon_n\leq \varepsilon_1 \varrho\quad\text{and}\quad \frac{\varrho^2}{n}\leq8\eta_1(1+M)\,.
\end{equation}
Thus, inequalities \eqref{itera1} hold for $n$ sufficiently large.
Note that \eqref{itera1} implies that
$$
\overline{\partial E_n\cap H} \cap \mathcal C_{\frac{\varrho}{64}}(x')\subset S_{a',b'}\quad\text{with $b'-a'\leq 2(1+M)(1-\eta_1)\varepsilon_n$.}
$$
Therefore, by applying Lemma~\ref{lm:savinglobal} $m_n-1$ more times, we conclude that for all $y'\in \overline D^+_{\frac{\varrho}{64^{j+1}}}(x')$, $j=0,1,\dots,m_n-1$,
\begin{equation}\label{itera4}
\begin{split}
&|u^{\pm}_{n,\sigma}(x')-u^{\pm}_{n,\sigma}(y')|< 2(1+M)(1-\eta_1)^{j+1}\varepsilon_n, \\
&|u^{\pm}_{n,\sigma}(x')-u^{\mp}_{n,\sigma}(y')|< 2(1+M)(1-\eta_1)^{j+1}\varepsilon_n,
\end{split}
\end{equation}
provided that
$$
4\cdot64^{j}(1+M)(1-\eta_1)^j\varepsilon_n\leq\varepsilon_1\varrho, \quad \frac{\varrho^2}{64^{2j}n}\leq8\eta_1(1-\eta_1)^{j}(1+M)\,.
$$
Note that the last inequality follows from the second one in \eqref{itera2}, since $\eta_1<\frac12$, while the first holds provided $m_n$ is the largest integer for which
$$
4\cdot64^{m_n-1}(1+M)(1-\eta_1)^{m_n-1}\varepsilon_n\leq\varepsilon_1\varrho\,.
$$
In particular, taking $\varrho=\frac12$ and applying \eqref{itera4} with $y'\in\overline D^+_{\frac{\varrho}{64^j}}(x')\setminus D^+_{\frac{\varrho}{64^{j+1}}}(x')$ and $j=1,\dots, m_n-1$, we easily get that if $y'\in \overline D^+_{1/128}(x')\setminus D^+_{\gamma_n}(x')$, with $\gamma_n=\frac{1}{2\cdot64^{m_n}}$, then
$$
|u^{\pm}_{n,\sigma}(x')-u^{\pm}_{n,\sigma}(y')|\leq C\varepsilon_n|x'-y'|^\beta,\qquad |u^{\pm}_{n,\sigma}(x')-u^{\mp}_{n,\sigma}(y')|\leq C\varepsilon_n|x'-y'|^\beta\,,
$$
for a constant $C$ depending only on $M$ and with $\beta=-\frac{\log(1-\eta_1)}{\log64}$. This proves \eqref{itera0} when $x'\in\overline D^+_{\frac12}$ and $y'\in D^+_{\frac{1}{128}}(x')$ with $|x'-y'|\geq\gamma_n$. Clearly, $\gamma_n\to0$, since $m_n\to\infty$.
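For the reader's convenience, here is a sketch of the computation giving the Hölder bound (constants are not optimized): if $y'\in \overline D^+_{\frac{\varrho}{64^{j}}}(x')\setminus D^+_{\frac{\varrho}{64^{j+1}}}(x')$, then $64^{-j}\le\frac{64}{\varrho}|x'-y'|$, and since $\beta=-\frac{\log(1-\eta_1)}{\log 64}$ means precisely $1-\eta_1=64^{-\beta}$,
\[
2(1+M)(1-\eta_1)^{j}\varepsilon_n=2(1+M)\,64^{-j\beta}\,\varepsilon_n\le 2(1+M)\Big(\frac{64}{\varrho}\Big)^{\beta}\varepsilon_n\,|x'-y'|^{\beta}\,,
\]
so \eqref{itera4} yields the stated estimate with, e.g., $C=2(1+M)128^{\beta}$ for $\varrho=\frac12$.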
In particular, if we consider the functions
\[
v^{\pm}_{n}(x')=\frac{u^{\pm}_{n,\sigma}(x')-\Big(\alpha_n-\frac{\sigma}{\sqrt{1-\sigma^2}}\Big) x_1}{\varepsilon_n}=\frac{u^{\pm}_{n}(x')-\alpha_nx_1}{\varepsilon_n},
\]
they converge (up to a subsequence) to the same H\"older continuous function \(v\) defined on \(D_{1/2}^+\) (note that \(\big|\alpha_n-\frac{\sigma}{\sqrt{1-\sigma^2}}\big|\le M\varepsilon_n\)). Furthermore \(\|v\|_{\infty}\le 1\) and $v(0)=0$, since the assumption $0\in\overline{\partial E_n\cap H}$ implies that $u^{-}_{n,\sigma}(0)\leq0\leq u^{+}_{n,\sigma}(0)$. We also assume that, up to a further subsequence,
\begin{equation}\label{gamma1000}
\frac{1}{\varepsilon_n}\Big(\alpha_n-\frac{\sigma}{\sqrt{1-\sigma^2}}\Big) \to\gamma \in [0,M]\,.
\end{equation}
We now consider two possible cases. Assume first that for all $n$ there exists $y'_n\in \partial_{\partial H}O_n\cap\mathcal C_{\frac12}\cap\{-\varepsilon_n<x_N<\varepsilon_n\}$. Then by Lemma~\ref{apparte} we have that, for $n$ large, the connected component of $\partial_{\partial H}O_n\cap\mathcal C_{1}$ containing $y_n'$ is the graph of a $C^{1,1}$ function $\psi_n$ such that
$$
\|D\psi_n\|_{L^\infty(\partial H\cap D_{1})}\leq2\varepsilon_n,\quad\|D^2\psi_n\|_{L^\infty(\partial H\cap D_{1})}\leq\delta_n\varepsilon_n\,,
$$
for a suitable sequence $\delta_n$ converging to zero.
Recall that
$$
\frac{\psi_n(y'_n)}{\varepsilon_n}\in(-1,1)\,.
$$
Therefore we may assume that, up to a subsequence,
$$
\frac{\psi_n(x')}{\varepsilon_n}\to\psi_\infty+\omega\cdot x'\quad\text{uniformly with respect to $x'\in D_{1}\cap\partial H$,}
$$
for some $\psi_\infty\in[-2,2]$ and $\omega\in\mathbb{R}^{N-1}$ of the form $(0,\omega_2,\dots,\omega_{N-1})$ with $|\omega|\leq2$.
We now claim that \(v-\omega\cdot x'\) satisfies
\begin{equation}\label{itera5}
\begin{cases}
Lw:=\Delta w(1+|\xi_0|^2)-D_{11}w|\xi_0|^2 =0\qquad &\text{in \(D_{1/2}^+\)}
\\
w\le \psi_\infty &\text{on \(D_{1/2}\cap\{x_1=0\}\)}
\\
\partial_1 w\ge -\gamma &\text{on \(D_{1/2}\cap\{x_1=0\}\)}
\\
(\psi_\infty-w)(\partial_1w+\gamma)=0 &\text{on \(D_{1/2}\cap\{x_1=0\}\)}
\end{cases}
\end{equation}
in the viscosity sense. To this aim, observe that the functions $u^\pm_{n}$ are subsolutions, respectively supersolutions, of
\[
\operatorname{div} \Biggl(\frac{\nabla w}{\sqrt{1+|\nabla w|^2}}\Biggr) =\mp\frac{\varepsilon_n}{n},
\]
thanks to Lemma~\ref{pace}. Therefore $v_{n}^-$ and $v_{n}^+$ are supersolutions, respectively subsolutions, of
$$
L_nw:=\Delta w-\frac{D^2 w (\alpha_n e_1+\varepsilon_n \nabla w)(\alpha_n e_1+\varepsilon_n \nabla w)}{1+|\alpha_n e_1+\varepsilon_n \nabla w|^2}=\pm\frac1n\sqrt{1+|\alpha_n e_1+\varepsilon_n \nabla w|^2}\,.
$$
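The form of $L_n$ follows from the standard pointwise expansion of the mean curvature operator; here is a sketch of the computation with the substitution $u=\varepsilon_n w+\alpha_n x_1$ (so that $\nabla u=\alpha_n e_1+\varepsilon_n\nabla w$ and $D^2u=\varepsilon_n D^2w$):
\[
\operatorname{div}\biggl(\frac{\nabla u}{\sqrt{1+|\nabla u|^2}}\biggr)
=\frac{\Delta u\,(1+|\nabla u|^2)-D^2u\,\nabla u\cdot\nabla u}{(1+|\nabla u|^2)^{3/2}}
=\frac{\varepsilon_n}{\sqrt{1+|\alpha_n e_1+\varepsilon_n\nabla w|^2}}\,L_n w\,,
\]
so the right-hand sides $\mp\frac{\varepsilon_n}{n}$ for $u^\pm_n$ become $L_nw=\mp\frac1n\sqrt{1+|\alpha_n e_1+\varepsilon_n\nabla w|^2}$ for $v^\pm_n$.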
Passing to the limit in the previous equation, recalling that $v_{n}^\pm$ converge to $v$ uniformly in $D^+_{\frac12}$ and using the stability of viscosity super- and subsolutions, we get that $v$, and hence $v-\omega\cdot x'$, is a viscosity solution of the first equation in \eqref{itera5}. Recalling that $v^{\pm}_{n}\leq\frac{\psi_n}{\varepsilon_n}$ on $D_{\frac12}\cap\partial H$, we get that $v(x')\leq\psi_\infty+\omega\cdot x'$ for all $x'\in D_{\frac12}\cap\partial H$, hence the second inequality in \eqref{itera5} follows for $v-\omega\cdot x'$.
To prove the third inequality we take a $C^2$ test function $\varphi$ touching $v$ from above in $D^+_{\frac12}$ at a point $\overline x'\in D_{\frac12}\cap\{x_1=0\}$. Without loss of generality we may assume that $\overline x'$ is the unique touching point. We argue by contradiction, assuming that $\partial_1\varphi(\overline x')<-\gamma$. If this is the case, the function
\begin{equation}\label{itera7}
\tilde\varphi(x')=-\tilde\gamma x_1- bx_1^2+\varphi(0,x_2,\dots,x_{N-1}),\qquad\text{with $\tilde\gamma\in(\gamma,-\partial_1\varphi(\overline x'))$}
\end{equation}
and $b>0$ to be chosen,
stays above $\varphi$, and hence above $v$, in a neighborhood of $\overline x'$. In particular $\overline x'$ is the unique touching point between $\tilde\varphi$ and $v$ in such a neighborhood. Therefore there exists a sequence $x_n'\in\overline D^+_{\frac12}$ converging to $\overline x'$ such that $x'_n$ is a local maximizer of $v_{n}^+-\tilde\varphi$. If $x_n'\in D^+_{\frac12}$ for infinitely many $n$, then, recalling the subsolution property of $v^+_n$, we have
$$
L_n\tilde\varphi(x_n')\geq-\frac1n\sqrt{1+|\alpha_n e_1+\varepsilon_n \nabla\tilde\varphi(x_n')|^2}.
$$
Thus, passing to the limit, $L\tilde\varphi(\overline x')\geq0$, which is impossible if we choose $b$ so large that $L\tilde\varphi(\overline x')<0$. Otherwise, $x_n'\in D_{\frac12}\cap\{x_1=0\}$ for infinitely many $n$. In particular, for all such $n$ the function $u^+_n-(\varepsilon_n\tilde\varphi+\alpha_nx_1)$ has a local maximum at $x'_n$. Hence by the third inequality in \eqref{irene-1} we have
$$
\varepsilon_n\partial_1\tilde\varphi(x'_n)+\alpha_n\geq \frac{\sigma}{\sqrt{1-\sigma^2}}-\frac{\sigma^-}{\sqrt{1-\sigma^2}}\frac{\varepsilon_n^2\|\nabla\tilde\varphi\|^2_{L^\infty(D^+_1)}}{2}\,.
$$
Dividing the previous inequality by $\varepsilon_n$ and recalling the definition \eqref{gamma1000} of $\gamma$, we obtain in the limit
$$
-\tilde\gamma+\gamma\geq0\,,
$$
which contradicts \eqref{itera7}. This proves the third inequality in \eqref{itera5}.
To show the last equality we take a test function $\varphi$ such that $v-\omega\cdot x'-\varphi$ has a strict local minimum at a point $\overline x'\in D_{\frac12}\cap\{x_1=0\}$ with $v(\overline x')-\omega\cdot\overline x'<\psi_\infty$. Then, arguing by contradiction as before, and using $u^-_n$ and the fourth inequality in \eqref{irene-1} in place of $u^+_n$ and the third inequality in \eqref{irene-1}, we infer that $\partial_1\varphi(\overline x')\leq-\gamma$, thus getting also the last equality in \eqref{itera5} in the viscosity sense.
Thus, the function $\overline w(x'):=\psi_\infty-v(x')+\omega\cdot x'-\gamma x_1$ solves the following Signorini-type problem
\[
\begin{cases}
\Delta\overline w(1+|\xi_0|^2)-D_{11}\overline w|\xi_0|^2 =0\qquad &\text{in \(D_{1/2}^+\)}
\\
\overline w\ge 0 &\text{on \(D_{1/2}\cap\{x_1=0\}\)}
\\
\partial_1\overline w\le0 &\text{on \(D_{1/2}\cap\{x_1=0\}\)}
\\
\overline w\,\partial_1\overline w=0 &\text{on \(D_{1/2}\cap\{x_1=0\}\)}
\end{cases}
\]
in the viscosity sense.
In particular, by the regularity estimates proved in \cite{Misi} (see also \cite{Guillen09}), we infer that there exists a universal constant \(C\) such that for all \(\lambda < 1/4\)
\[
\sup_{D^+_{\lambda}} \abs[\big]{v(x')-\nabla v(0) \cdot x'}=\sup_{D^+_{\lambda}} \abs[\big]{\overline w(x')-\overline w(0)-\nabla \overline w(0) \cdot x'}\le C \lambda^{\frac{3}{2}}\|\overline w\|_{L^\infty(D^+_{\frac12})}\le C(1+M) \lambda^{\frac{3}{2}}
\]
and
\[
\abs{\nabla v(0)}\le C\|\overline w\|_{L^\infty(D^+_{\frac12})}\leq C (1+M)\,.
\]
We first choose \(\lambda_0\) so that \(C(1+M) \lambda^{\frac{3}{2}}<\frac14\lambda^{1+\tau}\) for all \(\lambda\le \lambda_0\), which is possible since \(\tau<1/2\). Therefore, by the above estimate and the uniform convergence of $v_{n}^\pm$ to $v$, recalling that $v(0)=0$, we get for $n$ large
\begin{align*}
& (\varepsilon_n\partial_1v(0)+\alpha_n)x_1+\varepsilon_n(\nabla v(0)-\partial_1v(0)e_1)\cdot x'-\frac14\lambda^{1+\tau}\varepsilon_n
\\
&\quad <
u^-_n(x')\leq u^+_n(x')< (\varepsilon_n\partial_1v(0)+\alpha_n)x_1+\varepsilon_n(\nabla v(0)-\partial_1v(0)e_1)\cdot x'+\frac14\lambda^{1+\tau}\varepsilon_n\,.
\end{align*}
In particular, setting
\[
\bar \alpha_n=\varepsilon_n\Big(\partial_1 v(0)+\frac{\lambda^{1+\tau}}{4}\Big) +\alpha_n
\]
and
\[
\boldsymbol v_n=\varepsilon_n(\nabla v (0)-\partial_1 v(0) e_1),
\]
the set $\partial E_n\cap\mathcal C_\lambda$ is contained in the strip
$$
S_n:=\Big\{|x_N-\bar\alpha_nx_1-\boldsymbol v_n\cdot x'|<\frac12\lambda^{1+\tau}\varepsilon_n\Big\}\,.
$$
Note that, since by \eqref{itera5} $\partial_1v(0)\geq-\gamma$, recalling \eqref{gamma1000}, we have that for $n$ sufficiently large $\bar\alpha_n>\frac{\sigma}{\sqrt{1-\sigma^2}}$.
Hence, if \(R_n\) is the rotation about the \(x_1\) axis which maps the vector
\[
\frac{e_N-\boldsymbol v_n}{\abs{e_N-\boldsymbol v_n}}
\]
to $e_N$, we conclude that
$$
R_n(\overline{\partial E_n\cap H})\cap \mathcal C_\lambda\subset \{ \abs{x_N-\bar\alpha_n x_1}< \lambda^{1+\tau} \varepsilon_n\}\,,
$$
thus giving a contradiction in this case.
If instead for infinitely many $n$ we have that $\partial_{\partial H}O_n\cap\mathcal C_{\frac12}\cap\{-\varepsilon_n<x_N<\varepsilon_n\}=\emptyset$,
then reasoning as above we may infer that the function $\tilde w(x'):=v(x')+\gamma x_1$ solves the Neumann problem
\[
\begin{cases}
\Delta \tilde w(1+|\xi_0|^2)-D_{11}\tilde w|\xi_0|^2 =0\qquad &\text{in \(D_{1/2}^+\)}
\\
\partial_1\tilde w=0 &\text{on \(D_{1/2}\cap\{x_1=0\}\)}\,.
\end{cases}
\]
The same argument, now using the more standard elliptic estimates for the Neumann problem, leads to a contradiction also in this case.
\end{proof}
The next lemma is the interior counterpart of the above estimate; its proof follows from the interior version of the previous arguments, taking into account Remark~\ref{rm:savinglobal}.
\begin{lemma}\label{iteraintern}
For all \(\tau\in (0,1)\) and $\kappa>0$ there exist constants \(\lambda_1=\lambda_1(\tau, \kappa)>0\), \(C_3=C_3(\tau, \kappa)>0\) such that for all \(\lambda\in (0, \lambda_1)\) there exist \( \varepsilon_4=\varepsilon_4(\tau,\kappa, \lambda)>0\), \( \eta_3=\eta_3(\tau,\kappa, \lambda)>0\) with the following property. Let $E\subset H$ be a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O$, and assume that $\bar x\in\partial E$, $D_r(\bar x')\subset\{x_1>0\}$, $0<r\leq1$,
\[
\partial (E-\bar x) \cap \mathcal C_r\subset \{ \abs{x_N-\alpha x_1}< \varepsilon r\}\,,
\]
with \(\varepsilon \le \varepsilon_4\), for some $\alpha\in[-\kappa,\kappa]$, and
$\Lambda r<\eta_3\varepsilon$.
Then there exist \(\bar \alpha\in\mathbb{R} \) and a rotation \(R\) about the $x_1$ axis, with \(\|R-\Id\|\le C_3\varepsilon\) and \(|\bar \alpha-\alpha|\le C_3 \varepsilon\), such that
\[
\partial R(E-\bar x) \cap \mathcal C_{\lambda r}\subset \{ \abs{x_N-\bar\alpha x_1}< \lambda^{1+\tau} \varepsilon r\}.
\]
\end{lemma}
The next lemma deals with the case in which \(\partial_{\partial H}E\) fully coincides with the obstacle; it is a simple application of the boundary regularity theory, and it will be needed in the situations where Lemma~\ref{lm:barrier} applies.
\begin{lemma}\label{iteraostac}
For all \(\tau\in (0,1)\), $\kappa>0$, there exist constants \(\lambda_2=\lambda_2(\tau, \kappa)>0\), \(C_4=C_4(\kappa)>0\) such that for all \(\lambda\in (0, \lambda_2)\) one can find \( \varepsilon_5=\varepsilon_5(\tau, \kappa, \lambda)>0\) with the following property: Assume that $E\subset H$ is a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$. Assume also that
$\bar x\in\partial H\cap\overline{\partial E\cap H}$ and
\[
(\partial E-\bar x) \cap \mathcal C_r^+\subset \{ \abs{x_N-\alpha x_1}< \varepsilon r\}, \quad \{x_N<\alpha x_1-\varepsilon r\}\cap\mathcal C_{r}^+\subset E-\bar x\,,
\]
for some \(\varepsilon \le \varepsilon_5\),
$0<r\leq 1$ with $\Lambda r<\varepsilon $ and $\frac{r}{R}<\eta(1)\varepsilon$ (where $\eta(1)$ is the constant provided in Lemma~\ref{apparte}), and for some $ \alpha \in[-\kappa,\kappa]$. Finally assume that
$$
\partial H\cap\overline{(\partial E-\bar x)\cap H}\cap\mathcal C_r= \partial_{\partial H}(O-\bar x)\cap\mathcal C_r\cap\{-\varepsilon r<x_N<\varepsilon r\}\,.
$$
Then there exist \(\bar \alpha\geq\frac{\sigma}{\sqrt{1-\sigma^2}} \) and a rotation \(R\) about the \(x_1\) axis, with \(\|R-\Id\|\le C_4\varepsilon\) and \(|\bar \alpha-\alpha|\le C_4 \varepsilon\), such that
\[
R( \partial E-\bar x) \cap\mathcal C_{\lambda r}^+\subset \{ \abs{x_N-\bar\alpha x_1}< \lambda^{1+\tau} \varepsilon r\}.
\]
\end{lemma}
\begin{proof}
By rescaling and translating we may assume $r=1$ and $\bar x=0$. Denote by $\mathcal E$ the cylindrical excess
$$
\mathcal E(E;r)=\frac{1}{r^{N-1}}\int_{\partial E\cap \mathcal C_r^+}|\nu_E(x)-\nu_\alpha|^2\,d\mathcal{H}^{N-1}(x)\,,
$$
where $\nu_\alpha$ is the normal to the hyperplane $\{x_N-\alpha x_1=0\}$ pointing upward. Fix $\delta\in(0,1)$. We claim that there exists $\varepsilon_5$ such that if the assumptions are satisfied for $r=1$, then
\begin{equation}\label{iteraostac1}
\mathcal E\Big(E;\frac12\Big)\leq\delta\,.
\end{equation}
To prove this claim we observe that if $E_n$ is a sequence of $(\Lambda_n,1)$-minimizers satisfying the assumptions with $\varepsilon=\varepsilon_n\to0$, then by Theorem~\ref{th:compactness} we have that, up to a subsequence, $|(E_n\Delta F)\cap \mathcal C^+_{\frac12}|\to0$ and $|\mu_{E_n}|\mathop{\hbox{\vrule height 6pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits \mathcal C^+_{\frac12}\wtos |\mu_F|\mathop{\hbox{\vrule height 6pt width .5pt depth 0pt \vrule height .5pt width 6pt depth 0pt}}\nolimits \mathcal C^+_{\frac12}$, where $F=\{x_N-\alpha x_1\leq0\}$. Thus,
$$
\mathcal E\Big(E_n;\frac12\Big)\to \mathcal E\Big(F;\frac12\Big)=0\,.
$$
Hence \eqref{iteraostac1} follows by a compactness argument.
We recall that, thanks to Lemma~\ref{apparte}, there exists $\eta(1)>0$ such that if $\frac{1}{R}<\eta(1)\varepsilon$, then $\partial_{\partial H}O\cap \mathcal C_1$ is the graph of a function $\psi\in C^{1,1}(D_1)$ such that
$$
\|\nabla\psi\|_{L^\infty(\partial H\cap D_1)}\leq2\varepsilon,\quad \|D^2\psi\|_{L^\infty(\partial H\cap D_1)}\leq2\varepsilon\,.
$$
Observe that there exists $\mu(\kappa)\in(0,1)$ such that if $\Sigma\subset\mathcal C^+_{\frac14}$ can be described as the graph over $\{x_N-\alpha x_1=0\}$ of a $C^1$ function $g$ with $\|\nabla g\|_{\infty}\leq\mu(\kappa)$, then
$\Sigma$ can also be written as the graph over $D^+_{\frac14}$ of a $C^1$ function $f$ with $\|\nabla f\|_{\infty}\leq2\kappa$.
By \cite[Theorem~6.1]{DuzaarSteffen02}\footnote{Theorem 6.1 in \cite{DuzaarSteffen02} is stated and proved for almost minimizing currents. However, it is well known to the experts that the methods of the proof extend without significant changes to the framework of almost minimizing sets of finite perimeter.} there exists $\bar\delta=\bar\delta(\kappa)$ such that if
$$
\mathcal E\Big(E;\frac12\Big)+\Lambda+ \|D^2\psi\|_\infty \leq \bar\delta\,,
$$
then $\partial E\cap \mathcal C^+_{\frac14}$ is a graph with respect to the hyperplane $\{x_N-\alpha x_1=0\}$ of a $C^{1,\gamma}$ function $g$ with $\|\nabla g\|_{C^{0,\gamma}}\leq\mu(\kappa)$, for some universal $\gamma>0$.
In particular, $\partial E\cap \mathcal C^+_{\frac14}$ is the graph over $D^+_{\frac14}$ of a $C^{1,\gamma}$ function $f$ with
$\|\nabla f\|_{C^{0,\gamma}}\leq C(\kappa)$. Then, observing that the function $w(x')=f(x')-\alpha x_1$ is a solution of
$$
\operatorname{div} \biggl(\frac{\nabla w+\alpha e_1}{\sqrt{1+|\nabla w+\alpha e_1|^2}}\biggr) =h
$$
with $|h|\leq\Lambda$,
standard regularity estimates for solutions of the mean curvature equation imply that for all $s\in(0,1)$ there exists a constant $C_{s,\kappa}$, depending only on $s$ and $\kappa$, such that
\begin{equation}\label{iteraostac2}
\|f-\alpha x_1\|_{C^{1,s}(D^+_{\frac18})}\leq C_{s,\kappa}\big(\|f-\alpha x_1\|_{L^\infty(D^+_{\frac14})}+\|\psi\|_{C^{1,s}(D^+_{\frac14})}\big)\leq C'_{s,\kappa}\varepsilon\,,
\end{equation}
where the last inequality follows from the fact that $\|\psi\|_{C^{1,1}(D^+_{\frac14})}\leq C\big(\|D^2\psi\|_{L^\infty(D^+_{\frac14})}+\|\psi\|_{L^\infty(D^+_{\frac14})}\big)$ for a universal constant $C$.
Let us fix $\tau\in(0,1)$ and take $s=(1+\tau)/2$. From the previous estimate, since $f(0)=0$, we have that for all $\lambda<1/8$
\[
\sup_{D^+_{\lambda}} \abs[\big]{f(x')-\nabla f(0) \cdot x'}\le C'_{s,\kappa} \lambda^{1+s}\varepsilon<\frac14\lambda^{1+\tau}\varepsilon\,,
\]
provided that $\lambda<\lambda_2(\tau,\kappa)$.
We take
\[
\tilde \alpha=\partial_1f(0)
\]
and
\[
\boldsymbol v=\nabla f (0)-\partial_1 f(0) e_1=\nabla\psi(0)\,,
\]
where the last equality follows from the fact that by assumption $f=\psi$ on $\partial H\cap\mathcal C_1$. Thus
$R(\partial E)\cap \mathcal C_\lambda^+$ is contained in the strip
$$
S:=\Big\{|x_N-\tilde\alpha x_1|<\frac12\lambda^{1+\tau}\varepsilon\Big\}\,,
$$
where \(R\) is the rotation around \(x_1\) which maps the vector
\[
\frac{e_N-\boldsymbol v}{\abs{e_N-\boldsymbol v}}
\]
to $e_N$, provided $\varepsilon$, hence $\varepsilon_5$, is sufficiently small, depending on $\lambda$. Note that, recalling that $|\boldsymbol v|=|\nabla\psi(0)|\leq C\varepsilon$, the choice of $\tilde\alpha$ and \eqref{iteraostac2}, we have
$$
\|R-\Id\|\le C_4\varepsilon, \quad |\tilde\alpha-\alpha|\le C_4 \varepsilon\,,
$$
for a sufficiently large $C_4$ depending only on $\kappa$.
Since $E$ is a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ and $\partial E\cap\mathcal C^+_{\frac14}$ is of class $C^{1,\tau}$ up to the boundary, a standard first variation argument, see also Lemma~\ref{pace}, gives
$$
\frac{\partial_1f(0)}{\sqrt{1+|\nabla f(0)|^2}}\geq \sigma\,.
$$
From this inequality, since $|\nabla f(0)-\tilde\alpha e_1|=|\nabla\psi(0)|\leq2\varepsilon$, we get
$$
\tilde\alpha=\partial_1f(0)\geq \frac{\sigma}{\sqrt{1-\sigma^2}}-\frac{\sigma^-}{\sqrt{1-\sigma^2}}\frac{|\nabla\psi(0)|^2}{2}\,.
$$
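For the reader's convenience, here is the computation behind the previous display in the simpler case $\sigma\geq0$ (so that $\sigma^-=0$); the case $\sigma<0$ produces the quadratic correction in $|\nabla\psi(0)|$ stated above. Writing $b:=|\nabla\psi(0)|$, so that $|\nabla f(0)|^2=\tilde\alpha^2+b^2$, the first variation inequality gives
$$
\tilde\alpha\,\geq\,\sigma\sqrt{1+\tilde\alpha^2+b^2}\,\geq\,\sigma\sqrt{1+\tilde\alpha^2}\,.
$$
For $\sigma\geq0$ this forces $\tilde\alpha\geq0$, and squaring yields $\tilde\alpha^2(1-\sigma^2)\geq\sigma^2$, that is, $\tilde\alpha\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$.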
From the last inequality, setting $\bar\alpha=\max\big\{\tilde\alpha,\frac{\sigma}{\sqrt{1-\sigma^2}}\big\}$, we finally get
$$
R(\partial E)\cap\mathcal C_{\lambda}^+\subset S\cap\mathcal C_{\lambda}^+\subset\Big\{|x_N-\bar\alpha x_1|<\frac12\lambda^{1+\tau}\varepsilon+\frac{2\sigma^-\varepsilon^2}{\sqrt{1-\sigma^2}}\Big\}\subset\big\{|x_N-\bar\alpha x_1|<\lambda^{1+\tau}\varepsilon\big\}\,,
$$
provided $\varepsilon_5$ is chosen sufficiently small.
\end{proof}
We can now prove the following
\begin{lemma}\label{lm:epsilon} Let $\tau\in(0,1/2)$. There exist $\bar\lambda=\bar\lambda(\tau)\in(0,1/2)$ and $\bar C=\bar C(\tau)$ such that for all $\lambda\in(0,\bar\lambda)$ it is possible to find \( \bar\varepsilon=\bar\varepsilon(\lambda,\tau)\in(0,\frac12)\), $\bar\eta=\bar\eta(\lambda,\tau)\in(0,\frac12)$ with the following property:
Assume $E\subset H$ is a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$ and let $y\in \overline{\partial E\cap H}$ be such that
$$(\overline{\partial E\cap H}-y)\cap \mathcal C_\varrho\subset S_{-\varepsilon\varrho,\varepsilon\varrho}^\alpha,
\qquad \{x_N<\alpha x_1-\varepsilon\varrho\}\cap\mathcal C_{\varrho}\subset E-y\,,
$$
for some $0<\varrho\leq1$, $0<\varepsilon\leq\bar\varepsilon$, $\alpha\in[\frac{\sigma}{\sqrt{1-\sigma^2}}-1,\frac{\sigma}{\sqrt{1-\sigma^2}}+1]$ (and $\alpha\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$ if $0\leq y_1<\frac{\lambda\varrho}{16}$),
with $\Lambda\varrho<\bar\eta\varepsilon$, $\frac{\varrho}{R}\leq\bar\eta\varepsilon$. Then there exist a sequence of rotations $R_k=R_k(y)$, $R_0=I$, and a sequence $\alpha_k=\alpha_k(y)\in\mathbb{R}$, $\alpha_0=\alpha$, such that, setting $\varrho_k=\frac{\lambda^{k+1}\varrho}{16}$,
\begin{equation}\label{epsilon1}
\|R_{k+1}-R_k\|\leq \frac{\bar C}{\lambda}\lambda^{k\tau}\varepsilon,\qquad |\alpha_{k+1}-\alpha_k|\leq \frac{\bar C}{\lambda}\lambda^{k\tau}\varepsilon
\end{equation}
and
\begin{equation}\label{epsilon2}
R_{k}(\overline{\partial E\cap H}-y) \cap\mathcal C_{\varrho_k}\subset \big\{ \abs{x_N-\alpha_k x_1}< \lambda^{k(1+\tau)}\varrho\varepsilon \big\}.
\end{equation}
Moreover $\alpha_k(y)\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$ whenever $0\leq y_1<\frac{\lambda^k\varrho}{16}$.
\end{lemma}
\begin{proof} By rescaling we may assume $\varrho=1$. We fix $\tau\in(0,1/2)$.
{\bf Step 1.} We first assume that $y\in\partial H$ and that $\alpha\leq\frac{\sigma}{\sqrt{1-\sigma^2}}+M_0\varepsilon$, where $M_0$ is the constant in Lemma~\ref{lm:barrier} and $\varepsilon\leq\bar\varepsilon$. Thus we may apply Lemma~\ref{itera}, taking $\lambda\leq\lambda_0(M_0,\tau)$ and assuming that $\bar\varepsilon\leq\varepsilon_3$ and $\bar\eta\leq\eta_2$. We recall that both these constants depend on $M_0,\tau$ and $\lambda$. Then we get a rotation $R_1$ and $\alpha_1\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$ satisfying
$$
\|R_{1}-I\|\leq C_2\varepsilon,\qquad |\alpha_{1}-\alpha|\leq C_2\varepsilon\,,
$$
\[
R_1( \overline{\partial E\cap H}-y) \cap\mathcal C_{\lambda}\subset \big\{ \abs{x_N-\alpha_1 x_1}< \lambda^{1+\tau}\varepsilon\big\}.
\]
If $\alpha_1\leq
\frac{\sigma}{\sqrt{1-\sigma^2}}+M_0\lambda^\tau\varepsilon$, we can apply Lemma~\ref{itera} again, with $r=\lambda$ and $\varepsilon$ replaced by $\lambda^\tau\varepsilon$, to get a rotation $\tilde R$ with $\|\tilde R-I\|\leq C_2\lambda^\tau\varepsilon$ and $\alpha_2\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$ such that, setting $R_2=\tilde R\circ R_1$, $\|R_{2}-R_1\|\leq C_2\lambda^{\tau}\varepsilon$, $|\alpha_{2}-\alpha_1|\leq C_2\lambda^{\tau}\varepsilon$ and
\[
R_2( \overline{\partial E\cap H}-y) \cap\mathcal C_{\lambda^2}\subset \big\{ \abs{x_N-\alpha_2 x_1}<\lambda^{2(1+\tau)} \varepsilon\big\}.
\]
We may now iterate this procedure and get a sequence of rotations $R_k$ and a sequence $\alpha_k\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$ such that
$$
\|R_{k+1}-R_{k}\|\leq C_2\lambda^{k\tau}\varepsilon,\qquad |\alpha_{k+1}-\alpha_{k}|\leq C_2\lambda^{k\tau}\varepsilon
$$
and
$$
R_{k+1}( \overline{\partial E\cap H}-y) \cap\mathcal C_{\lambda^{k+1}}\subset \big\{ \abs{x_N-\alpha_{k+1} x_1}< \lambda^{(k+1)(1+\tau)}\varepsilon\big\}
$$
hold as long as $\alpha_{k}\leq\frac{\sigma}{\sqrt{1-\sigma^2}}+M_0\lambda^{k\tau}\varepsilon$. If the latter inequality is satisfied for every $k$, we get the conclusion with $\frac{\bar C}{\lambda}$ replaced by $C_2$ and with $\varrho_{k}$ replaced by $\lambda^{k+1}$, and thus also by $\lambda^{k+1}/16$.
Otherwise, let $\bar k\in\mathbb N\cup\{0\}$ be the first integer such that $\alpha_{\bar k}>\frac{\sigma}{\sqrt{1-\sigma^2}}+M_0\lambda^{\bar k\tau}\varepsilon$. In this case, assuming that $\bar\varepsilon\leq\varepsilon_2$, Lemma~\ref{lm:barrier} yields that
\[
R_{\bar k}( \overline{\partial E\cap H}-y) \cap \partial H\cap\mathcal C_{\frac{\lambda^{\bar k}}{4}}=\partial_{\partial H}O\cap\mathcal C_{\frac{\lambda^{\bar k}}{4}}\cap\{|x_N|<\lambda^{\bar k(1+\tau)}\varepsilon\}\,,
\]
provided that we also enforce $\bar\eta\leq\eta(1)$.
Observe that from the previous iteration argument we know in particular that
$$
R_{\bar k}( \overline{\partial E\cap H}-y) \cap\mathcal C_{\frac{\lambda^{\bar k}}{4}}\subset \big\{ \abs{x_N-\alpha_{\bar k} x_1}< \lambda^{{\bar k}(1+\tau)}\varepsilon \big\}.
$$
We may now use Lemma~\ref{iteraostac} with $\tilde\varepsilon=4\lambda^{\bar k \tau}\varepsilon$ and $r=\lambda^{\bar k}/4$, provided that $4\bar\varepsilon\leq \varepsilon_5$ and that we have chosen from the beginning $\lambda\leq\lambda_2(\tau,\kappa)$, where $\kappa:=\frac{|\sigma|}{\sqrt{1-\sigma^2}}+2$. Indeed, since $\alpha\leq\frac{|\sigma|}{\sqrt{1-\sigma^2}}+1$, we have that $\alpha_{\bar k}\in[-\kappa,\kappa]$, since
\begin{equation}\label{epsilon3}
| \alpha_{\bar k}-\alpha|\leq C_2\bar\varepsilon\sum_{n=0}^\infty\lambda^{\tau n}<1
\end{equation}
by taking $\bar\varepsilon$ smaller if needed. Thus there exist a rotation \(R_{\bar k+1}\) with \(\|R_{\bar k+1}-R_{\bar k}\|\le 4C_4\lambda^{\bar k \tau}\varepsilon\) and $\alpha_{\bar k+1}\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$ with \(|\alpha_{\bar k+1}-\alpha_{\bar k}|\le 4C_4\lambda^{\bar k \tau}\varepsilon\), such that
\[
R_{\bar k+1}( \overline{\partial E\cap H}-y) \cap\mathcal C_{\frac{\lambda^{\bar k+1}}{4}}^+\subset \big\{ \abs{x_N-\alpha_{\bar k+1} x_1}<\lambda^{(\bar k+1)(1+\tau)} \varepsilon \big\}.
\]
At this point we keep iterating the previous argument by applying Lemma~\ref{iteraostac}, getting for all $k>\bar k$ a sequence $R_k$ and a sequence $\alpha_k\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$ satisfying \eqref{epsilon2} (even with $\varrho_k$ replaced by $\lambda^k/4$) and \eqref{epsilon1} with $\frac{\bar C}{\lambda}$ replaced by $4C_4$. Note that, arguing as for \eqref{epsilon3}, we may ensure that during this iteration process $\alpha_k\in[-\kappa,\kappa]$, provided that we choose $\bar\varepsilon$ smaller if needed, depending on $\lambda$ and $\kappa$.
{\bf Step 2.} Let us now assume that $y\in\partial E\cap H$. If $y_1\geq\frac{\lambda}{16}$, since by assumption we have
\[
\partial (E-y) \cap\mathcal C_{\frac{\lambda}{16}}\subset \big\{ \abs{x_N-\alpha x_1}<\varepsilon\big\}\,,
\]
we may apply Lemma~\ref{iteraintern} iteratively, choosing $\lambda<\lambda_1(\tau,\kappa)$, with $\kappa$ as above, and taking $\bar\varepsilon\leq\frac{\lambda}{16}\varepsilon_4$ and $\bar\eta\leq\eta_3$, where we recall that the constants $\eta_3, \varepsilon_4$ depend on $\tau,\kappa$ and $\lambda$. In this way we get the conclusion with $\varrho_k=\frac{\lambda^{k+1}}{16}$ and with a sequence of rotations $R_k$ and a sequence $\alpha_k$ such that \(\|R_{k+1}-R_k\|\le \frac{16}{\lambda}C_3\lambda^{k\tau}\varepsilon\), \(|\alpha_{k+1}-\alpha_k|\le \frac{16}{\lambda}C_3\lambda^{k\tau}\varepsilon\).
Note that, arguing as above, choosing $\bar\varepsilon$ smaller if needed, we may ensure that all the $\alpha_k$ remain in the interval $[-\kappa,\kappa]$.
{\bf Step 3.} If $y_1<\frac{\lambda}{16}$, we denote by $\hat k$ the last integer such that $y_1<\frac{\lambda^{\hat k}}{16}$, and by $\bar y$ the point $\bar y=(0,y_2,\dots,y_{N-1},y_N-\alpha y_1)$. Observe that by assumption this point satisfies
$$
( \overline{\partial E\cap H}-\bar y)\cap \mathcal C_{\frac34}\subset S_{-\varepsilon,\varepsilon}^\alpha\,.
$$
Hence from Step 1 we have that for all $k=0,1,\dots,\hat k$ there exist a radius $r_k\in\{\frac{3}{16}\lambda^k,\frac34\lambda^k\}$, a rotation $R_k$ and a number $\alpha_k$
such that
$$
\|R_{k+1}-R_k\|\leq \frac43\bar C\lambda^{k\tau}\varepsilon,\qquad |\alpha_{k+1}-\alpha_k|\leq \frac43\bar C\lambda^{k\tau}\varepsilon,
$$
with $\bar C=\max\{C_2,4C_4\}$,
and
$$
R_{k}( \overline{\partial E\cap H}-\bar y) \cap\mathcal C_{r_k}\subset \big\{ \abs{x_N-\alpha_k x_1}< \lambda^{k(1+\tau)} \varepsilon \big\}.
$$
In particular, for $k=1,\dots,\hat k$,
$$
R_{k}(\overline{\partial E\cap H}- y) \cap\mathcal C_{\frac{\lambda^k}{8}}\subset \big\{ \abs{x_N-\alpha_k x_1}< \lambda^{k(1+\tau)}\varepsilon \big\}\,,
$$
and hence
$$
R_{\hat k}( \overline{\partial E\cap H}- y) \cap\mathcal C_{\frac{\lambda^{\hat k+1}}{16}}\subset \big\{ \abs{x_N-\alpha_{\hat k} x_1}< \lambda^{\hat k(1+\tau)}\varepsilon \big\}\,.
$$
Note that, as already observed in Step 1, $\alpha_k\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$ for all $k=1,\dots,\hat k$.
Observing that the cylinder $\mathcal C_{\frac{\lambda^{\hat k+1}}{16}}(y)\subset H$, we may start from this cylinder and argue as in the proof of Step~2 to conclude.
\end{proof}
\begin{theorem}[\(\varepsilon\)-regularity theorem]\label{th:epsreg}
There exists \(\widehat\varepsilon >0\) with the following property. If $E\subset H$ is a $(\Lambda,1)$-minimizer of $\mathcal{F}_\sigma$ in $\mathbb{R}^N$ with obstacle $O\in\mathcal B_R$, and $\bar x\in\overline{\partial E\cap H}\cap\partial H$ is such that
\[
(\overline{\partial E\cap H}-\bar x) \cap \mathcal C_r\subset S_{-\widehat\varepsilon r,\widehat\varepsilon r}\quad\text{and}\quad \Big\{x_N<\frac{\sigma x_1}{\sqrt{1-\sigma^2}}-\widehat\varepsilon r\Big\}\cap\mathcal C_{r}^+\subset E-\bar x\,,
\]
where $0<r\leq1$,
$\Lambda r<\widehat\varepsilon$ and $\frac{r}{R}\leq\widehat\varepsilon$,
then $M:=\overline{\partial E\cap H} \cap\mathcal C_{\frac r2}(\bar x')$ is a hypersurface (with boundary) of class $C^{1,\tau}$ for all $\tau\in(0,\frac12)$. Moreover,
\begin{align*}
\nu_{E}\cdot \nu_{H}\ge \sigma \quad &\text{on} \quad M\cap \partial H; \\
\nu_{E}\cdot \nu_{H}= \sigma \quad &\text{on} \quad (M\cap \partial H)\setminus \partial_{\partial H} O.
\end{align*}
\end{theorem}
\begin{proof}
We may assume $\bar x=0$.
{\bf Step 1.} We claim that given $\tau$ there exists $\hat\varepsilon=\hat\varepsilon(\tau)$ such that if the assumptions are satisfied for such $\hat\varepsilon$, then $\partial E$ is of class $C^{1,\tau}$ in $\mathcal C^+_{\frac{r}{2}}(\bar x')$ uniformly up to $\partial H$.
To this aim we may assume without loss of generality that $r=1$.
We fix $\tau\in(0,1/2)$ and let $\bar\lambda$ and $\bar C$ be as in Lemma~\ref{lm:epsilon}. Fix $\lambda\in(0,\bar\lambda)$. Let $\bar\varepsilon$ and $\bar\eta$ be the corresponding constants provided once again by Lemma~\ref{lm:epsilon} and set $\hat\varepsilon=\frac{1}{2}\bar\eta\tilde\varepsilon$, with $\tilde\varepsilon\leq\bar\varepsilon$ to be chosen.
We fix $y\in\partial E\cap H\cap\mathcal C_{\frac12}$, $y=(y',y_N)$, and observe that
$$
(\overline{\partial E\cap H}-y) \cap \mathcal C_{\frac12}\subset S_{-\frac{\tilde\varepsilon}{2},\frac{\tilde\varepsilon}{2}}
$$
and that $\frac{1}{2}\Lambda<\bar\eta\tilde\varepsilon$, $\frac{1}{2R}\leq\bar\eta\tilde\varepsilon$. Therefore, from \eqref{epsilon1} and \eqref{epsilon2} we have that there exist a sequence of rotations $R_k(y)$ converging to some $R(y)$ and a sequence $\alpha_k(y)$ converging to some $\alpha(y)$ such that
\begin{equation}\label{epsreg1}
\|R_{k}(y)-R(y)\|\leq C(\lambda,\tau)\lambda^{k\tau}\tilde\varepsilon,\qquad |\alpha_{k}(y)-\alpha(y)|\leq C(\lambda,\tau)\lambda^{k\tau}\tilde\varepsilon\,,
\end{equation}
with $C(\lambda,\tau)=\frac{\bar C}{\lambda(1-\lambda^\tau)}$ and
$$
R_{k}(y)(\overline{\partial E\cap H}-y) \cap\mathcal C_{\varrho_k}\subset \Big\{ \abs{x_N-\alpha_k(y) x_1}< \lambda^{k(1+\tau)}\frac{\tilde\varepsilon}{2} \Big\}\,,
$$
with $\varrho_k=\frac{\lambda^{k+1}}{32}$. Note now that by the classical interior regularity results $\partial E\cap H$ is a locally $C^{1,\gamma}$-hypersurface for all $\gamma\in(0,1)$, provided $\tilde\varepsilon$ is sufficiently small. Therefore the hyperplane $$y+R(y)^{-1}(\{x_N-\alpha(y)x_1=0\})$$ coincides with the tangent plane to $\partial E$ at $y$.
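For completeness, we note that the Cauchy-type estimate \eqref{epsreg1} follows from \eqref{epsilon1} by a telescoping sum over the iteration indices:
$$
\|R_{k}(y)-R(y)\|\leq\sum_{j\geq k}\|R_{j+1}(y)-R_{j}(y)\|\leq \frac{\bar C}{\lambda}\,\tilde\varepsilon\sum_{j\geq k}\lambda^{j\tau}=\frac{\bar C}{\lambda(1-\lambda^{\tau})}\,\lambda^{k\tau}\tilde\varepsilon\,,
$$
and the same computation applies to $|\alpha_{k}(y)-\alpha(y)|$.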
Let now $y,z\in\partial E\cap H \cap \mathcal C_{\frac12}$ with $0<|y-z|<\frac{\lambda}{32}$ and let $h\geq0$ be an integer such that $\frac{\varrho_{h+1}}{2}\leq|y-z|<\frac{\varrho_{h}}{2}$. Assume that $0< y_1\leq z_1$. Since $\mathcal C_{\frac{\varrho_h}{2}}(z')\subset \mathcal C_{\varrho_h}(y')$, we have
$$
R_{h}(y)(\overline{\partial E\cap H}-z) \cap\mathcal C_{\frac{\varrho_h}{2}}\subset \Big\{ \abs{x_N-\alpha_h(y) x_1}< \lambda^{h(1+\tau)}\tilde\varepsilon \Big\}\,.
$$
Thus we may apply Lemma~\ref{lm:epsilon} with $\varrho=\frac{\varrho_h}{2}$ and $\varepsilon=\frac{64\lambda^{h\tau}}{\lambda}\tilde\varepsilon\leq\bar\varepsilon$, provided we have chosen $\tilde\varepsilon$ sufficiently small. Thus we get for $k\geq h$ a sequence of radii $r_k=\frac{\lambda^{k-h+1}\varrho_h}{32}=\frac{\lambda^{k+2}}{32^2}$, a sequence of rotations
$S_k(z)$ converging to $S(z)$ and a sequence $\beta_k(z)$ converging to $\beta(z)$ such that
$$
S_{k}(z)(\overline{\partial E\cap H}-z) \cap\mathcal C_{r_k}\subset \big\{ \abs{x_N-\beta_k(z) x_1}< \lambda^{k(1+\tau)}\tilde\varepsilon\big\}\,.
$$
Clearly $S(z)= R(z)$ and $\beta(z)=\alpha(z)$ by the uniqueness of the tangent plane. Note also that, arguing as for \eqref{epsreg1}, we have
$$
\|S_{k}(z)-R(z)\|\leq C(\lambda,\tau)\lambda^{k\tau}\tilde\varepsilon,\qquad |\beta_{k}(z)-\alpha(z)|\leq C(\lambda,\tau)\lambda^{k\tau}\tilde\varepsilon,
$$
for a possibly larger constant $C(\lambda,\tau)$. Therefore, since $R_h(y)=S_h(z)$, and since $\frac{\lambda^{h+2}}{64}\leq|y-z|$ by our choice of $h$, we have
$$
\|R(y)-R(z)\|\leq\|R(y)-R_h(y)\|+\|S_h(z)-R(z)\|\leq 2C(\lambda,\tau)\lambda^{h\tau}\tilde\varepsilon\leq \widetilde C(\lambda,\tau)\tilde\varepsilon|y-z|^\tau\,.
$$
A similar estimate holds also for $|\alpha(y)-\alpha(z)|$, showing that both the maps $\alpha$ and $R$ are $\tau$-H\"older continuous uniformly up to $\partial H$. This proves that $\partial E$ is of class $C^{1,\tau}$ up to $\partial H$, where $\tau$ is the exponent fixed at the beginning of Step 1.
Finally, observe that if $y\in\overline{\partial E\cap H}\cap\partial H$, we may choose a sequence $y_k\in\partial E\cap H$ converging to $y$ and such that $y_k\cdot e_1<\frac{\lambda^{k}}{32}$. Then, from Lemma~\ref{lm:epsilon} we have that $\alpha_k(y_k)\geq\frac{\sigma}{\sqrt{1-\sigma^2}}$, and in turn, using \eqref{epsreg1} and passing to the limit, we get that also
\begin{equation}\label{epsreg3}
\alpha(y)\geq\frac{\sigma}{\sqrt{1-\sigma^2}}\,.
\end{equation}
\noindent
{\bf Step 2.} Let us now show that if $\overline {\partial E\cap H}$ is of class $C^{1,\tau}$ in $\mathcal C_{\frac12}$ for some $\tau\in(0,\frac12)$, then it is also of class $C^{1,\gamma}$ for all $\gamma\in(0,1/2)$.
To this aim we take a point $y\in\overline {\partial E\cap H}\cap\partial H\cap\mathcal C_{\frac12}$ and consider two cases.
Assume first that $\alpha(y)>\frac{\sigma}{\sqrt{1-\sigma^2}}$. Exploiting the $C^1$ regularity of $\overline {\partial E\cap H}$ up to $\partial H$, we may find $\varepsilon<\varepsilon_2$ and $\varrho$ so small that $\alpha(y)>\frac{\sigma}{\sqrt{1-\sigma^2}}+M_0\varepsilon$, where $M_0$ and $\varepsilon_2$ are the constants of Lemma~\ref{lm:barrier}, and that
$$
(\partial E-y)\cap \mathcal C_{2\varrho}^+\subset \big\{ \abs{x_N-\alpha x_1}< \varepsilon\varrho\big\}\,.
$$
Then we have that $
\overline{(\partial E -y)\cap H}\cap \partial H\cap\mathcal C_{\varrho/2}=\partial_{\partial H}O\cap\mathcal C_{\varrho/2}\cap\{|x_N|<\varepsilon\varrho \}$. Therefore, see for instance \cite{DuzaarSteffen02}, we may conclude that $\overline {\partial E\cap H}$ is of class $C^{1,\gamma}$ in $\mathcal C_{\varrho/2}(y')$ for all $\gamma\in(0,1)$.
Otherwise, recalling \eqref{epsreg3},
we have $\alpha(y)=\frac{\sigma}{\sqrt{1-\sigma^2}}$. In this case, given $\gamma\in(0,1/2)$, we may choose $\varrho$ so small that the assumptions of the claim in Step 1 are satisfied in $\mathcal C_{\varrho}(y')$ with $\hat\varepsilon(\tau)$ replaced by $\hat\varepsilon(\gamma)$, and the conclusion follows from Step 1.
\end{proof}
\section{A monotonicity formula and proof of the main results}\label{sec:mainthm}
In this section, in view of the applications to the model for nanowire growth discussed in Subsection~\ref{nanosec}, we also consider the case of convex polyhedral obstacles. Thus, in order to
deal at the same time with convex and smooth obstacles, we introduce the following definition, where we identify \(O\) with a subset of \(\mathbb R^{N-1}\).
\begin{definition}
Let \(O\subset \partial H\approx \mathbb{R}^{N-1}\). We say that \(O\) is locally \emph{semi-convex} at scale \(\bar r>0\) and with constant \(C\ge 0\) if for all \(\bar x \in \partial_{\partial H} O\) there exists a \(C\)-semiconvex function \(\psi: \mathbb R^{N-2} \to \mathbb R\) with \(\psi(0)=0\) such that, up to a change of coordinates,
\begin{equation}\label{e:graphrep}
O\cap \partial H \cap B_{\bar r}(\bar x)=\Bigl(\bar x+\bigl\{ (0,x'',x_N): x_N \ge \psi (x'')\bigr\}\Bigr)\cap B_{\bar r}(\bar x).
\end{equation}
\end{definition}
Recall that a function \(\psi: \mathbb R^{N-2} \to \mathbb R\) is said to be \(C\)-semiconvex if the function \(\psi(x'')+C|x''|^2/2\) is convex. In particular, the sub-differential \(\partial \psi(x'')\) is non-empty for all \(x''\), where
\[
\partial \psi(x'')=\Bigl\{p\in\mathbb{R}^{N-2}: \psi (y'')\ge \psi (x'') +p \cdot(y''-x'')-\frac{C|x''-y''|^2}{2}\text{ for all \(y''\in\mathbb{R}^{N-2}\)}\Bigr\}.
\]
Note in particular that if \(\psi (0)=0\) and \(p \in \partial \psi (x'')\) then
\begin{equation}\label{e:conv}
0\ge \psi (x'')-p \cdot x''-\frac{C|x''|^2}{2} .
\end{equation}
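Indeed, \eqref{e:conv} is nothing but the sub-differential inequality tested at $y''=0$: since $\psi(0)=0$,
\[
0=\psi(0)\ge \psi(x'')+p\cdot(0-x'')-\frac{C|x''|^2}{2}=\psi(x'')-p\cdot x''-\frac{C|x''|^2}{2}\,.
\]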
For \(O\) as above,
we define for \(x =(0,x'',x_N)\in \partial_{\partial H} O\) the normal and the tangent cones at \(x\) in $\partial H$ as
\[
N_{x} O=\{ \lambda (0,p, -1):\, p \in \partial \psi (x''), \lambda \in [0,+\infty)\}
\]
and
\[
T_{ x} O=\{ v \in \mathbb{R}^{N}:\, v\cdot e_1=0,\,v \cdot \nu\le 0 \quad\text{for all \(\nu \in N_{x} O\)}\}.
\]
It is well known that for \(x =(0,x'',x_N)\in \partial_{\partial H} O\) the sets
$$
O_{x, r}=\frac{O- x}{r}
$$
converge to \(T_{x} O\) as \(r\to 0\), where the convergence is in the Kuratowski sense.
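As a concrete illustration, consider $N=3$ and the wedge obstacle corresponding to the convex (hence $0$-semiconvex) function $\psi(x'')=|x''|$, $x''\in\mathbb R$. At the vertex $x=0$ we have $\partial\psi(0)=[-1,1]$, so
\[
N_{0} O=\bigl\{\lambda(0,p,-1):\ p\in[-1,1],\ \lambda\ge0\bigr\},\qquad T_{0} O=\bigl\{(0,v_2,v_3):\ v_3\ge|v_2|\bigr\},
\]
and $T_0O$ coincides with the wedge itself, as expected, since in this case $O$ is a cone with vertex at the origin.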
We start with the following technical lemma.
\begin{lemma}\label{lm:vectorfield}
Let \(O\subset \partial H\) be such that \eqref{e:graphrep} is satisfied. Then there exists a smooth vector field \(X: \mathbb{R}^N \to \mathbb R^{N}\) such that
\begin{enumerate}
\item[(i)] \(X(x)\cdot e_1=0 \) for all \(x \in \partial H\);
\item[(ii)] \(X(x) \cdot \nu\ge 0\) for all \(x \in \partial_{\partial H} O\cap B_{\bar r}(\bar x)\) and all \(\nu \in N_{x} O\);
\item[(iii)] for all \(x \in B_{\bar r} (\bar x) \) it holds:
\begin{align}
X(x)&=x+ O(|x|^2)\label{e:st1}\,,
\\
\nabla X(x)&=\mathrm{Id}+O(|x|)\label{e:st2}\,.
\end{align}
\end{enumerate}
\end{lemma}
\begin{proof}
We may assume that \(\bar x=0\), and we define
\[
X(x)=\Bigl(x_1,x'',x_N-\frac{C|x''|^2}{2}\Bigr)\,,
\]
where \(C\) is the semiconvexity constant of \(\psi\). Clearly (i) and (iii) are satisfied. To check (ii), note that if $\nu\in N_xO$ then
\[
\nu=\lambda (0,p,-1)
\]
for some \(p \in \partial \psi (x'')\) and \(\lambda \ge 0\), so that
\[
X(x) \cdot \nu=\lambda\Bigl(p\cdot x''-\psi(x'')+\frac{C|x''|^2}{2}\Bigr)\ge 0
\]
by \eqref{e:conv}.
\end{proof}
We can now state the desired monotonicity formula for \((\Lambda,r_0)\)-minimizers.
\begin{theorem}\label{thm:montonicity}
Let $E\subset H$ be a $(\Lambda,r_0)$-minimizer of $\mathcal{F}_\sigma^O$ with obstacle $O$. Assume that \(\bar x \in \partial_{\partial H} O\) and that \eqref{e:graphrep} is satisfied with \(0<\bar r\le r_0\) and a \(C\)-semiconvex function \(\psi\) with \(\psi(0)=0\). Then there exists a constant \(c_0=c_0(C,\Lambda,N)\) such that for all $0<s<r$, with $r<\bar r$ sufficiently small,
\begin{equation}\label{monotone-1}
e^{c_0r}\frac{\mathcal{F}_\sigma^O(E, B_{r}(\bar x))}{r^{N-1}}-e^{c_0s}\frac{\mathcal{F}_\sigma^O(E, B_{s}(\bar x))}{s^{N-1}}\geq\frac12\int_s^rt^{1-N}e^{c_0t}\frac{d}{dt}\bigg[\int_{\partial E \cap H\cap B_t(\bar x) }\frac{(x\cdot \nu_E)^2}{|x|^2}\,d\mathcal H^{N-1}\bigg]\,dt\,.
\end{equation}
\end{theorem}
\begin{proof}
In this proof, by $O(r)$ we mean any function bounded by $cr$ for $r$ small, where the constant $c$ depends only on $\Lambda$, $N$ and the semiconvexity constant $C$.
We assume that \(\bar x=0\). We let \(\phi\in C_c^\infty([0,1);[0,1])\) be a smooth decreasing function with \(\phi (0)=1\), and for \(r\le \bar r/2\) we consider the vector field \(T: \mathbb{R}^N\to \mathbb{R}^N\) defined as
\[
T(x)=-\phi \Biggl(\frac{|x|}{r}\Biggr)X(x)\,,
\]
where $X$ is as in Lemma~\ref{lm:vectorfield}. By (i) and (ii) of that lemma, if we let \(\varphi_t\) be the flow generated by \(T\), then for all \(t>0\)
\[
\varphi_t(\partial H)=\partial H,\qquad \varphi_t( O)\subset O\qquad \text{and} \qquad \varphi_t(x)=x\quad\text{for $x\not\in B_r$\,.}
\]
In particular \(\varphi_t(E)\Delta E\Subset B_{\bar r}\), and thus the set \(\varphi_t(E)\) is a competitor for the $(\Lambda, r_0)$-minimality of $E$. Hence
\begin{equation}\label{monotone0}
\begin{split}
\mathcal{F}_\sigma^O (E;B_{\bar r})&
\le \mathcal{F}_\sigma^O (\varphi_t(E);B_{\bar r})+\Lambda |\varphi_t(E)\Delta E| \\
&\leq \mathcal H^{N-1}(\varphi_t(\partial E\setminus O))+\sigma \mathcal H^{N-1}(\varphi_t(\partial E\cap O))+\Lambda |\varphi_t(E)\Delta E|
\end{split}
\end{equation}
for all \(t>0\).
By using the coarea formula and recalling that \(|X(x)|=O(r)\) on \(\mathrm{spt}\, T\), it is easy to check that for $t>0$ small enough
$$
|\varphi_t(E)\Delta E|= t\int_{\partial E \cap H} |T(x)\cdot\nu_E(x)|\,d\mathcal H^{N-1} +o(t)\leq
tO(r)\int_{\partial E \cap H} \phi \biggl(\frac{|x|}{r}\biggr)\,d\mathcal H^{N-1}\,.
$$
Therefore, differentiating the inequality in \eqref{monotone0} at $t=0^+$ we get
\begin{equation}\label{e:der}
O(r)\int_{\partial E \cap H} \phi \biggl(\frac{|x|}{r}\biggr)\,d\mathcal H^{N-1}\le \int_{\partial E \setminus O} \operatorname{div}_{\tau} T \,d \mathcal H^{N-1}+\sigma \int_{\partial E \cap O} \operatorname{div}_{\tau} T \,d \mathcal H^{N-1}\,,
\end{equation}
where
\[
\operatorname{div}_{\tau} T =\operatorname{div} T-\nabla T [\nu_E] \cdot \nu_E
\]
is the tangential divergence of \(T\). Since
\[
\nabla T =-\phi \biggl(\frac{|x|}{r}\biggr)\nabla X(x)-\phi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r} \frac{x}{|x|}\otimes\frac{X(x)}{|x|}\,,
\]
by exploiting \eqref{e:st1} and \eqref{e:st2} we have
\[
\begin{split}
\operatorname{div}_{\tau} T =&-\phi \biggl(\frac{|x|}{r}\biggr)(N-1)-\phi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r}
\\
&+\phi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r}\biggl(\frac{(x\cdot \nu_E)^2}{|x|^2}\biggr)
+O(r)\phi \biggl(\frac{|x|}{r}\biggr)+O(r)\phi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r}\,,
\end{split}
\]
which can be written as
\[
\begin{split}
\operatorname{div}_{\tau} T &=(1+O(r))r^N\frac{d}{d r} \Biggl(r^{1-N}\phi \biggl(\frac{|x|}{r}\biggr)\Biggr)
\\
&+\phi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r}\biggl(\frac{(x\cdot \nu_E)^2}{|x|^2}\biggr)
+O(r)\phi \biggl(\frac{|x|}{r}\biggr)\,.
\end{split}
\]
Combining the above identity with \eqref{e:der} we infer, after easy computations,
\[
\begin{split}
(1+O(r))&\frac{d}{d r} \Biggl(r^{1-N} \int_{\partial E \setminus O} \phi \biggl(\frac{|x|}{r}\biggr)\,d\mathcal H^{N-1}+\sigma r^{1-N} \int_{\partial E \cap O} \phi \biggl(\frac{|x|}{r}\biggr)\, d \mathcal H^{N-1}\Biggr)
\\
\ge& -c \Biggl(r^{1-N} \int_{\partial E \setminus O} \phi \biggl(\frac{|x|}{r}\biggr)\,d\mathcal H^{N-1}+\sigma r^{1-N} \int_{\partial E \cap O} \phi \biggl(\frac{|x|}{r}\biggr)\, d \mathcal H^{N-1} \Biggr)
\\
&- r^{-N} \int_{\partial E \setminus O}\phi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r}\frac{(x\cdot \nu_E)^2}{|x|^2}\,d\mathcal H^{N-1}-\sigma r^{-N} \int_{\partial E \cap O}\phi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r}\frac{(x\cdot \nu_E)^2}{|x|^2}\,d\mathcal H^{N-1}.
\end{split}
\]
In turn, noticing that
$$
\int_{\boldsymbol{p}artialrtial E \cap H}\boldsymbol{p}hi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r}\frac{(x\cdot \nu_E)^2}{|x|^2}=0\,,
$$
setting
\begin{align*}
h(r)
&=r^{1-N} \int_{\partial E \setminus O} \phi \biggl(\frac{|x|}{r}\biggr)\,d\mathcal H^{N-1}+\sigma r^{1-N} \int_{\partial E \cap O} \phi \biggl(\frac{|x|}{r}\biggr)\, d \mathcal H^{N-1}, \\
k(r)&=- r^{-N} \int_{\partial E \setminus O}\phi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r}\frac{(x\cdot \nu_E)^2}{|x|^2}\,d\mathcal H^{N-1}-\sigma r^{-N} \int_{\partial E \cap O}\phi' \biggl(\frac{|x|}{r}\biggr)\frac{|x|}{r}\frac{(x\cdot \nu_E)^2}{|x|^2}\,d\mathcal H^{N-1}\\
&=r^{1-N}\frac{d}{dr}\biggl[\int_{\partial E \cap H}\phi \biggl(\frac{|x|}{r}\biggr)\frac{(x\cdot \nu_E)^2}{|x|^2}\,d\mathcal H^{N-1}\biggr]\,,
\end{align*}
the inequality above implies that
$$
h'(r)\geq -c_0h(r)+\frac{1}{2}k(r)
$$
provided that $2\geq1+O(r)\geq1/2$, where $c_0$ is a constant depending only on $\Lambda$, $N$ and $C$.
Note also that $k(r)\geq0$.
Multiplying both sides of this inequality by $e^{c_0r}$ and integrating in $(s,r)$, we have
\begin{align*}
&h(r)e^{c_0r}-h(s)e^{c_0s}
\geq \frac12\int_s^re^{c_0t}k(t)\,dt\,.
\end{align*}
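For the reader's convenience, we spell out the elementary (Gronwall-type) integration step: since $h'(r)+c_0h(r)\geq\frac12 k(r)$,
\[
\frac{d}{dr}\bigl(e^{c_0r}h(r)\bigr)=e^{c_0r}\bigl(h'(r)+c_0h(r)\bigr)\geq\frac{1}{2}\,e^{c_0r}k(r)\,,
\]
and integrating this derivative over $(s,r)$ yields the inequality above.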
Then we conclude by letting $\phi\to1$.
\end{proof}
By a classical argument, the above monotonicity formula allows for the study of blow-ups of \(\Lambda\)-minimizers. To this aim, we first observe that the following compactness property for blow-ups holds when the obstacle is a convex polyhedron.
\begin{theorem}\label{th:compactnessbis}
Let $E\subset H$ be a $(\Lambda,r_0)$-minimizer of $\mathcal{F}_\sigma $ with obstacle $O$, where $O$ is a convex polyhedron. Let \(\bar x\in \partial_{\partial H}O\cap\partial E\) and set
$$
E_h=\frac{E-\bar x}{r_h}\,,
$$
where $r_h\to 0^+$.
Then there exist a (not relabelled) subsequence
and a set $E_\infty$ of locally finite perimeter, such that $E_h\to E_\infty$ in $L^1_{loc}(\mathbb{R}^N)$ with the property that $E_\infty$ is a $0$-minimizer of $\mathcal{F}^{\tilde O}_\sigma$, where $\tilde O=T_{\bar x} O$.
Moreover,
$$
\mu_{E_h}\wtos \mu_{E_\infty}\,, \quad |\mu_{E_h}|\wtos |\mu_{E_\infty}|\,,
$$
as Radon measures.
In addition, the following Kuratowski convergence type properties hold:
\[
\begin{split}
&\text{(i)\,\,for every $x\in\partial E_\infty$ there exists $x_h\in\partial E_h$ such that $x_h\to x$;} \\
& \text{(ii)\,\,if
$x_h\in \overline{\partial E_h\cap H}$ and $x_h\to x$, then $x\in \overline{\partial E_\infty\cap H}$\,.}
\end{split}
\]
Finally, either
$\partial E_\infty\cap(\partial H\setminus\tilde O)=\partial H\setminus\tilde O$ or $\partial E_\infty\cap \partial H\subset\overline{\tilde O}$.
\end{theorem}
The proof of this theorem follows as in the proof of Theorem~\ref{th:compactness} observing that the density estimates proved in Proposition~\ref{p:densityestimates} still hold when $O$ is a convex polyhedron.
\begin{proposition}\label{cor:bu}
Let $E\subset H$ be a $(\Lambda,r_0)$-minimizer of $\mathcal{F}_\sigma$ with obstacle $O$, where $O$ is either of class $C^{1,1}$ or a convex polyhedron. If \(\bar x\in \partial_{\partial H}O\cap\partial E\) then the sets
\[
E_{\bar x,r}=\frac{E-\bar x}{r}
\]
are pre-compact in \(L^1\) and every limit point \(E_\infty\) as $r\to0$ is a conical minimizer of \(\mathcal F_{\sigma}^{\tilde O}\) with obstacle \(\tilde O=T_{\bar x} O\). Moreover, if \(N=3\), either \(\partial E_\infty=\partial H\) or, after a rotation of coordinates in $\partial H$,
\[
\partial E_\infty\cap H=\bigl\{ x: x_1=\alpha x_N\bigr\}\cap H
\]
with
\[
\alpha \ge \frac{\sigma}{\sqrt{1-\sigma^2}}
\]
and $\tilde O$ is a half space.
\end{proposition}
\begin{proof}
Let $E_\infty$ be a limit point of $E_{\bar x,r}$ as $r\to0$. By Theorem~\ref{th:compactness} or Theorem~\ref{th:compactnessbis}, $E_\infty$ is a $0$-minimizer of $\mathcal F_{\sigma}^{\tilde O}$.
Observe that by Theorem~\ref{thm:montonicity} the limit
$$
\lim_{r\to0}\frac{\mathcal{F}_\sigma(E, B_{r}(\bar x))}{r^{N-1}}
$$
exists and is finite. From this, a standard argument, see for instance the proof of Theorem~28.6 in \cite{Maggi12}, shows that
$$
\frac{\mathcal{F}_\sigma^{\tilde O}(E_\infty, B_{r})}{r^{N-1}}
$$
is constant with respect to $r$.
Thus, the right-hand side of \eqref{monotone-1} with $E_\infty$ in place of $E$ is zero. This immediately implies that $x\cdot\nu_{E_\infty}(x)=0$ for $\mathcal{H}^{N-1}$-a.e. $x$, hence, see \cite[Prop. 28.8]{Maggi12}, $E_\infty$ is a cone.
Note that if \(N=3\) the only cones with zero mean curvature are planes, and this forces \(\partial E_{\infty}\) to be a union of planes. These planes cannot intersect in \(H\) by the interior regularity theory. In particular, since \(0\in \partial E_{\infty}\) (which holds true since \(\bar x \in \partial E\)), either $\partial E_\infty=\partial H$ or, after a rotation in $\partial H$,
\[
\partial E_{\infty}\cap H=\bigcup_{i=1}^k \bigl\{ x: x_1=\alpha_i x_N\bigr\}\cap H.
\]
Minimality forces \(k=1\) and $\partial E_\infty\cap \partial H=\tilde O$, and thus $\tilde O$ is a half space.
\end{proof}
We are now in a position to prove Theorem~\ref{th:reg}.
\begin{proof}[Proof of Theorem~\ref{th:reg}]
We start by proving part (i).
In view of the result \cite[Theorem 1.2]{De-PhilippisMaggi15} it is enough to prove the regularity in a neighborhood of a point $\bar x\in\partial E\cap\partial_{\partial H}O$. Given such a point, by Proposition~\ref{cor:bu} we know that there exists a sequence $E_h=E_{\bar x,r_h}=\frac{E-\bar x}{r_h}$, $r_h\to0$, of blow-ups of $E$ converging in $L^1$ to a $0$-minimizer $E_\infty$ of \(\mathcal F_{\sigma}^{\tilde O}\), where \(\tilde O=T_{\bar x} O\). Moreover, either \(\partial E_\infty=\partial H\) or, after a rotation of coordinates in $\partial H$,
$\partial E_\infty\cap H=\bigl\{ x: x_1=\alpha x_N\bigr\}\cap H$
with $\alpha \ge \frac{\sigma}{\sqrt{1-\sigma^2}}$. Note that if $\partial E_\infty=\partial H$ or if $\alpha> \frac{\sigma}{\sqrt{1-\sigma^2}}$, by conclusion (ii) of Theorem~\ref{th:compactness} we get that $\overline{\partial E_h\cap H}$ satisfies the assumptions of Lemma~\ref{lm:barrier} in a neighborhood of $\bar x$, and thus $\overline{\partial E\cap H}\cap\partial H$ coincides with $\partial_{\partial H}O$ in such a neighborhood. In turn, the conclusion follows by \cite[Theorem~6.1]{DuzaarSteffen02}. If instead $\alpha=\frac{\sigma}{\sqrt{1-\sigma^2}}$, using again (ii) of Theorem~\ref{th:compactness} we have that $E_h$ satisfies the assumptions of Theorem~\ref{th:epsreg} for $h$ sufficiently large. Hence the conclusion follows also in this case.
Now the proof of part (ii) follows by combining (i) with Federer's classical dimension reduction argument, see for instance \cite[Appendix A]{Simon83} or \cite[Sections 28.4-28.5]{Maggi12}. We leave the details to the reader.
\end{proof}
We conclude the section by proving Theorems~\ref{cinqueuno} and \ref{cinquedue}. In the following we make use of the notation introduced in Subsection~\ref{nanosec}.
\begin{proof}[Proof of Theorem~\ref{cinqueuno}]
We start by recalling that, by a standard argument, see for instance Example 21.3 in \cite{Maggi12}, there exist $\Lambda, r_0>0$ such that
$$
J_{\sigma,\omega}(E)\leq J_{\sigma,\omega}(F)+\Lambda||E|-|F||
$$
for all $F\subset\mathbb{R}^3\setminus\mathbf{C}$ with $\operatorname{diam}(E\Delta F)<r_0$. It easily follows that $E$ is a $(\Lambda,r_0)$-minimizer of $J_{\sigma,\omega}$ with obstacle $\omega$ in the sense introduced in the previous sections. Thus the conclusion follows from Theorem~\ref{th:reg}.
\end{proof}
Before proving Theorem~\ref{cinquedue} we set the following definition.
\begin{definition}\label{nontang}
We say that $E\subset\mathbb{R}^3\setminus\mathbf{C}$ has {\it a nontangential contact with $\mathbf{C}_{top}$ at a point $\bar x\in\overline{\partial E\cap\mathscr H}\cap\gamma$} if for any subsequence $E_{\bar x, r_h}$ of blow-ups of $E$ with $r_h\to0$, converging to $E_\infty$, we have that $\partial E_\infty$ does not contain the plane $\{x_3=0\}$.
\end{definition}
\begin{proof}[Proof of Theorem~\ref{cinquedue}]
We start by observing that, arguing as in the proof of Theorem~\ref{cinqueuno}, $E$ is a $(\Lambda,r_0)$-minimizer of $J_{\sigma,\omega}$ with obstacle $\omega$.
Assume first that $E$ has a nontangential contact at all points of $\overline{\partial E\cap\mathscr H}\cap\gamma$. Fix a point $\bar x\in\overline{\partial E\cap\mathscr H}\cap\gamma$ and consider a sequence $E_{\bar x, r_h}$ of blow-ups of $E$ with $r_h\to0$, converging to some set $E_\infty$ of locally finite perimeter. By Proposition~\ref{cor:bu} it follows that $E_\infty$ is a $0$-minimizer of $J_{\sigma,\omega}$ with obstacle $T_{\bar x}\omega$ and that,
after a rotation in the plane $\{x_3=0\}$,
$$
\partial E_\infty\cap\{x_3>0\}=\bigl\{ x: x_1=\alpha x_3\bigr\}\cap \{x_3>0\}\,,
$$
with $\alpha\geq \frac{\sigma}{\sqrt{1-\sigma^2}}$, and thus $\partial E_\infty\cap\{x_3=0\}$ is a half plane. In turn, $T_{\bar x}\omega$ is a half plane, and thus $\bar x$ cannot be a vertex of the polygon. Hence, thanks to Theorem~\ref{th:reg}, $\overline{\partial E\cap\mathscr H}$ is of class $C^{1,\tau}$ for all $\tau\in(0,\frac12)$ in a neighborhood of $\bar x$.
Finally, we claim that if $\sigma<0$ then $E$ has a nontangential contact at all points of $\overline{\partial E\cap\mathscr H}\cap\gamma$. Indeed, assume by contradiction that $E_\infty=\{x_3>0\}$ for some point $\bar x\in\overline{\partial E\cap\mathscr H}\cap\gamma$ and some sequence of blow-ups $E_{\bar x, r_h}$ with $r_h\to0$. Then, arguing similarly as for Theorem~\ref{th:compactnessbis}, see also \cite[Theorem~2.9]{De-PhilippisMaggi15}, we get that $E_\infty$ is a $0$-minimizer of $J_{\sigma,T_{\bar x}\omega}$, that is
$$
J_{\sigma,T_{\bar x}\omega}(E_\infty)\leq J_{\sigma,T_{\bar x}\omega}(F)
$$
for all $F\subset\mathbb{R}^3\setminus(T_{\bar x}\omega\times(-\infty,0])$ with $E_\infty\Delta F$ bounded. From this it easily follows that $F_\infty:=(\mathbb{R}^3\setminus(T_{\bar x}\omega\times(-\infty,0]))\setminus E_\infty$ is a $0$-minimizer of $J_{-\sigma,T_{\bar x}\omega}$. By a rotation in the plane $\{x_3=0\}$ we may assume without loss of generality that the half line $\ell=\{(0,x_2,0):\,x_2>0\}$ is contained in $\partial T_{\bar x}\omega\times\{0\}$ and that $F_\infty$ (locally around each point of $\ell$) is contained in the half space $H=\{x_1>0\}$. Hence, locally around any point $y\in\ell$, $F_\infty$ is a $0$-minimizer of $\mathcal F_{-\sigma}$ with obstacle $O=\{x_3<0\}\cap\partial H$. Thus, by the third inequality in \eqref{irene-1} (with $\sigma$ replaced by $-\sigma$) it follows that $\partial F_\infty$ must form with the vertical plane $\partial H$ an angle which is strictly larger than $\pi/2$, thus leading to a contradiction.
\end{proof}
\section{Appendix}
Here we give the proof of Proposition~\ref{prop:savin}. We shall closely follow the proofs of Lemmas~2.1 and 2.2 and of Theorem~1.1 in \cite{Savin} with the necessary changes.
We denote by $\mathbb{M}^{d\times d}_{\rm sym}$ the space of symmetric $d\times d$ matrices and by ${{\rm Tr}\,(A)}$ the trace of $A$. Let $F:\mathbb{M}^{d\times d}_{\rm sym}\times\mathbb{R}^d\to\mathbb{R}$ be the function
$$
F(A,p):=\frac{{\rm Tr}\,(A)}{(1+|p+\xi|^2)^{1/2}}-\sum_{i,j=1}^d\frac{A_{ij}(p_i+\xi_i)(p_j+\xi_j)}{(1+|p+\xi|^2)^{3/2}}\,,
$$
where $\xi\in\mathbb{R}^d$ is a vector, with $|\xi|\leq M$.
Observe that there exist two positive constants $\tilde\lambda,\lambda>0$, depending only on $M$ and $d$, such that if $|p|\leq1$, $A\in \mathbb{M}^{d\times d}_{\rm sym}$ and $A\geq0$, then
$$
\tilde\lambda\|A\|\geq F(A,p)\geq\lambda\|A\|\,,
$$
where $\|A\|:=\sqrt{\sum_{i,j}A^2_{ij}}$. Since throughout this section $M$ will be fixed, in the following with a slight abuse of language we will say that a constant is universal if it depends only on $d$ and $M$.
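For later use, observe that $F$ is precisely the nondivergence form of the prescribed mean curvature operator appearing in the equation below: for a smooth function $u$, a direct computation gives
\[
\operatorname{div}\biggl(\frac{\nabla u+\xi}{\sqrt{1+|\nabla u+\xi|^2}}\biggr)
=\frac{\Delta u}{(1+|\nabla u+\xi|^2)^{1/2}}
-\sum_{i,j=1}^d\frac{u_{ij}\,(u_i+\xi_i)(u_j+\xi_j)}{(1+|\nabla u+\xi|^2)^{3/2}}
=F(D^2u,\nabla u)\,.
\]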
The next result is essentially \cite[Lemma 2.1]{Savin}.
\begin{lemma}\label{lemma2.1}
Let $r\in(0,1)$ and $a\in(0,1/2)$. Let $v:B_1\to(0,\infty)$ be a viscosity $(2a)$-supersolution of the equation
\begin{equation}\label{lemma2.10}
\operatorname{div} \biggl(\frac{\nabla u+\xi}{\sqrt{1+|\nabla u+\xi|^2}}\biggr) =a\,,
\end{equation}
bounded from above.
There exists a universal constant $c_0$ with the following property:
Let $\overline B_r(x_0)\subset B_1$ and let $B\subset\overline B_1$ be a closed set. For every $y\in B$ consider the paraboloid
\begin{equation}\label{lemma2.11}
-\frac{a}{2}|x-y|^2+c_y
\end{equation}
staying below $v$ in $\overline B_r(x_0)$ and touching $v$ in $\overline B_r(x_0)$. Assume that the set $A$ of all such touching points for $y\in B$ is contained in $B_r(x_0)$. Then $|A|\geq c_0|B|$.
\varepsilonnd{lemma}
\begin{proof}
Observe that since $v$ is a bounded lower semicontinuous function, then $A$ is a closed set.
Assume that $v$ is semi-concave in $B_r(x_0)$, that is, there exists $b>0$ such that $v-b|x|^2$ is concave in $B_r(x_0)$. Note that this implies in particular that $v$ is differentiable at all touching points $z\in A$. Moreover, it is not difficult to show that $D v$ is Lipschitz in $A$, with a Lipschitz constant depending only on $a$ and $b$.
By assumption, if $z\in A$ there exists a paraboloid of vertex $y\in B$ and opening $a$ as in \eqref{lemma2.11} touching $v$ at $z$ from below. Moreover,
\begin{equation}\label{lemma2.12}
y=z+\frac{1}{a}Dv(z)\,.
\end{equation}
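Indeed, the function $x\mapsto v(x)+\frac{a}{2}|x-y|^2-c_y$ is nonnegative in $\overline B_r(x_0)$ and vanishes at the touching point $z$, where $v$ is differentiable; hence its gradient vanishes there:
\[
Dv(z)+a(z-y)=0\,,\qquad\text{that is,}\qquad y=z+\frac{1}{a}Dv(z)\,.
\]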
Let us denote by $Z$ the set of points $z\in B_r(x_0)$ at which $v$ is twice differentiable. By the Alexandrov theorem, $|B_r(x_0)\setminus Z|=0$. Fix $z\in Z$. For all $x\in B_r(x_0)$ we have
$$
v(x)=P(x;z)+o(|x-z|^2)=v(z)+Dv(z)\cdot(x-z)+\frac12(x-z)^TD^2v(z)(x-z)+o(|x-z|^2)\,.
$$
We claim that there exists a universal constant $C>0$ such that
$$
-aI\leq D^2v(z)\leq CaI\qquad\text{for all $z\in A\cap Z\,.$}
$$
The left inequality follows from the fact that the paraboloid in \varepsilonqref{lemma2.11} touches $v$ from below. To prove the second inequality assume that there exists a unit vector $e$ such that
$$
D^2v(z)\geq Ca\,e\otimes e-aI\,.
$$
We will prove that if $C$ is sufficiently large this leads to a contradiction.
Consider now the test function $\varphi(x)=P(x;z)-\frac{\varepsilon}{2}|x-z|^2$, with $\varepsilon>0$. Note that $\varphi$ lies below $v$ in a neighborhood of $z$ and touches $v$ at $z$, and that by \eqref{lemma2.12} $|D\varphi(z)|=|Dv(z)|=a|y-z|\leq2a$. Therefore $\varphi$ is an admissible test function and we have
\begin{align*}
a
& \geq F(D^2\varphi(z),D\varphi(z))=F(D^2v(z)-\varepsilon I,D\varphi(z))\geq F(Ca\,e\otimes e-(a+\varepsilon)I,D\varphi(z)) \\
&= F(Ca\,e\otimes e,D\varphi(z))-(a+\varepsilon)F(I,D\varphi(z))\geq Ca\lambda-(a+\varepsilon)\tilde\lambda\sqrt{d}\,,
\end{align*}
which is a contradiction if $C>(\tilde\lambda\sqrt{d}+1)/\lambda$ and $\varepsilon$ is sufficiently small.
Now, from the area formula, using \eqref{lemma2.12} we obtain
\begin{equation}\label{lemma2.13}
|B|\leq\int_A\Bigl|\det\Bigl(I+\frac{1}{a}D^2v(x)\Bigr)\Bigr|\,dx\leq C|A|,
\end{equation}
where $C$ is another universal constant.
If $v$ is not semi-concave, for $\varepsilon>0$ we define the inf-convolution, setting for $x\in\overline B_r(x_0)$
$$
v_\varepsilon(x)=\inf_{y\in\overline B_r(x_0)}\Big\{v(y)+\frac1\varepsilon|y-x|^2\Big\}.
$$
It is easily checked that $v_\varepsilon$ is semi-concave. Since $v$ is lower semicontinuous and bounded, we also have that $v_\varepsilon$ converges pointwise to $v$ in $\overline B_r(x_0)$. Moreover, each $v_\varepsilon$ is a viscosity $(2a)$-supersolution of the equation \eqref{lemma2.10} in $B_r(x_0)$. In fact, if $\varphi\in C^2(B_r(x_0))$ lies below $v_\varepsilon$ in a neighborhood of $\overline x\in B_r(x_0)$ and touches $v_\varepsilon$ at $\overline x$, let $\overline y$ be the point in $\overline B_r(x_0)$ such that
$$
\inf_{y\in\overline B_r(x_0)}\Big\{v(y)+\frac1\varepsilon|y-\overline x|^2\Big\}=v(\overline y)+\frac1\varepsilon|\overline y-\overline x|^2.
$$
Then the function
$$
\varphi(x+\overline x-\overline y)+v(\overline y)-\varphi(\overline x)
$$
touches $v$ from below at $\overline y$ and thus
$$
F(D^2\varphi(\overline x),D\varphi(\overline x))\leq a.
$$
Observe that for $\varepsilon$ small enough the contact set $A_\varepsilon$ of $v_\varepsilon$ is contained in $B_r(x_0)$. Indeed, if this is not true, there exist a sequence $\varepsilon_n$ converging to $0$ and points $x_n\in A_{\varepsilon_n}$, the contact set of $v_{\varepsilon_n}$, such that $x_n\in\partial B_r(x_0)$. Therefore there exist $y_n\in B$ and $c_n\in\mathbb{R}$ such that
$$
-\frac{a}{2}|x-y_n|^2+c_n\leq v_{\varepsilon_n}(x)\,\,\,\forall x\in\overline B_r(x_0),\quad-\frac{a}{2}|x_n-y_n|^2+c_n= v_{\varepsilon_n}(x_n).
$$
Since $B$ is closed and the $v_{\varepsilon_n}$ are equibounded, we may assume that $y_n\to \bar y\in B$, $c_n\to\bar c$ and $x_n\to\bar x\in\partial B_r(x_0)$. Thus, from the first inequality above we have that the paraboloid $-\frac{a}{2}|x-\bar y|^2+\bar c$ stays below $v$ in $B_r(x_0)$. Moreover, denoting by $z_n\in\overline B_r(x_0)$ the points at which the infima defining $v_{\varepsilon_n}(x_n)$ are attained, and observing that $z_n\to\bar x$, by lower semicontinuity we have
\[
\begin{split}
-\frac{a}{2}|\bar x-\bar y|^2+\bar c&= \lim_{n\to\infty}v_{\varepsilon_n}(x_n)\\
&=\lim_{n\to\infty}\Big(v(z_n)+\frac{1}{\varepsilon_n}|z_n-x_n|^2\Big)\geq \liminf_{z\to \bar x} v(z)\geq v(\bar x).
\end{split}
\]
Hence $\bar x\in A\cap\partial B_r(x_0)$, which is impossible by assumption. In particular, we may assume that \eqref{lemma2.13} holds with $A$ replaced by $A_\varepsilon$ for $\varepsilon$ sufficiently small.
Using the fact that $B$ is closed and the pointwise convergence of $v_\varepsilon$ to $v$, a similar argument shows
$$
\bigcap_{h=1}^\infty\bigcup_{k=h}^\infty A_{1/k}\subset A.
$$
From the above inclusion and \varepsilonqref{lemma2.13} we then conclude that $|B|\leq C|A|$.
\varepsilonnd{proof}
Let $v:\overline B_1\to(0,\infty)$ be a lower semicontinuous function. If $a>0$ we denote by $A_a$ the set of points where $v$ can be touched from below by a paraboloid of opening $a$ and vertex in $\overline B_1$, and where the value of $v$ is also smaller than $a$:
\begin{align*}
A_a:
&=\Big\{x\in\overline B_1:\,\,v(x)\leq a\,\,\text{and there exists}\,\,y\in\overline B_1\,\,\text{such that}\\
&\qquad\qquad\qquad \min_{z\in\overline B_1}\Big(v(z)+\frac{a}{2}|z-y|^2\Big)=v(x)+\frac{a}{2}|x-y|^2\Big\}\,.
\end{align*}
\begin{lemma}\label{lemma2.2}
There exist two constants $c_1>0$, $C_1>1$, with the following properties: Let $0<a<1/C_1$ and let $v:\overline B_1\to (0,\infty)$ be a viscosity $(C_1a)$-supersolution of the equation \varepsilonqref{lemma2.10}, bounded from above. If
$$
\overline B_r(x_0)\subset B_1,\qquad A_a\cap\overline B_r(x_0)\neq\emptyset\,,
$$
then
$$
\big|A_{C_1a}\cap B_{r/8}(x_0)\big|\geq c_1|B_r|\,.
$$
\end{lemma}
\begin{proof}
Up to replacing $r$ with $r+\varepsilon$ and then letting $\varepsilon\to0^+$, we may assume that there exists
$$
x_1\in A_a\cap B_r(x_0)\,.
$$
Thus there exists $y_1\in\overline B_1$ such that the paraboloid
$$
P(x;y_1)=v(x_1)+\frac{a}{2}|x_1-y_1|^2-\frac{a}{2}|x-y_1|^2
$$
touches $v$ in $x_1$ from below. We claim that there exist a universal constant $C>0$ and a point $z\in \overline B_{r/16}(x_0)$ such that
\begin{equation}\label{lemma2.20}
v(z)\leq P(z;y_1)+Car^2\,.
\end{equation}
If $x_1\in \overline B_{r/16}(x_0)$ then we may take trivially $z=x_1$. Otherwise,
we consider the function
\[
\varphi(x)=
\begin{cases}
\alpha^{-1}(|x|^{-\alpha}-1) & \quad \text{if}\,\,\,\displaystyle\frac{1}{16}\leq|x|\leq1, \cr
\alpha^{-1}(16^{\alpha}-1) & \quad \text{if}\,\,\,\displaystyle |x|\leq\frac{1}{16}\,,
\end{cases}
\]
with $\alpha>0$, universal, to be chosen.
Then, for $x\in B_r(x_0)$, we set
$$
\boldsymbol{p}si(x)=P(x;y_1)+ar^2\varphi\Big(\frac{x-x_0}{r}\Big)\,.
$$
Note that $|D\psi|\leq a(16^{\alpha+1}+2)\leq1$ if $a$ is small enough and that in the annulus $B_r(x_0)\setminus \overline B_{r/16}(x_0)$ we have
\begin{equation}\label{lemma2.21}
F(D^2\psi,D\psi)= aF\Big(D^2\varphi\Big(\frac{x-x_0}{r}\Big),D\psi\Big)-aF(I,D\psi)
\geq a\lambda(\alpha+1)-a\tilde\lambda\sqrt{d}>a,
\end{equation}
provided $\alpha>0$ is chosen large enough.
Let us now denote by $z$ a minimum point of $v-\psi$ in $\overline B_r(x_0)$. Since $v(x_1)-\psi(x_1)=-ar^2\varphi\big(\frac{x_1-x_0}{r}\big)<0$, the minimum of $v-\psi$ is strictly negative. Therefore $z\not\in\partial B_r(x_0)$, since $v-\psi\geq0$ on $\partial B_r(x_0)$. On the other hand, if $r/16<|z-x_0|<r$, observing that $|D\psi|\leq a(16^{\alpha+1}+2)\leq C_1a$ if we choose $C_1$ large enough, we would have that $\psi$ is an admissible test function for $v$ in a neighborhood of $z$ satisfying \eqref{lemma2.21}, which is impossible. Therefore $z\in\overline B_{r/16}(x_0)$ and we have
$$
v(z)\leq\boldsymbol{p}si(z)= P(z;y_1)+ar^2\alpha^{-1}(16^\alpha-1)\,,
$$
thus proving \eqref{lemma2.20}.
To conclude the proof, consider for every $y\in \overline B_{r/64}(z)$ the paraboloid
$$
P(x;y_1)-C'\frac{a}{2}|x-y|^2+c_y\,,
$$
where $C'$ is a universal constant to be chosen and $c_y$ is such that the above paraboloid touches $v$ from below. Note that the above paraboloid has opening $(C'+1)a$ and vertex at
$$
\frac{y_1}{C'+1}+\frac{C'y}{C'+1}\,.
$$
Observe that
$$
P(z;y_1)-C'\frac{a}{2}|z-y|^2+c_y\leq v(z)\leq P(z;y_1)+Car^2\,,
$$
hence $c_y\leq Car^2+\frac{C'a}{2}\big(\frac{r}{64}\big)^2$.
On the other hand, if $|x-z|\geq\frac{r}{16}$, we have
\begin{align*}
P(x;y_1)-C'\frac{a}{2}|x-y|^2+c_y
& \leq P(x;y_1)-C'\frac{a}{2}\Big(\frac{3r}{64}\Big)^2+Car^2+\frac{C'a}{2}\Big(\frac{r}{64}\Big)^2 \\
& < P(x;y_1)\leq v(x)\,,
\end{align*}
provided that we choose $C'$ large enough independently of $a$ and $r$. Thus the contact point $x_y$ belongs to the ball $B_{r/16}(z)\subset B_{r/8}(x_0)$. Note that for $y\in \overline B_{r/64}(z)$ the vertex $\frac{y_1}{C'+1}+\frac{C'y}{C'+1}$ spans the ball with center $\frac{y_1}{C'+1}+\frac{C'z}{C'+1}$ and radius $\frac{C'r}{64(C'+1)}$, which is contained in $B_1$, provided that $C'$ is large enough. Moreover, the gradient of the function $x\mapsto P(x;y_1)-C'\frac{a}{2}|x-y|^2+c_y$ is smaller than $2(C'+1)a$ and
$$
v(x_y)=P(x_y;y_1)-C'\frac{a}{2}|x_y-y|^2+c_y\leq a+2a+c_y\leq C''a
$$
for a sufficiently large, universal constant $C''>2(C'+1)$. Therefore, by applying Lemma~\ref{lemma2.1} with $B_{r}(x_0)$ and $B$ replaced respectively by $B_{r/8}(x_0)$ and the ball of radius $\frac{C'r}{64(C'+1)}$ and center $\frac{y_1}{C'+1}+\frac{C'z}{C'+1}$, and with $a$ replaced by $C''a$, we have that, if $3C''a\leq1$,
$$
\big|A_{C''a}\cap B_{r/8}(x_0)\big|\geq c_0\Big(\frac{C'}{C'+1}\Big)^d\big|B_{r/64}\big|\,,
$$
from which the conclusion follows with $C_1=3\max\{C'',16^{\alpha+1}+2\}$.
\end{proof}
We can now give the
\begin{proof}[Proof of Proposition~\ref{prop:savin}]
Let $x_0\in B_{1/2}$ be a point where $v(x_0)\leq\nu$. Then the function $u(x)=v(x_0+x)$ is a positive viscosity $(C_0^k\nu)$-supersolution of \eqref{prop:savin1} in $\overline B_1$ with $u(0)\leq\nu$. Consider the paraboloid with vertex at the origin and opening $20\nu$ touching $u$ from below, and observe that, since $u$ is positive, this paraboloid touches $u$ in $B_{1/3}$. Therefore $A_{20\nu}\cap\overline B_{1/3}\neq\emptyset$. Moreover, setting $D_0=A_{20\nu}\cap\overline B_{1/3}$, from Lemma~\ref{lemma2.2} we know that if $B_r(x)\subset B_1$, $B_{r/8}(x)\subset B_{1/3}$ and $D_0\cap B_r(x)\neq\emptyset$, then
$$
|(A_{20C_1\nu}\cap\overline B_{1/3})\cap B_{r/8}(x)|=|A_{20C_1\nu}\cap B_{r/8}(x)|\geq c_1|B_r|\,,
$$
provided $20C_1\nu\leq1$ and $v$, hence $u$, is a viscosity $(20C_1\nu)$-supersolution. By applying the same lemma to $D_1=A_{20C_1\nu}\cap\overline B_{1/3}$ and proceeding by induction, we have that the sets $D_0\subset D_1\subset\dots\subset D_k\subset B_{1/3}$, where $D_i=A_{20C_1^i\nu}\cap\overline B_{1/3}$ for $i=1,\dots,k$, all have the property that if $B_r(x)\subset B_1$, $B_{r/8}(x)\subset B_{1/3}$ and $D_i\cap B_r(x)\neq\emptyset$ for $i\leq k$, then
$$
|D_{i+1}\cap B_{r/8}(x)|\geq c_1|B_r|\,,
$$
provided $20C_1^k\nu\leq1$ and $v$, hence $u$, is a viscosity $(20C_1^k\nu)$-supersolution. Then, using Lemma~2.3 in \cite{Savin} and setting $C_2=20C_1$, it follows that for a suitable $\mu$ depending only on $c_1$, hence universal,
$$
\big|\{x\in B_{1/3}(x_0):\,v(x)> C_2^k\nu\}\big|\leq(1-\mu)^k|B_{1/3}|\,.
$$
From this inequality the conclusion then follows by a standard rescaling and covering argument.
\end{proof}
\end{document}
\begin{document}
\twocolumn[
\icmltitle{Uncertainty Decomposition in Bayesian Neural\\ Networks with Latent Variables}
\icmlsetsymbol{equal}{*}
\begin{icmlauthorlist}
\icmlauthor{Stefan Depeweg}{si,tum}
\icmlauthor{Jos\'e Miguel Hern\'andez-Lobato}{cam}
\icmlauthor{Finale Doshi-Velez}{ha}
\icmlauthor{Steffen Udluft}{si}
\end{icmlauthorlist}
\icmlaffiliation{si}{Siemens AG}
\icmlaffiliation{tum}{Technical University of Munich}
\icmlaffiliation{cam}{University of Cambridge}
\icmlaffiliation{ha}{Harvard University}
\icmlcorrespondingauthor{Stefan Depeweg}{[email protected]}
\vskip 0.3in
]
\printAffiliationsAndNotice{}
\begin{abstract}
Bayesian neural networks (BNNs) with latent variables are probabilistic models
which can automatically identify complex stochastic patterns in the data. We
describe and study in these models a decomposition of predictive uncertainty into its
epistemic and aleatoric components. First, we show how such a decomposition
arises naturally in a Bayesian active learning scenario by following an
information theoretic approach. Second, we use a similar decomposition to
develop a novel risk sensitive objective for safe reinforcement learning (RL).
This objective minimizes the effect of model bias in environments whose
stochastic dynamics are described by BNNs with latent variables. Our
experiments illustrate the usefulness of the resulting decomposition in
active learning and safe RL settings.
\end{abstract}
\section{Introduction}
Recently, there has been an increased interest in Bayesian neural networks
(BNNs) and their possible use in reinforcement learning (RL) problems
\citep{gal2016improving,blundell2015weight,houthooft2016vime}. In particular,
recent work has extended BNNs with a latent variable model to describe complex stochastic
functions \citep{depeweg2016learning,moerland2017learning}. The proposed
approach enables the automatic identification of arbitrary stochastic patterns
such as multimodality and heteroskedasticity, without having to manually
incorporate these into the model.
In model-based RL, the aforementioned BNNs with latent
variables can be used to describe complex stochastic dynamics. The BNNs encode
a probability distribution over stochastic functions, with each function
serving as an estimate of the ground truth continuous Markov Decision Process
(MDP). Such probability distribution can then be used for policy search, by finding the
optimal policy with respect to state trajectories simulated from the model. The BNNs with
latent variables produce improved probabilistic predictions and these result in better
performing policies \citep{depeweg2016learning,moerland2017learning}.
We can identify two distinct forms of uncertainty in the class of models
given by BNNs with latent variables. As described by \citet{kendall2017uncertainties},
``Aleatoric uncertainty captures noise inherent in the observations. On the
other hand, epistemic uncertainty accounts for uncertainty in the model.'' In
particular, epistemic uncertainty arises from our lack of knowledge of the values of
the synaptic weights in the network, whereas aleatoric uncertainty originates from
our lack of knowledge of the value of the latent variables.
In the domain of model-based RL the epistemic uncertainty
is the source of model bias (or representational bias, see e.g.
\citet{joseph2013reinforcement}). When there is high discrepancy between model
and real-world dynamics, policy behavior may deteriorate. In analogy to the
principle that ``a chain is only as strong as its weakest link,'' a drastic error
in estimating the ground truth MDP at a single transition step can render the
complete policy useless (see e.g. \citet{schneegass2008uncertainty}).
In this work we address the decomposition of the uncertainty present in the
predictions of BNNs with latent variables into its epistemic and aleatoric
components. We show the usefulness of such decomposition in two different
domains: active learning and risk-sensitive RL.
First we consider an active learning scenario with stochastic functions. We
derive an information-theoretic objective that decomposes the entropy of the
predictive distribution of BNNs with latent variables into its epistemic and
aleatoric components. By building on that decomposition, we then investigate
safe RL using a risk-sensitive criterion \citep{garcia2015comprehensive} which
focuses only on risk related to model bias, that is, the risk of the policy
performing significantly differently at test time than at training time. The
proposed criterion quantifies the amount of epistemic uncertainty (model bias risk)
in the model's predictive distribution and ignores any risk stemming from the
aleatoric uncertainty. Our experiments show that, by using this risk-sensitive
criterion, we are able to find policies that, when evaluated on the ground
truth MDP, are safe in the sense that on average they do not deviate
significantly from the performance predicted by the model at training time.
We focus on the off-policy batch RL scenario, in which we are
given an initial batch of data from an already-running system and are asked to
find a better policy. Such scenarios are common in real-world industry settings
such as turbine control, where exploration is restricted to avoid possible
damage to the system.
\iffalse
\def^{(i)}{^{(i)}}
\def\mathbf{x}opt{\vx_\star}
\def\mathbf{x}rec{\widetilde\vx}
\def\calX{\calX}
\def\calD{\calD}
\fi
\section{Bayesian Neural Networks with Latent Variables}\label{sec:bnnswlv}
Given data~${\mathcal{D} = \{ \mathbf{x}_n, \mathbf{y}_n \}_{n=1}^N}$, formed
by feature vectors~${\mathbf{x}_n \in \mathbb{R}^D}$ and targets~${\mathbf{y}_n
\in \mathbb{R}^K}$, we assume that~${\mathbf{y}_n =
f(\mathbf{x}_n,z_n;\mathcal{W}) + \bm \epsilon_n}$, where~$f(\cdot ,
\cdot;\mathcal{W})$ is the output of a neural network with weights
$\mathcal{W}$ and $K$ output units. The network receives as input the feature vector $\mathbf{x}_n$ and the
latent variable $z_n \sim \mathcal{N}(0,\gamma)$.
The activation functions for the hidden layers are rectifiers:~${\varphi(x) = \max(x,0)}$.
The activation functions for the output layers are the identity function:~${\varphi(x) = x}$.
The network output is
corrupted by the additive noise variable~$\bm \epsilon_n \sim \mathcal{N}(\bm 0,\bm
\Sigma)$ with diagonal covariance matrix $\bm \Sigma$. The role of the latent variable $z_n$ is to
capture unobserved stochastic features that can affect the network's
output in complex ways. Without $z_n$, randomness is only given by the additive
Gaussian observation noise $\bm \epsilon_n$, which can only describe limited stochastic
patterns.
The network has~$L$ layers, with~$V_l$ hidden units in layer~$l$,
and~${\mathcal{W} = \{ \mathbf{W}_l \}_{l=1}^L}$ is the collection of~${V_l
\times (V_{l-1}+1)}$ weight matrices.
The $+1$ is introduced here to account
for the additional per-layer biases.
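As a concrete illustration, the forward pass of this network can be sketched in NumPy; the layer sizes, weight scales and variable names below are our own illustrative choices, not the paper's.

```python
import numpy as np

def bnn_forward(x, z, weights):
    """Forward pass of the network y = f(x, z; W) described above: the
    latent variable z is fed alongside the features, hidden layers use
    rectifier activations and the output layer is the identity."""
    h = np.concatenate([x, [z]])
    for i, W in enumerate(weights):
        h = np.append(h, 1.0)           # the +1 input for the per-layer bias
        h = W @ h
        if i < len(weights) - 1:        # hidden layers: max(x, 0)
            h = np.maximum(h, 0.0)
    return h                            # output layer: identity

rng = np.random.default_rng(0)
D, V1, K, gamma = 3, 20, 1, 1.0         # sizes are illustrative
weights = [rng.normal(0.0, 0.3, size=(V1, D + 1 + 1)),  # V_l x (V_{l-1}+1)
           rng.normal(0.0, 0.3, size=(K, V1 + 1))]
x = rng.normal(size=D)
z = rng.normal(0.0, np.sqrt(gamma))     # z ~ N(0, gamma)
y = bnn_forward(x, z, weights)
print(y.shape)
```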
\iffalse
Let $\mathbf{Y}$ be an ${N\times K}$ matrix with the
targets~$\mathbf{y}_n$ and~$\mathbf{X}$ be an ${N\times D}$ matrix of
feature vectors~$\mathbf{x}_n$.
We denote by $\mathbf{z}$ the $N$-dimensional vector with the
values of the random disturbances $z_1,\ldots,z_N$ that were used to generate the data.
The likelihood function is
\begin{align}
p(\mathbf{Y}\,|\,\mathcal{W},\mathbf{z},\mathbf{X}) &=
\prod_{n=1}^N p(\mathbf{y}_n\,|\,\mathcal{W},\mathbf{z},\mathbf{x}_n) \\
&= \prod_{n=1}^N \prod_{k=1}^K \mathcal{N}(y_{n,k} \,|\, f(\mathbf{x}_n,z_n;\mathcal{W}),\bm \Sigma)\,.\label{eq:likelihood}
\end{align}
The prior for each entry in $\mathbf{z}$ is $\mathcal{N}(0,\gamma)$.
We also specify a Gaussian prior distribution for each entry in each of the weight matrices in
$\mathcal{W}$. That is,
\begin{align}
p(\mathbf{z}) & =\prod_{n=1}^N \mathcal{N}(z_n|0,\gamma)\, \\
p(\mathcal{W}) &= \prod_{l=1}^L \prod_{i=1}^{V_l} \prod_{j=1}^{V_{l-1}+1} \mathcal{N}(w_{ij,l}\,|\,0,\lambda)\,,
\label{eq:prior}
\end{align}
where $w_{ij,l}$ is the entry in the~$i$-th row and~$j$-th column
of~$\mathbf{W}_l$, and $\gamma$ and $\lambda$ are prior variances. The posterior
distribution for the weights~$\mathcal{W}$ and the random disturbances $\mathbf{z}$ is
given by Bayes' rule:
\begin{align}
p(\mathcal{W},\mathbf{z}\,|\,\mathcal{D}) = \frac{p(\mathbf{Y}\,|\,\mathcal{W},\mathbf{z},\mathbf{X})
p(\mathcal{W})p(\mathbf{z})}
{p(\mathbf{Y}\,|\,\mathbf{X})}\,.
\end{align}
Given a new input vector~$\mathbf{x}_\star$, we can then
make predictions for $\mathbf{y}_\star$ using the
predictive distribution
\begin{align}
p(\mathbf{y}_\star\,|\,\mathbf{x}_\star,\mathcal{D}) = \int\!\!
& \left[ \int
\mathcal{N}(y_\star\,|\, f(\mathbf{x}_\star,z_\star; \mathcal{W}), \bm \Sigma)\mathcal{N}(z_\star|0,\gamma) \,dz_\star \right] \\
&p(\mathcal{W},\mathbf{z}\,|\,\mathcal{D})\, d\mathcal{W}\,d\mathbf{z}\,.\label{eq:predictive_distribution}
\end{align}
The approximation of this Bayesian model is based on local $\alpha$-divergences and is described below.
\fi
We approximate the exact posterior distribution $p(\mathcal{W},\mathbf{z}\,|\,\mathcal{D})$
with the factorized Gaussian distribution
\begin{align}
q(\mathcal{W},\mathbf{z}) = &\left[ \prod_{l=1}^L\! \prod_{i=1}^{V_l}\! \prod_{j=1}^{V_{l\!-\!1}\!+\!1} \mathcal{N}(w_{ij,l}| m^w_{ij,l},v^w_{ij,l})\right] \\
&\left[\prod_{n=1}^N \mathcal{N}(z_n \,|\, m_n^z, v_n^z) \right]\,.\label{eq:posterior_approximation}
\end{align}
The parameters~$m^w_{ij,l}$,~$v^w_{ij,l}$
and~$m^z_n$,~$v^z_n$ are determined by
minimizing a divergence between
$p(\mathcal{W},\mathbf{z}\,|\,\mathcal{D})$
and the approximation $q$. For more details the reader is referred to the work of
\citet{hernandez2016black,depeweg2016learning}. In all our experiments we use
black-box $\alpha$-divergence minimization with $\alpha=1.0$, as it seems to
produce a better decomposition of uncertainty into its
epistemic and aleatoric components, although further studies
are needed to strengthen this claim.
The described BNNs with latent variables can describe complex stochastic
patterns while at the same time accounting for model uncertainty. They achieve
this by jointly learning $q(\mathbf{z})$, which captures the specific values of
the latent variables in the training data, and $q(\mathcal{W})$, which
represents any uncertainty about the model parameters. The result is a principled
Bayesian approach for inference of stochastic functions.
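A joint sample from the factorized Gaussian approximation $q(\mathcal{W},\mathbf{z})$ can be drawn per-entry; the sizes and variational parameter values in this sketch are illustrative placeholders, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy variational parameters (sizes illustrative): one mean and one
# variance per network weight entry and per training latent variable z_n
m_w, v_w = 0.1 * rng.normal(size=(20, 5)), np.full((20, 5), 0.01)
m_z, v_z = np.zeros(100), np.full(100, 0.1)

def sample_q(rng):
    """One joint draw (W, z) from the mean-field approximation
    q(W, z) = [prod_ij N(w_ij | m^w, v^w)] [prod_n N(z_n | m^z, v^z)]."""
    W = rng.normal(m_w, np.sqrt(v_w))
    z = rng.normal(m_z, np.sqrt(v_z))
    return W, z

W, z = sample_q(rng)
print(W.shape, z.shape)
```

Holding a draw of $\mathcal{W}$ fixed while resampling $\mathbf{z}$ isolates the aleatoric part of the predictive distribution, which is the operation the next section relies on.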
\section{Active Learning of Stochastic Functions}\label{sec:al}
Active learning is the problem of choosing which data points to incorporate
next into the training data so that the resulting gains in predictive
performance are as high as possible. In this section, we derive a
Bayesian active learning procedure for stochastic functions. This procedure
illustrates how to separate two sources of uncertainty, that is, aleatoric
and epistemic, in the predictive distribution of BNNs with latent variables.
Within a Bayesian setting,
active learning can be formulated as choosing data
based on the expected reduction in entropy of the posterior distribution
\cite{mackay1992information}. \citet{hernandez2015probabilistic} apply this
entropy-based approach to scalable BNNs. In \cite{houthooft2016vime}, the authors use
a similar approach as an exploration scheme in RL problems in which a BNN
is used to represent current knowledge about the transition dynamics. These
previous works only assume additive Gaussian noise and, unlike
the BNNs with latent variables from Section \ref{sec:bnnswlv},
they cannot capture complex stochastic patterns.
We start by deriving the expected reduction
in entropy in BNNs with latent variables. We assume a scenario
in which a BNN with latent variables has been fitted to a batch of data
$\mathcal{D}=\{(\mathbf{x}_1,\mathbf{y}_1),\ldots,(\mathbf{x}_N,\mathbf{y}_N)\}$ to produce
a posterior approximation $q(\mathcal{W},\mathbf{z})$. We now want to estimate the expected reduction in
posterior entropy for $\mathcal{W}$ when a particular data point $\mathbf{x}$ is incorporated in the training
set. The expected reduction in entropy is
\begin{align}
\text{H}(\mathcal{W}&|\mathcal{D}) - \mathbf{E}_{y|\mathbf{x},\mathcal{D}}\biggl[\text{H}(\mathcal{W}|\mathcal{D}\cup\{\mathbf{x},y\})\biggr] \\
&= \text{H}(\mathcal{W})-\text{H}(\mathcal{W}|y)\\
&= \text{I}(\mathcal{W};y) \\
&=\text{H}(y)-\text{H}(y|\mathcal{W})\label{eq:intermediate}\\
\begin{split}
&= \text{H}\left[\int_{\mathcal{W},z}p(y|\mathcal{W},\mathbf{x},z)p(z)p(\mathcal{W}|\mathcal{D})\,dz\,d\mathcal{W}\right] \\
& - \mathbf{E}_{\mathcal{W}|\mathcal{D}} \biggl[ \text{H}\Bigl(\int_z p(y|\mathcal{W}=\mathcal{W}_i,\mathbf{x},z) p(z)\, dz \Bigr)\biggr]
\end{split} \label{eq:al-objective}
\end{align}
where $\text{H}(\cdot)$ denotes the entropy of a random variable and
$\text{H}(\cdot|\cdot)$ and $\text{I}(\cdot;\cdot)$ denote the conditional
entropy and the mutual information between two random variables. In
(\ref{eq:intermediate}) and (\ref{eq:al-objective}) we see that the active
learning objective is given by the difference of two terms. The first term is
the entropy of the predictive distribution, that is, $\text{H}(y)$. The second
term, that is, $\text{H}(y|\mathcal{W})$, is a conditional entropy.
To compute this term,
we average across $\mathcal{W}_i \sim q(\mathcal{W})$
the entropy
$\text{H}\left[\int_{z}p(y|\mathcal{W}=\mathcal{W}_i,\mathbf{x},z)p(z)dz\right]$.
As shown in this expression, the randomness in $p(y|\mathcal{W}=\mathcal{W}_i,\mathbf{x})$
has its origin in the latent variable $z$ (and the constant output noise
$\epsilon \sim \mathcal{N}(0,\bm \Sigma)$ which is not shown here).
Therefore, this second term can be interpreted as the `aleatoric uncertainty'
present in the predictive distribution, that is, the average entropy of $y$
that originates from the latent variable $z$ and not from the uncertainty about
$\mathcal{W}$. We can refer to the whole objective function in (\ref{eq:al-objective}) as an estimate of
the epistemic uncertainty: the full predictive uncertainty about $y$ given $\mathbf{x}$ minus the corresponding aleatoric
uncertainty.
\begin{figure}
\caption{Information Diagram illustrating quantities of entropy with three variables.
The area surrounded by a dashed line indicates reduction in entropy given by
equation (\protect\ref{eq:al-objective}).}
\label{venn3d}
\end{figure}
The previous decomposition is also illustrated by the information diagram from Figure \ref{venn3d}. The
entropy of the predictive distribution is composed of the blue, cyan, grey and
pink areas. The blue area is constant: when both $\mathcal{W}$ and $z$ are
determined, the entropy of $y$ is constant and given by the entropy of the additive Gaussian noise $\epsilon \sim \mathcal{N}(0,\bm \Sigma)$.
$\text{H}(y|\mathcal{W})$ is given by the light and dark blue areas. The reduction in
entropy is therefore obtained by the grey and pink areas.
The quantity in equation (\ref{eq:al-objective}) can be approximated using standard entropy estimators, e.g.\ nearest-neighbor methods \citep{kozachenko1987sample,kraskov2004estimating,gao2016breaking}. For that, we
repeatedly sample $\mathcal{W}$ and $z$ and do forward passes through the neural network to sample $y$.
The resulting samples of $y$ can then be used to
approximate the respective entropies for each $\mathbf{x}$ using the nearest-neighbor approach:
\begin{align}
&\text{H}(y|\mathbf{x},\mathcal{D})-\mathbf{E}_{\mathcal{W}|\mathcal{D}}\left[\text{H}\Bigl(\int_{z}p(y|\mathcal{W},\mathbf{x},z)p(z)\,dz\Bigr)\right]\nonumber \\
&\approx \hat{\text{H}}(y_1,\ldots,y_L) -\frac{1}{M} \sum_{i=1}^M \left[\hat{\text{H}}(y_1^{\mathcal{W}_i},\ldots,y_L^{\mathcal{W}_i})\right]\,,
\label{eq:entropy_after}
\end{align}
where $\hat{\text{H}}(\cdot)$ computes the nearest-neighbor estimate of the entropy
given an empirical sample of points, $y_1,\ldots,y_L\sim p(y|\mathbf{x},\mathcal{D})$,
$\mathcal{W}_1,\ldots,\mathcal{W}_M\sim q(\mathcal{W})$ and
$y_1^{\mathcal{W}_i},\ldots,y_L^{\mathcal{W}_i}\sim
p(y|\mathbf{x},\mathcal{D},\mathcal{W}=\mathcal{W}_i)$ for $i=1,\ldots,M$.
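This estimate can be sketched as follows. We use the one-dimensional Kozachenko--Leonenko nearest-neighbor entropy estimator and, in place of a trained BNN, a toy Gaussian model whose epistemic uncertainty is known analytically; the model, sample sizes and seed are all our own illustrative choices.

```python
import numpy as np

EULER = 0.5772156649015329  # Euler-Mascheroni constant, equals -psi(1)

def nn_entropy(samples):
    """1-D Kozachenko--Leonenko estimate:
    H ~= psi(L) - psi(1) + (1/L) sum_i log(2 * rho_i),
    with psi(L) approximated by log(L) and rho_i the distance from
    sample i to its nearest neighbor."""
    s = np.sort(np.asarray(samples, dtype=float))
    L = len(s)
    gaps = np.diff(s)
    rho = np.empty(L)
    rho[0], rho[-1] = gaps[0], gaps[-1]
    rho[1:-1] = np.minimum(gaps[:-1], gaps[1:])
    rho = np.maximum(rho, 1e-12)            # guard against exact ties
    return np.log(L) + EULER + np.mean(np.log(2.0 * rho))

rng = np.random.default_rng(0)

# sanity check: entropy of N(0,1) is 0.5*log(2*pi*e) ~ 1.4189
h_hat = nn_entropy(rng.normal(size=20000))

# toy stand-in for the BNN: y | W=w ~ N(w, 0.5^2) with W ~ q(W) = N(0,1)
M, L = 200, 2000
ws = rng.normal(size=M)                          # W_i ~ q(W)
y_marg = rng.normal(rng.choice(ws, 20000), 0.5)  # samples of y | x, D
h_total = nn_entropy(y_marg)
h_cond = np.mean([nn_entropy(rng.normal(w, 0.5, L)) for w in ws])
epistemic = h_total - h_cond   # analytic value here: 0.5*log(5) ~ 0.80
print(round(epistemic, 2))
```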
\subsection{Toy Problems}
We will now illustrate the active learning procedure described in the previous section on two toy examples. In each problem we first train a BNN with 2 layers and 20 units per layer on the available data. Afterwards, we approximate the information-theoretic measures as outlined in the previous section.
We first consider a toy problem given by a regression task with heteroskedastic noise. For this, we define
the stochastic function
$y= 7 \sin (x) + 3|\cos (x / 2)| \epsilon$ with $\epsilon \sim \mathcal{N}(0,1)$. The data availability is
limited to specific regions of $x$. In particular, we sample 750 values of $x$
from a mixture of three Gaussians with mean parameters
$\{\mu_1= -4,\mu_2= 0,\mu_3= 4\}$, variance
parameters $\{\sigma_1=\frac{2}{5},\sigma_2=0.9,\sigma_3=\frac{2}{5}\}$ and with each Gaussian component
having weight equal to $1/3$ in the mixture.
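Generating this toy dataset is straightforward; the following sketch mirrors the description above (the random seed is arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
N = 750

# x ~ equally-weighted mixture of three Gaussians, as described above
means = np.array([-4.0, 0.0, 4.0])
sds = np.array([2.0 / 5.0, 0.9, 2.0 / 5.0])
comp = rng.integers(0, 3, size=N)          # component index, weight 1/3 each
x = rng.normal(means[comp], sds[comp])

# heteroskedastic target: y = 7 sin(x) + 3 |cos(x/2)| eps, eps ~ N(0,1)
y = 7.0 * np.sin(x) + 3.0 * np.abs(np.cos(x / 2.0)) * rng.normal(size=N)
print(x.shape, y.shape)
```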
Figure \ref{fig:rd} shows the raw data. We have
lots of points at both borders of the $x$ axis and in the center, but little data available in
between.
\begin{figure*}
\caption{Active learning example using heteroskedastic data.}
\label{fig:rd}
\label{fig:dd}
\label{fig:pd}
\label{fig:h}
\label{fig:hgw}
\label{fig:ob}
\label{toy_ep}
\end{figure*}
\begin{figure*}
\caption{Active learning example using bimodal data.}
\label{fig:brd}
\label{fig:bdd}
\label{fig:bpd}
\label{fig:bh}
\label{fig:bhgw}
\label{fig:bob}
\label{toy_bep}
\end{figure*}
Figure \ref{toy_ep} visualizes the respective
quantities. We see that the BNN with latent variables does an accurate
decomposition of its predictive uncertainty between epistemic uncertainty and
aleatoric uncertainty: the reduction in entropy approximation, as shown in
Figure \ref{fig:ob}, seems to be inversely proportional to the density used to
sample the data (shown in Figure \ref{fig:dd}). This makes sense, since in this
toy problem the most informative data points are expected to be located in regions
where data is scarce. Note that, in more complicated settings, the most
informative data points may not satisfy this property.
Next we consider a toy problem given by a regression task with bimodal data.
We define $x\in[-0.5, 2]$ and $y=10\sin (x)+\epsilon$
with probability $0.5$ and $y=10\cos (x)+\epsilon$ otherwise, where $\epsilon
\sim \mathcal{N}(0,1)$ and $\epsilon$ is independent of $x$. The data availability is
not uniform in $x$. In particular, we sample 750 values of $x$ from an exponential distribution with $\lambda=0.5$.
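A sketch of this bimodal data generator follows. The text does not state exactly how the exponential draws are mapped into $[-0.5, 2]$, so the shift-and-clip below is our assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bimodal(n, rng):
    # x from Exp(lambda=0.5), shifted to start at -0.5 and clipped to the
    # stated domain [-0.5, 2]; this truncation scheme is our assumption
    x = np.minimum(-0.5 + rng.exponential(scale=1.0 / 0.5, size=n), 2.0)
    eps = rng.normal(0.0, 1.0, size=n)
    branch = rng.random(n) < 0.5          # each mode with probability 0.5
    y = np.where(branch, 10.0 * np.sin(x), 10.0 * np.cos(x)) + eps
    return x, y

x, y = sample_bimodal(750, rng)
print(x.shape)
```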
Figure \ref{toy_bep} visualizes the respective
quantities. The predictive distribution shown in Figure \ref{fig:bpd} suggests that the BNN has learned the bimodal structure in the data. The predictive distribution appears to get increasingly `washed out' as we increase $x$. This increase in entropy as a function of $x$ is shown in Figure \ref{fig:bh}. The conditional entropy $\text{H}(y|\mathcal{W})$ of the predictive distribution shown in Figure \ref{fig:bhgw} appears to be symmetric around $x=0.75$.
This suggests that the BNN has correctly learned to separate the aleatoric component from the full uncertainty for the problem at hand: the ground truth function is symmetric around $x=0.75$, at which point it changes from a bimodal to a unimodal stochastic function. Figure \ref{fig:bob} shows the estimate of reduction in entropy for each $x$. Here we can observe two effects. First, as expected, the expected entropy reduction increases with higher $x$. Second, we see a slight
decrease from $x=-0.5$ to $x=0.75$. We believe the reason for this is twofold: because the data is limited to $[-0.5,2]$, we expect a higher level of uncertainty in the vicinity of both borders. Furthermore, we expect that learning a bimodal function requires more data to reach the same level of confidence than a unimodal function.
\iffalse
\subsubsection{Comparison between BNN}\label{toy_bnn}
In Figure \ref{toy_ad} we repeated the experiment, this time using $\alpha=0.5$.
We see that while the predictive distribution looks similar, the curiosity in the plot on
the lower left is now uniform in state space. Indeed, the entropy of the predictive
distribution $\text{H}(y)$ is equal to the conditional entropy $\text{H}(y|\mathcal{W})$. We see here
that our uncertainty over $\mathcal{W}$ has no effect on the entropy $\text{H}(y)$.
\begin{figure}
\caption{Entropy quantities for the toy data benchmark with $\alpha=0.5$.}
\label{toy_ad}
\end{figure}
In Figure \ref{latent} we visualize the latent variables $z_n$ for $\alpha=0.5$ and $\alpha=1.0$. We see that $\alpha=0.5$ yields a much tighter distribution for each $z_n$.
\begin{figure}
\caption{Distribution of latent variables $z_n$ for $\alpha=0.5$ and $\alpha=1.0$.
For each data point $(x_n,y_n)$ a histogram plot of the distribution $z_n$ is shown.
}
\label{latent}
\end{figure}
\fi
\section{Risk-Sensitive Reinforcement Learning}
In the previous section we studied how BNNs with latent variables can be used
for active learning of stochastic functions. The resulting algorithm is
based on a decomposition of predictive uncertainty into its aleatoric and
epistemic components. In this section we build on this result to derive a
new risk-sensitive objective in model-based RL with the aim of minimizing the
effect of model bias. Our new risk criterion enforces that the learned
policies, when evaluated on the ground truth system, are safe in the sense that
on average they do not deviate significantly from the performance predicted by
the model at training time.
Similar to \cite{depeweg2016learning}, we consider the domain of batch
reinforcement learning. In this setting we are given a batch of state transitions
$\mathcal{D}=\{(\mathbf{s}_t, \mathbf{a}_t,\mathbf{s}_{t+1})\}$ formed by
triples containing the current state $\mathbf{s}_t$, the action applied
$\mathbf{a}_t$ and the next state $\mathbf{s}_{t+1}$. For example, $\mathcal{D}$ may be
formed by measurements taken from an already-running system. In addition to
$\mathcal{D}$, we are also given a cost function $c$. The goal is to obtain
from $\mathcal{D}$ a policy in parametric form that minimizes $c$ on average under the
system dynamics.
The aforementioned problem can be solved using model-based policy search
methods. These methods include two key parts \citep{deisenroth2013survey}. The
first part consists in learning a dynamics model from $\mathcal{D}$. We assume
that the true dynamical system can be expressed by an unknown neural network
with stochastic inputs: \begin{equation}\label{eq:transitions} \mathbf{s}_t =
f_\text{true}(\mathbf{s}_{t-1},\mathbf{a}_{t-1},z_t;\mathcal{W}_\text{true})\,,\quad
z_t \sim \mathcal{N}(0,\gamma)\,, \end{equation} where
$\mathcal{W}_\text{true}$ denotes the synaptic weights of the network and
$\mathbf{s}_{t-1}$, $\mathbf{a}_{t-1}$ and $z_t$ are the inputs to the network.
In the second part of our model-based policy search algorithm, we optimize a
parametric policy given by a deterministic neural network with synaptic weights
$\mathcal{W}_{\pi}$.
This parametric policy computes the action $\mathbf{a}_t$ as a function of $\mathbf{s}_t$, that is,
$\mathbf{a}_t = \pi(\mathbf{s}_t;\mathcal{W}_\pi)$.
We optimize $\mathcal{W}_{\pi}$ to minimize the expected
cost $C = \sum_{t=1}^T c_t$ over a finite horizon $T$ with respect to our
belief $q(\mathcal{W})$, where $c_t = c(\mathbf{s}_t)$. This expected cost is obtained by averaging over
multiple virtual roll-outs. For each roll-out we choose $\mathbf{s}_0$
randomly from the states in $\mathcal{D}$, sample $\mathcal{W}_i\sim q$
and then simulate state trajectories using the model
$\mathbf{s}_{t+1}=f(\mathbf{s}_t,\mathbf{a}_t,z_t;\mathcal{W}_i)+\bm
\epsilon_{t+1}$ with policy $\mathbf{a}_t = \pi(\mathbf{s}_t;\mathcal{W}_\pi)$,
input noise $z_t \sim \mathcal{N}(0,\gamma)$ and additive noise $\bm
\epsilon_{t+1} \sim \mathcal{N}(\bm 0, \bm \Sigma)$. This procedure allows us
to obtain estimates of the policy's expected cost for any particular cost
function. If model, policy and cost function are differentiable, we are then
able to tune $\mathcal{W}_{\pi}$ by stochastic gradient descent over the
roll-out average.
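The virtual roll-out procedure can be sketched as follows, with a toy one-dimensional dynamics model standing in for the BNN; the dynamics, policy and cost below are purely illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_costs(f, policy, s0, W, T, gamma, Sigma, cost, rng):
    """One virtual roll-out: simulate s_{t+1} = f(s_t, a_t, z_t; W) + eps
    with a_t = pi(s_t), z_t ~ N(0, gamma) and eps ~ N(0, Sigma), and
    return the per-step costs c_t = cost(s_t)."""
    s, costs = s0, []
    for _ in range(T):
        a = policy(s)
        z = rng.normal(0.0, np.sqrt(gamma))
        eps = rng.normal(0.0, np.sqrt(Sigma))
        s = f(s, a, z, W) + eps
        costs.append(cost(s))
    return np.array(costs)

# toy scalar dynamics sample W, linear policy and quadratic cost
f = lambda s, a, z, W: W * s + a + 0.1 * z
policy = lambda s: -0.5 * s
costs = rollout_costs(f, policy, s0=1.0, W=0.9, T=100,
                      gamma=1.0, Sigma=0.01, cost=lambda s: s ** 2, rng=rng)
print(costs.shape)
```

Averaging such roll-outs over draws $\mathcal{W}_i \sim q$ and start states yields the expected-cost estimate used below.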
Given the cost function $c$, the objective to be optimized by the policy search algorithm is
\begin{align}
\textstyle J(\mathcal{W}_{\pi}) = \textstyle \mathbf{E}_{q(\mathcal{W})}\left[C\right]= \textstyle \mathbf{E}_{q(\mathcal{W})} \left[\sum_{t=1}^{T}c_t\right]\,.\label{eq:exact_cost}
\end{align}
In practice, $\mathbf{s}_0$ is sampled uniformly at random from the training data $\mathcal{D}$.
Standard approaches in risk-sensitive RL
\citep{garcia2015comprehensive,maddison2017particle,mihatsch2002risk} use
the standard deviation of the cost $C$ as risk measure. High risk is
associated to high variability in the cost $C$. To penalize risk, the new objective to be optimized is given by
\begin{align}
\textstyle J(\mathcal{W}_{\pi}) = \textstyle \mathbf{E} \left[C\right] +
\beta \mathbf{\sigma} (C)\,, \label{mf-risk}
\end{align}
where
$\mathbf{\sigma} (C)$ denotes the standard deviation of the cost
and the free parameter $\beta$ determines the amount of risk-avoidance ($\beta \ge 0$) or risk-seeking
behavior ($\beta < 0$). In this standard setting, the variability of the cost
$\mathbf{\sigma} (C)$ originates from two different sources. First, from the existing uncertainty
over the model parameters and secondly, from the intrinsic stochasticity of the dynamics.
One of the main dangers of model-based RL is model bias: the discrepancy of
policy behavior under a) the assumed model and b) the ground truth MDP. While
we cannot avoid the existence of such discrepancy when data is limited, we wish
to guide the policy search towards policies that stay in state spaces where the
risk for model-bias is low. For this, we can define the model bias $b$ as follows:
\begin{equation}
b(\mathcal{W}_{\pi}) = \sum_{t=1}^T \left|\mathbf{E}_\text{true} [c_t] - \mathbf{E}_{q(\mathcal{W})} [c_t] \right|\,,\label{eq:model_bias}
\end{equation}
where $\mathbf{E}_\text{true} [c_t]$ is the expected cost obtained at time $t$
when starting at the initial state $\mathbf{s}_0$ and the ground truth dynamics are evolved according to policy
$\pi(\mathbf{s}_t ; \mathcal{W}_{\pi})$. Note that we focus on having similar
expectations of the individual state costs $c_t$ instead of having similar
expectations of the final episode cost $C$. The former is a more strict
criterion since it may occur that model and ground truth diverge,
but both give roughly the same cost $C$ on average.
As indicated in (\ref{eq:transitions}), we assume that the true dynamics are
determined by a neural network with latent variables and weights given
by $\mathcal{W}_\text{true}$. By using the approximate posterior
$q(\mathcal{W})$, and assuming that $\mathcal{W}_\text{true}\sim
q(\mathcal{W})$, we can obtain an upper bound on the expected model bias
as follows:
\resizebox{\linewidth}{!}{
\begin{minipage}{\linewidth}
\small
\begin{align}
\mathbf{E}_{q(\mathcal{W})}&[b(\mathcal{W}_{\pi})] = \mathbf{E}_{\mathcal{W}_\text{true} \sim q(\mathcal{W})}
\sum_{t=1}^T \left|\mathbf{E}[c_t|\mathcal{W}_\text{true}] - \mathbf{E}_{q(\mathcal{W})} [c_t] \right| \nonumber \\
&= \sum_{t=1}^T \mathbf{E}_{\mathcal{W}_\text{true} \sim q(\mathcal{W})} \sqrt{(\mathbf{E}[c_t|\mathcal{W}_\text{true}] -
\mathbf{E}_{q(\mathcal{W})} [c_t])^2} \nonumber \\
&\le \sum_{t=1}^T \sqrt{ \mathbf{E}_{\mathcal{W}_\text{true} \sim q(\mathcal{W})} (\mathbf{E}[c_t|\mathcal{W}_\text{true}] -
\mathbf{E}_{q(\mathcal{W})} [c_t])^2} \nonumber\\
&= \sum_{t=1}^T \sqrt{ \sigma_{\mathcal{W}_\text{true} \sim q(\mathcal{W})}^2(\mathbf{E}[c_t|\mathcal{W}_\text{true}])} \nonumber \\
&= \sum_{t=1}^T \sigma_{\mathcal{W}_\text{true} \sim q(\mathcal{W})}(\mathbf{E}[c_t|\mathcal{W}_\text{true}])\,. \label{eq:risk_rl}
\end{align}
\end{minipage}}
We note that $\mathbf{E}[c_t|\mathcal{W}]$ is the expected cost of a policy $\mathcal{W}_{\pi}$ under the dynamics
given by $\mathcal{W}$. The expectation integrates out the influence of the
latent variables $z_1,\ldots,z_t$ and the output noise $\epsilon_1,\ldots,\epsilon_t$. The last equation in (\ref{eq:risk_rl}) can thereby be
interpreted as the variability of the cost that originates from our
uncertainty over the dynamics given by the distribution $q(\mathcal{W})$.
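The variance term $\sigma^2_{\mathcal{W}_\text{true} \sim q(\mathcal{W})}(\mathbf{E}[c_t|\mathcal{W}_\text{true}])$ can be estimated from roll-out samples grouped by model draw. The following toy check, under a cost model of our own choosing, illustrates the law of total variance behind this: with empirical (ddof$=0$) moments and equal group sizes, the between-group variance equals the total variance minus the average within-group variance exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 500, 500   # model draws x roll-outs per draw (sizes illustrative)

# toy cost model: c_t | W ~ N(W, 0.3^2) with model uncertainty W ~ N(0, 1)
W = rng.normal(0.0, 1.0, size=M)
c = rng.normal(W[:, None], 0.3, size=(M, N))

# law of total variance: Var(E[c_t|W]) = Var(c_t) - E_W[Var(c_t|W)]
lhs = np.var(c.mean(axis=1))                    # between-model variance
rhs = np.var(c) - np.mean(np.var(c, axis=1))    # total minus within
print(abs(lhs - rhs) < 1e-8)
```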
In Section \ref{sec:al} we showed how (\ref{eq:al-objective}) encodes a
decomposition of the entropy of the predictive distribution
into its aleatoric and epistemic components. The resulting
decomposition naturally arises from an information-theoretic approach to active learning.
We can express $\sigma^2_{\mathcal{W}_\text{true} \sim q(\mathcal{W})}(\mathbf{E}[c_t|\mathcal{W}_\text{true}])$
in a similar way using the law of total variance:
{\small
\begin{align}
\sigma^2_{\mathcal{W}_\text{true} \sim q(\mathcal{W})}(\mathbf{E}[c_t|\mathcal{W}_\text{true}]) =
\sigma^2(c_t) - \mathbf{E}_{\mathcal{W} \sim q(\mathcal{W})}[\sigma^2(c_t|\mathcal{W})]\,.\nonumber
\end{align}
}We extend the policy search objective of (\ref{eq:exact_cost}) with a risk
component given by an approximation to the model bias. Similar to
\citet{depeweg2016learning}, we derive a Monte Carlo approximation that enables
optimization by gradient descent. For this, we perform $M \times N$
roll-outs by first sampling $\mathcal{W}\sim q(\mathcal{W})$ a total of $M$ times and
then, for each of these samples of $\mathcal{W}$, performing $N$ roll-outs in
which $\mathcal{W}$ is fixed and we only sample the latent variables and the
additive Gaussian noise. In particular,
\resizebox{\linewidth}{!}{
\begin{minipage}{\linewidth}
\footnotesize
\begin{align}
&J(\mathcal{W}_{\pi}) = \sum_{t=1}^T \left\{ \mathbf{E}_{q(\mathcal{W})}\left[c_t\right] + \beta
\sigma_{\mathcal{W}_\text{true} \sim q(\mathcal{W})}(\mathbf{E}[c_t|\mathcal{W}_\text{true}])
\right\}
\approx \nonumber\\
& \sum_{t=1}^{T} \left\{ \frac{1}{MN } \left[ \sum_{m=1}^M \sum_{n=1}^N c_{m,n}(t) \right] +
\beta \hat{\sigma}_{M}\left( \frac{1}{N}\sum_{n=1}^{N} c_{m,n}(t)\right) \right\}\,,\label{eq:risk_rl1}
\end{align}
\end{minipage}}
where $c_{m,n}(t) = c(\mathbf{s}_{t}^{\mathcal{W}^{m},\{z_{1}^{m,n},\ldots,z_{t}^{m,n}\}, \{\bm{\epsilon}_{1}^{m,n},\ldots,\bm{\epsilon}_{t}^{m,n}\},\mathcal{W}_\pi})$ is the cost that is obtained at time $t$ in a roll-out
generated by using a policy with parameters $\mathcal{W}_\pi$, a transition function parameterized by
$\mathcal{W}^m$ and latent variable values $z_1^{m,n},\ldots,z_{t}^{m,n}$, with additive noise values
$\bm{\epsilon}_{1}^{m,n},\ldots,\bm{\epsilon}_{t}^{m,n}$. $\hat{\sigma}_{M}$ is an empirical estimate of the standard deviation calculated
over $M$ draws of $\mathcal{W}$.
The free parameter $\beta$ determines the importance of the risk criterion. As
described above, the proposed approximation generates $M \times N$ roll-out
trajectories for each starting state $\mathbf{s}_0$. For this, we sample
$\mathcal{W}^m \sim q(\mathcal{W})$ for $m=1,\ldots,M$ and for each $m$ we then
do $N$ roll-outs with different draws of the latent variables $z_t^{m,n}$ and
the additive Gaussian noise $\epsilon_t^{m,n}$. We average across the $M\times
N$ roll-outs to estimate $\mathbf{E}_{\mathcal{W} \sim
q(\mathcal{W})}[c_t]$. Similarly, for each $m$, we average across the
corresponding $N$ roll-outs to estimate
$\mathbf{E}[c_t|\mathcal{W}^m]$. Finally, we compute the empirical standard
deviation of the resulting estimates to approximate
$\sigma_{\mathcal{W}_\text{true} \sim q(\mathcal{W})}(\mathbf{E}[c_t|\mathcal{W}_\text{true}])$.
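A sketch of this $M \times N$ Monte Carlo estimate of the objective, operating on a precomputed cost tensor; the synthetic costs below stand in for actual roll-outs and are illustrative only.

```python
import numpy as np

def risk_objective(costs, beta):
    """Monte Carlo risk-sensitive objective: costs has shape (M, N, T),
    where costs[m, n, t] is the cost at time t of roll-out n under model
    sample W^m. Returns
      sum_t { mean_{m,n} c_{m,n}(t) + beta * sigma_M( mean_n c_{m,n}(t) ) }."""
    mean_cost = costs.mean(axis=(0, 1))      # estimate of E_{q(W)}[c_t]
    per_model = costs.mean(axis=1)           # estimates of E[c_t | W^m]
    risk = per_model.std(axis=0, ddof=1)     # sigma_M over the M model draws
    return np.sum(mean_cost + beta * risk)

rng = np.random.default_rng(0)
M, N, T = 50, 25, 100
# synthetic costs: a per-model offset creates epistemic spread across m
costs = rng.normal(1.0, 0.1, size=(M, 1, 1)) + rng.normal(0.0, 0.5, (M, N, T))
j0 = risk_objective(costs, beta=0.0)
j5 = risk_objective(costs, beta=5.0)
print(j5 > j0)
```

For $\beta > 0$ the risk term adds a non-negative penalty, so the penalized objective is never smaller than the plain expected cost.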
\begin{figure*}
\caption{Results on Industrial Benchmark. Performances
of policies trained using equation (\protect\ref{eq:risk_rl1}).}
\label{fig:ids_train}
\label{fig:ids_test}
\label{ids_result}
\end{figure*}
\subsection{Application: Industrial Benchmark}
We show now the effectiveness of the proposed method on a stochastic dynamical system. For
this we use the industrial benchmark, a high-dimensional stochastic model
inspired by properties of real industrial systems. A detailed description and
example experiments can be found in
\cite{hein2016introduction,depeweg2016learning}, with python source code
available\footnote{\url{https://github.com/siemens/industrialbenchmark}}\footnote{\url{https://github.com/siemens/policy_search_bb-alpha}}.
In our experiments, we first define a behavior policy
that is used to collect data by interacting with the system. This policy is
used to perform three roll-outs of length $1000$ for each setpoint value in
$\{0,10,20,\ldots,100\}$. The setpoint is a hyper-parameter of the industrial
benchmark that indicates the complexity of its dynamics. The setpoint is
included in the state vector $\mathbf{s}_t$ as a non-controllable variable which
is constant throughout the roll-outs. Policies in the industrial benchmark specify
changes $\Delta_v$, $\Delta_g$ and $\Delta_s$ in three steering variables $v$ (velocity), $g$ (gain)
and $s$ (shift) as a function of $\mathbf{s}_t$. In the behavior
policy these changes are stochastic and sampled according to
\begin{align}
\Delta_v & \sim \left\{
\begin{array}{@{}ll@{}}
\mathcal{N}(0.5,\frac{1}{\sqrt{3}})\,, & \text{if}\,\, v(t) < 40 \\
\mathcal{N}(-0.5,\frac{1}{\sqrt{3}})\,, & \text{if}\,\, v(t) > 60 \\
\mathcal{U}(-1,1)\,, & \text{otherwise}
\end{array}
\right.\\
\Delta_g & \sim \left\{
\begin{array}{@{}ll@{}}
\mathcal{N}(0.5,\frac{1}{\sqrt{3}})\,, & \text{if}\,\, g(t) < 40 \\
\mathcal{N}(-0.5,\frac{1}{\sqrt{3}})\,, & \text{if}\,\, g(t) > 60 \\
\mathcal{U}(-1,1)\,, & \text{otherwise}
\end{array}
\right.\\
\Delta_s & \sim \mathcal{U}(-1,1) \,.
\end{align}
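A sketch of this behavior policy's sampling rule; the function and variable names are ours, and we read the second argument of $\mathcal{N}(\cdot,\frac{1}{\sqrt{3}})$ as a standard deviation.

```python
import numpy as np

rng = np.random.default_rng(0)

def behavior_deltas(v, g, rng):
    """Sample the stochastic steering changes (dv, dg, ds) of the
    behavior policy: push v and g back toward [40, 60] when outside
    that band, otherwise explore uniformly on [-1, 1]."""
    def delta(val):
        if val < 40:
            return rng.normal(0.5, 1.0 / np.sqrt(3))   # std, our reading
        if val > 60:
            return rng.normal(-0.5, 1.0 / np.sqrt(3))
        return rng.uniform(-1.0, 1.0)
    return delta(v), delta(g), rng.uniform(-1.0, 1.0)

dv, dg, ds = behavior_deltas(v=30.0, g=70.0, rng=rng)
print(-1.0 <= ds <= 1.0)
```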
The velocity $v(t)$ and gain $g(t)$ can take values in $[0,100]$. Therefore,
the data collection policy will try to keep these values only in the medium
range given by the interval $[40,60]$. Because of this, large parts of the
state space will be unobserved. After collecting the data, the
$30,000$ state transitions are used to train a BNN with latent variables
with the same hyperparameters as in \cite{depeweg2016learning}.
After this, we train different policies using the Monte Carlo approximation
described in equation (\ref{eq:risk_rl1}). We consider different choices of
$\beta\in [0,5]$ and use a horizon of $T=100$ steps, with $M=50$ and $N=25$ and
a minibatch size of $1$.
Performance is measured using two different objectives. The first one is the
expected cost obtained under the ground truth dynamics of the system, that is
$\sum_{t=1}^T \mathbf{E}_\text{true} [c_t]$. The second objective is the model
bias as defined in equation (\ref{eq:model_bias}).
We compare with two baselines. The first one ignores any risk and, therefore,
is obtained by just optimizing equation (\ref{eq:exact_cost}). The second
baseline uses the standard deviation $\sigma(c_t)$ as risk criterion and, therefore,
is similar to equation (\ref{mf-risk}), which is the
standard approach in risk-sensitive RL.
In Figure \ref{ids_result} we show the results obtained by our method and
by the second baseline when performance is evaluated under the model (Figure
\ref{fig:ids_train}) or under the ground truth (Figure \ref{fig:ids_test}).
Each plot shows empirical estimates of the model bias vs. the expected
cost, for various choices of $\beta$. We also highlight the result obtained
with $\beta=0$, the first baseline.
Our novel approach for risk-sensitive reinforcement learning produces policies
that attain at test time better trade-offs between expected cost and model
bias. As $\beta$ increases, the policies gradually put more emphasis on the
expected model bias. This leads to higher costs but lower discrepancy between
model and real-world performance.
\section{Conclusion}
We have studied a decomposition of predictive uncertainty into its epistemic and aleatoric
components when working with Bayesian neural networks with
latent variables. This decomposition naturally arises in an
information-theoretic active learning setting. The decomposition also inspired
us to derive a novel risk objective for safe reinforcement learning that
minimizes the effect of model bias in stochastic dynamical systems.
\end{document}
\begin{document}
\frenchspacing
\title{Landau's Theorem for $\pi$-blocks\\ of $\pi$-separable groups}
\begin{abstract}\noindent
Slattery has generalized Brauer's theory of $p$-blocks of finite groups to $\pi$-blocks of $\pi$-separable groups where $\pi$ is a set of primes. In this setting we show that the order of a defect group of a $\pi$-block $B$ is bounded in terms of the number of irreducible characters in $B$. This is a variant of Brauer's Problem 21 and generalizes Külshammer's corresponding theorem for $p$-blocks of $p$-solvable groups. At the same time, our result generalizes Landau's classical theorem on the number of conjugacy classes of an arbitrary finite group. The proof relies on the classification of finite simple groups.
\end{abstract}
\textbf{Keywords:} Brauer's Problem 21, $\pi$-blocks, number of characters\\
\textbf{AMS classification:} 20C15
\section{Introduction}
Many authors, including Richard Brauer himself, have tried to replace the prime $p$ in modular representation theory by a set of primes $\pi$. One of the most convincing settings is the theory of $\pi$-blocks of $\pi$-separable groups which was developed by Slattery~\cite{Slattery,Slattery2} building on the work of Isaacs and others (for precise definitions see the next section). In this framework most of the classical theorems on $p$-blocks can be carried over to $\pi$-blocks. For instance, Slattery proved versions of Brauer's three main theorems for $\pi$-blocks. Also many of the open conjectures on $p$-blocks make sense for $\pi$-blocks.
In particular, \emph{Brauer's Height Zero Conjecture} and the \emph{Alperin--McKay Conjecture} for $\pi$-blocks of $\pi$-separable groups were verified by Manz--Staszewski~\cite[Theorem~3.3]{ManzStaszewski} and Wolf~\cite[Theorem~2.2]{Wolf} respectively. In a previous paper~\cite{SambalePi} the present author proved \emph{Brauer's $k(B)$-Conjecture} for $\pi$-blocks of $\pi$-separable groups. This means that the number $k(B)$ of irreducible characters in a $\pi$-block $B$ is bounded by the order of its defect groups.
In this paper we work in the opposite direction. Landau's classical theorem asserts that the order of a finite group $G$ can be bounded by a function depending only on the number of conjugacy classes of $G$.
\emph{Problem~21} on Brauer's famous list~\cite{BrauerLectures} from 1963 asks if the order of a defect group of a block $B$ of a finite group can be bounded by a function depending only on $k(B)$.
Even today we do not know if there is such a bound for blocks with just three irreducible characters (it is expected that the defect groups have order three in this case, see \cite[Chapter~15]{habil}). On the other hand, an affirmative answer to Problem~21 for $p$-blocks of $p$-solvable groups was given by Külshammer~\cite{KLandau3}. Moreover, Külshammer--Robinson~\cite{KR} showed that a positive answer in general would follow from the Alperin--McKay Conjecture.
The main theorem of this paper settles Problem~21 for $\pi$-blocks of $\pi$-separable groups.
\begin{ThmA}
The order of a defect group of a $\pi$-block $B$ of a $\pi$-separable group can be bounded by a function depending only on $k(B)$.
\end{ThmA}
Since $\{p\}$-separable groups are $p$-solvable and $\{p\}$-blocks are $p$-blocks, this generalizes Külshammer's result.
If $G$ is an arbitrary finite group and $\pi$ is the set of prime divisors of $|G|$, then $G$ is $\pi$-separable and $\operatorname{Irr}(G)$ is a $\pi$-block with defect group $G$ (see \autoref{facts} below). Hence, Theorem~A also implies Landau's Theorem mentioned above.
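To illustrate this connection with a small example of our own: for $G=S_3$ and $\pi=\{2,3\}$ we have $\operatorname{O}_{\pi'}(G)=1$, so $B=\operatorname{Irr}(S_3)$ is the unique $\pi$-block, with defect group $S_3$ and $k(B)=3$. Recovering the bound $|G|\le 6$ from $k(B)=3$ is precisely an instance of Landau's Theorem, and this case is also consistent with Theorem~B below.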
Külshammer's proof relies on the classification of finite simple groups and so does our proof.
Although it is possible to extract from the proof an explicit bound on the order of a defect group, this bound is far from being optimal. With some effort we obtain the following small values.
\begin{ThmB}
Let $B$ be a $\pi$-block of a $\pi$-separable group with defect group $D$. Then
\begin{align*}
k(B)=1&\Longleftrightarrow D=1,\\
k(B)=2&\Longleftrightarrow D=C_2,\\
k(B)=3&\Longleftrightarrow D\in\{C_3,S_3\}
\end{align*}
where $C_n$ denotes the cyclic group of order $n$ and $S_n$ is the symmetric group of degree $n$.
\end{ThmB}
\section{Notation}
Most of our notation is standard and can be found in Navarro's book~\cite{Navarro}.
For the convenience of the reader we collect definitions and crucial facts about $\pi$-blocks.
In the following, $\pi$ is any set of prime numbers. We denote the $\pi$-part of an integer $n$ by $n_\pi$.
A finite group $G$ is called $\pi$-\emph{separable} if there exists a normal series
\[1=N_0\unlhd\ldots\unlhd N_k=G\]
such that each quotient $N_i/N_{i-1}$ is a $\pi$-group or a $\pi'$-group. The largest normal $\pi$-subgroup of $G$ is denoted by $\operatorname{O}_{\pi}(G)$.
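For instance, with $n=360=2^3\cdot 3^2\cdot 5$ and $\pi=\{2,5\}$ one has $n_\pi=2^3\cdot 5=40$ and $n_{\pi'}=3^2=9$. Moreover, every solvable group is $\pi$-separable for every set of primes $\pi$, since the factors of a chief series have prime power order.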
\begin{Def}
\begin{itemize}
\item A $\pi$-\emph{block} of $G$ is a minimal non-empty subset $B\subseteq\operatorname{Irr}(G)$ such that $B$ is a union of $p$-blocks for every $p\in\pi$ (see \cite[Definition (1.12) and Theorem (2.15)]{Slattery}). In particular, the $\{p\}$-blocks of $G$ are the $p$-blocks of $G$. In accordance with the notation for $p$-blocks we set $k(B):=|B|$ for every $\pi$-block $B$.
\item A \emph{defect group} $D$ of a $\pi$-block $B$ of a $\pi$-separable group $G$ is defined inductively as follows (see \cite[Definition (2.2)]{Slattery2}). Let $\chi\in B$ and let $\lambda\in\operatorname{Irr}(\operatorname{O}_{\pi'}(G))$ be a constituent of the restriction $\chi_{\operatorname{O}_{\pi'}(G)}$ (we say that $B$ \emph{lies over} $\lambda$). Let $G_\lambda$ be the inertial group of $\lambda$ in $G$. If $G_\lambda=G$, then $D$ is a Hall $\pi$-subgroup of $G$ (such subgroups always exist in $\pi$-separable groups).
Otherwise there exists a unique $\pi$-block $b$ of $G_\lambda$ lying over $\lambda$ such that $\psi^G\in B$ for any $\psi\in b$ (see \autoref{FR} below). In this case we identify $D$ with a defect group of $b$.
As usual, the defect groups of $B$ form a conjugacy class of $G$.
It was shown in \cite[Theorem (2.1)]{Slattery2} that this definition agrees with the usual definition for $p$-blocks.
\item A $\pi$-block $B$ of $G$ \emph{covers} a $\pi$-block $b$ of $N\unlhd G$, if there exist $\chi\in B$ and $\psi\in b$ such that $[\chi_N,\psi]\ne 0$ (see \cite[Definition (2.5)]{Slattery}).
\end{itemize}
\end{Def}
\begin{Prop}\label{facts}
For every $\pi$-block $B$ of a $\pi$-separable group $G$ with defect group $D$ the following holds:
\begin{enumerate}[(i)]
\item\label{f1} $\operatorname{O}_{\pi}(G)\le D$.
\item\label{f2} For every $\chi\in B$ we have $\frac{|D|\chi(1)_\pi}{|G|_\pi}\in\mathbb{N}$ and for some $\chi$ this fraction equals $1$.
\item\label{f3} If the $\pi$-elements $g,h\in G$ are not conjugate, then
\[\sum_{\chi\in B}\chi(g)\overline{\chi(h)}=0.\]
\item\label{f4} If $B$ covers a $\pi$-block $b$ of $N\unlhd G$, then for every $\psi\in b$ there exists some $\chi\in B$ such that $[\chi_N,\psi]\ne 0$.
\item\label{f5} If $B$ lies over a $G$-invariant $\lambda\in\operatorname{Irr}(\operatorname{O}_{\pi'}(G))$, then $B=\operatorname{Irr}(G|\lambda)$.
\end{enumerate}
\end{Prop}
\begin{proof}
\begin{enumerate}[(i)]
\item See \cite[Lemma~(2.3)]{Slattery2}.
\item See \cite[Theorems (2.5) and (2.15)]{Slattery2}.
\item This follows from \cite[Corollary~8]{Robinsonpi} (by \cite[Remarks on p. 410]{Robinsonpi}, $B$ is really a $\pi$-block in the sense of that paper).
\item See \cite[Lemma (2.4)]{Slattery}.
\item See \cite[Theorem~(2.8)]{Slattery}.\qedhere
\end{enumerate}
\end{proof}
The following result allows inductive arguments (see \cite[Theorem~2.10]{Slattery} and \cite[Corollary~2.8]{Slattery2}).
\begin{Thm}[Fong--Reynolds Theorem for $\pi$-blocks]\label{FR}
Let $N$ be a normal $\pi'$-subgroup of a $\pi$-separable group $G$. Let $\lambda\in\operatorname{Irr}(N)$ with inertial group $G_\lambda$. Then the induction of characters induces a bijection $b\mapsto b^G$ between the $\pi$-blocks of $G_\lambda$ lying over $\lambda$ and the $\pi$-blocks of $G$ lying over $\lambda$. Moreover, $k(b)=k(b^G)$ and every defect group of $b$ is a defect group of $b^G$.
\end{Thm}
Finally we recall $\pi$-special characters which were introduced by Gajendragadkar~\cite{Gajendragadkar}. A character $\chi\in\operatorname{Irr}(G)$ is called $\pi$-\emph{special}, if $\chi(1)=\chi(1)_\pi$ and for every subnormal subgroup $N$ of $G$ and every irreducible constituent $\varphi$ of $\chi_N$ the order of the linear character $\det\varphi$ is a $\pi$-number.
Obviously, every character of a $\pi$-group is $\pi$-special.
If $\chi\in\operatorname{Irr}(G)$ is $\pi$-special and $N:=\operatorname{O}_{\pi'}(G)$, then $\chi_N$ is a sum of $G$-conjugates of a linear character $\lambda\in\operatorname{Irr}(N)$ by Clifford theory. Since the order of $\det\lambda=\lambda$ is a $\pi$-number and divides $|N|$, we obtain $\lambda=1_N$. This shows that $N\le\operatorname{Ker}(\chi)$.
\section{Proofs}
At some point in the proof of Theorem~A we need to refer to Külshammer's solution~\cite[Theorem]{KLandau3} of Brauer's Problem 21 for $p$-solvable groups:
\begin{Prop}\label{K}
There exists a monotonic function $\alpha:\mathbb{N}\to\mathbb{N}$ with the following property: For every $p$-block $B$ of a $p$-solvable group with defect group $D$ we have $|D|\le \alpha(k(B))$.
\end{Prop}
The following ingredient is a direct consequence of the classification of finite simple groups.
\begin{Prop}[{\cite[Theorem~2.1]{Kohl}}]\label{kohl}
There exists a monotonic function $\beta:\mathbb{N}\to\mathbb{N}$ with the following property: If $G$ is a finite non-abelian simple group such that $\operatorname{Aut}(G)$ has exactly $k$ orbits on $G$, then $|G|\le \beta(k)$.
\end{Prop}
In the following series of lemmas, $\pi$ is a fixed set of primes, $G$ is a $\pi$-separable group and $B$ is a $\pi$-block of $G$ with defect group $D$. \autoref{facts}\eqref{f2} guarantees the existence of a height $0$ character in $B$. We need to impose an additional condition on such a character.
\begin{Lem}\label{lemchar}
There exists some $\chi\in B$ such that $\operatorname{O}_{\pi}(G)\le\operatorname{Ker}(\chi)$ and $|D|\chi(1)_\pi=|G|_\pi$.
\end{Lem}
\begin{proof}
We argue by induction on $|G|$. Let $B$ lie over $\lambda\in\operatorname{Irr}(\operatorname{O}_{\pi'}(G))$. Suppose first that $G_\lambda=G$. Then $|D|=|G|_\pi$ and $B=\operatorname{Irr}(G|\lambda)$ by \autoref{facts}\eqref{f5}.
Since $\lambda$ is $\pi'$-special, there exists a $\pi'$-special $\chi\in B$ by \cite[Lemma~(2.7)]{Slattery}. It follows that $\operatorname{O}_\pi(G)\le\operatorname{Ker}(\chi)$ and $|D|\chi(1)_\pi=|D|=|G|_\pi$.
Now assume that $G_\lambda<G$. Let $b$ be the Fong--Reynolds correspondent of $B$ in $G_\lambda$. By induction there exists some $\psi\in b$ such that $\operatorname{O}_\pi(G_\lambda)\le\operatorname{Ker}(\psi)$ and $|D|\psi(1)_\pi=|G_\lambda|_\pi$. Let $\chi:=\psi^G\in B$. Since $[\operatorname{O}_\pi(G),\operatorname{O}_{\pi'}(G)]=1$ we have $\operatorname{O}_\pi(G)\le\operatorname{O}_\pi(G_\lambda)\le\operatorname{Ker}(\psi)$ and $\operatorname{O}_\pi(G)\le\operatorname{Ker}(\chi)$ (see \cite[Lemma~(5.11)]{Isaacs}). Finally, \[|D|\chi(1)_\pi=|D|\psi(1)_\pi|G:G_\lambda|_\pi=|G_\lambda|_\pi|G:G_\lambda|_\pi=|G|_\pi.\qedhere\]
\end{proof}
For every $p\in\pi$, the character $\chi$ in \autoref{lemchar} lies in a $p$-block $B_p\subseteq B$ whose defect group has order $|D|_p$. In fact, it is easy to show that every Sylow $p$-subgroup of $D$ is a defect group of $B_p$.
Our second lemma extends an elementary fact on $p$-blocks (see \cite[Theorem (9.9)(b)]{Navarro}).
\begin{Lem}\label{lemquot}
Let $N$ be a normal $\pi$-subgroup of $G$. Then $B$ contains a $\pi$-block of $G/N$ with defect group $D/N$.
\end{Lem}
\begin{proof}
Again we argue by induction on $|G|$. Let $\lambda\in\operatorname{Irr}(\operatorname{O}_{\pi'}(G))$ be under $B$. Suppose first that $G_\lambda=G$. Then $|D|=|G|_\pi$. By \autoref{lemchar}, there exists some $\chi\in B$ such that $N\le\operatorname{O}_\pi(G)\le\operatorname{Ker}(\chi)$ and $\chi(1)_\pi=1$. Hence, we may consider $\chi$ as a character of $\overline{G}:=G/N$. As such, $\chi$ lies in a $\pi$-block $\overline{B}$ of $\overline{G}$.
For any $\psi\in\overline{B}$ there exists a sequence of characters $\chi=\chi_1,\ldots,\chi_k=\psi$ such that $\chi_i$ and $\chi_{i+1}$ lie in the same $p$-block of $\overline{G}$ for some $p\in\pi$ and $i=1,\ldots,k-1$. Then $\chi_i$ and $\chi_{i+1}$ also lie in the same $p$-block of $G$. This shows that $\psi\in B$ and $\overline{B}\subseteq B$. For a defect group $P/N$ of $\overline{B}$ we have
\[|P/N|=\max\Bigl\{\frac{|\overline{G}|_\pi}{\psi(1)_\pi}:\psi\in\overline{B}\Bigr\}=\frac{|\overline{G}|_\pi}{\chi(1)_\pi}=|\overline{G}|_\pi=|D/N|\]
by \autoref{facts}\eqref{f2}. Since the Hall $\pi$-subgroups are conjugate in $G$, we conclude that $D/N$ is a defect group of $\overline{B}$.
Now let $G_\lambda<G$, and let $b$ be the Fong--Reynolds correspondent of $B$ in $G_\lambda$. After conjugation, we may assume that $D$ is a defect group of $b$. By induction, $b$ contains a block $\overline{b}$ of $G_\lambda/N$ with defect group $D/N$. If we regard $\lambda$ as a character of $\operatorname{O}_{\pi'}(G)N/N\cong\operatorname{O}_{\pi'}(G)$, we see that $\overline{G}_\lambda=G_\lambda/N$. It follows that the Fong--Reynolds correspondent $\overline{B}=\overline{b}^{\overline{G}}$ of $\overline{b}$ is contained in $B$ and has defect group $D/N$.
\end{proof}
The next result extends one half of \cite[Proposition]{KLandau1} to $\pi$-blocks.
\begin{Lem}\label{lemsub}
Let $N\unlhd G$, and let $b$ be a $\pi$-block of $N$ covered by $B$. Then $k(b)\le|G:N|k(B)$.
\end{Lem}
\begin{proof}
By \autoref{facts}\eqref{f4}, $b\subseteq\bigcup_{\chi\in B}{\operatorname{Irr}(N|\chi)}$. For every $\chi\in B$ the restriction $\chi_N$ is a sum of $G$-conjugate characters according to Clifford theory. In particular, $\lvert\operatorname{Irr}(N|\chi)\rvert\le|G:N|$ and
\[k(b)\le\sum_{\chi\in B}\lvert\operatorname{Irr}(N|\chi)\rvert\le |G:N|k(B).\qedhere\]
\end{proof}
It is well-known that the number of irreducible characters in a $p$-block $B$ is greater than or equal to the number of conjugacy classes which intersect a given defect group of $B$ (see \cite[Problem (5.7)]{Navarro}). For $\pi$-blocks we require the following weaker statement.
\begin{Lem}\label{lemnormal}
Let $N$ be a normal $\pi$-subgroup of $G$. Then the number of $G$-conjugacy classes contained in $N$ is at most $k(B)$.
\end{Lem}
\begin{proof}
Let $R\subseteq N$ be a set of representatives for the $G$-conjugacy classes inside $N$. By \autoref{lemchar}, there exists some $\chi\in B$ such that $\chi(r)=\chi(1)\ne 0$ for every $r\in R$. Thus, the columns of the matrix $M:=(\chi(r):\chi\in B,\,r\in R)$ are non-zero. By \autoref{facts}\eqref{f3}, the columns of $M$ are pairwise orthogonal, so in particular they are linearly independent. Hence, the number of rows of $M$ is at least $|R|$.
\end{proof}
We can prove the main theorem now.
\begin{proof}[Proof of Theorem~A]
The proof strategy follows closely the arguments in \cite{KLandau3}.
We construct inductively a monotonic function $\gamma:\mathbb{N}\to\mathbb{N}$ with the desired property. To this end, let $B$ be a $\pi$-block of a $\pi$-separable group $G$ with defect group $D$ and $k:=k(B)$. If $k=1$, then the unique character in $B$ has $p$-defect $0$ for every $p\in\pi$. It follows from \autoref{facts}\eqref{f2} that this can only happen if $D=1$. Hence, let $\gamma(1):=1$.
Now suppose that $k>1$ and $\gamma(l)$ is already defined for $l<k$. Let $N:=\operatorname{O}_{\pi'}(G)$. By a repeated application of the Fong--Reynolds Theorem for $\pi$-blocks and \autoref{facts}\eqref{f5}, we may assume that $B$ is the set of characters lying over a $G$-invariant $\lambda\in\operatorname{Irr}(N)$. Then $D$ is a Hall $\pi$-subgroup of $G$. By \cite[Problem~(6.3)]{Navarro2}, there exists a character triple isomorphism \[(G,N,\lambda)\to(\widehat{G},\widehat{N},\widehat{\lambda})\]
such that $G/N\cong\widehat{G}/\widehat{N}$ and $\widehat{N}=\operatorname{O}_{\pi'}(\widehat{G})\le\operatorname{Z}(\widehat{G})$. Then $\widehat{B}:=\operatorname{Irr}(\widehat{G}|\widehat{\lambda})$ is a $\pi$-block of $\widehat{G}$ with defect group $\widehat{D}\cong D$ and $k(\widehat{B})=k$. After replacing $G$ by $\widehat{G}$ we may assume that $N\le\operatorname{Z}(G)$.
Then
\[\operatorname{O}_{\pi'\pi}(G)=N\times P\]
where $P:=\operatorname{O}_{\pi}(G)$. If $P=1$, then $\operatorname{O}_{\pi'\pi}(G)=N\le\operatorname{Z}(G)$, and the Hall--Higman Lemma $\operatorname{C}_G(\operatorname{O}_{\pi'\pi}(G))\le\operatorname{O}_{\pi'\pi}(G)$ forces $G=N$. Then $G$ is a $\pi'$-group and we derive the contradiction $k=1$. Hence, $P\ne 1$.
Let $M$ be a minimal normal subgroup of $G$ contained in $P$.
By \autoref{lemquot}, $B$ contains a $\pi$-block $\overline{B}$ of $G/M$ with defect group $D/M$. Since the kernel of $B$ is a $\pi'$-group (see \cite[Theorem~(6.10)]{Navarro}), we have $k(\overline{B})<k$. By induction, it follows that
\begin{equation}\label{DM}
|D/M|\le \gamma(k-1)
\end{equation}
where we use that $\gamma$ is monotonic.
Let $H/M$ be a Hall $\pi'$-subgroup of $G/M$, and let
\[K:=\bigcap_{g\in G}gHg^{-1}\unlhd G.\]
Since $G/K$ embeds into the symmetric group on the set of cosets of $H$, we obtain
\[|G:K|\le|G:H|!=|G/M:H/M|!=(|G/M|_\pi)!=|D/M|!\le \gamma(k-1)!\]
by \eqref{DM}. Let $b$ be a $\pi$-block of $K$ covered by $B$. By \autoref{lemsub},
\begin{equation}\label{kb}
k(b)\le|G:K|k\le \gamma(k-1)!k.
\end{equation}
Thus we have reduced our problem to the block $b$ of $K$. Since $K/M\le H/M$ is a $\pi'$-group, $b$ has defect group $M$ by \autoref{facts}\eqref{f1}.
As a minimal normal subgroup, $M$ is a direct product of isomorphic simple groups.
Suppose first that $M$ is an elementary abelian $p$-group for some $p\in\pi$.
Then $K$ is $p$-solvable and $b$ is just a $p$-block with defect group $M$. Hence, with the notation from \autoref{K} we have
\begin{equation}\label{M1}
|M|\le \alpha(k(b))\le \alpha\bigl(\gamma(k-1)!k\bigr)
\end{equation}
by \eqref{kb}.
Now suppose that $M=S\times\ldots\times S=S^n$ where $S$ is a non-abelian simple group.
Let $x_1,\ldots,x_s\in S$ be representatives for the orbits of $\operatorname{Aut}(S)$ on $S\setminus\{1\}$. Since $\operatorname{Aut}(M)\cong\operatorname{Aut}(S)\wr S_n$ (where $S_n$ denotes the symmetric group of degree $n$), the elements $(x_i,1,\ldots,1)$, $(x_i,x_i,1,\ldots,1),\ldots,(x_i,\ldots,x_i)$ of $M$ with $i=1,\ldots,s$ lie in distinct conjugacy classes of $K$. Consequently, \autoref{lemnormal} yields $ns\le k(b)$. Now with the notation of \autoref{kohl} we deduce that $|S|\le \beta(s+1)$ and
\begin{equation}\label{M2}
|M|=|S|^n\le \beta(s+1)^n\le \beta\bigl(k(b)+1\bigr)^{k(b)}\le \beta\bigl(\gamma(k-1)!k+1\bigr)^{\gamma(k-1)!k}
\end{equation}
by \eqref{kb}.
Setting
\[\gamma(k):=\gamma(k-1)\max\bigl\{\alpha\bigl(\gamma(k-1)!k\bigr),\,\beta\bigl(\gamma(k-1)!k+1\bigr)^{\gamma(k-1)!k}\bigr\}\]
we obtain
\[|D|=|D/M||M|\le\gamma(k)\]
by \eqref{DM}, \eqref{M1} and \eqref{M2}.
Obviously, $\gamma$ is monotonic.
\end{proof}
\begin{proof}[Proof of Theorem~B]
We have seen in the proof of Theorem~A that $k(B)=1$ implies $D=1$. Conversely, \cite[Theorem~3]{SambalePi} shows that $D=1$ implies $k(B)=1$.
Now let $k(B)=2$. Then $B$ is a $p$-block for some $p\in\pi$. By a result of Brandt~\cite[Theorem~A]{Brandt}, $p=2$ and $|D|_2=2$ follows from \autoref{facts}\eqref{f2}. For every $q\in\pi\setminus\{2\}$, $B$ consists of two $q$-defect $0$ characters. This implies $D=C_2$. Conversely, if $D=C_2$, then we obtain $k(B)=2$ by \cite[Theorem~3]{SambalePi}.
Finally, assume that $k(B)=3$. As in the proof of Theorem~A, we may assume that
\[\operatorname{O}_{\pi'\pi}(G)=\operatorname{O}_{\pi'}(G)\times P\]
with $P:=\operatorname{O}_{\pi}(G)\ne 1$.
By the remark after \autoref{lemchar}, for every $p\in\pi$ there exists a $p$-block contained in $B$ whose defect group has order $|D|_p$. If $|D|_2\ge 4$, we derive the contradiction $k(B)\ge 4$ by \cite[Proposition~1.31]{habil}. Hence, $|D|_2\le 2$. By \autoref{lemquot}, $B$ contains a $\pi$-block $\overline{B}$ of $G/P$ with defect group $D/P$ and $k(\overline{B})<k(B)$. The first part of the proof yields $|D/P|\le 2$. In particular, $P$ is a Hall subgroup of $D$.
From \autoref{lemnormal} we see that $P$ has at most three orbits under $\operatorname{Aut}(P)$.
If $P$ is an elementary abelian $p$-group, then $B$ contains a $p$-block $B_p$ with normal defect group $P$. The case $p=2$ is excluded by the second paragraph of the proof. Hence, $p>2$ and $k(B_p)=k(B)=3$. Now \cite[Proposition~15.2]{habil} implies $|P|=3$ and $|D|\in\{3,6\}$. A well-known lemma by Hall--Higman states that $\operatorname{C}_G(P)\le \operatorname{O}_{\pi'\pi}(G)$. Hence, $|D|=6$ implies $D\cong S_3$.
It remains to deal with the case where $P$ is not elementary abelian. In this case, a result of Laffey--MacHale~\cite[Theorem~2]{LaffeyMacHale} shows that $P=P_1\rtimes Q$ where $P_1$ is an elementary abelian $p$-group and $Q$ has order $q\in\pi\setminus\{p\}$. Moreover, $|P_1|\ge p^{q-1}$. In particular, $p>2$ since $|D|_2\le 2$. Again $B$ contains a $p$-block $B_p$ with normal defect group $P_1$ and $k(B_p)=3$. As before, we obtain $|P_1|=3$ and $q=2$. This leads to $D\cong S_3$.
Conversely, let $D\in\{C_3,S_3\}$. By the first part of the proof, $k(B)\ge 3$.
Let $N:=\operatorname{O}_{\pi'}(G)$. Using the Fong--Reynolds Theorem for $\pi$-blocks again, we may assume that $B=\operatorname{Irr}(G|\lambda)$ where $\lambda\in\operatorname{Irr}(N)$ is $G$-invariant and $D$ is a Hall $\pi$-subgroup of $G$. By a result of Gallagher (see \cite[Theorem~5.16]{Navarro2}), we have $k(B)\le k(G/N)$. Moreover, $\operatorname{O}_{\pi}(G/N)\le DN/N$ and
\[\operatorname{C}_{G/N}(\operatorname{O}_{\pi}(G/N))\le\operatorname{O}_{\pi}(G/N)\]
by the Hall--Higman Lemma mentioned above. It is easy to see that this implies $G/N\le S_3$. Hence, $k(B)\le 3$ and we are done.
\end{proof}
\section*{Acknowledgment}
The author is supported by the German Research Foundation (\mbox{SA 2864/1-1} and \mbox{SA 2864/3-1}).
\end{document} |
\begin{document}
\title{Geometric regularity theory for a time-dependent Isaacs equation}
\author{ P\^edra D. S. Andrade, Giane C. Rampasso and Makson S. Santos*}
\date{\today}
\maketitle
\begin{abstract}
\noindent The purpose of this work is to produce a regularity theory for a class of parabolic Isaacs equations. Our techniques are based on approximation methods which allow us to connect our problem with a Bellman parabolic model. An approximation regime for the coefficients, combined with a smallness condition on the source term unlocks new regularity results in Sobolev and H\"older spaces.
\noindent \textbf{Keywords}: Isaacs parabolic equations; Regularity theory; Estimates in Sobolev and H\"older spaces; Approximation methods.
\noindent \textbf{MSC(2020)}: 35B65; 35K55; 35Q91.
\end{abstract}
\section{Introduction}
\label{introduction}
In this paper, we investigate a regularity theory for $L^p$-viscosity solutions to an Isaacs parabolic equation of the form
\begin{equation} \label{eq_main}
u_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}}\left[ - {\operatorname{Tr}}(A_{\alpha, \beta}(x,t)D^{2} u)\right] = f \quad \text{in} \quad Q_1,
\end{equation}
where $Q_1 := B_1\times (-1, 0]$, $A_{\alpha,\beta}:Q_1\times{\cal A}\times{\cal B} \rightarrow \mathbb{R}^{d^2}$ is a $(\lambda, \Lambda)$-elliptic matrix, $\cal A$ and $\cal B$ are countable sets, and the source term $f$ satisfies a set of conditions to be specified later. We produce new results on the regularity of $L^p$-viscosity solutions to \eqref{eq_main}. In particular, we are interested in obtaining improved regularity in Sobolev and H\"older spaces. We argue by approximation methods relating the model in \eqref{eq_main} to a Bellman parabolic problem.
Approximation methods were introduced by L. Caffarelli in the groundbreaking paper \cite{caffarelli}. More recently, these methods appeared in more general contexts, as developed in the works of E. Teixeira, J.M. Urbano and their collaborators; see for instance \cite{M3}, \cite{M8}, \cite{M1}, \cite{M5}, \cite{M4}. We also refer to the surveys \cite{pim_san}, \cite{M7}. Finally, we emphasize that important results have been obtained by approximation methods in a variety of different settings, for instance see \cite{ku_min2}, \cite{ku_min1}, just to mention a few.
The Isaacs equation appears in several branches of applied mathematics. Originally, it was introduced in the works of R. Isaacs as the PDE associated with two-player zero-sum stochastic differential games, see \cite{isaacs}. Interest in this class of equations was revitalized once the theory of viscosity solutions was introduced. We refer the reader to \cite{cra_ev_li}, \cite{cra_li}, \cite{ev_sou}, \cite{fl_sou} for existence and uniqueness of viscosity solutions. In \cite{kat}, the author developed representation formulas for viscosity solutions in the parabolic setting. See also \cite{bucaqui}.
The study of the regularity theory for fully nonlinear parabolic equations first appeared in \cite{krysaf2}, \cite{KrySaf}. In these papers, the authors examine linear parabolic equations with measurable coefficients. This analysis enables them to produce a Harnack inequality and develop regularity theory of the solutions to fully nonlinear parabolic equations of the form
\begin{equation}\label{eq_fully}
u_t+F(D^2u)=0 \,\,\, \mbox{in} \,\,\, Q_1.
\end{equation}
Namely, by a linearization argument, viscosity solutions to \eqref{eq_fully} are of class $\mathcal{C}^{1,\gamma}$, for some $\gamma\in(0,1)$. Under convexity assumptions on the operator $F$, the authors in \cite{krylov1}, \cite{krylov2} proved estimates in $\mathcal{C}_{loc}^{2,\gamma}(Q_1)$ for viscosity solutions to \eqref{eq_fully}.
In \cite{wangI}, \cite{wangII}, under the assumption that the operator with frozen coefficients has appropriate interior estimates, the author establishes several a priori estimates in Sobolev and H\"older spaces for fully nonlinear parabolic equations, extending the results in \cite{caffarelli}; see also \cite{cafcab}. A Harnack inequality for fully nonlinear uniformly parabolic equations is treated in \cite{imbersil}, where the authors also study existence, uniqueness and regularity results for viscosity solutions. Under an almost convexity assumption, the authors in \cite{Dong-Krylov-2019} prove weighted and mixed-norm Sobolev estimates for fully nonlinear second-order elliptic and parabolic equations with almost $\operatorname{VMO}$ dependence on space-time variables; see also \cite{Krylov-2018}.
The former developments rely on notions pertinent to the realm of $\mathcal{C}$-viscosity solutions, see \cite{ccks1996}. In \cite{cra_ko_swI}, an $L^p$-viscosity theory for fully nonlinear parabolic equations is put forward. More precisely, the authors prove $W^{2,1;p}$-estimates when the matrix $A_{\alpha, \beta}$ is independent of $\beta$. They also establish $\mathcal{C}^{1,\gamma}$-estimates in the case that $p > d+2$. In addition, they obtain H\"older regularity estimates for the gradient of $\mathcal{C}$-viscosity solutions.
Regularity estimates for $L^p$-viscosity solutions are also the subject of \cite{M8}. In that paper, the authors establish a regularity theory for this type of problem involving source terms with mixed norms, under distinct regularity regimes. In particular, they prove optimal interior regularity results in the $\mathcal{C}^{0,\gamma}(Q_1)$, $\mathcal{C}^{\operatorname{Log-Lip}}(Q_1)$ and $\mathcal{C}^{1, \operatorname{Log-Lip}}(Q_1)$ spaces. For sharp regularity estimates in Sobolev spaces we refer to \cite{caspim}. Local regularity in ${\mathcal C}^{1,\gamma}$ spaces is also the subject of \cite{kry2}, where the assumptions on the coefficients differ from those in \cite{cra_ko_swI} and \cite{wangI}.
A remarkable feature of fully nonlinear operators is that every model of the form $F(M)$ can be rewritten as
\[
\inf_{\alpha \in {\cal A}}\sup_{\beta \in {\cal B}}\{A_{\alpha\beta}(M)\},
\]
for some family $A_{\alpha\beta} = a^{\alpha\beta}_{ij}\partial_{ij}$, see \cite{caf_cab2}. Hence we can get information about the solutions to the parabolic problem governed by $F$ by examining the solutions to an associated Isaacs parabolic problem.
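A basic instance of this representation, recorded here only for illustration, is the Pucci minimal operator: for $0<\lambda\le\Lambda$,
\[
\mathcal{M}^{-}_{\lambda,\Lambda}(M)=\inf\left\{\operatorname{Tr}(AM):\lambda I\le A\le \Lambda I\right\},
\]
where the family of linear operators is indexed by the symmetric matrices $A$ with spectrum in $[\lambda,\Lambda]$; in this case the supremum over $\alpha$ is redundant.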
Another point of interest in the regularity theory of the Isaacs equation is due to its nature. Namely, Isaacs operators are neither concave nor convex, which places them outside the scope of the Evans--Krylov theory as developed in \cite{evans}, \cite{krylov1}, \cite{krylov2}. As a consequence, we have no a priori reason to expect $\mathcal{C}^{2,\gamma}$-interior estimates or the existence of classical solutions. In fact, in \cite{na_vl} the authors exhibited a solution to an elliptic Isaacs equation whose Hessian blows up at an interior point of the domain. As a conclusion, it is not reasonable to expect solutions to be more regular than $\mathcal{C}^{1,\gamma}$ if further conditions are not imposed.
Furthermore, since Isaacs equations are positively homogeneous of degree 1 with respect to the Hessian, we cannot resort to the concept of \textit{recession function}, as introduced in \cite{sil_tei}; see also \cite{pim_san} and \cite{tei_pim}. In addition, being positively homogeneous of degree 1, the Isaacs operator cannot be differentiable, for otherwise it would be linear. Therefore partial regularity results are not available for this class of problems.
The purpose of this paper is to extend the results in \cite{Pimentel} to the parabolic setting. In \cite{Pimentel}, the author uses approximation methods to relate solutions of the Isaacs elliptic equation to solutions of the Bellman one, under distinct smallness regimes imposed on the coefficients. The author established that viscosity solutions are locally in $W^{2,p}(B_1)$, ${\mathcal C}^{1,\text{Log-Lip}}(B_1)$, or in ${\mathcal C}^{2,\gamma}(B_1)$, depending on the smallness regime chosen.
Under a certain smallness regime on the matrix $A_{\alpha,\beta}$, we also use approximation methods to connect $L^p$-viscosity solutions to \eqref{eq_main} with the solutions to a parabolic Bellman equation of the form
\[
u_t+\inf_{\beta\in\mathcal{B}}\left[-\operatorname{Tr}(\bar{A}_{\beta}(x,t)D^2u)\right]=0 \,\,\, \mbox{in} \,\,\, Q_1.
\]
Since the Bellman operator is convex with respect to the Hessian, the idea is to import information from the Evans--Krylov theory to our equation.
First, we study the equation with dependence on the gradient
\begin{equation} \label{equation03}
u_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}}
\left[ - {\operatorname{Tr}}(A_{\alpha, \beta}(x,t) D^{2} u)-{\bf b}_{\alpha,\beta}(x,t)\cdot Du\right] = f \quad \text{in} \quad Q_1,
\end{equation}
where ${\bf b}_{\alpha,\beta}: Q_1\times\mathcal{A}\times\mathcal{B}\rightarrow\mathbb{R}^d$ is a given vector field. In this case, we follow the ideas in \cite{caspim} and \cite{Pimentel} to obtain Sobolev estimates for viscosity solutions to \eqref{equation03}. The proof of this fact follows from $W^{2,1;\delta}$-estimates for \eqref{eq_main}, combining standard measure-theoretical results and properties of $L^p$-viscosity solutions of fully nonlinear parabolic equations.
We also deal with the borderline case. In fact, under a different approximation regime and source term conditions, we show that $L^p$-viscosity solutions to \eqref{eq_main} are locally in the parabolic Log-Lipschitz space $\mathcal{C}^{1,\operatorname{Log-Lip}}(Q_1)$. Finally, if we refine the approximation regime, we improve the previous regularity result by showing $\mathcal{C}^{2,\gamma}$-estimates at the origin, for some $0<\gamma<1$.
The remainder of this article is structured as follows: in Section 2 we present our main results; we also gather a few facts used throughout the paper and detail our assumptions. In Section 3, we establish improved regularity in Sobolev spaces. Section 4 is devoted to the proof of improved regularity in borderline H\"older spaces. We conclude the paper by establishing $\mathcal{C}^{2,\gamma}$-estimates at the origin for \eqref{eq_main}.
\section{Preliminaries}
\subsection{Notations}
In this section we gather the notation used throughout the paper. The \emph{open ball} in $\mathbb{R}^d$ of radius $r$ and centered at the origin is denoted by $B_r$. We also define the \emph{parabolic domain} by
\[
Q_r\coloneqq B_r\times (-r^2,0]\subset\mathbb{R}^{d+1}
\]
whose \emph{parabolic boundary} is
\[
\partial_pQ_r\coloneqq B_r\times\{t=-r^2\}\cup\partial B_r\times (-r^2,0].
\]
In addition we define
\[
Q_r(x_0,t_0) := Q_r + (x_0,t_0).
\]
The \emph{parabolic cube} of side $r$ stands for
\[
K_r := [-r,r]^d\times[-r^2,0].
\]
Given $p\in[1,\infty]$, the \emph{parabolic Sobolev space} $W^{2,1;p}(Q_r)$ is defined by
\[
W^{2,1;p}(Q_r)\coloneqq\{u\in L^{p}(Q_r):u_t,\, Du,\, D^2u \in L^p(Q_r)\}
\]
endowed with the natural norm
\[
\|u\|_{W^{2,1;p}(Q_r)}=\left[\|u\|^p_{L^p(Q_r)}+\|u_t\|^p_{L^p(Q_r)}+\|Du\|^p_{L^p(Q_r)}+\|D^2u\|^p_{L^p(Q_r)}\right]^{\frac{1}{p}}.
\]
Hence, we say that $u\in W_{loc}^{2,1;p}(Q_r)$ if $u\in W^{2,1;p}(Q')$ for all $Q'\Subset Q_r$.
In order to define the parabolic H\"older space, we introduce the \emph{parabolic distance} between the points $(x_1,t_1)$ and $(x_2,t_2)$ in $Q_r$ by
\[
\operatorname{dist}((x_1,t_1),(x_2,t_2))\coloneqq\sqrt{|x_1-x_2|^2+|t_1-t_2|}.
\]
Therefore, a function $u:Q_r\rightarrow\mathbb{R}$ belongs to the \emph{parabolic H\"older space} $\mathcal{C}^{0,\gamma}(Q_r)$ if the following norm
\[
\|u\|_{\mathcal{C}^{0,\gamma}(Q_r)}\coloneqq \|u\|_{L^{\infty}(Q_r)}+[u]_{\mathcal{C}^{0,\gamma}(Q_r)}
\]
is finite, where $[u]_{\mathcal{C}^{0,\gamma}(Q_r)}$ denotes the semi-norm
\[
[u]_{\mathcal{C}^{0,\gamma}(Q_r)}\coloneqq \sup_{\substack{(x_1,t_1),(x_2,t_2)\in Q_r \\ (x_1,t_1)\neq(x_2,t_2)}}\frac{|u(x_1,t_1)-u(x_2,t_2)|}{\operatorname{dist}((x_1,t_1),(x_2,t_2))^{\gamma}}.
\]
This means that $u$ is $\gamma$-H\"older continuous with respect to the spatial variables and $\frac{\gamma}{2}$-H\"older continuous with respect to the time variable.
Similarly, we say that $u\in \mathcal{C}^{1,\gamma}(Q_r)$ if the spatial gradient $Du(x,t)$ exists in the classical sense for every $(x,t)\in Q_r$ and the norm
\begin{equation*}
\begin{aligned}
\|u\|_{\mathcal{C}^{1,\gamma}(Q_r)}&\coloneqq \|u\|_{L^{\infty}(Q_r)}+\|Du\|_{L^{\infty}(Q_r)}\\
&+ \sup_{\substack{(x_1,t_1),(x_2,t_2)\in Q_r \\ (x_1,t_1)\neq(x_2,t_2)}}\frac{|u(x_1,t_1)-u(x_2,t_2)-Du(x_1,t_1)\cdot(x_1-x_2)|}{\operatorname{dist}((x_1,t_1),(x_2,t_2))^{1+\gamma}}
\end{aligned}
\end{equation*}
is finite; \emph{i.e.} $Du$ is $\gammamma$-H\"older continuous in the spatial variables and $u$ is $\frac{1+\gammamma}{2}$-H\"older continuous with respect to the variable $t$. For the borderline case $\gammamma = 1$, we say that solutions are of class ${\mathcal C}^{1, 1}(Q_r)$ if $Du$ is Lipschitz continuous with respect to spatial variable and $u$ is also Lipschitz continuous with respect to time variable. Finally, $u\in\mathcal{C}^{2,\gammamma}(Q_r)$ if, for all $(x,t)\in Q_r$, the derivative with respect to the temporal variable $u_t(x,t)$ and the spatial Hessian $D^2u(x,t)$ exist in the classical sense and
\betagin{equation*}
\betagin{aligned}
\|u\|_{\mathcal{C}^{2,\gammamma}(Q_r)}&\coloneqq \|u\|_{L^{\infty}(Q_r)}+\|u_t\|_{\mathcal{C}^{0,\gammamma}(Q_r)}+\|Du\|_{\mathcal{C}^{1,\gammamma}(Q_r)}<+\infty.
\end{aligned}
\end{equation*}
Hence, every component of the Hessian $D^2u$ is $\gammamma$-H\"older continuous with respect to the spatial variables and the derivative of $u$ with respect to the time variable $u_t$ is $\frac{\gammamma}{2}$-H\"older continuous in $t$.
Lastly, we say that $u$ belongs to ${\mathcal C}^{1, \operatorname{Log-Lip}}_{loc}(Q_r)$ if, for every $(x_0,t_0)$ and every sufficiently small $r>0$, $u$ satisfies the estimate
\[
\displaystyle\sup_{Q_{r/2}(x_0,t_0)}\big|u(x,t)-[u(x_0,t_0)+Du(x_0,t_0)\cdot (x-x_0)]\big|\leq Cr^2\ln r^{-1},
\]
for some universal constant $C>0$.
We refer to \cite{cra_ko_swI}, \cite{M8}, \cite{imbersil} for more details.
\subsection{Assumptions and main results}
In this subsection, we detail the assumptions and main results of the paper. The first assumption concerns the conditions imposed on the matrix $A_{\alpha,\beta}$.
\begin{Assumption}[\it Ellipticity of $A_{\alpha,\beta}$]\label{assumption1}\rm
Let $\alpha\in\mathcal{A}$ and $\beta\in\mathcal{B}$, where $\mathcal{A},\mathcal{B}$ are countable sets. We assume that the matrix $A_{\alpha,\beta}:Q_1\times \mathcal{A}\times\mathcal{B}\to\mathbb{R}^{d^2}$ is $(\lambda,\Lambda)$-elliptic; \emph{i.e.}, there are constants $0<\lambda\leq\Lambda$ such that
\[
\lambda I\leq A_{\alpha,\beta}(x,t)\leq \Lambda I
\]
for every $(x,t)\in Q_1$ and every $\alpha\in\mathcal{A}$, $\beta\in\mathcal{B}$.
\end{Assumption}
In the sequel, we introduce some integrability conditions on the source term.
\begin{Assumption}[\it Regularity of the source term]\label{assumptionsourceterm}
Let $p>d+1$. We suppose that $f\in L^p(Q_1)$.
\end{Assumption}
An important ingredient in the analysis of the Sobolev regularity is the smallness regime described in the next assumption.
\begin{Assumption}[\it Smallness regime for regularity in $W^{2,1;p}$]\label{assumptionsobolev}
We assume that the matrix $\bar{A}_{\beta}:Q_1\times {\cal B}\to\mathbb{R}^{d^2}$ satisfies
\[
|A_{\alpha,\beta}(x,t)-\bar{A}_{\beta}(x,t)| \leq \varepsilon_1
\]
uniformly in $(x,t)$, $\alpha$ and $\beta$, where $\varepsilon_1>0$ is a sufficiently small constant that will be determined later.
\end{Assumption}
It is worth noting that, due to the parabolic nature of the problem, we need a slightly stronger assumption than in \cite{Pimentel}: here, we assume that solutions to our approximated problem have ${\mathcal C}^{1,1}$-estimates. The next condition is fundamental to produce Sobolev estimates, since we connect the equation \eqref{equation03} to a parabolic Bellman model.
\begin{Assumption} [\it ${\mathcal C}^{1,1}$-estimates for the parabolic Bellman model] \label{assumption3}
Let $v\in\mathcal{C}(Q_{8/9})$ be an $L^p$-viscosity solution to
\[
v_t+\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(\bar{A}_{\beta}(x,t)D^2v)]=0 \ \ \mbox{in} \ \ Q_{8/9}.
\]
Then $v\in {\mathcal C}^{1,1}(Q_{3/4})\cap\mathcal{C}(\bar{Q}_{3/4})$. Moreover, there exists a universal constant $C>0$ such that
\[
\|v\|_{{\mathcal C}^{1,1}(Q_{3/4})}\leq C\|v\|_{L^{\infty}(Q_{8/9})}.
\]
\end{Assumption}
In order to obtain Sobolev estimates for equation \eqref{equation03}, some assumptions on the lower-order coefficients are required.
\begin{Assumption}[\it The vector $\bf{b}_{\alpha,\beta}$]\label{assumption_vectorb} \rm
We assume that $\textbf{b}_{\alpha,\beta}\in L^\infty(Q_1)$ uniformly in $\alpha$ and $\beta$; \emph{i.e.}, there exists a constant $C > 0$ such that
\[
\sup_{\alpha \in {\cal A}}\sup_{\beta \in {\cal B}}\|\textbf{b}_{\alpha,\beta}\|_{L^{\infty}(Q_1)}\leq C.
\]
\end{Assumption}
For the proof of parabolic $\mathcal{C}^{1, \operatorname{Log-Lip}}$-estimates, we need to refine the smallness regime on the matrix $A_{\alpha,\beta}$. This is the content of our next assumption.
\begin{Assumption}[\it Smallness regime for regularity in $\mathcal{C}^{1,\operatorname{Log-Lip}}$] \label{assumption4}
We suppose that $f \in \operatorname{BMO}(Q_1)$; \emph{i.e.}, for all $Q_r(x_0, t_0) \subset Q_1$, we have
\[
\| f \|_{\operatorname{BMO}(Q_1)}:= \sup_{0<r\leq 1}\Xint-_{Q_r(x_0, t_0)} |f(x,t)- \langle f\rangle_{(x_0, t_0), r}|\, dxdt< \infty,
\]
where $\langle f\rangle_{(x_0, t_0), r}:= \displaystyle\Xint-_{Q_r(x_0, t_0)} f(x,t)\, dxdt$. In addition, for every $Q_{r}(x_0, t_0)\subset Q_1$, we assume that
\[
\displaystyle\sup_{Q_{r}(x_0, t_0)}|A_{\alpha,\beta}(x,t)-\bar{A}_{\beta}(x_0,t_0)| \leq\varepsilon_2,
\]
uniformly in $\alpha$ and $\beta$, where $\varepsilon_2>0$ is a sufficiently small constant that will be determined later.
\end{Assumption}
Our last assumption concerns ${\mathcal C}^{2,\gamma}$-estimates at the origin for solutions to \eqref{eq_main}, which require an additional smallness condition.
\begin{Assumption}[\it Smallness regime for ${\mathcal C}^{2,\gamma}$-estimates at the origin]\label{assumption5}
Assume that
\[
\sup_{(x,t)\in Q_{r}}\sup_{\alpha \in {\cal A}}\sup_{\beta \in {\cal B}}|A_{\alpha,\beta}(x,t)-\bar{A}_{\beta}(0,0)|\leq\varepsilon_3 r^{\gamma}.
\]
In addition, we suppose that
\[
\Xint-_{Q_r}|f(x,t)|^p\,dxdt \leq \varepsilon_3^pr^{\gamma p},
\]
where $\varepsilon_3>0$ is a sufficiently small constant that will be determined later.
\end{Assumption}
An example of a matrix satisfying A\ref{assumption5} is given by $A_{\alpha,\beta}(x,t):= \bar{A}_\beta(0,0) + \varepsilon_3|x|^\gamma I$, where $I$ denotes the identity matrix.
It is worth highlighting that, since the assumptions made throughout this manuscript are exactly the parabolic counterparts of those imposed in the stationary case, our results also depend on the smallness regime imposed on the coefficients, as in \cite{Pimentel}.
At this point, we put forward our main results. First, we study the gradient-dependent equation \eqref{equation03}. Under the assumption that the coefficients are uniformly close to the Bellman ones, we prove that solutions to \eqref{equation03} are of class $W_{loc}^{2,1;p}(Q_1)$.
\begin{teo}\label{theorem01}
Let $p>d+1$ and $u \in {\cal C}(Q_1)$ be an $L^p$-viscosity solution to \eqref{equation03}. Assume that A\ref{assumption1}-A\ref{assumption_vectorb} are in force. Then $u \in W_{loc}^{2,1;p}(Q_1)$, with the estimate
\[
\|u\|_{{ W^{2,1;p}}(Q_{1/2})} \leq C \left( \|u\|_{L^{\infty}(Q_1)} + \|f\|_{L^{p}(Q_1)}\right),
\]
where $C>0$ is a constant depending on $d$, $\lambda$, $\Lambda$, $p$ and $\displaystyle\sup_{\alpha \in {\cal A}}\sup_{\beta \in {\cal B}}\|\textbf{b}_{\alpha,\beta}\|_{L^{\infty}(Q_1)}$.
\end{teo}
We point out that in \cite{Escauriaza} the author proves $W^{2,p}$-estimates in the elliptic setting for $p> d - \varepsilon$, where $\varepsilon$ is a positive constant depending on the ellipticity constants. This number $\varepsilon$ is well known in the literature as {\it Escauriaza's exponent}. According to \cite[Remark I]{Escauriaza}, these results can be reproduced in the parabolic scenario; however, as far as we know, no results in that direction have been produced. The essential ingredients for the proof of such a result can be found in \cite{Cerutti-Grimaldi-07, Chen-17, Escauriaza-2000}; namely, properties of Green's functions associated with certain linear operators and well-posedness of certain parabolic problems. See also \cite[Section 5]{caspim} and the references therein.
Notice that the matrix $A_{\alpha, \beta}$ in Theorem \ref{theorem01} depends on $\beta$; this implies that the operator {\it is not} convex; compare with \cite[Theorem 9.1]{cra_ko_swI}. An adjustment in the smallness regime leads us to our second main result, which regards parabolic ${\mathcal C}^{1,\operatorname{Log-Lip}}$ regularity.
\begin{teo}\label{theorem02}
Let $u\in\mathcal{C}(Q_1)$ be an $L^p$-viscosity solution to \eqref{eq_main} and $(x_0,t_0)\in Q_{1/2}$. Suppose that A\ref{assumption1} and A\ref{assumption4} are in force. Then $u\in\mathcal{C}^{1,\operatorname{Log-Lip}}_{loc}(Q_1)$; \emph{i.e.}, there exist a universal constant $C>0$ and $0<r\leq 1/2$ such that
\small
\[
\displaystyle\sup_{Q_{r}(x_0,t_0)}\big|u(x,t)-[u(x_0,t_0)+Du(x_0,t_0)\cdot (x-x_0)]\big|\leq C\left(\|u\|_{L^{\infty}(Q_1)}+\|f\|_{\operatorname{BMO}(Q_1)}\right)r^2\ln r^{-1}.
\]
\end{teo}
Lastly, assuming additional conditions on the source term and refining the smallness regime, we are able to obtain ${\mathcal C}^{2,\gamma}$-estimates at the origin for solutions to \eqref{eq_main}. This is the content of our last main theorem.
\begin{teo}\label{theorem03}
Let $u \in {\mathcal C}(Q_1)$ be an $L^p$-viscosity solution to \eqref{eq_main}. Suppose that A\ref{assumption1} and A\ref{assumption5} are in force. Then $u$ is ${\mathcal C}^{2,\gamma}$ at the origin; \emph{i.e.}, there exist a polynomial $P$ of degree $2$ and a constant $C>0$ such that
\[
\|u - P\|_{L^\infty(Q_r)} \leq Cr^{2+\gamma},
\]
with
\[
|DP(0,0)| + \|D^2P(0,0)\| \leq C,
\]
for all $0<r \ll 1$.
\end{teo}
It is worth noting that in \cite{wangII} the authors establish ${\mathcal C}^{2,\gamma}$-regularity for solutions relying on ${\mathcal C}^{2,\gamma}$-estimates for the operator with frozen coefficients, which is not our case. We also notice that Theorem \ref{theorem03} \emph{does not} imply local ${\mathcal C}^{2,\gamma}$-regularity, unless A\ref{assumption5} holds for every $(x,t)$; see Remark \ref{last_rem}. Throughout the paper, we use some definitions and preliminary results, which are described in the next section.
\begin{Remark}
The authors believe that these results can be extended to operators with zeroth-order terms of the form
\[
G(D^2u, u_t, u, x, t) := u_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}}\left[ - {\operatorname{Tr}}(A_{\alpha, \beta}(x,t)D^{2} u) + a_{\alpha,\beta}(x,t)u(x,t)\right],
\]
by imposing that
\[
\sup_{\alpha \in {\cal A}}\sup_{\beta \in {\cal B}}\|a_{\alpha,\beta}\|_{L^{\infty}(Q_1)}\leq C.
\]
\end{Remark}
\begin{Remark}
In the proof of Theorem \ref{theorem01}, we consider the following smallness regime:
\begin{equation}\label{eq_scal}
\|u\|_{L^\infty(Q_1)} \leq 1 \;\;\mbox{ and } \;\; \|f\|_{L^p(Q_1)} \leq \varepsilon_1,
\end{equation}
for some $\varepsilon_1$ to be determined. The conditions in \eqref{eq_scal} are not restrictive. Indeed, if we consider the auxiliary function
\[
v(x,t) = \dfrac{u(\rho x, \rho^2t)}{K},
\]
with $0 < \rho \ll 1$ and $K > 0$, then $v$ solves
\[
v_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}}\left[ - {\operatorname{Tr}}(\tilde{A}_{\alpha, \beta}(x,t) D^{2} v)-\tilde{{\bf b}}_{\alpha,\beta}(x,t)\cdot Dv\right] = \tilde{f} \quad \text{in} \quad Q_1,
\]
in the viscosity sense, where
\[
\tilde{A}_{\alpha, \beta}(x,t) = A_{\alpha, \beta}(\rho x,\rho^2t),\quad \tilde{{\bf b}}_{\alpha,\beta}(x,t) = \rho{\bf b}_{\alpha,\beta}(\rho x,\rho^2t),
\]
and
\[
\tilde{f}(x,t) = \frac{\rho^2}{K}f(\rho x, \rho^2t).
\]
Thus, by choosing
\[
K = \|u\|_{L^\infty(Q_1)} + \varepsilon_1^{-1}\|f\|_{L^p(Q_1)},
\]
we can assume \eqref{eq_scal} without loss of generality, since the coefficients $\tilde{A}_{\alpha, \beta}$ and $\tilde{{\bf b}}_{\alpha,\beta}$ and the source term $\tilde{f}$ satisfy the same assumptions required in Theorem \ref{theorem01}. Similarly, we can assume
\begin{equation*}
\|f\|_{\operatorname{BMO}(Q_1)} \leq \varepsilon_2
\end{equation*}
in the proof of Theorem \ref{theorem02}.
\end{Remark}
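For the reader's convenience, we verify that this choice of $K$ indeed yields \eqref{eq_scal}. A change of variables gives
\[
\|\tilde{f}\|^p_{L^p(Q_1)}=\frac{\rho^{2p}}{K^p}\int_{Q_1}|f(\rho x,\rho^2 t)|^p\,dxdt=\frac{\rho^{2p-(d+2)}}{K^p}\|f\|^p_{L^p(Q_{\rho})}\leq\left(\frac{\|f\|_{L^p(Q_1)}}{K}\right)^p\leq\varepsilon_1^p,
\]
where we used $0<\rho\leq 1$ and $2p>d+2$ (which follows from $p>d+1$), together with $K\geq\varepsilon_1^{-1}\|f\|_{L^p(Q_1)}$. Similarly, $\|v\|_{L^\infty(Q_1)}\leq K^{-1}\|u\|_{L^{\infty}(Q_1)}\leq 1$.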
\subsection{Definitions and auxiliary results}
In what follows, we recall some definitions and results which will be useful throughout the paper. First, we present the definition of $L^p$-viscosity solution.
\begin{Definition}[$L^p$-viscosity solution]\label{def Lp-viscosity sol}
Let $f\in L^p_{\textrm{loc}}(Q_1)$. We say that $u\in \mathcal{C}(Q_1)$ is an $L^p$-viscosity subsolution $($resp. supersolution$)$ of
\begin{equation}\label{Lp-viscosity sol}
u_t+F(x,t,u,Du,D^2u)=f(x,t) \: \: \mbox{in} \: \: Q_1,
\end{equation}
if for every $\phi\in W^{2,1;p}_{\mathrm{loc}}(Q_1)$ we have
\begin{align*}
{\mathrm{ess.}\varliminf}_{(y,s)\to (x,t)} \,\{\phi_t(y,s)+F(y,s,u(y,s),D\phi(y,s),D^2\phi (y,s))-f(y,s)\} \leq 0
\\
(\mbox{resp.},\:{\mathrm{ess.}\varlimsup}_{(y,s)\to (x,t)} \,\{\phi_t(y,s)+F(y,s,u(y,s),D\phi(y,s),D^2\phi (y,s))-f(y,s)\} \geq 0 )
\end{align*}
whenever $u-\phi$ attains a local maximum (resp.\ minimum) at $(x,t) \in Q_1$. We say that $u$ is an $L^p$-viscosity solution of \eqref{Lp-viscosity sol} if $u$ is both an $L^p$-viscosity subsolution and supersolution of \eqref{Lp-viscosity sol}. We say that an $L^p$-viscosity solution $u$ is a normalized $L^p$-viscosity solution if $\sup_{Q_1}|u| \leq 1$.
\end{Definition}
For the sake of completeness, we define the class of viscosity solutions. First, we recall the definition of the extremal operators; see \cite{Astesiano} for a first contribution on fully nonlinear parabolic extremal equations.
\begin{Definition}[\it Pucci's extremal operators]\rm
Let $\mathcal{S}(d)$ denote the space of $d\times d$ symmetric matrices. For $M \in \mathcal{S}(d)$, we define Pucci's extremal operators by
\[
\mathcal{M}^+_{\lambda,\Lambda}(M)\,:=\,-\lambda\sum_{e_i>0}e_i\,-\,\Lambda\sum_{e_i<0}e_i
\]
and
\[
\mathcal{M}^-_{\lambda,\Lambda}(M)\,:=\,-\Lambda\sum_{e_i>0}e_i\,-\,\lambda\sum_{e_i<0}e_i,
\]
where $(e_i)_{i=1}^d$ are the eigenvalues of $M$.
\end{Definition}
Observe that if $A\in\mathcal{S}(d)$ is a $(\lambda,\Lambda)$-elliptic matrix, \emph{i.e.},
\begin{equation}\label{elliptic-pucci}
\lambda |\xi|^2\leq A_{ij}\xi_i\xi_j\leq \Lambda |\xi|^2
\end{equation}
for every $\xi\in\mathbb{R}^d$, it is easy to see that we can write Pucci's extremal operators as
\begin{equation}
\mathcal{M}^+_{\lambda,\Lambda}(M)=\sup_{\lambda I\leq A\leq \Lambda I}[-\operatorname{Tr}(AM)]
\end{equation}
and
\begin{equation}
\mathcal{M}^-_{\lambda,\Lambda}(M)=\inf_{\lambda I\leq A\leq \Lambda I}[-\operatorname{Tr}(AM)];
\end{equation}
we refer the reader to \cite{cafcab} for more details. See also \cite{cra_ko_swI}, \cite{imbersil}. Therefore, Pucci's extremal operators are prototypical examples of Bellman operators.
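For completeness, we sketch the standard verification of the first identity. Writing the spectral decomposition $M=\sum_{i=1}^{d}e_i\,v_i\otimes v_i$, for any $A$ with $\lambda I\leq A\leq\Lambda I$ we have $\lambda\leq v_i^{\top}Av_i\leq\Lambda$, so that
\[
-\operatorname{Tr}(AM)=-\sum_{i=1}^{d}e_i\,v_i^{\top}Av_i\leq-\lambda\sum_{e_i>0}e_i-\Lambda\sum_{e_i<0}e_i=\mathcal{M}^{+}_{\lambda,\Lambda}(M),
\]
with equality for $A=\lambda\sum_{e_i>0}v_i\otimes v_i+\Lambda\sum_{e_i\leq 0}v_i\otimes v_i$; the identity for $\mathcal{M}^{-}_{\lambda,\Lambda}$ is analogous.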
\begin{Definition}[\it The class of viscosity solutions]\rm
Let $f\in\mathcal{C}(Q_1)$ and $0<\lambda\leq\Lambda$. We say that $u$ is in the class of supersolutions $\overline{S}(\lambda, \Lambda, f)$ if
\[
u_t+\mathcal{M}^+_{\lambda,\Lambda}(D^2u)\,\geq\, f(x,t) \,\, \mbox{in} \,\, Q_1
\]
in the viscosity sense. Similarly, $u$ is in the class of subsolutions $\underline{S}(\lambda, \Lambda, f)$ if
\[
u_t+\mathcal{M}^-_{\lambda,\Lambda}(D^2u)\,\leq\, f(x,t) \,\, \mbox{in} \,\, Q_1
\]
in the viscosity sense. Finally, the class of $(\lambda, \Lambda)$-viscosity solutions is defined by
\[
S(\lambda,\Lambda,f)=\overline{S}(\lambda, \Lambda, f)\cap\underline{S}(\lambda, \Lambda, f).
\]
\end{Definition}
In what follows, we introduce some measure-theoretic notions that we use in the next section. We refer the reader to \cite{caffarelli} for more details.
\begin{Definition}\rm
Let $L:Q_1\rightarrow\mathbb{R}$ be an affine function and $M$ a positive constant. A paraboloid of opening $M$ is a function of the form
\[
P_{M}(x,t)=L(x,t)\pm M(|x|^2+|t|).
\]
In addition, we introduce
\[
\underline{G}_{M}(u, Q)\coloneqq\{(x_0,t_0)\in Q: \exists \, P_{M} \; \mbox{that touches } u \mbox{ from below at} \; (x_0,t_0)\},
\]
\[
\overline{G}_{M}(u, Q)\coloneqq\{(x_0,t_0)\in Q: \exists \, P_{M} \; \mbox{that touches } u \mbox{ from above at} \; (x_0,t_0)\},
\]
and
\[
G_M(u,Q)\coloneqq \underline{G}_{M}(u, Q)\cap \overline{G}_{M}(u, Q).
\]
In addition, denote
\[
\underline{A}_M(u,Q)\coloneqq Q\setminus\underline{G}_M(u,Q),\,\,\,\,\,\; \overline{A}_M(u,Q)\coloneqq Q\setminus\overline{G}_M(u,Q)
\]
and
\[
A_M(u,Q)\coloneqq \underline{A}_{M}(u, Q)\cup \overline{A}_{M}(u, Q).
\]
\end{Definition}
We close this section with a well-known result from measure theory. Given $K_1$, a dyadic cube is obtained by repeating the following procedure a finite number of times: we split the sides of $K_1$ into two equal intervals in $x$ and four equal ones in $t$. We do the same with each of the $2^{d+2}$ cubes obtained, and we repeat this process. Each cube obtained in this way is called a dyadic cube. We say that $\tilde{K}$ is a predecessor of a cube $K$ if $K$ is one of the $2^{d+2}$ cubes obtained by splitting the sides of $\tilde{K}$.
In addition, given $m \in \mathbb{N}$ and a dyadic cube $K$, the set $\bar{K}^m$ is obtained by stacking $m$ copies of its predecessor $\bar{K}$; in other words, if $\bar{K}$ has the form $(a,b)\times L$, then $\bar{K}^m = (b,b + m(b-a))\times L$.
\begin{Lemma}[Stacked covering lemma]\label{lem_cov}
Let $m \in \mathbb{N}$, $A \subset B \subset K_1$ and $0<\rho <1$. Suppose that
\begin{itemize}
\item[(i)] $|A| \leq \rho|K_1|$;
\item[(ii)] if $K$ is a dyadic cube of $K_1$ such that $|K \cap A| > \rho|K|$, then $\bar{K}^m\subset B$.
\end{itemize}
Then $|A| \leq \dfrac{\rho(m+1)}{m} |B|$.
\end{Lemma}
For a proof of Lemma \ref{lem_cov} we refer to \cite[Lemma 4.27]{imbersil}; see also \cite{cafcab}. The next section is devoted to the proof of Theorem \ref{theorem01}.
\section{Estimates in Sobolev spaces}
Throughout this section, we detail the proof of Theorem \ref{theorem01}, namely the $W^{2,1;p}$-estimates for equation \eqref{equation03}. First, we establish the same estimate for \eqref{eq_main}, \emph{i.e.}, the PDE with no dependence on the gradient.
\begin{Proposition}\label{prop_sob}
Let $p>d+1$ and $u \in {\cal C}(Q_1)$ be a normalized $L^p$-viscosity solution to \eqref{eq_main}. Assume that A\ref{assumption1}-A\ref{assumption3} hold true. Then $u \in W_{loc}^{2,1;p}(Q_1)$. Moreover, there exists a universal constant $C>0$ such that
\[
\|u\|_{{ W^{2,1;p}}(Q_{1/2})} \leq C \left( \|u\|_{L^{\infty}(Q_1)} +
\|f\|_{L^{p}(Q_1)}\right).
\]
\end{Proposition}
We use standard arguments to prove Proposition \ref{prop_sob}; see for instance \cite{cafcab} and \cite{caspim}, just to cite a few. For the sake of completeness, we present the main steps of the proof. We start with the following lemma.
\begin{Lemma}[A priori regularity in $W_{loc}^{2, 1; \delta}(Q_1)$] \label{Lemma01}
Let $u\in{\cal C}(Q_1)$ be a normalized viscosity solution to \eqref{eq_main}. Assume that A\ref{assumption1}-A\ref{assumptionsourceterm} are in force. Then, there exist $\delta>0$ and a universal constant $C>0$ such that, for every $M>0$,
\[
| A_M(u, Q_1)\cap K_1| \leq C M^{-\delta}.
\]
\end{Lemma}
Lemma \ref{Lemma01} is a well-known result; we refer the reader to \cite[Proposition 7.4]{cafcab} for the elliptic setting. In the parabolic context, it follows from \cite[Theorem 4.11]{wangI}. Next, we prove an approximation lemma that relates solutions to \eqref{eq_main} with solutions of the parabolic Bellman model.
\begin{Proposition}[First approximation lemma]\label{approximationlemma}
Let $u\in\mathcal{C}(Q_1)$ be a normalized $L^p$-viscosity solution to \eqref{eq_main}.
Suppose that A\ref{assumption1}-A\ref{assumption3} hold true. Then, given $\delta>0$, there exists $\varepsilon_1>0$ such that, if
\[
\|f \|_{L^{p}(Q_1)} \leq \varepsilon_1,
\]
then there exists $h \in {\mathcal C}^{1,1}(Q_{3/4})$ satisfying
$$ \| u - h\|_{L^{\infty}(Q_{3/4})} \leq \delta.$$
\end{Proposition}
\begin{proof}
Suppose the statement of the proposition is false. Then, there exists $\delta_0 > 0$ for which we can find sequences $(A_{\alpha, \beta}^n)_{n\in\mathbb{N}}$, $(f_n)_{n\in\mathbb{N}}$ and $(u_n)_{n\in\mathbb{N}}$ such that
$$|A^n_{\alpha, \beta}(x,t) - \bar{A}_{\beta}(x,t)| + \|f_n\|_{L^p(Q_{3/4})} \leq 1/n,$$
$u_n$ solves
\begin{equation}\label{eq02}
(u_n)_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}}
[ - {\operatorname{Tr}}(A^n_{\alpha, \beta}(x, t) D^{2} u_n(x, t))] = f_n(x, t) \quad \text{in} \quad Q_1,
\end{equation}
and
$$\|u_n-h\|_{L^{\infty}(Q_{3/4})} > \delta_0$$
for every $h \in {\mathcal C}^{1,1}(Q_{3/4})$ and every $n\in\mathbb{N}$. The regularity theory available for \eqref{eq02} implies that, through a subsequence if necessary, $u_n$ converges to a function $u_{\infty}$ in the ${\mathcal C}^{0,\gamma}$-topology; see \cite{imbersil}, \cite{KrySaf}. Now, by standard stability results for viscosity solutions, we have that
\[
(u_{\infty})_t + \inf_{\beta \in {\cal B}} [ - {\operatorname{Tr}}(\bar{A}_{\beta}(x, t) D^{2} u_{\infty})] = 0;
\]
see \cite{cra_ko_swI}, \cite{imbersil}.
From A\ref{assumption3} we have $u_{\infty} \in {\mathcal C}^{1,1}(Q_{3/4})$. Finally, taking $h=u_{\infty}$ we obtain a contradiction. This finishes the proof.
\end{proof}
Now we are able to establish a first level of improved decay rate. In the sequel, $Q$ denotes a parabolic domain such that $Q_{8\sqrt{d}} \subset Q$.
\begin{Proposition} \label{Prop 1}
Let $0<\rho<1$ and $u\in\mathcal{C}(Q)$ be a normalized $L^p$-viscosity solution to \eqref{eq_main} in $ Q_{8{\sqrt{d}}}$ satisfying
\[
- | x |^2 - | t | \leq u(x,t) \leq |x |^2 + | t | \quad \mbox{in} \quad Q \setminus Q_{6{\sqrt{d}}}.
\]
Assume that A\ref{assumption1}-A\ref{assumption3} are satisfied and also that
\[
\| f \|_{L^{d+1}(Q_{8{\sqrt{d}}})} \leq \varepsilon.
\]
Then, there exists $\bar{M}>1$ such that
\[
| G_{\bar{M}}(u, Q) \cap K_1| \geq 1 - \rho.
\]
\end{Proposition}
\begin{proof}
From Proposition \ref{approximationlemma}, there exists $h\in {\mathcal C}^{1,1}_{loc}(Q_{8\sqrt{d}})$ such that
\[
\|u-h\|_{L^{\infty}(Q_{6{\sqrt{d}}})}\leq\delta.
\]
Extend $h$ continuously to $Q$ in such a way that
\[
h = u \quad \text{in} \quad Q \setminus Q_{7{\sqrt{d}}} \quad\text{and}\quad \| u - h\|_{L^{\infty} (Q)} = \| u - h\|_{L^{\infty} (Q_{6{\sqrt{d}}})}.
\]
By the maximum principle we obtain
\[
\| u \|_{L^{\infty} (Q_{6{\sqrt{d}}})} = \| h\|_{L^{\infty} (Q_{6{\sqrt{d}}})}.
\]
It follows that
\[
\| u - h\|_{L^{\infty} (Q)} \leq 2
\]
and
\[
-2 - | x |^2 - | t | \leq h(x,t) \leq 2 + |x |^2 + | t | \quad \text{in} \quad Q \setminus Q_{6{\sqrt{d}}}.
\]
Hence, we can find $N>1$ such that $Q_1 \subset G_{N}(h, Q)$.
Now, we introduce the auxiliary function
\[
w := \frac{\delta}{2C \varepsilon}(u - h).
\]
Applying Lemma \ref{Lemma01} to $ w \in S(\lambda, \Lambda, f)$, we have
\[
|A_{M_1}(w, Q)\cap K_1 | \leq C M_1^{-\sigma}
\]
for every $M_1>0$, which leads to
\[
|A_{M_2}(u - h , Q)\cap K_1| \leq C {\varepsilon}^{\sigma}M_2^{-\sigma}
\]
for every $M_2>0$.
Therefore,
\[
|G_{N}(u - h, Q) \cap K_1| \geq 1 - C{\varepsilon}^{\sigma}N^{-\sigma}.
\]
By choosing $\varepsilon \ll 1$ sufficiently small, and taking $\bar{M} \equiv 2N$, we conclude the proof.
\end{proof}
\begin{Proposition} \label{Prop 2}
Let $0<\rho<1$ and $u\in\mathcal{C}(Q)$ be a normalized $L^p$-viscosity solution to
\eqref{eq_main} in $Q_{8\sqrt{d}}$. Assume that A\ref{assumption1}-A\ref{assumption3} are in force. In addition, suppose that
\[
\|f\|_{L^{d+1}(Q_{8\sqrt{d}})} \leq \varepsilon
\]
and $G_1(u,Q) \cap K_3 \not= \emptyset.$ Then
\[
|G_M(u, Q) \cap K_1| \geq 1 - \rho,
\]
where $M=C\bar{M}$ and $\bar{M}$ is given by Proposition \ref{Prop 1}.
\end{Proposition}
\begin{proof}
Let $(x_1, t_1) \in G_1(u,Q)\cap K_3$. This implies that there exists an affine function $L$ such that
\[-\dfrac{|x-x_1|^2 + |t-t_1|}{2} \leq u(x,t) - L(x,t) \leq \dfrac{|x-x_1|^2 + |t-t_1|}{2} \;\;\; \text{in}\;\; Q.
\]
Now, we set
\[
v:= \dfrac{u-L}{C},
\]
where $C>1$ is a large constant such that $\|v\|_{L^{\infty}(Q_{8\sqrt{d}})} \leq 1$ and
\[
-|x|^2 - |t| \leq v(x,t) \leq |x|^2 + |t| \;\ \text{in} \;\ Q\setminus Q_{6\sqrt{d}}.
\]
Notice that $v$ solves
\[
v_t + \sup_{\alpha \in {\cal A}}\inf_{\beta \in {\cal B}}(-\operatorname{Tr}(A_{\alpha, \beta}(x,t)D^2v)) = \dfrac{f}{C}.
\]
If we set $M:=C\bar{M}$, from Proposition \ref{Prop 1} we obtain
\[
|G_M(u,Q)\cap K_1| = |G_{C\bar{M}}(u,Q)\cap K_1| = |G_{\bar{M}}(v,Q)\cap K_1| \geq 1 - \rho.
\]
\end{proof}
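The identity $G_{C\bar{M}}(u,Q)=G_{\bar{M}}(v,Q)$ used above follows from the behavior of paraboloid touching under affine renormalizations; we record the brief computation. If $P(x,t)=\tilde{L}(x,t)\pm\bar{M}(|x|^2+|t|)$ touches $v$ from above (resp.\ below) at $(x_0,t_0)$, then $CP+L=(C\tilde{L}+L)\pm C\bar{M}(|x|^2+|t|)$ is a paraboloid of opening $C\bar{M}$ touching $u=Cv+L$ from above (resp.\ below) at the same point, and conversely; hence the two sets of touching points coincide.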
The following result is an application of Lemma \ref{lem_cov} and produces decay rates for the sets $A_M\cap K_1$.
\begin{Proposition}\label{Prop 4}
Let $0<\rho<1$ and $u\in\mathcal{C}(Q)$ be a normalized $L^p$-viscosity solution to \eqref{eq_main} in $Q_{8\sqrt{d}}$. Extend $f$ by zero outside of $Q_{8\sqrt{d}}$. Suppose that A\ref{assumption1}-A\ref{assumption3} hold true. Denote
\[
A\coloneqq A_{M^{k+1}}(u, Q_{8\sqrt{d}})\cap K_1
\]
and
\[
B := \left\lbrace A_{M^{k}}(u, Q_{8\sqrt{d}})\cap K_1\right\rbrace\cup\left\lbrace(x,t)\in K_1: m(f^{d+1})(x,t)\geq(c_1M^{k})^{d+1}\right\rbrace,
\]
where $m$ denotes the (parabolic) maximal operator, $c_1$ is a positive universal constant and $M>1$ depends only on $d$. Then,
\[
|A|\leq \rho|B|.
\]
\end{Proposition}
\begin{proof}
First, observe that
\[
|u(x,t)|\leq 1\leq |x|^2+|t| \ \ \mbox{in} \ \ Q_{8\sqrt{d}}\backslash Q_{6\sqrt{d}}.
\]
According to Proposition \ref{Prop 1}, we obtain
\[
|G_{M^{k+1}}(u,Q_{8\sqrt{d}})\cap K_1|\geq 1-\rho,
\]
which implies that
\[
|A|=|A_{M^{k+1}}(u,Q_{8\sqrt{d}})\cap K_1|\leq \rho|K_1|.
\]
Now, consider any dyadic cube $K\coloneqq K_{1/2^i}$ of $K_1$ satisfying
\begin{equation}\label{hypothesis}
|A_{M^{k+1}}(u,Q_{8\sqrt{d}})\cap K|=|A\cap K|>\rho|K|.
\end{equation}
It remains to show that $\bar{K}^m \subset B$, for some $m \in \mathbb{N}$. We proceed by a contradiction argument, assuming that $\bar{K}^m\not\subset B$; that is, there exists $(x_1,t_1)$ such that
\begin{equation}\label{inter contradiction}
(x_1,t_1)\in\bar{K}^m\cap G_{M^k}(u, Q_{8\sqrt{d}})
\end{equation}
and
\begin{equation}\label{max_cont}
m(f^{d+1})(x_1,t_1)\leq (c_1M^k)^{d+1}.
\end{equation}
Define
\[
v(x,t)\coloneqq\frac{2^{2i}}{M^k}u\left(\frac{x}{2^i},\frac{t}{2^{2i}}\right).
\]
Since $Q_{8\sqrt{d}} \subset Q_{2^i\cdot8\sqrt{d}}$, we have that $v$ solves
\[
v_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}}
[ - {\operatorname{Tr}}(A_{\alpha, \beta}(x, t) D^{2} v)]=\tilde{f} \ \ \mbox{in} \ \ Q_{8\sqrt{d}},
\]
where
\[
\tilde{f}(x,t)\coloneqq\frac{1}{M^k}f\left(\frac{x}{2^i},\frac{t}{2^{2i}}\right).
\]
We have
\[
\|\tilde{f}\|_{L^{d+1}(Q_{8\sqrt{d}})}^{d+1}=\frac{2^{i(d+2)}}{M^{k(d+1)}}\displaystyle\int_{Q_{8\sqrt{d}/2^i}}|f(x,t)|^{d+1}dxdt\leq c(d)c_1^{d+1}.
\]
Now, by choosing $c_1$ small enough in \eqref{max_cont}, we obtain
\[
\|\tilde{f}\|_{L^{d+1}(Q_{8\sqrt{d}})}\leq\varepsilon.
\]
Furthermore, \eqref{inter contradiction} yields
\[
G_1(v,Q_{2^i\cdot8\sqrt{d}})\cap K_3\neq\emptyset.
\]
From Proposition \ref{Prop 2} we get
\[
|G_{M}(v, Q_{2^i\cdot8\sqrt{d}})\cap K_1|\geq 1-\rho,
\]
\emph{i.e.},
\[
|G_{M^{k+1}}(u, Q_{8\sqrt{d}})\cap K|\geq(1-\rho)|K|,
\]
which contradicts \eqref{hypothesis}. The claim now follows from Lemma \ref{lem_cov}.
\end{proof}
At this point, we are ready to prove Proposition \ref{prop_sob}.
\begin{proof}[Proof of Proposition \ref{prop_sob}]
Define
\[
\alpha_k\coloneqq|A_{M^k}(u,Q_{8\sqrt{d}})\cap K_1|
\]
and
\[
\beta_k\coloneqq|\{(x,t)\in K_1:m(f^{d+1})(x,t)\geq(c_1M^k)^{d+1}\}|.
\]
From Proposition \ref{Prop 4}, we have that
\[
\alpha_{k+1}\leq\rho(\alpha_k+\beta_k).
\]
Hence, by induction,
\begin{equation}\label{inequality01}
\alpha_k\leq\rho^k+ \displaystyle\sum_{i=0}^{k-1}\rho^{k-i}\beta_i.
\end{equation}
Since $f\in L^p(Q_1)$, it follows that $m(f^{d+1})\in L^{p/(d+1)}(Q_1)$ and, for some $C>0$,
\[
\|m(f^{d+1})\|_{L^{p/(d+1)}(Q_1)}\leq C\|f\|^{d+1}_{L^p(Q_1)}.
\]
Therefore,
\begin{equation}\label{inequality02}
\displaystyle\sum_{k=0}^{\infty}M^{pk}\beta_k\leq C.
\end{equation}
By combining \eqref{inequality01} and \eqref{inequality02} and choosing $\rho$ such that $\rho M^p \leq 1/2$, we obtain
\begin{equation*}
\begin{aligned}
\displaystyle\sum_{k=1}^{\infty}M^{pk}\alpha_k&\leq\displaystyle\sum_{k=1}^{\infty}(\rho M^p)^k+\displaystyle\sum_{k=1}^{\infty}\sum_{i=0}^{k-1}\rho^{k-i}M^{p(k-i)}\beta_iM^{pi}\\
&\leq\displaystyle\sum_{k=1}^{\infty}2^{-k}+\left(\displaystyle\sum_{i=0}^{\infty}M^{pi}\beta_i\right)\left(\displaystyle\sum_{j=1}^{\infty}(\rho M^p)^j\right)\\
&\leq \displaystyle\sum_{k=1}^{\infty}2^{-k}+C\displaystyle\sum_{j=1}^{\infty}2^{-j}\\
&\leq C.
\end{aligned}
\end{equation*}
This concludes the proof.
\end{proof}
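Strictly speaking, the summability of $(M^{pk}\alpha_k)_k$ yields the claimed estimate through the following standard observation. Setting $\theta(x,t)\coloneqq\inf\{M>0:(x,t)\in G_{M}(u,Q_{8\sqrt{d}})\}$, we have $\{(x,t)\in K_1:\theta(x,t)>M^{k}\}\subset A_{M^{k}}(u,Q_{8\sqrt{d}})\cap K_1$, whence
\[
\int_{K_1}\theta^{p}\,dxdt\leq M^{p}|K_1|+\sum_{k=1}^{\infty}M^{p(k+1)}\alpha_{k}\leq C;
\]
since $u$ is touched from above and below by paraboloids of opening $\theta(x,t)$ at almost every $(x,t)$, one obtains $|D^{2}u|+|u_t|\leq C\theta$ a.e., and the $W^{2,1;p}$-estimate follows.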
Finally, we detail the proof of Theorem \ref{theorem01}.
\begin{proof}[Proof of Theorem \ref{theorem01}]
We split the proof into two steps. \\
\textbf{Step 1.}
First, by a reduction argument, we show that it is enough to prove the result for $L^p$-viscosity solutions to \eqref{eq_main}.
Let $u$ be an $L^p$-viscosity solution to \eqref{equation03}. By \cite[Proposition 3.2]{cra_ko_swI}, $u$ is parabolically twice differentiable a.e.\ and its pointwise derivatives satisfy \eqref{equation03} in $Q_1$. Define
\[
g(x,t) := u_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}} [ - {\operatorname{Tr}}(A_{\alpha, \beta}(x,t)D^{2} u)].
\]
It is easy to see that
\[
|g(x,t)| \leq |f(x,t)| + \sup_{\alpha \in {\cal A}} \sup_{\beta \in {\cal B}}|{\bf b}_{\alpha,\beta}(x,t)||Du(x,t)|.
\]
According to Theorem 7.3 in \cite{cra_ko_swI}, we have that $Du \in L^p(Q_1)$ with estimates. Furthermore, by Remark 7.7 in \cite{cra_ko_swI}, we obtain
\[
\|Du\|_{L^{p}(Q_{1/2})} \leq C\left(\|u \|_{L^{\infty}(Q_1)} + \| f\|_{L^p(Q_1)}\right),
\]
for some constant $C>0$. Since ${\bf b}_{\alpha,\beta}$ satisfies A\ref{assumption_vectorb} and $f\in L^p(Q_1)$, we have that $g \in L^p_{loc}({Q_1})$, with $p>d+1$. It follows from \cite[Proposition 4.1]{cra_ko_swI} that $u$ is an $L^p$-viscosity solution to
\begin{equation}\label{eq_wgrad}
u_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}} [ - {\operatorname{Tr}}(A_{\alpha, \beta}(x,t)D^{2} u)] = g(x,t) \;\; \text{ in } \;\; Q_1.
\end{equation}
Therefore, if Theorem \ref{theorem01} holds for $L^p$-viscosity solutions of \eqref{eq_wgrad}, we can conclude the proof.
\textbf{Step 2.}
Now, consider the equation
\begin{equation}\label{eq_wg}
u_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}} [ - {\operatorname{Tr}}(A_{\alpha, \beta}(x,t)D^{2} u)] = g(x,t) \;\; \text{ in } \;\; Q_1.
\end{equation}
Let $g_j \in {\mathcal C}(\overline{Q}_1)\cap L^p(Q_1)$ and $u_j$ be such that
\[
\|g_j - g\|_{L^p(Q_1)} \rightarrow 0 \;\text{ as } \; j\rightarrow \infty,
\]
and
\begin{equation*}
\left\{
\begin{array}{rcl}
(u_j)_t + \sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}} [ - {\operatorname{Tr}}(A_{\alpha, \beta}(x,t)D^{2} u_j)] & = g_j(x,t) & \text{ in } \;\; Q_1 \\
u_j(x,t) & = u(x,t) & \mbox{ on } \;\; \partial_p Q_1.
\end{array}
\right.
\end{equation*}
By Proposition \ref{prop_sob} we have that
\[
\|u_j\|_{W^{2,1;p}(Q_{1/2})} \leq C\left( \|u_j\|_{L^\infty(Q_1)} + \|g_j\|_{L^p(Q_1)} \right).
\]
By using \cite[Proposition 2.6]{cra_ko_swI} and Sobolev embeddings (see for instance \cite{LSU}), we obtain that, up to a subsequence if necessary, $u_j \rightarrow \bar{u}$ in ${\mathcal C}(\overline{Q}_1)$. Moreover, $u_j$ converges weakly to $\bar{u}$ in $W_{loc}^{2,1;p}(Q_1)$. By stability results, $\bar{u}$ is an $L^{p}$-viscosity solution to \eqref{eq_wg}; see \cite{cra_ko_swI}, \cite{imbersil}. In addition,
\[
\|\bar{u}\|_{W^{2,1;p}(Q_{1/2})} \leq C\left( \|\bar{u}\|_{L^\infty(Q_1)} + \|g\|_{L^p(Q_1)} \right).
\]
Compatibility on the parabolic boundary and the maximum principle \cite[Lemma 6.2]{cra_ko_swI} guarantee that $\bar{u} = u$. This finishes the proof.
\end{proof}
\begin{Remark}
An important development concerning Sobolev regularity of solutions to fully nonlinear equations in the elliptic setting was pursued in \cite{nik}. In that paper, the author develops a global, up-to-the-boundary estimate in $W^{2,p}$. We believe a similar line of argument could also be developed in the parabolic setting, leading to global regularity in the context of the Isaacs model as well.
\end{Remark}
\begin{Remark}
We believe that under further conditions on $f$, namely $f \in \operatorname{BMO}$, it would be possible to prove that $D^2u$ and $u_t$ are in $\operatorname{BMO}$, locally. We refer to \cite{tei_pim} for the elliptic case.
\end{Remark}
\section{Regularity in $\mathcal{C}^{1,\operatorname{Log-Lip}}$ spaces}
This section is devoted to proving the parabolic $\mathcal{C}^{1,\operatorname{Log-Lip}}(Q_1)$ interior regularity estimates for solutions to \eqref{eq_main}. In order to prove this result, we first establish a second approximation lemma, which unlocks the geometric argument.
\begin{Proposition}[Second approximation lemma]\label{third approximation}
Let $u\in\mathcal{C}(Q_1)$ be an $L^p$-viscosity solution to \eqref{eq_main}. Assume A\ref{assumption1} and A\ref{assumption4} are in force. Given $\delta>0$, there exists $\varepsilon_2>0$ such that, if
\[
\|f \|_{\operatorname{BMO}(Q_1)} \leq \varepsilon_2,
\]
then we can find $h \in {\cal C}^{2,\bar{\gamma}}(Q_{3/4})$, for some $0<\bar{\gamma}<1$, satisfying
\begin{equation*}
\left \{
\begin{array}{ll}
\displaystyle h_t + \inf_{\beta \in {\cal B}} [ - {\operatorname{Tr}}(\bar{A}_{\beta}(0,0) D^{2}h)] = 0 & \text{in} \quad Q_{ 3/4}, \\
h = u & \text{on} \quad {\partial}Q_{ 3/4},
\end{array}
\right.
\end{equation*}
such that
\[
\|u-h\|_{L^\infty(Q_{3/4})}\leq\delta.
\]
Furthermore, $\|h\|_{\mathcal{C}^{2,\bar{\gamma}}(Q_{3/4})}\leq C$, for some universal constant $C>0$.
\end{Proposition}
\begin{proof}
By a contradiction argument, assume that the statement of the proposition is false. Then, we can find $\delta_0 > 0$ and sequences $(A_{\alpha, \beta}^n)_{n\in\mathbb{N}}, (f_n)_{n\in\mathbb{N}}$ and $(u_n)_{n\in\mathbb{N}}$ satisfying
\begin{itemize}
\item [(i)] $|A^n_{\alpha, \beta}(x,t) - \bar{A}_{\beta}(0,0)| + \|f_n\|_{L^p(Q_{3/4})} \leq 1/n;$
\item[(ii)] $(u_n)_t + \displaystyle\sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}}
\left[ - {\operatorname{Tr}}(A^n_{\alpha, \beta}(x, t) D^{2} u_n(x, t))\right]= f_n(x, t)$;
\end{itemize}
however
$$\|u_n-h\|_{L^{\infty}(Q_{3/4})} > \delta_0,$$
for every $h \in {\cal C}^{2, \bar\gamma}(Q_{3/4})$ and every $\bar{\gamma}\in(0,1)$.
Because of (ii), the sequence $(u_n)_{n\in\mathbb{N}}$ is uniformly bounded in $\mathcal{C}^{0,\gamma}$, for some $\gamma\in(0,1)$; see \cite{imbersil}, \cite{KrySaf}. Hence $u_n$ converges to a function $u_{\infty}$ locally uniformly in $Q_1$. By standard stability results for viscosity solutions, we have that
\[
(u_{\infty})_t + \inf_{\beta \in {\cal B}} [ - {\operatorname{Tr}}(\bar{A}_{\beta}(0, 0) D^{2} u_{\infty})] = 0;
\]
see \cite{cra_ko_swI}, \cite{imbersil}.
Since the Bellman operator is convex, Evans-Krylov regularity theory ensures that $u_{\infty}\in\mathcal{C}^{2, \bar\gamma}(Q_{3/4})$, for some $\bar{\gamma}\in(0,1)$, and that $\|u_{\infty}\|_{\mathcal{C}^{2,\bar\gamma}(Q_{3/4})}\leq C$, with $C>0$ a universal constant; see \cite{krylov1}, \cite{krylov2}. Setting $h=u_{\infty}$, we obtain a contradiction for $n$ sufficiently large.
\end{proof}
The Approximation Lemma provides a tangential path connecting the Bellman parabolic model with our problem of interest. The next Proposition ensures the existence of an approximating quadratic polynomial, which is the key for the proof of $\mathcal{C}^{1,\operatorname{Log-Lip}}$-estimates. For simplicity, when the point is the origin, we denote $\langle f \rangle_r$ instead of $\langle f\rangle_{(0, 0), r}$.
\begin{Proposition}\label{induction1}
Let $u\in\mathcal{C}(Q_1)$ be an $L^p$-viscosity solution to \eqref{eq_main}. Assume A\ref{assumption1} and A\ref{assumption4} hold. Then, there exists $ \varepsilon_2>0$ such that if
\[
\|f \|_{\operatorname{BMO}(Q_1)} \leq \varepsilon_2,
\]
one can find $0<\rho\ll 1$ and a sequence of second order polynomials $(P_n)_{n\in\mathbb{N}}$ of the form
\[
P_n(x,t)\coloneqq a_n+b_n\cdot x+c_n \,t+\frac{1}{2}x^{t}d_nx
\]
satisfying:
\[
c_n+\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)d_n)\right]=\langle f \rangle_1,
\]
\[
\displaystyle\sup_{Q_{\rho^n}}|u(x,t)-P_n(x,t)|\leq\rho^{2n}
\]
and
\begin{equation}\label{condition coefficients}
|a_{n-1}-a_n|+\rho^{n-1}|b_{n-1}-b_n|+\rho^{2(n-1)}(|c_{n-1}-c_n|+|d_{n-1}-d_n|)\leq C\rho^{2(n-1)},
\end{equation}
for every $n\geq0.$
\end{Proposition}
\begin{proof}
First, we may assume $\|u\|_{L^{\infty}(Q_1)}\leq1/2$, by a reduction argument. We argue by induction in $n \geq0$. We present the proof in four steps.
\textbf{Step 1}
Define
\[
P_{-1}(x,t)=P_0(x,t)=\frac{1}{2}x^tQx,
\]
where $Q$ is such that
\[
\displaystyle\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)Q)]=\langle f \rangle_1.
\]
The case $n=0$ is obviously satisfied. Suppose the induction hypotheses have been established for $n=1, \dots, k$, for some $k\in\mathbb{N}$. Let us show that the case $n=k+1$ also holds true. Define an auxiliary function $v_k:Q_1\rightarrow\mathbb{R}$ as
\[
v_k(x,t)\coloneqq\frac{(u-P_k)(\rho^kx,\rho^{2k}t)}{\rho^{2k}}.
\]
Observe that $v_k$ solves the equation
\begin{equation*}\label{equation_vk}
(v_k)_{t}+\left(\sup_{\alpha \in {\cal A}} \inf_{\beta \in {\cal B}}
\left[ - {\operatorname{Tr}}(A_{\alpha, \beta}(\rho^kx,\rho^{2k}t) (D^{2}v_k+d_k))\right]+c_k\right)=f_k(x,t) \ \ \mbox{in} \ \ Q_1,
\end{equation*}
where $f_k(x,t)=f(\rho^kx,\rho^{2k}t).$
\textbf{Step 2}
By the induction hypothesis we conclude that $|v_k|\leq1$. Also, from assumption A\ref{assumption4}, notice that
\[
|A_{\alpha,\beta}(\rho^kx,\rho^{2k}t)-\bar{A}_{\beta}(0,0)|\leq\varepsilon_2.
\]
Moreover,
\[
\begin{array}{rcl}
\displaystyle \Xint-_{Q_r}|f_k(x,t) - \langle f_k\rangle_r|dx dt & = &
\dfrac{1}{|Q_{r\rho^k}|}\displaystyle \int_{Q_{r\rho^k}}|f(y, s) - \langle f \rangle_{r\rho^k}| dyds
\\
&\leq &\displaystyle\sup_{0<r\leq1}\displaystyle \Xint-_{Q_r}|f(x,t) - \langle f\rangle_r|dx dt\\
& = & \|f\|_{\operatorname{BMO}(Q_1)} \\
& \leq & \varepsilon_2.
\end{array}
\]
Observe that, if we have $v\in\mathcal{C}(Q_1)$ a viscosity solution to
\[
v_t+\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)D^2v)\right]=0,
\]
then, by applying the parabolic Evans-Krylov regularity theory we obtain that $v\in\mathcal{C}^{2,\bar\gamma}_{loc}(Q_1)$, for some $\bar\gamma\in(0,1)$; see \cite{krylov1}, \cite{krylov2}. Furthermore, the following estimate holds
\[
\|v\|_{\mathcal{C}^{2,\bar\gamma}(Q_{1/2})}\leq C_1,
\]
with $C_1>0$ a universal constant.
On the other hand, from the induction hypothesis, we have
\[
c_k+\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)d_k)\right]=\langle f\rangle_1.
\]
Therefore, it follows that solutions to
\[
v_t+c_k+\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)(D^2v+d_k))\right]=\langle f\rangle_1
\]
are of class $\mathcal{C}^{2,\bar\gamma}_{loc}(Q_1)$, for some $\bar\gamma\in(0,1)$, with estimate
\[
\|v\|_{\mathcal{C}^{2,\bar\gamma}(Q_{1/2})}\leq C=C(\langle f\rangle_1, C_1).
\]
Indeed, if we define the operator
\[
G(M)\coloneqq\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)(M+d_k))\right]-\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)d_k)\right],
\]
we obtain that $v$ solves
\begin{equation*}
\begin{aligned}
v_t+G(D^2v)=&\;v_t+c_k+\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)(D^2v+d_k))\right]\\
&-c_k-\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)d_k)\right]\\
=&\;\langle f\rangle_1-\langle f\rangle_1\\
=&\;0.
\end{aligned}
\end{equation*}
Moreover, since the Bellman operator is uniformly elliptic and convex, it follows that $G:\mathcal{S}(d)\rightarrow\mathbb{R}$ is also a uniformly elliptic and convex operator.
\textbf{Step 3}
As a consequence of Step 2, we have that Proposition \ref{third approximation} holds true for $v_k$. Hence, we can find a function $h\in\mathcal{C}^{2,\bar\gamma}_{loc}(Q_1)$, for some $\bar\gamma\in(0,1)$, satisfying
\begin{equation}\label{equationh1}
h_t+c_k+\displaystyle\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)(D^2h+d_k))]=\langle f\rangle_1 \ \ \mbox{in} \ \ Q_1,
\end{equation}
such that
\[
\sup_{Q_\rho}|v_k(x,t)-h(x,t)|\leq\delta
\]
for a given $\delta>0$, to be chosen below.
Define
\[
\bar{P}(x,t)\coloneqq h(0,0)+Dh(0,0)\cdot x+h_t(0,0)\,t+\frac{1}{2}x^t D^2h(0,0)x.
\]
Then, since $h\in\mathcal{C}^{2,\bar\gamma}_{loc}(Q_1),$ we have that
\begin{equation}\label{estimate h}
|D^2h(0,0)|+|h_t(0,0)|+|Dh(0,0)|+|h(0,0)|\leq C
\end{equation}
and
\[
\sup_{Q_\rho}|h(x,t)-\bar{P}(x,t)|\leq C\rho^{2+\bar{\gamma}}.
\]
Therefore, from the triangle inequality, it follows that
\begin{equation*}\label{triangular ine}
\begin{aligned}
\displaystyle\sup_{Q_\rho}|v_k(x,t)-\bar{P}(x,t)|&\leq \displaystyle\sup_{Q_\rho}|v_k(x,t)-h(x,t)|+\displaystyle\sup_{Q_\rho}|h(x,t)-\bar{P}(x,t)|\\
&\leq\delta+C\rho^{2+\bar{\gamma}}.
\end{aligned}
\end{equation*}
In the sequel, we make the universal choices
\[
\delta\coloneqq\frac{\rho^2}{2} \ \ \ \mbox{and} \ \ \ \rho\coloneqq\left(\frac{1}{2C}\right)^{1/\bar{\gamma}}
\]
to obtain
\begin{equation}\label{k+1-step1}
\displaystyle\sup_{Q_\rho}|v_k(x,t)-\bar{P}(x,t)|\leq\rho^2.
\end{equation}
Setting
\[
P_{k+1}(x,t)\coloneqq P_k(x,t)+\rho^{2k}\bar{P}(\rho^{-k}x,\rho^{-2k}t)
\]
we conclude from \eqref{k+1-step1} that
\[
\displaystyle\sup_{Q_{\rho^{k+1}}}|u(x,t)-P_{k+1}(x,t)|\leq\rho^{2(k+1)}.
\]
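Indeed, unwinding the scaling in the definitions of $v_k$ and $P_{k+1}$ (a short computation recorded here for the reader's convenience), write $(x,t)=(\rho^{k}y,\rho^{2k}s)$ with $(y,s)\in Q_{\rho}$; then

```latex
\[
u(x,t)-P_{k+1}(x,t)
=(u-P_k)(\rho^{k}y,\rho^{2k}s)-\rho^{2k}\bar{P}(y,s)
=\rho^{2k}\bigl[v_k(y,s)-\bar{P}(y,s)\bigr],
\]
```

so that $\displaystyle\sup_{Q_{\rho^{k+1}}}|u-P_{k+1}|=\rho^{2k}\sup_{Q_{\rho}}|v_k-\bar{P}|\leq\rho^{2k}\cdot\rho^{2}=\rho^{2(k+1)}$.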
Note that the choice of $\rho$ fixes $\delta$, which in turn determines the value of $\varepsilon_2$.
\textbf{Step 4}
From the definition of $P_{k+1}$ we have that $c_{k+1}=c_k+h_t(0,0)$ and $d_{k+1}=d_k+D^2h(0,0);$ therefore from \eqref{equationh1}, we have
\[
c_{k+1}+\displaystyle\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(\bar{A}_{\beta}(0,0)d_{k+1})]=\langle f\rangle_1.
\]
To conclude the $(k+1)$-th step of the induction, note that, since $a_{k+1}=a_k+\rho^{2k}h(0,0)$ and $b_{k+1}=b_k+\rho^k Dh(0,0),$ from \eqref{estimate h} we obtain that
\[
|a_{k+1}-a_k|+\rho^{k}|b_{k+1}-b_k|+\rho^{2k}(|c_{k+1}-c_k|+|d_{k+1}-d_k|)\leq C\rho^{2k}.
\]
The proof of the proposition is now complete.
\end{proof}
Finally, we are able to prove Theorem \ref{theorem02}, which we detail below.
\begin{proof}[Proof of Theorem \ref{theorem02}]
Without loss of generality, consider $(x_0,t_0)=(0,0).$ First, it follows from \eqref{condition coefficients} that the sequences $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ converge to $u(0,0)$ and $Du(0,0)$, respectively. Moreover,
\[
|a_n-u(0,0)|\leq C\rho^{2n} \ \ \mbox{and} \ \ |b_n-Du(0,0)|\leq C\rho^n.
\]
Furthermore, the estimate in \eqref{condition coefficients} yields
\[
|c_n|\leq\displaystyle\sum_{j=1}^{n}|c_j-c_{j-1}|\leq Cn
\]
and
\[
|d_n|\leq\displaystyle\sum_{j=1}^{n}|d_j-d_{j-1}|\leq Cn.
\]
Let $0<r\ll 1$ and fix $n\in\mathbb{N}$ such that $\rho^{n+1}<r<\rho^n$. Hence, we estimate from the previous computations
\begin{equation*}
\begin{array}{rl}
&\displaystyle\sup_{Q_{r}}\left|u(x,t)-[u(0,0)+Du(0,0)\cdot x]\right|
\\
&\quad\quad\leq \displaystyle\sup_{Q_{\rho^n}}\left|u(x,t)-P_n(x,t)\right|+\displaystyle\sup_{Q_{\rho^n}}\left|P_n(x,t)-[u(0,0)+Du(0,0)\cdot x]\right|\\
&\quad\quad\leq \rho^{2n}+|a_n-u(0,0)| + \displaystyle \rho^n|b_n-Du(0,0)| + \rho^{2n}(|c_n|+|d_n|)
\\
&\quad\quad\leq C\rho^{2n} + C\rho^{2n} + \rho^{2n}|c_n| + \rho^{2n}|d_n|
\\
&\quad\quad\leq Cn \rho^{2n}
\\
&\quad\quad\leq -\frac{2C}{\rho^2\ln\rho} r^2\ln r^{-1}.
\end{array}
\end{equation*}
The last inequality follows from the fact that $r <\rho^n$ implies $n< \frac{\ln r}{\ln \rho}$. This finishes the proof.
\end{proof}
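For the reader's convenience, we record how the last inequality in the proof above follows from the choice $\rho^{n+1}<r<\rho^n$: since $\rho^{2n}=\rho^{-2}\rho^{2(n+1)}<\rho^{-2}r^{2}$ and $n<\ln r/\ln\rho$,

```latex
\[
Cn\rho^{2n}
\leq C\,\frac{\ln r}{\ln\rho}\cdot\frac{r^{2}}{\rho^{2}}
=-\frac{C}{\rho^{2}\ln\rho}\,r^{2}\ln r^{-1}
\leq-\frac{2C}{\rho^{2}\ln\rho}\,r^{2}\ln r^{-1},
\]
```

where the last step uses that $-1/\ln\rho>0$ and $\ln r^{-1}>0$ for $0<r<\rho<1$.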
\begin{Remark}
In general, one of the important facts concerning the study of ${\mathcal C}^{\operatorname{Log-Lip}}$ regularity lies in the following: assume $u \in {\mathcal C}^{\operatorname{Log-Lip}}(Q)$, $Q\Subset Q_1$, and let $\gamma \in (0,1)$. Notice that
\[
\lim_{s\to 0^+}-s^{1-\gammamma}\ln s = 0.
\]
Thus,
\[
s^{1-\gamma}\ln(1/s) \leq C = C(\gamma), \;\;\mbox{ for } \;\; s <1/2.
\]
This implies that
\begin{equation*}
\begin{aligned}
|u(x_1, t_1) - u(x_2,t_2)| &\leq C \operatorname{dist}((x_1,t_1),(x_2,t_2))\ln\left(\frac{1}{\operatorname{dist}((x_1,t_1),(x_2,t_2))}\right)\\
&\leq C(\gamma)\operatorname{dist}((x_1,t_1),(x_2,t_2))^\gamma.
\end{aligned}
\end{equation*}
Hence $u \in {\mathcal C}^{0,\gamma}(Q)$. Therefore, once regularity in ${\mathcal C}^{\operatorname{Log-Lip}}$ is available, it is possible to conclude that $u \in {\mathcal C}^{0,\gamma}(Q)$, for every $\gamma \in (0,1)$. However, functions in ${\mathcal C}^{\operatorname{Log-Lip}}$ spaces may fail to be Lipschitz continuous. In fact, the logarithmic Lipschitz modulus of continuity $\omega(s)\coloneqq s\ln(1/s)$ is not itself Lipschitz continuous.
\end{Remark}
\section{Improved ${\mathcal C}^{2,\gamma}$-estimates at the origin}
In this last section, we state and prove $\mathcal{C}^{2,\gamma}$-estimates at the origin. As in the previous section, this is achieved through approximation methods. In the next result we prove the existence of a sequence of second-order polynomials which approximates $L^p$-viscosity solutions to \eqref{eq_main}.
\begin{Proposition}\label{induction2}
Let $u\in\mathcal{C}(Q_1)$ be a normalized $L^p$-viscosity solution to \eqref{eq_main}. Assume that A\ref{assumption1} and A\ref{assumption5} hold. There exists a sequence of polynomials $(P_n)_{n\in\mathbb{N}}$ of the form
\[
P_n(x,t)\coloneqq a_n+b_n\cdot x+c_n\,t+\frac{1}{2}x^td_n\,x
\]
such that
\begin{equation}\label{estimate1}
\displaystyle\sup_{Q_{\rho^n}}|u(x,t)-P_n(x,t)|\leq \rho^{n(2+\gamma)}
\end{equation}
for some $0<\gamma<1$, and
\[
c_n+\displaystyle\inf_{\beta \in {\cal B}}(-\operatorname{Tr}(\bar A_{\beta}(0,0)d_n))=0,
\]
for every $n\geq0$, where
\begin{equation}\label{estimate2}
\footnotesize{|a_n-a_{n-1}|+\rho^{n-1}|b_n-b_{n-1}|+\rho^{2(n-1)}(|c_n-c_{n-1}|+|d_n-d_{n-1}|)\leq C\rho^{(n-1)(2+\gamma)}}
\end{equation}
and the constants $C>0$ and $0<\rho\ll1$ are universal.
\end{Proposition}
\begin{proof}
Without loss of generality, we assume that $\|u\|_{L^\infty(Q_1)}\leq1$. As in Proposition \ref{induction1}, we prove this statement by induction on $n\geq 0$. We split the proof into four steps.
\textbf{Step 1}
Let us define
\[
P_{-1}(x,t)\equiv P_0(x,t)\equiv0.
\]
Hence, the case $n=0$ is obvious. Suppose the case $n=k$ has been verified, for some $k\in\mathbb{N}$. Let us prove the statement for the case $n=k+1$. For that, we introduce the auxiliary function
\[
v_k(x,t)\coloneqq\frac{(u-P_k)(\rho^kx, \rho^{2k}t)}{\rho^{k(2+\gamma)}} \ \ \mbox{in} \ \ Q_1
\]
which solves the equation
\[
(v_k)_t+\frac{1}{\rho^{k\gamma}}\left(\displaystyle\sup_{\alpha \in {\cal A}}\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(A_{\alpha,\beta}(\rho^kx,\rho^{2k}t)(\rho^{k\gamma}D^2 v_k+d_k))]+c_k\right)=f_k,
\]
where $f_k(x,t):= \rho^{-k\gamma}f(\rho^kx, \rho^{2k}t)$.
\textbf{Step 2}
In order to approximate $v_k$ by a suitable function $h\in \mathcal{C}^{2,\bar{\gamma}}_{loc}(Q_1)$, set, for $M\in\mathcal{S}(d)$,
\[
M_k\coloneqq \rho^{k\gamma}M+d_k.
\]
From assumption A\ref{assumption5}, it follows that
\begin{equation*}
\begin{aligned}
&\left|\displaystyle\sup_{\alpha \in {\cal A}}\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(A_{\alpha,\beta}(\rho^{k}x,\rho^{2k}t)M_k)]-\displaystyle\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(\bar A_{\beta}(0,0)M_k)]\right|
\\
&\quad\quad\leq \displaystyle\sup_{\alpha \in {\cal A}}\sup_{\beta \in {\cal B}}\left|\operatorname{Tr}[(A_{\alpha,\beta}(\rho^k x,\rho^{2k}t)-\bar A_{\beta}(0,0))M_k]\right|\\
&\quad\quad\leq \displaystyle\sup_{(x,t)\in Q_1}\sup_{\alpha \in {\cal A}}\sup_{\beta \in {\cal B}}|A_{\alpha, \beta}(\rho^kx,\rho^{2k}t)-\bar A_{\beta}(0,0)|\|\rho^{k\gamma}M+d_k\|
\\
&\quad\quad\leq \tilde C(d)\varepsilon_3\rho^{\gamma k}(\|\rho^{k\gamma}M+d_k\|)
\\
&\quad\quad\leq \tilde C(d)\varepsilon_3\rho^{\gamma k}(\|M\|+\|d_k\|).
\end{aligned}
\end{equation*}
From the induction hypothesis and the universal choice of $\rho$, we compute
\begin{equation*}
\|d_k\| \leq \sum_{i=1}^{k}C\rho^{(i-1)\gamma} \leq \frac{C(1-\rho^{k\gamma})}{1-\rho^\gamma} \leq \frac{C}{1-\rho^\gamma}\leq C_0.
\end{equation*}
Then, we find that
\begin{equation}\label{assumption scaling}
\begin{aligned}
&\frac{1}{\rho^{\gamma k}}\left|\displaystyle\sup_{\alpha \in {\cal A}}\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(A_{\alpha,\beta}(\rho^{k}x,\rho^{2k}t)M_k)]-\displaystyle\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(\bar A_{\beta}(0,0)M_k)]\right|
\\
&\quad\quad\leq \;C_0\tilde C(d)\varepsilon_3(1+\|M\|).
\end{aligned}
\end{equation}
Also, observe that
\begin{equation*}
\begin{aligned}
\|f_k\|^{p}_{L^p(Q_1)}&=\frac{1}{\rho^{k\gamma p}}\displaystyle\int_{Q_1}|f(\rho^k x, \rho^{2k}t)|^p dxdt\\
&=\frac{1}{\rho^{k\gamma p}}\displaystyle\Xint-_{Q_{\rho^k}}|f(y,s)|^p dyds\\
&\leq \varepsilon_3^p.
\end{aligned}
\end{equation*}
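The middle equality above uses the parabolic change of variables $(y,s)=(\rho^{k}x,\rho^{2k}t)$; assuming the cylinders are normalized so that $|Q_r|=r^{d+2}|Q_1|$ (and absorbing the harmless factor $|Q_1|$ into the constants), the computation reads

```latex
\[
\int_{Q_1}|f(\rho^{k}x,\rho^{2k}t)|^{p}\,dxdt
=\rho^{-k(d+2)}\int_{Q_{\rho^{k}}}|f(y,s)|^{p}\,dyds
=|Q_1|\,\Xint-_{Q_{\rho^{k}}}|f(y,s)|^{p}\,dyds.
\]
```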
Combining the estimate in \eqref{assumption scaling} and standard stability results, given $\delta>0,$ there exists $\varepsilon_3=\varepsilon_3(\delta)$ that ensures the existence of $h\in\mathcal{C}(Q_{3/4})$ satisfying the equation
\begin{equation}\label{h equation}
\left \{
\begin{array}{ll}
h_t + \frac{1}{\rho^{k\gamma}}\displaystyle\inf_{\beta \in {\cal B}} [ - {\operatorname{Tr}}\left(\bar A_{\beta}(0,0)(\rho^{k\gamma}D^{2}h+d_k)\right)+c_k] = 0 & \text{in} \quad Q_{ 3/4}, \\
h=v_k & \text{on} \quad {\partial}Q_{ 3/4},
\end{array}
\right.
\end{equation}
such that
\[
\|v_k-h\|_{L^{\infty}(Q_{3/4})}\leq\delta.
\]
Now, let us prove that $h\in\mathcal{C}^{2,\bar{\gamma}}_{loc}(Q_1)$. In fact, first observe that we can rewrite the equation in \eqref{h equation} in the following way
\begin{equation}\label{equation h_2}
h_t+\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}\left(\bar A_{\beta}(0,0)\left(D^2h+\frac{d_k}{\rho^{k\gamma}}\right)\right)+\frac{c_k}{\rho^{k\gamma}}\right]=0.
\end{equation}
Because of the parabolic Evans-Krylov regularity theory, we know that viscosity solutions to
\[
v_t+\displaystyle\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(\bar A_{\beta}(0,0)D^2v)]=0 \ \ \mbox{in} \ \ Q_{3/4}
\]
are locally of class $\mathcal{C}^{2,\bar{\gamma}}(Q_{3/4})$, for some $\bar\gamma\in(0,1)$, with estimate
\[
\|v\|_{\mathcal{C}^{2,\bar{\gamma}}(Q_{1/2})}\leq C,
\]
for some universal positive constant $C$.
Moreover, from the induction hypothesis we have
\begin{equation*}\label{equaion h_1}
\frac{c_k}{\rho^{k\gamma}}+\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}\left(\bar A_{\beta}(0,0)\frac{d_k}{\rho^{k\gamma}}\right)\right]=\frac{1}{\rho^{k\gamma}}\left[c_k+\displaystyle\inf_{\beta \in {\cal B}}[-\operatorname{Tr}(\bar A_{\beta}(0,0)d_k)]\right]=0.
\end{equation*}
Therefore, viscosity solutions to
\begin{equation*}
v_t+\displaystyle\inf_{\beta \in {\cal B}}\left[-\operatorname{Tr}\left(\bar A_{\beta}(0,0)\left(D^2v+\frac{d_k}{\rho^{k\gamma}}\right)\right)+\frac{c_k}{\rho^{k\gamma}}\right]=0
\end{equation*}
are locally of class $\mathcal{C}^{2,\bar{\gamma}}(Q_{3/4})$, for some $\bar\gamma\in(0,1)$, with estimate
\[
\|v\|_{\mathcal{C}^{2,\bar{\gamma}}(Q_{1/2})}\leq C,
\]
with $C>0$ a universal constant. Combining this fact with \eqref{equation h_2}, we obtain $h\in\mathcal{C}^{2,\bar{\gamma}}_{loc}(Q_1)$, for some $0<\bar\gamma<1$, such that
$\|h\|_{\mathcal{C}^{2,\bar{\gamma}}(Q_{1/2})}\leq C$.
Hence,
\small{
\[
\sup_{Q_{\rho}}\left|h(x,t)-\left[h(0,0)+Dh(0,0)\cdot x+h_t(0,0)t+\frac{1}{2}x^tD^{2}h(0,0)x\right]\right|\leq C \rho^{2+\bar\gamma}.
\]
}
\textbf{Step 3}
Setting
\[
\bar P(x,t)=h(0,0)+Dh(0,0)\cdot x+h_t(0,0)\,t+\frac{1}{2}x^tD^2h(0,0)x,
\]
from the triangle inequality we have
\begin{equation*}
\begin{aligned}
\sup_{Q_{\rho}}|v_k(x,t)-\bar P(x,t)|&\leq \sup_{Q_{\rho}}|v_k(x,t)-h(x,t)|+\sup_{Q_{\rho}}|h(x,t)-\bar P(x,t)|\\
&\leq \delta + C\rho^{2+\bar\gamma}.
\end{aligned}
\end{equation*}
For $0< \gamma < \bar{\gamma}$ fixed, we make the universal choices
\[
\rho:= \left(\dfrac{1}{2C}\right)^{\frac{1}{\bar{\gamma}-\gamma}} \;\;\; \text{and}\;\;\; \delta\coloneqq\frac{\rho^{2+\gamma}}{2}
\]
to obtain
\begin{equation}\label{estimatev_k}
\sup_{Q_{\rho}}|v_k(x,t)-\bar P(x,t)|\leq\rho^{2+\gamma}.
\end{equation}
Observe that the universal choice of $\delta$ determines $\varepsilon_3$.
\textbf{Step 4}
Set
\[
P_{k+1}(x,t)\coloneqq P_k(x,t)+\rho^{k(2+\gamma)}\bar P(\rho^{-k}x,\rho^{-2k}t).
\]
The estimate in \eqref{estimatev_k} and the definition of $P_{k+1}$ lead to
\[
\displaystyle\sup_{Q_{\rho^{k+1}}} |u(x,t)-P_{k+1}(x,t)|\leq\rho^{(k+1)(2+\gamma)}.
\]
Furthermore, since $c_{k+1}=c_k+\rho^{k\gamma}h_t(0,0)$ and $d_{k+1}=d_k+\rho^{k\gamma}D^2h(0,0)$, from \eqref{h equation} we obtain that
\[
c_{k+1}+\displaystyle\inf_{\beta \in {\cal B}}(-\operatorname{Tr}(\bar A_{\beta}(0,0)\,d_{k+1}))=0.
\]
Because of the $\mathcal{C}^{2,\bar{\gamma}}$-estimates for $h$ we have that
\[
|h(0,0)|+|Dh(0,0)|+|h_t(0,0)|+|D^2h(0,0)|\leq C,
\]
for some universal positive constant $C$. Hence, from the definition of $P_{k+1}$, we can conclude
\[
|a_{k+1}-a_{k}|+\rho^{k}|b_{k+1}-b_{k}|+\rho^{2k}(|c_{k+1}-c_{k}|+|d_{k+1}-d_{k}|)\leq C\rho^{k(2+\gamma)}.
\]
The proof is now complete.
\end{proof}
In what follows, we prove the $\mathcal{C}^{2,\gammamma}$-regularity at the origin.
\begin{proof}[Proof of Theorem \ref{theorem03}]
From Proposition \ref{induction2}, we can find a polynomial $\bar{P}$ of the form
\[
\bar{P}(x,t)\coloneqq \bar{a}+\bar{b}\cdot x+\bar{c} \, t+\frac{1}{2}x^t\bar{d}x
\]
such that $P_n \to \bar{P}$ uniformly in $Q_1$. The estimates in \eqref{estimate2} imply that there exists a constant $C >0$ such that
\[
|D\bar{P}(0,0)| + \|D^2\bar{P}(0,0)\| \leq C,
\]
with the following estimates:
\[
|a_n-\bar{a}|\leq C\rho^{n(2+\gamma)} \,\,\,\,\, ; \,\,\,\,\, |b_n-\bar{b}|\leq C\rho^{n(1+\gamma)}
\]
\[
|c_n-\bar{c}|\leq C\rho^{n\gamma} \,\,\,\,\, \mbox{and} \,\,\,\,\, |d_n-\bar{d}|\leq C\rho^{n\gamma}.
\]
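These rates follow from \eqref{estimate2} by summing the telescoping differences; we record the computation for $(b_n)_{n\in\mathbb{N}}$, the other coefficients being analogous. Since \eqref{estimate2} gives $|b_{j+1}-b_j|\leq C\rho^{j(1+\gamma)}$,

```latex
\[
|b_n-\bar{b}|
\leq\sum_{j\geq n}|b_{j+1}-b_j|
\leq C\sum_{j\geq n}\rho^{j(1+\gamma)}
=\frac{C}{1-\rho^{1+\gamma}}\,\rho^{n(1+\gamma)},
\]
```

and the factor $(1-\rho^{1+\gamma})^{-1}$ is absorbed into the universal constant.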
To conclude the proof, let $r$ denote the universal constant $\rho$ from Proposition \ref{induction2}. Given $0<\rho<r$, take the first integer $n\in\mathbb{N}$ satisfying $r^{n+1}<\rho\leq r^n$. Therefore, we can estimate
\begin{equation*}
\betagin{array}{rcl}
& &\displaystyle\sup_{Q_\rho}\left|u(x,t)-\bar{P}(x,t)\right|
\\
& &\quad \leq \displaystyle \sup_{Q_{r^n}}|u(x,t)-P_n(x,t)|+\sup_{Q_{r^n}}|P_n(x,t) - \bar{P}(x,t)|
\\
& &\quad\leq\displaystyle \frac{C}{r}r^{(n+1)(2+\gamma)}
\\
& &\quad\leq \displaystyle C\rho^{2+\gamma}.
\end{array}
\end{equation*}
This finishes the proof of the theorem.
\end{proof}
\begin{Remark}\label{last_rem}
Even though we only prove pointwise estimates at the origin, Theorem \ref{theorem03} holds true at any point that satisfies assumption A\ref{assumption5}. Also, notice that we recover the classical ${\mathcal C}^{2,\gamma}$ theory for Bellman equations when assumption A\ref{assumption5} is satisfied at every point in $Q_1$, since we would have $A_{\alpha,\beta}(x,t) = \bar{A}_{\beta}(x,t)$ for every $(x,t) \in Q_1$ and $\bar{A}_\beta$ is of class ${\mathcal C}^\gamma$.
\end{Remark}
\noindent{\bf Acknowledgements:} PA is partially supported by FAPERJ (Grant \# E-26/ 201.609/2018) and CAPES - Brazil. GR is also supported by CAPES - Brazil. MS is supported by the CONACyT-Mexico. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brazil (CAPES) - Finance Code 001.
\noindent\textsc{P\^edra D. S. Andrade}\\
Department of Mathematics\\
Pontifical Catholic University of Rio de Janeiro -- PUC-Rio\\
22451-900, G\'avea, Rio de Janeiro-RJ, Brazil\\
\noindent\texttt{[email protected]}
\noindent\textsc{Giane C. Rampasso}\\
Department of Mathematics\\
University of Campinas -- IMECC -- Unicamp\\
13083-859, Cidade Universitária, Campinas-SP, Brazil\\
\noindent\texttt{[email protected]}
\noindent\textsc{Makson S. Santos}\\
Centro de Investigación en Matemáticas - CIMAT\\
36023, Valenciana, Guanajuato, Gto, Mexico.\\
\noindent\texttt{[email protected]}
\end{document} |
\begin{document}
\title{\TheTitle}
\begin{abstract}
In this article a special class of nonlinear optimal control problems
involving a bilinear term in the boundary condition is studied. These kinds of problems
arise for instance in the identification of an unknown space-dependent Robin
coefficient from a given measurement of the state, or when the Robin
coefficient can be controlled in order to reach a desired state.
To this end, necessary and sufficient optimality conditions are derived and
several discretization approaches for the numerical solution
of the optimal control problem are investigated.
Considered are both a full discretization and the postprocessing approach,
meaning that we compute an improved control by a pointwise evaluation of the
first-order optimality condition. For both approaches finite element error
estimates are shown and the validity of these results is confirmed by numerical experiments.
\keywords{Bilinear boundary control, identification of Robin
parameter,
finite element error estimates,
postprocessing approach}
\AMS{35J05, 35Q93, 49J20, 65N21, 65N30}
\end{abstract}
\section{Introduction}
This paper is concerned with bilinear boundary control problems of the form
\begin{equation*}
\frac12 \|y-y_d\|_{L^2(\Omega)}^2 + \frac\alpha2\|u\|_{L^2(\Gamma)}^2 \to \min!
\end{equation*}
subject to
\begin{equation*}
\begin{aligned}
-\Delta y + y &= f &&\mbox{in}\ \Omega,\\
\partial_n y + u\,y &= g && \mbox{on}\ \Gamma,
\end{aligned}
\end{equation*}
\begin{equation*}
u\in U_{ad}:=\{v\in L^2(\Gamma)\colon u_a \le u \le u_b\ \mbox{a.\,e.\ on}\ \Gamma\},
end{equation*}
where $\Omega\subset\mathbb R^n$, $n\in\{2,3\}$, is a bounded domain, $\alpha>0$ is the regularization parameter, $y_d\in L^2(\Omega)$ is a desired state
and $0 \le u_a < u_b$ are the control bounds.
As an application of bilinear boundary control problems
we mention the identification of an unknown Robin coefficient
from a given measurement $y_d$ of the state quantity.
This is for instance of interest in the modeling of stem cell division processes \cite{FRT16},
where $u$ is the unknown parameter describing the chemical reactions between proteins
from the cell interior and the cell cortex. For further applications,
$u$ can be interpreted as a heat-exchange
coefficient in thermodynamics or as a quantity for corrosion damage
in electrostatics.
There are many publications dealing with the identification of the Robin
coefficient, see for instance \cite{Chaabane1999,Hetmaniok2017,LiuNakamura2017,MohebbiSellier2018}.
Only a few papers use an optimal control approach similar to the one considered in the
present article.
We mention \cite{JL12,HaoThanhLesnic2013}, where the parabolic version of our model problem
is considered.
The authors prove convergence of a finite element approximation but no
convergence rate is established. A similar problem is discussed in
\cite{Gwinner2018}, dealing with the recovery of the Robin parameter in a variational
inequality.
The aim of the present paper is to derive necessary and sufficient optimality conditions
for the optimal control problem and to investigate several numerical approximations
regarding convergence towards a local solution.
This complements a previous contribution of Kr\"oner and Vexler
\cite{KV09} where the distributed control case, meaning that the bilinear term
$u\,y$ appears in the differential equation, is discussed. In that article,
error estimates for the approximate controls in the $L^2(\Omega)$-norm are derived
for several finite element approximations. To this end, the authors approximate the
state and adjoint state by linear finite elements and
derive the convergence rate $1$ for piecewise constant and $3/2$ for piecewise linear
approximations for the control. Moreover, advanced discretization concepts like
the postprocessing approach \cite{MR11} and
the variational discretization \cite{Hin05} are investigated, which allow an improvement
up to a convergence rate of $2$. It is the purpose of the present article to extend these results
to the case of boundary control.
The numerical analysis of boundary control problems
is usually more difficult than for distributed control problems as the adjoint control--to--state
operator maps onto some Sobolev/Lebes\-gue space defined on the boundary.
As a consequence, error estimates for the traces of finite element solutions
have to be proved, more precisely, in the $L^2(\Gamma)$-norm.
Here, we consider two different discretization approaches. The first one is a full
discretization using piecewise linear finite elements for the states and
piecewise constant functions on the boundary for the control approximation.
Under the assumption that the domain has a Lipschitz boundary we show that the
discrete optimal control converges with the optimal rate $1$. To show this result we
exploit the local coercivity of the objective, best-approximation properties
of the control space and suboptimal error estimates for the state and adjoint equation.
In order to obtain a more accurate solution we also investigate the
postprocessing approach
where an improved control is computed by a pointwise application of the
first-order optimality condition to the discrete state
variables. For this approach we have to assume more regularity for the exact
solution and thus, we restrict our considerations to two-dimensional domains
with sufficiently smooth boundary. Under this assumption we show the optimal
convergence rate of $2-\varepsilon$ with arbitrary $\varepsilon>0$
which is the rate one would
also expect in the case of linear quadratic boundary control problems
and smooth solutions \cite{APR13,APW17,MR04} (even with $h^{-\varepsilon}$
replaced by $\lvert\ln h\rvert$, where $h$ is the maximal element diameter of the finite
element mesh).
The proof relies on the non-expansivity of the projection onto the feasible
set as well as sharp error estimates for the state and adjoint state in
$L^2(\Gamma)$. To obtain estimates in these norms superconvergence properties
of the midpoint interpolant, finite element error estimates for the Ritz
projection in $L^2(\Gamma)$ and a supercloseness result between the
midpoint interpolant of the exact and the discrete solution are exploited.
To show the $L^2(\Gamma)$-norm error estimate we will, as we consider smooth solutions,
derive a maximum norm estimate.
To the best of the authors' knowledge, these results are not available in the literature
for problems with Robin boundary conditions. Based on the
ideas from \cite{FR76} we provide the missing proof.
We moreover note that the setting discussed here does not fit into the
well-known framework of the semilinear optimal control problems discussed
e.\,g.\ in \cite{ACT02,CM08,CMT05,KP14}, as these contributions deal with nonlinearities
depending solely on the state variable. However, many techniques can be reused for the problem
considered here. To the best of the authors' knowledge, the only publication in which more
general nonlinearities depending on both the state and the control variable are treated is
\cite{RoeschTroeltzsch1992}. Therein, optimality conditions are discussed,
but no theory on the numerical analysis of approximation methods
for this problem class is available yet. However, we think that the consideration of
bilinear control problems may serve as a starting point for the investigation of a more general
class of nonlinear optimal control problems.
The article is structured as follows. In Section~\ref{sec:state_equation} we
discuss the solvability of the state equation and regularity results for its solution.
In Section~\ref{sec:optimal_control} we analyze the optimal
control problem. In particular, necessary and sufficient optimality conditions
are investigated. Section \ref{sec:fem} is devoted to the finite element
discretization of the state equation, where we show finite element error estimates
required for the numerical analysis of the optimal control problem later.
The discretization of the optimal control problem is considered in
Section~\ref{sec:discrete_optimal_control}.
In particular, we discuss convergence
rates for the numerical solution obtained by a full discretization of the
optimal control problem as well as for an improved control obtained by a
postprocessing step. The latter result requires some auxiliary results that we
discuss in the appendix.
To be more precise, a maximum norm error estimate for the finite element solution
of an elliptic equation with Robin boundary conditions is needed. A proof is
given in Appendix \ref{app:maximum_norm_estimate}.
Moreover, a proof of local error estimates for
the midpoint interpolant and the $L^2(\Gamma)$ projection onto piecewise
constant functions on the boundary is needed.
To the best of the authors' knowledge these results are not available in the
literature in the case of domains with curved boundaries. Thus, we discuss these
auxiliary results in Appendix \ref{app:midpoint}.
Finally, we will compare the theoretical results with numerical experiments
in Section~\ref{sec:experiments}.
\section{Analysis of the state equation}
\label{sec:state_equation}
We consider the boundary value problem
\begin{equation*}
-\Delta y + y = f \ \mbox{in}\ \Omega,\qquad \partial_n y + u\,y=g\ \mbox{on}\ \Gamma,
\end{equation*}
on a bounded Lipschitz domain $\Omega\subset\mathbb R^n$, $n\in\{2,3\}$, with data $f\in L^2(\Omega)$ and $g\in L^2(\Gamma)$.
The corresponding weak formulation reads
\begin{equation}\label{eq:weak_form}
\text{Find}\ y\in H^1(\Omega)\colon\qquad a_u(y,v) = F(v)\qquad\forall v\in H^1(\Omega),
\end{equation}
with
\begin{align*}
a_u(y,v)&:=(\nabla y,\nabla v)_{L^2(\Omega)} + (y,v)_{L^2(\Omega)} + (u\,y,v)_{L^2(\Gamma)},\\
F(v) &:= (f,v)_{L^2(\Omega)} + (g,v)_{L^2(\Gamma)}.
\end{align*}
First, we show an existence and uniqueness result for \eqref{eq:weak_form}.
To this end, we introduce a decomposition of the control into a positive and a negative part $u^+,u^-\in L_+^2(\Gamma):=\{v\in L^2(\Gamma)\colon v\ge 0\mbox{\ a.\,e.\ on}\ \Gamma\}$
such that $u=u^+ - u^-$. The following result relies on the Lax--Milgram lemma; however,
an assumption on the coefficient $u$ is required.
\begin{lemma}\label{lem:lax_milgram}
Assume that $u\in L^2(\Gamma)$ satisfies
\begin{equation}\label{eq:ass_coercivity}
\|u^-\|_{L^2(\Gamma)} < \frac1{c_*^2},
\end{equation}
with the constant $c_*$ from the embedding $\|v\|_{L^4(\Gamma)} \le c_*\|v\|_{H^1(\Omega)}$.
Then, the problem \eqref{eq:weak_form} possesses a unique solution $y\in H^1(\Omega)$,
which satisfies the a priori estimate
\begin{equation*}
\|y\|_{H^1(\Omega)}\le \frac{1}{\gamma_u}\,\left(\|f\|_{H^1(\Omega)^*} + \|g\|_{H^{-1/2}(\Gamma)}\right),
\end{equation*}
with $\gamma_u := 1-c_*^2\,\|u^-\|_{L^2(\Gamma)}>0$.
\end{lemma}
\begin{proof}
The boundedness of $a_u$ follows directly from the Cauchy--Schwarz inequality and
the embedding $H^1(\Omega)\hookrightarrow L^4(\Gamma)$, which imply
\begin{align*}
a_u(y,z) &\le \|y\|_{H^1(\Omega)}\,\|z\|_{H^1(\Omega)}
+ \|u\|_{L^2(\Gamma)}\,\|y\|_{L^4(\Gamma)}\, \|z\|_{L^4(\Gamma)} \\
&\le \left(1+c_*^2\,\|u\|_{L^2(\Gamma)}\right)\,\|y\|_{H^1(\Omega)}\, \|z\|_{H^1(\Omega)}.
\end{align*}
To show the coercivity we take into account the decomposition $u=u^+ - u^-$
and the embedding $H^1(\Omega)\hookrightarrow L^4(\Gamma)$ to get
\begin{align*}
a_u(y,y) & \ge \|y\|_{H^1(\Omega)}^2 - \int_\Gamma u^-\,y^2 \ge \left(1-c_*^2\,\|u^-\|_{L^2(\Gamma)}\right)\,\|y\|_{H^1(\Omega)}^2.
\end{align*}
The assumption \eqref{eq:ass_coercivity} ensures the coercivity.
An application of the Lax--Milgram lemma leads to the desired result.
\end{proof}
Note that $\{v\in L^2(\Gamma)\colon \|v^-\|_{L^2(\Gamma)} < c_*^{-2}\}$
is an open subset of $L^2(\Gamma)$. This is the key idea which allows us to
avoid the two-norm discrepancy for the optimal control problem
as we will see that the reduced objective functional is differentiable with respect
to the $L^2(\Gamma)$-topology. In the following we will hide the dependency of the estimates
on $\|u^-\|_{L^2(\Gamma)}$ and thus $\gamma_u$ in the generic constant as we impose
positive control bounds in the considered optimal control problem.
Later, we will frequently make use of the following Lipschitz estimate.
\begin{lemma}\label{lem:lipschitz_general}
If $u_1,u_2\in L^2(\Gamma)$ satisfy the assumption \eqref{eq:ass_coercivity},
the corresponding states $y_1,y_2\in H^1(\Omega)$ solving
\begin{equation*}
a_{u_i}(y_i,v) = (f_i,v) + (g_i,v)_\Gamma \quad\forall v\in H^1(\Omega),\ i=1,2,
\end{equation*}
fulfill the estimate
\begin{align*}
\|y_1-y_2\|_{H^1(\Omega)} &\le c\,\big(\|u_1-u_2\|_{L^2(\Gamma)}\,\|y_2\|_{H^1(\Omega)} \\
&\quad + \|f_1-f_2\|_{H^1(\Omega)^*}
+ \|g_1-g_2\|_{H^{-1/2}(\Gamma)}\big).
\end{align*}
\end{lemma}
\begin{proof}
Subtracting the variational formulations for $y_1$ and $y_2$ from each other leads to
\begin{align*}
& (\nabla(y_1-y_2),\nabla v)_{L^2(\Omega)} + (y_1-y_2,v)_{L^2(\Omega)} + (u_1\,(y_1-y_2),v)_{L^2(\Gamma)} \\
&\qquad= (f_1-f_2,v)_{L^2(\Omega)} + (g_1-g_2,v)_{L^2(\Gamma)} + ((u_2-u_1)\,y_2,v)_{L^2(\Gamma)}.
\end{align*}
The result follows from Lemma \ref{lem:lax_milgram} and the continuity of
the product mapping from $L^2(\Gamma)\times H^{1/2}(\Gamma)$ to $H^{-1/2}(\Gamma)$, see
\cite[Theorem 1.4.4.2]{Gri85}.
\end{proof}
In the following lemma we collect some regularity results for the solution of \eqref{eq:weak_form}.
\begin{lemma}\label{lem:props_S}
Let $\Omega\subset\mathbb R^n$, $n\in\{2,3\}$, be a bounded Lipschitz
domain. By $y\in H^1(\Omega)$ we denote the solution of \eqref{eq:weak_form}.
The following a priori estimates are valid, under the assumption that the
input data possess the regularity demanded by the right-hand side:
\begin{align*}
\intertext{a) If $r>2n/(1+n)$ and $p > 2$ for $n=2$ and $p\ge 4$ for $n=3$, then}
\|y\|_{H^{3/2}(\Omega)} + \|y\|_{H^1(\Gamma)}
&\le c\left(1+\|u\|_{L^p(\Gamma)}\right)\left( \|f\|_{L^{r}(\Omega)} + \|g\|_{L^2(\Gamma)}
\right).\\
\intertext{b) If $r > n/2,\ s > n-1,$ and $p\ge2$ for $n=2$ and $p > 8/3$ for $n=3$, then}
\|y\|_{C(\overline\Omega)}
&\le c \left(1+\|u\|_{L^{p}(\Gamma)}\right)^2\left(\|f\|_{L^{r}(\Omega)}
+ \|g\|_{L^{s}(\Gamma)}\right). \\
\intertext{c) Furthermore, if $\Omega$ is a convex polygonal/polyhedral
domain, or possesses a boundary which is of class $C^{1,1}$, there holds}
\|y\|_{H^2(\Omega)} &\le c\left(1+\|u\|_{H^{1/2}(\Gamma)}\right)^2\,\left(\|f\|_{L^2(\Omega)} + \|g\|_{H^{1/2}(\Gamma)}\right).
\end{align*}
\end{lemma}
\begin{proof}
a)
In \cite[Theorem 1.12]{Dha12} it is shown that the problem
\begin{equation*}
-\Delta y = F\ \mbox{in}\ \Omega,\qquad \partial_n y = G \ \mbox{on}\ \Gamma
\end{equation*}
possesses a solution in $H^{3/2}(\Omega)$ provided that
$F\in H^{s-2}(\Omega)$ for some $s\in (3/2,2]$ and $G\in L^2(\Gamma)$, as well as
$\int_\Omega F + \int_\Gamma G = 0$. The solvability condition
is satisfied in our situation with $F=f-y$ and $G=g-u\,y$, which becomes clear when testing
\eqref{eq:weak_form} with $v\equiv 1$. The regularity required for $F$ follows from the embedding
$f\in L^r(\Omega)\hookrightarrow H^{-1/2+\varepsilon}(\Omega)$ for sufficiently small
$\varepsilon>0$.
Moreover, the H\"older inequality and the embeddings
$H^1(\Omega)\hookrightarrow L^q(\Gamma)$ with $q<\infty$ ($n=2$)
or $H^1(\Omega)\hookrightarrow L^4(\Gamma)$ ($n=3$)
imply $\|u\,y\|_{L^2(\Gamma)} \le c\,\|u\|_{L^p(\Gamma)}\,\|y\|_{H^1(\Omega)}$, from which we conclude
$G\in L^2(\Gamma)$. From \cite[Theorem 1.12]{Dha12} and Lemma \ref{lem:lax_milgram}
we then obtain
\begin{align}\label{eq:a_priori_h32}
\|y\|_{H^{3/2}(\Omega)}
&\le c \left( \|F\|_{L^r(\Omega)} + \|G\|_{L^2(\Gamma)} + \left\vert\int_\Omega y(x)\,\mathrm{d}x\right\vert\right) \nonumber\\
&\le c \left(1+\|u\|_{L^p(\Gamma)}\right)\left(\|f\|_{L^r(\Omega)} + \|g\|_{L^2(\Gamma)}\right).
\end{align}
It remains to show the $H^1(\Gamma)$-norm estimate.
We split the solution into the parts $y_f$ and $y_g$ solving
\begin{equation*}
\begin{aligned}
-\Delta y_f + y_f &= f &\qquad -\Delta y_g + y_g &= 0 &\qquad& \mbox{in}\ \Omega,\\
\partial_n y_f &= 0 & \partial_n y_g &= g - u y && \mbox{on}\ \Gamma.
\end{aligned}
\end{equation*}
Using \cite[Theorem 5.4]{GM11} we directly deduce
\begin{equation*}
\|y_g\|_{H^1(\Gamma)} \le c\,\|g-u\,y\|_{L^2(\Gamma)} \le c \left(\|g\|_{L^2(\Gamma)} + \|u\|_{L^p(\Gamma)}\,\|y\|_{H^1(\Omega)}\right)
\end{equation*}
and Lemma \ref{lem:lax_milgram} leads to the desired estimate for $y_g$.
For the function $y_f$, we get the desired estimate by an application of a trace theorem and the
a priori estimate \eqref{eq:a_priori_h32}, which can in the case $g\equiv 0$ be improved to
\begin{equation*}
\|y_f\|_{H^1(\Gamma)} \le c\,\|y_f\|_{H^{3/2+\varepsilon}(\Omega)}
\le c\,\|f\|_{L^r(\Omega)},
\end{equation*}
provided that $\varepsilon >0$ is sufficiently small. The validity of the second step
can be confirmed by means of \cite[Theorem 1.12]{Dha12} and \cite[Theorem 23.3]{Dau88}.
The decomposition $y=y_f+y_g$ and the estimates shown above imply the desired estimate in
the $H^1(\Gamma)$-norm.
b) We prove the result for the case $n=3$. The two-dimensional case follows from the same arguments.
From \cite[Theorem 3.1]{Cas93} it is known that the solution of \eqref{eq:weak_form}
belongs to $C(\overline\Omega)$ if $f\in L^r(\Omega)$, $r>n/2$, and $g-u\,y\in L^s(\Gamma)$, $s>n-1$.
The latter assumption can be concluded from the H\"older inequality, a Sobolev embedding and a trace theorem, which implies
\[
\|u\,y\|_{L^s(\Gamma)} \le c\,\|u\|_{L^p(\Gamma)}\,\|y\|_{L^8(\Gamma)}\le c\,\|u\|_{L^p(\Gamma)}\,\|y\|_{H^{5/4+\varepsilon}(\Omega)}
\]
for $1/p+1/8=1/(2+\varepsilon)$. A simple computation shows that $p>8/3$
and $s=2+\varepsilon$ with $\varepsilon>0$ sufficiently small guarantee the validity of
the previous steps.
It remains to show $y\in H^{5/4+\varepsilon}(\Omega)$. This can be deduced from
\cite[Theorem 23.3]{Dau88}, where the a priori estimate
\begin{equation}\label{eq:h54_reg}
\|y\|_{H^{5/4+\varepsilon}(\Omega)} \le c \left(\|f\|_{H^{3/4-\varepsilon}(\Omega)^*} + \|g - u\,y\|_{H^{-1/4+\varepsilon}(\Gamma)}\right)
\end{equation}
is stated.
The regularity demanded by the right-hand side of \eqref{eq:h54_reg} is confirmed with the
embeddings $f\in L^r(\Omega)\hookrightarrow H^{3/4-\varepsilon}(\Omega)^*$ and
$g\in L^s(\Gamma)\hookrightarrow H^{-1/4+\varepsilon}(\Gamma)$.
Moreover, there holds $\|u\,y\|_{H^{-1/4+\varepsilon}(\Gamma)} \le c\,\|u\|_{L^p(\Gamma)}\,\|y\|_{L^4(\Gamma)}$, see \cite[Theorem 1.4.4.2]{Gri85}.
Collecting up the arguments above leads to
\begin{align*}
\|y\|_{C(\overline\Omega)}
&\le c \left(\|f\|_{L^r(\Omega)} + \|g\|_{L^s(\Gamma)} + \|u\|_{L^p(\Gamma)} \|y\|_{H^{5/4+\varepsilon}(\Omega)}\right) \\
&\le c \left(1+\|u\|_{L^p(\Gamma)}\right)^2\left(\|f\|_{L^r(\Omega)} + \|g\|_{L^s(\Gamma)}
+ \|y\|_{H^1(\Omega)}\right)
\end{align*}
and the assertion follows after insertion of the a priori estimate from Lemma \ref{lem:lax_milgram}.
c)
From the assumption and an embedding we deduce that $u\in L^4(\Gamma)$.
Hence, \eqref{eq:h54_reg} is applicable, which implies $y\in H^{3/4}(\Gamma)$ and thus $u\,y\in H^{1/2}(\Gamma)$, see \cite[Theorem 1.4.4.2]{Gri85}.
The $H^2(\Omega)$-regularity of $y$ then follows from a shift theorem applied to the
equation with boundary condition $\partial_n y = g - u\,y\in H^{1/2}(\Gamma)$ on $\Gamma$,
see \cite[Theorem 2.4.2.7]{Gri85} (for domains with smooth boundary) or \cite[Theorem 4.4.3.8]{Gri85} (for convex polygonal domains).
\end{proof}
\section{The optimal control problem}
\label{sec:optimal_control}
We introduce the control-to-state operator $S\colon U_{ad}\to H^1(\Omega)$
defined by $S(u):=y$, with $y$ the solution of \eqref{eq:weak_form}.
In this section we discuss the bilinear optimal control problem
\begin{equation}\label{eq:target}
j(u):=\frac12\|S(u)-y_d\|_{L^2(\Omega)}^2 + \frac\alpha2\|u\|_{L^2(\Gamma)}^2 \to \min!
\end{equation}
subject to $u\in U_{ad}:=\{v\in L^2(\Gamma)\colon u_a\le v\le u_b\ \text{a.\,e.\ on}\ \Gamma\}$.
Here, $\alpha>0$ is the regularization parameter, $y_d\in L^2(\Omega)$ the desired state
and $0 < u_a < u_b$ the control bounds.
Our aim is to derive necessary and sufficient optimality conditions as
well as regularity results for local solutions. Note that the operator $S$ is non-affine and
consequently, $j$ is non-convex.
\subsection{Optimality conditions}
To derive optimality conditions, differentiability properties of the (implicitly defined) operator $S$ are of interest.
\begin{lemma}\label{lem:differentiability}
The operator $S\colon U_{ad}\to H^1(\Omega)$ is infinitely many times Fr\'echet differentiable with respect to the $L^2(\Gamma)$-topology.
The first derivative $\delta y := S'(u)\delta u$ is the weak solution of
\begin{equation}\label{eq:tangent_equation}
\left\lbrace
\begin{array}{rlll}
-\Delta \delta y + \delta y &= 0 &\qquad&\mbox{in}\ \Omega,\\
\partial_n\delta y + u\,\delta y &= -\delta u\,y &&\mbox{on}\ \Gamma.
\end{array}
\right.
\end{equation}
\end{lemma}
\begin{proof}
The result follows from an application of the implicit function theorem to
the operator $e\colon H^1(\Omega)\times U \to H^1(\Omega)^*$ with
$U:=\{v\in L^2(\Gamma)\colon v\mbox{ fulfills \eqref{eq:ass_coercivity}}\}$ defined by
\begin{equation*}
e(y,u)v := (\nabla y,\nabla v)_{L^2(\Omega)} + (y,v)_{L^2(\Omega)} + (u\,y,v)_{L^2(\Gamma)} - (f,v)_{L^2(\Omega)} - (g,v)_{L^2(\Gamma)},
\end{equation*}
whose roots are solutions of \eqref{eq:weak_form}.
We choose $\delta y\in H^1(\Omega)$, $\delta u\in U$ such that
$u+\delta u\in U$ (note that $U$ is an open subset of $L^2(\Gamma)$).
First, we confirm that the linear operator $e'(y,u)\colon H^1(\Omega)\times L^2(\Gamma)\to H^1(\Omega)^*$
defined by
\begin{equation}\label{eq:proof_derivative_S}
e'(y,u)(\delta y, \delta u):=(\nabla \delta y,\nabla \cdot)_{L^2(\Omega)} + (\delta y,\cdot)_{L^2(\Omega)} + (u\,\delta y + y\,\delta u,\cdot)_{L^2(\Gamma)}
\end{equation}
is the Fr\'echet-derivative of $e$. This is a consequence of
\begin{equation*}
e(y+\delta y,u+\delta u) - e(y,u) = e'(y,u)(\delta y,\delta u) + (\delta u\,\delta y,\cdot)_{L^2(\Gamma)}
\end{equation*}
and the fact that the remainder term satisfies
\begin{align}
\label{eq:remainder_term}
\|\delta u\,\delta y\|_{H^1(\Omega)^*}
&= \sup_{\varphi\in H^1(\Omega)} \frac{(\delta u\,\delta y,\varphi)_{L^2(\Gamma)}}{\|\varphi\|_{H^1(\Omega)}}
\le c \sup_{\varphi\in H^1(\Omega)} \frac{\|\varphi\|_{L^4(\Gamma)}}{\|\varphi\|_{H^1(\Omega)}}\,
\|\delta u\|_{L^2(\Gamma)}\,\|\delta y\|_{L^4(\Gamma)} \nonumber\\
&\le c\,\|\delta u\|_{L^2(\Gamma)}\,\|\delta y\|_{H^1(\Omega)}
\le c\left(\|\delta u\|_{L^2(\Gamma)}^2+\|\delta y\|_{H^1(\Omega)}^2\right) \nonumber\\
&= o(\|(\delta y,\delta u)\|_{H^1(\Omega)\times L^2(\Gamma)}),
\end{align}
where we applied the generalized H\"older inequality and the embedding $H^1(\Omega)\hookrightarrow L^4(\Gamma)$.
The second Fr\'echet derivative $e''\colon H^1(\Omega)\times U\to \mathcal L((H^1(\Omega)\times L^2(\Gamma))^2,H^1(\Omega)^*)$ is given by
\begin{equation*}
e''(y,u)(\delta y,\delta u)(\tau y,\tau u) := (\tau u\,\delta y + \delta u\,\tau y,\cdot)_{L^2(\Gamma)}
\end{equation*}
and the mapping $(y,u)\mapsto e''(y,u)$ is continuous.
All derivatives of order three and higher vanish.
Hence, $e\colon H^1(\Omega)\times U\to H^1(\Omega)^*$ is of class $C^\infty$.
Finally, due to Lemma \ref{lem:lax_milgram} we conclude that the linear mapping
\[\delta y\mapsto e_y(y,u)\delta y = (\nabla \delta y,\nabla\cdot)_{L^2(\Omega)} + (\delta y,\cdot)_{L^2(\Omega)}+(u\,\delta y,\cdot)_{L^2(\Gamma)}\in H^1(\Omega)^*\]
is bijective. The implicit function theorem implies the assertion and the derivative
$\delta y:=S'(u)\delta u$ is given by $e'(y,u)(\delta y,\delta u) = 0$.
This corresponds to the weak formulation of \eqref{eq:tangent_equation}.
\end{proof}
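The mechanism behind the tangent equation can be checked in a scalar analogue. This is our own illustrative sketch (with assumed data $f$, not part of the paper's setting): the "state equation" $y+u\,y=f$ gives $S(u)=f/(1+u)$, and the corresponding tangent equation $\delta y + u\,\delta y = -\delta u\,y$ yields $S'(u)\delta u = -f\,\delta u/(1+u)^2$, which matches a finite-difference quotient of $S$.

```python
f = 2.0  # assumed data of the scalar sketch

def S(u):
    # scalar "state equation" y + u*y = f  =>  y = f/(1+u)
    return f / (1.0 + u)

def dS(u, du):
    # scalar tangent equation dy + u*dy = -du*y  =>  dy = -du*y/(1+u)
    return -du * S(u) / (1.0 + u)

u, du, eps = 0.7, 1.0, 1e-6
fd = (S(u + eps * du) - S(u)) / eps   # finite-difference quotient
assert abs(dS(u, du) - fd) < 1e-5     # tangent equation reproduces S'(u)du
```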
From the chain rule and Lemma \ref{lem:differentiability} we directly conclude the following differentiability result.
\begin{lemma}
The functional $j\colon U_{ad}\to\mathbb R$ is infinitely many times Fr\'echet differentiable with respect to the $L^2(\Gamma)$-topology
and the first derivative is given by
\begin{equation}\label{eq:opt_cond_varineq}
\left<j'(u),v\right> = (S(u)-y_d, S'(u)v)_{L^2(\Omega)} +
\alpha\,(u,v)_{L^2(\Gamma)},
\qquad v\in L^2(\Gamma).
\end{equation}
\end{lemma}
The optimality condition can be simplified by means of the adjoint of the linearized
control-to-state operator, that is,
\[S'(u)^*\colon H^1(\Omega)^*\to L^2(\Gamma),\qquad
S'(u)^*v:= -[S(u)\,p]_\Gamma,\] with $p\in H^1(\Omega)$ solving the adjoint equation
\begin{equation}
\label{eq:adjoint_eq}
\left\lbrace
\begin{array}{rlll}
-\Delta p + p &= v &\qquad&\mbox{in}\ \Omega,\\
\partial_n p + u\,p &= 0 &&\mbox{on}\ \Gamma.
\end{array}
\right.
\end{equation}
In the following we denote the control-to-adjoint mapping $u\mapsto p=S'(u)^*(S(u)-y_d)$
by $Z\colon L^2(\Gamma)\to H^1(\Omega)$.
Consequently, we can rewrite the optimality condition \eqref{eq:opt_cond_varineq} as
\begin{align}\label{eq:opt_cond}
&
\begin{aligned}
-\Delta y + y &= f\qquad & -\Delta p + p &= y-y_d &\qquad & \mbox{in}\ \Omega,\\
\partial_n y + u\,y &= g & \partial_n p + u\,p &= 0 && \mbox{on}\ \Gamma,
\end{aligned}\\[.3em]
&\hspace{1cm}\left(\alpha\,u - y\,p,v-u\right)_{L^2(\Gamma)} \ge 0\hspace{1.95cm}\mbox{for all}\ v\in U_{ad}. \nonumber
end{align}
The variational inequality is equivalent to the projection formula
\begin{equation}\label{eq:proj_formula}
u = \Pi_{ad}\left(\frac1\alpha [y\,p]_\Gamma\right)
\end{equation}
with $\Pi_{ad}$ the $L^2(\Gamma)$-projection onto $U_{ad}$.
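Since $U_{ad}$ is a box, the projection $\Pi_{ad}$ acts pointwise as a clip onto $[u_a,u_b]$. The following numpy sketch (with illustrative values of our own choosing) evaluates the projection formula on samples of $(y\,p)|_\Gamma$ and verifies the pointwise sign conditions encoded in the variational inequality:

```python
import numpy as np

# Pointwise evaluation of u = Pi_ad((y*p)/alpha); Pi_ad clips onto the
# box [u_a, u_b]. The sample values of (y*p)|_Gamma are illustrative only.
alpha, u_a, u_b = 0.1, 0.5, 2.0
yp = np.array([-0.3, 0.02, 0.08, 0.25, 0.6])
u = np.clip(yp / alpha, u_a, u_b)

# Variational inequality (alpha*u - y*p, v - u) >= 0 for all admissible v:
# pointwise, alpha*u - y*p must be >= 0 on the lower-active set, <= 0 on
# the upper-active set, and = 0 on the inactive set.
residual = alpha * u - yp
assert np.all(residual[u == u_a] >= -1e-12)
assert np.all(residual[u == u_b] <= 1e-12)
inactive = (u > u_a) & (u < u_b)
assert np.allclose(residual[inactive], 0.0)
```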
To compute the second derivative of $j$ we need the solution
$\delta y:=S'(u)\delta u\in H^1(\Omega)$ of the \emph{tangent equation}
\begin{equation*}
\left\lbrace
\begin{array}{rlll}
-\Delta \delta y + \delta y &= 0 &\quad&\mbox{in}\quad\Omega,\\
\partial_n\delta y + u\,\delta y &= - y\,\delta u &&\mbox{on}\quad\Gamma,
\end{array}
\right.
\end{equation*}
and the solution $\delta p:=Z'(u)\delta u\in H^1(\Omega)$ of the \emph{dual for Hessian equation}
\begin{equation*}
\left\lbrace
\begin{array}{rlll}
-\Delta \delta p + \delta p &= \delta y &\qquad& \mbox{in}\quad\Omega,\\
\partial_n\delta p + u\,\delta p &= -p\,\delta u &&\mbox{on}\quad\Gamma.
\end{array}
\right.
\end{equation*}
The reduced Hessian in the directions $\delta u,\tau u\in L^2(\Gamma)$ then reads
\begin{equation}\label{eq:second_deriv_j}
j''(u)\left[\delta u,\tau u\right] = (\alpha\,\delta u - p\,\delta y - y\,\delta p,\tau u)_{L^2(\Gamma)}.
\end{equation}
Next, we derive some stability and Lipschitz properties of $S$, $Z$, $S'$ and $Z'$.
As the following results require different assumptions on $f$, $y_d$ and $g$, we simply
assume the most restrictive ones, that is,
\begin{equation*}
f, y_d\in L^\infty(\Omega),\qquad g\in H^{1/2}(\Gamma).
\end{equation*}
Moreover, we will hide the dependency on these quantities in the generic constant
to simplify the notation.
\begin{lemma}\label{lem:stability_SZ}
Let $u\in L^2(\Gamma)$ satisfy the assumption \eqref{eq:ass_coercivity}.
The control-to-state operator $S$ satisfies the following inequalities:
\begin{align*}
\|S(u)\|_{H^1(\Omega)} &\le c, &&\\
\|S(u)\|_{H^{3/2}(\Omega)} + \|S(u)\|_{H^1(\Gamma)} &\le c\,(1+\|u\|_{L^{p_1}(\Gamma)}), \\
\|S(u)\|_{L^\infty(\Omega)} &\le c\,(1+\|u\|_{L^{p_2}(\Gamma)})^2,
\end{align*}
with $p_1 > 2$ and $p_2\ge 2$ for $n=2$, and $p_1\ge 4$ and $p_2 > 8/3$ for $n=3$.
The estimates remain valid when replacing the operator $S$ by the control-to-adjoint operator $Z$.
\end{lemma}
\begin{proof}
The inequalities for $S$ are a direct consequence of Lemmata
\ref{lem:lax_milgram} and \ref{lem:props_S}. The inequalities for $Z$ can
be derived with similar arguments, but the right-hand side of the adjoint
equation involves the corresponding state $S(u)$. However, in all cases the
norms of $S(u)-y_d$ can be bounded by $c\,(1+\|S(u)\|_{H^1(\Omega)})\le c$.
\end{proof}
\begin{lemma}\label{lem:stability_SZ_prime}
Let $u,\delta u\in L^2(\Gamma)$ be given and assume that $u$
satisfies \eqref{eq:ass_coercivity}.
Then, the following stability estimates hold true:
\begin{align*}
\|S'(u)\delta u\|_{H^1(\Omega)} &\le c\,\|\delta u\|_{L^2(\Gamma)},\\
\|S'(u)\delta u\|_{H^{3/2}(\Omega)} &\le c\left(1+\|u\|_{L^p(\Gamma)}\right)^3 \|\delta u\|_{L^2(\Gamma)},
\end{align*}
with $p>2$ for $n=2$ and $p\ge 4$ for $n=3$.
The estimates remain valid when replacing $S'$ by $Z'$.
\end{lemma}
\begin{proof}
In the following we write $y:=S(u)$ and $\delta y = S'(u)\delta u$.
The stability in $H^1(\Omega)$ follows directly from Lemma \ref{lem:lax_milgram}
and the estimate
\begin{equation}\label{eq:multiplication_H-12}
\|\delta u\,y\|_{H^{-1/2}(\Gamma)} = \sup_{\genfrac{}{}{0pt}{}{\varphi\in H^{1/2}(\Gamma)}{\varphi\not\equiv 0}} \frac{\left(\delta u \,y,\varphi\right)_{L^2(\Gamma)}}{\|\varphi\|_{H^{1/2}(\Gamma)}}
\le c\,\|\delta u\|_{L^2(\Gamma)}\,\|y\|_{H^1(\Omega)},
\end{equation}
which follows from the same arguments used already in \eqref{eq:remainder_term}.
The boundedness of $y=S(u)$ in $H^1(\Omega)$ can be found in the previous lemma.
The estimate in the $H^{3/2}(\Omega)$-norm follows analogously from Lemma \ref{lem:props_S}a),
the estimate
\[\|y\,\delta u\|_{L^2(\Gamma)}\le c\,\|y\|_{L^\infty(\Omega)}\,\|\delta u\|_{L^2(\Gamma)},\]
and the stability in $L^\infty(\Omega)$ proved in Lemma \ref{lem:stability_SZ}.
The estimates for $Z'$ are deduced with similar techniques.
With the a priori estimate from Lemma \ref{lem:props_S}a)
and the embedding $H^1(\Omega)\hookrightarrow L^r(\Omega)$ which holds for $r <\infty$ ($n=2$)
or $r\le 6$ ($n=3$) we get
\begin{align*}
\|Z'(u)\delta u\|_{H^{3/2}(\Omega)} &\le c \left(1+\|u\|_{L^p(\Gamma)}\right)
\left(\|p\,\delta u\|_{L^2(\Gamma)} + \|\delta y\|_{H^1(\Omega)}\right) \\
&\le c\left(1+\|u\|_{L^p(\Gamma)}\right)
\left(1+\|p\|_{L^\infty(\Gamma)}\right)\|\delta u\|_{L^2(\Gamma)}
\end{align*}
with $p=Z(u)$.
The stability of $Z$ in $L^\infty(\Omega)$ is discussed in the previous lemma.
\end{proof}
\begin{lemma}\label{lem:lipschitz_SZ}
Let $u,v\in L^2(\Gamma)$ satisfy assumption \eqref{eq:ass_coercivity}.
Then, the following Lipschitz-estimates hold:
\begin{align*}
\|S(u) - S(v)\|_{H^1(\Omega)} &\le c\,\|u-v\|_{L^2(\Gamma)}, \\
\|S'(u)\delta u - S'(v)\delta u\|_{H^1(\Omega)}
&\le c\,\|u-v\|_{L^2(\Gamma)}\|\delta u\|_{L^2(\Gamma)}.
\end{align*}
The estimates are also valid when replacing $S$ by $Z$ and $S'$ by $Z'$.
\end{lemma}
\begin{proof}
The estimates for $S$ and $S'$ follow directly from Lemma
\ref{lem:lipschitz_general} and the stability estimates for $S$ and $S'$
in $H^1(\Omega)$ proved in the Lemmata \ref{lem:stability_SZ} and \ref{lem:stability_SZ_prime}.
The Lipschitz estimate for $Z$ is proved in a similar way. In this case
one has to apply the Lipschitz estimate shown for
$S$ to the term $\|S(u)-S(v)\|_{H^1(\Omega)}$ appearing due to the differences in
the right-hand sides.
With the same idea we show the Lipschitz estimate for
$Z'$. Using again Lemma~\ref{lem:lipschitz_general} we get
\begin{align*}
&\|Z'(u)\delta u - Z'(v)\delta u\|_{H^1(\Omega)}
\le c\,\Big(\|u-v\|_{L^2(\Gamma)}\,\|Z'(u)\delta u\|_{H^1(\Omega)} \\
&\qquad+ \|S'(u)\delta u - S'(v)\delta u\|_{H^1(\Omega)}
+ \|\delta u\,(Z(u) - Z(v))\|_{H^{-1/2}(\Gamma)}\Big).
\end{align*}
It remains to bound the three terms on the right-hand side.
To this end, we apply Lemma \ref{lem:stability_SZ_prime}
to the first term, the Lipschitz estimate for $S'(\cdot)\delta u$ to the second term,
and the multiplication rule \eqref{eq:multiplication_H-12} with $y=Z(u) - Z(v)$
as well as the Lipschitz estimate for $Z$ to the third term.
\end{proof}
As the optimal control problem is non-convex we have to deal with local solutions.
For some local solution $\bar u\in U_{ad}$ we require the following second-order
sufficient condition:
\begin{assumption}[SSC]\label{ass:ssc}
The objective functional is locally convex near the local solution $\bar u$, i.\,e.,
a constant $\delta > 0$ exists such that
\begin{equation}\label{eq:ssc}
j''(\bar u)(v,v) \ge \delta\,\|v\|_{L^2(\Gamma)}^2 \qquad\forall v\in L^2(\Gamma).
\end{equation}
\end{assumption}
With standard arguments one can show that each function $\bar u\in U_{ad}$ fulfilling the
first-order necessary condition \eqref{eq:opt_cond} and the second-order sufficient condition
\eqref{eq:ssc} is indeed a local solution and satisfies the quadratic growth condition
\begin{equation*}
j(\bar u) \le j(u) - \gamma\,\|u-\bar u\|_{L^2(\Gamma)}^2\qquad \forall u\in B_\tau(\bar u),
\end{equation*}
with certain constants $\gamma,\tau > 0$.
The authors are aware that there are weaker assumptions which are sufficient for
local minima; for instance, one could formulate \eqref{eq:ssc} for all directions
$v$ from a critical cone. However, with this weaker assumption the
convergence proof for the postprocessing approach presented in Section
\ref{sec:postprocessing} requires some more careful investigations, in
particular the construction of a modified interpolant onto $U_{ad}$.
One possible solution for this issue can be found in \cite{KP14}.
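The interplay between non-convexity of $j$ and local convexity in the sense of \eqref{eq:ssc} can be made concrete in a scalar toy model. This is our own illustration with assumed data, not the PDE problem itself: the "state equation" $y+u\,y=f$ gives $S(u)=f/(1+u)$, and $j(u)=\tfrac12(S(u)-y_d)^2+\tfrac\alpha2 u^2$ is not convex on all of the control interval, yet $j''>0$ holds near a local minimizer.

```python
import numpy as np

f, y_d, alpha = 2.0, 1.0, 0.01   # illustrative values (our assumption)

def j(u):
    # reduced objective of the scalar model with S(u) = f/(1+u)
    return 0.5 * (f / (1.0 + u) - y_d) ** 2 + 0.5 * alpha * u ** 2

def d2j(u, h=1e-4):
    # central finite-difference approximation of j''(u)
    return (j(u + h) - 2.0 * j(u) + j(u - h)) / h ** 2

grid = np.linspace(0.0, 3.0, 3001)
u_bar = grid[np.argmin(j(grid))]  # grid search for a local minimizer
assert 0.5 < u_bar < 1.0
assert d2j(u_bar) > 0.0   # SSC: local convexity at the minimizer ...
assert d2j(3.0) < 0.0     # ... although j is not convex on [0, 3]
```

Analytically, $j''(u)=12/t^4-4/t^3+\alpha$ with $t=1+u$ in this model, which changes sign on the interval for small $\alpha$; this mirrors why \eqref{eq:ssc} is a genuinely local assumption.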
Later, we will require the following Lipschitz estimate for the Hessian of $j$.
\begin{lemma}\label{lem:lipschitz_2nd_deriv}
Let $u,v \in L^2(\Gamma)$ fulfilling \eqref{eq:ass_coercivity} be given.
Then, the Lipschitz-estimate
\begin{equation*}
\left\vert j''(u)(\delta u,\delta u) - j''(v)(\delta u,\delta u)\right\vert \le
c\,\|\delta u\|_{L^2(\Gamma)}^2\,\|u-v\|_{L^2(\Gamma)}
\end{equation*}
is valid for all $\delta u\in L^2(\Gamma)$.
\end{lemma}
\begin{proof}
To shorten the notation we write
$y_u=S(u)$, $p_u = Z(u)$, $\delta y_u = S'(u)\delta u$ and $\delta p_u = Z'(u)\delta u$.
From the representation \eqref{eq:second_deriv_j} we obtain
\begin{align*}
& \left\vert j''(v)(\delta u,\delta u) - j''(u)(\delta u,\delta
u)\right\vert \\
&\quad\le \left\vert(p_u\, \delta y_u - p_v\, \delta y_v + y_u\,\delta p_u - y_v\,\delta
p_v,\delta u)_{L^2(\Gamma)}\right\vert.
\end{align*}
We estimate the right-hand side using the Cauchy-Schwarz inequality,
the embedding $H^1(\Omega)\hookrightarrow L^4(\Gamma)$ and the Lipschitz
estimates from Lemma \ref{lem:lipschitz_SZ} as well as the a priori estimates
from Lemmata \ref{lem:stability_SZ} and
\ref{lem:stability_SZ_prime}. This implies
\begin{align*}
&\quad \left\vert(p_u\,\delta y_u - p_v\,\delta y_v,\delta u)_{L^2(\Gamma)}\right\vert \\
&\le c\left(\|p_u - p_v\|_{H^1(\Omega)}\,\|\delta y_u\|_{H^1(\Omega)} +
\|\delta y_u - \delta y_v\|_{H^1(\Omega)}\,\|p_v\|_{H^1(\Omega)}
\right)\|\delta u\|_{L^2(\Gamma)}\\
&\le c\,\|u-v\|_{L^2(\Gamma)}\,\|\delta u\|_{L^2(\Gamma)}^2.
\end{align*}
With similar arguments we deduce
\begin{align*}
&\quad \left\vert\left(y_u\,\delta p_u - y_v\,\delta p_v,\delta u\right)_{L^2(\Gamma)}\right\vert \\
&\le c\left(\|y_u - y_v\|_{H^1(\Omega)}\,\|\delta p_u\|_{H^1(\Omega)} +
\|\delta p_u - \delta p_v\|_{H^1(\Omega)}\,
\|y_v\|_{H^1(\Omega)}\right)\|\delta u\|_{L^2(\Gamma)} \\
&\le c\,\|u-v\|_{L^2(\Gamma)}\,\|\delta u\|_{L^2(\Gamma)}^2,
\end{align*}
and conclude the assertion.
\end{proof}
\begin{corollary}\label{cor:coervice_neighb}
Let $\bar u\in U_{ad}$ be a local solution of \eqref{eq:target} satisfying Assumption \ref{ass:ssc}.
Then, some $\varepsilon>0$ exists such that the inequality
\[j''(u)(\delta u,\delta u) \ge \frac{\delta}2 \|\delta u\|^2_{L^2(\Gamma)}\]
is valid for all $\delta u\in L^2(\Gamma)$ and $u\in L^2(\Gamma)$ with $\|u-\bar u\|_{L^2(\Gamma)}\le \varepsilon$.
\end{corollary}
\begin{proof}
The assertion follows immediately from the previous lemma. For further
details we refer to \cite[Lemma 2.23]{KV09}.
\end{proof}
In the next lemma we collect some basic regularity results for local solutions
of \eqref{eq:target}.
\begin{lemma}\label{lem:regularity_general}
Let $\Omega\subset\mathbb R^n$, $n\in\{2,3\}$, be a Lipschitz domain.
Each local solution $\bar u\in U_{ad}$ of \eqref{eq:target}
and the corresponding states $\bar y=S(\bar u)$, $\bar p=Z(\bar u)$ satisfy
\begin{equation*}
\bar u\in H^1(\Gamma)\cap L^\infty(\Gamma),\qquad \bar y,\bar p\in H^{3/2}(\Omega)
\cap H^1(\Gamma)\cap C(\overline\Omega).
\end{equation*}
\end{lemma}
\begin{proof}
All regularity results, except $\bar u\in H^1(\Gamma)$, follow directly from Lemma \ref{lem:props_S}.
To show $\bar u\in H^1(\Gamma)$ we apply the product rule
\begin{equation*}
\|\bar y\,\bar p\|_{H^1(\Gamma)} \le c\left(\|\bar y\|_{H^1(\Gamma)} \,\|\bar p\|_{L^\infty(\Omega)} + \|\bar y\|_{L^\infty(\Omega)} \,\|\bar p\|_{H^1(\Gamma)}\right) \le c
\end{equation*}
and confirm $\bar y\,\bar p\in H^1(\Gamma)$. The desired result then follows after an
application of the Stampacchia lemma, see \cite[p.~50]{KinderlehrerStampacchia1980},
to the projection formula \eqref{eq:proj_formula}.
\end{proof}
Under additional assumptions on the geometry of $\Omega$ we can show even higher regularity. This is needed for the postprocessing approach studied in Section \ref{sec:postprocessing} where we will show
almost quadratic convergence of the control approximations.
\begin{lemma}\label{lem:regularity_improved}
Let $\Omega\subset\mathbb R^2$ be a bounded domain with a
$C^{1,1}$-boundary $\Gamma$. Then, there holds
\begin{equation*}
\bar u\in W^{1,q}(\Gamma)\cap H^{2-1/q}(\tilde\Gamma),\qquad \bar y, \bar p \in W^{2,q}(\Omega),
\end{equation*}
for all $q\in(1,\infty)$ and all $\tilde\Gamma\subset\subset \mathcal A$ or $\tilde\Gamma\subset\subset \mathcal I$,
where $\mathcal A:=\{x\in\Gamma\colon \bar u(x)\in\{u_a,u_b\}\}$ and
$\mathcal I:=\Gamma\setminus\mathcal A$
denote the active and inactive set, respectively.
\end{lemma}
\begin{proof}
With the projection formula \eqref{eq:proj_formula}, the regularity $\bar y, \bar p\in H^1(\Gamma)$ proved in
Lemma \ref{lem:regularity_general},
and the multiplication rule \cite[Theorem 1.4.4.2]{Gri85} we obtain
$\bar u\in H^{1/2}(\Gamma)$.
From Lemma \ref{lem:props_S}c) we then conclude
$\bar y, \bar p\in H^2(\Omega)\hookrightarrow W^{1,q}(\Gamma)$ for all $q\in(1,\infty)$
and a further application of the multiplication rule yields
$\bar y\,\bar p\in W^{1,q}(\Gamma)$.
From \eqref{eq:proj_formula} we conclude the property $\bar u\in W^{1,q}(\Gamma)$.
Furthermore, we confirm that $\bar u\,\bar y, \bar u\,\bar p\in W^{1-1/q,q}(\Gamma)$
and a standard shift theorem for the Neumann problem, compare also the
technique used in the proof of Lemma \ref{lem:props_S}a), results in
$\bar y,\bar p\in W^{2,q}(\Omega)$. Repeating the arguments above, i.\,e., using the multiplication
rule and the projection formula, we obtain
$\bar u\in W^{2-1/q,q}(\tilde\Gamma)\hookrightarrow H^{2-1/q}(\tilde\Gamma)$.
\end{proof}
We chose the assumptions of the previous lemma in such a way that
the regularity is restricted only by the projection formula. Of course, when
the control bounds are never active, the regularity results could be improved further.
\section{Finite element approximation of the state equation}\label{sec:fem}
This section is devoted to the finite element approximation of the variational problem
\eqref{eq:weak_form}. While the results from the previous sections are valid for
arbitrary Lipschitz domains (unless otherwise explicitly assumed), we have to assume more smoothness
of the boundary $\Gamma$ in order to establish our discretization results:
\begin{enumerate}[align=left]
\item[\customlabel{ass:domain1}{\textbf{(A1)}}]
The domain $\Omega\subset\mathbb R^n$, $n\in\{2,3\}$, possesses a
Lipschitz continuous boundary $\Gamma$ which is piecewise $C^1$.
\end{enumerate}
This definition includes arbitrary (possibly non-convex) polygonal or
polyhedral domains. Indeed, the regularity of solutions is in this case also
restricted by corner and edge singularities. However, for the first convergence result
we require only $H^{3/2}(\Omega)\cap H^1(\Gamma)$-regularity of the solution.
Later, we want to investigate improved discretization techniques for which
more regularity is needed. Then, we will use a stronger assumption on the domain.
First, we introduce shape-regular triangulations $\{\mathcal T_h\}_{h>0}$
of $\Omega$ consisting of triangles ($n=2$) or tetrahedra ($n=3$).
The elements $T$ may have curved edges/faces
such that the property
\begin{equation*}
\overline\Omega = \bigcup_{T\in\mathcal T_h} \overline T
\end{equation*}
is valid for an arbitrary domain $\Omega$.
Moreover, we assume that the triangulations are feasible in the sense of
Ciarlet \cite{Cia91}.
The mesh parameter $h>0$ is the maximal element diameter
\begin{equation*}
h = \max_{T\in\mathcal T_h} h_T,\quad h_T:=\text{diam}(T).
\end{equation*}
The family of meshes $\{\mathcal T_h\}_{h>0}$ is assumed to be quasi-uniform,
that is, there exists some $\kappa > 0$ independent of $h$ such that
each element $T\in\mathcal T_h$ contains a ball with radius $\rho_T$
satisfying the estimate $\frac{\rho_T}{h} \ge \kappa$.
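The quasi-uniformity constant of a concrete mesh can be computed element by element. The following minimal sketch (the two-element toy mesh and the helper name are ours, purely illustrative) evaluates $h$ and $\kappa$ for the unit square split into two triangles:

```python
import math

def tri_quality(p0, p1, p2):
    """Return (h_T, rho_T): longest edge and inradius of a triangle.

    rho_T = 2*area/perimeter is the radius of the inscribed ball
    appearing in the quasi-uniformity condition rho_T / h >= kappa.
    """
    pts = [p0, p1, p2]
    edges = [math.dist(pts[i], pts[(i + 1) % 3]) for i in range(3)]
    area = abs((p1[0] - p0[0]) * (p2[1] - p0[1])
               - (p2[0] - p0[0]) * (p1[1] - p0[1])) / 2
    return max(edges), 2 * area / sum(edges)

# toy mesh: unit square split into two triangles
mesh = [((0, 0), (1, 0), (0, 1)), ((1, 0), (1, 1), (0, 1))]
quals = [tri_quality(*T) for T in mesh]
h = max(h_T for h_T, _ in quals)              # global mesh parameter
kappa = min(rho_T / h for _, rho_T in quals)  # quasi-uniformity constant
```

For this mesh one obtains $h=\sqrt 2$ and $\kappa=1/(2(1+\sqrt2))\approx 0.207$; a family of meshes is quasi-uniform precisely when such a $\kappa$ stays bounded away from zero under refinement.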
Each triangulation $\mathcal T_h$ of $\Omega$ also induces a triangulation $\mathcal E_h$
of the boundary $\Gamma$.
By $F_T\colon \hat T\to T$ we denote the transformations from the reference
triangle/tetra\-hedron $\hat T$ to the world element $T\in\mathcal T_h$.
The transformations $F_T$ may be non-affine for elements with curved
faces. Here, we consider transformations of the form
\begin{equation*}
F_T = \tilde F_T + \Phi_T,
\end{equation*}
with some affine function $\tilde F_T(\hat x) = \tilde B_T\hat x + \tilde b_T$, $\tilde B_T\in\mathbb R^{n\times n}$, $\tilde b_T\in\mathbb R^n$, chosen in such a way that if $T$ is a curved boundary
element, $\tilde T=\tilde F_T(\hat T)$ is an $n$-simplex whose vertices coincide with the vertices
of $T$. The assumed shape-regularity implies $\|\tilde B_T\|\le c\,h_T$ and
$\|\tilde B_T^{-1}\|\le c\,h_T^{-1}$, see \cite[Theorem 15.2]{Cia91}.
To guarantee the validity of interpolation error estimates we assume:
\begin{enumerate}[align=left]
\raggedright
\item[\customlabel{ass:domain2}{\textbf{(A2)}}]
The triangulations $\mathcal T_h$ are regular of order $2$ in the sense of \cite{Ber89},
that is, for all sufficiently small $h>0$ there holds
\begin{equation}\label{eq:props_trafo}
\sup_{\hat x\in\hat T} \|D \Phi_T(\hat x)\cdot \tilde B_T^{-1}\| \le c < 1,\qquad
\sup_{\hat x\in \hat T}\|D^2 \Phi_T(\hat x)\| \le c h^2,
\end{equation}
for all $T\in\mathcal T_h$.
\end{enumerate}
There are multiple strategies to construct the mappings $F_T$ satisfying these assumptions
and we refer the reader for instance to \cite{Ber89,Sco73,Zla73}. Therein, it is assumed
that $\Gamma$ is piecewise $C^3$; only in the second reference is $C^4$ required.
The trial and test space is defined by
\begin{equation*}
V_h:= \{ v_h\in C(\overline\Omega) \colon v_h = \hat v_h\circ F_T^{-1},\ \hat v_h \in \mathcal P_1(\hat T)\ \mbox{for all}\ T\in\mathcal T_h\}.
\end{equation*}
Next, we introduce an interpolation operator which maps functions from $W^{1,1}(\Omega)$ onto $V_h$.
To this end, we partly use the quasi-interpolant proposed by Bernardi \cite{Ber89}, but
use a modification for boundary nodes as in \cite{SZ90}, see also \cite{Ape99}. To each interior node
$x_i$, $i=1,\ldots,N^{in}$, of $\mathcal T_h$, we associate the patch of elements
$\sigma_i:= \cup\{\bar T\colon T\in\mathcal T_h,\ x_i\in \bar T\}$. For the boundary nodes $x_i$,
$i=N^{in}+1,\ldots,N$, we define
$\sigma_i:=\cup\{\bar E\colon E\in\mathcal E_h,\ x_i\in \bar E\}$.
Instead of using nodal values as for the Lagrange interpolant, we use the nodal values
of some regularized function computed by an $L^2$-projection over $\sigma_i$. To this end, denote by $F_i\colon \hat\sigma_i\to \sigma_i$
a continuous transformation from a reference patch $\hat \sigma_i$ having diameter
$O(1)$ to $\sigma_i$.
The interpolation operator $\Pi_h\boldsymbol colon W^{1,1}(\Omega)\to V_h$ is defined as follows.
To each node $x_i$, $i=1,\ldots,N$, we associate a first-order polynomial
$p_i=\hat p_i\circ F_i^{-1}$ defined by
\begin{equation*}
\int_{\hat\sigma_i} (\hat p_i - \hat u)\,\hat q = 0\qquad\forall \hat q\in\mathcal P_1(\hat\sigma_i),
\end{equation*}
where $\hat u$ is chosen such that $u = \hat u\circ F_i^{-1}$.
The interpolation operator is defined by
\begin{equation*}
\Pi_h v(x) = \sum_{i=1}^N p_i(x_i)\,\varphi_i(x),
\end{equation*}
where $\{\varphi_i\}_{i=1,\ldots,N}$ is the nodal basis of $V_h$.
Note that, due to the modification for boundary nodes, this operator is applicable only to $W^{1,1}(\Omega)$-functions. The desired interpolation properties remain valid. In particular, there holds
\begin{equation}\label{eq:int_error}
\|u-\Pi_h u\|_{H^m(\Omega)} \le c h^{\ell-m} \|u\|_{H^{\ell}(\Omega)},\quad m\le \ell\le 2,\ \ell\ge 1,
\end{equation}
see \cite[Theorem 4.1]{Ber89}, \cite[Theorem 4.1]{SZ90}.
Due to the special choice of the patches $\sigma_i$ for the boundary nodes we get similar
interpolation error estimates on the boundary, that is,
\begin{equation}\label{eq:int_error_boundary}
\|u-\Pi_h u\|_{H^m(\Gamma)} \le c h^{\ell-m}\|u\|_{H^{\ell}(\Gamma)}, \quad m\le \ell\le 2.
\end{equation}
The proof follows from the same arguments as in \cite[Theorem 4.1]{SZ90}.
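The rate $h^{\ell-m}$ predicted by the interpolation estimates above is easy to observe numerically. A minimal one-dimensional sketch (using the plain nodal interpolant instead of the quasi-interpolant $\Pi_h$; both share the $O(h^2)$ rate in $L^2$ for smooth functions):

```python
import numpy as np

def p1_interp_L2_error(n, f=np.sin):
    """L2(0,1) error of the piecewise-linear nodal interpolant of f on n cells."""
    nodes = np.linspace(0.0, 1.0, n + 1)
    x = np.linspace(0.0, 1.0, 40 * n + 1)   # fine quadrature grid
    Ih = np.interp(x, nodes, f(nodes))      # piecewise-linear interpolant
    e = f(x) - Ih
    dx = x[1] - x[0]
    # composite trapezoidal rule for the squared error
    return np.sqrt(dx * (np.sum(e ** 2) - 0.5 * (e[0] ** 2 + e[-1] ** 2)))

e_coarse = p1_interp_L2_error(8)
e_fine = p1_interp_L2_error(16)
rate = np.log2(e_coarse / e_fine)   # observed convergence order, close to 2
```

Halving $h$ reduces the $L^2$ error by a factor close to $4$, matching $\ell=2$, $m=0$ in the estimate.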
The finite element solutions of \eqref{eq:weak_form}
are characterized by the variational formulations
\begin{equation}\label{eq:fem}
\text{Find } y_h\in V_h\colon\quad a_u(y_h,v_h) = F(v_h)\qquad\forall v_h\in V_h.
\end{equation}
As in the continuous case one can show that \eqref{eq:fem} possesses a unique solution for each $h>0$.
With the usual arguments we can derive an error estimate for the approximation error in the energy-norm.
\begin{lemma}\label{lem:h1_l2_error}
Assume that \ref{ass:domain1} and \ref{ass:domain2} are satisfied and that
the solution $y$ of \eqref{eq:weak_form} belongs to $H^{s}(\Omega)$ with some
$s\in [1,2]$. Then, there holds the error estimate
\begin{align}
\|y-y_h\|_{H^1(\Omega)}&\le c\,h^{s-1}\,\|y\|_{H^{s}(\Omega)}.\label{eq:h1_error}
\end{align}
\end{lemma}
\begin{proof}
The proof follows from the C\'ea lemma and the interpolation error estimate \eqref{eq:int_error}.
\end{proof}
Of particular interest are error estimates on the boundary. This is required in order to derive
error estimates for boundary control problems. To this end, we first prove a suboptimal result
which is valid for arbitrary Lipschitz domains $\Omega$.
\begin{lemma}\label{lem:fe_error_suboptimal}
Let the assumptions \ref{ass:domain1} and \ref{ass:domain2} be satisfied.
It is assumed that the solution $y$ of \eqref{eq:weak_form} belongs to $H^{3/2}(\Omega)$.
Moreover, the parameter $u$
fulfills \eqref{eq:ass_coercivity} and belongs to
$L^p(\Gamma)$ with $p>2$ for $n=2$ and $p\ge 4$ for $n=3$.
Then, the error estimate
\begin{equation*}
\|y - y_h\|_{L^2(\Gamma)}
\le c\,h\left(1+\|u\|_{L^p(\Gamma)}\right)\|y\|_{H^{3/2}(\Omega)}
\le c\,h\left(1+\|u\|_{L^p(\Gamma)}\right)^2
\end{equation*}
holds, for all $h>0$.
\end{lemma}
\begin{proof}
We introduce the dual problem
\begin{equation*}
\mbox{Find $w\in H^1(\Omega)$}\colon\quad
a_u(v,w) = (y - y_h,v)_{L^2(\Gamma)}\qquad \forall v\in H^1(\Omega)
\end{equation*}
and obtain with the typical arguments of the Aubin-Nitsche trick
\begin{align*}
\|y - y_h\|_{L^2(\Gamma)}^2
\le c\,\|y - y_h\|_{H^1(\Omega)}\,\|w-\Pi_h w\|_{H^1(\Omega)}
\le c\,h\,\|w\|_{H^{3/2}(\Omega)}\,\|y\|_{H^{3/2}(\Omega)}.
\end{align*}
The last step is an application of Lemma \ref{lem:h1_l2_error} and the interpolation error estimate
\eqref{eq:int_error}.
The regularity required for the dual solution $w$ can be deduced from Lemma~\ref{lem:props_S}
with $f\equiv 0$ and $g=y-y_h$. Taking into account the a priori estimate
\[
\|w\|_{H^{3/2}(\Omega)} \le c\left(1+\|u\|_{L^p(\Gamma)}\right)\|y-y_h\|_{L^2(\Gamma)}
\] we conclude the assertion.
\end{proof}
If the solution is more regular, we can also show a higher convergence rate.
In this case we will use the H\"older inequality and a trace theorem to obtain
$\|y-y_h\|_{L^2(\Gamma)} \le c\,\|y-y_h\|_{L^\infty(\Omega)}$, and insert the following result.
\begin{theorem}\label{thm:fe_error_linfty}
Consider a planar domain $\Omega\subset\mathbb R^2$.
Let $u\in H^{1/2}(\Gamma)$ with $u\ge 0$ a.\,e., and assume that
\ref{ass:domain1} and \ref{ass:domain2} are satisfied.
Assume that the solution $y$ of \eqref{eq:weak_form} belongs to
$W^{2,q}(\Omega)$ with some $q\in[2,\infty)$.
Then, the error estimate
\begin{equation*}
\|y-y_h\|_{L^\infty(\Omega)} \le c\,h^{2-2/q}\,\lvert\ln h\rvert\,\|y\|_{W^{2,q}(\Omega)}
\end{equation*}
is valid.
\end{theorem}
The proof requires rather technical arguments and is postponed to the appendix.
\section{The discrete optimal control problem}\label{sec:discrete_optimal_control}
In the following we investigate the discretized optimal control problem:
\begin{equation}\label{eq:discrete_target}
\text{Find}\ u_h\in U_h^{ad}\colon\quad J_h(y_h,u_h) := \frac12 \|y_h-y_d\|_{L^2(\Omega)}^2 + \frac\alpha2 \|u_h\|_{L^2(\Gamma)}^2 \to\min!
\end{equation}
subject to
\begin{equation*}
y_h\in V_h,\quad a_{u_h}(y_h,v_h) = F(v_h)\qquad\forall v_h\in V_h.
\end{equation*}
The reduced objective functional is denoted by $j_h(u_h):=J_h(S_h(u_h),u_h)$.
We use piecewise linear finite elements to approximate the state $y$, i.\,e., the space
$V_h$ is defined as in the previous section. The controls are sought in the space
of piecewise constant functions,
\[U_h^{ad}:=\{w_h\in L^\infty(\Gamma)\colon w_h|_E \in\mathcal P_0\quad\forall E\in\mathcal E_h\}\cap U_{ad},\]
where $\mathcal E_h$ is the triangulation of the boundary induced by $\mathcal T_h$.
As in the continuous case we can derive a first-order necessary optimality condition which reads
\begin{align*}
a_{u_h}(y_h,v_h) &= F(v_h) &&\mbox{for all}\ v_h\in V_h,\\
a_{u_h}(v_h,p_h) &= (y_h-y_d,v_h)_{L^2(\Omega)} && \mbox{for all}\ v_h\in V_h,\\
(\alpha\,u_h - y_h\, p_h,w_h-u_h)_{L^2(\Gamma)} &\ge 0 && \mbox{for all}\ w_h\in U_h^{ad}.
\end{align*}
The discrete control--to--state operator is denoted by $S_h\colon L^2(\Gamma)\to V_h$
and the control--to--adjoint operator by $Z_h\colon L^2(\Gamma)\to V_h$.
Analogous to the continuous case we compute the first and second derivatives of $j_h$ and obtain
\begin{equation}\label{eq:deriv_jh}
j_h'(u)\delta u = \left(\alpha\,u - S_h(u)\,Z_h(u),\delta u\right)_{L^2(\Gamma)}
\end{equation}
and
\begin{equation}\label{eq:second_deriv_jh}
j_h''(u)(\delta u,\tau u) = \left(\alpha\,\delta u - S_h(u)\, Z_h'(u)\delta u - S_h'(u)\delta u\, Z_h(u),\tau u\right)_{L^2(\Gamma)}.
\end{equation}
The first-order optimality condition reads in the short form
\begin{equation}\label{eq:opt_cond_discrete}
(\alpha\,u_h - S_h(u_h)\,Z_h(u_h), w_h-u_h)_{L^2(\Gamma)} \ge 0 \qquad \mbox{for all}\ w_h\in U_h^{ad}.
\end{equation}
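The discrete first-order condition above is a variational inequality over the convex set $U_h^{ad}$; numerically, such conditions are often solved by a projected-gradient iteration $u\mapsto \mathrm{Proj}_{[u_a,u_b]}(u - s\,j_h'(u))$. The following finite-dimensional sketch uses a made-up convex quadratic surrogate for $j_h$ (all data and names are hypothetical, not the operators of this paper). At the fixed point the gradient is nonnegative on the lower-active set, nonpositive on the upper-active set, and vanishes in between:

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha, ua, ub = 20, 1.0, 0.0, 1.0
M = 0.1 * rng.standard_normal((n, n))
A = M.T @ M                          # symmetric positive semidefinite part
b = rng.standard_normal(n)

def grad(u):
    """Gradient of the toy objective 0.5*alpha*|u|^2 + 0.5*u'Au - b'u,
    standing in for the derivative j_h'."""
    return alpha * u + A @ u - b

u = np.clip(b / alpha, ua, ub)       # feasible initial guess
step = 0.5 / alpha                   # stable since the Hessian alpha*I + A is well conditioned
for _ in range(500):
    u = np.clip(u - step * grad(u), ua, ub)   # projected-gradient step
```

At convergence, testing `grad(u)` componentwise reproduces the sign structure of the variational inequality.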
\subsection{Properties of the discrete control--to--state/adjoint operator}
In Section~\ref{sec:optimal_control} we have derived several stability and Lipschitz properties for
the operators $S$, $Z$, $S'$ and $Z'$. Here, we will derive the discrete analogues that are needed
in the following. Throughout this section we assume that \ref{ass:domain1} and \ref{ass:domain2}
are fulfilled.
\begin{lemma}\label{lem:stability_ShZh}
There hold the following properties:
\begin{align*}
\|S_h(u)\|_{H^1(\Gamma)} &\le c\left(1+\|u\|_{L^{p_1}(\Gamma)}\right)^2,\\
\|S_h(u)\|_{L^\infty(\Omega)} &\le c\left(1+\|u\|_{L^{p_2}(\Gamma)}\right)^2,
\end{align*}
for $p_1, p_2 >2$ for $n=2$ and $p_1\ge 4$, $p_2> 4$ for $n=3$.
These estimates remain valid when replacing $S_h$ by $Z_h$.
\end{lemma}
\begin{proof}
We start with the estimate in the $H^1(\Gamma)$-norm.
With the triangle inequality and an inverse estimate we obtain
\begin{align*}
\|S_h(u)\|_{H^1(\Gamma)}
&\le c \left(\|S(u) - S_h (u)\|_{H^1(\Gamma)}
+ \|S(u)\|_{H^1(\Gamma)} \right)\\
&\le c\,\big( \|S(u) - \Pi_h S(u)\|_{H^1(\Gamma)}
+ h^{-1} \|S(u) - \Pi_h S(u)\|_{L^2(\Gamma)} \\
&\quad + h^{-1}\,\|S(u) - S_h(u)\|_{L^2(\Gamma)} +
\|S(u)\|_{H^1(\Gamma)}\big).
\end{align*}
The first two terms are bounded by the last one due to
\eqref{eq:int_error_boundary} and it remains to apply the stability estimate from
Lemma \ref{lem:stability_SZ}. For the third term we apply the error estimate from Lemma
\ref{lem:fe_error_suboptimal}. This implies the first estimate.
We prove the maximum norm estimate only for the case $n=3$.
In the following, we write $y_h := S_h(u)$.
We introduce the function $\tilde y\in H^1(\Omega)$ solving the problem
\begin{equation*}
-\Delta \tilde y + \tilde y = f\ \mbox{in}\ \Omega,\qquad \partial_n \tilde
y = g-u\,y_h\ \mbox{on}\ \Gamma.
\end{equation*}
Obviously, $y_h$ is the Neumann Ritz-projection of $\tilde y$, i.\,e.,
\begin{equation*}
a^{\text N}(y_h - \tilde y,v_h)=\int_\Omega\left(\nabla(y_h-\tilde y)\boldsymbol cdot\nabla
v_h + (y_h-\tilde y)\,v_h \right)= 0\quad\mbox{for all}\ v_h\in V_h.
\end{equation*}
Let $x^*\in \bar T^*$ with $T^*\in\mathcal
T_h$ be the point where $|y_h|$ attains its maximum.
With an inverse inequality and the H\"older inequality we get
\begin{align}\label{eq:yh_linfty_start}
\|y_h\|_{L^\infty(\Omega)} &= |y_h(x^*)| \le c\,|T^*|^{-1}\,\|y_h\|_{L^1(T^*)} \nonumber\\
&\le c \left(|T^*|^{-1}\,\|\tilde y-y_h\|_{L^1(T^*)} + \|\tilde y\|_{L^\infty(T^*)}\right) \nonumber\\
&= c\,(\delta^h,\tilde y-y_h)_{L^2(\Omega)} + c\,\|\tilde y\|_{L^\infty(\Omega)},
\end{align}
where $\delta^h$ is a regularized delta function defined by
$\delta^h(x) = |T^*|^{-1}\sgn(\tilde y(x)-y_h(x))$ if $x\in T^*$ and $\delta^h(x)=0$
otherwise. The second term on the
right-hand side can be treated with the arguments used already in
the proof of Lemma \ref{lem:props_S}b), namely
\begin{align*}
\|\tilde y\|_{L^\infty(\Omega)}
&\le c\left(\|f\|_{L^r(\Omega)} + \|g\|_{L^s(\Gamma)} + \|u\,y_h\|_{L^{s}(\Gamma)} \right)
\end{align*}
with $r>3/2$ and $s=2+\varepsilon$ with $\varepsilon>0$ sufficiently small such that the following
arguments remain valid.
Furthermore, we estimate the last term with the H\"older inequality
with $p_2=4\,(2+\varepsilon)/(2-\varepsilon)$ and $p'=4$ (note that $1/{p_2}+1/p'=1/s$)
and the embedding $H^1(\Omega)\hookrightarrow L^4(\Gamma)$. This yields
\begin{equation*}
\|u\,y_h\|_{L^{s}(\Gamma)} \le c\,\|u\|_{L^{p_2}(\Gamma)}\,\|y_h\|_{L^{4}(\Gamma)}
\le c\,\|u\|_{L^{p_2}(\Gamma)}\,\|y_h\|_{H^1(\Omega)}.
\end{equation*}
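For completeness, the exponent relation used in this H\"older step can be checked directly:

```latex
\frac{1}{p_2}+\frac{1}{p'}
  = \frac{2-\varepsilon}{4(2+\varepsilon)}+\frac{1}{4}
  = \frac{(2-\varepsilon)+(2+\varepsilon)}{4(2+\varepsilon)}
  = \frac{1}{2+\varepsilon}
  = \frac{1}{s}.
```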
It remains to exploit stability of $S_h$ in the $H^1(\Omega)$-norm to conclude
\begin{equation}\label{eq:linfty_est_y_tilde}
\|\tilde y\|_{L^\infty(\Omega)} \le c\,(1+\|u\|_{L^{p_2}(\Gamma)}).
\end{equation}
The estimate for the first term on the right-hand side of eqref{eq:yh_linfty_start} is based
on the ideas from \cite[Section 3.6]{Win08}. First, we introduce a
regularized Green's function $g^h\in H^1(\Omega)$ solving the variational
problem
$a^{\text N}(z,g^h) = (\delta^h,z)_{L^2(\Omega)}$ for all $z\in H^1(\Omega)$.
The Neumann Ritz-projection of $g^h$ is denoted by $g_h^h$.
Using the Galerkin orthogonality we obtain
\begin{align}\label{eq:est_delta_err}
(\delta^h,\tilde y-y_h) & = a^{\text N}(\tilde y-y_h,g^h)
= a^{\text N}(\tilde y-\Pi_h \tilde y,g^h-g_h^h)\nonumber\\
&\le c\,h^{1/2}\,\|\tilde y\|_{H^{3/2}(\Omega)}\,\|g^h\|_{H^1(\Omega)},
\end{align}
where the last step follows from the stability of the Ritz projection
and the interpolation error estimate eqref{eq:int_error}.
To bound the $H^1(\Omega)$-norm of $g^h$ we apply the ellipticity of
$a^{\text N}$, the definition of $g^h$, the H\"older inequality and an embedding to
arrive at
\begin{align*}
c\,\|g^h\|_{H^1(\Omega)}^2
&\le a^{\text N}(g^h,g^h) =(\delta^h,g^h)_{L^2(\Omega)} \\
&\le c\,\|\delta^h\|_{L^{6/5}(\Omega)}\,\|g^h\|_{L^6(\Omega)}
\le c\,h^{-1/2}\,\|g^h\|_{H^1(\Omega)}.
\end{align*}
The last step follows from the property $\|\delta^h\|_{L^{6/5}(\Omega)}\le
c\,|T^*|^{-1/6}\le c\,h^{-1/2}$ that can be confirmed with a simple
computation.
Insertion into \eqref{eq:est_delta_err} and taking into account \eqref{eq:yh_linfty_start} and
\eqref{eq:linfty_est_y_tilde} yields the desired stability estimate.
The estimates for $Z_h$ follow in a similar way.
One just has to replace $f$ by $S_h(u)-y_d$ and the result follows from the estimates
proved already for $S_h(u)$.
\end{proof}
\begin{lemma}\label{lem:Sh_lipschitz}
Assume that $u,v\in L^2(\Gamma)$ satisfy the assumption
\eqref{eq:ass_coercivity}.
Then, the Lipschitz estimate
\begin{equation*}
\|S_h(u) - S_h(v)\|_{H^1(\Omega)} \le c\,\|u-v\|_{L^2(\Gamma)}
\end{equation*}
holds.
\end{lemma}
\begin{proof}
The proof follows with the same arguments as in the continuous case, see Lemmata
\ref{lem:lipschitz_general} and \ref{lem:lipschitz_SZ}.
\end{proof}
Next, we discuss some error estimates for the approximation of the control-to-state and control-to-adjoint operator.
While estimates for $S_h$ and $Z_h$ are a direct consequence of Lemma
\ref{lem:fe_error_suboptimal}, the results for the linearized operators $S_h'$ and $Z_h'$
require more effort since, for instance,
$S'(u)\delta u - S_h'(u)\delta u$ does not fulfill the Galerkin orthogonality.
\begin{lemma}\label{lem:fe_error_h1}
For each $u\in U_{ad}$ and $\delta u\in L^2(\Gamma)$
the error estimates
\begin{align*}
\|S(u) - S_h(u)\|_{H^1(\Omega)} &\le c\,h^{1/2}\,(1+\|u\|_{L^p(\Gamma)}),\\
\|S'(u)\delta u - S_h'(u)\delta u \|_{H^1(\Omega)}&\le c\,h^{1/2}\,(1+\|u\|_{L^p(\Gamma)})^3\,\|\delta u\|_{L^2(\Gamma)}
\end{align*}
are valid for $p>2$ for $n=2$ and $p\ge 4$ for $n=3$.
The results are also valid when replacing $S$ and $S_h$ by $Z$ and $Z_h$,
as well as $S'$ and $S_h'$ by $Z'$ and $Z_h'$, respectively.
\end{lemma}
\begin{proof}
The first estimate is just a combination of the Lemmata \ref{lem:h1_l2_error} and
\ref{lem:stability_SZ}.
To show the estimate for the linearized operators we introduce again
the abbreviations $y:=S(u)$, $y_h:=S_h(u)$, $\delta y := S'(u)\delta u$ and $\delta y_h:= S_h'(u)\delta u$.
Moreover, define the auxiliary function $\delta \tilde y_h\in V_h$ as the solution of
\begin{equation*}
a_u(\delta \tilde y_h,v_h) = (y\,\delta u,v_h)_{L^2(\Gamma)}\qquad \forall v_h\in V_h.
\end{equation*}
This function fulfills the Galerkin orthogonality, i.\,e.,
$a_u(\delta y-\delta \tilde y_h,v_h) = 0$ for all $v_h\in V_h$.
Hence, we obtain with Lemma \ref{lem:h1_l2_error} and the Lipschitz-property from Lemma
\ref{lem:lipschitz_general} (note that this Lemma is also valid for the discrete solutions)
\begin{align*}
\|\delta y - \delta y_h\|_{H^1(\Omega)}
&\le c\left(\|\delta y - \delta \tilde y_h\|_{H^1(\Omega)}
+ \|\delta \tilde y_h - \delta y_h\|_{H^1(\Omega)} \right)\\
&\le c\left(h^{1/2}\,\|\delta y\|_{H^{3/2}(\Omega)}
+ \|\delta u\,(y-y_h)\|_{H^{-1/2}(\Gamma)}\right).
\end{align*}
For the first term we simply insert the second estimate from Lemma \ref{lem:stability_SZ_prime}.
The second term on the right-hand side is further estimated by means of
\cite[Theorem 1.4.4.2]{Gri85} and a trace theorem which yield
\begin{equation*}
\|\delta u\,(y-y_h)\|_{H^{-1/2}(\Gamma)} \le c\,\|\delta u\|_{L^2(\Gamma)} \,\|y-y_h\|_{H^1(\Omega)},
\end{equation*}
and the assertion follows after an application of the estimate shown already for $S(u)-S_h(u)$.
The estimates for $Z$ and $Z'$ follow with similar arguments.
\end{proof}
\subsection{Convergence of the fully discrete solutions}
\label{sec:full_discretization}
Throughout this subsection we assume that the properties \ref{ass:domain1} and \ref{ass:domain2}
are fulfilled. These assumptions are needed to guarantee the required regularity of the solution
and the validity of interpolation error estimates.
As the solutions of both the continuous and the discrete optimal control problem \eqref{eq:target} and \eqref{eq:discrete_target}, respectively, are not unique,
we have to construct a sequence of discrete local solutions converging towards a continuous one.
The first question which arises is whether such a sequence exists. To this end, we introduce
a localized problem
\begin{equation}\label{eq:discrete_local}
j_h(u_h)\to\min! \quad \mbox{s.\,t.}\ u_h\in U_h^{ad}\cap B_\varepsilon(\bar u),
\end{equation}
where $\bar u\in U_{ad}$ is a fixed local solution of \eqref{eq:target} and $\varepsilon>0$ is some small parameter. First, we show that this problem possesses a unique local solution
which would immediately follow if we could show that the coercivity discussed in Corollary
\ref{cor:coervice_neighb} is transferred to the discrete case.
The following arguments are similar to the investigations in \cite{CMT05}, in
particular Theorems 4.4 and 4.5 therein.
\begin{lemma}\label{lem:discrete_coerc}
Let $\bar u\in U_{ad}$ be a local solution of \eqref{eq:target}.
Assume that $\varepsilon>0$ and $h>0$ are sufficiently small. Then, the inequality
\begin{equation*}
j_h''(u)\delta u^2\ge \frac\delta4\|\delta u\|_{L^2(\Gamma)}^2
\end{equation*}
is valid for all $u$ satisfying $\|u-\bar u\|_{L^2(\Gamma)} \le
\varepsilon$.
\end{lemma}
\begin{proof}
With the explicit representations of $j''$ and $j_h''$ from \eqref{eq:second_deriv_j}
and \eqref{eq:second_deriv_jh}, respectively, and Corollary \ref{cor:coervice_neighb}, we obtain
\begin{align}\label{eq:discrete_corec_initial}
&\qquad \frac\delta2 \|\delta u\|_{L^2(\Gamma)}^2
\le \left(j''(u)\delta u^2 - j_h''(u)\delta u^2\right) +
j_h''(u)\delta u^2 \nonumber\\
&\le \Big( \|y_h\, \delta p_h - y\, \delta p\|_{L^2(\Gamma)}
+ \|\delta y_h\, p_h - \delta y\,p\|_{L^2(\Gamma)} \Big)\,\|\delta u\|_{L^2(\Gamma)} + j_h''(u)\delta u^2,
\end{align}
with $y=S(u)$, $p = Z(u)$, $\delta y= S'(u)\delta u$
and $\delta p=Z'(u)\delta u$, and the discrete analogues $y_h=S_h(u)$,
$p_h = Z_h(u)$, $\delta y_h= S_h'(u)\delta u$ and $\delta p_h=Z_h'(u)\delta u$.
It remains to bound the two norms in parentheses appropriately. Therefore, we apply
the triangle inequality,
the stability properties for $S'$, $S_h$, $Z'$ and $Z_h$ from
Lemmata \ref{lem:stability_SZ}, \ref{lem:stability_SZ_prime} and \ref{lem:stability_ShZh}
as well as the error estimates from Lemma \ref{lem:fe_error_h1}.
Note that the control bounds provide the regularity for $u$ that is required for these estimates.
As a consequence we obtain
\begin{align*}
\|y_h\, \delta p_h - y\, \delta p\|_{L^2(\Gamma)}
& \le c\left(\|y-y_h\|_{H^1(\Omega)}\,\|\delta p\|_{H^1(\Omega)}
+ \|\delta p-\delta p_h\|_{H^1(\Omega)}\,\|y_h\|_{H^1(\Omega)}\right) \\
&\le c\,h^{1/2}\,\|\delta u\|_{L^2(\Gamma)}.
\end{align*}
With similar arguments we can show
\begin{align*}
\|\delta y_h\, p_h - \delta y \,p\|_{L^2(\Gamma)}
& \le c \left(\|\delta y - \delta y_h\|_{H^1(\Omega)}\,\|p_h\|_{H^1(\Omega)}
+ \|p - p_h\|_{H^1(\Omega)}\,\|\delta y\|_{H^1(\Omega)}\right) \\
&\le c\, h^{1/2}\,\|\delta u\|_{L^2(\Gamma)}.
\end{align*}
The previous two estimates together with \eqref{eq:discrete_corec_initial} imply
\begin{equation*}
\frac\delta2\, \|\delta u\|_{L^2(\Gamma)}^2
\le c\,h^{1/2}\,\|\delta u\|_{L^2(\Gamma)}^2 + j_h''(u)\delta u^2.
\end{equation*}
Choosing $h$ sufficiently small such that $c\, h^{1/2} \le \frac\delta4$ leads to the assertion.
\end{proof}
\begin{theorem}\label{thm:convergence_fully_discrete}
Let $\bar u\in U_{ad}$ be a local solution of \eqref{eq:target}
satisfying Assumption~\ref{ass:ssc}.
Assume that $\varepsilon>0$ and $h_0>0$ are sufficiently small. Then, the auxiliary problem \eqref{eq:discrete_local} possesses a unique solution for each $h\le h_0$, denoted by $\bar u_h^\varepsilon$, and there holds
\begin{equation*}
\lim_{h\to 0} \|\bar u-\bar u_h^\varepsilon\|_{L^2(\Gamma)} = 0.
\end{equation*}
\end{theorem}
\begin{proof}
The existence of at least one solution of \eqref{eq:discrete_local} follows immediately
from the compactness and non-emptiness of $U_h^{ad}\cap B_\varepsilon(\bar u)$.
Note that $Q_h \bar u\in U_h^{ad}\cap B_{\varepsilon}(\bar u)$ for sufficiently small $h>0$, which
confirms that the feasible set is non-empty.
Due to Lemma \ref{lem:discrete_coerc} this solution is unique.
Moreover, the family $\{\bar u_h^\varepsilon\}_{h\le h_0}$ is bounded and hence, a weakly convergent
sequence $\{\bar u_{h_k}^\varepsilon\}_{k\in\mathbb N}$ with $h_k \searrow 0$
exists.
The weak limit is denoted by $\tilde u\in L^2(\Gamma)$,
and from the convexity and closedness of the feasible set we deduce $\tilde u\in U_{ad}\cap B_\varepsilon(\bar u)$.
Without loss of generality it is assumed that $\bar u_h^\varepsilon\rightharpoonup \tilde u$ in $L^2(\Gamma)$ as $h\searrow 0$.
Next, we show that $\tilde u$ is a local minimum of the continuous problem.
First, we show the convergence of the corresponding states,
which follows with the arguments from \cite{CM02}.
To this end, we employ the triangle inequality to get
\begin{equation}\label{eq:conv_state_triangle}
\|S(\tilde u) - S_h(\bar u_h^\varepsilon)\|_{H^1(\Omega)} \le
c\left( \|S(\tilde u) - S_h(\tilde u)\|_{H^1(\Omega)} + \|S_h(\tilde u) - S_h(\bar u_h^\varepsilon)\|_{H^1(\Omega)} \right).
\end{equation}
For the first term on the right-hand side
we exploit convergence of the finite element method proved in
Lemma \ref{lem:fe_error_h1} which yields
\begin{equation*}
\|S(\tilde u) - S_h(\tilde u)\|_{H^1(\Omega)}\to 0,\quad h\searrow 0.
end{equation*}
With similar arguments as in the proof of Lemma \ref{lem:lipschitz_general}
we moreover deduce
\begin{align*}
\|S_h(\tilde u) - S_h(\bar u_h^\varepsilon)\|_{H^1(\Omega)}^2
&= -(\tilde u\,S_h(\tilde u) - \bar u_h^\varepsilon\,S_h(\bar u_h^\varepsilon), S_h(\tilde u) - S_h(\bar u_h^\varepsilon))_{L^2(\Gamma)} \\
&= -((\tilde u-\bar u_h^\varepsilon)\,S_h(\tilde u), S_h(\tilde u) - S_h(\bar u_h^\varepsilon))_{L^2(\Gamma)} \\
&- \int_\Gamma \bar u_h^\varepsilon\,(S_h(\tilde u) - S_h(\bar u_h^\varepsilon))^2.
\end{align*}
The integral term on the right-hand side is non-negative due to the lower control bounds
$\bar u_h^\varepsilon \ge u_a\ge 0$. We can bound the first term on the right-hand side with
the Cauchy-Schwarz inequality and the multiplication rule from \cite[Theorem 1.4.4.2]{Gri85}
which provides
\begin{align*}
& \|S_h(\tilde u) - S_h(\bar u_h^\varepsilon)\|_{H^1(\Omega)}^2 \le \|\tilde u - \bar u_h^\varepsilon\|_{H^{-s}(\Gamma)}
\, \|S_h(\tilde u)\|_{H^1(\Gamma)} \, \|S_h(\tilde u) - S_h(\bar u_h^\varepsilon)\|_{H^1(\Omega)}
\end{align*}
for arbitrary $s\in(0,1/2)$.
Note that there holds $\|\tilde u - \bar u_h^\varepsilon\|_{H^{-s}(\Gamma)}\to 0$ for $h\searrow 0$
due to the compact embedding $L^2(\Gamma)\hookrightarrow H^{-s}(\Gamma)$, $s>0$.
It remains to bound the second factor on the right-hand side by an application of Lemma
\ref{lem:stability_ShZh} and to divide the whole estimate by the third factor.
After insertion of this estimate into \eqref{eq:conv_state_triangle}
we obtain the strong convergence of the states, that is,
\begin{equation}\label{eq:strong_states_convergence}
\|S(\tilde u) - S_h(\bar u_h^\varepsilon)\|_{H^1(\Omega)} \to 0\quad\mbox{for}\quad h\searrow 0.
\end{equation}
Next, we show that $\tilde u$ is a local solution of the continuous problem \eqref{eq:target}.
To this end we exploit \eqref{eq:strong_states_convergence} and the weak lower semi-continuity of
the norm to arrive at
\begin{equation}\label{eq:conv_target}
j(\tilde u)
\le \liminf_{h\searrow 0} j_h(\bar u_h^\varepsilon)
\le \limsup_{h\searrow 0} j_h(\bar u_h^\varepsilon)
\le \limsup_{h\searrow 0} j_h(Q_h \bar u) \le j(\bar u).
\end{equation}
The second to last step follows from the optimality of $\bar u_h^\varepsilon$ for
\eqref{eq:discrete_local} and the admissibility of the $L^2(\Gamma)$-projection
$Q_h \bar u$ for sufficiently small $h>0$.
The last step follows from the strong convergence of the $L^2(\Gamma)$-projection $Q_h$
in $L^2(\Gamma)$. Note that this implies
$\lim_{h\searrow 0} \|S_h(Q_h \bar u) - S(\bar u)\|_{L^2(\Omega)} = 0$.
Due to Assumption \ref{ass:ssc} the solution $\bar u$ is unique within $B_\varepsilon(\bar u)$
when $\varepsilon > 0$ is sufficiently small. This implies $\tilde u = \bar u$.
Note that all ``$\le$'' signs in eqref{eq:conv_target} then turn to ``$=$'' signs.
To conclude the strong convergence of the sequence $\{\bar u_h^\varepsilon\}_{h>0}$
we show additionally the convergence of the norms.
This follows from eqref{eq:conv_target} and the strong convergence of the states
from which we infer
\begin{align*}
\frac\alpha2\lim_{h\searrow 0} \|\bar u_h^\varepsilon\|_{L^2(\Gamma)}^2
&= \lim_{h\searrow 0}\left(j_h(\bar u_h^\varepsilon) - \frac12\|S_h(\bar u_h^\varepsilon) - y_d\|_{L^2(\Omega)}^2\right) \\
&= j(\bar u) - \frac12 \|S(\bar u) - y_d\|_{L^2(\Omega)}^2 = \frac\alpha2 \|\bar u\|_{L^2(\Gamma)}^2.
\end{align*}
\end{proof}
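The $L^2(\Gamma)$-projection $Q_h$ onto piecewise constants used in the proof is simply cellwise averaging. A minimal one-dimensional sketch (with hypothetical data) of the two properties exploited above, namely orthogonality to piecewise constants and admissibility $Q_h\bar u\in U_{ad}$ (averaging cannot leave the interval $[u_a,u_b]$):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_quad = 10, 50                            # cells and sample points per cell
ua, ub = 0.2, 0.9                                   # toy control bounds
x = np.linspace(0.0, 1.0, n_cells * n_quad, endpoint=False)
u = np.clip(np.sin(2 * np.pi * x) + 0.5, ua, ub)    # an admissible control

# Q_h u: cellwise average, i.e. the L2 projection onto piecewise constants
Qh_u = np.repeat(u.reshape(n_cells, n_quad).mean(axis=1), n_quad)

# orthogonality: (u - Q_h u, w_h) = 0 for every piecewise-constant w_h
w_h = np.repeat(rng.standard_normal(n_cells), n_quad)
orth_residual = np.mean((u - Qh_u) * w_h)           # vanishes up to rounding
```

This is exactly why $Q_h\bar u\in U_{ad}$ holds trivially in the proofs above.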
The previous theorem guarantees that each local solution $\bar u\in U_{ad}$
can be approximated by a sequence of local solutions of the discretized problems
\eqref{eq:discrete_local}. Due to $\bar u_h^\varepsilon\in B_\varepsilon(\bar u)$
and $\bar u_h^\varepsilon\rightarrow \bar u$ for $h\searrow 0$ (i.\,e., the constraint $\bar u_h^\varepsilon\in B_\varepsilon(\bar u)$ is never active),
the functions $\bar u_h^\varepsilon$ are local solutions of the
discrete problems \eqref{eq:discrete_target} provided that $h>0$ is small enough.
Hence, we neglect the superscript $\varepsilon$ in the following and denote by
$\bar u_h$ the sequence of discrete local solutions converging to the
local solution $\bar u$.
Next, we show linear convergence of the sequence $\bar u_h$.
\begin{theorem}\label{thm:convergence_full_disc}
Let $\bar u\in U_{ad}$ be a local solution of \eqref{eq:target}
which fulfills Assumption~\ref{ass:ssc}, and let $\{\bar u_h\}_{h>0}$
be local solutions of \eqref{eq:discrete_target} with $\bar u_h\to \bar u$ for $h\searrow 0$.
Then, the error estimate
\begin{equation*}
\|\bar u-\bar u_h\|_{L^2(\Gamma)} \le \frac{c}{\sqrt \delta} h
\end{equation*}
holds.
\end{theorem}
\begin{proof}
Let $\xi = \bar u+t(\bar u_h-\bar u)$ with $t\in(0,1)$.
From Corollary \ref{cor:coervice_neighb} we obtain for sufficiently small $h$ the estimate
\begin{align*}
\frac{\delta}2 \|\bar u-\bar u_h\|_{L^2(\Gamma)}^2
&\le j''(\xi)(\bar u-\bar u_h)^2\\
&= j'(\bar u)(\bar u-\bar u_h) - j'(\bar u_h)(\bar u-\bar u_h),
\end{align*}
where the last step follows from the mean value theorem for some $t\in (0,1)$.
Next, we confirm with the first-order optimality conditions that
\begin{equation*}
j'(\bar u)(\bar u-\bar u_h) \le 0 \le j_h'(\bar u_h)(Q_h \bar u-\bar u_h)
\end{equation*}
with the $L^2(\Gamma)$ projection $Q_h$ onto $U_h$.
Note that the property $Q_h\bar u\in U_{ad}$ is trivially satisfied.
Insertion into the inequality above leads to
\begin{equation}\label{eq:error_full_disc_start}
\frac\delta2 \|\bar u-\bar u_h\|_{L^2(\Gamma)}^2 \le
(j_h'(\bar u_h) - j'(\bar u_h))(Q_h \bar u-\bar u_h) - j'(\bar u_h)(\bar u-Q_h \bar u).
\end{equation}
An estimate for the second term follows from the orthogonality of the $L^2(\Gamma)$-projection,
that is,
\begin{align}\label{eq:error_full_disc_1}
j'(\bar u_h)(\bar u-Q_h \bar u) &= (\alpha \,\bar u_h + S(\bar u_h)\,Z(\bar u_h), \bar u-Q_h \bar u)_{L^2(\Gamma)} \nonumber\\
&= (S(\bar u_h)\,Z(\bar u_h) - Q_h(S(\bar u_h)\,Z(\bar u_h)), \bar u-Q_h \bar u)_{L^2(\Gamma)} \nonumber\\
&\le c\,h^2\,\|S(\bar u_h)\,Z(\bar u_h)\|_{H^1(\Gamma)}\,\|\bar u\|_{H^1(\Gamma)}.
\end{align}
Furthermore, we exploit the Leibniz rule and the stability properties for $S$ and $Z$
from Lemma \ref{lem:stability_SZ} to obtain
\begin{align}\label{eq:reg_Suh_Zuh}
\|S(\bar u_h)\,Z(\bar u_h)\|_{H^1(\Gamma)}
&\le c\Big(\|S(\bar u_h)\|_{H^1(\Gamma)} \,\|Z(\bar u_h)\|_{L^\infty(\Omega)}\nonumber\\
&\phantom{\le} + \|S(\bar u_h)\|_{L^\infty(\Omega)}\, \|Z(\bar u_h)\|_{H^1(\Gamma)}\Big)
\le c.
\end{align}
Next, we discuss the first term on the right-hand side of \eqref{eq:error_full_disc_start}.
Insertion of the definition of $j_h'$ and $j'$ and the stability of $Q_h$ yield
\begin{align*}
& (j_h'(\bar u_h) - j'(\bar u_h))(Q_h \bar u-\bar u_h)\\
&\quad =(S_h(\bar u_h)\,Z_h(\bar u_h) - S(\bar u_h)\,Z(\bar u_h), Q_h(\bar u-\bar u_h))_{L^2(\Gamma)} \\
&\quad \le c\,\Big(\|S_h(\bar u_h)\|_{L^\infty(\Gamma)}\, \|Z(\bar u_h) -
Z_h(\bar u_h)\|_{L^2(\Gamma)} \nonumber\\
&\qquad \phantom{\le} + \|Z(\bar u_h)\|_{L^\infty(\Gamma)}\, \|S(\bar u_h) - S_h(\bar u_h)\|_{L^2(\Gamma)}\Big)\, \|\bar u-\bar u_h\|_{L^2(\Gamma)} \\
&\quad \le c\,h\left(\|S_h(\bar u_h)\|_{L^\infty(\Gamma)}\,\|Z(\bar
u_h)\|_{H^{3/2}(\Omega)} + \|Z(\bar u_h)\|_{L^\infty(\Gamma)}\,\|S(\bar
u_h)\|_{H^{3/2}(\Omega)}\right)\\
&\qquad \times\|\bar u-\bar u_h\|_{L^2(\Gamma)}.
\end{align*}
In the last step we inserted the finite element error estimates from Lemma \ref{lem:fe_error_suboptimal}. Exploiting also the stability estimates from Lemmata \ref{lem:stability_SZ} and
\ref{lem:stability_ShZh}
we obtain
\begin{equation*}
(j_h'(\bar u_h) - j'(\bar u_h))(Q_h \bar u-\bar u_h)
\le c\,h\,\|\bar u - \bar u_h\|_{L^2(\Gamma)}.
\end{equation*}
Together with \eqref{eq:error_full_disc_start}, \eqref{eq:error_full_disc_1}
and \eqref{eq:reg_Suh_Zuh} we arrive at the assertion.
\end{proof}
\subsection{Postprocessing approach}\label{sec:postprocessing}
In this section we consider the so-called postprocessing approach introduced in \cite{MR04}.
The basic idea is to compute an ``improved'' control $\tilde u_h$ by a pointwise evaluation of
the projection formula, i.\,e.,
\begin{equation}\label{eq:def_postprocessing}
\tilde u_h := \Pi_{\text{ad}}\left(-\frac1\alpha [\bar y_h\,\bar p_h]_\Gamma\right),
end{equation}
where $\bar y_h$ and $\bar p_h$ is the discrete state and adjoint state,
respectively, obtained by the full discretization approach discussed in
Section~\ref{sec:full_discretization}.
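The pointwise postprocessing step \eqref{eq:def_postprocessing} amounts to an elementwise clamp of $-\frac1\alpha\,\bar y_h\,\bar p_h$ onto the admissible interval. A minimal Python sketch, assuming nodal boundary values of $\bar y_h$ and $\bar p_h$ are given as plain lists (all names are illustrative, not part of the paper's code):

```python
def postprocess_control(y_h, p_h, alpha, u_a, u_b):
    """Pointwise postprocessing step: at each boundary node, project
    -(1/alpha) * y_h * p_h onto the admissible interval [u_a, u_b]."""
    return [min(max(-(y * p) / alpha, u_a), u_b) for y, p in zip(y_h, p_h)]
```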
As we require higher regularity of the exact solution in order to observe a
higher convergence rate than for the full discretization approach,
we replace \ref{ass:domain1} by the stronger assumption
\begin{enumerate}[align=left]
\item[\customlabel{ass:domain1b}{\textbf{(A1')}}]
The domain $\Omega$ is planar and its boundary is globally $C^3$.
\end{enumerate}
The most technical part of convergence proofs for this approach
is the proof of $L^2$-norm estimates for the state variables.
This is usually done by considering the following three terms separately:
\begin{align}\label{eq:postprocessing_basic}
&\|\bar y-\bar y_h\|_{L^2(\Gamma)} \nonumber\\
&\quad \le
c \left(\|\bar y-S_h(\bar u)\|_{L^2(\Gamma)} + \|S_h(\bar u) - S_h(R_h \bar u)\|_{L^2(\Gamma)}
+ \|S_h(R_h \bar u) - \bar y_h\|_{L^2(\Gamma)}\right).
\end{align}
In \cite{MR04}, $R_h\colon C(\Gamma)\to U_h$ is chosen as the midpoint interpolant.
We will construct and investigate such an operator in Appendix \ref{app:midpoint}.
Note that the definition of a midpoint interpolant on curved elements is not straightforward.
The first term on the right-hand side of \eqref{eq:postprocessing_basic} is a finite element error in the $L^2(\Gamma)$-norm. We collect the required estimates in the following lemma.
\begin{lemma}\label{lem:postprocessing_first}
For all $q<\infty$ there hold the estimates
\begin{align*}
\|\bar y - S_h(\bar u)\|_{L^2(\Gamma)}&\le c\,h^{2-2/q}\,\lvert\ln h\rvert\, \|\bar y\|_{W^{2,q}(\Omega)} \\
\|\bar p - Z_h(\bar u)\|_{L^2(\Gamma)}&\le c\,h^{2-2/q}\,\lvert\ln h\rvert\, \left(\|\bar p\|_{W^{2,q}(\Omega)} +
\|\bar y\|_{H^2(\Omega)}\right).
\end{align*}
\end{lemma}
\begin{proof}
The first estimate follows from the H\"older inequality and the maximum norm estimate derived
in Theorem \ref{thm:fe_error_linfty}. The second estimate requires an intermediate step.
We denote by $p^h(\bar u)\in V_h$ the solution of the equation
\begin{equation*}
a_{\bar u}(p^h(\bar u),v_h) = (S(\bar u)-y_d,v_h)_{L^2(\Omega)}\qquad \forall v_h\in V_h.
\end{equation*}
As $p^h(\bar u)$ is the Ritz-projection of $\bar p$ we can apply Theorem \ref{thm:fe_error_linfty}
again and obtain
\begin{equation*}
\|\bar p - p^h(\bar u)\|_{L^2(\Gamma)} \le c\,h^{2-2/q}\,\lvert\ln h\rvert\,\|\bar p\|_{W^{2,q}(\Omega)}.
\end{equation*}
To show an estimate for the error between $p^h(\bar u)$ and $Z_h(\bar u)$ we
test the equations defining both functions by $v_h = p^h(\bar u) - Z_h(\bar
u)$, compare the proof of Lemma \ref{lem:lipschitz_general}.
Together with the non-negativity of $\bar u$ we obtain
\begin{align*}
&\|p^h(\bar u)-Z_h(\bar u)\|_{H^1(\Omega)}^2\\
&\quad = -\int_\Gamma \bar u\,(p^h(\bar u) - Z_h(\bar u))^2
+ (S(\bar u) - S_h(\bar u),p^h(\bar u)-Z_h(\bar u))_{L^2(\Omega)} \\
&\quad\le c\,h^2\,\|\bar y\|_{H^2(\Omega)}\,\|p^h(\bar u)-Z_h(\bar u)\|_{H^1(\Omega)}.
\end{align*}
The last step follows from the estimate $\|S(\bar u) - S_h(\bar u)\|_{L^2(\Omega)} \le c\,h^2\,\|S(\bar u)\|_{H^2(\Omega)}$ which is a consequence of the Aubin-Nitsche trick.
With the triangle inequality we conclude the desired estimate for the discrete
control--to--adjoint operator.
\end{proof}
To obtain an optimal error estimate for the second term we need an additional assumption
which is used in all contributions studying the postprocessing approach.
To this end,
define the subsets $\mathcal K_2:=\bigcup\{\bar E\colon E\in\mathcal E_h,\ E\subset\mathcal A\ \mbox{or}\ E\subset\mathcal I\}$ and $\mathcal K_1:=\Gamma\setminus \mathcal K_2$.
In the following we will assume that $\mathcal K_1$ satisfies
\begin{equation}\label{eq:assumption}
|\mathcal K_1|\le c\,h.
\end{equation}
The idea of this assumption is that the control can only switch between the active and inactive sets on
$\mathcal K_1$. Only at these switching points is the regularity of the control reduced,
see also Lemma \ref{lem:regularity_improved}.
One can in general expect that this happens at finitely many points only,
and thus the assumption \eqref{eq:assumption} is not very restrictive.
As an intermediate result required to prove estimates for $Z_h(\bar u)-Z_h(R_h\bar u)$ in
$L^2(\Gamma)$, we need an estimate for $S_h(\bar u) - S_h(R_h\bar u)$ in $L^2(\Omega)$.
\begin{lemma}\label{lem:postprocessing_second_aux}
For all $q<\infty$ there holds the estimate
\begin{equation*}
\|S_h(\bar u) - S_h(R_h \bar u)\|_{L^2(\Omega)} \le
c\,h^{2-2/q}\left(1 + \|\bar u\|_{W^{1,q}(\Gamma)} + \|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\right).
\end{equation*}
\end{lemma}
\begin{proof}
To shorten the notation we write $e_h:= S_h(\bar u) - S_h(R_h \bar u)$.
Moreover, we introduce the function $w\in H^1(\Omega)$ solving the equation
\begin{equation}\label{eq:dual_problem}
a_{\bar u}(v,w) = (e_h,v)_{L^2(\Omega)}\qquad\forall v\in H^1(\Omega).
\end{equation}
This implies
\begin{equation}\label{eq:superapprox_omega_begin}
\|e_h\|_{L^2(\Omega)}^2 = a_{\bar u}(e_h,w-\Pi_h w) + a_{\bar u}(e_h,\Pi_h w).
\end{equation}
Next, we discuss both terms on the right-hand side separately.
The first one is treated with the Cauchy-Schwarz inequality and
the interpolation error estimate \eqref{eq:int_error}.
These arguments lead to
\begin{equation*}
a_{\bar u}(e_h,w-\Pi_h w) \le c\,h\,\|e_h\|_{H^1(\Omega)}\,\|e_h\|_{L^2(\Omega)}.
\end{equation*}
The $H^1(\Omega)$-norm of $e_h$ is further estimated by the Lipschitz property from
Lemma~\ref{lem:Sh_lipschitz} and an interpolation error estimate for the midpoint interpolant. This yields
\begin{equation*}
\|e_h\|_{H^1(\Omega)} \le c\,\|\bar u - R_h\bar u\|_{L^2(\Gamma)} \le c\,h\,\|\bar u\|_{W^{1,q}(\Gamma)}
\end{equation*}
for all $q\ge 2$.
Insertion into the estimate above taking into account the stability estimates
from Lemma \ref{lem:stability_ShZh} yields
\begin{equation}\label{eq:superapprox_omega_first}
a_{\bar u}(e_h,w-\Pi_h w) \le c h^{2} \|\bar u\|_{W^{1,q}(\Gamma)} \|e_h\|_{L^2(\Omega)}.
\end{equation}
Next, we consider the second term on the right-hand side of \eqref{eq:superapprox_omega_begin}.
After a reformulation by means of the definition of $S_h$ we get
\begin{align}\label{eq:superapprox_omega_second_0}
a_{\bar u}(e_h,\Pi_h w)
&= a_{\bar u} (S_h(\bar u),\Pi_h w) - a_{\bar u}(S_h(R_h\bar u), \Pi_h w) \nonumber\\
&= a_{R_h\bar u} (S_h(R_h \bar u),\Pi_h w) - a_{\bar u}(S_h(R_h\bar u), \Pi_h w)\nonumber\\
&= ((R_h\bar u - \bar u)\,S_h(R_h \bar u),\Pi_h w)_{L^2(\Gamma)}.
\end{align}
We can further estimate this term with the interpolation error estimate
from Lemma \ref{lem:int_error_Rh}
\begin{align}\label{eq:superapprox_omega_second_1}
&a_{\bar u}(e_h,\Pi_h w)
\le c\,h^2\,\|\bar u\|_{H^1(\Gamma)}\,\|S_h(R_h\bar u)\,\Pi_hw\|_{H^1(\Gamma)} \nonumber\\
&\qquad + \|S_h(R_h\bar u)\|_{L^\infty(\Gamma)}\,\|\Pi_h
w\|_{L^\infty(\Gamma)}\,\sum_{E\in\mathcal E_h} |\int_E(\bar u-R_h\bar
u)| \nonumber\\
&\le c\,\Big(h^2 + \sum_{E\in\mathcal E_h} |\int_E(\bar u-R_h\bar
u)|\Big)\left(1+\|\bar u\|_{H^1(\Gamma)}\right)
\|S_h(R_h\bar u)\|_{H^1(\Gamma)}\, \|\Pi_h w\|_{H^1(\Gamma)}.
\end{align}
The last step follows from the embedding $H^1(\Gamma)\hookrightarrow
L^\infty(\Gamma)$ and the multiplication rule
$\|u\,v\|_{H^1(\Gamma)} \le c\,\|u\|_{H^1(\Omega)}\,\|v\|_{H^1(\Gamma)}$,
see \cite[Theorem 1.4.4.2]{Gri85}. Both properties are fulfilled only in the case $n=2$.
Let us discuss the terms on the right-hand side separately.
For elements $E\subset\mathcal K_1$ we can exploit the assumption~\eqref{eq:assumption}
which provides the estimate $\sum_{E\subset\mathcal K_1} |E| \le c\,h$ and the second
interpolation error
estimate from Lemma~\ref{lem:local_est_midpoint} to arrive at
\begin{equation*}
\sum_{\genfrac{}{}{0pt}{}{E\in\mathcal E_h}{E\subset\mathcal K_1}} |\int_E(\bar u-R_h\bar u)|
\le c\,h\,\|\bar u-R_h\bar u\|_{L^\infty(\Gamma)}
\le c\,h^{2-1/q}\,\|\nabla \bar u\|_{L^q(\Gamma)}.
\end{equation*}
Moreover, with the first estimate from Lemma~\ref{lem:local_est_midpoint} and the discrete
Cauchy-Schwarz inequality we obtain for elements
in $\mathcal K_2$ the estimate
\begin{align*}
\sum_{\genfrac{}{}{0pt}{}{E\in\mathcal E_h}{E\subset\mathcal K_2}} |\int_E(\bar u-R_h\bar u)|
&\le c\,h^{5/2-1/q}\,\|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\,
\Big(\sum_{\genfrac{}{}{0pt}{}{E\in\mathcal E_h}{E\subset\mathcal K_2}}1\Big)^{1/2} \\
&\le c\,h^{2-1/q}\,\|\bar u\|_{H^{2-1/q}(\mathcal K_2)}.
\end{align*}
The remaining terms on the right-hand side of
\eqref{eq:superapprox_omega_second_1}
can be treated with stability estimates for $S_h$ (see
Lemma~\ref{lem:stability_ShZh}) and $R_h$,
the estimate $\|\Pi_h w\|_{H^1(\Gamma)}\le c\,\|w\|_{H^1(\Gamma)}$
stated in \eqref{eq:int_error_boundary} and the a~priori estimate
$\|w\|_{H^1(\Gamma)} \le c\,\|e_h\|_{L^2(\Omega)}$ from Lemma \ref{lem:props_S}a).
Insertion of the previous estimates into \eqref{eq:superapprox_omega_second_1}
yields
\begin{equation}\label{eq:superapprox_omega_second}
a_{\bar u}(e_h,\Pi_h w) \le c\,h^{2-1/q}\left(1+\|\bar u\|_{W^{1,q}(\Gamma)} + \|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\right)\|e_h\|_{L^2(\Omega)}.
\end{equation}
Note that we hide the dependency on lower-order norms of $\bar u$ in the generic constant
as these quantities may be estimated by means of the control bounds $u_a$ and $u_b$.
Insertion of \eqref{eq:superapprox_omega_first} and \eqref{eq:superapprox_omega_second}
into \eqref{eq:superapprox_omega_begin} and dividing by $\|e_h\|_{L^2(\Omega)}$
implies the assertion.
\end{proof}
\begin{lemma}\label{lem:postprocessing_second}
Under the assumption \eqref{eq:assumption} the estimates
\begin{align*}
\|S_h(\bar u) - S_h(R_h \bar u)\|_{L^2(\Gamma)} &\le c\,h^{2-1/q}\, \left(1 + \|\bar u\|_{H^{2-1/q}(\mathcal K_2)} + \|\bar u\|_{W^{1,q}(\Gamma)}\right), \\
\|Z_h(\bar u) - Z_h(R_h \bar u)\|_{L^2(\Gamma)} &\le c\,h^{2-1/q}\, \left(1 + \|\bar u\|_{H^{2-1/q}(\mathcal K_2)} + \|\bar u\|_{W^{1,q}(\Gamma)}\right)
\end{align*}
are valid for arbitrary $q\in[2,\infty)$.
\end{lemma}
\begin{proof}
We will only prove the second estimate as the first one follows from the same technique
and is even easier as the right-hand sides of the equations defining $S_h(\bar u)$ and $S_h(R_h\bar u)$ coincide. This is not the case for the control--to--adjoint operator.
To shorten the notation we write $e_h:=Z_h(\bar u) - Z_h(R_h \bar u)$.
As in the previous lemma we rewrite the error by a duality argument using a dual problem
similar to \eqref{eq:dual_problem} with solution $w\in H^1(\Omega)$, more precisely,
\begin{equation*}
a_{\bar u}(v,w) = (e_h,v)_{L^2(\Gamma)}\qquad \forall v\in H^1(\Omega).
\end{equation*}
This yields
\begin{equation}\label{eq:superapprox_begin}
\|e_h\|_{L^2(\Gamma)}^2 = a_{\bar u}(e_h,w-\Pi_h w) + a_{\bar u}(e_h, \Pi_h w).
\end{equation}
We rewrite the second expression in \eqref{eq:superapprox_begin} and obtain,
analogously to \eqref{eq:superapprox_omega_second_0},
\begin{align}\label{eq:superapprox_first}
&a_{\bar u}(e_h,\Pi_h w)\nonumber\\
&\quad= a_{\bar u}(Z_h(\bar u), \Pi_h w) \pm a_{R_h\bar u}(Z_h(R_h \bar u), \Pi_h w)
- a_{\bar u}(Z_h(R_h \bar u), \Pi_h w)\nonumber\\
&\quad = (S_h(\bar u) - S_h(R_h\bar u), \Pi_h w)_{L^2(\Omega)}
+ ((R_h \bar u - \bar u)\, Z_h(R_h \bar u), \Pi_h w)_{L^2(\Gamma)}.
\end{align}
Note that the first term would not appear when deriving estimates for $S_h$ instead of $Z_h$ as
the equations defining $S_h(\bar u)$ and $S_h(R_h \bar u)$ have the same right-hand side.
The first term can be treated with the Cauchy-Schwarz inequality,
Lemma~\ref{lem:postprocessing_second_aux} and the estimate
$\|\Pi_h w\|_{L^2(\Omega)} \le c\,\|w\|_{H^1(\Omega)}\le
c\,\|e_h\|_{L^2(\Gamma)}$ which can be deduced from
\eqref{eq:int_error} and Lemma~\ref{lem:lax_milgram} with $g=e_h$.
These ideas lead to
\begin{align}\label{eq:superapprox_Zhw1}
&(S_h(\bar u) - S_h(R_h\bar u), \Pi_h w)_{L^2(\Omega)} \nonumber\\
&\qquad \le c\,h^{2-1/q}\, \left(1 + \|\bar u\|_{W^{1,q}(\Gamma)} + \|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\right)\,\|e_h\|_{L^2(\Gamma)}.
\end{align}
For the second term on the right-hand side of \eqref{eq:superapprox_first}
we apply the same steps as for \eqref{eq:superapprox_omega_second}
with the only modification that the a priori estimate
$\|w\|_{H^1(\Gamma)} \le c \|e_h\|_{L^2(\Gamma)}$ from
Lemma~\ref{lem:props_S}a) has to be employed. From this we infer
\begin{align}\label{eq:superapprox_decomp}
& ((R_h \bar u - \bar u)\,Z_h(R_h \bar u), \Pi_h w)_{L^2(\Gamma)}
\nonumber\\
&\quad \le c\, h^{2-1/q}\left(1+\|\bar u\|_{W^{1,q}(\Gamma)}
+ \|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\right)
\|Z_h(R_h \bar u)\|_{H^1(\Gamma)}\,\|\Pi_h w\|_{H^1(\Gamma)}\nonumber\\
&\quad \le c\, h^{2-1/q}\left(1+\|\bar u\|_{W^{1,q}(\Gamma)}
+ \|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\right)
\|e_h\|_{L^2(\Gamma)}.
\end{align}
In the last step we used the boundedness of $Z_h(R_h \bar u)$, see Lemma \ref{lem:stability_ShZh}.
Insertion of \eqref{eq:superapprox_Zhw1} and \eqref{eq:superapprox_decomp}
into \eqref{eq:superapprox_first}
leads to
\begin{equation}\label{eq:superapprox_Zhw}
a_{\bar u}(e_h,\Pi_h w) \le c\,h^{2-1/q}\left(1 + \|\bar u\|_{W^{1,q}(\Gamma)}+ \|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\right) \|e_h\|_{L^2(\Gamma)}.
\end{equation}
It remains to discuss the first term on the right-hand side of \eqref{eq:superapprox_begin}.
We obtain with the boundedness of $a_{\bar u}$, the interpolation error estimate
\eqref{eq:int_error} and Lemma~\ref{lem:props_S}a)
\begin{equation}\label{eq:superapprox_third}
a_{\bar u}(e_h,w-\Pi_h w) \le c\,h^{1/2}\,\|e_h\|_{H^1(\Omega)}\, \|w\|_{H^{3/2}(\Omega)}
\le\,c\,h^{1/2}\,\|e_h\|_{H^1(\Omega)}\,\|e_h\|_{L^2(\Gamma)}.
\end{equation}
An estimate for the expression $\|e_h\|_{H^1(\Omega)}$ follows from the equality
\begin{equation*}
\|e_h\|_{H^1(\Omega)}^2 + (\bar u\,Z_h(\bar u) - R_h \bar u\, Z_h(R_h \bar u), e_h)_{L^2(\Gamma)} = (S_h(\bar u) - S_h(R_h\bar u), e_h)_{L^2(\Omega)}
\end{equation*}
which can be deduced by subtracting the equations for $Z_h(\bar u)$ and
$Z_h(R_h \bar u)$ from each other.
Rearranging the terms yields
\begin{align*}
\|e_h\|_{H^1(\Omega)}^2 &\le ((R_h \bar u - \bar u)\,Z_h(R_h \bar u),
e_h)_{L^2(\Gamma)} \\
&- (\bar u \,e_h,e_h)_{L^2(\Gamma)} + \|S_h(\bar u) - S_h(R_h\bar u)\|_{L^2(\Omega)} \|e_h\|_{L^2(\Omega)}.
\end{align*}
The second term on the right-hand side can be bounded by zero as $\bar u\ge 0$.
An estimate for the last term is proved in Lemma \ref{lem:postprocessing_second_aux}.
For the first term we apply the estimate \eqref{eq:superapprox_decomp} with
$\Pi_h w$ replaced by $e_h$. All together, we obtain
\begin{equation}\label{eq:error_eh_H1}
\|e_h\|_{H^1(\Omega)}^2 \le c\, h^{2-1/q}\left(1 + \|\bar
u\|_{W^{1,q}(\Gamma)} + \|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\right)
\left(\|e_h\|_{H^1(\Gamma)} + \|e_h\|_{H^1(\Omega)}\right).
\end{equation}
Moreover, with an inverse inequality and a trace theorem we get
\begin{equation*}
\|e_h\|_{H^1(\Gamma)} \le c\,h^{-1/2}\,\|e_h\|_{H^{1/2}(\Gamma)} \le c\,h^{-1/2}\,\|e_h\|_{H^1(\Omega)}.
\end{equation*}
Consequently, we deduce from \eqref{eq:error_eh_H1}
\begin{equation*}
\|e_h\|_{H^1(\Omega)} \le c\,h^{3/2-1/q}\left(1 + \|\bar
u\|_{W^{1,q}(\Gamma)} + \|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\right).
\end{equation*}
Insertion into \eqref{eq:superapprox_third} leads to
\begin{equation*}
a_{\bar u}(e_h,w-\Pi_h w) \le c\,h^{2-1/q}\,\left(1 + \|\bar
u\|_{W^{1,q}(\Gamma)} + \|\bar u\|_{H^{2-1/q}(\mathcal K_2)}\right)\|e_h\|_{L^2(\Gamma)}.
\end{equation*}
Together with \eqref{eq:superapprox_Zhw} and \eqref{eq:superapprox_begin} we conclude the
desired estimate for $Z_h$.
\end{proof}
\begin{lemma}\label{lem:postprocessing_third}
Under the assumption \eqref{eq:assumption} there holds the estimate
\begin{equation*}
\|R_h \bar u - \bar u_h\|_{L^2(\Gamma)} \le c\,h^{2-2/q}\,\lvert\ln h\rvert
\end{equation*}
with \[c = c\left(\|\bar u\|_{H^{2-1/q}(\mathcal K_2)},\|\bar u\|_{W^{1,q}(\Gamma)},
\|\bar y\|_{W^{2,q}(\Omega)},\|\bar p\|_{W^{2,q}(\Omega)}\right).\]
\end{lemma}
\begin{proof}
We observe that each function $\xi:=t\, R_h \bar u + (1-t)\, \bar u_h$ for
$t\in [0,1]$ satisfies
\begin{equation*}
\|\bar u - \xi\|_{L^2(\Gamma)} \le t \|\bar u - R_h\bar u\|_{L^2(\Gamma)} + (1-t)\|\bar u - \bar u_h\|_{L^2(\Gamma)} < \varepsilon,
\end{equation*}
for arbitrary $\varepsilon>0$ provided that $h$ is sufficiently small.
This follows from the convergence of the midpoint interpolant, see Lemma \ref{lem:local_est_midpoint}, and the
convergence of $\bar u_h$ towards $\bar u$, see Theorem \ref{thm:convergence_full_disc}.
Hence, with the coercivity of $j_h''$
proved in Lemma \ref{lem:discrete_coerc} and the mean value theorem we conclude
\begin{align*}
\frac\delta4\,\|R_h \bar u - \bar u_h\|_{L^2(\Gamma)}^2
&\le j_h''(\xi)(R_h \bar u - \bar u_h)^2 \\
&= j_h'(R_h \bar u)(R_h \bar u - \bar u_h) - j_h'(\bar u_h)(R_h \bar u - \bar u_h).
\end{align*}
For the latter term we exploit the discrete optimality condition and the fact that
the continuous optimality condition holds even pointwise. This implies the inequality
\begin{equation*}
j_h'(\bar u_h)(R_h\bar u - \bar u_h) \ge 0 \ge (\alpha\,R_h \bar u -
R_h(\bar y\,\bar p),R_h \bar u - \bar u_h)_{L^2(\Gamma)}.
\end{equation*}
Insertion into the estimate above implies
\begin{align}\label{eq:supercloseness_begin}
&\frac\delta4\|R_h \bar u - \bar u_h\|_{L^2(\Gamma)}^2 \nonumber\\
&\quad \le \left(R_h(\bar y\,\bar p)-\bar y\,\bar p
+\bar y\,\bar p -S_h(R_h \bar u)\,Z_h(R_h \bar u), R_h \bar u - \bar u_h\right)_{L^2(\Gamma)}.
\end{align}
The right-hand side can be decomposed into two parts.
With appropriate intermediate functions we obtain for the latter one
\begin{align*}
&(\bar y\, \bar p - S_h(R_h \bar u)\,Z_h(R_h \bar u), R_h \bar u - \bar u_h)_{L^2(\Gamma)} \\
&\quad = ((\bar y-S_h(R_h\bar u))\, \bar p + S_h(R_h\bar u)\,(\bar p-Z_h(R_h\bar u)), R_h\bar u-\bar u_h)_{L^2(\Gamma)} \\
&\quad \le c\,\big(\|\bar y-S_h(R_h\bar u)\|_{L^2(\Gamma)}\,\|\bar p\|_{L^\infty(\Gamma)} \\
&\qquad + \|\bar p-Z_h(R_h\bar u)\|_{L^2(\Gamma)}\, \|S_h(R_h\bar u)\|_{L^\infty(\Gamma)}
\big)\,\|R_h\bar u-\bar u_h\|_{L^2(\Gamma)}.
\end{align*}
Moreover, we apply the triangle inequality and the estimates from
Lemmata \ref{lem:postprocessing_first} and \ref{lem:postprocessing_second}
to deduce
\begin{align*}
\|\bar y-S_h(R_h\bar u)\|_{L^2(\Gamma)}
&\le \|\bar y-S_h(\bar u)\|_{L^2(\Gamma)} + \|S_h(\bar u)-S_h(R_h\bar u) \|_{L^2(\Gamma)} \\
&\le c\,h^{2-2/q}\,\lvert\ln h\rvert.
\end{align*}
Analogously, one can derive an estimate for the term $\|\bar p - Z_h(R_h\bar u)\|_{L^2(\Gamma)}$.
Moreover, we apply Lemmata \ref{lem:stability_SZ} and \ref{lem:stability_ShZh} to bound the norms
of $\bar p=Z(\bar u)$ and $S_h(R_h \bar u)$, respectively.
All together we obtain the estimate
\begin{align}\label{eq:supercloseness_first}
&(\bar y\,\bar p-S_h(R_h \bar u)\,Z_h(R_h \bar u), R_h \bar u - \bar
u_h)_\Gamma
\le c\,h^{2-2/q}\,\lvert\ln h\rvert\,\|R_h\bar u-\bar u_h\|_{L^2(\Gamma)}.
\end{align}
Next we discuss that part of \eqref{eq:supercloseness_begin} which involves
the term $R_h(\bar y\,\bar p)-\bar y\,\bar p$ in the first argument.
With an application of the local estimate from Lemma \ref{lem:int_error_Rh}
we obtain
\begin{align}\label{eq:supercloseness_second}
(R_h (\bar y\,\bar p) - \bar y\,\bar p, R_h \bar u - \bar u_h)_{L^2(\Gamma)}
& = \sum_{E\in\mathcal E_h} [R_h\bar u - \bar u_h]_E\int_E
\left(\bar y\,\bar p-R_h (\bar y\,\bar p)\right) \nonumber\\
&\le c\,h^{2-1/q}\,\|R_h\bar u- \bar u_h\|_{L^2(\Gamma)}\,\|\bar y\,\bar p\|_{H^{2-1/q}(\Gamma)}.
\end{align}
With \cite[Theorem 1.4.4.2]{Gri85} and a trace theorem we conclude
\begin{align*}
\|\bar y\,\bar p\|_{H^{2-1/q}(\Gamma)}
\le c\,\|\bar y\|_{W^{2-1/q,q}(\Gamma)}\,\|\bar p\|_{W^{2-1/q,q}(\Gamma)}
\le c\,\|\bar y\|_{W^{2,q}(\Omega)}\,\|\bar p\|_{W^{2,q}(\Omega)}.
\end{align*}
Insertion of the estimates \eqref{eq:supercloseness_first} and \eqref{eq:supercloseness_second}
into \eqref{eq:supercloseness_begin}, and dividing the resulting estimate by
$\|R_h \bar u - \bar u_h\|_{L^2(\Gamma)}$, leads to the desired result.
\end{proof}
Now we are in the position to state the main result of this section.
\begin{theorem}\label{thm:postprocessing}
Let $(\bar y,\bar u,\bar p)$ be a local solution of \eqref{eq:opt_cond}
satisfying the assumption \eqref{eq:assumption}. Moreover, let $\{\bar u_h\}_{h>0}$ be a sequence of
local solutions of \eqref{eq:opt_cond_discrete} such that for sufficiently small $\varepsilon,h_0>0$
the property
\[\|\bar u - \bar u_h\|_{L^2(\Gamma)} < \varepsilon\qquad\forall h < h_0\]
holds. Then, the error estimate
\begin{equation*}
\|\bar u - \tilde u_h\|_{L^2(\Gamma)} \le c\,h^{2-2/q}\,\lvert\ln h\rvert
end{equation*}
is satisfied, where
$c=c(\|\bar u\|_{W^{1,q}(\Gamma)},\|\bar u\|_{H^{2-1/q}(\mathcal K_2)},\|\bar y\|_{W^{2,q}(\Omega)}, \|\bar p\|_{W^{2,q}(\Omega)})$.
\end{theorem}
\begin{proof}
With the projection formulas \eqref{eq:proj_formula} and \eqref{eq:def_postprocessing}, respectively, the non-expansivity of the operator $\Pi_{ad}$ and the triangle inequality we obtain
\begin{align*}
\|\bar u - \tilde u_h\|_{L^2(\Gamma)}
&\le c\,\|\Pi_{ad} \left(\frac1\alpha\,\bar y\, \bar p\right)
- \Pi_{ad} \left(\frac1\alpha\,\bar y_h\, \bar p_h\right)\|_{L^2(\Gamma)} \\
&\le \frac{c}\alpha\,\left(\|\bar y - \bar y_h\|_{L^2(\Gamma)}\,\|\bar p\|_{L^\infty(\Omega)}
+ \|\bar y_h\|_{L^\infty(\Omega)}\,\|\bar p - \bar p_h\|_{L^2(\Gamma)}\right).
\end{align*}
The assertion follows after insertion of \eqref{eq:postprocessing_basic} together with
the estimates obtained in Lemmata \ref{lem:postprocessing_first}, \ref{lem:postprocessing_second}
and \ref{lem:postprocessing_third}, as well as the stability estimates of $Z$ and $S_h$
from Lemmata \ref{lem:props_S} and \ref{lem:stability_ShZh}, respectively.
\end{proof}
\section{Numerical experiments}\label{sec:experiments}
It is the purpose of this last section to confirm the theoretical results
by numerical experiments. To this end, we reformulate the discrete optimality condition
\eqref{eq:deriv_jh} and use the equivalent projection formula
\begin{equation}\label{eq:proj_formula_discrete}
u_h = \Pi_{ad}\left(\frac1\alpha R_h^{\text{Simp}}(S_h(u_h)\,Z_h(u_h))\right).
\end{equation}
Here, $R_h^{\text{Simp}}\colon C(\Gamma)\to U_h$ is a projection operator based on Simpson's rule,
that is,
\begin{equation*}
[R_h^{\text{Simp}}(v)]_{E} = \frac16\left(v(x_{E_1}) + 4v(x_{E}) + v(x_{E_2})\right),
\end{equation*}
where $x_{E_1}$ and $x_{E_2}$ are the endpoints of the boundary edge $E\in\mathcal E_h$
and $x_E$ its midpoint.
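The elementwise Simpson-rule value can be sketched as follows, with each boundary edge represented by its two endpoints and its midpoint (the function and data names are illustrative):

```python
def simpson_projection(v, edges):
    """Elementwise Simpson-rule projection R_h^Simp: for each boundary
    edge given as (endpoint1, midpoint, endpoint2), return the value
    (v(x_E1) + 4*v(x_E) + v(x_E2)) / 6."""
    return [(v(x1) + 4.0 * v(xm) + v(x2)) / 6.0 for (x1, xm, x2) in edges]
```

Since Simpson's rule is exact for cubics, this reproduces the exact edge mean of low-order polynomials, e.g. the value $1/3$ for $v(x)=x^2$ on a straight edge parametrized over $[0,1]$.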
The numerical solution of \eqref{eq:proj_formula_discrete} is computed by a semismooth Newton method.
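A scalar toy analogue may illustrate the semismooth Newton idea for such projection equations: solve $u = \Pi_{[a,b]}(c - ku)$, where the clamp is differentiable away from its kinks. This is only a sketch of the principle, not the actual solver used for the experiments:

```python
def semismooth_newton(c, k, a, b, u0=0.0, tol=1e-12, maxit=50):
    """Semismooth Newton iteration for F(u) = u - clip(c - k*u, a, b) = 0,
    a scalar toy analogue of the discrete projection formula."""
    clip = lambda v: min(max(v, a), b)
    u = u0
    for _ in range(maxit):
        F = u - clip(c - k * u)
        # element of the generalized derivative: the clip has slope -k
        # where c - k*u lies strictly in (a, b), so F has slope 1 + k
        # there, and slope 1 on the active (clamped) set
        dF = 1.0 + (k if a < c - k * u < b else 0.0)
        u_next = u - F / dF
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u
```

In the inactive case the method solves the linear equation $u = c - ku$ in one step; in the active case it returns the clamped bound.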
The input data of the considered benchmark problem is chosen as follows.
The computational domain is the unit square $\Omega:=(0,1)^2$.
We define the exact Robin parameter $\tilde u$ by
\[
\tilde u(x_1,x_2):=
\begin{cases}
\max(-0.01,\ 1-30(x_2-0.5)^2), &\mbox{if}\ x_1=0, \\
-0.01, &\mbox{otherwise},
\end{cases}
\]
and use the desired state $y_d = S_h(\tilde u)$
and the right-hand side $f\equiv 0$. Moreover, the regularization
parameter $\alpha = 10^{-2}$ and the control bounds $u_a=0$, $u_b=\infty$ are used.
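The benchmark Robin parameter can be sketched as a short function; note that we take the bump to vary in the tangential coordinate $x_2$ along the edge $x_1=0$ (this reading of the formula is our assumption):

```python
def u_tilde(x1, x2):
    """Exact Robin parameter of the benchmark problem: a truncated
    parabola bump along the edge x1 = 0 (varying in the tangential
    coordinate x2), and the constant -0.01 elsewhere."""
    if x1 == 0.0:
        return max(-0.01, 1.0 - 30.0 * (x2 - 0.5) ** 2)
    return -0.01
```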
We compute the numerical solution of our benchmark problem on a sequence of meshes
starting with $\mathcal T_{h_0}$, $h_0=\sqrt{2}$,
consisting of only two right triangles. The remaining grids $\mathcal T_{h_i}$,
$i=1,2,\ldots,$ are obtained by a double bisection through the longest edge of each element
applied to the previous mesh. This guarantees
$h_i = \frac12 h_{i-1}$. In order to compute the discretization error we use the solution
on the mesh $\mathcal T_{h_{11}}$ as an approximation of the exact solution, that is,
\begin{equation*}
\|\bar u-\bar u_{h_i}\|_{L^2(\Gamma)} \approx \|\bar u_{h_{11}} - \bar u_{h_i}\|_{L^2(\Gamma)},\quad i=0,1,\ldots,10.
\end{equation*}
Analogously, we compute the error for the approximation obtained by the postprocessing strategy.
However, in this case the exact solution is approximated by
$\bar u\approx \Pi_{ad}(\frac1\alpha \bar y_{h_{11}}\,\bar p_{h_{11}})$. The error norms
$\|\Pi_{ad}(\frac1\alpha \bar y_{h_{11}}\,\bar p_{h_{11}}) -
\Pi_{ad}(\frac1\alpha \bar y_{h_{i}}\,\bar p_{h_{i}})\|_{L^2(\Gamma)}$, $i=0,\ldots,11$, are computed element-wise by the Simpson quadrature formula
with the modification that elements $E$ are split at those points where
$\bar y_{h_i}\,\bar p_{h_i}$ or $\bar y_{h_{11}}\,\bar p_{h_{11}}$ changes sign.
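The splitting step can be sketched in one dimension: before applying Simpson's rule on an edge, locate a sign change of the product by bisection and integrate the two sub-intervals separately (all names are illustrative):

```python
def simpson(f, a, b):
    """Simpson's rule on the interval [a, b]."""
    m = 0.5 * (a + b)
    return (b - a) / 6.0 * (f(a) + 4.0 * f(m) + f(b))

def integrate_split_at_sign_change(f, g, a, b, tol=1e-12):
    """Integrate f over [a, b] by Simpson's rule; if g changes sign on
    the interval, split at the bisection-approximated zero of g first."""
    if g(a) * g(b) < 0.0:
        lo, hi = a, b
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if g(a) * g(mid) < 0.0:
                hi = mid
            else:
                lo = mid
        s = 0.5 * (lo + hi)
        return simpson(f, a, s) + simpson(f, s, b)
    return simpson(f, a, b)
```

For $f=|g|$ with $g(x)=x-0.3$ on $[0,1]$, the split quadrature is exact (the value $0.29$), while plain Simpson's rule over the kink gives $0.3$.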
\begin{figure}
\begin{center}
\includegraphics[width=.8\textwidth]{control_state-crop}
\end{center}
\caption{Optimal state (surface) and optimal control (boundary curve) for
the benchmark problem.}
\label{fig:control_state}
\end{figure}
The optimal control and the corresponding state of our benchmark problem are illustrated in
Figure~\ref{fig:control_state}, and the measured discretization errors as well as the experimentally
computed convergence rates are summarized in Table \ref{tab:experiment}.
As proven in Theorem \ref{thm:convergence_full_disc}, the numerical solutions obtained by a
full discretization using a piecewise constant control approximation converge with the optimal
convergence rate $1$. Moreover, it is confirmed that the solution obtained with the
postprocessing step, see Theorem \ref{thm:postprocessing}, converges with order $2$.
Note that we actually proved the results for the case of a smooth boundary, which does
not hold in our example. However, the corner singularities contained in the solution are
comparatively mild for a $90^\circ$ corner, so that the regularity results from Lemma
\ref{lem:regularity_improved} remain valid.
\begin{table}[htb]
\begin{center}
\begin{tabular}{lrrrr}
\toprule
\multicolumn{1}{l}{$i$} & \multicolumn{1}{l}{DOF} & \multicolumn{1}{l}{BD DOF} & \multicolumn{1}{l}{$\|u-u_{h_i}\|_{L^2(\Gamma)}$ (eoc)} & \multicolumn{1}{l}{$\|u-\tilde u_{h_i}\|_{L^2(\Gamma)}$ (eoc)} \\ \midrule
$3$ & 113 & 32 & 1.60e-2 (1.06) & 1.81e-2 (1.15) \\
$4$ & 353 & 64 & 5.81e-3 (1.46) & 4.43e-3 (2.03) \\
$5$ & 1217 & 128 & 2.56e-3 (1.18) & 1.03e-3 (2.11) \\
$6$ & 4481 & 256 & 1.24e-3 (1.05) & 1.65e-4 (2.64) \\
$7$ & 17153 & 512 & 6.17e-4 (1.00) & 7.52e-5 (1.13) \\
$8$ & 67073 & 1024 & 3.06e-4 (1.01) & 1.79e-5 (2.07) \\
$9$ & 265217 & 2048 & 1.49e-4 (1.04) & 4.31e-6 (2.05) \\
$10$ & 1054716 & 4096 & 6.67e-5 (1.16) & 8.50e-7 (2.34) \\
\bottomrule
\end{tabular}
\caption{Experimentally computed errors for the full discretization and the postprocessing approach with the corresponding experimental convergence rates (in parentheses).}
\label{tab:experiment}
\end{center}
\end{table}
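Since the mesh size is halved in each refinement, the experimental orders of convergence in Table~\ref{tab:experiment} are $\mathrm{eoc}_i=\log_2(e_{i-1}/e_i)$; a quick check against two rows of the table:

```python
import math

def eoc(errors):
    """Experimental orders of convergence for a sequence of errors on
    meshes with h_i = h_{i-1} / 2: eoc_i = log2(e_{i-1} / e_i)."""
    return [math.log2(errors[i - 1] / errors[i]) for i in range(1, len(errors))]

# errors of the full discretization for levels i = 7, 8, 9 (table values)
rates = eoc([6.17e-4, 3.06e-4, 1.49e-4])
```

Rounded to two digits this reproduces the rates $1.01$ and $1.04$ reported in the table.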
\begin{appendix}
\section{Proof of Theorem \ref{thm:fe_error_linfty}}
\label{app:maximum_norm_estimate}
The proof of the maximum norm estimate presented in Theorem \ref{thm:fe_error_linfty}
follows basically from the arguments of \cite{FR76,Sco76}.
For the convenience of the reader we want to repeat the proof
as the result of Theorem~\ref{thm:fe_error_linfty} is, for our specific situation, not
directly available in the literature. The novelty of the present proof is that
it includes curved elements as well as Robin boundary conditions.
In the aforementioned articles, a representation of
the error term based on a regularized Dirac function is used. This function forms
the right-hand side of a dual problem whose solution is an approximation of
Green's function. The main difficulty is to bound this solution in appropriate norms.
To this end, we denote by $T^*\in\mathcal T_h$ the element where $|y-y_h|$ attains its maximum.
The regularized Dirac function is defined by
$\delta^h(x) := |T^*|^{-1}\sgn(y(x)-y_h(x))$ if $x\in T^*$, and $\delta^h(x):=0$ if $x\not\in T^*$.
The corresponding Green's function denoted by $g^h$ solves the problem
\begin{equation}\label{eq:reg_green}
-\Delta g^h + g^h = \delta^h \quad\mbox{in}\quad\Omega,\qquad \partial_n g^h + u\,g^h = 0\quad\mbox{on}\quad\Gamma.
\end{equation}
The Dirac function satisfies the properties
\begin{equation}\label{eq:prop_dirac}
\|\delta^h\|_{L^1(\Omega)} \le c,\qquad \|\delta^h\|_{L^2(\Omega)} \le c\,h^{-1}.
\end{equation}
We start our considerations with some a priori estimates for the solution $g^h$.
\begin{lemma}\label{lem:prop_gh}
The following a priori estimates hold:
\begin{align*}
&(i)& \|g^h\|_{H^1(\Omega)} &\le c\,\lvert\ln h\rvert^{1/2} &(ii)&& \|g^h\|_{H^2(\Omega)} &\le c\,h^{-1} \\
&(iii)& \|g^h\|_{L^\infty(\Omega)} &\le c\,\lvert\ln h\rvert &&&&
\end{align*}
\end{lemma}
\begin{proof}
(ii) To show the estimate in the $H^2(\Omega)$-norm we apply the a priori estimate from
Lemma \ref{lem:props_S}c) and $\|\delta^h\|_{L^2(\Omega)} \le c h^{-1}$.\\[.5em]
(i) The weak form of \eqref{eq:reg_green} and the property \eqref{eq:prop_dirac} imply
\begin{align}\label{eq:H1_to_Linfty}
\gamma_u\,\|g^h\|_{H^1(\Omega)}^2
&\le a_u(g^h,g^h) = (\delta^h,g^h)_{L^2(\Omega)} \le c\,\|g^h\|_{L^\infty(\Omega)}
\nonumber\\
&\le c \left(\|g^h-g_h^h\|_{L^\infty(\Omega)} + \lvert\ln h\rvert^{1/2}\,\|g_h^h\|_{H^1(\Omega)}\right),
\end{align}
where the discrete Sobolev inequality was applied in the last step.
The function $g_h^h\in V_h$ is the Ritz-projection of $g^h$ and
satisfies the usual stability estimate
\begin{equation}\label{eq:linfty_gh}
\|g_h^h\|_{H^1(\Omega)} \le c\,\|g^h\|_{H^1(\Omega)}.
\end{equation}
Next, we derive a suboptimal error estimate for the finite-element error in the $L^\infty(\Omega)$-norm.
Using an inverse inequality, estimates for the interpolant $\Pi_h$ from \eqref{eq:int_error},
the Aubin-Nitsche trick and the a priori estimate shown already in the $H^2(\Omega)$-norm
we deduce
\begin{align}\label{eq:linfty_gh-ghh}
\|g^h-g^h_h\|_{L^\infty(\Omega)}
&\le \|g^h - \Pi_h g^h\|_{L^\infty(\Omega)}
+ c\,h^{-1}\left(\|g^h - \Pi_h g^h\|_{L^2(\Omega)} + \|g^h - g_h^h\|_{L^2(\Omega)}\right)\nonumber\\
&\le c\,h\,\|g^h\|_{H^2(\Omega)} \le c.
\end{align}
Note that we hide the dependency on $u$, or more precisely on $\|u\|_{H^{1/2}(\Gamma)}$
and lower-order norms, in the generic constant to simplify the notation.
Insertion of \eqref{eq:linfty_gh} and \eqref{eq:linfty_gh-ghh} into
\eqref{eq:H1_to_Linfty}
yields with Young's inequality
\begin{equation*}
\gamma_u\,\|g^h\|_{H^1(\Omega)}^2 \le c\,\lvert\ln h\rvert + \frac12\,\gamma_u\,\|g^h\|_{H^1(\Omega)}^2.
\end{equation*}
The desired estimate follows from a kick-back argument.\\[.5em]
(iii) The $L^\infty(\Omega)$-estimate follows directly from \eqref{eq:H1_to_Linfty},
\eqref{eq:linfty_gh} and \eqref{eq:linfty_gh-ghh} using the inequality (i).
\end{proof}
Next, we show an a priori estimate for $g^h$ in a weighted norm.
This is the key idea which allows us to bound second derivatives
by a logarithmic factor only. The weight function we will use is defined by
\begin{equation*}
\sigma(x) := \sqrt{|x-x^*|_2^2 + c\,h^2},
\end{equation*}
with $x^*:=\argmax_{x\in T^*} |y-y_h|(x)$. This function satisfies
\begin{equation}\label{eq:props_sigma}
\|\sigma^{-1}\|_{L^2(\Omega)}\le c\,\lvert\ln h\rvert^{1/2},\qquad \|\sigma^{-1}\|_{L^2(\Gamma)} \le c\,h^{-1/2}\,\lvert\ln h\rvert^{1/4}.
\end{equation}
The first property follows from a simple computation.
The second estimate is a consequence of the multiplicative trace theorem
$
\|\sigma^{-1}\|_{L^2(\Gamma)} \le c\,\|\sigma^{-1}\|_{L^2(\Omega)}^{1/2}\,\|\sigma^{-1}\|_{H^1(\Omega)}^{1/2}$, see \cite[Theorem~1.6.6]{BS02},
the chain rule $\nabla \sigma^{-1} = -\sigma^{-2} \nabla \sigma$, and the readily verified bounds
$\|\nabla\sigma\|_{L^\infty(\Omega)} \le 1$
and $\|\sigma^{-2}\|_{L^2(\Omega)} \le c\,h^{-1}$.
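For the convenience of the reader we sketch the computation behind the first bound in \eqref{eq:props_sigma}: with $R:=\operatorname{diam}\Omega$, enlarging the domain of integration to a ball around $x^*$ and passing to polar coordinates yields
\begin{equation*}
\|\sigma^{-1}\|_{L^2(\Omega)}^2 \le \int_{B_R(x^*)} \frac{\mathrm dx}{|x-x^*|_2^2 + c\,h^2}
= 2\pi\int_0^R \frac{r\,\mathrm dr}{r^2 + c\,h^2}
= \pi\,\ln\frac{R^2 + c\,h^2}{c\,h^2} \le c\,\lvert\ln h\rvert
\end{equation*}
for $h$ sufficiently small.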
\begin{lemma}
Assume that $u\in H^{1/2}(\Gamma)$. There holds the estimate
\begin{equation*}
\|\sigma\,\nabla^2 g^h \|_{L^2(\Omega)} \le c\,\lvert\ln h\rvert^{1/2}.
\end{equation*}
\end{lemma}
\begin{proof}
We introduce the functions $\xi_i:=|x_i-x_i^*|$, $i=1,2$, which allow us to write
\begin{equation}\label{eq:max_est_weighted_H2}
\|\sigma\,\nabla^2 g^h\|_{L^2(\Omega)}^2 = \sum_{i=1}^2\|\xi_i\,\nabla^2 g^h\|_{L^2(\Omega)}^2
+ c\,h^2\,\|\nabla^2 g^h\|_{L^2(\Omega)}^2.
\end{equation}
With the reverse product rule we obtain
\begin{equation}\label{eq:H2_reg_gh_1}
\|\xi_i\,\nabla^2 g^h\|_{L^2(\Omega)}^2 \le \|\nabla^2(\xi_i\,g^h)\|_{L^2(\Omega)}^2 + \|\nabla g^h \|_{L^2(\Omega)}^2.
\end{equation}
Moreover, we easily confirm that $\xi_i\,g^h$ is the solution of the problem
\begin{equation*}
\begin{aligned}
-\Delta (\xi_i\,g^h) + \xi_i\, g^h &= -2\,\frac{\partial g^h}{\partial x_i} + \xi_i\,\delta^h &\mbox{in}\ \Omega,\\
\partial_n (\xi_i\,g^h) + u\,\xi_i\,g^h &= g^h\,n_i &\mbox{on}\ \Gamma,
\end{aligned}
\end{equation*}
where $n_i$ is the $i$-th component of the outer unit normal vector on $\Gamma$.
An application of Lemma \ref{lem:props_S}c), using the property $ \|\xi_i\,\delta^h\|_{L^2(\Omega)} \le c$
(which follows from $\xi_i\le c\,h$ on $T^*$ together with \eqref{eq:prop_dirac}), leads to
\begin{align}\label{eq:max_est_H2_aux}
\|\nabla^2(\xi_i\,g^h)\|_{L^2(\Omega)}
&\le c \left(1+\|u\|_{H^{1/2}(\Gamma)}\right)
\left(1+\|\nabla g^h\|_{L^2(\Omega)}
+ \|g^h\|_{H^{1/2}(\Gamma)}\right)\nonumber\\
&\le c\left(1+\|g^h\|_{H^1(\Omega)}\right).
\end{align}
Insertion into \eqref{eq:H2_reg_gh_1} and using Lemma \ref{lem:prop_gh}(i) leads to
\begin{equation*}
\|\xi_i\,\nabla^2 g^h\|_{L^2(\Omega)} \le c\,\lvert\ln h\rvert^{1/2}, \quad i=1,2.
\end{equation*}
An estimate for the second term on the right-hand side of \eqref{eq:max_est_weighted_H2}
is derived in Lemma \ref{lem:prop_gh}(ii).
\end{proof}
Next, we derive some error estimates for the approximation $g_h^h$ in several norms.
\begin{lemma}\label{lem:error_gh-ghh}
Assume that $u\in H^{1/2}(\Gamma)$. Then the following error estimates hold:
\begin{equation*}
\begin{aligned}
h^{-1}\,\|g^h-g_h^h\|_{L^2(\Omega)} + \|\nabla(g^h-g_h^h)\|_{L^2(\Omega)}
&\le c,\\
\|\sigma\nabla^2 (g^h-g_h^h)\|_{L^2_{\textup{pw}}(\Omega)} &\le c\,\lvert\ln h\rvert^{1/2}.
\end{aligned}
\end{equation*}
\end{lemma}
\begin{proof}
The first estimate follows directly from the $H^1(\Omega)$-error estimate
stated in Lemma \ref{lem:h1_l2_error} and the Aubin-Nitsche trick.
Moreover, the a priori estimate for the $H^2(\Omega)$-norm of $g^h$
from Lemma \ref{lem:prop_gh} has to be exploited.
For the second estimate one observes that the second derivatives of the discrete function $g_h^h$ vanish
except on curved elements (note that $g_h^h$ is affine on the reference element only, but not on $T$). With the transformation result \cite[Lemma 2.3]{Ber89} we obtain
\begin{align*}
\|\nabla^2(g^h - g_h^h)\|_{L^2(T)}
&\le\,c\,|T|^{1/2}\,h^{-2}\sum_{k=0}^2 h^{4-2k}\,
\|\hat\nabla^k(\hat g^h - \hat g_h^h)\|_{L^2(\hat T)} \\
&\le c\left(\|\nabla^2 g^h\|_{L^2(T)} + h\,\|g^h - g_h^h\|_{H^1(T)}\right),
\end{align*}
where $g^h = \hat g^h\circ F_T^{-1}$, $g_h^h=\hat g_h^h\circ F_T^{-1}$.
Taking into account $\inf_{x\in T}\sigma(x) \sim \sup_{x\in T} \sigma(x)$, which holds due to the assumed shape-regularity, and $|\sigma(x)| \le c$ for all $x\in\Omega$, we obtain
\begin{equation*}
\|\sigma\,\nabla^2 (g^h-g_h^h)\|_{L^2_{\textup{pw}}(\Omega)}
\le c \left(\|\sigma\,\nabla^2 g^h\|_{L^2(\Omega)} + h\,\|g^h - g_h^h\|_{H^1(\Omega)}\right).
\end{equation*}
The first term has been discussed in the previous Lemma and the last term
has been considered in the present Lemma already.
\end{proof}
Now we are in a position to prove Theorem \ref{thm:fe_error_linfty}.\\
\begin{proof}
With an inverse inequality and the H\"older inequality, the definition of $\delta^h$
and a maximum norm estimate for the interpolant $\Pi_h$, see e.\,g.\ \cite[Theorem 4.1]{Ber89},
we obtain
\begin{align}\label{eq:max_est_start}
\|y-y_h\|_{L^\infty(\Omega)} = \|y-y_h\|_{L^\infty(T^*)}
&\le c \left(\|y-\Pi_h y\|_{L^\infty(T^*)} + |T^*|^{-1}\,\|\Pi_h y-y_h\|_{L^1(T^*)}\right)\nonumber\\
&\le c \left(\|y-\Pi_h y\|_{L^\infty(T^*)} + (\delta^h,y-y_h)_{L^2(\Omega)}\right)\nonumber\\
&\le c \left(h^{2-2/q}\,\|y\|_{W^{2,q}(\Omega)} + a_u(y-y_h,g^h)\right),
end{align}
where $g_h^h\in V_h$ denotes the Ritz projection of $g^h$.
For the latter part on the right-hand side of \eqref{eq:max_est_start}
we get with the Galerkin orthogonality, the H\"older inequality,
a trace theorem for the boundary integral term
as well as $\|u\|_{L^\infty(\Gamma)} \le c$
\begin{align}\label{eq:max_est_orthogonality}
& a_u(y-y_h,g^h) = a_u(y-\Pi_h y,g^h- g_h^h)\nonumber\\
&\quad \le c\,\|y-\Pi_hy\|_{W^{1,\infty}(\Omega)}\,\|g^h-g_h^h\|_{W^{1,1}(\Omega)}.
end{align}
An estimate for the interpolation error is deduced
in \cite{Ber89}.
The $L^1(\Omega)$-norms can be replaced by weighted $L^2(\Omega)$-norms
involving the weighting function $\sigma$. Taking into account the
properties \eqref{eq:props_sigma} we obtain
\begin{equation}\label{eq:max_est_L1_est_1}
\|g^h-g_h^h\|_{W^{1,1}(\Omega)}
\le c\,\lvert\ln h\rvert^{1/2}\, \left(\|\sigma\,\nabla (g^h-g_h^h)\|_{L^2(\Omega)}
+ \|g^h-g_h^h\|_{L^2(\Omega)}\right).
end{equation}
In the following we will show that the expressions on the right-hand side
of \eqref{eq:max_est_L1_est_1} are bounded by $c\,h\,\lvert\ln h\rvert^{1/2}$.
Therefore, we apply the reverse product rule and get
\begin{align*}
&\|\sigma\, \nabla (g^h-g_h^h)\|_{L^2(\Omega)}^2\\
&\qquad = (\nabla(\sigma^2\, (g^h -g_h^h)), \nabla(g^h-g_h^h))_{L^2(\Omega)}
- ((g^h-g_h^h)\, \nabla\sigma^2, \nabla(g^h-g_h^h))_{L^2(\Omega)}.
\end{align*}
From this we conclude
\begin{align}\label{eq:max_est_milestone}
& \Theta^2:= \|\sigma \nabla (g^h-g_h^h)\|_{L^2(\Omega)}^2 + \|g^h-g_h^h\|_{L^2(\Omega)}^2 \nonumber\\
&\qquad \le a_u(\sigma^2(g^h-g_h^h),g^h-g_h^h) - ((g^h-g_h^h) \nabla\sigma^2, \nabla(g^h-g_h^h))_{L^2(\Omega)}.
\end{align}
Here, we exploited that $(u\,\sigma^2\,(g^h-g_h^h),g^h-g_h^h)_{L^2(\Gamma)}\ge 0$
due to $u\ge u_a\ge 0$.
Next, we introduce the abbreviation $z:=\sigma^2\,(g^h-g_h^h)$. The
Galerkin orthogonality of $g^h-g_h^h$, Young's inequality and the trace
theorem taking into account $|\sigma| + |\nabla \sigma|\le c$ yield
\begin{equation*}
\|\sigma\,v\|_{L^2(\Gamma)} \le c\left(\|\sigma\,v\|_{L^2(\Omega)} +
\|\nabla(\sigma\,v)\|_{L^2(\Omega)}\right)
\le c\left(\|v\|_{L^2(\Omega)} + \|\sigma\,\nabla v\|_{L^2(\Omega)}\right)
\end{equation*}
and thus,
\begin{align}\label{eq:max_est_interpolation_start}
&a_u(\sigma^2(g^h-g_h^h),g^h-g_h^h) = a_u(z-\Pi_h z,g^h-g_h^h) \nonumber\\
&\qquad\le \frac14\, \Theta^2 + c\,\Big(\|\sigma^{-1}\,\nabla(z-\Pi_h
z)\|_{L^2(\Omega)}^2 \nonumber\\
&\qquad \phantom{\frac12 \Theta^2 + c\le}+\|\sigma^{-1}\,(z-\Pi_h
z)\|_{L^2(\Omega)}^2 +
\|\sigma^{-1}\,u\,(z-\Pi_h z)\|_{L^2(\Gamma)}^2\Big).
\end{align}
Next, we derive local interpolation error estimates.
In the following we use the notation $\underline\sigma_T:=\inf_{x\in T} \sigma(x)$ and $\overline\sigma_T :=\sup_{x\in T} \sigma(x)$. Due to the assumed shape-regularity there holds
$\underline\sigma_T\sim\overline\sigma_T$ for all $T\in \mathcal T_h$, and hence,
\begin{equation*}
\|\sigma^{-1}\,\nabla(z-\Pi_h z)\|_{L^2(T)} + h^{-1}\,\|\sigma^{-1}\,(z-\Pi_h z)\|_{L^2(T)}
\le c\,\underline\sigma_T^{-1}\, h\,
\|\sigma^2\,(g^h-g_h^h)\|_{H^2(\tilde T)},
\end{equation*}
where $\tilde T$ is the patch of all elements adjacent to $T$ (note that
$\Pi_h$ is a quasi-interpolant).
The Leibniz rule and the properties $|\nabla\sigma^2|\le\sigma$ and
$|\nabla^2 \sigma^2|\le c$ imply
\begin{equation*}
\|\sigma^2\,(g^h-g_h^h)\|_{H^2(\tilde T)} \le c\left(\|g^h-g_h^h\|_{L^2(\tilde T)}
+ \|\sigma\,\nabla(g^h-g_h^h)\|_{L^2(\tilde T)} + \|\sigma^2\,\nabla^2
(g^h-g_h^h)\|_{L^2(\tilde T)}\right).
\end{equation*}
Next, we combine the two estimates above and take into account the properties
$h\,\underline\sigma_T^{-1} \le c$ and $\overline \sigma_T\sim \overline\sigma_{\tilde T}$
which follows from the assumed quasi-uniformity.
Summation over all $T\in\mathcal T_h$
and an application of Lemma \ref{lem:error_gh-ghh} yields
\begin{align}\label{eq:max_est_interpolation_global}
&\|\sigma^{-1}\nabla(z-\Pi_h z)\|_{L^2(\Omega)} + h^{-1}\,\|\sigma^{-1}(z-\Pi_h z)\|_{L^2(\Omega)}\nonumber\\
&\quad \le c \left(\|g^h-g_h^h\|_{L^2(\Omega)} + h\,\|\nabla(g^h-g_h^h)\|_{L^2(\Omega)} + h\,\|\sigma\,\nabla^2 (g^h-g_h^h)\|_{L^2_{\textup{pw}}(\Omega)}\right) \nonumber\\
&\quad\le c\,h\,\lvert\ln h\rvert^{1/2}.
\end{align}
It remains to discuss the third term on the right-hand side of
\eqref{eq:max_est_interpolation_start}.
With interpolation error estimates for $\Pi_h$ on the boundary, compare
also \eqref{eq:int_error_boundary}, and $u\in L^\infty(\Gamma)$ we obtain
\begin{align}\label{eq:max_est_3rd_term}
\|\sigma^{-1}\,u\,(z-\Pi_h z)\|_{L^2(E)}
&\le c\,h\,\underline\sigma_E^{-1} \|\nabla z\|_{L^2(\tilde E)}\nonumber\\
&\le c\,h\left(\|g^h-g_h^h\|_{L^2(E)} +
\|\sigma\,\nabla(g^h-g_h^h)\|_{L^2(\tilde E)}\right),
\end{align}
where we exploited the product rule and the property $|\nabla\sigma^2|\le 2\,\sigma$
in the last step.
With a trace theorem and Lemma \ref{lem:error_gh-ghh} we conclude
\begin{equation*}
\|g^h-g_h^h\|_{L^2(\Gamma)} \le c\,\|g^h-g_h^h\|_{H^1(\Omega)} \le c,
end{equation*}
and with a multiplicative trace theorem, Young's inequality, the product
rule and the estimates from Lemma \ref{lem:error_gh-ghh} we obtain
\begin{align*}
\|\sigma\,\nabla(g^h-g_h^h)\|_{L^2(\Gamma)}
&\le c \left(\|\sigma\,\nabla(g^h-g_h^h)\|_{L^2(\Omega)} + \|\nabla
(\sigma\,\nabla(g^h-g_h^h))\|_{L_{\textup{pw}}^2(\Omega)} \right)\\
&\le c \left(\|\nabla(g^h-g_h^h))\|_{L^2(\Omega)} + \|\sigma\,\nabla^2(g^h-g_h^h)\|_{L_{\textup{pw}}^2(\Omega)}\right)\\
&\le c\,\lvert\ln h\rvert^{1/2}.
\end{align*}
The estimate \eqref{eq:max_est_3rd_term} then simplifies to
\begin{equation}\label{eq:max_est_3rd_term_final}
\|\sigma^{-1}\,u\,(z-\Pi_h z)\|_{L^2(\Gamma)}
\le c\,h\,\lvert\ln h\rvert^{1/2}.
\end{equation}
Insertion of \eqref{eq:max_est_interpolation_global} and \eqref{eq:max_est_3rd_term_final} into \eqref{eq:max_est_interpolation_start} leads to the estimate
\begin{equation}\label{eq:max_est_part1}
a_u(\sigma^2(g^h-g_h^h),g^h-g_h^h) \le \frac14\,\Theta^2 + c\,h^2\,\lvert\ln h\rvert.
\end{equation}
It remains to show an estimate for the second term on the right-hand side
of eqref{eq:max_est_milestone}.
Due to $|\nabla\sigma^2| \le 2\,\sigma$, Young's inequality
and the $L^2(\Omega)$-error estimate from Lemma \ref{lem:error_gh-ghh}
we get
\begin{align}\label{eq:max_est_part2}
((g^h-g_h^h) \nabla\sigma^2, \nabla(g^h-g_h^h))_{L^2(\Omega)}
&\le c\,\|g^h-g_h^h\|_{L^2(\Omega)}^2 + \frac14\,\|\sigma\,\nabla(g^h-g_h^h)\|_{L^2(\Omega)}^2\nonumber\\
&\le c\,h^2 + \frac14\,\Theta^2.
\end{align}
Insertion of \eqref{eq:max_est_part1} and \eqref{eq:max_est_part2} into
\eqref{eq:max_est_milestone}
yields
\begin{equation}\label{eq:final_est_theta}
\Theta^2 \le \frac12\,\Theta^2 + c\,h^2\,\lvert\ln h\rvert
\end{equation}
and with a kick-back argument we conclude $\Theta^2\le c\,h^2\,\lvert\ln h\rvert$.
Finally, we collect the previous estimates. To this end, we insert
\eqref{eq:final_est_theta} into \eqref{eq:max_est_L1_est_1}, the resulting estimate into
\eqref{eq:max_est_orthogonality} and this into \eqref{eq:max_est_start}.
\end{proof}
\section{Local estimates for the midpoint interpolant and the $L^2(\Gamma)$-projection}\label{app:midpoint}
To the best of the author's knowledge there are no error estimates for the midpoint interpolant defined on a curved boundary available in the literature. Thus, we prove the following Lemmata which are needed in the proof of Lemma \ref{lem:postprocessing_second}.
Consider a single boundary element $E\subset \bar T$ with corresponding element $T\in\mathcal T_h$.
A parametrization of the boundary element is given by $E:=\{\gamma_E(\xi) := F_T(\xi,0),\ \xi\in(0,1)\}$ when assuming that the edge of $\hat T$ with endpoints $(0,0)$, $(1,0)$ is mapped onto $E$.
In the following we denote the length of a boundary element $E\in\mathcal E_h$
by $L_E = \int_0^1 |\dot\gamma_E(\xi)|\,\mathrm d\xi$.
\begin{lemma}\label{lem:local_est_midpoint}
For each function $u\colon \Gamma\to\mathbb R$
there exists some piecewise constant function $R_h u\in U_h$
satisfying the local estimates
\begin{align*}
\left\vert\int_E (u - R_h u)\right\vert &\le c\,h^{5/2}\left(\|\nabla u\|_{L^2(E)} + \|\nabla^2 u\|_{L^2(E)}\right),\\
\|u-R_h u\|_{L^\infty(E)} &\le c\,h^{1-1/q}\,\|\nabla u\|_{L^q(E)},\quad
q\in (1,\infty],
\end{align*}
for all $E\in\mathcal E_h$, provided that $u$ possesses the regularity demanded by the right-hand
side.
\end{lemma}
\begin{proof}
Let us first construct a suitable interpolation operator. To obtain the desired second-order accuracy we have to guarantee that the property $\int_E p = \int_E R_h p$ holds for all functions $p(\gamma_E(\xi)) = \hat p(\xi)$ with some first-order polynomial $\hat p(\xi):=a +b\,\xi$. The transformation to $\hat E:=(0,1)\times \{0\}$ yields
\begin{equation*}
\int_E(p(x)-R_h p) \mathrm ds_x = \int_0^1 (\hat p(\xi) - \hat p(\xi_E))
\,|\dot\gamma_E(\xi)|\,\mathrm d\xi= b\,\int_0^1(\xi - \xi_E)\,|\dot\gamma_E(\xi)|\,\mathrm d\xi = 0.
\end{equation*}
The latter step holds true when choosing
\begin{equation*}
\xi_E := \frac{1}{L_E}\int_0^1 \xi\,|\dot\gamma_E(\xi)|\,\mathrm d\xi.
\end{equation*}
With this choice, we define our operator by means of $R_h u|_E:= \hat u(\xi_E)$.
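As a sanity check, note that for a straight boundary edge the parametrization is affine, hence $|\dot\gamma_E(\xi)|\equiv L_E$ is constant and
\begin{equation*}
\xi_E = \frac{1}{L_E}\int_0^1 \xi\,L_E\,\mathrm d\xi = \frac12,
\end{equation*}
so that $R_h$ reduces to the classical midpoint interpolant; the curvature of $E$ merely shifts the evaluation point.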
Obviously, the definition of $R_h$ depends on the transformations $F_T$.
To show the interpolation error estimates we apply the property $\int_E(p-R_h p)=0$ for arbitrary
$p\in\mathcal P_1$, the stability of the interpolant $\hat R_h \hat u|_E = \hat u(\xi_E)$,
the properties \eqref{eq:props_trafo} of the transformation $F_T$ and the
Bramble-Hilbert Lemma. This yields
\begin{align*}
\int_E(u(x)-R_h u)\mathrm ds_x
&= \int_0^1(I-\hat R_h)(\hat u - \hat p)(\xi)\,|\dot\gamma_E(\xi)|\,\mathrm d\xi \\
&\le c\,h\,\|\hat u - \hat p\|_{L^\infty(\hat E)}
\le c\,h\,\|\partial_{\xi\xi}\hat u\|_{L^2(\hat E)}.
\end{align*}
For the transformation back to the world element $E$ we apply the chain
rule \[\partial_{\xi\xi} \hat u(\xi) = \dot\gamma_E(\xi)^\top\, \nabla^2
u(\gamma_E(\xi)) \,\dot\gamma_E(\xi) + \ddot\gamma_E(\xi)^\top\,\nabla u(\gamma_E(\xi))
\]
and the properties \eqref{eq:props_trafo} to arrive at
\begin{align*}
\|\partial_{\xi\xi}\hat u\|_{L^2(\hat E)}
& \le c\,h^2 \left(\max_{\xi}|\dot\gamma_E(\xi)|^{-1} \sum_{|\alpha|=1}^2\int_0^1 (D^\alpha u(\gamma_E(\xi)))^2\,|\dot\gamma_E(\xi)|\,\mathrm d\xi \right)^{1/2} \\
& \le c\,h^2 \left(\min_{\xi}|\dot\gamma_E(\xi)|\right)^{-1/2} \left(\|\nabla u\|_{L^2(E)} + \|\nabla^2 u\|_{L^2(E)}\right).
\end{align*}
Finally, the norm of $\dot\gamma_E$ can be bounded by means of
\begin{equation}\label{eq:est_gamma_-1}
|\dot\gamma_E(\xi)|^{-1} = |DF_T(\xi,0) (1,\,0)^\top|^{-1} \le \left(\min_{\|x\|=1} |DF_T(\xi,0) x|\right)^{-1}
= \|DF_T(\xi,0)^{-1}\|.
\end{equation}
Note that the last step is valid for the spectral norm only.
An application of Lemma 2.2 from \cite{Ber89}, which provides $\sup_{\hat x\in \hat T}\|DF_T(\hat x)^{-1}\|\le c\,h^{-1}$, leads to the first estimate.
The second estimate follows with similar arguments. For an arbitrary constant $\hat p$ we
then obtain
\begin{equation*}
\|u-R_h u\|_{L^\infty(E)} \le \|(I-\hat R_h)(\hat u-\hat p)\|_{L^\infty(\hat E)}
\le c\, |\hat u|_{W^{1,q}(\hat E)} \le c\, h\,|E|^{-1/q}\,\|\nabla u\|_{L^q(E)}.
\end{equation*}
\end{proof}
A further operator that is needed in Section \ref{sec:optimal_control} is
the $L^2(\Gamma)$-projection onto $U_h$.
In case of curved boundaries, this operator
reads
\begin{equation*}
[Q_h v]|_E := \frac1{L_E}\int_0^1 v(\gamma_E(\xi))\,|\dot\gamma_E(\xi)|\,\mathrm d\xi
\end{equation*}
for each $E\in\mathcal E_h$.
Note that this definition implies the orthogonality property
\begin{equation}\label{eq:orthogonality}
(u-Q_h u, v_h)_{L^2(\Gamma)} = 0 \qquad\forall v_h\in U_h.
\end{equation}
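Indeed, since every $v_h\in U_h$ is constant on each boundary element, \eqref{eq:orthogonality} can be verified elementwise: by the definition of $Q_h$,
\begin{equation*}
(u-Q_h u, v_h)_{L^2(E)}
= v_h|_E\left(\int_0^1 u(\gamma_E(\xi))\,|\dot\gamma_E(\xi)|\,\mathrm d\xi - L_E\,[Q_h u]|_E\right) = 0
\end{equation*}
for all $E\in\mathcal E_h$.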
With similar arguments as in the previous Lemma we obtain the following local estimate
which is standard in case of a boundary consisting of straight edges.
\begin{lemma}\label{lem:local_est_l2_proj}
Assume that $u\in H^1(\Gamma)$. Then the estimate
\begin{equation*}
\|u-Q_h u\|_{L^2(E)} \le c\,h\,\|\nabla u\|_{L^2(E)}
end{equation*}
is fulfilled for all $E\in\mathcal E_h$.
end{lemma}
\begin{proof}
We introduce a further projection onto $U_h$, namely
$[\tilde Q_h u]|_E:=\int_0^1 u(\gamma_E(\xi))\,\mathrm d\xi$.
Using \eqref{eq:orthogonality}, the transformation to the reference element as in the previous Lemma and the Poincar\'e inequality we obtain
\begin{align*}
\|u-Q_h u\|_{L^2(E)}^2
&\le \|u-\tilde Q_h u\|_{L^2(E)}^2 \\
& = \int_0^1 \left(u(\gamma_E(\xi)) - \int_0^1 u(\gamma_E(\xi'))\,\mathrm d\xi'\right)^2
|\dot\gamma_E(\xi)|\,\mathrm d\xi \\
&\le c\,h\,\|\partial_\xi u(\gamma_E(\cdot))\|_{L^2(0,1)}^2 \le c\,h^2\,\|\nabla u\|_{L^2(E)}^2,
\end{align*}
where the last step is a consequence of the chain rule
$\partial_\xi u(\gamma_E(\xi)) = \nabla u(\gamma_E(\xi))\cdot\dot\gamma_E(\xi)$ and
$|\dot\gamma_E(\xi)|\sim h$ for all $\xi\in(0,1)$, see also eqref{eq:est_gamma_-1}.
\end{proof}
We conclude this section with an estimate for an expression which is needed in Lemma
\ref{lem:postprocessing_second}.
\begin{lemma}\label{lem:int_error_Rh}
Assume that the functions $u$ and $v$ belong to $H^1(\Gamma)$.
Then the inequality
\begin{equation*}
(u-R_h u, v)_{L^2(\Gamma)}
\le c\,h^2\,\|\nabla u\|_{L^2(\Gamma)}\,\|\nabla v\|_{L^2(\Gamma)} +
c\,\|v\|_{L^\infty(\Gamma)} \sum_{E\in\mathcal E_h} \left\vert\int_E (u-R_h u)\right\vert
\end{equation*}
is valid.
\end{lemma}
\begin{proof}
First, we split the term under consideration using the $L^2(\Gamma)$-projection onto
$U_h$ and obtain
\begin{equation*}
(u-R_h u, v)_{L^2(\Gamma)} = (u-Q_h u, v - Q_h v)_{L^2(\Gamma)} + (Q_h (u-R_h u), v)_{L^2(\Gamma)}.
\end{equation*}
The first term on the right-hand side can be treated with the local estimate from Lemma \ref{lem:local_est_l2_proj} which yields
\begin{equation*}
(u-Q_h u, v - Q_h v)_{L^2(\Gamma)}
\le c\,h^2\, \|\nabla u\|_{L^2(\Gamma)}\,\|\nabla v\|_{L^2(\Gamma)}.
\end{equation*}
For the second term we exploit the definition of $Q_h$ and $R_h$ on the reference element.
For each $E\in\mathcal E_h$ we then obtain
\begin{align*}
\|Q_h(u-R_h u)\|_{L^1(E)}
&= \int_0^1 \left\vert \frac1{L_E}\int_0^1(u(\gamma_E(\xi)) - u(\gamma_E(\xi_E)))\,
|\dot\gamma_E(\xi)|\,\mathrm d\xi\right\vert\,|\dot\gamma_E(\xi')|\,\mathrm d\xi' \\
&= \left\vert\int_0^1(u(\gamma_E(\xi))-u(\gamma_E(\xi_E)))\,|\dot\gamma_E(\xi)|\,\mathrm d\xi\right\vert
= \left\vert\int_E (u-R_h u)\right\vert,
\end{align*}
where we used $\int_0^1|\dot\gamma_E(\xi')|\,\mathrm d\xi' = L_E$ in the second step.
Consequently, we obtain
\begin{equation*}
(Q_h (u-R_h u), v)_{L^2(\Gamma)} \le c\,\|v\|_{L^\infty(\Gamma)}\sum_{E\in\mathcal E_h} \left\vert\int_E (u-R_h u)\right\vert
\end{equation*}
and conclude the assertion.
\end{proof}
\end{appendix}
\printbibliography
\end{document}
\begin{document}
\begin{abstract}
We prove that every \{finitely generated residually finite\}-by-sofic group satisfies Kaplansky's direct and stable finiteness conjectures with respect
to all noetherian rings.
We use this result to provide countably many new examples of finitely presented non-LEA groups, for which soficity is still undecided, satisfying these two conjectures.
Deligne's famous example $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ of a non-residually-finite group is among our examples, along with the families of amalgamated free products $\text{SL}_n(\mathbb Z[1/p])\ast_{\mathbb F_r}\text{SL}_n(\mathbb Z[1/p])$
and HNN extensions $\text{SL}_n(\mathbb Z[1/p])\ast_{\mathbb F_r}$, where $p>2$ is a prime, $n\geq 3$ and $\mathbb F_r$ is a free group of rank $r$, for all $r\geq 2$.
\end{abstract}
\title{Groups satisfying Kaplansky's stable finiteness conjecture}
\section{Introduction}
\noindent
In \cite[pp. 122-123]{Kap} Irving Kaplansky posed what nowadays is known as \emph{Kaplansky's direct finiteness conjecture}:
\begin{KCfields}
Given a field $\mathbb{K}$ and a group $G$, the group ring $\mathbb{K}[G]$ is directly finite. That is to say, if $x,y\in \mathbb{K}[G]$ are such that $xy=1$, then $yx=1$.
\end{KCfields}
One could look at the matrix rings $\mathrm{Mat}_{n\times n}(\mathbb{K}[G])$ and ask whether or not these rings are directly finite for all $n\in\mathbb N$. This is known as \emph{Kaplansky's stable finiteness conjecture}.
Although it might look stronger than the direct finiteness conjecture, they are in fact equivalent \cite{Vir}.
Hence in this paper, for the sake of simplicity, we restrict our arguments to direct finiteness.
Kaplansky himself proved that, given a field $\mathbb{K}$ of characteristic zero and a group $G$, the group ring $\mathbb{K}[G]$ is directly finite.
Since then progress has been made, but the conjecture is still unresolved. In \cite{AOmP}, Ara, O'Meara and Perera proved that $D[G]$ is directly finite whenever $G$ is a residually amenable
group and $D$ is a division ring. Later Elek and Szab\'{o} generalized this result to the wider class of all sofic groups \cite[Corollary 4.7]{ElSz04}.
Sofic groups were introduced in 1999 by Gromov, in an attempt to solve Gottschalk's conjecture in topological dynamics \cite{Gro}.
The class of sofic groups is far from being completely understood, and it still puzzles the experts.
In particular, it is not yet known if all groups are sofic.
Soficity is stable under many group-theoretic operations \cite{CaLu,CSC,Pes08}. At the same time, it is still unclear how this notion behaves under taking group extensions:
While it is known that a sofic-by-amenable group is again sofic \cite[Proposition 7.5.14]{CSC}, it is still an open problem whether or not finite-by-sofic,
free-by-sofic or sofic-by-sofic groups, among others, are again sofic groups.
Here we use the standard notation of an $\mathcal{A}$-by-$\mathcal{B}$ group to denote a group $G$ with a normal subgroup $N\trianglelefteq G$ such that $N\in\mathcal{A}$ and $G/N\in\mathcal{B}$, where
$\mathcal{A}$ and $\mathcal{B}$ are given classes of groups (e.g. $\mathcal{A}$ being the free groups and $\mathcal{B}$ being the sofic groups, in the case of a free-by-sofic group).
Since there is yet no known example of a group which fails to be sofic, the original Kaplansky's conjecture and the following variant are still open problems:
\begin{KCdrings}
Given a division ring $D$ and a group $G$, the group ring $D[G]$ is directly finite.
\end{KCdrings}
A major recent breakthrough has been made by Virili \cite{Vir}. He proved that any crossed product $N\ast G$ is directly finite whenever $N$ is a left-noetherian ring (respectively: right-noetherian ring)
and $G$ is a sofic group (see Section \ref{prof} for the definition and \cite{Pas,Vir} for more details about crossed products).
Group rings are basic examples of crossed products, hence we state another generalization of the original conjecture:
\begin{KCnrings}
Given a noetherian\footnote{The main concern of this work are groups, hence in what follows we focus on noetherian rings, that is, rings that are both left- and right-noetherian,
rather than specifying if the ring is left- or right-noetherian. All our statements remain true when one restricts to left-noetherian, or right-noetherian, rings.}
ring $N$ and a group $G$, the group ring $N[G]$ is directly finite.
\end{KCnrings}
As a consequence of his general result on crossed products,
Virili deduced that the group ring $N[G]$ of a \{polycyclic-by-finite\}-by-sofic group $G$ is directly finite with respect to all noetherian rings $N$ \cite[Corollary 5.4]{Vir},
and that the group ring $D[G]$
of a free-by-sofic group $G$ is directly finite with respect to all division rings $D$ \cite[Corollary 5.5]{Vir}. As mentioned above, the interest in these classes of groups arises because they are
not known to be sofic.
The aim of this paper is to prove the following Theorem, which establishes Kaplansky's direct and stable finiteness conjectures for the group ring of many groups that are not known to be sofic.
\begin{theor}\label{Kap RF-by-sofic}
Let $N$ be a noetherian ring and $G$ be a \{finitely generated residually finite\}-by-sofic group. Then $N[G]$ is directly finite and, equivalently, stably finite.
\end{theor}
As a corollary, we partially extend \cite[Corollary 5.5]{Vir} from division rings to noetherian rings.
\begin{corol}
Let $N$ be a noetherian ring and $G$ be a \{finitely generated free\}-by-sofic group. Then $N[G]$ is directly finite and, equivalently, stably finite.
\end{corol}
Moreover,
we construct countably many pairwise non-isomorphic finitely presented groups that satisfy both of these conjectures and that are not known to be sofic.
In particular, these groups are not locally embeddable into amenable groups, also called LEA groups (see Section \ref{sec.4} for definitions).
\begin{corol2}
There exists an infinite family $\{G_i\}_{i\in\mathbb N}$ of pairwise non-isomorphic finitely presented non-LEA groups.
These groups are not known to be sofic, and $D[G_i]$ is directly finite and, equivalently, stably finite, with respect to all division rings $D$.
\end{corol2}
The groups described in Corollary B are given by the HNN extensions $\text{SL}_n(\mathbb Z[1/p])\ast_{\mathbb F_r}$ and by the amalgamated free products
$\text{SL}_n(\mathbb Z[1/p])\ast_{\mathbb F_r}\text{SL}_n(\mathbb Z[1/p])$. Here $n\geq 3$ is an integer, $p>2$ is a prime number and $\mathbb F_r$ is a free subgroup of $\text{SL}_n(\mathbb Z[1/p])$ of
rank $r$, for all $r\geq2$.
See Corollary~\ref{corKaNi} and Corollary~\ref{cor2} for the precise statements and the proofs.
The paper is organized as follows:
In Section~\ref{sec.spacemarked} we define the space of marked groups, we recall some useful properties and we prove preliminary results that will lead to the proof
of the Main Theorem.
In Section~\ref{prof} we prove the Main Theorem and we give some corollaries. In Section~\ref{sec.4} we apply our result to countably many pairwise non-isomorphic groups,
which are not yet known to be sofic, and for which
we establish Kaplansky's direct finiteness and stable finiteness conjectures. Deligne's famous example $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ of a non-residually-finite group is among our interests.
\subsubsection*{Acknowledgments}
The author is very grateful to his advisor, Goulnara Arzhantseva, for inspiring questions and suggestions.
He wants to thank Simone Virili for sharing the text of his PhD Thesis, his work \cite{Vir} and for many constructive discussions over the topic. Thanks are also due to Nikolay Nikolov,
for explaining the proof of \cite[Theorem 1]{KaNi}.
\section{The space of marked groups}\label{sec.spacemarked}
In this section, we briefly discuss the space of marked groups. For more details and properties, see~\cite{Ch,ChGu,Gri84}.
We prove the following theorem:
\begin{theorem}\label{RF-by-sofic}
Let $Q$ be a group and $G$ be a finitely generated \{finitely generated residually finite\}-by-$Q$ group.
Then $G$ is the limit, in the space of marked groups, of finite-by-$Q$ groups.
\end{theorem}
We stress the following particular case:
\begin{corollary}\label{stressed.cor}
Let $G$ be a finitely generated \{finitely generated residually finite\}-by-sofic group.
Then $G$ is the limit, in the space of marked groups, of finite-by-sofic groups.
\end{corollary}
A \emph{marked group} is a pair $(G,S)$, where $G$ is a finitely generated group and $S$ is a finite sequence of elements that generate $G$. If $\lvert S\rvert=n$ then $(G,S)$ is
called an $n$-marked group. Two $n$-marked groups $(G,(s_1,\dots,s_n))$ and $(G',(s'_1,\dots,s'_n))$ are isomorphic
if the assignment $\varphi(s_i):=s'_i$ extends to an isomorphism of groups $\varphi\colon G\to G'$. In particular, two marked groups with given generating sets of different size are
never isomorphic as marked groups, although they might be isomorphic as abstract groups.
Let $\mathcal{G}_n$ denote the set of $n$-marked groups, up to isomorphism of marked groups. Then $\mathcal{G}_n$ corresponds bijectively to the set of
normal subgroups of the free group $\mathbb F_n$ on $n$ free generators.
Let $(G,S),(G',S')\in\mathcal{G}_n$ and let $N$, $N'$ be the normal subgroups of $\mathbb F_n$ such that $\mathbb F_n/N\cong G$, $\mathbb F_n/N'\cong G'$. Let $B_{\mathbb F_n}(r)$ denote the ball of radius $r$ in $\mathbb F_n$ centered
at the identity element. The function
\begin{equation}
v(N,N'):=\sup\{r\in\mathbb N\mid N\cap B_{\mathbb F_n}(r)=N'\cap B_{\mathbb F_n}(r)\}
\end{equation}
defines on $\mathcal{G}_n$ the ultrametric
\begin{equation}\label{ultrametric}
d\bigl((G,S),(G',S')\bigr):=2^{-v(N,N')}.
\end{equation}
An $n$-marked group $(G,S)$ can be viewed as an $(n+1)$-marked group by adding the trivial element $e_G$ to $S$. This defines an isometric embedding of $\mathcal{G}_n$ into $\mathcal{G}_{n+1}$.
Let $\mathcal{G}:=\bigcup_{n\in\mathbb N}\mathcal{G}_n$ be the \emph{space of marked groups}. The ultrametrics of \eqref{ultrametric} can be extended to an ultrametric
$d\colon \mathcal{G}\times\mathcal{G}\to \mathbb R_{\geq0}$, and $\bigl(\mathcal{G},d\bigr)$ is a compact totally disconnected ultrametric space~\cite{ChGu}.
The following fact is well known.
\begin{lemma}\label{char.conv}
Let $(G,S)$ and $\{(G_r,S_r)\}_{r\in\mathbb N}$ be marked groups in $\mathcal{G}_n$ and let $N, \{N_r\}_{r\in\mathbb N}$ be normal subgroups of $\mathbb F_n$ such that $\mathbb F_n/N\cong G$ and $\mathbb F_n/N_r\cong G_r$.
The following are equivalent:
\begin{enumerate}
\item the sequence $\{(G_r,S_r)\}_{r\in\mathbb N}$ converges to the point $(G,S)$ in the space $\bigl(\mathcal{G}_n,d\bigr)$;
\item for all $x\in N$ (respectively: for all $y\notin N$) there exists $\bar{r}$ such that $x\in N_r$ (respectively: $y\notin N_r$) for $r\geq\bar{r}$;
\item for all $R\in\mathbb N$ there exists $\bar{r}$ such that the Cayley graphs $\mathrm{Cay}(G,S)$ and $\mathrm{Cay}(G_r,S_r)$ have balls of radius $R$ isomorphic as labeled directed graphs, for all $r\geq\bar{r}$.
\end{enumerate}
\end{lemma}
The following easy lemma is used in the proof of Theorem~\ref{RF-by-sofic}.
\begin{lemma}\label{piccololemma}
Let $K\unlhd N\unlhd G$ be a subnormal chain of groups, where $N$ is finitely generated, and $K$ has finite index in $N$.
Then there exists a normal subgroup $\widetilde{K}\unlhd G$, contained in $K$, such that $N/\widetilde{K}$ is finite.
\begin{proof}
Since $N$ is finitely generated, for any given positive integer $d\in\mathbb N$, $N$ has only finitely many subgroups of index $d$.
Let $\widetilde{K}$ be the intersection of all subgroups $H\unlhd N$ with $[N:H]=[N:K]$. There are only finitely many such subgroups and $K$ is one of them, so $\widetilde{K}$ is a finite index subgroup of $N$ contained in $K$. Moreover, $\widetilde{K}$ is characteristic in $N$, since every automorphism of $N$ permutes the normal subgroups of a given index, and hence $\widetilde{K}$ is normal in $G$.
\end{proof}
\end{lemma}
We are now ready to prove Theorem \ref{RF-by-sofic}.
\begin{proof}[\bf Proof of Theorem \ref{RF-by-sofic}]
Let $G$ be a \{finitely generated residually finite\}-by-$Q$ group and suppose $G$ is generated by a finite set $S$.
Let $N\unlhd G$ be a finitely generated residually finite subgroup such that $G/N\cong Q$.
As $N$ is residually finite, there exists a family $\{K_r\}_{r\in\mathbb N}$ of finite index normal subgroups of $N$ such that $\bigcap_{r\in\mathbb N}K_r=\{e\}$ and $K_{r+1}\subseteq K_r$ for all
$r\in\mathbb N$.
As $N$ is finitely generated, we can assume that $K_r\unlhd G$ for all $r\in\mathbb N$ by Lemma~\ref{piccololemma}.
Consider the family of finite-by-$Q$ groups $\{G/K_r\}_{r\in\mathbb N}$.
For every $r$, let $S_r$ be the image of $S$ in $G/K_r$ under the canonical projection $\lambda_r\colon G\twoheadrightarrow G/K_r$.
We prove that the sequence $\{(G/K_r,S_r)\}_{r\in\mathbb N}$ converges to $(G,S)$ in $\mathcal{G}_{\lvert S\rvert}$ using the second condition of Lemma~\ref{char.conv}.
Let $\pi\colon\mathbb F\twoheadrightarrow G$ be the canonical surjective homomorphism from the finitely generated free group $\mathbb F$ on $\lvert S\rvert$ free generators,
and $\pi_r\colon\mathbb F\twoheadrightarrow G/K_r$. Then $\pi_r=\lambda_r\circ\pi$:
\begin{equation*}
\xymatrix{\mathbb F\ar[rr]^{\pi}\ar[rrd]_{\pi_r}&&G\ar[d]^{\lambda_r}\\
&&G/K_r
}
\end{equation*}
Set $\Lambda:=\ker\pi$ and $\Lambda_r:=\ker\pi_r$.
By definition, $\Lambda\leq\Lambda_r$ for every $r$, so, to check that the second condition of Lemma~\ref{char.conv} is satisfied,
we have only to prove that for all $y\notin\Lambda$ there exists $\bar{r}$ such that $y\notin\Lambda_r$, for $r\geq \bar{r}$.
Let $y\notin \Lambda$ and $\pi(y)=:g\in G\setminus\{e_G\}$.
Note that if $g\notin N$ then $g\notin K_r$ for all $r$, as $K_r\subseteq N$. Thus, for such $g\notin N$ we have
\begin{equation}
\pi_r(y)=\lambda_r(\pi(y))=\lambda_r(g)\neq e_{G/K_r}\qquad \forall r\in\mathbb N,
\end{equation}
that is to say, $y\notin \Lambda_r$ for all $r\in\mathbb N$.
If $g\in N\setminus\{e_G\}$, then there exists $\bar{r}$ such that $g\notin K_r$ for all $r\geq \bar{r}$, because $\bigcap_{r\in\mathbb N}K_r=\{e\}$ and because this family of normal subgroups
is totally ordered by inclusion.
In particular,
\begin{equation}
\pi_r(y)=\lambda_r(g)\neq e_{G/K_r}\qquad \forall r\geq \bar{r}.
\end{equation}
This implies that $y\notin\Lambda_r$ for $r\geq\bar{r}$, and the proof is complete.
\end{proof}
\section{Proof of the Main Theorem}\label{prof}
We first show that the assertions of Kaplansky's direct finiteness and stable finiteness conjectures are preserved under taking limits in the space of marked groups.
\begin{proposition}\label{limit}
Let $N$ be a noetherian ring and $(G,S)$ be the limit, in the space of marked groups, of the sequence $\{(G_r,S_r)\}_{r\in\mathbb N}$.
If $N[G_r]$ is directly finite (respectively: stably finite) for all $r\in\mathbb N$ then so is $N[G]$.
\begin{proof}
Suppose first that $N[G_r]$ is directly finite for all $r\in\mathbb N$, and consider two elements $x=\sum_{g\in G}k_gg$ and $y=\sum_{g\in G}h_gg$ of $N[G]$ such that $xy=1$.
We want to prove that $yx=1$.
Let $yx=\sum_{g\in G}l_gg$ and consider
$$m:=\max\{\lVert g\rVert_S\mid k_g\neq 0\text{ or }h_g\neq 0\text{ or }l_g\neq 0\},$$
where $\lVert -\rVert_S$ denotes the norm induced by the word metric on $G$ given by the finite generating set $S$. That is to say:
$$\lVert g\rVert_S:=\min\{k\mid g=s_1\dots s_k,\quad s_i\in S\cup S^{-1}\}.$$
Since the sequence $\{(G_r,S_r)\}_{r\in\mathbb N}$ converges to $(G,S)$, by Lemma~\ref{char.conv} there exists $\bar{r}$ such that, for all $r\geq \bar{r}$, $\mathrm{Cay}(G,S)$ and $\mathrm{Cay}(G_r,S_r)$ have
balls of radius $m$ isomorphic as labeled directed graphs.
Inside these balls the multiplication of $G$ agrees with that of $G_r$, so $x$ and $y$ determine elements of $N[G_r]$ satisfying $xy=1$ there; by the direct finiteness of $N[G_r]$ we get $yx=1$ in $N[G_r]$, and transferring back along the same identification of balls yields $yx=1$ in $N[G]$.
The same arguments work when $N[G_r]$ is stably finite for all $r\in\mathbb N$.
\end{proof}
\end{proposition}
\begin{proof}[\bf Proof of the Main Theorem]
First we suppose that the group in question is finitely generated, so
let $G$ be a finitely generated \{finitely generated residually finite\}-by-sofic group. By Corollary~\ref{stressed.cor}, $G$ is the limit of finite-by-sofic groups, which
satisfy Kaplansky's direct finiteness conjecture with respect to all noetherian rings \cite[Corollary 5.4]{Vir}. Hence, by Proposition~\ref{limit}, $G$ satisfies the conjecture
with respect to all noetherian rings.
If the group $G$ is not finitely generated, consider a finitely generated residually finite normal subgroup $K$ such that $G/K$ is sofic.
Then $G$ is the directed union of its finitely generated subgroups containing such $K$:
\begin{equation}\label{dir.union}
G=\bigcup\{H\mid K\leq H\leq G\text{ and }H\text{ is finitely generated}\}.
\end{equation}
These are finitely generated \{finitely generated residually finite\}-by-sofic groups, and hence satisfy
Kaplansky's direct finiteness conjecture by the first part of the proof.
Fix a noetherian ring $N$ and consider two elements $x,y\in N[G]$ such that $xy=1$.
Only finitely many group elements appear in the supports of $x$, $y$ and $yx$, so they all sit in some finitely generated subgroup $H\leq G$ appearing in the directed union in \eqref{dir.union}.
The group ring $N[H]$ is directly finite by the first part of this proof, so $yx=1$ in $N[H]$.
This implies that $yx=1$ in $N[G]$.
Thus, $G$ satisfies the conjecture.
\end{proof}
There is a variant of Lemma~\ref{piccololemma} in the case when the group $N/K$ is solvable:
if $K\unlhd N\unlhd G$ is a subnormal chain and $N/K$ is solvable, then there exists a normal subgroup $\widetilde{K}\unlhd G$, contained in $K$, such that $N/\widetilde{K}$ is solvable as well.
To the author's knowledge, the following are open questions:
\begin{question}\label{question1}
Let $N$ be a noetherian ring and $G$ be a solvable-by-sofic group. Is $N[G]$ directly finite?
\end{question}
\begin{question}
Let $N$ be a noetherian ring and $G$ be a solvable group. Does there exist a noetherian ring $N'$ such that $N[G]$ embeds into $N'$?
\end{question}
An affirmative answer to the latter question implies an affirmative answer to Question \ref{question1}.
The argument in the proof of Theorem~\ref{RF-by-sofic} shows that a finitely generated \{residually solvable\}-by-sofic group is
the limit in $\mathcal{G}$ of solvable-by-sofic groups. Hence, if Question \ref{question1} has an affirmative answer, then $N[G]$ is directly finite for every \{residually solvable\}-by-sofic group $G$ and every noetherian ring $N$.
We recall now the definition and basic facts on crossed products. They are useful in the applications of our Main Theorem.
Given a ring $R$ and a group $G$, a \emph{crossed product} $R\ast G$ of $G$ over $R$ is a ring constructed as follows.
Assign uniquely to every $g\in G$ a symbol $\bar{g}$, and let $\bar{G}$ be the collection of these symbols.
As a set,
\begin{equation*}
R\ast G:=\Bigl\{\sum_{g\in G}r_g \bar{g}\;\Bigm|\; r_g\in R,\ r_g=0\text{ for all but finitely many }g \Bigr\}.
\end{equation*}
The sum is defined component-wise,
\begin{equation*}
\Bigl(\sum_{g\in G}r_g\bar{g}\Bigr)+ \Bigl(\sum_{g\in G}s_g\bar{g}\Bigr):=\sum_{g\in G}(r_g+s_g)\bar{g}.
\end{equation*}
The product in $R\ast G$ is specified in terms of two maps
\begin{equation*}
\tau\colon G\times G\to U(R),\qquad \sigma\colon G\to \mathrm{Aut}(R),
\end{equation*}
where $U(R)$ is the group of units of $R$ and
$\mathrm{Aut}(R)$ is the group of ring automorphisms of $R$. Let $r^{\sigma(g)}$ denote the result of the action of $\sigma(g)$ on $r$. Then, for all $r\in R$ and $g,g_1,g_2,g_3\in G$,
the maps $\sigma$ and $\tau$ satisfy
\begin{equation*}
\sigma(e)=1,\qquad\tau(e,g)=\tau(g,e)=1,
\end{equation*}
and
\begin{equation*}
\tau(g_1,g_2)\tau(g_1g_2,g_3)=\tau(g_2,g_3)^{\sigma(g_1)}\tau(g_1,g_2g_3),\qquad
r^{\sigma(g_2)\sigma(g_1)}=\tau(g_1,g_2)r^{\sigma(g_1g_2)}\tau(g_1,g_2)^{-1}.
\end{equation*}
These conditions guarantee that the product
\begin{equation*}
\Bigl(\sum_{g\in G}r_g\bar{g}\Bigr)\cdot \Bigl(\sum_{g\in G}s_g\bar{g}\Bigr):=\sum_{g\in G}\Bigl(\sum_{h_1h_2=g} r_{h_1}s_{h_2}^{\sigma(h_1)}\tau(h_1,h_2)\Bigr)\bar{g}
\end{equation*}
is associative.
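As an illustration (an example of ours, not from the paper), the following Python sketch builds the crossed product of $\mathbb Z/2$ over $\mathbb R$ with trivial $\sigma$ and cocycle $\tau(1,1)=-1$; with this choice $\bar{1}\cdot\bar{1}=-\bar{0}$, so the resulting ring is isomorphic to $\mathbb C$.

```python
import itertools

# Crossed product of Z/2 over R with trivial sigma and tau(1,1) = -1.
# Elements are dicts {group element: coefficient}; bar(1) plays the role of i.
TAU = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): -1}

def multiply(x, y):
    out = {0: 0.0, 1: 0.0}
    for g, h in itertools.product((0, 1), repeat=2):
        out[(g + h) % 2] += x[g] * y[h] * TAU[(g, h)]
    return out

# The cocycle identity tau(g1,g2) tau(g1 g2, g3) = tau(g2,g3) tau(g1, g2 g3)
# (with sigma trivial) holds, which is what makes the product associative:
for g1, g2, g3 in itertools.product((0, 1), repeat=3):
    assert TAU[(g1, g2)] * TAU[((g1 + g2) % 2, g3)] == \
           TAU[(g2, g3)] * TAU[(g1, (g2 + g3) % 2)]

# (1 + 2i)(3 + 4i) = -5 + 10i, computed in the crossed product:
assert multiply({0: 1.0, 1: 2.0}, {0: 3.0, 1: 4.0}) == {0: -5.0, 1: 10.0}
```

Since $\sigma$ is trivial here, this is an instance of a twisted group ring in the terminology recalled below.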
Certain crossed products have their own specific name.
If the maps $\sigma$ and $\tau$ are trivial, the crossed product $R\ast G$ is the group ring $R[G]$.
If $\sigma$ is trivial, then $R\ast G=R^t[G]$ is a \emph{twisted group ring}, while if $\tau$ is trivial then $R\ast G=RG$ is a \emph{skew group ring}.
Given a
normal subgroup $N\unlhd G$ and a fixed crossed product $R\ast G$, we have
\begin{equation}\label{crpro1}
R\ast G=\bigl( R\ast N\bigr)\ast G/N,
\end{equation}
where the latter is
some crossed product of the group $G/N$ over the ring $R\ast N$, and $R\ast N$ is the subring of $R\ast G$ induced by the subgroup $N$ \cite[Lemma 1.3]{Pas}
(that is, the maps $\sigma$ and $\tau$ associated to the crossed product $R\ast N$ are the restrictions of the
ones associated to $R\ast G$).
In particular,
\begin{equation}\label{crpro2}
R[G]=R[N]\ast G/N.
\end{equation}
\noindent
We now state some interesting corollaries of our main result.
There exist one-relator groups which are not residually finite~\cite{BaSo62}, or not even residually solvable~\cite{Ba69}.
Whether or not all one-relator groups are sofic is a well-known open problem.
Wise recently proved that one-relator groups with torsion are residually finite~\cite{Wis}, answering a longstanding conjecture of Baumslag~\cite{Ba67}.
Combining this deep result of Wise with our Main Theorem, we obtain:
\begin{corollary}\label{1-by-sofic}
Let $D$ be a division ring and $G$ be a \{finitely generated one-relator\}-by-sofic group.
Then $D[G]$ is directly finite and, equivalently, stably finite.
\begin{proof}
Let $N\unlhd G$ be a normal subgroup of $G$ such that $G/N$ is sofic and $N$ is a finitely generated one-relator group.
If $N$ is torsion-free, then its group ring $D[N]$ embeds into a division ring $D'$ \cite{LeLe}.
By \eqref{crpro2} we have that $D[G]=D[N]\ast G/N$. Hence $D[G]$ embeds into $D'\ast G/N$, which is directly finite \cite[Corollary 5.4]{Vir}. Thus also $D[G]$ is directly finite.
If $N$ has torsion, then it is residually finite \cite{Wis}. Thus $G$ satisfies the hypotheses of the Main Theorem, and $D[G]$ is directly finite.
\end{proof}
\end{corollary}
From Corollary \ref{1-by-sofic}, in the particular case when the sofic group is trivial, we recover the fact that the group ring of a finitely generated one-relator group is directly and stably finite.
\begin{corollary}
Let $D$ be a division ring and $G$ be a finitely generated one-relator group, then $D[G]$ is directly finite and, equivalently, stably finite.
\end{corollary}
Finitely generated right-angled Artin groups are known to be residually finite. We deduce the following:
\begin{corollary}
Let $N$ be a noetherian ring and $G$ be a \{finitely generated right-angled Artin group\}-by-sofic group. Then $N[G]$ is directly finite and, equivalently, stably finite.
\end{corollary}
\begin{remark}
Here is an alternative way of proving the Main Theorem, which was suggested by Simone Virili after the first version of this paper was written.
In~\cite{Mon} it is observed that, given a noetherian ring $N$ and
$$\mathcal{C}:=\{\text{groups }G\text{ such that }N[G]\text{ is directly finite}\},$$
if a group $G$ is fully residually $\mathcal{C}$ then $G\in\mathcal{C}$.
One then argues that a \{finitely generated residually finite\}-by-sofic group is residually finite-by-sofic, which is equivalent to being fully residually finite-by-sofic.
Hence the Main Theorem follows.
\end{remark}
\section{Examples}\label{sec.4}
We now apply our Main Theorem to some concrete groups, whose (non-)soficity still intrigues the experts. We conclude that they satisfy Kaplansky's direct and stable finiteness conjectures.
Moreover, we provide countably many new explicit presentations of groups satisfying these two conjectures.
These groups are not Locally Embeddable into Amenable groups (LEA for short), and it is not known whether or not they are sofic.
A finitely generated LEA group is the limit in $\mathcal{G}$ of amenable groups. In particular, LEA groups are sofic.
There exist examples of sofic groups which are not LEA, and the class of LEA groups is the biggest known class of groups strictly contained in the class of sofic groups.
In what follows, we use the fact that a finitely presented LEA group is residually amenable~\cite[Proposition 7.3.8]{CSC}. We refer to \cite[\S 7.3]{CSC} for more information about LEA groups.
\subsection{Deligne's group}
A famous example considered by Deligne~\cite{De78} is the following.
Let $n\geq 2$ be an integer and $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ be the preimage of the symplectic group $\text{Sp}_{2n}(\mathbb Z)$ in the universal cover $\widetilde{\text{Sp}_{2n}(\mathbb R)}$ of $\text{Sp}_{2n}(\mathbb R)$.
It is known~\cite{De78} that $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ is given by the following central extension
\begin{equation}\label{central.ext}
\{e\}\longrightarrow \mathbb Z\longrightarrow \widetilde{\text{Sp}_{2n}(\mathbb Z)}\longrightarrow \text{Sp}_{2n}(\mathbb Z)\longrightarrow\{e\}.
\end{equation}
The group $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ is finitely presented as it is an extension of two finitely presented groups.
Moreover, the group is not residually
finite~\cite{De78} and it satisfies Kazhdan's property (T)~\cite[Example 1.7.13 (iii)]{BeHaVa}. This immediately implies that it is not an LEA group.
Our Main Theorem implies that $N[\widetilde{\text{Sp}_{2n}(\mathbb Z)}]$ is directly finite for all noetherian rings $N$. Indeed, $\widetilde{\text{Sp}_{2n}(\mathbb Z)}$ is \{finitely generated free\}-by-sofic, as shown by~\eqref{central.ext}.
\subsection{Finitely presented amalgamated products and HNN extensions}
From now on, if $\Gamma$ is a group then $\bar{\Gamma}$ denotes an isomorphic copy of $\Gamma$. If
$\Gamma=\langle X\mid R\rangle$ is a presentation of the group $\Gamma$, let $\bar{X}$ and $\bar{R}$ denote the same generators and relators in the isomorphic
copy $\bar\Gamma$.
Let $p>2$ be a prime number and $n\geq 3$. The group $\text{SL}_n(\mathbb Z[1/p])$ is finitely presented~\cite[Theorem 4.3.21]{HO} and has Kazhdan's property (T)~\cite{BeHaVa}.
Moreover, it satisfies the \emph{congruence subgroup property}~\cite{BaMiSe}.
This means that every finite index subgroup $H\leq \text{SL}_n(\mathbb Z[1/p])$ contains the kernel of the natural projection
\begin{equation}\label{projection}
\pi_q\colon\text{SL}_n(\mathbb Z[1/p]) \twoheadrightarrow \text{SL}_n(\mathbb Z/q\mathbb Z),
\end{equation}
for some $q$ coprime with $p$. In particular, if the finite index subgroup $H$ is normal in $\text{SL}_n(\mathbb Z[1/p])$, it follows that
\begin{equation}\label{eq.quotients}
\frac{\text{SL}_n(\mathbb Z[1/p])}{H}\cong\text{SL}_n(\mathbb Z/q\mathbb Z) \qquad\text{or}\qquad\frac{\text{SL}_n(\mathbb Z[1/p])}{H}\cong \text{PSL}_n(\mathbb Z/q\mathbb Z),
\end{equation}
for exactly one $q$ coprime with $p$.
In what follows, given an element $x\in\text{SL}_n(\mathbb Z[1/p])$ and a projection $\pi$ from $\text{SL}_n(\mathbb Z[1/p])$ onto $\text{SL}_n(\mathbb Z/q\mathbb Z)$ or $\text{PSL}_n(\mathbb Z/q\mathbb Z)$, we denote the order of $\pi(x)$ by $o_x$.
The proof of the following theorem is adapted from \cite[Theorem 1]{KaNi}, where an analogous fact is proved, but in the case when the amalgamated subgroup is infinite cyclic.
In that case, the resulting group is known to be sofic. In contrast to \cite{KaNi}, our aim is to produce non-LEA groups that are not known to be sofic and that satisfy Kaplansky's direct and
stable finiteness conjectures.
\begin{theorem}\label{theoKaNi}
Let $p>2$ be a prime number, $n\geq 3$,
let $\Gamma:=\text{SL}_n(\mathbb Z[1/p])=\langle X\mid R\rangle$. Let $\langle a,b\rangle =F\leq\Gamma$ be the subgroup generated by the matrices
\begin{equation*} a=\begin{pmatrix}
1&2&0\\ 0&1&0\\ 0&0& \mathrm{I}_{n-2}
\end{pmatrix},
\qquad b=\begin{pmatrix}
1&0&0\\ 2&1&0\\ 0&0& \mathrm{I}_{n-2}
\end{pmatrix},
\end{equation*}
where $\mathrm{I}_{n-2}$ is the identity matrix of dimension $n-2$. Then the group $$G:=\Gamma\ast_F \Gamma=\langle X,\bar{X}\mid R,\bar{R},a=\bar{a},b=\bar{b}\rangle$$ is not LEA.
\begin{proof}
The group $G$ is finitely presented. Hence it is sufficient to prove that it is not residually amenable.
Let
$$ x=\begin{pmatrix}
1&\frac{2}{p}&0\\ 0&1&0\\ 0&0&\mathrm{I}_{n-2}
\end{pmatrix}
$$
and consider the element $g=[x,\bar{x}]\in G$.
Since $x\notin F$, using normal forms for the elements of the amalgamated free product \cite[I.11]{LS}, it follows that $g\neq e_G$.
Let $\pi\colon G\twoheadrightarrow A$ be a surjective homomorphism with $A$ amenable. We claim that $\pi(g)=e_A$.
Indeed, consider the restriction $\pi\restriction_\Gamma\colon\Gamma\to \pi(\Gamma)$. The group $\pi(\Gamma)\leq A$ is amenable and moreover it is a quotient
of $\Gamma$, which is a group with Kazhdan's property $(T)$.
Hence $\pi(\Gamma)$ is finite and, in particular, $\pi(x)$ has finite order $o_x$.
The element $x$ is unipotent, so $\pi(x)$ is unipotent too.
As the group $\Gamma$ satisfies the congruence subgroup property, it follows that $\pi(x)$ is an element of some $\text{SL}_n(\mathbb Z/q\mathbb Z)$ or $\text{PSL}_n(\mathbb Z/q\mathbb Z)$, for $q$ coprime with $p$.
As $\pi(x)$ is unipotent, the order $o_x$ divides a power of $q$. Moreover $\gcd(p,q)=1$, so $\gcd(p,o_x)=1$.
As $x^p=a$, we have that $\langle \pi(a)\rangle\leq\langle \pi(x)\rangle$ and that
$$o_a=o_{x^p}=\frac{o_x}{\gcd(p, o_x)}=o_x.$$
This implies that the two finite groups $\langle \pi(a)\rangle$ and $\langle \pi(x)\rangle$ have the same cardinality, and so
$\langle \pi(a)\rangle=\langle \pi(x)\rangle$.
The same argument applies to the elements $\bar{x}$ and $\bar{a}$, so $\langle \pi(\bar{a})\rangle=\langle \pi(\bar{x})\rangle$.
As in the group $G$ we have $a=\bar{a}$, it follows that $\langle\pi(x)\rangle=\langle\pi(\bar{x})\rangle$, and so $\pi(g)=[\pi(x),\pi(\bar{x})]=e_A$. That is, the element $g$ is mapped
to the trivial element in all amenable quotients of $G$. Thus, $G$ is not residually amenable.
\end{proof}
\end{theorem}
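The order computation in the proof can be sanity-checked in a toy setting of our own choosing (unrelated to the groups above): for an element of order $n$ in a finite abelian group, $x^p$ has order $n/\gcd(p,n)$, so the order is preserved exactly when $\gcd(p,n)=1$.

```python
from math import gcd

# Toy check of the identity o_{x^p} = o_x / gcd(p, o_x), here in the
# multiplicative group of integers modulo m.

def order(a, m):
    """Multiplicative order of a modulo m; assumes gcd(a, m) == 1."""
    k, t = 1, a % m
    while t != 1:
        t = (t * a) % m
        k += 1
    return k

m, a = 100, 7
n = order(a, m)                                        # ord_100(7) = 4
assert n == 4
assert order(pow(a, 3, m), m) == n // gcd(3, n) == 4   # gcd(3,4)=1: order preserved
assert order(pow(a, 2, m), m) == n // gcd(2, n) == 2   # gcd(2,4)=2: order drops
```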
In the next corollary we construct countably many pairwise non-isomorphic groups, not known to be sofic, satisfying Kaplansky's direct and stable finiteness conjectures.
\begin{corollary}\label{corKaNi}
With the notations of the previous theorem, for $r\geq 2$ let $F_r\leq\Gamma$ be generated by
$\{b^iab^{-i}\mid i=0,\dots,r-1\}$. Then the groups
\begin{equation*}
G_r:=\Gamma\ast_{F_r}\Gamma=\langle X,\bar{X}\mid R,\bar R,a=\bar{a},\dots ,b^{r-1}ab^{-r+1}=\bar b^{r-1}\bar a\bar b^{-r+1}\rangle
\end{equation*}
are pairwise non-isomorphic and are not LEA. Moreover they are free-by-sofic, and hence $D[G_r]$ is directly finite (equivalently: stably finite) with respect to all division rings $D$, for all $r\geq2$.
\begin{proof}
The subgroup $F_r$ is a free group of rank $r$. The argument of the proof of Theorem~\ref{theoKaNi}
shows that the element $g=[x,\bar{x}]$ is mapped to the trivial element in all amenable quotients of $G_r$. Hence $G_r$ is not LEA.
By the universal property of amalgamated free products, we have the following commuting diagram
$$\xymatrix{
\Gamma \,\,\ar@{^{(}->}[r]\ar[ddr]_{\text{id}} &G_r\ar@{.>}^{\exists ! \varphi}[dd] & \,\,\bar\Gamma\ar@{_{(}->}[l]\ar[ddl]^{\text{id}}\\
&&\\
&\Gamma&
}$$
where $\varphi\colon G_r\twoheadrightarrow \Gamma$ is a surjective homomorphism.
Let $K=\ker\varphi$, then $K\cap \Gamma=K\cap \bar{\Gamma}=\{e\}$. This implies that $K$ acts freely on the Bass-Serre tree associated to
the amalgamated free product $G_r$, that is to say, $K$ is a free group.
Hence $G_r$ is free-by-sofic and, given a division ring $D$, the group ring $D[G_r]$ is directly finite by our Main Theorem.
Note that $K$ is not finitely generated, so we cannot conclude that $N[G_r]$ is directly finite for noetherian rings~$N$.
It remains to prove that the family $\{G_r\}_{r\geq 2}$ consists of pairwise non-isomorphic finitely presented groups. To this aim, we recall the notion of deficiency of a finitely presented group.
The \emph{deficiency} $\mathrm{def}(G)$ of a finitely presented group $G$ is defined as
$\max\{\lvert X\rvert-\lvert R\rvert \}$ over all the finite presentations $G=\langle X\mid R\rangle$. It is invariant under isomorphism, and we have
$$\mathrm{def}(G_r)=2\cdot \mathrm{def}\bigl(\text{SL}_n(\mathbb Z[1/p])\bigr)-r.$$
Therefore, the groups $\{G_r\}_{r\geq 2}$ are pairwise non-isomorphic.
\end{proof}
\end{corollary}
Note that the groups $G_r$ do not have property (T). This follows, for instance, from \cite[Remark 2.3.5 and Theorem 2.3.6]{BeHaVa}.
Our result extends further to HNN extensions. In \cite{Ber}, we have characterized the residual amenability of particular HNN extensions $A\ast_H$ and amalgamated free products $A\ast_H A$, in terms of the
amalgamated subgroup $H$ being closed in the proamenable topology of $A$ \cite[Corollaries 1.8 and 1.10]{Ber}.
Using these results and Corollary~\ref{corKaNi}, we obtain:
\begin{corollary}\label{cor2}
Let $p>2$ be a prime number, $n\geq 3$ and $\text{SL}_n(\mathbb Z[1/p])=\langle X\mid R\rangle$. For $r\geq 2$ the groups
\begin{equation*}
\Gamma_r:=\langle X, t\mid R,\, tat^{-1}=a,\, t(bab^{-1})t^{-1}=bab^{-1},\dots,\,t(b^{r-1}ab^{-(r-1)})t^{-1}=b^{r-1}ab^{-(r-1)}\rangle
\end{equation*}
are pairwise non-isomorphic and are not LEA. Moreover they are free-by-sofic, and hence $D[\Gamma_r]$ is directly finite (equivalently: stably finite) with respect to all division rings $D$, for all $r\geq 2$.
\end{corollary}
We end with the following question:
\begin{question}
Are the groups $G_r$ and $\Gamma_r$ sofic/hyperlinear?
\end{question}
\end{document} |
\begin{document}
\title{\bf\Large Nonlinear Stability for the Periodic and Non-Periodic Zakharov System}
\begin{abstract}
We prove the existence of a smooth curve of periodic traveling wave solutions for the Zakharov system, and we show that these solutions are nonlinearly stable under the periodic flow generated by the system. We also improve the work of Ya Ping \cite{Wu1} by proving the stability of the solitary wave solutions associated to the Zakharov system. \\
{\bf Key words.} Periodic traveling waves, Solitary waves, Nonlinear Stability, Zakharov System.\\
{\bf AMS subject classifications.} 35Q53; 35B35; 35B10
\end{abstract}
\section{Introduction}
In this essay we study the periodic Zakharov system
\begin{equation} \label{equaZakha}
\left \{
\begin{aligned}
iu_{t}+u_{xx}&=uv\\
v_{tt}-v_{xx}&=(|u|^2)_{xx},\\
\end{aligned} \right.
\end{equation}
where $u=u(x,t)\in\mathbb{C},\ v=v(x,t)\in\mathbb{R}$ and $x,t\in\mathbb{R}.$ This system was introduced by Zakharov in \cite{Zakharov1} to describe the long wave Langmuir turbulence in a plasma. The function $u=u(x,t)$ represents the slowly varying envelope of the highly oscillatory electric field and $v$ denotes the deviation of the ion density from the equilibrium.\\
The goal of this paper is to establish the existence and nonlinear stability of periodic traveling wave solutions for the Zakharov system. More precisely, we are interested in solutions for (\ref{equaZakha}) of the form
\begin{equation}\label{solforms}
u(x,t)=e^{-i\omega t}e^{i\frac{c}{2}(x-ct)}\phi_{\omega,c}(x-ct)\ \ \ \text{and}\ \ \
v(x,t)=\psi_{\omega,c}(x-ct),
\end{equation}
where $\omega,c\in\mathbb{R}$ and $\phi_{\omega,c},\psi_{\omega,c}: \mathbb{R}\rightarrow\mathbb{R}$ are periodic smooth functions with the same fundamental period $L>0.$ As far as we know, no stability result for this type of waves has been established before. The first work about existence and nonlinear stability of periodic waves was made by Benjamin in \cite{benjamin3}, where he studied periodic waves of cnoidal type for the Korteweg-de Vries equation. This work had some gaps in central parts of the stability theory, which were revised and complemented by Angulo, Bona and Scialom in \cite{anguloBonaScialom}. In the last few years several papers about nonlinear stability in the periodic case have appeared in the literature; see for instance \cite{AnguloLibro, angulo5, angulo4, AnguloNatali2, anguloNatali, GallayHaragus1, GallayHaragus2, haragus1, NataliPastor, Neves1}. \\
Substituting the type of solutions given in (\ref{solforms}) in the system (\ref{equaZakha}), we get that $\phi=\phi_{\omega,c}$ and $\psi=\psi_{\omega,c}$ have to satisfy the next system of ordinary differential equations,
\begin{equation}\label{systemedo}
\left \{
\begin{aligned}
&(c^2 -1)\psi ''=(\phi^2)''\\
&\phi ''+\left(\omega+\frac{c^2}{4}\right)\phi=\phi\psi.
\end{aligned} \right.
\end{equation}
Integrating the first equation of the system (\ref{systemedo}) and substituting on the second one, we obtain after some algebra that the solution $\phi$ has to satisfy
\[\left(\phi'\right)^2=\frac{1}{2(1-c^2)}F(\phi),\]
where $F$ is the polynomial given by
\[F(t)=-t^4+2(1-c^2)\left(-\omega-\frac{c^2}{4}\right)t^2+4(1-c^2)A_{\phi}\]
and $A_{\phi}$ is a constant of integration. It is clear that the solutions of the system (\ref{equaZakha}) depend on the roots of the polynomial $F.$ Assuming that $F$ has roots $\pm\eta_1$ and $\pm\eta_2$ with $0<\eta_2<\eta_1,$ we obtain the smooth curve of dnoidal waves
\[\nu\in\left(\frac{2\pi^2}{L^2},+\infty\right)\longmapsto\left(\psi_{\nu},\phi_{\nu}\right)\in H_{per}^n([0,L])\times H_{per}^n([0,L]),\ \ \text{for all}\ \ n\in \mathbb{N}, \]
with $\phi_{\nu}$ and $\psi_{\nu}$ given by
\[\phi_{\nu}(\xi)=\eta_1\text{dn}\left(\tfrac{\eta_1\xi}{\sqrt{2(1-c^2)}} ;k\right)\ \ \text{and}\ \ \ \psi_{\nu}(\xi)=-\frac{\eta^2_1}{1-c^2}\text{dn}^2 \left(\tfrac{\eta_1\xi}{\sqrt{2(1-c^2)}};k\right).\] Here, $k^2=\frac{\eta^2_1-\eta^2_2}{\eta^2_1},$ $\nu=-\left(\omega+\frac{c^2}{4}\right)$ and dn denotes the Jacobi elliptic function of dnoidal type. These solutions are constructed with the same fixed minimal period $L>0,$ not necessarily large.\\
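The quadrature $(\phi')^2=F(\phi)/(2(1-c^2))$ can be checked numerically for these dnoidal profiles. The following sketch uses illustrative parameter values of our own choosing (not from the paper), the factorization $F(t)=(\eta_1^2-t^2)(t^2-\eta_2^2)$ with $\eta_2^2=\eta_1^2(1-k^2)$, and the derivative formula $\frac{d}{du}\mathrm{dn}(u,k)=-k^2\,\mathrm{sn}\,\mathrm{cn}$:

```python
import numpy as np
from scipy.special import ellipj

# Check that phi(xi) = eta1 * dn(alpha*xi, k), alpha = eta1/sqrt(2(1-c^2)),
# satisfies (phi')^2 = F(phi) / (2(1-c^2)) with
# F(t) = (eta1^2 - t^2)(t^2 - eta2^2) and eta2^2 = eta1^2 (1 - k^2).

c, eta1, k = 0.3, 1.2, 0.8                    # illustrative parameter values
eta2 = eta1 * np.sqrt(1.0 - k**2)
alpha = eta1 / np.sqrt(2.0 * (1.0 - c**2))

xi = np.linspace(0.0, 3.0, 200)
sn, cn, dn, _ = ellipj(alpha * xi, k**2)      # scipy's parameter is m = k^2
phi = eta1 * dn
dphi = -eta1 * alpha * k**2 * sn * cn         # since d/du dn(u,k) = -k^2 sn cn
F = (eta1**2 - phi**2) * (phi**2 - eta2**2)
assert np.allclose(dphi**2, F / (2.0 * (1.0 - c**2)), atol=1e-10)
```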
In the periodic case, the well-posedness problem for the Zakharov system was studied by Bourgain in \cite{Bourgain2}, where a global well-posedness result was obtained for initial data $(u(0), v(0),v_t(0) )\in H^1_{per}\times L^2_{per}\times H^{-1}_{per}.$ It is worth noting that in the periodic case there exists another \textit{more general} result about well-posedness for the Zakharov system obtained by Takaoka in \cite{Takaoka1}, but for our purpose the result established by Bourgain is good enough. See also Guo and Shen \cite{GuoBoling1}, where the existence of classical periodic solutions for the system (\ref{equaZakha}) is proved. In the continuous case the Cauchy problem associated to the Zakharov system in one and several dimensions has been studied extensively; see for instance \cite{AddedAdded1, bejenaru1, BourgCollia1, Colliander1, GiniTsutVelo1, KenigPonceVega1, OzawaTsut1, Pecher, SchotetWeiste1, SulemSulem1}.\\
In order to establish the spectral properties of some linear operators which appear in the proof of the stability, we use Floquet theory; more precisely, we use the Oscillation Theorem (see Magnus and Winkler \cite{magnus}). Our spectral analysis depends basically on the following periodic and semi-periodic eigenvalue problems associated to the Lam\'e equation, given respectively by
\[\left\{
\begin{aligned}
y''+&[\lambda-m(m+1)k^2\text{sn}^2(x,k)]y=0\\
y(0)&=y(2K(k)),\ \ y'(0)=y'(2K(k))
\end{aligned} \right.\]
and
\[\left\{
\begin{aligned}
y''+&[\lambda-m(m+1)k^2\text{sn}^2(x,k)]y=0\\
y(0)&=-y(2K(k)),\ \ y'(0)=-y'(2K(k)),
\end{aligned} \right.\]
where $\lambda\in\mathbb{R},$ $m\in\mathbb{N},$ sn denotes the Jacobi elliptic function of snoidal type and $K$ is the complete elliptic integral of the first kind (see Byrd and Friedman \cite{byrdFriedman}). Recently, Neves in \cite{Neves1} proved that it is possible to characterize the eigenvalues of the Hill operator $L(y)=-y''+Q(x)y$ in $L^2[0,\pi]$ if one knows explicitly an eigenfunction associated to the eigenvalue (in this case, $Q$ is a $C^2$ periodic function with period $\pi$). Unfortunately, we only had access to this work when we had already concluded our spectral results using the associated Lam\'e equation. We are completely sure that this new theory can be used to obtain the spectral properties of the operators studied in this paper.\\
To obtain our result of stability for the dnoidal wave solutions, we rewrite the Zakharov system as
\begin{equation}\label{ZakNovoInt}
\left \{
\begin{aligned}
v_t &=-V_x, \ \int_0^L V(x,t)dx = 0 \\
V_t &= -(v+|u|^2)_x \\
iu_t &+ u_{xx} =uv
\end{aligned} \right.
\end{equation}
and we adapt to the periodic case the ideas established by Benjamin \cite{benjamin1}, Bona \cite{bona2} and Weinstein \cite{weinstein3}. We then impose the restriction
\[\int_0^L v_0(x)dx\leq \int_0^L \psi_{\nu}(x)dx,\]
where $v(x,0)=v_0(x),$ to obtain that the dnoidal waves with $c\in(-1,1)$ fixed and $\nu>\frac{2\pi^2}{L^2},$ are orbitally stable in
\[X:=L^2_{per}([0,L])\times\widetilde{L}^2_{per}([0,L])\times H^1_{per}([0,L])\]
by the periodic flow of the system (\ref{ZakNovoInt}). Here, $\widetilde{L}^2_{per}$ is given by
\[\widetilde{L}^2_{per}([0,L])=\left\{f \in L^2_{per}([0,L]): \int_0^L f(x) dx=0\right\}.\]
With regard to the existence and stability of solitary wave solutions for the Zakharov system, there exists a result obtained by Ya Ping in \cite{Wu1}; however, this work is not completely right. In \cite{Wu1} the author considered the \textit{equivalent} system
\[
\left \{
\begin{aligned}
v_t&=V_{xx},\\
V_t&=v+|u|^2,\\
iu_{t}&+u_{xx}=uv.\\
\end{aligned} \right.
\]
In this setting, the solitary wave solution $V(x,t)=\varphi_{\omega,c}(x-ct)$ is given by
\begin{equation}\label{solWuvarphi}
\varphi_{\omega,c}(\xi)=c\sqrt{-4\omega-c^2}\tanh\left(\frac{\sqrt{-4\omega-c^2}}{2}\xi\right).
\end{equation}
Observe that this solution does not belong to any Sobolev space $H^s(\mathbb{R})$, while Ya Ping proved stability in $L^2(\mathbb{R})\times H^1(\mathbb{R})\times H^1(\mathbb{R})$; this cannot be right, because the solution (\ref{solWuvarphi}) does not lie in the space where the stability is proved. One of the goals of this paper is to improve the stability result for solitary wave solutions obtained by Ya Ping. Following the ideas used to establish stability in the periodic case, we prove that the solitary wave solutions
\[
\psi_{\omega,c}(\xi)=\left(2\omega+\frac{c^2}{2}\right)\ \text{sech}^2\left(\frac{\sqrt{-4\omega-c^2}}{2}\xi\right), \ \ \ \phi_{\omega,c}(\xi)=\sqrt{\frac{(-4\omega-c^2)(1-c^2)}{2}}\ \text{sech}\left(\frac{\sqrt{-4\omega-c^2}}{2}\xi\right)
\]
\[
\text{and}\ \ \ \varphi_{\omega,c}(\xi)=c\left(2\omega+\frac{c^2}{2}\right)\ \text{sech}^2\left(\frac{\sqrt{-4\omega-c^2}}{2}\xi\right)
\]
are orbitally stable in $X=L^2(\mathbb{R})\times L^2(\mathbb{R})\times H^1(\mathbb{R})$ by the flow generated by the Zakharov system if $1-c^2>0$ and $4\omega+c^2< 0.$ It is worth noting that in the continuous case the restriction imposed above on the initial datum $v_0$ is not necessary: since the solitary wave solutions converge to zero as $\xi$ goes to infinity, the term that forces us to impose this condition disappears.\\
The plan of the paper is as follows. The next section briefly describes the notation that will be used and makes a few preliminary remarks regarding periodic and nonperiodic Sobolev spaces. In Section 3 we prove the existence of a smooth curve of dnoidal wave solutions for the system (\ref{equaZakha}). Section 4 contains the spectral analysis of some linear operators needed to obtain our stability result. In Section 5 we present the nonlinear stability result for the dnoidal wave solutions of the system (\ref{equaZakha}). Finally, in Section 6 we present the stability result for the solitary waves associated to the Zakharov system.
\section{Notation}
The $L^2$-based Sobolev spaces of periodic functions are defined as follows (for further details see Iorio and Iorio \cite{ioriolibro}). Let $\mathcal{P}=C^{\infty}_{per}$ denote the collection of all functions $f:\mathbb{R}\rightarrow \mathbb{C}$ which are $C^{\infty}$ and periodic with period $2L>0.$ The collection $\mathcal{P}'$ of all continuous linear functionals from $\mathcal{P}$ into $\mathbb{C}$ is the set of \textit{periodic distributions.} If $\Psi\in \mathcal{P}'$ then we denote the value of $\Psi$ at $\varphi$
by $\Psi(\varphi)=\langle\Psi,\varphi\rangle.$ Define the functions $\Theta_k(x)=\exp(\pi ikx/L), \ k\in \mathbb{Z},\ x\in\mathbb{R}.$ The Fourier transform of $\Psi$ is the function $\widehat{\Psi}:\mathbb{Z}\rightarrow\mathbb{C}$ defined by the formula $\widehat{\Psi}(k)=\frac 1{2L}\langle\Psi,\Theta_{-k}\rangle, \ k\in\mathbb{Z}.$ So, if $\Psi$ is a periodic function with period $2L,$ we have
\[\widehat{\Psi}(k)=\frac 1{2L}\int_{-L}^L \Psi(x)e^{-\frac{ik\pi x}{L}}dx.\]
For $s\in \mathbb{R},$ the Sobolev space of order $s,$ denoted by $H^s_{per}([-L,L])$ is the set of all $f\in \mathcal{P}'$ such that $(1+|k|^{2})^{\frac{s}{2}}\widehat{f}(k)\in l^2(\mathbb{Z}),$ with norm
\[||f||^2_{H^s_{per}}=2L\sum_{k=-\infty}^{\infty}(1+|k|^{2})^s|\widehat{f}(k)|^2.\]
We also note that $H^s_{per}$ is a Hilbert space with respect to the inner product
\[(f|g)_s = 2L\sum_{k=-\infty}^{\infty}(1+|k|^2)^{s}\widehat{f}(k)\overline{\widehat{g}(k)}.\]
In the case $s=0,$ $H^0_{per} $ is a Hilbert space that is isometrically isomorphic to $L^2([-L,L])$ and
\[(f|g)_0 = (f,g) = \int_{-L}^{L} f\overline{g} \ dx.\]
The space $H^0_{per}$ will be denoted by $L^2_{per}$ and its norm will be $\|\cdot\|_{L^2_{per}}.$
Of course $H^s_{per} \subset L^2_{per}$, for any $s \geq 0 $. Moreover, $(H^s_{per})'$, the topological dual of $H^s_{per}$, is isometrically isomorphic to $H^{-s}_{per}$ for all $s \in \mathbb{R}$. The duality is implemented concretely by the pairing
\[\langle f,g\rangle_s = 2L\sum_{k=-\infty}^{\infty}\widehat{f}(k)\overline{\widehat{g}(k)}, \quad \text{for} \quad f \in H^{-s}_{per}, \ \ g \in H^s_{per}. \]
Thus, if $f \in L^2_{per}$ and $g \in H^s_{per} $, with $s\geq 0,$ it follows that $\langle f,g\rangle_s = (f,g)$. Additionally, in the particular case $s=\frac 1{2}$ we will denote the pairing $\langle f,g\rangle_s$ simply by $\langle f,g\rangle.$ One of Sobolev's Lemmas in this context states that if $s>\frac{1}{2}$ and
\[C_{per} = \{f: \mathbb{R} \longrightarrow \mathbb{C} \ | \ f \ \ \text{is continuous and periodic with period} \ \ 2L \},\]
then $H^{s}_{per}\hookrightarrow C_{per}$.\\
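As an aside, the definition of the $H^s_{per}$ norm lends itself to a direct numerical illustration: for a smooth $2L$-periodic function, the Fourier coefficients $\widehat{f}(k)$ are well approximated by the discrete Fourier transform of equispaced samples. The following sketch (our own illustration; the function and parameters are arbitrary) checks this for $f(x)=\cos(\pi x/L)$, whose only nonzero coefficients are $\widehat{f}(\pm 1)=\tfrac12$, so that $\|f\|_{H^s_{per}}^2 = L\,2^s$.

```python
import numpy as np

def sobolev_norm_per(f_vals, L, s):
    """Approximate the H^s_per norm of a smooth 2L-periodic function from
    N equispaced samples over one period, via the discrete Fourier transform."""
    N = len(f_vals)
    fhat = np.fft.fft(f_vals) / N        # approximates hat{f}(k) = (1/2L) * integral
    k = np.fft.fftfreq(N, d=1.0 / N)     # integer wave numbers 0,1,...,-1
    return np.sqrt(2 * L * np.sum((1 + np.abs(k) ** 2) ** s * np.abs(fhat) ** 2))

# f(x) = cos(pi*x/L): ||f||_{H^s_per}^2 = 2L * 2 * (1+1)^s * (1/4) = L * 2^s
L = 3.0
x = np.linspace(-L, L, 512, endpoint=False)
f = np.cos(np.pi * x / L)
assert abs(sobolev_norm_per(f, L, s=1) - np.sqrt(2 * L)) < 1e-8
```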
Let $s\in \mathbb{R}.$ The ($L^2$ type) Sobolev space $H^s(\mathbb{R})$ is the collection of all $f\in \mathcal{S}'(\mathbb{R})$ such that $(1+|\xi|^2)^{\frac s{2}}\widehat{f}\in L^2(\mathbb{R},d\xi),$ that is, $\widehat{f}$ is a measurable function and
\[
\| f\|_{s}^2=\int_{\mathbb{R}}(1+|\xi|^2)^s|\widehat{f}(\xi)|^2d\xi<\infty.
\]
For more details see Iorio and Iorio \cite{ioriolibro}. Finally, we say that $b\in \widehat{H}^{-1}(\mathbb{R})$ if there exists $V\in L^2(\mathbb{R})$ such that $b=-V'$ and $\|b\|_{\widehat{H}^{-1}}=\|V\|_{L^2}.$
\section{Existence of dnoidal wave solutions}
In this section we show the existence of a smooth curve of dnoidal wave solutions, with the same fundamental period, for the Zakharov system. In this case, we are interested in solutions of the system (\ref{equaZakha}) of the form given in (\ref{solforms}). Since $u$ is a periodic function (with period $L$), for $c\neq 0$ we suppose that there exists $m\in\mathbb{N}$ such that $L=\frac{4\pi m}{c}.$ Note that for $c=0$ we obtain immediately that $u$ is an $L$-periodic function. Substituting (\ref{solforms}) in (\ref{equaZakha}), we have that $\phi=\phi_{\omega,c}$ and $\psi=\psi_{\omega,c}$ have to satisfy (\ref{systemedo}). Integrating the first equation in (\ref{systemedo}) we obtain
\begin{equation}\label{edo2Zakha}
(c^2-1)\psi'=(\phi^2)'+a_0.
\end{equation}
Using the fact that $\phi^2$ and $\psi$ are periodic we get that $a_0=0.$ Therefore
\begin{equation}\label{ecuacorr}
(c^2-1)\psi'=(\phi^2)'.
\end{equation}
Integrating (\ref{ecuacorr}), we have that for all $c\neq 1$
\begin{equation}\label{ecuasegint}
\psi=\frac{-\phi^2}{1-c^2}+a_1.
\end{equation}
We assume in our theory that the constant of integration $a_1$ is zero. Thus, substituting (\ref{ecuasegint}) in the second equation of (\ref{systemedo}) we have that
\begin{equation} \label{ecuaordphi}
\phi''+\left(\omega+\frac{c^2}{4}\right)\phi+\frac{\phi^3}{1-c^2} = 0.
\end{equation}
Now, multiplying (\ref{ecuaordphi}) by $\phi '$ and integrating once, we arrive at
\[\frac{\left(\phi'\right)^2}{2}+\left(\omega+\frac{c^2}{4}\right)\frac{\phi^2}{2}+\frac{\phi^4}{4(1-c^2)}=A_{\phi},\]
where $A_{\phi}$ is a constant of integration. Then,
\[\left(\phi'\right)^2=\frac{1}{2(1-c^2)}F(\phi),\]
where $F$ is a polynomial given by
\[F(t)=-t^4-2(1-c^2)\left(\omega+\frac{c^2}{4}\right)t^2+4(1-c^2)A_{\phi}.\]
Suppose that $F$ has roots $\pm\eta_1$ and $\pm\eta_2$ (note that $F$ is even) and without loss of generality that $0<\eta_2<\eta_1$. Thus, we can write
\begin{equation} \label{equa9zak}
\left(\phi'\right)^2=\frac{1}{2(1-c^2)}(\phi^2-\eta^2_2)(\eta^2_1-\phi^2).
\end{equation}
Assume also that $1-c^2>0.$ Since the left-hand side of (\ref{equa9zak}) is nonnegative, we must have
\[\eta^2_2 \leq \phi^2\leq \eta^2_1.\]
Since we are interested in positive solutions, from the last inequality we obtain $\eta_2\leq\phi\leq\eta_1.$ Using (\ref{equa9zak}) we get that the $\eta_{j}$'s satisfy
\[\left \{
\begin{aligned}
-2(1-c^2)\left(\omega+\frac{c^2}{4}\right)&=\eta^2_1 + \eta^2_2\\
4(1-c^2)A_{\phi}&=-\eta^2_1\eta^2_2.\\
\end{aligned}\right.\]
From the last system, we get the restriction $4\omega+c^2<0.$ \\
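The relations above are simply Vieta's formulas for the even quartic $F$. As a purely illustrative numerical check (the parameter values below are our own choices, satisfying $4\omega+c^2<0$ and $A_\phi<0$), one may compute the roots of $F$ directly:

```python
import numpy as np

# Illustrative parameters (not from the paper): 1 - c^2 > 0, 4*omega + c^2 < 0,
# and A_phi < 0 so that F has four real roots +/-eta_1, +/-eta_2.
c, omega, A_phi = 0.5, -1.0, -0.05
alpha = 1.0 - c**2

# F(t) = -t^4 - 2*alpha*(omega + c^2/4)*t^2 + 4*alpha*A_phi (with a_1 = 0)
coeffs = [-1.0, 0.0, -2*alpha*(omega + c**2/4), 0.0, 4*alpha*A_phi]
roots = np.roots(coeffs)
eta = np.sort(np.abs(roots))          # roots come in pairs +/-eta_2, +/-eta_1
eta2, eta1 = eta[0], eta[-1]

# Vieta's formulas for the roots of F:
assert abs(eta1**2 + eta2**2 + 2*alpha*(omega + c**2/4)) < 1e-9
assert abs(4*alpha*A_phi + eta1**2 * eta2**2) < 1e-9
```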
Now, define $\varrho(\xi)=\frac{\phi(\xi)}{\eta_1}$, $k^2=\frac{\eta^2_1-\eta^2_2}{\eta^2_1}$ and assume that $\varrho(0)=1$. Thus, we can rewrite the equation (\ref{equa9zak}) as
\begin{equation} \label{equa11zak}
\left(\varrho'\right)^2=\frac{\eta^2_1}{2(1-c^2)}(1-\varrho^2)(\varrho^2+k^2 -1).
\end{equation}
Finally, define $\chi$ through the relation $\varrho^2=1-k^2\sin^2\chi$, with $\chi(0)=0;$ then (\ref{equa11zak}) can be reduced to
\begin{equation}\label{ecuacorr2}
[\chi']^2=\frac{\eta^2_1}{2(1-c^2)}(1-k^2\sin^2\chi).
\end{equation}
From (\ref{ecuacorr2}) we obtain after some algebra that,
\begin{equation}\label{ecuacorr3}
\int^{\chi(\xi)}_0 \frac{dt}{\sqrt{1-k^2\sin^2 t}}= \frac{\eta_1\xi}{\sqrt{2(1-c^2)}}.
\end{equation}
Using the identity (\ref{ecuacorr3}), we obtain from the definition of the Jacobi elliptic functions (see Byrd and Friedman \cite{byrdFriedman}) that
\[\sin(\chi(\xi))=\text{sn}\left(\frac{\eta_1\xi}{\sqrt{2(1-c^2)}} ;k\right).\]
Therefore
\begin{align*}
\varrho(\xi)&=\sqrt{1-k^2\sin^2(\chi(\xi))}=\sqrt{1-k^2 \text{sn}^2\left(\tfrac{\eta_1\xi}{\sqrt{2(1-c^2)}} ;k\right)} = \text{dn}\left(\tfrac{\eta_1\xi}{\sqrt{2(1-c^2)}} ;k\right),
\end{align*}
where we use the fact that $k^2\text{sn}^2+\text{dn}^2=1.$ Coming back to the variable $\phi$ we obtain that
\begin{equation} \label{equa12zak}
\phi_{\omega,c}(\xi) = \eta_1\text{dn}\left(\tfrac{\eta_1\xi}{\sqrt{2(1-c^2)}} ;k\right)
\end{equation}
and using (\ref{ecuasegint}) we arrive at
\begin{equation} \label{equa13zak}
\psi_{\omega,c}(\xi)=-\frac{\eta^2_1}{1-c^2}\text{dn}^2 \left(\tfrac{\eta_1\xi}{\sqrt{2(1-c^2)}};k\right).
\end{equation}
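As an illustrative numerical check (the parameter values below are our own choices), one can verify by finite differences that the profile (\ref{equa12zak}) indeed satisfies (\ref{ecuaordphi}); note that SciPy's `ellipj` takes the parameter $m=k^2$.

```python
import numpy as np
from scipy.special import ellipj

# Illustrative parameters: 1 - c^2 > 0, 4*omega + c^2 < 0,
# eta_2 in (0, sqrt(nu*alpha)) and eta_1^2 + eta_2^2 = 2*nu*alpha.
c, omega = 0.5, -1.0
alpha = 1.0 - c**2
nu = -(omega + c**2 / 4.0)
eta2 = 0.5 * np.sqrt(nu * alpha)
eta1 = np.sqrt(2.0 * nu * alpha - eta2**2)
k2 = (eta1**2 - eta2**2) / eta1**2          # m = k^2

def phi(xi):
    """Dnoidal profile phi(xi) = eta_1 * dn(eta_1*xi/sqrt(2*alpha); k)."""
    _, _, dn, _ = ellipj(eta1 * xi / np.sqrt(2.0 * alpha), k2)
    return eta1 * dn

# residual of phi'' + (omega + c^2/4)*phi + phi^3/(1-c^2) = 0,
# with phi'' approximated by centered finite differences
xi = np.linspace(-2.0, 2.0, 2001)
h = xi[1] - xi[0]
p = phi(xi)
res = (p[2:] - 2*p[1:-1] + p[:-2]) / h**2 \
      + (omega + c**2/4.0) * p[1:-1] + p[1:-1]**3 / alpha
assert np.max(np.abs(res)) < 1e-4
```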
Now, since dn has fundamental period $2K$, where $K = K(k)$ is the complete elliptic integral of the first kind (see Byrd and Friedman \cite{byrdFriedman}), we obtain that $\phi$ and $\psi$ have fundamental period given by
\[T_\psi = T_\phi = \frac{2{\sqrt{2(1-c^2)}}}{\eta_1}K(k).\]
Fix $\omega$ and $c$ such that $1-c^2 >0$ and $4\omega+c^2<0.$ Additionally, define
\[\nu=-\left(\omega+\frac{c^2}{4}\right) \ \ \text{and} \ \ \alpha=1-c^2.\]
Then $\eta^2_1 + \eta^2_2 = 2\nu \alpha$ and consequently $0 < \eta_2 < \sqrt{\nu\alpha} < \eta_1 < \sqrt{2\nu\alpha}.$ We express $T_\psi$ and $T_{\phi}$ as functions of the parameter $\eta_2,$
\[T_\psi(\eta_2)= T_\phi(\eta_2)=\frac{2{\sqrt{2\alpha}}}{\sqrt{2\nu\alpha-\eta^2_2 }}K(k(\eta_2)) \ \ \ \ \text{with}\ \ \ \ \
k^2(\eta_2)=\frac{2\nu\alpha-2\eta^2_2}{2\nu\alpha-\eta^2_2}.\]
Note that if $ \eta_2\rightarrow 0$, we have that $k(\eta_2)\rightarrow 1^-,$ which implies that $K(k(\eta_2))\rightarrow +\infty $ and consequently $ T_\psi,T_\phi\rightarrow +\infty $. On the other hand, when $\eta_2\rightarrow {\sqrt{\nu\alpha}}$ we get that $k(\eta_2)\rightarrow 0^+$ and then $K(k(\eta_2))\rightarrow \frac{\pi}{2}$. Therefore $T_\psi,T_\phi \rightarrow \frac{\pi{\sqrt{2}}}{\sqrt{\nu}}.$ Since the function $\eta_2 \in (0,\sqrt{\nu\alpha})\mapsto T_\psi(\eta_2)=T_\phi(\eta_2)$ is strictly decreasing (we prove this fact later) we obtain
\[T_\phi=T_\psi > \frac{\pi{\sqrt{2}}}{\sqrt{\nu}}.\] Now, for $L>0$ and $1-c^2>0$ fixed, choose $\nu > 0$ such that $\sqrt{\nu}> \frac{\pi{\sqrt{2}}}{L}.$
Then, it follows from the analysis given above that there exists a unique $\eta_2=\eta_2(\nu)\in(0,\sqrt{\nu\alpha})$ such that the dnoidal waves $\phi=\phi(\cdot;\eta_1(\nu),\eta_2(\nu))$ and $\psi=\psi(\cdot;\eta_1(\nu),\eta_2(\nu))$ have fundamental period $L=T_{\psi}(\eta_2)=T_{\phi}(\eta_2).$
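The construction of $\eta_2(\nu)$ from the prescribed period $L$ lends itself to a direct numerical illustration (the parameter values below are arbitrary choices of ours): the map $\eta_2\mapsto T_\phi(\eta_2)$ is evaluated with SciPy's complete elliptic integral $K$ (which takes the parameter $m=k^2$), its monotonicity is checked on a grid, and the equation $T_\phi(\eta_2)=L$ is then solved by a bracketing root finder.

```python
import numpy as np
from scipy.special import ellipk
from scipy.optimize import brentq

# Illustrative parameters: given L, c with 1 - c^2 > 0, pick nu > 2*pi^2/L^2.
L, c = 6.0, 0.5
alpha = 1.0 - c**2
nu = 2.0 * np.pi**2 / L**2 + 1.0

def period(eta2):
    """T_phi(eta_2) = 2*sqrt(2*alpha)/sqrt(2*nu*alpha - eta_2^2) * K(k)."""
    k2 = (2*nu*alpha - 2*eta2**2) / (2*nu*alpha - eta2**2)   # m = k^2
    return 2*np.sqrt(2*alpha) / np.sqrt(2*nu*alpha - eta2**2) * ellipk(k2)

# T_phi is strictly decreasing on (0, sqrt(nu*alpha)) ...
grid = np.linspace(1e-3, np.sqrt(nu*alpha) - 1e-3, 200)
T = np.array([period(e) for e in grid])
assert np.all(np.diff(T) < 0)

# ... so T_phi(eta_2) = L has a unique solution eta_2(nu)
eta2_star = brentq(lambda e: period(e) - L, 1e-6, np.sqrt(nu*alpha) - 1e-6)
assert abs(period(eta2_star) - L) < 1e-8
```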
\begin{remark}
The formulas (\ref{equa12zak}) and (\ref{equa13zak}) contain, at least formally, the solitary wave solutions for the system (\ref{equaZakha}) found by Ya Ping in \cite{Wu1}. In fact, if $\eta_2 \rightarrow 0^+$ we obtain that $\eta_1 \rightarrow \sqrt{2\nu\alpha}$, $k(\eta_2) \rightarrow 1^-$ and $\text{dn}(x;1^-)=\text{sech}(x)$. Consequently
\[\phi_{c,\omega}(x)=\sqrt{\frac{(-4\omega - c^2)(1-c^2)}{2}}\ \text{sech} \left(\frac{\sqrt{-4\omega - c^2}}{2} \ x\right)\ \ \text{and} \]
\[\psi_{c,\omega}(x)=\left(2\omega+\frac{c^2}{2}\right) \text{sech}^2 \left(\frac{\sqrt{-4\omega - c^2}}{2} \ x\right).\]
\end{remark}
\begin{theo}\label{TeorDnoidal}
Let $L>0$ and $1-c^2>0$ be arbitrarily fixed. Consider $\nu_0>\frac{2\pi^2}{L^2}$ and the unique $\eta_{2,0}=\eta_{2,0}(\nu_0)\in (0,\sqrt{\nu_0\alpha})$ such that $T_{\psi_{\nu_0}}=L=T_{\phi_{\nu_0}}$. Then,
(i) there exist intervals $I(\nu_0)$ and $B(\eta_{2,0})$ around $\nu_0$ and $\eta_{2,0}$ respectively, and a unique smooth function $\Lambda:I(\nu_0)\longrightarrow B(\eta_{2,0})$ such that $\Lambda(\nu_0)=\eta_{2,0}$ and
\[\frac{2\sqrt{2\alpha}}{\sqrt{2\nu\alpha - \eta_2^2}}K(k) = L,\]
for all $\nu\in I(\nu_0)$, $\eta_2=\Lambda(\nu)$ and
\begin{equation}\label{equaZa13c}
k^2 = k^2(\nu)=\frac{2\nu\alpha-2\eta_2^2}{2\nu\alpha-\eta_2^2}.
\end{equation}
Furthermore, we can choose $I(\nu_0)=(\frac{2\pi^2}{L^2},+\infty).$
(ii) The dnoidal waves $\psi(\cdot;\eta_1,\eta_2)$ and $\phi(\cdot;\eta_1,\eta_2)$ given by (\ref{equa13zak}) and (\ref{equa12zak}), and determined by $\eta_1=\eta_1(\nu),$ $\eta_2=\eta_2(\nu)=\Lambda(\nu),$ with $\eta^2_1 + \eta^2_2 = 2\nu\alpha,$ have fundamental period $L$ and satisfy (\ref{ecuasegint}) and (\ref{ecuaordphi}). Furthermore, the map
\[\nu \in I(\nu_0)\longmapsto\left(\psi(\cdot;\eta_1(\nu),\eta_2(\nu)),\phi(\cdot;\eta_1(\nu),\eta_2(\nu))\right)\in H_{per}^n([0,L])\times H_{per}^n([0,L]) \]
is smooth for all integer $n\geq 1$.
(iii) The map $\Lambda:I(\nu_0)\rightarrow B(\eta_{2,0})$ is strictly decreasing. Therefore, from (\ref{equaZa13c}), $\nu\mapsto k(\nu)$ is a strictly increasing function.
\end{theo}
\proof
The proof of this theorem follows the same ideas as those of Theorem 2.1 in Angulo \cite{angulo4}; we will use the Implicit Function Theorem. To this end, consider the open set
\[\Omega = \left\{(\eta,\nu) \in \mathbb{R}^2: \nu>\frac{2\pi^2}{L^2} \ \text{and} \ \eta \in (0,\sqrt{\nu\alpha}\ )\right\}\] and $\Gamma:\Omega \longrightarrow \mathbb{R}$ defined as
\[\Gamma(\eta,\nu) = \frac{2\sqrt{2\alpha}}{\sqrt{2\nu\alpha-\eta^2}} \ K(k(\eta,\nu))- L,\]
where
\begin{equation}\label{equa13bzak}
k^2(\eta,\nu) =\frac{2\nu\alpha-2\eta^2}{2\nu\alpha-\eta^2}.
\end{equation}
From the hypothesis, we have that $\Gamma(\eta_{2,0},\nu_0) = 0$. We prove that $\frac{\partial\Gamma}{\partial\eta}<0$ in $\Omega$. In fact, we use the relation
\begin{equation}\label{equa14zak}
\frac{dK(k)}{dk}=\frac{E(k)-k'^2K(k)}{kk'^2}\ \ \text{with}\ \ k \in (0,1),
\end{equation}
where $E=E(k)$ is the complete elliptic integral of the second kind and $k'^2=1-k^{2}$ is the complementary modulus. Differentiating (\ref{equa13bzak}) with respect to $\eta,$ we obtain that
\begin{equation} \label{equa15zak}
\frac{\partial k}{\partial\eta}= -\frac{2\eta\nu\alpha}{k(2\nu\alpha-\eta^2)^2}.
\end{equation}
Then from (\ref{equa14zak}) and (\ref{equa15zak}) we obtain
\[\frac{\partial\Gamma}{\partial\eta}=\frac{2\eta\sqrt{2\alpha}}{(2\nu\alpha-\eta^2)^{\frac{3}{2}}} \ K(k)-\frac{4\eta\nu\alpha\sqrt{2\alpha}}{(2\nu\alpha-\eta^2)^{\frac{5}{2}}}\left[\frac{E(k) - k'^2K(k)}{k^2k'^2}\right].\]
Thus,
\begin{align*}
\frac{\partial \Gamma}{\partial \eta} < 0 & \Leftrightarrow k^2k'^2(2\nu\alpha-\eta^2)K(k) < 2\nu\alpha E(k) - 2\nu\alpha k'^2K(k)\\
& \Leftrightarrow k'^2(2\nu\alpha-2\eta^2)K(k) + 2\nu\alpha k'^2K(k) < 2\nu\alpha E(k)\\
& \Leftrightarrow \frac{2\nu\alpha k'^2}{(1+k'^2)}K(k)< \nu\alpha E(k)\Leftrightarrow (1+k'^2)E(k)- 2k'^2K(k) > 0.
\end{align*}
Since the last inequality always holds, we obtain that $\frac{\partial \Gamma}{\partial \eta}<0$. By the Implicit Function Theorem, there exist an interval $I(\nu_0)$ around $\nu_0$, an interval $B(\eta_{2,0})$ around $\eta_{2,0}$ and a smooth function $\Lambda:I(\nu_0)\longrightarrow B(\eta_{2,0})$ such that $\Lambda (\nu_0) = \eta_{2,0}$ and
\[\Gamma(\Lambda(\nu),\nu) = 0,\ \ \ \ \ \ \forall \nu \in I(\nu_0).\]
Additionally, since $\nu_0$ was chosen arbitrarily in $I=\left(\frac{2\pi^2}{L^2},+\infty\right)$, the uniqueness of $\Lambda$ allows us to extend $\Lambda$ to $I.$ Part $(ii)$ is immediate, using the smoothness of the functions involved.\\
Now, we prove that $\Lambda$ is a strictly decreasing function. For this, note that $\Gamma(\Lambda(\nu),\nu)=0$ for all $\nu\in I(\nu_0)$; then, using again the Implicit Function Theorem, we get that
\[\Lambda '(\nu)=-\frac{\partial\Gamma/{\partial\nu}}{\partial\Gamma/\partial\eta}.\]
Since $\frac{\partial\Gamma}{\partial\eta}<0,$ we just have to prove that $\frac{\partial\Gamma}{\partial\nu}<0$ in $I(\nu_0)$. In fact, since
\[\frac{\partial\Gamma}{\partial\nu}=\frac{2\alpha\sqrt{2\alpha}}{(2\nu\alpha-\eta^2)^{3/2}}\left[ -K+\frac{dK}{dk}\frac{\eta^2}{k(2\nu\alpha-\eta^2)} \right]\]
and $\eta^2=(2\nu\alpha-\eta^2)k'^2,$ we obtain
\begin{align*}
\frac{\partial\Gamma}{\partial\nu}<0&\Leftrightarrow \frac{\eta^2}{\sqrt{2\nu\alpha-\eta^2}\sqrt{2\nu\alpha-2\eta^2}}\frac{dK}{dk}<K\Leftrightarrow k'^2\frac{dK}{dk}-kK<0.\\
\end{align*}
From (\ref{equa14zak}), we arrive at
\[ \frac{\partial\Gamma}{\partial\nu}<0\Leftrightarrow\frac{E-k'^2K}{k}-kK<0\Leftrightarrow E<K.\]
Since the last inequality always holds for any $k\in(0,1)$ (see Byrd and Friedman \cite{byrdFriedman}), we obtain the desired result.\\
Finally, differentiating $k$ with respect to $\nu,$ we obtain
\[\frac{dk}{d\nu}=\frac{\alpha\eta(\eta-2\eta'\nu)}{k(2\nu\alpha-\eta^2)^2}>0,\]
which proves that $\nu \mapsto k(\nu)$ is a strictly increasing function; this finishes the proof of the theorem.
$\square$
The next result will be used in the proof of the stability of the dnoidal wave solutions.
\begin{coro}\label{coroDnoidal}
Let $L>0$ and $c$ be arbitrarily fixed with $1-c^2 >0.$ Consider the smooth curve of dnoidal waves $\nu \in \left(\frac{2\pi^2}{L^2},+\infty\right) \longmapsto \phi_{\nu}(\cdot ; \eta_1(\nu),\eta_2(\nu))$ determined by Theorem \ref{TeorDnoidal}. Then
\[\frac{d}{d\nu}\int_0^L \phi^2_\nu(\xi)d\xi>0.\]
\end{coro}
\proof
Using the facts that $\eta_1L=2\sqrt{2(1-c^2)}K(k)$ and $\int_0^K \text{dn}^2 (y;k) dy = E(k)$ (see Byrd and Friedman \cite{byrdFriedman}) we get that
\[\int_0^L \phi^2_{\nu}(\xi) d\xi = 2\eta_1\sqrt{2(1-c^2)} \int_0^K \text{dn}^2 (y;k) dy = \frac{8(1-c^2)}{L}K(k)E(k).\]
Since $k\mapsto K(k)E(k)$ and $\nu \mapsto k(\nu)$ are strictly increasing functions we obtain
\[\frac{d}{d\nu}\int_0^L \phi^2_{\nu}(\xi) d\xi =\frac{8(1-c^2)}{L} \frac{d}{dk }\left[K(k)E(k)\right]\frac{dk}{d\nu}>0.\]
This finishes the proof of the corollary.
$\square$
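The three elliptic-integral facts used in the two proofs above, namely $(1+k'^2)E(k)-2k'^2K(k)>0$, $E(k)<K(k)$ and the monotonicity of $k\mapsto K(k)E(k)$, can also be checked numerically (this is of course only a sanity check on a grid, not a proof):

```python
import numpy as np
from scipy.special import ellipk, ellipe

# SciPy's ellipk/ellipe take the parameter m = k^2.
k = np.linspace(0.01, 0.999, 2000)
m = k**2
kp2 = 1.0 - m                 # complementary modulus squared, k'^2
K, E = ellipk(m), ellipe(m)
KE = K * E

assert np.all((1 + kp2) * E - 2 * kp2 * K > 0)   # (1+k'^2)E - 2k'^2 K > 0
assert np.all(E < K)                             # E(k) < K(k) on (0,1)
assert np.all(np.diff(KE) > 0)                   # K(k)E(k) strictly increasing
```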
\section{Spectral Analysis}
In this part of the paper, we study some spectral properties of various operators which will be necessary to obtain our stability result. First, note that the system (\ref{equaZakha}) can be rewritten as
\begin{equation}\label{ZakNovo}
\left \{
\begin{aligned}
v_t &=-V_x, \ \int_0^L V(x,t)dx = 0 \\
V_t &= -(v+|u|^2)_x \\
iu_t &+ u_{xx} =uv
\end{aligned} \right.
\end{equation}
Therefore, we have the Hamiltonian structure $\frac{\partial U}{\partial t}=JE'(U)$ where $U=(v,V,u)^t,$ $J$ is the linear skew-symmetric operator given by
\[J=\left(
\begin{array}{crrc}
0 &-\frac{d}{dx} &0 \\
-\frac{d}{dx} &0 &0 \\
0 &0 &-\frac{i}{2}
\end{array}
\right)\]
and $E$ is the energy functional defined as
\begin{equation} \label{equa2.4Zak}
E(v,V,u)=\frac{1}{2}\int_0^L 2|u_x|^2+v^2+V^2+2v|u|^2 \ dx.
\end{equation}
We also use the functionals $Q_1$ and $Q_2$ defined as
\begin{equation}\label{quantCons2e3}
Q_1(v,V,u)=\int_0^L vV + \text{Im}(u_x\overline{u})\ dx \ \ \ \ \text{and}\ \ \ \ Q_2(v,V,u)= \int_0^L |u|^2 dx.
\end{equation}
A standard analysis proves that $E,$ $Q_1$ and $Q_2$ are conserved quantities of the system (\ref{ZakNovo}), i.e.,
\[E(v(t),V(t),u(t))= E(v(0),V(0),u(0)), \ \ \ Q_1(v(t),V(t),u(t))= Q_1(v(0),V(0),u(0))\]
\[\text{and}\ \ Q_2(v(t),V(t),u(t))= Q_2(v(0),V(0),u(0)) \]
for all $t\in [-T,T],$ where $T$ is the maximal time of existence of solutions.\\
Now, suppose that $V(x,t)=\varphi_{\omega,c}(x-ct),$ with $\varphi_{\omega,c}:\mathbb{R}\rightarrow\mathbb{R}$ a smooth $L$-periodic function, is a solution of $v_t=-V_{x}$; then
\begin{equation}\label{edo3Zakh}
c\psi_{\omega,c}'=\varphi'_{\omega,c}.
\end{equation}
Therefore, $c\psi= \varphi+d_0$, where $d_0$ is a constant of integration. Since we are interested in $\varphi$ with zero mean, we obtain that $d_0=\frac{c}{L}\int_0^L \psi dx$. Using the fact that $\int_0^K \text{dn}^2(x;k) dx=E(k)$ we get that $d_0(k)=-\frac{c\eta_1^2}{1-c^2}\frac{E(k)}{K(k)}$. Therefore
\begin{equation}\label{varphidnoidal}
\varphi(\xi)=-\frac{c\eta_1^2}{1-c^2}\left[\text{dn}^2\left(\frac{\eta_1\xi}{\sqrt{2(1-c^2)}};k\right)-\frac{E(k)}{K(k)}\right].
\end{equation}
It is worth noting that if $\eta_2\rightarrow 0^+,$ then $\eta_1\rightarrow\sqrt{2\alpha\nu}$ and therefore $k\rightarrow 1^-.$ Since $\text{dn}(u;1^-)=\text{sech}(u)$, $E(1)=\frac{\pi}{2}$ and $K(1^-)= +\infty$, we arrive at
\[\varphi(\xi)=c\left(2\omega+\frac{c^2}{2}\right)\text{sech}^2\left(\frac{\sqrt{-4\omega-c^2}}{2}\xi \right),\]
which is the solitary wave solution for (\ref{edo3Zakh}).\\
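The degeneration $\text{dn}(\cdot\,;k)\to\text{sech}$ as $k\to 1^-$ invoked here and in the earlier remark can also be observed numerically (SciPy's `ellipj` takes the parameter $m=k^2$; the grid and tolerance below are arbitrary):

```python
import numpy as np
from scipy.special import ellipj

# Compare dn(u; k) with sech(u) for m = k^2 close to 1.
u = np.linspace(-5.0, 5.0, 101)
_, _, dn, _ = ellipj(u, 1.0 - 1e-6)       # m = k^2 = 1 - 1e-6
assert np.max(np.abs(dn - 1.0 / np.cosh(u))) < 1e-3
```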
Now, using Theorem \ref{TeorDnoidal} we have that there exist periodic traveling waves for (\ref{ZakNovo}) given by
\[\left(\psi_{\omega,c}(x-ct),\varphi_{\omega,c}(x-ct), e^{- i\omega t} e^{i\frac{c}{2}(x-ct)}\phi_{\omega,c}(x-ct)\right),\]
where
\begin{equation}\label{solZakhpsi}
\psi_{\omega,c}(\xi)=\frac{-\eta_{1}^2}{1-c^2}\text{dn}^2\left(\frac{\eta_1\xi}{\sqrt{2(1-c^2)}};k \right),
\
\phi_{\omega,c}(\xi)=\eta_1\text{dn}\left(\frac{\eta_1\xi}{\sqrt{2(1-c^2)}};k\right)
\end{equation}
\begin{equation}\label{solZakhphi}
\text{and}\ \ \ \ \varphi_{\omega,c}(\xi)= -\frac {c\eta_1^2}{1-c^2}\left[\text{dn}^2\left(\frac{\eta_1\xi}{\sqrt{2(1-c^2)}};k\right)-\frac{E(k)}{K(k)}\right].
\end{equation}
The next operators will be useful in the proof of the stability of the dnoidal wave solutions:
\begin{equation}\label{operaDnoidal}
\mathcal{L}_{3}=-\frac{d^2}{{dx}^2}-\left(\omega+\frac{c^2}{4}\right)+3\psi\ \ \ \text{and}\ \ \ \
\mathcal{L}_{4}=-\frac{d^2}{{dx}^2}-\left(\omega+\frac{c^2}{4}\right)+\psi.
\end{equation}
We will study the spectral properties of the operators $\mathcal{L}_{i},\ i=3,4$. Recall that $\sigma(\mathcal{L}_{i})=\sigma_{ess}(\mathcal{L}_{i})\cup\sigma_{disc}(\mathcal{L}_{i})$, where $\sigma_{ess}(\mathcal{L}_{i})$ and $\sigma_{disc}(\mathcal{L}_{i})$ denote, respectively, the essential spectrum and the discrete spectrum of $\mathcal{L}_{i}$ (see Reed and Simon \cite{ReedSimon1}). Write
\[\mathcal{L}_{3}=\left(-\frac{d^2}{{dx}^2}-\left(\omega+\frac{c^2}{4}\right)\right)+3\psi=:\mathcal{L}+M_1,\]
\[\mathcal{L}_{4}=\left(-\frac{d^2}{{dx}^2}-\left(\omega+\frac{c^2}{4}\right)\right)+\psi=:\mathcal{L}+M_2,\]
where $\mathcal{L}=-\frac{d^2}{dx^2}-(\omega+\frac{c^2}{4}).$ Since $M_1$ and $M_2$ are relatively compact with respect to $\mathcal{L}$, it follows from Weyl's Essential Spectrum Theorem (see Reed and Simon \cite{ReedSimon1}) that $\sigma_{ess}(\mathcal{L}_i)=\sigma_{ess}(\mathcal{L})=\emptyset$ for $i= 3,4.$ Thus $\sigma(\mathcal{L}_i)=\sigma_{disc}(\mathcal{L}_i)$ for $i=3,4.$ Therefore we have to analyze the periodic eigenvalue problem on $[0,L]$
\begin{equation}\label{probperiogeral}
\left \{
\begin{aligned}
\mathcal{L}_{i}\chi&=\lambda\chi\\
\chi(0)&=\chi(L),\ \chi'(0)=\chi'(L). \\
\end{aligned}\right.
\end{equation}
The problem (\ref{probperiogeral}) determines that the spectrum of $\mathcal{L}_i$ is a countable set of eigenvalues $\{\lambda_n: n=0,1,2,3,...\}$ with
\[\lambda_0\leq \lambda_1\leq \lambda_2\leq \lambda_3\leq \cdots, \]
where the double eigenvalues are counted twice and $\lambda_n\rightarrow+\infty$ as $n\rightarrow\infty.$ We denote by $\chi_n$ the eigenfunctions associated to the eigenvalue $\lambda_n.$ It is clear from the conditions $\chi(0)=\chi(L), \chi'(0)=\chi'(L)$ that $\chi_n$ can be extended to all of $(-\infty,+\infty)$ as a continuously differentiable function with period $L.$ We know from Floquet theory that the periodic eigenvalue problem (\ref{probperiogeral}) is related to the following semi-periodic eigenvalue problem, considered on $[0,L]$:
\[\left \{
\begin{aligned}
\mathcal{L}_{i}\eta&= \mu\eta\\
\eta(0)&=-\eta(L),\ \eta'(0)=-\eta'(L), \\
\end{aligned} \right.\]
which also is a self-adjoint problem and therefore determines a sequence of eigenvalues $\{\mu_n: n=0,1,2,3 ...\}$ with
\[\mu_0\leq \mu_1\leq \mu_2\leq \mu_3\leq \cdots, \]
where the double eigenvalues are counted twice and $\mu_n\rightarrow+\infty$ as $n\rightarrow\infty.$
We denote by $\eta_n$ the eigenfunction associated to the eigenvalue $\mu_n.$
\begin{theo}
Let $\phi_{\nu}=\phi$ and $\psi_{\nu} = \psi$ be the dnoidal waves given by Theorem \ref{TeorDnoidal}. Then,
(i) the operator $\mathcal{L}_3$ in (\ref{operaDnoidal}), defined in $L_{per}^2([0,L])$ with domain $H_{per}^2([0,L])$, has its first three eigenvalues simple, the second one being zero with associated eigenfunction $\phi '$. Furthermore, the rest of the spectrum consists of a discrete set of eigenvalues which are double.
(ii) The operator $\mathcal{L}_4$ in (\ref{operaDnoidal}), defined in $L_{per}^2([0,L])$ with domain $H_{per}^2([0,L])$, has zero as its first eigenvalue, which is simple with associated eigenfunction $\phi$. Furthermore, the rest of the spectrum consists of a discrete set of eigenvalues.
\end{theo}
\proof
$(i)$ The proof is based on Floquet theory (see Eastham \cite{eastham}, Magnus and Winkler \cite{magnus}). Differentiating (\ref{ecuaordphi}) and using (\ref{ecuasegint}), we have that $\mathcal{L}_3\phi '=0$. Then zero is an eigenvalue of $\mathcal{L}_3$ with associated eigenfunction $\phi '.$ Since $\phi '$ has exactly two zeros on $[0,L)$, we get that zero is either the second or the third eigenvalue of $\mathcal{L}_3$. We will prove that zero is in fact the second one. For this, we have to study the periodic problem
\begin{equation} \label{equa2.12Zak}
\left \{
\begin{aligned}
\mathcal{L}_{3}\chi &=\lambda\chi\\
\chi(0)&=\chi(L),\ \chi'(0)=\chi'(L).\\
\end{aligned} \right.
\end{equation}
Let $\Lambda(x)=\chi(\eta x)$ where $\eta=\frac{\sqrt{2\alpha}}{\eta_1}$. Then, from the explicit form of $\psi$ and the relation $k^2 \text{sn}^2+\text{dn}^2=1$, we have that the problem (\ref{equa2.12Zak}) is equivalent to
\begin{equation} \label{equa2.13Zak}
\left \{
\begin{aligned}
\Lambda ''+&[\rho-6k^2\text{sn}^2(x; k)]\Lambda=0\\
\Lambda(0)&=\Lambda(2K),\ \Lambda '(0)=\Lambda '(2K), \\
\end{aligned}\right.
\end{equation}
where
\[\rho=\frac{2\alpha}{\eta_{1}^2}\left(\frac{\lambda}{2}+\omega+\frac{c^2}{4}+\frac{3\eta_{1}^2}{\alpha}\right).\]
The second order equation in (\ref{equa2.13Zak}) is called the Jacobian form of the Lam\' e equation. It is well known that such an equation has exactly three intervals of instability (see Theorem $7.8$ in Magnus and Winkler \cite{magnus}). We will show that these intervals are the first three. First, observe that $\rho_1= 4+k^2$ and $\Lambda_1(x)=\text{cn}(x;k)\text{sn}(x;k)$ satisfy the problem (\ref{equa2.13Zak}). Furthermore, following Ince \cite{ince}, we have that the functions
\[\Lambda_0(x)=1-(1+k^2-\sqrt{1-k^2+k^4})\text{sn}^2(x;k),\]
\[\Lambda_2(x)=1-(1+k^2+\sqrt{1-k^2+k^4})\text{sn}^2(x;k),\]
which have period $2K,$ are the eigenfunctions of (\ref{equa2.13Zak}) with eigenvalues given by
\[\rho_0= 2\left(1+k^2 -\sqrt{1-k^2+k^4}\right)\ \ \ \text{and}\ \ \ \rho_2= 2\left(1+k^2 +\sqrt{1-k^2+k^4}\right).\]
Since $\Lambda_0$ does not have zeros in $[0,2K]$, it follows that $\rho _0$ is the first eigenvalue of (\ref{equa2.13Zak}). Furthermore, since $\Lambda_2$ has two zeros in $[0,2K)$ and $\rho_1<\rho_2,$ we have that $\rho_1$ is the second eigenvalue of (\ref{equa2.13Zak}) and $\rho_2$ is the third. We also have that $\rho_0,\rho_1$ and $\rho_2$ are simple. Now, since the eigenvalues of (\ref{equa2.12Zak}) and (\ref{equa2.13Zak}) are related as
\[\lambda=\frac{\eta_1^2}{\alpha}(\rho-6)+2\nu,\]
we can see $\lambda$ as a function of $\rho$, which is increasing. Since $k^2-2=-\frac{2\alpha\nu}{\eta_{1}^2}$, we have that $\lambda(\rho_1) = 0 = \lambda_1$ and since $\lambda_0 < \lambda_1 < \lambda_2$, we obtain that
\[\lambda_0<0=\lambda_1<\lambda_2.\]
This finishes the proof of the part $(i)$.\\
$(ii)$ Using (\ref{ecuaordphi}) and (\ref{ecuasegint}) we have that $\mathcal{L}_4\phi=0$. Thus, zero is an eigenvalue of $\mathcal{L}_4$ with associated eigenfunction $\phi$. Since $\phi$ does not vanish on $[0,L]$, zero is the first eigenvalue of $\mathcal{L}_4$ and it is simple.
$\square$
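As a sanity check of the Lam\'e eigenpairs used in part $(i)$, one can verify by finite differences that the functions $\Lambda_0,\Lambda_1,\Lambda_2$ satisfy $\Lambda''+[\rho-6k^2\text{sn}^2(x;k)]\Lambda=0$ with $\rho_{0,2}=2(1+k^2\mp\sqrt{1-k^2+k^4})$ and $\rho_1=4+k^2$. The modulus below is an arbitrary choice, and the check is of course only numerical:

```python
import numpy as np
from scipy.special import ellipj

k = 0.7
m = k**2                          # SciPy's ellipj takes the parameter m = k^2
r = np.sqrt(1 - m + m**2)

x = np.linspace(-3.0, 3.0, 4001)
h = x[1] - x[0]
sn, cn, dn, _ = ellipj(x, m)

# (eigenvalue rho, eigenfunction Lambda) for the Lame equation
pairs = [
    (2*(1 + m - r), 1 - (1 + m - r) * sn**2),   # Lambda_0, no zeros
    (4 + m,         sn * cn),                    # Lambda_1, two zeros per period
    (2*(1 + m + r), 1 - (1 + m + r) * sn**2),   # Lambda_2
]
for rho, Lam in pairs:
    # residual of Lambda'' + (rho - 6*m*sn^2) * Lambda = 0 by finite differences
    res = (Lam[2:] - 2*Lam[1:-1] + Lam[:-2]) / h**2 \
          + (rho - 6*m*sn[1:-1]**2) * Lam[1:-1]
    assert np.max(np.abs(res)) < 1e-3
```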
\section{Nonlinear Stability for the Dnoidal Wave Solutions}
In this section we study the nonlinear stability properties of the periodic traveling wave solution $\Phi(\xi)=(\psi(\xi),\varphi(\xi),\widetilde{\phi}(\xi))$ where $\psi$, $\varphi$ and $\phi$ are given by (\ref{solZakhpsi}), (\ref{solZakhphi}), $\widetilde{\phi}(\xi)=e^{i\frac{c}{2}\xi}\phi(\xi)$ and $1-c^2>0$. First, we define the type of stability in which we are interested: Let $ X:= L^2_{per}([0,L]) \times \widetilde{L}^2_{per}([0,L]) \times H^1_{per}([0,L]),$ where
\[\widetilde{L}^2_{per}([0,L])=\left\{f \in L^2_{per}([0,L]): \int_0^L f(x) dx=0\right\}.\]
Initially, observe that the system (\ref{ZakNovo}) has two basic symmetries: translations and rotations. This means that if $(v(x,t),V(x,t),u(x,t))$ is a solution of (\ref{ZakNovo}), then the functions
\[(v(x+y,t),V(x+y,t),u(x+y,t))\ \ \ \ \ \text{and}\ \ \ \ (v(x,t),V(x,t),e^{-is}u(x,t))\]
are also solutions, for any real constants $y$ and $s.$ So, our notion of stability will be modulo these symmetries. More precisely,
\begin{defi}
We say that the orbit generated by $\Phi(\xi)$, namely
\[\mathcal{O}_{\Phi}=\left\{\left(\psi(\cdot + y),\varphi(\cdot+y), e^{i\theta}\widetilde{\phi}(\cdot+y)\right):(\theta,y)\in [0,2\pi)\times\mathbb{R}\right\}\]
is stable in $X$ by the flow generated by the system (\ref{ZakNovo}), if for all $\epsilon>0$, there exists $\delta>0$ such that for any $(v_0,V_0, u_0)\in X$ satisfying
\[\|v_0-\psi\|_{L^2_{per}}<\delta,\ \ \|V_0-\varphi\|_{L^2_{per}}<\delta \ \ \text{and}\ \ \|u_0-\widetilde{\phi}\|_{H^1_{per}}<\delta,\]
we have that the solution $(v, V,u)$ of the system (\ref{ZakNovo}) with $(v(0),V(0),u(0))=(v_0, V_0,u_0)$, satisfies
\[(v,V,u) \in C(\mathbb{R}; L^2_{per}([0,L]))\times C(\mathbb{R}; \widetilde{L}^2_{per}([0,L])) \times C(\mathbb{R};H^1_{per}([0,L])),\]
\begin{equation}\label{desiImp1}
\inf_{y \in \mathbb{R}} \|v(\cdot+y,t)-\psi\|_{L^2_{per}}<\epsilon, \ \ \ \inf_{y \in \mathbb{R}} \|V(\cdot+y,t)-\varphi\|_{L^2_{per}}<\epsilon
\end{equation}
\begin{equation}\label{desiImp2}
\text{and}\ \ \ \inf_{\theta \in [0,2\pi),y \in \mathbb{R}} \|e^{i\theta}u(\cdot+y,t)-\widetilde{\phi}\|_{H^1_{per}}<\epsilon.
\end{equation}
Otherwise, we say that $\Phi$ is $X$-unstable.
\end{defi}
Next, we present our stability result for the dnoidal waves.
\begin{theo}\label{teoStabDnoidal}
Let $L>0$ and $1-c^2>0$ be fixed numbers. Consider the smooth curve of periodic traveling wave solutions for the system (\ref{ZakNovo}), $\nu\mapsto(\psi_{\nu},\varphi_{\nu},\phi_{\nu}),$ determined by Theorem \ref{TeorDnoidal} and (\ref{varphidnoidal}). Then, for $\nu>\frac{2\pi^2}{L^2}$, the orbit generated by $\Phi_{\nu}(x,t)=\left(\psi_{\nu}(x),\varphi_{\nu}(x),\widetilde{\phi}_{\nu}(x)\right)$ is stable in $X$ by the periodic flow generated by the system (\ref{ZakNovo}), provided the initial datum $(v_0,V_0,u_0)$ satisfies
\[\int_0^L v_0(x)dx\leq\int_0^L\psi(x)dx.\]
\end{theo}
{\noindent \bf{Proof:\hspace{4pt}}}Consider $(\psi_{\nu},\varphi_{\nu},\widetilde{\phi}_{\nu})$ the solution of (\ref{ZakNovo}) given by Theorem \ref{TeorDnoidal}. For $(v_0,V_0,u_0) \in L^2_{per}([0,L]) \times \widetilde{L}_{per}^2([0,L]) \times H_{per}^1([0,L])$ and $(v,V,u)$ the global solution of (\ref{ZakNovo}) corresponding to this initial datum, we define, for $t\geq 0$ and $ \nu > \frac{2\pi^2}{L^2},$
\[\Omega_t(y,\theta)=\|e^{i\theta}(T_cu)'(\cdot + y,t)-\phi_{\nu}'\|_{L_{per}^2}^2+\nu\|e^{i\theta}(T_cu)(\cdot + y,t)-\phi_{\nu}\|_{L_{per}^2}^2,\]
where $T_c$ denotes the bounded linear operator defined by
\[(T_cu)(x,t)=e^{-ic(x-ct)/2}u(x,t).\]
Then, the deviation of the solution $u(t)$ from the orbit generated by $\Phi$ is measured by
\begin{equation}\label{equa3}
\rho_{\nu}(u(\cdot,t),\phi_{\nu})^2:=\inf\left\{\Omega_t(y,\theta): (y,\theta)\in[0,L]\times[0,2\pi]\right\}.
\end{equation}
Therefore, from (\ref{equa3}) we have that, for each $t$, the infimum of $\Omega_t(y,\theta)$ is attained at $(\theta,y)=(\theta(t),y(t)).$
Consider the perturbation of the periodic wave $(\psi,\varphi,\widetilde{\phi})$
\begin{equation} \label{equa4}
\left \{
\begin{aligned}
\xi(x,t)&=e^{i\theta}(T_cu)(x+y,t)-\phi_{\nu}(x) \\
\eta(x,t)&= V(x+y,t)-\varphi_{\nu}(x)\\
\gamma(x,t)&= v(x+y,t)-\psi_{\nu}(x).
\end{aligned} \right.
\end{equation}
By the minimizing property of $(\theta,y)=(\theta(t),y(t))$, we obtain from (\ref{equa4}) that $p(x,t)=\text{Re}(\xi(x,t))$ and $q(x,t)=\text{Im}(\xi(x,t))$ satisfy the compatibility relations
\begin{equation} \label{equa5}
\left \{
\begin{aligned}
\int_0^L q(x,t)\phi_{\nu}(x)\psi_{\nu}(x) dx&= 0 \\
\int_0^L p(x,t)(\phi_{\nu}(x)\psi_{\nu}(x))' dx &= 0. \\
\end{aligned} \right.
\end{equation}
Now, consider the continuous functional $\mathcal{B}$ defined in $X$ as
\[\mathcal{B}(v,V,u) := E (v,V,u)-cQ_1(v,V,u)-\omega Q_2(v,V,u),\]
where $E$, $Q_1$ and $Q_2$ were defined in (\ref{equa2.4Zak}) and (\ref{quantCons2e3}). Then, from (\ref{equa4}) and (\ref{equa5}), we get
\begin{align*}
\Delta\mathcal{B}:=& \ \mathcal{B}(v(t),V(t),u(t))-\mathcal{B}(\psi,\varphi,\widetilde{\phi})\\
=&\left(\mathcal{L}_3p,p\right)+\left(\mathcal{L}_4q,q\right) +\frac{1}{2}\int_0^L \gamma^2+2\gamma(p^2+q^2)-4\psi p^2 + 4\gamma p \phi \ dx \\& + \frac{1}{2} \int_0^L 2\gamma\psi +\eta^2+ 2\eta\varphi + 2\gamma\phi^2 - 2c\gamma\eta -2c\gamma\varphi -2c\psi\eta \ dx
\end{align*}
where
\[\mathcal{L}_3=-\frac{d^2}{dx^2}-\left(\omega+\frac{c^2}{4}\right)+3\psi \ \ \ \text{and} \ \ \ \mathcal{L}_4=-\frac{d^2}{dx^2}-\left(\omega+\frac{c^2}{4}\right)+\psi.\]
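The kernels of these operators are used repeatedly below, and can be read off from the profile equation. As a sketch: assuming that (\ref{ecuaordphi}) takes the form $-\phi_{\nu}''-\left(\omega+\frac{c^2}{4}\right)\phi_{\nu}+\psi_{\nu}\phi_{\nu}=0$ with $\psi_{\nu}=-\frac{\phi_{\nu}^2}{1-c^2}$ (an assumption consistent with the constant $-\frac{3}{1-c^2}$ appearing in the proof of Theorem \ref{teoEstForCua1} below, and with the solitary-wave profiles of the next section), one has
\begin{equation*}
\mathcal{L}_4\phi_{\nu}=-\phi_{\nu}''-\left(\omega+\frac{c^2}{4}\right)\phi_{\nu}+\psi_{\nu}\phi_{\nu}=0,
\end{equation*}
and differentiating the profile equation with respect to $x$, using $(\psi_{\nu}\phi_{\nu})'=-\frac{(\phi_{\nu}^3)'}{1-c^2}=3\psi_{\nu}\phi_{\nu}',$ gives
\begin{equation*}
-\phi_{\nu}'''-\left(\omega+\frac{c^2}{4}\right)\phi_{\nu}'+3\psi_{\nu}\phi_{\nu}'=\mathcal{L}_3\phi_{\nu}'=0.
\end{equation*}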
Using the facts that $c\psi-\varphi=d_0$ and $\int_0^L\eta dx=0,$ we obtain
\begin{align*}
\Delta\mathcal{B}(t)&=\left(\mathcal{L}_3p,p\right)+\left(\mathcal{L}_4q,q\right) + \frac{1}{2}\int_0^L \left[\sqrt{1-c^2}\gamma+\frac{2\phi p}{\sqrt{1-c^2}}+\frac{p^2+q^2}{\sqrt{1-c^2}}\right]^2 dx \\
&+ \frac{1}{2}\int_0^L(c\gamma-\eta)^2dx - \int_0^L \frac{4\phi p(p^2+q^2)}{1-c^2}+\frac{(p^2+q^2)^2}{1-c^2} dx + \int_0^L(c\gamma-\eta)(c\psi-\varphi)dx\\
& =\left(\mathcal{L}_3p,p\right)+\left(\mathcal{L}_4q,q\right) + \frac{1}{2}\int_0^L \left[\sqrt{1-c^2}\gamma+\frac{2\phi p}{\sqrt{1-c^2}}+\frac{p^2+q^2}{\sqrt{1-c^2}}\right]^2 dx \\
&+\frac{1}{2}\int_0^L(c\gamma-\eta)^2 dx-\int_0^L \frac{4\phi p(p^2+q^2)}{1-c^2}+\frac{(p^2+q^2)^2}{1-c^2}dx + cd_0\int_0^L\gamma dx.
\end{align*}
Since $cd_0\leq 0,$ $\int_0^Lv_0dx\leq\int_0^L\psi(x)dx$ and $\int v(t,x)dx=\int v_0(x)dx,$ we have that $cd_0\int_0^L\gamma dx\geq 0.$ Therefore
\begin{align}\label{equa55}
\notag\Delta\mathcal{B}(t)\geq &\left(\mathcal{L}_3p,p\right)+\left(\mathcal{L}_4q,q\right) + \frac{1}{2}\int_0^L \left[\sqrt{1-c^2}\gamma+\frac{2\phi p}{\sqrt{1-c^2}}+\frac{p^2+q^2}{\sqrt{1-c^2}}\right]^2dx \\&+\frac{1}{2}\int_0^L(c\gamma-\eta)^2dx-C_1\|\xi\|_{H^1_{per}}^3-C_2\|\xi\|_{H^1_{per}}^4,
\end{align}
with $C_i>0$, $i=1,2.$\\
The estimates for $\left(\mathcal{L}_3p,p\right)$ and $\left(\mathcal{L}_4q,q\right)$ will be obtained from the next theorems.
\begin{theo}\label{teoEstForCua1}
Let $1-c^2>0$ and $\nu>\frac{2\pi^2}{L^2}$ be fixed numbers. Consider $\phi_{\nu}$ the dnoidal wave given by Theorem \ref{TeorDnoidal}. Then
\begin{itemize}
\item[(a)] $\inf\{\left(\mathcal{L}_3f,f\right):\|f\|=1\ \text{and}\ \left(f,\phi_{\nu}\right)=0\}=:\alpha_0=0$
\item[(b)] $\inf\{\left(\mathcal{L}_3f,f\right):\|f\|=1,\ \left( f,\phi_{\nu}\right)=0\ \text{and}\ \left(f,(\phi_{\nu}\psi_{\nu})'\right)=0\}=:\alpha>0.$
\end{itemize}
\end{theo}
\proof
$(a)$ Since $\mathcal{L}_3\left(\frac{d}{dx}\phi_{\nu}\right)=0$ and $\left(\frac{d}{dx}\phi_{\nu},\phi_{\nu}\right)=0$, then $\alpha_0 \leq 0$. We prove that $\alpha_0 \geq 0$ using Lemma E.1 in Weinstein \cite{weinstein2} (whose proof also works in the periodic case). We first show that the infimum is attained. In fact, since $\phi_{\nu}$ is bounded we have that $\alpha_0$ is finite, thus there exists $\{f_j\}\subset H^1_{per}([0,L])$ with $\|f_j\|=1$, $\left( f_j,\phi_{\nu}\right)=0$ and $\lim_{j\rightarrow \infty}\left(\mathcal{L}_3 f_j,f_j\right)=\alpha_0$. Since $\{f_j\}$ is bounded in $H^1_{per}([0,L])$, there exists a subsequence, which we again denote by $\{f_j\}$, such that $f_j \rightharpoonup g$ weakly in $H^1_{per}([0,L])$, and hence, by compactness of the embedding, $f_j \rightarrow g$ in $L^2_{per}([0,L])$. Therefore $(g,\phi_{\nu}) =0$ and $(\phi_{\nu} f_j,f_j)\rightarrow (\phi_{\nu} g,g)$ as $j \rightarrow + \infty$. So $g \neq 0$ and $\|g'\|_{L^2_{per}}\leq \liminf \|f_j'\|_{L^2_{per}}$.\\
Now, define $f=g/\|g\|_{L^2_{per}}$; then $(f,\phi_{\nu})=0$, $\|f\|_{L^2_{per}}=1$ and
\[\alpha_0 \leq (\mathcal{L}_3 f,f) = \frac{(\mathcal{L}_3 g,g)}{\|g\|^2_{L^2_{per}}} \leq \frac{\alpha_0}{\|g\|^2_{L^2_{per}}} \leq \alpha_0,\]
since $(\mathcal{L}_3 g,g)\leq\liminf\left(\mathcal{L}_3 f_j,f_j\right)=\alpha_0$, $\alpha_0\leq 0$ and $\|g\|_{L^2_{per}}\leq 1$.
Therefore the infimum is attained. We now show that $\alpha_0 \geq 0$. Since $\mathcal{L}_3$ has the spectral properties required by Lemma E.1, it suffices to find $\chi$ such that $\mathcal{L}_3\chi=\phi_{\nu}$ and $(\chi,\phi_{\nu})\leq 0$. From Theorem \ref{TeorDnoidal} we have that $\nu\in\left(\frac{2\pi^2}{L^2},+\infty\right)\longmapsto\phi_{\nu}\in H^1_{per}([0,L])$ is of class $C^1$; then, differentiating (\ref{ecuaordphi}) with respect to $\nu$, we obtain that $\chi=-\frac{d}{d\nu}\phi_{\nu}$ satisfies $\mathcal{L}_3\chi =\phi_{\nu}$. Using Corollary \ref{coroDnoidal} we obtain that
\[(\chi,\phi_{\nu})=-\frac{1}{2}\frac{d}{d\nu}\int_0^L \phi^2_\nu(\xi)\ d \xi<0.\]
Therefore $(\chi,\phi_\nu)<0,$ which proves that $\alpha_0 \geq 0$. This finishes the proof of part $(a)$.\\
$(b)$ Using part $(a)$, we have that $\alpha \geq 0$. Suppose that $\alpha =0$. Using a similar argument as in part $(a)$, we obtain that there exists $f \in H^1_{per}([0,L])$ such that $\|f\|_{L^2_{per}}=1$ and $(f,\phi_\nu)=\left(f,(\phi_\nu\psi_\nu)'\right)=0$. Then, by the theory of Lagrange multipliers, there exist $\lambda,\ \theta$ and $\delta$ such that
\[\mathcal{L}_3 f =\lambda f+\theta \phi_\nu + \delta(\phi_\nu\psi_\nu)'.\]
Since $\left(\mathcal{L}_3 f,f\right)=0$, we obtain that $\lambda=0$. From the fact that $\mathcal{L}_3\phi'_\nu=0$ we have
\[0=\delta \int_0^L \phi_{\nu}'(\phi_\nu\psi_\nu)' d\xi = -\frac{3\delta}{1-c^2}\int_0^L (\phi_{\nu}')^2 \phi^2_{\nu}\ d\xi.\]
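The value of this integral can be checked directly, assuming (as holds for the solitary-wave profiles of the next section) that $\psi_{\nu}=-\frac{\phi_{\nu}^2}{1-c^2}$:
\begin{equation*}
(\phi_{\nu}\psi_{\nu})'=-\frac{(\phi_{\nu}^3)'}{1-c^2}=-\frac{3\phi_{\nu}^2\phi_{\nu}'}{1-c^2},
\qquad\text{so}\qquad
\int_0^L \phi_{\nu}'(\phi_{\nu}\psi_{\nu})'\, d\xi=-\frac{3}{1-c^2}\int_0^L(\phi_{\nu}')^2\phi_{\nu}^2\, d\xi.
\end{equation*}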
The last equality implies $\delta=0$; thus $\mathcal{L}_3f=\theta\phi_\nu$. Consider $\chi=-\frac{d}{d\nu}\phi_\nu$; then $\mathcal{L}_3(f-\theta\chi)=0$, thus
\[0=(f-\theta\chi,\phi_\nu)=-\theta(\chi,\phi_\nu).\]
Therefore $\theta=0$ and consequently there exists $s \in \mathbb{R}\setminus\{0\}$ such that $f = s\phi_\nu ',$ which is absurd. Therefore $\alpha>0$, which finishes the proof of the theorem.
$\square$
\begin{theo}\label{teoEstForCua2}
Let $1-c^2>0$ and $\nu>\frac{2\pi^2}{L^2}$ be fixed numbers. Consider $\phi_\nu$ and $\psi_\nu$ the dnoidal waves given by Theorem \ref{TeorDnoidal}. Then,
\[\inf \{\left(\mathcal{L}_4 f,f\right):\|f\|_{L^2_{per}}=1\ \ \text{and} \ \ \left(f,\phi_\nu\psi_\nu\right)=0\}=:\beta>0\]
\end{theo}
\proof
From the spectral properties of $\mathcal{L}_4$ it is clear that $\mathcal{L}_4$ is a nonnegative operator; therefore $\beta\geq 0$. Suppose that $\beta=0$. Then, following the same ideas as in the proof of Theorem \ref{teoEstForCua1}, we have that the infimum is attained at an admissible function $g\neq 0$ and there exists $(\lambda,\theta)\in\mathbb{R}^2$ such that
\[\mathcal{L}_4g =\lambda g + \theta \phi_\nu\psi_\nu.\]
Since $\left(g, \phi_\nu\psi_\nu\right) =0$, then $\lambda =0$. Furthermore,
\[0=(\mathcal{L}_4\phi_\nu,g)=(\phi_\nu,\mathcal{L}_4 g)=\theta\int_0^L\phi^2_\nu\psi_\nu\ d\xi,\]
which implies $\theta =0$. Since zero is a simple eigenvalue of $\mathcal{L}_4$ with eigenfunction $\phi_\nu$, there exists $s\in\mathbb{R}\setminus\{0\}$ such that $g= s\phi_\nu$, which is absurd. This finishes the proof of the theorem.
$\square$
Our goal is to estimate the terms $\left(\mathcal{L}_3 p,p\right)$ and $\left(\mathcal{L}_4 q,q\right)$, where $p$ and $q$ satisfy (\ref{equa5}). Using Theorem \ref{teoEstForCua2} and the definition of $\mathcal{L}_4$, we have that there exists $C_0>0$ such that
\begin{equation}\label{equa8}
\left(\mathcal{L}_4 q,q\right) \geq C_0 \|q\|^2_{H^1_{per}}.
\end{equation}
Now, we estimate $\left(\mathcal{L}_3 p,p\right)$. Suppose without loss of generality that $\|\phi_\nu\|_{L^2_{per}}=1$. Denote $p_{\perp}=p-p_{\parallel}$, where $p_{\parallel} = (p,\phi_\nu)\phi_\nu$; then from (\ref{equa5}) we obtain that $(p_{\perp},\phi_\nu)=0$ and $(p_{\perp},(\phi_\nu\psi_\nu)')=0$. From Theorem \ref{teoEstForCua1} it follows that $(\mathcal{L}_3 p_{\perp},p_{\perp})\geq \widetilde{C}_0\|p_{\perp}\|^2_{L^2_{per}}$.\\
Also, consider the normalization $Q_2(u_0)=Q_2(\phi_\nu)$, i.e., $\|u_0\|_{L^2_{per}}=\|\phi_\nu\|_{L^2_{per}}$. Then $\|u(t)\|_{L^2_{per}}=1$ for all $t\geq 0$; thus $-2(p,\phi_\nu)= \|\xi\|^2_{L^2_{per}}$. Therefore
\[(\mathcal{L}_3p_{\perp},p_{\perp})\geq \widetilde{C}_0\|p_{\perp}\|^2_{L^2_{per}} \geq \widetilde{C}_0\|p\|^2_{L^2_{per}} -\widetilde{C}_1\|\xi\|^4_{H^1_{per}}.\]
Since $\left(\mathcal{L}_3\phi_\nu,\phi_\nu\right) <0$, it follows that $\left(\mathcal{L}_3 p_{\parallel},p_{\parallel}\right)\geq -\widetilde{C}_2\|\xi\|^4_{H^1_{per}}$. From the Cauchy-Schwarz inequality we get that $\left(\mathcal{L}_3 p_{\parallel},p_{\perp}\right)\geq -\widetilde{C}_3\|\xi\|^4_{H^1_{per}}$. Therefore, from the specific form of the operator $\mathcal{L}_3$ we conclude that
\begin{equation}\label{equa9}
\left(\mathcal{L}_3 p,p\right)\geq D_1\|p\|_{H^1_{per}}^2- D_2\|p\|_{H^1_{per}}^3-D_3\|p\|_{H^1_{per}}^4,
\end{equation}
where $D_j >0$ for $j=1,2,3.$\\
Now, using (\ref{equa55}), (\ref{equa8}) and (\ref{equa9}) we arrive at
\[\Delta\mathcal{B}(t)\geq d_1\|\xi\|_{1,\nu}^2- d_2\|\xi\|_{1,\nu}^3- d_3\|\xi\|_{1,\nu}^4\]
where $d_i>0$, for $i=1,2,3$ and $\|f\|_{1,\nu}^2:=\|f'\|_{L^2_{per}}^2+ \nu\|f\|_{L^2_{per}}^2$.
Using a similar argument as in Benjamin \cite{benjamin1}, we obtain that for any $\epsilon>0,$ there exists $\delta(\epsilon)>0$ such that if
\[\|u_0-\widetilde{\phi}\|_{1,\nu}<\delta,\ \ \|V_0-\varphi\|_{L^2_{per}}<\delta \ \ \text{and}\ \ \|v_0-\psi\|_{L^2_{per}}<\delta, \]
then
\begin{equation}\label{desiFinalZak}
\rho_{\nu}(u(t),\phi_{\nu})^2=\|\xi(t)\|^2_{1,\nu}<\epsilon,
\end{equation}
for all $t\geq 0.$ Therefore we obtain the inequality (\ref{desiImp2}).\\
Finally, using (\ref{equa55}) and the analysis made above for $\xi$ we obtain that
\[\int_0^L \left[\sqrt{1-c^2}\gamma+\frac{2\phi p}{\sqrt{1-c^2}}+\frac{p^2+q^2}{\sqrt{1-c^2}}\right]^2dx\leq\epsilon\ \ \ \text{and}\ \ \ \int_0^L(c\gamma-\eta)^2 dx\leq\epsilon.\]
Using the last two inequalities, the Cauchy-Schwarz inequality and (\ref{desiFinalZak}), we arrive at (\ref{desiImp1}), which proves that $(\psi,\varphi,\widetilde{\phi})$ is stable with respect to small perturbations that preserve the $L^2_{per}([0,L])$ norm of $\widetilde{\phi}.$ The general case follows from the continuity of the map
\[\nu\in\left(\frac{2\pi^2}{L^2},+\infty\right)\mapsto (\psi, \varphi,\widetilde{\phi}).\]
This finishes the proof of the theorem.
$\square$
\section{Stability for the Solitary Wave Solutions}
In this section we improve the result established by Ya Ping in \cite{Wu1}, obtaining a correct stability result for the solitary wave solutions associated to the Zakharov system. With regard to the Cauchy problem associated to the system (\ref{equaZakha}), we refer the reader to the work of Colliander \cite{Colliander2}, where a global well-posedness result is obtained for initial data $(u,v,v_t)(0)\in H^1(\mathbb{R})\times L^2(\mathbb{R})\times \widehat{H}^{-1}(\mathbb{R})$.\\
If $v_t(x,0)\in \widehat{H}^{-1}(\mathbb{R})$ we can rewrite (\ref{equaZakha}) as the equivalent system
\begin{equation}\label{Zaksolit}
\left \{
\begin{aligned}
v_t &=-V_x, \\
V_t &= -(v+|u|^2)_x \\
iu_t &+ u_{xx} =uv.
\end{aligned} \right.
\end{equation}
The solitary wave solutions for this system are given by
\[
\psi_{\omega,c}(\xi)=\left(2\omega+\frac{c^2}{2}\right)\ \text{sech}^2\left(\frac{\sqrt{-4\omega-c^2}}{2}\xi\right), \ \ \ \phi_{\omega,c}(\xi)=\sqrt{\frac{(-4\omega-c^2)(1-c^2)}{2}}\ \text{sech}\left(\frac{\sqrt{-4\omega-c^2}}{2}\xi\right)
\]
\[
\text{and}\ \ \ \varphi_{\omega,c}(\xi)=c\left(2\omega+\frac{c^2}{2}\right)\ \text{sech}^2\left(\frac{\sqrt{-4\omega-c^2}}{2}\xi\right).
\]
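A direct computation from these formulas shows that the three profiles satisfy the algebraic relations
\begin{equation*}
\varphi_{\omega,c}=c\,\psi_{\omega,c}
\qquad\text{and}\qquad
\phi_{\omega,c}^2=\frac{(-4\omega-c^2)(1-c^2)}{2}\,\text{sech}^2\left(\frac{\sqrt{-4\omega-c^2}}{2}\,\xi\right)=-(1-c^2)\,\psi_{\omega,c};
\end{equation*}
in particular $c\psi_{\omega,c}-\varphi_{\omega,c}=0$, which is why the constant $d_0$ vanishes in this setting.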
As in the periodic case, we have to study the spectral properties of the operators
\begin{equation}
L_{3}=-\frac{d^2}{{dx}^2}-\left(\omega+\frac{c^2}{4}\right)+3\psi\ \ \ \text{and}\ \ \ \
L_{4}=-\frac{d^2}{{dx}^2}-\left(\omega+\frac{c^2}{4}\right)+\psi.
\end{equation}
The spectral properties necessary to obtain our stability result were established by Ya Ping in \cite{Wu1}. See also \cite{Wu1} for the definition of orbital stability for the solitary wave solutions associated to the Zakharov system.\\
The next theorem is the principal result of this section.
\begin{theo}
Assume that $4\omega+c^2\leq 0$ and $1-c^2>0.$ Then the solitary wave solutions $(\psi_{\omega,c}, \varphi_{\omega,c}, \widetilde{\phi}_{\omega,c}),$ with $ \widetilde{\phi}_{\omega,c}(x)=e^{i\frac{c}{2}x}\phi_{\omega,c}(x),$ are orbitally stable in $X= L^2(\mathbb{R}) \times L^2(\mathbb{R}) \times H^1(\mathbb{R})$ by the flow generated by the system (\ref{Zaksolit}).
\end{theo}
\proof
The proof follows the same ideas as in the proof of Theorem \ref{teoEstForCua2}. We only observe that in this case
\begin{align*}
\Delta\mathcal{B}(t)& =\left(L_3p,p\right)+\left(L_4q,q\right) + \frac{1}{2}\int_{\mathbb{R}}\left[\sqrt{1-c^2}\gamma+\frac{2\phi p}{\sqrt{1-c^2}}+\frac{p^2+q^2}{\sqrt{1-c^2}}\right]^2 dx \\
&+\frac{1}{2}\int_{\mathbb{R}}(c\gamma-\eta)^2 dx-\int _{\mathbb{R}}\frac{4\phi p(p^2+q^2)}{1-c^2}+\frac{(p^2+q^2)^2}{1-c^2}dx.
\end{align*}
Note that the constant $d_0$ does not appear, because the decay properties of the solitary wave solutions imply that $d_0=0.$ The rest of the proof follows as in the periodic case.
$\square$
\end{document} |
\begin{document}
\title{A note on the Poisson boundary of lamplighter random walks}
\begin{abstract}
The main goal of this paper is to determine the Poisson boundary of lamplighter random walks over a general class of discrete groups $\Gamma$ endowed with a ``rich'' boundary. The starting point is the Strip Criterion for identification of the Poisson boundary for random walks on discrete groups due to Kaimanovich \cite{Kaimanovich2000}. A geometrical method is developed for constructing the strip as a subset of the lamplighter group $\mathbb{Z}_{2}\wr\Gamma$, starting with a ``smaller'' strip in the group $\Gamma$. Then, this method is applied to several classes of base groups $\Gamma$: groups with infinitely many ends, hyperbolic groups in the sense of Gromov, and Euclidean lattices. We show that under suitable hypotheses the Poisson boundary for a class of random walks on lamplighter groups is the space of infinite limit configurations.
\end{abstract}
\noindent
\textbf{Keywords}: Random walk, wreath product, lamplighter group, Poisson boundary.\\
\noindent
\textbf{Mathematics Subject Classification (2000)}: 60J50, 60B15, 05C05, 20E08\\
\section{Introduction}
Let $\Gamma$ be a finitely generated group, and imagine a lamp sitting at each group element. For simplicity, we consider that the lamps have only two states: $0$ (the lamp is switched off) or $1$ (the lamp is switched on), and initially all lamps are off. We think of a lamplighter person moving randomly in $\Gamma$ and randomly switching lamps on or off. We investigate the following model: at each step the lamplighter may walk to some random neighbouring vertex, and may change the state of some lamps in a bounded neighbourhood of his position. This model can be interpreted as a random walk on the wreath product $(\mathbb{Z}/2\mathbb{Z})\wr\Gamma$ governed by a probability measure $\mu$. The random walk is described by a transient Markov chain $Z_{n}$, which represents the random position of the lamplighter and the random configuration of the lamps at time $n$. We assume that the projection of the lamplighter random walk on the base group $\Gamma$ is transient. Write $\mathbb{Z}_{2}:=\mathbb{Z}/2\mathbb{Z}$ and $G:=\mathbb{Z}_{2}\wr\Gamma$.
Transience of the projected random walk on $\Gamma$ implies that almost every path of the original random walk $Z_{n}$ on $G$ will leave behind a certain (infinitely supported) limit configuration on $\Gamma$. It is then natural to ask whether these limit configurations describe completely the behaviour of the random walk $Z_{n}$ at infinity.
From a more topological viewpoint, we attach to $G=\mathbb{Z}_{2}\wr\Gamma$ a natural boundary $\Omega$ at infinity, such that $G\cup\Omega$ is a metrizable space (not necessarily compact or complete) on which $G$ acts by homeomorphisms and every point in $\Omega$ is an accumulation point of a sequence in $G$. We then show that, in this topology, the random walk $Z_{n}$ converges almost surely to an $\Omega$-valued random variable, under the assumption that the projected random walk on $\Gamma$ converges to the boundary. If we denote by $\mu_{\infty}$ the limit distribution of $Z_{n}$ on $\Omega$, then the measure space $(\Omega,\mu_{\infty})$ provides a model for the behaviour at infinity of the random walk $Z_{n}$. We are interested in whether this space is maximal, i.e., whether there is no way (up to sets of measure $0$) of further refining this space. This maximal space is called the \textit{Poisson boundary} of the random walk.
The Poisson boundary of a random walk on a group is a measure-theoretical space, which describes completely the significant behaviour of the random walk at infinity. Another way of defining the Poisson boundary is to say that it is the space of ergodic components of the time shift in the trajectory space.
In order to prove that the measure space $(\Omega,\mu_{\infty})$ is indeed the Poisson boundary of the random walk $Z_{n}$, we shall use the Strip Criterion for identification of the Poisson boundary due to Kaimanovich, which we state here in its most general form. For details see Kaimanovich \cite[Thm. $6.5$ on p. 677]{Kaimanovich2000} and \cite[Thm. $5.19$]{KaimanovichWoess2002}.
\begin{proposition}[\textbf{Strip Criterion}]\label{StripCriterion}
Let $\mu$ be a probability measure with finite first moment on $G$, and let $(B_{+},\lambda_{+})$ and $(B_{-},\lambda_{-})$ be $\mu$- and $\check{\mu}$-boundaries, respectively. If there exists a measurable $G$-equivariant map $S$ assigning to almost every pair of points $(b_{-},b_{+})\in B_{-} \times B_{+}$ a non-empty ``strip'' $ S(b_{-},b_{+})\subset G$, such that, for the ball $B(id,n)$ of radius $n$ in the metric of $G$,
\begin{equation*}
\frac{1}{n}\log| S(b_{-},b_{+})\cap B(id,n) | \to 0 ,\ \mbox{as}\ n\to\infty,
\end{equation*}
for $(\lambda_{-} \times \lambda_{+})$-almost every $(b_{-},b_{+})\in B_{-} \times B_{+}$, then $(B_{+},\lambda_{+})$ and $(B_{-},\lambda_{-})$ are the Poisson boundaries of the random walks $(G,\mu)$ and $(G,\check{\mu})$, respectively.
\end{proposition}
This criterion was applied by Kaimanovich to groups with sufficiently rich geometric boundaries, for which such strips have a natural geometric interpretation.
We shall give a general method for constructing the strip $S$ as a subset of the lamplighter group $\mathbb{Z}_{2}\wr \Gamma$, with the properties required in Proposition \ref{StripCriterion}. This method requires that the Strip Criterion can be applied to the random walk on $\Gamma$, together with some additional assumptions. The method can be applied to a large class of base groups $\Gamma$ endowed with a sufficiently rich boundary, so that the random walk on $\Gamma$ converges to this boundary. The important fact here is that the basic geometry of the lamplighter group $\mathbb{Z}_{2}\wr \Gamma$ is provided by the underlying structure $\Gamma$. We shall explain how this method works when $\Gamma$ is a group with infinitely many ends, a hyperbolic group, or a Euclidean lattice.
The paper is organized as follows. In Section \ref{sec:Lamplighter} we recall some definitions and basic properties of the main objects of study (lamplighter groups and random walks, wreath products). In Section \ref{sec:TheNaturalBoundary}, we attach to both the group $\Gamma$ and $\mathbb{Z}_{2}\wr\Gamma$ certain boundaries, which satisfy some required assumptions. Under the condition that the random walk on $\Gamma$ converges to the boundary almost surely, we prove that the random walk $Z_{n}$ on $\mathbb{Z}_{2}\wr\Gamma$ also converges to the boundary. In Section \ref{sec:The Poisson boundary}, we shall apply the Strip Criterion \ref{StripCriterion} in order to determine the Poisson boundary of random walks over a general class of groups $\mathbb{Z}_{2}\wr\Gamma$. We shall explain here the \textit{half-space method} for constructing a strip as a subset of $\mathbb{Z}_{2}\wr\Gamma$. The general procedure is based on the fact that the strip in the base group $\Gamma$ has additional ``nice'' properties, which help us to lift it to a bigger strip. We shall prove that the strip satisfies the properties required in Proposition \ref{StripCriterion}. Finally, we shall consider some typical examples of groups $\Gamma$ which are endowed with nice geometric boundaries, so that random walks on $\Gamma$ converge to this boundary. For these specific examples, we shall apply the \textit{half-space method}.
Concluding the introduction, let us remark that the first to show that lamplighter groups are fascinating objects in the study of random walks were Kaimanovich and Vershik \cite{KaimanovichVershik1983}. By now, there is a considerable amount of literature on this topic, and the paper \cite{KaimanovichVershik1983} may serve as a major source for the earlier literature. See also Lyons, Pemantle and Peres \cite{Lyons1996}, Erschler \cite{Erschler2001,Erschler2003}, Revelle \cite{Revelle,Revelle2003}, Pittet and Saloff-Coste \cite{Pittet&Saloff-Coste1996,Pittet&Saloff-Coste2002}, Grigorchuk and Zuk \cite{GrigorchukZuk2001}, Dicks and Schick \cite{DicksSchick2002}, Bartholdi and Woess \cite{BartholdiWoess2005}, Brofferio and Woess \cite{Brofferio 2006}.
\section{Lamplighter groups and random walks}\label{sec:Lamplighter}
\paragraph{Lamplighter groups.}Consider an infinite group $\Gamma$, generated by a finite set $S_{\Gamma}$. Denote by $e$ the identity element, and by $d(\cdot,\cdot)$ the word metric on $\Gamma$, that is, the length of the shortest path between two elements in the Cayley graph of $\Gamma$ (with respect to $S_{\Gamma}$).
Imagine a lamp sitting at each element of $\Gamma$, which can be switched off or on (encoded by 0 and 1). We think of a lamplighter person moving randomly in $\Gamma$ and randomly switching lamps on or off. At every moment of time the lamplighter will leave behind a certain configuration of lamps. The configurations of lamps are encoded by functions $\eta:\Gamma\rightarrow \mathbb{Z}_{2}$. We write $\widehat{\mathcal{C}}=\{\eta : \Gamma\rightarrow\mathbb{Z}_{2}\}$ for the set of all configurations, and let $\mathcal{C} \subset \widehat{\mathcal{C}}\ $ be the set of all finitely supported configurations, where a configuration is said to have finite support if the set $ supp(\eta)=\{x \in \Gamma : \eta(x)\ne 0 \} $ is finite. Denote by $\mathbf{0}$ the zero configuration, i.e. the configuration which corresponds to all lamps switched off, and by $\delta_{x}$ the configuration where only the lamp at $x\in\Gamma$ is on and all other lamps are off.
Recall that the \textit{wreath product} of the groups $\mathbb{Z}_{2}$ and $\Gamma$ is a semidirect product of $\Gamma$ and the direct sum of copies of $\mathbb{Z}_{2}$ indexed by $\Gamma$, where every $x\in\Gamma$ acts on $\bigoplus_{y\in\Gamma}\mathbb{Z}_{2}$ by the translation $T_{x}$ defined as
\begin{equation*}
(T_{x}\eta)(y)=\eta(x^{-1}y),\quad\forall y\in \Gamma.
\end{equation*}
Let $G:=\mathbb{Z}_{2}\wr\Gamma$ denote the \textit{wreath product}. The elements of $G$ are pairs of the form $(\eta,x)\in\mathcal{C}\times \Gamma$, where $\eta$ represents a (finitely supported!) configuration of the lamps and $x$ the position of the lamplighter. A group operation on $G$ is given by
\begin{equation*}
(\eta,x)(\eta^{'},x^{'})=(\eta\oplus T_{x}\eta^{'},xx^{'}),
\end{equation*}
where $x,x^{'}\in\Gamma$, $\eta,\eta^{'}\in\mathcal{C}$, and $\oplus$ is the componentwise addition modulo $2$. The group identity is $(\mathbf{0},e)$. We shall call $G$ together with this operation the \textit{lamplighter group} over $\Gamma$.
\paragraph{Lamplighter distance.}When $S_{\Gamma}$ is a generating set for $\Gamma$, then a natural set of generators for $G=\mathbb{Z}_{2}\wr\Gamma$ is given by
\begin{equation*}
S_{G}=\{(\delta_{e},e),(\mathbf{0},s):s\in S_{\Gamma}\}.
\end{equation*}
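A direct computation from the group law shows how these generators act on the right: $(\mathbf{0},s)$ moves the lamplighter, while $(\delta_{e},e)$ toggles the lamp at the lamplighter's current position, since $T_{x}\delta_{e}=\delta_{x}$:
\begin{equation*}
(\eta,x)(\mathbf{0},s)=(\eta\oplus T_{x}\mathbf{0},xs)=(\eta,xs),
\qquad
(\eta,x)(\delta_{e},e)=(\eta\oplus\delta_{x},x).
\end{equation*}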
Consider the Cayley graph of $G$ with respect to the generating set $S_{G}$. We lift the word metric $d(\cdot,\cdot)$ on $\Gamma$ to a metric $d_{G}(\cdot,\cdot)$ on $G$ by assigning the following distances (lengths) to the elements of $S_{G}$: $d_{G}((\mathbf{0},e),(\mathbf{0},s)):=1$ for $s\in S_{\Gamma}$ and $d_{G}((\mathbf{0},e),(\delta_{e},e)):=c> 0$, where $c$ is some arbitrary, but fixed, positive constant. Then the distance $d_{G}((\eta,x),(\eta^{'},x^{'}))$ between $(\eta,x)$ and $(\eta^{'},x^{'})$ is the length of the shortest path in the Cayley graph of $G$ joining these two vertices. More precisely, if we denote by $l(x,x^{'})$ the smallest length of a ``travelling salesman'' tour from $x$ to $x^{'}$ that visits each element of the set $\eta \bigtriangleup \eta ^{'}$ (where the two configurations differ), then
\begin{equation}\label{GraphGmetric}
d_{G}((\eta,x),(\eta^{'},x^{'}))=l(x,x^{'})+c\cdot|\eta^{'}\bigtriangleup\eta|
\end{equation}
defines a metric on $G$.
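To illustrate this metric, take for instance $\Gamma=\mathbb{Z}$ with $S_{\Gamma}=\{\pm1\}$ (a choice made only for this example). Starting from the identity $(\mathbf{0},0)$, switching on the lamp at $2$ and returning to $0$ requires a tour of length $4$ plus one lamp switch, while switching on the lamps at $-1$ and $2$ and ending at $3$ is cheapest via the tour $0\to-1\to2\to3$:
\begin{equation*}
d_{G}\big((\mathbf{0},0),(\delta_{2},0)\big)=4+c,
\qquad
d_{G}\big((\mathbf{0},0),(\delta_{-1}\oplus\delta_{2},3)\big)=(1+3+1)+2c=5+2c.
\end{equation*}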
\paragraph{Lamplighter random walks.}Let $\mu$ be a probability measure on $G$, such that $supp(\mu)$ generates $G$ as a group. Consider the random walk $Z_{n}$ on $G$ with one-step transition probabilities given by $p((\eta,x),(\eta^{'},x^{'}))=\mu((\eta,x)^{-1}(\eta^{'},x^{'}))$, starting at the identity $(\mathbf{0},e)$. We shall call $Z_{n}$ the \textit{lamplighter random walk} over the base group $\Gamma$ with law $\mu$. The lamplighter random walk starting at $(\mathbf{0},e)$ can also be described by a sequence of $G$-valued random variables $Z_{n}$ in the following way:
\begin{equation}\label{lamplighter random variables}
Z_{0}:=(\mathbf{0},e),\ Z_{n}=Z_{n-1}i_{n},\mbox{ for all } n\geq1,
\end{equation}
where $i_{n}=(f_{n},z_{n})$ is a sequence of i.i.d. $G$-valued random variables governed by the probability measure $\mu$.
We write $Z_{n}=(\eta_{n},X_{n})$, where $\eta_{n}$ is the random configuration of lamps at time $n$ and $X_{n}$ is the random element of $\Gamma$ at which the lamplighter stands at time $n$. Therefore, the projection of $Z_{n}=(\eta_{n},X_{n})$ on $\Gamma$ is the random walk $X_{n}$ starting at the identity $e$ and with law
\begin{equation*}
\nu(x)=\sum_{\eta\in\mathcal{C}}\mu(\eta,x).
\end{equation*}
The law $\nu$ of a random walk on $\Gamma$ is said to have \textit{finite first moment} if
\begin{equation*}
\sum_{x\in \Gamma}d(e,x)\nu(x)<\infty.
\end{equation*}
As a \textit{general assumption}, we assume the transience of $X_{n}$. By transience of $X_{n}$, every finite subset of $\Gamma$ is left with probability one after a finite time. Therefore, the sequence $(\eta_{n})_{n\in\mathbb{N}_{0}}$ of configurations converges pointwise to a random limit configuration $\eta_{\infty}$, which is not necessarily finitely supported. Now a natural question is whether the behaviour of the random walk at infinity is completely described by these limit configurations. For this purpose, a notion of ``infinity'' for the lamplighter group is needed.
\section{Convergence to the boundary}\label{sec:TheNaturalBoundary}
\paragraph{The boundary of $\Gamma$.}Consider the base group $\Gamma$ as above, with the word metric $d(\cdot,\cdot)$ on it. Let $\widehat{\Gamma}=\Gamma\cup\partial\Gamma$ be an extended space (not necessarily compact) with ideal \textit{boundary} $\partial\Gamma$ (the set of points at infinity), such that $\widehat{\Gamma}$ is compatible with the group structure on $\Gamma$ in the sense that the action of $\Gamma$ on itself extends to an action on $\widehat{\Gamma}$ by homeomorphisms.
\paragraph{3.1} \label{sec:BasicAssumptions}\textbf{Basic assumptions.} Returning to the random walk $X_{n}$ on $\Gamma$, assume that:
\begin{description}
\item(a) The law $\nu$ of the random walk $X_{n}$ has finite first moment on $\Gamma$.
\item(b) The random walk $X_{n}$ converges almost surely to a random element of $\partial\Gamma$: there is a $\partial\Gamma$-valued random variable $X_{\infty}$ such that in the topology of $\widehat{\Gamma}$,
\begin{equation*}
\lim_{n\to\infty}X_{n}=X_{\infty},\mbox{ almost surely for every starting point } x\in\Gamma.
\end{equation*}
\item(c) The boundary $\partial\Gamma$ is such that the following \textbf{convergence property} holds: whenever $(x_{n}),(y_{n})$ are sequences in $\Gamma$ such that $(x_{n})$ accumulates at $\xi\in \partial\Gamma$ and
\begin{equation}\label{Convergence}
\tag{\textbf{CP}}
d(x_{n},y_{n})/d(x_{n},e)\to 0\,\mbox{ as }n\to\infty,
\end{equation}
then $(y_{n})$ also accumulates at $\xi$.
\end{description}
\paragraph{The boundary of $G=\mathbb{Z}_{2}\wr\Gamma$.}Remark that the natural compactification of $\mathcal{C}$ in the topology of pointwise convergence is the set $\widehat{\mathcal{C}}$ of all, finitely or infinitely supported, configurations. Since the vertex set of the Cayley graph of $G$ is $\mathcal{C}\times\Gamma$, the space $\partial G=(\widehat{\mathcal{C}}\times \widehat{\Gamma})\setminus (\mathcal{C} \times\Gamma)$ is a \textit{natural boundary} at infinity for $G$. Let us write $\widehat{G}=\widehat{\mathcal{C}} \times \widehat{\Gamma}$.
The boundary $\partial G$ contains many points towards which the lamplighter random walk $Z_{n}$ does not converge, as we shall see later. For this reason, we define a ``smaller'' boundary $\Omega$ for the lamplighter group (which is still dense in $\partial G$), and we shall prove that the random walk $Z_{n}$ converges to a random variable with values in $\Omega$. Define
\begin{equation}\label{omega}
\Omega =\bigcup_{\mathfrak{u} \in \partial\Gamma}\mathcal{C}_{\mathfrak{u}}\times \{\mathfrak{u}\},
\end{equation}
where a configuration $\zeta$ is in $\mathcal{C}_{\mathfrak{u}}$ if and only if $\mathfrak{u}$ is its only accumulation point (i.e., there may be infinitely many lamps switched on only in a neighbourhood of $\mathfrak{u}$) or if $\zeta$ is finitely supported. The set $\mathcal{C}_{\mathfrak{u}}$ is dense in $\widehat{\mathcal{C}}$ because $\mathcal{C}\subset\mathcal{C}_{\mathfrak{u}}$ and $\mathcal{C}$ is dense in $\widehat{\mathcal{C}}$. Hence, $\Omega$ is also dense in $\partial G$.
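For instance, if $\Gamma=\mathbb{Z}$ with $\partial\Gamma=\{-\infty,+\infty\}$ (a choice made only for illustration), then the configuration with lamps on exactly at the non-negative even integers accumulates only at $+\infty$, whereas the configuration with lamps on at all even integers accumulates at both ends and hence belongs to no $\mathcal{C}_{\mathfrak{u}}$:
\begin{equation*}
\zeta_{+}=\sum_{n\geq 0}\delta_{2n}\in\mathcal{C}_{+\infty},
\qquad
\zeta_{\pm}=\sum_{n\in\mathbb{Z}}\delta_{2n}\notin\bigcup_{\mathfrak{u}\in\partial\Gamma}\mathcal{C}_{\mathfrak{u}};
\end{equation*}
pairs containing the latter configuration lie in $\partial G\setminus\Omega$.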
The action of $G=\mathbb{Z}_{2}\wr\Gamma$ on itself extends to an action on $\widehat{G}$ by homeomorphisms and leaves the Borel subset $\Omega\subset\partial G$ invariant. If we take $(\eta,x)\in G$ and $(\zeta,\mathfrak{u}) \in \Omega$, then
\begin{equation*} \label{action}
(\eta,x)(\zeta,\mathfrak{u})=(\eta \oplus T_{x}\zeta,x \mathfrak{u}).
\end{equation*}
If $\mathfrak{u}\in\partial\mathcal{G}amma$ and $\zeta$ is finitely supported or accumulates only at $\mathfrak{u}$, then $T_{x}\zeta$ can at most accumulate at $x\mathfrak{u}$. Also the configuration $\epsilonta\oplus T_{x}\zeta$ accumulates again at most at $x\mathfrak{u}$ because $\epsilonta$ is finitely supported, so that adding $\epsilonta$ modifies $T_{x}\zeta$ only in finitely many points.
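For concreteness, the boundary action above is just the wreath product group law extended by continuity; writing it out (a routine check, under the standard convention $(T_{x}\eta')(y)=\eta'(x^{-1}y)$, which is an assumption about the notation rather than a statement from the text):
\begin{equation*}
(\eta,x)(\eta',x')=(\eta\oplus T_{x}\eta',\,xx'),
\end{equation*}
and the formula for $(\eta,x)(\zeta,\mathfrak{u})$ is obtained by letting $x'\to\mathfrak{u}$ in $\widehat{\Gamma}$ and $\eta'\to\zeta$ pointwise, since $T_{x}$ is continuous with respect to pointwise convergence of configurations.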
When the \textbf{Basic assumptions $(3.1)$} hold, we are able to state the first result on convergence of the lamplighter random walk $Z_{n}$ to $\Omega$.
\begin{theo}\label{ConvergenceTheorem}
Let $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on the group $G=\mathbb{Z}_{2}\wr\Gamma$ such that $\operatorname{supp}(\mu)$ generates $G$. If $\Omega$ is defined as in \eqref{omega} and $\mu$ has finite first moment, then there exists an $\Omega$-valued random variable $Z_{\infty}=(\eta_{\infty},X_{\infty})$ such that $Z_{n}\to Z_{\infty}$ almost surely, for every starting point. Moreover, the distribution of $Z_{\infty}$ is a continuous measure on $\Omega$.
\end{theo}
\begin{proof}
Without loss of generality, we may suppose that the starting point of the lamplighter random walk $Z_{n}$ is $id=(\mathbf{0},e)$; that is, we start the random walk at the identity element $e$ of $\Gamma$ with all the lamps switched off.
The support of $\mu$ generates $G=\mathbb{Z}_{2}\wr\Gamma$ as a group; therefore, the law $\nu$ of the projected random walk $X_{n}$ on $\Gamma$ also has support generating $\Gamma$. By assumption, the random walk $X_{n}$ on $\Gamma$ is transient, and converges almost surely to a random variable $X_{\infty}\in\partial\Gamma$.
Now, recall that the lamplighter random walk $Z_{n}$ has finite first moment. Let $(y_{n})$ be an unbounded sequence of elements in $\Gamma$ such that $y_{n}\in \operatorname{supp}(T_{X_{n-1}}f_{n})$ for each $n$, where $f_{n}$ is a finitely supported configuration; that is, $y_{n}$ is a group element where a lamp is switched on. Since, by assumption, the law $\nu$ of the random walk $X_{n}$ on $\Gamma$ has finite first moment, the following holds with probability $1$:
\begin{equation*}
d(y_{n},X_{n-1})/n\to 0,\mbox{ as }n\to\infty.
\end{equation*}
Moreover, it follows from \textit{Kingman's subadditive ergodic theorem} (see Kingman \cite{Kingman1968}) that there exists a finite constant $m>0$ such that
\begin{equation*}
d(X_{n},e)/n\to m,\ \mbox{as}\ n\to\infty,\ \mbox{almost surely}.
\end{equation*}
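The triangle-inequality step used next can be spelled out (a routine verification under the stated assumptions): since the increments $X_{n-1}^{-1}X_{n}$ are i.i.d.\ with finite first moment, $d(X_{n},X_{n-1})/n\to 0$ almost surely, and hence
\begin{equation*}
\frac{d(X_{n},y_{n})}{d(X_{n},e)}\leq \frac{d(X_{n},X_{n-1})/n+d(X_{n-1},y_{n})/n}{d(X_{n},e)/n}\longrightarrow \frac{0+0}{m}=0,\ \mbox{as }n\to\infty.
\end{equation*}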
Using the last two equations and the triangle inequality, we obtain $d(X_{n},y_{n})/d(X_{n},e)\to 0$, as $n\to\infty$. Recall now that the boundary $\partial\Gamma$ satisfies the convergence property \eqref{Convergence} and that $X_{n}\to X_{\infty}\in\partial\Gamma$. Therefore, the sequence $(y_{n})$ accumulates at $X_{\infty}$. Now, from the definition of the group operation on $G$ and equation \eqref{lamplighter random variables}, one sees that the configuration $\eta_{i}$ of lamps at every moment of time $i$ is obtained by adding (componentwise, modulo $2$) to the configuration $\eta_{i-1}$ the configuration $T_{X_{i-1}}f_{i}$, where $f_{i}$ is finitely supported. Hence
\begin{equation*}
\operatorname{supp}(\eta_{n})\subset \bigcup_{i=1}^{n}\operatorname{supp}(T_{X_{i-1}}f_{i}),
\end{equation*}
which is a union of finite sets. By the above, any sequence $y_{n}\in \operatorname{supp}(T_{X_{n-1}}f_{n})$ accumulates at $X_{\infty}$; therefore $\operatorname{supp}(\eta_{n})$ can accumulate only at $X_{\infty}$. That is, the random configuration $\eta_{n}$ converges pointwise to a configuration $\eta_{\infty}$ which accumulates at most at $X_{\infty}$, and $Z_{n}=(\eta_{n},X_{n})$ converges to a random element $Z_{\infty}=(\eta_{\infty},X_{\infty})\in\Omega$.
When the limit distribution of $X_{n}$ is a continuous measure on $\partial\Gamma$ (i.e., it carries no point mass), the same is true for the limit distribution of $Z_{n}=(\eta_{n},X_{n})$ on $\Omega$. Indeed, if some single point of $\Omega$ had positive measure, then its projection to $\partial\Gamma$ would be a single point of positive measure, contradicting the continuity of the limit distribution on $\partial\Gamma$.
Even when the limit distribution $\nu_{\infty}$ of $X_{n}$ is not a continuous measure on $\partial\Gamma$, the limit distribution of $Z_{n}$ is still continuous. If the measure $\nu_{\infty}$ is not continuous, there exists $\mathfrak{u}\in\partial\Gamma$ with $\nu_{\infty}(\{\mathfrak{u}\})=\mathbb{P}[X_{\infty}=\mathfrak{u}\mid X_{0}=e]>0$. Assume that the limit distribution $\mu_{\infty}$ of $Z_{n}$ is not continuous. Then there is a configuration $\phi$ which occurs as the limit configuration of the lamplighter random walk $Z_{n}$ with positive probability. Hence, for every $x\in\Gamma$, all trajectories of the random walk $X_{n}$ starting at $x$ and converging to the deterministic boundary element $\mathfrak{u}$ have the same limit configuration $\phi_{x}$, accumulating only at $\mathfrak{u}$. Note that the group $\Gamma$ also acts on the space of limit configurations by translations, and for every $y\in \operatorname{supp}(\nu)$, $T_{y}\phi_{x}=\phi_{xy}$. Since the support of $\nu$ generates $\Gamma$, this cannot happen. Therefore, the distribution of $Z_{\infty}$ is a continuous measure. One can also prove the continuity of the limit distribution using the Borel--Cantelli lemma.
\end{proof}
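The pointwise stabilization of the lamps established in Theorem \ref{ConvergenceTheorem} is easy to observe numerically. The following sketch (not part of the paper; all names are ours) simulates a lamplighter walk over the free group $F_{2}$, a group with infinitely many ends: at each step the walker toggles the lamp at its current position with probability $1/2$, then moves by a uniformly random generator. Since the base walk has positive speed, the lamp configuration restricted to a fixed ball eventually freezes.

```python
import random

GENS = "aAbB"                      # free group F_2; uppercase letter = inverse
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def mult(word, g):
    """Right-multiply a reduced word by a generator, with free cancellation."""
    if word and word[-1] == INV[g]:
        return word[:-1]
    return word + g

def lamplighter_walk(steps, rng):
    """Simulate Z_n = (eta_n, X_n) on Z_2 wr F_2.

    Returns the final position and snapshots of supp(eta_n), as sets of
    reduced words, at half-time and at the end.
    """
    pos = ""                       # X_n; the identity of F_2 is the empty word
    lamps = set()                  # supp(eta_n); ^= {pos} is addition mod 2
    snapshots = {}
    for n in range(1, steps + 1):
        if rng.random() < 0.5:     # toggle the lamp at the current position
            lamps ^= {pos}
        pos = mult(pos, rng.choice(GENS))   # move by a random generator
        if n in (steps // 2, steps):
            snapshots[n] = set(lamps)
    return pos, snapshots

if __name__ == "__main__":
    rng = random.Random(1)
    pos, snaps = lamplighter_walk(2000, rng)
    restrict = lambda conf, r: {w for w in conf if len(w) <= r}
    # The base walk has positive speed, so X_n escapes toward an end of F_2 ...
    print("distance of X_2000 from e:", len(pos))
    # ... and the lamps in a fixed ball freeze: the restriction to B(e,3)
    # is the same at n = 1000 and n = 2000 (pointwise convergence of eta_n).
    print("stable in B(e,3):", restrict(snaps[1000], 3) == restrict(snaps[2000], 3))
```

The walker never returns near the origin once it has escaped, so the lamps in any fixed ball are toggled only finitely often, mirroring the argument in the proof.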
\section{The half-space method and the Poisson boundary of the lamplighter random walk}\label{sec:The Poisson boundary}
Under the assumptions of Theorem \ref{ConvergenceTheorem}, let $\mu_{\infty}$ be the distribution of $Z_{\infty}$ on $\Omega$, given that the position of the random walk $Z_{n}$ at time $n=0$ is $id=(\mathbf{0},e)$. This is a probability measure on $\Omega$ defined for Borel sets $U\subset\Omega$ by
\begin{equation*}
\mu_{\infty}(U)=\mathbb{P}[Z_{\infty}\in U\mid Z_{0}=(\mathbf{0},e)].
\end{equation*}
The measure $\mu_{\infty}$ is a \textit{harmonic measure} for the random walk $Z_{n}$ with law $\mu$; that is, it satisfies the convolution equation $\mu \ast\mu_{\infty}=\mu_{\infty}$. Since $G$ acts on $\Omega$ by measurable bijections and the measure $\mu_{\infty}$ is stationary with respect to $\mu$, it follows that $(\Omega,\mu_{\infty})$ is a \textit{$\mu$-boundary} (or a \textit{Furstenberg boundary}) for the random walk $Z_{n}$ with law $\mu$, in the sense of Furstenberg \cite{Furstenberg}. There exists a maximal $\mu$-boundary, which is called the \textit{Poisson boundary} of the random walk $Z_{n}$.
A typical situation in which a $\mu$-boundary $(B,\lambda)$ arises is when $B$ is a topological or combinatorial boundary of the group $G$, and almost all paths of the random walk $Z_{n}$ with law $\mu$ on $G$ converge (in a sense to be specified in each particular case) to a limit point $Z_{\infty}\in B$. The space $B$, considered as a measure space with the resulting hitting distribution $\lambda$ (the \textit{harmonic measure} on $B$), is then a $\mu$-boundary of the random walk $Z_{n}$.
We want to know whether the measure space $(\Omega,\mu_{\infty})$ is indeed maximal. In order to check the maximality of the $\mu$-boundary, we use the Strip Criterion \ref{StripCriterion} for identifying the Poisson boundary, due to Kaimanovich \cite[Thm.~$6.5$, p.~677]{Kaimanovich2000} and \cite[Thm.~$5.19$]{KaimanovichWoess2002}. This criterion is symmetric with respect to time reversal and leads to a simultaneous identification of the Poisson boundaries of the random walk and of the reflected random walk. Consider now the \textit{reflected random walk} $\check{Z}_{n}=(\check{\eta}_{n},\check{X}_{n})$ on $G$ with law $\check{\mu}(g)=\mu(g^{-1})$ for all $g\in G$, starting at $(\mathbf{0},e)$. The \textit{reflected random walk} $\check{X}_{n}$ on $\Gamma$ is the random walk on $\Gamma$ with law $\check{\nu}(x)=\nu(x^{-1})$, for all $x\in\Gamma$, starting at $e$.
\paragraph*{The half-space method}\label{sec:Method for constructing the strip}
Assume that:
\begin{enumerate}
\item The \textbf{Basic assumptions $(3.1)$} hold for $X_{n}$ and $\check{X}_{n}$. Let $\nu_{\infty}$ and $\check{\nu}_{\infty}$ be the respective hitting distributions on $\partial\Gamma$.
\item For $\nu_{\infty}\times\check{\nu}_{\infty}$-almost every pair $(\mathfrak{u},\mathfrak{v})\in\partial \Gamma\times\partial\Gamma$, one has a strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ which satisfies the conditions of Proposition \ref{StripCriterion}: it is a subset of $\Gamma$, the assignment is $\Gamma$-equivariant, and it has subexponential growth, i.e.,
\begin{equation}\label{SubexponentialGrowthBaseStrip}
\dfrac{1}{n}\log|\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n)|\to 0,\mbox{ as }n\to\infty,
\end{equation}
where $B(e,n)=\{x\in\Gamma:\ d(e,x)\leq n\}$ is the ball with center $e$ and radius $n$ in $\Gamma$.
\item For every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, one can assign to the triple $(\mathfrak{u},\mathfrak{v},x)$ a partition of $\Gamma$ into \textit{half-spaces} $\Gamma_{\pm}$ such that $\Gamma_{+}$ (respectively, $\Gamma_{-}$) contains a neighbourhood of $\mathfrak{u}$ (respectively, $\mathfrak{v}$), and the assignments $(\mathfrak{u},\mathfrak{v},x)\mapsto\Gamma_{\pm}$ are $\Gamma$-equivariant.
\end{enumerate}
In the last item, one can partition $\Gamma$ into more than two subsets and the method still applies. However, the important subsets are the ones containing a neighbourhood of $\mathfrak{u}$ (respectively, $\mathfrak{v}$), since only there may infinitely many lamps be switched on (because $\mathfrak{u}$ and $\mathfrak{v}$ are the respective boundary points toward which the random walks $X_{n}$ and $\check{X}_{n}$ converge). What we want is to build a finitely supported configuration associated with pairs $(\phi_{+},\phi_{-})$ of limit configurations (of the lamplighter random walk and of the reflected random walk) accumulating at $\mathfrak{u}$ and $\mathfrak{v}$, respectively. In order to do this, we restrict $\phi_{+}$ and $\phi_{-}$ to $\Gamma_{-}$ and $\Gamma_{+}$, respectively, and then ``glue together'' the restrictions. Since the new configuration depends on the partition of $\Gamma$, we cannot choose the same partition for all $x$: that would give a constant configuration, which is not equivariant. Therefore, the partition of $\Gamma$ should depend on $x$.
We state here the main result of this paper.
\begin{theo}\label{PoissonTheorem}
Let $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on $G=\mathbb{Z}_{2}\wr\Gamma$, such that $\operatorname{supp}(\mu)$ generates $G$. Suppose that $\mu$ has finite first moment and $\Omega$ is defined as in \eqref{omega}. If the above assumptions are satisfied, then the measure space $(\Omega,\mu_{\infty})$ is the Poisson boundary of $Z_{n}$, where $\mu_{\infty}$ is the limit distribution on $\Omega$ of $Z_{n}$ starting at $id=(\mathbf{0},e)$.
\end{theo}
\begin{proof}
In order to apply the Strip Criterion \ref{StripCriterion}, we need $\mu$- and $\check{\mu}$-boundaries for the lamplighter random walk $Z_{n}$ and the reflected lamplighter random walk $\check{Z}_{n}$, respectively. By Theorem \ref{ConvergenceTheorem}, each of the random walks $Z_{n}$ and $\check{Z}_{n}$ starting at $id$ converges almost surely to an $\Omega$-valued random variable. If $\mu_{\infty}$ and $\check{\mu}_{\infty}$ are their respective limit distributions on $\Omega$, then the spaces $(\Omega,\mu_{\infty})$ and $(\Omega,\check{\mu}_{\infty})$ are $\mu$- and $\check{\mu}$-boundaries of the respective random walks.
Let us take $b_{+}=(\phi_{+},\mathfrak{u})$, $b_{-}=(\phi_{-},\mathfrak{v})\in\Omega$, where $\phi_{+}$ and $\phi_{-}$ are the limit configurations of $Z_{n}$ and $\check{Z}_{n}$, respectively, and $\mathfrak{u},\mathfrak{v}\in\partial\Gamma$ are their only respective accumulation points. By the continuity of $\mu_{\infty}$ and $\check{\mu}_{\infty}$, the set $\{(b_{+},b_{-})\in\Omega\times\Omega:\mathfrak{u}=\mathfrak{v}\}$ has $(\mu_{\infty}\times\check{\mu}_{\infty})$-measure $0$, so that in constructing the strip $S(b_{+},b_{-})$ we shall consider only the case $\mathfrak{u}\neq\mathfrak{v}$.
Using the third item in the above assumptions, consider a partition of $\Gamma$ into $\Gamma_{+}$, $\Gamma_{-}$, and possibly $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$, where $\Gamma_{+}$ (respectively, $\Gamma_{-}$) contains a neighbourhood of $\mathfrak{u}$ (respectively, $\mathfrak{v}$), and $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$ is the remaining subset (which may be empty). The set $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$ contains neither $\mathfrak{u}$ nor $\mathfrak{v}$. The restriction of $\phi_{+}$ to $\Gamma_{-}$ (respectively, of $\phi_{-}$ to $\Gamma_{+}$) is finitely supported, since its only accumulation point is $\mathfrak{u}$ (respectively, $\mathfrak{v}$), which has a neighbourhood disjoint from $\Gamma_{-}$ (respectively, $\Gamma_{+}$). Now ``glue together'' the restriction of $\phi_{+}$ to $\Gamma_{-}$ and that of $\phi_{-}$ to $\Gamma_{+}$ in order to get the new configuration
\begin{equation}\label{StripConfiguration}
\Phi(b_{+},b_{-},x)=
\begin{cases}
\phi_{-}, & \mbox{on}\ \Gamma_{+}\\
\phi_{+}, & \mbox{on}\ \Gamma_{-}\\
0, & \mbox{on}\ \Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})
\end{cases}
\end{equation}
on $\Gamma$, which is, by construction, finitely supported. Now, the sought-for ``bigger'' strip $S(b_{+},b_{-})\subset G$ is the set
\begin{equation}\label{LamplighterStrip}
S(b_{+},b_{-})=\{\left(\Phi,x\right) :\ x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\}
\end{equation}
of all pairs $(\Phi,x)$, where $\Phi=\Phi(b_{+},b_{-},x)$ is the configuration defined above and $x$ runs through the strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ in $\Gamma$. This is a subset of $G=\mathbb{Z}_{2}\wr\Gamma$. We prove that the map $(b_{+},b_{-})\mapsto S(b_{+},b_{-})$ is $G$-equivariant, i.e., for $g=(\eta,\gamma)\in G$:
\begin{equation*}
gS(b_{+},b_{-})=S(gb_{+},gb_{-}).
\end{equation*}
Indeed,
\begin{equation*}
gS(b_{+},b_{-})=(\eta,\gamma)\cdot\{\left(\Phi,x\right) :\ x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\}=\left\lbrace (\eta\oplus T_{\gamma}\Phi,\gamma x):\ x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\right\rbrace .
\end{equation*}
If $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, then $\gamma x\in\mathfrak{s}(\gamma \mathfrak{u},\gamma\mathfrak{v})$, since the assignment $(\mathfrak{u},\mathfrak{v})\mapsto\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is $\Gamma$-equivariant. Also,
\begin{equation*}
\eta\oplus T_{\gamma}\Phi=
\begin{cases}
\eta\oplus T_{\gamma}\phi_{-}, & \mbox{on}\ \gamma\Gamma_{+} \\
\eta\oplus T_{\gamma}\phi_{+}, & \mbox{on}\ \gamma\Gamma_{-} \\
0, & \mbox{on}\ \Gamma\setminus(\gamma\Gamma_{+}\cup \gamma\Gamma_{-}).
\end{cases}
\end{equation*}
This means that $\eta\oplus T_{\gamma}\Phi(b_{+},b_{-},x)=\Phi(gb_{+},gb_{-},\gamma x)$ for all $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$. On the other hand,
\begin{equation*}
S(gb_{+},gb_{-})=S((\eta\oplus T_{\gamma}\phi_{+},\gamma\mathfrak{u}),(\eta\oplus T_{\gamma}\phi_{-},\gamma\mathfrak{v}))=\{(\Phi(gb_{+},gb_{-},\gamma x),\gamma x):\ x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\},
\end{equation*}
that is, $gS(b_{+},b_{-})=S(gb_{+},gb_{-})$, which proves the $G$-equivariance of the strip $S(b_{+},b_{-})$.
Finally, let us prove that the strip $S(b_{+},b_{-})$ has subexponential growth. For this, let $(\eta,x)\in S(b_{+},b_{-})$ be such that $d_{G}((\mathbf{0},e),(\eta,x))\leq n$. From the definition of the metrics $d_{G}(\cdot,\cdot)$ and $d(\cdot,\cdot)$ on $G$ and $\Gamma$, respectively, it follows that $d(e,x)\leq n$. Therefore, if
\begin{equation}\label{small_strip}
(\eta,x)\in S(b_{+},b_{-})\cap B((\mathbf{0},e),n),\mbox{ then }x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n),
\end{equation}
where $B((\mathbf{0},e),n)$ (respectively, $B(e,n)$) is the ball with center $id=(\mathbf{0},e)$ (respectively, $e$) and radius $n$ in $G$ (respectively, $\Gamma$). Since to every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ we associate exactly one configuration $\Phi$ in $S(b_{+},b_{-})$, equation \eqref{small_strip} implies that
\begin{equation*}
|S(b_{+},b_{-})\cap B((\mathbf{0},e),n)|\leq |\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n)|.
\end{equation*}
Now, the assumption \eqref{SubexponentialGrowthBaseStrip} that $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ has subexponential growth leads to
\begin{equation*}
\dfrac{\log|S(b_{+},b_{-})\cap B((\mathbf{0},e),n)|}{n}\to 0,\mbox{ as }n\to\infty,
\end{equation*}
which proves the subexponential growth of the strip.
Since to almost every pair of points $(b_{+},b_{-})\in\Omega\times\Omega$ we have assigned a strip $S(b_{+},b_{-})$ satisfying the conditions of Proposition \ref{StripCriterion}, it follows that the measure space $(\Omega,\mu_{\infty})$ is the Poisson boundary of the lamplighter random walk $Z_{n}$.
\end{proof}
As an application of the half-space method, we consider several classes of base groups $\Gamma$: groups with infinitely many ends, hyperbolic groups, and Euclidean lattices.
\subsection{Groups with infinitely many ends}\label{GraphsWithInfEnds}
The concept of \textit{ends} in the discrete setting goes back to Freudenthal \cite{Freudenthal1944}. The \textit{space of ends} $\partial\Gamma$ of a finitely generated group $\Gamma$ is defined as the space of ends of its Cayley graph with respect to a finite generating set. Recall that an end of a graph is an equivalence class of one-sided infinite paths, where two such paths are equivalent if there is a third one which meets each of the two infinitely often. We omit the description of the topology of $\widehat{\Gamma}=\Gamma\cup\partial\Gamma$, which can be found in Woess \cite{woess}. The space $\widehat{\Gamma}=\Gamma\cup\partial\Gamma$ is called the \textit{end compactification} of $\Gamma$.
A finitely generated group $\Gamma$ has one, two or infinitely many ends. If it has one end, then the end compactification is not suitable for a good description of the structure of $\Gamma$ at infinity. If it has two ends, then it is quasi-isometric with the two-way-infinite path, and the Poisson boundary of any random walk with finite first moment on $\Gamma$ is trivial (see Woess \cite[Thm.~$25.4$]{woess}). Thus, we shall consider here the case when the underlying group $\Gamma$ for the lamplighter random walk $Z_{n}$ is a \textit{group with infinitely many ends}. The natural geometric boundary of $\Gamma$ is the space of ends $\partial\Gamma$.
We shall also use the theory of cuts and structure trees developed by Dunwoody; see the book by Dicks and Dunwoody \cite{Dunwoody&Dicks}, or, for other detailed descriptions, Woess \cite{woess} and Thomassen and Woess \cite{Thomassen&Woess}. A detailed study of structure theory can be very fruitful for obtaining information on the behaviour of random walks. We shall again omit the description of the structure tree $\mathcal{T}$ of the Cayley graph of a finitely generated group $\Gamma$ and of the structure map $\varphi$ between the Cayley graph of $\Gamma$ and its structure tree. The structure tree $\mathcal{T}$ is countable, but not necessarily locally finite, and $\Gamma$ acts on $\mathcal{T}$ by automorphisms.
The following result is a particular case of Theorem \ref{PoissonTheorem}. It is the first example where the half-space method can be applied in order to find the Poisson boundary of lamplighter random walks over groups with infinitely many ends.
\begin{theo}\label{PoissonInfEnds}
Let $\Gamma$ be a group with infinitely many ends and let $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on $G=\mathbb{Z}_{2}\wr \Gamma$, such that $\operatorname{supp}(\mu)$ generates $G$. Suppose that $\mu$ has finite first moment and $\Omega$ is defined as in \eqref{omega}. If $\mu_{\infty}$ is the limit distribution on $\Omega$ of the random walk $Z_{n}$ starting at $id=(\mathbf{0},e)$, then $(\Omega,\mu_{\infty})$ is the Poisson boundary of the random walk $Z_{n}$.
\end{theo}
\begin{proof}
In order to apply Theorem \ref{PoissonTheorem}, we show that the conditions required in the half-space method are satisfied for the base group $\Gamma$ and the random walks $X_{n}$ on $\Gamma$. First of all, one can check that the space of ends $\partial\Gamma$ satisfies the convergence property \eqref{Convergence}. When $\Gamma$ is a group with infinitely many ends and the law $\nu$ of the random walk $X_{n}$ has finite first moment, then by Woess \cite{WoessAmenable1989}, $X_{n}$ converges in the end topology to a random end in $\partial\Gamma$. The same is true for the random walk $\check{X}_{n}$ with law $\check{\nu}$ on $\Gamma$. Moreover, the limit distributions are continuous measures on $\partial\Gamma$. Let $\nu_{\infty}$ and $\check{\nu}_{\infty}$ be the respective limit distributions on $\partial\Gamma$. Thus, the \textbf{Basic assumptions $(3.1)$} hold for $X_{n}$ and $\check{X}_{n}$, and the first item in the half-space method is fulfilled.
Next, one of the main points of the method is to assign a strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})\subset\Gamma$ to almost every pair of ends $(\mathfrak{u},\mathfrak{v})\in\partial\Gamma\times\partial\Gamma$, and to prove that it satisfies the conditions of Proposition \ref{StripCriterion}. By the continuity of $\nu_{\infty}$ and $\check{\nu}_{\infty}$, the set $\{(\mathfrak{u},\mathfrak{v})\in\partial\Gamma\times\partial\Gamma:\mathfrak{u}=\mathfrak{v}\}$ has $(\nu_{\infty}\times\check{\nu}_{\infty})$-measure $0$, so that in constructing the strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ we shall consider only the case $\mathfrak{u}\neq\mathfrak{v}$. For this, let $F$ be a $D$-cut, i.e., a finite set of edges of the Cayley graph of $\Gamma$ whose deletion disconnects $\Gamma$ into precisely two connected components. Such cuts are used in defining the structure tree of the graph; for details, see Dicks and Dunwoody \cite{Dunwoody&Dicks}. Denote by $F^{0}$ the set of all end vertices (in the Cayley graph of $\Gamma$) of the edges of $F$. For every pair of ends $(\mathfrak{u},\mathfrak{v})\in\partial \Gamma\times\partial\Gamma$, define the strip
\begin{equation*}
\mathfrak{s}(\mathfrak{u},\mathfrak{v})=\bigcup\{\gamma F^{0}:\ \gamma\in\Gamma,\ \widehat{\mathcal{U}}(\mathfrak{u},\gamma F)\neq \widehat{\mathcal{U}}(\mathfrak{v},\gamma F)\}.
\end{equation*}
The set $\mathcal{U}(\mathfrak{u},F)$ is the connected component which represents the end $\mathfrak{u}$ when we remove the finite set $F$ from $\Gamma$, and $\widehat{\mathcal{U}}(\mathfrak{u},F)$ is its completion (which contains $\mathfrak{u}$) in $\widehat{\Gamma}$. It is clear that $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is a subset of $\Gamma$, and moreover $\gamma \mathfrak{s}(\mathfrak{u},\mathfrak{v})=\mathfrak{s}(\gamma\mathfrak{u},\gamma\mathfrak{v})$ for every $\gamma\in\Gamma$. The strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is the union of all $\gamma F^{0}$ such that the connected components $\widehat{\mathcal{U}}(\mathfrak{u},\gamma F)$ and $\widehat{\mathcal{U}}(\mathfrak{v},\gamma F)$, which contain the ends $\mathfrak{u}$ and $\mathfrak{v}$, respectively, after the set $\gamma F$ is removed from $\Gamma$, are not the same. In other words, $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is the union of all $\gamma F^{0}$ such that the sides of $\gamma F$ (one connected component of $\Gamma\setminus\gamma F$ and its complement in $\Gamma$), seen as edges of the structure tree $\mathcal{T}$, lie on the geodesic between $\varphi\mathfrak{u}$ and $\varphi\mathfrak{v}$, where $\varphi$ is the structure map between $\partial\Gamma$ and its structure tree $\mathcal{T}$. This geodesic can be empty (when $\varphi(\mathfrak{u})=\varphi(\mathfrak{v})$), finite (when $\varphi(\mathfrak{u})$ and $\varphi(\mathfrak{v})$ are vertices of the structure tree $\mathcal{T}$), one-way infinite, or two-way infinite. The latter holds when $\mathfrak{u},\mathfrak{v}$ are distinct thin ends (i.e., ends of finite diameter), and we have to check the subexponential growth of the strip only in this case.
Using the properties of the structure tree of the Cayley graph of $\Gamma$, there is an integer $k>0$ such that the following holds: if $A_{0},A_{1},\ldots ,A_{k}$ are oriented edges of the structure tree $\mathcal{T}$, identified with connected components in $\Gamma$, such that $A_{0} \supset A_{1} \supset \cdots \supset A_{k}$ properly, then $d(A_{k},\Gamma\setminus A_{0})\geq 2$. Finiteness of $F^{0}$ then implies that there is a constant $c>0$ such that
\begin{equation*} \label{subexp}
|\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n) |\leq cn,
\end{equation*}
for all $n$ and for all distinct thin ends $\mathfrak{u},\mathfrak{v}$. This proves the subexponential growth of $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$.
Next, let us turn to the partition of $\Gamma$ into half-spaces. By the definition of $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is contained in some cut $\gamma F$, for some $\gamma\in\Gamma$. Since a $D$-cut $F$ in a graph has the property that the sets $\gamma F$, $\gamma\in\Gamma$, do not intersect, every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is contained in exactly one cut $\gamma F$. We partition $\Gamma$ as follows: for every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, we look at the $D$-cut $\gamma F$ containing $x$ and remove it from $\Gamma$. Then the set $\Gamma\setminus\gamma F$ consists of precisely two connected components. This follows from the definition of a $D$-cut and from the finiteness of the removed set $F$. Moreover, the connected components containing $\mathfrak{u}$ and $\mathfrak{v}$ are different, by the definition of the strip. Let $\Gamma_{+}$ be the connected component of $\Gamma\setminus\gamma F$ which contains $\mathfrak{u}$, and let $\Gamma_{-}$ be its complement in $\Gamma$, which contains $\mathfrak{v}$. One sees here that the partition of $\Gamma$ into the half-spaces $\Gamma_{+}$ and $\Gamma_{-}$ depends on the cut $\gamma F$ containing $x$, that is, on $x$. The assignments $(\mathfrak{u},\mathfrak{v},x)\mapsto\Gamma_{\pm}$ are $\Gamma$-equivariant. From the above, it follows that all the assumptions needed in the half-space method hold in the case of a group with infinitely many ends. Now we apply Theorem \ref{PoissonTheorem}.
By Theorem \ref{ConvergenceTheorem}, each of the random walks $Z_{n}$ and $\check{Z}_{n}$ starting at $id$ converges almost surely to an $\Omega$-valued random variable. If $\mu_{\infty}$ and $\check{\mu}_{\infty}$ are their respective limit distributions on $\Omega$, then the spaces $(\Omega,\mu_{\infty})$ and $(\Omega,\check{\mu}_{\infty})$ are $\mu$- and $\check{\mu}$-boundaries of the respective random walks.
Take $b_{+}=(\phi_{+},\mathfrak{u})$, $b_{-}=(\phi_{-},\mathfrak{v})\in\Omega$, where $\phi_{+}$ and $\phi_{-}$ are the limit configurations of $Z_{n}$ and $\check{Z}_{n}$, respectively, and $\mathfrak{u},\mathfrak{v}\in\partial\Gamma$ are their only respective accumulation points. Define the configuration $\Phi(b_{+},b_{-},x)$ as in \eqref{StripConfiguration}, where $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$ is the empty set, and the strip $S(b_{+},b_{-})$ exactly as in \eqref{LamplighterStrip}. By Theorem \ref{PoissonTheorem}, $S(b_{+},b_{-})$ satisfies the conditions of Proposition \ref{StripCriterion}, and it follows that the space $(\Omega,\mu_{\infty})$ is the Poisson boundary of the lamplighter random walk $Z_{n}$ over $G=\mathbb{Z}_{2}\wr \Gamma$.
\end{proof}
\subsection{Hyperbolic groups}
Consider the group $\Gamma$ with the word metric $d(\cdot,\cdot)$. The group $\Gamma$ is called (word) \textit{hyperbolic} if its Cayley graph with respect to a finite generating set $S$ is hyperbolic. A graph is called \textit{hyperbolic} (in the sense of Gromov) if there is a $\delta \geq 0$ such that every geodesic triangle in the graph is $\delta$-thin. We shall not lay out the basic features of hyperbolic graphs and their hyperbolic boundary and compactification, which we denote by $\partial_{h}\Gamma$ and $\widehat{\Gamma}$, respectively. For details, the reader is invited to consult the texts by Gromov \cite{Gromov1987}, Ghys and de la Harpe \cite{GhysHarpe1990}, Coornaert, Delzant and Papadopoulos \cite{CoornaertDelzantPapadopoulos1990}, or, for a presentation in the context of random walks on graphs, Woess \cite[Section 22]{woess}.
If $\Gamma$ is a hyperbolic group, then one can relate its hyperbolic compactification to the end compactification: the former is finer, that is, the identity on $\Gamma$ extends to a continuous surjection from the hyperbolic to the end compactification which maps $\partial_{h}\Gamma$ onto $\partial\Gamma$. For any two distinct points $\mathfrak{u},\mathfrak{v}$ on the hyperbolic boundary $\partial_{h}\Gamma$, there is a geodesic $\overline{\mathfrak{u}\mathfrak{v}}$, which need not be unique. The boundary of an infinite, finitely generated hyperbolic group $\Gamma$ is either infinite or has cardinality $2$. In the latter case, $\Gamma$ is a group with two ends, quasi-isometric with the two-way-infinite path, and the Poisson boundary of any random walk with finite first moment is trivial. Thus, we assume that $\partial_{h}\Gamma$ is infinite, and consider lamplighter random walks $Z_{n}$ on $G=\mathbb{Z}_{2}\wr\Gamma$, where $\Gamma$ is a hyperbolic group. The natural geometric boundary of $\Gamma$ is the hyperbolic boundary $\partial_{h}\Gamma$. For the Poisson boundary of the lamplighter random walk $Z_{n}$ on $G=\mathbb{Z}_{2}\wr\Gamma$ we have to distinguish two cases: \textbf{(a)} infinite hyperbolic boundary and infinitely many ends, and \textbf{(b)} infinite hyperbolic boundary and only one end.
\paragraph{(a) Infinite hyperbolic boundary and infinitely many ends.} Since the identity on $\Gamma$ extends to a continuous surjection from the hyperbolic to the end compactification, mapping $\partial_{h}\Gamma$ onto $\partial\Gamma$, this is exactly the case treated in Section \ref{GraphsWithInfEnds}, i.e., that of a group with infinitely many ends, where the ends correspond to connected components of the hyperbolic boundary. The Poisson boundary of the lamplighter random walk $Z_{n}$ is given by Theorem \ref{PoissonInfEnds}, with the hyperbolic boundary $\partial_{h}\Gamma$ instead of the space of ends $\partial\Gamma$.
\paragraph{(b) Infinite hyperbolic boundary and only one end.} We want to determine the Poisson boundary of lamplighter random walks $Z_{n}$ on $G=\mathbb{Z}_{2}\wr\Gamma$, where $\Gamma$ is a finitely generated hyperbolic group with infinite boundary and only one end. In order to use the half-space method defined before, we shall need some additional definitions.
Consider the hyperbolic boundary $\partial_{h}\Gamma$ as described by equivalence classes of geodesic rays. For $y\in\Gamma$ and $\mathfrak{u}\in\partial_{h}\Gamma$ let $\pi=[y=y_{0},y_{1},\ldots,\mathfrak{u}]$ be a geodesic ray joining $y$ with $\mathfrak{u}$. For every $x\in\Gamma$ let $\beta_{\mathfrak{u}}(x,\pi)=\limsup_{i\to\infty}\bigl(d(x,y_{i})-i\bigr)$, and define the \textit{Busemann function} of the point $\mathfrak{u}\in\partial_{h}\Gamma$, $\beta_{\mathfrak{u}}:\Gamma\times\Gamma\rightarrow \mathbb{R}$, as follows:
\begin{equation*}
\beta_{\mathfrak{u}}(x,y)=\sup\{\beta_{\mathfrak{u}}(x,\pi'):\pi' \mbox{ is a geodesic ray from }y\mbox{ to } \mathfrak{u}\}.
\end{equation*}
The \textit{horosphere} with centre $\mathfrak{u}$ passing through $x\in\Gamma$, denoted by $H_{x}(\mathfrak{u})$, is the set
\begin{equation*}
H_{x}(\mathfrak{u})=\{ y\in\Gamma:\ \beta_{\mathfrak{u}}(x,y)=0\}.
\end{equation*}
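To make these definitions concrete, the following sketch (our own illustration, not part of the original text) computes the Busemann function in the simplest hyperbolic example $\Gamma=\mathbb{Z}$ with the word metric $d(x,y)=|x-y|$ and boundary point $\mathfrak{u}=+\infty$, where the geodesic ray from $y$ towards $\mathfrak{u}$ is $y, y+1, y+2,\ldots$ and the $\limsup$ stabilises at $y-x$:

```python
# Illustrative sketch (not from the paper): Busemann function on the
# group Z with word metric d(x, y) = |x - y| and boundary point u = +infinity.

def busemann_along_ray(x: int, y: int, n_terms: int = 100) -> int:
    """Approximate beta_u(x, pi) = limsup_i (d(x, y_i) - i) for the
    geodesic ray pi = [y, y+1, y+2, ...] towards u = +infinity."""
    values = [abs(x - (y + i)) - i for i in range(n_terms)]
    # The sequence is eventually constant, so the limsup is its tail value.
    return values[-1]

# For fixed x and large i, d(x, y+i) - i = (y + i - x) - i = y - x,
# hence beta_u(x, y) = y - x.
assert busemann_along_ray(x=3, y=10) == 10 - 3
assert busemann_along_ray(x=-5, y=2) == 2 - (-5)

# In this degenerate example the horosphere H_x(u) = { y : beta_u(x, y) = 0 }
# reduces to the single point {x}.
horosphere = [y for y in range(-50, 51) if busemann_along_ray(0, y) == 0]
assert horosphere == [0]
```

On a group with a richer boundary the horospheres are of course much larger; the point of the toy computation is only that the $\limsup$ in the definition is a genuine limit along a fixed ray.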
The following result is another special case of Theorem \ref{PoissonTheorem}.
\begin{theo}\label{PoissonHyperbolicGraphs}
Let $\Gamma$ be a finitely generated hyperbolic group with infinite hyperbolic boundary and only one end, and let $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on $G=\mathbb{Z}_{2}\wr\Gamma$, such that $\operatorname{supp}(\mu)$ generates $G$. Suppose that $\mu$ has finite first moment and that $\Omega$ is defined as in \eqref{omega}, with the hyperbolic boundary $\partial_{h}\Gamma$ instead of $\partial\Gamma$. If $\mu_{\infty}$ is the limit distribution on $\Omega$ of the random walk $Z_{n}$ starting at $id=(\mathbf{0},e)$, then $(\Omega,\mu_{\infty})$ is the Poisson boundary of the random walk.
\end{theo}
\begin{proof} The proof is as in the preceding example. First, we show that the conditions required in the half-space method are satisfied for $\Gamma$ and for random walks $X_{n}$ on $\Gamma$. One can check that the hyperbolic boundary $\partial_{h}\Gamma$ satisfies the convergence property \eqref{Convergence}. When $\Gamma$ is a hyperbolic group and the law $\nu$ of the random walk $X_{n}$ on $\Gamma$ has finite first moment, Woess \cite{WoessFixedSets1993} (see also Woess \cite{woess}) proved that $X_{n}$ converges almost surely in the hyperbolic topology to a random element of $\partial_{h}\Gamma$, and the limit distribution is a continuous measure. The same is true for the random walk $\check{X}_{n}$ with law $\check{\nu}$ on $\Gamma$. Let $\nu_{\infty}$ and $\check{\nu}_{\infty}$ be the respective limit distributions on $\partial_{h}\Gamma$. The \textbf{Basic assumptions $(3.1)$} hold for $X_{n}$ and $\check{X}_{n}$, and the first item in the half-space method is fulfilled.
In order to prove that the second item in the half-space method holds, we assign to almost every pair of boundary points $(\mathfrak{u},\mathfrak{v})\in\partial_{h}\Gamma\times\partial_{h}\Gamma$ a strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})\subset\Gamma$ and prove that it satisfies the conditions of Proposition \ref{StripCriterion}. By the continuity of $\nu_{\infty}$ and $\check{\nu}_{\infty}$ on $\partial_{h}\Gamma$, the set $\{(\mathfrak{u},\mathfrak{v})\in\partial_{h}\Gamma\times\partial_{h}\Gamma:\mathfrak{u}=\mathfrak{v}\}$ has $(\nu_{\infty}\times\check{\nu}_{\infty})$-measure $0$, so that, in constructing $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, we only consider the case $\mathfrak{u}\neq\mathfrak{v}$. Let
\begin{equation*}
\mathfrak{s}(\mathfrak{u},\mathfrak{v})=\bigcup\{x\in\Gamma:x\mbox{ lies on a two-way infinite geodesic between }\mathfrak{u}\mbox{ and }\mathfrak{v}\}.
\end{equation*}
The strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$ is the union of all points $x$ from all geodesics in $\Gamma$ joining $\mathfrak{u}$ and $\mathfrak{v}$. This is a subset of $\Gamma$, and $\gamma \mathfrak{s}(\mathfrak{u},\mathfrak{v})=\mathfrak{s}(\gamma\mathfrak{u},\gamma\mathfrak{v})$ for every $\gamma\in\Gamma$. Since in a hyperbolic space any two geodesics with the same endpoints are within uniformly bounded distance from one another (see \cite{GhysHarpe1990} for details), and geodesics have linear growth, it follows that there exists a constant $c>0$ such that
\begin{equation*}
|\mathfrak{s}(\mathfrak{u},\mathfrak{v})\cap B(e,n)|\leq cn,
\end{equation*}
for all $n$ and all distinct $\mathfrak{u},\mathfrak{v}\in\partial_{h}\Gamma$, which proves the subexponential growth of $\mathfrak{s}(\mathfrak{u},\mathfrak{v})$.
Finally, let us partition $\Gamma$ into half-spaces. Actually, this is one of the examples where the partition consists of two half-spaces together with another ``not interesting'' set on which the configuration will be $0$. For every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})$, let $H_{x}(\mathfrak{u})$ (respectively, $H_{x}(\mathfrak{v})$) be the horosphere with centre $\mathfrak{u}$ (respectively, $\mathfrak{v}$) passing through $x$. Remark that the two horospheres may have nonempty intersection. Consider the partition of $\Gamma$ into the subsets $\Gamma_{+}$, $\Gamma_{-}$, and $\Gamma\setminus(\Gamma_{+}\cup\Gamma_{-})$, where $\Gamma_{+}=H_{x}(\mathfrak{u})$ (that is, it contains a neighbourhood of $\mathfrak{u}$) and $\Gamma_{-}=H_{x}(\mathfrak{v})\setminus H_{x}(\mathfrak{u})$. This partition is $\Gamma$-equivariant. From the above, it follows that all the assumptions needed in the half-space method hold in the case of a finitely generated hyperbolic group $\Gamma$. Now, we apply Theorem \ref{PoissonTheorem}.
By Theorem \ref{ConvergenceTheorem} each of the random walks $Z_{n}$ and $\check{Z}_{n}$ starting at $id$ converges almost surely to an $\Omega$-valued random variable, where $\Omega$ is defined as in \eqref{omega}, with the hyperbolic boundary $\partial_{h}\Gamma$ instead of $\partial\Gamma$. If $\mu_{\infty}$ and $\check{\mu}_{\infty}$ are their respective limit distributions on $\Omega$, then the spaces $(\Omega,\mu_{\infty})$ and $(\Omega,\check{\mu}_{\infty})$ are $\mu$- and $\check{\mu}$-boundaries of the respective random walks.
Take $b_{+}=(\phi_{+},\mathfrak{u})$, $b_{-}=(\phi_{-},\mathfrak{v})\in\Omega$, where $\phi_{+}$ and $\phi_{-}$ are the limit configurations of $Z_{n}$ and $\check{Z}_{n}$, respectively, and $\mathfrak{u},\mathfrak{v}\in\partial_{h}\Gamma$ are their only respective accumulation points. Define the configuration $\Phi(b_{+},b_{-},x)$ as in \eqref{StripConfiguration}, and the strip $S(b_{+},b_{-})$ exactly as in \eqref{LamplighterStrip}. By Theorem \ref{PoissonTheorem}, $S(b_{+},b_{-})$ satisfies the conditions of Proposition \ref{StripCriterion}, and it follows that the space $(\Omega,\mu_{\infty})$ (with the hyperbolic boundary $\partial_{h}\Gamma$ instead of $\partial\Gamma$ in the definition \eqref{omega} of $\Omega$) is the Poisson boundary of the lamplighter random walk $Z_{n}$ over $G=\mathbb{Z}_{2}\wr \Gamma$.
\end{proof}
\subsection{Euclidean lattices}
Let now $\Gamma= \mathbb{Z}^{d}$, $d\geq 3$, be the $d$-dimensional lattice, with the Euclidean metric $|\cdot|$ on it. For $\mathbb{Z}^{d}$, there are also natural boundaries and compactifications. A nice example of a compactification is obtained by embedding $\mathbb{Z}^{d}$ into the $d$-dimensional unit disc via the map $x\mapsto x/(1+|x|)$, and taking the closure. In this compactification, the boundary $\partial\mathbb{Z}^{d}$ is the unit sphere $S_{d-1}$ in $\mathbb{R}^d$, and a sequence $x_{n}$ in $\mathbb{Z}^{d}$ converges to $\mathfrak{u}\in S_{d-1}$ if and only if $|x_{n}|\to\infty$ and $x_{n}/|x_{n}|\to\mathfrak{u}$, as $n\to\infty$.
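As a quick numerical illustration (our own sketch, with the particular choice $d=3$), one can check directly that the map $x\mapsto x/(1+|x|)$ sends $\mathbb{Z}^{d}$ into the open unit ball, and that a sequence escaping to infinity in a fixed direction $\mathfrak{u}$ converges to the corresponding boundary point of $S_{d-1}$:

```python
# Illustrative sketch (assumption: d = 3): the embedding x -> x/(1+|x|)
# of Z^d into the open unit ball, under which x_n -> u in S_{d-1}
# exactly when |x_n| -> infinity and x_n/|x_n| -> u.
import math

def embed(x):
    norm = math.sqrt(sum(c * c for c in x))
    return tuple(c / (1.0 + norm) for c in x)

# A sequence going to infinity in the direction u = (1, 0, 0):
seq = [(n, 1, 0) for n in range(1, 2001)]
images = [embed(x) for x in seq]

# The embedded points stay in the open unit ball...
assert all(math.sqrt(sum(c * c for c in y)) < 1.0 for y in images)

# ...and approach the boundary point u = (1, 0, 0) of S_{d-1}.
last = images[-1]
dist_to_u = math.sqrt((last[0] - 1.0) ** 2 + last[1] ** 2 + last[2] ** 2)
assert dist_to_u < 1e-2
```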
Let us check that the property \eqref{Convergence} holds for the boundary $S_{d-1}$. For this, let $x_{n}$ be a sequence converging to $\mathfrak{u}\in S_{d-1}$, and $y_{n}$ another sequence in $\mathbb{Z}^{d}$, such that $x_{n}/|x_{n}|\to\mathfrak{u}$ and $|x_{n}-y_{n}|/|x_{n}|\to 0$ as $n\to\infty$. Since
\begin{equation*}
\Big|\dfrac{y_{n}}{|x_{n}|}-\dfrac{x_{n}}{|x_{n}|}\Big|\leq\dfrac{|x_{n}-y_{n}|}{|x_{n}|}\to 0,
\end{equation*}
it follows that $y_{n}/|x_{n}|\to\mathfrak{u}$. Now, $y_{n}/|x_{n}|=(y_{n}/|y_{n}|)\cdot(|y_{n}|/|x_{n}|)$, and the sequence $|y_{n}|/|x_{n}|$ of real numbers converges to $1$, since it can be bounded from above and from below by two sequences both converging to $1$. Therefore $y_{n}/|y_{n}|\to\mathfrak{u}$, and the property \eqref{Convergence} holds. Next, if the law $\nu$ of the random walk $X_{n}$ on $\mathbb{Z}^{d}$ has non-zero first moment (drift)
\begin{equation*}
m=\sum_{x}x\,\nu(x)\in\mathbb{R}^{d},
\end{equation*}
then the law of large numbers implies that $X_{n}$ converges to the boundary $S_{d-1}$ in this compactification, with deterministic limit $m/|m|$. In particular, the limit distribution $\nu_{\infty}$ is the Dirac mass at this point. Next, we state the result on the Poisson boundary of lamplighter random walks over $\mathbb{Z}^{d}$ in the case of non-zero drift, using the half-space method. Remark that in this case the description of the Poisson boundary was obtained earlier by Kaimanovich \cite{KaimanovichPreprint}.
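The law-of-large-numbers behaviour can be seen in a short simulation sketch (a toy example of ours, with a concrete step law on $\mathbb{Z}^{3}$ that does not come from the text): the normalised position $X_{n}/|X_{n}|$ ends up close to the deterministic boundary point $m/|m|$.

```python
# Simulation sketch (assumption: a concrete biased step law on Z^3,
# chosen by us): a random walk X_n with non-zero drift m satisfies
# X_n / |X_n| -> m / |m| by the law of large numbers.
import math
import random

random.seed(0)

# Step distribution nu on Z^3, biased towards +e_1.
steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
probs = [0.4, 0.1, 0.125, 0.125, 0.125, 0.125]

# Drift m = sum_x x nu(x) = (0.3, 0, 0).
m = tuple(sum(p * s[i] for p, s in zip(probs, steps)) for i in range(3))
assert abs(m[0] - 0.3) < 1e-9 and m[1] == 0.0 and m[2] == 0.0

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

# Run the walk for 200000 steps.
walk = random.choices(steps, weights=probs, k=200_000)
x = tuple(sum(step[i] for step in walk) for i in range(3))

u = normalize(m)                 # deterministic limit m/|m| = (1, 0, 0)
direction = normalize(x)         # empirical direction X_n/|X_n|
err = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, direction)))
assert err < 0.05                # the two directions nearly coincide
```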
\begin{theo}\label{PoissonEuclideanLattices}
Let $\Gamma=\mathbb{Z}^{d}$, $d\geq 3$, be a Euclidean lattice and let $Z_{n}=(\eta_{n},X_{n})$ be a random walk with law $\mu$ on $G=\mathbb{Z}_{2}\wr \mathbb{Z}^{d}$, such that $\operatorname{supp}(\mu)$ generates $G$, and the projected random walk $X_{n}$ on $\mathbb{Z}^{d}$ has non-zero drift. Suppose that $\mu$ has finite first moment and that $\Omega$ is defined as in \eqref{omega}, with the unit sphere $S_{d-1}$ instead of $\partial\Gamma$. If $\mu_{\infty}$ is the limit distribution on $\Omega$ of the random walk $Z_{n}$ starting at $id=(\mathbf{0},e)$, then $(\Omega,\mu_{\infty})$ is the Poisson boundary of the random walk.
\end{theo}
\begin{proof} Let us show that the conditions required in the half-space method are satisfied for $\Gamma=\mathbb{Z}^{d}$ and for random walks $X_{n}$ on $\Gamma$. In the case of non-zero mean $m$, the random walk $X_{n}$ (respectively, $\check{X}_{n}$) converges to the boundary $S_{d-1}$ with deterministic limit $\mathfrak{u}=m/|m|$ (respectively, $\mathfrak{v}=-m/|m|$), and the convergence property \eqref{Convergence} holds. The limit distributions $\nu_{\infty}$ and $\check{\nu}_{\infty}$ are the Dirac masses at these limit points. The \textbf{Basic assumptions $(3.1)$} hold for $X_{n}$ and $\check{X}_{n}$, and the first item in the half-space method is satisfied.
Now, defining a strip in $\mathbb{Z}^{d}$ is an easy task, because of the growth of $\mathbb{Z}^{d}$. For the two limit points $\mathfrak{u}$ and $\mathfrak{v}$ of $X_{n}$ and $\check{X}_{n}$, respectively, define the strip $\mathfrak{s}(\mathfrak{u},\mathfrak{v})=\mathbb{Z}^{d}$. This strip does not depend on the limit points, it is $\mathbb{Z}^{d}$-equivariant, and it has polynomial growth of order $d$, hence in particular subexponential growth. Next, let us partition $\mathbb{Z}^{d}$ into half-spaces. Denote by $\overline{\mathfrak{u}\mathfrak{v}}$ the straight line segment joining the two deterministic boundary points $\mathfrak{u},\mathfrak{v}\in S_{d-1}$. In this case, this is exactly a diameter of the ball, since the points $\mathfrak{u}$ and $\mathfrak{v}$ are antipodal, i.e., opposite through the centre. For every $x\in\mathfrak{s}(\mathfrak{u},\mathfrak{v})=\mathbb{Z}^{d}$, consider the hyperplane which passes through $x$ and is orthogonal to $\overline{\mathfrak{u}\mathfrak{v}}$. This hyperplane cuts $\mathbb{Z}^{d}$ into two disjoint half-spaces $\Gamma_{+}$ and $\Gamma_{-}$, containing $\mathfrak{u}$ and $\mathfrak{v}$, respectively. Hence, $\Gamma=\mathbb{Z}^{d}$ is partitioned into half-spaces $\Gamma_{+}$ and $\Gamma_{-}$, and this partition is $\mathbb{Z}^{d}$-equivariant. From the above, it follows that all the assumptions needed in the half-space method hold in the case of the Euclidean lattice $\mathbb{Z}^{d}$. Now, we apply Theorem \ref{PoissonTheorem}.
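The half-space partition can be sketched in a few lines (a toy illustration of ours; in particular, the convention of assigning the separating hyperplane itself to $\Gamma_{-}$ is our choice, since the text leaves it implicit):

```python
# Sketch of the half-space partition of Z^d by the hyperplane through x
# orthogonal to the drift direction u = m/|m| (hyperplane assigned to
# Gamma_minus by our own convention).
def partition(x, u):
    """Return a classifier splitting Z^d into Gamma_plus / Gamma_minus."""
    def side(y):
        inner = sum((yi - xi) * ui for yi, xi, ui in zip(y, x, u))
        return "plus" if inner > 0 else "minus"
    return side

side = partition(x=(2, 0, 0), u=(1.0, 0.0, 0.0))
assert side((5, 7, -1)) == "plus"    # lies towards u
assert side((2, 3, 4)) == "minus"    # on the hyperplane itself
assert side((-1, 0, 0)) == "minus"   # lies towards v = -u

# Equivariance: translating x and y by the same group element g leaves
# the classification unchanged.
g = (10, -4, 3)
shifted = partition(tuple(a + b for a, b in zip((2, 0, 0), g)), (1.0, 0.0, 0.0))
assert shifted(tuple(a + b for a, b in zip((5, 7, -1), g))) == side((5, 7, -1))
```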
By Theorem \ref{ConvergenceTheorem} each of the random walks $Z_{n}$ and $\check{Z}_{n}$ starting at $id$ converges almost surely to an $\Omega$-valued random variable, where $\Omega$ is defined as in \eqref{omega}, with $S_{d-1}$ instead of $\partial\Gamma$. Nevertheless, the only ``active'' points of non-zero $\nu_{\infty}$- and $\check{\nu}_{\infty}$-measure on $S_{d-1}$ are $\mathfrak{u}=m/|m|$ and $\mathfrak{v}=-m/|m|$, respectively. More precisely, $\Omega$ can be written as
\begin{equation}\label{EuclideanOmega}
\Omega =\Big(\mathcal{C}_{\mathfrak{u}}\times\{\mathfrak{u}\}\Big)\cup\Big(\mathcal{C}_{\mathfrak{v}}\times\{\mathfrak{v}\}\Big),
\end{equation}
where $\mathcal{C}_{\mathfrak{u}}$ (respectively, $\mathcal{C}_{\mathfrak{v}}$) is the set of all configurations accumulating only at $\mathfrak{u}$ (respectively, $\mathfrak{v}$).
If $\mu_{\infty}$ and $\check{\mu}_{\infty}$ are the limit distributions of $Z_{n}$ and $\check{Z}_{n}$ on $\Omega$, then the spaces $(\Omega,\mu_{\infty})$ and $(\Omega,\check{\mu}_{\infty})$ are $\mu$- and $\check{\mu}$-boundaries of the respective random walks. Take $b_{+}=(\phi_{+},\mathfrak{u})$, $b_{-}=(\phi_{-},\mathfrak{v})\in\Omega$, where $\phi_{+}$ and $\phi_{-}$ are the limit configurations of $Z_{n}$ and $\check{Z}_{n}$, respectively, and $\mathfrak{u},\mathfrak{v}\in S_{d-1}$ are their only respective accumulation points. Define the configuration $\Phi(b_{+},b_{-},x)$ as in \eqref{StripConfiguration}, and the strip $S(b_{+},b_{-})$ exactly as in \eqref{LamplighterStrip}. By Theorem \ref{PoissonTheorem}, $S(b_{+},b_{-})$ satisfies the conditions of Proposition \ref{StripCriterion}, and it follows that the space $(\Omega,\mu_{\infty})$, with $\Omega$ as in \eqref{EuclideanOmega}, is the Poisson boundary of the lamplighter random walk $Z_{n}$ over $G=\mathbb{Z}_{2}\wr \Gamma$.
\end{proof}
\paragraph*{Final remarks.}
One can also apply the method to find the Poisson boundary of lamplighter random walks over polycyclic groups, nilpotent groups, and discrete subgroups of semi-simple Lie groups. Another application of the method is the determination of the Poisson boundary of random walks over ``iterated'' lamplighter groups. That is, we take as base group $\mathbb{Z}_{r}\wr\Gamma$ and construct a new lamplighter group $\mathbb{Z}_{r}\wr(\mathbb{Z}_{r}\wr\Gamma)$ over it. The interesting fact here is that the geometry of the group $\mathbb{Z}_{r}\wr\Gamma$ is completely different from that of $\Gamma$. For instance, when $\Gamma$ is a group with infinitely many ends, $\mathbb{Z}_{r}\wr\Gamma$ has only one end. Our method still works: we start with a strip in the lamplighter graph $\mathbb{Z}_{r}\wr X$ and lift it to a ``bigger'' one in $\mathbb{Z}_{r}\wr (\mathbb{Z}_{r}\wr X)$, following the steps of our method.
\paragraph*{Acknowledgements}
I am grateful to Wolfgang Woess for numerous fruitful discussions and for his help during the writing of this manuscript, and also to Vadim Kaimanovich for several hints and useful remarks regarding the content and exposition. I would also like to thank the referee for the careful reading, suggestions and corrections that helped to improve the paper.
\begin{small}\begin{thebibliography}{20}
\bibitem{BartholdiWoess2005}Bartholdi, L., and Woess, W.: \textit{Spectral computations on lamplighter groups and Diestel-Leader graphs}, J. Fourier Anal. Appl. \textbf{11} (2005) 175-202.
\bibitem{Brofferio 2006}Brofferio, S., and Woess, W.: \textit{Positive harmonic functions for semi-isotropic random walks on trees, lamplighter groups, and DL-graphs}, Potential Analysis \textbf{24} (2006) 245-265.
\bibitem{Cartwright1989}Cartwright, D. I., and Soardi, P. M.: \textit{Convergence to ends for random walks on the automorphism group of a tree}, Proc. Amer. Math. Soc. \textbf{107} (1989) 817-823.
\bibitem{CoornaertDelzantPapadopoulos1990}Coornaert, M., Delzant, T., and Papadopoulos, A.: \textit{G\'{e}om\'{e}trie et Th\'{e}orie des Groupes: les Groupes Hyperboliques de Gromov}, Lecture Notes in Math. \textbf{1441} (1990) Springer, Berlin.
\bibitem{DicksSchick2002}Dicks, W., and Schick, Th.: \textit{The spectral measure of certain elements of the complex group ring of a wreath product}, Geom. Dedicata \textbf{93} (2002) 121-137.
\bibitem{Dunwoody&Dicks}Dicks, W., and Dunwoody, M. J.: \textit{Groups acting on graphs}, Cambridge Univ. Press, Cambridge (1989).
\bibitem{Dunwoody}Dunwoody, M. J.: \textit{Cutting up graphs}, Combinatorica \textbf{2} (1982) 15-23.
\bibitem{Erschler2001}Erschler, A. G.: \textit{On the asymptotics of the rate of departure to infinity} (Russian), Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov. (POMI) \textbf{283} (2001) 251-257, 263.
\bibitem{Erschler2003}Erschler, A. G.: \textit{On drift and entropy growth for random walks on groups}, Ann. Probab. \textbf{31} (2003) 1193-1204.
\bibitem{Freudenthal1944}Freudenthal, H.: \textit{\"{U}ber die Enden diskreter R\"{a}ume und Gruppen}, Comment. Math. Helv. \textbf{17} (1944) 1-38.
\bibitem{Furstenberg}Furstenberg, H.: \textit{Random walks and discrete subgroups of Lie groups}, in Advances in Probability and Related Topics, 1 (P. Ney, ed.), (1971) pp. 1-63, M. Dekker, New York.
\bibitem{GhysHarpe1990}Ghys, E., and de la Harpe, P. (eds.): \textit{Sur les Groupes Hyperboliques d'apr\`{e}s Mikhael Gromov}, Birkh\"{a}user, Boston (1990).
\bibitem{GrigorchukZuk2001}Grigorchuk, R. I., and \.{Z}uk, A.: \textit{The lamplighter group as a group generated by a $2$-state automaton, and its spectrum}, Geom. Dedicata \textbf{87} (2001) 209-244.
\bibitem{Gromov1987}Gromov, M.: \textit{Hyperbolic groups}, in Essays in Group Theory (S. M. Gersten, ed.) (1987) 75-263, Springer, New York.
\bibitem{Kaimanovich1991}Kaimanovich, V. A.: \textit{Poisson boundaries of random walks on discrete solvable groups}, in Probability Measures on Groups X (H. Heyer, ed.), (1991) pp. 205-238, Plenum, New York.
\bibitem{Kaimanovich2000}Kaimanovich, V. A.: \textit{The Poisson formula for groups with hyperbolic properties}, Annals of Math. \textbf{152} (2000) 659-692.
\bibitem{KaimanovichVershik1983}Kaimanovich, V. A., and Vershik, A. M.: \textit{Random walks on discrete groups: Boundary and entropy}, Ann. Probab. \textbf{11} (1983) 457-490.
\bibitem{KaimanovichPreprint}Kaimanovich, V. A.: \textit{Poisson boundary of discrete groups}, unpublished manuscript (2001).
\bibitem{KaimanovichWoess2002}Kaimanovich, V. A., and Woess, W.: \textit{Boundary and entropy of space homogeneous Markov chains}, Ann. Probab. \textbf{30} (2002) 323-363.
\bibitem{KarlssonWoess2006}Karlsson, A., and Woess, W.: \textit{The Poisson boundary of lamplighter random walks on trees}, Geometriae Dedicata \textbf{124} (2007) 95-107.
\bibitem{Kingman1968}Kingman, J.: \textit{The ergodic theory of subadditive processes}, J. Royal Stat. Soc., Ser. B \textbf{30} (1968) 499-510.
\bibitem{Lyons1996}Lyons, R., Pemantle, R., and Peres, Y.: \textit{Random walks on the lamplighter group}, Ann. Probab. \textbf{24} (1996) 1993-2006.
\bibitem{Pittet&Saloff-Coste1996}Pittet, C., and Saloff-Coste, L.: \textit{Amenable groups, isoperimetric profiles and random walks}, in Geometric Group Theory Down Under (Canberra, 1996), pp. 293-316, de Gruyter, Berlin, 1999.
\bibitem{Pittet&Saloff-Coste2002}Pittet, C., and Saloff-Coste, L.: \textit{On random walks on wreath products}, Ann. Probab. \textbf{30} (2002) 948-977.
\bibitem{Revelle}Revelle, D.: \textit{Rate of escape of random walks on wreath products}, Ann. Probab. \textbf{31} (2003) 1917-1934.
\bibitem{Revelle2003}Revelle, D.: \textit{Heat kernel asymptotics on the lamplighter group}, Electron. Comm. Probab. \textbf{8} (2003) 142-154.
\bibitem{Thomassen&Woess}Thomassen, C., and Woess, W.: \textit{Vertex-transitive graphs and accessibility}, J. Combin. Theory Ser. B \textbf{58} (1993) 248-268.
\bibitem{WoessAmenable1989}Woess, W.: \textit{Amenable group actions on infinite graphs}, Math. Ann. \textbf{284} (1989) 251-265.
\bibitem{WoessFixedSets1993}Woess, W.: \textit{Fixed sets and free subgroups of groups acting on metric spaces}, Math. Z. \textbf{214} (1993) 425-440.
\bibitem{woess}Woess, W.: \textit{Random Walks on Infinite Graphs and Groups}, Cambridge Tracts in Mathematics \textbf{138}, Cambridge Univ. Press (2000).
\end{thebibliography}\end{small}
\end{document}
\begin{document}
\title{A note on local uniqueness of equilibria: How isolated is a local equilibrium?
}
\author{Stefano Matta\thanks{Correspondence to
S. Matta, Dipartimento di Economia, University of Cagliari,
viale S. Ignazio 17, 09123 Cagliari, Italy,
tel. 00390706753340, fax 0039070660929,
E-mail: [email protected].
The author was supported
by STAGE, funded by Fondazione di Sardegna and Regione Autonoma della Sardegna.
}.
\\
\and\centerline{(University of Cagliari)}}
\maketitle
\noindent\textbf{Abstract}:
The motivation of this note is to show how singular values
affect local uniqueness.
More precisely, Theorem 3.1 shows how to construct
a neighborhood (a ball) of a regular equilibrium whose diameter
provides an estimate of local uniqueness, hence
a measure of how isolated
a locally unique equilibrium can be. The result, whose relevance
in terms of comparative statics is evident, is based on reasonable
and natural assumptions and hence is applicable in many different settings,
ranging from pure exchange economies to non-cooperative games.
\noindent\textbf{Keywords:} Comparative statics, regularity, singular values, local uniqueness.
\noindent\textbf{JEL Classification:} C60, C62, D51.
\section{Introduction}\label{sec_intro}
Consider an equilibrium
equation $f(p,q)=0$, where $p$ and $q$ denote the unknowns and parameters, respectively.
Local uniqueness and continuous (smooth) dependence of the unknowns on the parameters
are at the heart of comparative statics.
This means that, roughly speaking, the (locally unique) solution $p$ changes continuously (smoothly)
as the parameter $q$ varies under a continuous (smooth) perturbation.
More precisely, at a point $p$ belonging to the solution set of $q$,
there exists a neighborhood $N$ of $(p,q)$ such that the projection onto the second factor,
$pr:(p,q)\to q$, restricted to $N$, is a homeomorphism (diffeomorphism).
This regularity property has been extensively studied in the literature in different settings,
in particular after the introduction of the differentiable viewpoint in the seminal paper by \cite{debreu}.
The study of the equilibrium set $E=\{(p,q)|\,f(p,q)=0\}$ and of the properties of
the projection $pr:E\to Q$, $(p,q)\mapsto q$, allows one to establish natural connections
between economic and mathematical properties:
e.g., the surjectivity of $pr$ (existence), the cardinality of the set $pr^{-1}(q)$
(uniqueness/multiplicity), the continuity of the correspondence
$pr^{-1}$ (structural stability).
Under the very restrictive hypothesis of global uniqueness, i.e. when the set $pr^{-1}(q)$ is a singleton for every parameter $q$,
the whole solution set $E$ is homeomorphic (diffeomorphic) to the parameter set.
Multiplicity, on the other hand, is deeply related to the set of singular values of the projection.
Even if this set has measure zero in $Q$ under certain assumptions,
its codimension-one and codimension-two components can be relevant.
Consider, for example, redistribution policies, represented by
continuous perturbations of the parameters. In such an instance,
it is crucial not to cross the singular set, in order to avoid catastrophes and
undesirable welfare effects that can be caused by price multiplicity.
For example, Figure \ref{figdie}, taken from \cite{die},
shows two policies sharing the same target, but
whose (different) outcomes depend on the order in which resources are used.
\begin{figure}
\caption{Two policies sharing the same target but with different outcomes (taken from \cite{die}).}
\label{figdie}
\end{figure}
The codimension-two components also play a key role, as highlighted in this note,
since they have an impact on the topology of the
parameter set. The intuition behind this is that they can be seen
as holes inside $Q$.
The consequences of this property have not, to the best of my knowledge,
been analyzed in the literature, which has traditionally focused on the codimension-one component,
the only one that can disconnect $Q$.
The motivation of this short note is to highlight how the singular values can affect the size of local uniqueness.
This information can be used to estimate how isolated a (local) unique equilibrium can be.
More precisely, Theorem \ref{teoruniq} gives a sufficient condition under which one can construct a neighborhood $N$ such that
the projection restricted to $N$ is injective, i.e., local uniqueness holds in $N$.
This result can be applied in different settings
characterized by multiplicity and by multiple roots associated with critical solutions.
For example, one can appraise the potentially distortionary effects (e.g., the transfer paradox \cite{batra})
of redistributive policies \cite{gago} in exchange and production economies \cite{balib2}.
This is due to the fact that a common geometric structure is present. The same structure, with
suitable changes, can be found, e.g., in the case of infinite economies \cite{bainf,chiz} or in non-cooperative game theory \cite{gw}.
This paper is organized as follows. Section \ref{sec_def} recalls the main definitions
and tools. Section \ref{sec_main} is devoted to the proof of the main result.
\section{Definitions}\label{sec_def}
In this section the main definitions and properties are outlined for the reader's convenience.
For an introduction to covering spaces, the reader is referred
to \cite{massey}.
The analysis is focused on the projection map $pr:E\to Q$. The set
$Q$ denotes the parameter set and $E$ is the equilibrium set, i.e.
$E=\{(p,q)|\,f(p,q)=0\}$, where $f(p,q)=0$ represents an equilibrium condition
to be satisfied by the unknowns $p$, given the constraints and the primitives of the model.
Suppose that both $E$ and $Q$ are smooth manifolds where we can measure the length of curves and the distance between points.
More precisely, given a piecewise differentiable curve $\tilde \gamma:[0, 1]\rightarrow E$ joining two points $\tilde x_1$ and $\tilde x_2$ in $E$, namely $\tilde\gamma (0)=\tilde x_1$ and $\tilde\gamma (1)=\tilde x_2$,
we can compute the length of $\tilde \gamma$, denoted by $L_E(\tilde \gamma)$, and the distance
\begin{equation}\label{lengdef}
d_E(\tilde x_1, \tilde x_2)=\inf_{\tilde \gamma\in\tilde\Gamma_{\tilde x_1, \tilde x_2}}L_E(\tilde\gamma)
\end{equation}
between $\tilde x_1$ and $\tilde x_2$,
where $\tilde \Gamma_{\tilde x_1, \tilde x_2}$ denotes the set of piecewise differentiable curves
with endpoints $\tilde x_1$ and $\tilde x_2$.
Similarly, we can compute the length $L_Q(\gamma)$ of a piecewise differentiable curve $\gamma:[0, 1]\rightarrow Q$ joining two points $x_1$ and $x_2$ of $Q$
and their distance $d_Q(x_1, x_2)$.
In this paper we assume that the following properties are satisfied:
\begin{itemize}
\item [(i)]
$pr$ is smooth;
\item [(ii)]
$pr$ is proper;
\item [(iii)]
$pr$ is a length-decreasing map, i.e.
\begin{equation}\label{Ldec}
L_E(\tilde\gamma)\geq L_Q(pr\circ\tilde\gamma), \ \forall\tilde\gamma:[0, 1]\to E.
\end{equation}
Consequently, by \eqref{lengdef}, $pr$ is distance decreasing, namely
\begin{equation}\label{ddec}
d_E(\tilde x, \tilde y)\geq d_Q(pr(\tilde x), pr(\tilde y)),\ \forall \tilde x, \tilde y\in E.
\end{equation}
\item [(iv)]
the ball centered at a point $\tilde x\in E$ (resp. $x\in Q$) of radius $\tilde r$ (resp. $r$), namely
$B_{\tilde r}(\tilde x)=\{\tilde y\in E \ | \ d_E(\tilde y, \tilde x)<\tilde r\}$ (resp. $B_{r}(x)=\{y\in Q \ | \ d_Q( y, x)< r\}$ ) is connected.
\end{itemize}
All these are reasonable and natural assumptions.
Smoothness is an approximation of continuity under suitable topologies; properness
also has a nice economic meaning: for example,
it represents the ideas of scarcity and desirability.
Assumptions (iii) and (iv) have a very intuitive meaning: indeed, one expects that
a projection does not increase the length of curves and that
the chosen metrics satisfy the connectedness property.
As a specific and natural example, one can take a Riemannian metric $g_Q$ on $Q$
(if $Q$ is a subset of some Euclidean space, $g_Q$ can be the flat metric
as is the case, e.g., in literature related to the Edgeworth box)
and the pull-back metric $g_E=pr^*g_Q$ on $E$.
Then $pr$ is length decreasing if we endow $Q$ and $E$ with the
length functions associated to these metrics.
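The length-decreasing property (iii) can be checked numerically in a toy example of our own (not taken from the text): let $E$ be the graph of $f(q)=\sin q$ in $\mathbb{R}^{2}$, carrying the ambient Euclidean length rather than the pull-back metric (with the pull-back metric the two lengths would coincide by construction), and let $pr$ be the coordinate projection onto $q$.

```python
# Numerical sketch (our own toy example): for E the graph of f(q) = sin(q)
# in R^2 with its ambient Euclidean length, the projection pr(p, q) = q is
# length decreasing, as required by assumption (iii).
import math

def curve(t):
    """A path on E, parametrised over [0, 1]; returns (p, q) with p = f(q)."""
    q = 3.0 * t
    return (math.sin(q), q)

N = 10_000
pts = [curve(i / N) for i in range(N + 1)]

# Polygonal approximations of L_E(gamma~) and L_Q(pr o gamma~).
L_E = sum(math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in zip(pts, pts[1:]))
L_Q = sum(abs(b[1] - a[1]) for a, b in zip(pts, pts[1:]))

assert L_E >= L_Q              # inequality (iii): dropping a coordinate only shortens
assert abs(L_Q - 3.0) < 1e-6   # the projected curve simply traverses [0, 3]
assert 3.0 < L_E < 4.25        # the graph is strictly longer than its shadow
```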
In this setting, the application of the inverse function theorem (IFT) and Sard's theorem
to $pr$ leads to standard regularity properties enjoyed by an open and dense subset of the parameter space, denoted by $\cal R$. More precisely,
it is not hard to prove
that $pr_{|pr^{-1}(\cal R)}:pr^{-1}(\cal R)\rightarrow {\cal R}$ is a covering map (see \cite[Proposition 2.2]{lmstr}). Indeed the properness of $pr$ is a sufficient condition to turn a surjective local diffeomorphism
into a covering map.
We recall that a {\em covering map} $p$ between two topological spaces $\tilde X$ and $X$ is a continuous surjective
map such that each $x\in X$ admits a {\em well-covered} neighborhood, i.e.,
an open neighborhood $U$ such that $p^{-1}(U)$ is a disjoint union of open sets in $\tilde X$, each of which is mapped by $p$ homeomorphically onto $U$.
The economic meaning behind the covering property is that it represents
the well-known property of smooth selection of equilibria.
This is crucial for comparative statics analysis
and characterizes
the regular values ${\cal R}$ of the projection.
Its complement $Q\setminus \cal R$
represents the projection of the critical equilibria,
denoted by $\Sigma=\cup_k \Sigma^k$. It is the union of closed sets of
codimension $k$, with $k$ greater than or equal to $1$. Hence $\Sigma^1$ denotes the
component which can disconnect the parameter set. For a deep analysis
of the set of singular and critical equilibria in a
general equilibrium framework, the reader is referred to
\cite{basing, bacrit, balib2,lmcrit}.
The literature usually focuses on
$\Sigma^1$ as the main cause of catastrophes.
But $\Sigma^2$ affects ${\cal R}$'s topology in a relevant way as we will see
in Theorem \ref{teoruniq} (see also Remark \ref{remtheor} below).
\comment{
I end this section recalling the {\em arc lifting property} \citep[Lemma 3.1]{massey},
that will be used in the proof of our main result.
Given a smooth map $p:\tilde X\to X$, a {\em lift}
of a map $f : Y \to X$, where $Y$ is a topological space,
is a map $\tilde f : Y \to \tilde X$ such that $p\tilde f = f$.
Let $\alpha: [0,1] \to X$ be an arc. If $p$ is a covering map, then
a unique lift of $\alpha$,
$\tilde\alpha: [0,1] \to \tilde X$, exists, whose starting point $\tilde x_0$
is any point chosen in the fiber of $p^{-1}(\alpha(0))$.
}
\section{Main result}\label{sec_main}
Let $x\in {\cal R}$ and let $\Gamma_x$ be the
set of non-contractible loops $\gamma: [0, 1]\to {\cal R}$,
$\gamma(0)=\gamma(1)=x$.
Notice that the image of $\gamma$ lies in a connected component $K_x$ of ${\cal R}$ containing $x$.
Let
\begin{equation}\label{mx}
m_x=\mathop{\hbox{inf}}_{\gamma\in \Gamma_x}L_{d_Q}(\gamma)\geq 0
\end{equation}
be the nonnegative number measuring the infimum of the lengths of all the non-contractible closed curves in $K_x$ based
at $x$. If no $\Sigma^2$ component belongs to the connected component of $Q\setminus \Sigma^1$ containing $x$, then $\Gamma_x=\emptyset$ and we set $m_x=0$.
It could be said, suggestively, that the components of $\Sigma^2$ are \lq\lq holes'' in $K_x$ iff $m_x\neq 0$.
Let us also define
\begin{equation}\label{dx}
d_x=d_{Q}(x,\Sigma^1)=\mathop{\hbox{inf}}_{\sigma\in \Sigma^1}d_{Q}(x,\sigma).
\end{equation}
The positive number
$d_x$ measures the distance between $x$
and the boundary of $K_x$, which corresponds to $\Sigma^1$.
This distance is well-defined since $\Sigma^1$
is closed in $Q$, being the projection of a closed set via a proper map.
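As a hypothetical illustration of \eqref{mx}, take for $K_x$ a punctured region of the flat plane, the puncture playing the role of a $\Sigma^2$ hole. The loops based at $x$ that run radially inwards to distance $\varepsilon$ from the puncture, circle it once, and run back have length $2(|x|-\varepsilon)+2\pi\varepsilon$, which decreases to $2|x|$ as $\varepsilon\to 0$; the sketch below (all names ours) checks this numerically, suggesting $m_x=2|x|$ in this example, an infimum that is not attained.

```python
import math

x_norm = 3.0  # distance from the base point x to the puncture (the "hole")

def loop_length(eps):
    # loop: radially in to distance eps from the puncture, once around, back out
    return 2 * (x_norm - eps) + 2 * math.pi * eps

lengths = [loop_length(10.0 ** (-k)) for k in range(1, 8)]
# the lengths decrease toward 2 * x_norm = 6.0 as eps shrinks
assert all(a > b for a, b in zip(lengths, lengths[1:]))
assert abs(lengths[-1] - 2 * x_norm) < 1e-5
```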
The following theorem uses the information so far
to construct a neighborhood $N$ of $\tilde x$ such that $pr_{|N}$ is injective,
i.e., a neighborhood on which local uniqueness holds.
\begin{teor}\label{teoruniq}
Assume that assumptions (i)-(iv) above are satisfied.
Let $x\in {\cal R}$ and $\tilde x\in pr^{-1}(x)$.
Let $r_x$ be the positive real number defined as
$$r_x=\begin{cases}
3d_x & \mbox{if}\ m_x= 0\\
\min(m_{x}, d_{x}) &\mbox{if}\ m_x\neq 0.
\end{cases}$$
where $m_x$ and $d_x$ are given by \eqref{mx} and \eqref{dx} respectively.
Then $pr_{|_{B_\frac{r_x}{3}(\tilde x)}}$ is injective.
\end{teor}
\noindent {\bf Proof: }\rm
Denote by $\tilde K_x$
the connected component of $pr^{-1}({\cal R})$ containing $\tilde x$.
Then, since $E$ and $Q$ are smooth manifolds, $pr_{|\tilde K_x}:\tilde K_x\to K_x$
is a covering map by \cite[Ch. 5, Lemma 2.1]{massey}. If $m_x=0$ then $K_x$ is simply connected and hence $pr_{|\tilde K_x}$ is a diffeomorphism by \cite[Ch. 5, Exercise 6.1]{massey}.
Notice that by (iv) $B_{d_x}(x)$ is connected and hence $B_{d_x}(x)\subset K_x$. Moreover, since by \eqref{ddec} $pr$ is distance decreasing,
one gets that $pr\left(B_{d_x}(\tilde x)\right)\subset B_{d_x}(x)\subset K_x$ and hence $B_{d_x}(\tilde x)\subset \tilde K_x$.
Since $r_x=3d_x$ in this case, it follows that $pr_{|_{B_\frac{r_x}{3}(\tilde x)}}=pr_{|B_{d_x}(\tilde x)}$ is injective.
Let $m_x\neq 0$. Assume, by contradiction, that $\tilde x_1$ and $\tilde x_2$ are two distinct points in $B_\frac{r_x}{3}(\tilde x)$, $r_x=\min(m_{x}, d_{x})$,
such that $pr(\tilde x_1)=pr(\tilde x_2)=x$.
Let $\tilde \Gamma_{\tilde x_1, \tilde x_2}$ denote the set of piecewise differentiable curves $\tilde \gamma :[0, 1]\to B_\frac{r_x}{3}(\tilde x)$
with endpoints $\tilde x_1$ and $\tilde x_2$.
Notice that $\tilde \Gamma_{\tilde x_1, \tilde x_2}\neq\emptyset$ by assumption (iv).
Then, for each $\tilde\gamma\in \tilde \Gamma_{\tilde x_1, \tilde x_2}$, the curve
$$\gamma =pr\circ\tilde\gamma:[0,1]\to K_x$$ is a piecewise
differentiable loop based at $x$.
By \cite[Ch. 5, Theorem 4.1]{massey} each of these loops is non-contractible and hence by \eqref{mx} one has $L_{d_Q}(\gamma)\geq m_x$.
It follows, by \eqref{lengdef} and \eqref{Ldec},
that
$$d_E(\tilde x_1, \tilde x_2)=\mathop{\hbox{inf}} _{\tilde\gamma\in \tilde\Gamma_{\tilde x_1, \tilde x_2}}L_{d_E}(\tilde \gamma)\geq\mathop{\hbox{inf}} _{\gamma\in \Gamma_{x}} L_{d_Q}(\gamma)=m_x\geq r_x.$$
On the other hand, by the triangle inequality,
$$d_{E}(\tilde x_1,\tilde x_2)\leq d_{E}(\tilde x_1,\tilde x)+ d_{E}(\tilde x,\tilde x_2)=\frac{r_x}{3}+\frac{r_x}{3}=
\frac{2}{3}r_{x} < r_{x},$$
yielding the desired contradiction.
\hspace*{\fill}$\Box$
\begin{remar}\label{remtheor}\rm
The key idea behind the proof of the theorem is,
roughly speaking, as follows. If $K_x$ is the connected component of ${\cal R}$ containing $x$,
the set $\Sigma^2$ are \lq\lq holes'' inside $K_x$.
If $K_x$ did not contain holes, then it would be
simply connected and hence the solution set would be a singleton (global uniqueness) for every parameter in $K_x$,
if one restricts $pr$ to the connected component $\tilde K_x$ of $pr^{-1}({\cal R})$ containing $\tilde x$.
On the other hand, if $K_x$ contains holes then $m_x\neq 0$, namely
one has to be careful with the non-contractible loops by taking the ball centered at $\tilde x$ of radius $\min(d_{x}, m_{x})$.
\end{remar}
\begin{remar}\label{remtheor2}\rm
The radius $\frac{r_x}{3}$ of the ball $B_{\frac{r_x}{3}}(\tilde x)$
gives a sufficient condition that ensures
injectivity, but it does not necessarily maximize the size of the ball (if $m_x\neq 0$).
It can be seen as an estimate of local uniqueness
and a measure of how isolated a local equilibrium is, providing a possible answer to the question raised in the title of the paper.
\end{remar}
\end{document} |
\begin{document}
\nothesis{
\title{A General Framework for Computing Optimal Correlated Equilibria in Compact Games\shortver{\\ (Extended Abstract)}}
\begin{abstract}
We analyze the problem of computing a correlated equilibrium that optimizes some objective (e.g., social welfare).
\emcite{PR08JACM} gave a sufficient condition for the tractability of this problem; however, this condition only applies to a subset of existing representations.
We propose a different algorithmic approach for the optimal CE problem that applies to \emph{all} compact representations, and give
a sufficient condition that generalizes that of \emcite{PR08JACM}.
In particular, we reduce the optimal CE problem to the \emph{deviation-adjusted social welfare problem}, a combinatorial optimization problem closely related to the optimal social welfare problem.
This framework allows us to identify new classes of games for which the optimal CE problem is tractable; we show that graphical polymatrix games on tree graphs are one example.
We also study the problem of computing the optimal \emph{coarse correlated equilibrium}, a solution concept closely related to CE.
Using a similar approach we derive a sufficient condition for this problem, and use it to prove that the problem is tractable for
singleton congestion games.
\end{abstract}
}
\thesis{
\chapter{A General Framework for Computing Optimal Correlated Equilibria in Compact Games}\label{ch:optCE}
}
\section{Introduction}
\thesis{In this chapter we\footnote{This chapter is based on joint work with Kevin Leyton-Brown. A slightly shorter version was submitted for publication in August 2011 and is currently under review.}
continue to focus on correlated equilibrium (CE).
We have seen from the previous chapter and its related literature \cite{PR08JACM,JiangLB10exact} that finding a sample CE is tractable, even for compactly represented games. However, since in general there can be
an infinite number of
CE even in a generic game, finding an arbitrary one is of limited value.
}
\nothesis{
A fundamental class of computational problems in game theory is the computation of \emph{solution concepts} of finite games.
Much recent effort in the literature has concerned the problem of computing a sample Nash equilibrium \cite{ChenDeng06,Daskalakis06,Daskalakis05,GoldbergPapa06}.
First proposed by Aumann \yrcite{aumann1974subjectivity,aumann1987correlated}, correlated equilibrium (CE) is another important solution concept.
Whereas in a mixed strategy Nash equilibrium players randomize independently, in a correlated equilibrium the players can coordinate their behavior based on signals from an intermediary.
Correlated equilibria of a game can be formulated as probability distributions over pure strategy profiles satisfying certain linear constraints. The resulting linear feasibility program has size polynomial in the size of the normal form representation of the game.
However, the size of the normal form representation grows exponentially in the number
of players. This is problematic when games involve large numbers of players.
Fortunately, most large games of practical interest have highly-structured payoff functions, and thus it is possible to represent them compactly.
A line of research has thus sought \emph{compact game representations} that can succinctly describe structured games, including work on
graphical games \cite{graphical} and action-graph games \cite{ActionGraph,AGG-full}.
However, the size of the linear feasibility program for CE can then be exponential in the size of the compact representation; furthermore, a CE can require exponential space to specify.
The problem of computing a sample CE
was recently shown to be in polynomial time for most existing compact representations \cite{PR08JACM,JiangLB10exact}. However, since in general there can be
an infinite number of
CE in a game, finding an arbitrary one is of limited value.
}
Instead, here we focus on the problem of computing a correlated equilibrium that optimizes some objective. In particular we consider \fullver{two kinds of objectives:
(1) A linear function}\shortver{optimizing linear functions} of players' expected utilities. For example, computing the best (or worst) social welfare corresponds to maximizing (or minimizing) the sum of players' utilities, respectively.
\fullver{(2) Max-min welfare: maximizing the utility of the worst-off player. (More generally, maximizing the minimum of a set of linear functions of players' expected utilities.)
}
We are also interested in computing optimal coarse correlated equilibrium (CCE) \cite{Hannan1957}.
\nothesis{It is known}\thesis{Recall from \autoref{sec:lit_CE}} that the empirical distribution of any no-external-regret learning dynamic converges to the set of CCE, while the empirical distribution of no-internal-regret learning dynamics converges to the set of CE\nothesis{ (see e.g. \cite{AGTBook})}. Thus,
optimal CE / CCE provide useful bounds on the social welfare of the empirical distributions of these dynamics.
\fullver{Optimal CE / CCE can also be used as bounds on optimal NE since CE and CCE are both relaxations of NE. Hence they are also useful for computing (bounds on) the price of anarchy and price of stability of a game.
}
\thesis{The problems of computing optimal CE / CCE can be formulated as linear programs with sizes polynomial in the size of normal form. However, as with the rest of the thesis, we are interested in the case when the input is a compactly-represented game.}
We are particularly interested in the relationship between the optimal CE / CCE problems and the problem of computing the optimal social welfare outcome (i.e.
strategy profile) of the game, which is exactly the optimal social welfare CE problem without the incentive constraints.
This is an instance of a line of questions that has received much interest from the algorithmic game theory community: ``How does adding incentive constraints to an optimization problem affect its complexity?'' This question in the mechanism design setting is perhaps one of the central questions of algorithmic mechanism design \cite{NisanRonen01amd}.
Of course, a more constrained problem can in general be computationally easier than the relaxed version of the problem.
Nevertheless, results from the complexity of Nash equilibria and algorithmic mechanism design suggest that adding \emph{incentive constraints} to a problem is unlikely to decrease its computational difficulty. That is, when the optimal social welfare problem is hard, we expect the optimal CE problem to be hard as well.
On the other hand, we are interested in the other direction: when it is the case for a class of games that the optimal social welfare problem can be efficiently computed,
can the same structure be exploited to efficiently compute the optimal CE?
\nothesis{
The seminal work on the computation of optimal CE is \cite{PR08JACM}. This paper considered the optimal linear objective CE problem and proved that the problem is NP-hard for many representations\shortver{ including graphical games,
polymatrix games, and congestion games.}\fullver{, while tractable for a couple of representations.}}\thesis{As mentioned in \autoref{sec:lit_CE}, \emcite{PR08JACM}
considered the optimal linear objective CE problem and proved that the problem is NP-hard for many representations, while tractable for a couple of representations. We now take a more in-depth look at this paper.}
\fullver{In particular, the representations shown to be NP-hard include graphical games,
polymatrix games, and congestion games.
These hardness results, although nontrivial, are not surprising:
the optimal social welfare problem is already NP-hard for these representations.
}
On the tractability side, \emcite{PR08JACM} focused on so-called ``reduced form'' representations, meaning representations for which there exist player-specific partitions of the strategy profile space into payoff-equivalent outcomes. They showed that if a particular \emph{separation problem} is polynomial-time solvable, the optimal CE problem is polynomial-time solvable as well.
Finally, they showed that this separation problem is polynomial-time solvable for bounded-treewidth graphical games, symmetric games and anonymous games.
Perhaps most surprising and interesting is the \emph{form} of Papadimitriou and Roughgarden's
sufficient condition for tractability:
their separation problem for an instance of a reduced-form-based representation is essentially equivalent to solving the optimal social welfare problem for an instance of that representation with the same reduced form but possibly different payoffs.
In other words, if we have a polynomial-time algorithm for the optimal social welfare problem for a reduced-form-based representation,
we can turn that into a polynomial-time algorithm for the optimal social welfare CE problem.
However, \aunpcite{PR08JACM}'s sufficient condition for tractability only applies to reduced-form-based representations.
Their definition of reduced forms is unable to handle representations that exploit linearity of utility, and in which the structure of player $p$'s utility function may depend on the action she chose.
As a result, many representations do not fall
into this characterization, such as polymatrix games, congestion games, and action-graph games.
Although the optimal CE problems for these representations are NP-hard in general, we are interested in identifying tractable subclasses
of games, and a sufficient condition that applies to all representations would be helpful.
In this \nothesis{article}\thesis{chapter}, we propose a different algorithmic approach for the optimal CE problem that applies to \emph{all} compact representations.
By applying the ellipsoid method to the dual of the LP for optimal CE, we show that the polynomial-time solvability of what we call the \emph{deviation-adjusted social welfare problem} is a sufficient condition for the tractability of the optimal CE problem.
We also give a sufficient condition for tractability of the optimal CCE problem: the polynomial-time solvability of the \emph{coarse deviation-adjusted social welfare problem}\fullver{, which we show reduces to the deviation-adjusted social welfare problem}.
\thesis{Our algorithms are instances of the black-box approach, with the required subroutines being the computations of the deviation-adjusted social welfare problem and the coarse deviation-adjusted social welfare problem, respectively.}
We show that for reduced-form-based representations, the deviation-adjusted social welfare problem can be reduced to the separation problem of \emcite{PR08JACM}.
Thus
the class of reduced forms for which our problem is polynomial-time solvable contains the class for which the separation problem is polynomial-time solvable.
More generally, we show that if a representation can be characterized by ``linear reduced forms'', i.e. player-specific linear functions over partitions, then for that representation, the deviation-adjusted social welfare problem can be reduced to the optimal social welfare problem.
As an example, we show that for graphical polymatrix games on trees, optimal CE can be computed in polynomial time. Such games are not captured by the reduced-form framework.\footnote{In a recent paper \emcite{KXL11approxCE} has independently proposed an algorithm for optimal CE in graphical polymatrix games on trees. They used a different approach that is specific to graphical games and graphical polymatrix games, and it is not obvious whether their approach can be extended to other classes of games.}
\fullver{The key feature of these representations upon which our argument relies is that the partitions for player $p$ (which characterize the structure of the utility function for $p$) do not depend on the action chosen by $p$.}
On the other hand, representations like action-graph games and congestion games have \emph{action-specific} structure,
and as a result the deviation-adjusted social welfare problems and coarse deviation-adjusted social welfare problems on these representations are structured differently from the corresponding optimal social welfare problems.
Nevertheless, we are able to show a polynomial-time algorithm for the optimal CCE problem on
\emph{singleton congestion games} \cite{ieong2005fac}, a subclass of congestion games.
We use a symmetrization argument to reduce the optimal CCE problem to the coarse deviation-adjusted social welfare problem with player-symmetric deviations,
which can be solved using a dynamic-programming algorithm.
This is an example where the optimal CCE problem is tractable while the complexity of the optimal CE problem is not yet known.
\section{Problem Formulation}
\nothesis{
Consider a simultaneous-move game $G=(\mathcal{N}, \{S_p\}_{p\in \mathcal{N}}, \{u^p\}_{p\in \mathcal{N}})$, where $\mathcal{N}=\{1,\ldots,n\}$ is the set of players.
Denote by $p$ a player and by $S_p$ player $p$'s set of pure strategies (i.e., actions). Let $m = \max_p|S_p|$.
Denote by $s=(s_1,\ldots,s_n)\in S$ a pure strategy profile, where $S=\prod_p S_p$,
with $s_p$ being player $p$'s pure strategy. Denote by $S_{-p}$ the set of partial pure strategy profiles of the players other than $p$.
Let $u^p$ be the vector of player $p$'s utilities for each pure profile, denoting player $p$'s utility under pure strategy profile $s$ as $u^p_s$.
}\thesis{We follow the notation of \autoref{ch:CE}. Furthermore, let $\mathcal{N}=\{1,\ldots,n\}$ be the set of players.}
Let $w$ be the vector of social welfare for each pure profile, that is $w=\sum_{p\in \mathcal{N}} u^p$,
with $w_s$ denoting the social welfare for pure profile $s$.
Throughout the \nothesis{paper}\thesis{chapter} we assume that the game is given in a representation with \nothesis{\emph{polynomial type} \cite{Papadimitriou,PR08JACM},
i.e., that the number of players and the number of actions for each player are bounded by polynomials of the size of the representation.}\thesis{polynomial type. Unlike in \autoref{ch:CE}, here we do not assume the existence of a polynomial-time algorithm for expected utility.}
\subsection{Correlated Equilibrium}
\nothesis{
A \emph{correlated distribution} is a probability distribution over pure strategy profiles, represented by a vector $x\in \mathds{R}^M$, where $M=\prod_p |S_p|$. Then $x_s$ is the probability of pure strategy profile $s$ under the distribution $x$.
\begin{definition}
A correlated distribution $x$ is a \emph{correlated equilibrium} (CE) if it satisfies the following \emph{incentive constraints}: for each player $p$ and each pair of her actions $i,j\in S_p$,
\fullver{\begin{equation}\label{eq:incentives}}\shortver{we have $}
\sum_{s_{-p}\in S_{-p}} [u^p_{is_{-p}}- u^p_{js_{-p}}] x_{is_{-p}} \ge 0\fullver{,\end{equation}}\shortver{$,}
where the subscript ``$is_{-p}$'' (respectively ``$js_{-p}$'') denotes the pure strategy profile in which player $p$ plays $i$ (respectively $j$) and the other players play according to the partial profile $s_{-p}\in S_{-p}$.
\end{definition}
\nothesis{Intuitively, when
a trusted intermediary draws a strategy profile $s$ from this distribution, privately announcing to each player $p$ her own component $s_p$,
$p$ will have no incentive to choose another strategy, assuming others follow the suggestions.}
We write these incentive constraints in matrix form as $Ux\ge 0$. Thus $U$ is an
$N\times M$ matrix, where $N=\sum_p |S_p|^2$.
The rows of $U$\fullver{, corresponding to the left-hand sides of the constraints \eqref{eq:incentives},} are indexed by $(p,i,j)$, where $p$ is a player and $i,j\in S_p$ are a pair of $p$'s actions.
Denote by
$U_s$ the column of $U$ corresponding to pure strategy profile $s$.
These incentive constraints, together with the constraints
\fullver{\begin{equation}\label{eq:prob}}\shortver{$}
x\ge 0, \; \sum_{s\in S} x_s =1\fullver{,\end{equation}}\shortver{$,}
which ensure that $x$ is a probability distribution, form a linear feasibility program that defines the set of CE.
}
\thesis{Correlated equilibrium (CE) is defined in Definition \ref{def:CE}.}
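As a quick sanity check on the incentive constraints (a sketch using Aumann's well-known Chicken example, which is not otherwise used here), the following verifies that the distribution placing probability $1/3$ on each of $(D,C)$, $(C,D)$ and $(C,C)$ satisfies $Ux\ge 0$.

```python
# Aumann's Chicken game: action 0 = Dare, action 1 = Chicken for both players;
# u[p][s1][s2] is player p's payoff at the pure profile (s1, s2)
u = [
    [[0, 7], [2, 6]],  # player 0
    [[0, 2], [7, 6]],  # player 1
]
# candidate correlated equilibrium: probability 1/3 on (D,C), (C,D) and (C,C)
x = {(0, 1): 1 / 3, (1, 0): 1 / 3, (1, 1): 1 / 3}

def ce_lhs(p, i, j):
    # left-hand side of the incentive constraint indexed by (p, i, j)
    total = 0.0
    for s, prob in x.items():
        if s[p] != i:
            continue
        dev = (j, s[1]) if p == 0 else (s[0], j)
        total += (u[p][s[0]][s[1]] - u[p][dev[0]][dev[1]]) * prob
    return total

# Ux >= 0: no player gains by deviating from a recommendation
assert all(ce_lhs(p, i, j) >= -1e-12
           for p in (0, 1) for i in (0, 1) for j in (0, 1))
```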
The problem of computing a maximum social welfare CE can be formulated as the LP
\begin{align}\label{primal}
\max\: & w^T x\tag{$P$}\\
Ux\thesis{&}\geq 0 \thesis{\nonumber\\}\nothesis{, \;}
x &\geq 0\thesis{\nonumber\\}\nothesis{, \;}
\sum_{s\in S} x_s=1\nonumber
\end{align}
\fullver{
Another objective of interest is the max-min welfare CE problem: computing a CE that maximizes the utility of the worst-off player.
\begin{align}\label{mmprimal}
\max \:r\\
\sum_s x_s u^p_s\geq r\qquad \forall p\\
Ux\geq 0 \thesis{\nonumber\\}\nothesis{, \;}
x\geq 0\thesis{\nonumber\\}\nothesis{, \;}
\sum_{s\in S} x_s=1\nonumber
\end{align}
}
Another solution concept of interest is \emph{coarse correlated equilibrium} (CCE).
Whereas CE requires that each player has no profitable deviation even if she takes into account the signal she receives from the intermediary,
CCE only requires that each player has no profitable \emph{unconditional deviation}.
\begin{definition}
A correlated distribution $x$ is a \emph{coarse correlated equilibrium} (CCE) if it satisfies the following incentive constraints:
for each player $p$ and each of her actions $j\in S_p$,
\fullver{\begin{equation}}\shortver{we have $}
\sum_{(i,s_{-p})\in S} [u^p_{is_{-p}} - u^p_{js_{-p}}] x_{is_{-p}} \geq 0\fullver{.\end{equation}}\shortver{$.}
\end{definition}
We write these incentive constraints in matrix form as $Cx\ge 0$. Thus $C$ is an $(\sum_p|S_p|)\times M$ matrix.
By definition, a CE is also a CCE.
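The fact that a CE is also a CCE can be seen directly from the matrix form: the row of $C$ indexed by $(p,j)$ is the sum over $i\in S_p$ of the rows of $U$ indexed by $(p,i,j)$, so $Ux\ge 0$ implies $Cx\ge 0$. The following sketch (our own example, using Aumann's Chicken game with its $1/3$-$1/3$-$1/3$ correlated equilibrium) checks this aggregation identity numerically.

```python
import itertools

# Chicken game: u[p][s1][s2] is player p's payoff; x is a known CE
u = [[[0, 7], [2, 6]], [[0, 2], [7, 6]]]
x = {(0, 1): 1 / 3, (1, 0): 1 / 3, (1, 1): 1 / 3}

def payoff(p, s):
    return u[p][s[0]][s[1]]

def deviate(p, s, j):
    # the profile s with player p's action replaced by j
    return (j, s[1]) if p == 0 else (s[0], j)

def ce_lhs(p, i, j):
    # CE incentive constraint (p, i, j): conditional on being told i
    return sum((payoff(p, s) - payoff(p, deviate(p, s, j))) * prob
               for s, prob in x.items() if s[p] == i)

def cce_lhs(p, j):
    # CCE incentive constraint (p, j): unconditional deviation to j
    return sum((payoff(p, s) - payoff(p, deviate(p, s, j))) * prob
               for s, prob in x.items())

for p, j in itertools.product((0, 1), (0, 1)):
    agg = sum(ce_lhs(p, i, j) for i in (0, 1))
    assert abs(cce_lhs(p, j) - agg) < 1e-12  # row of C = sum of rows of U
    assert cce_lhs(p, j) >= -1e-12           # hence this CE is also a CCE
```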
The problem of computing a maximum social welfare CCE can be formulated as the LP
\begin{align}\label{cprimal}
\max\: & w^T x\tag{$CP$}\\
Cx \thesis{&}\geq 0\thesis{\nonumber\\}\nothesis{, \;}
x &\geq 0\thesis{\nonumber\\}\nothesis{, \;}
\sum_{s\in S} x_s=1.\nonumber
\end{align}
\section{The Deviation-Adjusted Social Welfare Problem}
Consider the dual of \eqref{primal},
\begin{align}\label{dual}
\min\: &t \tag{$D$}\\
U^T y + w &\leq t\mathbf{1}\nonumber\\
y&\geq 0.\nonumber
\end{align}
We label the $(p,i,j)$-th element of $y\in \mathds{R}^N$ (corresponding to row $(p,i,j)$ of $U$) as $y^p_{i,j}$.
This is an LP with a polynomial number of variables and an exponential number of constraints. Given a separation oracle, we can solve it in polynomial time using the ellipsoid method.
A separation oracle needs to determine whether a given $(y,t)$ is feasible, and if not output a hyperplane that separates $(y,t)$ from the feasible set.
We focus on a restricted form
of separation oracle, one that outputs a violated constraint for infeasible points.\footnote{This is a restriction because in general there exist separating hyperplanes
other than the violated constraints. For example \thesis{as we saw in Chapter \ref{ch:CE}, }\emcite{PR08JACM}'s algorithm for computing a sample CE uses a separation oracle that outputs a convex combination of the constraints as a separating hyperplane.}
Such a separation oracle needs to solve the
following problem: \begin{problem}\label{prob:optCE_sep}
Given $(y,t)$ with $y\geq 0$, determine if there exists an $s$ such that $(U_s)^T y+w_s > t$; if so output such an $s$.
\end{problem}
The left-hand-side expression $(U_s)^T y+w_s$ is the social welfare at $s$ plus the term $(U_s)^T y$.
Observe that the $(p,i,j)$-th entry of $U_s$ is $ u^p_s- u^p_{js_{-p}}$ if $s_p= i$ and is zero otherwise.
Thus $(U_s)^T y=\sum_p\sum_{j\in S_p} y^p_{s_p,j}\left(u^p_s- u^p_{js_{-p}}\right)$.
We now reexpress $(U_s)^T y+w_s$ in terms of \emph{deviation-adjusted utilities} and \emph{deviation-adjusted social welfare}.
\begin{definition}\label{def:adj_u_sw}
Given a game, and a vector $y\in\mathds{R}^{N}$
such that $y\geq 0$, the \emph{deviation-adjusted utility} for player $p$ under pure profile $s$ is
\[
\hat{u}^p_s(y)= u^p_s +\sum_{j\in S_p} y^p_{s_p,j}\left(u^p_s- u^p_{js_{-p}}\right).
\]
The deviation-adjusted social welfare is $\hat{w}_s(y)=\sum_p \hat{u}^p_s(y)$.
\end{definition}
By construction,
the deviation-adjusted social welfare $\hat{w}_s(y)=\sum_pu^p_s+\linebreak\sum_p\sum_{j\in S_p}y^p_{s_p,j}\left(u^p_s- u^p_{js_{-p}}\right)= (U_s)^T y+w_s$.
Therefore, Problem \ref{prob:optCE_sep} is equivalent to the following \emph{deviation-adjusted social welfare problem}.
\begin{definition}
For a game representation, the \emph{deviation-adjusted social welfare problem} is the following: given an instance of the representation and rational vector $(y,t)\in\mathds{Q}^{N+1}$
such that $y\geq 0$, determine if there exists an $s$ such that the deviation-adjusted social welfare $\hat{w}_s(y)>t$; if so output such an $s$.
\end{definition}
\begin{proposition}
If the deviation-adjusted social welfare problem can be solved in polynomial time for a game representation, then so can the problem of computing the maximum social welfare CE.
\end{proposition}
\fullver{
\begin{proof}
Recall that an algorithm for Problem \ref{prob:optCE_sep} can be used as a separation oracle for \eqref{dual}.
Then we can apply the ellipsoid method using the given algorithm for the deviation-adjusted social welfare problem as a separation oracle. This solves \eqref{dual} in polynomial time.
By LP duality, the optimal objective of \eqref{dual} is the social welfare of the optimal CE.
The cutting planes generated during the ellipsoid method can then be used to compute such a CE with polynomial-sized support.
\nothesis{\qed}
\end{proof}
}
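The identity $\hat{w}_s(y)=(U_s)^Ty+w_s$ underlying this reduction can also be verified mechanically, by computing the two sides independently; the sketch below does so for a small randomly generated two-player game (the game and all names are hypothetical).

```python
import itertools
import random

random.seed(1)
n_actions = (2, 3)  # a hypothetical two-player game with |S_0| = 2, |S_1| = 3
profiles = list(itertools.product(*(range(m) for m in n_actions)))
u = {(p, s): random.uniform(-1, 1) for p in range(2) for s in profiles}
y = {(p, i, j): random.uniform(0, 1)
     for p in range(2) for i in range(n_actions[p]) for j in range(n_actions[p])}

def deviate(p, s, j):
    t = list(s); t[p] = j
    return tuple(t)

def U_entry(p, i, j, s):
    # entry of U in row (p, i, j) and column s: nonzero only when s_p = i
    return u[p, s] - u[p, deviate(p, s, j)] if s[p] == i else 0.0

for s in profiles:
    w_s = sum(u[p, s] for p in range(2))
    # left side: (U_s)^T y + w_s, from the explicit matrix entries
    lhs = w_s + sum(y[p, i, j] * U_entry(p, i, j, s) for (p, i, j) in y)
    # right side: the deviation-adjusted social welfare from the definition
    rhs = sum(u[p, s] + sum(y[p, s[p], j] * (u[p, s] - u[p, deviate(p, s, j)])
                            for j in range(n_actions[p]))
              for p in range(2))
    assert abs(lhs - rhs) < 1e-12
```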
\thesis{We observe that our approach has certain similarities to the Ellipsoid Against Hope algorithm and its variants discussed in \autoref{ch:CE}:
both approaches are black-box approaches based on LP duality formulations of the respective problems, and both make use of the ellipsoid method to overcome
the exponential size of the LPs.
On the other hand, due to the different LP formulations of the sample CE problem and the optimal CE problem respectively, the two approaches require different separation oracles, which leads to the different requirements on the subroutines provided by the representation.
}
Let us consider interpretations of the dual variables $y$ and the deviation-adjusted social welfare of a game.
\hide{
\cite{hart1989existence} proved the existence of CE in any finite game by analyzing the LP formulation of CE and interpreting it as a two-player game between a primal player and a dual player.
Can we interpret a pair of primal and dual LPs like
\eqref{primal} and \eqref{dual}
as a constant-sum game played by a player controlling the primal variables $x$ and a player controlling the dual variables $y$?
One obstacle is that in \eqref{dual} the domain of $y$ is unbounded, so we cannot easily interpret it as a mixed strategy.
Relevant is \cite{nau1990coherent} which gave another proof of the existence of CE. Their dual formulation has a unbounded domain for $y$,
and they interpret such a $y$ as an arbitrage plan against the group of players in the game.
On the other hand, our LP formulation is different in that we have the social welfare as the objective of \eqref{primal}.
We now give one interpretation of the dual problem.}
The dual \eqref{dual} can be rewritten as
$\min_{y\geq 0}\max_s \hat{w}_s(y)$. By weak duality, for a given $y\geq 0$ the maximum deviation-adjusted social welfare $\max_s \hat{w}_s(y)$ is an upper bound on the
social welfare of the maximum social welfare CE. So the task of the dual \eqref{dual} is to find $y$ such that the resulting maximum deviation-adjusted social welfare gives the tightest bound.\footnote{An equivalent perspective is to view $y$ as Lagrange
multipliers, and the optimal deviation-adjusted SW problem as the Lagrangian
relaxation of \eqref{primal} given the multipliers $y$.
}
At optimum,
$y$ corresponds to the concept of ``shadow prices'' from optimization theory; that is,
$y^p_{ij}$ equals the rate of change in the social welfare objective when the constraint $(p,i,j)$ is relaxed infinitesimally.
Compared to the maximum social welfare CE problem, the maximum deviation-adjusted social welfare problem
replaces the incentive constraints with a set of additional penalties or rewards.
Specifically,
we can interpret $y$ as a set of nonnegative prices, one for each incentive constraint $(p,i,j)$ of \eqref{primal}.
At strategy profile $s$, for each incentive constraint $(p,i,j)$ we impose a penalty equal to $y^p_{ij}$ times the amount the constraint $(p,i,j)$ is violated
by $s$. Note that the penalty can be negative, and is zero if $s_p\neq i$.
Then $\hat{w}_s(y)$ is equal to the social welfare of $s$ in the modified game.
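The upper-bound property can also be observed numerically: for a fixed CE $x$ and any $y\ge 0$, the maximum deviation-adjusted social welfare never falls below $w^Tx$. The sketch below is a hypothetical example using Aumann's Chicken game, whose well-known CE supported on $(D,C)$, $(C,D)$, $(C,C)$ with probability $1/3$ each has social welfare $10$; it samples random price vectors $y$ and checks the bound.

```python
import itertools
import random

# Chicken game: u[p][s1][s2]; x is a CE with social welfare 10
u = [[[0, 7], [2, 6]], [[0, 2], [7, 6]]]
x = {(0, 1): 1 / 3, (1, 0): 1 / 3, (1, 1): 1 / 3}
sw_of_x = sum(prob * (u[0][s[0]][s[1]] + u[1][s[0]][s[1]])
              for s, prob in x.items())

def deviation_adjusted_sw(s, y):
    # the deviation-adjusted social welfare of s, computed from the definition
    total = 0.0
    for p in (0, 1):
        up_s = u[p][s[0]][s[1]]
        total += up_s
        for j in (0, 1):
            dev = (j, s[1]) if p == 0 else (s[0], j)
            total += y[p, s[p], j] * (up_s - u[p][dev[0]][dev[1]])
    return total

random.seed(2)
for _ in range(1000):
    y = {(p, i, j): random.uniform(0, 2)
         for p in (0, 1) for i in (0, 1) for j in (0, 1)}
    best = max(deviation_adjusted_sw(s, y)
               for s in itertools.product((0, 1), (0, 1)))
    assert best >= sw_of_x - 1e-9  # weak duality: an upper bound on CE welfare
```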
\begin{bf}Practical computation.\end{bf}
\thesis{We have seen from Chapters \ref{ch:Survey}, \ref{ch:AGG} and \ref{ch:CE} that t}\nothesis{T}he problem of computing the expected utility \nothesis{(EU)} given a mixed strategy profile has been established as an important subproblem for both the sample NASH problem and the sample CE problem, both in theory \nothesis{\cite{Daskalakis06,PR08JACM}} and in practice\nothesis{ \cite{BlumSheltonKoller,AGG-full}}.
Our results \thesis{in this chapter} suggest that the deviation-adjusted social welfare problem is of similar importance to the optimal CE problem.
This connection is more than theoretical: our algorithmic approach can be turned into a practical method for computing optimal CE.
In particular, although it makes use of the ellipsoid method, we can easily substitute a more practical method, such as simplex with column generation.
In contrast, \emcite{PR08JACM}'s algorithmic approach for reduced forms makes two nested applications of the ellipsoid method, and is less likely to be practical.
\fullver{Furthermore, even for representations without a polynomial-time algorithm for the deviation-adjusted social welfare problem, a promising direction would be to formulate the deviation-adjusted social welfare problem as
an integer program or constraint program and solve it using, e.g., CPLEX.
}
\fullver{
\subsection{The Weighted Deviation-Adjusted Social Welfare Problem}
For the max-min welfare CE problem, we can form the dual of \eqref{mmprimal},
\begin{align}
\min \: t\label{mmdual}\\
U^Ty + \sum_p v_p u^p \leq t\mathbf{1}\label{eq:maxmin_dual_constr}\\
y\geq 0, \;
v\geq 0\fullver{\nonumber\\}\shortver{, \;}
\sum_p v_p=1.\notag
\end{align}
This is again an LP with polynomial number of variables and exponential number of constraints; specifically, block \eqref{eq:maxmin_dual_constr}
is exponential. We observe that \eqref{eq:maxmin_dual_constr} is similar to the corresponding block in \eqref{dual}, except for the weighted sum
$\sum_p v_p u^p $ instead of the social welfare $w$.
Thus, in order to express the left-hand side of \eqref{eq:maxmin_dual_constr} we need notions slightly different from those given in Definition \ref{def:adj_u_sw}, which we call \emph{weighted deviation-adjusted utility} and \emph{weighted deviation-adjusted social welfare}.
\begin{definition}
Given a game, a vector $y\in\mathds{R}^{N}$
such that $y\geq 0$, and a vector $ v\in \mathds{R}^n$ such that $v\geq 0$ and $\sum_p v_p=1$,
the \emph{weighted deviation-adjusted utility} for player $p$ under pure profile $s$ is
\[\hat{u}^p_s(y,v)=v_p u^p_s +\sum_{j\in S_p} y^p_{s_p,j}(u^p_s- u^p_{js_{-p}}).
\]
The weighted deviation-adjusted social welfare is $\hat{w}_s(y,v)=\sum_p \hat{u}^p_s(y,v)$.
\end{definition}
Following analysis similar to that given above, the following problem serves as a separation oracle of LP \eqref{mmdual}.
\begin{definition}
For a game representation,
the \emph{weighted deviation-adjusted social welfare problem} is the following: given an instance of the representation, and rational vector $(y,v,t)\in\mathds{Q}^{N+n+1}$
such that $y\geq 0$, $v\geq 0$ and $\sum_p v_p=1$, determine if there exists an $s$ such that the weighted deviation-adjusted social welfare $\hat{w}_s(y,v)>t$; if so output such an $s$.
\end{definition}
\begin{proposition}
If the weighted deviation-adjusted social welfare problem can be solved in polynomial time for a game representation, then the problem of computing the max-min welfare CE
is in polynomial time for this representation.
\end{proposition}
It is straightforward to see that the deviation-adjusted social welfare problem reduces to the weighted deviation-adjusted social welfare problem.
In all representations that we consider in this chapter, the weighted and unweighted versions have the same structure and thus the same complexity.
}
\subsection{The Coarse Deviation-Adjusted Social Welfare Problem}
For the optimal social welfare CCE problem, we can
form the dual of \eqref{cprimal},
\begin{align}\label{cdual}
\min\: &t\\
C^T y + w &\leq t\mathbf{1}\nonumber\\
y&\geq 0\nonumber
\end{align}
\begin{definition}
We label the $(p,j)$-th element of $y$ as $y^p_j$.
Given a game, and a
vector $y\in\mathds{R}^{\sum_p|S_p|}$ such that $y\geq 0$,
the \emph{coarse deviation-adjusted utility} for player $p$ under pure profile $s$ is
\fullver{\[}\shortver{$}
\tilde{u}^p_s(y)= u^p_s +\sum_{j\in S_p} y^p_{j}(u^p_s- u^p_{js_{-p}})\fullver{.\]}\shortver{$.}
The coarse deviation-adjusted social welfare is $\tilde{w}_s(y)=\sum_p \tilde{u}^p_s(y)$.
\end{definition}
\begin{proposition}
If the coarse deviation-adjusted social welfare problem can be solved in polynomial time for a game representation, then the problem of computing the maximum social welfare CCE
is in polynomial time for this representation.
\end{proposition}
The coarse deviation-adjusted social welfare problem reduces to the deviation-adjusted social welfare problem.
To see this, given an input
vector $y$ for the coarse deviation-adjusted social welfare problem, we can construct an input vector $y'\in \mathds{Q}^N$ for the deviation-adjusted social welfare problem with
$y'^p_{ij} = y_j^p$ for all $p\in \mathcal{N}$ and $i,j\in S_p$.
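This reduction is purely mechanical; a minimal Python sketch (hypothetical names) of the price lifting:

```python
def coarse_prices_to_full(players, strategies, y_coarse):
    """Lift coarse prices y^p_j to full prices y'^p_{ij} = y^p_j,
    i.e. copy the price for deviating *to* j across every origin action i."""
    return {(p, i, j): y_coarse.get((p, j), 0.0)
            for p in players
            for i in strategies[p]
            for j in strategies[p]}
```

Because $y'^p_{ij}$ is independent of $i$, the deviation-adjusted social welfare under $y'$ coincides with the coarse deviation-adjusted social welfare under $y$.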
\section{The Deviation-Adjusted Social Welfare Problem for Specific Representations}
In this section we study the deviation-adjusted social welfare problem and its variants on specific representations.
Depending on the representation, the deviation-adjusted social welfare problem is not always solvable in polynomial time. Indeed, \emcite{PR08JACM} showed that for many
representations the problem of optimal CE is NP-hard.
Nevertheless, for such representations we can often identify tractable subclasses of games.
We will argue that the deviation-adjusted social welfare problem is a more useful formulation for identifying tractable classes of games
than the separation problem formulation of \emcite{PR08JACM}, as the latter only applies to reduced-form-based representations.
\subsection{Reduced Forms}\label{sec:reduced_form}
\emcite{PR08JACM} gave the following reduced form characterization of representations.
\begin{definition}[\cite{PR08JACM}]
Consider a game $G=(\mathcal{N}$, $\{S_p\}_{p\in \mathcal{N}}, \{u^p\}_{p\in \mathcal{N}})$. For $p=1,\ldots ,n$, let $P_p=\{C_p^1\ldots C_p^{r_p}\}$ be a partition of $S_{-p}$ into $r_p$ classes. The set $\mathcal{P}=\{P_1,\ldots,P_n\}$ of partitions is a \emph{reduced form} of $G$ if $u^p_s=u^p_{s'}$ whenever (1) $s_p=s'_p$ and (2) both $s_{-p}$ and $s'_{-p}$ belong to the same class in $P_p$. The \emph{size} of a reduced form is the number of classes in the partitions plus the bits required to specify a payoff value for each tuple $(p, k,\ell)$
where $1\leq p\leq n$, $1\leq k\leq r_p$ and $\ell\in S_p$.
\end{definition}
Intuitively, the reduced form imposes the condition that $p$'s utility for choosing an action $s_p$ depends only on which \emph{class} in the partition $P_p$ the profile
of the others' actions belongs to.
\emcite{PR08JACM} showed that several compact representations such as graphical games and anonymous games have natural reduced forms
whose sizes are (roughly) equal to the sizes of the representation.
We say such a compact representation has a \emph{concise reduced form}.
Intuitively, such a reduced form describes the structure of the game's utility functions.
\fullver{
\begin{example}
\thesis{Recall from \autoref{sec:lit_static} that a}\nothesis{A} graphical game \cite{graphical} is associated with a graph $(\mathcal{N},E)$, such that player $p$'s utility depends only on her action
and the actions of her neighbors in the graph. The sizes of the utility functions are exponential only in the degrees of the graph.
Such a game has a natural reduced form where the classes in $P_p$ are identified with the pure profiles of $p$'s neighbors, i.e., $s_{-p}$
and $s'_{-p}$ belong to the same class if and only if they agree on the actions of $p$'s neighbors.
The size of the reduced form is exactly the number of utility values required to specify the graphical game's utility functions.\qed
\end{example}
}
Let $\mathcal{S}_p(k,\ell)$ denote the set of pure strategy profiles $s$ such that $s_p=\ell$ and $s_{-p}$ is in the $k$-th class $C_p^k$ of $P_p$,
and let $u^p_{(k,\ell)}$ denote the utility of $p$ for that set of strategy profiles.
\emcite{PR08JACM} defined the following \emph{Separation Problem} for a reduced form.
\begin{definition}[\cite{PR08JACM}]
Let $\mathcal{P}$ be a reduced form for game $G$. The \emph{Separation Problem} for $\mathcal{P}$ is the following:
Given rational numbers $\gamma_p(k,\ell)$ for all $p\in \{1, \ldots, n\}$, $k\in \{1, \ldots, r_p\}$, and $\ell\in S_p$, is there a pure strategy profile $s$ such that
$
\sum_{p,k,\ell: s\in \mathcal{S}_p(k,\ell)} \gamma_p(k,\ell)<0 ?
$
If so, find such\fullver{ an} $s$.
\end{definition}
Since $s\in \mathcal{S}_p(k,\ell)$ implies $s_p=\ell$, the left-hand side of the above expression is equivalent to $\sum_p\sum_{k:s\in \mathcal{S}_p(k,s_p)} \gamma_p(k,s_p)$.
Furthermore, since $s$ belongs to exactly one class in $P_p$, the expression is a sum of exactly $n$ summands\fullver{, one for each player}.
\emcite{PR08JACM} proved that if the separation problem can be solved in polynomial time, then a CE that maximizes a given linear objective in the players' utilities
can be computed in time polynomial in the size of the reduced form.
How does \emcite{PR08JACM}'s sufficient condition relate to ours, provided that the game has a concise reduced form? We show that
the class of reduced form games for which our \fullver{weighted} deviation-adjusted social welfare problem is polynomial-time solvable contains the class for which the separation problem is polynomial-time solvable.
\begin{proposition}\label{prop:wdsw_reduced_form}
Let $\mathcal{P}$ be a reduced form for game $G$. Suppose the separation problem can be solved in polynomial time. Then
the \fullver{weighted} deviation-adjusted social welfare problem can be solved in time polynomial in the size of the reduced form.
\end{proposition}
\fullver{
\begin{proof}
First we observe that if a game $G$ has a reduced form $\mathcal{P}$, then its deviation-adjusted utilities \fullver{(and weighted deviation-adjusted utilities)} also satisfy the partition structure
specified by $\mathcal{P}$, i.e., given $y$ and $v$, the weighted deviation-adjusted utility $\hat{u}^p_s(y,v)$ depends only on a player's action $s_p$ and the class in $P_p$ that $s_{-p}$ belongs to.
To see why, suppose $s_{-p}\in C_p^k$. Then
\begin{align*}
\hat{u}^p_{\ell s_{-p}}(y,v)&=v_p u^p_{\ell s_{-p}} +\sum_{j\in S_p} y^p_{\ell,j}(u^p_{\ell s_{-p}}- u^p_{js_{-p}})\\
&= v_p u^p_{(k,\ell)} + \sum_{j\in S_p} y^p_{\ell,j} (u^p_{(k,\ell)} -u^p_{(k,j)}),
\end{align*}
which depends only on $\ell$ and $k$.
This proves the following, which will be useful later.
\begin{lemma}\label{lem:wdu_reduced_form}
Let $\mathcal{P}$ be a reduced form for game $G$.
\begin{enumerate}
\item For all $y\in \mathds{R}^N$, $v\in \mathds{R}^n$, for all players $p$, $s_p\in S_p$, and for all $s_{-p}, s'_{-p}\in S_{-p}$, if
$s_{-p}$ and $s'_{-p}$ are in the same class in $P_p$
then the weighted deviation-adjusted utilities
$\hat{u}^p_{s_p,s_{-p}}(y,v)=\hat{u}^p_{s_p,s'_{-p}}(y,v)$.
\item Write the weighted deviation-adjusted utility for player $p$, given her pure strategy $\ell\in S_p$ and class $C_p^k$,
as $\hat{u}^p_{(k,\ell)}(y,v)$ (well defined by the above). We have
\[
\hat{u}^p_{(k,\ell)}(y,v) \equiv v_p u^p_{(k,\ell)} + \sum_{j\in S_p} y^p_{\ell,j} (u^p_{(k,\ell)} -u^p_{(k,j)}).
\]
\end{enumerate}
\end{lemma}
Given an instance of the weighted deviation-adjusted social welfare problem with a game with reduced form $\mathcal{P}$ and rational vectors $y\in \mathds{R}^N$, $v\in \mathds{R}^n$ and $t\in \mathds{R}$, we construct an instance of the separation problem by letting $\gamma_p(k,\ell) = t/n - \hat{u}^p_{(k,\ell)}(y,v)$,
where $\hat{u}^p_{(k,\ell)}(y,v)$ is as defined in Lemma \ref{lem:wdu_reduced_form} and can be efficiently computed given the reduced form.
Recall that the separation problem asks for a pure strategy profile $s$ such that $\sum_{p,k,\ell: s\in \mathcal{S}_p(k,\ell)} \gamma_p(k,\ell)<0$, the left-hand side of which
is a sum of $n$ terms.
By construction, for all $s$,
$\sum_{p,k,\ell: s\in \mathcal{S}_p(k,\ell)} \gamma_p(k,\ell)<0 $ if and only if
$\sum_p\sum_{k:s\in \mathcal{S}_p(k,s_p)} \left(t/n - \hat{u}^p_{(k,s_p)}(y,v)\right)<0$, and since the left-hand side is a sum of $n$ terms,
this holds if and only if
$\hat{w}_s(y,v) >t$.
Therefore the weighted deviation-adjusted social welfare problem instance has a solution $s$ if and only if the
corresponding separation problem instance has a solution $s$, and a polynomial-time algorithm for the separation problem can be used to
solve
the weighted deviation-adjusted social welfare problem in polynomial time.
\nothesis{\qed}
\end{proof}
}
We now compare the \fullver{weighted} deviation-adjusted social welfare problem with the optimal social welfare problem for these representations.
We observe \fullver{from Lemma \ref{lem:wdu_reduced_form}} that the \fullver{weighted} deviation-adjusted social welfare problem can be formulated as an instance of the optimal social welfare problem on another game with the same reduced form
but different payoffs. Can we claim that the existence of a polynomial-time algorithm for the optimal social welfare problem for a representation implies the
existence of a polynomial-time algorithm for the \fullver{weighted} deviation-adjusted social welfare problem (and thus the optimal CE problem)?
This is not necessarily the case, because the representation might impose certain structure on the utility functions that are not captured by the reduced forms, and
the polynomial-time algorithm for the optimal social welfare problem could depend on the existence of such structure. The \fullver{weighted} deviation-adjusted social welfare problem might no longer exhibit such structure and thus might not be solvable using the given algorithm.
Nevertheless, if we consider a game representation that is ``completely characterized''
by its reduced forms, the \fullver{weighted} deviation-adjusted social welfare problem is equivalent to the decision version of the optimal social welfare outcome problem for that representation.
To make this more precise, we say a game representation is a \emph{reduced-form-based representation} if
there exists a mapping from instances of the representation to reduced forms such that it maps each instance to a concise reduced form of that instance,
and if we take such a reduced form and change its payoff values arbitrarily, the resulting reduced form
is a concise reduced form of another instance of the representation.
\begin{corollary}\label{coro:reduced_form_corrspondence}
For a reduced-form-based representation, if there exists a polynomial-time algorithm for the optimal social welfare problem, then the optimal
social welfare CE problem and the max-min welfare CE problem can be solved in polynomial time.
\end{corollary}
Of course, this can be derived using the separation problem for reduced forms without the deviation-adjusted social welfare formulation.
On the other hand, the deviation-adjusted social welfare formulation can be applied to representations without concise reduced forms.
In fact, we will use it to show below that the connection between the optimal social welfare problem and the optimal CE problem
applies to a wider class of representations
than just reduced-form-based representations.
\subsection{Linear Reduced Forms}
One class of representations that do not have concise reduced forms consists of those that represent utility functions as sums of other functions, such as polymatrix games and the hypergraph games of \emcite{PR08JACM}.
In this section we characterize these representations using linear reduced forms, showing that linear-reduced-form-based representations satisfy a property similar
to Corollary \ref{coro:reduced_form_corrspondence}.
Roughly speaking, a linear reduced form has multiple partitions for each agent, rather than just one; an agent's overall utility is a sum over utility functions defined on each of that agent's partitions.
\begin{definition}
Consider a game $G=(\mathcal{N}, \{S_p\}_{p\in \mathcal{N}}, \{u^p\}_{p\in \mathcal{N}})$. For $p=1,\ldots ,n$, let $P_p=\{P_{p,1},\ldots,P_{p,t_p}\}$, where
$P_{p,q}=\{C_{p,q}^1\ldots C_{p,q}^{r_{pq}}\}$ is a partition of $S_{-p}$ into $r_{pq}$ classes. The set $\mathcal{P}=\{P_1,\ldots,P_n\}$ is a \emph{linear reduced form} of $G$ if for each $p$ there exist $u^{p,1},\ldots, u^{p,t_p}\in \mathds{R}^M$ such that for all $s$, $u^p_s=\sum_q u^{p,q}_s$,
and for each $q\leq t_p$, $u^{p,q}_s=u^{p,q}_{s'}$ whenever (1) $s_p=s'_p$ and (2) both $s_{-p}$ and $s'_{-p}$ belong to the same class in $P_{p,q}$. The \emph{size} of a reduced form is the number of classes in the partitions plus the bits required to specify a number for each tuple
$(p,q,k,\ell)$ where $1\leq p\leq n$, $1\leq q\leq t_p$, $1\leq k\leq r_{pq}$ and $\ell\in S_p$.
\end{definition}
We write $u^{p,q}_{(k,\ell)}$ for the value corresponding to tuple $(p,q,k,\ell)$, and for $\mathbf{k}=(k_1,\ldots,k_{t_p} )$
we write $u^{p}_{(\mathbf{k},\ell)} \equiv \sum_q u^{p,q}_{(k_q,\ell)}$.
\begin{example}[polymatrix games]\label{ex:polymatrix_lrf}
\thesis{Recall from \autoref{sec:lit_static} that in}\nothesis{In} a polymatrix game, each player's utility is the sum of utilities resulting from her bilateral interactions with each of the $n-1$ other players:
$
u^p_s = \sum_{p'\neq p} e_{s_p}^T A^{pp'} e_{s_{p'}}
$
where $A^{pp'}\in \mathds{R}^{|S_p|\times| S_{p'}|}$ and $ e_{s_p}\in \mathds{R}^{|S_p|}$ is the unit vector corresponding to $s_p$.
The utility functions of such a representation require only $\sum_{p,p'\in \mathcal{N}}|S_p|\times| S_{p'}|$ values to specify.
Polymatrix games do not have a concise reduced-form encoding, but can easily be written as linear-reduced-form games. Essentially, we create one partition for every bilateral matrix game that an agent plays; the classes of that partition are indexed by the action of the other participant in that matrix game, each class containing all profiles of the remaining players in which that participant plays the given action.
Formally, given a polymatrix game, we construct its linear reduced form
with $P_p=\{P_{p,q}\}_{q\in \mathcal{N}\setminus\{ p\}}$, and $P_{p,q} =\{ C_{p,q}^\ell\}_{\ell\in S_q}$ with $C_{p,q}^\ell=\{s_{-p}|s_q=\ell\}$.\qed
\end{example}
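As a sanity check on the representation itself, here is a minimal Python sketch (hypothetical names) of polymatrix utility evaluation; $A[p]$ maps each opponent $q$ to the bilateral payoff matrix $A^{pq}$, stored as nested lists:

```python
def polymatrix_utility(A, s, p):
    """u^p_s = sum_{p' != p} e_{s_p}^T A^{pp'} e_{s_{p'}};
    with matrices as nested lists this is just an indexed sum.
    A[p] is keyed by p's opponents only, so no self-term arises."""
    return sum(A[p][q][s[p]][s[q]] for q in A[p])
```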
Most of the results in Section \ref{sec:reduced_form} straightforwardly translate to linear reduced forms.
\fullver{
\begin{lemma}\label{lem:wdu_linear_reduced_form}
Let $\mathcal{P}$ be a linear reduced form for game $G$.
Then for all $y\in \mathds{R}^N$, $v\in \mathds{R}^n$, for all players $p$, there exist $\hat{u}^{p,1}(y,v),\ldots, \hat{u}^{p,t_p}(y,v)\in \mathds{R}^M$ such that the weighted deviation-adjusted utilities $\hat{u}^p(y,v)=\sum_q \hat{u}^{p,q}(y,v)$, and
for all $q\leq t_p$,
$s_p\in S_p$ and $s_{-p}, s'_{-p}\in S_{-p}$, if
$s_{-p}$ and $s'_{-p}$ are in the same class in $P_{p,q}$,
then
$\hat{u}^{p,q}_{s_p,s_{-p}}(y,v)=\hat{u}^{p,q}_{s_p,s'_{-p}}(y,v)$.
Write the weighted deviation-adjusted utility for player $p$, her pure strategy $\ell\in S_p$ and classes $C_{p,1}^{k_1}, \ldots, C_{p,t_p}^{k_{t_p}}$
as $\hat{u}^p_{(\mathbf{k},\ell)}(y,v)$ where $\mathbf{k}=(k_1,\ldots,k_{t_p} )$. Furthermore, we have
\[
\hat{u}^p_{(\mathbf{k},\ell)}(y,v) \equiv v_p u^p_{(\mathbf{k},\ell)} + \sum_{j\in S_p} y^p_{\ell,j} (u^p_{(\mathbf{k},\ell)} -u^p_{(\mathbf{k},j)}).
\]
\end{lemma}
}
\begin{corollary}\label{coro:linear_reduced_form_corrspondence}
For a linear-reduced-form-based representation, if there exists a polynomial-time algorithm for the optimal social welfare problem, then the optimal
social welfare CE problem and the max-min welfare CE problem can be solved in polynomial time.
\end{corollary}
\subsubsection{Graphical Polymatrix Games}
A polymatrix game may have graphical-game-like structure: player $p$'s utility may depend only on a subset of the other players' actions.
In terms of utility functions, this corresponds to $A^{pp'}=0$ for certain pairs of players $p,p'$.
As with graphical games, we can construct the (undirected) graph $G=(\mathcal{N},E)$ where there is an edge $\{p,p'\}\in E$ if $A^{pp'}\neq 0$ or $A^{p'p}\neq 0$.
We call such a game a graphical polymatrix game.
This can also be understood as a graphical game where each player $p$'s utility is the sum of bilateral interactions with her neighbors.
A tree polymatrix game is a graphical polymatrix game whose corresponding graph is a tree. Consider the optimal CE problem on tree polymatrix games.
Since such a game is also a tree graphical game,
\emcite{PR08JACM}'s optimal CE algorithm for tree graphical games can be applied. However, this algorithm does not run in polynomial time, because
the representation size of tree polymatrix games can be exponentially smaller than that of the corresponding graphical game (which grows exponentially in the degree of the graph).
\fullver{However, we can give a different polynomial-time algorithm for this problem.}\shortver{Nevertheless, we give a polynomial-time algorithm for the deviation-adjusted social welfare problem for such games, which then implies the following theorem.}
\begin{theorem}
Optimal CE in tree polymatrix games can be computed in polynomial time.
\end{theorem}
\fullver{
\begin{proof}
It is sufficient to give an algorithm for the deviation-adjusted social welfare problem.
Using an argument similar to that given in Example \ref{ex:polymatrix_lrf},
tree polymatrix games have a natural linear reduced form, and it is straightforward to verify that tree polymatrix games
are a linear-reduced-form-based representation.
By Corollary \ref{coro:linear_reduced_form_corrspondence} it is sufficient to construct an algorithm for the optimal social welfare problem.
Let $N_p$ be the set of players in the subtree rooted at $p$.
Suppose $p$'s parent in the tree is $q$. Let the \emph{social welfare contribution} of $N_p$ be the social welfare of players in $N_p$ minus $e_{s_p}^T A^{pq} e_{s_{q}}$. Let the social welfare contribution of the root player be the social welfare of $\mathcal{N}$. Then the social welfare contribution of $N_p$ depends solely on the pure strategy profile restricted to $N_p$.
The following dynamic programming algorithm solves the optimal social welfare problem in polynomial time. We go
from the leaves to the root of the tree.
Each child $q $ of $p$ passes to its parent the message $\{w^{N_{q},s_q}\}_{s_q\in S_q}$, where $w^{N_{q},s_q}$ is the optimal social welfare contribution of $N_{q}$ provided that $q$ plays $s_q$.
Given the messages from all of $p$'s children $q_1,\ldots, q_k$,
we can compute the message of $p$ as follows: for each $s_p\in S_p$,
\begin{align*}
w^{N_{p},s_p}& = \max_{s_{q_1},\ldots,s_{q_k}}\sum_{j=1}^k \left[w^{N_{q_j},s_{q_j}} + e_{s_p}^TA^{p,{q_j}} e_{s_{q_j}} + e_{s_{q_j}}^TA^{{q_j},p} e_{s_p}\right] \\
&=\sum_{j=1}^k \max_{s_{q_j}}\left[w^{N_{q_j},s_{q_j}} + e_{s_p}^TA^{p,{q_j}} e_{s_{q_j}} + e_{s_{q_j}}^TA^{{q_j},p} e_{s_p}\right].
\end{align*}
The second equality holds because the $j$-th summand depends only on $s_{q_j}$. Note that each child edge contributes its payoff to both endpoints, since the contribution $w^{N_{q_j},s_{q_j}}$ excludes $q_j$'s payoff from the edge to its parent $p$.
It is straightforward to verify that the optimal social welfare is
$\max_{s_r} w^{N_r,s_r}$ where $r$ is the root player, and that the algorithm runs in polynomial time.
The corresponding optimal pure strategy profile can be constructed by going from the root to the leaves. \nothesis{\qed}
\end{proof}
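The message-passing recursion above can be sketched in a few lines of Python (hypothetical names; each edge's payoff is credited to both endpoints, since a child's contribution excludes its payoff from the edge to its parent):

```python
def tree_polymatrix_opt_welfare(children, A, num_actions, root):
    """Bottom-up dynamic program for optimal social welfare in a
    tree polymatrix game.

    children[p] : list of p's children in the rooted tree
    A[p][q]     : p's payoff matrix against neighbor q (nested lists)

    w[a] is the best social welfare contribution of p's subtree
    given that p plays action a."""
    def solve(p):
        w = [0.0] * num_actions
        for q in children.get(p, []):
            wq = solve(q)
            for a in range(num_actions):
                # best choice for child q given p plays a, counting
                # the (p, q) edge payoff to both endpoints
                w[a] += max(wq[b] + A[p][q][a][b] + A[q][p][b][a]
                            for b in range(num_actions))
        return w
    return max(solve(root))
```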
This algorithm can be straightforwardly extended to yield a polynomial-time algorithm for optimal CE in graphical polymatrix games with constant treewidth,
for hypergraphical games \cite{PR08JACM} on acyclic hypergraphs, and more generally for hypergraphs with constant hypertree-width.
}
\subsection{Representations with Action-Specific Structure}\label{sec:optCE_act_spec}
The above results for reduced forms and linear reduced forms crucially depend on the fact that the partitions (i.e., the structure of the utility functions) depend on $p$ but
do not depend on the action chosen by player $p$.
There are representations whose utility functions have action-dependent structure, including
congestion games \cite{congestion}, local effect games \cite{localeffect}, and action-graph games \cite{AGG-full}.
For such representations, we can define a variant of the reduced form that has action-dependent partitions.
\thesis{For example:
\begin{definition}
Consider a game $G=(\mathcal{N}, \{S_p\}_{p\in \mathcal{N}}, \{u^p\}_{p\in \mathcal{N}})$. For $p=1,\ldots ,n$, $\ell\in S_p$, let $P_{p,\ell}=\{P_{p,\ell,1},\ldots,P_{p,\ell,t_{p\ell}}\}$, where
$P_{p,\ell,q}=\{C_{p,\ell,q}^1\ldots C_{p,\ell,q}^{r_{p\ell q}}\}$ is a partition of $S_{-p}$ into $r_{p\ell q}$ classes. The set $\mathcal{P}=\{P_{p,\ell}\}_{p\in \mathcal{N}, \ell\in S_p}$ is an \emph{action-specific linear reduced form} of $G$ if for each $p,\ell$ there exist $u^{p,\ell,1},\ldots, u^{p,\ell,t_{p\ell}}\in \mathds{R}^M$ such that for each $p\in \mathcal{N}$, $\ell\in S_p$, and $q\leq t_{p\ell}$,
\begin{enumerate}
\item for all $s_{-p}\in S_{-p}$, $u^p_{\ell s_{-p}}=\sum_q u^{p,\ell,q}_{\ell s_{-p}}$;
\item $u^{p,\ell,q}_{\ell s_{-p}}=u^{p,\ell,q}_{\ell s'_{-p}}$ whenever
both $s_{-p}$ and $s'_{-p}$ belong to the same class in $P_{p,\ell,q}$.
\end{enumerate}
The \emph{size} of a reduced form is the number of classes in the partitions plus the bits required to specify a number for each tuple
$(p,q,k,\ell)$ where $1\leq p\leq n$, $1\leq q\leq t_{p\ell}$, $1\leq k\leq r_{p\ell q}$ and $\ell\in S_p$.
\end{definition}
}
However, unlike both the reduced form and linear reduced form, the \fullver{weighted} deviation-adjusted utilities no longer satisfy the same partition structure as the utilities.
Intuitively, the \fullver{weighted} deviation-adjusted utility at $s$ has contributions from the utilities of the strategy profiles when player $p$ deviates to different actions.
Whereas for linear reduced forms these deviated strategy profiles correspond to the same class as $s$ in the partition,
we now consider different partitions for each action to which $p$ deviates.
As a result, the \fullver{weighted} deviation-adjusted social welfare problem has a more complex form than the optimal social welfare problem.
\subsubsection{Singleton Congestion Games}
\thesis{As mentioned in Chapters \ref{ch:Survey} and \ref{ch:pure}, }\emcite{ieong2005fac} studied a class of games called singleton congestion games and showed that the optimal PSNE can be computed in polynomial time.
Such a game can be formulated as an instance of congestion games where each action contains a single resource,
or an instance of symmetric AGGs where the only edges are self edges.
Formally, a singleton congestion game is specified by $( \mathcal{N}, \mathcal{A}, \{f^\alpha\}_{\alpha\in\mathcal{A} } )$ where $\mathcal{N}=\{1,\ldots,n\}$ is the set of players,
$\mathcal{A}$ the set of actions, and for each action $\alpha\in\mathcal{A}$, $f^\alpha: [n]\rightarrow \mathds{R}$.
The game is symmetric; each player's set of actions $S_p\equiv \mathcal{A}$. Each strategy profile $s$ induces an action count $c(\alpha)=|\{p|s_p=\alpha\}|$ on each $\alpha$: the number of players playing action $\alpha$. Then the utility of a player that chose $\alpha$ is
$f^\alpha(c(\alpha))$.
The representation requires $O(|\mathcal{A}| n)$ numbers to specify.
We now show that the optimal social welfare CCE problem can be solved in polynomial time for singleton congestion games.
Before attacking the problem, we first note
that the optimal social welfare problem can be solved in polynomial time by a relatively straightforward dynamic-programming algorithm which
is a simplified version of \emcite{ieong2005fac}'s algorithm for optimal PSNE in singleton congestion games.
\fullver{First observe that the social welfare of a strategy profile can be written in terms of the action counts:
\[w_s = \sum_\alpha c(\alpha) f^\alpha (c(\alpha)).\]
The optimal social welfare problem is equivalent to finding a vector of action counts that sums to $n$ and maximizes the above expression. The social welfare can be further
decomposed into contributions from each action $\alpha$. The dynamic-programming algorithm starts with a single action and adds one action at a time until all actions are added.
At each iteration, it maintains a set of tuples $\{(n', w^{n'}) \}_{1\leq n'\leq n}$, specifying that the best social welfare contribution from the current set of actions is $w^{n'}$ when exactly $n'$ players chose actions in the current set.
Consider the optimal social welfare CCE problem.
}
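The optimal social welfare problem for singleton congestion games admits a simple dynamic program over action counts; a minimal Python sketch (hypothetical names), where $f[a][c]$ tabulates the per-player utility $f^a(c)$ and actions are considered one at a time:

```python
def opt_welfare_singleton_congestion(n, f):
    """Optimal social welfare of a singleton congestion game by dynamic
    programming over action counts.

    f[a][c] : per-player utility f^a(c) when c of the n players pick a
    best[m] : best welfare achievable when m players have been assigned
              to the actions considered so far
    """
    NEG = float('-inf')
    best = [0.0] + [NEG] * n
    for a in range(len(f)):
        new = [NEG] * (n + 1)
        for m in range(n + 1):
            if best[m] == NEG:
                continue
            for c in range(n - m + 1):          # c players assigned to a
                contrib = c * f[a][c] if c > 0 else 0.0
                if best[m] + contrib > new[m + c]:
                    new[m + c] = best[m] + contrib
        best = new
    return best[n]       # all n players must be assigned
```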
Can we leverage the algorithm for the optimal social welfare problem to solve the coarse deviation-adjusted social welfare problem?
Our task here is slightly more complicated: in general, the coarse deviation-adjusted social welfare problem no longer has this symmetric structure, because
$y$ can be asymmetric.
However, when $y$ is player-symmetric (that is, $y^p_j=y^{p'}_j$ for all pairs of players $(p,p')$), then we
recover symmetric structure.
\begin{lemma}\label{lem:optCE_scg_sep}
Given a singleton congestion game and player-symmetric input $y$, the coarse deviation-adjusted social welfare problem can be solved in polynomial time.\end{lemma}
\fullver{
\begin{proof}
The coarse deviation-adjusted social welfare can be written as
\begin{align*}
\tilde{w}_s(y) &= \sum_p u^p_s(1+\sum_{j\neq s_p} y^p_j) - \sum_p\sum_{j\neq s_p} y^p_j u^p_{js_{-p}}\\
&=
\sum_{\alpha\in \mathcal{A}} \left[c(\alpha)f^\alpha(c(\alpha))\left(1+\sum_{j\neq \alpha} y^p_j\right)
-(n-c(\alpha))f^\alpha(c(\alpha)+1)y^p_\alpha\right].
\end{align*}
The contribution from each action $\alpha$ depends only on $c(\alpha)$. Therefore, using a similar dynamic-programming algorithm as above we
can solve the coarse deviation-adjusted social welfare problem in polynomial time. \nothesis{\qed}
\end{proof}
}
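Concretely, under a player-symmetric $y$ the per-action contribution $c\,f^\alpha(c)\bigl(1+\sum_{j\neq \alpha} y_j\bigr)-(n-c)f^\alpha(c+1)\,y_\alpha$ at count $c$ plugs directly into a count-based dynamic program; a minimal Python sketch (hypothetical names):

```python
def coarse_dasw_singleton_congestion(n, f, y):
    """Maximize the coarse deviation-adjusted social welfare of a
    singleton congestion game under player-symmetric prices y[a].

    f[a][c] : per-player utility f^a(c) for c = 0..n (f[a][0] unused)
    Per-action contribution at count c:
        c * f[a][c] * (1 + sum_{j != a} y[j]) - (n - c) * f[a][c+1] * y[a]
    """
    Y = sum(y)
    NEG = float('-inf')

    def contrib(a, c):
        gain = c * f[a][c] * (1.0 + Y - y[a]) if c > 0 else 0.0
        loss = (n - c) * f[a][c + 1] * y[a] if c < n else 0.0
        return gain - loss

    # dynamic program over actions: best[m] = best total contribution
    # when m players have been assigned to the actions considered so far
    best = [0.0] + [NEG] * n
    for a in range(len(f)):
        new = [NEG] * (n + 1)
        for m in range(n + 1):
            if best[m] == NEG:
                continue
            for c in range(n - m + 1):
                val = best[m] + contrib(a, c)
                if val > new[m + c]:
                    new[m + c] = val
        best = new
    return best[n]
```

When $y=0$ this reduces to the plain optimal social welfare of the game.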
Therefore, if we can guarantee that during a run of the ellipsoid method for \eqref{cdual} all input queries $y$ to the separation oracle are symmetric,
then we can apply Lemma \ref{lem:optCE_scg_sep} to solve the problem in polynomial time.
We observe that for any symmetric game, there must exist a \emph{symmetric} CE that optimizes the social welfare. This is because given an optimal CE we can
create a mixture of permuted versions of this CE, which must itself be a CE by convexity, and must also achieve the same social welfare by symmetry.
However, this argument in itself does not guarantee that the $y$ we obtain by the method above will be symmetric.
Instead, we observe that if we solve \eqref{cdual} using an ellipsoid method with a player-symmetric initial ball, and
use a separation oracle that returns a player-symmetric cutting plane, then the query points $y$ will be player-symmetric.
We are able to construct such a separation oracle using a symmetrization argument.
\begin{theorem}\label{thm:optCE_scg}
Given a singleton congestion game, the optimal social welfare CCE can be computed in polynomial time.
\end{theorem}
\nothesis{\fullver{The proof is given in Appendix \ref{app:scg_proof}.}}
\thesis{\begin{proof}
As argued in Section \ref{sec:optCE_act_spec},
it is sufficient to construct a separation oracle for \eqref{cdual} that returns a player-symmetric cutting plane.
The cutting plane corresponding to a pure strategy profile solution $s$ of the coarse deviation-adjusted social welfare problem is not player-symmetric in general, but we can symmetrize it by constructing a mixture of permutations of $s$. Since by symmetry each permuted version of $s$ corresponds to a violated constraint, the resulting cutting plane is still valid and is symmetric. Enumerating all permutations over players would take exponential time, but it turns out that for our purposes it is sufficient to use a small set of permutations.
Formally, let $\pi_i$ be the permutation over the set of players $\mathcal{N}$ that maps each $p$ to $p+ i \mod n$. Then the set of permutations
$\{\pi_i\}_{0\leq i\leq n-1}$ corresponds to the cyclic group.
Suppose $s$ is a solution of the coarse deviation-adjusted social welfare problem with symmetric input $y$. The corresponding cut (violated constraint) is
$(C_s)^T y + w_s\leq t$. Recall that the $(p,j)$-th entry of $C_s$ is $C_s^{p,j}=(u^p_s-u^p_{js_{-p}})$.
For a permutation $\pi$ over $\mathcal{N}$, write $s^\pi$ for the permuted profile induced by $\pi$, i.e., $s^\pi=(s_{\pi(1)},\ldots,s_{\pi(n)})$.
Then $s^\pi$ is also a solution of the coarse deviation-adjusted social welfare problem.
Form the following convex combination of $n$ of the constraints of \eqref{cdual}:
\[
\frac{1}{n}\sum_{i=0}^{n-1} \left[(C_{s^{\pi_i}})^T y+ w_{s^{\pi_i}}\right] \leq t.
\]
The left-hand side can be simplified to $w_s+(\overline{C}_s)^T y$ where $\overline{C}_s=\frac{1}{n}\sum_{i=0}^{n-1} C_{s^{\pi_i}}$.
We claim that this cutting plane is player-symmetric, meaning $\overline{C}_s^{p,j}=\overline{C}_s^{p',j}$ for all pairs of players $p,p'$ and all $j\in \mathcal{A}$.
This is because
\begin{align*}
\overline{C}_s^{p,j} & = \frac{1}{n}\sum_{i=0}^{n-1}C_{s^{\pi_i}}^{p,j}
= \frac{1}{n}\sum_{i=0}^{n-1} (u^p_{s^{\pi_i}}-u^p_{js^{\pi_i}_{-p}})\\
&= \frac{1}{n}\left[\sum_{\alpha\neq j} c(\alpha)f^\alpha(c(\alpha)) - (n-c(j))f^j(c(j)+1)\right]
=\overline{C}_s^{p',j}.
\end{align*}
This concludes the proof.
\nothesis{\qed}
\end{proof}
}
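The cyclic symmetrization above can be checked computationally. The following Python sketch (the game, the utility functions $f^j(\cdot)$, and all names are our own illustration, not an implementation accompanying this paper) builds the cut matrix $C_s$ for a small singleton congestion game, averages it over the $n$ cyclic shifts of $s$, and verifies both the player-symmetry of $\overline{C}_s$ and the closed form derived in the proof.

```python
# Sketch (our own illustration): symmetrizing a cutting plane for a
# singleton congestion game by averaging over the n cyclic shifts of s.
from fractions import Fraction

n = 4                                    # players 0..3
A = ["a", "b"]                           # singleton actions
# f[alpha][k]: payoff for playing alpha when k players (incl. you) play it
f = {"a": {1: 6, 2: 4, 3: 2, 4: 1}, "b": {1: 5, 2: 3, 3: 2, 4: 1}}

def count(s, alpha):
    return sum(1 for a in s if a == alpha)

def u(p, s):                             # player p's utility under profile s
    return f[s[p]][count(s, s[p])]

def C(s):                                # C_s^{p,j} = u^p_s - u^p_{j s_{-p}}
    return {(p, j): u(p, s) - u(p, s[:p] + (j,) + s[p + 1:])
            for p in range(n) for j in A}

s = ("a", "a", "b", "a")                 # an arbitrary pure profile
# average C over the cyclic shifts s^{pi_i} = (s_i, s_{i+1}, ..., s_{i-1})
shifts = [tuple(s[(p + i) % n] for p in range(n)) for i in range(n)]
Cbar = {(p, j): Fraction(sum(C(t)[p, j] for t in shifts), n)
        for p in range(n) for j in A}

for j in A:
    # player-symmetry: every row of the averaged cut agrees
    assert len({Cbar[p, j] for p in range(n)}) == 1
    # closed form from the proof:
    # (1/n)[ sum_{alpha != j} c(alpha) f^alpha(c(alpha))
    #        - (n - c(j)) f^j(c(j)+1) ]
    cj = count(s, j)
    closed = Fraction(sum(count(s, a) * f[a][count(s, a)]
                          for a in A if a != j and count(s, a) > 0)
                      - (n - cj) * f[j][cj + 1], n)
    assert Cbar[0, j] == closed
```

Note that the averaged cut depends on $s$ only through the action counts, as the closed form predicts.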
\fullver{Our approach for singleton congestion games crucially depends on the fact that the coarse deviation profile $y^p_j$ does not care which action it is deviating from.
This allowed us to (in the proof of Lemma \ref{lem:optCE_scg_sep}) decompose the coarse deviation-adjusted social welfare into terms that only depend on the action count on one action.
The same approach cannot be directly applied to solve the optimal CE problem, because then the deviation profile would give a different
$y^p_{ij}$ for each action $i$ that $p$ deviates from, and the resulting expression for deviation-adjusted social welfare would involve summands that
depend on the action counts on pairs of actions.
}
\thesis{An interesting future direction is to explore whether our approach for singleton congestion games can be generalized to other classes of symmetric games, such as symmetric AGGs with bounded treewidth.}
\fullver{
\section{Conclusion and Open Problems}
We have proposed an algorithmic approach for solving the optimal correlated equilibrium problem in succinctly represented games, substantially extending a previous approach due to \emcite{PR08JACM}. In particular, we showed that the optimal CE problem is tractable when the \emph{deviation-adjusted social welfare problem} can be solved in polynomial time.
We generalized the reduced forms of \emcite{PR08JACM} to show that if a representation can be characterized by ``linear reduced forms'', i.e. player-specific linear functions over partitions, then for that representation, the deviation-adjusted social welfare problem can be reduced to the optimal social welfare problem. Leveraging this result, we showed that the optimal CE problem is tractable in graphical polymatrix games on tree graphs. We also considered the problem of computing the optimal \emph{coarse correlated equilibrium}, and derived a similar sufficient condition. We used this condition to prove that the optimal CCE problem is tractable for singleton congestion games.
}
\fullver{
Our work points the way to a variety of open problems, which we briefly summarize here.
\begin{bf}Price of Anarchy.\end{bf}
Our results imply that for compactly represented games with polynomial-time algorithms for the optimal social welfare problem and the weighted deviation-adjusted social welfare problem, the Price of Anarchy (POA) for correlated equilibria (i.e., the ratio of social welfare under the best outcome and the worst correlated equilibrium) can be computed in polynomial time.
The same holds for the Price of Total Anarchy (i.e., the ratio of social welfare under the best outcome and the worst coarse correlated equilibrium).
There is an extensive literature on proving bounds on the POA for various solution concepts and for various classes of games. One line of research that is particularly relevant to our work is the ``smoothness bounds'' method pioneered by
\emcite{Roughgarden09POA}.
In particular, that work showed that if a certain smoothness relation can be shown to hold for a class of games, then it can be used to prove an upper bound on POA for these games that holds for many solution concepts including pure and mixed NE, CE and CCE.
More recently, \emcite{Nadav10pdPOA} gave a primal-dual LP formulation for proving POA bounds and showed that finding the best smoothness coefficients corresponds to
the dual of the LP for the POA for average coarse correlated equilibrium (ACCE), a weaker solution concept than CCE.
The primal-dual LP formulation of \emcite{Nadav10pdPOA} and our LPs \eqref{primal} and \eqref{dual} are equivalent up to scaling; however, whereas \emcite{Nadav10pdPOA} focused on proving POA upper bounds for classes of games, here we focus on computing the optimal CE / CCE and the POA for individual games.
One interesting direction is to use our algorithms together with a game instance generator to automatically find game instances with
large POA, thus improving the lower bounds on POA for given classes of games.
}
\fullver{
\begin{bf}Complexity separations.\end{bf}
We have shown that for singleton congestion games, the optimal social welfare problem and the optimal CCE problem are tractable while
the complexity of the optimal CE problem is unknown. An open problem is to prove a separation of the complexities of these problems for singleton congestion games or for another class.
Another related problem is the optimal PSNE problem, which can be thought of as the optimal CE problem plus integer constraints on $x$.
We do not know the exact relationship between the optimal PSNE problem and the other problems.
For example, the optimal PSNE problem is known to be tractable for singleton congestion games \cite{ieong2005fac}, while we do not know how to solve the optimal CE problem. On the other hand, for tree polymatrix games we showed that the optimal CE problem can be solved in polynomial time, while the complexity of the optimal PSNE problem is unknown.
}
\fullver{
\begin{bf}Necessary condition for tractability.\end{bf}
Another open question is the following: is tractability of the deviation-adjusted social welfare problem a \emph{necessary} condition for tractability of the optimal CE problem?
We know (e.g., from \emcite{GLS1988}) that the separation oracle problem for the dual LP \eqref{dual} is equivalent to the problem of optimizing an arbitrary linear objective on the feasible set of \eqref{dual}.
However this in itself is not enough to prove equivalence of the deviation-adjusted social welfare problem and the optimal CE problem. First of all the
separation oracle problem is more general: it allows cutting planes other than constraints corresponding to pure strategy profiles.
Furthermore, \eqref{dual} has a particular objective, but optimizing an arbitrary linear objective means allowing
the objective to depend on $y$ as well as $t$. If we take the dual of such an LP with (e.g.) objective $r^Ty+t$ for some vector $r\in \mathds{R}^N$, we get a generalized version of
the optimal CE problem, with constraints $U x\geq r$ instead of $Ux\geq 0$.
}
\fullver{
\begin{bf}Relaxations and approximations.\end{bf}
Another interesting direction worth exploring is relaxations of the incentive constraints of these problems,
either as hard bounds or as soft constraints that add penalties to the objective, as well as the problem of approximating the optimal CE.
For these problems we can define corresponding variants of the deviation-adjusted social welfare problem as sufficient conditions, but it remains to be seen whether
one can prove concrete results, e.g., for approximating optimal CE for specific representations for which the exact optimal CE problem is hard.
\begin{bf}Communication complexity of uncoupled dynamics.\end{bf}
\emcite{hart2010com} considered a setting in which each player is informed only about her own utility function, and analyzed the communication complexity of so-called \emph{uncoupled} dynamics for reaching various kinds of equilibria. They used a straightforward adaptation of \emcite{PR08JACM}'s algorithm for finding a sample CE to show that a CE can be reached using a polynomial amount of communication.
We can consider the question of reaching an optimal CE by uncoupled dynamics.
Our approach can be straightforwardly adapted to this setting, reducing the problem to finding a communication protocol
for the uncoupled version of the deviation-adjusted social welfare problem in which each player knows only her own utility function.
\begin{proposition}
If there is a polynomial communication protocol for the uncoupled
deviation-adjusted social welfare problem, then there is a polynomial communication protocol for the optimal CE problem.
\end{proposition}
At a high level, the protocol has a center running the ellipsoid method on \eqref{dual}, using the communication protocol for the uncoupled
deviation-adjusted social welfare problem as a separation oracle.
An open problem is whether there exist more ``natural'' types of dynamics that converge to optimal CE.
For example, there is an extensive literature on no-internal-regret learning dynamics that converge to the set of approximate CE in a polynomial number of steps.
Can such dynamics be modified to yield optimal CE?
}
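As a concrete illustration of such no-internal-regret dynamics, the Python sketch below runs Hart and Mas-Colell's regret-matching procedure on a small game (the game of "Chicken"; the payoffs, the parameter $\mu$, and all names are our own illustration). The empirical distribution of play approaches the set of approximate CE, but the dynamics offer no control over \emph{which} CE is reached, which is precisely the open problem above.

```python
# Sketch: Hart & Mas-Colell regret-matching on a 2x2 game ("Chicken");
# the game and parameters are our own illustration.  The empirical
# distribution of play approaches the set of approximate CE.
import random

random.seed(0)
U = {  # U[(a0, a1)] = (payoff to player 0, payoff to player 1)
    (0, 0): (6, 6), (0, 1): (2, 7),
    (1, 0): (7, 2), (1, 1): (0, 0),
}
n_act, T, mu = 2, 50_000, 20.0          # mu bounds the switching rate

def play(T):
    # R[p][k][j]: cumulative regret of player p for not having played j
    # whenever she played k
    R = [[[0.0] * n_act for _ in range(n_act)] for _ in range(2)]
    a = [random.randrange(n_act) for _ in range(2)]
    history = []
    for t in range(T):
        history.append(tuple(a))
        for p in range(2):
            k, o = a[p], a[1 - p]
            for j in range(n_act):
                dev = (j, o) if p == 0 else (o, j)
                cur = (k, o) if p == 0 else (o, k)
                R[p][k][j] += U[dev][p] - U[cur][p]
        nxt = []
        for p in range(2):   # switch with prob. prop. to positive avg regret
            k = a[p]
            probs = [max(R[p][k][j], 0.0) / (mu * (t + 1))
                     for j in range(n_act)]
            probs[k] = 1.0 - sum(probs[j] for j in range(n_act) if j != k)
            r, acc, choice = random.random(), 0.0, k
            for j in range(n_act):
                acc += probs[j]
                if r < acc:
                    choice = j
                    break
            nxt.append(choice)
        a = nxt
    return history

def internal_regret(history):
    """Largest average internal regret over players and action pairs."""
    T, worst = len(history), 0.0
    for p in range(2):
        for k in range(n_act):
            for j in range(n_act):
                r = sum(U[(j, s[1]) if p == 0 else (s[0], j)][p] - U[s][p]
                        for s in history if s[p] == k) / T
                worst = max(worst, r)
    return worst

hist = play(T)
reg = internal_regret(hist)             # shrinks as T grows
```

Here the empirical internal regret certifies closeness to the CE set, but nothing steers the dynamics toward the optimal CE.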
\hide{
\begin{bf}Representations of the set of CE.\end{bf}
\cite{PR08JACM}'s approach for reduced forms focused on an LP that expresses the incentive constraints and objectives
using marginal probabilities on the classes of the partitions in the reduced form.
This LP is a relaxation of the optimal CE problem because the marginal probabilities do not always correspond to a distribution over strategy profiles.
However, if it is possible to add a polynomial number of constraints to the LP such that the
resulting set of constraints describes exactly the set of marginal probabilities that are extendable to distributions over strategy profiles, then
the optimum of the resulting LP corresponds to the optimal CE.
Such formulations are interesting because they provide a concise description of the set of CE, and thus may provide more information on the set of CE than
just the optimal CE.
\cite{PR08JACM} provided such a formulation for anonymous games and \cite{Kakade2003CEG}
for tree graphical games. For both cases there exist algorithms to sample from the distributions described by such marginal probabilities.
Note that although such an approach is different from \cite{PR08JACM}'s main approach which reduced the problem to the separation problem,
anonymous games and tree graphical games have efficient algorithms for the separation problem as well as polynomial-sized descriptions of the set of CE.
Can such concise representations of the set of CE be constructed for game representations that are not captured by the reduced form framework?
In a recent paper \emcite{KXL11approxCE} gave such a construction for what they call graphical games with pair-wise utility functions,
which is equivalent to graphical polymatrix games. For tree graphs
the formulation is exact, while for general graphs it is a relaxation.
An open problem is whether this can be done for singleton congestion games.
It is also interesting to explore the relationship between the deviation-adjusted social welfare problem and the existence of concise representations of CE.
}
\nothesis{
\shortver{\begin{footnotesize}}
\shortver{\end{footnotesize}}
\fullver{
\appendix
\section{Proof of Theorem \ref{thm:optCE_scg}}\label{app:scg_proof}
\begin{proof}
As argued in Section \ref{sec:optCE_act_spec},
it is sufficient to construct a separation oracle for \eqref{cdual} that returns a player-symmetric cutting plane.
The cutting plane corresponding to a pure strategy profile solution $s$ of the coarse deviation-adjusted social welfare problem is not player-symmetric in general, but we can symmetrize it by taking a mixture of permutations of $s$. Since by symmetry each permuted version of $s$ corresponds to a violated constraint, the resulting cutting plane is still valid and is symmetric. Enumerating all permutations of the players would take exponential time, but it turns out that a small set of permutations suffices for our purposes.
Formally, let $\pi_i$ be the permutation of the set of players $\mathcal{N}$ that maps each $p$ to $p+ i \mod n$. Then the set of permutations
$\{\pi_i\}_{0\leq i\leq n-1}$ forms the cyclic group of order $n$.
Suppose $s$ is a solution of the coarse deviation-adjusted social welfare problem with symmetric input $y$. The corresponding cut (violated constraint) is
$(C_s)^T y + w_s\leq t$. Recall that the $(p,j)$-th entry of $C_s$ is $C_s^{p,j}=(u^p_s-u^p_{js_{-p}})$.
For a permutation $\pi$ over $\mathcal{N}$, write $s^\pi$ for the permuted profile induced by $\pi$, i.e.\ $s^\pi=(s_{\pi(1)},\ldots,s_{\pi(n)})$.
Then $s^\pi$ is also a solution of the coarse deviation-adjusted social welfare problem.
Form the following convex combination of $n$ of the constraints of \eqref{cdual}:
\[
\frac{1}{n}\sum_{i=0}^{n-1} \left[(C_{s^{\pi_i}})^T y+ w_{s^{\pi_i}}\right] \leq t.
\]
Since the game is symmetric, $w_{s^{\pi_i}}=w_s$ for every $i$; hence the left-hand side simplifies to $w_s+(\overline{C}_s)^T y$ where $\overline{C}_s=\frac{1}{n}\sum_{i=0}^{n-1} C_{s^{\pi_i}}$.
We claim that this cutting plane is player-symmetric, meaning $\overline{C}_s^{p,j}=\overline{C}_s^{p',j}$ for all pairs of players $p,p'$ and all $j\in \mathcal{A}$.
This is because
\begin{align*}
\overline{C}_s^{p,j} & = \frac{1}{n}\sum_{i=0}^{n-1}C_{s^{\pi_i}}^{p,j}
= \frac{1}{n}\sum_{i=0}^{n-1} (u^p_{s^{\pi_i}}-u^p_{js^{\pi_i}_{-p}})\\
&= \frac{1}{n}\left[\sum_{\alpha\neq j} c(\alpha)f^\alpha(c(\alpha)) - (n-c(j))f^j(c(j)+1)\right]
=\overline{C}_s^{p',j}.
\end{align*}
This concludes the proof.
\nothesis{\qed}
\end{proof}
}
\end{document} |
\begin{document}
\title{On a property of harmonic measure on simply connected domains}
\author{Christina Karafyllia}
\address{Department of Mathematics, Aristotle University of Thessaloniki, 54124, Thessaloniki, Greece}
\email{[email protected]}
\thanks{I would like to thank Professor D.\ Betsakos, my thesis advisor, for his advice during the preparation of this work and the Onassis Foundation for the scholarship I receive during my Ph.D.\ studies.\ I would also like to thank the referees for their useful remarks and their suggestions about simplifying some proofs.}
\fancyhf{}
\renewcommand{\headrulewidth}{0pt}
\fancyhead[RO,LE]{\small \thepage}
\fancyhead[CE]{\small On a property of harmonic measure on simply connected domains}
\fancyhead[CO]{\small Christina Karafyllia}
\fancyfoot[L,R,C]{}
\subjclass[2010]{Primary 30C85; Secondary 30F45, 30C35, 31A15}
\keywords{Harmonic measure, conformal mapping, hyperbolic distance.}
\begin{abstract} Let $D \subset \mathbb{C}$ be a domain with $0 \in D$.\ For $R>0$, let
${{\hat \omega }_D}\left( {R} \right)$ denote the harmonic measure of $ D \cap \left\{ {\left| z \right| = R} \right\}$ at $0$ with respect to the domain $
D \cap \left\{ {\left| z \right| < R} \right\}
$ and
${\omega _D}\left( {R} \right)$ denote the harmonic measure of $\partial D \cap \left\{ {\left| z \right| \ge R} \right\}$ at $0$ with respect to $D$.\ The behavior of the functions ${\omega _D}$ and ${{\hat \omega }_D}$ near $\infty$ determines (in some sense) how large $D$ is.\ However, it is not known whether the functions ${\omega _D}$ and ${{\hat \omega }_D}$ always have the same behavior when $R$ tends to $\infty$.\ Obviously, ${\omega _D}\left( {R} \right) \le {{\hat \omega }_D}\left( {R} \right)$ for every $R>0$.\ Thus, a natural question, first posed by Betsakos, is the following: Does there exist a positive constant $C$ such that for all simply connected domains $D$ with $0 \in D$ and all $R>0$,
\[{\omega _D}\left( {R} \right) \ge C{{\hat \omega }_D}\left( {R} \right)?\]
In general, we prove that the answer is negative by means of two different counter-examples.\ However, under additional assumptions involving the geometry of $D$, we prove that the answer is positive.\ We also find the value of the optimal constant for starlike domains.
\end{abstract}
\maketitle
\section{Introduction}\label{section1}
We will give an answer to a question of Betsakos (\cite[p.\ 788]{Bet}) about a property of harmonic measure.\ For a domain $D$, a point $z \in D$ and a Borel subset $E$ of $\overline D $, let ${\omega _D}\left( {z,E} \right)$ denote the harmonic measure at $z$ of $\overline E$ with respect to the component of $D \backslash {\overline E}$ containing $z$.\ The function ${\omega _D}\left( { \cdot ,E} \right)$ is exactly the solution of the generalized Dirichlet problem with boundary data $\varphi = {1_E}$ (see \cite[ch.\ 3]{Ahl}, \cite[ch.\ 1]{Gar} and \cite[ch.\ 4]{Ra}).\ The probabilistic interpretation of harmonic measure is that, given a domain $D$, a point $z\in D$ and a set $E \subset \partial D$, the harmonic measure ${\omega _D}\left( { z ,E} \right)$ is the probability that a Brownian motion started at $z$ will first hit the boundary of $D$ in the set $E$.
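The probabilistic interpretation can be illustrated numerically. The Python sketch below (an illustration under our own choices of domain, arc, and parameters) estimates the harmonic measure of the right half of the unit circle at a point $z_0$ by a walk-on-spheres simulation of Brownian motion, and compares it with the Poisson-kernel integral $\frac{1}{2\pi}\int_{-\pi/2}^{\pi/2}\frac{1-|z_0|^2}{|e^{it}-z_0|^2}\,dt$, which gives the exact value in the disk.

```python
# Sketch (our own illustration): harmonic measure as an exit probability
# of Brownian motion, simulated by walk-on-spheres in the unit disk.
import cmath, math, random

random.seed(1)

def exit_point(z, eps=1e-3):
    """Jump uniformly on the largest circle around z inside the unit disk
    until eps-close to the boundary, then project to the boundary circle."""
    while 1.0 - abs(z) > eps:
        z += (1.0 - abs(z)) * cmath.exp(1j * random.uniform(0.0, 2 * math.pi))
    return z / abs(z)

def mc_measure(z0, n_walks=4000):
    """Monte Carlo estimate of omega_D(z0, right half of the circle)."""
    hits = sum(1 for _ in range(n_walks) if exit_point(z0).real > 0)
    return hits / n_walks

def poisson_measure(z0, m=20000):        # midpoint rule for the exact value
    total = 0.0
    for i in range(m):
        t = -math.pi / 2 + math.pi * (i + 0.5) / m
        total += (1 - abs(z0) ** 2) / abs(cmath.exp(1j * t) - z0) ** 2
    return total * (math.pi / m) / (2 * math.pi)

z0 = 0.3
est, exact = mc_measure(z0), poisson_measure(z0)
```

As expected, the Monte Carlo estimate agrees with the Poisson integral up to sampling error of order $n_{\text{walks}}^{-1/2}$.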
Let $D \subset \mathbb{C}$ be a domain with $0 \in D$.\ For $R>0$, we set
\[{\omega _D}\left( {R} \right) = {\omega _D}\left( {0,\partial D \cap \left\{ {z:\left| z \right| \ge R} \right\}} \right)\]
and
\[{{\hat \omega }_D}\left( {R} \right) = {\omega _D}\left( {0,
D \cap
\left\{ {z:\left| z \right| = R} \right\}} \right).\]
The behavior of the functions ${\omega _D}$ and ${{\hat \omega }_D}$ near $\infty$ determines (in some sense) how large $D$ is and it has been studied from various viewpoints.\ For example, in \cite{Ts} and \cite[p.\ 111-118]{Tsu} Tsuji proved bounds for the growth of ${{\hat \omega }_D}\left( {R} \right)$ in terms of the size of the maximal arcs on $\left\{ {z:\left| z \right| = R} \right\}$.\ Tsuji's inequalities can be used to obtain estimates for the maximum modulus, means and coefficients of various classes of $p-$valent functions (see also \cite[ch.\ 8]{Hay}).\ In \cite{We} Hayman and Weitsman used ${{\hat \omega }_D}\left( {R} \right)$ to estimate the means and hence the coefficients of functions when information is known about their value distribution.\ With the aid of ${\omega _D}\left( {R} \right)$ and ${{\hat \omega }_D}\left( {R} \right)$, Sakai \cite{Sa} gave an integral representation of the least harmonic majorant of $\left| x \right| ^p$ in an open subset $D$ of $\mathbb{R}^n$ with $0 \in D$ and proved isoperimetric inequalities for it.\ Ess{\' e}n, Haliste, Lewis and Shea (\cite{Ess}, \cite{Esse}) also studied the problem of harmonic majoration in higher dimensions in terms of the geometry of $D$ by using ${\omega _D}\left( {R} \right)$ and ${{\hat \omega }_D}\left( {R} \right)$.\ In \cite[p.\ 1348]{Soly} Solynin proved an estimate of ${{ \omega }_D}\left( {R} \right)$ when $D = f\left( \mathbb{D} \right)$ and $f$ is in the class $S$ of functions which are regular and univalent in the unit disk and $f\left( 0 \right)=0$, $f'\left( 0 \right) = 1$.\ Baernstein \cite{Bae} proved an integral formula involving ${{\hat \omega }_D}\left( {R} \right)$ and Green's function.
In \cite{Es} Ess{\' e}n proved that every analytic function $f:\mathbb{D} \to D$ belongs to the Hardy space $H^p$ for some $p>0$ if and only if for some constants $q$ and $C$, we have ${{\hat \omega }_D}\left( R \right) \le C{R^{ - q}}$ for every $R \ge 1$.\ With the aid of Ess{\' e}n's result, Kim and Sugawa \cite{Kim} proved that the Hardy number, ${\rm {h}}\left( D \right)$, of a plane domain $D$ with $0 \in D$, can be determined by
\[{\rm {h}}\left( D \right) = - \mathop {\lim \sup }\limits_{R \to + \infty } \frac{{\log {{\hat \omega }_D}\left( R \right)}}{{\log R}}.\]
In \cite{Be} Betsakos studied another problem involving ${\omega _D}\left( {R} \right)$.\ Let $\mathcal{B}$ be the family of all simply connected domains $D \subset \mathbb{C}$ such that $0 \in D$ and there is no disk of radius larger than $1$ contained in $D$.\ It is obvious that if $D \in \mathcal{B}$ then ${\omega _D}\left( {R} \right)$ is a decreasing function of $R$.\ In fact, ${\omega _D}$ decays exponentially as it is proved that there exist positive constants $\beta$ and $C$ such that ${\omega _D}\left( R \right) \le C{e^{ - \beta R}}$, for every $D \in \mathcal{B}$ and every $R>0$.\ The problem studied in \cite{Be} is to find the optimal exponent $\beta$.
Poggi-Corradini (see \cite[p.\ 33-34]{Co}, \cite{Co1}, \cite{Co2}) studied ${\omega _D}\left( {R} \right)$ and ${{\hat \omega }_D}\left( {R} \right)$ in relation with conformal mappings in Hardy spaces.\ In fact, if $D$ is an unbounded simply connected domain with $0 \in D$ and $\psi$ is a conformal mapping of $\mathbb{D}$ onto $D$, then he proved that
\[\psi \in {H^p}\left( \mathbb{D} \right) \Leftrightarrow \int_0^{ + \infty } {{R^{p - 1}}{\omega _D}\left( R \right)dR} < + \infty \Leftrightarrow \int_0^{ + \infty } {{R^{p - 1}}{{\hat \omega }_D}\left( R \right)dR} < + \infty.\]
To establish the last equivalence, Poggi-Corradini first proved that there exists a constant $M_0>1$ such that for all $R>0$,
\begin{equation}\label{co}
{\omega _D}\left( R \right) \ge \frac{1}{2}{{\hat \omega }_D}\left( {{M_0}R} \right).
\end{equation}
All the results mentioned above are some of the estimates and applications of ${\omega _D}$ and ${{\hat \omega }_D}$ that have been made over time.\ However, it is still unknown whether the functions ${\omega _D}$ and ${{\hat \omega }_D}$ always have the same behavior when $R$ tends to $\infty$.\ Obviously, by the maximum principle, for every $R>0$,
\[{\omega _D}\left( {R} \right) \le {{\hat \omega }_D}\left( {R} \right)\]
but all we know about the reverse inequality is (\ref{co}).\ Thus, a natural question, first posed in \cite[p.\ 788]{Bet} by Betsakos, is the following:
\begin{question}\label{con} Does there exist a positive constant $C$ such that for all domains $D$ in a given class (e.g.\ simply connected, starlike) with $0 \in D$ and every $R>0$,
\[{\omega _D}\left( {R} \right) \ge C{{\hat \omega }_D}\left( {R} \right)?\]
\end{question}
In this paper we prove that for simply connected domains the answer is negative by means of two different counter-examples.\ However, under additional assumptions involving the geometry of the domains, we prove that the answer is positive and we also find the value of the optimal constant for starlike domains.
\begin{figure}
\caption{The simply connected domain $D$ of the first counter-example.}
\label{dom1}
\end{figure}
In Section \ref{section3}, we construct the simply connected domain $D$ of Fig.\ \ref{dom1} and prove that there exists a sequence of positive numbers ${\left\{ {{R_n}} \right\}_{n \in \mathbb{N}}}$ such that
\[\mathop {\lim }\limits_{n \to + \infty } \frac{{{{\hat \omega }_D}\left( {{R_n}} \right)}}{{{\omega _D}\left( {{R_n}} \right)}} = + \infty,\]
which implies that there does not exist a positive constant $C$ such that
${\omega _D}\left( {R} \right) \ge C{{\hat \omega }_D}\left( {R} \right)$ for every $R>0$.\ As we see in the proof, this result is due to the fact that the hyperbolic distance between the point $R_n$ and the hyperbolic geodesic, $\Gamma _n$, joining the endpoints of the arc $ D \cap \left\{ {\left| z \right| = R_n } \right\}$ in $D$ tends to infinity as $n\to +\infty$.\ In other words, there does not exist a positive constant $c$ such that $D \cap \left\{ {\left| z \right| = R_n } \right\} \subset \left\{ {z \in D:d_D \left( {z,\Gamma _n } \right) < c} \right\}$ for every $n \in \mathbb{N}$.\ Note that $d_D \left( {z,\Gamma _n } \right)$ denotes the hyperbolic distance between $z$ and $\Gamma _n$ in $D$, which we define in Section \ref{section2}.\ Now we consider the following condition on the simply connected domain $D$:
\begin{condition1*}
There exists a constant $c>0$ such that, for every $R>0$, every arc of $D \cap \left\{ {z:\left| z \right| = R} \right\}$ lies in a hyperbolic $c$-neighborhood of the hyperbolic geodesic joining its endpoints.
\end{condition1*}
\begin{figure}
\caption{The simply connected domain $D$ of the second counter-example.}
\label{do2}
\end{figure}
The arising question is whether the answer to Question \ref{con} is positive for simply connected domains that satisfy Condition (1).\ However, we prove that this condition is not enough by constructing, in Section \ref{section4}, the simply connected domain $D$ of Fig.\ \ref{do2}, which is a small variation of the domain of Fig.\ \ref{dom1}.\ In fact, there exists a sequence of positive numbers ${\left\{ {{R_n}} \right\}_{n \in \mathbb{N}}}$ such that, despite the fact that Condition (1) is satisfied, we have again
\[\mathop {\lim }\limits_{n \to + \infty } \frac{{{{\hat \omega }_D}\left( {{R_n}} \right)}}{{{\omega _D}\left( {{R_n}} \right)}} = + \infty.\]
This time, this is due to the fact that there exists a prime end $P$ of $\partial{D}$ that is inside the disk $\left\{ {z:\left| z \right| < R_n} \right\}$ but every arc in $D$ joining $0$ to $P$ intersects the circle $\left\{ {z:\left| z \right| = R_n} \right\}$.\ See, for example, the prime end $P$ in Fig.\ \ref{do2}.\ So, we consider the following condition:
\begin{condition2*}
For every $R>0$, there does not exist any prime end $P$ of $\partial{D}$ that is inside the disk $\left\{ {z:\left| z \right| < R} \right\}$ but every arc in $D$ joining $0$ to $P$ intersects the circle $\left\{ {z:\left| z \right| = R} \right\}$.
\end{condition2*}
Note that in the first counter-example (Section \ref{section3}) Condition (2) is satisfied, since it is obvious that there do not exist such prime ends.\ These two counter-examples show that Conditions (1) and (2) are necessary if we want to give a positive answer to Question \ref{con}.\ But are they enough? In Section \ref{section5}, we actually prove that if a simply connected domain satisfies Conditions (1) and (2), then there exists a positive constant $K=K\left( c \right)$ such that for every $R>0$,
\[{{\hat \omega }_D}\left( {R} \right) \le K{\omega _D}\left( {R} \right).\]
Moreover, we prove that we can find the value of this constant if we retain Condition (2) and replace Condition (1) with the following condition:
\begin{condition3*}
For every $R>0$ and for every arc of $D \cap \left\{ {z:\left| z \right| = R} \right\}$, the hyperbolic geodesic joining its endpoints lies entirely in $\overline{D} \cap \left\{ {z:\left| z \right| \le R} \right\}$.
\end{condition3*}
So, having these results in mind, in Section \ref{section5}, we prove the theorem below, which gives a positive answer to Question \ref{con}.
\begin{theorem}\label{kyrio} Let $D \subset \mathbb{C}$ be a simply connected domain with $0 \in D$.\ With the notation above, if Conditions $\rm{(1)}$ and $\rm{(2)}$ are satisfied, then there exists a positive constant $K=K\left( c \right)$ such that for every $R>0$,
\[{{\hat \omega }_D}\left( {R} \right) \le K{\omega _D}\left( {R} \right).\]
If Conditions $\rm{(2)}$ and $\rm{(3)}$ are satisfied, then for every $R>0$,
\[{{\hat \omega }_D}\left( {R} \right) \le 2{\omega _D}\left( {R} \right).\]
\end{theorem}
Finally, recall that a domain $D$ in $\mathbb{C}$ is called starlike with respect to $0$, if for every point $z \in D$, the segment of the straight line from $0$ to $z$, $\left[ {0,z} \right]$, lies entirely in $D$.\ In Section \ref{section6}, we prove that starlike domains satisfy Conditions (2) and (3) and that $2$ is the optimal constant:
\begin{theorem}\label{star} Let $D$ be a starlike domain in $\mathbb{C}$.\ Then for every $R>0$,
\[{{\hat \omega }_D}\left( {R} \right) \le 2{\omega _D}\left( {R} \right)\]
and the constant $2$ is best possible.
\end{theorem}
In Section \ref{section2}, we introduce some preliminaries such as notions and results in hyperbolic geometry and basic properties of harmonic measure.\ In Sections \ref{section3} and \ref{section4}, we present the counter-examples of Fig.\ \ref{dom1} and \ref{do2} respectively, and in Sections \ref{section5} and \ref{section6}, we prove Theorems \ref{kyrio} and \ref{star} respectively.
\section{Preliminary results}\label{section2}
\subsection{Results in hyperbolic geometry}
For the unit disk $\mathbb{D}$ the density of the hyperbolic metric is
\[{\lambda _\mathbb{D}}\left( z \right) = \frac{2}{{1 - {{\left| z \right|}^2}}}.\]
Let $\Omega$ be a hyperbolic region in the complex plane $\mathbb{C}$; that is, $\mathbb{C}\backslash \Omega $ contains at least two points.\ If $f$ is a holomorphic universal covering projection of $\mathbb{D}$ onto $\Omega$ then the density $\lambda _{\Omega}$ is determined from
\[{\lambda _{\Omega}}\left( {f\left( z \right)} \right)\left| {f'\left( z \right)} \right| = \frac{2}{{1 - {{\left| z \right|}^2}}}\]
(see \cite[p.\ 236]{Mi}).\ The determination of $\lambda _{\Omega}$ is independent of the choice of the holomorphic covering projection onto $\Omega$.\ If $\Omega$ is simply connected, then $f$ is a conformal mapping of $\mathbb{D}$ onto $\Omega$.\ We note that in this paper we work on simply connected domains.
The hyperbolic distance between two points $z,w$ in $\mathbb{D}$ is defined by
\[{d_\mathbb{D}}\left( {z,w} \right) = \log \frac{{1 + \left| {\frac{{z - w}}{{1 - z\bar w}}} \right|}}{{1 - \left| {\frac{{z - w}}{{1 - z\bar w}}} \right|}}\](see \cite[ch.\ 1]{Ahl}, \cite[p.\ 11-28]{Bea}).\ It is conformally invariant and thus it can be defined on any simply connected domain $D \ne \mathbb{C}$ as follows: If $f$ is a Riemann mapping of $\mathbb{D}$ onto $D$ and $z,w \in D$, then
${d_D}\left( {z,w} \right) = {d_\mathbb{D}}\left( {{f^{ - 1}}\left( z \right),{f^{ - 1}}\left( w \right)} \right)$.
Also, for a set $E \subset D$, we define ${d_D}\left( {z,E} \right): = \inf \left\{ {{d_D}\left( {z,w} \right):w \in E} \right\}$.
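The conformal invariance underlying this definition can be verified numerically; the short Python check below (our own illustration) confirms that the formula for $d_\mathbb{D}$ is unchanged under a disk automorphism $T_a(z)=(z-a)/(1-\bar a z)$.

```python
# Numeric sanity check (our own illustration): the hyperbolic distance on
# the unit disk is invariant under the automorphisms T_a(z)=(z-a)/(1-conj(a)z).
import math

def d_disk(z, w):
    """Hyperbolic distance in the unit disk via the pseudo-hyperbolic ratio."""
    q = abs((z - w) / (1 - z * w.conjugate()))
    return math.log((1 + q) / (1 - q))

def mobius(a, z):
    return (z - a) / (1 - a.conjugate() * z)

z, w, a = 0.3 + 0.2j, -0.5 + 0.1j, 0.4 - 0.25j
assert abs(d_disk(z, w) - d_disk(mobius(a, z), mobius(a, w))) < 1e-9
```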
The following theorem is known as Minda's reflection principle \cite[p.\ 241]{Mi}.\ First, we introduce some notation: If $\Gamma$ is a straight line (or circle), then $R$ is one of the half-planes (or the disk) determined by $\Gamma$ and ${\Omega ^*}$ is the reflection of a hyperbolic region $\Omega$ in $\Gamma$ .
\begin{theorem} \label{rp} Let $\Omega$ be a hyperbolic region in $\mathbb{C}$ and $\Gamma$ be a straight line or circle with $\Omega \cap \Gamma \ne \emptyset $.\ If $\Omega \backslash R \subset \Omega^*$, then
\[{\lambda _{{\Omega ^*}}}\left( z \right) \le {\lambda _\Omega }\left( z \right)\]
for all $z \in \Omega \backslash \overline R $.\ Equality holds if and only if $\Omega$ is symmetric about $\Gamma$.
\end{theorem}
\noindent
A generalization of Theorem \ref{rp} was proved by Solynin in \cite{Sol}.
\subsection{Quasi-hyperbolic distance}
The hyperbolic distance between $z_1,z_2 \in D$ can be estimated by the quasi-hyperbolic distance, ${\delta _D}\left( {z_1,z_2} \right)$, which is defined by
\[{\delta _D}\left( {{z_1},{z_2}} \right) = \mathop {\inf }\limits_{\gamma :{z_1} \to {z_2}} \int_\gamma {\frac{{\left| {dz} \right|}}{{d\left( {z,\partial D} \right)}}}, \]
where the infimum ranges over all paths connecting $z_1$ to $z_2$ in $D$ and $d\left( {z,\partial D} \right)$ denotes the Euclidean distance of $z$ from $\partial D$.\ It is known that $\frac{1}{2}{\delta _D} \le {d_D} \le 2{\delta _D}$ (see \cite[p.\ 33-36]{Bea}, \cite[p.\ 8]{Co}).
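Both quantities are explicit when $D=\mathbb{D}$, $z_1=0$ and $z_2=x\in(0,1)$: $d_\mathbb{D}(0,x)=\log\frac{1+x}{1-x}$, while integrating along the radius (which is in fact optimal here) gives $\delta_\mathbb{D}(0,x)=\log\frac{1}{1-x}$. The following Python lines (our own illustration) check the two-sided comparison numerically.

```python
# Numeric illustration (ours): in the unit disk, for x in (0,1),
# d(0,x) = log((1+x)/(1-x)) and delta(0,x) = log(1/(1-x)); the bounds
# (1/2) delta <= d <= 2 delta hold at every sample point.
import math

for x in [0.1, 0.5, 0.9, 0.99]:
    d = math.log((1 + x) / (1 - x))     # hyperbolic distance in the disk
    delta = math.log(1.0 / (1 - x))     # quasi-hyperbolic along the radius
    assert 0.5 * delta <= d <= 2 * delta
```

Note that $d \le 2\delta$ here reduces to $(1+x)(1-x)\le 1$, so the upper comparison is never tight in the disk.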
\subsection{Harmonic measure}
If $E \subset {{\overline {\mathbb{D}}\backslash \left\{ 0 \right\}}}$, then a special case of the Beurling-Nevanlinna projection theorem (see \cite[p.\ 43-44]{Ahl}, \cite[p.\ 105]{Gar} and \cite[p.\ 120]{Ra}) is the following:
\begin{theorem}\label{bene} Let $E \subset {{\overline {\mathbb{D}}\backslash \left\{ 0 \right\}}}$ be a closed, connected set intersecting the unit circle.\ If ${r_0} = \min \left\{ {\left| z \right|:z \in E} \right\}$ and ${E^ * } = \left\{ { - \left| z \right|:z \in E} \right\} = \left( { - 1,} \right.\left. { - {r_0}} \right]$, then
\[{\omega _{\mathbb{D}}}\left( {0,E} \right) \ge {\omega _{\mathbb{D} }}\left( {0,{E^ * }} \right) = \frac{2}{\pi }\arcsin \frac{{\left( {1 - {r_0}} \right)}}{{\left( {1 + {r_0}} \right)}}.\]
\end{theorem}
The next theorem states the strong Markov property for harmonic measure, which follows from the probabilistic interpretation of harmonic measure (see \cite[p.\ 282]{Be} and \cite[p.\ 88]{Port}).
\begin{theorem}\label{markov}
Let $D_1$ and $D_2$ be two domains in $\mathbb{C}$.\ Assume that $D_1 \subset {D_2} $ and let $F \subset \partial{D_2}$ be a closed set.\ If $\sigma=\partial{D_1}\backslash \partial {D_2}$, then for $z \in {D_1}$,
\[{\omega _{{D_2}}}\left( {z,F} \right) = {\omega _{{D_1}}}\left( {z,F} \right) + \int_\sigma {{\omega _{{D_1}}}\left( {z,ds} \right){\omega _{{D_2}}}\left( {s,F} \right)}. \]
\end{theorem}
The following result of Balogh and Bonk \cite{Bo} gives an estimate of the logarithmic capacity of a set $E\subset \partial \mathbb{D}$.\ This also yields an estimate of harmonic measure, because if $E$ is a finite union of closed arcs in $\partial \mathbb{D}$, then $\omega _\mathbb{D} \left( {0,E} \right) \le {\rm cap}E$ (see \cite[p.\ 164]{Gar}).
\begin{theorem}\label{bonk}
There exists a universal constant $K>0$ with the following property.\ Suppose $f:\mathbb{D} \to \mathbb{C}$ is a conformal mapping with $\dist\left( {f\left( 0 \right),\partial f\left( \mathbb{D} \right)} \right) = d$.\ If $E_f \left( R \right)$ is the set of all $\zeta \in \partial \mathbb{D}$ with ${\rm length}\, f\left( {\left[ {0,\zeta } \right)} \right) \ge R > 0$, then
\[{\rm cap}E_f \left( R \right) \le K\sqrt {\frac{d}{R}}. \]
\end{theorem}
The next theorem states a relation between harmonic measure and hyperbolic distance, which we prove in \cite{Kara}.
\begin{theorem}\label{geod}
Let $\Gamma $ be the hyperbolic geodesic joining two points $z_1,z_2 \in \partial{\mathbb{D}}$ in $\mathbb{D}$.\ Then
\[{e^{ - {d_{\mathbb{D}}}\left( {0,\Gamma } \right)}} \le {\omega _{\mathbb{D}}}\left( {0,\Gamma } \right) \le \frac{4}{\pi }{e^{ - {d_{\mathbb{D}}}\left( {0,\Gamma } \right)}}.\]
\end{theorem}
\section{First counter-example}\label{section3}
Hereinafter, we use the notation $D\left( {z,r} \right) := \left\{ {w \in \mathbb{C}:\left| {w - z} \right| < r} \right\}$ for $z \in \mathbb{C}$ and $r>0$.\ Let $D$ be the simply connected domain of Fig.\ \ref{dom}, namely,
\[D = \mathbb{D} \cup \left( \left\{ {z \in \mathbb{C} :\left| {\Arg{z}} \right| < 1} \right\}\backslash \bigcup\limits_{n = 1}^{ + \infty } {\left\{ {z \in \partial D\left( {0,{e^n}} \right):\frac{1}{{{40^n}}} \le \left| {\Arg{z}} \right| \le 1} \right\}}\right)\]
and consider the sequence ${\left\{ {{R_n}} \right\}_{n \in \mathbb{N}}}$ with ${R _n} = {e^{n + \frac{1}{{{{40}^n}}}}}$ for every $n \in \mathbb{N}$.
\begin{figure}
\caption{The simply connected domain $D$.}
\label{dom}
\end{figure}
\begin{theorem}\label{ant1} With the notation above, the simply connected domain $D$ has the following properties:
\begin{enumerate}[\rm(i)]
\item $D$ satisfies Condition {\rm (2)}.
\item $D$ does not satisfy Condition {\rm (1)}.
\item $\mathop {\lim }\limits_{n \to + \infty } \frac{{{{\hat \omega }_D}\left( {{R_n}} \right)}}{{{\omega _D}\left( {{R_n}} \right)}} = + \infty.$
\end{enumerate}
\end{theorem}
\proof
Property (i) is immediate by the construction of $D$.\ So, we prove properties (ii) and (iii) (for a similar calculation see \cite{Ka}).\ The Riemann mapping theorem implies that there exists a conformal mapping $\psi $ from $\mathbb{D}$ onto $D$ such that $\psi \left( 0 \right) = 0$.\ For $n \in \mathbb{N}$, we set ${F_{R_n}} = \left\{ {z \in \mathbb{D}:\left| {\psi \left( z \right)} \right| = {R_n}} \right\}$ and ${E_{R_n} } = \left\{ {\zeta \in \partial \mathbb{D}:\left| {\psi \left( \zeta \right)} \right| \ge {R_n}} \right\}$.\ Also, for $n \in \mathbb{N}$, let $\Gamma_{R_n}$ be the hyperbolic geodesic joining the endpoints of $F _{{R _n}}$ in $\mathbb{D}$.
By Theorem \ref{bene} and the definition of hyperbolic distance we can easily infer that for every $n \in \mathbb{N}$,
\begin{equation}\nonumber
{\omega _\mathbb{D}}\left( {0,{F_{R_n}}} \right) \ge \frac{2}{\pi }{e^{ - {d_\mathbb{D}}\left( {0,{F_{R_n}}} \right)}}
\end{equation}
(see \cite[p.\ 35]{Co}).\ So, by the conformal invariance of harmonic measure and hyperbolic distance, we have
\begin{equation}\label{bn}
\hat \omega _D \left( {R_n } \right) = \omega _D \left( {0,\psi \left( {F_{R_n } } \right)} \right) \ge \frac{2}{\pi }e^{ - d_D \left( {0,\psi \left( {F_{R_n } } \right)} \right)}.
\end{equation}
Now fix a number $n>2$.\ If $z \in D$ and ${g_{D}}\left( { \cdot , \cdot } \right)$ denotes the Green function for $D$ (see \cite[p.\ 41-43]{Gar}, \cite[p.\ 106-115]{Ra}), then
\[{d_{D}}\left( 0,z \right) = \log \frac{{1 + {e^{ - {g_{D}}\left( {0,z} \right)}}}}{{1 - {e^{ - {g_{D}}\left( {0,z} \right)}}}}\]
(see \cite[p.\ 12-13]{Bea} and \cite[p.\ 106]{Ra}).\ For every $w_n \in \psi \left( {{F_{{R _n}}}} \right)\backslash \left\{ {{R_n}} \right\}$ (see Fig.\ \ref{eik}), we infer, by a symmetrization result, that
\[{g_{D}}\left( {0,{R_n}} \right) \ge {g_{D}}\left( {0, w_n} \right)\]
(see Lemma 9.4 in \cite[p.\ 659]{Hay}).\ Since
\[f\left( x \right) = \log \frac{{1 + {e^{ - x}}}}{{1 - {e^{ - x}}}}\]
is a decreasing function on $\left( {0, + \infty } \right)$, we have that
\[{d_{D}}\left( {0, \psi \left( {{F_{{R _n}}}} \right)} \right)={d_{D}}\left( {0, {R_n} } \right).\] This in conjunction with (\ref{bn}) implies that
\begin{equation}\label{sx2}
\hat \omega _D \left( {R_n } \right) \ge \frac{2}{\pi }e^{ - d_D \left( {0,R_n } \right)}.
\end{equation}
\begin{figure}
\caption{The crosscuts $\psi \left( {F_{R_n}} \right)$ and the points $R_n$, $w_n$, $r_n$.}
\label{eik}
\end{figure}
\noindent
Since ${\Gamma _{{R_n}}}$ denotes the hyperbolic geodesic joining the endpoints of $F_{{R _n}}$ in $\mathbb{D}$, by Theorem \ref{geod} and \cite[p.\ 370]{Beu},
\begin{equation}\nonumber
{\omega _\mathbb{D}}\left( {0,{E_{{R _n}}}} \right) = \frac{1}{2}{\omega _\mathbb{D}}\left( {0,{\Gamma _{{R _n}}}} \right) \le \frac{2}{\pi }{e^{ - {d_\mathbb{D}}\left( {0,{\Gamma _{{R _n}}}} \right)}}
\end{equation}
and thus
\begin{equation}\label{sx3}
\omega _D \left( {R_n } \right) = \omega _D \left( {0,\psi \left( {E_{R_n } } \right)} \right) \le \frac{2}{\pi }e^{ - d_D \left( {0,\psi \left( {\Gamma _{R_n } } \right)} \right)}.
\end{equation}
Since $D$ is symmetric with respect to the real axis, we deduce that
\[{d_{D}}\left( {{0},\psi \left( {{\Gamma _{{R _n}}}} \right)} \right) = {d_{D}}\left( {0,{r_n}} \right),\]
where ${r_n} = \psi \left( {{\Gamma _{{R_n}}}} \right) \cap \mathbb{R} \in \left( {e^n,e^{n + 1}} \right)$ (see Fig.\ \ref{eik}) and hence by (\ref{sx3}) we conclude that
\begin{equation}\label{sx4}
\omega _D \left( {R_n } \right) \le \frac{2}{\pi }e^{ - d_D \left( 0, r_n \right)}.
\end{equation}
Since $0,\,R_n$ and $r_n$ lie, in this order, along a hyperbolic geodesic (for more details see \cite{Ka}), we have that
\[{d_{D}}\left( {0,{r_n}} \right) = {d_{D}}\left( {0,{R_n}} \right) + {d_{D}}\left( {{R_n},{r_n}} \right)\]
(see \cite[p.\ 14]{Bea}).\ Combining this with (\ref{sx2}) and (\ref{sx4}), we deduce that
\begin{equation}\label{sx5}
\frac{{\hat \omega _D \left( {R_n } \right)}}{{\omega _D \left( {R_n } \right)}} \ge e^{d_D \left( {0,r_n } \right) - d_D \left( {0,R_n } \right)} = e^{d_D \left( {R_n ,r_n } \right)}.
\end{equation}
Now notice that the quasi-hyperbolic distance (see Section \ref{section2}) $\delta _D \left( {R_n ,r_n } \right)$ is equal to $\delta _{D\backslash{\overline{\mathbb{D}} }} \left( {R_n ,r_n } \right)$, because in both cases the quasi-hyperbolic geodesic joining $R_n$ to $r_n$ is the segment $\left[ {R_n ,r_n} \right]$.\ So, we deduce that
\begin{equation}\label{neo}
d_D \left( {R_n ,r_n } \right) \ge \frac{1}{2}\delta _D \left( {R_n ,r_n } \right) = \frac{1}{2}\delta _{D\backslash{\overline{\mathbb{D}} }} \left( {R_n ,r_n } \right) \ge \frac{1}{4}d_{D\backslash{\overline{\mathbb{D}} }} \left( {R_n ,r_n } \right).
\end{equation}
In order to simplify our computations we use the conformal mapping $g\left( z \right) = \LOG z$ that maps $D\backslash \overline{\mathbb{D}} $ onto $g\left( {D\backslash \overline{\mathbb{D}} } \right) := D'$ (see Fig.\ \ref{re}).
\begin{figure}
\caption{The domain $D'$ and the points $\log R_n$, $\log r_n$ in case $n=3$.}
\label{re}
\end{figure}
\noindent
Thus, we get
\begin{eqnarray}\label{anisot}
d_{D\backslash{\overline{\mathbb{D}} }} \left( {R_n ,r_n } \right)&=&{d_{D'}}\left( {\log R_n ,\log r_n } \right) \ge \frac{1}{2}{\delta _{D'}}\left( \log R_n ,\log r_n \right) \nonumber \\
&=& \frac{1}{2}\int_{\log R_n}^{\log r_n} {\frac{{dx}}{{d\left( {x,\partial D'} \right)}}} \ge \frac{1}{2}\int_{\log R_n}^{\log r_n} {\frac{{dx}}{{\sqrt {{{\left( {\frac{1}{{{40^{n}}}}} \right)}^2} + {{\left( {x-n} \right)}^2}} }}} \nonumber \\
&=&\frac{1}{2}\arcsinh \left( {{40^{n}}\left( {\log r_n-n} \right)} \right)-\frac{1}{2}\arcsinh \left( {1} \right) \nonumber \\
&\ge& \frac{1}{2}\arcsinh \left( {{40^{n}} k } \right)-\frac{1}{2}\arcsinh \left( {1} \right),
\end{eqnarray}
where $k>0$ is a constant independent of $n$ (see \cite{Ka}).\ Now, taking limits in (\ref{anisot}) as $n \to + \infty $, we obtain
\begin{equation}\nonumber
\mathop {\lim }\limits_{n \to + \infty } d_{D\backslash{\overline{\mathbb{D}} }} \left( {R_n ,r_n } \right) = + \infty.
\end{equation}
Thus, by (\ref{neo}) we conclude that
\begin{equation}\label{sx55}
\mathop {\lim }\limits_{n \to + \infty } d_{D} \left( {R_n ,r_n } \right) = + \infty,
\end{equation}
which proves property (ii).\ Finally, by (\ref{sx5}) and (\ref{sx55}), we infer that
\[\mathop {\lim }\limits_{n \to + \infty } \frac{{{{\hat \omega }_D}\left( {{R_n}} \right)}}{{{\omega _D}\left( {{R_n}} \right)}} = + \infty \]
and hence property (iii) holds.\ So, there does not exist a positive constant $C$ such that for every $R>0$,
\[{\omega _D}\left( {R} \right) \ge C{{\hat \omega }_D}\left( {R} \right).\]
\qed
\section{Second counter-example}\label{section4}
Let $D$ be the simply connected domain of Fig.\ \ref{dom2}, namely,
\[D=\mathbb{D}\cup \left( \left\{ {z \in \mathbb{C}:\left| {\Arg{z}} \right| < 1} \right\}\backslash {D_0}\right),\]
where
\begin{align*}
D_0&= \bigcup_{n=1}^{+\infty} \bigg( \left\{z\in \partial D(0,e^n): \frac{1}{40^n}\leq |\Arg z|\leq 1 \right\} \\
&\quad\quad\quad\qquad \cup \left\{re^{i\theta}: e^n\leq r\leq e^{n+1/40^n},\, |\theta|=\frac{1}{40^n} \right\}
\bigg).
\end{align*}
We consider the sequence ${\left\{ {{R_n}} \right\}_{n \in \mathbb{N}}}$ with ${R _n} = {e^{n+1/40^n}}$ for every $n \in \mathbb{N}$.
\begin{figure}
\caption{The simply connected domain $D$.}
\label{dom2}
\end{figure}
\begin{theorem}\label{ant2} With the notation above, the simply connected domain $D$ has the following properties:
\begin{enumerate}[\rm(i)]
\item $D$ does not satisfy Condition {\rm (2)}.
\item $D$ satisfies Condition {\rm (1)}.
\item $\mathop {\lim }\limits_{n \to + \infty } \frac{{{{\hat \omega }_D}\left( {{R_n}} \right)}}{{{\omega _D}\left( {{R_n}} \right)}} = + \infty.$
\end{enumerate}
\end{theorem}
\proof Property (i) is immediate by the construction of $D$.\ So, we prove properties (ii) and (iii).\ First we introduce some notation.\ For $n \in \mathbb{N}$, let ${F_{R_n}}$ be the component of $D \cap \left\{ {\left| z \right| = R_n } \right\}$ that intersects the real axis and $\Gamma_{R_n}$ be the hyperbolic geodesic joining the endpoints of $F _{R _n}$ in $D$.\ Also, we set ${E_{R_n} } = \partial D \cap \left\{ {\left| z \right| \ge R_n } \right\}$ for every $n \in \mathbb{N}$.
Now we apply J\o rgensen's theorem \cite[p.\ 116]{Jo}, which states that a Euclidean disk contained in a simply connected domain is hyperbolically convex.\ Combining this with the construction of $D$, we deduce that, for every $n \in \mathbb{N}$, we can find a disk $D_n \subset D$ centered at a point of $\mathbb{R}$ (see Fig.\ \ref{jj}) that satisfies the following properties:
\begin{enumerate}
\item $D_n$ contains the arc ${F_{R_n}}$ and the geodesic $\Gamma_{R_n}$.
\item The endpoints of ${F_{R_n}}$ lie on $\partial D_n$.
\item The Euclidean distance of each point of $\partial D_n$ from $\partial D$ is attained on the set $\left\{re^{i\theta}: e^n\leq r\leq e^{n+1/40^n},\, |\theta|=\frac{1}{40^n} \right\}$.
\item If $\theta_n$ is the acute angle between $\left\{re^{i\theta}: e^n\leq r\leq e^{n+1/40^n},\, \theta=\frac{1}{40^n} \right\}$ and the tangent of $\partial D_n$ at the point $z_n=e^{n+1/40^n}e^{1/40^ni}$, then $\theta _n \ge k$ for some constant $k>0$ independent of $n$ (see Fig.\ \ref{jj}).
\end{enumerate}
So, if $s\in D_n \cap \left\{ {z:{\mathop{\rm Im}\nolimits} z \ge 0} \right\}$ then we can easily infer that
\begin{equation}\label{2sx}
\dist\left( {s,\partial D} \right) = \left| {s - z_n } \right|\quad{\rm or}\quad\dist\left( {s,\partial D} \right)\ge \left( {\sin k} \right)\left| {s - z_n } \right|.
\end{equation}
\begin{figure}
\caption{The disk $D_n$ and the angle $\theta _n$.}
\label{jj}
\end{figure}
Since $D$ and $D_n$ are symmetric with respect to $\mathbb{R}$, (\ref{2sx}) also holds for every $s\in D_n\cap \left\{ {z:{\mathop{\rm Im}\nolimits} z < 0} \right\}$ by replacing $z_n$ with ${\bar z_n }$.\
So, (\ref{2sx}) in combination with the fact that $F_{R_n},\Gamma_{R_n}$ lie in $D_n$ and join $z_n$ to ${\bar z_n }$ implies that, for every $n \in \mathbb{N}$, the quasi-hyperbolic distance between any point of $F_{R_n}$ and $\Gamma_{R_n}$ is bounded from above by an absolute positive constant.\ Thus, the hyperbolic distance between any point of $F_{R_n}$ and $\Gamma_{R_n}$ is bounded from above by an absolute positive constant.\ This proves property (ii).
Now set $L_n = \dist \left( {R_n ,\left\{ {z \in \mathbb{C}:\left| {\Arg z} \right| = 1} \right\}} \right)$ and $d_n = \dist\left( {R_n ,\partial D} \right)$.\ By the construction of $D$, there exists a number $n_0 \in \mathbb{N}$ such that for every $n>n_0$ and every $s \in F_{R_n}$ (see Fig.\ \ref{antip22}),
\[D\left( {s,\frac{{L_n }}{2}} \right) \subset \left\{ {z \in \mathbb{C}:\left| {\Arg z} \right| < 1} \right\}\,\,\,{\rm and}\,\,\,D\left( {s,\frac{{L_n }}{2}} \right) \cap E_{R_n } = \emptyset.\]
Fix a number $n>n_0$ and a point $s \in F_{R_n}$.\ The Riemann mapping theorem implies that there exists a conformal mapping $f$ from $\mathbb{D}$ onto $D$ such that $f\left( 0 \right) = s$.\ Therefore, applying Theorem \ref{bonk} with its notation, we have that
\begin{eqnarray}\label{e2}
\omega _D \left( {s,E_{R_n } } \right) &=& \omega _\mathbb{D} \left( {0,f^{ - 1} \left( {E_{R_n } } \right)} \right) \le {\rm cap}f^{ - 1} \left( {E_{R_n } } \right)\le {\rm cap}E_f \left( {\frac{{L_n }}{2}} \right) \nonumber \\
&\le& K\sqrt {\frac{{2d_s }}{{L_n }}} \le K\sqrt {\frac{{2d_n }}{{L_n }}},
\end{eqnarray}
where $d_s = \dist\left( {s,\partial D} \right)$.\ So, by Theorem \ref{markov} and relation (\ref{e2}), we infer that for every $n>n_0$,
\[\frac{{\hat \omega _D \left( {R_n } \right)}}{{\omega _D \left( {R_n } \right)}} = \frac{{\omega _D \left( {0,F_{R_n } } \right)}}{{\omega _D \left( {0,E_{R_n } } \right)}} = \frac{{\omega _D \left( {0,F_{R_n } } \right)}}{{\int_{F_{R_n } } {\omega _D \left( {0,ds} \right)\omega _D \left( {s,E_{R_n } } \right)} }} \ge \frac{1}{K}\sqrt {\frac{{L_n }}{{2d_n }}}. \]
\begin{figure}
\caption{The disks $D\left( {s,\frac{{L_n }}{2}} \right)$ for $s \in F_{R_n}$.}
\label{antip22}
\end{figure}
\noindent
Taking limits as $n\to +\infty$, we deduce that
\[\mathop {\lim }\limits_{n \to + \infty } \frac{1}{K}\sqrt {\frac{{L_n }}{{2d_n }}} = + \infty\]
and hence
\[\mathop {\lim }\limits_{n \to + \infty } \frac{{\hat \omega _D \left( {R_n } \right)}}{{\omega _D \left( {R_n } \right)}} = + \infty.\]
This proves property (iii).
\qed
\section{Proof of Theorem \ref{kyrio}}\label{section5}
\proof[Proof of Theorem \ref{kyrio}]
Since $D$ is a simply connected domain, the Riemann mapping theorem implies that there exists a conformal mapping $\psi$ from $\mathbb{D}$ onto $D$ with $\psi \left( 0 \right) = 0$.\ Now we introduce some notation.\ For $R >0$, we set ${F_R } = \left\{ {z \in \mathbb{D}:\left| {\psi \left( z \right)} \right| = R } \right\}$, that is, $\psi \left( {{F_R}} \right) = D \cap \left\{ {z:\left| z \right| = R} \right\}$.\ Note that since $\psi \left( {{F_R}} \right)$ is a countable union of open arcs in $D$ that are the intersection of $D$ with the circle $\left\{ {z:\left| z \right| = R} \right\}$, the preimage of every such arc is also an arc in $\mathbb{D}$ with two distinct endpoints on $\partial \mathbb{D}$ (see Proposition 2.14 \cite[p.\ 29]{Pom}).\ Also, let $N\left( R \right) \in \mathbb{N} \cup \left\{ { + \infty } \right\}$ denote the number of components of $F_R$ and
\[I_R = \begin{cases}
\left\{ {1,2, \ldots ,N\left( R \right)} \right\}, & {\rm if}\ N\left( R \right) < + \infty, \\
\mathbb{N}, & {\rm if}\ N\left( R \right) = + \infty.
\end{cases}\]
If ${\left\{ {F_R^i } \right\}_{i \in I_R } }$ are the components of $F_R$, then, for every $i \in I_R$, we let ${\Gamma}_R^i$ be the hyperbolic geodesic joining the endpoints of $F_R^i $ in $\mathbb{D}$ and $C_R^i$ be the arc of $\partial \mathbb{D}$ joining the endpoints of ${\Gamma}_R^i$ and lying on the boundary of the component of $\mathbb{D}\backslash {{\Gamma}_R^i}$ which does not contain the origin (see Fig.\ \ref{int1}).
Suppose that Conditions (2) and (3) are satisfied.\ For every $R>0$ and $i \in I_R$ and for each $z\in {{\Gamma}_R^i}$, we have that
\begin{equation}\nonumber
{\omega _\mathbb{D}}\left( {z,C_R^i} \right)=\frac{1}{2}
\end{equation}
(see \cite[p.\ 370]{Beu}).\ Condition (3) implies that each crosscut $F_R^i $ is contained in the component of $\mathbb{D}\backslash {{\Gamma}_R^i}$ bounded by ${{\Gamma}_R^i}$ and $C_R^i$ (see Fig.\ \ref{int1}).\ Thus, by the maximum principle, we deduce that for every $z\in {F_R^i}$,
\begin{equation}\label{a1}
{\omega _\mathbb{D}}\left( {z,C_R^i} \right)\ge\frac{1}{2}.
\end{equation}
\begin{figure}
\caption{The crosscut $F_R^i$, the geodesic ${\Gamma}_R^i$ and the arc $C_R^i$.}
\label{int1}
\end{figure}
\noindent
Applying Theorem \ref{markov} and relation (\ref{a1}), we infer that
\begin{eqnarray}\label{a22}
\omega _\mathbb{D} \left( {0,C_R^i } \right) &=& \int\limits_{F_R^i } {\omega _\mathbb{D} \left( {0,dz} \right)\omega _\mathbb{D} \left( {z,C_R^i } \right)} \ge \frac{1}{2}\int\limits_{F_R^i } {\omega _\mathbb{D} \left( {0,dz} \right)} \nonumber \\
&=& \frac{1}{2}\omega _\mathbb{D} \left( {0,F_R^i } \right)
\end{eqnarray}
for every $R>0$ and every $i \in I_R$.\ Condition (2) and the conformal invariance of harmonic measure imply that
\[
\omega _D \left( R \right) \ge \omega _\mathbb{D} \left( {0,\bigcup\limits_{i \in I_R } {C_R^i } } \right)=
\sum\limits_{i \in I_R } {\omega _\mathbb{D} \left( {0,C_R^i } \right)}.\]
Combining this with (\ref{a22}) we get
\begin{eqnarray}
\omega _D \left( R \right) &\ge& \sum\limits_{i \in I_R } {\omega _\mathbb{D} \left( {0,C_R^i } \right)} \ge \frac{1}{2}\sum\limits_{i \in I_R } {\omega _\mathbb{D} \left( {0,F_R^i } \right)}= \frac{1}{2}\omega _\mathbb{D} \left( {0, \bigcup\limits_{i \in I_R } {F_R^i }} \right) \nonumber \\
&=& \frac{1}{2}\omega _\mathbb{D} \left( {0, {F_R }} \right) = \frac{1}{2}\omega _D \left( {0,\psi \left( {F_R } \right)} \right) = \frac{1}{2}\hat \omega _D \left( R \right) \nonumber
\end{eqnarray}
and thus we have the desired result
\[{{\hat \omega }_D}\left( {R} \right) \le 2{\omega _D}\left( {R} \right)\]
for every $R>0$.
Now suppose that Conditions (1) and (2) are satisfied.\ By Condition (1) we infer that, for every $R>0$ and every $i \in I_R$, there exists a hyperbolic $k_c$-neighborhood, $U_R^i$, of ${{\Gamma}_R^i}$ such that $\partial{U_R^i}$ consists of two circular arcs in $\mathbb{D}$ and ${F_R^i }$ is contained in $U_R^i$ (see Fig.\ \ref{int2}).\ Note that $k_c$ is a positive constant that depends only on $c$.\ Let ${I_R^i }$ denote the circular arc of $\partial{U_R^i}$ such that ${F_R^i }$ is contained in the component of $\mathbb{D}\backslash {I_R^i}$ bounded by ${I_R^i}$ and $C_R^i$ (see Fig.\ \ref{int2}).\ For every $R>0$ and $i \in I_R$ and for each $z\in {I_R^i}$, we have that
\begin{equation}\nonumber
{\omega _\mathbb{D}}\left( {z,C_R^i} \right)=k',
\end{equation}
where $k'$ lies in the open interval $\left(0,1 \right)$ and depends only on $k_c$ and hence only on $c$.\ Now we repeat the argument above letting ${I_R^i}$ play the role of ${{\Gamma}_R^i}$.\ Therefore, for every $z\in {F_R^i}$,
\begin{equation}\label{a11}
{\omega _\mathbb{D}}\left( {z,C_R^i} \right)\ge k'.
\end{equation}
By Theorem \ref{markov} and relation (\ref{a11}), we infer that
\begin{equation}\nonumber
\omega _\mathbb{D} \left( {0,C_R^i } \right) = \int\limits_{F_R^i } {\omega _\mathbb{D} \left( {0,dz} \right)\omega _\mathbb{D} \left( {z,C_R^i } \right)} \ge k'\omega _\mathbb{D} \left( {0,F_R^i } \right)
\end{equation}
for every $R>0$ and every $i \in I_R$.\ This in conjunction with Condition (2) implies that
\begin{eqnarray}
\omega _D \left( R \right) &\ge& \omega _\mathbb{D} \left( {0,\bigcup\limits_{i \in I_R } {C_R^i } } \right)=
\sum\limits_{i \in I_R } {\omega _\mathbb{D} \left( {0,C_R^i } \right)}\ge k' \sum\limits_{i \in I_R } {\omega _\mathbb{D} \left( {0,F_R^i } \right)} \nonumber \\
&=& k'\omega _\mathbb{D} \left( {0, {F_R }} \right)= k'\hat \omega _D \left( R \right).\nonumber
\end{eqnarray}
So, we conclude that for every $R>0$,
\[{{\hat \omega }_D}\left( {R} \right) \le K{\omega _D}\left( {R} \right),\]
where $K = \frac{1}{{k'}}$ is a positive constant that depends only on $c$.
\qed
\section{Proof of Theorem \ref{star}}\label{section6}
In the proof of Theorem \ref{star} we will use the following result, which follows by an easy computation from the conformal invariance of harmonic measure.
\begin{lemma}\label{le2} Let $a\in \left( {0,1} \right)$ and $b\in \left[ {0,1} \right)$.\ Then
\[{\omega _{\mathbb{D} \backslash \left[ {a,1} \right)}}\left( { - b,\partial {\mathbb{D}}} \right) = 1 - \frac{2}{\pi }\arctan \frac{1}{{\sqrt {{{\left( {\frac{{\left( {1 + a} \right)\left( {1 + b} \right)}}{{\left( {1 - a} \right)\left( {1 - b} \right)}}} \right)}^2} - 1} }}.\]
\end{lemma}
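In particular, taking $b = 0$ we obtain
\[{\omega _{\mathbb{D} \backslash \left[ {a,1} \right)}}\left( {0,\partial {\mathbb{D}}} \right) = 1 - \frac{2}{\pi }\arctan \frac{1}{{\sqrt {{{\left( {\frac{{1 + a}}{{1 - a}}} \right)}^2} - 1} }},\]
which is the form of the lemma used with $a = \frac{1}{{4R}}$ in the proof below.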
\proof[Proof of Theorem \ref{star}] Let $D$ be a starlike domain in $\mathbb{C}$.\ Using the notation of the proof of Theorem \ref{kyrio}, we will prove that Conditions (2) and (3) are satisfied.\ Since $D$ is starlike, Condition (2) is obviously satisfied and thus we prove Condition (3).\ Let ${F_R^i}$ be a component of $F_R$ for some $i \in I_R$.\ Suppose that $\psi \left( {\Gamma _R^i} \right) \not\subset \overline D \cap \left\{ {z:\left| z \right| \le R} \right\}$.\ Then $\psi \left( {\Gamma _R^i} \right)$ contains a curve ${\gamma _R^i}$ lying in $D\backslash D\left( {0,R} \right)$ with endpoints $z_1,z_2 \in \partial {D \left( {0,R } \right)}$ (see Fig.\ \ref{eiko}).\ Since $\psi \left( {\Gamma _R^i} \right)$ is the hyperbolic geodesic joining the endpoints of $\psi \left( {F_R^i} \right)$ in $D$, ${\gamma _R^i}$ is the hyperbolic geodesic joining $z_1$ to ${z_2}$ in $D$.\ Notice that $D$ is a hyperbolic region in $\mathbb{C}$ such that $D \cap \partial D\left( {0,R} \right) \ne \emptyset $.\ Since $D$ is starlike, we have that $D \backslash D\left( {0,R } \right) \subset D ^ *$, where $D^ *$ is the reflection of $D$ in the circle $\partial D\left( {0,R} \right)$.\ So, applying Theorem \ref{rp}, we get
\[{\lambda _{{D ^ * }}}\left( z \right) < {\lambda _D }\left( z \right),\;z \in {\gamma _R^i}\]
and thus
\[\int_{{{\gamma _R^i}}^ * } {{\lambda _D }\left( {{z^ * }} \right)\left| {d{z^ * }} \right|} < \int_{{\gamma _R^i}} {{\lambda _D }\left( z \right)\left| {dz} \right|}, \]
where ${{\gamma _R^i}}^ * $ is the reflection of ${\gamma _R^i}$ in $\partial D\left( {0,R} \right)$.\ But this leads to a contradiction because ${\gamma _R^i}$ is the hyperbolic geodesic joining $z_1$ to ${z_2}$ in $D$.\ So, $\psi \left( {\Gamma _R^i} \right) \subset \overline D \cap \left\{ {z:\left| z \right| \le R} \right\}$ and thus Condition (3) is satisfied.\ Theorem \ref{kyrio} implies that for every $R>0$,
\[{{\hat \omega }_D}\left( {R} \right) \le 2{\omega _D}\left( {R} \right).\]
\begin{figure}
\caption{The curve ${\gamma _R^i}$ and its reflection ${{\gamma _R^i}}^ * $ in $\partial D\left( {0,R} \right)$.}
\label{eiko}
\end{figure}
Now we prove that the constant $2$ is best possible.\ Consider the Koebe function $K \left( z \right) = \frac{z}{{{{\left( {1 - z} \right)}^2}}}$ which maps $\mathbb{D}$ conformally onto $D_0:=\mathbb{C}\backslash \left( { - \infty , - \frac{1}{4}} \right]$.\ For $R>{\frac{1}{4}}$, by the conformal invariance of harmonic measure and Lemma \ref{le2}, we have
\begin{eqnarray}\label{ko1}
{{\hat \omega }_{D_0}}\left( {R} \right)&=& {\omega _{D\left( {0,R} \right)\backslash \left( { - R, - \frac{1}{4}} \right]}}\left( {0,\partial D\left( {0,R} \right)} \right)={\omega _{\mathbb{D}\backslash \left( { - 1, - \frac{1}{{4R}}} \right]}}\left( {0,\partial \mathbb{D}} \right) \nonumber \\
&=&{\omega _{\mathbb{D}\backslash \left[ {\frac{1}{{4R}},1} \right)}}\left( {0,\partial \mathbb{D}} \right)= 1 - \frac{2}{\pi }\arctan \frac{1}{{\sqrt {{{\left( {\frac{{4R + 1}}{{4R - 1}}} \right)}^2} - 1} }} \nonumber \\
&=& 1 - \frac{2}{\pi }\arctan \frac{{4R - 1}}{{4\sqrt R }}.
\end{eqnarray}
Using the fact that
\[{K^{ - 1}}\left( { - R} \right) = \left( {\frac{{2R - 1}}{{2R}}} \right) \pm i\frac{{\sqrt {4R - 1} }}{{2R}}\]
and the conformal invariance of harmonic measure, we deduce that
\begin{eqnarray}\label{ko2}
{{\omega }_{D_0}}\left( {R} \right)&=& {\omega _{D_0}}\left( {0,\left( { - \infty , - R} \right]} \right) \nonumber \\
&=&{\omega _\mathbb{D}}\left( {0,\arc\left( {\left( {\frac{{2R - 1}}{{2R}}} \right) - i\frac{{\sqrt {4R - 1} }}{{2R}},\left( {\frac{{2R - 1}}{{2R}}} \right) + i\frac{{\sqrt {4R - 1} }}{{2R}}} \right)} \right) \nonumber \\
&=&2{\omega _\mathbb{D}}\left( {0,\arc\left( {1,\left( {\frac{{2R - 1}}{{2R}}} \right) + i\frac{{\sqrt {4R - 1} }}{{2R}}} \right)} \right) \nonumber \\
&=&\frac{1}{\pi }\arctan \frac{{\sqrt {4R - 1} }}{{2R - 1}},
\end{eqnarray}
where $\arc\left( {\left( {\frac{{2R - 1}}{{2R}}} \right) - i\frac{{\sqrt {4R - 1} }}{{2R}},\left( {\frac{{2R - 1}}{{2R}}} \right) + i\frac{{\sqrt {4R - 1} }}{{2R}}} \right)$ denotes the arc of $\partial \mathbb{D}$ joining $\left( {\frac{{2R - 1}}{{2R}}} \right) - i\frac{{\sqrt {4R - 1} }}{{2R}}$ to $\left( {\frac{{2R - 1}}{{2R}}} \right) + i\frac{{\sqrt {4R - 1} }}{{2R}}$ counterclockwise.\ Applying (\ref{ko1}) and (\ref{ko2}), we infer that
\[\mathop {\lim }\limits_{R \to + \infty } \frac{{{{\hat \omega }_{D_0}}\left( {R} \right)}}{{{\omega _{D_0}}\left( {R} \right)}} = \mathop {\lim }\limits_{R \to + \infty } \frac{{\pi - 2\arctan \frac{{4R - 1}}{{4\sqrt R }}}}{{\arctan \frac{{\sqrt {4R - 1} }}{{2R - 1}}}} = 2.\]
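Indeed, since $\arctan x = \frac{\pi }{2} - \frac{1}{x} + O\left( {{x^{ - 3}}} \right)$ as $x \to + \infty$ and $\arctan x = x + O\left( {{x^3}} \right)$ as $x \to 0$, we have, as $R \to + \infty$,
\[\pi - 2\arctan \frac{{4R - 1}}{{4\sqrt R }} \sim \frac{{8\sqrt R }}{{4R - 1}} \sim \frac{2}{{\sqrt R }}\quad{\rm and}\quad\arctan \frac{{\sqrt {4R - 1} }}{{2R - 1}} \sim \frac{{\sqrt {4R - 1} }}{{2R - 1}} \sim \frac{1}{{\sqrt R }},\]
so the ratio tends to $2$.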
Suppose that there exists a positive constant $C<2$ such that for every starlike domain $D$ and every $R>0$, ${{\hat \omega }_D}\left( {R} \right) \le C{\omega _D}\left( {R} \right)$.\ Applied to $D_0$, this implies that
\[2 = \mathop {\lim }\limits_{R \to + \infty } \frac{{{{\hat \omega }_{{D_0}}}\left( {R} \right)}}{{{\omega _{{D_0}}}\left( {R} \right)}} \le C,\]
which leads to a contradiction.\ Therefore, the constant $2$ is best possible.
\qed
\\
Note that we could also prove Theorem \ref{star} by using the strong Markov property for harmonic measure (see Section \ref{section2}) instead of Minda's reflection principle and Theorem \ref{kyrio}.
\proof[Another proof of Theorem \ref{star}] Let $D$ be a starlike domain in $\mathbb{C}$.\ Set ${F_R} = D \cap \partial D\left( {0,R} \right)$, ${E_R} = \partial D\backslash D\left( {0,R} \right)$, ${L_R} = \partial D \cap D\left( {0,R} \right)$ and ${D_0} = D \cap D\left( {0,R} \right)$ as illustrated in Fig.\ \ref{mark}.\ So, we have the relations
\begin{equation}\label{ss1}
{{\hat \omega }_D}\left( {R} \right) = {\omega _{{D_0}}}\left( {0,{F_R}} \right) = 1 - {\omega _{{D_0}}}\left( {0,{L_R}} \right)
\end{equation}
and
\begin{equation}\label{ss2}
{\omega _D}\left( {R} \right) = {\omega _D}\left( {0,{E_R}} \right) = 1 - {\omega _D}\left( {0,{L_R}} \right).
\end{equation}
\begin{figure}
\caption{The starlike domain $D$.}
\label{mark}
\end{figure}
\noindent
By Theorem \ref{markov},
\[{\omega _D}\left( {0,{L_R}} \right) = {\omega _{{D_0}}}\left( {0,{L_R}} \right) + \int_{{F_R}}{{\omega _{{D_0}}}\left( {0,ds} \right){\omega _D}\left( {s,{L_R}} \right)}\]
which in conjunction with (\ref{ss1}) and (\ref{ss2}) implies that
\begin{equation}\label{ss3}
{{\hat \omega }_D}\left( {R} \right) = {\omega _D}\left( {R} \right) + \int_{{F_R}}{{\omega _{{D_0}}}\left( {0,ds} \right){\omega _D}\left( {s,{L_R}} \right)}.
\end{equation}
Let $N\left( R \right) \in \mathbb{N} \cup \left\{ { + \infty } \right\}$ denote the number of components of ${F_R}$ and
\[I_R = \begin{cases}
\left\{ {1,2, \ldots ,N\left( R \right)} \right\}, & {\rm if}\ N\left( R \right) < + \infty, \\
\mathbb{N}, & {\rm if}\ N\left( R \right) = + \infty.
\end{cases}\]
If ${\left\{ {F_R^i } \right\}_{i \in I_R } }$ are the components of $F_R$, then
\begin{equation}\label{ss4}
\int_{{F_R}} {{\omega _{{D_0}}}\left( {0,ds} \right){\omega _D}\left( {s,{L_R}} \right)} = \sum\limits_{i \in I_R} {\int_{F_R^i} {{\omega _{{D_0}}}\left( {0,ds} \right){\omega _D}\left( {s,{L_R}} \right)} },
\end{equation}
since the sets ${F_R^i}$ are mutually disjoint.
Now let ${F_R^i}$ be a component of $F_R$ for some $i \in I_R$.\ If $z_1,z_2$ denote the endpoints of ${F_R^i}$ such that $\Arg{z_1} < \Arg{z_2}$, then we set
\[L_R^ * = \left\{ {r{e^{i\Arg{z_1}}}:0 \le r \le R} \right\} \cup \left\{ {r{e^{i\Arg{z_2}}}:0 \le r \le R} \right\}\]
and
\[{D^ * } = \left\{ {z \in \mathbb{C}:\Arg{z_1} < \Arg z < \Arg{z_2}} \right\}\]
as illustrated in Fig.\ \ref{mark2}.\ For every $s \in F_R^i$,
\begin{equation}\label{ss5}
{\omega _D}\left( {s,{L_R}} \right) \le {\omega _{{D^ * }}}\left( {s,L_R^ * } \right).
\end{equation}
\begin{figure}
\caption{The domain $D^ * $ and the set $L_R^ * $.}
\label{mark2}
\end{figure}
\noindent
If $\theta = \Arg{z_2} - \Arg{z_1}$, we consider the conformal mappings
\[{f_1}\left( z \right) = z{e^{ - i\left( {\Arg{z_2} - \theta /2} \right)}},\quad {f_2}\left( z \right) = {z^{\pi /\theta }},\quad {f_3}\left( z \right) = \frac{{z - 1}}{{z + 1}}.\] Then the composition $f = {f_3} \circ {f_2} \circ {f_1}$ maps $D^ * $ conformally onto $\mathbb{D}$.\ Since $f\left( {F_R^i} \right)$ is the hyperbolic geodesic joining $f\left( {{z_1}} \right)$ to $f\left( {{z_2}} \right)$ in $\mathbb{D}$, for every $s \in F_R^i$,
\[{\omega _{{D^ * }}}\left( {s,L_R^ * } \right) = {\omega _\mathbb{D}}\left( {f\left( s \right),f\left( {L_R^ * } \right)} \right) = \frac{1}{2}.\]
This in combination with (\ref{ss5}) implies that for every $s \in F_R^i$,
\[{\omega _D}\left( {s,{L_R}} \right) \le \frac{1}{2}.\]
By this and relations (\ref{ss3}) and (\ref{ss4}) we infer that
\begin{eqnarray}
{{\hat \omega }_D}\left( {R} \right) &=& {\omega _D}\left( {R} \right) + \sum\limits_{i \in I_R} {\int_{F_R^i} {{\omega _{{D_0}}}\left( {0,ds} \right){\omega _D}\left( {s,{L_R}} \right)} } \nonumber \\
&\le& {\omega _D}\left( {R} \right) + \frac{1}{2}\sum\limits_{i \in I_R} {\int_{F_R^i} {{\omega _{{D_0}}}\left( {0,ds} \right)} } \nonumber \\
&=& {\omega _D}\left( {R} \right)+\frac{1}{2}\sum\limits_{i\in I_R} {\omega _{{D_0}}}\left( {0,F_R^i} \right)= {\omega _D}\left( {R} \right)+\frac{1}{2}{\omega _{{D_0}}}\left( {0,{F_R}} \right) \nonumber \\
&=&{\omega _D}\left( {R} \right)+\frac{1}{2}{{\hat \omega }_D}\left( {R} \right), \nonumber
\end{eqnarray}
and thus for every $R>0$,
\[{{\hat \omega }_D}\left( R \right) \le 2{\omega _D}\left( R \right).\]
The fact that the constant $2$ is best possible is proved as before.
\qed
\begin{bibdiv}
\begin{biblist}
\bib{Ahl}{book}{
title={Conformal Invariants: Topics in Geometric Function Theory},
author={L.V. Ahlfors},
date={1973},
publisher={McGraw-Hill},
address={New York}
}
\bib{Bae}{article}{
title={The size of the set where a univalent function is large},
author={A. Baernstein},
journal={J. Anal. Math.},
volume={70},
date={1996},
pages={157--173}
}
\bib{Bo}{article}{
title={Lengths of radii under conformal maps of the unit disk},
author={Z. Balogh and M. Bonk},
journal={Proc. Amer. Math. Soc.},
volume={127},
date={1999},
pages={801--804}
}
\bib{Bea}{article}{
title={The hyperbolic metric and geometric function theory},
author={A.F. Beardon and D. Minda},
journal={Quasiconformal mappings and their applications},
date={2007},
pages={9--56}
}
\bib{Be}{article}{
title={Harmonic measure on simply connected domains of fixed inradius},
author={D. Betsakos},
journal={Ark. Mat.},
volume={36},
date={1998},
pages={275--306}
}
\bib{Bet}{article}{
title={Geometric theorems and problems for harmonic measure},
author={D. Betsakos},
journal={Rocky Mountain J. of Math.},
volume={31},
date={2001},
pages={773--795}
}
\bib{Beu}{book}{
title={The Collected Works of Arne Beurling},
subtitle={Vol. 1, Complex Analysis},
author={A. Beurling},
date={1989},
publisher={Birkh\"{a}user},
address={Boston}
}
\bib{Es}{article}{
title={On analytic functions which are in $H^p$ for some positive $p$},
author={M. Ess{\' e}n},
journal={Ark. Mat.},
volume={19},
date={1981},
pages={43--51}
}
\bib{Ess}{article}{
title={Harmonic majorization and classical analysis},
author={M. Ess{\' e}n, K. Haliste, J.L. Lewis and D.F. Shea},
journal={J. London Math. Soc.},
volume={32},
date={1985},
pages={506--520}
}
\bib{Esse}{article}{
title={Harmonic majorization and thinness },
author={M. Ess{\' e}n},
journal={Proc. of the 14th Winter School on Abstract Analysis},
volume={},
date={1987},
pages={295--304}
}
\bib{Gar}{book}{
title={Harmonic Measure},
author={J.B. Garnett and D.E. Marshall},
date={2005},
publisher={Cambridge University Press},
address={Cambridge}
}
\bib{Hay}{book}{
title={Subharmonic Functions},
subtitle={Volume 2},
author={W.K. Hayman},
date={1989},
publisher={Academic Press},
address={London}
}
\bib{We}{article}{
title={On the coefficients and means of functions omitting values},
author={W.K. Hayman and A. Weitsman},
journal={Math. Proc. Cambridge Philos. Soc.},
volume={77},
date={1975},
pages={119--137}
}
\bib{Jo}{article}{
title={On an inequality for the hyperbolic measure and its applications in the theory of functions},
author={V. J\o rgensen},
journal={Math. Scand.},
volume={4},
date={1956},
pages={113--124}
}
\bib{Ka}{article}{
title={On a relation between harmonic measure and hyperbolic distance on planar domains},
author={C. Karafyllia},
journal={Indiana Univ. Math. J. (to appear)}
}
\bib{Kara}{article}{
title={On the Hardy number of a domain in terms of harmonic measure and hyperbolic distance},
author={C. Karafyllia},
journal={(submitted). ArXiv:1908.11845}
}
\bib{Kim}{article}{
title={Hardy spaces and unbounded quasidisks},
author={Y.C. Kim and T. Sugawa},
journal={Ann. Acad. Sci. Fenn.},
volume={36},
date={2011},
pages={291--300}
}
\bib{Mi}{article}{
title={Inequalities for the hyperbolic metric and applications to geometric function theory},
author={D. Minda},
journal={Lecture Notes in Math.},
volume={1275},
date={1987},
pages={235--252}
}
\bib{Co}{article}{
title={Geometric models, iteration and composition operators},
author={P. Poggi-Corradini},
journal={Ph.D. Thesis, University of Washington},
date={1996}
}
\bib{Co1}{article}{
title={The Hardy class of geometric models and the essential spectral radius of composition operators},
author={P. Poggi-Corradini},
journal={Journal of Functional Analysis},
volume={143},
date={1997},
pages={129--156}
}
\bib{Co2}{article}{
title={The Hardy class of K{\oe}nigs maps},
author={P. Poggi-Corradini},
journal={Michigan Math. J.},
volume={44},
date={1997},
pages={495--507}
}
\bib{Pom}{book}{
title={Boundary Behaviour of Conformal Maps},
author={C. Pommerenke},
date={1992},
publisher={Springer-Verlag},
address={Berlin}
}
\bib{Port}{book}{
title={Brownian Motion and Classical Potential Theory},
author={S.C. Port and C.J. Stone},
date={1978},
publisher={Academic Press},
address={New York}
}
\bib{Ra}{book}{
title={Potential Theory in the Complex Plane},
author={T. Ransford},
date={1995},
publisher={Cambridge University Press},
address={Cambridge}
}
\bib{Sa}{article}{
title={Isoperimetric inequalities for the least harmonic majorant of $\left| x \right|^p$},
author={M. Sakai},
journal={Trans. Amer. Math. Soc.},
volume={299},
date={1987},
pages={431--472}
}
\bib{Soly}{article}{
title={The boundary distortion and extremal problems in certain classes of univalent functions},
author={A.Yu. Solynin},
journal={J. Math. Sci.},
volume={79},
date={1996},
pages={1341–-1358 }
}
\bib{Sol}{article}{
title={Functional inequalities via polarization},
author={A.Yu. Solynin},
journal={St. Petersburg Math. J.},
volume={8},
date={1997},
pages={1015--1038}
}
\bib{Ts}{article}{
title={A theorem on the majoration of harmonic measure and its applications},
author={M. Tsuji},
journal={Tohoku Math. J. (2)},
volume={3},
date={1951},
pages={13--23}
}
\bib{Tsu}{book}{
title={Potential Theory in Modern Function Theory},
author={M. Tsuji},
date={1959},
publisher={Maruzen},
address={Tokyo}
}
\end{biblist}
\end{bibdiv}
\end{document} |
\begin{document}
\title[On torsion in linearized Legendrian contact homology]{On torsion in linearized Legendrian contact homology}
\author{Roman Golovko}
\begin{abstract}
In this short note we discuss examples of Legendrian submanifolds whose linearized Legendrian contact (co)homology groups over the integers have non-vanishing algebraic torsion. More precisely, for an arbitrary finitely generated abelian group $G$ and a positive integer $i$, we construct examples of Legendrian submanifolds of the standard contact vector space whose $i$-th linearized Legendrian contact (co)homology over $\mathbb Z$, computed with respect to a certain augmentation, is isomorphic to $G$.
\end{abstract}
\address{Faculty of Mathematics and Physics, Charles University, Sokolovsk\'{a} 49/83, 18675 Praha 8, Czech Republic}
\email{[email protected]}
\date{\today}
\thanks{}
\subjclass[2010]{Primary 53D12; Secondary 53D42}
\keywords{torsion, Legendrian contact homology}
\maketitle
\section{Introduction and main result}
The Legendrian contact homology of a closed Legendrian submanifold $\Lambda$ of the standard contact vector space $(\mathbb{R}^{2n+1},\xi_{st}=\ker \alpha_{st})$, where $\alpha_{st}=dz-ydx$, is a modern Legendrian invariant defined by Eliashberg--Givental--Hofer \cite{EGHSFT} and Chekanov \cite{ChekanovDGAL}, and developed by Ekholm--Etnyre--Sullivan \cite{EkhEtnSul2005}.
It is the homology of the Legendrian contact homology (LCH) differential graded algebra (often called the Chekanov--Eliashberg differential graded algebra). The Chekanov--Eliashberg DGA is a unital noncommutative differential graded algebra freely generated by the generically finite set of integral curves of the Reeb vector field $\partial_z$ that start and end on $\Lambda$; these generators are called Reeb chords.
Legendrian contact homology is often defined over $\mathbb{Z}_2$, but if $\Lambda$ is spin it can also be defined over other fields, over $\mathbb{Z}$ \cite{OrientLegContHom, KarlssonLCHor}, and even over more general coefficient rings whose structure involves certain topological information about $\Lambda$, such as $\mathbb{Z}_2[H_1(\Lambda;\mathbb{Z})]$ or $\mathbb{Z}[H_1(\Lambda;\mathbb{Z})]$ \cite{EkhEtnSul2005, KarlssonLCHor}.
The Legendrian contact homology DGA is not of finite rank, even in a fixed degree, and the same holds in homology: the graded pieces of the Legendrian contact homology are often infinite dimensional and difficult to compute.
In order to deal with this issue, Chekanov \cite{ChekanovDGAL} proposed to use an augmentation of the DGA to produce a generically finite-dimensional linear complex, whose homology is called linearized Legendrian contact homology.
An exact Lagrangian filling $L$ of $\Lambda$ in the symplectization $(\mathbb{R}\times \mathbb{R}^{2n+1}, d(e^t \alpha_{st}))$ with vanishing Maslov number induces an augmentation of the Chekanov--Eliashberg algebra, i.e.\ a unital DGA homomorphism $\varepsilon: \mathcal A(\Lambda)\to (\mathbb{Z}_2,0)$, see \cite{EkhomHonaKalmancobordisms}.
If, in addition, $L$ is equipped with a spin structure extending the given spin structure on $\Lambda$, then one also has an augmentation $\varepsilon: \mathcal A(\Lambda)\to (\mathbb{Z},0)$, see \cite{EkhomHonaKalmancobordisms, KarlssonOrc}.
Most computations of linearized Legendrian contact homology groups have been carried out for Chekanov--Eliashberg algebras with $\mathbb{Z}_2$-coefficients. One can ask whether, with integral coefficients, elements of finite order can appear in the linearized Legendrian contact (co)homology, i.e.\ whether one can get non-trivial algebraic torsion in linearized Legendrian contact (co)homology. In particular, one can ask whether an arbitrary
finitely generated abelian group can be realized as a linearized Legendrian contact (co)homology group of some Legendrian.
We provide the following answer to this question in high dimensions:
\begin{theorem}
\label{topresnongaug}
Let $G$ be a finitely generated abelian group and let $i\in \mathbb{N}$.
Then there is a Legendrian submanifold $\Lambda$ in $\mathbb R^{2i+7}$ of Maslov number $0$ such that the Chekanov--Eliashberg algebra of $\Lambda$ admits an augmentation
$\varepsilon: \mathcal A(\Lambda)\to (\mathbb{Z}, 0)$ with $LCH_{\varepsilon}^i(\Lambda;\mathbb{Z})$ isomorphic to $G$.
\end{theorem}
\section{Proof of Theorem \ref{topresnongaug}}
We start with the following construction of a spin manifold, whose first homology with $\mathbb{Z}$-coefficients is isomorphic to $G$.
\subsection{Construction of a spin manifold}
\label{spincontrolledh1}
Given a finitely generated abelian group $G$, we can write it as
$$G\simeq \mathbb{Z}^k\times \mathbb{Z}_{p_1}\times\dots
\times \mathbb{Z}_{p_s},$$
where $k$ is a non-negative integer and $p_1,\dots, p_s$ are positive integers that are powers of (not necessarily distinct) prime numbers.
Then we construct a closed, oriented, connected $3$-manifold $N$ such that $H_1(N;\mathbb Z)\simeq G$. We build $N$ from $k$ copies of $S^1\times S^2$ and the lens spaces $L(p_1,q_1),\dots,L(p_s,q_s)$. More precisely, since we know that
\begin{align}
H_1(S^1\times S^2;\mathbb Z)\simeq \mathbb{Z}\quad \mbox{and}\quad H_1(L(p_i,q_i);\mathbb{Z})\simeq \mathbb{Z}_{p_i},
\end{align}
we define $$N= \#^k (S^1\times S^2)\# L(p_1,q_1)\# \dots \#L(p_s,q_s).$$
Since every closed, orientable $3$-manifold is spin, $N$ is spin.
We observe that
\begin{align*}
&H_1(N;\mathbb{Z})=H_1(\#^k (S^1\times S^2) \# L(p_1,q_1)\# \dots \#L(p_s,q_s);\mathbb{Z})\\ &\simeq H_1(S^1\times S^2;\mathbb{Z})^k\times H_1(L(p_1,q_1);\mathbb{Z})\times \dots \times H_1(L(p_s,q_s);\mathbb{Z})
\\ &\simeq\mathbb{Z}^k\times \mathbb{Z}_{p_1}\times\dots
\times \mathbb{Z}_{p_s}\simeq G.
\end{align*}
Since $S^n$ is spin for all $n\geq 0$ and a product of spin manifolds is spin, $N\times S^{i-2}$ is a spin manifold.
Finally, when $i\neq 3$, the K\"{u}nneth formula implies that
\begin{align}
\label{compihom}
H_{1}(N\times S^{i-2}; \mathbb{Z})\simeq H_{1}(N; \mathbb{Z})\simeq G.
\end{align}
From now on we will denote $N\times S^{i-2}$ by $\Lambda$.
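For concreteness, the construction can be traced on a sample group; the particular group $G$ and the lens-space parameters below are illustrative choices, not taken from the text:
\begin{align*}
&\text{For } G=\mathbb{Z}^2\times\mathbb{Z}_3\times\mathbb{Z}_8 \text{ we have } k=2,\ p_1=3,\ p_2=8; \text{ taking } q_1=q_2=1,\\
&N = (S^1\times S^2)\,\#\,(S^1\times S^2)\,\#\,L(3,1)\,\#\,L(8,1),
\qquad
H_1(N;\mathbb{Z}) \simeq \mathbb{Z}^2\times\mathbb{Z}_3\times\mathbb{Z}_8 \simeq G,
\end{align*}
and for, say, $i=5$ one takes $\Lambda=N\times S^{3}$, which still has $H_1(\Lambda;\mathbb{Z})\simeq G$.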
\subsection{Construction of a Legendrian submanifold and its filling}
Now we recall the following statement, which can be seen as an implication of the h-principle of Murphy (\cite[Remark A.5]{LLEIHDCM}).
\begin{proposition}
\label{PorpGeoRealization}
Every closed spin $n$-dimensional manifold $\Lambda$ admits a Legendrian embedding $i:\Lambda \to (\mathbb{R}^{2n+1}, \xi_{st})$ with vanishing Maslov number.
\end{proposition}
\begin{remark}
Str\"angberg in \cite{LegendrianAprroximation} described the $C^0$-approximation procedure for $2$-dimensional submanifolds in the standard contact $\mathbb{R}^5$ by Legendrian submanifolds generalizing the standard approximation result for closed Legendrians in the standard contact $\mathbb{R}^3$. The construction of Str\"angberg as mentioned in \cite{LegendrianAprroximation} admits an extention to certain high dimensions, that will also potentially lead to Proposition \ref{PorpGeoRealization}.
\end{remark}
We apply Proposition \ref{PorpGeoRealization} to $\Lambda$ from Section \ref{spincontrolledh1} and get a Legendrian embedding $i:\Lambda \to (\mathbb{R}^{2n+1}, \xi_{st})$ with vanishing Maslov number; its image will still be denoted by $\Lambda$. We push $\Lambda$ slightly in the $\partial_z$-direction (the Reeb direction) and get a $2$-copy of $\Lambda$ that we call $\overline{\Lambda}$. Following the construction of Mohnke from \cite{Mohnketorusfilliingpair}, we note that $\overline{\Lambda}$ admits an exact Lagrangian filling of Maslov number $0$ diffeomorphic to $\mathbb{R}\times \Lambda$. Following \cite{EkhomHonaKalmancobordisms,KarlssonOrc}, we observe that this spin, Maslov number $0$ exact Lagrangian filling induces an augmentation $\varepsilon$ to $\mathbb Z$. We then study the linearized Legendrian contact cohomology of $\mathcal A(\overline{\Lambda})$, linearized with respect to $\varepsilon$.
\subsection{Application of the isomorphism of Seidel--Ekholm--Dimitroglou Rizell}
Recall that there is an isomorphism due to Seidel that has been described
by Ekholm in \cite{Seidelsisowfc} and completely proven by Dimitroglou Rizell in \cite{DimitroglouRizellLiftingPSH}.
\begin{theorem}[Seidel--Ekholm--Dimitroglou Rizell]
\label{SEDRI}
Let $\Lambda$ be a Legendrian submanifold of Maslov number $0$ of $\mathbb{R}^{2n+1}_{st}$, which admits an exact Lagrangian filling $L$ of Maslov number $0$. Then
\begin{align}
LCH_{\varepsilon}^{i}(\Lambda; \mathbb{Z}_{2})\simeq H_{n-i}(L; \mathbb{Z}_{2}),
\end{align}
where $\varepsilon$ is the augmentation induced by $L$.
\end{theorem}
The homology and cohomology groups in
the above result are defined over $\mathbb{Z}_2$.
Recall that, following the proof of Dimitroglou Rizell \cite{DimitroglouRizellLiftingPSH}, signs of the cobordism maps between Chekanov--Eliashberg algebras
have been studied by Karlsson in \cite{KarlssonOrc}. In addition, the work of Ekholm--Lekili \cite{EkholmLekiliduality} implies an enhancement of Theorem \ref{SEDRI}, which compares not just the corresponding homology and cohomology groups but the
corresponding $A_{\infty}$-structures, holds with signs, and works over an arbitrary field. Therefore, we can say that the isomorphism of Seidel--Ekholm--Dimitroglou Rizell holds over $\mathbb{Z}$.
We then take the linearized Legendrian contact cohomology of $\overline{\Lambda}$, apply the isomorphism of Seidel--Ekholm--Dimitroglou Rizell over $\mathbb{Z}$, and get that
\begin{align}
\label{Seidelsisofor2copy}
LCH^{i}_{\varepsilon}(\overline{\Lambda}; \mathbb Z) \simeq H_{1}(\mathbb R\times \Lambda; \mathbb{Z})\simeq H_{1}(\Lambda;\mathbb{Z}).
\end{align}
Then combining Formula (\ref{Seidelsisofor2copy}) with Formula (\ref{compihom}) we get that
\begin{align}
\label{thefinform}
LCH^{i}_{\varepsilon}(\overline{\Lambda}; \mathbb Z)\simeq G.
\end{align}
\begin{remark}
We can also take the $1$-jet space $J^1(\Lambda)\simeq \mathbb{R}\times T^{\ast} \Lambda$ equipped with the standard contact $1$-form $\alpha=dz+\theta$, where
$\theta$ is a primitive of the standard symplectic form on $T^{\ast} \Lambda$. Then we take the zero section $0_{\Lambda}$, and denote
by $\overline{\Lambda}$ the $2$-copy of $0_{\Lambda}$. In this case one does not need to apply the isomorphism of Seidel--Ekholm--Dimitroglou Rizell, but
one can simply rely on the analysis of pseudoholomorphic curves in the duality paper of Ekholm--Etnyre--Sabloff \cite{EESDuality} to get Formula (\ref{thefinform}), and hence the analogue of Theorem \ref{topresnongaug} will hold.
\end{remark}
\begin{remark}
Note that the examples we construct have ``complicated'' topology that allows us to realize an arbitrary finitely generated abelian group as a Legendrian contact cohomology group.
One could ask whether an arbitrary finitely generated abelian group can be realized as a linearized Legendrian contact (co)homology of a Legendrian with ``simple'' topology, for example Legendrian knots and links. This question is due to Bourgeois \cite{BourgeoisTorsion} and remains wide open.
\end{remark}
\section*{Acknowledgements}
The author is grateful to Russell Avdek, Georgios Dimitroglou Rizell and Filip Strako\v{s} for the very helpful discussions.
The author is supported by the GA\v{C}R EXPRO Grant 19-28628X.
\end{document} |
\begin{document}
\begin{frontmatter}
\title{On a Voter model on $\mathbb{R}^{\lowercase{d}}$:\\Cluster growth in the\\ Spatial $\Lambda$-Fleming-Viot Process}
\runtitle{Cluster growth in the S$\Lambda$FV Process}
\author{\fnms{Habib} \snm{Saadi}\ead[label=e1]{[email protected]} \thanksref{t1} \thanksref{t2}}
\affiliation{Imperial College London}
\thankstext{t1}{Supported by EPSRC Grant EP/E065945/1.}
\thankstext{t2}{The author would like to thank Alison Etheridge, Nic Freeman and Mladen Savov for carefully reading earlier versions of this manuscript.}
\runauthor{H. Saadi}
\begin{abstract}
The spatial $\Lambda$-Fleming-Viot (S$\Lambda$FV) process introduced in
(Barton, Etheridge and V\'eber, 2010)
can be seen as a direct extension of the Voter Model (Clifford and Sudbury, 1973); (Liggett, 1997).
As such, it is an Interacting Particle System with configuration space $\mathcal{M}^{\mathbb{R}^d}$, where $\mathcal{M}$ is the set of probability measures on some space $K$. Such processes are usually studied thanks to a dual process that describes the genealogy of a sample of particles. In this paper, we propose two main contributions to the analysis of the S$\Lambda$FV process. The first is the study of the growth of a cluster, and the surprising result is that with probability one, every bounded cluster stops growing in finite time. In particular, we discuss why the usual intuition is flawed. The second contribution is an original method of proof, as the traditional (backward in time) duality methods fail.
We develop a forward in time method that exploits a martingale property of the process. To make it feasible, we construct adequate objects that allow us to handle the complex geometry of the problem. We are able to prove the result in any dimension $d$.
\end{abstract}
\begin{keyword}[class=AMS]
\kwd[Primary ]{60J25}
\kwd{60K35}
\kwd{92D10}
\kwd[; secondary ]{60D05}
\end{keyword}
\begin{keyword}
\kwd{Generalized Fleming--Viot process}
\kwd{interacting particle systems}
\kwd{almost sure properties}
\kwd{cluster}
\end{keyword}
\end{frontmatter}
\section{Introduction}
The spatial $\Lambda$-Fleming-Viot process (S$\Lambda$FV) is a model
used
to represent biological evolution on a continuum.
It was first
introduced in \cite{ETHERIDGE_2008_DDASSMMOE_BCP},
and then studied in more detail in \cite{BARTON_ETHERIDGE_KELLEHER_2010},
\cite{BARTON_ETHERIDGE_VEBER_2010} and \cite{BERESTYCKI_ETHERIDGE_VEBER_2011}.
In this setting, given a set of genetic types $K$, a population
living on $\mathbb{R}^d$ is represented by a collection of probability measures on $K$.
More precisely, the genetic composition at time $t$ of the population at point $x \in \mathbb{R}^d$
is given by a measure $\rho_t(x,\cdot)$ on the type space $K$.
The S$\Lambda$FV process
is a direct spatial extension of the generalised Fleming-Viot processes presented in \cite{DONNELLY_KURTZ_1999a} and
studied in \cite{BERTOIN_LEGALL_2003}.
But it can also be seen as an
interacting particle system generalising
the Voter Model \cite{CLIFFORD_SUDBURY_1973,LIGGETT_1997}.
The configuration space for the Voter Model is $\{0,1\}^{\mathbb{Z}^d}$, whereas for the S$\Lambda$FV process
it is $\mathcal{M}^{\mathbb{R}^d}$, where $\mathcal{M}$ is the
set of probability measures on $K$. This generalisation of the configuration space is one of the elements
that make the study of the S$\Lambda$FV process particularly challenging.\\
Our motivation in this article is the study of the fate of a new genetic type created by mutation at time
$0$. More precisely, we assume that there are only two types of individuals, $Blue$ and $Red$,
and that the new type, say $Red$, occupies a bounded set of $\mathbb{R}^d$ at time $0$.
The question is how far this newly created type is going to spread.
Because we are working with two types only, the setting simplifies. We have
$ \rho_t(x,Blue)=1-\rho_t(x,Red)$, so at time $t$ it is enough to consider the collection of
numbers
$ \{\rho_t(y,Red), \, y \in \mathbb{R}^d \}$.
This is why we are going to represent the population at time $t$ by the function
$$ X_t : \mathbb{R}^d\mapsto [0,1]$$
such that $X_t(y)=\rho_t(y,Red)$. Working with a function instead of a collection of
probability measures allows us to simplify the notation when manipulating the
S$\Lambda$FV process.
\subsection{The process}
For every time $t \geq 0$, let $X_t$ be a function from $\mathbb{R}^d$ to $[0,1]$. The quantity $X_t(y)$ for $y \in \mathbb{R}^d $ is the
frequency of $Red$ individuals at location $y$. The dynamics of $X_t$ is the following.
Consider a space-time Poisson point process
$\Pi$ on $\mathbb{R}_+ \times \mathbb{R}^d \times [0,1]$ with rate $dt\otimes dc \otimes dv$, and two constants
$0 \leq U < 1$ and $R>0$.
Then, for every point $(t,c,v)$ of $\Pi$,
\begin{enumerate}
\item[i)] Draw a ball $B(c,R)$ of radius $R$ centred around
$c$.
\item[ii)] If $X_{t-}(c)\geq v$, then the parent is
$Red$, and
for every point $y \in B(c,R)$,
\begin{equation*}
X_t(y)=(1-U)X_{t-}(y)+U.
\end{equation*}
\item[iii)] If $X_{t-}(c)< v$, then the parent is
$Blue$, and
for every point $y \in B(c,R)$,
\begin{equation*}
X_t(y)=(1-U)X_{t-}(y).
\end{equation*}
\end{enumerate}
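For intuition, the three steps above can be sketched as a direct simulation of a single reproduction event on a spatial grid. This is only an illustrative discretization, not part of the model's definition: the grid, the parameter values, and the nearest-grid-point reading of $X_{t-}(c)$ are our own choices.

```python
import numpy as np

def reproduction_event(X, grid, c, v, U, R):
    """Apply one reproduction event (t, c, v) to the Red-frequency field X.

    X    : array of frequencies X_{t-}(y) at the grid points.
    grid : array of grid points y, shape (m, d).
    c    : centre of the event, shape (d,).
    v    : uniform mark in [0, 1] deciding the parent's type.
    U, R : impact parameter and event radius.
    """
    dists = np.linalg.norm(grid - c, axis=1)
    inside = dists <= R
    # Parent sampled at c is Red iff X_{t-}(c) >= v; here X_{t-}(c) is
    # read off from the grid point nearest to c (a discretization choice).
    parent_red = X[np.argmin(dists)] >= v
    X = X.copy()
    if parent_red:
        X[inside] = (1 - U) * X[inside] + U   # case ii)
    else:
        X[inside] = (1 - U) * X[inside]       # case iii)
    return X

# One event on a 1-d grid: Red initially occupies [-1, 1].
grid = np.linspace(-3, 3, 61).reshape(-1, 1)
X0 = (np.abs(grid[:, 0]) <= 1).astype(float)
X1 = reproduction_event(X0, grid, c=np.array([0.5]), v=0.3, U=0.5, R=1.0)
```

Iterating this update over the points of a simulated Poisson point process yields approximate trajectories of $(X_t)_{t\geq 0}$.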
The steps i), ii) and iii) can be written in a single equation:
\begin{align}
\label{Single_Transition}
X_{t}(y)=X_{t-}(y)+U \mathds{1}_{\{\|y-c\| \leq R\} } \left(\mathds{1}_{\{v \leq X_{t-}(c)\} }-X_{t-}(y)\right), \quad y \in \mathbb{R}^d.
\\
\nonumber
\end{align}
In biological terms, each point $(t,c,v)$ of the Poisson point process corresponds to a \emph{reproduction
event}
taking place at time $t$ in a ball $B(c,R)$. First, a parent is chosen at random at location $c$. The parent is $Red$ with probability $X_{t-}(c)$ and $Blue$ with probability $1-X_{t-}(c)$, and her offspring are going to have the same type as her.
Second, competition for finite resources causes a proportion $U$ of the population inside the ball of
centre $c$ and radius $R$ to die. Finally, the offspring of the parent replaces the proportion $U$
of individuals who have died. Births and deaths take place simultaneously at time $t$.
Figure \ref{Fig_Def_Xt} illustrates the births and deaths events taking place during a single transition at time $t$ corresponding
to the point $(t,c,v)$ from the Poisson point process $\Pi$.
\begin{figure}
\caption{Schematic view in dimension $d=1$ of a Markov transition for the process $X_t(\cdot)$ induced by the point $(t,c,v) \in \Pi$.}
\label{Fig_Def_Xt}
\end{figure}
\begin{Remark}
We have chosen the parent to be at location $c$, which is a simplification of the model in \cite{BARTON_ETHERIDGE_VEBER_2010}, where the
location of the parent was chosen uniformly on the ball. This does not change the model significantly;
it just simplifies some calculations.
\end{Remark}
The presentation of the process we just gave is simply
an algorithm that describes the jumps of
$(X_t)_{t\geq0}$,
but we need to construct it formally as a Markov
process.
The most natural way is to translate this algorithm
into the infinitesimal generator $\mathcal{L}$ of $(X_t)_{t\geq0}$, which is defined by
\begin{align}
\mathcal{L}I(f) := \lim_{t \rightarrow 0} \frac{\mathbb{E}[ I(X_t) - I(X_0) \, | \, X_0=f]}{t},
\end{align}
where $I$ is a test function, and $f$ is the initial value of the process $(X_t)_{t\geq0}$.
We choose the test function $I$
from the family $I_n( \, \cdot \,;\psi)$ of functions
of the form
\begin{align}
\label{Test_Func_Red}
I_n \big( f;\psi \big)
=\int_{(\mathbb{R}^d)^n} \psi(x_1,\dots,x_n) \prod_{i=1}^n
\, f(x_i) \,
dx_1 \dots dx_n,
\end{align}
where $\psi$ is a function from $(\mathbb{R}^d)^n$ to $\mathbb{R}$ such that
$\int |\psi(x_1,\dots,x_n)| dx_1 \dots dx_n< \infty$, and $f$ is a function
from $\mathbb{R}^d$ to $[0,1]$ corresponding to $X_0$. The intuition behind this form is that the distribution
of the function-valued process $(X_t)_{t\geq0}$ is described by the finite-dimensional dynamics at all locations $x_1,\dots,x_n$.
The generator $\mathcal{L}$ of the process $(X_t)_{t \geq 0}$ is given by
\begin{align}
\nonumber
& \quad
\mathcal{L} I_n(f;\psi)
\\[7pt]
\nonumber
=
\quad &
\int_{\mathbb{R}^d}
\int_{(\mathbb{R}^d)^n}
\sum_{I \subset \{1,\dots,n\}}
\Big(
\prod_{j \notin I}
\mathds{1}_{x_j \notin B(c,R)}
f(x_j)
\Big)
\times
\Big(
\prod_{i \in I} \mathds{1}_{x_i \in B(c,R)}
\Big)
\\[7pt]
\nonumber
&
\
\times
\Bigg[
f(c) \,
\bigg(
\prod_{i \in I} \Big( (1-U) f(x_i)+U \Big)
-
\prod_{i \in I} f(x_i)
\bigg)
\\[7pt]
\nonumber
&
\quad \quad \quad \quad \quad \quad \quad
+
\big(1-f(c)\big) \,
\bigg(
\prod_{i \in I} (1-U) f(x_i)
-
\prod_{i \in I} f(x_i)
\bigg)
\Bigg]
\\[7pt]
\label{Gener_ModSLFV_Red}
&
\quad \quad \quad \quad \quad \quad \quad \quad
\quad \quad \quad \quad \quad \quad \quad
\psi(x_1,\dots,x_n) \,
dx_1 \dots dx_n \,
\, dc.
\end{align}
To understand this expression, we can think of $(X_t)_{t \geq 0}$ as a jump process
with possibly an infinite number of jumps at each instant.
The transitions of the process are indexed by the points $(t,c,v)$ of the Poisson point process $\Pi$ with
intensity $dt \otimes dc \otimes dv$. Morally, we can use equation (\ref{Single_Transition}) and write
the generator in the form
\begin{align*}
\nonumber
\mathcal{L} I_n(f;\psi)
=
\int_{\mathbb{R}^d}
\int_{[0,1]}
\Big[I_n \big( f_{(c,v)};\psi \big)
- I_n \big( f;\psi \big)\Big]
dv \,
dc,
\end{align*}
where
$$ f_{(c,v)}(x)=f(x)+U \mathds{1}_{\{\|x-c\| \leq R\} }\left(\mathds{1}_{\{v \leq f(c)\} }-f(x)\right).$$
When we replace the test functions $I_n$ with their expression, we obtain
\begin{align*}
\nonumber
\mathcal{L} I_n(f;\psi)
=
&
\int_{\mathbb{R}^d}
\int_{(\mathbb{R}^d)^n}
\int_{[0,1]}
\Big[
\prod_{i=1}^n\, f_{(c,v)}(x_i)
- \prod_{i=1}^n\, f(x_i)
\Big] dv \,
\\[7pt]
&
\quad \quad \quad \quad \quad \quad \quad \quad
\psi(x_1,\dots,x_n) \,
dx_1 \dots dx_n \,
dc.
\end{align*}
To express the integral
\begin{align*}
& \
\int_{[0,1]}
\Big[
\prod_{i=1}^n\, f_{(c,v)}(x_i)
- \prod_{i=1}^n\, f(x_i)
\Big] dv,
\end{align*}
we find the unique set $I \subset \{1,\dots,n\}$ that verifies $x_j \in B(c,R)$ if and only if $j \in I$ (see Figure \ref{Fig_Generator}).
After careful computations, we obtain expression (\ref{Gener_ModSLFV_Red}).
\begin{figure}
\caption{Visualisation of the unique set $I$ that produces a nonzero term in the sum appearing in the definition (\ref{Gener_ModSLFV_Red}).}
\label{Fig_Generator}
\end{figure}
One needs to prove that there exists a Markov process $(X_t)_{t\geq0}$
that is defined by (\ref{Gener_ModSLFV_Red}).
A general proof of the existence of the S$\Lambda$FV process is given in \cite{BARTON_ETHERIDGE_VEBER_2010} using duality. However, we do not need to use
this result,
because in our case we are able to construct the process directly
using our forward in time method, see \S \ref{CTMC}.
\subsection{Main result}
The $Red$ population, whose support is bounded, is
competing against the $Blue$ population, whose support is unbounded, so intuitively, we expect the $Red$ population
to become extinct. The real question is how far the $Red$ population manages
to spread before ultimately disappearing. This is why we are studying the dynamics of the support of the $Red$ population.
Every time the individual sampled to be the parent is
not $Red$, the proportion of the $Red$ population decreases, which decreases the overall probability that the parent
at the next event is going to be $Red$. The same reasoning applies to the $Blue$ population.
The only way for the support to grow is if the ball of centre $c$ and radius $R$ is not entirely contained in the support of $X_t$,
while the parent sampled at $c$ is $Red$. On the other hand, if we take $U<1$, the support never shrinks: once
a point $y \in \mathbb{R}^d$ is occupied with a positive frequency, its future frequencies will always be positive.
Given this schematic view of the dynamics of the process, one intuitively expects a behaviour similar
to what we are going to call the \emph{oil film spreading}:
The proportion of the $Red$ population would converge to zero at every point, but at the same time
its support would grow
forever
and ultimately would occupy an infinite subset of $\mathbb{R}^d$. Stated more naturally, there seems to be no reason
why the support would not grow to an infinitely large set.\\
However, the actual behaviour of the process is rather counterintuitive. Before stating our main result,
we need to introduce some notation.\\
\begin{Notation}
\begin{itemize}
\item
For any given function
$f: \mathbb{R}^d \rightarrow \mathbb{R}$, we denote its support
by
Supp$(f)$.
\item
We define $\mathcal{S}_c$ to be the set of Borel measurable
functions $f: \,
\mathbb{R}^d \rightarrow [0,1]$ with compact support. We
endow $\mathcal{S}_c$ with the $L^\infty$ norm.
\item
Given a set $A \subset \mathbb{R}^d$ and $R>0$, we denote by $A^R$ the $R$-expansion of $A$,
that is the set defined by
$$ A^R:=\{x \in \mathbb{R}^d \text{ s.t. } \min_{y \in A} \|x-y \|\leq R\}.$$
\end{itemize}
\end{Notation}
\begin{Theorem}
\label{Main_Result}
Let $(X_t)_{t \geq 0}$ be a Markov process
with generator (\ref{Gener_ModSLFV_Red}). Suppose $X_0=f$ is deterministic
and with bounded support.
Then, there exists a random finite set $B \subset \mathbb{R}^d$, and
an almost surely finite random
time $T$ such that
\begin{equation}
\forall t > T, \,
\text{Supp}(X_t) = B \quad \text{a.s.}
\end{equation}
Furthermore,
\begin{equation}
\sup_{z \in B} X_t(z) \rightarrow 0 \quad \text{a.s. as } t \rightarrow \infty.
\end{equation}
\end{Theorem}
\begin{Remark}
For the sake of clarity, we chose $X_0$ to be deterministic. The result would still be true if
$\mathbb{E}[|\text{Supp}(X_0)|]<\infty$, see Remark \ref{Rmk_Delta0_Random}.
\end{Remark}
The proof of this result is the objective of this paper. It is fairly challenging because of the large dimension of the
state space. Although the result is stated for the support of $X_t$, the actual object we need to keep track of
is the whole function $X_t$. The natural approach, consisting in approximating probabilities of trajectories that correspond to the event
in question, simply does not work, because such approximations waste too much information about the process.
Our approach is to first summarise the structure
of the process in a useful way. This is why we build adequate geometric tools that allow us to use a powerful martingale argument.\\
Fundamentally, the cause of the behaviour described in Theorem \ref{Main_Result} lies in the discrete nature of the jumps inherent to $\Lambda$FV processes, rather than in the
geometry of the S$\Lambda$FV process.
To see this, we consider the simpler process $(\hat{Z}_t)_{t \geq 0}$ in which there is no space; that is to say, at every reproduction event
the parent is $Red$ with probability $\hat{Z}_t$, and
a proportion $U$
of the whole population is replaced by the offspring of the parent.
In the notation of \cite{BERTOIN_LEGALL_2003},
the process $(\hat{Z}_t)_{t \geq 0}$ is the $\Lambda$FV process
on the state-space $K=\{Red,Blue\}$ with
$\Lambda$-measure $\Lambda(du)=u^2 \delta_U (du)$.
It is a continuous time Markov Chain on $[0,1]$ with constant intensity, and the transitions of its embedded
discrete time Markov Chain $(Z_n)_{n \geq 0}$ are given by
\begin{align*}
\left\{
\begin{aligned}
&
Z_{n+1}=(1-U)Z_n+U \, \varepsilon_{n+1},
\\
&
\varepsilon_{n+1} \, | \, Z_n \, \thicksim \, \text{B}(Z_n),
\end{aligned}
\right.
\end{align*}
where $\text{B}(Z_n)$ is a Bernoulli distribution with parameter $Z_n$.
It is straightforward to show that $(Z_n)_{n \geq 0}$ is a nonnegative martingale,
and
therefore converges almost surely. As a consequence, $\varepsilon_n$
converges
almost
surely to $0$ or $1$.
This means that after some
finite random time, $\varepsilon_n$ will remain constant equal to $0$
or $1$. Almost surely in finite time,
either
the $Red$ or the $Blue$ population will be the only one to keep
reproducing.
This remarkable feature is due to the fact that if the frequency $Z_n$ is
not sampled a few times in a row, it decreases geometrically and rapidly becomes too small
to be sampled again. The same reasoning applies to the $Blue$ population.\\
As we will prove, the same mechanism takes place in the spatial model, that is
after some almost surely finite random time, the $Red$ population will stop reproducing.
\subsection{Proofs and outline}
The martingale argument demonstrated in the previous section seems to be the most promising approach.
However, to be able to use such an argument, we need to find a way to filter out all the complex dependencies introduced by space,
which is the main challenge in this work. We solve this problem by introducing in \S \ref{Forbid_Reg} the geometric notion
of a \emph{forbidden region}, which
allows us to connect the martingale convergence to the sampling of the $Red$ population.
The rest of this article
is devoted to the proof
of Theorem \ref{Main_Result}, as well as the
construction of the process $(X_{(t)})_{t\geq0}$ defined by (\ref{Gener_ModSLFV_Red}).
In Section \ref{Sec_DTMC}, we introduce a discrete-time Markov chain $(Y_n)_{n \geq 0}$ which is the discrete-time
counterpart of $(X_{(t)})_{t\geq0}$.
This chain is going to be used to construct $(X_{(t)})_{t\geq0}$ and to prove
Theorem \ref{Main_Result}.
We state in Proposition \ref{Conv_Delta_n} the equivalent of Theorem \ref{Main_Result} in discrete time.
Section \ref{Sec_Geometry} provides a toolbox that allows us to handle the geometry of the model easily.
We prove in Section \ref{Finit_Sampl}
the central Proposition \ref{Prop_Finit_Many_Sampl}, which states that in discrete time, the $Red$ population defined by $Y_n(\cdot)$ is sampled
only finitely many times. The proof relies on the fact that the total mass of the population, i.e. the integral of $Y_n(\cdot)$
over $\mathbb{R}^d$, is a martingale that converges almost surely. In \S \ref{Forbid_Reg}, we introduce the crucial concept of forbidden region, and use it to
prove Proposition \ref{Prop_Finit_Many_Sampl}.
We gather all the results in Section \ref{Prf_Thrm}. Proposition \ref{Prop_Finit_Many_Sampl}
allows us both to use $(Y_n)_{n\geq0}$ to construct $(X_{(t)})_{t\geq0}$ as a non-explosive continuous-time Markov chain, and to prove
Theorem \ref{Main_Result}.
We finally conclude by discussing some extensions of this work.
\section{A discrete time Markov chain}
\label{Sec_DTMC}
\subsection{Construction}
\begin{Definition}
\label{Def_Yn}
Consider $R>0$ and $0<U<1$. Let $y$ be an
$\mathcal{S}_c$-valued random
variable.
We construct simultaneously random sequences $(C_n)_{n \geq
1}$
and $(V_n)_{n \geq 1}$, a filtration $(\mathcal{P}_n)_{n \geq
0}$,
and an $\mathcal{S}_c$-valued Markov chain $(Y_n)_{n\geq0}$,
using the following recurrence:
\begin{align}
\left\{
\begin{aligned}
&
Y_0=y,
\\
&
\mathcal{P}_0:=\sigma(Y_0),
\\
&
\Delta_0=\text{Supp}(Y_0).
\end{aligned}
\right.
\end{align}
and for $n\geq0$,
\begin{itemize}
\item $\mathcal{P}_n:=
\sigma(C_1,\dots,C_n,V_1,\dots,V_n,
Y_0,\dots,Y_n)$,
\item conditionally on $\mathcal{P}_n$, $C_{n+1}$ is uniform
on $\Delta_n^{R}$,
\item $V_{n+1}$ is distributed uniformly on $[0,1]$,
independently from
$\mathcal{P}_n$ and $C_{n+1}$,
\item $Y_{n+1}$ is given by the formula
\begin{align}
\label{Dyn_Yn}
Y_{n+1}(\cdot)=Y_n(\cdot)+U \, \delta_{B(C_{n+1},R)}(\cdot) \,
\bigl( \mathds{1}_{ \{
V_{n+1} \leq Y_n(C_{n+1})\} } -Y_n(\cdot) \bigr)
\end{align}
\item $\Delta_{n+1}=\text{Supp}(Y_{n+1})$.
\end{itemize}
We introduce the notation $\varepsilon_{n+1}:=\mathds{1}_{ \{
V_{n+1} \leq Y_n(C_{n+1})\} }$.
In
particular, the trajectories of $(\Delta_n)_{n \geq 0}$ are
given by
\begin{equation}
\Delta_{n}=\Delta_{0} \cup
\bigcup_{
\substack{
1\leq k \leq n,\\
\varepsilon_{k}=1 }
}
B(C_{k},R).
\label{Dyn_Clust}
\end{equation}
Finally, we denote the natural filtration of $(Y_n)_{n\geq0}$
by
$\mathcal{G}_n:=\sigma(Y_0,\dots,Y_n)$.
\end{Definition}
\begin{Remark}
We recall that $\Delta_n^R$ is the $R$-expansion of the set
$\Delta_n$.
\end{Remark}
At each
reproduction event, the random variable $C_{n+1}$
is the centre of the event. The parent is sampled
at location $C_{n+1}$ by means of the random variable $V_{n+1}$, so that the parent is $Red$ with probability
$Y_n(C_{n+1})$, and $Blue$ with probability $1-Y_n(C_{n+1})$.
The constants $R$ and $U$ are the radius of the
event and the proportion of the population that is modified.
The
random variable $\varepsilon_{n+1}$ indicates the type of the
chosen parent.
\begin{Notation}
If
$\varepsilon_{n+1} =1$, we say that the event is a
\emph{positive
sampling event}, because
the total red population increases, whereas when
$\varepsilon_{n+1}
=0$ we say that it
is a \emph{negative sampling event}.
\end{Notation}
Equation (\ref{Dyn_Clust}) shows that for the cluster $\Delta_n$
to grow, the
minimum requirement is that a positive sampling event occurs.\\
\begin{Remark}
\label{Delta_Expands}
Expression (\ref{Dyn_Clust}) is true because we assumed
$U<1$. In this case, the support and the range of the process
coincide. Once a region
is occupied by the \emph{Red} population it remains occupied
at every finite time. Therefore the cluster $\Delta_n$ never
shrinks. In the case where $U=1$, expression (\ref{Dyn_Clust})
would remain true if $\Delta_n$ were defined to be the range of the process up to time $n$,
that is $\bigcup_{m \leq n} \text{Supp}(Y_m)$.
\end{Remark}
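As a sanity check of the dynamics (\ref{Dyn_Yn}), they can be simulated on a one-dimensional grid. The sketch below is a discretisation, not the exact continuum model (the grid, the parameters and the helper name \texttt{step} are ours): the $R$-expansion $\Delta_n^R$ is approximated by the grid points within distance $R$ of the support of $Y_n$.

```python
import numpy as np

def step(Y, x, R, U, rng):
    """One transition of (Y_n), discretised on a 1D grid x
    (a sketch of the dynamics, not the exact continuum model)."""
    # Delta_n^R: grid points within distance R of the support of Y_n
    expansion = np.zeros(x.shape, dtype=bool)
    for s in x[Y > 0]:
        expansion |= np.abs(x - s) <= R
    C = rng.choice(x[expansion])           # centre of the reproduction event
    ball = np.abs(x - C) <= R              # B(C_{n+1}, R)
    eps = float(rng.random() < Y[np.argmin(np.abs(x - C))])  # parent Red w.p. Y_n(C)
    return Y + U * ball * (eps - Y), C, ball, eps
```

Each call applies the update rule (\ref{Dyn_Yn}) once; iterating it produces a discretised trajectory of $(Y_n)_{n \geq 0}$.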
\subsection{Convergence of the cluster}
The following result expresses our main result
in the discrete-time setting, under the temporary technical condition
$Y_0=a\, \delta_{B(C_0,r_0)}$. This condition is removed in Proposition
\ref{Conv_Delta_n_General} by allowing $Y_0$ to be any deterministic
function with bounded support.
\begin{Proposition}
\label{Conv_Delta_n}
Suppose $Y_0=a\, \delta_{B(C_0,r_0)}$, where
$a \in [0,1]$, $r_0>0$ and
$C_0 \in \mathbb{R}^d$.
Then, there exists an almost surely finite
random time $\kappa$ such that
\begin{align}
\forall n > \kappa, \quad \varepsilon_n=0.
\end{align}
Therefore, there exists an
almost surely
bounded random set $B \subset \mathbb{R}^d$ such that
\begin{align}
\forall n > \kappa, \quad \Delta_n=B.
\end{align}
\end{Proposition}
Most of the remainder of this paper is devoted to proving this
Proposition. We first
investigate the geometric properties of the model in the next section.
\section{Geometry}
\label{Sec_Geometry}
This section constructs all the tools that allow us to manage the geometry of the process.
\begin{Remark}
From now on, unless specified otherwise, we suppose that $Y_0=a\, \delta_{B(C_0,r_0)}$.
\end{Remark}
\subsection{$Y_n$ is piecewise constant}
\begin{Definition}
For notational convenience, we introduce the sequence\\
$(R_n)_{n \geq 0}$ such that
\begin{align}
\left\{
\begin{aligned}
& R_0=r_0,
&
\\
& R_n=R,
& n \geq 1.
\end{aligned}
\right.
\end{align}
\end{Definition}
\begin{Lemma}
\label{Struct_Yn}
For every $n \geq 0$, for every
$\zeta \subset \{0,\dots,n\}$,
consider the set $A_{n,\zeta}$ defined by
\begin{align}
\label{Struct_A}
A_{n,\zeta}:=
\big( \bigcap_{m \in \zeta} B(C_{m},R_{m}) \big)
\, \diagdown \, \big( \bigcup_{\substack{j \leq n,\\ j \notin \zeta}}
B(C_{j},R_{j}) \big).
\end{align}
The function $Y_n$ can be written as
\begin{align}
\label{Spat_Struct}
Y_n=\sum_{\zeta \subset \{0,\dots,n\}}
\alpha_{n,\zeta} \, \delta_{A_{n,\zeta}},
\end{align}
where the sets $A_{n,\zeta}$ are all disjoint for a given $n$,
and $\alpha_{n,\zeta} \geq 0$.
\end{Lemma}
\begin{Remark}
By construction, $\forall z \in A_{n,\zeta}$, we have $Y_n(z)=\alpha_{n,\zeta} $.
\end{Remark}
\begin{figure}
\caption{Structure of the level sets $A_{n,\zeta}$.}
\label{Fig_Structure_Levels_Y}
\end{figure}
\begin{proof}
We first introduce the shorter notation
$$B_j:=B(C_{j},R_{j}), \ j \geq 0.$$
The fact that the sets $A_{n,\zeta}$ are all disjoint for a given $n$
is straightforward, so we just need to prove (\ref{Spat_Struct}).
But before that, we need to show that
\begin{align}
\label{Union}
\bigcup_{\zeta \subset \{0,\dots,n\}} A_{n,\zeta}
=
\bigcup_{i=0}^{n} B_i,
\end{align}
\end{align}
and we proceed by induction on $n$.
The statement is true for $n=0$.
Suppose now that it is true for some given $n \geq 0$.
Let $\zeta' \subset \{0,\dots,n+1\}$.
\begin{itemize}
\item[--] If $\zeta'=\{n+1\}$, then $A_{n+1,\zeta'}=B_{n+1}
\, \diagdown \, \bigcup_{\zeta \subset \{0,\dots,n\}} A_{n,\zeta}.$
\item[--] If $(n+1) \in \zeta'$, and $\zeta' \neq \{n+1\}$
then there exists $\zeta \subset \{0,\dots,n\}$
such that
$$A_{n+1,\zeta'}=B_{n+1} \cap A_{n,\zeta}.$$
\item[--] If $(n+1) \notin \zeta'$,
then there exists $\zeta \subset \{0,\dots,n\}$
such that
$$A_{n+1,\zeta'}=A_{n,\zeta} \, \diagdown \, B_{n+1}.$$
\end{itemize}
Therefore, we see that
\begin{align*}
\bigcup_{\zeta' \subset \{0,\dots,n+1\}} A_{n+1,\zeta'}
=
\, &
\Big(
\bigcup_{\zeta \subset \{0,\dots,n\}} B_{n+1} \cap A_{n,\zeta}
\Big) \,
\cup \,
\Big(
\bigcup_{\zeta \subset \{0,\dots,n\}} A_{n,\zeta} \, \diagdown \, B_{n+1}
\Big)
\\[7pt]
&
\quad \quad \quad \quad \quad \quad \quad \quad
\quad \quad \quad
\cup
\Big(
B_{n+1} \, \diagdown \,
\bigcup_{\zeta \subset \{0,\dots,n\}} A_{n,\zeta}
\Big)
\\[7pt]
=
& \,
\Big(
\bigcup_{\zeta \subset \{0,\dots,n\}} A_{n,\zeta}
\Big)
\cup
\Big(
B_{n+1} \, \diagdown \,
\bigcup_{\zeta \subset \{0,\dots,n\}} A_{n,\zeta}
\Big)
\\[8pt]
=
& \,
B_{n+1}
\cup
\Big(
\bigcup_{\zeta \subset \{0,\dots,n\}} A_{n,\zeta}
\Big),
\end{align*}
and the statement is proven using the inductive hypothesis.\\
We can return to the proof of expression (\ref{Spat_Struct}). We need to show
that
for all $n \geq 0$, $\zeta \subset \{0,\dots,n\}$,
\begin{align}
\label{Union_An}
&
\Big(
x \notin \bigcup_{\zeta \subset \{0,\dots,n\}} A_{n,\zeta}
\Big)
\text{ implies }
\Big(
Y_n(x)=0
\Big),
\\
\nonumber
\text{and }
&
\\
\label{xy_in_An}
&
\Big( x,y \, \in A_{n,\zeta} \Big)
\text{ implies }
\Big(
Y_n(x)=Y_n(y)
\Big).
\\
\nonumber
\end{align}
We prove (\ref{Union_An}) by induction on $n$, and we use (\ref{Union}).
Statement (\ref{Union_An}) is satisfied for $n=0$ because
$Y_0=a\, \delta_{B(C_0,r_0)}$.
Suppose now that it is true
for some $n \geq 0$, and consider $x \notin \cup_{i=0}^{n+1} B_i$. In
particular,
$x \notin B_{n+1}$, and using the dynamics equation (\ref{Dyn_Yn}),
we find that
$Y_{n+1}(x)=Y_n(x).$
Because $x \notin \cup_{i=0}^{n} B_i$, we can use the inductive hypothesis,
and we obtain that $Y_{n}(x)=0$, which proves (\ref{Union_An}).
To prove (\ref{xy_in_An}), we also use induction. It is true for $n=0$.
Suppose (\ref{xy_in_An}) is satisfied
for a given $n \geq 0$. Consider
$\zeta' \subset \{0,\dots,n+1\}$, and take $x,y \in A_{n+1,\zeta'}$.
\begin{itemize}
\item[--] If $\zeta'=\{n+1\}$, then
$A_{n+1,\zeta'}=B_{n+1}
\, \diagdown \, \bigcup_{\zeta \subset \{0,\dots,n\}} A_{n,\zeta}$, and
in particular $$x,y \notin \bigcup_{\zeta \subset \{0,\dots,n\}}
A_{n,\zeta}.$$
We have $\delta_{B_{n+1}}(x)=\delta_{B_{n+1}}(y)=1$,
and using (\ref{Union_An}) we see that
$Y_n(x)=Y_n(y)=0$.
\item[--] If $(n+1) \in \zeta'$ and $\zeta' \neq \{n+1\}$,
there exists $\zeta \subset \{0,\dots,n\}$
such that
$$A_{n+1,\zeta'}=B_{n+1} \cap A_{n,\zeta}.$$
Therefore $\delta_{B_{n+1}}(x)=\delta_{B_{n+1}}(y)=1$,
and using the inductive hypothesis, we have $Y_n(x)=Y_n(y)$.
\item[--] If $(n+1) \notin \zeta'$,
then there exists $\zeta \subset \{0,\dots,n\}$
such that
$$A_{n+1,\zeta'}=A_{n,\zeta} \, \diagdown \, B_{n+1}.$$
In this case $\delta_{B_{n+1}}(x)=\delta_{B_{n+1}}(y)=0$,
and using the inductive hypothesis, we have $Y_n(x)=Y_n(y)$.
\end{itemize}
We can express $Y_{n+1}$ using the dynamics equation (\ref{Dyn_Yn}),
and we obtain:
\begin{align*}
\left\{
\begin{aligned}
&
Y_{n+1}(x)=Y_n(x)+U \, \delta_{B_{n+1}}(x)
\, (\varepsilon_{n+1}-Y_n(x))
\\
&
Y_{n+1}(y)=Y_n(y)+U \, \delta_{B_{n+1}}(y)
\, (\varepsilon_{n+1}-Y_n(y)).
\end{aligned}
\right.
\end{align*}
We have proved that for any choice of $\zeta'$, all the terms in the above
equations are the same for $x$ and $y$, therefore $Y_{n+1}(x)=Y_{n+1}(y)$,
and we have proved (\ref{xy_in_An}).
\end{proof}
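Lemma \ref{Struct_Yn} can be checked numerically on a grid discretisation of the dynamics in $d=1$ (illustrative parameters; a sketch, not the exact continuum model): grid points sharing the same membership pattern in the balls $B_0,\dots,B_n$ play the role of a cell $A_{n,\zeta}$, and $Y_n$ must be constant on each of them.

```python
import numpy as np

rng = np.random.default_rng(2)
dx = 0.02
x = np.arange(-8.0, 8.0, dx)
R, U = 1.0, 0.5
C0, r0, a = 0.0, 0.5, 0.7
Y = a * (np.abs(x - C0) <= r0)            # Y_0 = a * indicator of B(C_0, r_0)
balls = [np.abs(x - C0) <= r0]            # B_0 (with radius R_0 = r_0)
for _ in range(15):
    expansion = np.zeros(x.shape, dtype=bool)
    for s in x[Y > 0]:
        expansion |= np.abs(x - s) <= R   # discretised Delta_n^R
    C = rng.choice(x[expansion])
    ball = np.abs(x - C) <= R
    eps = float(rng.random() < Y[np.argmin(np.abs(x - C))])
    Y = Y + U * ball * (eps - Y)
    balls.append(ball)

# Grid points sharing the same membership pattern in B_0, ..., B_n form a
# (discretised) cell A_{n,zeta}; Y_n must be constant on each of them.
patterns = np.stack(balls).T              # patterns[i, j]: is x_i in B_j?
for pattern in set(map(tuple, patterns)):
    cell = np.all(patterns == pattern, axis=1)
    assert np.ptp(Y[cell]) == 0.0
```

The final assertion passes because the update (\ref{Dyn_Yn}) applies identical arithmetic to all points with the same pattern of ball memberships, which is precisely the inductive argument of the proof.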
\subsection{Variation of the local average}
A central tool for the rest of this work is
the integral of the function $Y_n$ over a ball of radius $R$ and
centre $x \in \mathbb{R}^d$. It is important that $R$ is the radius of the reproduction events,
as this is what links the martingale introduced in the next section to the geometry of the process (see
Lemma \ref{Mass_Change}).
We introduce the following function:
\begin{Definition}
\begin{align}
\Phi_n(x):=\int_{B(x,R)} Y_n(z) \, dz.
\end{align}
\end{Definition}
The main result of this section is the following.\\
\begin{Proposition}
\label{Mean_Val}
For every $x,y \in \mathbb{R}^d$,
\begin{align}
\Phi_n(y)-\Phi_n(x) \leq \|y-x\| S(R).
\end{align}
\end{Proposition}
$ $ \newline
The rest of this section is devoted to proving this inequality.
For this we need to introduce some auxiliary functions.
\begin{Definition}
Given $x,y \in \mathbb{R}^d$, for every $n\geq0$, we define
\begin{align}
\label{Lambda_nxy}
\Lambda_n^{x,y}: [0, \|y -x\|] & \longrightarrow
[0,\infty)
\\
\nonumber
t
&
\, \longmapsto
\Lambda_n^{x,y}(t):=\Phi_n \Big( \,x+ t \,\frac{y-x}{\|y -x\|} \,
\Big).
\end{align}
\end{Definition}
The key property for the proof of Proposition \ref{Mean_Val} is
the following.
\begin{Proposition}
\label{Piece_Diff}
$\Lambda_n^{x,y}$ is a continuous, piecewise differentiable
function.
Moreover, for every point $t$ where $\Lambda_n^{x,y}$ is
differentiable,
we have:
\begin{align}
\label{Bound_D_Lambda}
\frac{d \Lambda_n^{x,y}}{dt} (t) \leq S(R).
\end{align}
\end{Proposition}
\begin{proof}
We first prove that $\Lambda_n^{x,y}$ is continuous and that there is at most
a finite number $J$ of points $t_1 < \dots < t_J$ at which $\Lambda_n^{x,y}$
is not differentiable.
Thanks to equation (\ref{Spat_Struct}) from Lemma
\ref{Struct_Yn},
we see that $\Phi_n$ is given by
\begin{align*}
\Phi_n(x)&=\int_{B(x,R)}
\sum_{\zeta \subset \{0,\dots,n\}}
\alpha_{n,\zeta} \delta_{A_{n,\zeta}}(z) \, dz
\\
&=
\sum_{\zeta \subset \{0,\dots,n\}}
\alpha_{n,\zeta} |B(x,R) \cap A_{n,\zeta}|,
\end{align*}
therefore $\Lambda_n^{x,y}(t)$ is given by
\begin{align}
\label{Struct_Lambda_n}
\Lambda_n^{x,y}(t)&=
\sum_{\zeta \subset \{0,\dots,n\}}
\alpha_{n,\zeta} \,
|
B\big( \,x+ t \,\frac{y-x}{\|y -x\|} \,
,R \big)
\cap A_{n,\zeta}|.
\end{align}
Recall the notation
$B_j:=B(C_{j},R_{j}), \ j \geq 0$,
and set
$$x_t:=x+ t \, \frac{y-x}{\|y -x\|}.$$
The definition (\ref{Struct_A}) of the sets $A_{n,\zeta}$ and the
inclusion-exclusion formula allow us to prove that
for each $\zeta \subset \{0,\dots,n\}$, there exists
a $\beta_{n,\zeta} \in \mathbb{R}$ such that
\begin{align*}
\Lambda_n^{x,y}(t)=
&
\sum_{\zeta \subset \{0,\dots,n\}}
\beta_{n,\zeta} \,
|
B\big( \,x_t \, ,R \big)
\cap
\big( \bigcap_{m \in \zeta} B_m \big)
|.
\end{align*}
\begin{Remark}
The main change with expression (\ref{Struct_Lambda_n}) is that now we are working with intersections of balls, which are convex,
whereas the sets $A_{n,\zeta}$ are usually not. Also, we had the fact that $\alpha_{n,\zeta}$ is the value of the function $Y_n$
on the set $A_{n,\zeta}$, and such an interpretation is lost for $\beta_{n,\zeta}$.
\end{Remark}
If we introduce the function $H_{\zeta}$ defined for each set
$\zeta \subset \{0,\dots,n\}$ by
\begin{align}
\label{Function_H_zeta}
H_{\zeta}:
[0, \|y -x\|]
&
\longrightarrow
\mathbb{R}
\\
t \
&
\nonumber
\longmapsto
| \,
B(x_t,R) \cap
\bigcap_{m \in \zeta} B_m
\, |,
\end{align}
then we can simply rewrite the function $\Lambda_n^{x,y}$ as
\begin{align}
\label{Struct_Lambda_n_Convex}
\Lambda_n^{x,y}(t)=
&
\sum_{\zeta \subset \{0,\dots,n\}}
\beta_{n,\zeta} \,
H_{\zeta}(t).
\end{align}
\begin{figure}
\caption{Visualisation of the function $H_{\zeta}$.}
\label{Fig_Function_Lambda_n_x_y}
\end{figure}
The continuity of $H_{\zeta}$ follows from the continuity of
the function $t \mapsto x_t$, and this shows that
$\Lambda_n^{x,y}$ is continuous.\\
The set $\bigcap_{m \in \zeta} B_m$ is convex,
therefore there exist $t_1$, $t_2$
such that
\begin{align}
\label{t1t2}
H_{\zeta}(t)>0 \Leftrightarrow t_1<t<t_2.
\end{align}
Consider $t$ such that $t_1<t<t_2$. Thanks to (\ref{t1t2}),
this means we can choose a point $z$ belonging to
the
interior of
$B(x_t,R) \cap \bigcap_{m \in \zeta} B_m$.
Because the set $B(x_t,R) \cap \bigcap_{m \in \zeta} B_m$ is convex, we can express its volume
in $d$-dimensional spherical coordinates with $z$ as the new origin. Given angular coordinates
$\phi:=(\phi_1,\dots,\phi_{d-1})$, we denote by $p_{\phi}(t)$ the unique point
of the boundary of $B(x_t,R) \cap \bigcap_{m \in \zeta} B_m$
with angular coordinates $\phi$, and $L(t,\phi)$
the distance between $z$ and $p_{\phi}(t)$. We have:
\begin{align*}
H_{\zeta}(t)=
\int_0^{\pi}
\dots
\int_0^{\pi}
\int_0^{2 \pi}
\int_0^{L(t,\phi)}
r^{d-1}
\sin^{d-2}(\phi_1)
\sin^{d-3}(\phi_2)
\dots
\sin(\phi_{d-2})
\\
dr \,
d\phi_{d-1} \dots d\phi_1.
\end{align*}
\begin{figure}
\caption{Illustration of $p_{\phi}$.}
\label{Fig_Convex_Disc}
\end{figure}
We denote by $\Omega_{\zeta}$ the boundary of $\bigcap_{m \in \zeta} B_m$, and by
$S_t$ the sphere of centre $x_t$ and radius $R$. We can find a partition
$\Xi_1,\dots,\Xi_K$ of the space
$$ [0,\pi]^{d-2} \times [0,2 \pi]$$
such that
\begin{align*}
\left\{
\begin{aligned}
&
\forall \phi \in \Xi_j, \, p_{\phi}(t) \in \Omega_{\zeta},
\\
&
\text{or}
\\
&
\forall \phi \in \Xi_j, \, p_{\phi}(t) \in S_t.
\end{aligned}
\right.
\end{align*}
Therefore we can write $H_{\zeta}(t)$ as
\begin{align*}
H_{\zeta}(t)=
\sum_{j=1}^K
\int_{\Xi_j}
\int_0^{L(t,\phi)}
r^{d-1}
\sin^{d-2}(\phi_1)
\sin^{d-3}(\phi_2)
\dots
\sin(\phi_{d-2})
dr \, d\phi.
\end{align*}
Thanks to this representation, we see that a sufficient condition
for $H_{\zeta}$
to be differentiable at $t$, $t_1<t<t_2$, is that
for every $\phi$ in the interior of every $\Xi_j$, the function
$t \mapsto L(t,\phi)$ is differentiable at $t$. In this case,
the derivative is given by
\begin{align*}
\frac{d H_{\zeta}(t)}{dt}=
\sum_{j=1}^K
\int_{\Xi_j}
\frac{\partial L}{\partial t}
(t,\phi)
\,
L(t,\phi)^{d-1}
\sin^{d-2}(\phi_1)
\sin^{d-3}(\phi_2)
\dots
\sin(\phi_{d-2})
\, d\phi.
\end{align*}
We focus now on the differentiability of $t \mapsto L(t,\phi)$, where
$\phi$ belongs to the interior of $\Xi_j$ for some $j$.
Suppose first that
$p_{\phi}(t) \in \Omega_{\zeta}$. Because the function
$t \mapsto x_t$ is continuous,
and because $z$ is a fixed point,
there exists $h>0$ such that
for every $u \in (t-h,t+h)$, $p_{\phi}(u) \in \Omega_{\zeta}$.
Therefore, there exists $h>0$ such that
for every $u \in (t-h,t+h)$, $L(u,\phi)=L(t,\phi)$, and $t \mapsto
L(t,\phi)$
is differentiable at $t$.
In the case where $p_{\phi}(t) \in S_t$, by the
same continuity argument, we obtain that for every $u \in (t-h,t+h)$,
$p_{\phi}(u) \in S_{u}$. Because the distance between $z$ and the
projection of $z$ on $S_t$ along the angle $\phi$ is differentiable,
we conclude that $t \mapsto L(t,\phi)$ is also differentiable in
this case.\\
We have proved that for all $t \neq t_1,t_2$, $H_{\zeta}$ is
differentiable at
$t$.
Given that
$$\Lambda_n^{x,y}(t)=\sum_{\zeta \subset \{0,\dots,n\}}
\beta_{n,\zeta} \, H_{\zeta}(t),
$$
we conclude that there is at most a finite number of points at which
$\Lambda_n^{x,y}$ is not differentiable, and this proves the first
part of the
Proposition.\\
It remains to show the upper bound
(\ref{Bound_D_Lambda}) on the derivative. Suppose
$\Lambda_n^{x,y}$ is differentiable at $t$. By definition,
\begin{align*}
\Lambda_n^{x,y}(t)
=
\int_{B(x_{t},R)} Y_n(z) dz,
\end{align*}
therefore
\begin{align*}
\Lambda_n^{x,y}(t+h)-\Lambda_n^{x,y}(t)
=
\int_{B(x_{t+h},R)} Y_n(z) dz
-
\int_{B(x_{t},R)} Y_n(z) dz.
\end{align*}
By construction, for all
$h \geq 0$, $B(x_{t+h},R) \subset B(x_t,R+h)$,
which implies that
\begin{align*}
\Lambda_n^{x,y}(t+h)-\Lambda_n^{x,y}(t)
& \leq
\int_{B(x_{t},R+h)} Y_n(z) dz
-
\int_{B(x_{t},R)} Y_n(z) dz
\\[7pt]
&
\leq
\int_{B(x_{t},R+h) \diagdown B(x_{t},R)} Y_n(z) dz
\\[7pt]
&
\leq
|B(x_{t},R+h)|- |B(x_{t},R)|.
\end{align*}
Dividing by $h$ and taking the limit as $h \rightarrow 0$, we obtain
\begin{align}
\frac{d \Lambda_n^{x,y}}{dt} (t)
\leq
\frac{d V(R)}{dR}
=
S(R),
\end{align}
where $V(R)$ is the volume of a ball of radius $R$, and $S(R)$
its surface area.
\end{proof}
\begin{proof}[Proof of Proposition \ref{Mean_Val}]
We saw in the previous proof that there is only a finite
number of points
$t_1,\dots,t_J$ at which $\Lambda_n^{x,y}$ is not
differentiable.
By continuity of $\Lambda_n^{x,y}$, and using
Proposition \ref{Piece_Diff},
\begin{align*}
& \Phi_n(y)-\Phi_n(x)
\\
= &
\Lambda_n^{x,y}(\|y-x\|)-\Lambda_n^{x,y}(0)
\\
= &
\Lambda_n^{x,y}(\|y-x\|)-\Lambda_n^{x,y}(t_J)
+ \sum_{j=1}^{J-1}
\bigl( \Lambda_n^{x,y}(t_{j+1})-\Lambda_n^{x,y}(t_j) \bigr)
\\
& \quad \quad \quad \quad \quad \quad \quad \quad
\quad \quad \quad \quad \quad \quad \quad
+\Lambda_n^{x,y}(t_1)-\Lambda_n^{x,y}(0)
\\
\leq & S(R) (\|y-x\|-t_J)
+ \sum_{j=1}^{J-1} S(R)(t_{j+1}-t_j)
\\
& \quad \quad \quad \quad \quad \quad \quad \quad
\quad \quad \quad \quad \quad \quad \quad
+S(R)(t_1-0)
\\
\leq & \|y-x\| \, S(R).
\qedhere
\end{align*}
\end{proof}
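Proposition \ref{Mean_Val} can be illustrated numerically in $d=1$, where $V(R)=2R$ and $S(R)=2$ (the boundary of a one-dimensional ball consists of two points). The following sketch (parameters are illustrative) checks the Lipschitz bound for a piecewise constant $Y$ with values in $[0,1]$:

```python
import numpy as np

dx = 0.01
x = np.arange(-12.0, 12.0, dx)
R = 1.003                                  # radius chosen off the grid
S_R = 2.0                                  # "surface area" of a 1D ball: two endpoints

# An arbitrary piecewise constant Y with values in [0, 1]
Y = np.zeros_like(x)
for c, r, v in [(-2.0, 1.5, 0.9), (0.5, 0.7, 0.4), (3.0, 2.0, 1.0)]:
    Y[np.abs(x - c) <= r] = v

# Phi(c) = integral of Y over B(c, R), by a Riemann sum on the grid
Phi = np.array([Y[np.abs(x - c) <= R].sum() * dx for c in x])

slope = np.diff(Phi) / dx
assert slope.max() <= S_R + 1e-6           # Phi is S(R)-Lipschitz
```

In fact, in $d=1$ the derivative is $\Phi'(c)=Y(c+R)-Y(c-R)$, so the true Lipschitz constant is at most $1$; the bound $S(R)$ of the proposition is dimension-independent and not tight here.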
\section{Probability}
\label{Sec_Finit_Sampl}
\subsection{A martingale argument}
\label{MartArg}
\begin{Definition}
We denote by $M_n$ the total mass of $Y_n$, that is
\begin{equation*}
M_{n}=\int_{\mathbb{R}^d} Y_{n}(z) dz.
\end{equation*}
\end{Definition}
\begin{Lemma}
\label{Mass_Change}
The change of the total mass $M_{n+1}-M_n$
is given by
\begin{align}
M_{n+1}-M_n
= U \, \bigl( \varepsilon_{n+1} V(R)
- \Phi_n(C_{n+1}) \bigr).
\end{align}
\end{Lemma}
\begin{proof} Using (\ref{Dyn_Yn}),
\begin{align*}
M_{n+1}-M_n &
= \int_{\mathbb{R}^d}
\Big(
Y_{n+1}(z)-Y_{n}(z)
\Big)
dz
\\
& = \int_{\mathbb{R}^d}
U \, \delta_{B(C_{n+1},R)} (z) \, \bigl( \varepsilon_{n+1}
-Y_n(z)
\bigr)
dz\\
& = U \, \displaystyle \int_{B(C_{n+1},R)}
\Big(
\varepsilon_{n+1} -Y_n(z)
\Big)
dz\\
& = U \, \bigl( \varepsilon_{n+1} V(R)
- \Phi_n(C_{n+1}) \bigr).
\qedhere
\end{align*}
\end{proof}
\begin{Proposition}
\label{Mart}
$(M_n)_{n\geq0}$ is a nonnegative discrete-time
$(\mathcal{G}_n)_{n \geq 0}$-martingale.
\end{Proposition}
\begin{proof}
Thanks to Lemma \ref{Mass_Change} and Definition \ref{Def_Yn},
we can calculate explicitly
$\mathbb{E}[\,M_{n+1}-M_n \, | \, \mathcal{G}_n \,]$ as
follows:
\begin{align*}
& \, \mathbb{E}\bigl[M_{n+1}-M_n \, | \, \mathcal{G}_n\bigr]
\\[10pt]
= & \,
\mathbb{E}\biggl[U \,
\mathds{1}_{ \{ V_{n+1} \leq Y_n(C_{n+1})\} }
\int_{B(C_{n+1},R)} dz
- U \, \int_{B(C_{n+1},R)} Y_n(z) dz \, | \,
\mathcal{G}_n\biggr]
\\[10pt]
= & U \,
\int_{\mathbb{R}^d}
\int_{[0,1]}
\biggl[
\mathds{1}_{ \{ v \leq Y_n(c)\} }
\int_{B(c,R)} dz
- \int_{B(c,R)} Y_n(z) \,dz \,
\biggr]\,
dv \,
\frac{\mathds{1}_{c \in \Delta_n^R}}{ |\Delta_n^R|}
\, dc
\\[10pt]
= & U \,
\int_{\mathbb{R}^d}
\biggl[
Y_n(c)
\int_{B(c,R)} dz
- \int_{B(c,R)} Y_n(z) dz \,
\biggr]\, \frac{\mathds{1}_{c \in \Delta_n^R}}{ |\Delta_n^R|}
dc
\\[10pt]
= & \frac{U}{ |\Delta_n^R|} \, \biggl[ \,
\int_{\mathbb{R}^d}
\int_{\mathbb{R}^d}
\mathds{1}_{z \in B(c,R)} \,
\mathds{1}_{c \in \Delta_n^R}
\,
Y_n(c) dz \, dc
\\[10pt]
& \quad \quad \quad \quad \quad \quad \quad \quad \quad
\quad \quad
\quad
-
\int_{\mathbb{R}^d}
\int_{\mathbb{R}^d}
\mathds{1}_{z \in B(c,R)}\,
\mathds{1}_{c \in \Delta_n^R}
\,
Y_n(z) dz \, dc \, \biggr].
\end{align*}
Now, for every $f \in \mathcal{S}_c$,
\begin{align*}
& \int_{\mathbb{R}^d}
\int_{\mathbb{R}^d}
\mathds{1}_{\{z \in B(c,R)\}} \,
\mathds{1}_{\{c \in \text{Supp}(f)^R\}} \,
f(c)
dz \, dc \\
= & \int_{\mathbb{R}^d}
\int_{\mathbb{R}^d}
\mathds{1}_{\{c \in B(z,R)\}} \,
f(c)
dz \, dc
\quad \text{since } c \in B(z,R)
\Leftrightarrow
z \in B(c,R)
\\
= & \int_{\mathbb{R}^d}
\int_{\mathbb{R}^d}
\mathds{1}_{\{z \in B(c,R)\}} \, f(z)
dc \, dz,\\
= & \int_{\mathbb{R}^d}
\int_{\mathbb{R}^d}
\mathds{1}_{\{z \in B(c,R)\}} \,
\mathds{1}_{\{c \in \text{Supp}(f)^R\}} \,
f(z)
dc \, dz,
\end{align*}
so $\mathbb{E}[\, M_{n+1}-M_n \, | \, \mathcal{G}_n \, ]=0$,
which shows that
$(M_n)_{n\geq0}$
is a martingale.
\end{proof}
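The cancellation at the heart of Proposition \ref{Mart} can be verified numerically in a $d=1$ grid discretisation (illustrative parameters; a sketch, not the continuum model): for a fixed configuration $Y_n$, averaging the increment of Lemma \ref{Mass_Change} over the centre $C_{n+1}$, with $\varepsilon_{n+1}$ replaced by its conditional mean $Y_n(C_{n+1})$, gives zero. The radius $R$ is deliberately chosen off the grid so that every discretised ball contains the same number of points, the discrete analogue of the change of variables in the proof.

```python
import numpy as np

dx = 0.01
x = np.arange(-10.0, 10.0, dx)
R, U = 1.005, 0.5       # R off the grid: every ball holds the same number of points
Y = 0.6 * (np.abs(x) <= 2.0)               # some fixed configuration Y_n

# Discretised Delta_n^R: grid points within R of the support of Y_n
expansion = np.zeros(x.shape, dtype=bool)
for s in x[Y > 0]:
    expansion |= np.abs(x - s) <= R
centres = x[expansion]

# E[M_{n+1} - M_n | G_n]: C uniform on Delta_n^R and E[eps | C] = Y_n(C)
# give the increment U * (Y_n(C) V(R) - Phi_n(C)) on average over C
terms = []
for c in centres:
    ball = np.abs(x - c) <= R
    Yc = Y[np.argmin(np.abs(x - c))]
    terms.append(Yc * ball.sum() * dx - (Y * ball).sum() * dx)
expected_increment = U * np.mean(terms)
assert abs(expected_increment) < 1e-8      # the total mass is a martingale
```

The sum vanishes for the same reason as in the proof: swapping the roles of $c$ and $z$ in the double sum leaves the indicator $\mathds{1}_{z \in B(c,R)}$ invariant, and every grid point within $R$ of the support already belongs to the discretised $\Delta_n^R$.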
\begin{Definition}
Let $\alpha$ be a real number such that $0<\alpha<UV(R)/2$. We then define
\begin{equation}
\tau_{\alpha}:=\inf \{p \geq 0: \forall n \geq p,\quad
|M_{n+1}-M_n| < \alpha
\}.
\end{equation}
\end{Definition}
Note that $\tau_{\alpha}$ is not a stopping time, but this
will not be an issue in what follows.
\begin{Proposition}
\label{Constraint_C}
The random time $\tau_{\alpha}$ is a.s. finite, and
$\forall n>\tau_{\alpha}$,
\begin{align}
\left\{
\begin{aligned}
& \Phi_n(C_{n+1}) < \frac{\alpha}{U}
& \text{if } \varepsilon_{n+1}=0,
\\
& \Phi_n(C_{n+1}) > V(R)-\frac{\alpha}{U}
& \text{if } \varepsilon_{n+1}=1.
\end{aligned}
\right.
\end{align}
\end{Proposition}
\begin{proof}
We know that $M_n$ is a nonnegative martingale, so
it converges almost surely as $n \rightarrow \infty$.
Therefore $\tau_{\alpha}$ is
almost surely finite, and by definition,
$\forall n>\tau_{\alpha}, \quad |M_{n+1}-M_n| < \alpha$.
Using Lemma \ref{Mass_Change}, we observe that
\begin{align*}
|M_{n+1}-M_n| =
\left\{
\begin{aligned}
& U \, \Phi_n(C_{n+1})
& \text{if } \varepsilon_{n+1}=0,
\\
& U \, \bigl( V(R) - \Phi_n(C_{n+1}) \bigr)
& \text{if } \varepsilon_{n+1}=1,
\end{aligned}
\right.
\end{align*}
which concludes the proof.
\end{proof}
\subsection{Forbidden region}
\label{Forbid_Reg}
The concept of a forbidden region will allow us to exploit
probabilistically the geometric
properties established in Section \ref{Sec_Geometry}.
\begin{Lemma}
For $n \geq 0$,
\begin{align}
\nonumber
& \{n \geq \tau_{\alpha}\} \cap
\{\varepsilon_k=1 \text{ infinitely often} \}
\\
\label{Include_1}
\subset
\ &
\{n \geq \tau_{\alpha}\} \cap \bigcap_{j=n}^{\infty}
\{\sup \Phi_j > V(R)-\alpha /U\}
\end{align}
\end{Lemma}
\begin{proof}
Take $j \geq \tau_{\alpha}$ such that $\sup \Phi_j \leq
V(R)-\alpha /U$.
Using Proposition \ref{Constraint_C}, this implies that
$\varepsilon_{j+1}=0$. In particular,
$\sup \Phi_{j+1} \leq \sup \Phi_j\leq V(R)-\alpha /U$. By
induction, we just
showed that
\begin{align*}
\left\{
\begin{aligned}
& j \geq \tau_{\alpha}
\\
& \sup \Phi_j \leq V(R)-\alpha /U
\end{aligned}
\right.
\Longrightarrow
\left\{
\begin{aligned}
& j \geq \tau_{\alpha}
\\
& \forall k\geq j, \varepsilon_k=0.
\end{aligned}
\right.
\end{align*}
The contrapositive of this implication concludes the
proof.
\end{proof}
\begin{Definition}
We define the \emph{forbidden region} $F_n$ to be
\begin{align}
& F_n:= \{ x \in \mathbb{R}^d \, :
\frac{\alpha}{U} \leq \Phi_n(x) \leq
V(R)-\frac{\alpha}{U}
\}.
\end{align}
We also introduce the quantity
\begin{align}
\psi:=V\Bigg(\frac{V(R)-2 \, \alpha/U}{S(R)}\Bigg).
\end{align}
\end{Definition}
The name \emph{forbidden region} is motivated by
the following lemma, which tells us that after the time
$\tau_{\alpha}$, as long as the
supremum of the local integrals $\Phi_j$ remains high, the points
$C_{j+1}$ are forbidden from falling in the region $F_j$. Furthermore,
this lemma
provides a lower bound on the volume of $F_j$.
\begin{Lemma}
\begin{align}
\nonumber
& \{j \geq \tau_{\alpha}\} \cap
\{\sup \Phi_j > V(R)-\alpha /U\}
\\[10pt]
\label{Include_2}
\subset
\ &
\{C_{j+1} \notin F_j,|F_j|\geq \psi\}
\end{align}
\end{Lemma}
\begin{proof}
An immediate consequence of Proposition \ref{Constraint_C} is
$$j \geq \tau_{\alpha} \Rightarrow C_{j+1} \notin F_j.$$
More
work is required to obtain the lower bound on the volume of
the forbidden region. We first define
$P_j:=\{ x \in \mathbb{R}^d \, : \Phi_j(x) \geq
V(R)-\alpha/U \} $. If we assume
$\sup \Phi_j > V(R)-\alpha /U$, then $P_j$ is nonempty. We can
then take a point $y \in P_j$.
The function $\Phi_j$ is continuous, and $\Delta_j$ is bounded,
so the region
$N_{j}:=\{ x \in \mathbb{R}^d \, : \Phi_j(x) \leq
\alpha/U \} $ is
unbounded. Indeed, for every $x$ at a distance from $\Delta_j$
larger than $R$,
$\Phi_j(x)=0$. In particular, for a large enough positive number
$\overline{R}$, we can consider $\Gamma$ the sphere of
radius $\overline{R}$ and
centre $y$, and the ball $B(y,\overline{R})$ such that
\begin{align}
\label{Ball_Gamma_Delta}
\left\{
\begin{aligned}
& \Delta_j \subset B(y,\overline{R}),
\\
& \Gamma \subset N_{j}.
\end{aligned}
\right.
\end{align}
For $x,y \in \mathbb{R}^d$, we denote by $[x,y]$ the line-segment
between $x$ and $y$.
We need the following lemma:
\begin{Lemma}
\label{Two_Points}
The point $y \in P_{j}$ being fixed, for every point $x \in
\Gamma$, we can
find
two points $x_0, y_0$ such that:
\begin{align}
\left\{
\begin{aligned}
& [x_0, y_0] \subset F_{j},
\\
& [x_0, y_0] \subset [x, y],
\\
& \|y_0-x_0\|\geq \frac{V(R)-2 \, \alpha/U}{S(R)}.
\end{aligned}
\right.
\end{align}
\end{Lemma}
\begin{figure}
\caption{Illustration of the construction (\ref{Ball_Gamma_Delta}).}
\label{Fig_Ball_Around_Delta}
\end{figure}
By integrating the result of Lemma \ref{Two_Points}
over all the points $x \in \Gamma$, we find
that the volume of
$F_j$ is larger than the volume of a ball of radius
$(V(R)-2 \, \alpha/U)/S(R)$, hence the
result.
\end{proof}
\begin{proof}[Proof of Lemma \ref{Two_Points}]
The function $\Lambda_j^{x,y}$ defined in (\ref{Lambda_nxy})
is continuous, with
$\Lambda_j^{x,y}(0)=0$ and
$\Lambda_j^{x,y}(\|y-x\|)\geq V(R)-\alpha/U$, so
there
are two points $t_1,t_2 \in [0,\|y-x\|]$ such that
$\Lambda_j^{x,y}([t_1,t_2])=
[\alpha/U,V(R)-\alpha/U]$.
By applying the continuous map
$t \longmapsto x+ t \,(y-x)/\|y -x\|$, this means that
there are two points
$x_0,y_0 \in \mathbb{R}^d$ such that
\begin{itemize}
\item[--] $\Phi_j(x_0)=\alpha/U$,
\item[--] $\Phi_j(y_0)=V(R)-\alpha/U$,
\item[--] $\forall z \in (x_0,y_0),
\Phi_j(z)\in (\alpha/U,V(R)-\alpha/U)$.
\end{itemize}
The last statement is just the fact that $(x_0,y_0) \subset
F_{j}$.
By using Proposition \ref{Mean_Val}, we find that
\begin{align*}
\|y_0-x_0\| \, S(R) & \geq
\Phi_j(y_0)-\Phi_j(x_0)
\\
& \geq
V(R)-2 \, \alpha/U.
\end{align*}
\end{proof}
We now reach the main point of this section, which is an upper
bound on the probability that infinitely many positive sampling
events take place.
\begin{Proposition}
\label{UBoundProb}
\begin{align}
\nonumber
& \mathbb{P}
\big( \varepsilon_k=1 \text{ infinitely often} \big)
\\
\leq
\, &
\sum_{l=0}^{\infty}
\mathbb{P}
\big( \bigcap_{j=l}^{\infty}
\{C_{j+1} \notin F_j,|F_j|\geq \psi\}
\big)
\end{align}
\end{Proposition}
\begin{proof}
As $\tau_{\alpha}<\infty$ a.s.,
\begin{align}
\nonumber
& \mathbb{P}
\big( \varepsilon_k=1 \text{ i.o.} \big)
\\
\nonumber
=
\, &
\mathbb{P}
\big( \{\tau_{\alpha}<\infty\} \cap
\{ \varepsilon_k=1 \text{ i.o.} \} \big)
\\
=
\nonumber
\, &
\sum_{n=0}^{\infty}
\mathbb{P}
\big( \{\tau_{\alpha}=n\} \cap
\{ \varepsilon_k=1 \text{ i.o.} \} \big)
\\
\label{AvDern}
\leq
\, &
\sum_{n=0}^{\infty}
\mathbb{P}
\big( \{n \geq \tau_{\alpha}\} \cap
\{ \varepsilon_k=1 \text{ i.o.} \} \big)
\end{align}
We can write $\{n \geq \tau_{\alpha}\}=
\bigcap_{j=n}^{\infty}\{j \geq \tau_{\alpha}\}$.
Using this in (\ref{Include_2}), we obtain
\begin{align*}
& \{n \geq \tau_{\alpha}\} \cap
\bigcap_{j=n}^{\infty}
\{\sup \Phi_j > V(R)-\alpha /U \}
\\
\subset
\ &
\bigcap_{j=n}^{\infty}
\{C_{j+1} \notin F_j,|F_j|\geq \psi\}.
\end{align*}
Combining this with (\ref{Include_1}), we have
\begin{align}
\nonumber
& \{n \geq \tau_{\alpha}\} \cap
\{\varepsilon_k=1 \text{ i.o.} \}
\\
\label{JustForb}
\subset
\ &
\bigcap_{j=n}^{\infty}
\{C_{j+1} \notin F_j,|F_j|\geq \psi\},
\end{align}
and the result follows.
\end{proof}
\subsection{Finitely many positive sampling events}
\label{Finit_Sampl}
\begin{Proposition}
\label{Prop_Finit_Many_Sampl}
\begin{align}
\nonumber
\mathbb{P}
\big( \varepsilon_k=1 \text{ infinitely often} \big)
=0.
\end{align}
\end{Proposition}
\begin{proof}
Using Proposition \ref{UBoundProb}, it suffices to prove
that for every $l \geq 0$,
\begin{align*}
\mathbb{P} \big(
\bigcap_{j=l}^{\infty} \{C_{j+1} \notin F_j,
|F_j|\geq \psi
\}
\big)=0.
\end{align*}
By continuity from above of the probability measure, we have
\begin{align*}
& \,
\mathbb{P} \big(
\bigcap_{j=l}^{\infty} \{C_{j+1} \notin F_j,
|F_j|\geq \psi
\}
\big)
=
\lim_{n \rightarrow \infty}
\mathbb{P} \big(
\bigcap_{j=l}^{n} \{C_{j+1} \notin F_j,
|F_j|\geq \psi
\}
\big).
\end{align*}
We work in the slightly more general setting where $Y_0$, and therefore $\Delta_0$, is allowed to be random.
This costs nothing, because we begin by conditioning on
$\Delta_0$:
\begin{align*}
\mathbb{P} \big(
\bigcap_{j=l}^{n} \{C_{j+1} \notin F_j,
|F_j|\geq \psi \}
\big)
=
\mathbb{E} \, \big[
\mathbb{P} \big(
\bigcap_{j=l}^{n} \{C_{j+1} \notin F_j,
|F_j|\geq \psi \}
\, | \, \Delta_0
\big)
\, \big].
\end{align*}
We then condition on all but the last reproduction event:
\begin{align}
\nonumber
& \,
\mathbb{P} \Big(
\bigcap_{j=l}^{n} \{C_{j+1} \notin F_j,
|F_j|\geq \psi \}
\, | \, \Delta_0
\Big)
\\
\label{Recurr_Prob}
= & \,
\mathbb{E} \, \bigg[
\mathds{1}_{
\displaystyle \{ C_{l+1} \notin F_l,|F_l|\geq \psi \} }
\dots
\mathds{1}_{
\displaystyle \{ C_{n} \notin F_{n-1},|F_{n-1}|\geq \psi \} }
\\
\nonumber
& \,
\quad \quad \quad \quad
\mathbb{P} \Big(
C_{n+1} \notin F_n,
|F_n|\geq \psi
\, | \, \Delta_0 , F_l, C_{l+1} , ..., F_{n-1}, C_{n}
\Big)
\, | \, \Delta_0
\bigg].
\end{align}
We can calculate the last term by conditioning on $F_n$ and
$\Delta_n$:
\begin{align}
\nonumber
& \,
\mathbb{P} \big(
C_{n+1} \notin F_n,
|F_n|\geq \psi
\, | \, \Delta_0 , F_l, C_{l+1} , ..., F_{n-1}, C_{n} , F_n,
\Delta_n
\big)
\\[10pt]
\nonumber
= & \,
\mathds{1}_{|F_n|\geq \psi}
\mathbb{P} \big(
C_{n+1} \notin F_n
\, | \, F_n, \Delta_n
\big)
\\[10pt]
\nonumber
= & \,
\mathds{1}_{|F_n|\geq \psi}
\Big( 1- \frac{|F_n|}{|\Delta_n^R|}
\Big)
\\[1pt]
\nonumber
\leq & \,
1- \frac{\psi}{|\Delta_n^R|}
\\[1pt]
\label{Ineq_Prob}
\leq & \,
1- \frac{\psi}{|\Delta_0^R|+n V(2R)}.
\end{align}
The two equalities come from the fact that,
conditionally on $\Delta_n$, $C_{n+1}$ is sampled uniformly
from $\Delta_n^R$, independently of the past. The last
inequality comes from (\ref{Dyn_Clust}):
\begin{equation*}
\Delta_{n}=\Delta_0 \cup
\bigcup_{
\substack{
1\leq k \leq n,\\
\varepsilon_{k}=1 }
}
B(C_{k},R).
\end{equation*}
In particular, it implies that
\begin{align*}
|\Delta_n^R|
\leq &
|\Delta_0^R|
+ |B(C_1,R)^R|
+ \dots
+ |B(C_n,R)^R|
\\
\leq &
|\Delta_0^R|
+ n \, V(2R).
\end{align*}
Putting inequality (\ref{Ineq_Prob}) into (\ref{Recurr_Prob}),
we obtain the following upper bound:
\begin{align}
\nonumber
& \,
\mathbb{P} \Big(
\bigcap_{j=l}^{n} \{C_{j+1} \notin F_j,
|F_j|\geq \psi \}
\, | \, \Delta_0
\Big)
\\
\label{Recurr_Prob_2}
\leq & \,
\Big(
1- \frac{\psi}{|\Delta_0^R|+n V(2R)}
\Big)
\mathbb{P} \Big(
\bigcap_{j=l}^{n-1} \{C_{j+1} \notin F_j,
|F_j|\geq \psi \}
\, | \, \Delta_0
\Big).
\end{align}
Inequality (\ref{Recurr_Prob_2}) provides a recurrence
relation, which we can iterate to obtain
\begin{align*}
\mathbb{P} \big(
\bigcap_{j=l}^{n} \{C_{j+1} \notin F_j,
|F_j|\geq \psi \}
\, | \, \Delta_0
\big)
\leq \,
\prod_{j=l}^n
\Big(1- \frac{\psi}{|\Delta_0^R|+j V(2R)}\Big) \,.
\end{align*}
Taking expectations and then letting $n\rightarrow \infty$, we obtain
\begin{align*}
\mathbb{P} \big(
\bigcap_{j=l}^{\infty} \{C_{j+1} \notin F_j,
|F_j|\geq \psi \}
\big)
\leq & \,
\lim_{n \rightarrow \infty}
\mathbb{E} \, \big[
\prod_{j=l}^n
\big(1- \frac{\psi}{|\Delta_0^R|+j V(2R)}\big) \,
\big]
\\
\leq & \,
\mathbb{E} \, \big[
\prod_{j=l}^\infty
\big(1- \frac{\psi}{|\Delta_0^R|+j V(2R)}\big) \,
\big].
\end{align*}
We rewrite the infinite random product using logarithms:
\begin{align*}
& \,
\prod_{j=l}^\infty
\big(1- \frac{\psi}{|\Delta_0^R|+j V(2R)}\big)
\\
= & \,
\exp \Big(
\sum_{j=l}^{\infty}
\log
\big(1- \frac{\psi}{|\Delta_0^R|+j V(2R)}\big)
\Big).
\end{align*}
After observing that
\begin{align*}
& \,
\log
\big(1- \frac{\psi}{|\Delta_0^R|+j V(2R)}\big)
\mathop{\sim}_{j \rightarrow \infty}^{a.s.} \,
\frac{-\psi/V(2R)}{j},
\end{align*}
we conclude that the infinite product is almost surely equal
to $0$. Since $Y_0$, and hence $\Delta_0$, is deterministic, it follows that
\begin{align*}
\mathbb{P} \big(
\bigcap_{j=l}^{\infty} \{C_{j+1} \notin F_j,
|F_j|\geq \psi \}
\big)
=0.
\end{align*}
\end{proof}
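The divergence argument above is easy to sanity-check numerically (this is an illustration only, not part of the proof): the partial products $\prod_{j=l}^{n}\big(1-\psi/(D+jV)\big)$ decay like $n^{-\psi/V}$, since the log-sum behaves like a harmonic series. The constants $D$, $V$ and $\psi$ below stand in for $|\Delta_0^R|$, $V(2R)$ and $\psi$, and are arbitrary illustrative choices.

```python
import math

# Illustrative constants: D plays the role of |Delta_0^R|, V of V(2R).
D, V, psi, l = 5.0, 2.0, 1.0, 0

def partial_product(n):
    """Product of (1 - psi / (D + j*V)) for j = l..n, computed in log scale."""
    log_p = sum(math.log1p(-psi / (D + j * V)) for j in range(l, n + 1))
    return math.exp(log_p)

# The log-sum behaves like -(psi/V) * log(n), so the product ~ n^(-psi/V).
for n in (10, 100, 1000, 10000):
    print(n, partial_product(n))
```

The printed values decrease towards $0$ as $n$ grows, consistent with the almost sure vanishing of the infinite product.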
\begin{Remark}
\label{Rmk_Delta0_Random}
In the case where $Y_0$ is taken to be random, a
sufficient condition for
the expectation of the infinite product to also be
equal to $0$ is simply $\mathbb{E}(|\Delta_0|)<\infty$, that is, the volume of the initial support has finite expectation.
\end{Remark}
\section{Proof of the theorems}
\label{Prf_Thrm}
\subsection{Proof of Proposition \ref{Conv_Delta_n}}
We proved in Proposition \ref{Prop_Finit_Many_Sampl} that with
probability one,
there are only finitely many sampling events. This means that there
exists an almost surely finite
random time $\kappa$ such that
\begin{align}
\forall n > \kappa, \quad \varepsilon_n=0.
\end{align}
We recall the dynamics of the cluster $\Delta_n$ described by
(\ref{Dyn_Clust}):
\begin{equation*}
\Delta_{n}=\Delta_0 \cup
\bigcup_{
\substack{
1\leq k \leq n,\\
\varepsilon_{k}=1 }
}
B(C_{k},R).
\end{equation*}
Therefore, if we define $B:=\Delta_{\kappa}$, we have
\begin{align}
\forall n > \kappa, \quad \Delta_n=B,
\end{align}
and the proof of Proposition \ref{Conv_Delta_n} is complete.\\
We can now generalise Proposition \ref{Conv_Delta_n}
by removing the technical condition on the starting
point and allowing $Y_0$
to be any function in $\mathcal{S}_c$.
\begin{Proposition}
\label{Conv_Delta_n_General}
Suppose $Y_0=f \in \mathcal{S}_c$.
Then, there exists an almost surely finite
random time $\kappa$ such that
\begin{align}
\forall n > \kappa, \quad \varepsilon_n=0.
\end{align}
Therefore, there exists an
almost surely bounded
random set $B \subset \mathbb{R}^d$ such that
\begin{align}
\forall n > \kappa, \quad \Delta_n=B.
\end{align}
\end{Proposition}
\begin{proof}
We proceed by coupling $Y$ with a Markov chain $\widetilde{Y}$
with the same transition probabilities,
but started from $\widetilde{Y}_0=a \, \delta_{B(C_0,r_0)}$, chosen so that
$Y_0 \leq \widetilde{Y}_0$.
We denote the initial conditions by
$\widetilde{Y}_0=\widetilde{f}$ and $Y_0=f$. We first build
$\widetilde{Y}$ as
described in
Definition \ref{Def_Yn}. We then use the sequences
$(\widetilde{C}_n)_{n
\geq 1}$ and $(\widetilde{V}_n)_{n \geq 1}$ that we used
to construct $\widetilde{Y}$ in the following way. First consider the
random sequence $Y'$
defined by $Y'_0=f$, and for $n \geq 0$,
\begin{align*}
Y'_{n+1}=Y'_n+U \, \delta_{B(\widetilde{C}_{n+1},R)} \,
\bigl( \mathds{1}_{ \{
\widetilde{V}_{n+1} \leq Y'_n(\widetilde{C}_{n+1})\} } -Y'_n \bigr).
\end{align*}
We can prove by induction that
\begin{equation}
\label{Monoton}
\forall n \geq 0, \, Y'_n \leq \widetilde{Y}_n.
\end{equation}
It clearly holds at $n=0$; for the inductive step, we
observe that if $Y'_n \leq \widetilde{Y}_n$, then
\begin{align*}
& \widetilde{Y}_{n+1}-{Y}_{n+1}'
\\
=&
\big(1-U \, \delta_{B(\widetilde{C}_{n+1},R)} \big)
\big(\widetilde{Y}_n-{Y}_n'\big)
\\
& \quad \quad \quad \quad
+ U \, \delta_{B(\widetilde{C}_{n+1},R)} \,
\bigl(
\mathds{1}_{ \{ \widetilde{V}_{n+1} \leq
\widetilde{Y}_n(\widetilde{C}_{n+1})\} }
- \mathds{1}_{ \{ \widetilde{V}_{n+1} \leq
{Y}_n'(\widetilde{C}_{n+1})\} }
\bigr)
\\
\geq & 0.
\end{align*}
We denote by
$\widetilde{\Delta}$ and
$\Delta'$
the respective sequences of supports, and in particular we have
proved that
\begin{equation}
\forall n \geq 0, \, \Delta'_n \subset \widetilde{\Delta}_n.
\end{equation}
We define
the sequence of $\big(\sigma(\widetilde{\mathcal{P}}_n,f)\big)_{n
\geq 0}$-stopping
times $(J_n)_{n \geq 0}$ by setting
\begin{align}
\label{Incl_Deltas}
\left\{
\begin{aligned}
& J_0=0,
\\
& J_{n+1}=\inf \{k > J_n: \widetilde{C}_k \in
\big(\Delta'_{k-1}\big)^{R}\}.
\end{aligned}
\right.
\end{align}
We now construct $(Y_n)_{n \geq 0}$,
$(C_n)_{n \geq 1}$
and $(V_n)_{n \geq 1}$ by taking
\begin{align*}
\left\{
\begin{aligned}
& Y_n:=Y'_{J_n}, \, n \geq 0
\\
& C_n:=\widetilde{C}_{J_n}, \, n \geq 1
\\
& V_n:=\widetilde{V}_{J_n}, \, n \geq 1.
\end{aligned}
\right.
\end{align*}
We denote by $\Delta_n$ the support of
$Y_n$, and we define
the filtration $(\mathcal{P}_n)_{n \geq 0}$ to be
\begin{align*}
\left\{
\begin{aligned}
&
\mathcal{P}_0:=\sigma(Y_0),\\
&
\mathcal{P}_n:=
\sigma(C_{1},\dots,C_{n},
V_{1},\dots,V_{n},
Y_0,\dots,Y_n).
\end{aligned}
\right.
\end{align*}
By construction, conditionally on
$\mathcal{P}_{n}$, $C_{n+1}$ is distributed
uniformly on
$\Delta_{n}^{R}$. Because $V_{n+1}$ is
independent of
$\mathcal{P}_{n}$,
we conclude that the law of $Y$ is the one given in
Definition
\ref{Def_Yn}.
Using (\ref{Monoton}), we see that
\begin{align}
\label{Comp_Y_Ytilde}
Y_n=Y'_{J_n} \leq \widetilde{Y}_{J_n}.
\end{align}
We introduce
\begin{align*}
\left\{
\begin{aligned}
&
\widetilde{\varepsilon}_{n+1}:=\mathds{1}_{\widetilde{Y}_n(\widetilde{C}_{n+1}) \geq
\widetilde{V}_{n+1}},
\\
&
\varepsilon_{n+1}:=\mathds{1}_{Y_n(C_{n+1}) \geq
V_{n+1}}.
\end{aligned}
\right.
\end{align*}
Because $\widetilde{f}=a\, \delta_{B(C_0,r_0)}$, we can use Proposition
\ref{Conv_Delta_n}, and
we obtain that there exists $\widetilde{\kappa}$ almost surely finite such that
\begin{align}
\forall n > \widetilde{\kappa}, \quad \widetilde{\varepsilon}_n=0.
\end{align}
In particular, this implies that there exists
$\kappa$ almost surely finite such that
\begin{align}
\forall n > \kappa, \quad \widetilde{\varepsilon}_{J_n}=0.
\end{align}
Combined with (\ref{Comp_Y_Ytilde}), this implies that
\begin{align}
\forall n > \kappa, \quad \varepsilon_{n}=0,
\end{align}
and the conclusion follows.
\end{proof}
\subsection{The continuous time process is non-explosive}
We now construct explicitly the process $(X_t)_{t \geq 0}$
with generator (\ref{Gener_ModSLFV_Red}) as a continuous time Markov chain,
using $(Y_n)_{n \geq 0}$ as the embedded Markov chain.
\begin{Definition}
Consider an i.i.d. sequence
$(E_1,E_2,\dots )$ of Exp(1) random variables. We define the
\textsl{jump times}
$(T_0,T_1,\dots )$ by setting $T_0=0$ and
\begin{align}
\label{Def_Times_Tn}
T_n=\frac{E_1}{\lambda(Y_0)}+
\dots+\frac{E_n}{\lambda(Y_{n-1})}, \quad n \geq 1,
\end{align}
where $\lambda(f):=|(\text{Supp}(f))^R|$ for $f \in \mathcal{S}_c$.\\
We can then define a stochastic process
$(X_t)_{t\geq0}$ by setting
\begin{align}
\label{Def_Xt}
\forall n \geq 0, \, \forall t \in [T_n,T_{n+1}),
\quad
X_t&=Y_n.
\end{align}
\end{Definition}
We recall that the set $(\text{Supp}(f))^R$ is the $R$-expansion
of the support of $f$, that is, the set of points at distance less than $R$
from the support of $f$. The quantity $\lambda(f)$ is its volume, and
it is the rate at which the process $(X_t)_{t \geq 0}$ jumps out of the state $f$.
This will be verified in the following proposition
by checking that we have the correct generator.
\begin{Proposition}
\label{CTMC}
The process
$(X_t)_{t\geq0}$ constructed in (\ref{Def_Xt}) is a non-explosive
$\mathcal{S}_c$-valued continuous time Markov chain.
Moreover, its generator is given by (\ref{Gener_ModSLFV_Red}).
\end{Proposition}
\begin{proof}
The first thing to verify is that $X_t$ is really defined for
all nonnegative $t$.
This is equivalent to saying that
\begin{align*}
\mathbb{P} \big[ \,
T_n \rightarrow \infty
\text{ as }
n \rightarrow \infty
\, \big]=1,
\end{align*}
that is
\begin{align*}
\mathbb{P}\big[ \,
\sum_{n=1}^{\infty} \frac{E_n}{\lambda(Y_{n-1})}
=\infty
\, \big]=1.
\end{align*}
We proved in Proposition \ref{Conv_Delta_n}
that $(\Delta_n)_{n \geq 0}$ converges in a
finite number of steps to a bounded set $B$. This means that
almost
surely,
there is a random time $\kappa$ such that for all $n \geq
\kappa$,
\begin{align*}
\lambda(Y_n)=
|B^R|,
\end{align*}
which implies that
\begin{align*}
\sum_{n=\kappa+1}^{\infty}
\frac{E_{n}}{\lambda(Y_{n-1})}
= \
\frac{\displaystyle \sum_{n={\kappa+1}}^{\infty}
E_n}{\displaystyle
|B^R|}
= \ \infty \quad \text{a.s.}
\end{align*}
Hence $(X_t)_{t \geq 0}$ is a stochastic process
defined for all $t
\geq 0$.
The Markov property is clear from the construction, and this shows that
$(X_t)_{t \geq 0}$
is a non-explosive continuous time Markov chain.
We can then write the generator of $(X_t)_{t\geq 0}$ for functions
$G: \mathcal{S}_c \rightarrow \mathbb{R}$ as
\begin{align*}
\mathcal{L} G(f)=
\int_{(\text{Supp}(f))^R}
\int_0^1 \Big( G \big[
f+U \delta_{B(c,R)} (\mathds{1}_{v \leq f(c)} - f)
\big]
-G(f) \Big)
\, dv \, dc.
\end{align*}
If we take $G=I_n(\cdot,\psi)$ as defined in (\ref{Test_Func_Red}), the generator
of $(X_t)_{t\geq0}$ takes the form (\ref{Gener_ModSLFV_Red}).
\end{proof}
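The non-explosion mechanism above is easy to illustrate numerically (illustration only, not the construction of the actual chain): once the jump rate $\lambda(Y_n)$ freezes at $|B^R|$, the jump times $T_n$ accumulate i.i.d. $\mathrm{Exp}(1)$ increments divided by a constant rate and therefore grow linearly. The freezing index \texttt{kappa}, the frozen rate and the pre-freeze rate sequence below are arbitrary toy choices.

```python
import random

random.seed(0)

def jump_times(n_steps, rates):
    """T_n = sum_{k <= n} E_k / rates[k-1], with E_k ~ Exp(1)."""
    t, times = 0.0, [0.0]
    for k in range(n_steps):
        t += random.expovariate(1.0) / rates[k]
        times.append(t)
    return times

kappa, frozen_rate = 50, 3.0
# Toy rates: larger before the (illustrative) freezing time kappa, constant after.
rates = [frozen_rate + max(0, kappa - k) for k in range(10000)]
T = jump_times(10000, rates)
# After kappa the increments have mean 1/frozen_rate, so T_n grows like n/frozen_rate.
print(T[-1])
```

The final jump time grows without bound with the number of steps, so the sample paths are defined for all $t \geq 0$.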
\subsection{Proof of Theorem \ref{Main_Result}}
\label{Proof_Main_Result}
We have seen in Proposition \ref{CTMC}
that the process $(X_t)_{t\geq0}$ is a non-explosive
continuous
time Markov chain. Therefore, the trajectories of $(X_t)_{t\geq0}$
are completely described by its embedded Markov chain $(Y_n)_{n \geq 0}$.
In particular, for all $n \geq 0$, for all $t \in [T_n,T_{n+1})$,
$\text{Supp}(X_t)=\Delta_n$.
Using the result from Proposition \ref{Conv_Delta_n_General}
and the sequence of times $(T_n)_{n\geq0}$ defined in (\ref{Def_Times_Tn}),
there exists an almost surely bounded random set $B \subset \mathbb{R}^d$ and an
almost surely finite
random time
$T:=T_{\kappa}$ such that
\begin{equation}
\forall t > T, \,
\text{Supp}(X_t) = B \quad \text{a.s.}
\end{equation}
The second point to prove is the extinction of the population.
By Proposition \ref{Conv_Delta_n_General},
\begin{align}
\forall n > \kappa, \quad \varepsilon_n=0.
\end{align}
This implies that at every point $x$,
the frequency $(X_t(x))_{t \geq T}$ converges geometrically to zero,
which concludes the proof.
\section{Conclusion}
Although the S$\Lambda$FV process is constructed in great generality,
our study was restricted to the case where $R$ and $U$ are constant.
In the setting described in \cite{BARTON_ETHERIDGE_VEBER_2010}, these quantities
can be made random by adding extra dimensions to the space-time Poisson point process. We then
define $\Pi$ on the space $[0,\infty) \times \mathbb{R}^d \times [0,1] \times (0,\infty) \times [0,1]$,
with intensity $dt\otimes dc \otimes dv \otimes \zeta(dr,du)$, such that
$$
\int_{(0,\infty) \times [0,1]} u \, r^d \zeta(dr,du)< \infty.
$$
Our result corresponds to the case $\zeta(dr,du)=\delta_{(R,U)}(dr,du)$, in which the volume of $\Delta_n$ increases
at most linearly in $n$.
We could imagine extending the same result using
practically the same method to the case
where
\begin{align*}
\int_{(0,\infty) \times [0,1]} r^d \zeta(dr,du) < \infty,
\end{align*}
because the process still jumps at a finite rate, and the
volume of $\Delta_n$ is at most of order $n \, \mathbb{E}(R^d)$, where $R$
denotes the random radius. The difficulty is that random radii
make the construction of the embedded Markov chain more complicated.
We expect the result to remain true
in this case, but the proof becomes significantly more involved.\\
The situation where
\begin{align*}
\int_{(0,\infty) \times [0,1]} r^d \zeta(dr,du) = \infty
\end{align*}
is radically different, because the process now jumps at an infinite rate.
The problem is that we have no description of the geometry of
the process at time $t>0$: its behaviour is not obvious, and it cannot be
simulated. This remains an open question for us, which would certainly require
different techniques.
\end{document} |
\begin{document}
\title[The range of a Ces\`aro-like operator acting on $H^\infty$ ]
{Carleson measures and the range of a Ces\`aro-like operator acting on $H^\infty$}
\author{Guanlong Bao, Fangmei Sun and Hasi Wulan}
\address{Guanlong Bao\\
Department of Mathematics\\
Shantou University\\
Shantou, Guangdong 515063, China}
\email{[email protected]}
\address{Fangmei Sun\\
Department of Mathematics\\
Shantou University\\
Shantou, Guangdong 515063, China}
\email{[email protected]}
\address{Hasi Wulan\\
Department of Mathematics\\
Shantou University\\
Shantou, Guangdong 515063, China}
\email{[email protected]}
\thanks{The work was supported by NNSF of China (No. 11720101003) and NSF of Guangdong Province (No. 2022A1515012117).}
\subjclass[2010]{47B38, 30H05, 30H25, 30H35}
\keywords{Ces\`aro-like operator, Carleson measure, $H^\infty$, $BMOA$}
\begin{abstract}
In this paper, by describing characterizations of Carleson type measures on $[0,1)$, we determine the range of a Ces\`aro-like operator acting on $H^\infty$. A special case of our result gives an answer to a question posed by P. Galanopoulos, D. Girela and N. Merch\'an recently.
\end{abstract}
\maketitle
\section{Introduction}
Let $\D$ be the open unit disk in the complex plane $\c$. Denote by $H(\D)$ the space of functions analytic in $\D$.
For $f(z)=\sum_{n=0}^\infty a_nz^n$ in $H(\D)$, the Ces\`aro operator $\C$ is defined by
$$
\C (f)(z)=\sum_{n=0}^\infty\left(\f{1}{n+1}\sum_{k=0}^n a_k\right)z^n, \quad z\in\D.
$$
See \cite{BWY, DS, EX, M, S1, S2} for the investigation of the Ces\`aro operator acting on some analytic function spaces.
Recently, P. Galanopoulos, D. Girela and N. Merch\'an \cite{GGM} introduced a Ces\`aro-like operator $\Cu$ on $H(\D)$.
For nonnegative integer $n$, let $\mu_n$ be the moment of order $n$ of a finite positive Borel measure $\mu$ on $[0, 1)$; that is,
$$
\mu_n=\int_{[0, 1)} t^{n}d\mu(t).
$$
For $f(z)=\sum_{n=0}^\infty a_nz^n$ belonging to $H(\D)$, the Ces\`aro-like operator $\Cu$ is defined by
$$
\Cu (f)(z)=\sum^\infty_{n=0}\left(\mu_n\sum^n_{k=0}a_k\right)z^n, \quad z\in\D.
$$
If $d\mu(t)=dt$, then $\Cu=\C$. In \cite{GGM}, the authors studied the action of $\Cu$ on distinct spaces of analytic functions.
We also need to recall some function spaces. For $0<p<\infty$, $H^p$ denotes the classical Hardy space \cite{D} of those functions $f\in H(\D)$ for which
$$
\sup_{0<r<1} M_p(r, f)<\infty,
$$
where
$$
M_p(r, f)= \left(\f{1}{2\pi}\int_0^{2\pi}|f(re^{i\theta})|^p d\theta \right)^{1/p}.
$$
As usual, denote by $H^\infty$ the space of bounded analytic functions in $\D$. It is well known that $H^\infty$ is a proper subset of the Bloch space $\B$ which consists of those functions $f\in H(\D)$ satisfying
$$
\|f\|_\B=\sup_{z\in \D}(1-|z|^2)|f'(z)|<\infty.
$$
Denote by $\text{Aut}(\D)$ the group of M\"obius maps of $\D$, namely,
$$
\text{Aut}(\D)=\{e^{i\theta}\sigma_a:\ \ a\in \D \ \text{and} \ \theta \ \ \text{is real}\},
$$
where
$$
\sigma_a(z)=\frac{a-z}{1-\overline{a}z}, \qquad z\in \D.
$$
In 1995 R. Aulaskari, J. Xiao and R. Zhao \cite{AXZ} introduced $\qp$ spaces. For
$0\leq p<\infty$, a function $f$ analytic in $\D$ belongs to $\qp$ if
$$
\|f\|_{\mathcal{Q}_p}^2=\sup_{w\in \D} \int_\D |f'(z)|^2(1-|\sigma_w(z)|^2)^p dA(z)<\infty,
$$
where $dA$ is the area measure on $\c$ normalized so that $A(\D)=1$. $\qp$ spaces are M\"obius invariant in the sense that
$$
\|f\|_{\qp}=\|f\circ \phi\|_{\qp}
$$
for every $f\in \qp$ and $\phi \in \text{Aut}(\D)$. It was shown in \cite{X1} that $\Q_2$ coincides with the Bloch space $\B$.
This result was extended in \cite{AL} by showing that $\qp=\B$ for all $1<p<\infty$.
The space $\Q_1$ coincides with $BMOA$, the set of analytic functions in $\D$ with boundary values of bounded mean oscillation (see \cite{B, Gir}). The space $\Q_0$ is the Dirichlet space $\mathcal D$. For $0<p<1$, the space $\qp$ is a proper subset of $BMOA$ and has many interesting
properties. See J. Xiao's monographs \cite{X2, X3} for the theory of $\qp$ spaces.
For $1\leq p<\infty$ and $0<\alpha\leq 1$, the mean Lipschitz space $\Lambda^p_\alpha$ is the set of those functions $f\in H(\D)$ with a non-tangential limit almost everywhere such that $\omega_p(t, f)=O(t^\alpha)$ as $t\to 0$. Here $\omega_p(\cdot, f)$ is the integral modulus of continuity of order $p$ of the function $f(e^{i\theta})$. It is well known (cf.\cite[Chapter 5]{D}) that $\Lambda^p_\alpha$ is a subset of $H^p$ and $\Lambda^p_\alpha$ consists of those functions $f\in H(\D)$ satisfying
$$
\|f\|_{\Lambda^p_\alpha}=\sup_{0<r<1}(1-r)^{1-\alpha}M_p(r, f')<\infty.
$$
Among these spaces, the spaces $\Lambda^p_{1/p}$ are of special interest. $\Lambda^p_{1/p}$ spaces increase with $p\in (1, \infty)$ in the sense of inclusion and they are contained in $BMOA$ (cf. \cite{BSS}).
By Theorem 1.4 in \cite{ASX}, $\Lambda^p_{1/p}\subseteq \Q_q$ when $1\leq p<2/(1-q)$ and $0<q<1$. In particular, $\Lambda^2_{1/2}\subseteq \Q_q \subseteq \B$ for all $0<q<\infty$.
Given an arc $I$ of the unit circle $\T$ with arclength $|I|$ (normalized such that $|\T|=1$), the
Carleson box $S(I)$ is given by
$$
S(I)=\{r\zeta \in \D: 1-|I|<r<1, \ \zeta\in I\}.
$$
For $0<s<\infty$, a positive Borel measure $\nu$ on $\D$ is said to be an $s$-Carleson measure if
$$
\sup_{I\subseteq\T}\frac{\nu(S(I))}{|I|^s}<\infty.
$$
If $\nu$ is a $1$-Carleson measure, we simply say that $\nu$ is a Carleson measure; these are precisely the measures for which $H^p\subseteq L^p(d\nu)$ (cf. \cite{D}).
A positive Borel measure $\mu$ on [0, 1) can be seen as a Borel measure on $\D$ by identifying it
with the measure $\tilde{\mu}$ defined by
$$
\tilde{\mu}(E)=\mu(E \cap [0, 1)),
$$
for any Borel subset $E$ of $\D$. Thus $\mu$ is an $s$-Carleson measure on $[0,1)$ if and only if there is a positive constant $C$ such that
$$
\mu([t, 1)) \leq C (1-t)^s
$$
for all $t\in [0, 1)$. We refer to \cite{BYZ} for the investigation of this kind of measures associated with Hankel measures.
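For instance (an illustrative computation), for the model measure $d\mu(t)=(1-t)^{s-1}\,dt$ with $s>0$, one checks directly that

```latex
\mu([t,1))=\int_t^1 (1-x)^{s-1}\,dx=\frac{(1-t)^s}{s},
\qquad t\in[0,1),
```

so this measure is an $s$-Carleson measure; it reappears as $\mu_1$ in Remark 1 of Section 2.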
It is known that the Ces\`aro operator $\C$ is bounded on $H^p$ for all $0<p<\infty$ (cf. \cite{M, S1, S2}), but this is not true on $H^\infty$. In fact, N. Danikas and A. Siskakis \cite{DS} showed that
$\C(H^\infty)\nsubseteq H^\infty$ but $\C(H^\infty)\subseteq BMOA$. Later, M. Ess\'en and J. Xiao \cite{EX} proved that $\C(H^\infty)\subsetneqq \qp$ for $0<p<1$. Recently, the relation between $\C(H^\infty)$ and a class of M\"obius invariant function spaces was considered in \cite{BWY}.
It is quite natural to study $\Cu(H^\infty)$. In \cite{GGM} the authors characterized positive Borel measures $\mu$ such that $\Cu(H^\infty)\subseteq H^\infty$ and proved that $\Cu(H^\infty)\subseteq \B$ if and only if $\mu$ is a Carleson measure. Moreover, they showed that if $\Cu(H^\infty)\subseteq BMOA$, then $\mu$ is a Carleson measure. In \cite[p. 20]{GGM}, the authors asked whether or not $\mu$ being a Carleson measure implies that $\Cu(H^\infty)\subseteq BMOA$. In this paper,
by giving some descriptions of
$s$-Carleson measures on $[0,1)$,
we show that, for $0<p<2$, $\Cu(H^\infty)\subseteq \qp$ if and only if $\mu$ is a Carleson measure, which gives an affirmative answer to their question. We also consider another Ces\`aro-like operator $\C_{\mu, s}$ and describe the embedding $\Cus(H^\infty)\subseteq X$ in terms of $s$-Carleson measures, where $X$ is any space
between $\Lambda^p_{1/p}$ and $\B$ for $\max\{1, 1/s\}<p<\infty$.
Throughout this paper, the symbol $A\thickapprox B$ means that $A\lesssim
B\lesssim A$. We say that $A\lesssim B$ if there exists a positive
constant $C$ such that $A\leq CB$.
\section{Positive Borel measures on [0, 1) as Carleson type measures }
In this section, we give some characterizations of positive Borel measures on [0, 1) as Carleson type measures.
The following description of Carleson type measures (cf. \cite{Bla}) is well known.
\begin{otherl}\label{S-CM}
Suppose $s>0$, $t>0$ and $\mu$ is a positive Borel measure on $\D$. Then $\mu$ is an $s$-Carleson measure if and only if
\begin{equation}\label{sCMformula}
\sup_{a\in \D}\int_{\D} \frac{(1-|a|^2)^t}{|1-\overline{a}w|^{s+t}}d\mu(w)<\infty.
\end{equation}
\end{otherl}
For Carleson type measures on [0, 1), we can obtain some descriptions that are different from Lemma \ref{S-CM}. Now we give the first main result in this section.
\begin{prop}\label{newCM1}
Suppose $0<t<\infty$, $0\leq r<s<\infty$ and $\mu$ is a finite positive Borel measure on $[0,1)$. Then the following conditions are equivalent:
\begin{enumerate}
\item [(i)] $\mu$ is an $s$-Carleson measure;
\item [(ii)] \begin{equation}\label{1formulaCM}
\sup_{a\in\D}\int_{[0,1)}\frac{(1-|a|)^t}{(1-x)^{r}(1-|a|x)^{s+t-r}}d\mu(x)<\infty;
\end{equation}
\item [(iii)] \begin{equation}\label{2formulaCM}
\sup_{a\in\D}\int_{[0,1)}\frac{(1-|a|)^t}{(1-x)^{r}|1-ax|^{s+t-r}}d\mu(x)<\infty.
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
$(i)\Rightarrow (ii)$. Let $\mu$ be an $s$-Carleson measure. Fix $a\in \D$ with $|a|\leq 1/2$. If $r=0$, the desired result holds. For $0<r<s$, using a well-known formula for the distribution function (cf. \cite[p.~20]{Gar}), we get
\begin{align}\label{bu1}
&\int_{[0,1)}\frac{(1-|a|)^t}{(1-x)^{r}(1-|a|x)^{s+t-r}}d\mu(x)\nonumber \\
\thickapprox & \int_{[0,1)}\left(\frac{1}{1-x}\right)^rd\mu(x) \nonumber \\
\thickapprox & r \int_0^\infty \lambda^{r-1} \mu(\{x\in [0, 1): 1-\frac{1}{\lambda}<x \})d\lambda \nonumber \\
\lesssim & \int_0^1 \lambda^{r-1} \mu([0, 1))d\lambda + \int_1^\infty \lambda^{r-1} \mu([1-\frac{1}{\lambda}, 1))d\lambda \nonumber \\
\lesssim & 1+ \int_1^\infty \lambda^{r-s-1} d\lambda \lesssim1.
\end{align}
Fix $a\in\D$ with $|a|>1/2$ and let
\begin{align*}
S_n(a)=\{x\in[0,1): 1-2^n(1-|a|)\leq x<1\}, \ \ n=1, 2, \cdots .
\end{align*}
Let $n_a$ be the minimal integer such that $1-2^{n_a}(1-|a|)\leq 0$. Then $S_n(a)=[0, 1)$ when $n\geq n_a$.
If $ x\in S_1(a)$, then
\begin{equation}\label{301}
1-|a| \leq 1-|a|x.
\end{equation}
Also, for $2\leq n\leq n_a$ and $x\in S_n(a)\backslash S_{n-1}(a)$, we have
\begin{equation}\label{302}
1-|a|x \geq |a|-x \geq |a|-(1-2^{n-1}(1-|a|))=(2^{n-1}-1)(1-|a|).
\end{equation}
We write
\begin{align*}
& \int_{[0,1)}\frac{(1-|a|)^t}{(1-x)^r(1-|a|x)^{s+t-r}}d\mu(x)\\
=&\int_{S_1(a)}\frac{(1-|a|)^t}{(1-x)^r(1-|a|x)^{s+t-r}}d\mu(x)\\
&+\sum^{n_a}_{n=2}\int_{S_n(a)\backslash S_{n-1}(a)}\frac{(1-|a|)^t}{(1-x)^r(1-|a|x)^{s+t-r}}d\mu(x)\\
=: &J_1(a)+J_2(a).
\end{align*}
If $r=0$, bearing in mind (\ref{301}), (\ref{302}) and the fact that $\mu$ is an $s$-Carleson measure, it is easy to check that $J_i(a)\lesssim 1$ for $i=1, 2$. Now consider $0<t<\infty$ and $0< r<s<\infty$. Using (\ref{301}) and some estimates similar to (\ref{bu1}), we have
\begin{align*}
J_1(a) \lesssim (1-|a|)^{r-s}\int_{S_1(a)}\left(\frac{1}{1-x}\right)^rd\mu(x)
\lesssim 1.
\end{align*}
Recall (\ref{302}), the assumptions $0<t<\infty$ and $0< r<s<\infty$, and the fact that $\mu$ is an $s$-Carleson measure.
Then
{\small {\small
\begin{align*}
&J_2(a)\\
\lesssim& \sum^{n_a}_{n=2}\frac{(1-|a|)^{r-s}}{2^{n(s+t-r)}}\int_{S_n(a)\backslash S_{n-1}(a)}\left(\frac{1}{1-x}\right)^rd\mu(x)\\
\lesssim& \sum^{n_a}_{n=2}\frac{(1-|a|)^{r-s}}{2^{n(s+t-r)}}\int_0^\infty\lambda^{r-1}\mu\big(\big\{x\in[1-2^n(1-|a|),1): 1-\frac{1}{\lambda}<x\big\}\big)d\lambda\\
\thickapprox& \sum^{n_a}_{n=2}\frac{(1-|a|)^{r-s}}{2^{n(s+t-r)}}\bigg(\int_0^{\frac{1}{2^n(1-|a|)}}\lambda^{r-1}\mu\big([1-2^n(1-|a|),1)\big)d\lambda\\
&+\int_{\frac{1}{2^n(1-|a|)}}^\infty\lambda^{r-1}\mu\big(\big[1-\frac{1}{\lambda},1\big)\big)d\lambda\bigg)\\
\lesssim& \sum^{n_a}_{n=2}\frac{(1-|a|)^{r-s}}{2^{n(s+t-r)}}\bigg(2^{ns}(1-|a|)^s\int_0^{\frac{1}{2^n(1-|a|)}}\lambda^{r-1}d\lambda
+\int_{\frac{1}{2^n(1-|a|)}}^\infty\lambda^{r-1-s}d\lambda\bigg)\\
\thickapprox& \sum^{n_a}_{n=2} \frac{1}{2^{tn}}<\infty.
\end{align*}}}
Consequently,
$$
\sup_{a\in \D}\int_{[0,1)}\frac{(1-|a|)^t}{(1-x)^r(1-|a|x)^{s+t-r}}d\mu(x)<\infty.
$$
The implication $(ii)\Rightarrow (iii)$ is clear.
$(iii)\Rightarrow (i)$. For $r\geq 0$, it is clear that
\begin{align*}
\int_{[0,1)}\frac{(1-|a|)^t}{(1-x)^{r}|1-ax|^{s+t-r}}d\mu(x)\geq \int_{[0,1)}\frac{(1-|a|)^t}{|1-ax|^{s+t}}d\mu(x)
\end{align*}
for all $a\in \D$. Combining this with Lemma \ref{S-CM}, we see that if (\ref{2formulaCM}) holds, then $\mu$ is an $s$-Carleson measure.
\end{proof}
\noindent {\bf Remark 1.}\ \ The condition $0\leq r<s<\infty$ in Proposition \ref{newCM1} cannot be relaxed to $r\geq s>0$. For example, let $d\mu_1(x)=(1-x)^{s-1}dx$, $x\in [0, 1)$. Then $\mu_1$ is an $s$-Carleson measure, but for $r\geq s>0$,
\begin{align*}
&\sup_{a\in\D}\int_{[0,1)}\frac{(1-|a|)^t}{(1-x)^{r}|1-ax|^{s+t-r}}d\mu_1(x)\\
\geq & \int_0^1 (1-x)^{s-1-r}dx=+\infty.
\end{align*}
\noindent {\bf Remark 2.}\ \ The assumption that $\mu$ is supported on $[0,1)$ is essential in Proposition \ref{newCM1}. For example, consider $0<t<1$, $0< r<s<1$ and $s=r+t$. Set $d\mu_2(w)=|f'(w)|^2(1-|w|^2)^sdA(w)$, $w\in \D$, where $f\in \Q_s\setminus \Q_t$. Note that for $0<p<\infty$ and $g\in H(\D)$, $|g'(w)|^2(1-|w|^2)^pdA(w)$ is a $p$-Carleson measure if and only if $g\in\Q_p$ (cf. \cite{X2}). Hence $\mu_2$ is an $s$-Carleson measure. But
\begin{align*}
&\sup_{a\in\D}\int_{\D}\frac{(1-|a|)^t}{(1-|w|)^{r}|1-a\overline{w}|^{s+t-r}}d\mu_2(w)\\
&=\sup_{a\in\D}\int_{\D}|f'(w)|^2\frac{(1-|a|)^t(1-|w|)^{s-r}}{|1-a\overline{w}|^{s+t-r}}dA(w)\\
&\thickapprox\sup_{a\in\D}\int_{\D}|f'(w)|^2(1-|\sigma_a(w)|^2)^tdA(w)=+\infty.
\end{align*}
Before giving the other characterization of Carleson type measures on $[0,1)$, we need to recall some results.
The following result is Lemma 1 in \cite{Mer}, which generalizes Lemma 3.1 in \cite{GM} from $p=2$ to $1<p<\infty$.
\begin{otherl}\label{inter labda B}
Let $f\in H(\D)$ with $f(z)=\sum^\infty_{n=0}a_n z^n$. Suppose $1<p<\infty$ and the sequence $\{a_n\}$ is a decreasing sequence of nonnegative numbers. If $X$ is a subspace of $H(\D)$ with $\Lambda^p_{1/p}\subseteq X\subseteq\B$, then
$$
f\in X \iff a_n=O\left(\frac1n\right).
$$
\end{otherl}
We recall a characterization of $s$-Carleson measures $\mu$ on $[0,1)$ as follows (cf. \cite[Theorem 2.1]{BW} or \cite[Proposition 1]{CGP}).
\begin{otherp}\label{u n^s}
Let $\mu$ be a finite positive Borel measure on [0, 1) and $s>0$. Then $\mu$ is an $s$-Carleson measure if and only if the sequence of moments
$\{\mu_n\}_{n=0}^\infty$ satisfies $\sup_{n\geq 0} (1+n)^s \mu_n<\infty$.
\end{otherp}
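Proposition \ref{u n^s} can be sanity-checked numerically for the model measure $d\mu(t)=(1-t)^{s-1}dt$, whose moments are Beta integrals $\mu_n=B(n+1,s)=\Gamma(n+1)\Gamma(s)/\Gamma(n+1+s)\thickapprox \Gamma(s)(n+1)^{-s}$. The short script below (an illustration only, with $s=1/2$ chosen arbitrarily) confirms that $(1+n)^s\mu_n$ stays bounded.

```python
from math import lgamma, exp

def moment(n, s):
    """mu_n = Beta(n+1, s) for d mu(t) = (1-t)^(s-1) dt on [0, 1)."""
    return exp(lgamma(n + 1) + lgamma(s) - lgamma(n + 1 + s))

s = 0.5
vals = [(1 + n) ** s * moment(n, s) for n in (1, 10, 100, 1000, 10000)]
print(vals)
# (1+n)^s * mu_n tends to Gamma(s) = sqrt(pi) as n grows, so the sup is finite.
```

The sequence stabilises near $\Gamma(1/2)=\sqrt{\pi}$, in agreement with the Stirling estimate used in the proof of Proposition \ref{newCM2}.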
The following characterization of functions with nonnegative Taylor coefficients in $\Q_p$ is Theorem 2.3 in \cite{AGW}.
\begin{otherth}\label{Qp coeff}
Let $0<p<\infty$ and let $f(z)=\sum_{n=0}^\infty a_nz^n$ be an analytic function in $\D$ with $a_n\geq 0$ for all $n$. Then
$f\in \Q_p$ if and only if
$$
\sup_{0\leq r<1} \sum_{n=0}^\infty \f{(1-r)^p}{(n+1)^{p+1}}\left(\sum_{k=0}^n(k+1)a_{k+1}(n-k+1)^{p-1}r^{n-k}\right)^2<\infty.
$$
\end{otherth}
We need the following well-known estimates (cf. \cite[Lemma 3.10]{Zhu}).
\begin{otherl}\label{useful estimates}
Let $\beta$ be any real number. Then
$$
\int^{2\pi}_0\frac{d\theta}{|1-ze^{-i\theta}|^{1+\beta}}\thickapprox
\begin{cases}1 & \enspace \text{if} \ \ \beta<0,\\
\log\frac{2}{1-|z|^2} & \enspace \text{if} \ \ \beta=0,\\
\frac{1}{(1-|z|^2)^\beta} & \enspace \text{if}\ \ \beta>0,
\end{cases}
$$
for all $z\in \D$.
\end{otherl}
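The $\beta>0$ case of the lemma can be checked numerically; for $\beta=1$ the integral is $2\pi/(1-|z|^2)$ exactly, by the normalisation of the Poisson kernel. The sketch below (an illustration only; the grid size is an arbitrary choice) verifies this growth rate for real $z=r$.

```python
import numpy as np

def circle_integral(r, beta, m=200000):
    """Approximate the integral of |1 - r e^{-i theta}|^{-(1+beta)} over [0, 2pi)."""
    theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    return np.mean(np.abs(1.0 - r * np.exp(-1j * theta)) ** (-(1.0 + beta))) * 2.0 * np.pi

beta = 1.0
for r in (0.9, 0.99, 0.999):
    ratio = circle_integral(r, beta) * (1.0 - r * r) ** beta
    print(r, ratio)
# The ratios stabilise at 2*pi, matching the (1-|z|^2)^{-beta} growth.
```

For $\beta \neq 1$ the same script exhibits a bounded, nonvanishing ratio, which is all that the asymptotic notation $\thickapprox$ asserts.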
For $0<s<\infty$ and a finite positive Borel
measure $\mu$ on $[0, 1)$, set
$$
f_{\mu, s}(z)=\sum_{n=0}^\infty \frac{\Gamma(n+s)}{\Gamma(s)n!} \mu_n z^n, \ \ z\in \D.
$$
Now we state the other main result in this section which is inspired by Lemma \ref{inter labda B} and Proposition \ref{u n^s}.
\begin{prop} \label{newCM2}
Suppose $0<s<\infty$ and $\mu$ is a finite positive Borel
measure on $[0, 1)$. Let $1<p<\infty$ and let $X$ be a subspace of $H(\D)$ with $\Lambda^p_{1/p}\subseteq X\subseteq\B$.
Then $\mu$ is an $s$-Carleson measure if and only if
$f_{\mu, s}\in X$.
\end{prop}
\begin{proof}
Let $\mu$ be an $s$-Carleson measure. Clearly,
$$
f_{\mu, s}(z)=\int_{[0, 1)} \frac{1}{(1-tz)^s}d\mu(t)
$$
for any $z\in \D$.
For $p>1$, it follows from the Minkowski inequality and Lemma \ref{useful estimates} that
\begin{align*}
M_p(r, f'_{\mu, s})\leq& s \left(\frac{1}{2\pi}\int_0^{2\pi} \left(\int_{[0, 1)}\frac{1}{|1-tre^{i\theta}|^{s+1}}d\mu(t)\right)^pd\theta\right)^{1/p}\\
\leq & s \int_{[0, 1)} \left( \frac{1}{2\pi}\int_0^{2\pi} \frac{1}{|1-tre^{i\theta}|^{(s+1)p}}d\theta \right)^{1/p} d\mu(t)\\
\lesssim& \int_{[0, 1)} \frac{1}{(1-tr)^{s+1-\frac{1}{p}}} d\mu(t)
\end{align*}
for all $0<r<1$. Combining this with Proposition \ref{newCM1}, we get $f_{\mu, s}\in \Lambda^p_{1/p}$ and hence $f_{\mu, s}\in X$.
On the other hand, let $f_{\mu, s}\in X$. Since $X\subseteq\B$ and $\B=\Q_q$ for any $q>1$, we have $f_{\mu, s}\in \Q_q$ with $q>1$. By the Stirling formula,
$$
\frac{\Gamma(n+s)}{\Gamma(s)n!}\thickapprox (n+1)^{s-1}
$$
for all nonnegative integers $n$. Consequently, by Theorem \ref{Qp coeff} we deduce
{\small
\begin{eqnarray*}
\infty&>&\sum_{n=0}^\infty \f{(1-r)^q}{(n+1)^{q+1}}\left(\sum_{k=0}^n(k+2)^{s}\mu_{k+1}(n-k+1)^{q-1}r^{n-k}\right)^2\\
&\gtrsim& \sum_{n=0}^\infty \f{(1-r)^q}{(4n+1)^{q+1}}\left(\sum_{k=0}^{4n}(k+2)^{s}\mu_{k+1}(4n-k+1)^{q-1}r^{4n-k}\right)^2\\
&\gtrsim& \sum_{n=0}^\infty \f{(1-r)^q}{(4n+1)^{q+1}}\left(\sum_{k=n}^{2n}(k+2)^{s}\int_r^1 t^{k+1}d\mu(t) (4n-k+1)^{q-1}r^{4n-k}\right)^2\\
&\gtrsim& \mu^2([r, 1)) (1-r)^q\sum_{n=0}^\infty \f{r^{8n+2}}{(4n+1)^{q+1}}\left(\sum_{k=n}^{2n}(k+2)^{s}(4n-k+1)^{q-1} \right)^2\\
&\gtrsim&\mu^2([r, 1)) (1-r)^q \sum_{n=0}^\infty (4n+2)^{2s+q-1} r^{8n+2}\\
&\thickapprox& \f{\mu^2([r, 1))}{(1-r)^{2s}}
\end{eqnarray*}
}
for all $r\in [0, 1)$, which yields that $\mu$ is an $s$-Carleson measure. The proof is complete.
\end{proof}
\section{$\qp$ spaces and the range of $\Cu$ acting on $H^\infty$}
In this section, we characterize finite positive Borel measures $\mu$ on $[0,1)$ such that $\C_\mu(H^\infty)\subseteq \qp$ for $0<p<2$. Descriptions of Carleson measures in Proposition \ref{newCM1} play a key role in our proof.
The following lemma is from \cite{OF}.
\begin{otherl}\label{estiamtes}
Suppose $s>-1$, $r>0$, $t>0$ with $r+t-s-2>0$. If $r$, $t<2+s$, then
$$
\int_\D \frac{(1-|z|^2)^s}{|1-\overline{a}z|^r|1-\overline{b}z|^t}dA(z)\lesssim \frac{1}{|1-\overline{a}b|^{r+t-s-2}}
$$
for all $a$, $b\in \D$. If $t<2+s<r$, then
$$
\int_\D \frac{(1-|z|^2)^s}{|1-\overline{a}z|^r|1-\overline{b}z|^t}dA(z)\lesssim \frac{(1-|a|^2)^{2+s-r}}{|1-\overline{a}b|^{t}}
$$
for all $a$, $b\in \D$.
\end{otherl}
We can now state our result.
\begin{theor}\label{1main}
Suppose $0<p<2$ and $\mu$ is a finite positive Borel measure on $[0,1)$. Then $\C_\mu(H^\infty)\subseteq \qp$ if and only if $\mu$ is a Carleson measure.
\end{theor}
\begin{proof}
Suppose $\C_\mu(H^\infty)\subseteq \qp$. Then $\C_\mu(H^\infty)$ is a subset of the Bloch space. By \cite[Theorem 5]{GGM}, $\mu$ is a Carleson measure.
Conversely, suppose $\mu$ is a Carleson measure and $f\in H^\infty$. Then $f$ is also in the Bloch space $\B$. From Proposition 1 in \cite{GGM},
$$
\C_{\mu}(f)(z)=\int_{[0, 1)} \frac{f(tz)}{1-tz}d\mu(t), \ \ z\in \D.
$$
Hence
\begin{align}\label{31}
&\|\Cu (f)\|_{\qp} \nonumber \\
\lesssim & \sup_{a\in\D} \left(\int_{\D}\left(\int_{[0,1)}\frac{|tf'(tz)|}{|1-tz|}d\mu(t)\right)^2(1-|\sigma_a(z)|^2)^p dA(z)\right)^{\frac12} \nonumber \\
& +\sup_{a\in\D}\left(\int_{\D}\left( \int_{[0,1)}\frac{|tf(tz)|}{|1-tz|^2}d\mu(t)\right)^2(1-|\sigma_a(z)|^2)^p dA(z)\right)^{\frac12} \nonumber \\
\lesssim & \|f\|_\B \sup_{a\in\D} \left(\int_{\D}\left(\int_{[0,1)}\frac{1}{(1-|tz|)|1-tz|}d\mu(t)\right)^2(1-|\sigma_a(z)|^2)^p dA(z)\right)^{\frac12} \nonumber \\
& +\|f\|_{H^\infty}\sup_{a\in\D}\left(\int_{\D}\left( \int_{[0,1)}\frac{1}{|1-tz|^2}d\mu(t)\right)^2(1-|\sigma_a(z)|^2)^p dA(z)\right)^{\frac12}.
\end{align}
Let $c$ be a positive constant such that $2c<\min\{2-p, p\}$. Then
\begin{equation}\label{32}
(1-|tz|)^2\geq (1-t)^{2-2c} (1-|z|)^{2c}
\end{equation}
for all $t\in [0, 1)$ and all $z\in \D$.
By the Minkowski inequality, (\ref{32}), Lemma \ref{estiamtes} and Proposition \ref{newCM1}, we get
\begin{align}\label{33}
&\sup_{a\in\D} \left(\int_{\D}\left(\int_{[0,1)}\frac{1}{(1-|tz|)|1-tz|}d\mu(t)\right)^2(1-|\sigma_a(z)|^2)^p dA(z)\right)^{\frac12} \nonumber \\
\leq&\sup_{a\in\D} \int_{[0,1)} \left( \int_{\D} \frac{1}{(1-|tz|)^2|1-tz|^2} (1-|\sigma_a(z)|^2)^p dA(z) \right)^{\frac12} d\mu(t)\nonumber \\
\lesssim &\sup_{a\in\D}(1-|a|^2)^{\frac p2}\int_{[0,1)}\frac{1}{(1-t)^{1-c}}\left(\int_{\D}\frac{(1-|z|^2)^{p-2c}}{|1-tz|^2|1-\bar{a}z|^{2p}} dA(z)\right)^{\frac12}d\mu(t) \nonumber \\
\lesssim &\sup_{a\in\D}\int_{[0,1)}\frac{(1-|a|^2)^{\frac p2}}{(1-t)^{1-c}|1-ta|^{\frac{p}{2}+c}}d\mu(t)<\infty.
\end{align}
Similarly, it follows from Lemma \ref{estiamtes} and Proposition \ref{newCM1} that
\begin{align}\label{34}
&\sup_{a\in\D}\left(\int_{\D}\left( \int_{[0,1)}\frac{1}{|1-tz|^2}d\mu(t)\right)^2(1-|\sigma_a(z)|^2)^p dA(z)\right)^{\frac12} \nonumber \\
\leq & \sup_{a\in\D} \int_{[0,1)} \left( \int_{\D} \frac{1}{|1-tz|^4} (1-|\sigma_a(z)|^2)^p dA(z)\right)^{\frac12} d\mu(t) \nonumber \\
\lesssim & \sup_{a\in\D}\int_{[0,1)}\frac{(1-|a|^2)^{\frac p2}}{(1-t^2)^{1-\frac p2}|1-at|^p}d\mu(t)<\infty.
\end{align}
From (\ref{31}), (\ref{33}) and (\ref{34}), we get that $\Cu (f)\in \qp$. The proof is complete.
\end{proof}
\noindent {\bf Remark 3.}\ \
Set $d\mu_0(x)=dx$ on $[0, 1)$. Then $d\mu_0$ is a Carleson measure and $\C_{\mu_0}(1)(z)=\frac{1}{z}\log\f{1}{1-z}$.
Clearly, the function $\C_{\mu_0}(1)$ is not in the Dirichlet space. Thus Theorem \ref{1main} does not hold when $p=0$.
Note that $\qp=\B$ for any $p>1$. Theorem \ref{1main} generalizes Theorem 5 in \cite{GGM} from the Bloch space $\B$ to all $\qp$ spaces. For $p=1$, Theorem \ref{1main} gives an answer to a question raised in \cite[p. 20]{GGM}.
The proof given here highlights the role of Proposition \ref{newCM1}. In the next section, we give a more general result where an alternative proof of Theorem \ref{1main} will be provided.
\section{$s$-Carleson measures and the range of another Ces\`aro-like operator acting on $H^\infty$ }
It is also natural to ask how the characterization of $s$-Carleson measures in Proposition \ref{newCM2} can be used in the investigation of the range of Ces\`aro-like operators acting on $H^\infty$. We consider this
topic for another kind of Ces\`aro-like operator.
Suppose $0<s<\infty$ and $\mu$ is a finite positive Borel measure on $[0,1)$.
For $f(z)=\sum_{n=0}^\infty a_nz^n$ in $H(\D)$, we define
$$
\C_{\mu, s} (f)(z)=\sum^\infty_{n=0}\left(\mu_n\sum^n_{k=0}\frac{\Gamma(n-k+s)}{\Gamma(s)(n-k)!}a_k\right)z^n, \quad z\in\D.
$$
Clearly, $\C_{\mu, 1}$ is equal to $\C_{\mu}$.
\begin{limma}\label{intergera repre}
Suppose $0<s<\infty$ and $\mu$ is a finite positive Borel measure on $[0,1)$. Then
$$
\C_{\mu, s} (f)(z)=\int_{[0,1)}\frac{f(tz)}{(1-tz)^s}d\mu(t)
$$
for $f\in H(\D)$.
\end{limma}
\begin{proof}
The proof follows from a simple calculation with power series. We omit it.
\end{proof}
We have the following result.
\begin{theor}\label{2main}
Suppose $0<s<\infty$ and $\mu$ is a finite positive Borel measure on $[0,1)$. Let $\max\{1, \f{1}{s}\}<p<\infty$ and let $X$ be a subspace of $H(\D)$ with $\Lambda^p_{1/p}\subseteq X\subseteq\B$. Then
$\C_{\mu, s}(H^\infty)\subseteq X$ if and only if $\mu$ is an $s$-Carleson measure.
\end{theor}
\begin{proof}
Let $\C_{\mu, s}(H^\infty)\subseteq X$. Then $\C_{\mu, s}(1)\in X$; that is, $f_{\mu, s}\in X$. It follows from Proposition \ref{newCM2} that $\mu$ is an $s$-Carleson measure.
On the other hand, let $\mu$ be an $s$-Carleson measure and $f\in H^\infty$. By Lemma \ref{intergera repre}, we see
\begin{align*}
\Cus (f)'(z)=\int_{[0,1)}\frac{tf'(tz)}{(1-tz)^s}d\mu(t)+ \int_{[0,1)}\frac{stf(tz)}{(1-tz)^{s+1}}d\mu(t), \quad z\in\D.
\end{align*}
Then
\begin{align}\label{41}
&\sup_{0<r<1}(1-r)^{1-\frac1p}\left(\f{1}{2\pi}\int_0^{2\pi}|\Cus (f)'(re^{i\theta})|^p d\theta\right)^{\frac1p}\nonumber \\
\lesssim &\|f\|_\B \sup_{0<r<1}(1-r)^{1-\frac1p} \left(\f{1}{2\pi}\int_0^{2\pi}\left(\int_{[0,1)} \frac{1}{|1-tre^{i\theta}|^{s}(1-tr)} d\mu(t)\right)^p d\theta\right)^{\frac1p} \nonumber \\
&+\|f\|_{H^\infty}\sup_{0<r<1}(1-r)^{1-\frac1p} \left(\f{1}{2\pi}\int_0^{2\pi}\left(\int_{[0,1)} \frac{1}{|1-tre^{i\theta}|^{s+1}} d\mu(t)\right)^p d\theta\right)^{\frac1p}.
\end{align}
Note that $ps>1$. By the Minkowski inequality, Lemma \ref{useful estimates} and Lemma \ref{S-CM}, we deduce
\begin{align}\label{42}
&\sup_{0<r<1}(1-r)^{1-\frac1p} \left(\f{1}{2\pi}\int_0^{2\pi}\left(\int_{[0,1)} \frac{1}{|1-tre^{i\theta}|^{s}(1-tr)} d\mu(t)\right)^p d\theta\right)^{\frac1p} \nonumber \\
\leq & \sup_{0<r<1}(1-r)^{1-\frac1p} \int_{[0,1)} \left(\f{1}{2\pi}\int_0^{2\pi} \frac{1}{|1-tre^{i\theta}|^{sp}(1-tr)^p} d\theta\right)^{\frac1p} d\mu(t) \nonumber \\
\lesssim & \sup_{0<r<1}(1-r)^{1-\frac1p} \int_{[0,1)} \f{1}{(1-tr)^{s+1-\frac1p}} d\mu(t)<\infty,
\end{align}
and
\begin{align}\label{43}
&\sup_{0<r<1}(1-r)^{1-\frac1p} \left(\f{1}{2\pi}\int_0^{2\pi}\left(\int_{[0,1)} \frac{1}{|1-tre^{i\theta}|^{s+1}} d\mu(t)\right)^p d\theta\right)^{\frac1p}\nonumber \\
\lesssim & \sup_{0<r<1}(1-r)^{1-\frac1p} \int_{[0,1)} \left( \f{1}{2\pi}\int_0^{2\pi}\frac{1}{|1-tre^{i\theta}|^{(s+1)p} } d\theta \right)^{\frac1p} d\mu(t) \nonumber \\
\lesssim & \sup_{0<r<1}(1-r)^{1-\frac1p} \int_{[0,1)} \f{1}{(1-tr)^{s+1-\frac1p}} d\mu(t)<\infty.
\end{align}
From (\ref{41}), (\ref{42}) and (\ref{43}), $\Cus(f)\in \Lambda^p_{1/p}$. Note that $\Lambda^p_{1/p}\subseteq X$. The desired result follows.
\end{proof}
\end{document} |
\begin{document}
\baselineskip=17pt
\title{Rotated Odometers and Actions on Rooted Trees}
\begin{abstract}
A rotated odometer is an infinite interval exchange transformation (IET) obtained as a composition of the
von Neumann-Kakutani map and a finite
IET of intervals of equal length. In this paper, we consider rotated odometers for which the finite
IET is of intervals of length $2^{-N}$, for some $N \geq 1$. We show that every such system is
measurably isomorphic to a ${\mathbb Z}$-action on a rooted tree, and that the unique minimal aperiodic subsystem
of this action is always measurably isomorphic to the action of the adding machine.
We discuss the applications of this work to the study of group actions on binary trees.
\end{abstract}
\section{Introduction}\label{sec:intro}
In this paper, we consider infinite interval exchange transformations (IETs) obtained by precomposing
the von Neumann-Kakutani map of an interval
with a finite IET of equal length intervals,
and study the dynamics of such systems.
Let $\am$ be the von Neumann-Kakutani map, represented on the half-open unit interval $[0,1)$ as
\begin{align}\label{eq-odometer}
\am(x) = x - (1-3 \cdot 2^{-n}) \qquad \text{ if } x \in [1-2^{1-n}, 1-2^{-n}),\ n \geq 1.
\end{align}
For $q \in {\mathbb N}$, divide the interval $I = [0,1)$ into $q$ half-open subintervals of length $\frac{1}{q}$. Let $\pi$ be a permutation of $q$ symbols and let $R_\pi$ be the corresponding piecewise continuous map of the subintervals.
The infinite IET $F_\pi:I \to I$ defined by $F_\pi = \am \circ R_\pi$ is called the \emph{rotated odometer}.
This generalizes the case when $R_\pi:x \mapsto x+ p/q \mod 1$ is a circle rotation, and we keep the name for
the general case.
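For the reader who wants to experiment, here is a small Python sketch with exact dyadic arithmetic. The convention that $R_\pi$ translates the $k$-th subinterval onto the $\pi(k)$-th one is a choice made here purely for illustration, and the helper names are our own.

```python
from fractions import Fraction

def vnk(x):
    """von Neumann-Kakutani map: vnk(x) = x - (1 - 3*2^{-n}) on [1 - 2^{1-n}, 1 - 2^{-n})."""
    n = 1
    while not (1 - Fraction(2) ** (1 - n) <= x < 1 - Fraction(2) ** (-n)):
        n += 1
    return x - (1 - 3 * Fraction(2) ** (-n))

def rotated_odometer(pi):
    """F_pi = vnk o R_pi, where R_pi translates the k-th subinterval of length 1/q
    onto the pi(k)-th one (an illustrative convention for the finite IET R_pi)."""
    q = len(pi)
    def R(x):
        k = int(x * q)                      # index of the subinterval containing x
        return x + Fraction(pi[k] - k, q)   # translate it onto subinterval pi[k]
    return lambda x: vnk(R(x))

# With q = 2 and the identity permutation, F_pi is vnk itself; the orbit of 0
# enumerates the dyadic rationals in van der Corput order:
F = rotated_odometer([0, 1])
x, orbit = Fraction(0), []
for _ in range(7):
    x = F(x)
    orbit.append(x)
print(orbit)  # 1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8
```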
It was shown in \cite{BL2021} that every rotated odometer $(I,F_\pi,\lambda)$ with Lebesgue measure $\lambda$ is measurably isomorphic to the first return map of a flow of rational slope on a certain infinite-type translation surface.
The translation surfaces in question have interesting properties: they are non-compact surfaces of finite area,
infinite genus and with a finite number of ends. The closure of such a surface contains a single \emph{wild} singularity and possibly a finite number of cone angle singularities,
see \cite{DHP,Rbook} for definitions and details about translation surfaces of infinite type. On the other hand, one can consider $(I,F_\pi,\lambda)$ as a perturbation of the von Neumann-Kakutani system $(I,\am,\lambda)$.
A natural question is: which dynamical properties of $(I,\am, \lambda)$ are preserved under such a perturbation? For the case $q \ne 2^N$, $N \geq 1$,
this question was partially answered in \cite{BL2021}.
Let $I_{per}$ be the set of periodic points in $I$ and let $I_{np} = I \setminus I_{per}$ be the set of non-periodic points.
It was shown in \cite{BL2021} that the aperiodic subsystem $(I_{np},F_\pi)$ of the rotated odometer $(I,F_\pi)$ can be embedded into the Bratteli-Vershik system on a suitable Bratteli diagram, which can be constructed using coding partitions.
The ergodic measures and the spectrum of the Koopman operator
for $(I_{np}, F_\pi)$ can then be studied using the methods developed in the
literature for stationary Bratteli diagrams, see \cite{BKMS2010,Fogg2002}.
In \cite{BL2021} we investigated these questions for the case $q \ne 2^N$, $N \geq 1$.
In particular, it was shown that $(I_{np},F_\pi)$ may be non-minimal with a unique minimal set,
and that it admits at most $q$ invariant ergodic measures
(examples of rotated odometers with $2$ invariant ergodic measures are also given there).
In this paper, we consider the case $q = 2^N$, $N \geq 1$, where it is possible to construct a different, simpler Cantor model for the dynamical system of a rotated odometer than in \cite{BL2021}.
More precisely, we show that the rotated odometer $(I,F_\pi,\lambda)$ is measurably isomorphic to a ${\mathbb Z}$-action on a rooted binary tree, and, using this model, we study the dynamical and ergodic properties of the system. We also discuss the applications of our results to the study of group actions on binary trees.
We now give an overview of the main steps in the procedure which builds a measurable isomorphism between $(I,F_\pi,\lambda)$ and a ${\mathbb Z}$-action on a tree.
As a first step, we embed $(I,F_\pi)$ into a dynamical system given by a homeomorphism of a Cantor set, that is, there exists a Cantor set $I^*$, a homeomorphism $F_\pi^*:I^* \to I^*$ and an injective map $\iota: I \to I^*$, such that the image $\iota(I)$ is dense in $I^*$ and $\iota \circ F_\pi = F_\pi^* \circ \iota$.
This procedure differs in an important way from the embedding of $(I, F_\pi)$
into a compact space $(I^*,F_\pi)$ constructed in \cite{BL2021}.
Indeed, to define the compact space $I^*$ in \cite{BL2021} we employ a technique standard in the study of finite IETs, see for instance \cite{Keane1975}.
Namely, we create gaps in $I$ by doubling points in the orbits of discontinuities of $F_\pi$.
Each periodic point $x$ in \cite{BL2021} has a half-open neighborhood in which every point is periodic with the same period as $x$, and
no points in this neighborhood get doubled. Consequently $I^*$ is not totally disconnected.
However, the closure of $\iota(I_{np})$ is always a Cantor set.
In this paper $I^*$ is constructed by simply doubling every dyadic rational
$p/2^m$, $m \geq 1$, $0 < p < 2^m$, thus repeating the construction of the middle-third Cantor set, if we think of the middle interval as collapsed to a point. The compact space $I^*$ obtained this way is always totally disconnected. The discontinuity points of $(I,F_\pi)$ are among the doubled points, which implies that $F_\pi$ extends to a homeomorphism $F_\pi^*$ of $I^*$. The embedding $\iota$ is a measurable map with respect to the Lebesgue measure $\lambda$ on $I$
and the measure $\mu$ on $I^*$ defined in Section~\ref{subsec-embedding}.
We next build a tree model.
\begin{definition}\label{def-binary}
A \emph{rooted binary tree} $T$ consists of the set $V = \bigsqcup_{i \geq 0} V_i$ of vertices and the set $E = \bigsqcup_{i \geq 1}E_i$ of edges,
which satisfy the following properties for all $i \geq 0$:
\begin{enumerate}
\item The cardinality $|V_i| = 2^i$.
\item Every vertex in $V_i$ is connected by edges to precisely two vertices in $V_{i+1}$.
\item Every vertex in $V_{i+1}$ is connected by an edge to precisely one vertex in $V_i$.
\end{enumerate}
\end{definition}
We modify the binary tree to obtain a \emph{grafted} binary tree as follows.
\begin{definition}\label{def-grafted}
For $N \geq 1$, a \emph{grafted} binary tree $T_N$ consists of the set $V = \bigsqcup_{i \geq 0} V_i$ of vertices and the set $E = \bigsqcup_{i \geq 1}E_i$ of edges, such that:
\begin{enumerate}
\item $|V_0|=1$, $|V_1| = 2^N$ and for $i \geq 2$ we have $|V_i| = 2^{N+i-1}$.
\item The root $v_0 \in V_0$ is connected by edges to $2^N$ vertices in $V_1$.
\item For $i \geq 1$, every vertex in $V_i$ is connected by edges to precisely $2$ vertices in $V_{i+1}$, and to a single vertex in $V_{i-1}$.
\end{enumerate}
\end{definition}
In the notation of Definition~\ref{def-grafted}, we have $T_1 = T$, where $T$ is the binary tree of Definition~\ref{def-binary}.
We introduce a labelling of vertices in $V$. Write $\cA_{k} = \{0,1,\ldots,2^k-1\}$, for $k \geq 1$,
and consider the tree $T_N$. The root $v_0 \in V_0$ is not labelled, vertices in $V_1$ are labelled by digits in $\cA_{N}$, and for $i \geq 1$, if $v \in V_i$ is labelled by a word $w_1w_2 \cdots w_i$, where $w_1 \in \cA_{N}$ and $w_j \in \cA_1$ for $2 \leq j \leq i$, then the two vertices in $V_{i+1}$ connected to $v$ are labelled by $w_1 \cdots w_i 0$ and $w_1 \cdots w_i 1$.
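The labelling and the cardinalities in Definition~\ref{def-grafted} are easy to check mechanically; the following Python sketch (a helper of our own, for illustration) generates the labelled levels of $T_N$:

```python
def levels(N, depth):
    """Labelled vertex levels V_1, ..., V_depth of the grafted binary tree T_N:
    a label is a tuple (w_1, ..., w_i) with w_1 in {0, ..., 2^N - 1} and w_j in {0, 1}."""
    V = [[(w,) for w in range(2 ** N)]]                      # V_1, labelled by A_N
    for _ in range(depth - 1):
        V.append([w + (b,) for w in V[-1] for b in (0, 1)])  # two children per vertex
    return V

# |V_1| = 2^N and |V_i| = 2^{N+i-1} for i >= 2, as required by the definition:
print([len(level) for level in levels(2, 4)])  # prints [4, 8, 16, 32]
```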
\begin{definition}
An \emph{infinite path} in the tree $T_N$ is an infinite sequence in the product space
\begin{align}\label{eq-pathspaceproduct}\partial T_N = \{(w_i) = w_1 w_2 \ldots \mid w_1 \in \cA_{N}, w_i \in \cA_1, i\geq 2\} = \cA_{N} \times \prod_{i \geq 2} \cA_{1,i}, & & \cA_{1,i} = \cA_1 \textrm{ for } i\geq 2. \end{align}
The space $\partial T_N$ is called the \emph{boundary} of the tree $T_N$.
\end{definition}
Since $N$ is finite and the cardinality of $\cA_1$ is two, $\partial T_N$ is a Cantor set.
\begin{definition}\label{def-auto}
An automorphism $g: T_N \to T_N$ is a map of $T_N$ which restricts to bijective maps on the sets $V$ and $E$ of vertices and edges respectively, and which preserves the structure of the tree. That is, if $v_1 \cdots v_i \in V_i$ is a vertex, then for any vertex $v_1 \cdots v_i w \in V_{i+1}$, where $w \in \{0,1\}$, we have that $g(v_1 \cdots v_i)$ is a subword of
$g (v_1 \cdots v_i w)$. In other words, two vertices in $V_i$ and $V_{i+1}$ are joined by an edge if and only if their images under $g$ are joined by an edge.
\end{definition}
We denote by $Aut(T_N)$ the group of automorphisms of $T_N$.
It is straightforward to see that every automorphism $g \in Aut(T_N)$ induces a homeomorphism of the boundary $\partial T_N$.
A \emph{cylinder}, or a \emph{cylinder set} $[w_1w_2 \ldots w_i]$ in $T_N$, $i \geq 1$, is the set of all infinite paths starting with the finite sequence $w_1 w_2 \ldots w_i$.
The {\em Bernoulli measure} $\mu_N$ on $\partial T_N$ is the
standard measure in which every cylinder $[w_1w_2\dots w_i]$
has mass $2^{-N-i+1}$. It is straightforward that $\mu_N$ is preserved under every
automorphism of $T_N$.
\begin{theorem}\label{thm-main3}
Let $q = 2^N$, let $\pi$ be a permutation on $q$ symbols and let $(I,F_\pi,\lambda)$ be a rotated odometer
with Lebesgue measure $\lambda$. Then there exists an automorphism $\widetilde F_\pi \in Aut(T_N)$
and a measurable isomorphism
$${\varphii}: (I,F_\pi,\lambda) \to (\partial T_N,\widetilde F_\pi, \mu_N)$$
such that $\widetilde F_\pi \circ \varphii = \varphii \circ F_\pi$.
\end{theorem}
Theorem~\ref{thm-main3} is proved in Section~\ref{subsec-treemodel}.
A consequence of Theorem~\ref{thm-main3} is the following description of the dynamics
of $(I,F_\pi,\lambda)$ in the case $q=2^N$, $N \geq 1$,
which is more precise than the result of \cite{BL2021}.
\begin{theorem}\label{thm-main1}
Let $q = 2^N$ for some $N \geq 1$, and let $(I,F_\pi)$ be a rotated odometer.
There exists a decomposition $I = I_{per} \cup I_{np}$ with the following properties:
\begin{enumerate}
\item[(i)] Every point in $I_{per}$ is periodic, the restriction $F_\pi: I_{per} \to I_{per}$ is well-defined and invertible.
\item[(ii)] If $I_{per}$ is non-empty, then $I_{per}$ is a finite union of half-open maximal periodic intervals $[x,y)$, $x,y \in I$. Thus the set of periods of points in $(I,F_\pi)$ is finite.
\item[(iii)] The set $I_{np}$ contains $0$ and $F_\pi: I_{np} \to I_{np}$ is well-defined
and invertible at every point in $I_{np} \setminus \{ 0 \}$.
\item[(iv)] The aperiodic system $(I_{np}, F_\pi)$ is minimal.
\end{enumerate}
\end{theorem}
The difference with the general case $q \geq 2$ in \cite{BL2021} is that there $I_{per}$ can be an infinite union of half-open intervals,
while for $q = 2^N$, $I_{per}$ is at most a \emph{finite} union of half-open intervals. It follows that the set of periods which occur in $(I,F_\pi)$ is finite,
which need not be the case in \cite{BL2021}. Another difference is that for $q = 2^N$ the aperiodic subsystem $(I_{np},F_\pi)$ is always minimal, while this need not hold for $q \ne 2^N$. Theorem~\ref{thm-main1} is proved in Section~\ref{subsec-treemodel}.
Since $I_{per}$ is a finite union of half-open intervals, $I_{np}$ is also a finite union of half-open intervals,
and its Lebesgue measure $\lambda(I_{np}) > 0$.
We normalise $\lambda_{np}(U) = \lambda(U)/\lambda(I_{np})$ for every $U \subset I_{np}$.
The \emph{dyadic adding machine} $\am: \{0,1\}^{\mathbb N} \to \{0,1\}^{\mathbb N}$ is a well-known example of a minimal ${\mathbb Z}$-action on the space of one-sided infinite sequences of $0$'s and $1$'s. For a finite set $S = \{0,\ldots,r-1\}$, $r \geq 1$, we define the adding machine $\am_S: S \times \{0,1\}^{\mathbb N} \to S \times \{0,1\}^{\mathbb N}$
as the addition of $1$ in $S$ with infinite carry to the right.
In other words,
\begin{align}\label{addingS}
\am_S(s,x) = \begin{cases}
(s+1,x) & \text{ if } s < r-1,\\[1mm]
(0,\am(x)) & \text{ if } s=r-1, \text{ where $\am$ is the dyadic adding machine}.
\end{cases}
\end{align}
The adding machine $\am_S$ preserves the obvious Bernoulli measure $\mu_S$.
Since $\{0,1\}^{\mathbb N}$ is homeomorphic to $\partial T_1$, there is a conjugate action on $\partial T_1$ which we also call the adding machine and denote by $a$.
A recursive definition of the adding machine on the boundary $\partial T_N$ of the grafted tree $T_N$ is given in Example~\ref{ex-addmach1}.
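On finite prefixes, \eqref{addingS} is just addition of $1$ with carry; the following Python sketch (which truncates the carry at the end of the stored prefix, an assumption of this illustration) mimics $\am_S$:

```python
def adding_machine(bits):
    """Dyadic adding machine on a finite binary prefix: add 1 to the leftmost
    digit and carry to the right."""
    out = list(bits)
    for i, b in enumerate(out):
        if b == 0:
            out[i] = 1
            return out
        out[i] = 0
    return out  # an all-ones prefix wraps around to all zeros

def am_S(r, s, bits):
    """a_S on S x {0,1}^N with S = {0, ..., r-1}: add 1 in S, carrying into the sequence."""
    if s < r - 1:
        return s + 1, list(bits)
    return 0, adding_machine(bits)

# Tracking a length-3 prefix with r = 3, the orbit of (0, 000) closes up
# after r * 2^3 = 24 steps, reflecting minimality of the adding machine:
state = (0, [0, 0, 0])
for _ in range(24):
    state = am_S(3, *state)
print(state)  # prints (0, [0, 0, 0])
```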
\begin{corollary}\label{cor-addmach}
Let $q = 2^N$ for some $N \geq 1$, and let $(I,F_\pi)$ be a rotated odometer.
The aperiodic system $(I_{np},F_\pi,\lambda_{np})$ is measurably isomorphic to the action of the adding machine on $S \times \{0,1\}^{{\mathbb N}}$, for $|S| \leq 2^N$, with Bernoulli measure $\mu_S$.
\end{corollary}
A rotated odometer need not be conjugate to an automorphism of the binary tree $T$, since $F_\pi$ may be such
that the permutation $\pi$ does not respect the structure of the binary tree, see Remark~\ref{rem-period3}.
Therefore $T_N$ cannot be substituted by $T_1$ in Theorem~\ref{thm-main3}.
Theorem~\ref{thm-main3} has applications in the study of group actions on binary trees. Infinite IETs and actions of self-similar groups on binary trees are related. For instance, the famous Grigorchuk group was initially defined as a group of infinite IETs
of the unit interval, see \cite[Section 2]{Grig2011}. Actions of self-similar groups on binary rooted trees are an active topic of research in
Geometric Group Theory \cite{BN2008,Grig2011,Nekr2005}, and they also have applications in the study of arboreal representations of absolute
Galois groups of number fields \cite{Jones2013,Lukina2018}. We now present a corollary of Theorem~\ref{thm-main3} for the actions of groups on binary trees.
To this end, let $T_1 = T$ be the binary tree.
An automorphism $g \in Aut(T)$ is of finite order if $g^m = id$ for some $m \geq 1$. For instance, if $g_i$ interchanges $0$'s and $1$'s in the $i$-th coordinate $w_i$, then $g_i$ has order $2$. Another example of an element of order $2$ is $g_{even}$, which interchanges $0$ and $1$ in $w_i$ for every even $i$, and of course one can construct many more examples. The adding machine \eqref{addingS} is an automorphism of $T$ of infinite order.
Let $G \subset Aut(T)$ be a profinite group such that $G$ acts transitively on $\partial T$. Given $g \in Aut(T)$, the restriction $g|V_n$ is a permutation of a finite set $V_n$, and so it can be written as a product of cycles. Let $(x_i) = x_1 x_2 \cdots \in \partial T$, then $x_1 \cdots x_n$ is a vertex in $V_n$. Denote by $g_{n,x_1 \cdots x_n}$ the cycle containing $x_1 \cdots x_n$, then one can ask how the sequence of cycles $\{g_{n,x_1 \cdots x_n}\}$ behaves as $n$ increases.
It is conjectured in \cite{BJ2007} that when $G$ is a representation of the absolute Galois group of a number field, elements with a certain type of cycle structure are dense in $G$. To the best of our knowledge, this conjecture has been settled in only a few cases.
As a rule, given $g \in Aut(T)$, it is not immediate to determine the cycle structure of $g$, except in a few simple cases when $g$ is periodic or when $g$ acts transitively on every level $V_n$, $n \geq 1$.
The theorem below allows us to determine the cycle structure for compositions of the adding machine and some periodic elements of $Aut(T)$.
\begin{theorem}\label{theorem-appl}
Let $\mu_1$ be the Bernoulli measure on $\partial T$, and let $\lambda$ be Lebesgue measure on the half-open unit interval $I$. Let $g\in Aut(T)$ be such that there exists $m \geq 1$ such that for every $i > m$ and every sequence $w_1 w_2 \cdots \in \partial T$ the action of $g$ leaves $w_i$ unchanged (which implies that $g$ has finite order). Let $a \in Aut(T)$ be the adding machine. Then the following is true:
\begin{enumerate}
\item For some permutation $\pi$ on $2^m$ intervals, there exists a rotated odometer $(I,F_\pi)$ and an injective measure-preserving map $ \varphii: (I,\lambda) \to (\partial T,\mu_1)$,
such that $\varphii \circ F_\pi = (a \circ g)\circ \varphii$.
\item Consequently, $a \circ g$ has infinite order, there is a clopen subset $U \subset \partial T$ such that the restriction $\langle a \circ g \rangle|U$ is minimal, and there is an $n_0 \geq 0$ such that
every $x \in \partial T \setminus U$ is periodic of period $2^k$ for some $k \leq n_0$.
\end{enumerate}
\end{theorem}
The realization of a tree automorphism as an interval exchange transformation in Theorem~\ref{theorem-appl} relies on the fact that, under the hypotheses of the theorem, $g$ respects the embedding of an interval into the boundary of a tree $T$ in Theorem~\ref{thm-main1}. This means, in particular, that the orbits of points which do not have preimages under $\varphii$ consist of points which also do not have preimages under $\varphii$. This condition need not hold for a general finite order automorphism of $T$. We discuss this and the possibility of generalizing Theorem~\ref{theorem-appl} to a larger class of tree automorphisms in Remark~\ref{remark-generalize}.
\begin{remark}
{\rm
In the literature, an \emph{odometer} in $Aut(T)$ is sometimes defined as any $h \in Aut(T)$ such that the action of the cyclic group $\langle h \rangle$ is transitive on each $V_n$, $n \geq 1$. Every such $h$ is conjugate to the adding machine in Example~\ref{ex-addmach1}
by some $g \in Aut(T)$ \cite{Pink13}. We stress that Theorem~\ref{theorem-appl} only holds for the adding machine and need not hold for an odometer $h$. To this end we show in Remark~\ref{remark-notminimal} that it is possible to find $h \in Aut(T)$ such that the action of the cyclic subgroup $\langle h \rangle$ on $\partial T$ is minimal, and a periodic $g \in Aut(T)$, such that the product $h \circ g$ has finite order.
There exists an infinite IET that is measurably isomorphic to the action of such $\langle h \rangle$ on $\partial T$, but this IET will not be the rotated odometer
of the form defined at the beginning of the introduction.
}
\end{remark}
We finish with a sample open question motivated by applications to actions on binary trees.
Consider compositions of the adding machine with a periodic element which does not satisfy the hypotheses of Theorem~\ref{theorem-appl} but which respects the embedding of $I$ in Theorem~\ref{thm-main1}, see Remark~\ref{remark-generalize} for the justification of such an assumption.
It may be possible to solve the following problem by considering a sequence $\{F_{\pi_i}\}_{i \geq 1}$ of rotated odometers, where each $\pi_i$ is a (possibly different) permutation of a finite number of symbols.
\begin{problem}\label{prob-compositions}
Let $g \in Aut(T)$ be periodic such that for any $i \geq 1$ there is $j>i$ and $w_1 \cdots w_j \cdots \in \partial T$ such that $g(w_j) \ne w_j$, and such that $g$ preserves the embedding $\varphii$ in Theorem~\ref{thm-main1}. Find a model for the action of the product $a \circ g$,
where $a$ is the adding machine, in terms of rotated odometers.
What are the topological properties of infinite translation surfaces,
which admit flows whose first return map is measurably isomorphic to such systems?
\end{problem}
The paper is organized as follows. In Section~\ref{sec:general} we develop a tree model for rotated odometers and prove Theorem~\ref{thm-main3}. In Section~\ref{subsec-dynamics} we discuss the dynamics of rotated odometers and prove Theorems~\ref{thm-main1} and~\ref{theorem-appl} and Corollary~\ref{cor-addmach}.
\section{The tree model}\label{sec:general}
In this section we build a tree model for a rotated odometer with $q = 2^N$, and prove Theorem~\ref{thm-main3}.
\subsection{Embedding into a Cantor set}\label{subsec-embedding}
Set $C = \{p 2^{-n} \mid n\geq 1, 0 < p < 2^n\}$; these dyadic rationals are used as cut-points. For each point $x \in C$ we add a double point $x^-$ to $I$, and define
$$I^* = I \cup \{x^- \mid x \in C\} \cup \{1\}.$$
The subset $I \cup \{1\}$ of $ I^*$ has total order $<$ induced from ${\mathbb R}$. We extend this order to $I^*$
by defining $x^- < x$ if $x \in C$, and $y < x^-$ if $y \in I \setminus C$, $x \in C$ and $y < x$.
Since there are no points between $x^-$ and $x$ in $I^*$, adding $x^-$ to $I$ can be thought of as creating a gap. We give $I^*$ the order topology generated by the sets
\begin{align*} \cB = \{(a,b) \mid a,b \in I^*\} \cup \{[0,b) \mid b \in I^*\} \cup \{(a,1] \mid a \in I^*\}. \end{align*}
It is straightforward that the sets $\{[x,y^-] \mid x,y \in C\}$ are clopen in this topology. Since $C$ is dense in $I$, every point $z \in I^*$ has a system of decreasing clopen neighborhoods
$$C(z,n) = [p_n2^{-n},(p_n+1)2^{-n}], \qquad n \geq 0,$$
where $0 \leq p_n < 2^n$ is the unique integer with $z \in C(z,n)$.
Recall that a metric $d$ on a space $X$ is an \emph{ultrametric} if it satisfies the following stronger form of the triangle inequality:
$$
d(x,y) \leq \max\{d(x,z), d(z,y)\} \ \textrm{ for all }x,y,z \in X.
$$
We put an ultrametric on $I^*$ by declaring that
\begin{align*} d(z_1,z_2) = \frac{1}{2^r}, \quad r = \max\{n \geq 0 \mid C(z_1,n) = C(z_2,n) \}. \end{align*}
Then $I^*$ is a compact totally disconnected perfect metric space, that is, $I^*$ is a Cantor set.
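For points of $\iota(I)$ that are not doubled, $r$ is simply the largest $n$ for which the two points lie in the same dyadic interval of level $n$. An illustrative Python sketch (which ignores the doubled points $x^-$, an assumption of this simplification):

```python
from fractions import Fraction

def ultra_d(x, y, max_level=60):
    """d(x, y) = 2^{-r}, where r = max{ n >= 0 : x and y lie in the same dyadic
    interval [p 2^{-n}, (p+1) 2^{-n}) }; sketch for undoubled points of [0, 1)."""
    if x == y:
        return Fraction(0)
    r = 0
    while r < max_level and int(x * 2 ** (r + 1)) == int(y * 2 ** (r + 1)):
        r += 1
    return Fraction(1, 2 ** r)

x, y, z = Fraction(1, 3), Fraction(3, 8), Fraction(5, 16)
# The strong triangle inequality holds (here with equality):
print(ultra_d(x, y), max(ultra_d(x, z), ultra_d(z, y)))  # prints 1/4 1/4
```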
Define a measure $\mu$ on $I^*$ by setting, for each clopen set $[x,y^-]$ with $x,y \in C$,
$$\mu([x,y^-]) = y - x,$$
and denote by $\iota: I \to I^*$ the inclusion map. Clearly $\mu(I^*) = 1$. Since $C$ is countable, the following is straightforward.
\begin{lemma}\label{lemma-iota}
The map $\iota: (I,\lambda) \to (I^*,\mu)$ is measurable.
\end{lemma}
Denote by $D_0 = \{1 - 2^{-k} \mid k \geq 0\}$ the set of discontinuities of the von Neumann-Kakutani map $\am$, and let $D^+$ and $D^-$ be the sets of forward and backward (whenever defined) orbits of points in $D_0$. Since $\am$ is continuous on the intervals $I_k = [1 - 2^{-(k-1)},1-2^{-k})$, $k \geq 1$, and, moreover, the restriction $\am|I_k$ for each $k \geq 1$ is a translation by $\pm p 2^{-s}$ for some $p,s \in {\mathbb N}$, the set $D_0 \cup D^+ \cup D^-$ of forward and backward orbits of the points of discontinuity of $\am$ is contained in $C$.
We can extend $\am: I \to I$ to a continuous map $\am^*:I^* \to I^*$ by setting $\am^*(x) = \am(x)$ if $x \in I$, and
$$\am^*(x^-) = \lim_{y \nearrow x} \am(y), \quad \textrm{for all } x \in C \cup \{1\}.$$
Every point $x \in I$ except $0$ has a two-sided orbit, and it follows that $\iota(x)$ has a two-sided orbit in $I^*$.
For any sequence $y \nearrow 1$ the sequence of images $\am(y) \searrow 0$, so $\am^*(1) = 0$ and $0$ has a two-sided orbit in $I^*$ under $\am^*$. It follows that $\am^*$ is a homeomorphism.
It is immediate that $\am^* \circ \iota(x) = \iota \circ \am (x)$ for all $x \in I$.
Note that the finite IET $R_\pi: I \to I$ extends in a similar manner to a periodic homeomorphism $R_\pi^*:I^* \to I^*$, which satisfies $R_\pi^* \circ \iota (x) = \iota \circ R_\pi(x)$ for all $x \in I$. Then for the composition $F_\pi^* = \am^* \circ R_\pi^*$ it follows that $F_\pi^* \circ \iota(x) = \iota \circ F_\pi(x)$ for all $x \in I$, and thus $(I,F_\pi,\lambda)$ is measurably isomorphic to $(I^*,F_\pi^*,\mu)$ via the embedding $\iota$.
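On dyadic rationals, the map $\am$ can be compared digit by digit with the binary adding machine. The sketch below assumes the standard explicit form of the von Neumann-Kakutani map (not restated in this section): on $I_k = [1-2^{-(k-1)}, 1-2^{-k})$ it is the translation $x \mapsto x + 3\cdot 2^{-k} - 1$.

```python
from fractions import Fraction as F

def vnk(x):
    # von Neumann-Kakutani map: on I_k = [1 - 2^{1-k}, 1 - 2^{-k}) it is the
    # translation x -> x + 3*2^{-k} - 1 (assumed standard explicit form)
    k = 1
    while not (1 - F(2) ** (1 - k) <= x < 1 - F(2) ** (-k)):
        k += 1
    return x + 3 * F(2) ** (-k) - 1

def add_one(bits):
    # binary adding machine: first digit has weight 1/2, the carry moves right
    out = list(bits)
    for i, b in enumerate(out):
        if b == 0:
            out[i] = 1
            return out
        out[i] = 0
    return out

m = 6
for p in range(2 ** m - 1):      # skip x = 1 - 2^-m, whose image needs more digits
    bits = [(p >> (m - 1 - j)) & 1 for j in range(m)]
    x = sum(F(b, 2 ** (j + 1)) for j, b in enumerate(bits))
    y = sum(F(b, 2 ** (j + 1)) for j, b in enumerate(add_one(bits)))
    assert vnk(x) == y
```

Exact rational arithmetic (`fractions.Fraction`) avoids the floating-point drift that would otherwise obscure the endpoint cases.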
\subsection{Actions on trees}
The binary tree and the grafted binary trees were defined in Definitions~\ref{def-binary} and~\ref{def-grafted}, and automorphisms of trees were defined in Definition~\ref{def-auto}. We now introduce a description of elements in $Aut(T_N)$ convenient for computations. This approach is a slight modification of the one routinely used in Geometric Group Theory to study actions on binary trees,
see for instance \cite{Nekr2005}. The purpose of this modification is to take into account the fact that in the grafted binary tree the vertex set $V_1$ has more than $2$ vertices.
Let $T = T_1$ be the binary tree with the labelling of vertices by finite words in $\cA_1$ as defined in the Introduction. Let $w = w_1 \cdots w_k \in \prod_{i=1}^k\cA_1$ and denote by $T(w)$ the subtree of $T$ consisting of all paths starting with the finite word $w$. All such paths pass through the vertex in $V_k$ labelled by $w$. Then there is an isomorphism of trees
\begin{align}\label{eq-binaryhomeo}
\kappa_w: T(w) \to T, \qquad w_1 \cdots w_k v_{k+1} \cdots \mapsto v_{k+1} \cdots, \, v_i \in \{0,1\} \textrm{ for }i > k.\end{align}
For every $g \in Aut(T)$, the restriction $g|V_n$ is a permutation of a set of $2^n$ elements.
\begin{definition}\label{def-section}
Given an automorphism $g \in Aut(T)$, and a finite word $w$, we define a \emph{section} at $w$ by
\begin{align}\label{eq-section}g_w = \kappa_{g(w)} \circ g \circ \kappa_w^{-1} \in Aut(T).\end{align}
\end{definition}
Let $g|V_n = \tau$. Then we can write $g$ as a composition (we compose the maps on the left)
\begin{align}\label{eq-recur}
g = (g_{\tau^{-1}(0^n)},g_{\tau^{-1}(0^{n-1}1)}, \ldots, g_{\tau^{-1}(1^n)}) \tau,
\end{align}
where $g_{\tau^{-1}(w)}$ are sections, for finite words $w$ of $n$ letters.
Equation \eqref{eq-recur} means that to compute $g$, we first apply $\tau$ on $V_n$,
and then we apply a section $g_{\tau^{-1}(w)}$ to the subtree $T(w)$, for all $w \in V_n$.
\begin{example}\label{ex-addmach1}
{\rm
Using sections, we can write automorphisms of $T$ recursively. Recall that a generator of the adding machine action on a Cantor space $\{0,1\}^{{\mathbb N}}$ is given by
\begin{align}\label{eq-addingmach}a (w_1 w_2 \cdots) = \left\{ \begin{array}{ll} (w_1+1)\, w_2 \cdots & \textrm{ if }w_1 = 0, \\ 0 \, 0 \cdots 0 \, (w_{k+1}+1) \, w_{k+2} \cdots &\textrm{ if } w_i= 1 \textrm{ for } 1 \leq i \leq k, \, w_{k+1} =0, \\ 0 \, 0 \cdots & \textrm{ if }w_k = 1 \textrm{ for all }k \geq 1. \end{array}\right. \end{align}
Recall that for the binary tree $T$ we have $\partial T \cong \{0,1\}^{\mathbb N}$.
Let $\sigma$ be the non-trivial permutation of $\cA_1$. Then using \eqref{eq-recur} we can write
$$a = (a,1)\sigma,$$
where $1$ is the identity map in $Aut(T)$.
Here $\sigma$ performs the addition of $1$ modulo two in the first entry of the sequence, interchanging $0$ and $1$, while $(a,1)$ implements the recursive procedure of infinite carry to the right. For example, if $w = 10^\infty$, then applying $\sigma$ to $w$ changes the $1$ in the first entry to $0$, so $\sigma(w) = 0^\infty$, and we must compute $(a,1)(0^\infty)$ next. The sequence $0^\infty$ belongs to the subtree $T(0)$, which means that we must apply the section $a_0=a$ to $0^\infty$. That is, we apply $a$ to $0^\infty$ starting from the second entry. Since $a|V_1 = \sigma$, we interchange $0$ and $1$ in the second entry, obtaining $010^\infty \in T(01)$. Since the sections satisfy $a_1 = 1$, and hence also $a_{01} = 1$, the computation stops with the result $a(10^\infty) = 010^\infty$.
}
\end{example}
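The recursive rule $a = (a,1)\sigma$ translates directly into a computation on finite words; a minimal sketch:

```python
def a(w):
    # a = (a,1)sigma on a finite 0/1 word: sigma flips the first letter;
    # after flipping to 0 (a carry occurred) the section a_0 = a acts on the
    # tail, after flipping to 1 the section a_1 = 1 does nothing
    if not w:
        return w
    return '1' + w[1:] if w[0] == '0' else '0' + a(w[1:])

assert a('100') == '010'    # the computation a(10^infty) = 010^infty above
assert a('111') == '000'    # infinite carry: the all-ones word maps to all zeros
assert a('010') == '110'
```

The first assertion reproduces the worked computation in the example on a truncation of $10^\infty$.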
Using \eqref{eq-recur} we can compute the compositions of elements in $Aut(T)$. The following statement is obtained by a straightforward computation.
\begin{lemma}\label{lemma-product}
Let $g,h \in Aut(T)$, and suppose $g = (g_0,\ldots,g_{2^n-1}) \tau $ and $h = (h_0,\ldots,h_{2^n-1}) \nu $, where $\tau, \nu$ are permutations of $2^n$ symbols and $g_i,h_i \in Aut(T)$ for $0 \leq i < 2^n$. Then
\begin{align}\label{eq-compose} gh = (g_0,\ldots,g_{2^n-1}) \tau (h_0,\ldots,h_{2^n-1}) \nu = (g_0 h_{\tau^{-1}(0)},\ldots,g_{2^n-1}h_{\tau^{-1}(2^n-1)}) \tau \nu.\end{align}
\end{lemma}
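The composition law \eqref{eq-compose} can be checked numerically at a fixed depth. The sketch below is an illustration with sections truncated to bijections of length-$m$ tails; it uses the convention, consistent with \eqref{eq-recur}, that the permutation moves the level-$n$ prefix first and the section at the new prefix then acts on the tail.

```python
import itertools, random

random.seed(1)
n, m = 2, 3                       # level depth and tail length
k = 2 ** n
words = [''.join(t) for t in itertools.product('01', repeat=m)]

def rand_bijection():
    return dict(zip(words, random.sample(words, len(words))))

def act(sections, perm, word):
    # (g_0,...,g_{k-1}) perm: the permutation moves the level-n prefix first,
    # then the section sitting at the *new* prefix acts on the tail
    u, v = word[:n], word[n:]
    i = perm[int(u, 2)]
    return format(i, '0{}b'.format(n)) + sections[i][v]

tau = dict(zip(range(k), random.sample(range(k), k)))
nu = dict(zip(range(k), random.sample(range(k), k)))
g = [rand_bijection() for _ in range(k)]
h = [rand_bijection() for _ in range(k)]

tau_inv = {v: u for u, v in tau.items()}
taunu = {i: tau[nu[i]] for i in range(k)}
gh = [{w: g[i][h[tau_inv[i]][w]] for w in words} for i in range(k)]

# the product of the two actions equals the single action given by eq-compose
for w in (''.join(t) for t in itertools.product('01', repeat=n + m)):
    assert act(g, tau, act(h, nu, w)) == act(gh, taunu, w)
```

Running the check with randomized sections and permutations exercises all $2^{n+m}$ words at once.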
Now let $T_N$ be the grafted binary tree. Similarly to \eqref{eq-binaryhomeo}, for any $w \in V_k$, $k \geq 1$ we define a map
\begin{align}\label{eq-kappaw}\kappa_w: T_N(w) \to T_1, \qquad w_1 \cdots w_k v_{k+1} \cdots \mapsto v_{k+1} \cdots.\end{align}
The difference with \eqref{eq-binaryhomeo} is that the range of $\kappa_w$ is not the grafted tree $T_N$ but the binary tree $T$. A section $g_w$ of the grafted tree $T_N$ at $w$ is defined by \eqref{eq-section} with $\kappa_w$ given by \eqref{eq-kappaw}.
Again, the difference with the setting of the binary tree is that for the grafted binary tree $T_N$ sections are elements of $Aut(T)$ and not of $Aut(T_N)$.
\begin{lemma}\label{lemma-1}
Given an automorphism $g \in Aut(T)$ of the binary tree $T$, there is always an automorphism $\widehat g \in Aut(T_N)$ of the grafted tree $T_N$, such that the induced homeomorphisms on the boundaries of the corresponding trees are conjugate.
\end{lemma}
\begin{proofof}{Lemma \ref{lemma-1}}
Vertices in the vertex level set $V_N$ of $T$ are labelled by words of length $N$ in the alphabet $\cA_1$. Define the map
$$\kappa_N: \cA_1^N \to \cA_{N}, \qquad w_1 \cdots w_N \mapsto \sum_{i=1}^N 2^{N-i} w_i.$$
Using the identification \eqref{eq-pathspaceproduct} of the path spaces $\partial T$ and $\partial T_N$ with products of finite sets we obtain a homeomorphism
$$\kappa_\infty: \partial T \to \partial T_N, \qquad w_1 w_2 \cdots \mapsto \kappa_N(w_1\cdots w_N)w_{N+1} \cdots.$$
It follows that the map $\widetilde g = \kappa_\infty \circ g \circ \kappa_\infty^{-1} : \partial T_N \to \partial T_N$ is a homeomorphism. Moreover, by construction if two paths $(w_i),(v_i) \in \partial T$ coincide up to level $m \geq N$, then their images under $\kappa_\infty$ coincide up to level $m-N$, so every subtree $T(w)$ for $w \in V_N$ is mapped isomorphically onto a subtree $T_N(\kappa_N(w))$. It follows that $\widetilde g $ defines an automorphism $\widehat g$ of $T_N$.
\end{proofof}
Given a recursive definition of $g \in Aut(T)$ as in \eqref{eq-recur}, we can obtain a recursive definition of $\widehat g \in Aut(T_N)$. Indeed, let $g|V_N = \tau$ be the permutation of $V_N$ induced by $g$. Then $\tau_N = \kappa_N \circ \tau \circ \kappa_N^{-1}$ is a permutation of the level set $V^N_1$ of $T_N$, and if $g = (g_0,\ldots,g_{2^N-1})\tau$, then $\widehat g =(g_0,\ldots,g_{2^N-1})\tau_N $.
\begin{example}\label{ex-addmach2}
{\rm
Let $a = (a,1)\sigma$ be the standard adding machine as in Example~\ref{ex-addmach1}, and let $N \geq 2$. We can compute that
\begin{align}\label{eq-tauodometer}\tau_N = \kappa_N \circ (a|V_N) \circ \kappa_N^{-1} = (0, \, 2^{N-1}, \, 2^{N-2}, \, 2^{N-1} + 2^{N-2}, \, \ldots ,2^N-1),\end{align}
and $\widehat a = (a,1,\ldots,1)\tau_N \in Aut(T_N)$.
More generally, given a finite set $S \subset V_N$, we can consider a subtree $T_S = \bigcup_{w \in S} T_N(w) \subset T_N$. Let $\eta$ be a transitive permutation of $S$, and consider the map $a_S = (a,1,\ldots,1)\eta$ on $T_S$. Then $a_S$ is transitive on $V_n \cap T_S$, for any $n \geq 1$, so $a_S$ is the adding machine on $T_S$.
}
\end{example}
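The permutation \eqref{eq-tauodometer} can be recomputed from the definitions: $\kappa_N$ reads $w_1$ as the top binary digit, while the adding machine carries from $w_1$ rightward, so $\tau_N$ is the increment-by-one map conjugated by bit reversal. A sketch for $N=3$:

```python
def add_one(bits):
    # the adding machine a on a word w_1 w_2 ...: w_1 is the carry entry
    out = list(bits)
    for i, b in enumerate(out):
        if b == 0:
            out[i] = 1
            return out
        out[i] = 0
    return out

def kappa(bits):
    # kappa_N: w_1 ... w_N -> sum_i 2^{N-i} w_i, i.e. w_1 is the top bit
    return int(''.join(map(str, bits)), 2)

N = 3
tau_N = {}
for j in range(2 ** N):
    bits = [int(c) for c in format(j, '0{}b'.format(N))]
    tau_N[j] = kappa(add_one(bits))

# the orbit of 0 traverses (0, 2^{N-1}, 2^{N-2}, 2^{N-1}+2^{N-2}, ..., 2^N - 1)
orbit, j = [], 0
for _ in range(2 ** N):
    orbit.append(j)
    j = tau_N[j]
assert orbit == [0, 4, 2, 6, 1, 5, 3, 7]
assert j == 0                      # tau_N is a single transitive cycle
```

For $N=2$ the same computation yields the cycle $(0,2,1,3)$, which is the permutation $(0213)$ used in Remark~\ref{rem-period3} below.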
We note that, given $h \in Aut(T_N)$, the composition $\kappa_\infty^{-1} \circ h \circ \kappa_\infty$ need not define an automorphism of $T$. Indeed, let $N=2$, so $T_N$ has $4$ vertices at the first level, and let $h|V_1 = \tau_2 = (012)$, so the vertex $3$ is fixed. We have $\kappa_2^{-1}(3) = 11 \in V_2$ and $\kappa_2^{-1}(2) = 10 \in V_2$. At the same time
$$ \kappa_2^{-1} \circ \tau_2 \circ \kappa_2(10) = \kappa_2^{-1}(\tau_2(2)) = \kappa_2^{-1}(0) = 00.$$
Thus $\kappa_\infty^{-1} \circ h \circ \kappa_\infty$ maps paths starting with $1$ in $\partial T$ to paths starting with either $1$ or $0$ depending on the second symbol in the sequence.
This means that $ \kappa_2^{-1} \circ \tau_2 \circ \kappa_2$ is incompatible with the structure of the binary tree $T$, and so $\kappa_\infty^{-1} \circ h \circ \kappa_\infty$ does not define an automorphism of $T$.
\subsection{Tree models for rotated odometers}\label{subsec-treemodel}
In this section we prove Theorem~\ref{thm-main3}.
\begin{proofof}{Theorem~\ref{thm-main3}}
Recall that $\pi$ is a permutation of $2^N$ symbols, and $\iota: (I,\lambda) \to (I^*,\mu)$ is a measurable embedding into a Cantor set. Write $x_{n,p} = p2^{-n}$ for points in $C$, and $x_{n,p}^-$ for the corresponding double points in $I^*$.
For each $n \geq 1$, set $x_{n,2^n}^- = 1$.
Note that for any $n \geq 0$ we have
$$I^* = \bigcup \{[x_{n,p}, x_{n,p+1}^-] \mid 0 \leq p < 2^n \}. $$
Consider the grafted tree $T_N$, and recall that $|V_1| = 2^N$. We are going to construct a homeomorphism $\widetilde \varphii: I^* \to \partial T_N $ inductively as follows.
Define $\widetilde \varphii_1: I^* \to \cA_{N}$ by setting
$$\widetilde \varphii_1(z) = p \quad \textrm{ if and only if } \quad z \in [x_{N,p}, x_{N,p+1}^-].$$
For $n \geq 2$, there is a unique $0 \leq m < 2^{n+N-1}$ such that $z \in [x_{n+N-1,m}, x_{n+N-1,m+1}^-]$. Set
$$w_n = \widetilde \varphii_{n}(z) = m \mod 2.$$ Then define
$$\widetilde \varphii_\infty: I^* \to \partial T_N, \qquad z \mapsto (\widetilde \varphii_1(z), \widetilde \varphii_2(z), \ldots).$$
This mapping is bijective, since every point in $I^*$ has a system of clopen neighborhoods of the form $\{[x_{n,p},x_{n,p+1}^-] \mid n \geq 0\}$, and every clopen neighborhood $[x_{n,p}, x_{n,p+1}^-]$ is non-empty. The mapping $\widetilde \varphii_\infty$ is clearly continuous, and a continuous bijection from a compact space to a Hausdorff space is a homeomorphism. Note that by construction the inclusions of clopen sets in $I^*$ correspond to vertices in $T_N$ joined by finite paths.
The measure $\mu$ assigns equal weight to each interval $\{[x_{n,p},x_{n,p+1}^-] \mid 0 \leq p <2^n\}$ in the partition of $I^*$, and $\mu(I^*) = 1$. By construction each $[x_{n,p},x_{n,p+1}^-] $ is mapped onto a unique vertex in $V_{n-N+1}$. The Bernoulli measure $\mu_N$ assigns equal weight to every set $\partial T_N(w)$, where $w \in V_{n-N+1}$, and $\mu_N(\partial T_N) = 1$. It follows that $\widetilde \varphii_\infty$ is measure-preserving.
Every map of $I^*$ which for all $n \geq 1$ induces a permutation of the clopen sets $\{[x_{n,p}, x_{n,p+1}^-] \mid 0 \leq p < 2^n\}$, induces a family of permutations of the vertex level sets $V_{n-N+1}$, $n \geq N$, of $T_N$. Since paths in $T_N$ correspond to inclusions of clopen sets in $I^*$, such permutations are compatible with the structure of the tree $T_N$ and induce an automorphism of $T_N$. We note that the maps $\am^*: I^* \to I^*$ and $R_\pi^*:I^* \to I^*$ described in Section~\ref{subsec-embedding} satisfy this condition. Therefore, the composition $F_\pi^* = \am^* \circ R_\pi^*: I^* \to I^*$ induces an automorphism of $T_N$. The proof of Theorem~\ref{thm-main3} is completed by composing the homeomorphism $\widetilde \varphii_\infty$ with the measurable isomorphism $\iota: (I,F_\pi,\lambda) \to (I^*,F_\pi^*,\mu)$, which yields the measurable embedding $\varphii = \widetilde \varphii_\infty \circ \iota: I \to \partial T_N$.
\end{proofof}
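A sketch of the coding map on ordinary (non-double) points, under the reading that the $n$-th symbol, $n \geq 2$, is obtained from the level-$(n+N-1)$ dyadic partition, which matches the correspondence between level-$n$ intervals and vertices in $V_{n-N+1}$ stated in the proof; the helper names are ours:

```python
from fractions import Fraction as F

N = 2

def level_index(z, n):
    # the unique p with z in [p 2^-n, (p+1) 2^-n); z is taken non-dyadic,
    # so the double points of I^* play no role in this sketch
    return int(z * 2 ** n)

def phi(z, depth):
    # first symbol from the level-N partition; the n-th symbol (n >= 2) is
    # read from the level-(n+N-1) partition
    first = level_index(z, N)
    tail = [level_index(z, N + j) % 2 for j in range(1, depth)]
    return [first] + tail

z = F(5, 8) + F(1, 2 ** 10)   # binary 0.1010000001: the first two bits give symbol 2
assert phi(z, 4) == [2, 1, 0, 0]
```

In other words, the first symbol of $\widetilde\varphii_\infty(z)$ packages the first $N$ binary digits of $z$ via $\kappa_N$, and the remaining symbols are the subsequent binary digits, exactly as in the homeomorphism $\kappa_\infty$ of Lemma~\ref{lemma-1}.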
\begin{remark}\label{remark-addedpoints}
{\rm
Consider the set of added points $\{x_{m,p}^- \mid x_{m,p} \in C\}$. Suppose $x_{m,p} = p2^{-m}$ is an irreducible fraction,
that is, $p$ is odd. Then for $n > m$ we have that $\widetilde \varphii_n(x_{m,p}^-) = 1$ since in that case $x_{m,p}^-$
corresponds to a right endpoint of a clopen interval in the partition $\{[x_{n,r},x_{n,r+1}^-] \mid 0 \leq r < 2^n\}$,
and it is always contained in the second interval of the subdivision of $[x_{n,r},x_{n,r+1}^-]$ into two intervals.
Then the image of $x_{m,p}^-$ in $\partial T_N$ is a sequence which is eventually constant with entries equal to $1$.
}
\end{remark}
\section{Dynamics of rotated odometers} \label{subsec-dynamics}
Using the tree model obtained in Theorem~\ref{thm-main3} we study the dynamics of rotated odometers and prove Theorems~\ref{thm-main1} and~\ref{theorem-appl} and Corollary~\ref{cor-addmach}.
\subsection{Periodic and non-periodic points}\label{subsec-periodic}
For the von Neumann-Kakutani map $\am^*:I^* \to I^*$ denote by $A = \widetilde\varphii_\infty \circ \am^* \circ \widetilde\varphii_\infty^{-1}: \partial T_N \to \partial T_N$
the induced map on the boundary of the grafted tree $T_N$. We want to describe $A$ using the recursive formula \eqref{eq-recur}.
\begin{proofof}{Theorem~\ref{thm-main1} and Corollary~\ref{cor-addmach}}
In what follows $n \geq N$.
Let $L_n = [0, 2^{-n})$ and $M_n = [1-2^{-n},1)$, so that $L_n$ is the first and $M_n$ is the last set of the partition of $I$ into $2^n$ sets of equal lengths. Then $\iota(L_n) \subset [x_{n,0},x_{n,1}^-] \subset I^*$ and $\iota(M_n) \subset [x_{n,2^{n}-1},1] \subset I^*$. The definition of the von Neumann-Kakutani map in \eqref{eq-odometer} implies that $\am(x) \in L_n$ if and only if $x \in M_n$, and for any $[x_{n,p}, x_{n,p+1})$ except $M_n$ the restriction $\am|[x_{n,p}, x_{n,p+1})$ is a translation. Thus it preserves the order $\leq $ on the points in $[x_{n,p}, x_{n,p+1})$ induced from ${\mathbb R}$.
The relation $\leq$ is not preserved by the restriction $\am: M_n \to L_n$, where the order of two halves of $M_n$ is interchanged, and the intervals inside the image of the second half of $M_n$ are further interchanged. The second half of $M_n$ is the set $M_{n+1}$, and we have $\am(M_{n+1}) = L_{n+1}$. Thus the restriction of $\am$ to the set of intervals $\{[x_{n,p}, x_{n,p+1}) \mid 0 \leq p < 2^n\}$, and therefore of $\am^*$ to the set of intervals $\{[x_{n,p}, x_{n,p+1}^-] \mid 0 \leq p < 2^n\}$, defines a permutation of $2^n$ symbols, which is transitive since $\am$ is minimal on $I$.
It follows that $A|V_{n-N+1}$ is a transitive permutation of $V_{n-N+1}$. Since further permutations of subintervals, which do not respect the order $<$, only happen for the interval mapped onto $L_{n-N+1}$, for any $w \ne 0^{n-N+1} \in V_{n-N+1}$,
the section $A_w \in Aut(T)$ is the identity map. The restriction of the section $A_{0^{n-N+1}}$ to $V_{n-N+2}$ is a non-trivial permutation of two symbols, since $\am$ permutes two subintervals of $L_{n-N+1}$. For $n = N$, we have $A|V_{N-N+1} = A|V_1= \tau_N$, for $\tau_N$ given by \eqref{eq-tauodometer}, and so $A = (a,1,\ldots,1)\tau_N$, where $a = (a,1) \sigma$ is described in Example~\ref{ex-addmach1}.
Similarly, given a permutation $\pi$ of $2^N$ symbols, and the corresponding finite IET $R_\pi: I \to I$, we deduce that the induced map $R = \widetilde\varphii_\infty \circ R_\pi^* \circ \widetilde\varphii_\infty^{-1}$ is given by $R = (1,1,\ldots,1)\pi$, with $R|V_1 = \pi$.
Now using the law for composition of tree automorphisms \eqref{eq-compose} we can easily understand the dynamics of the system $(\partial T_N, A \circ R)$. In particular,
$$A \circ R = (a,1,\ldots,1)\tau \pi,$$
which leads to the following conclusions:
\begin{enumerate}
\item[(i)] Consider the decomposition of $\tau \pi$ into cycles, and suppose $c$ is a cycle containing $0$. Let $\cO \subset \cA_{N}$ be the set of symbols in $c$. Then
$$\partial T_N(\cO) = \bigcup \{\partial T_N(s) \mid s \in \cO \}$$
is a clopen subset of $\partial T_N$ and the restriction of $A \circ R$ to this set satisfies
$$A\circ R|\partial T_N(\cO) = (a,1,\ldots,1)c,$$
which shows that this system is the addition of $1$ in the first component with infinite carry to the right, and so it is minimal. Let $|c|$ denote the length of the cycle $c$, and set $S=\{0,\ldots,|c|-1\}$; then $A \circ R| \partial T_N(\cO)$ is isomorphic to the adding machine on $S \times \{0,1\}^{\mathbb N}$ defined in Example~\ref{ex-addmach2}, and Corollary~\ref{cor-addmach} follows. In particular, the system $(\partial T_N, A \circ R)$ is minimal if and only if $\tau\pi$ is a transitive permutation.
\item[(ii)] In the cycle decomposition of $\tau \pi$, let $c'$ be a cycle not containing $0$, and let $\cO' \subset \cA_{N}$ be the set of symbols in $c'$. Then $\partial T_N(\cO') = \bigcup \{ \partial T_N(s) \mid s \in \cO' \} $ is a clopen subset of $\partial T_N$, and we have
$$A\circ R|\partial T_N(\cO') = (1,1,\ldots,1)c'.$$
Thus every point in $\partial T_N(\cO')$ has period $|c'|$.
\item[(iii)] Since $\tau \pi$ contains a finite number of cycles, the set of periods of periodic points in $(\partial T_N, A \circ R)$, and so in $(I,F_\pi)$, is finite. Also, it follows that a point $x \in \partial T_N$ is periodic if and only if $x \in \partial T_N(\cO')$ for some cycle $c'$ not containing $0$. There is at most a finite number of such cycles $c'$ in $\tau \pi$, and so there is a finite number of half-open intervals in $I$ whose image under the inclusion map $\varphii = \widetilde \varphii_\infty \circ \iota$ is contained in $\partial T_N\setminus \partial T_N(\cO)$. It follows that the set of periodic points in $(I,F_\pi)$ is at most a finite union of half-open intervals.
\end{enumerate}
This proves Theorem~\ref{thm-main1} and Corollary~\ref{cor-addmach}.
\end{proofof}
\begin{remark}\label{rem-period3}
{\rm We note that the periods of points in $(I,F_\pi)$ need not be powers of $2$. Let $N = 2$, then $A = (a,1,1,1)(0213)$. Let $\pi = (03)$. Then
$$A \circ R = (a,1,1,1)(0213)(03) = (a,1,1,1)(0)(321),$$
so the orbit of every infinite sequence in $\partial T_2$ starting with $1$, $2$ or $3$ is periodic with period $3$.
It follows from this example that there exist rotated odometers whose action is not measurably isomorphic
to the action of an automorphism of the binary tree $T$.
Indeed, if $g \in Aut(T)$ is an automorphism and $x \in \partial T$ is a periodic point, then the period of $x$ is a power of $2$.
To see this, consider $x = (x_0,x_1,\ldots)$, and let $r_k$ be the period of $x_k$ in $V_k$.
Then the period of $x_{k+1}$ in $V_{k+1}$ is either $r_k$ or $2r_k$.
Since the period $r_1$ of $x_1$ in $V_1$ is either $1$ or $2$, the statement follows.
}
\end{remark}
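The computation in this remark can be verified directly on finite truncations. The sketch below is an illustration, with words truncated to one symbol of $\cA_2$ followed by three binary entries, implementing $A \circ R = (a,1,1,1)(0)(321)$:

```python
def add_one(bits):
    # adding machine on the binary tail, carry moving right
    out = list(bits)
    for i, b in enumerate(out):
        if b == 0:
            out[i] = 1
            return out
        out[i] = 0
    return out

# A o R = (a,1,1,1)(0)(321): the root permutation fixes 0 and cycles 3 -> 2 -> 1 -> 3
perm = {0: 0, 1: 3, 2: 1, 3: 2}

def F_star(word):
    # word[0] in {0,1,2,3}; the section a sits at position 0 of the tuple
    s = perm[word[0]]
    tail = add_one(word[1:]) if s == 0 else word[1:]
    return [s] + tail

w = [1, 0, 1, 1]
assert F_star(F_star(F_star(w))) == w       # points off T_2(0) have period 3

w, orbit = [0, 0, 0, 0], set()
for _ in range(8):
    orbit.add(tuple(w))
    w = F_star(w)
assert len(orbit) == 8                      # on T_2(0) the map is the adding machine
```

The first assertion exhibits the period-$3$ orbits, which, as noted above, rule out a measurable isomorphism with the action of a single automorphism of the binary tree $T$.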
\subsection{Applications}
We prove Theorem~\ref{theorem-appl}.
\begin{proofof}{Theorem~\ref{theorem-appl}}
Suppose that $g \in Aut(T)$ has finite order, and that there is $m \geq 1$ such that for every $i >m$ the action of $g$ leaves $w_i$ unchanged.
We need to show that there exists a rotated odometer $(I,F_\pi)$ for some permutation $\pi$ on $2^m$ intervals, such that $(I,F_\pi,\lambda)$ is measurably isomorphic to $(\partial T, a \circ g , \mu_1)$,
where $\lambda$ is Lebesgue measure, $\mu_1$ is the Bernoulli measure on the binary tree $T$ and $a$ is the adding machine described in Example~\ref{ex-addmach1}.
Consider the partition of $\partial T$ into clopen sets $\partial T(w)$, where $w= w_1 \cdots w_n$. Also, consider a partition of $I^*$ into subintervals $\{[x_{n,p},x_{n,p+1}) \mid 0 \leq p < 2^n\}$. By construction every such subinterval is mapped under $\varphii = \widetilde \varphii_\infty \circ \iota$ into a distinct clopen set $\partial T(w)$, and $\varphii$ is injective on $I$.
Define
$$\widetilde{g}:I \to I, \qquad x \mapsto (\widetilde \varphii_\infty \circ \iota)^{-1} \circ g \circ (\widetilde \varphii_\infty \circ \iota)(x).$$
The map $\widetilde{g}$ is well-defined. Indeed, by Remark~\ref{remark-addedpoints} the points in $I^*$ which do not have preimages in $I$ under $\iota$ are mapped into sequences which are eventually constant with entries equal to $1$. Since $g$ does not change $w_i$ for $i > m$, $w \in \partial T$ is eventually a sequence of $1$'s if and only if $g(w)$ is eventually a sequence of $1$'s. Therefore, the map $\varphii$ is invertible at $g \circ \varphii(x)$. Since $g$ does not change $w_i$ for $i > m$, $\widetilde g$ preserves the order of points in the sets $\{[x_{n,p},x_{n,p+1}) \mid 0 \leq p < 2^n\}$, and it follows that the restriction of $\widetilde{g}$ to every interval $\{[x_{m,p},x_{m,p+1}) \mid 0 \leq p <2^m\}$ is a translation. We conclude that $\widetilde g: I \to I$ is a finite IET.
It is proved in Section~\ref{subsec-periodic} that the von Neumann-Kakutani map $(I, \am,\lambda)$ is measurably isomorphic to $(\partial T, a, \mu_1)$, where $a = (a,1)\sigma$ is the standard adding machine. Set $F_\pi = \am \circ \widetilde g $, then $(I, F_\pi, \lambda)$ is measurably isomorphic to $(\partial T, a \circ g, \mu_1)$. The second statement of Theorem~\ref{theorem-appl} follows from Theorem~\ref{thm-main1}.
\end{proofof}
\begin{remark}\label{remark-notminimal}
{\rm
In the literature a transformation $g$ such that the cyclic group $\langle g \rangle$ acts transitively on every level $V_n$, $n \geq 1$, of the tree $T$, is sometimes called an \emph{odometer}. Every odometer is conjugate to the adding machine in Example~\ref{ex-addmach1}
by a tree automorphism \cite{Pink13}. We note that Theorem~\ref{theorem-appl} only holds for the adding machine, but need not hold for its conjugates.
Indeed, define $a_1 = \sigma$ and $a_2 = (a_1,a_2)$ using the recursive notation, then $\langle a_1,a_2\rangle$ is the infinite dihedral group. Both $a_1$ and $a_2$ have order two, and $h = a_1a_2$ generates an infinite cyclic group whose action on every $V_n$, $n \geq 1$, is transitive. The element $h$ is conjugate to $a = (a,1)\sigma$ but it is not equal to $a$. Recall that $\sigma$ acts on $(w_1,w_2,\ldots) \in \partial T$ by interchanging $0$ and $1$ in the first entry, and keeps the remaining entries fixed. We compute that $h \sigma = \sigma (a_1,a_2) \sigma = (a_2,a_1)$ has order two.
}
\end{remark}
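The order-two computation for $h\sigma$, and the transitivity of $h$ on every level, can be checked on finite words; a sketch (reading products right to left, so that in $h = a_1a_2$ the element $a_2$ acts first):

```python
import itertools

def a1(w):
    # a_1 = sigma: flip the first letter
    return (('1' if w[0] == '0' else '0') + w[1:]) if w else w

def a2(w):
    # a_2 = (a_1, a_2) with trivial root permutation
    if not w:
        return w
    return '0' + a1(w[1:]) if w[0] == '0' else '1' + a2(w[1:])

def h(w):
    # h = a_1 a_2, with a_2 acting first
    return a1(a2(w))

for n in range(1, 9):
    words = [''.join(t) for t in itertools.product('01', repeat=n)]
    # h sigma = (a_2, a_1) has order two
    assert all(h(a1(h(a1(w)))) == w for w in words)
    # <h> acts transitively on every level V_n, so h is an odometer
    w, seen = '0' * n, set()
    for _ in range(2 ** n):
        seen.add(w)
        w = h(w)
    assert len(seen) == 2 ** n
```

For example, at level $2$ the $h$-orbit of $00$ is $00 \to 11 \to 01 \to 10 \to 00$, while $h\sigma$ fixes every word of length $\leq 2$ and acts as an involution on longer words.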
\begin{remark}\label{remark-generalize}
{\rm
We have seen in the proof of Theorem~\ref{theorem-appl} that, under its hypotheses, a finite order element $g \in Aut(T)$ respects the embedding of $I$ into $\partial T$. More precisely, by Remark~\ref{remark-addedpoints} points which do not have preimages under this embedding correspond to sequences which are eventually constant with entries equal to $1$, and if $g \in Aut(T)$ satisfies the hypotheses of Theorem~\ref{theorem-appl}, then it preserves the set of such sequences. Suppose $g \in Aut(T)$ does not satisfy the hypotheses of Theorem~\ref{theorem-appl}, that is, for any $n \geq 1$ there is $(w_i) \in \partial T$ and $m_n \geq n$ such that $g(w_{m_n}) \ne w_{m_n}$. Then the action of $g$ on $\partial T$ may or may not respect the embedding of $I$.
If such $g \in Aut(T)$ respects the embedding of $I$ into $\partial T$ ($h$ in Remark~\ref{remark-notminimal} is an example), then $g$ induces an IET with an \emph{infinite} number of intervals. At the moment we do not have a unified way of describing the dynamics of a composition of such an IET with the von Neumann-Kakutani map, and we pose this as an open question in Problem~\ref{prob-compositions}. An example of an element which does not respect the embedding is, for instance, $g_{even}$ given for any $(w_i) \in \partial T$ by
$$g_{even}(w_{2n}) = w_{2n}+1 \mod 2, \quad g_{even}(w_{2n+1}) = w_{2n+1}, \quad n \geq 1.$$
It is not clear whether such elements induce IETs on the interval $I$, even if one discards the measure $0$ set of orbits in $\partial T$ which do not have preimages under the embedding, and/or allows reflections of subintervals.}
\end{remark}
\end{document} |
\begin{document}
\title[Nuclear and type I crossed products]{Nuclear and type I crossed
products of C*-algebras by group and compact quantum group actions}
\author{Raluca Dumitru and Costel Peligrad}
\address{Raluca Dumitru: Department of Mathematics and Statistics,
University of North Florida, 1 UNF Drive, Jacksonville, Florida 32224;
Institute of Mathematics of the Romanian Academy, Bucharest, Romania; E-mail
address: [email protected]}
\address{Costel Peligrad: Department of Mathematical Sciences, University of
Cincinnati, 610A Old Chemistry Building, Cincinnati, OH 45221; E-mail
address: [email protected]}
\subjclass[2000]{47L65, 20G42}
\maketitle
\begin{abstract}
If $A$ is a C*-algebra, $G$ a locally compact group, $K\subset G$ a compact
subgroup and $\alpha:G\rightarrow Aut(A)$ a continuous homomorphism, let $
A\times_{\alpha}G$ denote the crossed product. In this paper we prove that $
A\times_{\alpha}G$ is nuclear (respectively type I or liminal) if and only
if certain hereditary C*-subalgebras, $S_{\pi}$, $\mathcal{I}_{\pi}\subset
A\times_{\alpha}G$, $\pi\in\widehat{K}$, are nuclear (respectively type I or
liminal). These algebras are the analogs of the algebras of spherical
functions considered by R. Godement for groups with large compact subgroups.
If $K=G$ is a compact group or a compact quantum group, the algebras $
S_{\pi} $ are stably isomorphic with the fixed point algebras $A\otimes
B(H_{\pi })^{\alpha\otimes ad\pi}$ where $H_{\pi}$ is the Hilbert space of
the representation $\pi.$
\end{abstract}
\section{Introduction and preliminary results}
Let $G$ be a locally compact group and $K\subset G$ a compact subgroup. In
\cite{godement} (see also \cite{warner}) the study of $\widehat{G}$, the set
of equivalence classes of irreducible representations of $G$ is reduced to
the study of $\widehat{K}$ and the representations of certain classes of
spherical functions. In this paper we extend this approach to the case of
crossed products of C*-algebras by locally compact group and compact quantum
group actions. Let $(A,G,\alpha)$ be a C*-dynamical system and let $K\subset
G$ be a compact subgroup.
In \cite{peligrad} we defined the C*-algebras $S_{\pi}$, $\mathcal{I}
_{\pi}\subset A\times_{\alpha}G$, $\pi\in\widehat{K}$, where $\widehat{K}$ is
the set of all equivalence classes of irreducible unitary representations of $K.$ These
are the analogs of the algebras of spherical functions. For
the case $K=G$, these algebras were previously defined by Landstad in \cite
{landstad}.
Recently, in \cite{raljfa,ralpelspectra}, we have extended the study of
these algebras to the case of compact quantum group actions on C*-algebras.
If $K=G$ is a compact group or a compact quantum group, the algebras $
S_{\pi} $ are stably isomorphic with the fixed point algebras $A\otimes
B(H_{\pi})^{\alpha\otimes ad\pi}$ where $H_{\pi}$ is the Hilbert space of
the representation $\pi$. In this section we will review some definitions
and preliminary results.
\subsection{Preliminaries on actions of compact groups on C*-algebras.}
\
Let $K$ be a compact group and denote by $\widehat{K}$ the set of all
equivalence classes of irreducible, unitary representations of $K$. Let $
\delta :K\rightarrow Aut(A)$ be an action of $K$ on a C*-algebra $A.$ Let $
\pi \in \widehat{K}$. If $\pi _{ij}(g)$ are the coefficients of $\pi
_{g}$ in a fixed basis of the Hilbert space $H_{\pi }$ of the representation
$\pi ,1\leq i,j\leq d_{\pi }$ we define the character of $\pi ,\chi _{\pi
}(g)=d_{\pi }tr(\pi _{g^{-1}})=d_{\pi }\sum \overline{\pi _{ii}(g)},g\in K$
where $d_{\pi }$ is the dimension of the representation $\pi .$ We consider
the following mapping from $A$ into itself:
\begin{center}
$P^{\pi ,\delta }(a)=\int_{K}\chi _{\pi }(k)\delta _{k}(a)dk$
\end{center}
We define the spectral subspaces of the action $\delta$
\begin{center}
$A_{1}^{\delta }(\pi )=\left\{ a\in A|P^{\pi ,\delta }(a)=a\right\} $, $\pi
\in \widehat{K}$
\end{center}
In particular if $\pi=\pi_{0}$ is the trivial one dimensional
representation, $A_{1}^{\delta}(\pi_{0})=A^{\delta}$ is the algebra of fixed
elements under the action $\delta$. In this case, the projection $
P^{\pi_{0},\delta}$ of $A$ onto $A^{\delta}$ is a completely positive map.
Indeed, the extension of $P^{\pi_{0},\delta}$ to $M_{n}(A)$ is the
projection of this latter C*-algebra onto its fixed point algebra with
respect to the action $\delta\otimes id$, where $id$ is the trivial action of
$K$ on $M_{n}=B(H_{n})$ and $H_{n}$ is the Hilbert space of dimension $n$.
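For instance, that $P^{\pi ,\delta }$ is idempotent, and hence a genuine projection of $A$ onto $A_{1}^{\delta }(\pi )$, can be checked from the Schur orthogonality relations; a sketch:

\begin{center}
$(P^{\pi ,\delta }\circ P^{\pi ,\delta })(a)=\int_{K}\int_{K}\chi _{\pi }(k)\chi _{\pi }(l)\delta _{kl}(a)dkdl=\int_{K}\left( \int_{K}\chi _{\pi }(k)\chi _{\pi }(k^{-1}m)dk\right) \delta _{m}(a)dm=P^{\pi ,\delta }(a),$
\end{center}

since the orthogonality relations $\int_{K}\pi _{ij}(k)\overline{\pi _{rs}(k)}dk=\delta _{ir}\delta _{js}/d_{\pi }$ imply $\chi _{\pi }\ast \chi _{\pi }=\chi _{\pi }$ for the normalization $\chi _{\pi }(g)=d_{\pi }tr(\pi _{g^{-1}})$ used above.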
\subsection{Algebras of spherical functions inside the crossed product}
\
Let now $(A,G,\alpha)$ be a C*-dynamical system with $G$ a locally compact
group and $K\subset G$ a compact subgroup. Denote by $A\times_{\alpha}G$ the
corresponding crossed product (see for instance \cite{pedersen}). Then the
algebra $C(K)$ of all continuous functions on $K$ can be embedded as follows
in the multiplier algebra $M(A\times_{\alpha}G)$ of $A\times_{\alpha}G$: If $
\varphi\in C(K)$ and $y\in C_{c}(G,A)$, the dense subalgebra of $A\times
_{\alpha}G$ consisting of continuous functions with compact support from $G$
to $A$, then
\begin{center}
$(\varphi y)(g)=\int_{K}\varphi(k)\alpha_{k}(y(k^{-1}g))dk$
\end{center}
and
\begin{center}
$(y\varphi)(g)=\int_{K}\varphi(k)y(gk)dk$
\end{center}
In particular, if $\varphi=\chi_{\pi}$, then $\varphi$ is a projection in $
M(A\times_{\alpha}G)$, and if $\pi_{1}$ and $\pi_{2}$ are distinct elements
in $\widehat{K}$, the projections $\chi_{\pi_{1}}$ and $
\chi_{\pi_{2}} $ are orthogonal. We need the following results from [\cite
{peligrad}, Lemma 2.5.]:
\begin{remark}
\label{Lemma2.5JFA}The following statements hold:\newline i) If $\pi_{1} \neq\pi_{2}$ in $\widehat{K}$ then the projections $\chi_{\pi_{1}}$ and
$\chi_{\pi_{2}}$ are orthogonal in $M(A\times_{\alpha}G)$.\newline ii) $\sum_{\pi}\chi_{\pi}=I$, where $I$ is the identity of the bidual $(A\times_{\alpha}G)^{\star\star}$ of $A\times_{\alpha}G$.
\end{remark}
If $\pi\in\widehat{K}$, denote $S_{\pi}=\overline{\chi_{\pi}(A\times_{
\alpha}G)\chi_{\pi}}$, where the closure is taken in the norm topology of $
A\times_{\alpha}G$. Then, it is immediate that $S_{\pi}$ is strongly Morita
equivalent with the two sided ideal $J_{\pi}=\overline{(A\times_{\alpha}G)
\chi_{\pi}(A\times_{\alpha}G)}$. Indeed, it can be easily verified that $X=
\overline{(A\times_{\alpha}G)\chi_{\pi}}$ is an $S_{\pi}-J_{\pi}$
imprimitivity bimodule. We will consider next the action $\delta$ of $K$ on
$A\times_{\alpha}G$ defined as follows: If $y\in C_{c}(G,A)$ set $
\delta_{k}(y)(g)=\alpha_{k}(y(k^{-1}gk))$. Then each $\delta_{k}$ extends to an
automorphism of $A\times_{\alpha}G$ and thus $\delta$ is an action of $K$
on $A\times_{\alpha}G$. The fixed point algebra $\mathcal{I}=
(A\times_{\alpha}G)^{\delta}$ is called in \cite{peligrad} the algebra of
$K$-central elements of the crossed product $A\times_{\alpha}G$. Denote:
\begin{center}
$\mathcal{I}_{\pi}=\mathcal{I}\cap S_{\pi}$
\end{center}
Then, by [\cite{peligrad}, Proposition 2.7], we have:
\begin{remark}
\label{JFA2.7}$S_{\pi}$ is $\ast$-isomorphic with $\mathcal{I}_{\pi}\otimes B(H_{\pi})$.
\end{remark}
If $G=K$ is a compact group, then by [\cite{landstad}, Lemma 3] we have:
\begin{remark}
\label{landstad lemma3}For every $\pi\in\widehat{G}$, $\mathcal{I}_{\pi}$ is $\ast$-isomorphic with $(A\otimes B(H_{\pi}))^{\alpha\otimes ad\pi}$.
\end{remark}
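As a simple consistency check on the two previous remarks (an illustrative special case, not taken from the cited papers), let $G=K$ be compact and let $\alpha$ be the trivial action on $A$. Then $A\times_{\alpha}K\cong A\otimes C^{\ast}(K)\cong\bigoplus_{\pi\in\widehat{K}}A\otimes B(H_{\pi})$ and $\chi_{\pi}$ becomes the central projection onto the $\pi$-th summand, so that
\begin{center}
$S_{\pi}\cong A\otimes B(H_{\pi})$ \quad and \quad $\mathcal{I}_{\pi}\cong(A\otimes B(H_{\pi}))^{\alpha\otimes ad\pi}=A\otimes B(H_{\pi})^{ad\pi}=A\otimes\mathbb{C}1\cong A$,
\end{center}
the last equality by Schur's lemma. In particular $S_{\pi}\cong\mathcal{I}_{\pi}\otimes B(H_{\pi})$, in agreement with Remark \ref{JFA2.7}.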
\subsection{Compact quantum group actions on C*-algebras}
\
Let $\mathcal{G}=(B,\Delta)$ be a compact quantum group (\cite{wor1,wor2}).
Here, $B$ is a unital C*-algebra (which is the analog of the C*-algebra of
continuous functions in the group case) and $\Delta:B\rightarrow
B\otimes_{\min}B$ a $\ast$-homomorphism such that:
i) $(\Delta\otimes\iota)\Delta=(\iota\otimes\Delta)\Delta$, where $
\iota:B\rightarrow B$ is the identity map and
ii) $\overline{\Delta(B)(1\otimes B)}=\overline{\Delta(B)(B\otimes1)}
=B\otimes_{\min}B$.
Let $\widehat{\mathcal{G}}$ denote the set of all equivalence classes of
unitary representations of $\mathcal{G}$ or equivalently, the set of all
equivalence classes of irreducible unitary co-representations of $B$. For
each $\pi\in\widehat{\mathcal{G}}$, $\pi=\left[ \pi_{ij}\right]$, $
\pi_{ij}\in B$, $1\leq i,j\leq d_{\pi}$, where $d_{\pi}$ is the dimension of $
\pi$, let $\chi_{\pi}=\sum_{i}\pi_{ii}$ be the character of $\pi$ and let $
F_{\pi}\in B(H_{\pi})$ be the positive, invertible matrix that intertwines $
\pi$ with its double contragredient representation and such that $
tr(F_{\pi})=tr(F_{\pi}^{-1})=M_{\pi}$. Then, with the notations in \cite
{wor1}, $F_{\pi}=\left[ f_{1}(\pi_{ij})\right]$, where $f_{1}$ is a linear
functional on the $\ast$-subalgebra $\mathcal{B}\subset B$ that is linearly
spanned by $\left\{\pi_{ij}\mid\pi\in\widehat{\mathcal{G}},1\leq i,j\leq
d_{\pi}\right\}$. If $a\in B$ (respectively $\mathcal{B}$) and $\xi$ is a
linear functional on $B$ (respectively $\mathcal{B}$) we denote (\cite
{wor1,wor2})
\begin{center}
$a\ast\xi=(\xi\otimes\iota)(\Delta(a))\in B$
\end{center}
Denote also by $\xi\cdot a$ the following linear functional on $B$
(respectively $\mathcal{B}$):
\begin{center}
$(\xi\cdot a)(b)=\xi(ab)$
\end{center}
If $h$ is the Haar state on $B$ let $h_{\pi}=M_{\pi}h\cdot(\chi_{\pi}\ast
f_{1})$. If $v_{r}$ is the right regular representation of $\mathcal{G}$,
the Fourier transform of $a\in B$ is defined as follows:
\begin{center}
$\widehat{a}=\mathcal{F}_{v_{r}}(a)=(\iota\otimes h\cdot a)(v_{r}^{\star})$
\end{center}
where $\mathcal{F}_{v_{r}}$ is the Fourier transform as defined by
Woronowicz in \cite{wor2}. Then the norm closure of the set $\widehat{B}
=\left\{ \widehat {a}|a\in B\right\} $ is a C*-algebra called the dual of $B$
(\cite{baaj,wor2}) and $\widehat{B}$ is a subalgebra of the algebra of
compact operators $\mathcal{C}(H_{h})$ on the Hilbert space $H_{h}$ of the
GNS representation of $B$ associated with the Haar state $h$.
Let $A$ be a C*-algebra and $\delta:A\rightarrow M(A\otimes B)$ be a $\ast$-homomorphism of $A$ into the multiplier algebra of the minimal tensor
product $A\otimes B$. Then $\delta$ is called an action of $\mathcal{G}$ on $
A$ (or a coaction of $B$ on $A$) if the following two conditions hold:
\newline
a) $(\iota\otimes\Delta)\delta=(\delta\otimes\iota)\delta$ and\newline
b) $\overline{\delta(A)(1\otimes B)}=A\otimes B$.
Let $\pi\in\widehat{\mathcal{G}}$. Denote $P^{\pi,\delta}(a)=(\iota\otimes
h_{\pi})(\delta(a)),a\in A$. Then $P^{\pi,\delta}$ is a contractive linear
map from $A$ into itself. In particular, if $\pi=\pi_{0}$ is the trivial one-dimensional representation, then $P^{\pi_{0},\delta}=(\iota\otimes h)\delta$
is the completely positive projection of norm $1$ of $A$ onto the fixed
point C*-subalgebra $A^{\delta}$.
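To connect this with the classical picture (a heuristic remark; the exact formula depends on the conventions of \cite{wor1,wor2}), suppose $\mathcal{G}=(C(K),\Delta)$ arises from an ordinary compact group $K$ and $\delta$ from an action $\alpha$ of $K$ on $A$. Then $F_{\pi}=1$ and $M_{\pi}=d_{\pi}$, and $P^{\pi,\delta}$ reduces, up to a possible conjugate on the character, to the familiar spectral projection onto the $\pi$-isotypic component:
\begin{center}
$P^{\pi,\delta}(a)=d_{\pi}\int_{K}\overline{\chi_{\pi}(k)}\,\alpha_{k}(a)\,dk$.
\end{center}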
The crossed product $A\times_{\delta}\mathcal{G}$ is, by definition (\cite
{baaj,boca}), the norm closure of the set $\left\{
(\pi_{u}\otimes\pi_{h})(\delta(a))(1\otimes\widehat{b})\mid a\in A,b\in
B\right\}$, where $\pi_{u}$ is the universal representation of $A$ and $
\pi_{h}$ is the GNS representation of $B$ associated with the Haar state $h$.
Let $\pi\in\widehat{\mathcal{G}}$. If we denote $p_{\pi}=(\iota\otimes
h_{\pi})(v_{r}^{\star})$, then $\left\{ p_{\pi}\right\} _{\pi\in\widehat {
\mathcal{G}}}$ are mutually orthogonal projections in $\widehat{B}$ and
therefore in $A\times_{\delta}\mathcal{G}$ (\cite{boca,raljfa}). For $\pi\in
\widehat{\mathcal{G}}$ denote $\mathcal{S}_{\pi}=\overline{
p_{\pi}(A\times_{\delta}\mathcal{G})p_{\pi}}$. In [\cite{raljfa}, Lemma 3.3]
it is shown that $ad(v_{r})$ is an action of $\mathcal{G}$ on the crossed
product $A\times_{\delta}\mathcal{G}$ and the fixed point algebra $\mathcal{
I}=(A\times_{\delta}\mathcal{G})^{ad(v_{r})}$ of this action plays the role
of the $K$-central elements in the case of groups. Let $\mathcal{I}_{\pi}=
\mathcal{I}\cap\mathcal{S}_{\pi}$. Let $\delta_{\pi}$ be the following action of $
\mathcal{G}$ on $A\otimes B(H_{\pi})$:
\begin{center}
$\delta_{\pi}(a\otimes m)=(\pi)_{23}(\delta(a))_{13}(1\otimes m\otimes
1)(\pi^{\ast})_{23}$
\end{center}
where the leg-numbering notation is the usual one (\cite{baaj,wor2}). The
above $\delta_{\pi}$ equals $\delta\otimes ad(\pi)$ in the case of compact
groups. Then, we have:
\begin{remark}
\label{ralucaanalogsof2.2,2.8andlandstadlemma}The following statements hold
true:\newline i) The projections $\left\{ p_{\pi}\right\} _{\pi\in
\widehat{\mathcal{G}}}$ are mutually orthogonal and $\sum_{\pi}p_{\pi}=1$ in
the bidual $(A\times_{\delta}\mathcal{G})^{\star\star}$\newline ii)
$\mathcal{S}_{\pi}$ is $\star$-isomorphic with $\mathcal{I}_{\pi}\otimes
B(H_{\pi})$\newline iii) $\mathcal{I}_{\pi}$ is $\star$-isomorphic with
$(A\otimes B(H_{\pi}))^{\delta_{\pi}}$
\end{remark}
\begin{proof}
Part i) is [\cite{raljfa}, Section 2.1., Equation (2) and the discussion
after that equation]. Part ii) is [\cite{raljfa}, Remark 3.5.] and Part iii)
is [\cite{raljfa}, Proposition 4.8.].
\end{proof}
\section{Nuclear and type I crossed products}
In this section we will state and prove our main results. We give necessary
and sufficient conditions for a crossed product to be nuclear or type I. Our
conditions are given in terms of the algebras of spherical functions inside
the crossed product and in case of compact groups or compact quantum groups,
in terms of the fixed point algebras of $A\otimes B(H_{\pi})$ for the
actions $\delta\otimes ad(\pi)$.
Recall that a C*-algebra $C$ is said to be of type I if for every factor
representation $T$ of $C$ the von Neumann factor $T(C)^{\prime\prime}$ is a
type I factor. $C$ is called liminal if for every irreducible representation
$T$ of $C$, $T(C)$ consists of compact operators.
A C*-algebra $C$ is called nuclear if its bidual, $C^{\ast\ast}$, is an
injective von Neumann algebra, i.e. if and only if there is a projection of
norm one from $B(H_{u})$ onto $C^{\ast\ast}$, where $H_{u}$ is the Hilbert
space of the universal representation of $C$. With the notations from
Section 1, we have the following:
\begin{remark}
\label{Cor 2.8JFA}Let $(A,G,\alpha)$ be a C*-dynamical system with $G$ a locally compact group and let $K\subset G$ be a compact subgroup. The following three statements hold:\newline i) $S_{\pi}$ is nuclear if and only if $\mathcal{I}_{\pi}$ is nuclear\newline ii) $S_{\pi}$ is liminal if and only if $\mathcal{I}_{\pi}$ is liminal\newline iii) $S_{\pi}$ is type I if and only if $\mathcal{I}_{\pi}$ is type I
\end{remark}
\begin{proof}
These statements follow from Remark \ref{JFA2.7}.
\end{proof}
The following is the analog of the above Remark for the case of compact
quantum group actions:
\begin{remark}
\label{followsfromRalRemark3.5}Let $\mathcal{G}=(B,\Delta)$ be a compact quantum group and $\delta$ an action of $\mathcal{G}$ on a C*-algebra $A$. The
following statements hold:\newline i) $\mathcal{S}_{\pi}$ is nuclear if and only if $(A\otimes B(H_{\pi}))^{\delta_{\pi}}$ is nuclear\newline ii)
$\mathcal{S}_{\pi}$ is liminal if and only if $(A\otimes B(H_{\pi}))^{\delta_{\pi}}$ is liminal\newline iii) $\mathcal{S}_{\pi}$ is type I if and only if $(A\otimes B(H_{\pi}))^{\delta_{\pi}}$ is type I
\end{remark}
\begin{proof}
The result follows from Remark \ref{ralucaanalogsof2.2,2.8andlandstadlemma}.
\end{proof}
\subsection{Type I crossed products}
\
We start with the following general result:
\begin{lemma}
\label{typeIlemma}Let $C$ be a C*-algebra and $M(C)$ the multiplier algebra of $C.$ Let $\left\{ p_{\lambda}\right\} \subset M(C)$ be a family of mutually
orthogonal projections of sum 1 in $C^{\ast\ast}$, the bidual of $C$. The following conditions are equivalent:\newline i) $C$ is type I (respectively
liminal) \newline ii) The hereditary subalgebras $S_{\lambda}=p_{\lambda}Cp_{\lambda}\subset C$ are type I (respectively liminal) for every $\lambda$.
\end{lemma}
\begin{proof}
Assume that $C$ is type I (respectively liminal). Then $S_{\lambda}$ are
type I (respectively liminal) as C*-subalgebras of a type I (liminal)
C*-algebra.
Assume now that all $S_{\lambda}$ are type I (liminal). Let $T$ be a
nondegenerate factor representation (respectively an irreducible
representation) of $C$. Since, by assumption, $\sum p_{\lambda}=1$ it
follows that $\sum p_{\lambda}C$ is norm dense in $C$. Therefore, there is a
$\lambda$ such that the restriction of $T$ to $p_{\lambda}C,$ $
T|_{p_{\lambda}C}\neq0$. Then $T|_{J_{\lambda}}\neq0$, where $J_{\lambda}=
\overline{Cp_{\lambda}C}$ is the two sided ideal of $C$ generated by $
p_{\lambda}$. Since $T$ is a factor representation of $C$ (respectively an
irreducible representation of $C$) and the bicommutant $T(J_{\lambda})^{^{
\prime\prime}}$ is a nonzero weakly closed ideal of $T(C)^{\prime\prime}$ it
follows that $T(J_{\lambda})^{^{\prime\prime}}=T(C)^{\prime\prime}$.
Therefore $T$ has the same type as $T|_{J_{\lambda}}$. On the other hand,
it can be checked that $J_{\lambda}$ is strongly Morita equivalent with $
S_{\lambda}$ in the sense of Rieffel, \cite{rieffel}, with imprimitivity
bimodule $Cp_{\lambda}$. Therefore, since $S_{\lambda}$ is assumed to be
type I (respectively liminal), it follows from the discussion in \cite
{rieffel} (respectively \cite{fell}) that $J_{\lambda}$ is type I
(respectively liminal). It then follows that the representation $T$ is a
type I representation (respectively $T(C)$ consists of compact operators).
Since $T$ was arbitrary, we are done.
\end{proof}
We will state next some consequences of the above Lemma.
\begin{theorem}
\label{typeIlocallycompact}Let $(A,G,\alpha)$ be a C*-dynamical system with $G$ a locally compact group and let $K\subset G$ be a compact subgroup. Then
the following conditions are equivalent:\newline i) $A\times_{\alpha}G$ is type I (respectively liminal)\newline ii) The hereditary C*-subalgebras
$S_{\pi}\subset A\times_{\alpha}G$, $\pi\in\widehat{K}$ are type I (respectively liminal)\newline iii) The C*-subalgebras of $K$-central
elements, $\mathcal{I}_{\pi}\subset S_{\pi},\pi\in\widehat{K}$ are type I (respectively liminal).
\end{theorem}
\begin{proof}
The equivalence of the conditions i)-iii) follows from Remarks \ref
{Lemma2.5JFA} and \ref{Cor 2.8JFA} and Lemma \ref{typeIlemma}.
\end{proof}
If $G=K$ is a compact group, then the conditions i)-iii) in the above
theorem are equivalent with:
iv) The fixed point algebra $A^{\alpha}$ is type I (respectively liminal) [
\cite{gootman}, Theorem 3.2].
We will prove next an analogous result for compact quantum group actions. In
[\cite{boca}, Theorem 19] it is shown that the crossed product of a
C*-algebra by an ergodic action of a compact quantum group is a direct sum
of full algebras of compact operators, hence a liminal C*-algebra. Since, in
the ergodic case, $\mathcal{S}_{\pi}$ are finite dimensional, the next
result is an extension of Boca's result to the case of general compact
quantum group actions.
For compact quantum groups we have the following result:
\begin{theorem}
\label{typeIquantum}Let $\mathcal{G=(}B,\Delta)$ be a compact quantum group and $\delta$ an action of $\mathcal{G}$ on a C*-algebra $A$. The following
conditions are equivalent:\newline i) $A\times_{\delta}\mathcal{G}$ is type I (respectively liminal)\newline ii) The hereditary C*-subalgebras
$\mathcal{S}_{\pi}\subset A\times_{\delta}\mathcal{G}$, $\pi\in\widehat{\mathcal{G}}$, are type I (respectively liminal)\newline iii) The
C*-subalgebras $\mathcal{I}_{\pi}\subset\mathcal{S}_{\pi},\pi\in \widehat{\mathcal{G}}$, are type I (respectively liminal).\newline iv) The
C*-algebras $(A\otimes B(H_{\pi}))^{\delta_{\pi}}$, $\pi\in\widehat{\mathcal{G}}$ are type I (respectively liminal).
\end{theorem}
\begin{proof}
The result follows from Remark \ref{ralucaanalogsof2.2,2.8andlandstadlemma}
and Lemma \ref{typeIlemma}.
\end{proof}
\subsection{Nuclear crossed products}
\
We start with the following lemma which is certainly known but we could not
find a reference for it:
\begin{lemma}
\label{prepLemma}A C*-algebra $C$ is nuclear if and only if for every state $\varphi$ of $C$, $T_{\varphi}(C)^{\prime\prime}$ is an injective von Neumann
algebra, where $T_{\varphi}$ is the GNS representation of $C$ associated with $\varphi$.
\end{lemma}
\begin{proof}
If $C$ is nuclear then $C^{\ast\ast}$ is an injective von Neumann algebra [
\cite{effros}, Theorem 6.4.]. Therefore, so is $T_{\varphi}(C)^{\prime
\prime} $ which is isomorphic with an algebra of the form $eC^{\ast\ast}$
for a certain projection, $e\in(C^{\ast\ast})^{\prime}$.
Conversely, if $T_{\varphi}(C)^{\prime\prime}$ is injective for every state $
\varphi$, let $\left\{ \varphi_{\iota}\right\} $ be a maximal family of
states for which the corresponding cyclic representations $
T_{\varphi_{\iota}}$ are disjoint. Then $T_{\varphi_{\iota}}$ and $T=\oplus
T_{\varphi_{\iota}}$ can be extended to normal representations $\overline{
T_{\varphi_{\iota}}\ }$ and $\overline{T\ }$ of $C^{\ast\ast}$ with $
\overline{T\ }$ a normal isomorphism. Therefore, $C^{\ast\ast}$ is
isomorphic with $\oplus T_{\varphi_{\iota}}(C)^{\prime\prime}$. Since all $
T_{\varphi_{\iota}}(C)^{\prime\prime}$ are injective (by assumption), from [
\cite{effros}, Proposition 3.1.], it follows that $C^{\ast\ast}$ is
injective and thus $C$ is nuclear.
\end{proof}
Throughout the rest of this section all algebras, groups and quantum groups
are assumed to be separable. The following Lemma is the analog of Lemma \ref
{typeIlemma} for the case of nuclear crossed products.
\begin{lemma}
\label{nuclearlemma}Let $C$ be a separable C*-algebra and $\left\{q_{\lambda}\right\} \subset M(C)$ be a family of mutually orthogonal projections such that $\sum_{\lambda}q_{\lambda}=1$ in $C^{\star\star}$. The following statements are equivalent:\newline i) $C$ is nuclear\newline ii) The hereditary C*-subalgebras $S_{\lambda}=q_{\lambda}Cq_{\lambda}\subset C$ are nuclear for all $\lambda$.
\end{lemma}
\begin{proof}
Assume first that $C$ is nuclear. Then, by [\cite{choi}, Corollary 3.3 (4)],
every hereditary subalgebra of $C$ is nuclear. Hence $S_{\lambda}$ is
nuclear for every $\lambda$.\newline
Assume now that ii) holds, that is, all $S_{\lambda}$ are nuclear
C*-algebras. We will show that for every cyclic representation $T_{\varphi}$
of $C$, $T_{\varphi} (C)^{\prime\prime}$ is injective and the result will
follow from the previous lemma. Let $\varphi$ be a state of $C$. Then, by
reduction theory, $\overline{T_{\varphi}}(C^{\ast\ast})=T_{\varphi}(C)^{
\prime\prime}$ is the direct integral of factors $\overline{T_{\psi}}
(C^{\ast\ast})=T_{\psi }(C)^{\prime\prime}$ where $\psi$ are factor states
of $C$, $\overline {T_{\varphi}}(C^{\ast\ast})=\int\overline{T_{\psi}}
(C^{\ast\ast})d\mu (\psi)$ where $\mu$ is the central measure associated
with the state $\varphi$ and the integral is taken over the state space of $
C $ [\cite{sakai}, Theorem 3.5.2.]. Applying [\cite{connes}, Proposition
6.5.], it follows that $T_{\varphi}(C)^{\prime\prime }$ is injective if and
only if almost all of the factors $T_{\psi}(C)^{\prime\prime}$ are
injective. We have, therefore, reduced our problem to the following:
Assuming that all hereditary subalgebras $S_{\lambda}$ are nuclear, show
that for every cyclic factor representation $T$ of $C$ we have that $
T(C)^{\prime\prime}$ is an injective von Neumann algebra.
Let $T$ be a nondegenerate cyclic factor representation of $C$. Since $
\sum_{\lambda}q_{\lambda}=1$ in $C^{\ast\ast}$ there is a $\lambda$ such
that the restriction $T|_{q_{\lambda}C}\neq0$. Hence the restriction of $T$
to the closed two sided ideal $J_{\lambda}=\overline{Cq_{\lambda}C}$ is non
zero. Since $T$ is a factor representation and $J_{\lambda}$ is a two sided
ideal it follows that $T(C)^{\prime\prime}=T(J_{\lambda})^{\prime\prime}$.
We show next that under our assumptions $T(J_{\lambda})^{\prime\prime}$ is
injective and thus $T(C)^{\prime\prime}$ is injective. We noticed above that
$S_{\lambda}$ is strongly Morita equivalent with $J_{\lambda}$. Since $C$ is
separable, so are $S_{\lambda}$ and $J_{\lambda}$. By [\cite{brown},
Theorem 1.2] $S_{\lambda}$ and $J_{\lambda}$ are stably isomorphic.
Since $S_{\lambda}$ is nuclear it follows that $J_{\lambda}$ is
nuclear. By Lemma \ref{prepLemma} we have that $T(J_{\lambda})^{\prime
\prime}$ is injective and the proof is complete.
\end{proof}
From the proof of the previous lemma it follows:
\begin{corollary}
A separable C*-algebra $C$ is nuclear if and only if for every factor state $\psi$ of $C$, $T_{\psi}(C)^{\prime\prime}$ is an injective von Neumann algebra, where $T_{\psi}$ is the GNS representation of $C$ associated with $\psi$.
\end{corollary}
We can now state our main results of this section.
\begin{theorem}
\label{nuclearlocallycompact}Let $(A,G,\alpha)$ be a C*-dynamical system with $G$ a locally compact group and let $K\subset G$ be a compact subgroup. Then
the following conditions are equivalent:\newline i) $A\times_{\alpha}G$ is a nuclear C*-algebra\newline ii) The hereditary C*-subalgebras $S_{\pi}\subset
A\times_{\alpha}G$, $\pi\in\widehat{K}$ are nuclear\newline iii) The C*-subalgebras of $K$-central elements, $\mathcal{I}_{\pi}\subset S_{\pi}
,\pi\in\widehat{K}$ are nuclear.\newline Furthermore, any of the previous three equivalent conditions implies\newline iv) $A$ is nuclear.\newline In addition,
if $G$ is amenable, i.e.\ if the group C*-algebra $C^{\ast}(G)$ is nuclear, then the conditions i)-iv) are equivalent.
\end{theorem}
\begin{proof}
The equivalence of the conditions i)-iii) follows from Remarks \ref
{Lemma2.5JFA} and \ref{Cor 2.8JFA} and Lemma \ref{nuclearlemma}. On the
other hand, if the crossed product, $A\times_{\alpha}G$ is nuclear, then,
applying [\cite{raeburn}, Theorem 4.6.], it follows that $
A\times_{\alpha}G\times_{\widehat{\alpha}}\widehat{G}$ is nuclear, where $
\widehat{\alpha}$ is the dual coaction. Since by biduality this latter
crossed product is isomorphic with $A\otimes\mathcal{C(H)}$ where $\mathcal{
C(H)}$ is the C*-algebra of compact operators on a certain Hilbert space, $
\mathcal{H}$, it follows that $A$ is a nuclear C*-algebra. Finally, if $G$
is amenable and $A$ is nuclear, then by [\cite{green}, Proposition 14], the
crossed product $A\times_{\alpha}G$ is nuclear and therefore in this case iv)
$\implies$i).
\end{proof}
In the proof of the implication i)$\Longrightarrow$iv) of the
above theorem we have used the fact that every locally compact group is
co-amenable and Raeburn's result. The next result is the analog of the
previous one for the case of compact quantum groups. A compact quantum
group, $\mathcal{G}=(B,\Delta )$, is automatically amenable since $\widehat{B
}$ is a subalgebra of compact operators, but not co-amenable, in general,
since $B$ is not necessarily nuclear.
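For example (standard facts recalled for orientation), every classical compact group $K$, viewed as the compact quantum group $(C(K),\Delta)$, is co-amenable in this sense, since the commutative C*-algebra $C(K)$ is nuclear. By contrast, the dual of the free group $F_{2}$, viewed as a compact quantum group with $B=C_{r}^{\ast}(F_{2})$, is not co-amenable, since $C_{r}^{\ast}(F_{2})$ is not a nuclear C*-algebra.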
We will state next the corresponding result for compact quantum group
actions.
\begin{theorem}
Let $(A,\mathcal{G},\delta)$ be a quantum C*-dynamical system with $\mathcal{G}=(B,\Delta)$ a compact quantum group. The following three
conditions are equivalent:\newline i) $A\times_{\delta}\mathcal{G}$ is nuclear\newline ii) The hereditary C*-subalgebras $\mathcal{S}_{\pi}\subset
A\times_{\delta}\mathcal{G}$ are nuclear\newline iii) The C*-algebras $(A\otimes B(H_{\pi}))^{\delta_{\pi}}$ are nuclear.\newline Furthermore, each
of the above conditions is implied by \newline iv) $A$ is a nuclear C*-algebra.\newline In addition, if the quantum group $\mathcal{G}$ is
co-amenable, i.e.\ if $B$ is a nuclear C*-algebra, then the conditions i)-iv) are equivalent with the following:\newline v) $A^{\delta}$ is nuclear.
\end{theorem}
\begin{proof}
The equivalence of i)-iii) follows from Remark \ref
{ralucaanalogsof2.2,2.8andlandstadlemma} and Lemma \ref{nuclearlemma}. We
now prove that iv) implies iii). Let $\pi\in\widehat{\mathcal{G}}$. If $A$
is nuclear, then $A\otimes B(H_{\pi})$ is a nuclear C*-algebra. The
projection of $A\otimes B(H_{\pi})$ onto the fixed point algebra $(A\otimes
B(H_{\pi }))^{\delta_{\pi}}$ is obviously a completely positive map.
Therefore, by [\cite{choi}, Corollary 3.4. (4)] it follows that $(A\otimes
B(H_{\pi}))^{\delta_{\pi}}$ is nuclear. Assume now that $\mathcal{G}$ is
co-amenable. Therefore, $B$ is nuclear. Then, by applying [\cite{doplicher},
Corollary 7] it follows that $A^{\delta}$ is nuclear if and only if $A$ is
nuclear and thus v)$\iff$iv). Since $\mathcal{S}_{\pi_{0}}$ is isomorphic with $
A^{\delta}$, we have that iii)$\implies$iv) and the proof is complete.
\end{proof}
\end{document} |
\begin{document}
\begin{abstract}
Let $M(\alpha)$ denote the Mahler measure of the algebraic number $\alpha$. In a recent paper, Dubickas and Smyth constructed a metric version of the Mahler measure
on the multiplicative group of algebraic numbers. Later, Fili and the author used similar techniques to study a non-Archimedean version. We show how to generalize
the above constructions in order to associate, to each point in $(0,\infty]$, a metric version $M_x$ of the Mahler measure, each having a triangle inequality
of a different strength. We are able to compute $M_x(\alpha)$ for sufficiently small $x$, identifying, in the process, a function $\bar M$ with certain minimality
properties. Further, we show that the map $x\mapsto M_x(\alpha)$ defines a continuous function on the positive real numbers.
\end{abstract}
\title{A collection of metric Mahler measures}
\section{Introduction} \label{Intro}
Let $f$ be a polynomial with complex coefficients given by
\begin{equation*}
f(z) = a\cdot\prod_{n=1}^N(z-\alpha_n).
\end{equation*}
We define the {\it (logarithmic) Mahler measure} $M$ of $f$ by
\begin{equation*}
M(f) = \log|a|+\sum_{n=1}^N\log^+|\alpha_n|.
\end{equation*}
If $\alpha$ is a non-zero algebraic number, we define the Mahler measure of $\alpha$ by
\begin{equation*}
M(\alpha) = M(\min_\mathbb Z(\alpha)).
\end{equation*}
In other words, $M(\alpha)$ is simply the Mahler measure of the minimal polynomial of $\alpha$ over $\mathbb Z$. It is well-known that
\begin{equation} \label{MahlerInverses}
M(\alpha) = M(\alpha^{-1})
\end{equation}
for all algebraic numbers $\alpha$.
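For concreteness, here are two standard values computed directly from the definition: if $p/q$ is a rational number in lowest terms, its minimal polynomial over $\mathbb Z$ is $qz-p$, so $M(p/q)=\log\max\{|p|,|q|\}$; in particular $M(2)=M(1/2)=\log 2$, illustrating \eqref{MahlerInverses}. Similarly, the golden ratio $(1+\sqrt 5)/2$ has minimal polynomial $z^2-z-1$, whose roots are $(1\pm\sqrt 5)/2$, only one of which lies outside the unit circle, so $M((1+\sqrt 5)/2)=\log((1+\sqrt 5)/2)$.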
It is a consequence of a theorem of Kronecker that $M(\alpha) = 0$ if and only if $\alpha$ is a root of unity. In a famous 1933 paper, D.H. Lehmer \cite{Lehmer}
asked whether there exists a constant $c>0$ such that $M(\alpha) \geq c$ in all other cases. He could find no algebraic number with Mahler measure smaller than
that of
\begin{equation*}
\ell(x) = x^{10}+x^9-x^7-x^6-x^5-x^4-x^3+x+1,
\end{equation*}
which is approximately $0.1623$. Although the best known general lower bound is
\begin{equation*}
M(\alpha) \gg \left(\frac{\log\log\deg\alpha}{\log\deg\alpha}\right)^3,
\end{equation*}
due to Dobrowolski \cite{Dobrowolski},
uniform lower bounds have been established in many special cases (see \cite{BDM, Schinzel, Smyth}, for instance). Furthermore, numerical evidence provided by
Mossinghoff \cite{Moss, MossWeb} and Mossinghoff, Pinner and Vaaler \cite{MPV} suggests there does, in fact, exist such a constant $c$.
This leads to the following conjecture, which we will now call Lehmer's conjecture.
\begin{conj}[Lehmer's conjecture]
There exists a real number $c > 0$ such that if $\alpha\in\overline\ratt$ is not a root of unity then $M(\alpha) \geq c$.
\end{conj}
In an effort to create a geometric structure on the multiplicative group of algebraic numbers $\overline\ratt$, Dubickas and Smyth \cite{DubSmyth2} constructed
a metric version of the Mahler measure. Let us briefly recall this construction. Write
\begin{equation} \label{RestrictedProduct}
\mathcal X(\overline\ratt) = \{(\alpha_1,\alpha_2,\ldots): \alpha_n = 1\ \mathrm{for\ all\ but\ finitely\ many}\ n\}
\end{equation}
to denote the restricted infinite direct product of $\overline\ratt$. Let $\tau:\mathcal X(\overline\ratt) \to \overline\ratt$ be defined by
\begin{equation*}
\tau(\alpha_1,\alpha_2,\ldots) = \prod_{n=1}^\infty \alpha_n
\end{equation*}
and note that $\tau$ is indeed a group homomorphism. The {\it metric Mahler measure} $M_1$ of $\alpha$ is given by
\begin{equation*}
M_1(\alpha) = \inf\left\{ \sum_{n=1}^\infty M(\alpha_n): (\alpha_1,\alpha_2,\ldots)\in \tau^{-1}(\alpha)\right\}.
\end{equation*}
We note that the infimum in the definition of $M_1(\alpha)$ is taken over all ways of writing $\alpha$ as a product of elements in $\overline\ratt$.
As a result of this construction, the function $M_1$ satisfies the triangle inequality
\begin{equation} \label{MahlerTriangle}
M_1(\alpha\beta) \leq M_1(\alpha) + M_1(\beta)
\end{equation}
for all $\alpha,\beta\in \overline\ratt$. It can be shown that $M_1(\alpha) = 0$ if and only if $\alpha$ is a root of unity, and moreover,
$M_1$ is well-defined on the quotient group $\mathcal G = \overline\ratt/\mathrm{Tor}(\overline\ratt)$. Using \eqref{MahlerInverses} and \eqref{MahlerTriangle}, we find that the
map $(\alpha,\beta)\mapsto M_1(\alpha\beta^{-1})$ is a metric on $\mathcal G$. It is noted in \cite{DubSmyth2} that this map yields the discrete topology
if and only if Lehmer's conjecture is true.
Following the strategy of \cite{DubSmyth2}, Fili and the author \cite{FiliSamuels} examined a non-Archimedean version of the metric Mahler measure.
That is, define the {\it ultrametric Mahler measure} $M_\infty$ of $\alpha$ by
\begin{equation*}
M_\infty(\alpha) = \inf\left\{ \max_{n\geq 1} M(\alpha_n): (\alpha_1,\alpha_2,\ldots)\in\tau^{-1}(\alpha)\right\},
\end{equation*}
replacing the sum in the definition of $M_1$ by a maximum. In this case, $M_\infty$ has the strong triangle inequality
\begin{equation*}
M_\infty(\alpha\beta) \leq \max \{M_\infty(\alpha),M_\infty(\beta)\}
\end{equation*}
for all $\alpha,\beta\in \overline\ratt$. Once again, we are able to verify that
$M_\infty$ is well-defined on $\mathcal G$. Here, the map $(\alpha,\beta)\mapsto M_\infty(\alpha\beta^{-1})$ yields a non-Archimedean metric on $\mathcal G$ which induces
the discrete topology if and only if Lehmer's conjecture is true.
In view of the definitions of $M_1$ and $M_\infty$, it is natural to define a collection of intermediate metric Mahler measures in the following way.
If $x\in (0,\infty]$, we define $M_x:\mathcal X(\overline\ratt)\to [0,\infty)$ by
\begin{equation*}
M_x(\alpha_1,\alpha_2,\ldots) = \left\{
\begin{array}{ll}
\displaystyle \left( \sum_{n=1}^{\infty} M(\alpha_n)^x\right)^{1/x} & \mathrm{if}\ x\in (0,\infty) \\
& \\
\displaystyle \max_{n\geq 1}\{M(\alpha_n)\} & \mathrm{if}\ x = \infty.
\end{array}
\right.
\end{equation*}
In the case that $x \geq 1$, we see that $M_x(\alpha_1,\alpha_2,\ldots)$ is the $L^x$ norm of the vector $(M(\alpha_1),M(\alpha_2),\ldots)$.
Then we define the {\it $x$-metric Mahler measure} by
\begin{equation} \label{xMetricMahlerDef}
M_x(\alpha) = \inf\{M_x(\bar\alpha): \bar\alpha\in\tau^{-1}(\alpha)\}
\end{equation}
and note that this definition generalizes those of $M_1$ and $M_\infty$. Indeed, the $1$- and $\infty$-metric Mahler measures are simply the
metric and ultrametric Mahler measures, respectively.
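To see how the parameter $x$ interacts with factorizations, observe that every point of $\tau^{-1}(\alpha)$ furnishes an upper bound for $M_x(\alpha)$. For instance, if $p$ is a rational prime, the factorization $p^2=p\cdot p$ gives
\begin{equation*}
M_x(p^2) \leq \left(M(p)^x + M(p)^x\right)^{1/x} = 2^{1/x}\log p,
\end{equation*}
while the trivial factorization gives $M_x(p^2)\leq M(p^2)=2\log p$. The first bound is smaller precisely when $x>1$, which already hints at the dichotomy recorded in Theorem \ref{NotUniform} below.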
In \cite{DubSmyth2}, Dubickas and Smyth showed that if Lehmer's conjecture is true, then the infimum in the definition of $M_1(\alpha)$ must
always be achieved. The author \cite{Samuels} was able to verify that the infima in $M_1(\alpha)$ and $M_\infty(\alpha)$ are achieved even without the assumption of
Lehmer's conjecture. Moreover, this infimum must always be attained in a relatively simple subgroup of $\overline\ratt$. In particular, if $K$ is a number field we write
\begin{equation*}
\mathrm{Rad}(K) = \left\{\alpha\in\overline\ratt:\alpha^r\in K\mathrm{\ for\ some}\ r\in\mathbb N\right\}.
\end{equation*}
For any algebraic number $\alpha$, let $K_\alpha$ denote the Galois closure of $\mathbb Q(\alpha)$ over $\mathbb Q$.
We showed in \cite{Samuels} that the infimum in both $M_1(\alpha)$ and $M_\infty(\alpha)$ is always attained by some
\begin{equation*}
\bar\alpha \in \tau^{-1}(\alpha) \cap \mathcal X(\mathrm{Rad}(K_\alpha)),
\end{equation*}
where $\mathcal X(\mathrm{Rad}(K_\alpha))$ is defined similarly to $\mathcal X(\overline\ratt)$ in \eqref{RestrictedProduct}.
Not surprisingly, the same argument can be used to establish the analog for all values of $x$.
\begin{thm} \label{Achieved}
Suppose $\alpha$ is a non-zero algebraic number and $x\in (0,\infty]$. Then there exists a point $\bar\alpha\in \tau^{-1}(\alpha) \cap \mathcal X(\mathrm{Rad}(K_\alpha))$ such that
$M_x(\alpha) = M_x(\bar\alpha)$.
\end{thm}
We now turn our attention momentarily to the computation of some values of $M_x(\alpha)$. First define
\begin{equation*}
C(\alpha) = \inf\{ M(\gamma): \gamma\in K_\alpha\setminus \mathrm{Tor}(\overline\ratt)\}
\end{equation*}
and note that by Northcott's Theorem \cite{Northcott}, the infimum on the right hand side of this definition is always achieved. In particular, this means
that $C(\alpha) > 0$.
The author \cite{Samuels2} gave a strategy for reducing the computation of $M_\infty(\alpha)$ to a finite set.
The method uses the {\it modified Mahler measure}
\begin{equation} \label{MBarDef}
\bar M(\alpha) = \inf \{M(\zeta\alpha):\zeta\in \mathrm{Tor}(\overline\ratt)\}
\end{equation}
and gives the value of $M_\infty$ in terms of $\bar M$. Although $\bar M$ requires taking an infimum over an infinite set, it is often very reasonable to calculate.
Indeed, the infimum on the right hand side of \eqref{MBarDef} is always attained at a root of unity $\zeta$ that makes $\deg(\zeta\alpha)$ as small as possible.
This function $\bar M$ arises again when computing $M_x(\alpha)$ for small $x$ in a more straightforward way than in \cite{Samuels2}.
\begin{thm} \label{SmallP}
If $\alpha$ is a non-zero algebraic number and $x$ is a positive real number satisfying
\begin{equation} \label{AlwaysX}
x\cdot (\log \bar M(\alpha) - \log C(\alpha)) \leq \log 2
\end{equation}
then $M_x(\alpha) = \bar M(\alpha)$.
\end{thm}
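As a quick illustration (a computation from the definitions, not taken from \cite{Samuels2}), take $\alpha=2$, so that $K_\alpha=\mathbb Q$. Since $M(a/b)=\log\max\{|a|,|b|\}$ for $a/b$ in lowest terms, we have $C(2)=\log 2$. Moreover, for any root of unity $\zeta$, the number $2\zeta$ is an algebraic integer all of whose conjugates have modulus $2$, so $M(2\zeta)=\deg(2\zeta)\cdot\log 2\geq\log 2$, with equality at $\zeta=1$; hence $\bar M(2)=\log 2$. Condition \eqref{AlwaysX} then reads $x\cdot 0\leq\log 2$, which holds for every positive $x$, and Theorem \ref{SmallP} gives $M_x(2)=\log 2$ for all $x\in(0,\infty)$.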
As we will discuss in detail in section \ref{AbelianHeights}, the construction given by \eqref{xMetricMahlerDef} is not unique to the Mahler measure.
Suppose $\phi:\overline\ratt\to [0,\infty)$ satisfies
\begin{equation} \label{BasicHeightProps}
\phi(1) = 0\quad\mathrm{and}\quad \phi(\alpha) = \phi(\alpha^{-1})\ \mathrm{for\ all}\ \alpha\in\overline\ratt,
\end{equation}
and write
\begin{equation*}
\phi_x(\alpha_1,\alpha_2,\ldots) = \left\{
\begin{array}{ll}
\displaystyle \left( \sum_{n=1}^{\infty} \phi(\alpha_n)^x\right)^{1/x} & \mathrm{if}\ x\in (0,\infty) \\
& \\
\displaystyle \max_{n\geq 1}\{\phi(\alpha_n)\} & \mathrm{if}\ x = \infty.
\end{array}
\right.
\end{equation*}
Generalizing the metric Mahler measure, let $\phi_x$ be defined by
\begin{equation} \label{xMetricSwitchDef}
\phi_x(\alpha) = \inf\{\phi_x(\bar\alpha): \bar\alpha\in\tau^{-1}(\alpha)\}.
\end{equation}
We now write $\mathcal S(M)$ to denote the set of all functions $\phi$ satisfying \eqref{BasicHeightProps} such that $\phi_x(\alpha) = M_x(\alpha)$ for all $\alpha\in\overline\ratt$
and $x\in (0,\infty]$. We are able to show that $\bar M$ belongs to $\mathcal S(M)$. Moreover, it is a consequence of Theorem \ref{SmallP} that $\bar M$ is the minimal element
of $\mathcal S(M)$.
\begin{cor} \label{MBarMinimal}
We have that $\bar M\in \mathcal S(M)$. Moreover, if $\psi\in\mathcal S(M)$ then $\psi(\alpha) \geq \bar M(\alpha)$ for all $\alpha\in\overline\ratt$.
\end{cor}
We now ask if the map $x\mapsto M_x(\alpha)$ is continuous on $\mathbb R_{>0}$ for every algebraic number $\alpha$. We recall that Theorem \ref{Achieved} asserts that,
for each $x$, there exists
a point $\bar\alpha\in \tau^{-1}(\alpha)$ that attains the infimum in the definition of $M_x(\alpha)$. If the infimum is achieved at the same point $(\alpha_1,\alpha_2,\ldots)$
for all real $x$, then we have that
\begin{equation*}
M_x(\alpha) = \left( \sum_{n=1}^N M(\alpha_n)^x\right)^{1/x}
\end{equation*}
which clearly defines a continuous function. Unfortunately, using the example of $M_x(p^2)$ for a rational prime $p$, we see that this is not the case.
\begin{thm} \label{NotUniform}
Let $p$ be a rational prime and assume that $(\alpha_1,\alpha_2,\ldots)\in \tau^{-1}(p^2)$ with $M_x(p^2) = M_x(\alpha_1,\alpha_2,\cdots)$.
\begin{enumerate}[(i)]
\item\label{P2Small} If $x\cdot (\log\log (p^2) - \log\log 2) < \log 2$ then precisely one point $\alpha_n$ differs from a root of unity.
\item\label{P2Large} If $x >1$ then at least two points $\alpha_n$ differ from a root of unity.
\end{enumerate}
\end{thm}
Although the infimum in $M_x(\alpha)$ is not achieved at the same point for all $x$, we are able to prove that $x\mapsto M_x(\alpha)$ is continuous for all $\alpha$.
\begin{thm} \label{Continuous}
If $\alpha$ is a non-zero algebraic number then the map $x \mapsto M_x(\alpha)$ is continuous on the positive real numbers.
\end{thm}
It is worth noting that continuity appears to be somewhat special to the Mahler measure. That is, we cannot expect an arbitrary function $\phi$ satisfying \eqref{BasicHeightProps}
to be such that $x\mapsto \phi_x(\alpha)$ is continuous. Even making a slight modification to the Mahler measure causes continuity to fail. For example, define
the {\it Weil height} of $\alpha\in \overline\ratt$ by
\begin{equation*}
h(\alpha) = \frac{M(\alpha)}{\deg\alpha}
\end{equation*}
and note that, in view of our remarks about the Mahler measure, $h(\alpha) = 0$ if and only if $\alpha$ is a root of unity. In fact, it is well-known that
\begin{equation} \label{WeilHeightDefined}
h(\alpha) = h(\zeta\alpha)
\end{equation}
for all roots of unity $\zeta$. Moreover, we have that $h(\alpha) = h(\alpha^{-1})$ for all $\alpha\in\overline\ratt$ so that $h$ satisfies \eqref{BasicHeightProps}.
Unlike the Mahler measure, we know how to compute $h_x(\alpha)$ for every $x$ and $\alpha$.
\begin{thm} \label{WeilHeightComp}
If $\alpha$ is a non-zero algebraic number then
\begin{equation*}
h_x(\alpha) = \left\{ \begin{array}{ll}
h(\alpha) & \mathrm{if}\ x \leq 1 \\
0 & \mathrm{if}\ x > 1.
\end{array}
\right.
\end{equation*}
\end{thm}
As we have noted, Theorem \ref{WeilHeightComp} shows that $x\mapsto h_x(\alpha)$ can indeed be discontinuous. More precisely, it is continuous if and only if
$\alpha$ is a root of unity.
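The vanishing of $h_x(\alpha)$ for $x > 1$ can be seen directly. If $\beta$ is any $k$th root of $\alpha$, then the point $(\beta,\ldots,\beta,1,1,\ldots)$ with $k$ copies of $\beta$ belongs to $\tau^{-1}(\alpha)$, and the well-known identity $h(\beta) = h(\alpha)/k$ gives
\begin{equation*}
h_x(\alpha) \leq \left( k\cdot\left(\frac{h(\alpha)}{k}\right)^x\right)^{1/x} = k^{\frac{1}{x}-1}h(\alpha),
\end{equation*}
and the right hand side tends to $0$ as $k\to\infty$ whenever $x > 1$.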
\section{Heights on Abelian groups} \label{AbelianHeights}
In this section, we generalize our $x$-metric Mahler measure construction to a very broad class of functions on an abelian group $G$ by exploring
definition \eqref{xMetricSwitchDef} in more detail.
We are able to establish some basic properties in this situation that we can use to prove our main results.
Let $G$ be a multiplicatively written abelian group. We say that $\phi:G \to [0,\infty)$ is a {\it (logarithmic) height} on $G$ if
\begin{enumerate}[(i)]
\item $\phi(1) = 0$, and
\item $\phi(\alpha) = \phi(\alpha^{-1})$ for all $\alpha\in G$.
\end{enumerate}
If $\psi$ is another height on $G$, we follow the conventional notation that
\begin{equation*}
\phi = \psi \quad \mathrm{or} \quad \phi \leq \psi
\end{equation*}
when $\phi(\alpha) = \psi(\alpha)$ or $\phi(\alpha) \leq \psi(\alpha)$ for all $\alpha\in G$, respectively. We write
\begin{equation*}
Z(\phi) = \{ \alpha\in G: \phi(\alpha) = 0\}
\end{equation*}
to denote the {\it zero set} of $\phi$.
If $x$ is a positive real number then we say that $\phi$ has the {\it $x$-triangle inequality} if
\begin{equation*}
\phi(\alpha\beta) \leq \left (\phi(\alpha)^x + \phi(\beta)^x\right )^{1/x}
\end{equation*}
for all $\alpha,\beta\in G$. We say that $\phi$ has the {\it $\infty$-triangle inequality} if
\begin{equation*}
\phi(\alpha\beta) \leq \max\{\phi(\alpha),\phi(\beta)\}
\end{equation*}
for all $\alpha,\beta\in G$. A height satisfying the $x$-triangle inequality is called an {\it $x$-metric height}. We observe that the $1$-triangle inequality is simply
the classical triangle inequality while the $\infty$-triangle inequality is the strong triangle inequality. We also obtain the following ordering of the $x$-triangle
inequalities.
\begin{lem} \label{Intermediates}
Suppose that $G$ is an abelian group and that $x,y\in (0,\infty]$ with $x\geq y$. If $\phi$ is an $x$-metric height on $G$ then
$\phi$ is also a $y$-metric height on $G$.
\end{lem}
\begin{proof}
If $a,b$ and $q$ are real numbers with $a,b \geq 0$ and $q\geq 1$, then it is easily verified that
\begin{equation} \label{MVTapp}
a^q+b^q \leq (a+b)^q.
\end{equation}
Let us now assume that $\phi$ has the $x$-triangle inequality and that $\alpha,\beta\in G$. If $x=y =\infty$ then the lemma is completely trivial.
If $x = \infty$ and $y <\infty$ then we have that
\begin{equation*}
\phi(\alpha\beta) \leq \max\{\phi(\alpha),\phi(\beta)\} = \max\{\phi(\alpha)^y,\phi(\beta)^y\}^{1/y} \leq (\phi(\alpha)^y + \phi(\beta)^y)^{1/y}
\end{equation*}
so that the result follows easily as well. Hence, we assume now that $\infty > x\geq y$. In this situation, we have that $x/y \geq 1$.
Therefore, by \eqref{MVTapp} we have that
\begin{equation*}
(\phi(\alpha)^y + \phi(\beta)^y)^{x/y} \geq \phi(\alpha)^x + \phi(\beta)^x
\end{equation*}
and it follows that
\begin{equation*}
(\phi(\alpha)^y + \phi(\beta)^y)^{1/y} \geq (\phi(\alpha)^x + \phi(\beta)^x)^{1/x}.
\end{equation*}
Hence, we have that $\phi(\alpha\beta) \leq (\phi(\alpha)^y + \phi(\beta)^y)^{1/y}$ so that $\phi$ has the $y$-triangle inequality.
\end{proof}
We now observe that each $x$-metric height is well-defined on the quotient group $G/Z(\phi)$.
In the case that $x\geq 1$, the map $(\alpha,\beta) \mapsto \phi(\alpha\beta^{-1})$ defines a metric on $G/Z(\phi)$.
\begin{thm} \label{MetricProperties}
If $\phi:G\to [0,\infty)$ is an $x$-metric height for some $x\in (0,\infty]$ then
\begin{enumerate}[(i)]
\item\label{Subgroup} $Z(\phi)$ is a subgroup of $G$.
\item\label{WellDefined} $\phi(\zeta \alpha) = \phi(\alpha)$ for all $\alpha\in G$ and $\zeta\in Z(\phi)$. That is, $\phi$ is well-defined on the quotient $G/Z(\phi)$.
\item\label{FancyMetric} If $x\geq 1$, then the map $(\alpha,\beta)\mapsto \phi(\alpha\beta^{-1})$ defines a metric on $G/Z(\phi)$.
\end{enumerate}
\end{thm}
\begin{proof}
We first establish \eqref{Subgroup}. Obviously, we have that $1\in Z(\phi)$ by definition of height. Further, if $\phi(\alpha) = 0$
then again by definition of height we know that $\phi(\alpha^{-1}) = 0$.
If $\alpha,\beta\in Z(\phi)$ then using the $x$-triangle inequality (with the maximum in place of the sum when $x = \infty$) we obtain
\begin{equation*}
\phi(\alpha\beta) \leq (\phi(\alpha)^x + \phi(\beta)^x)^{1/x} = 0.
\end{equation*}
Therefore, $\alpha\beta\in Z(\phi)$ so that $Z(\phi)$ forms a subgroup.
To prove \eqref{WellDefined}, we see that the $x$-triangle inequality yields
\begin{align*}
\phi(\alpha) & = \phi(\zeta^{-1}\zeta\alpha) \\
& \leq (\phi(\zeta^{-1})^x + \phi(\zeta\alpha)^x)^{1/x} \\
& = \phi(\zeta\alpha) \\
& \leq (\phi(\zeta)^x + \phi(\alpha)^x)^{1/x} \\
& = \phi(\alpha)
\end{align*}
implying that $\phi(\alpha) = \phi(\zeta\alpha)$.
Finally, if $x\geq 1$ then Lemma \ref{Intermediates} implies that $\phi$ has the triangle inequality. It then follows immediately that
the map $(\alpha,\beta)\mapsto \phi(\alpha\beta^{-1})$ is a metric on $G/Z(\phi)$.
\end{proof}
We are careful to note that if $x<1$ then the map $(\alpha,\beta) \mapsto \phi(\alpha\beta^{-1})$ does not, in general, form a metric on $G/Z(\phi)$. In this case,
the $x$-triangle inequality is indeed weaker than the triangle inequality, so we cannot expect the above map to form a metric except in trivial cases.
We now follow the method of Dubickas and Smyth for creating a metric from the Mahler measure. Write
\begin{equation*}
\mathcal X(G) = \{(\alpha_1,\alpha_2,\ldots): \alpha_n = 1\ \mathrm{for\ almost\ every}\ n\}
\end{equation*}
and, as before, let $\tau:\mathcal X(G) \to G$ be defined by
\begin{equation*}
\tau(\alpha_1,\alpha_2,\cdots) = \prod_{n=1}^\infty \alpha_n
\end{equation*}
so that $\tau$ is a group homomorphism. For each point $x\in (0,\infty]$ we define the map $\phi_x:\mathcal X(G) \to [0,\infty)$ by
\begin{equation*}
\phi_x(\alpha_1,\alpha_2,\ldots) = \left\{
\begin{array}{ll}
\displaystyle \left( \sum_{n=1}^{\infty} \phi(\alpha_n)^x\right)^{1/x} & \mathrm{if}\ x\in (0,\infty) \\
& \\
\displaystyle \max_{n\geq 1}\{\phi(\alpha_n)\} & \mathrm{if}\ x = \infty.
\end{array}
\right.
\end{equation*}
Then we define the {\it $x$-metric version} $\phi_x$ of $\phi$ by
\begin{equation*}
\phi_x(\alpha) = \inf\{\phi_x(\bar\alpha): \bar\alpha\in\tau^{-1}(\alpha)\}.
\end{equation*}
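For example, taking $G = \overline\ratt$ and $\phi = M$, this construction recovers the $x$-metric Mahler measure of \eqref{xMetricMahlerDef},
\begin{equation*}
M_x(\alpha) = \inf\{M_x(\bar\alpha): \bar\alpha\in\tau^{-1}(\alpha)\},
\end{equation*}
while taking $\phi = h$ recovers the functions $h_x$ appearing in Theorem \ref{WeilHeightComp}.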
It is immediately clear that if $\psi$ is another height on $G$ with $\phi \geq \psi$, then $\phi_x \geq \psi_x$ for all $x$.
Among other things, we see that $\phi_x$ is indeed an $x$-metric height on $G$.
\begin{thm} \label{MetricConstruction}
If $\phi:G\to [0,\infty)$ is a height on $G$ and $x\in (0,\infty]$ then
\begin{enumerate}[(i)]
\item\label{MetricHeightConversion} $\phi_x$ is an $x$-metric height on $G$ with $\phi_x\leq\phi$.
\item\label{BestMetricHeight} If $\psi$ is an $x$-metric height with $\psi\leq\phi$ then
$\psi\leq \phi_x$.
\item\label{NoChangeMetric} $\phi = \phi_x$ if and only if $\phi$ is an $x$-metric height. In particular, $(\phi_x)_x = \phi_x$.
\item\label{Comparisons} If $y\in (0,x]$ then $\phi_y \geq \phi_x$.
\end{enumerate}
\end{thm}
\begin{proof}
For the proofs of \eqref{MetricHeightConversion}-\eqref{NoChangeMetric}, we will assume that $x < \infty$. The proofs for the
case $x = \infty$ are quite similar to the proofs for other cases so we will not include them here. See \cite{FiliSamuels} for detailed proofs when $x=\infty$.
To prove \eqref{MetricHeightConversion}, let $\alpha,\beta\in G$. We observe that if $(\alpha_1,\alpha_2,\ldots)\in \tau^{-1}(\alpha)$ and
$(\beta_1,\beta_2,\ldots)\in \tau^{-1}(\beta)$ then it is obvious that
\begin{equation*}
\alpha\beta = \left(\prod_{n=1}^\infty \alpha_n\right)\left(\prod_{n=1}^\infty \beta_n\right).
\end{equation*}
We may also write
\begin{equation*}
\alpha\beta = \prod_{n=1}^\infty \alpha_n\beta_n
\end{equation*}
implying that $\tau(\alpha_1,\beta_1,\alpha_2,\beta_2,\ldots) = \alpha\beta$. In other words, we have that
\begin{equation} \label{ProductInInverseImage}
(\alpha_1,\beta_1,\alpha_2,\beta_2,\ldots) \in \tau^{-1}(\alpha\beta).
\end{equation}
This yields that
\begin{align} \label{InfIncrease}
\phi_x(\alpha\beta)^x & = \inf \{\phi_x(\gamma_1,\gamma_2,\ldots)^x: (\gamma_1,\gamma_2,\ldots)\in \tau^{-1}(\alpha\beta) \} \nonumber \\
& = \inf\{\phi_x(\alpha_1,\beta_1,\alpha_2,\beta_2,\ldots)^x: \alpha_n,\beta_n\in G,\ (\alpha_1,\beta_1,\ldots)\in \tau^{-1}(\alpha\beta)\} \nonumber \\
& \leq \inf\{\phi_x(\alpha_1,\beta_1,\alpha_2,\beta_2,\ldots)^x: (\alpha_1,\ldots)\in \tau^{-1}(\alpha),\ (\beta_1,\ldots)\in \tau^{-1}(\beta) \}.
\end{align}
We note that
\begin{align*}
\phi_x(\alpha_1,\beta_1,\alpha_2,\beta_2,\ldots)^x & = \sum_{n=1}^\infty \left (\phi(\alpha_n)^x + \phi(\beta_n)^x\right) \\
& = \sum_{n=1}^\infty \phi(\alpha_n)^x + \sum_{n=1}^\infty \phi(\beta_n)^x \\
& = \phi_x(\alpha_1,\ldots)^x + \phi_x(\beta_1,\ldots)^x.
\end{align*}
Then using \eqref{InfIncrease} we find that
\begin{align*}
\phi_x(\alpha\beta)^x & \leq \inf\{\phi_x(\alpha_1,\ldots)^x + \phi_x(\beta_1,\ldots)^x: (\alpha_1,\ldots)\in \tau^{-1}(\alpha),\ (\beta_1,\ldots)\in \tau^{-1}(\beta) \} \\
& = \inf\{\phi_x(\alpha_1,\ldots)^x: (\alpha_1,\ldots)\in \tau^{-1}(\alpha)\} \\
& \qquad + \inf\{\phi_x(\beta_1,\ldots)^x: (\beta_1,\ldots)\in \tau^{-1}(\beta)\} \\
& = \phi_x(\alpha)^x + \phi_x(\beta)^x
\end{align*}
and it follows that
\begin{equation*}
\phi_x(\alpha\beta) \leq (\phi_x(\alpha)^x + \phi_x(\beta)^x)^{1/x}.
\end{equation*}
To complete the proof of \eqref{MetricHeightConversion}, we observe that $(\alpha,1,1,\ldots) \in \tau^{-1}(\alpha)$ so
that $\phi_x(\alpha) \leq \phi(\alpha)$ for all $\alpha\in G$.
To prove \eqref{BestMetricHeight}, we note that
\begin{align*}
\phi_x(\alpha) & = \inf\left\{ \left(\sum_{n=1}^\infty\phi(\alpha_n)^x\right)^{1/x}:(\alpha_1,\alpha_2,\ldots)\in \tau^{-1}(\alpha)\right\} \\
& \geq \inf\left\{ \left(\sum_{n=1}^\infty\psi(\alpha_n)^x\right)^{1/x}:(\alpha_1,\alpha_2,\ldots)\in \tau^{-1}(\alpha)\right\} \\
& \geq \psi(\alpha)
\end{align*}
where the first inequality uses $\psi\leq\phi$ and the last follows by iterating the $x$-triangle inequality for $\psi$ over the finitely many terms different from $1$.
To prove \eqref{NoChangeMetric}, we first observe that if $\phi = \phi_x$ then clearly $\phi$ is an $x$-metric height. Conversely, if $\phi$ is already an $x$-metric height,
then by \eqref{BestMetricHeight} applied with $\psi = \phi$, we obtain that $\phi\leq \phi_x$. But we always have $\phi_x\leq \phi$ so the result follows. Of course, $\phi_x$ is an $x$-metric height
so this yields immediately $\phi_x = (\phi_x)_x$.
To establish \eqref{Comparisons}, we see that
\begin{align*}
\phi_y(\alpha) & = \inf\left\{ \left(\sum_{n=1}^\infty\phi(\alpha_n)^y\right)^{1/y}:(\alpha_1,\alpha_2,\ldots)\in \tau^{-1}(\alpha)\right\} \\
& = \inf\left\{ \left(\sum_{n=1}^\infty\phi(\alpha_n)^y\right)^{\frac{x}{y}\cdot\frac{1}{x}}:(\alpha_1,\alpha_2,\ldots)\in \tau^{-1}(\alpha)\right\}.
\end{align*}
But we have that $x\geq y$ so that $x/y \geq 1$. Therefore, iterating \eqref{MVTapp} over the finitely many nonzero terms, we have that
\begin{equation*}
\left(\sum_{n=1}^\infty\phi(\alpha_n)^y\right)^{x/y} \geq \sum_{n=1}^\infty\phi(\alpha_n)^x
\end{equation*}
which yields $\phi_y(\alpha) \geq \phi_x(\alpha)$.
\end{proof}
For a given height $\phi$ on $G$, let $\mathcal S(\phi)$ denote the set of all heights $\psi$ on $G$ such that $\psi_x = \phi_x$ for all $x\in (0,\infty]$. Further, define
the height $\phi_0$ by
\begin{equation} \label{OptimalFunction}
\phi_0(\alpha) = \lim_{x\to 0^+} \phi_x(\alpha).
\end{equation}
By \eqref{MetricHeightConversion} of Theorem \ref{MetricConstruction}, we know that $\phi_x \leq \phi$ for all $x$. Moreover, \eqref{Comparisons} of the same theorem
states that $x\mapsto \phi_x(\alpha)$ is non-increasing. This means that the limit on the right hand side of \eqref{OptimalFunction} does
indeed exist and
\begin{equation} \label{BoundForOptimal}
\phi_0 \geq \phi_x
\end{equation}
for all $x\in (0,\infty]$. We now observe that $\phi_0$ is the minimal element of $\mathcal S(\phi)$.
\begin{thm} \label{OptimalMinimal}
If $\phi$ is a height on $G$ then $\phi_0\in\mathcal S(\phi)$. Moreover, if $\psi\in \mathcal S(\phi)$ then $\psi \geq \phi_0$.
\end{thm}
\begin{proof}
As we have noted, $\phi_0\geq \phi_x$ for all $x$. Hence, we obtain immediately that $(\phi_0)_x \geq (\phi_x)_x = \phi_x$. On the other hand,
we know that $\phi_x \leq \phi$ so that
\begin{equation*}
\phi_0(\alpha) = \lim_{x\to 0^+} \phi_x(\alpha) \leq \phi(\alpha)
\end{equation*}
for all $\alpha\in G$. In other words, we have that $\phi_0 \leq \phi$ so that $(\phi_0)_x \leq \phi_x$ establishing the first statement of the theorem.
To prove the second statement, assume that $\psi\in \mathcal S(\phi)$ so that $\phi_x = \psi_x$ for all $x$. Hence we have that
\begin{equation*}
\phi_0(\alpha) = \lim_{x\to 0^+}\phi_x(\alpha) = \lim_{x\to 0^+}\psi_x(\alpha) \leq \psi(\alpha)
\end{equation*}
for all $\alpha\in G$ verifying the theorem.
\end{proof}
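For example, taking $G = \overline\ratt$ and $\phi = h$, Theorem \ref{WeilHeightComp} gives $h_x = h$ for every $x\in (0,1]$, and therefore
\begin{equation*}
h_0(\alpha) = \lim_{x\to 0^+}h_x(\alpha) = h(\alpha)
\end{equation*}
for all $\alpha\in\overline\ratt$, so that $h_0 = h$ in this case.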
We now define the {\it modified version} of $\phi$ by
\begin{equation*}
\bar\phi(\alpha) = \inf \{\phi(\zeta\alpha): \zeta\in Z(\phi)\}.
\end{equation*}
In the case of the Mahler measure, we have stated in the introduction that $\bar\phi = \phi_0$. However, in the general case, we can conclude only that
$\bar\phi$ belongs to $\mathcal S(\phi)$.
\begin{thm} \label{BarPhiMetrics}
If $\phi$ is a height on $G$ then $\bar\phi\in\mathcal S(\phi)$.
\end{thm}
\begin{proof}
We must show that $\bar\phi_x = \phi_x$ for all $x\in (0,\infty]$. Since $1\in Z(\phi)$, we have immediately that $\bar\phi \leq \phi$, which means that
\begin{equation*}
\bar\phi_x \leq \phi_x.
\end{equation*}
Now for any $\alpha\in G$ and $\zeta\in Z(\phi)$, the point $(\zeta^{-1},\zeta\alpha,1,1,\ldots)$ lies in $\tau^{-1}(\alpha)$, and $\phi(\zeta^{-1}) = \phi(\zeta) = 0$. Hence
\begin{equation*}
\phi_x(\alpha) \leq \inf \{(\phi(\zeta^{-1})^x + \phi(\zeta\alpha)^x)^{1/x}:\zeta\in Z(\phi)\} = \inf \{\phi(\zeta\alpha):\zeta\in Z(\phi)\} = \bar\phi(\alpha)
\end{equation*}
implying that $\phi_x \leq \bar\phi$. Then taking $x$-metric versions and using \eqref{NoChangeMetric} of Theorem \ref{MetricConstruction} we find that
\begin{equation*}
\phi_x = (\phi_x)_x \leq \bar\phi_x
\end{equation*}
completing the proof.
\end{proof}
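For example, for the Weil height $h$ we have $Z(h) = \mathrm{Tor}(\overline\ratt)$, and \eqref{WeilHeightDefined} yields
\begin{equation*}
\bar h(\alpha) = \inf\{h(\zeta\alpha):\zeta\in \mathrm{Tor}(\overline\ratt)\} = h(\alpha),
\end{equation*}
so that the modified version of $h$ is simply $h$ itself.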
We may now ask what we can say about the map $x\mapsto \phi_x(\alpha)$ for fixed $\phi$ and $\alpha$.
As we have noted, this map is non-increasing for all $\alpha$. Since $\phi_x(\alpha)$ is bounded
from above and below by constants not depending on $x$, both left and right hand limits exist at every point. Moreover, we always have
\begin{equation*}
\lim_{x\to \bar x^-}\phi_x(\alpha) \geq \phi_{\bar x}(\alpha) \geq \lim_{x\to \bar x^+}\phi_x(\alpha)
\end{equation*}
when $\bar x >0$. We say that a map $f:\mathbb R\to\mathbb R$ is {\it left} or {\it right semi-continuous} at a point $\bar x\in \mathbb R$ if
\begin{equation*}
\lim_{x\to \bar x^-}f(x) = f(\bar x)\quad\mathrm{or}\quad \lim_{x\to \bar x^+}f(x) = f(\bar x),
\end{equation*}
respectively. Indeed, $f$ is continuous at $\bar x$ if and only if $f$ is both left and right semi-continuous at $\bar x$. Although it is a consequence of Theorem
\ref{WeilHeightComp} that $x\mapsto \phi_x(\alpha)$ is not continuous in general, we can prove the following partial result.
\begin{thm} \label{LeftSemiContinuous}
If $\phi$ is a height on $G$ and $\alpha\in G$, then the map $x \mapsto \phi_x(\alpha)$ is left semi-continuous on the positive real numbers.
\end{thm}
\begin{proof}
We already know that $\lim_{x\to \bar x^-}\phi_x(\alpha) \geq \phi_{\bar x}(\alpha)$ so we assume that $$\lim_{x\to \bar x^-}\phi_x(\alpha) > \phi_{\bar x}(\alpha).$$
Therefore, there exists $\varepsilon >0$ such that
\begin{equation} \label{EpsilonSqueeze}
\lim_{x\to \bar x^-}\phi_x(\alpha) > \phi_{\bar x}(\alpha) + \varepsilon.
\end{equation}
By definition of $\phi_{\bar x}$, we may choose points $\alpha_1,\ldots,\alpha_N \in G$ such that $\alpha = \alpha_1\cdots\alpha_N$ and
\begin{equation*}
\phi_{\bar x}(\alpha) + \varepsilon \geq \left( \sum_{n=1}^N \phi(\alpha_n)^{\bar x}\right)^{1/\bar x},
\end{equation*}
and define the function $f_\varepsilon$ by
\begin{equation*}
f_\varepsilon(x) = \left( \sum_{n=1}^N \phi(\alpha_n)^{x}\right)^{1/x}.
\end{equation*}
This yields
\begin{equation} \label{FApprox}
f_\varepsilon(\bar x) \leq \phi_{\bar x}(\alpha) + \varepsilon\quad\mathrm{and}\quad f_\varepsilon(x) \geq \phi_x(\alpha)\ \mathrm{for\ all}\ x.
\end{equation}
Also, since $f_\varepsilon$ is continuous, we have that
\begin{equation} \label{FCont}
f_\varepsilon(\bar x) = \lim_{x\to\bar x^-} f_\varepsilon(x).
\end{equation}
Combining \eqref{EpsilonSqueeze}, \eqref{FApprox} and \eqref{FCont} we obtain that
\begin{equation*}
f_\varepsilon(\bar x) = \lim_{x\to\bar x^-} f_\varepsilon(x) \geq \lim_{x\to\bar x^-} \phi_x(\alpha) > \phi_{\bar x}(\alpha) + \varepsilon \geq f_\varepsilon(\bar x)
\end{equation*}
which is a contradiction.
\end{proof}
\section{The Infimum in $M_x(\alpha)$} \label{AchievedProofs}
Our proof of Theorem \ref{Achieved} will require the use of two results from \cite{Samuels}. The first of these is Theorem 2.1 of \cite{Samuels}, which shows that for any
point $\bar\alpha\in \tau^{-1}(\alpha)$, there exists another point $\bar\beta\in\tau^{-1}(\alpha)\cap\mathcal X(\mathrm{Rad}(K_\alpha))$ which has pointwise smaller
Mahler measures. We state the theorem using the notation of \cite{Samuels}.
\begin{thm} \label{Reduction}
If $\alpha,\alpha_1,\ldots,\alpha_N$ are non-zero algebraic numbers with $\alpha = \alpha_1\cdots\alpha_N$ then
there exists a root of unity $\zeta$ and algebraic numbers $\beta_1,\ldots,\beta_N$ satisfying
\begin{enumerate}[(i)]
\item $\alpha = \zeta\beta_1\cdots\beta_N$,
\item $\beta_n\in \mathrm{Rad}(K_\alpha)$ for all $n$,
\item $M(\beta_n) \leq M(\alpha_n)$ for all $n$.
\end{enumerate}
\end{thm}
In view of Theorem \ref{Reduction}, for each $x$ we need only consider points $\bar\alpha\in\tau^{-1}(\alpha)\cap\mathcal X(\mathrm{Rad}(K_\alpha))$ in the definition of $M_x(\alpha)$.
In other words, in the case of $x <\infty$, the definition of $M_x(\alpha)$ may be rewritten
\begin{equation} \label{xMetricAltDef}
M_x(\alpha) = \inf\left\{\left(\sum_{n=1}^\infty M(\alpha_n)^x\right)^{1/x}: (\alpha_1,\alpha_2,\ldots) \in \tau^{-1}(\alpha)\cap\mathcal X(\mathrm{Rad}(K_\alpha))\right\}.
\end{equation}
Similar remarks apply in the case that $x = \infty$.
Therefore, it will be useful to have some control of the Mahler measures in the subgroup $\mathrm{Rad}(K_\alpha)$. For this purpose, we borrow Lemma 3.1 of \cite{Samuels}.
\begin{lem} \label{HeightInK}
Let $K$ be a Galois extension of $\mathbb Q$. If $\gamma\in\mathrm{Rad}(K)$ then there exists a root of unity $\zeta$ and $L,S\in\mathbb N$
such that $\zeta\gamma^L\in K$ and
\begin{equation*}
M(\gamma) = M(\zeta\gamma^L)^S.
\end{equation*}
In particular, the set
\begin{equation*}
\{M(\gamma):\gamma\in\mathrm{Rad}(K),\ M(\gamma) \leq B\}
\end{equation*}
is finite for every $B \geq 0$.
\end{lem}
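For instance, taking $K = \mathbb Q$ and $\gamma = \sqrt{2}\in\mathrm{Rad}(\mathbb Q)$, the lemma holds with $\zeta = 1$, $L = 2$ and $S = 1$, since $(\sqrt{2})^2 = 2 \in \mathbb Q$ and
\begin{equation*}
M(\sqrt{2}) = \log 2 = M(2).
\end{equation*}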
It is an easy consequence of Lemma \ref{HeightInK} that $M(\gamma)$ is bounded below by the Mahler measure of an element in $K$. Indeed, we have that
\begin{equation*}
M(\gamma) = M(\zeta\gamma^L)^S \geq M(\zeta\gamma^L)
\end{equation*}
and $\zeta\gamma^L\in K$. In particular, we recall that $C(\alpha)$ denotes the minimum Mahler measure in the field $K_\alpha$. We now see easily that
\begin{equation} \label{RadBound}
M(\gamma) \geq C(\alpha)
\end{equation}
for all $\gamma\in \mathrm{Rad}(K_\alpha)\setminus\mathrm{Tor}(\overline\ratt)$. We are now prepared to prove Theorem \ref{Achieved}.
\begin{proof}[Proof of Theorem \ref{Achieved}]
By the results of \cite{Samuels}, we know that the theorem holds for $x = \infty$, so we may assume that $x <\infty$. Further, select a real number $B > M_x(\alpha)$.
In view of Theorem \ref{Reduction}, we know that $M_x(\alpha)$ is the infimum of
\begin{equation} \label{FiniteHope}
\left(\sum_{n=1}^N M(\alpha_n)^x\right)^{1/x}
\end{equation}
over the set of all $N\in \mathbb N$ and all points $\alpha_1,\ldots,\alpha_N\in \overline\ratt$ such that
\begin{enumerate}[(i)]
\item\label{Product} $\alpha = \alpha_1\cdots\alpha_N$,
\item\label{Unity} At most one point $\alpha_n$ is a root of unity,
\item\label{Rad} $\alpha_n\in \mathrm{Rad}(K_\alpha)$ for all $n$, and
\item\label{BBound} $\left(\sum_{n=1}^N M(\alpha_n)^x\right)^{1/x} \leq B$.
\end{enumerate}
We will show that the set of all values of \eqref{FiniteHope} is finite for $\alpha_1,\ldots,\alpha_N$ satisfying conditions \eqref{Product}-\eqref{BBound}.
We must first give an upper bound on $N$. We know that at least $N-1$ of the points $\alpha_1,\ldots,\alpha_N$ are not roots of unity. For all such points, we
have that
\begin{equation*}
M(\alpha_n) \geq C(\alpha)
\end{equation*}
by \eqref{RadBound}. Combining this with \eqref{BBound}, we obtain that
\begin{equation*}
B \geq \left(\sum_{n=1}^N M(\alpha_n)^x\right)^{1/x} \geq (N-1)^{1/x} C(\alpha)
\end{equation*}
which yields
\begin{equation} \label{NUpperBound}
N \leq 1+ \left(\frac{B}{C(\alpha)}\right)^x.
\end{equation}
Also by \eqref{BBound}, it follows that $M(\alpha_n) \leq B$ for all $n$. Moreover, since $\alpha_n\in\mathrm{Rad}(K_\alpha)$, the second statement of Lemma \ref{HeightInK}
implies that there are only finitely many possible values for $M(\alpha_n)$ for each $n$. Since $N$ is bounded above by the right hand side of \eqref{NUpperBound},
it follows that there are only finitely many possible values for \eqref{FiniteHope} with $\alpha_1,\ldots,\alpha_N$ satisfying \eqref{Product}-\eqref{BBound}.
We now know that $M_x(\alpha)$ is an infimum over a finite set, so the infimum must be achieved.
\end{proof}
\section{Minimality of $\bar M$} \label{MinimalBarM}
We first give the proof of Theorem \ref{SmallP} showing that $M_x(\alpha) = \bar M(\alpha)$ for sufficiently small values of $x$.
\begin{proof}[Proof of Theorem \ref{SmallP}]
By Theorem \ref{BarPhiMetrics}, we have immediately that $M_x(\alpha) = \bar M_x(\alpha)$ for all $x$, so it follows that
\begin{equation} \label{EasyUpperBound}
M_x(\alpha) \leq \bar M(\alpha).
\end{equation}
Now we must prove the opposite inequality.
We know by Theorem \ref{Achieved} that there exist points $\alpha_1,\ldots,\alpha_N\in \mathrm{Rad}(K_\alpha)$ such that
\begin{equation*}
\alpha = \alpha_1\cdots\alpha_N\quad\mathrm{and}\quad M_x(\alpha) = \left( \sum_{n=1}^N M(\alpha_n)^x\right)^{1/x}.
\end{equation*}
We know that $\alpha$ is not a root of unity, so at least one of $\alpha_1,\ldots,\alpha_N$ is not a root of unity.
We now consider two cases. First, assume that precisely one of $\alpha_1,\ldots,\alpha_N$ is not a root of unity. In other words, there exists
a root of unity $\zeta$ and a point $\beta\in \mathrm{Rad}(K_\alpha)\setminus \mathrm{Tor}(\overline\ratt)$ such that $\alpha = \zeta\beta$ and
\begin{equation*}
M_x(\alpha) = M(\beta).
\end{equation*}
Of course, we also have $\beta = \alpha\zeta^{-1}$ so that
\begin{equation*}
\bar M(\alpha) \leq M(\alpha\zeta^{-1}) = M(\beta) = M_x(\alpha).
\end{equation*}
Combining this inequality with \eqref{EasyUpperBound}, the result follows.
Next, assume that at least two of $\alpha_1,\ldots,\alpha_N$ are not roots of unity. By Lemma \ref{HeightInK}, we know that
$M(\alpha_n) \geq C(\alpha)$ whenever $\alpha_n$ is not a root of unity. Hence, we obtain that
\begin{equation*}
M_x(\alpha) = \left(\sum_{n=1}^N M(\alpha_n)^x\right)^{1/x} \geq (2C(\alpha)^x)^{1/x}
\end{equation*}
so that
\begin{equation} \label{TwoBound}
M_x(\alpha) \geq 2^{1/x} C(\alpha).
\end{equation}
By our assumption, we have that
\begin{equation*}
\frac{1}{x} \geq \frac{\log \bar M(\alpha) - \log C(\alpha)}{\log 2}
\end{equation*}
which implies that
\begin{align*}
2^{1/x} & \geq 2^{\frac{\log \bar M(\alpha) - \log C(\alpha)}{\log 2}} \\
& = \exp(\log \bar M(\alpha) - \log C(\alpha)) \\
& = \frac{\exp(\log \bar M(\alpha))}{\exp(\log C(\alpha))} \\
& = \frac{\bar M(\alpha)}{C(\alpha)}.
\end{align*}
It now follows from \eqref{TwoBound} that
\begin{equation*}
M_x(\alpha) \geq \bar M(\alpha)
\end{equation*}
completing the proof.
\end{proof}
Next, we establish Corollary \ref{MBarMinimal} showing that $\bar M$ is minimal in the set $\mathcal S(M)$.
\begin{proof}[Proof of Corollary \ref{MBarMinimal}]
We observe again by Theorem \ref{BarPhiMetrics} that $\bar M\in \mathcal S(M)$. By Theorem \ref{SmallP}, for all sufficiently small $x$, we have that
$\bar M(\alpha) = M_x(\alpha)$. Hence, it follows that
\begin{equation*}
\bar M(\alpha) = \lim_{x\to 0^+} M_x(\alpha) = M_0(\alpha)
\end{equation*}
and the result follows from Theorem \ref{OptimalMinimal}.
\end{proof}
We begin our proof of Theorem \ref{NotUniform} by giving a slight strengthening of Theorem \ref{SmallP}. More specifically, it will be useful to consider what happens
when the hypothesis \eqref{AlwaysX} is replaced by a strict inequality.
\begin{lem} \label{StrongSmallP}
Let $\alpha$ be a non-zero algebraic number different from a root of unity and $x$ a positive real number satisfying
\begin{equation*}
x\cdot (\log \bar M(\alpha) - \log C(\alpha)) < \log 2.
\end{equation*}
Then any point $(\alpha_1,\alpha_2,\cdots) \in \tau^{-1}(\alpha)$ that achieves the infimum in the definition of $M_x(\alpha)$ has precisely one component
$\alpha_n$ that is not a root of unity.
\end{lem}
\begin{proof}
We recall first that
\begin{equation} \label{MexUpper}
M_x(\alpha) \leq \bar M(\alpha)
\end{equation}
by Theorem \ref{BarPhiMetrics}. Next, we note that
\begin{equation} \label{LooseBound}
\frac{1}{x} > \frac{\log \bar M(\alpha) - \log C(\alpha)}{\log 2}.
\end{equation}
Assume that $\alpha_1,\ldots,\alpha_N\in \overline\ratt$ are such that
\begin{equation} \label{AchievedApplication}
\alpha = \alpha_1\cdots\alpha_N\quad\mathrm{and}\quad M_x(\alpha) = \left( \sum_{n=1}^N M(\alpha_n)^x\right)^{1/x}
\end{equation}
and at least two of the points $\alpha_1,\ldots,\alpha_N$ are not roots of unity. By Theorem \ref{Reduction}, there exists a root of unity $\zeta$ and
points $\beta_1,\ldots,\beta_N\in \mathrm{Rad}(K_\alpha)$ such that
\begin{equation*}
\alpha = \zeta\beta_1\cdots\beta_N\quad\mathrm{and}\quad M(\beta_n) \leq M(\alpha_n)
\end{equation*}
for all $n$. If for any $n$ we have that $M(\beta_n) < M(\alpha_n)$, then
\begin{equation*}
M_x(\alpha) \leq \left( \sum_{n=1}^N M(\beta_n)^x\right)^{1/x} < \left( \sum_{n=1}^N M(\alpha_n)^x\right)^{1/x}
\end{equation*}
which contradicts the right hand side of \eqref{AchievedApplication}. Therefore, we have that $M(\beta_n) = M(\alpha_n)$ for all
$n$. In particular, at least two of the points $\beta_1,\ldots,\beta_N$ are not roots of unity. Furthermore, since each $\beta_n\in \mathrm{Rad}(K_\alpha)$,
we may apply Lemma \ref{HeightInK} to see that $M(\beta_n) \geq C(\alpha)$ whenever $\beta_n$ is not a root of unity. This yields
\begin{equation*}
M_x(\alpha) = \left(\sum_{n=1}^N M(\beta_n)^x\right)^{1/x} \geq (2C(\alpha)^x)^{1/x}
\end{equation*}
which implies that
\begin{equation*}
M_x(\alpha) \geq 2^{1/x} C(\alpha).
\end{equation*}
However, we now have the strict inequality \eqref{LooseBound} which gives $2^{1/x} > \bar M(\alpha)/C(\alpha)$ and
\begin{equation*}
M_x(\alpha) > \bar M(\alpha)
\end{equation*}
contradicting \eqref{MexUpper}. Therefore, exactly one point among $\alpha_1,\ldots,\alpha_N$ is not a root of unity.
\end{proof}
Before we prove Theorem \ref{NotUniform}, we recall our remark that $\bar M(\alpha)$ is often straightforward to compute, so that Theorem \ref{SmallP} and
Lemma \ref{StrongSmallP} are useful in applications. The following proof is a typical example.
\begin{proof}[Proof of Theorem \ref{NotUniform}]
Let $\alpha = p^2$. In order to prove \eqref{P2Small}, we wish to apply Lemma \ref{StrongSmallP}, so we must compute the values of $\bar M(\alpha)$ and $C(\alpha)$.
We begin by observing that
\begin{equation*}
\bar M(\alpha) = \inf\{M(\zeta\alpha):\zeta\in \mathrm{Tor}(\overline\ratt)\} = \inf\{\deg(\zeta\alpha)\cdot h(\zeta\alpha):\zeta\in \mathrm{Tor}(\overline\ratt)\}.
\end{equation*}
Then by \eqref{WeilHeightDefined}, we obtain that
\begin{equation} \label{MBarM}
\bar M(\alpha) = h(\alpha)\cdot \inf\{\deg(\zeta\alpha):\zeta\in \mathrm{Tor}(\overline\ratt)\}.
\end{equation}
It is clear that the infimum on the right hand side of \eqref{MBarM} is achieved since it is an infimum over positive integers. More specifically,
it is achieved by a root of unity $\zeta$ that makes $\deg(\zeta\alpha)$ as small as possible. In our case, $\alpha$ is rational, so this occurs when $\zeta = 1$ leaving
\begin{equation} \label{AlphaUpper}
\bar M(\alpha) = \bar M(p^2) = M(p^2) = \log (p^2).
\end{equation}
In addition, we know that $K_\alpha = \mathbb Q$ so that $C(\alpha) = \log 2$ which now gives
\begin{equation*}
x\cdot (\log \bar M(\alpha) - \log C(\alpha)) = x\cdot (\log \log (p^2) - \log \log 2) < \log 2.
\end{equation*}
By Lemma \ref{StrongSmallP}, we know that any point $(\alpha_1,\alpha_2,\ldots)$ that attains the infimum in $M_x(\alpha) = M_x(p^2)$ must have precisely one point $\alpha_n$
that is not a root of unity. This completes the proof of \eqref{P2Small}.
To prove \eqref{P2Large}, we take $x > 1$ and assume that $(\alpha_1,\alpha_2,\ldots)$ attains the infimum in the definition of $M_x(p^2)$, where at most one
point $\alpha_n$ is different from a root of unity. Therefore, there exists a root of unity $\zeta$ and an algebraic number $\beta$ such that
\begin{equation*}
p^2 = \zeta\beta\quad\mathrm{and}\quad M_x(p^2) = M(\beta).
\end{equation*}
Hence we find immediately that
\begin{equation*}
M(\beta) = M_x(p^2) \leq (M(p)^x + M(p)^x)^{1/x} = 2^{1/x}\log p.
\end{equation*}
Since $x > 1$, this yields that
\begin{equation*} \label{BetaUpper}
M(\beta) < 2\log p.
\end{equation*}
On the other hand, we have that $\beta = \zeta^{-1}p^2$ so that, using \eqref{AlphaUpper}, we obtain
\begin{equation*}
M(\beta) = M(\zeta^{-1}p^2) \geq \bar M(p^2) = 2\log p
\end{equation*}
which is a contradiction. Thus, at least two points among $(\alpha_1,\alpha_2,\ldots)$ must not be roots of unity.
\end{proof}
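As a numerical aside (our own illustration, not part of the original argument): the inequality $x\cdot (\log \log (p^2) - \log \log 2) < \log 2$ used in the proof above determines an explicit admissible range of $x$, which shrinks as $p$ grows. A minimal sketch in Python, assuming only the inequality as displayed:

```python
import math

# Threshold on x for the hypothesis x*(log log p^2 - log log 2) < log 2,
# taken verbatim from the displayed inequality in the proof above.
def x_threshold(p):
    return math.log(2) / (math.log(math.log(p ** 2)) - math.log(math.log(2)))

# The admissible range of x shrinks as p grows.
assert x_threshold(3) > x_threshold(5) > x_threshold(7)

# Any x below the threshold satisfies the inequality.
p, x = 5, x_threshold(5) / 2
assert x * (math.log(math.log(p ** 2)) - math.log(math.log(2))) < math.log(2)
```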
\section{Continuity of $x\mapsto M_x(\alpha)$} \label{ContinuitySection}
We have already proved that, for any height function $\phi$, the map $x\mapsto \phi_x(\alpha)$ is left semi-continuous. In general, we know that
such functions are not always right semi-continuous. However, we are able to use Theorem \ref{Achieved} and our observations about the Mahler measure
to establish right semi-continuity in this case.
\begin{proof}[Proof of Theorem \ref{Continuous}]
If $\alpha$ is a root of unity, then $M_x(\alpha) = 0$ for all $x$, so we may assume that $\alpha$ is not a root of unity.
Furthermore, we know by Theorem \ref{LeftSemiContinuous} that this map is left semi-continuous at all points, so it remains only to show that it
is right semi-continuous.
Now let $\bar x >0$ be a real number, so we must show that
\begin{equation} \label{RightSemiEnd}
\lim_{y\to \bar x^+} M_y(\alpha) = M_{\bar x}(\alpha).
\end{equation}
Since $x\mapsto M_x(\alpha)$ is decreasing, we know that the left hand side of \eqref{RightSemiEnd} exists. Moreover, we have that
\begin{equation} \label{HalfRightSemi}
\lim_{y\to \bar x^+} M_y(\alpha) \leq M_{\bar x}(\alpha).
\end{equation}
Now we select a point $y\in (\bar x, \bar x +1]$. By Theorem \ref{Achieved}, there must exist points
\begin{equation*}
\alpha_1,\ldots,\alpha_N \in \mathrm{Rad}(K_\alpha) \setminus \mathrm{Tor}(\overline\ratt)
\end{equation*}
and $\zeta\in \mathrm{Tor}(\overline\ratt)$ such that
\begin{equation*}
\alpha = \zeta\alpha_1\cdots\alpha_N\quad\mathrm{and}\quad M_y(\alpha) = \left( \sum_{n=1}^N M(\alpha_n)^y\right)^{1/y}.
\end{equation*}
Since $M_y(\alpha) \leq M(\alpha)$, we may assume without loss of generality that $M(\alpha_n) \leq M(\alpha)$ for all $n$. Furthermore, since $\alpha$ is not a root
of unity, we know that $N\geq 1$. For simplicity, we write now $a_n = M(\alpha_n)$ so that
\begin{equation*}
M_y(\alpha) = \left( \sum_{n=1}^N a_n^y\right)^{1/y},
\end{equation*}
and note that by Lemma \ref{HeightInK}, we have that
\begin{equation} \label{aLower}
a_n \geq C(\alpha)\ \mathrm{for\ all}\ n.
\end{equation}
Next, we define the function $f_y$ by
\begin{equation*}
f_y(x) = \left( \sum_{n=1}^N a_n^x\right)^{1/x}
\end{equation*}
and note that $f_y$ does indeed depend on $y$ because the points $\zeta$ and $\alpha_1,\ldots,\alpha_N$ depend on $y$. We now have immediately that
\begin{equation} \label{fAndM}
f_y(y) = M_y(\alpha).
\end{equation}
Since $\alpha = \zeta\alpha_1\cdots\alpha_N$, we know that
\begin{equation*}
M_{\bar x}(\alpha) \leq \left(\sum_{n=1}^N M(\alpha_n)^{\bar x}\right)^{1/{\bar x}} = \left(\sum_{n=1}^N a_n^{\bar x}\right)^{1/{\bar x}} = f_y(\bar x),
\end{equation*}
and therefore, we obtain that
\begin{equation} \label{fAndM2}
M_{\bar x}(\alpha) \leq f_y(\bar x).
\end{equation}
We know that $a_n> 0$ for all $n$ implying that $f_y(x) > 0$ for all $x$, so we may define the function $g_y(x) = \log f_y(x)$.
Since $f_y$ is differentiable on the positive real numbers, we know that $g_y$ is as well. Therefore, we may apply the Mean Value Theorem to it on $[\bar x, y]$.
Hence, there exists a point $c\in [\bar x,y]$ such that
\begin{equation*}
g_y'(c) = \frac{g_y(y) - g_y(\bar x)}{y-\bar x} = \frac{\log f_y(y) - \log f_y(\bar x)}{y-\bar x}
\end{equation*}
and it follows from \eqref{fAndM} and \eqref{fAndM2} that
\begin{equation} \label{MVTInequality}
g_y'(c) \leq \frac{\log M_y(\alpha) - \log M_{\bar x}(\alpha)}{y-\bar x}.
\end{equation}
We now wish to take limits of both sides of \eqref{MVTInequality} as $y$ tends to $\bar x$ from the right. However, it is possible that the limit of the left hand
side either equals $-\infty$ or does not exist as $y\to \bar x^+$. To solve this problem, we wish to give a lower bound on $g_y'(c)$ that does not depend on $y$.
For any $x >0$, we note that
\begin{align*}
g_y'(x) & = \frac{d}{dx}\log f_y(x) \\
& = \frac{d}{dx} \frac{1}{x}\left( \log \sum_{n=1}^N a_n^x\right) \\
& = \frac{1}{x^2}\left( x\cdot\frac{ \left(\sum_{n=1}^N a_n^x\log a_n\right)}{\left(\sum_{n=1}^N a_n^x\right)}
- \log \sum_{n=1}^N a_n^x \right ).
\end{align*}
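The closed form for $g_y'(x)$ just computed can be checked against a finite-difference derivative; the following sketch (with $N=2$ and the values $a_1 = 2$, $a_2 = 3$ chosen arbitrarily by us) is purely illustrative:

```python
import math

# Numerical check that the closed form for g'(x) derived above matches a
# central-difference derivative of g(x) = (1/x) * log(a1^x + a2^x).
a = (2.0, 3.0)

def g(x):
    return math.log(sum(t ** x for t in a)) / x

def g_prime(x):
    # the displayed formula: (1/x^2) * (x * sum(a^x log a)/sum(a^x) - log sum(a^x))
    s = sum(t ** x for t in a)
    num = sum(t ** x * math.log(t) for t in a)
    return (x * num / s - math.log(s)) / x ** 2

x, h = 1.5, 1e-6
finite_diff = (g(x + h) - g(x - h)) / (2 * h)
assert abs(g_prime(x) - finite_diff) < 1e-6
```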
Then using \eqref{aLower}, we have that
\begin{equation} \label{FirstLower}
g_y'(x) \geq \frac{1}{x^2}\left( x\cdot \log C(\alpha) - \log \sum_{n=1}^N a_n^x \right ).
\end{equation}
Now we need to give an upper bound on $\sum_{n=1}^N a_n^x$. Recall that we must have $a_n = M(\alpha_n) \leq M(\alpha)$ for all $n$.
Therefore, we have that
\begin{equation*}
\sum_{n=1}^N a_n^x \leq N M(\alpha)^x.
\end{equation*}
But using \eqref{aLower} again, we find that
\begin{equation*}
M(\alpha) \geq M_y(\alpha) = \left(\sum_{n=1}^N a_n^y\right)^{1/y} \geq (N C(\alpha)^y)^{1/y} = N^{1/y}C(\alpha).
\end{equation*}
We also know $C(\alpha) > 0$ and $y\in (\bar x, \bar x+1]$ so that
\begin{equation*} \label{NUpper}
N \leq \left( \frac{M(\alpha)}{C(\alpha)}\right)^y \leq \left( \frac{M(\alpha)}{C(\alpha)}\right)^{\bar x +1},
\end{equation*}
and therefore
\begin{equation*} \label{SumUpper}
\sum_{n=1}^N a_n^x \leq \frac{M(\alpha)^{x + \bar x +1}}{C(\alpha)^{\bar x + 1}}.
\end{equation*}
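For the reader's convenience, the two displayed bounds combine by a routine substitution:

```latex
\sum_{n=1}^N a_n^x \;\leq\; N\, M(\alpha)^x
\;\leq\; \left(\frac{M(\alpha)}{C(\alpha)}\right)^{\bar x + 1} M(\alpha)^x
\;=\; \frac{M(\alpha)^{x+\bar x+1}}{C(\alpha)^{\bar x+1}}.
```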
It now follows that
\begin{equation*}
-\log \sum_{n=1}^N a_n^x \geq -\log \left( \frac{M(\alpha)^{x + \bar x +1}}{C(\alpha)^{\bar x + 1}}\right).
\end{equation*}
Combining this with \eqref{FirstLower}, we obtain that
\begin{equation*}
g_y'(x) \geq\frac{1}{x^2}\left( x\cdot \log C(\alpha) - \log \left( \frac{M(\alpha)^{x + \bar x +1}}{C(\alpha)^{\bar x + 1}}\right) \right),
\end{equation*}
so we have shown that
\begin{equation} \label{SecondLower}
g_y'(x) \geq \frac{x + \bar x +1}{x^2} \log \left( \frac{C(\alpha)}{M(\alpha)} \right).
\end{equation}
For simplicity, we now write $D(\alpha,\bar x, x)$ to denote the right hand side of \eqref{SecondLower}. As a function of $x$, it is obvious that
$D(\alpha,\bar x, x)$ is continuous for all $x > 0$. Hence, we may define
\begin{equation*}
\mathcal D(\alpha,\bar x) = \min \{D(\alpha,\bar x, x): x\in [\bar x, \bar x +1]\}.
\end{equation*}
Now $\mathcal D(\alpha,\bar x)$ is the desired lower bound on $g_y'(c)$ not depending on $y$.
Since $c\in [\bar x,y] \subset [\bar x,\bar x+1]$, we may apply \eqref{MVTInequality} and \eqref{SecondLower} to see that
\begin{equation*}
\mathcal D(\alpha,\bar x) \leq D(\alpha,\bar x, c) \leq g_y'(c) \leq \frac{\log M_y(\alpha) - \log M_{\bar x}(\alpha)}{y-\bar x}.
\end{equation*}
By multiplying through by $y - \bar x$, we find that
\begin{equation} \label{AlmostDone}
(y-\bar x) \mathcal D(\alpha,\bar x) \leq \log M_y(\alpha) - \log M_{\bar x}(\alpha)
\end{equation}
holds for all $y\in (\bar x, \bar x + 1]$.
As we have noted, $\lim_{y\to \bar x^+} M_y(\alpha)$ exists. Since we have assumed that $\alpha$ is not a root of unity, we conclude from Theorem \ref{Achieved}
that $M_y(\alpha) > 0$ for all $y$. It now follows that $\lim_{y\to \bar x^+} \log M_y(\alpha)$ also exists. Moreover, the term $\mathcal D(\alpha,\bar x)$ is a real
number not depending on $y$, so the left hand side of \eqref{AlmostDone} tends to zero as $y$ tends to $\bar x$ from the right. This leaves
\begin{align*}
0 & = \lim_{y\to \bar x^+}((y-\bar x) \mathcal D(\alpha,\bar x)) \\
& \leq \lim_{y\to \bar x^+} (\log M_y(\alpha) - \log M_{\bar x}(\alpha)) \\
& = \lim_{y\to \bar x^+} \log M_y(\alpha) - \lim_{y\to \bar x^+}\log M_{\bar x}(\alpha) \\
& = \lim_{y\to \bar x^+} \log M_y(\alpha) - \log M_{\bar x}(\alpha),
\end{align*}
which yields
\begin{equation*}
\log M_{\bar x}(\alpha) \leq \lim_{y\to \bar x^+} \log M_y(\alpha)
\end{equation*}
so that $M_{\bar x}(\alpha) \leq \lim_{y\to \bar x^+} M_y(\alpha)$ and the result follows by combining this with \eqref{HalfRightSemi}.
\end{proof}
\section{Weil height}
Before we begin our proof of Theorem \ref{WeilHeightComp}, we recall that if $N$ is any integer, then it is well-known that
\begin{equation} \label{IntPowers}
h(\alpha^N) = |N|\cdot h(\alpha)
\end{equation}
for all algebraic numbers $\alpha$. Using this fact, we are able to proceed with our proof.
\begin{proof}[Proof of Theorem \ref{WeilHeightComp}]
First assume that $x \leq 1$. By \eqref{MetricHeightConversion} of Theorem \ref{MetricConstruction}, we have that $h_x(\alpha) \leq h(\alpha)$.
But also, it is well-known that $h$ is already a $1$-metric height. Therefore, \eqref{NoChangeMetric} of Theorem \ref{MetricConstruction} implies that
$h_1(\alpha) = h(\alpha)$. Then by \eqref{Comparisons} of Theorem \ref{MetricConstruction}, we conclude that $h_x(\alpha) \geq h(\alpha)$ verifying the
theorem in the case that $x \leq 1$.
Next, we assume that $x > 1$. Let $N$ be a positive integer and select $\beta\in\overline\ratt$ such that $\beta^N = \alpha$. Therefore, we have that
\begin{equation*}
h_x(\alpha) \leq \left(\sum_{n=1}^N h(\beta)^x\right)^{1/x} = (N h(\beta)^x)^{1/x} = N^{1/x}\cdot h(\beta).
\end{equation*}
Then using \eqref{IntPowers} we obtain that $h(\alpha) = N\cdot h(\beta)$ which yields
\begin{equation} \label{TightUpper}
h_x(\alpha) \leq N^{\frac{1}{x} - 1}\cdot h(\alpha).
\end{equation}
Since $x > 1$, the right hand side of \eqref{TightUpper} tends to zero as $N\to\infty$ completing the proof.
\end{proof}
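The decay of the bound \eqref{TightUpper} can be illustrated numerically; the following minimal sketch uses the choices $\alpha = 2$ (so $h(\alpha) = \log 2$) and $x = 2$, which are ours rather than the paper's:

```python
import math

# h_x(alpha) <= N^(1/x - 1) * h(alpha): for x > 1 the bound decays as N grows.
h_alpha = math.log(2)
x = 2.0
bounds = [N ** (1 / x - 1) * h_alpha for N in (1, 100, 10000)]
assert bounds[0] > bounds[1] > bounds[2]
assert bounds[2] < 0.01  # N = 10000 gives N^(-1/2) * log 2, about 0.0069
```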
\section{Acknowledgment}
The author wishes to thank the Max-Planck-Institut f\"ur Mathematik where the majority of this research took place.
\end{document} |
\begin{document}
\title{Categorical Construction of $A,D,E$ Root Systems}
\author{A. Kirillov Jr.}
\address{Department of Mathematics, SUNY at Stony Brook,
Stony Brook, NY 11794, USA}
\email{[email protected]}
\urladdr{http://www.math.sunysb.edu/\textasciitilde kirillov/}
\author{J. Thind}
\address{Department of Mathematics and Statistics, Queen's University,
Kingston, ON, Canada}
\email{[email protected]}
\maketitle
\begin{abstract}
Let $\Gamma$ be a Dynkin diagram of type $A,D,E$ and let $R$ denote the corresponding root system. In this paper we give a categorical construction of $R$ from $\Gamma$. Instead of choosing an orientation of $\Gamma$ and studying representations of the associated quiver, we study representations of a canonical quiver $\widehat{\Gamma}$ associated to $\Gamma$. This construction is very closely related to the preprojective algebra of $\Gamma$. In particular, the construction gives a certain periodicity result about the preprojective algebra.
\end{abstract}
\section{Introduction}\label{s:intro}
In the 1970's Gabriel showed that when the underlying graph of a quiver $\overrightarrow{\Gamma}$ is a Dynkin diagram of type $A,D,E$, the set of isomorphism classes of indecomposable representations is in bijection with the set of positive roots of the corresponding root system (see \cite{gabriel}). Moreover, one can obtain an explicit description of the inner product and the root lattice. Ringel then showed that using this category one can construct the positive part of the corresponding Lie algebra (see \cite{ringel}). To obtain all roots and the whole Lie algebra one must consider isomorphism classes of objects in a related category: the ``2-periodic derived category'' $\mathcal{D}^{b} (\operatorname{Rep} (\overrightarrow{\Gamma}) )/ T^{2}$, where $T$ is the translation functor (see \cite{px}). In this approach two different choices of orientation of the same graph give rise to different Abelian categories, which are not equivalent but instead derived equivalent. The relation between the different categories is given by the ``BGP reflection functors''.
The drawback of the quiver approach is that it requires a choice of orientation of the Dynkin diagram, making the constructions non-canonical. Similarly, the standard construction of a root system requires a choice of simple roots. One would like to find a construction which does not require these choices.
Another approach, which is independent of any choice of orientation, was suggested by Ocneanu \cite{ocneanu} in the setting of quantum subgroups of $SU(2)$. His idea was to give a purely combinatorial construction by studying ``essential paths'' in the quiver $\widehat{\Gamma} = \Gamma \times \mathbb{Z}_{h}$, which requires no choice of orientation.
In the case of affine Dynkin diagrams the McKay correspondence provides a tool for avoiding choosing orientations. The classical McKay correspondence identifies affine Dynkin diagrams of type $A,D,E$ and finite subgroups $G\subset SU(2)$. In 2006, Kirillov Jr. studied a geometric approach to McKay correspondence using $\bar{G}$-equivariant coherent sheaves on $\mathbb{P}^{1}$, where $\bar{G} = G/ \pm I$ (see \cite{kirillov}).
In particular, it was shown that indecomposable objects in the category $\mathcal{D}^{b}_{\bar{G}}(\mathbb{P}^{1})/T^{2}$ are in bijection with the roots of the corresponding affine root system, that the inner product and root lattice admit an explicit description in terms of this category, and that although there is no natural choice of simple roots, there is a canonical Coxeter element in the Weyl group. This gives a ``categorical construction" of the corresponding root system. Here ``categorical construction" means that roots are realized as classes of indecomposable objects in a certain category, and the inner product and root lattice admit explicit description in terms of this category.
Motivated by the above constructions, the authors conjectured in \cite{kt} a construction analogous to the affine case when $\Gamma$ is a finite Dynkin graph. (For the reader's convenience the conjecture is stated at the end of this section.)
In this paper, that conjecture is established. Given a Dynkin diagram $\Gamma$ we study a translation quiver $\widehat{\Gamma} \subset \Gamma \times \mathbb{Z}$, and a triangulated subcategory $\mathcal{D} \subset \mathcal{D} (\widehat{\Gamma})$ of the corresponding derived category. Given any choice of orientation $\Omega$, equivalences $\mathcal{D} \to \mathcal{D}^{b} (\operatorname{Rep} (\Gamma , \Omega))$ are constructed and shown to be compatible with the reflection functors.
Moreover, this construction is closely related to the preprojective algebra of $\Gamma$. We begin by giving a ``graphical'' description of the Koszul complex of the preprojective algebra. In this setup, elements of degree $k$ in the Koszul complex are visualized as paths in the quiver $\widehat{\Gamma}$ with $k$ ``jumps''. This description is then used to construct the indecomposable objects in the category $\mathcal{D}$. We show that the category $\mathcal{D}$ has $\widehat{\Gamma}^{op}$ as its Auslander-Reiten quiver, and use this to relate $\mathcal{D}$ to the mesh category as described in \cite{bbk} and \cite{happel}. This leads to a periodicity result about the preprojective algebra (see Theorem~\ref{t:periodic}). Finally, we study the quotient category $\mathcal{C}= \mathcal{D} / T^{2}$, where $T$ is the translation functor, and use this to establish the conjecture stated in \cite{kt}. The Auslander-Reiten quiver of $\mathcal{C}$ is denoted by $\widehat{\Gamma}_{cyc}$, and $\widehat{\Gamma}_{cyc} \subset \Gamma \times \mathbb{Z}_{2h}$, where $h$ is the Coxeter number of $\Gamma$.
The main result of this paper (and statement of the conjecture in \cite{kt}) is summarized in the following Theorem.
\begin{thm}\label{t:main1}
Given a simply-laced Dynkin diagram $\Gamma$ with Coxeter number $h$,
there exists a triangulated category $\mathcal{C}$ with an exact functor
$\mathcal{C}\to\mathcal{C}\colon \mathcal{F}\mapsto\mathcal{F}(2)$ \textup{(}``twist''\textup{)} with the
following properties:
\begin{enumerate}
\item The category $\mathcal{C}$ is 2-periodic: $T^2=\id$.
\item For any $\mathcal{F}\in \mathcal{C}$, there is a canonical functorial isomorphism
$\mathcal{F}(2h)=\mathcal{F}$, where $h$ is the Coxeter number of $\Gamma$.
\item Let $\mathcal{K}$ be the Grothendieck group of $\mathcal{C}$. The corresponding root system $R$ is identified with the set $\Ind\subset \mathcal{K}$ of all indecomposable classes.
\item The map $C\colon[\mathcal{F}]\mapsto[\mathcal{F}(-2)]$ is a Coxeter
element for this root system.
\item Set $\langle X,Y \rangle_{\mathcal{C}} = \dim \operatorname{RHom} (X,Y) = \dim \Hom (X,Y) - \dim \Ext^{1} (X,Y)$ (the ``Euler form''); then the inner
product is given by $(X,Y) = \langle X,Y \rangle_{\mathcal{C}} + \langle Y,X \rangle_{\mathcal{C}}$.
\item There is a natural bijection $\Phi : \Ind \to \widehat{\Gamma}_{cyc}$ between indecomposable objects and vertices in $\widehat{\Gamma}_{cyc}$.
Under this bijection, the Coxeter element $C$ defined above is
identified with the map $\tau :(i,n)\mapsto (i,n+2)$.
Denote the indecomposable object corresponding
to $q=(i,n)$ by $X_{q}$.
\item The category $\mathcal{C}$ has ``Serre duality'': $$\Hom (X,Y) = (\Ext^{1} (Y, X(-2)))^{\ast}.$$
There is also an identification
$$\Hom(X_{q} , X_{q^{\prime}} ) =\Path(q^{\prime}, q )/ J$$
where $\Path$ is the vector space generated by paths in $\widehat{\Gamma}_{cyc}$ and $J$ is
an explicitly described subspace. Thus the form $\langle \cdot , \cdot \rangle$ is determined by paths in $\widehat{\Gamma}_{cyc}$.
\item For every height function ${\bf{h}}$ (see Section~\ref{s:gammahat} for definition), there is a derived functor $R\rho_{{\bf{h}}} : \mathcal{C} \to
\mathcal{D}^b(\Gamma , \Omega_{{\bf{h}}})/T^2$ which is an equivalence of
triangulated categories.
\end{enumerate}
\end{thm}
\begin{remark}
It has been pointed out that Kajiura, Saito and Takahashi construct a similar category using matrix factorizations (see \cite{kst}). Specifically, from each polynomial of type $A,D,E$ they construct a triangulated category $\mathcal{H}$ and an equivalence $\mathcal{H} \to \mathcal{D}^{b} (\Gamma , \Omega)$. However, our construction is quite different from theirs; we take a more combinatorial approach, using the Dynkin graph $\Gamma$ and the quiver $\widehat{\Gamma}$ as a starting point, rather than the polynomial associated to $\Gamma$.
\end{remark}
\section{Preliminaries - Quivers and Preprojective Algebra}\label{s:quivers}
A quiver $\overrightarrow{\Gamma}$ is an oriented graph. The vertex set is denoted by $\Gamma_{0}$ and the arrow set by $\Gamma_{1}$. In what follows a quiver is obtained by orienting a graph $\Gamma$; in such a case the quiver is denoted by $\overrightarrow{\Gamma} = (\Gamma, \Omega)$, where $\Omega$ is an orientation of the graph $\Gamma$. There are two functions $s,t : \Gamma_{1} \to \Gamma_{0}$, called ``source'' and ``target'' respectively, defined on an oriented edge $e : i\to j$ by $s(e) = i$ and $t(e) = j$.
Fix a field $\mathbb{K}$. For any quiver $\overrightarrow{\Gamma}$ let $P(\overrightarrow{\Gamma})$ be the following algebra. As an algebra it is generated by the elements $\{ e \}_{e \in \Gamma_{1}} \cup \{ e_{i} \}_{i\in \Gamma_{0}}$, where the elements $e_{i}$ are thought of as ``paths of length 0 from $i$ to $i$''. Viewing a path as a sequence of edges, the multiplication of basis elements is given by concatenation of paths.
\begin{defi} The algebra $P(\overrightarrow{\Gamma})$ defined above is called the {\em path algebra} of $\overrightarrow{\Gamma}$. It is an associative algebra with unit given by $1 = \sum_{i \in \Gamma_{0}} e_{i}$.
\end{defi}
The algebra $P(\overrightarrow{\Gamma})$ is graded by path length and by the source and target of the path. This gives a decomposition
\begin{equation}\label{e:decomp}
P (\overrightarrow{\Gamma}) = \bigoplus_{i,j \in \Gamma ; k \in \mathbb{N}} P_{i,j;k}
\end{equation}
where $P_{i,j;k}$ is the space spanned by paths of length $k$ from $i$ to $j$. (Here an edge has length 1, and the idempotent corresponding to a vertex has length 0.)
The preprojective algebra of a quiver $\overrightarrow{\Gamma}$ is defined as follows:
Consider the double quiver $\overline{\Gamma}$, which has the same vertex set as $\overrightarrow{\Gamma}$ but for every arrow $e:i\to j$ there is an additional arrow $\overline{e} :j \to i$.
Choose a function $\epsilon : \overline{\Gamma}_{1} \to \{ \pm 1 \}$ so that $\epsilon (e) + \epsilon ( \overline{e} ) = 0$. For each vertex $i\in \overrightarrow{\Gamma}$ define $\theta_{i} \in P_{i,i;2}$ by
\begin{equation}\label{e:mesh}
\theta_{i} =\sum_{s(e)=i} \epsilon (e) \overline{e} e \in P_{i,i;2}.
\end{equation}
\begin{defi} The {\em preprojective algebra} $\Pi (\Gamma)$ of $\Gamma$ is defined as $P(\overline{\Gamma}) / J$, where $J$ is the ideal generated by the $\theta_{i}$'s. The ideal $J$ is called the ``mesh'' ideal.
\end{defi}
Note that this algebra is independent of the choice of $\epsilon$ and depends only on the underlying graph $\Gamma$, not on the orientation $\Omega$. (See \cite{lusztig2} for details.)
Alternatively, we can define $\Pi (\Gamma)$ as follows: Let $R$ be the algebra of functions $\Gamma \to \mathbb{K}$ with pointwise multiplication. Let $V$ be the vector space spanned by the edges of the double quiver $\overline{\Gamma}$ and let $L$ be the $R$-submodule of $V\otimes V$ generated by the element $\theta = \sum_{i \in \Gamma} \theta_{i}$. Consider the embedding $j:L \hookrightarrow V\otimes V$. Denote by $J$ the quadratic ideal generated by $L$. Then the preprojective algebra of $\Gamma$ is $\Pi = T_{R} (V)/J$. (See \cite{etginz} for details.)
A representation of a quiver $\overrightarrow{\Gamma}$ is a choice of a vector space $X(i)$ for every vertex $i \in \Gamma_{0}$ and a linear map $x_{e} : X(i) \to X(j)$ for every edge $e: i \to j$. A morphism $\Phi : X \to Y$ of representations is a collection of linear maps $\Phi (i) : X(i) \to Y(i)$ such that the following diagram commutes for every edge $e: i \to j$.
$$
\xymatrix{
X(i) \ar[r]^{x_{e}} \ar[d]^{\Phi (i)} & X(j) \ar[d]_{\Phi (j)} \\
Y(i) \ar[r]^{y_{e}} & Y(j) \\
}$$
Denote the Abelian category of representations of $\overrightarrow{\Gamma}$ by $\operatorname{Rep} (\overrightarrow{\Gamma})$.
\begin{remark} Note that a representation of $\overrightarrow{\Gamma}$ is the same as a module over the path algebra $P( \overrightarrow{\Gamma} )$, and the notions of morphism coincide as well. (See \cite{crawley-boevey} for details.)
\end{remark}
For each vertex $i$ define a representation $P_{i}$ by setting $P_{i} (j) = P_{i,j}$, the space spanned by paths from $i$ to $j$ in $\overrightarrow{\Gamma}$. This representation is projective and indecomposable, and any indecomposable projective is isomorphic to $P_{i}$ for some vertex $i$ (see \cite{crawley-boevey}). For any vertex $i$ define a simple object $S_{i}$ by setting $S_{i} (j) = \delta_{i,j} \mathbb{K}$.
Now consider the corresponding bounded derived category, denoted by $\mathcal{D}^{b} (\overrightarrow{\Gamma})$. Recall that an object in $\mathcal{D}^{b} (\overrightarrow{\Gamma} )$ can be thought of as a choice of a bounded complex $X^{\bullet} (i)$ for each vertex $i \in \Gamma$, together with maps of complexes $x_{e} : X^{\bullet} (i) \to X^{\bullet} (j)$ for each edge $e:i \to j$.\\
In $\mathcal{D}^{b} (\overrightarrow{\Gamma})$ the indecomposable objects, up to isomorphism, are of the form $X[k]$, where $X$ is an indecomposable object in $\operatorname{Rep} (\overrightarrow{\Gamma})$ considered as a complex concentrated in degree 0. (See \cite{happel} for details.) When $\Gamma$ is Dynkin, the Auslander-Reiten quiver of the derived category is the quiver $\widehat{\Gamma}$ defined in Section~\ref{s:gammahat}. For more details about the structure of the derived category see \cite{happel}.
\begin{defi}
The {\em Auslander-Reiten} quiver of a category $\mathcal{A}$ is defined as follows:
\begin{enumerate}
\item The vertices are the set $\Ind(\mathcal{A})$ of non-zero isomorphism classes of indecomposable objects.
\item For vertices $[X] , [Y]$ there are $d_{ij}$ edges $e : [X] \to [Y]$, where $d_{ij} = \dim \text{Irr} (X, Y)$ and $\text{Irr} (X,Y)$ denotes the irreducible morphisms $X\to Y$.
\end{enumerate}
For more details, such as the definition of irreducible morphism, see \cite{ars} Chapter VII.
\end{defi}
On $\mathcal{D}^{b} (\overrightarrow{\Gamma})$ there are functors $\nu$ and $\tau$ defined by:
\begin{align}
&\Hom (X,Y) = (\Ext^{1} (Y, \tau X))^{*} \\
&\nu (X) = ( \Hom_{P} (X, P) )^{*}
\end{align}
where in the second line a representation $X$ is thought of as a module over the path algebra $P = P(\overrightarrow{\Gamma})$.\\
\begin{remark}
In the setting of equivariant sheaves on $\mathbb{P}^{1}$ considered in \cite{kirillov}, the functor $\tau$ is given by tensoring with the dualizing sheaf $\mathcal{O} (-2)$.
\end{remark}
\subsection{BGP Reflection Functors}
Despite the fact that Gabriel's Theorem establishes a bijection between the Grothendieck groups of $\operatorname{Rep} (\overrightarrow{\Gamma})$ for any choice of $\Omega$, it is not the case that these categories are equivalent for different choices of $\Omega$.
Let $\overrightarrow{\Gamma} = (\Gamma , \Omega)$ be a quiver and let $i \in \Gamma_{0}$ be a sink (or source) in the orientation $\Omega$. Define a new orientation $s_{i} \Omega$ of $\Gamma$ by reversing all arrows at $i$; the sink becomes a source (and the source a sink).
Let $i \in \Gamma$ be a source (or sink) in the orientation $\Omega$ and consider the categories $\operatorname{Rep} (\Gamma , \Omega)$ and $\operatorname{Rep} (\Gamma , s_{i} \Omega)$. These two categories are not equivalent; however, there is a natural functor $S_{i}^{\pm} : \operatorname{Rep} (\Gamma , \Omega) \to \operatorname{Rep} (\Gamma , s_{i} \Omega)$ between them (see \cite{bgp}). These functors are left (respectively right) exact. Although such a functor is not an equivalence, the corresponding derived functor $RS_{i}^{+}$ (respectively $LS_{i}^{-}$) provides an equivalence of triangulated categories $\mathcal{D}^{b} (\Gamma, \Omega) \to \mathcal{D}^{b} (\Gamma , s_{i} \Omega)$. A brief description of the derived functors is given for the reader's convenience.
Let $\overrightarrow{\Gamma} = (\Gamma, \Omega)$ and let $i$ be a source for $\Omega$. Define a functor $RS_{i}^{+} : \mathcal{D}^{b} (\Gamma, \Omega) \to \mathcal{D}^{b} (\Gamma , s_{i} \Omega)$ by setting
$$ RS_{i}^{+} X (j) = \begin{cases}
Cone(X(i) \to \bigoplus_{i\to k} X(k)) &\text{if} \ \ i=j \\
X(j) &\text{otherwise.}
\end{cases}
$$
For an edge $e: j \to k$ in $\Omega$ let $\overline{e}$ denote the corresponding edge in $s_{i} \Omega$; the map $RS_{i}^{+} (x_{\overline{e}})$ is given by
$$RS_{i}^{+} (x_{\overline{e}}) = \begin{cases}
x_{e} &\text{if } s(e) \neq i \\
(0 ,\iota_{j}) : X(j) \to X^{\bullet +1}(i) \bigoplus \oplus_{i\to j} X(j) &\text{if } s(e) = i
\end{cases}
$$
where $\iota_{j}$ is the embedding of $X(j)$ into $\oplus X(j)$.
Similarly, for $i$ a sink define $LS_{i}^{-} : \mathcal{D}^{b} (\Gamma, \Omega) \to \mathcal{D}^{b} (\Gamma , s_{i} \Omega)$ by
$$ LS_{i}^{-} X (j) = \begin{cases}
Cone(\bigoplus_{i\to k} X(k) \to X(i) ) &\text{if} \ \ i=j \\
X(j) &\text{otherwise.}
\end{cases}
$$
For an edge $e: j \to k$ in $\Omega$ let $\overline{e}$ denote the corresponding edge in $s_{i} \Omega$; the map $LS_{i}^{-} (x_{\overline{e}})$ is given by
$$LS_{i}^{-} (x_{\overline{e}}) = \begin{cases}
x_{e} &\text{if } t(e) \neq i \\
(\iota_{j}^{+1}, 0 ) : X(j) \to ( \oplus_{j\to i} X^{\bullet +1} (j) \bigoplus X^{\bullet}(i) ) &\text{if } t(e) = i
\end{cases}
$$
where $\iota_{j}$ is the embedding of $X(j)$ into $\oplus X(j)$.
These are the derived functors of the well-known ``BGP reflection functors'' (see \cite{gelfman}). Note that the functors $RS_{i}^{+}$ and $LS_{i}^{-}$ are inverse to each other. The name ``reflection functor'' comes from the action of these functors on the Grothendieck group. In the setting of Gabriel's Theorem, the Grothendieck group of $\mathcal{D}^{b} (\overrightarrow{\Gamma}) /T^{2}$ is isomorphic to the root lattice, indecomposable objects correspond to the roots, and the reflection functors act on the Grothendieck group as the corresponding simple reflections in the Weyl group of the associated root system.
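The Grothendieck-group action mentioned above can be made concrete on dimension vectors. The following sketch uses the standard simple-reflection formula for simply-laced root systems (the formula and the $A_3$ example are our own illustration, not taken from this paper):

```python
# Sketch: the simple reflection s_i sends a dimension vector d to the vector
# with i-th entry -d_i + sum of d_j over neighbours j of i, other entries
# unchanged (standard formula for simply-laced root systems).
def reflect(d, i, neighbours):
    d = list(d)
    d[i] = -d[i] + sum(d[j] for j in neighbours[i])
    return tuple(d)

# A_3 with vertices 0-1-2: reflecting the root alpha_1 + alpha_2 = (1,1,0)
# at the middle vertex yields alpha_1 = (1,0,0).
neighbours = {0: [1], 1: [0, 2], 2: [1]}
assert reflect((1, 1, 0), 1, neighbours) == (1, 0, 0)
# a simple reflection is an involution
assert reflect(reflect((1, 1, 0), 1, neighbours), 1, neighbours) == (1, 1, 0)
```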
\section{The Quiver $\widehat{\Gamma}$}\label{s:gammahat}
Let $\Gamma$ be a finite graph without cycles; in particular, $\Gamma$ is bipartite. Let $\Gamma = \Gamma_0\sqcup \Gamma_1$ be a bipartite splitting.
Define the quiver $\Gamma \times \mathbb{Z}$ as
follows:
\begin{align*}
&\text{vertices}\colon \Gamma\times \mathbb{Z}\\
&\text{edges}\colon \text{for each $n\in \mathbb{Z}$ and edge }i-j \text{ in
}\Gamma, \text{ there are oriented edges }\\
&\qquad(i,n)\to(j,n+1), (j,n)\to(i,n+1) \text{
in }\Gamma \times \mathbb{Z}
\end{align*}
For $\Gamma$ as above, with bipartite splitting $\Gamma = \Gamma_0\sqcup \Gamma_1$, the quiver $\Gamma \times \mathbb{Z}$ is disconnected: $\Gamma \times \mathbb{Z} =(\Gamma \times \mathbb{Z})_0\sqcup (\Gamma \times \mathbb{Z})_1$, where
$$(\Gamma \times \mathbb{Z} )_k=\{(i,n)\; | \; n+p(i) \equiv k\mod 2\}$$ where $p(i)=0$ for $i\in \Gamma_0$ and $p(i)=1$ for $i\in \Gamma_1$.
\begin{defi} Define the quiver $\widehat{\Gamma}$ by setting
\begin{equation}
\widehat{\Gamma} = \{(i,n) \in \Gamma \times \mathbb{Z} \; | \; n+p(i) \equiv 0 \mod 2\} = (\Gamma \times \mathbb{Z})_0.
\end{equation}
Let $\Gamma$ be an $A,D,E$ Dynkin diagram with Coxeter number $h$, so in particular $\Gamma$ is bipartite. In this case, define a cyclic version of $\widehat{\Gamma}$ by setting
\begin{equation}\label{e:Ihat}
\widehat{\Gamma}_{cyc} = \{ (i,n) \in \Gamma \times \mathbb{Z}_{2h} \; | \; n+p(i) \equiv 0 \mod 2 \}.
\end{equation}
\end{defi}
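As a quick sanity check on this definition (our own illustration): each of the $n$ vertices of $\Gamma$ contributes exactly $h$ residues of fixed parity modulo $2h$, so $\widehat{\Gamma}_{cyc}$ has $nh$ vertices, which is the number of roots of a simply-laced root system of rank $n$, consistent with the bijection in Theorem~\ref{t:main1}.

```python
# Enumerate the vertex set of Gamma-hat_cyc for Gamma = A_2 (h = 3) and
# D_5 (h = 8), checking |vertices| = n * h = |R|. The parity p encodes the
# bipartite splitting; the count is independent of the particular choice of p.
def cyc_vertices(n, h, p):
    return [(i, m) for i in range(n) for m in range(2 * h) if (m + p(i)) % 2 == 0]

assert len(cyc_vertices(2, 3, lambda i: i % 2)) == 6    # A_2 has 6 roots
assert len(cyc_vertices(5, 8, lambda i: i % 2)) == 40   # D_5 has 40 roots
```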
\begin{example}
For the graph $ \Gamma = D_{5}$ the quiver $\widehat{\Gamma}$ is shown in
Figure~\ref{f:Ihat-D5}.
\begin{figure}[ht]
\centering
\includegraphics[height=3.00in]{ihatd5}
\caption{The quiver $\widehat{\Gamma}$ for the graph $\Gamma = D_{5}$. For $D_{5}$ the Coxeter number is 8, so by identifying the outgoing arrows at the top level with the incoming arrows at the bottom level in this figure, one obtains $\widehat{\Gamma}_{cyc}$.}\label{f:Ihat-D5}
\end{figure}
\end{example}
\begin{remark}
Note that the quivers $\widehat{\Gamma}$ and $\widehat{\Gamma}_{cyc}$ do not depend on the choice of $p$, and in fact can be canonically defined as one of the two identical connected components of the quiver $\Gamma \times \mathbb{Z}$. The presentation given here is chosen to simplify notation and make it possible to write explicit formulae.
\end{remark}
It is well-known that $\mathcal{G}ammahat$ and $\mathcal{G}ammahat_{cyc}$ are the Auslander-Reiten quivers of the categories $\mathcal{D}^{b}( \mathcal{G}amma , \Omega)$ and $\mathcal{D}^{b} (\mathcal{G}amma , \Omega) / T^{2}$ respectively, when $\mathcal{G}amma$ is of type A,D,E. (See \cite{happel}.)
The following basic properties of $\mathcal{G}ammahat$ also hold for $\mathcal{G}ammahat_{cyc}$. For brevity only $\mathcal{G}ammahat$ is considered.
Define a ``twist" map $\tau\colon \mathcal{G}ammahat \to
\mathcal{G}ammahat$ by
\betagin{equation}\label{e:tau}
\tau(i,n)=(i,n+2).
\end{equation}
\betagin{defi}\label{d:height}
A function ${\bf{h}} \colon \mathcal{G}amma \to \mathbb{Z}$ satisfying ${\bf{h}} (j)={\bf{h}} (i)\pm 1$ if $i,j$ are connected by an edge in $\mathcal{G}amma$ and satisfying ${\bf{h}} (i)\equiv p(i)\mod 2$, will be called a {\em height function}. (Here $p$ is the parity function defined in the beginning of this section.)
\end{defi}
\betagin{defi}\label{d:slice}
Following \cite{gabriel2}, a connected full subquiver of $\mathcal{G}ammahat$ which contains a unique representative of $\{ (i,n) \}_{n\in \mathbb{Z}}$ for each $i\in \mathcal{G}amma$ will be called a {\em slice}.
\end{defi}
Any height function ${\bf{h}}$ defines a slice $\mathcal{G}amma_{{\bf{h}}} = \{(i, {\bf{h}} (i))\; | \; i\in \mathcal{G}amma \}\subset \mathcal{G}ammahat$; it also defines an orientation $\Omega_{{\bf{h}}}$ on $\mathcal{G}amma$ where $i\to j$ if $i,j$ are connected by an edge and ${\bf{h}} (j)= {\bf{h}} (i)+1$. It is easy to see that two height functions give the same orientation if and only if they differ by an additive constant, or equivalently, if the corresponding slices are obtained one from another by applying a power of $\tau$. Conversely, the second coordinate of any slice defines a height function.
Let ${\bf{h}}$ be a height function and let $i\in \mathcal{G}amma$ be a source for the corresponding orientation $\Omega_{{\bf{h}}}$. Define a new height function $s^{+}_{i} {\bf{h}}$ by
$$s^{+}_{i} {\bf{h}} (j) = \betagin{cases}
{\bf{h}} (j)+2 &\text{if} \ \ j=i \\
{\bf{h}} (j) &\text{if} \ \ j\mathfrak{n}eq i
\end{cases}
.$$
Similarly, if $i\in \mathcal{G}amma$ is a sink for the corresponding orientation $\Omega_{{\bf{h}}}$, define a new height function $s^{-}_{i} {\bf{h}}$ by
$$s^{-}_{i} {\bf{h}} (j) = \betagin{cases}
{\bf{h}} (j)-2 &\text{if} \ \ j=i \\
{\bf{h}} (j) &\text{if} \ \ j\mathfrak{n}eq i
\end{cases}
.$$
Note that the orientation $\Omega_{s^{\pm}_{i} {\bf{h}}}$ of $\mathcal{G}amma$ is obtained by reversing all arrows at $i$, and that any orientation of $\mathcal{G}amma$ can be obtained by a sequence of such operations. It is well-known that for any two height functions ${\bf{h}}, {\bf{h}}^{\prime}$ one can be obtained from the other by a sequence of operations $s_{i}^{\pm}$.
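The interplay between height functions, orientations, and the operations $s^{\pm}_{i}$ can be checked mechanically. The following sketch (our own, with our naming) verifies on $\mathcal{G}amma = A_3$ that $s^{+}_{i}$ at a source yields another height function whose orientation reverses exactly the arrows at $i$:

```python
# Height functions on Gamma = A_3 and the source reflection s_i^+ from the
# text. Names (is_height, orientation, s_plus) are ours.
edges = [(1, 2), (2, 3)]
p = {1: 0, 2: 1, 3: 0}  # bipartite parity of A_3

def is_height(h):
    # h(j) = h(i) +- 1 on edges, and h(i) = p(i) mod 2
    return (all(abs(h[i] - h[j]) == 1 for i, j in edges)
            and all(h[i] % 2 == p[i] for i in h))

def orientation(h):
    # i -> j iff i, j are joined by an edge and h(j) = h(i) + 1
    return {(i, j) if h[j] == h[i] + 1 else (j, i) for i, j in edges}

def s_plus(i, h):
    # reflection at a source i: raise h(i) by 2
    h2 = dict(h)
    h2[i] += 2
    return h2

h = {1: 0, 2: 1, 3: 2}
assert is_height(h)
assert orientation(h) == {(1, 2), (2, 3)}   # vertex 1 is a source
h2 = s_plus(1, h)
assert is_height(h2)                        # still a height function
assert orientation(h2) == {(2, 1), (2, 3)}  # arrows at 1 reversed
```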
For $\mathcal{G}amma$ Dynkin of type $A,D,E$, with Coxeter number $h$, define the following permutations on $\mathcal{G}ammahat$ (and $\mathcal{G}ammahat_{cyc}$). \\
The ``Nakayama" permutation given by:
\betagin{equation}\label{e:nakayama}
\mathfrak{n}u_{\mathcal{G}ammahat} (i,n) = (\check{\imath} \ ,n+h-2)
\end{equation}
The ``Twisted Nakayama" permutation given by:
\betagin{equation}\label{e:gamma}
\mathfrak{g}amma_{\mathcal{G}ammahat} (i,n) = (\check{\imath} \ , n+h) = \tau \circ \mathfrak{n}u_{\mathcal{G}ammahat} (i, n)
\end{equation}
Here $\check{\imath}$ is defined by $-\alphapha_{i}^{\Pi} = w_{0}^{\Pi} (\alphapha_{\check{\imath}}^{\Pi})$, where $w_{0}^{\Pi} \in W$ is the longest element and $\Pi$ is any set of simple roots. Thus for the root systems of type $A, D_{2n+1}, E_{6}$ this map corresponds to the diagram automorphism, while for $D_{2n}, E_{7}, E_{8}$ this map is just the identity.
It remains to verify that $\mathfrak{n}u_{\mathcal{G}ammahat}$ is well-defined. This only requires checking that the image does, in fact, lie in $\mathcal{G}ammahat$. Note that if $h$ is even, then $p(\check{\imath}) = p(i)$ and $k+h\equiv k \mod 2$, so $(\check{\imath}, k+h) \in \mathcal{G}ammahat$. If $h$ is odd, then $\mathcal{G}amma=A_{2n}$, $h=2n+1$ and $\check{\imath} = 2n-i+1$, so that $p(\check{\imath})\equiv p(i) + 1$ and $k+h\equiv k+1 \mod 2$, so again $(\check{\imath}, k+h) \in \mathcal{G}ammahat$. Hence the map $\mathfrak{n}u_{\mathcal{G}ammahat}$ is well-defined.
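The odd-$h$ case can be double-checked numerically. This sketch (ours) runs the parity verification for $\mathcal{G}amma = A_4$, where $h=5$ and $\check{\imath}=5-i$ is the diagram flip, and also confirms $\mathfrak{g}amma_{\mathcal{G}ammahat} = \tau \circ \mathfrak{n}u_{\mathcal{G}ammahat}$ on a window of levels:

```python
# Well-definedness check (our own sketch) of the Nakayama permutation for
# Gamma = A_4: h = 5 is odd and i |-> 5 - i is the diagram automorphism.
h = 5
p = lambda i: (i - 1) % 2      # parity on A_4, vertices 1..4
check = lambda i: 5 - i        # \check{\imath} for A_4
N = 30                         # window of levels to test

gammahat = {(i, n) for i in range(1, 5) for n in range(-N, N)
            if (n + p(i)) % 2 == 0}

def nu(q):
    i, n = q
    return (check(i), n + h - 2)

def tau(q):
    i, n = q
    return (i, n + 2)

for q in gammahat:
    i, n = nu(q)
    assert (n + p(i)) % 2 == 0            # image satisfies the parity rule
    assert tau(nu(q)) == (check(q[0]), q[1] + h)  # gamma = tau o nu
```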
\betagin{example}
The maps $\mathfrak{n}u_{\mathcal{G}ammahat}$ and $\mathfrak{g}amma_{\mathcal{G}ammahat}$ for the case $\mathcal{G}amma = A_{4}$ are shown in Figure~\ref{f:nugamma}.
\betagin{figure}[ht]
\centering
\betagin{overpic}
{nugamma}
\put (-15,92){$\mathfrak{g}amma_{\mathcal{G}ammahat} \mathcal{G}amma_{{\bf{h}}}$}
\put (-9,17){$\mathcal{G}amma_{{\bf{h}}}$}
\put (-15,72){$\mathfrak{n}u_{\mathcal{G}ammahat} \mathcal{G}amma_{{\bf{h}}}$}
\end{overpic}
\caption{The maps $\mathfrak{n}u_{\mathcal{G}ammahat}$ and $\mathfrak{g}amma_{\mathcal{G}ammahat}$ in the case $\mathcal{G}amma = A_{4}$. A slice $\mathcal{G}amma_{{\bf{h}}}$ and its images under $\mathfrak{n}u_{\mathcal{G}ammahat}$ and $\mathfrak{g}amma_{\mathcal{G}ammahat}$ are shown in bold.}\label{f:nugamma}
\end{figure}
\end{example}
In terms of the Auslander-Reiten quiver, these maps correspond to the functors $\tau$ and $\mathfrak{n}u$ defined on $\mathbb{R}ep (\mathcal{G}amma , \Omega)$.
\section{The Category $\mathcal{D}$}\label{s:catD}
In this section a precursor to the conjectured category is introduced and several basic results are established. To begin, a few preliminary results are required.
\betagin{defi}
Let $\overrightarrow{\mathcal{G}amma}$ be any quiver and $e: i\to j$ be an edge in $\overrightarrow{\mathcal{G}amma}$. For any object $X\in \mathcal{D} (\overrightarrow{\mathcal{G}amma})$ define the complex of vector spaces $Cone_{e}(X)$ as the cone of the map $x_{e} : X(i) \to X(j)$. This will be called the ``Cone over the edge $e$".
\end{defi}
\betagin{remark}
Note that $x_{e}: X(i) \to X(j)$ is an honest morphism between complexes of vector spaces, not a morphism in a derived category.
\end{remark}
\betagin{lemma}\label{l:edgecone}
For any quiver $\overrightarrow{\mathcal{G}amma}$ and edge $e:i \to j$, the assignment $X \mapsto Cone_{e}(X)$ defines a functor $Cone_{e} : \mathcal{D}^{b} (\overrightarrow{\mathcal{G}amma}) \to \mathcal{D}^{b} (Vect)$.
\end{lemma}
\betagin{proof}
Suppose that $X,Y \in \mathcal{D} (\overrightarrow{\mathcal{G}amma})$ and that $F:X \to Y$ is a map of complexes of representations. Let $x_{e} : X(i) \to X(j)$ and $y_{e} : Y(i) \to Y(j)$ be the maps of complexes corresponding to the edge $e$. Then $F(j) \circ x_{e} = y_{e} \circ F(i)$ and $d_{Y} \circ F = F \circ d_{X}$. By definition of $Cone_{e}$, this gives a map
$$\xymatrix{
Cone_{e} (x_{e}) \ar[d]^{ Cone_{e} (F) } & = (X^{\bullet +1}(i) \oplus X(j) , d_{X}(i) + x_{e} , d_{X} (j)) \\
Cone_{e} (y_{e}) & = (Y^{\bullet +1}(i) \oplus Y(j) , d_{Y}(i) + y_{e} , d_{Y} (j)) \\
}$$
Also note that this shows that if $F$ is a quasi-isomorphism, then the induced map $Cone_{e}(F)$ is also a quasi-isomorphism. \\
Let $f \in \Hom_{\mathcal{D} (\overrightarrow{\mathcal{G}amma})} (X, Y)$ and let $X \leftarrow Z \to Y$ be a roof diagram for $f$. Then the above shows that there is an induced roof diagram $Cone_{e} (x_{e}) \leftarrow Cone_{e} (z_{e}) \to Cone_{e} (y_{e})$. Hence $Cone_{e}$ is functorial.
\end{proof}
Let $\mathcal{G}amma$ be a finite graph without cycles. Let $\mathcal{D} (\mathcal{G}ammahat)$ be the (unbounded) derived category of representations of $\mathcal{G}ammahat$.
\betagin{defi}
Define a category $\mathcal{D}$ as follows: $Obj_{\mathcal{D}} = \{ (X , \Phii ) \}$ where $X \in \mathcal{D} (\mathcal{G}ammahat )$ is an object such that for any $q \in \mathcal{G}ammahat$ the complex $X(q)$ is bounded, and $\Phii$ is a collection of isomorphisms of complexes of vector spaces $\phi_{q} : X (\tau q) \to Cone(x_{q})$ for each $q \in \mathcal{G}ammahat$ such that the following diagram commutes.
$$\xymatrix{
\bigoplus_{q^{\prime}: q\to q^{\prime}} X(q^{\prime}) \ar[d]_{\sum_{q^{\prime} : q\to q^{\prime}} x_{q^{\prime}}} \ar[dr]^{\iota} & \\
X(\tau q) \ar[r]_{\phi_{q}} & Cone (x_{q}) \\
}$$
Here $x_{q} : X(q) \to \bigoplus_{q\to q^{\prime}} X(q^{\prime})$ is the map given by edges $q\to q^{\prime}$, $x_{q^{\prime}} : X(q^{\prime}) \to X(\tau q)$ is the map given by the edge $q^{\prime} \to \tau q$, $\iota$ is the inclusion map and $Cone(x_{q}) = \bigoplus_{e: s(e)=q} Cone_{e} (x_{e})$ is the direct sum of Cone over the edges $e: q\to q^{\prime}$, as defined above.
A morphism $F : (X, \Phii) \to (Y, \Psi )$ in $\mathcal{D}$ is given by a morphism $F : X \to Y$ in $\mathcal{D} (\mathcal{G}ammahat)$ such that the following diagram is commutative:
\betagin{equation}\label{e:morphism}
\xymatrix{
X(\tau q) \ar[r]^{F(\tau q)} \ar[d]^{\phi_{q}} & Y(\tau q) \ar[d]^{\psi_{q}} \\
Cone (x_{q} ) \ \ \ \ar[r]^{Cone( F( \tau q) ) } & \ \ \ Cone(y_{q}) \\
}
\end{equation}
\end{defi}
\betagin{defi} An object $X \in \mathcal{D} (\mathcal{G}ammahat)$ is said to satisfy the {\em fundamental relation} if for any $q \in \mathcal{G}ammahat$ there is a choice of map of complexes $z : X(\tau q) \to X(q) [1]$ so that
\betagin{equation}\label{e:fundamentalrelation}
X(q) \; | \;ackrel{x_{q}}{\to} \bigoplus_{q \to q^{\prime}} X(q^{\prime}) \; | \;ackrel{\sum x_{q^{\prime}}}{\to} X(\tau q) \; | \;ackrel{z}{\to} X(q)[1]
\end{equation}
is an exact triangle. Here $x_{q}, x_{q^{\prime}}$ are the maps corresponding to edges $q\to q^{\prime}$ and $q^{\prime} \to \tau q$ in $\mathcal{G}ammahat$.
\end{defi}
\betagin{prop}
For any object $(X, \Phii) \in \mathcal{D}$, $X$ satisfies the fundamental relation.
\end{prop}
\betagin{proof}
This is immediate from the definition of an object $(X , \Phii) \in \mathcal{D}$.
\end{proof}
It remains to show that the category $\mathcal{D}$ inherits the structure of a triangulated category from $\mathcal{D} (\mathcal{G}ammahat)$. To do this, define the translation functor in $\mathcal{D}$ as follows: $T_{\mathcal{D}} (X, \Phii) = (T_{\mathcal{D} (\mathcal{G}ammahat)} X, \Phii [1])$, where $\Phii [1] = \{ \phi_{q} [1] \}$. The distinguished triangles in $\mathcal{D}$ are triples $( (X, \Phii), (Y, \Psi) , (Z, \varphi) )$ where $(X,Y,Z)$ is a distinguished triangle in $\mathcal{D} (\mathcal{G}ammahat)$.
\betagin{lemma}\label{l:cone}
\par\indent
\betagin{enumerate}
\item If $(X, \Phii), (Y, \Psi) \in \mathcal{D}$ and $F:X \to Y$ is a map of complexes satisfying Equation~\ref{e:morphism}, then $( Cone(F) , \varphi) \in \mathcal{D}$, where the maps $\varphi_{q}$ are those induced by $\phi_{q}, \psi_{q}$.
\item Any distinguished triangle in $\mathcal{D}$ is isomorphic to one of the form
$$ (X, \Phii) \; | \;ackrel{F}{\to} (Y, \Psi) \to (Cone(F), \varphi) \to X[1],$$
where $F$ is a map of complexes.
\end{enumerate}
\end{lemma}
\betagin{proof}
First note that any distinguished triangle in $\mathcal{D} (\mathcal{G}ammahat)$ is isomorphic to a triangle of the form $X \; | \;ackrel{F}{\to} Y \to Cone(F) \to X[1]$, where $F: X \to Y$ is a morphism of complexes (see \cite{gelfman} Chapter IV \S 2). So once we show the first statement, the second follows.
It remains to verify that if $F : X \to Y$ is a morphism of complexes between objects in $\mathcal{D}$ then $Z=Cone(F)$ comes with an identification $Z(\tau q) \to Cone(z_{q})$ where $z_{q}$ is the map $Z(q) \to \oplus_{q\to q^{\prime}} Z (q^{\prime})$. To see this note that:
\betagin{align*}
&Cone^{k} (z_{q}) = Cone^{k} (Z(q) \to \oplus_{q \to q^{\prime}} Z(q^{\prime})) \\
&= Z^{k+1}(q) \bigoplus (\oplus_{q\to q^{\prime}} Z^{k}(q^{\prime}) )\\
&= Cone^{k+1} (F(q)) \bigoplus (\oplus_{q\to q^{\prime}} Cone^{k} (F(q^{\prime})))\\
&= \big\{ X^{k+2} (q) \oplus Y^{k+1} (q) \big\} \bigoplus \big\{ \oplus_{q\to q^{\prime}} ( X^{k+1} (q^{\prime}) \oplus Y^{k} (q^{\prime}) )\big\} \\
&= \big\{ X^{k+2} (q) \oplus ( \oplus_{q\to q^{\prime}} X^{k+1} (q^{\prime})) \big\} \bigoplus \big\{ Y^{k+1} (q) \oplus ( \oplus_{q\to q^{\prime}} Y^{k} (q^{\prime}))\big\} \\
&= Cone^{k+1} ( X(q) \to \oplus_{q\to q^{\prime}} X(q^{\prime})) \bigoplus Cone^{k} ( Y(q) \to \oplus_{q\to q^{\prime}} Y(q^{\prime})) \\
&\simeq X^{k+1} (\tau q) \bigoplus Y^{k} (\tau q) \text{ \ \ (using the isomorphisms } \phi_{q}, \psi_{q}) \\
&=Cone^{k} (F(\tau q))\\
&=Z^{k}(\tau q).
\end{align*}
Denote this isomorphism by $\varphi_{q}$. To check that this is an isomorphism of complexes, not just of graded vector spaces, it remains to check that the differentials match.
Let $\delta$ be the differential of $Cone(F)$ and let $d_{X}, d_{Y}$ be the differentials of $X,Y$. Denote by $D$ the differential of $Cone (X^{k +1} (q) \oplus Y^{k} (q) \; | \;ackrel{x_{q} + y_{q}}{\to} \bigoplus_{q\to q^{\prime}} (X^{k +1} (q^{\prime}) \oplus Y^{k} (q^{\prime}) ) )$.
\betagin{align*}
\delta (\tau q) &= ( d_{X}^{+1} (\tau q) + F^{+1} (\tau q) , d_{Y} (\tau q)) \\
&= ( d_{X}^{+2} (q) + x_{q}^{+1} + F^{+1} (q) , \sum_{q \to q^{\prime}} d_{X}^{+1} + F^{+1} (q^{\prime}) , d_{Y}^{+1} (q) + y_{q} , \sum_{q \to q^{\prime}} d_{Y} (q^{\prime})) \\
&= ( d_{X}^{+2} (q) + x_{q}^{+1} + F^{+1} (q) , d_{Y}^{+1} (q) + y_{q} , \sum_{q \to q^{\prime}} d_{X}^{+1} + F^{+1} (q^{\prime}) , \sum_{q \to q^{\prime}} d_{Y} (q^{\prime})) \\
&= ( \delta^{+1} (q) + x_{q}^{+1} + y_{q}^{+1} , \delta (q^{\prime})) \\
&= D
\end{align*}
Hence $(Cone(F), \varphi) \in \mathcal{D}$.
\end{proof}
\betagin{thm}\label{t:triangulated}
$\mathcal{D}$ is a triangulated category.
\end{thm}
Following \cite{gelfman}, the notation T1, T2, T3, T4 for the axioms of a triangulated category will be used for simplicity. The reader can refer to Chapter IV of \cite{gelfman} for details.
\betagin{proof}
Since triangles and morphisms have been defined above, only the axioms T1--T4 remain to be verified.
For T1 the only thing that needs to be checked is that a morphism $(X, \Phii) \to (Y, \Psi)$ can be completed to a triangle. First note that by construction, the objects and morphisms in $\mathcal{D}$ are objects and morphisms in the derived category $\mathcal{D} (\mathcal{G}ammahat)$ with extra structure. So to show that a morphism can be completed in $\mathcal{D}$ it is enough to show that the completion in $\mathcal{D} (\mathcal{G}ammahat)$ carries the required extra structure. By construction of the derived category any morphism can be completed to a distinguished triangle which is isomorphic to a triangle of the form
$$ X \; | \;ackrel{F}{\to} Y \to Cone(F) \to X[1]$$
where $F: X \to Y$ is a morphism of complexes (see \cite{gelfman} Chapter IV \S 2). If $(X, \Phii) ,(Y, \Psi) \in \mathcal{D}$ and $F$ satisfies Equation~\ref{e:morphism}, then Lemma~\ref{l:cone} implies that this completion lies in $\mathcal{D}$.
T2 follows by definition of triangles in $\mathcal{D}$.
For T3, consider triangles $X \to Y \to Z \to X[1]$, $X^{\prime} \to Y^{\prime} \to Z^{\prime} \to X^{\prime} [1]$ in $\mathcal{D}$, where notation has been simplified by replacing $(X, \Phii)$ with $X$, etc. Given $F,G$ in the diagram below, we need to verify that there exists a morphism $H$ in $\mathcal{D}$ making the diagram commute.
$$\xymatrix{
X \ar[r]^{u} \ar[d]_{F} & Y \ar[r]^{v} \ar[d]_{G} & Z \ar[r]^{w} \ar[d]_{H} & X[1] \ar[d]_{F[1]} \\
X^{\prime} \ar[r]^{u^{\prime}} & Y^{\prime} \ar[r]^{v^{\prime}} & Z^{\prime} \ar[r]^{w^{\prime}} & X^{\prime}[1]\\
}$$
Take roof diagrams $X \; | \;ackrel{r}{\leftarrow} X^{\prime \prime} \; | \;ackrel{f}{\to} X^{\prime}$, $Y \; | \;ackrel{s}{\leftarrow} Y^{\prime \prime} \; | \;ackrel{g}{\to} Y^{\prime}$ representing $F,G$ respectively. Then there is a map $u^{\prime \prime} : X^{\prime \prime} \to Y^{\prime \prime}$, which can be completed to a triangle $X^{\prime \prime} \to Y^{\prime \prime} \to Z^{\prime \prime} \to X^{\prime \prime}[1]$.
Using Lemma~\ref{l:cone}, take $Z,Z^{\prime}, Z^{\prime \prime}$ to be $Cone (u), Cone (u^{\prime}), Cone (u^{\prime \prime})$ and $u, u^{\prime}, u^{\prime \prime}$ to be maps of complexes. Then we have the following commutative diagram of complexes:
$$\xymatrix{
X \ar[r]^{u} & Y \ar[r] & Cone (u) \\
X^{\prime \prime} \ar[r]^{u^{\prime \prime}} \ar[d]_{f} \ar[u]^{r} & Y^{\prime \prime} \ar[r] \ar[d]_{g} \ar[u]^{s} & Cone (u^{\prime \prime}) \ar[d]_{f[1] \oplus g} \ar[u]^{r[1] \oplus s} \\
X^{\prime} \ar[r]^{u^{\prime}} & Y^{\prime} \ar[r] & Cone (u^{\prime} )\\
}$$
Then the required morphism $Cone (u) \to Cone (u^{\prime})$ is represented by the third column of the commutative diagram, and since $f,g,r,s$ each satisfy Equation~\ref{e:morphism}, so do these maps. Hence the completion is in $\mathcal{D}$.
For T4 (Octahedron Axiom), again Lemma~\ref{l:cone} shows that by completing the diagram in $\mathcal{D} (\mathcal{G}ammahat)$, the completion lies in $\mathcal{D}$. To see this, suppose we are given the upper cap of an octahedron in $\mathcal{D}$, as pictured below. Here all objects and maps are in $\mathcal{D}$, and the top and bottom triangles are distinguished in $\mathcal{D}$.
$$\xymatrix{
X^{\prime} \ar[dd]_{[1]} \ar[dr]^{[1]} & & Z \ar[ll] \\
& Y \ar[dl] \ar[ur]^{v} & \\
Z^{\prime} \ar[rr]_{[1]} & & X \ar[ul]^{u} \ar[uu]_{u \circ v} \\
}$$
Note that by completing the lower cap of the octahedron in $\mathcal{D} (\mathcal{G}ammahat)$, we get the following diagram, where the outer morphisms are in $\mathcal{D}$, the left and right triangles are distinguished (though not necessarily in $\mathcal{D}$), and the other two triangles commute.
$$\xymatrix{
X^{\prime} \ar[dd]_{[1]} & & Z \ar[ll] \ar[dl] \\
& Y^{\prime} \ar[ul] \ar[dr]^{[1]} & \\
Z^{\prime} \ar[ur] \ar[rr]_{[1]} & & X \ar[uu]_{u \circ v} \\
}$$
Since $X \; | \;ackrel{u\circ v}{\to} Z \to Y^{\prime}$ is distinguished, $Y^{\prime}$ is isomorphic to $Cone(u\circ v)$, and since $u\circ v$ is a morphism in $\mathcal{D}$, Lemma~\ref{l:cone} implies $Y^{\prime} \in \mathcal{D}$, and that the morphisms in the right triangle are in $\mathcal{D}$. Similarly, the left triangle is also in $\mathcal{D}$. Hence the lower cap can be completed to an octahedron, where all objects and morphisms lie in $\mathcal{D}$.
\end{proof}
From now on, we will abuse notation and denote an object $(X, \Phii) \in \mathcal{D}$ simply by $X$ whenever possible.
\section{dg-Preprojective Algebra}\label{s:preproj}
In this section a graphical description of the ``derived preprojective algebra" is given. This is then related to the Koszul complex of the preprojective algebra. Later this will be used to construct projectives in the category $\mathbb{R}ep (\mathcal{G}ammahat)$ and to define indecomposable objects in the category $\mathcal{D}$. This algebra is known to experts; however, the presentation given here is not readily available in the literature.
To begin, consider the following algebra $A$.
\betagin{defi}\label{d:A}
Let $P$ be the path algebra of $\overline{\mathcal{G}amma}$. Let $A$ be the algebra obtained by adjoining to $P$ generators $\{ l_i \}_{i\in \mathcal{G}amma},$ with relations
$$
l_ie_j=e_j l_i=\delta_{ij} l_i.
$$
\end{defi}
Thus, $A$ is generated by the expressions of the form
\betagin{equation}\label{e:gen_of_A}
p_1l_{i_1}p_2\dots p_k l_{i_k}p_{k+1}
\end{equation}
where each $p_a$ is a path from $i_a$ to $i_{a-1}$.
The elements of $A$ are pictured as paths in $\mathcal{G}ammahat$ with jumps from $(i,n)$ to $(i,n+2)$ for each $l_{i}$ that appears.
Extend the grading of $P$ (see Equation~\ref{e:decomp}) to $A$ by letting the $l_i$ be elements of degree 2, so that $l_i\in A_{i,i;2}$. This gives a decomposition of $A$:
\betagin{equation}\label{e:A}
A=\bigoplus_{i,j\in \mathcal{G}amma \ , \ k\in \mathbb{Z}_+} A_{i,j;k}
\end{equation}
The interesting fact is that $A$ has another grading, which is given by the number of jumps (i.e. the number of $l_i$ which appear in an expression). This gives a further decomposition
of $A$:
\betagin{equation}\label{e:Agrading}
A=\bigoplus_{n\le 0}A^n=\bigoplus_{i,j\in \mathcal{G}amma , \ k\in \mathbb{Z}_+, \ n\le 0}
A^n_{i,j;k}
\end{equation}
where $A^{-n}$ is the subspace in $A$ generated by expressions of the
form \eqref{e:gen_of_A} with exactly $n$ $l_i$'s. In particular, $A_{i,j;k}^{-n}$ can be thought of as spanned by paths from $i$ to $j$, of length $k$, with $n$ jumps.
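Since paths with jumps form a basis of $A$, the dimensions of the graded pieces $A^{-n}_{i,j;k}$ can be computed by direct enumeration. This sketch (ours; the encoding of monomials as move sequences is our device) does so for $\mathcal{G}amma = A_{2}$:

```python
# Illustrative enumeration (not from the paper): monomials of A for
# Gamma = A_2 are walks in the double quiver 1 <-> 2 in which one may also
# "jump" (apply l_i, of degree 2) at the current vertex; -n counts jumps.
from collections import Counter

adj = {1: [2], 2: [1]}   # double quiver of A_2

def monomials(start, max_deg):
    # returns a Counter keyed by (end_vertex, degree, jumps)
    out = Counter()
    stack = [(start, 0, 0)]
    while stack:
        v, deg, jumps = stack.pop()
        out[(v, deg, jumps)] += 1
        if deg + 2 <= max_deg:
            stack.append((v, deg + 2, jumps + 1))   # jump l_v
        if deg < max_deg:
            for w in adj[v]:
                stack.append((w, deg + 1, jumps))    # edge v -> w
    return out

m = monomials(1, 4)
assert m[(2, 1, 0)] == 1   # dim A^{0}_{1,2;1}: the single edge 1 -> 2
assert m[(1, 2, 0)] == 1   # dim A^{0}_{1,1;2}: the round trip
assert m[(1, 2, 1)] == 1   # dim A^{-1}_{1,1;2}: the jump l_1 alone
assert m[(1, 4, 1)] == 3   # one jump at either endpoint or in the middle
```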
The graded vector space $A = \bigoplus A^{n}$ is made into a complex by setting
\betagin{equation}\label{e:d}
\betagin{aligned}
d_{-k}\colon A^{-k}&\to A^{-k+1}\\
p_1l_{i_1}p_2\dots p_k l_{i_k}p_{k+1}&\mapsto
\sum_{a=1}^{k} (-1)^{a+1} p_1l_{i_1}p_2\dots p_a\theta_{i_a}p_{a+1}\dots
p_k l_{i_k}p_{k+1}
\end{aligned}
\end{equation}
where $\thetaeta_{i} \in A_{i,i;2}$ is given by
\betagin{equation}\label{e:mesh}
\thetaeta_{i} =\sum_{j} \epsilon (e^{ij}) e^{ji} e^{ij} \in P_{i,i;2}
\end{equation}
where the sum is over all $j$ connected to $i$ in $\mathcal{G}amma$, and $e^{ij}$ denotes the oriented edge from $i$ to $j$ in $\overline{\mathcal{G}amma}$.
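For concreteness (our illustration, with $\mathcal{G}amma = A_{3}$, vertices $1 - 2 - 3$, and the sign function $\epsilon$ fixed as above), the mesh element at the middle vertex $i=2$ unwinds to

```latex
% Example: Gamma = A_3, i = 2, neighbours j = 1, 3
\thetaeta_{2} = \epsilon (e^{21})\, e^{12} e^{21} + \epsilon (e^{23})\, e^{32} e^{23} \in P_{2,2;2}.
```

Setting $\thetaeta_{2}=0$ is exactly the preprojective relation at the vertex $2$ in $\Pi (\mathcal{G}amma) = T_{R} (V)/J$.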
A routine calculation shows that this definition of $d$ and multiplication in $A$ make $A$ a dg-algebra.
This dg-algebra can be easily identified with the Koszul complex of the preprojective algebra $\Pi (\mathcal{G}amma)$. Recall the description $\Pi (\mathcal{G}amma) = T_{R} (V)/J$ given in Section~\ref{s:quivers}. The Koszul complex of $\Pi (\mathcal{G}amma)$ is given by $$K_{\bullet} = T_{R}(V\oplus L) = \bigoplus V^{n_{1}} \otimes L \otimes V^{n_{2}} \otimes L \otimes \cdots \otimes L \otimes V^{n_{j}}$$ where $V^{n} = V^{\otimes n}$. (For details see \cite{etginz}.) The differential $d$ is given by
\betagin{equation}\label{e:Kdiff}
d(v_{1}\otimes l_{1} \otimes v_{2} \otimes \cdots \otimes l_{j-1} \otimes v_{j} ) = \sum_{i} (-1)^{i} v_{1}\otimes l_{1} \otimes v_{2} \otimes \cdots \otimes v_{i} \otimes j(l_{i}) \otimes v_{i+1} \otimes \cdots \otimes l_{j-1} \otimes v_{j}
\end{equation}
where $v_{1}\otimes l_{1} \otimes v_{2} \otimes \cdots \otimes l_{j-1} \otimes v_{j} \in V^{n_{1}} \otimes L \otimes V^{n_{2}} \otimes L \otimes \cdots \otimes L \otimes V^{n_{j}}$.
To relate these two constructions, we simply identify an element $a_{k} \otimes a_{k-1} \otimes \cdots \otimes a_{1} \in V^{k}$ with the path $p=a_{k}a_{k-1} \cdots a_{1}$, and the element $j(l_{i})$ with $\thetaeta_{i}$. It is easily verified that this identification is compatible with differentials and multiplication, which gives an identification $K_{\bullet} \simeq A^{\bullet}$ as dg-algebras. In particular, this identification gives a combinatorial picture of the Koszul complex of the preprojective algebra.
\betagin{subsection}{The non-Dynkin case}
Consider the case where $\mathcal{G}amma$ is non-Dynkin.
It is known (see \cite{mv}) that the preprojective algebra of a non-Dynkin graph is Koszul, and that the Koszul complex $K_{\bullet}$ gives a dg-algebra resolution of $\Pi$. These results, together with the identification $K_{\bullet} \simeq A^{\bullet}$, give the following result.
\betagin{prop}
Suppose $\mathcal{G}amma$ is non-Dynkin. Then
$$H^{k} (A^{\bullet}) =\betagin{cases}
\Pi &\text{if} \ \ k=0 \\
0 &\text{if} \ \ k \mathfrak{n}eq 0
\end{cases}$$
so the complex $A^{\bullet}$ gives a dg-algebra resolution of the preprojective algebra $\Pi$.
\end{prop}
\end{subsection}
\section{Projective Representations of $\mathcal{G}ammahat$}\label{s:projreps}
Let $q\in \mathcal{G}ammahat$ be any vertex. Let $\Path^{k}(q,v)$ denote the space of ``paths with $k$ jumps" from $q$ to $v$. For $q=(i,n)$ and $v=(j,m)$, this space can be identified with the component $A_{i, j ; m-n}^{-k}$, defined by Equation~\ref{e:Agrading}.
\\
Define a representation $X_{q}^{k} \in \mathbb{R}ep (\mathcal{G}ammahat)$ by setting $X_{q}^{k} (v) = \Path^{k}(q,v)$. Composition of paths makes it a module over the path algebra $P$.
\betagin{prop}\label{p:XHom}
\par\indent
\betagin{enumerate}
\item For any $k$, and any vertex $q\in \mathcal{G}ammahat$, the representation $X_{q}^{k}$ is projective.
\item For any object $X\in \mathbb{R}ep (\mathcal{G}ammahat)$ we have $$\Hom_{\mathcal{G}ammahat} (X_{q}^{0}, X) = X(q).$$
\item For any object $X\in \mathbb{R}ep (\mathcal{G}ammahat)$ we have $$\Hom_{\mathcal{G}ammahat} (X_{q}^{k}, X) = \bigoplus_{v\in \mathcal{G}ammahat} \Hom_{\mathbb{C}} ( X^{k-1} (q,v) , X(\tau v)).$$
\end{enumerate}
\end{prop}
\betagin{proof}
\par\indent
\betagin{enumerate}
\item The space $X_{q}^{k}$ is freely generated over the path algebra $P$ by elements of the form $p=t_{k}p_{k-1} \cdots p_{2}t_{1}p_{1}$, where the $t_{i}$'s are jumps and the $p_{i}$'s are paths; being free, it is in particular projective.
\item For any $x\in X$ define $\phi_{x} : X_{q}^{0} \to X$ by $\phi_{x} (p) = p.x$. This gives the required isomorphism.
\item First the isomorphism $\Hom_{\mathcal{G}ammahat} (X^{k}_{q} , X) \simeq \bigoplus_{v\in \mathcal{G}ammahat} X(\tau v) \otimes (X^{k-1} (q,v))^{*} $ is established. This isomorphism is given by $$\phi \mapsto \bigoplus_{p\in X^{k-1}(q, v )} \phi (tp) \otimes p^{*}$$ with inverse $$x\otimes \psi_{p} \mapsto (p_{1} t p_{2} \; | \;ackrel {\phi_{x, \psi_{p}}} {\mapsto} p_{1} x \psi_{p} (p_{2})).$$ To see this, note that any element $\phi \in \Hom_{\mathcal{G}ammahat} (X_{q}^{k} , X)$ is determined by where it sends the generators $t_{i_{k}}p_{k-1} \cdots p_{2}t_{i_{1}}p_{1}$. So for each path $p:q\to v$ with $k-1$ jumps we need to assign an element $x\in X(\tau v)$, which is the value of $\phi (tp)$.
\\
To establish the desired isomorphism, use the standard identification $W \otimes V^{*} \simeq \Hom(V,W)$ to obtain $$\Hom_{\mathcal{G}ammahat} (X_{q}^{k}, X) = \bigoplus_{v\in \mathcal{G}ammahat} \Hom_{\mathbb{C}} ( X^{k-1} (q,v) , X(\tau v)).$$
\end{enumerate}
\end{proof}
\section{Indecomposable Objects in $\mathcal{D}$}\label{s:indobj}
In this section the graded components of the dg-algebra $A$ defined in Section~\ref{s:preproj} are used to define objects in $\mathcal{D}$. To do this, the following preliminary result is required.
\betagin{lemma}\label{l:determined}
Let ${\bf{h}}$ be a height function, and let $\mathcal{G}amma_{{\bf{h}}}$ be the corresponding slice.
\betagin{enumerate}
\item
An object $X \in \mathcal{D}$ is determined up to isomorphism by the collection $\{ X^{\bullet} (q) \}_{q\in \mathcal{G}amma_{{\bf{h}}}}$ and morphisms corresponding to edges in the slice $\mathcal{G}amma_{{\bf{h}}}$.
\item For any $X,Y \in \mathcal{D}$ a morphism $f\in \Hom_{\mathcal{D}} (X,Y)$ is determined by the collection $\{ f(q) \}_{q\in \mathcal{G}amma_{{\bf{h}}}}$.
\end{enumerate}
\end{lemma}
\betagin{proof}
\par\indent
\betagin{enumerate}
\item
Let $q\in \mathcal{G}amma_{{\bf{h}}}$ be a source. Recall that by definition of objects in $\mathcal{D}$, $X$ satisfies the fundamental relation and comes with a fixed isomorphism $X(\tau q) \; | \;ackrel{\sim}{\to} Cone (x_{q})$, where $x_{q}$ is the map $X(q) \to \bigoplus_{q \to q^{\prime}} X( q^{\prime} )$. Hence the complex $X^{\bullet} (\tau q)$ is determined by $X(q)$ and $X(p)$ for $q \to p$ in $\mathcal{G}ammahat$ and the morphisms corresponding to the edges joining them. (Note that the $X(p)$ are in $\{ X^{\bullet} (q) \}_{q\in \mathcal{G}amma_{{\bf{h}}}}$ since $q$ is a source.) Write $q=(i,n)$, so that $i$ is a source in the quiver $(\mathcal{G}amma , \Omega_{{\bf{h}}})$ determined by the height function ${\bf{h}}$. Apply the reflection $s_{i}$ and consider a source $q^{\prime} \in \mathcal{G}amma_{s_{i} {\bf{h}}}$. Repeating the argument above, and noting that $X(q^{\prime})$ is in the collection
$\{ X^{\bullet} (q) \}_{q\in \mathcal{G}amma_{{\bf{h}}}}$, and that $X^{\bullet} (p)$ for $q^{\prime} \to p$ is in the collection $\{ X^{\bullet} (q) \}_{q\in \mathcal{G}amma_{{\bf{h}}}} \cup \{ X^{\bullet} (\tau q) \}$, one sees that $X^{\bullet} (\tau q^{\prime} )$ is determined. Continuing in this way, it follows that for any $p = \tau^{k} q$ with $q \in \mathcal{G}amma_{{\bf{h}}}$ the complex $X^{\bullet} (p)$ is determined by the collection $\{ X^{\bullet} (q) \}_{q\in \mathcal{G}amma_{{\bf{h}}}}$.
A similar argument for $q \in \mathcal{G}amma_{{\bf{h}}}$ a sink can be repeated. This shows that for any $p = \tau^{-k} (q)$ with $q\in \mathcal{G}amma_{{\bf{h}}}$ the complex $X^{\bullet} (p)$ is determined by the collection $\{ X^{\bullet} (q) \}_{q\in \mathcal{G}amma_{{\bf{h}}}}$.
\item Suppose that $F(q) : X(q) \to Y(q)$ is given for all $q \in \mathcal{G}amma_{{\bf{h}}}$. Take $q\in \mathcal{G}amma_{{\bf{h}}}$ to be a source, so that for any edge $q \to q^{\prime}$ in $\mathcal{G}ammahat$ , $q^{\prime}$ belongs to the slice $\mathcal{G}amma_{{\bf{h}}}$. Then using the isomorphisms $X( \tau q) \; | \;ackrel{\sim}{\to} Cone (x_{q})$ and $Y( \tau q) \; | \;ackrel{\sim}{\to} Cone(y_{q})$, together with the functoriality of ``cone over an edge", the following diagram has a unique completion $Cone (F (\tau q))$ making it commutative, which extends $F$ to $\tau q$.
$$
\xymatrix{
X (\tau q) \ar[d] \ar[r]^{F (\tau q)} & Y ( \tau q) \ar[d] \\
Cone (x_{q} ) \ \ \ \ar[r]^{Cone( F( \tau q) ) } & \ \ \ Cone(y_{q}) \\
}$$
Continuing in this way (and using a similar argument for $q$ a sink) it is possible to extend $F$ to all vertices in $\mathcal{G}ammahat$.
\end{enumerate}
\end{proof}
Using the components of the dg-algebra $A$ defined in Section~\ref{s:preproj} define, for each vertex $q\in \mathcal{G}ammahat$, an object $X_{q}^{\bullet} \in \mathcal{D}$ as follows:
For $q=(i,n)$ and $v=(j,m)$ and $n \leq m$ set
$$X_{q}^{k}(v) = A_{i, j ; m-n}^{-k}$$ where $A_{i, j ; m-n}^{-k}$ is defined by Equation~\ref{e:Agrading}. For $n > m$, Lemma~\ref{l:determined} implies that this suffices to determine $X_{q}$ at all other vertices. It remains to check that this does, in fact, define an object in $\mathcal{D}$.
\betagin{prop}\label{p:XinD}
For $q \in \mathcal{G}ammahat$ there is a canonical isomorphism (up to the choice of the sign function $\epsilon$) $X_{q}(\tau v) \simeq \text{Cone} (X_{q}(v) \to \oplus_{v \to v^{\prime}} X_{q} (v^{\prime}))$, and hence $X_{q} \in \mathcal{D}$.
\end{prop}
\betagin{proof}
Let $v=(i,n) \in \mathcal{G}ammahat$. For any edge $e: v\to v^{\prime}$ in $\mathcal{G}ammahat$, denote by $\overline{e}$ the corresponding edge $v^{\prime} \to \tau v$. Define the map $\phi _{v} : Cone (x_{v}) \to X (\tau v)$ by
\betagin{equation}
\phi_{v} (x,y) = t_{i}x + \sum_{e: s(e)=v} \epsilon(e) \overline{e} y
\end{equation}
where $x \in X_{q}^{\bullet +1} (v)$ and $y \in \bigoplus_{e: v \to v^{\prime}} X_{q} (v^{\prime})$. Note that the choice of sign $\epsilon (e)$ is forced by requiring that this map agree with the differentials:
\betagin{align*}
\phi_{v} (d_{C} (x,y)) &= \phi_{v} (d_{X_{q}} x , (-1)^{k} x_{v}x + d_{X_{q}} y) \\
&= t_{i}d_{X_{q}} (x) + (-1)^{k} \sum_{e : s(e)=v} \epsilon (e) \overline{e} x_{v} (x) + \sum_{e : s(e)=v} \epsilon (e) \overline{e} d_{X_{q}}(y) \\
&= t_{i}d_{X_{q}} (x) + (-1)^{k} \sum_{e : s(e)=v} \epsilon (e) \overline{e} ex + \sum_{e : s(e)=v} \epsilon (e) \overline{e} d_{X_{q}}(y) \\
&= t_{i} d_{X_{q}} (x) + (-1)^{k} \thetaeta_{i}x + \sum_{e : s(e)=v} \epsilon (e) \overline{e} d_{X_{q}}(y) \\
&= d_{X_{q}} \Big( t_{i} x + \sum_{e : s(e)=v} \epsilon (e) \overline{e} y \Big) \\
&= d_{X_{q}} (\phi_{v} (x,y) )
\end{align*}
where $(x,y) \in Cone^{k} (x_{v})$ and $d_{C}$ denotes the differential on $Cone (x_{v})$.
Since paths with jumps form a basis, and any path $q \to \tau v$ with $k$ jumps is either a path $p:q \to v^{\prime}$ with $k$ jumps followed by the edge $\overline{e} : v^{\prime} \to \tau v$, or a path $p:q\to v$ with $k-1$ jumps followed by the jump $t_{i}$, the above map gives an isomorphism of complexes.
\end{proof}
\betagin{remark}
Although the object $X_{q}$ requires the choice of signs $\epsilon (e)$, different choices result in isomorphic objects. In particular, the category $\mathcal{D}$ does not depend on such a choice. The choice of $\epsilon$ amounts to choosing a representative of the isomorphism class of indecomposable object $[X_{q}]$ corresponding to $q\in \mathcal{G}ammahat$.
\end{remark}
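As a small worked example (ours, for illustration): take $\mathcal{G}amma = A_{2}$ with vertices $1,2$ and $q=(1,0)$. On the first levels of $\mathcal{G}ammahat$ the definition gives

```latex
% Gamma = A_2, q = (1,0); the complexes X_q(v) on the first levels
X_{q}((1,0)) = \langle e_{1} \rangle , \qquad
X_{q}((2,1)) = \langle e^{12} \rangle , \qquad
X_{q}((1,2)) = \big[ \langle l_{1} \rangle \to \langle e^{21}e^{12} \rangle \big],
% where the last differential sends l_1 to theta_1 = epsilon(e^{12}) e^{21} e^{12},
% an isomorphism, so X_q((1,2)) is acyclic.
```

This matches the fundamental relation: the map $X_{q}((1,0)) \to X_{q}((2,1))$, $e_{1} \mapsto e^{12}$, is an isomorphism, so its cone is contractible, in agreement with $X_{q}((1,2)) \simeq 0$ in the derived category.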
Alternatively, $X_{q}$ may be described as follows. Let $q=(i,n)$; for $p=(j,n)$ with $i\mathfrak{n}eq j$ set $X_{q} (p) = 0$. For $p=(j,n+1)$ with $n_{ij} =1$, so that $q\to p$ in $\mathcal{G}ammahat$, define $X_{q}^{\bullet} (p) := \Path_{\mathcal{G}ammahat} (q,p)$, meaning the complex with $\Path_{\mathcal{G}ammahat} (q,p)$ in degree 0 and $0$ in all other degrees. By Lemma~\ref{l:determined} this is sufficient to extend to all other vertices using the fundamental relation. It is clear from this definition that $X_{q}$ is indecomposable.
\section{Some results about $\Hom$ in $\mathcal{D}$}\label{s:homD}
In this section we give some results which will be useful in future sections.
\betagin{thm}\label{t:RHom}
\par\indent
\betagin{enumerate}
\item Let $Y\in \mathcal{D}$, and let $q \in \mathcal{G}ammahat$. Then there is an isomorphism $\mathbb{R}Hom (X_{q}, Y) = Y(q)$.
\item Let $q=(j,n)$ and $q^{\prime} = (i,m)$. Then $\mathbb{R}Hom (X_{q} , X_{q^{\prime}}) = A_{i,j;n-m}$.
\item $\Hom (X_{q} , X_{q^{\prime}}) = \Path_{\mathcal{G}ammahat} (q^{\prime} , q) / J$ where $J$ is the mesh ideal, generated by the mesh relations (see Equation~\ref{e:mesh}).
\item Let ${\bf{h}}$ be a height function, and $\mathcal{G}amma_{{\bf{h}}}$ the corresponding slice. If $q,q^{\prime} \in \mathcal{G}amma_{{\bf{h}}}$ then $\Ext^{i} (X_{q} , X_{q^{\prime}}) = 0$ for $i>0$.
\end{enumerate}
\end{thm}
\betagin{proof}
\par\indent
\betagin{enumerate}
\item Let $\mathcal{G}amma_{{\bf{h}}}$ be a slice through $q$. By Lemma~\ref{l:determined} Part 2, $\mathbb{R}Hom_{\mathcal{D}} (X_{q} , Y)$ is determined on the slice $\mathcal{G}amma_{{\bf{h}}}$. On the slice $\mathcal{G}amma_{{\bf{h}}}$ the object $X_{q}$ is concentrated in degree 0 so we can identify $\mathbb{R}Hom (X_{q} , Y)$ with $Y(q)$ by definition of $\mathbb{R}Hom$ and Proposition~\ref{p:XHom} Part 2.
\item By Part 1 we have $\mathbb{R}Hom (X_{q} , X_{q^{\prime}}) = X_{q^{\prime}} (q) = A_{i,j;n-m}$.
\item By Part 1 we have $$\Hom (X_q , X_{q^{\prime}}) = H^{0} (X_{q^{\prime}} (q)) = \Path (q^{\prime} ,q) /J.$$
\item By Part 1 we have that $\Ext^{k} (X_q , X_{q^{\prime}} ) = \Path^{k} (q^{\prime} , q) /J$. However if $q,q^{\prime} \in \mathcal{G}amma_{{\bf{h}}}$ then there are no paths with jumps $q^{\prime} \to q$, in other words the complex $X_{q^{\prime}} (q)$ is concentrated in degree 0.
\end{enumerate}
\end{proof}
\section{Equivalence of Categories}\label{s:equiv}
In this section, for every height function ${\bf{h}} : \mathcal{G}amma \to \mathbb{Z}$, an equivalence of triangulated categories $R \rho_{{\bf{h}}} :\mathcal{D} \to \mathcal{D}^{b} (\mathcal{G}amma, \Omega_{{\bf{h}}})$ is constructed and shown to be compatible with the reflection functors.
\betagin{remark}
Note that the equivalence of categories given by a height function here is between the category $\mathcal{D}$ and $\mathcal{D}^{b} ( \mathcal{G}amma , \Omega_{{\bf{h}}})$, whereas in the case of equivariant sheaves on $\mathbb{P}^{1}$, considered in \cite{kirillov}, the equivalence is between $\mathcal{D}$ and $\mathcal{D}^{b} ( \mathcal{G}amma , \Omega_{{\bf{h}}}^{op})$ and is given by constructing a tilting object. That could also be done here; however, it is not the approach taken.
\end{remark}
Recall that any height function ${\bf{h}}$ determines an orientation $\Omega_{{\bf{h}}}$, and that the corresponding slice $\mathcal{G}amma_{{\bf{h}}}$ is an embedding of the quiver $(\mathcal{G}amma , \Omega_{{\bf{h}}})$ in $\mathcal{G}ammahat$. Hence any representation of $\mathcal{G}ammahat$ gives a representation of $(\mathcal{G}amma , \Omega_{{\bf{h}}})$ by restriction to the slice, so there is a restriction functor $\rho_{{\bf{h}}} : \mathbb{R}ep(\mathcal{G}ammahat) \to \mathbb{R}ep (\mathcal{G}amma, \Omega_{{\bf{h}}})$ defined by
\betagin{equation}\label{e:rho}
\rho_{{\bf{h}}} (X) = \bigoplus_{q\in \mathcal{G}amma_{{\bf{h}}}} X(q).
\end{equation}
Notice that this functor is exact. Denote by $R \rho_{{\bf{h}}} :\mathcal{D} \to \mathcal{D}^{b} (\mathcal{G}amma , \Omega_{{\bf{h}}})$ the corresponding derived functor.
\betagin{thm}\label{t:equivalence}
Let ${\bf{h}}$ be a height function, and let $\mathcal{G}amma_{{\bf{h}}}$ be the corresponding slice. Then the functor $R \rho_{{\bf{h}}} :\mathcal{D} \to \mathcal{D}^{b} (\mathcal{G}amma , \Omega_{{\bf{h}}})$ is an equivalence of triangulated categories.
\end{thm}
\betagin{proof}
Note that a height function ${\bf{h}}$ gives a lifting of the quiver $(\mathcal{G}amma , \Omega_{{\bf{h}}} )$ to $\mathcal{G}ammahat$ and that the image of $( \mathcal{G}amma , \Omega_{{\bf{h}}} )$ is the slice $\mathcal{G}amma_{{\bf{h}}}$. Hence for any object $Y \in \mathcal{D}^{b} (\mathcal{G}amma , \Omega_{{\bf{h}}})$ define an object in $\mathcal{D}$ as follows.
For $i \in \mathcal{G}amma$ and $q=(i, {\bf{h}} (i)) \in \mathcal{G}ammahat$ define $X(q) = Y(i)$, and for each edge $e:q \to q^{\prime} \in \mathcal{G}amma_{{\bf{h}}}$ define maps $x_{e} : X(q) \to X(q^{\prime})$ by $x_{e} = y_{e}$, where $y_{e} : Y(i) \to Y(j)$ and $q^{\prime} = (j, {\bf{h}} (j))$. Then Part 1 of Lemma~\ref{l:determined} shows this determines an object $X \in \mathcal{D}$. Hence for every $Y \in \mathcal{D}^{b} (\mathcal{G}amma , \Omega_{{\bf{h}}} )$ there exists $X \in \mathcal{D}$ such that $R \rho_{{\bf{h}}} (X) = Y$, i.e. $R \rho_{{\bf{h}}}$ is essentially surjective. Note that Part 2 of Lemma~\ref{l:determined} implies that for any $X,Y \in \mathcal{D}$ there is an isomorphism $\Hom_{\mathcal{D}} (X,Y) \simeq \Hom_{\mathcal{D}^{b} (\mathcal{G}amma , \Omega_{{\bf{h}}})} (R \rho_{{\bf{h}}} X , R \rho_{{\bf{h}}} Y)$, i.e. $R \rho_{{\bf{h}}}$ is fully faithful. Together, this shows that $R \rho_{{\bf{h}}}$ is an equivalence, and since $R \rho_{{\bf{h}}}$ is the derived functor of an exact functor, it is a triangle functor.
\end{proof}
\betagin{cor}\label{c:ARquiver}
Let $\mathcal{G}amma$ be Dynkin. The Auslander-Reiten quiver of $\mathcal{D}$ is $\mathcal{G}ammahat^{op}$, so the objects $X_{q}$ form a complete list of indecomposable objects in $\mathcal{D}$.
\end{cor}
\betagin{proof}
By Theorem~\ref{t:equivalence} the Auslander-Reiten quiver of $\mathcal{D}$ is isomorphic to that of $\mathcal{D}^{b} (\mathcal{G}amma ,\Omega_{{\bf{h}}})$. It is well known (see \cite{happel} for example) that the Auslander-Reiten quiver of $\mathcal{D}^{b} (\mathcal{G}amma ,\Omega_{{\bf{h}}})$ is isomorphic to $\mathcal{G}ammahat^{op}$.
\end{proof}
\betagin{remark}
Usually the Auslander-Reiten quiver of $\mathcal{D}^{b} (\mathcal{G}amma , \Omega_{{\bf{h}}})$ is identified with $\mathcal{G}ammahat$ by identifying the projectives with a slice in $\mathcal{G}ammahat$ that gives the opposite orientation to $\Omega_{{\bf{h}}}$, and proceeding from there. For reasons that will become clear in Section~\ref{s:mesh}, we instead identify it with $\mathcal{G}ammahat^{op}$ by identifying the projectives with the slice $\mathcal{G}amma_{{\bf{h}}} \subset \mathcal{G}ammahat^{op}$.
\end{remark}
\betagin{cor}
The category $\mathcal{D}$ has Serre Duality:
$$\Hom_{\mathcal{D}} (X,Y) =( \Ext_{\mathcal{D}}^{1} (Y, \tau_{\mathcal{D}} X))^{*}$$
where $^{*}$ denotes the dual space and $\tau_{\mathcal{D}}$ is given by Equation~\ref{e:tauD}.
\end{cor}
\betagin{proof}
It is well known (see \cite{happel} Proposition 4.10, p.~42) that this relation holds in the category $\mathcal{D}^{b} (\mathcal{G}amma, \Omega_{{\bf{h}}})$; the equivalence of Theorem~\ref{t:equivalence} then transfers it to $\mathcal{D}$.
\end{proof}
The following theorem shows that the restriction functor is compatible with the reflection functors.
\betagin{thm}\label{t:reflecfunc} Let $i$ be a source (or sink) for the orientation $\Omega_{{\bf{h}}}$ and let $S_{i}^{\pm}$ denote the corresponding reflection functor. Then the following diagram is commutative.
$$\xymatrix{
& \mathcal{D}^{b} (\mathcal{G}amma ,\Omega_{{\bf{h}}}) \ar[dd]^{RS^{\pm}_{i}} \\
\mathcal{D} \ar[dr]_{R \rho_{s^{\pm}_{i} {\bf{h}}}} \ar[ur]^{R \rho_{{\bf{h}}}} & \\
& \mathcal{D}^{b} (\mathcal{G}amma ,\Omega_{s^{\pm}_{i} {\bf{h}}}) \\
}$$
\end{thm}
\betagin{proof}
Let $q\in \mathcal{G}amma_{{\bf{h}}}$ be a source. Let $X \in \mathcal{D}$, so that there is a fixed identification $X(\tau q) \simeq Cone(X(q) \to \oplus X(q^{\prime}))$. Restriction along the slice $\mathcal{G}amma_{s^{+}_{i} {\bf{h}}}$ gives $X(p)$ if $p\mathfrak{n}eq q$ and $X(\tau q)$ if $p=q$. The reflection functor is defined as $X(p)$ if $p\mathfrak{n}eq q$ and $Cone(X(q) \to \oplus X(q^{\prime}))$ if $p=q$, so the diagram commutes.
\end{proof}
\section{The Mesh Category $\mathcal{B}$}\label{s:mesh}
In the remainder of this paper, fix $\mathcal{G}amma$ to be Dynkin.
In this section the definition of the mesh category of a translation quiver $(Q, \tau)$ is recalled and then related to the category $\mathcal{D}$.
Let $(Q, \tau)$ be a translation quiver (see \cite{ars} Chapter VII for details). Define the set of indecomposable objects of the mesh category $\mathcal{B} (Q)$ to be the vertices of $Q$. Set $\Hom_{\mathcal{B}} (q,q^{\prime}) = \Path (q,q^{\prime}) / J$ where $J$ is the mesh ideal generated by the mesh relations $\sum_{s(e)=i} \epsilon (e) \overline{e} e$ (see Equation~\ref{e:mesh}).
Consider the mesh category of the translation quiver $(\mathcal{G}ammahat^{op} , \tau_{\mathcal{G}ammahat} )$ where $\tau_{\mathcal{G}ammahat} (i,n) = (i,n+2)$. For simplicity denote this by $\mathcal{B}$ and denote the translation by $\tau_{\mathcal{B}}$.
\betagin{remark}
Note that we consider the mesh category of $\mathcal{G}ammahat^{op}$ instead of $\mathcal{G}ammahat$ since the Auslander-Reiten quiver of $\mathcal{D}$ is $\mathcal{G}ammahat^{op}$.
\end{remark}
It is shown in \cite{bbk} (Section 6) that there are the following automorphisms in $\mathcal{B}$:
\betagin{enumerate}
\item A Nakayama automorphism $\mathfrak{n}u_{\mathcal{B}}$ which commutes with $\tau_{\mathcal{B}}$. (Here $\mathfrak{n}u = \tilde{\betata}^{-1}$ in the notation of \cite{bbk}.)
\item An automorphism $\mathfrak{g}amma_{\mathcal{B}}$ defined by $\mathfrak{g}amma_{\mathcal{B}} := \mathfrak{n}u_{\mathcal{B}} \tau_{\mathcal{B}}^{-1}$.
\end{enumerate}
These automorphisms satisfy:
\betagin{equation}\label{e:nugamma}
\betagin{aligned}
\mathfrak{n}u_{\mathcal{B}}^{2} &= \tau_{\mathcal{B}}^{-(h-2)} \\
\mathfrak{g}amma_{\mathcal{B}}^{2} &= \tau_{\mathcal{B}}^{-h}
\end{aligned}
\end{equation}
As before, for any $i\in \mathcal{G}amma$ define $\check{\imath}$ by $-\alphapha_{i}^{\Pi} = w_{0}^{\Pi} (\alphapha_{\check{\imath}}^{\Pi})$, where $w_{0}^{\Pi} \in W$ is the longest element and $\Pi$ is any set of simple roots.
In terms of $\mathcal{G}ammahat^{op}$ the maps $\mathfrak{n}u_{\mathcal{B}}$ and $\mathfrak{g}amma_{\mathcal{B}}$ are given by:
\betagin{equation}
\betagin{aligned}
\mathfrak{n}u_{\mathcal{B}} (i,n) &= (\check{\imath} \ ,n-h+2) \\
\mathfrak{g}amma_{\mathcal{B}} (i,n) &= (\check{\imath} \ ,n-h)
\end{aligned}
\end{equation}
\betagin{example}
For the graph $\mathcal{G}amma = A_{4}$, $\check{\imath} \ = 5-i$ and $h=5$ so $\mathfrak{n}u_{\mathcal{B}} (i,n) = (5-i , n-3)$ and $\mathfrak{g}amma_{\mathcal{B}} (i,n) = (5-i , n-5)$. The maps $\mathfrak{n}u_{\mathcal{B}}$ and $\mathfrak{g}amma_{\mathcal{B}}$ are shown in Figure~\ref{f:nugammaop}.
\betagin{figure}[ht]
\centering
\betagin{overpic}
{nugammaA4op}
\put (47,80){$\mathcal{G}amma_{h}$}
\put (47,30){$\mathfrak{n}u_{\mathcal{B}} \mathcal{G}amma_{h}$}
\put (47,13){$\mathfrak{g}amma_{\mathcal{B}} \mathcal{G}amma_{h}$}
\end{overpic}
\caption{The maps $\mathfrak{n}u_{\mathcal{B}}$ and $\mathfrak{g}amma_{\mathcal{B}}$ in the case $\mathcal{G}amma = A_{4}$. A slice $\mathcal{G}amma_{h}$ and its images under $\mathfrak{n}u_{\mathcal{B}}$ and $\mathfrak{g}amma_{\mathcal{B}}$ are shown in bold. Recall that we are considering the mesh category of $\mathcal{G}ammahat^{op}$ as mentioned above.}\label{f:nugammaop}
\end{figure}
\end{example}
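As a quick sanity check of Equation~\ref{e:nugamma} in the $A_{4}$ coordinates of this example, the relations can be verified mechanically. The following short script is an illustration of ours (not part of the source), with an ad hoc encoding of a vertex $(i,n)$ of $\mathcal{G}ammahat^{op}$ as a pair of integers.

```python
# Sanity check of nu^2 = tau^{-(h-2)} and gamma^2 = tau^{-h} in the A_4
# coordinates above: h = 5, i-check = 5 - i,
# nu(i, n) = (5 - i, n - 3), gamma(i, n) = (5 - i, n - 5), tau(i, n) = (i, n + 2).

h = 5

def nu(p):
    i, n = p
    return (5 - i, n - (h - 2))

def gamma(p):
    i, n = p
    return (5 - i, n - h)

def tau_pow(p, k):
    # tau(i, n) = (i, n + 2), so tau^k shifts the second coordinate by 2k
    i, n = p
    return (i, n + 2 * k)

for i in range(1, 5):
    for n in range(-10, 11):
        p = (i, n)
        assert nu(nu(p)) == tau_pow(p, -(h - 2))   # nu^2 = tau^{-(h-2)}
        assert gamma(gamma(p)) == tau_pow(p, -h)   # gamma^2 = tau^{-h}
        assert gamma(p) == tau_pow(nu(p), -1)      # gamma = nu tau^{-1}
print("relations hold for A_4")
```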
Recall that the Auslander-Reiten quiver of $\mathcal{D}$ is $\mathcal{G}ammahat^{op}$. This identification is given by $[X_{q}] \mapsto q$. In terms of arrows, by Theorem~\ref{t:RHom} Part 2, for each arrow $q \to q^{\prime}$ in $\mathcal{G}ammahat$ there is an arrow $q^{\prime} \to q$ in the Auslander-Reiten quiver.
In the category $\mathcal{D}$ define an automorphism $\tau_{\mathcal{D}}$ by
\betagin{equation}\label{e:tauD}
\tau_{\mathcal{D}} (X) (q^{\prime}) = X (\tau^{-1} q^{\prime}).
\end{equation}
Notice that $$X_{\tau q} (q^{\prime}) = \Path_{\mathcal{G}ammahat}^{\bullet} (\tau q , q^{\prime} ) \simeq \Path_{\mathcal{G}ammahat}^{\bullet} (q, \tau^{-1} q^{\prime} )= \tau_{\mathcal{D}} (X_{q}) (q^{\prime})$$ so that $\tau_{\mathcal{D}} X_{q} \simeq X_{\tau q}$ for the indecomposables $X_{q}$. In terms of the Auslander-Reiten quiver of $\mathcal{D}$, this identifies $\tau_{\mathcal{D}}$ with the translation $\tau$ on $\mathcal{G}ammahat^{op}$.
\betagin{thm}\label{t:mesh}
Let ${\bf{h}}$ be a height function.
\betagin{enumerate}
\item There are equivalences of additive categories, given by the following commutative diagram:
$$\xymatrix{
\mathcal{B} & \mathcal{D}^{b} (\mathcal{G}amma ,\Omega_{{\bf{h}}}) \ar[l]_{\Psi_{{\bf{h}}}} \\
& \mathcal{D} \ar[u]_{R \rho_{{\bf{h}}}} \ar[ul]^{\psi} \\
}$$
\item Under these equivalences the automorphism $\mathfrak{n}u_{\mathcal{B}}$ gives a Nakayama automorphism $\mathfrak{n}u_{\mathcal{D}}$ on $\mathcal{D}$ and is identified with the Nakayama automorphism $\mathfrak{n}u$ in $\mathcal{D}^{b} (\mathcal{G}amma, \Omega_{{\bf{h}}})$.
\item The map $\tau_{\mathcal{B}}$ can be identified with the Auslander-Reiten translation in $\mathcal{D}^{b} (\mathcal{G}amma ,\Omega_{{\bf{h}}})$, and with $\tau_{\mathcal{D}}$ in $\mathcal{D}$.
\item The map $\mathfrak{g}amma_{\mathcal{B}}$ can be identified with $T$ in $\mathcal{D}^{b} (\mathcal{G}amma, \Omega_{{\bf{h}}})$, and with $T_{\mathcal{D}}$ in $\mathcal{D}$. Hence we can impose a triangulated structure on $\mathcal{B}$ making the equivalences in (1) triangulated equivalences.
\item In $\mathcal{D}$ the relation $T^{2} = \tau_{\mathcal{D}}^{-h}$ holds and hence the objects $X_{q}$ satisfy the relation $X_{q}^{\bullet +2} \simeq X_{\tau^{-h} q}^{\bullet}$.
\end{enumerate}
\end{thm}
\betagin{proof}
\par\indent
\betagin{enumerate}
\item The equivalence $\rho_{{\bf{h}}}$ is from Theorem~\ref{t:equivalence}. \\
The equivalence $\Psi_{{\bf{h}}}$ is the map which is given $P_{i} \mapsto (i, {\bf{h}} (i))$ on projectives, so that the projectives in $\mathbb{R}ep (\mathcal{G}amma , \Omega_{{\bf{h}}})$ map to the slice $\mathcal{G}amma_{{\bf{h}}} \subset \mathcal{G}ammahat^{op}$. (Note that in $\mathcal{G}ammahat^{op}$ the arrows are reversed, so this agrees with the usual identification of projectives with a slice giving the orientation opposite to $\Omega_{{\bf{h}}}$.) This is just the identification of $\Ind (\mathcal{D}^{b}(\mathcal{G}amma, \Omega_{{\bf{h}}}))$ with its Auslander-Reiten quiver. That this is an equivalence is well-known, see \cite{happel} for example.\\
The equivalence $\psi$ is given by $X_{q} \mapsto q$. By Corollary~\ref{c:ARquiver} the objects $X_{q}$ form a complete list of indecomposables, and since
\betagin{align*}
\Hom_{\mathcal{D}} (X_{q}, X_{q^{\prime}}) &= H^{0} (X_{q^{\prime}} (q)) = \Path_{\mathcal{G}ammahat} (q^{\prime}, q)/J = \Path_{\mathcal{G}ammahat^{op}} (q, q^{\prime})/J \\
&= \Hom_{\mathcal{B}} (q, q^{\prime})
\end{align*}
it follows that this is an equivalence.
\item Follows from (1), since $\Psi_{{\bf{h}}}$ is the identification of $\Ind (\mathcal{D}^{b} (\mathcal{G}amma ,\Omega_{{\bf{h}}}))$ with its Auslander-Reiten quiver.
\item Again follows from (1).
\item In $\mathcal{D}^{b} (\mathcal{G}amma ,\Omega_{{\bf{h}}})$ the Auslander-Reiten translation is defined by $\tau_{\mathcal{D}^{b}} := T^{-1} \mathfrak{n}u$, or equivalently $T = \mathfrak{n}u \tau^{-1}$, so the identification follows from (2) and (3) together with the definition $\mathfrak{g}amma_{\mathcal{B}} = \mathfrak{n}u_{\mathcal{B}} \tau_{\mathcal{B}}^{-1}$.
\item By (4), $\mathfrak{g}amma_{\mathcal{B}}$ is identified with $T_{\mathcal{D}}$, and by (2) and (3) there is an identification of Nakayama automorphisms and translations in $\mathcal{B}$ and $\mathcal{D}$. Then using Equation~\ref{e:nugamma} gives the result.
\end{enumerate}
\end{proof}
\section{Periodicity in the Dynkin Case}\label{s:period}
This section discusses periodicity of the ``dg-preprojective algebra" $A$ in the case where $\mathcal{G}amma$ is Dynkin.
Recall the decomposition given in Section 2:
\betagin{equation}
A=\bigoplus_{n\le 0}A^n=\bigoplus_{i,j\in \mathcal{G}amma , \ l\in \mathbb{Z}_+, n\le 0}
A^n_{i,j;l}
\end{equation}
Note that the differential in $A$ preserves the grading by path length, so this decomposition passes to homology:
$$H^{n} (A) = \bigoplus_{i,j;l} H^{n}(A_{i,j;l})$$
Now fix $q=(j,n+l)$ and $q^{\prime} = (i,n)$. Then by definition of $X_{q^{\prime}}$, and by Part 2 of Theorem~\ref{t:RHom}, $\mathbb{R}Hom (X_{q} , X_{q^{\prime}}) = X_{q^{\prime}}(q) = A_{i,j;l}$ so the component $A_{i,j;l}$ can be interpreted as the $\mathbb{R}Hom$ complex of the corresponding indecomposables.
Recall that in the case where $\mathcal{G}amma$ was not Dynkin the complex $A$ was a dg-resolution of the preprojective algebra $\Pi$. In particular, all homology was in degree 0. The decomposition of homology above and the identification $A_{i,j;l} \simeq \mathbb{R}Hom_{\mathcal{D}} (X_{q}, X_{q^{\prime}})$ makes it clear that this is not the case when $\mathcal{G}amma$ is Dynkin.
In the case where $\mathcal{G}amma$ is Dynkin there is the following periodicity result for the Koszul complex of the preprojective algebra and its homology. This is likely known to experts, but does not seem to be easily available in the literature.
\betagin{thm}\label{t:periodic}
There is a quasi-isomorphism of complexes
\betagin{equation}\label{e:periodicity}
A_{i,j;l}^{\bullet +2} \simeq A_{i,j;l+2h}^{\bullet}.
\end{equation}
In terms of the homology of the complex $A$ this gives:
$$H^{k+2}(A_{i,j;l}) = H^{k}(A_{i,j;l+2h})$$
\end{thm}
\betagin{proof}
By Theorem~\ref{t:mesh} Part 5 there is an identification $X_{q^{\prime}}^{\bullet +2} \simeq X_{\tau^{-h} q^{\prime}}^{\bullet}$ in $\mathcal{D}$, so in particular $X_{q^{\prime}}^{\bullet +2} (q) \simeq X_{\tau^{-h} q^{\prime}}^{\bullet}(q)$. For $q=(j,n+l)$ and $q^{\prime} =(i,n)$ there are identifications
\betagin{equation}\label{e:periodic}
\betagin{aligned}
& \mathbb{R}Hom(X_{q} , X_{q^{\prime}}) = X_{q^{\prime}} (q) = A_{i,j;l} \\
& \mathbb{R}Hom (X_{q} , X_{\tau^{-h} q^{\prime}}) = X_{\tau^{-h} q^{\prime}} (q) = A_{i,j;l+2h}.
\end{aligned}
\end{equation}
Combining these gives $A_{i,j;l}^{\bullet +2} \simeq A_{i,j;l+2h}^{\bullet}$.
\end{proof}
In terms of the category $\mathcal{D}$ this result can be interpreted as follows.
\betagin{cor} Let $X,Y \in \mathcal{D}$.
\betagin{enumerate}
\item $\mathbb{R}Hom^{\bullet} (X, \tau^{-h} Y) \simeq \mathbb{R}Hom^{\bullet +2} (X , Y)$
\item $\Ext^{i} (X, \tau^{-h} Y) \simeq \Ext^{i+2} (X, Y)$
\end{enumerate}
\end{cor}
\betagin{proof} First note that the objects $X_{q}$ form a complete list of indecomposables in $\mathcal{D}$ (in this section $\mathcal{G}amma$ is Dynkin), so it is enough to prove the result for these objects. Recalling that $\tau_{\mathcal{D}} X_{q} \simeq X_{\tau q}$, the first statement follows from Equation~\ref{e:periodic} in the proof of Theorem~\ref{t:periodic}. The second statement follows by taking homology.
\end{proof}
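For instance (a direct specialization, added for illustration), in the case $\mathcal{G}amma = A_{2}$ the Coxeter number is $h = 3$, so the corollary specializes to
$$\Ext^{i} (X, \tau^{-3} Y) \simeq \Ext^{i+2} (X, Y),$$
while Theorem~\ref{t:periodic} reads $H^{k+2}(A_{i,j;l}) = H^{k}(A_{i,j;l+6})$: the homology of the complex $A$ is $2$-periodic up to a shift of $6$ in path length.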
\section{The quotient category $\mathcal{D} /T^{2}$}\label{s:quotcat}
It was shown in \cite{px} that the category $\mathcal{D}^{b} (\mathcal{G}amma , \Omega)/ T^{2}$ is a triangulated category, and that the set $\Ind(\mathcal{G}amma , \Omega)$ of classes of indecomposable objects gives the corresponding root system. More general quotient categories were studied in \cite{keller}, where conditions for a quotient category to inherit a triangulated structure are given.
In this section we consider the quotient category $\mathcal{C} = \mathcal{D} / T^{2}$ and relate it to Theorem~\ref{t:main1}.
\betagin{prop}\label{p:quotient}
The quotient category $\mathcal{D} /T^{2}$ has the following properties:
\betagin{enumerate}
\item It is triangulated.
\item $\tau^{-h} = Id = \tau^{h}$
\item It has Auslander-Reiten quiver $\mathcal{G}ammahat / \tau^{h} = \mathcal{G}ammahat_{cyc}$.
\end{enumerate}
\end{prop}
\betagin{proof}
Part 1 follows from the main result of \cite{keller}. The other parts follow from Theorem~\ref{t:mesh}.
\end{proof}
Let $R$ be the root system corresponding to $\mathcal{G}amma$. Proposition~\ref{p:quotient}, the equivalences in Section~\ref{s:equiv} and the results of \cite{kt} show that there is a bijection between $R$ and the Auslander-Reiten quiver of the category $\mathcal{C} = \mathcal{D} / T^{2}$. The following theorem summarizes this bijection and completes the proof of Theorem~\ref{t:main1}.
\betagin{thm}
Let $\mathcal{G}amma$ be Dynkin with root system $R$ and let $\mathcal{K}$ be the Grothendieck group of $\mathcal{C}$. Set $\langle X,Y \rangle = \dim \mathbb{R}Hom (X,Y) = \dim \Hom (X,Y) - \dim \Ext^{1} (X,Y)$. The set $\Ind \subset \mathcal{K}$ of isomorphism classes of indecomposable objects in $\mathcal{C}$, with bilinear form given by $(X,Y) = \langle X,Y \rangle + \langle Y, X \rangle$, is isomorphic to $R$, and $\mathcal{K}$ is isomorphic to the root lattice. Moreover, the translation $\tau$ gives a Coxeter element for this root system, and $T_{\mathcal{C}}$ gives the longest element.
\end{thm}
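For instance (our illustration), for $\mathcal{G}amma = A_{2}$ we have $h = 3$, so $\mathcal{G}ammahat_{cyc} = \mathcal{G}ammahat / \tau^{h}$ has $2 \cdot 3 = 6$ vertices, matching the $6$ roots of $A_{2}$; the translation $\tau$ acts on this set as a Coxeter element, which indeed has order $h = 3$ in the Weyl group $S_{3}$.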
\betagin{thebibliography}{EFK}
\bibitem[ARS]{ars} M. Auslander, I. Reiten, S.O. Smalo, {\em Representation Theory of Artin Algebras}, Cambridge Studies in Advanced Mathematics, {\bf 36}, Cambridge University Press, Cambridge, 1995.
\bibitem[B\'ed]{bedard} R.~B\'edard,
{\em On commutation classes of reduced
words in Weyl groups}, European J. Combin. {\bf 20} (1999),
483--505.
\bibitem[B]{beilinson}
A.A. Beilinson, {\em Coherent Sheaves on $\mathbb{P}^{n}$ and Problems of Linear Algebra}, Funct. Anal. Appl., {\bf 12} (1979), 214-216.
\bibitem[BBD]{bbd}
A.A. Beilinson, J. Bernstein, P. Deligne {\em Faisceaux Pervers} in {\em Analysis and Topology on Singular Spaces, I (Luminy, 1981)}, Ast\'erisque, {\bf 100} (1982), pp. 5-171.
\bibitem[BGP]{bgp}
J.N. Bernstein, I.M. Gel'fand, V.A. Ponomarev {\em Coxeter Functors and Gabriel's Theorem}, (English Translation) Russian Math. Surveys, {\bf 28} (1973), no. 2, 17--32.
\bibitem[BBK]{bbk}
S.Brenner, M.C.R. Butler, A.D. King {\em Periodic Algebras which are Almost Koszul}, Algebras and Representation Theory, {\bf 5}, (2002), 331--367.
\bibitem[Bour]{bourbaki}
N. Bourbaki, {\em Lie Groups and Lie Algebras, Chapters 4-6},
Elements of Mathematics, Springer-Verlag, Berlin-Heidelberg-New
York, 2002.
\bibitem[C-B]{crawley-boevey}
W. Crawley-Boevey {\em Lectures on Representations of Quivers}, available at http://www.amsta.leeds.ac.uk/$\sim$pmtwc/quivlecs.pdf.
\bibitem[DX]{dengxiao}
B. Deng, J. Xiao, {\em On Ringel-Hall Algebras}, Fields Institute Communications, {\bf 40} (2004), 319--347.
\bibitem[EG]{etginz}
P. Etingof, V. Ginzburg {\em Noncommutative Complete Intersections and Matrix Integrals}, Pure and Applied Mathematics Quarterly, {\bf 3} (2007) no.1, pp. 107--151.
\bibitem[E-EU]{eteu}
P. Etingof, C.-H. Eu, {\em Koszulity and the Hilbert Series of Preprojective Algebras}, Mathematical Research Letters, {\bf 14} (2007) no.4, pp.589--596.
\bibitem[FLM]{flm}
I. Frenkel, J. Lepowsky, A. Meurman, {\em Vertex Operator Algebras and The Monster}, Academic Press, Boston, 1988.
\bibitem[G1]{gabriel}
P. Gabriel, {\em Unzerlegbare Darstellungen I}, Manuscripta Math., {\bf 6} (1972), 71--103.
\bibitem[G2]{gabriel2}
P. Gabriel, {\em Auslander-Reiten sequences and representation-finite algebras}, in {\em Representation Theory I (Ottawa 1979)}, Lecture Notes in Math. 831, Springer, Berlin, 1980, 1--71.
\bibitem[GM]{gelfman}
S.I. Gelfand, Yu. I. Manin, {\em Methods of Homological Algebra}, Springer-Verlag, Berlin-Heidelberg-New York, 1996.
\bibitem[Hap]{happel}
D. Happel, {\em Triangulated Categories in the Representation Theory of Finite Dimensional Algebras}, London Math. Soc. Lecture Note Ser., 119, Cambridge University Press, Cambridge, 1988.
\bibitem[J]{jantzen}
J.C. Jantzen, {\em Lectures on Quantum Groups}, American Mathematical Society, Graduate Studies in Mathematics, Vol. 6, Providence, 1996.
\bibitem[KST]{kst}
H. Kajiura, K. Saito, A. Takahashi, {\em Matrix factorization and Representations of Quivers II. Type ADE case}, Advances in Mathematics, {\bf 211} (2007), no. 1, 327--362.
\bibitem[Kel]{keller}
B. Keller, {\em On Triangulated Orbit Categories}, Doc. Math. {\bf 10} (2005), 551--581.
\bibitem[K]{kirillov}
A. Kirillov Jr, {\em McKay correspondence and equivariant sheaves on
$\mathbf{P}^1$}, Moscow Math. J. {\bf 6} (2006), pp. 505--529.
\bibitem[KO]{kirillov-ostrik}
A.Kirillov Jr., V. Ostrik,
{\em On a $q$-analogue of the McKay correspondence and the ADE
classification of $\widehat{\mathfrak{sl}}_2$ conformal field
theories}, Adv. in Math. {\bf 171} (2002), 183--227.
\bibitem[KT]{kt}
A. Kirillov Jr., J. Thind, {\em Coxeter Elements and Periodic Auslander-Reiten Quiver}, Journal of Algebra {\bf 323} (2010), 1241-1265.
\bibitem[Kl]{kleiner}
M. Kleiner, {\em The graded preprojective algebra of a quiver}, Bull. London Math. Soc. {\bf 36} (2004), no.1, 13--22.
\bibitem[Kos]{kostant} B.~Kostant,
{\em The Coxeter element and the branching law for the finite
subgroups of $SU(2)$}, The Coxeter legacy, 63--70, Amer. Math. Soc., Providence, RI, 2006.
\bibitem[Kos2]{kostant1} B. ~Kostant,
{\em The principal three-dimensional subgroups and the Betti numbers of a
complex simple Lie group}, Amer. J. Math. {\bf 81} (1959), 973--1032.
\bibitem[L]{lusztig}
G. Lusztig, {\em Canonical Bases Arising from Quantized Enveloping Algebras}, J. Amer. Math. Soc., {\bf 3} (1990), no. 2, 447--498.
\bibitem[L2]{lusztig2}
G. Lusztig, {\em On Quiver Varieties}, Advances in Mathematics, {\bf 136} (1998), no. 1, 141--182.
\bibitem[M-V]{mv}
R. Mart\'inez-Villa
{\em Applications of Koszul Algebras: The Preprojective Algebra}, in {\em Representation Theory of Algebras}, CMS Conference Proceedings 18, 1996, 487--504.
\bibitem[Os]{ostrik} V.~Ostrik,
{\em Module categories, weak Hopf algebras and modular
invariants},
Transform. Groups {\bf 8} (2003), no. 2, 177--206.
\bibitem[Oc]{ocneanu} A.~Ocneanu,
{\em Quantum subgroups, canonical bases and higher tensor structures},
talk at the workshop {\em Tensor Categories in Mathematics and
Physics}, Erwin Schr\"odinger Institute, Vienna, June 2004
\bibitem[PX1]{px}
L. Peng, J. Xiao, {\em Root Categories and Simple Lie Algebras}, J. Algebra {\bf 198} (1997), no. 1, 19--56.
\bibitem[PX2]{px2}
L. Peng, J. Xiao, {\em Triangulated Categories and Kac-Moody algebras}, Invent. Math. {\bf 140} (2000), no.3, 563--603.
\bibitem[R1]{ringel}
C. Ringel, {\em Hall Algebras and Quantum Groups}, Invent. Math {\bf 101} (1990), 583--592.
\bibitem[R2]{ringel2}
C. Ringel. {\em Hall Polynomials and Representation Finite Hereditary Algebras}, Advances in Mathematics {\bf 84} (1990), no. 2, 137--178.
\bibitem[Shi]{shi} J.-Y.~Shi,
{\em The enumeration of Coxeter elements},
J. Algebraic Combin. {\bf 6} (1997), no. 2, 161--171.
\bibitem[Sp]{springer} T.A. Springer,
{\em Regular elements of finite reflection groups}, Inv. Math.,
{\bf 25} (1974), 159--198.
\bibitem[Z]{zelikson} S.~Zelikson,
{\em Auslander-Reiten quivers and the Coxeter complex},
Algebr. Represent. Theory {\bf 8} (2005), 35--55.
\end{thebibliography}
\end{document} |
\betaetagin{document}
\title[Norm inflation with infinite loss of regularity for gIBQ]{Norm inflation with infinite loss of regularity for the generalized improved Boussinesq equation}
\author{Pierre de Roubin}
\maketitle
\betaetagin{abstract}
In this paper, we study the ill-posedness issue for the generalized improved Boussinesq equation. In particular, we prove that there is norm inflation with infinite loss of regularity at general initial data in $\jb{\nabla}^{-s}\betaig(L^2 \cap L^\infty\betaig)(\mathbb{R})$ for any $s < 0$. This result is sharp on the $L^2$-based Sobolev scale in view of the well-posedness in $L^2(\mathbb{R}) \cap L^\infty(\mathbb{R})$. We also show that the same result applies to the multi-dimensional generalized improved Boussinesq equation. Finally, we extend our norm inflation result to Fourier-Lebesgue, modulation and Wiener amalgam spaces.
\end{abstract}
\sigmaection{Introduction}
We consider the generalized improved Boussinesq equation (gIBq):
\betaetagin{equation}
\betaetagin{cases}\ellabel{imBq}
\partial_t^{2}u-\partial_x^2 u - \partial_t^2 \partial_x^2u = \partial_x^2 \elleft(f(u) \rhoight) \\
(u,\partial_t u)|_{t = 0} = (u_0,u_1),
\end{cases}
\qquad ( t, x) \in \mathbb{R} \times \mathcal{M},
\end{equation}
\noindent where $\mathcal{M} = \mathbb{R}$ or $\mathbb{T}$ with $\mathbb{T} = \mathbb{R} / \mathbb{Z}$ and $f(u) = u^k$ for an integer $k \gammaeq 2$. The equation \eqref{imBq} appears in a wide variety of physical problems. Makhankov~\cite{Mak} derived the improved Boussinesq equation, namely with $f(u) = u^2$, in order to study ion-sound wave propagation, and mentioned the case of the improved modified Boussinesq equation, with $f(u) = u^3$, as modeling nonlinear Alfv\'en waves. He also mentioned the possibility of using the linear term of \eqref{imBq} to describe waves propagating at right angles to the magnetic field. Besides, Clarkson-LeVeque-Saxton \cite{CLS} considered the cases $f(u) = u^p$, with $p = 3$ or $5$, as a model for the propagation of longitudinal deformation waves in an elastic rod. See also \cite{Tur} for further discussion on the physical aspects of \eqref{imBq}.
This equation has also attracted attention from a mathematical point of view. In particular, the well-posedness of this problem has been studied extensively, and Constantin-Molinet~\cite{CM} proved local well-posedness for \eqref{imBq} in $L^2(\mathbb{R}) \cap L^{\infty}(\mathbb{R})$ (and global well-posedness under additional conditions). See also \cite{CO, GL, Zhi}. On the other hand, it is known that \eqref{imBq} possesses some undesirable behaviour in negative Sobolev spaces. Following the work of Bourgain~\cite{Bou97}, it was shown in \cite{GL} that the solution map $\mathbf{P}hi \colon H^s(\mathbb{R}) \times H^s (\mathbb{R})\to C([-T,T], H^s(\mathbb{R}))$ of \eqref{imBq}, for nonlinearity $f(u) = u^k$, fails to be $C^k$ for $s< 0$. This result in particular implies that we cannot study the well-posedness of \eqref{imBq} via a contraction argument. Note, however, that their result does not imply failure of continuity for the data-to-solution map in negative Sobolev spaces. Our main goal in this article is to complete the picture of well-posedness for this problem by proving the discontinuity of the data-to-solution map of \eqref{imBq} in $\jb{\nabla}^{-s}\betaig(L^2 \cap L^\infty\betaig) (\mathcal{M})$ for any $s < 0$. For more clarity, let us denote $\mathcal{W}^{s, 2, \infty}(\mathcal{M}) \coloneqq W^{s, 2, \infty}(\mathcal{M}) \times W^{s, 2, \infty}(\mathcal{M})$ with
$$
W^{s, 2, \infty}(\mathcal{M}) = \jb{\nabla}^{-s}\betaig(L^2 \cap L^\infty\betaig) (\mathcal{M}) = \{ f, \jb{\nabla}^sf \in L^2 (\mathcal{M}) \cap L^\infty (\mathcal{M})\}.
$$
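To make the space $W^{s, 2, \infty}$ concrete, the following numerical sketch (ours, not part of the analysis below; the function names are ad hoc) implements the multiplier $\jb{\nabla}^s$ on the torus via the discrete Fourier transform and shows how, for $s < 0$, it damps a single high frequency by the factor $(1 + \xi^2)^{s/2}$.

```python
# Numerical sketch (not from the paper): the multiplier <nabla>^s on the
# torus T = R/Z, i.e. f -> F^{-1}( (1 + xi^2)^{s/2} F f ), via the FFT.
import numpy as np

def bessel_potential(f, s, period=1.0):
    """Apply <nabla>^s to samples f of a periodic function."""
    n = len(f)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=period / n)   # Fourier variable
    return np.fft.ifft((1 + xi**2) ** (s / 2) * np.fft.fft(f)).real

x = np.linspace(0, 1, 256, endpoint=False)
N = 2 * np.pi * 40                     # a single high frequency
f = np.cos(N * x)

# For s < 0 the high mode is damped by the factor (1 + N^2)^{s/2} < 1,
# which is why negative regularity makes high-frequency data "small".
g = bessel_potential(f, s=-1.0)
print(np.max(np.abs(g)), (1 + N**2) ** (-0.5))
```

The $\mathcal{W}^{s,2,\infty}$ norm of a pair of samples is then the maximum of the discrete $L^2$ and $L^\infty$ norms of the two outputs.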
\noindent We exhibit the following norm inflation with infinite loss of regularity behaviour:
\betaetagin{theorem}
\ellabel{THM:mainLinfty}
Let $k \gammaeq 2$, $\sigma \in \mathbb{R}$ and $s < 0$. Fix $(u_0 , u_1) \in \mathcal{W}^{s, 2, \infty} (\mathcal{M})$. Then, given any ${\mathbf v}arepsilon > 0$, there exists a solution $u_{\mathbf v}arepsilon$ to \eqref{imBq} with nonlinearity $f(u) = u^k$ on $\mathcal{M}$ and $t_{\mathbf v}arepsilon \in (0, {\mathbf v}arepsilon)$ such that
\betaetagin{equation}
\ellabel{EQ:NIwithILORLinfty}
\| (u_{\mathbf v}arepsilon (0), \partial_t u_{\mathbf v}arepsilon (0)) - (u_0 , u_1) \|_{ \mathcal{W}^{s, 2, \infty}} < {\mathbf v}arepsilon \quad \text{ and } \quad \| u_{\mathbf v}arepsilon (t_{\mathbf v}arepsilon) \|_{W^{\sigma, 2, \infty}} > {\mathbf v}arepsilon^{-1}.
\end{equation}
\end{theorem}
Note that, when $\sigma =s$ and $(u_0, u_1) = (0,0)$, the property \eqref{EQ:NIwithILORLinfty} is what is usually called {\it norm inflation}. Since we have \eqref{EQ:NIwithILORLinfty} for arbitrary initial data $(u_0, u_1) \in \mathcal{W}^{s, 2, \infty} (\mathcal{M})$, we say that we have norm inflation at {\it general initial data}. Besides, Theorem~\rhoef{THM:mainLinfty} also yields \eqref{EQ:NIwithILORLinfty} for any arbitrary $\sigma < s$, leading to the so-called {\it infinite loss of regularity}. Observe as well that, as a corollary, Theorem~\rhoef{THM:mainLinfty} gives the following discontinuity of the solution map.
\begin{corollary}
\label{COR:DiscSolMap}
Let $s<0$. Then, for any $T > 0$, the solution map $\Phi \colon (u_0, u_1) \in \mathcal{W}^{s, 2, \infty}(\mathcal{M}) \mapsto (u, \partial_t u) \in C([-T, T]; W^{s, 2, \infty}(\mathcal{M})) \times C([-T, T]; W^{s, 2, \infty} (\mathcal{M}))$ of the generalized improved Boussinesq equation \eqref{imBq} is discontinuous everywhere in $\mathcal{W}^{s, 2, \infty} (\mathcal{M})$.
\end{corollary}
In the rest of this paper, we in fact prove the following norm inflation result, at general initial data and with infinite loss of regularity, in negative Sobolev spaces:
\begin{theorem}
\label{THM:main}
Let $k \geq 2$, $\sigma \in \mathbb{R}$ and $s < 0$. Fix $(u_0 , u_1) \in \mathcal{H}^s (\mathcal{M}) \coloneqq H^s (\mathcal{M}) \times H^s (\mathcal{M})$. Then, given any $\varepsilon > 0$, there exist a solution $u_\varepsilon$ to \eqref{imBq} with nonlinearity $f(u) = u^k$ on $\mathcal{M}$ and $t_\varepsilon \in (0, \varepsilon)$ such that
\begin{equation}
\label{EQ:NIwithILOR}
\| (u_\varepsilon (0), \partial_t u_\varepsilon (0)) - (u_0 , u_1) \|_{\mathcal{H}^s} < \varepsilon \quad \text{ and } \quad \| u_\varepsilon (t_\varepsilon) \|_{H^\sigma} > \varepsilon^{-1}.
\end{equation}
\end{theorem}
Once Theorem \ref{THM:main} is proved, Theorem \ref{THM:mainLinfty} follows as a corollary. See Remark~\ref{REM:NIinWs2infty}. In fact, we first prove the usual norm inflation at general initial data, namely we prove \eqref{EQ:NIwithILOR} with $\sigma = s$. Then, the loss of regularity follows with a slight modification. See Remark~\ref{REM:ILOR}. Note also that, in a similar manner, Theorem \ref{THM:main} implies the discontinuity of the solution map in negative Sobolev spaces.
There are several ways to prove norm inflation results. Christ-Colliander-Tao \cite{CCT} first introduced norm inflation with a method based on a dispersionless ODE approach. See also \cite{BTz1, Xia}. On the other hand, Carles and his collaborators \cite{AC, BC, CK} proved norm inflation with infinite loss of regularity for nonlinear Schr\"odinger equations (NLS) by using geometric optics.
Our strategy is to follow a Fourier analytic argument that originated in the abstract work of Bejenaru-Tao \cite{BT} on the quadratic nonlinear Schr\"odinger equation. Their idea was to expand the solution into a power series and to exploit the {\it high-to-low energy transfer} in one of the terms to prove discontinuity of the solution map. This approach was later refined by Iwabuchi-Ogawa~\cite{IO}, who used it to upgrade the ill-posedness result in \cite{BT} to a norm inflation result. Kishimoto~\cite{Kis} then further developed these methods to prove norm inflation for the nonlinear Schr\"odinger equation. Meanwhile, Oh \cite{Oh} refined the argument of \cite{IO} in the context of the cubic NLS by introducing a way to index the power series by trees and to estimate each term separately. Forlano-Okamoto \cite{FO} afterwards proved norm inflation for nonlinear wave equations (NLW) in Sobolev spaces of negative regularity with an approach inspired by \cite{Oh}, and we use the same reasoning in our proof. For other papers with similar arguments, the interested reader may turn to \cite{BH2, COW, CP, OOT, Ok, WZ}. See also \cite{Chevyrev} for an implementation of this method in probabilistic settings.
One of the key ingredients of our proof is the use of the Wiener algebra $\mathcal{F}L^1 (\mathcal{M})$, which we define now. Given $\mathcal{M} = \mathbb{R}$ or $\mathbb{T}$, let $\widehat{\mathcal{M}}$ denote the Pontryagin dual of $\mathcal{M}$, i.e.
\begin{equation}
\label{DEF:PontryaginDual}
\widehat{\mathcal{M}} =
\begin{cases}
\mathbb{R} & \text{if} \quad \mathcal{M} = \mathbb{R}, \\
\mathbb{Z} & \text{if} \quad \mathcal{M} = \mathbb{T}.
\end{cases}
\end{equation}
\noindent Note that, when $\widehat{\mathcal{M}} = \mathbb{Z}$, we endow it with the counting measure. We can then define the following Fourier-Lebesgue spaces:
\begin{definition}[Fourier-Lebesgue spaces] \rm
\label{DEF:FLspaces}
For $s \in \mathbb{R}$ and $p \geq 1$, we define the Fourier-Lebesgue space $\mathcal{F}L^{s,p} (\mathcal{M})$ as the completion of the Schwartz class of functions $\mathcal{S} ( \mathcal{M})$ with respect to the norm
$$
\| f \|_{\mathcal{F}L^{s,p}(\mathcal{M})} = \| \jb{\xi}^s \widehat{f}(\xi) \|_{L^p_\xi (\widehat{\mathcal{M}})},
$$
\noindent where $\jb{\xi} \coloneqq (1 + \abs{\xi}^2 )^{\frac 12}$.
\end{definition}
\noindent In particular, $\mathcal{F}L^{0,1}(\mathcal{M})$ corresponds to the Wiener algebra, and its algebra property allows us to prove easily that \eqref{imBq} is analytically locally well-posed in $\mathcal{F}L^{0,1}(\mathcal{M})$.
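For the reader who wishes to experiment with these norms, the following Python sketch (a numerical illustration of Definition \ref{DEF:FLspaces} in the periodic case $\mathcal{M} = \mathbb{T}$; the helper name \texttt{fl\_norm} is ours) evaluates the $\mathcal{F}L^{s,p}$ norm of a trigonometric polynomial from its Fourier coefficients:

```python
import numpy as np

def fl_norm(coeffs, s, p):
    """FL^{s,p}(T) norm: || <k>^s fhat(k) ||_{l^p}, with <k> = (1 + k^2)^(1/2).
    `coeffs` maps integer frequencies k to Fourier coefficients fhat(k)."""
    ks = np.array(list(coeffs.keys()), dtype=float)
    vals = np.abs(np.array(list(coeffs.values()), dtype=float))
    weighted = (1.0 + ks ** 2) ** (s / 2.0) * vals
    if p == np.inf:
        return float(np.max(weighted))
    return float(np.sum(weighted ** p) ** (1.0 / p))

# f(x) = cos(Nx) has fhat(+-N) = 1/2: unit Wiener-algebra norm for every N,
# while the FL^{s,1} norm decays like N^s when s < 0.
N = 1000
f = {N: 0.5, -N: 0.5}
print(fl_norm(f, 0, 1))        # 1.0
print(fl_norm(f, -1, 1) * N)   # ~ 1.0
```

This makes visible the mechanism used later: highly oscillating data can be large in the Wiener algebra while being arbitrarily small in $\mathcal{F}L^{s,1}$ or $H^s$ for $s < 0$.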
Another major point of our proof is the following power series expansion of a solution $u$ to \eqref{imBq} with $(u, \partial_t u ) |_{t=0} = \vec{u}_0$:
$$
u = \sum^\infty_{j=0} \Xi_j (\vec{u}_0 ),
$$
\noindent where $\Xi_j (\vec{u}_0 )$ denotes a multilinear term in $\vec{u}_0$ of degree $(k-1)j + 1$ (for nonlinearity $f(u) = u^k$).
More precisely, these multilinear terms are exactly the successive terms of a Picard iteration expansion. See Section \ref{PowerSeries}. Then, by explicit computation we show that $\Xi_1(\vec{u}_0)$ grows rapidly in a short time, achieving the desired growth, while we control the other terms. See Section \ref{Sec3}.
\begin{remark} \rm
In \cite{WC, WC2}, Wang-Chen studied the {\it multi-dimensional} generalized improved Boussinesq equation
\begin{equation}
\begin{cases}\label{generalizedimBq}
\partial_t^{2}u-\Delta u - \partial_t^2 \Delta u = \Delta \left(f(u) \right) \\
(u,\partial_t u)|_{t = 0} = (u_0,u_1),
\end{cases}
\qquad ( t, x) \in [0, +\infty) \times \mathbb{R}^d,
\end{equation}
\noindent which is essentially the $d$-dimensional form of \eqref{imBq}, for $d \geq 1$. More precisely, they proved that this problem is locally well-posed in $W^{2,p}\cap L^{\infty}$, for any $1 \leq p \leq \infty$, and in $H^s$ for $s \geq \frac d2$. They also showed global well-posedness under additional conditions. We claim that norm inflation with infinite loss of regularity also holds for this problem, for any dimension $d \geq 1$ and $s < 0$. See Remark \ref{RK:ProofGenIBq} for more details.
\end{remark}
\begin{remark} \rm
\label{REM:NIforFLMW}
This method was used in \cite{BH} to extend the result of Forlano-Okamoto~\cite{FO} and prove infinite loss of regularity for the nonlinear wave equation in some Fourier-Lebesgue, modulation and Wiener amalgam spaces\footnote{We point out that, while Forlano-Okamoto did not state it, their argument already implies infinite loss of regularity.}. We claim that we can prove the same result for equation \eqref{imBq}. See Appendix \ref{appendixA} for more details.
\end{remark}
\section{Power series expansion}
\label{PowerSeries}
In this section, we prove that \eqref{imBq} with nonlinearity $f(u) = u^k$ is locally well-posed in the Wiener algebra and that its solution can be expanded into a power series. First, let us fix some notation. We define
$$
\vec{\mathcal{F}L}^{s,p}(\mathcal{M}) \coloneqq \mathcal{F}L^{s,p}(\mathcal{M}) \times \mathcal{F}L^{s,p}(\mathcal{M})
$$
\noindent and, for clarity, we write $\mathcal{F}L^{p}(\mathcal{M}) \coloneqq \mathcal{F}L^{0, p}(\mathcal{M})$ and $\vec{\mathcal{F}L}^{p}(\mathcal{M}) \coloneqq \vec{\mathcal{F}L}^{0,p}(\mathcal{M})$.
We denote by $S(t)$ the linear propagator:
\begin{equation}
\label{EQ:linearOp}
S(t)(u,v) = \cos\left(t P(D)\right) u + \frac{\sin\left(tP(D)\right)}{P(D)} v,
\end{equation}
\noindent with $P(D) \coloneqq \frac{\abs{\nabla}}{\jb{\nabla}}$. Namely, for any $\xi \in \widehat{\mathcal{M}}$,
$$
\mathcal{F} \left[P(D)f\right](\xi) = \frac{\abs{\xi}}{( 1 + \xi^2)^{1/2}} \widehat{f}(\xi) \eqqcolon \lambda(\xi) \widehat{f}(\xi).
$$
\noindent We also denote by $\mathcal{I}_k$ the multilinear Duhamel operator:
\begin{equation}
\label{Eq:DuhamelOp}
\mathcal{I}_k(u_1, \dots, u_k)(t) = \int^{t}_0 \sin\left((t-t')P(D)\right) P(D) \prod^k_{j = 1} u_j (t') \, \mathrm{d}t'.
\end{equation}
\noindent This gives the following Duhamel formulation of \eqref{imBq}:
\begin{equation}
\label{Eq:duhamelP}
u(t) = S(t) (u_0, u_1) + \mathcal{I}_k (u).
\end{equation}
\noindent Note that, in the formula above, we used the short-hand notation $\mathcal{I}_k (u) \coloneqq \mathcal{I}_k(u, \dots, u)$.
To index the power series we intend to construct, we need the following tree structure:
\begin{definition}[$k$-ary trees]
\label{DEF:Trees}
\rm
\begin{enumerate}
\item Given a set $\mathcal{T}$ with partial order $\leq$, we say that $b \in \mathcal{T}$ with $b \leq a$ and $b \ne a$ is a child of $a \in \mathcal{T}$ if $b\leq c \leq a$ implies either $c = a$ or $c = b$. In this case, we also say that $a$ is the parent of $b$.
\item A tree $\mathcal{T}$ is a finite partially ordered set satisfying the following properties\footnote{We do not identify two trees even if there is an order-preserving bijection between them.}:
\begin{itemize}
\item Let $a_1, a_2, a_3, a_4 \in \mathcal{T}$. If $a_4 \leq a_2 \leq a_1$ and $a_4 \leq a_3 \leq a_1$, then we have $a_2\leq a_3$ or $a_3 \leq a_2$,
\item A node $a\in \mathcal{T}$ is called terminal if it has no child. A non-terminal node $a\in \mathcal{T}$, for $\mathcal{T}$ a $k$-ary tree, is a node with exactly $k$ children,
\item There exists a maximal element $r \in \mathcal{T}$, called the root node, such that $a \leq r$ for all $a \in \mathcal{T}$,
\item $\mathcal{T}$ is the disjoint union of $\mathcal{T}^0$ and $\mathcal{T}^\infty$, where $\mathcal{T}^0$ and $\mathcal{T}^\infty$ denote the collections of non-terminal nodes and terminal nodes, respectively.
\end{itemize}
\item Let $\mathbf{T}(j)$ denote the set of all trees of $j$-th generation, namely trees with $j$ non-terminal nodes.
\end{enumerate}
\end{definition}
Note that a tree of $j$-th generation $\mathcal{T} \in \mathbf{T}(j)$ has $kj + 1$ nodes. Indeed, it has $j$ non-terminal nodes by definition, and an induction argument shows that it has $(k-1)j + 1$ terminal nodes. Besides, we have the following bound on the number of trees of $j$-th generation:
\begin{lemma}
\label{LEM:NumberOfTrees}
There exists a constant $C_0 > 0$, depending only on $k$, such that, for any integer $j \geq 0$, we have
\begin{equation}
\label{EQ:NumberOfTrees}
\abs{\mathbf{T} (j)} \leq \frac{C^j_0}{(1 + j)^2 } \leq C^j_0.
\end{equation}
\end{lemma}
The following proof is an adaptation of the one in \cite{Oh} for ternary trees; we include it for completeness.
\begin{proof}
We prove \eqref{EQ:NumberOfTrees} by induction. Note that the right inequality is immediate, so we only have to prove
$$
\abs{\mathbf{T} (j)} \leq \frac{C^j_0}{(1 + j)^2 }
$$
\noindent for any $j \geq 0$. Observe first that
$$
\abs{\mathbf{T} (0)} = \abs{\mathbf{T} (1)} =1.
$$
\noindent Then, fix $j \geq 2$. Assume that \eqref{EQ:NumberOfTrees} holds for any $0 \leq m \leq j-1$ and take $\mathcal{T} \in \mathbf{T}(j)$. By Definition \ref{DEF:Trees}, there exist $k$ trees $\mathcal{T}_1 \in \mathbf{T}(j_1), \dots, \mathcal{T}_k \in \mathbf{T}(j_k )$, with $j_1 + \cdots + j_k = j -1$, such that $\mathcal{T}$ consists of a root node whose children are the roots of $\mathcal{T}_1, \dots, \mathcal{T}_k$. Thus, applying the induction hypothesis we get
\begin{align*}
\abs{\mathbf{T} (j)} & = \sum_{\substack{j_1 + \cdots + j_k = j-1, \\ j_1, \dots, j_k \geq 0}} \abs{\mathbf{T} (j_1)} \times \cdots \times \abs{\mathbf{T} (j_k)} \\
& \leq \sum_{\substack{j_1 + \cdots + j_k = j-1, \\ j_1, \dots, j_k \geq 0}} \frac{C^{j_1}_0}{(1 + j_1)^2 } \times \cdots \times \frac{C^{j_k}_0}{(1 + j_k )^2 } \times \frac{(1+j)^2}{(1+j)^2} \\
& \leq k^2 \left( \sum_{j_2, \dots, j_k \geq 0} \frac{1}{(1 + j_2)^2 \times \cdots \times (1 + j_k )^2 } \right) \frac{C^{j-1}_0}{(1 + j)^2 },
\end{align*}
\noindent where we used $k \max \left( 1+ j_1 , \dots, 1 + j_k \right) \geq 1+j$ and rearranged the sum so that the maximum is attained at $j_1$. Then, choosing $C_0 = k^2 \sum_{j_2, \dots, j_k \geq 0} \frac{1}{(1 + j_2)^2 \cdots (1 + j_k )^2 } < \infty$ ends the induction.
\end{proof}
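As a purely illustrative sanity check, the root decomposition used in the proof above can be turned into a direct computation of $\abs{\mathbf{T}(j)}$; for $k=2$ it recovers the Catalan numbers:

```python
from itertools import product

def tree_counts(k, jmax):
    """|T(j)| for j = 0..jmax via the root decomposition
    c_j = sum_{j_1 + ... + j_k = j-1} c_{j_1} * ... * c_{j_k},  c_0 = 1."""
    c = [1]  # c_0 = 1: the single-node tree
    for j in range(1, jmax + 1):
        total = 0
        for parts in product(range(j), repeat=k):
            if sum(parts) == j - 1:
                term = 1
                for ji in parts:
                    term *= c[ji]
                total += term
        c.append(total)
    return c

print(tree_counts(2, 5))  # [1, 1, 2, 5, 14, 42]: Catalan numbers
print(tree_counts(3, 3))  # [1, 1, 3, 12]: Fuss-Catalan numbers
```

In particular, the geometric bound $\abs{\mathbf{T}(j)} \leq C_0^j$ of the lemma is consistent with the known asymptotics of these sequences.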
From now on, for any $\vec{\phi} \in \vec{\mathcal{F}L}^1$, we associate to any tree $\mathcal{T} \in \mathbf{T} (j)$, $j \geq 0$, a space-time distribution $\Psi (\mathcal{T}) (\vec{\phi}) \in \mathcal{D}' \left( (-T,T) \times \mathcal{M} \right)$ as follows:
\begin{itemize}
\item replace a non-terminal node by the Duhamel integral operator $\mathcal{I}_k$, with its $k$ arguments being the children of the node,
\item replace a terminal node by $S(t) \vec{\phi}$.
\end{itemize}
\noindent For any $j \geq 0$ and $\vec{\phi} \in \vec{\mathcal{F}L}^1$, we also define $\Xi_j$ as follows:
\begin{equation}
\label{DEF:Xi_j}
\Xi_j (\vec{\phi}) = \sum_{\mathcal{T} \in \mathbf{T} (j)} \Psi(\mathcal{T})( \vec{\phi} ).
\end{equation}
\noindent For instance, we have the two following terms:
$$
\Xi_0 (\vec{\phi}) = S(t) \vec{\phi}, \quad \text{ and } \quad \Xi_1 (\vec{\phi}) = \mathcal{I}_k ( S(t) \vec{\phi}, \dots, S(t) \vec{\phi}).
$$
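The role of these tree-indexed terms is perhaps most transparent in the dispersionless toy model $u(t) = u_0 + \int_0^t u(t')^k \, \mathrm{d}t'$, whose Picard iterates collect the same terms with $S(t)$ and $\mathcal{I}_k$ replaced by the identity and a plain time integral. The following Python sketch (an illustration only; the quadrature scheme and parameter values are our choices) shows the iterates converging to the exact solution for $k = 2$:

```python
import numpy as np

def picard(u0, k, t_final, iters, npts=4001):
    """Picard iteration for the toy integral equation u = u0 + int_0^t u^k,
    whose iterates collect the tree-indexed terms of the power series."""
    ts = np.linspace(0.0, t_final, npts)
    u = np.full(npts, float(u0))
    for _ in range(iters):
        f = u ** k
        # cumulative trapezoid rule for int_0^t f(t') dt'
        increments = 0.5 * (f[1:] + f[:-1]) * np.diff(ts)
        u = u0 + np.concatenate(([0.0], np.cumsum(increments)))
    return u[-1]

# For k = 2 the exact solution of u' = u^2, u(0) = u0, is u0 / (1 - u0 t).
print(picard(1.0, 2, 0.5, 40))  # ~ 2.0
```

Just as in the toy model, convergence of the expansion requires smallness of the time interval relative to the size of the data, which is the content of Lemma \ref{LEM:ExistenceOfSolution} below.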
\noindent Let us now state some basic multilinear estimates that will be useful both for norm inflation and for local well-posedness in the Wiener algebra $\mathcal{F}L^1$.
\begin{lemma}
\label{LEM:TreesEstFL}
There exists $C > 0$ such that, for any $\vec{\phi} \in \vec{\mathcal{F}L}^1$, integer $j \geq 0$ and $0 < T \leq 1$, we have
\begin{equation}
\label{EQ:TreesEst1}
\big\| \Xi_j (\vec{\phi}) (T) \big\|_{\mathcal{F}L^1} \leq C^j T^{2j} \| \vec{\phi} \|^{(k-1)j+1}_{\vec{\mathcal{F}L}^1}.
\end{equation}
\noindent Moreover, if $j \geq 1$ and $\vec{\psi} \in \vec{\mathcal{F}L}^1 \cap \mathcal{H}^0 (\mathcal{M})$,
\begin{equation}
\label{EQ:TreesEstInf}
\left\| \Xi_j (\vec{\psi} ) (T) \right\|_{\mathcal{F}L^\infty} \leq C^j T^{2j} \| \vec{\psi} \|^{(k-1)j-1}_{\vec{\mathcal{F}L}^1} \| \vec{\psi} \|^{2}_{\mathcal{H}^0}.
\end{equation}
\end{lemma}
\begin{proof}
Let $\mathcal{T} \in \mathbf{T} (j)$. By the same tree structure argument as in Lemma \ref{LEM:NumberOfTrees}, there exist $k$ trees $\mathcal{T}_1 \in \mathbf{T}(j_1), \dots, \mathcal{T}_k \in\mathbf{T}(j_k)$, with $j_1 + \cdots + j_k = j-1$, such that the root nodes of $\mathcal{T}_1, \dots, \mathcal{T}_k$ are the children of the root node of $\mathcal{T}$. Thus, we can write
$$
\Psi(\mathcal{T}) (\vec{\phi}) = \mathcal{I}_k (\Psi(\mathcal{T}_1) (\vec{\phi}), \dots , \Psi(\mathcal{T}_k) (\vec{\phi}) ),
$$
\noindent and $\Psi(\mathcal{T}) (\vec{\phi})$ consists essentially of $j = \abs{\mathcal{T}^0}$ iterations of the Duhamel operator $\mathcal{I}_k$ with $(k-1)j + 1$ occurrences of the term $S(t)\vec{\phi}$ as arguments.
Meanwhile, we deduce from \eqref{EQ:linearOp} that $S(t)$ is bounded from $\vec{\mathcal{F}L}^1$ to $\mathcal{F}L^1$, uniformly for $0 \leq t \leq 1$, and, since $\abs{\sin y} \leq \abs{y}$ for any $y \in \mathbb{R}$ and $\lambda(\xi) \leq 1$ for any $\xi \in \widehat{\mathcal{M}}$, the algebra property of $\mathcal{F}L^1$ gives
\begin{equation}
\label{EQ:EstDuhamelOp}
\| \mathcal{I}_k [u_1, \dots, u_k] \|_{C_T \mathcal{F}L^1} \leq \int^T_0 (T-t') \, \| u_1 \cdots u_k \|_{C_T \mathcal{F}L^1} \, \mathrm{d}t' \leq \frac 12 T^2 \prod^k_{j=1} \| u_j \|_{C_T \mathcal{F}L^1}.
\end{equation}
\noindent Hence, \eqref{EQ:TreesEst1} follows from an induction argument and Lemma \ref{LEM:NumberOfTrees}. Besides, Young's inequality and a similar argument give \eqref{EQ:TreesEstInf}.
\end{proof}
Let us now use Lemma \ref{LEM:TreesEstFL} to prove local well-posedness of \eqref{imBq} in the Wiener algebra and to justify the power series expansion.
\begin{lemma}
\label{LEM:ExistenceOfSolution}
Let $M > 0$. Then, for any time $T$ such that $0 < T \ll \min(M^{-\frac{k-1}{2} }, 1)$ and any $\vec{u}_0 \in \vec{\mathcal{F}L}^1$ with $\| \vec{u}_0 \|_{\vec{\mathcal{F}L}^1} \leq M$,
\begin{enumerate}
\item there exists a unique solution $u \in C ([0,T]; \mathcal{F}L^1 (\mathcal{M}))$ to \eqref{imBq} satisfying $(u, \partial_t u)|_{t=0} = \vec{u}_0$.
\item Moreover, $u$ can be expressed as
\begin{equation}
\label{EQ:PowerSeriesSol}
u = \sum_{j = 0}^\infty \Xi_j (\vec{u}_0) = \sum_{j = 0}^\infty \sum_{\mathcal{T} \in \mathbf{T} (j)} \Psi(\mathcal{T})(\vec{u}_0).
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
Let us first prove that our problem is locally well-posed in $\mathcal{F}L^1$. We define the functional $\Gamma$ by
$$
\Gamma [u] (t) \coloneqq S(t)\vec{u}_0 + \mathcal{I}_k (u)(t)
$$
\noindent for any $t \in [0,T]$. Then, using the boundedness of $S(t)$ and \eqref{EQ:EstDuhamelOp}, we have
$$
\left\| \Gamma[u] \right\|_{C_T \mathcal{F}L^1} \leq \| \vec{u}_0 \|_{\vec{\mathcal{F}L}^1} + \frac 12 T^2 \| u \|_{C_T \mathcal{F}L^1}^k.
$$
\noindent Using the multilinearity of $\mathcal{I}_k$, we ensure, for $0 < T \leq 1$ such that $\frac 12 T^2 M^{k-1} \ll 1$, that $\Gamma$ is a strict contraction on the ball
$$
B_{2M} \coloneqq \{ v \in C([0,T], \mathcal{F}L^1 (\mathcal{M})) : \| v \|_{C_T \mathcal{F}L^1} \leq 2M\}.
$$
\noindent Then, the contraction mapping theorem and an a posteriori continuity argument prove local well-posedness. Let us move on to the power series expansion.
Fix $\varepsilon >0$ and choose $0 < T \leq 1$ such that $\frac 12 T^2 M^{k-1} \ll 1$. Then, by \eqref{EQ:TreesEst1}, the sum in \eqref{EQ:PowerSeriesSol} converges absolutely in $C([0,T], \mathcal{F}L^1 (\mathcal{M}))$. Let us denote
$$
U_J = \sum_{j = 0}^J \Xi_j (\vec{u}_0) \qquad \text{ and } \qquad U = \sum_{j = 0}^\infty \Xi_j (\vec{u}_0).
$$
\noindent There exists $J_1 \geq 0$ such that
\begin{equation}
\label{EQ:PfPowerSeries1}
\| U - U_J \|_{C_T \mathcal{F}L^1} < \frac{\varepsilon}{3}
\end{equation}
\noindent for any $J \geq J_1$. In particular, this implies that $U$ and $U_J$ belong to the ball $B_{2M}$. Then, by continuity of $\Gamma$ as a map from $B_{2M}$ into itself, \eqref{EQ:PfPowerSeries1} implies there exists $J_2 \geq 0$ such that, for any $J \geq J_2$,
\begin{equation}
\label{EQ:PfPowerSeries2}
\| \Gamma[U] - \Gamma[U_J] \|_{C_T \mathcal{F}L^1} < \frac{\varepsilon}{3}.
\end{equation}
\noindent All that is left now is to estimate $U_J - \Gamma[U_J]$. Fix an integer $J \geq 1$. Then, from the tree structure argument we already used in the proof of Lemma \ref{LEM:TreesEstFL}, we get
\begin{align*}
U_J - \Gamma[U_J] & = \sum_{j = 1}^J \Xi_j (\vec{u}_0) - \sum_{0 \leq j_1, \dots, j_k \leq J} \mathcal{I}_k \left( \Xi_{j_1}(\vec{u}_0) , \dots, \Xi_{j_k} (\vec{u}_0) \right) \\
& = - \sum^{kJ}_{l = J} \sum_{\substack{0 \leq j_1 , \dots, j_k \leq J, \\ j_1 + \cdots + j_k = l}} \mathcal{I}_k \left( \Xi_{j_1}(\vec{u}_0) , \dots, \Xi_{j_k} (\vec{u}_0) \right).
\end{align*}
\noindent Now, a crude estimation of the sums, along with \eqref{EQ:EstDuhamelOp} and \eqref{EQ:TreesEst1}, gives
\begin{align*}
\| U_J - \Gamma[U_J] \|_{C_T \mathcal{F}L^1} & \leq \frac 12 T^2 \sum^{kJ}_{l = J} \ \sum_{\substack{0 \leq j_1 , \dots, j_k \leq J, \\ j_1 + \cdots + j_k = l}} \ \prod^k_{m=1} \| \Xi_{j_m}(\vec{u}_0) \|_{C_T \mathcal{F}L^1} \\
& \leq \frac 12 T^2 \sum^{kJ}_{l = J} \ \sum_{\substack{0 \leq j_1 , \dots, j_k \leq J, \\ j_1 + \cdots + j_k = l}} \ \prod^k_{m= 1} C^{j_m} T^{2j_m} \| \vec{u}_0 \|^{(k-1)j_m + 1}_{\vec{\mathcal{F}L}^1} \\
& \leq \frac 12 T^2 J^k M^k \sum^{\infty}_{l = J} (C T^2 M^{k-1})^l.
\end{align*}
\noindent Since we assumed $0 < T \ll \min(M^{-\frac{k-1}{2} }, 1)$, the sum converges and the right-hand side is bounded, up to a constant, by $\frac 12 T^2 J^k M^k (C T^2 M^{k-1})^J$, which tends to $0$ as $J$ tends to infinity. Thus, there exists $J_3 \geq 1$ such that, for every $J \geq J_3$,
\begin{equation}
\label{EQ:PfPowerSeries3}
\| U_J - \Gamma[U_J] \|_{C_T \mathcal{F}L^1} < \frac{\varepsilon}{3}.
\end{equation}
\noindent Now, for any $J \geq \max(J_1, J_2, J_3)$, \eqref{EQ:PfPowerSeries1}, \eqref{EQ:PfPowerSeries2} and \eqref{EQ:PfPowerSeries3} imply
$$
\| U - \Gamma[U] \|_{C_T \mathcal{F}L^1} \leq \| U - U_J \|_{C_T \mathcal{F}L^1} + \| U_J - \Gamma[U_J] \|_{C_T \mathcal{F}L^1} + \| \Gamma[U] - \Gamma[U_J] \|_{C_T \mathcal{F}L^1} < \varepsilon.
$$
\noindent Therefore, $U$ is a fixed point of $\Gamma$, and this ends the proof by uniqueness.
\end{proof}
\section{Norm inflation for IBq}
\label{Sec3}
In this section, we present the proof of Theorem \ref{THM:main}. Our main goal is to prove the following proposition:
\begin{proposition}
\label{THM:main2}
Let $k \geq 2$ and $s < 0$. Fix $(u_0 , u_1) \in \mathcal{H}^s (\mathcal{M})$. Then, given any $\varepsilon > 0$, there exist a solution $u_\varepsilon$ to \eqref{imBq} with nonlinearity $f(u) = u^k$ on $\mathcal{M}$ and $t_\varepsilon \in (0, \varepsilon)$ such that
$$
\| (u_\varepsilon (0), \partial_t u_\varepsilon (0)) - (u_0 , u_1) \|_{\mathcal{H}^s} < \varepsilon \quad \text{ and } \quad \| u_\varepsilon (t_\varepsilon) \|_{H^s} > \varepsilon^{-1}.
$$
\end{proposition}
Indeed, once Proposition \ref{THM:main2} is proved, the proof of Theorem \ref{THM:main} follows in the same way, with a slight modification that we treat separately for more clarity. See Remark \ref{REM:ILOR}.
To do so, suppose first that we have proved the following proposition:
\begin{proposition}
\label{PROP:final2}
Let $k \geq 2$ and $s < 0$. Fix $(u_0, u_1) \in \mathcal{S} ( \mathcal{M}) \times \mathcal{S}(\mathcal{M})$. Then, for any $n \in \mathbb{N}$, there exist a solution $u_n$ to \eqref{imBq} with nonlinearity $f(u) = u^k$ and $t_n \in (0, \frac 1n )$ such that
\begin{equation}
\label{EQ:propFinal}
\| (u_n (0), \partial_t u_n (0)) - (u_0 , u_1) \|_{\mathcal{H}^s} < \frac 1n \quad \text{ and } \quad \| u_n (t_n ) \|_{H^s} > n.
\end{equation}
\end{proposition}
\noindent Then, let us fix $\varepsilon > 0$ and choose $n \in \mathbb{N}$ such that $n > \varepsilon^{-1}$. According to Proposition \ref{PROP:final2}, there exist a solution $u_n$ to \eqref{imBq} and a time $t_n \in (0, \frac 1n ) \subset (0, \varepsilon)$ such that
$$
\| (u_n (0), \partial_t u_n (0)) - (u_0 , u_1) \|_{\mathcal{H}^s} < \frac 1n < \varepsilon \quad \text{ and } \quad \| u_n (t_n ) \|_{H^s} > n > \varepsilon^{-1}.
$$
\noindent Therefore, Proposition \ref{THM:main2} follows from Proposition \ref{PROP:final2} and the density of $\mathcal{S}(\mathcal{M})$ in $H^s (\mathcal{M})$. Similarly, Theorem \ref{THM:main} follows from the following proposition:
\begin{proposition}
\label{PROP:final}
Let $k \geq 2$, $\sigma \in \mathbb{R}$ and $s < 0$. Fix $(u_0 , u_1) \in \mathcal{S} ( \mathcal{M}) \times \mathcal{S}(\mathcal{M})$. Then, given any $n \in \mathbb{N}$, there exist a solution $u_n$ to \eqref{imBq} with nonlinearity $f(u) = u^k$ on $\mathcal{M}$ and $t_n \in (0, \frac 1n )$ such that
$$
\| (u_n (0), \partial_t u_n (0)) - (u_0 , u_1) \|_{\mathcal{H}^s} < \frac 1n \quad \text{ and } \quad \| u_n (t_n) \|_{H^\sigma} > n.
$$
\end{proposition}
Consequently, the rest of this paper is devoted to the proofs of Propositions \ref{PROP:final2} and \ref{PROP:final}. In the following, we fix some $\vec{u}_0 \in \mathcal{S} ( \mathcal{M}) \times \mathcal{S}(\mathcal{M})$.
\subsection{Multilinear estimates}
In this subsection, we establish some multilinear estimates that are essential to our proof.
Given $n \in \mathbb{N}$, let us fix some $N = N(n) \gg 1$, $R = R(N) \gg 1$ and $ 1 \ll A = A(N) \ll N$ to be determined later. We define $\vec{\phi}_n \coloneqq (\phi_{n} , 0)$ by setting
\begin{equation}
\label{DEF:phi_n}
\widehat{\phi_n} = R \chi_\Omega,
\end{equation}
\noindent where
\begin{equation}
\label{DEF:Omega}
\Omega = \bigcup_{\eta \in \Sigma} (\eta + Q_A)
\end{equation}
\noindent with $Q_A = [-\frac A2 , \frac A2]$ and
\begin{equation}
\label{DEF:Sigma}
\Sigma = \{ -2N, -N, N, 2N\}.
\end{equation}
\noindent Note that the condition $A \ll N$ ensures that the union in \eqref{DEF:Omega} is disjoint. Besides, observe that \eqref{DEF:phi_n}, \eqref{DEF:Omega} and \eqref{DEF:Sigma} imply, for any $s \in \mathbb{R}$,
\begin{equation}
\label{EQ:EstPhi_n}
\| \vec{\phi}_n \|_{\vec{\mathcal{F}L}^1} \sim RA \quad \text{ and } \quad \| \vec{\phi}_n \|_{\mathcal{H}^s} \sim RN^s A^{1/2}.
\end{equation}
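Both equivalences in \eqref{EQ:EstPhi_n} are elementary to check numerically when $\mathcal{M} = \mathbb{T}$; the following Python sketch (our own illustration, with ad hoc parameter values) evaluates the two norms of $\widehat{\phi_n} = R\chi_\Omega$ on the integer lattice:

```python
import numpy as np

def phi_hat(N, A, R):
    """Fourier coefficients of phi_n on Z: value R on
    Omega = union of (eta + Q_A), eta in {-2N, -N, N, 2N}."""
    xi = np.arange(-3 * N, 3 * N + 1)
    mask = np.zeros(xi.shape)
    for eta in (-2 * N, -N, N, 2 * N):
        mask[np.abs(xi - eta) <= A // 2] = 1.0
    return xi, R * mask

N, A, R, s = 10_000, 50, 3.0, -0.75
xi, fhat = phi_hat(N, A, R)
fl1 = np.sum(np.abs(fhat))                                            # ~ R A (4 cubes)
hs = np.sqrt(np.sum((1.0 + xi.astype(float) ** 2) ** s * fhat ** 2))  # ~ R N^s A^(1/2)
print(fl1 / (R * A), hs / (R * N ** s * A ** 0.5))  # both ratios are O(1)
```

Both printed ratios stay of order one as $N$, $A$ and $R$ vary (with $A \ll N$), in agreement with \eqref{EQ:EstPhi_n}.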
\noindent Finally, we define $\vec{u}_{0,n} \coloneqq \vec{u}_0 + \vec{\phi}_n$. Suppose that $N$, $R$ and $A$ satisfy
\begin{equation}
\label{EQ:CondFL1u}
\| \vec{u}_0 \|_{\vec{\mathcal{F}L}^1} \ll RA.
\end{equation}
\noindent Then, for each $n \in \mathbb{N}$, provided
\begin{equation}
\label{EQ:CondOnT}
0 < T \ll \min( (RA)^{-\frac{k-1}{2}} , 1),
\end{equation}
\noindent Lemma \ref{LEM:ExistenceOfSolution} implies that there exists a unique solution $u_n \in C([0,T], \mathcal{F}L^1(\mathcal{M}))$ to \eqref{imBq} with $(u_n , \partial_t u_n) |_{t=0} = \vec{u}_{0,n}$, admitting the power series expansion:
\begin{equation}
\label{EQ:PowerSeriesSolRankn}
u_n = \sum_{j = 0}^\infty \Xi_j (\vec{u}_{0,n}) = \sum_{j = 0}^\infty \Xi_j (\vec{u}_{0} + \vec{\phi}_n) .
\end{equation}
\noindent The purpose of this subsection is then to estimate the terms of the power series on the right-hand side of \eqref{EQ:PowerSeriesSolRankn}. But first, let us recall the following lemma:
\begin{lemma}
\label{LEM:ConvolutionIneq}
Let $a,b \in \mathbb{R}$ and $A > 0$. Then we have
$$
C A \chi_{a + b + Q_A } (\xi) \leq \chi_{a + Q_A } \ast \chi_{b + Q_A } (\xi) \leq \widetilde{C} A \chi_{a + b + Q_{2A}}(\xi ),
$$
\noindent where $C, \widetilde{C} > 0$ are absolute constants.
\end{lemma}
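Lemma \ref{LEM:ConvolutionIneq} simply records that the convolution of two interval indicators is a tent function of height comparable to $A$, supported in $a + b + Q_{2A}$. A discrete check on $\mathbb{Z}$ (illustration only):

```python
import numpy as np

def conv_indicators(a, b, A, xi):
    """(chi_{a+Q_A} * chi_{b+Q_A})(xi) on Z, with Q_A = [-A/2, A/2]."""
    ns = np.arange(a - A // 2, a + A // 2 + 1)
    return int(np.sum(np.abs(xi - ns - b) <= A // 2))

A, a, b = 20, 100, -40
print(conv_indicators(a, b, A, a + b))           # 21: height ~ A at the center
print(conv_indicators(a, b, A, a + b + A // 2))  # 11: still >= A/2 on a + b + Q_A
print(conv_indicators(a, b, A, a + b + 2 * A))   # 0: vanishes outside a + b + Q_{2A}
```

This frequency-support bookkeeping is exactly what drives the high-to-low energy transfer: two blocks at frequencies $a$ and $b$ produce a block of comparable mass at $a + b$, which can be near the origin.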
The proof of the following lemma is essentially the same as in \cite[Lemma 3.2]{FO} and is included for completeness.
\begin{lemma}
\label{LEM:MultilinearEst}
For any $s <0$, $t \in [0, T]$ and $j \in \mathbb{N}$, the following estimates hold:
\begin{align}
\| \vec{u}_{0,n} - \vec{u}_0 \|_{\mathcal{H}^s} & \sim RN^s A^{1/2}, \label{EQ:MultilinearEst1} \\
\| \Xi_0 (\vec{u}_{0,n})(t) \|_{H^s} & \lesssim 1 + RA^{1/2} N^s, \label{EQ:MultilinearEst2} \\
\| \Xi_1 (\vec{u}_{0,n})(t) - \Xi_1 (\vec{\phi}_n) (t) \|_{H^s} & \lesssim t^2 R^{k-1}A^{k-1} \| \vec{u}_0 \|_{\mathcal{H}^0}, \label{EQ:MultilinearEst3} \\
\| \Xi_j (\vec{u}_{0,n})(t) \|_{H^s} & \lesssim C^j t^{2j} R^{(k-1)j}A^{(k-1)j} \left( \| \vec{u}_0 \|_{\mathcal{H}^0} + R g_s (A) \right), \label{EQ:MultilinearEst4}
\end{align}
\noindent where $g_s (A)$ is defined by
\begin{equation}
\label{DEF:gs}
g_s (A) \coloneqq
\begin{cases}
1 & \textup{if } s<-\frac{1}{2}, \\
\left( \log A \right)^{\frac{1}{2}}& \textup{if } s=-\frac{1}{2}, \\
A^{\frac{1}{2}+s} & \textup{if } s>-\frac{1}{2}.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
The proofs of \eqref{EQ:MultilinearEst1} and \eqref{EQ:MultilinearEst2} follow directly from $\vec{u}_{0,n} = \vec{u}_0 + \vec{\phi}_n$, \eqref{EQ:EstPhi_n}, the boundedness of $S(t)$ for $t \leq 1$ and the fact that $\vec{u}_0$ is fixed, which implies $\|\vec{u}_0\|_{\mathcal{H}^s} \lesssim 1$. Besides, the definition \eqref{DEF:Xi_j} of $\Xi_j$ and the multilinearity of $\mathcal{I}_k$ imply
$$
\Xi_1 (\vec{u}_{0,n})(t) - \Xi_1 (\vec{\phi}_n) (t) = \sum\limits_{(v_1, \dots, v_k) \in E} \mathcal{I}_k (v_1, \dots, v_k)(t),
$$
\noindent where $E$ is the subset of $\{ S(t)\vec{u}_0 , S(t)\vec{\phi}_n \}^k$ such that at least one of the entries is $S(t)\vec{u}_0$. Thus, since $s<0$, Young's inequality, \eqref{EQ:EstDuhamelOp} and the boundedness of $S(t)$ imply
\begin{align*}
\| \Xi_1 (\vec{u}_{0,n})(t) - \Xi_1 (\vec{\phi}_n) (t) \|_{H^s} & \lesssim t^2 \|\vec{u}_0 \|_{\mathcal{H}^0} \left( \| \vec{u}_0 \|^{k-1}_{\vec{\mathcal{F}L}^1} + \| \vec{\phi}_n \|^{k-1}_{\vec{\mathcal{F}L}^1} \right) \\
& \lesssim t^2 \|\vec{u}_0 \|_{\mathcal{H}^0} \left( \| \vec{u}_0 \|^{k-1}_{\vec{\mathcal{F}L}^1} + R^{k-1}A^{k-1} \right),
\end{align*}
\noindent which, combined with assumption \eqref{EQ:CondFL1u}, proves \eqref{EQ:MultilinearEst3}.
For the last inequality, we split $\Xi_j (\vec{u}_{0,n})(t)$ into two terms:
\begin{equation}
\label{EQ:SplitXi_j}
\Xi_j (\vec{u}_{0,n})(t) = \left( \Xi_j (\vec{u}_{0,n})(t) - \Xi_j (\vec{\phi}_{n})(t) \right) + \Xi_j (\vec{\phi}_{n})(t),
\end{equation}
\noindent which we estimate separately.
{\bf Part 1:} $\mathcal{F}\big(\Xi_j (\vec{\phi}_{n})(t)\big)$ essentially consists of $(k-1)j + 1$ convolutions of terms of the form $\mathcal{F}(S(t) \vec{\phi}_n)$. According to \eqref{EQ:linearOp} and \eqref{DEF:phi_n}, the support of each of these terms is contained within at most $4$ disjoint cubes of volume approximately $A$. Thus, Lemma \ref{LEM:ConvolutionIneq} and a counting argument show that the support of $ \mathcal{F}( \Xi_j (\vec{\phi}_{n})(t) )$ is contained within at most $4^{(k-1)j+1}$ cubes of volume approximately $A$. Therefore, \eqref{DEF:Xi_j} and Lemma \ref{LEM:NumberOfTrees} imply that there exist $c,C > 0$ such that
$$
|\operatorname{supp} \mathcal{F}( \Xi_j (\vec{\phi}_{n})(t) ) | \leq C^j A \leq c \, |C^j Q_A |.
$$
\noindent Since $s < 0$, $\jb{\xi}^s$ is decreasing in $\abs{\xi}$, and H\"older's inequality, \eqref{EQ:TreesEstInf} and the boundedness of $S(t)$ yield
\begin{align*}
\| \Xi_j (\vec{\phi}_{n})(t) \|_{H^s} & \leq \| \jb{\xi}^s \|_{L^2 (\operatorname{supp} \mathcal{F}( \Xi_j (\vec{\phi}_{n})(t) ) ) } \| \Xi_j (\vec{\phi}_{n})(t) \|_{\mathcal{F}L^\infty} \\
& \lesssim \| \jb{\xi}^s \|_{L^2 (c C^j Q_A) } C^j t^{2j} \| \vec{\phi}_n \|^{(k-1)j-1}_{\vec{\mathcal{F}L}^1} \| \vec{\phi}_n \|^2_{\mathcal{H}^0}.
\end{align*}
\noindent Since $\| \jb{\xi}^s \|_{L^2 (c C^j Q_A) } \lesssim g_s(A)$, with $g_s$ defined as in \eqref{DEF:gs}, we get from \eqref{EQ:EstPhi_n}
\begin{equation}
\label{EQ:EstXi_jPhi}
\| \Xi_j (\vec{\phi}_{n})(t) \|_{H^s} \lesssim C^j t^{2j} R^{(k-1)j}A^{(k-1)j} R g_s (A).
\end{equation}
{\betaf Part 2:} On the other hand, since $\mathcal{F}\betaig(\Xi_j ({\mathbf v}ecc{u}_{0})(t) - \Xi_j ({\mathbf v}ecc{\phi}_{n})(t)\betaig)$ is essentially a sum of terms made of $j$ integrals in time and $(k-1)j + 1$ convolutions of terms of the form $\mathcal{F}(v)$, with $v \in \{ S(t){\mathbf v}ecc{u}_0 , S(t){\mathbf v}ecc{\phi}_n \}$, a similar argument as for \eqref{EQ:MultilinearEst3} along with $s < 0$ yield
\betaetagin{align*}
\| \Xi_j ({\mathbf v}ecc{u}_{0,n})(t) - \Xi_j ({\mathbf v}ecc{\phi}_{n})(t) \|_{H^s} & \elleq \| \Xi_j ({\mathbf v}ecc{u}_{0,n})(t) - \Xi_j ({\mathbf v}ecc{\phi}_{n})(t) \|_{L^2} \\
& \ellesssim C^j t^{2j} \| {\mathbf v}ecc{u}_0 \|_{\mathcal{H}^0} \elleft( \| S(t){\mathbf v}ecc{u}_0\|_{\mathcal{F}L^1} + \| S(t){\mathbf v}ecc{\phi}_n \|_{\mathcal{F}L^1} \rhoight)^{(k-1)j}.
\end{align*}
\noindent Thus, the unitarity of $S(t)$, \eqref{EQ:EstPhi_n} and \eqref{EQ:CondFL1u} give
\betaetagin{equation}
\ellabel{EQ:EstXi_jRem}
\| \Xi_j ({\mathbf v}ecc{u}_{0,n})(t) - \Xi_j ({\mathbf v}ecc{\phi}_{n})(t) \|_{H^s} \ellesssim C^j t^{2j} \| {\mathbf v}ecc{u}_0 \|_{\mathcal{H}^0} (RA)^{(k-1)j}.
\end{equation}
\noindent Now, \eqref{EQ:MultilinearEst4} follows from the triangle inequality, \eqref{EQ:EstXi_jPhi} and \eqref{EQ:EstXi_jRem}.
\end{proof}
With the help of Lemma \rhoef{LEM:ConvolutionIneq}, we can also state the following crucial proposition. This proposition exploits the high-to-low energy transfer mentioned before to identify the first multilinear term in the Picard expansion as the one responsible for the instability in Proposition \rhoef{PROP:final2} and in Proposition \rhoef{PROP:final}.
\betaetagin{proposition}
\ellabel{PROP:EstSecondPicard}
Let $s < 0$, let ${\mathbf v}ecc{\phi}_n$ be defined as in \eqref{DEF:phi_n}, and let $T \elll 1$ and $A \elll N$. Then, there exists a constant $C > 0$ such that
\betaetagin{equation}
\ellabel{EQ:EstSecondPicard}
\| \Xi_1 ({\mathbf v}ecc{\phi}_n) (T) \|_{ H^s} \gammaes C^{k-1} R^k T^2 A^{k - \frac 12 + s}.
\end{equation}
\end{proposition}
\betaetagin{proof}
To simplify notation, we write
$$
\mathcal{L}ambda \coloneqq \elleft\{ (\xi_1, \dots, \xi_k) \in {\mathbf w}idehat{\mathcal{M}}^k \colon \sigmaum^k_{j=1} \xi_j = \xi \rhoight\} \quad \text{ and } \quad \mathrm{d}\xi_\mathcal{L}ambda \coloneqq \mathrm{d}\xi_1 \cdots \mathrm{d}\xi_{k-1}.
$$
Using \eqref{DEF:phi_n} and a product-to-sum formula, we have
\betaetagin{align*}
\mathcal{F}[ \Xi_1 ({\mathbf v}ecc{\phi}_n)] (T, \xi) = R^k \sigmaum\ellimits_{(\eta_1, \dots, \eta_k) \in \Sigma^k} & \int^T_0 \sigmain((T - t') \ellambda(\xi)) \ellambda(\xi) \\
& \times\int_\mathcal{L}ambda \prod^k_{j=1} \cos(t' \ellambda(\xi_j)) \chi_{\eta_j + Q_A} (\xi_j) \mathrm{d}\xi_\mathcal{L}ambda \mathrm{d}t'.
\end{align*}
\noindent We split the set $\Sigma^k$, and hence the sum, into two parts $\Sigma_1$ and $\Sigma_2$ where
$$
\Sigma_1 = \elleft\{ (\eta_1, \dots, \eta_k) \in \Sigma^k \colon \eta_1+ \cdots +\eta_k = 0 \rhoight\}
$$
\noindent and $\Sigma_2 = \Sigma^k \betaackslash \Sigma_1$. Note that, for any integer $k \gammaeq 2$, both $\Sigma_1$ and $\Sigma_2$ are non-empty. We then write
\betaetagin{equation}
\ellabel{EQ:ProofSecondPicardSplit}
\mathcal{F}[ \Xi_1 ({\mathbf v}ecc{\phi}_n)] (T, \xi) = R^k \elleft( I_1 (T, \xi) + I_2 (T,\xi) \rhoight)
\end{equation}
\noindent with, for $r = 1,2$,
$$
I_r (T, \xi) = \sigmaum\ellimits_{(\eta_1, \dots, \eta_k) \in \Sigma_r} \int^T_0 \sigmain((T - t') \ellambda(\xi)) \ellambda(\xi) \int_\mathcal{L}ambda \prod^k_{j=1} \cos(t' \ellambda(\xi_j)) \chi_{\eta_j + Q_A} (\xi_j) \mathrm{d}\xi_\mathcal{L}ambda \mathrm{d}t'.
$$
Since $T \elll 1$, $\ellambda(\xi) \elleq 1$, $\sigmain(x) \sigmaim x$ for $\abs{x} \elleq 1$ and $\cos(x) \gammaeq \frac 12$ for $\abs{x} \elll 1$, we have
\betaetagin{align*}
I_1 (T, \xi) & \gammaes \sigmaum\ellimits_{(\eta_1, \dots, \eta_k) \in \Sigma_1} \frac 12 T^2 \ellambda(\xi)^2 \int_\mathcal{L}ambda \frac{1}{2^k} \prod^k_{j=1} \chi_{\eta_j + Q_A}(\xi_j) \mathrm{d}\xi_\mathcal{L}ambda \\
& \gammaes \frac{1}{2^{k+1}} T^2 C^{k-1}_1 A^{k-1} \ellambda(\xi)^2 \chi_{Q_A} (\xi),
\end{align*}
\noindent where the second inequality comes from $\abs{\Sigma_1} \gammaeq 1$ and Lemma \rhoef{LEM:ConvolutionIneq}. Computing the $H^s$ norm, we get
\betaetagin{equation}
\ellabel{EQ:PicardSecondLowFreq}
\| I_1(T) \|_{H^s} \gammaes C^{k-1}_1 T^2 A^{k - \frac 12 + s}.
\end{equation}
Meanwhile, since $\sigmain(x) \sigmaim x$ for $\abs{x} < 1$, $\abs{ \cos(x)}\elleq 1$, and $\ellambda(\xi) \elleq 1$, we have
$$
\abs{I_2 (T, \xi)} \ellesssim \sigmaum\ellimits_{(\eta_1, \dots, \eta_k) \in \Sigma_2} \frac 12 T^2 \int_\mathcal{L}ambda \prod^k_{j=1} \chi_{\eta_j + Q_A}(\xi_j) \mathrm{d}\xi_\mathcal{L}ambda.
$$
\noindent Fix $(\eta_1 , \dots, \eta_k) \in \Sigma_2$. Since $\eta_1 + \cdots + \eta_k \neq 0$, \eqref{DEF:Sigma} implies there exists $m \in \mathbb{Z} \betaackslash \{ 0 \}$ such that
$$
\eta_1 + \cdots + \eta_k = m N.
$$
\noindent Lemma \rhoef{LEM:ConvolutionIneq} then implies
\betaetagin{align*}
\betaigg\| \int_\mathcal{L}ambda \prod^k_{j=1} \chi_{\eta_j + Q_A}(\xi_j) \mathrm{d}\xi_\mathcal{L}ambda \betaigg\|_{H^s} & \ellesssim C^{k-1}_1 A^{k-1} \| \chi_{ mN + Q_{kA}} (\xi)\|_{H^s} \\
& \ellesssim C^{k-1}_1 A^{k-\frac 12} N^s.
\end{align*}
\noindent Since $\abs{\Sigma_2} \elleq \abs{\Sigma^k} \elleq 4^k$, we get
\betaetagin{equation}
\ellabel{EQ:PicardSecondHighFreq}
\| I_2 (T) \|_{H^s} \ellesssim C^{k-1}_1 T^2 A^{k - \frac 12} N^s .
\end{equation}
\noindent Combining \eqref{EQ:PicardSecondLowFreq} and \eqref{EQ:PicardSecondHighFreq} with the reverse triangle inequality, we have
\betaetagin{align*}
\| \Xi_1 ({\mathbf v}ecc{\phi}_n) (T) \|_{ H^s} & \gammaeq R^k \betaig| \| I_1(T) \|_{H^s} - \| I_2 (T) \|_{H^s} \betaig| \\
& \gammaes R^k C^{k-1}_1 T^2 A^{k - \frac 12} \betaig( A^{s} - N^s \betaig)
\end{align*}
\noindent which, with the assumption $A \elll N$, proves our result.
\end{proof}
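For completeness, let us quantify the last step (a routine observation added here, not a new ingredient of the proof): since $s < 0$, the map $x \mapsto x^s$ is decreasing on $(0, \infty)$, so the assumption $A \elll N$ guarantees, say once $N \gammaeq 2^{-1/s} A$, that
$$
A^s - N^s \gammaeq A^s - \tfrac 12 A^s = \tfrac 12 A^s,
$$
\noindent and the stated lower bound follows with the implicit constant halved.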
\sigmaubsection{Proof of Proposition \rhoef{PROP:final2}}
In this subsection we give the proof of Proposition \rhoef{PROP:final2}. We claim that it suffices to show that, for a given $n \in \mathcal{N}B$, the following inequalities hold:
\betaetagin{align*}
\textup{(i)} & \quad RA^{\frac 12} N^s < \frac 1n, \\
\textup{(ii)} & \quad T^2 R^{k-1} A^{k-1} \elll 1, \\
\textup{(iii)} & \quad \| {\mathbf v}ecc{u}_0 \|_{\mathcal{H}^0} \elll R A^{\frac 12 + s} \text{ and } \| {\mathbf v}ecc{u}_0\|_{\mathcal{F}Lv^1} \elll RA, \\
\textup{(iv)} & \quad T^4 (RA)^{2(k-1)} R g_s (A) \elll T^2 R^k A^{k - \frac 12 + s}, \\
\textup{(v)} & \quad T^2 R^k A^{k - \frac 12 + s} \gammag n, \\
\textup{(vi)} & \quad A \elll N
\end{align*}
\noindent for some particular $R$, $T$ and $N$, all depending on $n$.
Let us show that conditions (i) through (vi) imply Proposition \rhoef{PROP:final2}. First, condition (i) along with \eqref{EQ:MultilinearEst1} verifies the first estimate in \eqref{EQ:propFinal}. Besides, condition (ii) and the second estimate in (iii), along with Lemma \rhoef{LEM:ExistenceOfSolution}, prove the existence and uniqueness of a solution $u_n$ in $C([0,T], \mathcal{F}L^1)$ with $(u_n , \partial_t u_n) |_{t=0} = {\mathbf v}ecc{u}_{0,n}$, as well as the convergence of the power series \eqref{EQ:PowerSeriesSolRankn}. Furthermore, these conditions, along with \eqref{EQ:MultilinearEst4}, yield
\betaetagin{align*}
\sigmaum^{\infty}_{j = 2} \| \Xi_j ({\mathbf v}ecc{u}_{0,n})(T) \|_{H^s} & \ellesssim T^4 (RA)^{2(k-1)} \elleft(\sigmaum^{\infty}_{j = 0} ( C T^2 R^{k-1}A^{k-1} )^j \rhoight) \elleft( \| {\mathbf v}ecc{u}_0 \|_{\mathcal{H}^0} + Rg_s (A) \rhoight) \\
& \ellesssim T^4 (RA)^{2(k-1)} R g_s (A).
\end{align*}
\noindent Then, using \eqref{EQ:PowerSeriesSolRankn}, Lemma \rhoef{LEM:MultilinearEst} and Proposition \rhoef{PROP:EstSecondPicard} (which is applicable by condition (vi)), conditions (i) through (v) give:
\betaetagin{align*}
\| u_n (T) \|_{H^s} & \gammaeq \|\Xi_1 ({\mathbf v}ecc{\phi}_{n})(T) \|_{H^s} \\
& \qquad - \betaigg\| \Xi_0 ({\mathbf v}ecc{u}_{0,n})(T) + \elleft( \Xi_1 ({\mathbf v}ecc{u}_{0,n})(T) - \Xi_1 ({\mathbf v}ecc{\phi}_{n})(T) \rhoight) + \sigmaum^{\infty}_{j = 2} \Xi_j ({\mathbf v}ecc{u}_{0,n})(T) \betaigg\|_{H^s} \\
& \gammaes R^k T^2 A^{k - \frac 12 + s} - 1 - RA^{1/2} N^s - T^2 R^{k-1}A^{k-1} \| {\mathbf v}ecc{u}_0 \|_{\mathcal{H}^0} - T^4 (RA)^{2(k-1)} R g_s (A) \\
& \sigmaim R^k T^2 A^{k - \frac 12 + s} \gammag n.
\end{align*}
Thus, this verifies the second estimate in \eqref{EQ:propFinal} at time $t_n \coloneqq T$. Finally, a suitable choice of $T$ in terms of $N = N(n)$ ensures $t_n \in (0, \frac 1n)$ for $N(n)$ sufficiently large; see the details below. This completes the proof of Proposition \rhoef{PROP:final2}.
Consequently, it only remains to verify the conditions (i) through (vi). To do so, we express $A$, $R$ and $T$ in terms of $N$. More precisely, let us choose
$$
A = 10, \quad R = N^{-s-\delta} \quad \text{ and } \quad T = N^{\frac{k-1}{2}(s + \frac \delta2)}
$$
\noindent where $\delta$ is sufficiently small, namely $0 < \delta < \min \betaig(1, -\frac{2}{k+1}s \betaig)$. Since $A$ is a constant, condition (vi) is trivially satisfied and, since ${\mathbf v}ecc{u}_0$ is fixed, condition (iii) reduces to
$$
R \gammag 1
$$
\noindent which is true from $-s - \delta > 0$, for $N$ sufficiently large. Besides, for $N$ sufficiently large, we get
\betaetagin{align*}
& RA^{\frac 12} N^s \sigmaim N^{-\delta} \elll \frac 1n \\
& T^2 R^{k-1} A^{k-1} \sigmaim N^{- \frac{k-1}{2} \delta} \elll 1 \\
& T^2 R^k A^{k - \frac 12 + s} \sigmaim N^{-s - \frac{k+1}{2}\delta} \gammag 1
\end{align*}
\noindent which prove conditions (i), (ii) and (v). Lastly, condition (iv) is equivalent to
$$
T^2 R^{k-1} A^{k - 2}g_s(A) \elll A^{-\frac 12 + s}
$$
\noindent which, in our setting, is equivalent to condition (ii) since $A$ is constant. Finally, since $\delta < -s$, the exponent of $N$ in $T$ is negative, so $T$ goes to $0$ as $N \to \infty$ and $T \in (0, \frac 1n )$ for $N$ sufficiently large. Taking
$$
N = n^{\frac 2 \delta}
$$
completes the proof since $\delta < 1$.
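As a numerical sanity check of the bookkeeping above (this snippet is ours and plays no role in the proof; the function name \texttt{check\_conditions} is our own), one can evaluate conditions (i), (ii) and (v) and the requirement $t_n \in (0, \frac 1n)$ for the choices $A = 10$, $R = N^{-s-\delta}$, $T = N^{\frac{k-1}{2}(s + \frac \delta2)}$ and $N = n^{\frac 2\delta}$:

```python
# Sanity check (ours, not part of the paper's argument): verify conditions
# (i), (ii), (v) and T in (0, 1/n) for the parameter choice of the proof.
def check_conditions(s, k, n, margin=10.0):
    delta = 0.5 * min(1.0, -2.0 * s / (k + 1))   # 0 < delta < min(1, -2s/(k+1))
    A = 10.0
    N = n ** (2.0 / delta)
    R = N ** (-s - delta)
    T = N ** ((k - 1) / 2.0 * (s + delta / 2.0))
    cond_i = R * A ** 0.5 * N ** s < 1.0 / n                       # condition (i)
    cond_ii = T ** 2 * R ** (k - 1) * A ** (k - 1) < 1.0 / margin  # condition (ii)
    cond_v = T ** 2 * R ** k * A ** (k - 0.5 + s) > margin * n     # condition (v)
    return cond_i and cond_ii and cond_v and 0.0 < T < 1.0 / n

assert check_conditions(s=-0.5, k=3, n=50)
assert check_conditions(s=-1.0, k=2, n=1000)
```

The exponent identities $RA^{1/2}N^s \sigmaim N^{-\delta}$, $T^2R^{k-1}A^{k-1} \sigmaim N^{-\frac{k-1}{2}\delta}$ and $T^2R^kA^{k-\frac 12+s} \sigmaim N^{-s-\frac{k+1}{2}\delta}$ are exactly what the script evaluates.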
\betaetagin{remark} \rhom
\ellabel{REM:ILOR}
In this remark, we use our previous arguments to prove infinite loss of regularity, namely Proposition \rhoef{PROP:final}. The idea is to reuse the same construction as before, which allowed us to choose the parameter $A$ to be constant. In what follows, we observe that the change of regularity between Proposition \rhoef{PROP:final2} and Proposition \rhoef{PROP:final} only appears through powers of $A$, so it has no real bearing on the argument.
Let $s < 0$, $\sigma \in \mathbb{R}$ and ${\mathbf v}ecc{u}_0 = ( u_0, u_1) \in \mathcal{F}Lv^1$. Define ${\mathbf v}ecc{\phi}_n$ and ${\mathbf v}ecc{u}_{0,n}$ as before. There exists, for any given $n \in \mathcal{N}B$, a unique solution $u_n$ to \eqref{imBq} in $C([0,T], \mathcal{F}L^1)$, with $T$ satisfying \eqref{EQ:CondOnT}, such that $(u_n , \partial_t u_n)|_{t=0} = {\mathbf v}ecc{u}_{0,n}$. Moreover, $u_n$ can be expressed as the power series in \eqref{EQ:PowerSeriesSolRankn}. First, we claim that it suffices to consider the case $\sigma < s$. Indeed, in the case $\sigma \gammaeq s$ the estimate on the initial data remains the same as for Proposition \rhoef{PROP:final2} and, by Sobolev embedding, we have
$$
\| u_n (t_n) \|_{H^\sigma} \gammaeq \| u_n (t_n) \|_{H^s} > n.
$$
Let us then assume $\sigma < s$. Lemma \rhoef{LEM:MultilinearEst}, \eqref{EQ:PowerSeriesSolRankn}, \eqref{EQ:CondOnT} and \eqref{EQ:CondFL1u} yield
\betaetagin{align*}
\| u_n(T) - \Xi_1({\mathbf v}ecc{\phi}_n) (T) \|_{H^\sigma} & \elleq \| u_n(T) - \Xi_1({\mathbf v}ecc{\phi}_n) (T) \|_{H^s} \\
& \ellesssim 1 + RA^{1/2} N^s + T^2 (RA)^{k-1} \| {\mathbf v}ecc{u}_0 \|_{\mathcal{H}^0} + T^4 (RA)^{2(k-1)} R g_s (A)
\end{align*}
\noindent while Proposition \rhoef{PROP:EstSecondPicard} gives
$$
\| \Xi_1 ({\mathbf v}ecc{\phi}_n) (T) \|_{ H^\sigma} \gammaes R^k T^2 A^{k - \frac 12 + \sigma}.
$$
\noindent Therefore, the same arguments as before show that, to prove Proposition \rhoef{PROP:final}, it suffices to verify that the following hold:
\betaetagin{align*}
\textup{(i)} & \quad RA^{\frac 12} N^s < \frac 1n, \\
\textup{(ii)} & \quad T^2 R^{k-1} A^{k-1} \elll 1, \\
\textup{(iii)} & \quad \| {\mathbf v}ecc{u}_0 \|_{\mathcal{H}^0} \elll R A^{\frac 12 + \sigma} \text{ and } \| {\mathbf v}ecc{u}_0\|_{\mathcal{F}Lv^1} \elll RA, \\
\textup{(iv)} & \quad T^4 (RA)^{2(k-1)} R g_s (A) \elll T^2 R^k A^{k - \frac 12 + \sigma}, \\
\textup{(v)} & \quad T^2 R^k A^{k - \frac 12 + \sigma} \gammag n, \\
\textup{(vi)} & \quad A \elll N.
\end{align*}
\noindent Note that, compared to the conditions in the proof of Proposition \rhoef{PROP:final2}, the only changes are in conditions (iii), (iv) and (v), where the power of $A$ changes. Yet, if we choose $A$ to be constant again, the rest of the reasoning stays the same. Hence, the choices
$$
A = 10, \quad R = N^{-s-\delta}, \quad T = N^{\frac{k-1}{2}(s + \frac \delta2)}, \quad \text{ and } \quad N = n^{\frac 2 \delta}
$$
\noindent also prove Proposition \rhoef{PROP:final}.
\end{remark}
\betaetagin{remark} \rhom
\ellabel{RK:ProofGenIBq}
The proof we gave applies directly to the study of the multi-dimensional generalized improved Boussinesq equation \eqref{generalizedimBq}. The only major change to make is in the definition of the functions $\phi_n$. Indeed, denoting $e_1 = (1, 0, \cdots, 0) \in \mathbb{R}^d$, we define $\phi_n$ by
$$
{\mathbf w}idehat{\phi}_n (\xi) = R \chi_\Omega = R \sigmaum_{\eta \in \Sigma} \chi_{\{ \eta e_1 + Q_A \}}
$$
\noindent with $\Sigma$ as in \eqref{DEF:Sigma} and $Q_A = [-\frac A2 , \frac A2]^d$. Then, the same arguments are still applicable. In particular, observe that Lemma \rhoef{LEM:ConvolutionIneq} can be rephrased as:
\betaetagin{lemma}
\ellabel{LEM:ConvolutionIneqInRd}
Let $a,b \in \mathbb{R}^d$ and $A > 0$. Then we have
$$
C_d A^d \chi_{a + b + Q_A } (\xi) \elleq \chi_{a + Q_A } \ast \chi_{b + Q_A } (\xi) \elleq {\mathbf w}idetilde{C}_d A^d \chi_{a + b + Q_{2A}}(\xi )
$$
\noindent where $C_d, {\mathbf w}idetilde{C}_d > 0$ are constants depending only on the dimension $d$.
\end{lemma}
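As a concrete illustration of Lemma \rhoef{LEM:ConvolutionIneqInRd} in dimension $d = 1$ (this numerical check is ours and not part of the proof), the convolution of the indicators of two intervals of length $A$ is the triangle function $\xi \mapsto \max(0, A - \abs{\xi - a - b})$, which is bounded below by $\frac A2$ on $a + b + Q_A$ and supported in $a + b + Q_{2A}$:

```python
def chi(center, width, xi):
    # indicator of the interval center + [-width/2, width/2]
    return 1.0 if abs(xi - center) <= width / 2 else 0.0

def box_conv(a, b, A, xi):
    # closed form of (chi_{a+Q_A} * chi_{b+Q_A})(xi): a triangle of height A
    return max(0.0, A - abs(xi - a - b))

def riemann_conv(a, b, A, xi, h=1e-3):
    # brute-force Riemann sum of the convolution, to validate box_conv
    n_steps = int(A / h)
    return sum(chi(b, A, xi - (a - A / 2 + i * h)) for i in range(n_steps)) * h

a, b, A = 1.3, -0.7, 2.0
assert abs(riemann_conv(a, b, A, a + b) - box_conv(a, b, A, a + b)) < 1e-2
for i in range(-400, 401):
    xi = 0.01 * i
    c = box_conv(a, b, A, xi)
    # lower bound with C_1 = 1/2 on a+b+Q_A, upper bound with C~_1 = 1 on a+b+Q_{2A}
    assert 0.5 * A * chi(a + b, A, xi) <= c + 1e-9
    assert c <= A * chi(a + b, 2 * A, xi) + 1e-9
```

In one dimension the constants of the lemma can thus be taken as $C_1 = \frac 12$ and ${\mathbf w}idetilde{C}_1 = 1$.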
Subsequently, the only changes caused by the change of dimension appear through powers of $A$. However, since $A$ is chosen to be constant, this does not affect the argument, just as when we deduced infinite loss of regularity from ``standard" norm inflation. Therefore, norm inflation with infinite loss of regularity for initial data ${\mathbf v}ecc{u}_0 \in \mathcal{H}^s(\mathbb{R}^d)$ with $s<0$ follows naturally from the same choice of parameters $A$, $R$, $T$ and $N$.
Similarly, our result would still apply to the same multidimensional problem on the torus $\mathbb{T}^d$, although, to the best of our knowledge, this problem has not yet been considered in the literature.
\end{remark}
\betaetagin{remark}\rhom
\ellabel{REM:NIinWs2infty}
In this remark, we show that our proof also implies Theorem~\rhoef{THM:mainLinfty}. The idea is to use exactly the same construction and to estimate the relevant norm. Using the same notation, observe that we have
$$
\| u_n (t_n) \|_{W^{\sigma, 2, \infty}} \gammaeq \| u_n (t_n) \|_{H^\sigma} > n.
$$
\noindent Therefore, we only need to show that
$$
\| {\mathbf v}ecc{u}_0 - {\mathbf v}ecc{u}_{0,n} \|_{\mathcal{W}^{s,2,\infty}} < \frac 1n
$$
\noindent which means that we want ${\mathbf v}ecc{u}_{0}$ and ${\mathbf v}ecc{u}_{0,n}$ to satisfy
$$
\| {\mathbf v}ecc{u}_0 - {\mathbf v}ecc{u}_{0,n} \|_{\mathcal{H}^{s}} < \frac 1n \quad \text{ and } \quad \| {\mathbf v}ecc{u}_0 - {\mathbf v}ecc{u}_{0,n} \|_{W^{s,\infty} \times W^{s,\infty}} < \frac 1n.
$$
\noindent We already proved that the first estimate is satisfied, so we only need to consider the second one. Observe that ${\mathbf v}ecc{u}_0 - {\mathbf v}ecc{u}_{0,n} = {\mathbf v}ecc{\phi}_n = (\phi_n , 0)$ and that $\phi_n$ is frequency supported on $\Omega$, defined by \eqref{DEF:Omega}. Then, by Cauchy-Schwarz and the inverse Fourier transform, we get
$$
| \jb{\nabla}^s \phi_n | \elleq \int_{{\mathbf w}idehat{\mathcal{M}}} | \jb{\xi}^s {\mathbf w}idehat{\phi}_n (\xi)| \mathrm{d}\xi = \int_{\Omega} \jb{\xi}^s |{\mathbf w}idehat{\phi}_n (\xi)| \mathrm{d}\xi \elleq 2A^{\frac 12} \| \phi_n \|_{H^s}
$$
\noindent Since $A = 10$ in our proof, and $\| {\mathbf v}ecc{\phi}_n \|_{\mathcal{W}^{s, 2, \infty}} = \| \phi_n \|_{W^{s, 2, \infty}}$, this shows the desired result, and hence Theorem~\rhoef{THM:mainLinfty}. Furthermore, combining this argument with Remark~\rhoef{RK:ProofGenIBq}, the result still holds in higher dimensions.
\end{remark}
\appendix
\sigmaection{Norm inflation for other spaces}
\ellabel{appendixA}
In this appendix, we come back to Remark \rhoef{REM:NIforFLMW} and show how to prove norm inflation with infinite loss of regularity at general initial data for equation \eqref{imBq} in Fourier-Lebesgue, modulation and Wiener amalgam spaces. We do not give the entire proof, since it is largely similar to the one for Sobolev spaces: we use the same construction and decomposition as before, so the first differences appear in the estimates of Lemma \rhoef{LEM:MultilinearEst} and Proposition \rhoef{PROP:EstSecondPicard}. Fortunately, under the condition $A = 10$, the new estimates turn out to be equivalent to the ones we had for Sobolev spaces, so the last part of the argument remains unchanged. We therefore only indicate how to obtain these estimates, pointing out the key differences. Before that, we introduce the relevant spaces; let us recall that modulation and Wiener amalgam spaces were introduced by Feichtinger~\cite{Fei83}.
\sigmaubsection{New spaces and some preliminary results}
First, let us recall that Fourier-Lebesgue spaces are defined in Definition \rhoef{DEF:FLspaces}. Now, we introduce the modulation and Wiener amalgam spaces.
For any $n \in \mathbb{Z}^d$, we define $Q_n = n + \elleft[- \frac 12, \frac 12 \rhoight]^d$ so that $\mathbb{R}^d = \betaigcup_{n \in \mathbb{Z}^d} Q_n$. Then, let $\rhoho \in \mathcal{S} (\mathbb{R}^d)$ such that $\rhoho \colon \mathbb{R}^d \to [0,1]$ and
\betaetagin{equation}
\rhoho(\xi) = \betaetagin{cases}
1 \quad \text{ if } \quad \abs{\xi} \elleq \frac 12 \\
0 \quad \text{ if } \quad \abs{\xi} \gammaeq 1
\end{cases}
\end{equation}
\noindent We also denote $\rhoho_n (\xi) = \rhoho (\xi - n)$ for any $n \in \mathbb{Z}^d$. Let us define
$$
\sigma_n = \frac{\rhoho_n}{\sigmaum_{l \in \mathbb{Z}^d} \rhoho_l }
$$
\noindent and $P_n f = \mathcal{F}^{-1} ( \sigma_n \mathcal{F}(f))$ for any suitable $f$. Then, we define the modulation spaces in the following way:
\betaetagin{definition}[Modulation spaces] \rhom
Let $s \in \mathbb{R}$ and $p,q \gammaeq 1$. The modulation space $M^{p,q}_s (\mathbb{R}^d)$ is the completion of the Schwartz class of functions $\mathcal{S} ( \mathbb{R}^d)$ with respect to the norm
$$
\| f \|_{M^{p,q}_s (\mathbb{R}^d)} = \betaig\| (1 + \abs{n})^s \|P_n f \|_{L^p_x (\mathbb{R}^d)} \betaig\|_{\ell^q_n (\mathbb{Z}^d)}.
$$
\end{definition}
\noindent We also define Wiener amalgam spaces in the following way:
\betaetagin{definition}[Wiener amalgam spaces] \rhom
Let $s \in \mathbb{R}$ and $p,q \gammaeq 1$. The Wiener amalgam space $W^{p,q}_s (\mathbb{R}^d)$ is the completion of the Schwartz class of functions $\mathcal{S} ( \mathbb{R}^d)$ with respect to the norm
$$
\| f \|_{W^{p,q}_s (\mathbb{R}^d)} = \betaig\| \|(1 + \abs{n})^s P_n f \|_{\ell^q_n (\mathbb{Z}^d)} \betaig\|_{L^p_x (\mathbb{R}^d)}.
$$
\end{definition}
\noindent We have the following relations between modulation and Wiener amalgam spaces:
\betaetagin{lemma}
\ellabel{LEM:RelationsMW}
Let $1 \elleq p,q, p_1, q_1, p_2, q_2 \elleq \infty$ and let $d \gammaeq 1$. Then:
\betaetagin{enumerate}
\item For $p_1 \elleq p_2$ and $q_1 \elleq q_2$, we have
\betaetagin{equation}
\ellabel{EQ:EmbeddingsM}
M^{p_1, q_1}_s (\mathbb{R}^d) \hookrightarrow M^{p_2, q_2}_s (\mathbb{R}^d),
\end{equation}
\noindent and
\betaetagin{equation}
\ellabel{EQ:EmbeddingsW}
W^{p, q_1}_s (\mathbb{R}^d) \hookrightarrow W^{p, q_2}_s (\mathbb{R}^d),
\end{equation}
\item for $q\elleq p$, we have
\betaetagin{equation}
\ellabel{EQ:EmbeddingMintoW}
M^{p, q}_s (\mathbb{R}^d) \hookrightarrow W^{p, q}_s (\mathbb{R}^d),
\end{equation}
\noindent and for $p \elleq q$ we have
\betaetagin{equation}
\ellabel{EQ:EmbeddingWintoM}
W^{p, q}_s (\mathbb{R}^d) \hookrightarrow M^{p, q}_s (\mathbb{R}^d),
\end{equation}
\item and we finally have
\betaetagin{equation}
\ellabel{EQ:RelationsMandW_end}
M^{p, \min(p,q)}_s (\mathbb{R}^d) \hookrightarrow W^{p, q}_s (\mathbb{R}^d) \hookrightarrow M^{p, \max(p,q)}_s (\mathbb{R}^d).
\end{equation}
\end{enumerate}
\end{lemma}
\noindent The proof of this lemma can be found in \cite{BH}, though split across several statements. For completeness and ease of reading, we include it here. We first need the following lemma, whose proof can be found in \cite[Lemma 6.1]{WHHG}:
\betaetagin{lemma}
\ellabel{LEM:lem6.1}
Let $\Omega$ be a compact subset of $\mathbb{R}^d$ with $d \gammaeq 1$, such that $\mathrm{diam} \ \Omega < 2R$, with $R > 0$, and $1 \elleq p \elleq q \elleq \infty$. Then, there exists a constant $C > 0$ depending only on $p$, $q$ and $R$ such that:
$$
\| f \|_{L^q (\mathbb{R}^d)} \elleq C \| f \|_{L^p (\mathbb{R}^d)}
$$
\noindent for any function $f \in L^p (\mathbb{R}^d)$ such that its Fourier transform ${\mathbf w}idehat{f}$ is compactly supported in $\Omega$.
\end{lemma}
\betaetagin{proof}[Proof of Lemma \rhoef{LEM:RelationsMW}]
The embeddings \eqref{EQ:EmbeddingsM} and \eqref{EQ:EmbeddingsW} follow from Lemma \rhoef{LEM:lem6.1} and the fact that, for any $p \elleq q$, $\ell^{p} (\mathbb{Z}^d) \hookrightarrow \ell^{q} (\mathbb{Z}^d)$.
Embeddings \eqref{EQ:EmbeddingMintoW} and \eqref{EQ:EmbeddingWintoM} follow from Minkowski's inequality.
Finally, \eqref{EQ:RelationsMandW_end} is a combination of \eqref{EQ:EmbeddingsM}, \eqref{EQ:EmbeddingsW}, \eqref{EQ:EmbeddingMintoW} and \eqref{EQ:EmbeddingWintoM}.
\end{proof}
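The inclusion $\ell^{p} (\mathbb{Z}^d) \hookrightarrow \ell^{q} (\mathbb{Z}^d)$ for $p \elleq q$ used in this proof amounts to the elementary norm inequality $\| x \|_{\ell^q} \elleq \| x \|_{\ell^p}$; the following quick numerical illustration is ours, not part of the text:

```python
import random

def lp_norm(x, p):
    # discrete l^p norm of a finitely supported sequence
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(50)]
# the l^p norms are non-increasing in p, which gives l^p -> l^q for p <= q
for p, q in [(1.0, 2.0), (1.5, 3.0), (2.0, 7.0)]:
    assert lp_norm(x, q) <= lp_norm(x, p) + 1e-12
```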
\noindent Besides, we also have the following algebra property:
\betaetagin{lemma}
\ellabel{LEM:algebraM21}
For any $d \gammaeq 1$, the space $M^{2,1}_0 (\mathbb{R}^d)$ is a Banach algebra.
\end{lemma}
\betaetagin{proof}
We want to show that, for any $u,v \in M^{2,1}_0 (\mathbb{R}^d)$, we have
$$
\| uv \|_{M^{2,1}_0 (\mathbb{R}^d)} = \sigmaum_{n \in \mathbb{Z}^d} \| P_n (uv) \|_{L^2 (\mathbb{R}^d)} \ellesssim \| u \|_{M^{2,1}_0 (\mathbb{R}^d)} \| v \|_{M^{2,1}_0 (\mathbb{R}^d)}.
$$
\noindent Note that $u = \sigmaum_{m \in \mathbb{Z}^d} P_m u $ and $v = \sigmaum_{k \in \mathbb{Z}^d} P_k v$. Therefore
$$
P_n (uv) = \sigmaum_{m,k \in \mathbb{Z}^d} P_n (P_m u P_k v ).
$$
\noindent The idea then is to study the Fourier support of each of the terms $P_n (P_m u P_k v )$. Since, for any $\xi \in \mathbb{R}^d$,
$$
\mathcal{F} [ P_n (P_m u P_k v ) ] (\xi) = \sigma_n (\xi) \int_{\xi = \xi_1 + \xi_2} \sigma_m (\xi_1) {\mathbf w}idehat{u}(\xi_1) \sigma_k (\xi_2) {\mathbf w}idehat{v}(\xi_2) \mathrm{d}\xi_1
$$
\noindent and, for any $N \in \mathbb{Z}^d$, $\sigmaupp \sigma_N \sigmaubset \{ \xi \in \mathbb{R}^d, \abs{\xi - N} \elleq 1 \}$, we then have
$$
\sigmaupp \betaig[ (\sigma_m {\mathbf w}idehat{u}) \ast (\sigma_k {\mathbf w}idehat{v}) \betaig] \sigmaubset \{ \xi \in \mathbb{R}^d, \abs{\xi - m - k} \elleq 2 \}
$$
\noindent and
$$
\mathcal{F} [ P_n (P_m u P_k v ) ] \equiv 0 \quad \text{ if } \quad \abs{n - m - k} > 3.
$$
\noindent Plancherel's identity, H\"older's inequality and \eqref{EQ:EmbeddingsM} then yield:
\betaetagin{align*}
\| P_n (P_m u P_k v ) \|_{L^2 ( \mathbb{R}^d)} & \elleq \| P_m u P_k v \|_{L^2(\mathbb{R}^d)} \chi_{(\abs{n- k - m} \elleq 3)}, \\
& \elleq \| P_m u \|_{L^\infty (\mathbb{R}^d)} \| P_k v \|_{L^2 ( \mathbb{R}^d) } \chi_{(\abs{n- k - m} \elleq 3)}, \\
& \ellesssim \| P_m u \|_{L^2 (\mathbb{R}^d)} \| P_k v \|_{L^2 ( \mathbb{R}^d) } \chi_{(\abs{n- k - m} \elleq 3)}.
\end{align*}
\noindent Hence,
\betaetagin{align*}
\| uv \|_{M^{2,1}_0 (\mathbb{R}^d)} & \elleq \sigmaum_{n \in \mathbb{Z}^d} \sigmaum_{m,k \in \mathbb{Z}^d} \| P_n (P_m u P_k v ) \|_{L^2 ( \mathbb{R}^d)}, \\
& \ellesssim \sigmaum_{n \in \mathbb{Z}^d} \sigmaum_{m,k \in \mathbb{Z}^d} \| P_m u \|_{L^2 (\mathbb{R}^d)} \| P_k v \|_{L^2 ( \mathbb{R}^d) } \chi_{(\abs{n- k - m} \elleq 3)}, \\
& \ellesssim \sigmaum_{m,k \in \mathbb{Z}^d} \| P_m u \|_{L^2 (\mathbb{R}^d)} \| P_k v \|_{L^2 ( \mathbb{R}^d) }
\end{align*}
\noindent which completes the proof.
\end{proof}
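In the last step of the proof above, summing the indicator $\chi_{(\abs{n- k - m} \elleq 3)}$ over $n$ produces exactly $7$ terms (in $d = 1$; $7^d$ in general) for each fixed $(m,k)$, which is why the double sum factorizes. A small numerical check, ours and for illustration only:

```python
# weights standing in for ||P_m u||_{L^2} and ||P_k v||_{L^2}
a = {m: 1.0 / (1.0 + m * m) for m in range(-20, 21)}
b = {k: 1.0 / (1.0 + abs(k)) for k in range(-20, 21)}

# triple sum with the constraint |n - m - k| <= 3, in dimension d = 1
triple = sum(
    a[m] * b[k]
    for n in range(-60, 61)
    for m in a
    for k in b
    if abs(n - m - k) <= 3
)

# for each (m, k), exactly 7 integers n satisfy the constraint
factored = 7 * sum(a.values()) * sum(b.values())
assert abs(triple - factored) < 1e-6
```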
\sigmaubsection{Proofs of norm inflation in other spaces}
In this subsection, we prove the following result:
\betaetagin{theorem}
\ellabel{THM:NIwithILORforFLMW}
Assume that $s < 0$ and $1 \elleq q \elleq \infty$, and let
\betaetagin{equation}
\ellabel{EQ:DefZsq}
Z_s^q \coloneqq \mathcal{F}L^{s,q}(\mathcal{M}) \quad \text{ or } \quad M_s^{2,q}(\mathbb{R}) \quad \text{ or } \quad W_s^{2,q}(\mathbb{R}).
\end{equation}
\noindent Then, norm inflation with infinite loss of regularity occurs at general initial data for \eqref{imBq} in $Z_s^q$.
\noindent Namely, let $\theta \in \mathbb{R}$, $s < 0$ and fix ${\mathbf v}ecc{u}_0 \in Z_s^q \times Z_s^q$. Given any ${\mathbf v}arepsilon > 0$, there exists a solution $u_{\mathbf v}arepsilon \in C([0,T], Z_s^q)$ to \eqref{imBq} with $T > 0$ and $t_{\mathbf v}arepsilon \in (0, {\mathbf v}arepsilon)$ such that
$$
\| (u_{\mathbf v}arepsilon (0), \partial_t u_{\mathbf v}arepsilon (0) ) - {\mathbf v}ecc{u}_0 \|_{Z_s^q \times Z_s^q} < {\mathbf v}arepsilon \qquad \text{ and } \qquad \| u_{\mathbf v}arepsilon (t_{\mathbf v}arepsilon ) \|_{Z_\theta^q} > {\mathbf v}arepsilon^{-1}.
$$
\end{theorem}
As explained before, the idea is to use the same construction as in Section \rhoef{Sec3}. In fact, we show that, keeping the same notation and choosing $A$ to be constant, we obtain estimates equivalent to the ones given in Lemma \rhoef{LEM:MultilinearEst} and Proposition \rhoef{PROP:EstSecondPicard}. The proof of Theorem \rhoef{THM:NIwithILORforFLMW} then follows by the same argument.
\sigmaubsubsection{The case of Fourier-Lebesgue spaces}
In the case of Fourier-Lebesgue spaces $\mathcal{F}L^{s,q} (\mathcal{M})$, we get from the same computations as for the Sobolev spaces $H^s (\mathcal{M})$:
\betaetagin{proposition}
\ellabel{PROP:EstforFL}
Let $s < 0$ and $1 \elleq q \elleq \infty$. We have then
\betaetagin{equation}
\ellabel{EQ:normXi0FL}
\| \Xi_0 [{\mathbf v}ecc{u}_{0, n}](T) \|_{\mathcal{F}L^{s,q}} \ellesssim 1 + R N^s A^{\frac 1q},
\end{equation}
\noindent as well as
\betaetagin{equation}
\ellabel{EQ:EstXi1UpperFL}
\|\Xi_1 [{\mathbf v}ecc{u}_{0, n}] (T) - \Xi_1 [{\mathbf v}ecc{\phi}_{ n}] (T) \|_{\mathcal{F}L^{s,q}} \ellesssim T^{2} (RA)^{(k-1)} \| {\mathbf v}ecc{u}_0 \|_{\mathcal{F}Lv^{q}( \mathcal{M})},
\end{equation}
\noindent and, for any $j \gammaeq 2$,
\betaetagin{equation}
\ellabel{EQ:EstXijFL}
\|\Xi_j [{\mathbf v}ecc{u}_{0, n}] (T) \|_{\mathcal{F}L^{s,q}} \ellesssim T^{2j} (RA)^{(k-1)j} \betaig( \| {\mathbf v}ecc{u}_0 \|_{\mathcal{F}Lv^{q}( \mathcal{M})} + Rf_{s,q} (A) \betaig)
\end{equation}
\noindent where
\betaetagin{equation}
\ellabel{EQ:deffsqFL}
f_{s,q}(A) = \| \jb{\xi}^s \|_{L^q_\xi (Q_A) }.
\end{equation}
\noindent Also, if $A \elll N$, we have the following lower bound:
\betaetagin{equation}
\ellabel{EQ:EstSecondPicardFL}
\| \Xi_1 ({\mathbf v}ecc{\phi}_n) (T) \|_{ \mathcal{F}L^{s,q}} \gammaes R^k T^2 A^{k - 1 + \frac 1q + s}
\end{equation}
\end{proposition}
\noindent Under the condition $A = 10$, we observe that \eqref{EQ:normXi0FL}, \eqref{EQ:EstXi1UpperFL}, \eqref{EQ:EstXijFL} and \eqref{EQ:EstSecondPicardFL} are essentially equivalent to \eqref{EQ:MultilinearEst2}, \eqref{EQ:MultilinearEst3}, \eqref{EQ:MultilinearEst4} and \eqref{EQ:EstSecondPicard}, respectively, so that the expected result follows from the same argument. One condition does change, namely $\| {\mathbf v}ecc{u}_0 \|_{\mathcal{F}Lv^{q}( \mathcal{M})} \ellesssim 1$ instead of $\| {\mathbf v}ecc{u}_0 \|_{\mathcal{H}^0 ( \mathcal{M})} \ellesssim 1$, but this is quite reasonable since our initial data ${\mathbf v}ecc{u}_0$ is fixed.
\sigmaubsubsection{The case of modulation spaces}
The computations in this section follow the same ideas as in the previous one, but modulation spaces make them a bit trickier to apply. We first state our results:
\betaetagin{proposition}
\ellabel{PROP:EstForMs2qspaces}
Let $s < 0$ and $1 \elleq q \elleq \infty$. Keeping the same notation as before, and assuming
$$
\| {\mathbf v}ecc{u}_0 \|_{M^{2,q}_s \times M^{2,q}_s} \ellesssim 1 \quad \text{ and } \quad \| {\mathbf v}ecc{u}_0 \|_{M^{2,1}_0 \times M^{2,1}_0} \elll R \| (1 + \abs{n})^s \|_{\ell^q ( 1 \elleq \abs{n} \elleq A)},
$$
\noindent we have
\betaetagin{equation}
\ellabel{EQ:normXi0M}
\| \Xi_0 [{\mathbf v}ecc{u}_{0,n}](T) \|_{M_s^{2,q}} \ellesssim 1 + R N^s A^{\frac 1q},
\end{equation}
\noindent as well as
\betaetagin{equation}
\ellabel{EQ:EstXi1UpperM}
\|\Xi_1 [{\mathbf v}ecc{u}_{0, n}] (T) - \Xi_1 [{\mathbf v}ecc{\phi}_{ n}] (T) \|_{M_s^{2,q}} \ellesssim T^{2} (RA)^{(k-1)} \| {\mathbf v}ecc{u}_0 \|_{M^{2,1}_0( \mathcal{M})},
\end{equation}
\noindent and, for any $j \gammaeq 2$,
\betaetagin{equation}
\ellabel{EQ:EstXijM}
\| \Xi_j [{\mathbf v}ecc{u}_{0,n}] (T) \|_{M_s^{2,q}} \ellesssim T^{2j} (RA)^{(k-1)j} R \| (1 + \abs{n})^s \|_{\ell^q ( 1 \elleq \abs{n} \elleq A)}.
\end{equation}
\noindent Also, if $A \elll N$, we have the following lower bound:
\betaetagin{equation}
\ellabel{EQ:EstSecondPicardM}
\| \Xi_1 ({\mathbf v}ecc{\phi}_n) (T) \|_{M_s^{2,q}} \gammaes R^k T^2 A^{k - 1}A^{ \frac 1q + s}.
\end{equation}
\end{proposition}
Again, we see that if we choose $A$ to be constant, all these estimates are equivalent to the ones we had for Sobolev spaces, and the norm inflation result becomes straightforward. We include the proof of Proposition \rhoef{PROP:EstForMs2qspaces} both for completeness and to emphasize the differences between these spaces and Sobolev spaces.
\betaetagin{proof}[Proof of Proposition \rhoef{PROP:EstForMs2qspaces}]
First, from $\abs{ \cos x } \elleq 1$ and $\abs{\sigmain x} \ellesssim \abs{x}$, we have
\betaetagin{equation}
\ellabel{EQ:EstLinearM}
\| S(t)(u_0 , u_1 ) \|_{M_s^{2,q}} \ellesssim \| u_0 \|_{M_s^{2,q}} + \abs{t} \| u_1 \|_{M_s^{2,q}} \ellesssim \| (u_0, u_1) \|_{M^{2,q}_s \times M^{2,q}_s}
\end{equation}
\noindent for any $0 \elleq t \elleq 1$ and $u_0, u_1 \in M^{2,q}_s$. Besides, we also have
\betaetagin{align*}
\| \phi_n \|_{M_s^{2,q}} & = R \betaig\| (1 + \abs{n})^s \| P_n \mathcal{F}^{-1}(\sigmaum_{\eta \in \Sigma} \chi_{\eta + Q_A}) \|_{L^2} \betaig\|_{\ell^q_n} \\
& = R \betaig\| (1 + \abs{n})^s \| \sigma_n \sigmaum_{\eta \in \Sigma} \chi_{\eta + Q_A} \|_{L^2} \betaig\|_{\ell^q_n}
\end{align*}
\noindent but since the sets $\eta + Q_A$, $\eta \in \Sigma$, are pairwise disjoint, we get
$$
\elleft\| \sigma_n \sigmaum_{\eta \in \Sigma} \chi_{\eta + Q_A} \rhoight\|^2_{L^2} \sigmaim \chi_{N + Q_A} (n) \int_{\mathbb{R}} \sigma^2_n (\xi) \chi^2_{n + Q_A} (\xi) \mathrm{d}\xi \sigmaim \chi_{N + Q_A} (n)
$$
\noindent and
$$
\| \phi_n \|_{M_s^{2,q}} \sigmaim R \| (1 + \abs{n} )^s \chi_{N + Q_A} (n)\|_{\ell^q_n} \sigmaim RN^s A^{\frac 1q}
$$
\noindent which, combined with \eqref{EQ:EstLinearM}, proves our first estimate. Besides, \eqref{EQ:EmbeddingsM} and the algebra property of $M^{2,1}_0$ yield, by an argument similar to the one for \eqref{EQ:MultilinearEst3}:
\betaetagin{align*}
\|\Xi_1 [{\mathbf v}ecc{u}_{0, n}] (T) - \Xi_1 [{\mathbf v}ecc{\phi}_{ n}] (T) \|_{M_s^{2,q}} & \ellesssim \|\Xi_1 [{\mathbf v}ecc{u}_{0, n}] (T) - \Xi_1 [{\mathbf v}ecc{\phi}_{ n}] (T) \|_{M_0^{2,1}}, \\
& \ellesssim T^2 \| {\mathbf v}ecc{u}_0 \|_{M^{2,1}_0 \times M^{2,1}_0} \betaig( \| {\mathbf v}ecc{u}_0 \|_{M^{2,1}_0 \times M^{2,1}_0} + \| {\mathbf v}ecc{\phi}_n \|_{M^{2,1}_0 \times M^{2,1}_0} \betaig)^{k-1}, \\
& \ellesssim T^2 \| {\mathbf v}ecc{u}_0 \|_{M^{2,1}_0 \times M^{2,1}_0} (RA)^{k-1}.
\end{align*}
\noindent Hence \eqref{EQ:EstXi1UpperM}.
Now, let $j \gammaeq 2$. Using the same support argument as in the proof of \eqref{EQ:MultilinearEst4}, we have
\betaetagin{align*}
\| \sigma_n \mathcal{F}[\Xi_j ({\mathbf v}ecc{\phi}_n)](T) \|_{L^2_\xi} & \elleq \| \sigma_n \|_{L^2_\xi (\sigmaupp \mathcal{F}[ \Xi_j ({\mathbf v}ecc{\phi}_n)](T))} \| \Xi_j ({\mathbf v}ecc{\phi}_n) (T) \|_{\mathcal{F}L^\infty} \\
& \ellesssim \| \sigma_n\|_{L^2_\xi (Q_A \cap Q_n)} C^j T^{2j} \| {\mathbf v}ecc{\phi}_n \|^{(k-1)j-1}_{\mathcal{F}Lv^1} \| {\mathbf v}ecc{\phi}_n \|^2_{\mathcal{H}^0} \\
& \ellesssim \mathds{1}_{\{ n \in Q_A\}} T^{2j}(RA)^{(k-1)j} R
\end{align*}
\noindent so that
$$
\| \Xi_j [\vecc{\phi}_n] (T) \|_{M_s^{2,q}} \lesssim T^{2j} (RA)^{(k-1)j} R \| (1 + \abs{n})^s \mathds{1}_{\{ n \in Q_A \}} \|_{\ell^q_n}.
$$
\noindent Besides, an argument similar to the one for \eqref{EQ:EstXi1UpperM} yields
\begin{align*}
\|\Xi_j [\vecc{u}_{0, n}] (T) - \Xi_j [\vecc{\phi}_{ n}] (T) \|_{M_s^{2,q}} & \lesssim \|\Xi_j [\vecc{u}_{0, n}] (T) - \Xi_j [\vecc{\phi}_{ n}] (T) \|_{M_0^{2,1}}, \\
& \lesssim T^{2j} \| \vecc{u}_0 \|_{M^{2,1}_0 \times M^{2,1}_0} \big( \| \vecc{u}_0 \|_{M^{2,1}_0 \times M^{2,1}_0} + \| \vecc{\phi}_n \|_{M^{2,1}_0 \times M^{2,1}_0} \big)^{(k-1)j}, \\
& \lesssim T^{2j} \| \vecc{u}_0 \|_{M^{2,1}_0 \times M^{2,1}_0} (RA^{\frac 12})^{(k-1)j},
\end{align*}
\noindent which gives our third estimate.
Finally, our fourth estimate follows from a straightforward adaptation of the proof of Proposition \ref{PROP:EstSecondPicard}, the only difference being the norm used.
\end{proof}
\subsubsection{The case of Wiener amalgam spaces}
This section relies heavily on the interplay between modulation and Wiener amalgam spaces. Indeed, using \eqref{EQ:RelationsMandW_end} and Proposition \ref{PROP:EstForMs2qspaces}, we get respectively
\begin{equation}
\label{EQ:EstForWs2q_Xi0}
\| \Xi_0 [\vecc{u}_{0,n}](T) \|_{W_s^{2,q}} \lesssim \| \Xi_0 [\vecc{u}_{0,n}](T) \|_{M_s^{2,\min(2,q)}} \lesssim 1 + R N^s A^{\frac 12},
\end{equation}
\noindent as well as
\begin{align}
\label{EQ:EstForWs2q_Xi1Upper}
\| \Xi_1 [\vecc{u}_{0,n}] (T) - \Xi_1 [\vecc {\phi}_n](T) \|_{W_s^{2,q}} & \lesssim \| \Xi_1 [\vecc{u}_{0,n}] (T) - \Xi_1 [\vecc {\phi}_n](T) \|_{M_s^{2,\min(2,q)}}\\
& \lesssim T^{2} (RA)^{(k-1)} \| \vecc{u}_0 \|_{M^{2,1}_0( \mathcal{M})},
\end{align}
\noindent and, for any $j \geq 2$,
\begin{align}
\label{EQ:EstForWs2q_Xij}
\| \Xi_j [\vecc{u}_{0,n}] (T) \|_{W_s^{2,q}} & \lesssim \| \Xi_j [\vecc{u}_{0,n}] (T) \|_{M_s^{2,\min(2,q)}}\\
& \lesssim T^{2j} (RA)^{(k-1)j} R \| (1 + \abs{n})^s \|_{\ell^{\min(2,q)} ( 1 \leq \abs{n} \leq A)}.
\end{align}
\noindent Besides, we also get the following lower bound:
\begin{equation}
\label{EQ:EstForWs2q_Xi1}
\| \Xi_1 (\vecc{\phi}_n) (T) \|_{W_s^{2,q}} \gtrsim \| \Xi_1 (\vecc{\phi}_n) (T) \|_{W_s^{2,\max(2,q)}} \gtrsim R^k T^2 A^{k - 1}A^{ \frac 1q + s}.
\end{equation}
\noindent Again, these estimates are equivalent to the ones we had for Sobolev spaces under the condition $A = 10$, so norm inflation with infinite loss of regularity follows.
\sigmaection*{Acknowledgments} The author would like to thank Tadahiro Oh for suggesting this problem and his continuous support throughout this work. The author acknowledges support from Tadahiro Oh's ERC grant (no. 864138 ``SingStochDispDyn"). The author would also like to thank Younes Zine for several helpful discussions.
\begin{thebibliography}{99}
\bibitem{AC}
T.~Alazard, R.~Carles,
{\it Loss of regularity for supercritical nonlinear Schrödinger equations},
Math. Ann. 343 (2009), no. 2, 397--420.
\bibitem{BT}
I.~Bejenaru, T.~Tao,
{\it Sharp well-posedness and ill-posedness results for a quadratic non-linear Schrödinger equation},
J. Funct. Anal. 233 (2006), no. 1, 228--259.
\bibitem{BC}
D.~G.~Bhimani, R.~Carles,
{\it Norm inflation for nonlinear Schr\"odinger equations in Fourier-Lebesgue and modulation spaces of negative regularity},
J. Fourier Anal. Appl. 26 (2020), no. 6, paper no. 78, 34 pp.
\bibitem{BH}
D.~G.~Bhimani, S.~Haque,
{\it Norm inflation with infinite loss of regularity at general initial data for nonlinear wave equations in Wiener amalgam and Fourier amalgam spaces},
arXiv:2106.13635 [math.AP].
\bibitem{BH2}
D.~G.~Bhimani, S.~Haque,
{\it Norm inflation for BBM equation in Fourier amalgam and Wiener amalgam spaces with negative regularity},
Mathematics 9 (2021), no. 23, 3145.
\bibitem{Bou97}
J.~Bourgain,
{\it Periodic Korteweg de Vries equation with measures as initial data},
Selecta Math. (N.S.) 3 (1997), no. 2, 115--159.
\bibitem{BTz1}
N.~Burq, N.~Tzvetkov,
{\it Random data Cauchy theory for supercritical wave equations. I. Local theory,}
Invent. Math. 173 (2008), no. 3, 449--475.
\bibitem{CK}
R.~Carles, T.~Kappeler,
{\it Norm-inflation with infinite loss of regularity for periodic NLS equations in negative Sobolev spaces},
Bull. Soc. Math. France 145 (2017), no. 4, 623--642.
\bibitem{Chevyrev}
I.~Chevyrev,
{\it Norm inflation for a non-linear heat equation with Gaussian initial conditions},
arXiv:2205.14350 [math.AP].
\bibitem{COW}
I.~Chevyrev, T.~Oh, Y.~Wang,
{\it Norm inflation for the cubic nonlinear heat equation above the scaling critical regularity},
arXiv:2205.14488 [math.AP].
\bibitem{CO}
Y.~Cho, T.~Ozawa,
{\it On small amplitude solutions to the generalized Boussinesq equations},
Discrete Contin. Dyn. Syst. 17 (2007), no. 4, 691--711.
\bibitem{CP}
A.~Choffrut, O.~Pocovnicu,
{\it Ill-posedness of the cubic nonlinear half-wave equation and other fractional NLS on the real line},
Int. Math. Res. Not. IMRN 2018, no. 3, 699--738.
\bibitem{CCT}
M.~Christ, J.~Colliander, T.~Tao,
{\it Ill-posedness for nonlinear Schr\"odinger and wave equations},
arXiv:math/0311048 [math.AP].
\bibitem{CLS}
P.~A.~Clarkson, R.~J.~LeVeque, R.~Saxton,
{\it Solitary-wave interactions in elastic rods},
Stud. Appl. Math. 75 (1986), no. 2, 95--121.
\bibitem{CM}
A.~Constantin, L.~Molinet,
{\it The initial value problem for a generalized Boussinesq equation},
Differential Integral Equations 15 (2002), no. 9, 1061--1072.
\bibitem{Fei83}
H.~G.~Feichtinger,
{\it Modulation spaces on locally compact Abelian groups},
Technical Report, University of Vienna, 1983.
\bibitem{GL}
D.-A.~Geba, B.~Lin,
{\it Almost optimal local well-posedness for modified Boussinesq equations},
Electron. J. Differential Equations 2020, Paper No. 24, 10 pp.
\bibitem{FO}
J.~Forlano, M.~Okamoto,
{\it A remark on norm inflation for nonlinear wave equations},
Dyn. Partial Differ. Equ. 17 (2020), no. 4, 361--381.
\bibitem{IO}
T.~Iwabuchi, T.~Ogawa,
{\it Ill-posedness for the nonlinear Schrödinger equation with quadratic non-linearity in low dimensions},
Trans. Amer. Math. Soc. 367 (2015), no. 4, 2613--2630.
\bibitem{Kis}
N.~Kishimoto,
{\it A remark on norm inflation for nonlinear Schrödinger equations},
Commun. Pure Appl. Anal. 18 (2019), no. 3, 1375--1402.
\bibitem{Mak}
V.~G.~Makhankov,
{\it Dynamics of classical solitons (in nonintegrable systems)},
Phys. Rep. 35 (1978), no. 1, 1--128.
\bibitem{Mot}
J.~Mott,
{\it Elastic waves propagation in an infinite isotropic solid cylinder},
J. Acoust. Soc. E, 54 (1973), 1129--1135.
\bibitem{Oh}
T.~Oh,
{\it A remark on norm inflation with general initial data for the cubic nonlinear Schrödinger equations in negative Sobolev spaces},
Funkcial. Ekvac. 60 (2017), no. 2, 259--277.
\bibitem{OOT}
T.~Oh, M.~Okamoto, N.~Tzvetkov,
{\it Uniqueness and non-uniqueness of the Gaussian free field evolution under the two-dimensional Wick ordered cubic wave equation},
arXiv:2206.00728 [math.AP].
\bibitem{OW}
T.~Oh, Y.~Wang,
{\it On the ill-posedness of the cubic nonlinear Schrödinger equation on the circle},
An. Ştiinţ. Univ. Al. I. Cuza Iaşi. Mat. (N.S.) 64 (2018), no. 1, 53--84.
\bibitem{Ok}
M.~Okamoto,
{\it Norm inflation for the generalized Boussinesq and Kawahara equations},
Nonlinear Anal. 157 (2017), 44--61.
\bibitem{Tur}
S.~Turitsyn,
{\it Blow-up in the Boussinesq equation},
Phys. Rev. E, 73 (1993), 267--269.
\bibitem{WHHG}
B.~Wang, Z.~Huo, C.~Hao, Z.~Guo,
{\it Harmonic analysis method for nonlinear evolution equations. I.},
World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2011. xiv+283 pp.
\bibitem{WC}
S.~Wang, G.~Chen,
{\it The Cauchy problem for the generalized IMBq equation in $W^{s,p}(\mathbb{R}^n)$},
J. Math. Anal. Appl. 266 (2002), no. 1, 38--54.
\bibitem{WC2}
S.~Wang, G.~Chen,
{\it Small amplitude solutions of the generalized IMBq equation},
J. Math. Anal. Appl. 274 (2002), no. 2, 846--866.
\bibitem{WZ}
Y.~Wang, Y.~Zine,
{\it Norm inflation for the derivative nonlinear Schr\"odinger equation},
arXiv:2206.08719 [math.AP].
\bibitem{Xia}
B.~Xia,
{\it Generic ill-posedness for wave equation of power type on 3D torus},
arXiv:1507.07179 [math.AP].
\bibitem{Zhi}
Y.~Zhijian,
{\it Existence and non-existence of global solutions to a generalized modification of the improved Boussinesq equation},
Math. Methods Appl. Sci. 21 (1998), no. 16, 1467--1477.
\end{thebibliography}
\end{document} |
\begin{document}
\title[Invariant Theory]
{Invariant theory of Artin-Schelter regular algebras: a survey}
\author{Ellen E. Kirkman}
\address{ Department of Mathematics,
P. O. Box 7388, Wake Forest University, Winston-Salem, NC 27109}
\mathrm email{[email protected]}
\begin{abstract}
This is a survey of results that extend notions of the classical invariant theory of linear actions by finite groups on $k[x_1, \dots, x_n]$ to the setting of actions of a finite group or Hopf algebra $H$ on an Artin-Schelter regular algebra $A$. We investigate when $A^H$ is AS regular, or AS Gorenstein, or a ``complete intersection" in a sense that is defined. Directions of related research are explored briefly.
\end{abstract}
\maketitle
\setcounter{section}{-1}
\section{Introduction}
\label{xxsec0}
The study of invariants of finite groups acting on a commutative polynomial ring $k[x_1, \dots, x_n]$ has played a major role in the development of commutative algebra, algebraic geometry, and representation theory. This paper is a survey of more recent work that extends these techniques to a noncommutative setting. We will be particularly concerned with algebraic properties of the subring of invariants.
We begin with some basic definitions. Throughout we let $k$ be an algebraically closed field of characteristic zero and $A$ be a $k$-algebra. A $k$-algebra $A$ is said to be {\it connected graded} if
$A = k \oplus A_1 \oplus A_2 \oplus \cdots$
with $A_i \cdot A_j \subseteq A_{i+j}$ for
all $i,j \in \mathbb{N}$; we denote the trivial module of a connected graded algebra by $k$. Throughout we assume that a connected graded algebra $A$ is generated in degree 1. The {\it Hilbert series} of $A$ is defined
to be the formal power series $H_A(t) = \sum_{i \in \mathbb{N}} (\dim_k A_i) t^i$. The Gelfand-Kirillov dimension of an algebra $A$ is
denoted by $\GKdim A$; it is related to the rate of growth
in the dimensions of the graded pieces $A_n$ of $A$ (see \cite{KL}).
The commutative polynomial ring $k[x_1, \dots, x_n]$
has Gelfand-Kirillov dimension $n$.
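As a quick numerical illustration (ours, not part of the survey): for $A = k[x_1, \dots, x_n]$ the Hilbert series is $H_A(t) = (1-t)^{-n}$, whose $t^i$ coefficient is $\dim_k A_i = \binom{n+i-1}{i}$, the number of degree-$i$ monomials in $n$ variables; the polynomial growth of these dimensions is what gives $\GKdim A = n$. A short Python check with sympy:

```python
# Compare the Taylor coefficients of (1 - t)^(-n) with the count of
# degree-i monomials in n variables, C(n + i - 1, i).
from math import comb

import sympy as sp

t = sp.symbols('t')
n, terms = 3, 8
series = sp.series((1 - t) ** (-n), t, 0, terms).removeO()
coeffs = [series.coeff(t, i) for i in range(terms)]
expected = [comb(n + i - 1, i) for i in range(terms)]
assert coeffs == expected
print(coeffs)  # [1, 3, 6, 10, 15, 21, 28, 36]
```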
In our noncommutative setting we replace the commutative polynomial ring $k[x_1, \dots, x_n]$ with an Artin-Schelter regular algebra, defined as follows.
\begin{definition}
\label{zzdef1.1}
Let $A$ be a connected graded algebra.
We call $A$ {\it Artin-Schelter Gorenstein} (or {\it AS Gorenstein}
for short) {\it of dimension $d$} if the following conditions hold:
\begin{enumerate}
\item[(a)]
$A$ has injective dimension $d<\infty$ on the left and
on the right,
\item[(b)]
$\Ext^i_A(_Ak,_AA)=\Ext^i_{A}(k_A,A_A)=0$ for all $i\neq d$, and
\item[(c)]
$\Ext^d_A(_Ak,_AA)\cong \Ext^d_{A}(k_A,A_A)\cong k(-l)$ for some $l$ (where
$l$, the shift in the grading, is called the {\it AS index} of $A$).
\end{enumerate}
If, in addition,
\begin{enumerate}
\item[(d)]
$A$ has finite global dimension, and
\item[(e)]
$A$ has finite Gelfand-Kirillov dimension,
\end{enumerate}
then $A$ is called {\it Artin-Schelter regular}
(or {\it AS regular} for short) {\it of dimension $d$}.
\end{definition}
Note that polynomial rings $k[x_1,\dots, x_n]$
for $n\geq 1$, with $\deg x_i=1$, are AS
regular of dimension $n$, and they are the only
commutative AS regular algebras. Hence
AS regular algebras are natural
generalizations of commutative polynomial rings.
In some cases we are able to prove stronger results for a special class of
AS regular algebras that we call quantum polynomial rings.
\begin{definition}
\label{quantumpolynomial} Let $A$ be a connected graded
algebra. If $A$ is a noetherian, AS regular graded
domain of global dimension $n$ and $H_A(t)=(1-t)^{-n}$,
then we call $A$ {\it a quantum polynomial ring
of dimension $n$}.
\end{definition}
By \cite[Theorem 5.11]{Sm2}, a quantum polynomial ring is
Koszul and hence it is generated in degree 1. The
GK-dimension of a quantum polynomial ring of global
dimension $n$ is $n$.
Artin-Schelter regular algebras of dimension 3 were classified in
\cite{ASc, ATV}. They occur in two families: (1) the quantum polynomial
algebras having 3 generators and 3 quadratic relations (that include the {\it 3-dimensional Sklyanin algebra}), and (2) algebras having 2 generators and 2 cubic relations (that include the noetherian graded {\it down-up algebras}, which will be discussed in Section 3). There are many examples of AS regular algebras of higher dimensions, but their
classification has not been completed (current research centers on dimension 4, which appears to be much more complex). The invariant theory of AS regular algebras may become richer as higher dimensional AS regular algebras are discovered and classified.
We consider finite groups $G$ of graded automorphisms acting on $A$, and throughout we denote the graded automorphism group of $A$ by $\Aut(A)$. We note that any $n \times n$ matrix acts naturally on a commutative polynomial ring in $n$ indeterminates. However, in order for a linear map defined on the degree 1 piece of a noncommutative algebra $A$ to give a well-defined graded homomorphism of $A$, one must check that the map preserves the ideal of relations; for example, the transposition of $x$ and $y$ is an automorphism of $k_q[x,y]$, the skew polynomial ring with the relation $yx=qxy$, if and only if $q = \pm 1$.
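The claim about $k_q[x,y]$ can be verified by a small linear-algebra computation: in degree 2, the transposition must send the relation $yx - qxy$ to a scalar multiple of itself. A hedged Python sketch with sympy (the basis ordering of degree-2 words is our own choice):

```python
import sympy as sp

q, c = sp.symbols('q c')
# In the free algebra on x, y, write degree-2 elements in the ordered
# basis (xx, xy, yx, yy).  The defining relation of k_q[x,y] is
# r = yx - q*xy, i.e. the coordinate vector (0, -q, 1, 0).
r = sp.Matrix([0, -q, 1, 0])
# The transposition x <-> y swaps letters in each word, so it sends
# r to xy - q*yx, i.e. the vector (0, 1, -q, 0).
g_r = sp.Matrix([0, 1, -q, 0])
# The swap extends to k_q[x,y] iff g(r) stays in the degree-2 part of
# the relation ideal, i.e. g(r) = c*r for some scalar c.
eqs = [e for e in (g_r - c * r) if e != 0]
solutions = sp.solve(eqs, [q, c], dict=True)
qs = sorted(sol[q] for sol in solutions)
print(qs)  # [-1, 1]
```

Solving $1 = -cq$ and $-q = c$ forces $q^2 = 1$, recovering the statement that the transposition is an automorphism of $k_q[x,y]$ exactly when $q = \pm 1$.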
Typically there is a limited supply of graded automorphisms of $A$, so we introduce further noncommutativity by allowing actions on $A$ by a finite dimensional Hopf algebra $H$. Throughout we adopt the usual notation for the Hopf structure of a Hopf algebra
$H$, namely $(H, m, \epsilon, \Delta, u, S)$ (see \cite{Mon}, our basic reference for Hopf algebra actions), and we denote the $H$-action on $A$ by $\cdot : H
\otimes A \rightarrow A$. All groups $G$ we consider are finite, and all Hopf algebras $H$ are finite dimensional.
\begin{definition}\
\label{def1.3}
\begin{enumerate}
\item Let $H$ be a Hopf algebra and $A$ be a $k$-algebra. We say
that {\it $H$ acts on $A$} (from the left), or $A$ is a {\it left
$H$-module algebra}, if $A$ is a left $H$-module, and for all $h \in H$,
$h \cdot (ab) = \sum (h_1 \cdot a)(h_2 \cdot b)$ for all $a,b \in A$, where $\Delta(h) =
\sum h_1 \otimes h_2$ (using the usual Sweedler notation convention),
and $h \cdot 1_A = \epsilon(h)
1_A$.
\item The {\it invariant subring} of
such an action is defined to be
$$A^H = \{ a \in A ~|~ h \cdot a = \epsilon(h) a, ~\forall ~h \in H \}.$$
\end{enumerate}
\end{definition}
When the Hopf algebra $H=k[G]$ is the group algebra of a finite group $G$, the usual coproduct on $k[G]$ is $\Delta(g) = g \otimes g$, and the Hopf algebra action on $A$ means that $g$ acts as a homomorphism on elements of $A$. Similarly, since $\epsilon(g) = 1$, the invariant subring $A^{k[G]} = A^G$ is the usual subring of invariants.
Sometimes it is more convenient to view a left $H$-module algebra $A$ as a right $H^\circ$-comodule algebra, where $H^\circ$ is the Hopf dual of $H$. When $H$ is finite dimensional the Hopf dual is the vector space dual, and a left $H$-module can be viewed as a right $H^\circ$-module. To be a right $K$-comodule algebra over a Hopf algebra $K$ we require a coaction $\rho: A \rightarrow A \otimes K$ with properties: $\rho(1_A) = 1_A \otimes 1_K$ and
$\rho(ab) = \rho(a) \rho(b)$ for all $a,b \in A$. The coinvariant subring of such a coaction is defined to be
$$A^{co K} = \{ a \in A | \rho(a) = a \otimes 1_K \}.$$
It follows (\cite[Lemma 6.1 (c)]{KKZ2}) that $A^H = A^{co H^\circ}$.
Throughout we require that the Hopf algebra $H$ acts on $A$ so that
\begin{hypotheses} \label{standing}\
\begin{itemize}
\item $A$ is an $H$-module algebra,
\item the grading on $A$ is preserved, and
\item the action of $H$ on $A$ is inner faithful.
\end{itemize}
\end{hypotheses}
The inner faithful assumption guarantees that the Hopf algebra action is not actually an action over a homomorphic image of $H$ that might be a group algebra (in which case the action is actually a group action).
\begin{definition}\cite{BB}.
\label{def1.4} Let $M$ be a left $H$-module. We say that $M$ is an {\it
inner faithful} $H$-module, or $H$ \emph{acts inner faithfully} on $M$,
if $IM\neq 0$ for every nonzero Hopf ideal $I$ of $H$.
Dually, let $N$ be a right $K$-comodule. We say that $N$ is an {\it inner faithful} $K$-comodule, or that $K$ {\it acts inner faithfully} on $N$, if for any proper Hopf subalgebra $K' \subsetneq K$, $\rho(N)$ is not contained in $N \otimes K'$.
\end{definition}
\begin{lemma} \cite[Lemma 1.6(a)]{CKWZ1}. Let $H$ be a finite dimensional Hopf algebra, $K = H^\circ$, and $U$ be a left $H$-module. Then $U$ is a right $K$-comodule, and the $H$-action on $U$ is inner faithful if and only if the induced $K$-coaction on $U$ is inner faithful.
\end{lemma}
Despite the hope that Hopf algebra actions would provide many new actions on $A$, we note the result of Etingof and Walton, that states that if $H$ is a semisimple Hopf algebra acting inner faithfully on a commutative domain, then $H$ is a group algebra \cite{EW1}; this result does not require our assumptions that the action of $H$ on $A$ preserves the grading on $A$. Other results showing that Hopf actions must factor through group actions are mentioned in Section 4.
There are, however, interesting actions by pointed Hopf algebras on domains and fields \cite{EW2}. Moreover, there are subrings of invariants that occur under Hopf actions that are not invariants under group actions, warranting continued study of these actions.
In the first section we discuss the question of when $A^H$ is AS regular, extending the Shephard-Todd-Chevalley Theorem to our noncommutative context. In the second section we discuss the question of when $A^H$ is AS Gorenstein; these results include extending Watanabe's Theorem that $k[x_1, \dots, x_n]^G$ is Gorenstein when $G$ is a finite subgroup of ${\rm SL}_n(k)$, and Felix Klein's classification of the invariants of finite subgroups of ${\rm SL}_2(k)$ acting on $k[x,y]$. In the third section we discuss the question of when $A^G$ is a ``complete intersection" (and what a complete intersection might mean in this context); here our goal is to extend results of Gordeev, Kac, Nakajima, and Watanabe to our noncommutative setting. In the final section we briefly discuss some related directions of research.
\section{Artin-Schelter regular subrings of invariants}
\label{xxsec1}
A result attributed to Carl F. Gauss states that the subring of invariants of the commutative polynomial ring $k[x_1, \dots, x_n]$ under the action of the symmetric group $S_n$, permuting the indeterminates, is generated by the $n$ elementary symmetric polynomials. Further, the symmetric polynomials are algebraically independent, so that the subring of invariants is also a polynomial ring. This result raised the question: for which finite groups $G$ acting on $k[x_1, \dots, x_n]$ is the fixed subring a polynomial ring? This question was answered in 1954 for $k$ an algebraically closed field of characteristic zero by G. C. Shephard and J. A. Todd \cite{ShT}, who classified the complex reflection groups and produced their invariants. Shortly afterward, C. Chevalley \cite{C} gave a more abstract argument that showed that for real reflection groups $G$, the fixed subring $k[x_1, \dots, x_n]^G$ is a polynomial ring, and J.-P. Serre \cite{S} showed that Chevalley's argument could be used to prove the result for all unitary reflection groups. A. Borel's history \cite[Chapter VII]{B} provides details on the origins of the ``Shephard-Todd-Chevalley Theorem". Invariants in a commutative polynomial ring under the action of a reflection group do \underline{not} always form a polynomial ring in characteristic $p$ (see \cite[Example 3.7.7]{DK} (and following) for a discussion, including an example of Nakajima \cite{N1}). However, in characteristic $p$, to obtain a polynomial subring of invariants it is necessary (but not sufficient) that the group be a reflection group (see \cite[proof of Theorem 7.2.1]{Be} for a proof that works in any characteristic). In characteristic zero the necessity of $G$ being a reflection group follows from the sufficiency by considering the normal subgroup of reflections in the group.
\begin{theorem}[{\bf Shephard-Todd-Chevalley Theorem}] \cite{ShT} {\rm and} \cite{C}. For $k$ a field of characteristic zero, the subring of invariants $k[x_1, \dots, x_n]^G$ under a finite group $G$ is a polynomial ring if and only if $G$ is generated by reflections.
\end{theorem}
In this context a linear map $g$ on a vector space $V$, where $g$ is of finite order and hence is diagonalizable, is called a {\it reflection of $V$} if all but one of the eigenvalues of $g$ are 1 (i.e. $\dim V^g = \dim V - 1$). We note that sometimes the term ``reflection" is reserved for the case that $g$ has real eigenvalues (so that the single non-identity eigenvalue is $-1$), and the term ``pseudo-reflection" is used for the case that the single non-identity eigenvalue is a complex root of unity. We will call a graded automorphism that is a reflection (in this sense) of $V=A_1$, the $k$-space of elements of degree 1, a ``classical reflection".
In the setting of the Shephard-Todd-Chevalley Theorem the fixed subring $A^G$ is isomorphic to the original ring $A$ (both being commutative polynomial rings), and early noncommutative generalizations of the Shephard-Todd-Chevalley Theorem focused on this property. S.P. Smith \cite{Sm1} showed that if $G$ is a finite group acting on the first Weyl algebra
$A= A_1(k)$ then $A^G$ is isomorphic to $A$ if and only if $G = \{ 1 \}$. Alev and Polo \cite{AP} extended Smith's result to the higher Weyl algebras, and showed further that if ${\mathfrak g}$ and ${\mathfrak g}'$ are two
semisimple Lie algebras, and $G$ is a finite group of
algebra automorphisms of the universal enveloping algebra $U({\mathfrak g})$ such that
$U({\mathfrak g})^G \cong U({\mathfrak g}')$, then $G$
is trivial and ${\mathfrak g}\cong {\mathfrak g}'$. The preceding results were attributed to the ``rigidity" of noncommutative algebras, and these early results suggested that there is no noncommutative analogue of the Shephard-Todd-Chevalley Theorem. However, as we shall see, in the case that the algebra $A$ is graded there are other ways to generalize the Shephard-Todd-Chevalley Theorem.
We begin with an illustrative noncommutative example.
\begin{example}
\label{classicalref}
Let $A=k_{-1}[x,y]$ be the skew polynomial ring with the relation $yx=-xy$; the ring $A$ is an AS regular algebra of dimension 2. Let $G= \langle g \rangle $ be the cyclic group generated by the graded automorphism $g$, where $g(x) = \lambda_n x$ and $g(y) = y$, for
$\lambda_n$ a primitive $n$-th root of unity. The linear map $g$ acting on $V=A_1$ is a classical reflection. The fixed ring ${A}^{G} = k \langle x^n, y \rangle = k_{(-1)^n}[x,y]$ is isomorphic to $A$ when $n$ is odd, but not when $n$ is even (when it is a commutative polynomial ring). However, $A^G$ is
AS regular for all $n$. This example suggests that a reasonable generalization of the Shephard-Todd-Chevalley Theorem is that $G$ should be thought of as a ``reflection group" when $A^G$ is AS regular, rather than when $A^G$ is isomorphic to $A$. When $A$ is a commutative polynomial ring these two conditions coincide: $A^G$ is AS regular if and only if $A^G \cong A$.
\end{example}
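The isomorphism ${A}^{G} = k \langle x^n, y \rangle \cong k_{(-1)^n}[x,y]$ rests on the relation $y\,x^n = (-1)^n x^n y$ in $k_{-1}[x,y]$. The following Python sketch (our own illustration, not part of the original survey) verifies this relation by normal-ordering words with the single rewrite rule $yx = -xy$ and tracking the sign:

```python
# Minimal sketch: normal-order words in k_{-1}[x,y] using yx = -xy,
# tracking the sign picked up by each adjacent swap.  This verifies
# y * x^n = (-1)^n * x^n * y, the relation behind A^G ≅ k_{(-1)^n}[x,y].
def normal_order(word):
    """Return (sign, normal form) after moving all x's left past y's."""
    sign, letters = 1, list(word)
    changed = True
    while changed:
        changed = False
        for i in range(len(letters) - 1):
            if letters[i] == 'y' and letters[i + 1] == 'x':
                letters[i], letters[i + 1] = 'x', 'y'
                sign = -sign  # each swap uses yx = -xy
                changed = True
    return sign, ''.join(letters)

for n in range(1, 6):
    # moving y past n copies of x costs n sign flips
    assert normal_order('y' + 'x' * n) == ((-1) ** n, 'x' * n + 'y')
print("y x^n = (-1)^n x^n y verified for n = 1..5")
```

In particular the generators $x^n$ and $y$ of the fixed ring commute when $n$ is even and anticommute when $n$ is odd, matching the two cases of the example.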
\begin{definition} We call a finite group $G$ of graded automorphisms of an AS regular algebra $A$ a {\it reflection group for $A$} if the fixed subring $A^G$ is an AS regular algebra.
\end{definition}
In our terminology the classical reflection groups are reflection groups for\linebreak $k[x_1, \dots, x_n]$. The next example demonstrates that a classical reflection group will not always produce an AS regular invariant subring when acting on some other AS regular algebra.
\begin{example}
\label{transposition} Again, let $A=k_{-1}[x,y]$ be the skew polynomial ring with the relation $yx=-xy$.
The transposition $g$ that interchanges $x$ and $y$ induces a graded automorphism of $A$, and $g$ generates the symmetric group $S_2$, which is a classical reflection group. One set of generators for the fixed ring $A^{\langle g \rangle}$ is $x+y$ and $x^3 +y^3$. Here $xy$ is not fixed, as it is in the commutative case, and the invariant $x^2 + y^2 = (x + y)^2$ is not a generator. The generators $x+y$ and $x^3 +y^3$ are not algebraically independent, and the algebra they generate is not an AS regular algebra. However, as we shall see (Example \ref{transagain}), the fixed ring is AS Gorenstein, and it can be viewed as a hypersurface in an AS regular algebra of dimension 3.
\end{example}
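Two of the computations in this example, that $(x+y)^2 = x^2 + y^2$ in $k_{-1}[x,y]$ and that $xy$ is not fixed since $g(xy) = yx = -xy$, can be checked with a small model of the skew polynomial ring. The dictionary encoding below is our own illustrative choice, not notation from the survey:

```python
# Sketch of k_{-1}[x,y]: elements are dicts {(a, b): coeff} for the
# normal form x^a y^b, with the product rule
#   x^a y^b * x^c y^d = (-1)^(bc) x^(a+c) y^(b+d)
# coming from yx = -xy.
def mul(f, h):
    out = {}
    for (a, b), s in f.items():
        for (c, d), t in h.items():
            key = (a + c, b + d)
            out[key] = out.get(key, 0) + (-1) ** (b * c) * s * t
    return {k: v for k, v in out.items() if v}

x, y = {(1, 0): 1}, {(0, 1): 1}
s1 = {(1, 0): 1, (0, 1): 1}            # x + y, fixed by the transposition
# (x + y)^2 = x^2 + xy + yx + y^2 = x^2 + y^2, since yx = -xy:
assert mul(s1, s1) == {(2, 0): 1, (0, 2): 1}
# xy is NOT fixed: the transposition sends xy to yx = -xy.
assert mul(y, x) == {(1, 1): -1} and mul(y, x) != mul(x, y)
print("(x+y)^2 = x^2 + y^2 and g(xy) = -xy in k_{-1}[x,y]")
```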
Example \ref{transposition} shows that when $A$ is noncommutative we need a different notion of ``reflection" to obtain an AS regular fixed subring, as the eigenvalues of the linear map $g$ no longer control the AS regularity of the fixed subring. Our results suggest that it is the trace function, defined below, that determines whether a linear map is a ``reflection". In Example \ref{toshowtraces} below we will show that the classical reflection $g$ in Example \ref{transposition} is not a reflection in this new sense, while the automorphism $g$ of Example \ref{classicalref} is a reflection (under both the classical definition and our new definition).
\begin{definition} Let $A$ be a graded algebra with $A_j$ denoting the elements of degree $j$. The {\it trace function} of a graded automorphism $g$ acting on $A$ is defined to be the formal power series
$$Tr_A(g,t) = \sum_{j=0}^{\infty} trace(g|_{A_j})\; t^j,$$
where $trace(g|_{A_j})$ is the usual trace of the linear map $g$ restricted to $A_j$.
\end{definition}
In this setting, by \cite[Theorem 2.3(4)]{JiZ} $Tr_A(g,t)$ is a rational function of the form
$1/e_g(t)$, where $e_g(t)$ is a polynomial in $k[t]$, and the zeroes of $e_g(t)$ are all roots of unity \cite[Lemma 1.6(e)]{KKZ1} (in the case $A=k[x_1, \dots, x_n]$, the roots of $e_g(t)$ are the inverses of the eigenvalues of $g$). The next proposition shows that trace functions can be used in computing fixed subrings, giving a version of the classical Molien's Theorem. Knowing the Hilbert series of the subring of invariants is very useful in computing it. (In the Hopf algebra action case of the theorem below, see \cite[Definition 2.1.1]{Mon} for the definition of the integral).
\begin{proposition}[{\bf{Molien's Theorem}}] \label{Molien} \cite[Lemma 5.2]{JiZ}, \cite[Lemma 7.3]{KKZ2}. The Hilbert series of the fixed subring $A^G$ is
$$H_{A^G}(t) = \frac{1}{|G|} \sum_{g \in G} Tr_A(g,t).$$ Similarly, for a semisimple Hopf algebra $H$ acting on $A$ with integral $\int$ that has $\epsilon(\int) = 1 \in k$, the Hilbert series of the fixed subring $A^H$ is $H_{A^H}(t) = Tr_A(\int,t)$.
\end{proposition}
In our setting it is the order of the pole of $Tr_A(g,t)$ at $1$, rather than the eigenvalues of $g$, that determines whether $g$ is a ``reflection".
\begin{definition} \cite[Definition 1.4]{KKZ1}. Let $A$ be an AS regular algebra with $\GKdim A = n$.
We call a graded automorphism $g$ of $A$ a {\it reflection of $A$} if the trace function of $g$ has the form
$$Tr_A(g,t) = \frac{1}{(1-t)^{n-1} q(t)} \mbox{ where } q(1) \neq 0.$$
\end{definition}
We note that in \cite{KKZ1} we used the term ``quasi-reflection" to distinguish our use of reflection from the usual notion of reflection. Here we will use the term ``reflection" as defined above, and refer to ``classical reflection" when we are referring to a reflection defined in terms of its eigenvalues.
The following examples can be used to justify our definition of reflection, as we want a group generated by ``reflections" to have a fixed subring that is AS regular (i.e. to be a reflection group for $A$).
\begin{example}
\label{toshowtraces}
Let $A= k_{-1}[x,y]$, an AS regular algebra of dimension 2, and in each case let $G= \langle g \rangle $ be the cyclic group generated by the graded automorphism $g$, expressed as a matrix acting on the vector space $A_1 = kx \oplus ky$. Let $\lambda_n$ be a primitive $n$-th root of unity.
\begin{enumerate}
\item As in Example \ref{classicalref}, let ${\displaystyle {g = \mattwo{ \lambda_n & 0\\ 0 & 1}}}$; the automorphism $g$ is a classical reflection of $A$. The trace function is ${\displaystyle Tr_{A}(g ,t) = \frac{1} {(1-t)(1- \lambda_n t)}}$, so that $g$ is a reflection of $A$ under our definition. Furthermore ${A}^{G} = k \langle x^n, y \rangle = k_{(-1)^n}[x,y]$ is AS regular for all $n$.
\item As in Example \ref{transposition}, let ${\displaystyle {g= \mattwo{0 & 1\\1 & 0}}}$. The trace function is ${\displaystyle Tr_{{A}}({g},t) = \frac{1}{1+t^2}}$. As we noted earlier, ${A}^G$ is not AS regular. The automorphism $g$ is a classical reflection, but it is not a reflection of $A$ in our sense (the order of the pole at 1 is not $2-1=1$, and the fixed subring is not AS regular).
\item Let ${\displaystyle {g = \mattwo{0 & -1\\1 & 0}}}$; the automorphism $g$ is not a classical reflection. The trace function is ${\displaystyle Tr_{{A}}({g},t) = \frac{1}{{(1-t)}(1+t)}}$ and ${A}^{G} = k[x^2 + y^2, xy]$ is a commutative polynomial ring so is AS regular. The automorphism $g$ is a reflection of $A$ in our sense, and we called it a ``mystic reflection" to distinguish it from a classical reflection (a reflection such as in (1)).
\end{enumerate}
\end{example}
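For the mystic reflection in case (3), one can check directly that $g$ has order 4, that $x^2+y^2$ and $xy$ are fixed, and that they commute in $k_{-1}[x,y]$, consistent with the fixed subring being the commutative polynomial ring stated above. In the Python sketch below (our own illustration), the formula for $g$ on normal words is derived from $g(x)=y$, $g(y)=-x$, and $yx=-xy$:

```python
# Sketch for the mystic reflection g: x -> y, y -> -x on k_{-1}[x,y].
# Elements are dicts {(a, b): coeff} for x^a y^b; the product rule and
# the induced action g(x^a y^b) = (-1)^(b + ab) x^b y^a follow from yx = -xy.
def mul(f, h):
    out = {}
    for (a, b), s in f.items():
        for (c, d), t in h.items():
            key = (a + c, b + d)
            out[key] = out.get(key, 0) + (-1) ** (b * c) * s * t
    return {k: v for k, v in out.items() if v}

def g(f):
    return {(b, a): (-1) ** (b + a * b) * s for (a, b), s in f.items()}

u = {(2, 0): 1, (0, 2): 1}   # x^2 + y^2
v = {(1, 1): 1}              # xy
assert g(u) == u and g(v) == v          # both claimed invariants are fixed
assert mul(u, v) == mul(v, u)           # and they commute with each other
x = {(1, 0): 1}
assert g(g(g(g(x)))) == x and g(g(x)) != x   # g has order 4
print("x^2 + y^2 and xy are fixed and commute; g has order 4")
```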
Proposition \ref{Molien} is used in computing Example \ref{toshowtraces}. For example, in (3) Molien's Theorem shows us that
$$H_{{A}^{G}}(t) = \frac{1}{4(1-t)^2} + \frac{2}{4(1-t^2)} + \frac{1}{4(1+t)^2} = \frac{1}{(1-t^2)^2},$$
hence the two invariants we have found generate the invariant subring, because they are fixed and the ring they generate has this Hilbert series.
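The partial-fraction identity in the displayed Molien computation can be confirmed symbolically. The following Python check with sympy is only a sanity test of the displayed sum, not part of the original survey:

```python
# Verify the Molien sum for the mystic reflection example:
#   1/(4(1-t)^2) + 2/(4(1-t^2)) + 1/(4(1+t)^2) = 1/(1-t^2)^2.
import sympy as sp

t = sp.symbols('t')
lhs = (sp.Rational(1, 4) / (1 - t) ** 2
       + sp.Rational(1, 2) / (1 - t ** 2)
       + sp.Rational(1, 4) / (1 + t) ** 2)
rhs = 1 / (1 - t ** 2) ** 2
assert sp.simplify(lhs - rhs) == 0
print("Molien sum equals 1/(1-t^2)^2")
```

The right-hand side $1/(1-t^2)^2$ is exactly the Hilbert series of a polynomial ring on two generators of degree 2, matching the generators $x^2+y^2$ and $xy$.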
We have shown that for $A$ a quantum polynomial ring, there are only two kinds of reflections of $A$: classical reflections and new reflections (as in Example \ref{toshowtraces} (3)) that we call ``mystic reflections".
\begin{theorem}\cite[Theorem 3.1]{KKZ1}.
\label{xxthm3.1} Let $A$ be a quantum polynomial ring of
global dimension $n$. If $g\in \Aut(A)$ is a reflection of $A$
of finite order, then $g$ is in one of the following two
cases:
\begin{enumerate}
\item
There is a basis of $A_1$, say $\{b_1,\cdots,b_n\}$,
such that $g(b_j)=b_j$ for all $j\geq 2$ and $g(b_1)
=\lambda_n b_1$ for $\lambda_n$ a root of unity. Hence $g|_{A_1}$ is a classical reflection.
\item
The order of $g$ is $4$ and there is a basis of $A_1$,
say $\{b_1,\cdots,b_n\}$,
such that $g(b_j)=b_j$ for all $j\geq 3$ and $g(b_1)
=i\; b_1$ and $g(b_2)=-i\; b_2$ (where $i^2=-1$). We call such a reflection a {\it mystic reflection}.
\end{enumerate}
\end{theorem}
It is quite possible that other AS regular algebras have different kinds of reflections, as AS regular algebras have been completely classified only in dimensions $\leq 3$.
We conjecture the following generalization of the Shephard-Todd-Chevalley Theorem.
\begin{conjecture}\label{STCconjecture}
Let $A$ be an AS regular algebra and $G$ be a finite group of graded automorphisms of $A$. Then $A^G$ is AS regular if and only if $G$ is generated by reflections of $A$.
\end{conjecture}
We have proved a number of partial results that support this conjecture.
First we note that, due to the following theorem, only one direction of this conjecture needs to be proved when $A$ is noetherian.
\begin{theorem}\cite[Proposition 2.5(b)]{KKZ3}. Let $A$ be a noetherian AS regular algebra, and suppose that $A^G$ is AS regular for every finite group $G$ of graded automorphisms of $A$ that is generated by reflections of $A$. Then Conjecture \ref{STCconjecture} is true.
\end{theorem}
Although we have not proved that if $A^G$ is AS regular, then $G$ is generated by reflections, we have shown that $G$ must contain at least one reflection.
\begin{theorem}\cite[Theorem 2.4]{KKZ1}.
\label{xxthm2.4}
Let $A$ be noetherian and AS regular, and let
$G$ be a finite group of graded automorphisms of $A$. If $A^G$ has
finite global dimension, then $G$ contains a reflection of $A$.
\end{theorem}
We have shown that a number of algebras have no reflections (hence no AS regular fixed algebras). These algebras include: non-PI Sklyanin algebras \cite[Corollary 6.3]{KKZ1}, homogenizations of the universal enveloping algebra of a finite dimensional Lie algebra $\mathfrak{g}$ (\cite[Lemma 6.5(d)]{KKZ1}), and down-up algebras (or any noetherian AS regular algebra of dimension 3 generated by two elements of degree 1) (\cite[Proposition 6.4]{KKZ1}). We will say more about down-up algebras in Section 3.
The skew polynomial ring $k_{p_{ij}}[x_1, \dots, x_n]$ is defined to be the $k$-algebra generated by $x_1, \dots, x_n$ with relations $x_jx_i = p_{ij}x_ix_j$ for all $1 \leq i < j \leq n$ and $p_{ii} =1$.
\begin{theorem}\label{skewed} \cite[Theorem 5.5]{KKZ3}. Let $A =k_{p_{ij}}[x_1, \dots, x_n]$, and let $G$ be a finite group of graded automorphisms of $A$. Then $A^G$ has finite global dimension if and only if $G$ is generated by reflections of $A$ (in which case $A^G$ is again a skew polynomial ring).
\end{theorem}
Theorem \ref{skewed} has been proved using different techniques by Y. Bazlov and A. Berenstein in \cite{BB2}, and we will say more about their results shortly.
\begin{theorem}\cite[Theorem 6.3]{KKZ3}.
Let $A$ be a quantum polynomial ring and let $G$ be a finite abelian group of graded automorphisms of $A$. Then $A^G$ has finite global dimension if and only if $G$ is generated by reflections of $A$.
\end{theorem}
In their seminal paper Shephard and Todd classified the (classical) reflection groups, i.e. the reflection groups for $k[x_1, \dots, x_n]$. When $A$ is a noncommutative AS regular algebra, whether or not a group is a reflection group depends upon the algebra $A$ on which the group acts, and groups different from the classical reflection groups can occur as reflection groups for some noncommutative AS regular algebra.
We present two examples of reflection groups on $k_{p_{ij}}[x_1, \dots, x_n]$ with $p_{ij} = -1$ for all $i \neq j$.
\begin{example}\cite[Example 7.1]{KKZ3}. Let $G$ be the group generated by the mystic reflections
$$ g_1 = \begin{pmatrix}
0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 &1\\
\end{pmatrix}
\text{ and }
g_2 = \begin{pmatrix}
1 & 0 & 0\\
0 & 0 & -1\\
0 & 1 & 0\\
\end{pmatrix}.$$
$G$ is the rotation group of the cube, and is isomorphic to the symmetric group $S_4$. $G$ acts on $k_{-1}[x_1, x_2, x_3]$, where $p_{ij} = -1$ for $i \neq j$, with fixed ring the commutative polynomial ring $k[x_1^2+x_2^2+x_3^2, \; x_1x_2x_3, \; x_1^4+x_2^4+x_3^4]$. Hence under this representation (but not the permutation representation) $S_4$ is a reflection group for $k_{-1}[x_1, x_2, x_3]$.
\end{example}
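That these two matrices generate a group of order $24$ can be confirmed by closing the set under multiplication; the following Python sketch (our own check, with exact integer arithmetic) does so.

```python
def mat_mul(a, b):
    """Multiply two 3x3 integer matrices given as tuples of row tuples."""
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(3))
                       for j in range(3)) for i in range(3))

g1 = ((0, -1, 0), (1, 0, 0), (0, 0, 1))   # 90-degree rotation about the z-axis
g2 = ((1, 0, 0), (0, 0, -1), (0, 1, 0))   # 90-degree rotation about the x-axis

# Close {g1, g2} under right multiplication by the generators; since the
# generated group is finite, this semigroup closure is the whole group G.
G = {g1, g2}
frontier = [g1, g2]
while frontier:
    nxt = []
    for a in frontier:
        for b in (g1, g2):
            c = mat_mul(a, b)
            if c not in G:
                G.add(c)
                nxt.append(c)
    frontier = nxt

print(len(G))  # 24 = |S_4|, the rotation group of the cube
```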
\begin{example} \cite[Example 7.2]{KKZ3}. The binary dihedral groups
$G = BD_{4\ell}$ are generated by the mystic reflections
$$ g_1 = \begin{pmatrix}
0& -\lambda^{-1}\\
\lambda & 0
\end{pmatrix} \text{ and }
{g_2 = \begin{pmatrix}
0 & -1\\
1 & 0
\end{pmatrix}}$$
for $\lambda$ a primitive $2\ell$-th root of unity. These groups act on $A=k_{-1}[x,y]$ with fixed ring $A^G = k[x^{2\ell}+ y^{2\ell}, xy]$, a commutative polynomial ring. Hence the binary dihedral groups are reflection groups for $k_{-1}[x,y]$.
When $\ell= 2$, $G$ is the quaternion group of order 8, which is not a classical reflection group.
\end{example}
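For $\ell = 2$, so that $\lambda = i$ is a primitive $4$th root of unity, the group generated by $g_1$ and $g_2$ can be enumerated exactly; this sketch (our own verification, under the choice $\lambda = i$) recovers a group of order $8$, the quaternion group.

```python
from sympy import I, ImmutableMatrix

lam = I  # a primitive 4th root of unity, i.e. the case ell = 2
g1 = ImmutableMatrix([[0, -lam**-1], [lam, 0]])   # equals [[0, i], [i, 0]]
g2 = ImmutableMatrix([[0, -1], [1, 0]])

# Semigroup closure of {g1, g2}; the generated group is finite, so the
# closure under multiplication by generators is the whole group.
G = {g1, g2}
frontier = [g1, g2]
while frontier:
    nxt = []
    for a in frontier:
        for b in (g1, g2):
            c = a * b
            if c not in G:
                G.add(c)
                nxt.append(c)
    frontier = nxt

print(len(G))  # 8: BD_8 is the quaternion group of order 8
```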
Other examples of reflection groups for $k_{-1}[x_1, \dots, x_n]$ include the infinite family $M(n,\alpha, \beta)$ (\cite[Section 7]{KKZ3}); there are infinite families of these groups that are not isomorphic as groups to classical reflection groups. Bazlov and Berenstein found this same class of groups occurring in their work \cite{BB1} related to Cherednik algebras, and in \cite{BB2} gave a different proof of Theorem \ref{skewed} by introducing a non-trivial correspondence between reflection groups $G$ for $k_{p_{ij}}[x_1, \dots, x_n]$ and classical reflection groups $G'$. In particular, they showed that for $G$ a reflection group for $k_{p_{ij}}[x_1, \dots, x_n]$ the group algebra $k[G]$ is isomorphic to the group algebra $k[G']$ (as algebras) for $G'$ a classical reflection group, even though $G$ and $G'$ are not isomorphic as groups.
Actions of noncocommutative Hopf algebras can also produce AS regular fixed subrings. Hence we expand our notion of reflection group to include ``reflection Hopf algebras" for a given AS regular algebra.
\begin{definition} We call a Hopf algebra $H$ a {\it reflection Hopf algebra for $A$} if $A^H$ is AS regular.
\end{definition}
The group algebra of a classical reflection group is a reflection Hopf algebra for $k[x_1, \dots,x_n]$. Next we present an example of a Hopf algebra that is not commutative or cocommutative but has an AS regular ring of invariants.
\begin{example}\cite[Section 7]{KKZ2}.
The smallest dimensional semisimple Hopf algebra $H$ that is not isomorphic to a group algebra or its dual is the 8-dimensional semisimple algebra $H_8$, defined by Kac and Paljutkin \cite{KP} (see also \cite{Ma1}).
As an algebra {$H_8$} is generated by {$x, y, z$} with the following relations:
$$x^2 = y^2 =1, \;\; xy=yx,\;\; zx=yz,$$
$$ zy=xz,\;\; z^2= \frac{1}{2}(1+x+y-xy). $$
The coproduct, counit and antipode are given as follows:
$$\Delta(x) = x\otimes x, \;\;\; \Delta(y)=y\otimes y,$$
$$ \Delta(z) = \frac{1}{2}(1\otimes 1 + 1\otimes x + y\otimes 1 - y\otimes x)(z\otimes z), $$
$$ \epsilon(x) = \epsilon(y) = \epsilon(z) =1, \quad S(x)=x^{-1},\; S(y)=y^{-1},\; S(z)=z.$$
${H_8}$ has a unique irreducible 2-dimensional representation on
$k u \oplus k v$ given by
{$$ x \mapsto
\begin{pmatrix}-1 & 0\\ 0 & 1
\end{pmatrix}, \quad
y \mapsto \begin{pmatrix}1 & 0\\ 0& -1
\end{pmatrix}, \quad
z \mapsto \begin{pmatrix}0 & 1 \\1& 0
\end{pmatrix}.$$}
\begin{enumerate}
\item Let ${A = k\langle u, v \rangle / \langle u^2-v^2 \rangle}$ ($A$ is isomorphic to $k_{-1}[u,v]$). Then $H_8$ acts on $A$ and the fixed subring is
${A}^{H_8} = k[u^2, (uv)^2 - (vu)^2],$
a commutative polynomial ring, and so ${H_8}$ is a reflection Hopf algebra for ${A}$.
\item Let ${A =k\langle u, v \rangle / \langle vu \pm iuv \rangle}$. Then $H_8$ acts on $A$ and the fixed subring is
${A}^{H_8} = k[u^2v^2, u^2 + v^2]$,
a commutative polynomial ring. Hence ${H_8}$ is a reflection Hopf algebra for ${A}$.
\end{enumerate}
\end{example}
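It is straightforward to verify that the three matrices above satisfy the defining algebra relations of $H_8$; the following sympy sketch (our own check, for illustration) confirms them in the 2-dimensional representation.

```python
from sympy import Matrix, Rational, eye

x = Matrix([[-1, 0], [0, 1]])
y = Matrix([[1, 0], [0, -1]])
z = Matrix([[0, 1], [1, 0]])
one = eye(2)

# The defining relations of H_8, checked in the 2-dimensional representation.
assert x**2 == one and y**2 == one
assert x*y == y*x
assert z*x == y*z and z*y == x*z
assert z**2 == Rational(1, 2)*(one + x + y - x*y)
print("all H_8 relations hold in the 2-dimensional representation")
```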
Furthermore, actions of non-semisimple Hopf algebras can produce AS regular fixed algebras.
\begin{example}\cite[Section 3.2.1]{All}.
The Sweedler algebra ${H(-1)}$ is generated by ${g}$ and ${x}$ with algebra relations:
$$ g^2=1,\;\; x^2 = 0, \;\; xg = - gx,$$
and coproduct, counit, and antipode:
$$ \Delta(g) = g\otimes g, \;\;\;\; \Delta(x) = g\otimes x + x\otimes 1, $$
$$ \epsilon(g) = 1, \; \epsilon(x) = 0, \quad S(g)=g,\; S(x)=-g x.$$
Then ${H(-1)}$ acts on the commutative polynomial algebra ${k[u,v]}$ as
{$$ x \mapsto
\begin{pmatrix} 0 & 1\\ 0 & 0
\end{pmatrix}, \quad
g \mapsto \begin{pmatrix}1 & 0\\ 0& -1
\end{pmatrix}
$$}
with fixed subring ${k[u,v]}^{H(-1)} = k[u,v^2],$ a commutative polynomial ring. Hence $H(-1)$ is a reflection Hopf algebra for $k[u,v]$.
\end{example}
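Again the matrices above satisfy the defining relations of $H(-1)$; this short sketch (ours, for illustration) checks them.

```python
from sympy import Matrix, eye, zeros

g = Matrix([[1, 0], [0, -1]])
x = Matrix([[0, 1], [0, 0]])

# The defining relations of the Sweedler algebra H(-1), in this representation.
assert g**2 == eye(2)
assert x**2 == zeros(2, 2)
assert x*g == -g*x
print("all H(-1) relations hold")
```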
In work in progress we have shown that (using the notation of \cite{Ma2}) the Hopf algebras $H= A_{4m}$ (for $m$ odd) and $H= B_{4m}$ are reflection Hopf algebras for $A=k_{-1}[x,y]$; in both cases as an algebra (but not as a Hopf algebra) $H$ is isomorphic to $k[D_{4m}]$, the group algebra of the dihedral group of order $4m$, a classical reflection group. Further $A_{12}$ acts on a 3-dimensional (non-PI) AS regular algebra $A$ with (non-PI) regular fixed ring. These examples, along with the examples of group algebras acting on skew-polynomial algebras, suggest that the algebra structure of $H$ and its relation to the group algebra of a classical reflection group may be related to conditions that guarantee that the Hopf algebra is a reflection Hopf algebra. We also have some examples of commutative Hopf algebras that are reflection Hopf algebras; in this case the algebra structure of $H$ is not informative.
While we have made Conjecture \ref{STCconjecture} on the properties of a group $G$ that make it a reflection group for $A$, we have made no conjectures on the properties of a general Hopf algebra $H$ that make it a reflection Hopf algebra for $A$. One can still take trace functions of elements in $H$, but one does not always have a nice set of elements for which the trace functions should be computed. Moreover, properties of the trace functions that were used in proving results for groups are not true for the elements of the Hopf algebra $H$. For example, in the group case we showed that if $Tr_A(g,t) = Tr_A(1_G, t)$ then $g = 1_G$ (\cite[Proposition 1.8]{KKZ1}). However, in the Hopf algebra case, one can add the difference of any two elements with the same trace functions without changing the trace function of an element, so we do not have the strong uniqueness of trace functions that we had for groups. Characterizing reflection Hopf algebras for AS regular algebras remains an interesting unsolved problem.
\begin{question}
For a Hopf algebra $H$ acting on an AS regular algebra $A$, when is $H$ a reflection Hopf algebra for $A$?
\end{question}
\section{Artin-Schelter Gorenstein subrings of invariants}
Artin-Schelter regular invariant subrings occur under only very special circumstances, but, as H. Bass has noted, Gorenstein rings are ubiquitous, and many of the interesting fixed subrings are Gorenstein rings. Twenty years after the Shephard-Todd-Chevalley Theorem was proved, Watanabe \cite{W1} showed that if $G$ is a finite subgroup of ${\rm SL}_n(k)$, then $k[x_1, \dots, x_n]^G$ is a Gorenstein ring, and, in \cite{W2} he showed that the converse is true if $G$ contains no (classical) reflections. In our setting, where $A$ is an AS regular algebra, a reasonable generalization of the condition that $A^G$ is a Gorenstein ring is that $A^G$ is an AS Gorenstein algebra. Next, one must generalize the notion of ``determinant equal to 1". This generalization was accomplished by P. J{\o}rgensen and J. Zhang \cite{JoZ} with their introduction of the notion of the homological determinant of a graded automorphism $g$ of $A$.
The homological determinant $\hdet$ is a group homomorphism $$\hdet: \quad \Aut(A) \rightarrow k^\times$$ that arises in local cohomology; in the case that $A=k[x_1, \dots, x_n]$ it is the determinant (or its inverse, depending upon how $G$ acts on $A$). The original definition of $\hdet$ is given in \cite[Section 2]{JoZ}, and, in Definition \ref{defhomdet} below, we will give the general definition in the context of Hopf algebra actions. Fortunately, in many circumstances $\hdet$ can be computed without using the definition.
When $A$ is AS regular, the conditions of the following theorem are
satisfied by \cite[Proposition 3.3]{JiZ} and \cite[Proposition 5.5]{JoZ},
and $\hdet g$ can be computed from the trace function of $g$, using the
following result.
\begin{lemma}
\label{xxlem1.4}
\label{hdetbytrace}
\cite[Lemma 2.6]{JoZ}.
Let $A$ be noetherian and AS Gorenstein, and let $g\in \Aut(A)$.
If $g$ is $k$-rational in the sense of \cite[Definition 1.3]{JoZ},
then the rational function ${ Tr}_A(g,t)$ has the form
$${ Tr}_A(g,t) = (-1)^n (\hdet g)^{-1} t^{-\ell}
+ {\rm higher~terms}$$
when it is written as a Laurent series in $t^{-1}$.
\end{lemma}
Our results have shown that the condition that all elements of the group have homological determinant equal to 1 plays the role in our noncommutative setting that the condition that the group is a subgroup of ${\rm SL}_n(k)$ plays in classical invariant theory.
Using homological determinant to replace the usual determinant, J{\o}rgensen and Zhang proved the following generalization of Watanabe's Theorem.
\begin{theorem}\label{watanabe}\cite[Theorem 3.3]{JoZ}. If $G$ is a finite group of graded automorphisms of an AS regular algebra $A$ with $\hdet(g) = 1$ for all $g \in G$, then $A^G$ is AS Gorenstein.
\end{theorem}
In the classical case the symmetric group $S_n$, acting as permutations, is a reflection group for $k[x_1, \dots, x_n]$, but we have already seen (Example \ref{transposition}) that this is not the case for $k_{-1}[x,y]$. However, next we note that if $A= k_{-1}[x_1, \dots, x_n]$ (where for each $i \neq j$ we have the relations $x_j x_i = -x_i x_j$), and if $S_n$ acts on $A$ as permutations, then all subgroups $G$ of $S_n$ have trivial homological determinant, and so produce AS Gorenstein invariant subrings. It follows that the fixed subring in Example \ref{transposition} is AS Gorenstein.
\begin{example} \cite[Theorem 5.1]{KKZ5}.
Let ${g}$ be a 2-cycle and let {$A = k_{-1}[x_1, \ldots, x_n]$}; then
$$Tr_{{A}}({g},t) = \frac{1}{(1+t^2)(1-t)^{n-2}}$$
$$= (-1)^n \frac{1}{t^n} + \text{ higher terms, }$$
so $\hdet {g} = 1$, and hence by Theorem \ref{watanabe}, for all groups {$G$} of $n \times n$ permutation matrices, ${A}^{G}$ is
AS Gorenstein. This, of course, is not true for permutation actions on a commutative polynomial ring -- e.g.
${k[x_1, x_2, x_3, x_4]}^{\langle (1,2,3,4) \rangle}$
is not Gorenstein, while
${k_{-1}[x_1, x_2, x_3, x_4]}^{\langle (1,2,3,4) \rangle}$
is AS Gorenstein.
\end{example}
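The leading term of this Laurent expansion can be extracted symbolically, in the spirit of Lemma \ref{hdetbytrace}; the following sketch (our own check, for small $n$) confirms that the coefficient of $t^{-n}$ is $(-1)^n$, hence $\hdet g = 1$.

```python
from sympy import symbols, limit

t, u = symbols('t u')

for n in range(2, 7):
    Tr = 1/((1 + t**2)*(1 - t)**(n - 2))
    # Substitute t = 1/u and read off the leading term of the Laurent
    # series in t^{-1}: it is (-1)^n * t^(-n), so (hdet g)^(-1) = 1.
    lead = limit(Tr.subs(t, 1/u) / u**n, u, 0)
    assert lead == (-1)**n
print("leading term is (-1)^n t^(-n) for n = 2..6, hence hdet g = 1")
```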
The invariants of $ k_{-1}[x_1, \dots, x_n]$ under permutation actions are studied in detail in \cite{KKZ5}, producing an interesting contrast to the classical case. As one example, these groups of permutations contain no reflections of $k_{-1}[x_1, \dots, x_n]$ \cite[Lemma 1.7(4)]{KKZ5}, and so are ``small groups", while the permutation representation of $S_n$ is a classical reflection group.
A theorem of R. Stanley \cite[Theorem 4.4]{Sta} states that the fixed subring $B= k[x_1, \dots,x_n]^G$ is Gorenstein if and only if the Hilbert series of $B$ satisfies the functional equation
$H_B(t) = \pm t^{-m} H_B(t^{-1})$ for some integer $m$. J{\o}rgensen and Zhang extended that result to the more general setting of finite groups acting on AS regular algebras.
\begin{theorem} \cite[Proposition 3.8]{KKZ2}. \label{stanley} Let $A$ be an AS regular algebra that satisfies a polynomial identity, and $G$ be a finite group of graded automorphisms of $A$. Then $B= A^G$ is AS Gorenstein if and only if the Hilbert series of $B$ satisfies the functional equation $H_B(t) = \pm t^{-m} H_B(t^{-1})$ for some integer $m$.
\end{theorem}
Theorem \ref{stanley} is also true under more general (but technical) conditions (see \cite{JoZ} for details).
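As a concrete illustration (our own check), the fixed ring of Example \ref{toshowtraces}(3) has Hilbert series $H_B(t) = 1/(1-t^2)^2$, and the functional equation holds with sign $+$ and $m = 4$, the sum of the degrees of the two generators:

```python
from sympy import symbols, simplify

t = symbols('t')

# Hilbert series of the fixed ring k[x^2 + y^2, xy] computed via Molien above.
H = 1/(1 - t**2)**2

# Stanley's functional equation H_B(t) = +- t^(-m) H_B(1/t) holds here with
# epsilon = +1 and m = 4.
print(simplify(H - t**(-4) * H.subs(t, 1/t)))  # 0
```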
The homological (co)determinant and Theorems \ref{watanabe} and \ref{stanley} were extended to actions by semisimple Hopf algebras in \cite{KKZ2}.
In \cite{CKWZ1} the homological (co)determinant is defined for any finite dimensional Hopf algebra. Let $A$ be AS Gorenstein of injective dimension $d$, and let $H$ be a finite dimensional Hopf algebra acting on $A$. Let $\mathfrak{m}$ denote the maximal graded ideal of $A$ consisting of all elements of positive degree, and let $H_{\mathfrak{m}}^d(A)$ be the $d$-th local cohomology of $A$ with respect to $\mathfrak{m}$. The lowest degree nonzero homogeneous component of $H_{\mathfrak{m}}^d(A)$ is 1-dimensional; let $\mathfrak{e}$ be a basis element. Then there is an algebra homomorphism $\eta: H \rightarrow k$ such that the right $H$-action on $H_{\mathfrak{m}}^d(A)^*$ satisfies $\mathfrak{e}\cdot h = \eta(h)\mathfrak{e}$ for all $h \in H$.
\begin{definition} \cite[Definition 3.3]{KKZ2} and \cite[Definition 1.7]{CKWZ1}. \label{defhomdet} Retaining the notation above,
the composition $\eta \circ S: H \rightarrow k$ is called the {\it homological determinant of the $H$-action on $A$}, and is denoted by $\hdet_H A$. We say that {\it $\hdet_H A$ is trivial} if $\hdet_H A = \epsilon$, the counit of $H$.
\end{definition}
Dually, if $K$ coacts on $A$ on the right, then $K$ coacts on $k\mathfrak{e}$ and $\rho(\mathfrak{e}) = \mathfrak{e} \otimes {\rm D}^{-1}$ for some grouplike element ${\rm D}$ in $K$.
\begin{definition}\cite[Definition 6.2]{KKZ2} and \cite[Definition 1.7]{CKWZ1}. Retaining the notation above,
the {\it homological codeterminant of the $K$-coaction on $A$} is defined to be $\operatorname{hcodet}_K A = {\rm D}$. We say that {\it $\operatorname{hcodet}_K A$ is trivial} if $\operatorname{hcodet}_K A = 1_K$.
\end{definition}
The homological determinant $\hdet_H A$ is trivial if and only if the homological codeterminant $\operatorname{hcodet}_{H^{\circ}} A$ is trivial (\cite[Remark 6.3]{KKZ2}).
Watanabe's Theorem was proved for semisimple Hopf actions on AS regular algebras in
\cite[Theorem 3.6]{KKZ2}; it was also shown for all finite dimensional Hopf actions on AS regular algebras of dimension 2 with trivial homological determinant in \cite[Proposition 0.5]{CKWZ1}.
\begin{theorem}\label{hopfwat} \cite[Theorem 3.6]{KKZ2}.
Let $H$ be a semisimple Hopf algebra acting on an AS regular algebra $A$ with trivial homological determinant. Then $A^H$ is AS Gorenstein.
\end{theorem}
A partial converse to Theorem \ref{hopfwat} is given in \cite[Theorem 4.10]{KKZ2}; in particular if $G$ is a finite group containing no reflections (i.e. a ``small group"), then $A^G$ is AS Gorenstein if and only if $\hdet_G(A)$ is trivial, recovering the classical result in \cite{W2}.
If $G$ is a finite subgroup of ${\rm SL}_n(k)$ acting on $A = k[x_1, \dots, x_n]$ then $G$ contains no reflections (since when $g$ is a classical reflection, $\det(g)$ is a root of unity $\neq 1$), so $A^G$ is not a polynomial ring. We obtain a similar result for the homological determinant.
\begin{theorem} \label{trivialnotregular} \cite[Theorem 2.3]{CKWZ1}. Let $H$ be a semisimple Hopf algebra, and let $A$ be a noetherian AS regular algebra equipped with an $H$-algebra action. If $A^H \neq A$ and $\hdet_H(A)$ is trivial, then $A^H$ is not AS regular.
\end{theorem}
Since $\hdet$ is a homomorphism into the abelian group $k^{\times}$, it is trivial on any group $G$ with $[G,G] = G$ (e.g. a nonabelian simple group), so it follows from Theorem \ref{trivialnotregular} that $A^G$ will never be AS regular for such $G$. Hence such groups are never reflection groups for any AS regular algebra.
We also obtain a version of Stanley's Theorem for semisimple Hopf actions.
\begin{theorem} \cite[Proposition 3.8]{KKZ2}. \label{hopfstanley} Let $A$ be an AS regular algebra that satisfies a polynomial identity, and let $H$ be a semisimple Hopf algebra acting on $A$. Then $B= A^H$ is AS Gorenstein if and only if the Hilbert series of $B$ satisfies the functional equation $H_B(t) = \pm t^{-m} H_B(t^{-1})$ for some integer $m$.
\end{theorem}
In 1884 Felix Klein (\cite{Kl1} \cite{Kl2} \cite{Su}) classified the finite subgroups of ${\rm SL}_2(k)$ and calculated the invariants $k[x,y]^G$, the ``Kleinian singularities", that are important in commutative algebra, algebraic geometry, and representation theory. These rings of invariants are hypersurfaces in $k[x,y,z]$, and the singularity is of type A, D, or E corresponding to the type of the McKay quiver of the irreducible representations of the group $G$. The paper \cite{CKWZ1} begins the analogous project for any AS regular algebra of dimension 2, finding all finite dimensional Hopf algebras $H$ that act on $A$, with our standing assumptions (\ref{standing}), and having trivial homological determinant; such a Hopf algebra is called a {\it quantum binary polyhedral group}. The AS regular algebras of dimension 2 generated in degree one are isomorphic to:
$$k_J[u,v]:=k\langle u,v\rangle/ (vu-uv-u^2)\;\; {\text{ or}}$$
$$\qquad k_q[u,v]:=k\langle u,v\rangle/(vu-quv).$$
The groups that act on one of these algebras with trivial homological determinant are the cyclic groups, the symmetric group $S_2$, the classical binary polyhedral groups, as well as the dihedral groups (which classically are reflection groups).
The additional semisimple Hopf algebras that occur are the dual of the group algebra of the dihedral group of order 8, the duals of various finite Hopf quotients of the coordinate Hopf algebra $\mathcal{O}_q({\rm SL}_2(k))$, Hopf algebras that have been studied by \cite{BN}, \cite{Ma2}, \cite{Mu}, and \cite{Ste}.
In addition, there are actions by non-semisimple Hopf algebras: the dual of the generalized Taft Hopf algebras $T_{q,\alpha,n}^\circ$, and Hopf algebras whose duals are extensions of the duals of various group algebras by the duals of certain quantum groups. The table below (reproduced from \cite[Table 1]{CKWZ1}) gives the corresponding AS regular algebras $A$ of dimension 2 and the finite dimensional Hopf algebras $H$ acting on $A$ with trivial homological determinant.\\
\noindent \underline{Notation.} [$\tilde{\Gamma}$, $\Gamma$, $C_n$, $D_{2n}$]
Let $\tilde{\Gamma}$ denote a finite subgroup of ${\rm SL}_2(k)$, $\Gamma$
denote a finite subgroup of ${\rm PSL}_2(k)$, $C_n$ denote a cyclic group of
order $n$, and $D_{2n}$ denote a dihedral group of order $2n$. Let
$\text{o}(q)$ denote the order of $q$, for $q \in k^{\times}$ a
root of unity. We write $A= k(U)/I$, where $U=ku \oplus kv$, and $I$ is the two-sided ideal generated by the relation.\\
\centerline{The quantum binary polyhedral groups $H$ and the AS regular algebras $A$ they act upon:}
\[
\begin{array}{|l|l|}
\hline
\text{AS regular algebra $A$ gldim 2} & \text{finite dimensional Hopf algebra(s) $H$ acting on $A$}\\
\hline
\hline
& \\
k[u,v] & k\tilde{\Gamma}\\
\hline
&\\
k_{-1}[u,v] & kC_n~\text{for}~n\geq 2; \hspace{.2in} kS_2, \hspace{.2in} kD_{2n};
\\
& (kD_{2n})^{\circ}; \hspace*{.2in} \mathcal{D}(\tilde{\Gamma})^{\circ} \text{ for } \tilde{\Gamma} \text{ nonabelian}\\
\hline
&\\
k_q[u,v], ~q \text{ root of 1, } \,q^2 \neq 1 & \\
&\\
{\text{ if $U$ non-simple}} & kC_n \text{~for~} n\geq 3; \hspace{.2in} (T_{q, \alpha, n})^{\circ};\\
\text{ if $U$ simple, $o(q)$ odd} & H \text{ with } 1 \to (k\tilde{\Gamma})^{\circ} \to H^{\circ} \to \mathfrak{u}_{q}(\mathfrak{sl}_2)^{\circ} \to 1;\\
\text{ if $U$ simple, $o(q)$ even, }\, q^4 \neq 1& H \text{ with } 1 \to (k\Gamma)^{\circ} \to H^{\circ}
\to \mathfrak{u}_{2,q}(\mathfrak{sl}_2)^{\circ} \to 1;\\
\text{ if $U$ simple}, ~q^4 =1 &
\begin{tabular}{l}
\hspace{-.13in} $H$ \text{ with } $1 \to (k\Gamma)^{\circ} \to H^{\circ}
\to \mathfrak{u}_{2,q}(\mathfrak{sl}_2)^{\circ} \to 1$ \\
\hspace{-.13in} $H$ \text{ with } $1 \to (k\Gamma)^{\circ} \to H^{\circ}
\to \frac{\mathfrak{u}_{2,q}(\mathfrak{sl}_2)^{\circ}}{(e_{12}-e_{21} e_{11}^2)}\to 1$
\end{tabular}
\\
&\\
\hline
&\\
k_q[u,v], ~q \text{ not root 1} & kC_n, n \geq 2\\
\hline
&\\
k_J[u,v] & kC_2 \\
\hline
\end{array}
\]
~\\
\centerline{Table 1 (\cite[Table 1]{CKWZ1})}
~\\
An interesting next question is to determine when a theorem of Auslander is true in the noncommutative setting. Recall that a group is called {\it small} if it contains no classical reflections; for example, subgroups of ${\rm SL}_2(k)$ are small.
\begin{theorem}{\rm({\bf Auslander's Theorem})} \cite[Proposition 3.4]{Aus} {\rm and} \cite[Theorem 5.15]{LW}. Let $G$ be a small finite subgroup of ${\rm GL}_n(k)$ acting linearly on $A=k[x_1, \dots, x_n]$. Then the skew group ring $A\#G$ is naturally isomorphic as a graded algebra to
the endomorphism ring $\Hom_{A^G}(A,A)$.
\end{theorem}
Some generalizations of Auslander's Theorem were proved by I. Mori and K. Ueyama for groups with trivial homological determinant. They show \cite[Theorem 3.7]{MU} that if $G$ is ``ample for $A$" in their sense, then $A\#G$ and $\Hom_{(A^G)^{op}}(A,A)$ are isomorphic as graded algebras, and they give a condition \cite[Corollary 3.11 (3)]{MU} that can be checked for the groups with trivial $\hdet$ acting on AS regular algebras of dimension 2. They relate Auslander's Theorem to Ueyama's notion of graded isolated singularity \cite{U}.
In \cite{CKWZ2} Auslander's Theorem is proved when $A$ has dimension 2 and $H$ is a semisimple Hopf algebra acting on $A$ under hypotheses (\ref{standing}) with trivial $\hdet_H(A)$. It is conjectured that Auslander's Theorem holds for noetherian AS regular algebras $A$ in any dimension.
\begin{conjecture} If $A$ is an AS regular noetherian algebra and $H$ is a semisimple Hopf algebra acting on $A$ under hypotheses (\ref{standing}) with trivial homological determinant, then
$A\#H$ is naturally isomorphic to $\Hom_{(A^H)^{op}}(A,A)$ as graded algebras.
\end{conjecture}
Auslander's theorem was used to relate finitely generated projective modules over the skew group ring $k[x,y]\#G$ and maximal Cohen-Macaulay modules over $k[x,y]^G$ when $G$ is a finite subgroup of ${\rm SL}_2(k)$. Furthermore, a theorem of Herzog \cite{H} states that the indecomposable maximal Cohen-Macaulay $k[x,y]^G$-modules are precisely the indecomposable direct summands of $k[x,y]$ as a $k[x,y]^G$-module. These and other results of the McKay correspondence are explored in
\cite{CKWZ2} for $A$ a noetherian AS regular algebra of dimension 2 and $H$ a semisimple Hopf algebra acting on $A$ under hypotheses (\ref{standing}) with trivial homological determinant. We call a graded $A$-module $M$ an {\it initial} $A$-module if it is generated by $M_0$, and $M_i = 0$ for $i < 0$. Among the results of \cite{CKWZ2} is the following theorem.
\begin{theorem} \cite{CKWZ2} Let $A$ be a noetherian AS regular algebra of dimension 2 and let $H$ be a semisimple Hopf algebra acting on $A$ under hypotheses (\ref{standing}) with trivial homological determinant. Then there is a bijective correspondence between the isomorphism classes of
\begin{enumerate}
\item indecomposable direct summands of $A$ as right $A^H$-modules
\item indecomposable finitely generated, projective, initial left $\Hom_{(A^H)^{op}}(A,A)$-modules
\item indecomposable finitely generated, projective, initial left $A\#H$-modules
\item simple left $H$-modules
\item indecomposable maximal Cohen-Macaulay $A^H$-modules, up to a degree shift.
\end{enumerate}
\end{theorem}
When $A$ is AS regular of dimension 2 and $H$ is a semisimple Hopf algebra acting on $A$ with trivial homological determinant, the invariant subalgebras $A^H$ (called ``Kleinian quantum singularities") are all of the form $C/\Omega C$ for $C$ a noetherian AS regular algebra of dimension 3 and $\Omega$ a normal regular element of $C$; hence they can be regarded as hypersurfaces in an AS regular algebra of dimension 3 (see \cite[Theorem 0.1]{KKZ6} and \cite{CKWZ2}), and the explicit singularity $\Omega$ is given in each case. In \cite{CKWZ2} it is shown further that the McKay quiver of $H$ is isomorphic to the Gabriel quiver of the $H$-action on $A$, and the quivers that occur are Euclidean diagrams of types $\widetilde{A}, \widetilde{D}, \widetilde{E}, \widetilde{DL}$, and $\widetilde{L}$.
To conclude this section, we note that it is interesting to compare the roles various groups and Hopf algebras play in the invariant theory of
$k[x_1, \dots, x_n]$ and $k_{-1}[x_1, \dots, x_n]$. In Table 2 we give the classical reflection groups and finite subgroups of ${\rm SL}_n(k)$, and the analogous groups and Hopf algebras for $k_{-1}[x_1, \dots, x_n]$. Here we use the notation $H_8$ for the Kac-Paljutkin algebra, $A_{4n}$ and $B_{4n}$ as in \cite{Ma2}, $A(\widetilde{\Gamma})$ and $B(\widetilde{\Gamma})$ as in \cite{BN}. We notice that some of the same groups play different roles in the two contexts. For example, the dihedral groups are classical reflection groups, but can act with trivial $\hdet$ on $k_{-1}[x_1, \dots, x_n]$. The binary dihedral groups are subgroups of ${\rm SL}_2(k)$ but reflection groups for $k_{-1}[x,y]$.\\
~\\
\begin{tabular}{|c|c|c|}
\hline
& $A= \mathbb{C}[x_1, \cdots, x_n]$ & $A=\mathbb{C}_{-1}[x_1, \cdots, x_n]$\\
\hline
&&\\
Reflection Group for $A$ & &\\ &&\\$n= 2$ & $D_{2n}=G(n,n,2)$ & $ BD_{4n},$ \\
&& $H_8 =B_8$,\\
&& $A_{4m}$ ($m$ odd), $ B_{4m}^\circ =B_{4m}$\\
&&\\
$n=3$ & & $A_{12}, S_4$ (rotations of cube)\\
&&\\
Any $n$ & $C_n$, $S_n$, $G(m,p,n)$ & $C_n$\\
&(34 Exceptional &\\
&for various $n$)&\\
&&\\
\hline
&&\\
Special Linear for $A$ &&\\&&\\
$n=2 $ & $C_n, BD_{4n}, \widetilde{\mathcal{T}}, \widetilde{\mathcal{O}}, \widetilde{\mathcal{I}} $& $C_n$, $D_{2n}$, $S_2$\\
& &$A_{4m}^\circ, B_{4m}^\circ =B_{4m}, k D_{2n}^\circ$,\\
&& $A(\widetilde{\mathcal{T}})^\circ$, $B(\widetilde{\mathcal{T}})^\circ$, $B(\widetilde{\mathcal{I}})^\circ$,\\
&& $A(\widetilde{\mathcal{O}})^\circ$, $B(\widetilde{\mathcal{O}})^\circ$\\
&&\\
Any $n$ &Finite subgroups of ${\rm SL}_n(k)$& $S_n$ and all subgroups\\
&&\\
\hline
\end{tabular}
\vspace*{.2in}
\centerline{Table 2}
\section{Complete intersection subrings of invariants}
Gorenstein commutative rings can have pathological properties, but
a well-behaved class of Gorenstein commutative rings is the class of graded complete intersections, i.e. the rings of the form $k[x_1, \dots, x_n]/(f_1, \dots, f_m)$ where $f_1, \dots, f_m$ is a regular sequence of homogeneous elements in $k[x_1, \dots, x_n]$.
When $A$ is a commutative polynomial ring over $\mathbb{C}$, the
problem of determining which finite groups $G$ have
the property that $A^G$ is a complete intersection
was solved by N.L. Gordeev \cite{G2} (1986) and, independently, by
H. Nakajima \cite{N2}, \cite{N3} (1984) (see the survey \cite{NW}).
A key result in this classification is the theorem of Kac-Watanabe \cite{KW} and Gordeev \cite{G1} that provides a necessary condition:
if the fixed subring $k[x_1,\cdots,x_n]^G$ (for any finite subgroup
$G\subset GL_n(k)$) is a complete intersection, then $G$ is
generated by bireflections (i.e., elements $g\in GL_n(k)$ such that
$\operatorname{rank} (g-I)\leq 2$ -- i.e. all but two eigenvalues of $g$ are 1). However, the condition that $G$ is generated by bireflections is not sufficient for $k[x_1,\cdots,x_n]^G$ to be a complete intesection.
A first problem in generalizing these results to our setting is that there is not an
established notion of a complete intersection for noncommutative rings.
In the commutative graded case a connected graded algebra $A$ is
a complete intersection if one of the following four
equivalent conditions holds \cite[Lemma 1.8]{KKZ4} (which references well-known results from \cite{BH}, \cite{FHT}, \cite{FT}, \cite{Gu}, \cite{Ta}).\\
\begin{enumerate}gin{enumerate}
\item[(cci$^\prime$)]
$A\cong k[x_1, \dots, x_d]/(\Omega_1, \cdots, \Omega_n)$, where
$\{\Omega_1, \dots, \Omega_n\}$ is a regular sequence of homogeneous
elements in $k[x_1, \dots, x_d]$ with $\deg x_i>0$.
\item[(cci)]
$A\cong C/(\Omega_1, \cdots, \Omega_n)$, where $C$ is a noetherian
AS regular algebra and $\{\Omega_1, \dots, \Omega_n\}$ is a
regular sequence of normalizing homogeneous elements in $C$.
\item[(gci)]
The $\Ext$-algebra $E(A):=\bigoplus_{n=0}^\infty \Ext^n_A(k,k)$ of
$A$ has finite Gelfand-Kirillov dimension.
\item[(nci)]
The $\Ext$-algebra $E(A)$ is noetherian.
\end{enumerate}
~\\
In \cite{KKZ4} we proposed calling a connected graded ring a {\it cci, gci, or nci} if the respective condition above holds for $A$; we called $A$ a {\it hypersurface} if it is a cci when $n=1$ (i.e. of the form $C/(\Omega)$, where $C$ is a noetherian AS regular algebra and $\Omega$ is a regular, normal element of $C$).
In the noncommutative case, unfortunately, the conditions (cci), (gci) and (nci) are
not all equivalent, nor does (gci) or (nci) force $A$ to be
Gorenstein \cite[Example 6.3]{KKZ4}, making it unclear which property
to use as the proper
generalization of a commutative complete intersection. A direct
generalization to the noncommutative case is condition (cci) which
involves considering regular sequences in {\it any} AS regular
algebra (in the commutative case the only AS regular algebras are
the polynomial algebras), and several researchers have taken an approach to complete intersections that uses regular
sequences. Though the condition (cci) seems to be a good definition
of a noncommutative complete intersection, there are very few tools
available to work with condition (cci), except for explicit
construction and computation, and it is not easy to show condition
(cci) fails, since one needs to consider regular sequences in {\it
any} AS regular algebra.
One relation between these properties that holds in the noncommutative setting is given in the following theorem.
\begin{theorem}\cite[Theorem 1.12(a)]{KKZ4}.
\label{xxthm0.1} Let $A$ be a
connected graded noncommutative algebra.
If $A$ satisfies (cci),
then it satisfies (gci).
\end{theorem}
\cite[Example 6.3]{KKZ4} shows that even both (gci) and (nci) together do not
imply (cci), and \cite[Example 6.2]{KKZ4} shows that (gci) does not imply (nci).
The Hilbert series of a commutative complete intersection is a quotient of cyclotomic
polynomials; we call a noncommutative ring whose Hilbert series has this property {\it cyclotomic}. A commutative complete intersection is also a Gorenstein ring; we call $A$ {\it cyclotomic Gorenstein} if it is cyclotomic and AS Gorenstein
\cite[Definition 1.9]{KKZ4}.
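The cyclotomic condition on a Hilbert series can be tested mechanically: an integer polynomial is, up to sign, a product of cyclotomic polynomials exactly when repeated division by the $\Phi_d$ exhausts it. The following pure-Python sketch (function names are ours) implements this naive test on a numerator or denominator with ascending coefficients:

```python
def poly_divmod(p, d):
    """Divide p by monic d; coefficients ascending in t, integer arithmetic."""
    p, q = p[:], [0] * max(len(p) - len(d) + 1, 1)
    for i in range(len(p) - len(d), -1, -1):
        c = p[i + len(d) - 1]
        q[i] = c
        for j, b in enumerate(d):
            p[i + j] -= c * b
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return q, p

def cyclotomic(n, _cache={}):
    """Phi_n(t) = (t^n - 1) / prod over proper divisors d of n of Phi_d(t)."""
    if n not in _cache:
        num = [-1] + [0] * (n - 1) + [1]      # t^n - 1
        for d in range(1, n):
            if n % d == 0:
                num, rem = poly_divmod(num, cyclotomic(d))
                assert rem == [0]
        _cache[n] = num
    return _cache[n]

def is_cyclotomic_product(p):
    """True iff p is, up to sign, a finite product of cyclotomic
    polynomials (so every root of p is a root of unity)."""
    p = p[:] if p[-1] > 0 else [-c for c in p]
    if p[-1] != 1:
        return False
    m = len(p) - 1
    for d in range(1, 2 * m * m + 3):   # phi(d) <= m forces d <= 2 m^2
        while len(p) > 1:
            q, rem = poly_divmod(p, cyclotomic(d))
            if rem != [0]:
                break
            p = q
    return p == [1]
```

For instance, the numerator $(1-t^2)(1-t^3)$ of the Hilbert series of $k[x,y]$ with $\deg x=2$, $\deg y=3$ passes the test.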
\begin{theorem}\cite[Theorem 1.12(b, c)]{KKZ4}. \label{notcyclotomic}
If $A$ satisfies (gci) or (nci), and if the Hilbert series of $A$ is a rational
function $p(t)/q(t)$ for some coprime integral polynomials $p(t), q(t)
\in \mathbb{Z}[t]$ with $p(0) = q(0) =1$, then $A$ is cyclotomic.
\end{theorem}
In \cite[Section 2]{KKZ4} we show that certain AS Gorenstein Veronese algebras are not cyclotomic, and hence by Theorem \ref{notcyclotomic} these algebras satisfy none of our conditions for a complete intersection.
In our noncommutative invariant theory context we have produced some examples of invariant subrings that are cci algebras. Classically the invariants of $k[x_1, \dots, x_n]^{S_n}$ under the permutation representation form a polynomial ring, but, as we noted, this is not the case in $k_{-1}[x_1, \dots, x_n]^{S_n}$. Under the alternating subgroup $A_n$ of these permutation matrices, the invariants $k[x_1, \dots, x_n]^{A_n}$ are a hypersurface in an $n+1$ dimensional polynomial ring. In \cite{KKZ5} we show that the two invariant subrings of the skew polynomial rings, $k_{-1}[x_1, \dots, x_n]^{S_n}$ and $k_{-1}[x_1, \dots, x_n]^{A_n}$, are each a cci, and we provide generators for the subring of invariants in each case.
We return to Example \ref{transposition}.
\begin{example}
\label{transagain} \cite[Remark 2.6]{KKZ6}. As in Example \ref{transposition} let $S_2$ act on $A = k_{-1}[x,y]$ by interchanging $x$ and $y$. One set of generators for the fixed subring $A^{S_2}$ is $X:= x+y$ and $Y:=(x-y)(xy)$. These elements generate a down-up algebra $C$ (an AS regular algebra of dimension 3) with relations
$$YX^2 = X^2Y \;\;\; \text{ and } \;\;\; Y^2X = XY^2,$$
and $A^{S_2} \cong C/\langle \Omega \rangle$, where $\Omega:= Y^2 - \frac{1}{4}X^2 (XY+YX)$, a central regular element of $C$, so that $A^{S_2}$ is a hypersurface in $C$.
\end{example}
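The identities in Example \ref{transagain} can be verified by direct computation in $k_{-1}[x,y]$, using the normally ordered monomial basis $x^ay^b$ and the rule $yx=-xy$. The following pure-Python sketch (our own encoding, not from \cite{KKZ6}) checks that $X$ and $Y$ are $S_2$-invariant, that they satisfy the down-up relations, and that $\Omega$ vanishes on them:

```python
from fractions import Fraction

# Elements of k_{-1}[x, y] as dicts {(a, b): coeff} in the normally
# ordered basis x^a y^b, with the rule y x = -x y, i.e.
# (x^a y^b)(x^c y^d) = (-1)^(b c) x^(a+c) y^(b+d).

def mul(f, g):
    h = {}
    for (a, b), s in f.items():
        for (c, d), t in g.items():
            k = (a + c, b + d)
            h[k] = h.get(k, 0) + (-1) ** (b * c) * s * t
    return {k: v for k, v in h.items() if v != 0}

def add(f, g):
    h = dict(f)
    for k, v in g.items():
        h[k] = h.get(k, 0) + v
    return {k: v for k, v in h.items() if v != 0}

def scale(c, f):
    return {k: c * v for k, v in f.items()}

x, y = {(1, 0): 1}, {(0, 1): 1}
X = add(x, y)                                  # X = x + y
Y = mul(add(x, scale(-1, y)), mul(x, y))       # Y = (x - y)(xy)

# The automorphism swapping x and y: x^a y^b |-> (-1)^(a b) x^b y^a.
swap = lambda f: {(b, a): (-1) ** (a * b) * v for (a, b), v in f.items()}
```

Running the checks confirms $Y^2 = \frac14 X^2(XY+YX)$ in $A$, i.e. $\Omega$ maps to zero in the fixed subring.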
To classify the groups that produce complete intersections, one would like to begin by proving the Kac-Watanabe-Gordeev Theorem: that if $A^G$ is a complete intersection (of some kind) then $G$ must be generated by bireflections. Toward this end we extend the notion of bireflection, as we extended the notion of a classical reflection, using trace functions.
\begin{definition}\cite[Definition 3.7]{KKZ4}.
Let $A$ be a noetherian connected graded AS
regular algebra of GK-dimension $n$. We call $g\in \Aut(A)$ a {\it bireflection} if its trace function has the form:
$$Tr_A(g,t) = \frac{1}{(1-t)^{n-2} q(t)}$$
where $q(t)$ is an integral polynomial with $q(1) \neq 0$ (i.e. $t=1$ is a pole of order $n-2$). We call it a {\it classical bireflection} if all but two of its eigenvalues are 1.
\end{definition}
The following example suggests that this notion of bireflection based on the trace function may be useful; in this example the fixed subring is a commutative complete intersection, so it satisfies all the equivalent conditions (cci), (nci), and (gci).
\begin{example} \cite[Example 6.6]{KKZ4}.
${A=k_{-1}[x,y,z]}$ is AS regular of dimension 3, and the automorphism
\[{g = \matthree{0 & -1 & 0\\1& 0 &0\\0 & 0 & -1}}\]
acts on it.
The eigenvalues of ${g}$ are $-1, i ,-i$ so ${g}$ is not a classical bireflection.
However, $Tr_{A}({g},t) = 1/((1+t)^2(1-t)) = -1/t^3 +$ higher degree terms and ${g}$ is
a bireflection with $\hdet g =1$. The fixed subring is
\[{A}^{\langle g \rangle} \cong \frac{k[X,Y,Z,W]}{\langle W^2-(X^2+4Y^2)Z\rangle},\]
a commutative complete intersection.
\end{example}
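The eigenvalue claim in this example is a routine characteristic-polynomial computation; the following pure-Python sketch (helper name ours) confirms that $g$ has characteristic polynomial $(t+1)(t^2+1)$, so that $1$ is not an eigenvalue and $g$ is not a classical bireflection:

```python
def charpoly3(g):
    """det(tI - g) for a 3x3 matrix, as ascending coefficients [c0, c1, c2, 1]."""
    tr = g[0][0] + g[1][1] + g[2][2]
    minors = sum(g[i][i] * g[j][j] - g[i][j] * g[j][i]
                 for i in range(3) for j in range(i + 1, 3))
    det = (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
           - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
           + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))
    return [-det, minors, -tr, 1]

g = [[0, -1, 0], [1, 0, 0], [0, 0, -1]]
# charpoly3(g) gives t^3 + t^2 + t + 1 = (t + 1)(t^2 + 1), roots -1, i, -i.
```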
In the context of permutation actions on $A=k_{-1}[x_1, \dots, x_n]$ we have proved the converse of the
Kac-Watanabe-Gordeev Theorem (a result which is not true in the case $A=k[x_1, \dots, x_n]$).
\begin{theorem}\cite[Theorem 5.4]{KKZ5}. \label{permbi}
If $G$ is a subgroup of $S_n$, represented as permutations of $\{x_1, \dots, x_n\}$, and if $G$ is generated by bireflections (defined in terms of the trace functions), then $k_{-1}[x_1, \dots, x_n]^G$ is a cci.
\end{theorem}
We conjecture that the Kac-Watanabe-Gordeev Theorem is also true in this context, and we have verified it for $n \leq 4$.
In dimension 2, by \cite[Theorem 0.1]{KKZ6}
all AS Gorenstein invariant subrings under the actions of finite groups are hypersurfaces in AS regular algebras of dimension 3, and all
automorphisms of finite order are trivially bireflections, and hence
the first interesting case of the Kac-Watanabe-Gordeev Theorem is in
dimension 3, so that it is natural to investigate generalizations of
this theorem for down-up algebras.
Down-up algebras were defined by Benkart and Roby \cite{BR}
in 1998 as a tool to study
the structure of certain posets.
Noetherian graded down-up algebras $A(\alpha, \beta)$ form a class of AS regular algebras
of global dimension 3 that are generated in degree 1 by two
elements $x$ and $y$, with two cubic relations:
$$ y^2x = \alpha yxy + \beta xy^2 \text{ and } yx^2 = \alpha xyx + \beta x^2y$$
for scalars $\alpha, \beta \in k$ with $\beta \neq 0$.
These algebras are not Koszul, but
they are (3)-Koszul. Their graded automorphism groups, which depend
upon the parameters $\alpha$ and $\beta$, were computed in \cite{KK},
and are sufficiently rich to provide many non-trivial examples (e.g.
in two cases the automorphism group is the entire group ${\rm GL}_2(k)$).
However, it follows from \cite[Proposition 6.4]{KKZ1} that these algebras have no reflections, so all finite
subgroups are ``small", and hence from \cite[Corollary 4.11]{KKZ2} $A^G$ is AS Gorenstein if and only if
$\hdet $ is trivial. Noetherian graded down-up algebras satisfy the following version of the
Kac-Watanabe-Gordeev Theorem.
\begin{theorem}\cite[Theorem 0.3]{KKZ6}.
Let $A$ be a graded noetherian down-up
algebra and $G$ be a finite subgroup of $\Aut(A)$. Then the
following are equivalent.
\begin{enumerate}
\item[(C1)]
$A^G$ is a gci.
\item[(C2)]
$A^G$ is cyclotomic Gorenstein and $G$ is generated by
bireflections.
\item[(C3)]
$A^G$ is cyclotomic Gorenstein.
\end{enumerate}
\end{theorem}
In many of the cases for $A$ a noetherian graded down-up algebra the fixed algebras $A^G$ are shown to be a cci, and it is an open question whether that is always the case.
It would be interesting to study the relation of these conditions for other classes of 3-dimensional AS regular algebras, and we have work in progress on actions of groups with trivial homological determinant acting on the generic 3-dimensional Sklyanin algebra.
\section{Related research directions}
In this section we briefly sketch some related directions of research, many of which contain open questions.\\
~\\
\noindent
A. {\bf Degree bounds.} When computing invariant subrings, it is very useful to have an upper bound on the degrees of the algebra generators of the fixed subring. In 1916 Emmy Noether \cite{No}
proved that $|G|$, the order of the group $G$, is an upper bound on the degrees of the algebra generators of $k[x_1, \dots, x_n]^G$, for any finite group $G$, when $k$ is a field of characteristic zero. The Noether upper bound does not always hold in characteristic $p$ (see e.g. \cite[Example 3.5.7 (a) p. 94]{DK}); the survey paper \cite{Ne} is a good introduction to the problem of finding upper bounds on the degrees of the algebra generators of $k[x_1, \dots, x_n]^G$. We have seen (Example \ref{transposition}) that the Noether upper bound does not always hold in our noncommutative setting, for in that example the symmetric group $S_2$ has order 2 and the fixed subring requires a degree 3 generator. In 2011
P. Symonds \cite{Sy} proved the upper bound $n(|G|-1)$ (if $n >1$ and $|G| > 1$) on the degrees of the generators of $k[x_1, \dots, x_n]^G$, when $k$ is a field of characteristic $p$; letting $n$ be the number of generators of $A$, this bound is also too small for the degrees of the generators in Example \ref{transposition}.
In the case of permutation
actions on $k[x_1, \dots, x_n]$ there is a smaller upper bound, $\max\{n, {n\choose 2}\}$, on the degrees of the generators of the fixed subring under groups of permutations; this upper bound was proved by M. G{\"o}bel in 1995 \cite{Go}, and is true in any characteristic. In \cite[Theorem 2.5]{KKZ5} we prove the bound ${n \choose 2}+\lfloor \frac{n}{2}\rfloor (\lfloor\frac{n}{2}\rfloor+1)$ (which is roughly $3n^2/4$) on the degrees of the generators of $k_{-1}[x_1, \dots, x_n]^G$ for $G$ a group of permutations of $\{x_1, \dots, x_n\}$. This upper bound follows from a more general upper bound that we state below, a bound that holds for semisimple Hopf actions on quantum polynomial algebras under certain technical conditions; this upper bound can be viewed as a generalization of Broer's Bound (see \cite[Proposition 3.8.5]{DK}) in the classical case. In the lemma that follows the field $k$ need not have characteristic zero.
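The degree bounds just quoted are elementary to tabulate; the following pure-Python sketch (function names ours) shows, e.g., that for $n=2$ and $G=S_2$ both the Noether bound and the Symonds bound fall below the degree-$3$ generator of Example \ref{transposition}, while the bound of \cite[Theorem 2.5]{KKZ5} does not:

```python
from math import comb

def noether_bound(order):            # |G|; char 0, commutative polynomial ring
    return order

def symonds_bound(n, order):         # n(|G| - 1); char p, n > 1, |G| > 1
    return n * (order - 1)

def gobel_bound(n):                  # permutations acting on k[x_1, ..., x_n]
    return max(n, comb(n, 2))

def kkz_bound(n):                    # permutations acting on k_{-1}[x_1, ..., x_n]
    return comb(n, 2) + (n // 2) * (n // 2 + 1)

# For n = 2, G = S_2: the fixed subring of k_{-1}[x, y] needs the
# degree-3 generator Y, exceeding both classical bounds; for large n,
# kkz_bound(n) is roughly 3 n^2 / 4.
```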
\begin{lemma}[\bf Broer's Bound] \cite[Lemma 2.2]{KKZ5}.
\label{zzlem2.2} Let $A$ be a quantum polynomial algebra of
dimension $n$ and $C$ an iterated Ore extension
$k[f_1][f_2;\tau_2,\delta_2]\cdots [f_n;\tau_n,\delta_n]$. Assume
that
\begin{enumerate}
\item
$B=A^{H}$ where $H$ is a semisimple Hopf algebra acting on $A$,
\item
$C\subset B\subset A$ and $A_C$ is finitely generated, and
\item
$\deg f_{i}>1$ for at least two distinct $i$'s.
\end{enumerate}
Then $d_{A^H}$, the maximal degree of the algebra generators of $A^H$, satisfies the inequality:
$$d_{A^H}\leq \ell_C-\ell_A=\sum_{i=1}^n \deg f_i -n,$$
where $\ell_A$ and $\ell_C$ are the
AS indices of $A$ and $C$ respectively.
\end{lemma}
It would be useful to have further upper bounds on the degrees of the generators of the subring of invariants.\\
~\\
B. {\bf Actions on other algebras.} In the work described in this survey thus far we have assumed that $A$ is a graded algebra, and all actions preserve the grading on $A$. There is recent work on actions on filtered algebras, such as the Weyl algebras. Basic properties of this approach were established in \cite{CWWZ}, where it was assumed that the actions preserve the filtration on $A$.
More generally one can consider automorphisms or Hopf actions that may not preserve the filtration on $A$. Etingof and Walton began a program to show that in rather general circumstances a Hopf action must factor through a group action, beginning with \cite[Theorem 1.3]{EW1} that shows that semisimple Hopf actions on commutative domains must factor through group actions. In \cite{EW2} actions of finite dimensional Hopf algebras (that are not necessarily semisimple) on commutative domains, particularly when $H$ is pointed of finite Cartan type, are studied; in this setting there are nontrivial Hopf actions by Taft algebras, Frobenius-Lusztig kernels $u_q({\mathfrak{sl}}_2)$, and Drinfeld twists of some other small quantum groups. In \cite[Theorem 4.1]{CEW} Cuadra, Etingof and Walton show that if a semisimple Hopf algebra $H$ acts inner faithfully on a Weyl algebra $A_n(k)$ for $k$ an algebraically closed field of characteristic zero, then $H$ is cocommutative; in this setting they show further \cite[Theorem 4.2]{CEW} that if $H$ is not necessarily semisimple, but gives rise to a Hopf-Galois extension, then $H$ must be cocommutative. All of these results have no assumptions regarding preserving a grading or filtration.
Relaxing the noetherian assumption of \cite{CKWZ1}, universal quantum linear group coactions on non-noetherian AS regular algebras of dimension 2 are considered in \cite{WW}. In another direction, Hopf algebras (including Taft algebras, doubles of Taft algebras, and $u_q({\mathfrak{sl}}_2)$), that act on certain path algebras, preserving the path length filtration, are studied in \cite{KiW}.
~\\
~\\C. {\bf Nakayama automorphism.} Considering further generalizations of the algebra $A$ on which the group or Hopf algebra acts, let $A$ be a (not necessarily graded) algebra over $k$, and let $A^e = A \otimes A^{op}$ denote the enveloping algebra of $A$.
\begin{definition}\cite[Definition 0.1]{RRZ} and \cite[Section 3.2]{Gi}.
\begin{enumerate}
\item $A$ is called {\it skew Calabi-Yau} (or {\it skew CY} for short) if \begin{enumerate}
\item $A$ has a projective resolution of finite length in the category
$A^e$-Mod, with every term in the projective resolution finitely generated, and
\item there is an integer $d$ and an automorphism $\mu$ of $A$ such that
$$\Ext^i_{A^e}(A,A^e) \cong {}^1 A^\mu \text{ for } i = d, \text{ and } \Ext^i_{A^e}(A,A^e) \cong 0 \text{ if } i \neq d$$
as $A$-bimodules, where $1$ denotes the identity map on $A$. The map $\mu$ is usually denoted $\mu_A$ and is called the
{\it Nakayama automorphism of $A$}.
\end{enumerate}
\item \cite[Definition 3.2.3]{Gi} $A$ is called {\it Calabi-Yau} (or {\it CY} for short) if $A$ is skew Calabi-Yau and $\mu_A$ is an inner automorphism of $A$.
\end{enumerate}
\end{definition}
By \cite[Lemma 1.2]{RRZ}
if $A$ is a connected graded algebra then $A$ is an AS regular algebra if and only if $A$ is skew CY.
A homological identity is given in \cite[Theorem 0.1]{CWZ} that has been used to show that the Nakayama automorphism plays a role in determining the class of Hopf algebras that can act on a given AS regular algebra. These techniques were used to show that if a finite dimensional Hopf algebra $H$ acts on $A=k_p[x_1, \dots, x_n]$ (under Hypotheses \ref{standing}) and $p$ is not a root of unity then $H$ is a group algebra (\cite[Theorem 0.4]{CWZ}); further, if it acts on the 3 or 4-dimensional Sklyanin algebras with trivial $\hdet$ then $H$ is semisimple (\cite[Theorem 0.6]{CWZ}). In \cite{LMZ} the Nakayama automorphism is used to characterize the kinds of Hopf algebras that can act on various families of 3-dimensional AS regular algebras; in several generic cases it is shown that the Hopf algebra must be a commutative group algebra or (when the $\hdet$ is trivial) the dual of a group algebra.
Further investigation of the Nakayama automorphism is likely to be useful in the study of Hopf actions, including actions on algebras that are not graded.\\
~\\
D. {\bf Computing the full automorphism group.} The first step in proving properties of the invariant subring of an algebra $A$ under {\bf any} finite group of automorphisms of $A$ usually is to determine the complete automorphism group of $A$. Such computations are notoriously difficult for commutative polynomial rings. Noncommutative algebras are more rigid than commutative algebras, so sometimes this task is more tractable for noncommutative algebras, even for some PI algebras.
A sequence of recent papers provide some new techniques for computing the full automorphism group of some algebras, including some filtered algebras whose associated graded algebras are AS regular algebras.
In \cite{CPWZ1} an invariant, the discriminant of the algebra over a central subring, is defined and used to compute the full automorphism group of some noncommutative algebras, including, for $n$ even, the filtered algebra (a ``quantum Weyl algebra") $W_n = k \langle x_1, \dots, x_n\rangle$ with relations $x_ix_j + x_j x_i = 1$ for $i > j$, and its associated graded algebra $k_{-1}[x_1, \dots, x_n]$. To cite another example, the discriminant is used to show that the full automorphism group of $B=k_{-1}[x,y]^{S_2}$ (the hypersurface of Example \ref{transposition}) is $k^\times
\rtimes S_2$ \cite[Example 5.8]{CPWZ1}. In \cite{CPWZ2}, automorphism groups of tensor products of quantum Weyl algebras and certain skew polynomial rings $k_{q_{i,j}}[x_1, \dots,x_n]$ are computed. In \cite{CPWZ3} it is shown that, when $n$ is even
and $n \geq 4$, the fixed subring of $W_n$ under any group of automorphisms of $W_n$ is a filtered AS Gorenstein algebra, but for $n \geq 3$ and odd the full automorphism group of $W_n$ contains a free subalgebra on two (and hence countably many) generators.
In \cite{CYZ} further results on computing the discriminant are proved, and some applications to Zariski cancellations problems and isomorphism questions of algebras are given (these two areas of application will be explored further in \cite{BZ} and \cite{CPWZ4}). Further results on the discriminant and its applications remain to be explored.
This new information about the full automorphism group of many families of noncommutative algebras suggests many interesting open questions about the structure of the invariant subrings. The questions considered in this survey can be investigated for actions of ANY finite group of (not necessarily graded) automorphisms of $A$, for larger classes of algebras than AS regular algebras.
\subsection*{Acknowledgments}
The author wishes to thank Chelsea Walton and James Zhang, as well as the referee, for making helpful suggestions on this paper.
Ellen Kirkman was partially supported by the
Simons Foundation grant no. 208314.
\begin{thebibliography}{10}
\bibitem[AP]{AP}
J. Alev and P. Polo, A rigidity theorem for finite group actions
on enveloping algebras of semisimple Lie algebras, {\it Adv. Math.} {\bf
111} (1995), no. 2, 208--226.
\bibitem[All]{All} J. Allman, Actions of finite-dimensional, non-commutative, non-cocommutative Hopf algebras
on rings, MA thesis, Wake Forest University, 2009.
\bibitem[ASc]{ASc}
M. Artin and W. F. Schelter, Graded algebras of global
dimension $3$, {\it Adv. in Math.} {\bf 66} (1987), no. 2,
171--216.
\bibitem[ATV]{ATV} M. Artin, J. Tate and M. Van den Bergh,
Some algebras associated to automorphisms of elliptic
curves, ``The Grothendieck Festschrift,''
Vol. I, ed.\ P. Cartier et al., Birkh\"{a}user Boston
1990, 33-85.
\bibitem[AZ]{AZ}
M. Artin and J. J. Zhang, Noncommutative projective
schemes, {\it Adv. Math.} {\bf 109} (1994), 228-287.
\bibitem[Aus]{Aus} M. Auslander, On the purity of the branch locus, {\it Amer. J. Math.} {\bf 84} (1962), 116-125.
\bibitem[BB]{BB} T. Banica and J. Bichon, Hopf images and inner faithful representations, {\it Glasg. Math. J.} {\bf 52} (2010) no. 3, 677--703.
\bibitem[BB1]{BB1} Y. Bazlov and A. Berenstein, Noncommutative Dunkl operators and braided Cherednik algebras, {\it Selecta Math. (N.S.)} {\bf 14} (2009), no. 3-4, 325--372.
\bibitem[BB2]{BB2}Y. Bazlov and A. Berenstein, Mystic reflection groups. {\it SIGMA Symmetry Integrability Geom. Methods Appl.} {\bf 10} (2014), Paper 040, 11 pp.
\bibitem[BZ]{BZ} J. Bell and J.J. Zhang, Zariski cancellation problems for noncommutative algebras, in preparation (2015).
\bibitem[BR]{BR} G. Benkart and T. Roby, Down-up algebras, {\it J.
Algebra} {\bf 209} (1998), 305-344. Addendum, {\it J. Algebra} {\bf
213} (1999), no. 1, 378.
\bibitem[Be]{Be}
D.J. Benson, ``Polynomial invariants of finite groups", London
Mathematical Society Lecture Note Series, {\bf 190}. Cambridge
University Press, Cambridge, 1993.
\bibitem[BN]{BN} J. Bichon and S. Natale, Hopf algebra deformations of binary polyhedral groups, {\it Transform. Groups} {\bf 16} (2011) no 2, 339--374.
\bibitem[BH]{BH}
R. B{\o}gvad and S. Halperin, On a conjecture of Roos,
``Algebra, Algebraic Topology and Their Interactions (Stockholm, 1983)",
Lecture Notes in Math., Vol. {\bf 1183}, Springer, Berlin, 1986, 120-127.
\bibitem[B]{B} A. Borel, Essays in the History of Lie Groups and Algebraic Groups, ``History of Mathematics", Vol. {\bf 21}, Amer. Math. Soc., Providence, RI, 2001.
\bibitem[CPWZ1]{CPWZ1} S. Ceken, J. H. Palmieri, Y.-H. Wang, and J.J. Zhang, The discriminant controls automorphism groups of noncommutative algebras, {\it Adv. Math.} {\bf 269} (2015) 551-584.
\bibitem[CPWZ2]{CPWZ2} S. Ceken, J. H. Palmieri, Y.-H. Wang, and J.J. Zhang, The discriminant criterion and automorphisms of quantized algebras, ArXiv 1402.6625.
\bibitem[CPWZ3]{CPWZ3} S. Ceken, J. H. Palmieri, Y.-H. Wang, and J.J. Zhang, Invariant theory of quantum Weyl algebras under finite group action, ArXiv 1501.07881.
\bibitem[CPWZ4]{CPWZ4} S. Ceken, J. H. Palmieri, Y.-H. Wang, and J.J. Zhang, Tits alternative for automorphisms and derivations, in preparation (2015).
\bibitem[CKWZ1]{CKWZ1} K. Chan, E. Kirkman, C. Walton and J.J. Zhang,
Quantum binary polyhedral groups and their actions on quantum
planes, ArXiv 1303.7203, to appear in {\it J. Reine Angew. Math. (Crelle's Journal)}.
\bibitem[CKWZ2]{CKWZ2} K. Chan, E. Kirkman, C. Walton and J.J. Zhang, McKay Correspondence for semisimple Hopf actions on regular graded algebras, in preparation (2015).
\bibitem[CYZ]{CYZ} K. Chan, A. Young, and J.J. Zhang, Discriminant formulas and applications, ArXiv 1503.06327.
\bibitem[CWWZ]{CWWZ} K. Chan, C. Walton, Y.H. Wang, and J.J. Zhang, Hopf actions on filtered regular algebras, {\it J. Algebra} {\bf 397} (2014), 68-90.
\bibitem[CWZ]{CWZ} K. Chan, C. Walton, and J.J. Zhang, Hopf actions and Nakayama automorphisms {\it J. Algebra} {\bf 409} (2014), 26-53.
\bibitem[C]{C} C. Chevalley,
Invariants of finite groups generated by reflections. {\it Amer. J. Math.} {\bf 77} (1955), 778--782.
\bibitem[CEW]{CEW} J. Cuadra, P. Etingof, C. Walton, Semisimple Hopf actions on Weyl algebras, ArXiv 1409.1644.
\bibitem[DK]{DK}
H. Derksen and G. Kemper, ``Computational Invariant Theory",
Encyclopedia of Mathematical Sciences {\bf 130}: Invariant Theoy
and Algebraic Transformation Groups I,
Springer-Verlag, Berlin, 2002.
\bibitem[EW1]{EW1} P. Etingof and C. Walton, Semisimple Hopf actions on commutative domains, {\it Adv. Math.} {\bf 251} (2014), 47--61.
\bibitem[EW2]{EW2} P. Etingof and C. Walton, Pointed Hopf actions on fields I, ArXiv 1403.4673, to appear {\it Transform. Groups}.
\bibitem[FHT]{FHT}
Y. F{\' e}lix, S. Halperin and J.-C. Thomas,
Elliptic Hopf algebras,
{\it J. London Math. Soc. (2)} {\bf 43} (1991), no. 3, 545--555.
\bibitem[FT]{FT}
Y. F{\'e}lix and J.-C. Thomas,
The radius of convergence of Poincar{\'e} series of loop spaces,
{\it Invent. Math.} {\bf 68} (1982), no. 2, 257-274.
\bibitem[Gi]{Gi} V. Ginzburg, Calabi-Yau algebras, ArXiv 0612139.
\bibitem[Go]{Go}
M. G\"{o}bel,
Computing bases for rings of permutation-invariant polynomials,
{\it J. Symbolic Comput.} {\bf 19} (1995), 285-291.
\bibitem[G1]{G1}
N.L. Gordeev,
Invariants of linear groups generated by matrices with two
nonidentity eigenvalues,
{\it Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI)}
{\bf 114} (1982),
120-130; English translation in {\it J. Soviet Math.} {\bf 27} (1984), no. 4.
\bibitem[G2]{G2}
N.L. Gordeev,
Finite linear groups whose algebra of invariants is a complete intersection
(Russian), {\it Izv. Akad. Nauk SSR Ser. Mat.} {\bf 50} (1986), no. 2, 343-392.
(English translation: {\it Math. USSR-Inv.} {\bf 28}(1987), no. 2, 335-379.)
\bibitem[Gu]{Gu}
T.H. Gulliksen,
A homological characterization of local complete intersections,
{\it Compos. Math.} {\bf 23} (1971) 251-255.
\bibitem[H]{H} J. Herzog, Ringe mit nur endlich vielen Isomorphismklassen von maximalen, unzerlegbaren Cohen-Macaulay-Moduln, {\it Math. Ann.} {\bf 233} (1978) no 1, 21-34.
\bibitem[JiZ]{JiZ}
N. Jing and J.J. Zhang, On the trace of graded automorphisms, {\it J.
Algebra} {\bf 189} (1997), no. 2, 353--376.
\bibitem[JoZ]{JoZ}
P. J{\o}rgensen and J.J. Zhang, Gourmet's guide to Gorensteinness,
{\it Adv. Math.} {\bf 151} (2000), no. 2, 313--345.
\bibitem[KP]{KP} G.I.~Kac and V.G.~Paljutkin, {Finite ring groups}, {\it Trans. Moscow Math. Soc.} 1966, 251-294.
\bibitem[KW]{KW} V. Kac and K. Watanabe, Finite linear groups whose ring
of invariants is a complete intersection, {\it Bull. Amer. Math. Soc. (N.S.)} {\bf 6}
(1982), no. 2, 221-223.
\bibitem[KiW]{KiW} R. Kinser and C. Walton, Actions of some pointed Hopf algebras on path algebras of quivers, arXiv:1410.7696.
\bibitem[KK]{KK}
E. Kirkman and J. Kuzmanovich,
Fixed subrings of Noetherian graded regular rings,
{\it J. Algebra} {\bf 288} (2005), no. 2, 463--484.
\bibitem[KKZ1]{KKZ1}
E. Kirkman, J. Kuzmanovich and J.J. Zhang,
Rigidity of graded regular algebras,
{\it Trans. Amer. Math. Soc.} {\bf 360} (2008), 6331-6369.
\bibitem[KKZ2]{KKZ2}
E. Kirkman, J. Kuzmanovich and J.J. Zhang,
Gorenstein subrings of invariants under Hopf algebra actions,
{\it J. Algebra} {\bf 322} (2009), no. 10, 3640--3669.
\bibitem[KKZ3]{KKZ3}
E. Kirkman, J. Kuzmanovich and J.J. Zhang,
Shephard-Todd-Chevalley Theorem for skew polynomial rings,
{\it Algebr. Represent. Theory} {\bf 13} (2010) no. 2, 127--158.
\bibitem[KKZ4]{KKZ4}
E. Kirkman, J. Kuzmanovich and J.J. Zhang,
Noncommutative complete intersection, ArXiv 1302.6209, to appear {\it J. Algebra}.
\bibitem[KKZ5]{KKZ5}
E. Kirkman, J. Kuzmanovich and J.J. Zhang,
Invariants of (-1)-skew polynomial rings under permutation representation, ArXiv 1305.3973, to appear in {\it Proceedings of the AMS Special Sessions on Geometric and Algebraic Aspects of Representation Theory and Quantum Groups and Noncommutative Algebraic Geometry: Tulane University, October 13-14, 2012}.
\bibitem[KKZ6]{KKZ6} E. Kirkman, J. Kuzmanovich and J.J. Zhang, Invariant theory of finite group actions on down-up algebras, ArXiv 1308.0579, to appear {\it Transform. Groups}.
\bibitem[Kl1]{Kl1}
F. Klein.
\"{U}ber bin{\"a}re Formen mit linearen Transformationen in sich selbst.
{\it Math. Ann.}, {\bf 9} (2) (1875), 183--208.
\bibitem[Kl2]{Kl2}
F. Klein, Vorlesungen {\"u}ber das Ikosaeder und die Aufl{\"o}sung
der Gleichungen vom f{\"u}nften Grade, Birkhauser
Verlag, Basel, 1993. Reprint of the 1884 original, Edited, with
an introduction and commentary by Peter Slodowy.
\bibitem[KL]{KL}
G. Krause and T. Lenagan, ``Growth of Algebras and
Gelfand-Kirillov Dimension", revised edition, Graduate Studies in
Mathematics, Vol. {\bf 22}, American Mathematical Society, Providence, RI, 2000.
\bibitem[LW]{LW} G.J. Leuschke and R. Wiegand, ``Cohen-Macaulay Representations", Mathematical Surveys and Monographs Vol. {\bf 181}, American Mathematical Society, Providence, RI, 2012.
\bibitem[LMZ]{LMZ} J.-F. L{\"u}, X.-F. Mao, and J.J. Zhang, Nakayama automorphism and applications, ArXiv 1408.5761.
\bibitem[Ma1]{Ma1} A.~Masuoka, {Semisimple Hopf algebras of dimensions 6 and 8}, {\it Israel J. Math.} \textbf{92} (1995), 361--375.
\bibitem[Ma2]{Ma2} A.~Masuoka, Cocycle deformations and Galois objects for some cosemisimple Hopf algebras of finite dimension. In {\it New trends in Hopf algebra theory} (La Falda, 1999), volume {\bf 267} of Contemp. Math., pages 195-214. Amer. Math. Soc., Providence, RI, 2000.
\bibitem[Mon]{Mon}
S. Montgomery,
``Hopf algebras and their actions on rings'',
CBMS Regional Conference Series in
Mathematics, {\bf 82},
Providence, RI, 1993.
\bibitem[MU]{MU} I. Mori and K. Ueyama, Ample group action on AS-regular algebras and noncommutative graded isolated singularities, ArXiv 1404.5045.
\bibitem[Mu]{Mu} E. M{\"u}ller, Finite subgroups of the quantum general linear group, Proc. Lond. Math. Soc. (3) {\bf 81} (2000), no. 1, 190-210.
\bibitem[N1]{N1} H. Nakajima, Invariants of finite groups generated by pseudo-reflections in positive characteristic, Tsukuba J. Math. {\bf 3} (1979), 109-122.
\bibitem[N2]{N2}
H. Nakajima,
Quotient singularities which are complete intersections,
{\it Manuscripta Math.} {\bf 48} (1984), no.1-3, 163-187.
\bibitem[N3]{N3}
H. Nakajima,
Quotient complete intersections of affine spaces by finite linear groups,
{\it Nagoya Math J.} {\bf 98} (1985), 1-36.
\bibitem[NW]{NW}
H. Nakajima and K. Watanabe, The classification of quotient singularities that are complete intersections.
``Complete intersections" (Acireale, 1983), 102-120,
Lecture Notes in Math. {\bf 1092}, Springer, Berlin, 1984.
\bibitem[Ne]{Ne} M. D. Neusel, Degree bounds--an invitation to postmodern invariant theory, {\it Topology Appl.} {\bf 154} (2007), no. 4, 792-814.
\bibitem[No]{No}
E. Noether,
Der Endlichkeitssatz der Invarianten endlicher Gruppen,
{\it Math. Ann.} {\bf 77} (1916), 89-92.
\bibitem[RRZ]{RRZ} M. Reyes, D. Rogalski, and J.J. Zhang, Skew Calabi-Yau algebras and homological identities, {\it Adv. Math.} {\bf 264} (2014), 308-354.
\bibitem[S]{S} J.-P. Serre, Groupes finis d'automorphismes d'anneaux locaux r\'{e}guliers, Colloque d'Alg\`{e}bre EN-SJF, Paris, 8:1-11, 1967.
\bibitem[ShT]{ShT}
G.C. Shephard and J.A. Todd, Finite unitary reflection groups,
{ \it Canadian J. Math.} {\bf 6}, (1954). 274--304.
\bibitem[Sm1]{Sm1}
S.P. Smith, Can the Weyl algebra be a fixed ring? {\it Proc. Amer.
Math. Soc.} {\bf 107} (1989), no. 3, 587--589.
\bibitem[Sm2]{Sm2}
S.P. Smith, Some finite-dimensional algebras related to elliptic
curves, Representation theory of algebras and related topics
(Mexico City, 1994), 315--348, CMS Conf. Proc., {\bf 19}, AMS,
Providence, RI, 1996.
\bibitem[Sta]{Sta} R.P. Stanley, Hilbert functions of graded algebras,
{\it Adv. Math.} {\bf 28} (1978), 57-83.
\bibitem[Ste]{Ste} D. Stefan, Hopf algebras of low dimension, {\it J. Algebra} {\bf 211} (1999), no. 1, 343--361.
\bibitem[Su]{Su}
M. Suzuki, Group theory. I.
Translated from the Japanese by the author,
Grundlehren der Mathematischen Wissenschaften
[Fundamental Principles of Mathematical Sciences], 247.
Springer-Verlag, Berlin-New York, 1982.
\bibitem[Sy]{Sy} P. Symonds, On the Castelnuovo-Mumford regularity of rings of polynomial invariants, {\it Ann. of Math. (2)} {\bf 174} (2011), no. 1, 499--517.
\bibitem[Ta]{Ta} J. Tate, Homology of Noetherian rings and local rings,
{\it Illinois J. Math} {\bf 1} (1957), 14-27.
\bibitem[U]{U} K. Ueyama, Graded maximal Cohen-Macaulay modules over noncommutative graded Gorenstein isolated singularities, {\it J. Algebra} {\bf 383} (2013), 85-103.
\bibitem[WW]{WW} C. Walton and X. Wang, On quantum groups associated to non-noetherian regular algebras of dimension 2, arXiv:1503.09185.
\bibitem[W1]{W1}
K. Watanabe, Certain invariant subrings are Gorenstein, I, {\it Osaka J. Math.} {\bf 11}
(1974), 1-8.
\bibitem[W2]{W2}
K. Watanabe, Certain invariant subrings are Gorenstein, II, {\it Osaka J. Math.} {\bf 11}
(1974), 379-388.
\bibitem[Wa]{Wa} K. Watanabe,
Invariant subrings which are complete intersections I
(Invariant subrings of finite abelian groups),
{\it Nagoya Math. J.} {\bf 77} (1980), 89-98.
\end{thebibliography}
\end{document}
\begin{document}
\title{
Deep-quantile-regression-based surrogate model for joint chance-constrained optimal power flow with renewable generation
}
\author{
Ge~Chen,~\IEEEmembership{Graduate Student Member,~IEEE,}
Hongcai~Zhang,~\IEEEmembership{Member,~IEEE,}
Hongxun~Hui,~\IEEEmembership{Member,~IEEE,}
and~Yonghua~Song,~\IEEEmembership{Fellow,~IEEE}
}
\maketitle
\begin{abstract}
Joint chance-constrained optimal power flow (JCC-OPF) is a promising tool to manage uncertainties from distributed renewable generation. However, most existing works are based on power flow equations, which require accurate network parameters that may be unobservable in many distribution systems. To address this issue, this paper proposes a learning-based surrogate model for JCC-OPF with renewable generation. This model equivalently converts the joint chance constraints into quantile-based forms and introduces deep quantile regression to replicate them, in which a multi-layer perceptron (MLP) is trained with a special loss function to predict the quantile of constraint violations. Another MLP is trained to predict the expected power loss. The JCC-OPF can then be formulated without network parameters by reformulating these two MLPs into mixed-integer linear constraints. To further improve performance, two pre-processing steps, i.e., data augmentation and calibration, are developed. The former trains a simulator to generate additional training samples that enhance the prediction accuracy of the MLPs. The latter designs a positive parameter to calibrate the predictions of the MLPs so that the feasibility of solutions can be guaranteed. Numerical experiments on the IEEE 33- and 123-bus systems validate that the proposed model achieves desirable feasibility and optimality simultaneously with no need for network parameters.
\end{abstract}
\begin{IEEEkeywords}
Optimal power flow, joint chance constraints, deep quantile regression, distribution network, distributed renewable generation.
\end{IEEEkeywords}
\section{Introduction} \label{sec_intro}
\IEEEPARstart{O}{ptimal} power flow (OPF) plays a critical role in the operation of distribution networks \cite{abdi2017review}. By solving OPF, network operators can find the most economical dispatch strategy while ensuring operational security. However, as distributed generators (DGs), such as wind turbines and PV plants, are increasingly integrated into distribution networks \cite{9709098}, considerable uncertainties are introduced into OPF, which dramatically increases the difficulty of solving it \cite{zakaria2020uncertainty}.
Chance-constrained programming (CCP) is a promising method to account for the uncertainties from DGs in OPF \cite{geng2019data}. It allows constraint violation with a small probability so that operators can effectively balance robustness and optimality based on their preferences. Many recent efforts have been made to describe the impacts of uncertainties in OPF with CCP. References \cite{8515118,8688432} combined CCP with the linearized DistFlow model to coordinate uncertain DGs with flexible resources in distribution networks. Reference \cite{8600344} applied CCP to a linearized AC OPF model to schedule wind generation. Nevertheless, the above papers used individual chance constraints to control the violation probability of every critical constraint. This individual manner may not guarantee the joint satisfaction probability of all critical constraints, even though this joint probability is of greater concern to operators, who need to ensure the security of the entire system \cite{7973099}.
Conversely, joint chance-constrained optimal power flow (JCC-OPF) directly restricts the joint satisfaction probability of all critical constraints \cite{9122389}. Hence, it is preferable for ensuring system-level security and has attracted increasing attention in recent years. References \cite{8355588,8662704} employed JCC-OPF to restrict the joint probability of critical constraint violations, where the Bonferroni approximation was used to convert the intractable joint chance constraints into solvable individual ones. Reference \cite{8528856} proposed a joint chance-constrained linearized DistFlow model to schedule reactive power compensation for distribution networks. References \cite{8060613, 8626040} combined JCC-OPF with a scenario approach to approximately reformulate the probability constraints into tractable deterministic forms. Nevertheless, most existing works still face two challenges:
\begin{enumerate}
\item Most published works, including \cite{9122389,8355588,8662704,8528856,8060613, 8626040}, are based on power flow equations, which require accurate network parameters, including the network topology and the impedance of each line. However, these parameters are often unavailable in many distribution networks due to unrecorded topology changes or inaccurate data maintenance \cite{7875102}.
\item The impacts of uncertainties on bus voltages and branch currents are hard to quantify because the OPF model is non-convex. Existing papers usually introduce approximations (e.g. the linearized DistFlow used in \cite{8528856}) or relaxations (e.g. the semi-definite relaxation used in \cite{8060613, 8626040}) so that the impacts of uncertainties are convenient to describe. However, these approximation or relaxation models may lead to overly conservative or even infeasible solutions. For instance, in a radial distribution network with high DG penetration, reverse power flows may occur. In that case, the semi-definite relaxation, which is equivalent to a second-order cone (SOCP) relaxation in radial networks, is not exact and may not ensure feasibility \cite{6815671}.
\end{enumerate}
Since collecting historical data (e.g. power injections, bus voltages, and branch currents) has become easier and cheaper, learning-based methods offer a potential way to bypass the above two challenges: they can train tractable surrogate models, without the network parameters, to replace the non-convex power flow model \cite{9265482}. Generally speaking, existing learning-based methods can be divided into the following three categories.
\subsubsection{Learn optimal solutions}
Methods in this category usually train neural networks to directly learn the optimal solution of OPF.
For example, reference \cite{9205647} trained a neural network to build a mapping from power demands to the optimal solution of DC OPF. In references \cite{chatzos2020high,9335481}, this learning-based method was combined with the Lagrangian dual approach to improve the feasibility of solutions. References \cite{8810819,9599403} trained neural networks to learn active constraints of OPF. Then, optimal solutions were obtained by solving the equations formed by these active constraints. Generally speaking, these methods could dramatically reduce the solving time of OPF. However, they need optimal solutions of OPF as the training labels, so the network parameters are still indispensable.
\subsubsection{Learn feasibility conditions}
Methods in this category usually learn to replicate OPF constraints by training neural networks. For instance, in references \cite{9302963,venzke2020neural}, binary classifiers were trained to judge whether a given strategy can satisfy all constraints or not. Then, the trained classifiers were equivalently reformulated as mixed-integer linear constraints so that the OPF problem can be replicated with no need for building any power flow model. Reference \cite{9502573} replaced the binary classifiers with a regression neural network that can predict the maximum constraint violation to improve the feasibility of solutions. The above methods only require solutions of power flow equations instead of optimal solutions of OPF as training labels, so the requirement of network parameters can be bypassed. Moreover, desirable optimality can be also achieved since the mixed-integer linear replication of OPF can be efficiently solved by the Branch-and-Bound algorithm.
However, it is difficult for these methods to quantitatively evaluate the impacts of uncertainties from DGs (inputs of neural networks) on the constraint violations (outputs of neural networks) because of the nonlinear activation functions in neural networks. Thus, they are not applicable to JCC-OPF.
\subsubsection{Reinforcement learning}
Reinforcement learning (RL) trains agents to act in a specific environment so as to maximize the cumulative reward (e.g. the opposite of the energy purchased from upper-level grids). In reference \cite{9069289}, RL was applied to solve OPF, with the Lagrangian dual approach combined to improve feasibility. In reference \cite{9275611}, behavior cloning was combined with RL to generate a desirable initial start so that the training process of agents can be accelerated. However, RL relies on ``trial and error" to train agents, which may be unacceptable in the practical operation of distribution networks.
Moreover, it is also challenging for all the learning-based methods above to handle joint chance constraints. If they want to learn the characteristics of joint chance constraints and train surrogate models to replace them, then their training sets must contain enough samples of statistical results (e.g., the quantile of constraint violations). However, these samples are difficult to collect because only realizations of uncertainties instead of the quantile can be observed in practice. In fact, to the best of our knowledge, none of these learning-based methods have been successfully extended to the JCC-OPF problem.
To overcome the aforementioned challenges, this paper proposes a novel learning-based surrogate model for JCC-OPF with renewable generation. Two pre-processing steps, i.e., data augmentation and calibration, are further designed to improve the performance of the proposed model. The specific contributions are threefold:
\begin{enumerate}
\item We propose a learning-based surrogate model for JCC-OPF. The proposed model first re-expresses joint chance constraints in quantile-based forms and introduces deep quantile regression to replicate them, where a multi-layer perceptron (MLP) is trained based on a special loss function to predict the quantile of the maximum constraint violation. Then, another MLP is trained based on mean squared errors to predict the expected power loss. By reformulating these two MLPs into mixed-integer linear constraints, the proposed surrogate model can be established. Since the surrogate model only requires historical data to train MLPs but does not need to build exact power flow models, the requirement of network parameters can be bypassed. Moreover, this model can be efficiently solved by the Branch-and-Bound algorithm with guaranteed optimality.
\item Considering that the historical dataset may not contain enough training samples to reflect the true distribution of constraint violations, the prediction accuracy of the quantile regression may be undesirable. To address this issue, a data augmentation step is designed. This step uses the historical dataset to train a regressor as a simulator based on XGBoost. With this simulator, more training samples can be generated for the previous MLPs to improve the accuracy of the quantile regression.
\item Since the deep quantile regression may have prediction errors and harm the feasibility of solutions, a calibration step is further designed.
In this step, we first demonstrate that the underestimation for the quantile of constraint violations may lead to infeasible solutions. Then, a positive constant, i.e., calibration parameter, is designed to calibrate the outputs of the deep quantile regression to avoid the harmful underestimation so that the feasibility of solutions can be improved.
\end{enumerate}
The remainder of this paper is organized as follows. Section \ref{sec_formulation} describes the formulation of the JCC-OPF problem. Section \ref{sec_solution} introduces the proposed learning-based surrogate model in detail. Section \ref{sec_case} presents simulation results, and Section \ref{sec_conclusion} concludes this paper.
\section{Formulation of JCC-OPF} \label{sec_formulation}
This paper develops a learning-based surrogate model of the JCC-OPF problem for a distribution network without network parameters. In this section, we first present the detailed formulation of the JCC-OPF problem.
\subsubsection{Power injections}
By using $i \in \mathcal{V}$ to index buses, the active and reactive power injections on each bus, i.e., $\bm p \in \mathbb{R}^{|\mathcal{V}|}$ and $\bm q \in \mathbb{R}^{|\mathcal{V}|}$, can be expressed as:
\begin{align}
\bm p = - \bm p^\text{d} + \bm p^\text{DG}, \quad \bm q = - \bm q^\text{d} + \bm \phi * \bm p^\text{DG}, \label{eqn_injection}
\end{align}
where $\bm p^\text{d}$ and $\bm q^\text{d}$ represent the active and reactive power demands on each bus. Variable $\bm p^\text{DG}$ is the active power actually used from DGs. Parameter $\bm \phi$ is the ratio of each DG's reactive power to its actual active power. Operator $*$ denotes element-wise multiplication. The actually used active power $\bm p^\text{DG}$ can be expressed by:
\begin{align}
\bm p^\text{DG} = \bm \lambda * \bm G^\text{DG},
\end{align}
where $\bm \lambda$ and $\bm G^\text{DG}$ are the actual utilization rate and the maximum available power of the DGs, respectively. In practice, the value of $\bm G^\text{DG}$ is uncertain, which can be expressed as follows:
\begin{align}
\bm G^\text{DG} = \overline{\bm G}^\text{DG} * (\bm 1 + \bm \omega),
\end{align}
where $\overline{\bm G}^\text{DG}$ represents the nominal available DG power obtained by forecasts and $\bm \omega$ is the corresponding relative uncertainty.
\subsubsection{Power flow model}
The power flow model of a radial network can be expressed by DistFlow \cite{19266}, as follows:
\begin{align}
\begin{cases}
\sum_{k \in \mathcal{C}_{j}} P_{jk} = p_{j} + P_{ij} - r_{ij}I_{ij}^2, \\
\sum_{k \in \mathcal{C}_{j}} Q_{jk} = q_{j} + Q_{ij} - x_{ij}I_{ij}^2, \\
V_{j}^2=V_{i}^2-2(r_{ij}P_{ij}+x_{ij}Q_{ij})\\
\quad \quad \quad \quad \quad+ (r_{ij}^2 + x_{ij}^2)I_{ij}^2, \\
I_{ij}^2 = \frac{P_{ij}^2 + Q_{ij}^2}{V_{i}^2},\\
\end{cases} \forall (i,j) \in \mathcal{B}, \label{eqn_distflow}
\end{align}
where $P_{ij}$ and $Q_{ij}$ are the active and reactive power flows on branch $(i, j)$, respectively; $V_{i}$ and $I_{ij}$ are the magnitudes of the voltage at bus $i$ and of the current on branch $(i, j)$, respectively; $r_{ij}$ and $x_{ij}$ denote the resistance and reactance of branch $(i,j)$, respectively. Set $\mathcal{C}_{j}$ contains the indexes of the child buses of bus $j$. Set $\mathcal{B}$ is the index set of branches in the network.
\subsubsection{Security constraints}
To ensure operation security, the magnitudes of all bus voltages and branch currents must remain within their allowable ranges.
According to (\ref{eqn_injection})-(\ref{eqn_distflow}), the uncertainties from DGs also affect the bus voltages and branch currents. To better balance optimality and feasibility of OPF solutions, a joint chance constraint is employed to describe the voltage and current limitations:
\begin{align}
\mathbb{P}_{\bm \omega}\left( \bm V_\text{min} \leq \bm V \leq \bm V_\text{max}, \ \bm I\leq \bm I_\text{max} \right) \geq 1 - \epsilon, \label{eqn_JCC}
\end{align}
where $\bm V$ and $\bm I$ are the vector forms of $V_{i}$ and $I_{ij}$; $\epsilon$ is the risk parameter. Note here we use a joint chance constraint instead of individual ones because this joint manner can better guarantee the security of the entire system \cite{9122389}.
\subsubsection{Energy purchasing}
The energy purchasing from the upper-level grid $G$ is equal to the net power at the substation and can be calculated based on the power balance of a distribution network:
\begin{align}
G = \bm 1^\intercal \bm p^\text{d} + p^\text{loss} - \bm 1^\intercal (\bm \lambda * \bm G^\text{DG}), \label{eqn_balance}
\end{align}
where $p^\text{loss}$ is the total power loss and can be calculated by:
\begin{align}
p^\text{loss} = \sum_{(i,j) \in \mathcal{B}} r_{ij} I_{ij}^2. \label{eqn_loss}
\end{align}
Finally, the JCC-OPF is formulated as:
\begin{align}
&\min_{\bm \lambda, G} \quad \mathbb{E}_{\bm \omega}(G),\quad \text{s.t.:} \text{ Eqs. (\ref{eqn_injection})-(\ref{eqn_loss})}. \tag{$\textbf{P1}$}
\end{align}
\section{Solution Methodology} \label{sec_solution}
As mentioned in Section \ref{sec_intro}, formulating \textbf{P1} can be challenging because the network parameters are often unavailable. Fortunately, collecting historical operation data becomes easier and cheaper due to the widespread use of smart meters. Therefore, we propose a learning-based surrogate model to address the aforementioned challenge. This model first introduces deep quantile regression to replicate the intractable joint chance constraints. Meanwhile, another neural network is trained to predict the expected power loss. Then, by reformulating the trained neural networks into mixed-integer linear constraints, \textbf{P1} can be replicated. Since the proposed model only requires historical data to train neural networks but does not need to build power flow models, the requirement of the network parameters can be bypassed.
\subsection{Deep quantile regression to replicate chance constraints}
\subsubsection{Motivation}
The joint chance constraint (\ref{eqn_JCC}) can be equivalently reformulated into quantile-based deterministic forms to eliminate the intractable probability operator.
Specifically, we define $\bm x$ as the nominal active and reactive power injections on each bus (except the slack bus):
\begin{align}
\bm x = [\bm p, \ \bm q]. \label{eqn_x_define}
\end{align}
We also define a new variable $h$ to denote the maximum violation of the OPF constraints:
\begin{align}
h(\bm x, \bm \omega) = \max \{\bm V_\text{min} - \bm V, \bm V - \bm V_\text{max}, \bm I - \bm I_\text{max}\}. \label{eqn_h_define}
\end{align}
Note that the impact of the DG uncertainty $\bm \omega$ is implicitly captured in the samples of $h$, because both the voltage $\bm V$ and the current $\bm I$ are affected by $\bm \omega$. Based on (\ref{eqn_h_define}), the joint chance constraint (\ref{eqn_JCC}) can be expressed as:
\begin{align}
\mathbb{P}_{\bm \omega}\left( h(\bm x, \bm \omega) \leq 0 \right) \geq 1 - \epsilon, \label{eqn_JCC2}
\end{align}
which can be further equivalently reformulated into the following quantile-based form:
\begin{align}
\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega)) \leq 0, \label{eqn_quantile_JCC}
\end{align}
where $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$ is the $1-\epsilon$ quantile of $h$ at a given $\bm x$:
\begin{align}
\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega)) = \inf\{y: \mathbb{P}_{\bm \omega}\left(h(\bm x, \bm \omega)\leq y \right)\geq 1-\epsilon\}. \label{eqn_quantile_definition}
\end{align}
According to (\ref{eqn_quantile_JCC}), if the mapping from $\bm x$ to $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$ can be accurately described by simple relations (e.g. linear functions), then the intractability of the joint chance constraint can be overcome. This motivates us to introduce a powerful deep learning technique, deep quantile regression, to predict the quantile of constraint violations.
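The equivalence between (\ref{eqn_JCC2}) and (\ref{eqn_quantile_JCC}) can be checked empirically. In the sketch below (illustrative only), normal samples stand in for realizations of $h(\bm x, \bm \omega)$ at a fixed $\bm x$, and the empirical $(1-\epsilon)$-quantile is compared with the empirical violation probability:

```python
import numpy as np

# Illustrative sketch: at a fixed x, the chance constraint P(h <= 0) >= 1-eps
# holds iff the (1-eps)-quantile of h is non-positive. The normal samples
# below are an invented stand-in for realizations of h(x, w).
rng = np.random.default_rng(0)
eps = 0.05
h_samples = rng.normal(loc=-0.5, scale=0.2, size=100_000)

q = np.quantile(h_samples, 1 - eps)   # empirical (1-eps)-quantile
viol_prob = np.mean(h_samples > 0)    # empirical P(h > 0)

assert (q <= 0) and (viol_prob <= eps)   # both criteria agree here
print(q, viol_prob)
```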
\subsubsection{Introduction of deep quantile regression}
Traditional regression is a process to model the relationship between dependent output and independent variables. For example, based on the dataset $\{(\bm x_n, \bm \omega_n, h_n)\}_{n \in \mathcal{N}}$ ($\mathcal{N}$ is the index set of samples), we can train a regression model $\hat h(\bm x, \bm \omega)$ to predict $h$ with a given $\bm x$ and $\bm \omega$.
The mean squared error is usually used as the loss function in traditional regression models:
\begin{align}
\text{Loss}^\text{R} = (h - \hat{h}(\bm x, \bm \omega))^2.
\end{align}
However, it is hard for traditional regression models to accurately predict the quantile $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$, because they need sufficient samples of the quantile as training labels, which are usually difficult to collect. In practice, for a specific $\bm x=\bm x_n$, we may only observe one realization of $h$, i.e., $h_n=h(\bm x_n, \bm \omega_n)$, instead of the quantile at $\bm x_n$ (other samples usually have different $\bm x$). Without enough training labels, traditional regression models cannot work well.
Conversely, deep quantile regression can directly predict the quantile based on realizations of uncertainties, i.e., $h_n$, instead of quantile samples \cite{hao2007quantile}. This advantage results from a specially designed loss function, as follows:
\begin{align}
\text{Loss}^\text{QR} = \mu \cdot(1-\epsilon-\mathbb{I}(\mu\leq 0)). \label{eqn_loss_quantile}
\end{align}
Here $\mu=h-\mathcal{\hat Q}^{1-\epsilon} (\bm x)$, where $\mathcal{\hat Q}^{1-\epsilon} (\bm x)$ is the prediction of $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$ given by the quantile regression. Symbol $\mathbb{I}(\cdot)$ denotes the indicator function. The following \textbf{Proposition} proves that we can predict the quantile without the samples of $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$ based on the loss function (\ref{eqn_loss_quantile}).
\begin{proposition} \label{proposition_1}
The quantile $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$ can be obtained by minimizing the expectation of (\ref{eqn_loss_quantile}), as follows \cite{hao2007quantile}:
\begin{align}
\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega)) = \argmin_{\mathcal{\hat Q}^{1-\epsilon} (\bm x)} \mathbb E (\rm{Loss}^{\rm{QR}}). \label{eqn_proposition_1}
\end{align}
\end{proposition}
\emph{Proof}: See Appendix \ref{app_1}.
Based on \textbf{Proposition} \ref{proposition_1}, we can train a quantile regression neural network on the historical dataset $\{(\bm x_n, h_n)\}_{n \in \mathcal{N}}$ to represent the mapping from $\bm x$ to the target quantile $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$. Note that the label $h_n$ in the historical dataset is noisy due to the uncertainty $\bm \omega$.
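A quick numeric check of \textbf{Proposition} \ref{proposition_1} (a sketch under an assumed Exp(1) distribution for $h$, not part of the paper's method): minimizing the empirical pinball loss (\ref{eqn_loss_quantile}) over a constant prediction recovers the $(1-\epsilon)$-quantile from noisy realizations alone:

```python
import numpy as np

# Sketch checking Proposition 1: the constant minimizing the empirical
# pinball loss equals the (1-eps)-quantile, using only noisy realizations.
# Exp(1) is an invented stand-in distribution for h.
rng = np.random.default_rng(1)
eps = 0.1
h = rng.exponential(scale=1.0, size=100_000)

def pinball(pred, h, tau):
    mu = h - pred
    return np.mean(mu * (tau - (mu <= 0)))   # loss in Eq. (loss_quantile)

grid = np.linspace(1.5, 3.0, 601)
q_hat = grid[int(np.argmin([pinball(g, h, 1 - eps) for g in grid]))]

q_true = -np.log(eps)    # true (1-eps)-quantile of Exp(1)
print(q_hat, q_true)
```

A grid search stands in for gradient descent here; a quantile-MLP performs the same minimization over its weights instead of over a constant.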
\subsubsection{Replication of joint chance constraints}
With the noisy dataset $\{(\bm x_n, h_n)\}_{n \in \mathcal{N}}$, we can train a quantile regression network $\mathcal{\hat Q}^{1-\epsilon} (\bm x)$ to predict the target quantile $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$. Then, Eq. (\ref{eqn_quantile_JCC}) is replaced by:
\begin{align}
\mathcal{\hat Q}^{1-\epsilon} (\bm x) \leq 0. \label{eqn_quantile_JCC2}
\end{align}
In this paper, an MLP with ReLU activation functions is chosen as the quantile regression model. For convenience, this MLP is called the ``quantile-MLP". A typical MLP is composed of one input layer, $|\mathcal{L}|$ hidden layers, and one output layer, as shown in Fig. \ref{fig_MLP}. Each neuron consists of a linear mapping and a nonlinear ReLU function. Using $l$ to index the hidden layers ($l \in \mathcal{L}$), the target quantile can be estimated by the forward propagation of the trained quantile-MLP:
\begin{align}
&\bm s^{0} = \bm x, \label{eqn_input}\\
\begin{split}
&\bm z^{l} = \bm W^{l} \bm s^{l-1} + \bm b^{l},
\forall l \in \mathcal{L}, \label{eqn_z}
\end{split} \\
&\bm s^{l} = \max(\bm z^{l}, 0), \quad \forall l \in \mathcal{L}, \label{eqn_h} \\
&\hat{\mathcal{Q}}^{1-\epsilon}(\bm x) = \bm W^{|\mathcal{L}|+1} \bm s^{|\mathcal{L}|} + \bm b^{|\mathcal{L}|+1}. \label{eqn_output}
\end{align}
Eq. (\ref{eqn_input}) defines the input layer; Eqs. (\ref{eqn_z}) and (\ref{eqn_h}) represent the linear mapping and ReLU in hidden layers, respectively; Eq. (\ref{eqn_output}) defines the output layer. Vector $\bm z^{l}$ and $\bm s^{l}$ are the outputs of the linear mapping and activation function in hidden layer $l$; matrix $\bm W^l$ and vector $\bm b^{l}$ are the weights and bias of layer $l$, which are parameters to be learned; $\hat{\mathcal{Q}}^{1-\epsilon}(\bm x)$ is the estimation of $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$ given by the quantile-MLP.
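The forward propagation (\ref{eqn_input})-(\ref{eqn_output}) amounts to alternating affine maps and ReLUs. A minimal NumPy sketch (the weights are random placeholders; in the paper $\bm W^l$ and $\bm b^l$ come from training on the $(\bm x_n, h_n)$ pairs):

```python
import numpy as np

# Forward propagation mirroring the quantile-MLP equations. The weights are
# random placeholders, not trained values.
rng = np.random.default_rng(2)
sizes = [4, 8, 8, 1]    # input dim, two hidden layers, scalar output
W = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
b = [rng.normal(size=m) for m in sizes[1:]]

def forward(x):
    s = x                                    # s^0 = x (input layer)
    for Wl, bl in zip(W[:-1], b[:-1]):
        s = np.maximum(Wl @ s + bl, 0.0)     # linear mapping + ReLU
    return W[-1] @ s + b[-1]                 # linear output layer

q_hat = forward(np.array([0.5, -0.2, 0.1, 0.8]))
print(q_hat)
```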
\begin{figure}
\caption{Structure of an example MLP with 3 hidden layers, where $z_n^l$ and $h_n^l$ denote the outputs of the linear mapping and ReLU of the $n$-th neuron in layer $l$, respectively.
}
\label{fig_MLP}
\end{figure}
\begin{remark}
We can also let the quantile-MLP output multiple quantile values at once to improve its practicability. To realize this, we only need to change its forward propagation into:
\begin{align}
\begin{cases}
\text{Eqs. (\ref{eqn_input})-(\ref{eqn_h})},\\
\left[\hat{\mathcal{Q}}^{1-\epsilon_i}(\bm x), \forall i \in \mathcal{I}\right]^{\intercal} = \bm W^{|\mathcal{L}|+1} \bm s^{|\mathcal{L}|} + \bm b^{|\mathcal{L}|+1}, \label{eqn_output2}
\end{cases}
\end{align}
where $\mathcal{I}$ is the index set of risk parameters. Then, even if multiple quantile values are required, only one single MLP needs to be trained.
\end{remark}
\subsection{Power loss calculation}
Based on (\ref{eqn_balance}), the objective of \textbf{P1} can be calculated by:
\begin{align}
\mathbb E_{\bm \omega} (G) &= \mathbb E_{\bm \omega} \left(\bm 1^\intercal \bm p^\text{d} + p^\text{loss} - \bm 1^\intercal (\bm \lambda * \bm G^\text{DG})\right), \notag \\
& = \bm 1^\intercal \bm p^\text{d} + \mathbb E_{\bm \omega} (p^\text{loss}) - \bm 1^\intercal (\bm \lambda * \overline{\bm G}^\text{DG}). \label{eqn_balance2}
\end{align}
Eq. (\ref{eqn_balance2}) indicates that the expected power loss is required to evaluate \textbf{P1}'s objective. According to (\ref{eqn_loss}), the power loss calculation requires the magnitudes of branch currents, so the power flow model would still be required. To bypass this requirement, another MLP (the ``loss-MLP") is trained to predict the expected power loss.
The samples of nominal power injection $\bm x$ and power loss $p^\text{loss}$ are treated as the features and noisy labels ($p^\text{loss}$ is affected by the uncertainty $\bm \omega$), respectively. The mean squared error is employed as the loss function, as follows:
\begin{align}
\text{Loss}^\text{pl} = (p^\text{loss} - \hat{p}^\text{loss}(\bm x))^2, \label{eqn_loss_powerLoss}
\end{align}
where $\hat{p}^\text{loss}(\bm x)$ is the prediction given by the loss-MLP.
\begin{proposition} \label{proposition_2}
The expected power loss can be obtained by minimizing the expectation of (\ref{eqn_loss_powerLoss}), as follows:
\begin{align}
\mathbb E_{\bm \omega} (p^\text{loss}) = \argmin_{\hat{p}^\text{loss}(\bm x)} \mathbb E (\rm{Loss}^{\rm{pl}}). \label{eqn_proposition_22}
\end{align}
\end{proposition}
\emph{Proof}: See Appendix \ref{app_2}.
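\textbf{Proposition} \ref{proposition_2} can likewise be checked numerically: over constant predictions, the empirical MSE (\ref{eqn_loss_powerLoss}) is minimized at the sample mean of the noisy labels (illustrative sketch with invented per-unit values):

```python
import numpy as np

# Sketch checking Proposition 2: over constant predictions, the empirical
# MSE is minimized at the sample mean, so MSE training targets E[p_loss|x].
# The Gaussian losses are invented illustrative values (p.u.).
rng = np.random.default_rng(3)
p_loss = 0.05 + 0.01 * rng.standard_normal(100_000)   # noisy losses at one x

grid = np.linspace(0.0, 0.1, 1001)
best = grid[int(np.argmin([np.mean((p_loss - g) ** 2) for g in grid]))]
print(best, p_loss.mean())
```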
After training, the expected power loss can be also predicted based on the forward propagation of the loss-MLP:
\begin{align}
&\bm s_{pl}^{0} = \bm x, \label{eqn_input_loss}\\
\begin{split}
&\bm z_{pl}^{l} = \bm W_{pl}^{l} \bm s_{pl}^{l-1} + \bm b_{pl}^{l},
\forall l \in \mathcal{L}_{pl}, \label{eqn_z_loss}
\end{split} \\
&\bm s_{pl}^{l} = \max(\bm z_{pl}^{l}, 0), \quad \forall l \in \mathcal{L}_{pl}, \label{eqn_h_loss} \\
&\hat p^\text{loss} (\bm x) = \bm W_{pl}^{|\mathcal{L}_{pl}|+1} \bm s_{pl}^{|\mathcal{L}_{pl}|} + \bm b_{pl}^{|\mathcal{L}_{pl}|+1}, \label{eqn_output_loss}
\end{align}
where the subscript $pl$ is used to mark those variables belonging to the loss-MLP.
\subsection{Tractable reformulation of MLPs}
Once the two MLPs are trained, the quantile of the maximum constraint violation and expected power loss can be predicted by (\ref{eqn_input})-(\ref{eqn_output}) and (\ref{eqn_input_loss})-(\ref{eqn_output_loss}) with no need for building any power flow model. However, Eqs. (\ref{eqn_h}) and (\ref{eqn_h_loss}) are intractable for off-the-shelf solvers due to the maximum operator. To address this, the Big-M reformulation used in \cite{9302963,venzke2020neural,9502573} is employed to convert these intractable constraints into mixed-integer linear forms. Specifically, by introducing auxiliary variables $\bm r^{l}$ and $\bm \mu^{l}$ for each hidden layer, Eqs. (\ref{eqn_z})-(\ref{eqn_h}) can be reformulated as:
\begin{align}
&\left\{
\begin{aligned}
&\bm s^{l} - \bm r^{l}=\bm W^{l} \bm s^{l-1} + \bm b^{l}, \\
&0 \leq \bm s^{l} \leq M \cdot \bm \mu^{l},\\
&0 \leq \bm r^{l} \leq M\cdot(1-\bm \mu^{l}),\\
&\bm \mu^{l} \in \{0,1\}^{N_l},
\end{aligned}\right.
\label{eqn_reformulation}
\end{align}
where $N_l$ denotes the neuron number in the $l$-th hidden layer of the quantile-MLP. Similarly, Eqs. (\ref{eqn_z_loss})-(\ref{eqn_h_loss}) can be equivalently converted into the same form of (\ref{eqn_reformulation}), which is recorded as follows for convenience:
\begin{align}
\text{\{Eq. (\ref{eqn_reformulation})\}}_{pl}. \label{eqn_reformulation_loss}
\end{align}
Then, the JCC-OPF problem, i.e., \textbf{P1}, can be replicated by the following learning-based surrogate model:
\begin{align}
&\min_{\bm \lambda, G} \quad \mathbb E_{\bm \omega} (G) \tag{$\textbf{P2}$},\\
&\begin{array}{r@{\quad}r@{}l@{\quad}l}
\text{s.t.} &&\text{Eqs. (\ref{eqn_x_define}), (\ref{eqn_quantile_JCC2}), (\ref{eqn_input}), (\ref{eqn_output}), (\ref{eqn_balance2}), (\ref{eqn_input_loss}), and (\ref{eqn_output_loss})-(\ref{eqn_reformulation_loss}).}
\end{array} \notag
\end{align}
The number of auxiliary binary variables in \textbf{P2} equals the total number of neurons in the two MLPs. In our tests, the two MLPs already achieve desirable prediction accuracy with only a few neurons, so the computational performance of \textbf{P2} is acceptable. This is further verified by the simulations in Section \ref{sec_sensitive}.
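The exactness of the big-M encoding (\ref{eqn_reformulation}) can be verified directly: for any pre-activation $z$ with $|z| \leq M$, the assignment $s = \max(z,0)$, $r = \max(-z,0)$, $\mu = \mathbb{I}(z>0)$ satisfies all four constraints. A sanity-check sketch:

```python
import numpy as np

# Sanity check of the big-M encoding: for any pre-activation z with |z| < M,
# s = max(z, 0), r = max(-z, 0), mu = 1[z > 0] satisfies the four
# mixed-integer constraints, so the MILP reproduces the ReLU exactly.
rng = np.random.default_rng(4)
z = rng.normal(scale=2.0, size=1000)      # stand-in for W s_prev + b
M = np.abs(z).max() + 1.0                 # any valid bound on |z|

s = np.maximum(z, 0.0)
r = np.maximum(-z, 0.0)
mu = (z > 0).astype(float)

assert np.allclose(s - r, z)                        # s - r = z
assert np.all((0 <= s) & (s <= M * mu))             # s bounded by M*mu
assert np.all((0 <= r) & (r <= M * (1 - mu)))       # r bounded by M*(1-mu)
print("big-M ReLU encoding verified for", z.size, "neurons")
```

Conversely, any feasible $(s, r, \mu)$ forces $s = \max(z, 0)$, since $\mu \in \{0,1\}$ makes exactly one of $s$ and $r$ vanish.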
\begin{remark}
The proposed surrogate model only requires historical samples to train the quantile-MLP and loss-MLP but does not need to build exact power flow model. Thus, the requirement of the network parameters can be bypassed.
\end{remark}
\subsection{Pre-processing to improve performance}
\subsubsection{Motivation}
If the quantile-MLP is directly trained based on the historical dataset $\{\bm x_n, h_n\}_{n \in \mathcal{N}}$, its prediction accuracy may be poor because the historical dataset may not have enough samples to precisely reflect the true distribution of $h$ at a given $\bm x$. For example, we may only find one sample $\{\bm x_n, h_n\}$ at $\bm x = \bm x_n$ (other samples usually have different $\bm x$). With only one sample, it is hard to learn the true distribution of $h$ at $\bm x = \bm x_n$. Moreover, even if we have an ideal training set to train the quantile-MLP, prediction errors are still inevitable, which may harm the feasibility of the proposed model. To overcome the above challenges, two pre-processing steps, i.e., data augmentation and calibration, are designed.
\subsubsection{Data augmentation}
We design a data augmentation step to construct an ideal training set for the quantile-MLP. Its key idea is simple: train a simulator based on the historical data and use this simulator to generate more samples as the training set. The detailed procedure is summarized in Algorithm \ref{tab_algorithm1}. Then, at a given $\bm x= \bm x^{(k)}$, multiple labels, i.e., $\{h^{(k)}_{n}\}_{n=1}^{N_\omega}$, can be found. As a result, the distribution of $h$ can be explicitly described. Here the XGBoost regressor is used as our simulator due to its high prediction accuracy \cite{chen2016xgboost}.
\begin{algorithm}
\begin{small}
{
\caption{Data augmentation}
\label{tab_algorithm1}
\begin{tabular}{p{0.35cm}p{7.6cm}}
$01$&\textbf{Simulator training:} train a regressor as our simulator based on the historical dataset $\{\bm x_n, \bm \omega_n, h_n\}_{n \in \mathcal{N}}$. Its input and output are $(\bm x, \bm \omega)$ and $h(\bm x, \bm \omega)$, respectively; \\
$02$& \textbf{For} $k \in \mathcal{K}= [1,2,\cdots,K]$ \\
$03$&\begin{adjustwidth}{3mm}{0cm}Randomly select one $\bm x$ and multiple $\bm \omega$ from the historical dataset to construct different pairs, and record them as $\{(\bm x^{(k)}, \bm \omega_n^{(k)})\}_{n=1}^{N_{\omega}}$ ($N_\omega$ is the number of $\bm \omega$ samples chosen); \end{adjustwidth}\\
$04$&\begin{adjustwidth}{3mm}{0cm} Give the above pairs as inputs to the simulator to predict $h$, and record the predictions as $\{\hat h^{(k)}_n\}_{n=1}^{N_{\omega}}$; \end{adjustwidth}\\
$05$ &\textbf{End for}\\
$06$ &\textbf{Data collection:} Collect the generated pairs and predictions to construct a new dataset, i.e., $\{\bm x^{(k)}, \bm \omega_n^{(k)},\hat h^{(k)}_n \}_{n=1}^{N_{\omega}}, \forall k \in \mathcal{K}$. By removing $\bm \omega_n^{(k)}$, we can get the ideal training set of the quantile-MLP, i.e., $\{\bm x^{(k)},\hat h^{(k)}_n \}_{n=1}^{N_{\omega}}, \forall k \in \mathcal{K}$.\\
\end{tabular}}
\end{small}
\end{algorithm}
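The loop in \textbf{Algorithm} \ref{tab_algorithm1} can be sketched as follows. Here a least-squares fit on polynomial features stands in for the paper's XGBoost simulator, and the synthetic plant $h(x,\omega)=x^2+\omega$ is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic historical dataset {(x_n, w_n, h_n)}: the "true" plant
# h(x, w) = x^2 + w is illustrative and unknown to the method.
N = 2000
x_hist = rng.uniform(-1.0, 1.0, N)
w_hist = rng.normal(0.0, 0.1, N)
h_hist = x_hist**2 + w_hist

# Step 01 -- simulator training (least-squares on polynomial features;
# any regressor, e.g. XGBoost, fits the same slot).
def features(x, w):
    return np.column_stack([np.ones_like(x), x, x**2, w])

theta, *_ = np.linalg.lstsq(features(x_hist, w_hist), h_hist, rcond=None)
simulate = lambda x, w: features(x, w) @ theta

# Steps 02-05 -- pair one decision x with N_w uncertainty samples drawn
# from the history and query the simulator, K times.
K, N_w = 100, 1000
aug_x, aug_h = [], []
for _ in range(K):
    xk = rng.choice(x_hist)
    wk = rng.choice(w_hist, size=N_w)
    aug_x.append(np.full(N_w, xk))
    aug_h.append(simulate(np.full(N_w, xk), wk))

# Step 06 -- drop w: the augmented quantile-MLP training set is {(x, h_hat)}.
X_train = np.concatenate(aug_x)
y_train = np.concatenate(aug_h)
```

Each of the $K$ blocks shares one $\bm x$ with $N_\omega$ labels, so the conditional distribution of $h$ at that $\bm x$ is now represented by many samples rather than one.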
\subsubsection{Calibration}
According to (\ref{eqn_quantile_JCC})-(\ref{eqn_quantile_JCC2}), if the quantile-MLP overestimates the target quantile, i.e., $\mathcal{\hat Q}^{1-\epsilon} (\bm x) \geq \mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$, then only the optimality of solutions will be harmed but the feasibility can be maintained. However, if the quantile-MLP underestimates the target quantile, i.e., $\mathcal{\hat Q}^{1-\epsilon} (\bm x) \leq \mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$, then the feasibility may not be guaranteed. Thus, underestimation is more harmful and should be avoided. Based on this observation, a calibration step is developed. In this step, we design a special positive calibration parameter $\rho$ to calibrate the prediction of the quantile-MLP $\mathcal{\hat Q}^{1-\epsilon}(\bm x)$ so that the harmful underestimation can be avoided. Specifically, parameter $\rho$ is designed as the maximum underestimation of the simulator in the historical dataset $\{\bm x_n, \bm \omega_n, h_n\}_{n \in \mathcal{N}}$:
\begin{align}
\rho = \max_{n \in \mathcal{N}} \{h_n - \hat h_n\}, \label{eqn_rho_define}
\end{align}
where $\hat h_n$ is the prediction of the simulator based on $(\bm x_n, \bm \omega_n)$.
According to (\ref{eqn_rho_define}), we must have:
\begin{align}
\hat h_n + \rho \geq h_n, \quad \forall n \in \mathcal{N}.
\end{align}
Then, in most cases, we have:
\begin{align}
&\mathcal{Q}_{\bm \omega}^{1-\epsilon}(\hat h + \rho) = \mathcal{Q}_{\bm \omega}^{1-\epsilon}(\hat h) + \rho \geq \mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega)), \label{eqn_overestimation}
\end{align}
where ``$=$" holds due to the translation invariance of the quantile. By regarding $\hat h$ as the training label of the quantile-MLP, Eq. (\ref{eqn_overestimation}) can be rewritten as:
\begin{align}
\mathcal{\hat Q}^{1-\epsilon}(\bm x) + \rho \geq \mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega)), \label{eqn_overestimation_2}
\end{align}
where $\mathcal{\hat Q}^{1-\epsilon}(\bm x)$ is the prediction of the quantile $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(\hat h)$ given by the quantile-MLP. Note that this prediction can achieve desirable accuracy because we can generate sufficient samples of $\hat h$ with the simulator built in the data augmentation step.
Finally, the quantile-based form of the joint chance constraint (\ref{eqn_quantile_JCC}) can be inner-approximated by:
\begin{align}
\mathcal{\hat Q}^{1-\epsilon}(\bm x) + \rho \leq 0.
\label{eqn_quantile_constraint3}
\end{align}
According to (\ref{eqn_overestimation_2}), Eq. (\ref{eqn_quantile_constraint3}) is an inner approximation of (\ref{eqn_quantile_JCC}). Thus, the aforementioned harmful underestimation can be avoided and the feasibility of solutions can be guaranteed.
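The calibration step can be sketched numerically as follows; the data and the uniform error model of the simulator are synthetic, chosen only so that underestimation actually occurs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: historical labels h_n and simulator predictions
# hhat_n; the simulator under-/over-estimates by up to 0.3 / 0.1.
h = rng.normal(0.0, 1.0, 500)
hhat = h - rng.uniform(-0.1, 0.3, 500)

# Calibration parameter: maximum underestimation over the history.
rho = np.max(h - hhat)

# Shifting predictions up by rho removes underestimation pointwise, and by
# translation invariance the same shift lifts any empirical quantile.
eps = 0.1
q_hat_calibrated = np.quantile(hhat, 1 - eps) + rho
q_true = np.quantile(h, 1 - eps)
```

Since $\hat h_n + \rho \geq h_n$ for every historical sample, the calibrated quantile estimate dominates the true empirical quantile, which is exactly the overestimation property the feasibility argument needs.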
\subsection{Whole procedure of the proposed model}
By applying the two pre-processing steps, \textbf{P2} can be replaced by the following surrogate model \textbf{P3}:
\begin{align}
&\min_{\bm \lambda, G} \quad \mathbb E_{\bm \omega} (G) \tag{$\textbf{P3}$},\\
&\begin{array}{r@{\quad}r@{}l@{\quad}l}
\text{s.t.:} &&\text{Eqs. (\ref{eqn_x_define}), (\ref{eqn_input}), (\ref{eqn_output}), (\ref{eqn_balance2}), (\ref{eqn_input_loss}), (\ref{eqn_output_loss})-(\ref{eqn_reformulation_loss}), and (\ref{eqn_quantile_constraint3}).}
\end{array} \notag
\end{align}
Fig. \ref{fig_procedure} illustrates the whole procedure for establishing the proposed learning-based surrogate model. Specifically, we first leverage historical data to train a simulator for data augmentation. Then, the quantile-MLP is trained to predict the target quantile $\mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega))$. A calibration step is further applied to the quantile-MLP to improve the feasibility of solutions. Meanwhile, another MLP, i.e., the loss-MLP, is trained based on the historical data to predict the expected power loss. Finally, by reformulating these two MLPs into solvable mixed-integer forms, the proposed surrogate model can be established to replicate the JCC-OPF problem without the network parameters.
\begin{figure}
\caption{The whole procedure to establish the proposed surrogate model.
}
\label{fig_procedure}
\end{figure}
In the proposed model, three regressors are trained. For convenience, we summarize their inputs, outputs, and training sets in Table \ref{tab_regressor}. The required historical samples include nominal active/reactive power injections, bus voltages, branch currents, and nominal/actual available DGs' outputs in the past.
\begin{table}
\small
\centering
\caption{Descriptions of the trained regressors}
\begin{tabular}{cccc}
\hline
\textbf{Regressors} & \textbf{Inputs} & \textbf{Outputs} & \textbf{Training set} \\ \hline
Simulator & $(\bm x,\bm \omega)$ & $\hat{h}$ & Historical dataset \\
Quantile-MLP & $\bm x$ & ${\hat{\mathcal{Q}}}^{1-\epsilon}(\bm x)$ & Augmented dataset \\
Loss-MLP & $\bm x$ & $\hat p^\text{loss}$ & Historical dataset \\ \hline
\end{tabular} \label{tab_regressor}
\end{table}
\section{Case study} \label{sec_case}
\subsection{Simulation set up}
We implement two case studies, based on the IEEE 33- and 123-bus systems, respectively. The first case study has two DGs, while the second one contains four DGs. Their structures are illustrated in Fig. \ref{fig_33bus}. The slack bus voltages, i.e., $V_1$, in the two case studies are 12.66 kV and 4.16 kV, respectively. Other parameters used in the simulations are summarized in Table \ref{tab_parameter}.
\begin{figure}
\caption{Structures of the (a) 33-bus system and (b) 123-bus system.}
\label{fig_33bus}
\end{figure}
\begin{table}
\small
\centering
\caption{Parameters in simulations}
\begin{threeparttable}
\begin{tabular}{ccccc}
\hline
\rule{0pt}{11pt}
Case studies & Parameters & Value &Parameters & Value\\
\hline
\rule{0pt}{10pt}
\multirow{3}{*}{33-bus} & $\overline{G}_i^\text{DG}$ & 2MW & $V_\text{i, max}$& 1.1 p.u.\\
&$I_\text{b, max}$ & 0.249kA & $V_\text{i, min}$& 0.9 p.u. \\
& $\bm \phi$ & 0 & &\\
\hline
\multirow{2}{*}{123-bus} & $\overline{G}_i^\text{DG}$ & 1.5MW & $V_\text{i, max}$& 1.1 p.u.\\
&$I_\text{b, max}$ & 0.65kA & $V_\text{i, min}$& 0.9 p.u. \\
& $\bm \phi$ & 0.33 & &\\
\hline
\end{tabular}\label{tab_parameter}
\end{threeparttable}
\end{table}
We conduct power flow simulations based on Pandapower, a power system simulation toolbox in the Python environment \cite{8344496}, to generate the historical data. In Pandapower, the power flow calculation is based on the full AC power flow model. During the simulations, we first randomly generate 10,000 pairs of nominal bus power injections and uncertain levels of DGs' outputs, i.e., $(\bm x, \bm \omega)$. Here the nominal power injection $\bm x$ is generated by a uniform distribution between its minimum and maximum allowable values. Based on these pairs, we can calculate the actual power injections on each bus, and then the bus voltages $\bm V$ and branch currents $\bm I$ can be calculated by Pandapower. With $\bm V$ and $\bm I$, the power loss $p^\text{loss}$ and maximum constraint violation $h(\bm x, \bm \omega)$ can be obtained based on (\ref{eqn_loss})-(\ref{eqn_h_define}). Then, following \textbf{Algorithm} \ref{tab_algorithm1}, we conduct the data augmentation to generate the training set for the quantile-MLP. The two parameters $N_{\omega}$ and $K$ in \textbf{Algorithm} \ref{tab_algorithm1} are set as 1000 and 100, respectively.
To demonstrate the generalization ability of the proposed model, different distributions are used to generate the samples of the uncertain level $\bm \omega$, as follows:
\begin{enumerate}
\item \textbf{Case 1}: The samples of $\bm \omega$ are generated by a Gaussian distribution, i.e., $\bm \omega \sim \text{Gaussian}(0,0.1)$;
\item \textbf{Case 2}: The samples of $\bm \omega$ are generated based on a Beta distributed uncertainty $\bm \omega^{'}$, i.e., $\bm \omega = \kappa^\text{Beta}(\bm \omega^{'} - \bm \mu^\text{beta})$, where $ \bm \omega^{'} \sim \text{Beta}(2,6)$;
\item \textbf{Case 3}: The samples of $\bm \omega$ are generated based on a Weibull distributed uncertainty $\bm \omega^{'}$, i.e., $\bm \omega = \kappa^\text{Weibull}(\bm \omega^{'} - \bm \mu^\text{Weibull})$, where $\bm \omega^{'} \sim \text{Weibull}(1,5)$.
\end{enumerate}
The scaling factor $\kappa^\text{Beta}/\kappa^\text{Weibull}$ is designed to make the magnitudes of the generated $\bm \omega$ more realistic, while the constant $\bm \mu^\text{Beta}/\bm \mu^\text{Weibull}$ is set as the expectation of $\bm \omega^{'}$ so that the expectation of $\bm \omega$ remains zero. All these samples have been uploaded to \cite{samples2022}.
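The center-and-rescale construction for \textbf{Cases 1}-\textbf{3} can be sketched in NumPy as follows. The $\kappa$ values here are illustrative (the paper does not state its factors), and the Weibull shape is taken as 5, since the $(1,5)$ parameter order is ambiguous:

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(2)
n = 100_000

# Case 1: zero-mean Gaussian uncertainty.
w_gauss = rng.normal(0.0, 0.1, n)

# Cases 2-3: draw Beta(2,6) / Weibull(shape=5) samples, subtract the
# distribution mean so the uncertainty is zero-mean, then rescale.
kappa_beta, kappa_weib = 0.5, 0.1          # illustrative scaling factors
mu_beta = 2 / (2 + 6)                      # E[Beta(2, 6)]
mu_weib = gamma(1 + 1 / 5)                 # E[Weibull(shape=5, scale=1)]
w_beta = kappa_beta * (rng.beta(2, 6, n) - mu_beta)
w_weib = kappa_weib * (rng.weibull(5, n) - mu_weib)
```

All three sample sets have (approximately) zero mean, so the nominal injections remain unbiased while the shape of the uncertainty differs across cases.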
All numerical experiments are implemented on an Intel(R) 8700 3.20GHz CPU with 16 GB memory. The quantile-MLP and loss-MLP are established based on Pytorch. Dropout is also applied to mitigate the overfitting \cite{srivastava2014dropout}. Problem \textbf{P3} is built by CVXPY and solved by GUROBI.
\subsection{Benchmarks}
To demonstrate the superiority of the proposed model, we introduce the following three benchmarks:
\begin{enumerate}
\item \textbf{B1}: Linearized DistFlow model used in \cite{8515118,8688432,8528856} combined with the scenario approach;
\item \textbf{B2}: SOCP relaxation of AC OPF model used in \cite{8060613, 8626040} combined with the scenario approach;
\item \textbf{B3}: Risk-neutral full AC OPF model.
\end{enumerate}
In \textbf{B1} and \textbf{B2}, the scenario approach used in \cite{8060613, 8626040} is employed to handle the intractable joint chance constraint (\ref{eqn_JCC}). \textbf{B3} is a non-convex problem, which is directly solved by Pandapower.
Note that \textbf{B1}-\textbf{B3} are based on power flow models, in which the network parameters are assumed known.
\subsection{Case study based on the 33-bus system} \label{sec_33bus}
To verify the effectiveness of the proposed model, we compare the average utilization rates of DG, maximum violation probabilities of the joint chance constraint (\ref{eqn_JCC}), energy purchasing from the upper-level grid, and solving times of different models based on the 33-bus test system.
Both the maximum violation probabilities and the energy purchasing are obtained by Monte Carlo simulations based on Pandapower to make the comparison fair\footnote{We first solve each model to obtain its solution. Then, by giving these solutions and uncertainty samples as inputs to Pandapower, the actual maximum violation probabilities and energy purchasing can be calculated.}. The neuron numbers of the quantile-MLP are set as (25, 25, 25), i.e., three hidden layers with 25 neurons in each layer, while the neuron numbers of the loss-MLP are set as (10, 10, 10).
\subsubsection{Case 1}
\begin{figure}
\caption{Results of (a) average utilization rates of DG, (b) maximum violation probabilities of the joint chance constraint (\ref{eqn_JCC}), (c) energy purchasing, and (d) solving times in Case 1 (the uncertainty $\bm \omega$ follows Gaussian distribution).}
\label{fig_results_Gaussian}
\end{figure}
Figure \ref{fig_results_Gaussian} compares the results of different models in \textbf{Case 1}.
Among all models, the linearized DistFlow model \textbf{B1} produces the most conservative results: both its maximum violation probability and its utilization rate of DG are the lowest. Since \textbf{B1} ignores voltage drops on branches, it overestimates bus voltages \cite{19266}. Considering that promoting the integration of DG increases bus voltages, less DG can be utilized in \textbf{B1} because it must keep the overestimated bus voltages below the corresponding upper bound. Nevertheless, its solution is always feasible for the joint chance constraint (\ref{eqn_JCC}). Conversely, the SOCP relaxation \textbf{B2} shows very poor feasibility (its maximum violation probability approaches 100\%), although it achieves the highest utilization rate of DG and the lowest energy purchasing. This is because reverse power flows occur in the system, which makes the SOCP relaxation inexact. The risk-neutral model \textbf{B3} also fails to meet the joint chance constraint (\ref{eqn_JCC}) since it directly ignores the impacts of uncertainties. Thus, neither \textbf{B2} nor \textbf{B3} is applicable to distribution systems with high DG penetration.
The proposed model always ensures the feasibility of solutions, and its energy efficiency is better than that of \textbf{B1}. Moreover, unlike the three benchmarks \textbf{B1}-\textbf{B3}, the proposed model only needs historical data to train the MLPs and does not require the network parameters to build power flow models. Although its computational performance is worse than that of \textbf{B1} due to the binary variables introduced by reformulating the MLPs, the solving time is still acceptable (around 0.3 s). These results demonstrate the good feasibility and optimality of the proposed model.
\subsubsection{Case 2}
Figure \ref{fig_results_Beta} compares the results of different models in \textbf{Case 2}, in which the uncertainty $\bm \omega$ follows the Beta distribution. The results are very similar to those in \textbf{Case 1}: the energy efficiency of \textbf{B1} is undesirable, while \textbf{B2} and \textbf{B3} cannot guarantee the feasibility of solutions. In contrast, the proposed model achieves desirable optimality and feasibility simultaneously with no need for the network parameters. The solving time of the proposed model remains around 0.3 s, which is acceptable in practice.
\begin{figure}
\caption{Results of (a) average utilization rates of DG, (b) maximum violation probabilities, (c) energy purchasing, and (d) solving times in Case 2 (the uncertainty $\bm \omega$ follows Beta distribution).}
\label{fig_results_Beta}
\end{figure}
\subsubsection{Case 3}
Figure \ref{fig_results_Weibull} illustrates the results of different models in \textbf{Case 3}, in which the uncertainty $\bm \omega$ follows the Weibull distribution. Similarly, the proposed model achieves better energy efficiency compared to \textbf{B1}. Moreover, it outperforms \textbf{B2} and \textbf{B3} in feasibility.
In summary, the above three cases not only illustrate that the proposed model can achieve desirable optimality and feasibility without the network parameters but also demonstrate its excellent generalization performance across different uncertainty distributions.
\begin{figure}
\caption{Results of (a) average utilization rates of DG, (b) maximum violation probabilities, (c) energy purchasing, and (d) solving times in Case 3 (the uncertainty $\bm \omega$ follows Weibull distribution). }
\label{fig_results_Weibull}
\end{figure}
\subsection{Case study based on the 123-bus system}
We further conduct a case study based on the 123-bus system to better demonstrate the benefits of the proposed model. The neuron numbers of the quantile- and loss-MLPs are set as (30, 30, 30) and (10, 10, 10), respectively. The uncertainties are the same as those in \textbf{Case 3} (Weibull uncertainties).
The results in Section \ref{sec_33bus} show that \textbf{B1} is overly conservative. However, this conservativeness may stem from either the power flow approximation (linearized DistFlow) or the JCC reformulation (scenario approach). To confirm that the linearized DistFlow model itself introduces unnecessary conservativeness, we modify the benchmark \textbf{B1} into \textbf{B1-SAA}, in which the intractable JCCs are handled by sample average approximation (SAA). SAA is a promising way to handle JCCs with excellent optimality, but it is also time-consuming because numerous binary variables have to be introduced \cite{geng2019data}.
The results of different models on the IEEE 123-bus system are illustrated in Fig. \ref{fig_results_123Bus}. Similar to the results on the 33-bus system, the SOCP relaxation \textbf{B2} cannot always guarantee the feasibility of solutions due to the existence of reverse power flows. The risk-neutral model \textbf{B3} also fails to satisfy the JCC because it directly ignores the impacts of uncertainties. For the linearized DistFlow model \textbf{B1-SAA}, even though the SAA method is introduced to handle the JCC, its energy purchasing amount is still much higher than that of the proposed model. This result indicates that the linearized DistFlow introduces significant conservativeness and may harm energy efficiency. Moreover, since SAA needs to introduce a large number of binary variables, its computational efficiency is much worse than that of the proposed model. For example, at the risk parameter $\epsilon=0.2$, the solving time of \textbf{B1-SAA} reaches 82.63 s, while it is only 0.15 s for the proposed model. These results further confirm the strong performance of the proposed model.
\begin{figure}
\caption{Results of (a) average utilization rates of DG, (b) maximum violation probabilities, (c) energy purchasing, and (d) solving times in the case based on the IEEE 123-bus system. Here the uncertainties $\bm \omega$ follow the Weibull distribution.}
\label{fig_results_123Bus}
\end{figure}
\subsection{Sensitivity analysis} \label{sec_sensitive}
In this section, we investigate how the neuron numbers of MLPs affect the performance of the proposed model. The simulations are based on the IEEE 33-bus system.
The number of hidden layers in each MLP is fixed at three, and the samples of $\bm \omega$ are the same as those in \textbf{Case 1}.
\subsubsection{Neuron number of quantile-MLP}
The results of the proposed model with different neuron numbers in the quantile-MLP are illustrated in Fig. \ref{fig_results_neuronNum}, where ``neuron number" refers to the neuron number in each hidden layer. The structure of the loss-MLP is fixed as (10, 10, 10). With the growth of the neuron number, the approximation ability of the quantile-MLP becomes stronger. Thus, the prediction loss decreases, as shown in Fig. \ref{fig_results_neuronNum}(a).
Since we use the inner approximation (\ref{eqn_quantile_constraint3}) to replace the original joint chance constraint (\ref{eqn_JCC}) in the calibration step, the maximum violation probabilities are always lower than the risk parameter, i.e., the red surface in Fig. \ref{fig_results_neuronNum}(b). As the neuron number grows, the prediction error of the quantile-MLP can be either negative or positive. As a result, neither the maximum violation probability nor the energy purchasing is monotonic with respect to the neuron number. Nevertheless, the energy purchasing of the proposed model is always lower than that of \textbf{B1}, i.e., the green surface in Fig. \ref{fig_results_neuronNum}(c).
The solving time grows rapidly with the increase of the neuron number, as illustrated in Fig. \ref{fig_results_neuronNum}(d). According to (\ref{eqn_reformulation}), the integer variable number introduced by reformulating the quantile-MLP is equal to the neuron number. Therefore, a larger neuron number leads to a higher computational burden. Nevertheless, with a small neuron number, the proposed model can already achieve desirable optimality and feasibility simultaneously in a short time, e.g., the solving time is around 0.3s when the neuron number is set as 25.
\begin{figure}
\caption{Results of (a) loss function of the quantile-MLP, i.e., Eq. (\ref{eqn_loss_quantile}), (b) maximum violation probability, (c) energy purchasing, and (d) solving times with different neuron numbers in the quantile-MLP.
\label{fig_results_neuronNum}
\end{figure}
\subsubsection{Neuron number of loss-MLP}
We further investigate the effects of the loss-MLP's neuron number on the proposed model's performance, and the results are summarized in Fig. \ref{fig_results_neuronNum_lossMLP}. Here the neuron numbers of the quantile-MLP are fixed as (25, 25, 25). Increasing the neuron number reduces the training loss of the loss-MLP by enhancing its approximation ability, as shown in Fig. \ref{fig_results_neuronNum_lossMLP}(a). However, the power loss is usually much smaller than the sum of the power demands. Therefore, even if we change the neuron number, the optimality and feasibility of the proposed model's solutions are nearly constant. The maximum violation probability is always lower than the required value, i.e., the red surface in Fig. \ref{fig_results_neuronNum_lossMLP}(b), and the energy efficiency is always better than that of \textbf{B1}, i.e., the green surface in Fig. \ref{fig_results_neuronNum_lossMLP}(c). According to (\ref{eqn_reformulation}), the number of auxiliary binary variables introduced by reformulating the loss-MLP is equal to its neuron number. Thus, a larger neuron number results in a higher computational burden and hence a longer solving time, as shown in Fig. \ref{fig_results_neuronNum_lossMLP}(d). Nevertheless, a small number of neurons is enough for the proposed model because excellent optimality and feasibility can already be achieved.
\begin{figure}
\caption{Results of (a) loss function of the loss-MLP, (b) maximum violation probability, (c) energy purchasing, and (d) solving times with different neuron numbers in the loss-MLP.}
\label{fig_results_neuronNum_lossMLP}
\end{figure}
\section{Conclusions} \label{sec_conclusion}
In this paper, we propose a deep-quantile-regression-based surrogate model for the JCC-OPF problem. In the proposed model, two MLPs are trained to predict the $1-\epsilon$ quantile of the maximum constraint violation and the expected power loss, respectively. By reformulating the forward propagation of the two MLPs into mixed-integer linear constraints, the JCC-OPF problem can be replicated by the proposed learning-based surrogate model in a mixed-integer form with no need for power network parameters. Two pre-processing steps, i.e., data augmentation and calibration, are further designed to enhance the performance of the proposed model. The data augmentation step trains an XGBoost-based regressor to generate more training samples so that the accuracy of the quantile regression can be improved. The calibration step designs a positive parameter to calibrate the deep quantile regression and thereby improve the feasibility of solutions. Simulation results based on the IEEE 33- and 123-bus distribution systems confirm that the proposed model can successfully replicate the JCC-OPF problem without the network parameters. Moreover, its optimality is better than that of the widely used linearized DistFlow model under various uncertainty distributions, while its feasibility is much better than that of the SOCP relaxation of AC OPF. Numerical experiments also demonstrate that a small number of neurons is enough for the proposed model to achieve desirable optimality and feasibility, so computational efficiency can also be guaranteed.
\appendices
\setcounter{table}{0}
\renewcommand\thetable{\Alph{section}\arabic{table}}
\section{} \label{app_1}
\emph{Proof of \textbf{Proposition} \ref{proposition_1}}:
The term on the right-hand side of (\ref{eqn_proposition_1}) is equal to
\begin{align}
\mathbb E (\text{Loss}^\text{QR}) = -\epsilon\int_{-\infty}^{\hat{\mathcal{Q}}^{1-\epsilon}}(h - \hat{\mathcal{Q}}^{1-\epsilon})dF_{H}(h) \notag \\
+ (1-\epsilon)\int_{\hat{\mathcal{Q}}^{1-\epsilon}}^{\infty}(h - \hat{\mathcal{Q}}^{1-\epsilon})dF_{H}(h), \label{eqn_proposition_2}
\end{align}
where $F_{H}(\cdot)$ denotes the cumulative distribution function of $h(\bm x, \bm \omega)$ at $\bm x$ under the uncertainty $\bm \omega$. At the optimal solution that minimizes the expectation (\ref{eqn_proposition_2}), the derivative of the expected loss should be zero:
\begin{align}
\left. \frac{\partial \mathbb E (\text{Loss}^\text{QR})}{\partial \hat{\mathcal{Q}}^{1-\epsilon}}\right|_{y}=0, \label{eqn_proposition_3}
\end{align}
where $y$ is the optimal solution of $\hat{\mathcal{Q}}^{1-\epsilon}$. Then,
by substituting (\ref{eqn_proposition_2}), Eq. (\ref{eqn_proposition_3}) can be converted into the following form based on the Leibniz integral rule:
\begin{align}
&\epsilon \int_{-\infty}^{y} dF_{H}(h) - (1-\epsilon)\int_{y}^{\infty} dF_{H}(h)=0. \label{eqn_proposition_4}
\end{align}
By substituting $F_{H}(-\infty)=0$ and $F_{H}(\infty)=1$, Eq. (\ref{eqn_proposition_4}) can be further reformulated as:
\begin{align}
F_{H}(y) = 1 - \epsilon \Leftrightarrow y = \mathcal{Q}_{\bm \omega}^{1-\epsilon}(h(\bm x, \bm \omega)).
\end{align}
This completes the proof.
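\textbf{Proposition} \ref{proposition_1} can also be checked numerically: minimizing the empirical quantile-regression loss over a grid recovers the $(1-\epsilon)$-quantile. The Gaussian samples below are synthetic stand-ins for $h(\bm x, \bm \omega)$ at a fixed $\bm x$:

```python
import numpy as np

rng = np.random.default_rng(3)
eps = 0.1
h = rng.normal(2.0, 1.0, 50_000)   # synthetic samples of h(x, w) at fixed x

def pinball(q):
    # Empirical version of the expected quantile-regression loss:
    # -eps * (h - q) on {h < q}  +  (1 - eps) * (h - q) on {h >= q}.
    r = h - q
    return np.mean(np.where(r >= 0.0, (1.0 - eps) * r, -eps * r))

grid = np.linspace(2.0, 4.5, 501)
y = grid[np.argmin([pinball(q) for q in grid])]

# The minimizer coincides with the empirical (1 - eps)-quantile of h; for
# N(2, 1) this approaches 2 + 1.2816 as the sample size grows.
```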
\section{} \label{app_2}
\emph{Proof of \textbf{Proposition} \ref{proposition_2}}:
Based on (\ref{eqn_loss_powerLoss}), the expectation of $\text{Loss}^\text{pl}$ can be expressed as:
\begin{align}
\mathbb E (\text{Loss}^{\text{pl}}) &= \mathbb E \left((p^\text{loss} - \hat{p}^\text{loss}(\bm x))^2\right)\notag\\
&= \mathbb E \left((p^\text{loss}-\mathbb{E}_{\bm \omega}(p^\text{loss}))^2\right) + \left(\mathbb{E}_{\bm \omega}(p^\text{loss}) - \hat{p}^\text{loss}(\bm x)\right)^2 \notag\\
&=\text{Var}(p^\text{loss}) + \left(\mathbb{E}_{\bm \omega}(p^\text{loss}) - \hat{p}^\text{loss}(\bm x)\right)^2, \label{eqn_A2_1}
\end{align}
where $\text{Var}(p^\text{loss})$ is the variance of $p^\text{loss}$. By regarding $\mathbb E (\text{Loss}^{\text{pl}})$ as a function of $\hat{p}^\text{loss}(\bm x)$, the minimum value of $\mathbb E (\text{Loss}^{\text{pl}})$ occurs at $\hat{p}^\text{loss}(\bm x)=\mathbb{E}_{\bm \omega}(p^\text{loss})$ according to (\ref{eqn_A2_1}). This completes the proof.
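The decomposition (\ref{eqn_A2_1}) admits the same kind of numerical sanity check: the minimizer of the empirical mean-squared error is the sample mean, and the minimum value is the sample variance. The Gamma-distributed samples below are a synthetic stand-in for $p^\text{loss}$ at a fixed $\bm x$:

```python
import numpy as np

rng = np.random.default_rng(4)
p_loss = rng.gamma(2.0, 0.5, 50_000)  # synthetic power-loss samples at fixed x

# E[(p - c)^2] = Var(p) + (E[p] - c)^2, minimized at c = E[p].
mse = lambda c: np.mean((p_loss - c) ** 2)
grid = np.linspace(0.0, 3.0, 601)
c_star = grid[np.argmin([mse(c) for c in grid])]
```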
\footnotesize
\end{document} |
\begin{document}
\title[Solvability of a class of braided fusion categories]{Solvability of a class of braided fusion categories}
\author{Sonia Natale}
\author{Julia Yael Plavnik}
\address{Facultad de Matem\'atica, Astronom\'\i a y F\'\i sica,
Universidad Nacional de C\'ordoba, CIEM -- CONICET, (5000) Ciudad
Universitaria, C\'ordoba, Argentina}
\email{[email protected], [email protected]
\newline \indent \emph{URL:}\/ http://www.famaf.unc.edu.ar/$\sim$natale}
\thanks{The research of S. N. was partially supported by CONICET and Secyt-UNC. The research of J. P. was partially supported by CONICET, ANPCyT and Secyt-UNC} \subjclass{18D10; 16T05}
\keywords{Fusion category; braided fusion category; solvability}
\date{May 10, 2012}
\begin{abstract} We show that a weakly integral braided fusion category ${\mathcal C}$ such that every simple object of
${\mathcal C}$ has Frobenius-Perron dimension $\leq 2$ is solvable. In
addition, we prove that such a fusion category is
group-theoretical in the extreme case where
the universal grading group of ${\mathcal C}$ is trivial.
\end{abstract}
\maketitle
\section{Introduction and main results}
Let $k$ be an algebraically closed field of characteristic zero.
A fusion category over $k$ is a semisimple tensor category over $k$ having finitely many isomorphism
classes of simple objects.
In this paper we consider the problem of giving structural results
of a fusion category $\mathcal C$ under restrictions on the set $\cd
({\mathcal C})$ of Frobenius-Perron dimensions of its simple objects.
Results of this type were obtained in the paper \cite{NP}. For
instance, we showed in \cite[Theorem 7.3]{NP} that if ${\mathcal C}$ is braided, odd-dimensional, and $\cd({\mathcal C}) \subseteq \{p^m:\, m \geq 0\}$, where $p$ is a (necessarily odd) prime number, then ${\mathcal C}$ is solvable. Also, the same is true when
${\mathcal C} = \Rep H$, where $H$ is a semisimple quasitriangular Hopf
algebra and $\cd (\mathcal C) = \{1, 2\}$ \cite[Theorem
6.12]{NP}.
Using results of the paper \cite{BN}, we also showed in
\cite[Theorem 6.4]{NP} that if ${\mathcal C} = \Rep H$, where $H$ is any
semisimple Hopf algebra, and $\cd({\mathcal C}) \subseteq \{ 1, 2\}$, then
${\mathcal C}$ is weakly group-theoretical, and furthermore, it is
group-theoretical if ${\mathcal C}$ coincides with the adjoint subcategory
${\mathcal C}_{\ad}$.
Our main results are the following theorems. Recall that a fusion
category ${\mathcal C}$ is called \emph{weakly integral} if the
Frobenius-Perron dimension of ${\mathcal C}$ is a natural integer.
\begin{theorem}\label{soluble}
Let ${\mathcal C}$ be a weakly integral braided fusion category such that
$\FPdim X \leq 2$, for every simple object $X$ of ${\mathcal C}$. Then ${\mathcal C}$ is
solvable.
\end{theorem}
Theorem \ref{soluble} extends the previous result for semisimple
quasitriangular Hopf algebras mentioned above. It implies in
particular that every weakly integral braided fusion category with
Frobenius-Perron dimensions of simple objects at most $2$ is
weakly group-theoretical. This gives some further support to the
conjecture that every weakly integral fusion category is weakly
group-theoretical. See \cite[Question 2]{ENO2}.
It is known that a nilpotent braided fusion category which is in addition integral (that is, $\cd({\mathcal C}) \subseteq \mathbb Z_+$) is always group-theoretical \cite[Theorem 6.10]{DGNO}. We also show that the same conclusion holds in the opposite extreme case:
\begin{theorem}\label{gp-ttic} Let ${\mathcal C}$ be a weakly integral braided fusion category such that
$\FPdim X \leq 2$, for every simple object $X$ of ${\mathcal C}$.
Suppose that the universal grading group of ${\mathcal C}$ is trivial.
Then ${\mathcal C}$ is group-theoretical.
\end{theorem}
Theorems \ref{soluble} and \ref{gp-ttic} are proved in Section
\ref{pruebas}. Our proofs rely on the results of Naidu and Rowell
\cite{NaR} for the case where $\mathcal C$ is integral and has a
faithful self-dual simple object of Frobenius-Perron dimension
$2$.
Being group-theoretical, a braided fusion category ${\mathcal C}$ satisfying the assumptions of Theorem \ref{gp-ttic} has the so-called property \textbf{F}, namely, all asso\-ciated braid group representations on the tensor powers of objects of ${\mathcal C}$ factor over finite groups. See \cite[Corollary 4.4]{ERW}. It is conjectured that every braided weakly integral fusion category does have property \textbf{F} \cite{NaR}. This conjecture has been proved for braided fusion categories ${\mathcal C}$ with $\cd({\mathcal C}) = \{ 1,
2\}$ such that all objects of ${\mathcal C}$ are self-dual or ${\mathcal C}$ is generated by a self-dual simple object \cite[Corollary 4.3 and Remark 4.4]{NaR}.
The paper is organized as follows. In Section
\ref{preliminaries} we recall the main facts and terminology about
fusion and braided fusion categories used throughout. In Section
\ref{examples} we discuss some families of (integral) examples that appear in
the literature. We also recall in this section the results of the
paper \cite{NaR} related to dihedral group fusion rules that will
be used later. In Section \ref{pruebas} we give the proofs of
Theorems \ref{soluble} and \ref{gp-ttic}.
\section{Preliminaries}\label{preliminaries}
\subsection{Fusion categories}
Let ${\mathcal C}$ be a fusion category. We shall denote by $\Irr({\mathcal C})$ the
set of isomorphism classes of simple objects of ${\mathcal C}$ and by
$G({\mathcal C})$ the group of isomorphism classes of invertible objects of
${\mathcal C}$. For an object $X$ of ${\mathcal C}$, we shall denote by ${\mathcal C}[X]$ the fusion subcategory generated by $X$ and by $G[X]$ the subgroup of $G({\mathcal C})$ consisting of invertible objects $g$ such that $g \otimes X \simeq X$.
If $\mathcal D$ is another fusion category, ${\mathcal C}$ and ${\mathcal D}$
are \emph{Morita equivalent} if ${\mathcal D}$ is equivalent to the dual
${\mathcal C}^*_{\mathcal M}$ with respect to an indecomposable module
category $\mathcal M$. Recall that ${\mathcal C}$ is called \emph{pointed}
if all its simple objects are inver\-tible and it is called
\emph{group-theoretical} if it is Morita equivalent to a pointed
fusion category.
There is a canonical faithful grading ${\mathcal C} = \oplus_{g \in U({\mathcal C})}{\mathcal C}_g$, with trivial component ${\mathcal C}_e = {\mathcal C}_{\ad}$, where ${\mathcal C}_{\ad}$ is the \emph{adjoint
subcategory} of ${\mathcal C}$, that is, the fusion subcategory generated by $X \otimes X^*$, where $X$ runs through the
simple objects of ${\mathcal C}$. The group
$U({\mathcal C})$ is called the \emph{universal grading group} of ${\mathcal C}$. The category ${\mathcal C}$ is called \emph{nilpotent} if the upper central series $\dots \subseteq {\mathcal C}^{(n+1)} \subseteq {\mathcal C}^{(n)} \subseteq \dots \subseteq {\mathcal C}^{(0)} = {\mathcal C}$ converges to $\vect_k$, where ${\mathcal C}^{(i)} := ({\mathcal C}^{(i-1)})_{\ad}$, $i \geq 1$. See \cite{gel-nik}.
A \emph{weakly group-theoretical} fusion category is a fusion
category ${\mathcal C}$ which is Morita equivalent to a nilpotent fusion
category. If ${\mathcal C}$ is Morita equivalent to a cyclically nilpotent
fusion category, then ${\mathcal C}$ is called \emph{solvable}. We refer the
reader to \cite{ENO, ENO2} for further definitions and facts about
fusion categories.
\subsection{Braided fusion categories}
Let ${\mathcal C}$ be a \emph{braided} fusion category, that is, ${\mathcal C}$ is
equipped with natural isomorphisms $c_{X,Y} : X \otimes Y \rightarrow Y \otimes X$, $X, Y \in {\mathcal C}$, satisfying the hexagon
axioms.
Recall that ${\mathcal C}$ is called \emph{premodular} if it is also spherical, that is, ${\mathcal C}$ has a pivotal structure such that left and right categorical dimensions coincide.
Equivalently, ${\mathcal C}$ is premodular if it is endowed with a compatible ribbon structure \cite{bruguieres, Mu1}.
We say that the objects $X$ and $Y$ of a braided fusion category ${\mathcal C}$ centralize each other if
$c_{Y,X} c_{X,Y} = \id_{X\otimes Y}$. The \emph{centralizer} ${\mathcal D}'$
of a fusion subcategory ${\mathcal D} \subseteq {\mathcal C}$ is defined to be the full
subcategory of objects of ${\mathcal C}$ that centralize every object of ${\mathcal D}$.
The centralizer ${\mathcal D}'$ is again a fusion subcategory of ${\mathcal C}$.
The \emph{Müger (or symmetric) center} $Z_2({\mathcal C})$ of ${\mathcal C}$ is $Z_2({\mathcal C}) = {\mathcal C}'$; this is a symmetric fusion subcategory of ${\mathcal C}$ whose objects are called central, dege\-nerate or transparent.
A braided fusion category ${\mathcal C}$ is called \emph{non-degenerate} if its Müger center $Z_2({\mathcal C})$ is trivial.
A \emph{modular} category is a non-degenerate premodular category ${\mathcal C}$.
\begin{remark}\label{spherical} Recall that a fusion category ${\mathcal C}$ is called pseudo-unitary if $\dim {\mathcal C} = \FPdim {\mathcal C}$,
where $\dim {\mathcal C}$ is the global dimension of ${\mathcal C}$ and $\FPdim {\mathcal C}$ is
the Frobenius-Perron dimension of ${\mathcal C}$. If ${\mathcal C}$ is pseudo-unitary
then ${\mathcal C}$ has a canonical spherical structure with respect to
which categorical dimensions of all simple objects coincide with
their Frobenius-Perron dimensions \cite[Proposition 8.23]{ENO}.
In particular, this holds for any weakly integral fusion category,
because it is automatically pseudo-unitary \cite[Proposition
8.24]{ENO}.
Hence every weakly integral non-degenerate braided fusion
category is canonically a modular category.
\end{remark}
\section{Some families of examples}\label{examples}
\subsection{Examples of fusion categories with Frobenius-Perron dimensions $\leq 2$}
In this subsection we discuss examples of weakly integral fusion categories with
Frobenius-Perron dimensions of simple objects $\leq 2$ that appear in the literature.
\begin{ejem}
Consider a Hopf algebra
$H$ fitting into an abelian exact sequence:
\begin{equation}\label{exacta} k\rightarrow k^\Gamma \rightarrow H \rightarrow
k\mathbb Z_2 \rightarrow k,
\end{equation} where $\Gamma$ is a finite group.
Let ${\mathcal C} = \Rep H$. Then $\cd ({\mathcal C}) \subseteq \{1,2\}$ and equality holds if the
associated action of $\mathbb Z_2$ on $\Gamma$ is not trivial.
All these examples are group-theoretical, in view of
\cite[Theorem 1.3]{gp-ttic}.
Observe that, as a consequence of \cite[Theorem
6.4]{BN}, any cosemisimple Hopf algebra $H$ such that $\cd({\mathcal C}) \subseteq
\{ 1, 2\}$ is group-theoretical if ${\mathcal C} = {\mathcal C}_{\ad}$. See \cite[Theorem 6.4]{NP}.
Non-trivial examples of cosemisimple Hopf algebras fitting into an exact sequence \eqref{exacta} are given by the Hopf algebras $${\mathcal A}^*_{4m}, {\mathcal B}^*_{4m} \quad m\geq 2,$$
of dimension $4m$, due to Masuoka \cite{mas-cocycle}. In these cases, $\Gamma$ is a dihedral group.
\end{ejem}
\begin{ejem}
Let ${\mathcal C} = \mathcal{TY}(G, \chi, \tau)$ be the Tambara-Yamagami category
associated to a finite (necessarily abelian) group $G$, a
symmetric non-degene\-rate bicharacter $\chi : G\times G \rightarrow
k^\times$ and an element $\tau\in k$ satisfying $|G|\tau^2 = 1$
\cite{TY}. This is a fusion category with isomorphism classes of
simple objects parameterized by the set $G\cup\{X\}$, where $X
\notin G$, obeying the fusion rules
\begin{equation}\label{ty}
g \otimes h = gh, \quad g, h\in G,\quad X \otimes X = \oplus_{g\in G} g.
\end{equation}
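Taking Frobenius-Perron dimensions in the second relation of \eqref{ty} gives $(\FPdim X)^2 = |G|$, so that $\FPdim X = \sqrt{|G|}$ and
\begin{equation*}
\FPdim {\mathcal C} = \sum_{g\in G} 1 + (\FPdim X)^2 = 2|G|.
\end{equation*}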
We have $\cd ({\mathcal C}) = \{1,2\}$ if and only if $G$ is of order $4$.
Therefore, in this case $\FPdim {\mathcal C} = 8$.
If $G\simeq \mathbb Z_4$, there are two possible fusion categories
${\mathcal C}$. None of them is braided \cite[Theorem 1.2 (1)]{Siehler-braided}.
If $G\simeq \mathbb Z_2 \times \mathbb Z_2$, there are exactly four
classes of Tambara-Yamagami categories with irreducible degrees
$1$ or $2$, by \cite[Theorem 4.1]{TY}. Three of them are
(equivalent to) the categories of representations of
eight-dimensional Hopf algebras: the dihedral group algebra of
order $8$, the quaternion group algebra, and the Kac-Paljutkin
Hopf algebra $H_8$. The remaining fusion ca\-tegory, which has the
same $\chi$ as $H_8$ but $\tau = -1/2$, is not realized as the
fusion category of representations of a Hopf algebra. Since in
this case $G$ is an elementary abelian $2$-group, all of these
categories admit a braiding, by \cite[Theorem 1.2 (1)]{Siehler-braided}.
All the fusion categories in this example are group-theoretical.
In fact, by \cite[Lemma 4.5]{GNN}, for any symmetric
non-degenerate bicharacter $\chi:G\times G \rightarrow
k^{\times}$, $G$ contains a Lagrangian subgroup with respect to
$\chi$. Therefore $\mathcal{TY}(G, \chi, \tau)$ is group-theoretical, by
\cite[Theorem 4.6]{GNN}.
\end{ejem}
\begin{ejem}
Recall that a near-group category is a fusion category with exactly one isomorphism class of non-invertible simple object.
In the notation of \cite{Siehler-braided}, the fusion rules of ${\mathcal C}$ are determined by a pair
$(G, \kappa)$, where $G$ is the group of invertible objects of ${\mathcal C}$ and $\kappa$ is a nonnegative
integer. Letting $\Irr({\mathcal C}) = G \cup \{X\}$, where $X$ is non-invertible, we have the relation
\begin{equation}
X\otimes X = \oplus_{g \in G} g \oplus \kappa X.
\end{equation}
Near-group categories with fusion rule $(G, 0)$ for some finite
group $G$ are thus Tambara-Yamagami categories, discussed in the previous example.
Let us consider near-group categories with fusion rule $(G, \kappa)$ for some finite
group $G$ and a positive integer $\kappa$.
We have $\cd ({\mathcal C}) = \{1,2\}$ if and only if $G$ is of order $2$ and $\kappa = 1$, that is, ${\mathcal C}$ is of type $(\mathbb Z_2, 1)$.
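Indeed, taking Frobenius-Perron dimensions in the fusion rule above and writing $d = \FPdim X$, we get
\begin{equation*}
d^2 = |G| + \kappa d,
\end{equation*}
so $d = 2$ forces $|G| = 4 - 2\kappa$; since $\kappa \geq 1$, necessarily $\kappa = 1$ and $|G| = 2$.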
Therefore, in this case $\FPdim {\mathcal C} = 6$ and since $\kappa > 0$, then ${\mathcal C}$ is group-theoretical, by \cite[Theorem 1.1]{EGO}. By \cite[Theorem 1.5]{Thornton}, there are up to equivalence exactly two non-symmetric braided near-group categories with fusion rule $(\mathbb Z_2, 1)$.
\end{ejem}
\begin{ejem} Examples of weakly integral braided fusion categories which are not integral and whose simple objects have Frobenius-Perron dimensions $\leq 2$ are given by the Ising categories, studied in \cite[Appendix B]{DGNOI}.
In this case, there is a unique non-invertible simple object $X$ with $X^{\otimes 2} = \textbf{1} \oplus a$,
where $a$ generates the group of invertible objects, isomorphic to $\mathbb Z_2$ (note that these are also Tambara-Yamagami ca\-tegories). We have here $\cd({\mathcal C}) = \{ 1, \sqrt 2 \}$ and $\FPdim {\mathcal C} = 4$.
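Indeed, the relation $X^{\otimes 2} = \textbf{1} \oplus a$ gives
\begin{equation*}
(\FPdim X)^2 = 2, \quad \text{whence} \quad \FPdim X = \sqrt 2 \quad \text{and} \quad \FPdim {\mathcal C} = 1 + 1 + (\sqrt 2)^2 = 4.
\end{equation*}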
Every braided Ising
category is modular \cite[Corollary B.12]{DGNOI}.
Other examples come from braided fusion categories with generalized Tambara-Yamagami fusion rules of type $(G, \mathbb Z_2)$, where $G$ is a finite group. See \cite{liptrap}. In these examples, ${\mathcal C}$ is not pointed, the group of invertible objects is $G$, and $\mathbb Z_2 \simeq \Gamma \subseteq G$ is a subgroup such that $X \otimes X^* \simeq \oplus_{h \in \Gamma} h$, for every non-invertible object $X$ of ${\mathcal C}$. Hence we also have $\cd({\mathcal C}) = \{ 1, \sqrt 2 \}$.
Since they are not integral, these examples are not group-theoretical.
\end{ejem}
\begin{ejem} Let ${\mathcal C}$ be a braided group-theoretical fusion
category. Then ${\mathcal C}$ is an equivariantization of a pointed fusion
category, that is, ${\mathcal C} \mathbb{S}imeq {\mathcal D}^G$, where ${\mathcal D}$ is a pointed
fusion category and $G$ is a finite group acting on ${\mathcal D}$ by tensor
autoequivalences \cite{NNW}. In this case, ${\mathcal C}$ contains the
category $\Rep G$ of finite-dimensional representations of $G$ as
a fusion subcategory.
Suppose that $\cd({\mathcal C}) = \{1, p\}$, where $p$ is any prime number.
Then also $\cd(G) \subseteq \{1, p\}$. In particular, the group
$G$ must have a normal abelian $p$-complement; moreover, either
$G$ contains an abelian normal subgroup of index $p$ or the center
$Z(G)$ has index $p^3$. See \cite[Theorems 6.9, 12.11]{isaacs}.
\end{ejem}
\subsection{Fusion rules of dihedral type}\label{fusion_D_n}
Let $D_n$ be the dihedral group of order $2n$, $n\geq 1$. Recall
that $D_n$ has a presentation by generators $t,z$ and relations
$t^2 = 1 = z^n$, $tz = z^{-1}t$.
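Over an algebraically closed field of characteristic zero, the $2$-dimensional irreducible representations of $D_n$ can be written down explicitly: for $1 \leq j < n/2$, let $\rho_j$ be determined by
\begin{equation*}
\rho_j(z) = \begin{pmatrix} \omega^j & 0 \\ 0 & \omega^{-j} \end{pmatrix}, \qquad \rho_j(t) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\end{equation*}
where $\omega$ is a primitive $n$-th root of unity. These representations correspond to the simple objects $X_j$ in the proposition below.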
The following proposition describes the fusion rules of $\Rep D_n$
(\textit{c.f.} \cite{mas-cocycle}).
\begin{proposition}\label{D_n}
\begin{enumerate}
\item Suppose $n$ is odd. Then the isomorphism classes of simple
objects of $\Rep D_n$ are represented by $2$ invertible objects,
$\textbf{1}$ and $g$, and $r = (n-1)/2$ simple objects $X_1,
\ldots, X_r$, of dimension $2$, such that
\begin{align*}
& g\otimes X_i = X_i = X_i\otimes g, \qquad \forall i=1, \ldots, r, \\
& X_i\otimes X_j = \left\{
\begin{array}{ll} X_{i+j}\oplus X_{|i-j|}, \quad & \text{if} \quad i+j \leq r, \\
X_{n-(i+j)}\oplus X_{|i-j|}, \quad & \text{if} \quad i+j > r;
\end{array}
\right.
\end{align*}
where $X_0 = \textbf{1}\oplus g$.
\item Suppose $n$ is even, that is, $n = 2m$. Then the isomorphism
classes of simple objects of $\Rep D_n$ are represented by $4$
invertible objects, $\textbf{1}$, $g$, $h$, $f = gh$, and
$m-1$ simple objects $X_1, \ldots, X_{m-1}$, of dimension $2$,
such that
\begin{align*}
& g\otimes X_i = X_i = X_i\otimes g, \qquad \forall i=1, \ldots, m-1, \\
& h\otimes X_i = X_{m-i} = X_i\otimes h, \qquad \forall i=1, \ldots, m-1, \\
& X_i\otimes X_j = \left\{ \begin{array}{ll} X_{i+j}\oplus X_{|i-j|}, \quad & \text{if} \quad i+j \leq m, \\
X_{2m-(i+j)}\oplus X_{|i-j|}, \quad & \text{if} \quad i+j >
m;
\end{array}
\right.
\end{align*}
where $X_0 = \textbf{1}\oplus g$ and $X_m = h\oplus f$.
\end{enumerate}
In particular, the group of invertible objects in $\Rep D_n$ is isomorphic to $\mathbb Z_2$ if $n$ is odd, and to $\mathbb Z_2\times\mathbb Z_2$ if $n$ is even.
\end{proposition}
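For instance, for $n = 5$ (so that $r = 2$), the rules in (1) read
\begin{equation*}
X_1 \otimes X_1 = \textbf{1}\oplus g \oplus X_2, \qquad X_1 \otimes X_2 = X_1 \oplus X_2, \qquad X_2 \otimes X_2 = \textbf{1}\oplus g \oplus X_1,
\end{equation*}
in agreement with the count of Frobenius-Perron dimensions $2\cdot 2 = 1 + 1 + 2$ on both sides.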
\begin{remark}\label{ndivide4}
Suppose that $4$ divides $n = 2m$. Then $X_{m/2}$ is
fixed under (left and right) multiplication by all invertible objects of $\Rep D_n$.
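Indeed, by the fusion rules in Proposition \ref{D_n},
\begin{equation*}
g \otimes X_{m/2} = X_{m/2} \qquad \text{and} \qquad h \otimes X_{m/2} = X_{m - m/2} = X_{m/2},
\end{equation*}
and hence also $f \otimes X_{m/2} = X_{m/2}$.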
\end{remark}
Let ${\mathcal C}$ be a fusion category with $\cd ({\mathcal C}) = \{1, 2\}$. Suppose that the Grothendieck ring of ${\mathcal C}$ is commutative
(for example, this is the case if ${\mathcal C}$ is braided). Assume in addition that the following conditions hold:
\begin{enumerate}
\item[(a)] All objects are self-dual, that is, $X \simeq X^*$, for every object $X$ of ${\mathcal C}$.
\item[(b)] ${\mathcal C}$ has a faithful simple object.
\end{enumerate}
Then, it is shown in \cite[Theorem 4.2]{NaR} that ${\mathcal C}$ is
Grothendieck equivalent to $\Rep D_n$. Moreover, ${\mathcal C}$ is
necessarily group-theoretical.
\medbreak It is possible to remove the assumption that all the
objects are self-dual, but the self-duality condition on the
faithful simple object is still necessary. Namely, suppose that
${\mathcal C}$ is not self-dual, but satisfies
\begin{enumerate}
\item[(b')] ${\mathcal C}$ has a faithful self-dual simple object.
\end{enumerate}
In this case ${\mathcal C}$ is still group-theoretical and it is
Grothendieck equivalent to $\Rep \widetilde D_n$, $n$ odd. See
\cite[Remark 4.4]{NaR}. Here $\widetilde D_n$ is the generalized quaternion
(binary dihedral) group of order $4n$, that is, the group
presented by generators $a, s$, with relations $a^{2n} = 1$, $s^2
= a^n$, $s^{-1}as = a^{-1}$. (Observe that for $n$ odd,
$\widetilde D_n$ is isomorphic to the semidirect product $\mathbb
Z_n \rtimes \mathbb Z_4$, with respect to the action given by
inversion, considered in \cite{NaR}. For even $n$, $\Rep
\widetilde D_n$ is Grothendieck equivalent to $\Rep D_{2n}$, while
$\mathbb Z_n \rtimes \mathbb Z_4$ has no faithful representation
of degree $2$.)
\begin{lemma}\label{centers} Let $n \geq 2$. Then $(\Rep \widetilde D_{n})_{\ad}
= \Rep D_{n}$. In addition,
\begin{equation*}(\Rep D_{n})_{\ad} = \left\{ \begin{array}{ll} \Rep D_{n/2}, \quad & \text{if} \quad n \quad \text{is even}, \\
\Rep D_{n}, \quad & \text{if}\quad n \quad \text{is odd}.
\end{array}
\right.
\end{equation*}
\end{lemma}
\begin{proof} Recall that when ${\mathcal C} = \Rep G$, where $G$ is a finite group,
then ${\mathcal C}_{\ad} = \Rep G/Z(G)$ \cite{gel-nik}. The first claim
follows from the fact that the center of $\widetilde D_n$ equals $\{1,
s^2\} \simeq \mathbb Z_2$. On the other hand, the center $Z(D_n)$
is trivial if $n$ is odd, and equals $\{ 1, z^{n/2}\} \simeq \mathbb
Z_2$ if $n$ is even.
This implies the second claim and finishes the proof of the lemma.
\end{proof}
\section{Proof of the main results}\label{pruebas}
In this section we shall prove Theorems \ref{soluble} and
\ref{gp-ttic}.
\begin{proposition}\label{equiv}
Let ${\mathcal C}$ be a premodular fusion category. Suppose ${\mathcal C}$ has an
invertible object $g$ of order $n$ and a simple object $X$ such
that
\begin{flalign} \label{(1)}& g\otimes X = X, \textrm{ and } & \\
\label{(2)}& g \textrm{ centrali\-zes } X.&
\end{flalign}
Then we have
\begin{enumerate} \item[(i)] ${\mathcal C}$ is an equivariantization by the cyclic group
$\mathbb Z_n$ of a fusion category $\widetilde {\mathcal C}$.
\item[(ii)] If $g \in {\mathcal C}'$, then $\widetilde {\mathcal C}$ is braided.
\end{enumerate}
\end{proposition}
\begin{proof}
Condition \eqref{(1)} ensures the existence of a fiber functor on the fusion category ${\mathcal C} [g]$ generated by
$g$. Then ${\mathcal C} [g]$ is equivalent to
$\Rep \mathbb Z_n$ as fusion categories.
Moreover, they are equivalent as braided fusion categories.
Indeed, \eqref{(1)} implies ${\mathcal C}[g]\subseteq {\mathcal C}[X]$ and therefore
${\mathcal C}[g]\subseteq Z_2({\mathcal C}[X])$, by \eqref{(2)}. Hence ${\mathcal C}[g]$ is
symmetric. Then the only possible twists in ${\mathcal C}[g]$ are $\theta_h =
1$ and $\theta_h = -1$, $h\in\langle g \rangle$. But
$\theta_h$ is not equal to $-1$, since $h$ centralizes $X$ and
$h\otimes X = X$ \cite[Lemma 5.4]{Mu}. Then $\theta_h = 1$ for all
$h\in\langle g \rangle$. Therefore ${\mathcal C} [g] \simeq \Rep \mathbb
Z_n$ as braided fusion categories, as claimed.
Let $\Gamma = \langle g \rangle \subseteq G({\mathcal C})$. It follows from \cite[Theorem 4.18 (i)]{DGNOI} that the de-equivariantization $\widetilde {\mathcal C} = {\mathcal C}_{\Gamma}$ of ${\mathcal C}$ by
$\Gamma$ is a fusion category and there is a canonical equivalence ${\mathcal C}\simeq {\widetilde {\mathcal C}}^{\Gamma}$ between the category ${\mathcal C}$ and the
$\Gamma$-equivariantization of $\widetilde {\mathcal C}$, which shows (i).
Furthermore, if $g \in {\mathcal C}'$ then $\widetilde {\mathcal C}$ is braided and the equivalence
${\mathcal C} \simeq {\widetilde {\mathcal C}}^{\Gamma}$ is of braided fusion categories \cite{bruguieres, Mu} (see also \cite[Theorem 4.18 (ii)]{DGNOI}).
Thus we get (ii).
This proves the proposition.
\end{proof}
\begin{lemma}\label{generadores}
Let ${\mathcal C}$ be a fusion category with commutative Grothendieck ring. Suppose that ${\mathcal C} = {\mathcal C}_{\ad}$. If ${\mathcal D}_1,
\ldots, {\mathcal D}_s$ are fusion subcategories that generate ${\mathcal C}$ as a
fusion category, then ${\mathcal D}_1^{(m)}, \ldots, {\mathcal D}_s^{(m)}$ generate
${\mathcal C}$ as a fusion category, $\forall m\geq 0$.
\end{lemma}
\begin{proof} Since ${\mathcal D}_1, \ldots, {\mathcal D}_s$ generate ${\mathcal C}$, then $({\mathcal D}_1)_{\ad}, \ldots,
({\mathcal D}_s)_{\ad}$ generate ${\mathcal C}$. In fact, let $X$ be a simple object
of ${\mathcal C}$. There exist simple objects $X_{i_1}, \ldots, X_{i_t}$,
with $X_{i_l} \in {\mathcal D}_{i_l}$, $1 \leq i_1, \dots, i_t \leq s$,
such that $X$ is a direct summand of $X_{i_1}\otimes \ldots
\otimes X_{i_t}$. Then $X\otimes X^*$ is a direct summand of
$$X_{i_1}\otimes \ldots \otimes X_{i_t}\otimes X_{i_t}^*\otimes
\ldots \otimes X_{i_1}^* \simeq (X_{i_1}\otimes X_{i_1}^*)\otimes
\ldots \otimes (X_{i_t}\otimes X_{i_t}^*),$$ where we have used
that ${\mathcal C}$ has a commutative Grothendieck ring.
Notice that the object on the right-hand side belongs to the fusion subcategory generated by $({\mathcal D}_1)_{\ad}, \ldots,
({\mathcal D}_s)_{\ad}$.
Since $X$ was arbitrary, it follows that $({\mathcal D}_1)_{\ad}, \ldots,
({\mathcal D}_s)_{\ad}$ gene\-rate ${\mathcal C}_{\ad}$. But ${\mathcal C} = {\mathcal C}_{\ad}$ by
assumption, then we have proved that $({\mathcal D}_1)_{\ad}, \ldots,
({\mathcal D}_s)_{\ad}$ generate ${\mathcal C}$. The statement follows from this by
induction on $m$, since ${\mathcal D}_j^{(m)} = ({\mathcal D}_j^{(m-1)})_{\ad}$, for
all $j = 1, \ldots, s$, $m\geq 1$.
\end{proof}
\subsection{Braided fusion categories with irreducible degrees $1$ and $2$}
Throughout this subsection ${\mathcal C}$ is a braided fusion category with
$\cd ({\mathcal C}) = \{1,2\}$. We regard ${\mathcal C}$ as a premodular category with
respect to its canonical spherical structure. See Remark
\ref{spherical}.
\begin{remark}\label{orderG[X]}
Note that $G[X]\neq \textbf{1}$, for all
$X$ such that $\FPdim X = 2$. Moreover, $|G[X]| = 2$ or $4$. In
particular, the (abelian) group $G({\mathcal C})$ is not trivial.
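Indeed, an invertible object $g$ appears in $X \otimes X^*$ (necessarily with multiplicity one) if and only if $g \in G[X]$. Counting Frobenius-Perron dimensions in the decomposition of $X \otimes X^*$ thus gives
\begin{equation*}
4 = (\FPdim X)^2 = |G[X]| + 2t,
\end{equation*}
where $t \geq 0$ is the number of non-invertible simple summands counted with multiplicity; since $\textbf{1} \in G[X]$, it follows that $|G[X]| = 2$ or $4$.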
\end{remark}
\begin{proposition}\label{equi_g} Let $g$ be a non-trivial invertible object such that $g^2 = 1$ and $\theta_g = 1$.
Assume that $g$ generates the Müger center ${\mathcal C}'$ of ${\mathcal C}$ as a
fusion category.
Then ${\mathcal C}$ is the equivariantization of a modular fusion category
$\widetilde {\mathcal C}$ by the group $\mathbb Z_2$. Furthermore
$\cd(\widetilde {\mathcal C}) \subseteq \{1,2\}$.
\end{proposition}
\begin{proof}
By assumption ${\mathcal C}'\simeq \Rep \mathbb Z_2$ is tannakian. Then the
de-equivarianti\-zation $\widetilde {\mathcal C}$ of ${\mathcal C}$ by ${\mathcal C}'$ is a
modular category and there is an action of $\mathbb Z_2$ on
$\widetilde {\mathcal C}$ such that ${\mathcal C} \simeq \widetilde {\mathcal C} ^{\mathbb Z_2}$
\cite{bruguieres, Mu}.
Since $\cd(\widetilde {\mathcal C} ^{\mathbb Z_2}) = \cd({\mathcal C}) = \{1,2\}$,
then $\cd(\widetilde {\mathcal C}) \subseteq \{1,2\}$, by \cite[Proof of Proposition
6.2]{ENO2}, \cite[Lemma 7.2]{NP}.
\end{proof}
\begin{lemma}\label{noad}
Suppose that ${\mathcal C}\neq {\mathcal C}_{\ad}$ and ${\mathcal C}_{\ad}$ is solvable. Then ${\mathcal C}$ is
solvable.
\end{lemma}
\begin{proof}
Since ${\mathcal C}$ is braided, its universal grading group $U({\mathcal C})$ is
abelian \cite[Theorem 6.2]{gel-nik}. The category ${\mathcal C}$ is a
$U({\mathcal C})$-extension of ${\mathcal C}_{\ad}$ and an extension of a solvable
category by a solvable group is again solvable \cite[Proposition
4.5 (i)]{ENO2}. Then ${\mathcal C}$ is solvable, as claimed.
\end{proof}
\begin{lemma}\label{ad}
Assume ${\mathcal C} = {\mathcal C}_{\ad}$. Then $\FPdim {\mathcal C}' \geq 2$.
\end{lemma}
\begin{proof}
Suppose on the contrary that $\FPdim {\mathcal C}' = 1$, that is, ${\mathcal C}$ is
modular. Then, by \cite[Theorem 6.2]{gel-nik}, $U({\mathcal C})\simeq
\widehat{G({\mathcal C})} \simeq G({\mathcal C})$. By Remark \ref{orderG[X]}, ${\mathcal C}_{\ad}\subsetneq {\mathcal C}$, against the
assumption. Hence $\FPdim {\mathcal C}'\geq 2$, as claimed.
\end{proof}
\begin{lemma}\label{lema-dn} Suppose ${\mathcal C}$ is generated by a simple object $X$ such
that $X\simeq X^*$ and $\FPdim X = 2$. Then we have
\begin{enumerate}
\item[(i)] ${\mathcal C}$ is not modular.
\end{enumerate}
Assume ${\mathcal C} = {\mathcal C}_{\ad}$. Then we have in addition
\begin{enumerate}
\item[(ii)] There is a group isomorphism $G({\mathcal C})\simeq \mathbb
Z_2$.
\item[(iii)] $G({\mathcal C})\subseteq {\mathcal C}'$.
\end{enumerate}
\end{lemma}
\begin{proof}
By \cite[Theorem 4.2; Remark 4.4]{NaR}, ${\mathcal C}$ is Grothendieck
equivalent to $\Rep D_n$ or $\Rep \widetilde D_{2n+1}$, for some $n\geq
1$.
Since the universal grading group is a Grothendieck invariant, then in
the first case $U({\mathcal C})$ is isomorphic to $\mathbb Z_2$ if $n$ is
even and is trivial if $n$ is odd. But $G({\mathcal C})$, which is also a
Grothendieck invariant, is isomorphic to $\mathbb Z_2 \times
\mathbb Z_2$ if $n$ is even and is isomorphic to $\mathbb Z_2$ if
$n$ is odd, by Proposition \ref{fusion_D_n}. Then $U({\mathcal C})$ is not
isomorphic to $\widehat{G({\mathcal C})}$, for any $n$. Therefore ${\mathcal C}$ is
not modular, by \cite[Theorem 6.2]{gel-nik}. Similarly, if ${\mathcal C}$ is
Grothendieck equivalent to $\Rep \widetilde D_{2n+1}$, we have $U({\mathcal C}) \simeq \mathbb Z_2$ and $G({\mathcal C}) \simeq
\mathbb Z_4$. Hence ${\mathcal C}$ is not modular in this
case either. This shows (i).
Notice that the assumption ${\mathcal C} = {\mathcal C}_{\ad}$ implies that ${\mathcal C}$ is
Grothendieck equivalent to $\Rep D_n$, for some $n$ odd. Then
(ii) follows immediately from the fusion rules of $\Rep D_n$, with $n$ odd (see Proposition
\ref{fusion_D_n}). Since, by (i), ${\mathcal C}'$ is not trivial, then $G({\mathcal C}') \neq
\textbf{1}$, because $\cd({\mathcal C}') \subseteq \{ 1, 2\}$ (\textit{cf.} Remark \ref{orderG[X]}). By part (ii), $G({\mathcal C}') =
G({\mathcal C})$ and (iii) follows.
\end{proof}
\begin{remark} \label{d_n_impar}If ${\mathcal C}$ is a fusion category as in Lemma \ref{lema-dn}, then the assumption ${\mathcal C} =
{\mathcal C}_{\ad}$ is equivalent to saying that ${\mathcal C}$ is Grothendieck equivalent
to $\Rep D_n$, for some $n \geq 1$ \emph{odd}.
\end{remark}
\begin{lemma}\label{genconad} Suppose that ${\mathcal C} = {\mathcal C}_{\ad}$.
Then ${\mathcal C}$ is generated by fusion subca\-tegories ${\mathcal D}_1, \dots,
{\mathcal D}_s$, $s \geq 1$, where ${\mathcal D}_i$ is Grothendieck equivalent to
$\Rep D_{n_i}$ and $n_i$ is an odd natural number, for all $i = 1,
\dots, s$.
\end{lemma}
\begin{proof} Let ${\mathcal C} = {\mathcal C}[X_1, \ldots, X_s]$ for some simple objects $X_1, \ldots,
X_s$. Let ${\mathcal D}_i = {\mathcal C}[X_i]$ be the fusion subcategory generated by $X_i$, $i = 1, \ldots, s$.
By Lemma \ref{generadores}, $({\mathcal D}_1)_{\ad}, \ldots, ({\mathcal D}_s)_{\ad}$ generate ${\mathcal C}$ as a fusion category.
Hence, it is enough to consider only those simple objects $X_i$ whose Frobenius-Perron dimension equals $2$ (otherwise, $\FPdim X_i = 1$ and $X_i\otimes X_i^* \simeq \textbf{1}$).
Moreover, iterating the application of Lemma \ref{generadores}, we may further assume that $|G[X_i]| = 2$, for all $i = 1, \dots, s$. Thus we have a decomposition
$X_i \otimes X_i^* \simeq \textbf{1} \oplus g_i \oplus X_i'$, where $G[X_i] = \{ \textbf{1}, g_i\}$ and $X_i'$ is a self-dual simple object of Frobenius-Perron dimension $2$.
Since $X_i\otimes X_i^*$ generates $({\mathcal D}_i)_{\ad}$, the above reductions allow us to assume that ${\mathcal D}_i = {\mathcal C}[X_i]$ with $X_i$ simple objects
of ${\mathcal C}$ such that $\FPdim X_i = 2$ and $X_i \simeq X_i^*$,
$\forall i = 1, \ldots, s$.
We claim that we can choose the $X_i$'s in such a way
that $({\mathcal D}_i)_{\ad}\simeq {\mathcal D}_i$. By \cite[Theorem 4.2; Remark
4.4]{NaR}, ${\mathcal D}_i$ is Grothendieck equivalent to $\Rep D_{n_i}$ or
to $\Rep \widetilde D_{2n_i+1}$.
Iterating the application of Lemma \ref{generadores} and using Lemma \ref{centers}, we obtain that
${\mathcal C} = {\mathcal C}[{\mathcal D}_1, \ldots, {\mathcal D}_s]$, with ${\mathcal D}_j$ a fusion subcategory of
${\mathcal C}$ Grothendieck equivalent to $\Rep D_{n_j}$, $n_j$ odd,
for all $j = 1, \ldots, s$, as we wanted.
\end{proof}
\subsection{Proof of Theorems \ref{soluble} and \ref{gp-ttic}} Let ${\mathcal C}$ be a weakly integral fusion category.
It follows from \cite[Theorem 3.10]{gel-nik} that either ${\mathcal C}$ is
integral, or ${\mathcal C}$ is a $\mathbb Z_2$-extension of a fusion
subcategory ${\mathcal D}$. In particular, if ${\mathcal C} = {\mathcal C}_{\ad}$, then ${\mathcal C}$ is
necessarily integral.
\begin{lemma}\label{prod-simples-categorico}
Let ${\mathcal C}$ be a fusion category and let $X, X'$ be simple objects of
${\mathcal C}$. Then the following are equivalent:
\begin{enumerate} \item[(i)] The tensor product
$X^*\otimes X'$ is simple.
\item[(ii)] For every simple object $Y \neq \textbf{1}$ of ${\mathcal C}$, either $m(Y, X\otimes X^*)
= 0$ or $m(Y, X'\otimes X'^*) = 0$. \end{enumerate}
In particular, if
$X^*\otimes X'$ is not simple, then ${\mathcal C}[X]_{\ad} \cap
{\mathcal C}[X']_{\ad}$ is not trivial.
\end{lemma}
\begin{proof} The equivalence between (i) and (ii) is proved in \cite[Lemma 6.1]{BN} in the case where ${\mathcal C}$ is the category of (co)representations of a semisimple Hopf algebra.
Note that the proof in \textit{loc. cit.} works in this more general context as well.
\end{proof}
\begin{proof}[Proof of Theorem \ref{soluble}]
The proof is by induction on $\FPdim {\mathcal C}$. As pointed out at the
beginning of this subsection, if ${\mathcal C}$ is not integral, then it is
a $\mathbb Z_2$-extension of a fusion subcategory ${\mathcal D}$. Since ${\mathcal D}$
also satisfies the assumptions of the theorem, then ${\mathcal D}$ is
solvable, by induction. Hence ${\mathcal C}$ is solvable as well.
We may thus assume that ${\mathcal C}$ is integral. Therefore $\cd({\mathcal C}) =
\{1, 2 \}$ and the results of the previous subsection apply. By
Lemma \ref{noad}, we may assume that ${\mathcal C} = {\mathcal C}_{\ad}$. Then it
follows from Lemma \ref{genconad} that ${\mathcal C} = {\mathcal C}[{\mathcal D}_1, \ldots,
{\mathcal D}_s]$, with ${\mathcal D}_j$ Grothendieck equivalent to $\Rep D_{n_j}$,
$n_j$ odd, $\forall j = 1, \ldots, s$.
By Lemma \ref{lema-dn}, $G({\mathcal D}_j) = \{\textbf{1}, g_j\}$, $\forall
j = 1, \ldots, s$. We claim that $g_i = g_j$ $\forall 1\leq i, j
\leq s$. Indeed, let ${\mathcal D}_j = {\mathcal C}[X^{(j)}]$, where $X^{(j)} =
X_1^{(j)}$ in the notation of Proposition \ref{D_n}. Then we have
$(X^{(j)})^{\otimes 2} = \textbf{1}\oplus g_j \oplus X_2^{(j)}$.
Fix $1\leq i, j \leq s$. Since ${\mathcal C}$ has no simple objects of
Frobenius-Perron dimension $4$, then $g_i = g_j$ or
$X_2^{(j)}\simeq X_2^{(i)}$, by Lemma
\ref{prod-simples-categorico}. In the first case we are done. In
the second case, we note that $\{1, g_j\} = G[X_2^{(j)}] =
G[X_2^{(i)}] = \{1, g_i\}$. Then $g_j = g_i$, as claimed. Let $g = g_j = g_i$.
By Lemma \ref{lema-dn}, $g\in {\mathcal D}_i'$, for all $i = 1, \ldots, s$.
Since ${\mathcal D}_i$, $1\leq i \leq s$, generate ${\mathcal C}$ then $g \in {\mathcal C}'$. It
follows from Proposition \ref{equiv} (ii) that ${\mathcal C}$ is the
equivariantization by $\mathbb Z_2$ of a braided fusion category
$\widetilde {\mathcal C}$. In particular, $\FPdim \widetilde {\mathcal C} = \FPdim {\mathcal C}
/ 2$ and $\cd (\widetilde {\mathcal C}) \subseteq \{1,2\}$, by \cite[Proof of
Proposition 6.2 (1)]{ENO2}, \cite[Lemma 7.2]{NP}. By inductive
hypothesis, $\widetilde {\mathcal C}$ is solvable. Then ${\mathcal C}$, being the
equivariantization of a solvable fusion category by a solvable
group, is itself solvable \cite[Proposition 4.5 (i)]{ENO2}.
\end{proof}
\begin{theorem}\label{morita-ccad} Let ${\mathcal C}$ be a weakly integral braided fusion category such that $\FPdim X \leq 2$ for every simple object $X$ of ${\mathcal C}$.
Assume in addition that ${\mathcal C} = {\mathcal C}_{\ad}$. Then ${\mathcal C}$ is tensor
Morita equivalent to a pointed fusion category ${\mathcal C}(A \rtimes \mathbb Z_2,
\tilde \omega)$, where $A$ is an abelian group endowed with an action of $\mathbb Z_2$ by group automorphisms, and $\tilde
\omega$ is a certain $3$-cocycle on the semidirect product $A \rtimes \mathbb Z_2$. \end{theorem}
\begin{proof} The assumption ${\mathcal C} = {\mathcal C}_{\ad}$ implies that ${\mathcal C}$ is
integral. Hence we may assume that $\cd({\mathcal C}) = \{ 1, 2\}$. By Lemma
\ref{genconad}, ${\mathcal C}$ is generated by fusion subcategories ${\mathcal D}_1,
\dots, {\mathcal D}_s$, $s \geq 1$, where ${\mathcal D}_i$ is Grothendieck equivalent
to $\Rep D_{n_i}$ and $n_i$ is an odd natural number, for all $i =
1, \dots, s$. Furthermore, as in the proof of Theorem
\ref{soluble}, the assumption that ${\mathcal C} = {\mathcal C}_{\ad}$ implies that
$G({\mathcal D}_i) = \{ \textbf{1}, g\}$, for all $1\leq i \leq s$, and ${\mathcal C}[g]
\simeq \Rep \mathbb Z_2$ is a tannakian subcategory of the
Müger center ${\mathcal C}'$. Hence ${\mathcal C} \simeq \tilde {\mathcal C}^{\mathbb Z_2}$ is
an equivariantization of a braided fusion category $\tilde {\mathcal C}$.
Equivariantization under a group action gives rise to exact
sequences of fusion categories \cite[Subsection
5.3]{tensor-exact}. In our situation we have an exact sequence
of braided tensor functors
\begin{equation}\label{sec-c}\Rep \mathbb Z_2 \to {\mathcal C} \overset{F}\to \tilde {\mathcal C}.\end{equation} In addition,
since ${\mathcal C}[g] \subseteq {\mathcal D}_i$, then \eqref{sec-c} induces by
restriction an exact sequence
\begin{equation}\label{sec-di}\Rep \mathbb Z_2 \to {\mathcal D}_i \to \tilde {\mathcal C}_i,\end{equation}
for all $i = 1, \dots, s$, where $\tilde {\mathcal C}_i$ is the essential
image of ${\mathcal D}_i$ in $\tilde {\mathcal C}$ under the functor $F$. Hence
$\tilde {\mathcal C}_i$ is a fusion subcategory of $\tilde {\mathcal C}$, for all $i$,
and moreover $\tilde {\mathcal C}_1, \dots, \tilde {\mathcal C}_s$ generate $\tilde
{\mathcal C}$ as a fusion category.
Note in addition that $\cd(\tilde {\mathcal C}), \cd(\tilde {\mathcal C}_i) \subseteq
\{ 1, 2\}$, for all $i = 1, \dots, s$.
On the other hand, exactness of the sequence
\eqref{sec-di} implies that $2n_i = \FPdim {\mathcal D}_i = 2 \FPdim \tilde
{\mathcal C}_i$ \cite[Proposition 4.10]{tensor-exact}. Hence $\FPdim \tilde
{\mathcal C}_i = n_i$ is an odd natural number.
Since $\tilde {\mathcal C}_i$ is an integral braided fusion category,
the Frobenius-Perron dimension of every simple object of $\tilde
{\mathcal C}_i$ divides the Frobenius-Perron dimension of $\tilde {\mathcal C}_i$
\cite[Theorem 2.11]{ENO2}. Thus we get that $\FPdim Y = 1$, for
all $Y \in \Irr (\tilde {\mathcal C}_i)$. That is, $\tilde {\mathcal C}_i$ is a
pointed braided fusion category, for all $i = 1, \dots, s$. Since
$\tilde {\mathcal C}_1, \dots, \tilde {\mathcal C}_s$ generate $\tilde {\mathcal C}$ as a fusion
category, $\tilde {\mathcal C}$ is also pointed. Therefore $\tilde {\mathcal C}
\simeq {\mathcal C}(A, \omega)$ as fusion categories, where $A$ is an
abelian group and $\omega \in H^3(A, k^{\times})$.
Group actions on pointed categories were classified by
Tambara \cite{tambara}. In view of \cite[Theorem 4.1]{tambara} and \cite[Proposition 3.2]{nik},
the fusion category ${\mathcal C} \simeq \tilde {\mathcal C}^{\mathbb Z_2}$ is tensor
Morita equivalent to a pointed category ${\mathcal C}(A \rtimes \mathbb Z_2,
\tilde \omega)$, where the semidirect product $A \rtimes \mathbb
Z_2$ is taken with respect to the induced action of $\mathbb Z_2$ on the
group $A$ of invertible objects of $\tilde {\mathcal C}$, and $\tilde
\omega$ is a certain $3$-cocycle on $A \rtimes \mathbb Z_2$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{gp-ttic}.]
The proof is an immediate consequence of Theorem
\ref{morita-ccad}.
\end{proof}
\begin{remark} Let ${\mathcal C}$ be a braided fusion category such that $\cd({\mathcal C}) = \{ 1, 2
\}$. Suppose that ${\mathcal C}$ is nilpotent. By \cite[Theorem 1.1]{DGNO}
${\mathcal C}$ admits a unique decomposition (up to the order of factors)
into a tensor product ${\mathcal C}_1 \boxtimes \dots \boxtimes {\mathcal C}_m$, where
${\mathcal C}_i$ are braided fusion categories of Frobenius-Perron dimension
$p_i^{m_i}$, for some pairwise distinct prime numbers $p_1, \dots,
p_m$.
Then ${\mathcal C}_i$ is an integral braided fusion category, for all $i =
1, \dots, m$, and by \cite[Theorem 2.11]{ENO2}, we get that ${\mathcal C}_i$
is pointed whenever $p_i > 2$. Hence ${\mathcal C} \simeq {\mathcal C}_1 \boxtimes
\mathcal B$ as braided fusion categories, where ${\mathcal C}_1$ is a
braided fusion category of Frobenius-Perron dimension $2^{m_1}$ such
that $\cd({\mathcal C}_1) = \{ 1, 2 \}$, and $\mathcal B$ is a pointed
braided fusion category. \end{remark}
\end{document} |
\begin{document}
\begin{abstract}
We examine a transmission problem driven by a degenerate quasilinear operator with a natural interface condition. Two aspects of the problem entail genuine difficulties in the analysis: the absence of representation formulas for the operator and the degenerate nature of the diffusion process. Our arguments circumvent these difficulties and lead to new regularity estimates. For bounded interface data, we prove the local boundedness of weak solutions and establish an estimate for their gradient in ${\rm BMO}-$spaces. The latter implies solutions are of class $C^{0,{\rm Log-Lip}}$ across the interface. Relaxing the assumptions on the data, we establish local H\"older continuity for the solutions.
\end{abstract}
\keywords{Transmission problems; $p-$Laplace operator; local boundedness; BMO gradient estimates; Log-Lipschitz regularity.}
\subjclass{35B65; 35J92; 35Q74.}
\maketitle
\section{Introduction}\label{sec_mollybloom}
Transmission problems describe diffusive processes within heterogeneous media that change abruptly across certain interfaces. They find application, for example, in the study of electromagnetic conductivity and composite materials, and their mathematical formulation involves a domain split into sub-regions, where partial differential equations (PDEs) are prescribed. Since the PDEs vary from region to region, the problem may have discontinuities across the interfaces.
Consequently, the geometry of these interfaces (which, in contrast to free boundary problems, are fixed and given a priori) and the structure of the underlying equations play a crucial role in analysing transmission problems.
This class of problems first appeared circa 1950, in the work of Mauro Picone \cite{Picone1954}, as an attempt to address heterogeneous materials in elasticity theory. Several subsequent works developed the basics of the theory and generalised it in various directions \cite{Borsuk1968, Campanato1957, Campanato1959, Campanato1959a, Iliin-Shismarev1961, Schechter1960, Sheftel1963, Stampacchia1956}. We refer the interested reader to \cite{Borsuk2010} for a comprehensive account of this literature.
Developments concerning the regularity of the solutions to transmission problems are much more recent. In \cite{Li-Vogelius2000}, the authors study a class of elliptic equations in divergence form, with discontinuous coefficients, modelling composite materials with closely spaced interfacial boundaries, such as fibre-reinforced structures. The main result in that paper is the local H\"older continuity for the gradient of the solutions, with estimates. The findings in \cite{Li-Vogelius2000} are relevant from the applied perspective since the gradient of a solution accounts for the stress of the material, and estimating it shows the stresses remain uniformly bounded, even when fibres are arbitrarily close to each other. The vectorial counterpart of the results in \cite{Li-Vogelius2000} appeared in \cite{Li-Nirenberg2003}, where regularity estimates for higher-order derivatives of the solutions are obtained. See also the developments reported in \cite{Bonnetier2000}.
A further layer of analysis concerns the proximity of sub-regions in limiting scenarios. In \cite{Bao-Li-Yin1}, the authors examine a domain containing two subregions, which are $\varepsilon-$apart, for some $\varepsilon>0$. Within each sub-region, the diffusion process is given by a divergence-form equation with a diffusivity coefficient $A \neq 1$. In the remainder of the domain, the diffusivity is also constant but equal to $1$. By setting $A=+\infty$, the authors examine the case of perfect conductivity. The remarkable fact about this model is that estimates on the gradient of the solutions deteriorate as the two regions approach each other. In \cite{Bao-Li-Yin1}, the authors obtain blow-up rates for the gradient norm as $\varepsilon\to 0$. We also notice the findings reported in \cite{Bao-Li-Yin2} extend those results to the context of multiple inclusions and also treat the case of perfect insulation $A=0$. We also refer the reader to \cite{Briane}.
More recently, the analysis of transmission problems has focused on the geometry of the interface. The minimum requirements on the transmission interface yielding regularity properties for the solutions are particularly interesting. In \cite{CSCS2021}, the authors consider a domain split into two sub-regions. Inside each sub-region, the problem's solution is required to be a harmonic function, and a flux condition is prescribed along the interface separating the sub-regions. By resorting to a representation formula for harmonic functions, the authors establish the existence of solutions to the problem and prove that solutions are of class $C^{0,{\rm Log-Lip}}$ across the interface. In addition, under the assumption that the interface is locally of class $C^{1,\alpha}$, they prove the solutions are of class $C^{1,\alpha}$ within each sub-region, \emph{up to the transmission interface}. This fact follows from a new stability result allowing the argument to import information from the case of flat interfaces. In \cite{SCS2022}, the authors extend the analysis in \cite{CSCS2021} to the context of fully nonlinear elliptic operators. Under the assumption that the interface is of class $C^{1,\alpha}$, they prove that solutions are of class $C^{0,\alpha}$ across, and $C^{1,\alpha}$ up to the interface. Furthermore, if the interface is of class $C^{2,\alpha}$, then solutions become $C^{2,\alpha}-$regular, also up to the interface. The findings in \cite{SCS2022} rely on a new Aleksandrov-Bakelman-Pucci estimate and variants of the maximum principle and the Harnack inequality. We also notice the developments reported in \cite{Borsuk2019}. In that paper, the author proves local boundedness in a neighbourhood of boundary points for a transmission problem driven by a $p-$Laplacian type operator.
Our goal in this paper is to extend the results of \cite{CSCS2021} to the case of degenerate quasilinear equations, which are ``\textit{the natural, and, in a sense, the best generalisation of the $p-$Laplace equation}'' (cf. \cite{Lieberman_1991}), namely
$$\textnormal{div}\left(\frac{g\left(| D u|\right)}{| D u|} D u\right)=0,$$
where $g$ is a nonlinearity satisfying appropriate assumptions. We first prove that weak solutions to the transmission problem, properly defined and whose existence follows from well-known methods, are locally bounded. The proof combines delicate inequalities with the careful choice of auxiliary test functions and a cut-off argument to produce a variant of the weak Harnack inequality. Working under a $C^1$ interface geometry, we then obtain an integral estimate for the gradient, leading to regularity in ${\rm BMO}-$spaces. As a corollary, we infer that solutions are of class $C^{0,{\rm Log-Lip}}$ across the fixed interface. For the particular case of the $p-$Laplace operator, this result follows directly from potential estimates obtained in \cite{DM2011, KM2014c}; we also refer to \cite{M2011,M2011a}. Finally, we relax the boundedness assumption on the interface data and derive local H\"older continuity estimates.
This transmission problem driven by a quasilinear degenerate operator presents genuine difficulties compared to the linear case of the Laplacian. Firstly, the operator lacks representation formulas, and the strategy developed in \cite{CSCS2021} is no longer available. Secondly, the degenerate nature of the problem rules out the approach put forward in \cite{SCS2022}. Consequently, one must develop new machinery to examine the regularity of the solutions.
Another fundamental question in transmission problems concerns the optimal regularity \emph{up to the interface}. As mentioned before, results of this type appear in the recent works \cite{CSCS2021} and \cite{SCS2022}; see also \cite{dong20}. The issue remains open in the context of quasilinear degenerate problems, particularly for the $p-$Laplace operator. We believe the analysis of the boundary behaviour of $p-$harmonic functions may yield helpful information in this direction.
The remainder of this article is organised as follows. Section \ref{sec_vicosa} contains the precise formulation of the problem, comments on the existence of a unique solution and gathers basic material used in the paper. In Section \ref{sec_alkhawarizmi}, we put forward the proof of the local boundedness. The proof of the BMO--regularity and its consequences is the object of Section \ref{sec_beacon}, where further generalisations are also included.
\section{Setting of the problem and auxiliary results}\label{sec_vicosa}
In this section, we precisely state our transmission problem, introduce the notion of a weak solution and comment on its existence and uniqueness. We then collect several auxiliary results.
\subsection{Problem setting and assumptions}
Let $\Omega\subset\mathbb{R}^d$ be a bounded domain and fix $\Omega_1\Subset\Omega$. Define $\Omega_2:=\Omega\setminus\overline{\Omega_1}$ and consider the interface $\Gamma:=\partial\Omega_1$, which we assume is a $(d-1)-$surface of class $C^1$. For a function $u:\overline{\Omega}\to\mathbb{R}$, we set
\begin{equation*}
u_1:=u\big|_{\overline{\Omega_1}}\hspace{.3in} \mbox{and}\hspace{.3in} u_2:=u\big|_{\overline{\Omega_2}}.
\end{equation*}
Note that we necessarily have $u_1=u_2$ on $\Gamma$. Denoting by $\nu$ the unit normal vector to $\Gamma$ pointing inwards to $\Omega_1$, we write
$$\frac{\partial u_i}{\partial\nu} = Du_i \cdot \nu, \quad i=1,2.$$
For a nonlinearity $g$, satisfying appropriate assumptions, we consider the quasilinear degenerate transmission problem consisting of finding a function $u:\overline{\Omega}\to\mathbb{R}$ such that
\begin{equation}\label{eq_stima118}
\begin{cases}
\textnormal{div}\left(\frac{g\left(| D u_1|\right)}{| D u_1|} D u_1\right)=0&\hspace{.2in}\mbox{in}\hspace{.2in}\Omega_1\\
\vspace*{-0.3cm}\\
\textnormal{div}\left(\frac{g\left(| D u_2|\right)}{| D u_2|} D u_2\right)=0&\hspace{.2in}\mbox{in}\hspace{.2in}\Omega_2,\\
\end{cases}
\end{equation}
with the additional conditions
\begin{equation}\label{eq_stima119}
\begin{cases}
u=0&\hspace{.2in}\mbox{on}\hspace{.2in}\partial\Omega\\
\vspace*{-0.3cm}\\
\frac{g(| D u_1|)}{| D u_1|}\frac{\partial u_1}{\partial\nu}-\frac{g(| D u_2|)}{| D u_2|}\frac{\partial u_2}{\partial\nu}=f&\hspace{.2in}\mbox{on}\hspace{.2in}\Gamma,
\end{cases}
\end{equation}
for a given function $f$.
We assume the function $g\in C^1\left(\mathbb{R}^+_0\right)$ is such that
\begin{equation}
g_0\le\frac{tg'(t)}{g(t)}\le g_1,\quad\forall t>0,
\label{business class}
\end{equation}
for fixed constants $1\le g_0\le g_1$. Moreover, we assume the monotonicity inequality
\begin{equation}\label{monicavitti}
\bigg(\frac{g(|\xi|)}{|\xi|}\xi-\frac{g(|\zeta|)}{|\zeta|}\zeta\bigg)\cdot(\xi-\zeta)\ge C|\xi-\zeta|^p, \quad\forall\xi,\zeta\in\mathbb{R}^d,
\end{equation}
holds for a certain $p>2$ and $C>0$.
By choosing $g(t)=t^{p-1}$, with $p>2$, one gets in \eqref{eq_stima118} two degenerate $p-$Laplace equations. A different example of a nonlinearity $g=g(t)$ satisfying \eqref{business class}-\eqref{monicavitti} is
\[
g(t):=t^{p-1}\ln\left(a+t\right)^\alpha,
\]
for $p>2$, $a>1$ and $\alpha>0$.
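Indeed, for this choice one readily checks that
\[
\frac{tg'(t)}{g(t)}=(p-1)+\frac{\alpha t}{(a+t)\ln(a+t)},\quad t>0,
\]
and, since $a>1$, the last quotient lies between $0$ and $\alpha/\ln a$; hence \eqref{business class} holds with $g_0=p-1$ and $g_1=p-1+\alpha/\ln a$.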
We now define the primitive of $g$,
\begin{equation*}
G(t)=\int_0^tg(s)\,{\rm d}s, \quad t \geq 0.
\end{equation*}
Due to the assumptions on $g$, one concludes that $G:\mathbb{R}\to\mathbb{R}\cup\left\lbrace+\infty\right\rbrace$ is left-continuous and convex, or a \emph{Young function} (see \cite[Definition 3.2.1]{at1}). Before proceeding, we introduce the Orlicz-Sobolev space defined by $G$.
\begin{Definition}[Orlicz-Sobolev space]\label{notto}
Let $G$ be a Young function. We define the Orlicz-Sobolev space $W^{1,G}(\Omega)$ as the set of weakly differentiable functions $u\in W^{1,1}(\Omega)$ such that
\[
\int_\Omega G\left(|u(x)|\right){\rm d}x+\int_\Omega G\left(|Du(x)|\right){\rm d}x<\infty.
\]
The space $W^{1,G}_0(\Omega)$ is the closure of $C^{\infty}_c(\Omega)$ in $W^{1,G}(\Omega)$.
\end{Definition}
\subsection{Weak solutions}
The precise definition of solution we have in mind is the object of the following definition.
\begin{Definition}\label{def_weaksol}
A function $u\in W_0^{1,G}(\Omega)$ is a weak solution of \eqref{eq_stima118}-\eqref{eq_stima119} if
\begin{equation}
\int_{\Omega}\frac{g\left(| D u|\right)}{| D u|} D u\cdot D v\,{\rm d}x=\int_{\Gamma}f v\,{\rm d}\mathcal{H}^{d-1},\hspace{.2in}\forall\,v\in W^{1,G}_0(\Omega).
\label{portia}
\end{equation}
\end{Definition}
We use the Hausdorff measure $\mathcal{H}^{d-1}$ in the surface integral to emphasise that the right-hand side of the equation is a measure supported along the interface, and we write
\begin{equation}\label{eq_pde}
-\textnormal{div}\left(\frac{g\left(| D u|\right)}{| D u|} D u\right)=f\,{\rm d}\mathcal{H}^{d-1}\big|_{\Gamma}.
\end{equation}
To justify \eqref{eq_pde}, we multiply both equations in \eqref{eq_stima118} by a test function $\varphi\in C^\infty_c(\Omega)$, and formally integrate by parts to get
\[
\int_{\Omega_1}\frac{g\left(| D u_1|\right)}{| D u_1|} D u_1\cdot D\varphi\,{\rm d}x=-\int_{\Gamma}\left(\frac{g\left(| D u_1|\right)}{| D u_1|} D u_1\cdot \nu\right)\varphi\,{\rm d}\mathcal{H}^{d-1}
\]
and
\[
\int_{\Omega_2}\frac{g\left(| D u_2|\right)}{| D u_2|} D u_2\cdot D\varphi\,{\rm d}x=-\int_{\Gamma}\left(\frac{g\left(| D u_2|\right)}{| D u_2|} D u_2\cdot \nu\right)\varphi\,{\rm d}\mathcal{H}^{d-1}.
\]
Adding and using \eqref{eq_stima119}, we obtain
\[
\int_{\Omega}\frac{g\left(| D u|\right)}{| D u|} D u\cdot D \varphi\,{\rm d}x=\int_{\Gamma}f \varphi\,{\rm d}\mathcal{H}^{d-1},\hspace{.2in}\forall\,\varphi\in W^{1,G}_0(\Omega).
\]
\begin{Remark}
We notice the integrals in Definition \ref{def_weaksol} are well-defined. Indeed, let $u,v\in W^{1,G}(\Omega)$; we verify that
\begin{equation*}
\int_{\Omega}\frac{g(|Du|)}{|Du|}Du\cdot Dv\,{\rm d}x<\infty.
\end{equation*}
Since $g$ is increasing and satisfies \eqref{business class}, we have $tg(t)\le CG(t)$ for $t\ge0$. Also, $G(t+s)\le C\big(G(t)+G(s)\big)$ for $t,s\ge0$. Hence,
\begin{align*}
\bigg|\int_{\Omega}\frac{g(|Du|)}{|Du|}Du\cdot Dv\,{\rm d}x\bigg|\le&\int_{\Omega}g(|Du|)|Dv|\,{\rm d}x\\
\le&\int_{\Omega}g(|Du|+|Dv|)(|Du|+|Dv|)\,{\rm d}x\\
\le&C\int_{\Omega}G(|Du|+|Dv|)\,{\rm d}x\\
\le&C\int_{\Omega}G(|Du|)\,{\rm d}x+C\int_{\Omega}G(|Dv|)\,{\rm d}x\\
<&\infty.
\end{align*}
\end{Remark}
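For the reader's convenience, we record how the two inequalities used in the previous remark follow from \eqref{business class}. Integrating by parts and using $sg'(s)\le g_1g(s)$, we get
\[
G(t)=tg(t)-\int_0^tsg'(s)\,{\rm d}s\ge tg(t)-g_1G(t),
\]
whence $tg(t)\le(1+g_1)G(t)$. Moreover, since $(\ln g(t))'\le g_1/t$, we have $g(2t)\le2^{g_1}g(t)$, and therefore
\[
G(t+s)\le G\big(2\max(t,s)\big)\le2\max(t,s)\,g\big(2\max(t,s)\big)\le2^{g_1+1}(1+g_1)\big(G(t)+G(s)\big),
\]
by the monotonicity of $g$ and the previous estimate.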
\begin{Remark}
Let $u\in W_0^{1,G}(\Omega)$, and suppose that \eqref{monicavitti} is in force. Then one infers $u\in W_0^{1,p}(\Omega)$. Indeed, taking $\zeta=0$ in that inequality yields
\begin{align*}
\int_\Omega|Du|^p\,{\rm d}x\le&C\int_\Omega g(|Du|)|Du|\,{\rm d}x\\
\le&C\int_\Omega G(|Du|)\,{\rm d}x.
\end{align*}
\end{Remark}
\subsection{Existence and uniqueness of weak solutions}
To prove the existence of a unique weak solution to \eqref{eq_stima118}-\eqref{eq_stima119}, one can resort to approximation and monotonicity methods. We refer the reader to \cite{baroni2015}; see also \cite{Lieberman_1991}. Additionally, we remark that the weak solution is the global minimiser of the functional $I: W_0^{1,G}(\Omega)\to\mathbb{R}$ defined by
\begin{equation}\label{stima117}
I(u)=\int_{\Omega}G\left(| D u|\right)\,{\rm d}x-\int_{\Gamma}fu\,{\rm d}\mathcal{H}^{d-1},
\end{equation}
whose Euler-Lagrange equation, in its weak formulation, is precisely \eqref{portia}.
\subsection{Auxiliary results}\label{subsec_prelim} We now collect some auxiliary material which will be instrumental in the proofs of the main results. We start with a technical inequality (cf. \cite[Lemma 2]{Serrin_1964}).
\betagin{Lemma}\label{lemma numerico}
Let $p>0$, and $N\in\mathbb{N}$. Let also $a_1,\dots,a_N,q_1,\dots,q_N$ be real numbers such that $0<a_i<\infty$ and $0\le q_i<p$, for every $i=1,\ldots,N$. Suppose that $z$ is a positive real number satisfying
\begin{equation*}
z^p\le\sum_{i=1}^Na_iz^{q_i}.
\end{equation*}
Then there exists $C>0$ such that
\begin{equation*}
z\le C\sum_{i=1}^Na_i^{\gamma_i},
\end{equation*}
where $\gamma_i=(p-q_i)^{-1}$, for $i=1,\ldots,N$. Moreover, $C=C(N,p,q_1,\ldots,q_N)$.
\end{Lemma}
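We briefly sketch the elementary argument behind Lemma \ref{lemma numerico}, in the form in which it is applied below, namely for a single quantity $z>0$ satisfying $z^p\le\sum_{i=1}^Na_iz^{q_i}$. There exists an index $i_0$ for which $a_{i_0}z^{q_{i_0}}\ge z^p/N$; hence $z^{p-q_{i_0}}\le Na_{i_0}$ and
\[
z\le\big(Na_{i_0}\big)^{\gamma_{i_0}}\le N^{\max_i\gamma_i}\sum_{i=1}^Na_i^{\gamma_i}.
\]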
Although standard in the field, the following result lacks a detailed proof in the literature. We include it here for completeness and future reference.
\begin{Lemma}\label{lem_stima119}
Fix $R_0>0$ and let $\phi:[0,R_0]\to[0,\infty)$ be a non-decreasing function. Suppose there exist constants $C_1,\alpha,\beta>0$, and $C_2,\mu\ge0$, with $\beta<\alpha$, satisfying
\begin{equation*}
\phi(r)\le C_1\Big[\Big(\frac{r}{R}\Big)^{\alpha}+\mu\Big]\phi(R)+C_2R^{\beta},
\end{equation*}
for every $0<r\le R\le R_0$.
Then, for every $\sigma\le\beta$, there exists $\mu_0=\mu_0(C_1,\alpha,\beta,\sigma)$ such that, if $\mu<\mu_0$, for every $0<r\le R\le R_0$, we have
\begin{equation*}
\phi(r)\le C_3\Big(\frac{r}{R}\Big)^{\sigma}\big(\phi(R)+C_2R^{\sigma}\big),
\end{equation*}
where $C_3=C_3(C_1,\alpha,\beta,\sigma)>0$. Moreover,
\begin{equation*}
\phi(r)\le C_4r^{\sigma},
\end{equation*}
where $C_4=C_4(C_2,C_3,R_0,\phi(R_0),\sigma)$.
\end{Lemma}
\begin{proof}
For clarity, we split the proof into two steps. First, an induction argument leads to an inequality at discrete scales; then, we pass to the continuous case and conclude the argument.
\noindent{\bf Step 1 -} We want to verify that
\begin{equation}\label{eq_1}
\phi(\theta^{n+1}R)\le\theta^{(n+1)\delta}\phi(R)+C_2\theta^{n\beta}R^{\beta}\sum_{j=0}^n\theta^{j(\delta-\beta)},
\end{equation}
for every $n\in\mathbb{N}$. We notice it suffices to prove the estimate for $\sigma=\beta$ and work in this setting. For $0<\theta<1$ and $0<R\le R_0$, the assumption of the lemma yields
\[
\phi(\theta R)\le C_1\bigg[\bigg(\frac{\theta R}{R}\bigg)^{\alpha}+\mu\bigg]\phi(R)+C_2R^{\beta}=\theta^{\alpha}C_1(1+\mu\theta^{-\alpha})\phi(R)+C_2R^{\beta}.
\]
Choose $\theta\in(0,1)$ such that $2C_1\theta^{\alpha}=\theta^{\delta}$ with $\beta<\delta<\alpha$. Notice that $\theta$ depends only on $C_1,\alpha,\delta$. Take $\mu_0>0$ such that $\mu_0\theta^{-\alpha}<1$. For every $R\le R_0$ we then have
\begin{equation}\label{eq_induction00}
\phi(\theta R)\le\theta^{\delta}\phi(R)+C_2R^{\beta}
\end{equation}
and the base case follows. Suppose the statement has already been verified for some $k\in\mathbb{N}$, $k\ge1$; then
\begin{equation*}
\phi(\theta^kR)\le\theta^{k\delta}\phi(R)+C_2\theta^{(k-1)\beta}R^{\beta}\sum_{j=0}^{k-1}\theta^{j(\delta-\beta)}.
\end{equation*}
Thanks to \eqref{eq_induction00}, we have
\[
\begin{split}
\phi(\theta^{k+1}R)&=\phi\big(\theta^k(\theta R)\big)\le\theta^{k\delta}\phi(\theta R)+C_2\theta^{(k-1)\beta}(\theta R)^{\beta}\sum_{j=0}^{k-1}\theta^{j(\delta-\beta)}\\
&\le\theta^{k\delta}\big[\theta^{\delta}\phi(R)+C_2R^{\beta}\big]+C_2\theta^{k\beta}R^{\beta}\sum_{j=0}^{k-1}\theta^{j(\delta-\beta)}\\
&=\theta^{(k+1)\delta}\phi(R)+C_2\theta^{k\delta}R^{\beta}+C_2\theta^{k\beta}R^{\beta}\sum_{j=0}^{k-1}\theta^{j(\delta-\beta)}\\
&=\theta^{(k+1)\delta}\phi(R)+C_2\theta^{k\beta}R^{\beta}\sum_{j=0}^k\theta^{j(\delta-\beta)}.
\end{split}
\]
Hence, \eqref{eq_1} holds for every $k\in\mathbb{N}$, and the induction argument is complete.
\noindent{\bf Step 2 -} Next, we pass from the discrete to the continuous case. In particular, we claim that
\begin{equation*}
\phi(r)\le C_3\Big(\frac{r}{R}\Big)^{\beta}\big(\phi(R)+C_2R^{\beta}\big),
\end{equation*}
for every $0<r\le R\le R_0$.
Indeed,
\begin{align*}
\phi(\theta^{k+1}R) \le&\theta^{(k+1)\delta}\phi(R)+C_2\theta^{k\beta}R^{\beta}\frac{1}{1-\theta^{\delta-\beta}}\\
=&\theta^{(k+1)\delta}\phi(R)+C_2R^{\beta}\frac{\theta^{(k+1)\beta}}{\theta^{\beta}-\theta^{\delta}}\\
\le&C_3\theta^{(k+1)\beta}\big(\phi(R)+C_2R^{\beta}\big),
\end{align*}
for every $k\in\mathbb{N}$. Taking $k\in\mathbb{N}$ such that $\theta^{k+2}R\le r<\theta^{k+1}R$, up to relabeling the constant $C_3$, we get
\begin{align*}
\phi(r)\le&\phi(\theta^{k+1}R)\le C_3\theta^{(k+1)\beta}\big(\phi(R)+C_2R^{\beta}\big)\\
=&C_3\theta^{(k+2)\beta}\theta^{-\beta}\big(\phi(R)+C_2R^{\beta}\big)\\
\le&C_3\Big(\frac{r}{R}\Big)^{\beta}\big(\phi(R)+C_2R^{\beta}\big).
\end{align*}
Finally, one notices
\[
\phi(r)\le C_3\frac{1}{R_0^{\beta}}\big(\phi(R_0)+C_2R_0^{\beta}\big)r^{\beta}=:C_4r^{\beta},
\]
and the proof is complete.
\end{proof}
We conclude this section by introducing two function spaces we resort to in the paper, namely Campanato and Morrey spaces. Indeed, we use embedding properties of these spaces to conclude the H\"older continuity of weak solutions when the interface data is unbounded.
\begin{Definition}[Campanato spaces]
We denote by $L_C^{p,\lambda}(\Omega;\mathbb{R}^d)$, with $1\le p<\infty$ and $\lambda\ge0$, the space of functions $u\in L^p(\Omega;\mathbb{R}^d)$ such that
\begin{equation*}
[u]_{L_C^{p,\lambda}(\Omega;\mathbb{R}^d)}^p=\sup_{x^0\in\Omega,\rho>0}\frac{1}{\rho^{\lambda}}\int_{\Omega\cap B(x^0,\rho)}|u-(u)_{\Omega\cap B(x^0,\rho)}|^p\,{\rm d}x<\infty.
\end{equation*}
\end{Definition}
\begin{Definition}[Morrey spaces]
We denote by $L_M^{p,\lambda}(\Omega;\mathbb{R}^d)$, with $1\le p<\infty$ and $\lambda\ge0$, the space of functions $u\in L^p(\Omega;\mathbb{R}^d)$ such that
\begin{equation*}
\|u\|_{L_M^{p,\lambda}(\Omega)}^p=\sup_{x^0\in\Omega,\rho>0}\frac{1}{\rho^{\lambda}}\int_{\Omega\cap B(x^0,\rho)}|u|^p\,{\rm d}x<\infty.
\end{equation*}
\end{Definition}
Notice that $L^{p,\lambda}_{M}$ and $L^{p,\lambda}_{C}$ are isomorphic; see \cite[Proposition 2.3]{giusti}. We recall that a function $u\in W^{1,1}(\Omega)$ such that $Du\in L_M^{p,\lambda}(\Omega;\mathbb{R}^d)$ is H\"older continuous. More precisely, we have $u\in C^{0,\alpha}(\Omega)$ with $\alpha=1-\lambda/p$; see \cite{adamsmorrey}.
\section{Local boundedness}\label{sec_alkhawarizmi}
In this section, we prove the local boundedness for the weak solutions to a particular variant of our problem. Namely, we consider the case $g(t):=t^{p-1}$ and recover the $p-$Laplace operator. Our argument is inspired by the one put forward in \cite{Serrin_1964}.
\begin{Theorem}[Local Boundedness]\label{thm_lb}
Let $u\in W_0^{1,p}(\Omega)$ be the weak solution to \eqref{eq_stima118}-\eqref{eq_stima119}, with $g(t):=t^{p-1}$ and $f \in L^{\infty}(\Gamma)$. Then for any $B_{R}:=B_{R}(x_0)\Subset \Omega$, there exists $C=C\big(d,p,R,\|f\|_{L^{\infty}(\Gamma)}\big)>0$ such that
\[
\|u\|_{L^{\infty}(B_{R/2})}\le CR^{-\frac{d}{p}}\big(\|u\|_{L^p(B_R)}+R^{\frac{d}{p}+1}\|f\|_{L^{\infty}(\Gamma)}\big)
\]
and
\[
\| D u\|_{L^p(B_{R/2})}\le CR^{-1}\big(\|u\|_{L^p(B_R)}+R^{\frac{d}{p}+1}\|f\|_{L^{\infty}(\Gamma)}\big).
\]
\end{Theorem}
\begin{proof}
Fix $R>0$ such that $B_R\Subset \Omega$ and set $k:=R\|f\|_{L^{\infty}(\Gamma)}$. Define $\overline{u}:\Omega\to\mathbb{R}$ as
\[
\overline{u}(x):=|u(x)|+k
\]
for all $x\in\Omega$. Fix $q\ge1$ and $\ell>k$. For $t\in\mathbb{R}$, denote $\overline{t}:=|t|+k$. To ease the presentation, we split the remainder of the proof into four steps.
\noindent{\bf Step 1 -} Let $F:[k,\infty)\to\mathbb{R}$ be defined as
\[
F(s):=
\begin{cases}
s^q&\hspace{.3in}\mbox{if}\hspace{.3in}k\le s\le \ell\\
q\ell^{q-1}s-(q-1)\ell^q&\hspace{.3in}\mbox{if}\hspace{.3in}\ell<s.
\end{cases}
\]
Then $F\in C^1\big([k,\infty)\big)$ and $F\in C^\infty\big([k,\infty)\setminus{\{\ell\}}\big)$. Let $H:\mathbb{R}\to\mathbb{R}$ be defined as
\begin{equation*}
H(t):=\textnormal{sgn}(t)\big(F(\overline{t})F'(\overline{t})^{p-1}-q^{p-1}k^{\beta}\big),\quad\forall t\in\mathbb{R},
\end{equation*}
where $\beta=p(q-1)+1>1$. A simple computation yields
\begin{align*}
H'(t)=
\begin{cases}
q^{-1}\beta F'(\overline{t})^p&\hspace{.3in}\mbox{if}\hspace{.3in}|t|<\ell-k\\
F'(\overline{t})^p&\hspace{.3in}\mbox{if}\hspace{.3in}|t|>\ell-k.
\end{cases}
\end{align*}
Notice that
\[
|H(u)|\le F(\overline{u})F'(\overline{u})^{p-1}
\]
and
\[
\overline{u}F'(\overline{u})\le qF(\overline{u}).
\]
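For later use, we briefly justify these two bounds. Since $s\mapsto F(s)F'(s)^{p-1}$ is non-decreasing on $[k,\infty)$ and $F(k)F'(k)^{p-1}=q^{p-1}k^{\beta}$, the quantity within the parentheses in the definition of $H$ is non-negative, which gives the first bound. As for the second, for $k\le s\le\ell$ one has $sF'(s)=qs^q=qF(s)$, while for $s>\ell$,
\[
qF(s)-sF'(s)=q(q-1)\ell^{q-1}(s-\ell)\ge0.
\]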
\noindent{\bf Step 2 -} In this step, we introduce auxiliary test functions, which build upon the former inequalities. Fix $0<r<R$. Let $\eta\in C_c^{\infty}(B_R)$, $0\le\eta\le1$, $\eta=1$ in $B_r$, $| D \eta|\le(R-r)^{-1}$. Let $v=\eta^pH(u)$. Since $H$ is Lipschitz continuous and of class $C^1$ away from $\pm(\ell-k)$, it follows that $H(u)\in W^{1,p}(\Omega)$. Hence $v$ is an admissible test function. We have
\[
D v=
\begin{cases}
p\eta^{p-1}H(u) D \eta+\eta^pH'(u) D u&\hspace{.2in}\mbox{if}\hspace{.2in}u\ne\pm(\ell-k)\\
p\eta^{p-1}H(u) D \eta&\hspace{.2in}\mbox{if}\hspace{.2in}u=\pm(\ell-k).
\end{cases}
\]
Set $w(x)=F\big(\overline{u}(x)\big)$. Notice that $q^{-1}\beta\ge1$; hence $H'(u)\le q^{-1}\beta F'(\overline{u})^p$. Notice also that $| D u|=| D \overline{u}|$.
Using the trace theorem and the Poincar\'e inequality, we get
\begin{equation}\label{stima2}
\int_{\Omega}| D u|^{p-2} D u\cdot D v\,{\rm d}x\le\|f\|_{L^{\infty}(\Gamma)}\int_{\Gamma}|v|\,{\rm d}\mathcal{H}^{d-1}\le C\int_{\Omega}| D v|\,{\rm d}x.
\end{equation}
Now we estimate the left-hand side of \eqref{stima2} from below. We get
\begin{align}\label{stima5}\notag
\int_{B_1}| D u|^{p-2} D u\cdot D v\,{\rm d}x=&\int_{B_1}| D u|^{p-2} D u\cdot\big(p\eta^{p-1}H(u) D \eta+\eta^pH'(u) D u\big)\,{\rm d}x \notag\\
=&p\int_{B_1}\eta^{p-1}H(u)| D u|^{p-2} D u\cdot D \eta\,{\rm d}x \notag\\
&+\int_{B_1}\eta^pH'(u)| D u|^p\,{\rm d}x \notag\\
\ge&-p\int_{B_1}\eta^{p-1}F(\overline{u})F'(\overline{u})^{p-1}| D \overline{u}|^{p-1}| D \eta|\,{\rm d}x \notag\\
&+\int_{B_1}\eta^pF'(\overline{u})^p| D \overline{u}|^p\,{\rm d}x \notag\\
=&-p\int_{B_1}\eta^{p-1}w| D w|^{p-1}| D \eta|\,{\rm d}x \notag\\
&+\int_{B_1}\eta^p| D w|^p\,{\rm d}x \notag\\
\ge&-p\|w D \eta\|_{L^p(B_1)}\|\eta D w\|_{L^p(B_1)}^{p-1}+\|\eta D w\|_{L^p(B_1)}^p.
\end{align}
We also control the right-hand side of \eqref{stima2} by computing
\begin{align}\label{stima3}\notag
C\int_{B_1}| D v|\,{\rm d}x=&C\int_{B_1}\frac{\overline{u}^{p-1}}{\overline{u}^{p-1}}|p\eta^{p-1}H(u) D \eta+\eta^pH'(u) D u|\,{\rm d}x \notag\\
\le&Ck^{1-p}p\int_{B_1}\overline{u}^{p-1}\eta^{p-1}|H(u) D \eta|\,{\rm d}x \notag\\
&+Ck^{1-p}\int_{B_1}\overline{u}^{p-1}\eta^pH'(u)| D u|\,{\rm d}x \notag\\
\le&C\int_{B_1}\overline{u}^{p-1}\eta^{p-1}F(\overline{u})F'(\overline{u})^{p-1}| D \eta|\,{\rm d}x \notag\\
&+Cq^{-1}\beta\int_{B_1}\overline{u}^{p-1}\eta^pF'(\overline{u})^p| D u|\,{\rm d}x \notag\\
\le&C\int_{B_1}\eta^{p-1}q^{p-1}F(\overline{u})^{p-1}F(\overline{u})| D \eta|\,{\rm d}x \notag\\
&+Cq^{-1}\beta\int_{B_1}q^{p-1}F(\overline{u})^{p-1}\eta^p F'(\overline{u})| D u|\,{\rm d}x \notag\\
=&Cq^{p-1}\int_{B_1}(\eta w)^{p-1}w| D \eta|\,{\rm d}x \notag\\
&+Cq^{p-2}\beta\int_{B_1}(\eta w)^{p-1}\eta| D w|\,{\rm d}x \notag\\
\le&Cq^{p-1}\|\eta w\|_{L^p(B_1)}^{p-1}\|w D \eta\|_{L^p(B_1)} \notag\\
&+Cq^{p-2}\beta\|\eta w\|_{L^p(B_1)}^{p-1}\|\eta D w\|_{L^p(B_1)}.
From \eqref{stima2}, combining \eqref{stima3} with \eqref{stima5}, we get
\begin{align}\label{stima6}
\|\eta D w\|_{L^p(\Omega)}^p\le&p\|w D \eta\|_{L^p(\Omega)}\|\eta D w\|_{L^p(\Omega)}^{p-1} \notag\\
&+Cq^{p-1}\|\eta w\|_{L^p(\Omega)}^{p-1}\|w D \eta\|_{L^p(\Omega)} \notag\\
&+Cq^{p-1}\|\eta w\|_{L^p(\Omega)}^{p-1}\|\eta D w\|_{L^p(\Omega)},
\end{align}
where we have used
\[
\beta=pq-p+1\le pq-p+q\le pq+q=(p+1)q.
\]
\noindent{\bf Step 3 -} Set
\begin{equation}
z=\frac{\|\eta D w\|_{L^p(\Omega)}}{\|w D \eta\|_{L^p(\Omega)}},\quad\zeta=\frac{\|\eta w\|_{L^p(\Omega)}}{\|w D \eta\|_{L^p(\Omega)}}. \notag
\end{equation}
Dividing \eqref{stima6} by $\|w D \eta\|_{L^p(\Omega)}^p$, we have
\begin{align}
z^p\le&pz^{p-1}+Cq^{p-1}\frac{\|\eta w\|_{L^p(\Omega)}^{p-1}}{\|w D \eta\|_{L^p(\Omega)}^{p-1}}+Cq^{p-1}\frac{\|\eta w\|_{L^p(\Omega)}^{p-1}}{\|w D \eta\|_{L^p(\Omega)}^{p-1}}\frac{\|\eta D w\|_{L^p(\Omega)}}{\|w D \eta\|_{L^p(\Omega)}} \notag\\
=&pz^{p-1}+Cq^{p-1}\zeta^{p-1}+Cq^{p-1}\zeta^{p-1}z. \notag
\end{align}
An application of Lemma \ref{lemma numerico} implies
\begin{equation*}
z\le C\big(p+q^{\frac{p-1}{p}}\zeta^{\frac{p-1}{p}}+q\zeta\big)\le Cq(1+\zeta),
\end{equation*}
giving
\begin{equation}\label{stima7}
\|\eta D w\|_{L^p(\Omega)}\le Cq\big(\|\eta w\|_{L^p(\Omega)}+\|w D \eta\|_{L^p(\Omega)}\big).
\end{equation}
Using the Sobolev inequality, we get
\begin{align}
\|\eta w\|_{L^{p^*}(\Omega)}\le&C\| D (\eta w)\|_{L^p(\Omega)} \notag\\
\le&C\big(\|w D \eta\|_{L^p(\Omega)}+\|\eta D w\|_{L^p(\Omega)}\big) \notag\\
\le&C\Big[\|w D \eta\|_{L^p(\Omega)}+Cq\big(\|\eta w\|_{L^p(\Omega)}+\|w D \eta\|_{L^p(\Omega)}\big)\Big] \notag
\end{align}
and so
\begin{equation}\label{stima8}
\|\eta w\|_{L^{p^*}(\Omega)}\le Cq\big(\|\eta w\|_{L^p(\Omega)}+\|w D \eta\|_{L^p(\Omega)}\big).
\end{equation}
Recall that $\eta=1$ in $B_r$ and $| D \eta|\le(R-r)^{-1}$. Hence, \eqref{stima7} becomes
\begin{equation}\label{stima11}
\begin{split}
\| D w\|_{L^p(B_r)}\le& Cq\Bigg[\bigg(\int_{B_R}w^p\,{\rm d}x\bigg)^{\frac{1}{p}}+\frac{1}{R-r}\bigg(\int_{B_R}w^p\,{\rm d}x\bigg)^{\frac{1}{p}}\Bigg]\\
=&Cq\|w\|_{L^p(B_R)}\bigg(1+\frac{1}{R-r}\bigg)\\
=&Cq\frac{R-r+1}{R-r}\|w\|_{L^p(B_R)}\\
\le&Cq\frac{\textnormal{diam}(B_1)+1}{R-r}\|w\|_{L^p(B_R)}\\
\le&Cq\frac{1}{R-r}\|w\|_{L^p(B_R)}.
\end{split}
\end{equation}
Similarly, \eqref{stima8} becomes
\begin{equation}\label{stima9}
\|w\|_{L^{p^*}(B_r)}\le Cq\frac{1}{R-r}\|w\|_{L^p(B_R)}.
\end{equation}
We claim that $F_\ell\le F_{\ell+1}$, for every $\ell\in\mathbb{N}$, $\ell>k$. The only non-trivial case is when $\ell<\overline{t}\le \ell+1$. In this case, we have
$$F_\ell(\overline{t})=q\ell^{q-1}\overline{t}-(q-1)\ell^q$$
and
$$F_{\ell+1}(\overline{t})=\overline{t}^q.$$
Let $h:(\ell,\ell+1]\to\mathbb{R}$ be defined by
$$h(\overline{t})=\overline{t}^q-q\ell^{q-1}\overline{t}+(q-1)\ell^q.$$
We have $h'(\overline{t})=q\overline{t}^{q-1}-q\ell^{q-1}>0$, for every $\overline{t}\in(\ell,\ell+1]$, and hence $h$ is an increasing function. Since $\lim_{\overline{t}\to \ell^+}h(\overline{t})=0$, we have $h\ge0$ in $(\ell,\ell+1]$, and so $F_\ell\le F_{\ell+1}$. Letting $\ell\to\infty$ in \eqref{stima9}, since $0\le F_\ell\le F_{\ell+1}$ for every $\ell\in\mathbb{N}$, $\ell>k$, by the Monotone Convergence Theorem, we obtain
\begin{equation}
\bigg(\int_{B_r}\overline{u}^{qp^*}\,{\rm d}x\bigg)^{\frac{1}{p^*}}\le Cq\frac{1}{R-r}\bigg(\int_{B_R}\overline{u}^{qp}\,{\rm d}x\bigg)^{\frac{1}{p}}. \notag
\end{equation}
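The monotonicity $F_\ell\le F_{\ell+1}$ used above admits a quick numerical sanity check. The sketch below is illustrative only (it is not part of the argument, and the sample values of $q$ and $\ell$ are arbitrary): it evaluates $h(\overline{t})=\overline{t}^{\,q}-q\ell^{q-1}\overline{t}+(q-1)\ell^q$ on a grid in $(\ell,\ell+1]$ and checks nonnegativity.

```python
# Illustrative sanity check (not part of the proof): verify that
# h(t) = t^q - q*l^(q-1)*t + (q-1)*l^q is nonnegative on (l, l+1],
# i.e. that F_l <= F_{l+1}. The values of q and l are arbitrary samples.

def h(t, q, l):
    return t**q - q * l**(q - 1) * t + (q - 1) * l**q

for q in (1.5, 2.0, 3.7):
    for l in (1, 2, 5):
        grid = [l + (j + 1) / 100 for j in range(100)]  # points in (l, l+1]
        assert all(h(t, q, l) >= 0 for t in grid)
```

Since $h(\ell)=0$ and $h'>0$ on $(\ell,\ell+1]$, the check mirrors exactly the monotonicity argument given above.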
Set
\[
s:=qp\hspace{.3in}\mbox{and}\hspace{.3in}\gamma:=p^*/p=d/(d-p);
\]
then
\begin{equation*}
\bigg(\int_{B_r}\overline{u}^{s\gamma}\,{\rm d}x\bigg)^{\frac{1}{p\gamma}}\le Cq\frac{1}{R-r}\bigg(\int_{B_R}\overline{u}^{s}\,{\rm d}x\bigg)^{\frac{1}{p}}.
\end{equation*}
Raising both sides of the previous inequality to the power $p/s$, one gets
\begin{equation}\label{stima10}
\bigg(\int_{B_r}\overline{u}^{s\gamma}\,{\rm d}x\bigg)^{\frac{1}{s\gamma}}\le C^{\frac{p}{s}}\bigg(\frac{s}{p}\bigg)^{\frac{p}{s}}\Big(\frac{1}{R-r}\Big)^{\frac{p}{s}}\bigg(\int_{B_R}\overline{u}^{s}\,{\rm d}x\bigg)^{\frac{1}{s}}.
\end{equation}
Set $s_j=s\gamma^j$ and $r_j=r+2^{-j}(R-r)$, for every $j\in\mathbb{N}_0$. Iterating \eqref{stima10}, which holds for every $s\ge p$, we have
\begin{align*}
\bigg(\int_{B_{r_{j+1}}}\overline{u}^{s_j\gamma}\,{\rm d}x\bigg)^{\frac{1}{s_j\gamma}}\le&C^{\frac{p}{s_j}}\bigg(\frac{s_j}{p}\bigg)^{\frac{p}{s_j}}2^{\frac{p}{s_j}(j+1)}\Big(\frac{1}{R-r}\Big)^{\frac{p}{s_j}}\bigg(\int_{B_{r_j}}\overline{u}^{s_j}\,{\rm d}x\bigg)^{\frac{1}{s_j}} \notag\\
=&C^{\frac{p}{s_{j-1}\gamma}}\bigg(\frac{s_{j-1}\gamma}{p}\bigg)^{\frac{p}{s_{j-1}\gamma}}2^{\frac{p}{s_{j-1}\gamma}(j+1)}\Big(\frac{1}{R-r}\Big)^{\frac{p}{s_{j-1}\gamma}} \notag\\
&\times\bigg(\int_{B_{r_j}}\overline{u}^{s_{j-1}\gamma}\,{\rm d}x\bigg)^{\frac{1}{s_{j-1}\gamma}} \notag\\
\le& C(j,p,s,d)\Big(\frac{1}{R-r}\Big)^{\frac{p}{s}\sum_{k=0}^j\gamma^{-k}}\bigg(\int_{B_R}\overline{u}^s\,{\rm d}x\bigg)^{\frac{1}{s}},
\end{align*}
where
\[
C(j,p,s,d):=C^{\frac{p}{s}\sum_{k=0}^j\gamma^{-k}}\bigg(\frac{s}{p}\bigg)^{\frac{p}{s}\sum_{k=0}^j{\gamma^{-k}}}\gamma^{\frac{p}{s}\sum_{k=0}^jk\gamma^{-k}}2^{\frac{p}{s}\sum_{k=0}^j(k+1)\gamma^{-k}}.
\]
Notice that $r<r_{j}$ for every $j\in\mathbb{N}_0$, that the series above are convergent, and in particular that $\sum_{k=0}^\infty\gamma^{-k}=d/p$. By letting $j\to\infty$, we get
\betagin{equation}
\sup_{B_r}\overline{u}\le C\bigg(\frac{1}{(R-r)^d}\int_{B_R}\overline{u}^s\,{\rm d}x\bigg)^{\frac{1}{s}}.
\end{equation}
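The bookkeeping behind the iteration above can be illustrated with a short numerical sketch (illustrative only, not from the paper; the values of $d$, $p$ and $s$ are arbitrary samples): with $\gamma=d/(d-p)$ the exponents $s_j=s\gamma^j$ grow geometrically, while $\sum_{k\ge0}\gamma^{-k}=\gamma/(\gamma-1)=d/p$ is finite, which is what keeps the constants bounded as $j\to\infty$.

```python
# Illustration (not from the paper) of the Moser-iteration bookkeeping:
# with gamma = d/(d-p) = p*/p, the exponents s_j = s*gamma^j grow
# geometrically, while sum_{k>=0} gamma^(-k) = gamma/(gamma-1) = d/p is
# finite, which keeps the constant C(j,p,s,d) bounded as j -> infinity.
# The choices d = 5, p = 2, s = p are arbitrary samples.

d, p = 5, 2
gamma = d / (d - p)
s = p
exponents = [s * gamma**j for j in range(6)]           # s_0, ..., s_5
geometric_tail = sum(gamma**(-k) for k in range(200))  # ~ full series

assert exponents[0] == s and exponents[1] > exponents[0]
assert abs(geometric_tail - d / p) < 1e-9
```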
\noindent{\bf Step 4 - }Now, we can choose some parameters in the previous inequalities to complete the proof. By choosing $q=1$, setting $r:=R/2$, and recalling that $\overline{u}=|u|+k$, we get
\begin{align}
\|u\|_{L^{\infty}(B_{R/2})}\le&\|\overline{u}\|_{L^{\infty}(B_{R/2})}\le CR^{-\frac{d}{p}}\big(\|u\|_{L^p(B_R)}+R^{\frac{d}{p}}k\big). \notag
\end{align}
The second inequality in the theorem follows by setting $q=1$ and $r:=R/2$ in \eqref{stima11}, obtaining
\begin{align}
\| D u\|_{L^p(B_{R/2})}=&\| D \overline{u}\|_{L^p(B_{R/2})} \notag\\
\le & CR^{-1}\|\overline{u}\|_{L^p(B_R)} \notag\\
\le&CR^{-1}\big(\|u\|_{L^p(B_R)}+\|k\|_{L^p(B_R)}\big) \notag\\
\le&CR^{-1}\big(\|u\|_{L^p(B_R)}+R^{\frac{d}{p}}k\big). \notag
\end{align}
\end{proof}
\section{Gradient regularity estimates in ${\rm BMO}$-spaces}\label{sec_beacon}
In this section, we prove regularity estimates for weak solutions. In case $f\in L^\infty(\Gamma)$, we prove that $Du\in {\rm BMO}_{\rm loc}(\Omega)$. In addition, we allow the interface data to be unbounded, provided it belongs to a Sobolev space $W^{1,p'+\varepsilon}(\Omega)$, where the parameter $\varepsilon>0$ is to be specified later. In this case, we verify that $u\in C^{0,\alpha}_{\rm loc}(\Omega)$, for some $\alpha\in(0,1)$ depending only on the dimension, $p$ and $\varepsilon$.
We proceed with an auxiliary lemma. For $w\in W^{1,G}(\Omega)$, let $W^{1,G}_w(\Omega)$ denote the Orlicz-Sobolev space comprising the functions $u\in W^{1,G}(\Omega)$ such that
\[
u-w\in W^{1,G}_0(\Omega).
\]
\begin{Lemma}\label{stima146}
Let $w\in W^{1,G}(B_R)$. Suppose \eqref{business class}--\eqref{monicavitti} are in force. Suppose further $h\in W_w^{1,G}(B_R)$ is a weak solution to
\begin{equation*}
\textnormal{div}\bigg(\frac{g(| D h|)}{| D h|} D h\bigg)=0\quad\textnormal{in }B_R.
\end{equation*}
Then there exists $C>0$ such that
\begin{equation}\label{antonioni}
\int_{B_{R}}G(| D w|)-G(| D h|)\,{\rm d}x\ge C\int_{B_{R}}| D (w-h)|^p\,{\rm d}x.
\end{equation}
\end{Lemma}
\begin{proof}
For $\tau\in[0,1]$, define $v_\tau=\tau w+(1-\tau)h$. The monotonicity condition in \eqref{monicavitti} implies
\begin{align*}
\int_{B_{R}}G(| D w|)-G(| D h|)\,{\rm d}x=&\int_0^1\frac{d}{d\tau}\bigg(\int_{B_{R}}G(| D v_{\tau}|)\,{\rm d}x\bigg)\,{\rm d}\tau\\
=&\int_0^1\int_{B_{R}}\frac{d}{d\tau}G(| D v_\tau|)\,{\rm d}x\,{\rm d}\tau\\
=&\int_0^1\int_{B_{R}}\frac{g(| D v_\tau|)}{| D v_\tau|} D v_\tau\cdot D (w-h)\,{\rm d}x\,{\rm d}\tau\\
=&\int_0^1\frac{1}{\tau}\int_{B_{R}}\bigg(\frac{g(| D v_\tau|)}{| D v_\tau|} D v_\tau-\frac{g(| D h|)}{| D h|} D h\bigg)\\
&\cdot D (v_\tau-h)\,{\rm d}x\,{\rm d}\tau\\
\ge&C\int_0^1\frac{1}{\tau}\int_{B_{R}}| D (v_\tau-h)|^p\,{\rm d}x\,{\rm d}\tau\\
=&C\int_{B_{R}}| D (w-h)|^p\,{\rm d}x,
\end{align*}
and the proof is complete.
\end{proof}
\subsection{Regularity estimates in ${\rm BMO}$-spaces}
In this section, we suppose $f\in L^\infty(\Gamma)$ and establish ${\rm BMO}$-regularity estimates for the gradient of solutions. We start by recalling a proposition from \cite{baroni2015}.
\begin{Proposition}\label{stima136}
Let $h\in W^{1,G}(B_R)$ be a weak solution of
\begin{equation*}
\textnormal{div}\bigg(\frac{g(| D h|)}{| D h|} D h\bigg)=0\quad\textnormal{in }B_R.
\end{equation*}
Suppose \eqref{business class}--\eqref{monicavitti} are in force. Then there exist $C>0$ and $\alpha\in(0,1)$ such that, for every $r\in(0,R]$, we have
\betagin{equation*}
\int_{B_r}| D h-( D h)_r|\,{\rm d}x\le C\Big(\frac{r}{R}\Big)^{d+\alpha}\int_{B_R}| D h-( D h)_R|\,{\rm d}x.
\end{equation*}
\end{Proposition}
For a proof of Proposition \ref{stima136}, we refer the reader to \cite{baroni2015}.
\begin{Proposition}\label{stima161}
Let $w\in W^{1,G}(B_R)$, and suppose $h\in W^{1,G}(B_R)$ is a weak solution of
\begin{equation*}
\textnormal{div}\bigg(\frac{g(| D h|)}{| D h|} D h\bigg)=0\quad\textnormal{in }B_R.
\end{equation*}
Suppose \eqref{business class}--\eqref{monicavitti} are in force. Then there exists $C>0$ such that, for every $0<r\le R$, we have
\begin{align}
\int_{B_r}| D w-( D w)_r|\,{\rm d}x\le& C\Big(\frac{r}{R}\Big)^{d+\alpha}\int_{B_R}| D w-( D w)_R|\,{\rm d}x \notag\\
&+C\int_{B_R}| D w- D h|\,{\rm d}x, \notag
\end{align}
where $\alpha$ is given by Proposition \ref{stima136}.
\end{Proposition}
\begin{proof}Let $r\in(0,R]$. We have
\begin{align}\label{stima137}
\int_{B_r}| D w-( D w)_r|\,{\rm d}x\le&\int_{B_r}| D w-( D h)_r|\,{\rm d}x \notag\\
&+\int_{B_r}|( D w)_r-( D h)_r|\,{\rm d}x.
\end{align}
Similarly, we have
\begin{align}\label{stima138}
\int_{B_r}| D w-( D h)_r|\,{\rm d}x\le&\int_{B_r}| D w- D h|\,{\rm d}x \notag\\
&+\int_{B_r}| D h-( D h)_r|\,{\rm d}x.
\end{align}
Moreover,
\begin{align}\label{stima139}
\int_{B_r}|( D w)_r-( D h)_r|\,{\rm d}x=&|( D w)_r-( D h)_r|\int_{B_r}\,{\rm d}x \notag\\
=&|B_r|\bigg|\frac{1}{|B_r|}\int_{B_r} D w- D h\,{\rm d}x\bigg| \notag\\
\le&\int_{B_r}| D w- D h|\,{\rm d}x.
\end{align}
Combining \eqref{stima137}, \eqref{stima138} with \eqref{stima139}, we get
\begin{align}\label{stima142}
\int_{B_r}| D w-( D w)_r|\,{\rm d}x\le&\int_{B_r}| D h-( D h)_r|\,{\rm d}x \notag\\
&+2\int_{B_r}| D w- D h|\,{\rm d}x.
\end{align}
Exchanging the roles of $w$ and $h$ and integrating over the ball $B_R$, we obtain
\begin{align}\label{stima140}
\int_{B_R}| D h-( D h)_R|\,{\rm d}x\le&\int_{B_R}| D w-( D w)_R|\,{\rm d}x \notag\\
&+2\int_{B_R}| D w- D h|\,{\rm d}x.
\end{align}
Thanks to Proposition \ref{stima136}, we conclude
\begin{align}\label{stima141}
\int_{B_r}| D w-( D w)_r|\,{\rm d}x\le&C\Big(\frac{r}{R}\Big)^{d+\alpha}\int_{B_R}| D h-( D h)_R|\,{\rm d}x \notag\\
&+C\int_{B_R}| D w- D h|\,{\rm d}x.
\end{align}
Combining \eqref{stima142}, \eqref{stima140} with \eqref{stima141}, we get
\begin{align}
\int_{B_r}| D w-( D w)_r|\,{\rm d}x\le&C\Big(\frac{r}{R}\Big)^{d+\alpha}\int_{B_R}| D w-( D w)_R|\,{\rm d}x \notag\\
&+C\Big(\frac{r}{R}\Big)^{d+\alpha}\int_{B_R}| D w- D h|\,{\rm d}x \notag\\
&+C\int_{B_R}| D w- D h|\,{\rm d}x \notag\\
\le&C\Big(\frac{r}{R}\Big)^{d+\alpha}\int_{B_R}| D w-( D w)_R|\,{\rm d}x \notag\\
&+C\int_{B_R}| D w- D h|\,{\rm d}x. \notag
\end{align}
The proof is complete.
\end{proof}
We now state and prove the main result in this section.
\begin{Theorem}[Gradient regularity in ${\rm BMO}$-spaces]\label{thm_ll}
Let $u\in W^{1,G}_0(\Omega)$ be a weak solution for the transmission problem \eqref{eq_stima118}--\eqref{eq_stima119}. Suppose \eqref{business class}--\eqref{monicavitti} are in force. Then $ D u\in {\rm BMO}_{\rm loc}(\Omega)$. Moreover, for every $\Omega'\Subset\Omega$,
\[
\left\|Du\right\|_{{\rm BMO}(\Omega')}\leq C,
\]
where $C=C(d,\|f\|_{L^\infty(\Gamma)},{\rm diam}(\Omega),{\rm dist}(\Omega',\partial\Omega))>0$.
\end{Theorem}
\begin{proof} Let $x^0\in\Gamma$, and let $R>0$ be such that $B_R:=B(x^0,R)\Subset\Omega$. Let $h\in W_u^{1,G}(B_R)$ be the weak solution of
\begin{equation*}
\textnormal{div}\bigg(\frac{g(| D h|)}{| D h|} D h\bigg)=0\quad\textnormal{in }B_R.
\end{equation*}
Since $h=u$ on $\partial B_R$ in the trace sense, we can extend $h$ to $\Omega\setminus B_R$ so that $h=u$ in $\Omega\setminus B_R$. This implies that $h\in W_0^{1,G}(\Omega)$ and hence, since $u$ is a global minimizer of \eqref{stima117}, we have
\begin{align}\label{stima144}
\int_{\Omega}G(| D u|)\,{\rm d}x-\int_{\Gamma}fu\,{\rm d}\mathcal{H}^{d-1}\le\int_{\Omega}G(| D h|)\,{\rm d}x-\int_{\Gamma}fh\,{\rm d}\mathcal{H}^{d-1}.
\end{align}
Set $\Gamma_R=B_R\cap\Gamma$. Since $h=u$ in $\Omega\setminus B_R$, \eqref{stima144} becomes
\begin{align}
\int_{B_R}G(| D u|)\,{\rm d}x-\int_{\Gamma_R}fu\,{\rm d}\mathcal{H}^{d-1}\le\int_{B_R}G(| D h|)\,{\rm d}x-\int_{\Gamma_R}fh\,{\rm d}\mathcal{H}^{d-1} \notag
\end{align}
from which, applying the Trace Theorem and the Poincaré Inequality, it follows that
\begin{align}\label{stima145}
\int_{B_R}G(| D u|)\,{\rm d}x-\int_{B_R}G(| D h|)\,{\rm d}x\le&\int_{\Gamma_R}fu\,{\rm d}\mathcal{H}^{d-1}-\int_{\Gamma_R}fh\,{\rm d}\mathcal{H}^{d-1} \notag\\
\le&\|f\|_{L^{\infty}(\Gamma)}\int_{\Gamma_R}|u-h|\,{\rm d}\mathcal{H}^{d-1} \notag\\
\le&C\int_{B_R}|u-h|\,{\rm d}x+C\int_{B_R}| D (u-h)|\,{\rm d}x \notag\\
\le&C\int_{B_{R}}| D (u-h)|\,{\rm d}x.
\end{align}
From Lemma \ref{stima146}, we bound the left-hand side of \eqref{stima145} from below,
\begin{align}\label{stima147}
\int_{B_R}G(| D u|)\,{\rm d}x-\int_{B_R}G(| D h|)\,{\rm d}x\ge&C\int_{B_R}| D (u-h)|^p\,{\rm d}x,
\end{align}
and, combining \eqref{stima145} with \eqref{stima147}, we get
\begin{equation*}
\int_{B_R}| D (u-h)|^p\,{\rm d}x \leq C\int_{B_{R}}| D (u-h)|\,{\rm d}x.
\end{equation*}
Using this and H\"older's inequality, we obtain
\begin{eqnarray*}
\left( \int_{B_R}| D (u-h)|\,{\rm d}x \right)^p & \leq & C' R^{d(p-1)}\int_{B_{R}}| D (u-h)|^p\,{\rm d}x\\
& \leq & C R^{d(p-1)}\int_{B_{R}}| D (u-h)|\,{\rm d}x
\end{eqnarray*}
and thus
\begin{equation*}
\int_{B_R}| D (u-h)|\,{\rm d}x \leq C R^d.
\end{equation*}
From Proposition \ref{stima161}, we get
\begin{equation*}
\int_{B_r}| D u-( D u)_r|\,{\rm d}x\le C\Big(\frac{r}{R}\Big)^{d+\alpha}\int_{B_R}| D u-( D u)_R|\,{\rm d}x+CR^d
\end{equation*}
for every $0<r\le R$, and, applying Lemma \ref{lem_stima119}, we conclude
\begin{equation*}
\int_{B_r}| D u-( D u)_r|\,{\rm d}x\le Cr^d, \quad\forall r\in(0,R].
\end{equation*}
The proof is complete.
\end{proof}
\begin{Remark}[Potential estimates and the $p$-Laplace operator]
If $g(t):=t^{p-1}$, the conclusion of Theorem \ref{thm_ll} has been obtained through the use of potential estimates; see \cite[Corollary 1, item (C9)]{KM2014c}. Indeed, notice that for $B_r\subset\Omega$, we have
\[
\int_{B_r}f\,{\rm d}\mathcal{H}^{d-1}\leq Cr^{d-1},
\]
which is precisely the condition in \cite[Corollary 1, item (C9)]{KM2014c}. See also \cite{M2011,M2011a}.
\end{Remark}
As a corollary to Theorem \ref{thm_ll}, we obtain a modulus of continuity for the solution $u$ in $C^{0,{\rm Log-Lip}}$-spaces.
\begin{Corollary}[Log-Lipschitz continuity estimates]\label{cor_ll}
Let $u\in W_0^{1,G}(\Omega)$ be a weak solution for \eqref{eq_stima118}-\eqref{eq_stima119}. Suppose \eqref{business class}--\eqref{monicavitti} are in force. Then $u\in C^{0,{\rm Log-Lip}}_{\rm loc}(\Omega)$. Moreover, for every $\Omega'\Subset\Omega$,
\[
\left\|u\right\|_{C^{0,{\rm Log-Lip}}(\Omega')}\leq C\left(\left\|u\right\|_{L^\infty(\Omega)}+\left\|f\right\|_{L^\infty(\Gamma)}\right),
\]
where $C=C(p,d,{\rm diam}(\Omega),{\rm dist}(\Omega',\partial\Omega))>0$.
\end{Corollary}
Indeed, a function whose partial derivatives are in ${\rm BMO}$ belongs to the Zygmund class (cf. \cite{Zygmund2002}). Because functions in the latter have a $C^{0,{\rm Log-Lip}}$ modulus of continuity, the corollary follows. An alternative argument follows from embedding results for borderline spaces; see \cite[Theorem 3]{Cianchi1996}.
\subsection{H\"older continuity of weak solutions}
Here, we consider unbounded interface data. We work under the condition $f\in W^{1,p'+\varepsilon}(\Omega)$, where $\varepsilon>0$ depends on $p$ and the dimension, and prove a regularity result in H\"older spaces for the weak solutions of \eqref{eq_stima118}-\eqref{eq_stima119}.
\begin{Theorem}\label{thm_c0alpha}
Let $u$ be a weak solution to the interface problem \eqref{eq_stima118}--\eqref{eq_stima119}, under assumptions \eqref{business class}--\eqref{monicavitti}. Let $2<p<d$ and $\varepsilon>0$ be such that
\[
\frac{d-p}{p-1}<\varepsilon<d-\frac{p}{p-1},
\]
and suppose $f\in W^{1,p'+\varepsilon}(\Omega)$. Then $u\in C^{0,\alpha}_{\textnormal{loc}}(\Omega)$, where
\[
\alpha=1-\frac{d}{p+\varepsilon(p-1)},
\]
with estimates.
\end{Theorem}
\begin{proof}We split the proof into three steps.
\noindent{\bf Step 1 - }Combining \eqref{stima145} and \eqref{stima147}, one obtains
\begin{equation}\label{stima155}
\| D (u-h)\|_{L^p(B_R)}^p\le C\int_{\Gamma_R}|f(u-h)|\,{\rm d}\mathcal{H}^{d-1}.
\end{equation}
We proceed by examining the right-hand side of \eqref{stima155}. Using the Trace Theorem, we get
\begin{align}\label{stima156}
\int_{\Gamma_R}|f(u-h)|\,{\rm d}\mathcal{H}^{d-1}\le&C\int_{B_{R}}|f(u-h)|\,{\rm d}x+C\int_{B_{R}}| D \big(f(u-h)\big)|\,{\rm d}x \notag\\
\le&C\int_{B_{R}}|f||u-h|\,{\rm d}x+C\int_{B_{R}}| D f||u-h|\,{\rm d}x \notag \\
&+C\int_{B_{R}}|f|| D (u-h)|\,{\rm d}x \notag\\
=:&I_1+I_2+I_3.
\end{align}
Now, we estimate each of the summands $I_1$, $I_2$ and $I_3$. Concerning $I_1$, we have
\begin{align}\label{stima157}
\int_{B_{R}}|f||u-h|\,{\rm d}x\le&\bigg(\int_{B_{R}}|f|^{p'+\varepsilon}\,{\rm d}x\bigg)^{\frac{1}{p'+\varepsilon}}\bigg(\int_{B_{R}}|u-h|^{\frac{p'+\varepsilon}{p'+\varepsilon-1}}\,{\rm d}x\bigg)^{\frac{p'+\varepsilon-1}{p'+\varepsilon}} \notag\\
\le&C\Bigg[\bigg(\int_{B_R}|u-h|^{\frac{p'+\varepsilon}{p'+\varepsilon-1}\frac{p'+\varepsilon-1}{p'+\varepsilon}p}\,{\rm d}x\bigg)^{\frac{p'+\varepsilon}{p'+\varepsilon-1}\frac{1}{p}} \notag\\
&\times\bigg(\int_{B_{R}}\,{\rm d}x\bigg)^{\frac{\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1}{\frac{p'+\varepsilon-1}{p'+\varepsilon}p}}\Bigg]^{\frac{p'+\varepsilon-1}{p'+\varepsilon}} \notag\\
\le&CR^{d\frac{\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1}{p}}\|u-h\|_{L^p(B_R)} \notag\\
\le&CR^{d\frac{\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1}{p}}\| D (u-h)\|_{L^p(B_R)}.
\end{align}
To estimate $I_2$, one notices that
\begin{align}\label{stima158}
\int_{B_{R}}| D f||u-h|\,{\rm d}x\le&\bigg(\int_{B_{R}}| D f|^{p'+\varepsilon}\,{\rm d}x\bigg)^{\frac{1}{p'+\varepsilon}}\bigg(\int_{B_{R}}|u-h|^{\frac{p'+\varepsilon}{p'+\varepsilon-1}}\,{\rm d}x\bigg)^{\frac{p'+\varepsilon-1}{p'+\varepsilon}} \notag\\
\le&C\Bigg[\bigg(\int_{B_R}|u-h|^{\frac{p'+\varepsilon}{p'+\varepsilon-1}\frac{p'+\varepsilon-1}{p'+\varepsilon}p}\,{\rm d}x\bigg)^{\frac{p'+\varepsilon}{p'+\varepsilon-1}\frac{1}{p}} \notag\\
&\times\bigg(\int_{B_{R}}\,{\rm d}x\bigg)^{\frac{\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1}{\frac{p'+\varepsilon-1}{p'+\varepsilon}p}}\Bigg]^{\frac{p'+\varepsilon-1}{p'+\varepsilon}} \notag\\
\le&CR^{d\frac{\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1}{p}}\|u-h\|_{L^p(B_R)} \notag\\
\le&CR^{d\frac{\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1}{p}}\| D (u-h)\|_{L^p(B_R)}.
\end{align}
Finally, we examine $I_3$. Indeed,
\begin{align}\label{stima159}
\int_{B_{R}}|f|| D (u-h)|\,{\rm d}x\le&\bigg(\int_{B_{R}}|f|^{p'+\varepsilon}\,{\rm d}x\bigg)^{\frac{1}{p'+\varepsilon}} \notag\\
& \times\bigg(\int_{B_{R}}| D (u-h)|^{\frac{p'+\varepsilon}{p'+\varepsilon-1}}\,{\rm d}x\bigg)^{\frac{p'+\varepsilon-1}{p'+\varepsilon}} \notag\\
\le&C\Bigg[\bigg(\int_{B_R}| D (u-h)|^{\frac{p'+\varepsilon}{p'+\varepsilon-1}\frac{p'+\varepsilon-1}{p'+\varepsilon}p}\,{\rm d}x\bigg)^{\frac{p'+\varepsilon}{p'+\varepsilon-1}\frac{1}{p}} \notag\\
&\times\bigg(\int_{B_{R}}\,{\rm d}x\bigg)^{\frac{\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1}{\frac{p'+\varepsilon-1}{p'+\varepsilon}p}}\Bigg]^{\frac{p'+\varepsilon-1}{p'+\varepsilon}} \notag\\
\le&CR^{d\frac{\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1}{p}}\| D (u-h)\|_{L^p(B_R)}.
\end{align}
Because of the role played by the exponents in the previous inequalities, we conclude this step by noticing that
\begin{align*}
\frac{\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1}{p}
=&\frac{\varepsilon(p-1)}{p(p'+\varepsilon)}.
\end{align*}
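The exponent identity closing Step 1 can also be checked in exact rational arithmetic. The snippet below is illustrative only (the sample values of $p$ and $\varepsilon$ are arbitrary): it verifies $\big(\frac{p'+\varepsilon-1}{p'+\varepsilon}p-1\big)/p=\frac{\varepsilon(p-1)}{p(p'+\varepsilon)}$ with $p'=p/(p-1)$.

```python
# Exact-arithmetic check (illustrative only) of the exponent identity
#   ((p'+e-1)/(p'+e)*p - 1)/p == e*(p-1)/(p*(p'+e)),  with p' = p/(p-1),
# for a few arbitrary rational sample values of p and epsilon.
from fractions import Fraction

for p in (Fraction(3), Fraction(5, 2), Fraction(7, 3)):
    pp = p / (p - 1)                     # dual exponent p'
    for e in (Fraction(1, 2), Fraction(7, 5), Fraction(2)):
        lhs = ((pp + e - 1) / (pp + e) * p - 1) / p
        rhs = e * (p - 1) / (p * (pp + e))
        assert lhs == rhs
```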
\noindent{\bf Step 2 - }Now we combine \eqref{stima155}, \eqref{stima156}, \eqref{stima157}, \eqref{stima158}, and \eqref{stima159} to produce
\begin{equation*}
\| D (u-h)\|_{L^p(B_R)}^{p-1}\le CR^{d\frac{\varepsilon(p-1)}{p(p'+\varepsilon)}}.
\end{equation*}
As a consequence, it follows that
\begin{equation}\label{stima160}
\int_{B_{R}}| D (u-h)|^p\,{\rm d}x\le CR^{d\frac{\varepsilon}{p'+\varepsilon}}.
\end{equation}
Hence, Proposition \ref{stima161} builds upon \eqref{stima160} to yield
\begin{equation*}
\int_{B_{r}}| D u-( D u)_r|\,{\rm d}x\le C\Big(\frac{r}{R}\Big)^{d+\alpha}\int_{B_{R}}| D u-( D u)_R|\,{\rm d}x+CR^{d\frac{\varepsilon}{p'+\varepsilon}}.
\end{equation*}
The former inequality, together with Lemma \ref{lem_stima119}, leads to
\begin{equation*}
\int_{B_{r}}| D u-( D u)_r|\,{\rm d}x\le Cr^{d\frac{\varepsilon}{p'+\varepsilon}}\quad\forall r\in(0,R],
\end{equation*}
and one easily concludes
\begin{equation}\label{doron}
r^{(d-d\frac{\varepsilon}{p'+\varepsilon})-d}\int_{B_{r}}| D u-( D u)_r|^p\,{\rm d}x\le C\quad\forall r\in(0,R].
\end{equation}
\noindent{\bf Step 3 - }The inequality in \eqref{doron} implies $Du\in L_C^{p,\lambda}(\Omega;\mathbb{R}^d)$, with
\begin{equation*}
\lambda:=d\bigg(1-\frac{\varepsilon}{p'+\varepsilon}\bigg)=\frac{dp}{p+p\varepsilon-\varepsilon}.
\end{equation*}
Since $\lambda<d$, we have $L_C^{p,\lambda}(\Omega;\mathbb{R}^d)=L_M^{p,\lambda}(\Omega;\mathbb{R}^d)$; as a consequence $u\in C_{\textnormal{loc}}^{0,\alpha}(\Omega)$ with
\begin{equation*}
\alpha=1-\frac{\lambda}{p}
\end{equation*}
if $p>\lambda$. That is, if
\begin{equation*}
\varepsilon>\frac{d-p}{p-1},
\end{equation*}
which holds by assumption.
Since $p'+\varepsilon<d$, we finally get
\begin{equation*}
\frac{d-p}{p-1}<\varepsilon<d-\frac{p}{p-1}.
\end{equation*}
\end{proof}
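The algebra in Step 3 relating $\lambda$ and $\alpha$ can be double-checked in exact arithmetic. The following sketch is not from the paper and uses arbitrary sample values; it confirms that $\lambda=d\big(1-\frac{\varepsilon}{p'+\varepsilon}\big)=\frac{dp}{p+\varepsilon(p-1)}$ and $\alpha=1-\lambda/p=1-\frac{d}{p+\varepsilon(p-1)}$, the exponent stated in Theorem \ref{thm_c0alpha}.

```python
# Illustrative check (not from the paper) of the algebra in Step 3:
# with p' = p/(p-1),
#   lambda = d*(1 - e/(p'+e)) = d*p/(p + e*(p-1)),
#   alpha  = 1 - lambda/p     = 1 - d/(p + e*(p-1)).
# Sample values only; epsilon is the midpoint of the admissible interval.
from fractions import Fraction

d = Fraction(5)
for p in (Fraction(3), Fraction(4)):     # requires 2 < p < d
    pp = p / (p - 1)
    e = ((d - p) / (p - 1) + (d - pp)) / 2
    lam = d * (1 - e / (pp + e))
    assert lam == d * p / (p + e * (p - 1))
    alpha = 1 - lam / p
    assert alpha == 1 - d / (p + e * (p - 1))
    assert 0 < alpha < 1
```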
\begin{Remark}[Endpoint-regularity]We conclude by examining the limit behaviour of the modulus of continuity -- encoded by the H\"older exponent $\alpha\in(0,1)$ in Theorem \ref{thm_c0alpha} -- as $\varepsilon$ approaches the endpoints of its interval of definition. Indeed, as
\[
\varepsilon\to\bigg(d-\frac{p}{p-1}\bigg)^-
\]
one gets
\[
\alpha\to1-\frac{1}{p-1}.
\]
On the other hand, as
\[
\varepsilon\to\bigg(\frac{d-p}{p-1}\bigg)^+
\]
one has
\[
\alpha\to0.
\]
\end{Remark}
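The endpoint limits computed in the remark follow directly from $\alpha(\varepsilon)=1-\frac{d}{p+\varepsilon(p-1)}$; a quick numerical illustration (not from the paper, with arbitrary sample values $d=5$, $p=3$) is:

```python
# Numerical illustration (not from the paper) of the endpoint behaviour
# of alpha(e) = 1 - d/(p + e*(p-1)) discussed in the remark:
# alpha -> 1 - 1/(p-1) as e -> (d - p/(p-1))^-, and alpha -> 0 as
# e -> ((d-p)/(p-1))^+. Sample values: d = 5, p = 3.

d, p = 5.0, 3.0

def alpha(e):
    return 1 - d / (p + e * (p - 1))

upper = d - p / (p - 1)    # right endpoint of the epsilon-interval
lower = (d - p) / (p - 1)  # left endpoint

assert abs(alpha(upper) - (1 - 1 / (p - 1))) < 1e-12
assert abs(alpha(lower)) < 1e-12
```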
{\small \noindent{\bf Acknowledgments.} The authors thank Paolo Baroni and Giuseppe Mingione for insightful comments on the material in the paper. VB is supported by the Centre for Mathematics of the University of Coimbra (UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES). EP is partially supported by the Centre for Mathematics of the University of Coimbra (UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES) and by FAPERJ (grants E26/200.002/2018 and E26/201.390/2021). JMU is partially supported by the King Abdullah University of Science and Technology (KAUST) and by the Centre for Mathematics of the University of Coimbra (UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES).}
\begin{thebibliography}{99}
\bibitem{adamsmorrey}
David R. Adams.
\newblock {\em Morrey Spaces}.
\newblock Lecture Notes in Applied and Numerical Harmonic Analysis. Birkh\"auser/Springer, Cham, 2015.
\bibitem{Bao-Li-Yin1}
Ellen~Shiting Bao, YanYan Li, and Biao Yin.
\newblock Gradient estimates for the perfect conductivity problem.
\newblock {\em Arch. Ration. Mech. Anal.}, 193(1):195--226, 2009.
\bibitem{Bao-Li-Yin2}
Ellen~Shiting Bao, YanYan Li, and Biao Yin.
\newblock Gradient estimates for the perfect and insulated conductivity
problems with multiple inclusions.
\newblock {\em Comm. Partial Differential Equations}, 35(11):1982--2006, 2010.
\bibitem{baroni2015}
Paolo Baroni.
\newblock Riesz potential estimates for a general class of quasilinear equations.
\newblock {\em Calc. Var. Partial Differential Equations}, 53(3-4):803--846, 2015.
\bibitem{Bonnetier2000}
Eric Bonnetier and Michael Vogelius.
\newblock An elliptic regularity result for a composite medium with
``touching'' fibres of circular cross-section.
\newblock {\em SIAM J. Math. Anal.}, 31(3):651--677, 2000.
\bibitem{Borsuk1968}
Mikhail~V. Borsuk.
\newblock A priori estimates and solvability of second order quasilinear
elliptic equations in a composite domain with nonlinear boundary condition
and conjugacy condition.
\newblock {\em Trudy Mat. Inst. Steklov.}, 103:15--50. (loose errata), 1968.
\bibitem{Borsuk2019}
Mikhail~V. Borsuk.
\newblock Transmission Robin problem for singular $p(x)-$Laplacian equation in a cone.
\newblock {\em Electron. J. Qual. Theory Differ. Equ.}, Paper No. 93, 17 pp., 2019.
\bibitem{Borsuk2010}
Mikhail~V. Borsuk.
\newblock {\em Transmission problems for elliptic second-order equations in
non-smooth domains}.
\newblock Frontiers in Mathematics. Birkh\"{a}user/Springer Basel AG, Basel,
2010.
\bibitem{Briane}
Marc Briane, Yves Capdeboscq, and Luc Nguyen.
\newblock Interior regularity estimates in high conductivity homogenisation and
application.
\newblock {\em Arch. Ration. Mech. Anal.}, 207(1):75--137, 2013.
\bibitem{CSCS2021}
Luis Caffarelli, Mar\'{i}a Soria-Carro, and Pablo R. Stinga.
\newblock Regularity for {$C^{1,\alpha}$} interface transmission problems.
\newblock {\em Arch. Ration. Mech. Anal.}, 240(1):265--294, 2021.
\bibitem{Campanato1957}
Sergio Campanato.
\newblock Sul problema di {M}. {P}icone relativo all'equilibrio di un corpo
elastico incastrato.
\newblock {\em Ricerche Mat.}, 6:125--149, 1957.
\bibitem{Campanato1959}
Sergio Campanato.
\newblock Sui problemi al contorno per sistemi di equazioni differenziali
lineari del tipo dell'elasticit\`a. {I}.
\newblock {\em Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3)}, 13:223--258, 1959.
\bibitem{Campanato1959a}
Sergio Campanato.
\newblock Sui problemi al contorno per sistemi di equazioni differenziali
lineari del tipo dell'elasticit\`a. {II}.
\newblock {\em Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3)}, 13:275--302, 1959.
\bibitem{Cianchi1996}
Andrea Cianchi.
\newblock Continuity properties of functions from Orlicz-Sobolev spaces and embedding theorems.
\newblock {\em Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4)}, 23:575--608, 1996.
\bibitem{CDF2022}
Cristiana De~Filippis.
\newblock Quasiconvexity and partial regularity via nonlinear potentials.
\newblock {\em J. Math. Pures Appl. (9)}, 163:11--82, 2022.
\bibitem{FM2021}
Cristiana De~Filippis and Giuseppe Mingione.
\newblock Lipschitz bounds and nonautonomous integrals.
\newblock {\em Arch. Ration. Mech. Anal.}, 242(2):973--1057, 2021.
\bibitem{FM2022}
Cristiana De~Filippis and Giuseppe Mingione.
\newblock Nonuniformly elliptic Schauder theory.
\newblock {\em {\rm arXiv:2201.07369 [math.AP]}}, 2022.
\bibitem{dong20}
Hongjie Dong.
\newblock A simple proof of regularity for $C^{1,\alpha}$ interface transmission problems.
\newblock {\em arXiv Preprint}, arXiv:2004.09365, 2020.
\bibitem{DM2010a}
Frank Duzaar and Giuseppe Mingione.
\newblock Gradient estimates via linear and nonlinear potentials.
\newblock {\em J. Funct. Anal.}, 259(11):2961--2998, 2010.
\bibitem{DM2011}
Frank Duzaar and Giuseppe Mingione.
\newblock Gradient estimates via non-linear potentials.
\newblock {\em Amer. J. Math.}, 133(4):1093--1149, 2011.
\bibitem{giusti} Enrico Giusti,
\newblock {\em Direct methods in the calculus of variations}.
\newblock World Scientific Publishing, River Edge, NJ,
2003.
\bibitem{Iliin-Shismarev1961}
Vladimir~A. Il'in and Il'ya~A. \v{S}i\v{s}marev.
\newblock The method of potentials for the problems of {D}irichlet and
{N}eumann in the case of equations with discontinuous coefficients.
\newblock {\em Sibirsk. Mat. \v{Z}.}, pages 46--58, 1961.
\bibitem{at1} Alois~Kufner, Old\v{r}ich~John and Svatopluk~Fu\v{c}\'{\i}k,
\newblock {\em Function spaces}.
\newblock Monographs and Textbooks on Mechanics of Solids and Fluids,
Mechanics: Analysis.
\newblock Noordhoff International Publishing, Leyden; Academia, Prague.
2010.
\bibitem{KM2013a}
Tuomo Kuusi and Giuseppe Mingione.
\newblock Linear potentials in nonlinear potential theory.
\newblock {\em Arch. Ration. Mech. Anal.}, 207(1):215--246, 2013.
\bibitem{KM2014a}
Tuomo Kuusi and Giuseppe Mingione.
\newblock The {W}olff gradient bound for degenerate parabolic equations.
\newblock {\em J. Eur. Math. Soc. (JEMS)}, 16(4):835--892, 2014.
\bibitem{KM2014c}
Tuomo Kuusi and Giuseppe Mingione.
\newblock Guide to nonlinear potential estimates.
\newblock {\em Bull. Math. Sci.}, 4(1):1--82, 2014.
\bibitem{Li-Nirenberg2003}
YanYan~Li and Louis~Nirenberg.
\newblock Estimates for elliptic systems from composite material.
\newblock {\em Comm. Pure Appl. Math.},
\newblock 56(7):892--925, 2003.
\newblock Dedicated to the memory of J\"{u}rgen K. Moser.
\bibitem{Li-Vogelius2000}
YanYan~Li and Michael~Vogelius.
\newblock Gradient estimates for solutions to divergence form elliptic
equations with discontinuous coefficients.
\newblock {\em Arch. Ration. Mech. Anal.}, 153(2):91--151, 2000.
\bibitem{Lieberman_1991}
Gary~Lieberman.
\newblock The natural generalisation of the natural conditions of Ladyzhenskaya and Ural'tseva for elliptic equations.
\newblock {\em Comm. in Partial Differential Equations}, 16(2-3):311--361, 1991.
\bibitem{M2011a}
Giuseppe Mingione.
\newblock Gradient potential estimates.
\newblock {\em J. Eur. Math. Soc. (JEMS)}, 13(2):459--486, 2011.
\bibitem{M2011}
Giuseppe Mingione.
\newblock Nonlinear measure data problems.
\newblock {\em Milan J. Math.}, 79(2):429--496, 2011.
\bibitem{Picone1954}
Mauro~Picone.
\newblock Sur un probl{\`e}me nouveau pour l'{\'e}quation lin{\'e}aire aux
d{\'e}riv{\'e}es partielles de la th{\'e}orie math{\'e}matique classique de
l'{\'e}lasticit{\'e}.
\newblock In {\em Colloque sur les {\'e}quations aux d{\'e}riv{\'e}es
partielles, CBRM, Bruxelles}, pages 9--11, 1954.
\bibitem{Schechter1960}
Martin~Schechter.
\newblock A generalisation of the problem of transmission.
\newblock {\em Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3)}, 14:207--236, 1960.
\bibitem{Sheftel1963}
Zinovi~G. \v{S}eftel'.
\newblock Estimates in {$L\sb{p}$} of solutions of elliptic equations with
discontinuous coefficients and satisfying general boundary conditions and
conjugacy conditions.
\newblock {\em Soviet Math. Dokl.}, 4:321--324, 1963.
\bibitem{Serrin_1964}
James~Serrin.
\newblock Local behaviour of solutions of quasi-linear equations.
\newblock {\em Acta Math.}, 111:247--302, 1964.
\bibitem{SCS2022}
Mar\'{i}a~Soria-Carro and Pablo~R.~Stinga,
\newblock Regularity of viscosity solutions to fully nonlinear elliptic transmission problems.
\newblock {\em arXiv Preprint}, arXiv:2207.13772, 2022.
\bibitem{Stampacchia1956}
Guido~Stampacchia.
\newblock Su un problema relativo alle equazioni di tipo ellittico del secondo
ordine.
\newblock {\em Ricerche Mat.}, 5:3--24, 1956.
\bibitem{Zygmund2002}
Antoni Zygmund.
\newblock {\em Trigonometric series. Vol. I, II}.
\newblock Cambridge University Press, Cambridge, 2002.
\end{thebibliography}
\end{document} |
\begin{document}
\title[The $n$ linear embedding theorem]{The $n$ linear embedding theorem}
\author[H.~Tanaka]{Hitoshi Tanaka}
\address{Graduate School of Mathematical Sciences, The University of Tokyo, Tokyo, 153-8914, Japan}
\email{[email protected]}
\thanks{
The author is supported by
the FMSP program at Graduate School of Mathematical Sciences, the University of Tokyo,
and Grant-in-Aid for Scientific Research (C) (No.~23540187),
the Japan Society for the Promotion of Science.
}
\subjclass[2010]{42B20, 42B35 (primary), 31C45, 46E35 (secondary).}
\keywords{
multinonlinear discrete Wolff's potential;
multilinear positive dyadic operator;
multilinear Sawyer's checking condition;
$n$ linear embedding theorem.
}
\date{}
\begin{abstract}
Let $\sigma_i$, $i=1,\ldots,n$, denote
positive Borel measures on ${\mathbb R}^d$,
let ${\mathcal D}$ denote the usual collection of dyadic cubes in ${\mathbb R}^d$
and let $K:\,{\mathcal D}\to[0,\infty)$ be a~map.
In this paper we give a~characterization of the $n$ linear embedding theorem.
That is, we give a~characterization of the inequality
$$
\sum_{Q\in{\mathcal D}}
K(Q)\prod_{i=1}^n\left|\int_{Q}f_i\,d\sigma_i\right|
\le C
\prod_{i=1}^n
\|f_i\|_{L^{p_i}(d\sigma_i)}
$$
in terms of multilinear Sawyer's checking condition and
discrete multinonlinear Wolff's potential,
when $1<p_i<\infty$.
\end{abstract}
\maketitle
\section{Introduction}\label{sec1}
The purpose of this paper is to investigate the $n$ linear embedding theorem.
We first fix some notations.
We will denote by ${\mathcal D}$ the family of all dyadic cubes
$Q=2^{-k}(m+[0,1)^d)$,
$k\in{\mathbb Z},\,m\in{\mathbb Z}^d$.
Let $K:\,{\mathcal D}\to[0,\infty)$ be a~map and
let $\sigma_i$, $i=1,\ldots,n$, be positive Borel measures on ${\mathbb R}^d$.
In this paper we give a~necessary and sufficient condition
for the inequality
\begin{equation}\label{1.1}
\sum_{Q\in{\mathcal D}}
K(Q)\prod_{i=1}^n\left|\int_{Q}f_i\,d\sigma_i\right|
\le C
\prod_{i=1}^n
\|f_i\|_{L^{p_i}(d\sigma_i)},
\end{equation}
to hold when $1<p_i<\infty$.
For the bilinear embedding theorem,
in the case $\frac1{p_1}+\frac1{p_2}\ge 1$,
Sergei Treil gives a~simple proof of the following.
\begin{proposition}[{\rm\cite[Theorem 2.1]{Tr}}]\label{prp1.1}
Let $K:\,{\mathcal D}\to[0,\infty)$ be a~map and
let $\sigma_i$, $i=1,2$, be positive Borel measures on ${\mathbb R}^d$.
Let $1<p_i<\infty$ and
$\frac1{p_1}+\frac1{p_2}\ge 1$.
The following statements are equivalent:
\begin{itemize}
\item[{\rm(a)}]
The following bilinear embedding theorem holds:
$$
\sum_{Q\in{\mathcal D}}
K(Q)\prod_{i=1}^2\left|\int_{Q}f_i\,d\sigma_i\right|
\le c_1
\prod_{i=1}^2
\|f_i\|_{L^{p_i}(d\sigma_i)}
<\infty;
$$
\item[{\rm(b)}]
For all $Q\in{\mathcal D}$,
$$
\begin{cases}\displaystyle
\left(\int_{Q}\left(\sum_{Q'\subset Q}K(Q')\sigma_1(Q')1_{Q'}\right)^{p_2'}\,d\sigma_2\right)^{1/p_2'}
\le c_2
\sigma_1(Q)^{1/p_1}
<\infty,
\\ \displaystyle
\left(\int_{Q}\left(\sum_{Q'\subset Q}K(Q')\sigma_2(Q')1_{Q'}\right)^{p_1'}\,d\sigma_1\right)^{1/p_1'}
\le c_2
\sigma_2(Q)^{1/p_2}
<\infty.
\end{cases}
$$
\end{itemize}
\noindent
Moreover,
the least possible $c_1$ and $c_2$ are equivalent.
\end{proposition}
Here, for each $1<p<\infty$,
$p'$ denotes the dual exponent of $p$,
i.e., $p'=\frac{p}{p-1}$, and
$1_{E}$ stands for the characteristic function of the set $E$.
Proposition \ref{prp1.1} was first proved
for $p_1=p_2=2$ in \cite{NaTrVo}
by the Bellman function method.
Later in \cite{LaSaUr},
this was proved in full generality.
The checking condition in Proposition \ref{prp1.1} is called
\lq\lq the Sawyer type checking condition'',
since this type of condition was first introduced by Eric~T. Sawyer
in \cite{Sa1,Sa2}.
To describe the case
$\frac1{p_1}+\frac1{p_2}<1$,
we need the discrete Wolff's potential.
Let $\mu$ and $\nu$ be positive Borel measures on ${\mathbb R}^d$ and
let $K:\,{\mathcal D}\to[0,\infty)$ be a~map.
For $p>1$,
the discrete Wolff's potential
${\mathcal W}^p_{K,\mu}[\nu](x)$
of the measure $\nu$
is defined by
$$
{\mathcal W}^p_{K,\mu}[\nu](x)
:=
\sum_{Q\in{\mathcal D}}
K(Q)\mu(Q)
\left(
\frac1{\mu(Q)}\sum_{Q'\subset Q}
K(Q')\mu(Q')\nu(Q')
\right)^{p-1}
1_{Q}(x),
\quad x\in{\mathbb R}^d.
$$
The author proved the following.
\begin{proposition}[{\rm\cite[Theorem 1.3]{Ta1}}]\label{prp1.2}
Let $K:\,{\mathcal D}\to[0,\infty)$ be a~map and
let $\sigma_i$, $i=1,2$, be positive Borel measures on ${\mathbb R}^d$.
Let $1<p_i<\infty$ and
$\frac1{p_1}+\frac1{p_2}<1$.
The following statements are equivalent:
\begin{itemize}
\item[{\rm(a)}]
The following bilinear embedding theorem holds:
$$
\sum_{Q\in{\mathcal D}}
K(Q)\prod_{i=1}^2\left|\int_{Q}f_i\,d\sigma_i\right|
\le c_1
\prod_{i=1}^2
\|f_i\|_{L^{p_i}(d\sigma_i)}
<\infty;
$$
\item[{\rm(b)}]
For $\frac1r+\frac1{p_1}+\frac1{p_2}=1$,
$$
\begin{cases}\displaystyle
\|{\mathcal W}^{p_2'}_{K,\sigma_2}[\sigma_1]^{1/p_2'}\|_{L^r(d\sigma_1)}
\le c_2<\infty,
\\[2mm]\displaystyle
\|{\mathcal W}^{p_1'}_{K,\sigma_1}[\sigma_2]^{1/p_1'}\|_{L^r(d\sigma_2)}
\le c_2<\infty.
\end{cases}
$$
\end{itemize}
\noindent
Moreover,
the least possible $c_1$ and $c_2$ are equivalent.
\end{proposition}
In his excellent survey of the $A_2$ theorem \cite{Hy},
Tuomas~P. Hyt\"{o}nen introduces
another proof of Proposition \ref{prp1.1},
which uses the \lq\lq parallel corona'' decomposition.
In this paper,
following Hyt\"{o}nen's arguments in \cite{Hy},
we shall establish the following theorems
(Theorems \ref{thm1.3} and \ref{thm1.4}).
\begin{theorem}\label{thm1.3}
Let $K:\,{\mathcal D}\to[0,\infty)$ be a~map and
let $\sigma_i$, $i=1,\ldots,n$, be positive Borel measures on ${\mathbb R}^d$.
Let $1<p_i<\infty$ and
$\sum_{i=1}^n\frac1{p_i}\ge 1$.
The following statements are equivalent:
\begin{itemize}
\item[{\rm(a)}]
The following $n$ linear embedding theorem holds:
$$
\sum_{Q\in{\mathcal D}}
K(Q)\prod_{i=1}^n\left|\int_{Q}f_i\,d\sigma_i\right|
\le c_1
\prod_{i=1}^n
\|f_i\|_{L^{p_i}(d\sigma_i)}
<\infty;
$$
\item[{\rm(b)}]
For all $j=1,\ldots,n$ and for all $Q\in{\mathcal D}$,
$$
\sum_{Q'\subset Q}
K(Q')\sigma_j(Q')
\prod_{\substack{i=1 \\ i\ne j}}^n
\left|\int_{Q'}f_i\,d\sigma_i\right|
\le c_2
\sigma_j(Q)^{1/p_j}
\prod_{\substack{i=1 \\ i\ne j}}^n
\|f_i\|_{L^{p_i}(d\sigma_i)}
<\infty.
$$
\end{itemize}
\noindent
Moreover,
the least possible $c_1$ and $c_2$ are equivalent.
\end{theorem}
Let the symmetric group $S_n$ be the set of all permutations of
the set $\{1,\ldots,n\}$,
that is,
the set of all bijections
from the set $\{1,\ldots,n\}$ to itself.
Let $K:\,{\mathcal D}\to[0,\infty)$ be a~map and
let $\sigma_i$, $i=1,\ldots,n$, be positive Borel measures on ${\mathbb R}^d$.
Let $1<p_i<\infty$ and
$\sum_{i=1}^n\frac1{p_i}<1$.
Let $\phi\in S_n$. Set
\begin{align*}
&\frac1{r^{\phi}_1}+\frac1{p_{\phi(1)}}=1,
\\[2mm]
&\frac1{r^{\phi}_2}+\frac1{p_{\phi(1)}}+\frac1{p_{\phi(2)}}=1,
\\ &\qquad\vdots \\[2mm]
&\frac1{r^{\phi}_{n-1}}+\sum_{i=1}^{n-1}\frac1{p_{\phi(i)}}=1,
\\[2mm]
&\frac1r+\sum_{i=1}^n\frac1{p_{\phi(i)}}=1.
\end{align*}
Let, for $Q\in{\mathcal D}$,
$$
K^{\phi}_1(Q)
:=
K(Q)\sigma_{\phi(1)}(Q)
\left(
\frac1{\sigma_{\phi(1)}(Q)}
\sum_{Q'\subset Q}
K(Q')
\prod_{i=1}^n\sigma_{\phi(i)}(Q')
\right)^{r^{\phi}_1-1},
$$
let
$$
K^{\phi}_2(Q)
:=
K^{\phi}_1(Q)\sigma_{\phi(2)}(Q)
\left(
\frac1{\sigma_{\phi(2)}(Q)}
\sum_{Q'\subset Q}
K^{\phi}_1(Q')
\prod_{i=2}^n\sigma_{\phi(i)}(Q')
\right)^{r^{\phi}_2/r^{\phi}_1-1}
$$
and, inductively,
for $j=3,\ldots,n-1$, let
$$
K^{\phi}_j(Q)
:=
K^{\phi}_{j-1}(Q)\sigma_{\phi(j)}(Q)
\left(
\frac1{\sigma_{\phi(j)}(Q)}
\sum_{Q'\subset Q}
K^{\phi}_{j-1}(Q')
\prod_{i=j}^n\sigma_{\phi(i)}(Q')
\right)^{r^{\phi}_j/r^{\phi}_{j-1}-1}.
$$
\begin{theorem}\label{thm1.4}
With the notation above,
the following statements are equivalent:
\begin{itemize}
\item[{\rightm(a)}]
The following $n$ linear embedding theorem holds:
$$
\sum_{Q\in{\mathcal D}}
K(Q)\prod_{i=1}^n\left|\int_{Q}f_i\,d\sigma_i\right|
\lefte c_1
\prod_{i=1}^n
\|f_i\|_{L^{p_i}(d\sigma_i)}
<\infty;
$$
\item[{\rightm(b)}]
For all $\phi\in S_n$,
$$
\left\|\left(
\sum_{Q\in{\mathcal D}}
K^{\phi}_{n-1}(Q)
1_{Q}
\right)^{1/r^{\phi}_{n-1}}
\right\|_{L^r(d\sigma_{\phi(n)})}
\le c_2<\infty.
$$
\end{itemize}
\noindent
Moreover,
the least possible $c_1$ and $c_2$ are equivalent.
\end{theorem}
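For the reader's orientation, we remark that Theorem \ref{thm1.4} recovers Proposition \ref{prp1.2} when $n=2$; the following verification is routine and is not used in the sequel. Let $n=2$ and $\phi=\mathrm{id}$. Since $\frac1{r^{\phi}_1}+\frac1{p_1}=1$, we have $r^{\phi}_1=p_1'$ and, hence,
$$
\sum_{Q\in{\mathcal D}}
K^{\phi}_1(Q)1_{Q}
=
\sum_{Q\in{\mathcal D}}
K(Q)\sigma_1(Q)
\left(
\frac1{\sigma_1(Q)}
\sum_{Q'\subset Q}
K(Q')\sigma_1(Q')\sigma_2(Q')
\right)^{p_1'-1}
1_{Q}
=
{\mathcal W}^{p_1'}_{K,\sigma_1}[\sigma_2].
$$
Thus, (b) of Theorem \ref{thm1.4} for $\phi=\mathrm{id}$ reads
$\|{\mathcal W}^{p_1'}_{K,\sigma_1}[\sigma_2]^{1/p_1'}\|_{L^r(d\sigma_2)}\le c_2<\infty$,
and the transposition $\phi=(1\,2)$ gives the condition with the roles of $\sigma_1$ and $\sigma_2$ interchanged.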
Even though Theorems \ref{thm1.3} and \ref{thm1.4}
both characterize the same $n$ linear embedding theorem,
it seems that the characterizations are very different.
In the very recent paper \cite{HaHyLi},
Timo~S. H\"{a}nninen,
Tuomas~P. Hyt\"{o}nen and
Kangwei Li give a~unified approach, called the
\lq\lq sequential testing'' characterization,
when $n=2,3$.
In particular,
our Theorem \ref{thm1.4} with $n=3$
is obtained in \cite[Theorem 1.16]{HaHyLi}.
(An alternative form of another unified characterization has been simultaneously obtained by Vuorinen \cite{Vu}.)
In \cite{Ta2}, the author gives
a~characterization of the trilinear embedding theorem
in terms of Theorem \ref{thm1.3} and Propositions \ref{prp1.1} and \ref{prp1.2}.
The letter $C$ will be used for constants
that may change from one occurrence to another.
\section{Proof of the necessity}\label{sec2}
In what follows we shall prove the necessity parts of the theorems.
The necessity part of Theorem \ref{thm1.3},
that is,
the fact that (b) follows from (a),
is seen at once if we substitute the test function $f_j=1_{Q}$.
So, we shall verify the necessity part of Theorem \ref{thm1.4}.
We need a~lemma (cf. Lemma 2.1 in \cite{Ta1}).
\begin{lemma}\label{lem2.1}
Let $\sigma$ be a~positive Borel measure on ${\mathbb R}^d$.
Let $1<s<\infty$ and
$\{\alpha_{Q}\}_{Q\in{\mathcal D}}\subset[0,\infty)$.
Define, for $Q_0\in{\mathcal D}$,
\begin{align*}
A_1
&:=
\int_{Q_0}
\left(\sum_{Q\subset Q_0}\frac{\alpha_{Q}}{\sigma(Q)}1_{Q}\right)^s
\,d\sigma,
\\
A_2
&:=
\sum_{Q\subset Q_0}
\alpha_{Q}\left(\frac1{\sigma(Q)}\sum_{Q'\subset Q}\alpha_{Q'}\right)^{s-1},
\\
A_3
&:=
\int_{Q_0}
\sup_{Q\subset Q_0}
\left(\frac{1_{Q}(x)}{\sigma(Q)}\sum_{Q'\subset Q}\alpha_{Q'}\right)^s
\,d\sigma(x).
\end{align*}
Then
$$
A_1\le c(s)A_2,\quad
A_2\le c(s)^{\frac1{s-1}}A_3
\quad\text{and}\quad
A_3\le (s')^sA_1.
$$
Here,
$$
c(s)
:=
\begin{cases}\displaystyle
s,\quad 1<s\le 2,
\\ \displaystyle
\left(s(s-1)\cdots(s-k)\right)^{\frac{s-1}{s-k-1}},
\quad 2<s<\infty,
\end{cases}
$$
where $k=\lceil s-2 \rceil$ is
the smallest integer greater than $s-2$.
\end{lemma}
We will use
$\fint_{Q}f\,d\sigma$
to denote the integral average
$\sigma(Q)^{-1}\int_{Q}f\,d\sigma$.
The dyadic maximal operator
$M_{{\mathcal D}}^{\sigma}$
is defined by
$$
M_{{\mathcal D}}^{\sigma}f(x)
:=
\sup_{Q\in{\mathcal D}}
\frac{1_{Q}(x)}{\sigma(Q)}\int_{Q}|f(y)|\,d\sigma(y).
$$
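For later use we also record the following standard fact: for every $1<p<\infty$ and every positive Borel measure $\sigma$, the operator $M_{{\mathcal D}}^{\sigma}$ is bounded on $L^p(d\sigma)$ with
$$
\|M_{{\mathcal D}}^{\sigma}f\|_{L^p(d\sigma)}
\le p'
\|f\|_{L^p(d\sigma)},
$$
which follows from Doob's inequality for dyadic martingales with respect to an arbitrary Borel measure; this is the boundedness of dyadic maximal operators used repeatedly below.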
Suppose that (a) of Theorem \ref{thm1.4} holds.
Then, for $\phi\in S_n$,
\begin{equation}\label{2.1}
\sum_{Q\in{\mathcal D}}
K(Q)
\prod_{i=1}^n
\left|\int_{Q}f_{\phi(i)}\,d\sigma_{\phi(i)}\right|
\le c_1
\prod_{i=1}^n
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}.
\end{equation}
Recall that
$\frac1{r^{\phi}_1}+\frac1{p_{\phi(1)}}=1$.
By duality, we see that
$$
\int_{{\mathbb R}^d}\left(
\sum_{Q\in{\mathcal D}}
K(Q)
\prod_{i=2}^n
\left|\int_{Q}f_{\phi(i)}\,d\sigma_{\phi(i)}\right|
1_{Q}
\right)^{r^{\phi}_1}
\,d\sigma_{\phi(1)}
\le c_1^{r^{\phi}_1}
\prod_{i=2}^n
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}^{r^{\phi}_1},
$$
which, by Lemma \ref{lem2.1}, implies
\begin{align*}
\lefteqn{
\sum_{Q\in{\mathcal D}}
K(Q)\sigma_{\phi(1)}(Q)
\prod_{i=2}^n
\left|\int_{Q}f_{\phi(i)}\,d\sigma_{\phi(i)}\right|
}\\ &\quad\times\left[
\frac1{\sigma_{\phi(1)}(Q)}
\sum_{Q'\subset Q}
K(Q')\sigma_{\phi(1)}(Q')
\prod_{i=2}^n
\left|\int_{Q'}f_{\phi(i)}\,d\sigma_{\phi(i)}\right|
\right]^{r^{\phi}_1-1}
\\ &\le C c_1^{r^{\phi}_1}
\prod_{i=2}^n
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}^{r^{\phi}_1}.
\end{align*}
It follows from this inequality that
\begin{align*}
\lefteqn{
\sum_{Q\in{\mathcal D}}
K^{\phi}_1(Q)
\prod_{i=2}^n
\left|\int_{Q}g_{\phi(i)}\,d\sigma_{\phi(i)}\right|
}\\ &=
\sum_{Q\in{\mathcal D}}
K(Q)\sigma_{\phi(1)}(Q)
\prod_{i=2}^n
\left|\int_{Q}g_{\phi(i)}\,d\sigma_{\phi(i)}\right|
\left[
\frac1{\sigma_{\phi(1)}(Q)}
\sum_{Q'\subset Q}
K(Q')
\prod_{i=1}^n
\sigma_{\phi(i)}(Q')
\right]^{r^{\phi}_1-1}
\\ &=
\sum_{Q\in{\mathcal D}}
K(Q)\sigma_{\phi(1)}(Q)
\prod_{i=2}^n
\sigma_{\phi(i)}(Q)
\left|\fint_{Q}g_{\phi(i)}\,d\sigma_{\phi(i)}\right|^{1/r^{\phi}_1}
\\ &\quad\times
\left[
\frac1{\sigma_{\phi(1)}(Q)}
\sum_{Q'\subset Q}
K(Q')\sigma_{\phi(1)}(Q')
\prod_{i=2}^n
\sigma_{\phi(i)}(Q')
\left|\fint_{Q}g_{\phi(i)}\,d\sigma_{\phi(i)}\right|^{1/r^{\phi}_1}
\right]^{r^{\phi}_1-1}
\\ &\le
\sum_{Q\in{\mathcal D}}
K(Q)\sigma_{\phi(1)}(Q)
\prod_{i=2}^n
\int_{Q}
\left(M_{{\mathcal D}}^{\sigma_{\phi(i)}}g_{\phi(i)}\right)^{1/r^{\phi}_1}
\,d\sigma_{\phi(i)}
\\ &\quad\times
\left[
\frac1{\sigma_{\phi(1)}(Q)}
\sum_{Q'\subset Q}
K(Q')\sigma_{\phi(1)}(Q')
\prod_{i=2}^n
\int_{Q'}
\left(M_{{\mathcal D}}^{\sigma_{\phi(i)}}g_{\phi(i)}\right)^{1/r^{\phi}_1}
\,d\sigma_{\phi(i)}
\right]^{r^{\phi}_1-1}
\\ &\le C c_1^{r^{\phi}_1}
\prod_{i=2}^n
\|M_{{\mathcal D}}^{\sigma_{\phi(i)}}g_{\phi(i)}\|_{L^{p_{\phi(i)}/r^{\phi}_1}(d\sigma_{\phi(i)})}
\\ &\le C c_1^{r^{\phi}_1}
\prod_{i=2}^n
\|g_{\phi(i)}\|_{L^{p_{\phi(i)}/r^{\phi}_1}(d\sigma_{\phi(i)})},
\end{align*}
where we have used the boundedness of dyadic maximal operators.
Thus, we obtain
\begin{equation}\label{2.2}
\sum_{Q\in{\mathcal D}}
K^{\phi}_1(Q)
\prod_{i=2}^n
\left|\int_{Q}f_{\phi(i)}\,d\sigma_{\phi(i)}\right|
\le C c_1^{r^{\phi}_1}
\prod_{i=2}^n
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}/r^{\phi}_1}(d\sigma_{\phi(i)})}.
\end{equation}
Notice that
\begin{equation}\label{2.3}
\begin{cases}\displaystyle
\frac{r^{\phi}_{i-1}}{r^{\phi}_i}
+
\frac{r^{\phi}_{i-1}}{p_{\phi(i)}}
=1,\quad
i=2,\ldots,n-1,
\\[4mm]\displaystyle
\frac{r^{\phi}_{n-1}}{r}
+
\frac{r^{\phi}_{n-1}}{p_{\phi(n)}}
=1.
\end{cases}
\end{equation}
In the same manner as above, but starting from \eqref{2.2} instead of \eqref{2.1},
and using \eqref{2.3} with $i=2$,
we obtain
$$
\sum_{Q\in{\mathcal D}}
K^{\phi}_2(Q)
\prod_{i=3}^n
\left|\int_{Q}f_{\phi(i)}\,d\sigma_{\phi(i)}\right|
\le C c_1^{r^{\phi}_2}
\prod_{i=3}^n
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}/r^{\phi}_2}(d\sigma_{\phi(i)})}.
$$
Continuing inductively up to the $(n-1)$-st step,
we obtain
$$
\sum_{Q\in{\mathcal D}}
K^{\phi}_{n-1}(Q)
\left|\int_{Q}f_{\phi(n)}\,d\sigma_{\phi(n)}\right|
\le C c_1^{r^{\phi}_{n-1}}
\|f_{\phi(n)}\|_{L^{p_{\phi(n)}/r^{\phi}_{n-1}}(d\sigma_{\phi(n)})}.
$$
By the last equation of \eqref{2.3}
and duality,
$$
\left\|
\sum_{Q\in{\mathcal D}}
K^{\phi}_{n-1}(Q)
1_{Q}
\right\|_{L^{r/r^{\phi}_{n-1}}(d\sigma_{\phi(n)})}
\le C c_1^{r^{\phi}_{n-1}}
$$
and, hence,
$$
\left\|\left(
\sum_{Q\in{\mathcal D}}
K^{\phi}_{n-1}(Q)
1_{Q}
\right)^{1/r^{\phi}_{n-1}}
\right\|_{L^r(d\sigma_{\phi(n)})}
\le C c_1,
$$
which completes the proof of the necessity part of Theorem \ref{thm1.4}.
\section{Proof of the sufficiency}\label{sec3}
In what follows we shall prove the sufficiency parts of the theorems.
Let $Q_0\in{\mathcal D}$ be taken large enough and be fixed.
We shall estimate the quantity
\begin{equation}\label{3.1}
\sum_{Q\subset Q_0}
K(Q)
\prod_{i=1}^n\left(\int_{Q}f_i\,d\sigma_i\right),
\end{equation}
where $f_i\in L^{p_i}(d\sigma_i)$
is nonnegative and is supported in $Q_0$.
We define the collection of principal cubes
${\mathcal F}_i$ for the pair $(f_i,\sigma_i)$,
$i=1,\ldots,n$. Namely,
$$
{\mathcal F}_i:=\bigcup_{k=0}^{\infty}{\mathcal F}_i^k,
$$
where
${\mathcal F}_i^0:=\{Q_0\}$,
$$
{\mathcal F}_i^{k+1}
:=
\bigcup_{F\in{\mathcal F}_i^k}ch_{{\mathcal F}_i}(F)
$$
and $ch_{{\mathcal F}_i}(F)$ is defined by
the set of all \lq\lq maximal'' dyadic cubes $Q\subset F$ such that
$$
\fint_{Q}f_i\,d\sigma_i
>
2\fint_{F}f_i\,d\sigma_i.
$$
Observe that
\begin{align*}
\lefteqn{
\sum_{F'\in ch_{{\mathcal F}_i}(F)}\sigma_i(F')
}\\ &\le
\left(2\fint_{F}f_i\,d\sigma_i\right)^{-1}
\sum_{F'\in ch_{{\mathcal F}_i}(F)}
\int_{F'}f_i\,d\sigma_i
\\ &\le
\left(2\fint_{F}f_i\,d\sigma_i\right)^{-1}
\int_{F}f_i\,d\sigma_i
=
\frac{\sigma_i(F)}{2},
\end{align*}
which implies
\begin{equation}\label{3.2}
\sigma_i(E_{{\mathcal F}_i}(F))
:=
\sigma_i\left(F\setminus\bigcup_{F'\in ch_{{\mathcal F}_i}(F)}F'\right)
\ge
\frac{\sigma_i(F)}{2},
\end{equation}
where the sets
$E_{{\mathcal F}_i}(F)$, $F\in{\mathcal F}_i$,
are pairwise disjoint.
We further define the stopping parents,
for $Q\in{\mathcal D}$,
$$
\begin{cases}\displaystyle
\pi_{{\mathcal F}_i}(Q)
:=
\min\{F\supset Q:\,F\in{\mathcal F}_i\},
\\\displaystyle
\pi(Q)
:=
\left(\pi_{{\mathcal F}_1}(Q),\ldots,\pi_{{\mathcal F}_n}(Q)\right).
\end{cases}
$$
Then we can rewrite the series in \eqref{3.1} as follows:
$$
\sum_{Q\subset Q_0}
=
\sum_{(F_1,\ldots,F_n)\in{\mathcal F}_1\times\cdots\times{\mathcal F}_n}
\sum_{\substack{
Q: \\ \pi(Q)=(F_1,\ldots,F_n)
}}.
$$
We notice the elementary fact that,
if $P,R\in{\mathcal D}$, then
$P\cap R\in\{P,R,\emptyset\}$.
This fact implies that,
if $\pi(Q)=(F_1,\ldots,F_n)$,
then
$$
Q\subset F_{\phi(1)}\subset\cdots\subset F_{\phi(n)}
\quad\text{for some}\quad
\phi\in S_n.
$$
Thus, for fixed $\phi\in S_n$,
we shall estimate
\begin{equation}\label{3.3}
\sum_{\substack{
(F_{\phi(i)})\in({\mathcal F}_{\phi(i)}):
\\
F_{\phi(1)}\subset\cdots\subset F_{\phi(n)}
}}
\sum_{\substack{
Q: \\ \pi(Q)=(F_{\phi(i)})
}}
K(Q)\prod_{i=1}^n\left(\int_{Q}f_{\phi(i)}\,d\sigma_{\phi(i)}\right).
\end{equation}
\paragraph{{\bf Proof of (a) of Theorem \ref{thm1.3}.}}
It follows that, for fixed
$F_{\phi(n)}\in{\mathcal F}_{\phi(n)}$,
\begin{align*}
\lefteqn{
\sum_{F_{\phi(1)}\subset\cdots\subset F_{\phi(n)}}
\sum_{\substack{
Q: \\ \pi(Q)=(F_{\phi(i)})
}}
K(Q)\prod_{i=1}^n\left(\int_{Q}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)
}\\ &\le
\left(2\fint_{F_{\phi(n)}}f_{\phi(n)}\,d\sigma_{\phi(n)}\right)
\sum_{F_{\phi(1)}\subset\cdots\subset F_{\phi(n)}}
\sum_{\substack{
Q: \\ \pi(Q)=(F_{\phi(i)})
}}
K(Q)\sigma_{\phi(n)}(Q)
\prod_{i=1}^{n-1}\left(\int_{Q}f_{\phi(i)}\,d\sigma_{\phi(i)}\right).
\end{align*}
We need two observations.
Suppose that
$F_{\phi(1)}\subset\cdots\subset F_{\phi(n)}$
and $\pi(Q)=(F_{\phi(i)})$.
Let $i=1,\ldots,n-1$. Suppose that
$F'\in ch_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)})$
satisfies $F'\subset Q$. Then
\begin{equation}\label{3.4}
\pi_{{\mathcal F}_{\phi(n)}}\left(\pi_{{\mathcal F}_{\phi(i)}}(F')\right)
=
\begin{cases}\displaystyle
F_{\phi(n)},
\quad\text{when}\quad
F'\notin{\mathcal F}_{\phi(i)},
\\\displaystyle
F',
\quad\text{when}\quad
F'\in{\mathcal F}_{\phi(i)}.
\end{cases}
\end{equation}
By this observation, we define
$$
ch_{{\mathcal F}_{\phi(n)}}^{\phi(i)}(F_{\phi(n)})
:=
\{
F'\in ch_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)}):\,
\text{ $F'$ satisfies \eqref{3.4}}
\}.
$$
We further observe that,
when $F'\in ch_{{\mathcal F}_{\phi(n)}}^{\phi(i)}(F_{\phi(n)})$,
we can regard $f_{\phi(i)}$ as a~constant on $F'$ in the above integrals,
that is,
we can replace $f_{\phi(i)}$ by
$f_{\phi(i)}^{F_{\phi(n)}}$
in the above integrals, where
$$
f_{\phi(i)}^{F_{\phi(n)}}
:=
f_{\phi(i)}1_{E_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)})}
+
\sum_{F'\in ch_{{\mathcal F}_{\phi(n)}}^{\phi(i)}(F_{\phi(n)})}
\left(\fint_{F'}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)
1_{F'}.
$$
It follows from (b) of Theorem \ref{thm1.3} that
\begin{align*}
\lefteqn{
\sum_{F_{\phi(1)}\subset\cdots\subset F_{\phi(n)}}
\sum_{\substack{
Q: \\ \pi(Q)=(F_{\phi(i)})
}}
K(Q)\sigma_{\phi(n)}(Q)
\prod_{i=1}^{n-1}
\left(\int_{Q}f_{\phi(i)}^{F_{\phi(n)}}\,d\sigma_{\phi(i)}\right)
}\\ &\le c_2
\sigma_{\phi(n)}(F_{\phi(n)})^{1/p_{\phi(n)}}
\prod_{i=1}^{n-1}
\|f_{\phi(i)}^{F_{\phi(n)}}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}.
\end{align*}
Thus, we obtain
$$
\eqref{3.3}
\le C c_2
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\prod_{i=1}^{n-1}
\|f_{\phi(i)}^{F_{\phi(n)}}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}
\left(\fint_{F_{\phi(n)}}f_{\phi(n)}\,d\sigma_{\phi(n)}\right)
\sigma_{\phi(n)}(F_{\phi(n)})^{1/p_{\phi(n)}}.
$$
Since
$\sum_{i=1}^n\frac1{p_{\phi(i)}}\ge 1$,
we can select the auxiliary parameters
$s_{\phi(i)}$, $i=1,\ldots,n-1$,
that satisfy
$$
\sum_{i=1}^{n-1}\frac1{s_{\phi(i)}}
+
\frac1{p_{\phi(n)}}
=1
\quad\text{and}\quad
1<p_{\phi(i)}\le s_{\phi(i)}<\infty.
$$
It follows from H\"{o}lder's inequality with exponents
$s_{\phi(1)},\ldots,s_{\phi(n-1)},p_{\phi(n)}$
that
\begin{align*}
\eqref{3.3}
&\le C c_2
\prod_{i=1}^{n-1}
\left[
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\|f_{\phi(i)}^{F_{\phi(n)}}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}^{s_{\phi(i)}}
\right]^{1/s_{\phi(i)}}
\\ &\quad\times
\left[
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\left(\fint_{F_{\phi(n)}}f_{\phi(n)}\,d\sigma_{\phi(n)}\right)^{p_{\phi(n)}}
\sigma_{\phi(n)}(F_{\phi(n)})
\right]^{1/p_{\phi(n)}}
\\ &\le C c_2
\prod_{i=1}^{n-1}
\left[
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\|f_{\phi(i)}^{F_{\phi(n)}}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}^{p_{\phi(i)}}
\right]^{1/p_{\phi(i)}}
\\ &\quad\times
\left[
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\left(\fint_{F_{\phi(n)}}f_{\phi(n)}\,d\sigma_{\phi(n)}\right)^{p_{\phi(n)}}
\sigma_{\phi(n)}(F_{\phi(n)})
\right]^{1/p_{\phi(n)}}
\\[5mm] &=: C c_2
(I_1)\times\cdots\times(I_n),
\end{align*}
where we have used
$\|\cdot\|_{l^{p_{\phi(i)}}}\ge\|\cdot\|_{l^{s_{\phi(i)}}}$.
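(The norm comparison used here is elementary: if $1<p\le s<\infty$ and $\|a\|_{l^{p}}=1$, then $|a_k|\le 1$ for every $k$, whence $|a_k|^{s}\le|a_k|^{p}$ and $\|a\|_{l^{s}}\le 1$; the general case follows by homogeneity. It applies here because $p_{\phi(i)}\le s_{\phi(i)}$.)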
For $(I_n)$, using
$\sigma_{\phi(n)}(F_{\phi(n)})\le 2\sigma_{\phi(n)}(E_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)}))$
(see \eqref{3.2}),
the fact that
$$
\fint_{F_{\phi(n)}}f_{\phi(n)}\,d\sigma_{\phi(n)}
\le
\inf_{y\in F_{\phi(n)}}
M_{{\mathcal D}}^{\sigma_{\phi(n)}}f_{\phi(n)}(y)
$$
and the disjointness of the sets
$E_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)})$,
we have
\begin{align*}
(I_n)
&\le C
\left[
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\int_{E_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)})}
\left(M_{{\mathcal D}}^{\sigma_{\phi(n)}}f_{\phi(n)}\right)^{p_{\phi(n)}}\,d\sigma_{\phi(n)}
\right]^{1/p_{\phi(n)}}
\\ &\le C
\left[\int_{Q_0}\left(M_{{\mathcal D}}^{\sigma_{\phi(n)}}f_{\phi(n)}\right)^{p_{\phi(n)}}\,d\sigma_{\phi(n)}\right]^{1/p_{\phi(n)}}
\le C
\|f_{\phi(n)}\|_{L^{p_{\phi(n)}}(d\sigma_{\phi(n)})}.
\end{align*}
It remains to estimate
$(I_i)$,
$i=1,\ldots,n-1$.
It follows that
\begin{align*}
(I_i)^{p_{\phi(i)}}
&=
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\int_{E_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)})}f_{\phi(i)}^{p_{\phi(i)}}\,d\sigma_{\phi(i)}
\\ &\quad +
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\sum_{F'\in ch_{{\mathcal F}_{\phi(n)}}^{\phi(i)}(F_{\phi(n)})}
\left(\fint_{F'}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)^{p_{\phi(i)}}
\sigma_{\phi(i)}(F').
\end{align*}
By the pairwise disjointness of the sets
$E_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)})$,
it is immediate that
$$
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\int_{E_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)})}f_{\phi(i)}^{p_{\phi(i)}}\,d\sigma_{\phi(i)}
\le
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}^{p_{\phi(i)}}.
$$
For the remaining double sum,
there holds by the uniqueness of the parent
\begin{align*}
\lefteqn{
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\sum_{\substack{
F'\in ch_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)}):
\\
\text{$F'$ satisfies \eqref{3.4}}
}}
\left(\fint_{F'}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)^{p_{\phi(i)}}
\sigma_{\phi(i)}(F')
}\\ &\le 2
\sum_{F_{\phi(n)}\in{\mathcal F}_{\phi(n)}}
\sum_{\substack{
F\in{\mathcal F}_{\phi(i)}:
\\
\pi_{{\mathcal F}_{\phi(n)}}(F)=F_{\phi(n)}
}}
\sum_{\substack{
F'\in ch_{{\mathcal F}_{\phi(n)}}(F_{\phi(n)}):
\\
\pi_{{\mathcal F}_{\phi(i)}}(F')=F
}}
\left(\fint_{F'}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)^{p_{\phi(i)}}
\sigma_{\phi(i)}(F')
\\ &\le 2
\sum_{F\in{\mathcal F}_{\phi(i)}}
\left(2\fint_{F}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)^{p_{\phi(i)}}
\sigma_{\phi(i)}(F)
\\ &\le C
\|M_{{\mathcal D}}^{\sigma_{\phi(i)}}f_{\phi(i)}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}^{p_{\phi(i)}}
\le C
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}^{p_{\phi(i)}}.
\end{align*}
Altogether, we obtain
$$
\eqref{3.3}
\le C c_2
\prod_{i=1}^n
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}.
$$
This yields (a) of Theorem \ref{thm1.3}.
\paragraph{{\bf Proof of (a) of Theorem \ref{thm1.4}.}}
We shall estimate \eqref{3.3} by use of multinonlinear Wolff's potential.
We first observe that if
$F_{\phi(i)}\in{\mathcal F}_{\phi(i)}$,
$i=1,\ldots,n$,
satisfy
$F_{\phi(1)}\subset\cdots\subset F_{\phi(n)}$
and, for some $Q\in{\mathcal D}$,
$\pi(Q)=(F_{\phi(i)})$,
then
\begin{equation}\label{3.5}
\pi_{{\mathcal F}_{\phi(j)}}(F_{\phi(i)})
=
F_{\phi(j)}
\quad\text{for all}\quad
1\le i<j\le n.
\end{equation}
Fix
$F_{\phi(i)}\in{\mathcal F}_{\phi(i)}$,
$i=1,\ldots,n$,
that satisfy \eqref{3.5}. Then
\begin{align*}
\lefteqn{
\sum_{\substack{
Q: \\ \pi(Q)=(F_{\phi(i)})
}}
K(Q)\prod_{i=1}^n\left(\int_{Q}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)
}\\ &\le
\prod_{i=1}^n
\left(2\fint_{F_{\phi(i)}}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)
\sum_{\substack{
Q: \\ \pi(Q)=(F_{\phi(i)})
}}
K(Q)\prod_{i=1}^n\sigma_{\phi(i)}(Q).
\end{align*}
Recall that
\begin{equation}\label{3.6}
\begin{cases}\displaystyle
\frac1{r^{\phi}_j}+\sum_{i=1}^j\frac1{p_{\phi(i)}}=1,
\quad j=1,\ldots,n-1,
\\[4mm]\displaystyle
\frac1r+\sum_{i=1}^n\frac1{p_{\phi(i)}}=1.
\end{cases}
\end{equation}
In the following estimates,
$\sum_{F_{\phi(1)}}$ runs over all
$F_{\phi(1)}\in{\mathcal F}_{\phi(1)}$
that satisfy \eqref{3.5}
for fixed
$F_{\phi(i)}\in{\mathcal F}_{\phi(i)}$,
$i=2,\ldots,n$.
\begin{align*}
\lefteqn{
\sum_{F_{\phi(1)}}
\left(\fint_{F_{\phi(1)}}f_{\phi(1)}\,d\sigma_{\phi(1)}\right)
\sum_{\substack{
Q: \\ \pi(Q)=(F_{\phi(i)})
}}
K(Q)\prod_{i=1}^n\sigma_{\phi(i)}(Q)
}\\ &\le
\sum_{F_{\phi(1)}}
\left(\fint_{F_{\phi(1)}}f_{\phi(1)}\,d\sigma_{\phi(1)}\right)
\sum_{Q\subset F_{\phi(1)}}
K(Q)\prod_{i=1}^n\sigma_{\phi(i)}(Q)
\\ &=
\sum_{F_{\phi(1)}}
\left(\fint_{F_{\phi(1)}}f_{\phi(1)}\,d\sigma_{\phi(1)}\right)
\sigma_{\phi(1)}(F_{\phi(1)})^{1/p_{\phi(1)}}
\\ &\quad\times
\left(
\fint_{F_{\phi(1)}}
\left(\sum_{Q\subset F_{\phi(1)}}
K(Q)\prod_{i=2}^n\sigma_{\phi(i)}(Q)
1_{Q}
\right)\,d\sigma_{\phi(1)}\right)
\sigma_{\phi(1)}(F_{\phi(1)})^{1/r^{\phi}_1},
\end{align*}
where we have used \eqref{3.6} with $j=1$.
By H\"{o}lder's inequality, we have further that
\begin{align*}
\le&
\left[
\sum_{F_{\phi(1)}}
\left(\fint_{F_{\phi(1)}}f_{\phi(1)}\,d\sigma_{\phi(1)}\right)^{p_{\phi(1)}}
\sigma_{\phi(1)}(F_{\phi(1)})
\right]^{1/p_{\phi(1)}}
\\ &\quad\times
\left[
\sum_{F_{\phi(1)}}
\left(
\fint_{F_{\phi(1)}}
\left(
\sum_{Q\subset F_{\phi(1)}}
K(Q)\prod_{i=2}^n\sigma_{\phi(i)}(Q)
1_{Q}
\right)\,d\sigma_{\phi(1)}
\right)^{r^{\phi}_1}
\sigma_{\phi(1)}(F_{\phi(1)})
\right]^{1/r^{\phi}_1}.
\end{align*}
In the same way as in the estimate of $(I_n)$,
we see that the last term is majorized by
$$
C\left(\int_{F_{\phi(2)}}
\left(\sum_{Q\subset F_{\phi(2)}}
K(Q)\prod_{i=2}^n\sigma_{\phi(i)}(Q)
1_{Q}
\right)^{r^{\phi}_1}
\,d\sigma_{\phi(1)}
\right)^{1/r^{\phi}_1}.
$$
By Lemma \ref{lem2.1},
we have further that
$$
\le C
\left(\sum_{Q\subset F_{\phi(2)}}
K^{\phi}_1(Q)\prod_{i=2}^n\sigma_{\phi(i)}(Q)
\right)^{1/r^{\phi}_1}.
$$
By \eqref{2.3}, we notice that
\begin{equation}\label{3.7}
\frac1{r^{\phi}_i}
+
\frac1{p_{\phi(i)}}
=
\frac1{r^{\phi}_{i-1}},
\quad i=2,\ldots,n-1.
\end{equation}
In the following estimates,
$\sum_{F_{\phi(2)}}$ runs over all
$F_{\phi(2)}\in{\mathcal F}_{\phi(2)}$
that satisfy, for fixed
$F_{\phi(i)}\in{\mathcal F}_{\phi(i)}$,
$i=3,\ldots,n$,
\begin{equation}\label{3.8}
\pi_{{\mathcal F}_{\phi(j)}}(F_{\phi(i)})
=
F_{\phi(j)}
\quad\text{for all}\quad
2\le i<j\le n.
\end{equation}
There holds
\begin{align*}
\lefteqn{
\sum_{F_{\phi(2)}}
\left(\fint_{F_{\phi(2)}}f_{\phi(2)}\,d\sigma_{\phi(2)}\right)
\times
\left(\sum_{Q\subset F_{\phi(2)}}
K^{\phi}_1(Q)\prod_{i=2}^n\sigma_{\phi(i)}(Q)
\right)^{1/r^{\phi}_1}
}\\ &\quad\times
\left(\sum_{F_{\phi(1)}}
\left(\fint_{F_{\phi(1)}}f_{\phi(1)}\,d\sigma_{\phi(1)}\right)^{p_{\phi(1)}}
\sigma_{\phi(1)}(F_{\phi(1)})
\right)^{1/p_{\phi(1)}}
\\ &=
\sum_{F_{\phi(2)}}
\left(\fint_{F_{\phi(2)}}f_{\phi(2)}\,d\sigma_{\phi(2)}\right)
\sigma_{\phi(2)}(F_{\phi(2)})^{1/p_{\phi(2)}}
\\ &\quad\times
\left(\fint_{F_{\phi(2)}}
\left(\sum_{Q\subset F_{\phi(2)}}
K^{\phi}_1(Q)\prod_{i=3}^n\sigma_{\phi(i)}(Q)
1_{Q}
\right)\,d\sigma_{\phi(2)}
\right)^{1/r^{\phi}_1}
\sigma_{\phi(2)}(F_{\phi(2)})^{1/r^{\phi}_2}
\\ &\quad\times
\left(\sum_{F_{\phi(1)}}
\left(\fint_{F_{\phi(1)}}f_{\phi(1)}\,d\sigma_{\phi(1)}\right)^{p_{\phi(1)}}
\sigma_{\phi(1)}(F_{\phi(1)})
\right)^{1/p_{\phi(1)}},
\end{align*}
where we have used \eqref{3.7} with $i=2$.
Recall \eqref{3.6} with $j=2$.
Then H\"{o}lder's inequality gives
\begin{align*}
\le &
\left[\sum_{F_{\phi(2)}}
\left(\fint_{F_{\phi(2)}}f_{\phi(2)}\,d\sigma_{\phi(2)}\right)^{p_{\phi(2)}}
\sigma_{\phi(2)}(F_{\phi(2)})
\right]^{1/p_{\phi(2)}}
\\ &\times
\left[
\sum_{F_{\phi(2)}}
\sum_{F_{\phi(1)}}
\left(\fint_{F_{\phi(1)}}f_{\phi(1)}\,d\sigma_{\phi(1)}\right)^{p_{\phi(1)}}
\sigma_{\phi(1)}(F_{\phi(1)})
\right]^{1/p_{\phi(1)}}
\\ &\times
\left[
\sum_{F_{\phi(2)}}
\left(
\fint_{F_{\phi(2)}}
\left(\sum_{Q\subset F_{\phi(2)}}
K^{\phi}_1(Q)\prod_{i=3}^n\sigma_{\phi(i)}(Q)
1_{Q}
\right)\,d\sigma_{\phi(2)}
\right)^{r^{\phi}_2/r^{\phi}_1}
\sigma_{\phi(2)}(F_{\phi(2)})
\right]^{1/r^{\phi}_2}.
\end{align*}
The last term is majorized by
$$
C\left(\int_{F_{\phi(3)}}
\left(\sum_{Q\subset F_{\phi(3)}}
K^{\phi}_1(Q)\prod_{i=3}^n\sigma_{\phi(i)}(Q)
1_{Q}
\right)^{r^{\phi}_2/r^{\phi}_1}
\,d\sigma_{\phi(2)}
\right)^{1/r^{\phi}_2}.
$$
By Lemma \ref{lem2.1},
we have further that
$$
\le C
\left(\sum_{Q\subset F_{\phi(3)}}
K^{\phi}_2(Q)\prod_{i=3}^n\sigma_{\phi(i)}(Q)
\right)^{1/r^{\phi}_2}.
$$
Continuing inductively up to the $(n-1)$-st step,
we obtain
\begin{align*}
\eqref{3.3}
&\le C
\left[\sum_{F_{\phi(n)}}
\left(\fint_{F_{\phi(n)}}f_{\phi(n)}\,d\sigma_{\phi(n)}\right)^{p_{\phi(n)}}
\sigma_{\phi(n)}(F_{\phi(n)})
\right]^{1/p_{\phi(n)}}
\\ &\quad\times
\left[
\sum_{F_{\phi(n)}}
\sum_{F_{\phi(n-1)}}
\left(\fint_{F_{\phi(n-1)}}f_{\phi(n-1)}\,d\sigma_{\phi(n-1)}\right)^{p_{\phi(n-1)}}
\sigma_{\phi(n-1)}(F_{\phi(n-1)})
\right]^{1/p_{\phi(n-1)}}
\\ &\quad\times\vdots
\\ &\quad\times
\left[
\sum_{F_{\phi(n)}}
\sum_{F_{\phi(n-1)}}
\cdots
\sum_{F_{\phi(1)}}
\left(\fint_{F_{\phi(1)}}f_{\phi(1)}\,d\sigma_{\phi(1)}\right)^{p_{\phi(1)}}
\sigma_{\phi(1)}(F_{\phi(1)})
\right]^{1/p_{\phi(1)}}
\\ &\quad\times
\left[
\sum_{F_{\phi(n)}}
\left(\fint_{F_{\phi(n)}}
\left(\sum_{Q\subset F_{\phi(n)}}
K^{\phi}_{n-1}(Q)
1_{Q}
\right)
\,d\sigma_{\phi(n)}
\right)^{r/r^{\phi}_{n-1}}
\sigma_{\phi(n)}(F_{\phi(n)})
\right]^{1/r},
\end{align*}
where
$\sum_{F_{\phi(n)}}$
runs over all
$F_{\phi(n)}\in{\mathcal F}_{\phi(n)}$
and $\sum_{F_{\phi(k)}}$,
$k=3,\ldots,n-1$,
runs over all
$F_{\phi(k)}\in{\mathcal F}_{\phi(k)}$
that satisfy, for fixed
$F_{\phi(i)}$, $i=k+1,\ldots,n$,
\begin{equation}\label{3.9}
\pi_{{\mathcal F}_{\phi(j)}}(F_{\phi(i)})
=
F_{\phi(j)}
\quad\text{for all}\quad
k\le i<j\le n.
\end{equation}
The last term is majorized by
$$
C\left(\int_{Q_0}
\left(\sum_{Q\subset Q_0}
K^{\phi}_{n-1}(Q)
1_{Q}
\right)^{r/r^{\phi}_{n-1}}
\,d\sigma_{\phi(n)}
\right)^{1/r}
\le c_2.
$$
It follows from \eqref{3.5}, \eqref{3.8}, \eqref{3.9} and
the uniqueness of the parents that
\begin{align*}
\lefteqn{
\left[
\sum_{F_{\phi(n)}}
\sum_{F_{\phi(n-1)}}
\cdots
\sum_{F_{\phi(i)}}
\left(\fint_{F_{\phi(i)}}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)^{p_{\phi(i)}}
\sigma_{\phi(i)}(F_{\phi(i)})
\right]^{1/p_{\phi(i)}}
}\\ &\le
\left[
\sum_{F_{\phi(n)}}
\sum_{\substack{
F_{\phi(i)}\in{\mathcal F}_{\phi(i)}:
\\
\pi_{{\mathcal F}_{\phi(n)}}(F_{\phi(i)})=F_{\phi(n)}
}}
\left(\fint_{F_{\phi(i)}}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)^{p_{\phi(i)}}
\sigma_{\phi(i)}(F_{\phi(i)})
\right]^{1/p_{\phi(i)}}
\\ &=
\left[
\sum_{F_{\phi(i)}\in{\mathcal F}_{\phi(i)}}
\left(\fint_{F_{\phi(i)}}f_{\phi(i)}\,d\sigma_{\phi(i)}\right)^{p_{\phi(i)}}
\sigma_{\phi(i)}(F_{\phi(i)})
\right]^{1/p_{\phi(i)}}
\\[5mm] &\le C
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}.
\end{align*}
Altogether, we obtain
$$
\eqref{3.3}
\le C c_2
\prod_{i=1}^n
\|f_{\phi(i)}\|_{L^{p_{\phi(i)}}(d\sigma_{\phi(i)})}.
$$
This yields (a) of Theorem \ref{thm1.4}.
\end{document} |
\begin{document}
\title{Liouville theorem for Pseudoharmonic maps from Sasakian manifolds\footnote{Supported by NSFC grant No. 11271071} \footnote{Key words: Liouville theorem, pseudoharmonic map, sub-gradient estimate, Sasakian manifold, Heisenberg group} \footnote{2010 Mathematics Subject Classification. Primary: 32V05, 32V20. Secondary: 58E20}}
\author{Yibin Ren\footnote{School of Mathematical Science, Fudan University, Shanghai 200433, P.R.China. E-mail: [email protected]} \and Guilin Yang \and Tian Chong}
\date{}
\maketitle
\begin{abstract}
In this paper, we derive a sub-gradient estimate for pseudoharmonic maps from noncompact complete Sasakian manifolds which satisfy CR sub-Laplace comparison property, to simply-connected Riemannian manifolds with nonpositive sectional curvature. As its application, we obtain some Liouville theorems for pseudoharmonic maps. In the Appendix, we modify the method and apply it to harmonic maps from noncompact complete Sasakian manifolds.
\end{abstract}
\section{Introduction}
In \cite{y}, S. T. Yau derived a well-known gradient estimate for harmonic functions on complete noncompact Riemannian manifolds. By this estimate, he obtained a Liouville theorem for positive harmonic functions on Riemannian manifolds with nonnegative Ricci curvature. In \cite{c}, S. Y. Cheng generalized the method in \cite{y} to harmonic maps. In \cite{ckt}, S. C. Chang, T. J. Kuo and J. Tie modified the method in \cite{y} and applied it to positive pseudoharmonic functions on noncompact Sasakian $(2n+1)$-manifolds. They introduced a new auxiliary function and successfully dealt with the awkward term in the Bochner-type formula. As a result, they obtained a sub-gradient estimate and a Liouville theorem for positive pseudoharmonic functions.
In this paper, inspired by \cite{c, ckt}, we derive a sub-gradient estimate for pseudoharmonic maps from noncompact complete Sasakian manifolds which satisfy the CR sub-Laplace comparison property (Theorem \ref{nsg}). We then deduce a Liouville theorem for pseudoharmonic maps (Theorem \ref{cse}). In the Appendix, we apply the method to harmonic maps from noncompact complete Sasakian manifolds and derive a Reeb energy density estimate (Theorem \ref{csh}). From this estimate, we prove a Liouville theorem for harmonic maps on Sasakian manifolds.
\section{Basic Notions} \label{crb}
A smooth manifold $M$ of real dimension $(2n+1)$ is said to be a CR manifold, if there exists a smooth rank $n$ complex subbundle $T_{1,0} M \subset TM \otimes \mathbb{C}$ such that
$$
T_{1,0} M \cap T_{0,1} M =0
$$
and
$$
[\Gamma (T_{1,0} M), \Gamma (T_{1,0} M)] \subset \Gamma (T_{1,0} M)
$$
where $T_{0,1} M = \overline{T_{1,0} M}$ is the complex conjugate of $T_{1,0} M$. If $M$ is a CR manifold, then its Levi distribution is the real subbundle $HM = Re \: \{ T_{1,0}M \oplus T_{0,1}M \}$.
It carries a complex structure $J_b : HM \rightarrow HM$, which is given by $J_b (X+\overline{X})= \sqrt{-1} (X-\overline{X})$ for any $X \in T_{1,0} M$. Since $HM$ is naturally oriented by the complex structure, $M$ is orientable if and only if
there exists a global nonvanishing 1-form $\theta$ such that $\theta (HM) =0 $.
Any such section $\theta$ is referred to as a pseudo-Hermitian structure on $M$. The Levi form $L_\theta $ is given by
$$L_\theta (Z,\overline{W} ) = - \sqrt{-1} d \theta (Z, \overline{W}) $$
for any $Z , W \in T_{1, 0} M$.
\begin{dfn}
An orientable CR manifold $M$ with a pseudo-Hermitian structure $\theta$, denoted by $(M, HM, J_b, \theta)$, is called a pseudo-Hermitian manifold. A pseudo-Hermitian manifold $(M, HM, J_b, \theta)$ is said to be a strictly pseudoconvex CR manifold if its Levi form $L_\theta$ is positive definite.
\end{dfn}
If $(M, HM, J_b, \theta)$ is strictly pseudoconvex, there exists a unique nonvanishing vector field $T$, transverse to $HM$, satisfying
$T \lrcorner \: \theta =1, \ T \lrcorner \: d \theta =0$. This vector field is called the characteristic direction of $(M, HM, J_b, \theta)$.
Define the bilinear form $G_\theta$ by
$$
G_\theta (X, Y)= d \theta (X, J_b Y)
$$
for $X, Y \in HM$. Since $L_\theta$ and $G_\theta$ coincide on $T_{1,0} M \otimes T_{0,1} M$, $G_\theta$ is also positive definite on $HM \otimes HM$. This allows us to define a Riemannian metric $g_\theta$ on $M$ by
$$g_\theta (X, Y) = G_\theta (\pi_H X, \pi_H Y)+ \theta(X) \theta (Y), \quad X, Y \in TM$$
where $\pi_H : TM \rightarrow HM$ is the projection associated to the direct sum decomposition $TM = HM \oplus \mathbb{R} T$. This metric is usually called the Webster metric.
On a strictly pseudoconvex CR manifold, there exists a canonical connection preserving the complex structure and the Webster metric. Actually
\begin{prp} [\cite{dt}]
Let $(M, HM, J_b, \theta)$ be a strictly pseudoconvex CR manifold. Let $T$ be the characteristic direction and $J_b$ the complex structure in $HM$ (extending to an endomorphism of $TM$ by requiring that $J_b T=0$). Let $g_\theta$ be the Webster metric. Then there is a unique linear connection $\nabla$ on $M$ (called the Tanaka-Webster connection) such that:
\begin{enumerate}[(i)]
\item The Levi distribution HM is parallel with respect to $\nabla$.
\item $\nabla J_b=0$, $\nabla g_\theta=0$.
\item The torsion $T_\nabla$ of $\nabla$ satisfies $T_{\nabla} (X, Y)= 2 d \theta (X, Y) T $ and $T_{\nabla} (T, J_b X) + J_b T_{\nabla} (T, X) =0 $ for any $X, Y \in HM$.
\end{enumerate}
\end{prp}
The pseudo-Hermitian torsion, denoted $\tau$, is the $TM$-valued 1-form defined by $\tau(X) = T_{\nabla} (T,X)$. Note that $\tau(T_{1,0} M) \subset T_{0,1} M$ and $\tau$ is $g_\theta$-symmetric (cf. \cite{dt}).
\begin{prp}[\cite{dt}]
If $(M, HM, J_b, \theta)$ is a strictly pseudoconvex CR manifold, the synthetic object $(J_b, -T, -\theta, g_\theta)$ is a contact metric structure on $M$. This contact metric structure is a Sasakian structure if and only if the pseudo-Hermitian torsion $\tau$ is zero.
\end{prp}
\begin{exm}[Heisenberg group]
The Heisenberg group $\mathbb{H}^n$ is obtained by $\mathbb{C}^n \times \mathbb{R}$ with the group law
$$
(z,t) \cdot (w,s) = (z+w, t+s+ 2 Im \langle z, w \rangle) .
$$
Let us consider the complex vector fields on $\mathbb{H}^n$,
$$
T_\alpha= \frac{\partial}{\partial z^\alpha} + \sqrt{-1} \overline{z^\alpha} \frac{\partial}{\partial t}
$$
where $\frac{\partial }{\partial z^\alpha} = \frac{1}{2} (\frac{\partial}{\partial x^\alpha} - \sqrt{-1} \frac{\partial}{\partial y^\alpha})$ and $z^\alpha= x^\alpha + \sqrt{-1} y^\alpha$.
The CR structure $T_{1,0} \mathbb{H}^n$ is spanned by $\{ T_1, \dots, T_n\}$.
There is a pseudo-Hermitian structure $\theta$ on $\mathbb{H}^n$ defined by
$$
\theta = d t+ 2 \sum_{\alpha =1}^n (x^\alpha d y^\alpha- y^\alpha d x^\alpha ).
$$
The Levi form $L_\theta= 2 \sum_{\alpha =1}^n d z^\alpha \wedge d z^{\bar{\alpha}} $ is positive definite, so $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ is a strictly pseudoconvex CR manifold. The characteristic direction is $T = \frac{\partial}{\partial t}$. Moreover, the Tanaka-Webster connection of $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ is flat. Hence the pseudo-Hermitian torsion is zero, and $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ is Sasakian (see \cite{dt} for details).
\end{exm}
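The group law above can be checked directly. The following Python sketch (an illustration, not part of the paper) verifies associativity of the Heisenberg multiplication on sample points of $\mathbb{H}^2$, using the Hermitian product $\langle z, w \rangle = \sum_\alpha z^\alpha \overline{w^\alpha}$:

```python
# Heisenberg group law on H^n: (z,t)*(w,s) = (z+w, t+s+2 Im<z,w>),
# with the Hermitian product <z,w> = sum_a z_a * conj(w_a).
def h_mul(p, q):
    z, t = p
    w, s = q
    herm = sum(za * wa.conjugate() for za, wa in zip(z, w))
    return ([za + wa for za, wa in zip(z, w)], t + s + 2 * herm.imag)

# Associativity check on sample points of H^2 (n = 2).
a = ([1 + 2j, 0.5 - 1j], 0.3)
b = ([-1 + 1j, 2 + 0j], -1.0)
c = ([0 + 1j, -0.5 + 0.5j], 2.5)

lhs = h_mul(h_mul(a, b), c)
rhs = h_mul(a, h_mul(b, c))
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs[0], rhs[0]))
assert abs(lhs[1] - rhs[1]) < 1e-12
```

Associativity reduces to the bilinearity of $\mathrm{Im}\langle \cdot, \cdot \rangle$; the same computation shows $(0,0)$ is the identity and $(-z,-t)$ the inverse of $(z,t)$.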
Let $(M, HM, J_b, \theta)$ be a strictly pseudoconvex CR (2n+1)-manifold. Let $\{ Z_1, \dots, Z_n \}$ be a local orthonormal frame of $T_{1,0} M$ defined on the open set $U \subset M$ , and $\{ \theta^1, \dots \theta^n \}$ its dual coframe.
Then,
\begin{align*}
d \theta = 2 \sqrt{-1} \sum_{\alpha=1}^{n} \theta^\alpha \wedge \theta^{\bar{\alpha}} .
\end{align*}
Since $\tau(T_{1,0} M) \subset T_{0,1} M$, one can set $\tau Z_\alpha = A_{\alpha}^{\ \bar{\beta}} Z_{\bar{\beta}}$ for some local smooth functions $A_{\alpha}^{\ \bar{\beta}} : U \rightarrow \mathbb{C}$. Denote by $\{ \omega_{\alpha}^{\ \beta} \}$ the Tanaka-Webster connection 1-forms with respect to the frame $\{ Z_\alpha \}$, i.e. $\nabla Z_\alpha = \omega_{\alpha}^{\ \beta} \otimes Z_\beta$. Then the structure equations can be expressed as follows:
\begin{align} \label{se1}
d \theta^\beta = \theta^\alpha \wedge \omega_{\alpha}^{\ \beta} + \theta \wedge \tau^\beta, \quad
\tau_\alpha \wedge \theta^\alpha=0 , \quad
\omega_{\alpha}^{\ \beta} + \omega_{\bar{\beta}}^{\ \bar{\alpha}}=0
\end{align}
where $\tau^\alpha = A^\alpha_{\ \bar{\beta}} \theta^{\bar{\beta}} = A_{\bar{\alpha} \bar{\beta}} \theta^{\bar{\beta}}$ is a local 1-form.
In \cite{dt, w}, the authors showed that the curvature form of Tanaka-Webster connection $\Pi_{\beta}^{\ \alpha} = d \omega_{\beta}^{\ \alpha} - \omega_{\beta}^{\ \gamma} \wedge \omega_{\gamma}^{\ \alpha}$ is given by
\begin{equation} \label{s3}
\Pi_{\beta}^{\ \alpha} = R_{\beta \ \mu \bar{\gamma}}^{\ \alpha} \theta^{\mu} \wedge \theta^{\bar{\gamma}} + W_{\beta \ \mu}^{\ \alpha} \theta^{\mu} \wedge \theta - W_{\ \beta \bar{\mu}}^{\alpha} \theta^{\bar{\mu}} \wedge \theta + 2 \sqrt{-1} \theta_{\beta} \wedge \tau^{\alpha} - 2 \sqrt{-1} \tau_{\beta} \wedge \theta^{\alpha}
\end{equation}
where $W_{\beta \ \mu}^{\ \alpha} = A_{\beta \mu, }^{\quad \ \alpha}$ and $W_{\ \beta \bar{\mu}}^\alpha = A_{\ \bar{\mu} , \beta}^\alpha$. In particular, $R_{\beta \bar{\alpha} \mu \bar{\gamma}}=R_{\mu \bar{\alpha} \beta \bar{\gamma}}$.
The pseudo-Hermitian $Ric$ tensor and the $Tor$ tensor on $T_{1,0} M$ are defined by
\begin{equation} \label{pric}
Ric(X,Y) = R_{\alpha \bar{\beta}} X_{\bar{\alpha}} Y_{\beta} =R_{\alpha \bar{\beta} \gamma \bar{\gamma}} X_{\bar{\alpha}} Y_{\beta}
\end{equation}
and
\begin{equation}
Tor(X,Y) = \sqrt{-1} (X_\alpha Y_\beta A_{\bar{\alpha} \bar{\beta}}- X_{\bar{\alpha}} Y_{\bar{\beta}} A_{\alpha \beta})
\end{equation}
for $X=X_{\bar{\alpha}} Z_{\alpha}, \ Y=Y_{\bar{\beta}} Z_{\beta} \in T_{1,0} M$.
Assume that $(N,h)$ is a Riemannian manifold. Let $\{ \xi_i \}$ be a local orthonormal frame of $TN$, and $\{ \sigma^i \}$ its dual coframe. Denote by $\{ \eta^{\ i}_j \}$ the connection 1-forms of the Levi-Civita connection $\hat{\nabla}$ on $N$, i.e. $\hat{\nabla} \xi_i = \eta_i^{\ j} \otimes \xi_j$. Then we have the structure equations
\begin{align} \label{se2}
d \sigma^i = \sigma^j \wedge \eta^{\ i}_j, \quad
d \eta^{\ i}_j = \eta^{\ l}_j \wedge \eta^{\ i}_l + \Omega^{\ i}_j, \quad
\Omega^{\ i}_j = \frac{1}{2} \hat{R}_{j \ kl}^{\ i} \sigma^k \wedge \sigma^l ,
\end{align}
where $\hat{R}$ is the curvature of Levi-Civita connection $\hat{\nabla}$ in $(N,h)$.
Suppose that $(M, HM, J_b, \theta)$ is a strictly pseudoconvex CR (2n+1)-manifold and $\nabla$ is its Tanaka-Webster connection. Let $f:M \rightarrow N$ be a smooth map and $f^* TN$ the pullback bundle.
Denote
\begin{align} \label{e1}
\begin{gathered}
d_b f =\pi_H df= f^i_\alpha \theta^\alpha \otimes \xi_i + f^i_{\bar{\alpha}} \theta^{\bar{\alpha}} \otimes \xi_i \ \in \Gamma (T^*M \otimes f^*TN) , \\
f_0 = df(T) = f_0^i \: \xi_i \ \in \Gamma (f^*TN) .
\end{gathered}
\end{align}
Let $\nabla^f$ be the pullback connection in $f^* TN$ induced by the Levi-Civita connection of $(N, h)$.
Then we can determine a connection $\nabla^f$ in $T^* M \otimes f^* TN$ by
$$ \nabla_X^f (\omega \otimes \xi) = \nabla_X \omega \otimes \xi + \omega \otimes \nabla_X^f \: \xi$$
for any $X \in \Gamma(TM)$, $\omega \in \Gamma(T^*M)$ and $\xi \in \Gamma(f^*TN)$.
Under the local frame $\{ \theta, \theta^\alpha, \theta^{\bar{\alpha}} \}$ and $\{ \xi_i \}$, the tensor $\nabla^f df$ can be expressed by:
\begin{align}
\nabla^f d f = & \ f^i_{\alpha \beta} \theta^\alpha \otimes \theta^\beta \otimes \xi_i +
f^i_{\bar{\alpha} \beta} \theta^{\bar{\alpha}} \otimes \theta^{\beta} \otimes \xi_i +
f^i_{\alpha \bar{\beta}} \theta^{\alpha} \otimes \theta^{\bar{\beta}} \otimes \xi_i \nonumber \\
& \ +f^i_{\bar{\alpha} \bar{\beta}} \theta^{\bar{\alpha}} \otimes \theta^{\bar{\beta}} \otimes \xi_i + f^i_{0 \alpha} \theta \otimes \theta^\alpha \otimes \xi_i + f^i_{\alpha 0} \theta^\alpha \otimes \theta \otimes \xi_i \nonumber \\
& \ + f^i_{0 \bar{\alpha} } \theta \otimes \theta^{\bar{\alpha}} \otimes \xi_i + f^i_{\bar{\alpha} 0} \theta^{\bar{\alpha}} \otimes \theta \otimes \xi_i +f^i_{00} \theta \otimes \theta \otimes \xi_i . \label{se3}
\end{align}
Denote by $\nabla^f_b d_b f$ the restriction of $\nabla^f df$ to $HM \times HM$.
Throughout the paper, the Einstein summation convention is used (except in the inequality \eqref{bfp1}) and the ranges of indices are
$$
\alpha, \beta, \gamma, \mu ,\dots \in \{ 1, \dots, n \}, \quad i, j, k, l , \dots \in \{ 1, \dots, \dim N \}
$$
where $\dim M = 2n+1$.
\begin{dfn}[\cite{dt}]
Let us consider the $f-$tensor field on $M$ given by
$$\tau(f;\theta,\hat{\nabla})=trace_{G_\theta} ( \nabla^f_b d_b f )\in \Gamma (f^* TN) .$$
We say that $f$ is pseudoharmonic, if $\tau(f;\theta,\hat{\nabla})=0$.
\end{dfn}
It is known that pseudoharmonic maps are the critical points of the following energy functional (cf. \cite{dt}):
$$E_\Omega (f)= \frac{1}{2} \int_\Omega trace_{G_\theta} (\pi_H f^*h) \ \theta \wedge (d\theta)^n$$
for any compact domain $\Omega \subset \subset M$.
With respect to the local frame $\{ \theta, \theta^\alpha, \theta^{\bar{\alpha}} \}$ and $\{ \xi_i \}$, we have
\begin{align} \label{ph}
\tau(f;\theta,\hat{\nabla}) = (f^i_{\alpha \bar{\alpha}} + f^i_{\bar{\alpha} \alpha}) \xi_i .
\end{align}
\section{Bochner-Type Formulas}
In \cite{g}, A. Greenleaf obtained the commutation relations of smooth functions and established Bochner-type formulas of pseudoharmonic functions. In \cite{lee}, John M. Lee derived the commutation relations of $(1,0)$-forms. We shall need the commutation relations of various covariant derivatives of smooth maps and Bochner-type formulas of pseudoharmonic maps.
\begin{lem} \label{cc}
Let $f: M\rightarrow N$ be a smooth map. The covariant derivatives of $df$ satisfy the following commutation relations:
\begin{align}
f^i_{\alpha \beta} = & f^i_{\beta \alpha}, \label{s1} \\
f^i_{\alpha \bar{\beta}} -f^i_{\bar{\beta} \alpha} = & 2 \sqrt{-1} f^i_0 \delta_{\alpha \bar{\beta}}, \label{s6} \\
f^i_{0 \alpha} -f^i_{\alpha 0} = & f^i_{\bar{\beta}} A^{\bar{\beta}}_{\ \alpha} , \label{s2}
\end{align}
and
\begin{align}
f^i_{\alpha \beta \gamma} - f^i_{\alpha \gamma \beta} = &2 \sqrt{-1} f^i_{\beta} A_{\alpha \gamma} -2 \sqrt{-1} f^i_\gamma A_{\alpha \beta} -f^j_\alpha f^k_\beta f^l_\gamma \hat{R}_{j \ kl}^{\ i}, \label{s4} \\
f^i_{\alpha \bar{\beta} \bar{\gamma}}- f^i_{\alpha \bar{\gamma} \bar{\beta}} = & 2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\gamma}} \delta_{\alpha \bar{\beta}}- 2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\beta}} \delta_{\alpha \bar{\gamma}}- f^j_{\alpha} f^k_{\bar{\beta}} f^l_{\bar{\gamma}} \hat{R}_{j \ kl}^{\ i}, \\
f^i_{\alpha \beta \bar{\gamma}} - f^i_{\alpha \bar{\gamma} \beta} =& f^i_\mu R_{\alpha \ \beta \bar{\gamma}}^{\ \mu} + 2 \sqrt{-1} f^i_{\alpha 0} \delta_{\beta \bar{\gamma}} - f^j_{\alpha} f^k_{\beta} f^l_{\bar{\gamma}} \hat{R}_{j \ kl}^{\ i}, \\
f^i_{\alpha \beta 0} - f^i_{\alpha 0 \beta} = & f^i_\gamma A_{\alpha \beta, }^{\quad \ \gamma}-f^i_{\alpha \bar{\gamma}} A^{\bar{\gamma}}_{\ \beta}- f^j_{\alpha} f^k_{\beta} f^l_0 \hat{R}_{j \ kl}^{\ i}, \\
f^i_{\alpha \bar{\beta} 0} -f^i_{\alpha 0 \bar{\beta}} =& -f^i_\gamma A^{\gamma}_{\ \bar{\beta},\alpha}- f^i_{\alpha \gamma} A^{\gamma}_{\ \bar{\beta}} -f^j_{\alpha} f^k_{\bar{\beta}} f^l_0 \hat{R}_{j \ kl}^{\ i}. \label{s5}
\end{align}
\end{lem}
\begin{proof}
The identities \eqref{e1} imply
\begin{align} \label{e2}
f^* \sigma^i = f^i_\alpha \theta^\alpha + f^i_{\bar{\alpha}} \theta^{\bar{\alpha}} + f^i_0 \theta .
\end{align}
We take the exterior derivative of \eqref{e2} and use the structure equations \eqref{se1}, \eqref{se2} to get
\begin{align}
0=& (d f^i_\alpha -f^i_\beta \omega^{\ \beta}_\alpha + f^j_\alpha \tilde{\eta}^{\ i}_j ) \wedge \theta^\alpha
+ (d f^i_{\bar{\alpha}} -f^i_{\bar{\beta}} \omega^{\ \bar{\beta}}_{\bar{\alpha}} + f^j_{\bar{\alpha}} \tilde{\eta}^{\ i}_j ) \wedge \theta^{\bar{\alpha}} \nonumber \\
& + (d f^i_0 + f^j_0 \tilde{\eta}^{\ i}_j) \wedge \theta + f^i_\alpha \theta \wedge \tau^\alpha + f^i_{\bar{\alpha}} \theta \wedge \tau^{\bar{\alpha}} +2 \sqrt{-1} f^i_0 \theta^\beta \wedge \theta^{\bar{\beta}} , \label{se4}
\end{align}
where $\tilde{\eta}^{\ i}_j = f^* \eta^{\ i}_j$.
On the other hand, the second-order covariant derivatives satisfy
\begin{align}
d f^i_\alpha -f^i_\beta \omega^{\ \beta}_\alpha + f^j_\alpha \tilde{\eta}^{\ i}_j &= f^i_{\alpha \beta} \theta^\beta + f^i_{\alpha \bar{\beta}} \theta^{\bar{\beta}} +f^i_{\alpha 0} \theta , \label{se5} \\
d f^i_{\bar{\alpha}} -f^i_{\bar{\beta}} \omega^{\ \bar{\beta}}_{\bar{\alpha}} + f^j_{\bar{\alpha}} \tilde{\eta}^{\ i}_j &= f^i_{\bar{\alpha} \beta} \theta^\beta + f^i_{\bar{\alpha} \bar{\beta}} \theta^{\bar{\beta}} +f^i_{\bar{\alpha} 0} \theta , \\
d f^i_0 + f^j_0 \tilde{\eta}^{\ i}_j &= f^i_{0 \beta} \theta^\beta + f^i_{0 \bar{\beta}} \theta^{\bar{\beta}} +f^i_{0 0} \theta .
\end{align}
Substituting the above three equations into \eqref{se4} and using $\tau^\alpha = A^\alpha_{\ \bar{\beta}} \theta^{\bar{\beta}}$, we obtain
\begin{align*}
0= & f^i_{\alpha \beta} \theta^{\beta} \wedge \theta^{\alpha} +f^i_{\bar{\alpha} \bar{\beta}} \theta^{\bar{\beta}} \wedge \theta^{\bar{\alpha}} + (f^i_{\alpha \bar{\beta}} -f^i_{\bar{\beta} \alpha} - 2\sqrt{-1} f^i_0 \delta_{\alpha \bar{\beta}} ) \theta^{\bar{\beta}} \wedge \theta^\alpha \\
& + (f^i_{\alpha 0} - f^i_{0 \alpha} + f^i_{\bar{\beta}} A^{\bar{\beta}}_{\ \alpha}) \theta \wedge \theta^\alpha + (f^i_{\bar{\alpha} 0} - f^i_{0 \bar{\alpha}} + f^i_{\beta} A^{\beta}_{\ \bar{\alpha}}) \theta \wedge \theta^{\bar{\alpha}} ,
\end{align*}
which (by comparing types) yields \eqref{s1}-\eqref{s2}. To prove the next five equations, we differentiate \eqref{se5} and use the structure equations again. Then we obtain
\begin{align}
0= & (d f^i_{\alpha \beta} - f^i_{\alpha \gamma} \omega^{\ \gamma}_{\beta} - f^i_{\gamma \beta} \omega_{\alpha}^{\ \gamma} + f^j_{\alpha \beta} \tilde{\eta}^{\ i}_j) \wedge \theta^\beta \nonumber \\
&+(d f^i_{\alpha \bar{\beta}} - f^i_{\alpha \bar{\gamma}} \omega^{\ \bar{\gamma}}_{\bar{\beta}} - f^i_{\gamma \bar{\beta}} \omega_{\alpha}^{\ \gamma} + f^j_{\alpha \bar{\beta}} \tilde{\eta}^{\ i}_j) \wedge \theta^{\bar{\beta}} \nonumber \\
&+ (d f^i_{\alpha 0} - f^i_{\beta 0} \omega_{\alpha}^{\ \beta} + f^j_{\alpha 0} \tilde{\eta}^{\ i}_j) \wedge \theta + f^i_\beta \Pi^{\ \beta}_{ \alpha}- f^j_\alpha f^*(\Omega^i_j) \nonumber \\
& + f^i_{\alpha \beta} A^{\beta}_{\ \bar{\gamma}} \theta \wedge \theta^{\bar{\gamma}} + f^i_{\alpha \bar{\beta}} A^{\bar{\beta}}_{\ \gamma} \theta \wedge \theta^\gamma+ 2 \sqrt{-1} f^i_{\alpha 0} \theta^\beta \wedge \theta_\beta . \label{se6}
\end{align}
Since the third-order covariant derivatives of $f$ are given by
\begin{align*}
d f^i_{\alpha \beta} - f^i_{\alpha \gamma} \omega^{\ \gamma}_{\beta} - f^i_{\gamma \beta} \omega_{\alpha}^{\ \gamma} + f^j_{\alpha \beta} \tilde{\eta}^{\ i}_j =& f^i_{\alpha \beta \gamma} \theta^{\gamma} + f^i_{\alpha \beta \bar{\gamma}} \theta^{\bar{\gamma}} + f^i_{\alpha \beta 0} \theta , \\
d f^i_{\alpha \bar{\beta}} - f^i_{\alpha \bar{\gamma}} \omega^{\ \bar{\gamma}}_{\bar{\beta}} - f^i_{\gamma \bar{\beta}} \omega_{\alpha}^{\ \gamma} + f^j_{\alpha \bar{\beta}} \tilde{\eta}^{\ i}_j =& f^i_{\alpha \bar{\beta} \gamma} \theta^{\gamma} + f^i_{\alpha \bar{\beta} \bar{\gamma}} \theta^{\bar{\gamma}} + f^i_{\alpha \bar{\beta} 0} \theta , \\
d f^i_{\alpha 0} - f^i_{\beta 0} \omega_{\alpha}^{\ \beta} + f^j_{\alpha 0} \tilde{\eta}^{\ i}_j = & f^i_{\alpha 0 \gamma} \theta^{\gamma} + f^i_{\alpha 0 \bar{\gamma}} \theta^{\bar{\gamma}} + f^i_{\alpha 0 0} \theta ,
\end{align*}
we can substitute them into \eqref{se6} and use \eqref{s3}, \eqref{se2} to obtain
\begin{align*}
0&= \sum_{\gamma < \beta} (f^i_{\alpha \beta \gamma} - f^i_{\alpha \gamma \beta} - 2\sqrt{-1} f^i_{\beta} A_{\alpha \gamma} + 2 \sqrt{-1} f^i_\gamma A_{\alpha \beta} + f^j_\alpha f^k_\beta f^l_\gamma \hat{R}_{j \ kl}^{\ i}) \theta^\gamma \wedge \theta^\beta \\
&+ \sum_{\gamma , \beta} (f^i_{\alpha \beta \bar{\gamma}} - f^i_{\alpha \bar{\gamma} \beta} - f^i_\mu R_{\alpha \ \beta \bar{\gamma}}^{\ \mu} - 2 \sqrt{-1} f^i_{\alpha 0} \delta_{\beta \bar{\gamma}} + f^j_{\alpha} f^k_{\beta} f^l_{\bar{\gamma}} \hat{R}_{j \ kl}^{\ i}) \theta^{\bar{\gamma}} \wedge \theta^{\beta} \\
&+ \sum_{\gamma < \beta} (f^i_{\alpha \bar{\beta} \bar{\gamma}}- f^i_{\alpha \bar{\gamma} \bar{\beta}} -2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\gamma}} \delta_{\alpha \bar{\beta}}+ 2\sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\beta}} \delta_{\alpha \bar{\gamma}}+ f^j_{\alpha} f^k_{\bar{\beta}} f^l_{\bar{\gamma}} \hat{R}_{j \ kl}^{\ i}) \theta^{\bar{\gamma}} \wedge \theta^{\bar{\beta}} \\
&+ \sum_{\beta} (f^i_{\alpha \beta 0} - f^i_{\alpha 0 \beta} - f^i_\gamma A_{\alpha \beta, }^{\quad \ \gamma}+f^i_{\alpha \bar{\gamma}} A^{\bar{\gamma}}_{\ \beta}+ f^j_{\alpha} f^k_{\beta} f^l_0 \hat{R}_{j \ kl}^{\ i}) \theta \wedge \theta^\beta \\
&+ \sum_{\beta} (f^i_{\alpha \bar{\beta} 0} -f^i_{\alpha 0 \bar{\beta}} +f^i_\gamma A^{\gamma}_{\ \bar{\beta},\alpha}+ f^i_{\alpha \gamma} A^{\gamma}_{\ \bar{\beta}} +f^j_{\alpha} f^k_{\bar{\beta}} f^l_0 \hat{R}_{j \ kl}^{\ i}) \theta \wedge \theta^{\bar{\beta}} ,
\end{align*}
which (by comparing types) yields \eqref{s4}-\eqref{s5}.
\end{proof}
Before introducing the Bochner-type formulas, we recall a property of the sub-Laplace operator $\triangle_b$ (cf. \cite{dt}). If $u$ is a $C^2$ function on $M$, then $\triangle_b u = trace_{G_\theta} (\nabla_b d_b u)$. With respect to the local orthonormal frame $\{ T, Z_\alpha, Z_{\bar{\alpha}} \}$, we have $\triangle_b u = u_{\alpha \bar{\alpha}}+ u_{\bar{\alpha} \alpha}$.
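On the lowest-dimensional Heisenberg group $\mathbb{H}^1$ these operators are completely explicit: writing $T_1 = \frac{1}{2}(X - \sqrt{-1}\, Y)$ gives the horizontal vector fields $X = \partial_x + 2y\,\partial_t$ and $Y = \partial_y - 2x\,\partial_t$, and, up to a positive constant depending on the frame normalization, $\triangle_b$ is the operator $X^2 + Y^2$. The SymPy sketch below (an illustration, not part of the paper) verifies the bracket relation $[X,Y] = -4\,\partial_t = -4T$, which encodes the nondegeneracy of the Levi form:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
f = sp.Function('f')(x, y, t)

# Horizontal vector fields on H^1, with T_1 = (X - i*Y)/2.
X = lambda g: sp.diff(g, x) + 2 * y * sp.diff(g, t)
Y = lambda g: sp.diff(g, y) - 2 * x * sp.diff(g, t)

# The bracket [X, Y] acting on a generic smooth function f equals -4*T = -4*d/dt.
bracket = X(Y(f)) - Y(X(f))
assert sp.simplify(bracket + 4 * sp.diff(f, t)) == 0

# Up to normalization, the sub-Laplacian in this frame is X^2 + Y^2.
sub_lap = lambda g: sp.simplify(X(X(g)) + Y(Y(g)))
```

For instance, $\mathrm{sub\_lap}$ annihilates the coordinate $t$, so $u = t$ is pseudoharmonic on $\mathbb{H}^1$ even though it is not harmonic for any Riemannian Laplacian in these coordinates alone.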
\begin{lem} \label{c1}
For any smooth map $f: M \rightarrow N$, we have
\begin{align}
\frac{1}{2} \triangle_b |d_b f|^2 =& \ |\nabla_b^f d_b f|^2 +\langle \nabla_b^f \tau (f;\theta,\hat{\nabla}), d_b f\rangle - 4 \langle d_b f \circ J_b, \nabla_b^f f_0\rangle \nonumber\\
& \ + (2Ric - 2 (n-2)Tor) (f^i_{\bar{\beta}} Z_\beta, f^i_{\bar{\alpha}} Z_\alpha) \nonumber\\
& \ +2 (f^i_{\bar{\alpha}} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\alpha} \hat{R}^{\ i}_{j\ kl}
+ f^i_{\alpha} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\bar{\alpha}} \hat{R}^{\ i}_{j\ kl}) , \label{bf1}\\
\frac{1}{2} \triangle_b |d f(T)|^2 =& \ | \nabla_b^f f_0 |^2 + \langle df(T), \nabla_T^f \; \tau(f;\theta,\hat{\nabla})\rangle + 2 f^i_0 f^j_{\alpha} f^k_{\bar{\alpha}} f^l_0 \hat{R}_{j \ kl}^{\ i} \nonumber\\
&\ +2( f^i_0 f^i_\beta A_{\bar{\beta} \bar{\alpha} , \alpha} + f^i_0 f^i_{\bar{\beta}} A_{\beta \alpha , \bar{\alpha}} + f^i_0 f^i_{\bar{\beta} \bar{\alpha}} A_{\beta \alpha} +f^i_0 f^i_{\beta \alpha} A_{\bar{\beta} \bar{\alpha}} ) \label{bf2}
\end{align}
where $\nabla_b^f \tau (f;\theta,\hat{\nabla})$ and $\nabla_b^f f_0$ are the restriction of $\nabla^f \tau (f;\theta,\hat{\nabla})$ and $\nabla^f f_0$ to $HM$.
\end{lem}
\begin{proof}
Since $|d_b f|^2 = 2 f^i_\alpha f^i_{\bar{\alpha}}$, we have
\begin{align*}
\frac{1}{2} \triangle_b |d_b f|^2 =& (f^i_\alpha f^i_{\bar{\alpha}})_{\beta \bar{\beta}}+ (f^i_\alpha f^i_{\bar{\alpha}})_{\bar{\beta} \beta} \\
= & 2(f^i_{\alpha \beta} f^i_{\bar{\alpha} \bar{\beta}} + f^i_{\alpha \bar{\beta}} f^i_{\bar{\alpha} \beta}) + f^i_{\bar{\alpha}} f^i_{\alpha \beta \bar{\beta}} + f^i_{\alpha} f^i_{\bar{\alpha} \beta \bar{\beta}} + f^i_{\alpha} f^i_{\bar{\alpha} \bar{\beta} \beta} + f^i_{\bar{\alpha}} f^i_{\alpha \bar{\beta} \beta} \\
= & |\nabla^f_b d_b f|^2 + f^i_{\bar{\alpha}} f^i_{\alpha \beta \bar{\beta}} + f^i_{\alpha} f^i_{\bar{\alpha} \beta \bar{\beta}} + f^i_{\alpha} f^i_{\bar{\alpha} \bar{\beta} \beta} + f^i_{\bar{\alpha}} f^i_{\alpha \bar{\beta} \beta} .
\end{align*}
Lemma \ref{cc} implies
\begin{align*}
f^i_{\alpha \beta \bar{\beta}} = & f^i_{\beta \alpha \bar{\beta}} = f^i_{\beta \bar{\beta} \alpha } +f^i_\mu R_{\beta \ \alpha \bar{\beta}}^{\ \mu} + 2 \sqrt{-1} f^i_{\beta 0} \delta_{\alpha \bar{\beta}} - f^j_{\beta} f^k_{\alpha} f^l_{\bar{\beta}} \hat{R}_{j \ kl}^{\ i} \\
=& f^i_{\beta \bar{\beta} \alpha } +f^i_\mu R_{\beta \ \alpha \bar{\beta}}^{\ \mu} + 2 \sqrt{-1} (f^i_{0 \beta}- f^i_{\bar{\mu}} A^{\bar{\mu}}_{\ \beta}) \delta_{\alpha \bar{\beta}} - f^j_{\beta} f^k_{\alpha} f^l_{\bar{\beta}} \hat{R}_{j \ kl}^{\ i}, \\
f^i_{\bar{\alpha} \beta \bar{\beta}} = & (f^i_{\beta \bar{\alpha}} - 2 \sqrt{-1} f^i_0 \delta_{\beta \bar{\alpha}})_{\bar{\beta}} = f^i_{\beta \bar{\alpha} \bar{\beta}} - 2 \sqrt{-1} f^i_{0 \bar{\beta}} \delta_{\beta \bar{\alpha}} \\
= & f^i_{\beta \bar{\beta} \bar{\alpha}}+ 2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\beta}} \delta_{\beta \bar{\alpha}}- 2 \sqrt{-1} f^i_\mu A^{\mu}_{\ \bar{\alpha}} \delta_{\beta \bar{\beta}}- f^j_{\beta} f^k_{\bar{\alpha}} f^l_{\bar{\beta}} \hat{R}_{j \ kl}^{\ i}- 2 \sqrt{-1} f^i_{0 \bar{\beta}} \delta_{\beta \bar{\alpha}} .
\end{align*}
Substituting them into the previous identity, we obtain
\begin{align*}
\frac{1}{2} \triangle_b |d_b f|^2 =& |\nabla^f_b d_b f|^2 + f^i_{\bar{\alpha}} (f^i_{\beta \bar{\beta}} + f^i_{\bar{\beta} \beta})_{\alpha} + f^i_{\alpha} (f^i_{\beta \bar{\beta}} + f^i_{\bar{\beta} \beta})_{\bar{\alpha}} \\
&+ 2 f^i_{\bar{\alpha}} f^i_{\mu} R_{\alpha \bar{\mu} \beta \bar{\beta}} - 2 \sqrt{-1} (n-2) (f^i_{\alpha} f^i_\mu A_{\bar{\mu} \bar{\alpha}}- f^i_{\bar{\alpha}} f^i_{\bar{\mu}} A_{\mu \alpha}) \\
& +2 (f^i_{\bar{\alpha}} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\alpha} \hat{R}^{\ i}_{j\ kl}
+ f^i_{\alpha} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\bar{\alpha}} \hat{R}^{\ i}_{j\ kl}) + 4 \sqrt{-1} (f^i_{\bar{\alpha}} f^i_{0 \alpha} - f^i_{\alpha} f^i_{0 \bar{\alpha}}) .
\end{align*}
By the identity $\langle d_b f \circ J_b, \nabla_b^f f_0\rangle= \sqrt{-1} (f^i_{\alpha} f^i_{0 \bar{\alpha}} -f^i_{\bar{\alpha}} f^i_{0 \alpha}) $, we get \eqref{bf1}. The proof of \eqref{bf2} is similar.
\end{proof}
\begin{lem} \label{bte}
Let $(M, HM, J_b, \theta)$ be a $(2n+1)$-Sasakian manifold with
\begin{equation} \label{ric3}
Ric(X,X) \geq -k |X|^2
\end{equation}
for all $X \in T_{1,0} M$, and some $k \geq 0$. Suppose that $(N,h)$ is a Riemannian manifold with nonpositive sectional curvature. If $f: M \rightarrow N$ is a pseudoharmonic map, then for any $\nu>0$, we have
\begin{equation}
\triangle_b |d_b f|^2 \geq |\nabla_b^f d_b f|^2 + 2n |f_0|^2 - \left( 2k+ \frac{32}{\nu } \right) |d_b f|^2 - \frac{1}{2} \nu |\nabla_b^f f_0|^2 \label{nsm3}
\end{equation}
and
\begin{equation} \label{nsm4}
\triangle_b |f_0|^2 \geq 2 |\nabla_b^f f_0|^2 .
\end{equation}
\end{lem}
\begin{proof}
Since $f$ is pseudoharmonic, by definition we have $\tau(f;\theta,\hat{\nabla}) =0$. Because $(M, HM, J_b, \theta)$ is Sasakian, the tensor $Tor=0$. Hence, by the assumption on the pseudo-Hermitian Ricci curvature, \eqref{bf1} becomes
\begin{align}
\triangle_b |d_b f|^2 =& 2|\nabla_b^f d_b f|^2 - 8 \langle d_b f \circ J_b, \nabla_b^f f_0\rangle - 2 k |d_b f|^2 \nonumber\\
& \ +4 (f^i_{\bar{\alpha}} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\alpha} \hat{R}^{\ i}_{j\ kl}
+ f^i_{\alpha} f^j_{\beta} f^k_{\bar{\beta}} f^l_{\bar{\alpha}} \hat{R}^{\ i}_{j\ kl}). \label{e3}
\end{align}
Using the commutation relation \eqref{s6}, we can estimate
\begin{align}
|\nabla_b^f d_b f |^2= & 2 \: \sum_{\alpha, \beta =1}^n (f^i_{\bar{\alpha} \beta} f^i_{\alpha \bar{\beta}} + f^i_{\alpha \beta} f^i_{\bar{\alpha} \bar{\beta}}) \geq 2 \: \sum_{\alpha=1}^n f^i_{\alpha \bar{\alpha}} f^i_{\bar{\alpha} \alpha} \nonumber \\
= & \frac{1}{2} \: \sum_{\alpha=1}^n [|f^i_{\alpha \bar{\alpha}}+ f^i_{\bar{\alpha} \alpha}|^2+ |f^i_{\alpha \bar{\alpha}}- f^i_{\bar{\alpha} \alpha}|^2]
\geq \frac{1}{2} \: \sum_{\alpha=1}^n |f^i_{\alpha \bar{\alpha}}- f^i_{\bar{\alpha} \alpha}|^2 \nonumber \\
= & 2n |f_0|^2 . \label{bfp1}
\end{align}
The second term of the right side of \eqref{e3} can be controlled by the Schwarz inequality
\begin{equation} \label{bfp2}
- 8 \langle d_b f \circ J_b, \nabla_b^f f_0 \rangle \geq - \frac{32}{\nu} |d_b f|^2 - \frac{1}{2} \nu |\nabla_b^f f_0|^2 .
\end{equation}
To deal with the last term of \eqref{e3}, we set $e_\alpha = Re \; df(Z_\alpha)$ and $ e_{\alpha}' = Im \; df(Z_\alpha) $. Then
\begin{align}
\text{Last term of } \eqref{e3} =& \ 4 \langle \hat{R}( df(Z_{\bar{\beta}}), df(Z_{\bar{\alpha}}) ) df(Z_\beta), df(Z_\alpha)\rangle \nonumber \\
& \ \ + 4\langle \hat{R}( df(Z_{\bar{\beta}}) , df(Z_\alpha) ) df(Z_\beta), df(Z_{\bar{\alpha}})\rangle \nonumber \\
=& \ -4( \langle \hat{R}( e_\alpha, e_\beta ) e_\beta, e_\alpha\rangle + \langle \hat{R}( e_\alpha, e_{\beta}' ) e_{\beta}', e_\alpha\rangle \nonumber \\
& \ \ + \langle \hat{R}( e_{\alpha}', e_\beta ) e_\beta, e_{\alpha}'\rangle + \langle \hat{R}( e_{\alpha}', e_{\beta}' ) e_{\beta}', e_{\alpha}'\rangle ) \nonumber \\
\geq & \ 0 \label{bfp3}
\end{align}
where we have used the assumption that the sectional curvature of $N$ is nonpositive. Substituting \eqref{bfp1}, \eqref{bfp2} and \eqref{bfp3} into \eqref{e3}, we get \eqref{nsm3}.
Observe that
\begin{align}
f^i_0 f^j_{\alpha} f^k_{\bar{\alpha}} f^l_0 \hat{R}_{j \ kl}^{\ i} = & \langle \hat{R}( df(Z_{\bar{\alpha}}), df(T) ) df(Z_\alpha) , df(T)\rangle \nonumber\\
= & -(\langle \hat{R}(e_\alpha,e_0)e_0,e_\alpha\rangle + \langle \hat{R}(e_{\alpha}',e_0)e_0,e_{\alpha}'\rangle ) \nonumber \\
\geq & 0 . \label{btep}
\end{align}
Then \eqref{nsm4} can be easily proved from \eqref{bf2} and \eqref{btep}.
\end{proof}
From now on, we assume that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Let $\rho$ be the distance to a fixed point $y_0 \in N$. Then $\rho^2$ is smooth on $N$. By the Hessian comparison theorem, we have
$$Hess(\rho^2) \geq 2h . $$
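In the flat model $N = \mathbb{R}^m$ (which has nonpositive sectional curvature) the comparison is an equality: with $y_0 = 0$ one has $\rho^2 = \sum_i (y^i)^2$ and $\mathrm{Hess}(\rho^2) = 2h$ exactly. A SymPy check of this model case, included only as an illustration:

```python
import sympy as sp

m = 3  # illustrative dimension
ys = sp.symbols('y1:%d' % (m + 1))
rho2 = sum(yi**2 for yi in ys)   # squared distance to the origin in R^m

# The Hessian of rho^2 equals twice the Euclidean metric (2 * identity).
hess = sp.hessian(rho2, ys)
assert hess == 2 * sp.eye(m)
```

Negative curvature only makes $\mathrm{Hess}(\rho^2)$ larger, which is the content of the displayed inequality.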
For any smooth map $f: M \rightarrow N$, the chain rule gives that
\begin{align*}
\triangle_b (\rho^2 \circ f) = d \rho^2(\tau(f;\theta,\hat{\nabla})) + trace_{G_\theta} \: Hess(\rho^2)(d_b f,d_b f).
\end{align*}
Therefore, we can conclude that if $f$ is pseudoharmonic, then
\begin{equation} \label{nd1}
\triangle_b (\rho^2 \circ f) \geq 2|d_b f|^2.
\end{equation}
\section{Carnot-Carath\'eodory Distance}
As is well known, the maximum principle is an important tool for obtaining pointwise estimates for solutions of geometric PDEs.
In order to use it in Sasakian manifolds, we need some special exhaustion function to construct a cutoff function. A natural choice is the Carnot-Carath\'eodory distance function.
\begin{dfn} \label{ccd}
Let $(M, HM, J_b, \theta)$ be a strictly pseudoconvex CR manifold. A piecewise $C^1$-curve $\gamma : [0,1] \rightarrow M$ is said to be horizontal if $\gamma' (t) \in HM$ whenever $\gamma' (t)$ exists. The length of $\gamma$ is given by $$ l(\gamma) =\int^1_0 G_\theta (\gamma', \gamma')^{1/2} \, dt . $$
We define the Carnot-Carath\'eodory distance between two points $p,q \in M$ by
$$ d_c(p,q) = inf \{ l(\gamma) | \: \gamma \in C_{p,q} \} $$
where $C_{p,q}$ is the set of all horizontal curves joining $p$ and $q$. We say that $(M, HM, J_b, \theta)$ is complete if it is complete as a metric space. A horizontal curve $\gamma: [0,1] \rightarrow M$ is called a length minimizing geodesic if $l(\gamma) =d_c(\gamma(0), \gamma (1))$. Fix $x_0 \in M$, and set $r(x) = d_c(x_0,x)$. The Carnot-Carath\'eodory ball of radius $R$ centered at $x_0$ is denoted by $B_R(x_0) = \{ x \in M | \: r(x) <R \}$.
\end{dfn}
In \cite{s}, R. Strichartz pointed out that if $(M, HM, J_b, \theta)$ is complete, then for any $x_0, x \in M$, there exists at least one length minimizing geodesic $\gamma : [0,1] \rightarrow M$ joining $x_0$ and $x$. Moreover, $\gamma$ can be extended to $(-\infty, \infty)$.
We say that $x$ is a cut point of $x_0$, if for any $\epsilon > 0$, $\gamma |_{[0,1+ \epsilon]}$ is no longer a length minimizing geodesic joining $x_0$ and $\gamma (1+ \epsilon)$. The set of all cut points of $x_0$, denoted by $cut(x_0)$, is called the cut locus of $x_0$.
Theorem 1.2 and Proposition 1.2 in \cite{a} assert that the Carnot-Carath\'eodory distance $r$ to a reference point $x_0 $ is smooth on $M \setminus ( cut(x_0) \cup \{ x_0 \} )$.
\begin{dfn}[\cite{ckt}] \label{crcp}
Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian $(2n+1)$-manifold with
\begin{equation*}
Ric(X,X) \geq -k |X|^2
\end{equation*}
for all $X \in T_{1,0} M$ and some $k \geq 0$.
We say that $(M, HM, J_b, \theta)$ satisfies CR sub-Laplace comparison property relative to a point $x_0 \in M$, if there exists a positive constant $C_1$
such that the Carnot-Carath\'eodory distance $r$ to $x_0$ satisfies
\begin{equation}
\triangle_b r \leq C_1 (\frac{1}{r}+\sqrt{k})
\end{equation}
on $M \setminus ( cut(x_0) \cup \{ x_0 \} )$ wherever $r \geq 1$.
\end{dfn}
\begin{prp}[\cite{ctw}] \label{hp1}
There exists a positive constant $C_1'$ on the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ such that
\begin{eqnarray}
\triangle_b r \leq \frac{C_1'}{r} \label{h1}
\end{eqnarray}
on $\mathbb{H}^n \setminus ( cut(o) \cup \{ o \} )$. Here $r$ is the Carnot-Carath\'eodory distance to the origin $o$.
\end{prp}
Since the pseudo-Hermitian torsion and the pseudo-Hermitian Ricci curvature of the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ are both zero, Proposition \ref{hp1} asserts that the CR sub-Laplace comparison property holds on the Heisenberg group.
\section{Sub-Gradient Estimate For Pseudoharmonic Map} \label{slp}
In this section, we will obtain a sub-gradient estimate for pseudoharmonic maps.
Let $(M, HM, J_b, \theta)$ be a noncompact complete $(2n+1)$-Sasakian manifold with CR sub-Laplace comparison property relative to a point $x_0 \in M$ and
\begin{equation*}
Ric(X,X) \geq -k |X|^2
\end{equation*}
for all $X \in T_{1,0} M$ and some $k\geq 0$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. We consider a pseudoharmonic map $f: M \rightarrow N$. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$.
We choose a function $\psi \in C^{\infty} ([0, \infty))$ with the property that
$$ \psi |_{[0,1]} =1, \quad \psi |_{[2, \infty)}=0, \quad -C_2 \: |\psi|^{\frac{1}{2}} \leq \psi ' \leq 0, \quad |\psi ''| \leq C_2 . $$
Let $R > 1 $ be fixed. By the CR sub-Laplace comparison property, the cutoff function $\eta = \psi(\frac{r}{R}) $ satisfies:
\begin{equation}
\begin{gathered}
\eta^{-1} |d_b \eta|^2 \leq \frac{C_2'}{R^2} \\
\triangle_b \eta = \frac{\psi ''}{R^2} |d_b r|^2 + \frac{\psi '}{R} \triangle_b r \geq - C_2' \: \left( \frac{1}{R^2}+\frac{\sqrt{k}}{R} \right) \label{nseta}
\end{gathered}
\end{equation}
on $M \setminus ( cut(x_0) \cup \{ x_0 \} )$. Here $C_2'$ depends only on $C_2$ and $C_1$. Denote $b_R =2 \: \sup \: \{ \rho \circ f(x) | x \in B_{2R} (x_0) \}$. We define a smooth function $F: B_{2R} (x_0) \rightarrow \mathbb{R}$ by
\begin{equation} \label{nf}
F(x)= \frac{|d_b f|^2 + \mu \eta |f_0|^2}{ b_R^2 - \rho^2 \circ f} (x) .
\end{equation}
The positive coefficient $\mu$ will be determined later.
\begin{lem} \label{l1}
If $r$ is smooth at $x \in B_{2R} (x_0)$ and $(\eta F) (x) \neq 0$, then at $x$, we have
\begin{align}
& \triangle_b (|d_b f|^2 + \mu \eta |f_0|^2) \geq \frac{1}{24} \frac{|d_b (|d_b f|^2 + \mu \eta |f_0|^2)|^2}{|d_b f|^2 + \mu \eta |f_0|^2} \nonumber\\
& \quad \quad \qquad + \left[ 2n - 6 \mu C_2' ( \frac{1}{R^2}+\frac{\sqrt{k}}{R}) \right] |f_0|^2 - 32 \left( k+ \frac{1}{\mu \eta} \right) |d_b f|^2 . \label{nsm6}
\end{align}
\end{lem}
\begin{proof}
First we compute
\begin{align*}
\triangle_b (\mu \eta |f_0|^2) = & \mu[(\triangle_b \eta) |f_0|^2 + 2 \langle d_b \eta, d_b |f_0|^2\rangle + \eta \: \triangle_b |f_0|^2] \\
= & \mu[(\triangle_b \eta) |f_0|^2 + 2 \langle d_b \eta, 2 \langle \nabla_b^f f_0, f_0\rangle_{f^*TN} \rangle + \eta \: \triangle_b |f_0|^2] \\
\geq & \mu[(\triangle_b \eta) |f_0|^2 - \eta |\nabla_b^f f_0|^2 - 4 |f_0|^2 \eta^{-1} |d_b \eta|^2 + \eta \: \triangle_b |f_0|^2] \\
\geq & \ \mu \eta |\nabla_b^f f_0|^2- 5 \mu C_2' \: \left(\frac{1}{R^2}+\frac{\sqrt{k}}{R} \right) |f_0|^2 .
\end{align*}
The last inequality is due to \eqref{nsm4} and \eqref{nseta}. Hence by \eqref{nsm3} with $\nu = \mu \eta$, we have the estimate
\begin{align}
& \triangle_b (|d_b f|^2 + \mu \eta |f_0|^2) \geq \frac{1}{2} \left( |\nabla_b^f d_b f|^2 + \mu \eta |\nabla_b^f f_0|^2 \right) +\frac{1}{2} |\nabla_b^f d_b f|^2 \nonumber\\
& \quad \qquad + \left[ 2n- 5 \mu C_2' ( \frac{1}{R^2}+\frac{\sqrt{k}}{R}) \right] |f_0|^2 - 32 \left( k+ \frac{1}{\mu \eta} \right) |d_b f|^2 . \label{nsm5}
\end{align}
In order to deal with the first term on the right side, we need the following Schwarz inequalities:
\begin{align}
|d_b |d_b f|^2|^2 \leq & 4 \: |d_b f|^2 \: |\nabla_b^f d_b f|^2 \label{cs1} , \\
|d_b |f_0|^2|^2 \leq & 4 \: |f_0|^2 |\nabla_b^f f_0|^2 . \label{cs2}
\end{align}
If $|d_b f|(x) \neq 0$ and $|f_0|(x) \neq 0$, then at $x$, we have
\begin{align*}
&\frac{1}{2} \left( |\nabla_b^f d_b f|^2 + \mu \eta |\nabla_b^f f_0|^2 \right) \\
& \qquad \geq \ \frac{1}{8} \left( \frac{|d_b |d_b f|^2|^2}{|d_b f|^2} + \mu \eta \frac{|d_b |f_0|^2|^2}{|f_0|^2}+ \mu \frac{|d_b \eta|^2}{\eta} |f_0|^2 \right) \ - \frac{1}{8} \mu \frac{|d_b \eta|^2}{\eta} |f_0|^2 \\
& \qquad \geq \ \frac{1}{24} \frac{|d_b (|d_b f|^2 + \mu \eta |f_0|^2)|^2}{|d_b f|^2 + \mu \eta |f_0|^2} - \frac{\mu C_2'}{8} \frac{1}{R^2} |f_0|^2.
\end{align*}
Substituting this inequality into \eqref{nsm5}, we get \eqref{nsm6}. If $|d_b f|(x) =0$ (or $|f_0|(x)=0$), we can directly discard the nonnegative term $\frac{1}{2} |\nabla_b^f d_b f|^2$ (or $\frac{1}{2} \mu \eta |\nabla_b^f f_0|^2$) from \eqref{nsm5} and use the Schwarz inequality \eqref{cs1} (or \eqref{cs2}) to obtain \eqref{nsm6}.
\end{proof}
Let $x$ be a maximum point of $\eta F$ on $B_{2R}(x_0)$. If $x$ is not in the cut locus of $x_0$, then $\eta$ is smooth near $x$. If $x$ is in the cut locus of $x_0$, we may modify $\eta$ as follows. Since $(M, HM, J_b, \theta)$ is complete, there exists a length minimizing geodesic curve $\gamma: [0,1] \rightarrow M$ which joins $x_0$ and $x$. Let $\epsilon$ be a small positive number. Along $\gamma$, the point $x$ lies before the cut locus of $\gamma (\epsilon)$. This guarantees that the modified function $\tilde{r}(z) = d_c (z,\gamma(\epsilon)) + \epsilon$ is smooth in a neighborhood of $x$. Moreover, the triangle inequality implies that:
$$ r \leq \tilde{r}, \quad \text{and} \quad r(x) = \tilde{r} (x) . $$
Set $\tilde{\eta} = \psi (\frac{\tilde{r}}{R})$. Then $\tilde{\eta}$ is smooth near $x$ and
$$ \eta \geq \tilde{\eta}, \quad \text{and} \quad \eta(x) = \tilde{\eta}(x). $$
This means that $x$ is still a maximum point of $\tilde{\eta} F$. Hence, we may assume without loss of generality that $r$ is already smooth near $x$.
\begin{lem} \label{es1}
If $x$ is a maximum point of $\eta F$ on $B_{2R} (x_0)$ with $(\eta F)(x) \neq 0$, then at $x$, we have the estimate
\begin{equation} \label{nsm7}
0 \geq \left[ 2 \eta F - 34n ( k+ \frac{1}{\mu} ) \right] \frac{|d_b f|^2}{b^2_R-\rho^2 \circ f}+ \left[ 2n - 31 \mu C_2' ( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} ) \right] \frac{F}{\mu} .
\end{equation}
\end{lem}
\begin{proof}
It is obvious that $x$ is still a maximum point of $\ln(\eta F)$ on $B_{2R} (x_0)$.
Since $\triangle_b$ is a degenerate elliptic operator, the maximum principle implies that at $x$,
\begin{align}
0 & = d_b \ln(\eta F) = \frac{d_b \eta}{\eta} + \frac{d_b (|d_b f|^2 + \mu \eta |f_0|^2)}{|d_b f|^2 + \mu \eta |f_0|^2} +\frac{d_b (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f}, \label{nsm1}\\
0 & \geq \triangle_b \ln(\eta F) = \frac{\triangle_b \eta}{\eta} - \frac{|d_b \eta|^2}{\eta^2} + \frac{\triangle_b (|d_b f|^2 + \mu \eta |f_0|^2)}{|d_b f|^2 + \mu \eta |f_0|^2} \nonumber\\
& \qquad -\frac{|d_b (|d_b f|^2 + \mu \eta |f_0|^2)|^2}{(|d_b f|^2 + \mu \eta |f_0|^2)^2}+ \frac{\triangle_b (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f} +\frac{|d_b (\rho^2 \circ f)|^2}{(b^2_R- \rho^2 \circ f)^2}. \label{nsm2}
\end{align}
By Lemma \ref{l1}, \eqref{nsm2} becomes
\begin{align*}
0\geq &\ \frac{\triangle_b \eta}{\eta} - \frac{|d_b \eta|^2}{\eta^2} - \frac{23}{24} \frac{|d_b (|d_b f|^2 + \mu \eta |f_0|^2)|^2}{(|d_b f|^2 + \mu \eta |f_0|^2)^2}+\frac{|d_b (\rho^2 \circ f)|^2}{(b^2_R-\rho^2 \circ f)^2} \\
& \ + \frac{ [2n- 6 \mu C_2' ( \frac{1}{R^2}+\frac{\sqrt{k}}{R}) ] |f_0|^2 - 32( k+ \frac{1}{\mu \eta} ) |d_b f|^2}{|d_b f|^2 + \mu \eta |f_0|^2} + \frac{\triangle_b (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f} .
\end{align*}
Substituting \eqref{nsm1} into the above inequality and using the Schwarz inequality $(\alpha + \beta)^2 \leq 24 \alpha ^2 + \frac{24}{23} \beta^2$, we obtain
\begin{align*}
0 \geq &\ \frac{\triangle_b \eta}{\eta} - 24 \frac{|d_b \eta|^2}{\eta^2} - 32 \left(k + \frac{1}{\mu \eta} \right) \frac{|d_b f|^2 }{|d_b f|^2 + \mu \eta |f_0|^2} \\
&\quad + \left[2n- 6 \mu C_2' \left(\frac{1}{R^2}+\frac{\sqrt{k}}{R} \right)\right]\frac{|f_0|^2 }{|d_b f|^2 + \mu \eta |f_0|^2} +\frac{\triangle_b (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f}.
\end{align*}
By the estimates \eqref{nd1} and \eqref{nseta}, we have
\begin{align}
0 \geq &\ - 25 \frac{C_2'}{\eta} \: \left(\frac{1}{R^2} +\frac{\sqrt{k}}{R} \right) - 32 \left(k + \frac{1}{\mu \eta} \right) \frac{|d_b f|^2 }{|d_b f|^2 + \mu \eta |f_0|^2} \nonumber \\
&\quad + \left[2n - 6 \mu C_2' \left(\frac{1}{R^2}+\frac{\sqrt{k}}{R} \right)\right]\frac{|f_0|^2 }{|d_b f|^2 + \mu \eta |f_0|^2} +2 \frac{|d_b f|^2}{b^2_R- \rho^2 \circ f} . \nonumber
\end{align}
Hence multiplying both sides by $\eta F$, we conclude that
\begin{align}
0 \geq & -25 C_2' \: \left(\frac{1}{R^2}+\frac{\sqrt{k}}{R} \right) F - 32 \left( \eta k + \frac{1}{\mu} \right) \frac{|d_b f|^2}{b^2_R -\rho^2 \circ f} \nonumber \\
& \quad + \left[2n - 6 \mu C_2' (\frac{1}{R^2}+\frac{\sqrt{k}}{R}) \right] \frac{\eta |f_0|^2}{b^2_R-\rho^2 \circ f} + 2 \eta F \frac{|d_b f|^2}{b^2_R -\rho^2 \circ f}. \label{nsm8}
\end{align}
Finally, we rewrite \eqref{nf} as
$$\frac{ \eta |f_0|^2}{b^2_R -\rho^2 \circ f} = \frac{1}{\mu} (F- \frac{|d_b f|^2}{b^2_R -\rho^2 \circ f}) $$
and substitute it into the previous inequality. This procedure yields
\begin{align*}
0 &\geq \ \left[ 2n - 31 \mu C_2' ( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} ) \right] \frac{F}{\mu} \\
&\qquad + \left[ 2 \eta F - \frac{1}{\mu} \left( 2n - 6\mu C_2' ( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} ) \right) - 32 ( \eta k+ \frac{1}{\mu}) \right] \frac{|d_b f|^2}{b^2_R-\rho^2 \circ f} \\
&\geq \ \left[ 2n - 31 \mu C_2' ( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} ) \right] \frac{F}{\mu}+\left[ 2 \eta F - \frac{2 n}{\mu} - 32 ( k+ \frac{1}{\mu} ) \right] \frac{|d_b f|^2}{b^2_R-\rho^2 \circ f}.
\end{align*}
The last inequality is due to $0 \leq \eta \leq 1$. Since $n \geq 1$, we get \eqref{nsm7}.
\end{proof}
Now we present our main results.
\begin{thm} \label{nsg}
Let $(M, HM, J_b, \theta)$ be a noncompact complete $(2n+1)$-Sasakian manifold satisfying the CR sub-Laplace comparison property relative to a fixed point $x_0$ and
\begin{equation*}
Ric(X,X) \geq -k |X|^2
\end{equation*}
for all $X \in T_{1,0} M$ and some $k\geq 0$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Assume that $f: M \rightarrow N$ is a pseudoharmonic map. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$. For any $R>1$, set $b_R =2 \: \sup \: \{ \rho \circ f(x) | x \in B_{2R} (x_0) \}$ and $a=\frac{R^2}{1 + \sqrt{k} R}$. Then, on $B_R (x_0)$
\begin{equation} \label{nsge}
|d_b f|^2 + a |f_0|^2 \leq C_3 \: b_R^2 \: \left( \frac{1}{a} + k \right)
\end{equation}
where the constant $C_3$ depends only on the dimension of $M$ and on $C_1$.
\end{thm}
\begin{rmk}
Our auxiliary function \eqref{nf} for the maximum principle is slightly different from the one introduced in \cite{ckt}. In our case, we omit the variable $t$ in the auxiliary function. This seems to simplify the related estimates, even in the case of pseudoharmonic functions.
\end{rmk}
\begin{proof}
Let $\mu = \frac{n}{31 C_2'} \frac{R^2}{1+ \sqrt{k}R}= \frac{n}{31 C_2'} a$. We consider the auxiliary function $F$ given by \eqref{nf}. Let $x$ be a maximum point of $\eta F$ on $B_{2R} (x_0)$. We may assume $(\eta F) (x) \neq 0$ (otherwise, the estimate \eqref{n2} below is trivial).
Since $2n- 31 \mu C_2' (\frac{1}{R^2} + \frac{\sqrt{k}}{R}) = n >0$,
the last term on the right side of \eqref{nsm7} is positive.
Hence Lemma \ref{es1} yields
\begin{equation} \label{n2}
\max_{z \in B_{2R} (x_0)} \ (\eta F)(z) \leq 17n \left( k+\frac{1}{\mu}\right).
\end{equation}
Since $\eta(z)=1$ for $z \in B_{R}(x_0)$, this inequality asserts that on $B_{R}(x_0)$
\begin{equation*}
|d_b f|^2 + \mu |f_0|^2 \leq 17n (b_R^2- \rho^2 \circ f ) \left( k+\frac{1}{\mu} \right) \leq 17n b_R^2 \left( k+\frac{1}{\mu} \right).
\end{equation*}
Hence \eqref{nsge} can be obtained by choosing a proper constant $C_3$.
\end{proof}
The Reeb energy density is defined as the partial energy density $\frac{1}{2} |df(T)|^2$. From the sub-gradient estimate \eqref{nsge}, we can derive an estimate of the Reeb energy density for pseudoharmonic maps and obtain some vanishing results.
\begin{cor} \label{plr}
Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian manifold satisfying the CR sub-Laplace comparison property relative to a fixed point $x_0$ and
$$
Ric(X,X) \geq -k |X|^2
$$
for all $X \in T_{1,0} M$ and some $k \geq 0$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Assume that $f: M \rightarrow N$ is a pseudoharmonic map. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$. For any $R>1$, set $b_R =2 \: \sup \: \{ \rho \circ f(x) | x \in B_{2R} (x_0) \}$ and $a=\frac{R^2}{1 + \sqrt{k} R}$. Then, on $B_R (x_0)$
\begin{equation} \label{reeb}
|f_0|^2 \leq C_3 \: b_R^2 \left( \frac{2}{R^4} + \frac{3k}{R^2} + \frac{k\sqrt{k}}{R} \right) .
\end{equation}
In particular,
\begin{enumerate}[(i)]
\item if $Ric \geq 0$ (i.e. $k=0$) and the image of $f$ satisfies
\begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-2} \: \sup \: \{ \rho \circ f(x) | x \in B_{2R} (x_0)\} =0,
\end{equation*}
then $df(T)=0$.
\item if the pseudohermitian Ricci curvature of $M$ has a strictly negative lower bound (i.e. $k>0$) and the image of $f$ satisfies
\begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-\frac{1}{2}} \: \sup \: \{ \rho \circ f(x) | x \in B_{2R}(x_0) \} =0,
\end{equation*}
then $df(T)=0$.
\end{enumerate}
\end{cor}
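For the reader's convenience, we sketch how \eqref{reeb} follows from the sub-gradient estimate \eqref{nsge}. Dividing \eqref{nsge} by $a$ and substituting $\frac{1}{a} = \frac{1+\sqrt{k}R}{R^2}$, we obtain
\begin{equation*}
|f_0|^2 \leq C_3 \: b_R^2 \left( \frac{1}{a^2}+\frac{k}{a} \right) = C_3 \: b_R^2 \left( \frac{1}{R^4} + \frac{2\sqrt{k}}{R^3} + \frac{2k}{R^2} + \frac{k\sqrt{k}}{R} \right),
\end{equation*}
and the elementary inequality $\frac{2\sqrt{k}}{R^3} \leq \frac{1}{R^4} + \frac{k}{R^2}$ yields \eqref{reeb}. The vanishing statements (i) and (ii) then follow by letting $R \rightarrow \infty$ in \eqref{reeb}.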
The sub-gradient estimate \eqref{nsge} also gives a Liouville theorem for pseudoharmonic maps.
\begin{thm} \label{cse}
Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian manifold with nonnegative pseudohermitian Ricci curvature which satisfies the CR sub-Laplace comparison property relative to a fixed point $x_0 \in M$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Assume that $f: M \rightarrow N$ is a pseudoharmonic map. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$. For any $R>1$, set $b_R =2 \: \sup \: \{ \rho \circ f(x) | x \in B_{2R} (x_0) \}$. Then, on $B_R (x_0)$
\begin{equation*}
|d_b f|^2 + R^2 |f_0|^2 \leq C_3 \: \frac{b_R^2}{R^2}.
\end{equation*}
In particular, if the image of $f$ satisfies
\begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-1} \: \sup \: \{ \rho \circ f(x) | x \in B_{2R}(x_0) \} =0,
\end{equation*}
then $f$ is a constant map.
\end{thm}
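Theorem \ref{cse} is the special case $k=0$ of Theorem \ref{nsg}: in this case $a=R^2$, so \eqref{nsge} reads
\begin{equation*}
|d_b f|^2 + R^2 |f_0|^2 \leq C_3 \: \frac{b_R^2}{R^2} .
\end{equation*}
Moreover, if $R^{-1} \: \sup \: \{ \rho \circ f(x) | x \in B_{2R}(x_0) \} \rightarrow 0$ as $R \rightarrow \infty$, then $\frac{b_R}{R} \rightarrow 0$, and letting $R \rightarrow \infty$ gives $d_b f = 0$ and $f_0 = 0$ on all of $M$; hence $df=0$ and $f$ is constant.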
Since the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ satisfies the CR sub-Laplace comparison property, Theorem \ref{cse} can be applied to it.
\begin{cor} \label{hm1}
There is no nonconstant bounded pseudoharmonic map from the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ to a simply connected Riemannian manifold with nonpositive sectional curvature.
\end{cor}
\section{Appendix} \label{slh}
In this section, we will derive a Reeb energy density estimate for harmonic maps from Sasakian manifolds to Riemannian manifolds. We recall the definition of harmonic maps. Let $(M, HM, J_b, \theta)$ be a strictly pseudoconvex CR manifold, and let $\nabla^\theta$ be the Levi-Civita connection of $(M, g_\theta)$. Let $(N,h)$ be a Riemannian manifold, and $\hat{\nabla}$ its Levi-Civita connection. Suppose that $f: M \rightarrow N$ is a smooth map. Let $f^*TN$ be the pullback bundle and $\nabla^f$ the pullback connection. We can determine a connection $\nabla^{f, \theta}$ in $T^*M \otimes f^*TN$ by
$$
\nabla^{f, \theta}_X (\omega \otimes \xi )= \nabla^{ \theta}_X \omega \otimes \xi + \omega \otimes \nabla_X^f \: \xi$$
for any $X \in \Gamma(TM)$, $\omega \in \Gamma(T^*M)$ and $\xi \in \Gamma(f^*TN)$. The map $f$ is said to be harmonic if
$$
\tau^\theta (f;\theta,\hat{\nabla}) = \operatorname{trace}_{g_\theta} (\nabla^{f,\theta} df) =0.
$$
With respect to the local orthonormal frame $\{ \theta, \theta^\alpha, \theta^{\bar{\alpha}}\}$ in $T^*M \otimes \mathbb{C}$ and $\{ \xi_i \}$ in $TN$, we have
\begin{align} \label{b1}
\tau^\theta (f;\theta,\hat{\nabla}) = (f^i_{\alpha \bar{\alpha}} + f^i_{\bar{\alpha} \alpha} + f^i_{00} ) \xi_i.
\end{align}
Comparing with equation \eqref{ph}, we obtain
\begin{equation} \label{phr}
\tau^\theta (f;\theta,\hat{\nabla}) = \tau(f;\theta,\hat{\nabla}) + \nabla_T^f df(T).
\end{equation}
As above, we need a Bochner-type formula for harmonic maps and a special exhaustion function.
\begin{lem}
Let $f: M \rightarrow N$ be a smooth map. Then
\begin{align}
\frac{1}{2} \triangle |d f(T)|^2 =& \ | \nabla^f f_0 |^2 + \langle df(T), \nabla_T^f \; \tau^\theta (f;\theta,\hat{\nabla})\rangle + 2 f^i_0 f^j_{\alpha} f^k_{\bar{\alpha}} f^l_0 \hat{R}_{j \ kl}^{\ i} \nonumber\\
&\ +2( f^i_0 f^i_\beta A_{\bar{\beta} \bar{\alpha} , \alpha} + f^i_0 f^i_{\bar{\beta}} A_{\beta \alpha , \bar{\alpha}} + f^i_0 f^i_{\bar{\beta} \bar{\alpha}} A_{\beta \alpha} +f^i_0 f^i_{\beta \alpha} A_{\bar{\beta} \bar{\alpha}} ) , \label{bf3}
\end{align}
where $\triangle$ is the Laplacian operator in $(M, g_\theta)$.
\end{lem}
\begin{proof}
On the one hand, we notice that
\begin{align} \label{b2}
\frac{1}{2} \triangle |d f(T)|^2 =& \frac{1}{2} \triangle_b |d f(T)|^2 + \frac{1}{2} (f^i_0 f^i_0)_{00}= \frac{1}{2} \triangle_b |d f(T)|^2 + f^i_{00} f^i_{00} + f^i_0 f^i_{000} .
\end{align}
On the other hand, by \eqref{b1}, we have
\begin{align*}
\langle df(T), \nabla_T^f \; \tau^\theta (f;\theta,\hat{\nabla})\rangle =& \langle df(T), \nabla_T^f \; \tau(f;\theta,\hat{\nabla})\rangle+ \langle df(T), \nabla_T^f \nabla_T^f df(T) \rangle \\
=& \langle df(T), \nabla_T^f \; \tau(f;\theta,\hat{\nabla})\rangle+ f^i_0 f^i_{000} .
\end{align*}
Hence substituting the above equation and \eqref{bf2} into \eqref{b2}, we get \eqref{bf3}.
\end{proof}
\begin{lem} \label{sb1}
Let $(M, HM, J_b, \theta)$ be a Sasakian manifold, and $(N,h)$ a Riemannian manifold with nonpositive sectional curvature. If $f: M \rightarrow N$ is a harmonic map, then
\begin{equation}
\frac{1}{2} \triangle |d f(T)|^2 \geq | \nabla^f f_0 |^2 . \label{bte3}
\end{equation}
\end{lem}
The proof follows from \eqref{btep} and \eqref{bf3}.
\begin{dfn} \label{crch}
Let $(M, HM, J_b, \theta)$ be a Sasakian manifold with
$$Ric(X,X) \geq -k |X|^2$$
for any $X \in T_{1,0} M$, and some $k \geq 0$.
We say that $(M, HM, J_b, \theta)$ satisfies the CR Laplace comparison property relative to a fixed point $x_0 \in M$ if there exists a positive constant $C_4$ such that the Carnot-Carath\'eodory distance $r$ to $x_0$ satisfies
\begin{eqnarray}
\triangle r & \leq & C_4 \: (\frac{1}{r}+ \sqrt{k}) \\
|d r|_{g_\theta} & \leq & C_4
\end{eqnarray}
on $M \setminus ( cut(x_0) \cup \{ x_0 \} )$ where $r \geq 1$.
\end{dfn}
On the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$, the square of the Carnot-Carath\'eodory distance function $r$ to the origin has the following expression
\begin{equation}
[r(z,t)]^2 = \frac{\phi^2}{(\sin \phi)^2} ||z||^2
\end{equation}
where $||z||^2 = \sum_{\alpha =1}^{n} |z^\alpha|^2$, $\phi$ is the unique solution of $\chi(\phi) ||z||^2 =|t|$ in the interval $[0, \pi)$ and $\chi(\phi)= \frac{\phi}{(\sin \phi)^2} - \cot \phi$. See \cite{ckt, ctw} for details.
\begin{prp} \label{hp2}
On the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$, there exists a positive constant $C_4'$ such that the Carnot-Carath\'eodory distance $r$ to the origin $o$ satisfies
\begin{eqnarray}
\triangle r & \leq & \frac{C_4'}{r} \label{h2}\\
|d r|_{g_\theta}^2 & \leq & C_4' \label{h3}
\end{eqnarray}
on $M \setminus ( cut(o) \cup \{ o \} )$ where $r \geq 1$. Therefore, $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ satisfies the CR Laplace comparison property relative to the origin.
\end{prp}
\begin{proof}
We first calculate $T r$ and $TT r$ on $M \setminus ( cut(o) \cup \{ o \} )$. When $t>0$, we differentiate $\chi(\phi) ||z||^2 =|t|$ with respect to $t$ and use the expression for $\chi$. The result is
\begin{equation*}
\frac{\partial \phi}{\partial t} = \frac{1}{2||z||^2} \: \frac{(\sin \phi)^3}{\sin \phi - \phi \cos \phi}.
\end{equation*}
Therefore,
\begin{align*}
T r^2 & =\ \frac{\partial r^2}{\partial t} = \phi , \\
TT r^2 & =\ \frac{\partial^2 r^2}{\partial t^2} = \frac{1}{r^2} \frac{(\sin \phi)^5}{\phi^2 \: (\sin \phi - \phi \cos \phi)} .
\end{align*}
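The first identity, for instance, follows from the chain rule:
\begin{equation*}
T r^2 = \frac{d}{d \phi} \left( \frac{\phi^2}{(\sin \phi)^2} \right) ||z||^2 \: \frac{\partial \phi}{\partial t} = \frac{2 \phi (\sin \phi - \phi \cos \phi)}{(\sin \phi)^3} \: ||z||^2 \cdot \frac{1}{2||z||^2} \: \frac{(\sin \phi)^3}{\sin \phi - \phi \cos \phi} = \phi .
\end{equation*}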
Since $TT r^2 = 2r \: TTr + 2 |Tr|^2$, there exists a constant $\tilde{C_4}$ such that
\begin{equation} \label{tr}
|Tr| \leq \frac{\tilde{C_4}}{r}, \quad |TTr| \leq \frac{\tilde{C_4}}{r^3} .
\end{equation}
When $t<0$, similar calculations yield the same inequalities \eqref{tr}. When $t=0$, the estimates \eqref{tr} follow by continuity, since $r$ is smooth on $M \setminus ( cut(o) \cup \{ o \} )$. Hence the inequalities \eqref{tr} always hold on $M \setminus ( cut(o) \cup \{ o \} )$. From Proposition \ref{hp1}, there exists a constant $\tilde{C_4'}$ such that
\begin{equation} \label{ccde}
\triangle_b r \leq \frac{\tilde{C_4'}}{r}
\end{equation}
on $M \setminus ( cut(o) \cup \{ o \} )$.
Let $C_4'=1+\tilde{C_4}+\tilde{C_4}^2+\tilde{C_4'}$. Then
\begin{eqnarray*}
\triangle r & =& \triangle_b r + TTr \leq \frac{C_4'}{r} \\
|d r|^2 & = & |d_b r |^2 + (Tr)^2 \leq C_4'
\end{eqnarray*}
on $M \setminus ( cut(o) \cup \{ o \} )$ where $r \geq 1$.
\end{proof}
To derive the Reeb energy density estimate, we need an estimate analogous to \eqref{nd1}. Assume that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Let $\rho$ be the distance to a fixed point $y_0 \in N$.
If $f: M \rightarrow N$ is harmonic, the Hessian comparison theorem implies
\begin{equation} \label{nd2}
\triangle (\rho^2 \circ f) \geq 2|d f|^2.
\end{equation}
\begin{thm} \label{csh}
Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian manifold satisfying the CR Laplace comparison property relative to a fixed point $x_0$ and
$$Ric(X,X) \geq -k |X|^2$$
for any $X \in T_{1,0} M$ and some $k \geq 0$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Let $f: M \rightarrow N$ be a harmonic map. Let $\rho $ be the Riemannian distance to $y_0 = f(x_0)$. For any $R>1$, set $b_R =2 \: \sup \: \{ \rho \circ f(x) | x \in B_{2R} (x_0)\}$. Then, on $B_R (x_0)$
\begin{equation} \label{he}
|df(T)|^2 \leq C_6 \: b_R^2 \: \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right)
\end{equation}
where the constant $C_6$ depends only on $C_4$. Moreover,
\begin{enumerate}[(i)]
\item if $Ric \geq 0$ (i.e. $k=0$) and the image of $f$ satisfies
\begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-1} \: \sup \: \{ \rho \circ f(x) | x \in B_{2R}(x_0) \} =0,
\end{equation*}
then $df(T)=0$.
\item if the pseudohermitian Ricci curvature of $M$ has a strictly negative lower bound (i.e. $k>0$) and the image of $f$ satisfies
\begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-\frac{1}{2}} \: \sup \: \{ \rho \circ f(x) | x \in B_{2R} (x_0)\} =0,
\end{equation*}
then $df(T)=0$.
\end{enumerate}
\end{thm}
\begin{rmk}
In \cite{p}, R. Petit obtained a similar vanishing theorem for harmonic maps from compact Sasakian manifolds to Riemannian manifolds with nonpositive sectional curvature.
\end{rmk}
\begin{proof}
The choices of $\psi$ and $\eta$ are the same as in Section \ref{slp}. Since $(M, HM, J_b, \theta)$ satisfies the CR Laplace comparison property, $\eta$ satisfies
\begin{equation}
\begin{gathered}
\eta^{-1} |d \eta|^2 \leq \frac{C_5}{R^2} \\
\triangle \eta = \frac{\psi ''}{R^2} |d r|^2 + \frac{\psi '}{R} \triangle r \geq - C_5 \: \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right) \label{etah}
\end{gathered}
\end{equation}
\noindent on $M \setminus ( cut(x_0) \cup \{ x_0 \} )$. Here $C_5 $ depends only on $C_4$ and $C_2$.
Given $R >1$, we consider the function $G: M \rightarrow \mathbb{R}$, which is given by
$$ G(x) = \frac{|f_0|^2}{b_R^2-\rho^2 \circ f} (x) . $$
Let $x$ be a maximum point of $\eta G$ on $B_{2R} (x_0)$. If $x$ is in the cut locus of $x_0$, then we can modify $r$ as in Section \ref{slp}. Without loss of generality, assume that $r$ is smooth at $x$ and $ (\eta G)(x) \neq 0$. It is obvious that $x$ is still a maximum point of $\ln (\eta G)$ on $B_{2R} (x_0)$. Then the maximum principle asserts that at $x$,
\begin{align}
0 \ = d \ln (\eta G)= &\frac{d \eta}{\eta} + \frac{d |f_0|^2}{|f_0|^2} + \frac{d (\rho^2 \circ f)}{b^2_R- \rho^2 \circ f}, \label{hmp1}\\
0 \geq \triangle \ln (\eta G)= & \frac{\triangle \eta}{\eta} - \frac{|d \eta|^2}{\eta^2} + \frac{\triangle |f_0|^2}{|f_0|^2} - \frac{|d |f_0|^2|^2}{|f_0|^4} \nonumber \\
& \qquad+\frac{\triangle (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f} +\frac{|d (\rho^2 \circ f)|^2}{(b^2_R- \rho^2 \circ f)^2}. \label{hmp2}
\end{align}
Applying \eqref{bte3} and the inequality $|d |f_0|^2|^2 \leq 4 \: |f_0|^2 |\nabla^f f_0|^2$ to \eqref{hmp2},
we have
\begin{equation*}
0 \geq \frac{\triangle \eta}{\eta} - \frac{|d \eta|^2}{\eta^2} -\frac{1}{2} \frac{|d |f_0|^2|^2}{|f_0|^4} +\frac{|d (\rho^2 \circ f)|^2}{(b^2_R- \rho^2 \circ f)^2}+\frac{\triangle (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f}.
\end{equation*}
With the aid of the Schwarz inequality, we can use \eqref{hmp1} to estimate the third and fourth terms. The result is
\begin{equation*}
0 \geq \frac{\triangle \eta}{\eta} -2\: \frac{|d \eta|^2}{\eta^2} +\frac{\triangle (\rho^2 \circ f)}{b^2_R-\rho^2 \circ f}.
\end{equation*}
Therefore combining with \eqref{nd2} and \eqref{etah}, we conclude that at $x$,
\begin{equation*}
\frac{|d f|^2}{b^2_R-\rho^2 \circ f} \leq \frac{3 C_5}{ 2\eta } \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right).
\end{equation*}
Hence by $|f_0|^2 \leq |d f|^2$, we can get an estimate of $\eta G$:
\begin{equation*}
\max_{z \in B_{2R}(x_0)} \frac{\eta |f_0|^2}{b^2_R-\rho^2 \circ f} \: (z) = (\eta G)(x) = \frac{\eta |f_0|^2}{b^2_R-\rho^2 \circ f} \: (x) \leq \frac{3 C_5}{2} \: \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right).
\end{equation*}
This yields for any $z \in B_R (x_0)$,
\begin{equation*}
|f_0|^2 \: (z) \leq \frac{3 C_5}{2} \; (b^2_R-\rho^2 \circ f (z)) \left( \frac{1}{R^2}+ \frac{\sqrt{k}}{R} \right) \leq \frac{3 C_5}{2} \: b^2_R \: \left(\frac{1}{R^2}+ \frac{\sqrt{k}}{R}\right).
\end{equation*}
Let $C_6= \frac{3}{2} C_5$. The above inequality yields \eqref{he}. The remaining assertions follow from the estimate \eqref{he}.
\end{proof}
The relation \eqref{phr} shows that if $df(T) =0$, then $f$ is harmonic if and only if it is pseudoharmonic. Therefore, Theorem \ref{cse} yields the following Liouville theorem.
\begin{cor}
Let $(M, HM, J_b, \theta)$ be a noncompact complete Sasakian manifold with nonnegative pseudohermitian Ricci curvature which satisfies both the CR sub-Laplace comparison property and the CR Laplace comparison property relative to a fixed point $x_0 \in M$. Suppose that $(N,h)$ is a simply connected Riemannian manifold with nonpositive sectional curvature. Assume that $f: M \rightarrow N$ is a harmonic map. Let $\rho$ be the Riemannian distance to $y_0 = f(x_0)$. If the image of $f$ satisfies
\begin{equation*}
\overline{\lim_{R \rightarrow \infty}} R^{-1} \: \sup \: \{ \rho \circ f(x) | x \in B_{2R}(x_0) \} =0,
\end{equation*}
then $f$ is a constant map.
\end{cor}
Proposition \ref{hp1} and Proposition \ref{hp2} state that the Heisenberg group satisfies both the CR sub-Laplace comparison property and the CR Laplace comparison property relative to the origin.
\begin{cor} \label{hm2}
There is no nonconstant bounded harmonic map from the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ to a simply connected Riemannian manifold with nonpositive sectional curvature.
\end{cor}
\begin{rmk}
If $n \geq 2$, then the Levi-Civita connection of the Heisenberg group $(\mathbb{H}^n, H \mathbb{H}^n, J_b, \theta)$ does not have nonnegative Ricci curvature. Thus Corollary \ref{hm2} cannot be derived from the results in \cite{c}.
\end{rmk}
\end{document} |
\begin{document}
\title[Regularity]{Regularity properties of some perturbations of non-densely defined operators with applications}
\author{Deliang Chen}
\address{Department of Mathematics, Shanghai Jiao Tong University, Shanghai 200240, People's Republic of China}
\email{[email protected]}
\thanks{Part of this work was done at East China Normal University. The author would like to thank Shigui Ruan, Ping Bi and Dongmei Xiao for their useful discussions and encouragement. The author is grateful to the referee(s) for useful comments and suggestions and particularly pointing out lemma \ref{lem:presentation} and a mistake in theorem \ref{thm:final}, which improved significantly the presentation of the original manuscript.}
\subjclass[2010]{Primary 47A55, 34D10; Secondary 34K12, 47N20, 47D62}
\keywords{regularity, perturbation, non-densely defined operators, critical growth bound, essential growth bound, integrated semigroup, age-structured population model}
\begin{abstract}
This paper is to study some conditions on semigroups, generated by some class of non-densely defined operators in the closure of its domain, in order that certain bounded perturbations preserve some regularity properties of the semigroup such as norm continuity, compactness, differentiability and analyticity. Furthermore, we study the critical and essential growth bound of the semigroup under bounded perturbations. The main results generalize the corresponding results in the case of Hille-Yosida operators. As an illustration, we apply the main results to study the asymptotic behaviors of a class of age-structured population models in $ L^p $ spaces ($ 1 \leq p < \infty $).
\end{abstract}
\maketitle
\setcounter{tocdepth}{2}
\section{Introduction}
The main goal of this paper is to study the preservation of the regularity properties of some class of non-densely defined operators under bounded perturbations. Let $A: D(A) \subset X \rightarrow X$ be a linear operator on some Banach space $X$ and $B$ a perturbing linear operator. Assume that $A$ has some good regularity properties. Under which conditions does $A + B$ retain the properties of $A$? When $A$ is the generator of a $C_0$ semigroup $T_{A}$, or equivalently $A$ is a Hille--Yosida operator and densely defined, i.e., $\overline{D(A)} = X$, many classes of operators $B$ allow $A+B$ to generate a $C_0$ semigroup $T_{A+B}$, e.g., when $B$ is a bounded operator, a Desch--Schappacher perturbation, or a Miyadera--Voigt perturbation; see \cite{EN00}. If $T_A$ has higher regularity properties, such as immediate/eventual norm continuity, immediate/eventual compactness, immediate/eventual differentiability and analyticity, then one may naturally ask for which classes of $B$ these properties are preserved by $T_{A+B}$. When $B$ is a bounded operator, Nagel and Piazzera \cite{NP98} gave a unified treatment of the problem and found additional conditions assuring the permanence of these regularities. In particular, immediate norm continuity, immediate compactness and analyticity are stable under bounded perturbation \cite{NP98, EN00}. See also \cite{BP01, Pia99} for the case when $B$ belongs to some class of Miyadera--Voigt perturbations. Differentiability, however, does not behave in this way; see \cite{Ren95} for a counterexample. Pazy \cite{Paz68} gave a condition assuring the permanence of differentiability under bounded perturbation, and Iley \cite{Ile07} pointed out that this condition is also necessary. M{\'a}trai \cite{Mat08a} showed that immediate norm continuity is preserved under Desch--Schappacher perturbation and Miyadera--Voigt perturbation.
However, in many applications, the operator $A$ may not be densely defined, or may even fail to be a Hille--Yosida operator (see, e.g., \cite{DPS87, PS02, MR07, DMP10}); see also \autoref{model}. Let $A$ be a Hille--Yosida operator \cite{DPS87, ABHN11}, $X_0 := \overline{D(A)}$, $A_0 := A_{X_0}$, where $A_{X_0}$ denotes the part of $A$ in $X_0$, i.e.,
\[
A_{X_0}x = Ax,~~ x \in D(A_{X_0}) := \{x \in D(A): Ax \in X_0\}.
\]
It is well known that $A_0$ generates a $C_0$ semigroup $T_{A_0}$ in $X_0$, and $A+B$ is also a Hille--Yosida operator if $B \in \mathcal{L}(X_0, X)$ \cite{KH89}. A natural question may be asked: Are the regularities of $T_{(A + B)_0}$ the same as $T_{A_0}$? B{\'a}tkai, Maniar and Rhandi \cite{BMR02} dealt with this problem by using extrapolation theory and obtained similar results as in \cite{NP98}.
Magal and Ruan \cite{MR07} studied a more general class of non-densely defined operators which in this paper are called \emph{MR operators} and \emph{quasi Hille--Yosida operators} (see \autoref{def:MR} and \autoref{def:pHY}). These operators turn out to be important for the study of certain abstract Cauchy problems, such as age-structured population models, parabolic differential equations and delay equations (see, e.g., \cite{PS02, MR07, DMP10, MR18, Che18g}). Let $A$ be an MR operator (resp. quasi Hille--Yosida operator) and $T_{A_0}$ the $C_0$ semigroup generated by $A_0$ in $X_0$. It was shown in \cite{MR07, Thi08} that $A$ is stable under bounded perturbations, that is, $A+B$ is still an MR operator (resp. quasi Hille--Yosida operator) for all $B \in \mathcal{L}(X_0, X)$. We are interested in the conditions under which $T_{(A+B)_0}$, generated by $(A+B)_0$ in $X_0$, preserves the regularity properties of $T_{A_0}$. The first part of the paper addresses this problem and obtains results analogous to, and generalizing, those in \cite{NP98, BMR02}, i.e., \emph{norm continuity}, \emph{compactness}, \emph{differentiability} and \emph{analyticity} (see \autoref{regper}). We use the method developed in \cite{NP98, BMR02} and the theory of integrated semigroups.
The second part of the paper studies the stability of the \emph{critical growth bound} and the \emph{essential growth bound} (see \autoref{def:crit} and \autoref{def:ess}) of a $C_0$ semigroup generated by the part of an MR operator in the closure of its domain, under certain bounded perturbations.
The critical spectrum was introduced independently by Nagel and Poland \cite{NP00} and Blake \cite{Bla01}. They used this notion to obtain a partial spectral mapping theorem, which characterizes the stability of $C_0$ semigroups very well; see \cite{NP00} for details. Brendle, Nagel and Poland \cite{BNP00} studied the stability of the critical growth bound of a $C_0$ semigroup under a class of Miyadera--Voigt perturbations. In particular, under appropriate assumptions, they obtained a partial spectral mapping theorem for the perturbed semigroup. A similar result was obtained by Boulite, Hadd and Maniar \cite{BHM05} for Hille--Yosida operators. The stability of the essential growth bound was considered by many authors, e.g., Voigt \cite{Voi80} and Andreu, Mart\'{i}nez and Maz\'{o}n \cite{AMM91} (generators of $C_0$ semigroups), Thieme \cite{Thi97} (Hille--Yosida operators), and Ducrot, Liu and Magal \cite{DLM08} (quasi Hille--Yosida operators). Such results have wide applications, e.g., to the stability of equilibria, the existence of center manifolds and Hopf bifurcation; see \cite{EN00, BNP00, ABHN11, MR09, MR09a}. See also \cite{Sbi07, Bre01} for an approach based on the resolvent characterization in Hilbert spaces, and \cite[Sections 2 and 3]{MS16} for some new partial spectral mapping theorems in an abstract framework. We consider the perturbation of the critical and essential growth bounds in the case of MR operators and give a unified treatment of the two problems (see \autoref{criess}). Our proof is close to that of \cite{BNP00}.
Simplified versions of our main results in \autoref{regper} and \autoref{criess} are summarized below; see those sections for more detailed statements and \autoref{pre} for definitions and notation.
For a linear operator $ A: D(A) \subset X \to X $, let $X_0 := \overline{D(A)}$ and $A_0 := A_{X_0}$; $ T_{A_0} $ denotes the $ C_0 $ semigroup (if it exists) generated by $ A_0 $.
\begin{thmA}[norm continuity and compactness]
Let $A$ be an MR operator (see \autoref{def:MR}), $L \in \mathcal{L}(X_0, X)$.
\begin{enumerate}[(a)]
\item Assume $LT_{A_0}$ is norm continuous on $(0, \infty)$. Then, $T_{A_0}$ is eventually (resp. immediately) norm continuous if and only if $T_{(A+L)_0}$ is eventually (resp. immediately) norm continuous.
\item Assume $LT_{A_0}$ is norm continuous and compact on $(0, \infty)$. Then, $T_{A_0}$ is eventually (resp. immediately) compact if and only if $T_{(A+L)_0}$ is eventually (resp. immediately) compact.
\item Suppose $A$ is a quasi Hille--Yosida operator (see \autoref{def:pHY}) and $LT_{A_0}$ is compact on $(0, \infty)$. Then, $T_{A_0}$ is eventually norm continuous (resp. eventually compact) if and only if $T_{(A+L)_0}$ is eventually norm continuous (resp. eventually compact); the result also holds when ``eventually'' is replaced by ``immediately''.
\end{enumerate}
\end{thmA}
The following result can be proved quite simply using the corresponding resolvent characterizations.
\begin{thmA}[differentiability and analyticity]
\begin{enumerate}[(a)]
\item Let $A$ be a Hille--Yosida operator. Then, $T_{(A+L)_0}$ is eventually differentiable for all $L \in \mathcal{L}(X_0, X)$ if and only if $A_0$ satisfies the \emph{Pazy--Iley condition} (i.e., condition (b) or (c) of \autoref{thm:PI}).
\item If $A$ is a $p$-quasi Hille--Yosida operator (see \autoref{def:pHY}) and $A_0$ is a Crandall--Pazy operator satisfying
\[
\|R(iy, A_0)\|_{\mathcal{L}(X_0)} = O(|y|^{-\beta}),~ |y| \rightarrow \infty, ~ \beta > 1- \frac{1}{p},
\]
then $(A+L)_0$ is still a Crandall--Pazy operator (see, e.g., \cite{Ile07, CP69}) for any $L \in \mathcal{L}(X_0, X)$.
\item Let $A$ be an MR operator (see \autoref{def:MR}). Then, $T_{A_0}$ is analytic if and only if for any $L \in \mathcal{L}(X_0,X)$, $T_{(A+L)_0}$ is analytic. In particular, if $A$ is an almost sectorial operator (see \autoref{def:almost}), so is $A+L$ for any $L \in \mathcal{L}(X_0,X)$.
\end{enumerate}
\end{thmA}
Let $ \omega_{\mathrm{crit}}(T_{A_0}) $ and $ \omega_{\mathrm{ess}}(T_{A_0}) $ denote the \emph{critical growth bound} and the \emph{essential growth bound} of $ T_{A_0} $ respectively (see \autoref{def:crit} and \autoref{def:ess}).
\begin{thmA}\label{thm:C}
\begin{enumerate}[(a)]
\item Let $A$ be an MR operator (see \autoref{def:MR}) and $L \in \mathcal{L}(X_0,X)$.
If $LT_{A_0}$ is norm continuous on $(0,\infty)$, then $\omega_{\mathrm{crit}}(T_{(A+L)_0}) = \omega_{\mathrm{crit}}(T_{A_0})$.
If $LT_{A_0}$ is norm continuous and compact on $(0,\infty)$, then $\omega_{\mathrm{ess}}(T_{(A+L)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$.
\item If $A$ is a quasi Hille--Yosida operator (see \autoref{def:pHY}) and $LT_{A_0}$ is compact on $(0,\infty)$, then $\omega_{\mathrm{ess}}(T_{(A+L)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$, $\omega_{\mathrm{crit}}(T_{(A+L)_0}) = \omega_{\mathrm{crit}}(T_{A_0})$.
\end{enumerate}
\end{thmA}
\begin{rmk}
\begin{enumerate}[(a)]
\item The above results are known, at least for Hille--Yosida operators; see, e.g., \cite{BMR02, BHM05, Thi97}.
\item \autoref{thm:C} (b) is essentially due to \cite{DLM08}, but the result is strengthened to the equality $\omega_{\mathrm{ess}}(T_{(A+L)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$; moreover, our proof in some sense simplifies that of \cite{DLM08}.
\end{enumerate}
\end{rmk}
As an illustration, in the third part of this paper (see \autoref{model}) we apply the main results of \autoref{regper} and \autoref{criess} to study the asymptotic behavior of a class of age-structured population models in $ L^p $ ($ 1 \leq p < \infty $). This problem was investigated extensively by many authors in the $ L^1 $ case; see, e.g., \cite{Web85, Web08, Thi91, Thi98, Rha98, BHM05, MR09a}. It seems that the $ L^p $ ($ p > 1 $) case was first investigated in \cite{MR07}. As a motivation, in control theory (and approximation theory) age-structured population models can be considered as boundary control systems, and in this case the state space is usually taken to be $ L^p $ ($ p > 1 $); see, e.g., \cite{CZ95}. Indeed, in order to describe the asymptotic behavior of the age-structured population model \eqref{equ:age} in $ L^p $ ($ p > 1 $), the results given in \autoref{regper} and \autoref{criess} are necessary. Concrete examples are given in \autoref{exa:model} to illustrate the different cases of the main result (\autoref{thm:age}).
The paper is organized as follows. In Section 2, we recall definitions and results about integrated semigroups, some classes of non-densely defined operators (with a detailed summary of their basic properties), the critical spectrum and the essential spectrum. In Section 3, we consider the regularity of $S_A \diamond V$ (see Section 2 for the meaning of this symbol). In Section 4, we deal with the regularity properties of the perturbed semigroups generated by the parts of MR operators in the closures of their domains. In Section 5, we study the perturbation of the critical and essential growth bounds. Section 6 contains an application of our results to a class of age-structured population models in $ L^p $. In the final section, we give a relatively bounded perturbation result for MR operators together with the associated perturbed regularities, and some comments on how the results can be applied to (nonlinear) differential equations.
\section{Preliminaries}\label{pre}
In this section, we recall the definitions and results needed in the sequel, concerning integrated semigroups, some classes of non-densely defined operators, and the critical and essential spectra of $ C_0 $ semigroups.
\subsection{Integrated semigroup}
Integrated semigroups were introduced by Arendt \cite{Are87a}; a systematic treatment based on Laplace transform techniques is given in \cite{ABHN11}.
Let $X$ and $Z$ be Banach spaces. Denote by $\mathcal{L}(X, Z)$ the space of all bounded linear operators from $X$ into $Z$, and write $\mathcal{L}(X) := \mathcal{L}(X, X)$. Let $A:~ D(A) \subset X \rightarrow X$ be a linear operator and assume $\rho(A) \neq \emptyset$. If $Z \hookrightarrow X$ (i.e., $Z \subset X$ and the embedding is continuous), $A_Z$ denotes the part of $A$ in $Z$, i.e.,
\[
A_{Z}x = Ax,~~ x \in D(A_{Z}) := \{x \in D(A) \cap Z: Ax \in Z\}.
\]
By the closed graph theorem, $A_Z$ is a closed operator in $Z$ since $A$ is closed. The following lemma describes the relationship between $A$ and $A_Z$; see \cite[Proposition B.8, Lemma 3.10.2]{ABHN11}.
\begin{lem}\label{partop}
Let $Z \hookrightarrow X$.
\begin{enumerate}[(a)]
\item If $\mu \in \rho(A)$ and $R(\mu, A)Z \subset Z$, then $D(A_Z) = R(\mu, A)Z$, and $R(\lambda, A)|_Z = R(\lambda, A_Z), ~ \forall \lambda \in \rho(A)$.
\item If $D(A) \subset Z$, then $R(\lambda, A)Z \subset Z, ~\forall \lambda \in \rho(A)$ and $\rho(A) = \rho(A_Z)$.
\end{enumerate}
\end{lem}
\begin{defi}[integrated semigroup \cite{ABHN11}]
Let $A$ be an operator on a Banach space $X$. We call $A$ the generator of a (once, non-degenerate) integrated semigroup if there exist $\omega \geq 0$ and a strongly continuous function $S: [0, \infty) \rightarrow \mathcal{L}(X)$ satisfying $\|\int_0^t S(s)~\mathrm{d}s \| \leq Me^{\omega t}$ ($t \geq 0$) for some constant $ M > 0 $, such that $(\omega, \infty) \subset \rho(A)$ and
\begin{equation}\label{equ:int}
R(\lambda, A)x = \lambda \int_0^\infty e^{-\lambda s}S(s)x~\mathrm{d}s, ~\forall \lambda > \omega, ~ \forall x \in X.
\end{equation}
In this case, $S$ is called the (once non-degenerate) \emph{integrated semigroup} generated by $A$.
\end{defi}
An equivalent characterization is the following; see \cite[Proposition 3.2.4]{ABHN11}.
\begin{lem}\label{def:equ}
Let $S: [0, \infty) \rightarrow \mathcal{L}(X)$ be a strongly continuous function satisfying $\|\int_0^t S(s)~\mathrm{d}s \| \leq Me^{\omega t} (t \geq 0)$, for some $M,~ \omega\geq 0$. Then, the following assertions are equivalent.
\begin{enumerate}[(a)]
\item There exists an operator $A$ such that $(\omega, \infty) \subset \rho(A)$, and \eqref{equ:int} holds.
\item For $s, ~t \geq 0$,
\begin{equation}\label{equ:int:11}
S(t)S(s) = \int_0^t(S(r+s)-S(r))~\mathrm{d}r,
\end{equation}
and $S(t)x = 0$ for all $t \geq 0$ implies $x=0$.
\end{enumerate}
\end{lem}
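For orientation, we recall the standard densely defined example: if $A$ generates a $C_0$ semigroup $T$ on $X$, then $S(t)x := \int_0^t T(s)x~\mathrm{d}s$ defines the integrated semigroup generated by $A$. Indeed, by the semigroup property and Fubini's theorem,
\begin{align*}
S(t)S(s)x & = \int_0^t \int_0^s T(r)T(u)x~\mathrm{d}u~\mathrm{d}r = \int_0^t \int_r^{r+s} T(v)x~\mathrm{d}v~\mathrm{d}r \\
& = \int_0^t (S(r+s) - S(r))x~\mathrm{d}r,
\end{align*}
which is \eqref{equ:int:11}; moreover, $S(t)x = 0$ for all $t \geq 0$ forces $x = T(0)x = 0$, and integration by parts gives $\lambda \int_0^\infty e^{-\lambda t}S(t)x~\mathrm{d}t = \int_0^\infty e^{-\lambda t}T(t)x~\mathrm{d}t = R(\lambda, A)x$ for $\lambda$ larger than the growth bound of $T$, which is \eqref{equ:int}.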
For some basic properties of integrated semigroups, see \cite[Section 3.2]{ABHN11}. From now on, we set
\[
X_0 := \overline{D(A)}, ~A_0 := A_{X_0}.
\]
Note that, by \autoref{partop}, $\rho(A) = \rho(A_0)$.
\begin{lem}\label{partInt}
If $A_0$ generates a $C_0$ semigroup $T_{A_0}$ in $X_0$, then $A$ generates an integrated semigroup $S_A$ in $X$. Furthermore, the following hold.
\begin{enumerate}[(a)]
\item $T_{A_0}(t)x = \dfrac{d}{dt}S_A(t)x,~\forall x \in X_0$;
\item $S_A$ can be represented as follows:
\begin{align}
S_A(t)x & = (\mu - A_0) \int_0^tT_{A_0}(s)R(\mu, A)x~\mathrm{d}s \notag \\
& = \mu \int_0^tT_{A_0}(s)R(\mu, A)x~\mathrm{d}s - T_{A_0}(t)R(\mu, A)x + R(\mu, A)x, ~\forall x \in X,\label{equ:biaodashi}
\end{align}
for all $\mu \in \rho(A)$.
\item $S_A$ is exponentially bounded, i.e., $\|S_A(t)\| \leq M e^{\omega t}, ~ \forall t \geq 0$, for some constants $M \geq 0$ and $\omega \in \mathbb{R}$;
\item $T_{A_0}(t)S_{A}(s) = S_{A}(t+s) - S_A(t)$, for all $t,s \geq 0$;
\item $T_{A_0}(\cdot)x$ is continuously differentiable on $[t_0,\infty)$ for any $x \in D(A)$ if and only if $S_A(\cdot)x$ is continuously differentiable on $[t_0,\infty)$ for any $x \in X$, where $t_0 \geq 0$.
\end{enumerate}
\end{lem}
\begin{proof}
It suffices to show (b), which follows directly from the definition of integrated semigroups. The other assertions follow from (b) and \autoref{def:equ}.
\end{proof}
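For the reader's convenience, we indicate the computation behind (b). Note that $R(\mu, A)x \in D(A) \subset X_0$ and $R(\lambda, A_0) = R(\lambda, A)|_{X_0}$ by \autoref{partop}. Hence, for $\lambda$ sufficiently large, the Laplace transform of the right-hand side of \eqref{equ:biaodashi} is
\[
\frac{\mu}{\lambda}R(\lambda, A)R(\mu, A)x - R(\lambda, A)R(\mu, A)x + \frac{1}{\lambda}R(\mu, A)x = \frac{1}{\lambda}\big[(\mu - \lambda)R(\lambda, A)R(\mu, A)x + R(\mu, A)x\big] = \frac{1}{\lambda}R(\lambda, A)x,
\]
by the resolvent identity $R(\lambda, A) - R(\mu, A) = (\mu - \lambda)R(\lambda, A)R(\mu, A)$. Comparing with \eqref{equ:int} and using the uniqueness of Laplace transforms yields \eqref{equ:biaodashi}.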
\subsection{Non-densely defined operator}\label{nddo}
Here we discuss a general class of non-densely defined operators (i.e., operators with $\overline{D(A)} \neq X$) developed by Magal and Ruan \cite{MR07}. In view of the importance of these operators, and for ease of reference, we call them MR operators and $p$-quasi Hille--Yosida operators below.
\begin{defi}[MR operator \cite{MR07}]\label{def:MR}
We call a closed linear operator $A$ an \textbf{MR operator}, if the following two conditions hold.
\begin{enumerate}[(a)]
\item $A_0$ generates a $C_0$ semigroup $T_{A_0}$ in $X_0$; equivalently, $A_0$ is a Hille--Yosida operator in $X_0$ and $\overline{D(A_0)} = X_0$. Then $A$ generates an integrated semigroup $S_A$ in $X$ (\autoref{partInt}).
\item There exists an increasing function $\delta: ~[0,\infty) \rightarrow [0,\infty)$ such that $\delta(t) \rightarrow 0$, as $t \rightarrow 0$, and for any $f \in C^1(0,\tau; X), ~\tau > 0$,
\begin{equation}\label{equ:guji}
\left\|\frac{d}{dt} \int_0^tS_A(t-s)f(s)~\mathrm{d}s\right\| \leq \delta(t)\sup\limits_{s\in [0,t]}\|f(s)\|, ~\forall t \in [0,\tau].
\end{equation}
\end{enumerate}
\end{defi}
In the following, we set
\[
(S_A * f)(t) := \int_0^tS_A(t-s)f(s)~\mathrm{d}s, ~~(S_A \diamond f)(t) := \frac{d}{dt}(S_A * f)(t).
\]
We collect some basic properties of MR operators, essentially due to \cite{MR07, MR09}; we include a direct proof for the reader's convenience.
\begin{lem}\label{lem:MR}
Let $A$ be an MR operator.
\begin{enumerate}[(a)]
\item \label{MRaa} $S_A$ is norm continuous on $[0,\infty)$ and $\|S_A(t)\| \leq \delta(t)$.
\item $\|R(\lambda, A)\| \rightarrow 0$, as $\lambda \rightarrow \infty, \lambda \in \mathbb{R}$; particularly $ S_A(t)x = \lim_{\mu \to \infty}\int_0^tT_{A_0}(s)\mu R(\mu, A)x~\mathrm{d}s $.
\item For every $f \in C(0,\tau; X)$, $(S_A*f)(t) \in D(A)$, $ t \mapsto (S_A*f)(t) $ is differentiable, $(S_A \diamond f)(t)\in X_0$, $t \mapsto (S_A \diamond f)(t)$ is continuous, and \eqref{equ:guji} still holds. Furthermore, the following identities hold.
\begin{align}
(S_A \diamond f)(t) & = \lim_{\mu \rightarrow \infty} \int_0^tT_{A_0}(t-s) \mu R(\mu, A)f(s)~\mathrm{d}s \label{equ:sf1}\\
& = T_{A_0}(t-s)(S_A \diamond f)(s) + (S_A \diamond f(s + \cdot))(t-s) \label{equ:sf2}\\
& = \lim_{h \rightarrow 0} \frac{1}{h} \int_0^tT_{A_0}(t-s)S_A(h)f(s)~\mathrm{d}s. \label{equ:sf3}
\end{align}
\end{enumerate}
\end{lem}
\begin{proof}
(a) To show $\|S_A(t)\| \leq \delta(t)$, it suffices to take $ f(s) \equiv x $ in \eqref{equ:guji}. Then, by \autoref{partInt} (d), $S_A$ is norm continuous on $[0,\infty)$.
(b) Note that for $ \lambda > 0 $ and $ \varepsilon = \lambda^{-1/2} $, we have
\begin{align*}
\|R(\lambda, A)\| & = \| \lambda \int_{0}^{\infty} e^{-\lambda s}S_{A}(s) ~\mathrm{d}s \| \\
& \leq \lambda \| \int_{0}^{\varepsilon} e^{-\lambda s}S_{A}(s) ~\mathrm{d}s \| + \lambda \| \int_{\varepsilon}^{\infty} e^{-\lambda s}S_{A}(s) ~\mathrm{d}s \| \\
& \leq \delta(\varepsilon) + \lambda \| \int_{\varepsilon}^{\infty} e^{-\lambda s} \int_{0}^{s} S_{A}(r) ~\mathrm{d}r ~\mathrm{d} s \| + \lambda \| e^{-\lambda\varepsilon} \int_{0}^{\varepsilon} S_{A}(r) ~\mathrm{d}r \|\\
& \quad \to 0, ~\text{as}~ \lambda \to \infty.
\end{align*}
The representation of $ S_{A} $ now immediately follows from \eqref{equ:biaodashi}.
(c) For the first statement, see \cite{MR07, Thi08}. We only verify the identities \eqref{equ:sf1}--\eqref{equ:sf3}. Since
\[
R(\mu, A_0) (S_A \diamond f)(t) = (S_A \diamond R(\mu, A) f)(t) = \int_{0}^{t} T_{A_0} (t - s) R(\mu, A) f(s) ~\mathrm{d} s,
\]
\eqref{equ:sf1} follows, since $ \mu R(\mu, A_0)x \to x $ as $ \mu \to \infty $ for $ x \in X_0 $. \eqref{equ:sf2} follows from \eqref{equ:sf1} and standard properties of convolution. Let us consider \eqref{equ:sf3}:
\begin{align*}
(S_A \diamond f)(t) & = \lim_{h \rightarrow 0} \frac{1}{h} ((S_A*f)(t+h) - (S_A*f)(t)) \\
& = \lim_{h \rightarrow 0} \frac{1}{h} [(S_A(h+\cdot)*f)(t) - (S_A*f)(t) + (S_A*f(t+\cdot))(h)] \\
& = \lim_{h \rightarrow 0} \frac{1}{h} [(S_A(h+\cdot) - S_A(\cdot))*f](t) \quad \text{(by (a))} \\
& = \lim_{h \rightarrow 0} \frac{1}{h} (T_{A_0}(\cdot)S_A(h)*f)(t). \quad \text{(by \autoref{partInt} (d))}
\end{align*}
This completes the proof.
\end{proof}
Equation \eqref{equ:sf2} will be frequently used in our proof.
We now turn to another important class of MR operators.
\begin{defi}[$p$-quasi Hille--Yosida operator \cite{MR07}]\label{def:pHY}
We call a closed linear operator $A$ a \textbf{$p$-quasi Hille--Yosida operator} ($p \geq 1$), if the following two conditions hold.
\begin{enumerate}[(a)]
\item $A_0$ generates a $C_0$ semigroup $T_{A_0}$ in $X_0$. Then, $A$ generates an integrated semigroup $S_A$ in $X$.
\item There exist $\widehat{M} \geq 0, ~\widehat{\omega} \in \mathbb{R}$ such that for any $f \in C^1(0,\tau; X)$, $\tau > 0$
\begin{equation}\label{equ:pHY}
\left\| (S_A \diamond f)(t) \right\| \leq \widehat{M} \left\|e^{\widehat{\omega}(t-\cdot)}f(\cdot) \right\|_{L^p(0,t;X)}, ~\forall t \in [0,\tau].
\end{equation}
\end{enumerate}
\end{defi}
For a $p$-quasi Hille--Yosida operator, when we do not emphasize $p$, we simply call it a \emph{quasi Hille--Yosida operator}.
\begin{lem}\label{lem:pHY}Let $A$ be a $p$-quasi Hille--Yosida operator.
\begin{enumerate}[(a)]
\item 1-quasi Hille--Yosida operators are Hille--Yosida operators.
\item For all $f \in L^p(0,\tau;X)$, $(S_A*f)(t) \in D(A)$, $ t \mapsto (S_A*f)(t) $ is differentiable, $(S_A \diamond f)(t)\in X_0$, $t \mapsto (S_A \diamond f)(t)$ is continuous, and \eqref{equ:pHY} still holds.
\item There is $M \geq 0$ such that
\[
\|R(\lambda, A)\| \leq \frac{M}{(\lambda-\widehat{\omega})^{\frac{1}{p}}}, ~~\forall \lambda > \widehat{\omega}.
\]
\end{enumerate}
\end{lem}
\begin{proof}
(a) See \cite[Section 4]{Thi08} for more general results.
(b) See \cite{MR07, Thi08}.
(c) This is a corollary of \cite[Theorem 4.7 (iii) and Remark 4.8]{MR07}. Indeed, by
\[
\|x^*\circ[\lambda-(A-\widehat{\omega})]^{-n}\|_{X^*} \leq \frac{1}{(n-1)!} \int_0^\infty s^{n-1}e^{-\lambda s}\chi_{x^*}(s)~\mathrm{d}s , ~~\lambda >0,
\]
where $x^* \in (X_0)^*$, $\sup\limits_{x^* \in (X_0)^*, |x^*| \leq 1}\|\chi_{x^*}\|_{L^p(0,\infty;\mathbb{R})} < \infty$, $n \in \mathbb{N}$, and by the H\"{o}lder inequality for the case $n=1$, we obtain the result. The proof is complete.
\end{proof}
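We also remark that, for large real $\lambda$, the estimate in (c) can be read off directly from \eqref{equ:pHY}; the following sketch is included for the reader's convenience. Let $\omega_1 > \widehat{\omega}$ be an exponential bound for $S_A$, let $\lambda > \omega_1$, and take $f(s) = e^{\lambda s}x$ with $x \in X$ in \eqref{equ:pHY}. A direct computation gives
\[
e^{-\lambda t}(S_A \diamond f)(t) = \lambda \int_0^t e^{-\lambda r}S_A(r)x~\mathrm{d}r + e^{-\lambda t}S_A(t)x \rightarrow R(\lambda, A)x, ~\text{as}~ t \rightarrow \infty,
\]
by \eqref{equ:int}, while
\[
e^{-\lambda t}\left\|e^{\widehat{\omega}(t - \cdot)}f(\cdot)\right\|_{L^p(0,t;X)} = \left(\int_0^t e^{-p(\lambda - \widehat{\omega})(t-s)}~\mathrm{d}s\right)^{\frac{1}{p}}\|x\| \leq \frac{\|x\|}{(p(\lambda - \widehat{\omega}))^{\frac{1}{p}}}.
\]
Hence $\|R(\lambda, A)\| \leq \widehat{M}\,(p(\lambda - \widehat{\omega}))^{-\frac{1}{p}}$ for all $\lambda > \omega_1$.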
\begin{rmk}\label{rmk:cc}
A beautiful characterization of MR operators and $ p $-quasi Hille--Yosida operators in terms of the regularity of the integrated semigroup $ S_A $ was given by Thieme \cite{Thi08}. We state it here for the reader's convenience. Suppose that for the operator $ A: D(A) \subset X \to X $, $A_0$ generates a $C_0$ semigroup $T_{A_0}$ in $X_0$, and let $ S_A $ be the integrated semigroup generated by $ A $.
\begin{enumerate}[(a)]
\item $ A $ is an MR operator if and only if $ S_A $ is of \emph{bounded semi-variation} on $ [0, b] $ for some $ b > 0 $ with its semi-variation $ V_{\infty} (S_A; 0, b) \to 0 $ as $ b \to 0 $. Here we mention that if $ T_{A_0} $ is norm continuous on $ (0, b) $, then $ V_{\infty} (S_A; 0, b) \to 0 $ as $ b \to 0 $. Note also that $ S_A $ is of bounded semi-variation on $ [0, b] $ if and only if $ x^*S_A $ is of \emph{bounded variation} for all $ x^* \in (X_0)^* $ (see, e.g., \cite[Theorem 2.12]{Mon15}).
\item $ A $ is a $ p $-quasi Hille--Yosida operator if and only if $ S_A $ is of \emph{bounded semi-$ p $-variation} on $ [0, b] $ for some $ b > 0 $, and if and only if $ x^*S_A $ is of \emph{bounded $ p' $-variation} on $ [0, b] $ for all $ x^* \in (X_0)^* $, where $ 1/{p'} + 1/p = 1 $. We notice that for the semi-$ p $-variation $ V_{p} (S_A; 0, b) $, one has $ V_{\infty} (S_A; 0, b) \leq b^{1/p} V_{p} (S_A; 0, b) $.
\end{enumerate}
For the notions of (semi-) ($ p $-) variation, see \cite{Thi08, Mon15} for details. One additional remark: if $ X $ has the Radon--Nikodym property, then a function $ f: [a,b] \to X $ is of bounded $ p $-variation if and only if $ f \in W^{1,p}((a,b), X) $.
Also note that for MR operator $ A $, $ (S_A \diamond f)(t) $ can be written in the convolution form by using the Stieltjes-type integral, i.e., (see \cite[Theorem 3.2]{Thi08})
\[
(S_A \diamond f)(t) = \int_{0}^{t} \mathrm{d}S_{A}(s)\, f(t - s).
\]
\end{rmk}
Quasi Hille--Yosida operators are related to almost sectorial operators.
\begin{defi}[$\frac{1}{p}$-almost sectorial operator \cite{PS02, DMP10}]\label{def:almost}
We call a closed linear operator $A$ a \textbf{$\frac{1}{p}$-almost sectorial operator} ($p \geq 1$), if there exist $\widehat{M} > 0$, $\widehat{\omega} \in \mathbb{R}$, $\theta \in (\frac{\pi}{2}, \pi)$ such that
\[
\|R(\lambda, A)\| \leq \frac{\widehat{M}}{|\lambda - \widehat{\omega}|^{\frac{1}{p}}},
\]
for all $\lambda \in \{\lambda \in \mathbb{C}:~\lambda \neq \widehat{\omega}, ~|\arg(\lambda - \widehat{\omega})| < \theta\}$.
\end{defi}
\begin{lem}\label{lem:almost}
\begin{enumerate}[(a)]
\item If $A$ is a $\frac{1}{p}$-almost sectorial operator, then $A_0$ is a generator of an analytic $C_0$ semigroup, and $A$ is a $\widehat{p}$-quasi Hille--Yosida operator for any $\widehat{p} > p$.
\item If $A$ is a $p$-quasi Hille--Yosida operator and $A_0$ is a generator of an analytic $C_0$ semigroup, then $A$ is a $\frac{1}{p}$-almost sectorial operator.
\item Let $A_0$ be a generator of an analytic $C_0$ semigroup. Then, $A$ is a Hille--Yosida operator if and only if $A$ is $1$-almost sectorial operator (i.e., the generator of a holomorphic semigroup, see \cite[Definition 3.7.1]{ABHN11}).
\end{enumerate}
\end{lem}
\begin{proof}
(a) That $A_0$ is a generator of an analytic $C_0$ semigroup was shown in \cite[Proposition 3.12, Theorem 3.13]{PS02}, and that $A$ is a $\widehat{p}$-quasi Hille--Yosida operator for any $\widehat{p} > p$ was proved in \cite[Theorem 3.11]{DMP10}.
(b) Combine \cite[Proposition 3.3]{DMP10} and \autoref{lem:pHY} (c).
(c) See \cite[Theorem 3.7.11 and Example 3.5.9 c)]{ABHN11}.
\end{proof}
Because of the above lemma, we may interpret almost sectorial operators as an analytic version of quasi Hille--Yosida operators.
We refer to \cite{MR07, MR09, MR09a} for more results and examples on MR operators (quasi Hille--Yosida operators), particularly in applications to abstract Cauchy problems, and to \cite{PS02, DMP10, CDN08} on almost sectorial operators; see also \autoref{model} and \autoref{comments}.
\begin{rmk}
For a generator $ A $ of an integrated semigroup $ S_A $, the situation differs from that of Hille--Yosida operators (where $ A $ is a Hille--Yosida operator if and only if $ S_A $ is locally Lipschitz, and if and only if $ S_A $ is of locally bounded semi-$ 1 $-variation) and $\frac{1}{p}$-almost sectorial operators: even if $ S_A $ is of bounded semi-$ p $-variation ($ p > 1 $) or bounded semi-variation, in general $ A_{\overline{D(A)}} $ might not generate a $ C_0 $ semigroup; see, e.g., \cite[Examples 5.2 and 5.3]{Thi08}, where the operators are densely defined.
\end{rmk}
Assume that $A$ is an MR operator. Denote by $C_s([0,\infty), \mathcal{L}(X_0, X))$ the space of strongly continuous maps from $[0,\infty)$ into $\mathcal{L}(X_0, X)$. Set
\[
(S_A \diamond V)(t)x := (S_A \diamond V(\cdot)x)(t), ~~\forall x \in X_0,
\]
where $V \in C_s([0,\infty), \mathcal{L}(X_0, X))$. Note that for any $t \geq 0$, $(S_A \diamond V)(t) \in \mathcal{L}(X_0, X)$, and $t \mapsto (S_A \diamond V)(t)$ is norm continuous at zero (see \autoref{lem:MR} \eqref{MRaa}). Set
\[
\mathcal{B}(W) := S_A \diamond LW,~~ W \in C_s([0,\infty), \mathcal{L}(X_0, X)),
\]
where $L \in\mathcal{L}(X_0, X)$.
\begin{lem}\label{lem:fix}
Suppose that $A$ is an MR operator. Then, for any $V \in C_s([0,\infty), \mathcal{L}(X_0, X))$, the equation
\begin{equation}\label{equ:fix}
W = V + \mathcal{B}(W)
\end{equation}
has a unique solution $W \in C_s([0,\infty), \mathcal{L}(X_0, X))$. Furthermore,
\[
W = \sum\limits_{n=0}^{\infty}\mathcal{B}^n(V),
\]
where the series is uniformly convergent on any finite interval in the uniform operator topology.
\end{lem}
\begin{proof}
The result was already stated in the proof of \cite[Theorem 3.1]{MR07}; we give a detailed proof here for the reader's convenience. Since $\delta(t) \rightarrow 0$ as $ t \rightarrow 0$, there exists $\delta_0 > 0$ such that $\delta(t)\|L\| \leq r < 1$ for $t \in [0,\delta_0]$. Fix $x \in X_0$. Consider the operator
\[
(\widetilde{B}f)(t) = (S_A \diamond Lf)(t), ~~f \in C([0, \delta_0], X),
\]
which is contractive and $(\widetilde{B}V(\cdot)x)(t) = (\mathcal{B}V)(t)x$. Thus, we have a unique $W(\cdot)x \in C([0,\delta_0], X)$ such that
\[
\widetilde{B}W(\cdot)x + V(\cdot)x = W(\cdot)x,
\]
and
\begin{equation}\label{jssss}
W(t)x = \sum\limits_{n=0}^{\infty}(\widetilde{B}^nV(\cdot)x)(t) = \sum\limits_{n=0}^{\infty}\mathcal{B}^n(V)(t)x.
\end{equation}
Obviously, by the uniqueness, $ x \mapsto W(t)x $ is linear for each $ t $.
In addition, since $\|\widetilde{B}f\|_\infty \leq r\|f\|_\infty$, we have, whenever $m \geq n$,
\[
\Big\|\sum\limits_{k=n}^{m}(\widetilde{B}^kV(\cdot)x)(t)\Big\| \leq \frac{r^n}{1-r}\sup_{t\in[0,\delta_0]}\|V(t)x\| \leq \frac{r^n}{1-r}\widetilde{M}\|x\|,
\]
where $\widetilde{M} := \sup_{t \in [0,\delta_0]}\|V(t)\| < \infty$, which implies that \eqref{jssss} is uniformly convergent on $[0,\delta_0]$ in the uniform operator topology. Therefore $W \in C_s([0,\delta_0], \mathcal{L}(X_0, X))$.
Next, there is $\widetilde{W}(\cdot)$ satisfying
\[
(S_A \diamond L\widetilde{W})(s) + T_{A_0}(s)(S_A \diamond LW)(\delta_0) + V(s+\delta_0) = \widetilde{W}(s), ~ s \in [0, \delta_0].
\]
(That is we take $T_{A_0}(\cdot)(S_A \diamond LW)(\delta_0) + V(\cdot+\delta_0)$ as the initial function.) Let
\[
W(t) = \widetilde{W}(t - \delta_0), ~t \in [\delta_0, 2\delta_0].
\]
Then,
\[
W(t) = (S_A \diamond LW(\delta_0+\cdot))(t-\delta_0) + T_{A_0}(t-\delta_0)(S_A \diamond LW)(\delta_0) + V(t) = (\mathcal{B}W)(t) + V(t).
\]
So $W(\cdot)$ is well defined on $[\delta_0, 2\delta_0]$ and satisfies \eqref{equ:fix}. Since $W(\cdot)$ is uniquely constructed in this way, this completes the proof.
\end{proof}
It was shown in \cite{MR07, Thi08} that MR operators (quasi Hille--Yosida operators) are stable under bounded perturbations.
\begin{thm}[\cite{MR07, Thi08}]\label{thm:mrper}
Let $A$ be an MR operator (resp. $p$-quasi Hille--Yosida operator). For any $L \in \mathcal{L}(X_0, X)$, $A+L$ is still an MR operator (resp. $p$-quasi Hille--Yosida operator), and the following fixed-point equations hold.
\begin{align*}
T_{(A+L)_0} & = T_{A_0} + S_A \diamond LT_{(A+L)_0} \\
& = T_{A_0} + S_A \diamond LT_{A_0} + S_A \diamond LS_A \diamond LT_{(A+L)_0} \\
& = \cdots, \\
S_{A+L} & = S_A + S_A \diamond LS_{A + L} = \cdots.
\end{align*}
\end{thm}
Set
\begin{equation}\label{symbol}
S_0 := T_{A_0},~ R_0 := T_{(A+L)_0},~ S_n := \mathcal{B}^n(T_{A_0}),~R_n := \mathcal{B}^n(T_{(A+L)_0}), ~n = 1, 2, \cdots.
\end{equation}
Combining with \autoref{lem:fix}, we have
\begin{align}
S_{n+1} & = S_A \diamond LS_n \label{equ:sr1}\\
R_k & = \sum_{n=k}^{\infty} S_n, \label{equ:sr2}\\
R_k - S_k & = \mathcal{B} \circ R_k, \label{equ:sr3}\\
T_{(A+L)_0} & = \sum_{n=0}^{k-1}S_n + R_k, \label{equ:sr4}\\
& = \sum_{n=0}^{\infty}S_n, \label{equ:sr5}
\end{align}
where the series in \eqref{equ:sr2} and \eqref{equ:sr5} are uniformly convergent on finite intervals in the uniform operator topology.
The following result about the $S_n$ is analogous to the Dyson--Phillips series (see \cite{EN00}).
\begin{lem} \label{lem:srelation}
$S_n(t+s) = \sum\limits_{k=0}^{n}S_k(t)S_{n-k}(s).$
\end{lem}
\begin{proof}
The case $n=0$ is clear. Consider
\begin{align*}
S_{n+1}(t+s) & = [S_A \diamond LS_n(\cdot)](t+s) \\
& = T_{A_0}(t)[S_A \diamond LS_n(\cdot)](s) + [S_A \diamond LS_n(s + \cdot)](t) \quad \\
& = T_{A_0}(t)S_{n+1}(s) + [S_A \diamond L \sum_{k=0}^{n}S_k(\cdot)S_{n-k}(s)](t) \quad \\
& = T_{A_0}(t)S_{n+1}(s) + [\sum_{k=0}^{n}S_{k+1}(\cdot)S_{n-k}(s)](t) \\
& = \sum_{k=0}^{n+1}S_k(t)S_{n+1-k}(s),
\end{align*}
the second equality being a consequence of \eqref{equ:sf2}, and the third one by induction.
\end{proof}
Using the above lemma, we obtain the following corollary, which generalizes the corresponding results for $C_0$ semigroups; \autoref{cor:DP} \eqref{DPee} was also given in \cite[Theorem 3.2]{Bre01} in the context of $C_0$ semigroups.
\begin{cor}\label{cor:DP}
\begin{enumerate}[(a)]
\item If $S_0, S_1, \cdots, S_n$ are norm continuous (resp. compact, differentiable) at $t_0 > 0$, then $S_n(t)$ is norm continuous (resp. compact, differentiable) for all $t \geq t_0$.
\item If $S_0, S_1, \cdots, S_n$ are compact at $t_0 > 0$, then $S_n$ is norm continuous on $(t_0, \infty)$.
\item \label{DPcc} If $ S_n(t) $ is compact for all $ t > t_1 $ ($ \geq 0 $), then $ S_n $ is norm continuous on $ (t_1, \infty) $.
\item \label{DPdd} If $ R_n(t) $ is compact for all $ t > t_1 $ ($ \geq 0 $), then $ R_n $ is norm continuous on $ (t_1, \infty) $.
\item \label{DPee} $ S_n(t) $ is compact for all $ t > 0 $ if and only if $ S_n $ is norm continuous on $ (0, \infty) $ and $ (R(\lambda+ i\mu, A) L)^n R(\lambda + i \mu, A_0) $ is compact for all $ \mu \in \mathbb{R} $ and for some/all large $ \lambda > 0 $.
\end{enumerate}
\end{cor}
\begin{proof}
(a) and (b) are direct consequences of \autoref{lem:srelation}.
(c) Let $ t_0 > t_1 $. Take a constant $ M > 0 $ such that $ \sup_{t \in [0, 2t_0]}\|S_k(t)\| \leq M $, $ k = 0, 1, 2,\cdots,n $. For any small $ \varepsilon > 0 $, by \autoref{lem:MR} \eqref{MRaa}, there is $ 0 < \delta < t_0 $ such that $ t_0 - \delta > t_1 $ and $ \sup_{\sigma\in [0, 2\delta]}\|S_k(\sigma)\| \leq \varepsilon / M $ for $ k = 1, \cdots, n $. So
\[
\sup_{\sigma\in [0, 2\delta]} \sum_{k=1}^{n}\|S_k(\sigma)S_{n-k}(t_0 - \delta)\| \leq \varepsilon.
\]
Now for $ 0 < \delta' < \delta $, we see
\begin{align*}
& \|S_{n}(t_0) - S_{n}(t_0\pm\delta')\| \\
= &~ \|(T_{A_0}(\delta) - T_{A_0}(\delta\pm\delta'))S_{n}(t_0 - \delta) + \sum_{k=1}^{n} (S_k(\delta) - S_k(\delta\pm\delta')) S_{n-k}(t_0 - \delta) \| \\
\leq &~ \|(T_{A_0}(\delta) - T_{A_0}(\delta\pm\delta'))S_{n}(t_0 - \delta)\| + 2\varepsilon.
\end{align*}
If we take $ \delta' $ sufficiently small, then $ \|(T_{A_0}(\delta) - T_{A_0}(\delta\pm\delta'))S_{n}(t_0 - \delta)\| \leq \varepsilon $ as $ S_{n}(t_0 - \delta) $ is compact, and so $ \|S_{n}(t_0) - S_{n}(t_0\pm\delta')\| \leq 3\varepsilon $. This shows $ S_{n} $ is norm continuous at $ t_0 > 0 $.
(d) The proof is very similar to that for $ S_n $, using the following equality:
\begin{multline*}
\|R_{n}(t_0) - R_{n}(t_0\pm\delta')\| = \|(T_{A_0}(\delta) - T_{A_0}(\delta\pm\delta'))R_{n}(t_0 - \delta) \\
+ (S_{A} \diamond LR_{n-1}(\cdot+t_0 - \delta)) (\delta) - (S_{A} \diamond LR_{n-1}(\cdot+t_0 - \delta)) (\delta\pm\delta') \|.
\end{multline*}
(e) This follows from the following \autoref{slem:L} and conclusion (c) (letting $ t_1 = 0 $), since $ (R(\lambda+ i\mu, A) L)^n R(\lambda + i \mu, A_0) = \mathcal{L} (S_n) (\lambda + i \mu) $ (the Laplace transform of $ S_n $ at $ \lambda + i \mu $).
The proof is complete.
\end{proof}
\begin{slem}\label{slem:L}
Let $ F: \mathbb{R}_+ \to \mathcal{L}(X_0, X) $ be norm continuous and satisfy $ \|F(t)\| \leq C_1e^{\omega t} $ for all $ t > 0 $ and some constants $ C_1 \geq 1 $, $ \omega \in \mathbb{R} $. Then $ F(t) $ is compact for all $ t > 0 $ if and only if $ \mathcal{L}(F)(\lambda + i \mu) $ (the Laplace transform of $ F $ at $ \lambda + i \mu $) is compact for all $ \mu \in \mathbb{R} $ and some/all $ \lambda > \omega $.
\end{slem}
\begin{proof}
If $ F(t) $ is compact for all $ t > 0 $, then
\[
\mathcal{L}(F)(\lambda + i \mu) = \int_{0}^{\infty} e^{-(\lambda + i \mu)t} F(t) ~\mathrm{d} t = \lim_{r \to \infty} \int_{0}^{r} e^{-(\lambda + i \mu)t} F(t) ~\mathrm{d} t ,
\]
is compact. Conversely, if $ \mathcal{L}(F)(\lambda + i \mu) $ is compact for all $ \mu \in \mathbb{R} $ and some $ \lambda > \omega $, then by the complex inversion formula for the Laplace transform (see, e.g., \cite[Theorem 2.3.4]{ABHN11}), one gets
\[
\int_{0}^{t} e^{-\omega s} F(s) ~\mathrm{d} s = \lim_{\mu \to \infty} \frac{1}{2\pi i} \int_{\lambda - \omega - i\mu}^{\lambda - \omega + i \mu} e^{rt} \frac{\mathcal{L}(F)(r+\omega)}{r} ~\mathrm{d} r,
\]
where the limit is uniform for $ t $ belonging to compact intervals and exists in the uniform operator topology. This shows that $ \int_{0}^{t} e^{-\omega s} F(s) ~\mathrm{d} s $ is compact and consequently $ F(t) $ is compact for all $ t > 0 $. The proof is complete.
\end{proof}
\noindent\emph{Problem}: for a Hilbert space $ X $, is it true that if $ \|(R(\lambda+ i\mu, A) L)^n R(\lambda + i \mu, A_0)\| \to 0 $ as $ |\mu| \to \infty $, then $ S_{n-2} $ is norm continuous? (See \cite{Bre01}.) When $ LR(\lambda, A_0) = R(\lambda, A)L $, a very similar proof applies.
\subsection{Critical spectrum and essential spectrum}\label{spec}
In this subsection, we recall some known results about spectral theory. For a complex Banach algebra $ Y $, $\sigma(x)$ denotes the spectrum of $x \in Y$, i.e.,
\[
\sigma(x) = \{\lambda \in \mathbb{C}:~ \lambda - x ~\text{is not invertible in $ Y $}\},
\]
and $r(x)$ denotes the spectral radius of $x$, i.e.,
\[
r(x) = \sup\{|\lambda|:~\lambda \in \sigma(x)\}.
\]
The critical spectrum was introduced by Nagel and Poland \cite{NP00}. Let $T$ be a $C_0$ semigroup on a Banach space $X$ with generator $B$. Set
\[
l^\infty(X) := \{ (x_n): \|(x_n)\|_\infty \triangleq \sup\limits_n \|x_n\| < \infty \}.
\]
$T$ can be naturally extended to $l^\infty(X)$ by
\[
\widetilde{T}(t)(x_n) := (T(t)x_n), ~~(x_n) \in l^\infty(X).
\]
Then, $\|\widetilde{T}(t)\| = \|T(t)\|$. Define
\begin{align*}
l_T^\infty(X) & := \{(x_n) \in l^\infty(X): \sup\limits_n\|T(t)x_n - x_n\| \rightarrow 0, ~\text{as}~ t \rightarrow 0^+ \} \\
& = \{(x_n) \in l^\infty(X): \widetilde{T}(t)(x_n) \rightarrow (x_n), ~\text{as}~ t \rightarrow 0^+ \} \\
& = \{(x_n) \in l^\infty(X): \limsup\limits_{n \rightarrow \infty}\|T(t)x_n - x_n\| \rightarrow 0, ~\text{as}~ t \rightarrow 0^+ \}.
$l_T^\infty(X)$ is the closed subspace of strong continuity of $\widetilde{T}$. Note that $\widetilde{T}(t)l_T^\infty(X) \subset l_T^\infty(X)$ for each $t \geq 0$, so $\widetilde{T}$ induces a semigroup $\widehat{T}$ on the quotient space $l^\infty(X) / l_T^\infty(X)$, defined by
\[
\widehat{T}(t)[(x_n)] := [\widetilde{T}(t)(x_n)], ~~\text{where}~[(x_n)] := (x_n) + l_T^\infty(X).
\]
In general, if $F \in \mathcal{L} (X)$, define $\widetilde{F}$ on $l^\infty(X)$ as
\[
\widetilde{F}(x_n) := (F(x_n)),
\]
with $\|\widetilde{F}\| = \|F\|$. If $\widetilde{F} (l_T^\infty(X)) \subset l_T^\infty(X)$, then $\widehat{F}$ can be defined on $l^\infty(X) / l_T^\infty(X)$ by
\[
\widehat{F}[(x_n)] := [\widetilde{F}(x_n)],
\]
with $\|\widehat{F}\| \leq \|F\|$. If $G \in \mathcal{L}(X)$ and $\widetilde{G} (l_T^\infty(X)) \subset l_T^\infty(X)$, then
\[
\widetilde{F \circ G} = \widetilde{F} \circ \widetilde{G} ,~~ \widehat{F \circ G} = \widehat{F} \circ \widehat{G}.
\]
Note that $l^\infty(X) / l_T^\infty(X)$ is a Banach algebra. Now we give the following definitions.
\begin{defi}[\cite{NP00}]\label{def:crit}
For a $C_0$ semigroup $T$, we call
\[
\sigma_{\mathrm{crit}}(T(t)) := \sigma(\widehat{T}(t)),
\]
the critical spectrum of $T(t)$,
\[
r_{\mathrm{crit}}(T(t)) := r(\widehat{T}(t)),
\]
the critical spectral radius of $T(t)$, and
\begin{align*}
\omega_{\mathrm{crit}}(T) & := \inf \{ \omega \in \mathbb{R}: \exists M \geq 0,~ \|\widehat{T}(t)\| \leq Me^{\omega t}, \forall t \geq 0 \}\\
& = \lim_{t \rightarrow \infty} \frac{1}{t} \ln \|\widehat{T}(t)\|
= \inf_{t >0} \frac{1}{t} \ln \|\widehat{T}(t)\|
= \frac{1}{t_0}\ln r_{\mathrm{crit}}(T(t_0)) ~ (t_0 >0),
\end{align*}
the critical growth bound of semigroup $T$.
\end{defi}
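The chain of equalities in \autoref{def:crit} follows from the standard subadditivity argument for growth bounds; a minimal sketch (ours, not part of \cite{NP00}):

```latex
% Since \widehat{T} is again a semigroup on l^\infty(X)/l_T^\infty(X),
%   \|\widehat{T}(t+s)\| \le \|\widehat{T}(t)\|\,\|\widehat{T}(s)\|,
% so g(t) := \ln\|\widehat{T}(t)\| is subadditive. Fekete's subadditivity
% lemma (see, e.g., [EN00, Lemma IV.2.3]) gives
\[
  \lim_{t \to \infty} \frac{1}{t} \ln \|\widehat{T}(t)\|
  = \inf_{t > 0} \frac{1}{t} \ln \|\widehat{T}(t)\| ,
\]
% and Gelfand's formula
%   r(\widehat{T}(t_0)) = \lim_{n \to \infty} \|\widehat{T}(t_0)^n\|^{1/n}
%                       = \lim_{n \to \infty} \|\widehat{T}(n t_0)\|^{1/n}
% identifies this common value with \frac{1}{t_0}\ln r_{\mathrm{crit}}(T(t_0)).
```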
With these definitions, the following theorem holds.
\begin{thm}[\cite{NP00}]\label{thm:crit}
For a $C_0$ semigroup $T$ with generator $B$, the following statements hold.
\begin{enumerate}[(a)]
\item $\sigma_{\mathrm{crit}}(T(t)) \subset \sigma(T(t))$.
\item (Partial spectral mapping theorem) For each $t > 0$,
\[
\sigma(T(t))\backslash \{0\} = e^{t\sigma(B)} \cup \sigma_{\mathrm{crit}}(T(t)) \backslash \{0\},
\]
and
\[
\sigma(T(t)) \cap \{r \in \mathbb{C}: |r| > r_{\mathrm{crit}}(T(t))\} = e^{t\sigma(B)} \cap \{r \in \mathbb{C}: |r| > r_{\mathrm{crit}}(T(t))\}.
\]
\item $\omega(T) = \max\{ s(B), \omega_{\mathrm{crit}}(T) \}$,
\end{enumerate}
where we denote by $\omega(T)$ the growth bound of T, and $s(B)$ the spectral bound of $B$, i.e.,
\begin{equation}\label{equ:bound}
\omega(T) = \inf \{ \omega \in \mathbb{R}: \exists M \geq 0,~ \|T(t)\| \leq Me^{\omega t}, \forall t \geq 0 \}, ~s(B) = \sup\{\mathrm{Re}\lambda: \lambda \in \sigma(B) \}.
\end{equation}
\end{thm}
See \cite{Bla01} for a different but equivalent definition of the critical spectrum.
We refer to \cite{NP00, BNP00, Sbi07} for more details and applications of the critical spectrum. Next, we turn to the essential spectrum. We follow the presentation of \cite{GGK90}. Consider the Calkin algebra $\mathcal{L}(X)/ \mathcal{K}(X)$, where $\mathcal{K}(X)$ denotes the closed ideal of compact linear operators on $X$. Define $\overline{T}$ on $\mathcal{L}(X)/ \mathcal{K}(X)$ by
\[
\overline{T}(t) := [T(t)] := T(t) + \mathcal{K}(X).
\]
Now we have the following definitions.
\begin{defi}\label{def:ess}
For a $C_0$ semigroup $T$, we call
\[
\sigma_{\mathrm{ess}}(T(t)) := \sigma(\overline{T}(t)),
\]
the essential spectrum of $T(t)$,
\[
r_{\mathrm{ess}}(T(t)) := r(\overline{T}(t)),
\]
the essential spectral radius of $T(t)$ and
\begin{align*}
\omega_{\mathrm{ess}}(T) & := \inf \{ \omega \in \mathbb{R}: \exists M \geq 0,~ \|\overline{T}(t)\| \leq Me^{\omega t}, \forall t \geq 0 \}\\
& = \lim_{t \rightarrow \infty} \frac{1}{t} \ln \|\overline{T}(t)\|
= \inf_{t >0} \frac{1}{t} \ln \|\overline{T}(t)\|
= \frac{1}{t_0}\ln r_{\mathrm{ess}}(T(t_0)) ~ (t_0 >0),
\end{align*}
the essential growth bound of semigroup $T$.
\end{defi}
\begin{thm}[\cite{NP00, EN00}]\label{thm:ess}
For a $C_0$ semigroup $T$ with generator $B$, the following statements hold.
\begin{enumerate}[(a)]
\item $\sigma_{\mathrm{ess}}(T(t)) \subset \sigma_{\mathrm{crit}}(T(t)) \subset \sigma(T(t))$. Thus, $\omega_{\mathrm{ess}}(T) \leq \omega_{\mathrm{crit}}(T) \leq \omega(T)$, $\omega(T) = \max\{ s(B), \omega_{\mathrm{crit}}(T) \}$.
\item $\forall \gamma > \omega_{\mathrm{ess}}(T)$, the set $\{\lambda \in \sigma(B): \mathrm{Re}\lambda \geq \gamma \}$ is finite. If it is nonempty, its elements are poles of $R(\cdot, B)$ of finite order; in particular, they belong to the point spectrum of $B$.
\end{enumerate}
\end{thm}
For other definitions of the essential spectrum, see \cite{Dei85}. Although the various definitions of the essential spectrum need not coincide, the corresponding essential spectral radii all agree.
\section{General arguments}
From now on, we assume that $A$ is an MR operator and $V \in C_s([0,\infty), \mathcal{L}(X_0, X))$. Then, $A_0 := A_{X_0}$ generates a $C_0$ semigroup $T_{A_0}$ in $X_0$, and $A$ generates an integrated semigroup $S_A$, where $X_0 := \overline{D(A)}$. In this section, we consider the regularity of $S_A \diamond V$ when $T_{A_0}$ or $V$ has higher regularity. Note that $t \mapsto (S_A \diamond V)(t)$ is norm continuous at zero (see \autoref{lem:MR} \eqref{MRaa}).
\begin{lem}\label{lem:NC}
\begin{enumerate}[(a)]
\item If $T_{A_0}$ is norm continuous on $(0,\infty)$, then $S_A \diamond V$ is norm continuous on $(0,\infty)$ (hence $[0,\infty)$). More generally, if $L \in \mathcal{L}(X_0, X)$ and $LT_{A_0}$ is norm continuous on $(0, \infty)$, so is $LS_{A} \diamond V$.
\item If $V$ is norm continuous on $(0,\infty)$, so is $S_A \diamond V$.
\item If $T_{A_0}$ is norm continuous on $(\alpha, \infty)$, and $V$ is norm continuous on $(\beta, \infty)$, then $S_A \diamond V$ is norm continuous on $(\alpha + \beta, \infty)$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $t_0 > \eta > 0$, let $t > s > 0$ lie in the neighborhood $U(t_0;\eta) = \{ r \in \mathbb{R}: |r - t_0| < \eta \}$ of $t_0$, and let $0 < \varepsilon < s$.
(a) By \eqref{equ:sf2}, we have
\begin{align*}
L(S_A \diamond V)(t) - L(S_A \diamond V)(s) & = LT_{A_0}(t-s)(S_A \diamond V)(s) + L(S_A \diamond V(s + \cdot))(t-s) - L(S_A \diamond V)(s)\\
& = (LT_{A_0}(t-s + \varepsilon ) - LT_{A_0}(\varepsilon))(S_A \diamond V)(s-\varepsilon) + \\
& \qquad L(T_{A_0}(t-s) - I)(S_A \diamond V(s - \varepsilon + \cdot))(\varepsilon) + L(S_A \diamond V(s + \cdot))(t-s) \\
& := (LT_{A_0}(t-s + \varepsilon ) - LT_{A_0}(\varepsilon))(S_A \diamond V)(s-\varepsilon) + R,
\end{align*}
where
\[
R := L(T_{A_0}(t-s) - I)(S_A \diamond V(s - \varepsilon + \cdot))(\varepsilon) + L(S_A \diamond V(s + \cdot))(t-s),
\]
and
\[
\|R\| \leq \widetilde{M}(\delta(\varepsilon) + \delta(t-s)),
\]
for some constant $\widetilde{M} > 0$. Taking $\varepsilon$ and then $\eta$ sufficiently small ($\eta$ depending on $\varepsilon$), $\|L(S_A \diamond V)(t) - L(S_A \diamond V)(s)\|$ can be made arbitrarily small, which shows that $LS_A \diamond V$ is norm continuous at $t_0$.
(b) By \eqref{equ:sf2}, we see
\begin{align*}
&\|(S_A \diamond V)(t) - (S_A \diamond V)(s)\| \\
= & \|T_{A_0}(s-\varepsilon)(S_A \diamond V)(t-s+\varepsilon) + (S_A \diamond V(t-s+\varepsilon + \cdot))(s-\varepsilon) \\
& - T_{A_0}(s-\varepsilon)(S_A \diamond V)(\varepsilon) - (S_A \diamond V(\varepsilon + \cdot))(s-\varepsilon)\| \\
\leq & \widetilde{M}(\delta(t-s+\varepsilon) + \delta(\varepsilon)) + \widetilde{M}\sup_{r \in [0,s-\varepsilon]} \|V(t-s+\varepsilon+r)-V(\varepsilon+r)\|,
\end{align*}
for some constant $\widetilde{M} > 0$. Since $V$ is uniformly continuous on $[\varepsilon, 2t_0]$, the right-hand side can be made arbitrarily small.
(c) When $t > \alpha + \beta$, by \eqref{equ:sf2}, we get
\begin{equation}\label{equ:general}
(S_A \diamond V)(t) = T_{A_0}(t-\beta)(S_A \diamond V)(\beta) + (S_A \diamond V(\beta + \cdot))(t-\beta).
\end{equation}
The first term on the right-hand side is norm continuous by the norm continuity of $T_{A_0}$, and the second is norm continuous by (b). The proof is complete.
\end{proof}
For $ W \in C_s([0,\infty), \mathcal{L}(X_0, X)) $, we say $ W $ is compact on $ (a,b) $ if for each $ t \in (a,b) $, $ W(t) $ is compact.
\begin{lem}\label{lem:CP}
\begin{enumerate}[(a)]
\item If $T_{A_0}$ is compact on $(0,\infty)$, so is $S_A \diamond V$. More generally, if $L \in \mathcal{L}(X_0, X)$ and $LT_{A_0}$ is compact on $(0,\infty)$, so is $LS_A \diamond V$.
\item If $V$ is compact and norm continuous on $(0,\infty)$, so is $S_A \diamond V$.
\item If $T_{A_0}$ is compact on $(\alpha, \infty)$, and $V$ is compact and norm continuous on $(\beta, \infty)$, then $S_A \diamond V$ is compact and norm continuous on $(\alpha + \beta, \infty)$.
\end{enumerate}
\end{lem}
\begin{proof}
(a) For $ t > \varepsilon > 0 $, by
\[
L(S_A \diamond V)(t) = LT_{A_0}(\varepsilon)(S_A \diamond V)(t - \varepsilon) + L(S_A \diamond V(t-\varepsilon+\cdot))(\varepsilon),
\]
we see $LT_{A_0}(\varepsilon)(S_A \diamond V)(t - \varepsilon)$ converges to $L(S_A \diamond V)(t)$ in the operator norm topology, as $\varepsilon \rightarrow 0^+$. Since $ LT_{A_0}(\varepsilon) $ is compact, the result follows.
(b) Since $V$ is norm continuous on $(0,\infty)$, so is $S_A \diamond V$ (\autoref{lem:NC} (b)). Hence,
\[
(S_A \diamond V)(t) = \lim_{h \rightarrow 0} \frac{1}{h} \int_0^tT_{A_0}(t-s)S_A(h)V(s)~\mathrm{d}s,
\]
where the convergence is in the uniform operator topology; see the proof of equation \eqref{equ:sf3}. Since $V(t)$ is compact for all $t>0$, the operator $\int_0^tT_{A_0}(t-s)S_A(h)V(s)~\mathrm{d}s$ is compact (see, e.g., \cite[Theorem C.7]{EN00}), so $(S_A \diamond V)(t)$ is compact.
(c) Use \eqref{equ:general} and (b). The proof is complete.
\end{proof}
The following result is a simple relation between differentiability in the strong topology and in the uniform operator topology; the proof is omitted.
\begin{lem}\label{lem:reldiff}
Let $f:~ I \rightarrow \mathcal{L}(X,Y)$, where $I$ is an interval of $\mathbb{R}$.
\begin{enumerate}[(a)]
\item If $t \mapsto f(t)x$ is continuously differentiable for each $x \in X$, then $t \mapsto f(t)$ is norm continuous.
\item If $t \mapsto f(t)x$ is differentiable for each $x \in X$, let $G(t)x = f'(t)x$, $\forall x \in X$. If $t \mapsto G(t)$ is norm continuous, then $t \mapsto f(t)$ is norm continuously differentiable.
\end{enumerate}
\end{lem}
\begin{lem}\label{lem:diff}
\begin{enumerate}[(a)]
\item If $V$ is strongly continuously differentiable (resp. norm continuously differentiable) on $[0,\infty)$, and $V(0) = 0$, so is $S_A \diamond V$. Moreover, $S_A \diamond V = S_A * V'$.
\item If $T_{A_0}$ is strongly continuously differentiable on $[\alpha, \infty)$ and $V$ is strongly continuously differentiable on $[\beta, \infty)$, then $S_A \diamond V$ is strongly continuously differentiable on $[\alpha + \beta, \infty)$.
\end{enumerate}
\end{lem}
\begin{proof}
(a) Since
\[
(S_A \diamond V)(t) = S_A(t)V(0) + \int_0^tS_A(s)V'(t-s)~\mathrm{d}s = (S_A * V')(t),
\]
by the assumption on $A$, $S_A * V'$ is strongly continuously differentiable (see \autoref{lem:MR} (c)). Thus, $S_A \diamond V$ is strongly continuously differentiable. For the norm continuously differentiable case, $S_A \diamond V'$ is norm continuous by \autoref{lem:NC} (b), and the result follows from \autoref{lem:reldiff} (b).
(b) For $t \geq \alpha + \beta$,
\begin{align*}
(S_A \diamond V)(t) & = T_{A_0}(t-\beta)(S_A \diamond V)(\beta) + (S_A \diamond V(\beta + \cdot))(t-\beta) \\
& = T_{A_0}(t-\beta)(S_A \diamond V)(\beta) + S_A(t-\beta)V(\beta) + \left(S_A * V(\beta + \cdot)'\right)(t-\beta).
\end{align*}
The first term on the right-hand side is strongly continuously differentiable. By \autoref{partInt} (e), $S_A$ is strongly continuously differentiable on $[\alpha, \infty)$, so the second term is as well. By the assumption on $A$, the third term is also strongly continuously differentiable (see \autoref{lem:MR} (c)). This completes the proof.
\end{proof}
\section{Regularity of perturbed semigroups}\label{regper}
If $A$ is an MR operator and $L \in \mathcal{L}(X_0, X)$, then $A+L$ is also an MR operator. In this section, we study the regularity properties of the perturbed semigroup $T_{(A+L)_0}$ generated by $(A+L)_0$ in $X_0$. We impose regularity assumptions similar to those in \cite{NP98, BMR02} to ensure that $T_{(A+L)_0}$ preserves the regularity of $T_{A_0}$.
\begin{lem}\label{lem:regfix}
Suppose that $V \in C_s([0,\infty), \mathcal{L}(X_0, X))$ has one of the following properties,
\begin{enumerate}[(a)]
\item norm continuity on $(0,\infty)$;
\item norm continuity and compactness on $(0,\infty)$;
\item $V(0) = 0$ and strongly continuous differentiability on $[0,\infty)$.
\end{enumerate}
Then, the solution $W$ of the equation
\[
W = V + \mathcal{B}(W),
\]
has the corresponding property.
\end{lem}
\begin{proof}
If $V$ has property (a) (resp. (b)), then by \autoref{lem:NC} (resp. \autoref{lem:CP}), $\mathcal{B}^k(V)$ has property (a) (resp. (b)) for $k = 0,1,2,\cdots$. By \autoref{lem:fix}, $W = \sum\limits_{k=0}^{\infty}\mathcal{B}^k(V)$ is internally closed uniformly norm convergent, so $W$ has property (a) (resp. (b)).
If $V(0) = 0$ and $V$ is strongly continuously differentiable on $[0,\infty)$, then by \autoref{lem:diff} (a), $[\mathcal{B}(V)]' = \mathcal{B}(V')$, and $[\mathcal{B}^k(V)]' = \mathcal{B}^k(V')$ $(k \geq 1)$. (Note that $\mathcal{B}^k(V)(0) = 0$ for $k \geq 1$.) Since $\sum\limits_{k=0}^{\infty}\mathcal{B}^k(V)$ and $\sum\limits_{k=0}^{\infty}(\mathcal{B}^k(V))' = \sum\limits_{k=0}^{\infty}\mathcal{B}^k(V')$ are internally closed uniformly convergent, we have $W' = \sum\limits_{k=0}^{\infty}\mathcal{B}^k(V')$, that is, $W' = V' + \mathcal{B}(W')$, which completes the proof.
\end{proof}
Next, we consider the relation between the regularity of $S_n$ and $R_n$ (see \eqref{symbol}).
\begin{cor}\label{corr:sr}
The following are equivalent.
\begin{enumerate}[(a)]
\item $S_n$ has property $P$;
\item $S_k$ has property $P$, $\forall k \geq n$;
\item $R_n$ has property $P$;
\item $R_k$ has property $P$, $\forall k \geq n$.
\end{enumerate}
Here, the property $P$ stands for norm continuity or compactness on $(0, \infty)$; if $n \geq 1$, $P$ may also stand for strongly continuous differentiability on $[0, \infty)$.
\end{cor}
\begin{proof}
(a) $\Rightarrow$ (b) By \autoref{lem:NC} (or \autoref{lem:CP}, or \autoref{lem:diff}) and \eqref{equ:sr1}.
(b) $\Rightarrow$ (c) By \eqref{equ:sr1} and \autoref{lem:regfix}.
(c) $\Rightarrow$ (d) By the same argument as (a) $\Rightarrow$ (b).
(d) $\Rightarrow$ (a) By \eqref{equ:sr3} and \autoref{lem:NC} (or \autoref{lem:CP}, or \autoref{lem:diff}). Here note that if $ S_{n} $ (or $ R_{n} $) is compact on $ (0, \infty) $, then it is norm continuous on $ (0, \infty) $ by \autoref{cor:DP} \eqref{DPcc} \eqref{DPdd}.
\end{proof}
\begin{thm}
Let $A$ be an MR operator, $L \in \mathcal{L}(X_0, X)$. Then, the following statements hold.
\begin{enumerate}[(a)]
\item $T_{A_0}$ is immediately norm continuous if and only if $T_{(A+L)_0}$ is immediately norm continuous.
\item $T_{A_0}$ is immediately compact if and only if $T_{(A+L)_0}$ is immediately compact.
\end{enumerate}
\end{thm}
\begin{proof}
This is a direct consequence of \autoref{corr:sr} in the case $n=0$. Note that if a $C_0$ semigroup is immediately compact, then it is also immediately norm continuous (see, e.g., \cite[Lemma II.4.22]{EN00}).
\end{proof}
\begin{thm}\label{thm:perreg}
Let $A$ be an MR operator, $L \in \mathcal{L}(X_0, X)$, $\alpha \geq 0$. Then, the following statements hold.
\begin{enumerate}[(a)]
\item If $T_{A_0}$ is norm continuous on $(\alpha, \infty)$, and there exists $S_n$ which is norm continuous on $(0,\infty)$, then $T_{(A+L)_0}$ is norm continuous on $(n\alpha, \infty)$.
\item If $T_{A_0}$ is compact on $(\alpha, \infty)$, and there exists $S_n$ which is compact on $(0,\infty)$, then $T_{(A+L)_0}$ is compact (and norm continuous) on $(n\alpha, \infty)$.
\item If $T_{A_0}$ is strongly continuously differentiable on $[\alpha, \infty)$, and there exists $S_n$ ($n \geq 1$) which is strongly continuously differentiable on $[0,\infty)$, then $T_{(A+L)_0}$ is strongly continuously differentiable on $[n\alpha, \infty)$.
\end{enumerate}
\end{thm}
\begin{proof}
If $T_{A_0}$ is norm continuous (resp. compact (hence norm continuous), or differentiable) on $(\alpha, \infty)$, then $S_m$ is norm continuous (resp. compact and norm continuous, or strongly continuously differentiable) on $(m\alpha, \infty)$ by \autoref{lem:NC} (c) (resp. \autoref{lem:CP} (c), or \autoref{lem:diff} (b)). Combining with \autoref{corr:sr} and \eqref{equ:sr4}, we obtain the results. Here note that if $S_n$ is compact on $(0,\infty)$, then it is also norm continuous on $(0,\infty)$ by \autoref{cor:DP} \eqref{DPcc}.
\end{proof}
In applications, one needs to compute $S_n$ in order to show that it has higher regularity; see the examples in \cite{BNP00, BMR02}. Here we give some conditions under which $S_n$ has higher regularity. The following result is obtained directly from \autoref{lem:NC} (a), \autoref{lem:CP} (a) and \autoref{corr:sr}.
\begin{cor}\label{corr:s1} Let $A$ be an MR operator, $L \in \mathcal{L}(X_0, X)$.
\begin{enumerate}[(a)]
\item If $LT_{A_0}$ is norm continuous on $(0, \infty)$, then $S_1$ and $T_{(A+L)_0} - T_{A_0}$ are norm continuous on $(0, \infty)$. Particularly, in this case, $T_{A_0}$ is eventually norm continuous if and only if $T_{(A+L)_0}$ is eventually norm continuous.
\item If $LT_{A_0}$ is norm continuous and compact on $(0, \infty)$, then $S_1$ and $T_{(A+L)_0} - T_{A_0}$ are norm continuous and compact on $(0, \infty)$. Particularly, in this case, $T_{A_0}$ is eventually compact if and only if $T_{(A+L)_0}$ is eventually compact.
\end{enumerate}
\end{cor}
For a $p$-quasi Hille--Yosida operator $ A $, the compactness of $LT_{A_0}$ forces $S_2$ to have higher regularity; this was discovered by Ducrot, Liu and Magal \cite{DLM08}. The key fact used in their proof is the following: $\forall x^* \in (X_0)^*, x \in X$, $x^*S_A(\cdot)x \in W_{loc}^{1,p'}(0,\infty)$ (as $ x^*S_A(\cdot)x $ is of bounded $ p' $-variation), where $1/{p'} + 1/p = 1$ (see also \autoref{rmk:cc}).
\begin{cor}\label{corr:s2}
Let $A$ be a quasi Hille--Yosida operator. If $LT_{A_0}$ is compact on $(0, \infty)$, then $S_2$ is norm continuous and compact on $(0,\infty)$. Particularly, in this case, $T_{A_0}$ is eventually norm continuous (resp. eventually compact) if and only if $T_{(A+L)_0}$ is eventually norm continuous (resp. eventually compact).
\end{cor}
\begin{proof}
Since $LT_{A_0}$ is compact on $(0, \infty)$, $LS_A \diamond LT_{A_0}$ is norm continuous (\cite[Proposition 4.8]{DLM08}) and compact (\autoref{lem:CP} (a)) on $(0, \infty)$. Thus, $S_2$ is norm continuous and compact on $(0, \infty)$ (\autoref{lem:CP} (b)). Next we show that if $LT_{A_0}$ is compact on $(0, \infty)$, then so is $LT_{(A+L)_0}$. Indeed, by \autoref{thm:mrper}, we have
\[
LT_{(A+L)_0} = LT_{A_0} + LS_A \diamond LT_{(A+L)_0},
\]
and by \autoref{lem:CP} (a), $LS_A \diamond LT_{(A+L)_0}$ is compact on $(0, \infty)$, which yields the compactness of $LT_{(A+L)_0}$. The last statement then follows from \autoref{thm:perreg} and the previous argument.
\end{proof}
\noindent\emph{Problem}: is it true that if $ LS_1 $ is compact, then $ S_3 $ is compact and norm continuous?
The above results do not show whether $T_{(A+L)_0}$ is immediately differentiable when $T_{A_0}$ is. In fact, even when $A$ is the generator of a $C_0$ semigroup, this may fail \cite{Ren95}. We need a more detailed characterization of differentiability. Set
\[
D_{\beta, c} \triangleq \{\lambda \in \mathbb{C}: \mathrm{Re}\,\lambda \geq c - \beta \log|\mathrm{Im}\,\lambda|\},
\]
where $\beta >0,~ c \in \mathbb{R}$. The following deep result on the characterization of differentiability is due to Pazy and Iley; see \cite[Theorem 4.1, Theorem 4.2]{Ile07}.
\begin{thm}[Pazy-Iley]\label{thm:PI}
Let $B$ be the generator of a $C_0$ semigroup $T_{B}$. The following are equivalent.
\begin{enumerate}[(a)]
\item For every $C \in \mathcal{L}(X)$, $T_{B+C}$, generated by $B+C$, is eventually strongly differentiable (resp. immediately strongly differentiable).
\item For some (resp. all) $\beta > 0$, there exist a constant $k$ and some $c \in \mathbb{R}$ such that $D_{\beta, c} \subset \rho(B)$ and $\|R(\lambda, B)\| \leq k$ for all $\lambda \in D_{\beta, c}$.
\item For some (resp. all) $\beta' > 0$, there exists $c' \in \mathbb{R}$ such that $D_{\beta', c'} \subset \rho(B)$ and $\|R(\lambda, B)\| \rightarrow 0$ as $|\lambda| \rightarrow \infty$, $\lambda \in D_{\beta', c'}$.
\end{enumerate}
\end{thm}
\begin{rmk}
The implication (a) $\Rightarrow$ (c) was proved by Pazy \cite{Paz68}; the remaining equivalences were proved by Iley \cite{Ile07}. See \cite[Section 3]{BK10} for more results on the characterization of differentiability.
\end{rmk}
Pazy \cite{Paz68} characterized the generators of eventually and immediately strongly differentiable semigroups as follows.
\begin{thm}[Pazy criterion of differentiability]\label{thm:pz}
Let $B$ be the generator of the $C_0$ semigroup $T$ with $\|T(t)\| \leq M e^{\omega t}$. Then, $T$ is eventually (resp. immediately) strongly differentiable if and only if there are some (resp. all) $\beta > 0$, together with $c \in \mathbb{R}$, $m \in \mathbb{N}$, $k > 0$, such that $D_{\beta, c} \subset \rho(B)$ and
\[
\|R(\lambda, B)\| \leq k|\mathrm{Im}\, \lambda|^m,
\]
for all $\lambda \in D_{\beta, c}$ with $\mathrm{Re}\, \lambda \leq \omega$. In this case, $T$ is strongly differentiable on $((m + 2)/\beta, \infty)$.
\end{thm}
Now the question arises: for which MR operators $A$ is $T_{(A+L)_0}$ eventually differentiable for all $L \in \mathcal{L}(X_0, X)$? By \autoref{thm:PI}, $A_0$ must at least satisfy condition (b) or (c) of \autoref{thm:PI} in $X_0$ (i.e., consider the case $L \in \mathcal{L}(X_0)$).
When $\|R(\lambda, A)\|_{\mathcal{L}(X)}\|L\|_{\mathcal{L}(X_0,X)} < 1$, by \autoref{partop}, we have
\[
R(\lambda, (A+L)_0) = R(\lambda, A+L)|_{X_0} = (I - R(\lambda, A)L)^{-1}R(\lambda, A)|_{X_0} = (I - R(\lambda, A)L)^{-1}R(\lambda, A_0),
\]
and so
\begin{equation}\label{equ:gj1}
\|R(\lambda, (A+L)_0)\|_{\mathcal{L}(X_0)} \leq \frac{\|R(\lambda, A_0)\|_{\mathcal{L}(X_0)}}{1 - \|R(\lambda, A)\|_{\mathcal{L}(X)}\|L\|_{\mathcal{L}(X_0,X)}}.
\end{equation}
We will use this estimate to characterize the differentiability of $T_{(A+L)_0}$.
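The resolvent identity used in the estimates below can be checked directly; a minimal sketch, assuming $\lambda, \mu \in \rho(A)$ and $R(\lambda, A)|_{X_0} = R(\lambda, A_0)$:

```latex
% Resolvent equation: R(\lambda,A) - R(\mu,A) = (\mu-\lambda) R(\lambda,A) R(\mu,A).
% Since \operatorname{ran} R(\mu,A) = D(A) \subset X_0 = \overline{D(A)},
% the factor R(\lambda,A) in the product may be replaced by R(\lambda,A_0):
\[
  R(\lambda, A)
  = R(\mu, A) + (\mu - \lambda) R(\lambda, A) R(\mu, A)
  = (\mu - \lambda) R(\lambda, A_0) R(\mu, A) + R(\mu, A).
\]
```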
\begin{thm}
Let $A$ be an MR operator satisfying $\limsup\limits_{\lambda \rightarrow \infty} \|\lambda R(\lambda, A)\| < \infty$ (e.g., $A$ is a Hille--Yosida operator). Then, $T_{(A+L)_0}$ is eventually strongly differentiable for all $L \in \mathcal{L}(X_0, X)$ if and only if $A_0$ satisfies the Pazy--Iley condition, i.e., condition (b) or (c) of \autoref{thm:PI}.
\end{thm}
\begin{proof}
We only need to prove sufficiency. Assume $L \neq 0$ and that $A_0$ satisfies condition (b) or (c) of \autoref{thm:PI}. Let $\varepsilon = \frac{1}{4}\|L\|^{-1}$, $\lambda \in D_{\beta, c}$, $\mu = \omega + |\lambda|$, where $\omega > 0$ is sufficiently large that (by the assumption $\limsup\limits_{\lambda \rightarrow \infty} \|\lambda R(\lambda, A)\| < \infty$)
\[
\|R(\mu, A)\| \leq \frac{M}{\mu} \leq \frac{M}{\omega} < \varepsilon.
\]
Then,
\begin{align*}
\|R(\lambda, A)\| & = \|(\mu - \lambda)R(\lambda, A_0)R(\mu, A) + R(\mu, A)\| \\
& \leq M \frac{|\mu - \lambda|}{\mu}\|R(\lambda, A_0)\| + \varepsilon \\
& \leq M \frac{2|\lambda|+\omega}{|\lambda|+\omega}\|R(\lambda, A_0)\| + \varepsilon.
\end{align*}
Let $K$ be sufficiently large, $D_{\beta', c'} \subset D_{\beta, c}$, such that
\[
\|R(\lambda, A_0)\| \leq \frac{\varepsilon}{2M},
\]
for $\lambda \in D_{\beta', c'}$, $|\lambda| > K$. Hence by \eqref{equ:gj1}, we get
\[
\| R(\lambda, (A+L)_0) \|_{\mathcal{L}(X_0)} \leq 2\|R(\lambda, A_0) \|_{\mathcal{L}(X_0)}.
\]
The result now follows from \autoref{thm:pz}.
\end{proof}
Next, we consider another class of operators. A $C_0$ semigroup $T$ with generator $B$ is said to be of \emph{Crandall--Pazy class} if $T$ is immediately strongly differentiable and
\begin{equation}\label{equ:cp1}
\|BT(t)\| = O(t^{-\alpha}), ~~\text{as}~ t \rightarrow 0^+,
\end{equation}
for some $\alpha \geq 1$. We also call the generator $B$ a \emph{Crandall--Pazy operator}. It was shown in \cite{CP69} that this is equivalent to
\begin{equation}\label{equ:cp2}
\|R(iy,B)\| = O(|y|^{-\beta}), ~\text{as}~ |y| \rightarrow \infty,
\end{equation}
for some $1 \geq \beta > 0$. Indeed, \eqref{equ:cp1} implies \eqref{equ:cp2} with $\beta = \frac{1}{\alpha}$, and \eqref{equ:cp2} implies \eqref{equ:cp1} for any $\alpha > \frac{1}{\beta}$. See \cite[Theorem 5.3]{Ile07} for a new characterization of the Crandall--Pazy class.
\begin{thm}
Let $A$ be an MR operator satisfying $\limsup\limits_{\substack{\mu \rightarrow \infty \\ \mu > \widehat{\omega}}} \|(\mu - \widehat{\omega})^{\frac{1}{p}} R(\mu, A)\| < \infty$, where $p \geq 1, ~\widehat{\omega} \in \mathbb{R}$ (e.g., $A$ is a $p$-quasi Hille--Yosida operator, see \autoref{lem:pHY} (c)). Assume, in addition, that $A_0$ is a Crandall--Pazy operator satisfying
\[
\|R(iy, A_0)\|_{\mathcal{L}(X_0)} = O(|y|^{-\beta}),~ |y| \rightarrow \infty, ~ \beta > 1- \frac{1}{p}.
\]
Then, $(A+L)_0$ is still a Crandall--Pazy operator for any $L \in \mathcal{L}(X_0, X)$.
\end{thm}
\begin{proof}
Assume $L \neq 0$. Let $K_1 > 0$ be sufficiently large such that for all $|y| > K_1$,
\[
\|R(iy, A_0)\| \leq \frac{M}{|y|^\beta}, ~~\|R(\mu, A)\| \leq \frac{M}{|\mu - \widehat{\omega}|^{\frac{1}{p}}},
\]
where $\mu = \widehat{\omega} + |y|$. Then,
\begin{align*}
\|R(iy, A)\| & = \|(\mu - iy)R(iy, A_0)R(\mu, A) + R(\mu, A)\| \notag \\
& \leq M \frac{2|y| + \widehat{\omega}}{|y|^\beta \cdot |y|^{\frac{1}{p}}} + \frac{M}{|y|^{\frac{1}{p}}} \notag \\
& \leq \frac{\widetilde{M}}{|y|^\alpha},
\end{align*}
where $\alpha = \min\{\beta + 1/p -1, ~1/p\} > 0$. Let $K_2 > K_1$ be sufficiently large such that
\[
\|R(iy, A)\| \leq \frac{\|L\|^{-1}}{2},
\]
provided $|y| > K_2$. By \eqref{equ:gj1}, we have
\[
\| R(iy, (A+L)_0) \| \leq 2\|R(iy, A_0)\|.
\]
The proof is complete.
\end{proof}
Finally, we consider the analyticity.
\begin{thm}\label{thm:alm}
Suppose that $A$ is an MR operator and $T_{A_0}$ is analytic. Then, for any $L \in \mathcal{L}(X_0,X)$, $T_{(A+L)_0}$ is analytic.
\end{thm}
\begin{proof}
Assume $L \neq 0$. Since $T_{A_0}$ is analytic and $A$ is an MR operator, by \cite[Corollary 3.7.18]{ABHN11} and \autoref{lem:MR} (b), there is a constant $K > 0$, such that
\[
\|R(iy, A_0)\| \leq \frac{M}{|y|}, ~~\|R(\mu, A)\| \leq \varepsilon,
\]
provided $|y| > K$, where $\mu = |y|$, $\varepsilon = \frac{\|L\|^{-1}}{2(2M+1)}$. Hence,
\begin{align*}
\|R(iy, A)\| & = \|(\mu - iy)R(iy, A_0)R(\mu, A) + R(\mu, A)\| \\
& \leq \frac{2|y|M}{|y|}\varepsilon + \varepsilon = \|L\|^{-1}/2.
\end{align*}
The result follows from \eqref{equ:gj1} and \cite[Corollary 3.7.18]{ABHN11}.
\end{proof}
Combining \autoref{lem:almost} (b) with \autoref{thm:alm}, we obtain the following perturbation result for almost sectorial operators. See also \cite{DMP10} for more results on relatively bounded perturbations of almost sectorial operators.
\begin{cor}
If $A$ is a $p$-almost sectorial operator ($p \geq 1$), so is $A+L$ for any $L \in \mathcal{L}(X_0, X)$.
\end{cor}
\section{Critical and essential growth bound of perturbed semigroups}\label{criess}
We still assume that $A$ is an MR operator and $L \in \mathcal{L}(X_0, X)$. In this section, we study the critical and essential growth bounds of the perturbed semigroup $T_{(A+L)_0}$. In many applications, one hopes that the following hold:
\[
\omega_{\mathrm{crit}}(T_{(A+L)_0}) \leq \omega_{\mathrm{crit}}(T_{A_0}), \quad \omega_{\mathrm{ess}}(T_{(A+L)_0}) \leq \omega_{\mathrm{ess}}(T_{A_0}),
\]
see \autoref{comments} for a short discussion. The following elementary lemma seems to be well known, especially in the case $j = 0$; see also the proof of \cite[Proposition 3.7]{BNP00}.
\begin{lem}\label{lem:exponent}
Suppose $f_j: ~[0, \infty) \rightarrow [0, \infty)$, $ j = 0,1,2,\cdots $, satisfy
\[
f_j(t+s) \leq \sum_{k=0}^{j}f_k(t)f_{j-k}(s), ~\forall t,s > 0, ~j = 0,1,2,\cdots,
\]
and each $ f_j $ is bounded on compact intervals.
Set ($ \ln 0 := - \infty $)
\[\label{asy}\tag{$ \divideontimes $}
\omega := \lim_{t \rightarrow \infty} \frac{1}{t}\ln f_0(t) = \inf_{t>0}\frac{1}{t}\ln f_0(t) = \inf\{ \omega \in \mathbb{R}: \exists M \geq 0, f_0(t) \leq Me^{\omega t}, \forall t \geq 0 \}.
\]
Then, for any $\gamma > \omega, ~\forall j \geq 0, ~\exists M_j \geq 0$, such that $f_j(t) \leq M_je^{\gamma t}, ~\forall t \geq 0$.
\end{lem}
\begin{proof}
Note that by a standard result (see, e.g., \cite[Lemma IV.2.3]{EN00}), the assumption on $ f_0 $ gives that \eqref{asy} holds.
By the definition of $\omega$, for any $\gamma > \omega$, there is $t_0 > 0$, such that $f_0(t_0) \leq \frac{1}{2}e^{\gamma t_0}$. By induction, one gets
\begin{align}
f_j(t) & = f_j(t-t_0+t_0) \leq \sum_{k=0}^{j}f_k(t-t_0)f_{j-k}(t_0) \notag\\
& \leq \frac{1}{2}e^{\gamma t_0}f_j(t-t_0) + M'_je^{\gamma t}, \label{eq:12aa}
\end{align}
where $t \geq t_0$ and the constant $M'_j$ does not depend on $t$.
Set
\[
a_n(s) := f_j(nt_0 + s),~~s \in [0,t_0).
\]
Using \eqref{eq:12aa}, we have
\begin{align*}
a_n(s) & \leq \frac{1}{2}e^{\gamma t_0}a_{n-1}(s) + M'_je^{\gamma (nt_0 + s)} \\
& \leq \frac{1}{2^2}e^{2 \gamma t_0}a_{n-2}(s) + (\frac{1}{2} + 1)M'_je^{\gamma (nt_0 + s)} \\
& \leq \cdots \\
& \leq \frac{1}{2^n}e^{n \gamma t_0}a_0(s) + (\frac{1}{2^{n-1}} + \frac{1}{2^{n-2}} + \cdots + 1)M'_je^{\gamma (nt_0 + s)} \\
& \leq M_j e^{\gamma (nt_0 + s)},
\end{align*}
for some constant $M_j \geq 0$. This completes the proof.
\end{proof}
Since the definition of the critical spectrum depends on the space $l^{\infty}_{T}(X)$, we need some preliminaries; see \cite{BNP00} for similar results. Recall from \autoref{pre} the meanings of $S_A \diamond V$, $S_n$, $R_n$ (see \autoref{nddo}), $\widetilde{S}_n, \widehat{S}_n, \overline{S}_{n}$ (which are defined similarly as for a $ C_0 $ semigroup $ T $ in \autoref{spec}), and so on.
\begin{lem}\label{lem:eq}
$l_{T_{(A+L)_0}}^\infty(X_0) = l_{T_{A_0}}^\infty(X_0)$.
\end{lem}
\begin{proof}
This follows from
\[
\| T_{(A+L)_0}(h) - T_{A_0}(h) \| = \| (S_A \diamond LT_{(A+L)_0})(h) \| \leq M\delta(h) \rightarrow 0, ~~\text{as}~ h \rightarrow 0^+.
\]
\end{proof}
\begin{lem}\label{lem:abc}
$\widetilde{S}_n(t) l_{T_{A_0}}^\infty(X_0) \subset l_{T_{A_0}}^\infty(X_0),~\forall t \geq 0$.
\end{lem}
\begin{proof}
Consider
\begin{align*}
& T_{A_0}(h)S_n(t) - S_n(t) \\
= & T_{A_0}(h)(S_A \diamond LS_{n-1})(t) - (S_A \diamond LS_{n-1})(t) \\
= & (S_A \diamond LS_{n-1})(t+h) - (S_A \diamond LS_{n-1})(t) - (S_A \diamond LS_{n-1}(t+\cdot))(h) \\
= & (S_A \diamond LS_{n-1}(h+\cdot))(t) - (S_A \diamond LS_{n-1})(t) + T_{A_0}(t)(S_A \diamond LS_{n-1})(h) + W_1(h) \\
= & (S_A \diamond L\sum_{k=0}^{n-1}S_k(\cdot)S_{n-1-k}(h))(t) - (S_A \diamond LS_{n-1})(t) + W_2(h) \\
= & (S_A \diamond LS_{n-1}(\cdot)[T_{A_0}(h) - I])(t) + W_3(h) \\
= & S_n(t)(T_{A_0}(h) - I) + W_3(h),
\end{align*}
the second and third equalities being consequences of \eqref{equ:sf2}, and the fourth a consequence of \autoref{lem:srelation}, where
\begin{align*}
W_1(h) & := -(S_A \diamond LS_{n-1}(t+\cdot))(h), \\
W_2(h) & := W_1(h) + T_{A_0}(t)(S_A \diamond LS_{n-1})(h), \\
W_3(h) & := W_2(h) + \sum_{k=0}^{n-2}(S_A \diamond LS_k(\cdot)S_{n-1-k}(h))(t), \\
& = W_2(h) + \sum_{k=0}^{n-2}S_{k+1}(t)S_{n-1-k}(h).
\end{align*}
Note that for $k \geq 1$, $\|S_k(h)\| \leq \widetilde{M_k} \delta(h)$, for some constant $\widetilde{M_k}>0$. Hence, $\|W_3(h)\| \leq \widetilde{M}\delta(h)$, which shows the result.
\end{proof}
\begin{lem} \label{lem:sv}
If $(S_A \diamond V) (t)$ is (right) norm continuous at $t_0$, then $(S_A \diamond V)(t_0)l^\infty(X_0) \subset l_{T_{A_0}}^\infty(X_0)$, where $V \in C_s([0,\infty), \mathcal{L}(X_0, X))$.
\end{lem}
\begin{proof}
We need to show $\|T_{A_0}(h)(S_A \diamond V)(t_0) - (S_A \diamond V)(t_0)\| \rightarrow 0$, as $h \rightarrow 0^+$, which follows from
\[
T_{A_0}(h)(S_A \diamond V)(t_0) - (S_A \diamond V)(t_0) = (S_A \diamond V)(t_0 + h) - (S_A \diamond V)(t_0) - (S_A \diamond V(t_0+\cdot))(h).
\]
\end{proof}
The following are the main results of this section.
\begin{lem}\label{lem:sexponent}
\begin{enumerate}[(a)]
\item In $\mathcal{L}(X_0) / \mathcal{K}(X_0)$, $\forall \gamma > \omega_{\mathrm{ess}}(T_{A_0}), ~\forall j \geq 0$, there is $M_j^1 \geq 0$, such that $\|\overline{S}_j(t)\| \leq M_j^1e^{\gamma t}$.
\item In $l^\infty(X_0)/l_{T_{A_0}}^\infty(X_0)$, $\forall \gamma > \omega_{\mathrm{crit}}(T_{A_0}), ~\forall j \geq 0$, there is $ M_j^2 \geq 0$, such that $\|\widehat{S}_j(t)\| \leq M_j^2e^{\gamma t}$.
\end{enumerate}
\end{lem}
\begin{proof}
By \autoref{lem:abc}, $\widehat{S}_j$ can be defined in $l^\infty(X_0)/l_{T_{A_0}}^\infty(X_0)$. Thus, by \autoref{lem:srelation}, we have
\[
\widehat{S}_j(t+s) = \sum\limits_{k=0}^{j}\widehat{S}_k(t)\widehat{S}_{j-k}(s) ~\text{in}~ l^\infty(X_0)/l_{T_{A_0}}^\infty(X_0),
\]
and
\[
\overline{S}_j(t+s) = \sum\limits_{k=0}^{j}\overline{S}_k(t)\overline{S}_{j-k}(s) ~\text{in}~ \mathcal{L}(X_0) / \mathcal{K}(X_0).
\]
The proof is complete by \autoref{lem:exponent}.
\end{proof}
\begin{thm}\label{thm:main1}Let $\{t_n\}$ be a positive sequence with $t_n \rightarrow \infty$ as $n \rightarrow \infty$.
\begin{enumerate}[(a)]
\item If there is $k \in \mathbb{N}$ such that $R_k(t_n)$ is compact, $n = 1, 2, 3, \cdots$, then $\omega_{\mathrm{ess}}(T_{(A+L)_0}) \leq \omega_{\mathrm{ess}}(T_{A_0})$.
\item If there is $k \in \mathbb{N}$ such that $R_k$ is norm continuous at $t_1, t_2, t_3, \cdots$, then $\omega_{\mathrm{crit}}(T_{(A+L)_0}) \leq \omega_{\mathrm{crit}}(T_{A_0})$.
\item If $R_1$ is compact (resp. norm continuous) at $t_1, t_2, t_3, \cdots$, then $\omega_{\mathrm{ess}}(T_{(A+L)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$ (resp. $\omega_{\mathrm{crit}}(T_{(A+L)_0}) = \omega_{\mathrm{crit}}(T_{A_0})$).
\end{enumerate}
\end{thm}
\begin{proof}
(a) Since $R_k(t_n)$ is compact, in $\mathcal{L}(X_0) / \mathcal{K}(X_0)$, $\overline{R}_k(t_n) = 0$, for $n=1,2,3, \cdots$. Hence,
\[
\overline{T}_{(A+L)_0}(t_n) = \sum_{j=0}^{k-1}\overline{S}_j(t_n),
\]
for $n=1,2,3, \cdots$. By \autoref{lem:sexponent}, we have
\[
\omega_{\mathrm{ess}}(T_{(A+L)_0}) = \lim_{n \rightarrow \infty} \frac{1}{t_n}\ln \|\overline{T}_{(A+L)_0}(t_n)\| \leq \omega_{\mathrm{ess}}(T_{A_0}),
\]
which completes the proof of (a).
(b) Note that due to \autoref{lem:eq}, $l^\infty(X_0)/l_{T_{(A+L)_0}}^\infty(X_0) = l^\infty(X_0)/l_{T_{A_0}}^\infty(X_0)$. Thus, by \autoref{lem:sv}, $\widehat{R}_k(t_n) = 0$, for $n=1,2,3,\cdots$. The rest of the proof is similar to that of (a).
(c) In this case $\overline{T}_{(A+L)_0}(t_n) = \overline{T}_{A_0}(t_n)$ (resp. $\widehat{T}_{(A+L)_0}(t_n) = \widehat{T}_{A_0}(t_n)$), for $n=1,2,3, \cdots$, which shows the result.
\end{proof}
In many cases, the explicit expression of $T_{(A+L)_0}$ is unknown, so calculating $R_k$ is hard. In the following, we give a more refined result than \autoref{thm:main1}. Consider $T_{A_0}$ as the perturbation of $T_{(A+L)_0}$ under $-L$. Set
\[
\mathcal{D}(W) := -S_{A+L} \diamond LW,~W \in C_s([0,\infty), \mathcal{L}(X_0, X)).
\]
\begin{lem}\label{lem:permm}
The following statements are equivalent, where $P$ stands for norm continuity, or compactness on $(0,\infty)$.
\begin{enumerate}[(a)]
\item $\mathcal{B}^k(T_{(A+L)_0}) = R_k$ has property $P$ ($\Leftrightarrow$ $\mathcal{B}^k(T_{A_0}) = S_k$ has property $P$).
\item $\mathcal{D}^k(T_{(A+L)_0})$ has property $P$ ($\Leftrightarrow$ $\mathcal{D}^k(T_{A_0})$ has property $P$).
\end{enumerate}
\end{lem}
\begin{proof}
Here note that if $ S_{n} $ (or $ R_{n} $) is compact on $ (0, \infty) $, then it is norm continuous on $ (0, \infty) $ by \autoref{cor:DP} \eqref{DPcc} (or \eqref{DPdd}). So if $ P $ stands for compactness on $(0,\infty)$, it may equally be read as norm continuity together with compactness on $(0,\infty)$.
It suffices to consider one direction, e.g., (a) $\Rightarrow$ (b). Since
\[
S_{A+L} = S_A + S_A \diamond LS_{A + L},
\]
we have
\[
\mathcal{D} = -\mathcal{B} + \mathcal{B}\mathcal{D}.
\]
We prove that $\mathcal{B}^{k-l}\mathcal{D}^l(T_{(A+L)_0})$ has property $P$ by induction, where $l=0,1,2,\cdots,k$. The case $l=0$ is (a). Since
\[
\mathcal{B}^{k-l}\mathcal{D}^l = \mathcal{B}^{k-l}\mathcal{D}^{l-1}\mathcal{D} = -\mathcal{B}^{k-(l-1)}\mathcal{D}^{l-1} + \mathcal{B}\mathcal{B}^{k-l}\mathcal{D}^l,
\]
and, by induction, $\mathcal{B}^{k-(l-1)}\mathcal{D}^{l-1}(T_{(A+L)_0})$ has property $P$, it follows from \autoref{lem:regfix} that $\mathcal{B}^{k-l}\mathcal{D}^l(T_{(A+L)_0})$ has property $P$. Note that the equivalences in the brackets follow from \autoref{corr:sr}. This completes the proof.
\end{proof}
\begin{thm}\label{thm:main2}
\begin{enumerate}[(a)]
\item If there is $S_k$ (or $R_k$) such that it is compact on $(0,\infty)$, then $\omega_{\mathrm{ess}}(T_{(A+L)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$.
\item If there is $S_k$ (or $R_k$) such that it is norm continuous on $(0,\infty)$, then $\omega_{\mathrm{crit}}(T_{(A+L)_0}) = \omega_{\mathrm{crit}}(T_{A_0})$.
\end{enumerate}
\end{thm}
\begin{proof}
Use \autoref{lem:permm} and apply \autoref{thm:main1} twice.
\end{proof}
Using \autoref{corr:s1} and \autoref{thm:main2}, we have the following corollary.
\begin{cor}\label{cor:lt}Let $A$ be an MR operator, $L \in \mathcal{L}(X_0, X)$.
\begin{enumerate}[(a)]
\item If $LT_{A_0}$ is norm continuous on $(0,\infty)$, then $\omega_{\mathrm{crit}}(T_{(A+L)_0}) = \omega_{\mathrm{crit}}(T_{A_0})$.
\item If $LT_{A_0}$ is norm continuous and compact on $(0,\infty)$, then $\omega_{\mathrm{ess}}(T_{(A+L)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$.
\end{enumerate}
\end{cor}
Using \autoref{corr:s2} and \autoref{thm:main2}, we have the following corollary due to \cite{DLM08}.
\begin{cor}[{\cite[Theorem 1.2]{DLM08}}]\label{cor:lmr}
Suppose that $A$ is a quasi Hille--Yosida operator and $LT_{A_0}$ is compact on $(0,\infty)$. Then $\omega_{\mathrm{ess}}(T_{(A+L)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$.
\end{cor}
\begin{rmk}
It was only shown in \cite[Theorem 1.2]{DLM08} that $\omega_{\mathrm{ess}}(T_{(A+L)_0}) \leq \omega_{\mathrm{ess}}(T_{A_0})$. However, \cite[Theorem 1.2]{DLM08} in fact yields the equality $\omega_{\mathrm{ess}}(T_{(A+L)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$, because $LT_{(A+L)_0}$ is also compact on $(0,\infty)$; see the proof of \autoref{corr:s2}.
\end{rmk}
\section{Age-structured population models in $ L^p $ spaces}\label{model}
Consider the following age-structured population model in $ L^p $ (see \cite{Web85, Web08, Thi91, Thi98, Rha98, BHM05, MR09a}):
\begin{equation}\label{equ:age}
\begin{cases}
(\partial_t + \partial_a)u(t, a) = A(a)u(t, a), ~t > 0, a \in (0, c),\\
u(t, 0) = \int_{0}^{c} C(a) u(t, a) ~\mathrm{d} a, ~ t > 0,\\
u(0, a) = u_0(a), ~ a \in (0, c), ~ u_0 \in L^p((0,c), E),
\end{cases}
\end{equation}
where $ 1 \leq p < \infty $, $ c \in (0, \infty] $ and $ E $ is a Banach space ($ u(t, a) \in E $); for all $ a \in (0,c) $, $ C(a): E \to E $ are bounded linear operators and $ A(a) : E \to E $ are closed linear operators (the detailed assumptions are given in \autoref{sub:ass}).
The approach taken in this section to the study of model \eqref{equ:age} goes back to Thieme \cite{Thi91} as early as 1991. It seems that this model in $ L^p((0, c), E) $ ($ p > 1 $) was first investigated by Magal and Ruan in \cite{MR07} (see also \cite{Thi08}). We note that, in control theory (and approximation theory), age-structured population models can be viewed as boundary control systems, and in that case the state space is often taken to be $ L^p((0, c), E) $ ($ p > 1 $); see, e.g., \cite{CZ95}. In addition, when $ c = \infty $, since $ L^{p}((0,\infty), E) $ is not contained in $ L^{1}((0,\infty), E) $, one obtains in some sense more solutions of \eqref{equ:age} by taking initial data $ u_0 \in \bigcup_{1 \leq p < \infty} L^{p}((0,\infty), E) $. Finally, we mention that the geometric properties of $ L^2((0, c), E) $ are usually better than those of $ L^1((0, c), E) $.
\subsection{evolution family: mostly review}
Before stating some natural standing assumptions on \eqref{equ:age}, we first recall some background on evolution families.
We say that $ \{U(a, s)\}_{0 \leq s \leq a < c} $ is an exponentially bounded linear evolution family (or, for short, an \emph{evolution family}) if it satisfies the following:
\begin{enumerate}[(1)]
\item $ U(a, s) \in \mathcal{L}(E, E) $, $ U(a, s) = U(a, \tau) U(\tau, s) $ and $ U(a, a) = I $ for all $ 0 \leq s \leq \tau \leq a < c $;
\item $ (a, s, x) \mapsto U(a, s)x $ is continuous in $ 0 \leq s \leq a < c $ and $ x \in E $;
\item there exist constants $ C \geq 1 $ and $ \omega \in \mathbb{R} $ such that
\[
|U(a,s)| \leq C e^{\omega (a - s)}, ~ 0 \leq s \leq a < c.
\]
\end{enumerate}
Define the Howland semigroup $ T_0 $ on $ L^p((0, c), E) $ (with respect to $ \{U(a, s)\}_{0 \leq s \leq a < c} $) as
\begin{equation}\label{equ:how}
(T_0(t) \varphi) (a) = \begin{cases}
U(a, a -t)\varphi(a - t), &~ a \geq t,\\
0, &~ a \in [0, t],
\end{cases}
\end{equation}
which is a $ C_0 $ semigroup, and let $ \mathcal{B}_0 $ denote its generator.
Note that $ D(\mathcal{B}_0) \subset C([0,c), E) $. In fact, a simple computation shows that if $ \lambda \in \rho(\mathcal{B}_0) $, then
\[
(R(\lambda, \mathcal{B}_0) f )(a) = \int_{0}^{a} e^{-\lambda(a - s)} U(a, s) f(s) ~\mathrm{d} s, ~ f \in L^p((0, c), E).
\]
\begin{proof}
Let $ R(\lambda, \mathcal{B}_0) f = \varphi $, so that $ (\lambda - \mathcal{B}_0) \varphi = f $.
Note that we have
\[
\varphi - e^{-\lambda t}T_0(t) \varphi = (\lambda - \mathcal{B}_0) \int_{0}^{t} e^{-\lambda s} T_0(s) \varphi ~\mathrm{d} s = \int_{0}^{t} e^{-\lambda s} T_0(s) f ~\mathrm{d} s.
\]
For $ t \geq a $, we see $ (\varphi - e^{-\lambda t}T_0(t) \varphi) (a) = \varphi (a) $ and
\[
\int_{0}^{t} (e^{-\lambda s} T_0(s) f) (a) ~\mathrm{d} s = \int_{0}^{a} e^{-\lambda s} U(a, a - s) f(a - s) ~\mathrm{d} s = \int_{0}^{a} e^{-\lambda(a - s)} U(a, s) f(s) ~\mathrm{d} s,
\]
completing the proof.
\end{proof}
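As a sanity check, the identity $(\lambda - \mathcal{B}_0)\varphi = f$ for $\varphi(a) = \int_0^a e^{-\lambda(a-s)}U(a,s)f(s)\,\mathrm{d}s$ can be verified numerically in a toy scalar case. The sketch below assumes, purely for illustration, the evolution family $U(a,s) = e^{\omega(a-s)}$ on $(0,1)$, so that $\mathcal{B}_0 = -\frac{\mathrm{d}}{\mathrm{d}a} + \omega$ with boundary condition $\varphi(0) = 0$.

```python
import numpy as np

# Toy scalar case (an illustrative assumption, not from the text):
# U(a, s) = e^{omega (a - s)} on (0, 1), so B_0 = -d/da + omega with phi(0) = 0.
omega, lam, N = 0.7, 3.0, 2000
a = np.linspace(0.0, 1.0, N)
h = a[1] - a[0]
f = np.cos(5.0 * a)                    # a sample inhomogeneity f

# phi(x) = \int_0^x e^{-lam (x - s)} U(x, s) f(s) ds, via the trapezoid rule
phi = np.zeros(N)
for i in range(1, N):
    g = np.exp((omega - lam) * (a[i] - a[:i + 1])) * f[:i + 1]
    phi[i] = h * (g.sum() - 0.5 * (g[0] + g[-1]))

# check (lam - B_0) phi = lam*phi + phi' - omega*phi = f by finite differences
residual = lam * phi + np.gradient(phi, a) - omega * phi - f
assert np.max(np.abs(residual[5:-5])) < 1e-2
```

The residual is checked away from the endpoints, where `np.gradient` falls back to one-sided differences.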
\begin{thm}[See {\cite[Section 3.3]{CL99}}]\label{thm:evo}
$ \omega(T_0) = s(\mathcal{B}_0) = \omega(U) $, where $ \omega(U) $ is the growth bound of $ \{U(a,s)\} $, i.e.,
\[
\omega(U) : = \inf \{ \omega: ~\text{there is $ M(\omega) \geq 1 $ such that}~ |U(a,s)| \leq M(\omega)e^{\omega(a - s)} ~\text{for all}~ a \geq s \}.
\]
\end{thm}
\begin{rmk}
Necessary and sufficient conditions for $ \mathcal{B}_0 $ to be the generator of the Howland semigroup \eqref{equ:how} induced by an evolution family were given by Schnaubelt \cite[Theorem 2.6]{Sch96} (see also \cite[Theorem 2.4]{RSRV00}).
\end{rmk}
Let us use the evolution family $ \{ U(a,s) \}_{0 \leq s \leq a < c} $ to study the sum $ -\frac{\mathrm{d}}{\mathrm{d} a} + A(\cdot) $ on $ L^p((0, c), E) $ with $ D(-\frac{\mathrm{d}}{\mathrm{d} a} + A(\cdot)) = D(\frac{\mathrm{d}}{\mathrm{d} a}) \cap D(A(\cdot)) $, where
\[
\frac{\mathrm{d}}{\mathrm{d} a} : f \mapsto \frac{\mathrm{d} f}{\mathrm{d} a}, ~D(\frac{\mathrm{d}}{\mathrm{d} a}) = \{ f \in W^{1,p}((0, c), E): f(0) = 0 \},
\]
and $ A(\cdot) $ is the multiplication operator, i.e., $ (A(\cdot) f(\cdot))(a) = A(a) f(a) $, $ a \in (0,c) $ with
\[
D(A(\cdot)) = \{ f \in L^p((0, c), E): f(a) \in D(A(a)) ~\text{a.e.}~ a \in (0,c), A(\cdot) f(\cdot) \in L^p((0, c), E) \}.
\]
\begin{defi}\label{def:evo}
We say $ \{A(a)\}_{0 \leq a < c} $ generates an exponentially bounded linear evolution family $ \{U(a, s)\}_{0 \leq s \leq a < c} $ (in $ L^p $-sense) if $ \mathcal{B}_0 $ is the closure of $ -\frac{\mathrm{d}}{\mathrm{d} a} + A(\cdot) $ (with $ D(-\frac{\mathrm{d}}{\mathrm{d} a} + A(\cdot)) = D(\frac{\mathrm{d}}{\mathrm{d} a}) \cap D(A(\cdot)) $) on $ L^p((0, c), E) $.
\end{defi}
\begin{rmk}
Consider the following abstract non-autonomous (linear) Cauchy problems:
\[\tag{ACP} \label{acp}
\dot{x}(a) = A(a) x(a), ~x(s) = x_s \in D(A(s)), ~ a \geq s,
\]
where $ \overline{D(A(s))} = X $ for all $ s \in [0,c) $. An evolution family $ \{U(a, s)\}_{0 \leq s \leq a < c} $ is said to \emph{solve the above Cauchy problem} \eqref{acp} if for each $ s \in [0,c) $, there is a dense linear space $ Y_{s} \subset D(A(s)) $ such that for each $ x_s \in Y_{s} $, $ x(a) = U(a, s) x_s $ ($ s \leq a < c $) is $ C^1 $ and satisfies \eqref{acp} pointwise; see, e.g., \cite{Sch96} (or \cite[Definition VI.9.1]{EN00}). Now if $ \{U(a, s)\}_{0 \leq s \leq a < c} $ solves \eqref{acp}, then $ \{A(a)\}_{0 \leq a < c} $ generates $ \{U(a, s)\}_{0 \leq s \leq a < c} $ (in $ L^p $-sense); see \cite{Sch96} (or the proof of \cite[Theorem 3.12]{CL99}).
\end{rmk}
Consider the following examples.
\begin{exa}
\begin{enumerate}[(a)]
\item Assume $ A_0 $ is a generator of a $ C_0 $ semigroup $ T_0 $ and $ b(\cdot) \in C(\mathbb{R}, \mathbb{R}_+) $ (such that $ \inf b(\cdot) > 0 $). Let
\[
A(a) = b(a) A_0, ~ 0 \leq a < c.
\]
Then, for $ U(a,s) = T_0(\int_{s}^{a} b(r) ~\mathrm{d}r) $, $ \{U(a, s)\}_{0 \leq s \leq a < c} $ solves \eqref{acp}.
\item Assume $ A_0 $ is a generator of a $ C_0 $ semigroup and $ L: (0,c) \to \mathcal{L}(E, E) $ is strongly continuous with $ \sup_{a \in [0,c)} |L(a)| < \infty $. Let
\[
A(a) = A_0 + L(a), ~ 0 \leq a < c.
\]
Then, it is easy to see $ \{A(a)\}_{0 \leq a < c} $ generates an evolution family $ \{U(a, s)\}_{0 \leq s \leq a < c} $; see, e.g., the proof of \cite[Proposition 6.23]{CL99}.
\end{enumerate}
\end{exa}
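Example (a) can be checked directly in a scalar toy case. The sketch below assumes, for illustration only, $T_0(t) = e^{\mu t}$ and $b(r) = 2 + \sin r$; it verifies the evolution-family identity $U(a,s) = U(a,\tau)U(\tau,s)$ and, by a finite difference, that $x(a) = U(a,s)x_s$ satisfies $\dot{x}(a) = b(a)\mu\,x(a)$, i.e., \eqref{acp} with $A(a) = b(a)A_0$.

```python
import math
import random

# Illustrative scalar data (assumptions, not from the text): T0(t) = e^{mu t},
# b(r) = 2 + sin(r), so U(a, s) = T0(int_s^a b(r) dr) = exp(mu * B(a, s)).
mu = -0.4

def B(a, s):
    # int_s^a (2 + sin r) dr, computed in closed form
    return 2.0 * (a - s) + (math.cos(s) - math.cos(a))

def U(a, s):
    return math.exp(mu * B(a, s))

random.seed(0)
for _ in range(100):
    s = random.uniform(0.0, 5.0)
    tau = random.uniform(s, 5.0)
    a = random.uniform(tau, 5.0)
    # evolution-family identity U(a, s) = U(a, tau) U(tau, s)
    assert abs(U(a, s) - U(a, tau) * U(tau, s)) < 1e-12
    # finite-difference check of x'(a) = b(a) * mu * x(a)
    eps = 1e-6
    lhs = (U(a + eps, s) - U(a, s)) / eps
    assert abs(lhs - (2.0 + math.sin(a)) * mu * U(a, s)) < 1e-4
```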
For the above examples, constructing the evolution family from classical solutions of \eqref{acp} is sometimes restrictive; an alternative approach using mild solutions in the sense of \cite[Definition 3.1.1]{ABHN11} can be found in \cite[Section 3]{Che18c}. See also \cite[Section 5.6--5.7]{Paz83} for the ``parabolic'' type (in the sense of Tanabe) and \cite[Section 5.3--5.5]{Paz83} for the ``hyperbolic'' type (in the sense of Kato), under which, in some contexts, $ \{A(a)\}_{0 \leq a < c} $ generates an evolution family (see also \cite{Sch02} for a survey).
\subsection{standard assumptions and main results}\label{sub:ass}
Hereafter, we make the following assumptions.
\begin{enumerate}[\bfseries (H1)]
\item (about $ \{A(a)\} $) Assume $ \{A(a)\}_{0 \leq a < c} $ generates an exponentially bounded linear evolution family $ \{U(a, s)\}_{0 \leq s \leq a < c} $; see \autoref{def:evo}. Let $ T_0 $ be its corresponding Howland semigroup (see \eqref{equ:how}) with the generator $ \mathcal{B}_0 $.
\item (about $ \{C(a)\} $) Assume $ C(\cdot): (0, c) \to \mathcal{L}(E, E) $ is strongly continuous. In addition, there is a positive function $ \gamma \in L^{p'}(0,c) $ such that $ |C(a)| \leq \gamma(a) $ where $ 1/{p'} + 1/p = 1 $.
\end{enumerate}
Set $ X = E \times L^p((0, c), E) $, $ X_0 = \{0\} \times L^p((0, c), E) $, and
\begin{equation}\label{equ:boundary}
\mathcal{L}: X_0 \to X, (0,\varphi) \mapsto (L\varphi,0), ~ L\varphi = \int_{0}^{c} C(a)\varphi(a) ~\mathrm{d} a.
\end{equation}
Moreover, there is a unique closed operator $ \mathcal{A} $ with $ \overline{D(\mathcal{A})} = X_0 $ such that for all $ (y, f) \in X $ and $ \lambda \in \rho(\mathcal{B}_0) $, $ R(\lambda, \mathcal{A}) (y, f) = (0, \varphi) $, where
\[
\varphi(a) = e^{-\lambda a} U(a, 0) y + (R(\lambda, \mathcal{B}_0)f)(a),~ a \in (0, c),
\]
see \cite[Lemma 6.2]{MR07}; \cite{Thi91} contains more descriptions of $ \mathcal{A} $.
Note that by assumption \textbf{(H2)} (on $ \{C(a)\} $), we see that $ L $, and hence $ \mathcal{L} $, is bounded.
Now, the solutions of the age-structured population model \eqref{equ:age} can be interpreted as the mild solutions of the following abstract Cauchy problem:
\[\label{equ:CP}\tag{CP}
\begin{cases}
\dot{U}(t) = (\mathcal{A} + \mathcal{L}) U(t),\\
U(0) = U_0 \in X_0.
\end{cases}
\]
Recall that $ U \in C([0,\tau], X) $ is called a \emph{mild solution} of \eqref{equ:CP} if $ \int_{0}^{s} U(r) ~\mathrm{d} r \in D(\mathcal{A} + \mathcal{L}) $ and
\[
U(s) = U(0) + (\mathcal{A} + \mathcal{L}) \int_{0}^{s} U(r) ~\mathrm{d} r, ~s \in [0,\tau],
\]
see, e.g., \cite[Definition 3.1.1]{ABHN11}.
\begin{defi}\label{defi:age}
If $ U = (0, u) $ is a mild solution of \eqref{equ:CP}, then $ u $ is called a mild solution of the age-structured population model \eqref{equ:age}.
\end{defi}
Note that if $ u \in W^{1, p}([0, \tau] \times [0, c), E) $ satisfies \eqref{equ:age} a.e. (or $ u \in C^1([0, \tau] \times [0, c], E) $ with $ c < \infty $ satisfies \eqref{equ:age} pointwise), then $ u $ is a mild solution of \eqref{equ:age}. See \autoref{lem:presentation} for an intrinsic characterization of the mild solutions of the model \eqref{equ:age}.
By a simple computation, we see $ \rho(\mathcal{A}) = \rho(\mathcal{B}_0) $, $ \mathcal{A}_0: = \mathcal{A}_{X_0} = 0 \times \mathcal{B}_0 $ and for $ \lambda \in \rho(\mathcal{A}) $,
\begin{equation}\label{equ:re}
R(\lambda, \mathcal{A})^n (y, f) = (0,\varphi_n), ~\varphi_n (a) = \frac{1}{(n -1)!} a^{n-1} e^{-\lambda a} U(a, 0) y + (R(\lambda, \mathcal{B}_0)^n f) (a).
\end{equation}
Particularly, we obtain
\begin{lem}
If $ p = 1 $, then $ \mathcal{A} $ is a Hille--Yosida operator and so is $ \mathcal{A} + \mathcal{L} $.
\end{lem}
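The pattern of the resolvent powers in \eqref{equ:re} can also be checked numerically. The sketch below assumes the scalar toy data $U(a,s) = e^{\omega(a-s)}$ on $(0,1)$, $y = 1$ and $f = 0$ (for illustration only); iterating the integral kernel of $R(\lambda, \mathcal{B}_0)$ then reproduces $\varphi_n(a) = \frac{a^{n-1}}{(n-1)!}e^{-\lambda a}U(a,0)$.

```python
import numpy as np
from math import factorial

# Toy scalar data (an illustrative assumption, not from the text):
# U(a, s) = e^{omega (a - s)} on (0, 1), y = 1, f = 0.
omega, lam, N = 0.5, 2.0, 2000
a = np.linspace(0.0, 1.0, N)
h = a[1] - a[0]

def resolvent(g):
    # (R(lam, B_0) g)(x) = \int_0^x e^{-lam (x - s)} U(x, s) g(s) ds (trapezoid rule)
    out = np.zeros(N)
    for i in range(1, N):
        k = np.exp((omega - lam) * (a[i] - a[:i + 1])) * g[:i + 1]
        out[i] = h * (k.sum() - 0.5 * (k[0] + k[-1]))
    return out

phi = np.exp((omega - lam) * a)        # phi_1(a) = e^{-lam a} U(a, 0) * 1
for n in range(2, 5):
    phi = resolvent(phi)               # phi_n = R(lam, B_0) phi_{n-1}
    exact = a ** (n - 1) / factorial(n - 1) * np.exp((omega - lam) * a)
    assert np.max(np.abs(phi - exact)) < 1e-4
```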
Consider the case $ p > 1 $. Since $ \mathcal{A}_0 $ ($ \cong \mathcal{B}_0 $) generates a $ C_0 $ semigroup $ (0, T_0) $, we know $ \mathcal{A} $ generates an integrated semigroup $ S_{\mathcal{A}} $ defined by (see \eqref{equ:biaodashi})
\[
S_{\mathcal{A}} (t) (y, f) = (0, U_0(t)y + \int_{0}^{t}T_0(s) f ~\mathrm{d} s),
\]
where
\[
(U_0(t)y)(a) = \begin{cases}
U(a, 0)y, & a \leq t,\\
0, & a > t.
\end{cases}
\]
For $ f_1 \in L^p((0,t_1), E) $ and $ f_2 \in L^p((0,t_1), L^p((0,c), E)) $ ($ \cong L^p((0,t_1) \times (0,c), E) $), we have
\begin{equation}\label{equ:sa}
(S_{\mathcal{A}} \diamond (f_1, f_2))(t) (a) =
\begin{cases}
(0, U(a, 0)f_1(t-a) + (T_0 * f_2)(t)(a)), & a \leq t, \\
(0, (T_0 * f_2)(t)(a)), & a > t.
\end{cases}
\end{equation}
So it is clear that there exists $ \widehat{M} \geq 1 $ independent of $ (f_1, f_2) $ and $ t_1 $ such that
\[
(\int_{0}^{c} |(S_{\mathcal{A}} \diamond (f_1, f_2))(t) (a)|^p ~\mathrm{d} a)^{1/p} \leq \widehat{M} \left\|e^{\omega(t_1-\cdot)}(f_1, f_2)(\cdot) \right\|_{L^p(0,t_1;X)}, ~\forall t \in [0,t_1].
\]
That is, we have the following; see also the proof of \cite[Theorem 6.6]{MR07} and the discussion following Proposition 5.6 in \cite{Thi08}.
\begin{lem}
If $ p > 1 $, then $ \mathcal{A} $ is a $ p $-quasi Hille--Yosida operator and so is $ \mathcal{A} + \mathcal{L} $.
\end{lem}
Since $ \mathcal{A} $ is a quasi Hille--Yosida operator, the mild solutions of the linear equation \eqref{equ:CP} are given by
\[
U(t) = T_{\mathcal{A}_0} (t) U(0) + (S_{\mathcal{A}} \diamond \mathcal{L} U(\cdot))(t), ~U(0) \in X_0,
\]
returning to the second component of $ U = (0, u) $, by using $ T_{\mathcal{A}_0} = (0, T_{0}) $ and \eqref{equ:sa}, we obtain the following representation of the mild solutions of \eqref{equ:age} defined in \autoref{defi:age}; see also \cite[Section IV]{Thi91}.
\begin{lem}\label{lem:presentation}
$ u $ is a mild solution of \eqref{equ:age} with $ u(0, \cdot) = u_0 \in L^p((0, c), E) $ if and only if $ u $ satisfies
\[
u(t, a) = \begin{cases}
U(a, a - t)u_0(a - t), & a \geq t, \\
U(a, 0) \int_{0}^{c} C(s) u(t - a, s) ~\mathrm{d}s, & a \in [0, t].
\end{cases}
\]
\end{lem}
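The representation in \autoref{lem:presentation} also suggests a numerical scheme: march along characteristics with $\mathrm{d}t = \mathrm{d}a$, transporting the profile by $U(a, a-\mathrm{d}t)$ and feeding the renewal integral back in at $a = 0$. The sketch below assumes, for illustration only, scalar data $E = \mathbb{R}$, $c = 1$, $U(a,s) = e^{-\mu(a-s)}$ and $C(a) \equiv \beta$, and checks that the birth rate eventually grows like $e^{\lambda^* t}$, where $\lambda^*$ solves the Euler--Lotka equation $1 = \beta\int_0^1 e^{-(\lambda+\mu)a}\,\mathrm{d}a$.

```python
import numpy as np

# Assumed scalar data for illustration: E = R, c = 1, mu(a) = mu (so
# U(a, s) = e^{-mu (a - s)}) and C(a) = beta, with beta calibrated so that the
# Euler-Lotka equation 1 = beta * int_0^1 e^{-(lam + mu) a} da has root lam = 1.
mu = 0.2
lam_true = 1.0
beta = (lam_true + mu) / (1.0 - np.exp(-(lam_true + mu)))

da = 1e-3                              # march with dt = da along characteristics
ages = np.arange(0.0, 1.0, da)
u = np.ones_like(ages)                 # initial profile u0 = 1
decay = np.exp(-mu * da)

births = []
for _ in range(int(10.0 / da)):        # march to t = 10
    Bt = beta * np.sum(u) * da         # B(t) = int_0^c C(a) u(t, a) da
    u[1:] = decay * u[:-1]             # transport: u(t+dt, a) = U(a, a-dt) u(t, a-dt)
    u[0] = Bt                          # renewal boundary condition at a = 0
    births.append(Bt)

# asymptotically B(t) ~ const * e^{lam_true * t}
rate = np.log(births[-1] / births[-1001]) / (1000 * da)
assert abs(rate - lam_true) < 0.05
```

The two branches of the update are precisely the two cases of the representation formula, applied over one time step.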
\begin{rmk}\label{rmk:noHY}
If there exist a function $ h: (0, \infty) \to \mathbb{R}_+ $ and $ y \in E $ such that $ |U(a, 0)y| \geq h(a) $ and
\[
\limsup_{\lambda \to \infty} \lambda^p \int_{0}^{\infty} e^{-\lambda p a} h^p(a) ~\mathrm{d} a > 0,
\]
then from \eqref{equ:re}, we know $ \mathcal{A} $ is not a Hille--Yosida operator when $ p > 1 $ and $ c = \infty $. For instance, $ U(a, 0) = I $ for all $ a \in (0,\infty) $.
\end{rmk}
Next, let us compute $ \mathcal{L}S_{\mathcal{A}}\diamond \mathcal{L}(0, T_0) = \mathcal{L}S_{\mathcal{A}}\diamond (LT_0, 0) $. Write $ \mathcal{L}S_{\mathcal{A}}\diamond (LT_0, 0) = (F, 0) $, then by \eqref{equ:sa}, we get
\[
F(t) = \begin{cases}
\int_{0}^{t} C(t-a) U(t-a, 0) LT_0(a) ~\mathrm{d}a, & t \leq c,\\
\int_{t-c}^{t} C(t-a) U(t-a, 0) LT_0(a) ~\mathrm{d}a, & t > c.
\end{cases}
\]
\begin{lem}\label{lem:pre}
\begin{enumerate}[(a)]
\item Suppose $ U(\cdot, 0) $ is norm continuous on $ (0, c) $, and $ C(\cdot) $ is norm measurable on $ (0, c) $ if $ p > 1 $ and is (essentially) norm continuous if $ p = 1 $. Then, $ F $ (i.e., $ \mathcal{L}S_{\mathcal{A}}\diamond (LT_0, 0) $) is norm continuous on $ (0, \infty) $.
\item If $ U(\cdot, 0) $ is compact on $ (0, c) $, then $ F $ (i.e., $ \mathcal{L}S_{\mathcal{A}}\diamond (LT_0, 0) $) is norm continuous and compact on $ (0, \infty) $.
\end{enumerate}
\end{lem}
\begin{proof}
We only consider the case $ t < c $. Set
\[
F_{\epsilon}(t) = \int_{0}^{t-\epsilon} C(t-a) U(t-a, 0) LT_0(a) ~\mathrm{d}a.
\]
Note that by the condition on $ \{C(a)\} $ and the boundedness of $ \{U(t-a, 0) LT_0(a)\}_{0 \leq a \leq t} $, we see $ F_{\epsilon}(t) \to F(t) $ as $ \epsilon \to 0 $ in the uniform operator topology. So it suffices to consider $ F_{\epsilon}(t) $. Let $ \epsilon > 0 $ and $ 0 < |h| < \epsilon $ be small, and let $ p' $ satisfy $ 1/{p'} + 1/p = 1 $ ($ 1 < p' \leq \infty $). In the following, denote by $ \widetilde{C} > 0 $ a universal constant independent of $ h $, which may change from line to line.
\begin{align*}
& |F_{\epsilon}(t+h) - F_{\epsilon}(t)| \\
= &~ |\int_{0}^{t + h -\epsilon} C(t + h - a) U(t + h - a, 0) LT_0(a) ~\mathrm{d}a - \int_{0}^{t -\epsilon} C(t - a) U(t - a, 0) LT_0(a) ~\mathrm{d}a| \\
= & ~ |\int_{t - \epsilon}^{t + h -\epsilon} C(t + h - a) U(t + h - a, 0) LT_0(a) ~\mathrm{d}a \\
& \quad + \int_{0}^{t -\epsilon} \{C(t + h - a) U(t + h - a, 0) - C(t - a) U(t - a, 0)\} LT_0(a) ~\mathrm{d}a |\\
& \leq \widetilde{C} h + R_1 + R_2,
\end{align*}
where
\begin{align*}
R_1 & = \int_{0}^{t -\epsilon} |\{C(t + h - a) - C(t - a)\} U(t - a, 0) LT_0(a)| ~\mathrm{d}a, \\
& \leq \left(\int_{0}^{t -\epsilon} |\{C(t + h - a) - C(t - a)\}U(t - a, 0)|^{p'} ~\mathrm{d}a\right)^{1/{p'}} \left(\int_{0}^{t -\epsilon} |LT_0(a)|^p ~\mathrm{d}a\right)^{1/p} \\
& \leq \widetilde{C} \left(\int_{0}^{t -\epsilon} |\{C(t + h - a) - C(t - a)\}U(t - a, 0)|^{p'} ~\mathrm{d}a\right)^{1/{p'}},
\end{align*}
and
\begin{align*}
R_2 & = \int_{0}^{t -\epsilon} |C(t + h - a) \{ U(t + h - a, 0) - U(t - a, 0)\} LT_0(a)| ~\mathrm{d}a \\
& \leq \left(\int_{0}^{t -\epsilon} |C(t + h - a)|^{p'} ~\mathrm{d} a\right)^{1/{p'}} \left(\int_{0}^{t -\epsilon} |\{ U(t + h - a, 0) - U(t - a, 0)\} LT_0(a)|^{p} ~\mathrm{d}a\right)^{1/p} \\
& \leq \widetilde{C} \left(\int_{0}^{t -\epsilon} |U(t + h - a, 0) - U(t - a, 0)|^{p} ~\mathrm{d}a\right)^{1/p}.
\end{align*}
To prove (a), by the condition on $ \{U(a, 0)\} $, we know $ R_2 \to 0 $ as $ h \to 0 $, and by the condition on $ \{C(a)\} $, we get
\[
R_1 \leq \widetilde{C} \left(\int_{0}^{t -\epsilon} |C(t + h - a) - C(t - a)|^{p'} ~\mathrm{d}a\right)^{1/{p'}} \to 0, ~\text{as}~ h \to 0,
\]
since $ C(\cdot): (0,c) \to \mathcal{L}(E, E) $ is Bochner $ p' $-integrable (in the uniform operator topology) if $ p' < \infty $ and is (essentially) norm continuous if $ p' = \infty $. This shows that $ F_{\epsilon} $ is norm continuous at $ t $.
To prove (b), note first that if $ U(\cdot, 0) $ is compact on $ (0, c) $, then it is also norm continuous on $ (0, c) $. Indeed, if $ s > 0 $ and $ a > 0 $, then
\[
U(a+s, 0) - U(a,0) = (U(a+s, a) - I)U(a,0) \to 0,
\]
as $ s \to 0^+ $ in the uniform operator topology due to the compactness of $ U(a,0) $; the left norm continuity can be considered similarly. In particular, $ R_2 \to 0 $ as $ h \to 0 $. Since $ \{|C(a)|\}_{\epsilon \leq a \leq 2t} $ is bounded (due to the strong continuity of $ C(\cdot) $ on $ (0,c) $) and $ U(\cdot, 0) $ is compact on $ [\epsilon, t] $, by the Lebesgue dominated convergence theorem (see, e.g., \cite[Theorem 1.1.8]{ABHN11}), we get
\[
\lim_{h \to 0}R_1 \leq \widetilde{C} \left(\int_{0}^{t -\epsilon} \lim_{h \to 0} |\{C(t + h - a) - C(t - a)\}U(t - a, 0)|^{p'} ~\mathrm{d}a\right)^{1/{p'}} = 0.
\]
Thus, $ F_{\epsilon} $ is norm continuous at $ t $. The compactness of $ F_{\epsilon}(t) $ follows from \cite[Theorem C.7]{EN00} since $ U(\cdot, 0) $ is compact on $ [\epsilon, t] $. The proof is complete.
\end{proof}
Now we can describe the asymptotic behavior of the age-structured population model \eqref{equ:age}; see \autoref{thm:evo} for the characterization of $ \omega(T_0) $.
\begin{thm}\label{thm:age}
Let one of the following conditions hold:
\begin{enumerate}[(a)]
\item $ U(\cdot, 0) $ is norm continuous on $ (0, c) $, and $ C(\cdot) $ is norm measurable on $ (0, c) $ if $ p > 1 $ and is (essentially) norm continuous if $ p = 1 $;
\item $ U(\cdot, 0) $ is compact on $ (0, c) $;
\item $ L $ (see \eqref{equ:boundary}) is compact.
\end{enumerate}
Let $ T_{(\mathcal{A}+\mathcal{L})_0} $ be the $ C_0 $ semigroup generated by the part of $ \mathcal{A}+\mathcal{L} $ in $ L^p((0,c), E) $. Then, the following statements hold.
\begin{enumerate}[(1)]
\item If $ c < \infty $ and condition (b) or (c) holds, then $ T_{(\mathcal{A}+\mathcal{L})_0} $ is eventually compact (and so eventually norm continuous). Particularly, $ \omega (T_{(\mathcal{A}+\mathcal{L})_0}) = s((\mathcal{A}+\mathcal{L})_0) $ (see \eqref{equ:bound}).
\item If condition (b) or (c) holds, then $ \omega_{\mathrm{ess}}(T_{(\mathcal{A}+\mathcal{L})_0}) = \omega_{\mathrm{ess}}(T_{0}) $. Particularly, if $ \omega_{\mathrm{ess}}(T_{0}) < 0 $, then $ T_{(\mathcal{A}+\mathcal{L})_0} $ is quasi-compact (see \cite[Section V.3]{EN00} for more consequences of this).
\item $ \omega_{\mathrm{crit}}(T_{(\mathcal{A}+\mathcal{L})_0}) = \omega_{\mathrm{crit}}(T_{0}) $.
\end{enumerate}
\end{thm}
\begin{proof}
If condition (a) (resp. (b)) holds, then by \autoref{lem:pre} and \autoref{lem:NC} (b) (resp. \autoref{lem:CP} (b)), we know $ S_2 \triangleq S_{\mathcal{A}}\diamond \mathcal{L}S_{\mathcal{A}}\diamond (LT_0, 0) $ is norm continuous (resp. compact) on $ (0, \infty) $. If condition (c) holds (i.e., $ \mathcal{L} $ is compact), then by \autoref{corr:s2}, we have $ S_2 $ is compact on $ (0, \infty) $.
To prove (1), as $ c < \infty $, we have $ T_0(t) = 0 $ for $ t > c $; in particular, $ T_0 $ is eventually compact. Conclusion (1) now follows from \autoref{thm:perreg} (b).
Finally, conclusions (2) and (3) are direct consequences of \autoref{thm:main2}.
\end{proof}
\begin{exa}\label{exa:model}
To illustrate \autoref{thm:age}, consider the following examples, which are \emph{not} stated in the greatest possible generality.
\begin{asparaenum}[\bfseries (a)]
\item (See \cite[Section 5]{MR09a} and \cite{Web85}.) Let $ E = \mathbb{R}^n $. Then, $ C(a) $ and $ A(a) $ can be considered as matrices. If $ a \mapsto A(a) $ is continuous on $ [0,c) $ and $ a \mapsto C(a) $ is continuous on $ (0,c) $ with $ \sup_{0<a<c}\{|A(a)|\} < \infty $ and $ |C(\cdot)| \in L^{p'}(0, c) $ ($ 1/{p'} + 1/p = 1 $), then assumptions \textbf{(H1)} and \textbf{(H2)} hold. Note also that in this case $ L $ is compact. Now all the cases in \autoref{thm:age} are fulfilled. A very special case is $ A(a) \equiv - \mu I $ where $ \mu > 0 $; now $ U(t,s) = e^{-\mu (t - s)} I $, and so by \autoref{rmk:noHY} in this case $ \mathcal{A} $ is not a Hille--Yosida operator if $ p > 1 $ and $ c = \infty $; in addition, $ \omega_{\mathrm{ess}}(T_{(\mathcal{A}+\mathcal{L})_0}) < 0 $ as $ \omega(T_0) = \omega(U) \leq - \mu < 0 $.
\item \label{exbb} (See \cite[Section 5]{BHM05}.)
Consider the following model:
\begin{equation*}
\begin{cases}
(\partial_t + \partial_a)u(t, a, s) = -\mu(a)u(t, a, s) + k(a) \Delta_s u(t, a, s), ~t > 0, a \in (0, c), s \in \mathbb{R}^n, \\
u(t, 0, s) = \int_{0}^{c} \beta(a,s) u(t, a, s) ~\mathrm{d} a, ~ t > 0, s \in \mathbb{R}^n, \\
u(0, a, s) = u_0(a, s), ~ a \in (0, c), s \in \mathbb{R}^n, ~ u_0 \in L^p((0,c), L^q(\mathbb{R}^n)).
\end{cases}
\end{equation*}
Let $ E = L^q(\mathbb{R}^n) $ ($ 1 \leq q < \infty $). Take $ A(a) = -\mu (a) I + k(a) \Delta_s $ and $ C(a) = \beta (a, \cdot) $, where
\begin{enumerate}[(i)]
\item $ \Delta_s $ denotes the Laplace operator on $ L^q(\mathbb{R}^n) $ (i.e., $ \Delta f = \sum_{j = 1}^{n} D^2_{j} f $);
\item $ \mu(\cdot) \in L^{\infty}((0,c), \mathbb{R}_+) $, $ k(\cdot) \in C_{b}([0,c), \mathbb{R}_+) $ (with $ \inf_{a>0} k(a) > 0 $); and
\item $ \beta(\cdot, \cdot) \in C((0,c), L^{\infty}(\mathbb{R}^n)) \cap L^{p'}((0,c), L^{\infty}(\mathbb{R}^n)) $ ($ 1/{p'} + 1/p = 1 $).
\end{enumerate}
Note that the evolution family generated by $ \{A(a)\} $ is $ U(t,s) = e^{-\int_{s}^{t}\mu(r)~\mathrm{d}r} G(\int_{s}^{t} k(r) ~\mathrm{d}r) $ ($ t \geq s $), where $ G(\cdot) $ is the Gaussian semigroup generated by $ \Delta_s $, i.e.,
\[
(G(t)f)(x) : = (4\pi t)^{-n/2} \int_{\mathbb{R}^n} f(x - y) e^{-|y|^2/(4t)} ~\mathrm{d} y, ~ f \in L^q(\mathbb{R}^n).
\]
Clearly, $ U(t, s) $ is norm continuous on $ 0\leq s < t $, but $ U(a, 0) $ is not compact. The condition on $ \beta(\cdot, \cdot) $ implies that $ C(\cdot) $ is norm continuous (though in general it is not uniformly norm continuous, and in this case $ L $ may not be compact). Therefore, \autoref{thm:age} (a) applies; however, the result given in \cite[Section 5]{BHM05} cannot be applied (even for $ p = 1 $). In particular, we have
\[
\omega_{\mathrm{crit}}(T_{(\mathcal{A}+\mathcal{L})_0}) = \omega_{\mathrm{crit}}(T_{0}) \leq \omega(U) \leq -|\mu(\cdot)|_{L^{\infty}}.
\]
Next, we show that in this case $ \mathcal{A} $ usually is not a Hille--Yosida operator if $ p > 1 $ and $ c = \infty $. For simplicity, let $ n = 1 $, $ \mu(\cdot) \equiv 1 $ and $ k(\cdot) \equiv 1 $. Take $ f_1(x) = e^{-|x|^2} $. Then, $ f_1 \in L^q(\mathbb{R}) $ and for $ x > 0 $,
\begin{align*}
(U(a,0)f_1)(x) = e^{-a} (4\pi a)^{-1/2} \int_{-\infty}^{\infty} e^{-|x-y|^2} e^{-|y|^2/(4a)} ~\mathrm{d} y \geq c_0 e^{-a} e^{-x^2} (1+4a)^{-1/2},
\end{align*}
where $ c_0 > 0 $ is a constant. So
\begin{align*}
|(U(a,0)f_1)(\cdot)|_{L^q} = \left(\int_{-\infty}^{\infty} ((U(a,0)f_1)(x))^q ~\mathrm{d}x \right)^{1/q}\geq c_1 e^{-a} (1+4a)^{-1/2} =: h(a),
\end{align*}
where $ c_1 > 0 $ is a constant, and in particular, $ \limsup_{\lambda \to \infty} \lambda^p \int_{0}^{\infty} e^{-\lambda p a} h^p(a) ~\mathrm{d}a = \infty $ (in fact $ \lim_{\lambda \to \infty} \lambda \int_{0}^{\infty} e^{-(\lambda + 1) p a} (1+4a)^{-p/2} ~\mathrm{d}a = 1/p $). Hence, by \autoref{rmk:noHY}, in this case $ \mathcal{A} $ is not a Hille--Yosida operator.
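The limit $\lim_{\lambda \to \infty} \lambda \int_{0}^{\infty} e^{-(\lambda + 1) p a} (1+4a)^{-p/2} \,\mathrm{d}a = 1/p$ invoked here is easy to confirm numerically; the sketch below truncates the integral where the exponential is negligible and checks convergence to $1/p$ for several values of $p$.

```python
import numpy as np

# Check lim_{lam -> oo} lam * int_0^oo e^{-(lam+1) p a} (1 + 4a)^{-p/2} da = 1/p.
def I(lam, p, n=200001):
    cutoff = 40.0 / ((lam + 1.0) * p)          # the integrand is ~e^{-40} beyond this
    a = np.linspace(0.0, cutoff, n)
    g = np.exp(-(lam + 1.0) * p * a) * (1.0 + 4.0 * a) ** (-p / 2.0)
    h = a[1] - a[0]
    return lam * h * (g.sum() - 0.5 * (g[0] + g[-1]))   # trapezoid rule

for p in (1.0, 2.0, 3.0):
    assert abs(I(1e5, p) - 1.0 / p) < 1e-3                       # close to 1/p
    assert abs(I(1e5, p) - 1.0 / p) < abs(I(1e2, p) - 1.0 / p)   # improves with lam
```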
\item (See \cite{Rha98} and \cite[Section 5]{Thi98} in the $ L^1 $ case.) Let $ E = L^q(\Omega) $ ($ 1 \leq q < \infty $), where $ \Omega $ is a bounded domain of $ \mathbb{R}^n $ with smooth boundary $ \partial \Omega $. Let $ A(\cdot) $ (with a suitable boundary condition, e.g., Dirichlet, Neumann, Robin, etc.) and $ C(\cdot) $ be as in \eqref{exbb} with $ \mathbb{R}^n $ in (i) (ii) (iii) replaced by $ \Omega $. In this context, $ \{A(a)\} $ also generates an exponentially bounded linear evolution family $ \{U(a, s)\}_{0 \leq s \leq a < c} $ (see, e.g., \cite{Paz83}), and the embedding theorem gives that $ U(t, s) $ is compact on $ 0\leq s < t < c $; in particular, \autoref{thm:age} (b) holds. More generally, $ \{A(a)\} $ can be (uniformly strongly) elliptic differential operators with suitable boundary conditions (see, e.g., \cite[Section 7.6]{Paz83}). Due to the positivity setting in \cite[Section 5]{Thi98}, $ \{C(a)\} $ can be unbounded, a case which \autoref{thm:age} does not cover; the proof given here is different from those in \cite{Thi98, Rha98}.
\item (See \cite[Section 6]{BHM05} in $ L^1 $ case.) Consider the following model:
\begin{equation*}
\begin{cases}
(\partial_t + \partial_a)u(t, a, s) = -\mu(a)u(t, a, s) + k(a) q(s) u(t, a, s), ~t > 0, a \in (0, c), s \in \Omega, \\
u(t, 0, s) = \int_{0}^{c} \beta(a,s) u(t, a, s) ~\mathrm{d} a, ~ t > 0, s \in \Omega, \\
u(0, a, s) = u_0(a, s), ~ a \in (0, c), s \in \Omega, ~ u_0 \in L^p((0,c), L^q(\Omega)).
\end{cases}
\end{equation*}
Let $ E = L^q(\Omega) $ ($ 1 \leq q < \infty $) be endowed with a $ \sigma $-finite (Borel) measure, where $ \Omega $ is a domain of $ \mathbb{R}^n $. Take $ A(a) = -\mu (a) I + k(a) q(\cdot) $ and $ C(a) = \beta (a, \cdot) $, where $ \mu(\cdot), k(\cdot), \beta(\cdot, \cdot) $ are the same as in \eqref{exbb} with $ \mathbb{R}^n $ in (i), (ii), (iii) replaced by $ \Omega $. The measurable function $ q(\cdot): \Omega \to \mathbb{C} $ satisfies the following:
\begin{enumerate}[({a}1)]
\item $ \sup\{ \mathrm{Re}\lambda : \lambda \in q(\Omega) \} < \infty $ and
\item for any $ d \in \mathbb{R} $, $ \overline{q(\Omega)} \cap \{ \lambda \in \mathbb{C}:\mathrm{Re}\lambda \geq d \} $ is bounded.
\end{enumerate}
In this case, the evolution family generated by $ \{A(a)\} $ is $ U(t,s) = e^{-\int_{s}^{t}\mu(r) - k(r)q(\cdot)~\mathrm{d}r} $ ($ t \geq s $). Note that by (a2), $ U(t,s) $ is norm continuous on $ 0\leq s < t < c $ (see, e.g., \cite[p. 121]{EN00}) but in general not compact. So \autoref{thm:age} (a) holds.
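Since $U(t,s)$ here is a multiplication family, its defining properties are easy to test on a grid. The sketch below (Python; $\mu$, $k$, $q$ are illustrative choices with $q$ satisfying (a1), not from the text) numerically verifies the evolution family property $U(t,s)=U(t,r)U(r,s)$.

```python
import numpy as np

# Toy discretization of the multiplication evolution family
# U(t, s) = exp(-Int_s^t (mu(r) - k(r) q(.)) dr), acting pointwise in x.
# mu, k, q below are illustrative choices; q satisfies (a1): Re q bounded above.
mu = lambda r: 1.0 + 0.5 * np.sin(r)
k = lambda r: np.exp(-r)
q = lambda x: -x ** 2

def U(t, s, x, n=20_001):
    r = np.linspace(s, t, n)
    dr = (t - s) / (n - 1)
    integrand = mu(r)[:, None] - k(r)[:, None] * q(x)[None, :]
    integral = np.sum(0.5 * (integrand[:-1] + integrand[1:]), axis=0) * dr
    return np.exp(-integral)          # the multiplier, evaluated pointwise in x

x = np.linspace(0.0, 1.0, 5)
lhs = U(2.0, 0.0, x)
rhs = U(2.0, 1.0, x) * U(1.0, 0.0, x)  # evolution family property
print(np.max(np.abs(lhs - rhs)))       # ~ 0 up to quadrature error
```

The pointwise-multiplication structure is also why norm continuity follows from (a2) while compactness fails in general: no smoothing in the $x$ variable takes place.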
\item (See \cite[Section 1.4]{Web08} in $ L^1 $ case.)
Consider the following model:
\begin{equation*}
\begin{cases}
(\partial_t + \partial_a + k(a)\partial_s)u(t, a, s) = -\mu(a)u(t, a, s), ~t > 0, a \in (0, c), s \in (0,s_2), \\
u(t, 0, s) = \int_{0}^{c} \int_{0}^{s_2} \beta(a,\hat{s},s) u(t, a,\hat{s}) ~\mathrm{d}\hat{s} ~\mathrm{d} a, ~ t > 0, s \in (0,s_2), \\
u(0, a, s) = u_0(a, s), ~ a \in (0, c), s \in (0,s_2), ~ u_0 \in L^p((0,c), L^q(0, s_2)), \\
u(t,a,0) = 0, ~t > 0, a \in (0, c).
\end{cases}
\end{equation*}
Let $ E = L^q(0, s_2) $ ($ 1 \leq q < \infty $) where $ 0 < s_2 < \infty $. Let $ p', q' $ satisfy $ 1/{p'} + 1/p = 1 $ and $ 1/{q'} + 1/q = 1 $.
The measurable function $ \beta(\cdot, \cdot, \cdot) $ satisfies
\[\tag{b1}
\beta(\cdot, \cdot, \cdot) \in C( (0,c), L^{(q, q')}((0,s_2)^2) ) \cap L^{P}( (0,c) \times (0,s_2)^2 ) , ~P = (q, q', p'),
\]
where $ L^{P}( (0,c) \times (0,s_2)^2 ) $ (similarly for $ L^{(q, q')}((0,s_2)^2) $) is the mixed-norm Lebesgue space (see, e.g., \cite{BP61}) defined by
\[
\begin{split}
L^{P}( (0,c) \times (0,s_2)^2 )
:= \{ f:(0,c) \times(0,s_2)^2 \to \mathbb{R} ~\text{is measurable}~:\\ \left(\int_{0}^{c} \left( \int_{0}^{s_2} \left(\int_{0}^{s_2} |f(a,\hat{s}, s)|^{q} ~\mathrm{d} s \right)^{q'/q} \mathrm{d} \hat{s} \right)^{p'/{q'}}\mathrm{d}a \right)^{1/{p'}} < \infty \}.
\end{split}
\]
In addition, if one of $ p,q $ equals $ 1 $, then we assume
\[\tag{b2}
\begin{cases}
\lim_{h \to 0^+}\sup\limits_{(a,\hat{s}) \in (0,c) \times(0,s_2) } \int_{0}^{s_2 - h} |\beta(a, \hat{s}, s+h) - \beta(a, \hat{s}, s)| ~\mathrm{d}s = 0, ~ \text{if}~ p = q = 1, \\
\lim_{h \to 0^+} \int_{0}^{s_2} \left(\sup\limits_{\hat{s} \in (0,s_2) } \int_{0}^{s_2 - h} |\beta(a, \hat{s}, s+h) - \beta(a, \hat{s}, s)| ~\mathrm{d}s \right)^{p'} ~\mathrm{d}a = 0, ~\text{if}~ p > 1, q = 1,\\
\lim_{h \to 0^+}\sup\limits_{a \in (0,c)} \int_{0}^{s_2} \left(\int_{0}^{s_2 - h} |\beta(a, \hat{s}, s+h) - \beta(a, \hat{s}, s)|^{q} ~\mathrm{d}s\right)^{q'/q} ~\mathrm{d} \hat{s} = 0, ~ \text{if}~ p = 1, q > 1.
\end{cases}
\]
(If $ c < \infty $, then a typical case such that $ \beta(\cdot, \cdot, \cdot) $ satisfies (b1) (b2) for all $ 1 \leq p, q < \infty $ is $ \beta(\cdot, \cdot, \cdot) \in C([0,c]\times[0,s_2]^2) $.)
Take $ A(a) = - \mu(a) I - k(a) \frac{\mathrm{d}}{\mathrm{d}s} $, where $ \mu(\cdot), k(\cdot) $ are the same as in \eqref{exbb} with $ \mathbb{R}^n $ in (i), (ii), (iii) replaced by $ (0, s_2) $; here $ \frac{\mathrm{d}}{\mathrm{d}s} $ is the first-order differential operator on $ L^q(0,s_2) $, i.e.,
\[
\frac{\mathrm{d}}{\mathrm{d}s}: f \mapsto \frac{\mathrm{d}f}{\mathrm{d}s}, ~D(\frac{\mathrm{d}}{\mathrm{d}s}) = \{f \in W^{1,q}(0,s_2): f(0) = 0\}.
\]
Let $ C(a) = \beta(a, \cdot, \cdot) $ be defined by
\[
(C(a) \phi)(s) = \int_{0}^{s_2} \beta(a, \hat{s}, s) \phi(\hat{s}) ~\mathrm{d} \hat{s}, ~\phi \in L^q(0, s_2);
\]
By the Minkowski integral inequality, $ C(a) $ is indeed a bounded linear operator on $ L^q(0, s_2) $. Clearly, $ \{A(a)\} $ generates the exponentially bounded linear evolution family $ \{U(a, s)\}_{0 \leq s \leq a < c} $ defined by $ U(t,s) = e^{-\int_{s}^{t}\mu(r)~\mathrm{d}r} T_{r}(\int_{s}^{t} k(a) ~\mathrm{d}a) $, where $ T_r(t) $ is the right translation on $ L^q(0,s_2) $, i.e.,
\[
(T_r(t) \phi)(s) : = \begin{cases}
\phi(s - t), & s - t \geq 0,\\
0,& s - t < 0.
\end{cases}
\]
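The right translation $T_r$ is a contraction that vanishes for $t\ge s_2$; this nilpotency is what makes the growth bound of $U$ governed by the factor $e^{-\int\mu}$. The sketch below (Python; the grid discretization is an illustrative choice, not from the text) checks the semigroup property of $T_r$ and its vanishing beyond $s_2$.

```python
import numpy as np

# Grid approximation of the right translation (with zero fill) on L^q(0, s2);
# an illustrative discretization, not from the text.
s2, N = 1.0, 1000
grid = np.linspace(0.0, s2, N, endpoint=False)

def T_r(t, phi):
    shift = int(round(t / s2 * N))
    out = np.zeros_like(phi)
    if shift < N:
        out[shift:] = phi[: N - shift]   # (T_r(t) phi)(s) = phi(s - t), zero fill
    return out

phi = np.sin(2.0 * np.pi * grid)
a = T_r(0.5, T_r(0.25, phi))
b = T_r(0.75, phi)
print(np.max(np.abs(a - b)))            # semigroup property: exactly 0 here
print(np.max(np.abs(T_r(1.0, phi))))    # nilpotency: T_r(t) = 0 for t >= s2
```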
The condition on $ \beta(\cdot, \cdot, \cdot) $ implies that $ L $ is compact (see the proof below), so in this case \autoref{thm:age} (c) holds; in particular, we have
\[
\omega_{\mathrm{ess}}(T_{(\mathcal{A}+\mathcal{L})_0}) = \omega_{\mathrm{ess}}(T_{0}) \leq \omega(U) \leq -|\mu(\cdot)|_{L^{\infty}}.
\]
Notice also that $ U(a, 0) $ is not compact for all $ a > 0 $.
\begin{proof}
To show that $ L $ is compact, one can use the classical Kolmogorov theorem characterizing the relatively compact subsets of $ L^q(0,s_2) $; the details are as follows. We need to show that $ \{ L\varphi: \varphi \in L^{p}((0,c), L^q(0,s_2)) ~\text{such that}~ |\varphi| \leq 1 \} $ is relatively compact in $ L^q(0,s_2) $, or equivalently,
\[
\lim_{h\to 0^+} \int_{0}^{s_2-h} |(L\varphi)(s+h) - (L\varphi)(s)|^{q} ~\mathrm{d} s = 0,~ ~\text{uniformly for}~ |\varphi| \leq 1.
\]
For $ \varphi \in L^{p}((0,c), L^q(0,s_2)) = L^{(q,p)}((0,c)\times(0,s_2)) $ such that $ |\varphi| \leq 1 $, i.e.,
\[
\left(\int_{0}^{c}\left(\int_{0}^{s_2} |\varphi(a,\hat{s})|^{q}~\mathrm{d} \hat{s}\right)^{p/q} ~\mathrm{d}a \right)^{1/p} \leq 1,
\]
using the Minkowski integral inequality and the H\"older inequality, we get
\begin{align*}
& \int_{0}^{s_2-h} |(L\varphi)(s+h) - (L\varphi)(s)|^{q} ~\mathrm{d} s \\
= & \int_{0}^{s_2-h} \left|\int_{0}^{c}\int_{0}^{s_2} \{\beta(a, \hat{s}, s + h) - \beta(a, \hat{s}, s)\} \varphi(a,\hat{s}) ~\mathrm{d} \hat{s} ~\mathrm{d}a\right|^{q} ~\mathrm{d} s \\
\leq & \int_{0}^{c}\int_{0}^{s_2} \left(\int_{0}^{s_2-h} |\beta(a, \hat{s}, s + h) - \beta(a, \hat{s}, s)|^{q} ~\mathrm{d} s\right)^{1/q} |\varphi(a,\hat{s})|~\mathrm{d} \hat{s} ~\mathrm{d}a \\
\leq & \int_{0}^{c} \left(\int_{0}^{s_2} \left(\int_{0}^{s_2-h} |\beta(a, \hat{s}, s + h) - \beta(a, \hat{s}, s)|^{q} ~\mathrm{d} s\right)^{q'/q} ~\mathrm{d} \hat{s}\right)^{1/q'} \cdot \left(\int_{0}^{s_2} |\varphi(a,\hat{s})|^{q}~\mathrm{d} \hat{s}\right)^{1/q} ~\mathrm{d}a \\
\leq & \left(\int_{0}^{c} \left(\int_{0}^{s_2} \left(\int_{0}^{s_2-h} |\beta(a, \hat{s}, s + h) - \beta(a, \hat{s}, s)|^{q} ~\mathrm{d} s\right)^{q'/q} ~\mathrm{d} \hat{s}\right)^{p'/q'} ~\mathrm{d}a\right)^{1/p'} : = \rho(h).
\end{align*}
Now we have $ \rho(h) \to 0 $ as $ h \to 0^+ $. Indeed, if $ p \neq 1 $ and $ q \neq 1 $, this follows by a standard argument (see, e.g., \cite[Section 10 Theorem 1]{BP61}); if one of $ p,q $ equals $ 1 $, this is exactly condition (b2). The proof is complete.
\end{proof}
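To make the final step concrete, the sketch below (Python; $p=q=2$, so all mixed norms collapse to a plain $L^2$-norm, and $\beta$ is an illustrative smooth kernel, not from the text) computes a discretized $\rho(h)$ and exhibits the decay $\rho(h)\to 0$ as $h\to 0^+$.

```python
import numpy as np

# Discretized modulus rho(h) from the compactness proof, for p = q = 2
# (then p' = q' = 2 and the mixed norm is the usual L^2 norm on the cube).
# beta is an illustrative smooth kernel, so (b1) holds and rho(h) -> 0.
c, s2 = 1.0, 1.0
a = np.linspace(0.0, c, 41)
shat = np.linspace(0.0, s2, 101)
s = np.linspace(0.0, s2, 201)
da, dsh, ds = a[1] - a[0], shat[1] - shat[0], s[1] - s[0]
A, SH, S = np.meshgrid(a, shat, s, indexing="ij")

def beta(A, SH, S):
    return np.exp(-A) * np.cos(SH) * np.sin(S)

def rho(h):
    m = s + h <= s2                     # integrate over s in (0, s2 - h)
    diff2 = (beta(A[..., m], SH[..., m], S[..., m] + h)
             - beta(A[..., m], SH[..., m], S[..., m])) ** 2
    return np.sqrt(np.sum(diff2) * da * dsh * ds)

print(rho(0.2), rho(0.05), rho(0.0125))  # decays roughly linearly in h
```

For a smooth kernel the decay is linear in $h$, since the difference quotient is controlled by $\partial\beta/\partial s$; conditions (b2) handle exactly the endpoint cases where this argument is unavailable.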
\end{asparaenum}
\end{exa}
\begin{rmk}
In \autoref{exa:model} (b), (c), (d), if $ p = 1 $ and $ \beta(\cdot, \cdot) \in BUC([0,c], L^{\infty}(\Omega)) $ ($ \Omega = \mathbb{R}^n $ or a domain of $ \mathbb{R}^n $) as \cite{Rha98, BHM05}, then \autoref{thm:C} can be applied directly.
A more general model than \autoref{exa:model} (e) has been considered in \cite[Section 6]{Thi98}.
\end{rmk}
\section{Comments}\label{comments}
\subsection{Unbounded perturbation}
There are few unbounded perturbation theorems for MR operators (quasi Hille--Yosida operators); see \cite[Section 2]{TV09} for certain unbounded perturbations of Hille--Yosida operators. This remains a difficult problem. For the case of generators of integrated semigroups and almost sectorial operators, we refer the reader to \cite{ABHN11, KW03, Thi08} and the references therein. Here, we give an unbounded perturbation theorem for MR operators, which is essentially due to Arendt et al. \cite{ABHN11}.
\begin{thm}[{\cite[Theorem 3.5.7]{ABHN11}}]
Assume $A$ is an operator on a Banach space $X$ such that $(\omega,\infty)\subset \rho(A)$, $\limsup\limits_{\lambda > \omega, \lambda \rightarrow \infty} \|R(\lambda, A)\| = 0$, and $B \in \mathcal{L}(D(A), D(A))$ ($D(A)$ is endowed with the graph norm). Then, there is $C \in \mathcal{L}(X)$ such that $A+B$ is similar to $A+C$.
\end{thm}
\begin{proof}
The proof is essentially the same as that of \cite[Theorem 3.5.7]{ABHN11}. In fact, the condition $\sup\limits_{\lambda > \omega} \|\lambda R(\lambda, A)\| < \infty$ in \cite[Theorem 3.5.7]{ABHN11} can be weakened to $\limsup\limits_{\lambda > \omega,\lambda \rightarrow \infty} \|R(\lambda, A)\| = 0$, because we only need $I - BR(\lambda, A) $ to be invertible for sufficiently large $\lambda$. We repeat the proof for the reader's convenience.
Take $ \lambda_0 > \omega $. Since $ (\lambda_0 - A) B R(\lambda_0, A) \in \mathcal{L}(X, X) $, the operator $ I - (\lambda_0 - A) B R(\lambda, A) R(\lambda_0, A) = I - (\lambda_0 - A) B R(\lambda_0, A)R(\lambda, A) $ is invertible for large $\lambda$, as $ \limsup\limits_{\lambda > \omega, \lambda \rightarrow \infty} \|R(\lambda, A)\| = 0 $. Now $ I - BR(\lambda, A) = I - R(\lambda_0, A) (\lambda_0 - A) B R(\lambda, A) $ is invertible as well (here we use the fact that, for $ U, V \in \mathcal{L}(X, X) $, $ I - UV $ is invertible if and only if $ I - VU $ is invertible).
Take $C := (\lambda - A)BR(\lambda, A)$ and $ U := I - BR(\lambda, A) $ for sufficiently large $\lambda$. Then, as shown above $ U $ is invertible (and $ U D(A) = D(A) $). It is easy to verify that $ U (A + C) U^{-1} = A + B $ (see also the proof \cite[Theorem 3.5.7]{ABHN11}). The proof is complete.
\end{proof}
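For completeness, the invertibility fact invoked in the proof admits a one-line verification (a standard identity, with $ U, V \in \mathcal{L}(X,X) $ as above): if $ I - UV $ is invertible, then so is $ I - VU $, with explicit inverse
\[
(I - VU)^{-1} = I + V (I - UV)^{-1} U,
\]
since $ (I - VU)\big(I + V(I-UV)^{-1}U\big) = I - VU + V(I - UV)(I-UV)^{-1}U = I $, and similarly for the product in the other order.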
Since the condition $\limsup\limits_{\lambda > \omega, \lambda \rightarrow \infty} \|R(\lambda, A)\| = 0$ can be satisfied by MR operators (see \autoref{lem:MR} (b)), we obtain the following result.
\begin{cor}
If $A$ is an MR operator (resp. a $p$-quasi Hille--Yosida operator) and $B \in \mathcal{L}(D(A), D(A))$, then so is $A+B$.
\end{cor}
It is possible to extend the results of Sections 3--5 to this unbounded perturbation; we take $L=(\lambda - A)BR(\lambda, A)$. Here are some examples.
\begin{cor}
Let $A$ be an MR operator and $B \in \mathcal{L}(D(A), D(A))$.
\begin{enumerate}[(a)]
\item If there exists $\lambda_0 \in \rho(A)$ such that $(\lambda_0 - A)BR(\lambda_0, A_0) T_{A_0}$ is norm continuous (resp. norm continuous and compact) on $(0,\infty)$, then $\omega_{\mathrm{crit}}(T_{(A+B)_0}) = \omega_{\mathrm{crit}}(T_{A_0})$ (resp. $\omega_{\mathrm{ess}}(T_{(A+B)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$).
\item If $A$ is also a quasi Hille--Yosida operator and there is $\lambda_0 \in \rho(A)$ such that $(\lambda_0 - A)BR(\lambda_0, A_0) T_{A_0}$ is compact on $(0,\infty)$, then $\omega_{\mathrm{ess}}(T_{(A+B)_0}) = \omega_{\mathrm{ess}}(T_{A_0})$.
\end{enumerate}
\end{cor}
\begin{proof}
For (a) (b), it suffices to show that if $(\lambda_0 - A)BR(\lambda_0, A_0)T_{A_0}$ is norm continuous (resp. compact) on $(0,\infty)$, then so is $(\lambda - A)BR(\lambda, A_0) T_{A_0}$ for all $\lambda \in \rho(A)$.
Note that
\[
R(\lambda, A_0) T_{A_0} = T_{A_0} R(\lambda, A_0),
\]
and
\[
BR(\lambda_0, A) T_{A_0} = R(\lambda_0, A)(\lambda_0 - A)BR(\lambda_0, A) T_{A_0}
\]
has the same regularity as $(\lambda_0 - A)BR(\lambda_0, A) T_{A_0}$.
By
\begin{align*}
(\lambda-A)BT_{A_0}R(\lambda, A_0) & = (\lambda-\lambda_0+\lambda_0-A)BT_{A_0} [R(\lambda_0, A_0) + (\lambda_0 - \lambda)R(\lambda_0, A_0)R(\lambda, A_0)] \\
& = [ (\lambda-\lambda_0)BT_{A_0}R(\lambda_0, A_0) + (\lambda_0-A)BT_{A_0}R(\lambda_0, A_0)] [I+(\lambda_0-\lambda)R(\lambda,A)],
\end{align*}
we obtain the results.
\end{proof}
\subsection{Cauchy problems}
Our consideration is relevant to the following semilinear Cauchy problem:
\begin{equation}\label{cauchy}
\begin{cases}
\dot{u}(t) = Au(t) + f(u(t)), \\
u(0) = x,
\end{cases}
\end{equation}
where $A:~D(A) \subset X \rightarrow X$ is an MR operator and $f:\overline{D(A)} \rightarrow X$ is $ C^1 $ and globally Lipschitz (for simplicity, in order to avoid blow-up). Many differential equations, such as age-structured population models, parabolic differential equations, delay equations, and Cauchy problems with boundary conditions, can be reformulated as Cauchy problems. However, in many cases, the operator $A$ is not densely defined, or is not even a Hille--Yosida operator; see, e.g., \cite{DPS87, PS02, DMP10, MR18}.
Assume zero is an equilibrium of \eqref{cauchy} (i.e., $f(0) = 0$). Let $L := Df(0) \in \mathcal{L}(\overline{D(A)}, X)$.
Consider the linearized equation of \eqref{cauchy}, i.e.,
\begin{equation}\label{lincauchy}
\begin{cases}
\dot{u}(t) = Au(t) + Lu(t), \\
u(0) = x.
\end{cases}
\end{equation}
It was shown in \cite{MR09} that solutions of \eqref{lincauchy} can reflect the properties of solutions of \eqref{cauchy} in a neighborhood of $0$. In general, the properties of $A$ (or $T_{A_0}$, $S_A$) are well understood; what we need are the properties of $T_{(A+L)_0}$. Our results can be applied to this situation.
For example, if $T_{(A+L)_0}$ is exponentially stable (i.e., $\omega (T_{(A+L)_0}) < 0$; see \eqref{equ:bound}), then the zero solution of \eqref{cauchy} is locally stable \cite[Proposition 7.1]{MR09}. If $\omega_{\mathrm{crit}}(T_{A_0}) < 0$ and $\omega_{\mathrm{crit}}(T_{(A+L)_0}) \leq \omega_{\mathrm{crit}}(T_{A_0})$, then $\omega_{\mathrm{crit}}(T_{(A+L)_0}) < 0$. Therefore, the only remaining question is whether $s(A+L) < 0$ (see \eqref{equ:bound} and \autoref{thm:crit} (c), and note that $s(A+L) = s((A+L)_0)$). That is, the spectrum of $A+L$ reflects the stability of the zero solution.
\begin{thm}\label{thm:final}
\begin{enumerate}[(a)]
\item Assume that the condition of \autoref{thm:main1} (a) (or \autoref{thm:main2} (a), or \autoref{cor:lt} (a)) is satisfied and that $\omega_{\mathrm{crit}}(T_{A_0}) < 0$. Then the zero solution of \eqref{cauchy} is locally exponentially stable, provided $s(A+L) < 0$.
\item Assume the condition of \autoref{thm:main1} (b) (or \autoref{thm:main2} (b), or \autoref{cor:lt} (b), or \autoref{cor:lmr}) is satisfied and $\omega_{\mathrm{ess}}(T_{A_0}) < 0$. Then, the zero solution of \eqref{cauchy} is locally exponentially stable if $s(A+L) < 0$ and unstable if $s(A+L) > 0$.
\end{enumerate}
\end{thm}
\begin{proof}
(a) This follows from \autoref{thm:crit} (a) and \cite[Proposition 7.1]{MR09}.
(b) This follows from \autoref{thm:ess} (a) and \cite[Proposition 7.1, Proposition 7.4]{MR09}.
\end{proof}
The existence of a Hopf bifurcation and of a center manifold (of an equilibrium) for Cauchy problems requires the condition $\omega_{\mathrm{ess}}(T_{(A+L)_0}) < 0$; see, e.g., \cite{MR09a}. In our paper \cite{Che18c}, this condition was replaced by an \emph{exponential dichotomy condition} in order to develop the invariant manifold theory around more general manifolds (in the sense of Hirsch, Pugh and Shub, and of Fenichel).
For a more concrete application of the results in \autoref{criess} and \autoref{regper} to a class of delay equations with non-dense domains, see \cite{Che18g}; the setting there is very similar to that of \cite{BMR02}.
\input{regularity.bbl}
\end{document} |
\begin{document}
\maketitle
\begin{abstract}
A $W^{1,p}$-metric on an $n$-dimensional closed Riemannian manifold naturally induces a distance function, provided $p$ is sufficiently close to $n$. If a sequence of metrics $g_k$ converges in $W^{1,p}$ to a limit metric $g$, then the corresponding distance functions $d_{g_k}$ subconverge to a limit distance function $d$, which satisfies $d\le d_g$.
As an application, we show that the above convergence result applies to a sequence of conformal metrics with $L^{n/2}$-bounded scalar curvatures, under certain geometric assumptions. In particular, in this special setting, the limit distance function $d$ actually coincides with $d_{g}$.
\end{abstract}
\section{Introduction}
In this paper, we are interested in the convergence of a sequence of $W^{1,p}$-metrics on a Riemannian manifold, which is motivated by the study of conformal metrics with $L^{\frac{n}{2}}$-bounded scalar curvatures.
Let $(M, g_0)$ be a smooth closed $n$-dimensional Riemannian manifold and let $p<n$ be sufficiently close to $n$. We first observe that, given a $W^{1,p}$-metric $g$ with respect to the background metric $g_0$, there is a well-defined distance function $d_g$ associated to $g$. Indeed, using an idea similar to that of the Trace Embedding Theorem, we can show that $g$ is well-defined everywhere except on a possible singular set of Hausdorff dimension at most $n-p$, which enables us to define
$$
d_g(x,y):=\inf\left\{ \int_\gamma \sqrt{g(\dot{\gamma},\dot{\gamma})}: \mbox{piecewise smooth $\gamma$ from $x$ to $y$} \right\},\quad \forall x,y\in M.$$
Now suppose $\{g_k\}$ is a sequence of smooth metrics on $M$ such that $g_k$ and $g_k^{-1}$ converge to $g$ and $g^{-1}$ in $W^{1,p}(M,g_0)$, respectively. Then the limit $W^{1,p}$-metric $g$ induces a distance function $d_g$. On the other hand, the distance functions $d_{g_k}$ associated to $g_k$ converge uniformly to a distance function $d$, and the metric spaces $(M,d_{g_k})$ converge to $(M,d)$ in the sense of the Gromov-Hausdorff distance.
It is then natural to ask: what is the relation between $d_g$ and $d$?
Our first main result is
\begin{theorem}\label{main1}
Let $(M, g_0)$ be an $n$-dimensional closed Riemannian manifold and $p\in(\frac{2n(n-1)}{2n-1},n]$. Suppose $\{g_k\}$ is a sequence of smooth Riemannian metrics such that $\{g_k\} $ and $\{g^{-1}_k\}$ converge to $g$ and $g^{-1}$ in $W^{1,p}(M,g_0)$, respectively. Then, up to a subsequence, $\{d_{g_k}\}$ converges uniformly to a distance function $d$ with $d\leq d_g$. In particular, if $g$ and $g^{-1}$ are continuous, then $d=d_g$.
\end{theorem}
\begin{rem}
Although here we only get the identity $d=d_g$ when $g$ is continuous, we believe it is still true without the assumption that $g$ and $g^{-1}$ are continuous.
\end{rem}
As an application, we next study the compactness of a sequence of conformal metrics $\{g_k=u_k^{\frac{4}{n-2}}g_0\}$ with $L^\frac{n}{2}$-bounded scalar curvature $\|R(g_k)\|_{L^{n/2}}\leq C$ and $\mathrm{Vol}(M,g_k)=1$.
Since $W^{2,\frac{n}{2}}$ fails to embed into $C^0$, in general one cannot expect pointwise compactness without additional assumptions, as shown by the counterexamples in \cite{Brendle,Brendle-Marques,Chang-Gursky-Wolff}. In fact, there is no compactness even if $R(g_k)$ is bounded in $L^\infty$; see \cite{Brendle, Brendle-Marques}.
Here we assume, in addition to the boundedness of the $L^\frac{n}{2}$-norm of the scalar curvature, that its limiting measure is locally small, as specified in the statement of the following theorem. We focus on the convergence of the measures and the distance functions.
\begin{theorem}\label{main2}
Let $(M,g_0)$ be an $n$-dimensional closed Riemannian manifold with $n\geq 3$. Suppose $\{g_k=u_k^{\frac{4}{n-2}}g_0\}$ is a sequence of conformal metrics such that $\mathrm{Vol}(M,g_k)=1$ and $|R(g_k)|^{\frac{n}{2}}dV_{g_k}$ converges weakly to a measure $\mu$ with $\mu(M)<\Lambda$.
Then, for any $q\in (1,\frac{n}{2})$, there exists $\varepsilon_0>0$, depending only on $(M,g_0)$, $\Lambda$ and $q$, such that
if
$$
\mu(\{x\})<\varepsilon_0,\forall x\in M,
$$
then, after passing to a subsequence, we have
1) $\{u_k\}$, $\{\frac{1}{u_k}\}$ and $\{\log u_k\}$ converge to $u$, $\frac{1}{u}$ and $\log u$ weakly in $W_{loc}^{2,q}(M)$, respectively;
2) $d_{g_k}$ converges to $d_{g}$ in $C_{loc}^0(M\times M)$, where $d_g$ is the distance function associated to the limit metric $g:=u^\frac{4}{n-2}g_0.$
\end{theorem}
\begin{rem}
Note that the limit metric $g$ belongs to $W^{2,q}(M)$ with $q<\frac{n}{2}$, and $W^{2,q}$ fails to embed into $C^0$. Hence, in Theorem \ref{main2}, we cannot conclude that $g$ is continuous, but we can use the conformal structure to obtain the same conclusion as in Theorem \ref{main1}.
\end{rem}
\begin{rem}
Note that the $L^{\frac{n}{2}}$-norm of the scalar curvature is scaling invariant, i.e.,
$$
\int_M|R(\lambda g_k)|^\frac{n}{2}dV_{\lambda g_k}=\int_M|R(g_k)|^\frac{n}{2}dV_{g_k}.
$$
So for a general sequence of collapsing metrics $\{g_k\}$, we can normalize the metrics, setting $g_k'=(c_ku_k)^\frac{4}{n-2}g_0$ so that $\mathrm{Vol}(M,g_k')=1$. Then, after passing to a subsequence, the distance functions $d_{g_k'}$ associated to $g_k'$ converge uniformly to a distance function $d_g$ defined by $g=v^{\frac{4}{n-2}}g_0$, where $v$ is the $W^{2,q}$-weak limit of $c_ku_k$.
\end{rem}
In a recent paper \cite{Aldana-Carron-Tapie}, C. Aldana, G. Carron and S. Tapie obtained the above Gromov-Hausdorff convergence result in a similar setting; see \cite[Theorem 5.1]{Aldana-Carron-Tapie}. However, they used a method different from ours, and the equality of $d$ and $d_g$ was left open.
In \cite{Li-Zhou}, the second author studied the bubble tree convergence of $\{g_k\}$ under a stronger assumption that $\|R(g_k)\|_{L^p}<C$ with $p>\frac{n}{2}$. The compactness of conformal metrics with uniformly $L^p$-bounded sectional curvature was discussed in \cite{Chang-Yang1,Chang-Yang2,Gursky}.
The rest of the paper is organized as follows. In Section 2, we study the basic properties of the average limit of a $W^{1,p}$ function with $p<n$.
In Section 3, we discuss the convergence of a sequence of distance functions associated to $W^{1,p}$-metrics and prove Theorem~\ref{main1}. Finally, we provide a key $\varepsilon$-regularity theorem and complete the proof of Theorem~\ref{main2} in Section~4.
\textbf{Acknowledgment.}
Part of this work was done while the third author was visiting S.-Y. A. Chang at Princeton University. She would like to thank S.-Y. A. Chang for helpful discussions. The authors would like to thank the referees for their valuable suggestions on the revision.
\section{Traces of $W^{1,p}$-functions}
Recall that the \emph{Trace Embedding Theorem} states that the restriction of a $W^{1,p}$-function, defined on a domain in $\mathbb{R}^n$, to a $k$-dimensional subset is an $L^q$-function, where $p<n$, $n-p<k\leq n$ and $p\leq q\leq\frac{np}{n-p}$; see \cite[Theorem 4.12]{Adams-Fournier}. There is also the so-called \emph{trace inequality} in Sobolev spaces; see \cite[Chapter 1]{V.G.-T.O.}.
Using ideas similar to those in these references, we will show here that the \emph{centered average limit} of a given $W^{1,p}$-function is well-defined everywhere except on a subset of Hausdorff dimension at most $n-p$
(c.f. \cite{Evans-Gariepy}).
For $r>0$, we denote by $B_r(x)$ the $n$-ball centered at $x\in \mathbb{R}^n$ with radius $r$, and set $B_r:=B_r(0)$. Given a function $u$ defined on a domain $U\subset \mathbb{R}^n$, the \emph{$r$-average} of $u$ at $x\in U$ is
$$
u_{x,r}:=\frac{1}{|B_r(x)\cap U|}\int_{B_r(x)\cap U}u(y)dy.
$$
Now for $u\in W^{1,p}(B_2)$, define the singular set
$$
A(u)=\{x\in B_1:\lim_{\tau\rightarrow 0}\operatorname{osc}_{r\in(0,\tau]} u_{x,r}>0\}.
$$
By a theorem of Federer and Ziemer, we know that $\dim A(u)\leq n-p$ (see \cite[Theorem 2.1.2]{Lin-Yang} or \cite[p.160]{Evans-Gariepy}). Hence, we can define the \emph{centered average limit}:
\begin{defi}
Let $u\in W^{1,p}(B_2)$ with $p<n$. For any $x\in B_1\setminus A(u)$, there exists $\hat{u}(x)$ such that
$$
\lim_{r\to 0}\frac{1}{|B_r(x)|}\int_{B_r(x)}|u(y)-\hat{u}(x)|dy=0;
$$
$\hat{u}(x)$ is called the \emph{centered average limit} of $u$ at $x$.
\end{defi}
Thus $\hat{u}$ is well-defined for $\mathcal{H}^s$-a.e. $x\in B_1$, with $s\in (n-p,n)$.
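A concrete example may help fix ideas. The sketch below (Python; the function $u(y)=|y|^{-\alpha}$ on $B_2\subset\mathbb{R}^2$ and all parameters are illustrative choices, not from the text) exhibits a $W^{1,p}$-function whose centered averages at the origin blow up, so the origin lies in the singular set $A(u)$, consistent with $\dim A(u)\le n-p$.

```python
import numpy as np

# Illustrative example: u(y) = |y|^(-alpha) on B_2 in R^2 belongs to
# W^{1,p}(B_2) when (alpha + 1) * p < 2, yet its centered averages at 0,
#   u_{0,r} = (1/|B_r|) * Int_{B_r} |y|^(-alpha) dy = 2 r^(-alpha) / (2 - alpha),
# blow up as r -> 0, so 0 is in the singular set A(u).
alpha = 0.25            # with p = 1.5: (alpha + 1) * p = 1.875 < 2

def average(r, n=4000):
    ds = r / n
    sgrid = np.linspace(ds, r, n)                  # radial grid on (0, r]
    integral = np.sum(2.0 * np.pi * sgrid ** (1.0 - alpha)) * ds
    return integral / (np.pi * r ** 2)

for r in (1.0, 0.1, 0.01):
    exact = 2.0 * r ** (-alpha) / (2.0 - alpha)
    print(r, average(r), exact)     # the averages grow like r^(-alpha)
```

Here $A(u)=\{0\}$ has Hausdorff dimension $0\le n-p=0.5$, in line with the Federer-Ziemer bound quoted above.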
The following estimate will play an essential role in the next section.
\begin{lem}\label{measure.estimate}
Let $u\in W^{1,p}(B_2)$ and
$$
\mathcal{M}(u,t):=\{x\in B_1\setminus A(u): |\hat{u}(x)|>t\}.
$$
Assume $\|u\|_{L^1(B_2)}\leq \frac{t\omega_n}{4}$
and $s\in(n-p,n)$.
Then
$$
\mathcal{H}_{\infty}^s(\mathcal{M}(u,t))\leq\frac{\Lambda}{t^p}\int_{B_2}
|\nabla u|^p,
$$
where $\Lambda=\Lambda(n,s,p)$. Moreover, there exists
a cover $\{\overline{B_{r_i}(x_i)}\}$ of $\mathcal{M}(u,t)$, such that
$$
x_i\in\mathcal{M}(u,t),\quad\text{and}\quad \omega_s\sum_i r_i^s\leq\frac{\Lambda}{t^p}\int_{B_2}
|\nabla u|^p.
$$
\end{lem}
\proof Fix $x\in\mathcal{M}(u,t)$. By the definition of $\hat{u}$,
$\frac{1}{|B_{r_0}|}\left|\int_{B_{r_0}(x)}u\right|>t$ for all sufficiently small ${r_0}$.
We claim that there exists a small $r_0$ such that
\begin{equation}\label{lemma1-claim}
\frac{1}{{r_0}^s}\int_{B_{r_0}(x)}|\nabla u|^p\geq t_1:=(\frac{t}{\Lambda'})^p,
\end{equation}
where $\Lambda'=\Lambda'(n,p,s)$ is a constant to be determined later.
Assume for contradiction that the claim is not true. By the proof of Poincar\'e inequality (see \cite[pp.275-276]{Evans}), we have
$$
\frac{1}{|B_r|}\int_{B_r(x)}|u-u_{x,\frac{r}{2}}|^p\leq
\Lambda_1r^{p-n}\int_{B_r(x)}|\nabla u|^p,
$$
where $\Lambda_1$ only depends on $n$.
It follows
\begin{eqnarray*}
|u_{x,r}-u_{x,\frac{r}{2}}|&=&\frac{1}{|B_r|}\left|\int_{B_r(x)}(u-u_{x,\frac{r}{2}})\right|\nonumber\\\nonumber
&\leq&\frac{1}{|B_r|}\left(\int_{B_r(x)}|u-u_{x,\frac{r}{2}}|^p\right)^\frac{1}{p}|B_r|^{1-\frac{1}{p}}\\
&=&\left(\frac{1}{|B_r|}\int_{B_r(x)}|u-u_{x,\frac{r}{2}}|^p\right)^\frac{1}{p}\nonumber\\
&\leq&\left(\Lambda_1r^{p-n}\int_{B_r(x)}|\nabla u|^p\right)^\frac{1}{p}\nonumber\\
&\leq&\Lambda_2r^\theta t_1^\frac{1}{p},
\end{eqnarray*}
where $\theta=\frac{p-n+s}{p}$ and $\Lambda_2=\Lambda_1^\frac{1}{p}$.
For $r_0\in[2^{-k},2^{-k+1})$, we have
$$
|u_{x,1}-u_{x,2^{-k}}|\leq \Lambda_2\Big(\sum_{i=0}^{k-1} (2^{-i})^\theta\Big)t_1^\frac{1}{p}
\leq \Lambda_3 t_1^\frac{1}{p},
$$
and
\begin{eqnarray*}
|u_{x,2^{-k}}-u_{x,r_0}|&=&\frac{1}{|B_{r_0}|}\left|\int_{B_{r_0}(x)}(u-u_{x,2^{-k}})\right|\\
&\leq&\frac{|B_{2^{-k+1}}|}{|B_{r_0}|}\frac{1}{|B_{2^{-k+1}}|}\int_{B_{2^{-k+1}}(x)}\left|u-u_{x,2^{-k}}\right|\\
&\leq&2^n\Lambda_2(2^{1-k})^\theta t_1^\frac{1}{p},
\end{eqnarray*}
where $\Lambda_3=2\Lambda_2$.
Then
\begin{eqnarray}\label{decay}
|u_{x,1}-u_{x,r_0}|\leq\Lambda_3t_1^\frac{1}{p}=
\frac{\Lambda_3}{\Lambda'}t.
\end{eqnarray}
Note that $|u_{x,1}|\leq\frac{t}{4}$, so we get a
contradiction if we set $\Lambda'>2\Lambda_3$. This proves our claim~(\ref{lemma1-claim}).
To complete the proof of the lemma, note that by the Vitali Covering Theorem, there exist
pairwise disjoint closed balls
$\{\overline{B_{r_i}(x_i)}\}_{i=1}^\infty$ such that
$$
\frac{1}{r_i^s}\int_{B_{r_i}(x_i)}|\nabla u|^p\geq t_1,\quad
\mathcal{M}(u,t)\subset\bigcup_i\overline{B_{5r_i}(x_i)}.
$$
Therefore, we get
\begin{eqnarray*}
\mathcal{H}_{\infty}^s(\mathcal{M}(u,t))&\leq& \sum_i\omega_s(5r_i)^s=5^s\omega_s\sum_i r_i^s\\
&\leq&
\frac{1}{t_1}5^s\omega_s\int_{\cup B_{r_i}(x_i)}|\nabla u|^p\\
&\leq&\frac{1}{t_1}5^s\omega_s\int_{B_2}|\nabla u|^p.
\end{eqnarray*}
$
\Box$\\
As an application of Lemma~\ref{measure.estimate}, we show that $W^{1,p}$-convergence implies
$\mathcal{H}^s$-a.e. convergence for each $s>n-p$.
\begin{lem}\label{equal}
Assume $u_k, u\in W^{1,p}(B_2)$ and $\|u_k-u\|^p_{W^{1,p}(B_2)}<\frac{1}{2^k}$. Then, for any $s>n-p$, $\hat{u}_k$ converges to
$\hat{u}$ for $\mathcal{H}^{s}$-a.e. $x\in B_1$.
\end{lem}
\proof
Set
$$
A=\Big(\bigcup_{k=1}^\infty A(u_k)\Big)\bigcup A(u),\quad
E_{km}=\{x\in B_1\setminus A:|\hat{u}_k-\hat{u}|<\frac{1}{m}\},
$$
and
$$
E=\bigcap_{m=1}^\infty\bigcup_{i=1}^\infty\bigcap_{k=i}^\infty
E_{km}.
$$
It is easy to check that for any $x\in E$, $\hat{u}_k(x)$ converges to $\hat{u}(x)$.
Let
$$
F=E^c\cap (B_1\setminus A)=\bigcup_{m=1}^\infty\bigcap_{i=1}^\infty\bigcup_{k=i}^\infty
F_{km},
$$
where
$$
F_{km}=\{x\in B_1\setminus A:|\hat{u}_k-\hat{u}|\geq\frac{1}{m}\}.
$$
Since $\hat{u}_k-\hat u=\widehat{u_k-u}$ and $\frac{1}{|B_1(x)|}
\int_{B_1(x)}|u_k-u|dx\rightarrow 0$,
by Lemma \ref{measure.estimate},
$\mathcal{H}^s_{\infty}(F_{km})\leq Cm^p2^{-k}$ when $k$
is sufficiently large. It follows that
$$
\mathcal{H}_{\infty}^s(\bigcap_{i=1}^\infty\bigcup_{k=i}^\infty
F_{km})=0,
$$
which implies $\mathcal{H}^s(F)=0$. Since $B_1\setminus E
\subset A\cup F$, we get $\mathcal{H}^s(B_1\setminus E)=0$.
$
\Box$\\
\begin{rem}\label{second}
Lemma \ref{equal} provides another approach to defining the value of $u$ at a point. Select a sequence of smooth functions
$u_k$ satisfying $\|u_k-u\|_{W^{1,p}}<2^{-k}$.
Since $\hat{u}_k=u_k$, by Lemma \ref{equal},
$u_k$ converges to $\hat{u}$ $\mathcal{H}^{s}$-a.e. whenever $s>n-p$. Therefore, $\hat{u}$ is in fact an $\mathcal{H}^s$-a.e. limit of the $u_k$. From this point of view, one can easily check the following:
1) when $f\in C^1$, $\widehat{f u}=f\widehat{u}$ for $\mathcal{H}^s$-a.e. $x$.
2) when $s>n-\frac{p}{q}$, $\widehat{u^q}=\hat{u}^q$ for $\mathcal{H}^s$-a.e. $x$.
\end{rem}
Let $p>n-m$ and let $\Sigma$ be a compact $m$-dimensional submanifold of $B_1$. We can establish a trace embedding inequality on $\Sigma$. Applying Theorem 1.1.2
in \cite{V.G.-T.O.}
to $\mu=\mathcal{H}^m\lfloor\Sigma$, we have
$$
\|u\|_{L^1(\Sigma)}\leq C(\Sigma)\|u\|_{W^{1,p}(\mathbb{R}^n)},
$$
where $u\in C_0^\infty(\mathbb{R}^n)$.
Given a function $u\in W^{1,p}(B_1)$, after extending it to a function $u'\in W^{1,p}_{0}(B_2)$ with
$$
\|u'\|_{W^{1,p}(B_2)}\leq C(n) \|u\|_{W^{1,p}(B_1)},
$$
we can find $u_k\in C^\infty_0(B_2)$ such that $\|u_k-u'\|_{W^{1,p}(\mathbb{R}^n)}\rightarrow 0$.
Then $\{u_k\}$ is a Cauchy sequence in
$L^1(\Sigma)$.
By Lemma \ref{equal}, we may assume
$u_k$ converges to $\hat{u}$ for $\mathcal{H}^m$-a.e. $x\in\Sigma$. Therefore, we obtain
\begin{equation}\label{trace.inequality}
\int_\Sigma|\hat{u}|d\mathcal{H}^m\lfloor\Sigma\leq C(\Sigma)
\|u\|_{W^{1,p}(B_1)}.
\end{equation}
Now we consider the case where $(M,g)$ is a smooth Riemannian manifold and $u\in W^{1,p}(M)$. For $x\in M$, in a local coordinate chart $(x^1,\cdots,x^n)$ centered at $x$, we define $\hat{u}(x)$ to be the limit of $\frac{1}{\omega_n r^n}
\int_{B_r}u\,dx$ as $r\rightarrow 0$, where $B_r$ is the Euclidean ball as before. In view of Remark \ref{second}, one checks that the value of $\hat{u}(x)$ is independent of the choice of coordinate chart for $\mathcal{H}^s$-a.e.
$x\in M$, where $s>n-p$.
\begin{lem}\label{mf}
There exists a subset $E\subset M$ of Hausdorff dimension smaller than $n-p$ such that
for any $x\notin E$, there holds
\begin{equation}\label{mfd}
\hat{u}(x)=\lim_{r\to 0}\frac{1}{\mathrm{Vol}(B^{g}_r(x))}\int_{B^{g}_r(x)} udV_g,
\end{equation}
where
$$
B_r^g(p)=\{x\in M : d_g(x,p)<r\}.
$$
\end{lem}
\proof
Locally in a coordinate $(x^1,\cdots,x^n)$, we set
$$
\Lambda_s=\{x:\varlimsup_{r\to 0} \frac{1}{r^s}\int_{B_r^n(x)}|\nabla u|^pdx>0\},\quad
\text{and}\quad
\Lambda=\bigcap_{s\in(n-p,n)}\Lambda_s.
$$
It is a standard result that $\mathcal{H}^s(\Lambda_s)=0$ when $s\in(n-p,n)$ (c.f. \cite[Lemma 2.1.1]{Lin-Yang}). Since $\Lambda_{s'}\subset\Lambda_{s}$ for any $s'<s$, we have $\dim \Lambda<n-p$. We will
show that \eqref{mfd} holds for any $x\notin \Lambda$. Obviously,
we only need to prove \eqref{mfd} holds
for any $x\notin \Lambda_s$ and $s\in (n-p,n)$.
Fix an $x_0\notin \Lambda$. As in
\eqref{decay}, we have
$$
|u_{x,r}-u_{x,r'}|\leq \Lambda_3\left(\frac{1}{r^s}\int_{B_r(x)}|\nabla u|^pdx\right)^\frac{1}{p},\quad\text{whenever}\quad r'<r.
$$
Thus $u_{x,r}$ converges as $r\rightarrow 0$ for any $x\notin \Lambda_s$. Denoting
$u_r(x)=u(x_0+rx)$ and applying the Poincar\'e inequality for a ball with any fixed radius $R>0$, we get
$$
\int_{B_R}\left|u_r(x)-\frac{1}{|B_1|}\int_{B_1}u_rdx\right|dx\leq
C(n,R)\left(\int_{B_R}|\nabla u_r|^pdx\right)^{\frac{1}{p}}=C(n,R)\left(R^sr^{s+p-n}\frac{1}{(Rr)^s}\int_{B_{Rr}(x_0)}|\nabla u|^pdx\right)^{\frac{1}{p}}
\rightarrow 0.
$$
Since $\frac{1}{|B_1|}\int_{B_1}u_rdx=u_{x_0,r}$ converges to $\hat{u}(x_0)$, we have
$$
\lim_{r\rightarrow 0}\int_{B_R}\left|u_r(x)-\hat{u}(x_0)\right|dx=0.
$$
Note that, when $r$ is sufficiently small, we may assume $B_1^{g(x_0+rx)/r^2}(0)\,\,\,\,ubset B_R$. Then we conclude
\begin{eqnarray*}
& \ &\lim_{r\to0}\frac{1}{\mathrm{Vol}(B_r^{{g}}(x_0))}\int_{B_{r}^{g}(x_0)} |u-\hat{u}(x_0)|dV_g\\
&=&\lim_{r\to0}\frac{1}{\mathrm{Vol}(B_1^{{g(x_0+rx)}/r^2}(0))}\int_{B_1^{{g(x_0+rx)}/r^2}(0)} |u_r-\hat{u}(x_0)|dV_{g(x_0+rx)/r^2}\\
&\leq& C(M)\cdot\lim_{r\to0}\int_{B_R}|u_r-\hat{u}(x_0)|dx\\
&=&0.
\end{eqnarray*}
$
\Box$\\
\section{$W^{1,p}$-metrics}
Suppose $(M,g_0)$ is a smooth closed $n$-manifold.
Let $g$ be a symmetric tensor of type $(0,2)$, which is positive definite almost everywhere.
Let $g^{-1}$ be the corresponding inverse tensor of type $(2,0)$.
We say that $g$ is a $W^{1,p}$-metric if both $g$ and $g^{-1}\in
W^{1,p}_{loc}(M,g_0)$. The goal of this section is to define the distance functions induced by $W^{1,p}$-metrics and study the compactness of such metrics. In a local coordinate chart, we can write
$$
g=g_{ij}dx^i\otimes dx^j,\,\,\,\, and\,\,\,\,
g^{-1}=g^{ij}\frac{\partial}{\partial x^i}\otimes \frac{\partial}{\partial x^j}.
$$
Then the functions $g_{ij}$, $g^{ij}$ belong to $W^{1,p}_{loc}$, and $(g^{ij})(g_{ij})=I$ as matrices.
Now define the centered average limit of the metric by
$$
\hat{g}_{ij}(x)= \lim_{r\to 0}\frac{1}{|B_r(x)|}\int_{B_r(x)} g_{ij}(y)dy,
$$
and the corresponding tensor by
$$
\hat{g}(x)(V(x),V(x))=\sum_{i,j}\hat{g}_{ij}(x)V_i(x)V_j(x), \quad \forall V\in \Gamma(TM).
$$
By Remark \ref{second}, when $s>n-p$, $\hat{g}$ and $\widehat{g^{-1}}$ are well-defined on
$T_xM$ and $(T_xM)^*$ for $\mathcal{H}^s$-a.e. $x$. Moreover, by Lemma \ref{mf}
\begin{align*}
\hat{g}(x)(V(x),V(x))&=\lim_{r\to 0}\frac{1}{\mathrm{Vol}(B_r^{g_0}(x))}\int_{B_r^{g_0}(x)}g(y)(V(y),V(y))dV_{g_0}
\\ &= \sum_{i,j} \left(\lim_{r\to 0}\frac{1}{|B_r(x)|}\int_{B_r(x)} g_{ij}(y)dy\right)V_i(x)V_j(x).
\end{align*}
Next, we define the associated distance function by
$$
d_g(x,y)=\inf\left\{ \int_\gamma \sqrt{g(\dot{\gamma},\dot{\gamma})}: \mbox{piecewise smooth $\gamma$ from $x$ to $y$} \right\}.$$
First of all, we need to show that $d_g$ is indeed a distance function.
\begin{lem}
When $p\in (n-1,n)$,
$d_g$ is a distance function and it is continuous on $M\times M$.
\end{lem}
\proof
We first show that $d_g(x,y)<+\infty$ for any $x, y\in M$.
Let $\varphi_x$ be the exponential map from $T_xM$
to $M$.
Since $(M,g_0)$ is compact, there exists a number $\tau=\tau(M,g_0)>0$, such that for each $x\in M$, $\varphi_x$ induces normal coordinates $({x'}^1,\cdots,{x'}^n)$ on $B_\tau^{g_0}(x)$
with
$$
|g_{0,ij}(x')-\delta_{ij}|<\frac{1}{2}.
$$
It follows that the metric
$$
g=g_{ij}(x')d{x'}^i\otimes d{x'}^j,
$$
satisfies
\begin{equation}\label{wip.chart}
\frac{1}{C}\|g\|_{W^{1,p}(B_\tau^{g_0}(x),g_0)}\leq\|(g_{ij})\|_{W^{1,p}(B_\tau^n(0))}\leq C\|g\|_{W^{1,p}(B_\tau^{g_0}(x),g_0)},
\end{equation}
and
\begin{equation}\label{wip.chart2}
\frac{1}{C}\|g^{-1}\|_{W^{1,p}(B_\tau^{g_0}(x),g_0)}\leq\|(g^{ij})\|_{W^{1,p}(B_\tau^n(0))}\leq C\|g^{-1}\|_{W^{1,p}(B_\tau^{g_0}(x),g_0)},
\end{equation}
where $C$ is a constant independent of $x$. Following \cite[p.178]{Petersen}, we call a curve $\gamma_{pq}$ joining $p$ and $q$ a \emph{segment} in $(M,g_0)$ if $\mathrm{Length}(\gamma_{pq})=d_{g_0}(p,q)$ and $|\dot\gamma_{pq}|$ is constant.
Let $\gamma:[0,l]\rightarrow M$ be the segment in $(M,g_0)$
from $x$ to $y$. Then we can find $x_1=\gamma(t_1)$, $x_2=\gamma(t_2)$, $\cdots$,
$x_m=\gamma(t_m)$, such that
$$
m<\frac{2l}{\tau}, \,\,\,\, 0\leq t_{i+1}-t_i\leq \frac{\tau}{2}.
$$
For convenience, we set $x_0=\gamma(0)=x$ and $x_{m+1}=\gamma(t_{m+1})=y$.
It suffices to prove $d_{g}(x_i,x_{i+1})<+\infty$.
Without loss of generality, we assume
the coordinate of $x_{i+1}$ in chart
$(B_\tau^{g_0}(x_i),\varphi_{x_i}^{-1})$ is
$(\delta_i,0,\cdots,0)$, where
$\delta_i=t_{i+1}-t_i$.
Obviously,
$$
d_g(x_i,x_{i+1})\leq\int_0^{\delta_i}\sqrt{\hat{g}_{11}(t,0,\cdots,0)}dt\leq
\delta_i^\frac{1}{2}\sqrt{\int_0^{\delta_i} \hat{g}_{11}(t,0,\cdots,0)dt}.
$$
By \eqref{wip.chart}, \eqref{wip.chart2} and \eqref{trace.inequality},
$\hat{g}_{11}(t,0,\cdots,0)$ is integrable on $[0,\delta_i]$. It follows that $d_g(x,y)<+\infty$ and $d_g(x,y)\rightarrow 0$ when $l\to 0$.
Next, we prove that $d_g(x,y)>0$ for any $x\neq y$. In fact, we can
prove a stronger result here: for any $\delta>0$, there exists $\delta'>0$, which depends on $g_0$, $\delta$ and
$||g^{-1}||_{W^{1,p}(M,g_0)}$, such that if $d_{g_0}(x,y)\geq\delta$,
then
\begin{equation}\label{positive.distance}
d_g(x,y)\geq \delta'.
\end{equation}
Assume $\gamma:[0,l]\rightarrow M$ is an arbitrary piecewise smooth curve from $x$ to $y$ in $M$. We set
$$
l'=\sup\{t\in[0,l]:\gamma([0,t])\subset B_\frac{\tau}{2}^{g_0}(x)\}.
$$
Obviously,
$$
d_{g_0}(x,\gamma(l'))=\min\{\tau/2,d_{g_0}(x,y)\}.
$$
In the coordinate defined by $\varphi_x$, we set
$\|(g_{ij})\|=\sqrt{\sum_{i,j} (g_{ij})^2}$.
It is well-known that
$\|(g_{ij})\|^2$ is the
sum of the squares of the eigenvalues of $(g_{ij})$. Let $\lambda$ be the
smallest eigenvalue of $(\hat{g}_{ij})$. Since $\frac{1}{\lambda}$ is an
eigenvalue of $(\hat{g}^{ij})$, we have
$$
E_a:=\{x'\in B_\frac{\tau}{2}^n(0):\lambda(x')<a\}\subset\{x'\in B_\frac{\tau}{2}^n(0):\|(\hat{g}^{ij})\|(x')>\frac{1}{a}\}.
$$
By the inequality $\|(\hat{g}^{ij})\|\leq c(n)\sum_{i,j} |\hat{g}^{ij}|$, together with
Lemma \ref{measure.estimate}, we can find a sufficiently small $a$, which depends on $||g^{-1}||_{W^{1,p}(M)}$, $\delta$ and $\tau$, such that
$$
\mathcal{H}_{\infty}^1(\varphi^{-1}_x(\gamma|_{[0,l']})\cap E_a)\leq\mathcal{H}_\infty^1(\{x'\in B_\frac{\tau}{2}^n(0):\|(\hat{g}^{ij})\|>\frac{1}{a}\})\leq \sum_{i,j}a^{p}\Lambda'\|{g}^{ij}\|^p_{W^{1,p}(B_\frac{\tau}{2}^n(0))}< \frac{d_{g_0}(x,\gamma(l'))}{4}.
$$
Note that using balls to cover $\varphi^{-1}_x(\gamma|_{[0,l']})$ might increase $\mathcal{H}_{\infty}^1(\varphi^{-1}_x(\gamma|_{[0,l']}))$ by at most a factor of $2$ (see \cite{Han-Lin}). It follows that
$$
d_{\mathbb{R}^n}(0,\varphi_x^{-1}(\gamma(l')))=d_{g_0}(x,\gamma(l')) \leq 2\mathcal{H}_{\infty}^1(\varphi^{-1}_x(\gamma|_{[0,l']})),
$$
which implies
\begin{eqnarray*}
\mathcal{H}^1(\varphi^{-1}_x(\gamma|_{[0,l']})\setminus E_a)&\geq&\mathcal{H}_{\infty}^1(\varphi^{-1}_x(\gamma|_{[0,l']})\setminus E_a) \geq\mathcal{H}^1_{\infty}(\varphi^{-1}_x(\gamma|_{[0,l']}))-\mathcal{H}^1_{\infty}(\varphi^{-1}_x(\gamma|_{[0,l']})\cap E_a)\\
&\geq&\frac{d_{g_0}(x,\gamma(l'))}{2} -\frac{d_{g_0}(x,\gamma(l'))}{4}=\frac{1}{4}d_{g_0}(x,\gamma(l')).
\end{eqnarray*}
Since $\gamma$ is locally Lipschitz continuous,
$$
\int_{\varphi_x^{-1}(\gamma|_{[0,l']}) \setminus E_a} |\dot\gamma|\geq \mathcal{H}^1(\varphi^{-1}_x(\gamma|_{[0,l']})\setminus E_a),
$$
so we have
$$
\int_\gamma\sqrt{\hat{g}(\gamma)(\dot\gamma,\dot\gamma)}
\geq\int_{\varphi_x^{-1}(\gamma|_{[0,l']})\setminus E_a}\sqrt{\hat{g}_{ij}(\gamma)\dot\gamma^i\dot\gamma^j}
\geq\int_{\varphi_x^{-1}(\gamma|_{[0,l']}) \setminus E_a}\sqrt{\lambda}|\dot\gamma|\geq\frac{\sqrt{a}}{4}d_{g_0}(x,\gamma(l')).
$$
This completes the proof, by letting $\delta'=\frac{a}{4}\min\{\tau/2,\delta\}$.
$
\Box$\\
{\it The proof of Theorem \ref{main1}:}
Since $|\nabla_{g_k,x} d_{g_k}(x,y)|=1$, in local coordinates we have
$$
\lambda(x)|\nabla_{x} d_{g_k}(x,y)|\leq 1,
$$
where $\lambda$ is the smallest eigenvalue of $(g_{k,ij})$.
Since $\frac{1}{\lambda}$ is an eigenvalue of
$g_k^{ij}$, we see
$$
|\nabla_x d_{g_k}(x,y)|\leq\frac{1}{\lambda}\leq c(n)\sum_{i,j} |g_k^{ij}(x)|.
$$
Similarly, we have $|\nabla_y d_{g_k}(x,y)|\leq c(n)\sum_{i,j}|g_k^{ij}(y)|$,
hence $d_{g_k}$ is bounded in $W^{1,\frac{np}{n-p}}(M\times M,g_0)$. Therefore
we may assume $d_{g_k}$ converges to a function $d$ in $C^0(M\times M)$.
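The $C^0$-compactness step can be sketched as follows: $\dim(M\times M)=2n$, and the assumption $p>2n\frac{n-1}{2n-1}$ implies $\frac{np}{n-p}>2n$ for $n\geq 2$, so the Sobolev embedding gives
$$
W^{1,\frac{np}{n-p}}(M\times M,g_0)\hookrightarrow C^{0,\alpha}(M\times M),\,\,\,\, \alpha=1-\frac{2n(n-p)}{np}.
$$
Hence the sequence $\{d_{g_k}\}$ is uniformly bounded and equicontinuous, and the Arzel\`a-Ascoli theorem yields a subsequence converging in $C^0(M\times M)$.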
By \eqref{positive.distance}, we can assume further
\begin{equation}\label{positive.distance2}
d(x,y)\geq \delta',\,\,\,\, \mbox{whenever} \,\,\,\, d_{g_0}(x,y)\geq\delta.
\end{equation}
Next, we prove that $d\leq d_{g}$. Given two points $x,y \in M$, take a piecewise smooth curve $\gamma$ from $x$ to $y$.
By \eqref{trace.inequality}, after passing to a subsequence, $\sqrt{g_{k}(\dot{\gamma},\dot{\gamma})}$
converges to $\sqrt{g(\dot{\gamma},\dot{\gamma})}$
for $\mathcal{H}^1$-a.e. point of $\gamma$. Moreover, we have
$$
\int_\gamma\left(\sqrt{g_k(\dot{\gamma},\dot{\gamma})}\right)^2<C,
$$
which implies
$$
\lim_{k\rightarrow+\infty}\int_\gamma \sqrt{g_k(\dot{\gamma},\dot{\gamma})}=\int_\gamma\sqrt{g(\dot{\gamma},\dot{\gamma})}.
$$
Thus, we arrive at $d(x,y)\leq d_g(x,y)$.
Finally, we show that $d=d_{g}$ in the case when
$g$ and $g^{-1}$ are continuous. For any $\varepsilon>0$ fixed, let
$$
E_k=\{x:g_k>(1-\varepsilon)g\}=\{x:g-g_k<\varepsilon g\},\,\,\,\, and\,\,\,\, F_k=E_k^c.
$$
We claim that
$$
\lim_{k\rightarrow+\infty}\mathcal{H}^1_\infty(F_k)=0.
$$
Since $M$ is compact, we only need to prove the claim in a local coordinate chart $\varphi: U\rightarrow \mathbb{R}^n$. That is, we only need to check that for any $B_R \subset \varphi(U)$,
$$
\mathcal{H}^1_\infty(B_R\cap F_k)\rightarrow 0.
$$
For simplicity, we denote the
maximum eigenvalue and the minimum eigenvalue of a matrix $A$ by
$\Lambda(A)$ and $\lambda(A)$ respectively.
Since $g$ and $g^{-1}$ are continuous, we assume for any
$x\in B_R$,
$$
\frac{\lambda(g_{ij}(x))}{\|(g_{ij}(x))\|}\geq\varepsilon_1
$$
for some $\varepsilon_1>0$. Note that
\begin{eqnarray*}
F_k\cap B_R&\subset&\{x\in B_R:\Lambda(g_{ij}-g_{k,ij})\geq\varepsilon\lambda(g_{ij})\}\\
&\subset&\{x\in B_R:\|g_{ij}-g_{k,ij}\|\geq\varepsilon\lambda(g_{ij})\}\\
&\subset&\{x\in B_R:\|g_{ij}\|\cdot\|I-g_{k,ij}g^{ij}\|\geq\varepsilon\lambda(g_{ij})\}\\
&\subset&\{x\in B_R:\|I-g_{k,ij}g^{ij}\|\geq\varepsilon_1\varepsilon\}.
\end{eqnarray*}
From the identity
$$
\nabla\big((g_{k,ij})(g^{ij})\big)=(\nabla g_{k,ij})(g^{ij})+
(g_{k,ij})(\nabla g^{ij}),
$$
we see
$$
\|I-(g_{k,ij})(g^{ij})\|_{W^{1,q}}\rightarrow 0,
$$
for any $q<\frac{np}{2n-p}$. Since $p>2n\frac{n-1}{2n-1}$, we can
choose $q$ such that $q>n-1$.
By Lemma \ref{measure.estimate}, after passing to a subsequence, we get
$$
\lim_{k\rightarrow+\infty}\mathcal{H}^1_\infty(\{x\in B_R:\|I-(g_{k,ij})(g^{ij})\|\geq\varepsilon_1\varepsilon\})=0.
$$
Thus the claim follows.
The above claim implies that, given $\varepsilon'>0$, for any $k$ sufficiently large, we can cover $F_k$, which is a compact subset,
by finitely many
balls $\overline{B_{r_1}}(x_1)$, $\cdots$, $\overline{B_{r_m}}(x_m)$, such that
$$
\sum r_i<\varepsilon'.
$$
Let $C_1$, $\cdots$, $C_{m'}$ be the connected components of
$B=\bigcup \overline{B_{r_i}(x_i)}$ and
set $t_1=\inf\{t:\gamma(t)\in B\}$. Without loss of generality, we assume $\gamma(t_1)\in C_1$. Put $t_2=\sup\{t:\gamma(t)\in C_1\}$, and replace
$\gamma|_{[t_1,t_2]}$ with the segment $\overline{\gamma(t_1)\gamma(t_2)}$.
In the same manner, we may choose $t_3=\inf\{t:\gamma(t)\in B\setminus C_1\}$ and by induction,
we can find
$$
0\leq t_1<t_2<t_3<\cdots<t_{m'}\leq 1,
$$
such that
$$
\sum_id_{g_0}(\gamma(t_{2i}),\gamma(t_{2i-1}))\leq\sum_i\mathrm{diam}(C_i)\leq\sum_i r_i<\varepsilon'.
$$
This gives rise to a new curve $\gamma'$ in place of $\gamma$. Then
\begin{align*}
\int_{\gamma} \sqrt{g_k(\dot\gamma,\dot\gamma)} &\geq \int_{\gamma\cap\gamma'}
\sqrt{g_k(\dot\gamma,\dot\gamma)}\\
&\geq
(1-\varepsilon)^\frac{1}{2}\left(
\int_{\gamma'}\sqrt{g(\dot{\gamma}',\dot{\gamma}')}-\sum_i\int_{\gamma(t_{2i-1})}^{\gamma(t_{2i})}\sqrt{g(\dot{\gamma}',\dot{\gamma}')}\right)
\\
&\geq
(1-\varepsilon)^\frac{1}{2}(d_g(x,y)-\varepsilon'\|\sqrt{g}\|_{C^0}).
\end{align*}
Therefore,
$$
d_{g_k}(x,y)\geq (1-\varepsilon)^\frac{1}{2}(d_g(x,y)-\varepsilon'\|\sqrt{g}\|_{C^0}).
$$
Now letting $k\rightarrow+\infty$, then $\varepsilon'\rightarrow 0$,
and finally $\varepsilon\rightarrow 0$, we get
$$
d(x,y)\geq d_g(x,y).
$$
$
\Box$\\
Although we only consider a compact manifold in Theorem \ref{main1}, a similar result
actually holds for certain complete manifolds. For example, we have
\begin{pro}\label{complete.case}
Let $\{g_k\}$ be a sequence of metrics defined on $\mathbb{R}^n$ and
assume $g_k$ and $g_k^{-1}$ converge to $g_{\mathbb{R}^n}$ and $g_{\mathbb{R}^n}^{-1}$ respectively
in $W_{loc}^{1,p}(\mathbb{R}^n)$ for some $p>2n\frac{n-1}{2n-1}$. Then, after passing to a subsequence, $d_{g_k}(x,y)$ converges to
$|x-y|$.
\end{pro}
\proof
Let $R=|x-y|$. We only need to prove this proposition on $\overline{B_{2R}}$. We omit the details since the proof is almost the same as that of Theorem \ref{main1}.
$
\Box$\\
\section{Conformal metrics with $L^\frac{n}{2}$-bounded scalar curvature}\label{conformal.metrics}
First we recall some notation from conformal geometry. Let $(M,g_0)$ be a closed Riemannian manifold. Denote the scalar curvature by $R(g)$ (or $R_g$). If $g=u^{\frac{4}{n-2}}g_0$ is a conformal metric, then $u$ satisfies the following equation
$$
-\frac{4(n-1)}{n-2}\Delta u+R(g_0)u=R(g)u^\frac{n+2}{n-2}.
$$
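In local coordinates, this equation can be rewritten in the divergence form \eqref{equation.epsilon} studied below (a sketch; $\det g_0$ denotes the determinant of the matrix $(g_{0,ij})$):
$$
-\partial_i\left(\frac{4(n-1)}{n-2}\sqrt{\det g_0}\,g_0^{ij}\partial_j u\right)=\sqrt{\det g_0}\left(-R(g_0)+R(g)u^{\frac{4}{n-2}}\right)u,
$$
so that on a small coordinate ball one may take $a^{ij}=\frac{4(n-1)}{n-2}\sqrt{\det g_0}\,g_0^{ij}$ and $f=\sqrt{\det g_0}\left(-R(g_0)+R(g)u^{\frac{4}{n-2}}\right)$.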
\subsection{$\varepsilon$-regularity}
Again we denote by $B_r$ a ball in $\mathbb{R}^n$ with radius $r$, centered at $0$. Let $u$ be a weak solution of
\begin{equation}\label{equation.epsilon}
-\mathrm{div}(a^{ij}u_{j})=fu,
\end{equation}
where
\begin{equation}\label{aij}
0<\lambda_1 I\leq (a^{ij}),\,\,\,\, \|a^{ij}\|_{C^0(B_2)}+\|\nabla a^{ij}\|_{C^0(B_2)}
<\lambda_2.
\end{equation}
\begin{lem}\label{Lalpha}
Suppose $u\in W^{1,2}(B_2)$ is a positive weak solution of equation \eqref{equation.epsilon} with coefficients satisfying \eqref{aij}. Assume
$$
\int_{B_2}|f|^\frac{n}{2}\leq \Lambda,
$$
then
$$
r^{2-n}\int_{B_r(x)}|\nabla\log u|^2<C,\,\,\,\, \forall B_r(x)\subset B_1.
$$
Moreover, there exist constants $\alpha$ and $C$, which depend on $\Lambda$, $\lambda_1$, $\lambda_2$, such that
$$
\int_{B_1}(cu)^{\alpha}+\int_{B_1}(cu)^{-\alpha}<C,
$$
where $-\log c$ is the mean value of $\log u$ on $B_1$.
\end{lem}
\proof
For a ball $B_{2r}(x)\subset B_2(0)$, take $\phi=\eta^2u^{-1}$ as a test function, with $\eta\equiv 1$ on $B_r(x)$, $\eta\in C^\infty_0(B_{2r}(x))$ and $|\nabla \eta|\leq \frac{C}{r}$. Multiplying
\eqref{equation.epsilon} by $\phi$ and integrating, we get
$$
\int_{B_{2r}(x)}\eta^2u^{-2}|\nabla u|^2\leq C\left(\int_{B_{2r}(x)} |\nabla\eta|^2+(\int_{B_{2r}(x)} f^{\frac{n}{2}})^{\frac{2}{n}}(\int_{B_{2r}(x)} \eta^{\frac{2n}{n-2}})^{\frac{n-2}{n}}\right),
$$
which implies
\begin{align*}
\int_{B_{r}(x)}|\nabla \log u|^2&\leq Kr^{n-2}.
\end{align*}
By the Sobolev Embedding Theorem and the John-Nirenberg Lemma \cite[Theorem 3.5]{Han-Lin}, for $\alpha=\frac{C(n)}{K}$, we have
\begin{equation}\label{JN}
\|u\|_{L^{\alpha}(B_{1})}\|u^{-1}\|_{L^{\alpha}(B_1)}\leq C.
\end{equation}
Let $v=\log cu$, where $c$ is chosen such that
$$
\int_{B_1}v=0.
$$
By the Poincar\'e inequality, we can assume
$$
\int_{B_1}|v|\leq \beta_0,
$$
where $\beta_0$ only depends on $\Lambda$, $\lambda_1$ and
$\lambda_2$. Denote the Lebesgue measure over $\mathbb{R}^n$ by $L^n$.
Let
$$
E=\{x:v\leq \frac{2\beta_0}{L^n(B_1)}\}.
$$
Then
$$
L^n(B_1\setminus E)\leq \frac{L^n(B_1)}{2\beta_0}\int_{B_1}|v|\leq\frac{L^n(B_1)}{2},
$$
and $L^n(E)\geq\frac{1}{2}L^n(B_1)$.
By \eqref{JN}, we get
$$
C\geq \int_{B_1}(cu)^\alpha\int_{B_1}(cu)^{-\alpha}\geq\int_{B_1}(cu)^{\alpha}\int_{E}(cu)^{-\alpha}\geq \frac{1}{2}L^n(B_1)
e^{-\frac{2\alpha \beta_0}{L^n(B_1)}}\int_{B_1}(cu)^{\alpha}.
$$
In the same way, we can get the estimate of $\int_{B_1}(cu)^{-\alpha}$.
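The symmetric estimate can be sketched as follows: with $E'=\{x\in B_1: v\geq -\frac{2\beta_0}{L^n(B_1)}\}$, the same argument gives $L^n(E')\geq\frac{1}{2}L^n(B_1)$, and on $E'$ we have $(cu)^\alpha\geq e^{-\frac{2\alpha\beta_0}{L^n(B_1)}}$, hence
$$
C\geq \int_{B_1}(cu)^\alpha\int_{B_1}(cu)^{-\alpha}\geq\int_{E'}(cu)^{\alpha}\int_{B_1}(cu)^{-\alpha}\geq \frac{1}{2}L^n(B_1)
e^{-\frac{2\alpha \beta_0}{L^n(B_1)}}\int_{B_1}(cu)^{-\alpha}.
$$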
$
\Box$\\
\begin{lem}\label{regularity}
Suppose $u\in W^{1,2}(B_2)$ is a positive solution of \eqref{equation.epsilon} with coefficients satisfying \eqref{aij}, and $\log u\in W^{1,2}(B_2)$.
Then for any $q\in (0,\frac{n}{2})$, there exists $\varepsilon_0
=\varepsilon_0(q,\lambda_1,\lambda_2)>0$, such that
if
$$
\int_{B_2}|f|^\frac{n}{2}<\varepsilon_0,
$$
then
$$
\|\nabla\log u\|_{W^{1,q}(B_{\frac{1}{2}})}\leq C(\lambda_1,\lambda_2,\varepsilon_0),
$$
and
$$
e^{-\frac{1}{|B_\frac{1}{2}|}\int_{B_\frac{1}{2}}\log u}\|u\|_{W^{2,q}(B_\frac{1}{2})}+e^{\frac{1}{|B_\frac{1}{2}|}\int_{B_\frac{1}{2}}\log u}\|u^{-1}\|_{W^{2,q}(B_\frac{1}{2})}
\leq C(\lambda_1,\lambda_2,\varepsilon_0).
$$
\end{lem}
\proof Let $v=\log u$. In order to apply Lemma \ref{Lalpha}, we first assume $\int_{B_1}v=0$.
Let $\eta$ be a smooth cutoff function and $\phi=\eta^2u^{\beta}$ be a test function, where $\eta$ and $\beta\notin\{-1,0\}$ will be chosen later. Multiplying both sides of
(\ref{equation.epsilon}) by $\phi$ and integrating, we obtain
$$
\int_{B_1} 2\eta\nabla\eta u^{\beta}\nabla u+\int_{B_1} \eta^2\beta u^{\beta-1}|\nabla u|^2=\int_{B_1} f\eta^2u^{\beta+1}.
$$
By Young's inequality and H\"older's inequality:
\begin{align}\label{ie1}
|\beta|\int_{B_1} \eta^2 u^{\beta-1} |\nabla u|^2 \leq \frac{C}{|\beta|}\int_{B_1} |\nabla \eta|^2 u^{\beta+1}+(\int_{B_1} |f|^\frac{n}{2})^\frac{2}{n}(\int_{B_1} (\eta^2 u^{\beta+1})^\frac{n}{n-2})^\frac{n-2}{n}.
\end{align}
Applying the Sobolev inequality and Poincar\'e inequality to $\eta u^\frac{\beta+1}{2}$
, we get
\begin{align*}
(\int_{B_1} (\eta u^\frac{\beta+1}{2})^\frac{2n}{n-2})^{\frac{n-2}{n}}&\leq \alpha_n\int_{B_1}|\nabla(\eta u^{\frac{\beta+1}{2}})|^2
\\&\leq 2\alpha_n \int_{B_1}(\nabla \eta)^2 u^{\beta+1}+2\alpha_n \int_{B_1}(\eta)^2|\nabla u^\frac{\beta+1}{2}|^2,
\end{align*}
which together with (\ref{ie1}) gives
\begin{align}\label{ie2}
\frac{4|\beta|}{(\beta+1)^2}\int_{B_1} \eta^2|\nabla u^\frac{\beta+1}{2}|^2 \leq (\frac{C}{|\beta|}+C\varepsilon_0)\int_{B_1} |\nabla\eta|^2 u^{\beta+1}+C\varepsilon_0\int_{B_1} \eta^2|\nabla u^\frac{\beta+1}{2}|^2.
\end{align}
When
$$
C\varepsilon_0\leq \frac{2|\beta|}{(\beta+1)^2},
$$
we have
$$
\frac{2|\beta|}{(\beta+1)^2}\int_{B_1} \eta^2|\nabla u^\frac{\beta+1}{2}|^2 \leq (\frac{C}{|\beta|}+\frac{2|\beta|}{(\beta+1)^2})\int_{B_1} |\nabla\eta|^2 u^{\beta+1},
$$
and
$$
\frac{2|\beta|}{(\beta+1)^2}\int_{B_1} |\nabla(\eta u^\frac{\beta+1}{2})|^2 \leq (\frac{C}{|\beta|}+\frac{6|\beta|}{(\beta+1)^2})\int_{B_1} |\nabla\eta|^2 u^{\beta+1}.
$$
Take $\frac{1}{2}\leq r_1< r_2\leq 1$. Let $\eta\in C^\infty_0(B_{r_2})$, $\eta\equiv 1$ on $B_{r_1}$ and $|\nabla \eta|\leq\frac{C}{|r_2-r_1|}$.
By Poincar\'e inequality and Sobolev inequality, we get
$$
(\int_{B_{r_1}} |u^{\frac{\beta+1}{2}}|^{2^*})^\frac{1}{2^*}\leq C\left(\frac{(\beta+1)^2}{\beta^2}+1\right)\frac{1}{|r_2-r_1|}(\int_{B_{r_2}} (u^\frac{1+\beta}{2})^2)^\frac{1}{2},
$$
where $2^*=2\frac{n}{n-2}$.
Next we deduce a uniform bound for $\|u\|_{L^p}$.
Let $\frac{\beta+1}{2}=\alpha$. We can choose $\varepsilon_0$ such that
$\|u\|_{L^{2^*\alpha}}<C$. Then by setting $\frac{\beta+1}{2}=2^*\alpha$ we can get $\|u\|_{L^{2^*\cdot 2^* \alpha}}<C$. After several
iterations, we get an estimate of $\|u\|_{L^\frac{n}{n-2}}$.
So without loss of generality, we assume $\|u\|_{L^\frac{n}{n-2}}<C$.
Denote $\alpha=\frac{n}{n-2}$ and take
$$
\frac{n}{n-2}\geq p_0>1.
$$
Then
$$
(\int_{B_{r_1}} |u^{p_0\frac{\alpha(\beta+1)}{p_0}}|)^{\frac{p_0}{\alpha(\beta+1)}\frac{\beta+1}{2p_0}}\leq C\left(\frac{(\beta+1)^2}{\beta^2}+1\right)\frac{1}{|r_2-r_1|}(\int_{B_{r_2}} u^{p_0\frac{1+\beta}{p_0}})^{\frac{p_0}{\beta+1}\frac{\beta+1}{2p_0}},
$$
i.e.
\begin{equation}\label{moser1}
(\int_{B_{r_1}} |u^{p_0\frac{\alpha(\beta+1)}{p_0}}|)^{\frac{p_0}{\alpha(\beta+1)}}\leq \left(C(\frac{(\beta+1)^2}{\beta^2}+1)\frac{1}{|r_2-r_1|}\right)^\frac{2p_0}{\beta+1}(\int_{B_{r_2}} u^{p_0\frac{1+\beta}{p_0}})^{\frac{p_0}{\beta+1}}.
\end{equation}
Take $\beta+1=\alpha^mp_0$, $r_1=\frac{1}{2}+\frac{1}{2^{m+2}}$ and $r_2=\frac{1}{2}+\frac{1}{2^{m+1}}$, where $m=0,1,2,\cdots,m_0$, and
$$
m_0=\max\{m:C\varepsilon_0\leq \frac{2(\alpha^mp_0-1)}{(\alpha^mp_0)^2}\}.
$$
Rewrite \eqref{moser1} as follows:
$$
\|u^{p_0}\|_{L^{\alpha^{m+1}}(B_{r_1})}\leq C^\frac{2m}{\alpha^m}\|u^{p_0}\|_{L^{\alpha^m}(B_{r_2})},
$$
which implies
$$
\|u^{p_0}\|_{L^{\alpha^{m_0+1}}(B_{\frac{1}{2}})}\leq C^{\sum\limits_{i=0}^{+\infty} i\alpha^{-i}}\|u^{p_0}\|_{L^{1}(B_{1})}.
$$
Given $p\geq p_0$, select $m_0$ such that $p<p_0\alpha^{m_0+1}$ and choose $\varepsilon_0$ under the additional assumption:
$$
C\varepsilon_0\leq \min\{\frac{2(\alpha^mp_0-1)}{(\alpha^mp_0)^2}:m=0,1,\cdots,m_0\}.
$$
It follows that
$$
\|u\|_{L^p(B_{\frac{1}{2}})}\leq C\|u\|_{L^{p_0\alpha^{m_0+1}}(B_{\frac{1}{2}})}\leq C\|u\|_{L^{p_0}(B_1)}\leq C.
$$
Now we return to the elliptic equation \eqref{equation.epsilon}.
For any $q<\frac{n}{2}$, we have
$$
(\int_{B_\frac{1}{2}} (fu)^q)^{\frac{1}{q}}\leq (\int_{B_\frac{1}{2}} f^\frac{n}{2})^{\frac{2}{n}}(\int_{B_\frac{1}{2}} u^\frac{n}{n-2q})^{\frac{n-2q}{n}}.
$$
Thus, taking $p>\frac{n}{n-2q}$ above, the elliptic estimates give
$$
\|u\|_{W^{2,q}(B_\frac{1}{4})}<C.
$$
Finally, we derive the estimate of $\|u^{-1}\|_{W^{2,q}}$.
Similar to the above arguments, one can get $\|u^{-1}\|_{L^p(B_\frac{1}{4})}<C$. The estimate of $\|u^{-1}\|_{W^{2,q}}$ follows
from the following:
$$
\nabla u^{-1}=-\frac{\nabla u}{u^2},\,\,\,\, \nabla^2u^{-1}=
-\frac{\nabla^2u}{u^2}+2\frac{\nabla u\otimes\nabla u}{u^3}.
$$
Since
$$
\nabla\log u=\frac{\nabla u}{u},\,\,\,\, \nabla^2\log u=
\frac{\nabla^2u}{u}-\frac{\nabla u\otimes\nabla u}{u^2},
$$
we get the estimate of $\log u$.
Notice that for any positive constant $c$, $cu$ still satisfies the equation.
So we can get the estimate of $\|\log u\|_{W^{2,q}}$ without the assumption
that $\int_{B_1}\log u=0$.
$
\Box$\\
\subsection{Proof of Theorem \ref{main2}}The main goal of
this subsection is to prove Theorem \ref{main2}.
For any $x\in M$, on a small ball $B_r(x)\subset M$, $g_0|_{B_r(x)}$ can be regarded as a metric over $B_r\subset \mathbb{R}^n$ and we have the following equation
$$
-\frac{4(n-1)}{n-2}\Delta_{g_0} u_k=(-R(g_0)+R(g_k)u_k^\frac{4}{n-2})u_k.
$$
By Lemma \ref{Lalpha}, $\|\nabla\log u_k\|_{L^2(M)}<C$. Let $-\log c_k$
be the mean value of $\log u_k$ over $M$. By the Poincar\'e inequality,
$\|\log c_ku_k\|_{L^1(M)}<C$. Note $c_ku_k$ also satisfies
$$
-\frac{4(n-1)}{n-2}\Delta_{g_0}c_k u_k=(-R(g_0)+R(g_k)u_k^\frac{4}{n-2})c_ku_k.
$$
Cover $M$ by finitely many balls $B_{r_1}(x_1)$, $\cdots$,
$B_{r_m}(x_m)$. Assume for each $B_{r_i}(x_i)$,
$$
\int_{B_{2r_i}(x_i)} |R_k|^\frac{n}{2}dV_{g_k}<\varepsilon_0.
$$
Applying Lemma \ref{regularity} to $c_ku_k$, we get
$\|c_ku_k\|_{W^{2,q}(M)}+\|(c_ku_k)^{-1}\|_{W^{2,q}(M)}<C$.
Since
$$
1=\int_Mu_k^\frac{2n}{n-2}dV_{g_0}=\frac{1}{c_k^\frac{2n}{n-2}}\int_M(c_ku_k)^\frac{2n}{n-2}dV_{g_0}<C\frac{1}{c_k^\frac{2n}{n-2}},
$$
and
\begin{eqnarray*}
\mathrm{Vol}^2(M,g_0)&\leq&\int_Mu_k^\frac{2n}{n-2} dV_{g_0}\int_Mu_k^{-\frac{2n}{n-2}} dV_{g_0}=\int_Mu_k^{-\frac{2n}{n-2}} dV_{g_0}\\
&=&c_k^\frac{2n}{n-2}\int_M(c_ku_k)^{-\frac{2n}{n-2}} dV_{g_0}
\leq Cc_k^\frac{2n}{n-2},
\end{eqnarray*}
we get the bound of $c_k$.
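Combining the two displays, the two-sided bound can be recorded explicitly (a sketch; $C$ denotes the constant from Lemma \ref{regularity}):
$$
\frac{\mathrm{Vol}^2(M,g_0)}{C}\leq c_k^\frac{2n}{n-2}\leq C,
$$
so $c_k$ is bounded above and below away from zero, uniformly in $k$.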
This proves the first part of Theorem \ref{main2}.
By Theorem \ref{main1}, we may assume the sequence $\{d_{g_k}\}$ converges to a distance function
$d$ with $d\leq d_g$. To finish the proof of Theorem \ref{main2},
we need to show $d\geq d_g$.
The key observation is the following:
\begin{lem}\label{du0/d0}
For any $\varepsilon>0$, we can find $\beta$ and $\tau$, which only depend on $\varepsilon$, such that if
$$
\mu(B_{2\delta}(0))<\tau, \,\,\,\, \delta<\delta_0,
$$
then
$$
\frac{d_g(x,y)}{d(x,y)}\leq1+\varepsilon,\,\,\,\, \forall x, y\in B_{\beta\delta}^{g_0}(0).
$$
\end{lem}
\proof
We argue by contradiction and assume the lemma is not true. Then we can find $\delta_m\to0$ and $y_m, x_m\in B_{\delta_m}(0)$, such that
$\frac{|x_m-y_m|}{\delta_m}\rightarrow 0$ and
$$
\limsup_{k\rightarrow+\infty}\int_{B_{\delta_0}(0)}|R(g_k)|^\frac{n}{2}dV_{g_k}= 0,\,\,\,\, \frac{d(y_m,x_m)}{d_g(y_m,x_m)}\rightarrow a<1.
$$
For any fixed $m$, by the Sobolev Embedding Theorem and Lemma \ref{regularity}, taking $q$ sufficiently close to $\frac{n}{2}$, we can find $k_m$ such that $\{u_{k_m}\}$ and $\{\frac{1}{u_{k_m}}\}$ converge to $u$ and $\frac{1}{u}$ respectively in $W^{1,p_0}$, with $p_0<\frac{nq}{n-q}$. Moreover, using H\"older's inequality, we can get that, after passing to a subsequence, $\{\frac{u_{k_m}}{u}\}$ converges to $1$ in $W^{1,p}_{loc}(B_{\delta_0}(0))$, for some $p\in (n-1,p_0)$. So we have
$$
\left|\frac{d_{g_{k_m}}(y_m,x_m)}{d_g(y_m,x_m)}-\frac{d(y_m,x_m)}{d_g(y_m,x_{m})}\right|<\frac{1}{m},
$$
and
\begin{equation}\label{L2}
\int_{B_{\delta_m}(x_m)}\left|\nabla\frac{u_{k_m}(y)}{u(y)}\right|^pdy
+\int_{B_{\delta_m}(x_m)}\left|\frac{u_{k_m}(y)}{u(y)}-1\right|^pdy<\frac{1}{m},
\end{equation}
where $r_m=|y_m-x_m|$ and $B_{\delta_m}(x_m)\subset B_{\delta_0}(0)$. By (\ref{L2}), we have
\begin{equation}\label{L1}
r_m^{n-p}\int_{B_{\frac{\delta_m}{r_m}}(0)}\left|\nabla\frac{u_{k_m}(r_mx+x_m)}{u(r_mx+x_m)}\right|^pdx
+r_m^{n}\int_{B_{\frac{\delta_m}{r_m}}(0)}\left|\frac{u_{k_m}(r_mx+x_m)}{u(r_mx+x_m)}-1\right|^pdx<\frac{1}{m}.
\end{equation}
For simplicity, we set $y_m=x_m+r_m(1,0,\cdots,0)$ in local coordinates. More precisely, we can take a segment $\gamma_m$ joining $x_m$ and $y_m$, such that
$$
\gamma_m(t)=x_m+r_m\gamma(t),\,\,\,\, where\,\,\,\, \gamma(t)=(t,0,\cdots,0),\,\,\,\, t\in[0,1].
$$
Let $u_m'=c_mu_{k_m}(x_m+r_mx)$, where $c_m$ is chosen such that
$$
0=\int_{B_\frac{1}{2}}\log u_m'.
$$
In the local coordinates, set $h_m(x)=(g_0)_{ij}(x_m+r_mx)dx^i\otimes dx^j$, which converges to $g_{\mathbb{R}^n}$ smoothly. Let $g_m'(x)=(u_m'(x))^\frac{4}{n-2}h_m(x)=\frac{c_m^\frac{4}{n-2}}{r_m^2}g_{k_m}(x_m+r_mx)$. For any $R>0$,
$$
\int_{B_R(0)}|R(g_m')|^\frac{n}{2}dV_{g_m'}\leq\int_{B_{\delta_0}(0)}|R(g_{k_m})|^\frac{n}{2}
dV_{g_{k_m}}\rightarrow 0.
$$
By Lemma \ref{regularity}, we may assume $u_m'$ converges to a positive harmonic function $u'$ weakly in $W^{2,q}_{loc}(\mathbb{R}^n)$
with $\int_{B_\frac{1}{2}}\log u'=0$, for some $q<\frac{n}{2}$.
By Liouville's theorem, $u'$ is a constant. Since $\int_{B_\frac{1}{2}}\log u'=0$,
$u'=1$.
Hence, for a fixed $R>1$, $u_m'$ converges to $1$ in $W^{1,p}(B_R(0))$. By \eqref{L1}, $\frac{u(x_m+r_mx)}{u_{k_m}(x_m+r_mx)}$ converges to $1$ in $W^{1,p}(B_R(0))$. By H\"older's inequality, $u_m'(x)\frac{u(x_m+r_mx)}{u_{k_m}(x_m+r_{m}x)}$ converges to $1$ in $W^{1,p'}(B_R(0))$, for some $p'\in (n-1,p)$. Then we have
\begin{align*}
\lim_{m\to \infty}c_m^\frac{2}{n-2}d_g(x_m,y_m)&\leq \lim_{m\to \infty}\int_{\gamma_m}(c_mu(\gamma_m(t)))^\frac{2}{n-2}
\\ &= \lim_{m\to \infty}\int_0^1(c_mu(\gamma_m(t)))^\frac{2}{n-2}
\sqrt{(g_0)_{ij}(\gamma_m(t))\dot{\gamma}_m^i(t)\dot{\gamma}_m^j(t)}dt
\\ &=\lim_{m\to \infty} \int_0^1 \left(c_mu_{k_m}(x_m+r_{m}\gamma(t))\frac{u(x_m+r_m\gamma(t))}{u_{k_m}(x_m+r_{m}\gamma(t))}\right)^\frac{2}{n-2}dt
\\ &=\lim_{m\to \infty} \int_0^1\left(u_m'(\gamma(t))\frac{u(x_m+r_m\gamma(t))}{u_{k_m}(x_m+r_{m}\gamma(t))}\right)^\frac{2}{n-2}dt.
\end{align*}
Therefore, we get
$$
\lim_{m\rightarrow+\infty} c_m^\frac{2}{n-2}d_g(x_m,y_m)\leq 1.
$$
By Proposition \ref{complete.case},
$$
c_m^\frac{2}{n-2}d_{g_{k_m}}(x_m,y_m)=d_{g_m'}(0,(1,0\cdots,0))\rightarrow 1.
$$
Hence,
$$
\liminf_{m\rightarrow+\infty}\frac{d(x_m,y_m)}{d_g(x_m,y_m)}\geq1,
$$
which contradicts $a<1$.
$
\Box$\\
{\it The proof of Theorem \ref{main2}:} It remains to show that $d_g\leq d$. Let $\varepsilon$, $\tau$ and $\beta$ be as in Lemma \ref{du0/d0} and set $A_\tau=\{x:\mu(\{x\})>\tau\}$. Obviously, $A_\tau$
is a finite set. Then, for any $\delta>0$, we have
$$
\int_{B_\delta(x)}|R(g_k)|^\frac{n}{2}dV_{g_k}<\tau,\,\,\,\, whenever\,\,\,\, B_{2\delta}(x)
\cap A_\tau=\emptyset,
$$
when $k$ is sufficiently large. It follows that
$$
\frac{d_g(x,y)}{d(x,y)}<1+\varepsilon,
$$
whenever $d_{g_0}(x,y)<\beta\delta$ and $x\notin B_\delta(A_\tau)$.
We say a metric space is a length space if, for each $x$
and $y$ in this space, there exists a minimal geodesic joining them (see \cite[p.148]{Fukaya}).
Since $(M,d)$ is also the Gromov-Hausdorff limit of $(M,d_{g_k})$, $(M,d)$ is a length space according to \cite[Proposition 1.10]{Fukaya}. Let $\gamma$ be the minimal geodesic defined in $(M,d)$ joining
$x_1$ and $x_2$, i.e. $\gamma:[0,a]\rightarrow (M,d)$
is a continuous map which satisfies
$$
\gamma(0)=x_1,\,\,\,\, \gamma(a)=x_2,\,\,\,\, and\,\,\,\,
d(\gamma(s),\gamma(s'))=|s-s'|,\,\,\,\,\forall s,s'\in[0,a].
$$
We claim that $\gamma$ is also continuous in $(M,g_0)$. For otherwise, we can find $t_k\rightarrow t$ and $b>0$, such that
$d_{g_0}(\gamma(t_k),\gamma(t))>b$.
By \eqref{positive.distance2}, there exists $b'>0$, such that
$$
|t_k-t|\geq d(\gamma(t_k),\gamma(t))>b',
$$
which is impossible.
Now we consider two cases. The first case is when $\gamma\cap A_\tau=\emptyset$, i.e. $d(A_\tau,\gamma[0,a])>0$.
Since $\gamma$ is continuous, we may assume
$$
d_{g_0}(A_\tau,\gamma[0,a])>\delta>0.
$$
Then there exists
$$
s_0=0<s_1<\cdots<s_m=a,
$$
such that
$$
d_{g_0}(\gamma(s_{i+1}),\gamma(s_i))<\beta\delta.
$$
It follows that
\begin{eqnarray}\label{d0.d0'}\nonumber
d(x_1,x_2)&=&\sum_{i=0}^{m-1} d(\gamma(s_i),\gamma(s_{i+1}))\\
&\geq& (1+\varepsilon)^{-1}\sum_{i=0}^{m-1}d_g(\gamma(s_i),
\gamma(s_{i+1}))\\\nonumber
&\geq& (1+\varepsilon)^{-1}d_g(x_1,x_2).
\end{eqnarray}
The remaining case is when $\gamma\cap A_\tau\neq
\emptyset$. Let
$$
\gamma\cap A_\tau=\{\gamma(a_1), \cdots, \gamma(a_i)\}.
$$
The distance function $d$ is bounded by
\begin{eqnarray*}
d(x_1,x_2)&\geq& d(x_1,\gamma(a_1-\varepsilon'))
+d(\gamma(a_1+\varepsilon'),\gamma(a_2-\varepsilon'))+
\cdots+d(\gamma(a_i+\varepsilon'),x_2)\\
&\geq&
(1+\varepsilon)^{-1}(d_g(x_1,\gamma(a_1-\varepsilon'))
+
\cdots+d_g(\gamma(a_i+\varepsilon'),x_2)).
\end{eqnarray*}
Letting $\varepsilon'\rightarrow 0$, we get \eqref{d0.d0'} again.
Finally, by letting $\varepsilon\rightarrow 0$, we get the desired inequality.
$
\Box$\\
\end{document} |
\begin{document}
\title{Grassmannians of Lagrangian Polarizations}
\tableofcontents
\abstract{This paper is an introduction to polarizations in the symplectic and orthogonal settings. They arise in association with a triple of compatible structures on a real vector space, consisting of an inner product, a symplectic form, and a complex structure. A polarization is a decomposition of the complexified vector space into the eigenspaces of the complex structure; this information is equivalent to the specification of a compatible triple. When either a symplectic form or inner product is fixed, one obtains a Grassmannian of polarizations.
We give an exposition of this circle of ideas, emphasizing the symmetry of the symplectic and orthogonal settings, and allowing the possibility that the underlying vector spaces are infinite-dimensional. This introduction would be useful for those interested in applications of polarizations to representation theory, loop groups, complex geometry, moduli spaces, quantization, and conformal field theory.}\blfootnote{\keywords{Lagrangian Grassmannian, polarizations, symplectic form, inner product, Siegel disk, symplectic group, orthogonal group}}\blfootnote{\subjclass{15A66, 15B77, 32G15, 53D99, 81S10}}
\section{Introduction}
This paper is a self-contained introduction to polarizations of complex vector spaces and Lagrangian Grassmannians of polarizations. These appear in geometry and algebra in many contexts, such as algebraic and complex geometry \cite{farkas_riemann_1992,siegel_topics_1988,birkenhake_complex_2004}, moduli spaces \cite{nag_teichmuller_1995,siegel_topics_1988}, loop groups \cite{segal_unitary_1981,pressley_loop_2003}, and the metaplectic and spin representations \cite{plymen_spinors_1994,habermann_introduction_2006}. They also appear in physics in association with the latter objects \cite{woit_quantum_2017,ottesen_infinite_1995}, and in conformal field theory \cite{huang_two-dimensional_1995,bleuler_definition_1988}. The paper covers the two types of polarization: the Riemannian one, in which the polarization is an orthogonal decomposition; and the symplectic one, in which the polarization is a symplectic decomposition.
Our presentation is suitable for graduate students. We also hope that seasoned researchers might find it useful as a quick introduction. It could serve as preparation or as a reference for those encountering polarizations and Grassmannians in any of the topics above. Although these topics are not treated here, we included many examples with references for orientation.
Throughout, we have emphasized the symmetry of the Riemannian and symplectic point of view. We therefore chose a ``middle ground'' notationally, sampling from common notation in both fields, as well as from complex geometry.
At the heart of the idea of polarization is a triple of compatible structures: a complex structure, an inner product, and a symplectic form. A familiar example is a K\"ahler manifold, which is a manifold equipped with three compatible structures: 1) a Riemannian metric; 2) an integrable complex structure; 3) a symplectic form. Here the three structures vary from point to point on the manifold.
The compatibility condition ensures that when we are provided with two out of the three structures, we can reconstruct the third one, if it exists (which is not always the case).
It then becomes natural to fix \emph{one} of these structures, and study the space of structures of a second type, which allow for the reconstruction of a compatible structure of the third type.
For example, we can fix a symplectic manifold $(M, \omega)$, and study the space of integrable complex structures on $M$, such that $M$ admits a compatible Riemannian metric.
In this article we restrict our attention to structures on a fixed real vector space $M=H$. (In the example of a K\"ahler manifold, one could restrict to a particular tangent space, or consider the case that the manifold is itself a complex vector space.) In particular we do not discuss integrability. Instead, we consider deformations of these structures on the fixed vector space, or equivalently, the Grassmannian of polarizations on the complexification of $H$. We explore different models of the Grassmannians, and their manifold structures.
We also simultaneously generalize the setting, by allowing (and indeed focusing on) the case that $H$ is infinite-dimensional. Doing this requires only a little extra effort, and immediately broadens the scope.
Some functional analytic issues rear their head when passing to the infinite-dimensional case, but these are easily dealt with.
There are many interesting examples which fit in this context, many of which appear in the text. Most of the examples are completely accessible; in the few cases where details would be distracting, we indicate the relevant literature.
Here is an outline of the paper.
In \cref{sec:Triples} we recall the definition of compatible triples, and explore their basic properties and characterizations. We also take care of a few functional analytic issues necessary when dealing with the case of (infinite-dimensional) Hilbert spaces.
In \cref{sec:Polarizations}, we consider both types of polarization. Briefly,
if $J$ is a complex structure on $H$, then the complexification $H_{\mathbb{C}}$ splits as a direct sum of the $\pm i$-eigenspaces of $J$.
This establishes a correspondence between certain types of decompositions of $H_{\mathbb{C}}$, and complex structures on $H$.
We exploit this relation to phrase our problem in the language of Grassmannians, in anticipation of Sections \ref{sec:Bosons} and \ref{sec:fermions}.
If $g$ is a Riemannian metric, and $J$ is orthogonal, then the corresponding decomposition of $H_{\mathbb{C}}$ is orthogonal.
If $\omega$ is a symplectic form, and $J$ is a symplectomorphism, then the corresponding decomposition of $H_{\mathbb{C}}$ is Lagrangian.
A Lagrangian, resp.~orthogonal decomposition of $H_{\mathbb{C}}$ is part of the data required to carry out bosonic, resp.~fermionic geometric quantization.
In the current context, this procedure produces a representation of the Heisenberg, resp.~Clifford algebra of $H$ on the Fock space of the chosen subspace of $H_{\mathbb{C}}$.
We give a cursory overview of this construction in \cref{sec:HilbertSchmidt} for the sake of motivation.
For us, the most important point is that considerations of unitary equivalence of representations lead to a natural functional analytic condition, which picks out a subset of each of the Grassmannians under consideration. These are the ``restricted'' Grassmannians of representation theory.
In \cref{sec:Bosons,sec:fermions} we describe the symplectic and orthogonal Grassmannians respectively, giving different models and providing a logically complete exposition of their basic properties.
These Grassmannians have the desirable property that they can be equipped with the structure of a complex manifold.
We give a detailed construction of an atlas of these manifolds. In the restricted case, this is modelled on the Hilbert space of Hilbert-Schmidt operators on some infinite-dimensional Hilbert space.
Section \ref{se:sewing} explores a geometric interpretation of a particular Lagrangian Grassmannian in terms of sewing. This example arises in loop groups, conformal field theory, and Teichm\"uller theory. Finally, Section \ref{se:solutions} contains a complete set of solutions to the exercises.
\paragraph{Acknowledgements}
PK gratefully acknowledges support from the Pacific Institute for the Mathematical Sciences, and from the Hausdorff Center for Mathematics. ES acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC).
\section{Compatible triples}\label{sec:Triples}
In this section, we introduce the notion of compatible triples. This is the data of an inner product, a symplectic form, and a complex structure, related in a sense defined shortly. We will see various equivalent ways of expressing this compatibility. The ubiquity of compatible triples is illustrated with examples.
As promised, we give our exposition in the generality of Hilbert spaces, and we begin with a few preliminary considerations about them.
Those concerned only with the finite-dimensional case may skip this first page, and ignore a few extra lines of justification in some of the proofs, without losing the thread. This strategy can be profitably maintained until the restricted Grassmannians appear in Section \ref{sec:Bosons}.
Let $H$ be a separable (possibly even finite-dimensional) real Hilbert space.
That is, $H$ is a separable topological vector space over $\mathbb{R}$, which admits an inner product with respect to which it is a Hilbert space.
We recall the following three notions:
\begin{itemize}\itemsep-.2em
\item A \emph{strong inner product} on $H$ is a continuous symmetric bilinear form on $H$, such that the map $\varphi_{g}:H \rightarrow H^{*}$ defined by the relation $\varphi_{g}(v)(w) = g(v,w)$ is an isomorphism, and such that $g(v,v) \geqslant 0$ for all $v \in H$.
\item A \emph{complex structure} $J$ on $H$ is a continuous map $J: H \rightarrow H$ with the property that $J^{2} = -\mathds{1}$.
\item A \emph{strong symplectic form} on $H$ is a continuous anti-symmetric bilinear form $\omega$ on $H$, such that the map $\varphi_{\omega}:H \rightarrow H^{*}$ defined by the relation $\varphi_{\omega}(v)(w) = \omega(v,w)$ is an isomorphism.
\end{itemize}
The maps $\varphi_{\omega},\varphi_{g}: H \rightarrow H^{*}$ are sometimes called the musical isomorphisms.
Note that here, by isomorphism, we mean a bounded linear bijection; by the open mapping theorem, the inverse is then automatically bounded. Of course in finite dimensions a linear bijection is automatically bounded.
When a vector space is equipped with a symplectic form $\omega$, one may consider the \emph{canonical commutation relation} algebra of $H$.
This is the unital, associative algebra generated by $H$ subject to the condition $vw - wv = \omega(v,w)$.
Similarly, when one is given an inner product, one may consider the \emph{canonical anti-commutation relation} algebra of $H$.
This is the unital, associative algebra generated by $H$ subject to the condition $vw + wv = g(v,w)$.
Consideration of (certain variations of) these algebras and their modules motivates many of the definitions and results outlined in this article.
We will give a sketch of some of this motivation in \cref{sec:HilbertSchmidt} ahead.
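For instance, take $H = \mathbb{R}^{2}$ with standard basis $x, y$ and the standard compatible triple of \cref{ex:ct_standard_finite} below (so $\omega(x,y) = 1$ and $g$ is the standard inner product). The canonical commutation relation algebra is then generated by $x$ and $y$ subject to
\begin{equation*}
xy - yx = \omega(x,y) = 1,
\end{equation*}
which is the Heisenberg relation, while the canonical anti-commutation relation algebra is generated by $x$ and $y$ subject to
\begin{equation*}
x^{2} = y^{2} = \tfrac{1}{2}, \qquad xy + yx = 0,
\end{equation*}
which is (up to normalization) the Clifford algebra of $\mathbb{R}^{2}$.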
If $(H,g)$ is a Hilbert space, then $g$ is a strong inner product. Conversely, a strong inner product induces a Hilbert space structure equivalent to the one given by $g$, as the following proposition shows.
\begin{proposition}\label{pr:StrongEquivalence}
Let $(H,g)$ be a Hilbert space, and let $\psi: H \times H \rightarrow \mathbb{R}$ be a bilinear pairing. Then $\psi$ is a strong inner product if and only if $(H,\psi)$ is a Hilbert space and the norms induced by $\psi$ and $g$ are equivalent.
\end{proposition}
\begin{proof}
Denote by $\| w \|_\psi$ the norm $\sqrt{ \psi(w,w)}$. Since $\psi$ is continuous, there is a $K$ such that $\| w \|_\psi \leq K \| w\|$ for all $w$; in particular $\| w \|_\psi$ is finite for any fixed $w$.
Since $\varphi_\psi$ is an isomorphism with bounded inverse, it is bounded below, say $\| \varphi_{\psi}(w) \| \geq C \| w \|$ for some $C > 0$.
Using this together with the Cauchy-Schwarz inequality for $\psi$ we get
\begin{equation*}
C \| w \| \leq \| \varphi_{\psi}(w) \| = \sup_{\| v \|=1 } |\psi(w,v)|
\leq \sup_{\| v \|=1 } \| w \|_\psi \| v\|_\psi \leq K \| w\|_{\psi}.
\end{equation*}
Thus the norms $\| \cdot \|$ and $\| \cdot \|_\psi$ are equivalent; since $(H,g)$ is complete, so is $(H,\psi)$, and hence $(H,\psi)$ is a Hilbert space.
Conversely, if $(H,\psi)$ is a Hilbert space and the norms induced by $\psi$ and $g$ are equivalent, then they are also equivalent on $H^*$. Since $\varphi_\psi:(H,\psi) \rightarrow (H^*,\psi)$ is an isomorphism by the Riesz representation theorem, it is also a bounded isomorphism with respect to $g$.
\end{proof}
Emboldened by \cref{pr:StrongEquivalence} we occasionally refer to a Hilbert space without designating a particular metric as special.
It is worth pausing here to compare the infinite- and finite-dimensional settings.
If $\psi: H \times H \rightarrow \mathbb{R}$ is a bilinear map, then $\psi$ is \emph{weakly non-degenerate} if the induced map $\varphi_{\psi}: H \rightarrow H^{*}$ defined by $\varphi_{\psi}(v)(w) = \psi(v,w)$ is injective.
This is equivalent to the statement that for every $0 \neq v \in H$ there exists a $w \in H$ such that $\psi(v,w) \neq 0$.
The map $\psi$ is \emph{strongly non-degenerate} if the induced map $\varphi_{\psi}: H \rightarrow H^{*}$ is an isomorphism.
In finite dimensions, these notions coincide, but if $H$ is infinite-dimensional, this is no longer true as \cref{ex:WeakInnerProduct} will show.
On the other hand, in finite dimensions an inner product is automatically strong on account of its positive-definiteness.
\begin{example}\label{ex:WeakInnerProduct}
Consider the Hilbert space $\ell^{2}(\mathbb{R})$ of square-summable sequences, with its standard inner product $g( \{ a_{n} \}, \{ b_{n} \}) = \sum_{n \geqslant 1} a_{n}b_{n}$.
We equip $\ell^{2}(\mathbb{R})$ with the symmetric bilinear map $\psi(\{a_{n}\},\{b_{n}\}) = \sum_{n \geqslant 1} a_{n}b_{n}/n$.
The map $\psi$ is certainly weakly non-degenerate.
It is, however, not strongly non-degenerate, because $\varphi_{\psi}$ is not surjective.
Indeed, let $\xi: H \rightarrow \mathbb{R}$ be the map $\xi(\{a_{n} \}) = \sum_{n \geqslant 1} a_{n}/n$.
A pre-image $\{ x_{n} \}$ of $\xi$ under the map $\varphi_{\psi}$ (if it existed) would have to satisfy
\begin{equation*}
\sum_{n \geqslant 1} a_{n}/n = \psi(\{ x_{n} \},\{ a_{n} \}) = \sum_{n \geqslant 1} x_{n} a_{n} / n,
\end{equation*}
for all $\{ a_{n} \} \in \ell^{2}(\mathbb{R})$.
It follows that $x_{n} =1$ for all $n$, but this sequence is not square-summable.
\end{example}
We will not be concerned with weakly non-degenerate maps, so from now on, we assume that any inner product or symplectic form is strong.
The adjective ``strong'' will be dropped except when needed for proof or emphasis.
\begin{definition}\label{def:CompatTriples}
A \emph{compatible triple} $(g,J,\omega)$ consists of a strong inner product $g$, a complex structure $J$, and a strong symplectic form $\omega$, such that $g(v,w) = \omega(v,Jw)$.
\end{definition}
Compatible triples can also be characterized as follows.
\begin{proposition} \label{pr:three_properties_compatible} Let $g$ be an inner product, $J$ a complex structure, and $\omega$ a symplectic form on $H$. The following are equivalent.
\begin{enumerate}
\item $g(v,w) = \omega(v,Jw)$ for all $v,w \in H$;
\item $\omega(v,w) = g(Jv,w)$ for all $v,w \in H$;
\item $J(v) = \varphi_{g}^{-1} \varphi_{\omega}(v)$ for all $v \in H$.
\end{enumerate}
\end{proposition}
\begin{proof}
Observe that, using the symmetry of $g$ and the anti-symmetry of $\omega$, properties 1 and 2 are equivalent to $\varphi_{g} = -\varphi_{\omega} J$ and $\varphi_{\omega} = \varphi_{g} J$ respectively.
Now by multiplying with $J$ from the right, we see that $\varphi_{g} = -\varphi_{\omega} J$ if and only if $\varphi_{\omega} = \varphi_{g} J$, and by multiplying with $\varphi_{g}^{-1}$ from the left, we see that $\varphi_{\omega} = \varphi_{g}J$ if and only if $\varphi_{g}^{-1} \varphi_{\omega} = J$.
\end{proof}
Thus, any of the above could be taken as the definition of a compatible triple.
\begin{exercise} \label{exer:one_two_then_J_preserves} Assume that $(g,J,\omega)$ is a compatible triple. Show that $J$ preserves both $g$ and $\omega$, i.e.~$g(Jv,Jw) = g(v,w)$ and $\omega(Jv,Jw) = \omega(v,w)$.
\end{exercise}
\begin{exercise} \label{exer:two_of_three_gives_compat} Show that
\begin{enumerate}[label=\alph*)]
\item If $g$ and $J$ are an inner product and a complex structure, then there exists a symplectic form $\omega$ such that $(g,J,\omega)$ is compatible if and only if $J$ is skew-adjoint with respect to $g$.
\item If $g$ and $\omega$ are an inner product and a symplectic form, then there exists a complex structure $J$ such that $(g,J,\omega)$ is compatible if and only if $\varphi_{g}^{-1}\varphi_{\omega} = - \varphi_{\omega}^{-1} \varphi_{g}$.
\item If $J$ and $\omega$ are a complex structure and a symplectic form such that $g(v,w) = \omega(v,Jw)$ defines an inner product, then $J$ is skew-symmetric with respect to $\omega$.
\end{enumerate}
\end{exercise}
\begin{exercise} \label{exer:example_not_inner}
Show that there exist a symplectic form $\omega$ and a complex structure $J$, which is skew-symmetric with respect to $\omega$, such that $g(v,w) = \omega(v,Jw)$ does not define an inner product.
\end{exercise}
\begin{example} \label{ex:ct_standard_finite}
Let $H=\mathbb{R}^{2n}$.
Let $\{e_{i}\}_{i=1,...,2n}$ be the standard basis, and write $x_{i} = e_{2i-1}$ and $y_{i} = e_{2i}$.
Let $g$ be the standard inner product, and define $J$ by $J x_{i} = y_{i}$ and $J y_{i} = - x_{i}$.
The symplectic form $\omega$ such that $(g, J, \omega)$ is a compatible triple is given by
\[ \omega(x_k,y_k)= -\omega(y_k,x_k)=1, \ \ \ \ k=1,\ldots,n, \]
and $\omega$ is zero on all other pairs of basis vectors.
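In matrix form, for $n=1$ and with respect to the basis $(x_1, y_1)$,
\begin{equation*}
J = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}, \qquad \omega(u,v) = u^{T} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} v,
\end{equation*}
and one checks directly that
\begin{equation*}
\omega(v, Jw) = v^{T} \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} w = v^{T} w = g(v,w),
\end{equation*}
as required by \cref{def:CompatTriples}.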
\end{example}
\begin{example} \label{ex:complex_manifold}
Let $M$ be a complex manifold; that is, a $2n$-dimensional manifold with an atlas of charts $\{ \varphi:U \rightarrow \mathbb{C}^n \}$ such that for any pair of charts $\varphi_1$, $\varphi_2$, the transition map $\varphi_2 \circ \varphi_1^{-1}$ is a biholomorphism on its domain. If $(z_1,\ldots,z_n)$ are coordinates on $\mathbb{C}^{n}$ and $z_k=x_k+iy_k$, then a chart $\varphi$ induces vector fields $\partial/\partial x_k$ and $\partial/\partial y_k$ on $U$. At a point $p \in U$, the real tangent space $T_p M$ has a complex structure defined to be the real linear extension of
\begin{equation} \label{eq:almost_complex_structure}
J_p \frac{\partial}{\partial x_k} = \frac{\partial}{\partial y_k}, \ \ \ J_p \frac{\partial}{\partial y_k} = -\frac{\partial}{\partial x_k}.
\end{equation}
It can be shown that this complex structure is independent of the choice of coordinates, using the Cauchy-Riemann equations.
\end{example}
\begin{remark}
In general, a smoothly varying choice of complex structure on each tangent space of a real $2n$-dimensional manifold $M$ is called an almost complex structure. Not every manifold with an almost complex structure can be given an atlas making it a complex manifold whose induced almost complex structure is given by Equation \eqref{eq:almost_complex_structure}. If this can be done, the almost complex structure is called integrable.
\end{remark}
\begin{example} \label{ex:Hodge_star_real}
Let $\mathscr{R}$ be a compact Riemann surface (i.e.~a compact complex manifold of complex dimension one).
In local coordinates $z=x+iy$, any real one-form can be expressed as $\beta = a(z) dx + b(z) dy$.
The Hodge star operator on one-forms is defined to be the complex linear extension of
\[ \ast dx = dy, \ \ \ \ast dy = -dx. \]
By the Cauchy-Riemann equations, this is coordinate-independent.
A real or complex one-form $\beta$ is said to be harmonic if it is closed and co-closed, that is, if $d \beta=0$ and $d \ast \beta =0$ respectively. Equivalently, in local coordinates $\beta(z) = h(z) dz$ where $h(z)$ is harmonic.
By the Hodge theorem, every de Rham cohomology class on $\mathscr{R}$ has a harmonic representative. So the set of real harmonic one-forms $\mathcal{A}^{\mathbb{R}}_{\mathrm{harm}} (\mathscr{R})$ on $\mathscr{R}$ is a real vector space of dimension twice the genus of $\mathscr{R}$. Define the pairing
\begin{equation} \label{eq:pairing_oneforms}
g(\beta,\gamma) = \iint_{\mathscr{R}} \beta \wedge \ast {\gamma}.
\end{equation}
It is easily checked that this is an inner product
on $\mathcal{A}^{\mathbb{R}}_{\mathrm{harm}}(\mathscr{R})$. Define also
\[ \omega(\beta,\gamma) = \iint_{\mathscr{R}} \beta \wedge \gamma. \]
With $H=\mathcal{A}^{\mathbb{R}}_{\mathrm{harm}} (\mathscr{R})$, $(g,\ast,\omega)$ is a compatible triple.
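As a quick check, take $\mathscr{R}$ to be the square torus $\mathbb{C}/(\mathbb{Z}+i\mathbb{Z})$ with coordinate $z=x+iy$; the harmonic one-forms are then spanned over $\mathbb{R}$ by $dx$ and $dy$, and
\begin{equation*}
g(dx,dx) = \iint_{\mathscr{R}} dx \wedge dy = 1 = g(dy,dy), \qquad g(dx,dy) = 0, \qquad \omega(dx,dy) = 1,
\end{equation*}
so that in this basis $(g,\ast,\omega)$ recovers the standard compatible triple on $\mathbb{R}^{2}$ of \cref{ex:ct_standard_finite}.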
\end{example}
\begin{remark}
In the previous example, the restriction of the Hodge star operator to a single tangent space is the dual of the almost complex structure given in Example \ref{ex:complex_manifold}.
\end{remark}
\begin{exercise} \label{exer:hodge_well_defined_symplectic} Let $\mathscr{R}$ and $\omega$ be as in Example \ref{ex:Hodge_star_real}.
Let $H^{1}_{\text{dR}}(\mathscr{R})$ denote the de Rham cohomology space of smooth closed one-forms modulo smooth exact one-forms on $\mathscr{R}$. Given $[\beta],[\gamma] \in H^{1}_{\text{dR}}(\mathscr{R})$, let $\hat{\beta},\hat{\gamma}$ be the harmonic representatives of their respective equivalence classes. Show that for arbitrary representatives $\beta$, $\gamma$ of $[\beta]$, $[\gamma]$,
\[ \omega(\beta,\gamma) = \omega(\hat{\beta},\hat{\gamma}). \]
In particular, $\omega$ is a well-defined symplectic form on $H^{1}_{\text{dR}}(\mathscr{R})$.
\end{exercise}
Next, we introduce two examples that are of fundamental importance in both complex function theory and conformal field theory.
\begin{example} \label{ex:real_boson_example}
Let $H^{\mathrm{b}}$ denote the space of sequences \[ \left\{ \{ a_n \}_{n \in \mathbb{Z}, n \neq 0} \,:\, a_n \in \mathbb{C}, \ a_{-n} = \overline{a_{n}}, \ \sum_{n=1}^\infty n |a_n|^2 <\infty \right\}. \]
Note that $H^{\mathrm{b}}$ is a subset of $\ell^2$ (indexed doubly infinitely), and in particular the Fourier series
\[ \sum_{n \in \mathbb{Z} \setminus \{0\}} a_n e^{in\theta} \]
converges almost everywhere on $\mathbb{S}^1$.
The space $H^{\mathrm{b}}$ can be identified with the real homogeneous Sobolev space $\dot{H}^{1/2}_{\mathbb{R}}(\mathbb{S}^1)$.
For the purpose of this example, we define $\dot{H}^{1/2}_{\mathbb{R}}(\mathbb{S}^1)$ to be the subset of $L^2_{\mathbb{R}}(\mathbb{S}^1)$ whose Fourier coefficients are in $H^{\mathrm{b}}$.
This is a Hilbert space with respect to the following inner product: given
\[ \{ a_n \}_{n \in \mathbb{Z} \backslash \{0\}} , \ \ \ \{ b_n \}_{n \in \mathbb{Z} \backslash \{0\}} \]
we have
\begin{equation} \label{eq:g_in_Hb}
g( \{ a_{n} \}, \{ b_{n} \} ) = 2 \operatorname{Re} \sum_{n=1}^{\infty} n a_{n} \overline{b_{n}} = \sum_{n \in \mathbb{Z} \setminus \{0\}} |n| a_n \overline{b_n}.
\end{equation}
Note that this is {\it not} the inner product in $L^2(\mathbb{S}^1)$.
We also have the symplectic form
\begin{equation} \label{eq:omega_in_Hb}
\omega( \{ a_{n} \}, \{ b_{n} \} ) = 2 \operatorname{Im} \sum_{n=1}^{\infty} n a_{n} \overline{b_{n}} = -i \sum_{n \in \mathbb{Z} \setminus \{0\}} n a_n b_{-n} .
\end{equation}
The Hilbert transform $J:H^{\mathrm{b}} \rightarrow H^{\mathrm{b}}$ is defined to be the linear extension of the map
\[ J(e^{i n\theta}) = -i \, \mathrm{sgn}(n) e^{i n \theta}, \]
where $\mathrm{sgn}(n)=n/|n|$ for $n \neq 0$.
With these choices $(g,J,\omega)$ is a compatible triple on $H^{\mathrm{b}}$.
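Indeed, to verify the condition $g(v,w) = \omega(v,Jw)$ of \cref{def:CompatTriples} on coefficients: if $w$ corresponds to $\{ b_{n} \}$, then $Jw$ corresponds to $\{ -i\,\mathrm{sgn}(n) b_{n} \}$, and
\begin{equation*}
\omega\bigl( \{ a_{n} \}, \{ -i\,\mathrm{sgn}(n) b_{n} \} \bigr) = -i \sum_{n \in \mathbb{Z} \setminus \{0\}} n\, a_{n} \bigl( i\,\mathrm{sgn}(n) b_{-n} \bigr) = \sum_{n \in \mathbb{Z} \setminus \{0\}} |n|\, a_{n} b_{-n} = \sum_{n \in \mathbb{Z} \setminus \{0\}} |n|\, a_{n} \overline{b_{n}},
\end{equation*}
which is $g( \{ a_{n} \}, \{ b_{n} \} )$; here we used $\mathrm{sgn}(-n) = -\mathrm{sgn}(n)$ and $b_{-n} = \overline{b_{n}}$.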
\end{example}
\begin{exercise} \label{exer:smooth_symplectic_Hb}
In the previous example, show that if $f_1$ and $f_2$ are functions on $\mathbb{S}^1$ corresponding to $\{a_n \}$, $\{b_n \}$ respectively which happen to be smooth, then we can write the symplectic form as
\begin{equation*}
\omega( \{ a_{n} \}, \{ b_{n} \} )= \frac{1}{2 \pi} \int_{\mathbb{S}^1} f_1 \,df_2.
\end{equation*}
\end{exercise}
\begin{remark}
Fourier series arising from elements of $H^{\mathrm{b}}$ are precisely the set of functions on $\mathbb{S}^1$ arising as boundary values of harmonic functions $h$ on $\mathbb{D}=\{ z: |z|<1 \}$ in the real homogeneous Dirichlet space
\[ \dot{\mathcal{D}}_{\mathbb{R},\mathrm{harm}}(\mathbb{D}) = \left\{ h:\mathbb{D} \rightarrow \mathbb{R} \ \text{harmonic} \,:\, h(0)=0, \ \iint_\mathbb{D} \left|\nabla h \right|^2 <\infty \right\}. \]
The function $h$ is obtained from the Fourier series by replacing $e^{i n \theta}$ with $z^n$ for $n>0$, and with $\bar{z}^{|n|}$ for $n <0$.
Equivalently, the Dirichlet space is the set of anti-derivatives of $L^2$ harmonic one-forms on $\mathbb{D}$, which vanish at $0$.
In summary,
\[ H^{\mathrm{b}} \simeq \dot{H}^{1/2}_{\mathbb{R}}(\mathbb{S}^1) \simeq \dot{\mathcal{D}}_{\mathbb{R},\mathrm{harm}}(\mathbb{D}). \]
\end{remark}
\begin{example} \label{ex:fermion_real_circle}
Let $H^{\mathrm{f}}$ be the space of complex-valued sequences
\begin{equation*}
H^{\mathrm{f}} := \left\{ \{ a_n \}_{n \in \mathbb{Z}} \,:\, \sum |a_n|^2 <\infty, \ a_{-n} = \overline{a_n} \ \text{for} \ n \geqslant 1 \right\}.
\end{equation*}
An element of $H^{\mathrm{f}}$ is determined by its coefficients $a_{n}$ with $n \geqslant 0$. In terms of these, we equip $H^{\mathrm{f}}$ with a compatible triple $(g,J,\omega)$:
\begin{align*}
g( \{ a_{n} \}, \{ b_{n} \} ) &= 2 \operatorname{Re} \sum_{n=0}^{\infty} a_{n} \overline{b_{n}}, & J\{ a_{n} \} &= \{ -i a_{n} \}, & \omega( \{ a_{n} \}, \{ b_{n} \} ) &= 2 \operatorname{Im} \sum_{n=0}^{\infty} a_{n} \overline{b_{n}},
\end{align*}
where $J$ acts on the coefficients with $n \geqslant 0$, the coefficients with negative index being determined by the reality condition.
We note that $H^{\mathrm{f}}$ is a (real) Hilbert space with respect to $g$.
We wish to view elements $f = \{ a_{n} \}$ of $H^{\mathrm{f}}$ as \emph{real}-valued functions on the circle, by setting
\begin{equation*}
f(z) = \sum_{n=0}^{\infty} a_{n}z^{n+1/2} + \overline{a_{n}z^{n+1/2}},
\end{equation*}
where $z^{1/2}$ is the following choice of square root on $S^{1}$: $e^{i \theta} \mapsto e^{i \theta/2}$ for $0 \leqslant \theta < 2\pi$.
This identifies $H^{\mathrm{f}}$ with the space of real-valued square-integrable functions on the circle.
$H^{\mathrm{f}}$ arises naturally as a description of the space of $L^2$-sections of the odd spinor bundle on the circle (\cite[Section 2]{kristel_fusion_2019}). Equivalently, it can be identified with a space of $L^2$-half-densities on the circle.
The factor $z^{1/2}$ is necessary to identify these spaces as a space of functions.
However, this identification has the feature that even analytically well-behaved elements of these spaces (e.g.~$a_{i} = \delta_{i0}$) are represented by discontinuous functions.
In fact, because the odd spinor bundle on the circle is non-trivial, there is no $C^{\infty}(S^{1})$-equivariant way to identify its smooth sections with the smooth functions on the circle (see e.g.~the Serre-Swan theorem).
\end{example}
We say that a pairing $\langle - , - \rangle$ on a complex vector space $H$ is {\it sesquilinear} if it is complex linear in the first entry, conjugate linear in the second entry, and $\left<v,w\right>=\overline{\left<w,v\right>}$ for all $v,w \in H$.
\begin{remark} \label{re:non-doubled_complexification}
Suppose that a compatible triple $(g,J,\omega)$ is given, and suppose that $H$ is a Hilbert space with respect to $g$.
We denote by $H_{J}$ the complex vector space $H$ in which multiplication by $i$ is given by $J$, that is, $iv := Jv$.
One checks that the form
\begin{equation*}
\langle v, w \rangle = g(v,w) - i \omega(v,w) = g(v,w) - i g(Jv,w)
\end{equation*}
is sesquilinear.
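For instance, complex linearity in the first entry amounts to $\langle Jv, w \rangle = i \langle v, w \rangle$, which follows from the compatibility relations:
\begin{equation*}
\langle Jv, w \rangle = g(Jv,w) - i\, g(J^{2}v,w) = \omega(v,w) + i\, g(v,w) = i \bigl( g(v,w) - i\, \omega(v,w) \bigr) = i \langle v, w \rangle.
\end{equation*}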
This turns $H_{J}$ into a complex Hilbert space.
\end{remark}
In the next section, we consider instead the complexification of a space $H$ with compatible triple.
\section{Polarizations}\label{sec:Polarizations}
In this section, we introduce the notion of polarization.
This is a decomposition of the complexification of the Hilbert space into conjugate subspaces.
We consider two kinds of polarization: those in which the subspaces are symplectic Lagrangian subspaces, and those in which the subspaces are orthogonal. The symplectic decomposition is assumed to satisfy an additional positivity condition (the asymmetry between the two cases is an artefact of the fact that symplectic forms are only required to be non-degenerate, whereas an inner product must also be positive definite).
We refer to these as positive symplectic polarizations and orthogonal polarizations respectively. The terminology is not standard, but was chosen to easily keep track of the two varieties of polarization.
For example, in the symplectic literature, the term positive polarization is used.
We outline the basic theory, with examples illustrating a few of the ways that they arise. The main aim of this section is to show that a positive symplectic polarization defines a unique complex structure and inner product, resulting in a compatible triple on the original real Hilbert space. Similarly, an orthogonal polarization defines a unique complex structure and symplectic form, which together restrict to a compatible triple on the real space.
Thus in some sense the two concepts coincide.
However, the two perspectives differ when it comes to deformations of structures. In symplectic geometry it is often the symplectic form that is fixed, while the inner product and complex structure vary; in spin geometry, the inner product is typically fixed, while the complex structure and symplectic form vary. We will explore these in subsequent sections.
We proceed as follows. First, we fix a compatible triple, and show how it naturally specifies a decomposition which is both a positive symplectic and an orthogonal polarization. Then, we show how each of the two types of polarization naturally defines a compatible triple.
Let us write $H_{\mathbb{C}}$ for the complex Hilbert space $H \otimes_{\mathbb{R}} \mathbb{C}$ (not to be confused with $H_{J}$).
The fact that $H_{\mathbb{C}}$ arises as the complexification of the real Hilbert space $H$ is witnessed by the map $\alpha: H_{\mathbb{C}} \rightarrow H_{\mathbb{C}}$ defined to be the real-linear extension of the map $v \otimes \lambda \mapsto v \otimes \overline{\lambda}$.
The map $\alpha$ is a \emph{real structure}: it is a conjugate-linear, isometric involution.
In applications, it is often more natural to start from a complex Hilbert space, which is then equipped with a real structure.
The real Hilbert space can then be recovered as the vectors that are fixed by the real structure.
If $T:H \rightarrow H$ is a bounded linear operator, then we write $T:H_{\mathbb{C}} \rightarrow H_{\mathbb{C}}$ for its complex-linear extension.
A complex-linear operator $T:H_{\mathbb{C}} \rightarrow H_{\mathbb{C}}$ corresponds to a linear operator $H \rightarrow H$ if and only if it commutes with $\alpha$.
We extend $g$ to $H_{\mathbb{C}}$ sesquilinearly, i.e.~
\begin{equation*}
g(v \otimes \lambda, w \otimes \mu) = \lambda \overline{\mu} g(v,w),
\end{equation*}
and extend $\omega$ to $H_{\mathbb{C}}$ bilinearly. We then have that
\begin{align*}
g(v,w) &= \omega(v, J \alpha w), & \omega(v,w) &= g(Jv, \alpha w).
\end{align*}
Any complex structure $J$ on $H$ induces a (complex direct-sum) decomposition $H_{\mathbb{C}} = L^{+} \oplus L^{-}$, by setting
\begin{align} \label{eq:standard_pol_def}
L^{+} &= \{ v \in H_{\mathbb{C}} \, : \, Jv = iv \}, & L^{-} &= \{ v \in H_{\mathbb{C}} \, : \, Jv=-iv \}.
\end{align}
We make the (somewhat trivial) observation that $L^{\pm}$ are \emph{closed} subspaces because they can be realized as $L^{\pm} = \ker( J \mp i)$; this observation will be important later.
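In fact, the projections of $H_{\mathbb{C}}$ onto $L^{\pm}$ along $L^{\mp}$ can be written explicitly as
\begin{equation*}
P^{\pm} = \tfrac{1}{2} \bigl( \mathds{1} \mp i J \bigr);
\end{equation*}
using $J^{2} = -\mathds{1}$ one checks that $(P^{\pm})^{2} = P^{\pm}$, that $P^{+} + P^{-} = \mathds{1}$, and that $J P^{\pm} = \pm i P^{\pm}$, so that the image of $P^{\pm}$ is exactly the $\pm i$-eigenspace of $J$.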
\begin{remark}\label{rem:WhyIsAlphaL+EqualToL-}
The decomposition $H_{\mathbb{C}} = L^{+} \oplus L^{-}$ has the property that $\alpha(L^{+}) = L^{-}$.
Indeed, let $v = x \otimes 1 + y \otimes i \in L^{+}$; we compute
\begin{equation*}
- y \otimes 1 + x \otimes i = iv = Jv = J(x \otimes 1 + y \otimes i) = Jx \otimes 1 + Jy \otimes i,
\end{equation*}
and thus $Jx = -y$ and $Jy = x$.
It follows that
\begin{equation*}
J\alpha v = J(x \otimes 1 - y \otimes i) = -y \otimes 1- x \otimes i = -i (x \otimes 1 - y \otimes i) = -i \alpha v,
\end{equation*}
whence $\alpha v \in L^{-}$, and thus $\alpha(L^{+}) \subseteq L^{-}$.
Similarly, one proves $\alpha(L^{-}) \subseteq L^{+}$, which implies $L^{-} \subseteq \alpha(L^{+})$, and thus $\alpha (L^{+}) = L^{-}$.
Conversely, suppose that we are given a direct-sum decomposition $H_{\mathbb{C}} = W^{+} \oplus W^{-}$.
Write $P_{W^{\pm}}$ for the projection of $H_{\mathbb{C}}$ onto $W^{\pm}$ along $W^{\mp}$.
From the above, we know that for $J_{W} = i (P_{W^{+}} - P_{W^{-}})$ to restrict to a complex structure on $H$, a necessary condition is that $\alpha(W^{+}) = W^{-}$.
One easily verifies that this is also a sufficient condition, by computing $[\alpha,J_{W}]$.
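Concretely, if $\alpha(W^{+}) = W^{-}$ then $\alpha P_{W^{+}} \alpha = P_{W^{-}}$ (both sides are the identity on $W^{-}$ and vanish on $W^{+}$), so that
\begin{equation*}
\alpha J_{W} \alpha = -i \bigl( \alpha P_{W^{+}} \alpha - \alpha P_{W^{-}} \alpha \bigr) = -i \bigl( P_{W^{-}} - P_{W^{+}} \bigr) = J_{W},
\end{equation*}
the first sign change coming from the conjugate-linearity of $\alpha$; hence $[\alpha, J_{W}] = 0$.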
\end{remark}
Fix now a compatible triple $(g,J,\omega)$ on $H$.
A subspace $L \subset H$ is called \emph{isotropic} (with respect to $\omega$) if $\omega$ is identically zero on $L \times L$.
A subspace $L \subset H$ is \emph{Lagrangian} if it is isotropic, and there exists another isotropic subspace $L' \subset H$ such that $H = L \oplus L'$.
Because $H$ is taken to be a Hilbert space, we have the following characterization of Lagrangian subspace, taken from \cite[Proposition 5.1]{weinstein_symplectic_1971}.
\begin{lemma}\label{lem:maximalMeansLagrangian}
If $L \subset H$ is a maximal isotropic subspace, then it is Lagrangian.
\end{lemma}
The compatibility of the triple $(g,J,\omega)$ is reflected in the decomposition $H_{\mathbb{C}} = L^{+} \oplus L^{-}$ as follows.
\begin{lemma} \label{le:standard_polarization_all_properties}
The decomposition $H_{\mathbb{C}} = L^{+} \oplus L^{-}$ induced by $J$ has the following properties:
\begin{itemize}
\item it is orthogonal with respect to the sesquilinear extension of $g$;
\item it is Lagrangian with respect to the complex-bilinear extension of $\omega$.
\end{itemize}
\end{lemma}
\begin{proof}
We prove the orthogonality claim.
Let $v \in L^{+}$ and $w \in L^{-}$.
We then have that $g(v,w) = g(Jv,Jw) = g(iv,-iw) = -g(v,w)$, whence $g(v,w) = 0$.
The claim then follows from the fact that $L^{+}$ and $L^{-}$ are closed subspaces of $H_{\mathbb{C}}$ (and thus $(L^{\pm})^{\perp \perp} = L^{\pm}$).
The claim that the decomposition is Lagrangian follows similarly.
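Explicitly, if $v, w \in L^{+}$, then by \cref{exer:one_two_then_J_preserves} and the bilinearity of the extension of $\omega$,
\begin{equation*}
\omega(v,w) = \omega(Jv, Jw) = \omega(iv, iw) = -\omega(v,w),
\end{equation*}
so $\omega(v,w) = 0$; thus $L^{+}$ (and likewise $L^{-}$) is isotropic, and since $H_{\mathbb{C}} = L^{+} \oplus L^{-}$, both subspaces are Lagrangian.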
\end{proof}
If $T: H_{\mathbb{C}} \rightarrow H_{\mathbb{C}}$ is either a complex-linear or conjugate-linear operator, then we write
\begin{equation*}
T = \begin{pmatrix}
a & b \\
c & d
\end{pmatrix} : \begin{array}{c} L^+ \\ \oplus \\ L^- \end{array} \rightarrow \begin{array}{c} L^+ \\ \oplus \\ L^- \end{array}
\end{equation*}
for the corresponding block decomposition, where $a,b,c,d$ are complex-linear, resp.~conjugate-linear operators.
It follows from \cref{le:standard_polarization_all_properties} that with respect to the decomposition $H_{\mathbb{C}} = L^{+} \oplus L^{-}$ we have
\begin{equation*}
\alpha = \begin{pmatrix}
0 & \left. \alpha \right|_{L^{-}} \\
\left. \alpha \right|_{L^+} & 0
\end{pmatrix}.
\end{equation*}
A straightforward computation shows that the operator $T$ commutes with $\alpha$ if and only if $\alpha a \alpha = d$ and $\alpha b = c\alpha$ (equivalently, $\alpha d \alpha = a$ and $\alpha c = b \alpha$).
In the sequel, we will have to compare the operator $T$ to the operator $\alpha T \alpha$.
In some sources, the notation $\alpha T \alpha = \overline{T}$ is used.
This notation is justified by the observation that if $T_{ij} \in \mathbb{C}$ is some matrix component of $T$, with respect to a basis $\{e_{i} \}$ of $\alpha$-fixed vectors, then $(\alpha T \alpha)_{ij} = \overline{T_{ij}}$.
We eschew this notation, because we will usually be working with bases consisting of vectors that are not $\alpha$-fixed; in particular, no non-zero element of $L^{\pm}$ is $\alpha$-fixed.
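For instance, when $\operatorname{dim}_{\mathbb{C}} L^{\pm} = 1$, pick a unit vector $e_{+} \in L^{+}$ and set $e_{-} = \alpha e_{+} \in L^{-}$; a complex-linear operator $T$ is then determined by four scalars via $Te_{+} = ae_{+} + ce_{-}$ and $Te_{-} = be_{+} + de_{-}$. Since $\alpha(\lambda e_{\pm}) = \bar{\lambda} e_{\mp}$, one computes $(\alpha T \alpha)e_{+} = \bar{d}e_{+} + \bar{b}e_{-}$, so $T$ commutes with $\alpha$ precisely when $d = \bar{a}$ and $c = \bar{b}$; this is another sense in which $\alpha T \alpha$ behaves like a ``conjugate'' of $T$.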
\begin{exercise} \label{exer:standard}
Consider the vector space $H = \mathbb{R}^{2n}$.
Let $\{e_{i}\}_{i=1,...,2n}$ be the standard basis, and write $x_{i} = e_{2i-1}$ and $y_{i} = e_{2i}$.
Let $g$ be the standard inner product, and define $J$ by $J x_{i} = y_{i}$ and $J y_{i} = - x_{i}$.
Determine the symplectic form $\omega$ such that $(g, J, \omega)$ is a compatible triple.
Then, determine the splitting $H \otimes_{\mathbb{R}} \mathbb{C} = \mathbb{C}^{2n} = L^{+} \oplus L^{-}$.
\end{exercise}
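For orientation, consider the case $n = 1$: with the compatibility convention $\omega(v,w) = g(Jv,w)$ (the convention used for the associated forms below), one finds $\omega(x_{1},y_{1}) = g(Jx_{1},y_{1}) = g(y_{1},y_{1}) = 1$ and $\omega(x_{1},x_{1}) = \omega(y_{1},y_{1}) = 0$, while the vector $x_{1} - iy_{1}$ satisfies
\[ J(x_{1} - iy_{1}) = y_{1} + ix_{1} = i(x_{1} - iy_{1}), \]
so that $x_{1} - iy_{1}$ spans $L^{+}$ and its conjugate spans $L^{-}$. The general case is identical in each coordinate pair.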
\begin{exercise} \label{exer:two_models_complexify}
Let $H$ be a real Hilbert space with complex structure $J$. Let $H_J$ be as in Remark \ref{re:non-doubled_complexification}. Show that
\begin{align*}
\Psi: H_J \rightarrow L^+, \quad
v \mapsto \frac{1}{\sqrt{2}} \left( v-i J v \right)
\end{align*}
is a complex vector space isomorphism.
\end{exercise}
\begin{example}
Let $M$ be a complex manifold. For a point $p \in M$, let $J_p$ be as in Example \ref{ex:complex_manifold}. The induced decomposition of the complexified tangent space
\[ \mathbb{C} \otimes T_p M = L^+ \oplus L^- \]
is often denoted
\[ L^+=T^{(1,0)}_p M, \ \ L^- = T^{(0,1)}_p M. \]
The space $T^{(1,0)}_pM$ is called the holomorphic tangent space.
By Exercise \ref{exer:two_models_complexify}, $\Psi$ is an isomorphism between the real tangent space $T_pM$ and the holomorphic tangent space $T^{(1,0)}_pM$. If $(z_1,\ldots,z_n)$ are local coordinates on $M$ near $p$, then
\begin{equation*}
T^{(1,0)}_p M = \mathrm{span} \left\{ \frac{\partial}{\partial z_1},\ldots,\frac{\partial}{\partial z_n} \right\}. \qedhere
\end{equation*}
\end{example}
\begin{exercise} \label{ex:Hermitian}
Now assume that $(g,J,\omega)$ is a compatible triple for $H$ in Exercise \ref{exer:two_models_complexify}. Let $g_+$ denote the restriction to $L^+$ of the sesquilinear extension of $g$. Show that $\Psi:(H_J,\langle,\rangle) \rightarrow (L^+,g_+)$ is an isometry; that is, $g_+(\Psi(v),\Psi(w))=\langle v,w \rangle$ for all $v,w \in H$.
\end{exercise}
\begin{remark} In the previous exercise, $\langle -,- \rangle$ and $g_+$ are both often referred to as a Hermitian metric. The data of $J$ together with a metric $g$ satisfying $g(Jv,Jw)=g(v,w)$ for all $v,w$ suffice to recover the compatible triple $(g,J,\omega)$ by \cref{exer:two_of_three_gives_compat}.
\end{remark}
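The computation behind Exercise \ref{ex:Hermitian} is short: using that the sesquilinear extension of $g$ is linear in the first slot, together with $g(Jv,Jw) = g(v,w)$ and $g(v,Jw) = -g(Jv,w)$, one finds
\begin{align*}
g_+(\Psi(v),\Psi(w)) &= \tfrac{1}{2}\, g(v - iJv,\, w - iJw) \\
&= \tfrac{1}{2}\left( g(v,w) + i g(v,Jw) - i g(Jv,w) + g(Jv,Jw) \right) = g(v,w) - i g(Jv,w),
\end{align*}
which is the Hermitian inner product $\langle v,w \rangle = g(v,w) - i\omega(v,w)$ on $H_{J}$, provided one adopts the convention $\omega(v,w) = g(Jv,w)$ (sign conventions for $\langle -,- \rangle$ and $\omega$ vary in the literature).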
\begin{example}
Let $M$ be a finite-dimensional complex manifold. A Hermitian metric on $M$ is a Hermitian metric on $T_p M$ for each $p \in M$ (or equivalently $T^{(1,0)}_pM$), which varies smoothly with $p$.
If the associated form $\omega$ is closed (so that $M$ is also a symplectic manifold), then $M$ is called a K\"ahler manifold.
\end{example}
\begin{example} \label{ex:polarization_compact_surface}
Let $\mathscr{R}$ be a compact Riemann surface, and $H= \mathcal{A}^{\mathbb{R}}_{\mathrm{harm}}(\mathscr{R})$ be the vector space of real harmonic one-forms on $\mathscr{R}$, and let $J = \ast$, $\omega$, and $g$ be as in Example \ref{ex:Hodge_star_real}.
Then $H_{\mathbb{C}}$ is the vector space of complex-valued harmonic one-forms, which we denote by $\mathcal{A}_{\mathrm{harm}}(\mathscr{R})$.
The dimension of $\mathcal{A}_{\mathrm{harm}}(\mathscr{R})$ over $\mathbb{C}$ is twice the genus of $\mathscr{R}$.
The real structure $\alpha$ is just complex conjugation.
Denote the set of holomorphic one-forms on $\mathscr{R}$ by $\mathcal{A}(\mathscr{R})$, and the anti-holomorphic one-forms by $\overline{\mathcal{A}(\mathscr{R})}$.
We have the direct sum decomposition
\[ \mathcal{A}_{\mathrm{harm}}(\mathscr{R}) = \mathcal{A}(\mathscr{R}) \oplus \overline{\mathcal{A}(\mathscr{R})}. \]
Explicitly, every complex harmonic one-form $\beta$ has a unique decomposition $\beta = \gamma + \overline{\delta}$
where $\gamma,\delta \in \mathcal{A}(\mathscr{R})$.
Locally these are written $\gamma= a(z)\, dz$, $\overline{\delta} = \overline{b(z)}\, d\bar{z}$, where $a$ and $b$ are holomorphic functions of the coordinate $z$.
It is easily checked that $J \gamma = -i \gamma$ and $J \overline{\delta}= i \overline{\delta}$. So
\[ L^-= \mathcal{A}(\mathscr{R}), \ \ \ L^+ = \overline{\mathcal{A}(\mathscr{R})}. \]
The corresponding sesquilinear extension of the inner product $g$ in Example \ref{ex:Hodge_star_real} is
\begin{equation}
g(\beta_1,\beta_2) = \iint_{\mathscr{R}} \beta_1 \wedge \ast \overline{\beta_2}.
\end{equation}
\end{example}
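As an illustration, take $\mathscr{R} = \mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})$ with $\operatorname{Im}\tau > 0$, so that $\mathscr{R}$ has genus one. Then $\mathcal{A}(\mathscr{R}) = \mathbb{C}\, dz$ and $\overline{\mathcal{A}(\mathscr{R})} = \mathbb{C}\, d\bar{z}$. With the convention $\ast dx = dy$, $\ast dy = -dx$, one has $\ast dz = \ast(dx + i\,dy) = dy - i\,dx = -i\,dz$, confirming $J\, dz = -i\, dz$, and
\[ g(dz,dz) = \iint_{\mathscr{R}} dz \wedge \ast d\bar{z} = i \iint_{\mathscr{R}} dz \wedge d\bar{z} = 2\operatorname{Im}\tau. \]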
\begin{example}[Sobolev-$\nicefrac{1}{2}$ functions on the circle]\label{ex:bosonExample}
Let $H^{\mathrm{b}}$ be as in Example \ref{ex:real_boson_example}.
Treating $H^{\mathrm{b}}$ as a set of functions on the circle (i.e.~as $\dot{H}^{1/2}_{\mathbb{R}}(\mathbb{S}^1)$), its complexification $H^{\mathrm{b}}_{\mathbb{C}}$ is the set of complex-valued functions in $L^2(\mathbb{S}^1)$ whose Fourier series
\[ \sum_{n \in \mathbb{Z} \backslash \{0\}} a_n e^{i n \theta} \]
satisfy the stronger condition
\[ \sum_{n \in \mathbb{Z} \backslash \{0\}} |n| |a_n|^2<\infty; \]
that is, the Sobolev-$1/2$ space $\dot{H}^{1/2}(\mathbb{S}^1)$. Equivalently, one removes the condition $a_{-n} = \overline{a_n}$ in the sequence model of $H^{\mathrm{b}}$:
\[ H^{\mathrm{b}}_{\mathbb{C}} = \left\{ \{a_n\}_{n \in \mathbb{Z} \backslash \{0\}} \,:\, \sum_{n \in \mathbb{Z} \backslash \{0\}} |n| |a_n|^2<\infty \right\}. \]
In this case, the real structure is given by complex conjugation of the function on $\mathbb{S}^1$; equivalently,
\[ \alpha(\{ a_n \}) = \{ \overline{a_{-n}} \}. \]
We then see that $L^{-}$ is given by the sequences $\{ a_{n} \}$ such that $a_{n} = 0$ for $n < 0$ and $L^+$ is given by the sequences such that $a_n = 0$ for $n>0$.
Equivalently, $L^{-}$ consists of those functions that extend to holomorphic functions on the unit disk while elements of $L^+$ extend anti-holomorphically.
\end{example}
\begin{exercise} \label{exer:standard_polarization_Hb}
In the previous example, show that
\[ \omega(\{a_n\},\{b_n\}) = -i \sum_{n=-\infty}^\infty n a_n b_{-n} \]
and
\[ g(\{a_n\},\{b_n\}) = \sum_{n=-\infty}^\infty |n| a_n \overline{b_n} \]
and verify $g(v,w)=\omega(v,J \alpha w)$.
Verify that the decomposition $H^{\mathrm{b}}_{\mathbb{C}} = L^+ \oplus L^-$ is both Lagrangian and orthogonal.
\end{exercise}
\begin{remark}
The polarization in the previous example corresponds to the decomposition of complex harmonic functions in the homogeneous Dirichlet space on $\mathbb{D}$ into their holomorphic and anti-holomorphic parts. Defining
\[ \dot{\mathcal{D}}(\mathbb{D}) = \left\{ h:\mathbb{D} \rightarrow \mathbb{C} \ \text{holomorphic} \,:\, h(0)=0, \ \iint_{\mathbb{D}} |h'|^2 <\infty \right\} \]
and
\[ \dot{\mathcal{D}}_{\mathrm{harm}}(\mathbb{D}) = \left\{ h:\mathbb{D} \rightarrow \mathbb{C} \ \text{harmonic} \,:\, h(0)=0, \ \iint_\mathbb{D} \left| \frac{\partial h}{\partial z} \right|^2 +\left| \frac{\partial h}{\partial \bar{z}} \right|^2 <\infty \right\} \]
we have the decomposition
\[ \dot{\mathcal{D}}_{\mathrm{harm}}(\mathbb{D})= \dot{\mathcal{D}}(\mathbb{D}) \oplus \overline{\dot{\mathcal{D}}(\mathbb{D})}. \]
Extending elements of $H^{\mathrm{b}}_{\mathbb{C}}$ harmonically to $\mathbb{D}$ we can identify $L^-\simeq \dot{\mathcal{D}}(\mathbb{D})$, $L^+ \simeq \overline{\dot{\mathcal{D}}(\mathbb{D})}$.
\end{remark}
\begin{example}[Square-integrable functions on the circle]\label{ex:fermionExample}
Let $H^{\mathrm{f}}$ be as in Example \ref{ex:fermion_real_circle}.
The complexification of $H^{\mathrm{f}}$ can be identified with the complex-valued square-integrable functions on the circle, which we in turn identify (by taking Fourier coefficients) with the space of complex-valued sequences $\{a_{n} \}$ indexed by $\mathbb{Z}$ such that the sum
\begin{equation*}
\sum_{n} |a_{n}|^{2}
\end{equation*}
converges.
We then see that $L^{+}$ is given by the sequences $\{ a_{n} \}$ such that $a_{n} = 0$ for $n < 0$.
Equivalently, $L^{+}$ consists of those functions $f$, such that $z \mapsto f(z)z^{-1/2}$ extends to a holomorphic function on the unit disk.
\end{example}
\begin{remark}
The set of holomorphic extensions $h(z)$ to the unit disk of functions $f(z)z^{-1/2}$ in the previous example is precisely the Hardy space. This follows from a classical characterization of the Hardy space as the class of holomorphic functions on the disk, whose non-tangential boundary values are in $L^2(\mathbb{S}^1)$.
On the other hand, $L^-$ consists of those functions $f$ such that $z \mapsto f(z) z^{1/2}$ extends anti-holomorphically into the unit disk. These extensions are precisely the complex conjugates of Hardy space functions.
\end{remark}
It turns out that the space of deformations of either $g$ or $\omega$ has a rich geometric structure. In the rest of this section, we define two notions of polarization: those arising from a fixed inner product (orthogonal polarizations), and those arising from a fixed symplectic form (positive symplectic polarizations). Both lead to a compatible triple.
\begin{definition}
Let $(H,g)$ be a real Hilbert space and let $H_{\mathbb{C}}$ be its complexification with real structure $\alpha$.
Extend $g$ sesquilinearly to $H_{\mathbb{C}}$.
An \emph{orthogonal polarization} is an orthogonal decomposition
\[ H_{\mathbb{C}} = W^{+} \oplus W^{-} \]
such that $\alpha (W^{+}) = W^{-}$.
\end{definition}
One can associate a complex structure and bilinear form to an orthogonal polarization.
\begin{definition}
Let $(H,g)$ be a real Hilbert space and let $H_{\mathbb{C}}$ be its complexification with real structure $\alpha$. Let $H_{\mathbb{C}} = W^{+} \oplus W^{-}$ be an orthogonal polarization.
Let $P_{W}^{\pm}: H_{\mathbb{C}} \rightarrow W^{\pm}$ be the orthogonal projections.
The complex structure associated to the polarization is $J_{W} = i(P_{W}^{+} - P_{W}^{-})$, and the bilinear form on $H_{\mathbb{C}}$ associated to the polarization is
\begin{equation*}
\omega_W(v,w) = g(J_W v,w). \qedhere
\end{equation*}
\end{definition}
Observe that by the reasoning in \cref{rem:WhyIsAlphaL+EqualToL-}, the condition that $\alpha(W^{+}) = W^{-}$ is both necessary and sufficient for $J_{W}$ to restrict to $H$.
An orthogonal polarization defines a compatible triple.
\begin{proposition}\label{lem:SubspaceToSymplectic} Let $H_{\mathbb{C}}= W^+ \oplus W^-$ be an orthogonal polarization.
The complex structure $J_{W}$ and symplectic form $\omega_W$ associated to this polarization restrict to $H$, and $(g,J_{W},\omega_{W})$ is a compatible triple.
Moreover, $W^{+}$ and $W^{-}$ are Lagrangian with respect to $\omega_{W}$.
\end{proposition}
\begin{proof}
First, it is clear that $J_{W}^{2} = - \mathds{1}$.
Now, let $v = (x^{+},x^{-}) \in W^{+} \oplus W^{-}$ be arbitrary.
We then have $\alpha J_{W} v = \alpha (ix^{+},-ix^{-}) = (i \alpha x^{-}, - i \alpha x^{+}) = J_{W} \alpha v$.
Observe that $H \subseteq H_{\mathbb{C}}$ can be identified as the $+1$ eigenspace for $\alpha$; it follows that if $v \in H$, then $\alpha J_{W}v = J_{W} \alpha v = J_{W}v$ and thus $J_{W}v \in H$.
It immediately follows that $\omega_W(v,w)=g(J_Wv,w)$ is real on $H$.
It remains to be shown that $(g,J_{W},\omega_{W})$ is a compatible triple.
Since $\omega_{W}(v,w) = g(J_{W}v,w)$ holds by definition, it suffices to check that $\omega_{W}$ is a strong symplectic form.
That $\omega_{W}$ is skew-symmetric follows from the computation $\omega_{W}(v,w) = g(J_{W}v,w) = -g(v,J_{W}w) = -\omega_{W}(w,v)$.
The map $\varphi_{\omega_{W}}: H \rightarrow H^{*}$ is given by $\varphi_{g}J_{W}$, and is thus an isomorphism.
It is straightforward to show that $W^{+}$ and $W^{-}$ are isotropic, and the assumption that $H_{\mathbb{C}} = W^{+} \oplus W^{-}$ then implies that they are both Lagrangian.
\end{proof}
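As a quick sanity check, taking $W^{\pm} = L^{\pm}$ recovers the original data: in that case $P^{\pm}_{W} = P^{\pm}$, so $J_{W} = i(P^{+} - P^{-}) = J$, and consequently $\omega_{W}(v,w) = g(Jv,w) = \omega(v,w)$ (using the compatibility of the original triple).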
Let $\mathrm{Pol}^{g}(H)$ denote the set of those closed subspaces $W^{+} \subset H_{\mathbb{C}}$ such that $H_{\mathbb{C}} = W^{+} \oplus \alpha(W^{+})$ as an orthogonal direct sum.
Observe that by \cref{lem:SubspaceToSymplectic} and \cref{le:standard_polarization_all_properties} we may view $\mathrm{Pol}^{g}(H)$ as parameterizing the set of deformations of $\omega$ (compatible with $g$).
Next, we define positive symplectic polarizations.
\begin{definition} \label{de:Kahler_pol}
Let $(H,\omega)$ be a real vector space with symplectic form $\omega$, and let $H_{\mathbb{C}}$ be its complexification with real structure $\alpha$. Extend $\omega$ complex bilinearly to $H_{\mathbb{C}}$.
A \emph{positive symplectic polarization} is a Lagrangian decomposition
\[ H_{\mathbb{C}} = W^{+} \oplus W^{-} \]
such that $\alpha (W^{+}) = W^{-}$ and $-i \omega(v,\alpha{w})$ is a positive-definite sesquilinear form on $W^+$.
\end{definition}
In fact, this implies that $g_+(v,w) := -i\omega(v,\alpha w)$ is a strong inner product, which by Proposition \ref{pr:StrongEquivalence} makes $W^{+}$ a Hilbert space. To see this, observe that $\varphi_{\omega}: H_{\mathbb{C}} \rightarrow H_{\mathbb{C}}^{*}$ restricts to an isomorphism $\varphi_{\omega}: W^{+} \rightarrow (W^{-})^{*}$.
A direct computation shows that $\varphi_{g_+}: W^{+} \rightarrow (W^{+})^{*}$ is given by $\varphi_{g_+} = -i \alpha^{*} \varphi_{\omega}$, whence $\varphi_{g_+}$ is an isomorphism.
It follows that $g_+$ is a strong inner product if and only if it is positive-definite.
\begin{exercise} \label{exer:Kpol_other_half_pos_def}
Show that if $W^{+} \oplus W^{-} = H_{\mathbb{C}}$ is a positive symplectic polarization, then the pairing $i\omega(v,\alpha{w})$ is a positive-definite sesquilinear form on $W^-$, such that $W^-$ is a Hilbert space with respect to this pairing.
\end{exercise}
As with orthogonal polarizations, a positive symplectic polarization is enough to define the other two pieces of data.
\begin{definition}\label{def:ComplexPlusSesqui}
Let $(H,\omega)$ be a real Hilbert space, equipped with symplectic form $\omega$.
Let $H_{\mathbb{C}} = W^{+} \oplus W^{-}$ be a symplectic polarization; let $P_{W}^{\pm}: H_{\mathbb{C}} \rightarrow W^{\pm}$ be the projections with respect to this decomposition.
The complex structure associated to the polarization is $J_{W} =i (P_{W}^{+} - P_{W}^{-})$, and the sesquilinear form on $H_{\mathbb{C}}$ associated to the polarization is
\begin{equation*}
g_W(v,w) = \omega(v,J_{W}\alpha{w}). \qedhere
\end{equation*}
\end{definition}
The condition that the symplectic polarization $H_{\mathbb{C}} = W^{+} \oplus W^{-}$ is positive is exactly the condition that is required to guarantee that $(g_{W},J_{W},\omega)$ is compatible.
\begin{proposition}\label{pr:SubspaceToInner} Let $H_{\mathbb{C}}= W^+ \oplus W^-$ be a positive symplectic polarization.
The associated complex structure $J_{W}$ and sesquilinear form $g_W$ restrict to a complex structure and inner product on $H$.
Moreover, $g_W$ is a positive definite sesquilinear form on $H_{\mathbb{C}}$, with respect to which $H_{\mathbb{C}}$ is a Hilbert space, and $(g_W,J_W,\omega)$ is a compatible triple.
\end{proposition}
\begin{proof}
It is clear that $J_{W}^{2} = - \mathds{1}$.
Sesquilinearity of $g_W$ follows immediately from the definition.
Let $g_{W,\pm}$ denote the restrictions of $g_W$ to $W^\pm$. We have
\begin{align*}
g_{W,+}(v,w) & = -i\omega(v,\alpha w) \ \ \ v,w \in W^+ \\
g_{W,-}(v,w) & = i \omega(v,\alpha w) \ \ \ \ v,w \in W^-.
\end{align*}
It is easily checked that $W^+$ and $W^-$ are orthogonal with respect to $g_W$, so we have
\[ (H_{\mathbb{C}},g_W) = (W^+,g_{W,+}) \oplus (W^-,g_{W,-}) \]
and both summands are Hilbert spaces by Exercise \ref{exer:Kpol_other_half_pos_def}.
The fact that $\alpha(W^{+}) = W^{-}$ implies that $J_{W}$ commutes with $\alpha$, which in turn implies that $J_{W}$ restricts to $H$. To see that $g_W$ is real on $H$, observe that any $v,w\in H$ can be written $v'+\alpha{v'}$, $w' + \alpha{w'}$ where $v',w' \in W^+$.
Using the definition $g_W(v,w)=\omega(v,J_{W}\alpha{w})$ and the fact that $J_{W}$ commutes with $\alpha$, we have
\[ g_W(v'+\alpha{v'},w'+\alpha{w'})= g_W(v',w') + g_W(\alpha{v'},\alpha{w'}) \]
which is real. Finally, restricting to $v,w \in H$, we have
\[ g_W(v,w) = \omega(v,J_{W}w) \]
so $(g_W,J_W,\omega)$ is a compatible triple on $H$.
\end{proof}
Denote the set $W^+$ such that $H_{\mathbb{C}}=W^+ \oplus \alpha(W^+)$ is a positive symplectic polarization by $\mathrm{Pol}^\omega(H)$. By Lemma \ref{le:standard_polarization_all_properties} and Proposition \ref{pr:SubspaceToInner}, the positive symplectic polarizations can be viewed as parametrizing the deformations of $g$ (compatible with $\omega$).
Summarizing the section so far, we have seen that a vector space $H$, equipped with a compatible triple $(g,J,\omega)$, can equivalently be seen as
\begin{itemize}\itemsep-.4em
\item A Hilbert space $(H,g)$, equipped with an orthogonal polarization of $H_{\mathbb{C}}$; or
\item A symplectic vector space $(H,\omega)$, equipped with a positive symplectic polarization of $H_{\mathbb{C}}$.
\end{itemize}
We write $\operatorname{GL}(H)$ for the group of invertible linear transformations from $H$ to $H$.
We then define the \emph{symplectic group} of $H$ to be
\begin{equation*}
\operatorname{Sp}(H) \defeq \{ u \in \operatorname{GL}(H) \, : \, \omega(u v,u w) = \omega(v,w) \text{ for all } v,w \in H \}.
\end{equation*}
Similarly, we define the \emph{orthogonal group} of $H$ to be
\begin{equation*}
\operatorname{O}(H) \defeq \{ u \in \operatorname{GL}(H) \, : \, g(u v,u w) = g(v,w) \text{ for all } v,w \in H \}.
\end{equation*}
Define $u^{\star}g$, $u^{\star}J$, and $u^{\star}\omega$ by
\begin{align*}
u^{\star}g(v,w) &= g(u^{-1}v,u^{-1}w), & u^{\star}Jv &= uJu^{-1}v, & u^{\star}\omega(v,w) &= \omega(u^{-1}v,u^{-1}w).
\end{align*}
We then have the following.
\begin{proposition}
If $u \in \operatorname{GL}(H)$, then the triple $(u^{\star}g, u^{\star}J, u^{\star}\omega)$ is again compatible.
\end{proposition}
\begin{proof} It is immediate that $u^\star J$ is a complex structure. To prove the other two claims,
observe that if $\psi: H \times H \rightarrow \mathbb{R}$ is a bilinear pairing, then we have $\varphi_{u^{\star}\psi}(v)(w) = u^{\star}\psi(v,w) = \psi(u^{-1}v,u^{-1}w) = \varphi_{\psi}(u^{-1}v)(u^{-1}w)$.
This implies that $\varphi_{u^{\star}\psi} = (u^{-1})^*\varphi_{\psi}u^{-1}$, where $u^*$ is the adjoint map $H^{*} \rightarrow H^{*}$.
It follows that $\varphi_{\psi}$ is invertible if and only if $\varphi_{u^{\star}\psi}$ is invertible.
Because pullback clearly preserves anti-symmetry, $u^\star \omega$ is a strong symplectic form.
The fact that $u^\star g$ is a strong inner product would follow from $u^{\star}g(v,v) > 0$ for $v \neq 0$ by the discussion following Definition \ref{de:Kahler_pol}. But this follows from the positive definiteness of $g$, since $u^{\star}g(v,v) = g(u^{-1}v,u^{-1}v) > 0$ for $v \neq 0$.
Finally, the three structures are compatible, since $u^{\star}\omega(v,w) = \omega(u^{-1}v,u^{-1}w) = g(Ju^{-1}v,u^{-1}w) = u^{\star}g(uJu^{-1}v,w) = u^{\star}g((u^{\star}J)v,w)$.
\end{proof}
The groups $\operatorname{GL}(H)$, $\operatorname{O}(H)$, and $\operatorname{Sp}(H)$ act on $H_{\mathbb{C}}$ by complex-linear extension.
The decomposition $H_{\mathbb{C}} = L^{+}_{u} \oplus L^{-}_{u}$ corresponding to the triple $(u^{\star}g, u^{\star}J, u^{\star}\omega)$ is given by $L^{\pm}_{u} = u L^{\pm}$ (this follows directly from \cref{eq:standard_pol_def}).
The above guarantees that $\operatorname{Sp}(H)$ acts on $\mathrm{Pol}^{\omega}(H)$ by $(u,W^{+}) \mapsto u W^{+}$.
Indeed, if $(g_{W},J_{W},\omega)$ is the triple corresponding to $W^{+}$, and $u \in \operatorname{Sp}(H)$, then $(u^{\star}g, u^{\star}J,\omega)$ is the triple corresponding to $uW^{+}$, which tells us that $uW^{+} \in \mathrm{Pol}^{\omega}(H)$.
Similarly, one sees that $\operatorname{O}(H) \times \mathrm{Pol}^{g}(H) \rightarrow \mathrm{Pol}^{g}(H), (u,W^{+}) \mapsto uW^{+}$ defines an action of $\operatorname{O}(H)$ on $\mathrm{Pol}^{g}(H)$.
The \emph{unitary group} of $H$ is defined to be the intersection of the symplectic and orthogonal group, i.e.~$\operatorname{U}(H) \defeq \operatorname{Sp}(H) \cap \operatorname{O}(H)$.
\begin{exercise} \label{exer:unitaries_are_same}
Show that the unitary group $\operatorname{U}(H)$ is equal to the unitary group of the complex Hilbert space $H_{J}$, which is defined in the usual manner.
\end{exercise}
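In the finite-dimensional setting of Exercise \ref{exer:standard}, that is $H = \mathbb{R}^{2n}$ with its standard compatible triple, these are the classical groups: $\operatorname{Sp}(H) \cong \operatorname{Sp}(2n,\mathbb{R})$, $\operatorname{O}(H) \cong \operatorname{O}(2n)$, and, in view of Exercise \ref{exer:unitaries_are_same}, $\operatorname{U}(H) \cong \operatorname{U}(n)$.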
The following lemma tells us that, at least as sets, we may view $\mathrm{Pol}^{g}(H)$ and $\mathrm{Pol}^{\omega}(H)$ as homogeneous spaces.
\begin{lemma}\label{lem:HomogeneousSets}
The maps $\operatorname{O}(H) \rightarrow \mathrm{Pol}^{g}(H), u \mapsto u(L^{+})$ and $\operatorname{Sp}(H) \rightarrow \mathrm{Pol}^{\omega}(H), u \mapsto u(L^{+})$ descend to bijections
\begin{align*}
\operatorname{O}(H)/\operatorname{U}(H) &\rightarrow \mathrm{Pol}^{g}(H), & \operatorname{Sp}(H)/\operatorname{U}(H) &\rightarrow \mathrm{Pol}^{\omega}(H) .
\end{align*}
\end{lemma}
\begin{proof}
An operator $u \in \operatorname{O}(H)$ preserves $L^{+}$ if and only if it commutes with $J$, which it does if and only if $u \in \operatorname{Sp}(H)$; whence the stabilizer of $L^{+} \in \mathrm{Pol}^{g}(H)$ is $\operatorname{U}(H)$.
It thus remains to be shown that $\operatorname{O}(H)$ acts transitively on $\mathrm{Pol}^{g}(H)$.
To this end, let $W^{+} \in \mathrm{Pol}^{g}(H)$ be arbitrary.
Pick orthonormal bases $\{ l_{i} \}$ and $\{ e_{i} \}$ for $L^{+}$ and $W^{+}$ respectively.
We obtain orthonormal bases $\{ \alpha l_{i} \}$ and $\{ \alpha e_{i} \}$ for $\alpha(L^{+})$ and $\alpha(W^{+})$ respectively.
We let $u: H_{\mathbb{C}} \rightarrow H_{\mathbb{C}}$ be the complex-linear extension of the map that sends $l_{i}$ to $e_{i}$ and $\alpha l_{i}$ to $\alpha e_{i}$.
It is then easy to see that $u \in \operatorname{O}(H)$ and $u(L^{+}) = W^{+}$.
This proves that the map $\operatorname{O}(H) / \operatorname{U}(H) \rightarrow \mathrm{Pol}^{g}(H)$ is a bijection.
The fact that the map $\operatorname{Sp}(H)/\operatorname{U}(H) \rightarrow \mathrm{Pol}^{\omega}(H)$ is a bijection is proved similarly.
\end{proof}
\begin{exercise} \label{exer:stabilizer}
Prove that the stabilizer of $L^{+} \in \mathrm{Pol}^{\omega}(H)$ is $\operatorname{U}(H)$ and that $\operatorname{Sp}(H)$ acts transitively on $\mathrm{Pol}^{\omega}(H)$.
Conclude that the map $\operatorname{Sp}(H)/\operatorname{U}(H) \rightarrow \mathrm{Pol}^{\omega}(H)$ is a bijection.
\end{exercise}
\section{The Grassmannian of polarizations associated to a symplectic form}\label{sec:Bosons}
In this section, we fix a symplectic form, and describe the associated Lagrangian Grassmannian of positive symplectic polarizations. We will show that it is described by a space of operators called the Siegel disk, and that it is a complex Banach manifold.
We also consider the ``restricted'' Siegel disk and restricted symplectic groups. These restricted objects require an extra analytic condition, namely that certain operators be Hilbert-Schmidt, which plays a central role in representation theory. We also establish that the restricted Grassmannian is not just a Banach manifold, but a Hilbert manifold.
Let $H$ be a real Hilbert space, and let $(g,J,\omega)$ be a compatible triple.
We are interested in deformations of $g$, while keeping $\omega$ fixed.
As explained in \cref{sec:Triples}, we can study these deformations by considering the space of symplectic polarizations in $H_{\mathbb{C}}$.
We denote by $P^{\pm}$ the orthogonal projection $H_{\mathbb{C}} \rightarrow L^{\pm}$.
\begin{proposition}\label{pr:CompatInequality}
Suppose that $W^{+}\subset H_{\mathbb{C}}$ is a Lagrangian subspace such that $H_{\mathbb{C}} = W^{+} \oplus \alpha(W^{+})$ (not necessarily orthogonal); and set $W^{-} = \alpha(W^{+})$.
Then, $W^{+}$ is a positive symplectic polarization if and only if $\| P^{+}x \| > \|P^{-} x\|$ for all $0 \neq x \in W^{+}$.
Moreover, if this holds, then $P^{+}|_{W^{+}}:W^{+} \rightarrow L^{+}$ is a bijection.
\end{proposition}
\begin{proof}
We recall that the decomposition $H_{\mathbb{C}} = W^{+} \oplus W^{-}$ allows us to define a complex structure $J_{W}$ and a sesquilinear form $g_{W}$, see \cref{def:ComplexPlusSesqui}.
Let $x \in W^{+}$.
We then compute
\begin{equation*}
g_{W}(x,x) = \omega(x, \alpha J_{W} x) = \omega(x, \alpha ix) = g(Jx, ix) = g((P^{+}-P^{-})x,x) = \| P^{+} x\|^{2} - \|P^{-}x \|^{2}.
\end{equation*}
We thus see that $g_{W}$ is positive definite if and only if $\|P^{+}x \|^{2} > \|P^{-}x \|^{2}$ for all $0 \neq x \in W^{+}$.
Repeating the proof of Proposition \ref{pr:SubspaceToInner} we see that $J_W$ and $g_W$ restrict to the real space.
Furthermore, we have that the restriction of $g_W$ to the real space is strong, because $\varphi_{g_W}= -\varphi_{\omega} J_W$.
Now, suppose that $\|P^{+}x \|^{2} > \|P^{-}x\|^{2}$ for all $0 \neq x \in W^{+}$.
We then have, for all $x \in W^{+}$
\begin{equation*}
\| x \|^{2} = \|P^{+} x\|^{2} + \|P^{-}x\|^{2} \leqslant 2 \|P^{+}x \|^{2},
\end{equation*}
whence $P^{+}|_{W^{+}}$ is bounded below, and thus injective with closed image.
We now claim that $P^{+}W^{+}$ is dense in $L^{+}$.
Indeed, suppose that $y \in (P^{+}W^{+})^{\perp} \subseteq L^{+}$.
We then have, for all $x \in W^{+}$
\begin{equation*}
0 = g(y, P^{+}x) = g(y, x) = g(y, (P_{W}^{+} - P_{W}^{-})x) = ig(y, J_{W}x) = -i\omega(y, \alpha x)
\end{equation*}
which implies that $y$ is in the symplectic complement of $W^{-}$, which is equal to $W^{-}$.
This means that $\alpha y \in L^{-} \cap W^{+}$, but this intersection is zero, because it is contained in the kernel of $P^{+}|_{W^{+}}$, which is zero.
We conclude that $P^{+}|_{W^{+}}$ is a bijection, and thus invertible.
\end{proof}
\begin{exercise}\label{exer:VerifyGraphGivesSubspace}
Show that if $W^{+} \subset H_{\mathbb{C}}$ is a positive symplectic polarization, then $W^{+} = \mathrm{graph}(Z)$, where $Z = P^{-}(P^{+}|_{W^{+}})^{-1}: L^{+} \rightarrow L^{-}$.
\end{exercise}
\begin{corollary}\label{cor:GraphOperator}
If $W^{+} \subset H_{\mathbb{C}}$ is a positive symplectic polarization, then the operator $Z = P^{-}(P^{+}|_{W^{+}})^{-1}$ is the unique operator such that $W^{+} = \operatorname{graph}(Z)$.
\end{corollary}
\begin{proof}
Any operator is uniquely defined by its graph, so the only question is existence; since we have provided an expression for $Z$, all that remains to be shown is that this does the trick, which is \cref{exer:VerifyGraphGivesSubspace}.
\end{proof}
Now, we would like to determine for which operators $Z: L^{+} \rightarrow L^{-}$ we have that $W_{Z} = \operatorname{graph}(Z) \in \mathrm{Pol}^{\omega}(H)$.
\begin{lemma}\label{lem:GraphLagrangianCondition}
The subspace $W_{Z}$ is Lagrangian if and only if $Z = \alpha Z^{*} \alpha$.
\end{lemma}
\begin{proof}
First, assume that $W_{Z}$ is Lagrangian.
This means that it is in particular isotropic.
We thus compute, for arbitrary $x,y \in L^{+}$
\begin{align} \label{eq:lagrangian_condition_Z}
0 = \omega(x+Zx,y+Zy) &= \omega(x,Zy) + \omega(Zx,y) \nonumber \\
&= g(Jx, \alpha Zy) + g(JZx, \alpha y) \nonumber \\
&= -g(x, (J \alpha Z + Z^{*} J \alpha) y),
\end{align}
and thus $Z = \alpha J Z^{*} J \alpha$.
We have $\alpha J Z^{*} J \alpha = \alpha Z^{*} \alpha$, because $J$ acts as $-i$ on $L^{-}$, the domain of $Z^{*}$, and as $i$ on $L^{+}$, its range, so the two factors of $J$ cancel.
Now, assume that $Z = \alpha J Z^{*} J \alpha$.
The computation above then implies that $W_{Z}$ is isotropic.
It remains to be shown that $W_{Z}$ is Lagrangian.
To that end, suppose that $z = z^{+} + z^{-} \in L^{+} \oplus L^{-}$ satisfies $\omega(z, w) = 0$ for all $w \in W_{Z}$.
We then have, for all $x \in L^{+}$,
\begin{align*}
0 = \omega(x+Zx ,z^{+} + z^{-}) &= \omega(Zx,z^{+}) + \omega(x,z^{-}) \\
&= g(JZx,\alpha z^{+}) + g(Jx,\alpha z^{-}) \\
&= -g(x, Z^{*}J \alpha z^{+}) - g(x, J\alpha z^{-}),
\end{align*}
thus $Z^{*}J \alpha z^{+} + J \alpha z^{-} = 0$, which implies $z^{-} = \alpha J Z^{*}J \alpha z^{+} = Zz^{+}$, whence $z^{+} + z^{-} \in W_{Z}$.
This implies that $W_{Z}$ is maximal isotropic, whence it is Lagrangian, by \cref{lem:maximalMeansLagrangian}.
\end{proof}
\begin{lemma}\label{lem:PositiveGraphCondition}
Suppose that $Z = \alpha Z^{*} \alpha$, so that $W_{Z}$ is Lagrangian. Then $\mathds{1} - Z^{*}Z > 0$ if and only if $W_{Z} \cap \alpha(W_{Z}) = \{ 0 \}$ and $W_Z \in \mathrm{Pol}^\omega(H)$.
\end{lemma}
\begin{proof}
First, assume that $\mathds{1} - Z^{*} Z > 0$.
We have, for all $0 \neq x \in L^{+}$ that
\begin{equation}\label{eq:ZtZ}
x \neq Z^{*}Zx = \alpha Z \alpha Zx.
\end{equation}
Now, we have that $\alpha(W_{Z}) = \operatorname{graph}(\alpha Z \alpha)$, from which it follows that if $z \in W_{Z} \cap \alpha(W_{Z})$, then there exist $x \in L^{+}$ and $y \in L^{-}$ such that $(x,Zx) = z = (\alpha Z \alpha y, y)$.
This implies that $x = \alpha Z \alpha Z x$, whence $x=0$ by \cref{eq:ZtZ}; and thus $W_{Z} \cap \alpha (W_{Z}) = \{ 0 \}$.
Now, we compute, for arbitrary $0 \neq w = x+Zx \in W_{Z}$
\begin{equation}\label{eq:PositivityRelation}
\| P^{+} w \|^{2} - \| P^{-} w \|^{2} = \|x \|^{2} - \|Zx \|^{2} = g(x,x) - g(x, Z^{*}Zx) = g(x, (\mathds{1} - Z^{*}Z)x).
\end{equation}
Now, we wish to show that $W_Z \in \mathrm{Pol}^\omega(H)$.
According to \cref{pr:CompatInequality}, it suffices to show that $\| P^{+}w \|^{2} > \|P^{-}w \|^{2}$ for all $0 \neq w \in W_{Z}$; this follows from the positivity of $\mathds{1} - Z^{*}Z$ through \cref{eq:PositivityRelation}.
Conversely, suppose that $W_{Z} \in \mathrm{Pol}^\omega(H)$.
Then \cref{pr:CompatInequality} gives $\|P^{+}w\|^{2} > \|P^{-}w\|^{2}$ for all $0 \neq w \in W_{Z}$, and \cref{eq:PositivityRelation} then implies that $\mathds{1} - Z^{*}Z$ is positive.
\end{proof}
Set
\begin{equation}\label{eq:SiegelDiskDef}
\mathfrak{D}(H) \defeq \{ Z \in \mc{B}(L^{+},L^{-}) \mid Z = \alpha Z^{*} \alpha, \text{ and } \mathds{1} - Z^{*}Z > 0 \}.
\end{equation}
This is called the Siegel disk, originating with C. L. Siegel \cite{siegel_symplectic_1943}.
The Siegel disk in the setting of Hilbert spaces was defined by G. Segal \cite{segal_unitary_1981}. There, the extra condition that $Z$ is Hilbert-Schmidt was added, resulting in what we call the restricted Siegel disk below.
The preceding discussion is now nicely summarized by the following result.
\begin{proposition}\label{prop:GrassmannianIsSiegelDisk}
The maps
\begin{align*}
\mathrm{Pol}^{\omega}(H) & \rightarrow \mathfrak{D}(H), & \mathfrak{D}(H) & \rightarrow \mathrm{Pol}^{\omega}(H), \\
W &\mapsto P^{-}(P^{+}|_{W})^{-1}, & Z & \mapsto \operatorname{graph}(Z),
\end{align*}
are well-defined, and each other's inverses.
\end{proposition}
\begin{proof}
By \cref{pr:CompatInequality} we know that if $W \in \mathrm{Pol}^{\omega}(H)$, then $(P^{+}|_{W})^{-1}$ exists.
By \cref{cor:GraphOperator} we then have that $W = \mathrm{graph}(P^{-}(P^{+}|_{W})^{-1})$.
From \cref{lem:GraphLagrangianCondition,lem:PositiveGraphCondition} it then follows that $P^{-}(P^{+}|_{W})^{-1} \in \mathfrak{D}(H)$.
Conversely, it follows from \cref{lem:GraphLagrangianCondition,lem:PositiveGraphCondition} that if $Z \in \mathfrak{D}(H)$, then $\operatorname{graph}(Z) \in \mathrm{Pol}^{\omega}(H)$.
That these maps are each other's inverses is a straightforward consequence of \cref{cor:GraphOperator}.
\end{proof}
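For instance, when $\operatorname{dim}_{\mathbb{C}} L^{+} = 1$, pick a unit vector $e_{+} \in L^{+}$ and set $e_{-} = \alpha e_{+}$ (also a unit vector, as $\alpha$ is norm-preserving). An operator $Z \in \mc{B}(L^{+},L^{-})$ is then given by a scalar, $Ze_{+} = ze_{-}$ with $z \in \mathbb{C}$. One computes $Z^{*}e_{-} = \bar{z}e_{+}$, so that the symmetry condition $Z = \alpha Z^{*} \alpha$ holds automatically, while $\mathds{1} - Z^{*}Z > 0$ becomes $1 - |z|^{2} > 0$. In this case $\mathfrak{D}(H)$ is therefore the open unit disk in $\mathbb{C}$, which explains the name.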
\begin{remark} \label{re:Siegel_upper_half}
Another model of $\mathrm{Pol}^\omega(H)$ is given by the Siegel upper half space. This is equivalent but based on different conventions. Rather than viewing positive symplectic polarizations as a variation of a fixed polarization $L^+ \oplus L^-$, instead one fixes a real Lagrangian decomposition. More precisely, we say that a Lagrangian subspace $L$ in $H_{\mathbb{C}}$ is real if it is the complexification of a Lagrangian subspace in $H$. Fix a decomposition $H_{\mathbb{C}} = L \oplus L'$ where $L$ and $L'$ are transverse real Lagrangian spaces. It can be shown that any $W \in \mathrm{Pol}^\omega(H)$ is transverse to $L$ and $L'$, and can be uniquely expressed as the graph of some $Z:L \rightarrow L'$. This $Z$ satisfies $Z^T = Z$ and $\mathrm{Im}(Z)$ is positive definite. The set of such matrices is called the Siegel upper half space $\mathfrak{H}(H)$. These two conditions can be given meaning either through the use of a specific basis, or by $Z^T:= \alpha Z^* \alpha$ and $\mathrm{Im}(Z) := (1/2i) (Z - \alpha \, Z \, \alpha)$. For details, see \cite{berndt_introduction_2001,vaisman_symplectic_1987,siegel_symplectic_1943}.
\end{remark}
\begin{example}
Here we refer to Example \ref{ex:polarization_compact_surface}.
Let $\mathscr{R}$ be a compact Riemann surface, and $\mathcal{A}(\mathscr{R}) \oplus \overline{\mathcal{A}(\mathscr{R})}$ the decomposition of the space of complex harmonic one-forms on $\mathscr{R}$.
This polarization, induced by the complex structure on the Riemann surface, is traditionally represented by an element of the Siegel upper half space as follows.
Choose simple closed curves $a_1,\ldots,a_\mathfrak{g},b_1,\ldots,b_{\mathfrak{g}}$ whose classes generate the first homology group of $\mathscr{R}$.
These can be chosen so that their intersection numbers $\gamma_1 \cdot \gamma_2$ satisfy $a_k \cdot b_j = - b_j \cdot a_k = \delta_{kj}$, and are zero otherwise. Intuitively, the intersection number counts the crossings of the two curves with sign, where the sign of each crossing depends on the relative direction. For a precise definition of intersection numbers, and a proof of the existence of such a basis, see \cite{farkas_riemann_1992,siegel_topics_1988}.
It can be shown that there are holomorphic one-forms $\beta_1,\ldots,\beta_{\mathfrak{g}}$ such that
\[ \int_{a_k} \beta_j = \delta_{jk}. \]
Then,
\[ Z_{kj} = \int_{b_k} \beta_j \]
is a matrix representing an element of the Siegel upper half space.
The symmetry of this matrix is referred to as the Riemann bilinear relations, and $Z_{kj}$ is called the period matrix.
To see the relation to the operator specified in Remark \ref{re:Siegel_upper_half}, let $\eta_1,\ldots,\eta_{2\mathfrak{g}}$ be the unique basis of harmonic one-forms on $\mathscr{R}$ dual to the curves $a_1,\ldots,a_{\mathfrak{g}},b_1,\ldots,b_{\mathfrak{g}}$. Explicitly,
\[ \int_{a_k} \eta_j = \delta_{jk} \ \ \text{ and }
\int_{b_k} \eta_{j+\mathfrak{g}} = \delta_{jk}. \]
Let $L$ be the span of $\{\eta_1,\ldots,\eta_{\mathfrak{g}} \}$ and $L'$ be the span of $\{\eta_{\mathfrak{g}+1},\ldots,\eta_{2\mathfrak{g}}\}$. Then it can be shown that
\begin{equation} \label{eq:holomorphic_oneforms_explicit}
\beta_j = \eta_j + \sum_{k=1}^{\mathfrak{g}} Z_{jk} \eta_{k+\mathfrak{g}}, \ \ \ j = 1,\ldots,\mathfrak{g}.
\end{equation}
and
\[ \mathcal{A}(\mathscr{R}) = \mathrm{span} \{ \beta_1,\ldots,\beta_{\mathfrak{g}} \}. \]
Thus if $Z_{jk}$ are the components of an operator $Z:L \rightarrow L'$ with respect to the bases above, then $Z$ is in the Siegel upper half space and $\mathcal{A}(\mathscr{R})$ is the graph of $Z$.
See \cite{farkas_riemann_1992,siegel_topics_1988} for details.
Observe that an important role is played by the choice of homology basis (or marking).
\end{example}
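A standard illustration (the classical genus-one case, included here as a sketch rather than drawn from the discussion above): let $\mathscr{R} = \mathbb{C}/(\mathbb{Z} + \tau \mathbb{Z})$ with $\mathrm{Im}\,\tau > 0$, so that $\mathfrak{g} = 1$. Take $a_1$ and $b_1$ to be the images of the segments from $0$ to $1$ and from $0$ to $\tau$, respectively, and $\beta_1 = dz$. Then
\[ \int_{a_1} \beta_1 = 1 \quad \text{and} \quad Z_{11} = \int_{b_1} \beta_1 = \tau, \]
so the period matrix is the $1 \times 1$ matrix $(\tau)$, and the Siegel upper half space reduces to the classical upper half plane.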
\begin{exercise} \label{exer:holomorphic_one_forms}
The Hodge theorem says that every cohomology class in $H^{1}_{\text{dR}}(\mathscr{R})$ is represented by a unique harmonic one-form. Use the Hodge theorem
to show that the one-forms of \cref{eq:holomorphic_oneforms_explicit} are indeed holomorphic and span $\mathcal{A}(\mathscr{R})$.
\end{exercise}
Next, we return to the symplectic action on $\mathrm{Pol}^\omega(H)$ and express it in terms of $\mathfrak{D}(H)$.
Fix an element $u \in \operatorname{Sp}(H)$; we obtain a new subspace $W_{u}^{+} \defeq u(L^{+}) \subset H_{\mathbb{C}}$.
It is easy to see from the definition of $\operatorname{Sp}(H)$ that $W_{u}^{+}$ is again a Lagrangian, which moreover satisfies $W_{u}^{+} \cap \alpha W_{u}^{+} = \{ 0 \}$.
The equation $\omega(ux,uy) = \omega(x,y)$ implies that
\begin{equation} \label{eq:J_commute_symplectic}
-Ju^{*}Ju = \mathds{1}.
\end{equation}
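To spell this step out (a short verification, assuming the standard convention $\omega(x,y) = g(Jx,y)$ for the compatible triple; the computation for the convention $g(x,Jy)$ is analogous): the condition $\omega(ux,uy) = \omega(x,y)$ for all $x,y \in H$ reads
\[ g(u^{*}Jux, y) = g(Jux, uy) = \omega(ux,uy) = \omega(x,y) = g(Jx,y), \]
that is, $u^{*}Ju = J$. Multiplying on the left by $-J$ and using $J^{2} = -\mathds{1}$ yields \cref{eq:J_commute_symplectic}.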
Writing
\begin{equation} \label{eq:block_form_symplectomorphism}
u = \begin{pmatrix}
a & \alpha b \alpha \\
b & \alpha a \alpha
\end{pmatrix}:\begin{array}{c} L^+ \\ \oplus \\ L^- \end{array} \rightarrow \begin{array}{c} L^+ \\ \oplus \\ L^- \end{array}
\end{equation}
in block form we obtain
\begin{equation} \label{eq:symplectomorphism_identities}
\mathds{1} = - \begin{pmatrix}
i & 0 \\
0 & -i
\end{pmatrix}
\begin{pmatrix}
a^{*} & b^{*} \\
\alpha b^{*} \alpha & \alpha a^{*} \alpha
\end{pmatrix}
\begin{pmatrix}
i & 0 \\
0 & -i
\end{pmatrix}
\begin{pmatrix}
a & \alpha b \alpha \\
b & \alpha a \alpha
\end{pmatrix} = \begin{pmatrix}
a^{*}a - b^{*}b & a^{*}\alpha b \alpha - b^{*} \alpha a \alpha \\
- \alpha b^{*} \alpha a + \alpha a^{*} \alpha b & - \alpha b^{*} b \alpha + \alpha a^{*}a \alpha
\end{pmatrix}.
\end{equation}
By a symplectomorphism we mean a bounded bijection of $H$ which preserves the symplectic form; recall that the inverse of a bounded bijection is automatically bounded, by the open mapping theorem.
\begin{lemma} \label{le:top_left_invertible_symplecto}
Let $u$ be a symplectomorphism in block form (\ref{eq:block_form_symplectomorphism}). Then $a$ is invertible and
\begin{equation} \label{eq:symp_inverse_formula}
u^{-1} = \left( \begin{array}{cc} a^* & -b^* \\ -\alpha b^* \alpha & \alpha a^* \alpha \end{array} \right).
\end{equation}
\end{lemma}
\begin{proof}
The expression for $u^{-1}$ follows from \cref{eq:J_commute_symplectic,eq:block_form_symplectomorphism}. By \cref{eq:symplectomorphism_identities} we have $a^{*}a - b^{*}b = \mathds{1}$, so for any $v \in L^{+}$
\[ \| v \|^2 = \| av \|^2 - \| bv \|^2 \leqslant \| av \|^{2}. \]
In particular, if $av = 0$ then $\|v\|^{2} = -\|bv\|^{2} \leqslant 0$, so $v=0$ and $a$ is injective. Applying this to $u^{-1}$ we see that $a^*$ is injective, so (denoting closure by $\mathrm{cl}$)
\[ \mathrm{cl} \, \mathrm{range}(a) = \mathrm{ker}(a^*)^\perp = L^+, \]
i.e.~$a$ has dense range. Since $\| v\|^2 \leqslant \| a v\|^2$, $a$ is bounded below, so it is invertible.
\end{proof}
\begin{exercise} \label{exer:sympletic_inverse}
Verify that $u^{-1}$ is given by equation (\ref{eq:symp_inverse_formula}), completing the proof of the lemma.
\end{exercise}
\begin{corollary} \label{cor:SpActsOnSiegelDisk}
If $u \in \operatorname{Sp}(H)$ is given in block form by (\ref{eq:block_form_symplectomorphism}) and $Z \in \mathfrak{D}(H)$, then $a + \alpha b \alpha Z$ is invertible.
Under the bijection of Proposition \ref{prop:GrassmannianIsSiegelDisk}, $\operatorname{Sp}(H)$ acts transitively on $\mathfrak{D}(H)$ via
\[ Z \mapsto ( {b} + \alpha {a} \alpha Z)(a + \alpha b \alpha Z)^{-1}. \]
\end{corollary}
\begin{proof}
Fix $Z \in \mathfrak{D}(H)$ and let $W_+$ be the graph of $Z$, so that
\begin{align*}
\Psi: L^+ & \rightarrow W_+ \\
x & \mapsto x + Z x
\end{align*}
is an isomorphism. Let $W_+'=u W_+$ and observe that $\left. u \right|_{W_+}: W_+ \rightarrow W_+'$ is an isomorphism. Since $\left. P^{+} \right|_{W_+'}$ is also an isomorphism by Corollary \ref{cor:GraphOperator}, we see that
\[ a + \alpha b \alpha Z = P^{+} \, u \, \Psi: L^+ \rightarrow L^+ \]
is invertible.
We have already seen in Lemma \ref{lem:HomogeneousSets} that $\operatorname{Sp}(H)$ acts transitively on $\mathrm{Pol}^\omega(H)$, so the induced action on $\mathfrak{D}(H)$ is transitive. Since $W_+'$ is the image of the graph of $Z$ under $u$, it is the image of
\[ \left( \begin{array}{c} a + \alpha b \alpha Z \\ b + \alpha a \alpha Z \end{array} \right) = u \left( \begin{array}{c} \mathds{1} \\ Z \end{array} \right):L^+ \rightarrow \begin{array}{c} L^+ \\ \oplus \\ L^- \end{array}. \]
This agrees with the image of
\[ \left( \begin{array}{c} \mathds{1} \\ (b + \alpha a \alpha Z)( a + \alpha b \alpha Z)^{-1} \end{array} \right) \]
which proves the claim.
\end{proof}
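To illustrate the corollary in the simplest case (a one-dimensional sanity check, not part of the development above): suppose that $L^{+}$ and $L^{-}$ are one-dimensional, spanned by $e_{+}$ and $e_{-} = \alpha e_{+}$, and identify operators with scalars, so that $Z \in \mathfrak{D}(H)$ becomes a complex number with $|Z| < 1$. For $t \in \mathbb{R}$, the block operator
\[ u = \begin{pmatrix} \cosh t & \sinh t \\ \sinh t & \cosh t \end{pmatrix} \]
satisfies the identities (\ref{eq:symplectomorphism_identities}), since $\cosh^{2} t - \sinh^{2} t = 1$, and the action of the corollary becomes the M\"obius transformation
\[ Z \mapsto \frac{\sinh t + (\cosh t) Z}{\cosh t + (\sinh t) Z}, \]
which maps the open unit disk to itself.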
\begin{example} \label{ex:smooth_example_bosonicH}
Let $H^{\mathrm{b}}_{\mathbb{C}}$ be as in Example \ref{ex:bosonExample}. Assume that $\varphi \in \mathrm{Diff}(\mathbb{S}^1)$. Modelling $H^{\mathrm{b}}_{\mathbb{C}}$ by the space of functions $\dot{H}^{1/2}(\mathbb{S}^1)$ and using the expression
\[ \omega(f,g) = \int_{\mathbb{S}^1} f \, dg \]
for smooth $f,g$, we see by change of variables that
\begin{align*}
\mathcal{C}_\varphi: \dot{H}^{1/2}(\mathbb{S}^1) &\rightarrow \dot{H}^{1/2}(\mathbb{S}^1) \\f & \mapsto f \circ \varphi - \frac{1}{2\pi} \int_{\mathbb{S}^1} f \, d\theta
\end{align*}
preserves $\omega$ on smooth functions. It can be shown that $\mathcal{C}_\varphi$ is bounded for any diffeomorphism $\varphi$, so the invariance of $\omega$ under $\mathcal{C}_\varphi$ extends to all of $H^{\mathrm{b}}$ by continuity of $\omega$. Furthermore, since $\varphi^{-1} \in \mathrm{Diff}(\mathbb{S}^1)$, the operator $\mathcal{C}_{\varphi^{-1}}$ is a bounded inverse of $\mathcal{C}_\varphi$, and we conclude that $\mathcal{C}_\varphi$
is a symplectomorphism.
Letting $L^\pm$ be as in Example \ref{ex:bosonExample}, we then have that
\[ W_\varphi = \mathcal{C}_\varphi L^+ \]
defines an element of $\mathrm{Pol}^{\omega}(H^{\mathrm{b}})$.
The stabilizer of $L^+$ under this action is precisely $\text{M\"ob}(\mathbb{S}^1)$, the group of M\"obius transformations preserving the circle. In summary, we obtain a well-defined injection
\begin{align*}
\text{Diff}(\mathbb{S}^1)/\text{M\"ob}(\mathbb{S}^1) & \rightarrow \mathrm{Pol}^\omega(H^{\mathrm{b}}) \\
{[\varphi]} & \mapsto W_\varphi
\end{align*}
where $[\varphi]$ denotes the equivalence class of a representative $\varphi \in \text{Diff}(\mathbb{S}^1)$.
It can be shown that the operator $Z \in \mathfrak{D}(H^{\mathrm{b}})$ associated to $W_\varphi$ (see Corollary \ref{cor:GraphOperator}) is the Grunsky operator, introduced into complex function theory by H.~Grunsky in 1939.
\end{example}
\begin{remark} \label{re:universal_Teich_space}
Examples \ref{ex:smooth_example_bosonicH}, \ref{ex:real_boson_example}, and \ref{ex:bosonExample} originate with G.~Segal \cite{segal_unitary_1981}, who introduced the concept of the infinite Siegel disk in the same paper.
It was shown by Nag--Sullivan \cite{nag_teichmuller_1995} and Vodop'yanov \cite{vodopyanov_mappings_1989} that a homeomorphism of $\mathbb{S}^1$ induces a symplectomorphism of $H^{\mathrm{b}}$ if and only if it is a quasisymmetry. Furthermore, the unitary subgroup is the set of M\"obius transformations preserving the circle.
(Equivalently, by Exercise \ref{exer:stabilizer}, the unitary subgroup is the stabilizer of the polarization in Example \ref{ex:bosonExample}).
The definition of quasisymmetries is beyond the scope of this paper. We mention only that the quotient $\text{QS}(\mathbb{S}^1)/\text{M\"ob}(\mathbb{S}^1)$ of quasisymmetries modulo M\"obius transformations is a model of the universal Teichm\"uller space.
Takhtajan and Teo showed that this gives a holomorphic embedding of the universal Teichm\"uller space $\text{QS}(\mathbb{S}^1)/\text{M\"ob}(\mathbb{S}^1)$ into the infinite Siegel disk.
The fact that the operator $Z$ is the Grunsky matrix was shown by Kirillov and Yur'ev \cite{kirillov_representations_1988} in the smooth setting, and by Takhtajan and Teo \cite{takhtajan_weil-petersson_2006} in the quasisymmetric case.
\end{remark}
Next, we define the restricted Grassmannian and symplectic group.
We first define a relation $\sim$ on $\mathrm{Pol}^{\omega}(H)$ by saying that $W_{1}^{+} \sim W_{2}^{+}$ if the restriction to $W_{1}^{+} \subset H_{\mathbb{C}}$ of the projection operator $H_{\mathbb{C}} = W_{2}^{+} \oplus \alpha (W_{2}^{+}) \rightarrow \alpha (W_{2}^{+})$ is Hilbert-Schmidt.
One might object that this definition is ambiguous, since it does not specify with respect to which inner product this operator is required to be Hilbert-Schmidt (reasonable options being $g$, $g_{W_{1}}$, and $g_{W_{2}}$).
The following result tells us that it does not matter.
\begin{lemma}
If $H$ is a Hilbert space, equipped with two strong inner products, $g_{1}$ and $g_{2}$, then an operator $T:H \rightarrow H$ is Hilbert-Schmidt with respect to $g_{1}$ if and only if it is Hilbert-Schmidt with respect to $g_{2}$.
\end{lemma}
\begin{proof}
Choose bases $\{ e_{i} \}$ and $\{ f_{i} \}$ which are orthonormal for $g_{1}$ and $g_{2}$ respectively.
Let $A: H \rightarrow H$ be the complex-linear extension of $e_{i} \mapsto f_{i}$.
The map $A$ is bounded linear, with bounded linear inverse by \cref{pr:StrongEquivalence}.
Moreover, $A$ satisfies $g_{1}(v,w) = g_{2}(Av,Aw)$ for all $v,w \in H$.
Denote by $\| - \|_{i}$ the Hilbert-Schmidt norm w.r.t.~$g_{i}$.
Now, if $T$ is Hilbert-Schmidt w.r.t.~$g_{1}$, then so is $A^{-1}TA$, and we compute
\begin{align*}
\| A^{-1}TA \|_{1}^{2} &= \sum_{i} g_{1}( A^{-1}TA e_{i}, A^{-1}TA e_{i}) \\
&= \sum_{i} g_{2}( TA e_{i}, TA e_{i}) \\
&= \sum_{i} g_{2}(T f_{i}, T f_{i}) = \|T\|_{2}^{2}. \qedhere
\end{align*}
\end{proof}
\begin{lemma}\label{lem:BlockHilbertSchmidt}
Let $u \in \operatorname{Sp}(H)$.
Then $uL^{+} \sim L^{+}$ if and only if $b$ is Hilbert-Schmidt, where $b$ is given by the decomposition \cref{eq:block_form_symplectomorphism}.
\end{lemma}
\begin{proof}
The projection $uL^{+} \rightarrow \alpha(L^{+})$ is given by $(ax,bx) \mapsto bx$.
Because $a$ is invertible (by \cref{le:top_left_invertible_symplecto}), this projection is Hilbert-Schmidt if and only if $b$ is Hilbert-Schmidt.
\end{proof}
\begin{proposition}\label{prop:SymplecticEquivalence}
The relation $\sim$ on $\mathrm{Pol}^{\omega}(H)$ is an equivalence relation.
\end{proposition}
\begin{proof}
It is clear that $\sim$ is reflexive, because the projection operator $W^{+} \rightarrow \alpha(W^{+})$ is identically zero.
Now, suppose that $W_{1}^{+} \sim W_{2}^{+}$.
We may, without loss of generality, assume that $W_{2}^{+} = L^{+}$.
There then exists an element $u \in \operatorname{Sp}(H)$ such that $uL^{+} = W_{1}^{+}$.
Let $q:H_{\mathbb{C}} \rightarrow u(L^{-})$ be the projection with respect to the splitting $H_{\mathbb{C}} = u(L^{+}) \oplus u(L^{-})$.
We determine the decomposition of $q$ with respect to the splitting $H_{\mathbb{C}} = L^{+} \oplus L^{-}$.
Let $x \in H_{\mathbb{C}}$.
Then, there exist $v^{\pm} \in u(L^{\pm})$ such that $x = v^{+} + v^{-}$; and we have $q(x) = v^{-}$.
We apply $u^{-1}$ to obtain $u^{-1}x = u^{-1}v^{+} + u^{-1}v^{-}$.
We observe that $u^{-1}v^{\pm} \in L^{\pm}$, thus projecting onto $L^{-}$, we obtain $P^{-}u^{-1}x = u^{-1}v^{-}$.
Finally, we apply $u$ to obtain $uP^{-}u^{-1}(x) = v^{-} = q(x)$.
We now compute
\begin{equation*}
uP^{-}u^{-1} = \begin{pmatrix}
a & \alpha b \alpha \\
b & \alpha a \alpha
\end{pmatrix} \begin{pmatrix}
0 & 0 \\
0 & 1
\end{pmatrix} \begin{pmatrix}
a^{*} & -b^{*} \\
- \alpha b^{*} \alpha & \alpha a^{*} \alpha
\end{pmatrix} = \begin{pmatrix}
0 & \alpha b \alpha \\
0 & \alpha a \alpha
\end{pmatrix} \begin{pmatrix}
a^{*} & -b^{*} \\
- \alpha b^{*} \alpha & \alpha a^{*} \alpha
\end{pmatrix} =
\begin{pmatrix}
- \alpha bb^{*} \alpha & \alpha b a^{*} \alpha \\
- \alpha a b^{*} \alpha & \alpha aa^{*} \alpha
\end{pmatrix}.
\end{equation*}
The restriction of $q$ to $L^{+}$ is simply given by the first column of this matrix.
We thus see that if $b$ is Hilbert-Schmidt, then $q$ is Hilbert-Schmidt.
The assumption is that $uL^{+} \sim L^{+}$, which tells us that $b$ is Hilbert-Schmidt, through \cref{lem:BlockHilbertSchmidt}.
It follows that $q$ is Hilbert-Schmidt, thus $L^{+} \sim u L^{+}$.
Finally, suppose now that we have $W_{1}^{+} \sim W_{2}^{+}$ and $W_{2}^{+} \sim W_{3}^{+}$.
Let $\iota_{1}^{+}:W_{1}^{+} \rightarrow H_{\mathbb{C}}$ and $\iota_{2}^{+}: W_{2}^{+} \rightarrow H_{\mathbb{C}}$ be the inclusions, and let $P_{i}^{\pm}: H_{\mathbb{C}} \rightarrow W_{i}^{\pm}$ be the projection with respect to the decompositions $H_{\mathbb{C}} = W_{i}^{+} \oplus W_{i}^{-}$ for $i=1,2,3$.
We then have that the operators $P_{2}^{-}\iota_{1}^{+}$ and $P_{3}^{-}\iota_{2}^{+}$ are Hilbert-Schmidt.
We have
\begin{equation*}
P_{3}^{-}\iota_{1}^{+} = P_{3}^{-}(\iota_{2}^{+}P_{2}^{+} + \iota_{2}^{-}P_{2}^{-})\iota_{1}^{+} = P_{3}^{-} \iota_{2}^{+} P_{2}^{+} \iota_{1}^{+} + P_{3}^{-} \iota_{2}^{-} P_{2}^{-} \iota_{1}^{+}.
\end{equation*}
It follows that $P_{3}^{-} \iota_{1}^{+}$ is Hilbert-Schmidt, and we are done.
\end{proof}
We introduce the \emph{restricted symplectic Lagrangian Grassmannian}
\begin{align*}
\mathrm{Pol}_{2}^{\omega}(H) &\defeq \{ W^{+} \in \mathrm{Pol}^{\omega}(H) \mid W^{+} \sim L^{+} \} = \{ W^{+} \in \mathrm{Pol}^{\omega}(H) \mid L^{+} \rightarrow \alpha(W^{+}) \text{ is Hilbert-Schmidt} \},
\end{align*}
where $L^{+} \rightarrow \alpha(W^{+})$ is the restriction of the projection $H_{\mathbb{C}} = W^{+} \oplus \alpha(W^{+}) \rightarrow \alpha(W^{+})$ to $L^{+} \subset H_{\mathbb{C}}$.
Similarly, we may consider the \emph{restricted Siegel disk}
\begin{equation*}
\mathfrak{D}_{2}(H) \defeq \mathfrak{D}(H) \cap \mc{B}_{2}(L^{+}, L^{-}),
\end{equation*}
where $\mc{B}_{2}(L^{+},L^{-})$ is the space of Hilbert-Schmidt operators from $L^{+}$ to $L^{-}$.
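A simple family of examples, included here as a routine check rather than drawn from the surrounding text, is given by diagonal operators. Choose an orthonormal basis $\{ e_{n} \}_{n \geqslant 1}$ of $L^{+}$, set $e_{-n} = \alpha e_{n}$, and let $Z e_{n} = z_{n} e_{-n}$ for a bounded sequence $(z_{n})$ of complex numbers. One checks directly that $\alpha Z \alpha = Z^{*}$, and that
\[ Z \in \mathfrak{D}(H) \iff \sup_{n} |z_{n}| < 1, \qquad Z \in \mathfrak{D}_{2}(H) \iff \sup_{n} |z_{n}| < 1 \text{ and } \sum_{n} |z_{n}|^{2} < \infty. \]
Thus the constant sequence $z_{n} = 1/2$ gives an element of $\mathfrak{D}(H) \setminus \mathfrak{D}_{2}(H)$, while $z_{n} = 2^{-n}$ gives an element of $\mathfrak{D}_{2}(H)$.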
\begin{lemma} \label{le:LG_Siegel_disk_isomorphism_res}
The bijection $\mathrm{Pol}^{\omega}(H) \rightarrow \mathfrak{D}(H)$ of \cref{prop:GrassmannianIsSiegelDisk} restricts to a bijection $\mathrm{Pol}_{2}^{\omega}(H) \rightarrow \mathfrak{D}_{2}(H)$.
\end{lemma}
\begin{proof}
First, the image of $\mathrm{Pol}_{2}^{\omega}(H)$ in $\mathfrak{D}(H)$ under the map above lies in $\mathfrak{D}_{2}(H)$: if $W^{+} \sim L^{+}$, then the restriction $P^{-}|_{W^{+}}$ is Hilbert-Schmidt, and hence so is $Z = P^{-}(P^{+}|_{W^{+}})^{-1}$.
To prove the converse, we observe that the orthogonal projection $p:\mathrm{graph}(Z) \rightarrow \alpha(L^{+})$ is given by $(x,Zx) \mapsto Zx$.
If $\hat{Z}: L^{+} \rightarrow \mathrm{graph}(Z)$ is the invertible operator $x \mapsto (x,Zx)$ we thus have $Z = p \hat{Z}$.
This implies that if $Z$ is Hilbert-Schmidt, then $p$ must be Hilbert-Schmidt.
In other words, $\mathrm{graph}(Z) \sim L^{+}$.
The symmetry of $\sim$ then implies that $\mathrm{graph}(Z) \in \mathrm{Pol}_{2}^{\omega}(H)$.
\end{proof}
The \emph{restricted symplectic group} is
\begin{equation*}
\operatorname{Sp}_{2}(H) \defeq \{ u \in \operatorname{Sp}(H) \mid u(L^+) \sim L^{+} \}.
\end{equation*}
\begin{proposition} \label{pr:symp_res_characterization}
Let $u \in \operatorname{Sp}(H)$. The following are equivalent.
\begin{enumerate}
\item $u \in \operatorname{Sp}_{2}(H)$;
\item If $u$ is given in block form by (\ref{eq:block_form_symplectomorphism}) then $b$ is Hilbert-Schmidt;
\item $u^{-1}Ju - J$ is Hilbert-Schmidt.
\end{enumerate}
\end{proposition}
\begin{proof}
By Lemma \ref{le:top_left_invertible_symplecto}, $a$ is invertible. The subspace $u L^+$ is the image of the operator $(a,b)^T: L^{+} \rightarrow L^{+} \oplus L^{-}$, and thus is the graph of $Z = b a^{-1}$. Again since $a$ is invertible, $b$ is Hilbert-Schmidt if and only if $Z$ is Hilbert-Schmidt. By Lemma \ref{le:LG_Siegel_disk_isomorphism_res}, conditions 1 and 2 are therefore equivalent. The equivalence of 2 and 3 follows from the computation
\begin{equation} \label{eq:J_deformation}
u^{-1} J u - J = 2i \left( \begin{array}{cc} b^*b & b^* \alpha a \alpha \\ - \alpha b^* \alpha a & - \alpha b^* b \alpha \end{array} \right).
\end{equation}
\end{proof}
\begin{exercise} \label{exer:complete_Hilbert_Schmidt_char}
Show that \cref{eq:J_deformation} holds.
\end{exercise}
The actions of $\operatorname{Sp}(H)$ on the Siegel disk and Lagrangian Grassmannian restrict appropriately.
\begin{proposition} \label{pr:res_sym_preserves}
The actions of $\operatorname{Sp}_{2}(H)$ preserve $\mathrm{Pol}_2^{\omega}(H)$ and $\mathfrak{D}_2(H)$.
\end{proposition}
\begin{proof}
That $\operatorname{Sp}_{2}(H)$ preserves $\mathrm{Pol}_{2}^{\omega}(H)$ is clear by definition.
That $\operatorname{Sp}_{2}(H)$ preserves $\mathfrak{D}_{2}(H)$ follows from \cref{cor:SpActsOnSiegelDisk} together with part 2 of \cref{pr:symp_res_characterization}.
\end{proof}
\begin{theorem} The Siegel disk $\mathfrak{D}(H)$ is an open subset of the Banach space of bounded linear operators $Z: L^{+} \rightarrow L^{-}$ satisfying $\alpha Z \alpha = Z^*$, and in particular is a complex Banach manifold. The restricted Siegel disk $\mathfrak{D}_2(H)$ is an open subset of the Hilbert space of Hilbert-Schmidt operators $Z: L^{+} \rightarrow L^{-}$ satisfying $\alpha Z \alpha = Z^*$, and in particular is a complex Hilbert manifold.
\end{theorem}
\begin{proof}
Let $X$ denote the set of operators $Z \in \mc{B}(L^+,L^-)$ satisfying $\alpha Z \alpha = Z^*$. Then $X$ is a closed linear subspace of $\mc{B}(L^+,L^-)$, so it is itself a Banach space.
We have that $\mathds{1} - Z^* Z$ is positive definite if and only if $\| Z \|<1$. Thus $\mathfrak{D}(H)$ is the intersection of $X$ with the open unit ball of $\mc{B}(L^+,L^-)$, and hence an open subset of $X$.
The Hilbert-Schmidt norm controls the operator norm; that is, the inclusion of the Hilbert-Schmidt operators into the bounded linear operators is a bounded map. Thus $\mathfrak{D}_2(H)$ is an open subset of the Hilbert space $X \cap \mc{B}_{2}(L^+,L^-)$, and the remaining claims follow similarly.
\end{proof}
By Propositions \ref{prop:GrassmannianIsSiegelDisk} and \ref{pr:res_sym_preserves} this gives the Grassmannian and restricted Grassmannian Banach and Hilbert manifold structures respectively.
\begin{example} Referring to Example \ref{ex:smooth_example_bosonicH} and Remark \ref{re:universal_Teich_space}, it was shown by Segal that for smooth $\varphi$ the operator $\mathcal{C}_\varphi$ is an element of $\operatorname{Sp}_{2}(H)$. It was shown by Takhtajan and Teo that for a quasisymmetry $\varphi$, the operator $\mathcal{C}_\varphi$ is in $\operatorname{Sp}_{2}(H)$ if and only if $\varphi$ is what is known as a Weil-Petersson quasisymmetry. The definition is beyond the scope of this paper; we mention only that the quotient $\text{QS}_{\text{WP}}(\mathbb{S}^1)/\text{M\"ob}(\mathbb{S}^1)$ of Weil-Petersson quasisymmetries by the M\"obius transformations of the circle is a model of the Weil-Petersson universal Teichm\"uller space.
\end{example}
\section{The Grassmannian of polarizations associated to an inner product}\label{sec:fermions}
In this section, we fix an inner product, and describe the Lagrangian Grassmannian of orthogonal polarizations associated to that inner product. We show that it is a complex Banach manifold.
We also consider the ``restricted'' Grassmannian and orthogonal group, and establish that the restricted Grassmannian is a Hilbert manifold.
We assume again that $H$ is equipped with a compatible triple $(g,J,\omega)$, and we write $H_{\mathbb{C}} = L^{+} \oplus L^{-}$ for the decomposition w.r.t.~$J$, i.e.~$L^{\pm} = \ker(J \mp i)$.
Now, choose an orthonormal basis $\{ e_{i} \}_{i \geqslant 1}$ for $L^{+}$.
For $i \leqslant -1$ we set $e_{i} = \alpha e_{-i}$, which yields an orthonormal basis $\{ e_{i} \}_{i \leqslant -1}$ for $L^{-} = \alpha(L^{+})$.
Nothing material will depend on these choices, but they will be convenient for the proofs to come.
Motivated by \cref{thm:FermionEquivalence} we introduce the \emph{restricted Lagrangian Grassmannian}
\begin{align*}
\mathrm{Pol}_{2}^{g}(H) &\defeq \{ W^{+} \in \mathrm{Pol}^{g}(H) \mid L^{+} \rightarrow \alpha(W^{+}) \text{ is Hilbert-Schmidt} \},
\end{align*}
where $L^{+} \rightarrow \alpha(W^{+})$ is the restriction of the orthogonal projection $H_{\mathbb{C}} \rightarrow \alpha(W^{+})$ to $L^{+} \subset H_{\mathbb{C}}$.
We define a relation $\sim$ on $\mathrm{Pol}^{g}(H)$ by saying that $W_{1}^{+} \sim W_{2}^{+}$ if the orthogonal projection $W_{1}^{+} \rightarrow \alpha (W_{2}^{+})$ is Hilbert-Schmidt.
\begin{exercise}\label{ex:EquivalenceRelation}
Prove that $\sim$ defines an equivalence relation on $\mathrm{Pol}^{g}(H)$. (This exercise is challenging.)
\end{exercise}
We also introduce the \emph{restricted orthogonal group}
\begin{equation*}
\operatorname{O}_{2}(H) \defeq \{ u \in \operatorname{O}(H) \mid u(L^{+}) \in \mathrm{Pol}_{2}^{g}(H) \}.
\end{equation*}
Given an operator $u \in \operatorname{O}(H)$, let us write
\begin{equation*}
u = \begin{pmatrix}
a & b \\
c & d
\end{pmatrix},
\end{equation*}
with respect to the decomposition $H_{\mathbb{C}} = L^{+} \oplus L^{-}$.
\begin{lemma}
We have that $u \in \operatorname{O}_{2}(H)$ if and only if $c$ is Hilbert-Schmidt, if and only if $b$ is Hilbert-Schmidt.
\end{lemma}
\begin{proof}
First, we observe that $\alpha(u(L^{+})) = u(L^{-})$.
The set $\{ ue_{i} \}_{i \leqslant -1} = \{ (be_{i},de_{i}) \}_{i \leqslant -1}$ is then an orthonormal basis for $u(L^{-})$.
The orthogonal projection of $e_{j}$ onto $u(L^{-})$, for $j \geqslant 1$, is given by
\begin{equation*}
\sum_{i \leqslant -1} \langle e_{j}, be_{i} \rangle (be_{i},de_{i}).
\end{equation*}
It follows that the square of the Hilbert-Schmidt norm of the orthogonal projection $L^{+} \rightarrow u(L^{-})$ is given by
\begin{align*}
\sum_{j \geqslant 1} \Big\| \sum_{i \leqslant -1} \langle e_{j}, be_{i} \rangle (be_{i},de_{i}) \Big\|^{2} &= \sum_{i \leqslant -1, \, j \geqslant 1} | \langle e_{j}, b e_{i} \rangle |^{2},
\end{align*}
which is simply the square of the Hilbert-Schmidt norm of the operator $b: L^{-} \rightarrow L^{+}$.
It follows that $u \in \operatorname{O}_{2}(H)$ if and only if $b$ is Hilbert-Schmidt.
That $b$ is Hilbert-Schmidt if and only if $c$ is Hilbert-Schmidt follows from the equation $\alpha c = b \alpha$.
\end{proof}
\begin{corollary}
If $u \in \operatorname{O}_{2}(H)$, then $a$ and $d$ are Fredholm.
\end{corollary}
\begin{proof}
It follows from the fact that $uu^{*} = \mathds{1}$ that $aa^{*} + bb^{*} = \mathds{1}$, and from $u^{*}u = \mathds{1}$ that $a^{*}a + c^{*}c = \mathds{1}$.
Because $b$ and $c$ are Hilbert-Schmidt, the operators $bb^{*}$ and $c^{*}c$ are trace-class, thus in particular compact.
It follows that $a$ is Fredholm (with $a^{*}$ a ``parametrix'' for $a$, i.e.~an inverse modulo compact operators); the argument for $d$ is identical.
\end{proof}
Now, suppose that $Z: L^{+} \rightarrow L^{-}$ is a linear operator.
The computation
\begin{equation*}
\langle (v,Zv), \alpha (w, Zw) \rangle = \langle v, \alpha Zw \rangle + \langle Zv, \alpha w \rangle = \langle v, (\alpha Z + Z^{*} \alpha)w \rangle,
\end{equation*}
shows that $\operatorname{graph}(Z)$ is perpendicular to $\alpha (\operatorname{graph}(Z))$ if and only if $\alpha Z \alpha = -Z^{*}$.
\begin{exercise}\label{ex:PolarCondition}
Suppose that $\alpha Z \alpha = -Z^{*}$, and show that if $(x,y) \in H_{\mathbb{C}}$ is perpendicular to both $\operatorname{graph}(Z)$ and $\alpha(\operatorname{graph}(Z))$, then $(x,y) = 0$.
\end{exercise}
It follows from \cref{ex:PolarCondition} that $\operatorname{graph}(Z) \in \mathrm{Pol}^{g}(H)$ if and only if $\alpha Z \alpha = - Z^{*}$.
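In terms of the basis fixed above, the condition admits a concrete matrix description (a routine verification, not carried out in the text, under the convention $Z e_{j} = \sum_{k \geqslant 1} Z_{kj} e_{-k}$): the antilinearity of $\alpha$ turns $\alpha Z \alpha = -Z^{*}$ into
\[ Z_{kj} = -Z_{jk}, \qquad \text{that is,} \qquad Z^{T} = -Z. \]
Thus $\mathrm{Pol}^{g}(H)$ is parametrized by skew-symmetric matrices, in contrast with the symmetric matrices appearing in the symplectic case of \cref{re:Siegel_upper_half}.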
Following \cite[Section 7.1]{pressley_loop_2003}, we equip $\mathrm{Pol}_{2}^{g}(H)$ with the structure of complex manifold.
To be precise, we shall equip $\mathrm{Pol}_{2}^{g}(H)$ with an atlas modeled on the complex Hilbert space
\begin{equation*}
\mc{B}_{2}^{\alpha}(L^{+},L^{-}) = \{ Z \in \mc{B}_{2}(L^{+},L^{-}) \mid \alpha Z \alpha = - Z^{*} \},
\end{equation*}
such that the transition functions are holomorphic.
For $S$ a finite subset of the positive integers let $W_{S}$ be the closed linear span of
\begin{equation*}
\{ e_{k}, e_{-l} \mid k \notin S, \, l \in S \}.
\end{equation*}
We see that $W_{S} \in \mathrm{Pol}_{2}^{g}(H)$.
We shall use the set of finite subsets of the positive integers as index set for our atlas.
\begin{lemma}\label{lem:FredholmZero}
If $W^{+} \in \mathrm{Pol}_{2}^{g}(H)$, then the orthogonal projection $P:W^{+} \rightarrow L^{+}$ is a Fredholm operator, and $\mathrm{coker}(P) = \alpha \ker(P)$.
In particular, the index of $P$ is zero.
\end{lemma}
\begin{proof}
Let $W^{+} \in \mathrm{Pol}_{2}^{g}(H)$, and write $P: W^{+} \rightarrow L^{+}$ for the orthogonal projection.
There exists an operator $u \in \operatorname{O}_{2}(H)$ such that $u(L^{+}) = W^{+}$.
As before, we write
\begin{equation*}
u = \begin{pmatrix}
a & b \\
c & d
\end{pmatrix},
\end{equation*}
where $a: L^{+} \rightarrow L^{+}$ is Fredholm.
Now, we claim that the map
\begin{equation*}
L^{+} \rightarrow W^{+}, x \mapsto u(x),
\end{equation*}
restricts to a bijection from $\ker(a)$ to $\ker(P)$.
Indeed, $x \in \ker(a) \subset L^{+}$ if and only if $u(x) = (0, cx)$.
Similarly, $v = u(x) \in \ker(P)$ if and only if $v = (0,cx)$, i.e.~if and only if $ax = 0$.
It thus follows that $\ker(P)$ is finite-dimensional, because $\ker(a)$ is (as $a$ is Fredholm).
Now if $x \in L^{+}$ is arbitrary, then we have $u(x) = (ax, cx)$, and thus $Pu(x) = ax$.
It follows that the range of $P$ is the range of $a$, which is closed and has finite-dimensional cokernel, because $a$ is Fredholm.
Observe that the kernel of $P$ is given by $W^{+} \cap \alpha(L^{+})$.
We claim that the cokernel of $P$ is $\alpha (W^{+}) \cap L^{+}$.
In other words $\mathrm{Im}(P)^{\perp} = \alpha (W^{+}) \cap L^{+}$.
Let $v \in \alpha(W^{+}) \cap L^{+}$ and $w \in W^{+}$, and denote by $P^{+}: H_{\mathbb{C}} \rightarrow L^{+}$ the orthogonal projection, so that $P^{+}|_{W^{+}} = P$.
We then compute
\begin{equation*}
\langle v, Pw \rangle = \langle v, P^{+}w \rangle = \langle P^{+}v, w \rangle = \langle v,w \rangle = 0.
\end{equation*}
So $v \in \mathrm{Im}(P)^\perp$. Conversely, if $v \in \mathrm{Im}(P)^\perp$, then $0=\langle v, Pw \rangle = \langle v, w \rangle $ for all $w \in W^+$ so $v \in \alpha(W^+)$.
\end{proof}
\begin{corollary}\label{cor:EquivalentFredholmZero}
If $W_{1}^{+} \sim W_{2}^{+}$, then the orthogonal projection $W_{1}^{+} \rightarrow W_{2}^{+}$ is a Fredholm operator of index zero.
\end{corollary}
\begin{proof}
This follows from \cref{lem:FredholmZero} together with the transitivity of the relation $\sim$, see \cref{ex:EquivalenceRelation}.
\end{proof}
\begin{remark}
\Cref{lem:FredholmZero} should be compared to \cref{pr:CompatInequality}. In that case, the projection operator is not just Fredholm, but even invertible.
Moreover, \cref{lem:FredholmZero} should be compared to e.g.~\cite[Prop.~6.2.4]{pressley_loop_2003}; there, any index can occur.
\end{remark}
We define
\begin{equation*}
\mc{O}_{S} \defeq \{ \mathrm{graph}(Z) \in \mathrm{Pol}_{2}^{g}(H) \mid Z \in \mc{B}_{2}^{\alpha}(W_{S},W_{S}^{\perp}) \}.
\end{equation*}
The following result tells us that the sets $\mc{O}_{S}$ cover $\mathrm{Pol}_{2}^{g}(H)$ (see also \cite[Section 4]{borthwick_pfaffian_1992}, and cf.~\cite[Proposition 7.1.6]{pressley_loop_2003}).
\begin{lemma}\label{lem:OpenCoverPolg}
Any element of $\mathrm{Pol}_{2}^{g}(H)$ can be written as the graph of some operator $Z \in \mc{B}_{2}^{\alpha}(W_{S},W_{S}^{\perp})$.
\end{lemma}
\begin{proof}
Let $W^{+} \in \mathrm{Pol}_{2}^{g}(H)$ be arbitrary.
We claim that there exists a finite subset $S$ of the positive integers, such that the projection operator $P_{S}:W^{+} \rightarrow W_{S}$ is an isomorphism.
Once we have found such an $S$, it follows that $W^{+} = \mathrm{graph}(P^{\perp}_{S} P_{S}^{-1})$, where $P_{S}^{\perp}$ is the orthogonal projection of $W^{+}$ onto $W_{S}^{\perp}$ (cf.~\cref{exer:VerifyGraphGivesSubspace}).
The operator $P_{S}^{\perp}P_{S}^{-1}$ is Hilbert-Schmidt, because $P_{S}^{\perp}$ is.
Now, let $E_{\emptyset} \subset W^{+}$ be the kernel of the projection operator $P: W^{+} \rightarrow L^{+} = W_{\emptyset}$.
Let $d < \infty$ be its dimension.
If $d=0$, it follows that $P$ is an isomorphism, because it is Fredholm of index $0$ by \cref{cor:EquivalentFredholmZero}.
So, assume that $d > 0$, and let $0 \neq v \in E_{\emptyset}$ be a unit vector.
Choose a positive integer $l$ such that $\langle v, e_{-l} \rangle \neq 0$.
We now consider the orthogonal projection $P_{\{ l \}}: W^{+} \rightarrow W_{\{l\}}$; and we let $E_{\{ l \}}$ be its kernel.
We claim that $E_{\{ l \} }$ is a strict subset of $E_{\emptyset}$, and thus that the dimension of $E_{\{ l \} }$ is strictly smaller than the dimension of $E_{\emptyset}$.
First, observe that $v \notin E_{\{ l \} }$, while $v \in E_{\emptyset}$.
It thus remains to be shown that $E_{ \{l \} }$ is a subset of $E_{\emptyset}$ in the first place.
Suppose that $x \in E_{\{ l \} }$, and write $x = \sum_{k} \lambda_{k} e_{k}$.
The condition that $x \in E_{\{ l \} }$ then implies that
\begin{equation*}
\lambda_{k} = 0, \quad k \geqslant 1, k \neq l.
\end{equation*}
Now, we have that $v = \sum_{k} \mu_{k} e_{k}$, where $\mu_{k} = 0$ for $k \geqslant 1$, and $\mu_{-l} = \langle v, e_{-l} \rangle \neq 0$.
We have
\begin{equation*}
\alpha(x) = \sum_{k} \overline{\lambda_{-k}} e_{k}.
\end{equation*}
We then use that $\alpha(x) \in W^{-} = (W^{+})^{\perp}$ and compute
\begin{equation*}
0 = \langle \alpha(x), v \rangle = \sum_{k} \overline{\lambda_{-k}} \mu_{k} = \overline{\lambda_{l}} \mu_{-l}.
\end{equation*}
The condition that $\mu_{-l} \neq 0$ then implies that $\lambda_{l} = 0$, whence $x \in E_{\emptyset}$.
By induction on this dimension (using \cref{cor:EquivalentFredholmZero} at each step), we construct a finite subset $S$ of the positive integers such that $P_{S}:W^{+} \rightarrow W_{S}$ is injective.
Because this operator is Fredholm of index zero (again by \cref{cor:EquivalentFredholmZero}), it follows that it is an isomorphism, which completes the proof.
\end{proof}
Fix now finite subsets of the positive integers $S_{1}$ and $S_{2}$.
We determine the ``transition functions'' corresponding to the charts $\mc{B}_{2}(W_{S_{1}},W_{S_{1}}^{\perp}) \leftarrow \mc{O}_{S_{1}} \cap \mc{O}_{S_{2}} \rightarrow \mc{B}_{2}(W_{S_{2}},W_{S_{2}}^{\perp})$, following \cite[Proposition 7.1.2]{pressley_loop_2003}.
Let $a,b,c,d$ be operators such that
\begin{equation*}
\begin{pmatrix}
a & b \\
c & d
\end{pmatrix}: \begin{pmatrix}
W_{S_{1}} \\
W_{S_{1}}^{\perp}
\end{pmatrix} \rightarrow \begin{pmatrix}
W_{S_{2}} \\
W_{S_{2}}^{\perp}
\end{pmatrix}
\end{equation*}
is the identity operator.
A straightforward verification shows that $a$ and $d$ are Fredholm, and that $b$ and $c$ are Hilbert-Schmidt (indeed, even finite-rank).
\begin{proposition} \label{pr:transition_functions_HS_fermion}
The image of $\mc{O}_{S_{1}} \cap \mc{O}_{S_{2}}$ in $\mc{B}_{2}^{\alpha}(W_{S_{1}},W_{S_{1}}^{\perp})$ consists of those operators $Z_{1}$ such that $a+bZ_{1}$ has a bounded inverse.
Moreover, the transition function is given by $Z_{1} \mapsto (c+dZ_{1})(a+bZ_{1})^{-1}$.
\end{proposition}
\begin{proof}
Let $W^{+} \in \mc{O}_{S_{1}} \cap \mc{O}_{S_{2}}$. We then have $\mathrm{graph}(Z_{1}) = W^{+} = \mathrm{graph}(Z_{2})$ for some $Z_{i} \in \mc{B}_{2}^{\alpha}(W_{S_{i}},W_{S_{i}}^{\perp})$.
The orthogonal projection $P_{i}:W^{+} \rightarrow W_{S_{i}}$ is invertible, with inverse $x \mapsto (x,Z_{i}x)$.
This implies that the composition $P_{2}P_{1}^{-1}: W_{S_{1}} \rightarrow W_{S_{2}}$ is invertible.
The calculation
\begin{align*}
P_{2}P_{1}^{-1}x = P_{2}(x, Z_{1}x) = ax + bZ_{1}x
\end{align*}
shows that $P_{2}P_{1}^{-1} = a + bZ_{1}$.
On the other hand, if $Z_{1} \in \mc{B}_{2}^{\alpha}(W_{S_{1}},W_{S_{1}}^{\perp})$ is such that $a+bZ_{1}$ has a bounded inverse, then we consider the operator $(c+dZ_{1})(a+bZ_{1})^{-1}:W_{S_{2}} \rightarrow W_{S_{2}}^{\perp}$.
This operator is Hilbert-Schmidt, because both $c$ and $Z_{1}$ are Hilbert-Schmidt.
Now, let $y \in W_{S_{2}}$ be arbitrary.
Set $x = (a+bZ_{1})^{-1}y \in W_{S_{1}}$.
We then compute
\begin{equation*}
\begin{pmatrix}
y \\
(c+dZ_{1})(a+bZ_{1})^{-1}y
\end{pmatrix} = \begin{pmatrix}
(a+bZ_{1})x \\
(c+dZ_{1})x
\end{pmatrix} = \begin{pmatrix}
a & b \\
c & d
\end{pmatrix} \begin{pmatrix}
x \\
Z_{1}x
\end{pmatrix}.
\end{equation*}
This proves that the graph of $Z_{1}$ is equal to the graph of $(c+dZ_{1})(a+bZ_{1})^{-1}$.
\end{proof}
To prove that the transition functions form a complex atlas, we need an elementary lemma.
\begin{lemma} \label{le:control_inverse}
Let $A$ and $A'$ be bounded operators on a Banach space, and assume that $A$ is invertible.
If $\| A - A' \|<1/ (2 \| A^{-1} \|)$ then $A'$ is invertible and
\[ \|(A')^{-1} \| \leq 2 \| A^{-1} \|. \]
\end{lemma}
\begin{proof}
Since $\| (A'-A) A^{-1} \| <1/2$, the Neumann series provides an inverse for $I + (A'-A)A^{-1}$ and
\[ \left\| \left[ I + (A'- A)A^{-1} \right]^{-1} \right\| \leq \frac{1}{1-\| (A'-A) A^{-1} \|} <2. \]
Thus
\[ (A')^{-1} = A^{-1} \left( I + (A'- A)A^{-1} \right)^{-1} \]
and the claim follows directly.
\end{proof}
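The lemma is easy to test numerically. The following sketch (our example matrices; the operator $2$-norm plays the role of the Banach norm) checks both the invertibility bound and the Neumann-series formula from the proof.

```python
import numpy as np

# Numerical sanity check of the perturbation lemma, in the 2-norm.
A = np.diag([1.0, 0.5])
inv_norm = np.linalg.norm(np.linalg.inv(A), 2)   # ||A^{-1}|| = 2

# Perturbation with ||A - A'|| = 0.2 < 1/(2 ||A^{-1}||) = 0.25:
Aprime = A + np.array([[0.0, 0.2], [0.0, 0.0]])

# A' is invertible and its inverse obeys ||A'^{-1}|| <= 2 ||A^{-1}||.
inv_prime = np.linalg.inv(Aprime)
assert np.linalg.norm(inv_prime, 2) <= 2 * inv_norm

# The Neumann-series formula from the proof agrees with the direct inverse:
N = np.linalg.inv(np.eye(2) + (Aprime - A) @ np.linalg.inv(A))
assert np.allclose(inv_prime, np.linalg.inv(A) @ N)
```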
\begin{theorem} The transition functions in Proposition \ref{pr:transition_functions_HS_fermion} are holomorphic.
\end{theorem}
\begin{proof}
Let $\Psi:Z \mapsto (c+dZ)(a+bZ)^{-1}$ be the transition map in the space of Hilbert-Schmidt operators. It is enough to show that $\Psi$ is G\^ateaux differentiable and locally bounded. Denote the Hilbert-Schmidt norm by $\| \cdot \|_{HS}$ and the usual operator norm by $\| \cdot \|$. Two facts we will use are that $\| AB \|_{HS} \leq \| A \| \| B \|_{HS}$ and $\| A \| \leq \| A \|_{HS}$.
It is easily seen that the map is G\^ateaux holomorphic with G\^ateaux derivative
\[ D \Psi(Z;W) = \left[ dW - (c+d Z) (a+bZ)^{-1} bW \right](a+bZ)^{-1}, \]
which follows from $\frac{d}{dt} A(t)^{-1} = -A^{-1} A'(t) A^{-1}$ applied to $A(t) = a + b(Z+tW)$.
To see that $\Psi$ is locally bounded, fix $Z$ and assume that
\[ \| Z' - Z \|< \frac{1}{2\| b\| \| (a + bZ)^{-1} \|}. \]
By Lemma \ref{le:control_inverse} with $A = a + bZ$, $A'=a+bZ'$ we then have
\[ \left\| \left(a + b Z' \right)^{-1} \right\| \leq 2 \| (a + b Z)^{-1} \|. \]
Thus
\[ \left\| (c + dZ')\left(a+bZ'\right)^{-1} \right\|_{HS} \leq 2\left( \| c \|_{HS} + \| d \| \| Z' \|_{HS} \right) \| (a + b Z)^{-1} \| \]
which proves the claim.
\end{proof}
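The derivative formula can be confirmed by a finite-difference experiment. In the sketch below (random finite-dimensional data of our choosing, with $a$ close to the identity so that $a+bZ$ is safely invertible), the sign of the second term comes from $\frac{d}{dt}A(t)^{-1} = -A^{-1}A'A^{-1}$.

```python
import numpy as np

# Finite-difference check of the Gateaux derivative of
# Psi(Z) = (c + dZ)(a + bZ)^{-1}.
rng = np.random.default_rng(2)
n = 4
a = np.eye(n)
b, c, d = (0.1 * rng.standard_normal((n, n)) for _ in range(3))
Z, W = 0.1 * rng.standard_normal((n, n)), rng.standard_normal((n, n))

def Psi(Z):
    return (c + d @ Z) @ np.linalg.inv(a + b @ Z)

# DPsi(Z;W) = [dW - (c+dZ)(a+bZ)^{-1} bW](a+bZ)^{-1}:
Ainv = np.linalg.inv(a + b @ Z)
DPsi = (d @ W - (c + d @ Z) @ Ainv @ b @ W) @ Ainv

t = 1e-6
numeric = (Psi(Z + t * W) - Psi(Z)) / t
assert np.allclose(numeric, DPsi, atol=1e-4)
```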
\begin{remark}
The Grassmannian $\mathrm{Pol}_{2}^{g}(H)$ is a rich geometric object.
For example, it carries a ``Pfaffian line bundle'' \cite{borthwick_pfaffian_1992}.
\end{remark}
\section{Motivation for the restricted Grassmannian}\label{sec:HilbertSchmidt}
In this section, we give a sketch of the representation-theoretic and physical motivation for the restricted spaces.
Let $H$ be a real Hilbert space, with inner product $g$.
From this data, one can construct a C$^{*}$-algebra, called the \emph{Clifford C$^{*}$-algebra}.
We now give an overview of this construction, together with some aspects of the corresponding representation theory.
A full account of the theory is given in \cite{plymen_spinors_1994} and \cite{jorgensen_bogoliubov_1987}.
The complex Clifford algebra of $H$ is the complex $*$-algebra generated by elements of $H_{\mathbb{C}}$, subject to the condition that $vw+wv = g(v,\alpha w)$ and $v^{*} = \alpha(v)$, for all $v,w \in H_{\mathbb{C}}$.
This algebra can be completed to a C$^{*}$-algebra, which we denote by $\mathbb{C}l(H)$.
For $W^{+} \in \mathrm{Pol}^{g}(H)$ we define $F_{W^{+}}$ to be the Hilbert completion of the exterior algebra of $W^{+}$:
\begin{equation*}
F_{W^{+}} := \overline{\bigoplus_{n \geqslant 0} \wedge^{n} W^{+}}.
\end{equation*}
We define a map $\pi_{W^{+}}: H_{\mathbb{C}} \rightarrow \mc{B}(F_{W^{+}})$ by setting
\begin{equation*}
\pi_{W^{+}}(x) w_{1} \wedge \cdots \wedge w_{n} = x \wedge w_{1} \wedge \cdots \wedge w_{n}
\end{equation*}
for $x,w_{1},\ldots,w_{n} \in W^{+}$, and $\pi_{W^{+}}(y) = \pi_{W^{+}}(\alpha(y))^{*}$ for $y \in W^{-}$.
The map $\pi_{W^{+}}$ defined in this way admits a unique extension to a $*$-homomorphism $\pi_{W^{+}}: \mathbb{C}l(H) \rightarrow \mc{B}(F_{W^{+}})$; this is the Fock representation of $\mathbb{C}l(H)$ corresponding to $W^{+}$ (see \cite[Chapter 2]{plymen_spinors_1994}).
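In the simplest nontrivial case $\dim W^{+} = 2$, the Fock space is the four-dimensional exterior algebra and the representation can be written down as explicit matrices. The sketch below uses a standard Jordan--Wigner construction (the matrices and conventions are ours, not taken from the text) to verify the canonical anticommutation relations, the matrix shadow of the Clifford relation $vw+wv = g(v,\alpha w)$.

```python
import numpy as np

# Fock representation for dim W+ = 2: F = exterior algebra of C^2, dim 4.
# Jordan-Wigner realization of the annihilation operators.
I2 = np.eye(2)
Zs = np.diag([1.0, -1.0])                  # parity ("string") factor
ann = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilates one mode

c1 = np.kron(ann, I2)   # annihilation in mode 1
c2 = np.kron(Zs, ann)   # annihilation in mode 2, with JW string

def anti(x, y):
    return x @ y + y @ x

# Canonical anticommutation relations:
# {c_i, c_j*} = delta_ij * Id,  {c_i, c_j} = 0.
assert np.allclose(anti(c1, c1.conj().T), np.eye(4))
assert np.allclose(anti(c2, c2.conj().T), np.eye(4))
assert np.allclose(anti(c1, c2.conj().T), np.zeros((4, 4)))
assert np.allclose(anti(c1, c2), np.zeros((4, 4)))
```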
It is then natural to attempt to compare the Fock representations corresponding to two elements $W_{1}^{+},W_{2}^{+} \in \mathrm{Pol}^{g}(H)$.
The following result tells us when two such representations should be viewed as equivalent.
\begin{theorem}\label{thm:FermionEquivalence}
There exists a unitary operator $u: F_{W_{1}^{+}} \rightarrow F_{W_{2}^{+}}$ with the property that
\begin{equation*}
\pi_{W_{1}^{+}}(a) = u^{*}\pi_{W_{2}^{+}}(a)u
\end{equation*}
for all $a \in \mathbb{C}l(H)$ if and only if the orthogonal projection $H \rightarrow \alpha(W_{2}^{+})$ restricts to a Hilbert-Schmidt operator $W_{1}^{+} \rightarrow \alpha(W_{2}^{+})$.
\end{theorem}
A proof of this classical result, together with references to the expansive literature on the subject, can be found in \cite[Chapter 3]{plymen_spinors_1994}.
If $H$ is a real Hilbert space with symplectic form $\omega$, then one defines the Heisenberg (Lie) algebra to be $H \times \mathbb{R}$, equipped with the bracket
\begin{equation*}
[(v,t),(w,s)] = (0, \omega(v,w)).
\end{equation*}
Given a positive symplectic polarization $H_{\mathbb{C}} = W^{+} \oplus W^{-}$, one obtains a representation of the Heisenberg algebra on the Hilbert completion of the symmetric algebra of $W^{+}$, similar to the construction above.
However, this representation is by \emph{unbounded} operators. In spite of this, the situation is entirely analogous:
If $H$ is finite-dimensional, then all such representations are unitarily equivalent; this is the Stone-von Neumann theorem.
The equivalence problem in the infinite-dimensional case was settled by D.~Shale \cite{shale_linear_1962}.
Indeed, the Stone-von Neumann theorem is superseded by the result that two positive symplectic polarizations $W_{1}^{\pm}$ and $W_{2}^{\pm}$ give unitarily equivalent representations if and only if they are in the same Hilbert-Schmidt class.
This statement is not quite precise, because we have not explained what a representation by unbounded operators is.
There are several ways to deal with this issue, but this would take us too far afield.
The reader can find the subject treated in for example \cite{lion_weil_1980,ottesen_infinite_1995,habermann_introduction_2006}.
\section{Sewing and \texorpdfstring{$\mathrm{Diff}(\mathbb{S}^1)$}{Diff(S1)}} \label{se:sewing}
In this section, we return to Example \ref{ex:smooth_example_bosonicH}, and give the Grassmannian of polarizations a geometric interpretation in terms of sewing.
This sewing interpretation arises in conformal field theory, see \cite{bleuler_definition_1988,huang_two-dimensional_1995}. This section can be treated as an extended example, one that appears in many contexts.
We first give a summary. Let $\overline{\mathbb{C}}$ denote the Riemann sphere,
\[ \mathbb{D}_+ = \{z \in \mathbb{C} \,: \, |z|<1 \}, \ \ \ \text{and} \ \ \ \mathbb{D}_-= \{ z \in \overline{\mathbb{C}} \, :\, |z|>1 \} \cup \{ \infty \}. \]
Every diffeomorphism $\varphi \in \mathrm{Diff}(\mathbb{S}^1)$ gives rise to a conformal map $f$ from the unit disk into the sphere, obtained by sewing $\mathbb{D}_-$ to $\mathbb{D}_+$ using $\varphi$ to identify points on their boundaries. The image of $f$ is bounded by a smooth Jordan curve representing the seam generated by the choice of $\varphi$. If $\varphi$ is the identity map, the seam is just the unit circle in the Riemann sphere.
Now let $\Sigma$ be the complement of the closure of the image of $f$. Recall that $\varphi$ induces a composition operator $\mathcal{C}_\varphi$ on $H^{\mathrm{b}}$, which is a bounded symplectomorphism, and that $W_\varphi = \mathcal{C}_\varphi W_+ \in \mathrm{Pol}^\omega(H^{\mathrm{b}})$. We then have that $W_\varphi$ can be interpreted as the pull-back by $f$ of the set of boundary values of holomorphic functions on $\Sigma$.
To fill in this summary, we need a result known as conformal welding. It originated in quasiconformal Teichm\"uller theory in the 1960s (see \cite{lehto_univalent_1987} and references therein), and independently in other contexts including conformal field theory. We give the description in the smooth case because it allows a simpler presentation in terms of more well-known theorems.
The conformal welding theorem (in the smooth case) says the following.
\begin{theorem}[smooth conformal welding] Let $\varphi \in \mathrm{Diff}(\mathbb{S}^1)$. There are holomorphic one-to-one functions $f:\mathbb{D}_+ \rightarrow \overline{\mathbb{C}}$, $g: \mathbb{D}_- \rightarrow \overline{\mathbb{C}}$, which extend smoothly and bijectively to $\mathbb{S}^1$, such that $f(\mathbb{S}^1)=g(\mathbb{S}^1)$ and $\varphi = \left. g^{-1} \circ f \right|_{\mathbb{S}^1}$.
These are unique up to post-composition by a M\"obius transformation; that is, any other such pair of maps is given by $T \circ f$, $T \circ g$.
\end{theorem}
\begin{proof}
We give a sketch of a proof. Given $\varphi \in \mathrm{Diff}(\mathbb{S}^1)$, we treat it as a parametrization of the boundary of the Riemann surface $\mathbb{D}_-$. Sew on $\mathbb{D}_+$ by identifying points on $\partial \mathbb{D}_+$ and $\partial \mathbb{D}_-$ under $\varphi$. One then obtains a topological sphere $S = \mathrm{cl} \,\mathbb{D}_+ \sqcup \mathrm{cl} \, \mathbb{D}_-/\sim$, where $p \in \partial \mathbb{D}_+$ is equivalent to $q \in \partial \mathbb{D}_-$ if and only if $q = \varphi(p)$. $S$ can be given a unique complex manifold structure compatible with those on $\mathbb{D}_+$ and $\mathbb{D}_-$. By the uniformization theorem, there is a biholomorphism $\Psi:S \rightarrow \overline{\mathbb{C}}$. Set
\[ f= \left. \Psi \right|_{\mathbb{D}_+}, \ \ \ g = \left. \Psi \right|_{\mathbb{D}_-}. \]
It follows from continuity of $\Psi$ together with the definition of the equivalence relation $\sim$ that $\varphi = g^{-1} \circ f$.
Uniqueness can be obtained from the fact that any biholomorphism of the Riemann sphere is a M\"obius transformation.
\end{proof}
\begin{remark}
In fact, the original version in quasiconformal Teichm\"uller theory was valid more generally for quasisymmetries of $\mathbb{S}^1$ (see Remark \ref{re:universal_Teich_space}). This is the theorem usually referred to as the conformal welding theorem.
\end{remark}
The proof of the smooth conformal welding theorem shows that in general, given a disk $\mathbb{D}_-$ whose boundary is equipped with a parametrization $\varphi \in \mathrm{Diff}(\mathbb{S}^1)$, one can sew on a disk $\mathbb{D}_+$. The resulting Riemann surface is the Riemann sphere, but the seam is now a smooth Jordan curve $\Gamma$. Let $\Sigma = \Psi(\mathbb{D}_-)$ denote the copy of $\mathbb{D}_-$ in $\overline{\mathbb{C}}$; the parametrization $\varphi$ is now represented equivalently by the boundary values of $f$, which is a smooth function taking $\mathbb{S}^1$ to $\Gamma=f(\mathbb{S}^1)=g(\mathbb{S}^1)$.
Without loss of generality, assume that $\infty \in \Sigma$. Let
\[ \mathcal{D}_\infty(\Sigma) = \left\{ h:\Sigma \rightarrow \mathbb{C} \text{ holomorphic } \,:\, \iint_{\Sigma} |h'|^2 <\infty, \ h(\infty)=0 \right\} \]
be the Dirichlet space. Given $h \in \mathcal{D}_\infty(\Sigma)$, if we assume that $h$ has a smooth extension to the boundary $\partial \Sigma$, then $h \circ f$ is a smooth function on $\mathbb{S}^1$, and in particular is in $H^{\mathrm{b}}_{\mathbb{C}}$. In fact, it can be shown that $h$ extends to the boundary and $h \circ f$ makes sense for any $h \in \mathcal{D}_\infty(\Sigma)$, and furthermore $h \circ f -h(f(0))$ is in $H^{\mathrm{b}}_{\mathbb{C}}$ \cite{schippers_analysis_2022}.
(In fact this holds for any quasisymmetry $\varphi$, cf.~Remark \ref{re:universal_Teich_space}.) We then have
\begin{theorem} For $\varphi \in \mathrm{Diff}(\mathbb{S}^1)$, if $W_\varphi = \mathcal{C}_\varphi L^+$, then
\begin{equation*}
W_\varphi = f^\star \mathcal{D}_\infty(\Sigma) := \{ h \circ f -h(f(0)) \,:\, h \in \mathcal{D}_\infty(\Sigma) \}.
\end{equation*}
\end{theorem}
More generally, in his sketch of a definition of conformal field theory, Segal \cite{bleuler_definition_1988} considered the category whose objects are Riemann surfaces of genus $g$ with $n$ closed boundary curves endowed with boundary parametrizations $(\varphi_1,\ldots,\varphi_n)$. After sewing on copies of the disk, one obtains a compact surface $\mathscr{R}$, holomorphic maps $f=(f_1,\ldots,f_n)$ representing the parametrizations, and $\Sigma$ can be identified with the complement of the closures of the images $f_1(\mathbb{D}_+),\ldots,f_n(\mathbb{D}_+)$. The sets $f^*\mathcal{D}(\Sigma)$ play a role in the construction; the boundary parametrizations are a means to obtain Fourier series from the boundary values of elements of $\mathcal{D}(\Sigma)$, and the induced polarizations represent the positive and negative Fourier modes.
The moduli space $\widetilde{\mathcal{M}}(g,n)$ of surfaces with boundary is defined as follows.
Two elements $(\Sigma,\varphi_1,\ldots,\varphi_n)$, $(\Sigma',\varphi_1',\ldots,\varphi_n')$ are equivalent if there is a conformal map $F:\Sigma \rightarrow \Sigma'$ such that $\varphi_k' = F \circ \varphi_k$ for $k=1,\ldots,n$.
If the parametrizations are taken to be smooth in the definition of $\widetilde{\mathcal{M}}(g,n)$, then since all type $(0,1)$ surfaces are equivalent to $\mathbb{D}_-$ we can identify
\[ \widetilde{\mathcal{M}}(0,1) \cong \mathrm{Diff}(\mathbb{S}^1) /\text{M\"ob}(\mathbb{S}^1). \]
The moduli space $\widetilde{\mathcal{M}}(0,1)$, as well as the universal Teichm\"uller space, can be embedded in the Grassmannian of polarizations $\mathrm{Pol}^\omega(H^{\mathrm{b}})$.
To see this, recall from Remark \ref{re:universal_Teich_space} that the stabilizer of $L^+$ in $\mathrm{Diff}(\mathbb{S}^1)$ is the set $\text{M\"ob}(\mathbb{S}^1)$ of M\"obius transformations preserving $\mathbb{S}^1$. This gives the following.
\begin{corollary}
We have a well-defined injective map
\begin{align*}
\widetilde{\mathcal{M}}(0,1) & \rightarrow \mathrm{Pol}^{\omega}(H^{\mathrm{b}})\\
[(\mathbb{D}_-,\varphi)] & \mapsto W_\varphi.
\end{align*}
\end{corollary}
The universal Teichm\"uller space $T(0,1)$ is a moduli space containing the Teichm\"uller spaces of all Riemann surfaces covered by the disk.
It is classically known to be modelled by $\text{QS}(\mathbb{S}^1)/\text{M\"ob}(\mathbb{S}^1)$. The embedding in Remark \ref{re:universal_Teich_space} is as follows.
\begin{theorem}
There is a well-defined injective map \cite{nag_teichmuller_1995} from $T(0,1)=\text{QS}(\mathbb{S}^1)/\text{M\"ob}(\mathbb{S}^1)$ to $\mathrm{Pol}^\omega(H^{\mathrm{b}})$:
\begin{align*}
T(0,1) & \rightarrow \mathrm{Pol}^{\omega}(H^{\mathrm{b}})\\
[\varphi] & \mapsto W_\varphi.
\end{align*}
This map is holomorphic with respect to the classical Banach manifold structures on $T(0,1)$ and $\mathcal{D}(H^{\mathrm{b}})$ \cite[Theorem B1]{takhtajan_weil-petersson_2006}.
\end{theorem}
\begin{remark}
If one uses quasisymmetric parametrizations in the definition of Segal's moduli space $\widetilde{\mathcal{M}}(0,1)$, one sees that it can be identified naturally with the universal Teichm\"uller space $T(0,1)$.
The desirability of this extension to quasisymmetries is strongly motivated by the result of Nag-Sullivan/Vodop'yanov that the quasisymmetries produce precisely the composition operators which are bounded symplectomorphisms (see Remark \ref{re:universal_Teich_space}). It is remarkable that the link between these two spaces, a geometric fact, is established by an analytic condition. This analytic condition is in turn motivated by an algebraic requirement; namely, the requirement that the representation of quasisymmetries be bounded. Interestingly, this analytic condition was in place for independent reasons in Teichm\"uller theory decades before the result of Nag-Sullivan/Vodop'yanov.
\end{remark}
\begin{remark}
The association between the Teichm\"uller space and the moduli space of Segal holds for arbitrary surfaces of genus $g$ with $n$ boundary curves. In general, the Segal moduli space $\widetilde{M}(g,n)$, if extended to allow quasisymmetric parametrizations, is a quotient of the Teichm\"uller space $T(g,n)$ by a finite-dimensional modular group \cite{radnell_quasisymmetric_2006}. (This modular group is trivial in the case $(g,n)=(0,1)$).
\end{remark}
\begin{remark}
The extension of the symplectic action of $\mathrm{Diff}(\mathbb{S}^1)$ to quasisymmetries (resulting in the embedding of universal Teichm\"uller space) was recognized by Nag and Sullivan \cite{nag_teichmuller_1995}. The problem of determining which $\varphi$ generate elements of the restricted Grassmannian has roots in work of Nag and Verjovsky \cite{nag_rm_1990}, who considered the convergence of a natural generalization of the classical Weil-Petersson pairing on the tangent space to the universal Teichm\"uller space. This eventually gave birth to the Weil-Petersson universal Teichm\"uller space, now widely studied. Takhtajan and Teo \cite{takhtajan_weil-petersson_2006} showed that the embeddings of the universal Teichm\"uller space and the Weil-Petersson Teichm\"uller space into $\mathrm{Pol}^\omega(H^{\mathrm{b}})$ and $\mathrm{Pol}_2^\omega(H^{\mathrm{b}})$ are holomorphic. The interpretation in terms of pulling back boundary values of functions in the Dirichlet space requires some justification \cite{schippers_analysis_2022}.
For the induced representation on symmetric Fock space, see \cite{segal_unitary_1981} in the smooth case. This was extended to the Weil-Petersson class universal Teichm\"uller space by A.~Sergeev \cite{sergeev_lectures_2014}. A quantization procedure on the classical universal Teichm\"uller space is also described there and in the references therein.
\end{remark}
\section{Solutions} \label{se:solutions}
\noindent {\bf{Exercise} \ref{exer:one_two_then_J_preserves}}. By definition, we have
\[ g(Jv,Jw)= \omega(Jv,J^2w) = \omega(w,Jv) =g(v,w) \]
and similarly using Proposition \ref{pr:three_properties_compatible} part 2,
\[ \omega(Jv,Jw) = g(-v,Jw) = -\omega(v,J^2w) = \omega(v,w). \]
\noindent
{\bf{Exercise} \ref{exer:two_of_three_gives_compat}}.
\begin{enumerate}[label=\alph*)]
\item A symplectic form that makes the triple $(g,J,\omega)$ compatible, if it exists, must be given by the equation $\omega(v,w) = g(Jv,w)$.
It is clear that the induced map $\varphi_{\omega}$ is invertible, because $\varphi_{\omega} = \varphi_{g} J$.
One sees that $\omega$ is anti-symmetric if and only if $J$ is skew-adjoint with respect to $g$ by the computation $\omega(v,w) = g(Jv,w) = -g(v,Jw) = -g(Jw,v) = -\omega(w,v)$.
\item In this case, the equation $J = \varphi_{g}^{-1} \varphi_{\omega}$ defines a complex structure.
\item We compute $\omega(Jv,w) = -\omega(w,Jv) = -g(w,v) = -g(v,w) = -\omega(v,Jw)$.
\end{enumerate}
\noindent {\bf{Exercise} \ref{exer:example_not_inner}}.
If $(g,J,\omega)$ is a compatible triple, then replacing $J$ by $-J$ will give an example as required.
\noindent {\bf{Exercise} \ref{exer:hodge_well_defined_symplectic}}.
Let $\delta$ be closed and $d\varphi$ be exact. By Stokes' theorem, since $d\delta =0$,
\[ \omega(\delta,d\varphi) = \frac{1}{\pi} \iint_{\mathscr{R}} \delta \wedge d\varphi = - \frac{1}{\pi} \iint_{\mathscr{R}} d (\varphi \delta) =0 \]
where the final equality is because $\mathscr{R}$ has no boundary. Thus given any closed one-forms $\beta,\gamma$, with harmonic one-form representatives $\hat{\beta},\hat{\gamma}$ such that
\[ \beta = \hat{\beta} + d\varphi, \ \ \gamma = \hat{\gamma}+ d\psi, \]
we have $\omega(\hat{\beta},d\psi)=\omega(d\varphi,\hat{\gamma})=\omega(d\varphi,d\psi)=0$, which proves the claim.
\noindent {{\bf Exercise \ref{exer:smooth_symplectic_Hb}}}.
One can work in the complex $L^2(\mathbb{S}^1)$ space.
The Fourier series of $df_2$ is
\[ df_2 = \sum_{n \in \mathbb{Z} \setminus \{0\}} i n b_n e^{i n \theta}. \]
This can be justified (for example) by using the fact that since $df_2$ is smooth, it converges uniformly; therefore, the Fourier series of $f_2$ can be derived from that of $df_2$ by integrating term by term. Since $f_2$ is real,
\[ \omega(\{a_n\},\{b_n\} ) = \frac{1}{2\pi } \int f_1 d\overline{f_2} = \frac{1}{2\pi} \int_0^{2\pi} \left( \sum_{n \in \mathbb{Z} \setminus \{0\}} a_n e^{in \theta} \right) \left( \sum_{m \in \mathbb{Z} \setminus \{0\}} - im \overline{b_m} e^{-im \theta} \right) d\theta. \]
Now using $\overline{b_m}=b_{-m}$ and the fact that $\{ e^{i m \theta} \}_{m \in \mathbb{Z} \setminus \{0 \}}$ is an orthonormal basis proves the claim.
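The identity in this exercise is easy to test numerically. The sketch below (our choice of test functions) compares the integral form $\frac{1}{2\pi}\int f_1\, df_2$ with the mode sum $-i\sum_n n\, a_n b_{-n}$, using the FFT to approximate the Fourier coefficients.

```python
import numpy as np

# omega(f1, f2) = (1/2pi) * integral of f1 df2  versus  -i * sum_n n a_n b_{-n},
# tested on real trigonometric polynomials.
N = 1024
theta = 2 * np.pi * np.arange(N) / N
f1 = np.cos(theta) + 0.5 * np.sin(2 * theta)
f2 = np.sin(theta)

# Integral form: here df2 = cos(theta) dtheta, and the mean over the
# uniform grid approximates (1/2pi) times the integral.
omega_int = np.mean(f1 * np.cos(theta))

# Mode sum, with Fourier coefficients a_n ~ fft(f)[n] / N:
A, B = np.fft.fft(f1) / N, np.fft.fft(f2) / N
omega_sum = sum(-1j * n * A[n % N] * B[-n % N] for n in range(-4, 5) if n)

assert np.isclose(omega_int, 0.5)
assert np.isclose(omega_sum.real, 0.5) and abs(omega_sum.imag) < 1e-12
```

For these test functions both expressions evaluate to $1/2$, since $\frac{1}{2\pi}\int_0^{2\pi}\cos^2\theta\,d\theta = 1/2$ and the $\sin 2\theta$ term integrates to zero against $\cos\theta$.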
\noindent {\bf{Exercise} \ref{exer:standard}}.
By Proposition \ref{pr:three_properties_compatible} we have $\omega(v,w)=g(v,Jw)$. Thus
\begin{align*}
\omega(x_k,x_l) & = g(x_k,Jx_l) = g(x_k,y_l)=0 \\
\omega(y_k,y_l) & = g(y_k,J y_l) = - g(y_k,x_l) =0 \\
\omega(x_k,y_l) & =g(x_k,Jy_l) = -g(x_k,x_l) = - \delta_{kl}
\end{align*}
where $\delta_{kl}$ is the Kronecker delta function.
\noindent {\bf{Exercise} \ref{exer:two_models_complexify}}.
We first show that $\Psi$ is complex linear:
\[ \Psi(Jv)=\frac{1}{\sqrt{2}} \left(Jv - i J^2 v \right) = \frac{1}{\sqrt{2}} i \left( v-iJv \right) = i \Psi(v). \]
In other words $\Psi$ is complex linear with respect to the complex structure $H_J$.
Clearly $\Psi(v)=0$ implies $v=0$. Dimension counting implies that it is surjective.
\noindent {\bf{Exercise} \ref{ex:Hermitian}}.
By Exercise \ref{exer:one_two_then_J_preserves} $g(Jv,Jw)=g(v,w)$ and $\omega(Jv,Jw)=\omega(v,w)$ for all $v,w$. We compute
\begin{align*}
g_+(\Psi(v),\Psi(w)) & = \frac{1}{2} g_+(v-iJv,w-iJw) \\
& = \frac{1}{2} \left( g(v,w) + g(iJv,iJw) \right) - \frac{1}{2} \left( g(v,iJw) + g(iJv,w) \right)\\
& = \frac{1}{2} \left( g(v,w) + g(Jv,Jw) \right) + \frac{1}{2} \left( i g(v,Jw) -i g(Jv,w) \right)\\
& = g(v,w) - i g(Jv,w) = g(v,w) - i \omega(v,w) \\
& = \langle v,w \rangle.
\end{align*}
\noindent {\bf{Exercise} \ref{exer:standard_polarization_Hb}}.
The expression for $\omega$ is just the (implicit) complex linear extension of (\ref{eq:omega_in_Hb}). Similarly, the expression for $g$ is just the sesquilinear extension of (\ref{eq:g_in_Hb}). We have
\[ J \{ b_m \} = \{ \hat{b}_m \} \]
where
\[ \hat{b}_m = \left\{ \begin{array}{rl} -i \, \overline{b_{-m}} & m >0 \\ i \, \overline{b_{-m}} & m < 0. \end{array} \right. \]
So we compute
\begin{align*}
\omega(\{a_n\},J \alpha \{b_n\}) & = -i \sum_{n=-\infty}^\infty n a_n \hat{b}_{-n} \\
& = -i \sum_{n=1}^\infty n a_n (i \overline{b_n}) -i \sum_{n=-\infty}^{-1} -i n a_n \overline{b_n} \\
& = g(\{a_n\},\{b_n \}).
\end{align*}
The fact that $L^+ \oplus L^-$ is a direct sum decomposition follows from the definition of $H^{\mathrm{b}}$.
Assume that $a_n = 0$ for all $n<0$ and $b_n = 0$ for all $n>0$. Then
\[ g(\{a_n\},\{b_n \}) = \sum_{n=-\infty}^\infty n a_n \overline{b_n} = 0. \]
On the other hand, if $a_n=0$ and $b_n = 0$ for all $n<0$,
\[ \omega(\{a_n\},\{b_n\}) = \sum_{n=-\infty}^\infty n a_n b_{-n} =0 \]
and similarly if $a_n=0,b_n=0$ for all $n>0$.
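The compatibility computed above can also be checked numerically on truncated sequences. In the sketch below (coefficients chosen at random; the formula for $J$ is the one displayed earlier, $\hat{b}_m = \mp i\,\overline{b_{-m}}$ for $m \gtrless 0$), we take $g(\{a_n\},\{b_n\})=\sum_n |n|\, a_n \overline{b_n}$ and verify that the pairing of $\{a_n\}$ against the transformed coefficients reproduces $g$.

```python
import numpy as np

# Truncated-sequence check of the computation above.
# Coefficients are stored as dicts n -> coefficient, n in {-K,...,K} \ {0}.
K = 4
rng = np.random.default_rng(1)
modes = [n for n in range(-K, K + 1) if n != 0]
a = {n: rng.standard_normal() + 1j * rng.standard_normal() for n in modes}
b = {n: rng.standard_normal() + 1j * rng.standard_normal() for n in modes}

# hat{b}_m = -i conj(b_{-m}) for m > 0 and +i conj(b_{-m}) for m < 0:
bhat = {m: (-1j if m > 0 else 1j) * np.conj(b[-m]) for m in modes}

omega = sum(-1j * n * a[n] * bhat[-n] for n in modes)   # omega against J-transformed b
g = sum(abs(n) * a[n] * np.conj(b[n]) for n in modes)   # g(a, b) = sum |n| a_n conj(b_n)
assert np.isclose(omega, g)
```

Splitting the sum into $n>0$ and $n<0$ reproduces exactly the two sums in the displayed computation: each term reduces to $|n|\, a_n \overline{b_n}$.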
\noindent {\bf{Exercise} \ref{exer:Kpol_other_half_pos_def}}. Let $v,w \in W^-$, and observe that
\[ i\omega(v,\alpha w)= \overline{-i\omega(\alpha v,w)}. \]
Since $\alpha v,\alpha w \in W^+$, the claim follows from the fact that $g$ is positive-definite on $W^+$.
\noindent {\bf{Exercise} \ref{exer:unitaries_are_same}}.
Assume that $u \in U(H)$. Then for all $v,w \in H$
\[ g_J(uv,uw) = g(uv,uw) - i\omega(uv,uw)=g(v,w)-i\omega(v,w) = g_J(v,w). \]
Conversely, if $g_J(uv,uw) = g_J(v,w)$
for all $v,w \in H$, then $u$ must preserve the real and imaginary parts of $g_J$. So $u \in \operatorname{O}(H) \cap \operatorname{Sp}(H)$.
\noindent {\bf{Exercise} \ref{exer:stabilizer}}.
An operator $u \in \operatorname{Sp}(H)$ preserves $L^{+}$ if and only if it commutes with $J$, which it does if and only if $u \in \operatorname{O}(H)$; whence the stabilizer of $L^{+} \in \mathrm{Pol}^{\omega}(H)$ is $\operatorname{U}(H)$.
Let $W^{+} \in \mathrm{Pol}^{\omega}(H)$ be arbitrary.
Pick an orthonormal basis $\{ e_{i} \}$ for $W^{+}$ and observe that $\{ \alpha e_{i} \}$ is an orthonormal basis for $\alpha(W^{+})$.
Let $u$ be the complex-linear extension of the map that sends $l_{i}$ to $e_{i}$ and $\alpha l_{i}$ to $\alpha e_{i}$.
It is straightforward to see that $u \in \operatorname{Sp}(H)$, and that $u(L^{+}) = W^{+}$.
\noindent {\bf{Exercise}
\ref{exer:VerifyGraphGivesSubspace}}.
Let $x \in W^{+}$ be arbitrary.
Set $y = P^{+}x$.
We then have $x = (P^{+}x, P^{-}x) = (y, P^{-} (P^{+}|_{W^{+}})^{-1}y)$, thus $x \in \mathrm{graph}(Z)$.
On the other hand, let $y \in L^{+}$ be arbitrary, and set $x = (P^{+}|_{W^{+}})^{-1}y$.
We then have $(y,Zy) = (P^{+}x,ZP^{+}x) = (P^{+}x,P^{-}x) = x$.
\noindent {\bf{Exercise} \ref{exer:holomorphic_one_forms}}.
The one-forms $\eta_k$ were assumed to be harmonic, so $\beta_k$ are all harmonic. By definition of $Z$, for each $k=1,\ldots,\mathfrak{g}$ there is a holomorphic one-form whose periods agree with those of $\beta_k$. By uniqueness of the harmonic representative $\beta_k$ is holomorphic.
It remains to be shown that $\beta_k$ are linearly independent. But if
\[ \sum_{k=1}^{\mathfrak{g}} \lambda_k \beta_k =0, \]
then integrating over each of the curves $a_k$, using the definition of $\eta_k$ we obtain that $\lambda_k=0$ for $k=1,\ldots,\mathfrak{g}$.
\noindent {\bf{Exercise} \ref{exer:sympletic_inverse}}. Since $u$ is invertible, we need only show that the expression in equation (\ref{eq:symp_inverse_formula}) is a left inverse for $u$. This follows immediately from equation (\ref{eq:symplectomorphism_identities}).
\noindent {\bf{Exercise} \ref{exer:complete_Hilbert_Schmidt_char}}.
We compute using Proposition \ref{le:top_left_invertible_symplecto} that
\[ u^{-1} J u - J = i \left( \begin{array}{cc} a^* a + b^* b -\mathds{1} & a^* \alpha b \alpha + b^* \alpha a \alpha \\ - \alpha b^* \alpha a - \alpha a^* \alpha b & - \alpha ( b^* b + a^* a -\mathds{1} ) \alpha \end{array} \right). \]
By equation (\ref{eq:symplectomorphism_identities}) we have
\[ a^{*}\alpha b \alpha = b^{*} \alpha a \alpha \ \ \text{and} \ \ a^*a - \mathds{1} = b^*b. \]
Inserting these in the above proves the claim.
\noindent {\bf{Exercise} \ref{ex:EquivalenceRelation}}.
The orthogonal projection $W_{1}^{+} \rightarrow \alpha(W_{1}^{+})$ is the zero map, thus $\sim$ is reflexive.
Suppose that $W_{1}^{+} \sim W_{2}^{+}$.
Denote by $P_{i}^{\pm}$ the orthogonal projection $H \rightarrow W_{i}^{\pm}$.
We have that $\iota_{i}^{\pm} = (P_{i}^{\pm})^{*}: W_{i}^{\pm} \rightarrow H$ is the inclusion.
This means that the orthogonal projection $W_{1}^{+} \rightarrow \alpha(W_{2}^{+})$ factors as $P_{2}^{-} \iota_{1}^{+}$; by assumption, this operator is Hilbert-Schmidt, thus so is its adjoint $(P_{2}^{-} \iota_{1}^{+})^{*} = P_{1}^{+} \iota_{2}^{-}$.
Conjugating this operator by $\alpha$, we obtain another Hilbert-Schmidt operator
\begin{equation*}
\alpha P_{1}^{+} \iota_{2}^{-} \alpha = \alpha P_{1}^{+} \alpha^{2} \iota_{2}^{-} \alpha = P_{1}^{-} \iota_{2}^{+},
\end{equation*}
but the expression on the right-hand side is nothing but the orthogonal projection $W_{2}^{+} \rightarrow \alpha(W_{1}^{+})$, so $W_{2}^{+} \sim W_{1}^{+}$, i.e.~the relation is symmetric.
Now, suppose that $W_{1}^{+} \sim W_{2}^{+}$ and $W_{2}^{+} \sim W_{3}^{+}$.
This implies that the operators $P_{2}^{-}\iota_{1}^{+}$ and $P_{3}^{-}\iota_{2}^{+}$ are Hilbert-Schmidt.
We have
\begin{equation*}
P_{3}^{-}\iota_{1}^{+} = P_{3}^{-}(\iota_{2}^{+}P_{2}^{+} + \iota_{2}^{-}P_{2}^{-})\iota_{1}^{+} = P_{3}^{-} \iota_{2}^{+} P_{2}^{+} \iota_{1}^{+} + P_{3}^{-} \iota_{2}^{-} P_{2}^{-} \iota_{1}^{+}.
\end{equation*}
It follows that $P_{3}^{-} \iota_{1}^{+}$ is Hilbert-Schmidt, and we are done.
\noindent {\bf{Exercise} \ref{ex:PolarCondition}}.
Indeed, if $(x,y)$ is perpendicular to both $\operatorname{graph}(Z)$ and $\alpha(\operatorname{graph}(Z))$ we have
\begin{align*}
0&= \langle (x,y), (v, Zv) \rangle + \langle (x,y),(\alpha Zw,\alpha w) \rangle = \langle (x,y), (v, Zv) \rangle + \langle (x,y),(-Z^{*} \alpha w,\alpha w) \rangle \\
&= \langle x, v - Z^{*} \alpha w \rangle + \langle y, Zv + \alpha w \rangle
\end{align*}
for all $v, w \in L^{+}$.
By setting $v=x$ and $\alpha w = -Zx$ we obtain
\begin{equation*}
0 = \langle x, x + Z^{*} Z x \rangle = \| x \|^{2} + \| Z x \|^{2},
\end{equation*}
which implies that $x=0$.
Similarly, one shows that $y=0$.
\small
\end{document} |
\begin{document}
\footnote[0]{Preliminary results were presented as paper IAC-19,C1,9,3,x49975 at the 70th International Astronautical Congress, Washington, D.C., USA, 21-25 October 2019.}
\title{Robust Bang-Off-Bang Low-Thrust Guidance Using Model Predictive Static Programming}
Model Predictive Static Programming (MPSP) has previously been used under the assumption of continuous control, which prevents its direct application to problems with bang-off-bang control. In this paper, MPSP is employed for the first time as a guidance scheme for low-thrust transfers with bang-off-bang control, where the fuel-optimal trajectory is used as the nominal solution. In our method, the dynamical equations in Cartesian coordinates are augmented by the mass costate equation, while the unconstrained velocity costate vector is used as the control variable and is expressed as a combination of Fourier basis functions with corresponding weights. A two-loop MPSP algorithm is designed in which the weights and the initial mass costate are updated in the inner loop, and continuation is conducted in the outer loop in case of large perturbations. The sensitivity matrix (SM) is recursively calculated using analytical derivatives, and the SM at switching points is compensated based on the calculus of variations. A sample interplanetary CubeSat mission to an asteroid is used as a case study to illustrate the effectiveness of the developed method.
\section{Introduction}\label{sec:intro}
In recent decades, highly efficient propulsion systems, such as electric propulsion and solar sails, have made low-thrust engines an attractive option for ambitious space missions. Extensive work has focused on high-fidelity modeling and open-loop optimal low-thrust trajectory design, solved by direct or indirect methods~\cite{tang1995,Betts2010,zhang2015,Haberkorn2004}. However, in real-world applications, disturbances such as solar radiation pressure, irregular gravitational fields, outgassing, and unmodeled accelerations deviate the spacecraft from the nominal trajectory, which requires the control profile to be updated. The commonly used strategy is to uplink control commands from the ground, which demands massive off-line computation and frequent communication between the ground station and the spacecraft. Given the rapid proliferation of space probes, this strategy hardly meets the ever-increasing demand for autonomy~\cite{quadrelli2015}.
In the literature, a number of guidance laws have been proposed for low-thrust orbit transfer problems. Edelbaum~\cite{edelbaum1961propulsion} employed a constant-thrust steering law for quasi-circular orbits. Casalino and Colasurdo~\cite{casalino2007improved} improved Edelbaum's method by considering variable specific impulse and thrust magnitude at constant power level. Kluever~\cite{kluever1998simple} designed a simple guidance scheme that blends individual control laws, each of which maximizes the time rate of change of a desired orbital element. Petropoulos~\cite{petropoulos2003simple} presented the so-called Q-law for low-thrust orbit transfers, where the proximity quotient Q serves as a candidate Lyapunov function. Hernandez and Akella~\cite{hernandez2014lyapunov,hernandez2016energy} designed Lyapunov control methods for low-thrust orbit transfers using the Levi-Civita and Kustaanheimo-Stiefel transformations.
Nonlinear optimal control theory is attractive for guidance and control design since it can handle constraints while optimizing a given performance index, whereas the previously mentioned methods are not based on it. Several methods have been designed to track a nominal solution computed offline. Neighboring optimal control (NOC) calculates the feedback control by optimizing a second-order performance index; the gain matrices are computed and stored at each time instant off-line and extracted by interpolation on-line~\cite{BrysonHo-807}. Pontani et al.~\cite{Pontani2015a} proposed variable-time-domain neighboring optimal guidance (VTD-NOG), which avoids the numerical difficulty caused by the singularity of the gain matrices at the terminal time. Zheng~\cite{chen2017a,chen2018b} proposed a backward sweeping algorithm from a geometric point of view, for low-thrust transfer problems with both fixed and free terminal time. Di Lizia et al.~\cite{Lizia2008application,Lizia2014high} designed high-order NOC laws using high-order Taylor expansions about the nominal trajectory, obtained automatically by differential algebra. In addition, nonlinear model predictive control (NMPC), which employs an iterative, finite-horizon optimization strategy, has been applied to controller design. Based on orbit averaging techniques, Gao~\cite{gao2008low} designed an NMPC scheme to track the mean orbit elements. Huang et al.~\cite{huang2012nonlinear} proposed an NMPC strategy that tracks the nominal trajectory using a differential-transformation-based optimization method. However, a drawback of these tracking approaches is their overdependence on the nominal profile. For example, the bang-bang thrust sequence is assumed to be unchanged under perturbations when using the NOC method~\cite{chen2017a}. These techniques also lack operational flexibility, since the trajectory is restricted to the vicinity of the nominal solution.
To overcome these drawbacks, algorithms that enable the spacecraft to re-compute the entire nominal trajectory on-line at the beginning of each guidance interval are attractive, and several attempts have been made to design such efficient algorithms. Wang~\cite{wang2018minimum} proposed using convex programming to compute fuel-optimal spacecraft trajectories. Pesch~\cite{pesch1989a,pesch1989b} used a multiple shooting method to re-compute trajectories for general optimal control problems.
In this work, Model Predictive Static Programming (MPSP), an optimal control design technique that combines the philosophy of model predictive control and approximate dynamic programming~\cite{padhi2008}, is used to design a low-thrust neighboring control law. The method has three key features~\cite{maity2014}. Firstly, it converts a dynamic programming problem into a static programming problem, and thus requires only a static costate vector for the update of the control profile. Secondly, the symbolic costate vector enables a closed-form solution, which reduces the computational load. Thirdly, the sensitivity matrix (SM), needed to compute the static costate vector, can be computed recursively. These advantages have promoted wide application of the MPSP technique, e.g., in terminal guidance~\cite{oza2012}, reentry guidance~\cite{halbe2013}, and lunar landing guidance~\cite{zhang2016}. Several variants of the MPSP technique have also been proposed to enhance its performance. For example, generalized MPSP~\cite{maity2014} formulates the problem in a continuous-time framework, which requires no discretization to begin with. Quasi-spectral MPSP~\cite{mondal2017} expresses the control profile as a weighted sum of basis functions, so that only a set of coefficients is optimized instead of the control variable at every grid point. However, most works assume a continuous control profile, which impedes the application of MPSP to low-thrust transfer missions with bang-off-bang control.
Considering that the MPSP technique is inherently a Newton-type method that requires a good initial guess~\cite{pan2018}, MPSP is investigated in this work as a potential neighboring control law for low-thrust transfer problems. Firstly, the fuel-optimal low-thrust problem is stated in Cartesian coordinates, and the necessary conditions are formulated based on Pontryagin's minimum principle (PMP). The fuel-optimal solution, obtained by an indirect method, is used as the nominal solution. Secondly, inspired by the natural feedback controller given by PMP, the unconstrained costate vector associated with the velocity is used as the new control variable in the MPSP design. To ensure the continuity of the switching function at switching points, the dynamical equations are augmented by the mass costate equation. Thirdly, the SM is computed recursively using analytical derivatives, and the SM at switching points is corrected based on the calculus of variations. Since an SM discontinuity would result in a discontinuous discrete control sequence, the control profile is represented by a combination of Fourier basis functions and corresponding weights, where the weights are initialized from the nominal trajectory by a least-squares method and updated by Newton's method. A two-loop MPSP algorithm structure is designed to handle both small and large perturbations, with Newton's method and continuation implemented in the inner and outer loops, respectively. The presented MPSP technique is thus applied to bang-off-bang control for the first time in the literature, without resorting to an additional optimization solver. Several numerical simulations demonstrate the effectiveness of the proposed method and the resulting gain in mission flexibility.
This paper is structured as follows: Section 2 states the control problem addressed by the MPSP method. Section 3 details the MPSP guidance design. Section 4 presents numerical simulations for a CubeSat mission to an asteroid. Conclusions are given in Section 5.
\section{Problem Statement} \label{sec:statement}
\subsection{Equations of Motion}
This work considers the heliocentric phase of an interplanetary transfer mission. The restricted two-body problem is employed, in which the spacecraft is subject to the gravitational attraction of the Sun. The natural motion of the spacecraft consists of Keplerian orbits around the Sun, governed by the equation of motion~\cite{battin1999introduction}
\begin{equation} \label{eq1}
\ddot{\vect r} + \dfrac{\mu}{r^3}\vect r = \vect 0
\end{equation}
where $\vect r$ is the spacecraft position vector relative to the center of the Sun and $\mu$ is the gravitational parameter. When the low-thrust engine is considered, Eq.\ \eqref{eq1} is modified as
\begin{equation}
\label{eq:dynamical_eqs}
\dot{\vect x} = \vect f(\vect x,\vect \alpha,u)
\Rightarrow
\begin{pmatrix}
\dot{\vect r}\\
\dot{\vect v}\\
\dot m
\end{pmatrix}
=
\begin{pmatrix}
\vect v \\
\vect g(\vect r)+ u \dfrac{T_{\rm max}}{m} \vect \alpha \\
-u\dfrac{T_{\rm max}}{c}
\end{pmatrix}
\end{equation}
where $\vect g(\vect r):=-\mu\vect r/r^3$, $\vect r := [x,y,z]^\top \in \mathbb{R}^3$ and $\vect v := [v_x,v_y,v_z]^\top \in \mathbb{R}^3$ are the gravitational vector field, the spacecraft position vector, and its velocity vector, respectively; $m$ is the spacecraft mass, $T_{\rm max}$ is the maximum thrust magnitude, $c = I_{\rm sp} g_0$ is the exhaust velocity ($I_{\rm sp}$ is the engine specific impulse, $g_0$ is the gravitational acceleration at sea level), $u$ is the thrust throttle factor, and $\vect \alpha$ is the thrust pointing vector. The state vector is $\vect x = \left[\vect r,\vect v,m \right]\in \mathbb{R}^7$. Both $T_{\rm max}$ and $c$ are assumed constant during flight.
\subsection{Fuel-Optimal Problem}
In this work, the fuel-optimal low-thrust trajectory is employed as the reference trajectory. The corresponding performance index is
\begin{equation}
J = \dfrac{T_{\rm max}}{c}\int_{t_0}^{t_f} u \ {\rm d} t
\end{equation}
where $t_0$ and $t_f$ are the initial and terminal time instants, both fixed. The initial state is known, i.e., $\vect x(t_0) = \vect x_0$. For the interplanetary mission to the asteroid, a fixed terminal constraint is imposed:
\begin{equation} \label{eq:terminalconstr}
\vect x (t_f) = \vect x_f
\end{equation}
The inequality constraint for thrust throttle factor $u$ is
\begin{equation}
0 \leq u \leq 1
\end{equation}
The Hamiltonian function reads\ \cite{BrysonHo-807}
\begin{equation}
\label{eq:Hamiltonian}
H = \dfrac{T_{\rm max}}{c}u + \vect\lambda^T_r \vect v + \vect\lambda^T_v\left[\vect g(\vect r) + \dfrac{T_{\rm max}}{m}u \vect\alpha\right] - \lambda_m \dfrac{u{T_{\rm max}}}{c}
\end{equation}
where $\vect\lambda = [\vect\lambda_r, \vect\lambda_v, \lambda_m]$ is the costate vector associated with $\vect x$. Dynamical equations of $\vect\lambda$ are
\begin{equation}
\label{eq:costate_eqs}
\dot{\vect\lambda} = -\left(\dfrac{\partial H}{\partial \vect x}\right)^\top
\Rightarrow
\begin{pmatrix}
\dot{\vect\lambda}_r\\
\dot{\vect\lambda}_v\\
\dot \lambda_m
\end{pmatrix}
=
\begin{pmatrix}
-\vect G^\top \vect\lambda_v \\
-\vect\lambda_r \\
u T_{\rm max}/m^2 \vect\lambda^\top_v \vect\alpha
\end{pmatrix}
\end{equation}
where $\vect G = \partial \vect g(\vect r)/\partial \vect r $. Since the final mass is free, the transversality condition gives
\begin{equation}
\label{eq:lambda_mf}
\lambda_m (t_f) = 0
\end{equation}
According to PMP, the optimal thrust direction is opposite to the primer vector $\vect\lambda_v$:
\begin{equation}
\label{eq:thrust_dirction}
\vect\alpha^* = -\dfrac{\vect\lambda_v}{\lambda_v}, \quad \mbox{if}\quad \lambda_v \not= 0
\end{equation}
Substituting Eq.\ \eqref{eq:thrust_dirction} into Eq.\ \eqref{eq:Hamiltonian} yields
\begin{equation}
\label{eq:hamiltonian_1}
H = \vect\lambda^\top_r \vect v + \vect\lambda^\top_v \vect g(\vect r) + \dfrac{u T_{\rm max}}{c}S
\end{equation}
where the switching function $S$ is defined as
\begin{equation}
\label{eq:switching_func}
S = -\lambda_v\frac{c}{m} - \lambda_m + 1
\end{equation}
The optimal $u^*$ is governed by $S$ through
\begin{equation} \label{eq:uswitch}
u^* =
\begin{cases}
0, & \mbox{if} \quad S > 0 \\
1, & \mbox{if} \quad S < 0
\end{cases}
\end{equation}
which is a bang-off-bang control type, forming the thrust sequence.
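The bang-off-bang law above can be illustrated numerically. The snippet below (a minimal sketch; the exhaust velocity default and the function name are our own, not mission data) evaluates the switching function of Eq.\ \eqref{eq:switching_func} and returns the throttle of Eq.\ \eqref{eq:uswitch} and the direction of Eq.\ \eqref{eq:thrust_dirction}:

```python
import numpy as np

def control_law(lam_v, lam_m, m, c=3000.0):
    # Switching function, Eq. (switching_func): S = -|lam_v| c/m - lam_m + 1
    lam_v_norm = np.linalg.norm(lam_v)
    S = -lam_v_norm * c / m - lam_m + 1.0
    # Bang-off-bang throttle, Eq. (uswitch)
    u = 0.0 if S > 0 else 1.0
    # Optimal thrust direction, Eq. (thrust_dirction): opposite to the primer vector
    alpha = -lam_v / lam_v_norm if lam_v_norm > 0 else np.zeros(3)
    return u, alpha, S
```

Note that the throttle switches as soon as the magnitude of the primer vector crosses the level $m/c$ (for $\lambda_m = 0$), which is the mechanism exploited by the switching detection described later.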
The fuel-optimal problem is solved by an indirect method: one seeks $\vect\lambda_0$ that (together with $\vect x_0$) allows integrating Eqs.\ \eqref{eq:dynamical_eqs} and \eqref{eq:costate_eqs} with the control law in Eqs.\ \eqref{eq:thrust_dirction} and \eqref{eq:uswitch} while satisfying the terminal constraints \eqref{eq:terminalconstr} and \eqref{eq:lambda_mf}~\cite{zhang2015}. Singular thrust arcs are not considered here, since they have been shown to be non-optimal in general~\cite{robbins1965optimality}. Once the optimal $\vect\alpha^*(t)$ and $u^*(t)$ are determined, the spacecraft trajectory is generated by integrating Eqs.\ \eqref{eq:dynamical_eqs} and \eqref{eq:costate_eqs}.
\subsection{MPSP Dynamics and Control}
In real-world flight, disturbances or new mission requirements demand that the spacecraft be able to update the control sequence autonomously. A guidance scheme based on Model Predictive Static Programming (MPSP) is therefore of interest~\cite{padhi2008}. However, the fuel-optimal problem is an optimal control problem with a control constraint, so MPSP cannot be applied to it directly, since MPSP was originally designed for unconstrained problems~\cite{padhi2008}. In this work, augmented dynamics and a new control variable are proposed.
Let $\vect x^*(t)$ and $\vect\lambda^*(t)$ denote the reference state and costate profiles, and let $\vect x(t)$ and $\vect\lambda(t)$ be the associated off-nominal profiles. Letting $\Delta\vect\lambda(t)$ be the costate deviation, the two functions~\cite{chen2017a}
\begin{equation}
\label{eq:feedback_contorl}
\begin{cases}
u(\vect x,\vect\lambda^* + \Delta \vect\lambda) = 1 - \operatorname{Sgn}\left[ S(\vect x,\vect \lambda^* + \Delta\vect\lambda) \right] \\
\vect\alpha (\vect x,\vect\lambda^* + \Delta \vect\lambda) = -\dfrac{\vect\lambda^*_v + \Delta \vect\lambda_v}{ \lVert \vect\lambda^*_v + \Delta\vect\lambda_v \rVert}, \quad \mbox{if} \ \lVert \vect\lambda^*_v + \Delta\vect\lambda_v \rVert \neq 0
\end{cases}
\end{equation}
define the feedback controller associated with $\vect x$ at time instant $t$, where the $\operatorname{Sgn}$ function is defined (consistently with Eq.\ \eqref{eq:uswitch}) as
\begin{equation}
\operatorname{Sgn}(z) =
\begin{cases}
1, & \mbox{if} \quad z > 0 \\
0, & \mbox{if} \quad z < 0
\end{cases}
\end{equation}
In this work, the unconstrained costate vector is used as a new control variable for the MPSP controller design. This idea has also been used in NOC design~\cite{chen2017a} and Lyapunov guidance design~\cite{gao2010optimization}. Notice from Eqs.\ \eqref{eq:switching_func} and \eqref{eq:feedback_contorl} that the costate variables affecting $u$ and $\vect\alpha$ are $\vect\lambda_v$ and $\lambda_m$. However, only $\vect\lambda_v$ is taken as the new control variable, for three reasons. Firstly, as seen from $\dot\lambda_m$ in Eq.\ \eqref{eq:costate_eqs}, $\vect\lambda_v$ and $\lambda_m$ are dependent, and the $\lambda_m$ profile is determined by $\vect\lambda_v$. Secondly, if $\lambda_m$ were also used as a control variable, $\dot{\lambda}_m$ could not be expressed by Eq.\ \eqref{eq:costate_eqs}; the time derivative of the switching function $S$ in Eq.\ \eqref{eq:switching_func},
\begin{equation}
\dot S = -\dot \lambda_v \dfrac{c}{m} - \lambda_v \dfrac{u T_{\rm max}}{m^2} - \dot \lambda_m
\end{equation}
would then hardly be continuous because of the presence of $u$. On the other hand, if $\dot\lambda_m$ is retained as in Eq.\ \eqref{eq:costate_eqs}, $\dot S$ simplifies to
\begin{equation}
\label{eq:time_dot_S}
\dot S = -\dot \lambda_v \dfrac{c}{m}
\end{equation}
which no longer depends explicitly on $u$ and is therefore continuous across switching points. Thirdly, the second time derivative of $\lambda_m$ is discontinuous due to the discontinuity of $u$ in $\dot \lambda_m$; since basis functions are used to approximate the control profile in this work, they may not efficiently capture such a discontinuity~\cite{FurfaroMortari-2065}.
Thus, the dynamical equations used for MPSP algorithm design in this work are
\begin{equation}
\label{eq:arg_eqs}
\dot{\vect X}
= \mathcal{F}(t,\vect X,\vect U)
\Rightarrow
\begin{pmatrix}
\dot{\vect r}\\
\dot{\vect v}\\
\dot m \\
\dot \lambda_m
\end{pmatrix}
=
\begin{pmatrix}
\vect v \\
\vect g(\vect r)- u \dfrac{T_{\rm max}}{m\lambda_v} \vect \lambda_v \\
-u\dfrac{T_{\rm max}}{c}\\
- u T_{\rm max} \lambda_v/m^2
\end{pmatrix}
\end{equation}
where $\vect X = \left[\vect x,\lambda_m \right] \in \mathbb{R}^8$, $\vect U = \vect \lambda_v \in \mathbb{R}^3$, and optimal thrust direction Eq.~\eqref{eq:thrust_dirction} is embedded into Eq.~\eqref{eq:arg_eqs}. The relationships between thrust angles and $\vect U$ are
\begin{equation}
\begin{cases}
\alpha = \arctan\left(\dfrac{\lambda_{v,2}}{\lambda_{v,1}}\right)\\
\beta = \arcsin\left(\dfrac{\lambda_{v,3}}{\|\vect \lambda_v\|_2}\right)
\end{cases}
\end{equation}
where $\alpha \in [0^\circ,360^\circ)$ is the in-plane angle (with quadrant resolution), $\beta \in [-90^\circ,90^\circ]$ is the out-of-plane angle, and $\lambda_{v,i}$ is the $i$th element of $\vect \lambda_v$. Once $\vect X(t)$ and $\vect U(t)$ are determined, the profile of $S$ follows automatically, which in turn determines the switching times and the thrust sequence. In this work, the task of the MPSP algorithm is to determine suitable $\Delta \vect U$ and $\Delta \lambda_{m0}$ such that the spacecraft trajectory obtained by integrating Eq.\ \eqref{eq:arg_eqs} satisfies the required boundary conditions, Eqs.~\eqref{eq:terminalconstr} and \eqref{eq:lambda_mf}, while executing bang-off-bang control.
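A minimal sketch of the augmented right-hand side $\mathcal{F}(t,\vect X,\vect U)$ of Eq.\ \eqref{eq:arg_eqs}, with the feedback throttle embedded, is given below. The gravitational parameter and the engine values are illustrative placeholders, not mission data:

```python
import numpy as np

MU = 1.32712440018e11  # Sun's gravitational parameter, km^3/s^2 (illustrative)

def augmented_rhs(t, X, lam_v, Tmax=5e-4, c=30.0):
    # Augmented state X = [r, v, m, lam_m] as in Eq. (arg_eqs); control U = lam_v
    r, v, m, lam_m = X[:3], X[3:6], X[6], X[7]
    lam_v_norm = np.linalg.norm(lam_v)
    # Feedback throttle from the switching function
    S = -lam_v_norm * c / m - lam_m + 1.0
    u = 0.0 if S > 0 else 1.0
    g = -MU * r / np.linalg.norm(r) ** 3
    # Thrust acceleration opposite to the primer vector
    thrust = -u * Tmax / (m * lam_v_norm) * lam_v if lam_v_norm > 0 else np.zeros(3)
    return np.concatenate(
        [v, g + thrust, [-u * Tmax / c, -u * Tmax / m ** 2 * lam_v_norm]]
    )
```

On a coast arc ($S > 0$) the mass and mass-costate rates vanish, while on a thrust arc the mass flow is $-T_{\rm max}/c$, exactly reproducing the structure of Eq.\ \eqref{eq:arg_eqs}.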
\section{MPSP Algorithm Design} \label{sec:method}
\subsection{Sensitivity Matrix Calculation}
Unlike problems with a continuous control profile, the dynamics here are discontinuous at the switching points, so the trajectory cannot be treated as a whole. In this work, the trajectory is split into multiple segments, with the switching points located at the segment boundaries. The time instants at the segment boundaries are $\{t_0,t_1,\cdots,t_M \}$, where $M$ is the total number of segments, $t_0$ and $t_M = t_f$ are the initial and final times, respectively, and $t_k,\ k=1,2,\cdots,M-1$, are the switching times. Let $\{ t_k^{0_-},t_k^{0_+}, t_k^1,\cdots,t_k^{N_k} \},\ k=0,1,\cdots,M-1$, denote an evenly spaced time grid within $[t_k,t_{k+1}]$, where $t_k^{0_-}$ and $t_k^{0_+}$ are the time instants immediately before and after the switch. For the $k$th segment, $N_k$ is the minimum number of points such that the time step is just below a prescribed maximum time step $h_{max}$. Note also that $t_k^{N_k} = t_{k+1}^{0_-}$. To ease notation, $N_k$ is denoted $N$. Assuming there is no switch at the initial time, $t_0^{0_-} = t_0^{0_+} = t_0$.
Consider the $k$th time interval $[t_k,t_{k+1}]$; the discrete system dynamics and the output can be written as
\begin{equation}
\label{eq:expr_x_k_plus_1}
\vect X_k^{i+1} = \vect F_k^i(\vect X_k^i, \vect U_k^i) \quad \vect Y_k^i = \vect O(\vect X_k^i)
\end{equation}
where $\vect Y_k^i$ is the output at the $i$th step, a function of $\vect X_k^i$. $\vect F_k^i(\vect X_k^i, \vect U_k^i)$ can be obtained using a standard integration formula, such as the Euler method~\cite{padhi2008}. Higher-order integration yields higher accuracy at larger computational cost. In this work, the standard 4th-order Runge--Kutta scheme is used; see the Appendix for the computation of Eq.\ \eqref{eq:expr_x_k_plus_1}.
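The discrete map $\vect F_k^i$ realized by one fixed step of the classical 4th-order Runge--Kutta scheme can be sketched as follows (holding the control constant over the step, an assumption of this sketch):

```python
def rk4_step(f, t, x, u, h):
    # One classical RK4 step of x' = f(t, x, u); u is held constant over the
    # step (zero-order hold), which realizes the discrete map F_k^i.
    k1 = f(t, x, u)
    k2 = f(t + h / 2, x + h / 2 * k1, u)
    k3 = f(t + h / 2, x + h / 2 * k2, u)
    k4 = f(t + h, x + h * k3, u)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Repeatedly applying `rk4_step` over the grid of a segment produces the chain $\vect X_k^{i+1} = \vect F_k^i(\vect X_k^i, \vect U_k^i)$ used in the sensitivity recursion below.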
The primary objective is to obtain an updated control history $\vect U_k^i$ and initial state $\vect X_0^0$ such that the output $\vect Y$ at the terminal time, i.e., $\vect Y^{N}_{M-1}$, reaches the desired value $\vect Y_{d}$. Expanding $\vect Y_{M-1}^N$ in a Taylor series about $\vect Y_{d}$ and neglecting higher-order terms, the terminal output error $\Delta \vect Y_{M-1}^N = \vect Y_{M-1}^N - \vect Y_{d}$ is approximated as
\begin{equation}
\label{eq:terminal_err}
\Delta\vect Y_{M-1}^N \cong {\rm d} \vect Y_{M-1}^N = \left[ \dfrac{\partial \vect Y_{M-1}^N}{\partial \vect X_{M-1}^N} \right] {\rm d} \vect X_{M-1}^N
\end{equation}
The derivation of ${\rm d} \vect X_k^{i+1}$ must distinguish whether or not a switching point occurs in the interval. For the intervals $[t_k^i,t_k^{i+1}],\ i = 1,2,\cdots,N-1$, which contain no switching point, Eq.\ \eqref{eq:expr_x_k_plus_1} gives
\begin{equation}
\label{eq:dx_k_i_plus_1}
{\rm d} \vect X_k^{i+1} = \left[ \dfrac{\partial \vect F_k^i}{\partial \vect X_k^i} \right] {\rm d} \vect X_k^i + \left[ \dfrac{\partial \vect F_k^i}{\partial \vect U_k^i} \right] {\rm d} \vect U_k^i
\end{equation}
where ${\rm d} \vect X_k^i$ and ${\rm d} \vect U_k^i$ are full differentials of state and control vectors.
For the interval $[t_k^{0_-},t_k^{1}]$, which contains a switching point, we have
\begin{equation}
\label{eq:dx_k_1}
\begin{aligned}
{\rm d} \vect X_k^1 & = \dfrac{\partial \vect F_k^{0_+}}{\partial \vect X^{0_+}_k} {\rm d} \vect X^{0_+}_k + \dfrac{\partial \vect F_k^{0_+}}{\partial \vect U^{0_+}_k} {\rm d}\vect U^{0_+}_k \\
& = \dfrac{\partial \vect F_k^{0_+}}{\partial \vect X^{0_+}_k}\dfrac{\partial \vect X^{0_+}_k}{\partial \vect X^{0_-}_k}
{\rm d}\vect X^{0-}_k + \dfrac{\partial \vect F_k^{0_+}}{\partial \vect X^{0_+}_k}\dfrac{\partial \vect X^{0_+}_k}{\partial \vect U^{0_-}_k}
{\rm d}\vect U^{0_-}_k + \dfrac{\partial \vect F_k^{0_+}}{\partial \vect U^{0_+}_k} {\rm d} \vect U^{0_+}_k \\
& =
\dfrac{\partial \vect F^{0}_k}{\partial \vect X^0_k} {\rm d}\vect X^0_k + \dfrac{\partial \vect F^{0}_k}{\partial \vect U^{0}_k} {\rm d}\vect U^{0}_k
\end{aligned}
\end{equation}
where ${\rm d}\vect U^{0}_k = {\rm d} \vect U^{0_-}_k = {\rm d} \vect U^{0_+}_k$ due to thrust angle continuity, ${\rm d} \vect X^0_k = {\rm d} \vect X^{0_-}_k$, and
\begin{equation}
\dfrac{\partial \vect F^{0}_k}{\partial \vect X^0_k} = \dfrac{\partial \vect F_k^{0_+}}{\partial \vect X^{0_+}_k}\dfrac{\partial \vect X^{0_+}_k}{\partial \vect X^{0_-}_k}, \quad
\dfrac{\partial \vect F^{0}_k}{\partial \vect U^{0}_k} = \dfrac{\partial \vect F_k^{0_+}}{\partial \vect X^{0_+}_k}\dfrac{\partial \vect X^{0_+}_k}{\partial \vect U^{0_-}_k} + \dfrac{\partial \vect F_k^{0_+}}{\partial \vect U^{0_+}_k}
\end{equation}
The expressions for ${\partial \vect X^{0_+}_k}/{\partial \vect X^{0_-}_k}$ and ${\partial \vect X^{0_+}_k}/{\partial \vect U^{0_-}_k}$ are~\cite{russell2007primer}
\begin{equation}
\left[ \dfrac{\partial \vect X^{0_+}_k}{\partial \vect X^{0_-}_k} \right] = \vect I + \left( \dot{\vect X}^{0_+}_k - \dot{\vect X}^{0_-}_k \right)S_{\vect X}/\dot{S}
\end{equation}
\begin{equation}
\left[ \dfrac{\partial \vect X^{0_+}_k}{\partial \vect U^{0_-}_k} \right] = \left( \dot{\vect X}^{0_+}_k - \dot{\vect X}^{0_-}_k \right)S_{\vect U}/\dot{S}
\end{equation}
where $S_{\vect X}$ and $S_{\vect U}$ are row vectors containing the partial derivatives of the switching function $S$ w.r.t.\ $\vect X$ and $\vect U$, respectively, and $\dot{S}$ is calculated according to Eq.\ \eqref{eq:time_dot_S}.
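The two switching-point corrections can be assembled directly from the quantities evaluated at the switching time. The helper below is an illustrative sketch (the function name and argument layout are our own):

```python
import numpy as np

def stm_jump(Xdot_minus, Xdot_plus, S_X, S_U, S_dot):
    # dX+/dX- = I + (Xdot+ - Xdot-) S_X / S_dot
    # dX+/dU- =     (Xdot+ - Xdot-) S_U / S_dot
    # S_X, S_U are row vectors; S_dot is the scalar of Eq. (time_dot_S).
    delta = (Xdot_plus - Xdot_minus).reshape(-1, 1)  # jump in the vector field
    dXp_dXm = np.eye(delta.size) + delta @ S_X.reshape(1, -1) / S_dot
    dXp_dUm = delta @ S_U.reshape(1, -1) / S_dot
    return dXp_dXm, dXp_dUm
```

When the vector field happens to be continuous across the switch, the jump term vanishes and the first matrix reduces to the identity, as expected.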
Combining Eq.\ \eqref{eq:dx_k_i_plus_1} with Eq.\ \eqref{eq:dx_k_1} yields the uniform form of ${\rm d} \vect X_k^{i+1}$ as
\begin{equation}
\label{eq:dx_unique_form}
{\rm d} \vect X_k^{i+1} = \left[ \dfrac{\partial \vect F_k^i}{\partial \vect X_k^i} \right] {\rm d} \vect X_k^i + \left[ \dfrac{\partial \vect F_k^i}{\partial \vect U_k^i} \right] {\rm d} \vect U_k^i, \quad i=0,1,\cdots,N-1
\end{equation}
Substituting Eq.\ \eqref{eq:dx_unique_form} into Eq.\ \eqref{eq:terminal_err} yields
\begin{equation}
{\rm d} \vect Y_{M-1}^N = \left[ \dfrac{\partial \vect Y_{M-1}^N}{\partial \vect X_{M-1}^N} \right] \left \{ \left[ \dfrac{\partial \vect F_{M-1}^{N-1}}{\partial \vect X_{M-1}^{N-1}} \right] {\rm d}\vect X_{M-1}^{N-1} + \left[ \dfrac{\partial \vect F_{M-1}^{N-1}}{\partial \vect U_{M-1}^{N-1}} \right] {\rm d} \vect U_{M-1}^{N-1} \right \}
\end{equation}
Similarly, the state differential at time step $(N-1)$ can be expanded in terms of the state and control differentials at time step $(N-2)$; ${\rm d} \vect X_{M-1}^{N-2}$ can in turn be expanded in terms of ${\rm d} \vect X_{M-1}^{N-3}$ and ${\rm d} \vect U_{M-1}^{N-3}$. For the $(M-1)$th segment, this process is continued down to ${\rm d} \vect X_{M-1}^0$. Noting that ${\rm d} \vect X_{M-2}^N = {\rm d} \vect X_{M-1}^0$, the same process is continued in the $(M-2)$th segment. Extending the process back to $\vect X_0^0$, one obtains
\begin{equation}
\label{eqs:dyn_expansion}
\begin{aligned}
{\rm d} \vect Y^N_{M-1} &= \vect A {\rm d}\vect X^0_0 + \vect B_0^0{\rm d}\vect U_0^0 + \vect B_0^1{\rm d}\vect U_0^1 + \cdots + \vect B_{M-1}^{N-1}{\rm d}\vect U_{M-1}^{N-1} \\
& = \vect A{\rm d} \vect X_0^0 + \sum_{k=0}^{M-1} \sum_{i=0}^{N-1} \vect B_k^i{\rm d} \vect U_k^i
\end{aligned}
\end{equation}
where the coefficients $\vect A$ and $\vect B_k^i$ in Eq.\ \eqref{eqs:dyn_expansion} are, in compact form,
\begin{equation}
\label{eq:mpap_coeffs}
\begin{array}{cc}
\vect A = \left[ \dfrac{\partial \vect Y_{M-1}^N}{\partial \vect X_{M-1}^N} \right] \prod\limits_{k=0}^{M-1} \prod\limits_{i=0}^{N-1} \left[ \dfrac{\partial \vect F_k^i}{\partial \vect X_k^i} \right] \\
\vect B^i_k = \left[ \dfrac{\partial \vect Y_{M-1}^N}{\partial \vect X_{M-1}^N} \right] \left\{\prod\limits_{p=M-1}^{k+1} \prod\limits_{q=N-1}^{0}\left[ \dfrac{\partial \vect F_p^q}{\partial \vect X_p^q} \right]\right\}
\left\{\prod\limits_{q=N-1}^{i+1}\left[ \dfrac{\partial \vect F_k^q}{\partial \vect X_k^q} \right]\right\}\dfrac{\partial \vect F_k^i}{\partial \vect U_k^i}
\end{array}
\end{equation}
The presented MPSP formulation is attractive because the computation of the sensitivity matrices $\vect B_k^i$ reduces to an iterative calculation. Define
\begin{equation}
\vect B_{M-1,0}^{N} = \left[ \dfrac{\partial \vect Y_{M-1}^N}{\partial \vect X_{M-1}^N} \right]
\end{equation}
then the recursion reads
\begin{equation}
\vect B^i_{k,0} = \vect B^{i+1}_{k,0} \left[ \dfrac{\partial \vect F^{i+1}_k}{\partial \vect X^{i+1}_k} \right], \quad \vect B^i_k = \vect B^i_{k,0} \left[ \dfrac{\partial \vect F_k^i}{\partial \vect U_k^i} \right]
\end{equation}
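The backward recursion can be sketched for a single segment as follows; the Jacobian lists are assumed to be precomputed at every grid step:

```python
import numpy as np

def sensitivity_matrices(dF_dX, dF_dU, dY_dX_final):
    # Backward recursion for the sensitivity matrices B^i of one segment.
    # B0 plays the role of B^i_{k,0}; the seed is dY/dX at the final node.
    N = len(dF_dX)
    B0 = dY_dX_final
    B = [None] * N
    for i in range(N - 1, -1, -1):
        B[i] = B0 @ dF_dU[i]   # B^i = B^i_{k,0} [dF^i/dU^i]
        B0 = B0 @ dF_dX[i]     # propagate the accumulator one step back
    return B
```

For a scalar linear system $x_{i+1} = a x_i + b u_i$ with output $Y = x_N$, the recursion reproduces the closed form $\partial Y/\partial u_i = a^{N-1-i} b$, which makes a convenient sanity check.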
\subsection{Control Representation and Update}
For applications with a continuous control profile, a discrete control sequence works because the SM is continuous. Here, however, $\vect B_k^i$ is discontinuous across a switching point; if the control profile were discretized, the thrust angle sequence would also be discontinuous, which is meaningless from a physical point of view. In this work, the control profile is therefore expressed in terms of basis functions. The advantages are twofold. Firstly, the continuity of the thrust angle profile is ensured automatically by the continuity of the basis functions. Secondly, the time derivative of the switching function $S$ can be calculated analytically. The control is expressed as
\begin{equation}
\label{eq:CF_1}
\vect U_k^i(\eta) = \vect P_k^i(\eta) \vect\epsilon
\end{equation}
where $\vect\epsilon$ is the vector of weights of the basis functions, and
\begin{equation}
\vect P(\eta) =
\begin{bmatrix}
\vect h^\top(\eta) & &\\
& \vect h^\top(\eta) &\\
& & \vect h^\top(\eta)
\end{bmatrix}
\end{equation}
where $\vect h(\eta)$ collects the basis functions of different orders. The range of $\eta$ is determined by the chosen basis functions, and a linear mapping from time $t$ to $\eta$ is used:
\begin{equation}
\eta = \dfrac{\eta_f-\eta_0}{t_f-t_0}\left(t-t_0\right) + \eta_0
\end{equation}
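As an illustration, a possible choice of $\vect h(\eta)$ is a truncated Fourier basis with $\eta$ mapped linearly to $[0,2\pi]$; the basis, the harmonic count, and the $\eta$ range in this sketch are assumptions, not prescribed by the text:

```python
import numpy as np

def fourier_basis(eta, n_harmonics=2):
    # h(eta): constant term plus sine/cosine pairs (an illustrative choice)
    h = [1.0]
    for k in range(1, n_harmonics + 1):
        h += [np.sin(k * eta), np.cos(k * eta)]
    return np.array(h)

def control_from_weights(t, eps, t0, tf, n_harmonics=2):
    # Linear time mapping eta(t), then U = P(eta) eps with block-diagonal P
    eta = 2.0 * np.pi * (t - t0) / (tf - t0)
    h = fourier_basis(eta, n_harmonics)
    P = np.kron(np.eye(3), h.reshape(1, -1))
    return P @ eps
```

The Kronecker product builds the block-diagonal $\vect P(\eta)$, so each of the three control components carries its own copy of the weight sub-vector.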
From Eq.~\eqref{eq:CF_1}, the update of the control profile is achieved by updating $\vect \epsilon$. The differential of Eq.\ \eqref{eq:CF_1} is
\begin{equation}
\label{eqs:diff_control_profile}
{\rm d}\vect U_k^i = \vect P_k^i(\eta) {\rm d}\vect\epsilon
\end{equation}
Substituting Eq.\ \eqref{eqs:diff_control_profile} into Eq.\ \eqref{eqs:dyn_expansion} yields
\begin{equation}
\label{eqs:dYn_expression_new}
{\rm d}\vect Y_{M-1}^N = \vect A{\rm d}\vect X_0^0 + \sum_{k=0}^{M-1} \sum_{i=0}^{N-1} \vect B_k^i\vect P_k^i{\rm d}\vect\epsilon
\end{equation}
Since the new control variable is not directly linked to the thrust throttle factor $u$, a solution in the neighborhood of the reference solution is preferred. The performance index is therefore set to
\begin{equation}
J = \dfrac{1}{2} {\rm d}\vect\epsilon^\top \vect R_{\epsilon} {\rm d}\vect\epsilon + \dfrac{1}{2} ({\rm d}\vect X_0^0)^\top \vect R_{0} {\rm d}\vect X_0^0
\end{equation}
Denoting $\vect B_v = \sum_{k=0}^{M-1} \sum_{i=0}^{N-1} \vect B_k^i\vect P_k^i$, the augmented performance index reads
\begin{equation}
\hat{J} = \dfrac{1}{2} {\rm d}\vect\epsilon^\top \vect R_{\epsilon} {\rm d}\vect\epsilon + \dfrac{1}{2} ({\rm d} \vect X_0^0)^\top \vect R_{0} {\rm d}\vect X_0^0 + \vect p^\top\left( {\rm d} \vect Y_{M-1}^N - \vect A{\rm d}\vect X_0^0 - \vect B_v {\rm d}\vect\epsilon \right)
\end{equation}
where $\vect p$ is the associated static costate vector. The optimality conditions read
\begin{equation}
\label{eqs:optimal_cond}
\begin{array}{cc}
\left(\dfrac{{\rm d}\hat{J}}{{\rm d}\left({\rm d} \vect \epsilon\right)}\right)^\top &= \vect R_{\epsilon} {\rm d}\vect\epsilon - \vect B_v^\top \vect p = \vect 0 \\
\left(\dfrac{{\rm d}\hat{J}}{{\rm d}({\rm d}\vect X_0^0)}\right)^\top &= \vect R_{0} {\rm d} \vect X_0^0 - \vect A^\top \vect p = \vect 0\\
\end{array}
\end{equation}
Substituting Eq.\ \eqref{eqs:optimal_cond} into Eq.\ \eqref{eqs:dYn_expression_new} yields
\begin{equation}
\label{eqs:p_expression}
\vect p = \left(\vect A \vect R_{0}^{-1} \vect A^\top + \vect B_v \vect R_{\epsilon}^{-1} \vect B_v^\top \right)^{-1}{\rm d}\vect Y_{M-1}^N
\end{equation}
and substituting Eq.\ \eqref{eqs:p_expression} into Eq.\ \eqref{eqs:optimal_cond} yields the Newton directions ${\rm d}\vect\epsilon$ and ${\rm d}\vect X_0^0$:
\begin{equation}
\begin{array}{cc}
{\rm d}\vect\epsilon &= \vect R_{\epsilon}^{-1}\vect B_v^\top \vect p\\
{\rm d} \vect X_0^0 & = \vect R_{0}^{-1}\vect A^\top \vect p
\end{array}
\end{equation}
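The static costate and the resulting Newton directions follow directly from Eq.\ \eqref{eqs:p_expression} and the relations above; the matrix sizes in the sketch below are arbitrary illustrations:

```python
import numpy as np

def newton_direction(A, Bv, dY, R0, Reps):
    # p = (A R0^-1 A^T + Bv Reps^-1 Bv^T)^-1 dY, Eq. (p_expression),
    # then d_eps = Reps^-1 Bv^T p and dX0 = R0^-1 A^T p.
    M = A @ np.linalg.solve(R0, A.T) + Bv @ np.linalg.solve(Reps, Bv.T)
    p = np.linalg.solve(M, dY)
    d_eps = np.linalg.solve(Reps, Bv.T @ p)
    dX0 = np.linalg.solve(R0, A.T @ p)
    return d_eps, dX0
```

By construction, the returned directions satisfy the linearized terminal-error equation $\vect A\,{\rm d}\vect X_0^0 + \vect B_v\,{\rm d}\vect\epsilon = {\rm d}\vect Y_{M-1}^N$ exactly.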
Since the spacecraft's initial position, velocity, and mass are known and fixed, only ${\rm d}\lambda_{m0}$ remains to be solved for within ${\rm d}\vect X_0^0$. At the $j$th iteration, $\vect \epsilon$ and $\lambda_{m0}$ are updated as
\begin{equation}
\begin{array}{cc}
\vect\epsilon_{j+1} &= \vect\epsilon_{j} - \kappa {\rm d} \vect\epsilon_{j}\\
\lambda_{m0,j+1} &= \lambda_{m0,j} - \kappa {\rm d}\lambda_{m0,j}
\end{array}
\end{equation}
where $\kappa$ is the Newton step length. Then the updated $\vect U(t)$ and $\vect X(t)$ are determined through calculating Eq.\ \eqref{eq:CF_1} and integrating Eq.\ \eqref{eq:arg_eqs}.
However, particular attention must be paid to the Newton step length $\kappa$. If $\kappa$ is too large, the thrust sequence may change abruptly, which amplifies the terminal error and degrades the algorithm performance. On the other hand, leaving the thrust sequence unconstrained and changeable helps enlarge the convergence domain. Therefore, a careful strategy for selecting $\kappa$ is needed to maintain stability while enlarging the convergence domain. In this work, the thrust sequence is checked and restricted at each iteration. Denote by $N_{seg,i}$ and $N_{seg,ref}$ the total number of thrust and coast segments of the trajectory at the $i$th iteration and of the reference trajectory, respectively. At the $(i+1)$th iteration, $N_{seg,i+1}$ is required to satisfy $\|N_{seg,i+1}-N_{seg,ref}\| \leq N_{\rm seg,tol}$, where $N_{\rm seg,tol}$ is the tolerance on the number of changed segments; otherwise, $\kappa$ is reduced.
\subsection{Nominal Solution Generation}
The fixed-time fuel-optimal open-loop problem can be formulated as a two-point boundary value problem (TPBVP), which requires finding a zero of the associated shooting function~\cite{zhang2015}. In this work, a method combining analytic derivatives, a switching detection technique, and numerical continuation is applied to find the fuel-optimal low-thrust trajectory~\cite{zhang2015}, which is used as the nominal solution. No control structure needs to be assigned a priori, and the method remains effective when very low thrust accelerations are used in highly nonlinear vector fields.
The nominal discrete control sequence at the $k$th evenly spaced time grid point is denoted as $\vect U_{k,ref}$. Collecting all discrete points yields the following linear algebraic equation
\begin{equation}
\label{eq:Algebric_eqs}
\vect A \vect\epsilon_0 = \vect B
\end{equation}
where
\begin{equation}
\vect A = \left[ \vect P_1, \vect P_2, \cdots, \vect P_N \right]
\end{equation}
\begin{equation}
\vect B = [\vect U_{1,ref}, \vect U_{2,ref}, \cdots, \vect U_{N,ref}]
\end{equation}
The least-squares solution is used as the nominal solution:
\begin{equation}
\vect\epsilon_0 = (\vect A^\top \vect A)^{-1}\vect A^\top \vect B
\end{equation}
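A minimal numerical sketch of this least-squares fit, with a synthetic basis matrix standing in for the stacked Fourier basis values $\vect P_k$ (the dimensions and random data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_basis = 50, 8
A = rng.standard_normal((N, n_basis))    # stands in for [P_1, ..., P_N] stacked
eps_true = rng.standard_normal(n_basis)  # hypothetical "true" weights
B = A @ eps_true                         # reference control samples U_{k,ref}

# Normal-equation form used in the text; np.linalg.lstsq(A, B) is the
# numerically preferable alternative to forming A^T A explicitly.
eps0 = np.linalg.solve(A.T @ A, A.T @ B)
```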
\subsection{Implementation}
The MPSP algorithm for bang-off-bang low-thrust transfers requires accurate detection of the switching times, for two reasons. First, if a switching time is not detected, integration error accumulates around the switching points, which degrades the performance of Newton's method. Second, the SM is discontinuous, as shown in Eqs.\ \eqref{eq:dx_k_i_plus_1} and \eqref{eq:dx_k_1}, so the switching times must be detected for accurate calculation of the SM. The switching detection technique is embedded in the fourth-order fixed-step Runge--Kutta trajectory integration scheme. Detection is activated as soon as the switching function, Eq.\ \eqref{eq:switching_func}, crosses zero in a time interval $[t_k,t_{k+1}]$. The bisection method is then used to find the switching time $t_{sw} \in [t_k,t_{k+1}]$ such that the absolute value of the switching function is within the tolerance $10^{-12}$.
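The bisection search for the switching time can be sketched as follows, assuming the switching function changes sign over $[t_k, t_{k+1}]$; the interval-width fallback and iteration cap are illustrative safeguards beyond what the text specifies.

```python
def detect_switching_time(S, t_lo, t_hi, tol=1e-12, max_iter=200):
    """Bisect [t_lo, t_hi] until |S(t)| is within `tol` (as in the text)
    or the bracket has collapsed."""
    s_lo = S(t_lo)
    for _ in range(max_iter):
        t_mid = 0.5 * (t_lo + t_hi)
        s_mid = S(t_mid)
        if abs(s_mid) <= tol or (t_hi - t_lo) <= tol:
            return t_mid
        if s_lo * s_mid < 0:        # zero crossing lies in the lower half
            t_hi = t_mid
        else:                       # zero crossing lies in the upper half
            t_lo, s_lo = t_mid, s_mid
    return 0.5 * (t_lo + t_hi)
```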
Based on the techniques proposed above, a two-loop MPSP algorithm is designed, consisting of inner-loop and outer-loop parts. The inner-loop algorithm, illustrated in Algorithm \ref{algo:inner-MPSP}, updates $\vect \varepsilon$ and $\lambda_{m,0}$ for the given boundary conditions using Newton's method. In the inner loop, Newton's method is applied only when the $L_2$-norm of the terminal error at the $j$th iteration is less than a maximum error tolerance $\Delta_{\rm max}$, i.e., $\|\vect Y_{M-1}^N\|_{2,j} \leq \Delta_{\rm max}$; $\Delta_{\rm max}$ is a conservative value indicating failure of the iteration or the presence of large perturbations. The flag $Sign$ labels the success ($Sign = 1$) or failure ($Sign = 0$) of the inner loop; failure occurs when the terminal error exceeds the tolerance or the step length becomes too small. The outer-loop MPSP algorithm, shown in Algorithm \ref{algo:outer-MPSP}, is triggered when $Sign = 0$ is returned. In this case, a continuation from the nominal conditions to the perturbed conditions is conducted. Denote by $\vect C_{\rm ref}$ the reference boundary conditions, $\vect C_{\rm per}$ the perturbed conditions and $\tau$ the continuation parameter. Starting from $\tau = 0$, which corresponds to $\vect C_{\rm ref}$, the continuation proceeds until $\tau = 1$, which corresponds to $\vect C_{\rm per}$. At each step, the inner-loop MPSP algorithm is applied to find the solution corresponding to the conditions
\begin{equation}
\vect C_{\tau} = (1-\tau)\vect C_{\rm ref} + \tau \vect C_{\rm per}
\end{equation}
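The continuation over boundary conditions can be sketched as below, with `solve_inner` standing in for the inner-loop MPSP call; the bookkeeping of the outer-loop algorithm is simplified here, but the step doubling on success and halving on failure follow the text.

```python
def continue_to_perturbed(C_ref, C_per, solve_inner, d_tau=0.5, d_tau_min=0.01):
    """Advance tau from 0 to 1 along C_tau = (1 - tau) C_ref + tau C_per,
    doubling the step on success (capped at 1 - tau) and halving on failure."""
    tau_old, step = 0.0, d_tau
    while tau_old < 1.0:
        tau = min(tau_old + step, 1.0)
        C_tau = (1 - tau) * C_ref + tau * C_per   # linear homotopy of conditions
        if solve_inner(C_tau):
            tau_old = tau
            if tau < 1.0:
                step = min(1.0 - tau, 2 * step)
        else:
            step /= 2
            if step <= d_tau_min:
                return False                      # continuation stalled
    return True
```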
Besides, the $N_{\rm seg,tol}$ used in the inner-loop algorithm is initially set to $0$ in the outer-loop algorithm, so the MPSP algorithm first tries to find a solution with the same thrust sequence as the nominal solution. The value of $N_{\rm seg,tol}$ is increased once the inner-loop MPSP algorithm fails.
\begin{algorithm}[htp]
\caption{Inner-Loop MPSP Algorithm}
\label{algo:inner-MPSP}
\begin{algorithmic}[1]
\REQUIRE{Boundary conditions $\vect C_{\tau}$, weight of control sequence $\vect \varepsilon$, $\Delta_{\rm max}$, $N_{\rm seg,tol}$, reference solution.}
\ENSURE{The updated $\vect \varepsilon$, $\lambda_{m0}$ and label $Sign$}
\STATE{Integrate the trajectory using initial condition $\vect X_0$ and $\vect \varepsilon$.}
\STATE{Set $Sign = 1$, $i = 0$.}
\WHILE{The terminal error does not satisfy the requirement}
\IF{$\| \vect Y_{M-1}^N \|_{2,i} \geq \Delta_{\rm max}$}
\STATE{$Sign = 0$. Return.}
\ENDIF
\STATE{ Calculate $\vect A$ and $\vect B_v$ matrix in Eq.\ \eqref{eq:mpap_coeffs} and the static costate vector $\vect p$ in Eq.\ \eqref{eqs:p_expression}. Calculate Newton direction ${\rm d} \vect \varepsilon_j$ and ${\rm d} \lambda_{m0,j}$. Set initial $\kappa = 1$.}
\WHILE{1}
\STATE{$\vect \varepsilon_{\rm old} := \vect \varepsilon$ and $\lambda_{m,0,{\rm old}} := \lambda_{m,0}$.}
\STATE{Update $\vect \varepsilon$ and $\lambda_{m,0}$. Integrate dynamical equations to obtain the trajectory. Get $N_{{\rm seg},i+1}$. $i:=i+1$}
\IF{$\|N_{{\rm seg},i+1}-N_{\rm seg,ref}\| \leq N_{\rm seg,tol}$}
\STATE{Save the updated control. Break.}
\ELSE
\STATE{$\kappa := \kappa/2$. $\vect \varepsilon := \vect \varepsilon_{\rm old}$. $\lambda_{m,0} := \lambda_{m,0,{\rm old}}$.}
\IF{$\kappa \leq 1/2^5$}
\STATE{Set $Sign = 0$. Return.}
\ENDIF
\ENDIF
\ENDWHILE
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[htp]
\caption{Outer-Loop MPSP Algorithm}
\label{algo:outer-MPSP}
\begin{algorithmic}[1]
\STATE{Solve the fuel-optimal low-thrust transfer problem using the indirect method~\cite{zhang2015}. The initial weight $\vect\epsilon_0$ is calculated using the least-squares method.}
\STATE{Calculate the perturbed conditions or thruster parameters.}
\STATE{Set the error tolerance $\Delta_{\rm max}$, the continuation parameter step $\delta \tau$ and $N_{\rm seg,tol}=0$.}
\STATE{Apply inner-loop MPSP algorithm. Denote the returned label of success as $Sign1$.}
\IF{$Sign1 = 0$}
\WHILE{1}
\STATE{Set $\tau = \delta \tau$, $\tau_{\rm old} = 0$.}
\WHILE{$\delta \tau > 0$}
\STATE{Calculate the perturbed condition for current $\tau$.}
\STATE{Apply inner-loop MPSP algorithm. Denote the returned label of success as $Sign2$.}
\IF{$Sign2 = 0$}
\STATE{$\delta \tau := \delta \tau/2$.}
\ELSE
\STATE{Save the solution as new initial guess solution for the next iteration.}
\STATE{Set $\delta \tau := \min(1-\tau, 2\delta \tau)$, $\tau_{\rm old} = \tau$.}
\ENDIF
\STATE{$\tau = \tau_{\rm old} + \delta \tau$.}
\IF{$\delta \tau \leq 0.01$ and $\tau_{\rm old} \neq 1$}
\STATE{Break}
\ENDIF
\ENDWHILE
\IF{$\delta \tau = 0$}
\STATE{break}
\ELSE
\STATE{$N_{\rm seg,tol} := N_{\rm seg,tol} + 2$}
\IF{$N_{\rm seg,ref} - N_{\rm seg,tol} < 0$}
\STATE{Print failure information; break}
\ENDIF
\ENDIF
\ENDWHILE
\ELSE
\STATE{Save the solution.}
\ENDIF
\end{algorithmic}
\end{algorithm}
\section{Numerical Simulations} \label{sec:simulations}
\subsection{Fuel-optimal Trajectory}
An interplanetary CubeSat mission to the asteroid $99942$ Apophis is considered. The related physical constants are listed in Tab.\ \ref{table:physical_constant}. The initial mass $m_0$, maximum thrust magnitude $T_{\rm max}$ and specific impulse $I_{\rm sp}$ are set to $25\ {\rm kg}$, $1.5\times 10^{-3}\ {\rm N}$ and $3000\ {\rm s}$, respectively. The departure epoch is October 1st, 2020, the arrival epoch is December 1st, 2023, and the departure position is the Sun-Earth L2 point. The boundary conditions of the spacecraft at the initial and rendezvous times are listed in Tab.~\ref{table:boundary_conditions}. The fuel-optimal solution, obtained with the indirect method \cite{zhang2015}, is employed as the nominal solution. The optimal transfer orbit is shown in Fig.\ \ref{fig:open_tra_ast}, where the red line denotes the thrust segments, i.e., $u=1$, while the blue dashed line denotes the coast segments, i.e., $u=0$. The variations of the thrust throttle $u$, switching function $S$ and mass $m$ w.r.t.\ time are shown in Fig.\ \ref{fig:open_usm_ast}. The optimal trajectory consists of five thrust segments and four coast segments, and the final mass of the spacecraft is $21.062\ {\rm kg}$.
In the following numerical simulations, Fourier basis polynomials up to the $15$th order are used to approximate the control sequence. The largest time step for each segment is set to $h_{\rm max} = 0.0005 \times t_f$. The convergence conditions to terminate the MPSP algorithm are that the terminal position error satisfies $\|\Delta \vect r_f\|_2 \leq 500\ {\rm km}$, the terminal velocity error satisfies $\|\Delta \vect v_f\|_2 \leq 0.1\ {\rm km/s}$ and $|\lambda_{mf}| \leq 10^{-6}$. The other parameters are set as $\tau_0 = 0.5$, $\delta \tau = 0.5$ and $\Delta_{\rm max} = 1$. All simulations are conducted on an Intel Core i7-9750H CPU at $2.60\ {\rm GHz}$ under the Windows 10 system within the MATLAB environment. For brevity, simulations with a large number of random perturbations are not reported.
\begin{table}
\centering
\caption{Physical constants.}
\begin{tabular}{lccc}
\toprule
\toprule
Physical constant & Values \\
\midrule
Gravitational parameter, $\mu$ & $1.327124 \times 10^{11} \ {\rm km^3/s^2}$ \\
Gravitational acceleration, $g_0$ & $9.80655 \ {\rm m/s^2}$ \\
Length unit, LU & $1.495979 \times 10^8 \ {\rm km}$\\
Time unit, TU & $5.022643 \times 10^6 \ {\rm s}$\\
Velocity unit, VU & $29.784692 \ {\rm km/s}$\\
Mass unit, MU & $25 \ {\rm kg}$ \\
\bottomrule
\end{tabular}
\label{table:physical_constant}
\end{table}
\begin{table}
\centering
\caption{Boundary Conditions.}
\begin{tabular}{lccc}
\toprule
\toprule
Boundary Condition & Values \\
\midrule
Initial position vector (LU) & $\vect r_0 = [1.001367,0.140622,-6.594513\times 10^{-6}]^\top$\\
Initial velocity vector (VU) & $\vect v_0 = [-0.155386,0.986258,-4.827818\times 10^{-5}]^\top$ \\
Terminal position vector (LU) & $\vect r_f = [-1.044138,-0.122918,-0.018183]^\top$\\
Terminal velocity vector (VU) & $\vect v_f = [0.222668,-0.875235,0.051944]^\top$\\
\bottomrule
\end{tabular}
\label{table:boundary_conditions}
\end{table}
\begin{figure}
\caption{The fuel-optimal trajectory for the boundary conditions in Tab.~\ref{table:boundary_conditions}.}
\label{fig:open_tra_ast}
\end{figure}
\begin{figure}
\caption{The variations of thrust throttle $u$, switching function $S$ and mass $m$ w.r.t.\ time corresponding to the fuel-optimal trajectory in Fig.~\ref{fig:open_tra_ast}.}
\label{fig:open_usm_ast}
\end{figure}
\subsection{Perturbations on Initial Conditions}
In this section, the proposed MPSP algorithm is tested under perturbations of the initial conditions. Different scales of the perturbation magnitude for a single base perturbation are used for the analysis. The random base perturbation is set as $\delta \vect x_0 = [1.6712,-1.0659,-4.1460,1.0876,2.3763,4.6091] \times 10^{-2}$. Nine cases with different perturbation scales are simulated; the corresponding perturbed initial condition is $\hat{\vect x}_0 = \vect x_0 + \kappa \delta \vect x_0$, where the scale factor $\kappa \in \{-3,-2.5,-2,-1,1,2,2.5,3,3.5\}$ (not to be confused with the Newton step length). The perturbation direction of the first $4$ cases is opposite to that of the last $5$ cases. The simulation results are summarized in Tab.~\ref{table:ini_per}, which gives the terminal errors, the number of Newton iterations in the inner-loop MPSP algorithm and the percentage increase of fuel consumption w.r.t.\ the corresponding optimal solutions. It can be observed that the algorithm works successfully, since the terminal errors are all within the tolerance.
The comparisons between the thrust angles of the converged MPSP solutions and the nominal thrust angles are shown in Fig.~\ref{fig:Ini_per_9_cases}; the variations of the thrust angles remain smooth. The variation of $\alpha$ for cases 5-9 is more apparent than for cases 1-4, while the oscillations of $\beta$ for all cases remain in the vicinity of the nominal solution. The comparisons between the thrust sequences of the converged MPSP solutions and the corresponding fuel-optimal thrust sequences are shown in Fig.~\ref{fig:Ini_per_u_9_cases}. Case 4 requires the fewest iterations; in this case, the outer-loop MPSP continuation process is not triggered. Case 9 requires the most iterations, since the thrust sequence changes dramatically compared with the nominal one. From case 4 to case 1, the optimal solutions gradually develop new coast segments, but the obtained MPSP solutions retain the nominal thrust sequence. From case 5 to case 9, the initial conditions become tighter, and more thrust is required to drive the spacecraft to the target. The MPSP and optimal solutions show a similar trend of gradually increasing the thrust segments and reducing the coast segments. From Tab.~\ref{table:ini_per}, the largest increase of fuel consumption occurs in case 1, around $9\%$, while the smallest occurs in case 5, only $0.16\%$. From case 5 to case 9, even though the proposed algorithm requires more iterations for tighter initial conditions, the fuel consumption is nearly optimal. The differences in coordinates between the converged MPSP solutions and the nominal solution are shown in Fig.~\ref{fig:ini_per_state_9_cases}. Interestingly, the differences are symmetric for opposite directions of the initial perturbation.
\begin{figure}
\caption{The comparison of nominal thrust angles and thrust angles of MPSP solutions for cases in Tab.~\ref{table:ini_per}.}
\label{fig:Ini_per_alpha_9_cases}
\label{fig:Ini_per_beta_9_cases}
\label{fig:Ini_per_9_cases}
\end{figure}
\begin{figure}
\caption{The comparison of optimal thrust sequences and thrust sequences of MPSP solutions for cases in Tab.~\ref{table:ini_per}.}
\label{fig:Ini_per_u_9_cases}
\end{figure}
\begin{figure}
\caption{Differences on coordinates between converged MPSP trajectories and the nominal trajectory for cases in Tab.~\ref{table:ini_per}.}
\label{fig:ini_per_state_9_cases}
\end{figure}
\begin{table}
\centering
\caption{Perturbations on initial conditions.}
\begin{tabular}{ccccccc}
\toprule
\toprule
Case & $\kappa$ & $\|\vect x_f - \vect x(t_f)\|_2 \ (\rm{km})$ & $\|\vect v_f - \vect v(t_f)\|_2 \ (\rm{km/s})$ & $|\lambda_m(t_f)|$ & Newton's iteration & Fuel Increase ($\%$)\\
\midrule
1 & $-3$ & $216.45$ & $4.20 \times 10^{-5}$ & $3.74 \times 10^{-8}$ & $40$ & $9.01$\\
2 & $-2.5$ & $28.77$ & $5.66 \times 10^{-6}$ & $3.11 \times 10^{-9}$ & $21$ & $6.82$\\
3 & $-2$ & $495.99$ & $9.04 \times 10^{-5}$ & $3.26 \times 10^{-7}$ & $6$ & $2.64$\\
4 & $-1$ & $328.34$ & $6.07 \times 10^{-5}$ & $1.28 \times 10^{-7}$ & $5$ & $2.27$\\
5 & $1$ & $128.41$ & $2.43 \times 10^{-5}$ & $2.48 \times 10^{-8}$ & $40$ & $0.16$\\
6 & $2$ & $9.24$ & $1.86 \times 10^{-6}$ & $1.09 \times 10^{-9}$ & $48$ & $5.02$\\
7 & $2.5$ & $13.18$ & $2.55 \times 10^{-6}$ & $4.74 \times 10^{-10}$ & $67$ & $1.11$\\
8 & $3$ & $40.11$ & $1.66 \times 10^{-5}$ & $1.82 \times 10^{-7}$ & $98$ & $2.54$\\
9 & $3.5$ & $454.25$ & $8.13 \times 10^{-5}$ & $2.41 \times 10^{-7}$ & $135$ &$1.94$\\
\bottomrule
\bottomrule
\end{tabular}
\label{table:ini_per}
\end{table}
\subsection{Perturbations on Terminal Conditions}
Different variations of the terminal position are simulated to test the proposed method. The $8$ perturbed terminal positions lie at the vertices of a cube whose center is the reference position; the side length of the cube is set to $0.04\ {\rm LU}$. The corresponding $8$ cases are shown in Tab.~\ref{table:ter_per_cases}. The simulation results, including the terminal errors, the number of Newton iterations in the inner-loop MPSP algorithm and the percentage increase of fuel consumption w.r.t.\ the corresponding optimal solutions, are shown in Tab.~\ref{table:ter_per}. It can be observed that all terminal errors are within the tolerance.
Fig.~\ref{fig:ter_per_8_cases} shows the variations of the thrust angles $\alpha$ and $\beta$ w.r.t.\ time. The obtained angle variations are close to the nominal case. Since the terminal positions $\vect x_f$ for cases 1-4 are farther from the Sun than those for cases 5-8, $\alpha$ starts more inward in cases 1-4 than in cases 5-8; a similar trend is seen in the variation of $\beta$. The comparison between the thrust sequences of the converged MPSP solutions and the corresponding optimal thrust sequences is shown in Fig.~\ref{fig:ter_per_u_8_cases}. The converged MPSP solutions coincide well with the optimal solutions in most cases. For cases 1 and 3, the optimal thrust sequences have more coast segments than the nominal one; the MPSP solutions in these cases remain the same as the nominal thrust sequence. For cases 2, 4 and 8, the MPSP solutions capture the main structure of the optimal thrust sequences except for some near-impulse thrust segments. In cases 5, 6 and 7, the MPSP solutions coincide perfectly with the optimal thrust sequences. Cases 1, 3 and 7, where the MPSP solutions retain the nominal thrust sequence, require just 5-6 Newton iterations. Cases 2, 4, 5 and 8 require around $30$--$45$ Newton iterations, since one fewer thrust segment is required with $N_{\rm seg,tol} = 2$. Case 6 requires more iterations because two fewer coast segments are required with $N_{\rm seg,tol} = 4$. However, for all cases, the fuel consumption is very close to the optimal solution. The minimum fuel increase occurs in case 2, only $0.79\%$ above its optimal fuel consumption, and the maximum in case 6, only $2.36\%$ above its optimal fuel consumption. The fuel consumption remains nearly optimal even when the thrust sequence changes w.r.t.\ the nominal one. The differences between the nominal solution and the converged MPSP solutions are shown in Fig.~\ref{fig:ter_per_xyz_8_cases}.
The $x$ differences for cases 1-4 and cases 5-8 are nearly symmetric. The $y$ differences show a similar symmetry except for the last $200$ days. The $z$ differences are more complex, and their magnitude tends to grow.
\begin{table}
\centering
\caption{Cases of perturbations on terminal positions.}
\begin{tabular}{cccc}
\toprule
\toprule
Case & $\partiallta x_f$ (LU) & $\partiallta y_f$ (LU) & $\partiallta z_f$ (LU)\\
\midrule
1 & $0.02$ & $0.02$ & $0.02$ \\
2 & $0.02$ & $0.02$ & $-0.02$\\
3 & $0.02$ & $-0.02$ & $0.02$\\
4 & $0.02$ & $-0.02$ & $-0.02$\\
5 & $-0.02$ & $0.02$ & $0.02$ \\
6 & $-0.02$ & $0.02$ & $-0.02$\\
7 & $-0.02$ & $-0.02$ & $0.02$ \\
8 & $-0.02$ & $-0.02$ & $-0.02$\\
\bottomrule
\bottomrule
\end{tabular}
\label{table:ter_per_cases}
\end{table}
\begin{table}
\centering
\caption{Simulation results for perturbations on terminal positions.}
\begin{tabular}{cccccc}
\toprule
\toprule
Case & $\|\vect x_f - \vect x(t_f)\|_2 \ (\rm{km})$ & $\|\vect v_f - \vect v(t_f)\|_2 \ (\rm{km/s})$ & $|\lambda_m(t_f)|$ & Newton's iteration & Fuel Increase($\%$)\\
\midrule
1 & $7.98$ & $1.13 \times 10^{-6}$ & $8.35\times 10^{-8}$ & $6$ & $1.33$\\
2 & $12.71$ & $2.41 \times 10^{-6}$ & $2.49\times 10^{-9}$ & $40$ & $0.79$\\
3 & $8.57$ & $1.60 \times 10^{-6}$ & $1.14\times 10^{-9}$ & $6$ & $2.26$\\
4 & $18.85$ & $3.10 \times 10^{-6}$ & $2.52\times 10^{-8}$ & $45$ & $1.01$\\
5 & $288.30$ & $5.05 \times 10^{-5}$ & $1.31\times 10^{-7}$ & $32$ & $1.26$\\
6 & $16.28$ & $2.82 \times 10^{-6}$ & $2.13\times 10^{-9}$ &$71$ & $2.36$\\
7 & $55.82$ & $1.00 \times 10^{-5}$ & $1.26\times 10^{-9}$ &$5$ & $1.28$\\
8 & $53.00$ & $8.75 \times 10^{-6}$ & $1.57\times 10^{-9}$ &$40$ & $2.08$\\
\bottomrule
\bottomrule
\end{tabular}
\label{table:ter_per}
\end{table}
\begin{figure}
\caption{The comparison of nominal thrust angles and thrust angles of MPSP solutions for cases in Tab.~\ref{table:ter_per_cases}.}
\label{fig:ter_per_alpha_8_cases}
\label{fig:ter_per_beta_8_cases}
\label{fig:ter_per_8_cases}
\end{figure}
\begin{figure}
\caption{The comparison of optimal thrust sequences and thrust sequences of MPSP solutions for cases in Tab.~\ref{table:ter_per_cases}.}
\label{fig:ter_per_u_8_cases}
\end{figure}
\begin{figure}
\caption{Differences on coordinates between converged MPSP trajectories and the nominal trajectory for terminal position variation cases in Tab.~\ref{table:ter_per_cases}.}
\label{fig:ter_per_xyz_8_cases}
\end{figure}
\subsection{Perturbations on Thruster Parameters}
Perturbations on the thruster parameters, specifically on $T_{\rm max}$, are simulated. The percentage of the perturbation w.r.t.\ the nominal value is set to $\eta = [-10\%,\ -6\%,\ -3\%,\ 3\%,\ 6\%,\ 10\%]$, which corresponds to the $6$ simulation cases shown in Tab.~\ref{table:para}. It is assumed that the percentage error of $T_{\rm max}$ remains the same throughout the flight. The simulation results are summarized in Tab.~\ref{table:para}, which gives the terminal errors, the number of Newton iterations in the inner-loop MPSP algorithm and the percentage increase of fuel consumption w.r.t.\ the corresponding optimal solutions. For all cases, the MPSP algorithm converges successfully.
In Fig.~\ref{fig:para_per_6_cases}, the variations of $\alpha$ and $\beta$ remain in the vicinity of the nominal values. Fig.~\ref{fig:para_per_u_6_cases} illustrates the comparison between the optimal thrust sequences and the thrust sequences of the MPSP solutions. The thrust sequences coincide well even when the thrust sequence changes. The increase in fuel consumption w.r.t.\ the corresponding fuel-optimal solutions is negligible. As expected, the maximum number of Newton iterations occurs in case 1, since the variation of the thrust sequence is largest there. Only 4 to 5 iterations are required when the thrust sequence remains the same as the nominal one. Fig.~\ref{fig:para_per_xyz_6_cases} depicts the coordinate differences between the converged MPSP trajectories and the nominal trajectory. The differences for cases 1 and 6 are the most pronounced, since the perturbations on $T_{\rm max}$ are the largest. Unlike Figs.~\ref{fig:ini_per_state_9_cases} and \ref{fig:ter_per_xyz_8_cases}, the differences are not symmetric for opposite cases such as cases 1 and 6.
The outcome of this simulation study indicates: 1) the thrust angles of the converged MPSP trajectories remain smooth; 2) the thrust sequence of the MPSP solution tends to remain the nominal thrust sequence when coast segments would be added; 3) the thrust sequence of the MPSP solution captures the main structure of the optimal thrust sequence when coast segments are reduced; 4) even though the fuel consumption is not included in the performance index, the MPSP trajectories are competitive in terms of fuel consumption, even when the thrust sequence changes.
\begin{table}
\centering
\caption{Perturbations on $T_{\rm max}$.}
\begin{tabular}{ccccccc}
\toprule
\toprule
Case & $\eta \ (\%)$ & $\|\vect x_f - \vect x(t_f)\|_2 \ (\rm{km})$ & $\|\vect v_f - \vect v(t_f)\|_2 \ (\rm{km/s})$ & $|\lambda_m(t_f)|$ & Newton's iteration & Fuel Increase ($\%$)\\
\midrule
1 & $-10$ & $1.77$ & $3.57 \times 10^{-7}$ & $2.42\times 10^{-9}$ & $93$ & $1.27$\\
2 & $-6$ & $373.68$ & $5.21 \times 10^{-5}$ & $5.62\times 10^{-7}$ & $24$ & $0.18$\\
3 & $-3$ & $267.52$ & $5.18 \times 10^{-5}$ & $2.76\times 10^{-7}$ & $4$ & $0.082$\\
4 & $3$ & $6.50$ & $1.10 \times 10^{-6}$ & $3.15\times 10^{-9}$ & $4$ & $0.062$\\
5 & $6$ & $12.25$ & $2.10\times 10^{-6}$ & $5.06\times 10^{-9}$ & $5$ & $0.48$\\
6 & $10$ & $55.35$ & $9.47\times 10^{-6}$ & $1.27\times 10^{-8}$ & $5$ & $1.19$\\
\bottomrule
\bottomrule
\end{tabular}
\label{table:para}
\end{table}
\begin{figure}
\caption{The comparison of nominal thrust angles and thrust angles of MPSP solutions for cases in Tab.~\ref{table:para}.}
\label{fig:para_per_alpha_6_cases}
\label{fig:para_per_beta_6_cases}
\label{fig:para_per_6_cases}
\end{figure}
\begin{figure}
\caption{The comparison of optimal thrust sequences and thrust sequences of MPSP solutions for cases in Tab.~\ref{table:para}.}
\label{fig:para_per_u_6_cases}
\end{figure}
\begin{figure}
\caption{Differences on coordinates between converged MPSP trajectories and the nominal trajectory for cases of perturbations on $T_{\rm max}$.}
\label{fig:para_per_xyz_6_cases}
\end{figure}
\section{Conclusion}
Unlike applications with a continuous control profile, this paper shows that the sensitivity matrix is discontinuous at the bang-bang switching points. A robust two-loop MPSP algorithm is further designed as the low-thrust guidance scheme. Numerical simulations illustrate that the proposed MPSP algorithm is robust to various kinds of perturbations. Moreover, the fuel consumption is near optimal even when the thrust sequence must change. Future work will refine the algorithm design to reduce the total number of iterations.
\section*{Acknowledgment}
Yang Wang acknowledges the support of this work by the China Scholarship Council (Grant No.\ 201706290024).
\section*{Appendix}
\subsection{Trajectory Discretization}
Instead of discretizing with the Euler method, a higher-order method is used to increase the accuracy. The classical fourth-order Runge--Kutta formula is used as
\begin{equation}
\label{eq:Runge-Kutta}
\left\{
\begin{array}{ll}
\vect x_{n+1} &= \vect x_n + \dfrac{h}{6}\left(\vect K_1 + 2\vect K_2 + 2\vect K_3 +\vect K_4\right) \\
\vect K_1 &= \mathcal{F} (t_{n,1},\vect x_{n,1}), \quad t_{n,1} = t_n, \ \vect x_{n,1} = \vect x_n\\
\vect K_2 &= \mathcal{F} (t_{n,2},\vect x_{n,2}), \quad t_{n,2} = t_n + \dfrac{h}{2}, \ \vect x_{n,2} = \vect x_n + \dfrac{h}{2}\vect K_1\\
\vect K_3 &= \mathcal{F} (t_{n,3},\vect x_{n,3}), \quad t_{n,3} = t_n + \dfrac{h}{2}, \ \vect x_{n,3} = \vect x_n + \dfrac{h}{2}\vect K_2\\
\vect K_4 &= \mathcal{F} (t_{n,4},\vect x_{n,4}), \quad t_{n,4} = t_n + h, \ \vect x_{n,4} = \vect x_n + h\vect K_3
\end{array}
\right.
\end{equation}
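Eq.\ \eqref{eq:Runge-Kutta} translates directly into code; a generic illustrative sketch for an arbitrary right-hand side $\mathcal{F}$:

```python
import numpy as np

def rk4_step(F, t, x, h):
    """One classical fourth-order Runge-Kutta step for x' = F(t, x)."""
    K1 = F(t, x)
    K2 = F(t + h / 2, x + h / 2 * K1)
    K3 = F(t + h / 2, x + h / 2 * K2)
    K4 = F(t + h, x + h * K3)
    return x + h / 6 * (K1 + 2 * K2 + 2 * K3 + K4)
```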
Combining Eq.\ \eqref{eq:Runge-Kutta} with Eq.\ \eqref{eq:expr_x_k_plus_1} yields
\begin{equation}
\vect F_n(t_n,\vect x_n) = \vect x_n + \dfrac{h}{6}\left(\vect K_1 + 2\vect K_2 + 2\vect K_3 +\vect K_4\right)
\end{equation}
and its derivative w.r.t.\ $\vect x_n$ is
\begin{equation}
\dfrac{{\rm d} \vect F_n(t_n,\vect x_n)}{{\rm d} \vect x_n} = \vect I_n + \dfrac{h}{6}\left( \dfrac{{\rm d} \vect K_1}{{\rm d} \vect x_n} + 2\dfrac{{\rm d} \vect K_2}{{\rm d} \vect x_n} + 2\dfrac{{\rm d} \vect K_3}{{\rm d} \vect x_n} + \dfrac{{\rm d} \vect K_4}{{\rm d} \vect x_n} \right)
\end{equation}
where
\begin{equation}
\dfrac{{\rm d} \vect K_i}{{\rm d} \vect x_n} = \dfrac{{\rm d} \mathcal{F} (t_{n,i},\vect x_{n,i})}{{\rm d} \vect x_{n,i}} \dfrac{{\rm d} \vect x_{n,i}}{{\rm d} \vect x_n},\quad i = 1,2,3,4
\end{equation}
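For linear dynamics $\mathcal{F}(t,\vect x) = M\vect x$, every Jacobian ${\rm d}\mathcal{F}/{\rm d}\vect x_{n,i}$ equals $M$, and the chain-rule accumulation above reduces to the closed form $\sum_{k=0}^{4}(hM)^k/k!$. The following illustrative sketch (not the paper's implementation) checks that agreement:

```python
import numpy as np

def rk4_step_jacobian_linear(M, h):
    """Accumulate dK_i/dx_n through one RK4 step for F(t, x) = M x."""
    n = M.shape[0]
    I = np.eye(n)
    dK1 = M @ I                    # dx_{n,1}/dx_n = I
    dK2 = M @ (I + h / 2 * dK1)    # dx_{n,2}/dx_n = I + (h/2) dK1/dx_n
    dK3 = M @ (I + h / 2 * dK2)
    dK4 = M @ (I + h * dK3)
    return I + h / 6 * (dK1 + 2 * dK2 + 2 * dK3 + dK4)
```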
\end{document} |
\begin{document}
\begin{center}
{\Large\bf Twisting the fake monster superalgebra}\\[0.8cm]
Nils R. Scheithauer\footnote{Supported by a DAAD grant.},\\
Mathematisches Seminar der Universit\"at Hamburg,\\
Bundesstr. 55, 20146 Hamburg, Germany\\
\end{center}
\vspace*{1.5cm}
\noindent
We calculate twisted denominator identities of the fake monster superalgebra and use them to construct new examples of supersymmetric generalized Kac-Moody superalgebras. Their denominator identities give new infinite product identities.
\section{Introduction}
There are 3 generalized Kac-Moody algebras or superalgebras which represent the physical states of a string moving on a certain variety, namely the monster algebra, the fake monster algebra \cite{B} and the fake monster superalgebra \cite{NRS}. The no-ghost theorem from string theory can be used to construct actions of finite groups on these algebras. For example the monster group acts on the monster algebra. Applying elements of the finite groups to the Weyl denominator identity of these algebras gives twisted denominator identities. They can be calculated explicitly because the simple roots of the algebras are known. For the monster algebra and the fake monster algebra this has been done in \cite{B}. There Borcherds uses twisted denominator identities of the monster algebra to prove the moonshine conjectures. In this paper we calculate twisted denominator identities of the fake monster superalgebra and use them to construct new examples of supersymmetric generalized Kac-Moody superalgebras. They have properties similar to those of the fake monster superalgebra. For example they have no real roots and the Weyl vector is zero. Their denominator identities give new infinite product expansions. In a forthcoming paper we show that they define automorphic forms of singular weight.
We describe the sections of this paper.
In the second section we recall some facts about the fake monster superalgebra and describe the action of an extension of the Weyl group of $E_8$ on this algebra.
In the third section we derive the general expression of the twisted denominator identity corresponding to elements in this group of odd order.
In the last section we calculate the twisted denominator identities explicitly for certain elements of order 3 and 7. We construct two supersymmetric generalized Kac-Moody superalgebras of rank 6 and 4 and describe their simple roots and the multiplicities.
\section{The fake monster superalgebra}
In this section we recall some results about the fake monster superalgebra from \cite{NRS} and construct an action of $2^8.2.Aut(E_8)$ on it. The main difference to the bosonic case described in \cite{B} is that we have to work with a double cover of the automorphism group of the light cone lattice rather than with the automorphism group of that lattice.
The fake monster superalgebra $G$ can be constructed as the space of physical states of a chiral N$=$1 superstring moving on a 10 dimensional torus. $G$ is a generalized Kac-Moody superalgebra. The root lattice of $G$ is the 10 dimensional even unimodular Lorentzian lattice $II_{9,1}=E_8 \oplus II_{1,1}$. A nonzero element $\alpha \in II_{9,1}$ is a root of $G$ if and only if $\alpha^2\leq 0$. In particular $G$ has no real roots. This reflects the fact that the superstring has no tachyons. The multiplicity of a root $\alpha$ is given by
$mult_0(\alpha)=mult_1(\alpha)=c(-\alpha^2/2)$ where $c(n)$
is the coefficient of $q^n$ in
$ 8 \eta(q^2)^8/\eta(q)^{16} = 8+128q+1152q^2+7680q^3+42112q^4+\ldots$
and $\eta(q)$ is the Dedekind eta function. There are 2 cones of negative norm vectors in $II_{9,1}$. We define one of them as the positive cone and denote the closure of the positive cone by $II_{9,1}^+$. Then the positive roots of $G$ are the nonzero vectors in $II_{9,1}^+$ and the simple roots of $G$ are the positive roots of zero norm. The simple roots have multiplicity 8 as even and odd roots. The Cartan subalgebra of $G$ is isomorphic to the vector space generated by $II_{9,1}$. Since $G$ has no real roots the Weyl group is trivial. The Weyl vector of $G$ is zero.
We describe some results about the 8 dimensional real spin group. For more details confer \cite{J}. Let $\mathbb O$ be the real 8 dimensional algebra of octonions, i.e. the unique real alternative division algebra of dimension 8. $\mathbb O$ has a quadratic form $N$ permitting composition. For $b\in {\mathbb O}$ define the left multiplication $L_b$, the right multiplication $R_b$ and the \mbox{operator} $U_b=L_bR_b=R_bL_b$. Let ${\mathbb O}_0$ be the ortho\-gonal complement of ${\mathbb R}1$ and denote $C({\mathbb O},N)$ the Clifford algebra generated by ${\mathbb O}$ with relations $a^2=N(a)1$. There is an isomorphism $\varepsilon$ from the even subalgebra $C^e({\mathbb O},N)$ of $C({\mathbb O},N)$ to the Clifford algebra $C({\mathbb O}_0,-N)$ mapping $1a$, $a\in {\mathbb O}$, to $a$, where we have identified the unit in $C({\mathbb O}_0,-N)$ with the unit in ${\mathbb O}$. $C({\mathbb O}_0,-N)$ acts naturally on $\mathbb O$ by left and right multiplication. An element $u$ of the spin group $\Gamma^e_0({\mathbb O},N)$ can be written as $u=1b_1 \ldots 1b_n$ with $\prod N(b_i)=1$ and $\varepsilon(u)= b_1 \ldots b_n$. The actions $\rho_L(u)=L_{b_1} \ldots L_{b_n}, \, \rho_R(u)=R_{b_1} \ldots R_{b_n}$ and $\rho_V(u)=U_{b_1}\ldots U_{b_n}$ give three irreducible and inequivalent 8 dimensional re\-presentations of the spin group called conjugate spinor, spinor and vector representation. They are related by triality, i.e.
$\rho_V(u)(ab)=(\rho_L(u)a)(\rho_R(u)b)$ holds for all $a,b\in {\mathbb O}$. The image of $\Gamma^e_0({\mathbb O},N)$ under each of these re\-presentations is $SO(8)$. The kernel of $\rho_V : \Gamma^e_0({\mathbb O},N) \rightarrow SO(8)$ is $\{1,-1\}$ so that $\Gamma^e_0({\mathbb O},N)$ is a double cover of $SO(8)$.
The representations $\rho_L,\rho_R$ and $\rho_V$ of the spin group induce representations of the Lie algebra $so(8)$ with weights
$\frac{1}{2}(\pm 1,\ldots, \pm1)$ with an odd number of $-$ signs,
$\frac{1}{2}(\pm 1,\ldots, \pm1)$ with an even number of $-$ signs,
and the permutations of $(\pm1,0,\ldots,0)$, respectively.
We embed $E_8$ into ${\mathbb O}$. Let $Aut(E_8)$ be the group of
automorphisms of $E_8$ leaving the bilinear form invariant. Then
$Aut(E_8)\subset SO(8)$ and the inverse image of $Aut(E_8)$ under $\rho_V$ is
a double cover of $Aut(E_8)$.
$Aut(E_8)$ can also be constructed using the ring of integral octonions (cf. \cite{C}). This description implies that $E_8 \subset {\mathbb O}$ is also invariant under the actions $\rho_L$ and $\rho_R$.
Now we construct an action of an extension of the double cover $2.Aut(E_8)$ on the fake monster superalgebra. The extension $2^8.Aut(E_8)$ of $Aut(E_8)$ by $\mathrm{Hom}(E_8,{\mathbb Z}_2)$ acts naturally on the vertex algebra $V_{E_8}$ of the lattice $E_8$. The same holds for the extension of $2.Aut(E_8)$ by $2^8$, where $2.Aut(E_8)$ acts by $\rho_V$. The vector space $E_8 \otimes {\mathbb R}[t^{-1}]t^{-\frac{1}{2}}$ is an abelian subalgebra of the Heisenberg algebra $E_8 \otimes {\mathbb R}[t,t^{-1}]t^{\frac{1}{2}}$. The exterior algebra $V_{NS}$ of $E_8 \otimes {\mathbb R}[t^{-1}]t^{-\frac{1}{2}}$ is a vertex superalgebra carrying a representation of the Virasoro algebra. $V_{NS}$ decomposes into eigenspaces of $L_0$ with eigenvalues in $\frac{1}{2}{\mathbb Z}$. $2.Aut(E_8)$ acts on $V_{NS}$ in the vector representation. We define the vertex superalgebra $V_0=V_{NS}\otimes V_{E_8}$. This algebra carries a representation of the Virasoro algebra of central charge $4+8=12$. We write $V_{0,n}$ for the subspace of $L_0$-degree $n\in \frac{1}{2}{\mathbb Z}$.

The vector space $E_8 \otimes {\mathbb R}[t^{-1}]t^{-1}$ is an abelian subalgebra of the Heisenberg algebra $E_8 \otimes {\mathbb R}[t,t^{-1}]$. We define $V_R$ as the tensor product of the exterior algebra of $E_8 \otimes {\mathbb R}[t^{-1}]t^{-1}$ with the sum $S \oplus C$ of two $8$-dimensional spaces. The double cover of $Aut(E_8)$ acts in the vector representation on the first tensor factor and in the spinor resp.\ conjugate spinor representation on the second factor. $V_R$ can be given the structure of a $V_{NS}$-module. We decompose $V_R=V_R^+\oplus V_R^-$, where $V_R^+$ is the subspace generated by vectors $d_{-n_1}\wedge \ldots \wedge d_{-n_k}\otimes v$ with $v$ in $C$ if $k$ is even and in $S$ if $k$ is odd, and analogously for $V_R^-$. The projection of $V_R$ on $V_R^+$ is called the GSO projection. We define $V_1=V_{R}^+\otimes V_{E_8}$ and adopt the same notations as for $V_0$.
$V_1$ carries a representation of the Virasoro algebra of central charge $12$. Now $2^8.2.Aut(E_8)$ acts on the fake monster superalgebra $G$ in the following way. We decompose the Cartan subalgebra ${\mathbb R}\otimes II_{9,1}$ by writing $II_{9,1}=E_8 \oplus II_{1,1}$. $2.Aut(E_8)$ acts in the vector representation on the vector space generated by $E_8$. Since $G$ is graded by $II_{9,1}=E_8 \oplus II_{1,1}$, it is also graded by $II_{1,1}$. We denote the corresponding spaces by $G_a$ with $a\in II_{1,1}$ and the even resp.\ odd subspaces by $G_{0,a}$ resp.\ $G_{1,a}$. All these spaces are $E_8$-graded. By the no-ghost theorem (cf.\ \cite{GSW}, \cite{P} or \cite{B}) the even subspace $G_{0,a}$ is isomorphic to $V_{0,(1-a^2)/2}$ and $G_{1,a}$ is isomorphic to $V_{1,(1-a^2)/2}$ as $E_8$-graded $2^8.2.Aut(E_8)$-modules. This gives us a natural action of $2^8.2.Aut(E_8)$ on the fake monster superalgebra.
\section{The twisted denominator identities}
In this section we calculate twisted denominator identities of the fake monster superalgebra corresponding to elements in $2.Aut(E_8)$ of odd order.
The fake monster superalgebra $G$, like any generalized Kac-Moody superalgebra, can be written as a direct sum $E\oplus H \oplus F$, where $H$ is the Cartan subalgebra and $E$ and $F$ are the subalgebras corresponding to the positive and negative roots. We have the standard sequence
\[ \ldots \rightarrow \Lambda^2(E) \rightarrow \Lambda^1(E)
\rightarrow \Lambda^0(E) \rightarrow 0 \]
with homology groups $H_i(E)$.
Note that $E=E_0\oplus E_1$ is a superspace so that the exterior algebra $\Lambda(E)$ is defined as the tensor algebra of $E$ divided by the two-sided ideal generated by $u \otimes v + (-1)^{|u||v|} v\otimes u$.
The Euler-Poincar\'{e} principle implies
\[ \Lambda^*(E) = H(E) \]
where $\Lambda^* (E)=\oplus_{n\geq 0} (-1)^n \Lambda^n (E)$ is the alternating sum of exterior powers of $E$ and $H(E)=\oplus_{n\geq 0} (-1)^n H_n(E)$ is the alternating sum of homology groups of $E$. Both sides of this identity are graded by the root lattice $II_{9,1}$ of $G$ and the homogeneous subspaces are finite dimensional. The homology groups $H_n(E)$ can be calculated in the same way as for Kac-Moody algebras (cf. \cite{B}). The result is that $H_n(E)$ is the subspace of $\Lambda^n (E)$ spanned by the homogeneous vectors of $\Lambda^n (E)$ of degree $\alpha \in II_{9,1}$ with $\alpha^2=0$. We can work out the homology groups of $G$ explicitly because we know the simple roots. Denote the subspace of $H(E)$ with degree $\alpha \in II_{9,1}$ by $H(E)_{\alpha}$ and analogously for $E$. Let $\lambda$ be a primitive norm zero vector in $II_{9,1}^+$. Then
${\mathbb R} \oplus \oplus_{n>0} H(E)_{n \lambda}
= \Lambda^*(\oplus_{n>0}E_{n\lambda})$.
The denominator identity of the fake monster superalgebra now follows easily by calculating the character on both sides of $\Lambda^*(E) = H(E)$.
More generally if $g$ is an automorphism of $G$ then $g$ commutes with the derivations $d_i : \Lambda^i(E) \rightarrow \Lambda^{i-1}(E)$ and we get a sequence\[ \ldots \rightarrow g(\Lambda^2(E)) \rightarrow g(\Lambda^1(E))
\rightarrow g(\Lambda^0(E)) \rightarrow 0 \, . \]
$g$ induces an isomorphism between the homology groups of the two complexes above. Hence we can apply $g$ to both sides of the equation $\Lambda^*(E) = H(E)$ and take the trace. This gives a twisted denominator identity. It depends only on the conjugacy class of $g$ in the automorphism group of $G$.
Let $u$ be an element of $2.Aut(E_8)$ of odd order $N$.
The lattice $E_8$ has a unique central extension $\hat{E}_8$ by $ \{1,-1\}$ such that the commutator of any inverse images of $r,v\in E_8$ is $(-1)^{(r,v)}$. Since $\rho_V(u)$ has odd order, it has a lift to $Aut (\hat{E}_8)=2^8.Aut (E_8)$ such that the $n$-th power of the lift fixes all elements of $\hat{E}_8$ lying over the vectors of $E_8$ fixed by $\rho_V(u)^n$ (cf.\ Lemma 12.1 in \cite{B}). We define $g$ as the automorphism of $G$ induced by this lift.
Let $E_8^u$ be the sublattice of $E_8$ fixed by $\rho_V(u)$. The natural projection
$\pi :{\mathbb R} \otimes E_8 \rightarrow
{\mathbb R} \otimes E_8^u$
maps $E_8$ onto the dual lattice $E_8 ^{u*}$ because $E_8$ is unimodular. We define the Lorentzian lattice $L=E_8^u\oplus II_{1,1}$ with dual lattice $L^* = E_8 ^{u*} \oplus II_{1,1}$ and denote the closures of the canonical positive cones by $L^+$ and $L^{*+}$.
The fake monster superalgebra has a natural $L^*$-grading.
For $\alpha=(r^*,a)\in L^*$ we define
\[ \tilde{E}_{0,\alpha} = \oplus_{\pi(r)=r^*} E_{0,(r,a)} \]
and analogously for $\tilde{E}_{1,\alpha}$.
Let $\varepsilon_i$ and $\sigma_i$ denote the eigenvalues of $\rho_V(u)$ and $\rho_L(u)$, respectively.
Then we have
\begin{th1}
The twisted denominator identity corresponding to $g$ is given by
\[ \prod_{\alpha \in L^{*+}}
\frac{ (1-e^{\alpha})^{ {mult}_0({\alpha}) }}
{ (1+e^{\alpha})^{ {mult}_1({\alpha}) }} =
1 + \sum a(\lambda)e^{\lambda} \]
where
\begin{eqnarray*}
\mbox{mult}_0({\alpha}) &=&
\sum_{ds|(({\alpha},L),N)} \frac{\mu(s)}{ds}\, tr(g^d|\tilde{E}_{0,{\alpha}/ds}) \\
\mbox{mult}_1({\alpha}) &=&
\sum_{ds|(({\alpha},L),N)} \frac{\mu(s)}{ds}\, tr(g^d|\tilde{E}_{1,{\alpha}/ds})
\end{eqnarray*}
and $a(\lambda)$ is the coefficient of $q^m$ in
\[ \prod_{n\geq 1} \prod_{\: 1\leq i\leq 8}
\frac{(1-\varepsilon_i q^n)}
{(1+\sigma_i q^n)} \]
if $\lambda$ is $m$ times a primitive norm zero vector in $L^+$ and zero otherwise.
\end{th1}
{\em Proof:} We consider both sides of $\Lambda^*(E) = H(E)$ as $L^*$-graded $g$-modules. $\Lambda^*(E)$ is isomorphic to $\Lambda^*(E_0) \otimes S^*(E_1)$ if we forget the superstructure on $E_1$. First we calculate the trace of $g$ on $S^*(E_1)$. For that we recall some formulas. Let $V$ be a finite dimensional vector space. Then we have
\[ \sum _{n\geq 0}(-1)^n (dim \, S^n(V)) q^n = (1+q)^{-dim V}
= exp \, \Big\{ \sum_{n>0}(-1)^n (dim \,V) q^n/n \Big\} \, . \]
The second equality can be proven by taking logarithms and using the expansion $log (1+q) =- \sum_{n>0} (-1)^n q^n/n$.
This can be generalized to
\[ \sum _{n\geq 0}(-1)^n \,tr(g|S^n(V)) q^n
= exp \, \Big\{ \sum_{n>0} (-1)^n tr(g^n|V) q^n/n \Big\} \, . \]
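This generalized formula can be illustrated numerically (our addition, not part of the original argument): for a diagonalizable $g$, $tr(g|S^n(V))$ is the complete homogeneous symmetric polynomial in the eigenvalues, so both sides become computable truncated power series.

```python
import math

N = 12  # truncation order for formal power series in q

def series_mul(a, b):
    c = [0.0] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

def series_exp(a):
    # exponential of a power series with vanishing constant term
    assert a[0] == 0
    r = [0.0] * N
    r[0] = 1.0
    term = [0.0] * N
    term[0] = 1.0
    for k in range(1, N):
        term = series_mul(term, a)  # term = a^k
        for i in range(N):
            r[i] += term[i] / math.factorial(k)
    return r

eig = [0.5, -0.25, 2.0]  # eigenvalues of g on a 3-dimensional V

# left side: sum_n (-1)^n tr(g|S^n V) q^n = prod_i 1/(1 + eig_i q)
lhs = [0.0] * N
lhs[0] = 1.0
for lam in eig:
    geom = [(-lam) ** i for i in range(N)]  # expansion of 1/(1 + lam q)
    lhs = series_mul(lhs, geom)

# right side: exp( sum_{n>0} (-1)^n tr(g^n|V) q^n / n )
arg = [0.0] * N
for n in range(1, N):
    arg[n] = (-1) ** n * sum(lam ** n for lam in eig) / n
rhs = series_exp(arg)

assert max(abs(l - r) for l, r in zip(lhs, rhs)) < 1e-9
```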
Another formula we need is
$S^*(V_1 \oplus V_2) = S^*(V_1) \otimes S^*(V_2)$. Using these two formulas we find that the trace of $g$ on $S^*(E_1)$ is given by
\[ exp \, \Big\{
\sum_{\beta \in L^{*+}} \sum_{n>0} (-1)^n \,tr(g^n|\tilde{E}_{1,\beta} )
e^{n\beta}/n \Big\} \, . \]
We want to express the trace in the form
\[ \prod_{\beta \in L^{*+}}(1+e^{\beta})^{ - mult_1({\beta}) }=
\mbox{\it exp} \,
\Big\{ \sum_{\beta \in L^{*+}} \sum_{n>0} (-1)^n mult_1(\beta)e^{n\beta}/n
\Big\} \, . \]
Taking logarithms and comparing coefficients at $e^{\alpha}$, this implies
\[ \sum_{\beta \in L^{*+} \atop n\beta = \alpha }
(-1)^n \,tr(g^n|\tilde{E}_{1,\beta} )/n =
\sum_{\beta \in L^{*+} \atop n\beta = \alpha }
(-1)^n mult_1(\beta) /n \, . \]
The vector $\alpha \in L^*$ is a positive multiple $m$ of a primitive vector in $L^*$. This vector generates a lattice that we can identify with ${\mathbb Z}$. Then the right hand side of the last equation is the convolution product of the arithmetic functions $h(n)=(-1)^n/n$ and $mult_1(n)$. Hence
\[
mult_1(\alpha)= \sum_{ds|m} h^{* -1}(s) (-1)^d\, tr(g^d|\tilde{E}_{1,\alpha /ds})/d
\, . \]
Let $\mu$ be the M\"obius function and $f(n)$ the arithmetic function which is zero if $n$ is divisible by the square of an odd prime and otherwise equal to $(-1)^k/(p_1\cdots p_k)$, where the $p_i$ are the distinct primes dividing $n$. Then the convolution inverse of $h(n)$ is given by
\[ h^{* -1}(n) = \left\{ \begin{array}{cl}
(\mu h) (n) & \quad n \quad \mbox{odd} \\
f(n) & \quad n \quad \mbox{even}
\end{array}
\right. \]
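The piecewise formula can be checked directly against the defining property of the Dirichlet convolution inverse; the following sketch (our addition, using exact rational arithmetic) confirms it for small $n$.

```python
from fractions import Fraction

def mu(n):
    # Möbius function
    result, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0
            result = -result
        p += 1
    if m > 1:
        result = -result
    return result

def h(n):
    return Fraction((-1) ** n, n)

def f(n):
    # zero if n is divisible by the square of an odd prime, otherwise
    # (-1)^k / (p_1 ... p_k) over the distinct primes p_i dividing n
    sign, rad, m, p = 1, 1, n, 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            if p % 2 == 1 and e >= 2:
                return Fraction(0)
            sign, rad = -sign, rad * p
        p += 1
    if m > 1:
        sign, rad = -sign, rad * m
    return Fraction(sign, rad)

def h_inv(n):
    return mu(n) * h(n) if n % 2 == 1 else f(n)

# the Dirichlet convolution h * h^{-1} must be the identity:
# value 1 at n = 1 and 0 for n > 1
for n in range(1, 101):
    conv = sum(h(d) * h_inv(n // d) for d in range(1, n + 1) if n % d == 0)
    assert conv == (1 if n == 1 else 0)
```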
$g$ has the same order as $u$ because $N$ is odd. The trace $tr(g^d|\tilde{E}_{1,\alpha})$ depends only on $\alpha$ and $(d,N)$. This implies that the sum in the expression for $mult_1(\alpha)$ extends only over $ds|(m,N)$. It is easy to see that $m$ is equal to the highest common factor $(\alpha,L)$ of the numbers
$(\alpha,\beta),\, \beta \in L$. Hence we get the following formula
\[
mult_1(\alpha) = \sum_{ds|((\alpha,L),N)} \mu(s)\, tr(g^d|\tilde{E}_{1,\alpha/ds})/ds
\, . \]
Note that this formula is wrong for even $N$. This is another reason why we restrict to elements in $2.Aut(E_8)$ of odd order.
If we express the contributions of $\Lambda^*(E_0)$ to the trace in the form
\[ \prod_{\alpha \in L^{*+}}(1-e^{\alpha})^{ mult_0(\alpha) } \]
then we find
\[ mult_0 (\alpha)= \sum_{ds|((\alpha,L),N)} h^{* -1}(s)\,
tr(g^d|\tilde{E}_{0,\alpha/ds})/d \]
with $h(n)=1/n$. This function is completely multiplicative, so that its convolution inverse is given by $\mu h$. An argument as above then gives the expression for $mult_0(\alpha)$.
Next we calculate the trace of $g$ on $H(E)$. We have
\[ H(E) = \oplus_{\alpha \in L^{*+}} \tilde{H}(E)_{\alpha} \] with
\[ \tilde{H}(E)_{\alpha}=\oplus_{\pi(r)=r^*} H(E)_{(r,a)} \]
for $\alpha=(r^*,a)$ in $L^{*+}$.
Clearly $tr(g|\tilde{H}(E)_{\alpha})$ is zero if $\alpha$ is not in $L$ and
$tr(g|\tilde{H}(E)_{\alpha})=tr(g|H(E)_{\alpha})$ for $\alpha\in L$.
We can also restrict to $\alpha^2=0$. Let $\alpha\in L^+ \subset II_{9,1}^+$ be $m$ times a primitive norm zero vector $\lambda$ in $II_{9,1}^+$. Then $\lambda$ is also a primitive vector in $L$. $H(E)_{\alpha}$ is determined by
\[ {\mathbb R} \oplus \oplus_{n>0} H(E)_{n \lambda}
= \Lambda^*(\oplus_{n>0}E_{0,n\lambda}) \otimes
S^*(\oplus_{n>0}E_{1,n\lambda}) \, . \]
$g$ acts by $\rho_V(u)$ on $E_{0,n\lambda}\cong {\mathbb R}\otimes E_8$ and by
$\rho_L(u)$ on $E_{1,n\lambda}\cong C$ (cf.\ section 2).
Going over to complexifications this implies that $tr(g|H(E)_{\alpha})$ is given by the coefficient of $q^m$ in
\[ 1 + \sum_{n \geq 1} tr(g|H(E)_{n \lambda}) q^n =
\prod_{n\geq 1} \prod_{\: 1\leq i\leq 8}
\frac{(1-\varepsilon_i q^n)}{(1+\sigma_i q^n)} \, . \]
This finishes the proof of the theorem.
For $u=1$ the theorem gives the denominator identity of the fake monster superalgebra.
If the multiplicities are nonnegative integers then the twisted denominator identity is the untwisted denominator identity of a generalized Kac-Moody superalgebra with the following properties. The root lattice is $L^*$ or a sublattice thereof. The algebra has no real roots so that the Weyl group is trivial. The multiplicities of a root $\alpha$ are given by ${mult}_0({\alpha})$ and ${mult}_1({\alpha})$. There are no real simple roots and the imaginary simple roots are the norm zero vectors in $L^{*+}$. The Weyl vector is zero. We describe the multiplicities of the simple roots. Let $\alpha = n \lambda$ be a simple root where $\lambda$ is a primitive vector in $L^+$. Suppose that $\rho_V(u)$ and $\rho_L(u)$ have cycle shapes $a_1^{b_1}\ldots a_k^{b_k}$ and $c_1^{d_1}\ldots c_l^{d_l}$. Then the multiplicities of $\alpha$ as even and odd root are given by
$\sum_{a_i | n} b_i$ and $\sum_{c_i | n} d_i$.
The formulas for the multiplicities simplify if $u$ has in addition prime order.
In this case we only need to calculate the trace of $g$ on $\tilde{E}_{0,\alpha}$ and $\tilde{E}_{1,\alpha}$ with $\alpha \in L^*$ and the dimensions of these spaces. We find
\begin{pth1} \label{hel}
For $\alpha \in L^*$ the trace $tr(g|\tilde{E}_{0,\alpha})$ is given by the coefficient of $q^{(1-\alpha^2)/2}$ in
\[ \frac{1}{2}\left(
\prod_{n \geq 1} \prod_{\: 1 \leq i \leq 8}
\frac{(1+\varepsilon_i q^{n-1/2})}
{(1-\varepsilon_iq^n)}
- \prod_{n \geq 1} \prod_{\: 1 \leq i \leq 8}
\frac{(1-\varepsilon_i q^{n-1/2})}
{(1-\varepsilon_iq^n)}
\right) \]
if $\alpha \in L$ and zero otherwise.
Suppose that $tr (\rho_L(u))=tr (\rho_R(u))$. Then $tr(g|\tilde{E}_{1,\alpha})$ is the coefficient
of $q^{(1-\alpha^2)/2}$ in
\[ tr (\rho_L(u)) \: q^{1/2}
\prod_{n \geq 1} \prod_{\: 1 \leq i \leq 8}
\frac{(1+\varepsilon_iq^n)}
{(1-\varepsilon_iq^n)}
\]
if $\alpha \in L$ and zero otherwise.
\end{pth1}
{\em Proof:} Clearly the trace of $g$ is zero on $\tilde{E}_{0,\alpha}$ if $\alpha$ is not in $L$. For $\alpha \in L$ we have $tr(g|\tilde{E}_{0,\alpha})=tr(g|E_{0,\alpha})$. Write $\alpha = (r,a)$. Then $E_{0,\alpha}$ is isomorphic as $g$-module to the subspace of $V_{0,(1-a^2)/2}$ of degree $r$ (cf. section 2). This space is generated by products of fermionic and bosonic oscillators and $e^r$. The sum of the $L_0$-contribution of the oscillators and the $L_0$-contribution of $e^r$ is $\frac{1}{2}-\frac{1}{2}a^2$. The vector $e^r$ has $L_0$-eigenvalue $\frac{1}{2}r^2$ so that the $L_0$-contribution of the oscillators is $ \frac{1}{2}-\frac{1}{2}a^2-\frac{1}{2} r^2 = \frac{1}{2}-\frac{1}{2}\alpha^2$. Now we go over to complexifications and choose a basis of ${\mathbb C}\otimes E_8$ in which $\rho_V(u)$ is diagonal. Then the trace of $g$ on $E_{0,\alpha}$ is given by the coefficient of $q^{(1-\alpha^2)/2}$ in
\[ \prod_{n \geq 1} \prod_{\: 1 \leq i \leq 8}
\frac{(1+\varepsilon_i q^{n-1/2})}
{(1-\varepsilon_iq^n)} \,. \]
Since we only need the half integral exponents of $q$ in this expression we can subtract the integral exponents. This proves the first statement.
The argument for the second statement is similar. Note that there are two types of ground states on which $u$ may act differently. We avoid this problem by assuming that $u$ has in both representations the same trace.
This proves the proposition.
\begin{pth1} \label{de}
Let $\alpha=(r^*,a)\in L^*$ and let $r^{\bot *}\in E_8^{u \bot *}$ be such that $r^* + r^{\bot *} \in E_8$. Then the common dimension of $\tilde{E}_{0,\alpha}$ and $\tilde{E}_{1,\alpha}$ is given by the coefficient of $q^{(1-\alpha^2)/2}$ in
\[ 8 q^{1/2} \frac{\eta(q^2)^8}{\eta(q)^{16}}\:
\theta_{r^{\bot *} +E_8^{u \bot }}(q) \]
where $\theta_{r^{\bot *}+E_8^{u \bot }}(q)$ is the theta function of the translated lattice $r^{\bot *}+E_8^{u \bot }$.
\end{pth1}
{\em Proof:} Clearly $\tilde{E}_{0,\alpha}$ and $\tilde{E}_{1,\alpha}$ have the same dimension. The inverse image of $r^*$ in $E_8$ under $\pi$ is $r^*+(r^{\bot *}+E_8^{u \bot })$. Hence $\tilde{E}_{1,\alpha}=\oplus_{\pi(r)=r^*} E_{1,(r,a)}$ is isomorphic to the direct sum of the $V_{1,(1-a^2)/2}(r^*+s)$ where $s$ is in the translated lattice $r^{\bot *}+E_8^{u \bot}$. As above the sum of the $L_0$-contribution of the bosonic and fermionic oscillators and $\frac{1}{2}s^2$ is $\frac{1}{2}-\frac{1}{2}\alpha^2$. The proposition now follows from simple counting.
We will use the shorter notation $\theta_{r^{\bot *}}(q)$ in the following.
\section{Two supersymmetric algebras}
In this section we calculate explicitly twisted denominator identities corresponding to elements in $2.Aut(E_8)$ of order 3 and 7. These identities are the untwisted denominator identities of 2 supersymmetric generalized Kac-Moody superalgebras.
We choose an orthonormal basis $\{e_0,e_1,\ldots,e_7 \}$ of ${\mathbb R}^8$ and embed the lattice $E_8$ as the set of points $\sum m_i e_i$ where all $m_i$ are in $\mathbb Z$ or all $m_i$ are in $\mathbb Z+\frac{1}{2}$ and $\sum m_i$ is even. In these coordinates the automorphism group of $E_8$ is generated by the permutations of the coordinates, even sign changes and an involution generated by the Hadamard matrix. We also identify ${\mathbb R}^8$ with the alternative algebra ${\mathbb O}$ of octonions by defining $e_0$ as the identity and $e_i e_j= a_{ijk}e_k-\delta_{ij}1$ for $1\leq i,j \leq 7$ where $a_{ijk}$ is the totally antisymmetric tensor with
$a_{ijk}=1$ for $ijk=123, 154, 264, 374, 176, 257, 365$.
The element $u=\frac{1}{4}1(e_2-e_3)1(e_1-e_2)1(e_6-e_7)1(e_5-e_6)$ in $2.Aut(E_8)$ has order $3$. It is easy to check that the transformations corresponding to $\rho_V(u),\rho_L(u)$ and $\rho_R(u)$ are all equal and
\[ \renewcommand{\arraystretch}{1.3}
\begin{array}{llll}
\rho_V(u)1=1 & \rho_V(u)e_1 = e_3 & \rho_V(u)e_2 = e_1 & \rho_V(u)e_3 = e_2 \\
\rho_V(u)e_4=e_4 & \rho_V(u)e_5 = e_7 & \rho_V(u)e_6 = e_5
& \rho_V(u)e_7 = e_6 \, .
\end{array} \]
Hence $\rho_V(u),\rho_L(u)$ and $\rho_R(u)$ all have cycle shape $1^23^2$.
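These facts can be checked by machine. The sketch below (our addition; helper names are ours) builds the octonion product from the structure constants above, verifies that $N$ permits composition, and confirms that the permutation given for $\rho_V(u)$, which here coincides with $\rho_L(u)$ and $\rho_R(u)$, is an algebra automorphism of order $3$, as triality predicts.

```python
import itertools, random

# Fano triples with a_{ijk} = 1, taken from the structure constants above
triples = [(1,2,3), (1,5,4), (2,6,4), (3,7,4), (1,7,6), (2,5,7), (3,6,5)]

# table[(i,j)] = (k, sign): e_i e_j = sign * e_k for distinct i, j >= 1
table = {}
for t in triples:
    for perm in itertools.permutations(range(3)):
        i, j, k = t[perm[0]], t[perm[1]], t[perm[2]]
        inv = sum(perm[a] > perm[b] for a in range(3) for b in range(a + 1, 3))
        table[(i, j)] = (k, (-1) ** inv)  # total antisymmetry of a_{ijk}

def mult(x, y):
    # octonion product in the basis e_0 = 1, e_1, ..., e_7
    z = [0] * 8
    for i in range(8):
        if not x[i]:
            continue
        for j in range(8):
            c = x[i] * y[j]
            if not c:
                continue
            if i == 0:
                z[j] += c
            elif j == 0:
                z[i] += c
            elif i == j:
                z[0] -= c  # e_i^2 = -1 for i >= 1
            else:
                k, s = table[(i, j)]
                z[k] += s * c
    return z

N = lambda x: sum(v * v for v in x)

# N permits composition: N(xy) = N(x) N(y)
rng = random.Random(0)
for _ in range(100):
    x = [rng.randint(-3, 3) for _ in range(8)]
    y = [rng.randint(-3, 3) for _ in range(8)]
    assert N(mult(x, y)) == N(x) * N(y)

# the permutation rho_V(u): e_1 -> e_3 -> e_2 -> e_1, e_5 -> e_7 -> e_6 -> e_5
sigma = [0, 3, 1, 2, 4, 7, 5, 6]
def apply_sigma(x):
    z = [0] * 8
    for i in range(8):
        z[sigma[i]] += x[i]
    return z

# since rho_V = rho_L = rho_R, triality makes sigma an algebra automorphism
basis = [[int(i == j) for j in range(8)] for i in range(8)]
for a in basis:
    for b in basis:
        assert apply_sigma(mult(a, b)) == mult(apply_sigma(a), apply_sigma(b))
```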
The following proposition collects some results on $E^u_8$.
\begin{p1}
The sublattice $E^u_8$ of $E_8$ fixed by $\rho_V(u)$ is the 4-dimensional lattice with elements $(m_1,m_2,m_3,m_4)$ where all $m_i$ are in $\mathbb Z$ or all $m_i$ are in $\mathbb Z+\frac{1}{2}$ and $\sum m_i$ is even. The norm is $(m_1,m_2,m_3,m_4)^2=m_1^2+m_2^2+3m_3^2+3m_4^2$. $E^u_8$ has determinant $3^2$ and the quotient $E_8^{u*}/E_8^u$ is ${\mathbb Z}_3\times {\mathbb Z}_3$. The level of $E_8^u$ is $3$ so that $3E_8^{u*}$ is a sublattice of $E^u_8$. The orthogonal complement of $E^u_8$ in $E_8$ is $A_2 \oplus A_2$.
\end{p1}
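The stated invariants can be confirmed from an explicit basis. The basis below is our own choice for illustration (the proposition does not single one out); its Gram matrix with respect to the form $m_1^2+m_2^2+3m_3^2+3m_4^2$ has determinant $9=3^2$, and $3G^{-1}$ is even integral, consistent with level $3$.

```python
from fractions import Fraction

# a candidate Z-basis of the fixed lattice (our choice, for illustration)
basis = [
    (Fraction(1), Fraction(1), Fraction(0), Fraction(0)),
    (Fraction(1), Fraction(-1), Fraction(0), Fraction(0)),
    (Fraction(0), Fraction(1), Fraction(0), Fraction(1)),
    (Fraction(1, 2), Fraction(1, 2), Fraction(1, 2), Fraction(1, 2)),
]

# membership: all entries integral or all half-integral, coordinate sum even
for v in basis:
    ints = all(x.denominator == 1 for x in v)
    halves = all(x.denominator == 2 for x in v)
    assert ints or halves
    assert sum(v) % 2 == 0

weights = (1, 1, 3, 3)  # quadratic form m1^2 + m2^2 + 3 m3^2 + 3 m4^2

def dot(v, w):
    return sum(a * b * c for a, b, c in zip(v, w, weights))

G = [[dot(v, w) for w in basis] for v in basis]

def det(M):
    # Laplace expansion, exact with Fractions (fine for a 4x4 matrix)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

d = det(G)
assert d == 9  # determinant 3^2, as stated

def inverse(M):
    # Gauss-Jordan elimination with exact rationals
    n = len(M)
    A = [row[:] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

threeGinv = [[3 * x for x in row] for row in inverse(G)]
assert all(x.denominator == 1 for row in threeGinv for x in row)
assert all(threeGinv[i][i] % 2 == 0 for i in range(4))
```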
We recall some results about modular forms. The congruence subgroup of $SL_2({\mathbb Z})$ of level $3$ is defined as
$\Gamma(3)=\{ \big( {a \atop c} {b \atop d} \big) \in SL_2({\mathbb Z}) \, |\,
\big( {a \atop c} {b \atop d} \big) = \big( {1 \atop 0} {0 \atop 1} \big)
\mbox{ mod } 3 \, \}$.
The vector space of modular forms for $\Gamma(3)$ with even positive weight $n$ has dimension $n+1$. A modular form in this vector space is zero if and only if the coefficients of $q^0,q^{1/3},\ldots,q^{n/3}$ in its Fourier expansion are zero.
For $r\in A_2^*\oplus A_2^*$ we define $\delta(r)=1$ if $r\in A_2 \oplus A_2$ and $\delta(r)=0$ otherwise. We have
\begin{pp1} \label{tta}
Let $r\in A_2^*\oplus A_2^*$. Then
\[ \theta_{r+ A_2\oplus A_2}(q) =
\frac{1}{4} \, \frac{\eta(q)^{12}}{\eta(q^2)^6}
\left\{
\frac{\eta(q^6)^2}{\eta(q^3)^4}\delta(r)
+ \sum_{j=0}^2 \varepsilon^{-3jr^2/2}
\frac{ \eta\!\left( (\varepsilon^jq^{1/3})^2 \right)^2 }
{ \eta\!\left( \varepsilon^jq^{1/3} \right)^4 }
\right\} \]
where $\varepsilon = e^{2\pi i/3}$.
\end{pp1}
{\em Proof:} $A_2\oplus A_2$ has dimension $4$ and level $3$ so that $\theta_{r+ A_2\oplus A_2}$ is a modular form for $\Gamma(3)$ of weight $2$. The same holds for the right hand side of the formula which can be shown by calculating the transformations under a set of generators. The proposition follows from comparing the coefficients at $q^0,q^{1/3}$ and $q^{2/3}$.
The next identity is a twisted version of Jacobi's identity.
\begin{pp1} \label{hnel}
Let $|q|<1$. Then
\[ \frac{1}{2q^{1/2}}
\left\{
\prod_{n \geq 1}(1+q^{3n-3/2})^2(1+q^{n-1/2})^2
-\prod_{n \geq 1}(1-q^{3n-3/2})^2(1-q^{n-1/2})^2
\right\} \]
\[ = 2 \prod_{n \geq 1} (1+q^{3n})^2(1+q^n)^2 \, . \]
\end{pp1}
{\em Proof:} This is an identity between modular forms and therefore can be proven by comparing sufficiently many coefficients in their Fourier expansions.
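Concretely, the coefficient comparison can be mechanized. The sketch below (our addition) expands both sides as truncated integer series in $x=q^{1/2}$ and checks agreement to high order:

```python
M = 40  # truncation degree in x, where q = x^2

def pmul(a, b):
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < M:
                    c[i + j] += ai * bj
    return c

def product(exponents, sign, power):
    # truncated product of (1 + sign * x^e)^power over the exponents
    r = [0] * M
    r[0] = 1
    for e in exponents:
        f = [0] * M
        f[0], f[e] = 1, sign
        for _ in range(power):
            r = pmul(r, f)
    return r

# q^(3n - 3/2) -> x^(6n - 3) and q^(n - 1/2) -> x^(2n - 1), each squared
odd = [e for n in range(1, M) for e in (6 * n - 3, 2 * n - 1) if e < M]
plus, minus = product(odd, 1, 2), product(odd, -1, 2)

# left side: (plus - minus) / (2x); the difference is twice the odd part
lhs = [(p - m) // 2 for p, m in zip(plus[1:], minus[1:])]

# right side: 2 * prod (1 + q^(3n))^2 (1 + q^n)^2 with q = x^2
even = [e for n in range(1, M) for e in (6 * n, 2 * n) if e < M]
rhs = [2 * v for v in product(even, 1, 2)]

assert lhs[:M - 2] == rhs[:M - 2]
```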
The proposition implies that the generating functions for $tr(g|\tilde{E}_{0,\alpha})$ and $tr(g|\tilde{E}_{1,\alpha})$ are equal.
Define $c(n)$ by
\[ \sum_{n\geq 0} c(n)q^n
= 2\, \prod_{n \geq 1} \frac{(1+q^{3n})^2(1+q^n)^2 }{(1-q^{3n})^2(1-q^n)^2 }
= 2\, \frac{\eta(q^6)^2\eta(q^2)^2}{\eta(q^3)^4\eta(q)^4} \]
\[ = 2 + 8q + 24q^2 + 72q^3 + 184q^4 + 432q^5 + 984q^6 + 2112q^7 + \ldots \]
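The printed coefficients can be reproduced by direct expansion (a numerical check we add here, not part of the original text):

```python
M = 8  # coefficients of q^0, ..., q^7

def pmul(a, b):
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < M:
                    c[i + j] += ai * bj
    return c

series = [0] * M
series[0] = 2
for e in [x for n in range(1, M) for x in (n, 3 * n) if x < M]:
    num = [0] * M
    num[0], num[e] = 1, 1        # (1 + q^e)
    den = [0] * M                # 1/(1 - q^e) = 1 + q^e + q^(2e) + ...
    for k in range(0, M, e):
        den[k] = 1
    for _ in range(2):           # each factor appears squared
        series = pmul(pmul(series, num), den)

assert series == [2, 8, 24, 72, 184, 432, 984, 2112]
```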
Let $g$ be the automorphism induced by $u$. Then we have
\begin{thp1}
The twisted denominator identity corresponding to $g$ is
\[ \prod_{\alpha\in L^{+}}
\frac{ (1-e^{\alpha})^{ c(-\alpha^2/2) }}
{ (1+e^{\alpha})^{ c(-\alpha^2/2) }}
\prod_{\alpha \in L^{+}\cap 3L^*}
\frac{ (1-e^{\alpha})^{ c(-\alpha^2/6) }}
{ (1+e^{\alpha})^{ c(-\alpha^2/6) }}
= 1 + \sum a(\lambda)e^{\lambda} \]
where $a(\lambda)$ is the coefficient of $q^n$ in
\[ \prod_{n \geq 1}\frac{(1-q^{3n})^2(1-q^n)^2}{(1+q^{3n})^2(1+q^n)^2}
= 1 - 4q + 4q^2 - 4q^3 + 20q^4 - 24q^5 + 4q^6 - \ldots \]
if $\lambda$ is $n$ times a primitive norm zero vector in $L^+$ and zero otherwise.
\end{thp1}
{\em Proof:} Recall that $3L^*\subset L$ because $L$ has level 3. Furthermore
$3$ divides $(\alpha,L)$ if and only if $\alpha \in 3L^*$.
We now consider four cases. \\
$\alpha\notin L$.
Then $\alpha\notin 3L^*$ and by Proposition \ref{hel} the multiplicities $mult_0(\alpha)$ and $mult_1(\alpha)$ are zero.\\
$\alpha \in L$ and $\alpha\notin 3L^*$.
Then $mult_0(\alpha)=tr(g|\tilde{E}_{0,\alpha})$ and $mult_1(\alpha)=tr(g|\tilde{E}_{1,\alpha})$. Using Propositions \ref{hel} and \ref{hnel} we find $mult_0(\alpha)=mult_1(\alpha)=c(-\alpha^2/2)$.\\
$\alpha\in 3L^*$ and $\alpha \notin 3L$.
Then $mult_0(\alpha)=tr(g|\tilde{E}_{0,\alpha})+tr(1|\tilde{E}_{0,\alpha/3})/3$. The first term gives $c(-\alpha^2/2)$. Write $\alpha/3=(r^*,a)$ where $r^*$ is in $E_8^{u*}$ but not in $E_8^{u}$ and choose $r^{*\bot}$ as in Proposition \ref{de}. Then the dimension of $\tilde{E}_{0,\alpha/3}$
is the coefficient of $q^{-\alpha^2/18}$ in
$8 \eta(q^2)^8 \theta_{r^{\bot *}}(q)/{\eta(q)^{16}}$
or equivalently the coefficient of $q^{-\alpha^2/6}$ in
$8 \eta(q^6)^8 \theta_{r^{\bot *}}(q^3)/ \eta(q^3)^{16}$.
Note that $\alpha^2\in 6 {\mathbb Z}$. We have
$\alpha^2/9 = r^{*2} = - r^{\bot*2}$ mod $2$ and
$\alpha^2/6 = - 3 r^{\bot*2}/2$ mod $3$ so that
by Proposition \ref{tta}
\[ \theta_{r^{\bot *}}(q^3) =
\frac{1}{4} \, \frac{\eta(q^3)^{12}}{\eta(q^6)^6}
\sum_{j=0}^2 \varepsilon^{j\alpha^2/6}
\, \frac{ \eta\left( (\varepsilon^j q)^2 \right)^2 }
{ \eta\left( \varepsilon^j q \right)^4 }
\, . \]
The coefficient of $q^{-\alpha^2/6}$ in $8 \eta(q^6)^8 \theta_{r^{\bot *}}(q^3)/ \eta(q^3)^{16}$ is equal to the coefficient of $q^{-\alpha^2/6}$ in
\[ 6 \, \frac{\eta(q^6)^2\eta(q^2)^2}{\eta(q^3)^4\eta(q)^4}\, . \]
This shows that $mult_0(\alpha) = c(-\alpha^2/2) + c(-\alpha^2/6)$.
The result for $mult_1(\alpha)$ is clear. \\
$\alpha \in 3L$. Then
$mult_0(\alpha)=tr(g|\tilde{E}_{0,\alpha})-tr(g|\tilde{E}_{0,\alpha/3})/3
+ tr(1|\tilde{E}_{0,\alpha/3})/3$.
Here we have an additional term in $\theta_{r^{\bot *}}$ which cancels exactly with the term from $tr(g|\tilde{E}_{0,\alpha/3})$ so that we get the same result for $mult_0(\alpha)$ as in the previous case. The result for $mult_1(\alpha)$ is again clear. \\
This proves the theorem.
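The series for $a(\lambda)$ appearing in the theorem can likewise be checked by direct expansion (our addition):

```python
M = 7  # coefficients of q^0, ..., q^6

def pmul(a, b):
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < M:
                    c[i + j] += ai * bj
    return c

series = [0] * M
series[0] = 1
for e in [x for n in range(1, M) for x in (n, 3 * n) if x < M]:
    num = [0] * M
    num[0], num[e] = 1, -1       # (1 - q^e)
    den = [0] * M                # 1/(1 + q^e) = 1 - q^e + q^(2e) - ...
    for k, idx in enumerate(range(0, M, e)):
        den[idx] = (-1) ** k
    for _ in range(2):           # each factor appears squared
        series = pmul(pmul(series, num), den)

assert series == [1, -4, 4, -4, 20, -24, 4]
```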
Since the multiplicities are all nonnegative integers there is a generalized Kac-Moody superalgebra whose denominator identity is the identity given in the theorem.
\begin{cp1}
There is a generalized Kac-Moody superalgebra with root lattice $L$ and root multiplicities given by
\[ mult_0(\alpha) = mult_1(\alpha) = c(-\alpha^2/2)
\qquad \alpha\in L,\: \alpha \notin 3L^* \]
and
\[ mult_0(\alpha) = mult_1(\alpha) = c(-\alpha^2/2)+ c(-\alpha^2/6)
\qquad \alpha\in 3L^*\,. \]
The simple roots are the norm zero vectors in $L^+$. Their multiplicities as even and odd roots are equal. Let $\lambda$ be a simple root. Then $mult_0(\lambda)=4$ if $\lambda$ is $3n$ times a primitive vector in $L^+$ and $mult_0(\lambda)=2$ otherwise.
\end{cp1}
Like the fake monster superalgebra, this algebra is supersymmetric and has no real roots. The Weyl group is trivial and the Weyl vector is zero. The supersymmetry is a consequence of the twisted Jacobi identity in Proposition \ref{hnel}. In a forthcoming paper we show that the denominator function of this algebra defines an automorphic form of weight $2$ for a discrete subgroup of $O_{6,2}({\mathbb R})$.
Now we consider the next example. The analysis is similar to the above one. The element $u=\frac{1}{8}1(e_6-e_7)1(e_5-e_6)1(e_4-e_5)1(e_3-e_4)1(e_2-e_3)1(e_1-e_2)$ in $2.Aut(E_8)$ has order $7$. The transformations corresponding to $\rho_V(u),\rho_L(u)$ and $\rho_R(u)$ are all equal and
\[ \renewcommand{\arraystretch}{1.3}
\begin{array}{llll}
\rho_V(u)1=1 & \rho_V(u)e_1 = e_7 & \rho_V(u)e_2 = e_1 & \rho_V(u)e_3 = e_2 \\
\rho_V(u)e_4=e_3 & \rho_V(u)e_5 = e_4 & \rho_V(u)e_6 = e_5
& \rho_V(u)e_7 = e_6 \, .
\end{array} \]
Hence $\rho_V(u),\rho_L(u)$ and $\rho_R(u)$ have cycle shape $1^17^1$. We have
\begin{pp1}
The sublattice $E^u_8$ of $E_8$ fixed by $\rho_V(u)$ is the 2-dimensional lattice with elements $(m_1,m_2)$, where either $m_1$ and $m_2$ are in $\mathbb Z$ and $m_1+m_2$ is even or $m_1$ and $m_2$ are in $\mathbb Z+\frac{1}{2}$ and $m_1+m_2$ is odd, and norm $(m_1,m_2)^2=m_1^2+7m_2^2$. The quotient $E_8^{u*}/E_8^u$ is ${\mathbb Z}_7$ and $E^u_8$ has level $7$. The orthogonal complement of $E^u_8$ in $E_8$ is isomorphic to $A_6$.
\end{pp1}
The theta function $\theta_{r+ A_6}$ of a coset $r+A_6$ of $A_6$ in its dual depends only on $r^2$ mod $2$.
\begin{pp1}
Let $r\in A_6^*$. Then
\[ \theta_{r+ A_6}(q) =
\frac{1}{8}\, \frac{\eta(q)^{14}}{\eta(q^2)^7}
\left\{
\frac{\eta(q^{14})}{\eta(q^7)^2}\delta(r)
+ \sum_{j=0}^6 \varepsilon^{-7jr^2/2}
\frac{ \eta\!\left( (\varepsilon^jq^{1/7})^2 \right) }
{ \eta\!\left( \varepsilon^jq^{1/7} \right)^2 }
\right\} \]
where $\varepsilon = e^{2\pi i/7}$.
\end{pp1}
The following supersymmetry relation holds.
\begin{pp1}
Let $|q|<1$. Then
\[ \frac{1}{2q^{1/2}}
\left\{
\prod_{n \geq 1}(1+q^{7n-7/2})(1+q^{n-1/2})
-\prod_{n \geq 1}(1-q^{7n-7/2})(1-q^{n-1/2})
\right\} \]
\[ = \prod_{n \geq 1} (1+q^{7n})(1+q^n) \]
\end{pp1}
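As with the order-$3$ relation, this identity can be confirmed by comparing truncated series in $x=q^{1/2}$ (a numerical check we add; it is not a proof):

```python
M = 40  # truncation degree in x, where q = x^2

def pmul(a, b):
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < M:
                    c[i + j] += ai * bj
    return c

def product(exponents, sign):
    # truncated product of (1 + sign * x^e) over the exponents
    r = [0] * M
    r[0] = 1
    for e in exponents:
        f = [0] * M
        f[0], f[e] = 1, sign
        r = pmul(r, f)
    return r

# q^(7n - 7/2) -> x^(14n - 7) and q^(n - 1/2) -> x^(2n - 1)
odd = [e for n in range(1, M) for e in (14 * n - 7, 2 * n - 1) if e < M]
plus, minus = product(odd, 1), product(odd, -1)

# left side: (plus - minus) / (2x); the difference is twice the odd part
lhs = [(p - m) // 2 for p, m in zip(plus[1:], minus[1:])]

# right side: prod (1 + q^(7n)) (1 + q^n) with q = x^2
even = [e for n in range(1, M) for e in (14 * n, 2 * n) if e < M]
rhs = product(even, 1)

assert lhs[:M - 2] == rhs[:M - 2]
```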
Here we define the numbers $c(n)$ by
\[ \sum_{n\geq 0} c(n)q^n
= \prod_{n \geq 1} \frac{(1+q^{7n})(1+q^n)}{(1-q^{7n})(1-q^n)}
= \frac{\eta(q^{14})\eta(q^2)}{\eta(q^7)^2\eta(q)^2} \]
\[ = 1 + 2q + 4q^2 + 8q^3 + 14q^4 + 24q^5 + 40q^6 + 66q^7 + \ldots \]
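Again the printed coefficients can be reproduced by direct expansion (our addition):

```python
M = 8  # coefficients of q^0, ..., q^7

def pmul(a, b):
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < M:
                    c[i + j] += ai * bj
    return c

series = [0] * M
series[0] = 1
for e in [x for n in range(1, M) for x in (n, 7 * n) if x < M]:
    num = [0] * M
    num[0], num[e] = 1, 1        # (1 + q^e)
    den = [0] * M                # 1/(1 - q^e)
    for k in range(0, M, e):
        den[k] = 1
    series = pmul(pmul(series, num), den)

assert series == [1, 2, 4, 8, 14, 24, 40, 66]
```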
Write $g$ for the automorphism induced by $u$. Then
\begin{thp1}
The twisted denominator identity corresponding to $g$ is
\[ \prod_{\alpha\in L^{+}}
\frac{ (1-e^{\alpha})^{ c(-\alpha^2/2) } }
{ (1+e^{\alpha})^{ c(-\alpha^2/2) } }
\prod_{\alpha \in L^{+}\cap 7L^*}
\frac{ (1-e^{\alpha})^{ c(-\alpha^2/14) } }
{ (1+e^{\alpha})^{ c(-\alpha^2/14) } }
= 1 + \sum a(\lambda)e^{\lambda} \]
where $a(\lambda)$ is the coefficient of $q^n$ in
\[ \prod_{n \geq 1}\frac{(1-q^{7n})(1-q^n)}{(1+q^{7n})(1+q^n)}
= 1 - 2q + 2q^4 - 2q^7 + 4q^8 - 2q^9 - 4q^{11} + 6q^{16} - \ldots \]
if $\lambda$ is $n$ times a primitive norm zero vector in $L^+$ and zero otherwise.
\end{thp1}
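Expanding the eta quotient for $a(\lambda)$ up to $q^{16}$ (our addition) reproduces exactly the terms displayed in the theorem, all other coefficients in this range being zero:

```python
M = 17  # coefficients of q^0, ..., q^16

def pmul(a, b):
    c = [0] * M
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < M:
                    c[i + j] += ai * bj
    return c

series = [0] * M
series[0] = 1
for e in [x for n in range(1, M) for x in (n, 7 * n) if x < M]:
    num = [0] * M
    num[0], num[e] = 1, -1       # (1 - q^e)
    den = [0] * M                # 1/(1 + q^e) = 1 - q^e + q^(2e) - ...
    for k, idx in enumerate(range(0, M, e)):
        den[idx] = (-1) ** k
    series = pmul(pmul(series, num), den)

expected = [1, -2, 0, 0, 2, 0, 0, -2, 4, -2, 0, -4, 0, 0, 0, 0, 6]
assert series == expected
```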
Again the multiplicities are all nonnegative integers and we have
\begin{cp1}
There is a generalized Kac-Moody superalgebra with root lattice $L$ and root multiplicities given by
\[ mult_0(\alpha) = mult_1(\alpha) = c(-\alpha^2/2)
\qquad \alpha \in L,\: \alpha \notin 7L^* \]
and
\[ mult_0(\alpha) = mult_1(\alpha) = c(-\alpha^2/2)+ c(-\alpha^2/14)
\qquad \alpha \in 7L^*\,. \]
The simple roots are the norm zero vectors in $L^+$. Let $\lambda$ be a simple root. Then $mult_0(\lambda)=mult_1(\lambda)=2$ if $\lambda$ is $7n$ times a primitive vector in $L^+$ and $mult_0(\lambda)=mult_1(\lambda)=1$ otherwise.
\end{cp1}
The denominator identity of this algebra is given by the identity in the above theorem. It defines an automorphic form of weight $1$ for a subgroup of $O_{4,2}({\mathbb R})$.
\section*{Acknowledgments}
I thank R. E. Borcherds for stimulating discussions.
\begin{thebibliography}{mmmm}
\addcontentsline{toc}{section}{References}
\bibitem[B]{B} R. E. Borcherds, Monstrous moonshine and monstrous Lie
super\-algebras, {\em Invent. math.} {\bf 109} (1992), 405-444
\bibitem[C]{C} J. H. Conway et al., "Atlas of finite groups", Clarendon Press,
1985
\bibitem[GSW]{GSW} M. B. Green, J. H. Schwarz and E. Witten,
"Superstring theory", Vols. 1 \& 2, Cambridge University Press, 1988
\bibitem[J]{J} N. Jacobson, Some groups of transformations defined by Jordan
\mbox{algebras}. II., {\em J. reine angew. Math.} {\bf 204} (1960), 74-98
\bibitem[S]{NRS} N. R. Scheithauer, The Fake Monster Superalgebra, {\em Adv. Math.} {\bf 151} (2000), 226-269
\bibitem[P]{P} J. Polchinski, "String Theory", Vols. 1 \& 2, Cambridge
University Press, 1998
\end{thebibliography}
\end{document}
\begin{document}
\vspace*{0ex}
\begin{center}
{\Large\bf
A Hamiltonian structure of the Isobe--Kakinuma model \\[0.5ex] for water waves}
\end{center}
\begin{center}
{\em Dedicated to the late Professor Walter L. Craig}
\end{center}
\begin{center}
Vincent Duch\^ene and Tatsuo Iguchi
\end{center}
\begin{abstract}
We consider the Isobe--Kakinuma model for water waves, which is obtained as the system
of Euler--Lagrange equations for a Lagrangian approximating Luke's Lagrangian for water waves.
We show that the Isobe--Kakinuma model also enjoys a Hamiltonian structure analogous to
the one exhibited by V. E. Zakharov on the full water wave problem
and, moreover, that the Hamiltonian of the Isobe--Kakinuma model is a higher order
shallow water approximation to the one of the full water wave problem.
\end{abstract}
\section{Introduction}
\label{section:intro}
We consider a model for the motion of water in a moving domain of the $(n+1)$-dimensional
Euclidean space.
The water wave problem is mathematically formulated as a free boundary problem for
an irrotational flow of an inviscid, incompressible, and homogeneous fluid under a vertical gravitational field.
Let $t$ be the time, $\boldsymbol{x}=(x_1,\ldots,x_n)$ the horizontal spatial coordinates,
and $z$ the vertical spatial coordinate.
We assume that the water surface and the bottom are represented as $z=\eta({\boldsymbol x},t)$ and
$z=-h+b({\boldsymbol x})$, respectively, where $\eta({\boldsymbol x},t)$ is the surface elevation,
$h$ is the mean depth, and $b({\boldsymbol x})$ represents the bottom topography.
We denote by $\Omega(t)$, $\Gamma(t)$, and $\Sigma$ the water region, the water surface, and the
bottom of the water at time $t$, respectively.
Then, the motion of the water is described by the velocity potential $\Phi({\boldsymbol x},z,t)$ satisfying
Laplace's equation
\begin{equation}\label{intro:Laplace}
\Delta\Phi + \partial_z^2\Phi = 0 \makebox[3em]{in} \Omega(t), \; t>0,
\end{equation}
where $\Delta=\partial_{x_1}^2+\cdots+\partial_{x_n}^2$.
The boundary conditions on the water surface are given by
\begin{equation}\label{intro:BC1}
\begin{cases}
\partial_t\eta + \nabla\Phi\cdot\nabla\eta - \partial_z\Phi = 0 & \mbox{on}\quad \Gamma(t), \; t>0, \\
\displaystyle
\partial_t\Phi +\frac12 \bigl( |\nabla\Phi|^2 + (\partial_z\Phi)^2 \bigr) + g\eta = 0
& \mbox{on}\quad \Gamma(t), \; t>0,
\end{cases}
\end{equation}
where $\nabla=(\partial_{x_1},\ldots,\partial_{x_n})^{\rm T}$, and $g$ is the gravitational constant.
The first equation is the kinematic condition on the water surface and the second one is Bernoulli's equation.
Finally, the boundary condition on the bottom of the water is given by
\begin{equation}\label{intro:BC2}
\nabla\Phi\cdot\nabla b - \partial_z\Phi = 0 \makebox[3em]{on} \Sigma, \; t>0,
\end{equation}
which is the kinematic condition on the fixed and impermeable bottom.
These are the basic equations for the water wave problem.
We put
\begin{equation}\label{intro:CV}
\phi({\boldsymbol x},t) = \Phi({\boldsymbol x},\eta({\boldsymbol x},t),t),
\end{equation}
which is the trace of the velocity potential on the water surface.
Then, the basic equations for water waves~\eqref{intro:Laplace}--\eqref{intro:BC2} are transformed
equivalently into
\begin{equation}\label{intro:ZCS}
\begin{cases}
\partial_t\eta-\Lambda(\eta,b)\phi = 0 & \mbox{on}\quad \mathbf{R}^n, \; t>0, \\[.5ex]
\partial_t\phi + g\eta + \dfrac12|\nabla\phi|^2
- \dfrac12\dfrac{\bigl(\Lambda(\eta,b)\phi + \nabla\eta\cdot\nabla\phi\bigr)^2}{1+|\nabla\eta|^2} = 0
& \mbox{on}\quad \mathbf{R}^n, \; t>0,
\end{cases}
\end{equation}
where $\Lambda(\eta,b)$ is the Dirichlet-to-Neumann map for Laplace's equation.
Namely, it is defined by
\[
\Lambda(\eta,b)\phi= (\partial_z\Phi)|_{z=\eta} - \nabla\eta\cdot(\nabla\Phi)|_{z=\eta},
\]
where $\Phi$ is the unique solution to the boundary value problem of Laplace's equation
\eqref{intro:Laplace} under the boundary conditions \eqref{intro:BC2}--\eqref{intro:CV}.
As is well-known, the water wave problem has a conserved energy $E=E_{\rm kin} + E_{\rm pot}$,
where $E_{\rm kin}$ is the kinetic energy
\begin{align*}
E_{\rm kin}
&= \frac{1}{2}\rho\iint_{\Omega(t)} \bigl( |\nabla\Phi({\boldsymbol x},z,t)|^2
+ (\partial_z\Phi({\boldsymbol x},z,t))^2 \bigr)\,{\rm d}{\boldsymbol x}{\rm d}z \\
&= \frac{1}{2}\rho\int_{\mathbf{R}^n}\phi({\boldsymbol x},t)(\Lambda(\eta,b)\phi)({\boldsymbol x},t)\,{\rm d}{\boldsymbol x},
\end{align*}
and $E_{\rm pot}$ is the potential energy
\[
E_{\rm pot} = \frac{1}{2}\rho g\int_{\mathbf{R}^n}\eta({\boldsymbol x},t)^2\,{\rm d}{\boldsymbol x}
\]
due to gravity. Here, $\rho$ is the constant density of the water.
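We note that the second expression for $E_{\rm kin}$ follows from the divergence theorem:
since $\Phi$ is harmonic in $\Omega(t)$ by \eqref{intro:Laplace} and satisfies the impermeability
condition \eqref{intro:BC2} on $\Sigma$, only the contribution of the water surface survives, that is,
\[
\iint_{\Omega(t)} \bigl( |\nabla\Phi|^2 + (\partial_z\Phi)^2 \bigr)\,{\rm d}{\boldsymbol x}{\rm d}z
= \int_{\Gamma(t)} \Phi\,\partial_{\boldsymbol n}\Phi\,{\rm d}S
= \int_{\mathbf{R}^n} \phi\,(\Lambda(\eta,b)\phi)\,{\rm d}{\boldsymbol x},
\]
where ${\boldsymbol n}$ is the unit outward normal and we used the fact that
$(\Lambda(\eta,b)\phi)\,{\rm d}{\boldsymbol x} = (\partial_{\boldsymbol n}\Phi)\,{\rm d}S$ on $\Gamma(t)$.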
V. E. Zakharov~\cite{Zakharov1968} found that the water wave system has a Hamiltonian structure and
$\eta$ and $\phi$ are the canonical variables.
The Hamiltonian $\mathscr{H}$ is essentially the total energy, that is, $\mathscr{H} = \frac{1}{\rho}E$.
He showed that the basic equations for water waves \eqref{intro:Laplace}--\eqref{intro:BC2} are transformed
equivalently into Hamilton's canonical equations
\[
\partial_t\eta = \frac{\delta\mathscr{H}}{\delta\phi}, \quad
\partial_t\phi = -\frac{\delta\mathscr{H}}{\delta\eta}.
\]
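In fact, since the Dirichlet-to-Neumann map $\Lambda(\eta,b)$ is self-adjoint in $L^2(\mathbf{R}^n)$,
the Hamiltonian $\mathscr{H}(\eta,\phi)
= \frac12\int_{\mathbf{R}^n}\bigl( \phi\Lambda(\eta,b)\phi + g\eta^2 \bigr)\,{\rm d}{\boldsymbol x}$
satisfies
\[
\frac{\delta\mathscr{H}}{\delta\phi} = \Lambda(\eta,b)\phi,
\]
so that the first canonical equation coincides with the first equation of \eqref{intro:ZCS}.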
Although V. E. Zakharov did not explicitly use the Dirichlet-to-Neumann map $\Lambda(\eta,b)$,
the above canonical equations are exactly the same as \eqref{intro:ZCS}.
W. Craig and C. Sulem~\cite{CraigSulem1993} introduced the Dirichlet-to-Neumann map explicitly
and derived \eqref{intro:ZCS}.
Therefore, nowadays \eqref{intro:ZCS} is often called the Zakharov--Craig--Sulem
formulation of the water wave problem.
Since then, W. Craig and his collaborators \cite{CraigGroves1994,CraigGroves2000, CraigGuyenneKalisch2005,
CraigGuyenneNichollsSulem2005, CraigGuyenneSulem2010, CraigGuyenneSulem2012} have used the Hamiltonian
structure of the water wave problem in order to analyze long-wave and modulation approximations.
Let us also mention the recent work of W. Craig~\cite{Craig2016}, which generalizes the Hamiltonian
formulation of water waves described above to a general coordinatization of the free surface
allowing overturning wave profiles.
On the other hand, as was shown by J. C. Luke~\cite{Luke1967},
the water wave problem also has a variational structure.
His Lagrangian density is of the form
\begin{equation}\label{intro:Luke's Lagrangian}
\mathscr{L}(\Phi,\eta) = \int_{-h+b({\boldsymbol x})}^{\eta({\boldsymbol x},t)}\biggl(
\partial_t\Phi({\boldsymbol x},z,t) + \frac12\bigl( |\nabla\Phi({\boldsymbol x},z,t)|^2
+ (\partial_z\Phi({\boldsymbol x},z,t))^2 \bigr) \biggr)\,{\rm d}z
+ \frac12g\bigl(\eta(\boldsymbol{x},t)\bigr)^2
\end{equation}
and the action function is given by
\[
\mathscr{J}(\Phi,\eta)
= \int_{t_0}^{t_1}\!\!\!\int_{\mathbf{R}^n}\mathscr{L}(\Phi,\eta)\,{\rm d}{\boldsymbol x}\,{\rm d}t.
\]
In fact, the corresponding Euler--Lagrange equations are exactly the basic equations for water waves
\eqref{intro:Laplace}--\eqref{intro:BC2}.
We refer to J. W. Miles~\cite{Miles1977} for the relation between Zakharov's Hamiltonian and Luke's Lagrangian.
M. Isobe~\cite{Isobe1994, Isobe1994-2} and T. Kakinuma~\cite{Kakinuma2000, Kakinuma2001, Kakinuma2003}
obtained a family of systems of equations after replacing the velocity potential $\Phi$ in Luke's Lagrangian by
\[
\Phi^{\rm app}({\boldsymbol x},z,t) = \sum_{i=0}^N\Psi_i(z;b)\phi_i({\boldsymbol x},t),
\]
where $\{\Psi_i\}$ is an appropriately chosen function system in the vertical coordinate $z$, which may depend on
the bottom topography $b$, and $(\phi_0,\phi_1,\ldots,\phi_N)$ are unknown variables.
The Isobe--Kakinuma model is a system of Euler--Lagrange equations corresponding to the action function
\begin{equation}\label{intro:IK-action}
\mathscr{J}^{\rm app}(\phi_0,\phi_1,\ldots,\phi_N,\eta)
= \int_{t_0}^{t_1}\!\!\!\int_{\mathbf{R}^n}\mathscr{L}(\Phi^{\rm app},\eta)
\,{\rm d}{\boldsymbol x}{\rm d}t.
\end{equation}
We have to choose the function system $\{\Psi_i\}$ carefully for the Isobe--Kakinuma model
to produce good approximations to the water wave problem.
One possible choice consists of the basis functions of the Taylor expansion of the velocity potential
$\Phi({\boldsymbol x},z,t)$ with respect to the vertical spatial coordinate $z$ around the bottom.
Such an expansion has already been used by J. Boussinesq~\cite{Boussinesq1872} in the case of the flat bottom
and, for instance, by C. C. Mei and B. Le M\'ehaut\'e~\cite{MeiLeMehaute1966} for general bottom topographies.
The corresponding choice of the function system is given by
\[
\Psi_i(z;b)
=
\begin{cases}
(z + h)^{2i} & \mbox{in the case of the flat bottom}, \\
(z + h - b({\boldsymbol x}))^i & \mbox{in the case of the variable bottom}.
\end{cases}
\]
Here we note that the latter choice is also valid in the case of the flat bottom.
However, it turns out that the terms of odd degree do not play any significant role in that case,
so that the former choice is more adequate.
In order to treat both cases at the same time, we adopt the approximation
\begin{equation}\label{intro:app}
\Phi^{\rm app}({\boldsymbol x},z,t)
= \sum_{i=0}^N(z+h-b({\boldsymbol x}))^{p_i}\phi_i({\boldsymbol x},t),
\end{equation}
where $p_0,p_1,\ldots,p_N$ are nonnegative integers satisfying $0=p_0<p_1<\cdots<p_N$.
Plugging~\eqref{intro:app} into the action function~\eqref{intro:IK-action},
the corresponding Euler--Lagrange equations yield the Isobe--Kakinuma model of the form
\begin{equation}\label{intro:IK model}
\left\{
\begin{array}{l}
\displaystyle
H^{p_i} \partial_t \eta + \sum_{j=0}^N \left\{ \nabla \cdot \left(
\frac{1}{p_i+p_j+1} H^{p_i+p_j+1} \nabla\phi_j
- \frac{p_j}{p_i+p_j} H^{p_i+p_j} \phi_j \nabla b \right) \right.\\
\displaystyle\phantom{ H^{p_i} \partial_t \eta + \sum_{j=0}^N \biggl\{ }
+ \left. \frac{p_i}{p_i+p_j} H^{p_i+p_j} \nabla b \cdot \nabla\phi_j
- \frac{p_ip_j}{p_i+p_j-1} H^{p_i+p_j-1} (1 + |\nabla b|^2) \phi_j \right\} = 0 \\
\makebox[28em]{}\mbox{for}\quad i=0,1,\ldots,N, \\[1ex]
\displaystyle
\sum_{j=0}^N H^{p_j} \partial_t \phi_j + g\eta
+ \frac12 \left\{ \left| \sum_{j=0}^N ( H^{p_j}\nabla\phi_j - p_j H^{p_j-1}\phi_j\nabla b ) \right|^2
+ \left( \sum_{j=0}^N p_j H^{p_j-1} \phi_j \right)^2 \right\} = 0,
\end{array}
\right.
\end{equation}
where $H({\boldsymbol x},t) = h + \eta({\boldsymbol x},t) - b({\boldsymbol x})$ is the depth of the water.
Here and in what follows we use the notational convention $0/0=0$.
This system consists of $(N+1)$ evolution equations for the single unknown $\eta$ and only one evolution
equation for the $(N+1)$ unknowns $(\phi_0,\phi_1,\ldots,\phi_N)$, so that it is a composite system,
at once overdetermined and underdetermined.
Nevertheless, the total number of unknowns is equal to the total number of equations.
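For illustration, in the simplest case $N=0$, so that $p_0=0$ and $\Phi^{\rm app}=\phi_0$ does not depend on $z$,
the model \eqref{intro:IK model} reduces, thanks to the convention $0/0=0$, to
\[
\begin{cases}
\partial_t\eta + \nabla\cdot(H\nabla\phi_0) = 0, \\[0.5ex]
\partial_t\phi_0 + g\eta + \frac12|\nabla\phi_0|^2 = 0,
\end{cases}
\]
that is, the shallow water equations written in terms of the velocity potential.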
The main purpose of this paper is to show that the Isobe--Kakinuma model~\eqref{intro:IK model} also
enjoys a canonical Hamiltonian structure which is analogous to that of the water wave problem.
In particular, the Hamiltonian is a higher order shallow water approximation of the original Hamiltonian
of the water wave problem.
\noindent
{\bf Acknowledgement} \\
T. I. was partially supported by JSPS KAKENHI Grant Numbers JP17K18742 and JP17H02856.
\section{Preliminaries}
\label{section:IK}
Since the hypersurface $t=0$ in the space-time $\mathbf{R}^n\times\mathbf{R}$
is characteristic for the Isobe--Kakinuma model~\eqref{intro:IK model},
the initial value problem to the model is not solvable in general.
In fact, if the problem has a solution $(\eta,\phi_0,\ldots,\phi_N)$, then by eliminating the time derivative
$\partial_t\eta$ from the equations we see that the solution has to satisfy the relations
\begin{align}\label{IK:comp}
& H^{p_i}\sum_{j=0}^N\nabla\cdot\left(
\frac{1}{p_j+1}H^{p_j+1}\nabla\phi_j
-H^{p_j}\phi_j\nabla b\right) \nonumber \\
&= \sum_{j=0}^N\left\{\nabla\cdot\left(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\phi_j
-\frac{p_j}{p_i+p_j}H^{p_i+p_j}\phi_j\nabla b\right)\right. \\
&\phantom{ =\sum_{j=0}^N\biggl\{ }
\displaystyle\left.
+\frac{p_i}{p_i+p_j}H^{p_i+p_j}\nabla b\cdot\nabla\phi_j
-\frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(1+|\nabla b|^2)\phi_j\right\} \nonumber
\end{align}
for $i=1,\ldots,N$.
Therefore, the initial data have to satisfy these relations in order to allow the existence of a solution.
Y. Murakami and T. Iguchi~\cite{MurakamiIguchi2015} and R. Nemoto and T. Iguchi~\cite{NemotoIguchi2018}
showed that the initial value problem to the Isobe--Kakinuma model~\eqref{intro:IK model} is well-posed
locally in time in a class of initial data for which the relations~\eqref{IK:comp} and a generalized
Rayleigh--Taylor sign condition are satisfied.
Moreover, T. Iguchi~\cite{Iguchi2018-1, Iguchi2018-2} showed that the Isobe--Kakinuma model~\eqref{intro:IK model}
is a higher order shallow water approximation for the water wave problem
in the strongly nonlinear regime.
The Isobe--Kakinuma model~\eqref{intro:IK model} has also a conserved energy, which is the total
energy given by
\begin{align}
E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi})
&= \frac12\rho\iint_{\Omega(t)}\bigl( |\nabla\Phi^{\rm app}({\boldsymbol x},z,t)|^2
+ (\partial_z\Phi^{\rm app}({\boldsymbol x},z,t))^2 \bigr)\,
{\rm d}{\boldsymbol x}{\rm d}z
+ \frac12\rho g\int_{\mathbf{R}^n}\eta({\boldsymbol x},t)^2\,{\rm d}{\boldsymbol x} \nonumber \\
&= \frac{\rho}{2}\int_{\mathbf{R}^n}\left\{
\sum_{i,j=0}^N\left(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\phi_i\cdot\nabla\phi_j
-\frac{2p_i}{p_i+p_j}H^{p_i+p_j}\phi_i\nabla b\cdot\nabla\phi_j \right. \right. \nonumber\\
& \left. \phantom{\sum_{i,j=0}^N}\makebox[6em]{}
+ \frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(1+|\nabla b|^2)\phi_i\phi_j\biggr) + g\eta^2 \right\}
{\rm d}{\boldsymbol x},
\label{IK:energy}
\end{align}
where $\boldsymbol{\phi}=(\phi_0,\phi_1,\ldots,\phi_N)^{\rm T}$.
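In the case $N=0$, for instance, the energy reduces to
\[
E^{\mbox{\rm\scriptsize IK}}(\eta,\phi_0)
= \frac{\rho}{2}\int_{\mathbf{R}^n}\bigl( H|\nabla\phi_0|^2 + g\eta^2 \bigr)\,{\rm d}{\boldsymbol x},
\]
which is the classical energy of the shallow water equations.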
We introduce second order differential operators $L_{ij}=L_{ij}(H,b)$ $(i,j=0,1,\ldots,N)$ depending on
the water depth $H$ and the bottom topography $b$ by
\begin{align}
L_{ij}\psi_j
&= -\nabla\cdot\biggl(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\psi_j
-\frac{p_j}{p_i+p_j}H^{p_i+p_j}\psi_j\nabla b\biggr) \nonumber \\[0.5ex]
&\quad\,
-\frac{p_i}{p_i+p_j}H^{p_i+p_j}\nabla b\cdot\nabla\psi_j
+\frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(1+|\nabla b|^2)\psi_j.
\label{IK:L}
\end{align}
Then, we have $L_{ij}^*=L_{ji}$, where $L_{ij}^*$ is the adjoint operator of $L_{ij}$ in $L^2(\mathbf{R}^n)$.
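Indeed, integrating the divergence term in \eqref{IK:L} by parts, we have for sufficiently decaying
$\psi_i$ and $\psi_j$
\begin{align*}
(L_{ij}\psi_j,\psi_i)_{L^2}
&= \int_{\mathbf{R}^n}\biggl(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\psi_i\cdot\nabla\psi_j
-\frac{p_j}{p_i+p_j}H^{p_i+p_j}\psi_j\nabla b\cdot\nabla\psi_i \\
&\phantom{= \int_{\mathbf{R}^n}\biggl(}
-\frac{p_i}{p_i+p_j}H^{p_i+p_j}\psi_i\nabla b\cdot\nabla\psi_j
+\frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(1+|\nabla b|^2)\psi_i\psi_j\biggr)\,{\rm d}{\boldsymbol x},
\end{align*}
and the right-hand side is symmetric under the simultaneous exchange of $(p_i,\psi_i)$ and $(p_j,\psi_j)$.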
Moreover, the Isobe--Kakinuma model~\eqref{intro:IK model} and the relations~\eqref{IK:comp} can be written
simply as
\begin{equation}\label{IK:IK model}
\left\{
\begin{array}{l}
\displaystyle
H^{p_i} \partial_t \eta - \sum_{j=0}^N L_{ij}(H,b) \phi_j = 0 \qquad\mbox{for}\quad i=0,1,\ldots,N, \\
\displaystyle
\sum_{j=0}^N H^{p_j} \partial_t \phi_j + g\eta
+ \frac12\bigl( |(\nabla\Phi^{\rm app})|_{z=\eta}|^2
+ ((\partial_z\Phi^{\rm app})|_{z=\eta})^2 \bigr) = 0
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{IK:comp2}
\sum_{j=0}^N ( L_{ij}(H,b) - H^{p_i} L_{0j}(H,b) ) \phi_j = 0 \qquad\mbox{for}\quad i=1,\ldots,N,
\end{equation}
respectively.
It is easy to calculate the variational derivative of the energy function
$E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi})$ and to obtain
\begin{equation}\label{IK:deltaE}
\begin{cases}
\displaystyle
\frac{1}{\rho} \delta_{\phi_i} E^{\mbox{\rm\scriptsize IK}}
= \sum_{j=0}^N L_{ij}(H,b)\phi_j \quad\mbox{for}\quad i=0,1,\ldots,N, \\[3ex]
\displaystyle
\frac{1}{\rho} \delta_{\eta} E^{\mbox{\rm\scriptsize IK}}
= \frac12\bigl(|(\nabla\Phi^{\rm app})|_{z=\eta}|^2
+ ((\partial_z\Phi^{\rm app})|_{z=\eta})^2 \bigr) + g\eta.
\end{cases}
\end{equation}
Therefore, introducing $\boldsymbol{l}(H) = (H^{p_0},H^{p_1},\ldots,H^{p_N})^{\rm T}$,
the Isobe--Kakinuma model~\eqref{intro:IK model} can also be written simply as
\begin{equation}\label{IK:IK model2}
\begin{pmatrix}
0 & -\boldsymbol{l}(H)^{\rm T} \\
\boldsymbol{l}(H) & O
\end{pmatrix}
\partial_t
\begin{pmatrix}
\eta \\
\boldsymbol{\phi}
\end{pmatrix}
= \frac{1}{\rho}
\begin{pmatrix}
\delta_{\eta} E^{\mbox{\rm\scriptsize IK}} \\
\delta_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}}
\end{pmatrix}.
\end{equation}
In view of~\eqref{IK:comp2} we introduce also linear operators $\mathcal{L}_i = \mathcal{L}_i(H,b)$
$(i = 1,\ldots,N)$ depending on the water depth $H$ and the bottom topography $b$,
and acting on $\mbox{\boldmath$\varphi$} = (\varphi_0,\varphi_1,\ldots,\varphi_N)^{\rm T}$ by
\begin{equation}\label{IK:scrLi}
\mathcal{L}_i \mbox{\boldmath$\varphi$} = \sum_{j=0}^N \bigl( L_{ij}(H,b) - H^{p_i} L_{0j}(H,b) \bigr) \varphi_j
\quad\mbox{for}\quad i=1,\ldots,N,
\end{equation}
and put $\mathcal{L} \mbox{\boldmath$\varphi$}
= (\mathcal{L}_1 \mbox{\boldmath$\varphi$},\ldots,\mathcal{L}_N \mbox{\boldmath$\varphi$})^{\rm T}$.
Then, the conditions~\eqref{IK:comp} can be written simply as
\begin{equation}\label{IK:comp3}
\mathcal{L}(H,b)\boldsymbol{\phi} = {\bf 0} .
\end{equation}
For later use, we also put $L=L(H,b)=(L_{ij}(H,b))_{0\leq i,j\leq N}$ and define $L_0=L_0(H,b)$ by
\begin{equation}\label{IK:L0}
L_0(H,b)\boldsymbol{\varphi} = \sum_{j=0}^NL_{0j}(H,b)\varphi_j.
\end{equation}
Then, the conditions~\eqref{IK:comp} are also equivalent to
\begin{equation}\label{IK:comp4}
L(H,b)\boldsymbol{\phi} = \bigl(L_0(H,b)\boldsymbol{\phi}\bigr)\boldsymbol{l}(H).
\end{equation}
Indeed, the $0$-th component of \eqref{IK:comp4} holds trivially since the $0$-th component of
$\boldsymbol{l}(H)$ is $H^{p_0}=1$, while for $i=1,\ldots,N$ the $i$-th component reads
$\sum_{j=0}^NL_{ij}(H,b)\phi_j = H^{p_i}L_0(H,b)\boldsymbol{\phi}$, that is,
$\mathcal{L}_i(H,b)\boldsymbol{\phi}=0$.
Now, for given functions $F_0$ and $\boldsymbol{F}=(F_1,\ldots,F_N)^{\rm T}$ we consider the equations
\begin{equation}\label{IK:elliptic}
\begin{cases}
\boldsymbol{l}(H)\cdot \boldsymbol{\varphi} =F_0,\\
\mathcal{L}(H,b)\boldsymbol{\varphi} = \boldsymbol{F}.
\end{cases}
\end{equation}
Let $W^{m,p}=W^{m,p}(\mathbf{R}^n)$ be the $L^p$-based Sobolev space of order $m$ on $\mathbf{R}^n$
and $H^m=W^{m,2}$.
The norms of the Sobolev space $H^m$ and of a Banach space $\mathscr{X}$
are denoted by $\|\cdot\|_m$ and by $\|\cdot\|_\mathscr{X}$, respectively.
Set $\mathring{H}^m=\{\phi \,;\, \nabla\phi\in H^{m-1} \}$.
The following lemma was proved in~\cite{NemotoIguchi2018}.
\begin{lemma}\label{IK:lem1}
Let $h,c_0,M$ be positive constants and $m$ an integer such that $m>\frac{n}{2}+1$.
There exists a positive constant $C$ such that if $\eta$ and $b$ satisfy
\begin{equation}\label{IK:cond}
\left\{
\begin{array}{l}
\|\eta\|_m+\|b\|_{W^{m,\infty}} \leq M, \\[0.5ex]
c_0\leq H({\boldsymbol x})=h+\eta({\boldsymbol x})-b({\boldsymbol x}) \quad\mbox{for}\quad {\boldsymbol x}\in\mathbf{R}^n,
\end{array}
\right.
\end{equation}
then for any $F_0\in \mathring{H}^k$ and $\mbox{\boldmath$F$}=(F_1,\ldots,F_N)^{\rm T}\in (H^{k-2})^N$
with $1\leq k\leq m$ there exists a unique solution
$\mbox{\boldmath$\varphi$}=(\varphi_0,\varphi_1,\ldots,\varphi_N)^{\rm T}\in \mathring{H}^k\times (H^{k})^N$
to~\eqref{IK:elliptic}.
Moreover, the solution satisfies
\[
\|\nabla\varphi_0\|_{k-1}+\|(\varphi_1,\ldots,\varphi_N)\|_k
\leq C\;(\|\nabla F_0\|_{k-1}+\|(F_1,\ldots,F_N)\|_{k-2}).
\]
\end{lemma}
\section{Hamiltonian structure}
\label{section:HS}
In the following, we will fix $b\in W^{m,\infty}$ with $m>\frac{n}{2}+1$.
Let $(\eta,\phi_0,\ldots,\phi_N)$ be a solution to the Isobe--Kakinuma model~\eqref{intro:IK model}.
As we will see later, the canonical variables of the Isobe--Kakinuma model are the surface elevation
$\eta$ and the trace of the approximated velocity potential on the water surface
\begin{equation}\label{H:CV}
\phi = \Phi^{\rm app}|_{z=\eta} = \sum_{j=0}^NH^{p_j}\phi_j
= \boldsymbol{l}(H)\cdot\boldsymbol{\phi}.
\end{equation}
Then, the relations~\eqref{IK:comp} and the above equation are written in the simple form
\begin{equation}\label{H:eqCV}
\begin{cases}
\boldsymbol{l}(H)\cdot \boldsymbol{\phi} =\phi,\\
\mathcal{L}(H,b)\boldsymbol{\phi} = {\bf 0}.
\end{cases}
\end{equation}
Therefore, it follows from Lemma~\ref{IK:lem1} that once the canonical variables
$(\eta,\phi)$ are given in an appropriate class of functions,
$\boldsymbol{\phi}=(\phi_0,\phi_1,\ldots,\phi_N)^{\rm T}$ can be determined uniquely.
In other words, these variables $(\phi_0,\phi_1,\ldots,\phi_N)$ depend on the canonical variables
$(\eta,\phi)$ and furthermore they depend on $\phi$ linearly so that we can write
\[
\boldsymbol{\phi} = {\bf S}(\eta,b)\phi
\]
with a linear operator ${\bf S}(\eta,b)$ depending on $\eta$ and $b$.
Since $b$ is fixed, we simply write ${\bf S}(\eta)$ in place of ${\bf S}(\eta,b)$.
We proceed to analyze this operator ${\bf S}(\eta)$ more precisely.
We put
\[
U^m_b = \{ \eta \in H^m \,;\,
\inf_{{\boldsymbol x}\in\mathbf{R}^n}(h+\eta({\boldsymbol x})-b({\boldsymbol x}))>0 \},
\]
which is an open set in $H^m$.
For Banach spaces $\mathscr{X}$ and $\mathscr{Y}$, we denote by $B(\mathscr{X};\mathscr{Y})$
the set of all linear and bounded operators from $\mathscr{X}$ into $\mathscr{Y}$.
In view of \eqref{IK:comp4}, \eqref{H:CV}, and Lemma~\ref{IK:lem1}, we easily obtain the following lemma.
\begin{lemma}\label{H:lem1}
Let $m$ be an integer such that $m>\frac{n}{2}+1$ and $b\in W^{m,\infty}$.
For each $\eta \in U^m_b$ and for $k=1,2,\ldots,m$, the linear operator
\[
{\bf S}(\eta) : \mathring{H}^k \ni\phi \mapsto \boldsymbol{\phi} \in\mathring{H}^k\times(H^k)^N
\]
is well defined,
where $\boldsymbol{\phi}=(\phi_0,\phi_1,\ldots,\phi_N)^{\rm T}$ is the unique solution to \eqref{H:eqCV}.
Moreover, we have ${\bf S}(\eta) \in B(\mathring{H}^k;\mathring{H}^k\times(H^k)^N)$ and
\[
L(H,b)\boldsymbol{\phi} = \bigl(L_0(H,b)\boldsymbol{\phi}\bigr)\boldsymbol{l}(H).
\]
\end{lemma}
Formally, the Fr\'echet derivative $D_{\eta}{\bf S}(\eta)[\dot{\eta}]$ of ${\bf S}(\eta)$ with respect to
$\eta$ is characterized by
\begin{equation}\label{H:Frechet}
\begin{cases}
\boldsymbol{l}(H)\cdot\dot{\boldsymbol{\psi}}
= -\bigl( \boldsymbol{l}'(H)\cdot\boldsymbol{\phi} \bigr)\dot{\eta}, \\
\mathcal{L}(H,b)\dot{\boldsymbol{\psi}} = -D_H\mathcal{L}(H,b)[\dot{\eta}]\boldsymbol{\phi},
\end{cases}
\end{equation}
with $\boldsymbol{\phi}={\bf S}(\eta)\phi$ and $\dot{\boldsymbol{\psi}}=D_{\eta}{\bf S}(\eta)[\dot{\eta}]\phi$,
where $\boldsymbol{l}'(H)\cdot\boldsymbol{\phi} =\sum_{j=1}^Np_jH^{p_j-1}\phi_j$,
\[
D_H\mathcal{L}_i(H,b)[\dot{\eta}]\boldsymbol{\phi}
= \sum_{j=0}^N\bigl( D_HL_{ij}(H,b)[\dot{\eta}] - H^{p_i}D_HL_{0j}(H,b)[\dot{\eta}]
- p_iH^{p_i-1}\dot{\eta}L_{0j}(H,b) \bigr)\phi_j,
\]
and
\begin{align*}
D_H L_{ij}(H,b)[\dot{\eta}]\phi_j
&= -\nabla\cdot\bigl\{ \dot{\eta}(
H^{p_i+p_j}\nabla\phi_j
-p_j H^{p_i+p_j-1}\phi_j\nabla b ) \bigr\} \nonumber \\[0.5ex]
&\quad\,
+ \dot{\eta}\bigl\{ -p_i H^{p_i+p_j-1}\nabla b\cdot\nabla\phi_j
+ p_ip_j H^{p_i+p_j-2}(1+|\nabla b|^2)\phi_j \bigr\}.
\end{align*}
By using these equations together with Lemma~\ref{IK:lem1} and standard arguments,
we can justify the Fr\'echet differentiability of ${\bf S}(\eta)$ with respect to $\eta$.
More precisely, we have the following lemma.
\begin{lemma}\label{H:lem2}
Let $m$ be an integer such that $m>\frac{n}{2}+1$ and $b\in W^{m,\infty}$.
Then, the map $U^m_b \ni \eta \mapsto {\bf S}(\eta) \in B(\mathring{H}^k;\mathring{H}^k\times(H^k)^N)$
is Fr\'echet differentiable for $k=1,2,\ldots,m$, and~\eqref{H:Frechet} holds.
\end{lemma}
As mentioned before, the Isobe--Kakinuma model~\eqref{intro:IK model} has a conserved quantity
$E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi})$ given by~\eqref{IK:energy},
which is the total energy.
Now, we define a Hamiltonian $\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)$ for the
Isobe--Kakinuma model by
\begin{equation}\label{H:H}
\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)
= \frac{1}{\rho}E^{\mbox{\rm\scriptsize IK}}(\eta,{\bf S}(\eta)\phi),
\end{equation}
which is essentially the total energy in terms of the canonical variables $(\eta,\phi)$.
\begin{lemma}\label{H:lem3}
Let $m$ be an integer such that $m>\frac{n}{2}+1$ and $b\in W^{m,\infty}$.
Then, the map
$U^m_b \times\mathring{H}^1 \ni (\eta,\phi) \mapsto \mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)\in\mathbf{R}$
is Fr\'echet differentiable and the variational derivatives of the Hamiltonian are
\[
\begin{cases}
\delta_\phi\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi) = L_0(H,b)\boldsymbol{\phi}, \\
\delta_\eta\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)
= \frac{1}{\rho}(\delta_\eta E^{\mbox{\rm\scriptsize IK}})(\eta,\boldsymbol{\phi})
-(\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})L_0(H,b)\boldsymbol{\phi},
\end{cases}
\]
where $\boldsymbol{\phi}={\bf S}(\eta)\phi$.
\end{lemma}
\begin{proof}[{\bf Proof}.]
Let us calculate Fr\'echet derivatives of the Hamiltonian $\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)$.
Let us consider first
$U^m_b \times H^2 \ni (\eta,\phi) \mapsto \mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi) $.
For any $\dot{\phi}\in {H}^2$, we see that
\begin{align*}
D_\phi\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)[\dot{\phi}]
&= \frac{1}{\rho}(D_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})
(\eta,{\bf S}(\eta)\phi)[{\bf S}(\eta)\dot{\phi}] \\
&= \frac{1}{\rho}( (\delta_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})(\eta,\boldsymbol{\phi}),
{\bf S}(\eta)\dot{\phi})_{L^2} \\
&= (L(H,b)\boldsymbol{\phi},{\bf S}(\eta)\dot{\phi})_{L^2} \\
&= (\bigl(L_0(H,b)\boldsymbol{\phi}\bigr)\boldsymbol{l}(H), {\bf S}(\eta)\dot{\phi})_{L^2} \\
&= (L_0(H,b)\boldsymbol{\phi}, \boldsymbol{l}(H)\cdot {\bf S}(\eta)\dot{\phi})_{L^2} \\
&= ( L_0(H,b)\boldsymbol{\phi}, \dot{\phi})_{L^2},
\end{align*}
where we used \eqref{IK:deltaE} and Lemma~\ref{H:lem1}.
The above calculations are also valid when $(\phi,\dot{\phi})\in \mathring{H}^1\times \mathring{H}^1$,
provided we replace the $L^2$ inner products with the $\mathscr{X}^\prime$--$\mathscr{X}$ duality product
where $\mathscr{X} = \mathring{H}^1\times (H^1)^N$ for the first lines, and $\mathscr{X}= \mathring{H}^1$
for the last line.
This gives the first equation of the lemma.
Similarly, for any $(\eta,\phi)\in U_b^m\times \mathring{H}^2$ and $\dot{\eta}\in H^m$ we see that
\begin{align*}
D_\eta\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)[\dot{\eta}]
= \frac{1}{\rho}(D_\eta E^{\mbox{\rm\scriptsize IK}})(\eta,{\bf S}(\eta)\phi)[\dot{\eta}]
+ \frac{1}{\rho}(D_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})(\eta,{\bf S}(\eta)\phi)
[D_\eta {\bf S}(\eta)[\dot{\eta}]\phi].
\end{align*}
Here, we have
\begin{align*}
\frac{1}{\rho}(D_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})(\eta,{\bf S}(\eta)\phi)
[D_\eta {\bf S}(\eta)[\dot{\eta}]\phi]
&= \frac{1}{\rho}( (\delta_{\boldsymbol{\phi}} E^{\mbox{\rm\scriptsize IK}})(\eta,\boldsymbol{\phi}),
D_\eta {\bf S}(\eta)[\dot{\eta}]\phi)_{L^2} \\
&= ( L(H,b)\boldsymbol{\phi}, D_\eta {\bf S}(\eta)[\dot{\eta}]\phi)_{L^2} \\
&= ( L_0(H,b)\boldsymbol{\phi}, \boldsymbol{l}(H)\cdot D_\eta {\bf S}(\eta)[\dot{\eta}]\phi)_{L^2} \\
&= -( L_0(H,b)\boldsymbol{\phi}, (\boldsymbol{l}'(H)\cdot {\bf S}(\eta)\phi) \dot{\eta})_{L^2} \\
&= - ( (\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})L_0(H,b)\boldsymbol{\phi}, \dot{\eta})_{L^2},
\end{align*}
where we used the identity
\[
\boldsymbol{l}(H)\cdot D_\eta {\bf S}(\eta)[\dot{\eta}]\phi
+ (\boldsymbol{l}'(H)\cdot {\bf S}(\eta)\phi) \dot{\eta} = 0,
\]
stemming from~\eqref{H:Frechet}.
Again, the above identities are still valid for $(\eta,\phi)\in U_b^m\times \mathring{H}^1$
provided we replace the $L^2$ inner products with suitable duality products.
This concludes the proof of the Fr\'echet differentiability, and the second equation of the lemma.
\end{proof}
Now, we are ready to show our main result in this section.
\begin{theorem}
Let $m$ be an integer such that $m>\frac{n}{2}+1$ and $b\in W^{m,\infty}$.
Then, the Isobe--Kakinuma model~\eqref{intro:IK model} is equivalent to Hamilton's canonical equations
\begin{equation}\label{H:CF}
\partial_t\eta = \frac{\delta\mathscr{H}^{\mbox{\rm\scriptsize IK}}}{\delta\phi}, \quad
\partial_t\phi = -\frac{\delta\mathscr{H}^{\mbox{\rm\scriptsize IK}}}{\delta\eta},
\end{equation}
with $\mathscr{H}^{\mbox{\rm\scriptsize IK}}$ defined in~\eqref{H:H} as long as $\eta(\cdot,t) \in U^m_b$
and $\phi(\cdot,t)\in \mathring{H}^1$.
More precisely, for any regular solution $(\eta,\boldsymbol{\phi})$ to the Isobe--Kakinuma model
\eqref{intro:IK model},
if we define $\phi$ by~\eqref{H:CV}, then $(\eta,\phi)$ satisfies Hamilton's canonical
equations~\eqref{H:CF}.
Conversely, for any regular solution $(\eta,\phi)$ to Hamilton's canonical equations~\eqref{H:CF},
if we define $\boldsymbol{\phi}$ by $\boldsymbol{\phi}={\bf S}(\eta)\phi$,
then $(\eta,\boldsymbol{\phi})$ satisfies the Isobe--Kakinuma model~\eqref{intro:IK model}.
\end{theorem}
\begin{proof}[{\bf Proof}.]
Suppose that $(\eta,\boldsymbol{\phi})$ is a solution to the Isobe--Kakinuma model~\eqref{intro:IK model}.
Then, it satisfies~\eqref{IK:IK model2}; in particular, we have
\begin{equation}\label{H:eq1}
\partial_t\eta = L_0(H,b)\boldsymbol{\phi}.
\end{equation}
It follows from~\eqref{H:CV} and~\eqref{IK:IK model2} that
\begin{align*}
\partial_t\phi
&= \boldsymbol{l}(H)\cdot\partial_t\boldsymbol{\phi}
+ (\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})\partial_t\eta \\
&= -\frac{1}{\rho} (\delta_\eta E^{\mbox{\rm\scriptsize IK}})(\eta,\boldsymbol{\phi})
+ (\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})L_0(H,b)\boldsymbol{\phi}.
\end{align*}
These equations together with Lemma~\ref{H:lem3} show that $(\eta,\phi)$ satisfies~\eqref{H:CF}.
Conversely, suppose that $(\eta,\phi)$ satisfies Hamilton's canonical equations~\eqref{H:CF} and
put $\boldsymbol{\phi}={\bf S}(\eta)\phi$.
Then, it follows from~\eqref{H:CF} and Lemma~\ref{H:lem3} that we have~\eqref{H:eq1}.
This fact and Lemma~\ref{H:lem1} imply the equation
\[
\boldsymbol{l}(H)\partial_t\eta = L(H,b)\boldsymbol{\phi}
= \frac{1}{\rho}\delta_{\boldsymbol{\phi}}E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi}).
\]
We see also that
\begin{align*}
\boldsymbol{l}(H)\cdot\partial_t\boldsymbol{\phi}
&= \partial_t\phi - (\boldsymbol{l}'(H)\cdot\boldsymbol{\phi})\partial_t\eta
= - \frac{1}{\rho}\delta_{\eta}E^{\mbox{\rm\scriptsize IK}}(\eta,\boldsymbol{\phi}),
\end{align*}
where we used~\eqref{H:CF} and Lemma~\ref{H:lem3}.
Therefore, $(\eta,\boldsymbol{\phi})$ satisfies~\eqref{IK:IK model2}, that is,
the Isobe--Kakinuma model~\eqref{intro:IK model}.
\end{proof}
\section{Consistency}
\label{section:C}
As mentioned above, it was shown in~\cite{Iguchi2018-1, Iguchi2018-2} that the Isobe--Kakinuma model
\eqref{intro:IK model} is a higher order shallow water approximation for the water wave problem
in the strongly nonlinear regime.
In this section, we will show that the canonical Hamiltonian structure exhibited
in the previous section is consistent with this approximation,
in the sense that the Hamiltonian of the Isobe--Kakinuma model,
$\mathscr{H}^{\mbox{\rm\scriptsize IK}}(\eta,\phi)$,
approximates the Hamiltonian of the water wave problem,
$\mathscr{H}(\eta,\phi)$, in the shallow water regime.
In order to provide quantitative results, we first rewrite the equations in a nondimensional form.
Let $\lambda$ be the typical wavelength of the wave.
Recalling that $h$ is the mean depth, we introduce the nondimensional aspect ratio
\[
\delta = \frac{h}{\lambda} ,
\]
measuring the shallowness of the water.
We then rescale the physical variables by
\[
{\boldsymbol x} = \lambda\tilde {\boldsymbol x}, \quad z = h\tilde z, \quad
t = \frac{\lambda}{\sqrt{gh}}\tilde t, \quad \eta = h\tilde\eta, \quad
b= h\tilde b, \quad \Phi=\lambda\sqrt{gh}\tilde\Phi.
\]
Under this rescaling, after dropping the tildes for the sake of readability,
the basic equations for water waves~\eqref{intro:Laplace}--\eqref{intro:BC2}
are rewritten in the non-dimensional form
\begin{equation}\label{C:Laplace}
\Delta\Phi+\delta^{-2}\partial_z^2\Phi=0 \makebox[3em]{in} \Omega(t), \; t>0,
\end{equation}
\begin{equation}\label{C:BC1}
\begin{cases}
\partial_t\eta + \nabla\Phi\cdot\nabla\eta - \delta^{-2}\partial_z\Phi = 0
& \mbox{on}\quad \Gamma(t), \; t>0, \\
\displaystyle
\partial_t\Phi + \frac12\bigl( |\nabla\Phi|^2 + \delta^{-2}(\partial_z\Phi)^2 \bigr) + \eta = 0
& \mbox{on}\quad \Gamma(t), \; t>0,
\end{cases}
\end{equation}
\begin{equation}\label{C:BC2}
\nabla\Phi\cdot\nabla b - \delta^{-2} \partial_z\Phi = 0 \makebox[3em]{on} \Sigma, \; t>0,
\end{equation}
where $\Omega(t)$, $\Gamma(t)$, and $\Sigma$ now denote the rescaled water region, water surface, and
bottom of the water at time $t$, respectively.
Specifically, the rescaled water surface and bottom of the water are represented as
$z=\eta(\boldsymbol{x},t)$ and $z=-1+b(\boldsymbol{x})$, respectively.
The corresponding dimensionless Zakharov--Craig--Sulem formulation is
\begin{equation}\label{C:ZCS}
\begin{cases}
\partial_t\eta-\Lambda^\delta(\eta,b)\phi = 0 & \mbox{on}\quad \mathbf{R}^n, \; t>0, \\[.5ex]
\partial_t\phi + \eta + \dfrac12|\nabla\phi|^2
- \dfrac{\delta^2}2\dfrac{\bigl(\Lambda^\delta(\eta,b)\phi + \nabla\eta\cdot\nabla\phi\bigr)^2
}{1+\delta^2|\nabla\eta|^2} = 0 & \mbox{on}\quad \mathbf{R}^n, \; t>0,
\end{cases}
\end{equation}
where
\begin{equation}\label{C:CV}
\phi({\boldsymbol x},t) = \Phi({\boldsymbol x},\eta({\boldsymbol x},t),t)
\end{equation}
is the trace of the velocity potential on the water surface, and
$\Lambda^\delta(\eta,b)$ is the dimensionless Dirichlet-to-Neumann map for Laplace's equation,
namely, it is defined by
\[
\Lambda^\delta(\eta,b)\phi = \delta^{-2} (\partial_z\Phi)|_{z=\eta} - \nabla\eta\cdot(\nabla\Phi)|_{z=\eta},
\]
where $\Phi$ is the unique solution to the boundary value problem
of the scaled Laplace's equation~\eqref{C:Laplace} under the boundary conditions
\eqref{C:BC2} and \eqref{C:CV}.
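We note that the dimensional and the dimensionless Dirichlet-to-Neumann maps are then related by
\[
\Lambda(\eta,b)\phi = \delta\sqrt{gh}\,\bigl(\Lambda^\delta(\tilde\eta,\tilde b)\tilde\phi\bigr),
\]
where the left-hand side is evaluated in the dimensional variables
$\eta=h\tilde\eta$, $b=h\tilde b$, and $\phi=\lambda\sqrt{gh}\,\tilde\phi$;
this accounts for the powers of $\delta$ appearing in \eqref{C:ZCS}.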
With this rescaling and these definitions, the Hamiltonian of the water wave system is given by
\[
\mathscr{H}^\delta(\eta,\phi)
= \frac12 \iint_{\Omega(t)} \bigl( |\nabla\Phi|^2 + \delta^{-2}(\partial_z\Phi)^2 \bigr)
\,{\rm d}{\boldsymbol x}{\rm d}z + \frac{1}{2}\int_{\mathbf{R}^n}\eta^2 \,{\rm d}{\boldsymbol x} .
\]
In order to rewrite the Isobe--Kakinuma model~\eqref{intro:IK model} in dimensionless form,
we need to rescale the unknown variables $(\phi_0,\phi_1,\ldots,\phi_N)$,
depending on the choice of function system $\{\Psi_i\}$.
In view of~\eqref{intro:app}, we rescale them by
\[
\phi_i = \frac{\lambda\sqrt{gh}}{\lambda^{p_i}} \tilde \phi_i \qquad\mbox{for}\quad i=0,1,\ldots,N,
\]
so that
\begin{equation}\label{C:app}
\Phi^{\rm app}({\boldsymbol x}, z, t)
= \lambda\sqrt{gh}\;\tilde\Phi^{\rm app}(\tilde{\boldsymbol x},\tilde z,\tilde t)
= \lambda\sqrt{gh}\;\biggl(\sum_{i=0}^N\delta^{p_i}(\tilde z+1-\tilde b(\tilde{\boldsymbol x}))^{p_i}
\tilde\phi_i(\tilde{\boldsymbol x},\tilde t)\biggr).
\end{equation}
As before, we will henceforth drop the tildes for the sake of readability.
It is also convenient to introduce the notation
\[
\phi_i^\delta=\delta^{p_i}\phi_i \qquad\mbox{for}\quad i=0,1,\ldots,N,
\]
so that the Isobe--Kakinuma model~\eqref{intro:IK model} in rescaled variables is
\begin{equation}\label{C:IK model}
\left\{
\begin{array}{l}
\displaystyle
H^{p_i} \partial_t \eta + \sum_{j=0}^N \left\{ \nabla \cdot \left(
\frac{1}{p_i+p_j+1} H^{p_i+p_j+1} \nabla\phi_j^\delta
- \frac{p_j}{p_i+p_j} H^{p_i+p_j} \phi_j^\delta \nabla b \right) \right.\\
\displaystyle\phantom{ H^{p_i} \partial_t \eta + \sum_{j=0}^N \biggl\{ }
+ \left. \frac{p_i}{p_i+p_j} H^{p_i+p_j} \nabla b \cdot \nabla\phi_j^\delta
- \frac{p_ip_j}{p_i+p_j-1} H^{p_i+p_j-1} (\delta^{-2} + |\nabla b|^2) \phi_j^\delta \right\} = 0 \\
\makebox[28em]{}\mbox{for}\quad i=0,1,\ldots,N, \\[1ex]
\displaystyle
\sum_{j=0}^N H^{p_j} \partial_t \phi_j^\delta + \eta
+ \frac12 \left\{ \left| \sum_{j=0}^N
( H^{p_j}\nabla\phi_j^\delta - p_j H^{p_j-1}\phi_j^\delta\nabla b ) \right|^2
+ \delta^{-2}\left( \sum_{j=0}^N p_j H^{p_j-1} \phi_j^\delta \right)^2 \right\} = 0,
\end{array}
\right.
\end{equation}
where $H({\boldsymbol x},t) = 1 + \eta({\boldsymbol x},t) - b({\boldsymbol x})$.
We also use the notations
$\boldsymbol{\phi}^\delta=(\phi_0^\delta,\phi_1^\delta,\dots,\phi_N^\delta)^{\rm T}$
and $L^\delta=L^\delta(H,b)=(L_{ij}^\delta(H,b))_{0\leq i,j\leq N}$, where
\begin{align}
L_{ij}^\delta\psi_j
&= -\nabla\cdot\biggl(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\psi_j
-\frac{p_j}{p_i+p_j}H^{p_i+p_j}\psi_j\nabla b\biggr) \nonumber \\[0.5ex]
&\quad\,
-\frac{p_i}{p_i+p_j}H^{p_i+p_j}\nabla b\cdot\nabla\psi_j
+\frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(\delta^{-2}+|\nabla b|^2)\psi_j.
\label{C:L}
\end{align}
Then, \eqref{C:IK model} can be written in a compact form
\begin{equation}\label{C:IK model2}
\begin{pmatrix}
0 & -\boldsymbol{l}(H)^{\rm T} \\
\boldsymbol{l}(H) & O
\end{pmatrix}
\partial_t
\begin{pmatrix}
\eta \\
\boldsymbol{\phi}^\delta
\end{pmatrix}
=
\begin{pmatrix}
\delta_{\eta} E^{\mbox{\rm\scriptsize IK},\delta} \\
\delta_{\boldsymbol{\phi}^\delta} E^{\mbox{\rm\scriptsize IK},\delta}
\end{pmatrix},
\end{equation}
where
\begin{align}
E^{\mbox{\rm\scriptsize IK},\delta}(\eta,\boldsymbol{\phi}^\delta)
&= \frac{1}{2}\int_{\mathbf{R}^n}\left\{
\sum_{i,j=0}^N\left(
\frac{1}{p_i+p_j+1}H^{p_i+p_j+1}\nabla\phi_i^\delta\cdot\nabla\phi_j^\delta
-\frac{2p_i}{p_i+p_j}H^{p_i+p_j}\phi_i^\delta\nabla b\cdot\nabla\phi_j^\delta \right. \right. \nonumber \\
& \left. \phantom{\sum_{i,j=0}^N}\makebox[6em]{}
+ \frac{p_ip_j}{p_i+p_j-1}H^{p_i+p_j-1}(\delta^{-2}+|\nabla b|^2)\phi_i^\delta\phi_j^\delta\biggr)
+ \eta^2\right\}{\rm d}{\boldsymbol x}.
\end{align}
Then, we define the Hamiltonian
\[
\mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}(\eta,\phi)
= E^{\mbox{\rm\scriptsize IK},\delta}(\eta,\boldsymbol{\phi}^\delta),
\]
where $\boldsymbol{\phi}^\delta$ is the solution to
\begin{equation}\label{C:S}
\begin{cases}
\boldsymbol{l}(H)\cdot\boldsymbol{\phi}^\delta = \phi, \\
L^\delta(H,b)\boldsymbol{\phi}^\delta
= \bigl(L_0^\delta(H,b)\boldsymbol{\phi}^\delta\bigr)\boldsymbol{l}(H).
\end{cases}
\end{equation}
Here, we used the notation $L_0^\delta=(L_{00}^\delta,\dots,L_{0N}^\delta)$.
We recall that $\boldsymbol{\phi}^\delta$ is uniquely determined by~\eqref{C:S} thanks to Lemma~\ref{H:lem1}.
To analyze the consistency of the Hamiltonian in the shallow water regime,
we will further restrict ourselves to the following two cases:
\begin{itemize}
\item[(H1)]
In the case of the flat bottom $b({\boldsymbol x})\equiv0$, $p_i=2i$ for $i=0,1,\ldots,N$.
\item[(H2)]
In the case with general bottom topographies, $p_i=i$ for $i=0,1,\ldots,N$.
\end{itemize}
We are now in a position to state the consistency of the Hamiltonian of the Isobe--Kakinuma model
with respect to Zakharov's Hamiltonian of the water wave problem in the shallow water regime.
\begin{theorem}\label{C:consistency-Hamiltonian}
Let $c_0,M$ be positive constants and $m>\frac{n}{2}+1$ an integer such that $m\geq 4(N+1)$
in the case {\rm (H1)} and $m\geq 4([\frac{N}{2}]+1)$ in the case {\rm (H2)}.
There exists a positive constant $C$ such that if $\eta\in H^m$ and $b\in W^{m+1,\infty}$ satisfy
\[
\begin{cases}
\|\eta\|_m + \|b\|_{W^{m+1,\infty}} \leq M, \\
c_0\leq H({\boldsymbol x}) = 1+\eta({\boldsymbol x})-b({\boldsymbol x})
\quad\mbox{for}\quad {\boldsymbol x}\in\mathbf{R}^n,
\end{cases}
\]
then for any $\delta\in(0,1]$ and any $\phi\in \mathring{H}^{m}$, we have
\[
|\mathscr{H}^\delta(\eta,\phi) -\mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}(\eta,\phi) |
\leq
\begin{cases}
C\|\nabla\phi\|_{4N+3}\|\nabla\phi\|_{0}\, \delta^{4N+2} & \text{ in the case {\rm (H1)}}, \\
C\|\nabla \phi\|_{4[\frac{N}{2}]+3}\|\nabla\phi\|_{0}\, \delta^{4[\frac{N}{2}]+2}
& \text{ in the case {\rm (H2)}}.
\end{cases}
\]
\end{theorem}
\begin{remark}
Theorem 2.4 in~\cite{Iguchi2018-2} in fact states the stronger result that the difference between
exact solutions of the water wave problem obtained in \cite{Iguchi2009,Lannes2013} and the corresponding solutions
of the Isobe--Kakinuma model is bounded with the same order of precision as above on the relevant timescale.
\end{remark}
\begin{remark}
It is important to notice that the order of the approximation given in Theorem~\ref{C:consistency-Hamiltonian}
is greater than what we could expect based on~\eqref{C:app}, and in particular greater than the one obtained
when using the Boussinesq expansion in the flat bottom case (H1):
\[
\phi_{\rm B}(\tilde{\boldsymbol x},\tilde t)
= \tilde\Phi^{\rm app}_{\rm B}(\tilde{\boldsymbol x}, \eta(\tilde{\boldsymbol x}, \tilde t),\tilde t)
\quad \mbox{with} \quad
\tilde\Phi^{\rm app}_{\rm B}(\tilde{\boldsymbol x}, \tilde z, \tilde t)
= \sum_{i=0}^N\delta^{2i}(\tilde z+1)^{2i} \frac{(-\Delta)^i\phi_0(\tilde{\boldsymbol x},\tilde t)}{(2i)!}
\]
where $\phi_0$ is the trace of the velocity potential at the bottom.
Here we can only expect that the approximation is valid up to an error of order $O(\delta^{2N+2})$,
which coincides with the precision of Theorem~\ref{C:consistency-Hamiltonian} only when $N=0$.
When $N=0$, we recover that the Saint-Venant or shallow-water equations provide approximate solutions
with precision $O(\delta^2)$; see~\cite{Iguchi2009, Lannes2013}.
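Concretely, for $N=0$ one has $p_0=0$ and $\phi_0^\delta=\phi_0$, and a direct specialization of \eqref{C:IK model} (every $\nabla b$ term carries a factor $p_i$ or $p_j$ and vanishes) gives the shallow water system
\[
\partial_t\eta + \nabla\cdot\bigl( H\nabla\phi_0 \bigr) = 0, \qquad
\partial_t\phi_0 + \eta + \frac12|\nabla\phi_0|^2 = 0, \qquad H = 1+\eta-b.
\]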
\end{remark}
\begin{proof}[{\bf Proof of Theorem \ref{C:consistency-Hamiltonian}}.]
We will modify slightly the strategy in~\cite{Iguchi2018-2}.
We first notice that
\begin{align*}
&\mathscr{H}^\delta(\eta,\phi)
= \frac12\iint_{\Omega}\bigl( |\nabla\Phi|^2 + \delta^{-2}(\partial_z\Phi)^2 \bigr){\rm d}\boldsymbol{x}{\rm d}z
+ \frac12\int_{\mathbf{R}^n}\eta^2{\rm d}\boldsymbol{x}, \\
&\mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}(\eta,\phi)
= \frac12\iint_{\Omega}\bigl( |\nabla\Phi^{\rm app}|^2
+ \delta^{-2}(\partial_z\Phi^{\rm app})^2 \bigr){\rm d}\boldsymbol{x}{\rm d}z
+ \frac12\int_{\mathbf{R}^n}\eta^2{\rm d}\boldsymbol{x},
\end{align*}
where $\Phi$ is the unique solution to the boundary value problem of the scaled Laplace's
equation~\eqref{C:Laplace} under the boundary conditions \eqref{C:BC2} and \eqref{C:CV},
and the approximate velocity potential $\Phi^{\rm app}$ is defined by
\[
\Phi^{\rm app}({\boldsymbol x},z)
= \sum_{i=0}^{N} ( z+1-b(\boldsymbol{x}))^{p_i}\phi_i^\delta({\boldsymbol x}),
\]
where
${\boldsymbol{\phi}}^\delta=({\phi}_0^\delta,{\phi}_1^\delta,\dots,{\phi}_{N}^\delta)^{\rm T}$
is the solution to
\[
\begin{cases}
\displaystyle
\sum_{i=0}^{N} H^{p_i} {\phi}_{i}^\delta = \phi, \\[3ex]
\displaystyle
\sum_{j=0}^{N} {L}_{ij}^\delta(H,b){{\phi}}_j^\delta
= H^{p_i}\sum_{j=0}^{N} {L}_{0j}^\delta(H,b){{\phi}}_j^\delta
\makebox[4em]{for} i=0,1,\dots,N.
\end{cases}
\]
We will denote with tildes, as in~\cite{Iguchi2018-2}, the functions obtained
when replacing $N$ with $2N+2$.
Hence, $\widetilde{\boldsymbol{\phi}}^\delta
= (\widetilde{\phi}_0^\delta,\widetilde{\phi}_1^\delta,\dots,\widetilde{\phi}_{2N+2}^\delta)^{\rm T}$
is the solution to
\[
\begin{cases}
\displaystyle
\sum_{i=0}^{2N+2} H^{p_i} \widetilde{\phi}_{i}^\delta = \phi, \\[3ex]
\displaystyle
\sum_{j=0}^{2N+2} L_{ij}^\delta(H,b)\widetilde{{\phi}}_j^\delta
= H^{p_i}\sum_{j=0}^{2N+2} L_{0j}^\delta(H,b)\widetilde{{\phi}}_j^\delta
\makebox[4em]{for} i=0,1,\dots,2N+2.
\end{cases}
\]
We also introduce, as in \cite{Iguchi2018-2}, a modified approximate velocity potential
$\widetilde{\Phi}^{\rm app}$ by
\begin{equation}\label{C:def-Phiapp}
\widetilde{\Phi}^{\rm app}({\boldsymbol x},z)
= \sum_{i=0}^{2N+2} ( z+1-b(\boldsymbol{x}))^{p_i}\widetilde\phi_i^\delta({\boldsymbol x}),
\end{equation}
and set $\Phi^{\rm res} = \Phi-\widetilde{\Phi}^{\rm app}$ and
$\boldsymbol{\varphi}^\delta = (\varphi_0^\delta,\varphi_1^\delta,\ldots,\varphi_N^\delta)^{\rm T}$ with
$\varphi_j^\delta=\phi_j^\delta-\widetilde{\phi}_j^\delta$ for $j=0,1,\ldots,N$.
Then, we decompose the difference $\mathscr{H}^\delta - \mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}$ as
\begin{align*}
&\mathscr{H}^\delta(\eta,\phi) - \mathscr{H}^{\mbox{\rm\scriptsize IK},\delta}(\eta,\phi) \\
&= \frac12\iint_{\Omega}\bigl\{ \bigl( |\nabla\Phi|^2 + \delta^{-2}(\partial_z\Phi)^2 \bigr)
- \bigl( |\nabla\widetilde{\Phi}^{\rm app}|^2 + \delta^{-2}(\partial_z\widetilde{\Phi}^{\rm app})^2 \bigr)
\bigr\}{\rm d}\boldsymbol{x}{\rm d}z \\
&\quad\;
+ \frac12\iint_{\Omega}\bigl\{ \bigl( |\nabla\widetilde{\Phi}^{\rm app}|^2
+ \delta^{-2}(\partial_z\widetilde{\Phi}^{\rm app})^2 \bigr)-\bigl( |\nabla{\Phi}^{\rm app}|^2
+ \delta^{-2}(\partial_z{\Phi}^{\rm app})^2 \bigr)\bigr\}{\rm d}\boldsymbol{x}{\rm d}z \\
&= I_1 + I_2.
\end{align*}
We first evaluate $I_1$.
It is easy to see that
\begin{align}
|I_1| &\leq \frac12\bigl\{
\|\nabla\Phi^{\rm res}\|_{L^2(\Omega)}\bigl(
\|\nabla\Phi\|_{L^2(\Omega)} + \|\nabla\widetilde{\Phi}^{\rm app}\|_{L^2(\Omega)} \bigr) \nonumber \\
&\quad\;\label{C:I1}
+ \delta^{-2}\|\partial_z\Phi^{\rm res}\|_{L^2(\Omega)}\bigl(
\|\partial_z\Phi\|_{L^2(\Omega)} + \|\partial_z\widetilde{\Phi}^{\rm app}\|_{L^2(\Omega)} \bigr) \bigr\}.
\end{align}
By using \cite[Lemma~8.1]{Iguchi2018-2} with $k=0$ as well as~\cite[Lemma~6.4]{Iguchi2018-2} with
$(k,j)=(0,2N+1)$ in the case (H1) and~\cite[Lemma~6.9]{Iguchi2018-2} with $(k,j)=(0,2[\frac{N}{2}]+1)$
in the case (H2), we find
\[
\|\nabla\Phi^{\rm res}\|_{L^2(\Omega)} + \delta^{-1}\|\partial_z\Phi^{\rm res}\|_{L^2(\Omega)} \leq
\begin{cases}
C\|\nabla\phi\|_{4N+3}\;\delta^{4N+3} & \text{ in the case (H1)}, \\
C\|\nabla \phi\|_{4[\frac{N}{2}]+3}\;\delta^{4[\frac{N}{2}]+3} & \text{ in the case (H2)},
\end{cases}
\]
provided that $m\geq 4(N+1)$ in the case (H1), and $m\geq 4([\frac{N}{2}]+1)$ in the case (H2).
Here and in what follows, $C$ denotes a positive constant depending on $N$, $m$, $c_0$, and $M$,
which changes from line to line.
On the other hand, it follows from elliptic estimates given in \cite{Iguchi2009, Lannes2013} that
\[
\|\nabla\Phi\|_{L^2(\Omega)} + \delta^{-1}\|\partial_z\Phi\|_{L^2(\Omega)} \leq C\|\nabla\phi\|_0.
\]
Moreover, by the definition \eqref{C:def-Phiapp} and using~\cite[Lemma~3.4]{Iguchi2018-2} with $k=0$,
we see that
\begin{align*}
\|\nabla\widetilde{\Phi}^{\rm app}\|_{L^2(\Omega)}
+ \delta^{-1}\|\partial_z\widetilde{\Phi}^{\rm app}\|_{L^2(\Omega)}
&\leq C\bigl( \|\nabla\widetilde{\phi}_0^\delta\|_0
+ \|(\widetilde{\phi}_1^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta)\|_1
+ \delta^{-1}\|(\widetilde{\phi}_1^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta)\|_0 \bigr) \\
&\leq C\bigl( \|\nabla\widetilde{\phi}_0^\delta\|_0
+ \delta^{-1}\|(1-\delta^2\Delta)^{\frac12}
(\widetilde{\phi}_1^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta)\|_0 \bigr) \\
&\leq C\|\nabla\phi\|_0.
\end{align*}
Plugging the above estimates into~\eqref{C:I1}, we obtain
\begin{equation}\label{C:I1.5}
|I_1| \leq
\begin{cases}
C\|\nabla\phi\|_{4N+3} \|\nabla\phi\|_0 \, \delta^{4N+3} & \text{ in the case (H1)}, \\
C\|\nabla \phi\|_{4[\frac{N}{2}]+3} \|\nabla\phi\|_0 \, \delta^{4[\frac{N}{2}]+3} & \text{ in the case (H2)},
\end{cases}
\end{equation}
provided that $m\geq 4(N+1)$ in the case (H1), and $m\geq 4([\frac{N}{2}]+1)$ in the case (H2).
We proceed to evaluate $I_2$ by noticing that, after the calculations in~\cite[p.~2009]{Iguchi2018-2},
\begin{align*}
I_2 &= \frac12\sum_{i=0}^{2N+2}\sum_{j=0}^{2N+2}
\bigl( L_{ij}^\delta(H,b)\tilde{\phi}_j^\delta,\tilde{\phi}_i^\delta \bigr)_{L^2}
- \frac12\sum_{i=0}^N\sum_{j=0}^N
\bigl( L_{ij}^\delta(H,b)\phi_j^\delta,\phi_i^\delta \bigr)_{L^2} \\
&= \frac12\sum_{j=0}^{N}\sum_{i=N+1}^{2N+2} \big(
(L_{ij}^\delta(H,b) - H^{p_i}L_{0j}^\delta(H,b))\varphi_j^\delta, \widetilde{\phi}_{i}^\delta \big)_{L^2} \\
&\quad\;
- \frac12\sum_{j=N+1}^{2N+2}\sum_{i=N+1}^{2N+2} \big(
(L_{ij}^\delta(H,b) - H^{p_i}L_{0j}^\delta(H,b))\widetilde{{\phi}}_j^\delta,
\widetilde{\phi}_{i}^\delta \big)_{L^2}.
\end{align*}
Therefore,
\begin{align*}
|I_2| &\leq C \bigl\{
\|\boldsymbol{\varphi}^\delta\|_{2N^*+3}
+ \| (\widetilde{\phi}_{N+1}^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta) \|_{2N^*+3} \\
&\quad\;
+ \delta^{-2} \bigl( \|\boldsymbol{\varphi}^\delta\|_{2N^*+1}
+ \| (\widetilde{\phi}_{N+1}^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta) \|_{2N^*+1}\bigr) \bigr\}
\| (\widetilde{\phi}_{N+1}^\delta,\ldots,\widetilde{\phi}_{2N+2}^\delta) \|_{-(2N^*+1)}
\end{align*}
with $N^*=N$ in the case (H1) and $N^*=[\frac{N}{2}]$ in the case (H2).
Using~\cite[Lemma~6.2]{Iguchi2018-2} with $(k,j) = (2N+3,N),(2N+1,N+1),(-2N-1,N+1)$ in the case (H1)
and~\cite[Lemma~6.7]{Iguchi2018-2} with $(k,j) = (2[\frac{N}{2}]+3,[\frac{N}{2}]),
(2[\frac{N}{2}]+1,[\frac{N}{2}]+1),(-2[\frac{N}{2}]-1,[\frac{N}{2}]+1)$ in the case (H2), we obtain
\begin{equation}\label{C:I2}
|I_2| \leq
\begin{cases}
C\|\nabla\phi\|_{4N+2}\|\nabla\phi\|_0\;\delta^{4N+2} & \text{ in the case (H1)}, \\
C\|\nabla\phi\|_{4[\frac{N}{2}]+2}\|\nabla\phi\|_0\;\delta^{4[\frac{N}{2}]+2} & \text{ in the case (H2)},
\end{cases}
\end{equation}
provided that $m\geq 4N+3$ in the case (H1), and $m\geq 4[\frac{N}{2}]+3$ in the case (H2).
Now, \eqref{C:I1.5} and \eqref{C:I2} give the desired estimate.
\end{proof}
Vincent Duch\^ene \par
{\sc Institut de Recherche Math\'ematique de Rennes} \par
{\sc Univ Rennes}, CNRS, IRMAR -- UMR 6625 \par
{\sc F-35000 Rennes, France} \par
E-mail: \texttt{[email protected]}
Tatsuo Iguchi \par
{\sc Department of Mathematics} \par
{\sc Faculty of Science and Technology, Keio University} \par
{\sc 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan} \par
E-mail: \texttt{[email protected]}
\end{document} |
\begin{document}
\baselineskip=1.8pc
\begin{center}
{\bf
Conservative Multi-Dimensional Semi-Lagrangian Finite Difference Scheme: Stability and Applications to the Kinetic and Fluid Simulations
}
\end{center}
\centerline{
Tao Xiong \footnote{School of Mathematical Sciences, Fujian Provincial Key Laboratory of Mathematical Modeling and High-Performance Scientific Computing, Xiamen University, Xiamen, Fujian, P.R. China, 361005. Email: [email protected]}
Giovanni Russo \footnote{Department of Mathematics and Computer Science, University of Catania, Catania, 95125. E-mail: [email protected]}
Jing-Mei Qiu\footnote{Department of Mathematics, University of Houston,
Houston, 77004. E-mail: [email protected]. Research supported by
Air Force Office of Scientific Research grant FA9550-16-1-0179, NSF grant DMS-1522777 and University of Houston.}
}
\centerline{\bf Abstract}
In this paper, we propose a mass conservative semi-Lagrangian finite difference scheme for multi-dimensional problems {\em without dimensional splitting}.
The semi-Lagrangian scheme, based on tracing characteristics backward in time from grid points, does not necessarily conserve the total mass.
To ensure mass conservation, we propose a conservative correction procedure based on a flux difference form. Such a procedure guarantees local mass conservation, while introducing time step constraints for stability. We investigate such stability constraints theoretically, from an ODE point of view by assuming exact evaluation of spatial differential operators, and by Fourier analysis for linear PDEs.
The scheme is tested on classical two-dimensional linear passive-transport problems, such as linear advection, rotation and swirling deformation. The scheme is applied to solve the nonlinear Vlasov-Poisson system using a high order characteristics tracing mechanism proposed in [Qiu and Russo, 2016]. Such a high order characteristics tracing scheme is generalized to the nonlinear guiding center Vlasov model and the incompressible Euler system. The effectiveness of the proposed conservative semi-Lagrangian scheme is demonstrated by extensive numerical tests.
\noindent {\bf Keywords:}
Semi-Lagrangian,
conservative,
high order,
WENO,
Linear stability analysis,
Fourier analysis,
Vlasov-Poisson system.
\section{Introduction}
\label{sec1}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
Semi-Lagrangian (SL) schemes have been used extensively in many areas of science and engineering, including weather forecasting \cite{staniforth1991semi, lin1996multidimensional, guo2014conservative}, kinetic simulations \cite{filbet2001conservative, guo2013hybrid}, fluid simulations \cite{pironneau1982transport, xiu2001semi}, interface tracing \cite{enright2005fast, strain1999semi}, etc. The schemes are designed to combine the advantages of Eulerian and Lagrangian approaches. In particular, the schemes are built upon a fixed computational mesh. Similar to the Eulerian approach, high spatial resolution can be realized by using high order interpolation/reconstruction procedures or by using piecewise polynomial solution spaces. On the other hand, in each time step, the solution is evolved by propagating information along characteristics, relaxing the CFL restriction. Typically, the time step size allowed for an SL scheme is larger than that of an Eulerian approach, leading to gains in computational efficiency.
Among high order SL schemes, different classes of methods can be designed depending on the solution spaces. For example, a finite difference scheme evolves point-wise values and realizes high spatial resolution by high order interpolation procedures \cite{xiu2001semi, qiu2010conservative}, a finite volume scheme evolves cell averages with high order reconstruction procedures \cite{lin1996multidimensional, crouseilles2010conservative}, while a finite element method has piecewise continuous or discontinuous polynomial functions as its solution space \cite{pironneau1982transport, morton1988stability, qiu2011positivity, rossmanith2011positivity, guo2014conservative}. Each class of the above-mentioned SL methods has its own advantages. For example, the finite element method is more flexible with geometry and boundary conditions, while the finite difference and finite volume schemes could perform better in resolving solution structures with sharp gradients, e.g. by using a weighted essentially non-oscillatory (WENO) procedure. Comparing finite difference and finite volume schemes, the finite volume scheme is often considered more physically relevant, with local mass conservation built in naturally, while the finite difference scheme is more flexible and computationally efficient for high-dimensional problems, if one considers schemes of third order or higher.
In this paper, we consider an SL finite difference scheme with the {\em local mass conservation property}. In fact, many existing SL
finite difference schemes are built on tracing characteristics backward in time together with a high order interpolation procedure \cite{carrillo2007nonoscillatory}. Typically such schemes do not have the local mass conservation property, which is acceptable for certain applications. However, for applications in weather forecasting or in kinetic simulations, ignoring local mass conservation could lead to a significant loss of total mass, especially when a solution with sharp gradients becomes under-resolved by the computational mesh \cite{huot2003instability}.
There have been many attempts to preserve mass conservation in an SL finite difference scheme with large time stepping sizes, e.g. \cite{qiu2010conservative, qiu2011conservative_jcp}. However, they are mostly designed for 1D problems by taking advantage of special features of the 1D setting. Their generalization to higher dimensional problems often relies on dimensional splitting, which is subject to splitting errors. In this paper, we propose and investigate a truly multi-dimensional approach without dimensional splitting errors. To build an SL finite difference scheme with local mass conservation, the essential framework that we propose to work with is the flux-difference form. However, by working with the flux difference form, one often observes a time step constraint for numerical stability. Note that unlike the Eulerian approach, such a time step constraint does not come from the CFL condition (i.e. that the numerical domain of dependence should include the physical domain of dependence), but from the numerical stability of the temporal integration employed.
As far as we are aware, there is little work quantifying the stability constraint and optimizing the numerical strategies balancing stability, accuracy and computational efficiency. This paper aims to fill this gap by understanding such time step constraints.
In particular, we investigate the stability of time integration schemes based on a linear stability analysis around the imaginary axis in the complex plane, assuming the spatial differentiation is exact. We optimize quadrature rules for time integration by maximizing the stability interval along the imaginary axis. We further employ Fourier analysis to study the numerical stability of a fully-discretized scheme. The schemes are applied to 2D passive-transport problems, as well as to the nonlinear Vlasov-Poisson (VP) system, using a high order characteristics tracing scheme proposed in \cite{qiu_russo_2016}. Furthermore, we apply the scheme to the nonlinear guiding center Vlasov system and the incompressible Euler system in the vorticity stream function formulation, for which we propose a high order characteristics tracing scheme following the idea in \cite{qiu_russo_2016}.
Finally, we would like to mention some of our previous work related to the stability of SL finite difference schemes in flux-difference form. In \cite{qiu2011conservative_jcp}, a special treatment is introduced to relieve the time step constraint for 1D passive transport problems. However, such a treatment is not possible for general high dimensional problems. In \cite{christlieb2014high}, the time step constraint is studied by Fourier analysis for an SL finite difference scheme coupled with the integral deferred correction framework.
The paper is organized as follows. The SL finite difference scheme in flux-difference form is described in Section \ref{sec: cons}. The stability of time integration with quadrature rule is investigated in Section \ref{ssec: temp}, assuming {\em exact} evaluation of spatial differentiation operators.
We also optimize temporal integration rules. In Section \ref{ssec: spat}, we study the numerical stability of a fully discretized scheme by Fourier analysis. In Section \ref{sec5}, numerical tests are performed for 2D linear passive-transport problems. In Section \ref{sec6}, we apply the scheme to the nonlinear VP system, the guiding center Vlasov equation and incompressible Euler system.
\section{A mass conservative SL finite difference scheme}
\label{sec: cons}
In this section, we describe an SL finite difference scheme based on a flux-difference form to locally preserve mass. The scheme starts from a standard non-conservative procedure with backward characteristics tracing and high order spatial interpolation. Then a conservative correction is performed by a flux-difference formulation. We describe the scheme in a 1D linear setting, noting that its extension to nonlinear and high dimensional problems is straightforward, as long as characteristics can be properly traced backward in time, e.g. see our numerical examples in Section \ref{sec5}.
We consider a 1D linear advection equation,
\begin{equation}
\pad{f}{t} + \pad{f}{x} = 0, \quad f(x,0) = f^0(x), \quad x\in [-\pi, \pi].
\label{eq:scalar2}
\end{equation}
For simplicity, we assume a periodic boundary condition.
We assume a uniform discretization in space with $x_j = j\Delta x$, $j=1,\ldots,n_x$, and let $f^n_j$ be an approximation of the solution at time $t^n$ and position $x_j$. We describe below the conservative SL procedure to update $\{f^{n+1}_j\}_{j=1}^{n_x}$ from $\{f^n_j\}_{j=1}^{n_x}$.
In an Eulerian finite difference method, one typically first approximates the spatial derivative by a flux difference form to ensure mass conservation; the resulting system of ODEs is then evolved in time by a high order numerical integrator, such as a Runge-Kutta (RK) method, via the method of lines. In the SL setting, however, we propose to perform the time integration based on quadrature rules first,
\begin{equation}
\label{eq: conser}
f^{n+1}_j = f^n_j - \frac{\partial}{\partial x}\left(\int_{t^n}^{t^{n+1}} f(x, t)dt\right)|_{x=x_j} \approx
f^n_j - \frac{\partial}{\partial x} (\mathcal{F}(x))|_{x_j},
\end{equation}
where we let
\begin{equation}
\label{eq: quadr}
\mathcal{F}(x) \doteq \sum_{\ell=1}^{s} f(x, t^n+c_\ell \Delta t) b_\ell \Delta t
\end{equation}
as a quadrature approximation of $\int_{t^n}^{t^{n+1}} f(x, t)dt$. Here $(c_\ell,b_\ell)$, $\ell = 1,\ldots,s$, are the nodes and weights of an accurate quadrature formula, and the stage values $f(x_j, t^n+c_\ell \Delta t)$, $\ell=1,\ldots,s$, can be approximated via a non-conservative SL scheme, i.e. backward characteristics tracing and high order spatial interpolation. For the linear equation \eqref{eq:scalar2}, $f(x_j, t^n+c_\ell \Delta t)$ can be traced back along characteristics to time $t^n$ as $f(x_j-c_\ell \Delta t, t^n)$, whose value can be obtained via a WENO interpolation from neighboring grid point values $\{f^n_j\}_{j=1}^{n_x}$; see our description of different interpolation procedures in Section~\ref{ssec: spat}.
Then a conservative scheme, based on a flux-difference form, can be proposed in the spirit of the work by Shu and Osher \cite{shu1989efficient}. In particular, the scheme can be formulated as
\begin{equation}
\label{eq: cons_update}
f^{n+1}_j = f^n_j - \frac{1}{\Delta x} (\hat{F}_{j+\frac12} - \hat{F}_{j-\frac12}),
\end{equation}
where $\hat{F}_{j+\frac12}$ comes from a WENO reconstruction of fluxes from $\{\mathcal{F}_j\}_{j=1}^{n_x}$ with $\mathcal{F}_j \doteq \mathcal{F}(x_j)$.
We refer to \cite{shu2009high} for the basic principle and detailed procedures of WENO reconstruction; Section \ref{ssec: spat} also provides detailed discussions of different reconstruction procedures. Mass is locally conserved due to the flux difference form \eqref{eq: cons_update}.
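To illustrate the procedure \eqref{eq: conser}--\eqref{eq: cons_update}, the following is a minimal Python sketch for the constant-coefficient equation \eqref{eq:scalar2}, in which simple linear interpolation and the first-order upwind flux $\hat F_{j+\frac12}=\mathcal{F}_j$ stand in for the WENO interpolation and reconstruction; these simplifications and all names are ours:

```python
import numpy as np

def sl_step(f, dt, dx, c_nodes, b_weights):
    """One conservative SL step for f_t + f_x = 0 on a periodic grid.

    Stage values f(x_j, t^n + c*dt) are obtained by tracing characteristics
    back to t^n and interpolating (linear interpolation stands in for WENO);
    the time-integrated fluxes F_j are then differenced in conservation form,
    with the first-order upwind flux F_hat_{j+1/2} = F_j standing in for the
    WENO flux reconstruction.
    """
    n = f.size
    x = np.arange(n) * dx
    F = np.zeros(n)
    for c, b in zip(c_nodes, b_weights):
        feet = x - c * dt                     # feet of the characteristics at t^n
        j0 = np.floor(feet / dx).astype(int)  # index of the cell containing each foot
        theta = feet / dx - j0                # fractional position within the cell
        F += b * dt * ((1 - theta) * f[j0 % n] + theta * f[(j0 + 1) % n])
    return f - (F - np.roll(F, 1)) / dx       # flux-difference update

n = 200
dx = 2 * np.pi / n
x = np.arange(n) * dx
f = np.sin(x)
dt = 0.5 * dx                                 # CFL number 0.5
c_nodes, b_weights = (0.0, 1.0), (0.5, 0.5)   # trapezoidal rule
for _ in range(int(round(2 * np.pi / dt))):   # advect for one full period
    f = sl_step(f, dt, dx, c_nodes, b_weights)
```

Because the update is in flux difference form, the discrete mass $\sum_j f^{n+1}_j\Delta x$ telescopes on the periodic grid and is conserved to machine precision, while the low-order interpolation only damps the profile mildly.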
Such a conservative correction procedure can be directly generalized to problems with non-constant velocity fields in a multi-dimensional setting without any difficulty, e.g. rotation and swirling deformation. In addition to the procedures described above, a high order ODE integrator such as a Runge-Kutta method can be employed to locate the foot of a characteristic accurately. For example, we consider a 2D problem with a prescribed velocity field $a(x, y, t)$ and $b(x, y, t)$,
\[
f_t + \left(a(x, y, t) f\right)_x + \left(b(x, y, t) f\right)_y = 0.
\]
Let the set of grid points
\begin{equation}
\label{eq: 2dgrid}
x_1 < \cdots < x_i < \cdots < x_{n_x}, \quad y_1 < \cdots < y_j < \cdots < y_{n_y}
\end{equation}
be a uniform discretization of a 2D rectangular domain with $x_i = i \Delta x$ and $y_j = j \Delta y$.
The foot of the characteristic emanating from a 2D grid point, say $(x_i, y_j)$ at $t^{\ell}\doteq t^n+c_\ell \Delta t$, can be located by accurately solving the following final-value problem with a high order Runge-Kutta method,
\begin{equation}
\label{eq: char1}
\frac{dx}{dt} = a(x, y, t), \quad \frac{dy}{dt} = b(x, y, t), \quad x(t^\ell) = x_i, \quad y(t^\ell) = y_j.
\end{equation}
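As an illustration, a classical fourth-order Runge-Kutta method run backward in time can be used for this final-value problem; the following Python sketch (with an illustrative rigid-rotation velocity field and step sizes of our choosing) traces a grid point back over one time step:

```python
import numpy as np

def trace_back(x, y, t_end, dt, a, b, nsub=4):
    """Locate the foot, at time t_end - dt, of the characteristic of
    (dx/dt, dy/dt) = (a, b) passing through (x, y) at time t_end,
    using nsub classical RK4 steps run backward in time."""
    h = -dt / nsub                 # negative step: integrate backward in time
    t = t_end
    for _ in range(nsub):
        k1x, k1y = a(x, y, t), b(x, y, t)
        k2x, k2y = (a(x + 0.5 * h * k1x, y + 0.5 * h * k1y, t + 0.5 * h),
                    b(x + 0.5 * h * k1x, y + 0.5 * h * k1y, t + 0.5 * h))
        k3x, k3y = (a(x + 0.5 * h * k2x, y + 0.5 * h * k2y, t + 0.5 * h),
                    b(x + 0.5 * h * k2x, y + 0.5 * h * k2y, t + 0.5 * h))
        k4x, k4y = (a(x + h * k3x, y + h * k3y, t + h),
                    b(x + h * k3x, y + h * k3y, t + h))
        x = x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        y = y + h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        t = t + h
    return x, y

# Rigid rotation a = -y, b = x: the exact backward map over dt is a clockwise
# rotation by dt, so the foot of the characteristic through (1, 0) is
# (cos(dt), -sin(dt)).
a = lambda x, y, t: -y
b = lambda x, y, t: x
xs, ys = trace_back(1.0, 0.0, t_end=0.0, dt=0.1, a=a, b=b)
```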
Once the foot of the characteristic is located, say at $(x^\star_i, y^\star_j)$, $f(x_i, y_j, t^{\ell})$ can be evaluated by approximating $f(x^\star_i, y^\star_j, t^n)$ via a high order 2D WENO interpolation procedure \cite{shu2009high}. A 2D conservative scheme based on a flux-difference form can be formulated as
\begin{equation}
\label{eq: cons_update_2d}
f^{n+1}_{ij} = f^n_{ij} - \frac{1}{\Delta x} (\hat{F}_{i+\frac12, j} - \hat{F}_{i-\frac12, j})- \frac{1}{\Delta y} (\hat{G}_{i, j+\frac12} - \hat{G}_{i, j-\frac12}),
\end{equation}
where $\hat{F}_{i\pm\frac12, j}$ comes from WENO reconstruction of fluxes from $\{\mathcal{F}_{ij}\}_{i=1}^{n_x}$ for all $j$ with
\[
\mathcal{F}_{ij} \doteq \mathcal{F}(x_i, y_j) \approx \Delta t \sum_{\ell=1}^{s} f(x_i, y_j, t^n+c_\ell \Delta t) b_\ell.
\]
The procedure for WENO reconstruction is the same as in the 1D case for each fixed $j$, and we again refer to the review paper \cite{shu2009high}. Similarly, $\hat{G}_{i, j\pm\frac12}$ comes from WENO reconstruction of fluxes from $\{\mathcal{F}_{ij}\}_{j=1}^{n_y}$ for all $i$.
To generalize the conservative SL scheme to nonlinear systems, a problem-dependent high order characteristics tracing procedure needs to be designed for solving final-value problems of the form \eqref{eq: char1}, but with the velocity field depending on the unknown function $f$. In many cases, a high order Runge-Kutta method cannot be directly applied. In \cite{qiu_russo_2016}, a high order multi-dimensional characteristics tracing scheme for the VP system is proposed, and it can be applied in the proposed conservative SL framework. In Section~\ref{sec6} we present numerical results and generalize the characteristics tracing procedure for the VP system to a guiding center Vlasov system and the incompressible Euler system in the vorticity stream function formulation.
We close this section with the following remark to motivate our discussions in the next two sections. There are two sources in the scheme formulation that contribute to the stability issue of the proposed SL scheme. One is the discretization by the quadrature rule \eqref{eq: quadr}. This part of the stability is viewed as an ODE stability (assuming exact evaluation of spatial operators) and is investigated carefully in Section \ref{ssec: temp}. The other source can be explained by observing the following situation: if one changes the time stepping size slightly (possibly by an arbitrarily small amount), the foot of the characteristic $x_j-c_\ell \Delta t$ could fall in a different grid cell, leading to a different interpolation stencil in the implementation. This aspect is associated with the spatial discretization and is investigated in Section \ref{ssec: spat}.
\section{Temporal discretization and stability.}
\label{ssec: temp}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\subsection{Linear stability functions and stability regions}
We first investigate the linear stability of quadrature rules for the temporal discretization \eqref{eq: quadr} in an ODE setting, assuming an {\em exact} evaluation of the spatial derivative in eq.~\eqref{eq: conser}. In particular, we track the evolution of a Fourier mode, identified by the Fourier variable $\xi \in [-\pi, \pi]$, under exact evaluation of the spatial interpolation and reconstruction procedures mentioned above.
Such a discrete Fourier mode at time $t^n = n \Delta t$ will be denoted by
\[
f_\xi^n(x) = (Q(\xi))^n e^{\mathbf{i} x \xi/\Delta x}, \quad \mathbf{i} = \sqrt{-1},
\]
where $Q(\xi)$ is the amplification factor associated with $\xi$. Plugging this ansatz into the scheme with nodes $c_\ell$ and weights $b_\ell$, $\ell=1, \cdots, s$, for the temporal discretization, we obtain
\begin{equation}
\label{eq: Q}
Q(\xi) = 1- \mathbf{i} \xi \sum_{\ell=1}^s b_\ell e^{-\mathbf{i}c_\ell\xi}.
\end{equation}
The scheme is stable if
\[
|Q(\xi)|\le 1, \quad \forall \xi\in[-\pi, \pi].
\]
Such a stability property is closely related to the linear stability of the quadrature rule, which can be studied through the stability region for the scalar linear ODE
\[
z' = y z, \quad z(0)=1, \quad y \in \mathbb{C}.
\]
For the quadrature rule with nodes $c_\ell$ and weights $b_\ell$, $\ell=1, \cdots, s$, the associated stability function is
\begin{equation}
\label{eq: R}
{R}(y) = 1 + y\sum_{\ell=1}^s b_\ell e^{c_\ell y},
\end{equation}
with which the stability region can be drawn as the set $\{y\in\mathbb{C} : |R(y)|\le1\}$. Comparing equations~\eqref{eq: Q} and ~\eqref{eq: R}, one has $Q(\xi)=R(-\mathbf{i}\xi)$, with $\xi\in[-\pi,\pi]$. Thus the stability of a quadrature rule in the conservative SL scheme for the linear advection equation is governed by the behavior of $R$ on the imaginary axis. To guarantee stability, we look for the largest interval $I^*\doteq[-y^*,y^*]$ of the
imaginary axis such that $|R(\mathbf{i}y)| \leq 1$ for all $y\in I^*$. The bound
\begin{equation}
y^*/\pi
\label{eq: cfl_ode}
\end{equation} quantifies the maximum CFL number for the SL scheme that guarantees
stability.
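This interval can be estimated numerically by sampling $|R(\mathbf{i}y)|$ along the imaginary axis. A minimal Python sketch (the function name and scan parameters are our own illustrative choices, not part of the scheme):

```python
import numpy as np

def max_cfl(b, c, y_max=20.0, dy=1e-3, tol=1e-9):
    """Return y*/pi for the quadrature rule with weights b and nodes c,
    where y* is the first point at which |R(iy)| exceeds 1."""
    b, c = np.asarray(b), np.asarray(c)
    y = np.arange(dy, y_max, dy)
    # R(iy) = 1 + iy * sum_l b_l * exp(i*c_l*y)
    R = 1.0 + 1j * y * (np.exp(1j * np.outer(y, c)) @ b)
    unstable = np.abs(R) ** 2 > 1.0 + tol
    y_star = y[unstable][0] if unstable.any() else y_max
    return y_star / np.pi

g = 1.0 / (2.0 * np.sqrt(3.0))
print(max_cfl([0.5, 0.5], [0.0, 1.0]))          # trapezoidal: ~1.0
print(max_cfl([0.5, 0.5], [0.5 - g, 0.5 + g]))  # GL2: ~1.73 (cf. 1.72 below)
```

For the trapezoidal rule the scan recovers $y^*=\pi$ (maximum CFL $1$), and for the two-point Gauss-Legendre rule it recovers $y^*\approx 5.43$, consistent with the values discussed below.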
Below, we report the stability regions for the following commonly used quadrature rules in the left panel of Fig.~\ref{fig:stab_reg}.
\begin{enumerate}
\item midpoint: $c_1 = 1/2$, $b_1=1$.
\item trapezoidal: $c_1 = 0$, $c_2 = 1$, $b_1 = b_2 = 1/2$.
\item Simpson: $c_1=0$, $c_2=1/2$, $c_3=1$, $b_1 = b_3=1/6$, $b_2=2/3$.
\item two-point Gauss-Legendre formula (GL2): $c_1=\frac12-\frac{1}{2\sqrt{3}}$, $c_2=\frac12 +\frac1{2\sqrt{3}}$, $b_1=b_2=1/2$.
\end{enumerate}
As is apparent from the plot, the midpoint and Simpson rules do not include a portion of the imaginary axis, while the trapezoidal rule and the two-point Gauss-Legendre rule do. The boundary of the stability region of the trapezoidal rule intersects the imaginary axis at $\pi$, and therefore the maximum
CFL number that guarantees linear stability for the conservative scheme is $1$. The two-point Gauss-Legendre quadrature formula provides a wider
stability interval, since in this case $y^*\approx 5.43$, giving a maximum CFL number of approximately $5.43/\pi=1.72$.
Higher order Gauss-Legendre quadrature formulas, hereafter denoted by GL$s$, where $s$ indicates the number of nodes, may provide wider stability intervals, as illustrated in the right panel of Fig. \ref{fig:stab_reg}. To better appreciate the stability region, we plot in Fig. \ref{fig:glstab} the quantity $\rho^2-1$, where $\rho=|R(\mathbf{i}y)|$, as a function of $y$. GL4 is observed to have a better stability property, as $\rho^2-1 \le0$ on an interval with right endpoint $y^*\approx 6.2765$, leading to a maximum CFL number of approximately $6.2765/\pi=1.99$. Gauss-Legendre formulas with an odd number of points, such as GL3 and GL5, are unstable near the origin; see the right panel of Fig. \ref{fig:stab_reg} as well as Fig. \ref{fig:glstab}.
\begin{figure}
\caption{Stability region for midpoint, trapezoidal, Simpson's
and two-point Gauss-Legendre rules (left) and GLs with $s=2, 3, 4, 5$ (right).}
\label{fig:stab_reg}
\end{figure}
\begin{figure}
\caption{$|R(\mathbf{i}y)|^2-1$ as a function of $y$ for GL$s$ with $s=2, 3, 4, 5$.}
\label{fig:glstab}
\end{figure}
\subsection{Maximizing the stability interval on the imaginary axis}
In order to analyze the stability of quadrature formulas,
let us consider the expression $R(\mathbf{i}y)$ from eq.~\rf{eq: R}, and write it in the form
\begin{equation}
R(\mathbf{i}y) = 1+\mathbf{i}y(C_s(y) + \mathbf{i} S_s(y)) = 1-yS_s(y) + \mathbf{i}y C_s(y),
\end{equation}
where
\begin{equation}
C_s(y)\equiv \sum_{\ell=1}^s b_\ell \cos(c_\ell y), \quad
S_s(y)\equiv \sum_{\ell=1}^s b_\ell \sin(c_\ell y).
\end{equation}
The stability condition therefore becomes
\[
|R(\mathbf{i}y)|^2 = 1-2yS_s(y) + y^2 \left( C_s^2(y)+S_s^2(y) \right) \leq 1.
\]
This condition can be written in the form
\begin{equation}
y F_s(y) \geq 0, \quad {\rm where} \quad F_s(y)\equiv S_s(y) - \frac{1}{2}y\left(C_s^2(y)+S_s^2(y)\right).
\label{eq:stab_inequal}
\end{equation}
The problem of finding quadrature formulas with the widest stability region
can be stated as: determine the coefficients
$\mathbf{b}=(b_1, \ldots, b_s)$ and $\mathbf{c}=(c_1, \ldots, c_s)$ so that the interval in which
\rf{eq:stab_inequal} is satisfied is the widest.
Rather than directly solving this optimization problem, we consider a particular class of
quadrature formulas, namely those whose nodes are symmetrically
located with respect to the point $1/2$ in the interval $[0,1]$.
Among such formulas we restrict ourselves to even $s$, as the schemes are observed to be unstable for odd $s$; see Fig.~\ref{fig:glstab}.
Let $\tilde{c}_\ell = 1-2 c_\ell$, $\ell = 1, \ldots, s$, so that $c_\ell = (1-\tilde{c}_\ell)/2$.
Since the nodes are symmetric and the quadrature formula is
interpolatory, we have
\begin{equation}
\tilde{c}_\ell = -\tilde{c}_{s-\ell+1}, \quad b_\ell = b_{s-\ell+1}.
\label{eq:symmetry}
\end{equation}
The absolute stability function $R(\mathbf{i}y)$ can then be written,
after simple manipulations, as
\[
R(\mathbf{i}y) = 1-2y \sin(y/2) \tilde{C}_s(y) + 2 \mathbf{i}y \cos(y/2)\tilde{C}_s(y),
\]
where
\[
\tilde{C}_s(y)\equiv \suml{s/2}b_\ell \cos(\tilde{c}_\ell y/2),
\]
leading to
\begin{equation}
|R(\mathbf{i}y)|^2 = 1-4y \sin(y/2)\tilde{C}_s(y) + 4 y^2 \tilde{C}_s^2(y).
\label{eq:R2}
\end{equation}
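For completeness, the manipulations referred to above follow from $c_\ell = (1-\tilde{c}_\ell)/2$ and the symmetry \rf{eq:symmetry}:
\[
\sum_{\ell=1}^s b_\ell e^{\mathbf{i}c_\ell y} = e^{\mathbf{i}y/2}\sum_{\ell=1}^s b_\ell e^{-\mathbf{i}\tilde{c}_\ell y/2} = 2 e^{\mathbf{i}y/2}\,\tilde{C}_s(y),
\]
so that $R(\mathbf{i}y) = 1 + 2\mathbf{i}y\, e^{\mathbf{i}y/2}\,\tilde{C}_s(y)$; taking real and imaginary parts recovers the expression for $R(\mathbf{i}y)$ above, and the squared modulus gives \rf{eq:R2}.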
The function $F_s(y)$ can then be written,
after simple manipulations, as
\begin{equation}
F_s(y) = 2 \tilde{C}_s(y) \left(\sin(y/2) - y \tilde{C}_s(y)\right).
\label{eq:F_s}
\end{equation}
Then the stability condition \rf{eq:stab_inequal} becomes
\[
\tilde{C}_s(y) \left(\sin(y/2) - y \tilde{C}_s(y)\right) \geq 0.
\]
Since $F_s$ is the product of two factors, $yF_s(y)$ will not change sign at a root provided the two factors vanish simultaneously at simple roots; therefore $\tilde{C}_s(y)$ must also vanish at the
points $y_k > 0$ at which $\sin(y/2) - y \tilde{C}_s(y)=0$. There is no need to impose that
$\tilde{C}_s$ vanishes at the origin since, because of symmetry, $yF_s(y)$ does not change sign there.
To determine the coefficients of the quadrature formula that maximize the stability interval on the imaginary axis, we proceed as follows.
Because of the symmetry constraints \rf{eq:symmetry}, we have to find
$s$ coefficients, namely $b_1,\ldots,b_{s/2}$ and $\tilde{c}_1,\ldots,\tilde{c}_{s/2}$, by imposing a total of $s$
conditions. These conditions strike a balance between accuracy and
stability.
If we want the quadrature formulas to have degree of precision
$s-1$,
i.e. to be exact on polynomials of degree at most $s-1$, we have to impose
\begin{equation}
\frac{1}{2} \int_{-1}^1 \zeta^{2k} \, d\zeta = \sum_{\ell=1}^s b_\ell (\tilde{c}_\ell)^{2k},
\quad k=0,\ldots,s/2-1.
\label{eq:order1}
\end{equation}
We only impose the conditions for even powers, since those for odd powers are automatically satisfied by
symmetry.
The condition that $\tilde{C}_s(y)$ vanishes when $\sin(y/2)-y\tilde{C}_s(y)$ vanishes becomes
\begin{equation}
\tilde{C}_s(2\pi k) = 0, \quad k=1,\ldots,s/2.
\label{eq:C=0}
\end{equation}
For $k=0$ the stability condition is only marginally satisfied, since
$\tilde{C}_s(0) = \suml{s/2}b_\ell = 1/2$.
Eqs.~\rf{eq:order1} and \rf{eq:C=0} constitute a nonlinear set of
equations for the $s$ coefficients $b_1,\ldots,b_{s/2}$ and
$\tilde{c}_1,\ldots,\tilde{c}_{s/2}$. Because the equations are nonlinear, we
resort to Newton's method for their solution. In practice, for
large values of $s$, it is hard to find an initial guess which lies in
the convergence basin of Newton's method, and we had to resort to
a relaxed version of Newton's method, coupled with continuation
techniques, in order to solve the system.
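As an illustration, for moderate $s$ the system \rf{eq:order1}--\rf{eq:C=0} can also be handed to an off-the-shelf nonlinear solver. The following Python sketch (using SciPy's \texttt{fsolve} in place of the relaxed Newton iteration described above, with a hand-picked initial guess) reproduces the $s=4$ coefficients of Table \ref{tab:coeff}:

```python
import numpy as np
from scipy.optimize import fsolve

def equations(u, s=4):
    h = s // 2
    b, ct = u[:h], u[h:]            # b_l and tilde{c}_l, l = 1, ..., s/2
    res = []
    # accuracy (eq:order1): 2*sum_l b_l*ct_l^(2k) = 1/(2k+1), k = 0, ..., s/2-1
    for k in range(h):
        res.append(2.0 * np.sum(b * ct ** (2 * k)) - 1.0 / (2 * k + 1))
    # stability (eq:C=0): sum_l b_l*cos(ct_l*pi*k) = 0, k = 1, ..., s/2
    for k in range(1, h + 1):
        res.append(np.sum(b * np.cos(ct * np.pi * k)))
    return res

u0 = np.array([0.2, 0.3, 0.8, 0.3])   # initial guess chosen by hand
sol = fsolve(equations, u0)
b, ct = sol[:2], sol[2:]
c = (1.0 - ct) / 2.0                  # back to nodes on [0, 1]
print(b, c)   # close to the s=4 row of the table
```

We stress that \texttt{fsolve} from this simple guess happens to converge for $s=4$; for larger $s$ the continuation strategy described above is needed.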
We numerically compute nodes and weights for $s=2, 4, 6, 8, 10, 12$ and check {\em a posteriori} whether the stability condition is actually
satisfied. The following phenomena are observed:
\begin{itemize}
\item $s=2$: the quadrature nodes and weights coincide with those of the two-point Gauss-Legendre formula.
\item $s = 4, 8, 12$: In Fig. \ref{fig:stab:check1} we plot the functions $|R_s(\mathbf{i}y)|^2-1$ (left panel) and the corresponding stability regions in the complex plane (right panel) for $s = 4, 8, 12$. A wide stability interval on the imaginary axis is observed. We report the coefficients in Table \ref{tab:coeff}. Only $s/2$ coefficients are reported, since the others satisfy the symmetry relation \rf{eq:symmetry}. We also report the maximum CFL number $a^* =y^*/\pi$ (see eq. \eqref{eq: cfl_ode}).
\item $s=6$ and $s=10$: In Fig. \ref{fig:stab:check2}, we plot the functions $|R_s(\mathbf{i}y)|^2-1$ for $s = 6$ and $s = 10$. It is observed that $|R_s(\mathbf{i}y)|^2\ge1$ in any interval containing the origin, i.e. these two quadrature formulas are not stable.
\end{itemize}
\begin{figure}
\caption{Left: plot of $|R_s(\mathbf{i}y)|^2-1$ for $s = 4, 8, 12$. Right: the corresponding stability regions in the complex plane.}
\label{fig:stab:check1}
\end{figure}
\begin{figure}
\caption{Plot of $|R_s(\mathbf{i}y)|^2-1$ for $s=6$ and $s=10$.}
\label{fig:stab:check2}
\end{figure}
\begin{table}
\begin{center}
\caption{Weights and nodes of accurate and stable quadrature
formulas; the columns list $\ell$, $b_\ell$ and $c_\ell$. Each formula is exact for polynomials of degree not
greater than $s-1$. The maximum CFL number $a^*$ that guarantees
stability in the theoretical Fourier analysis is also reported.}
\begin{tabular}{|c|c|c|} \hline
\multicolumn{3}{|c|}{$s=4, a^*= 4.8125674352016$} \\ \hline
1 & 0.199889211759008 & 0.083205952308564 \\
2 & 0.300110788240992 & 0.347904700949451 \\ \hline
\multicolumn{3}{|c|}{$s=8, a^*= 9.4130380474585$} \\ \hline
1 & 0.058702317190867 & 0.023248965963790 \\
2 & 0.119923212650690 & 0.114686793929813 \\
3 & 0.154113350301760 & 0.253867587586135 \\
4 & 0.167261119856682 & 0.415892817555109 \\ \hline
\multicolumn{3}{|c|}{$s=12, a^*=13.7671988660496$} \\ \hline
1 & 0.027182888487959 & 0.010668025829619 \\
2 & 0.059633412276882 & 0.054560771376909 \\
3 & 0.084799522112170 & 0.127471263371368 \\
4 & 0.101625491473440 & 0.221353922812027 \\
5 & 0.111259037829236 & 0.328318059665840 \\
6 & 0.115499647820313 & 0.442082833046309 \\ \hline
\end{tabular}
\label{tab:coeff}
\end{center}
\end{table}
The stability regions for the quadrature formulas obtained for
$s=4,8,12$ and reported in Table \ref{tab:coeff} are computed under the
assumption that the spatial dependence of the Fourier mode is exact,
so that the only error is in the time integration. In
practice there are several other sources of error that may affect the
stability region of the quadrature. In the next section,
we take the spatial discretization into account and quantify the corresponding stability interval.
\section{Spatial discretization.}
\label{ssec: spat}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
There are two spatial discretization processes in the scheme. One is the WENO interpolation used to approximate $f(x_j, t^n+c_\ell \Delta t) = f(x_j-c_\ell \Delta t/\Delta x, t^n)$ from neighboring grid point values $\{f^n_j\}_{j=1}^{n_x}$. The other is the WENO reconstruction used to obtain the numerical fluxes $\hat{F}_{j+\frac12}$ in \eqref{eq: cons_update} from $\{\mathcal{F}_j\}_{j=1}^{n_x}$. In this paper, we consider the following two classes of spatial discretizations.
\begin{itemize}
\item Odd order approximations. For the linear equation \eqref{eq:scalar2}, we use a right-biased stencil to approximate $f(x_j-c_\ell \Delta t/\Delta x, t^n)$ and a left-biased stencil to reconstruct the flux $\hat{F}_{j+\frac12}$. For example, for a first order scheme with $\Delta t/\Delta x<1$, $f(x_j-c_\ell \Delta t/\Delta x, t^n)$ is approximated from the {\em interpolation} stencil $\{f_j\}$
and the numerical flux $\hat{F}_{j+\frac12}$ is approximated from the {\em reconstruction} stencil $\{\mathcal{F}_j\}$. With such a stencil arrangement, the SL scheme reduces to the first order upwind scheme when $\Delta t/\Delta x<1$,
\[
f^{n+1}_j = f^n_j - \Delta t/\Delta x (f^n_j - f^n_{j-1}).
\]
Third, fifth, seventh and ninth order schemes can be constructed by including one, two, three and four more points, respectively, symmetrically from the left and from the right in the interpolation and reconstruction stencils. We list them as follows.
\begin{equation}
\begin{array}{lll}
&\mbox{Third order}: & \{f_{j-1}, f_j, f_{j+1}\}, \quad \{\mathcal{F}_{j-1}, \mathcal{F}_j, \mathcal{F}_{j+1}\}.\\[2mm]
&\mbox{Fifth order}: & \{f_{j-2}, f_{j-1}, f_j, f_{j+1}, f_{j+2}\}, \quad \{\mathcal{F}_{j-2}, \mathcal{F}_{j-1}, \mathcal{F}_j, \mathcal{F}_{j+1}, \mathcal{F}_{j+2}\}.\\[2mm]
&\mbox{Seventh order}: & \{f_{j-3}, f_{j-2}, f_{j-1}, f_j, f_{j+1}, f_{j+2}, f_{j+3}\}, \\
& & \{\mathcal{F}_{j-3}, \mathcal{F}_{j-2}, \mathcal{F}_{j-1}, \mathcal{F}_j, \mathcal{F}_{j+1}, \mathcal{F}_{j+2}, \mathcal{F}_{j+3}\}.\\[2mm]
&\mbox{Ninth order}: & \{f_{j-4}, f_{j-3}, f_{j-2}, f_{j-1}, f_j, f_{j+1}, f_{j+2}, f_{j+3}, f_{j+4}\}, \\
& & \{\mathcal{F}_{j-4}, \mathcal{F}_{j-3}, \mathcal{F}_{j-2}, \mathcal{F}_{j-1}, \mathcal{F}_j, \mathcal{F}_{j+1}, \mathcal{F}_{j+2}, \mathcal{F}_{j+3}, \mathcal{F}_{j+4}\}.
\end{array}
\notag
\end{equation}
\item Even order approximations. For the linear equation \eqref{eq:scalar2}, we use symmetric stencils to approximate $f(x_j-c_\ell \Delta t/\Delta x, t^n)$ by interpolation and to approximate $\hat{F}_{j+\frac12}$ by reconstruction. For example, for a second order scheme with $\Delta t/\Delta x<1$, $f(x_j-c_\ell \Delta t/\Delta x, t^n)$ is approximated from the {\em interpolation} stencil $\{f_{j-1}, f_j\}$
and the numerical flux $\hat{F}_{j+\frac12}$ is approximated from the {\em reconstruction} stencil $\{\mathcal{F}_j, \mathcal{F}_{j+1}\}$. Fourth, sixth and eighth order schemes can be constructed by including one, two and three more points, respectively, symmetrically from the left and from the right in the interpolation and reconstruction stencils. We list them as follows.
\begin{equation}
\begin{array}{lll}
&\mbox{Fourth order}: & \{f_{j-2}, f_{j-1}, f_j, f_{j+1}\}, \quad \{\mathcal{F}_{j-1}, \mathcal{F}_j, \mathcal{F}_{j+1}, \mathcal{F}_{j+2}\}.\\[2mm]
&\mbox{Sixth order}: & \{f_{j-3}, f_{j-2}, f_{j-1}, f_j, f_{j+1}, f_{j+2}\}, \\
& & \{\mathcal{F}_{j-2}, \mathcal{F}_{j-1}, \mathcal{F}_j, \mathcal{F}_{j+1}, \mathcal{F}_{j+2}, \mathcal{F}_{j+3}\}.\\[2mm]
&\mbox{Eighth order}: & \{f_{j-4}, f_{j-3}, f_{j-2}, f_{j-1}, f_j, f_{j+1}, f_{j+2}, f_{j+3}\}, \\
& & \{\mathcal{F}_{j-3}, \mathcal{F}_{j-2}, \mathcal{F}_{j-1}, \mathcal{F}_j, \mathcal{F}_{j+1}, \mathcal{F}_{j+2}, \mathcal{F}_{j+3}, \mathcal{F}_{j+4}\}.
\end{array}
\notag
\end{equation}
\end{itemize}
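As a quick sanity check of the conservative form, the first order upwind scheme listed above conserves the total mass $\sum_j f_j \Delta x$ exactly on a periodic grid. A small Python illustration (grid size, $\lambda=\Delta t/\Delta x$ and step count are arbitrary choices):

```python
import numpy as np

N, lam = 200, 0.8                                 # grid points, lam = dt/dx < 1
x = np.linspace(0.0, 1.0, N, endpoint=False)
f = np.where((x > 0.25) & (x < 0.75), 1.0, 0.0)   # step initial data
mass0 = f.sum()
for _ in range(100):
    # f_j^{n+1} = f_j^n - lam*(f_j^n - f_{j-1}^n), periodic in j
    f = f - lam * (f - np.roll(f, 1))
print(abs(f.sum() - mass0))   # ~0: total mass conserved to round-off
```

The update is a convex combination of neighboring values for $\lambda\le 1$, so the first order scheme is also monotone, consistent with the remark below that odd order (upwind-biased) schemes resolve discontinuities well.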
\begin{rem}
We follow the same principle for the interpolation and reconstruction procedures in more general settings, for example when the time stepping size exceeds the CFL restriction, i.e.\ $\Delta t/\Delta x \ge 1$ for eq.~\eqref{eq:scalar2}. For general high dimensional problems, e.g.\ the Vlasov equation, similar procedures can be applied in a truly multi-dimensional fashion.
\end{rem}
To assess the stability of the conservative method, we perform Fourier analysis on the linear equation \eqref{eq:scalar2} with $x\in [0, 2\pi]$ and periodic boundary conditions. In particular, we make the ansatz $f^n_j = \hat{f}^n e^{\mathbf{i} j \xi}$ with $\mathbf{i} = \sqrt{-1}$ and $\xi \in [0, 2\pi]$. Plugging the ansatz into the conservative SL scheme described in Section~\ref{sec: cons}, we obtain $\hat{f}^{n+1}(\xi) = Q_{\lambda}(\xi) \hat{f}^n (\xi)$, where $Q_{\lambda}(\xi)$ is the amplification factor for the Fourier mode associated with $\xi$ and $\lambda = \frac{\Delta t}{\Delta x}$. To ensure linear stability, it is sufficient to have
\begin{equation}
\label{eq: linear_stab}
|Q_\lambda (\xi)|\le 1, \quad \forall \xi \in [0, 2\pi], \quad \forall\lambda \in [0, \lambda^\star], \quad \mbox{for some $\lambda^\star$}.
\end{equation}
We seek $\lambda^\star$ by numerically checking the inequality \eqref{eq: linear_stab} at $100$ discretized grid points on $\xi \in [0, 2\pi]$,
and by gradually increasing $\lambda$ in steps of $0.01$ starting from $\lambda =0$. To account for machine precision in our implementation, we check the inequality $|Q_\lambda (\xi)|\le 1 + 10^{-11}$ instead. We tabulate the resulting $\lambda^\star$ in Table~\ref{tab41} for the quadrature formulas discussed in Section~\ref{ssec: temp} and for the odd and even order spatial interpolation and reconstruction stencils above. One can observe that the second order trapezoidal rule and the fourth order GL2 perform much better than the midpoint rule in terms of stability, especially when the order of the spatial approximation is high. The time stepping sizes allowed for stability of the fully discretized schemes with $s=4, 8, 12$ are observed to be much smaller than those predicted by the ODE stability analysis of the previous section.
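The scanning procedure can be sketched on the simplest fully discrete case, the first order upwind scheme, whose amplification factor $Q_\lambda(\xi)=1-\lambda(1-e^{-\mathbf{i}\xi})$ is stable exactly for $\lambda\le 1$ (a Python sketch; the function name is illustrative):

```python
import numpy as np

def lambda_star(amp, dlam=0.01, lam_max=5.0, tol=1e-11):
    """Largest lambda with |Q_lambda(xi)| <= 1+tol on a 100-point xi grid."""
    xi = np.linspace(0.0, 2.0 * np.pi, 100)
    lam, ok = 0.0, 0.0
    while lam <= lam_max:
        if np.all(np.abs(amp(lam, xi)) <= 1.0 + tol):
            ok = lam
        else:
            break
        lam += dlam
    return ok

# first order upwind: Q_lambda(xi) = 1 - lambda*(1 - exp(-i*xi))
Q = lambda lam, xi: 1.0 - lam * (1.0 - np.exp(-1j * xi))
print(lambda_star(Q))   # ~1.0
```

The entries of Table~\ref{tab41} are obtained with the same scan applied to the amplification factor of the full SL discretization.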
In the following, we take the linear advection equation $u_t + u_x = 0$ with the smooth initial function $\sin(2\pi x)$ on the domain $[0,1]$ to test the CFL bounds in Table \ref{tab41}. Here, for a cleaner illustration, only linear interpolation and linear reconstruction are used. We consider schemes that couple GL2 for temporal integration with third and fourth order spatial approximations. Errors and orders of convergence at the final integration times are recorded in Table \ref{tab42}. Clean third order and fourth order spatial accuracy is observed at the corresponding CFL upper bounds ($1.22$ for third order and $1.84$ for fourth order, as in Table \ref{tab41}). The computation blows up when the CFL is increased by $0.01$, which confirms the validity of the CFL bounds in the table. We have similar observations for other orders of schemes, but omit them to save space. Although even order schemes have comparatively larger CFL bounds than odd order ones, for solutions with discontinuities we observe that odd order schemes, with their upwind mechanism, resolve the discontinuities better. We present numerical solutions of our schemes with linear weights for the advection of a step function in Fig. \ref{fig:test}.
Due to the above considerations, in the following numerical sections we use the scheme with the 5th order spatial approximation and the two-point Gauss-Legendre rule for temporal integration.
\begin{table}
\begin{center}
\caption{Upper bounds of CFL for FD SL scheme with odd and even order interpolation and reconstruction.
The amplification factor is bounded by $1+10^{-11}$. $N=100$.}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
temp/spatial& 1st& 3rd & 5th & 7th & 9th & exact \\\hline
mid-point & 1.00 & 1.00 & 0.14 & 0.04 & 0.02 & 0.00 \\\hline
trapezoid & 1.99 & 1.68 & 1.52 & 1.44 & 1.38 & 1.00\\\hline
Simpson & 1.33 & 1.50 & 1.35 & 0.71 & 0.37 & 0.00 \\\hline
GL2 & 1.00 & 1.22 & 1.19 & 1.16 & 1.15 & 1.72 \\\hline
s=4 & 1.00 & 1.37 & 1.27 & 1.22 & 1.19 & 4.81 \\\hline
s=8 & 1.00 & 1.35 & 1.26 & 1.21 & 1.18 & 9.41\\\hline
s=12 & 1.00 & 1.37 & 1.25 & 1.21 & 1.18 & 13.76\\\hline \hline
temp/spatial& 2nd & 4th & 6th & 8th & 10th & exact \\\hline
mid-point & 2.00 & 0.04 & 0.01 & 0.00 & 0.00 &0.00 \\\hline
trapezoid & 1.29 & 1.26 & 1.24 & 1.22 & 1.20 &1.00 \\\hline
Simpson & 3.00 & 2.91 & 0.83 & 0.34 & 0.20 &0.00 \\\hline
GL2 & 1.85 & 1.84 & 1.84 & 1.83 & 1.83 &1.72 \\\hline
s=4 & 1.96 & 1.97 & 1.98 & 1.98 & 1.98 &4.81 \\\hline
s=8 & 1.99 & 1.99 & 1.99 & 1.99 & 1.99 &9.41 \\\hline
s=12 & 1.99 & 1.99 & 1.99 & 1.99 & 2.00 &13.76 \\\hline
\end{tabular}
\label{tab41}
\end{center}
\end{table}
\begin{table}
\begin{center}
\caption{Accuracy test of the linear advection equation $u_t + u_x = 0$ with the initial function $\sin(2\pi x)$ for the 3rd order scheme with $CFL=1.22$ at $T=100.1$ and 4th order with $CFL=1.84$ at $T=1001.1$.}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Scheme & N & $L^1$ error & order & $L^\infty$ error & order \\ \hline
\multirow{4}{*}{3rd order} & 240 & 4.12E-04 & --& 6.47E-04 & -- \\ \cline{2-6}
& 480 & 5.15E-05 & 3.00& 8.09E-05 & 3.00 \\ \cline{2-6}
& 960 & 6.44E-06 & 3.00& 1.01E-05 & 3.00 \\ \cline{2-6}
& 1920 & 8.04E-07 & 3.00& 1.26E-06 & 3.00 \\ \hline
\multirow{4}{*}{4th order}& 120 & 1.76E-03 & --& 2.77E-03 & -- \\ \cline{2-6}
& 240 & 1.10E-04 & 4.00& 1.73E-04 & 4.00 \\ \cline{2-6}
& 480 & 6.89E-06 & 4.00& 1.08E-05 & 4.00 \\ \cline{2-6}
& 960 & 4.76E-07 & 3.85& 7.48E-07 & 3.85 \\ \hline
\end{tabular}
\label{tab42}
\end{center}
\end{table}
\begin{figure}
\caption{Numerical solution for the linear advection equation $u_t + u_x = 0$ with an initial step function
at $T=100.1$. Left: 3rd order with $CFL=1.22$ and 5th order with $CFL=1.19$; Right: 4th order and 6th order with $CFL=1.84$. $N=800$. Here linear interpolation and linear reconstruction are used without WENO.}
\label{fig:test}
\end{figure}
\section{Numerical tests on 2D linear passive-transport problems}
\label{sec5}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this section, the conservative truly multi-dimensional SL scheme is tested on passive transport equations, such as linear advection, rotation and swirling deformation. Since the velocity field is given a priori, the characteristics can be traced by a high order Runge-Kutta ODE integrator.
In this and the next section, we use $5$th order spatial approximations with WENO (i.e. WENO interpolation and WENO reconstruction) to evaluate the flux functions in \eqref{eq: cons_update}.
We use GL2 for temporal integration, while the characteristics are traced back in time by a Runge-Kutta method to locate their feet. For a general two dimensional problem $u_t+f(u)_x + g(u)_y = 0$, the time step is taken as
\[
\Delta t = CFL/(a/\Delta x+b/\Delta y),
\]
where $a=\max|f'(u)|$ and $b=\max|g'(u)|$. From Table \ref{tab41}, the CFL bound is $1.22$ for a 3rd order spatial discretization and $1.19$ for the $5$th order. In the following, we take $CFL=1.15$ unless otherwise specified.
\begin{exa}
We first test our problem for the linear equation $u_t+u_x+u_y=0$ with initial condition
$u(x,y,0)=\sin(x)\sin(y)$. The exact solution is $u(x,y,t)=\sin(x-t)\sin(y-t)$.
For this example, the roots of characteristics are located exactly.
Tables \ref{tab1} and \ref{tab1t} present the spatial and temporal orders of convergence of the proposed scheme.
Both the $5$th order spatial accuracy and the $4$th order temporal accuracy of GL2 can be observed.
\begin{table}[h]
\begin{center}
\caption{{Errors and orders for the linear equation in space. $T=1.2$. $CFL=1.15$.}}
\label{tab1}
\begin{tabular}{|c | c|c|c| c|c|}
\hline
\cline{1-5} $N_x \times N_y$ &{$20\times 20$} &{$40\times 40$} &{$60\times 60$}&{$80\times 80$}&{$100\times100$} \\
\hline
\cline{1-5} $L^1$ error &2.76E-4&8.38E-6 &1.11E-6&2.64E-7&8.68E-8\\
\hline
order &--&5.04&4.99&4.98&4.99\\
\hline
\end{tabular}
\caption{{Errors and orders for the linear equation in time. $N_x=N_y=200$. $T=1$.}}
\begin{tabular}{|c | c|c|c| c|c|}
\hline
\cline{1-5} $CFL$ &{$1.1$} &{$1.0$} &{$0.9$}&{$0.8$}&{$0.7$} \\
\hline
\cline{1-5} $L^1$ error &3.34E-9&2.27E-9 &1.49E-9&9.29E-10&5.46E-10\\
\hline
order &--&4.07&3.97&4.02&3.98\\
\hline
\end{tabular}
\label{tab1t}
\end{center}
\end{table}
\end{exa}
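The orders reported in Tables \ref{tab1} and \ref{tab1t} above are computed from successive errors as $\log(e_1/e_2)/\log(N_2/N_1)$, which also works for the non-doubling mesh sequence used here. A small check on synthetic $5$th order errors (the error constant is made up):

```python
import numpy as np

N = np.array([20, 40, 60, 80, 100], dtype=float)
e = 2.76e-4 * (N / 20.0) ** -5.0          # hypothetical errors ~ C*N^(-5)
order = np.log(e[:-1] / e[1:]) / np.log(N[1:] / N[:-1])
print(order)   # every entry equals 5 up to round-off
```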
\begin{exa}
Now we consider two problems defined on the domain $[-\pi,\pi]^2$.
One is the rigid body rotating problem
\[
u_t - y u_x + x u_y =0,
\]
the other is the swirling deformation flow problem
\[
u_t - \left(\cos^2\left(\frac{x}{2}\right)\sin(y)g(t)u\right)_x + \left(\sin(x)\cos^2\left(\frac{y}{2}\right)g(t)u\right)_y = 0,
\]
with $g(t)=\cos(\pi t/T)\pi$. Both problems share the same initial condition, which consists of a slotted disk, a cone and a smooth hump; see Fig. \ref{fig1} (top) and Fig. \ref{fig2} (top).
The period of the rigid body rotating problem is $2\pi$. In Fig. \ref{fig1}, we show
the results at a half period and at one period. As can be seen, the shapes of the bodies are well preserved. For the swirling deformation flow problem, the bodies are deformed after a half period, but they regain their initial shapes after one period; see Fig. \ref{fig2}.
\begin{figure}
\caption{Rigid body rotating problem. Mesh size: $128\times128$. Top: $T=0$; middle: $T=\pi$;
bottom: $T=2\pi$. Contour plots: 10 equally spaced lines.}
\label{fig1}
\end{figure}
\begin{figure}
\caption{Swirling deformation flow problem. Mesh size: $128\times128$. Top: $T=0$; middle: $T=0.75$;
bottom: $T=1.5$. Contour plots: 10 equally spaced lines.}
\label{fig2}
\end{figure}
\end{exa}
\section{Numerical tests of nonlinear systems}
\label{sec6}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this section, we test the conservative SL scheme on the nonlinear VP system, the guiding center Vlasov system and the incompressible Euler system in vorticity stream function formulation. Despite their different application backgrounds, the latter two systems share almost the same mathematical formulation, differing only in a sign in the Poisson equation.
\subsection{VP system}
Arising from collisionless plasma applications, the VP system
\begin{equation}
\frac{\partial f}{\partial t} + {\bf v} \cdot \nabla_{\bf x} f +
\mathbf{E}({\bf x},t) \cdot \nabla_{\bf v} f = 0, \label{eq: vlasov}
\end{equation}
and
\begin{equation}
\mathbf{E}(\mathbf{x},t)=-\nabla_{\bf x}\phi(\mathbf{x},t),\quad
-\Delta_{\bf
x}\phi(\mathbf{x},t)=\rho(\mathbf{x},t)-1,\label{eq: poisson}
\end{equation}
describes the temporal evolution of the particle distribution function in six dimensional phase space. Here $f( {\bf x},{\bf v},t)$ is the
probability distribution function, which describes the probability of
finding a particle with velocity ${\bf v}$ at position ${\bf x}$ at
time $t$, ${\bf E}$ is the electric field, and $\phi$ is the
self-consistent electrostatic potential. The probability
distribution function couples to the long range fields via the
charge density $\rho({\bf x},t) = \int_{\mathbb{R}^3} f({\bf x},{\bf v},t)d{\bf v}$,
where we take the limit of uniformly distributed, infinitely massive
ions in the background. In this paper, we consider the VP system with
one dimension in ${\bf x}$ and one dimension in ${\bf v}$. A periodic
boundary condition is imposed in the $x$-direction, while a zero boundary
condition is imposed in the $v$-direction. The equations for tracking the characteristics are
\begin{equation}
\frac{dx}{dt} = v, \quad \frac{dv}{dt} = E,
\end{equation}
where $E$ depends nonlinearly on $f$ via the Poisson equation \eqref{eq: poisson}. To locate the feet of the characteristics accurately, we apply the high order procedure proposed in \cite{qiu_russo_2016}.
Next we recall several quantities associated with the VP system, which should remain constant in time.
\begin{enumerate}
\item Mass:
\[
\text{Mass}=\int_v\int_xf(x,v,t)dxdv.
\]
\item $L^p$ norm $1\leq p<\infty$:
\begin{equation}
\|f\|_p=\left(\int_v\int_x|f(x,v,t)|^pdxdv\right)^\frac1p.
\end{equation}
\item Energy:
\begin{equation}
\text{Energy}=\int_v\int_xf(x,v,t)v^2dxdv + \int_xE^2(x,t)dx,
\end{equation}
where $E(x,t)$ is the electric field.
\item Entropy:
\begin{equation}
\text{Entropy}=\int_v\int_xf(x,v,t)\log(f(x,v,t))dxdv.
\end{equation}
\end{enumerate}
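On a uniform grid, these quantities can be approximated by midpoint sums. A minimal Python sketch (the function name and grid setup are illustrative; $\log f$ is guarded against the nonpositive values that a non-positivity-preserving scheme can produce):

```python
import numpy as np

def invariants(f, v, E, dx, dv):
    """Discrete mass, L1, L2, energy and entropy of f(x_i, v_j)."""
    mass    = f.sum() * dx * dv
    l1      = np.abs(f).sum() * dx * dv
    l2      = np.sqrt((f ** 2).sum() * dx * dv)
    energy  = (f * v[None, :] ** 2).sum() * dx * dv + (E ** 2).sum() * dx
    entropy = (f * np.log(np.abs(f) + 1e-30)).sum() * dx * dv
    return mass, l1, l2, energy, entropy

# sanity check on a Maxwellian with E = 0 on x in [0, 4*pi), v in [-6, 6]:
L, nx, nv = 4.0 * np.pi, 64, 128
dx, dv = L / nx, 12.0 / nv
v = -6.0 + (np.arange(nv) + 0.5) * dv          # midpoints in v
f = np.ones((nx, 1)) * (np.exp(-v ** 2 / 2.0) / np.sqrt(2.0 * np.pi))[None, :]
m, l1, l2, en, s = invariants(f, v, np.zeros(nx), dx, dv)
print(m, en)   # both ~ 4*pi (unit zeroth and second moments of the Maxwellian)
```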
Tracking the relative deviations of these quantities numerically provides
a good measure of the quality of a numerical scheme. The relative
deviation is defined as the deviation from the corresponding
initial value divided by the magnitude of the initial value.
We also check the mass conservation over time, $\int_v\int_x f(x,v,t)dxdv$, which coincides with the $L^1$ norm if $f$ is positive. However, since our scheme is not positivity preserving, the time evolution of the mass can differ from that of the $L^1$ norm due to the negative values appearing in numerical solutions.
In our numerical tests, we set the time step $\Delta t = CFL \cdot \min(\Delta x/v_{max}, \Delta v/\max|E|)$, where $CFL=1.15$, and take $v_{max} = 6$ to minimize the error from truncating the domain in the $v$-direction.
\begin{exa} (Weak Landau damping)
For the VP system, we first consider the weak Landau damping with the initial condition:
\begin{equation}
\label{landau}
f(t=0,x,v)=\frac{1}{\sqrt{2\pi}}(1+\alpha \cos(k x))\exp(-\frac{v^2}{2}),
\end{equation}
where $\alpha=0.01$ and $k=0.5$. The length of the domain in the $x$-direction is
$L=\frac{2\pi}{k}$; the same convention is used in the following examples.
In Fig. \ref{fig21}, we plot the time evolution of the electric field in the $L^2$ norm and $L^\infty$ norm, and the relative deviations of the discrete $L^1$ norm, $L^2$ norm, kinetic energy and entropy.
\begin{figure}
\caption{Weak Landau damping. Time evolution of the electric field in $L^2$ norm and $L^\infty$ norm (top), discrete $L^1$ norm and $L^2$ norm (middle), kinetic
energy and entropy (bottom). Mesh: $128 \times 128$.}
\label{fig21}
\end{figure}
\end{exa}
\begin{exa} (Strong Landau damping)
The initial condition for strong Landau damping is again (\ref{landau}),
now with $\alpha=0.5$ and $k=0.5$. As before, in Fig. \ref{fig22} we plot the time evolution
of the electric field in the $L^2$ norm and $L^\infty$ norm, and the relative deviations of the discrete $L^1$ norm, $L^2$ norm, kinetic energy and entropy. The mass conservation is indicated by the straight line at the bottom of the $L^1$ norm figure.
\begin{figure}
\caption{Strong Landau damping. Time evolution of the electric field in $L^2$ norm and $L^\infty$ norm (top), discrete $L^1$ norm and $L^2$ norm (middle), kinetic
energy and entropy (bottom). Mesh: $128 \times 128$. The straight red line indicates mass conservation.}
\label{fig22}
\end{figure}
\end{exa}
\begin{exa} (Two stream instability)
Now we consider the two stream instability problem, with an unstable initial distribution
function given by:
\begin{equation}
\label{2stream1}
f(t=0,x,v)=\frac{2}{7\sqrt{2\pi}}(1+5v^2)\left(1+\alpha\left((\cos(2kx)+\cos(3kx))/1.2+\cos(kx)\right)\right)\exp(-\frac{v^2}{2}),
\end{equation}
where $\alpha=0.01$ and $k=0.5$. We plot the numerical solution at $T=53$ in Fig. \ref{fig23}, while the time evolution of the electric field in the $L^2$ norm and $L^\infty$ norm,
and the relative deviations of the discrete $L^1$ norm, $L^2$ norm, kinetic energy and
entropy, are shown in Fig. \ref{fig24}.
\begin{figure}
\caption{Two stream instability at $T=53$. Mesh: $128 \times 128$. Contour plots: 10 equally spaced lines.}
\label{fig23}
\end{figure}
\begin{figure}
\caption{Two stream instability. Time evolution of the electric field in $L^2$ norm and $L^\infty$ norm (top), discrete $L^1$ norm and $L^2$ norm (middle), kinetic
energy and entropy (bottom). Mesh: $128 \times 128$.}
\label{fig24}
\end{figure}
\end{exa}
\begin{exa} (Symmetric two stream instability)
We consider the symmetric two stream instability with the initial condition:
\begin{equation}
\label{2stream2}
f(t=0,x,v)=\frac{1}{2v_{th}\sqrt{2\pi}}\left[\exp\left(-\frac{(v-u)^2}{2v_{th}^2}\right)
+\exp\left(-\frac{(v+u)^2}{2v_{th}^2}\right)\right](1+\alpha\cos(kx))
\end{equation}
with $\alpha=0.05$, $u=0.99$, $v_{th}=0.3$ and $k=\frac{2}{13}$.
We plot the numerical solution at $T=70$ in Fig. \ref{fig25}. The time evolution
of the electric field in the $L^2$ norm and $L^\infty$ norm, and the relative deviations of the discrete $L^1$ norm, $L^2$ norm, kinetic energy and
entropy, are reported in Fig. \ref{fig26}. As before, the mass conservation is indicated by the straight line at the bottom
of the $L^1$ norm figure.
\begin{figure}
\caption{Symmetric two stream instability at $T=70$. Mesh: $256 \times 128$. Contour plots: 10 equally spaced lines.}
\label{fig25}
\end{figure}
\begin{figure}
\caption{Symmetric two stream instability. Time evolution of the electric field in $L^2$ norm and $L^\infty$ norm (top), discrete $L^1$ norm and $L^2$ norm (middle), kinetic
energy and entropy (bottom). Mesh: $256 \times 128$. The straight red line indicates mass conservation.}
\label{fig26}
\end{figure}
\end{exa}
\subsection{The guiding center Vlasov model}
Consider the guiding center approximation of the 2D Vlasov model \cite{yang2014conservative, frenod2015long},
\begin{equation}
\frac{\partial \rho}{\partial t} + E_2 \frac{\partial \rho}{\partial x} - E_1 \frac{\partial \rho}{\partial y} =0,
\end{equation}
or equivalently in a conservative form as
\begin{equation}
\frac{\partial}{\partial t} \rho+ \frac{\partial }{\partial x} (\rho E_2)+ \frac{\partial }{\partial y}(-\rho E_1) =0,
\end{equation}
where ${\bf E} = (E_1, E_2) = -\nabla \Phi$ with $\Phi$ determined from the Poisson's equation
$$\bigtriangleup \Phi = -\rho.$$
We assume a uniform set of 2D grid points as specified in eq. \eqref{eq: 2dgrid}. The equations for tracking the characteristics that arrive at a grid point $(x_i, y_j)$ at time $t^{n+1}$ (without loss of generality) are
\begin{equation}
\label{eq: guiding_char}
\frac{dx(t)}{dt} = E_2, \quad \frac{dy(t)}{dt} = -E_1, \quad x(t^{n+1}) = x_i, \quad y(t^{n+1}) = y_j.
\end{equation}
Below we generalize the characteristics tracing procedures in \cite{qiu_russo_2016} to the guiding center model,
which can be directly applied to the incompressible Euler equations in the following subsection. In particular, for the system \eqref{eq: guiding_char}, we propose a scheme to locate the foot of the characteristic $(x^{\star}_{i,j}, y^{\star}_{i,j})$ at $t^n$. Once the foot of the characteristic is located, a 2D interpolation procedure can be employed to approximate the solution value $\rho(x^{\star}_{i,j}, y^{\star}_{i,j}, t^n)$. We remark that solving \eqref{eq: guiding_char} with high order temporal accuracy is challenging. In particular,
${\bf E}$ depends on the unknown function $\rho$ via the 2-D Poisson's equation in a global rather than a local fashion, and it is
difficult to evaluate ${\bf E}$ at intermediate time stages; i.e., Runge-Kutta methods cannot be used directly.
In our notation, the superscript $^n$ denotes the time level, and the subscripts $i$ and $j$ denote the location $(x_i, y_j)$; e.g. $E^n_{1, i, j} = E_1(x_i, y_j, t^n)$. The superscript $^{(p)}$ denotes the formal order of {\em temporal} approximation. For example, in eq. \eqref{eq: x_v_1} below, $x^{n, (1)}_{i,j}$ (or $y^{n, (1)}_{i,j}$) approximates $x_{i,j}^\star$ (or $y_{i,j}^\star$) to first order. $\frac{d}{dt} = \frac{\partial}{\partial t} + \frac{\partial x}{\partial t}\frac{\partial}{\partial x}+ \frac{\partial y}{\partial t}\frac{\partial}{\partial y}$
denotes the material derivative along characteristics. We use a spectrally accurate fast Fourier transform (FFT) for solving the 2-D Poisson's equation \eqref{eq: poisson}.
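As an illustration of this step, the periodic Poisson solve and the spectral evaluation of ${\bf E}=-\nabla\Phi$ can be sketched in a few lines of Python/NumPy. This is our own minimal sketch (not the authors' code) and assumes a uniform periodic grid:

```python
import numpy as np

def poisson_fft(rho, Lx, Ly):
    """Solve laplacian(phi) = -rho on a periodic box and return E = -grad(phi).

    rho is a 2-D array sampled on a uniform (Nx, Ny) grid with x along axis 0.
    """
    nx, ny = rho.shape
    kx = 2*np.pi*np.fft.fftfreq(nx, d=Lx/nx)
    ky = 2*np.pi*np.fft.fftfreq(ny, d=Ly/ny)
    KX, KY = np.meshgrid(kx, ky, indexing='ij')
    k2 = KX**2 + KY**2
    rho_hat = np.fft.fft2(rho)
    phi_hat = np.zeros_like(rho_hat)
    nonzero = k2 > 0                 # zero mode fixed to 0 (gauge choice)
    phi_hat[nonzero] = rho_hat[nonzero] / k2[nonzero]   # -k^2 phi_hat = -rho_hat
    # E = -grad(phi) by spectral differentiation
    E1 = np.real(np.fft.ifft2(-1j*KX*phi_hat))
    E2 = np.real(np.fft.ifft2(-1j*KY*phi_hat))
    return E1, E2
```

For $\rho=\sin x$ on $[0,2\pi]^2$ the exact solution is $\Phi=\sin x$, so $E_1=-\cos x$ and $E_2=0$, which the sketch reproduces to machine precision.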
We start from a first order scheme for tracing characteristics \eqref{eq: guiding_char}, by letting
\begin{equation}
\label{eq: x_v_1}
x^{n, (1)}_{i, j} = x_i - E_2(x_i, y_j, t^n) \Delta t; \quad y^{n, (1)}_{i, j} = y_j + E_1(x_i, y_j, t^n) \Delta t.
\end{equation}
They are first order approximations to $x_{i, j}^\star$ and $y_{i, j}^\star$.
Let
\begin{equation}
\rho^{n+1, (1)}_{i, j} = \rho(x^{n, (1)}_{i, j}, y^{n, (1)}_{i, j}, t^n),
\end{equation}
which can be obtained by a high order spatial interpolation. Based on $\{\rho^{n+1, (1)}_{i, j}\}$, we can compute
\[
{\bf E}^{n+1, (1)}_{i, j} = (E^{n+1, (1)}_{1, i, j}, E^{n+1, (1)}_{2, i, j}),
\]
by using FFT based on the 2-D Poisson's equation \eqref{eq: poisson}.
Note that ${\bf E}^{n+1, (1)}_{i, j}$ approximates ${\bf E}^{n+1}_{i, j}$ with first order temporal accuracy.
A second order scheme can be built upon the first order one, by letting
\begin{align}
\label{eq: x_2}
& x^{n, (2)}_{i, j} = x_i - \frac12\left(E^{n+1, (1)}_{2, i, j} + E_2(x^{n, (1)}_{i, j}, y^{n, (1)}_{i, j}, t^n)\right) \Delta t,
\\
\label{eq: v_2}
&y^{n, (2)}_{i, j} = y_j + \frac12\left(E^{n+1, (1)}_{1, i, j} + E_1(x^{n, (1)}_{i, j}, y^{n, (1)}_{i, j}, t^n)\right) \Delta t.
\end{align}
Here ${\bf E}(x^{n, (1)}_{i, j}, y^{n, (1)}_{i, j}, t^n)$ can be approximated by a high order spatial interpolation.
$(x^{n, (2)}_{i, j}, y^{n, (2)}_{i, j})$ can be shown to be second order approximations to $(x_{i, j}^\star, y_{i, j}^\star)$ by a local truncation error analysis.
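The two-stage update in eqs. \eqref{eq: x_v_1} and \eqref{eq: x_2}--\eqref{eq: v_2} amounts to an Euler predictor followed by a trapezoidal corrector along the characteristic. A schematic Python implementation follows; it is our own sketch, and the interpolation routine `interp` and the predicted field `E1_new`, `E2_new` are assumed to be supplied (e.g. by a high order spatial interpolation and the first order field prediction described above):

```python
import numpy as np

def trace_feet_order2(X, Y, E1, E2, E1_new, E2_new, dt, interp):
    """Locate second-order approximations to the feet of characteristics.

    X, Y           : 2-D arrays of grid coordinates
    E1, E2         : field at time t^n on the grid
    E1_new, E2_new : first-order prediction of the field at t^{n+1} on the grid
    interp(F,x,y)  : spatial interpolation of grid data F at points (x, y)
    """
    # Euler predictor, eq. (x_v_1)
    x1 = X - E2 * dt
    y1 = Y + E1 * dt
    # trapezoidal corrector, eqs. (x_2)-(v_2)
    x2 = X - 0.5*(E2_new + interp(E2, x1, y1)) * dt
    y2 = Y + 0.5*(E1_new + interp(E1, x1, y1)) * dt
    return x2, y2
```

For a field that is constant in space and time the corrector reproduces the exact feet $x_i - E_2\Delta t$, $y_j + E_1\Delta t$, a convenient unit test for an implementation.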
Finally, a third order scheme can be designed based on a second order one, by letting
\begin{align}
\label{eq: x_3}
x^{n, (3)}_{i, j} = x_i - E^{n+1, (2)}_{2, i, j} \Delta t +& \frac{\Delta t^2}2 \Big(
\frac23
\big(\frac{d E_2}{dt}\big)^{n+1, (2)}_{i, j}+ \frac13
\frac{dE_2}{dt}(x_{i, j}^{n, (2)}, y_{i, j}^{n, (2)},t^n)
\Big); \\
\label{eq: v_3}
y^{n, (3)}_{i, j} = y_j + E^{n+1, (2)}_{1, i, j} \Delta t - &\frac{\Delta t^2}2 \Big(
\frac23
\big(\frac{d E_1}{dt}\big)^{n+1, (2)}_{i, j}+ \frac13 \frac{d E_1}{dt}(x_{i, j}^{n, (2)}, y_{i, j}^{n, (2)},t^n)
\Big);
\end{align}
which are third order approximations to $x_{i, j}^\star$ and $y_{i, j}^\star$, see Proposition~\ref{prop: order3} below.
Here
\begin{equation}
\label{eq: material_der}
\frac{d}{dt} E_s = \frac{\partial E_s}{\partial t}+\frac{\partial E_s}{\partial x} E_2 - \frac{\partial E_s}{\partial y} E_1, \quad s=1, 2
\end{equation}
are material derivatives along characteristics. Notice that on the r.h.s. of eq. \eqref{eq: material_der}, the partial derivatives are not explicitly given. The spatial derivative terms can be approximated by high order spatial approximations, while the time derivative term $\frac{\partial {\bf E}}{\partial t}$ can be approximated by utilizing the Vlasov equation. In particular, taking the partial time derivative of the
2-D Poisson's equation gives
\begin{equation}
\Delta \phi_t=-(E_2\rho)_x+(E_1\rho)_y.
\label{eq: poisson2}
\end{equation}
After obtaining ${\bf E}$ by solving the original Poisson's equation \eqref{eq: poisson}, the right hand side of \eqref{eq: poisson2} can be constructed by a high order central finite difference scheme, e.g., 6th
order central finite difference scheme. Then we can solve \eqref{eq: poisson2} by FFT to get $\frac{\partial {\bf E}}{\partial t}= -(( \phi_t )_x, ( \phi_t )_y)$.
With such a procedure, both $\frac{\partial {\bf E}}{\partial t}(x_{i, j}^{n, (2)}, y_{i, j}^{n, (2)},t^n) $ and $\left(\frac{\partial {\bf E}}{\partial t}\right)^{n+1,(2)}_{i,j}$ can be obtained.
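To make the construction of \eqref{eq: poisson2} concrete, the following sketch assembles its right-hand side from grid data. For brevity it uses 2nd-order periodic central differences in place of the 6th-order scheme used in the text; names and layout are our own:

```python
import numpy as np

def poisson2_rhs(rho, E1, E2, dx, dy):
    """Right-hand side of laplacian(phi_t) = -(E2*rho)_x + (E1*rho)_y.

    All fields are 2-D arrays on a uniform periodic grid, x along axis 0.
    """
    def ddx(F):  # 2nd-order periodic central difference in x
        return (np.roll(F, -1, axis=0) - np.roll(F, 1, axis=0)) / (2*dx)
    def ddy(F):  # 2nd-order periodic central difference in y
        return (np.roll(F, -1, axis=1) - np.roll(F, 1, axis=1)) / (2*dy)
    return -ddx(E2*rho) + ddy(E1*rho)
```

The resulting array is then fed to the same FFT Poisson solver to recover $\phi_t$ and hence $\partial{\bf E}/\partial t$.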
\begin{prop}
\label{prop: order3}
$x^{n, (3)}_{i, j}$ and $y^{n, (3)}_{i, j}$ constructed in equations \eqref{eq: x_3}-\eqref{eq: v_3} are third order approximations to $x_{i, j}^\star$ and $y_{i, j}^\star$ in time.
\end{prop}
\noindent
{\em Proof.}
It can be checked by Taylor expansion
\begin{eqnarray}
x_{i, j}^\star &=& x_i - \frac{d x}{dt}(x_i, y_j, {t^{n+1}}) {\Delta t} + \left(\frac23 \frac{d^2 x}{dt^2}(x_i, y_j, {t^{n+1}}) + \frac13 \frac{d^2 x}{dt^2}(x_{i,j}^\star, y_{i,j}^\star, {t^{n}})\right) \frac{\Delta t^2}{2}
+\mathcal{O}(\Delta t^4) \nonumber\\
&=& x_i - E^{n+1}_{2, i, j} {\Delta t} +
\left(
\frac23 \frac{d E_2}{d t}(x_i, y_j, t^{n+1})
+\frac13 \frac{d E_2}{d t}(x^\star_{i, j}, y^\star_{i, j}, t^{n})
\right)
\frac{\Delta t^2}{2} +
\mathcal{O}(\Delta t^4) \nonumber\\
&=& x_i -
(E^{n+1, (2)}_{2, i, j} + \mathcal{O}(\Delta t^3)) {\Delta t} +
\left(
\frac23 \Big(\frac{dE_2}{d t}\Big)^{n+1, (2)}_{i, j}\right. \nonumber\\
&&\left.+ \frac13 \frac{dE_2}{dt}(x^{n, (2)}_{i, j}, y^{n, (2)}_{i, j}, t^{n})+ \mathcal{O}(\Delta t^3)
\right) \frac{\Delta t^2}{2}
+\mathcal{O}(\Delta t^4) \nonumber\\
&\stackrel{\eqref{eq: x_3}}{=}& x^{n, (3)}_{i, j} + \mathcal{O}(\Delta t^4). \nonumber
\end{eqnarray}
The second-to-last equality is due to the fact that a second order scheme (with superscript $(2)$) gives locally third order approximations.
Hence $x^{n, (3)}_{i, j}$ (similarly $y^{n, (3)}_{i, j}$) is a fourth order approximation to $x_{i, j}^\star$ (similarly $y_{i,j}^\star$) locally in time over one time step; the approximation is therefore third order in time globally.
$\mbox{ }\rule[0pt]{1.5ex}{1.5ex}$
\begin{exa} (Kelvin-Helmholtz instability problem). This example is the 2-D guiding center model problem with the initial condition
\begin{equation}
\rho_0(x,y)=\sin(y)+0.015\cos(kx)
\end{equation}
and periodic boundary conditions on the domain $[0,4\pi]\times[0,2\pi]$. We let $k=0.5$, which
will create a Kelvin-Helmholtz instability.
For this example, we show the surface and contour plots for the solution at $T=40$ in Fig. \ref{fig33}, similar to the results
in \cite{frenod2015long}. The mesh size is $128\times128$.
\begin{figure}
\caption{Kelvin-Helmholtz instability problem. Mesh size $128\times128$. $T=40$. Contour plots: 10 equally spaced lines. }
\label{fig33}
\end{figure}
\end{exa}
\subsection{Incompressible Euler equation}
\begin{exa}
\label{ex67}
We first consider the incompressible Euler system on the domain $[0, 2\pi]\times[0,2\pi]$ with an initial condition $\omega_0(x,y)=-2\sin(x)\sin(y)$.
The exact solution will stay stationary with $\omega(x,y,t)=-2\sin(x)\sin(y)$.
As in Table \ref{tab1} and Table \ref{tab1t}, the $5$th order spatial accuracy and
3rd order temporal accuracy are clearly observed in Table \ref{tab3} and Table \ref{tab3t}, respectively.
Here, for the temporal accuracy test, 7th order linear interpolation and linear reconstruction are used.
\begin{table}[h]
\begin{center}
\caption{{Errors and orders for the incompressible Euler equation in Example \ref{ex67}. $T=1.2$.}
}
\begin{tabular}{|c | c|c|c| c|c|}
\hline
$N_x \times N_y$ &{$20\times 20$} &{$40\times 40$} &{$60\times 60$}&{$80\times 80$}&{$100\times100$} \\
\hline
$L^1$ error &1.00E-2&3.01E-4 &3.80E-5&8.79E-6&2.84E-6\\
\hline
order &--&5.06&5.10&5.09&5.07\\
\hline
\end{tabular}
\label{tab3}
\caption{{Errors and orders for the incompressible Euler equation in Example \ref{ex67}. $N_x=N_y=128$. $T=1$.}}
\begin{tabular}{|c | c|c|c| c|}
\hline
$CFL$ &{$1.15$} &{$1.05$} &{$0.95$}&{$0.85$} \\
\hline
$L^1$ error &4.80E-9&3.65E-9 &2.69E-9&1.92E-9\\
\hline
order &--&3.03&3.04&3.04\\
\hline
\end{tabular}
\label{tab3t}
\end{center}
\end{table}
\end{exa}
\begin{exa} (The vortex patch problem).
In this example, we consider the incompressible Euler equations with the
initial condition given by
\begin{equation}
\omega_0(x,y)=
\begin{cases}
-1, \qquad & \frac{\pi}{2} \le x \le \frac{3\pi}{2}, \quad \frac{\pi}{4}\le y \le \frac{3\pi}{4}; \\
1, \qquad & \frac{\pi}{2} \le x \le \frac{3\pi}{2}, \quad \frac{5\pi}{4} \le y \le \frac{7\pi}{4}; \\
0, \qquad & \text{otherwise}.
\end{cases}
\end{equation}
We show the surface and contour plots of $\omega$ at $T=5$ in Fig. \ref{fig31}. The mesh size is $128\times128$.
\begin{figure}
\caption{Vortex patch problem. Mesh size $128\times128$. $T=5$. Contour plot: 10 equally spaced lines.}
\label{fig31}
\end{figure}
\end{exa}
\begin{exa} (Shear flow problem).
This example is the same as above but with following
initial conditions
\begin{equation}
\omega_0(x,y)=
\begin{cases}
\delta \cos(x)-\frac{1}{\rho}\,{\rm sech}^2((y-\pi/2)/\rho), \qquad &y\le \pi; \\
\delta \cos(x)+\frac{1}{\rho}\,{\rm sech}^2((3\pi/2-y)/\rho),\qquad & y>\pi,
\end{cases}
\end{equation}
where $\delta=0.05$ and $\rho=\frac{\pi}{15}$.
We show the surface and contour plots of $\omega$ at $T=6$ (top) and $T=8$ (bottom) in Fig. \ref{fig32}. The mesh size is $128\times128$.
\begin{figure}
\caption{Shear flow problem. Mesh size $128\times128$. $T=6$ (top) and $T=8$ (bottom). Contour plots: 10 equally spaced lines.}
\label{fig32}
\end{figure}
\end{exa}
\section{Conclusion}
\label{sec7}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
In this paper, we propose a conservative semi-Lagrangian finite difference scheme based on a flux difference formulation. We investigate its numerical stability from the linear ODE and PDE points of view via Fourier analysis. Upper bounds on the time step constraint have been found in the linear setting and verified numerically; unfortunately, these upper bounds are only slightly greater than those of the Eulerian approach. The scheme is applied to passive transport problems as well as nonlinear Vlasov systems and the incompressible Euler system to showcase its effectiveness. A new characteristics tracing procedure for the guiding center Vlasov system and the incompressible Euler system is proposed, mimicking the characteristics tracing mechanism in \cite{qiu_russo_2016}.
\end{document} |
\begin{document}
\title{Geometric construction of metaplectic covers of ${\rm GL}_{n}$
in characteristic zero}
\author{Richard Hill}
\date{March 2002}
\maketitle
\begin{abstract}
\noindent
This paper presents a new construction
of the $m$-fold metaplectic cover of ${\rm GL}_{n}$ over
an algebraic number field $k$, where $k$ contains a primitive $m$-th
root of unity.
A 2-cocycle on ${\rm GL}_{n}({\mathbb A})$ representing
this extension is given and the splitting of the
cocycle on ${\rm GL}_{n}(k)$ is found explicitly.
The cocycle is smooth at almost all places of $k$.
As a consequence, a formula for the
Kubota symbol on ${\rm SL}_{n}$ is obtained.
The construction of the paper
requires neither class field theory
nor algebraic K-theory, but relies instead on
naive techniques from the geometry of numbers introduced
by W. Habicht and T. Kubota.
The power reciprocity law for a number field
is obtained as a corollary.
\end{abstract}
\tableofcontents
\section{Introduction.}
\subsection{Metaplectic Groups}
Let $k$ be a global field with ad\`ele ring ${\mathbb A}$
and let $G$ be a linear algebraic group over $k$.
We shall regard $G({\mathbb A})$ as a locally compact topological group with
the topology induced by that of ${\mathbb A}$.
For a finite abelian group $A$, a \emph{metaplectic extension}
of $G$ by $A$ is a topological central extension:
$$
1 \to A \to \tilde{G}({\mathbb A}) \to G({\mathbb A}) \to 1,
$$
which splits over the discrete subgroup $G(k)$ of $G({\mathbb A})$.
Such extensions are of use in the theory of
automorphic forms since certain automorphic forms
(for example the classical theta-functions,
see \cite{weil})
may be regarded as functions on $\tilde{G}({\mathbb A})$,
which are invariant under translation by the lift of
$G(k)$ to $\tilde{G}({\mathbb A})$.
The groups $\tilde{G}({\mathbb A})$ are topological groups
but they are not in general groups of ad\`ele valued points of
an algebraic group.
If $k$ contains a primitive $m$-th root of unity,
then the group ${\rm SL}_{n}$ has a canonical metaplectic
extension with kernel the group $\mu_{m}$ of all $m$-th
roots of unity in $k$.
This extension is always non-trivial;
in fact if $m$ is the total number of roots of unity in $k$
and if $n\ge 3$,
then the canonical extension
is universal amongst metaplectic extensions.
As the groups ${\rm GL}_{n}({\mathbb A})$ and ${\rm GL}_{n}(k)$
are not perfect, ${\rm GL}_{n}$ has no universal
metaplectic extension.
However the canonical extension of ${\rm SL}_{n}$
may be continued in various ways to give a metaplectic
extension of ${\rm GL}_{n}$ by $\mu_{m}$.
This has been done by embedding ${\rm GL}_{n}$ in
${\rm SL}_{r}$ for $r>n$ (see \cite{kazpat});
we shall call the metaplectic extensions on ${\rm GL}_{n}$
obtained in this way the \emph{standard twists}.
In this paper we shall give an elementary construction
of a metaplectic extension $\widetilde{{\rm GL}}_{n}({\mathbb A})$
of ${\rm GL}_{n}$, which in fact is not one of the standard twists,
but which nevertheless restricts to the canonical metaplectic
extension of ${\rm SL}_{n}$.
The method used gives an independent construction
of the canonical metaplectic extension of ${\rm SL}_{n}$.
There are other ways of constructing the group
$\widetilde{{\rm GL}}_{n}({\mathbb A})$.
The advantages of the method of construction employed in this
paper are as follows:
\begin{itemize}
\item
Other methods of construction require class field theory and
algebraic K-theory.
In contrast the method here is very elementary.
In fact one can deduce certain theorems of class field theory
as corollaries of the results here.
\item
The method described here is very explicit in the sense that
a 2-cocycle ${\rm Dec}_{{\mathbb A}}$ is given which represents the group extension.
This means $\widetilde{{\rm GL}}_{n}({\mathbb A})$ may be realized as a set of pairs
$(\alpha,\chi)\in {\rm GL}_{n}({\mathbb A})\times\mu_{m}$ with multiplication
given by
$$
(\alpha,\chi)(\beta,\xi)
=
(\alpha\beta,\chi\xi\, {\rm Dec}_{{\mathbb A}}(\alpha,\beta)).
$$
Thus the cohomology class of ${\rm Dec}_{{\mathbb A}}$ is an element of
$H^{2}({\rm GL}_{n}({\mathbb A}),\mu_{m})$ which splits on the subgroup
${\rm GL}_{n}(k)$.
An expression for ${\rm Dec}_{{\mathbb A}}$ as a coboundary on ${\rm GL}_{n}(k)$
is also obtained.
In contrast the usual method of construction gives only
a cocycle on the standard Borel subgroup.
An expression for the whole cocycle has been obtained
(after various incorrect formulae obtained by
other authors) in \cite{banksetc}, but the cocycle
obtained there is more complicated than ours.
In particular the formula of \cite{banksetc}
involves first decomposing
$\alpha$, $\beta$ and $\alpha\beta$ in the Bruhat
decomposition, and then decomposing each of the three
Weyl group elements as a minimal product of simple
reflections.
\item
The cocycle ${\rm Dec}_{{\mathbb A}}$ is smooth on the non-archimedean part of
${\rm GL}_{n}({\mathbb A})$; in fact if $k$ has no real places
then we obtain a cocycle which is smooth everywhere.
The cocycle may therefore be used to study the smooth
representations of the metaplectic group.
More precisely, suppose $\pi$ is an
irreducible representation of $\widetilde{{\rm GL}}_{n}({\mathbb A})$
on a space $V$ of smooth functions
on $\widetilde{{\rm GL}}_{n}({\mathbb A})$, on which $\widetilde{{\rm GL}}_{n}({\mathbb A})$ acts by
right translation:
\begin{equation}
\label{actionA}
(\pi(g)\phi)(h):= \phi(hg),\quad
g,h\in\widetilde{{\rm GL}}_{n}({\mathbb A}).
\end{equation}
Let $\epsilon:\mu_{m}\to{\mathbb C}^{\times}$ be the restriction of the
central character to the subgroup $\mu_{m}$.
Then $V$ is isomorphic to a space $V'$ of smooth functions on
${\rm GL}_{n}({\mathbb A})$ with the twisted action:
\begin{equation}
\label{actionB}
(\pi(\alpha,\chi) \phi')(\beta)
=
\epsilon(\chi\, {\rm Dec}_{{\mathbb A}}(\beta,\alpha))\, \phi'(\beta\alpha).
\end{equation}
The isomorphism $V\to V'$ is given by $\phi\mapsto\phi'$,
where $\phi'$ is defined by
$$
\phi'(\alpha)
=
\phi(\alpha,1).
$$
Although the action (\ref{actionB}) looks more complicated
than (\ref{actionA}), it is perhaps easier to
use in calculations as one is dealing with elements of ${\rm GL}_{n}$.
One cannot do this with the cocycle of \cite{banksetc}
as it is not smooth (in fact on ${\rm GL}_{n}({\mathbb A})$ with $n\ge 2$
it is nowhere continuous).
\end{itemize}
There are two disadvantages to the method of construction
described in this paper.
First, the construction is long and quite difficult.
Second, the part of the
cocycle on the subgroup ${\rm GL}_{n}(k_{v})$ for $v|m$ is
not very explicit.
In a sense one has the same problem with the cocycle
of \cite{banksetc}, since it is expressed in
terms of ramified Hilbert symbols.
\paragraph{The method of construction.}
The case that $k$ is a function field is described in
\cite{hill2}, \cite{hill3};
in this paper we shall deal with the more difficult case that $k$
is an algebraic number field.
Let $S$ be the set of places $v$ of $k$ for which $|m|_{v}\ne 1$,
and let ${\mathbb A}^{S}$ denote the restricted topological product of the
fields $k_{v}$ for $v\notin S$.
Let $k_{\infty}$ be the sum of the archimedean completions of $k$
and let $k_{m}$ be the sum of the fields $k_{v}$
for non-archimedean places $v\in S$.
We then have
$$
{\mathbb A}
=
{\mathbb A}^{S} \oplus k_{\infty} \oplus k_{m},
$$
and hence:
$$
{\rm GL}_{n}({\mathbb A})
=
{\rm GL}_{n}({\mathbb A}^{S}) \oplus {\rm GL}_{n}(k_{\infty}) \oplus {\rm GL}_{n}(k_{m}).
$$
We shall write down explicit 2-cocycles ${\rm Dec}_{{\mathbb A}^{S}}$ on ${\rm GL}_{n}({\mathbb A}^{S})$
and ${\rm Dec}_{\infty}$ on ${\rm GL}_{n}(k_{\infty})$.
Then, for a certain compact, open subgroup $U_{m}$ of ${\rm GL}_{n}(k_{m})$,
we find a function $\tau:{\rm GL}_{n}(k)\cap U_{m}\to \mu_{m}$ such that
\begin{equation}
\label{ratsplit}
{\rm Dec}_{{\mathbb A}^{S}}(\alpha,\beta)\,
{\rm Dec}_{\infty}(\alpha,\beta)
=
\frac{\tau(\alpha)\tau(\beta)}{\tau(\alpha\beta)},
\quad
\alpha,\beta\in{\rm GL}_{n}(k)\cap U_{m}.
\end{equation}
It follows fairly easily from this that we can extend $\tau$ to ${\rm SL}_{n}(k)$
in such a way that there is a unique continuous cocycle ${\rm Dec}_{m}$ on
${\rm SL}_{n}(k_{m})$ defined by the formula
$$
{\rm Dec}_{{\mathbb A}^{S}}(\alpha,\beta)\,
{\rm Dec}_{\infty}(\alpha,\beta)\,
{\rm Dec}_{m}(\alpha,\beta)
=
\frac{\tau(\alpha)\tau(\beta)}{\tau(\alpha\beta)},
\quad
\alpha,\beta\in{\rm SL}_{n}(k).
$$
Finally we show that there is a cocycle ${\rm Dec}_{{\mathbb A}}$ on ${\rm GL}_{n}({\mathbb A})$
which is metaplectic and which extends all our cocycles.
Note that this definition of ${\rm Dec}_{m}$ is global;
it is defined on the dense subgroup ${\rm SL}_{n}(k)$ and then
extended by continuity to ${\rm SL}_{n}(k_{m})$.
It would be of some interest to find a local construction
of the cocycle ${\rm Dec}_{m}$, as the ramified Hilbert
symbols may be expressed in terms of this cocycle
via the isomorphism (\ref{K2}) below.
However I do not know how to make such a construction.
If $m$ is even then the cocycle ${\rm Dec}_{{\mathbb A}}$
has the surprising property that it is not, even up to a coboundary,
a product of cocycles ${\rm Dec}_{v}$ on the groups ${\rm GL}_{n}(k_{v})$
(in contrast the standard twists are products of local cocycles).
In fact if we write $\widetilde{{\rm GL}}_{n}(k_{v})$ for the preimage of
${\rm GL}_{n}(k_{v})$ in $\widetilde{{\rm GL}}_{n}({\mathbb A})$, then the various
subgroups $\widetilde{{\rm GL}}_{n}(k_{v})$ do not even commute with each other.
This means that irreducible representations $\pi$
of $\widetilde{{\rm GL}}_{n}({\mathbb A})$ cannot be expressed as
restricted tensor products of
irreducible representations of the groups $\widetilde{{\rm GL}}_{n}(k_{v})$.
Thus the usual local-to-global approach
to studying automorphic representations must be
modified to deal with $\widetilde{{\rm GL}}_{n}({\mathbb A})$.
\paragraph{Matsumoto's Construction.}
We now review the usual construction of the canonical metaplectic
extension of ${\rm SL}_{n}$.
Let $F$ be any field.
Recall (or see \cite{milnor} or \cite{hahnomeara})
that for $n\ge 3$ there is a universal central extension
$$
1 \to K_{2}(F) \to {\rm St}_{n}(F) \to {\rm SL}_{n}(F) \to 1,
$$
where ${\rm St}_{n}$ denotes the Steinberg group.
Hence, for any abelian group $A$ we have
\begin{equation}
\label{K2}
H^{2}({\rm SL}_{n}(F),A) \cong {\rm Hom}(K_{2}(F),A).
\end{equation}
Matsumoto (see \cite{matsumoto} or \cite{milnor})
proved that for any field $F$
the group $K_{2}(F)$ is the quotient of
$F^{\times}\otimes_{{\mathbb Z}} F^{\times}$ by the subgroup generated
by $\{\alpha\otimes (1-\alpha) :\alpha\in F\setminus\{0,1\}\}$.
The isomorphism (\ref{K2}) may be described as follows.
If $\sigma$ is a 2-cocycle representing a cohomology class
in $H^{2}({\rm SL}_{n}(F),A)$ then for diagonal matrices
$\alpha,\beta\in{\rm SL}_{n}(F)$ we have
$$
\frac{\sigma(\alpha,\beta)}{\sigma(\beta,\alpha)}
=
\prod_{i=1}^{n}\phi(\alpha_{i},\beta_{i}),
\quad
\alpha
=
\left(
\matrix{\alpha_{1}\cr &\ddots\cr &&\alpha_{n}}
\right),
\quad
\beta
=
\left(
\matrix{\beta_{1}\cr &\ddots\cr &&\beta_{n}}
\right),
$$
where $\phi:K_{2}(F)\to A$ is the image of $\sigma$.
Here we are writing the group law in $A$ multiplicatively.
Suppose that $k$ is a global field
containing a primitive $m$-th root of unity.
For any place $v$ of $k$ the $m$-th power
Hilbert symbol gives a map $K_{2}(k_{v})\to \mu_{m}$.
Corresponding to this map there is a cocycle
$\sigma_{v}\in H^{2}({\rm SL}_{n}(k_{v}),\mu_{m})$.
For any place $v$ of $k$ we shall write ${\mathfrak o}_{v}$ for the ring of
integers in $k_{v}$.
For almost all places $v$ the cocycle $\sigma_{v}$ splits on
${\rm SL}_{n}({\mathfrak o}_{v})$.
One may therefore define a cocycle $\sigma_{{\mathbb A}}$ on ${\rm SL}_{n}({\mathbb A})$
by $\sigma_{{\mathbb A}}=\prod_{v}\sigma_{v}$.
This corresponds to a topological central extension:
\begin{equation}
\label{extension}
1 \to \mu_{m} \to \widetilde{{\rm SL}}_{n}({\mathbb A}) \to {\rm SL}_{n}({\mathbb A}) \to 1.
\end{equation}
Now recall the following.
\begin{theorem}
[Power Reciprocity Law]
For any place $v$ of $k$ let $(-,-)_{v,m}$ denote the
$m$-th power Hilbert symbol on $k_{v}$.
For $\alpha,\beta\in k^{\times}$ we have
$$
\prod_{v} (\alpha,\beta)_{v,m}
=
1,
$$
where the product is taken over all places of $k$.
\end{theorem}
(For a proof, see Chapter 12, Verse 4, Theorem 12 of \cite{ArtinTate}.)
If one restricts $\sigma_{{\mathbb A}}$ to ${\rm SL}_{n}(k)$,
this restriction corresponds under (\ref{K2})
to the map $K_{2}(k)\to \mu_{m}$
induced by the $m$-th power Hilbert symbol on $k_{v}$.
Hence the restriction of $\sigma_{{\mathbb A}}$ to ${\rm SL}_{n}(k)$
corresponds to the map $K_{2}(k)\to \mu_{m}$ induced by the
product of all the $m$-th power Hilbert symbols.
By the reciprocity law this map is trivial.
Therefore $\sigma_{{\mathbb A}}$ splits on ${\rm SL}_{n}(k)$,
so the extension (\ref{extension}) is metaplectic.
\subsection{The Kubota symbol}
One of the results of this paper is a formula
(see \S6.2) for the Kubota symbol on ${\rm SL}_{n}$.
We recall here the definition of the Kubota symbol.
Let $k$ be an algebraic number field and let
${\mathfrak o}$ denote the ring of algebraic integers in $k$.
Given an ideal ${\mathfrak a}$ of ${\mathfrak o}$,
we define ${\rm SL}_{n}({\mathfrak o},{\mathfrak a})$ to be the subgroup of
matrices in ${\rm SL}_{n}({\mathfrak o})$ which are congruent
to the identity matrix modulo ${\mathfrak a}$.
A subgroup of ${\rm SL}_{n}(k)$ is said to be an \emph{arithmetic} subgroup
if it is commensurable with ${\rm SL}_{n}({\mathfrak o})$.
An arithmetic subgroup is said to be a \emph{congruence subgroup}
if it contains ${\rm SL}_{n}({\mathfrak o},{\mathfrak a})$ for some non-zero ideal ${\mathfrak a}$.
The \emph{congruence subgroup problem} is the question:
``is every arithmetic subgroup a congruence subgroup?''
This question has been answered by Bass--Milnor--Serre
\cite{bassmilnorserre} for $n\ge 3$ and by Serre \cite{serre2} in the
more difficult case $n=2$
(see for example \cite{PrasadRaghunathan} for generalizations).
To rule out some pathological cases,
assume either that $k$ has at least two archimedean places
or that $n\ge 3$.
If $k$ has a real place then the answer to the congruence
subgroup problem is ``yes'' whereas if $k$ is totally complex
the answer is ``no''.
We shall describe what happens when $k$ is totally complex.
To make statements clearer suppose $\mu_{m}$ is the group of all roots of
unity in $k$.
In this case there is an ideal ${\mathfrak f}$ and a surjective homomorphism
$\kappa_{m}:\Gamma({\mathfrak f})\to \mu_{m}$ with the following properties:
\begin{itemize}
\item
$\ker(\kappa_{m})$ is a non-congruence subgroup.
\item
For any arithmetic subgroup $\Gamma$ of ${\rm SL}_{n}(k)$ there
is an ideal ${\mathfrak a}$
such that $\Gamma\supset{\rm SL}_{n}({\mathfrak o},{\mathfrak a})\cap\ker(\kappa_{m})$.
\end{itemize}
The map $\kappa_{m}$ is called the Kubota symbol.
In the case of ${\rm SL}_{2}$ it is given by the following formula
(see \cite{kubota1}):
$$
\kappa_{m}\left(\matrix{a & b \cr c & d}\right)
=
\left\{
\begin{array}{ll}
\displaystyle{
\left(\frac{c}{d}\right)_{m}} &
\hbox{if $c\ne 0$},
\\
1 & \hbox{if $c=0$.}
\end{array}
\right.
$$
Here $\displaystyle{\left(\frac{c}{d}\right)_{m}}$ denotes the $m$-th power
residue symbol, which we recall is defined by
$$
\left(\frac{c}{d}\right)_{m}
=
\prod_{v|d} (c,d)_{v,m}
=
\left[\frac{k(\sqrt[m]{c})/k}{d}\right].
$$
The symbol on the right is the global Artin symbol;
the Galois group ${\rm Gal}(k(\sqrt[m]{c})/k)$ may be
identified with $\mu_{m}$.
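To illustrate the shape of these symbols in the simplest case, take $m=2$ and $k={\mathbb Q}$ (setting aside for the moment the standing assumption that $k$ is totally complex), where the power residue symbol $\left(\frac{c}{d}\right)_{2}$ is the Jacobi symbol. The following Python sketch (ours, purely illustrative, and restricted to $d$ odd and positive) computes it by quadratic reciprocity and evaluates Kubota's formula:

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # (2/n) = -1 exactly when n = 3, 5 mod 8
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # reciprocity: flip sign iff both = 3 mod 4
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0  # 0 when gcd(a, n) > 1

def kubota(g):
    """Kubota symbol for m = 2 over Q: kappa((a b; c d)) = (c/d) if c != 0, else 1."""
    c, d = g[1][0], g[1][1]
    return jacobi(c, d) if c != 0 else 1
```

For example, with $A=\left(\begin{smallmatrix}1&0\\4&1\end{smallmatrix}\right)$ and $B=\left(\begin{smallmatrix}1&4\\0&1\end{smallmatrix}\right)$, both congruent to the identity modulo $4$, one finds $\kappa(A)=\kappa(B)=\kappa(AB)=1$.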
\paragraph{Connection between the congruence subgroup problem
and metaplectic groups.}
As before suppose the number field $k$ is totally complex.
We shall introduce two topologies on ${\rm SL}_{n}(k)$.
For the first topology we take the congruence subgroups
as a basis of neighbourhoods of the identity.
The completion of ${\rm SL}_{n}(k)$ with respect to this topology
is the group ${\rm SL}_{n}({\mathbb A}_{f})$, where ${\mathbb A}_{f}$ denotes the ring of finite
ad\`eles of $k$.
For our second topology we take the arithmetic subgroups
of ${\rm SL}_{n}(k)$ as a basis of neighbourhoods of the identity.
This is a finer topology, and the completion of ${\rm SL}_{n}(k)$ with
respect to this topology is a group extension $\widetilde{{\rm SL}}_{n}({\mathbb A}_{f})$ of
${\rm SL}_{n}({\mathbb A}_{f})$.
The homomorphism $\kappa_{m}$ identifies the kernel of the extension
with $\mu_{m}$.
We therefore have a short exact sequence:
\begin{equation}
\label{SLtAf}
1 \to \mu_{m} \to \widetilde{{\rm SL}}_{n}({\mathbb A}_{f}) \to {\rm SL}_{n}({\mathbb A}_{f}) \to 1.
\end{equation}
This must be a central extension as ${\rm SL}_{n}({\mathbb A}_{f})$ is perfect.
On the other hand ${\rm SL}_{n}(k)$ is contained in both completions,
so the extension splits on ${\rm SL}_{n}(k)$.
Adding the group ${\rm SL}_{n}(k_{\infty})$
to both ${\rm SL}_{n}({\mathbb A}_{f})$ and $\widetilde{{\rm SL}}_{n}({\mathbb A}_{f})$
we obtain the canonical metaplectic extension of ${\rm SL}_{n}$.
Conversely we may reconstruct the Kubota symbol from the
metaplectic group as follows.
Let $\widetilde{{\rm SL}}_{n}({\mathbb A}_{f})$ be the
preimage of ${\rm SL}_{n}({\mathbb A}_{f})$ in the
canonical metaplectic extension of ${\rm SL}_{n}$.
By restriction we have an exact sequence (\ref{SLtAf}).
Since $k$ is totally complex,
the group ${\rm SL}_{n}(k_{\infty})$ is
both connected and simply connected.
This implies that the restriction map gives an
isomorphism
$$
H^{2}({\rm SL}_{n}({\mathbb A}),\mu_{m})
\to
H^{2}({\rm SL}_{n}({\mathbb A}_{f}),\mu_{m}).
$$
Hence, if we regard ${\rm SL}_{n}(k)$
as a subgroup of ${\rm SL}_{n}({\mathbb A}_{f})$,
then this subgroup lifts to a subgroup of $\widetilde{{\rm SL}}_{n}({\mathbb A}_{f})$.
There are two subgroups of
${\rm SL}_{n}({\mathbb A}_{f})$ on which
the extension (\ref{SLtAf})
splits.
First, since (\ref{SLtAf}) is a topological central extension,
there are neighbourhoods $\tilde U$ of the
identity in $\widetilde{{\rm SL}}_{n}({\mathbb A}_{f})$
and $U$ of the identity in ${\rm SL}_{n}({\mathbb A}_{f})$
such that the projection map restricts to
a homeomorphism $\tilde U\to U$.
As ${\rm SL}_{n}({\mathbb A}_{f})$ is totally disconnected
we may take $\tilde U$, and hence $U$,
to be a compact open subgroup.
The inverse map $U\to \tilde U$ is a
continuous splitting of the extension.
Secondly we have a splitting of the extension on ${\rm SL}_{n}(k)$.
By the Strong Approximation Theorem
(Theorem 3.3.1 of \cite{bump}),
${\rm SL}_{n}(k)$ is dense in ${\rm SL}_{n}({\mathbb A}_{f})$.
Hence the splitting on ${\rm SL}_{n}(k)$ cannot be continuous,
since otherwise it would extend by continuity to a splitting
of the whole extension.
Let $\Gamma=U\cap {\rm SL}_{n}(k)$.
The group $\Gamma$ is a congruence subgroup of ${\rm SL}_{n}(k)$,
and we have two different splittings of the extension on $\Gamma$.
Dividing one splitting by the other we obtain a homomorphism
$\kappa_{m}:\Gamma\to\mu_{m}$.
This homomorphism is not continuous with respect to the
topology induced from ${\rm SL}_{n}({\mathbb A}_{f})$,
so its kernel is a non-congruence subgroup.
The map $\kappa_{m}$ is the Kubota symbol.
In the construction of this paper the
splittings of the extension are described explicitly.
As a consequence we obtain a formula for
the Kubota symbol on ${\rm SL}_{n}$.
\subsection{Organization of the paper}
The paper is organized into the following sections:
\begin{itemize}
\item[\S2]
We fix some standard notation from group cohomology and
singular homology.
To avoid confusions of signs later, the normalizations
of various maps are fixed.
Some known results are stated for later reference.
\item[\S3]
The cocycles ${\rm Dec}_{{\mathbb A}^{S}}$ and ${\rm Dec}_{\infty}$ on the
groups ${\rm GL}_{n}({\mathbb A}^{S})$ and ${\rm GL}_{n}(k_{\infty})$
are defined.
The cocycle ${\rm Dec}_{{\mathbb A}^{S}}$ has been studied in \cite{hill2};
some of the results from there are recalled.
Analogous results are obtained for the cocycle ${\rm Dec}_{\infty}$.
On their own the results of this section
concerning ${\rm Dec}i$ are of little interest since they
describe group extensions which are already
well understood.
It is the relation (\ref{ratsplit}) between ${\rm Dec}i$ and ${\rm Dec}AS$
that is interesting.
This relation is stated in \S4 and proved in \S6.
\item[\S4]
The function $\tau$ is defined, and the formalism
used in the proof of the relation (\ref{ratsplit})
between ${\rm Dec}AS$, ${\rm Dec}i$ and $\tau$ is introduced.
\item[\S5]
This is a technical section on the existence of
certain limits.
\item[\S6]
The relation (\ref{ratsplit}) between ${\rm Dec}AS$,
${\rm Dec}i$ and $\tau$ is proved.
\item[\S7]
The cocycles are extended to ${\rm GL}_{n}({\mathbb A})$.
\end{itemize}
Sections 3 and 4 are easy, although
Section 4 has a lot of notation.
Section 7 is quite formal.
In contrast, Sections 5 and 6 are difficult.
\section{Notation}
\subsection{Acyclic $({\mathbb Z}/m)[\mu_{m}]$-modules.}
Throughout the paper, $\mu_{m}$ will denote a cyclic group of order $m$.
As $\mu_{m}$ will often be taken to be a group of roots of unity,
we shall write the group law in $\mu_{m}$ multiplicatively.
We shall deal with various modules over the group-ring $({\mathbb Z}/m)[\mu_{m}]$.
We shall write $[\mu_{m}]$ for the sum of the elements of $\mu_{m}$.
We also fix once and for all a generator $\zeta$ of $\mu_{m}$.
There is an exact sequence (see \S7 of \cite{AtiyahWall}):
$$
({\mathbb Z}/m)[\mu_{m}]
\stackrel{[\mu_{m}]}{\leftarrow}
({\mathbb Z}/m)[\mu_{m}]
\stackrel{1-[\zeta]}{\leftarrow}
({\mathbb Z}/m)[\mu_{m}]
\stackrel{[\mu_{m}]}{\leftarrow}
({\mathbb Z}/m)[\mu_{m}].
$$
Applying the functor ${\rm Hom}_{({\mathbb Z}/m)[\mu_{m}]}(-,M)$
we obtain the complex:
\begin{equation}
\label{star}
M
\stackrel{[\mu_{m}]}{\to}
M
\stackrel{1-[\zeta]}{\to}
M
\stackrel{[\mu_{m}]}{\to}
M.
\end{equation}
The cohomology of this complex is the Tate cohomology
$\hat H^{\bullet}(\mu_{m},M)$ (see \cite{weibel}).
We shall call $M$ an \emph{acyclic} module if (\ref{star})
is exact, i.e. if its Tate cohomology is trivial.
Free modules are clearly acyclic.
Injective modules are acyclic,
since for these the functor
${\rm Hom}(-,M)$ is exact.
More generally, by Shapiro's Lemma
(see \cite{weibel}),
any $({\mathbb Z}/m)[\mu_{m}]$-module
which is induced from a ${\mathbb Z}/m$-module
is acyclic.
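By way of contrast, the trivial module is not acyclic; the following check is not needed in the sequel, but may help the reader.
On $M={\mathbb Z}/m$ with the trivial action of $\mu_{m}$, the element $1-[\zeta]$ acts as $0$ and $[\mu_{m}]$ acts as multiplication by $m\equiv 0$, so every map in (\ref{star}) is zero and
$$
\hat H^{i}(\mu_{m},{\mathbb Z}/m)
\cong
{\mathbb Z}/m
\ne 0.
$$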
\begin{lemma}
\label{easy}
Let $M$ be an acyclic $({\mathbb Z}/m)[\mu_{m}]$-module
and suppose we have a $({\mathbb Z}/m)[\mu_{m}]$-module homomorphism
$$
\Phi:M \to {\mathbb Z}/m.
$$
Then there is a $({\mathbb Z}/m)[\mu_{m}]$-homomorphism
$\hat\Phi:(1-[\zeta])M\to \mu_{m}$
defined by
$$
\hat\Phi ( (1-[\zeta]) a ) = \zeta^{\Phi(a)}.
$$
The lifted map $\hat\Phi$ is independent of the choice of
generator $\zeta$.
Here ${\mathbb Z}/m$ and $\mu_{m}$ are regarded as
$({\mathbb Z}/m)[\mu_{m}]$-modules with the trivial action of $\mu_{m}$.
\end{lemma}
The definition of $\hat\Phi$ looks more natural if we identify
$\mu_{m}$ with ${\mathfrak a}/{\mathfrak a}^{2}$, where ${\mathfrak a}$ denotes the
augmentation ideal of $({\mathbb Z}/m)[\mu_{m}]$.
{\it Proof. }
We need only show that $\hat\Phi$ is well-defined.
Suppose $(1-[\zeta]) a=(1-[\zeta])b$.
By the exact sequence (\ref{star}), there is a $c\in M$
such that $b-a=[\mu_{m}] c$.
Therefore $\Phi(b)-\Phi(a)=[\mu_{m}] \Phi(c)$.
Since the action of $[\mu_{m}]$ on ${\mathbb Z}/m$ is zero,
we have $\Phi(b)=\Phi(a)$.
$\Box$
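To illustrate the lemma with an example which will not be needed later, take $M=({\mathbb Z}/m)[\mu_{m}]$ itself (free, hence acyclic) and let $\Phi$ be the augmentation map, which sends each element of $\mu_{m}$ to $1$. Then for every $\xi\in\mu_{m}$:
$$
\hat\Phi\Big( (1-[\zeta])[\xi] \Big)
=
\zeta^{\Phi([\xi])}
=
\zeta.
$$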
\subsection{Central extensions.}
Let $G$ be an abstract group.
We shall regard $\mu_{m}$ as a $G$-module with the trivial action.
Given an inhomogeneous 2-cocycle $\sigma$
on $G$ with values in $\mu_{m}$, one defines a central
extension of $G$ by $\mu_{m}$ normalized as follows:
$$
\tilde G
=
G\times \mu_{m},\quad
(g,\xi)(h,\psi)
:=
(gh,\xi\psi\sigma(g,h)).
$$
Conversely, given a central extension
$$
1 \to \mu_{m} \to \tilde G \to G \to 1,
$$
we may recover a 2-cocycle $\sigma$ by choosing,
for every $g\in G$,
a preimage $\hat g\in\tilde G$
and defining
$$
\sigma(g,h)
=
\hat{g}\hat{h}\widehat{gh}^{-1}.
$$
If $G$ is a locally compact topological group
then by a \emph{topological central extension}
of $G$ by $\mu_{m}$ we shall mean a short exact sequence of
topological groups and continuous homomorphisms:
$$
1 \to \mu_{m} \to \tilde{G} \to G \to 1,
$$
such that the map $\tilde{G}\to G$ is locally
a homeomorphism.
The isomorphism classes of
topological central extensions of $G$ by $\mu_{m}$
correspond to elements of $H^{2}_{\rm meas}(G,\mu_{m})$,
where $H^{\bullet}_{\rm meas}$ denotes
the group cohomology theory based on Borel-measurable cochains
(see \cite{moore}).
As all our cochains will be Borel-measurable
we shall write $H^{\bullet}$ instead of
$H^{\bullet}_{\rm meas}$.
\subsection{Commutators and symmetric cocycles}
Proofs of the following facts may be found in \cite{klose},
where they were used to determine the metaplectic extensions of
$D^{\times}$ for a quaternion algebra $D$ over $k$.
Let $G$ be a group and suppose
$G_{1}$ and $G_{2}$ are two subgroups of $G$,
such that every element of $G_{1}$ commutes with every element
of $G_{2}$.
Given a 2-cocycle $\sigma\in Z^{2}(G,\mu_{m})$,
we define for $a\in G_{1}$ and $b\in G_{2}$
the \emph{commutator}:
$$
[a,b]_{\sigma}
:=
\frac{\sigma(a,b)}{\sigma(b,a)}.
$$
The commutator map is bimultiplicative and skew symmetric,
and depends only on the cohomology class of $\sigma$.
If $G$ is a locally compact topological group
and $\sigma$ is Borel-measurable,
then the commutator map is a continuous function on $G_{1}\times G_{2}$.
If $G$ is an abelian group,
then we shall call a 2-cocycle $\sigma$
on $G$ \emph{symmetric} if $[\cdot,\cdot]_{\sigma}$ is trivial on
$G\times G$.
This amounts to saying that the corresponding central extension
$\tilde G$ is abelian.
We shall write $H^{2}_{sym}(G,\mu_{m})$ for the subgroup
of symmetric classes.
If $G$ and $H$ are two abelian groups, then
the restriction maps give an isomorphism:
$$
H^{2}_{sym}(G\oplus H,\mu_{m})
\cong
H^{2}_{sym}(G,\mu_{m})
\oplus
H^{2}_{sym}(H,\mu_{m}).
$$
The restriction map gives an isomorphism:
$$
H^{2}_{sym}(G,\mu_{m})
\cong
H^{2}_{sym}(G[m],\mu_{m}),
$$
where $G[m]$ denotes the subgroup of $m$-torsion elements of $G$.
Furthermore there is a canonical isomorphism (independent of $\zeta$):
$$
\begin{array}{rcl}
H^{2}_{sym}(\mu_{m},\mu_{m})
&\cong&
{\mathbb Z}/m\\
\sigma
&\mapsto&
b,\quad \hbox{ where }\quad
\displaystyle{
\prod_{i=1}^{m}\sigma(\zeta,\zeta^{i})
=
\zeta^{b}.
}
\end{array}
$$
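For instance, when $m=2$ the non-trivial class in $H^{2}_{sym}(\mu_{2},\mu_{2})$ is represented by the extension ${\mathbb Z}/4$ of $\mu_{2}$ by $\mu_{2}$. Its normalized cocycle has $\sigma(\zeta,\zeta)=\zeta$ and $\sigma(g,1)=\sigma(1,g)=1$, so
$$
\prod_{i=1}^{2}\sigma(\zeta,\zeta^{i})
=
\sigma(\zeta,\zeta)\,\sigma(\zeta,1)
=
\zeta,
$$
giving $b=1$; the split extension ${\mathbb Z}/2\oplus{\mathbb Z}/2$ gives $b=0$.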
\subsection{Singular homology groups.}
We will need some notation from singular homology,
which we now introduce.
In order to avoid sign errors we must be
clear about the precise definition of our chain complex,
which is rather non-standard.
For $r\ge 0$ we define the \emph{$r$-simplex} $\Delta^r$ by
$$
\Delta^r
=
\left\{
\underline{x}\in{\mathbb R}^{r+1} :
x_0,\ldots,x_r\ge 0 \hbox{ and } \sum x_i =1
\right\}.
$$
By an \emph{$r$-prism} we shall mean a product of finitely many simplices,
i.e. an expression of the form $\prod_{i=1}^s \Delta^{a(i)}$,
where $\sum a(i)=r$.
Let $Y$ be a topological space.
By a \emph{singular $r$-cell} in $Y$ we shall mean a continuous map from an
$r$-prism to $Y$.
The cell ${\mathscr T}:\prod_{i=1}^s \Delta^{a(i)}\to Y$
will be said to be \emph{degenerate} if ${\mathscr T}(\underline{x}_1,\ldots,\underline{x}_s)$
is independent of one of the variables $\underline{x}_i\in\Delta^{a(i)}$
($a(i)>0$).
We shall write $C_r(Y)$ for the ${\mathbb Z}/m{\mathbb Z}$-module
generated by the set of all singular $r$-cells in $Y$,
with the following relations:
\begin{itemize}
\item
Suppose $A$ is an $a$-prism and $B$ is a $b$-prism with $a+b=r$.
If ${\mathscr T}:A\times B \to Y$ is a singular $r$-cell
then we define another singular $r$-cell
${\mathscr T}':B\times A\to Y$ by ${\mathscr T}'(b,a)={\mathscr T}(a,b)$.
For every such ${\mathscr T},{\mathscr T}'$ we impose a relation:
$$
{\mathscr T} = (-1)^{ab} {\mathscr T}';
$$
\item
${\mathscr T}=0$ for every degenerate $r$-cell ${\mathscr T}$.
\end{itemize}
For any simplex $\Delta^r$ we define the $j$-th face map ($j=0,\ldots,r$)
to be the map
$$
{\mathfrak F}_j:\Delta^{r-1}\to \Delta^r,\quad
(x_0,\ldots,x_{r-1})
\mapsto
(x_0,\ldots,x_{j-1},0,x_j,\ldots,x_{r-1}).
$$
The boundary of a singular cell
${\mathscr T}:\prod_{i=1}^s \Delta^{a(i)} \to Y$
is defined to be the sum
\begin{eqnarray*}
\partial {\mathscr T}
&=&
\sum_{i=1}^s
\sum_{j=0}^{a(i)}
(-1)^{a(1)+\cdots+a(i-1)+j}
{\mathscr T}
\circ
{\mathfrak F}_{i,j},\\
\hbox{where }
{\mathfrak F}_{i,j}(\underline{x}_{1},\ldots,\underline{x}_{s})
&=&
(\underline{x}_1,\ldots,\underline{x}_{i-1},{\mathfrak F}_{j}(\underline{x}_{i}),\underline{x}_{i+1},\ldots,\underline{x}_s).
\end{eqnarray*}
This boundary map extends by linearity
to a map $\partial : C_r(Y) \to C_{r-1}(Y)$.
For any subspace $Z\subseteq Y$
there is an inclusion
$C_r(Z) \subseteq C_r(Y)$,
and we define $C_{\bullet}(Y,Z) = C_{\bullet}(Y) / C_{\bullet}(Z)$.
The homology groups of the complexes
$C_{\bullet}(Y)$ and $C_{\bullet}(Y,Z)$
are the usual singular homology groups with coefficients in
${\mathbb Z}/m{\mathbb Z}$ (see for example \cite{massey}).
We have adopted this non-standard definition of the chain complex because
we shall later write down explicit singular cells, and these will
be of the form described above.
The base set $|{\mathscr T}|$ of a singular $r$-cell ${\mathscr T}$ is defined
to be the image of ${\mathscr T}$ if ${\mathscr T}$ is non-degenerate, and the empty set
if ${\mathscr T}$ is degenerate.
The base set of an element of $C_r(Y)$ is defined to be the union
of all base-sets of singular $r$-cells in its support.
If $Y$ is a vector space over ${\mathbb R}$ then
for vectors $v_{0},\ldots,v_{r}\in Y$
we shall denote by $[v_0,\ldots,v_r]$
the $r$-cell $\Delta^r\to Y$ given by
$$
[v_0,\ldots,v_r](\underline{x})
:=
\sum_{i=0}^{r} x_i v_i.
$$
The image of this map is the convex hull
of the set $\{v_{0},\ldots,v_{r}\}$.
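As a check on the signs (this example is not used later), consider the $2$-cell $[v_{0},v_{1},v_{2}]$. Here $s=1$ and $a(1)=2$, and composing with the face maps gives $[v_{0},v_{1},v_{2}]\circ{\mathfrak F}_{1,j}=[v_{0},\ldots,\widehat{v_{j}},\ldots,v_{2}]$, so
$$
\partial [v_{0},v_{1},v_{2}]
=
[v_{1},v_{2}]
-
[v_{0},v_{2}]
+
[v_{0},v_{1}],
$$
in agreement with the usual simplicial boundary.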
To simplify our formulae we shall sometimes substitute
the closed interval $I=[0,1]$ for $\Delta^1$,
by identifying $0$ with $(1,0)$ and $1$ with $(0,1)$.
\paragraph{Orientations.}
Let $Y$ be a $d$-dimensional manifold.
If $y\in Y$ then $H_d(Y,Y\setminus \{y\})$ is non-canonically
isomorphic to ${\mathbb Z}/m{\mathbb Z}$.
The manifold $Y$ is said to be ${\mathbb Z}/m{\mathbb Z}$-orientable
if one can associate to each point $y\in Y$ an isomorphism
$$
{\rm ord}_y:H_d(Y,Y\setminus \{y\}) \longrightarrow {\mathbb Z}/m{\mathbb Z},
$$
with the property that for every $y\in Y$ there is a neighbourhood $U$
of $y$, such that for every $z\in U$ the following diagram commutes.
\unitlength0.8cm
\begin{picture}(15,5)
\thicklines
\put(5.8,4){$H_d(Y,Y\setminus U)$}
\put(2.8,2){$H_d(Y,Y\setminus \{y\})$}
\put(9.2,2){$H_d(Y,Y\setminus \{z\})$}
\put(6.5,0){${\mathbb Z}/m{\mathbb Z}$}
\put(4.5,0.5){${\rm ord}_y$}
\put(9,0.5){${\rm ord}_z$}
\put(4.5,1.8){\vector(3,-2){2}}
\put(10,1.8){\vector(-3,-2){2}}
\put(8,3.8){\vector(3,-2){2}}
\put(6.5,3.8){\vector(-3,-2){2}}
\end{picture}\\
\noindent
Such a collection of isomorphisms will be called an \emph{orientation}.
Suppose $Y$ is ${\mathbb Z}/m{\mathbb Z}$-orientable
and fix an orientation $\{{\rm ord}_y\}$ on $Y$.
Let ${\mathscr T} \in C_d(Y)$.
Suppose that $y\in Y$ does not
lie in the base set of $\partial {\mathscr T}$.
Then ${\mathscr T}$ represents a homology class in $H_{d}(Y,Y\setminus\{y\})$.
From our condition on ${\rm ord}$, the map
$y\mapsto {\rm ord}_y({\mathscr T})$ is a locally constant
function $Y\setminus |\partial {\mathscr T}| \to {\mathbb Z}/m{\mathbb Z}$.
For a discrete subset $M\subseteq Y$ we shall use the notation
$$
\{ {\mathscr T} | M\}
=
\sum_{y\in M} {\rm ord}_{y}{\mathscr T}.
$$
\section{The arithmetic and geometric cocycles.}
\subsection{The arithmetic cocycle}
Let $\mu_{m}$ be a cyclic group of order $m$.
Let $V_{\rm arith}$ be a totally disconnected locally compact abelian group
and suppose that multiplication by $m$ induces a measure-preserving
automorphism of $V_{\rm arith}$.
Suppose $\mu_{m}$ acts on $V_{\rm arith}$ in such a way that
every non-zero element of $V_{\rm arith}$ has trivial stabilizer in $\mu_{m}$.
Given this data we shall construct a 2-cocycle ${\rm Dec}a$
on the group ${\mathbb G}a={\rm Aut}_{\mu_{m}}(V_{\rm arith})$ with values in $\mu_{m}$.
This cocycle ${\rm Dec}a$ was first studied in \cite{hill2}
and we shall keep to the notation of that paper.
Those results, which are stated here without proof,
are proved in \cite{hill2}.
\paragraph{The Cocycle.}
We choose a compact, open, $\mu_{m}$-invariant
neighbourhood $L$ of $0$ in $V_{\rm arith}$ and we normalize
the Haar measure $dv$ on $V_{\rm arith}$ so that $L$ has measure $1$.
By our condition on $V_{\rm arith}$, it follows that for any compact
open subset $U$ of $V_{\rm arith}$, the measure of $U$ is a rational number
and is integral at every prime dividing $m$.
Thus for any locally constant
function $\varphi:V_{\rm arith}\to{\mathbb Z}/m$ of compact support,
we may define its integral to be an element of ${\mathbb Z}/m$.
This ``modulo $m$ integration'' is independent of the neighbourhood
$L$ used to normalize the measure.
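As an example of this modulo $m$ integration (not needed later), take $V_{\rm arith}={\mathbb Q}_{\ell}$ for a prime $\ell$ not dividing $m$, with $L={\mathbb Z}_{\ell}$. The compact open set $\ell^{-k}{\mathbb Z}_{\ell}$ has measure $\ell^{k}$, so for its characteristic function $\varphi$ we have
$$
\int \varphi(v)\,dv
=
\ell^{k} \bmod m.
$$
No denominators divisible by a prime factor of $m$ can occur, since multiplication by $m$ preserves the measure here.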
Finally, we choose an open and closed fundamental domain
$F$ for the action of $\mu_{m}$ on $V_{\rm arith}\setminus\{0\}$;
we write $f$ for the characteristic function
of $F$.
The cocycle is given by the formula:
\begin{equation}
\label{def1}
{\rm Dec}a^{(f,L)}(\alpha,\beta)
=
\prod_{\xi\in\mu_{m}}
\xi^{\textstyle{\left\{\int_{L}-\int_{\beta L}\right\}
f(\alpha v)f(\xi v)dv}},
\qquad
\alpha,\beta\in{\mathbb G}a.
\end{equation}
Up to a coboundary, this is independent of the choices
of $f$ and $L$.
If we regard ${\mathbb G}a$ as a topological group with the
compact-open topology then the cocycle ${\rm Dec}a$ is
a locally constant function.
This may be deduced using the cocycle relation from the
following fact.
\begin{lemma}
If $\alpha,\beta\in {\mathbb G}a$ and $\beta L=L$
then we have ${\rm Dec}a^{(f,L)}(\alpha,\beta)=1$.
\end{lemma}
{\it Proof. }
This follows immediately from the definition of ${\rm Dec}a$.
$\Box$
\paragraph{A Pairing.}
Before proceeding,
we shall reformulate the definition of ${\rm Dec}a$
in a more useful notation.
We shall call a function $\varphi:V_{\rm arith}\to{\mathbb Z}/m$ \emph{symmetric}
if $\varphi(\xi v)= \varphi(v)$ for all $\xi\in\mu_{m}$.
We define ${\mathcal C}$ to be the space of locally constant
symmetric functions of compact support on $V_{\rm arith}$.
On the other hand a function $g:V_{\rm arith}\setminus\{0\}\to{\mathbb Z}/m$
will be called \emph{cosymmetric} if the sum
$$
\sum_{\xi\in \mu_{m}} g(\xi v)
$$
is a constant, independent of $v$.
The value of the constant will be called the \emph{degree} of $g$.
We define ${\mathcal F}$ to be the space of locally constant, cosymmetric
functions $V_{\rm arith}\setminus\{0\}\to{\mathbb Z}/m$.
A cosymmetric function $f\in{\mathcal F}$ of degree $1$
will be called a \emph{fundamental function}.
For example the function $f$ described above is a fundamental function.
Thus fundamental functions generalize
the notion of a fundamental domain.
The group ${\mathbb G}a$ acts on ${\mathcal C}$ and ${\mathcal F}$ on the right by composition.
Let ${\mathcal C}o$ be the submodule of functions $M\in {\mathcal C}$ satisfying
$M(0)=0$ and let ${\mathcal F}o$ be the submodule of functions
in ${\mathcal F}$ of degree $0$.
There is a pairing ${\mathcal F}o\times {\mathcal C}o\to\mu_{m}$
given by
$$
\langle
g|M
\rangle
=
\prod_{\xi\in\mu_{m}}
\xi^{\textstyle{\int g(v)f(\xi v)M(v)dv}},
$$
where $f$ is a fundamental function.
The pairing is independent of $f$,
and is ${\mathbb G}a$-invariant in the sense that for $\alpha\in {\mathbb G}a$
we always have:
\begin{equation}
\label{covariant}
\langle h\alpha|M\alpha \rangle
=
\langle h|M \rangle.
\end{equation}
With this notation we can express ${\rm Dec}a$ as follows:
\begin{equation}
\label{def2}
{\rm Dec}a^{(f,L)}(\alpha,\beta)
=
\langle f-f\alpha | \beta L-L \rangle.
\end{equation}
Here we are writing $L$ and $\beta L$ for the characteristic
functions of these sets.
It is now clear that ${\rm Dec}a$ is a 2-cocycle,
since it is the cup-product of the 1-cocycles $f-f\alpha$
and $\beta L-L$.
\paragraph{Another expression for the pairing.}
To aid calculation we shall describe the pairing
in a different way.
Let ${\mathcal M}^{c}$ be the space of locally constant functions
of compact support $\varphi:V_{\rm arith}\setminus\{0\}\to {\mathbb Z}/m$.
Let ${\mathcal M}$ be the space of locally constant functions
$\varphi:V_{\rm arith}\setminus\{0\}\to {\mathbb Z}/m$.
There is a right action of ${\mathbb G}a$ by composition
on the spaces ${\mathcal M}^{c}$ and ${\mathcal M}$:
$$
(\phi\alpha)(v)
:=
\phi(\alpha v),
\quad
\alpha\in{\mathbb G}a.
$$
We shall also regard ${\mathcal M}$ and ${\mathcal M}^{c}$
as left $({\mathbb Z}/m)[\mu_{m}]$-modules with the action given by
$$
(\xi\phi)(v) = \phi(\xi^{-1}v),
\quad
\xi\in\mu_{m}.
$$
The two actions commute.
As $({\mathbb Z}/m)[\mu_{m}]$-modules,
both ${\mathcal M}$ and ${\mathcal M}^{c}$ are acyclic,
since they are induced from spaces of
functions on the fundamental domain $F$.
Hence, by Lemma \ref{easy},
the integration map $\int:{\mathcal M}^{c}\to {\mathbb Z}/m$
lifts to a map
$$
\widehat{\int}: (1-[\zeta]){\mathcal M}^{c} \to \mu_{m},
$$
defined by
$$
\widehat{\int} (\varphi-\varphi\zeta^{-1})(v) dv
=
\widehat{\int} (1-[\zeta])\varphi dv
:=
\zeta^{\textstyle{\int \varphi(v)dv}}.
$$
The modules ${\mathcal F}o$ and ${\mathcal C}o$ introduced above may be expressed as
follows:
$$
{\mathcal C}o
=
\ker\Big((1-[\zeta]):{\mathcal M}^{c}\to{\mathcal M}^{c}\Big)
=
[\mu_{m}]{\mathcal M}^{c},
$$
$$
{\mathcal F}o
=
\ker\Big([\mu_{m}]:{\mathcal M}\to{\mathcal M}\Big)
=
(1-[\zeta]){\mathcal M}.
$$
\begin{proposition}
Given $g\in{\mathcal F}o$ and $M\in{\mathcal C}o$, the product $g\cdot M$
is in $(1-[\zeta]){\mathcal M}^{c}$.
The pairing ${\mathcal F}o\times{\mathcal C}o\to\mu_{m}$
is given by
\begin{equation}
\langle g|M \rangle
=
\widehat{\int} (g\cdot M)(v)dv.
\end{equation}
\end{proposition}
{\it Proof. }
Since $[\mu_{m}]g=0$, we have $g=(1-[\zeta]) h$ for some $h\in {\mathcal M}$.
As $M$ is symmetric we have $g\cdot M = (1-[\zeta]) (h\cdot M)$.
Since $M$ has compact support, so does $h\cdot M$.
Therefore $g\cdot M\in (1-[\zeta]){\mathcal M}^{c}$
and we have
$$
\widehat\int g\cdot M
=
\zeta^{\textstyle \int h(v) M(v) dv}.
$$
We now calculate the pairing:
$$
\langle g | M \rangle
=
\prod_{\xi\in\mu_{m}}
\xi^{\textstyle\int f(\xi v)(h(v)-h(\zeta^{-1}v))M(v)dv}.
$$
Replacing $\xi$ by $\zeta^{-1} \xi$ in the second term
we obtain:
$$
\langle g | M \rangle
=
\prod_{\xi\in\mu_{m}}
\xi^{\textstyle
\int f(\xi v) h(v) M(v) dv}
(\zeta^{-1}\xi)^{\textstyle
-\int f(\zeta^{-1}\xi v) h(\zeta^{-1} v) M(v) dv}.
$$
Replacing $v$ by $\zeta v$ in the second term
and using the symmetry of $M$ we obtain:
$$
\langle g | M \rangle
=
\prod_{\xi\in\mu_{m}}
\xi^{\textstyle
\int f(\xi v) h(v) M(v) dv}
(\zeta^{-1}\xi)^{\textstyle
-\int f(\xi v) h( v) M(v) dv}.
$$
This cancels to give:
$$
\langle g | M \rangle
=
\prod_{\xi\in\mu_{m}}
\zeta^{\textstyle
\int f(\xi v) h( v) M(v) dv}.
$$
Since $f$ is fundamental we have:
$$
\langle g | M \rangle
=
\zeta^{\textstyle \int h(v)M(v)dv}.
$$
$\Box$
\subsection{Reduction to a discrete space.}
The arithmetic cocycle ${\rm Dec}a$ depends on a choice of
an open and closed fundamental domain $F$ for $\mu_{m}$ in
$V_{\rm arith}\setminus\{0\}$.
In practice it is unnecessary to describe such an $F$ completely.
We shall show that a large part of
the cocycle depends only on a fundamental
domain in a discrete quotient of $V_{\rm arith}$.
We shall fix once and for all a $\mu_{m}$-invariant, compact,
open subgroup $L\subset V_{\rm arith}$.
We shall write $X$ for the (discrete) quotient group $V_{\rm arith}/L$.
The idea is that it is easier to find a fundamental domain
in $X\setminus\{0\}$ than in $V_{\rm arith}\setminus\{0\}$.
We introduce modules of functions on $X$
analogous to ${\mathcal F}^{o}$ and ${\mathcal C}^{o}$ above.
We let ${\mathcal M}_{X}$ denote the space of all functions
$X\setminus\{0\}\to {\mathbb Z}/m$
and we let ${\mathcal M}_{X}^{c}$ denote the space of all
such functions with finite support.
As the action of $\mu_{m}$ on $X\setminus\{0\}$
has no fixed points, these are acyclic
$({\mathbb Z}/m)[\mu_{m}]$-modules.
We define
$$
{\mathcal F}o_{X}
=
\ker\Big([\mu_{m}]:{\mathcal M}_{X}\to{\mathcal M}_{X}\Big),\quad
{\mathcal C}o_{X}
=
\ker\Big((1-[\zeta]):{\mathcal M}_{X}^{c}\to{\mathcal M}_{X}^{c}\Big).
$$
There is a pairing ${\mathcal F}o_{X}\times {\mathcal C}o_{X}\to\mu_{m}$ given by
\begin{equation}
\label{simplepairing}
\langle (1-[\zeta])g | M \rangle_{X}
=
\zeta^{\textstyle{\sum_{x\in X\setminus\{0\}}g(x)M(x)}}.
\end{equation}
We have canonical inclusions
$\iota:{\mathcal M}_{X}\to {\mathcal M}$ and $\iota:{\mathcal M}^{c}_{X}\to {\mathcal M}^{c}$
and we have
\begin{equation}
\label{Xism2}
\langle g | M \rangle_{X}
=
\langle \iota(g) | \iota(M) \rangle.
\end{equation}
We shall write ${\mathbb G}a^{+}$ for the semi-group of
elements $\alpha\in{\mathbb G}a$ such that $\alpha L \supseteq L$.
Let $F_{X}$ be a fundamental domain for the
action of $\mu_{m}$ on $X\setminus\{0\}$.
We shall suppose our fundamental domain $F$ is chosen
so that for $v\notin L$ we have
$v\in F$ if and only if $v+L\in F_{X}$.
As before we shall write $f$ for the characteristic function of $F$.
\begin{lemma}
\label{Xism4}
For $\alpha,\beta\in {\mathbb G}a^{+}$ we have:
\begin{itemize}
\item
The restriction of $f\alpha^{-1}$ to $V_{\rm arith}\setminus \alpha L$
is $L$-periodic, and therefore induces
a function on $X\setminus \alpha L$.
\item
The set $\alpha\beta L-\alpha L$ is $L$-periodic.
Its characteristic function therefore
induces a function on $X$ which is zero on $\alpha L$.
\item
We have
$$
{\rm Dec}a^{(f,L)}(\alpha,\beta)
=
\langle f\alpha^{-1}-f |\alpha\beta L - \alpha L \rangle_{X}.
$$
\end{itemize}
\end{lemma}
{\it Proof. }
The first two statements are easy to check;
the third follows from (\ref{covariant}), (\ref{def2})
and (\ref{Xism2}).
$\Box$
\subsection{Examples of arithmetic cocycles}
Let $k$ be a global field containing a primitive $m$-th
root of unity, and let $\mu_{m}$ be the group of
$m$-th roots of unity in $k$.
Consider the vector space $V=k^{n}$.
\paragraph{Local examples.}
Let $v$ be a place of $k$ such that $|m|_{v}=1$,
where $|\cdot|_{v}$ is the absolute value
on $k_{v}$,
normalized so that $d(mx)=|m|_{v}dx$
for any Haar measure $dx$ on $k_{v}$.
We shall write $V_{v}$ for the vector space $V\otimes_{k}k_{v}=k_{v}^{n}$.
The action of $\mu_{m}$ by scalar multiplication on $V_{v}$
satisfies the conditions imposed on $V_{\rm arith}$ in \S3.1.
We therefore obtain a 2-cocycle on
${\rm GL}_{n}(k_{v})$ with values in $\mu_{m}$.
We shall refer to this cocycle as ${\rm Dec}v$.
The commutator of ${\rm Dec}v$ on the maximal torus
has been calculated in Theorems 3 and 5 of \cite{hill2}.
It is given by
\begin{equation}
\label{localcomm}
[\alpha,\beta]_{{\rm Dec}v}
=
(-1)^{\textstyle{\frac{(|\det\alpha|_{v}-1)(|\det\beta|_{v}-1)}{m^{2}}}}
\prod_{i=1}^{n}
(-1)^{\textstyle{\frac{(|\alpha_{i}|_{v}-1)(|\beta_{i}|_{v}-1)}{m^{2}}}}
(\alpha_{i},\beta_{i})_{m,v},
\end{equation}
where
$$
\alpha
=
\begin{pmatrix}\alpha_{1} \\ &\ddots \\ &&\alpha_{n}\end{pmatrix},\quad
\beta
=
\begin{pmatrix}\beta_{1} \\ &\ddots \\ &&\beta_{n}\end{pmatrix}
$$
and $(\cdot,\cdot)_{m,v}$ denotes the $m$-th power Hilbert symbol
on $k_{v}$.
Our restriction on $v$ amounts to requiring that $(\cdot,\cdot)_{m,v}$
is the tame symbol.
\paragraph{Semi-global examples.}
Let $S$ be the set of all places $v$ of $k$,
such that $|m|_{v}\ne 1$.
We shall write ${\mathbb A}S$ for the ring of $S$-ad\`eles of
$k$, i.e. the restricted topological product
of the fields $k_{v}$ for $v\notin S$.
Let $V_{{\mathbb A}S}=V\otimes_{k} {\mathbb A}S={\mathbb A}S^{n}$.
The action of $\mu_{m}$ on $V_{{\mathbb A}S}$ by
scalar multiplication satisfies the conditions
imposed on $V_{\rm arith}$ in \S3.1.
We therefore obtain a 2-cocycle on ${\rm GL}_{n}({\mathbb A}S)$
with values in $\mu_{m}$.
We shall refer to this cocycle as ${\rm Dec}AS$.
The cocycle ${\rm Dec}AS$ is not quite the product of
the local cocycles ${\rm Dec}v$ for $v\notin S$.
In fact we have (Theorem 3 of \cite{hill2})
up to a coboundary:
$$
{\rm Dec}AS(\alpha,\beta)
=
\prod_{v\notin S}{\rm Dec}v(\alpha,\beta)
\prod_{v<w}
(-1)^{
\textstyle{\frac{(|\det\alpha|_{v}-1)(|\det\beta|_{w}-1)}{m^{2}}}}.
$$
Here we have chosen an ordering on the set of places $v\notin S$;
up to a coboundary the right hand side of the above formula is
independent of the choice of ordering.
As a consequence of this and (\ref{localcomm}),
we have on the maximal torus in ${\rm GL}_{n}({\mathbb A}S)$:
$$
[\alpha,\beta]_{{\rm Dec}AS}
=
(-1)^{\textstyle{\frac{(|\det\alpha|_{{\mathbb A}S}-1)(|\det\beta|_{{\mathbb A}S}-1)}{m^{2}}}}
\prod_{v\notin S}
\prod_{i=1}^{n}
(-1)^{\textstyle{\frac{(|\alpha_{i}|_{v}-1)(|\beta_{i}|_{v}-1)}{m^{2}}}}
(\alpha_{i},\beta_{i})_{m,v}.
$$
\paragraph{The positive characteristic case.}
Suppose for a moment that $k$ has positive characteristic.
In this case the set $S$ is empty,
so in fact we have a cocycle ${\rm Dec}A$
on the whole group ${\rm GL}_{n}({\mathbb A})$,
where ${\mathbb A}$ is the full ad\`ele ring of $k$.
The proof that ${\rm Dec}A$ splits on ${\rm GL}_{n}(k)$
is easy to understand.
We may also take $V_{\rm arith}={\mathbb A}^{n}/k^{n}$.
This is acted on by ${\rm GL}_{n}(k)$, so we obtain
a corresponding cocycle ${\rm Dec}_{k}$
on ${\rm GL}_{n}(k)$ with values in $\mu_{m}$.
One easily shows that, up to a coboundary,
${\rm Dec}_{k}$ is the restriction of ${\rm Dec}A$.
In this case however, $V_{\rm arith}$ is compact, so we may take $L=V_{\rm arith}$
in the definition (\ref{def1}) of the cocycle.
With this choice of $L$ one sees immediately that
the cocycle is trivial.
This shows that the restriction of ${\rm Dec}A$ to ${\rm GL}_{n}(k)$ is
a coboundary.
\subsection{The Gauss--Schering Lemma}
We next indicate the connection between the cocycle ${\rm Dec}AS$
and the Gauss--Schering Lemma.
Let $k$, $\mu_{m}$ and $S$
be as in \S3.3.
We shall write ${\mathfrak o}^{S}$ for the ring of $S$-integers in $k$.
Recall that for non-zero, coprime $\alpha,\beta\in {\mathfrak o}^{S}$, the
$m$-th power Legendre symbol is defined by
$$
\left(\frac{\alpha}{\beta}\right)_{S,m}
=
\prod_{v\notin S,\; v|\beta}(\alpha,\beta)_{m,v}.
$$
The Gauss--Schering Lemma is a formula for the Legendre symbol,
commonly used to prove the quadratic reciprocity law in
undergraduate courses.
Choose a set $R_{\beta}$ of representatives
of the non-zero $\mu_{m}$-orbits in ${\mathfrak o}^{S}/(\beta)$.
Such sets are called $m$-th sets modulo $\beta$.
For any $\xi\in\mu_{m}$ define
$$
r(\xi)
=
|\{x\in R_{\beta}:\alpha x\in\xi R_{\beta}\}|.
$$
The Gauss--Schering Lemma is the statement
$$
\left(\frac{\alpha}{\beta}\right)_{S,m}
=
\prod_{\xi\in\mu_{m}} \xi^{r(\xi)}.
$$
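In the classical case $k={\mathbb Q}$, $m=2$, $\mu_{2}=\{\pm 1\}$ and $\beta=p$ an odd prime, one may take $R_{p}=\{1,2,\ldots,(p-1)/2\}$. Since $r(1)+r(-1)=(p-1)/2$, the formula reads
$$
\left(\frac{\alpha}{p}\right)
=
(-1)^{r(-1)},
$$
where $r(-1)$ is the number of $x\in R_{p}$ with $\alpha x$ congruent to an element of $-R_{p}$ modulo $p$; this is Gauss's Lemma of elementary number theory.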
Consider the cocycle ${\rm Dec}AS$ on ${\rm GL}_{1}({\mathbb A}S)$ of \S3.3.
Let $L=\prod_{v\notin S}{\mathfrak o}_{v}$;
we shall also write $L$ for the characteristic function of this set.
The subset $L\subset{\mathbb A}S$ satisfies the conditions of \S3.2.
The semi-group ${\mathbb G}a^{+}$ of \S3.2 contains $\beta^{-1}$ for
all non-zero $\beta\in{\mathfrak o}^{S}$.
Choose a fundamental domain $F$ for the action of
$\mu_{m}$ on $({\mathbb A}S/L) \setminus\{0\}$
and let $f$ be its characteristic function.
We shall write $f_{{\mathbb A}S}$ for an extension of $f$ to ${\mathbb A}S\setminus\{0\}$.
The Gauss--Schering Lemma may be reformulated as follows.
\begin{proposition}
For non-zero, coprime $\alpha,\beta\in{\mathfrak o}^{S}$ we have:
$$
\left(\frac{\alpha}{\beta}\right)_{S,m}
\;=\;
{\rm Dec}AS^{(f_{{\mathbb A}S},L)}(\alpha,\beta^{-1})^{-1}.
$$
\end{proposition}
{\it Proof. }
As in \S3.2 we let $X={\mathbb A}S/L$
and we let $F$ be a set of representatives for the $\mu_{m}$-orbits
in $X\setminus\{0\}$.
Let $X[\beta]=\{x\in X:\beta x=0\}$.
We shall identify $X[\beta]$ with ${\mathfrak o}^{S}/(\beta)$.
Let $F_{\beta}=F\cap X[\beta]$.
The set $F_{\beta}$ is an $m$-th set modulo $\beta$
in the sense described above.
Therefore the Gauss--Schering Lemma
states that
$\left(\alpha/\beta\right)_{S,m} = \prod_{\xi\in\mu_{m}} \xi^{r(\xi)}$,
where $r(\xi)$ may be rewritten as:
$$
r(\xi)
=
\sum_{x\in F_{\beta}}
f(\xi^{-1}\alpha x).
$$
This is equivalent to
$$
r(\xi)
=
\sum_{x\in X[\beta]\setminus\{0\}}
f(x)f(\xi^{-1}\alpha x).
$$
As we have normalized the Haar measure to give $L$
measure 1, the sum can be rewritten as an integral:
$$
r(\xi)
=
\left\{\int_{\beta^{-1}L}-\int_{L}\right\}
f(v)f(\xi^{-1}\alpha v)dv.
$$
The proposition follows from the definition
(\ref{def1}) of ${\rm Dec}a$.
$\Box$
It is common in proofs of reciprocity laws using the
Gauss--Scher\-ing Lemma to change from a summation
over $\beta$-division points to a summation over
$\alpha\beta$-division points when calculating
$(\alpha/\beta)_{m}$ (see \cite{thesis,habicht}).
This technique is useful because, whereas the sum over $\beta$-division
points is
${\rm Dec}AS(\alpha,\beta^{-1})$,
the sum over $\alpha\beta$-division points is,
as Lemma \ref{Xism4} shows,
related to ${\rm Dec}AS(\alpha,\beta)$.
\subsection{The geometric cocycle}
In this section we let $V_\infty$ be a vector space
over ${\mathbb R}$
with an action of the cyclic group $\mu_m$
such that every non-zero vector in $V_{\infty}$
has trivial stabilizer in $\mu_m$
(such representations are called linear space forms).
We shall write $d$ for the dimension of $V_\infty$ as
a vector space over ${\mathbb R}$.
Let $G_\infty$ be the group ${\rm Aut}_{\mu_m}(V_\infty)$
and let ${\mathfrak g}$ be its Lie algebra ${\rm End}_{\mu_m}(V_\infty)$.
We shall construct a 2-cocycle ${\rm Dec}i$ on $G_{\infty}$
with values in $\mu_{m}$.
The construction is analogous to the construction of
${\rm Dec}a$ of \S3.1.
If $m=2$ then the group $\mu_{2}$ acts on $V_{\infty}$
with the non-trivial element acting by multiplication by $-1$.
In this case $G_{\infty}={\rm GL}_{d}({\mathbb R})$.
In contrast if $m\ge 3$ then $V_{\infty}$ is a direct sum
of irreducible two-dimensional representations of $\mu_{m}$.
The group $G_{\infty}$ is then a direct product of groups
isomorphic to ${\rm GL}_{r(i)}({\mathbb C})$,
with $d=2\sum_{i} r(i)$.
For any singular $r$-cell ${\mathscr T}:A\to{\mathfrak g}$ in ${\mathfrak g}$ and any
singular $s$-cell
${\mathscr U}:B\to V_\infty$ we define a singular $(r+s)$-cell
${\mathscr T}\cdot{\mathscr U}:A\times B\to V_\infty$ by
$$
({\mathscr T}\cdot{\mathscr U})(\underline{x},\underline{y})
=
{\mathscr T}(\underline{x})\cdot {\mathscr U}(\underline{y}).
$$
This operation extends to a bilinear map
$C_r({\mathfrak g}) \times C_s(V_\infty) \to C_{r+s}(V_\infty)$.
We fix an orientation ${\rm ord}$ on $V_{\infty}$.
The following lemma will often be used in what follows.
\begin{lemma}
\label{orient}
(i) For any $\alpha\in G_{\infty}$, $v\in V_{\infty}$
and any ${\mathscr T}\in Z_{d}(V_{\infty},V_{\infty}\setminus\{v\})$
we have in ${\mathbb Z}/m$:
$$
{\rm ord}_{\alpha v}(\alpha{\mathscr T})
=
{\rm ord}_{v}({\mathscr T}).
$$
(ii) For every ${\mathscr T}\in C_{d}(V_{\infty},V_{\infty}\setminus\{0\})$
we have in ${\mathbb Z}/m$:
$$
{\rm ord}_{0}((1-[\zeta]){\mathscr T})
=
{\rm ord}_{0}([\mu_{m}]{\mathscr T})
=
0.
$$
\end{lemma}
{\it Proof. }
(i)
Since ${\mathbb Z}/2{\mathbb Z}$ has no non-trivial automorphisms,
there is nothing to prove in the case $m=2$.
Assume now $m>2$.
The group $G_{\infty}$ is a direct product of groups
isomorphic to ${\rm GL}_{r}({\mathbb C})$, and is therefore connected.
Hence the action of $G_{\infty}$ on the
set of orientations of $V_{\infty}$ is trivial.
(ii)
By (i) we have ${\rm ord}_{0}((1-[\zeta]){\mathscr T})=0$.
On the other hand,
the following holds in $({\mathbb Z}/m)[\mu_{m}]$:
$$
[\mu_{m}]
=
\Big([\zeta]+2[\zeta^{2}]+\ldots+(m-1)[\zeta^{m-1}]\Big)
(1-[\zeta]).
$$
Hence ${\rm ord}_{0}([\mu_{m}]{\mathscr T})=0$.
$\Box$
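The group-ring identity used in part (ii) is easily checked mechanically. The following Python sketch (an illustration added here, not part of the original argument) verifies in $({\mathbb Z}/m)[\mu_{m}]$ that $[\mu_{m}]=([\zeta]+2[\zeta^{2}]+\ldots+(m-1)[\zeta^{m-1}])(1-[\zeta])$ for small $m$, encoding elements as coefficient lists.

```python
# Illustrative check (not part of the original argument) of the identity
#   [mu_m] = ([zeta] + 2[zeta^2] + ... + (m-1)[zeta^{m-1}]) (1 - [zeta])
# in (Z/m)[mu_m].  An element is a length-m list c, with c[j] the
# coefficient of [zeta^j] reduced mod m.

def mult(a, b, m):
    """Product in (Z/m)[mu_m]: convolution with exponents taken mod m."""
    c = [0] * m
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % m] = (c[(i + j) % m] + ai * bj) % m
    return c

def check_identity(m):
    norm = [1] * m                  # [mu_m], the sum of all group elements
    a = [i % m for i in range(m)]   # [zeta] + 2[zeta^2] + ... + (m-1)[zeta^{m-1}]
    b = [0] * m                     # 1 - [zeta]
    b[0], b[1] = 1, (m - 1) % m
    return mult(a, b, m) == norm

assert all(check_identity(m) for m in range(2, 10))
```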
Recall that $d$ denotes the dimension of $V_{\infty}$ as a vector space over ${\mathbb R}$.
We may choose a finite $(d-1)$-dimensional cell complex
${\mathfrak S}$ in $V_{\infty}\setminus\{0\}$ with the following
properties:
\begin{itemize}
\item
The inclusion ${\mathfrak S}\hookrightarrow V_{\infty}\setminus\{0\}$
is a homotopy equivalence;
\item
The group $\mu_{m}$ permutes the cells in ${\mathfrak S}$
and each cell has trivial stabilizer in
$\mu_{m}$.
\end{itemize}
One could for example take ${\mathfrak S}$ to be
a triangulation of the unit sphere in $V_{\infty}$.
We shall write ${\mathfrak S}_{\bullet}$ for the corresponding
chain complex with coefficients in ${\mathbb Z}/m$.
It follows from the second condition
above that each ${\mathfrak S}_{r}$ is a free $({\mathbb Z}/m)[\mu_{m}]$-module.
A basis consists of a set of representatives of $\mu_{m}$-orbits
of $r$-cells.
Thus each ${\mathfrak S}_{r}$ satisfies the exact sequence (\ref{star}).
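Since each ${\mathfrak S}_{r}$ is free, its exactness properties reduce to those of the rank-one module $({\mathbb Z}/m)[\mu_{m}]$, which can be checked by brute force. The following Python sketch (an added illustration; the small moduli tested are our choice) verifies that the kernel of multiplication by $1-[\zeta]$ is the image of multiplication by $[\mu_{m}]$, and conversely, as the sequence (\ref{star}) asserts.

```python
# Brute-force check (an added illustration, not in the original) that on the
# rank-one free module (Z/m)[mu_m] the kernel of multiplication by 1 - [zeta]
# is the image of multiplication by [mu_m], and vice versa -- the exactness
# expressed by the sequence (star).
from itertools import product

def mult(a, b, m):
    """Product in (Z/m)[mu_m]: convolution with exponents taken mod m."""
    c = [0] * m
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[(i + j) % m] = (c[(i + j) % m] + ai * bj) % m
    return c

def check_exactness(m):
    elems = [list(t) for t in product(range(m), repeat=m)]
    norm = [1] * m                    # [mu_m]
    aug = [0] * m                     # 1 - [zeta]
    aug[0], aug[1] = 1, (m - 1) % m
    ker_aug = {tuple(x) for x in elems if mult(aug, x, m) == [0] * m}
    im_norm = {tuple(mult(norm, x, m)) for x in elems}
    ker_norm = {tuple(x) for x in elems if mult(norm, x, m) == [0] * m}
    im_aug = {tuple(mult(aug, x, m)) for x in elems}
    return ker_aug == im_norm and ker_norm == im_aug

assert all(check_exactness(m) for m in (2, 3, 4))
```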
Choose a cycle $\omega\in{\mathfrak S}_{d-1}$ whose
homology class is mapped to 1 by the isomorphisms:
$$
H_{d-1}({\mathfrak S})
\to
H_{d-1}(V_{\infty}\setminus\{0\})
\stackrel{\partial^{-1}}{\to}
H_{d}(V_{\infty},V_{\infty}\setminus\{0\})
\stackrel{{\rm ord}_{0}}{\to}
{\mathbb Z}/m.
$$
By Lemma \ref{orient} we know that $(1-[\zeta])[\omega]=0$
in $H_{d-1}({\mathfrak S})$.
As ${\mathfrak S}_{d}=0$, this implies that $(1-[\zeta])\omega=0$
in ${\mathfrak S}_{d-1}$.
Thus by the exact sequence (\ref{star}) there is a ${\mathscr D}\in{\mathfrak S}_{d-1}$
such that $\omega=[\mu_{m}]{\mathscr D}$.
We may think of ${\mathscr D}$ as a fundamental domain
for $\mu_{m}$ in ${\mathfrak S}$.
As $\omega$ is a cycle we have:
$$
[\mu_{m}]\partial{\mathscr D}
=
\partial[\mu_{m}]{\mathscr D}
=
\partial\omega
=
0.
$$
Thus, by the exact sequence (\ref{star}),
there is an ${\mathscr E}\in{\mathfrak S}_{d-2}$ such that
$$
\partial {\mathscr D} = (1-[\zeta]){\mathscr E}.
$$
\begin{definition}
We may define the geometric cocycle ${\rm Dec}i^{({\mathscr D})}$
for generic $\alpha,\beta\in G_{\infty}$ as follows:
$$
{\rm Dec}i^{({\mathscr D})}(\alpha,\beta)
=
\zeta^{\textstyle{-{\rm ord}_{0}([1,\alpha,\alpha\beta]\cdot{\mathscr E})}}.
$$
\end{definition}
\begin{remark}
In fact we have
${\rm Dec}i^{({\mathscr D})}(\alpha,\beta)
=\widehat{{\rm ord}}_{0}(-[1,\alpha,\alpha\beta]\cdot\partial{\mathscr D})$,
where $\widehat{{\rm ord}}_{0}$ is a lifted map
in the sense of Lemma \ref{easy}.
The $({\mathbb Z}/m)[\mu_{m}]$-module
$Z_{d}(V_{\infty},V_{\infty}\setminus\{0\})$
is free.
However we shall not use this interpretation.
\end{remark}
If the choice of ${\mathscr D}$ is clear or irrelevant,
then we shall omit it from the notation.
In order to define ${\rm Dec}i (\alpha,\beta)$
on all pairs $(\alpha,\beta)$ rather than just generically,
we must define
${\rm ord}_{0}([1,\alpha,\alpha\beta]\cdot{\mathscr E})$ when $0$ is
on the boundary.
To do this we slightly modify our definition.
We define $[1,\alpha,\alpha\beta]$ to be the map ${\mathfrak g}^3\to C_2({\mathfrak g})$
defined by
$$
(\epsilon_1,\epsilon_2,\epsilon_3)
\mapsto
[1+\epsilon_1,\alpha(1+\epsilon_2),\alpha\beta(1+\epsilon_3)].
$$
Then $[1,\alpha,\alpha\beta]\cdot{\mathscr E}$ becomes a map ${\mathfrak g}^3\to C_d(V_\infty)$.
We fix a basis $\{b_1,\ldots,b_r\}$ for ${\mathfrak g}$ over ${\mathbb R}$.
For $\epsilon=\sum x_i b_i$ we shall use the notation:
$$
``{\mathop{{\bf lim}}_{\epsilon\to 0^+}}"
=
\lim_{x_1\to 0^+}
\cdots
\lim_{x_r\to 0^+}.
$$
With this notation we define
$$
{\rm ord}_0([1,\alpha,\alpha\beta]\cdot{\mathscr E})
=
\mathop{{\bf lim}}_{\epsilon_1\to 0^+}
\mathop{{\bf lim}}_{\epsilon_2\to 0^+}
\mathop{{\bf lim}}_{\epsilon_3\to 0^+}
{\rm ord}_0\Big(
[1+\epsilon_1,\alpha(1+\epsilon_2),\alpha\beta(1+\epsilon_3)]\cdot{\mathscr E}
\Big).
$$
We will prove in \S5 that, for a suitable choice of ${\mathfrak S}$,
the above limit exists for all $\alpha,\beta\in G_\infty$.
It is worth mentioning that the above limits do not
necessarily commute.
\begin{theorem}
${\rm Dec}i$ is a 2-cocycle.
\end{theorem}
{\it Proof. }
Let $\alpha,\beta,\gamma\in G_\infty$
and consider the $3$-cell in ${\mathfrak g}$:
$$
{\mathscr A}
=
[1+\epsilon_1,\alpha(1+\epsilon_2),
\alpha\beta(1+\epsilon_3),\alpha\beta\gamma(1+\epsilon_4)].
$$
We have straight from the definition:
$$
\partial{\rm Dec}i(\alpha,\beta,\gamma)
=
\mathop{{\bf lim}}_{\epsilon_1\to 0^+}
\mathop{{\bf lim}}_{\epsilon_2\to 0^+}
\mathop{{\bf lim}}_{\epsilon_3\to 0^+}
\mathop{{\bf lim}}_{\epsilon_4\to 0^+}
\zeta^{\textstyle{-{\rm ord}_{0}((\partial{\mathscr A})\cdot{\mathscr E})}}.
$$
The boundary of ${\mathscr A}\cdot{\mathscr E}$ is given by
$$
\partial({\mathscr A}\cdot{\mathscr E})
=
\partial({\mathscr A})\cdot{\mathscr E}
+
{\mathscr A}\cdot\partial{\mathscr E}.
$$
(Since coefficients are in ${\mathbb Z}/m$,
the sign is only important for $m\ge3$, and in these cases
$d$ is even).
Taking the order of this at 0 and then taking limits
we obtain:
$$
\partial{\rm Dec}i(\alpha,\beta,\gamma)
=
\mathop{{\bf lim}}_{\epsilon_1\to 0^+}
\mathop{{\bf lim}}_{\epsilon_2\to 0^+}
\mathop{{\bf lim}}_{\epsilon_3\to 0^+}
\mathop{{\bf lim}}_{\epsilon_4\to 0^+}
\zeta^{\textstyle{{\rm ord}_{0}({\mathscr A}\cdot\partial{\mathscr E})}}.
$$
We must show that the right hand side here is 1.
We know that $(1-[\zeta]){\mathscr E}=\partial {\mathscr D}$.
Therefore $(1-[\zeta])\partial{\mathscr E}=0$,
so by the exact sequence (\ref{star}),
there is a ${\mathscr B}\in{\mathfrak S}_{d-3}$ such that
$$
\partial{\mathscr E} = [\mu_{m}]{\mathscr B}.
$$
This implies
$$
{\mathscr A}\cdot\partial{\mathscr E}
=
[\mu_{m}]({\mathscr A}\cdot{\mathscr B}).
$$
The result now follows from Lemma \ref{orient}.
$\Box$
We next verify that our definition is independent of the various
choices made.
\begin{proposition}
(i) The cocycle ${\rm Dec}i^{({\mathscr D})}$ is independent of the choice of
${\mathscr E}$, the orientation ${\rm ord}$ and the generator $\zeta$ of
$\mu_{m}$.\\
(ii) The cohomology class of ${\rm Dec}i^{({\mathscr D})}$ is independent of the
choice of ${\mathscr D}$.
\end{proposition}
{\it Proof. }
(i)
We first fix ${\mathscr D}$ and choose another ${\mathscr E}'$ such that
$$
(1-[\zeta]){\mathscr E}
=
(1-[\zeta]){\mathscr E}'
=
\partial{\mathscr D}.
$$
By the exact sequence (\ref{star}) there is an ${\mathscr A}\in{\mathfrak S}_{d-2}$ such that
$$
{\mathscr E}'-{\mathscr E}
=
[\mu_{m}]{\mathscr A}.
$$
This implies by Lemma \ref{orient}:
\begin{eqnarray*}
{\rm Dec}i^{{\mathscr E}'}(\alpha,\beta)
&=&
{\rm Dec}i^{{\mathscr E}}(\alpha,\beta)
\zeta^{\textstyle{-{\rm ord}_{0}([\mu_{m}][1,\alpha,\alpha\beta]\cdot{\mathscr A})}}\\
&=&
{\rm Dec}i^{{\mathscr E}}(\alpha,\beta).
\end{eqnarray*}
Now suppose we choose a different orientation ${\rm ord}'$.
We have ${\rm ord}'=u\cdot{\rm ord}$ for some $u\in({\mathbb Z}/m)^{\times}$.
We may therefore choose $\omega'=u^{-1}\omega$,
${\mathscr D}'=u^{-1}{\mathscr D}$ and ${\mathscr E}'=u^{-1}{\mathscr E}$.
With these choices we have
$$
{\rm ord}_{0}'([1,\alpha,\alpha\beta]\cdot{\mathscr E}')
=
{\rm ord}_{0}([1,\alpha,\alpha\beta]\cdot{\mathscr E}).
$$
Finally suppose $\zeta'^{\,u}=\zeta$ for some
$u\in({\mathbb Z}/m)^{\times}$.
We then have in $({\mathbb Z}/m)[\mu_{m}]$:
$$
(1-[\zeta])
=
(1-[\zeta'])\big([1]+[\zeta']+\ldots+[\zeta'^{\,u-1}]\big).
$$
We may therefore take
$$
{\mathscr E}'
=
\big([1]+[\zeta']+\ldots+[\zeta'^{\,u-1}]\big){\mathscr E}.
$$
This implies by Lemma \ref{orient}
$$
{\rm ord}_{0}([1,\alpha,\alpha\beta]\cdot{\mathscr E}')
=
u \cdot {\rm ord}_{0}([1,\alpha,\alpha\beta]\cdot{\mathscr E}).
$$
Therefore
$$
\zeta'^{\textstyle{-{\rm ord}_{0}([1,\alpha,\alpha\beta]\cdot{\mathscr E}')}}
=
\zeta^{\textstyle{-{\rm ord}_{0}([1,\alpha,\alpha\beta]\cdot{\mathscr E})}}.
$$
(ii)
We now allow ${\mathscr D}$ to vary.
We choose ${\mathscr D}'$ to satisfy
$$
[\mu_{m}]{\mathscr D}'
=
[\mu_{m}]{\mathscr D}.
$$
By the exact sequence (\ref{star}) there is a ${\mathscr B}\in{\mathfrak S}_{d-1}$ such that
$$
{\mathscr D}'
=
{\mathscr D} + (1-[\zeta]){\mathscr B}.
$$
Thus
$$
\partial{\mathscr D}'
=
\partial{\mathscr D} + (1-[\zeta])\partial {\mathscr B}.
$$
We may therefore choose ${\mathscr E}'={\mathscr E}+\partial{\mathscr B}$.
Note that we have
\begin{eqnarray*}
\partial([1,\alpha,\alpha\beta]\cdot{\mathscr B})
&=&
\partial([1,\alpha,\alpha\beta])\cdot{\mathscr B}
+
[1,\alpha,\alpha\beta]\cdot\partial{\mathscr B}\\
&=&
[1,\alpha]\cdot{\mathscr B}-[1,\alpha\beta]\cdot{\mathscr B}+[\alpha,\alpha\beta]\cdot{\mathscr B}
+
[1,\alpha,\alpha\beta]\cdot\partial{\mathscr B}.
\end{eqnarray*}
This implies using Lemma \ref{orient}:
\begin{eqnarray*}
{\rm ord}_{0}([1,\alpha,\alpha\beta]\cdot\partial{\mathscr B})
&=&
{\rm ord}_{0}(
[1,\alpha\beta]\cdot{\mathscr B}
-[1,\alpha]\cdot{\mathscr B}
-[\alpha,\alpha\beta]\cdot{\mathscr B})\\
&=&
{\rm ord}_{0}([1,\alpha\beta]\cdot{\mathscr B})
-
{\rm ord}_{0}([1,\alpha]\cdot{\mathscr B})
-
{\rm ord}_{0}([1,\beta]\cdot{\mathscr B}).
\end{eqnarray*}
Putting things back together we obtain:
$$
{\rm Dec}i^{({\mathscr D}')}(\alpha,\beta)
=
{\rm Dec}i^{({\mathscr D})}(\alpha,\beta)
\frac{\tau(\alpha)\tau(\beta)}{\tau(\alpha\beta)},
$$
where
$$
\tau(\alpha)
=
\zeta^{\textstyle{{\rm ord}_{0}([1,\alpha]\cdot{\mathscr B})}}.
$$
$\Box$
\subsection{Example: ${\rm GL}_{2}({\mathbb R})$}
If $V_{\infty}$ is $1$-dimensional then the fundamental domain
${\mathscr D}$ is zero-dimensional, and so we have ${\mathscr E}=0$
and ${\rm Dec}i$ is always 1.
The smallest non-trivial example is the case $m=2$
and $V_{\infty}={\mathbb R}^{2}$ with the group $\mu_{2}$
acting on ${\mathbb R}^{2}$ by scalar multiplication.
We shall consider this example now.
We have $G_{\infty}={\rm GL}_{2}({\mathbb R})$.
As $m=2$ there is no need to worry about a choice of orientation.
We may take our fundamental domain ${\mathscr D}$ to be the half-circle:
$$
{\mathscr D}
=
\left\{\left(\matrix{x \cr y}\right):
x^{2}+y^{2}=1,\; y\ge 0
\right\}.
$$
The boundary of this consists of the two points $\left(\matrix{1 \cr 0}\right)$
and $\left(\matrix{-1 \cr 0}\right)$.
We may therefore take
${\mathscr E}=[v]$, where $v=\left(\matrix{1 \cr 0}\right)$.
The cocycle is then given (generically) by:
$$
{\rm Dec}i^{({\mathscr D})}(\alpha,\beta)
=
\left\{
\begin{array}{rl}
1 & \hbox{if } 0\notin [v,\alpha v,\alpha\beta v],\\
-1 & \hbox{if } 0\in [v,\alpha v,\alpha\beta v].
\end{array}
\right.
$$
By choosing a different ${\mathscr D}$ we may replace $v$ by any
other non-zero vector to obtain a cohomologous cocycle.
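The cocycle property of this sign function can be tested numerically. The following Python sketch (an added illustration; the base point $v=(1,0)$ and the random sampling are our choices, and the degenerate configurations, which have measure zero, are ignored) checks the identity ${\rm Dec}i(\beta,\gamma)\,{\rm Dec}i(\alpha,\beta\gamma)={\rm Dec}i(\alpha\beta,\gamma)\,{\rm Dec}i(\alpha,\beta)$ on random real $2\times 2$ matrices.

```python
# Numerical check (added illustration) that the sign function above is a
# 2-cocycle on GL_2(R): Dec(a,b) = -1 iff the origin lies inside the
# triangle [v, a v, a b v], and we compare both sides of the cocycle identity.
import random

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(a, v):
    return (a[0][0] * v[0] + a[0][1] * v[1], a[1][0] * v[0] + a[1][1] * v[1])

def cross(p, q):
    return p[0] * q[1] - p[1] * q[0]

def origin_in_triangle(p, q, r):
    """True iff 0 lies strictly inside the triangle with vertices p, q, r."""
    d1 = cross((q[0] - p[0], q[1] - p[1]), (-p[0], -p[1]))
    d2 = cross((r[0] - q[0], r[1] - q[1]), (-q[0], -q[1]))
    d3 = cross((p[0] - r[0], p[1] - r[1]), (-r[0], -r[1]))
    return (d1 > 0 and d2 > 0 and d3 > 0) or (d1 < 0 and d2 < 0 and d3 < 0)

def dec(a, b, v=(1.0, 0.0)):
    """Dec(a, b) in the generic case, with base point v."""
    return -1 if origin_in_triangle(v, apply(a, v), apply(mat_mul(a, b), v)) else 1

random.seed(1)
for _ in range(200):
    a, b, g = [[[random.uniform(-2, 2) for _ in range(2)] for _ in range(2)]
               for _ in range(3)]
    assert dec(b, g) * dec(a, mat_mul(b, g)) == dec(mat_mul(a, b), g) * dec(a, b)
```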
For later use we calculate the commutator of ${\rm Dec}i$
on the standard torus in ${\rm GL}_{2}({\mathbb R})$.
\begin{proposition}
\label{twocom}
The commutator of ${\rm Dec}i$ on the
subgroup of diagonal matrices in ${\rm GL}_{2}({\mathbb R})$
is given by:
$$
\left[
\left(\matrix{\alpha_{1}\cr & \alpha_{2}}\right),
\left(\matrix{\beta_{1}\cr & \beta_{2}}\right)
\right]_{{\rm Dec}i}
=
(\alpha_{1},\beta_{1})_{{\mathbb R},2}
(\alpha_{2},\beta_{2})_{{\mathbb R},2}
(\det\alpha,\det\beta)_{{\mathbb R},2}.
$$
The right hand side here consists of quadratic Hilbert
symbols on ${\mathbb R}$.
\end{proposition}
{\it Proof. }
The commutator pairing is continuous, bimultiplicative
and alternating.
The right hand side of the above formula is also continuous,
bimultiplicative and alternating in $\alpha,\beta$.
It is therefore sufficient to check the formula
in the case $\alpha=\left( \matrix{1 \cr &-1}\right)$,
$\beta=\left(\matrix{-1 \cr &1}\right)$.
To calculate the commutator there we choose
$v=\left(\matrix{1\cr 1}\right)$.
With this choice we have
\begin{eqnarray*}
\left[(1+\epsilon_{1})v,\alpha(1+\epsilon_{2})
v,\alpha\beta(1+\epsilon_{3}) v\right]
&=&
\left[
(1+\epsilon_{1}) \left(\matrix{1\cr 1}\right),
(1+\epsilon_{2})^{\alpha} \left(\matrix{1\cr -1}\right),
(1+\epsilon_{3}) \left(\matrix{-1\cr -1}\right)
\right],\\
\left[(1+\epsilon_{1})v,\beta(1+\epsilon_{2})
v,\beta\alpha(1+\epsilon_{3}) v\right]
&=&
\left[
(1+\epsilon_{1})\left(\matrix{1\cr 1}\right),
(1+\epsilon_{2})^{\beta}\left(\matrix{-1\cr 1}\right),
(1+\epsilon_{3})\left(\matrix{-1\cr -1}\right)
\right].
\end{eqnarray*}
We therefore have in ${\mathbb Z}/2$:
$$
\mathop{{\bf lim}}_{\epsilon_{1}\to 0^{+},\epsilon_{2}\to 0^{+},\epsilon_{3}\to 0^{+}}
\left(
\begin{array}{ll}
{\rm ord}_{0}(
[(1+\epsilon_{1})v,
\alpha(1+\epsilon_{2}) v,
\alpha\beta(1+\epsilon_{3}) v])\\
-{\rm ord}_{0}(
[(1+\epsilon_{1})v,
\beta(1+\epsilon_{2})v,
\beta\alpha(1+\epsilon_{3}) v])
\end{array}
\right)
=
1.
$$
This implies $\frac{{\rm Dec}i(\alpha,\beta)}{{\rm Dec}i(\beta,\alpha)}=-1$,
which verifies the result.
$\Box$
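Proposition \ref{twocom} can likewise be checked numerically. The sketch below (an added illustration) computes the commutator with the base point $v=(1,1)$ used in the proof above; the sample values are chosen generically, so the limiting process is not needed, and the few degenerate tuples (for example $\alpha_{1}=\alpha_{2}<0$, where $0$ falls on an edge) are skipped.

```python
# Numerical illustration (not part of the original proof) of the commutator
# formula on diagonal matrices in GL_2(R), with base point v = (1,1).

def cross(p, q):
    return p[0] * q[1] - p[1] * q[0]

def origin_in_triangle(p, q, r):
    d1 = cross((q[0] - p[0], q[1] - p[1]), (-p[0], -p[1]))
    d2 = cross((r[0] - q[0], r[1] - q[1]), (-q[0], -q[1]))
    d3 = cross((p[0] - r[0], p[1] - r[1]), (-r[0], -r[1]))
    return (d1 > 0 and d2 > 0 and d3 > 0) or (d1 < 0 and d2 < 0 and d3 < 0)

def dec_diag(a1, a2, b1, b2):
    """Dec(diag(a1,a2), diag(b1,b2)) with v = (1,1), generic case only."""
    v, av, abv = (1.0, 1.0), (a1, a2), (a1 * b1, a2 * b2)
    return -1 if origin_in_triangle(v, av, abv) else 1

def hilbert(x, y):
    """Quadratic Hilbert symbol (x,y)_{R,2}: -1 iff both arguments are negative."""
    return -1 if (x < 0 and y < 0) else 1

def degenerate(a1, a2, b1, b2):
    # 0 would lie on an edge of one of the two triangles; limits would be needed.
    return ((a1 == a2 and a1 < 0) or (b1 == b2 and b1 < 0)
            or (a1 * b1 == a2 * b2 and a1 * b1 < 0))

vals = [-3.1, -1.3, 0.8, 2.5]
for a1 in vals:
    for a2 in vals:
        for b1 in vals:
            for b2 in vals:
                if degenerate(a1, a2, b1, b2):
                    continue
                comm = dec_diag(a1, a2, b1, b2) * dec_diag(b1, b2, a1, a2)
                expected = (hilbert(a1, b1) * hilbert(a2, b2)
                            * hilbert(a1 * a2, b1 * b2))
                assert comm == expected
```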
\subsection{Stability of the cocycles}
Suppose $V_\infty$ is the direct sum
of the representations $V_1$ and $V_2$ of $\mu_m$.
The group $G_\infty$ contains the direct sum
of $G_1={\rm Aut}_{\mu_m}(V_1)$ and $G_2={\rm Aut}_{\mu_m}(V_2)$.
We have defined cocycles ${\rm Dec}i$ on $G_\infty$,
${\rm Dec}i^{(1)}$ on $G_1$ and ${\rm Dec}i^{(2)}$ on $G_2$.
The next result describes how these are related.
\begin{proposition}
\label{stability}
Let $\alpha=(\alpha_1,\alpha_2)$ and $\beta=(\beta_{1},\beta_{2})$
denote elements of $G_1\oplus G_2$.
We have up to a coboundary:
$$
{\rm Dec}i(\alpha,\beta)
=
{\rm Dec}i^{(1)}(\alpha_1,\beta_1)
{\rm Dec}i^{(2)}(\alpha_2,\beta_2)
(\det(\alpha_1),\det(\beta_2))_{{\mathbb R},2}.
$$
Here $(\cdot,\cdot)_{{\mathbb R},2}$ denotes the quadratic
Hilbert symbol on ${\mathbb R}$
and $\det$ is the determinant over the base field ${\mathbb R}$.
\end{proposition}
Note that for $m\ge 3$, $\alpha_{i}$ and $\beta_{i}$ have
positive determinant, so in this case the final
Hilbert-symbol factor above equals $1$.
{\it Proof. }
We shall first consider the case $m=2$.
Thus we have $G_1={\rm GL}_a({\mathbb R})$, $G_2={\rm GL}_b({\mathbb R})$
and $G_\infty={\rm GL}_{a+b}({\mathbb R})$.
There is an isomorphism:
$$
H^2(G_1\oplus G_2,\mu_2)
\cong
H^2(G_1,\mu_2)
\oplus
H^1(G_1,H^1(G_2,\mu_2))
\oplus
H^2(G_2,\mu_2).
$$
The middle component of the isomorphism
is given by the commutator
$[\alpha_{1},\beta_{2}]$ ($\alpha_{1}\in G_1$, $\beta_{2}\in G_2$);
the other two components are given by restriction.
We must show that the image of ${\rm Dec}i$
is that described in the proposition.
We first examine the restriction of ${\rm Dec}i$ to $G_1$.
We may assume without loss of generality
that $V_2={\mathbb R}$.
We shall assume for the moment
that $V_1={\mathbb R}^n$ with $n\ge 2$;
the case $n=1$ will be dealt with separately.
For any $r$-cell ${\mathscr A}:A\to{\mathbb R}^n$ we define
two $(r+1)$-cells
${\mathscr A}^+,{\mathscr A}^-:A\times I\to {\mathbb R}^{n+1}$ by:
$$
({\mathscr A}^+)(\underline{x},t)
=
(1-t){\mathscr A}(\underline{x})+ te_{n+1},
$$
$$
({\mathscr A}^-)(\underline{x},t)
=
(1-t){\mathscr A}(\underline{x})- te_{n+1},
$$
and we shall write ${\mathscr A}^\pm={\mathscr A}^+-{\mathscr A}^-$.
Here $e_{n+1}$ is the $(n+1)$-st standard basis element in ${\mathbb R}^{n+1}$.
The above construction has the following properties
which are easily checked:
\begin{itemize}
\item[1.]
For $r\ge 1$, we have modulo degenerate cells:
$\partial ({\mathscr A}^\pm) = (\partial{\mathscr A})^\pm$.
\item[2.]
For $\alpha\in G_1={\rm GL}_n({\mathbb R})$
we have $\alpha({\mathscr A}^\pm)=(\alpha{\mathscr A})^\pm$.
Furthermore $([-1]\cdot {\mathscr A})^{\pm}=[-1]\cdot({\mathscr A}^\pm)$
(here $[-1]$ is acting by scalar multiplication on $V_{\infty}$,
rather than on the coefficients of chains).
\item[3.]
If ${\mathscr A}$ is an $n$-cell in ${\mathbb R}^n$
then ${\rm ord}_{0,{\mathbb R}^n}({\mathscr A}) = {\rm ord}_{0,{\mathbb R}^{n+1}}({\mathscr A}^\pm)$.
(The choice of orientation here is unnecessary as $m=2$).
One should understand this formula as meaning that if
one side is defined then so is the other and they are equal.
\end{itemize}
Let $\omega_{1}$ be the generator of $H_{n-1}(V_{1}\setminus\{0\})$
as in \S3.5.
By the first and third properties above,
we may take $\omega=\omega_{1}^{\pm}$
as our generator for $H_{n}(V_{\infty}\setminus\{0\})$.
By the second property,
we may choose ${\mathscr D}={\mathscr D}_1^\pm$.
Since $n\ge 2$, the first property
implies that we may take ${\mathscr E}={\mathscr E}_1^\pm$.
Let $\alpha_{1},\beta_{1}\in G_1$.
By the second property we have:
$$
[1,\alpha_{1},\alpha_{1}\beta_{1}]\cdot{\mathscr E}
=
([1,\alpha_{1},\alpha_{1}\beta_{1}]\cdot{\mathscr E}_{1})^\pm.
$$
Hence by the third property, it follows that:
$$
{\rm Dec}i^{({\mathscr D}_{1})}(\alpha_{1},\beta_{1})
=
{\rm Dec}i^{({\mathscr D})}(\alpha_{1},\beta_{1}),
$$
so the restriction of ${\rm Dec}i$ to $G_1$ is ${\rm Dec}i^{(1)}$.
We may check by hand that this still holds in the case $n=1$,
where ${\rm Dec}i^{(1)}$ is trivial
(simply take ${\mathscr E}=\left[\left(\matrix{0\cr 1}\right)\right]$
and draw a picture).
By the same reasoning we also know that the restriction
of ${\rm Dec}i$ to $G_2$ is cohomologous to ${\rm Dec}i^{(2)}$.
It remains only to prove the formula for
the commutator $[\alpha_{1},\beta_{2}]_{{\rm Dec}i}$,
where $\alpha_{1}\in G_1$, $\beta_{2}\in G_2$.
As commutators are bimultiplicative,
this only depends on $\det(\alpha_{1})$ and $\det(\beta_{2})$
(as ${\rm SL}_{n}({\mathbb R})$ is the commutator subgroup of ${\rm GL}_{n}({\mathbb R})$).
Furthermore, by what we have already proved,
we may assume without loss of generality that
$V_1=V_2={\mathbb R}$.
We are reduced to calculating the commutator:
$$
\left[
\left(\matrix{-1 & 0\cr 0 & 1}\right),
\left(\matrix{1 & 0\cr 0 & -1}\right)
\right]_{{\rm Dec}i}.
$$
The result now follows from Proposition \ref{twocom}.
Finally suppose $m\ge 3$.
In this case $G_{1}$ and $G_{2}$ are connected,
so the middle commutator term is trivial.
We need only verify that the
restriction of ${\rm Dec}i$ to $G_{1}$ is ${\rm Dec}i^{(1)}$.
By induction it is sufficient to prove this in the case
that $V_{2}$ is a simple $\mu_{m}$-module.
Since $V_{2}$ is a linear space form of $\mu_{m}$,
this implies that $V_{2}$ is 2-dimensional.
The result follows as in the case $m=2$ but replacing
the construction ${\mathscr A}^{\pm}$ by a construction which
increases the dimensions of cells by 2.
This is left to the reader.
$\Box$
\begin{corollary}
Let $m=2$ and let ${\rm Dec}i$ be the cocycle
on ${\rm GL}_n({\mathbb R})$ constructed from the action of
$\mu_2$ on ${\mathbb R}^n$.
Then, on the standard torus in ${\rm GL}_n({\mathbb R})$,
the commutator of ${\rm Dec}i$ is given by
$$
\left[ \alpha, \beta \right]_{{\rm Dec}i}
=
(\det\alpha,\det\beta)_{{\mathbb R},2}
\prod_{i=1}^{n}
(\alpha_i,\beta_i)_{{\mathbb R},2},
$$
where
$$
\alpha
=
\left(
\matrix{\alpha_1 &&\cr &\ddots&\cr &&\alpha_n}
\right),
\quad
\beta
=
\left(
\matrix{\beta_1 &&\cr &\ddots&\cr &&\beta_n}
\right).
$$
\end{corollary}
{\it Proof. }
This follows by induction from Proposition \ref{stability}.
$\Box$
\subsection{Example: ${\rm GL}_{1}({\mathbb C})$}
We now calculate a specific example which we shall need
in the next section.
We choose an embedding $\iota:\mu_m\hookrightarrow{\mathbb C}^\times$ and we
let $V_\infty={\mathbb C}$ with the action of $\mu_m$ given by $\iota$.
Thus ${\rm Dec}i$ defines an element of $H^2({\mathbb C}^\times,\mu_m)$
and we shall now calculate this element.
As ${\mathbb C}^\times$ is abelian, 2-cocycles on this group may be studied by
studying their commutators.
However since ${\mathbb C}^\times$ is connected, the commutator of
every $2$-cocycle is trivial.
We therefore have
$$
H^2({\mathbb C}^\times,\mu_m)
=
H^2_{sym}({\mathbb C}^\times,\mu_m).
$$
By Klose's isomorphism (see \S2.3), it follows that
\begin{eqnarray*}
H^2({\mathbb C}^\times,\mu_m)
&\cong&
{\mathbb Z}/m,\\
\sigma
&\mapsto&
a,\quad\hbox{where }
\zeta^a=\prod_{i=1}^{m-1}\sigma(\iota(\zeta)^i,\iota(\zeta)).
\end{eqnarray*}
\begin{proposition}
The image under the above isomorphism of ${\rm Dec}i$ is $1$.
\end{proposition}
{\it Proof. }
We choose the generator $\zeta$ so that
$\iota(\zeta)=e^{2\pi i/m}$.
The group $H_{2}({\mathbb C},{\mathbb C}^{\times})$ is generated by
the following element:
$$
{\mathscr A}
=
[1,\iota(\zeta),\iota(\zeta)^2]
+
[1,\iota(\zeta)^2,\iota(\zeta)^3]
+
\ldots
+
[1,\iota(\zeta)^{m-1},\iota(\zeta)^m].
$$
We shall fix our orientation on ${\mathbb C}$ such that ${\rm ord}_{0}({\mathscr A})=1$.
We may therefore take
$$
\omega
=
\partial{\mathscr A}
=
[1,\iota(\zeta)]
+
[\iota(\zeta),\iota(\zeta)^2]
+
\ldots
+
[\iota(\zeta)^{m-1},1].
$$
We may then choose ${\mathscr D}$ to be the line segment $[1,e^{2\pi i/m}]$.
Thus
$$
\partial{\mathscr D}
=
[e^{2\pi i/m}]-[1]
=
(1-[\iota(\zeta)])(-[1]).
$$
We may therefore choose ${\mathscr E}=-[1]$.
With this choice of ${\mathscr E}$ we have as required:
$$
\prod_{i=1}^{m-1}{\rm Dec}i(\iota(\zeta)^i,\iota(\zeta))
=
\zeta^{\textstyle{{\rm ord}_0({\mathscr A})}}
=
\zeta.
$$
$\Box$
The corresponding central extension (normalized
as described in \S2.2) is as follows:
$$
\begin{array}{ccccccccc}
1 &\to& \mu_{m}& \stackrel{\iota}{\to}&
{\mathbb C}^{\times} &\to &{\mathbb C}^{\times}& \to& 1,\\
&&&& \alpha &\mapsto&\alpha^{m}.
\end{array}
$$
\subsection{The totally complex case.}
In this section we suppose $k$
is a totally complex number field
containing a primitive $m$-th root of unity and let
$\mu_{m}$ be the group of all $m$-th roots of unity in $k$.
We let $V_{\infty}=k_{\infty}^{n}$,
where $k_{\infty}=k\otimes_{{\mathbb Q}}{\mathbb R}$.
The action of $\mu_{m}$ by scalar multiplication on $V_{\infty}$
satisfies the conditions of \S3.5.
The group $G_{\infty}$ contains ${\rm GL}_{n}(k_{\infty})$.
We therefore obtain by restriction
a cocycle ${\rm Dec}i$ on ${\rm GL}_{n}(k_{\infty})$.
In this section we shall study this cocycle.
\subsubsection{The cocycle up to a coboundary.}
The determinant map gives an isomorphism:
$$
{\rm GL}_{n}(k_{\infty})/{\rm SL}_{n}(k_{\infty})
\stackrel{\det}{\cong}
k_{\infty}^{\times}.
$$
This gives us an inflation map:
$$
H^{2}(k_{\infty}^{\times},\mu_{m})
\to
H^{2}({\rm GL}_{n}(k_{\infty}),\mu_{m}).
$$
As ${\rm SL}_{n}(k_{\infty})$ is both connected and simply connected,
the groups $H^{1}({\rm SL}_{n}(k_{\infty}),\mu_{m})$
and $H^{2}({\rm SL}_{n}(k_{\infty}),\mu_{m})$
are both trivial.
Hence, by the Hochschild--Serre spectral sequence,
the above map is an isomorphism.
As $k_{\infty}^{\times}$
is abelian we may speak of the
commutators of cocycles; however as
$k_{\infty}^{\times}$
is connected, these commutators are all trivial.
Thus we have
$$
H^{2}({\rm GL}_{n}(k_{\infty}),\mu_{m})
\cong
H^{2}_{sym}(k_{\infty}^{\times},\mu_{m}).
$$
Let $S_{\infty}$ be the set of archimedean places of $k$.
We have a decomposition:
$$
k_{\infty}^{\times}
=
\bigoplus_{v\in S_{\infty}} k_{v}^{\times}.
$$
By the results described in \S2.3, we have
$$
H^{2}({\rm GL}_{n}(k_{\infty}),\mu_{m})
\cong
\bigoplus_{v\in S_{\infty}}
H^{2}_{sym}(k_{v}^{\times},\mu_{m}).
$$
Klose's isomorphism (\S2.3) now gives:
$$
H^{2}({\rm GL}_{n}(k_{\infty}),\mu_{m})
\cong
\bigoplus_{v\in S_{\infty}}
{\mathbb Z}/m.
$$
By the results of the previous two subsections (\S3.7 and \S3.8),
we know that the image of ${\rm Dec}i$ under the above
isomorphism is $(1,\ldots,1)$.
\subsubsection{The group extension.}
We now describe the central extension of ${\rm GL}_{n}(k_{\infty})$,
corresponding to ${\rm Dec}i$.
Fix $v\in S_{\infty}$, so $k_{v}$ is non-canonically isomorphic
to ${\mathbb C}$.
Define a subgroup $\widetilde{{\rm GL}}_{n}(k_{v})$ of ${\rm GL}_{n+1}(k_{v})$
as follows:
$$
\widetilde{{\rm GL}}_{n}(k_{v})
=
\left\{\left(\matrix{\alpha&0\cr 0&\beta}\right) :
\begin{array}{c}
\alpha\in{\rm GL}_{n}(k_{v}),\beta\in k_{v}^{\times},\\
\det\alpha=\beta^{m}
\end{array}
\right\}.
$$
The group extension of ${\rm GL}_{n}(k_{v})$
defined by the restriction of ${\rm Dec}i$,
is concretely realized as follows:
$$
\begin{array}{ccccccccc}
1&\to&\mu_{m}&\to&\widetilde{{\rm GL}}_{n}(k_{v})
&\to&{\rm GL}_{n}(k_{v})&\to&1,\\
&&\zeta&\mapsto&
\left(\matrix{I_{n}&0\cr 0&\iota_{v}(\zeta)}\right),\\
&&&&\left(\matrix{\alpha&0\cr 0&\beta}\right)&\mapsto&\alpha.
\end{array}
$$
Here $\iota_{v}:k\hookrightarrow k_{v}$ denotes the embedding
corresponding to the place $v$.
Let $\mu_{m}(k_{\infty})$ be the $m$-torsion subgroup of
$k_{\infty}^{\times}$.
Define a subgroup $K$ of $\mu_{m}(k_{\infty})$ to be the kernel of
the homomorphism $h:\mu_{m}(k_{\infty})\to\mu_{m}$ defined by
$$
h(\xi_{v})
=
{\rm pr}od_{v\in S_{\infty}}\iota_{v}^{-1}(\xi_{v}).
$$
The full group extension $\widetilde{{\rm GL}}_{n}(k_{\infty})$ is the quotient
$$
\left(
\bigoplus_{v\in S_{\infty}}
\widetilde{{\rm GL}}_{n}(k_{v})
\right)/K.
$$
\subsubsection{How the cocycle splits.}
We will need to calculate precisely how the cocycle
${\rm Dec}i$ splits.
This is essential in order to find a formula for
the Kubota symbol.
Consider the set
$$
U
=
\{\alpha\in {\rm GL}_{n}(k_{\infty}):
\hbox{$\alpha$ has no negative real eigenvalue}\}.
$$
The set $U$ is contractible, since it is star-shaped about $1$.
It is also a dense open subset of ${\rm GL}_{n}(k_{\infty})$,
as may be seen from the Jordan canonical form.
\begin{lemma}
If $\alpha\in U$ then the function ${\rm Dec}i$
is locally constant at the point $(1,\alpha)$.
\end{lemma}
{\it Proof. }
By definition we have:
$$
{\rm Dec}i(1,\alpha)
=
\zeta^{\textstyle{{\rm ord}_{0}([1,1,\alpha]\cdot{\mathscr E})}}.
$$
If we suppose that ${\rm Dec}i$ is discontinuous at
$(1,\alpha)$ then this implies that $0$ is on the
base set of $[1,1,\alpha]\cdot{\mathscr E}$.
Thus there is a $v$ in the base set of ${\mathscr E}$ such that
$$
(t+(1-t)\alpha) \cdot v
=
0,
\quad t\in(0,1).
$$
This implies that $v$ is a (non-zero) eigenvector of $\alpha$
with eigenvalue $\frac{-t}{1-t}<0$,
contradicting the assumption $\alpha\in U$.
$\Box$
Let $\overline\mu=k_{\infty}^{\times}/K$.
Using the map $h$,
we may identify $\mu_{m}$ with the subgroup $\mu_{m}(k_{\infty})/K$
of $\overline\mu$,
and hence we may regard ${\rm Dec}i$ as taking values in $\overline\mu$;
as such it is a coboundary.
We define a function $w:{\rm GL}_{n}(k_{\infty})\to \overline\mu$ which splits
${\rm Dec}i$.
First suppose $\alpha\in U$ and consider the path in ${\mathfrak g}$:
$$
{\wp}_{\alpha}=[1,\alpha].
$$
As $\alpha$ has no negative real eigenvalues,
this path is contained in ${\rm GL}_{n}(k_{\infty})$.
There is a unique continuous path $q_{\alpha}:I\to k_{\infty}^{\times}$
defined by:
$$
q_{\alpha}(0)=1,\quad
q_{\alpha}(t)^{m}=\det{\wp}_{\alpha}(t).
$$
We define $w(\alpha)=q_{\alpha}(1)$.
More generally for $\alpha\in {\rm GL}_{n}(k_{\infty})$ we define
$$
w(\alpha)
=
\mathop{{\bf lim}}_{\epsilon\to 0^{+}}
w(\alpha(1+\epsilon)).
$$
Clearly $w(\alpha)^{m}=\det\alpha$.
Hence, for $\alpha\in {\rm SL}_{n}(k_{\infty})$
we have $w(\alpha)\in \mu_{m}(k_{\infty})$.
\begin{theorem}
\label{complexsplit}
For $\alpha,\beta\in {\rm GL}_{n}(k_{\infty})$ we have
$\frac{w(\alpha)w(\beta)}{w(\alpha\beta)}\in \mu_{m}(k_{\infty})$.
Furthermore in $\mu_{m}$ we have:
$$
{\rm Dec}i(\alpha,\beta)
=
h\left(\frac{w(\alpha)w(\beta)}{w(\alpha\beta)}\right).
$$
For $\alpha,\beta\in {\rm SL}_{n}(k_{\infty})$ we have
$$
{\rm Dec}i(\alpha,\beta)
=
\frac{hw(\alpha)hw(\beta)}{hw(\alpha\beta)}.
$$
\end{theorem}
The theorem gives an explicit splitting of the
image of ${\rm Dec}i$ in $Z^{2}({\rm GL}_{n}(k_{\infty}),\overline\mu)$
and also a splitting of the restriction of
${\rm Dec}i$ to ${\rm SL}_{n}(k_{\infty})$.
{\it Proof. }
The first statement is trivial and the third statement follows
immediately from the second.
We shall prove the second statement.
By the limiting definition of both sides
of the formula, it is sufficient to
prove this in the case $\alpha,\beta,\alpha\beta\in U$.
As $U$ is simply connected
there is a unique continuous section $\tau:U\to \widetilde{{\rm GL}}_{n}(k_{\infty})$
such that $\tau(1)=1$.
This section is given by
$$
\alpha
\mapsto
\left(\matrix{\alpha&0\cr 0&w(\alpha)}\right).
$$
Now consider the realization of $\widetilde{{\rm GL}}_{n}(k_{\infty})$
as ${\rm GL}_{n}(k_{\infty})\times\mu_{m}$ with multiplication
given by
$(\alpha,\xi)(\beta,\chi)=(\alpha\beta,\xi\chi\,{\rm Dec}i(\alpha,\beta))$.
To prove the theorem it is sufficient to show that the map
$U\to \widetilde{{\rm GL}}_{n}(k_{\infty})$ given by
$\alpha\mapsto (\alpha,1)$ is continuous, and hence coincides
with $\tau$.
There is a neighbourhood $U_{0}$ of 1 in ${\rm GL}_{n}(k_{\infty})$ such that for
$\alpha,\beta\in U_{0}$ we have ${\rm Dec}i(\alpha,\beta)=1$.
Therefore the map $\beta\mapsto (\beta,1)$ is a local
homomorphism on $U_{0}$, and is therefore continuous.
On the other hand by the previous lemma,
for $\alpha\in U$ there is a neighbourhood
$U_{\alpha}$ of 1 in $U_{0}$,
such that for $\beta\in U_{\alpha}$ we have:
$$
(\alpha\beta,1)=(\alpha,1)(\beta,1).
$$
Thus the left hand side is a continuous function
of $\beta\in U_{\alpha}$, so the theorem is proved.
$\Box$
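In the simplest totally complex situation the splitting function $w$ can be computed explicitly and the theorem tested numerically. The following Python sketch (an added illustration; the choices $n=1$, a single infinite place, and $m=5$ are ours, so that ${\rm GL}_{1}(k_{\infty})={\mathbb C}^{\times}$ and $w(\alpha)$ is the continuous $m$-th root of $\alpha$ along the segment $[1,\alpha]$) recovers the computation of the ${\rm GL}_{1}({\mathbb C})$ example above: the product of the values ${\rm Dec}i(\zeta^{i},\zeta)$ over $i$ is $\zeta$.

```python
# Illustrative computation (not part of the original text): n = 1, one complex
# place, m = 5, so GL_1(k_infty) = C^x.  We compute w(alpha) by tracking a
# continuous m-th root along the straight path from 1 to alpha, and then the
# cocycle Dec(alpha, beta) = w(alpha) w(beta) / w(alpha beta).
import cmath

M = 5  # m odd, so the segment [1, zeta^i] never passes through 0

def w(alpha, m=M, steps=4000):
    """q_alpha(1), where q_alpha is the continuous m-th root along [1, alpha]."""
    theta, prev = 0.0, 1 + 0j
    for k in range(1, steps + 1):
        z = 1 + (alpha - 1) * k / steps
        theta += cmath.phase(z / prev)  # small increments: no branch jumps
        prev = z
    return abs(alpha) ** (1.0 / m) * cmath.exp(1j * theta / m)

def dec(a, b):
    """w(a) w(b) / w(ab): an m-th root of unity, since w(x)^m = x."""
    return w(a) * w(b) / w(a * b)

zeta = cmath.exp(2j * cmath.pi / M)
assert abs(w(zeta) ** M - zeta) < 1e-6   # w is a continuous m-th root

total = 1
for i in range(1, M):
    total *= dec(zeta ** i, zeta)
assert abs(total - zeta) < 1e-6          # matches the GL_1(C) example
```

The final assertion is just the telescoping identity $\prod_{i}w(\zeta^{i})w(\zeta)/w(\zeta^{i+1})=w(\zeta)^{m}/w(1)=\zeta$, in agreement with the proposition of the ${\rm GL}_{1}({\mathbb C})$ example.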
\begin{remark}
The above theorem shows that
in the complex case the cocycle ${\rm Dec}i^{({\mathscr D})}$
does not depend on the fundamental domain
${\mathscr D}$.
The cocycle does depend on the basis of
${\mathfrak g}$ used to define the limits, but this
dependence is only for $\alpha$, $\beta$
or $\alpha\beta$ in ${\rm GL}_{n}(k_{\infty})\setminus U$.
\end{remark}
\subsection{The real case.}
If $k$ is a number field which is not totally complex,
then $k$ contains only two roots of unity, namely $\pm 1$.
We describe the result analogous to Theorem \ref{complexsplit}
in this case.
Suppose that $\mu_{m}=\{1,-1\}$.
We shall identify $V_{\infty}$ with ${\mathbb R}^{d}$ for purposes
of notation.
With this identification, $G_{\infty}$ is the group ${\rm GL}_{d}({\mathbb R})$.
We may choose ${\mathfrak S}$ to be a triangulation
of the unit sphere $S^{d-1}$ in ${\mathbb R}^{d}$.
Our fundamental domain ${\mathscr D}$ may be taken to be the cell:
$$
{\mathscr D}
=
\left\{
\left(\matrix{x_{1}\cr\vdots\cr x_{d}}\right)\in S^{d-1} :
x_{1}\ge 0
\right\}.
$$
Thus ${\mathscr E}$ can be taken to be the cell
$$
{\mathscr E}
=
\left\{
\left(\matrix{x_{1}\cr\vdots\cr x_{d}}\right)\in S^{d-1} :
x_{1}= 0, \; x_{2}\ge 0
\right\}.
$$
The cell ${\mathscr E}$ is contained in the following subspace
$W = \left\{
\left(\matrix{0\cr x_{2}\cr\vdots\cr x_{d}}\right)
\right\}$.
Define
$$
U'
=
\left\{\alpha\in{\rm GL}_{d}({\mathbb R}):
\begin{array}{c}
\hbox{ $\alpha$ has no eigenvector in $W$}\\
\hbox{with a negative real eigenvalue}
\end{array}
\right\}.
$$
The set $U'$ is a dense, open, contractible subset of ${\rm GL}_{d}({\mathbb R})$.
One may prove analogously to the totally complex case:
\begin{theorem}
If $\alpha$, $\beta$ and $\alpha\beta$ are all in $U'$
then ${\rm Dec}i^{({\mathscr D})}$ is locally constant at $(\alpha,\beta)$.
\end{theorem}
This shows that ${\rm Dec}i^{({\mathscr D})}$ is the cocycle
obtained from a section
$G_{\infty}\to \tilde G_{\infty}$,
which takes the identity to the identity
and is continuous on $U'$.
\section{Construction of fundamental functions.}
Let $k$ be an algebraic number field
containing a primitive $m$-th root of unity
and consider the vector space $V=k^{n}$.
As before, we let $S$ be the set of places $v$ of $k$ such that
$|m|_{v}\ne 1$.
We define $V_{{\mathbb A}_S}={\mathbb A}_S^{n}$ and $V_{\infty}=k_{\infty}^{n}$,
where $k_{\infty}=k\otimes_{{\mathbb Q}}{\mathbb R}$.
From the previous section
we have an arithmetic cocycle ${\rm Dec}_{{\mathbb A}_S}^{(f,L)}$ on ${\rm GL}_n({\mathbb A}_S)$
and a geometric cocycle ${\rm Dec}i^{({\mathscr D})}$ on ${\rm GL}_n(k_\infty)$.
We shall relate the two.
However the arithmetic cocycle is dependent on a choice
of fundamental function $f$ on ${\mathbb A}_S^n\setminus\{0\}$
and the geometric cocycle is dependent (in the real case at least)
on a fundamental domain ${\mathscr D}$.
In order to describe the relation between ${\rm Dec}_{{\mathbb A}_S}$ and ${\rm Dec}_\infty$,
we must first fix our choices of $f$ and ${\mathscr D}$.
In this section we choose a fundamental function $f$
(at least generically) and a related fundamental domain ${\mathscr D}$
and describe the mechanism by which the two cocycles will be related.
In section 5 we deal with the problem of defining $f$ everywhere,
and in section 6 we prove the relation between
the cocycles,
based on the methods of this section.
From now on we shall assume that $m$ is a power
of a prime $p$.
There is no loss of generality here,
but we will save on notation by doing this.
We shall write $\rho$ for a primitive $p$-th root of unity in $k$.
\subsection{The space $X$.}
We fix once and for all a lattice
$L\subset V$ which is free as a ${\mathbb Z}[\mu_{m}]$-module.
For any finite place $v$ we shall write $L_v$
for the closure of $L$ in $V_v$.
We shall also write $L_{{\mathbb A}_S}$ for the closure of $L$
in $V_{{\mathbb A}_S}$.
Hence $L_{{\mathbb A}_S}=\prod_{v\notin S} L_{v}$.
Let
$$
V_m
=
V
\cap
\bigcap_{v|m} L_v,
$$
and consider the group $X=V_{m}/L$.
There are two ways of thinking about $X$.
First, the diagonal embedding of $k$ in ${\mathbb A}_S$ induces an isomorphism
\begin{equation}
\label{Xism}
X \cong V_{{\mathbb A}_S}/L_{{\mathbb A}_S}.
\end{equation}
Secondly, we can regard $X$ as a dense subgroup of the group
$X_{\infty}=V_{\infty}/L$.
The two ways of thinking about $X$ give the connection
between the arithmetic and geometric cocycles defined
in \S3.
\paragraph{The semi-group $\Upsilon$.}
Consider the following semigroup in ${\rm GL}_{n}(k)$:
$$
\Upsilon
=
\{\alpha\in {\rm GL}_{n}(k) :
\alpha L \supseteq L\}.
$$
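Concretely, $\alpha L\supseteq L$ is equivalent to $\alpha^{-1}L\subseteq L$, i.e. to $\alpha^{-1}$ having integral entries once a basis of $L$ is fixed. Below is a toy membership test in exact rational arithmetic, taking $L={\mathbb Z}^n$ and ignoring the root-of-unity hypothesis on $k$ (our own illustration; `inverse` and `in_upsilon` are hypothetical names):

```python
from fractions import Fraction

def inverse(a):
    """Exact inverse of an invertible square matrix over Q, by Gauss-Jordan."""
    n = len(a)
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(a)]
    for col in range(n):
        piv = next(r for r in range(col, n) if m[r][col] != 0)
        m[col], m[piv] = m[piv], m[col]
        p = m[col][col]
        m[col] = [x / p for x in m[col]]
        for r in range(n):
            if r != col:
                f = m[r][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [row[n:] for row in m]

def in_upsilon(alpha):
    """alpha*L ⊇ L for L = Z^n  ⟺  alpha^{-1} has integral entries."""
    return all(x.denominator == 1 for row in inverse(alpha) for x in row)
```

Since $(\alpha\beta)^{-1}=\beta^{-1}\alpha^{-1}$ is again integral, the test is consistent with $\Upsilon$ being closed under multiplication.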
Let $f$ be a fundamental function on $X\setminus\{0\}$,
and define a fundamental function $f_{{\mathbb A}_S}$ on
$V_{{\mathbb A}_S}\setminus\{0\}$ by
$$
f_{{\mathbb A}_S}(v)
=
\left\{
\begin{array}{ll}
f(v+L_{{\mathbb A}_S}) & v\notin L_{{\mathbb A}_S},\\
f_{o}(v) & v\in L_{{\mathbb A}_S},
\end{array}
\right.
$$
where $f_{o}$ is any fixed fundamental function.
By abuse of notation we shall write $L$ and $L_{{\mathbb A}_S}$ for the
characteristic functions of these sets.
\begin{lemma}
\label{Xism3}
With the above notation we have for $\alpha,\beta\in\Upsilon$:
$$
{\rm Dec}_{{\mathbb A}_S}^{(f_{{\mathbb A}_S},L_{{\mathbb A}_S})}(\alpha,\beta)
=
\langle f\alpha^{-1}-f | \alpha\beta L - \alpha L\rangle_{X}.
$$
\end{lemma}
{\it Proof. }
This follows immediately from Lemma \ref{Xism4}.
$\Box$
\subsection{The complex ${\mathfrak X}$.}
Our method of calculating ${\rm Dec}_{{\mathbb A}_S}(\alpha,\beta)$
on $\Upsilon$ involves constructing fundamental functions on
$X$ quite explicitly by embedding $X$ in $X_{\infty}:=V_{\infty}/L$
and finding a fundamental domain ${\mathscr F}$ for the
action of $\mu_{m}$ on $X_{\infty}$.
In this section we will find the fundamental domain ${\mathscr F}$.
\subsubsection{Parallelotopes and $\Diamond$-products.}
Let ${\mathscr T}$ be a singular $r$-cube and ${\mathscr U}$ a
singular $s$-cube in $V_\infty$.
We can define an $(r+s)$-cube ${\mathscr T}\Diamond {\mathscr U}$ by:
$$
({\mathscr T}\Diamond{\mathscr U})(x_1,\ldots,x_r,y_1,\ldots,y_s)
=
{\mathscr T}(x_1,\ldots,x_r) + {\mathscr U}(y_1,\ldots,y_s).
$$
Note that for any $v\in V_{\infty}$,
the cell $[v]\Diamond{\mathscr T}$ is simply a
translation of ${\mathscr T}$ by $v$.
Let $v,a_{1},\ldots,a_{r}\in V_{\infty}$.
By the \emph{parallelotope} ${\rm Part}(v,a_{1},\ldots,a_{r})$ in
$V_{\infty}$
we shall mean the following cell $I^{r}\to V_{\infty}$:
$$
\Big({\rm Part}(v,a_{1},\ldots,a_{r})\Big)(\underline{x})
=
v+\sum_{i=1}^r x_{i}a_i.
$$
Hence this is just a $\Diamond$-product of line-segments.
We shall more often deal with
the projections ${\rm Par}$ of ${\rm Part}$ in $X_{\infty}$.
We do not assume that the vectors $a_{i}$
are linearly independent or even non-zero.
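These two constructions are easy to model concretely. In the sketch below (our own illustration; the function names are hypothetical), an $r$-cube is a Python function of $r$ coordinates, and a parallelotope is rebuilt as a $\Diamond$-product of a point and line-segments:

```python
def diamond(T, r, U, s):
    """The ◇-product of an r-cube T and an s-cube U:
    (T ◇ U)(x, y) = T(x) + U(y), an (r+s)-cube."""
    def TU(*coords):
        x, y = coords[:r], coords[r:]
        return tuple(t + u for t, u in zip(T(*x), U(*y)))
    return TU

def segment(a):
    """The 1-cube [0, a]: x ↦ x·a."""
    return lambda x: tuple(x * ai for ai in a)

def part(v, *vectors):
    """The parallelotope with origin v and edge vectors a_1, ..., a_r."""
    def P(*xs):
        return tuple(vi + sum(x * a[i] for x, a in zip(xs, vectors))
                     for i, vi in enumerate(v))
    return P
```

As a check, `part(v, a1, a2)` agrees pointwise with the iterated product $[v]\Diamond[0,a_1]\Diamond[0,a_2]$.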
\subsubsection{Construction of ${\mathfrak X}$.}
We shall require a cell decomposition ${\mathfrak X}$ of $X_{\infty}$
in which the cells are parallelotopes.
To describe this cell decomposition
it is sufficient to describe the
highest dimensional cells.
We begin with a cell decomposition
of ${\mathbb Q}(\rho)_{\infty}/{\mathbb Z}[\rho]$.
The highest dimensional cells are of the form
$$
{\mathscr P}
=
{\rm Par}\left(0,\frac{\rho^{i}}{1-\rho},\frac{\rho^{i+1}}{1-\rho},
\ldots,\frac{\rho^{i+p-2}}{1-\rho}\right),
\quad
i=1,\ldots,p.
$$
\begin{lemma}
The cells ${\mathscr P}$ above form the highest dimensional cells of
a cell decomposition of ${\mathbb Q}(\rho)_{\infty}/{\mathbb Z}[\rho]$.
\end{lemma}
{\it Proof. }
This is Theorem 1.1 of \cite{kubota2}.
$\Box$
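For $p=3$ one can sanity-check a necessary condition for the tiling numerically: the three parallelograms have total area equal to the covolume of ${\mathbb Z}[\rho]$ in ${\mathbb Q}(\rho)_{\infty}\cong{\mathbb C}$. This is our own check, not a substitute for the proof in \cite{kubota2}:

```python
import cmath

p = 3
rho = cmath.exp(2j * cmath.pi / p)           # primitive p-th root of unity

def area(a, b):
    """Area of the parallelogram spanned by a, b in C."""
    return abs((a.conjugate() * b).imag)

covolume = area(1, rho)                      # covolume of Z[rho] in C
total = sum(area(rho**i / (1 - rho), rho**(i + 1) / (1 - rho))
            for i in range(1, p + 1))
assert abs(total - covolume) < 1e-9          # the p cells fill the torus once
```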
We shall refer to the corresponding cell decomposition
of ${\mathbb Q}(\rho)_{\infty}/{\mathbb Z}[\rho]$ as ${\mathfrak X}(p)$.
We next introduce a cell decomposition
of ${\mathbb Q}(\zeta)_{\infty}/{\mathbb Z}[\zeta]$,
where $\zeta$ is a primitive $m$-th root of unity in $k$.
We have a decomposition
$$
{\mathbb Q}(\zeta)_{\infty}/{\mathbb Z}[\zeta]
=
\bigoplus_{i=1}^{m/p}
\zeta^{i} \cdot {\mathbb Q}(\rho)_{\infty}/{\mathbb Z}[\rho].
$$
We may therefore define a cell decomposition
${\mathfrak X}(p^a)$ of ${\mathbb Q}(\zeta)_{\infty}/{\mathbb Z}[\zeta]$ by
$$
{\mathfrak X}(p^a)
=
\prod_{i=1}^{m/p}\zeta^{i}\cdot{\mathfrak X}(p).
$$
Thus the cells of ${\mathfrak X}(p^a)$ are of the form
$\mathop{\Diamond}_{i=1}^{m/p} \zeta^i {\mathscr P}_i$
with ${\mathscr P}_i$ a cell of ${\mathfrak X}(p)$.
As we are assuming that $L$ is free over ${\mathbb Z}[\zeta]$,
there is a basis $\{b_{1},\ldots,b_{n}\}$ for $V$ over ${\mathbb Q}(\zeta)$
such that $L=\sum_{i=1}^{n} {\mathbb Z}[\zeta]b_{i}$.
Again we have a decomposition
$$
X_\infty
=
\bigoplus_{i=1}^{n}({\mathbb Q}(\zeta)_{\infty}/{\mathbb Z}[\zeta])\cdot b_i.
$$
We may therefore define
$$
{\mathfrak X}
=
\prod_{i=1}^{n}{\mathfrak X}(p^a)\cdot b_i.
$$
\begin{lemma}
\label{constructX}
The cell decomposition ${\mathfrak X}$ of $X_{\infty}$
has the following properties:
\begin{itemize}
\item[(i)]
The group $\mu_{m}$ permutes the cells of ${\mathfrak X}$.
\item[(ii)]
Every positive dimensional cell has trivial
stabilizer in $\mu_m$.
\item[(iii)]
Every $r$-cell ${\mathscr P}$ in ${\mathfrak X}$ is of the form
$$
{\mathscr P}
=
{\rm pr}\,\tilde{\mathscr P},\quad
\tilde{\mathscr P}
=
{\rm Part}(v_{\mathscr P},a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}),
$$
with $v_{\mathscr P}\in \frac{1}{1-\rho}L$ and
$a_{{\mathscr P},i}\in \frac{1}{1-\rho}L\setminus L$.
For any ${\mathscr P}$
the set $\{a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}\}$
is linearly independent over ${\mathbb R}$.
\item[(iv)]
We have $|\tilde{\mathscr P}|\cap L \subseteq\{0\}$
and $v_{{\mathscr P}}=0$ if and only if $0\in|\tilde{\mathscr P}|$.
\end{itemize}
\end{lemma}
Much of this result is contained in Theorem 1.3 of \cite{kubota2},
although it is stated there in a rather different notation.
A sketch of the proof is included for completeness.
{\it Proof. }
(i) and (iii) are clear from the construction.
It is sufficient to verify (iv) for ${\mathfrak X}(p)$
and this is not difficult.
It remains to show that no positive dimensional cell
is fixed by a non-trivial subgroup of $\mu_m$.
Let ${\mathscr P}$ be a positive dimensional cell
and suppose $\xi{\mathscr P}={\mathscr P}$ for some $\xi\in\mu_{m}\setminus\{1\}$.
We have an expression for ${\mathscr P}$ as a parallelotope:
$$
{\mathscr P}
=
{\rm Par}(v_{{\mathscr P}},a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}).
$$
The set $\{a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}\}$ is permuted by $\xi$.
This implies that the elements $\xi^{i}a_{{\mathscr P},1}$ are all in this
set.
However, the sum of these elements is zero,
since $\sum_{i=0}^{t-1}\xi^{i}=0$, where $t>1$ is the order of $\xi$.
This contradicts the fact (iii),
that the $a_{{\mathscr P},i}$ are linearly independent.
$\Box$
We shall write ${\mathfrak X}_{\bullet}$ for the corresponding chain complex with
coefficients in ${\mathbb Z}/m$.
Thus ${\mathfrak X}_{r}$ is the free ${\mathbb Z}/m$-module on the $r$-cells of the
decomposition.
By part (i) of the lemma, we have an action of $\mu_{m}$ on
${\mathfrak X}_{\bullet}$.
\begin{lemma}
For $r=1,2,\ldots,d$ the $({\mathbb Z}/m)[\mu_{m}]$-module ${\mathfrak X}_{r}$
is free.
\end{lemma}
{\it Proof. }
A basis consists of a set of representatives
for the $\mu_{m}$-orbits of $r$-cells in ${\mathfrak X}$.
To show that this is a basis
we use the fact that such cells have trivial stabilizer.
$\Box$
\subsubsection{The fundamental function $f$.}
We shall fix an orientation ${\rm ord}_{V}$ on $V_{\infty}$.
Using this, we define a corresponding orientation ${\rm ord}_{X}$
on $X_{\infty}$ by the formula
$$
{\rm ord}_{X,x}({\rm pr}({\mathscr T}))
=
\sum_{v\in V_{\infty}\,:\,{\rm pr}(v)=x}
{\rm ord}_{V,v}({\mathscr T}).
$$
Let $\omega_{X}\in {\mathfrak X}_{d}$ be the generator
of $H_{d}({\mathfrak X})$, for which ${\rm ord}_{X,x}(\omega_{X})=1$
holds for every $x\in X_{\infty}$.
By Lemma \ref{orient}, $(1-[\zeta])\omega_{X}=0$
holds in $H_{d}({\mathfrak X})$.
Since ${\mathfrak X}_{d+1}=0$, we have $(1-[\zeta])\omega_{X}=0$
in ${\mathfrak X}_{d}$.
Hence by the exact sequence (\ref{star})
there is an element ${\mathscr F}\in {\mathfrak X}_{d}$ such that
$[\mu_{m}]{\mathscr F} = \omega_{X}$.
We fix such an ${\mathscr F}$ once and for all.
We may regard ${\mathscr F}$ as a fundamental domain for
the action of $\mu_{m}$ on $X_{\infty}$.
Define a function $f:X_{\infty}\setminus |\partial{\mathscr F}| \to {\mathbb Z}/m$
by
$$
f(x)
=
{\rm ord}_{x}({\mathscr F}).
$$
\begin{lemma}
The function $f$ is fundamental on the set of $x\in X_{\infty}$
whose $\mu_{m}$-orbit does not meet the boundary of ${\mathscr F}$.
\end{lemma}
{\it Proof. }
For such an $x$, we have by Lemma \ref{orient}:
$$
\sum_{\xi\in\mu_{m}} f(\xi x)
=
\sum_{\xi\in\mu_{m}} {\rm ord}_{\xi x}{\mathscr F}
=
\sum_{\xi\in\mu_{m}} {\rm ord}_{x}\left(\xi^{-1}\cdot{\mathscr F}\right).
$$
By linearity of ${\rm ord}_{x}$ we have:
$$
\sum_{\xi\in\mu_{m}} f(\xi x)
=
{\rm ord}_{x}\left([\mu_{m}]{\mathscr F}\right)
=
{\rm ord}_{x}\left(\omega_{X}\right)
=
1.
$$
$\Box$
We shall worry about how to extend the definition
of $f$ to $|\partial{\mathscr F}|$ in \S5.3.
The solution will be to take a limit over fundamental
domains tending to ${\mathscr F}$.
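A toy instance with $m=2$ shows both the definition and the boundary problem: take $X_{\infty}={\mathbb R}/{\mathbb Z}$ with $\mu_{2}=\{\pm1\}$ acting by negation and ${\mathscr F}=[0,\frac{1}{2}]$. Away from the boundary orbit $\{0,\frac{1}{2}\}$ the characteristic function of ${\mathscr F}$ has orbit sums equal to $1$; on the boundary it does not. This is our own illustration, not notation from the text:

```python
def f(x):
    """Characteristic function of the open fundamental domain (0, 1/2)
    for mu_2 = {+1, -1} acting on R/Z by negation."""
    x %= 1.0
    return 1 if 0.0 < x < 0.5 else 0

def orbit_sum(x):
    """Sum of f over the mu_2-orbit of x; equals 1 iff f is fundamental at x."""
    return f(x) + f(-x)
```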
\subsection{The complex ${\mathfrak S}$.}
Now that we have a fundamental function $f$ at least generically,
we shall describe the fundamental domain ${\mathscr D}$ and the cell
complex ${\mathfrak S}\subset V_{\infty}\setminus\{0\}$
used in the definition of ${\rm Dec}_\infty^{({\mathscr D})}$
in \S3.5.
We have a cell decomposition ${\mathfrak X}$ of $X_\infty$.
Each $r$-cell in this decomposition is of the form
$$
{\mathscr P}
=
\scP{\rm ar}(v_{\mathscr P},a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}).
$$
Corresponding to each such cell,
we define an $(r-1)$-chain
${\mathfrak s}({\mathscr P})\in C_{r-1}(V_\infty\setminus\{0\})$ as follows.
If $v_{\mathscr P}\ne 0$ then we simply define ${\mathfrak s}{\mathscr P}=0$.
If $v_{\mathscr P}=0$
then we define ${\mathfrak s}{\mathscr P}:\Delta^{r-1}\to V_\infty\setminus\{0\}$
by ${\mathfrak s}{\mathscr P}=[a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}]$.
Roughly speaking ${\mathfrak s}{\mathscr P}$ is the set of unit tangent vectors
to ${\mathscr P}$ at $0$.
The cells ${\mathfrak s}{\mathscr P}$ form a cell complex,
which we shall denote ${\mathfrak S}$.
It follows easily that ${\mathfrak S}$ satisfies the conditions of \S3.5.
We extend ${\mathfrak s}$ by linearity to a map
${\mathfrak s}:{\mathfrak X}_\bullet\to {\mathfrak S}_{\bullet-1}$.
\begin{lemma}
\label{sigma}
The map ${\mathfrak s}:{\mathfrak X}_\bullet\to {\mathfrak S}_{\bullet-1}$
commutes with the action of $\mu_{m}$
and anticommutes with $\partial$.
That is, for any cell ${\mathscr P}$ of ${\mathfrak X}$, we have
$$
\partial {\mathfrak s} {\mathscr P}
=
-{\mathfrak s}\partial{\mathscr P}.
$$
(Here the minus sign is acting on the coefficients,
rather than on $V_{\infty}$).
\end{lemma}
{\it Proof. }
The first statement is clear.
For the second we need to
examine some separate cases.
Let ${\mathscr P}$ be an $r$-cell in ${\mathfrak X}$.
First suppose $v_{\mathscr P}\ne 0$.
In this case, we know by part (iv) of Lemma \ref{constructX},
that 0 is not in the base set of ${\mathscr P}$.
It follows that $0$ is not in the base set of any face
${\mathscr Q}$ of ${\mathscr P}$.
Hence $v_{\mathscr Q}\ne0$ for every face ${\mathscr Q}$ of ${\mathscr P}$.
We therefore have ${\mathfrak s}{\mathscr P}=0$ and ${\mathfrak s}{\mathscr Q}=0$ for every face ${\mathscr Q}$.
As $\partial{\mathscr P}$ is a sum of faces, the result follows in this case.
Now suppose $v_{\mathscr P}=0$.
Let ${\mathscr Q}$ be a face of ${\mathscr P}$.
If ${\mathscr Q}$ is a front face then we have $v_{\mathscr Q}=0$
and if ${\mathscr Q}$ is a back face we have $v_{\mathscr Q}\ne 0$.
Thus, when calculating ${\mathfrak s}\partial{\mathscr P}$,
we need only take into account the front faces of ${\mathscr P}$.
It follows that we have
$$
{\mathfrak s}\partial{\mathscr P}
=
\sum_{i=1}^r
(-1)^{i}
{\mathfrak s}\left(\mathop{\Diamond}_{j\ne i} [0,a_{{\mathscr P},j}]\right).
$$
By the definition of ${\mathfrak s}$, this gives
\begin{eqnarray*}
{\mathfrak s}\partial{\mathscr P}
&=&
\sum_{i=1}^r
(-1)^{i}
[a_{{\mathscr P},1},\ldots,a_{{\mathscr P},i-1},a_{{\mathscr P},i+1},\ldots,a_{{\mathscr P},r}]\\
&=&
-\partial[a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}]
=
-\partial{\mathfrak s}{\mathscr P}.
\end{eqnarray*}
$\Box$
We define ${\mathscr D}={\mathfrak s}{\mathscr F}$,
where ${\mathscr F}$ is the fundamental domain in ${\mathfrak X}_d$ chosen
in \S4.2.
\begin{lemma}
The element ${\mathscr D}$ satisfies the conditions of
\S3.5.
\end{lemma}
{\it Proof. }
By Lemma \ref{sigma} we have:
$$
\partial[\mu_{m}]{\mathscr D}
=
\partial {\mathfrak s}[\mu_{m}]{\mathscr F}
=
\partial {\mathfrak s}\omega_{X}
=
-{\mathfrak s}\partial \omega_{X}
=
0.
$$
Hence $[\mu_{m}]{\mathscr D}$ is a cycle.
It remains to check
that the image of $[\mu_{m}]{\mathscr D}$ under the maps
$$
H_{d-1}(V_\infty\setminus\{0\})
\stackrel{\partial^{-1}}{\to}
H_{d}(V_\infty,V_\infty\setminus\{0\})
\stackrel{{\rm ord}_0}{\to}
{\mathbb Z}/m,
$$
is 1.
For any cell ${\mathscr P}$ of ${\mathfrak X}$ centred at 0 we shall
use the notation
$$
{\mathfrak t}{\mathscr P}
=
[0,a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}].
$$
We extend ${\mathfrak t}$ by linearity to a map
${\mathfrak X}_{\bullet}\to C_{\bullet}(V_{\infty})$.
With this notation we have:
$$
\partial({\mathfrak t}{\mathscr P})
=
{\mathfrak s}{\mathscr P} - {\mathfrak t}(\partial{\mathscr P}).
$$
Applying this relation to $[\mu_m]{\mathscr F}$,
we obtain by Lemma \ref{sigma}:
$$
\partial([\mu_m]{\mathfrak t}{\mathscr F})
=
[\mu_m]{\mathscr D}.
$$
We are therefore reduced to showing
that ${\rm ord}_{V,0}([\mu_m]{\mathfrak t}{\mathscr F})={\rm ord}_{X,0}([\mu_m]{\mathscr F})$.
This is a little messy, but it can be proved by induction
on the dimension $d$ of $V_{\infty}$.
$\Box$
\subsection{Modified Parallelotopes.}
In this section we shall discuss a method for
constructing more general fundamental functions on $X_{\infty}$.
We have a cell decomposition ${\mathfrak X}$ of $X_{\infty}$ in which the $r$-cells
are of the form
$$
{\mathscr P}
=
\scP{\rm ar}(v_{\mathscr P},a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}).
$$
Recall that ${\mathfrak g}={\rm End}_{\mu_m}(V_{\infty})$.
We shall write $1$ for the identity matrix in ${\mathfrak g}$.
Suppose ${\wp}$ is a path from $0$ to $1$ in ${\mathfrak g}$
and ${\mathscr P}$ is an $r$-cell in the complex ${\mathfrak X}$.
We define an $r$-cell ${\wp}\bowtie {\mathscr P}:I^r\to X_\infty$ as follows:
$$
({\wp}\bowtie{\mathscr P})(x_{1},\ldots,x_{r})
=
{\rm pr}\left(
v_{{\mathscr P}}+\sum_{i=1}^{r} {\wp}(x_{i}) \cdot a_{{\mathscr P},i}
\right).
$$
We extend the operator ``${\wp}\bowtie$'' by linearity to
a map $({\wp}\bowtie) : {\mathfrak X}_{r} \to C_{r}(X_{\infty})$.
Following Kubota, \cite{kubota2} we shall refer to
${\wp}\bowtie{\mathscr P}$ as a \emph{modified parallelotope}.
\begin{lemma}
The maps $({\wp}\bowtie) : {\mathfrak X}_{r} \to C_{r}(X_{\infty})$
commute with the boundary maps and with the
action of $\mu_{m}$.
\end{lemma}
{\it Proof. }
This is a routine verification.
$\Box$
Suppose ${\wp}_{1}$ and ${\wp}_{2}$ are two paths from 0 to 1
in ${\mathfrak g}$ and ${\mathscr H}$ is a homotopy from ${\wp}_{1}$ to ${\wp}_{2}$.
By this,
we shall mean that ${\mathscr H}:I^2\to{\mathfrak g}$ is a continuous map
satisfying, for all $t,x\in I$:
$$
{\mathscr H}(0,x)={\wp}_1(x),\quad
{\mathscr H}(1,x)={\wp}_2(x),\quad
{\mathscr H}(t,0)=0,\quad
{\mathscr H}(t,1)=1.
$$
Let ${\mathscr P}$ be an $r$-cell in the complex ${\mathfrak X}$:
$$
{\mathscr P}
=
\scP{\rm ar}(v_{\mathscr P},a_{{\mathscr P},1},\ldots,a_{{\mathscr P},r}).
$$
We define an $(r+1)$-cell $({\mathscr H}\bowtie{\mathscr P}):I^{r+1}\to X_{\infty}$
by
$$
({\mathscr H}\bowtie{\mathscr P})(t,x_{1},\ldots,x_{r})
=
{\rm pr}\left(
v_{{\mathscr P}}+ \sum_{i=1}^{r} {\mathscr H}(t,x_{i}) \cdot a_{{\mathscr P},i}
\right).
$$
We extend the operators ``${\mathscr H}\bowtie$'' to linear maps
$({\mathscr H}\bowtie):{\mathfrak X}_{r}\to C_{r+1}(X_{\infty})$.
\begin{lemma}
The maps ${\mathscr H}\bowtie$ commute with the action of $\mu_{m}$.
Let ${\mathscr H}$ be a homotopy from ${\wp}_{1}$ to ${\wp}_{2}$.
Then ${\mathscr H}\bowtie$ is a chain homotopy
from ${\wp}_{1}\bowtie$ to ${\wp}_{2}\bowtie$.
In other words, for any ${\mathscr P}\in {\mathfrak X}_{r}$ we have in
$C_{r}(X_{\infty})$:
$$
\partial({\mathscr H}\bowtie{\mathscr P})+{\mathscr H}\bowtie\partial{\mathscr P}
=
{\wp}_{2}\bowtie{\mathscr P}-{\wp}_{1}\bowtie{\mathscr P}.
$$
\end{lemma}
{\it Proof. }
This is proved by calculating $\partial({\mathscr H}\bowtie{\mathscr P})$
directly for a cell ${\mathscr P}$ of ${\mathfrak X}$.
$\Box$
\begin{lemma}
For any path ${\wp}$, the following holds in $H_{d}(X_{\infty})$:
$$
[\mu_m]{\wp}\bowtie{\mathscr F}
=
[\mu_m]{\mathscr F}.
$$
\end{lemma}
(This lemma is implicit in \cite{kubota2}).
{\it Proof. }
This follows immediately from the previous two lemmata.
$\Box$
We define a function
$f^{{\wp}}:X_{\infty}\setminus |{\wp}\bowtie\partial{\mathscr F}|\to{\mathbb Z}/m$ by
$$
f^{{\wp}}(x)
=
{\rm ord}_{x}({\wp}\bowtie{\mathscr F}).
$$
It follows from the previous lemma,
that $f^{{\wp}}$ is fundamental at all $x$, whose
$\mu_{m}$-orbit avoids the base set of ${\wp}\bowtie\partial{\mathscr F}$.
However $f^{{\wp}}$ need not be
the characteristic function of a fundamental domain,
since it may take other values apart from $0$ and $1$.
\paragraph{Equivalence of paths.}
Suppose ${\wp}$ and ${\wp}'$ are two
paths from $0$ to $1$ in ${\mathfrak g}$.
We shall say that ${\wp}$ and ${\wp}'$
are \emph{equivalent} if there is an increasing continuous
bijection $\phi:I\to I$,
such that for all $x\in I$ we have:
$$
{\wp}(\phi(x))
=
{\wp}'(x).
$$
\begin{lemma}
If ${\wp}$ and ${\wp}'$ are equivalent paths
then the corresponding functions $f^{{\wp}}$ and
$f^{{\wp}'}$ have the same domains of definition and are equal.
\end{lemma}
{\it Proof. }
To prove that $f^{{\wp}}$ and $f^{{\wp}'}$
have the same domains of definition,
we need only show that
$|\partial({\wp}\bowtie{\mathscr F})|=|\partial({\wp}'\bowtie{\mathscr F})|$.
This follows by breaking the boundary into faces and
using the fact that $\phi$ is bijective.
The map $\phi$ is homotopic to the identity map.
In other words there is a map $\psi:I\times I\to I$
such that $\psi(1,x)=\phi(x)$, $\psi(0,x)=x$,
$\psi(t,0)=0$ and $\psi(t,1)=1$ for all $x,t\in I$.
Using the function $\psi$ we define a homotopy ${\mathscr H}$
from ${\wp}$ to ${\wp}'$ by ${\mathscr H}(t,x)={\wp}(\psi(t,x))$.
As ${\mathscr H}\bowtie$ is a chain homotopy we have
$$
{\wp}'\bowtie{\mathscr F}-{\wp}\bowtie{\mathscr F}
=
\partial({\mathscr H}\bowtie{\mathscr F})
+
{\mathscr H}\bowtie\partial{\mathscr F}.
$$
Note that for any cell ${\mathscr P}$ of ${\mathfrak X}$ we have
$|{\mathscr H}\bowtie{\mathscr P}|=|{\wp}\bowtie{\mathscr P}|=|{\wp}'\bowtie{\mathscr P}|$.
Therefore for $x\notin |{\wp}\bowtie\partial{\mathscr F}|$ we have:
$$
{\rm ord}_x({\wp}'\bowtie{\mathscr F})-{\rm ord}_x({\wp}\bowtie{\mathscr F})
=
0.
$$
In other words $f^{{\wp}}(x) = f^{{\wp}'}(x)$.
$\Box$
In view of the above lemma,
we may specify piecewise linear paths
simply as sequences of line segments:
$$
{\wp}
=
[0,a_1]+[a_1,a_2]+\cdots+[a_s,1],
$$
without worrying about the precise parametrization.
\subsection{Statement of the results in the generic case.}
\paragraph{The $(d-1)$-chain ${\mathscr G}$.}
For the moment we shall assume that $d\ge 2$.
Thus we are ruling out the case $k={\mathbb Q}$, $n=1$.
By the definition of ${\mathscr F}$, we have
$$
\partial([\mu_m] {\mathscr F})
=
0.
$$
Thus $[\mu_m](\partial{\mathscr F})=0$.
It follows from the exact sequence (\ref{star}),
that there is an element ${\mathscr G}\in {\mathfrak X}_{d-1}$ satisfying
$$
\partial {\mathscr F}
=
(1-[\zeta]){\mathscr G}.
$$
We shall fix such a ${\mathscr G}$.
Note that by Lemma \ref{sigma},
the $(d-2)$-chain ${\mathscr E}$ used in the definition of ${\rm Dec}_\infty$
may be taken to be $-{\mathfrak s}{\mathscr G}$.
\paragraph{The semigroup $\Upsilon_{{\mathfrak f}}$.}
For an ideal ${\mathfrak f}$ of the ring of integers in $k$,
let $G_{{\mathfrak f}}$ be the subgroup of ${\rm GL}_{n}(k)$ consisting
of matrices which are integral at every prime dividing ${\mathfrak f}$
and are congruent to the identity matrix modulo ${\mathfrak f}$.
We shall fix ${\mathfrak f}=(1-\rho)m^{2}$ if $m$ is odd and
${\mathfrak f}=4m^{2}$ if $m$ is even.
Next let $\Upsilon_{{\mathfrak f}}=\Upsilon\cap G_{{\mathfrak f}}$.
\paragraph{The results.}
For $\alpha\in\Upsilon_{{\mathfrak f}}$, we define a path
$$
{\wp}(\alpha)
=
[0,\alpha]+[\alpha,1],
$$
and a homotopy ${\mathscr H}^1_\alpha$ from
${\wp}(1)$ to ${\wp}(\alpha)$:
$$
{\mathscr H}^1_\alpha(t,x)
=
(1-t)x+t{\wp}(\alpha)(x).
$$
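In the scalar case ${\mathfrak g}={\mathbb R}$ the boundary conditions required of ${\mathscr H}^{1}_{\alpha}$ are easy to verify directly. The sketch below works under that simplifying assumption; the piecewise-linear parametrization of $[0,\alpha]+[\alpha,1]$ is one choice within its equivalence class:

```python
def wp(alpha):
    """One parametrization of the path [0, alpha] + [alpha, 1] in g = R."""
    def path(x):
        if x <= 0.5:
            return 2.0 * x * alpha
        return alpha + (2.0 * x - 1.0) * (1.0 - alpha)
    return path

def H(alpha, t, x):
    """H^1_alpha(t, x) = (1 - t)*x + t*wp(alpha)(x)."""
    return (1.0 - t) * x + t * wp(alpha)(x)
```

At $t=0$ this is the straight path $x\mapsto x$ representing ${\wp}(1)$, at $t=1$ it is ${\wp}(\alpha)$, and every intermediate path still runs from $0$ to $1$.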
Finally we define
$$
\tau(\alpha)
=
\zeta^{\textstyle{\{{\mathscr H}^{1}_{\alpha}\bowtie{\mathscr G}|\alpha L\}}}.
$$
(One may interpret this definition as a lifted map
in the sense of Lemma \ref{easy}.)
We shall prove that ${\rm Dec}_\infty^{({\mathscr D})}$ and ${\rm Dec}_{{\mathbb A}_S}^{(f,L)}$ are related on
$\Upsilon_{{\mathfrak f}}$ by the following formula:
$$
{\rm Dec}_{{\mathbb A}_S}^{(f,L)}(\alpha,\beta)
\,{\rm Dec}_\infty^{({\mathscr D})}(\alpha,\beta)
=
\frac{\tau(\alpha)\tau(\beta)}{\tau(\alpha\beta)},
\quad
\alpha,\beta\in\Upsilon_{{\mathfrak f}}.
$$
As a consequence, we will deduce that for totally complex $k$,
the Kubota symbol on ${\rm SL}_n({\mathfrak o},{\mathfrak f})$ is given by
$$
\kappa(\alpha)
=
\frac{\tau(\alpha)}{hw(\alpha)}.
$$
Here $hw$ is the splitting of ${\rm Dec}_\infty$
of \S3.9, Theorem \ref{complexsplit}.
\subsection{A generic formula for the pairing.}
Given paths ${\wp}_{1}$ and ${\wp}_{2}$ from $0$ to $1$ in ${\mathfrak g}$,
we have constructed fundamental functions
$f^{{\wp}_{1}}$ and $f^{{\wp}_{2}}$ in ${\mathcal F}_{X}$.
We now describe a geometric method
for calculating the pairing $\langle f^{{\wp}_{2}}-f^{{\wp}_{1}}|M-L\rangle_{X}$,
where $M\supset L$ is a lattice contained in $V_{m}$.
\begin{proposition}
\label{productformula}
Suppose $M\setminus L$ does not intersect the
base set of $\partial({\mathscr H}\bowtie{\mathscr G})$.
Then we have:
$$
\langle f^{{\wp}_{2}}-f^{{\wp}_{1}}|M-L\rangle_{X}
=
\zeta^{\textstyle{\{{\mathscr H}\bowtie{\mathscr G}|M-L\}}}.
$$
\end{proposition}
The right hand side is a lifted map in the sense
of Lemma \ref{easy}.
{\it Proof. }
As ${\mathscr H}\bowtie$ is a chain homotopy from ${\wp}_{1}\bowtie$ to
${\wp}_{2}\bowtie$, the following holds in $C_{d}(X_{\infty})$:
$$
{\wp}_{2}\bowtie{\mathscr F} - {\wp}_{1}\bowtie{\mathscr F}
=
\partial({\mathscr H}\bowtie{\mathscr F})+{\mathscr H}\bowtie\partial{\mathscr F}.
$$
This implies for $x\in M\setminus L$:
$$
f^{{\wp}_{2}}(x)-f^{{\wp}_{1}}(x)
=
{\rm ord}_{x}({\wp}_{2}\bowtie{\mathscr F} - {\wp}_{1}\bowtie{\mathscr F})
=
{\rm ord}_{x}({\mathscr H}\bowtie\partial{\mathscr F}).
$$
By the definition of ${\mathscr G}$ and Lemma \ref{orient}, we have:
$$
f^{{\wp}_{2}}(x)-f^{{\wp}_{1}}(x)
=
{\rm ord}_{x}({\mathscr H}\bowtie(1-[\zeta]){\mathscr G})
=
{\rm ord}_{x}({\mathscr H}\bowtie{\mathscr G})
-
{\rm ord}_{\zeta^{-1}x}({\mathscr H}\bowtie{\mathscr G}).
$$
The proposition now follows from the
definition (\ref{simplepairing}) in \S3.2
of the pairing $\langle-|-\rangle_{X}$.
$\Box$
\subsection{The order of ${\mathscr H}\bowtie{\mathscr G}$ at $0$.}
The statement of the results in \S4.5 involves
numbers of the form ${\rm ord}_{0}({\mathscr H}\bowtie {\mathscr G})$.
However, this is not as yet defined,
since $0$ is always in the
base set of $\partial({\mathscr H}\bowtie {\mathscr G})$.
To avoid this problem, for any $r$-cell ${\mathscr P}$
in ${\mathfrak X}$ containing $0$
we cut ${\mathscr H}\bowtie{\mathscr P}$ into a singular
part ${\mathscr H}\bowtie{\mathscr P}^{0}$ and a non-singular part ${\mathscr H}\bowtie{\mathscr P}^{+}$.
Let $U$ be a small neighbourhood of $0$ in $I^{d-1}$.
We define ${\mathscr H}\bowtie{\mathscr P}^{0}$ to be the restriction of
${\mathscr H}\bowtie{\mathscr P}$ to $I\times U$ and we define
${\mathscr H}\bowtie{\mathscr P}^{+}$ to be the restriction of
${\mathscr H}\bowtie{\mathscr P}$ to $I\times (I^{d-1}\setminus U)$.
We define the order of ${\mathscr H}\bowtie{\mathscr P}$ at 0 to be the limit as $U$
gets smaller of the order of ${\mathscr H}\bowtie{\mathscr P}^{+}$ at 0.
To make things a little more precise we define for $\epsilon>0$
an $r$-cell ${\mathscr P}^\epsilon$ in $X_{\infty}$.
If $v_{{\mathscr P}}=0$ then we define ${\mathscr P}^{\epsilon}$
to be the restriction of ${\mathscr P}:I^r\to X_\infty$
to the set
$$
\left\{(x_1,\ldots,x_r)\in I^r: \sum x_i \le \epsilon\right\}.
$$
If $v_{{\mathscr P}}\ne 0$ then we define ${\mathscr P}^{\epsilon}=0$.
\begin{lemma}
\label{Pepsilon}
For $\epsilon>0$ sufficiently small we have
$$
\partial({\mathscr P}^\epsilon)-(\partial{\mathscr P})^\epsilon
=
{\rm pr}( \epsilon\cdot{\mathfrak s}{\mathscr P} ).
$$
\end{lemma}
{\it Proof. }
We shall suppose that ${\mathscr P}$ has 0 as its origin,
since otherwise both sides of the formula are zero.
Under this assumption, we have:
$$
{\mathscr P}
=
\mathop{\Diamond}_{i=1}^r
[0,a_{{\mathscr P},i}],
\quad
{\mathscr P}^{\epsilon}
=
[0, \epsilon a_{{\mathscr P},1},\epsilon a_{{\mathscr P},2},\ldots,\epsilon a_{{\mathscr P},r}].
$$
Therefore
\begin{eqnarray*}
\partial({\mathscr P}^{\epsilon})
&=&
[\epsilon a_{{\mathscr P},1},\ldots,\epsilon a_{{\mathscr P},r}]
+
\sum_{i=1}^r
(-1)^i
[0, \epsilon a_{{\mathscr P},1},\ldots,
\epsilon a_{{\mathscr P},i-1},\epsilon a_{{\mathscr P},i+1},\ldots,
\epsilon a_{{\mathscr P},r}]\\
&=&
[\epsilon a_{{\mathscr P},1},\ldots,\epsilon a_{{\mathscr P},r}]
+
(\partial{\mathscr P})^\epsilon.
\end{eqnarray*}
The result follows.
$\Box$
\subsection{Dependence of ${\rm ord}_{0}({\mathscr H}\bowtie{\mathscr G})$ on ${\mathscr H}$.}
Let ${\wp}_{1}$ and ${\wp}_{2}$ be two paths from $0$ to
$1$ in ${\mathfrak g}$.
Suppose we have two homotopies ${\mathscr H}$ and
${\mathscr I}$ from ${\wp}_{1}$ to ${\wp}_{2}$.
In this section, we investigate the relation between
${\rm ord}_{0,X}({\mathscr H}\bowtie{\mathscr G})$ and ${\rm ord}_{0,X}({\mathscr I}\bowtie{\mathscr G})$.
We begin by choosing a homotopy ${\mathscr U}$ from ${\mathscr H}$ to ${\mathscr I}$.
Thus, ${\mathscr U}:I^{3}\to {\mathfrak g}$ satisfies the following conditions:
$$
{\mathscr U}(0,t,x)={\mathscr H}(t,x),\quad
{\mathscr U}(1,t,x)={\mathscr I}(t,x),
$$
$$
{\mathscr U}(u,0,x)={\wp}_{1}(x),\quad
{\mathscr U}(u,1,x)={\wp}_{2}(x),
$$
$$
{\mathscr U}(u,t,0)=0,\quad
{\mathscr U}(u,t,1)=1.
$$
We shall also suppose that there is some
$\epsilon>0$ such that for $x<1$ we have
$$
{\mathscr U}(u,t,\epsilon x)
=
x{\mathscr U}(u,t,\epsilon).
$$
We shall define, under this assumption, maps
$({\mathfrak s}{\mathscr U}):I^2\to{\mathfrak g}$, $({\mathfrak s}{\mathscr H}):I\to{\mathfrak g}$
and $({\mathfrak s}{\mathscr I}):I\to{\mathfrak g}$ by
$$
({\mathfrak s}{\mathscr U})(u,t)
=
\epsilon^{-1}{\mathscr U}(u,t,\epsilon)
=
\frac{\partial}{\partial x} {\mathscr U}(u,t,x),
$$
$$
({\mathfrak s}{\mathscr H})(t)
=
\epsilon^{-1}{\mathscr H}(t,\epsilon),\quad
({\mathfrak s}{\mathscr I})(t)
=
\epsilon^{-1}{\mathscr I}(t,\epsilon).
$$
We define maps $({\mathscr U}\bowtie):{\mathfrak X}_{r}\to C_{r+2}(X_{\infty})$ by
$$
({\mathscr U}\bowtie{\mathscr P})(u,t,x_{1},\ldots,x_{r})
=
{\rm pr}\left(
v_{{\mathscr P}}
+
\sum_{i=1}^{r}{\mathscr U}(u,t,x_{i})\cdot a_{{\mathscr P},i}\right).
$$
The next lemma says that ${\mathscr U}\bowtie$ is in some sense
a chain homotopy between ${\mathscr H}\bowtie$ and ${\mathscr I}\bowtie$
(although these are themselves chain homotopies, not
chain maps).
\begin{lemma}
\label{Ubowtie}
The map ``${\mathscr U}\bowtie$'' commutes with the action of $\mu_{m}$.
For every ${\mathscr P}\in{\mathfrak X}_{r}$
the following holds in $C_{r+1}(X_{\infty})$:
$$
{\mathscr I}\bowtie{\mathscr P} - {\mathscr H}\bowtie{\mathscr P}
=
\partial({\mathscr U}\bowtie{\mathscr P})
-
{\mathscr U}\bowtie\partial{\mathscr P}.
$$
\end{lemma}
{\it Proof. }
By definition of the boundary map we have
$$
\partial({\mathscr U}\bowtie{\mathscr P})
=
({\mathscr I}\bowtie{\mathscr P}-{\mathscr H}\bowtie{\mathscr P})
-
\hbox{degenerate cells}
+
{\mathscr U}\bowtie(\partial{\mathscr P}).
$$
$\Box$
The crucial point in relating the two cocycles
is the following formula.
\begin{proposition}
\label{XV}
Suppose that for every $(d-1)$-cell ${\mathscr P}$ in ${\mathfrak X}$,
${\rm ord}_{0,X}({\mathscr H}\bowtie{\mathscr P}^+)$
and ${\rm ord}_{0,X}({\mathscr I}\bowtie{\mathscr P}^+)$
are defined and for every $(d-2)$-cell ${\mathscr P}$ in ${\mathfrak X}$,
${\rm ord}_{0,X}({\mathscr U}\bowtie{\mathscr P}^+)$ is defined.
Then so is ${\rm ord}_{V,0}({\mathfrak s}{\mathscr U}\cdot{\mathfrak s}{\mathscr G})$
and we have:
$$
{\rm ord}_{X,0}({\mathscr I}\bowtie{\mathscr G}^{+})-{\rm ord}_{X,0}({\mathscr H}\bowtie{\mathscr G}^{+})
=
{\rm ord}_{V,0}({\mathfrak s}{\mathscr U}\cdot{\mathfrak s}{\mathscr G}).
$$
\end{proposition}
{\it Proof. }
Applying Lemma \ref{Ubowtie} to ${\mathscr G}$ we obtain
$$
{\mathscr I}\bowtie{\mathscr G} - {\mathscr H}\bowtie{\mathscr G}
=
\partial({\mathscr U}\bowtie{\mathscr G})
-
{\mathscr U}\bowtie\partial{\mathscr G}.
$$
Recall that ${\mathscr G}$ is chosen to satisfy the relation
$(1-[\zeta]){\mathscr G}=\partial{\mathscr F}$.
This implies $(1-[\zeta])\partial{\mathscr G}=0$.
By the exact sequence (\ref{star}),
there is a $(d-2)$-chain ${\mathscr Q}$ in ${\mathfrak X}$
such that
$$
\partial{\mathscr G}=[\mu_{m}]{\mathscr Q}.
$$
We therefore have
$$
({\mathscr I}-{\mathscr H})\bowtie{\mathscr G}
=
\partial({\mathscr U}\bowtie{\mathscr G})
-
[\mu_{m}]({\mathscr U}\bowtie{\mathscr Q}).
$$
We cannot define the order at $0$ of either side of the above equation.
We therefore break ${\mathscr G}$ and ${\mathscr Q}$ into their singular
and non-singular parts.
This gives:
$$
({\mathscr I}-{\mathscr H})\bowtie{\mathscr G}^{+}
=
\partial({\mathscr U}\bowtie{\mathscr G})
-
[\mu_{m}]{\mathscr U}\bowtie{\mathscr Q}^{+}
-
[\mu_{m}]{\mathscr U}\bowtie{\mathscr Q}^{\epsilon}
-
({\mathscr I}-{\mathscr H})\bowtie{\mathscr G}^{\epsilon}.
$$
Note that ${\rm ord}_0({\mathscr U}\bowtie{\mathscr Q}^{+})$ is defined,
so we have by Lemma \ref{orient} ${\rm ord}_0([\mu_m]{\mathscr U}\bowtie{\mathscr Q}^{+})=0$.
Hence the following holds in ${\mathbb Z}/m$:
$$
{\rm ord}_{0,X}(({\mathscr I}-{\mathscr H})\bowtie{\mathscr G}^{+})
=
-{\rm ord}_{0,X}(({\mathscr U}\bowtie\partial{\mathscr G})^{\epsilon}
+({\mathscr I}-{\mathscr H})\bowtie{\mathscr G}^{\epsilon}).
$$
As every cell on the right hand side is contained
in a small neighbourhood of $0$,
we may replace ${\rm ord}_{0,X}$ by ${\rm ord}_{0,V}$.
Now choose an $r$-cell ${\mathscr P}$ in ${\mathfrak X}$.
We have for $x_i\le\epsilon$:
$$
({\mathscr U}\bowtie{\mathscr P})(u,t,x_1,\ldots,x_r)
=
\sum_{i=1}^{r} {\mathscr U}(u,t,x_i)a_{{\mathscr P},i}
=
{\mathfrak s}{\mathscr U}(u,t) \sum_{i=1}^{r} x_i a_{{\mathscr P},i}.
$$
Thus
$$
({\mathscr U}\bowtie{\mathscr P})^\epsilon={\mathfrak s}{\mathscr U}\cdot({\mathscr P}^\epsilon),
$$
and similarly,
$$
({\mathscr H}\bowtie{\mathscr P})^\epsilon
=
{\mathfrak s}{\mathscr H}\cdot({\mathscr P}^\epsilon),
\quad
({\mathscr I}\bowtie{\mathscr P})^\epsilon
=
{\mathfrak s}{\mathscr I}\cdot({\mathscr P}^\epsilon).
$$
We therefore have
$$
{\rm ord}_{0,X}(({\mathscr I}-{\mathscr H})\bowtie{\mathscr G}^{+})
=
-{\rm ord}_{0,V}({\mathfrak s}{\mathscr U}\cdot(\partial{\mathscr G})^{\epsilon}
+({\mathfrak s}{\mathscr I}-{\mathfrak s}{\mathscr H})\cdot{\mathscr G}^{\epsilon}).
$$
On the other hand, modulo degenerate cells, we have:
$$
\partial{\mathfrak s}{\mathscr U}={\mathfrak s}{\mathscr I}-{\mathfrak s}{\mathscr H}.
$$
This implies
$$
{\rm ord}_{0,X}(({\mathscr I}-{\mathscr H})\bowtie{\mathscr G}^{+})
=
-{\rm ord}_{0,V}({\mathfrak s}{\mathscr U}\cdot(\partial{\mathscr G})^{\epsilon}
+(\partial{\mathfrak s}{\mathscr U})\cdot({\mathscr G}^{\epsilon})).
$$
Adding $\partial({\mathfrak s}{\mathscr U}\cdot({\mathscr G}^{\epsilon}))$ we obtain:
$$
{\rm ord}_{0,X}(({\mathscr I}-{\mathscr H})\bowtie{\mathscr G}^{+})
=
-{\rm ord}_{0,V}(
{\mathfrak s}{\mathscr U}\cdot (\partial{\mathscr G})^{\epsilon}
-{\mathfrak s}{\mathscr U}\cdot \partial({\mathscr G}^{\epsilon}) ).
$$
Now by Lemma \ref{Pepsilon} we have:
$$
{\rm ord}_{0,X}(({\mathscr I}-{\mathscr H})\bowtie{\mathscr G}^{+})
=
{\rm ord}_{0,V}(\epsilon{\mathfrak s}{\mathscr U}\cdot{\mathfrak s}{\mathscr G}).
$$
The right hand side is, however, independent of $\epsilon$,
so the result follows.
$\Box$
\begin{corollary}
\label{XVcor}
Suppose there is an $\epsilon>0$ such that for all $0<x<\epsilon$
we have ${\mathscr H}(t,x)={\mathscr I}(t,x)$.
Then ${\rm ord}_{0}({\mathscr H}\bowtie{\mathscr G})={\rm ord}_{0}({\mathscr I}\bowtie{\mathscr G})$.
(By this we understand that if both sides are defined then they are equal).
\end{corollary}
{\it Proof. }
In this case we can choose ${\mathscr U}$
so that ${\mathfrak s}{\mathscr U}$ is degenerate.
$\Box$
\section{Deformation of fundamental functions.}
Given a path ${\wp}$,
we have constructed a function $f^{{\wp}}$ on
$X_\infty \setminus |\partial({\wp}\bowtie{\mathscr P}) |$,
which is fundamental on all $\mu_{m}$-orbits
which do not intersect $|\partial({\wp}\bowtie{\mathscr P})|$.
In this chapter we describe a method for extending $f^{\wp}$
to all but a finite number of points of $X_\infty$.
Recall that we have fixed an ordered basis $\{b_1,\ldots,b_{r}\}$
for ${\mathfrak g}$ as a vector space over ${\mathbb R}$.
The general idea is that if a point $x\in X_{\infty}$
is on the boundary of ${\wp}\bowtie{\mathscr P}$,
then we move the path ${\wp}$ a little in the direction of $b_{1}$;
if $x$ is still on the boundary then we move the path in the
direction $b_{2}$ and so on.
Thus, we define $f^{\wp}$ as a limit of the
form $\mathop{{\bf lim}}_{\epsilon\to 0^+}$ in the notation
of \S3.5.
We begin with some general results on the existence
of such limits, and then prove the specific results
which we need.
\subsection{Existence of certain limits.}
We shall call a subset $Z\subset{\mathbb R}^{a}$
a \emph{small} subset of ${\mathbb R}^{a}$, if there is
a $b\ge 0$ and a Zariski-closed subset $Z^{\prime}\subset {\mathbb R}^{a+b}$
of codimension $\ge b+1$,
such that $Z$ is contained in
the archimedean closure of the projection of $Z^{\prime}$
in ${\mathbb R}^{a}$.
If $Z_{1},\ldots,Z_{c}$ are small subsets of
${\mathbb R}^{a}$ then $Z_{1}\cup\ldots\cup Z_{c}$ is a
small subset of ${\mathbb R}^{a}$.
Note that if $Z$ is small then it has codimension at least 1
in ${\mathbb R}^{a}$.
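As an illustration (an example we add, not part of the original argument), take $a=2$ and $b=1$: the algebraic curve
$$
Z^{\prime}=\{(x,y,t)\in{\mathbb R}^{3} : tx=1,\ ty=1\}
$$
has codimension $2=b+1$ in ${\mathbb R}^{3}$, and its projection to ${\mathbb R}^{2}$ is the punctured diagonal $\{(s,s):s\ne 0\}$. The archimedean closure of this projection is the full diagonal, which is therefore a small subset of ${\mathbb R}^{2}$; note that the closure may contain points which are not in the projection itself.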
\begin{lemma}
\label{zariskilimit}
Let $Z$ be a small subset of ${\mathbb R}^{a}$.
Then for any locally constant function
$\psi:{\mathbb R}^{a}\setminus Z\to {\mathbb Z}$
the following (ordered) limit exists:
$$
\lim_{\epsilon_{1}\to 0^{+}}
\lim_{\epsilon_{2}\to 0^{+}}
\ldots
\lim_{\epsilon_{a}\to 0^{+}}
\psi(\epsilon_{1},\ldots,\epsilon_{a}).
$$
\end{lemma}
{\it Proof. }
We shall prove this by induction on $a$.
When $a=1$ the set $Z^{\prime}\subset {\mathbb R}^{b+1}$ has codimension $\ge b+1$,
and is therefore finite.
It follows that $Z$ is finite,
and it is clear that the limit exists in this case.
Now suppose $a>1$.
We shall decompose the Zariski-closed set $Z^{\prime}\subset {\mathbb R}^{a+b}$
into irreducible components:
$$
Z^{\prime} = Z^{\prime}_{1}\cup\ldots\cup Z^{\prime}_{r}.
$$
We shall write $Z_{i}$ for the archimedean closure of the
projection of $Z^{\prime}_{i}$ in ${\mathbb R}^{a}$.
Let $\{b_{1},\ldots,b_{a+b}\}$ be the standard basis of ${\mathbb R}^{a+b}$.
Write $H^{\prime}$ for the hyperplane in ${\mathbb R}^{a+b}$ spanned
by $\{b_{1},\ldots,b_{a-1},\ b_{a+1},\ldots,b_{a+b}\}$
and let $H={\rm Span}\{b_{1},\ldots,b_{a-1}\}$
be the projection of $H^{\prime}$ in ${\mathbb R}^{a}$.
For each component $Z^{\prime}_{i}$ of $Z^{\prime}$,
we define a subset $W^{\prime}_{i}\subset H^{\prime}$ by
$$
W^{\prime}_{i}
=
\left\{
\begin{array}{ll}
Z^{\prime}_{i}\cap H^{\prime} & \hbox{if $Z^{\prime}_{i}\not\subset H^{\prime}$,}\\
\emptyset & \hbox{if $Z^{\prime}_{i}\subset H^{\prime}$.}
\end{array}
\right.
$$
Thus $W^{\prime}:=W^{\prime}_{1}\cup\ldots \cup W^{\prime}_{r}$
is Zariski-closed in $H^{\prime}$
and has codimension $\ge b+1$ in $H^{\prime}$.
Let $W$ be the archimedean closure of the
projection of $W^{\prime}$ in $H$.
Thus $W$ is a small subset of $H$.
By the inductive hypothesis,
it is sufficient to show that the limit
$$
\Psi(v)
=
\lim_{\epsilon_{a}\to 0^{+}}\psi(v+\epsilon_{a} b_{a})
$$
exists and is locally constant for
$v\in H\setminus W$.
Let $v\in H\setminus W$.
Choose a compact, connected, archimedean neighbourhood $U$ of $v$
in $H$ such that $U\cap W=\emptyset$.
We shall prove that the limit $\Psi$ exists on $U$ and
is constant there.
Let $w\in U$.
For any $i$ we either have $Z^{\prime}_{i}\subset H^{\prime}$
or $w\notin Z_{i}$.
In either case there is a $\delta(w,i)>0$ sufficiently small so that
we have $w+\epsilon_{a}b_{a}\notin Z_{i}$
whenever $0<\epsilon_{a}<\delta(w,i)$.
The $\delta(w,i)$ may be chosen to be continuous in $w$.
As $U$ is compact the $\delta(w,i)$ are bounded
below by some positive $\delta$.
This means that the subset $\tilde U=U\times (0,\delta)b_{a}$ of ${\mathbb R}^{a}$
does not intersect $Z$.
The function $\psi$ is therefore defined on $\tilde U$ and since
$\tilde U$ is connected, $\psi$ is equal to a constant $c$ on $\tilde U$.
It follows that $\Psi(w)=c$ for all $w\in U$.
$\Box$
Note that if the order of the limits is changed in the above Lemma,
then the value of the limit may change.
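A minimal example, not taken from the argument above: take $a=2$, let $Z=\{(x,y):x=y\}$ (small, with $b=0$), and let $\psi(x,y)=1$ if $x>y$ and $\psi(x,y)=0$ if $x<y$. Then
$$
\lim_{\epsilon_{1}\to 0^{+}}\lim_{\epsilon_{2}\to 0^{+}}\psi(\epsilon_{1},\epsilon_{2})=1,
\qquad
\lim_{\epsilon_{2}\to 0^{+}}\lim_{\epsilon_{1}\to 0^{+}}\psi(\epsilon_{1},\epsilon_{2})=0,
$$
since in the first case the inner limit is taken with $\epsilon_{1}>0$ fixed, so eventually $\epsilon_{2}<\epsilon_{1}$.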
\subsection{Deformations of cells.}
Given a $d$-chain ${\mathscr T}\in C_d(V_\infty)$
we will describe a method for defining ${\rm ord}_v({\mathscr T})$ for
$v\in |\partial{\mathscr T}|$.
The general idea is to replace ${\mathscr T}$ by a map
${\mathscr T}:{\mathbb R}^b \to C_d(V_\infty)$ so that our original ${\mathscr T}$ is ${\mathscr T}(0)$.
We then have a function $\psi$, defined on part of ${\mathbb R}^b$ by
$$
\psi(\epsilon)
=
{\rm ord}_v{\mathscr T}(\epsilon).
$$
If the function ${\mathscr T}$ is sufficiently nice then
we may use Lemma \ref{zariskilimit} to define
$$
{\rm ord}_v({\mathscr T})
=
\lim_{\epsilon_{1}\to 0^{+}}
\lim_{\epsilon_{2}\to 0^{+}}
\ldots
\lim_{\epsilon_{b}\to 0^{+}}
\psi(\epsilon_{1},\ldots,\epsilon_{b}).
$$
In this section,
we investigate the conditions which we must impose on ${\mathscr T}$
in order for Lemma \ref{zariskilimit} to be applicable.
\subsubsection{Deformable $d-1$-cells.}
Let ${\mathscr T}:{\mathbb R}^{b}\times {\mathbb R}^{d-1}\to V_{\infty}$
be an algebraic function satisfying the following conditions:
\begin{itemize}
\item[(A)]
For every $\underline{x}\in{\mathbb R}^{d-1}$,
the map ${\mathbb R}^{b}\to V_{\infty}$ defined
by $\epsilon \mapsto {\mathscr T}(\epsilon,\underline{x})$
is affine.
We shall write ${\rm rk}(\underline{x})$ for the rank of the
linear part of this map.
\item[(B)]
For any $i=1,\ldots,d$,
the set of $\underline{x}\in{\mathbb R}^{d-1}$ such that
${\rm rk}(\underline{x})= i$ has dimension $\le i-1$.
\end{itemize}
We shall call such a ${\mathscr T}$ a \emph{deformable $d-1$-cell}.
If $\epsilon\mapsto{\mathscr T}(\epsilon,\underline{x})$ is constant for a certain $\underline{x}$ then the
value of the constant will be called a \emph{vertex} of ${\mathscr T}$.
For any $\epsilon\in{\mathbb R}^{b}$ we shall
consider the $d-1$-cell ${\mathscr T}(\epsilon):I^{d-1}\to V_{\infty}$
defined by:
$$
\Big({\mathscr T}(\epsilon)\Big)(\underline{x})
:=
{\mathscr T}(\epsilon,\underline{x}).
$$
$$
\begin{lemma}
Let ${\mathscr T}$ be a deformable $d-1$-cell.
Then, for any $v\in V_{\infty}$ which is not a vertex of ${\mathscr T}$,
the set $\{ \epsilon\in{\mathbb R}^{b} : v\in |{\mathscr T}(\epsilon)|\}$
is a small subset of ${\mathbb R}^{b}$.
\end{lemma}
{\it Proof. }
The above set is contained in the
projection to ${\mathbb R}^{b}$ of the following set:
$$
Z^{\prime}
=
\{ (\epsilon,\underline{x})\in {\mathbb R}^{b}\times {\mathbb R}^{d-1}:
{\mathscr T}(\epsilon,\underline{x})=v\}.
$$
As $Z^{\prime}$ is algebraic, it remains only to show that
$Z^{\prime}$ has codimension at least $d$ in ${\mathbb R}^{b}\times {\mathbb R}^{d-1}$.
To show this we shall write $Z^{\prime}$ as a finite union of subsets
and bound the dimension of each of the subsets.
Define
$$
Z^{\prime}_{i}
=
\{(\epsilon,\underline{x})\in {\mathbb R}^{b}\times {\mathbb R}^{d-1}:
{\mathscr T}(\epsilon,\underline{x})=v, {\rm rk}(\underline{x})=i\}
\quad
(i=0,1,2,\ldots,d).
$$
The set $Z^{\prime}$ is the union of the subsets $Z^{\prime}_{i}$.
As we are assuming that $v$ is not a vertex of ${\mathscr T}$,
it follows that $Z^{\prime}_{0}$ is empty.
For $i>0$,
our assumption (B) on ${\mathscr T}$ implies that the projection
of $Z^{\prime}_i$ in ${\mathbb R}^{d-1}$ has dimension $\le i-1$.
However each fibre of this projection is an affine subspace
with codimension $i$ in ${\mathbb R}^{b}$.
Therefore the dimension of $Z^{\prime}_{i}$ is $\le b-1$.
This proves the lemma.
$\Box$
\subsubsection{Deformable $d$-cells}
Now suppose we have a continuous map
${\mathscr T}:{\mathbb R}^b\times I^d \to V_\infty$.
Thus, for each $\epsilon\in {\mathbb R}^b$ we have a $d$-cell
${\mathscr T}(\epsilon)$.
We shall call ${\mathscr T}$ a \emph{deformable $d$-cell}
if the faces of ${\mathscr T}$ are deformable $d-1$-cells.
By a \emph{vertex of ${\mathscr T}$},
we shall mean a vertex of a face of ${\mathscr T}$.
\begin{lemma}
\label{pipedlim}
Let ${\mathscr T}$ be a deformable $d$-cell.
Then for any $v\in V_{\infty}$ which is not a vertex
of ${\mathscr T}$, the following limit exists:
$$
\lim_{\epsilon_{1}\to 0^{+}}
\ldots
\lim_{\epsilon_{b}\to 0^{+}}
{\rm ord}_{v}({\mathscr T}(\epsilon)).
$$
If $v\notin |\partial {\mathscr T}(0)|$ then the limit
is equal to ${\rm ord}_{v}({\mathscr T}(0))$.
\end{lemma}
{\it Proof. }
We first prove the existence of the limit.
Consider the set
$$
Z
=
\{\epsilon\in{\mathbb R}^{b}:v\in|\partial{\mathscr T}(\epsilon)|\}.
$$
Since ${\mathscr T}(\epsilon)$ tends uniformly to ${\mathscr T}(0)$,
the set $Z$ is archimedeanly closed in ${\mathbb R}^{b}$.
On ${\mathbb R}^{b}\setminus Z$ we have a function
$$
\psi(\epsilon)= {\rm ord}_{v}({\mathscr T}(\epsilon)).
$$
To prove the existence of the limit,
it is sufficient, by Lemma \ref{zariskilimit},
to show that $Z$ is small and $\psi$ is locally constant
on ${\mathbb R}^{b}\setminus Z$.
It follows from the previous lemma that $Z$ is small.
We shall show that $\psi$ is locally constant on
${\mathbb R}^{b}\setminus Z$.
Choose $\epsilon\notin Z$.
As $Z$ is closed
there is a path connected neighbourhood $U$ of $\epsilon$
in ${\mathbb R}^{b}$ which does not intersect $Z$.
We shall show that $\psi$ is constant on $U$.
Let $\epsilon^{\prime}\in U$
and choose a path $p$ from $\epsilon$ to $\epsilon^{\prime}$ in $U$.
Now consider the $(d+1)$-chain
$$
{\mathscr V}(t,\underline{x})
=
{\mathscr T}(p(t),\underline{x}).
$$
The boundary of ${\mathscr V}$ is
$$
\partial {\mathscr V}
=
{\mathscr T}(\epsilon^{\prime})-{\mathscr T}(\epsilon)
-
\sum_{{\mathscr U}} {\mathscr V}_{{\mathscr U}},
$$
where ${\mathscr V}_{{\mathscr U}}(t,\underline{x})={\mathscr U}(p(t),\underline{x})$
and ${\mathscr U}$ runs over the faces of ${\mathscr T}$.
As $p(t)\notin Z$ we have ${\mathscr U}(p(t),\underline{x})\ne v$.
Therefore $v\notin |{\mathscr V}_{{\mathscr U}}|$ so in
$H_{d}(V_{\infty},V_{\infty}\setminus\{v\})$ we have:
$$
{\mathscr T}(\epsilon)={\mathscr T}(\epsilon^{\prime}).
$$
This implies $\psi(\epsilon)=\psi(\epsilon^{\prime})$,
so $\psi$ is locally constant
and we have proved the existence of the limit.
Now suppose that $v\notin |\partial {\mathscr T}(0)|$.
This means that $0\notin Z$.
As $Z$ is archimedeanly closed,
there is a neighbourhood
of $0$ on which $\psi$ is constant.
We therefore have as required:
$$
\lim_{\epsilon_{1}\to 0^{+}}
\ldots
\lim_{\epsilon_{b}\to 0^{+}}
\psi(\epsilon_{1},\ldots,\epsilon_{b})
=
\psi(0).
$$
$\Box$
\subsubsection{Deformations with respect to ${\mathfrak g}$.}
We shall now specialize the above result to the case which we
require.
Recall that ${\mathfrak g}={\rm End}_{\mu_m}(V_\infty)$.
Thus we have ${\mathfrak g}=M_s({\mathbb Q}(\zeta)\otimes_{\mathbb Q}{\mathbb R})$.
We shall write $S_\infty({\mathbb Q}(\zeta))$ for the set of
archimedean places of ${\mathbb Q}(\zeta)$.
There is a decomposition:
$$
{\mathfrak g}
=
\bigoplus_{v\in S_\infty({\mathbb Q}(\zeta))}
M_s({\mathbb Q}(\zeta)_v).
$$
As a ${\mathfrak g}$-module, $V_\infty$ decomposes as a sum of
simple modules:
$$
V_\infty
=
\bigoplus_{v\in S_\infty({\mathbb Q}(\zeta))}
{\mathbb Q}(\zeta)_v^s.
$$
For any subset $T\subseteq S_\infty({\mathbb Q}(\zeta))$
we shall write $V_T$ for the sum of the ${\mathbb Q}(\zeta)_v^s$
for $v\in T$.
Every ${\mathfrak g}$-submodule of $V_\infty$ is one of the submodules $V_T$.
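For illustration (an example we add, using only the notation above): if $m=5$ then ${\mathbb Q}(\zeta)$ has degree $4$ over ${\mathbb Q}$ and exactly two archimedean (complex) places, so
$$
{\mathfrak g}\cong M_s({\mathbb C})\oplus M_s({\mathbb C}),
\qquad
V_\infty\cong {\mathbb C}^s\oplus{\mathbb C}^s,
$$
and the ${\mathfrak g}$-submodules of $V_\infty$ are $0$, ${\mathbb C}^s\oplus 0$, $0\oplus{\mathbb C}^s$ and $V_\infty$, corresponding to the four subsets $T\subseteq S_\infty({\mathbb Q}(\zeta))$.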
We consider real-algebraic functions ${\mathscr T}:{\mathfrak g}^a\times I^{d-e}\to V_{\infty}$,
which satisfy the following conditions:
\begin{itemize}
\item[(C)]
The map ${\mathscr T}$ is of the form:
$$
{\mathscr T}(\epsilon_1,\epsilon_2,\ldots,\epsilon_a,\underline{x})
=
{\mathscr T}(0,\underline{x})+\sum_{i=1}^{a}\alpha_i\epsilon_i\phi_i(\underline{x}),
$$
where the functions $\phi_i:{\mathbb R}^{d-e}\to V_{\infty}$ are algebraic
and $\alpha_{1},\ldots,\alpha_{a}\in{\rm GL}_n(k_\infty)$.
\item[(D)]
For every non-empty subset $T\subseteq S_\infty({\mathbb Q}(\zeta))$,
the set
$$
\left\{
\underline{x}\in {\mathbb R}^{d-e}:
\phi_1(\underline{x}),\ldots,\phi_a(\underline{x})\in V_{T}
\right\},
$$
has dimension $\le \max\{\dim_{{\mathbb R}}(V_{T})-e,0\}$.
\end{itemize}
Such a function ${\mathscr T}$ will be called a ${\mathfrak g}$-deformable
$d-e$-cell.
\begin{lemma}
Let ${\mathscr T}$ be a ${\mathfrak g}$-deformable $d-1$-cell.
Then ${\mathscr T}$ is a deformable $d-1$-cell when
regarded as a function ${\mathbb R}^{ar}\times I^{d-1}\to V_\infty$.
The vertices of ${\mathscr T}$ are the points ${\mathscr T}(0,\underline{x})$, where $\underline{x}$
is a solution to $\phi_1(\underline{x})=\ldots=\phi_a(\underline{x})=0$.
\end{lemma}
{\it Proof. }
The statement on the vertices is clear and
condition (A) follows immediately from condition (C).
It remains to verify condition (B).
Let $Z_{i}=\{\underline{x}\in{\mathbb R}^{d-1} : {\rm rk}(\underline{x})=i\}$.
We must show that for $i=1,\ldots,d$,
the set $Z_{i}$ has dimension $\le i-1$.
For any $\underline{x}\in {\mathbb R}^{d-1}$,
the image of the affine map $\epsilon\mapsto{\mathscr T}(\epsilon,\underline{x})$
is a translation of a ${\mathfrak g}$-submodule of $V_{\infty}$.
The ${\mathfrak g}$-submodules of $V_{\infty}$ are of the form $V_{T}$
for subsets $T$ of $S_{\infty}({\mathbb Q}(\zeta))$.
If $V_{T}$ is the submodule corresponding to $\underline{x}$,
then we clearly have ${\rm rk}(\underline{x})=\dim(V_{T})$.
Given $T\subseteq S_{\infty}({\mathbb Q}(\zeta))$,
let $Z_{T}$ denote the set of $\underline{x}$,
for which the corresponding submodule is $V_{T}$.
With this notation we have:
$$
Z_{i}
=
\bigcup_{T\ :\ \dim V_{T}=i} Z_{T}.
$$
As this is a finite union,
it is sufficient to show that for any non-empty $T$,
the set $Z_{T}$ has dimension $\le \dim(V_{T})-1$.
For a particular $\underline{x}$,
the corresponding submodule $V_{T}$ is the
${\mathfrak g}$-span of the vectors $\phi_{1}(\underline{x}),\ldots,\phi_{a}(\underline{x})$.
Hence
$$
Z_{T}
\subseteq
\{\underline{x}\in{\mathbb R}^{d-1} :\phi_1(\underline{x}),\ldots,\phi_a(\underline{x})\in V_{T}\}.
$$
The result now follows from condition (D).
$\Box$
We have fixed an ordered basis
$\{b_{1},\ldots,b_{r}\}$ for ${\mathfrak g}$
as a vector space over ${\mathbb R}$.
Let $\epsilon=\epsilon_{1}b_{1}+\ldots+\epsilon_{r}b_{r}\in {\mathfrak g}$.
Recall the abbreviation:
$$
``\mathop{{\bf lim}}_{\epsilon\to 0^{+}}"
:=
\lim_{\epsilon_{1}\to 0^{+}}
\lim_{\epsilon_{2}\to 0^{+}}
\ldots
\lim_{\epsilon_{r}\to 0^{+}}.
$$
Consider a real-algebraic map
${\mathscr T}:{\mathfrak g}^a\times I^{d}\to V_{\infty}$.
If the faces of ${\mathscr T}$
satisfy (C) and (D) above then ${\mathscr T}$ is a
deformable $d$-cell.
Hence by Lemma \ref{pipedlim}, the limit
$$
\mathop{{\bf lim}}_{\epsilon_{1}\to 0^{+}}
\mathop{{\bf lim}}_{\epsilon_{2}\to 0^{+}}
\ldots
\mathop{{\bf lim}}_{\epsilon_{a}\to 0^{+}}
{\rm ord}_v{\mathscr T}(\epsilon_{1},\ldots,\epsilon_{a})
$$
exists for all $v$ apart from the vertices of ${\mathscr T}$.
Condition (D) above is rather technical.
To be able to verify it in practice
we shall use the following lemmata.
\begin{lemma}
\label{technical}
Let $W$ be a $d-1$-dimensional ${\mathbb Q}$-subspace of $V$
and let $W_\infty$ be the closure of $W$ in $V_\infty$.
Then for any non-empty set $T$ of archimedean places of ${\mathbb Q}(\zeta)$,
we have
$$
\dim_{\mathbb R}(W_\infty\cap V_{T})
=
\dim_{\mathbb R}(V_{T})-1.
$$
\end{lemma}
{\it Proof. }
As $W_{\infty}$ is a hyperplane in $V_{\infty}$,
it is sufficient to show that $V_{T}$ is not contained
in $W_{\infty}$.
It is sufficient to prove this in the case
that $T$ consists of a single place $v$.
We have a non-degenerate ${\mathbb Q}$-bilinear form on $V$ given by:
$$
\langle v,w\rangle
=
\sum_{i=1}^s{\rm Tr}_{{\mathbb Q}(\zeta)/{\mathbb Q}}(v_i w_i);
$$
here we are identifying $V$ with ${\mathbb Q}(\zeta)^{s}$.
Extending the form to $V_\infty$, the various subspaces
$V_v$ are orthogonal.
The subspace $W_\infty$ is the orthogonal complement of
some $w\in V\setminus\{0\}$.
As the coordinate of $w$ in $V_v$ is non-zero,
it follows that $w$ is not orthogonal to $V_v$.
Therefore $V_v$ is not a subspace of $W_\infty$ and the result follows.
$\Box$
\begin{lemma}
\label{verytechnical}
Let ${\mathscr P}$ be a $d-2$-dimensional cell in ${\mathfrak X}$
and let $W_{\mathscr P}$ be the ${\mathbb R}$-span of the vectors $a_{{\mathscr P},i}$.
Then for any non-empty set $T$ of archimedean places of ${\mathbb Q}(\zeta)$ we have
$$
\dim_{\mathbb R}(W_{\mathscr P}\cap V_T)
=
\max\{\dim_{\mathbb R}(V_T)-2,0\}.
$$
\end{lemma}
More general statements than the above seem to be false.
{\it Proof. }
As with the previous lemma,
it is sufficient to prove this in the case $T=\{v\}$.
If $m=2$ then $V_v=V_\infty$ and there is nothing to prove.
We therefore assume $m>2$,
so ${\mathbb Q}(\zeta_m)$ is totally complex.
The subspace $W_{\mathscr P}$ is the orthogonal complement of $\{v,w\}$
for some $v,w\in V\setminus\{0\}$.
If we show that the coordinates of $v$ and $w$ in $V_v$ are linearly
independent over ${\mathbb R}$, then the result follows.
Our strategy for finding the vectors $v$ and $w$ is as follows.
The cell ${\mathscr P}$ is a $d-2$-dimensional face of some
$d$-dimensional cell ${\mathscr Q}$ in ${\mathfrak X}$.
There are two $d-1$-dimensional
faces of ${\mathscr Q}$ containing ${\mathscr P}$,
each of which is obtained by removing one of the basis elements
$\{a_{{\mathscr Q},i}\}$.
For each $i=1,\ldots,d$
we shall find a non-zero vector $v(i)$,
which is orthogonal to the vectors $\{a_{{\mathscr Q},j}:j\ne i\}$.
The vectors $v,w$ may be taken to be
$v(i),v(j)$,
where $a_{{\mathscr Q},i}$ and $a_{{\mathscr Q},j}$ are the removed basis vectors.
We recall the construction of the basis $\{a_{{\mathscr Q},i}\}$.
We begin with a basis $\{d_1,\ldots,d_s\}$
for the lattice $L$ over ${\mathbb Z}[\zeta]$.
For each $i=1,\ldots,s$,
we choose a set of representatives
$\zeta^{a(i,1)},\ldots,\zeta^{a(i,m/p)}$
for $\mu_p$-cosets in $\mu_m$.
Then the basis $\{a_{{\mathscr Q},i}\}$ is as follows:
$$
\left\{\textstyle{\frac{\rho^k}{1-\rho}}\zeta^{a(i,j)}d_i
: i=1,\ldots,s,\; j=1,\ldots,m/p,\; k=1,\ldots,p-1\right\}.
$$
To ease notation we shall use the index set:
$$
{\mathcal I}
=
\{(i,j,k):i=1,\ldots,s,\; j=1,\ldots,m/p,\; k=1,\ldots,p-1\}.
$$
For $(i,j,k)\in{\mathcal I}$ we define
$a_{i,j,k}=\rho^k\zeta^{a(i,j)}d_i$.
Then the basis $\{a_{{\mathscr Q},i}\}$ is simply
$\{\frac{1}{1-\rho}a_{{\bf i}}:{\bf i}\in{\mathcal I}\}$.
We shall use the following Hermitian form on $V_\infty$:
$$
\left\langle\sum v_i d_i,\sum w_i d_i\right\rangle
=
\frac{p}{m}
\sum_{i=1}^s
{\rm Tr}^{{\mathbb Q}(\zeta)_\infty}_{\mathbb R}(v_i \overline{w_i}),
$$
where $\overline{w_i}$ denotes the complex conjugate
of $w_i$ in ${\mathbb Q}(\zeta)_\infty$.
We shall write $A$ for the ${\mathcal I}\times {\mathcal I}$ matrix
$\langle a_{\bf i},a_{\bf j}\rangle$.
One can show
that the entries of $A$ are as follows:
$$
\langle a_{i,j,k},a_{i',j',k'}\rangle
=
\left\{
\begin{array}{ll}
0 & (i,j)\ne(i',j'), \\
-1 & (i,j)=(i',j'),\; k\ne k', \\
p-1 & (i,j,k)=(i',j',k').
\end{array}
\right.
$$
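To make the computation of $A^{-1}$ explicit (a supplementary remark we add): the matrix $A$ is block diagonal, with one $(p-1)\times(p-1)$ block $pI-J$ for each pair $(i,j)$, where $J$ denotes the all-ones matrix. Since $J^{2}=(p-1)J$,
$$
(pI-J)\cdot\frac{1}{p}(I+J)
=\frac{1}{p}\bigl(pI+pJ-J-(p-1)J\bigr)
=I,
$$
so each block of $A^{-1}$ is $\frac{1}{p}(I+J)$; the formula for $v(i,j,k)$ obtained below is the corresponding column, up to a scalar factor which does not affect orthogonality or linear independence.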
Now consider a vector $v=(1-\overline\rho)\sum v_{{\bf i}} a_{\bf i}$
and let $[v]$ be the column vector of coefficients $v_{\bf i}$.
The vector $v$ is orthogonal to $\frac{1}{1-\rho}a_{\bf i}$ if and only
if the ${\bf i}$-th row of $A[v]$ is zero.
Fix an ${\bf i}$ and suppose $v$ is orthogonal to
$\frac{1}{1-\rho}a_{\bf j}$ for all ${\bf j}\ne{\bf i}$.
This means that every row of $A[v]$ other than the
${\bf i}$-th is zero, so $[v]$ is a multiple of the
${\bf i}$-th column of $A^{-1}$.
We shall write $v({\bf i})$ for the element of $V_\infty$,
for which $[v({\bf i})]$ is the ${\bf i}$-th column of $A^{-1}$.
To prove the lemma we need to show that for
${\bf i}\ne {\bf j}$ the coordinates of $v({\bf i})$ and $v({\bf j})$
in $V_v$ are linearly independent over ${\mathbb R}$.
By finding $A^{-1}$ one obtains:
$$
v(i,j,k)
=
(1-\overline\rho)(\rho^k-1)\zeta^{a(i,j)}d_i.
$$
Let the place $v$ correspond to the
embedding $\iota:{\mathbb Q}(\zeta)\hookrightarrow{\mathbb C}$.
We must show that for $(i,j,k)\ne(i',j',k')$
the vectors $\iota(v(i,j,k)),\iota(v(i',j',k'))\in{\mathbb C}^s$
are linearly independent over ${\mathbb R}$.
If $i\ne i^{\prime}$ then this is clearly the case as they
are independent over ${\mathbb C}$.
We therefore assume $i=i'$.
We are reduced to showing that the complex number
$$
z
=
\iota\left(
\frac{(\rho^k-1)\zeta^{a(i,j)}}{(\rho^{k'}-1)\zeta^{a(i,j')}}
\right)
$$
is not real.
We let $z_1=\iota\left(\frac{\rho^k-1}{\rho^{k'}-1}\right)$
and $z_2=\iota(\zeta)^{a(i,j)-a(i,j')}$.
The argument of $z_1$ is of the form
$$
\pi (k-k')\frac{r}{p},
\quad
r\in\{1,2,\ldots,p-1\}.
$$
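This can be seen as follows (a supplementary step we add): one has $e^{i\theta}-1=2i\sin(\theta/2)\,e^{i\theta/2}$, so $\arg(e^{i\theta}-1)=\frac{\theta+\pi}{2}$ for $0<\theta<2\pi$. Writing $\iota(\rho)=e^{2\pi i r/p}$ with $r\in\{1,\ldots,p-1\}$, this gives
$$
\arg z_{1}
\equiv
\frac{\pi r k}{p}-\frac{\pi r k'}{p}
=
\pi(k-k')\frac{r}{p}
\pmod{\pi}.
$$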
Therefore $z_1^p\in{\mathbb R}$.
If $j\ne j'$ then $z_2^p\notin{\mathbb R}$ so we are done.
Finally, if $j=j'$ then $z_2=1$ but $k\ne k'$,
so $z_1\notin{\mathbb R}$ and again we are done.
$\Box$
Up until now,
we have examined deformable cells in $V_\infty$.
However,
we need to define the order of a cell in $X_\infty$
at points on its boundary.
We call a map ${\mathscr T}:{\mathbb R}^b\times I^{d}\to X_\infty$ a deformable cell
in $X_{\infty}$,
if one, or equivalently all,
of its lifts to $V_\infty$ are deformable cells.
We define the vertices of such a ${\mathscr T}$ to be the projections
in $X_\infty$ of the vertices of a lift of ${\mathscr T}$.
\begin{lemma}
\label{Xlimit}
Let ${\mathscr T}:{\mathbb R}^b\times I^d\to X_\infty$ be a deformable $d$-cell
in $X_\infty$.
If $x\in X_\infty$ is not a vertex of ${\mathscr T}$
then the following limit exists:
$$
\lim_{\epsilon_{1}\to 0^{+}}
\ldots
\lim_{\epsilon_{b}\to 0^{+}}
{\rm ord}_{x}({\mathscr T}(\epsilon)).
$$
\end{lemma}
{\it Proof. }
This follows from the previous results using the relation
$$
{\rm ord}_x({\mathscr T}(\epsilon))
=
\sum_{y\to x}{\rm ord}_y(\tilde{\mathscr T}(\epsilon)),
$$
where $\tilde{\mathscr T}$ is a lift of ${\mathscr T}$ and the sum is over
the preimages of $x$ in $V_\infty$.
As $|\tilde{\mathscr T}|$ is compact, the sum is in fact finite
and hence commutes with the limits.
$\Box$
It will be more convenient to speak of
\emph{piecewise deformable cells}.
We shall call a map ${\mathscr T}:{\mathbb R}^{b}\times I^d\to X_{\infty}$
a piecewise deformable cell
if there is a subdivision of $I^d$
such that the restriction of ${\mathscr T}$
to any of the pieces in the subdivision is deformable.
The reason for this is that our functions will be piecewise
algebraic rather than algebraic.
By a \emph{piecewise deformable chain} we shall simply
mean a formal sum of piecewise deformable cells.
Thus a piecewise deformable chain will be a map ${\mathbb R}^b\to C_\bullet$.
If $x$ is not a vertex of a piecewise deformable $d$-chain ${\mathscr T}$
then we define
$$
{\rm ord}_x({\mathscr T})
:=
\lim_{\epsilon_{1}\to 0^{+}}
\ldots
\lim_{\epsilon_{b}\to 0^{+}}
{\rm ord}_{x}({\mathscr T}(\epsilon)).
$$
It follows from the above results that this limit exists.
If ${\mathscr T}$ is deformable then we shall call ${\mathscr T}$ a \emph{deformation}
of ${\mathscr T}(0)$.
\subsection{Deforming paths.}
\paragraph{The function $\bar f$.}
We now apply Lemma \ref{pipedlim}
to the function $f$.
Define a path ${\wp}(\epsilon)$ for $\epsilon\in{\mathfrak g}$ by
$$
{\wp}(\epsilon)
=
\left[0, \textstyle{\frac{1}{2}}+\epsilon\right]
+
\left[\textstyle{\frac{1}{2}}+\epsilon,1\right].
$$
\begin{proposition}
For any $d-1$- or $d-2$-dimensional cell ${\mathscr P}$ in ${\mathfrak X}$,
the map $\epsilon \mapsto {\wp}(\epsilon)\bowtie{\mathscr P}$
is piecewise ${\mathfrak g}$-deformable.
Its vertices are in $\frac{1}{1-\rho}L$.
\end{proposition}
{\it Proof. }
Let ${\mathscr P}$ be any $d-1$-cell in ${\mathfrak X}$.
We first cut ${\wp}(\epsilonsilon)\bowtie{\mathscr P}$ into its $2^{d-1}$
algebraic pieces and then prove that each piece is deformable.
Thus for any subset $A\subseteq\{1,2,\ldots,d-1\}$
we define
$$
{\mathscr A}_A(\epsilon)
=
[v_{\mathscr P}]
\Diamond
\bigDiamond_{i\in A}
[0, (\textstyle{\frac{1}{2}}+\epsilon)a_{{\mathscr P},i}]
\Diamond
\bigDiamond_{i\notin A}
[(\textstyle{\frac{1}{2}}+\epsilon) a_{{\mathscr P},i},a_{{\mathscr P},i}].
$$
We shall show that each ${\mathscr A}_A$ is deformable
by verifying conditions (C) and (D) above.
We have
$$
{\mathscr A}(\epsilon,\underline{x})
=
v_{\mathscr P}
+
\sum_{i\in A}
(\textstyle{\frac{1}{2}}+\epsilon) x_i a_{{\mathscr P},i}
+
\displaystyle\sum_{i\notin A}
((\textstyle{\frac{1}{2}}+\epsilon) +(\textstyle{\frac{1}{2}}-\epsilon)x_i)
a_{{\mathscr P},i}.
$$
This implies
$$
{\mathscr A}(\epsilon,\underline{x})
=
{\mathscr A}(0,\underline{x})
+\epsilon \left(\sum_{i\in A} x_i a_{{\mathscr P},i}
+\sum_{i\notin A} (1-x_i) a_{{\mathscr P},i}\right).
$$
Therefore ${\mathscr A}$ verifies condition (C) with
$$
\phi(\underline{x})
=
\sum_{i\in A} x_i a_{{\mathscr P},i}+\sum_{i\notin A} (1-x_i) a_{{\mathscr P},i}.
$$
To verify (D) we let $W_{\mathscr P}$ be the ${\mathbb R}$-span of the
vectors $a_{{\mathscr P},i}$.
Thus $\phi$ maps ${\mathbb R}^{d-1}$ bijectively to $W_{{\mathscr P}}$.
We must show that for any non-empty set of archimedean places $T$
we have $\dim_{\mathbb R}(W_{\mathscr P}\cap V_T)\le\dim_{\mathbb R}(V_T)-1$.
This follows from Lemma \ref{technical}.
It follows from the formula for $\phi$ that the only vertex of
${\mathscr A}$ is $v_{\mathscr P}+\sum_{i\notin A} a_{{\mathscr P},i}$.
By Lemma \ref{constructX} we know that this is in $\frac{1}{1-\rho}L$.
The case of a $d-2$-cell in ${\mathfrak X}$ is similar except
that one must use Lemma \ref{verytechnical}
instead of Lemma \ref{technical}.
$\Box$
We may now define
$$
\bar f(x)
:=
\mathop{{\bf lim}}_{\epsilon\to 0^{+}} {\rm ord}_x({\wp}(\epsilon)\bowtie{\mathscr F}).
$$
This limit exists for all $x$ not in $\frac{1}{1-\rho}L$.
\begin{proposition}
\label{flimit}
If $f(x)$ is defined then so is $\bar f(x)$ and they are equal.
However $\bar f(x)$ is defined for all $x$ outside the image of $\frac{1}{1-\rho}L$.
Furthermore if the $\mu_{m}$-orbit of $x$ does not intersect
the image of $\frac{1}{1-\rho}L$ then we have
$$
\sum_{\zeta\in\mu_{m}}\bar f(\zeta x)
=
1.
$$
In particular the restriction of $\bar f$ to $X\setminus \{0\}$
is a fundamental function.
\end{proposition}
{\it Proof. }
The first two assertions follow
from Lemma \ref{Xlimit}.
To prove the formula,
we use the fact that finite sums
commute with limits as follows:
$$
\sum_{\zeta\in\mu_{m}}
\bar f(\zeta x)
=
\sum_{\zeta\in\mu_{m}}
\mathop{{\bf lim}}_{\epsilon\to 0^{+}}
f^{({\wp}(\epsilon))}(\zeta x)
=
\mathop{{\bf lim}}_{\epsilon\to 0^{+}}
\sum_{\zeta\in\mu_{m}}
f^{({\wp}(\epsilon))}(\zeta x)
=
\mathop{{\bf lim}}_{\epsilon\to 0^{+}} 1
=
1.
$$
$\Box$
\paragraph{The function $\bar f^{\alpha}$.}
Recall that for any $\alpha\in {\rm GL}_{n}(k_{\infty})$
we have a path
${\wp}^{\alpha}$ from $0$ to $1$ in ${\mathfrak g}$,
defined by
$$
{\wp}^{\alpha}
=
[0,\alpha] + [\alpha,1].
$$
More precisely, let
$$
{\wp}^{\alpha}(x)
=
\left\{
\begin{array}{lll}
2x \alpha & \hbox{for} & x\le \frac{1}{2},
\\
\alpha+(2x-1)(1-\alpha) & & x \ge \frac{1}{2}.
\end{array}
\right.
$$
By a vertex of ${\wp}^{\alpha}$ we shall mean
one of the following points of $I$:
$$
{\rm Vert}({\wp}^{\alpha})
=
\left\{
0,\frac{1}{2},\frac{m^{2}+1}{2m^{2}},\frac{m^{2}+2}{2m^{2}}, \ldots,1
\right\}.
$$
By a vertex of ${\wp}^{\alpha}\bowtie{\mathscr F}$
we shall mean a point $v\in V_{\infty}$ of the form
$$
v
=
({\wp}^{\alpha}\bowtie{\mathscr P})(\underline{x}),
\quad
\underline{x}
\in
{\rm Vert}({\wp}^{\alpha})^{d-1},
$$
where ${\mathscr P}$ is a $d-1$-cell in ${\mathfrak X}$.
We shall write ${\rm Vert}({\wp}^{\alpha}\bowtie{\mathscr F})$
for the set of all vertices of ${\wp}^{\alpha}\bowtie{\mathscr F}$.
The path ${\wp}^{\alpha}$ gives rise to a fundamental function
$f^{\alpha}$ away from the boundary of ${\wp}^{\alpha}\bowtie{\mathscr P}$.
We shall extend the definition of $f^{\alpha}$ to all
points of $X_{\infty}$ apart from the vertices of ${\wp}^{\alpha}\bowtie{\mathscr F}$.
Given $\epsilon,\nu\in{\mathfrak g}$ we define a new path
${\wp}^{\alpha}(\epsilon,\nu)$ by
\begin{eqnarray*}
{\wp}^{\alpha}(\epsilon,\nu,x)
&=&
\left\{
\begin{array}{lll}
\alpha{\wp}(\epsilon,2x) & \hbox{for}& x\le \frac{1}{2},
\\
\alpha + (2x-1)(1-\alpha) + \phi(2m^{2}x)\nu
& &x\ge \frac{1}{2}.
\end{array}
\right.
\end{eqnarray*}
Here $\phi:{\mathbb R}/{\mathbb Z}\to {\mathbb R}$ is the ${\mathbb Z}$-periodic function
defined on the interval $I$ by
$$
\phi(x)
=
\left\{
\begin{array}{ll}
x & x\le \frac{1}{2},
\\
1-x & x\ge \frac{1}{2}.
\end{array}
\right.
$$
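In other words, $\phi(x)$ is the distance from $x$ to the nearest integer. In particular we have
$$
\phi(2m^{2}x)=0
\quad\hbox{whenever}\quad
x\in{\textstyle\frac{1}{2m^{2}}}{\mathbb Z},
$$
so the $\nu$-perturbation in the second branch vanishes at every point of ${\rm Vert}({\wp}^{\alpha})$.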
The path ${\wp}^{\alpha}(\epsilon,\nu)$
reduces to ${\wp}^{\alpha}$
when $\epsilon$ and $\nu$ are both zero.
We extend our definition of $f^{\alpha}$ as follows:
$$
\bar f^{\alpha}(x)
=
\mathop{{\bf lim}}_{\epsilon\to 0^{+}}
\mathop{{\bf lim}}_{\nu\to 0^{+}}
{\rm ord}_x({\wp}^{\alpha}(\epsilonsilon,\nu)\bowtie{\mathscr F}).
$$
\begin{proposition}
\label{falimit}
Let ${\mathscr P}$ be a $d-1$- or $d-2$-cell in ${\mathfrak X}$.
The map $(\epsilon,\nu)\mapsto
{\wp}^{\alpha}(\epsilon,\nu)\bowtie{\mathscr P}$
is piecewise ${\mathfrak g}$-deformable.
It is a deformation of ${\wp}^{\alpha}\bowtie{\mathscr P}$.
Its vertices are in ${\rm Vert}({\wp}^{\alpha}\bowtie{\mathscr P})$.
Consequently the limit $\bar f^\alpha(x)$ exists for all
$x\notin {\rm Vert}({\wp}^{\alpha}\bowtie{\mathscr F})$.
The function $\bar{f}^{\alpha}$ is fundamental on all $\mu_{m}$-orbits
which do not intersect ${\rm Vert}({\wp}^{\alpha}\bowtie{\mathscr F})$.
\end{proposition}
{\it Proof. }
Let ${\mathscr P}$ be any $d-1$-cell in ${\mathfrak X}$.
As before we must show that the map
$$
{\mathscr T}(\epsilon,\nu,\underline{x})
=
({\wp}^{\alpha}(\epsilon,\nu)\bowtie{\mathscr P})(\underline{x})
$$
is piecewise deformable.
We cut ${\mathscr T}$ into its algebraic pieces.
These pieces are translations
by elements of ${\rm Vert}({\wp}^{\alpha}\bowtie{\mathscr P})$
of the pieces
\begin{eqnarray*}
{\mathscr A}
&=&
\left(
\alpha({\textstyle\frac{1}{2}}+\epsilon)\cdot \bigDiamond_{i\in A} [0, a_{{\mathscr P},i}]
\right)\\
&&
\Diamond
\left(
\alpha({\textstyle\frac{1}{2}}
-\epsilon) \cdot \bigDiamond_{i\in B} [0, a_{{\mathscr P},i}]
\right)\\
&&
\Diamond
\left(
({\textstyle\frac{1-\alpha}{2m^{2}}+\frac{\nu}{2}})
\cdot
\bigDiamond_{i\in C} [0, a_{{\mathscr P},i}]
\right)\\
&&
\Diamond
\left(
({\textstyle\frac{1-\alpha}{2m^{2}}-\frac{\nu}{2}})
\cdot
\bigDiamond_{i\in D} [0, a_{{\mathscr P},i}]
\right),
\end{eqnarray*}
where the sets $A,B,C,D$ form a partition of $\{1,2,\ldots,d-1\}$.
The piece ${\mathscr A}$ satisfies condition (C) above
with
$$
\phi_1(\underline{x})
=
\left(\sum_{i\in A}-\sum_{i\in B}\right) x_i a_{{\mathscr P},i},
\quad
\phi_2(\underline{x})
=
\left(\sum_{i\in C}-\sum_{i\in D}\right) x_i a_{{\mathscr P},i}.
$$
To prove condition (D) we apply
Lemma \ref{technical} to the subspace $W_{{\mathscr P}}$
spanned by $\{a_{{\mathscr P},i}\}$.
The case of a $d-2$-cell ${\mathscr P}$ is similar
but one must use Lemma \ref{verytechnical}
instead of Lemma \ref{technical}.
As the $a_{{\mathscr P},i}$ are linearly independent
it follows that the only vertex of ${\mathscr A}$ is $0$,
so the vertices of ${\mathscr T}$ are the translations,
which are in ${\rm Vert}({\wp}^{\alpha}\bowtie{\mathscr P})$.
$\Box$
\paragraph{The function $\bar f^{\alpha\beta,\alpha}$.}
Finally for $\alpha,\beta\in {\rm GL}_{n}(k_{\infty})$
we define a path ${\wp}^{\alpha\beta,\alpha}$ by
$$
{\wp}^{\alpha\beta,\alpha}
=
[0,\alpha\beta]+[\alpha\beta,\alpha]+[\alpha,1].
$$
More precisely let
$$
{\wp}^{\alpha\beta,\alpha}(x)
=
\left\{
\begin{array}{ll}
4x \alpha\beta & x\le \frac{1}{4}, \\
\alpha\beta + (4x-1)(\alpha-\alpha\beta)
& \frac{1}{4}\le x \le \frac{1}{2},\\
\alpha + (2x-1)(1-\alpha) & x \ge \frac{1}{2}.
\end{array}
\right.
$$
We define
$$
{\rm Vert}({\wp}^{\alpha\beta,\alpha})
=
\left\{
0,\frac{1}{4},\frac{m^{2}+1}{4m^{2}},
\ldots,
\frac{1}{2},\frac{m^{2}+1}{2m^{2}},\ldots,1
\right\},
$$
$$
{\rm Vert}({\mathscr P}^{\alpha\beta,\alpha})
=
\{ {\mathscr P}_{i}^{\alpha\beta,\alpha}(\underline{x}):
\underline{x} \in{\rm Vert}({\wp}^{\alpha\beta,\alpha})^{d-1}\}.
$$
We shall extend the definition of $f^{\alpha\beta,\alpha}$ to
$X_{\infty}\setminus{\rm Vert}({\mathscr P}^{\alpha\beta,\alpha})$.
To do this we define a deformation of
${\wp}^{\alpha\beta,\alpha}$ as follows:
\begin{eqnarray*}
{\wp}^{\alpha\beta,\alpha}(\epsilon,\nu,\xi,x)
&=&
\left\{
\begin{array}{ll}
\alpha{\wp}^{\beta}(\epsilon,\nu,2x)
&
x\le \frac{1}{2},
\\
{\wp}^{\alpha}(0,\xi,x)
&
x\ge \frac{1}{2}.
\end{array}
\right.
\end{eqnarray*}
Again, for any $x\in X_{\infty}$,
which is not a vertex of ${\mathscr P}^{\alpha\beta,\alpha}$,
we may define
$$
\bar f^{\alpha\beta,\alpha}(x)
=
\mathop{{\bf lim}}_{\epsilon\to 0^{+}}
\mathop{{\bf lim}}_{\nu\to 0^{+}}
\mathop{{\bf lim}}_{\xi\to 0^{+}}
{\rm ord}_x({\wp}^{\alpha\beta,\alpha}(\epsilon,\nu,\xi)\bowtie {\mathscr F}).
$$
\begin{proposition}
\label{fabalimit}
Let ${\mathscr P}$ be a $d-1$- or $d-2$-cell in ${\mathfrak X}$.
The map
$$
(\epsilon,\nu,\xi)
\mapsto
{\wp}^{\alpha\beta,\alpha}(\epsilon,\nu,\xi)\bowtie{\mathscr P}
$$
is piecewise ${\mathfrak g}$-deformable
and is a deformation of ${\wp}^{\alpha\beta,\alpha}\bowtie{\mathscr P}$.
Its vertices are in ${\rm Vert}({\wp}^{\alpha\beta,\alpha}\bowtie{\mathscr F})$.
Consequently the limit $\bar f^{\alpha\beta,\alpha}(x)$ exists for all
$x\notin {\rm Vert}({\wp}^{\alpha\beta,\alpha}\bowtie{\mathscr F})$ and is
fundamental there.
\end{proposition}
This is proved in a similar way to Proposition \ref{falimit}.
\subsection{Deformation of homotopies.}
Again to take account of points on the boundary of ${\mathscr H}\bowtie{\mathscr G}$
we must construct a deformation of ${\mathscr H}\bowtie{\mathscr G}$.
\begin{proposition}
Suppose that for
$\epsilon_{1},\ldots,\epsilon_{r},\nu_{1},\ldots,\nu_{s}\in{\mathfrak g}$
we have paths ${\wp}_{1}(\epsilon_{1},\ldots,\epsilon_{r})$
and ${\wp}_{2}(\nu_{1},\ldots,\nu_{s})$.
Suppose further that for any $d-1$- or $d-2$-cell ${\mathscr P}$
in ${\mathfrak X}$,
the maps ${\wp}_1\bowtie {\mathscr P}$ and ${\wp}_2\bowtie {\mathscr P}$
are piecewise ${\mathfrak g}$-deformable.
Define a homotopy
${\mathscr H}(\epsilon_{1},\ldots,\epsilon_{r},\nu_{1},\ldots,\nu_{s})$
from ${\wp}_{1}(\epsilon_{1},\ldots,\epsilon_{r})$
to ${\wp}_{2}(\nu_{1},\ldots,\nu_{s})$ by
$$
{\mathscr H}(t,x)=(1-t){\wp}_1(x)+t{\wp}_2(x).
$$
Then $\partial({\mathscr H}\bowtie{\mathscr G})$ is piecewise ${\mathfrak g}$-deformable.
\end{proposition}
{\it Proof. }
Recall that ${\mathscr G}\in {\mathfrak X}_{d-1}$.
We must therefore show that for any $d-1$-cell ${\mathscr P}$ in ${\mathfrak X}$,
the $d-1$-chain $\partial({\mathscr H}\bowtie{\mathscr P})$ is
piecewise ${\mathfrak g}$-deformable.
We have
$$
\partial({\mathscr H}\bowtie{\mathscr P})
=
({\wp}_2-{\wp}_1)\bowtie{\mathscr P} +{\mathscr H}\bowtie\partial{\mathscr P}.
$$
As we already know that ${\wp}_i\bowtie{\mathscr P}$ is
piecewise ${\mathfrak g}$-deformable,
it is sufficient to show that for any $d-2$-cell ${\mathscr Q}$
the cell ${\mathscr H}\bowtie{\mathscr Q}$ is piecewise ${\mathfrak g}$-deformable.
We have
$$
({\mathscr H}\bowtie{\mathscr Q})(t,\underline{x})
=
(1-t)({\wp}_1\bowtie{\mathscr Q})(\underline{x})+t({\wp}_2\bowtie{\mathscr Q})(\underline{x}).
$$
This implies
$$
{\rm rk}_{{\mathscr H}\bowtie{\mathscr Q}}(t,\underline{x})
\ge
\min\{ {\rm rk}_{{\wp}_1\bowtie{\mathscr Q}}(\underline{x}),{\rm rk}_{{\wp}_2\bowtie{\mathscr Q}}(\underline{x})\}.
$$
The result now follows as ${\wp}_i\bowtie{\mathscr Q}$ is
piecewise ${\mathfrak g}$-deformable.
$\Box$
\subsection{The limit defining ${\rm Dec}_\infty$.}
Recall that we have defined ${\rm Dec}_\infty(\alpha,\beta)$ in terms
of the limit
$$
\mathop{{\bf lim}}_{\epsilon_1\to 0^+}
\mathop{{\bf lim}}_{\epsilon_2\to 0^+}
\mathop{{\bf lim}}_{\epsilon_3\to 0^+}
\;
{\rm ord}_{0,V}
\Big(
[1+\epsilon_1,\alpha(1+\epsilon_2),\alpha\beta(1+\epsilon_3)]\cdot{\mathscr E}
\Big).
$$
To show that this limit exists we must prove:
\begin{proposition}
The map
$$
(\epsilon_1,\epsilon_2,\epsilon_3)
\mapsto
[1+\epsilon_1,\alpha(1+\epsilon_2),\alpha\beta(1+\epsilon_3)]\cdot{\mathscr E}
$$
is a ${\mathfrak g}$-deformation of $[1,\alpha,\alpha\beta]\cdot{\mathscr E}$
with no vertices.
\end{proposition}
{\it Proof. }
We have ${\mathscr E}={\mathfrak s} {\mathscr G}$,
where ${\mathscr G}$ is an element of ${\mathfrak X}_{d-1}$.
We therefore fix a $d-1$-cell ${\mathscr P}$
in ${\mathfrak X}$ and define
$$
{\mathscr A}
=
[1+\epsilon_1,\alpha(1+\epsilon_2),\alpha\beta(1+\epsilon_3)]
\cdot{\mathfrak s}{\mathscr P}.
$$
We must show that the boundary of ${\mathscr A}$ is deformable.
As ${\mathfrak s}$ anticommutes with $\partial$ we have:
$$
\partial{\mathscr A}
=
\partial([1+\epsilon_1,\alpha(1+\epsilon_2),\alpha\beta(1+\epsilon_3)])
\cdot{\mathfrak s}{\mathscr P}
-
[1+\epsilon_1,\alpha(1+\epsilon_2),\alpha\beta(1+\epsilon_3)]
\cdot{\mathfrak s}\partial{\mathscr P}.
$$
We must show that both the summands above are deformable.
We shall show that all the cells in the above expression
satisfy conditions (C) and (D) above
and have no vertices.
We begin with the first summand.
This is made up of cells of the form
$$
[(1+\epsilon_1),\alpha(1+\epsilon_2)]\cdot{\mathfrak s}{\mathscr P}:
\Delta^{1}\times\Delta^{d-2}\to V_{\infty}.
$$
Condition (C) is satisfied with
$$
\phi_i(\underline{x},\underline{y})
=
x_i {\mathfrak s}{\mathscr P}(\underline{y}),
\quad
(\underline{x},\underline{y})\in\Delta^{1}\times\Delta^{d-2}.
$$
As $0\notin|{\mathfrak s}{\mathscr P}|$ and the $x_i$ are not all zero
it follows that there are no vertices.
Let $H$ be the hyperplane in $W_{\mathscr P}$ containing $|{\mathfrak s}{\mathscr P}|$.
To verify condition (D) we must show that
$H\cap V_T$ has dimension $\le \dim_{\mathbb R}(V_T)-2$.
This follows from Lemma \ref{technical} since $H$ does not contain $0$.
The second summand contains cells of the form:
$$
[(1+\epsilon_1),\alpha(1+\epsilon_2),\alpha\beta(1+\epsilon_3)]
\cdot
{\mathfrak s}{\mathscr Q}:
\Delta^{2}\times\Delta^{d-3}\to V_{\infty},
$$
with ${\mathscr Q}$ a $d-2$-cell of ${\mathfrak X}$.
Again condition (C) is satisfied with
$$
\phi_i(\underline{x},\underline{y})
=
x_i {\mathfrak s}{\mathscr Q}(\underline{y}),
\quad
(\underline{x},\underline{y})\in\Delta^{2}\times\Delta^{d-3}.
$$
As the $x_i$ are never all zero and $0\notin|{\mathfrak s}{\mathscr Q}|$,
it follows that the functions $\phi_i$
are never simultaneously all $0$.
This shows that there are no vertices.
The base set of ${\mathfrak s}{\mathscr Q}$ lies in a hyperplane
$H$ in $W_{{\mathscr Q}}$.
To verify condition (D) we must show that for any set $T$
of archimedean places of ${\mathbb Q}(\zeta)$ the dimension of
$H\cap V_T$ is $\le \dim_{\mathbb R}(V_T)-3$.
As $H$ does not go through $0$ this reduces to proving that
$\dim_{\mathbb R}(W_{{\mathscr Q}}\cap V_T)\le \dim_{\mathbb R}(V_T)-2$.
However this follows from Lemma \ref{verytechnical}.
$\Box$
\section{The relation between the arithmetic and geometric cocycles.}
\subsection{Main Results.}
We now state our main results.
Recall that we have a homotopy ${\mathscr H}^{1}_{\beta}$ from ${\wp}(1)$ to
${\wp}(\beta)$ defined by
$$
{\mathscr H}^{1}_{\beta}(x,t)
=
t{\wp}(\beta)(x) + (1-t)x.
$$
From this we have constructed a homotopy
${\mathscr H}^{\alpha}_{\alpha\beta,\alpha}$ from
${\wp}(\alpha)$ to ${\wp}(\alpha\beta,\alpha)$ as follows:
$$
{\mathscr H}^{\alpha}_{\alpha\beta,\alpha}(x,t)
=
\left\{
\begin{array}{ll}
\alpha {\mathscr H}^{1}_{\beta}(2x,t)
& x\le \frac{1}{2},
\\
{\wp}(\alpha)(x)
& x\ge \frac{1}{2}.
\end{array}
\right.
$$
Finally we have a homotopy
${\mathscr H}^{\alpha\beta,\alpha}_{\alpha\beta}$
from ${\wp}(\alpha\beta,\alpha)$ to ${\wp}(\alpha\beta)$ satisfying
$$
{\mathscr H}^{\alpha\beta,\alpha}_{\alpha\beta}(x,t)
=
2x\alpha\beta,\quad x\le \frac{1}{2},
$$
and constructed by splitting the triangle
$[\alpha\beta,\alpha,1]$ into $m^{2}$ smaller triangles.
Recall that our geometric cocycle is given by the formula:
$$
{\rm Dec}_\infty^{({\mathfrak s}{\mathscr F})}(\alpha,\beta)
=
\zeta^{
\textstyle{{\rm ord}_{0,V}
\left([1,\alpha,\alpha\beta]\cdot{\mathfrak s}{\mathscr G}
\right)}}.
$$
By Proposition \ref{XV} we have
\begin{equation}
\label{geometric}
{\rm Dec}_\infty^{({\mathfrak s}{\mathscr F})}(\alpha,\beta)
=
\zeta^{
\textstyle{{\rm ord}_{0,X}\left(
({\mathscr H}^{1}_{\alpha}+{\mathscr H}^{\alpha}_{\alpha\beta,\alpha}
+{\mathscr H}^{\alpha\beta,\alpha}_{\alpha\beta}
-{\mathscr H}^{1}_{\alpha\beta})\bowtie{\mathscr G}
\right)}}.
\end{equation}
One may check that this remains true even if the quantities
are defined as limits in the sense of \S5.
Looked at from this point of view, it seems natural to
divide out a certain coboundary from this cocycle.
For $\alpha\in\Upsilon_{\mathfrak f}$ define
$$
\tau(\alpha)
=
\zeta^{
\textstyle{\left\{
{\mathscr H}^{1}_{\alpha}\bowtie{\mathscr G}
|\alpha L
\right\}}}.
$$
We shall prove the following.
\begin{theorem}
\label{main}
For $\alpha,\beta\in\Upsilon_{\mathfrak f}$ we have
$$
{\rm Dec}_\infty(\alpha,\beta){\rm Dec}_{AS}(\alpha,\beta)
=
\frac{\tau(\alpha)\tau(\beta)}{\tau(\alpha\beta)}.
$$
\end{theorem}
We shall show that there is a continuous
cocycle ${\rm Dec}_m$ on ${\rm SL}_n(k_m)$ and an extension of $\tau$ to
${\rm SL}_n(k)$ such that for $\alpha,\beta\in{\rm SL}_n(k)$ we have
\begin{equation}
\label{globalsplit}
{\rm Dec}_\infty(\alpha,\beta){\rm Dec}_m(\alpha,\beta){\rm Dec}_{AS}(\alpha,\beta)
=
\frac{\tau(\alpha)\tau(\beta)}{\tau(\alpha\beta)}.
\end{equation}
The proof of Theorem \ref{main}
will be broken down into the following three lemmata.
\begin{lemma}
\label{A}
Let $\alpha\in\Upsilon_{{\mathfrak f}}$.
Then for any $\mu_{m}$-invariant lattice $M\subset V_{m}$
containing $L$, we have
$$
\langle f\alpha^{-1}-f^{\alpha} |\alpha M-\alpha L\rangle_{X}
=
1.
$$
\end{lemma}
\begin{lemma}
\label{C}
Let $\alpha\in G_{{\mathfrak f}}$ and $\beta\in{\rm GL}_n(k_\infty)$.
Then for any $\mu_{m}$-invariant lattice $M\subset V_{m}$
containing both $L$ and $\alpha^{-1}L$,
the following holds in ${\mathbb Z}/m$:
$$
\left\{{\mathscr H}^1_\beta\bowtie{\mathscr G}|M\right\}
=
\left\{{\mathscr H}^{\alpha}_{\alpha\beta,\alpha}\bowtie{\mathscr G}|\alpha M\right\}.
$$
\end{lemma}
\begin{lemma}
\label{D}
Let $\alpha,\beta\in G_{{\mathfrak f}}$.
For any $\mu_{m}$-invariant lattice $M\subset V_{m}$
containing $L$, $\alpha L$ and $\alpha\beta L$,
the following holds in ${\mathbb Z}/m$:
$$
\left\{{\mathscr H}^{\alpha\beta}_{\alpha\beta,\alpha}\bowtie{\mathscr G}|M\right\}
=
0.
$$
\end{lemma}
The proofs of these lemmata in \S6.4-6.6 are essentially exercises in
the geometry of numbers.
In each case one is reduced to showing
that the number of lattice points in a certain
set is a multiple of $m$.
One achieves this by cutting the
set into $m$ pieces which are translations of one another
by elements of the lattice.
\subsection{A formula for the Kubota symbol.}
Assume in this section that $k$ is
totally complex.
We shall describe the Kubota symbol on the
group
$$
\Gamma_{{\mathfrak f}}
=
\{\alpha\in {\rm SL}_{n}(k): \alpha L=L,\; \alpha\equiv I_{n} \bmod {\mathfrak f}\}
=
\Upsilon_{{\mathfrak f}}\cap {\rm SL}_{n}({\mathfrak o}).
$$
\begin{corollary}
\label{kubota}
For $\alpha\in\Gamma_{{\mathfrak f}}$ the Kubota symbol
$\kappa_{m}(\alpha)$ is given by the formula:
$$
\kappa_{m}(\alpha)
=
\frac{\zeta^{\textstyle{{\rm ord}_{0,X}\left({\mathscr H}^1_\alpha\bowtie{\mathscr G}\right)}}}
{h(w(\alpha))}.
$$
\end{corollary}
{\it Proof. }
We shall regard ${\rm SL}_{n}(k)$ as a dense subgroup of ${\rm SL}_{n}({\mathbb A}_f)$,
where ${\mathbb A}_f$ denotes the ring of finite ad\`eles of $k$.
Let $U$ be the closure of $\Gamma_{{\mathfrak f}}$ in ${\rm SL}_{n}({\mathbb A}_f)$.
This is a compact open subgroup of ${\rm SL}_{n}({\mathbb A}_f)$.
Consider the extension
$$
1 \to
\mu_{m} \to
\widetilde{{\rm SL}}_{n}({\mathbb A}_f) \to
{\rm SL}_{n}({\mathbb A}_f) \to
1,
$$
corresponding to the cocycle ${\rm Dec}_{AS}{\rm Dec}_m$.
The cocycle ${\rm Dec}_{AS}{\rm Dec}_m$ is 1 on $U\times U$.
Therefore on $U$ the map $\alpha\mapsto(\alpha,1)$
is a splitting of the extension.
On ${\rm SL}_{n}(k)$ we have by (\ref{globalsplit})
and Theorem \ref{complexsplit}:
$$
({\rm Dec}_{AS}{\rm Dec}_m)(\alpha,\beta)\, \partial hw(\alpha,\beta)
=
\partial\tau(\alpha,\beta).
$$
This implies that on ${\rm SL}_{n}(k)$ the map
$\alpha\mapsto (\alpha, \tau(\alpha)/hw(\alpha))$
splits the extension.
As the Kubota symbol is the ratio
of these two splittings,
the result follows.
$\Box$
\begin{remark}
Some authors speak of the ``Kubota symbol on ${\rm GL}_n$''.
By this they are in effect choosing an embedding of ${\rm GL}_n$
in ${\rm SL}_{n+r}$ and pulling back the Kubota symbol.
As the above formula is valid for $n$ arbitrarily large
there is no need to have a separate formula for ${\rm GL}_n$.
It is worth mentioning that the formula of the above corollary
gives a homomorphism
$$
{\rm GL}_{n}({\mathfrak o},{\mathfrak f}) \to \overline\mu,\quad
\alpha
\mapsto
\frac{\zeta^{\textstyle{{\rm ord}_{0,X}\left({\mathscr H}^1_\alpha\bowtie{\mathscr G}\right)}}}
{w(\alpha)},
$$
which is a rather more canonical extension
of the Kubota symbol on ${\rm GL}_{n}$.
Here ${\rm GL}_{n}({\mathfrak o},{\mathfrak f})$ denotes the principal congruence
subgroup modulo ${\mathfrak f}$.
\end{remark}
\subsection{Proof of Theorem \ref{main}.}
We shall deduce the theorem from lemmata \ref{A}, \ref{C} and \ref{D}.
Let $\alpha,\beta\in \Upsilon_{\mathfrak f}$.
We begin with the definition of ${\rm Dec}_{AS}$:
$$
{\rm Dec}_{AS}(\alpha,\beta)
=
\langle f-f\alpha | \beta L - L \rangle
=
\langle f\alpha^{-1}-f | \alpha\beta L - \alpha L \rangle.
$$
By Lemma \ref{A} we have:
$$
{\rm Dec}_{AS}(\alpha,\beta)
=
\langle f^\alpha-f | \alpha\beta L - \alpha L \rangle.
$$
By Proposition \ref{productformula} we have:
$$
{\rm Dec}_{AS}(\alpha,\beta)
=
\zeta^{
\textstyle{\left\{{\mathscr H}^\alpha_1\bowtie{\mathscr G} | \alpha\beta L - \alpha
L\right\}}}.
$$
This implies
\begin{eqnarray*}
{\rm Dec}_{AS}(\alpha,\beta)
&=&
\tau(\alpha)
\zeta^{
\textstyle{\left\{{\mathscr H}^\alpha_1\bowtie{\mathscr G} | \alpha\beta L\right\}}}\\
&=&
\frac{\tau(\alpha)}{\tau(\alpha\beta)}
\zeta^{
\textstyle{
\left\{({\mathscr H}^\alpha_1+{\mathscr H}^1_{\alpha\beta})\bowtie{\mathscr G} |
\alpha\beta L\right\}}}.
\end{eqnarray*}
On the other hand we have by Lemma \ref{C}:
$$
\tau(\beta)
=
\zeta^{
\textstyle{\{{\mathscr H}^1_\beta\bowtie{\mathscr G}|\beta L\}}}
=
\zeta^{
\textstyle{\{{\mathscr H}^\alpha_{\alpha\beta,\alpha}\bowtie{\mathscr G}| \alpha\beta L\}}}.
$$
Taking the last two formulae together we obtain:
$$
{\rm Dec}_{AS}(\alpha,\beta)
=
\frac{\tau(\alpha)\tau(\beta)}{\tau(\alpha\beta)}
\zeta^{
\textstyle{\left\{
({\mathscr H}^\alpha_1
+{\mathscr H}^1_{\alpha\beta}
-{\mathscr H}^\alpha_{\alpha\beta,\alpha})
\bowtie{\mathscr G} | \alpha\beta L\right\}}}.
$$
By Lemma \ref{D} we have
$$
{\rm Dec}_{AS}(\alpha,\beta)
=
\frac{\tau(\alpha)\tau(\beta)}{\tau(\alpha\beta)}
\zeta^{
\textstyle{\left\{
({\mathscr H}^\alpha_1
+{\mathscr H}^1_{\alpha\beta}
+{\mathscr H}^{\alpha\beta}_{\alpha\beta,\alpha}
+{\mathscr H}^{\alpha\beta,\alpha}_\alpha)
\bowtie{\mathscr G} | \alpha\beta L\right\}}}.
$$
This implies by (\ref{geometric}):
$$
{\rm Dec}_{AS}(\alpha,\beta)
{\rm Dec}_\infty(\alpha,\beta)
=
\frac{\tau(\alpha)\tau(\beta)}{\tau(\alpha\beta)}
\zeta^{\textstyle{\left\{
({\mathscr H}^\alpha_1
+{\mathscr H}^1_{\alpha\beta}
+{\mathscr H}^{\alpha\beta}_{\alpha\beta,\alpha}
+{\mathscr H}^{\alpha\beta,\alpha}_\alpha)
\bowtie{\mathscr G} | \alpha\beta L-L\right\}}}.
$$
On the other hand by Proposition \ref{productformula}
the final term in the above is equal to:
$$
\langle
f^\alpha -f
+f-f^{\alpha\beta}
+f^{\alpha\beta}-f^{\alpha\beta,\alpha}
+f^{\alpha\beta,\alpha}-f^\alpha|
\alpha\beta L-L\rangle
=
1.
$$
$\Box$
\subsection{Proof of Lemma \ref{A}.}
We have by definition,
\begin{eqnarray*}
\langle f^{\alpha}-f\alpha^{-1}| \alpha M - \alpha L \rangle_{X}
&=&
\prod_{\zeta\in\mu_{m}}
\zeta^{\textstyle{\sum_{x\in T} f^{\alpha}(x)f(\zeta\alpha^{-1} x)}},
\end{eqnarray*}
where the sums are over all $x$ in the finite subset $T$ of $X$
given by:
$$
T
=
((\alpha M)/L) \setminus ((\alpha L)/L).
$$
The functions $f$ and $f^{\alpha}$ are defined as limits
of the functions $f^{(\epsilon)}$ and $f^{\alpha,(\epsilon,\nu)}$.
As $T$ is finite we may choose $\epsilon,\nu$ small enough
so that for every $x\in T$ and every $\zeta\in\mu_{m}$ we have
$f(\zeta\alpha^{-1}x)=f^{(\epsilon)}(\zeta\alpha^{-1}x)$
and
$f^{\alpha}(x)=f^{\alpha,(\epsilon,\nu)}(x)$.
Fix $\zeta\ne 1$.
It is sufficient to show that in ${\mathbb Z}/m$ we have
$$
\sum_{x\in T} f^{\alpha,(\epsilon,\nu)}(x)f^{(\epsilon)}(\zeta\alpha^{-1} x)
=
0.
$$
By definition we have
$$
f^{\alpha,(\epsilon,\nu)}(x)
=
{\rm ord}_x({\wp}(\alpha,(\epsilon,\nu))\bowtie{\mathscr F}).
$$
It is therefore sufficient to show that for any $d$-cell ${\mathscr P}$
in ${\mathfrak X}$ we have:
$$
\sum_{x\in T}
{\rm ord}_x({\wp}(\alpha,(\epsilon,\nu))\bowtie{\mathscr P})
f^{(\epsilon)}(\zeta\alpha^{-1} x)
=
0.
$$
Fix such a ${\mathscr P}$.
We cut ${\wp}(\alpha,(\epsilon,\nu))\bowtie{\mathscr P}$
into $2^{d}$ smaller pieces.
Define for each subset $A\subseteq\{1,2,\ldots,d\}$
$$
{\mathscr A}_A(x_{1},\ldots,x_{d})
=
\sum_{i\notin A} \alpha{\wp}(\epsilon,x_{i})\cdot a_{{\mathscr P},i}
+
\sum_{i\in A} (\alpha+(1-\alpha)x_{i} + \phi(m^{2}x_{i})\nu) a_{{\mathscr P},i}.
$$
Here $\phi$ is as in \S5.3.
Then for $x\in T$ we have
$$
{\rm ord}_x({\wp}(\alpha,(\epsilon,\nu))\bowtie{\mathscr P})
=
\sum_{A\subseteq \{1,\ldots,d\}}
{\rm ord}_x({\mathscr A}_A).
$$
It is therefore sufficient to prove that for any $A$ we have
in ${\mathbb Z}/m$:
$$
\sum_{x\in T}
{\rm ord}_x({\mathscr A}_A)
f^{(\epsilon)}(\zeta\alpha^{-1} x)
=
0.
$$
There are two cases which we must consider.
In the case that $A$ is empty, we have
$$
\alpha^{-1}{\mathscr A}_A
=
{\wp}(\epsilon)\bowtie{\mathscr P}.
$$
Therefore if $x$ is in the base set of ${\mathscr A}_{A}$,
then $\alpha^{-1}x$ (which is well defined as $\alpha\in\Upsilon$)
is in the base set of ${\wp}(\epsilon)\bowtie{\mathscr P}$.
If this is the case then as $\zeta\ne 1$, we have
$f^{(\epsilon)}(\zeta\alpha^{-1} x)=0$.
Next suppose that there is an index $j\in A$.
Without loss of generality assume $1\in A$.
Then we have a decomposition ${\mathscr A}_A = {\mathscr B}{\rm Diam}ond {\mathscr C}$,
where ${\mathscr B}:I\to X_{\infty}$ is the 1-cell given by
$$
{\mathscr B}(x)
=
((1-\alpha)x + \phi(m^{2}x)\nu)\cdot a_{{\mathscr P},1},
$$
and ${\mathscr C}$ is a $d-1$-cell.
As the function $\phi$ is periodic,
we may write ${\mathscr B}$ as a sum of $m^{2}$ translations of
$$
{\mathscr B}_{1}(x)
=
\left(\frac{1-\alpha}{m^{2}}x + \phi(x)\nu \right) \cdot a_{{\mathscr P},1},
$$
where the translations are by multiples of
$\frac{1-\alpha}{m^{2}} a_{{\mathscr P},1}$.
It follows from our congruence condition on $\alpha$
that these translations are
in $\alpha L/L$.
Our expression for ${\mathscr B}$ implies a similar expression for
${\mathscr A}_{A}$:
$$
{\mathscr A}_A
=
\sum_{l=1}^{m^{2}} {\mathscr S}(l),
$$
where each ${\mathscr S}(l)$ is a translation of ${\mathscr S}(1)$ by
an element of $\alpha L/L$.
As both the set $T$ and the function $f^{(\epsilon)}(\zeta\alpha^{-1}x)$
are invariant under such translations, we have
\begin{eqnarray*}
\sum_{x\in T}{\rm ord}_{x}({\mathscr A}_{A})f^{(\epsilon)}(\zeta\alpha^{-1}x)
&=&
\sum_{l=1}^{m^{2}}
\sum_{x\in T}{\rm ord}_{x}({\mathscr S}(l))f^{(\epsilon)}(\zeta\alpha^{-1} x)\\
&=&
m^{2}
\sum_{x\in T}{\rm ord}_{x}({\mathscr S}(1))f^{(\epsilon)}(\zeta\alpha^{-1} x)
\equiv
0 \bmod m.
\end{eqnarray*}
$\Box$
\subsection{Proof of Lemma \ref{C}.}
Choose a lattice $M\subset V_{m}$ containing
$L$ and $\alpha^{-1} L$.
It is sufficient to show that for
every $d-1$-cell ${\mathscr P}$ in ${\mathfrak X}$ we have in ${\mathbb Z}/m$:
$$
\{
{\mathscr H}^\alpha_{\alpha\beta,\alpha}\bowtie {\mathscr P}
|
\alpha M
\}
=
\{
{\mathscr H}^1_\beta\bowtie {\mathscr P}
|
M
\}.
$$
To prove this, we cut ${\mathscr H}^\alpha_{\alpha\beta,\alpha}\bowtie {\mathscr P}$
into $2^{d-1}$ pieces.
One of the pieces will be precisely $\alpha\cdot{\mathscr H}^1_\beta\bowtie {\mathscr P}$;
for the other pieces ${\mathscr A}$ we will show that $\{{\mathscr A}|\alpha M\}=0$.
The various pieces of ${\mathscr H}^\alpha_{\alpha\beta,\alpha}\bowtie {\mathscr P}$
will be indexed by the subsets $A\subseteq\{1,2,\ldots,d-1\}$.
Recall that ${\mathscr H}^\alpha_{\alpha\beta,\alpha}\bowtie{\mathscr P}$ is defined by
$$
({\mathscr H}^\alpha_{\alpha\beta,\alpha}\bowtie{\mathscr P})
(t,x_1,\ldots,x_{d-1})
=
v_{\mathscr P}
+
\sum_{i=1}^{d-1}
{\mathscr H}^\alpha_{\alpha\beta,\alpha}(t,x_i)\cdot a_{{\mathscr P},i}.
$$
For any subset $A\subset\{1,2,\ldots,d-1\}$ we define
$$
{\mathscr A}_A
(t,x_1,\ldots,x_{d-1})
=
v_{\mathscr P}
+
\sum_{i\in A}
{\mathscr H}^\alpha_{\alpha\beta,\alpha}(t,x_i/2)\cdot a_{{\mathscr P},i}
+
\sum_{i\not\in A}
{\mathscr H}^\alpha_{\alpha\beta,\alpha}(t,(x_i+1)/2)\cdot a_{{\mathscr P},i}.
$$
As no element of $\alpha M$ lies in any set $|\partial{\mathscr A}_A|$,
we have:
$$
\{{\mathscr H}^\alpha_{\alpha\beta,\alpha}\bowtie{\mathscr P}|\alpha M\}
=
\sum_A \{{\mathscr A}_A|\alpha M\}.
$$
We now examine the pieces ${\mathscr A}_A$.
First consider ${\mathscr A}_{\{1,2,\ldots,d-1\}}$.
For $t,x\in I$ we have
$$
\alpha^{-1}\cdot{\mathscr H}^\alpha_{\alpha\beta,\alpha}(t,x/2)
=
{\mathscr H}^1_\beta (t,x).
$$
This implies
$$
\alpha^{-1}\cdot{\mathscr A}_{\{1,2,\ldots,d-1\}}
=
{\mathscr H}^1_\beta\bowtie{\mathscr P}.
$$
Therefore
$$
\{{\mathscr A}_{\{1,2,\ldots,d-1\}}|\alpha M\}
=
\{{\mathscr H}^1_\beta\bowtie{\mathscr P}|M\}.
$$
The lemma will now follow when we show that for any
proper subset $A\subset\{1,2,\ldots,d-1\}$
we have $\{{\mathscr A}_A|\alpha M\}=0$.
Fix a proper subset $A\subset \{1,2,\ldots,d-1\}$.
Without loss of generality we have $1\notin A$.
Note that for $t,x\in I$ we have
$$
{\mathscr H}^{\alpha}_{\alpha\beta,\alpha}(t,(x+1)/2)
=
{\mathscr M}(x),\quad
\hbox{where ${\mathscr M}(x)={\wp}(\alpha)((x+1)/2)$.}
$$
We therefore have
$$
{\mathscr A}_A
=
({\mathscr M} \cdot a_{{\mathscr P},1}) \Diamond {\mathscr B},
$$
for some $d-1$-cell ${\mathscr B}$.
We shall now use this fact to cut ${\mathscr A}_A$ into $m$
pieces, which are translations of each other by elements of $M$.
We note that from the definition of ${\wp}(\alpha)$ we have
$$
{\mathscr M}\left(x+\frac{1}{m}\right)
=
{\mathscr M}(x) + \frac{1-\alpha}{m},\quad
0\le x\le 1-\frac{1}{m}.
$$
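This periodicity can be checked directly, assuming ${\wp}(\alpha)$ carries the affine parametrization $\alpha+(2x-1)(1-\alpha)$ on $[\frac{1}{2},1]$ used in \S5; the $\xi$-perturbation there is periodic of period $\frac{1}{m^{2}}$ and so cancels in the difference. Ignoring it, we have
$$
{\mathscr M}(x)
=
{\wp}(\alpha)\left(\frac{x+1}{2}\right)
=
\alpha + x(1-\alpha),
\qquad
{\mathscr M}\left(x+\frac{1}{m}\right)-{\mathscr M}(x)
=
\frac{1-\alpha}{m}.
$$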
We define for $x\in I$:
$$
{\mathscr N}(x)={\mathscr M}\left(\frac{x}{m}\right)\cdot a_{{\mathscr P},1}.
$$
With this notation we have
$$
\{{\mathscr A}_A|\alpha M\}
=
\sum_{i=0}^{m-1}
\left\{
\left[i\frac{1-\alpha}{m}a_{{\mathscr P},1}\right]\Diamond{\mathscr N} \Diamond {\mathscr B}
\Big|
\alpha M\right\}.
$$
To prove the lemma it only remains to show that all the
terms in the above sum are equal.
We have
$$
\left\{
\left[i\frac{1-\alpha}{m}a_{{\mathscr P},1}\right]\Diamond{\mathscr N} \Diamond {\mathscr B}
\Big|
\alpha M\right\}
=
\left\{ {\mathscr N} \Diamond {\mathscr B}
\Big|
\alpha M-i\frac{1-\alpha}{m}a_{{\mathscr P},1}\right\},
$$
so we need only show that the translation $\frac{1-\alpha}{m}a_{{\mathscr P},1}$
is an element of the lattice $\alpha M$.
This follows from the congruence condition on $\alpha$, the condition on $M$
and the fact that $(1-\rho)a_{{\mathscr P},1}\in L$.
$\Box$
\subsection{Proof of Lemma \ref{D}.}
We must show that
$\{{\mathscr H}^{\alpha\beta,\alpha}_{\alpha\beta}\bowtie{\mathscr G}|M\}=0$
in ${\mathbb Z}/m$.
However by Corollary \ref{XVcor} it is sufficient to prove this
formula with the homotopy ${\mathscr H}^{\alpha\beta,\alpha}_{\alpha\beta}$
replaced by another homotopy ${\mathscr H}$ from ${\wp}(\alpha\beta,\alpha)$
to ${\wp}(\alpha\beta)$ as long as we have for $x$ close to $0$:
${\mathscr H}(t,x)=2\alpha\beta x$.
We shall choose such a homotopy ${\mathscr H}$ for which the calculation
is easier.
\paragraph{The homotopy ${\mathscr H}$.}
Before beginning we shall fix a parametrization of the
path ${\wp}(\alpha\beta,\alpha)$ as follows:
$$
{\wp}(\alpha\beta,\alpha)(x)
=
\left\{
\begin{array}{ll}
\alpha\beta{\wp}(\epsilon,2x)
&
x\le \frac{1}{2},\\
\alpha{\wp}^{\beta}(0,\nu,2x-\frac{1}{2})
&
\frac{1}{2}\le x\le \frac{3}{4},\\
{\wp}^{\alpha}(0,\xi,2x-1)
&
x\ge \frac{3}{4}.
\end{array}
\right.
$$
This has the advantage that it agrees with ${\wp}^{\alpha\beta}(x)$
for $x\le \frac{1}{2}$.
We fix $\epsilon$, $\xi$ and $\nu$
sufficiently small for our purposes,
and we forget about these variables for the rest of the proof.
We introduce a sequence of
auxiliary paths
$$
{\wp}(\alpha\beta,\alpha)={\wp}_{0},
\ldots,
{\wp}_{m^{2}}={\wp}(\alpha\beta).
$$
These are defined as follows.
For any $i=0,\ldots,m^{2}$ we define
$$
{\wp}_{i}(x)=2x\alpha\beta, \quad x\le \frac{1}{2}.
$$
The triangle in ${\mathfrak g}$ with vertices $\alpha\beta$, $\alpha$ and $1$
is cut into $m^{2}$ similar triangles, or $4m^{2}$ if $m$ is even.
These similar triangles are numbered as in the following diagram.
\begin{center}
\includegraphics{metapic.jpg}
\end{center}
We define ${\wp}_{i}$ to be the path from $0$ to $1$
in the diagram which passes below the triangles numbered
$1,\ldots,i$ and above the triangles numbered $i+1,\ldots,m^{2}$.
This means that paths ${\wp}_{i-1}$ and ${\wp}_{i}$ differ only
by the $i$-th triangle.
We shall parametrize these paths as follows.
For any $x\in I$ which is not mapped into the edge of the
$i$-th triangle we define ${\wp}_{i-1}(x)={\wp}_{i}(x)$.
Let $[a_{i},b_{i}]\subset I$ be the subinterval mapped to the $i$-th triangle
by both ${\wp}_{i}$ and ${\wp}_{i-1}$.
The interval $[a_{i},b_{i}]$ will have length $\frac{1}{2m}$.
One of the two paths will go around two edges of the triangle and
the other will go around the third edge.
We parametrize the path which goes around only one of the edges
so that the path is affine there.
The other path will be affine on the two
subintervals $[a_{i},a_{i}+\frac{1}{4m}]$ and
$[a_{i}+\frac{1}{4m},b_{i}]$, and will map $a_{i}+\frac{1}{4m}$ to the
vertex of the triangle.
We define homotopies ${\mathscr H}_{1},\ldots,{\mathscr H}_{m^{2}}$ as follows:
$$
{\mathscr H}_{i}(x,t)
=
(1-t){\wp}_{i-1}(x) + t{\wp}_{i}(x).
$$
Thus ${\mathscr H}_{i}$ is a homotopy from ${\wp}_{i-1}$ to ${\wp}_{i}$.
Finally we put all these homotopies together to make the homotopy
${\mathscr H}$:
$$
{\mathscr H}(x,t)
=
{\mathscr H}_{[m^{2}t]+1}(x,\{m^{2}t\}),
$$
where $[\cdot]$ and $\{\cdot\}$ denote the integer part and fractional part
respectively.
This is a homotopy from ${\wp}(\alpha\beta,\alpha)$
to ${\wp}(\alpha\beta)$.
\paragraph{Calculation of $\{{\mathscr H}\bowtie{\mathscr G}|M\}$.}
Note that we have
$$
\{{\mathscr H}\bowtie{\mathscr G}|M\}
=
\mathop{{\bf lim}}_{\epsilon\to 0^{+}}
\mathop{{\bf lim}}_{\nu\to 0^{+}}
\mathop{{\bf lim}}_{\xi\to 0^{+}}
\mathop{{\bf lim}}_{\eta\to 0^{+}}
\{ {\mathscr H}(\epsilon,\nu,\xi,\eta) \bowtie{\mathscr G}|M\}.
$$
We fix $\epsilon,\nu,\xi,\eta$ so that all our functions are
defined on $M$ and equal to their limits.
It is sufficient to show that for any $d-1$-cell ${\mathscr P}$ in ${\mathfrak X}$
we have in ${\mathbb Z}/m$:
$$
\{{\mathscr H}\bowtie{\mathscr P}|M\}
=
0.
$$
We now fix such a ${\mathscr P}$.
Assume for a moment that $m$ is odd.
Recall that to construct the homotopy ${\mathscr H}$
we used a sequence of paths
${\wp}_{0},\ldots,{\wp}_{m^{2}}$ and homotopies
${\mathscr H}_{1},\ldots,{\mathscr H}_{m^{2}}$,
where ${\mathscr H}_{i}$ is a homotopy from ${\wp}_{i-1}$ to ${\wp}_{i}$.
We therefore have
$$
\{{\mathscr H}\bowtie{\mathscr P}|M \}
=
\sum_{i=1}^{m^{2}}
\{{\mathscr H}_i\bowtie{\mathscr P}|M\}.
$$
Each homotopy ${\mathscr H}_{i}$ corresponds to one of the $m^{2}$ subtriangles
$T_{1},\ldots,T_{m^{2}}$ of the triangle with vertices $\alpha\beta,\alpha,1$.
Each of these triangles is either a translation of $T_{1}$
or a translation of $T_{2}$.
We shall prove that if $T_{i}$ is a translation of $T_{j}$ then we
have
\begin{equation}
\label{claim2}
\{{\mathscr H}_i\bowtie{\mathscr P}|M\}
\equiv
\{{\mathscr H}_j\bowtie{\mathscr P}|M\}
\bmod m.
\end{equation}
As there are $\frac{m(m+1)}{2}$ triangles of type $T_{1}$ and
$\frac{m(m-1)}{2}$ of type $T_{2}$, this implies
\begin{eqnarray*}
\{{\mathscr H}\bowtie{\mathscr P}|M \}
&\equiv&
\frac{m(m+1)}{2}\{{\mathscr H}_1\bowtie{\mathscr P}|M\}
+
\frac{m(m-1)}{2}\{{\mathscr H}_2\bowtie{\mathscr P}|M\}\\
&\equiv&
0
\qquad\bmod m,
\end{eqnarray*}
which proves the result.
This is the only place in which we need to assume that $m$ is odd.
In the case that $m$ is even we must cut the large triangle into
$4m^{2}$ subtriangles instead of just $m^{2}$,
so that $m(2m+1)$ of them are of type $T_{1}$ and $m(2m-1)$ are
of type $T_{2}$.
This is why we need a slightly different congruence condition
on $\alpha$ and $\beta$ when $m$ is even.
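The parity distinction can be checked directly (an illustrative computation, not part of the proof): for odd $m$ both counts $\frac{m(m+1)}{2}$ and $\frac{m(m-1)}{2}$ are divisible by $m$, since $\frac{m\pm 1}{2}$ is then an integer, whereas for even $m$ each count is congruent to $\frac{m}{2}$ modulo $m$; the refined counts $m(2m+1)$ and $m(2m-1)$ are divisible by $m$ for every $m$.

```python
# Illustration: the subtriangle counts from the m^2-subdivision are
# divisible by m only for odd m; the counts from the 4m^2-subdivision
# are divisible by m for all m.
def odd_counts_divisible(m):
    # counts m(m+1)/2 and m(m-1)/2 of types T1 and T2
    return (m * (m + 1) // 2) % m == 0 and (m * (m - 1) // 2) % m == 0

def refined_counts_divisible(m):
    # counts m(2m+1) and m(2m-1) from the 4m^2-subdivision
    return (m * (2 * m + 1)) % m == 0 and (m * (2 * m - 1)) % m == 0

assert all(odd_counts_divisible(m) for m in range(3, 100, 2))
assert not any(odd_counts_divisible(m) for m in range(2, 100, 2))
assert all(refined_counts_divisible(m) for m in range(2, 100))
```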
It remains only to prove the congruence (\ref{claim2}).
To do this we shall cut ${\mathscr H}_i\bowtie{\mathscr P}$ and ${\mathscr H}_j\bowtie{\mathscr P}$
into pieces.
Some of the pieces of ${\mathscr H}_i\bowtie{\mathscr P}$ will be translates by
elements of $M$
of pieces of ${\mathscr H}_j\bowtie{\mathscr P}$, and so will cancel each other out.
Any piece which does not cancel in this way
will be the product of a line segment with length in $m\cdot M$
and a $(d-1)$-chain.
Thus its contribution to (\ref{claim2}) will vanish modulo $m$.
Without loss of generality we shall assume that
$T_{i}$ is a translation of $T_{1}$.
We begin by cutting the interval $I$ into four pieces.
For $x$ in the interval $[0,\frac{1}{2}]$ we have
$$
{\mathscr H}_{i}(x,t)
=
{\wp}_{i}(x)
=
\alpha\beta{\wp}(\epsilon)(2x).
$$
Let $[a_{i},b_{i}]$ be the subinterval of $[\frac{1}{2},1]$ which is
mapped by ${\wp}_{i}$ and ${\wp}_{i-1}$ to the triangle $T_{i}$.
The other two pieces of $I$ are $[\frac{1}{2},a_{i}]$ and $[b_{i},1]$.
It is possible that one of these two will be a single point.
For $x$ in the interval $[\frac{1}{2},a_{i}]$ we have
${\mathscr H}_{i}(x,t)={\wp}_{i}(x)$.
In this region, ${\wp}_{i}$ is a sum of line segments
whose endpoints differ by $\frac{1-\alpha}{m}$,
$\frac{1-\alpha\beta}{m}$ or $\frac{\alpha-\alpha\beta}{m}$.
We define
$$
{\mathscr U}_{i}(x)
=
{\wp}_{i}\left((1-x)\frac{1}{2}+xa_{i}\right).
$$
The region of ${\wp}_{i}$ between $b_{i}$ and $1$ is similar
and we define
$$
{\mathscr U}_{i}^{\prime}(x)
=
{\wp}_{i}((1-x)b_{i}+x).
$$
Suppose $\{1,\ldots,d-1\}$ is the disjoint union of the
four sets $A$, $B$, $C$ and $D$.
We shall define ${\mathscr A}_{i}(A,B,C,D)$ to be the restriction
of ${\mathscr H}_{i}\bowtie{\mathscr P}$ to the subset
$$
I
\times
[0,\textstyle{\frac{1}{2}}]^{A}
\times
[\textstyle{\frac{1}{2}},a_{i}]^{B}
\times
[a_{i},b_{i}]^{C}
\times
[b_{i},1]^{D}
\subset I^{d}.
$$
In other words we have
\begin{eqnarray*}
{\mathscr A}_{i}(A,B,C,D)(t,x_{1},\ldots,x_{d-1})
&=&
v_{\mathscr P}
+
\sum_{j\in A}
\alpha\beta{\wp}(\epsilon)(x_{j})\cdot a_{{\mathscr P},j}\\
&&
+
\sum_{j\in B}
{\mathscr U}_{i}(x_{j})\cdot a_{{\mathscr P},j}\\
&&
+
\sum_{j\in C}
{\mathscr H}_{i}((1-x_{j})a_{i}+x_{j}b_{i},t)\cdot a_{{\mathscr P},j}\\
&&
+
\sum_{j\in D}
{\mathscr U}_{i}^{\prime}(x_{j})\cdot a_{{\mathscr P},j}.
\end{eqnarray*}
We have
$$
\{{\mathscr H}_{i}\bowtie{\mathscr P}|M\}
=
\sum_{A,B,C,D}
\{{\mathscr A}_{i}(A,B,C,D)|M\}.
$$
It is sufficient to prove that for any choice of $A$, $B$, $C$ and $D$
we have
$$
\{{\mathscr A}_{i}(A,B,C,D)|M\}
=
\{{\mathscr A}_{1}(A,B,C,D)|M\}.
$$
To prove this we shall consider three cases.
Case 1. Suppose that $B$ is non-empty
and let $j\in B$.
We can then decompose ${\mathscr A}_{i}(A,B,C,D)$ as
$$
{\mathscr A}_{i}(A,B,C,D)
=
{\mathscr V} \Diamond {\mathscr W},
$$
where ${\mathscr V}(x)={\mathscr U}_{i}(x)\cdot a_{{\mathscr P},j}$
and ${\mathscr W}$ is a $(d-1)$-chain.
The cell ${\mathscr V}$ is a sum of line segments
whose length is in $m\cdot M$.
It follows that $\{{\mathscr A}_{i}(A,B,C,D)|M\}\equiv 0\bmod m$.
Similarly $\{{\mathscr A}_{1}(A,B,C,D)|M\}\equiv 0\bmod m$.
Case 2.
Suppose that $D$ is non-empty.
We may reason as in case 1 to show that
both $\{{\mathscr A}_{i}(A,B,C,D)|M\}$ and $\{{\mathscr A}_{1}(A,B,C,D)|M\}$
are congruent to 0 modulo $m$.
Case 3.
Suppose $B$ and $D$ are empty.
We shall prove that ${\mathscr A}_{i}(A,B,C,D)$ is a translation
of ${\mathscr A}_{1}(A,B,C,D)$ by an element of $M$.
We shall assume without
loss of generality that $C=\{1,\ldots,r\}$
and $A=\{r+1,\ldots,d-1\}$.
We may then decompose ${\mathscr A}_{i}(A,B,C,D)$ as follows:
$$
{\mathscr A}_{i}(A,B,C,D)
=
{\mathscr B}_{i}
\Diamond
{\mathscr C},
$$
where ${\mathscr B}_{i}:I^{r+1}\to X_{\infty}$ is given by
$$
{\mathscr B}_{i}(t,x_{1},\ldots,x_{r})
=
\sum_{j=1}^{r}
{\mathscr H}_{i}((1-x_{j})a_{i}+x_{j}b_{i},t)\cdot a_{{\mathscr P},j}
$$
and ${\mathscr C}:I^{d-r-1}\to X_{\infty}$ is given by:
$$
{\mathscr C}(\underline{x})
=
\sum_{j=1}^{d-r-1}
\alpha\beta{\wp}(\epsilon)(x_{j})\cdot a_{{\mathscr P},j}.
$$
It is therefore sufficient to prove that
${\mathscr B}_{i}$ is a translation of ${\mathscr B}_{1}$
by an element of $M$.
Recall that the triangle $T_{i}$ is a translation
of $T_{1}$ by some vector $v\in{\mathfrak g}$.
Furthermore $v$ is of the form
$$
v
=
r\frac{\alpha\beta-1}{m}
+
s\frac{\alpha-1}{m},
\qquad
r,s\in{\mathbb Z}.
$$
It follows that the restriction of ${\mathscr H}_{i}$ to $[a_{i},b_{i}]$
and the restriction of ${\mathscr H}_{1}$ to $[a_{1},b_{1}]$ also
differ by the translation $v$.
In other words we have for $x\in I$,
$$
{\mathscr H}_{i}((1-x)a_{i}+x b_{i},t)
=
{\mathscr H}_{1}((1-x)a_{1}+x b_{1},t)+v.
$$
This implies
$$
{\mathscr B}_{i}(t,x_{1},\ldots,x_{r})
=
{\mathscr B}_{1}(t,x_{1},\ldots,x_{r})
+
v\sum_{j=1}^{r}a_{{\mathscr P},j}.
$$
It remains to show that the translations $v\cdot a_{{\mathscr P},j}$ are in $M$.
This reduces to showing that $\frac{\alpha-1}{m}a_{{\mathscr P},j}$
and $\frac{\alpha\beta-1}{m}a_{{\mathscr P},j}$ are in $M$.
However this is true by our congruence conditions on $\alpha$
and $\beta$, the fact that $(1-\rho)a_{{\mathscr P},j}\in L$
and the assumption that $M$ contains $L$, $\alpha L$ and $\alpha\beta L$.
$\Box$
\section{Extending the cocycle to ${\rm GL}_n({\mathbb A})$.}
\subsection{The cocycle on ${\rm SL}_{n}(k_{m})$.}
Let $k_{m}$ be the sum of the fields $k_{v}$ for all
finite places $v$ dividing $m$.
We therefore have ${\mathbb A}={\mathbb A}S\oplus k_{\infty}\oplus k_{m}$.
So far, we have a cocycle ${\rm Dec}AS$ on ${\rm GL}_{n}({\mathbb A}S)$
and a cocycle ${\rm Dec}i $ on ${\rm GL}_{n}(k_{\infty})$.
We shall now find a continuous cocycle ${\rm Dec}_{m}$
on ${\rm SL}_{n}(k_{m})$ so that ${\rm Dec}AS{\rm Dec}i {\rm Dec}_{m}$ is
metaplectic on ${\rm SL}_{n}({\mathbb A})$.
\paragraph{Extending $\tau$.}
Recall that $G_{{\mathfrak f}}$ is the subgroup of ${\rm GL}_{n}(k)$ consisting of
matrices which are integral at all places dividing $m$
and congruent to the identity modulo ${\mathfrak f}$.
This is the subgroup generated by the semigroup $\Upsilon_{{\mathfrak f}}$.
We have a function $\tau:\Upsilon_{{\mathfrak f}}\to \mu_{m}$
such that for $\alpha,\beta\in\Upsilon_{{\mathfrak f}}$ the following holds:
\begin{equation}
\label{splitting}
{\rm Dec}AS(\alpha,\beta){\rm Dec}i (\alpha,\beta)
=
\partial\tau(\alpha,\beta).
\end{equation}
For any matrix $\alpha\in G_{{\mathfrak f}}$ there is a natural number $N$
such that $\alpha/N\in\Upsilon_{{\mathfrak f}}$.
We extend $\tau$ to a function on $G_{{\mathfrak f}}$ by defining
$$
\tau(\alpha)
=
\frac{{\rm Dec}AS(N,N^{-1}\alpha){\rm Dec}i (N,N^{-1}\alpha)}
{{\rm Dec}AS(N,N^{-1}){\rm Dec}i (N,N^{-1})}
\tau(N^{-1}\alpha),
\quad
\alpha\in G_{{\mathfrak f}},
$$
where $N\in{\mathbb N}$ is chosen so that $N^{-1}\alpha\in\Upsilon_{{\mathfrak f}}$.
It follows from the cocycle relation and the fact that
$\tau(N^{-1})=1$ for $N\in{\mathbb N}\cap\Upsilon_{{\mathfrak f}}^{-1}$,
that this does not depend on our choice of $N$.
Furthermore it is easy to check that on the whole
group $G_{{\mathfrak f}}$ the formula (\ref{splitting}) still holds.
We extend $\tau$ to a function on ${\rm GL}_{n}(k)$.
To do this we choose a set ${\rm Rep}$ of representatives
for cosets ${\rm GL}_{n}(k)/G_{{\mathfrak f}}$.
Thus every element of ${\rm GL}_{n}(k)$ may be uniquely
expressed in the form $r\alpha$ with $r\in{\rm Rep}$ and $\alpha\in G_{{\mathfrak f}}$.
We define
$$
\tau(r\alpha)
=
{\rm Dec}AS(r,\alpha){\rm Dec}i (r,\alpha)\tau(\alpha).
$$
\paragraph{The cocycle ${\rm Dec}_{m}$.}
We define the cocycle ${\rm Dec}_{m}$ on the dense subgroup ${\rm SL}_{n}(k)$
of ${\rm SL}_{n}(k_{m})$ by
\begin{equation}
\label{decm}
{\rm Dec}_{m}(\alpha,\beta)
=
\frac{\partial\tau(\alpha,\beta)}
{{\rm Dec}AS(\alpha,\beta){\rm Dec}i (\alpha,\beta)}.
\end{equation}
We shall prove that ${\rm Dec}_{m}$ extends to a continuous cocycle
on ${\rm SL}_{n}(k_{m})$.
We then define for $\alpha,\beta\in{\rm SL}_{n}({\mathbb A})$
$$
{\rm Dec}A(\alpha,\beta)
=
{\rm Dec}AS(\alpha,\beta)
{\rm Dec}i (\alpha,\beta)
{\rm Dec}_{m}(\alpha,\beta).
$$
It follows immediately from the definition (\ref{decm})
that the restriction of ${\rm Dec}A$ to ${\rm SL}_{n}(k)$
is $\partial \tau$.
Therefore ${\rm Dec}A$ is metaplectic.
It remains to prove the following.
\begin{theorem}
The cocycle ${\rm Dec}_{m}$ on ${\rm SL}_{n}(k)$ extends by
continuity to ${\rm SL}_{n}(k_{m})$.
\end{theorem}
{\it Proof. }
It follows immediately from the definitions
that for $\alpha,\beta\in {\rm SL}_{n}(k)$ and $\epsilon\in G_{{\mathfrak f}}$ we have
$$
{\rm Dec}_{m}(\alpha,\beta\epsilon)
=
{\rm Dec}_{m}(\alpha,\beta).
$$
It is therefore sufficient to prove that for any $\beta\in{\rm SL}_{n}(k)$,
the function $\alpha\mapsto{\rm Dec}_{m}(\alpha,\beta)$ is continuous.
We fix $\beta$.
Note that for $\epsilon\in G_{{\mathfrak f}}\cap (\beta G_{{\mathfrak f}}\beta^{-1})$ we have
from the cocycle relation:
$$
{\rm Dec}_{m}(\alpha\epsilon,\beta)
=
{\rm Dec}_{m}(\alpha,\beta){\rm Dec}_{m}(\epsilon,\beta).
$$
In particular the map $\psi:G_{{\mathfrak f}}\cap \beta G_{{\mathfrak f}}\beta^{-1}\to\mu_{m}$
given by $\psi(\epsilon)={\rm Dec}_{m}(\epsilon,\beta)$ is a homomorphism.
To prove continuity we need only show that $\ker(\psi)\cap {\rm SL}_{n}(k)$
is open in the induced topology from ${\rm SL}_{n}(k_{m})$.
However this fact follows immediately from
Lemma \ref{congruence} below.
$\Box$
\begin{remark}
It is worth noting that $\ker(\psi)$ is not open in the topology
induced by ${\rm GL}_{n}(k_{m})$ and ${\rm Dec}_{m}$ does not extend by
continuity to ${\rm GL}_{n}(k_{m})$.
\end{remark}
\paragraph{A weak form of the congruence subgroup problem.}
Let $S_{m}$ be the set of finite primes in $S$ and define
$R=\{x\in k: \forall v\in S_{m}\; |x|_{v}\le 1\}$.
This is a subring of $k$ whose primes correspond to the elements
of $S_{m}$.
The ring $R$ is a dense subring of ${\mathfrak o}_{m}:=\oplus_{v\in S_{m}}{\mathfrak o}_{v}$.
\begin{lemma}
\label{congruence}
Every subgroup $H$ of finite index in ${\rm SL}_{n}(R)$
is a congruence subgroup, i.e.\ $H$ contains all matrices
congruent to the identity modulo some non-zero ideal of $R$.
Equivalently ${\rm SL}_{n}({\mathfrak o}_{m})$ is the profinite completion
of ${\rm SL}_{n}(R)$.
\end{lemma}
\begin{remark}
This is a very weak statement, which follows immediately
from \cite{bassmilnorserre} (or \cite{serre2} for $n=2$).
However since these papers effectively construct the
universal metaplectic cover of ${\rm SL}_{n}$,
it is worth noting that the limited result required here
can be obtained in an elementary way.
\end{remark}
{\it Proof. }
Let $H$ be a subgroup of ${\rm SL}_{n}(R)$ of index $d$.
We shall assume without loss of generality that $m$ divides $d$
and that $H$ is normal in ${\rm SL}_{n}(R)$.
We shall show that any matrix congruent to the identity
modulo $d^{2}R$ is in $H$.
First note that the $d$-th power of any element of ${\rm SL}_{n}(R)$
is in $H$.
Therefore $H$ contains the elementary matrices
$$
I_{n}+\lambda d e_{i,j},
\quad
\lambda\in R,\;
i\ne j.
$$
Here $e_{i,j}$ denotes the matrix whose $(i,j)$-entry is 1
and whose other entries are all zero.
By a $d$-operation we shall mean an operation of the form ``add
$\lambda d$ times row $i$ to row $j$'' ($i\ne j$, $\lambda \in R$).
If a matrix can be reduced by $d$-operations to the identity matrix
then that matrix must be in $H$, since such an operation has the effect of
multiplying on the left by $I_{n}+\lambda d e_{j,i}$.
Now let $A=(a_{i,j})$ be any matrix congruent to the identity
modulo $d^{2}R$.
We shall show that $A$ may be reduced to the identity matrix
by $d$-operations.
Since $m$ divides $d^{2}$, the entries $a_{i,i}$ on the diagonal of $A$
are units in $R$ (they are congruent to 1 modulo every prime ideal
of $R$).
The entries off the diagonal are divisible by $d^{2}$.
Therefore we can reduce $A$ by $d^{2}$-operations to a diagonal matrix.
Furthermore the diagonal matrix which we obtain
will still be congruent to the identity modulo $d^{2}$.
Now let $A$ be diagonal.
It remains to show how $A$ can be reduced to the identity.
We first describe a method for converting $a_{i,i}$ to $1$.
For $i<n$ we may add $da_{i,i}^{-1}$ times row $i$ to row $i+1$.
This gives us a $d$ in the $(i+1,i)$ entry.
Next subtracting $d\frac{a_{i,i}-1}{d^{2}}$ times row $i+1$ from
row $i$ we obtain a $1$ in the $(i,i)$ entry.
After this we subtract $d$ times row $i$ from row $i+1$ to obtain
a zero there.
Finally, subtracting a multiple of row $i+1$ from row $i$
we obtain a diagonal matrix with a 1 in the $(i,i)$ position.
In this process we have only changed the
$(i,i)$ and $(i+1,i+1)$-entries.
We may perform this process for $i=1,2,\ldots,n-1$
consecutively to obtain a diagonal matrix with $a_{i,i}=1$ for
$i=1,2,\ldots,n-1$.
Since the resulting matrix has determinant 1, it follows that
$a_{n,n}$ is also 1.
$\Box$
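The diagonal reduction can be checked on a small concrete instance (an illustration with hypothetical values; we work over $\mathbb{Q}$ rather than over the ring $R$ of the lemma): take $n=2$, $d=3$, and $a=1+d^{2}=10$, so that $a\equiv 1\bmod d^{2}$, and reduce ${\rm diag}(a,a^{-1})$ to the identity by the four $d$-operations described above.

```python
# Sketch: the four d-operations of the proof, traced on diag(a, 1/a)
# with d = 3 and a = 1 + d^2 = 10.  Each coefficient below is d times
# an element that is a unit or integer in the ring R of the lemma.
from fractions import Fraction as F

d = 3
a = F(1 + d * d)                   # a == 1 (mod d^2)
A = [[a, F(0)], [F(0), 1 / a]]     # diagonal, determinant 1

def row_op(M, i, j, c):
    """Add c times row j to row i (a d-operation when c = lambda*d)."""
    M[i] = [M[i][k] + c * M[j][k] for k in range(2)]

row_op(A, 1, 0, d / a)              # put d in the (2,1) entry
row_op(A, 0, 1, -(a - 1) / d)       # make the (1,1) entry equal to 1
row_op(A, 1, 0, -d)                 # clear the (2,1) entry
row_op(A, 0, 1, (a - 1) / (d * a))  # clear the (1,2) entry

assert A == [[F(1), F(0)], [F(0), F(1)]]
```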
\paragraph{A product formula for ${\rm Dec}A$.}
For any place $v$ of $k$, let ${\rm Dec}v$ be the restriction of
${\rm Dec}A$ to ${\rm SL}_{n}(k_{v})$.
It is known (see \cite{hill2,hill3}) that
for almost all places $v$
the cocycle ${\rm Dec}v$ is trivial on ${\rm GL}_{n}({\mathfrak o}_{v})$.
Therefore the product $\prod_{v}{\rm Dec}_{v}(\alpha,\beta)$
converges and we have
up to a coboundary\footnote{A formula for this coboundary
is given in \cite{hill2}.}
$$
{\rm Dec}AS(\alpha,\beta)
=
\prod_{v\notin S}
{\rm Dec}v(\alpha,\beta),
\quad
\alpha,\beta\in{\rm SL}_{n}({\mathbb A}S).
$$
As the groups ${\rm SL}_{n}(k_{v})$ ($v\in S$) are perfect,
the restriction maps give an isomorphism:
$$
H^{2}({\rm SL}_{n}(k_{S}),\mu_{m})
\cong
\bigoplus_{v\in S}
H^{2}({\rm SL}_{n}(k_{v}),\mu_{m}).
$$
(The corresponding statement for ${\rm GL}_{n}$ would be false).
This implies up to a coboundary on ${\rm SL}_{n}({\mathbb A})$:
$$
{\rm Dec}A
=
\prod_{v}
{\rm Dec}v.
$$
\subsection{Application : The power reciprocity law.}
We shall now deduce the power reciprocity law
from our results.
Consider the cocycle ${\rm Dec}A$ on ${\rm SL}_{3}({\mathbb A})$.
For $\alpha\in {\mathbb A}^{\times}$ we define
$$
\varphi(\alpha)
=
\left(
\begin{array}{ccc}
\alpha & 0 & 0\\
0 & \alpha^{-1} & 0\\
0 & 0 & 1
\end{array}
\right),
\quad
\varphi^{\prime}(\alpha)
=
\left(
\begin{array}{ccc}
\alpha & 0 & 0\\
0 & 1 & 0\\
0 & 0 & \alpha^{-1}
\end{array}
\right).
$$
One knows (see \cite{hill2} Lemma 23)
that for $\alpha,\beta\in{\mathbb A}^{\times}$ we have
$$
[\varphi(\alpha),\varphi^{\prime}(\beta)]_{{\rm Dec}AS}
=
\prod_{v\notin S}
(\alpha,\beta)_{v,m}.
$$
On the other hand if $\alpha,\beta\in k^{\times}$, then since
${\rm Dec}A$ is metaplectic we have
$$
[\varphi(\alpha),\varphi^{\prime}(\beta)]_{{\rm Dec}A}
=
1.
$$
This implies for $\alpha,\beta\in k^{\times}$,
$$
\prod_{v\not\in S}(\alpha,\beta)_{v,m}
\prod_{v\in S}[\varphi(\alpha),\varphi^{\prime}(\beta)]_{{\rm Dec}v}
=
1.
$$
Fix a place $v\in S$ and consider the function
$\psi:k_{v}^{\times}\times k_{v}^{\times}\to \mu_{m}$ defined by
$$
\psi(\alpha,\beta)
=
[\varphi(\alpha),\varphi^{\prime}(\beta)]_{{\rm Dec}v},
$$
where ${\rm Dec}v$ is the restriction of ${\rm Dec}A$ to ${\rm SL}_{3}(k_{v})$.
To prove the reciprocity law
it remains to show that $\psi$ is the $m$-th power
Hilbert symbol.
For real $v$ this is a consequence of Corollary 1 (\S3.8).
For complex $v$ both $\psi$ and the Hilbert symbol
are trivial for topological reasons.
Assume from now on that $v$ is a non-archimedean
place dividing $m$.
The function $\psi$ is bimultiplicative and continuous
by the properties of commutators.
Furthermore we have $\psi(\alpha,1-\alpha)=1$ for all
$\alpha\ne 1$ since this formula holds on the dense subset
$k^{\times}\setminus\{1\}$.
Therefore $\psi$ is a continuous Steinberg symbol.
As the Hilbert symbol is the universal continuous
Steinberg symbol on $k_{v}$ we have
$\psi(\alpha,\beta)=(\alpha,\beta)_{m,v}^{a}$
for some fixed $a\in{\mathbb Z}/m{\mathbb Z}$.
Substituting $\alpha=\zeta\in\mu_{m}$, $\beta\in{\mathfrak o}_{v}^{\times}$ we have
(see for example \cite{serre3} XIV Proposition 6):
$$
(\zeta,\beta)_{m,v}
=
\zeta^{\textstyle{\frac{1-N^{k_{v}}_{{\mathbb Q}_{p}}(\beta)}{m}}}.
$$
Taking $\beta\in{\mathfrak o}$ close to 1 in the topology of ${\mathfrak o}_{w}$
for all $w\in S\setminus\{v\}$ we have
$$
\psi(\zeta,\beta)
=
[\varphi(\zeta),\varphi'(\beta)]_{{\rm Dec}AS}^{-1}.
$$
Proposition 2 of \cite{hill2}
implies
$$
[\varphi(\zeta),\varphi'(\beta)]_{{\rm Dec}AS}^{-1}
=
\zeta^{\textstyle{\frac{1-N^{k}_{{\mathbb Q}}(\beta)}{m}}}
=
(\zeta,\beta)_{m,v}.
$$
With a suitable choice of $\zeta,\beta$ this shows that $a=1$.
$\Box$
As was mentioned in the introduction,
it would be more satisfactory to have a local definition
of ${\rm Dec}v$ for $v\in S_{m}$
and a local proof that $\psi$ is the Hilbert symbol.
\subsection{Extending the cocycle to ${\rm GL}_{n}$.}
We now have a metaplectic cocycle ${\rm Dec}A$ on ${\rm SL}_{n}({\mathbb A})$,
whose restriction to ${\rm SL}_{n}({\mathbb A}S\oplus k_{\infty})$
extends naturally to ${\rm GL}_{n}({\mathbb A}S\oplus k_{\infty})$.
One might ask whether ${\rm Dec}AS{\rm Dec}i$
extends to a metaplectic cocycle on ${\rm GL}_{n}({\mathbb A})$.
The answer to this depends on precisely how one poses the question.
If one asks whether there is a cocycle ${\rm Dec}_{m}$ on ${\rm GL}_{n}(k_{m})$,
which when multiplied together with ${\rm Dec}AS$ and ${\rm Dec}i$
gives a metaplectic cocycle then the answer is ``no''.
However it is true that ${\rm Dec}AS{\rm Dec}i$ is the restriction
of a metaplectic cocycle ${\rm Dec}A$ on ${\rm GL}_{n}({\mathbb A})$.
\paragraph{The change of base field property.}
By embedding the group ${\rm GL}_{n}$ in ${\rm SL}_{n+1}$,
one can obtain a perfectly good
metaplectic extension of ${\rm GL}_{n}$ by restriction.
In fact, this is the metaplectic extension
which has been most studied.
However, such extensions are badly behaved under change of
base field (see \cite{paddytori})
compared with ${\rm Dec}AS{\rm Dec}i$.
For this reason, I shall extend ${\rm Dec}AS{\rm Dec}i$ to ${\rm GL}_{n}({\mathbb A})$
to obtain an extension which is well behaved under change of base
field.
More precisely, if $l$ is a finite extension of $k$
of degree $d$ then by choosing
a basis for $l$ over $k$ we may regard ${\rm GL}_{n}({\mathbb A}_{l})$ as a
subgroup of ${\rm GL}_{nd}({\mathbb A}_{k})$.
With this identification ${\rm GL}_{n}(l)$ is a subgroup of ${\rm GL}_{nd}(k)$.
Thus metaplectic extensions of ${\rm GL}_{nd}/k$ restrict to
metaplectic extensions of ${\rm GL}_{n}/l$.
We shall write $R$ for the restriction map from ${\rm GL}_{nd}/k$ to
${\rm GL}_{n}/l$.
The classes ${\rm Dec}AS$ and ${\rm Dec}i$
have the following ``change of base field property'':
$$
R({\rm Dec}AS^{(k)})={\rm Dec}AS^{(l)},\quad
R({\rm Dec}i ^{(k)})={\rm Dec}i ^{(l)}.
$$
This is clear since the base field never arises
in the definitions of ${\rm Dec}AS$ or ${\rm Dec}i $.
It is known that the class on ${\rm GL}_{n}$ obtained by
restricting from ${\rm SL}_{n+1}$ does not have the
change of base field property
(see for example \cite{paddytori}).
We shall extend ${\rm Dec}AS{\rm Dec}i$ to ${\rm GL}_{n}({\mathbb A})$
in such a way that it does have this property.
To achieve this it is clearly sufficient to treat
the case $k={\mathbb Q}(\mu_{m})$.
From now on we shall restrict ourselves to this case.
\paragraph{The metaplectic kernel of ${\rm GL}_{n}$.}
The group ${\rm GL}_{n}$ is the semi-direct product of
${\rm SL}_{n}$ and ${\rm GL}_{1}$.
The normal subgroups ${\rm SL}_{n}({\mathbb A})$
and ${\rm SL}_{n}(k)$ are perfect,
i.e. they are equal to their own commutator subgroups.
Therefore the Hochschild--Serre spectral sequence
shows that the restriction maps give isomorphisms:
$$
H^{2}({\rm GL}_{n}({\mathbb A}),\mu_{m})
\cong
H^{2}({\rm SL}_{n}({\mathbb A}),\mu_{m})
\oplus
H^{2}({\rm GL}_{1}({\mathbb A}),\mu_{m}),
$$
$$
H^{2}({\rm GL}_{n}(k),\mu_{m})
\cong
H^{2}({\rm SL}_{n}(k),\mu_{m})
\oplus
H^{2}({\rm GL}_{1}(k),\mu_{m}).
$$
For an algebraic group $G$ we shall write ${\mathcal M}(G,\mu_{m})$
for the kernel of the restriction map
$H^{2}(G({\mathbb A}),\mu_{m})\to H^{2}(G(k),\mu_{m})$.
The above isomorphisms imply that we have
$$
{\mathcal M}({\rm GL}_{n},\mu_{m})
\cong
{\mathcal M}({\rm SL}_{n},\mu_{m})
\oplus
{\mathcal M}({\rm GL}_{1},\mu_{m}).
$$
We have already constructed a metaplectic extension
of ${\rm SL}_{n}$, so to show that our cocycle extends to a metaplectic
cocycle of ${\rm GL}_{n}$ we need only show that its restriction
to ${\rm GL}_{1}({\mathbb A}S\oplus k_{\infty})$ extends to a
metaplectic cocycle on ${\rm GL}_{1}({\mathbb A})$.
However by Theorem \ref{stability}
and \cite{hill2}, the restriction of ${\rm Dec}AS{\rm Dec}i$ to ${\rm GL}_{1}$ is simply
the cocycle ${\rm Dec}AS{\rm Dec}i$ constructed in the case $n=1$.
We describe the group $H^{2}({\mathbb A}^{\times},\mu_{m})$.
For $\sigma\in H^{2}({\mathbb A}^{\times},\mu_{m})$
the commutator of $\sigma$ is a continuous
bimultiplicative, skew-symmetric function
${\mathbb A}^{\times}\times{\mathbb A}^{\times}\to\mu_{m}$.
We therefore have a map
$$
H^{2}({\mathbb A}^{\times},\mu_{m})
\to
{\rm Hom}(\wedge^{2}{\mathbb A}^{\times},\mu_{m}).
$$
This map is surjective.
We write $H^{2}_{sym}({\mathbb A}^{\times},\mu_{m})$ for its kernel.
There is an isomorphism given by the restriction maps (see \cite{klose}):
$$
H^{2}_{sym}({\mathbb A}^{\times},\mu_{m})
\cong
\bigoplus_{v} H^{2}(\mu_{m}(k_{v}),\mu_{m}).
$$
Each of the groups $H^{2}(\mu_{m}(k_{v}),\mu_{m})$ is
canonically isomorphic to ${\mathbb Z}/m$.
We write $H^{2}_{asym}({\mathbb A}^{\times},\mu_{m})$
for the kernel of the restriction map
$H^{2}({\mathbb A}^{\times},\mu_{m})
\to\oplus_{v} H^{2}(\mu_{m}(k_{v}),\mu_{m})$.
Thus the commutator gives an isomorphism:
$$
H^{2}_{asym}({\mathbb A}^{\times},\mu_{m})
\cong
{\rm Hom}(\wedge^{2}{\mathbb A}^{\times},\mu_{m}),
$$
and we have a decomposition
$$
H^{2}({\mathbb A}^{\times},\mu_{m})
=
H^{2}_{sym}({\mathbb A}^{\times},\mu_{m})
\oplus
H^{2}_{asym}({\mathbb A}^{\times},\mu_{m}).
$$
There is a similar decomposition of $H^{2}(k^{\times},\mu_{m})$:
\begin{eqnarray*}
H^{2}(k^{\times},\mu_{m})
&=&
H^{2}_{sym}(k^{\times},\mu_{m})
\oplus
H^{2}_{asym}(k^{\times},\mu_{m}),\\
H^{2}_{sym}(k^{\times},\mu_{m})
&\cong&
H^{2}(\mu_{m}(k),\mu_{m})
\cong
{\mathbb Z}/m{\mathbb Z},\\
H^{2}_{asym}(k^{\times},\mu_{m})
&\cong&
{\rm Hom}(\wedge^{2}k^{\times},\mu_{m}).
\end{eqnarray*}
Consider the restriction map
$H^{2}({\mathbb A}^{\times},\mu_{m})\to H^{2}(k^{\times},\mu_{m})$.
Clearly the restriction to $k^{\times}$ of a symmetric cocycle on
${\mathbb A}^{\times}$ is symmetric\footnote{
However the restriction of an asymmetric cocycle is not necessarily
asymmetric.}.
The resulting map
$H^{2}_{sym}({\mathbb A}^{\times},\mu_{m})\to H^{2}(k^{\times},\mu_{m})$
corresponds to the map $\oplus_{v}{\mathbb Z}/m{\mathbb Z} \to {\mathbb Z}/m{\mathbb Z}$ given by
$$
(a_{v})
\mapsto
\sum_{v} a_{v}.
$$
We now examine the commutator of ${\rm Dec}AS{\rm Dec}i$.
Under our conditions on $k$ the group
${\rm Hom}(\wedge^{2}k_{\infty}^{\times},\mu_{m})$ is trivial,
so ${\rm Dec}i $ has trivial commutator.
Therefore the commutator of ${\rm Dec}i {\rm Dec}AS$ is the
same as the commutator of ${\rm Dec}AS$.
This has been calculated in \cite{hill2} (Theorems 4 and 5)
and is given by
$$
[\alpha,\beta]_{{\mathbb A}S}
=
\left\{
\begin{array}{ll}
(-1)^{\displaystyle{\frac{(|\alpha|_{{\mathbb A}S}-1)(|\beta|_{{\mathbb A}S}-1)}{m^{2}}}}
\displaystyle{\prod_{v\notin S} (\alpha,\beta)_{v,m}}
& \hbox{if $m$ is even,}\\
\displaystyle{\prod_{v\notin S}(\alpha,\beta)_{v,m}}
& \hbox{if $m$ is odd.}
\end{array}
\right.
$$
Define a map $\chi_{k}:{\mathbb A}_{k}^{\times}/k^{\times}\to{\mathbb Z}_{2}^{\times}$ by
$$
\chi_{k}(\alpha)
=
\chi_{{\mathbb Q}}(N^{k}_{{\mathbb Q}}\alpha),
\qquad
\chi_{{\mathbb Q}}(\alpha)
=
{\rm sign}(\alpha_{\infty})\alpha_{2} \prod_{\hbox{$p$ finite}}|\alpha|_{p}.
$$
Define also a bilinear map
$\psi_{k}:({\mathbb A}^{\times}/k^{\times})\times({\mathbb A}^{\times}/k^{\times})\to\mu_{m}$
by
$$
\psi_{k}(\alpha,\beta)
=
(-1)^{\frac{(\chi_{k}(\alpha)-1)(\chi_{k}(\beta)-1)}{m^{2}}}.
$$
The point of this is that for $\alpha,\beta\in{\mathbb A}S^{\times}$
we have
$$
\psi_{k}(\alpha,\beta)
=
(-1)^{\frac{(|\alpha|_{{\mathbb A}S}-1)(|\beta|_{{\mathbb A}S}-1)}{m^{2}}}.
$$
\begin{theorem}
There is a unique class ${\rm Dec}A\in {\mathcal M}({\rm GL}_{1}/{\mathbb Q}(\mu_{m}),\mu_{m})$
with the following properties:
\begin{itemize}
\item
the restriction of ${\rm Dec}A$ to ${\mathbb A}S^{\times}$
is ${\rm Dec}AS$;
\item
the restriction of ${\rm Dec}A$ to $k_{\infty}^{\times}$
is ${\rm Dec}i $;
\item
the commutator of ${\rm Dec}A$ is
$$
[\alpha,\beta]_{{\rm Dec}A}
=
\psi_{{\mathbb Q}(\mu_{m})}(\alpha,\beta)
\prod_{v} (\alpha,\beta)_{m,v}.
$$
\end{itemize}
\end{theorem}
{\it Proof. }
It follows from the above discussion that there is a unique
class with the given commutator and restrictions
to ${\mathbb A}S^{\times}$ and $k_{\infty}^{\times}$
and any given symmetric restriction to $\mu_{m}(k_{m})$.
As $m$ is a power of a prime, $k_{m}$ is a field,
so there is a unique choice of restriction to $\mu_{m}(k_{m})$
for which the restriction to $k$ is asymmetric.
$\Box$
\end{document} |
\begin{document}
\title{
The Eddy Current--LLG Equations--Part I: FEM-BEM Coupling
}
\author{Michael Feischl}
\address{School of Mathematics and Statistics,
The University of New South Wales,
Sydney 2052, Australia}
\email{[email protected]}
\author{Thanh Tran}
\address{School of Mathematics and Statistics,
The University of New South Wales,
Sydney 2052, Australia}
\email{[email protected]}
\thanks{Supported by the Australian Research Council under
grant numbers DP120101886 and DP160101755}
\subjclass[2000]{Primary 35Q40, 35K55, 35R60, 60H15, 65L60,
65L20, 65C30; Secondary 82D45}
\keywords{Landau--Lifshitz--Gilbert equation, eddy current,
finite element, boundary element, coupling, a priori error
estimates, ferromagnetism}
\date{\today}
\begin{abstract}
We analyse a numerical method for the coupled
system of the eddy current equations in ${\mathbb R}^3$ with the
Landau--Lifshitz--Gilbert equation in a bounded domain.
The unbounded domain is discretised by means of
finite-element/boundary-element coupling. Even though the
considered problem is strongly nonlinear, the numerical approach is
constructed such that only two linear systems per time step have to
be solved. In this first part of the paper,
we prove unconditional weak convergence (of a
subsequence) of the finite-element solutions
towards a weak solution. A priori error estimates
will be presented in the second part.
\end{abstract}
\maketitle
\section{Introduction}
This paper deals with the coupling of finite element and boundary
element methods to solve the system of the eddy current
equations in
the whole 3D spatial space and the Landau-Lifshitz-Gilbert equation
(LLG), the so-called ELLG system or equations. The system is also called the
quasi-static Maxwell-LLG (MLLG) system.
The LLG is widely considered as a valid model of micromagnetic
phenomena occurring in, e.g., magnetic sensors, recording heads,
and magneto-resistive storage device~\cite{Gil,LL,prohl2001}.
Classical results concerning existence and non-uniqueness of solutions can
be found in~\cite{as,vis}. In a ferromagnetic material, magnetisation is
created or affected by external electromagnetic fields. It is therefore
necessary to augment the Maxwell system with the LLG, which describes the
influence of the ferromagnet; see e.g.~\cite{cim, KruzikProhl06, vis}.
Existence, regularity and local uniqueness for the MLLG equations are studied
in~\cite{CimExist}.
Throughout the literature, there are various works on numerical
approximation methods for the LLG, ELLG, and MLLG
equations~\cite{alouges,alok,bako,bapr,cim,ellg,thanh} (the list is not
exhaustive),
and even with the full Maxwell system on bounded
domains~\cite{BBP,MLLG}, and in the whole~${\mathbb R}^3$~\cite{CarFab98}.
Originating from the seminal
work~\cite{alouges}, the recent works~\cite{ellg,thanh} consider a
similar numerical integrator for a bounded domain.
This work studies the ELLG equations where we
consider the electromagnetic field on the whole ${\mathbb R}^3$ and do not need to
introduce artificial boundaries.
Differently from~\cite{CarFab98} where the Faedo-Galerkin method is
used to prove existence of weak solutions,
we extend the analysis for the integrator used in~\cite{alouges, ellg,
thanh} to a
finite-element/boundary-element (FEM/BEM) discretisation of the
eddy current part on ${\mathbb R}^3$. This is inspired by the FEM/BEM coupling
approach designed for the pure eddy current problem
in~\cite{dual}, which
allows one to treat unbounded domains without introducing artificial
boundaries. Two approaches are proposed in~\cite{dual}: the
so-called ``magnetic (or $\boldsymbol{H}$-based)
approach'' which eliminates the electric
field, retaining only the magnetic field as the unknown in the
system, and the ``electric (or $\boldsymbol{E}$-based)
approach'' which considers a
primitive of the electric field as the only unknown. The
coupling of the eddy-current system with the LLG dictates that
the first approach is more appropriate; see~\eqref{eq:strong}.
The main result of this first part is the weak convergence of the
discrete approximation towards a weak solution without any
condition on the space and time discretisation. This also proves
the existence of weak solutions.
The remainder of this part is organised as follows.
Section~\ref{section:model} introduces the coupled problem and the
notation, presents the numerical algorithm, and states the main
result of this part of
the paper. Section~\ref{sec:pro} is devoted to the proof of
this main result. Numerical results are presented in
Section~\ref{section:numerics}. The second part of this paper~\cite{FeiTraII} proves a~priori estimates for the proposed algorithm.
\section{Model Problem \& Main Result}\label{section:model}
\subsection{The problem}\label{subsec:pro}
Consider a bounded Lipschitz domain $D\subset {\mathbb R}^3$ with connected
boundary $\Gamma$ having the outward normal vector $\boldsymbol{n}$.
We define $D^\ast:={\mathbb R}^3\setminus\overline D$,
$D_T:=(0,T)\times D$, $\Gamma_T := (0,T)\times\Gamma$,
$D_T^{\ast}:=(0,T)\times D^\ast$,
and ${\mathbb R}^3_T := (0,T)\times{\mathbb R}^3$ for $T>0$.
We start with the quasi-static approximation of the full Maxwell-LLG system from~\cite{vis} which reads as
\begin{subequations}\label{eq:strong}
\begin{alignat}{2}
\boldsymbol{m}_t - \alpha\boldsymbol{m}\times\boldsymbol{m}_t &= -\boldsymbol{m} \times \boldsymbol{H}_{\rm eff}
&&\quad\text{in }D_T,\label{eq:llg}\\
\sigma\boldsymbol{E} -\nabla\times\boldsymbol{H}&=0&&\quad\text{in }{\mathbb R}^3_T,\label{eq:MLLG1}\\
\mu_0\boldsymbol{H}_t +\nabla \times\boldsymbol{E} &=-\mu_0\widetilde\boldsymbol{m}_t&&\quad\text{in }{\mathbb R}^3_T,\label{eq:MLLG2}\\
{\rm div}(\boldsymbol{H}+\widetilde\boldsymbol{m})&=0 &&\quad\text{in }{\mathbb R}^3_T,\label{eq:MLLG3}\\
{\rm div}(\boldsymbol{E})&=0&&\quad\text{in }D^\ast_T,
\end{alignat}
\end{subequations}
where $\widetilde \boldsymbol{m}$ is
the zero extension of $\boldsymbol{m}$ to ${\mathbb R}^3$
and $\boldsymbol{H}_{\rm eff}$ is the
effective field defined by $\boldsymbol{H}_{\rm eff}= C_e\Delta\boldsymbol{m}+\boldsymbol{H}$ for
some constant $C_e>0$.
Here the parameter $\alpha>0$ and the permeability $\mu_0>0$ are
constants, whereas the conductivity $\sigma$ takes a constant
positive value in
$D$ and vanishes in $D^\ast$.
Equation~\eqref{eq:MLLG3} is understood in the
distributional sense because there is a jump of~$\widetilde\boldsymbol{m}$
across~$\Gamma$.
It follows from~\eqref{eq:llg} that $|\boldsymbol{m}|$ is constant in time. We
follow the usual practice and normalise~$|\boldsymbol{m}|=1$, so that
the same condition is required of the initial datum~$\boldsymbol{m}^0$.
The following conditions are imposed on the solutions
of~\eqref{eq:strong}:
\begin{subequations}\label{eq:con}
\begin{alignat}{2}
\partial_n\boldsymbol{m}&=0
&& \quad\text{on }\Gamma_T,\label{eq:con1}
\\
|\boldsymbol{m}| &=1
&& \quad\text{in } D_T, \label{eq:con2}
\\
\boldsymbol{m}(0,\cdot) &= \boldsymbol{m}^0
&& \quad\text{in } D, \label{eq:con3}
\\
\boldsymbol{H}(0,\cdot) &= \boldsymbol{H}^0
&& \quad\text{in } {\mathbb R}^3,
\\
\boldsymbol{E}(0,\cdot) &= \boldsymbol{E}^0
&& \quad\text{in } {\mathbb R}^3,
\\
|\boldsymbol{H}(t,x)|&=\mathcal{O}(|x|^{-1})
&& \quad\text{as }|x|\to \infty,
\end{alignat}
\end{subequations}
where $\partial_n$ denotes the normal derivative.
The initial data~$\boldsymbol{m}^0$ and~$\boldsymbol{H}^0$ satisfy~$|\boldsymbol{m}^0|=1$ in~$D$ and
\begin{align}\label{eq:ini}
\begin{split}
{\rm div}(\boldsymbol{H}^0 + \widetilde \boldsymbol{m}^0)&=0\quad\text{in }{\mathbb R}^3.
\end{split}
\end{align}
Below, we focus on an $\boldsymbol{H}$-based formulation of the problem:
once $\boldsymbol{H}$ and $\boldsymbol{m}$ are known, the field $\boldsymbol{E}$ can be recovered;
see~\eqref{eq:el}.
\subsection{Function spaces and notations}\label{subsec:fun spa}
Before introducing the concept of weak solutions
to problem~\eqref{eq:strong}--\eqref{eq:con}
we need the following definitions of function
spaces. Let $\mathbb{L}^2(D):=L^2(D;{\mathbb R}^3)$ and $\mathbb{H}({\rm curl};D):=\set{\boldsymbol{w}\in
\mathbb{L}^2(D)}{\nabla\times\boldsymbol{w}\in\mathbb{L}^2(D)}$.
We define $H^{1/2}(\Gamma)$ as the usual trace space of $H^1(D)$ and define its dual space $H^{-1/2}(\Gamma)$ by extending
the $L^2$-inner product on $\Gamma$.
For convenience we denote
\[
{\mathcal X}:=\set{(\boldsymbol{\xi},\zeta)\in\mathbb{H}({\rm curl};D)\times
H^{1/2}(\Gamma)}{\boldsymbol{n}\times\boldsymbol{\xi}|_\Gamma =\boldsymbol{n}\times \nabla_\Gamma\zeta
\text{ in the sense of traces}}.
\]
Recall that~$\boldsymbol{n}\times\boldsymbol{\xi}|_\Gamma$ is the tangential trace (or
twisted tangential trace) of~$\boldsymbol{\xi}$, and $\nabla_\Gamma\zeta$ is the
surface gradient of~$\zeta$. Their definitions and properties can
be found in~\cite{buffa, buffa2}.
Finally, if $X$ is a normed vector space then $L^2(0,T;X)$,
$H^m(0,T;X)$, and $W^{m,p}(0,T;X)$
denote the usual corresponding Lebesgue and Sobolev spaces of functions
defined on $(0,T)$ and taking values in $X$.
We finish this subsection with the clarification of the meaning of the
cross product between different mathematical objects.
For any vector functions $\boldsymbol{u}, \boldsymbol{v}, \boldsymbol{w}$
we denote
\begin{gather*}
\boldsymbol{u}\times\nabla\boldsymbol{v}
:=
\left(
\boldsymbol{u}\times\frac{\partial\boldsymbol{v}}{\partial x_1},
\boldsymbol{u}\times\frac{\partial\boldsymbol{v}}{\partial x_2},
\boldsymbol{u}\times\frac{\partial\boldsymbol{v}}{\partial x_3}
\right),
\quad
\nabla\boldsymbol{u}\times\nabla\boldsymbol{v}
:=
\sum_{i=1}^3
\frac{\partial\boldsymbol{u}}{\partial x_i}
\times
\frac{\partial\boldsymbol{v}}{\partial x_i}
\\
\intertext{and}
(\boldsymbol{u}\times\nabla\boldsymbol{v})\cdot \nabla\boldsymbol{w}
:=
\sum_{i=1}^3
\left(
\boldsymbol{u}\times\frac{\partial\boldsymbol{v}}{\partial x_i}
\right)
\cdot \frac{\partial\boldsymbol{w}}{\partial x_i}.
\end{gather*}
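These componentwise conventions are easy to get wrong in an implementation. The following NumPy sketch is purely illustrative (the array \texttt{dv} stands for the collection of partial derivatives $\partial\boldsymbol{v}/\partial x_i$ evaluated at a point) and simply encodes the two definitions above.

```python
import numpy as np

def u_cross_grad_v(u, dv):
    """(u x grad v): the tuple (u x dv/dx_1, u x dv/dx_2, u x dv/dx_3).
    dv is a (3, 3) array whose i-th row is the partial derivative dv/dx_i."""
    return np.stack([np.cross(u, dv[i]) for i in range(3)])

def triple_sum(u, dv, dw):
    """(u x grad v) . grad w = sum_i (u x dv/dx_i) . dw/dx_i."""
    return sum(np.dot(np.cross(u, dv[i]), dw[i]) for i in range(3))
```

Since $(\boldsymbol{u}\times\boldsymbol{a})\cdot\boldsymbol{a}=0$ for any vectors, \texttt{triple\_sum(u, dv, dv)} vanishes for arbitrary data, which is a convenient sanity check for the conventions.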
\subsection{Weak solutions}\label{subsec:wea sol}
A weak formulation for~\eqref{eq:llg} is well-known, see
e.g.~\cite{alouges, thanh}.
Indeed, by multiplying~\eqref{eq:llg} by~$\boldsymbol{\phi}\in
C^\infty(D_T;{\mathbb R}^3)$, using~$|\boldsymbol{m}|=1$ and integration by parts, we
deduce
\[
\alpha\dual{\boldsymbol{m}_t}{\boldsymbol{m}\times\boldsymbol{\phi}}_{D_T}
+
\dual{\boldsymbol{m}\times\boldsymbol{m}_t}{\boldsymbol{m}\times\boldsymbol{\phi}}_{D_T}+C_e
\dual{\nabla\boldsymbol{m}}{\nabla(\boldsymbol{m}\times\boldsymbol{\phi})}_{D_T}
=
\dual{\boldsymbol{H}}{\boldsymbol{m}\times\boldsymbol{\phi}}_{D_T}.
\]
To tackle the eddy-current equations on ${\mathbb R}^3$, we aim to employ
FE/BE coupling methods.
To that end, we use the \emph{magnetic} approach
from~\cite{dual}, which eventually results in a variant of the
\emph{Trifou}-discretisation of the eddy-current Maxwell equations.
The magnetic approach is essentially mandatory in our case, since
the coupling with the LLG equation requires the magnetic field
rather than the electric field.
Multiplying~\eqref{eq:MLLG2} by~$\boldsymbol{\xi}\in C^{\infty}({\mathbb R}^3;{\mathbb R}^3)$
satisfying~$\nabla\times\boldsymbol{\xi}=0$ in~$D^\ast$, integrating over~${\mathbb R}^3$,
and using integration by parts, we obtain for almost
all~$t\in[0,T]$
\[
\mu_0
\dual{\boldsymbol{H}_t(t)}{\boldsymbol{\xi}}_{{\mathbb R}^3}
+
\dual{\boldsymbol{E}(t)}{\nabla\times\boldsymbol{\xi}}_{{\mathbb R}^3}
=
-\mu_0
\dual{\boldsymbol{m}_t(t)}{\boldsymbol{\xi}}_{D}.
\]
Using~$\nabla\times\boldsymbol{\xi}=0$ in~$D^\ast$ and~\eqref{eq:MLLG1} we deduce
\[
\mu_0
\dual{\boldsymbol{H}_t(t)}{\boldsymbol{\xi}}_{{\mathbb R}^3}
+
\sigma^{-1}\dual{\nabla\times\boldsymbol{H}(t)}{\nabla\times\boldsymbol{\xi}}_{D}
=
-\mu_0
\dual{\boldsymbol{m}_t(t)}{\boldsymbol{\xi}}_{D}.
\]
Since~$\nabla\times\boldsymbol{H}=\nabla\times\boldsymbol{\xi}=0$ in~$D^\ast$,
there exist scalar functions~$\varphi$ and~$\zeta$ such
that~$\boldsymbol{H}=\nabla\varphi$ and~$\boldsymbol{\xi}=\nabla\zeta$ in~$D^\ast$.
Therefore, the above equation can be rewritten as
\[
\mu_0
\dual{\boldsymbol{H}_t(t)}{\boldsymbol{\xi}}_{D}
+
\mu_0
\dual{\nabla\varphi_t(t)}{\nabla\zeta}_{D^\ast}
+
\sigma^{-1}\dual{\nabla\times\boldsymbol{H}(t)}{\nabla\times\boldsymbol{\xi}}_{D}
=
-\mu_0
\dual{\boldsymbol{m}_t(t)}{\boldsymbol{\xi}}_{D}.
\]
Since~\eqref{eq:MLLG3} implies~${\rm div}(\boldsymbol{H})=0$ in~$D^\ast$,
we have~$\Delta\varphi=0$ in~$D^\ast$, so that
(formally)~$\Delta\varphi_t=0$ in~$D^\ast$. Hence
integration by parts yields
\begin{equation}\label{eq:Ht}
\mu_0
\dual{\boldsymbol{H}_t(t)}{\boldsymbol{\xi}}_{D}
-
\mu_0
\dual{\partial_n^+\varphi_t(t)}{\zeta}_{\Gamma}
+
\sigma^{-1}\dual{\nabla\times\boldsymbol{H}(t)}{\nabla\times\boldsymbol{\xi}}_{D}
=
-\mu_0
\dual{\boldsymbol{m}_t(t)}{\boldsymbol{\xi}}_{D},
\end{equation}
where~$\partial_n^+$ is the exterior Neumann trace operator with
the limit taken from~$D^\ast$. The advantage of the above
formulation is that no integration over the unbounded domain~$D^\ast$ is
required.
The exterior Neumann trace~$\partial_n^+\varphi_t$ can be computed
from the exterior Dirichlet trace~$\lambda$ of~$\varphi$ by using the
Dirichlet-to-Neumann operator~$\mathfrak{S}$, which is defined as follows.
Let $\gamma^-$ be the interior Dirichlet trace
operator and $\partial_n^-$ be the interior normal derivative
or Neumann trace operator.
(The $-$ sign indicates the trace is taken from $D$.)
Recalling the fundamental solution of the Laplacian
$G(x,y):=1/(4\pi|x-y|)$, we introduce the following integral operators
defined formally on $\Gamma$ as
\begin{align*}
\mathfrak{V}(\lambda):=\gamma^-\overline\mathfrak{V}(\lambda),
\quad
\mathfrak{K}(\lambda):=\gamma^-\overline\mathfrak{K}(\lambda)+\mfrac12,
\quad\text{and}\quad
\mathfrak{W}(\lambda):=-\partial_n^-\overline\mathfrak{K}(\lambda),
\end{align*}
where, for $x\notin\Gamma$,
\begin{align*}
\overline\mathfrak{V}(\lambda)(x)
:=
\int_{\Gamma} G(x,y) \lambda(y)\,ds_y
\quad\text{and}\quad
\overline\mathfrak{K}(\lambda)(x)
:=\int_{\Gamma} \partial_{n(y)}G(x,y)\lambda(y)\,ds_y.
\end{align*}
Moreover, let $\mathfrak{K}^\prime$ denote the adjoint operator of $\mathfrak{K}$
with respect to the extended $L^2$-inner product.
Then the exterior Dirichlet-to-Neumann map $\mathfrak{S}\colon
H^{1/2}(\Gamma)\to H^{-1/2}(\Gamma)$ can be represented as
\begin{equation}\label{eq:dtn}
\mathfrak{S}
=
- \mathfrak{V}^{-1}(1/2-\mathfrak{K}).
\end{equation}
Another more symmetric representation is
\begin{equation}\label{eq:dtn2}
\mathfrak{S}
=
-(1/2-\mathfrak{K}^\prime) \mathfrak{V}^{-1}(1/2-\mathfrak{K})-\mathfrak{W}.
\end{equation}
Recall that~$\varphi$ satisfies~$\boldsymbol{H}=\nabla\varphi$ in~$D^\ast$.
We can choose~$\varphi$ satisfying~$\varphi(x)=O(|x|^{-1})$ as
$|x|\to\infty$.
Now if~$\lambda=\gamma^+\varphi$
then~$\lambda_t=\gamma^+\varphi_t$.
Since~$\Delta\varphi=\Delta\varphi_t=0$ in~$D^\ast$, and since the
exterior Laplace problem has a unique solution, we
have~$\mathfrak{S}\lambda=\partial_n^+\varphi$
and~$\mathfrak{S}\lambda_t=\partial_n^+\varphi_t$.
Hence~\eqref{eq:Ht} can be rewritten as
\begin{equation}\label{eq:H lam t}
\dual{\boldsymbol{H}_t(t)}{\boldsymbol{\xi}}_{D}
-
\dual{\mathfrak{S}\lambda_t(t)}{\zeta}_{\Gamma}
+
\mu_0^{-1}
\sigma^{-1}\dual{\nabla\times\boldsymbol{H}(t)}{\nabla\times\boldsymbol{\xi}}_{D}
=
-
\dual{\boldsymbol{m}_t(t)}{\boldsymbol{\xi}}_{D}.
\end{equation}
We remark that if~$\nabla_\Gamma$ denotes the surface gradient
operator on~$\Gamma$ then it is well-known that
$
\nabla_\Gamma\lambda
=
(\nabla\varphi)|_{\Gamma}
-
(\partial_n^+\varphi)\boldsymbol{n}
=
\boldsymbol{H}|_{\Gamma}
-
(\partial_n^+\varphi)\boldsymbol{n};
$
see e.g.~\cite[Section~3.4]{Monk03}. Hence~$\boldsymbol{n}\times\nabla_\Gamma\lambda =
\boldsymbol{n}\times\boldsymbol{H}|_{\Gamma}$.
The above analysis prompts us to define the following
weak formulation.
\begin{definition}\label{def:fembemllg}
A triple $(\boldsymbol{m},\boldsymbol{H},\lambda)$ satisfying
\begin{align*}
\boldsymbol{m} &\in \mathbb{H}^1(D_T)
\quad\text{and}\quad
\boldsymbol{m}_t|_{\Gamma_T} \in L^2(0,T;H^{-1/2}(\Gamma)),
\\
\boldsymbol{H} &\in L^2(0,T;\mathbb{H}({\rm curl};D))\cap H^1(0,T;\mathbb{L}^2(D)),
\\
\lambda &\in H^1(0,T;H^{1/2}(\Gamma))
\end{align*}
is called a weak solution to~\eqref{eq:strong}--\eqref{eq:con}
if the following statements hold:
\begin{enumerate}
\item $|\boldsymbol{m}|=1$ almost everywhere in $D_T$; \label{ite:1}
\item $\boldsymbol{m}(0,\cdot)=\boldsymbol{m}^0$, $\boldsymbol{H}(0,\cdot)=\boldsymbol{H}^0$, and
$\lambda(0,\cdot)=\gamma^+ \varphi^0$, where~$\varphi^0$ is a scalar
function satisfying $\boldsymbol{H}^0=\nabla\varphi^0$ in~$D^\ast$ (the
assumption~\eqref{eq:ini} ensures the existence
of~$\varphi^0$); \label{ite:2}
\item For all $\boldsymbol{\phi}\in C^\infty(D_T;{\mathbb R}^3)$
\label{ite:3}
\begin{subequations}\label{eq:wssymm}
\begin{align}
\alpha\dual{\boldsymbol{m}_t}{\boldsymbol{m}\times\boldsymbol{\phi}}_{D_T}
&+
\dual{\boldsymbol{m}\times\boldsymbol{m}_t}{\boldsymbol{m}\times\boldsymbol{\phi}}_{D_T}+C_e
\dual{\nabla\boldsymbol{m}}{\nabla(\boldsymbol{m}\times\boldsymbol{\phi})}_{D_T}
\nonumber
\\
&=
\dual{\boldsymbol{H}}{\boldsymbol{m}\times\boldsymbol{\phi}}_{D_T};
\label{eq:wssymm1}
\end{align}
\item There holds $\boldsymbol{n}\times \nabla_\Gamma\lambda = \boldsymbol{n}\times
\boldsymbol{H}|_{\Gamma}$ in the sense of traces; \label{ite:4}
\item For $\boldsymbol{\xi}\in C^\infty(D;{\mathbb R}^3)$ and $\zeta\in
C^\infty(\Gamma)$ satisfying
$\boldsymbol{n}\times\boldsymbol{\xi}|_{\Gamma}=\boldsymbol{n}\times\nabla_\Gamma\zeta$ in the sense of
traces \label{ite:5}
\begin{align}\label{eq:wssymm2}
\dual{\boldsymbol{H}_t}{\boldsymbol{\xi}}_{D_T}
-\dual{\mathfrak{S}\lambda_t}{\zeta}_{\Gamma_T}
+
\sigma^{-1}\mu_0^{-1}\dual{\nabla\times\boldsymbol{H}}{\nabla\times\boldsymbol{\xi}}_{D_T}
&=-\dual{\boldsymbol{m}_t}{\boldsymbol{\xi}}_{D_T};
\end{align}
\end{subequations}
\item For almost all $t\in[0,T]$
\label{ite:6}
\begin{gather}
\norm{\nabla \boldsymbol{m}(t)}{\mathbb{L}^2(D)}^2
+
\norm{\boldsymbol{H}(t)}{\mathbb{H}({\rm curl};D)}^2
+
\norm{\lambda(t)}{H^{1/2}(\Gamma)}^2
\nonumber
\\
+
\norm{\boldsymbol{m}_t}{\mathbb{L}^2(D_t)}^2
+
\norm{\boldsymbol{H}_t}{\mathbb{L}^2(D_t)}^2
+
\norm{\lambda_t}{H^{1/2}(\Gamma_t)}^2 \leq C,
\label{eq:energybound2}
\end{gather}
where the constant $C>0$ is independent of $t$.
\end{enumerate}
\end{definition}
The reason we integrate over~$[0,T]$ in~\eqref{eq:H lam t} to
obtain~\eqref{eq:wssymm2} is to facilitate passing to the
limit in the proof of the main theorem.
The following lemma justifies the above definition.
\begin{lemma}\label{lem:equidef}
Let $(\boldsymbol{m},\boldsymbol{H},\boldsymbol{E})$ be a strong solution
of~\eqref{eq:strong}--\eqref{eq:con}. If $\varphi\in
H^1(0,T;H^1(D^\ast))$
satisfies $\nabla\varphi=\boldsymbol{H}|_{D^\ast_T}$, and if
$\lambda:=\gamma^+\varphi$, then the triple
$(\boldsymbol{m},\boldsymbol{H}|_{D_T},\lambda)$
is a weak solution in the sense of Definition~\ref{def:fembemllg}.
Conversely, let $(\boldsymbol{m},\boldsymbol{H},\lambda)$ be a sufficiently smooth
solution in the sense of Definition~\ref{def:fembemllg},
and let~$\varphi$ be the solution of
\begin{equation}\label{eq:var phi}
\Delta\varphi = 0
\text{ in } D^\ast,
\quad
\varphi = \lambda
\text{ on } \Gamma,
\quad
\varphi(x) = O(|x|^{-1})
\text{ as } |x|\to\infty.
\end{equation}
Then $(\boldsymbol{m},\overline\boldsymbol{H},\boldsymbol{E})$ is a strong solution
to~\eqref{eq:strong}--\eqref{eq:con}, where~$\overline\boldsymbol{H}$ is
defined by
\begin{align}\label{eq:repform}
\overline\boldsymbol{H}:=\begin{cases}
\boldsymbol{H} &\quad\text{in }D_T,\\
\nabla\varphi
&\quad\text{in } D_T^\ast,
\end{cases}
\end{align}
and~$\boldsymbol{E}$ is reconstructed by letting
$\boldsymbol{E}=\sigma^{-1}(\nabla\times\boldsymbol{H})$ in $D_T$ and by solving
\begin{subequations}\label{eq:el}
\begin{alignat}{2}
\nabla\times\boldsymbol{E} &= -\mu_0\overline\boldsymbol{H}_t
&&\quad\text{in } D_T^\ast,
\\
{\rm div}(\boldsymbol{E}) &=0
&&\quad\text{in }D_T^\ast,
\\
\boldsymbol{n}\times \boldsymbol{E}|_{D_T^\ast} &= \boldsymbol{n}\times \boldsymbol{E}|_{D_T}
&&\quad\text{on }\Gamma_T.
\end{alignat}
\end{subequations}
\end{lemma}
\begin{proof}
We follow~\cite{dual}.
Assume that $(\boldsymbol{m},\boldsymbol{H},\boldsymbol{E})$
satisfies~\eqref{eq:strong}--\eqref{eq:con}.
Then clearly Statements~\eqref{ite:1}, \eqref{ite:2} and~\eqref{ite:6} in
Definition~\ref{def:fembemllg} hold, noting~\eqref{eq:ini}.
Statements~\eqref{ite:3}, \eqref{ite:4}
and~\eqref{ite:5} also hold due to the analysis above
Definition~\ref{def:fembemllg}.
The converse is also true due to the
well-posedness of~\eqref{eq:el} as stated
in~\cite[Equation~(15)]{dual}.
\end{proof}
\begin{remark}
The solution~$\varphi$ to~\eqref{eq:var phi} can be represented
as
$
\varphi
=
(1/2+\mathfrak{K})\lambda - \mathfrak{V}\mathfrak{S}\lambda.
$
\end{remark}
The next subsection defines the spaces and functions to be used
in the approximation of the weak solution in
the sense of Definition~\ref{def:fembemllg}.
\subsection{Discrete spaces and functions}\label{subsec:dis spa}
For time discretisation, we use a uniform partition
$0=t_0<t_1<\cdots<t_N=T$ with $t_i:=ik$ and $k:=T/N$. The spatial
discretisation is determined by a shape-regular triangulation
${\mathcal T}_h$ of $D$ into compact tetrahedra $T\in{\mathcal T}_h$ with diameter
$h_T$ satisfying $h_T/C\leq h\leq Ch_T$ for some uniform constant $C>0$.
Denoting by~$\mathcal{N}_h$ the set of nodes of~${\mathcal T}_h$,
we define the following spaces
\begin{align*}
{\mathcal S}^1({\mathcal T}_h)&:=\set{\phi_h\in C(D)}{\phi_h|_T \in \mathcal{P}^1(T)\text{ for all } T\in{\mathcal T}_h},\\
{\mathcal K}_{\boldsymbol{\phi}_h}&:=\set{\boldsymbol{\psi}_h\in{\mathcal S}^1({\mathcal T}_h)^3}{\boldsymbol{\psi}_h(z)\cdot\boldsymbol{\phi}_h(z)=0\text{
for all }z\in\mathcal{N}_h},
\quad\boldsymbol{\phi}_h\in{\mathcal S}^1({\mathcal T}_h)^3,
\end{align*}
where $\mathcal{P}^1(T)$ is the space of polynomials of degree at most~1
on~$T$.
For the discretisation of~\eqref{eq:wssymm2}, we employ the
space $\mathcal{ND}^1({\mathcal T}_h)$ of
first-order N\'ed\'elec (edge) elements for $\boldsymbol{H}$
and the space ${\mathcal S}^1({\mathcal T}_h|_\Gamma)$ for $\lambda$.
Here ${\mathcal T}_h|_{\Gamma}$ denotes the restriction of the
triangulation to the boundary~$\Gamma$.
It follows from Statement~\ref{ite:4} in
Definition~\ref{def:fembemllg} that for each~$t\in[0,T]$, the
pair~$(\boldsymbol{H}(t),\lambda(t))\in{\mathcal X}$. We approximate the space~${\mathcal X}$ by
\begin{align*}
{\mathcal X}_h:=\set{(\boldsymbol{\xi},\zeta)\in \mathcal{ND}^1({\mathcal T}_h)\times
{\mathcal S}^1({\mathcal T}_h|_\Gamma)}{\boldsymbol{n}\times \nabla_\Gamma\zeta =
\boldsymbol{n}\times\boldsymbol{\xi}|_\Gamma}.
\end{align*}
To ensure the condition $\boldsymbol{n}\times \nabla_\Gamma\zeta = \boldsymbol{n}\times
\boldsymbol{\xi}|_{\Gamma}$, we observe the following.
For any~$\zeta\in {\mathcal S}^1({\mathcal T}_h|_\Gamma)$, if $e$ denotes an edge of
${\mathcal T}_h$
on $\Gamma$, then $\int_e \boldsymbol{\xi}\cdot \boldsymbol{\tau}\,ds =
\int_e \nabla \zeta \cdot \boldsymbol{\tau}\,ds = \zeta(z_0)-\zeta(z_1)$, where
$\boldsymbol{\tau}$ is the unit direction vector on~$e$, and
$z_0,z_1$ are the endpoints of $e$.
Thus, taking as degrees of freedom all interior edges of ${\mathcal T}_h$
(i.e. $\int_{e_i}\boldsymbol{\xi}\cdot\boldsymbol{\tau} \,ds$) as well as all nodes of
${\mathcal T}_h|_\Gamma$ (i.e.~$\zeta(z_i)$), we fully determine
a function pair $(\boldsymbol{\xi},\zeta)\in{\mathcal X}_h $.
Due to the considerations above, it is clear that the above
space can be implemented directly without use of Lagrange
multipliers or other extra equations.
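The nodal-difference characterisation of the boundary edge values can be mimicked with elementary data structures. In the sketch below (hypothetical mesh data; \texttt{zeta} maps boundary node indices to nodal values and \texttt{edges} lists oriented boundary edges $(z_0,z_1)$), the edge values telescope, so they sum to zero around any closed loop of boundary edges, the discrete counterpart of $\boldsymbol{\xi}|_\Gamma$ being a surface gradient.

```python
def boundary_edge_dofs(zeta, edges):
    """Edge degrees of freedom on Gamma induced by the nodal values of zeta:
    int_e xi . tau ds = zeta(z0) - zeta(z1) for an oriented edge e = (z0, z1),
    following the sign convention used in the text."""
    return {e: zeta[e[0]] - zeta[e[1]] for e in edges}

# toy closed loop of boundary nodes 0 -> 1 -> 2 -> 0 (hypothetical data)
zeta = {0: 1.0, 1: -0.5, 2: 2.0}
loop = [(0, 1), (1, 2), (2, 0)]
dofs = boundary_edge_dofs(zeta, loop)
```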
The density properties of the finite element
spaces~$\{{\mathcal X}_h\}_{h>0}$
are shown in Subsection~\ref{subsec:som lem}; see
Lemma~\ref{lem:den pro}.
Given functions
$\boldsymbol{w}_h^i\colon D\to {\mathbb R}^d$, $d\in{\mathbb N}$,
for all $i=0,\ldots,N$ we define for all $t\in [t_i,t_{i+1}]$
\begin{align*}
\boldsymbol{w}_{hk}(t):=\frac{t_{i+1}-t}{k}\boldsymbol{w}_h^i + \frac{t-t_i}{k}\boldsymbol{w}_h^{i+1},
\quad
\boldsymbol{w}_{hk}^-(t):=\boldsymbol{w}_h^i,
\quad
\boldsymbol{w}_{hk}^+(t):=\boldsymbol{w}_h^{i+1} .
\end{align*}
Moreover, we define
\begin{equation}\label{eq:dt}
d_t\boldsymbol{w}_h^{i+1}:=\frac{\boldsymbol{w}_h^{i+1}-\boldsymbol{w}_h^i}{k}\quad\text{for all }
i=0, \ldots, N-1.
\end{equation}
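For reference, the three time interpolants and the difference quotient~\eqref{eq:dt} can be written down directly. The following sketch (scalar-valued for simplicity, an illustration only) fixes the indexing conventions.

```python
def interpolants(w, k, t):
    """Given nodal values w[0..N] at t_i = i*k, return the triple
    (w_hk(t), w_hk^-(t), w_hk^+(t)) for t in [t_i, t_{i+1}]."""
    i = min(int(t // k), len(w) - 2)          # index with t in [t_i, t_{i+1}]
    ti, tip1 = i * k, (i + 1) * k
    lin = (tip1 - t) / k * w[i] + (t - ti) / k * w[i + 1]
    return lin, w[i], w[i + 1]

def d_t(w, k, i):
    """Difference quotient d_t w^{i+1} = (w^{i+1} - w^i) / k."""
    return (w[i + 1] - w[i]) / k
```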
Finally, we denote by $\Pi_{{\mathcal S}}$ the usual interpolation operator on
${\mathcal S}^1({\mathcal T}_h)$.
We are now ready to present the algorithm to compute approximate
solutions to problem~\eqref{eq:strong}--\eqref{eq:con}.
\subsection{Numerical algorithm}\label{section:alg}
In the sequel, when there is no confusion we use the same
notation $\boldsymbol{H}$ for
the restriction of $\boldsymbol{H}\colon {\mathbb R}^3_T\to {\mathbb R}^3$ to the domain $D_T$.
\begin{algorithm}\label{algorithm}
\mbox{}
\textbf{Input:} Initial data $\boldsymbol{m}^0_h\in{\mathcal S}^1({\mathcal T}_h)^3$,
$(\boldsymbol{H}^0_h,\lambda_h^0)\in{\mathcal X}_h$,
and parameter $\theta\in[0,1]$.
\textbf{For} $i=0,\ldots,N-1$ \textbf{do:}
\begin{enumerate}
\item Compute the unique function $\boldsymbol{v}^i_h\in {\mathcal K}_{\boldsymbol{m}_h^i}$
satisfying for all $\boldsymbol{\phi}_h\in {\mathcal K}_{\boldsymbol{m}_h^i}$
\begin{align}\label{eq:dllg}
\begin{split}
\alpha\dual{\boldsymbol{v}_h^i}{\boldsymbol{\phi}_h}_D
&+
\dual{\boldsymbol{m}_h^i\times \boldsymbol{v}_h^i}{\boldsymbol{\phi}_h}_D
+
C_e\theta k \dual{\nabla \boldsymbol{v}_h^i}{\nabla \boldsymbol{\phi}_h}_D
\\
&=
-C_e \dual{\nabla \boldsymbol{m}_h^i}{\nabla \boldsymbol{\phi}_h}_D
+
\dual{\boldsymbol{H}_h^i}{\boldsymbol{\phi}_h}_D.
\end{split}
\end{align}
\item Define $\boldsymbol{m}_h^{i+1}\in {\mathcal S}^1({\mathcal T}_h)^3$ nodewise by
\begin{equation}\label{eq:mhip1}
\boldsymbol{m}_h^{i+1}(z) =\boldsymbol{m}_h^i(z) + k\boldsymbol{v}_h^i(z) \quad\text{for all } z\in\mathcal{N}_h.
\end{equation}
\item Compute the unique functions
$(\boldsymbol{H}_h^{i+1},\lambda_h^{i+1})\in{\mathcal X}_h$
satisfying for all
$(\boldsymbol{\xi}_h,\zeta_h)\in {\mathcal X}_h$
\begin{align}\label{eq:dsymm}
\dual{d_t\boldsymbol{H}_h^{i+1}}{\boldsymbol{\xi}_h}_{D}
&
-\dual{d_t\mathfrak{S}_h\lambda_h^{i+1}}{\zeta_h}_\Gamma
+
\sigma^{-1}\mu_0^{-1}\dual{\nabla\times\boldsymbol{H}_h^{i+1}}{\nabla\times\boldsymbol{\xi}_h}_{D}
\nonumber
\\
&=
-\dual{\boldsymbol{v}_h^i}{\boldsymbol{\xi}_h}_{D},
\end{align}
where $\mathfrak{S}_h\colon H^{1/2}(\Gamma)\to {\mathcal S}^1({\mathcal T}_h|_\Gamma)$
is the discrete Dirichlet-to-Neumann operator to be defined
later.
\end{enumerate}
\textbf{Output:} Approximations $(\boldsymbol{m}_h^i,\boldsymbol{H}_h^i,\lambda_h^i)$
for all $i=0,\ldots,N$.
\end{algorithm}
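The overall control flow of Algorithm~\ref{algorithm} is summarised by the following Python skeleton. The two linear solves are abstracted as callables (\texttt{solve\_llg\_step} for~\eqref{eq:dllg}, \texttt{solve\_eddy\_step} for~\eqref{eq:dsymm}); these names are placeholders for properly assembled finite-element solves, not implementations.

```python
def tangent_plane_fembem(m0, H0, lam0, N, k, solve_llg_step, solve_eddy_step):
    """Skeleton of the decoupled time-stepping scheme.

    solve_llg_step(m, H)       -> v                 (step 1, tangent-plane solve)
    solve_eddy_step(H, lam, v) -> (H_new, lam_new)  (step 3, FEM-BEM solve)
    """
    m, H, lam = m0, H0, lam0
    history = [(m, H, lam)]
    for _ in range(N):
        v = solve_llg_step(m, H)                    # step (1)
        m = [mz + k * vz for mz, vz in zip(m, v)]   # step (2), nodewise update
        H, lam = solve_eddy_step(H, lam, v)         # step (3)
        history.append((m, H, lam))
    return history
```

With dummy solvers returning zero velocity, the magnetisation stays fixed, which gives a cheap structural test of the loop.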
The linear formula~\eqref{eq:mhip1} was introduced
in~\cite{bartels} and used in~\cite{Abert_etal}.
Equation~\eqref{eq:dsymm} requires the computation
of~$\mathfrak{S}_h\lambda$ for any~$\lambda\in H^{1/2}(\Gamma)$.
This is done by use of
the boundary element method.
Let~$\mu\in H^{-1/2}(\Gamma)$
and~$\mu_h\in\mathcal{P}^0({\mathcal T}_h|_\Gamma)$
be, respectively, the solutions of
\begin{align}\label{eq:bem}
\mathfrak{V}\mu= (\mathfrak{K}-1/2)\lambda
\quad\text{and}\quad
\dual{\mathfrak{V} \mu_h}{\nu_h}_\Gamma =
\dual{(\mathfrak{K}-1/2)\lambda}{\nu_h}_\Gamma\quad\forall\nu_h\in
\mathcal{P}^0({\mathcal T}_h|_\Gamma),
\end{align}
where $\mathcal{P}^0({\mathcal T}_h|_\Gamma)$ is the space of
piecewise-constant functions on~${\mathcal T}_h|_\Gamma$.
If the representation~\eqref{eq:dtn} of~$\mathfrak{S}$ is used,
then~$\mathfrak{S}\lambda=\mu$, and
we can uniquely define~$\mathfrak{S}_h\lambda$
by solving
\begin{equation}\label{eq:bem1}
\dual{\mathfrak{S}_h\lambda}{\zeta_h}_\Gamma
=
\dual{\mu_h}{\zeta_h}_\Gamma
\quad\forall\zeta_h\in{\mathcal S}^1({\mathcal T}_h|_\Gamma).
\end{equation}
This is known as the Johnson-N\'ed\'elec coupling.
If we use the representation~\eqref{eq:dtn2} for~$\mathfrak{S}\lambda$
then $\mathfrak{S}\lambda = (1/2-\mathfrak{K}^\prime)\mu-\mathfrak{W}\lambda$.
In this case we can uniquely define~$\mathfrak{S}_h\lambda$ by solving
\begin{align}\label{eq:bem2}
\dual{\mathfrak{S}_h\lambda}{\zeta_h}_\Gamma
=
\dual{(1/2-\mathfrak{K}^\prime)\mu_h}{\zeta_h}_\Gamma
-
\dual{\mathfrak{W} \lambda}{\zeta_h}_\Gamma
\quad\forall\zeta_h\in {\mathcal S}^1({\mathcal T}_h|_\Gamma).
\end{align}
This approach yields an (almost) symmetric system and is called
Costabel's coupling.
In practice,~\eqref{eq:dsymm} only requires the computation
of~$\dual{\mathfrak{S}_h\lambda_h}{\zeta_h}_\Gamma$ for
any~$\lambda_h, \zeta_h\in{\mathcal S}^1({\mathcal T}_h|_\Gamma)$.
So in the implementation,
neither~\eqref{eq:bem1} nor~\eqref{eq:bem2} has to be solved.
It suffices to solve the second equation in~\eqref{eq:bem}
and compute the right-hand side of either~\eqref{eq:bem1}
or~\eqref{eq:bem2}.
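In matrix terms, this two-step computation reads as follows. Dense synthetic matrices \texttt{V}, \texttt{K}, \texttt{W} stand in for the Galerkin matrices of $\mathfrak{V}$, $\mathfrak{K}$, $\mathfrak{W}$ (with the adjoint $\mathfrak{K}'$ represented by the transpose); this sketches the data flow only, not a boundary-element implementation.

```python
import numpy as np

def apply_Sh_costabel(V, K, W, lam):
    """First solve the Galerkin system for mu_h (second equation in (eq:bem)),
    then assemble the right-hand side of Costabel's coupling (eq:bem2)."""
    n = V.shape[0]
    half = 0.5 * np.eye(n)
    mu = np.linalg.solve(V, (K - half) @ lam)   # V mu = (K - 1/2) lam
    return (half - K.T) @ mu - W @ lam          # (1/2 - K') mu - W lam

# synthetic stand-in matrices, for illustration only
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
V = A @ A.T + 5.0 * np.eye(5)                   # symmetric positive definite
K = rng.standard_normal((5, 5))
W = rng.standard_normal((5, 5)); W = 0.5 * (W + W.T)
lam = rng.standard_normal(5)
s = apply_Sh_costabel(V, K, W, lam)
```

The result coincides with $S\lambda$ for $S=-(1/2-K^{\top})V^{-1}(1/2-K)-W$, which is symmetric whenever $V$ and $W$ are, mirroring the (almost) symmetric structure of the coupling.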
It is proved in~\cite[Appendix~A]{afembem} that Costabel's coupling
results in a discrete operator which is uniformly elliptic and
continuous:
\begin{align}\label{eq:dtnelliptic}
\begin{split}
-\dual{\mathfrak{S}_h \zeta_h}{\zeta_h}_\Gamma&\geq
C_\mathfrak{S}^{-1}\norm{\zeta_h}{H^{1/2}(\Gamma)}^2\quad\text{for all
}\zeta_h\in {\mathcal S}^1({\mathcal T}_h|_\Gamma),\\
\norm{\mathfrak{S}_h\zeta}{H^{-1/2}(\Gamma)}^2&\leq C_\mathfrak{S}
\norm{\zeta}{H^{1/2}(\Gamma)}^2\quad\text{for all }\zeta\in H^{1/2}(\Gamma),
\end{split}
\end{align}
for some constant $C_\mathfrak{S}>0$ which depends only on $\Gamma$.
Even though the remainder of the analysis works analogously for both
approaches,
we are not aware of an ellipticity result of the
form~\eqref{eq:dtnelliptic} for the Johnson-N\'ed\'elec
approach. Thus, from now on $\mathfrak{S}_h$ is understood to be defined
by~\eqref{eq:bem2}.
\subsection{Main result}
Before stating the main result of this part of the paper,
we first state some general
assumptions. Firstly, the weak convergence of approximate solutions
requires the following
conditions on $h$ and $k$, depending on the value of the parameter~$\theta$
in~\eqref{eq:dllg}:
\begin{equation}\label{eq:hk 12}
\begin{cases}
k = o(h^2) \quad & \text{when } 0 \le \theta < 1/2, \\
k = o(h) \quad & \text{when } \theta = 1/2, \\
\text{no condition} & \text{when } 1/2 < \theta \le 1.
\end{cases}
\end{equation}
Some supporting lemmas, which are of independent interest, do not require any
condition when $\theta=1/2$. For those results, a slightly weaker
condition suffices, namely
\begin{equation}\label{eq:hk con}
\begin{cases}
k = o(h^2) \quad & \text{when } 0 \le \theta < 1/2, \\
\text{no condition} & \text{when } 1/2 \le \theta \le 1.
\end{cases}
\end{equation}
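To illustrate how one might enforce~\eqref{eq:hk 12} or its relaxed variant~\eqref{eq:hk con} in a convergence study, a hypothetical helper choosing $k$ from $h$ is sketched below; the constant \texttt{c} and the particular exponents are our choices and are not prescribed by the analysis, beyond the stated asymptotic conditions.

```python
def admissible_k(h, theta, c=0.1, strict=True):
    """Pick a time step k compatible with the mesh-size conditions:
    k = o(h^2) for theta < 1/2; k = o(h) for theta = 1/2 in the strict
    variant; no condition for theta > 1/2 (and for theta = 1/2 otherwise)."""
    if theta < 0.5:
        return c * h ** 3        # k/h^2 = c*h -> 0, hence k = o(h^2)
    if theta == 0.5 and strict:
        return c * h ** 2        # k/h = c*h -> 0, hence k = o(h)
    return c * h                 # unconstrained: any reasonable choice
```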
The initial data are assumed to satisfy
\begin{equation}\label{eq:mh0 Hh0}
\sup_{h>0}
\left(
\norm{\boldsymbol{m}_h^0}{H^1(D)}
+
\norm{\boldsymbol{H}_h^0}{\mathbb{H}({\rm curl};D)}
+
\norm{\lambda_h^0}{H^{1/2}(\Gamma)}
\right)
<\infty
\quad\text{and}\quad
\lim_{h\to0}
\norm{\boldsymbol{m}_h^0-\boldsymbol{m}^0}{\mathbb{L}^2(D)}
= 0.
\end{equation}
We are now ready to state the main result of this part of the paper.
\begin{theorem}[Existence of solutions]\label{thm:weakconv}
Under the assumptions~\eqref{eq:hk 12} and~\eqref{eq:mh0
Hh0}, the problem~\eqref{eq:strong}--\eqref{eq:con} has a
solution~$(\boldsymbol{m},\boldsymbol{H},\lambda)$ in the sense of
Definition~\ref{def:fembemllg}.
\end{theorem}
\section{Proof of the main result}\label{sec:pro}
\subsection{Some lemmas}\label{subsec:som lem}
In this subsection we prove the key lemmas which are directly needed
in the proof of the theorem.
The first lemma proves density properties of the discrete spaces.
\begin{lemma}\label{lem:den pro}
Provided that the meshes $\{{\mathcal T}_h\}_{h>0}$ are regular,
the union~$\bigcup_{h>0}{\mathcal X}_h$ is dense in~${\mathcal X}$.
Moreover, there exists an interpolation operator
$\Pi_{\mathcal X}:=(\Pi_{{\mathcal X},D},\Pi_{{\mathcal X},\Gamma})\colon
\big({\mathbb{H}}^2(D)\times H^2(\Gamma)\big)
\cap {\mathcal X}\to {\mathcal X}_h$ which satisfies
\begin{align}\label{eq:int}
\norm{(1-\Pi_{\mathcal X})(\boldsymbol{\xi},\zeta)}{\mathbb{H}({\rm curl};D)\times H^{1/2}(\Gamma) }
&\leq C_{{\mathcal X}} h( \norm{\boldsymbol{\xi}}{{\mathbb{H}}^2(D)}+ h^{1/2}\norm{\zeta}{H^2(\Gamma)}),
\end{align}
where $C_{\mathcal X}>0$ depends only on $D$, $\Gamma$, and the shape
regularity of ${\mathcal T}_h$.
\end{lemma}
\begin{proof}
The interpolation operator
$\Pi_{\mathcal X}:=(\Pi_{{\mathcal X},D},\Pi_{{\mathcal X},\Gamma})\colon
\big({\mathbb{H}}^2(D)\times H^2(\Gamma)\big)
\cap {\mathcal X}\to {\mathcal X}_h$ is constructed as follows.
The interior degrees of freedom (edges) of $\Pi_{\mathcal X}(\boldsymbol{\xi},\zeta)$
are equal to the interior degrees of freedom of
$\Pi_{\mathcal{ND}}\boldsymbol{\xi}\in\mathcal{ND}^1({\mathcal T}_h)$, where $\Pi_{\mathcal{ND}}$ is
the usual interpolation operator onto $\mathcal{ND}^1({\mathcal T}_h)$.
The degrees of freedom of $\Pi_{\mathcal X}(\boldsymbol{\xi},\zeta)$ which lie on
$\Gamma$ (nodes) are equal to those of $\Pi_{\mathcal S}\zeta$.
By the definition of ${\mathcal X}_h$, this fully determines $\Pi_{\mathcal X}$.
In particular, since $\boldsymbol{n}\times
\boldsymbol{\xi}|_\Gamma=\boldsymbol{n}\times\nabla_\Gamma\zeta$, there holds
$\Pi_{\mathcal{ND}}\boldsymbol{\xi}|_\Gamma = \Pi_{{\mathcal X},\Gamma}(\boldsymbol{\xi},\zeta)$.
Hence, the interpolation error can be bounded by
\begin{align*}
\norm{(1-\Pi_{\mathcal X})(\boldsymbol{\xi},\zeta)}{\mathbb{H}({\rm curl};D)\times H^{1/2}(\Gamma) }&\leq \norm{(1-\Pi_{\mathcal{ND}})\boldsymbol{\xi}}{\mathbb{H}({\rm curl};D)}+\norm{(1-\Pi_{\mathcal S})\zeta}{H^{1/2}(\Gamma)}\\
&\lesssim h( \norm{\boldsymbol{\xi}}{{\mathbb{H}}^2(D)}+ h^{1/2}\norm{\zeta}{H^2(\Gamma)}).
\end{align*}
Since $\big({\mathbb{H}}^2(D)\times H^2(\Gamma)\big)\cap {\mathcal X}$ is dense in
${\mathcal X}$, this concludes the proof.
\end{proof}
The following lemma gives an equivalent form
to~\eqref{eq:wssymm2} and shows that
Algorithm~\ref{algorithm} is well-defined.
\begin{lemma}\label{lem:bil for}
Let $a(\cdot,\cdot)\colon {\mathcal X}\times{\mathcal X}\to {\mathbb R}$, $a_h(\cdot,\cdot)\colon {\mathcal X}_h\times{\mathcal X}_h\to {\mathbb R}$, and
$b(\cdot,\cdot)\colon \mathbb{H}({\rm curl};D)\times\mathbb{H}({\rm curl};D)\to {\mathbb R}$ be
bilinear forms defined by
\begin{align*}
a(A,B)&:=\dual{\boldsymbol{\psi}}{\boldsymbol{\xi}}_D -\dual{\mathfrak{S} \eta}{\zeta}_\Gamma,
\\
a_h(A_h,B_h)
&:=
\dual{\boldsymbol{\psi}_h}{\boldsymbol{\xi}_h}_D
-
\dual{\mathfrak{S}_h\eta_h}{\zeta_h}_{\Gamma},
\\
b(\boldsymbol{\psi},\boldsymbol{\xi})&:=
\sigma^{-1}\mu_0^{-1}
\dual{\nabla\times \boldsymbol{\psi}}{\nabla\times \boldsymbol{\xi}}_D,
\end{align*}
for all $\boldsymbol{\psi}, \boldsymbol{\xi}\in\mathbb{H}({\rm curl};D)$, $A:=(\boldsymbol{\psi},\eta)$, $B:=(\boldsymbol{\xi},\zeta)\in{\mathcal X}$, $A_h = (\boldsymbol{\psi}_h,\eta_h), B_h = (\boldsymbol{\xi}_h,\zeta_h) \in {\mathcal X}_h$.
Then
\begin{enumerate}
\item
The bilinear forms satisfy,
for all
$A=(\boldsymbol{\psi},\eta)\in{\mathcal X}$ and
$A_h=(\boldsymbol{\psi}_h,\eta_h)\in{\mathcal X}_h$,
\begin{align}\label{eq:elliptic}
\begin{split}
a(A,A)&\geq C_{\rm ell}
\big(
\norm{\boldsymbol{\psi}}{\mathbb{L}^2(D)}^2+\norm{\eta}{H^{1/2}(\Gamma)}^2
\big),
\\
a_h(A_h,A_h)
&\geq
C_{\rm ell}
\big(
\norm{\boldsymbol{\psi}_h}{\mathbb{L}^2(D)}^2+\norm{\eta_h}{H^{1/2}(\Gamma)}^2
\big),
\\
b(\boldsymbol{\psi},\boldsymbol{\psi})
&\geq C_{\rm ell}\norm{\nabla\times
\boldsymbol{\psi}}{\mathbb{L}^2(D)}^2.
\end{split}
\end{align}
\item
Equation~\eqref{eq:wssymm2} is equivalent to
\begin{equation}\label{eq:bil for}
\int_0^T
a(A_t(t),B) \, dt
+
\int_0^T
b(\boldsymbol{H}(t),\boldsymbol{\xi}) \, dt
=
-
\dual{\boldsymbol{m}_t}{\boldsymbol{\xi}}_{D_T}
\end{equation}
for all~$B=(\boldsymbol{\xi},\zeta)\in {\mathcal X}$, where~$A=(\boldsymbol{H},\lambda)$.
\item
Equation~\eqref{eq:dsymm} is of the form
\begin{align}\label{eq:eddygeneral}
a_h(d_t A_h^{i+1},B_h) + b(\boldsymbol{H}_h^{i+1},\boldsymbol{\xi}_h)
=
-\dual{\boldsymbol{v}_h^i}{\boldsymbol{\xi}_h}_D,
\end{align}
where $A_h^{i+1}:=(\boldsymbol{H}_h^{i+1},\lambda_h^{i+1})$ and
$B_h:=(\boldsymbol{\xi}_h,\zeta_h)$.
\item
Algorithm~\ref{algorithm} is well-defined in the sense
that~\eqref{eq:dllg} and~\eqref{eq:dsymm} have unique solutions.
\end{enumerate}
\end{lemma}
\begin{proof}
The unique solvability of~\eqref{eq:dsymm} follows immediately from the continuity and ellipticity of the bilinear forms $a_h(\cdot,\cdot)$
and $b(\cdot,\cdot)$.
The unique solvability of~\eqref{eq:dllg} follows from the positive
definiteness of the left-hand side, the linearity of the right-hand side,
and the finite space dimension.
\end{proof}
The following lemma establishes an energy bound for the discrete
solutions.
\begin{lemma}\label{lem:denergygen}
Under the assumptions~\eqref{eq:hk con} and~\eqref{eq:mh0
Hh0}, there holds
for all $k<2\alpha$ and $j=1,\ldots,N$
\boldsymbol{e}gin{align}\label{eq:denergy}
\sum_{i=0}^{j-1}&
\left(
\norm{\boldsymbol{H}_h^{i+1}-\boldsymbol{H}_h^i}{{\mathbb{L}}two{D}}^2+\norm{\lambda_h^{i+1}-\lambda_h^i}{H^{1/2}(\Gamma)}^2
\right)\notag
\\
&+
k
\sum_{i=0}^{j-1}
\norm{\nabla\times \boldsymbol{H}_h^{i+1}}{{\mathbb{L}}two{D}}^2
+ \norm{\boldsymbol{H}_h^j}{{\mathbb{H}}curl{D}}^2
+ \norm{\lambda_h^{j}}{H^{1/2}(\Gamma)}^2
+
\norm{\nabla \boldsymbol{m}_h^{j}}{{\mathbb{L}}two{D}}^2\notag
\\
&+
\max\{2\theta-1,0\}k^2
\sum_{i=0}^{j-1}
\norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
k
\sum_{i=0}^{j-1}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\\
&+k \sum_{i=0}^{j-1} (\norm{d_t\boldsymbol{H}_h^{i+1}}{{\mathbb{L}}two{D}}^2 +\norm{d_t\lambda_h^{i+1}}{H^{1/2}(\Gamma)}^2)+\sum_{i=0}^{j-1}
\norm{\nabla\times (\boldsymbol{H}^{i+1}_h-\boldsymbol{H}^{i}_h)}{{\mathbb{L}}two{D}}^2
\leq C_{\rm ener}.\notag
\end{align}
\end{lemma}
\begin{proof}
Choosing $B_h=A_h^{i+1}$ in~\eqref{eq:eddygeneral} and
multiplying the resulting equation by $k$ we obtain
\begin{equation}\label{eq:e1}
a_h(A_h^{i+1}-A_h^i,A_h^{i+1})
+
kb(\boldsymbol{H}_h^{i+1},\boldsymbol{H}_h^{i+1})
= -k\dual{\boldsymbol{v}_h^i}{\boldsymbol{H}_h^i}_D
-k\dual{\boldsymbol{v}_h^i}{\boldsymbol{H}_h^{i+1}-\boldsymbol{H}_h^i}_D.
\end{equation}
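To see this, note that $k\,d_tA_h^{i+1}=A_h^{i+1}-A_h^i$ and that the right-hand side of~\eqref{eq:eddygeneral} with $\boldsymbol{\xi}_h=\boldsymbol{H}_h^{i+1}$ splits as
\[
-k\dual{\boldsymbol{v}_h^i}{\boldsymbol{H}_h^{i+1}}_D
=
-k\dual{\boldsymbol{v}_h^i}{\boldsymbol{H}_h^i}_D
-k\dual{\boldsymbol{v}_h^i}{\boldsymbol{H}_h^{i+1}-\boldsymbol{H}_h^i}_D.
\]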
On the other hand, it follows from~\eqref{eq:mhip1}
and~\eqref{eq:dllg} that
\begin{align*}
\begin{split}
\norm{\nabla \boldsymbol{m}_h^{i+1}}{{\mathbb{L}}two{D}}^2&=\norm{\nabla \boldsymbol{m}_h^{i}}{{\mathbb{L}}two{D}}^2+k^2\norm{\nabla \boldsymbol{v}_h^{i}}{{\mathbb{L}}two{D}}^2+2k\dual{\nabla\boldsymbol{m}_h^i}{\nabla\boldsymbol{v}_h^i}_D
\\
& = \norm{\nabla\boldsymbol{m}_h^i}{{\mathbb{L}}two{D}}^2-2(\theta-\tfrac12)k^2\norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
- \frac{2\alpha k}{C_e}\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2 +
\frac{2k}{C_e}\dual{\boldsymbol{H}_h^i}{\boldsymbol{v}_h^i}_D,
\end{split}
\end{align*}
which implies
\begin{align*}
k \dual{\boldsymbol{v}_h^i}{\boldsymbol{H}_h^i}_D
&=
\frac{C_e}{2}
\left(
\norm{\nabla\boldsymbol{m}_h^{i+1}}{{\mathbb{L}}two{D}}^2
-
\norm{\nabla\boldsymbol{m}_h^{i}}{{\mathbb{L}}two{D}}^2
\right)
+
(\theta-\tfrac12)k^2C_e \norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
\alpha k \norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2.
\end{align*}
Inserting this into the first term on the right-hand side
of~\eqref{eq:e1} and rearranging the resulting
equation yields, for any $\varepsilon>0$,
\begin{align*}
&a_h(A_h^{i+1}-A_h^i,A_h^{i+1})
+
k
b(\boldsymbol{H}_h^{i+1},\boldsymbol{H}_h^{i+1})
\nonumber
\\
&+
\frac{C_e}{2}
\left(
\norm{\nabla \boldsymbol{m}_h^{i+1}}{{\mathbb{L}}two{D}}^2
-
\norm{\nabla\boldsymbol{m}_h^i}{{\mathbb{L}}two{D}}^2
\right)
+
(\theta-1/2)k^2 C_e
\norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
\alpha k
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\nonumber
\\
&=
-k
\dual{\boldsymbol{v}_h^i}{\boldsymbol{H}_h^{i+1}-\boldsymbol{H}_h^{i}}_D
\\
&\leq
\frac{\varepsilon k}{2}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
\frac{k}{2\varepsilon}
\norm{\boldsymbol{H}_h^{i+1}-\boldsymbol{H}_h^i}{{\mathbb{L}}two{D}}^2
\leq
\frac{\varepsilon k}{2}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
\frac{k}{2\varepsilon}
a_h(A_h^{i+1}-A_h^i,A_h^{i+1}-A_h^i),
\end{align*}
where we used Young's inequality in the first estimate and, in the
last step, the definition of $a_h(\cdot,\cdot)$ and~\eqref{eq:dtnelliptic}.
Rearranging gives
\begin{align*}
a_h(A_h^{i+1}-A_h^i,A_h^{i+1})
&+
k
b(\boldsymbol{H}_h^{i+1},\boldsymbol{H}_h^{i+1})
+
\frac{C_e}{2}
\left(
\norm{\nabla \boldsymbol{m}_h^{i+1}}{{\mathbb{L}}two{D}}^2
-
\norm{\nabla\boldsymbol{m}_h^i}{{\mathbb{L}}two{D}}^2
\right)
\\
&+
(\theta-1/2)k^2 C_e
\norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
(\alpha-\varepsilon/2) k
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\nonumber
\\
&\leq
\frac{k}{2\varepsilon}
a_h(A_h^{i+1}-A_h^i,A_h^{i+1}-A_h^i).
\end{align*}
Summing over $i$ from $0$ to $j-1$ and
(for the first term on the left-hand side)
applying Abel's summation by parts formula
\begin{equation}\label{equ:Abe}
\sum_{i=0}^{j-1}
(u_{i+1}-u_{i})u_{i+1}
=
\frac{1}{2} |u_j|^2
-
\frac{1}{2} |u_0|^2
+
\frac{1}{2}
\sum_{i=0}^{j-1}
|u_{i+1}-u_{i}|^2,
\end{equation}
we deduce, after multiplying the equation by two and rearranging,
\begin{align*}
&(1-k/\varepsilon)
\sum_{i=0}^{j-1}
a_h(A_h^{i+1}-A_h^i,A_h^{i+1}-A_h^i)
+
2k
\sum_{i=0}^{j-1}
b(\boldsymbol{H}_h^{i+1},\boldsymbol{H}_h^{i+1})
+
a_h(A_h^j,A_h^j)
\\
&
+
C_e
\norm{\nabla \boldsymbol{m}_h^{j}}{{\mathbb{L}}two{D}}^2
+
(2\theta-1)k^2 C_e
\sum_{i=0}^{j-1}
\norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
(2\alpha-\varepsilon) k
\sum_{i=0}^{j-1}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\\
&
\leq
C_e
\norm{\nabla \boldsymbol{m}_h^{0}}{{\mathbb{L}}two{D}}^2
+
a_h(A_h^0,A_h^0).
\end{align*}
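For completeness, \eqref{equ:Abe} is verified by the elementary identity
\[
(u_{i+1}-u_i)u_{i+1}
=
\tfrac12\big(u_{i+1}^2-u_i^2\big)
+
\tfrac12 (u_{i+1}-u_i)^2,
\]
whose first term on the right-hand side telescopes when summed over $i=0,\ldots,j-1$.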
Since $k<2\alpha$ we can choose $\varepsilon>0$ such
that $2\alpha-\varepsilon>0$ and $1-k/\varepsilon>0$. By the
ellipticity~\eqref{eq:dtnelliptic}, the bilinear forms
$a_h(\cdot,\cdot)$ and $b(\cdot,\cdot)$ are elliptic in their
respective (semi-)norms; see~\eqref{eq:elliptic}. We obtain
\begin{align}\label{eq:e4}
\sum_{i=0}^{j-1}&
\left(
\norm{\boldsymbol{H}_h^{i+1}-\boldsymbol{H}_h^i}{{\mathbb{L}}two{D}}^2+\norm{\lambda_h^{i+1}-\lambda_h^i}{H^{1/2}(\Gamma)}^2
\right)
+
k
\sum_{i=0}^{j-1}
\norm{\nabla\times \boldsymbol{H}_h^{i+1}}{{\mathbb{L}}two{D}}^2
+ \norm{\boldsymbol{H}_h^j}{{\mathbb{L}}two{D}}^2
\notag
\\
&
+ \norm{\lambda_h^{j}}{H^{1/2}(\Gamma)}^2
+
\norm{\nabla \boldsymbol{m}_h^{j}}{{\mathbb{L}}two{D}}^2
+
(2\theta-1)k^2
\sum_{i=0}^{j-1}
\norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
k
\sum_{i=0}^{j-1}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\notag
\\
&
\leq
C
\left(
\norm{\nabla \boldsymbol{m}_h^{0}}{{\mathbb{L}}two{D}}^2
+
\norm{\boldsymbol{H}_h^0}{{\mathbb{L}}two{D}}^2
+ \norm{\lambda_h^{0}}{H^{1/2}(\Gamma)}^2
\right)
\leq
C,
\end{align}
where in the last step we used~\eqref{eq:mh0 Hh0}.
It remains to consider the last three terms on the
left-hand side of~\eqref{eq:denergy}.
Again, we consider~\eqref{eq:eddygeneral} and select
$B_h=d_tA_h^{i+1}$
to obtain after multiplication by~$2k$
\begin{align*}
\begin{split}
2k a_h(d_tA_h^{i+1},d_tA_h^{i+1})
&
+
2b(\boldsymbol{H}_h^{i+1},\boldsymbol{H}_h^{i+1}-\boldsymbol{H}_h^i)
\\
&= -2k\dual{\boldsymbol{v}_h^i}{d_t\boldsymbol{H}_h^{i+1}}_D
\le
k \norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
k \norm{d_t\boldsymbol{H}_h^{i+1}}{{\mathbb{L}}two{D}}^2,
\end{split}
\end{align*}
so that, noting~\eqref{eq:e4} and~\eqref{eq:elliptic},
\begin{align}\label{eq:e5}
\begin{split}
k \sum_{i=0}^{j-1}
\left(
\norm{d_t\boldsymbol{H}_h^{i+1}}{{\mathbb{L}}two{D}}^2
\right.
&+
\left.
\norm{d_t\lambda_h^{i+1}}{H^{1/2}(\Gamma)}^2
\right)
+
2 \sum_{i=0}^{j-1}
b(\boldsymbol{H}_h^{i+1},\boldsymbol{H}_h^{i+1}-\boldsymbol{H}_h^i)
\\
&
\lesssim
k
\sum_{i=0}^{j-1}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\leq
C.
\end{split}
\end{align}
Using Abel's summation by parts formula~\eqref{equ:Abe} for the
second sum on the left-hand side, and noting
the ellipticity of
the bilinear form $b(\cdot,\cdot)$ and~\eqref{eq:mh0 Hh0}, we obtain
together with~\eqref{eq:e4}
\begin{align}\label{eq:d4}
\sum_{i=0}^{j-1}&
(\norm{\boldsymbol{H}_h^{i+1}-\boldsymbol{H}_h^i}{{\mathbb{L}}two{D}}^2+\norm{\lambda_h^{i+1}-\lambda_h^i}{H^{1/2}(\Gamma)}^2)\notag
\\
&+
k
\sum_{i=0}^{j-1}
\norm{\nabla\times \boldsymbol{H}_h^{i+1}}{{\mathbb{L}}two{D}}^2
+ \norm{\boldsymbol{H}_h^j}{{\mathbb{H}}curl{D}}^2
+ \norm{\lambda_h^{j}}{H^{1/2}(\Gamma)}^2
+
\norm{\nabla \boldsymbol{m}_h^{j}}{{\mathbb{L}}two{D}}^2\notag
\\
&+
(2\theta-1)k^2
\sum_{i=0}^{j-1}
\norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
k
\sum_{i=0}^{j-1}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\\
&+k \sum_{i=0}^{j-1} (\norm{d_t\boldsymbol{H}_h^{i+1}}{{\mathbb{L}}two{D}}^2 +\norm{d_t\lambda_h^{i+1}}{H^{1/2}(\Gamma)}^2)+\sum_{i=0}^{j-1}
\norm{\nabla\times (\boldsymbol{H}^{i+1}_h-\boldsymbol{H}^{i}_h)}{{\mathbb{L}}two{D}}^2
\leq
C.\notag
\end{align}
Clearly, if $1/2\le\theta\le1$ then~\eqref{eq:d4}
yields~\eqref{eq:denergy}. If $0\le\theta<1/2$ then since the mesh
is regular, the inverse
estimate~$\norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}\lesssim
h^{-1}\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}$ gives
\begin{align*}
(2\theta-1)k^2
\sum_{i=0}^{j-1}
\norm{\nabla\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
+
k
\sum_{i=0}^{j-1}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
&\gtrsim
\left(
1-kh^{-2}(1-2\theta)
\right)
k
\sum_{i=0}^{j-1}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\\
&\gtrsim
k
\sum_{i=0}^{j-1}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\end{align*}
as $kh^{-2}\to0$ under the assumption~\eqref{eq:hk con}.
This estimate and~\eqref{eq:d4} give~\eqref{eq:denergy},
completing the proof of the lemma.
\end{proof}
Collecting the above results we obtain the following equations
satisfied by the discrete functions
defined from~$\boldsymbol{m}_h^i$, $\boldsymbol{H}_h^i$, $\lambda_h^i$, and
$\boldsymbol{v}_h^i$.
\begin{lemma}\label{lem:mhk Hhk}
Let $\boldsymbol{m}_{hk}^{-}$, $A_{hk}^\pm:=(\boldsymbol{H}_{hk}^\pm,\lambda_{hk}^\pm)$, and $\boldsymbol{v}_{hk}^{-}$ be
defined from $\boldsymbol{m}_h^i$, $\boldsymbol{H}_h^i$, $\lambda_h^i$, and $\boldsymbol{v}_h^i$ as
described in Subsection~\ref{subsec:dis spa}.
Then
\begin{subequations}\label{eq:mhk Hhk}
\begin{align}
\alpha\dual{\boldsymbol{v}_{hk}^-}{\boldsymbol{\phi}_{hk}}_{D_T}
&+
\dual{(\boldsymbol{m}_{hk}^-\times \boldsymbol{v}_{hk}^-)}{\boldsymbol{\phi}_{hk}}_{D_T}
+
C_e \theta k \dual{\nabla \boldsymbol{v}_{hk}^-}{\nabla \boldsymbol{\phi}_{hk}}_{D_T}
\nonumber
\\
&=
-C_e \dual{\nabla \boldsymbol{m}_{hk}^-}{\nabla \boldsymbol{\phi}_{hk}}_{D_T}
+
\dual{\boldsymbol{H}_{hk}^-}{\boldsymbol{\phi}_{hk}}_{D_T}
\label{eq:mhk Hhk1}
\\
\intertext{and, with~$\partial_t$ denoting the time derivative,}
\int_0^T
a_h(\partial_t A_{hk}(t),B_h)
\,dt
&+
\int_0^T
b(\boldsymbol{H}_{hk}^+(t),\boldsymbol{\xi}_h)
\,dt
=
-\dual{\boldsymbol{v}_{hk}^-}{\boldsymbol{\xi}_h}_{D_T}
\label{eq:mhk Hhk2}
\end{align}
\end{subequations}
for all $\boldsymbol{\phi}_{hk}$ and $B_h:=(\boldsymbol{\xi}_h,\zeta_h)$ satisfying
$\boldsymbol{\phi}_{hk}(t,\cdot)\in{\mathcal K}_{\boldsymbol{m}_h^i}$ for $t\in[t_i,t_{i+1})$ and
$B_h\in{\mathcal X}_h$.
\end{lemma}
\begin{proof}
The lemma is a direct consequence of~\eqref{eq:dllg}
and~\eqref{eq:eddygeneral}.
\end{proof}
The next lemma shows that the functions defined in the above
lemma form sequences which have convergent subsequences.
\begin{lemma}\label{lem:weakconv}
Assume that the assumptions~\eqref{eq:hk con} and~\eqref{eq:mh0
Hh0} hold.
As $h$, $k\to0$, the following limits exist up to extraction of
subsequences
\begin{subequations}\label{eq:weakconv}
\begin{alignat}{2}
\boldsymbol{m}_{hk}&\rightharpoonup\boldsymbol{m}\quad&&\text{in }{\mathbb{H}}one{D_T},\label{eq:wc1}\\
\boldsymbol{m}_{hk}^\pm&\rightharpoonup\boldsymbol{m}\quad&&\text{in }
L^2(0,T;{\mathbb{H}}one{D}),\label{eq:wc1a}
\\
\boldsymbol{m}_{hk}^\pm
& \rightarrow\boldsymbol{m}
\quad&&
\text{in } {\mathbb{L}}two{D_T},
\label{eq:wc2}\\
(\boldsymbol{H}_{hk},\lambda_{hk})&\rightharpoonup(\boldsymbol{H},\lambda)\quad&&\text{in }
L^2(0,T;{\mathcal X}),\label{eq:wc3}\\
(\boldsymbol{H}_{hk}^\pm,\lambda_{hk}^\pm)&\rightharpoonup(\boldsymbol{H},\lambda)
\quad&&\text{in } L^2(0,T;{\mathcal X}),\label{eq:wc4}\\
(\boldsymbol{H}_{hk},\lambda_{hk})&\rightharpoonup (\boldsymbol{H},\lambda) \quad&&\text{in }
H^1(0,T;{\mathbb{L}}two{D}\times H^{1/2}(\Gamma)),\label{eq:wc31}\\
\boldsymbol{v}_{hk}^- &\rightharpoonup \boldsymbol{m}_t\quad&&\text{in }{\mathbb{L}}two{D_T},\label{eq:wc5}
\end{alignat}
\end{subequations}
for certain functions $\boldsymbol{m}$, $\boldsymbol{H}$, and $\lambda$ satisfying
$\boldsymbol{m}\in {\mathbb{H}}one{D_T}$, $\boldsymbol{H}\in
H^1(0,T;{\mathbb{L}}two{D})$, and $(\boldsymbol{H},\lambda)\in
L^2(0,T;{\mathcal X})$. Here~$\rightharpoonup$ denotes weak convergence
and~$\to$ strong convergence in the indicated space.
Moreover, $|\boldsymbol{m}|=1$ almost everywhere
in~$D_T$.
\end{lemma}
\begin{proof}
Note that due to the Banach-Alaoglu Theorem, to show the
existence of a weakly convergent subsequence, it suffices to show
the boundedness of the sequence in the respective norm. Thus in
order to prove~\eqref{eq:wc1} we will prove
that $\norm{\boldsymbol{m}_{hk}}{{\mathbb{H}}one{D_T}}\le C$ for all~$h,k>0$.
By Step~(3) of Algorithm~\ref{algorithm} and following an idea from~\cite{bartels}, there holds for all $z\in{\mathcal N}_h$
\begin{align*}
\begin{split}
|\boldsymbol{m}_h^j(z)|^2 &=|\boldsymbol{m}_h^{j-1}(z)|^2+k^2|\boldsymbol{v}_h^{j-1}(z)|^2=|\boldsymbol{m}_h^{j-2}(z)|^2+k^2|\boldsymbol{v}_h^{j-1}(z)|^2+k^2|\boldsymbol{v}_h^{j-2}(z)|^2\\
&= |\boldsymbol{m}_h^{0}(z)|^2+k^2\sum_{i=0}^{j-1}|\boldsymbol{v}_h^{i}(z)|^2.
\end{split}
\end{align*}
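Here the mixed terms vanish since, recalling that the discrete tangent space ${\mathcal K}_{\boldsymbol{m}_h^i}$ enforces the nodal orthogonality $\boldsymbol{v}_h^i(z)\cdot\boldsymbol{m}_h^i(z)=0$,
\[
|\boldsymbol{m}_h^{i}(z)+k\boldsymbol{v}_h^{i}(z)|^2
=
|\boldsymbol{m}_h^{i}(z)|^2
+k^2|\boldsymbol{v}_h^{i}(z)|^2
+2k\,\boldsymbol{m}_h^{i}(z)\cdot\boldsymbol{v}_h^{i}(z)
=
|\boldsymbol{m}_h^{i}(z)|^2
+k^2|\boldsymbol{v}_h^{i}(z)|^2.
\]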
By using the equivalence (see e.g. \cite[Lemma~3.2]{thanh})
\begin{equation}\label{eq:LT13}
\norm{\boldsymbol{\phi}}{L^p(D)}^p
\simeq
h^3 \sum_{z\in{\mathcal N}_h}
|\boldsymbol{\phi}(z)|^p,
\quad 1 \le p < \infty,
\quad \boldsymbol{\phi}\in {\mathcal S}^1({\mathcal T}_h)^3,
\end{equation}
we deduce that
\begin{align}\label{eq:const}
\left|
\norm{\boldsymbol{m}_h^j}{{\mathbb{L}}two{D}}^2-\norm{\boldsymbol{m}_h^0}{{\mathbb{L}}two{D}}^2
\right|
&
\simeq
h^3
\sum_{z\in{\mathcal N}_h}
\left(
|\boldsymbol{m}_h^j(z)|^2
-
|\boldsymbol{m}_h^0(z)|^2
\right)
=
k^2\sum_{i=0}^{j-1}
h^3
\sum_{z\in{\mathcal N}_h}
|\boldsymbol{v}_h^i(z)|^2
\nonumber
\\
&
\simeq
k^2\sum_{i=0}^{j-1}\norm{\boldsymbol{v}_h^{i}}{{\mathbb{L}}two{D}}^2\leq kC_{\rm ener},
\end{align}
where in the last step we used~\eqref{eq:denergy}.
This proves immediately
\begin{align*}
\norm{\boldsymbol{m}_{hk}}{{\mathbb{L}}two{D_T}}^2\simeq k\sum_{i=1}^N \norm{\boldsymbol{m}^i_h}{{\mathbb{L}}two{D}}^2\leq
k\sum_{i=1}^N \big(\norm{\boldsymbol{m}^0_h}{{\mathbb{L}}two{D}}^2+kC_{\rm ener}\big)
\leq C.
\end{align*}
On the other hand, since $\partial_t\boldsymbol{m}_{hk} = (\boldsymbol{m}_h^{i+1}-\boldsymbol{m}_h^i)/k$ on
$(t_i,t_{i+1})$ for $i=0,\ldots,N-1$
and $\boldsymbol{m}_h^{i+1}(z)-\boldsymbol{m}_h^i(z) = k\boldsymbol{v}_h^i(z)$ for all
$z\in{\mathcal N}_h$, we have by using~\eqref{eq:denergy} and~\eqref{eq:LT13}
\begin{align}\label{eq:dt mhk}
\norm{\partial_t\boldsymbol{m}_{hk}}{{\mathbb{L}}two{D_T}}^2
&
=
\sum_{i=0}^{N-1}
\int_{t_i}^{t_{i+1}}
\norm{\partial_t\boldsymbol{m}_{hk}}{{\mathbb{L}}two{D}}^2 \, dt
=
k^{-1}\sum_{i=0}^{N-1} \norm{\boldsymbol{m}^{i+1}_h-\boldsymbol{m}^i_h}{{\mathbb{L}}two{D}}^2
\nonumber
\\
&\simeq
k^{-1}
\sum_{i=0}^{N-1}
h^3
\sum_{z\in {\mathcal N}_h} |\boldsymbol{m}^{i+1}_h(z)-\boldsymbol{m}^i_h(z)|^2
=
k
\sum_{i=0}^{N-1}
h^3
\sum_{z\in {\mathcal N}_h} |\boldsymbol{v}^i_h(z)|^2
\nonumber
\\
&\simeq
k
\sum_{i=0}^{N-1}
\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\leq
C_{\rm ener}.
\end{align}
Finally the gradient $\nabla\boldsymbol{m}_{hk}$
is shown to be bounded by using~\eqref{eq:denergy}
again as follows:
\begin{align*}
\norm{\nabla\boldsymbol{m}_{hk}}{{\mathbb{L}}two{D_T}}^2\simeq k\sum_{i=1}^N
\norm{\nabla\boldsymbol{m}^i_h}{{\mathbb{L}}two{D}}^2\leq C_{\rm ener}kN\leq C_{\rm ener}T.
\end{align*}
Altogether, we showed that $\{\boldsymbol{m}_{hk}\}$
is a bounded sequence in ${\mathbb{H}}one{D_T}$ and thus possesses a weakly convergent
subsequence, i.e., we proved~\eqref{eq:wc1}.
In particular,~\eqref{eq:mh0 Hh0}, \eqref{eq:denergy},
and~\eqref{eq:const} imply
\begin{equation}\label{eq:mhk pm}
\norm{\boldsymbol{m}_{hk}^\pm}{L^2(0,T;{\mathbb{H}}one{D})}
\lesssim
\norm{\boldsymbol{m}_{hk}^\pm}{L^\infty(0,T;{\mathbb{H}}one{D})}
\lesssim
C_{\rm ener},
\end{equation}
yielding~\eqref{eq:wc1a}.
We prove~\eqref{eq:wc2} for $\boldsymbol{m}_{hk}^-$ only; similar arguments
hold for $\boldsymbol{m}_{hk}^+$. First, we note that the definitions
of~$\boldsymbol{m}_{hk}$ and~$\boldsymbol{m}_{hk}^-$ imply,
for all $t\in[t_j,t_{j+1})$,
\begin{align*}
\norm{\boldsymbol{m}_{hk}(t,\cdot)-\boldsymbol{m}_{hk}^-(t,\cdot)}{{\mathbb{L}}two{D}}
=
\norm{(t-t_j)\frac{\boldsymbol{m}_{h}^{j+1}-\boldsymbol{m}_{h}^j}{k}}{{\mathbb{L}}two{D}}
\le
k\norm{\partial_t\boldsymbol{m}_{hk}(t,\cdot)}{{\mathbb{L}}two{D}}.
\end{align*}
Squaring this estimate and integrating over $(0,T)$, we obtain
from~\eqref{eq:dt mhk}
\[
\norm{\boldsymbol{m}_{hk}-\boldsymbol{m}_{hk}^-}{{\mathbb{L}}two{D_T}}
\le
k\norm{\partial_t\boldsymbol{m}_{hk}}{{\mathbb{L}}two{D_T}}
\lesssim
k C_{\rm ener}^{1/2}
\to 0 \quad\text{as }h,k\to0.
\]
Thus,~\eqref{eq:wc2} follows from the triangle
inequality,~\eqref{eq:wc1}, and the compact embedding
of~${\mathbb{H}}one{D_T}$ into~${\mathbb{L}}two{D_T}$.
Statement~\eqref{eq:wc3} follows immediately
from~\eqref{eq:denergy} by noting that
\begin{align*}
\norm{(\boldsymbol{H}_{hk},\lambda_{hk})}{L^2(0,T;{\mathcal X})}^2
\simeq
k\sum_{i=1}^N
\big(
\norm{\boldsymbol{H}_h^i}{{\mathbb{H}}curl{D}}^2+\norm{\lambda_{h}^i}{H^{1/2}(\Gamma)}^2
\big)
\leq
kNC_{\rm ener}\leq TC_{\rm ener}.
\end{align*}
The proof of~\eqref{eq:wc4} follows analogously. Consequently, we
obtain~\eqref{eq:wc31} by using again~\eqref{eq:denergy} and the
above estimate as follows:
\begin{align*}
\norm{\boldsymbol{H}_{hk}}{H^1(0,T;{\mathbb{L}}two{D})}^2
&\simeq
\norm{\boldsymbol{H}_{hk}}{{\mathbb{L}}two{D_T}}^2 + k\sum_{i=1}^N\norm{d_t\boldsymbol{H}_h^i}{{\mathbb{L}}two{D}}^2
\leq TC_{\rm ener}+C_{\rm ener}.
\end{align*}
The convergence of $\lambda_{hk}$ in the statement follows analogously.
Finally,~\eqref{eq:wc5} follows from $\partial_t\boldsymbol{m}_{hk}(t)=\boldsymbol{v}^-_{hk}(t)$ and~\eqref{eq:wc1}.
To show that $\boldsymbol{m}$ satisfies the constraint $|\boldsymbol{m}|=1$,
we first note that
\begin{align*}
\norm{|\boldsymbol{m}|-1}{L^2(D_T)}\leq \norm{\boldsymbol{m}-\boldsymbol{m}_{hk}}{{\mathbb{L}}two{D_T}}+\norm{|\boldsymbol{m}_{hk}|-1}{L^2(D_T)}.
\end{align*}
The first term on the right-hand side
converges to zero due to~\eqref{eq:wc1} and the
compact embedding of~${\mathbb{H}}one{D_T}$ in~${\mathbb{L}}two{D_T}$.
For the second term, we note that
\begin{align*}
\norm{1-|\boldsymbol{m}_{hk}|}{L^2(D_T)}^2
&\lesssim
k\sum_{j=0}^N\big(\norm{|\boldsymbol{m}_{h}^j|-|\boldsymbol{m}_h^0|}{L^2(D)}^2
+
\norm{1-|\boldsymbol{m}_h^0|}{L^2(D)}^2\big)
\nonumber\\
&\leq k\sum_{j=0}^N\big(\norm{|\boldsymbol{m}_{h}^j|^2
-
|\boldsymbol{m}_h^0|^2}{L^1(D)}+\norm{|\boldsymbol{m}^0|-|\boldsymbol{m}_h^0|}{L^2(D)}^2\big)
\end{align*}
where we used $(x-y)^2\leq |x^2-y^2|$ for all $x,y\ge0$.
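The elementary inequality invoked here follows, for $x,y\ge0$, from
\[
(x-y)^2
=
|x-y|\,|x-y|
\le
|x-y|\,(x+y)
=
|x^2-y^2|.
\]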
Similarly to~\eqref{eq:const} it can be shown that
\begin{equation}\label{eq:mhj mh0}
\norm{|\boldsymbol{m}_h^j|^2-|\boldsymbol{m}_h^0|^2}{L^1(D)}
\simeq
k^2\sum_{i=0}^{j-1}
\norm{\boldsymbol{v}_h^{i}}{{\mathbb{L}}two{D}}^2
\leq kC_{\rm ener}.
\end{equation}
Hence
\[
\norm{1-|\boldsymbol{m}_{hk}|}{L^2(D_T)}^2
\lesssim kC_{\rm ener} +\norm{\boldsymbol{m}^0-\boldsymbol{m}_h^0}{{\mathbb{L}}two{D}}^2\to
0\quad\text{as }h,k\to 0.
\]
Altogether, we showed $|\boldsymbol{m}|=1$ almost everywhere in~$D_T$,
completing the proof of the lemma.
\end{proof}
We also need the following strong convergence property.
\begin{lemma}\label{lem:str con}
Under the assumptions~\eqref{eq:hk 12} and~\eqref{eq:mh0 Hh0} there holds
\begin{equation}\label{eq:mhk h12}
\norm{\boldsymbol{m}_{hk}^--\boldsymbol{m}}{L^2(0,T;{\mathbb{H}}^{1/2}(D))}
\to 0
\quad\text{as } h,k\to 0.
\end{equation}
\end{lemma}
\begin{proof}
It follows from the triangle inequality and the definitions of~$\boldsymbol{m}_{hk}$
and~$\boldsymbol{m}_{hk}^-$ that
\begin{align*}
\norm{\boldsymbol{m}_{hk}^--\boldsymbol{m}}{L^2(0,T;{\mathbb{H}}^{1/2}(D))}^2
&\lesssim
\norm{\boldsymbol{m}_{hk}^--\boldsymbol{m}_{hk}}{L^2(0,T;{\mathbb{H}}^{1/2}(D))}^2
+
\norm{\boldsymbol{m}_{hk}-\boldsymbol{m}}{L^2(0,T;{\mathbb{H}}^{1/2}(D))}^2 \\
&\leq
\sum_{i=0}^{N-1}
k^3\norm{\boldsymbol{v}_h^i}{{\mathbb{H}}^{1/2}(D)}^2
+
\norm{\boldsymbol{m}_{hk}-\boldsymbol{m}}{L^2(0,T;{\mathbb{H}}^{1/2}(D))}^2 \\
&\leq
\sum_{i=0}^{N-1}k^3\norm{\boldsymbol{v}_h^i}{{\mathbb{H}}one{D}}^2
+
\norm{\boldsymbol{m}_{hk}-\boldsymbol{m}}{L^2(0,T;{\mathbb{H}}^{1/2}(D))}^2.
\end{align*}
The second term on the right-hand side converges to zero due
to~\eqref{eq:wc1} and the compact embedding of
\[
{\mathbb{H}}one{D_T}
\simeq
\{\boldsymbol{v} \, | \,
\boldsymbol{v}\in L^2(0,T;{\mathbb{H}}one{D}), \, \boldsymbol{v}_t\in L^2(0,T;{\mathbb{L}}two{D}) \}
\]
into $L^2(0,T;{\mathbb{H}}^{1/2}(D))$; see \cite[Theorem 5.1]{Lio69}.
For the first term on the right-hand side,
when $\theta>1/2$,~\eqref{eq:denergy} implies
$
\sum_{i=0}^{N-1}k^3\norm{\boldsymbol{v}_h^i}{{\mathbb{H}}one{D}}^2
\lesssim k \to 0.
$
When $0\le\theta\leq 1/2$, a standard inverse inequality,~\eqref{eq:denergy}
and~\eqref{eq:hk 12} yield
\[
\sum_{i=0}^{N-1}k^3\norm{\boldsymbol{v}_h^i}{{\mathbb{H}}one{D}}^2
\lesssim
\sum_{i=0}^{N-1}h^{-2}k^3\norm{\boldsymbol{v}_h^i}{{\mathbb{L}}two{D}}^2
\lesssim
h^{-2}k^2 \to 0,
\]
completing the proof of the lemma.
\end{proof}
The following lemma involving the ${\mathbb{L}}^2$-norm of the cross product
of two vector-valued functions will be used when passing
to the limit of equation~\eqref{eq:mhk Hhk1}.
\begin{lemma}\label{lem:h}
There exists a constant $C_{\rm sob}>0$ which depends only on
$D$ such that
\begin{equation}
\label{eq:l2}
\norm{\boldsymbol{w}_{0}\times\boldsymbol{w}_1}{{\mathbb{L}}two{D}}
\leq
C_{\rm sob}
\norm{\boldsymbol{w}_{0}}{{\mathbb{H}}^{1/2}(D)}\norm{\boldsymbol{w}_1}{{\mathbb{H}}one{D}}
\end{equation}
for all $\boldsymbol{w}_0\in{\mathbb{H}}^{1/2}(D)$ and $\boldsymbol{w}_{1}\in{\mathbb{H}}one{D}$.
\end{lemma}
\begin{proof}
It is shown in~\cite[Theorem~5.4, Part~I]{adams} that
the embedding $\iota\colon{\mathbb{H}}one{D}\to{\mathbb{L}}^6(D)$ is continuous.
Obviously, the identity $\iota\colon {\mathbb{L}}two{D}\to{\mathbb{L}}two{D}$ is
continuous. By real interpolation, we find that $\iota\colon
[{\mathbb{L}}two{D},{\mathbb{H}}one{D}]_{1/2}\to [{\mathbb{L}}two{D},{\mathbb{L}}^6(D)]_{1/2}$ is
continuous. Well-known results in interpolation theory show
$
[{\mathbb{L}}two{D},{\mathbb{H}}one{D}]_{1/2}= {\mathbb{H}}^{1/2}(D)
$
and
$
[{\mathbb{L}}two{D},{\mathbb{L}}^6(D)]_{1/2}={\mathbb{L}}^3(D)
$
with equivalent norms; see e.g.~\cite[Theorem~5.2.1]{BL}.
By using H\"older's inequality, we deduce
\boldsymbol{e}gin{align*}
\norm{\boldsymbol{w}_{0}\times\boldsymbol{w}_{1}}{{\mathbb{L}}two{D}}\leq
\norm{\boldsymbol{w}_{0}}{{\mathbb{L}}^3(D)}\norm{\boldsymbol{w}_{1}}{{\mathbb{L}}^6(D)}
\lesssim
\norm{\boldsymbol{w}_{0}}{{\mathbb{H}}^{1/2}(D)}\norm{\boldsymbol{w}_1}{{\mathbb{H}}one{D}},
\end{align*}
proving the lemma.
\end{proof}
Finally, to pass to the limit in equation~\eqref{eq:mhk Hhk2} we
need the following result.
\begin{lemma}\label{lem:wea con}
For any sequence~$\{\lambda_h\}\subset H^{1/2}(\Gamma)$
and any function~$\lambda\in H^{1/2}(\Gamma)$, if
\begin{equation}\label{eq:zet con}
\lim_{h\to0}
\dual{\lambda_h}{\nu}_{\Gamma}
=
\dual{\lambda}{\nu}_{\Gamma}
\quad\forall\nu\in H^{-1/2}(\Gamma)
\end{equation}
then
\begin{equation}\label{eq:dtn zet con}
\lim_{h\to0}
\dual{\mathfrak{S}_h\lambda_h}{\zeta}_{\Gamma}
=
\dual{\mathfrak{S}\lambda}{\zeta}_{\Gamma}
\quad\forall\zeta\in H^{1/2}(\Gamma).
\end{equation}
\end{lemma}
\begin{proof}
Let~$\mu$ and~$\mu_h$ be defined
by~\eqref{eq:bem} with~$\lambda$ in the second equation replaced
by~$\lambda_h$. Then (recalling that Costabel's symmetric
coupling is used) $\mathfrak{S}\lambda$ and~$\mathfrak{S}_h\lambda_h$ are
defined via~$\mu$ and~$\mu_h$ by~\eqref{eq:dtn2}
and~\eqref{eq:bem2}, respectively, namely,
$\mathfrak{S}\lambda = (1/2-\mathfrak{K}^\prime)\mu - \mathfrak{W}\lambda$ and
$
\dual{\mathfrak{S}_h\lambda_h}{\zeta_h}_\Gamma
=
\dual{(1/2-\mathfrak{K}^\prime)\mu_h}{\zeta_h}_\Gamma
-
\dual{\mathfrak{W} \lambda_h}{\zeta_h}_\Gamma
$
for all~$\zeta_h\in {\mathcal S}^1({\mathcal T}_h|_\Gamma)$.
For any~$\zeta\in H^{1/2}(\Gamma)$,
let~$\{\zeta_h\}$ be a sequence
in~${\mathcal S}^1({\mathcal T}_h|_\Gamma)$
satisfying~$\lim_{h\to0}\norm{\zeta_h-\zeta}{H^{1/2}(\Gamma)}=0$.
By using the triangle inequality and the above representations
of~$\mathfrak{S}\lambda$ and~$\mathfrak{S}_h\lambda_h$ we deduce
\begin{align}\label{eq:dtn dtn}
\big|
\dual{\mathfrak{S}_h\lambda_h}{\zeta}_\Gamma
-
\dual{\mathfrak{S}\lambda}{\zeta}_\Gamma
\big|
&\leq
\big|\dual{\mathfrak{S}_h\lambda_h-\mathfrak{S}\lambda}{\zeta_h}_\Gamma\big|
+
\big|\dual{\mathfrak{S}_h\lambda_h-\mathfrak{S}\lambda}{\zeta-\zeta_h}_\Gamma\big|
\notag
\\
&\le
\big|
\dual{(\tfrac12-\mathfrak{K}^\prime)(\mu_h-\mu)}{\zeta_h}_\Gamma
\big|
+
\big|
\dual{\mathfrak{W} (\lambda_h-\lambda)}{\zeta_h}_\Gamma
\big|
\notag
\\
&\quad
+
\big|\dual{\mathfrak{S}_h\lambda_h-\mathfrak{S}\lambda}{\zeta-\zeta_h}_\Gamma\big|
\notag
\\
&\le
\big|
\dual{(\tfrac12-\mathfrak{K}^\prime)(\mu_h-\mu)}{\zeta_h}_\Gamma
\big|
+
\big|
\dual{\mathfrak{W} (\lambda_h-\lambda)}{\zeta}_\Gamma
\big|
\notag
\\
&\quad
+
\big|
\dual{\mathfrak{W} (\lambda_h-\lambda)}{\zeta_h-\zeta}_\Gamma
\big|
+
\big|\dual{\mathfrak{S}_h\lambda_h-\mathfrak{S}\lambda}{\zeta-\zeta_h}_\Gamma\big|.
\end{align}
The second term on the right-hand side of~\eqref{eq:dtn dtn}
goes to zero as~$h\to0$ due to~\eqref{eq:zet con} and the
self-adjointness of~$\mathfrak{W}$. The third term converges to zero
due to the strong convergence~$\zeta_h\to\zeta$
in~$H^{1/2}(\Gamma)$ and the boundedness of~$\{\lambda_h\}$
in~$H^{1/2}(\Gamma)$, which is a consequence of~\eqref{eq:zet
con} and the Banach-Steinhaus Theorem. The last term tends to
zero due to the convergence of~$\{\zeta_h\}$ and the boundedness
of~$\{\mathfrak{S}_h\lambda_h\}$; see~\eqref{eq:dtnelliptic}.
Hence~\eqref{eq:dtn zet con} is proved if we prove
\begin{equation}\label{eq:muh mu}
\lim_{h\to0}
\dual{(1/2-\mathfrak{K}^\prime)(\mu_h-\mu)}{\zeta_h}_{\Gamma}
= 0.
\end{equation}
We have
\begin{align}\label{eq:muh mu3}
\dual{(\tfrac12-\mathfrak{K}^\prime)(\mu_h-\mu)}{\zeta_h}_\Gamma
=
\dual{\mu_h-\mu}{(\tfrac12-\mathfrak{K})\zeta}_\Gamma
+
\dual{\mu_h-\mu}{(\tfrac12-\mathfrak{K})(\zeta_h-\zeta)}_\Gamma.
\end{align}
The definition of~$\mu_h$ implies
$
\norm{\mu_h}{H^{-1/2}(\Gamma)}
\lesssim
\norm{\lambda_h}{H^{1/2}(\Gamma)}
\lesssim
1,
$
and therefore the second term on the right-hand side
of~\eqref{eq:muh mu3} goes to zero.
Hence it suffices to prove
\begin{equation}\label{eq:muh mu2}
\lim_{h\to0}
\dual{\mu_h-\mu}{\eta}_\Gamma
= 0
\quad\forall\eta\in H^{1/2}(\Gamma).
\end{equation}
Since~$\mathfrak{V} : H^{-1/2}(\Gamma) \to
H^{1/2}(\Gamma)$ is bijective and self-adjoint,
for any~$\eta\in H^{1/2}(\Gamma)$
there exists~$\nu\in H^{-1/2}(\Gamma)$ such that
\[
\dual{\mu_h-\mu}{\eta}_{\Gamma}
=
\dual{\mu_h-\mu}{\mathfrak{V}\nu}_{\Gamma}
=
\dual{\mathfrak{V}(\mu_h-\mu)}{\nu}_{\Gamma}
=
\dual{\mathfrak{V}(\mu_h-\mu)}{\nu_h}_{\Gamma}
+
\dual{\mathfrak{V}(\mu_h-\mu)}{\nu-\nu_h}_{\Gamma},
\]
where~$\{\nu_h\}\subset{\mathcal P}^0({\mathcal T}_h|_\Gamma)$ is a sequence
satisfying~$\norm{\nu_h-\nu}{H^{-1/2}(\Gamma)}\to0$.
The definitions of~$\mu_h$ and~$\mu$, and the above equation imply
\begin{align*}
\dual{\mu_h-\mu}{\eta}_{\Gamma}
&=
\dual{(\mathfrak{K}-\tfrac12)(\lambda_h-\lambda)}{\nu_h}_{\Gamma}
+
\dual{\mathfrak{V}(\mu_h-\mu)}{\nu-\nu_h}_{\Gamma}
\\
&=
\dual{\lambda_h-\lambda}{(\mathfrak{K}^\prime-\tfrac12)\nu_h}_{\Gamma}
+
\dual{\mathfrak{V}(\mu_h-\mu)}{\nu-\nu_h}_{\Gamma}
\\
&=
\dual{\lambda_h-\lambda}{(\mathfrak{K}^\prime-\tfrac12)\nu}_{\Gamma}
+
\dual{\lambda_h-\lambda}{(\mathfrak{K}^\prime-\tfrac12)(\nu_h-\nu)}_{\Gamma}
+
\dual{\mathfrak{V}(\mu_h-\mu)}{\nu-\nu_h}_{\Gamma}.
\end{align*}
The first two terms on the right-hand side go to zero due to the
convergence of~$\{\lambda_h\}$ and~$\{\nu_h\}$. The last term
also approaches zero if we note the boundedness of~$\{\mu_h\}$.
This proves~\eqref{eq:muh mu2} and completes the proof of the lemma.
\end{proof}
\subsection{Proof of Theorem~\ref{thm:weakconv}}\label{section:weak}
We are now ready to prove that the
problem~\eqref{eq:strong}--\eqref{eq:con} has a weak solution.
\begin{proof}
We recall from~\eqref{eq:wc1}--\eqref{eq:wc5} that $\boldsymbol{m}\in {\mathbb{H}}one{D_T}$,
$(\boldsymbol{H},\lambda)\in L^2(0,T;{\mathcal X})$ and $\boldsymbol{H}\in H^1(0,T;{\mathbb{L}}two{D})$.
By virtue of Lemma~\ref{lem:bil for} it suffices to prove that
$(\boldsymbol{m},\boldsymbol{H},\lambda)$ satisfies~\eqref{eq:wssymm1}
and~\eqref{eq:bil for}.
Let $\boldsymbol{\phi}\in C^\infty(D_T)$ and $B:=(\boldsymbol{\xi},\zeta)\in L^2(0,T;{\mathcal X})$.
On the one hand, we define the test function
$\boldsymbol{\phi}_{hk}:=\Pi_{{\mathcal S}}(\boldsymbol{m}_{hk}^-\times\boldsymbol{\phi})$
as the usual interpolant of~$\boldsymbol{m}_{hk}^-\times\boldsymbol{\phi}$ into
${\mathcal S}^1({\mathcal T}_h)^3$.
By definition, $\boldsymbol{\phi}_{hk}(t,\cdot) \in{\mathcal K}_{\boldsymbol{m}_h^j}$ for all
$t\in[t_j,t_{j+1})$.
On the other hand, it follows from Lemma~\ref{lem:den pro}
that there exists~$B_h:=(\boldsymbol{\xi}_h,\zeta_h)\in{\mathcal X}_h$ converging
to~$B\in{\mathcal X}$.
Equations~\eqref{eq:mhk Hhk} hold with these
test functions. The main idea of the proof is to
pass to the limit in~\eqref{eq:mhk Hhk1} and~\eqref{eq:mhk Hhk2} to
obtain~\eqref{eq:wssymm1} and~\eqref{eq:bil for}, respectively.
In order to prove that~\eqref{eq:mhk Hhk1}
implies~\eqref{eq:wssymm1} we will prove that as~$h,k\to0$
\begin{subequations}\label{eq:conv}
\begin{align}
\dual{\boldsymbol{v}_{hk}^-}{\boldsymbol{\phi}_{hk}}_{D_T}
&\to
\dual{\boldsymbol{m}_t}{ \boldsymbol{m}\times\boldsymbol{\phi}}_{D_T},
\label{eq:conv1}
\\
\dual{\boldsymbol{m}_{hk}^-\times\boldsymbol{v}_{hk}^-}{\boldsymbol{\phi}_{hk}}_{D_T}
&\to
\dual{\boldsymbol{m}\times\boldsymbol{m}_t}{\boldsymbol{m}\times\boldsymbol{\phi}}_{D_T},
\label{eq:conv2}
\\
k\dual{\nabla\boldsymbol{v}_{hk}^-}{\nabla\boldsymbol{\phi}_{hk}}_{D_T}
&\to0,
\label{eq:conv3}
\\
\dual{\nabla\boldsymbol{m}_{hk}^-}{\nabla\boldsymbol{\phi}_{hk}}_{D_T}
&\to
\dual{\nabla\boldsymbol{m}}{\nabla(\boldsymbol{m}\times\boldsymbol{\phi})}_{D_T},
\label{eq:conv4}
\\
\dual{\boldsymbol{H}_{hk}^-}{\boldsymbol{\phi}_{hk}}_{D_T}
&\to
\dual{\boldsymbol{H}}{ \boldsymbol{m}\times\boldsymbol{\phi}}_{D_T}.
\label{eq:conv5}
\end{align}
\end{subequations}
First, it can easily be shown (see~\cite{alouges}) that
\begin{equation}\label{eq:phi mhk}
\norm{\boldsymbol{\phi}_{hk}-\boldsymbol{m}_{hk}^-\times\boldsymbol{\phi}}{L^2(0,T;{\mathbb{H}}one{D})}
\lesssim
h
\norm{\boldsymbol{m}_{hk}^-}{L^2(0,T;{\mathbb{H}}one{D})}
\norm{\boldsymbol{\phi}}{{\mathbb W}^{2,\infty}(D_T)}
\lesssim
h
\norm{\boldsymbol{\phi}}{{\mathbb W}^{2,\infty}(D_T)}
\end{equation}
and
\begin{align}\label{eq:phiinfty}
\norm{\boldsymbol{\phi}_{hk}-\boldsymbol{m}_{hk}^-\times\boldsymbol{\phi}}{L^\infty(0,T;{\mathbb{H}}one{D})}
\lesssim
h
\norm{\boldsymbol{m}_{hk}^-}{L^\infty(0,T;{\mathbb{H}}one{D})}
\norm{\boldsymbol{\phi}}{{\mathbb W}^{2,\infty}(D_T)}
\lesssim
h
\norm{\boldsymbol{\phi}}{{\mathbb W}^{2,\infty}(D_T)},
\end{align}
where we used~\eqref{eq:mhk pm}.
In particular, we have
\begin{equation}\label{eq:phi hk inf}
\norm{\boldsymbol{\phi}_{hk}}{L^\infty(0,T;{\mathbb{H}}one{D})}
\lesssim
1.
\end{equation}
We now prove~\eqref{eq:conv1} and~\eqref{eq:conv5}.
With~\eqref{eq:phi mhk}, there holds, as $h,k\to0$,
\begin{align}\label{eq:phihk mhk}
\norm{\boldsymbol{\phi}_{hk}-\boldsymbol{m}\times\boldsymbol{\phi}}{{\mathbb{L}}two{D_T}}
&\le
\norm{\boldsymbol{\phi}_{hk}-\boldsymbol{m}_{hk}^-\times\boldsymbol{\phi}}{{\mathbb{L}}two{D_T}}
+
\norm{(\boldsymbol{m}_{hk}^--\boldsymbol{m})\times\boldsymbol{\phi}}{{\mathbb{L}}two{D_T}}
\nonumber
\\
&\lesssim
\big(h + \norm{\boldsymbol{m}_{hk}^--\boldsymbol{m}}{{\mathbb{L}}two{D_T}}\big)
\norm{\boldsymbol{\phi}}{{\mathbb W}^{2,\infty}(D_T)}
\to0
\end{align}
due to~\eqref{eq:wc2}. Consequently, with the
help of~\eqref{eq:wc31} and~\eqref{eq:wc5} we
obtain~\eqref{eq:conv1} and~\eqref{eq:conv5}.
In order to prove~\eqref{eq:conv2} we note that
the elementary identity
\begin{equation}\label{eq:abc}
\boldsymbol{a}\cdot(\boldsymbol{b}\times\boldsymbol{c})= \boldsymbol{b}\cdot(\boldsymbol{c}\times\boldsymbol{a})=
\boldsymbol{c}\cdot(\boldsymbol{a}\times\boldsymbol{b})
\quad\forall\boldsymbol{a},\boldsymbol{b},\boldsymbol{c}\in{\mathbb R}^3
\end{equation}
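As a quick numerical sanity check of the cyclic identity~\eqref{eq:abc} (a sketch of ours, not part of the analysis), one can verify that all three expressions agree on sample vectors:

```python
# Verify the cyclic triple-product identity
#   a.(b x c) = b.(c x a) = c.(a x b)   for vectors in R^3.

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui*vi for ui, vi in zip(u, v))

a, b, c = (1.0, -2.0, 3.0), (0.5, 4.0, -1.0), (-2.5, 0.0, 2.0)
t1 = dot(a, cross(b, c))   # a.(b x c)
t2 = dot(b, cross(c, a))   # b.(c x a)
t3 = dot(c, cross(a, b))   # c.(a x b)
assert abs(t1 - t2) < 1e-12 and abs(t2 - t3) < 1e-12  # all equal 35.0
```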
yields
\begin{align}\label{eq:idid}
\dual{\boldsymbol{m}_{hk}^-\times\boldsymbol{v}_{hk}^-}{\boldsymbol{\phi}_{hk}}_{D_T}
=
\dual{\boldsymbol{v}_{hk}^-}{\boldsymbol{\phi}_{hk}\times\boldsymbol{m}_{hk}^-}_{D_T}.
\end{align}
It follows successively from the triangle inequality,~\eqref{eq:l2}, and~\eqref{eq:phi hk inf} that
\begin{align*}
\norm{\boldsymbol{\phi}_{hk}&\times\boldsymbol{m}_{hk}^{-}
-
(\boldsymbol{m}\times\boldsymbol{\phi})\times\boldsymbol{m}}{{\mathbb{L}}two{D_T}}\\
&\leq
\norm{\boldsymbol{\phi}_{hk}\times(\boldsymbol{m}_{hk}^--\boldsymbol{m})}{{\mathbb{L}}two{D_T}}+
\norm{(\boldsymbol{\phi}_{hk}-(\boldsymbol{m}\times\boldsymbol{\phi}))\times\boldsymbol{m}}{{\mathbb{L}}two{D_T}}\\
&\lesssim
\Big(\int_0^T\norm{\boldsymbol{\phi}_{hk}(t)}{{\mathbb{H}}one{D}}^2\norm{\boldsymbol{m}_{hk}^-(t)-\boldsymbol{m}(t)}{{\mathbb{H}}^{1/2}(D)}^2\,dt\Big)^{1/2}+
\norm{\boldsymbol{\phi}_{hk}-(\boldsymbol{m}\times\boldsymbol{\phi})}{{\mathbb{L}}two{D_T}}\\
&\leq
\norm{\boldsymbol{\phi}_{hk}}{L^\infty(0,T;{\mathbb{H}}one{D})}
\norm{\boldsymbol{m}_{hk}^--\boldsymbol{m}}{L^2(0,T;{\mathbb{H}}^{1/2}(D))}
+
\norm{\boldsymbol{\phi}_{hk}-(\boldsymbol{m}\times\boldsymbol{\phi})}{{\mathbb{L}}two{D_T}} \\
&\lesssim
\norm{\boldsymbol{m}_{hk}^--\boldsymbol{m}}{L^2(0,T;{\mathbb{H}}^{1/2}(D))}
+
\norm{\boldsymbol{\phi}_{hk}-(\boldsymbol{m}\times\boldsymbol{\phi})}{{\mathbb{L}}two{D_T}}.
\end{align*}
Thus~\eqref{eq:mhk h12} and~\eqref{eq:phihk mhk} imply
$\boldsymbol{\phi}_{hk}\times\boldsymbol{m}_{hk}^-\to
(\boldsymbol{m}\times\boldsymbol{\phi})\times\boldsymbol{m}$ in ${\mathbb{L}}two{D_T}$.
This together with~\eqref{eq:wc5} and~\eqref{eq:idid} implies
\[
\dual{\boldsymbol{m}_{hk}^-\times\boldsymbol{v}_{hk}^-}{\boldsymbol{\phi}_{hk}}_{D_T}
\to
\dual{\boldsymbol{m}_t}{(\boldsymbol{m}\times\boldsymbol{\phi})\times\boldsymbol{m}}_{D_T},
\]
which is indeed~\eqref{eq:conv2} by invoking~\eqref{eq:abc}.
Statement~\eqref{eq:conv4} follows from~\eqref{eq:phi mhk},
\eqref{eq:wc1a}, and~\eqref{eq:wc2} as follows:
As $h,k\to 0$,
\begin{align*}
\dual{\nabla\boldsymbol{m}_{hk}^-}{\nabla\boldsymbol{\phi}_{hk}}_{D_T}
&=
\dual{\nabla\boldsymbol{m}_{hk}^-}{\nabla(\boldsymbol{\phi}_{hk}-\boldsymbol{m}_{hk}^-\times\boldsymbol{\phi})}_{D_T}
+
\dual{\nabla\boldsymbol{m}_{hk}^-}{\nabla(\boldsymbol{m}_{hk}^-\times\boldsymbol{\phi})}_{D_T}
\\
&=
\dual{\nabla\boldsymbol{m}_{hk}^-}{\nabla(\boldsymbol{\phi}_{hk}-\boldsymbol{m}_{hk}^-\times\boldsymbol{\phi})}_{D_T}
+
\dual{\nabla\boldsymbol{m}_{hk}^-}{\boldsymbol{m}_{hk}^-\times\nabla\boldsymbol{\phi}}_{D_T}
\\
&\longrightarrow
\dual{\nabla\boldsymbol{m}}{0}_{D_T} +
\dual{\nabla\boldsymbol{m}}{\boldsymbol{m}\times\nabla\boldsymbol{\phi}}_{D_T}
=
\dual{\nabla\boldsymbol{m}}{\nabla(\boldsymbol{m}\times\boldsymbol{\phi})}_{D_T} .
\end{align*}
Finally, in order to prove~\eqref{eq:conv3} we first note that
\eqref{eq:phi mhk} and the boundedness
of the sequence~$\{\norm{\boldsymbol{m}_{hk}^-}{L^2(0,T;{\mathbb{H}}one{D})}\}$,
see~\eqref{eq:mhk pm},
give the boundedness
of~$\{\norm{\boldsymbol{\phi}_{hk}}{L^2(0,T;{\mathbb{H}}one{D})}\}$, and thus
of~$\{\norm{\nabla\boldsymbol{\phi}_{hk}}{{\mathbb{L}}two{D_T}}\}$.
On the other hand,
\begin{equation}\label{eq:vhk}
\norm{\nabla\boldsymbol{v}_{hk}^-}{{\mathbb{L}}two{D_T}}^2
=
k
\sum_{i=0}^{N-1}
\norm{\nabla\boldsymbol{v}_{h}^i}{{\mathbb{L}}two{D}}^2.
\end{equation}
If $1/2<\theta\le1$ then~\eqref{eq:denergy} and~\eqref{eq:vhk}
yield the boundedness of $\{\norm{\nabla\boldsymbol{v}_{hk}^-}{{\mathbb{L}}two{D_T}}\}$.
Hence
\[
k\dual{\nabla\boldsymbol{v}_{hk}^-}{\nabla\boldsymbol{\phi}_{hk}}_{D_T}
\to0
\quad\text{as }h,k\to 0.
\]
If $0\le\theta\le1/2$ then the inverse estimate,~\eqref{eq:vhk},
and~\eqref{eq:denergy} yield
\[
\norm{\nabla\boldsymbol{v}_{hk}^-}{{\mathbb{L}}two{D_T}}^2
\lesssim
kh^{-2}
\sum_{i=0}^{N-1}
\norm{\boldsymbol{v}_{h}^i}{{\mathbb{L}}two{D}}^2
\lesssim h^{-2},
\]
so that
$
\left|
k\dual{\nabla\boldsymbol{v}_{hk}^-}{\nabla\boldsymbol{\phi}_{hk}}_{D_T}
\right|
\lesssim
kh^{-1}.
$
This tends to $0$ under the assumption~\eqref{eq:hk 12}.
Altogether, we obtain~\eqref{eq:wssymm1} when passing to the limit
in~\eqref{eq:mhk Hhk1}.
Next, recalling that~$B_h\to B$ in~${\mathcal X}$
we prove that~\eqref{eq:mhk Hhk2}
implies~\eqref{eq:bil for} by proving
\begin{subequations}\label{eq:convb}
\begin{align}
\dual{\partial_t\boldsymbol{H}_{hk}}{\boldsymbol{\xi}_h}_{D_T}
&\to
\dual{\boldsymbol{H}_t}{\boldsymbol{\xi}}_{D_T},
\label{eq:convb1}
\\
\dual{\mathfrak{S}_h\partial_t\lambda_{hk}}{\zeta_h}_{\Gamma_T}
&\to
\dual{\mathfrak{S}\lambda_t}{\zeta}_{\Gamma_T}, \label{eq:spec}
\\
\dual{\nabla\times\boldsymbol{H}_{hk}^+}{\nabla\times\boldsymbol{\xi}_h}_{D_T}
&\to
\dual{\nabla\times\boldsymbol{H}}{\nabla\times\boldsymbol{\xi}}_{D_T},
\\
\dual{\boldsymbol{v}_{hk}^-}{\boldsymbol{\xi}_h}_{D_T}
&\to
\dual{\boldsymbol{v}}{\boldsymbol{\xi}}_{D_T}.
\end{align}
\end{subequations}
The proof is similar to that of~\eqref{eq:conv} (where we
use Lemma~\ref{lem:wea con}
for the proof of~\eqref{eq:spec}) and is
therefore omitted. This proves~(3) and~(5) of
Definition~\ref{def:fembemllg}.
Finally, we obtain $\boldsymbol{m}(0,\cdot)=\boldsymbol{m}^0$, $\boldsymbol{H}(0,\cdot)=\boldsymbol{H}^0$, and
$\lambda(0,\cdot)=\lambda^0$ from the weak convergence and the
continuity of the trace operator.
This and $|\boldsymbol{m}|=1$ yield Statements~(1)--(2) of
Definition~\ref{def:fembemllg}. To obtain~(4), note that
$\nabla_\Gamma\colon H^{1/2}(\Gamma)\to {\mathbb{H}}_\perp^{-1/2}(\Gamma)$ and
$\boldsymbol{n}\times(\boldsymbol{n}\times(\cdot))\colon {\mathbb{H}}curl{D}\to {\mathbb{H}}_\perp^{-1/2}(\Gamma)$ are bounded
linear operators; see~\cite[Section~4.2]{buffa2} for the precise definition of the spaces and the result.
Weak convergence then proves~(4) of
Definition~\ref{def:fembemllg}.
Estimate~\eqref{eq:energybound2} follows by weak
lower-semicontinuity and the energy bound~\eqref{eq:denergy}.
This completes the proof of the theorem.
\end{proof}
\section{Numerical experiment}\label{section:numerics}
The following numerical experiment is carried out using the
FEM toolbox FEniCS~\cite{fenics} (\texttt{fenicsproject.org})
and the BEM toolbox BEM++~\cite{bempp} (\texttt{bempp.org}).
We use GMRES to solve the linear systems, with blockwise diagonal
scaling as a preconditioner.
The values of the constants in this example are taken from the
standard problem \#1 proposed by the Micromagnetic Modelling Activity
Group at the National Institute of Standards and
Technology~\cite{mumag}.
The computational domain is the unit cube $D=[0,1]^3$, with initial conditions
\begin{align*}
\boldsymbol{m}^0(x_1,x_2,x_3):=\begin{cases} (0,0,-1)&\text{for } d(x)\geq 1/4,\\
(2Ax_1,2Ax_2,A^2-d(x))/(A^2+d(x))&\text{for }d(x)<1/4,
\end{cases}
\end{align*}
where $d(x):= |x_1-0.5|^2+|x_2-0.5|^2$ and $A:=(1-2\sqrt{d(x)})^4/4$ and
\begin{align*}
\boldsymbol{H}^0= \begin{cases} (0,0,2)&\text{in } D,\\
(0,0,2)-\boldsymbol{m}^0&\text{in } D^\ast.
\end{cases}
\end{align*}
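For illustration, the initial magnetization can be evaluated pointwise as in the following Python sketch (the helper name is ours; we use the centered coordinates $x_i-0.5$ in the vortex formula, an assumption under which $|\boldsymbol{m}^0|=1$ holds exactly, as required for LLG initial data):

```python
import math

def m0(x1, x2, x3):
    """Initial magnetization m^0 of the numerical experiment.
    Assumption: the vortex formula uses the centered coordinates
    u = x1 - 0.5, v = x2 - 0.5, so that |m^0| = 1 pointwise."""
    u, v = x1 - 0.5, x2 - 0.5
    d = u*u + v*v
    if d >= 0.25:
        return (0.0, 0.0, -1.0)
    A = (1.0 - 2.0*math.sqrt(d))**4 / 4.0
    s = A*A + d
    return (2.0*A*u/s, 2.0*A*v/s, (A*A - d)/s)

# |m^0| = 1 at sample points, as the LLG constraint requires:
# indeed 4A^2 d + (A^2 - d)^2 = (A^2 + d)^2.
for p in [(0.5, 0.5, 0.3), (0.6, 0.55, 0.0), (0.1, 0.9, 1.0)]:
    mx, my, mz = m0(*p)
    assert abs(mx*mx + my*my + mz*mz - 1.0) < 1e-12
```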
We choose the constants
\begin{align*}
\alpha=0.5,\quad \sigma=\begin{cases}1&\text{in }D,\\ 0& \text{in }D^\ast,\end{cases}\quad \mu_0=1.25667\times 10^{-6},\quad C_e=\frac{2.6\times 10^{-11}}{\mu_0 \,6.4\times 10^{11}}.
\end{align*}
For time and space discretisation of $D_T:= [0,5]\times D$, we apply a
uniform partition in space ($h=0.1$) and time ($k=0.002$).
Figure~\ref{fig:en} plots the corresponding energies over time.
Figure~\ref{fig:m} shows a series of magnetizations $\boldsymbol{m}(t_i)$ at certain times $t_i\in[0,5]$.
Figure~\ref{fig:h} shows the same for the magnetic field $\boldsymbol{H}(t_i)$.
\begin{figure}
\psfrag{energy}{\tiny energy}
\psfrag{time}{\tiny time $t$}
\psfrag{menergy}{\tiny magnetization energy}
\psfrag{henergy}{\tiny magnetic energy}
\psfrag{sum}{\tiny total energy}
\includegraphics[width=0.6\textwidth]{pics/energy2.eps}
\caption{The magnetization energy
$\norm{\nabla\boldsymbol{m}_{hk}(t)}{{\mathbb{L}}two{D}}$ and the energy of the magnetic
field $\norm{\boldsymbol{H}_{hk}(t)}{{\mathbb{H}}curl{D}}$ plotted over time.}
\label{fig:en}
\end{figure}
\begin{figure}
\includegraphics[width=0.24\textwidth]{pics/m20.eps}
\includegraphics[width=0.24\textwidth]{pics/m21.eps}
\includegraphics[width=0.24\textwidth]{pics/m22.eps}
\includegraphics[width=0.24\textwidth]{pics/m23.eps}
\includegraphics[width=0.24\textwidth]{pics/m24.eps}
\includegraphics[width=0.24\textwidth]{pics/m25.eps}
\includegraphics[width=0.24\textwidth]{pics/m26.eps}
\includegraphics[width=0.24\textwidth]{pics/m27.eps}
\includegraphics[width=0.24\textwidth]{pics/m28.eps}
\includegraphics[width=0.24\textwidth]{pics/m29.eps}
\includegraphics[width=0.24\textwidth]{pics/m210.eps}
\includegraphics[width=0.24\textwidth]{pics/colorbar2.eps}
\caption{Slice of the magnetization $\boldsymbol{m}_{hk}(t_i)$ at $[0,1]^2\times \{1/2\}$ for $i=0,\ldots,10$ with $t_i=0.2i$. The color of the vectors represents the magnitude $|\boldsymbol{m}_{hk}|$.
We observe that the magnetization aligns itself with the initial magnetic field $\boldsymbol{H}^0$ by performing a damped precession.}
\label{fig:m}
\end{figure}
\begin{figure}
\includegraphics[width=0.24\textwidth]{pics/h20.eps}
\includegraphics[width=0.24\textwidth]{pics/h21.eps}
\includegraphics[width=0.24\textwidth]{pics/h22.eps}
\includegraphics[width=0.24\textwidth]{pics/h23.eps}
\includegraphics[width=0.24\textwidth]{pics/h24.eps}
\includegraphics[width=0.24\textwidth]{pics/h25.eps}
\includegraphics[width=0.24\textwidth]{pics/h26.eps}
\includegraphics[width=0.24\textwidth]{pics/h27.eps}
\includegraphics[width=0.24\textwidth]{pics/h28.eps}
\includegraphics[width=0.24\textwidth]{pics/h29.eps}
\includegraphics[width=0.24\textwidth]{pics/h210.eps}
\includegraphics[width=0.24\textwidth]{pics/colorbarh2.eps}
\caption{Slice of the magnetic field $\boldsymbol{H}_{hk}(t_i)$ at $[0,1]^2\times \{1/2\}$ for $i=0,\ldots,10$ with $t_i=0.2i$. The color of the vectors represents the magnitude $|\boldsymbol{H}_{hk}|$.
We observe only a slight movement in the middle of the cube combined with an overall reduction of field strength.}
\label{fig:h}
\end{figure}
\end{document} |
\begin{document}
\author{Bego\~na Barrios}
\address{
Departamento de Matem\'aticas\\
Universidad Aut\'onoma de Madrid\\
Ciudad Universitaria de Cantoblanco\\
and
Instituto de Ciencias Matem\'{a}ticas, (ICMAT, CSIC-UAM-UC3M-UCM)\\
C/Nicol\'{a}s Cabrera 15, 28049-Madrid (Spain) }
\email{[email protected]}
\author{Alessio Figalli}
\address{
The University of Texas at Austin\\
Mathematics Dept. RLM 8.100\\
2515 Speedway Stop C1200\\
Austin, TX 78712-1202 (USA) } \email{[email protected]}
\author{Enrico Valdinoci}
\address{
Dipartimento di Matematica\\
Universit\`a degli Studi di Milano\\
Via Cesare Saldini 50\\
20133 Milano (Italy) } \email{[email protected]}
\title[Bootstrap regularity and nonlocal
minimal surfaces]{Bootstrap regularity\\
for integro-differential
operators\\
and its application\\
to nonlocal minimal surfaces}
\date{\today}
\begin{abstract}
We prove that $C^{1,\alpha}$ $s$-minimal surfaces are
of class $C^\infty$. For this, we develop a new bootstrap
regularity theory for solutions of integro-differential equations
of very general type, which we believe is of independent interest.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}
Motivated by the structure of interphases arising in phase
transition models with long range interactions, in~\cite{CRS} the
authors introduced a nonlocal version of minimal surfaces. These
objects are obtained by minimizing a ``nonlocal perimeter'' inside
a fixed domain $\Omega$: fix $s \in (0,1)$, and given two sets
$A,B\subset \mathbb R^n$, let us define the interaction term
$$ L(A,B):=\int_A \int_B \frac{dx\,dy}{|x-y|^{n+s}}.$$
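For intuition, when $A$ and $B$ are disjoint the integrand of $L(A,B)$ is bounded, and the interaction can be approximated by a simple midpoint rule. The following sketch (ours, for illustration only) does this in dimension $n=1$ with $s=1/2$, where $L([0,1],[2,3])=4(2\sqrt{2}-1-\sqrt{3})$ in closed form:

```python
import math

def interaction_1d(a0, a1, b0, b1, s, m=400):
    """Midpoint-rule approximation of
       L(A,B) = int_A int_B |x-y|^{-(1+s)} dx dy
    for disjoint intervals A=[a0,a1], B=[b0,b1] (dimension n = 1)."""
    ha, hb = (a1 - a0)/m, (b1 - b0)/m
    total = 0.0
    for i in range(m):
        x = a0 + (i + 0.5)*ha
        for j in range(m):
            y = b0 + (j + 0.5)*hb
            total += abs(x - y)**(-(1.0 + s)) * ha * hb
    return total

# Closed form for A=[0,1], B=[2,3], s=1/2 (elementary integration):
exact = 4.0*(2.0*math.sqrt(2.0) - 1.0 - math.sqrt(3.0))
approx = interaction_1d(0.0, 1.0, 2.0, 3.0, 0.5)
assert abs(approx - exact) < 1e-3
```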
The nonlocal perimeter of $E$ inside $\Omega$ is defined as
(writing $\mathcal{C} F:=\mathbb R^n\setminus F$ for the complement of a set $F$)
\begin{multline*} {\rm Per}(E,\Omega,s):=L\big( E\cap\Omega,(\mathcal{C} E)\cap\Omega\big)\\
+L\big( E\cap\Omega,(\mathcal{C} E)\cap(\mathcal{C}\Omega)\big)+L\big( (\mathcal{C}
E)\cap\Omega,E\cap(\mathcal{C}\Omega)\big).
\end{multline*}
Then nonlocal ($s$-)minimal
surfaces correspond to minimizers of the above functional with the
``boundary condition'' that $E\cap (\mathcal{C}\Omega)$ is prescribed.
It is proved in~\cite{CRS} that ``flat'' $s$-minimal surfaces are
$C^{1,\alpha}$ for all $\alpha<s$, and in~\cite{CV1, ADM, CV2}
that, as $s\rightarrow 1^-$, the $s$-minimal surfaces approach the
classical ones, both in a geometric sense and in a
$\Gamma$-convergence framework, with uniform estimates
as~$s\rightarrow1^-$. In particular,
when $s$
is
sufficiently close to~$1$, they inherit some nice regularity
properties from the classical minimal surfaces (see
also~\cite{Sou, SV1, SV2} for the relation between $s$-minimal
surfaces and the interfaces of some phase transition equations
driven by the fractional Laplacian).
On the other hand, all the previous literature only focused on
the~$C^{1,\alpha}$ regularity, and higher regularity was left as
an open problem. In this paper we address this issue, and we prove
that $C^{1,\alpha}$ $s$-minimal surfaces are indeed~$C^\infty$,
according to the following result\footnote{ Here and in the
sequel, we write~$x\in\mathbb R^n$ as~$x=(x',x_n)\in\mathbb R^{n-1}\times\mathbb R$.
Moreover, given~$r>0$ and~$p\in\mathbb R^n$, we define
$$K_r(p):= \{ x\in\mathbb R^n \,:\, |x'-p'|<r
{\mbox{ and }} |x_n-p_n|<r\}.$$ As usual, $B_r(p)$ denotes the
Euclidean ball of radius~$r$ centered at~$p$.
Given~$p'\in\mathbb R^{n-1}$, we set
$$B^{n-1}_r(p'):=\{ x'\in\mathbb R^{n-1} \,:\, |x'-p'|<r\}.$$
We also use the notation~$K_r:=K_r(0)$, $B_r:=B_r(0)$,
$B^{n-1}_r:= B^{n-1}_r(0)$.}:
\begin{theorem}\label{main} Let~$s\in(0,1)$, and~$\partial E$ be an~$s$-minimal surface
in~$K_R$ for some~$R>0$. Assume that
\begin{equation}\label{XC2}
\partial E\cap
K_R =\left\{ (x',x_n) \,:\, x' \in B_R^{n-1}{\mbox{ and }} x_n =u(x')\right\}
\end{equation}
for some
$u:B^{n-1}_R\to \mathbb R$, with $u\in {C^{1,\alpha}} (B^{n-1}_R)$ for any $\alpha<s$ and~$u(0)=0$.
Then
$$u\in C^{\infty}(B_{\rho}^{n-1})\quad\forall\,\rho\in (0,R).$$
\end{theorem}
The regularity result of Theorem~\ref{main} combined with
\cite[Theorem~6.1]{CRS} and \cite[Theorems 1, 3, 4, 5]{CV2},
implies also the following results (here and in the sequel, $\{
e_1, e_2,\dots, e_n\}$ denotes the standard Euclidean basis):
\begin{corollary}
Fix~$s_o\in(0,1)$. Let~$s\in(s_o,1)$ and~$\partial E$ be an~$s$-minimal
surface in~$B_R$ for some $R>0$. There exists~$\epsilon_\star>0$,
possibly depending on~$n$, $s_o$ and~$\alpha$, but independent
of~$s$ and $R$, such that if
$$ \partial E\cap B_R\subseteq \{ |x\cdot e_n|\leqslant \epsilon_\star R\}
$$then~$\partial E\cap B_{R/2}$ is
a~$C^{\infty}$-graph in the~$e_n$-direction.\end{corollary}
\begin{corollary}
There exists~$\epsilon_o\in(0,1)$ such that
if~$s\in(1-\epsilon_o,1)$, then:
\begin{itemize}
\item If~$n\leqslant 7$, any $s$-minimal surface is of class
$C^{\infty}$; \item If~$n=8$, any $s$-minimal surface is of class
$C^{\infty}$ except, at most, at countably many isolated points.
\end{itemize}
More generally, in any dimension $n$ there
exists~$\epsilon_n\in(0,1)$ such that if~$s\in(1-\epsilon_n,1)$
then any $s$-minimal surface is of class $C^{\infty}$ outside a
closed set~$\Sigma$ of Hausdorff dimension $n-8$.
\end{corollary}
Also, Theorem~\ref{main} here combined with Corollary~1
in~\cite{SVc} gives the following regularity result in the plane:
\begin{corollary}
Let~$n=2$. Then, for any~$s\in(0,1)$, any $s$-minimal surface is a smooth embedded curve of
class~$C^{\infty}$.
\end{corollary}
In order to prove Theorem~\ref{main} we establish in fact a very
general result about the regularity of integro-differential
equations, which we believe is of independent interest.
For this, we consider a kernel $K=K(x,w):\mathbb R^n
\times(\mathbb R^n\setminus\{0\})\rightarrow(0,+\infty)$ satisfying some
general structural assumptions. In the following, $\sigma \in
(1,2)$.
First of all, we suppose that $K$ is close to an autonomous kernel
of fractional Laplacian type, namely
\begin{equation}\label{ass silv}
\left\{
\begin{aligned}
&{\mbox{there exist~$a_0,r_0>0$ and $\eta \in (0,a_0/4)$
such that}}\\
&\left| \frac{|w|^{n+\sigma}
K(x,w)}{2-\sigma}-a_0\right|\leqslant\eta
\qquad \forall \,x\in B_{1},\,w\in B_{r_0}\setminus\{0\}.
\end{aligned}
\right.
\end{equation}
Moreover, we assume that\footnote{Observe that we use~$|\cdot|$
both to denote the Euclidean norm of a vector and, for a
multi-index $\alpha=(\alpha_1,\dots,\alpha_n)\in\mathbb N^n$, to
denote $|\alpha|:=\alpha_1+ \dots+\alpha_n$. However, the meaning
of $|\cdot|$ will always be clear from the context.}
\begin{equation}\label{sm 1}
\left\{
\begin{aligned}
&{\mbox{there exist $k \in \mathbb N\cup \{0\}$ and $C_k>0$ such that}}\\
&K\in C^{k+1}\big(B_{1}\times (\mathbb R^n\setminus\{0\})\big),\\
&\|\partial^\mu_x \partial^\theta_w K(\cdot,
w)\|_{L^\infty(B_{1})} \leqslant \frac{C_k}{|w|^{n+\sigma+|\theta|}}\\
&\qquad
\qquad \forall\,\mu, \theta\in \mathbb N^n,\,|\mu|+|\theta|\leqslant k+1,\,w\in \mathbb R^n\setminus\{0\}.\\
\end{aligned}
\right.
\end{equation}
Our main result is a ``Schauder regularity theory'' for
solutions\footnote{We adopt the notion of viscosity solution used
in~\cite{CScpam, CS2011, CS2012}.} of an integro-differential
equation. Here and in the sequel we use the notation
\begin{equation}\label{delta}
\delta u(x,w):= u(x+w)+u(x-w)-2u(x).\end{equation}
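For smooth $u$ (in one dimension, say), a Taylor expansion gives $\delta u(x,w)=u''(x)w^2+O(w^4)$, which is the reason the kernels above are integrated against quantities of size $\min\{1,|w|^2\}$. A quick numerical illustration (ours):

```python
import math

def delta(u, x, w):
    """Second difference  delta u(x,w) = u(x+w) + u(x-w) - 2 u(x)."""
    return u(x + w) + u(x - w) - 2.0*u(x)

# For u = sin,  delta u(x,w) / w^2  ->  u''(x) = -sin(x)  as w -> 0.
x, w = 1.0, 1e-4
ratio = delta(math.sin, x, w) / w**2
assert abs(ratio + math.sin(x)) < 1e-6
```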
\begin{theorem}\label{boot}
Let~$\sigma\in (1,2)$, $k\in\mathbb N\cup\{0\}$, and $u\in
L^\infty(\mathbb R^n)$ be a viscosity solution of the equation
\begin{equation}
\label{eq:main}
\int_{\mathbb R^{n}}{K(x,w)\,\delta u(x,w)dw}=f(x,u(x))\qquad
\text{inside $B_1$,}
\end{equation}
with $f\in C^{k+1}(B_1\times\mathbb R)$. Assume that
$K:B_{1}\times (\mathbb R^n\setminus\{0\})\rightarrow (0,+\infty)$
satisfies assumptions \eqref{ass silv} and \eqref{sm 1} for the
same value of $k$.
Then, if $\eta$ in \eqref{ass silv} is sufficiently small (the
smallness being independent of $k$), we have $u\in
C^{k+\sigma+\alpha}(B_{1/2})$ for any $\alpha<1,$ and
\begin{equation}\label{3bis}
\| u\|_{C^{k+\sigma+\alpha}(B_{1/2})} \leqslant C
\left(1+\|u\|_{L^\infty(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right) ,
\end{equation}
where\footnote{As customary, when~$\sigma+\alpha\in(1,2)$ (resp.
$\sigma+\alpha>2$), by \eqref{3bis} we mean that~$u\in
C^{k+1,\sigma+\alpha-1}(B_{1/2})$ (resp. $u\in
C^{k+2,\sigma+\alpha-1}(B_{1/2})$). (To avoid any issue, we will
always implicitly assume that $\alpha$ is chosen different from
$2-\sigma$, so that $\sigma+\alpha\neq 2$.)} $C>0$ depends only on
$n$, $\sigma$, $k$, $C_k$, and~$\|f\|_{C^{k+1}(B_{1}\times\mathbb R)}$.
\end{theorem}
Let us notice that, since the right-hand side in \eqref{eq:main} depends
on $u$, there is no uniqueness for such an equation.
In particular, it is not enough for us to prove a priori estimates for smooth
solutions and then argue by approximation, since we do not know whether our solution
can be obtained as a limit of smooth solutions.
We also note that, if in \eqref{sm 1} one replaces the $C^{k+1}$-regularity of $K$
with the $C^{k,\beta}$-assumption
\begin{equation}\label{new condition}
\|\partial^\mu_x
\partial^\theta_w K(\cdot, w)\|_{C^{0,\beta}(B_{1})} \leqslant
\frac{C_k}{|w|^{n+\sigma+|\theta|}},
\end{equation}
for all $|\mu|+|\theta|\leqslant k$, then we obtain the following:
\begin{theorem}\label{boot2}
Let~$\sigma\in (1,2)$, $k\in\mathbb N\cup\{0\}$, and $u\in
L^\infty(\mathbb R^n)$ be a viscosity solution of equation \eqref{eq:main}
with $f\in C^{k, \beta}(B_1\times\mathbb R)$. Assume that
$K:B_{1}\times (\mathbb R^n\setminus\{0\})\rightarrow (0,+\infty)$
satisfies assumptions \eqref{ass silv} and \eqref{new condition}
for the same value of $k$.
Then, if $\eta$ in \eqref{ass silv} is sufficiently small (the
smallness being independent of $k$), we have $u\in
C^{k+\sigma+\alpha}(B_{1/2})$ for any $\alpha<\beta,$ and
\begin{equation*}
\| u\|_{C^{k+\sigma+\alpha}(B_{1/2})} \leqslant C
\left(1+\|u\|_{L^\infty(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right) ,
\end{equation*}
where $C>0$ depends only on $n$, $\sigma$, $k$, $C_k$,
and~$\|f\|_{C^{k,\beta}(B_{1}\times\mathbb R)}$.
\end{theorem}
The proof of Theorem \ref{boot2} is essentially the same as the one of Theorem \ref{boot},
the only difference being that instead of differentiating
the equations (see for instance the argument in Section
\ref{section:uniforml}) one should use incremental quotients.
Although this does not introduce any major additional
difficulties, it makes the proofs longer and more tedious. Hence,
since the proof of Theorem \ref{boot} already contains all the
main ideas needed for Theorem \ref{boot2}, we give the details only for Theorem \ref{boot}.
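Recall that incremental quotients replace derivatives when only H\"older regularity is available: $u\in C^{0,\beta}$ precisely when the quotients $|u(x+h)-u(x)|/|h|^\beta$ stay bounded. A toy numerical illustration (our own example, not taken from the proofs) uses $u(x)=\sqrt{|x|}$ with $\beta=1/2$, where the elementary bound $|\sqrt{a}-\sqrt{b}|\leqslant\sqrt{|a-b|}$ gives quotients at most $1$:

```python
import math

def holder_quotient(u, x, h, beta):
    """Incremental quotient  |u(x+h) - u(x)| / |h|^beta."""
    return abs(u(x + h) - u(x)) / abs(h)**beta

u = lambda x: math.sqrt(abs(x))
# Sample the C^{0,1/2} incremental quotients of sqrt(|x|) on [-2, 2].
quotients = [holder_quotient(u, x/10.0, h, 0.5)
             for x in range(-20, 21)
             for h in (1e-3, 1e-2, 1e-1, 1.0)]
# Bounded by 1, with the bound attained at x = 0.
assert max(quotients) <= 1.0 + 1e-12
```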
The paper is organized as follows: in the next section we prove
Theorem \ref{boot}, and then in Section \ref{section:main} we
write the fractional minimal surface equation in a suitable form
so that we can apply Theorems~\ref{boot} and~\ref{boot2}
to prove Theorem~\ref{main}.\\
\textit{Acknowledgements:} We wish to thank Guido De Philippis and Francesco Maggi for stimulating
our interest in this problem. We also thank Guido De Philippis for a
careful reading of a preliminary version of our manuscript,
and Nicola Fusco for kindly pointing out to us
a computational inaccuracy.
BB was partially supported by Spanish Grant MTM2010-18128. AF was partially supported by NSF Grant DMS-0969962.
EV was partially supported by ERC Grant 277749 and FIRB Grant A\&B.
\section{Proof of Theorem \ref{boot}}
The core of the proof of Theorem \ref{boot} is the case $k=0$,
which will be handled in several steps.
\subsection{Toolbox}
We collect here some preliminary observations on scaled H\"older
norms, covering arguments, and differentiation of integrals that
will play an important role in the proof of Theorem~\ref{boot}.
This material is mainly technical, and the expert reader may go
directly to Section~\ref{TTT} on page~\pageref{TTT}.
\subsubsection{Scaled H\"older norms and coverings}
Given~$m\in\mathbb N$, $\alpha\in(0,1)$, $x\in\mathbb R^n$, and~$r>0$, we define
the $C^{m,\alpha}$-norm of a function~$u$ in~$B_r(x)$ as
$$ \|u\|_{C^{m,\alpha} (B_r(x))}:=
\sum_{|\gamma|\leqslant m} \|D^\gamma u\|_{L^\infty (B_r(x))}
+\sum_{|\gamma|=m}\sup_{y\ne z\in B_r(x)}\frac{|D^\gamma u(y)-
D^\gamma u(z)|}{|y-z|^\alpha}.$$ For our purposes it is also
convenient to look at the following classical rescaled version of
the norm:
\begin{eqnarray*}
\|u\|^*_{C^{m,\alpha} (B_r(x))}&:=&\sum_{j=0}^m \sum_{|\gamma|=j}
r^{j} \|D^\gamma u\|_{L^\infty (B_r(x))}
\\&&+\sum_{|\gamma|=m} r^{m+\alpha}
\sup_{y\ne z\in B_r(x)}\frac{|D^\gamma u(y)- D^\gamma
u(z)|}{|y-z|^\alpha}.\end{eqnarray*} This scaled norm behaves
nicely under covering, as the next observation points out:
\begin{lemma}\label{co} Let~$m\in\mathbb N$, $\alpha\in(0,1)$, $\rho>0$, and~$x\in\mathbb R^n$. Fix $\lambda \in (0,1)$, and suppose that~$B_{\rho}(x)$ is covered
by finitely many balls~$\{B_{\lambda\rho/2}(x_k)\}_{k=1}^N$. Then,
there exists~$C_o>0$, depending only on $\lambda$ and~$m$, such that
$$ \|u\|^*_{C^{m,\alpha} (B_\rho (x))}\leqslant C_o
\sum_{k=1}^N \|u\|^*_{C^{m,\alpha} (B_{\lambda\rho}(x_k))}.$$
\end{lemma}
\begin{proof} We first observe that, if~$j\in \{0,\dots,m\}$ and~$|\gamma|=j$,
\begin{eqnarray*}
\rho^{j} \|D^\gamma u\|_{L^\infty (B_\rho(x))} &\leqslant& \lambda^{-j}
(\lambda\rho)^{j} \max_{k=1,\dots,N} \|D^\gamma u\|_{L^\infty
(B_{\lambda\rho} (x_k))}
\\ &\leqslant& \lambda^{-m} \sum_{k=1}^N
(\lambda\rho)^{j} \|D^\gamma u\|_{L^\infty (B_{\lambda\rho} (x_k))}
\\ &\leqslant& \lambda^{-m} \sum_{k=1}^N
\|u\|^*_{C^{m,\alpha} (B_{\lambda\rho} (x_k))}.
\end{eqnarray*}
Now, let~$|\gamma|=m$: we claim that
\begin{equation*}
\rho^{m+\alpha} \sup_{y\ne z\in B_{\rho}(x)}\frac{|D^\gamma u(y)-
D^\gamma u(z)|}{|y-z|^\alpha} \leqslant2
\lambda^{-(m+\alpha)}\sum_{k=1}^N
\|u\|^*_{C^{m,\alpha} (B_{\lambda\rho}(x_k))}.
\end{equation*}
To check this, we take~$y, z\in B_{\rho}(x)$ with $y \neq z$ and
we distinguish two cases. If~$|y-z|< \lambda\rho/2$ we
choose~$k_o\in\{1,\dots,N\}$ such that~$y\in
B_{\lambda\rho/2}(x_{k_o})$. Then~$|z-x_{k_o}|\leqslant
|z-y|+|y-x_{k_o}|<\lambda\rho$, which implies~$y,z\in
B_{\lambda\rho}(x_{k_o})$, therefore
\begin{eqnarray*}
\rho^{m+\alpha} \frac{|D^\gamma u(y)-D^\gamma u(z)|}{|y-z|^\alpha}
&\leqslant& \rho^{m+\alpha} \sup_{\tilde y\ne \tilde z\in
B_{\lambda\rho}(x_{k_o})} \frac{|D^\gamma u(\tilde y)-D^\gamma
u(\tilde z)|}{
|\tilde y-\tilde z|^\alpha} \\
&\leqslant&\lambda^{-(m+\alpha)}\| u\|^*_{ C^{m,\alpha} (B_{\lambda\rho}(x_{k_o}))}.
\end{eqnarray*}
Conversely, if~$|y-z|\geqslant\lambda\rho/2$, recalling that $\alpha \in (0,1)$
we have
\begin{eqnarray*}
\rho^{m+\alpha} \frac{|D^\gamma u(y)- D^\gamma
u(z)|}{|y-z|^\alpha} &\leqslant& 2 \lambda^{-\alpha} \rho^{m}
{\|D^\gamma u\|_{L^\infty(B_\rho(x))}}\\
&\leqslant& 2 \lambda^{-\alpha} \rho^{m}\sum_{k=1}^{N}{\|D^{\gamma}u\|_{L^{\infty}(B_{\lambda\rho}(x_{k}))}}\\
&\leqslant&2
\lambda^{-(m+\alpha)}\sum_{k=1}^N\|u\|^{*}_{C^{m,\alpha}(B_{\lambda\rho}(x_{k}))}.
\end{eqnarray*}
This proves the claim and concludes the proof.
\end{proof}
Scaled norms also behave nicely when passing from local to
global bounds, as the next result shows:
\begin{lemma}\label{Co2}
Let~$m\in\mathbb N$, $\alpha\in(0,1)$, and~$u\in C^{m,\alpha}(B_1)$.
Suppose that there exist $\mu \in (0,1/2)$ and $\nu\in (\mu,1]$ for which the following holds:
for any~$\epsilon>0$ there
exists~$\Lambda_\epsilon>0$ such that, for any~$x\in B_1$ and
any~$r\in (0,1-|x|]$, we have
\begin{equation}\label{X0}
\|u\|^*_{C^{m,\alpha}(B_{\mu r}(x))} \leqslant \Lambda_\epsilon +\epsilon
\|u\|^*_{C^{m,\alpha}(B_{\nu r}(x))}.
\end{equation}
Then there exist constants~$\epsilon_o$, $C>0$, depending only
on~$n$, $m$, $\mu$, $\nu$, and $\alpha$, such that
$$ \|u\|_{C^{m,\alpha}(B_{\mu})}\leqslant C\Lambda_{\epsilon_o}.$$
\end{lemma}
\begin{proof} First of all we observe that
\begin{equation*}
\|u\|^*_{C^{m,\alpha}(B_{\mu r}(x))}\leqslant
\|u\|_{C^{m,\alpha}(B_{\mu r}(x))} \leqslant \|u\|^*_{C^{m,\alpha}(B_1)}
\end{equation*}
because~$r\in(0,1)$, which implies that
$$
Q:=\sup_{{x\in B_1}\atop{r\in (0,1-|x|]}}
\|u\|^*_{C^{m,\alpha}(B_{\mu r}(x))}<+\infty.
$$
We now use a covering argument: pick $\lambda \in (0,1/2]$
to be chosen later and, for any fixed~$x\in B_1$ and~$r\in
(0,1-|x|]$, cover $B_{\mu r}(x)$ with finitely many balls
$\{B_{\lambda\mu r/2}(x_k)\}_{k=1}^N$, with $x_k \in B_{\mu r}(x)$, for
some~$N$ depending only on $\lambda$ and the dimension $n$. We now observe that, since $\mu<1/2$,
\begin{equation}
\label{eq:xk}
|x_k|+ {r}/2\leqslant |x_k-x|+|x|+{r}/2\leqslant \mu{r}+
|x|+{r}/2< r+|x|\leqslant 1.
\end{equation}
Hence, since $\lambda \leqslant 1/2$, we can use~\eqref{X0}
(with~$x=x_k$ and $r$ scaled to $\lambda r$) to obtain
\begin{eqnarray*}
\|u\|^*_{C^{m,\alpha}(B_{\lambda \mu r}(x_k))}
\leqslant \Lambda_\epsilon +\epsilon
\|u\|^*_{C^{m,\alpha}(B_{\lambda \nu r}(x_k))}.
\end{eqnarray*}
Then, using
Lemma~\ref{co} with~$\rho:=\mu r$ and $\lambda=\mu/(2\nu)$,
and recalling \eqref{eq:xk} and the definition of $Q$, we get
\begin{eqnarray*}
\|u\|^*_{C^{m,\alpha}(B_{\mu r}(x))} &\leqslant & C_o
\sum_{k=1}^N \|u\|_{C^{m,\alpha}(B_{\lambda \mu r}(x_k))}^*\\
&\leqslant & C_o N \Lambda_\epsilon+C_o \epsilon \sum_{k=1}^N
\|u\|^*_{C^{m,\alpha}(B_{\lambda \nu r}(x_k))}\\
&=&C_o N \Lambda_\epsilon+C_o \epsilon \sum_{k=1}^N
\|u\|^*_{C^{m,\alpha}(B_{\mu r/2}(x_k))}\\
&\leqslant & C_oN\Lambda_\epsilon + \epsilon C_oN Q.
\end{eqnarray*}
Using the definition of $Q$ again, this implies
$$ Q \leqslant C_oN\Lambda_\epsilon + \epsilon C_oN Q, $$
so that, by choosing~$\epsilon_o:=1/(2C_oN)$,
$$ Q \leqslant 2C_o N \Lambda_{\epsilon_o}.$$
Thus we have proved that
$$
\|u\|_{C^{m,\alpha}(B_{\mu r}(x))}^* \leqslant 2C_o N
\Lambda_{\epsilon_o} \qquad \forall\, x\in B_1, \,r\in (0,1-|x|],
$$
and the desired result follows setting~$x=0$ and~$r=1$.
\end{proof}
\subsubsection{Differentiating integral functions}
In the proof of Theorem~\ref{boot} we will need to differentiate,
under the integral sign, smooth functions that are either
supported near the origin or far from it. This purpose will be
accomplished in Lemmata~\ref{D} and~\ref{E}, after some technical
bounds that are needed to use the Dominated Convergence Theorem.
Recall the notation in~\eqref{delta}.
\begin{lemma}\label{D1}
Let~$r>r'>0$, $v\in C^3(B_r)$, $x\in B_{r'}$, $h\in\mathbb R$ with~$|h|<
(r-r')/2$. Then, for any~$w\in\mathbb R^n$ with~$|w|< (r-r')/2$, we have
$$ |\delta v(x+he_1,w)-\delta v(x,w)|\leqslant |h|\, |w|^2
\|v\|_{C^3(B_r)} .$$
\end{lemma}
\begin{proof} Having fixed~$x\in B_{r'}$ and~$|w|<
(r-r')/2$, for any~$h\in [(r'-r)/2,(r-r')/2]$ we
set~$g(h):=v(x+he_1+w)+v(x+he_1-w)-2v(x+he_1)$. Then
\begin{eqnarray*}&& |g(h)-g(0)|\leqslant |h| \sup_{|\xi|\leqslant |h|} |g'(\xi)|
\\ &&\quad\leqslant |h|\sup_{|\xi|\leqslant |h|} \big|\partial_1 v(x+\xi e_1+w)
+\partial_1 v(x+\xi e_1-w)-2 \partial_1 v(x+\xi e_1)\big|.
\end{eqnarray*}
Noticing that~$|x+\xi e_1\pm w|\leqslant r'+|h|+|w|<r$, a second order
Taylor expansion of~$\partial_1 v$ with respect to the
variable~$w$ gives
\begin{equation} \label{ed3}
\big|\partial_1 v(x+\xi e_1+w)
+\partial_1 v(x+\xi e_1-w)-2 \partial_1 v(x+\xi e_1)\big| \leqslant
|w|^2 \|\partial_1 v\|_{C^2(B_r)}.
\end{equation}
Therefore
\begin{eqnarray*}
|\delta v(x+he_1,w)-\delta v(x,w)|= |g(h)-g(0)| \leqslant |h|\, |w|^2
\|v\|_{C^3(B_r)},
\end{eqnarray*}
as desired.
\end{proof}
\begin{lemma}\label{D1bis}
Let~$r>r'>0$, $v\in W^{1,\infty}(\mathbb R^n)$, $h\in\mathbb R$. Then, for
any~$w\in\mathbb R^n$,
$$ |\delta v(x+he_1,w)-\delta v(x,w)|\leqslant 4|h| \|
\nabla v\|_{L^\infty(\mathbb R^n)}.$$
\end{lemma}
\begin{proof} It suffices to proceed as in the proof of Lemma~{\ref{D1}}, but
replacing~\eqref{ed3} with the following estimate:
\begin{eqnarray*}
\big|\partial_1 v(x+\xi e_1+w) +\partial_1 v(x+\xi e_1-w)-2
\partial_1 v(x+\xi e_1)\big| \leqslant 4 \|\partial_1 v\|_{L^\infty
(\mathbb R^n)} .\end{eqnarray*}
\end{proof}
\begin{lemma}\label{D}
Let~$\ell\in\mathbb N$, $r\in(0,2)$,~$K$ satisfy~\eqref{sm 1}, and~$U\in
C^{\ell+2}_0(B_r)$. Let~$\gamma=(\gamma_1,\dots,\gamma_n)\in\mathbb N^n$
with~$|\gamma|\leqslant \ell \leqslant k+1$. Then
\begin{equation}\label{002}\begin{split}
&\partial^\gamma_x \int_{\mathbb R^n} K(x,w)\,\delta U(x,w)\,dw =
\int_{\mathbb R^n} \partial^\gamma_x\Big( K(x,w)\,\delta U(x,w)\Big)\,dw
\\ &\quad= \sum_{{{1\leqslant i\leqslant n}\atop{0\leqslant \lambda_i\leqslant
\gamma_i}}\atop{\lambda=(\lambda_1,\dots,\lambda_n)}} \left(
{\gamma_1}\atop{\lambda_1}\right)\dots \left(
{\gamma_n}\atop{\lambda_n}\right) \int_{\mathbb R^n} \partial^{\lambda}_x
K(x,w)\,\delta (\partial^{\gamma-\lambda}_x U)(x,w)\,dw
\end{split}\end{equation}
for any~$x\in B_r$.
\end{lemma}
\begin{proof} The latter equality follows from the standard product
derivation formula, so we focus on the proof of the first
identity. The proof is by induction over~$|\gamma|$.
If~$|\gamma|=0$ the result is trivially true, so we consider the
inductive step. We take~$x$ with~$r':=|x|<r$, we suppose
that~$|\gamma|\leqslant \ell-1$ and, by inductive hypothesis, we know
that
$$g_\gamma(x):=
\partial^\gamma_x \int_{\mathbb R^n} K(x,w)\,\delta U(x,w)\,dw=
\int_{\mathbb R^n} \theta(x,w)\,dw$$ with
$$ \theta(x,w):=
\sum_{{{1\leqslant i\leqslant n}\atop{0\leqslant \lambda_i\leqslant
\gamma_i}}\atop{\lambda=(\lambda_1,\dots,\lambda_n)}} \left(
{\gamma_1}\atop{\lambda_1}\right)\dots \left(
{\gamma_n}\atop{\lambda_n}\right)
\partial^{\lambda}_x K(x,w)\,\delta
(\partial^{\gamma-\lambda}_x U)(x,w).$$ By~\eqref{sm 1},
if~$0<|h|< (r-r')/2$ then
\begin{equation}\label{001}
|\partial^{\lambda}_x K(x+he_1,w)-
\partial^{\lambda}_x K(x,w)|\leqslant {C_{|\lambda|+1}} |h|\,
|w|^{-n-\sigma}.
\end{equation}
Moreover, if~$|w|<(r-r')/2$, we can apply Lemma~\ref{D1}
with~$v:=\partial^{\gamma-\lambda}_x U$ and obtain
\begin{equation}\label{v0}
|\delta (\partial^{\gamma-\lambda}_x U)(x+he_1,w)-
\delta(\partial^{\gamma-\lambda}_x U)(x,w)|
\leqslant |h|\, |w|^2\|U\|_{C^{|\gamma-\lambda|+3}(B_r)}.
\end{equation}
On the other hand, by Lemma~\ref{D1bis} we obtain
$$ |\delta (\partial^{\gamma-\lambda}_x U)(x+he_1,w)-\delta
(\partial^{\gamma-\lambda}_x U)(x,w)|\leqslant \,4\,|h|\, \|
\partial^{\gamma-\lambda}_x U\|_{C^1(\mathbb R^n)}.$$
All in all,
\begin{equation}\label{eq:1}\begin{split}
&|\delta (\partial^{\gamma-\lambda}_x U)(x+he_1,w)-\delta
(\partial^{\gamma-\lambda}_x U)(x,w)|
\\ &\qquad\leqslant\,|h|\,
\|U\|_{C^{|\gamma-\lambda|+3}(\mathbb R^n)}\min\{4,|w|^2\}.
\end{split}\end{equation}
Analogously, a simple Taylor expansion provides also the bound
\begin{equation}\label{eq:2}
|\delta (\partial^{\gamma-\lambda}_x U)(x,w)|\leqslant\,
\|U\|_{C^{|\gamma-\lambda|+2}(\mathbb R^n)}\min\{4,|w|^2\}.
\end{equation}
Hence, \eqref{sm 1}, \eqref{001}, \eqref{eq:1}, and \eqref{eq:2}
give
\begin{eqnarray*}
&& \big|
\partial^{\lambda}_x K(x+he_1,w)\,\delta
(\partial^{\gamma-\lambda}_x U)(x+he_1,w) -
\partial^{\lambda}_x K(x,w)\,\delta
(\partial^{\gamma-\lambda}_x U)(x,w)
\big| \\
&\leqslant& \big|
\partial^{\lambda}_x K(x+he_1,w)\,\big[\delta
(\partial^{\gamma-\lambda}_x U)(x+he_1,w) -\delta
(\partial^{\gamma-\lambda}_x U)(x,w) \big]\big|
\\ &&+
\big|\big[
\partial^{\lambda}_x K(x+he_1,w)
-
\partial^{\lambda}_x K(x,w)
\big]\delta (\partial^{\gamma-\lambda}_x U)(x,w) \big|
\\ &\leqslant& C_1 |h|\,\min\{|w|^{-n-\sigma},|w|^{2-n-\sigma} \},
\end{eqnarray*}
with~$C_1>0$ depending only on~$\ell$, $C_\ell$
and~$\|U\|_{C^{\ell+2}(\mathbb R^n)}$. As a consequence,
$$ |\theta(x+he_1,w)-\theta(x,w)|\leqslant
C_2 |h|\,\min \{|w|^{-n-\sigma},|w|^{2-n-\sigma} \},$$ and, by the
Dominated Convergence Theorem, we get
\begin{eqnarray*}
\int_{\mathbb R^n}\partial_{x_1}\theta (x,w)\,dw &=& \lim_{h\rightarrow
0} \int_{\mathbb R^n}\frac{\theta (x+he_1,w)-\theta(x,w)}{h}\,dw
\\ &=& \lim_{h\rightarrow 0}\frac{g_\gamma(x+he_1)-g_\gamma (x)}{h}
\\ &=& \partial_{x_1}g_\gamma(x),
\end{eqnarray*}
which proves~\eqref{002} with~$\gamma$ replaced by~$\gamma+e_1$.
Analogously one could prove the same result with $\gamma$
replaced by~$\gamma+e_i$, concluding the inductive step.
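For the reader's convenience, we also point out that the dominating
function used above is indeed integrable: denoting by~$\omega_{n-1}$
the surface measure of the unit sphere, a computation in polar
coordinates gives
$$\int_{\mathbb R^n}\min \{|w|^{-n-\sigma},|w|^{2-n-\sigma}\}\,dw\leqslant
\int_{B_1}|w|^{2-n-\sigma}\,dw+\int_{\mathbb R^n\setminus B_1}|w|^{-n-\sigma}\,dw
=\omega_{n-1}\Big(\frac{1}{2-\sigma}+\frac{1}{\sigma}\Big)<+\infty,$$
since~$0<\sigma<2$, which legitimates the use of the Dominated
Convergence Theorem.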
\end{proof}
The differentiation under the integral sign in~\eqref{002} may
also be obtained under slightly different assumptions, as the next
result points out:
\begin{lemma}\label{E}
Let~$\ell\in\mathbb N$, $R>r>0$. Let~$U\in C^{\ell+1}(\mathbb R^n)$ with~$U=0$
in~$B_{R}$. Let~$\gamma=(\gamma_1,\dots,\gamma_n)\in\mathbb N^n$
with~$|\gamma|\leqslant \ell$. Then~\eqref{002} holds true for any~$x\in
B_r$.
\end{lemma}
\begin{proof} If~$x\in B_r$, $w\in B_{(R-r)/2}$
and~$|h|\leqslant (R-r)/2$, we have that~$|x\pm w+h e_1|< R$
and~$|x+he_1|<R$, so that~$\delta
U(x+he_1,w)=0$. In particular
$$ \delta U(x+he_1,w)-\delta U(x,w)=0$$
for small~$h$ when~$w\in B_{(R-r)/2}$. This formula
replaces~\eqref{v0}, and the rest of the proof proceeds as that
of Lemma~\ref{D}.
\end{proof}
\subsubsection{Integral computations}
Here we collect some integral computations which will be used in
the proof of Theorem \ref{boot}.
\begin{lemma}
Let $v:\mathbb R^n \to \mathbb R$ be smooth and with all its derivatives
bounded. Let $x\in B_{1/4}$, and $\gamma$, $\lambda\in \mathbb N^n$,
with~$\gamma_i\geqslant\lambda_i$ for any~$i\in\{1,\dots,n\}$. Then
there exists a constant $C'>0$, depending only on $n$ and
$\sigma$, such that
\begin{equation}\label{A2 estimate}
\left|\int_{\mathbb R^n} \partial^{\lambda}_x K(x,w)\,\delta
(\partial^{\gamma-\lambda}_x v)(x,w)\,dw\right| \leqslant C' \,
C_{|\gamma|} \,\| v\|_{C^{|\gamma-\lambda|+2} (\mathbb R^n)}
.\end{equation} Furthermore, if \begin{equation}\label{V001}
{\mbox{$v=0$ in $B_{1/2}$}}\end{equation} we have
\begin{equation}\label{A3 estimate}
\left|\int_{\mathbb R^n} \partial^{\lambda}_x K(x,w)\,\delta
(\partial^{\gamma-\lambda}_x v)(x,w)\,dw\right|\leqslant C'\,
C_{|\gamma|}\, \| v\|_{L^\infty(\mathbb R^n)}.
\end{equation}
\end{lemma}
\begin{proof} By \eqref{sm 1} and \eqref{eq:2} (with $U=v$),
\begin{eqnarray*}
&& \int_{\mathbb R^n} \big|\partial^{\lambda}_x K(x,w)\big|\, \Big|
\,\delta (\partial^{\gamma-\lambda}_x v)(x,w)\Big|\,dw
\\ &&\leqslant C_{|\lambda|} \left( \| v\|_{C^{|\gamma-\lambda|+2}(\mathbb R^n)}
\int_{B_2} |w|^{-n-\sigma+2}\,dw+ 4\|
v\|_{C^{|\gamma-\lambda|}(\mathbb R^n)} \int_{\mathbb R^n\setminus B_2}
|w|^{-n-\sigma}\,dw \right),
\end{eqnarray*}
which proves \eqref{A2 estimate}.
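We also point out that the two integrals appearing in the estimate
above can be computed explicitly in polar coordinates: denoting
by~$\omega_{n-1}$ the surface measure of the unit sphere,
$$\int_{B_2} |w|^{-n-\sigma+2}\,dw=\omega_{n-1}\int_0^2 \rho^{1-\sigma}\,d\rho
=\frac{2^{2-\sigma}}{2-\sigma}\,\omega_{n-1}
\qquad{\mbox{and}}\qquad
\int_{\mathbb R^n\setminus B_2}|w|^{-n-\sigma}\,dw=\omega_{n-1}\int_2^{+\infty}\rho^{-1-\sigma}\,d\rho
=\frac{2^{-\sigma}}{\sigma}\,\omega_{n-1},$$
both quantities being finite since~$0<\sigma<2$.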
We now prove \eqref{A3 estimate}. For this we notice that, thanks
to~\eqref{V001}, $v(x+w)$ and $v(x-w)$ (and also their
derivatives) are equal to zero if $x$ and $w$ lie in $B_{1/4}$.
Hence, by an integration by parts we see that
\begin{eqnarray*}
&& \int_{\mathbb R^n} \partial^{\lambda}_x K(x,w)\,\delta
(\partial^{\gamma-\lambda}_x v)(x,w)\,dw
\\
&=& \int_{\mathbb R^n}\partial^{\lambda}_x
K(x,w)\,\partial^{\gamma-\lambda}_w \big[v(x+w)- v(x-w)\big]\,dw
\\ &=&
\int_{\mathbb R^n\setminus B_{1/4}} \partial^{\lambda}_x
K(x,w)\,\partial^{\gamma-\lambda}_w \big[v(x+w)- v(x-w)\big]\,dw
\\ &=& (-1)^{|\gamma-\lambda|}
\int_{\mathbb R^n\setminus B_{1/4}}
\partial^{\lambda}_x\partial^{\gamma-\lambda}_w K(x,w)\,
\big[v(x+w)- v(x-w)\big]\,dw.
\end{eqnarray*}
Consequently, by~\eqref{sm 1},
\begin{eqnarray*}
&& \leqslantft|\int_{\mathbb R^n} \partial^{\lambda}_x K(x,w)\,\delta
(\partial^{\gamma-\lambda}_x v)(x,w)\,dw\right|
\\ &&\leqslant 2C_{|\gamma|}\, \| v\|_{L^\infty(\mathbb R^n)}
\int_{\mathbb R^n\setminus B_{1/4}} |w|^{-n-\sigma-|\gamma-\lambda|}
\,dw,
\end{eqnarray*}
proving \eqref{A3 estimate}.
\end{proof}
\subsection{Approximation by nicer kernels}
\label{TTT}
In what follows, it will be convenient
to approximate the solution $u$ of \eqref{eq:main} with smooth functions $u_\varepsilon$
obtained
by solving equations similar to \eqref{eq:main}, but with
kernels $K_\varepsilon$ which coincide with the fractional
Laplacian in a neighborhood of the origin. Indeed, this will allow
us to work with smooth functions, ensuring that in our
computations all integrals converge. We will then prove uniform
estimates on $u_\varepsilon$, which will give
the desired $C^{\sigma+\alpha}$-bound on $u$ by letting $\varepsilon \to 0$.
To simplify the notation, up to multiplying both $K$ and $f$
by $1/a_0$, we assume without loss of generality that the constant $a_0$
in \eqref{ass silv} is equal to $1$.
\\
Let~$\eta\in C^{\infty}(\mathbb R^n)$ satisfy
$$
\eta=\left\{\begin{array}{ll}
1 &\quad\mbox{in } B_{1/2}, \\
0 &\quad\mbox{in } \mathbb R^n\setminus B_{3/4},
\end{array}\right.
$$
and for any $\varepsilon,\delta>0$ set
$\eta_{\varepsilon}(w):=\eta\big(\frac{w}{\varepsilon}\big)$
and $\hat\eta_{\delta}(x):=\delta^{-n}\eta(x/\delta)$. Then we define
\begin{equation}\label{19bis}
K_{\varepsilon}(x,w):=\eta_{\varepsilon}(w)\frac{2-\sigma}{|w|^{n+\sigma}}+(1-\eta_{\varepsilon}(w))\hat
K_{\varepsilon}(x,w),
\end{equation}
where
\begin{equation}
\label{eq:hat K} \hat K_{\varepsilon}(x,w):=K(x,w)\ast \Big(
\hat\eta_{\varepsilon^2}(x)\hat\eta_{\varepsilon^2}(w)\Big),
\end{equation}
and
\begin{equation}\label{17b}
L_{\varepsilon}v(x):=\int_{\mathbb R^n}{K_{\varepsilon}(x,w)\,\delta
v(x,w) dw}.\end{equation} We also define
\begin{equation}
\label{eq:f eps}
f_{\varepsilon}(x):=f(x,u(x))\ast\hat\eta_{\varepsilon}(x).
\end{equation}
Note that we get a family $f_\varepsilon\in C^\infty(B_{1})$ such
that
$$ {\mbox{$\displaystyle\lim_{\varepsilon\to 0^+}{f_{\varepsilon}}=f$
uniformly in~$B_{3/4}$.}}$$ Finally, we define $u_{\varepsilon}\in
L^\infty(\mathbb R^{n})\cap C(\mathbb R^n)$ as the unique solution to the
following linear problem:
\begin{equation}\label{20bis}
\begin{array}{llll}\left\{\begin{matrix}
L_{\varepsilon}{u_{\varepsilon}}=
f_{\varepsilon}(x)&\quad\mbox{in }B_{3/4}\\
u_{\varepsilon}=u&\quad\mbox{in } \mathbb R^{n}\setminus B_{3/4}.
\end{matrix}\right.
\end{array}
\end{equation}
It is easy to check that the kernels $K_\varepsilon$ satisfy
\eqref{ass silv} and \eqref{sm 1} with constants independent of
$\varepsilon$ (recall that, to simplify the notation, we are assuming $a_0=1$).
Also, since $K$ satisfies assumption \eqref{sm 1} with
$k=0$ and the convolution parameter $\varepsilon^2$ in
\eqref{19bis} is much smaller than $\varepsilon$, the operators
$L_\varepsilon$ converge to the operator associated to $K$ in the
weak sense introduced in~\cite[Definition 22]{CS2011}. Indeed, let
$v$ be a smooth function satisfying
\begin{equation}\label{v condition}
|v|\leqslant M \quad\mbox{in $\mathbb R^{n}$},\qquad |v(w)-v(x)-(w-x)\cdot\nabla v(x)|\leqslant M|x-w|^{2}\quad\forall\,w\in
B_{1}(x),
\end{equation}
for some $M>0$.
Then, from \eqref{sm 1} and \eqref{v condition}, it follows that
\begin{eqnarray}
&&\int_{\mathbb{R}^{n}}{\left|\eta_{\varepsilon}(w)\frac{2-\sigma}{|w|^{n+\sigma}}+(1-\eta_{\varepsilon}(w))\bigl(K(x,w)\ast\hat{\eta}_{\varepsilon^2}(x)\hat{\eta}_{\varepsilon^{2}}(w)\bigr)-K(x,w)\right||\delta v(x,w)|\,dw}\nonumber\\
&\leqslant&\int_{\mathbb{R}^{n}}{\left(\eta_{\varepsilon}(w)\Big|\frac{2-\sigma}{|w|^{n+\sigma}}-K(x,w)\Big|+(1-\eta_{\varepsilon}(w))
\Big|K(x,w)\ast\hat{\eta}_{\varepsilon^2}(x)\hat{\eta}_{\varepsilon^{2}}(w)-K(x,w)\Big|\right)}\nonumber\\
&&\qquad\qquad \qquad\qquad\qquad \qquad\qquad\qquad \qquad \times|\delta v(x,w)|\,dw\nonumber\\
&\leqslant&\int_{B_{\varepsilon}}{C|w|^{2-n-\sigma}\,dw}+\int_{\mathbb R^n\setminus B_{\varepsilon}}{\bigl|K(x,w)\ast\hat{\eta}_{\varepsilon^2}(x)\hat{\eta}_{\varepsilon^{2}}(w)-K(x,w)\bigr|\,|\delta v(x,w)|\,dw}\nonumber\\
&\leqslant&
C\varepsilon^{2-\sigma}+I,
\label{ne}
\end{eqnarray}
with
$$I:=\int_{\mathbb R^n\setminus B_{\varepsilon}}{\bigl|K(x,w)\ast\hat{\eta}_{\varepsilon^2}(x)\hat{\eta}_{\varepsilon^{2}}(w)-K(x,w)\bigr|\,|\delta
v(x,w)|\,dw}.$$ By \eqref{sm 1}, \eqref{v condition}, and the fact
that $\sigma>1$, we have
\begin{eqnarray*}
I&=&\int_{\mathbb R^n\setminus B_{\varepsilon}}{\int_{B_{1}}{\int_{B_{1}}{\left|K(x-\varepsilon^2y,w-\varepsilon^2\tilde{w})\eta(y)\eta(\tilde{w})-K(x,w)\right|\,dy
\,d\tilde{w}\,|\delta v(x,w)|\,dw}}}\\
&\leqslant&\int_{\mathbb R^n\setminus B_{\varepsilon}}{\frac{C\varepsilon^2}{|w|^{n+1+\sigma}}\,|\delta v(x,w)|\,dw}\\
&\leqslant&C\int_{B_{1}\setminus B_{\varepsilon}}{\frac{\varepsilon^2}{|w|^{n-1+\sigma}}\,dw}+C\int_{\mathbb R^n\setminus B_{1}}{\frac{\varepsilon^2}{|w|^{n+1+\sigma}}\,dw}\\
&\leqslant &C(\varepsilon^{3-\sigma}+\varepsilon^{2}).
\end{eqnarray*}
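For the reader's convenience, we observe that the last step follows
from a direct computation in polar coordinates (here~$\omega_{n-1}$
denotes the surface measure of the unit sphere): since~$\sigma>1$,
$$\int_{B_{1}\setminus B_{\varepsilon}}{\frac{\varepsilon^2}{|w|^{n-1+\sigma}}\,dw}
=\omega_{n-1}\,\varepsilon^{2}\int_{\varepsilon}^{1}\rho^{-\sigma}\,d\rho
=\omega_{n-1}\,\varepsilon^{2}\,\frac{\varepsilon^{1-\sigma}-1}{\sigma-1}
\leqslant \frac{\omega_{n-1}}{\sigma-1}\,\varepsilon^{3-\sigma},$$
while
$$\int_{\mathbb R^n\setminus B_{1}}{\frac{\varepsilon^2}{|w|^{n+1+\sigma}}\,dw}
=\omega_{n-1}\,\varepsilon^{2}\int_{1}^{+\infty}\rho^{-2-\sigma}\,d\rho
=\frac{\omega_{n-1}}{1+\sigma}\,\varepsilon^{2}.$$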
Combining this estimate with \eqref{ne}, we get
$$\int_{\mathbb{R}^{n}}{\left|\eta_{\varepsilon}(w)\frac{2-\sigma}{|w|^{n+\sigma}}+(1-\eta_{\varepsilon}(w))(K(x,w)\ast\hat{\eta}_{\varepsilon^2}(x)\hat{\eta}_{\varepsilon^{2}}(w))-K(x,w)\right|\,|\delta v(x,w)|\,dw}\leqslant C\varepsilon^{2-\sigma},$$
where $C$ depends on $M$ and $\sigma$. Since $\sigma<2$ we conclude
that
$$\|L_{\varepsilon}-L\|\to 0 \quad\mbox{as $\varepsilon\to 0$.}$$
Thanks to this fact, we can repeat almost \textit{verbatim}\footnote{In order to repeat the
argument in the proof of \cite[Lemma 7]{CS2011} one needs to know
that the functions $u_\varepsilon$ are equicontinuous, which is a
consequence of \cite[{Lemmata 2 and 3}]{CS2011}. To be precise, to apply
\cite[Lemma 3]{CS2011} one would need the kernels to satisfy the
bounds $\frac{(2-\sigma)\lambda}{|w|^{n+\sigma}}\leqslant K_*(x,w)\leqslant
\frac{(2-\sigma)\Lambda}{|w|^{n+\sigma}}$ for all $w \neq 0$,
while in our case the kernel $K$ (and so also $K_\varepsilon$)
satisfies
\begin{equation}\label{Lo}
\frac{(2-\sigma)\lambda}{|w|^{n+\sigma}}\leqslant K(x,w)\leqslant
\frac{(2-\sigma)\Lambda}{|w|^{n+\sigma}}\qquad \forall\,|w|\leqslant
r_0
\end{equation}
with $\lambda:={a_0}-\eta$,
$\Lambda:=a_{0}+\eta$, and
$r_0>0$ (observe that, by our assumptions in \eqref{ass silv},
$\lambda \geqslant 3{a_0}/4$).
However this is not a big problem: if $v \in L^\infty(\mathbb R^n)$
satisfies
$$
\int_{\mathbb R^n} K_*(x,w)\,\delta v(x,w)\,dw = f(x) \qquad\text{in
$B_{3/4}$}
$$
for some kernel satisfying {\eqref{sm 1} and}
$\frac{(2-\sigma)\lambda}{|w|^{n+\sigma}}\leqslant K_*(x,w)\leqslant
\frac{(2-\sigma)\Lambda}{|w|^{n+\sigma}}$ only for $|w| \leqslant r_0$,
we define $K'(x,w):=\zeta(w)K_*(x,w) +
(2-\sigma)\frac{1-\zeta(w)}{|w|^{n+\sigma}}$, with $\zeta$ a
smooth cut-off function equal to $1$ near the origin and supported
inside $B_{r_0}$, to get
$$
\int_{\mathbb R^n} K'(x,w)\,\delta v(x,w)\,dw = f(x) + \int_{\mathbb R^n}
[1-\zeta(w)]\left({-}K_*(x,w)+ \frac{2-\sigma}{|w|^{n+\sigma}}
\right)\,\delta v(x,w)\,dw.
$$
Since $1-\zeta(w)=0$ near the origin, {by assumption \eqref{sm 1}}, the second integral is
uniformly bounded as a function of $x$, so \cite[Lemma 3]{CS2011}
applied to $K'$ gives the desired equicontinuity.
Finally, the uniqueness for the boundary problem
$$
\left\{\begin{matrix}
\int_{\mathbb R^n} K(x,w) \,\delta v(x,w)\,dw=f(x,u(x))&\quad\mbox{in $B_{3/4}$,}\\
v=u\quad&\mbox{in $\mathbb R^{n}\setminus B_{3/4}$.}
\end{matrix}\right.
$$
follows by a standard comparison principle argument (see for
instance the argument used in the proof of \cite[Theorem
3.2]{BCF}). }
the proof of \cite[Lemma 7]{CS2011} to obtain the uniform convergence
\begin{equation}\label{Uu}
u_{\varepsilon}\to u \quad \text{on $\mathbb R^n$} \qquad\mbox{as $\varepsilon\to 0$.}
\end{equation}
\subsection{Smoothness of the approximate solutions}
We prove now that the functions $u_{\varepsilon}$ defined in the
previous section are of class $C^{\infty}$ inside a small ball
(whose size is uniform with respect to $\varepsilon$): namely,
there exists $r\in (0,1/4)$ such that, for any $m\in\mathbb N^{n}$
\begin{equation}\label{LE10} \| D^m u_{\varepsilon}
\|_{L^\infty(B_{r})}\leqslant C \end{equation} for some positive
constant $C= C(m, \sigma, \varepsilon,\| u\|_{L^\infty(\mathbb R^n)},
\|f\|_{ L^\infty(B_{1}\times\mathbb R) })$.
For this, we observe that by~\eqref{19bis}
$$ \frac{2-\sigma}{|w|^{n+\sigma}}=K_\varepsilon(x,w)-
(1-\eta_{\varepsilon}(w))\hat K_{\varepsilon}(x,w)
+(1-\eta_\varepsilon(w))\,\frac{2-\sigma}{|w|^{n+\sigma}} ,$$ so for any $x\in B_{1/4}$
\begin{eqnarray*}
\frac{2-\sigma}{2c_{n,\sigma}}(-\Delta)^{\sigma/2}u_{\varepsilon}(x)&=&
\int_{\mathbb R^n}{\frac{2-\sigma}{|w|^{n+\sigma}}\delta
u_{\varepsilon}(x,w)dw}\\
\qquad&=&f_{\varepsilon}(x)-\int_{\mathbb R^n}{(1-\eta_{\varepsilon}(w))\hat
K_{\varepsilon}(x,w)\, \delta
u_{\varepsilon}(x,w)dw}\\
\quad&&+\int_{\mathbb R^n}{(1-\eta_{\varepsilon}(w))\frac{2-\sigma}{|w|^{n+\sigma}}\,
\delta u_{\varepsilon}(x,w)dw}
\end{eqnarray*}
(here $c_{n,\sigma}$ is the positive constant that appears in the
definition of the fractional Laplacian, see e.g. \cite{PhS,
guida}). Then, for any $x\in B_{1/4}$ it follows that
\begin{eqnarray}
&& (-\Delta)^{\sigma/2}u_{\varepsilon}(x)
\nonumber\\
&=&d_{n,\sigma}\Big[f_{\varepsilon}(x)+\int_{\mathbb R^n}{(1-\eta_{\varepsilon}(w))\Big(\frac{2-\sigma}{|w|^{n+\sigma}}-\hat
K_{\varepsilon}(x,w)\Big)\delta
u_{\varepsilon}(x,w)dw}\Big]\nonumber\\
&=:&d_{n,\sigma}[f_{\varepsilon}(x)+h_\varepsilon(x)]\label{fund.eq}\\
&=:&d_{n,\sigma}g_\varepsilon(x).\nonumber
\end{eqnarray}
with $\displaystyle d_{n,\sigma}:=\frac{2c_{n,\sigma}}{2-\sigma}.$
Making some changes of variables we can rewrite $h_\varepsilon$
as follows:
\begin{eqnarray}\label{BBB0}
\nonumber h_\varepsilon(x)&=&\int_{\mathbb R^n}
(1-\eta_{\varepsilon}(w-x))\Big(\frac{2-\sigma}{|w-x|^{n+\sigma}}-
\hat K_{\varepsilon}(x,w-x)\Big)u_{\varepsilon}(w) dw\\
&+&\int_{\mathbb R^n} (1-\eta_{\varepsilon}(x-w))
\Big(\frac{2-\sigma}{|w-x|^{n+\sigma}}-\hat
K_{\varepsilon}(x,x-w)\Big)u_{\varepsilon}(w) dw \nonumber
\\
&-&2 u_{\varepsilon}(x)\int_{\mathbb R^n}
(1-\eta_{\varepsilon}(w))\Big(\frac{2-\sigma}{|w|^{n+\sigma}}-\hat
K_{\varepsilon}(x,w)\Big) dw.
\end{eqnarray}
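For the reader's convenience, we indicate how this formula is
obtained: writing
$$\delta u_{\varepsilon}(x,w)=u_{\varepsilon}(x+w)+u_{\varepsilon}(x-w)-2u_{\varepsilon}(x),$$
the integral defining $h_\varepsilon$ splits into three terms; the
substitution $w\mapsto w-x$ in the term containing
$u_{\varepsilon}(x+w)$ and the substitution $w\mapsto x-w$ in the term
containing $u_{\varepsilon}(x-w)$ turn them into integrals
of~$u_{\varepsilon}(w)$ against the kernels evaluated at~$w-x$
and~$x-w$ respectively, while the term containing
$u_{\varepsilon}(x)$ is left unchanged.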
We now notice that ``the function $h_{{\varepsilon}}$ is locally
as smooth as $u_{\varepsilon}$'', in the sense that for any $m \in
\mathbb N$ and $U\subset B_{1/4}$ open we have
\begin{equation}\label{BL7}
\|h_{{\varepsilon}}\|_{C^{m}(U)} \leqslant
{C_{\varepsilon,\,m}}\left(1+\|u_{{\varepsilon}}\|_{C^{m}(U)}\right)
\end{equation}
for some constant $C_{\varepsilon,\,m}>0$. To see this observe that, in the first two
integrals, the variable $x$ appears only inside $\eta_\varepsilon$
and in the kernel $\hat K_{\varepsilon}$, and $\eta_{\varepsilon}$
is equal to $1$ near the origin. Hence the first two integrals are
smooth functions of $x$ (recall that $\hat K_{\varepsilon}$ is
smooth, see \eqref{eq:hat K}). The third term is clearly as
regular as $u_{\varepsilon}$, because the third integral is smooth
for the same reason as before. This proves \eqref{BL7}.
We are now going to prove that the functions $u_{\varepsilon}$
belong to $C^\infty(B_{1/5})$, with
\begin{equation}\label{LE1}
\| u_{\varepsilon} \|_{C^m(B_{1/4-r_{m}})}\leqslant
C(r_1,m,{\sigma},\varepsilon, \|
u_{\varepsilon}\|_{L^\infty(\mathbb R^n)},\|f\|_{ L^\infty(B_{1}\times\mathbb R)
})
\end{equation}
for any $m\in\mathbb N$, where
$r_{m}:=1/20 - 25^{-m}$ (so that $1/4-r_m>1/5$ for any $m$).
To show this, we begin by observing that, since $\sigma\in(1,2)$,
by \eqref{fund.eq}, \eqref{BL7}, and \cite[Theorem 61]{CS2011}, we
have that $u_{\varepsilon}\in L^\infty(\mathbb R^{n})\cap
C^{1,\beta}(B_{1/4-r_1})$ for any $\beta<\sigma-1$ ($r_1=1/100$),
and
\begin{equation}\label{7B7A}
\|u_{\varepsilon}\|_{C^{1,\beta}(B_{1/4-r_1})}\leqslant
{C_{\varepsilon}}\Big(\|u\|_{L^{\infty}(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\Big).
\end{equation}
Now, to get a bound on higher derivatives, the idea would be to
differentiate~\eqref{fund.eq} and use again \eqref{BL7} and
\cite[Theorem 61]{CS2011}. However we do not have $C^1$ bounds on
the function $u_{\varepsilon}$ outside
$B_{{1/4-r_1}}$, and therefore we cannot apply
this strategy directly to obtain the $C^{2,\alpha}$ regularity of
the function $u_{\varepsilon}$.
To avoid this problem we follow the localization argument in
\cite[Theorem 13.1]{CScpam}: we take $\delta>0$ small (to be chosen)
and we consider a smooth cut-off function
$$
\vartheta:=\left\{\begin{array}{ll}
1 &\quad\mbox{in } B_{1/4-(1+\delta)r_1}, \\
0 &\quad\mbox{on } \mathbb R^n\setminus B_{1/4-r_1},
\end{array}\right.
$$
and for fixed~$e\in S^{n-1}$ and $|h|<\delta r_1$ we define
\begin{equation}\label{Dv}
v(x):=\frac{u_{\varepsilon}(x+eh)-u_{\varepsilon}(x)}{|h|}.\end{equation}
{The function $v$ is uniformly bounded in $B_{1/4-(1+\delta)r_1}$ because, by \eqref{7B7A}, $u_{\varepsilon}\in C^{1}(B_{1/4-r_1})$.}
We now
write~$v(x)=v_{1}(x)+v_{2}(x)$, where
$$v_{1}(x):=\frac{\vartheta u_{\varepsilon}(x+eh)-\vartheta u_{\varepsilon}(x)}{|h|}\quad\mbox{and}\quad v_{2}(x):=\frac{(1-\vartheta)u_{\varepsilon}(x+eh)-(1-\vartheta)u_{\varepsilon}(x)}{|h|}.$$
By \eqref{7B7A} it is clear that
$$
v_{1}\in L^{\infty}(\mathbb R^n)
$$
and that (recall that $|h|<\delta r_{1}$)
\begin{equation}\label{v1two}
v_1=v\qquad \mbox{inside $B_{1/4-(1+2\delta)r_1}$.}
\end{equation}
Moreover, for $x\in B_{1/4-(1+2\delta)r_1}$, using \eqref{fund.eq},
\eqref{eq:f eps}, and \eqref{BL7} we get
\begin{eqnarray}
\left|(-\Delta)^{\sigma/2}v_1(x)\right|&=&
\left|(-\Delta)^{\sigma/2}v(x)-(-\Delta)^{\sigma/2}v_{2}(x)\right|\nonumber\\
&=&\left|\frac{g_{{\varepsilon}}(x+eh)-g_{{\varepsilon}}(x)}{|h|}-(-\Delta)^{\sigma/2}v_{2}(x)\right|\nonumber\\
&\leqslant&
{C_{\varepsilon}}\Big(1+\|u_{\varepsilon}\|_{C^{1}(B_{1/4-r_1})}\Big)+\left|(-\Delta)^{\sigma/2}v_{2}(x)\right|.\label{M+}
\end{eqnarray}
Now, let us denote by
$K_o(y):=\frac{c_{n,\sigma}}{|y|^{n+\sigma}}$ the kernel of the
fractional Laplacian. Since for $x\in B_{1/4-(1+2\delta)r_1}$ and
$|y|<\delta r_{1}$ we have that $(1-\vartheta)u_{\varepsilon}(x\pm
y)=0$, it follows from a change of variable that
\begin{eqnarray*}
|(-\Delta)^{\sigma/2}v_{2}(x)|&\leqslant&\Big|\int_{\mathbb R^n}{(v_{2}(x+y)+v_{2}(x-y)-2v_{2}(x))K_o(y)dy}\Big|\\
&\leqslant&\Big|\int_{\mathbb R^n}{\frac{(1-\vartheta)u_{\varepsilon}(x+y+eh)-(1-\vartheta)u_{\varepsilon}(x+y)}{|h|}K_o(y)dy}\Big|\\
&&+\Big|\int_{\mathbb R^n}{\frac{(1-\vartheta)u_{\varepsilon}(x-y+eh)-(1-\vartheta)u_{\varepsilon}(x-y)}{|h|}K_o(y)dy}\Big|\\
&\leqslant&\int_{\mathbb R^n}{(1-\vartheta)|u_{\varepsilon}|(x+y)\Big|\frac{K_o(y-eh)-K_o(y)}{|h|}\Big|dy}\\
&&+\int_{\mathbb R^n}{(1-\vartheta)|u_{\varepsilon}|(x-y)\Big|\frac{K_o(y-eh)-K_o(y)}{|h|}\Big|dy}\\
&\leqslant&C\,\|u_{\varepsilon}\|_{L^{\infty}(\mathbb R^n)}\int_{\mathbb R^n\setminus
B_{{\delta r_1}} }{\frac{1}{|y|^{n+\sigma+1}}dy}\\
&\leqslant&C_\delta \|u_{\varepsilon}\|_{L^{\infty}(\mathbb R^n)}.
\end{eqnarray*}
Therefore, by \eqref{M+} we obtain
$$\big|(-\Delta)^{\sigma/2}v_{1}(x)\big|\leqslant
{C_{\varepsilon,\delta}}\Big(1+ \|u_{\varepsilon}\|_{C^{1}(B_{1/4-r_1})}
+\|u_{\varepsilon}\|_{L^{\infty}(\mathbb R^n)}\Big),\qquad x\in
{B_{1/4-(1+2\delta)r_1}},$$ and we can apply \cite[Theorem 61]{CS2011} to
get
that
$v_{1}\in C^{1,\beta}(B_{1/4-r_2})$ for any $\beta<\sigma-1$, with
$$\|v_{1}\|_{C^{1,\beta}(B_{1/4-r_{2}})}\leqslant
{C_{\varepsilon}}\Big(1+\|v_{1}\|_{L^{\infty}(\mathbb R^n)}+
\|u_{\varepsilon}\|_{C^{1}(B_{1/4-r_1})}
+\|u_{\varepsilon}\|_{L^{\infty}(\mathbb R^n)}\Big),$$
provided $\delta>0$ was chosen sufficiently small
so that~{$r_2>(1+2\delta)r_1$}. By \eqref{Dv}, \eqref{v1two}, and
\eqref{7B7A}, this implies that $u_{\varepsilon}\in
C^{2,\beta}(B_{1/4-r_{2}})$, with
\begin{eqnarray*}
\|u_{\varepsilon}\|_{C^{2,\beta}(B_{1/4-r_{2}})}&\leqslant& {C_{\varepsilon}}\Big(1+
\|u_{\varepsilon}\|_{C^{1}(B_{1/4-r_1})}
+\|u_{\varepsilon}\|_{L^{\infty}(\mathbb R^n)}\Big)
\\ &\leqslant&
{C_\varepsilon}\Big(1 +\|u\|_{L^{\infty}(\mathbb R^n)}
+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\Big).
\end{eqnarray*}
Iterating this argument we obtain~\eqref{LE1}, as desired.
\subsection{Uniform estimates and conclusion of the proof for $k=0$}
\label{section:uniforml} Knowing now that the functions
$u_\varepsilon$ defined by \eqref{20bis} are smooth inside $B_{1/5}$ (see \eqref{LE1}), our goal is
to obtain a-priori bounds independent of $\varepsilon$.
By \cite[Theorem 61]{CS2011} applied\footnote{As already observed in the footnote on
page~\pageref{Uu}, the fact that the kernel satisfies \eqref{Lo}
only for $w$ small is not a problem, and one can easily check that
\cite[Theorem 61]{CS2011} still holds in our setting. }
to $u$, we have that $u\in
C^{1,\beta}(B_{1-R_{1}})$ for any $\beta<\sigma-1$ and~$R_{1}>0$,
with
\begin{equation}\label{Be}
\|u\|_{C^{1,\beta}(B_{1-R_1})}\leqslant
C\left(\|u\|_{L^{\infty}(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right).
\end{equation}
Then, for any
$\varepsilon$ sufficiently small,~$f_{\varepsilon}\in
C^1(B_{1/2})$ with
\begin{equation}\label{FE}\begin{split} & \|
f_{\varepsilon}\|_{C^1(B_{1/2})}\leqslant C' \left(1+\| u\|_{C^{1}
(B_{1-R_1})}\right)\\
&\qquad\leqslant C' C \left(1+\| u\|_{L^{\infty}(\mathbb R^n)}
+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right),
\end{split}\end{equation}
where~$C'>0$ depends on~$\|f\|_{C^1(B_{1}\times\mathbb R)}$ only.
Consider a cut-off function~$\tilde\eta$
which is $1$ inside $B_{1/7}$ and $0$ outside
$B_{1/6}$.
Then, recalling~\eqref{20bis}, we write the equation satisfied by
$u_{\varepsilon}$ as
\begin{equation*}
f_{\varepsilon}(x)= \int_{\mathbb R^{n}} K_{\varepsilon}(x,w)\,\delta
(\tilde\eta u_{\varepsilon})(x,w)dw +\int_{\mathbb R^{n}}
K_{\varepsilon}(x,w)\,\delta ((1-\tilde\eta) u_{\varepsilon})(x,w)dw,
\end{equation*}
and by differentiating it, say in direction~$e_1$, we obtain
(recall Lemmata~\ref{D} and~\ref{E})
\begin{eqnarray*}
\partial_{x_1} f_{\varepsilon}(x) &=&
\int_{\mathbb R^{n}} K_{\varepsilon}(x,w) \delta (\partial_{x_1}(\tilde\eta
u_{\varepsilon}))(x,w)dw
\\ &&\quad+\int_{\mathbb R^{n}} \partial_{x_1} \big[
K_{\varepsilon}(x,w)\delta ((1-\tilde\eta) u_{\varepsilon})(x,w)\big]
dw\\&&\quad +\int_{\mathbb R^{n}}
\partial_{x_1} K_{\varepsilon}(x,w)\delta (\tilde\eta u_{\varepsilon})(x,w)dw
\end{eqnarray*}
for any~$x\in B_{1/5}$.
It is convenient to rewrite this equation as
$$ \int_{\mathbb R^{n}} K_{\varepsilon}(x,w)
\delta (\partial_{x_1}(\tilde\eta
u_{\varepsilon}))(x,w)dw=A_1-A_2-A_3,$$ with
\begin{eqnarray*}
A_1 &:=& \partial_{x_1} f_{\varepsilon}(x),\\
A_2 &:=& \int_{\mathbb R^{n}} \partial_{x_1} K_{\varepsilon}(x,w)\delta (\tilde\eta u_{\varepsilon})(x,w)dw\\
A_3&:=& \int_{\mathbb R^{n}} \partial_{x_1} \big[
K_{\varepsilon}(x,w)\delta ((1-\tilde\eta) u_{\varepsilon})(x,w)\big]
dw.
\end{eqnarray*}
We claim that
\begin{equation}\label{A1233}
\|A_1-A_2-A_3\|_{L^\infty(B_{1/14})} \leqslant C
\left(1+\|u\|_{L^{\infty}(\mathbb R^n)}+\|u_{\varepsilon}\|_{C^{2}(B_{1/6})}\right)\end{equation}
with $C$ depending only on $\|f\|_{C^1(B_1\times\mathbb R)}$.
To prove this, we first observe that by~\eqref{FE}
$$
\|A_1\|_{L^\infty(B_{1/14})} \leqslant C \left(1+
\|u\|_{L^{\infty}(\mathbb R^n)}\right).
$$
Also, since $| \partial_{x_1} \hat {K}_{\varepsilon}(x,w)| \leqslant C|w|^{-(n+\sigma)}$,\footnote{
This can be easily checked using the definition of $\hat {K}_{\varepsilon}$ and \eqref{sm 1}.
Indeed, because of the presence of the term $(1-\eta_{\varepsilon}(w))$
which vanishes for $|w|\leqslant \varepsilon/2$, one only needs to check that
$$
\int_{\mathbb R^n} |w-z|^{-n-\sigma} \hat{\eta}_{\varepsilon^2} (z) \,dz
\leqslant C|w|^{-n-\sigma} \qquad\text{for } |w|\geqslant \varepsilon/2,
$$
which is easy to prove (we leave the details to the reader).
}
by~\eqref{A2 estimate} (used with
$\gamma=\lambda:=(1,0,\dots,0)$ and $v:=\tilde\eta u_{\varepsilon}$) we get
$$
\|A_2\|_{L^\infty(B_{1/14})} \leqslant C\|\tilde\eta u_{\varepsilon}\|_{C^2(\mathbb R^n)}
\leqslant C
\|u_{\varepsilon}\|_{C^2(B_{1/6})},
$$
where we used that $\tilde\eta$ is supported in $B_{1/6}$.
Moreover, since~$(1-\tilde\eta)u_\varepsilon=0$ inside $B_{1/7}$, we can
use \eqref{A3 estimate} with $v:=(1-\tilde\eta)u_{\varepsilon}$ to
obtain
\begin{eqnarray*}
&& \left|\int_{\mathbb R^{n}} \partial_{x_1} K_{\varepsilon}(x,w)\,\delta
((1-\tilde\eta) u_{\varepsilon})(x,w)\,dw\right|
\\&&\qquad+
\left|\int_{\mathbb R^{n}} K_{\varepsilon}(x,w)\,\partial_{x_1}\delta
((1-\tilde\eta) u_{\varepsilon})(x,w)\, dw\right|
\\ &&\qquad\qquad\leqslant
C\, C_{k}\, \| (1-\tilde\eta)u_{\varepsilon}\|_{L^\infty(\mathbb R^n)}
\end{eqnarray*}
for any $x\in B_{1/14}$,
which gives (note that, by an easy comparison principle,
$\|u_\varepsilon\|_{L^\infty(\mathbb R^n)} \leqslant
C(1+\|u\|_{L^\infty(\mathbb R^n)})$)
$$\|A_3\|_{L^\infty(B_{1/14})} \leqslant C(1+\|u\|_{L^\infty(\mathbb R^n)}).$$
The above estimates imply \eqref{A1233}.
Since $\partial_{x_1}(\tilde{\eta}u_{\varepsilon})$ is bounded
on the whole of~$\mathbb R^n$, by \eqref{A1233}
and~\cite[Theorem~61]{CS2011} we obtain that~$\partial_{x_1}(\tilde{\eta}
u_{\varepsilon})\in C^{1,\beta}(B_{1/14-R_2})$ for any
$R_{2}>0$, with
$$
\|\partial_{x_1}(\tilde{\eta}
u_{\varepsilon})\|_{C^{1,\beta}(B_{1/14-R_2})} \leqslant C
\left(1+\|u\|_{L^{\infty}(\mathbb R^n)}+\|u_{\varepsilon}\|_{C^{2}(B_{1/6})}\right),$$
which implies
\begin{equation}\label{29AA}
\|u_{\varepsilon}\|_{C^{2,\beta}(B_{1/15})} \leqslant C
\left(1+\|u\|_{L^{\infty}(\mathbb R^n)}+\|u_{\varepsilon}\|_{C^{2}(B_{1/6})}\right).
\end{equation}
To end the proof we need to reabsorb the $C^2$-norm on the right
hand side. To do this, we observe that by standard interpolation
inequalities (see for instance \cite[Lemma 6.35]{GT}), for any
$\delta\in(0,1)$ there exists $C_\delta>0$ such that
\begin{equation}\label{29BB}
\|u_{\varepsilon}\|_{C^2(B_{1/6})} \leqslant \delta
\|u_{\varepsilon}\|_{C^{2,\beta}(B_{1/5})}+C_\delta
\|u_{\varepsilon}\|_{L^\infty(\mathbb R^n)}.
\end{equation}
Hence, by~\eqref{29AA} and~\eqref{29BB} we obtain
\begin{equation}\label{29CC}
\|u_{\varepsilon}\|_{C^{2,\beta}(B_{1/15})} \leqslant C_\delta
(1+\|u\|_{L^\infty(\mathbb R^n)})
+C\delta\|u_{\varepsilon}\|_{C^{2,\beta}(B_{1/5})}.
\end{equation}
To conclude, one needs to apply the above estimates at every point
inside $B_{1/5}$ at every scale: for any $x \in B_{1/5}$, let $r>0$ be any
radius such that $B_r(x)\subset B_{1/5}$. Then we consider
\begin{equation}\label{sc}
v_{\varepsilon,r}^x(y):=u_{\varepsilon}(x+ry),
\end{equation}
and we observe that $v_{\varepsilon,r}^x$ solves an analogous
equation as the one solved by~$u_{\varepsilon}$ with the kernel
given by
$$K^{x}_{\varepsilon,r}(y,z):=r^{n+\sigma}K_{\varepsilon}(x+ry,rz)$$
and with right hand side
$$F_{\varepsilon,r}(y):=r^{\sigma}\int_{\mathbb R^n}{f(x+ry-\tilde{x},u(x+ry-\tilde{x}))\hat{\eta}_{\varepsilon}(\tilde{x})d\tilde{x}}.$$
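For the reader's convenience, we verify these scaling relations:
since $\delta v_{\varepsilon,r}^x(y,z)=\delta u_{\varepsilon}(x+ry,rz)$,
the change of variables $w=rz$ (so that $dw=r^{n}\,dz$) in \eqref{17b}
gives
$$\int_{\mathbb R^n}K^{x}_{\varepsilon,r}(y,z)\,\delta v^{x}_{\varepsilon,r}(y,z)\,dz=
r^{\sigma}\int_{\mathbb R^n}K_{\varepsilon}(x+ry,w)\,\delta u_{\varepsilon}(x+ry,w)\,dw=
r^{\sigma}L_{\varepsilon}u_{\varepsilon}(x+ry)=F_{\varepsilon,r}(y)$$
whenever $x+ry\in B_{3/4}$, by \eqref{20bis} and \eqref{eq:f eps}.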
We now observe that the kernels $K^{x}_{\varepsilon,r}$ satisfy assumptions \eqref{ass silv} and \eqref{sm 1} uniformly with
respect to $\varepsilon$, $r$, and $x$. Moreover, for $|x|+r\leqslant
1/5$, and $\varepsilon$ small, we have
$$\|F_{\varepsilon,r}\|_{C^{1}(B_{1/2})}\leqslant r^{\sigma}C(1+\|u\|_{C^{1}(B_{3/4})}), $$
with $C>0$ depending on $\|f\|_{C^{1}(B_{1}\times\mathbb R)}$ only.
Hence, by \eqref{Be} this implies
$$\|F_{\varepsilon,r}\|_{C^{1}(B_{1/2})}\leqslant r^{\sigma}C\left(1+\|u\|_{L^{\infty}(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right).$$
Thus, applying~\eqref{29CC} to $v_{\varepsilon,r}^x$ (by the
discussion we just made, the constants are all independent of
$\varepsilon$, $r$, and $x$) and scaling back, we get
\begin{eqnarray*}
\|u_{\varepsilon}\|^*_{C^{2,\beta}(B_{r/15}(x))} \leqslant
C_\delta
\left(1+\|u\|_{L^\infty(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right)
+C\delta
\|u_{\varepsilon}\|^*_{C^{2,\beta}(B_{r/5}(x))}.
\end{eqnarray*}
Using now Lemma~\ref{Co2} inside $B_{1/5}$ with $\mu=1/15$, $\nu=1/5$, $m=2$, and $\Lambda_\delta=C_\delta
(1+\|u\|_{L^\infty(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)})$, we
conclude (observe that $1/15 \cdot 1/5=1/75$)
$$ \|u_{\varepsilon}\|_{C^{2,\beta}(B_{1/75})}\leqslant C\left(1+\|u\|_{L^\infty(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right),$$
which implies
$$ \|u\|_{C^{2,\beta}(B_{1/75})}\leqslant C\left(1+\|u\|_{L^\infty(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right)$$
by letting $\varepsilon \to 0$ (see \eqref{Uu}). Since
$\beta<\sigma-1$, this is equivalent to
$$ \|u\|_{C^{\sigma+\alpha}(B_{1/75})}\leqslant C
\left(1+\|u\|_{L^\infty(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right)\quad\mbox{for any $\alpha<1$}.$$
A standard covering/rescaling argument completes the proof of
Theorem~\ref{boot} in the case $k=0$.
\subsection{The induction argument.}
We already proved Theorem~\ref{boot} in the case $k=0$.
We now show by induction that, for any $k \geqslant 1$,
\begin{equation}\label{induction}
\| u\|_{C^{k+\sigma+\alpha}(B_{1/2^{{3k+4}}})} \leqslant C_k
\left(1+\|u\|_{L^\infty(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right) ,
\end{equation}
for some constant~$C_k>0$: by a standard covering/rescaling
argument, this proves~\eqref{3bis} and so Theorem~\ref{boot}.
As we shall see, the argument is more or less identical to the case $k=0$.
To be fully rigorous, we should apply the regularization argument with the functions $u_\varepsilon$
as done in the previous step.
However, to simplify the notation and make the argument more transparent, we will skip the
regularization.
Define $g(x):=f(x,u(x))$, and consider a cut-off function~$\tilde\eta$
which is $1$ inside $B_{1/2^{3k+5}}$ and $0$ outside
$B_{1/2^{3k+4}}$.
By Lemmata~\ref{D} and~\ref{E} we differentiate the equation~$k+1$
times according to the following computation: first we observe
that, {since \eqref{induction} is true for $k-1$ and we can choose $\alpha \in (2-\sigma,1)$
so that $\sigma+\alpha>2$}, we deduce that $g\in C^{k+{1}}(B_{1/2^{3k+4}})$ with
\begin{equation}\label{gg}
\|g\|_{C^{k+{1}}(B_{1/2^{3k+4}})}\leqslant C\left(1+ \| u\|_{C^{k+{1}}
(B_{1/2^{3k+4}})}\right){\leqslant C\left(1+\|u\|_{L^{\infty}(\mathbb R^n)}+\|f\|_{L^{\infty}(B_1\times\mathbb R)}\right)},
\end{equation}
with~$C>0$ depending on~$\|f\|_{C^{k+{1}}(B_{1}\times\mathbb R)}$ only. Now
we take~$\gamma\in\mathbb N^n$ with~$|\gamma|=k+{1}$ and we differentiate
the equation to obtain
\begin{eqnarray*}&& \partial^\gamma g(x)
\\ &=&
\sum_{{{1\leqslant i\leqslant n}\atop{0\leqslant \lambda_i\leqslant
\gamma_i}}\atop{\lambda=(\lambda_1,\dots,\lambda_n)}} \left(
{\gamma_1}\atop{\lambda_1}\right)\dots \left(
{\gamma_n}\atop{\lambda_n}\right) \int_{\mathbb R^n} \partial^{\lambda}_x
K(x,w)\,\delta (\partial^{\gamma-\lambda}_x (\tilde\eta u))(x,w)\,dw
\\ &+&
\sum_{{{1\leqslant i\leqslant n}\atop{0\leqslant \lambda_i\leqslant
\gamma_i}}\atop{\lambda=(\lambda_1,\dots,\lambda_n)}} \left(
{\gamma_1}\atop{\lambda_1}\right)\dots \left(
{\gamma_n}\atop{\lambda_n}\right) \int_{\mathbb R^n} \partial^{\lambda}_x
K(x,w)\,\delta (\partial^{\gamma-\lambda}_x ((1-\tilde\eta)u))(x,w)\,dw
.\end{eqnarray*} Then, we isolate the term with~$\lambda=0$ in the
first sum:
$$
\int_{\mathbb R^n} K(x,w)\, \delta (\partial^\gamma_x(\tilde\eta u)) (x,w)\,dw
= A_1-A_2-A_3
$$
with
\begin{eqnarray*}
A_1 &:=& \partial^\gamma g(x),\\
A_2 &:=& \sum_{{{1\leqslant i\leqslant n}\atop{0\leqslant \lambda_i\leqslant
\gamma_i}}\atop{\lambda=(\lambda_1,\dots,\lambda_n) \ne0}} \left(
{\gamma_1}\atop{\lambda_1}\right)\dots \left(
{\gamma_n}\atop{\lambda_n}\right) \int_{\mathbb R^n} \partial^{\lambda}_x
K(x,w)\,\delta
(\partial^{\gamma-\lambda}_x (\tilde\eta u))(x,w)\,dw\\
A_3&:=& \sum_{{{1\leqslant i\leqslant n}\atop{0\leqslant \lambda_i\leqslant
\gamma_i}}\atop{\lambda=(\lambda_1,\dots,\lambda_n)}} \left(
{\gamma_1}\atop{\lambda_1}\right)\dots \left(
{\gamma_n}\atop{\lambda_n}\right) \int_{\mathbb R^n} \partial^{\lambda}_x
K(x,w)\,\delta (\partial^{\gamma-\lambda}_x ((1-\tilde\eta)u))(x,w)\,dw.
\end{eqnarray*}
We claim that
\begin{equation}\label{R}
\|A_1-A_2-A_3\|_{L^\infty(B_{1/2^{3k+6}})} \leqslant C
\left(1+\|u\|_{L^\infty(\mathbb R^n)}
+\|u\|_{C^{k+{2}}(B_{1/2^{3k+4}})}\right).
\end{equation}
Indeed, by the fact that~$|\gamma-\lambda|\leqslant k$ we see that
\begin{equation}\label{gg2}
\begin{split}
\|A_2\|_{L^\infty(B_{1/2^{3k+6}})}
& \leqslant C \, C_{k} \,\| \tilde\eta u\|_{C^{k+{2}} (\mathbb R^n)}\\
& \leqslant C \, C_{k} \,\| u\|_{C^{k+{2}} (B_{1/2^{3k+4}})}.
\end{split}\end{equation}
Furthermore, since $(1-\tilde\eta)u=0$ inside $B_{1/2^{3k+5}}$, we can
use \eqref{A3 estimate} with $v:=(1-\tilde\eta)u$ to obtain
$$ \|A_3\|_{L^\infty(B_{1/2^{3k+6}})}\leqslant C \|u\|_{L^\infty(\mathbb R^n)}.$$
This last estimate, \eqref{gg}, and \eqref{gg2} allow us to
conclude the validity of~\eqref{R}.
Now, by~\cite[Theorem~61]{CS2011} applied to
$\partial^\gamma_x(\tilde\eta u)$ we get
$$
\|u\|_{C^{\sigma+k+\alpha}(B_{1/2^{3k+7}})} \leqslant C
\left(1+\|u\|_{C^{k+{2}}(B_{1/2^{3k+4}})}+\|u\|_{L^\infty(\mathbb R^n)}\right),$$
which is the analogue of \eqref{29AA} with $\sigma+\alpha=2+\beta$.
Hence, arguing as in the case $k=0$ (see the argument after \eqref{29AA}) we conclude that
$$ \|u\|_{C^{\sigma+k+\alpha}(B_{1/2^{{3(2k+1)+5}}})}\leqslant
C\left(1+\|u\|_{L^\infty(\mathbb R^n)}+\|f\|_{L^{\infty}(B_{1}\times\mathbb R)}\right).$$
A covering argument shows the validity of \eqref{induction},
concluding the proof of Theorem~\ref{boot}.
\section{Proof of Theorem~\ref{main}}
\label{section:main}
The idea of the proof is to write the fractional minimal surface
equation in a suitable form so that we can apply
Theorem~\ref{boot}.
\subsection{Writing the operator on the graph of~$u$}
The first step in our proof consists in writing the $s$-minimal
surface functional in terms of the function $u$ which (locally)
parameterizes the boundary of a set $E$. More precisely, we assume
that $u$ parameterizes $\partial E \cap K_R$ and that (without
loss of generality) $E \cap K_R$ is contained in the hypograph of
$u$. Moreover, since by assumption $u(0)=0$ and $u$ is of class
$C^{1,\alpha}$, up to rotating the system of coordinates (so that
$\nabla u(0)=0$) and reducing the size of $R$, we can also assume
that
\begin{equation}
\label{eq:small Lip}
\partial E \cap K_R\subset B_R^{n-1}\times [-R/8,R/8].
\end{equation}
Let $\varphi\in C^\infty(\mathbb R)$ be an even function satisfying
$$
\varphi(t)=\left\{\begin{array}{ll}
1 &\quad\mbox{if } |t|\leqslant 1/4, \\
0 &\quad\mbox{if } |t|\geqslant 1/2,
\end{array}\right.
$$
and define the smooth cut-off functions
$$
\zeta_R(x'):=\varphi(|x'|/R)\qquad\eta_R(x):= \varphi(|x'|/R)
\varphi(|x_n|/R).
$$
Observe that
$$
\zeta_R= 1 \quad \text{in }B_{R/4}^{n-1},\qquad \zeta_R= 0\quad
\text{outside }B_{R/2}^{n-1},
$$
$$
\eta_R= 1\quad \text{in } K_{R/4},\qquad \eta_R= 0 \quad
\text{outside } K_{R/2}.
$$
We claim that, for any~$x\in\partial E \cap \left(B_{R/2}^{n-1}\times [-R/8, R/8]\right)$,
\begin{equation}\label{LHS-1}\begin{split}
& \int_{\mathbb R^{n}}\eta_R(y-x)\frac{\chi_E(y)-\chi_{\mathbb CC E}
(y)}{|x-y|^{n+s}}\,dy
\\ &\qquad=2\int_{\mathbb R^{n-1}}
F\left( \frac{u(x'-w')-u(x')}{|w'|}\right)
\frac{\zeta_R(w')}{|w'|^{n-1+s}}\,dw',\end{split}
\end{equation}
where
$$
F(t):=\int_0^t\frac{d\tau}{(1+\tau^2)^{(n+s)/2}}.
$$
Indeed, writing $y=x-w$ we have (observe that $\eta_R$ is even)
\begin{eqnarray}\label{JW}
\nonumber &&\int_{\mathbb R^n}\eta_R(y-x)\frac{\chi_E(y)-\chi_{\mathbb CC E}
(y)}{|x-y|^{n+s}}\,dy
\\ &=&\int_{\mathbb R^n}\eta_R(w)\frac{\chi_E(x-w)-\chi_{\mathbb CC E} (x-w)}{|w|^{n+s}}\,dw
\\ &=&\int_{\mathbb R^{n-1}}\zeta_R(w') \biggl[
\int_{-R/4}^{R/4} \frac{\chi_E (x-w)-\chi_{\mathbb CC E}(x-w)}{\Big(
1+(w_n/|w'|)^2\Big)^{(n+s)/2}}\,dw_n
\biggr]\frac{dw'}{|w'|^{n+s}}, \nonumber\end{eqnarray} where the
last equality follows from the fact that $\varphi(|w_n|/R)= 1$ for
$|w_n| \leqslant R/4$,
and that by \eqref{eq:small Lip} and by
symmetry the contributions of $\chi_E (x-w)$ and $\chi_{\mathbb CC
E}(x-w)$ outside $\{|w_n|\leqslant R/4\}$ cancel each other.
We now compute the inner integral: using the change of
variable~$t:=w_n/|w'|$ we have
\begin{eqnarray*}
&&\int_{-R/4}^{R/4}\frac{\chi_{E} (x-w)}{\Big(
1+(w_n/|w'|)^2\Big)^{(n+s)/2}}\,dw_n
\\ &=&
\int_{u(x')-u(x'-w')}^{R/4}\frac{1}{\Big(
1+(w_n/|w'|)^2\Big)^{(n+s)/2}}\,dw_n
\\ &=& |w'| \int_{(u(x')-u(x'-w'))/|w'|}^{R/(4|w'|)}\frac{1}{\big(
1+t^2\big)^{(n+s)/2}}\,dt\\ &=& |w'| \biggl[ F\left(
\frac{R}{4|w'|}\right)-F\left(\frac{u(x')-u(x'-w')}{|w'|}\right)\biggr]
.\end{eqnarray*} In the same way,
\begin{eqnarray*}
&& \int_{-R/4}^{R/4}\frac{\chi_{\mathbb CC E} (x-w)}{\Big(
1+(w_n/|w'|)^2\Big)^{(n+s)/2}}\,dw_n \\ &&\qquad=|w'| \biggl[
F\left(\frac{u(x')-u(x'-w')}{|w'|}\right) -F\left(-
\frac{R}{4|w'|}\right)\biggr] .\end{eqnarray*} Therefore,
since~$F$ is odd, we immediately get that
\begin{eqnarray*}
\int_{-R/4}^{R/4}\frac{\chi_E (x-w)-\chi_{\mathbb CC E}(x-w)}{\Big(
1+(w_n/|w'|)^2\Big)^{(n+s)/2}}\,dw_n = 2|w'| F\left(
\frac{u(x'-w')-u(x')}{|w'|}\right),\end{eqnarray*} which together
with~\eqref{JW} proves~\eqref{LHS-1}.
Let us point out that to justify these computations in a pointwise
fashion one would need $u \in C^{1,1}(x)$ (in the sense of
\cite[Definition 3.1]{BCF2}). However, by using the viscosity
definition it is immediate to check that \eqref{LHS-1} holds in
the viscosity sense (since one only needs to verify it at points
where the graph of $u$ can be touched with paraboloids).
\subsection{The right hand side of the equation}
Let us define the function
\begin{equation}\label{smooth0}
\Psi_R(x):=
\int_{\mathbb R^{n}}\leqslantft[1-\eta_R(y-x)\right]\frac{\chi_E(y)-\chi_{\mathbb CC
E} (y)}{|x-y|^{n+s}}\,dy.
\end{equation}
Since $1-\eta_R(y-x)$ vanishes in a neighborhood of $\{x=y\}$, it
is immediate to check that the function $ \psi_R(z):=
\displaystyle\frac{1-\eta_R(z)}{|z|^{n+s}}$ is of class
$C^\infty$, with
$$
|\partial^\alpha \psi_R(z)| \leqslant
\frac{C_{|\alpha|}}{1+|z|^{n+s}}\qquad \forall\,\alpha\in \mathbb N^n.
$$
Hence, since $1/(1+|z|^{n+s}) \in L^1(\mathbb R^n)$ we deduce that
\begin{equation}\label{R7}
{\mbox{$\Psi_R \in C^\infty(\mathbb R^n)$, with all its derivatives
uniformly bounded.}}
\end{equation}
\subsection{An equation for~$u$ and conclusion}
By~\cite[Theorem 5.1]{CRS} we have that the equation
$$ \int_{\mathbb R^n} \frac{\chi_E(y)-\chi_{\mathbb CC E}(y)}{|x-y|^{n+s}}\,dy=0$$
holds in viscosity sense for any~$x\in (\partial E)\cap K_R$.
Consequently, by~\eqref{LHS-1} and~\eqref{smooth0} we deduce that
$u$ is a viscosity solution of
$$
\int_{\mathbb R^{n-1}} F\left( \frac{u(x'-w')-u(x')}{|w'|}\right)
\frac{\zeta_R(w')}{|w'|^{n-1+s}}\,dw'=-\frac{\Psi_R(x',u(x'))}{2}
$$
inside $B_{R/2}^{n-1}$.
Since $F$ is odd, we can add the term $F\left(-\nabla u(x')\cdot \frac{w'}{|w'|}\right)$ inside the integral on the left hand side (since it integrates to zero), so the equation actually becomes
\begin{multline}\label{eq1}
\int_{\mathbb R^{n-1}} \biggl[F\left( \frac{u(x'-w')-u(x')}{|w'|}\right)-F\left(-\nabla u(x')\cdot \frac{w'}{|w'|}\right)\biggr]
\frac{\zeta_R(w')}{|w'|^{n-1+s}}\,dw'\\
=-\frac{\Psi_R(x',u(x'))}{2}.
\end{multline}
We would like to apply the regularity
result from Theorem~\ref{boot2}, exploiting~\eqref{R7} to bound the
right hand side of~\eqref{eq1}. To this aim, using the Fundamental
Theorem of Calculus, we rewrite the left hand side in \eqref{eq1}
as
\begin{equation}
\label{eq:rewrite lhs} \int_{\mathbb{R}^{n-1}}{\bigl(u(x'-w')-u(x')+\nabla u(x')\cdot w'\bigr)
\frac{a(x',-w')\zeta_R(w')}{|w'|^{n+s}}}\,dw',
\end{equation}
where
$$
a(x',-w'):=\int_{0}^{1}\biggl(1+\left(t\frac{u(x'-w')-u(x')}{|w'|}-(1-t)\nabla u(x')\cdot \frac{w'}{|w'|}\right)^{2}\biggr)^{-({n+s})/{2}}\,dt.
$$
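For clarity, the computation behind \eqref{eq:rewrite lhs} is the Fundamental Theorem of Calculus applied to $F$: setting
$$
A:=-\nabla u(x')\cdot \frac{w'}{|w'|},\qquad B:=\frac{u(x'-w')-u(x')}{|w'|},
$$
and recalling that $F'(\tau)=(1+\tau^2)^{-(n+s)/2}$, one has
$$
F(B)-F(A)=(B-A)\int_0^1 F'\bigl(tB+(1-t)A\bigr)\,dt
=\frac{u(x'-w')-u(x')+\nabla u(x')\cdot w'}{|w'|}\,a(x',-w'),
$$
so that multiplying by $\zeta_R(w')/|w'|^{n-1+s}$ and integrating in $w'$ turns the left hand side of \eqref{eq1} into \eqref{eq:rewrite lhs}.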
Now, we claim that
\begin{equation}\label{eq2}
\begin{split}
\int_{\mathbb{R}^{n-1}}\delta u(x',w')\, K_R(x',w')\,dw'
=-\Psi_R(x',u(x')) +A_R(x'),
\end{split}
\end{equation}
where
$$
K_R(x',w'):=
\frac{[a(x',w')+a(x',-w')]\zeta_R(w')}{2|w'|^{(n-1)+(1+s)}}
$$
and
$$
A_R(x'):=\int_{\mathbb{R}^{n-1}}[u(x'-w')-u(x')+\nabla u(x')\cdot
w'] \frac{[a(x',w')-a(x',-w')]\zeta_R(w')}{|w'|^{n+s}}\,dw'.
$$
To prove~\eqref{eq2}
we introduce a short-hand notation:
we define
$$
u^{\pm}(x',w'):=u(x'\pm w')-u(x')\mp \nabla u(x')\cdot w',\qquad
a^\pm(x',w'):= a(x', \pm w')\frac{\zeta_R (w')}{|w'|^{n+s}},$$
while the integration over~${\mathbb{R}^{n-1}}$, possibly in the principal value sense, will be denoted by~$I[\cdot]$.
With this notation, and recalling~\eqref{eq:rewrite lhs}, it follows that
\eqref{eq1} can be written as
\begin{equation}\label{eq:rewrite lhs.2}
-\frac{\Psi_R}2= I[u^- a^-].\end{equation}
By changing $w'$ with $-w'$ in the integral given by~$I$,
we see that
$$ I[u^+ a^+]=I[u^- a^-],$$
consequently~\eqref{eq:rewrite lhs.2} can be rewritten as
\begin{equation}
\label{eq:rewrite lhs2}
-\frac{\Psi_R}2= I[u^+ a^+].
\end{equation}
Notice also that
\begin{equation}\label{extra}
u^+ + u^- = \delta u, \qquad I[u^+(a^+-a^-)]=I[u^-(a^--a^+)].\end{equation}
{Hence,} adding~\eqref{eq:rewrite lhs.2} and~\eqref{eq:rewrite lhs2}, and {using \eqref{extra}},
we obtain
{
\begin{eqnarray*}
-\Psi_R &=& I[u^+ a^+]+I[u^- a^-]
\\ &=& \frac{1}{2}I[(u^+ + u^-) (a^++a^-)]+\frac{1}{2}I[(u^+ - u^-) (a^+-a^-)]
\\ &=& \frac{1}{2}I[\delta u\, (a^++a^-)]+\frac{1}{2}I[(u^+ - u^-) (a^+-a^-)]
\\ &=& \frac{1}{2}I[\delta u\, (a^++a^-)]-I[u^- (a^+-a^-)],
\end{eqnarray*}
}
which proves~\eqref{eq2}.
Now, to conclude the proof of Theorem~\ref{main}
it suffices to apply Theorem~\ref{boot2} iteratively: more precisely, let us start by assuming that
$u \in
C^{1,\beta}(B_{2r}^{n-1})$ for some $r\leqslant R/2$ and any $\beta<s$. Then, by the discussion above we get that $u$ solves
$$
\int_{\mathbb{R}^{n-1}}\delta u(x',w')\, K_{r}(x',w')\,dw'
=-\Psi_r(x',u(x')) +A_r(x') \qquad \text{in }B_r^{n-1}.
$$
Moreover, one can easily check that the regularity of $u$ implies
that the assumptions of Theorem \ref{boot2} {with $k=0$} are satisfied with
$\sigma:=1+s$ and $a_0:= 1/(1-s)$. (Observe that \eqref{new
condition} holds since~$\|u\|_{C^{1,\beta}(B^{n-1}_{2r})}$ is finite.)
Furthermore, it is not difficult to check that,
for $|w'|\leqslant 1$,
$$
\left|[u(x'-w')-u(x')+\nabla u(x')\cdot
w']\,[a(x',w')-a(x',-w')]\right| \leqslant C|w'|^{2\beta+1},
$$
which implies that the integral defining $A_r$ is convergent by choosing
$\beta>s/2$.
Furthermore, a tedious computation (which we postpone to Subsection \ref{EEE} below)
shows that \begin{equation}\label{END} A_r \in
C^{2\beta-s}(B_{r}^{n-1}).\end{equation}
Hence, by Theorem \ref{boot2} with $k=0$ we deduce that $u \in
C^{1,2\beta}(B^{n-1}_{r/2})$.
But then this implies that $A_{r}\in
C^{4\beta-s}(B_{r/4}^{n-1})$ and so by Theorem \ref{boot2} again $u \in
C^{1,4\beta}(B^{n-1}_{r/8})$ for all $\beta <s$.
Iterating this argument infinitely many times\footnote{Note that,
once we know that $\|u\|_{C^{k,\beta}(B^{n-1}_{2r})}$ is bounded for some $k \geqslant 2$ and $\beta \in (0,1]$,
for any $|\gamma|\leqslant k-1$ we get
$$
\partial_x^\gamma A_r(x)=\int_{\mathbb{R}^{n-1}}
\partial_x^\gamma\bigl([u(x'-w')-u(x')+\nabla u(x')\cdot
w']\,[a(x',w')-a(x',-w')]\bigr)
\frac{\zeta_r(w')}{|w'|^{n+s}}\,dw',
$$
and exactly as in the case $k=0$ one shows that
$$
\left|\partial_x^\gamma\bigl([u(x'-w')-u(x')+\nabla u(x')\cdot
w']\,[a(x',w')-a(x',-w')]\bigr)\right| \leqslant C|w'|^{2\beta+1}\qquad \forall\,|w'|\leqslant 1,
$$
and that $A_r \in C^{k-1,2\beta-s}(B_{r}^{n-1})$.}
we get that $u \in
C^{m}(B^{n-1}_{\lambda^mr})$ for some $\lambda>0$ small, for any $m\in\mathbb{N}$. Then, by a simple covering argument we obtain that $u \in
C^{m}(B^{n-1}_{\rho})$ for any $\rho<R$ and $m\in\mathbb{N}$, that is, $u$ is of class $C^{\infty}$ inside $B_{\rho}$ for any $\rho<R$. This completes the proof of
Theorem \ref{main}.
\subsection{H\"older regularity of $A_{R}$.}
\label{EEE}
We now prove \eqref{END},
i.e., if $u\in C^{1,\beta}(B^{n-1}_{2r})$ then $A_r\in C^{2\beta-s}(B^{n-1}_{r})$
($r\leqslant R/2$). For this we introduce the following notation:
$$U(x',w'):=u(x'-w')-u(x')+\nabla u(x')\cdot w'$$
and
$$p(\tau):=\frac{1}{ (1+\tau^2)^{\frac{n+s}{2}} }.$$
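Note that $p$ is smooth and even, with
$$
|p(\tau)|\leqslant 1,\qquad p'(\tau)=-\frac{(n+s)\,\tau}{(1+\tau^2)^{(n+s+2)/2}},
$$
so that $p'$ is bounded and uniformly Lipschitz on $\mathbb R$; these elementary properties are used repeatedly in the estimates below.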
In this way we can write
\begin{equation}\label{57a}
a(x',-w')=\int_{0}^{1}{p\biggl(t\frac{u(x'-w')-u(x')}{|w'|}
-(1-t)\nabla u(x')\cdot \frac{w'}{|w'|}\biggr)\,dt}.
\end{equation}
Let us define
$$\mathscr{A}(x',w'):=a(x',w')-a(x',-w').$$
Then we have
$$A_r(x')=\int_{\mathbb R^{n-1}}{U(x',w')\frac{\mathscr{A}(x',w')}{|w'|^{n+s}}\zeta_r(w')\,dw'}.$$
To prove the desired H\"older condition of the function $A_r(x')$, we first note that
$$U(x',w')=\int_0^1\bigl[\nabla u(x')-\nabla u(x'-tw')\bigr] dt\,\cdot w'.$$
Since $u\in C^{1,\beta}(B_{R}^{n-1})$ and $2r\leqslant R$, we get
\begin{equation}\label{fin1}
|U(x',w')-U(y',w')|\leqslant C\,
\min\{|x'-y'|^{\beta}|w'|,|w'|^{\beta+1}\},\quad\mbox{for $y'\in B_{r}^{n-1}$}
\end{equation}
and
\begin{equation}\label{fin2}
|U(x',w')|\leqslant C|w'|^{\beta+1}.
\end{equation}
Therefore, from \eqref{fin1} and \eqref{fin2} it follows that,
for any $y'\in B_{r}^{n-1}$,
\begin{eqnarray}
|A_{r}(x')-A_{r}(y')|&=&\bigg|\int_{\mathbb R^{n-1}}{\big(U(x',w')
\mathscr{A}(x',w')-U(y',w')\mathscr{A}(y',w')\big)\frac{\zeta_r(w')}{|w'|^{n+s}}\,dw'}\bigg|\nonumber\\
&\leqslant&C\int_{\mathbb R^{n-1}}{\min\{|x'-y'|^{\beta}|w'|,|w'|^{\beta+1}\}
\frac{|\mathscr{A}(x',w')|}{|w'|^{n+s}}\zeta_r(w')\,dw'}\nonumber\\
&+&C\int_{\mathbb R^{n-1}}{|w'|^{\beta+1}
\frac{|\mathscr{A}(x',w')-\mathscr{A}(y',w')|}{|w'|^{n+s}}\zeta_r(w')\,dw'}\nonumber\\
&=:&I_1(x',y')+I_2(x',y')\label{fin3}.
\end{eqnarray}
To estimate the last two integrals we define
$$\mathscr{A_*}(x',w'):=
a(x',w')-p\biggl(\nabla u(x')\cdot\frac{w'}{|w'|}\biggr).$$
With this notation
\begin{equation}\label{fin4}
\mathscr{A}(x',w')=\mathscr{A_*}(x',w')-\mathscr{A_*}(x',-w').
\end{equation}
By \eqref{57a}
and \eqref{fin2}, since $|p'(t)| \leqslant C$ and $p$ is even, it follows that
\begin{eqnarray}\nonumber
&&|\mathscr{A_*}(x',-w')|\\
&\leqslant&\int_0^1\int_0^1 \biggl|\frac{d}{d\lambda} p\biggl(\lambda t\frac{u(x'-w')-u(x')}{|w'|}
-[\lambda (1-t)+(1-\lambda)]\nabla u(x')\cdot\frac{ w'}{|w'|}\biggr)\biggr|\,d\lambda\,dt\nonumber\\
&\leqslant &\int_0^1{t\frac{|U(x',w')|}{|w'|}
\biggl(\int_0^1{\bigg|p'\bigg(\lambda t\frac{U(x',w')}{|w'|}-\nabla u(x')\cdot\frac{w'}{|w'|}\bigg)\bigg|\,d\lambda}\biggr)\,dt}\nonumber\\
&\leqslant& C|w'|^{\beta}\label{fin5}
\end{eqnarray}
for all $|w'|\leqslant r$.
Estimating $\mathscr{A_*}(x',w')$ in the same way, by \eqref{fin4} and \eqref{fin5}, we get, for any $\beta>s/2$,
\begin{eqnarray}
I_1(x',y')&\leqslant&C\int_{\mathbb R^{n-1}}{\min\{|x'-y'|^{\beta}|w'|,|w'|^{\beta+1}\}|w'|^{\beta-n-s}\zeta_r(w')\,dw'}\nonumber\\
&\leqslant&C|x'-y'|^{\beta}\int_{|x'-y'|}^{r}{t^{\beta-s-1}dt}+\int_{0}^{|x'-y'|}{t^{2\beta-s-1}\,dt}\nonumber\\
&\leqslant&C|x'-y'|^{2\beta-s}.\label{fin6}
\end{eqnarray}
On the other hand, to estimate $I_2$ we note that
\begin{eqnarray}
|\mathscr{A}(x',w')-\mathscr{A}(y',w')|&\leqslant&|\mathscr{A_*}(x',w')-\mathscr{A_*}(y',w')|\nonumber\\
&+&|\mathscr{A_*}(y',-w')-\mathscr{A_*}(x',-w')|.\label{fin7}
\end{eqnarray}
Hence, arguing as in \eqref{fin5} we have
\begin{eqnarray}
&&|\mathscr{A_*}(x',-w')-\mathscr{A_*}(y',-w')|\nonumber\\
&\leqslant &
\int_0^1t\frac{|U(x',w')|}{|w'|}\int_0^1\bigg|p'\bigg(\lambda t\frac{U(x',w')}{|w'|}-\nabla u(x')\cdot\frac{w'}{|w'|}\bigg)\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad\qquad -p'\bigg(\lambda t\frac{U(y',w')}{|w'|}
-\nabla u(y')\cdot\frac{w'}{|w'|}\bigg)\bigg|\,d\lambda\,dt\nonumber\\
&+&\int_0^1{t\frac{|U(x',w')-U(y',w')|}{|w'|}\int_0^1
{\bigg|p'\bigg(\lambda t\frac{U(y',w')}{|w'|}-\nabla u(y')\cdot\frac{w'}{|w'|}\bigg)\bigg|\,d\lambda}\,dt}\nonumber\\
&=:&I_{2,1}(x',y')+I_{2,2}(x',y').\label{fin8}
\end{eqnarray}
We bound each of these integrals separately. First, since $|p'(t)|\leqslant C$,
it follows immediately from \eqref{fin1} that
\begin{eqnarray}
I_{2,2}(x',y')\leqslant
C\min\{|x'-y'|^{\beta},|w'|^{\beta}\}.\label{fin9}
\end{eqnarray}
On the other hand, by \eqref{fin2}, \eqref{fin1}, and the fact that $u\in C^{1,\beta}(B_{R}^{n-1})$ and $p'$ is uniformly Lipschitz,
we get
\begin{eqnarray}
I_{2,1}(x',y')&\leqslant& C|w'|^{\beta}\bigg(\frac{|U(x',w')-U(y',w')|}{|w'|}+|\nabla u(x')-\nabla u(y')|\bigg)\nonumber\\
&\leqslant&C|w'|^{\beta}\Big(
\min\{|x'-y'|^{\beta},|w'|^{\beta}\}+|x'-y'|^{\beta}\Big)\nonumber\\
&\leqslant&C|w'|^{\beta}|x'-y'|^{\beta}.\label{fin10}
\end{eqnarray}
Then, assuming without loss of generality $r \leqslant 1$ (so that also $|x'-y'|\leqslant 1$),
by \eqref{fin8}, \eqref{fin9}, and \eqref{fin10} it follows that
\begin{eqnarray}\nonumber
|\mathscr{A_*}(x',-w')-\mathscr{A_*}(y',-w')|&\leqslant&
C\bigg(\min\{|x'-y'|^{\beta},|w'|^{\beta}\}+|w'|^{\beta}|x'-y'|^{\beta}\bigg)
\\\label{fin11}
&\leqslant& C\min\{|x'-y'|^{\beta},|w'|^{\beta}\}.
\end{eqnarray}
As $|\mathscr{A_*}(y',w')-\mathscr{A_*}(x',w')|$ is bounded
in the same way, by \eqref{fin7}, we have
$$|\mathscr{A}(x',w')-\mathscr{A}(y',w')|\leqslant C\min\{|x'-y'|^{\beta},|w'|^{\beta}\}.$$
By arguing as in \eqref{fin6}, we get that, for any $s/2<\beta<s$,
\begin{eqnarray}
I_{2}(x',y')
&\leqslant &C\int_{\mathbb R^{n-1}}{|w'|^{\beta+1}\frac{\min\{|x'-y'|^{\beta},|w'|^{\beta}\}}{|w'|^{n+s}}\zeta_r(w')dw'}\nonumber\\
&\leqslant& C|x'-y'|^{2\beta-s}.\label{fin12}
\end{eqnarray}
Finally, by \eqref{fin3}, \eqref{fin6} and \eqref{fin12}, we conclude that
$$|A_{r}(x')-A_{r}(y')|\leqslant C|x'-y'|^{2\beta-s},\quad y'\in B_{r}^{n-1},$$
as desired.
\end{document} |
\begin{document}
\draft
\title{Canonical Transformations and the Hamilton-Jacobi Theory
in Quantum Mechanics}
\author{Jung-Hoon Kim\footnote{e-mail: [email protected]}
and Hai-Woong Lee}
\address{
Department of Physics,
Korea Advanced Institute of Science and Technology,
Taejon, 305-701, Korea }
\maketitle
\begin{abstract}
Canonical transformations using the idea of quantum generating functions
are applied to construct a quantum Hamilton-Jacobi theory, based on the
analogy with the classical case. Operator and c-number forms of the
time-dependent quantum Hamilton-Jacobi equation are derived and used to
find dynamical solutions of quantum problems. The phase-space picture of
quantum mechanics is discussed in connection with the present theory.
\end{abstract}
\pacs{PACS number(s): 03.65.-w, 03.65.Ca, 03.65.Ge}
\section{Introduction}
Various mechanical problems can be elegantly approached by the Hamiltonian
formalism, which not only found well-established ground in classical
theories\cite{one}, but also provided much physical insight in the early
development of quantum theories\cite{two,three}. It is curious though that
the concept of canonical transformations, which plays a fundamental role in
the Hamiltonian formulation of classical mechanics, has not attracted as
much attention in the corresponding formulation of quantum mechanics.
Relatively little literature is available to date on
this subject [4--10].
The main reason for this is probably that canonical variables in quantum
mechanics are not c-numbers but noncommuting operators, manipulation of which
is considerably involved. In spite of this difficulty, the great success of
canonical transformations in classical mechanics makes it desirable to
investigate the possibility of application of the concept of canonical
transformations in quantum mechanics at least to the extent allowed in view
of the analogy with the classical case.
The usefulness of the classical canonical transformations is most visible in
the Hamilton-Jacobi theory where one seeks a generating function that makes
the transformed Hamiltonian become identically zero\cite{one}. A quantum
analog of the Hamilton-Jacobi theory has previously been considered by
Leacock and Padgett\cite{eight} with particular emphasis on the quantum
Hamilton's characteristic function and applied to the definition of the
quantum action variable and the determination of the bound-state energy
levels\cite{one2}. However, the {\em dynamical} aspect of the quantum
Hamilton-Jacobi theory appears to remain untouched. In the present study,
we concentrate on this aspect of the problem, and derive the time-dependent
quantum Hamilton-Jacobi equation following closely the procedure that led
to the classical Hamilton-Jacobi equation.
The analogy between the classical and quantum Hamilton-Jacobi theories can
be best exploited by employing the idea of the quantum generating function
that was first introduced by Jordan\cite{four} and Dirac\cite{five}, and
recently reconsidered by Lee and l'Yi\cite{ten}. The ``well-ordered''
operator counterpart of the quantum generating function is used in
constructing our quantum Hamilton-Jacobi equation, which resembles in form
the classical Hamilton-Jacobi equation. By means of well-ordering, a unique
operator is associated with a given c-number function, whereby the ambiguity
in the ordering problem is removed. We identify the quantum generating
function accompanying the quantum Hamilton-Jacobi theory as the quantum
Hamilton's principal function, and apply this theory to find the dynamical
solutions of quantum problems.
The prevailing conventional belief that physical observables should be
Hermitian operators invokes in our discussion the unitary transformation
that transforms one Hermitian operator to another. This along with the
fact that the unitary transformation preserves the fundamental quantum
condition for the new canonical variables $[\hat{Q},\hat{P}]=i\hbar$ if
the old canonical variables satisfy $[\hat{q},\hat{p}]=i\hbar$ provides a
good reason why we call the unitary transformation the quantum canonical
transformation. This definition of the quantum canonical transformation is
analogous to the classical statement that the classical canonical
transformation keeps the Poisson brackets invariant, i.e.,
$[Q,P]_{PB}=[q,p]_{PB}=1$. In our current discussion of the quantum
canonical transformation we will consider exclusively the case of the
unitary transformation.
The paper is organized as follows. In Sec.\ II the quantum canonical
transformation using the idea of the quantum generating function is briefly
reviewed, and the transformation relation between the new Hamiltonian and
the old Hamiltonian expressed in terms of the quantum generating function is
derived. From this relation, and by analogy with the classical case, we
arrive at the quantum Hamilton-Jacobi equation in Sec.\ III. It will be found
that the unitary transformation of the special type
$\hat{U}(t)=\hat{T}(t)\hat{A}$ where $\hat{T}(t)$ is the time-evolution
operator and $\hat{A}$ is an arbitrary time-independent unitary operator
satisfies the quantum Hamilton-Jacobi equation.
Sec.\ IV is devoted to the discussion of
the quantum phase-space distribution function under canonical transformations.
The differences between our approach and that of Ref.\cite{one1} are described.
Boundary conditions and simple applications of the theory are given
in Sec.\ V, where to perceive the main idea easily most of the discussion
is developed with the simple case $\hat{A}=\hat{I}$, the unit operator, while
keeping in mind that the present formalism is not restricted to this case.
Finally, Sec.\ VI presents concluding remarks.
\section{Quantum Canonical Transformations}
Let us begin our discussion by reviewing the theory of the quantum canonical
transformations\cite{five,ten}. A quantum generating function that is
analogous to a classical generating function is defined in terms of the
matrix elements of a unitary operator as follows\cite{five},
\begin{equation}
e^{iF_1(q_1,Q_2,t)/\hbar}\equiv \langle q_1|Q_2\rangle _t
=\langle q_1|\hat{U}(t)|q_2\rangle ,
\end{equation}
where the unitary operator $\hat{U}(t)$ transforms an eigenvector of
$\hat{q}$ into an eigenvector of $\hat{Q}=\hat{U}\hat{q}\hat{U}^\dagger$,
i.e., $|Q_1\rangle _t=\hat{U}(t)|q_1\rangle$ (and $|P_1
\rangle _t=\hat{U}(t)|p_1\rangle$).\footnote{
An eigenvalue $X_1$ and an eigenvector $|X_1\rangle$ of an
operator $\hat{X}$ are defined by the equation, $\hat{X}|X_1\rangle
=X_1|X_1\rangle$ ($X=q$, $p$, $Q$, and $P$).
Different subindices are used to distinguish different eigenvalues or
eigenvectors, e.g., $X_2$, $|X_2\rangle$, etc.; The subscript $t$ on a
ket $|\rangle _t$ (bra ${_t\langle}|$) expresses time dependence of the
ket $|\rangle _t$ (bra ${_t\langle}|$).}
Different types of the quantum
generating function can be defined similarly\cite{ten}, i.e.,
$e^{iF_2(q_1,P_2,t)/\hbar}\equiv \langle q_1|P_2\rangle _t
=\langle q_1|\hat{U}(t)|p_2\rangle$,
$e^{iF_3(p_1,Q_2,t)/\hbar}\equiv \langle p_1|Q_2\rangle _t
=\langle p_1|\hat{U}(t)|q_2\rangle$, and
$e^{iF_4(p_1,P_2,t)/\hbar}\equiv \langle p_1|P_2\rangle _t
=\langle p_1|\hat{U}(t)|p_2\rangle$.
The quantum canonical transformation, or the unitary transformation,
corresponds to a change of representation or equivalently to a rotation
of axes in the Hilbert space. The unitary transformation guarantees that
the fundamental quantum condition $[\hat{Q},\hat{P}]
=[\hat{q},\hat{p}]=i\hbar$ holds, the new canonical variables
$(\hat{Q},\hat{P})$ are Hermitian operators, and the eigenvectors of
$\hat{Q}$ or $\hat{P}$ form a complete basis. One should keep in mind
that the eigenvalue $Q_1$ has the same numerical value as the eigenvalue
$q_1$ because the unitary transformation preserves the eigenvalue spectrum
of an operator\cite{three}. In cases where it is convenient, one is free to
interchange $q_1$ $ (p_1)$ with $Q_1$ $(P_1)$.
Transformation relations between $(\hat{q},\hat{p})$ and $(\hat{Q},\hat{P})$
can be expressed in terms of the ``well-ordered'' generating operator
$\bar{F}_1(\hat{q},\hat{Q},t)$\cite{five} that is an operator counterpart
of the quantum generating function $F_1(q_1,Q_2,t)$ as follows:\footnote{
A well-ordered operator $\bar{G}(\hat{X},\hat{Y})$ is
developed from a c-number function $G(X_1,Y_2)$ such that
$\langle X_1|\bar{G}(\hat{X},\hat{Y})|Y_2
\rangle = G(X_1,Y_2)\langle X_1|Y_2\rangle $\cite{five}. For example,
if $G(X_1,Y_2)=X_1Y_2+Y_2^2X_1^3$, then $\bar{G}(\hat{X},\hat{Y})=
\hat{X}\hat{Y}+\hat{X}^3\hat{Y}^2$.}
\begin{equation}
\hat{p}=\frac{\partial \bar{F}_1(\hat{q},\hat{Q},t)}{\partial \hat{q}},
\hspace{1cm}
\hat{P}=-\frac{\partial \bar{F}_1(\hat{q},\hat{Q},t)}{\partial \hat{Q}}.
\end{equation}
Similar expressions for other types of the generating operators can be
immediately inferred by analogy with the classical relations. For a later
reference, we present the relations for $\bar{F}_2(\hat{q},\hat{P},t)$ below,
\begin{equation}
\hat{p}=\frac{\partial \bar{F}_2(\hat{q},\hat{P},t)}{\partial \hat{q}},
\hspace{1cm}
\hat{Q}=\frac{\partial \bar{F}_2(\hat{q},\hat{P},t)}{\partial \hat{P}}.
\end{equation}
It is interesting to note that, whereas the four types of the generating
functions in classical mechanics are related with each other through the
Legendre transformations\cite{one}, the relations between the quantum
generating functions of different types can be expressed by means of the
Fourier transformations. For example, the transition from
$F_1(q_1,Q_2,t)$ to $F_2(q_1,P_2,t)$ can be accomplished by
\begin{eqnarray}
e^{iF_2(q_1,P_2,t)/\hbar}&=&\int dQ_2 \langle q_1|Q_2\rangle _t\hspace{0.7mm}
{_t\langle} Q_2|P_2\rangle _t, \nonumber \\
&=&\frac{1}{\sqrt{2\pi \hbar}} \int dQ_2 e^{iF_1(q_1,Q_2,t)/\hbar}
e^{iP_2Q_2/\hbar}.
\end{eqnarray}
The usefulness of the concept of the quantum generating function can be
revealed, for example, by considering the unitary transformation
$\hat{U} =e^{ig(\hat{q})/\hbar}$ where $g$ is an arbitrary real function.
From the definition of the quantum generating function, we have
\begin{eqnarray}
e^{iF_2(q_1,P_2)/\hbar}&=&\langle q_1|e^{ig(\hat{q})/\hbar}|p_2 \rangle ,
\nonumber \\
&=& \frac{1}{\sqrt{2\pi \hbar}}e^{\frac{i}{\hbar}[g(q_1)+q_1P_2]}.
\end{eqnarray}
The well-ordered generating operator is then given by
\begin{equation}
\bar{F}_2(\hat{q},\hat{P})=g(\hat{q})+\hat{q}\hat{P}+i\frac{\hbar}{2}
\ln 2\pi \hbar ,
\end{equation}
and Eq.\ (3) yields the transformation relations
\begin{eqnarray}
\hat{Q}&=& \hat{q}, \\
\hat{P}&=&\hat{p}-\frac{\partial g(\hat{q})}{\partial \hat{q}}.
\end{eqnarray}
This shows that, in some cases, an introduction of the quantum generating
function can provide an effective method of finding the transformation
relations between ($\hat{q},\hat{p}$) and ($\hat{Q},\hat{P}$) without
recourse to the equations $\hat{Q}=\hat{U}\hat{q}\hat{U}^\dagger$ and
$\hat{P}=\hat{U}\hat{p}\hat{U}^\dagger$.
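As an elementary consistency check (not needed for the argument above), the relations (7) and (8) can also be verified by direct conjugation. Since $[g(\hat{q}),\hat{p}]=i\hbar\,\partial g(\hat{q})/\partial\hat{q}$ commutes with $\hat{q}$, the operator expansion $e^{\hat{X}}\hat{p}e^{-\hat{X}}=\hat{p}+[\hat{X},\hat{p}]+\frac{1}{2!}[\hat{X},[\hat{X},\hat{p}]]+\cdots$ with $\hat{X}=ig(\hat{q})/\hbar$ terminates after the first commutator:
$$
\hat{Q}=\hat{U}\hat{q}\hat{U}^\dagger=\hat{q},\qquad
\hat{P}=\hat{U}\hat{p}\hat{U}^\dagger=\hat{p}+\frac{i}{\hbar}[g(\hat{q}),\hat{p}]
=\hat{p}-\frac{\partial g(\hat{q})}{\partial \hat{q}},
$$
in agreement with Eqs.\ (7) and (8).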
Now we consider the dynamical equations governing the time-evolution of
quantum systems. The time-dependent Schr\"{o}dinger equation for the system
with the Hamiltonian $H(\hat{q},\hat{p},t)$ is given in terms of a
time-dependent ket $|\psi \rangle _t$ by
\begin{equation}
i\hbar \frac{\partial}{\partial t}|\psi \rangle _t
=H(\hat{q},\hat{p},t)|\psi \rangle _t.
\end{equation}
In $Q$-representation the time-dependent Schr\"{o}dinger equation takes
the form
\begin{equation}
i\hbar \frac{\partial}{\partial t}\psi ^Q(Q_1,t)=K\left( Q_1,
-i\hbar \frac{\partial}{\partial Q_1},t\right) \psi ^Q(Q_1,t),
\end{equation}
where $\psi ^Q(Q_1,t)={_t\langle}Q_1|\psi \rangle _t$, and
\begin{equation}
K(\hat{Q},\hat{P},t)=H(\hat{q},\hat{p},t)+i\hbar \hat{U}
\frac{\partial \hat{U}^\dagger}{\partial t}.
\end{equation}
The second term on the right hand side of Eq.\ (11) arises because we allow
the unitary operator $\hat{U}(t)$ to depend on time: even though we adopt
here the Schr\"{o}dinger picture, in which the time dependence associated
with the dynamical evolution of a system is attributed solely to the ket
$|\psi \rangle _t$, the operator $\hat{Q}$ and the eigenket $|Q_1\rangle _t$
may also depend on time. In terms of the generating
operator $\bar{F}_1(\hat{q},\hat{Q},t)$, Eq.\ (11) can be written as
\begin{equation}
K(\hat{Q},\hat{P},t)=H(\hat{q},\hat{p},t)+\frac{\partial \bar{F} _1
(\hat{q},\hat{Q},t)}{\partial t}.
\end{equation}
The equivalence of Eqs.\ (11) and (12) can be proved as shown in Appendix A.
It is important to note that $K(\hat{Q},\hat{P},t)$ plays the role of the
transformed Hamiltonian governing the time-evolution of the system in
$Q$-representation. The analogy with the classical theory is remarkable.
\section{Quantum Hamilton-Jacobi Theory}
We are now ready to proceed to formulate the quantum Hamilton-Jacobi theory.
One can immediately notice that, if $K(\hat{Q},\hat{P},t)$ of Eq.\ (12)
vanishes, the time-dependent Schr\"{o}dinger equation in $Q$-representation
yields a simple solution, $\psi ^Q=$\ const. This observation, along with
Eq.\ (2), naturally leads us to the following quantum Hamilton-Jacobi equation,
\begin{equation}
H\left( \hat{q},\frac{\partial \bar{S}_1(\hat{q},\hat{Q},t)}{\partial
\hat{q}},t\right) +\frac{\partial \bar{S}_1(\hat{q},\hat{Q},t)}{\partial t} =0,
\end{equation}
where, following the classical notational convention, we denote the
generating operator that is analogous to the classical Hamilton's principal
function by $\bar{S}_1(\hat{q},\hat{Q},t)$. Eq.\ (13) bears a close formal
resemblance to the classical Hamilton-Jacobi equation; it differs, however,
in being an operator partial differential equation. The procedure of solving
dynamical problems is complete once we express the wave function in the
original $q$-representation as
\begin{eqnarray}
\psi ^q(q_1,t)&=&\int \langle q_1|Q_2 \rangle _t\hspace{0.7mm}
{_t\langle}Q_2| \psi \rangle _t dQ_2, \nonumber \\
&=&\int e^{iS_1(q_1,Q_2,t)/\hbar}\psi ^Q(Q_2)dQ_2,
\end{eqnarray}
where $S_1(q_1,Q_2,t)$ is the c-number counterpart of
$\bar{S}_1(\hat{q},\hat{Q},t)$, and is obtained by replacing the well-ordered
$\hat{q}$ and $\hat{Q}$ in $\bar{S}_1$, respectively, with $q_1$ and $Q_2$.
In Eq.\ (14), $t$ is dropped from $\psi ^Q$, since
${_t\langle}Q_2| \psi \rangle _t =$ const.
As in the classical theory, the task of solving dynamical problems thus
falls to the quantum Hamilton-Jacobi equation.
Even though we have arrived at the correct form of the quantum
Hamilton-Jacobi equation, it seems at first sight quite difficult to solve,
owing to its unfamiliar form as an operator partial differential equation.
It is therefore desirable to seek a corresponding c-number form of the
quantum Hamilton-Jacobi equation. For this task, we note that, if the unitary
operator $\hat{U}(t)$ is assumed to be separable into
$\hat{U}(t)=\hat{T}(t)\hat{A}$, where $\hat{T}(t)$ is the time-evolution
operator
and $\hat{A}$ is an arbitrary time-independent unitary operator, then
$\psi ^Q(Q_1,t) ={_t\langle}Q_1|\psi \rangle _t=\langle q_1|\hat{A}^\dagger
\hat{T}^\dagger (t)\hat{T}(t)|\psi (t=0) \rangle =\langle q_1|\hat{A}^\dagger
|\psi (t=0)\rangle =$\ const. This means that the left hand side of Eq.\ (10)
becomes zero, i.e., the canonical transformation mediated by a separable
unitary operator is exactly the one that we seek. Assuming
$\hat{U}(t)=\hat{T}(t)\hat{A}$, we rewrite Eq.\ (1) as
\begin{equation}
e^{iS_1(q_1,Q_2,t)/\hbar} =\langle q_1|\hat{T}(t)\hat{A}|q_2\rangle .
\end{equation}
Differentiating this equation with respect to time, we obtain
\begin{eqnarray}
\frac{i}{\hbar}\frac{\partial S_1}{\partial t}e^{iS_1/\hbar} &=&
\langle q_1 |\frac{\partial \hat{T}}{\partial t}\hat{A}|q_2\rangle
=\frac{1}{i\hbar}\langle q_1|\hat{H}\hat{T}\hat{A}|q_2\rangle , \nonumber \\
&=&\frac{1}{i\hbar}H\left( q_1,-i\hbar \frac{\partial}{\partial q_1},t
\right) \langle q_1|\hat{T}\hat{A}|q_2\rangle ,\nonumber \\
&=&\frac{1}{i\hbar}H\left( q_1,-i\hbar \frac{\partial}{\partial q_1},t
\right) e^{iS_1/\hbar}.
\end{eqnarray}
Eq.\ (16) leads immediately to the desired c-number form of the quantum
Hamilton-Jacobi equation
\begin{equation}
\left[ H\left( q_1,-i\hbar \frac{\partial}{\partial q_1},t\right)
+\frac{\partial
S_1(q_1,Q_2,t)}{\partial t}\right] e^{iS_1(q_1,Q_2,t)/\hbar}=0.
\end{equation}
Substitution of $S_2(q_1,P_2,t)$ for $S_1(q_1,Q_2,t)$ generates another
c-number form of the quantum Hamilton-Jacobi equation. The equations for
the cases of $S_3(p_1,Q_2,t)$ and $S_4(p_1,P_2,t)$ can be derived through
a similar process.
Consider a one-dimensional nonrelativistic quantum system whose
Hamiltonian is given by
\begin{equation}
H(\hat{q},\hat{p},t)=\frac{\hat{p}^2}{2}+V(\hat{q},t).
\end{equation}
The c-number form of the quantum Hamilton-Jacobi equation (17) for this
problem becomes
\begin{equation}
\frac{1}{2}\left(\frac{\partial S_1}{\partial q_1}\right)^2
-i\frac{\hbar}{2}\frac{\partial ^2S_1}{\partial q_1^2}+V(q_1,t)
+\frac{\partial S_1}{\partial t}=0.
\end{equation}
We can see clearly that, in the limit $\hbar \rightarrow 0$, the above
equation reduces to the classical Hamilton-Jacobi equation. The second term
of Eq.\ (19) represents the quantum effect. We note that it has been known
from the early days that substitution of
$\psi (q,t)=e^{iS(q,t)/\hbar}$ into the Schr\"{o}dinger equation gives rise
to the same Hamilton-Jacobi equation for $S(q,t)$,\footnote{For a stationary
state of a system whose Hamiltonian does not depend explicitly on time, one
may put $S(q,t) = W(q) -Et$ and obtain a differential equation for $W(q)$.
To find a solution to the resulting equation, one may then use the expansion
of $W$ in powers of $\hbar$. This approach has been extensively considered
in connection with the well-known WKB approximation. In the present paper,
the formalism is developed for general nonstationary states (of systems
that can possibly have time-dependent Hamiltonians).}
where $S(q,t)$ is interpreted merely as the complex-valued phase of the
wave function (see, for example, Ref.\cite{schiff}).
The present approach shows more clearly the strong analogy between the
classical and quantum Hamilton-Jacobi theories, emphasizing that the quantum
Hamilton's principal function $S_1$, which is related to the wave function
via Eq.\ (14), plays the role of the quantum counterpart of the classical
generating function. Moreover, as discussed later in Sec.\ V, $e^{iS_1/\hbar}$
defined in Eq.\ (14) can be interpreted as a propagator under a certain
choice of $\hat{A}$.
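As a consistency check (editorial, not part of the original text), the exact
free-particle principal function
$S_1=(q_1-Q_2)^2/2t+\tfrac{i\hbar}{2}\ln (i2\pi\hbar t)$, i.e., the $a=0$
case of Eq.\ (42) below, can be verified symbolically to satisfy Eq.\ (19)
with $V=0$:

```python
import sympy as sp

q, Q, t, hbar = sp.symbols('q Q t hbar', positive=True)

# Free-particle principal function (a = 0 case of Eq. (42)); the additive
# constant contributes only through its t-derivative.
S1 = (q - Q)**2/(2*t) + sp.I*hbar/2*sp.log(sp.I*2*sp.pi*hbar*t)

# Residual of Eq. (19) with V = 0
res = sp.diff(S1, q)**2/2 - sp.I*hbar/2*sp.diff(S1, q, 2) + sp.diff(S1, t)
print(sp.simplify(res))   # 0
```

The quantum term $-\tfrac{i\hbar}{2}\partial^2 S_1/\partial q_1^2$ is exactly
canceled by the $t$-derivative of the additive constant.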
Admittedly, the Hamilton-Jacobi equation in the form of Eq.\ (19) is no more
tractable analytically than the Schr\"{o}dinger equation for general
potential problems. Nevertheless, one can at least obtain an approximate
solution of it by the following perturbative method.
Since the solution of Eq.\ (19) is given by the classical Hamilton's
principal function in the limit $\hbar \rightarrow 0$, we can expand the
general solution in powers of $\hbar$:
\begin{equation}
S_1=S_1^{(0)}+\hbar S_1^{(1)}+\hbar ^2S_1^{(2)}+\cdots ,
\end{equation}
where $S_1^{(0)}$ is the classical Hamilton's principal function.
Substituting Eq.\ (20) into Eq.\ (19) and collecting coefficients of the
same orders in $\hbar$, we can obtain
\begin{equation}
\frac{1}{2}\left(\frac{\partial S_1^{(0)}}{\partial q_1}\right)^2+V(q_1,t)
+\frac{\partial S_1^{(0)}}{\partial t}=0,
\end{equation}
and
\begin{equation}
\frac{1}{2}\sum_{k=0}^{n}\frac{\partial S_1^{(k)}}{\partial q_1}
\frac{\partial S_1^{(n-k)}}{\partial q_1}
-\frac{i}{2}\frac{\partial ^2S_1^{(n-1)}}{\partial q_1^2}
+\frac{\partial S_1^{(n)}}{\partial t}=0, \hspace{0.5cm} n\geq 1.
\end{equation}
Given the solution $S_1^{(0)}$ of the classical Hamilton-Jacobi equation (21),
we solve Eq.\ (22) to find $S_1^{(1)}$. $S_1^{(2)}$ can be determined
subsequently from the knowledge of $S_1^{(0)}$ and $S_1^{(1)}$, and so forth.
We note that Eq.\ (22) is linear in $S_1^{(n)}$ and first order in its
$q_1$-derivative. Thus, from a practical viewpoint, Eqs.\ (21) and (22)
may be easier to deal with than Eq.\ (19), provided the classical Hamilton's
principal function solving Eq.\ (21) is readily available.
An encouraging feature of the present formalism is that the well-ordered
operator counterpart of the quantum Hamilton's principal function also gives
the solutions of the Heisenberg equations through Eq.\ (2). If we consider
the case $\hat{U}(t)=\hat{T}(t)$, we obtain in the Heisenberg picture the
relations $(\hat{q}_H,\hat{p}_H)\equiv
(\hat{T}^\dagger \hat{q}_S\hat{T}, \hat{T}^\dagger \hat{p}_S\hat{T})$
and $(\hat{Q}_H,\hat{P}_H)\equiv (\hat{T}^\dagger \hat{Q}_S\hat{T},
\hat{T}^\dagger \hat{P}_S\hat{T})= (\hat{T}^\dagger \hat{T}\hat{q}_S
\hat{T}^\dagger \hat{T},\hat{T}^\dagger \hat{T}\hat{p}_S\hat{T}^\dagger
\hat{T})=(\hat{q}_S,\hat{p}_S)$, where the subscripts $S$ and $H$ attached
to operators denote the Schr\"{o}dinger and the Heisenberg pictures,
respectively. Thus, when expressed in the Heisenberg picture, Eq.\ (2)
turns into
\begin{equation}
\hat{p}_H=\frac{\partial \bar{S}_1(\hat{q}_H,\hat{q}_S,t)}{\partial \hat{q}_H},
\hspace{1cm}
\hat{p}_S=-\frac{\partial \bar{S}_1(\hat{q}_H,\hat{q}_S,t)}{\partial \hat{q}_S},
\end{equation}
and from these transformation relations we can obtain $\hat{q}_H$ and
$\hat{p}_H$ as functions of time and the initial operators $\hat{q}_S$ and
$\hat{p}_S$. Obviously, $\hat{q}_H(\hat{q}_S,\hat{p}_S,t)$ and
$\hat{p}_H(\hat{q}_S,\hat{p}_S,t)$ obtained
in this way evolve according to the Heisenberg equations.
\section{Quantum Phase-Space distribution functions and canonical
transformations}
Since our theory of the quantum canonical transformations is formulated with
the canonical position $\hat{q}$ and momentum $\hat{p}$ variables on an equal
footing, it would be relevant to consider the phase-space picture of quantum
mechanics, exploiting the distribution functions in relation to the present
theory.
\subsection{Distribution functions}
For a given density operator $\hat{\rho}$, a general way of defining quantum
distribution functions, proposed by Cohen\cite{one5}, is
\begin{equation}
F^f(q_1,p_1,t)=\frac{1}{2\pi ^2\hbar}\int \int \int dxdydq_2 \langle
q_2+y|\hat{\rho} |q_2-y\rangle f(x,2y/\hbar )e^{ix(q_2-q_1)}
e^{-i2yp_1/\hbar}.
\end{equation}
Various choices of $f(x,2y/\hbar)$ lead to a wide class of quantum
distribution functions\cite{one6}. To mention only a few, the choice $f=1$
produces the well-known Wigner distribution function \cite{one7}, while the
choice $f(x,2y/\hbar)= e^{-\hbar x^2/4m\alpha -m\alpha y^2/\hbar}$ yields
the Husimi distribution function, which has recently found application in
nonlinear dynamical problems\cite{one8}. The transformed distribution
function is defined in ($Q_1,P_1$) phase space likewise by
\begin{equation}
G^f(Q_1,P_1,t)=\frac{1}{2\pi ^2\hbar}\int \int \int dXdYdQ_2\hspace{0.7mm}
{_t\langle} Q_2+Y|\hat{\rho} |Q_2-Y\rangle _tf(X,2Y/\hbar )e^{iX(Q_2-Q_1)}
e^{-i2YP_1/\hbar}.
\end{equation}
Our main objective here is to find a relation between the old and the
transformed distribution functions. After straightforward algebra, which
is displayed in Appendix B, it turns out that the transformation relation
between the two distribution functions can be expressed as
\begin{equation}
G^f(Q_1,P_1,t)=\int \int dq_2dp_2\kappa (Q_1,P_1,q_2,p_2,t)
F^f(q_2,p_2,t),
\end{equation}
where the kernel $\kappa$ is given by
\begin{eqnarray}
\kappa (Q_1,P_1,q_2,p_2,t)=\frac{1}{2\pi ^3\hbar} \int \int \int
\int \int \int dXdYdQ_2dxdyd\alpha \frac{f(X,2Y/\hbar )}
{f(x,2y/\hbar )} \nonumber \\
\times e^{\frac{i}{\hbar}[F_1(q_2+\alpha -y,Q_2-Y,t)-F_1^*(q_2+\alpha +y,
Q_2+Y,t)]} e^{i[X(Q_2-Q_1)-\alpha x]} e^{\frac{2i}{\hbar}
[yp_2-YP_1]}.
\end{eqnarray}
This expression for the kernel can be further simplified if integrations
in Eq.\ (27) can be performed with a specific choice of the function $f$.
For instance, the simple choice $f=1$ provides the following kernel for the
Wigner distribution function,
\begin{eqnarray}
\kappa (Q_1,P_1,q_2,p_2,t)=\frac{2}{\pi \hbar} \int \int dYdy
e^{\frac{i}{\hbar} [F_1(q_2-y,Q_1-Y,t)-F_1^*(q_2+y,Q_1+Y,t)]}
e^{\frac{2i}{\hbar}[yp_2-YP_1]}.
\end{eqnarray}
This equation was first derived by Garcia-Calder\'on and Moshinsky\cite{one9}
without employing the idea of the quantum generating function.
Curtright {\it et al.}\cite{one10} also obtained an equivalent expression
in their recent discussion of the time-independent Wigner
distribution functions.
We wish to point out that the quantum canonical transformation described here
is basically different from that considered earlier by Kim and
Wigner\cite{one1}.
While the present approach deals with the transformation between the
operators ($\hat{q},\hat{p}$) and ($\hat{Q}, \hat{P}$), their approach
concerns the transformation between the c-numbers ($q,p$) and ($Q,P$). For
the transformation $Q=Q(q,p,t)$ and $P=P(q,p,t)$, their approach yields for
the kernel the expression
\begin{equation}
\kappa (Q_1,P_1,q_2,p_2,t)=\delta [Q_1-Q(q_2,p_2,t)]\delta
[P_1-P(q_2,p_2,t)],
\end{equation}
where $Q(q,p,t)$ and $P(q,p,t)$ satisfy the classical Poisson brackets
relation, $[Q,P]_{PB}=[q,p]_{PB}=1$. The kernels of Eq.\ (28) and Eq.\ (29)
coincide with each other for the special case of a linear canonical
transformation, as was shown by Garcia-Calder\'on and Moshinsky\cite{one9}.
Specifically, for the case of the Wigner distribution function, they showed
that the linear transformation for operators, $\hat{Q}=a\hat{q}+b\hat{p}$ and
$\hat{P}=c\hat{q}+d\hat{p}$, and that for c-number variables, $Q=aq+bp$ and
$P=cq+dp$, yield the same kernel
$\kappa (Q_1,P_1,q_2,p_2)=\delta [Q_1-(aq_2+bp_2)]\delta [P_1-(cq_2+dp_2)]$.
In general cases, however, Eq.\ (27) and Eq.\ (29) give rise to different
kernels. As an example, let us consider the unitary transformation
$\hat{U}=e^{ig(\hat{q})/\hbar}$ considered in Sec.\ II. The first-type
quantum generating function has the form
$e^{iF_1(q_1,Q_2)/\hbar}=e^{ig(q_1)/\hbar}
\delta (q_1-Q_2)$. This nonlinear canonical transformation yields for the
Wigner distribution function the kernel
\begin{equation}
\kappa (Q_1,P_1,q_2,p_2)= \frac{\delta (Q_1-q_2)}{\pi \hbar} \int
dy e^{\frac{i}{\hbar}
[g(q_2-y)-g(q_2+y)]} e^{\frac{2i}{\hbar}(p_2-P_1)y}.
\end{equation}
It is apparent that the integral in the above equation cannot in general be
reduced to the $\delta$-function of Eq.\ (29), except in some trivial cases,
e.g., $g=$const, $g=q$, and $g=q^2$. Distribution functions other than the
Wigner distribution function do not usually admit the simple form of the
kernel given in Eq.\ (29), even for a linear canonical
transformation.
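To make the contrast concrete, consider the borderline case $g=q^2$ (an
editorial worked example). Since $g(q_2-y)-g(q_2+y)=-4q_2y$, the
$y$-integration in Eq.\ (30) produces a $\delta$-function,
\[
\kappa (Q_1,P_1,q_2,p_2)=\frac{\delta (Q_1-q_2)}{\pi \hbar}\int dy\,
e^{\frac{2i}{\hbar}(p_2-P_1-2q_2)y}
=\delta (Q_1-q_2)\,\delta \left[ P_1-(p_2-2q_2)\right] ,
\]
in agreement with the c-number relations $Q=q$, $P=p-2q$. For a generic
nonlinear $g$, the phase $g(q_2-y)-g(q_2+y)$ is no longer linear in $y$ and
no such reduction occurs.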
\subsection{Dynamics}
In this subsection we describe how the quantum Hamilton-Jacobi theory can
lead to dynamical solutions in the phase-space picture of quantum mechanics.
For this task, we first consider the time evolution of the transformed
distribution function in ($Q_1,P_1$) phase space. Differentiating Eq.\ (25)
with respect to time, we obtain
\begin{eqnarray}
\frac{\partial G^f}{\partial t}=\frac{1}{2\pi ^2\hbar}
\int \int \int dXdYdQ_2 \left[ \left( \frac{\partial}
{\partial t}{_t\langle} Q_2+Y| \right) \hat{\rho} |Q_2-Y\rangle _t+
{_t\langle}Q_2+Y|\frac{\partial \hat{\rho}}{\partial t}|Q_2-Y\rangle _t
\right. \nonumber \\
\left. +{_t\langle}Q_2+Y|\hat{\rho} \left( \frac{\partial}{\partial t}
|Q_2-Y\rangle _t\right) \right] f(X,2Y/\hbar)e^{iX(Q_2-Q_1)}
e^{-i2YP_1/\hbar}.
\end{eqnarray}
We now substitute into Eq.\ (31) the time evolution equations
\begin{eqnarray}
\frac{\partial \hat{\rho}}{\partial t}&=&-\frac{i}{\hbar}
[\hat{H},\hat{\rho}], \\
\frac{\partial}{\partial t}{_t\langle}Q_2+Y|&=&{_t\langle}
Q_2+Y|\hat{U}\frac{\partial \hat{U}^\dagger}{\partial t}, \\
\frac{\partial}{\partial t}|Q_2-Y\rangle _t &=&\frac{\partial \hat{U}}
{\partial t}\hat{U}^\dagger |Q_2-Y\rangle _t=-\hat{U}\frac{\partial
\hat{U}^\dagger}{\partial t}|Q_2-Y\rangle _t,
\end{eqnarray}
where $\hat{H}=H(\hat{q},\hat{p},t)$ is the Hamiltonian governing the
dynamics of the system, and obtain
\begin{eqnarray}
\frac{\partial G^f}{\partial t}=\frac{1}{2\pi ^2\hbar}
\int \int \int dXdYdQ_2\hspace{0.7mm} {_t\langle}Q_2+Y|\left( -\frac{i}{\hbar}
[\hat{K},\hat{\rho}]\right) |Q_2-Y\rangle _tf(X,2Y/\hbar)e^{iX(Q_2-Q_1)}
e^{-i2YP_1/\hbar},
\end{eqnarray}
where $\hat{K}=K(\hat{Q},\hat{P},t)$ is just the transformed Hamiltonian
already defined in Eq.\ (11). Eq.\ (35) should be compared with the
following equation that governs the time evolution of the distribution
function in ($q_1,p_1$) phase space,
\begin{eqnarray}
\frac{\partial F^f}{\partial t}=\frac{1}{2\pi ^2\hbar}
\int \int \int dxdydq_2 \langle q_2+y|\left( -\frac{i}{\hbar}
[\hat{H},\hat{\rho}]\right) |q_2-y\rangle f(x,2y/\hbar)e^{ix(q_2-q_1)}
e^{-i2yp_1/\hbar}.
\end{eqnarray}
We can easily see that, through the quantum canonical transformation, the
role played by $\hat{H}$ is turned over to $\hat{K}$.
Just as the wave function has a trivial solution in the representation
where the transformed Hamiltonian $K(\hat{Q},\hat{P},t)$ vanishes, so does
the distribution function in the corresponding phase space, as can be seen
from Eq.\ (35). With the trivial solution $G^f=$\ const., we go back to the
original space via the inverse of the transformation equation (26) to obtain
$F^f(q_1,p_1,t)$. For example, for the case of the Wigner distribution
function the transformation can be accomplished by
\begin{equation}
F^W(q_1,p_1,t)=\int \int dQ_2dP_2\tilde{\kappa} (q_1,p_1,Q_2,P_2,t)
G^W(Q_2,P_2),
\end{equation}
where $\tilde{\kappa}$ is given in terms of the quantum principal function by
\begin{eqnarray}
\tilde{\kappa} =\frac{2}{\pi \hbar} \int \int dydY e^{\frac{i}{\hbar}
[S_1(q_1+y,Q_2+Y,t)-S_1^*(q_1-y,Q_2-Y,t)]}
e^{\frac{2i}{\hbar}[YP_2-yp_1]}.
\end{eqnarray}
Thus, once the quantum Hamilton-Jacobi equation is solved and the quantum
principal function $S_1$ is obtained, the dynamics of the distribution
function, as well as that of the wave function, can be determined.
\section{Boundary conditions and Applications}
Up to this point the whole theory has been developed for the case
$\hat{U}(t)=\hat{T}(t)\hat{A}$ with $\hat{A}$ taken to be arbitrary
unless otherwise mentioned.
To see how the quantum Hamilton-Jacobi theory is used to achieve the
dynamical solutions of quantum problems, it would be
sufficient, though, to consider the case of $\hat{A}=\hat{I}$, the unit
operator. This case was considered by Dirac in
connection with his action principle (see Sec.\ 32 of Ref.\cite{three}). He
showed that $S_1$ defined by Eq.\ (15) equals the classical action function
in the limit $\hbar \rightarrow 0$. It should be mentioned that this
particular case allows the quantum generating functions to attain the
property that $e^{iS_1/\hbar}$ is the propagator in position space and
$e^{iS_4/\hbar}$ the propagator in momentum space. We will henceforth work
on the case $\hat{U}(t)=\hat{T}(t)$. The general case
$\hat{U}(t)=\hat{T}(t)\hat{A}$ will be briefly treated in Appendix C.
Before applying the theory, a few remarks are in order concerning the
quantum Hamilton-Jacobi equation (17) and its solution. First, if
the Hamiltonian depends only on either $\hat{q}$ or $\hat{p}$, we do not
need to solve Eq.\ (17). Instead, since the unitary operator has the simple
form $\hat{U}=\hat{T}=e^{-iH(\hat{q})t/\hbar}$ or $e^{-iH(\hat{p})t/\hbar}$,
we can obtain $S_1$ directly from Eq.\ (15) by calculating the matrix
elements of $\hat{U}$. For example, for a free particle,
$\hat{U}=e^{-i\hat{p}^2t/2\hbar}$, it is convenient to calculate
$e^{iS_2/\hbar}=\langle q_1|e^{-i\hat{p}^2t/2\hbar}|p_2\rangle$, and we get
$S_2(q_1,P_2,t)=-\frac{P_2^2t}{2}+q_1P_2+i\frac{\hbar}{2} \ln 2\pi \hbar$.
Second, in order to solve Eq.\ (17), we need to impose proper boundary
conditions on $S_1$. Since here we are dealing with unitary transformations,
we immediately get from the definition of $S_1$ the condition
\begin{equation}
\int dQ_3 e^{i[S_1(q_1,Q_3,t)-S_1^*(q_2,Q_3,t)]/\hbar} =\delta (q_1-q_2),
\end{equation}
which follows from the calculation of the matrix elements of
$\hat{U}(t)\hat{U}^{\dagger}(t)=\hat{I}$.
This unitary condition ensures that the well-ordered operator
counterpart of $S_1$ yields Hermitian operators for $\hat{Q}$ and $\hat{P}$
from Eq.\ (2). Mathematically, Eq.\ (17) can have
several solutions, and there is an arbitrariness in the choice of the
new position variable, because any function of the constant of integration
of Eq.\ (17) can be a candidate for the new position variable. Not all the
possible solutions correspond to the unitary transformations, and from the
possible solutions we choose only those which satisfy Eq.\ (39) and thus give
Hermitian position and momentum operators that are observables.
These solutions correspond to
the unitary transformations of the type $\hat{U}(t)=\hat{T}(t)\hat{A}$.
Further, from these solutions we single out the one that corresponds to the
case $\hat{A}=\hat{I}$ by imposing the condition
$e^{iS_1(q_1,Q_2,t=0)/\hbar}=\delta (q_1-Q_2)$ as an initial
condition. The appropriate form for $S_2$ corresponding to this condition is
that $e^{iS_2(q_1,P_2,t=0)/\hbar}=\frac{1}{\sqrt{2\pi \hbar}}
e^{iq_1P_2/\hbar}$. In the limit $\hbar \rightarrow 0$, $S_2$ in this
equation reduces to the correct classical generating function for the
identity transformation, $S_2=q_1P_2$. In solving the Hamilton-Jacobi
equation perturbatively using Eqs.\ (21) and (22), in order to consistently
satisfy the initial condition, we start with the classical Hamilton's
principal function $S_1^{(0)}$ that gives at initial
time the relations $q_1=Q_2$ and $p_1=P_2$ from the classical c-number
counterpart of Eq.\ (2). An arbitrary additive constant $c$ to the solution
of Eq.\ (17), which always appears in the form $S_1+c$ because a partial
differential equation such as Eq.\ (19) contains only partial derivatives of
$S_1$\cite{one}, can also be fixed by the initial condition. Depending on
whether the boundary conditions can readily be expressed in a simple form,
one type of quantum generating function may be favored over another. The
existence and uniqueness of the solution of Eq.\ (17) satisfying
the above conditions are guaranteed by considering the equation
$e^{iS_1(q_1,Q_2,t)/\hbar}=\langle q_1|\hat{T}(t)|q_2\rangle$, in which
$S_1$ is given directly by the matrix elements of $\hat{T}(t)$. It is clear
that these matrix elements exist and are uniquely defined.
As illustrations of the application of the theory, we consider the following
two simple systems.
{\sl Example 1. A particle under a constant force.}
As a first example, let us consider a particle moving under a constant force
of magnitude $a$, for which the Hamiltonian is $\hat{H}=\hat{p}^2/2-a\hat{q}$.
We start with the following classical principal function that is
the solution of Eq.\ (21),
\begin{equation}
S_1^{(0)}=\frac{(q_1-Q_2)^2}{2t} +\frac{at(q_1+Q_2)}{2}
-\frac{a^2t^3}{24}.
\end{equation}
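One can verify symbolically (an editorial check, not part of the original
text) that this $S_1^{(0)}$ indeed solves the classical Hamilton-Jacobi
equation (21) with $V(q)=-aq$:

```python
import sympy as sp

q, Q, t, a = sp.symbols('q Q t a', positive=True)
S0 = (q - Q)**2/(2*t) + a*t*(q + Q)/2 - a**2*t**3/24
# Residual of Eq. (21) with V(q) = -a q
res = sp.diff(S0, q)**2/2 - a*q + sp.diff(S0, t)
print(sp.simplify(res))   # 0
```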
Substituting $S_1^{(0)}$ into Eq.\ (22) and solving the resulting equation,
we find that the first order term in $\hbar$ has the general solution
\begin{equation}
S_1^{(1)}=\frac{i}{2}\ln t +f\left( \frac{q_1-Q_2}{t} -\frac{a}{2}t\right) ,
\end{equation}
where $f$ is an arbitrary differentiable function. To satisfy the proper
boundary condition $e^{iS_1(q_1,Q_2,t=0)/\hbar} =\delta (q_1-Q_2)$,
$f$ and all higher order terms of $S_1$ are chosen to be zero, and the overall
additive constant to be $c=\hbar \frac{i}{2}\ln i2\pi \hbar$.
By well-ordering terms, we get the generating operator
\begin{equation}
\bar{S}_1(\hat{q},\hat{Q},t)=\frac{\hat{q}^2-2\hat{q}\hat{Q}
+\hat{Q}^2}{2t}+\frac{at}{2}(\hat{q}+\hat{Q})
-\frac{a^2t^3}{24} +\hbar\frac{i}{2}\ln i2\pi \hbar t.
\end{equation}
We can easily check that the operator form of the quantum Hamilton-Jacobi
equation (13) is satisfied by the above generating operator.
From Eq.\ (14) we obtain the wave function
\begin{equation}
\psi ^q(q_1,t)=\int \frac{1}{\sqrt{i2\pi \hbar t}}
e^{\frac{i}{2\hbar t}
[(q_1-Q_2)^2 +at^2(q_1+Q_2)-a^2t^4/12]}\psi ^Q(Q_2)dQ_2.
\end{equation}
Because $\psi ^Q(Q_2)$ is constant in time, we can express it in terms of the
initial wave function. For the present case in which we use the first-type
quantum generating function $S_1$ and $\hat{A}=\hat{I}$, we have simply
$\psi ^q(q_2=Q_2,t=0)=\psi ^Q(Q_2)$. We note that Eq.\ (43) is in exact
agreement with the result of Feynman's path-integral approach\cite{two0}.
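As an editorial sanity check of Eq.\ (43), one can propagate a Gaussian
wave packet by direct numerical quadrature and verify Ehrenfest's theorem,
$\langle q\rangle (t)=\langle q\rangle _0+\langle p\rangle _0t+at^2/2$,
which is exact for a linear potential (here $\hbar=1$, $a=2$, $t=0.6$ are
assumed sample values):

```python
import numpy as np

# Editorial check of Eq. (43): apply its kernel to a Gaussian by quadrature
# and test <q>(t) = <q>_0 + <p>_0 t + a t^2/2; hbar = 1 assumed.
hbar, a, t, p0 = 1.0, 2.0, 0.6, 1.0
Q = np.linspace(-12, 12, 1501)
dQ = Q[1] - Q[0]
q = Q

psi0 = np.pi**-0.25*np.exp(-Q**2/2)*np.exp(1j*p0*Q/hbar)   # <q>=0, <p>=p0

phase = (np.subtract.outer(q, Q)**2
         + a*t**2*np.add.outer(q, Q) - a**2*t**4/12)/(2*hbar*t)
psit = (np.exp(1j*phase) @ psi0)*dQ/np.sqrt(1j*2*np.pi*hbar*t)

norm = np.sum(np.abs(psit)**2)*dQ
qbar = np.sum(q*np.abs(psit)**2)*dQ/norm
print(norm, qbar)   # ~ 1.0 and ~ p0*t + a*t**2/2 = 0.96
```

The norm is preserved and the packet's center moves along the classical
trajectory, as it must for a linear potential.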
For the time evolution of the distribution function, we find from Eq.\ (38)
the following kernel for the Wigner distribution function,
\begin{eqnarray}
\tilde{\kappa} (q_1,p_1,Q_2,P_2,t)&=&\frac{1}{\pi ^2\hbar ^2 t}\int \int
dYdy e^{-\frac{2i}{\hbar}\left(Q_2-q_1+p_1t-\frac{at^2}{2}\right)
\frac{y}{t}} e^{\frac{2i}{\hbar}\left( P_2-\frac{q_1-Q_2-at^2/2}{t}
\right) Y}, \nonumber \\
&=&\delta(Q_2-q_1+p_1t-at^2/2) \delta \left(P_2-\frac{q_1-Q_2-at^2/2}
{t}\right) .
\end{eqnarray}
Substituting Eq.\ (44) into Eq.\ (37), we obtain
\begin{eqnarray}
F^W(q_1,p_1,t)=F^W(q_1-p_1t+at^2/2,p_1-at,0),
\end{eqnarray}
where use has been made of the relation $F^W(q_1,p_1,t=0)=G^W(q_1,p_1)$.
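Eq.\ (45) states that the Wigner function is constant along classical
trajectories: its arguments are precisely the time-reversed classical flow
generated by $H=p^2/2-aq$, as the following editorial check with sample
values confirms:

```python
import numpy as np

# Editorial check: the arguments of Eq. (45) invert the classical flow
# of H = p^2/2 - a q; numerical values are arbitrary samples.
a, t = 1.3, 0.8
q0, p0 = 0.4, -0.7
qt, pt = q0 + p0*t + a*t**2/2, p0 + a*t      # forward classical trajectory
q_back, p_back = qt - pt*t + a*t**2/2, pt - a*t
print(q_back, p_back)   # recovers (q0, p0) = (0.4, -0.7)
```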
As has been mentioned, the present Hamilton-Jacobi theory also
provides the solutions of the Heisenberg equations via the transformation
relations between the two sets of canonical operators.
From Eqs.\ (2) and (42) we can obtain
\begin{eqnarray}
\hat{q}_S&=&\hat{Q}_S(t)+\hat{P}_S(t)t+\frac{a}{2}t^2, \\
\hat{p}_S&=&\hat{P}_S(t) +at.
\end{eqnarray}
In the Heisenberg picture, the above equations become
\begin{eqnarray}
\hat{q}_H(t)&=&\hat{q}_S+\hat{p}_St+\frac{a}{2}t^2, \\
\hat{p}_H(t)&=&\hat{p}_S +at,
\end{eqnarray}
which are the solutions of the Heisenberg equations.
By setting $a=0$, we can obtain the free particle solution.
{\sl Example 2. The harmonic oscillator.}
For the harmonic oscillator whose Hamiltonian is given
by $\hat{H}=\hat{p}^2/2+\hat{q}^2/2$, the classical Hamilton-Jacobi
equation (21) can be solved to give the classical principal function
\begin{equation}
S_1^{(0)}=\frac{1}{2}(q_1^2+Q_2^2)\cot t-q_1Q_2\csc t.
\end{equation}
With the boundary condition $e^{iS_1(q_1,Q_2,t=0)/\hbar}=\delta (q_1-Q_2)$,
Eq.\ (22) can be solved to give
\begin{equation}
S_1^{(1)}=\frac{i}{2}\ln \sin t,
\end{equation}
and $S_1^{(2)}=\cdots =0$. The additive constant has the form
$c=\hbar \frac{i}{2}\ln i2\pi \hbar$. The well-ordered generating operator
is then written as
\begin{equation}
\bar{S}_1(\hat{q},\hat{Q},t)=\frac{1}{2}(\hat{q}^2+\hat{Q}^2)\cot t
-\hat{q}\hat{Q}\csc t +\hbar \frac{i}{2} \ln i2\pi \hbar \sin t.
\end{equation}
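Both perturbative ingredients can be verified symbolically (an editorial
check, writing $\cot t=\cos t/\sin t$ and $\csc t=1/\sin t$): $S_1^{(0)}$ of
Eq.\ (50) solves the classical equation (21) with $V=q^2/2$, and
$S_1^{(1)}$ of Eq.\ (51) solves Eq.\ (22) for $n=1$:

```python
import sympy as sp

q, Q, t = sp.symbols('q Q t', positive=True)
S0 = (q**2 + Q**2)/2*sp.cos(t)/sp.sin(t) - q*Q/sp.sin(t)   # Eq. (50)
S1 = sp.I/2*sp.log(sp.sin(t))                              # Eq. (51)
# Eq. (21) with V = q^2/2, and Eq. (22) for n = 1
eq21 = sp.diff(S0, q)**2/2 + q**2/2 + sp.diff(S0, t)
eq22 = sp.diff(S0, q)*sp.diff(S1, q) - sp.I/2*sp.diff(S0, q, 2) + sp.diff(S1, t)
print(sp.simplify(eq21), sp.simplify(eq22))   # 0 0
```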
The wave function takes the form
\begin{equation}
\psi ^q(q_1,t)=\int \frac{1}{\sqrt{i2\pi \hbar \sin t}}
e^{\frac{i}{2\hbar \sin t}
[(q_1^2+Q_2^2)\cos t -2q_1Q_2]}\psi ^q(Q_2,0)dQ_2,
\end{equation}
and the kernel and the distribution function are given respectively by
\begin{equation}
\tilde{\kappa} (q_1,p_1,Q_2,P_2,t)=\delta (Q_2-q_1\cos t +p_1\sin t )
\delta (P_2+Q_2\cot t -q_1\csc t),
\end{equation}
and
\begin{equation}
F^W(q_1,p_1,t)=F^W(q_1\cos t-p_1\sin t,q_1\sin t+p_1\cos t,0).
\end{equation}
This equation shows that the Wigner distribution function for the harmonic
oscillator rotates clockwise in phase space.
The quantum Hamilton-Jacobi equation for other types of generating operators
can be solved by a similar technique. For instance, we can obtain the
following solution for the second-type generating operator,
\begin{equation}
\bar{S}_2(\hat{q},\hat{P},t)=-\frac{1}{2}(\hat{q}^2+\hat{P}^2)\tan t
+\hat{q}\hat{P}\sec t+\hbar \frac{i}{2} \ln 2\pi \hbar \cos t.
\end{equation}
The solutions of the Heisenberg equations can be obtained from Eqs.\ (2) and
(52) (or Eqs.\ (3) and (56)). In the Heisenberg picture we have
\begin{eqnarray}
\hat{q}_H(t)&=&\hat{q}_S\cos t+\hat{p}_S\sin t, \\
\hat{p}_H(t)&=&-\hat{q}_S\sin t+\hat{p}_S\cos t.
\end{eqnarray}
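These solutions admit an independent numerical confirmation (an editorial
sketch, with $\hbar=m=\omega=1$): in a truncated Fock basis the Hamiltonian
is diagonal, so $\hat{T}(t)$ can be written down directly and
$\hat{T}^\dagger \hat{q}\hat{T}$, $\hat{T}^\dagger \hat{p}\hat{T}$ compared
with Eqs.\ (57) and (58):

```python
import numpy as np

# Truncated Fock-space check of Eqs. (57)-(58); hbar = m = omega = 1 assumed.
# H = a^dag a + 1/2 is diagonal in this basis, so T = e^{-iHt} is, too.
N = 30
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)            # annihilation operator
q = (a + a.conj().T)/np.sqrt(2)
p = 1j*(a.conj().T - a)/np.sqrt(2)
t = 0.7
T = np.diag(np.exp(-1j*(n + 0.5)*t))        # time-evolution operator
qH = T.conj().T @ q @ T
pH = T.conj().T @ p @ T
print(np.allclose(qH, q*np.cos(t) + p*np.sin(t)),
      np.allclose(pH, -q*np.sin(t) + p*np.cos(t)))   # True True
```

Because the truncated Hamiltonian retains the exact oscillator spectrum, the
agreement here is exact up to floating-point error, not merely approximate.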
It should be mentioned that, even though we restricted the discussion in
this section to the case $\hat{A}=\hat{I}$ by imposing the special initial
condition, a solution of the quantum Hamilton-Jacobi equation corresponding
to another choice of $\hat{A}$ may well be more readily obtainable. In that
case, the initial condition derived from
$e^{iS_1(q_1,Q_2,0)/\hbar}=\langle q_1|\hat{A}|q_2\rangle$ is of course
different from that described above. As an example, for the harmonic
oscillator, we present in Appendix C a different solution for $S_1$, in
which the unitary operator $\hat{A}$ corresponds to the transformation that
interchanges the position and momentum operators.
\section{Concluding remarks}
We wish to give some final remarks concerning the quantum Hamilton-Jacobi
theory. In this approach, the quantum Hamilton-Jacobi equation takes the
place of the time-dependent Schr\"{o}dinger equation for solving dynamical
problems, and the quantum Hamilton's principal function $S_1$ that is the
solution of the former equation gives the solution of the latter equation
through Eq.\ (14). As mentioned in Sec.\ V, $e^{iS_1/\hbar}$ becomes the
propagator in position space for the case $\hat{A}=\hat{I}$. To find the
propagator, Feynman's path-integral approach divides the time interval
between a given initial state and a final state into infinitesimal steps,
lets the quantum generating function for each infinitesimal transformation
equal the classical action function plus a proper additive constant that
vanishes in the limit $\hbar \rightarrow 0$, and finally composes the
infinitesimal transformations. The present approach, on the other hand,
seeks the quantum generating function that transforms the initial state
directly to the final state.
The present formalism gives also the solutions of the Heisenberg equations
through the transformation relations which in the Heisenberg picture can
be expressed as Eq.\ (23).
In conclusion, it is clear that the present approach, which has its origin
in Dirac's canonical transformation theory, helps one better comprehend the
interrelations among the existing formulations of quantum mechanics.
Finally, one more remark is worth making on how far the range of validity
of the quantum Hamilton-Jacobi theory can be stretched.
Even though our work here deals with the
unitary transformation to ensure that the new operators become Hermitian,
and hence observables, the main idea presented in this paper could be
extended so as to include the non-unitary transformation that deals with
non-Hermitian operators. The theory would then retain the form of the
quantum Hamilton-Jacobi equation but would be associated with different
types of transformations, such as $\hat{U}(t)=\hat{T}(t)\hat{B}$ where
$\hat{U}(t)$ and $\hat{B}$ are not unitary. However, particular care would
then be required regarding the completeness of the eigenstates of the new
operators $\hat{Q}$ and $\hat{P}$, since this property is crucial to several
of the relations derived here and has been used implicitly throughout the
paper.
\appendix
\section{Proof of the equivalence of equations (11) and (12)}
Eq.\ (12) can be derived from Eq.\ (11) by considering the matrix element
of the second term on the right hand side of Eq.\ (11) as follows,
\begin{eqnarray}
\langle q_1|i\hbar \hat{U}\frac{\partial \hat{U}^\dagger}{\partial t}| q_2
\rangle &=& -\langle q_1|i\hbar \frac{\partial \hat{U}}{\partial t}
\hat{U}^\dagger |q_2 \rangle , \\
&=&-\int dq_3\langle q_1|i\hbar \frac{\partial \hat{U}}{\partial t}
|q_3\rangle \langle q_3|\hat{U}^\dagger |q_2 \rangle , \\
&=&\int dQ_3\left(-i\hbar \frac{\partial}{\partial t}\langle
q_1|\hat{U}|q_3\rangle \right) {_t\langle}Q_3|q_2\rangle ,
\end{eqnarray}
where the identity $\hat{U}\hat{U}^\dagger =\hat{I}$ is used to obtain
Eq.\ (A1).
Using the definition of the quantum generating function (1), we can obtain
\begin{eqnarray}
\langle q_1|i\hbar \hat{U}\frac{\partial \hat{U}^\dagger}{\partial t}| q_2
\rangle &=& \int dQ_3 \frac{\partial F_1(q_1,Q_3)}{\partial t}
\langle q_1|Q_3\rangle _t\hspace{0.7mm} {_t\langle}Q_3|q_2\rangle , \\
&=& \int dQ_3\langle q_1|\frac{\partial \bar{F}_1(\hat{q}, \hat{Q},t)}
{\partial t} |Q_3\rangle _t\hspace{0.7mm} {_t\langle} Q_3|q_2\rangle , \\
&=&\langle q_1|\frac{\partial \bar{F}_1(\hat{q},\hat{Q},t)}
{\partial t}|q_2 \rangle .
\end{eqnarray}
Since $i\hbar \hat{U}\frac{\partial \hat{U}^\dagger}{\partial t}$ and
$\frac{\partial \bar{F}_1(\hat{q},\hat{Q},t)}{\partial t}$ have the
same matrix element, we conclude that the two operators are identical.
\section{Derivation of the kernel $\kappa$}
In order to find the relation between $F^f$ and $G^f$, we first make use of
the completeness of the eigenvectors of $\hat{q}$ in Eq.\ (25) and get
\begin{eqnarray}
G^f(Q_1,P_1,t)&=&\frac{1}{2\pi ^2\hbar}\int \int \int \int \int
dXdYdQ_2dq_3dq_4\hspace{0.7mm} {_t\langle}Q_2+Y|q_3\rangle \langle q_3|
\hat{\rho} |q_4\rangle \langle q_4|Q_2-Y\rangle _t \nonumber \\
&&\times f(X,2Y/\hbar )e^{iX(Q_2-Q_1)} e^{-i2YP_1/\hbar}.
\end{eqnarray}
Changing variables with $q_5=(q_3+q_4)/2$ and $y=(q_3-q_4)/2$, and using the
relation $\int dy'dq_6\delta (y-y')\delta (q_5-q_6) g(y',q_6)=g(y,q_5)$ where
the $\delta$-functions are written in the forms
\begin{equation}
\delta (y-y')=\frac{1}{\pi \hbar}\int dp_3e^{-2ip_3(y-y')/\hbar},
\end{equation}
and
\begin{equation}
\delta (q_5-q_6)=\frac{1}{2\pi}\int dxe^{ix(q_6-q_5)},
\end{equation}
we obtain
\begin{eqnarray}
G^f(Q_1,P_1,t)&=&\frac{1}{2\pi ^4\hbar ^2}\int \int \int \int
\int \int \int \int \int dXdYdQ_2dq_5dydy'dp_3dq_6dx
f(X,2Y/\hbar ){_t\langle}Q_2+Y|q_5+y'\rangle \nonumber \\
&& \times \langle q_6+y|\hat{\rho} |q_6-y\rangle \langle
q_5-y'|Q_2-Y\rangle _t e^{iX(Q_2-Q_1)}
e^{-i2YP_1/\hbar} e^{-2ip_3(y-y')/\hbar} e^{ix(q_6-q_5)}.
\end{eqnarray}
Next, we multiply this equation by
\begin{equation}
\int \int dx''dy'' \frac{f(x,2y/\hbar )}{f(x'',2y''/\hbar )}\delta
(x''-x)\delta (y''-y)=1,
\end{equation}
where
\begin{eqnarray}
\delta (x''-x)\delta (y''-y)=\frac{1}{2\pi ^2\hbar} \int \int
d\alpha d \beta e^{-i\alpha (x''-x)} e^{-2i\beta (y''-y)/ \hbar}.
\end{eqnarray}
In the resulting equation, we replace the integrations over $q_5$ and $p_3$,
respectively, with those over $q_2=q_5-\alpha$ and $p_2=p_3-\beta$, and
then integrate over $\beta$ and $y'$. We then obtain
\begin{eqnarray}
G^f(Q_1,P_1,t)&=&\int \int dq_2dp_2\bigg[ \frac{1}{2\pi ^3\hbar}
\int \int \int \int \int \int dXdYdQ_2dx''dy''d\alpha
{_t\langle} Q_2+Y| q_2+\alpha +y''\rangle \nonumber \\
&& \times \langle q_2+\alpha -y''|Q_2-Y \rangle _t
\frac{f(X,2Y/\hbar )}{f(x'',2y''/\hbar )}
e^{iX(Q_2-Q_1)} e^{-2iYP_1/\hbar} e^{-i\alpha x''} e^{2ip_2y''/\hbar} \bigg]
\nonumber \\
&& \times \left[ \frac{1}{2\pi ^2\hbar} \int \int \int dxdydq_6
\langle q_6+y|\hat{\rho} |q_6-y\rangle f(x,2y/\hbar )
e^{ix(q_6-q_2)}e^{-2iyp_2/\hbar} \right] ,
\end{eqnarray}
which leads immediately to Eq.\ (26).
\section{Solutions of the quantum Hamilton-Jacobi equation}
In Sec.\ V we considered the case $\hat{A}=\hat{I}$ only for convenience,
because,
as can be noticed from the two examples of Sec.\ V, this case not only gives
a simple relation between the initial wave function $\psi ^q(q_1,0)$
(distribution function $F^f(q_1,p_1,0)$) in $q$-representation and the constant
wave function $\psi ^Q(Q_2)$ (distribution function $G^f(Q_2,P_2)$) in
$Q$-representation, but also makes it easy to express the new
operators $(\hat{Q}_H,\hat{P}_H)$ in the Heisenberg picture in terms of the
old operators $(\hat{q}_S,\hat{p}_S)$ in the Schr\"{o}dinger picture.
In doing so, we required the solution to satisfy the
specific initial condition described in the first part of Sec.\ V. This
specialization is of course not of necessity, and if this initial condition
is discarded with the unitary condition of the transformation still retained,
we have a group of general solutions each member of which corresponds to a
specific choice of $\hat{A}$. As an illustration, we present below
another possible
solution of the quantum Hamilton-Jacobi equation (19) for the harmonic
oscillator that belongs to unitary transformations of the type
$\hat{U}(t)=\hat{T}(t)\hat{A}$,
\begin{equation}
\bar{S_1}(\hat{q},\hat{Q},t)=-\frac{1}{2}(\hat{q}^2+\hat{Q}^2)\tan t
+\hat{q}\hat{Q}\sec t+\frac{i\hbar}{2} \ln (2\pi \hbar \cos t).
\end{equation}
At the initial time, Eq.\ (C1) satisfies $e^{iS_1(q_1,Q_2,0)/\hbar}=\frac{1}
{\sqrt{2\pi \hbar}}
e^{iq_1Q_2/\hbar}$, which equals $\langle q_1|\hat{A}|Q_2\rangle$, i.e.,
the matrix element of the unitary operator $\hat{A}$.
In this case, the operator
$\hat{A}$ corresponds to the exchange transformation which generates
the transformation relations
\begin{eqnarray}
\hat{p}&=&\frac{\partial \bar{S_1}(\hat{q},\hat{Q},0)}{\partial \hat{q}}
=\hat{Q}, \\
\hat{P}&=&-\frac{\partial \bar{S_1}(\hat{q},\hat{Q},0)}{\partial \hat{Q}}
=-\hat{q}.
\end{eqnarray}
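As an explicit consistency check of the stated initial condition (our addition), setting $t=0$ in Eq.\ (C1) gives

```latex
\[
\bar{S_1}(\hat{q},\hat{Q},0)=\hat{q}\hat{Q}+\frac{i\hbar}{2}\ln 2\pi\hbar ,
\qquad
e^{iS_1(q_1,Q_2,0)/\hbar}
=e^{-\frac{1}{2}\ln 2\pi\hbar}\, e^{iq_1Q_2/\hbar}
=\frac{1}{\sqrt{2\pi\hbar}}\, e^{iq_1Q_2/\hbar}.
\]
```

which is exactly the exchange-transformation kernel quoted above.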
\end{document} |
\begin{document}
\title{Accurate calculation of the geometric measure of entanglement for multipartite quantum states}
\author{Peiyuan Teng}
\institute{Peiyuan Teng \at
Department of Physics\\
The Ohio State University\\
Columbus, Ohio 43210, USA\\ \email{[email protected]}
}
\maketitle
\begin{abstract}
This article proposes an efficient way of calculating the geometric measure of entanglement using tensor decomposition methods. The connection between these two concepts is explored using the tensor representation of the wavefunction. Numerical examples are benchmarked and compared. Furthermore, we search for highly entangled qubit states to show the applicability of this method.
\keywords{geometric measure of entanglement \and tensor decomposition \and multipartite entanglement \and highly entangled states}
\end{abstract}
\section{Introduction}
Quantum entanglement is an essential concept in quantum physics and quantum information. Various measures of quantum entanglement have been proposed to characterize quantum entanglement, such as the Von Neumann entanglement entropy. The geometric measure of entanglement\cite{wei2003geometric} has recently gained popularity, owing to its clear geometric meaning. The geometric measure of entanglement was first proposed by Shimony\cite{shimony1995degree}, then generalized to the multipartite system by Barnum and Linden\cite{barnum2001monotones}, and finally examined by Wei and Goldbart, who gave a rigorous proof that it provides a reliable measure of entanglement\cite{wei2003geometric}.
A large amount of research regarding the properties of geometric entanglement has been performed. For example, properties of symmetric states were discussed using the Majorana representation of
such states\cite{aulbach2010maximally}. The geometric measure of entanglement has been discussed theoretically, although few practical numerical evaluation methods are available, owing to the complicated structure of a quantum state, whose amplitude is a complex-valued function. A simple way to determine geometric entanglement was given in Ref.~\cite{PhysRevA.84.022323}, where the method was tested on three- and four-qubit states with non-negative coefficients. A problem with this method is that although the overlap converges, it may not converge to the minimal overlap. Recently, a method to calculate the geometric measure of entanglement for non-negative tensors was proposed in Ref.~\cite{hu2016computing}. Our article illustrates a way to numerically calculate the geometric measure of entanglement for an arbitrary quantum state with complex amplitudes, which extends the scope of previous numerical methods.
Tensor network theory is currently widely used as a way of simulating physical systems. The idea of tensor network theory is to represent the wavefunction in terms of a multi-indexed tensor, such as the matrix product states (MPS)\cite{verstraete2008matrix}. Therefore, it is also natural to consider entanglement within this context. Tensor theory was applied to the study of quantum entanglement in Refs.\cite{nispherical,curtef2007conjugate}. Using tensor eigenvalues to study geometric entanglement was discussed in Ref.\cite{ni2014geometric}. The possibility of using tensor decomposition methods to study quantum entanglement was pointed out in Ref.\cite{enriquez2015minimal} in the context of the minimal Renyi-Ingarden-Urbanik entropy, of which the geometric entanglement is a special case; the asymptotic behavior of the GME for qutrit systems was also studied there using the PARAFAC tensor decomposition. In this work, we comprehensively study the possibility of using tensor decomposition methods to calculate the geometric measure of entanglement for arbitrary quantum states. Tensor decomposition methods are currently being developed rapidly. By using new results from tensor decomposition theory, we can not only calculate the geometric measure of entanglement in the most efficient way, but also gain a deeper understanding of the structure of quantum states from the perspective of tensor decomposition theory.
To further demonstrate the efficiency of tensor decomposition methods, we conduct a numerical search for maximally and highly entangled quantum states.
Deep understanding of highly entangled multiqubit states is important for quantum information processing. Highly entangled states, such as the cluster states, could be crucial to quantum computers\cite{PhysRevLett.86.5188}.
Highly entangled states are also key parts of quantum error correction and quantum communication\cite{doi:10.1063/1.3511477}. Therefore, searching for highly entangled quantum states is necessary for the development of quantum information science.
In this article, we first review the concepts of the geometric measure of entanglement and tensor rank decomposition. Then we point out that the spectral value of a rank-one decomposition is identical to the overlap of the wavefunctions. Our method is capable of handling an arbitrary (real or complex, symmetric or non-symmetric) pure-state wavefunction. We also demonstrate that the tensor decomposition method can be used to extract the hierarchical structure of a wavefunction. Perfect agreement is found for the examples that we tested. Finally, we use this method to characterize some quantum states. A maximally entangled state that is similar to the HS state is found. In addition, we performed a numerical search for highly entangled quantum states from four qubits to seven qubits, and we provide new examples of such states that are more entangled, under the geometric measure, than some currently known states.
This article is organized as follows. In Section \ref{marker2}, we mainly focus on the theoretical aspects of tensor theory and entanglement theory. In Section \ref{marker3}, several known examples are calculated to demonstrate the effectiveness of the tensor rank decomposition method. In Section \ref{marker4}, maximally and highly entangled states are searched and discussed.
\section{\label{marker2} Geometric measure of entanglement and tensor decomposition}
\subsection{Geometric measure of entanglement}
The geometric measure of entanglement for multipartite systems was comprehensively examined by Wei and Goldbart in Ref.\cite{wei2003geometric}. Following their notations, we start from a general n-partite pure state
\begin{equation}
|\psi\rangle=\sum_{p_1,\dots p_n}\chi_{p_1,p_2\dots p_n}|e^1_{p_1}e^2_{p_2}\dots e^n_{p_n}\rangle.
\end{equation}
Define a distance as
\begin{equation}
d=\min_{|\phi\rangle}\||\psi\rangle-|\phi\rangle\|,
\end{equation}
where $|\phi\rangle$ is the nearest separable state, which can be written as
\begin{equation}
|\phi\rangle=\otimes^n_{l=1}|\phi^{l}\rangle.
\end{equation}
$|\phi^{l}\rangle$ is the normalized wavefunction for each party $l$. A practical choice of the norm is the Hilbert\textendash Schmidt norm, or equivalently the Frobenius norm of a tensor, which equals the square root of the sum of the squared moduli of the coefficients.
The geometric entanglement can be written as
\begin{equation}
E(|\psi\rangle)=1-|\langle\psi|\phi\rangle|^2.
\end{equation}
It was proved by Wei and Goldbart in Ref.\cite{wei2003geometric} that this measure of entanglement satisfies the criteria of a good entanglement measure.
We can write a wavefunction in the language of tensor. A general n-partite pure state can be written as
\begin{equation}
|\psi\rangle=\sum_{i,j,\dots k}T_{ij\dots k}|ij\dots k\rangle.
\end{equation}
We use the tensor $T$ to describe a quantum state, normalized so that its Frobenius norm satisfies $\|T\|=1$. The labels $i,j,\dots,k$ run from one to the dimension of the Hilbert space of each party.
A direct product state can be written as
\begin{equation}
|\phi\rangle=a_i|i\rangle\otimes a_j|j\rangle\cdots \otimes a_k|k\rangle.
\end{equation}
$a_i|i\rangle$ is a normalized wavefunction for the corresponding party; here the Einstein summation convention is used.
After writing the wavefunction in the language of tensors, we can use the techniques from tensor decomposition theory to calculate the geometric measure of entanglement.
\subsection{Tensor decomposition}
In general, a tensor decomposition method decomposes a tensor into direct products of several smaller tensors. There are two major ways to decompose a tensor.
One way is the ``Tensor Rank Decomposition'' or ``Canonical Polyadic Decomposition''. For an $n$-way tensor, the Tensor Rank Decomposition can be written as
\begin{equation}
T_{mn\cdots p}=\sum_r\lambda_r a_{rm}\circ a_{rn}\cdots\circ a_{rp}.
\end{equation}
The minimal number of terms $r$ for which this expression is exact is called the rank of the tensor.
The Tensor Rank Decomposition can be physically understood as the decomposition of a multipartite wavefunction into a sum of direct products of wavefunctions of the individual parties. The dyadic product notation ``$\circ$'' is used, which means that the outer product of the vectors is treated as a single tensor.
Another way to decompose a tensor is the Tucker Decomposition, which in some articles is called the ``Higher Order Singular Value Decomposition (HOSVD)''. It can be written as
\begin{equation}
T_{mnp\cdots z}=\sum_{\alpha\beta\gamma\cdots\omega}\lambda_{\alpha\beta\gamma\cdots\omega} a_{\alpha m}\circ a_{\beta n}\circ a_{\gamma p}\cdots\circ a_{\omega z}.
\end{equation}
The Greek indices $\alpha, \beta, \gamma, \dots, \omega$ run over the (fixed) dimensions of the core tensor $\lambda$.
These two decomposition methods can be regarded as tensor generalizations of the widely used Singular Value Decomposition (SVD) of a matrix,
\begin{equation}
T_{mn}=\sum_{i=1}^{r}\lambda_i a_{im}\circ a_{in}=U S V^{*}.
\end{equation}
$S$ is the singular value matrix.
Since matrix $S$ is diagonal, different understandings of this singular matrix can lead to different decomposition methods. A detailed discussion of tensor decomposition methods can be found in Ref.\cite{kolda2009tensor}.
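For the bipartite (matrix) case this connection can be made concrete: the maximal overlap of a pure state with a product state is exactly the largest singular value of its coefficient matrix. The following NumPy sketch is our own illustration (not part of the original numerics), using the Bell state $(|00\rangle+|11\rangle)/\sqrt{2}$:

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2) as a 2x2 coefficient matrix,
# normalized so that its Frobenius norm is 1.
T = np.array([[1.0, 0.0],
              [0.0, 1.0]]) / np.sqrt(2.0)

# For a matrix, the best rank-1 approximation comes from the SVD:
# the maximal overlap with a product state is the largest singular value.
lam = np.linalg.svd(T, compute_uv=False)[0]
E = 1.0 - lam ** 2        # geometric entanglement: 1 - 1/2 = 0.5
print(lam, E)
```

The largest singular value is $1/\sqrt{2}$, recovering the well-known value $E=1/2$ for a Bell pair.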
The objective function of a rank\textendash $k$ approximation of a tensor can be written as
\begin{equation}
d=\min{\|T_{mn\cdots p}-\sum_{i=1}^{k}\lambda_i a_{im}\circ a_{in}\cdots\circ a_{ip}\|}.
\end{equation}
For the Tucker decomposition, we can instead fix the index ranges of $\lambda_{\alpha\beta\gamma...\omega}$ and minimize the norm.
When we restrict $\lambda$ to be a single scalar, the Tucker Decomposition and the Tensor Rank Decomposition become identical; in other words, they have the same rank\textendash $1$ decomposition. Therefore, our objective function becomes
\begin{equation}
d=\min{\|T_{mn\cdots p}-\lambda a_{m}\circ a_{n}\cdots\circ a_{p}\|}.
\end{equation}
For general quantum states, these tensors and vectors are defined over the complex field ${C}$.
Notice that this objective function has the same form as in our definition of the geometric measure of entanglement, with $T_{mn\cdots p}=|\psi\rangle$ and $\lambda a_{m}\circ a_{n}\cdots\circ a_{p}=|\phi\rangle$.
From a geometric argument, if $\| T_{mn\cdots p}\|=1$ and $\| a_{m}\circ a_{n}\cdots\circ a_{p}\|=1$, then our claim is
\begin{equation}
\boxed{\lambda=\langle\psi|\phi\rangle.}
\end{equation}
This can be understood intuitively: $T_{mn\cdots p}$ is a unit vector in our space, and for a fixed unit-length $a_{m}\circ a_{n}\cdots\circ a_{p}$, the set $\lambda a_{m}\circ a_{n}\cdots\circ a_{p}$ traces out a line in our ($m\times n\times \cdots \times p$ dimensional) vector space as we vary $\lambda$. Therefore, our minimization problem can be geometrically understood as finding the minimal perpendicular distance from $T$ to all possible direct-product lines in the space. Since both vectors are unit vectors, $\lambda$ must equal the cosine of the angle between them. Understanding quantum mechanics in the context of geometry has been pointed out in Ref.\cite{brody2001geometric}. Then our geometric entanglement is
\begin{equation}
E(T)=1-| \lambda |^2,
\end{equation}
which is expressed in the language of a tensor.
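The minimization can be carried out with a short alternating rank-one update, i.e., a HOOI iteration specialized to a $1\times\cdots\times 1$ core. The sketch below is our own illustrative Python/NumPy implementation, not the MATLAB toolbox routine used later in the paper:

```python
import numpy as np

def max_overlap(T, iters=300, seed=0):
    # Alternating rank-1 update (HOOI with a 1x...x1 core): each factor
    # is replaced by the unit vector maximizing the overlap with all
    # other factors held fixed.
    rng = np.random.default_rng(seed)
    T = np.asarray(T, dtype=complex)
    T = T / np.linalg.norm(T)
    vs = []
    for n in T.shape:                      # random complex unit factors
        v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        vs.append(v / np.linalg.norm(v))
    for _ in range(iters):
        for k in range(T.ndim):
            M = T
            for j in range(T.ndim - 1, -1, -1):  # contract all axes but k
                if j != k:
                    M = np.tensordot(M, vs[j].conj(), axes=([j], [0]))
            vs[k] = M / np.linalg.norm(M)
    lam = T                                 # full contraction = overlap
    for j in range(T.ndim - 1, -1, -1):
        lam = np.tensordot(lam, vs[j].conj(), axes=([j], [0]))
    return float(abs(lam))

# three-qubit W state: the known maximal overlap is 2/3
W = np.zeros((2, 2, 2))
W[0, 0, 1] = W[0, 1, 0] = W[1, 0, 0] = 1.0 / np.sqrt(3.0)
lam = max_overlap(W)
print(round(lam, 4), round(1.0 - lam ** 2, 4))
```

For the W state this converges to $\lambda=2/3$ (so $E=5/9$), and for the GHZ state to $\lambda=1/\sqrt{2}$, in agreement with the known values.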
Tensor decomposition methods have existed in the scientific computing community for some time; moreover, they have been applied to various fields such as statistics and signal processing\cite{kolda2009tensor}.
\subsection{Numerical algorithm}
There are numerous algorithms that can be used for both the Tensor Rank and the Tucker decomposition. The Alternating Least Squares algorithm is one of the most popular approaches. We will not discuss the details of the algorithms here; a complete survey can be found in Ref.\cite{kolda2009tensor}, and an Alternating Least Squares algorithm for the Tucker decomposition was given in Ref.\ \cite{kapteyn1986approach}.
There are also numerous existing code packages that can be utilized on different coding platforms, such as C++ or MATLAB. In this article, we use the MATLAB tensor toolbox 2.6 developed by Sandia National Laboratories\cite{bader2012matlab}, which is available online.
We must point out a few important facts about the numerical results. Theoretically, both tensor rank decomposition and Tucker decomposition can be used to perform the calculation. In reality, however, some codes are written only for the field of real numbers $R$; we need to work over the complex numbers $C$ in order to represent an arbitrary wavefunction. Note that different vector spaces will lead to different optimization results. In addition, the decomposed wavefunction may not be normalized. A practical choice here is the Alternating Least Squares algorithm for the Tucker Decomposition ($tucker\_als$) provided in the toolbox.
The Alternating Least Squares Tucker algorithm involves the following parameters: (i) the initial tensor, i.e., the tensor that is used to represent the quantum state; (ii) the core of the Tucker decomposition, which can be a tensor of any dimension (in the case of the best rank-one approximation, i.e., geometric entanglement, this tensor is just a scalar); (iii) an optional initial condition, which is used to initialize the iteration and can be set at random; (iv) optional iteration control parameters.
After proper normalization of the initial tensor, the output is as follows: the best-fitted core scalar is the maximal overlap, and the fitted tensors are the corresponding direct product states. This function implements the well-known Higher Order Orthogonal Iteration (HOOI) algorithm for the Tucker approximation, which behaves better than the earlier naive HOSVD algorithm\cite{kolda2009tensor}. The details of this algorithm are non-trivial and can be found in Ref.\cite{doi:10.1137/S0895479898346995}. The original HOOI paper was formulated in terms of a real tensor, but as pointed out by the authors of Ref.\cite{doi:10.1137/S0895479898346995}, the algorithm applies equally to a complex tensor. Moreover, our numerical study also shows its applicability to quantum states with complex amplitudes.
From the viewpoint of tensor decomposition theory, we can see that previous work on the numerical evaluation of geometric entanglement\cite{PhysRevA.84.022323} is a special case of the naive HOSVD algorithm, which was used at an early stage of Tucker decomposition. The problem with the naive HOSVD in Ref.\ \cite{PhysRevA.84.022323} is that although the wavefunction overlap converges, the converged value may not be the minimal overlap in the Hilbert space; see Section 4.2 in Ref.\ \cite{kolda2009tensor}. The HOOI algorithm is designed to minimize the norm and therefore gives the correct result for the geometric measure of entanglement. Another practical point is that the solution may not be unique, and the result may be trapped in a local minimum\cite{kolda2009tensor}. Therefore, for consistency, it is better to vary the initial conditions for all the calculations.
\section{\label{marker3} Numerical evaluation of the geometric measure of entanglement using the alternating least squares algorithm}
\subsection{Geometric measure of entanglement for symmetric qubits pure states}
We would like to benchmark the results given by Wei and Goldbart in Ref.\cite{wei2003geometric}.
Considering a general $n$ qubit symmetric state
\begin{equation}
|S(n,k)\rangle=\sqrt{\frac{k!(n-k)!}{n!}}\sum_{permutations}|0\cdots 01\cdots 1\rangle.
\end{equation}
$k$ is the number of $|0\rangle$s, and $n-k$ is the number of $|1\rangle$s.
The overlap is given by
\begin{equation}
\Lambda=\sqrt{\frac{n!}{k!(n-k)!}}(\frac{k}{n})^{\frac{k}{2}}(\frac{n-k}{n})^{\frac{n-k}{2}}.
\end{equation}
In Table 1, we use $\Lambda$ to denote the theoretical results and $\lambda$ to denote the numerical ones.
\begin{table}
\caption {\textbf{Overlaps for n-partite qubit systems}}
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
n value & k value & $\Lambda$ theoretical & $\lambda$ numerical \\ \hline
4 & 0 & 1 & 1.0000 \\ \hline
& 1 & 0.6495 & 0.6495 \\ \hline
& 2 & 0.6124 & 0.6124 \\ \hline
& 3 & 0.6495 & 0.6495 \\ \hline
5 & 0 & 1 & 1.0000 \\ \hline
& 1 & 0.6400 & 0.6400 \\ \hline
& 2 & 0.5879 & 0.5879 \\ \hline
& 3 & 0.5879 & 0.5879 \\ \hline
& 4 & 0.6400 & 0.6400 \\ \hline
6 & 0 & 1 & 1.0000 \\ \hline
& 1 & 0.6339 & 0.6339 \\ \hline
& 2 & 0.5738 & 0.5738 \\ \hline
& 3 & 0.5590 & 0.5590 \\ \hline
& 4 & 0.5738 & 0.5738 \\ \hline
& 5 & 0.6339 & 0.6339 \\ \hline
\end{tabular}
Comparison between the theoretical value $\Lambda$ and the numerical value $\lambda$ obtained via tensor decomposition. The Alternating Least Squares method for the Tucker decomposition is used.
\end{center}
\end{table}
We test the overlaps for both methods up to 6-partite systems, i.e.\ 6-way tensors. Agreement is found.
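The closed-form overlap is easy to evaluate directly; the following short Python check (our addition) reproduces several Table-1 entries:

```python
import math

def overlap_Snk(n, k):
    # Lambda = sqrt(n!/(k!(n-k)!)) * (k/n)^(k/2) * ((n-k)/n)^((n-k)/2)
    if k == 0 or k == n:
        return 1.0
    return (math.sqrt(math.comb(n, k))
            * (k / n) ** (k / 2)
            * ((n - k) / n) ** ((n - k) / 2))

print(round(overlap_Snk(4, 2), 4))  # 0.6124
print(round(overlap_Snk(5, 1), 4))  # 0.64
print(round(overlap_Snk(6, 3), 4))  # 0.559
```

These match the corresponding rows of Table 1.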
\subsection{Geometric measure of entanglement for combinations of three-qubit W states}
Assuming we have a superposition of two W states
\begin{equation}
|\psi\rangle=\sqrt{s}|S(3,2)\rangle+\sqrt{1-s}e^{i\phi}|S(3,1)\rangle\\=
\sqrt{s}|W\rangle+\sqrt{1-s}e^{i\phi}|\widetilde{W}\rangle.
\end{equation}
We can gauge away the factor $\phi$ by a change of basis without affecting the entanglement. The geometric measure of entanglement of this state is given by\cite{wei2003geometric}
\begin{equation}
E=1-\Lambda^2.
\end{equation}
Here (note that there is a typo in Ref.\cite{wei2003geometric} for this equation)
\begin{equation}
\Lambda=\frac{\sqrt{3}}{2}[\sqrt{s}\cos\theta(s)+\sqrt{1-s}\sin\theta(s)]\sin 2\theta(s),
\end{equation}
where $t=\tan\theta$ is the real positive root of the equation
\begin{equation}
\sqrt{1-s}\,t^3+2\sqrt{s}\,t^2-2\sqrt{1-s}\,t-\sqrt{s}=0.
\end{equation}
Perfect agreement is found, see Figure \ref{fig:fig1}.
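The closed form can be checked numerically. The sketch below (our addition) solves the stationarity cubic $\sqrt{1-s}\,t^3+2\sqrt{s}\,t^2-2\sqrt{1-s}\,t-\sqrt{s}=0$, obtained by differentiating $\Lambda$ with respect to $\theta$, and evaluates $\Lambda(s)$:

```python
import numpy as np

def overlap_mix(s):
    # a = sqrt(s), b = sqrt(1-s); t = tan(theta) is the unique positive
    # root of the stationarity cubic  b t^3 + 2 a t^2 - 2 b t - a = 0,
    # obtained by setting dLambda/dtheta = 0.
    a, b = np.sqrt(s), np.sqrt(1.0 - s)
    roots = np.roots([b, 2.0 * a, -2.0 * b, -a])
    t = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    th = np.arctan(t)
    return (np.sqrt(3.0) / 2.0) * (a * np.cos(th)
                                   + b * np.sin(th)) * np.sin(2.0 * th)

for s in (0.0, 0.5, 1.0):
    print(round(float(overlap_mix(s)), 4))
```

At $s=0$ and $s=1$ this reproduces the pure W-state value $\Lambda=2/3$, and at $s=1/2$ it gives $\Lambda=\sqrt{3}/2$, matching the curve in Figure \ref{fig:fig1}.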
\begin{figure}
\caption{Entanglement as a function of s using tensor decomposition.}
\label{fig:fig1}
\end{figure}
For a general complex wavefunction
\begin{equation}
|\psi\rangle=\sqrt{s}|W\rangle+\sqrt{1-s}e^{i\phi}|\widetilde{W}\rangle,
\end{equation}
the tensor decomposition method can indeed capture the complex factor $e^{i\phi}$ and reflects the fact that this factor does not affect the entanglement; see Figure \ref{fig:fig2}.
\begin{figure}
\caption{Decomposition for complex tensors. Entanglement for two parameters using tensor decomposition, entanglement is not affected by $\phi$.}
\label{fig:fig2}
\end{figure}
\subsection{Geometric measure of entanglement for $d$-level system (qudits)}
Up to now, the indices of our tensor have had a range of two, which corresponds to a qubit system. We can obviously use a tensor whose indices have a larger range, which corresponds to a $d$-level system.
For example, consider a symmetric state with $n$ parties; for simplicity we assume that one party is in state $|d-1\rangle$, the other parties are all in state $|0\rangle$, and our wavefunction is the symmetric sum of all such possible states.
\begin{equation}
|S(n,d)\rangle=\sqrt{\frac{(n-1)!}{n!}}\sum_{permutations}|0\cdots 0(d-1)\rangle.
\end{equation}
From Ref.\cite{wei2003geometric}, the overlap is given by
\begin{equation}
\Lambda=\sqrt{\frac{n!}{(n-1)!}}(\frac{1}{n})^{\frac{1}{2}}(\frac{n-1}{n})^{\frac{n-1}{2}},
\end{equation}
which is independent of $d$.
In Table 2, we use $\Lambda$ to denote the theoretical results and $\lambda$ to denote the numerical ones.
\begin{table}
\caption {\textbf{Overlaps for n-partite qudit systems}}
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
n value & d value & $\Lambda$ theoretical & $\lambda$ numerical \\ \hline
4 & 4 & 0.6495 & 0.6495 \\ \hline
& 10 & & 0.6495 \\ \hline
& 100 & & 0.6495 \\ \hline
& 200 & & 0.6495 \\ \hline
5 & 5 & 0.6400 & 0.6400 \\ \hline
& 10 & & 0.6400 \\ \hline
& 20 & & 0.6400 \\ \hline
& 50 & & 0.6400 \\ \hline
6 & 6 & 0.6339
& 0.6339
\\ \hline
& 10 & & 0.6339
\\ \hline
& 20 & & 0.6339
\\ \hline
\end{tabular}
Comparison between the theoretical value $\Lambda$ and the numerical values $\lambda$ obtained via tensor decomposition for $n$-partite qudit states.
\end{center}
\end{table}
We tested the overlaps of qudit systems up to 6-partite systems, i.e.\ 6-way tensors. Each index is tested up to a dimension between 50 and 200; for example, a 5-way tensor with index dimension 50 has $50^5$ entries, which approaches the restriction of our computational power. Agreement is found. Our results show that the tensor decomposition method is capable of dealing with qudit systems.
\subsection{Hierarchies of Geometric measure of entanglement}
In general, the hierarchy of the geometric measure of entanglement refers to the structure of the distances from a quantum state to the $K$-separable states. For example, for a general pure state, some parts of the system may be entangled while the wavefunction can still be written as a direct product of larger parts. A detailed discussion of these hierarchies can be found in Ref.\ \cite{blasone2008hierarchies}.
We point out that this hierarchical structure of entanglement is quite natural to understand in the context of tensor theory. For a general tensor $T_{ij\cdots k}$, we can combine the first two indices, $T_{(ij)\cdots k}$, and write $ij$ as a single index $l$, which means that we treat the two parties as one. To calculate the entanglement for this partition, we simply find the entanglement of the tensor $T_{l\cdots k}$. It is easy to see that different partitions are equivalent to different ways of combining tensor indices. Therefore, it is natural to understand the hierarchies in the language of tensors.
For example, suppose we have a quantum state of three parties that can be written as a three-index tensor $T_{2,3,4}$, i.e., the dimensions of the Hilbert spaces of the parties are 2, 3, and 4, respectively. Naturally, we can consider a 2-separable state where one party has a Hilbert space dimension of 6 and the other has dimension 4. In the language of tensors, for $T_{2,3,4}$ we can rewrite the first two labels as one single label with a dimension of $2\times 3=6$, simply by rewriting each $2\times 3$ matrix as a 6-dimensional vector. Therefore, by combining two indices we get a new tensor $T_{6,4}$. Although the two tensors are in one-to-one correspondence, the tensor decomposition structure has changed; therefore, we can calculate the two-party geometric entanglements through different ways of combining indices.
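In NumPy, combining indices is literally a `reshape`. As our own illustration, a random normalized state tensor of shape $2\times 3\times 4$ becomes a $6\times 4$ matrix, and the bipartite overlap for that partition is its largest singular value:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((2, 3, 4)) + 1j * rng.standard_normal((2, 3, 4))
T /= np.linalg.norm(T)               # normalized state tensor T_{2,3,4}

# merge parties 1 and 2: the 2x3x4 tensor becomes a 6x4 matrix T_{6,4}
M = T.reshape(6, 4)

# bipartite overlap for the partition (12|3) = largest singular value
lam = np.linalg.svd(M, compute_uv=False)[0]
E = 1.0 - lam ** 2
print(0.0 <= E < 1.0)   # True
```

The reshape preserves the coefficients (and the Frobenius norm) while changing the decomposition structure, exactly as described above.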
We now calculate the hierarchies for the 5-qubit W state and compare with the results in Ref.\ \cite{blasone2008hierarchies}.
\begin{equation}
|W\rangle=
\sqrt{\frac{1}{5}}(|00001\rangle+|00010\rangle+|00100\rangle+|01000\rangle+|10000\rangle).
\end{equation}
\begin{table}
\caption {\textbf{Hierarchies of the 5-qubit W state}}
\begin{center}
\begin{tabular}{| c | c | c | c | c |}
\hline
Partition & Tensor size & $\lambda$ numerical & E & E from\cite{blasone2008hierarchies} \\ \hline
1,4 & $2\times 16$ & 0.8944 & 0.2000 & 0.200 \\ \hline
2,3 & $4\times 8$ & 0.7745 & 0.4001 & 0.400 \\ \hline
1,1,3 & $2\times 2\times 8$ & 0.7745 & 0.4001 & 0.400 \\ \hline
1,2,2 & $2\times 4\times 4$ &0.6761 & 0.5429 & 0.543 \\ \hline
1,1,1,2 &$2\times2\times 2\times 4$ &0.6639 & 0.5592 & 0.559 \\ \hline
1,1,1,1,1 & $2\times2\times 2\times 2\times 2$ &0.6400 & 0.5904 & 0.590 \\ \hline
\end{tabular}
Comparison of hierarchies using the tensor decomposition method.
\end{center}
\end{table}
We found agreement between these results; see Table 3 for more details. The tensor decomposition method is capable of finding the hierarchical structure of a quantum state.
It can also be used to find the hierarchies of geometric entanglement for non-symmetric states.
For example,
\begin{equation}
|\psi_W^3\rangle=
N_3(\gamma_1|001\rangle+\gamma_2|010\rangle+\gamma_3|100\rangle).
\end{equation}
The theoretical value of the squared overlap is\cite{blasone2008hierarchies}
\begin{equation}
\Lambda^2(i|j,k)=
N_3^2\max[\gamma_i^2,\gamma_j^2+\gamma_k^2],
\end{equation}
where $i,j,k$ label the three parties.
With $\gamma_1=1$, $\gamma_2=2$, $\gamma_3=3$, the squared overlap obtained with the tensor decomposition method is $0.6428$, in perfect agreement with the theoretical value.
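This number can be checked directly: the relevant bipartition separates the qubit excited in the $\gamma_3$ term from the other two, and the squared overlap is the largest squared singular value of that flattening. A NumPy sketch (ours):

```python
import numpy as np

g = np.array([1.0, 2.0, 3.0])       # gamma_1, gamma_2, gamma_3
N3 = 1 / np.sqrt(np.sum(g**2))      # normalization, N3^2 = 1/14

# |psi_W^3> = N3 (g1|001> + g2|010> + g3|100>) as a (2, 2, 2) tensor.
psi = np.zeros((2, 2, 2))
psi[0, 0, 1] = N3 * g[0]
psi[0, 1, 0] = N3 * g[1]
psi[1, 0, 0] = N3 * g[2]

# Bipartition separating the first tensor index (the qubit excited in the
# gamma_3 term) from the other two: overlap^2 = largest singular value^2.
lam = np.linalg.svd(psi.reshape(2, 4), compute_uv=False)[0]
overlap_sq = lam**2                 # 9/14 = 0.6428...

# Matches N3^2 * max[gamma_3^2, gamma_1^2 + gamma_2^2] from the formula above.
theory = N3**2 * max(g[2]**2, g[0]**2 + g[1]**2)
```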
For another example,
\begin{equation}
|\psi_W^4\rangle=
N_4(\gamma_1|0001\rangle+\gamma_2|0010\rangle+\gamma_3|0100\rangle+\gamma_4|1000\rangle).
\end{equation}
The theoretical value of the squared overlap is\cite{blasone2008hierarchies}
\begin{equation}
\Lambda^2(i,j|k,l)=
N_4^2\max[\gamma_i^2+\gamma_j^2,\gamma_k^2+\gamma_l^2].
\end{equation}
With $\gamma_1=1$, $\gamma_2=2$, $\gamma_3=3$, $\gamma_4=4$, the squared overlap obtained with the tensor decomposition method is $0.8333$, also in perfect agreement with the theoretical value.
\section{\label{marker4} Searching for highly entangled states and maximally entangled states}
Deep understanding of highly entangled multiqubit states is important for quantum information processing. In this section, we discuss several maximally or highly entangled quantum states.
\subsection{Bounds on the geometric measure of entanglement}
By exploiting the correspondence between the geometric measure of entanglement and the best rank-one approximation, properties of the geometric measure of entanglement, such as upper bounds, can be obtained.
For example, consider a quantum state represented by a real tensor $T$. Assume there are $m$ parties, with local dimensions $2\leq n_1\leq n_2\leq \cdots \leq n_m$. Then the overlap in real space satisfies
\begin{equation}
\frac{1}{\sqrt{n_1 n_2 \cdots n_{m-1}}} <\lambda \leq 1.
\end{equation}
Therefore,
\begin{equation}
0\leq E<1- \frac{1}{n_1 n_2 \cdots n_{m-1}}.
\end{equation}
For all the states we tested, this bound holds. It is not clear whether or when the bound is tight. For the mathematical details, see Ref.~\cite{LiqunQi}.
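A sketch (ours, not the general argument of Ref.~\cite{LiqunQi}) of why such a lower bound on $\lambda$ holds in the smallest real case, $m=3$ and $n_1=n_2=n_3=2$, where it reads $\lambda > 1/\sqrt{n_1 n_2}=1/2$: peeling off one party with an SVD and then splitting the remaining pair with a second SVD yields an explicit product state whose overlap $\sigma_1\tau_1$ is at least $1/2$, since each normalized spectrum forces $\sigma_1^2\geq 1/2$ and $\tau_1^2\geq 1/2$.

```python
import numpy as np

def product_overlap_lower_bound(T):
    """A feasible rank-one overlap for a real unit-norm (2,2,2) tensor T:
    SVD off party 1, then SVD the leading right singular vector (reshaped
    to a 2x2 matrix) into the remaining two parties. Returns sigma1*tau1,
    the overlap of an explicit product state, hence a lower bound on lambda."""
    U, s, Vt = np.linalg.svd(T.reshape(2, 4))
    W = Vt[0].reshape(2, 2)              # unit-norm leading right vector
    tau = np.linalg.svd(W, compute_uv=False)[0]
    return s[0] * tau

rng = np.random.default_rng(1)
for _ in range(100):
    T = rng.standard_normal((2, 2, 2))
    T /= np.linalg.norm(T)
    # sigma1^2 >= 1/2 and tau1^2 >= 1/2, so the true lambda >= 1/2 as well.
    assert product_overlap_lower_bound(T) >= 0.5 - 1e-12
```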
\subsection{Maximally entangled four-qubit states}
The four-qubit Higuchi--Sudbery (HS) state is conjectured to be maximally entangled\cite{Higuchi2000213}.
We consider a one-parameter family of Higuchi--Sudbery states $|HS\rangle_t$, with $w=e^{it}$; the value $w=e^{\frac{2\pi i}{3}}$ recovers the previously discovered HS state.
\begin{equation}
|HS\rangle_t=
\sqrt{\frac{1}{6}}[|0011\rangle+|1100\rangle+ w(|1010\rangle+|0101\rangle)+w^2(|1001\rangle+|0110\rangle)].
\end{equation}
In Figure \ref{fig:fig3}, we show the evolution of the geometric entanglement as a function of $t$. As expected, $E$ has a maximum at $w=e^{\frac{2\pi i}{3}}$. We also notice that the state at $w=e^{\frac{\pi i}{3}}$ has the same entanglement as the $|HS\rangle$ state. We have therefore numerically discovered a few four-qubit states that are maximally entangled. However, we should point out that these states might be equivalent under local unitary transformations.
\begin{figure}
\caption{Entanglement of the HS family of states as a function of $t$, where $w=e^{it}$.}
\label{fig:fig3}
\end{figure}
We searched complex four-qubit states using Monte Carlo sampling with 100{,}000 samples. We did not find any four-qubit state with higher geometric entanglement; therefore, $|HS\rangle$ is likely the four-qubit state with the highest entanglement under the geometric measure.
\subsection{Highly entangled four-qubit states}
The $L$ state maximizes the average Tsallis $\alpha$-entropy of the partial traces for $\alpha>0$\cite{1742-6596-698-1-012003}. Surprisingly, we find that the corresponding family $|L\rangle_t$ has a constant geometric entanglement $E=0.6667$ as $w$ varies.
\begin{equation}
|L\rangle_t=
\sqrt{\frac{1}{12}}[(1+w)(|0000\rangle+|1111\rangle)+(1-w)(|0011\rangle+|1100\rangle)+w^2(|0101\rangle+|0110\rangle+|1001\rangle+|1010\rangle)].
\end{equation}
The $|BSSB_4\rangle$ state is found to be highly entangled with respect to a certain measure \cite{BSSB}.
\begin{equation}
|BSSB_4\rangle=
\sqrt{\frac{1}{8}}[|0110\rangle+|1011\rangle+i(|0010\rangle+|1111\rangle)+ (1+i)(|0101\rangle+|1000\rangle)].
\end{equation}
Our results show that it is a local minimum within the family of $|BSSB_4\rangle_t$ states defined below, at $w=i$, with $E_{BSSB_4}=0.7500$; see Figure \ref{fig:fig4}.
\begin{figure}
\caption{Entanglement of the BSSB family of states as a function of $x$, where $w=e^{ix}$.}
\label{fig:fig4}
\end{figure}
\begin{equation}
|BSSB_4\rangle_t=
\sqrt{\frac{1}{8}}[|0110\rangle+|1011\rangle+w(|0010\rangle+|1111\rangle)+ (1+w)(|0101\rangle+|1000\rangle)].
\end{equation}
In addition to the highly entangled states listed above, we provide a list of highly entangled four-qubit states found in our numerical search. States with integer coefficients are relatively easy to prepare in experiment. The following states all have the same entanglement as the $|BSSB_4\rangle$ state.
\begin{equation}
|\phi_{4,1}\rangle=
\frac{1}{2}(|0000\rangle+|1110\rangle+|0101\rangle+|1011\rangle).
\end{equation}
\begin{equation}
|\phi_{4,2}\rangle=
\frac{1}{2}(|1100\rangle+|0010\rangle+|0101\rangle+|1011\rangle).
\end{equation}
\begin{equation}
|\phi_{4,3}\rangle=
\frac{1}{2}(|1000\rangle+|0110\rangle+|0001\rangle+|1111\rangle).
\end{equation}
\begin{equation}
|\phi_{4,4}\rangle=
\frac{1}{2}(|0100\rangle+|0010\rangle+|1001\rangle+|1111\rangle).
\end{equation}
\begin{equation}
|\phi_{4,5}\rangle=
\frac{1}{2}(|0110\rangle+|1010\rangle+|0001\rangle+|1101\rangle).
\end{equation}
\begin{equation}
|\phi_{4,6}\rangle=
\frac{1}{2}(|0010\rangle+|1110\rangle+|0101\rangle+|1001\rangle).
\end{equation}
\begin{equation}
|\phi_{4,7}\rangle=
\frac{1}{2}(|0000\rangle+|1100\rangle+|0011\rangle+|1111\rangle).
\end{equation}
All of the states above have an overlap of $\lambda=0.5$ and a geometric entanglement of $E=0.75$.
All the $|\phi\rangle$ states in this paper were constructed and found using Monte Carlo sampling. We start with a multi-index tensor and randomly initialize each element to zero or one. In practice, the number of ones in each tensor is fixed within each Monte Carlo run, although different values are used in different runs. We then normalize each tensor and calculate its geometric entanglement. Over a large number of samples, the tensor with the largest geometric entanglement is recorded.
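The reported values can be checked independently. The sketch below (ours, in Python rather than the software used in the paper) computes the best rank-one overlap of $|\phi_{4,1}\rangle$ by alternating optimization over the four local vectors; the $(12|34)$ flattening has largest singular value exactly $0.5$, which upper-bounds $\lambda$, while the basis state $|0000\rangle$ attains $0.5$, so $\lambda=0.5$ (hence $E=0.75$) is certified.

```python
import numpy as np

rng = np.random.default_rng(2)

# |phi_{4,1}> = (|0000> + |1110> + |0101> + |1011>)/2 as a (2,2,2,2) tensor.
phi = np.zeros((2,) * 4)
for bits in ("0000", "1110", "0101", "1011"):
    phi[tuple(int(b) for b in bits)] = 0.5

def als_overlap(T, v0=None, iters=200):
    """Alternating optimization of |<T, a x b x c x d>| over real unit
    vectors: update one factor at a time by contracting out the others."""
    vs = ([v / np.linalg.norm(v) for v in rng.standard_normal((4, 2))]
          if v0 is None else [v.copy() for v in v0])
    idx = "abcd"
    for _ in range(iters):
        for k in range(4):
            others = [j for j in range(4) if j != k]
            sub = idx + "," + ",".join(idx[j] for j in others) + "->" + idx[k]
            w = np.einsum(sub, T, *[vs[j] for j in others])
            vs[k] = w / np.linalg.norm(w)
    return abs(np.einsum("abcd,a,b,c,d->", T, *vs))

best = max(als_overlap(phi) for _ in range(20))       # random restarts
e0 = np.array([1.0, 0.0])
best = max(best, als_overlap(phi, v0=[e0] * 4))       # |0000> start
upper = np.linalg.svd(phi.reshape(4, 4), compute_uv=False)[0]  # = 0.5
# best <= lambda <= upper and best >= 0.5, so lambda = 0.5 and E = 0.75.
```

The same iteration, with the appropriate number of factors, applies to the other $|\phi\rangle$ states.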
\subsection{Highly entangled five-qubit states}
The $|BSSB_5\rangle$ state is found to be a highly entangled five-qubit state\cite{BSSB}.
\begin{equation}
|BSSB_5\rangle=
\sqrt{\frac{1}{8}}(|00001\rangle-|00010\rangle+|01000\rangle-|01011\rangle +|10001\rangle+|10010\rangle+|11100\rangle+|11111\rangle).
\end{equation}
Its geometric entanglement is $E=0.7500$. Our search finds a new state $|\phi_{5,1}\rangle$, which is more entangled than
$|BSSB_5\rangle$ under the geometric measure of entanglement.
\begin{equation}
|\phi_{5,1}\rangle=
\sqrt{\frac{1}{6}}(|00000\rangle+|01100\rangle+|10010\rangle+|11001\rangle+|00111\rangle+|11111\rangle).
\end{equation}
For $|\phi_{5,1}\rangle$, the overlap is $\lambda=0.4329$ with entanglement $E=0.8126$.
\begin{equation}
|\phi_{5,2}\rangle=
\sqrt{\frac{1}{8}}(|11000\rangle+|01100\rangle+|10010\rangle+|10110\rangle +|00001\rangle+|01001\rangle+|00111\rangle+|11111\rangle).
\end{equation}
For $|\phi_{5,2}\rangle$, the overlap is $\lambda=0.500$ with entanglement $E=0.7500$, the same as for $|BSSB_5\rangle$.
\subsection{Highly entangled six- and seven-qubit states}
We provide two examples of six-qubit states.
\begin{equation}
|\phi_{6,1}\rangle=
\sqrt{\frac{1}{7}}(|100000\rangle+|011000\rangle+|011110\rangle+|101110\rangle+|101001\rangle+|110101\rangle+|000011\rangle).
\end{equation}
For $|\phi_{6,1}\rangle$, the overlap is $\lambda=0.3780$ with entanglement $E=0.8571$.
\begin{equation}
|\phi_{6,2}\rangle=
\sqrt{\frac{1}{8}}(|11000\rangle+|001100\rangle+|010110\rangle+|100110\rangle +|001001\rangle+|100101\rangle+|111101\rangle+|101011\rangle ).
\end{equation}
For $|\phi_{6,2}\rangle$, the overlap is $\lambda=0.3954$ with entanglement $E=0.8436$.
Notice that our six-qubit states are simpler than the state found in Ref.~\cite{1751-8121-40-44-018}.
For seven-qubit states, we find
\begin{multline}
|\phi_{7,1}\rangle=
\sqrt{\frac{1}{10}}(|0110000\rangle+|0011000\rangle+|1100100\rangle+|0001100\rangle+|1110010\rangle\\+|1001010\rangle+|1101001\rangle+|1010101\rangle+|0000011\rangle+|1111111\rangle).
\end{multline}
For $|\phi_{7,1}\rangle$, the overlap is $\lambda=0.3162$ with entanglement $E=0.9000$.
\begin{multline}
|\phi_{7,2}\rangle=
\sqrt{\frac{1}{11}}(|0110000\rangle+|0000100\rangle+|1100100\rangle+|1011100\rangle+|1001010\rangle\\+|0011110\rangle+|0101101\rangle+|1110011\rangle+|0000011\rangle+|0011011\rangle+|1010111\rangle).
\end{multline}
For $|\phi_{7,2}\rangle$, the overlap is $\lambda=0.3183$ with entanglement $E=0.8987$.
Note that the geometric entanglement of all the states in this section is invariant under local unitary transformations of each party. Therefore, other states with the same entanglement can be obtained by applying a rotation to each qubit.
\section{Discussions}
\subsection{Geometric measure of entanglement for many-body systems}
The geometric measure of entanglement defined above is for finite quantum states. For many-body systems, we can define a geometric entanglement per site, using the overlap between an entangled state and a direct product state over all sites. For a 1-D system, the ground state can be written as a Matrix Product State (MPS). Assuming translational symmetry, we can efficiently calculate the geometric entanglement per site from the local structure of the MPS representation. Remarkably, the geometric entanglement structure of a translationally symmetric many-body system is simpler than that of a generic finite state. For certain 1-D systems, analytical solutions exist. The details of this procedure can be found in Ref.~\cite{PhysRevA.81.062313}.
Recently, research has also been performed on 2-D systems. For a 2-D translationally invariant quantum many-body system, the ground state can be represented as an infinite Projected Entangled Pair State (iPEPS). Following the same procedure as in the 1-D case, the geometric entanglement per site can be calculated by contracting the tensor network representation of the overlap coefficient. The overlap is dominated by the largest singular value of the representation tensor treated as a matrix, which coincides with the overlap coefficient of the best rank-one approximation discussed in this paper. This overlap and the associated geometric entanglement can be used to detect phase transitions in 2-D many-body systems (see Ref.~\cite{PhysRevA.93.062341} for details). For a 2-D system, an iPEPS tensor can be represented as a matrix, so the calculation remains easy. Our method is potentially beneficial for tensor representations of 3-D or higher-dimensional systems, although a realistic tensor representation of a 3-D quantum system is beyond current computational power.
\subsection{Several comments}
A topic that we have not discussed in this article is the calculation of the geometric measure of entanglement for mixed states. It is known that the entanglement curve of a mixed state is the convex hull of that of the corresponding pure states. After numerically calculating the entanglement surface of the pure states, it should be straightforward to compute the convex hull using standard numerical methods.
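The lower-envelope step can be carried out with a standard monotone-chain scan over sampled points of the pure-state curve. A pure-Python sketch (ours, on toy data standing in for sampled $(\mathrm{parameter},E)$ pairs, not actual entanglement values):

```python
def lower_convex_hull(points):
    """Lower convex hull (convex envelope) of 2-D points, by Andrew's
    monotone chain: scan left to right, popping non-convex turns."""
    hull = []
    for p in sorted(points):
        while len(hull) >= 2:
            (ox, oy), (ax, ay) = hull[-2], hull[-1]
            # pop while the turn o -> a -> p is not strictly counterclockwise
            if (ax - ox) * (p[1] - oy) - (ay - oy) * (p[0] - ox) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Toy data: the point (1.0, 1.0) lies above the envelope and is dropped.
curve = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5), (3.0, 2.0)]
env = lower_convex_hull(curve)
```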
A subtle point that we should stress is that the tensor decomposition may become trapped in a numerical metastable state if the initial conditions are not properly set. Therefore, for a reliable calculation, great care should be given to the initial conditions to avoid erroneous results.
Tensor decomposition theory is currently still under development and therefore, some theoretical aspects of its properties are still unknown. It will be interesting if new developments of tensor decomposition theory shed some light on quantum theory and quantum information theory.
It will be interesting to explore the limitations of this method. It is known that computing the best rank-one approximation of a tensor is NP-hard\cite{hillar2013most}, as also proved in Ref.~\cite{huang2014computing}. Therefore, it is difficult to calculate the geometric measure of entanglement for a rather large quantum system. Our method is easy to implement and is based on existing code packages in standard computational software such as MATLAB. The convex hull (convex envelope) can also be constructed in MATLAB to represent the entanglement of mixed quantum states; see Ref.~\cite{wei2003geometric} for details.
\section{Conclusions}
In this article, we established the connection between tensor decomposition theory and the geometric measure of entanglement. We found agreement between theoretical and numerical results. Furthermore, we searched for and characterized several quantum states with high entanglement. We demonstrated that the tensor decomposition method is an efficient and accurate way to calculate the geometric measure of entanglement.
\end{document} |
\begin{document}
\author{David Joyner\thanks{Dept Math, US Naval Academy, Annapolis, MD, [email protected].}}
\title{A primer on computational group homology and
cohomology using {\tt GAP} and \SAGE\thanks{Dedicated to my friend and colleague
Tony Gaglione on the occasion of his $60^{th}$ birthday.}}
\maketitle
\vskip .4in
These are expanded lecture notes of a series of expository
talks surveying basic aspects of group
cohomology and homology. They were written for someone who has
had a first course in graduate algebra but no background in
cohomology. You should know the definition of a (left)
module over a (non-commutative) ring, what $\mathbb{Z}[G]$ is
(where $G$ is a group written multiplicatively and
$\mathbb{Z}$ denotes the integers), and some ring theory and group theory.
However, an attempt has been made to (a) keep the
presentation as simple as possible, and (b) provide
either an explicit reference or a proof of everything.
Several computer algebra packages are used to illustrate the
computations, though for various reasons we have focused on the
free, open source packages, such as {\tt GAP} \cite{Gap} and
\SAGE \cite{St} (which includes {\tt GAP}).
In particular, Graham Ellis generously allowed
extensive use of his HAP \cite{Ehap} documentation (which is sometimes copied
almost verbatim) in the presentation below.
Some interesting work not included in this (incomplete) survey
is (for example) that of Marcus Bishop \cite{Bi},
Jon Carlson \cite{C} (in MAGMA), David Green \cite{Gr}
(in C), Pierre Guillot \cite{Gu} (in GAP, C++, and \SAGE),
and Marc R\"oder \cite{Ro}.
Though Graham Ellis' {\tt HAP} package (and Marc R\"oder's add-on
{\tt HAPcryst} \cite{Ro}) can compute cohomology and homology of some
infinite groups, the computational examples given below
are for finite groups only.
\section{Introduction}
First, some words of motivation.
Let $G$ be a group and $A$ a $G$-module\footnote{We call
an abelian group $A$ (written additively) which is
a left $\mathbb{Z}[G]$-module a {\bf $G$-module}.
}.
\index{$G$-module}
Let $A^G$ denote the largest submodule of $A$ on
which $G$ acts trivially. Let us begin by asking ourselves
the following natural question.
{\bf Question}: Suppose $A$ is a submodule of a $G$-module
$B$ and $x$ is an arbitrary $G$-fixed element of
$B/A$. Is there an element $b$ in $B$, also
fixed by $G$, which maps onto $x$ under the quotient map?
The answer to this question can be formulated in terms of
group cohomology. (``Yes'', if $H^1(G,A)=0$.)
The details, given below, will help motivate
the introduction of group cohomology.
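In fact, the answer can be previewed at once: applying the functor $A\longmapsto A^G$ to the exact sequence $0\rightarrow A\rightarrow B\rightarrow B/A\rightarrow 0$ yields (as part of the long exact sequence recalled below) an exact sequence
\[
0 \rightarrow A^G \rightarrow B^G \rightarrow (B/A)^G \rightarrow H^1(G,A),
\]
so if $H^1(G,A)=0$ then $B^G\rightarrow (B/A)^G$ is surjective, and every $G$-fixed $x\in B/A$ lifts to a $G$-fixed $b\in B$.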
Let $A_G$ be the largest quotient module of $A$ on
which $G$ acts trivially. Next, we ask ourselves
the following analogous question.
{\bf Question}: Suppose $A$ is a submodule of a $G$-module
$B$ and $b$ is an arbitrary element of
$B_G$ which maps to $0$ under the natural map
$B_G\rightarrow (B/A)_G$. Is there an element $a$ in $A_G$
which maps onto $b$ under the inclusion map?
The answer to this question can be formulated in terms of
group homology. (``Yes'', if $H_1(G,A)=0$.)
The details, given below, will help motivate
the introduction of group homology.
Group cohomology arises as the right higher derived
functor for $A\longmapsto A^G$.
The {\bf cohomology groups of $G$ with coefficients in $A$}
are defined by
\index{cohomology groups of $G$ with coefficients in $A$}
\[
H^n(G,A)={\rm Ext}\, _{\mathbb{Z} [G]}^n(\mathbb{Z} ,A).
\]
(See \S \ref{sec:H^n} below for more details.)
These groups were first introduced in 1943 by S. Eilenberg and
S. MacLane \cite{EM}.
The functor $A\longmapsto A^G$ on the category of
left $G$-modules is additive and left exact.
This implies that if
\[
0 \rightarrow
A {\rightarrow} B {\rightarrow}
C {\rightarrow} 0
\]
is an exact sequence of $G$-modules
then we have a {\bf long exact sequence of cohomology}
\index{long exact sequence of cohomology}
\begin{equation}
\label{eqn:LESC}
\begin{array}{c}
0 \rightarrow
A^G {\rightarrow} B^G \rightarrow C^G \rightarrow
H^1(G,A) \rightarrow \\
H^1(G,B) \rightarrow
H^1(G,C) \rightarrow
H^2(G,A) \rightarrow \dots
\end{array}
\end{equation}
Similarly, group homology arises as the left higher derived
functor for $A\longmapsto A_G$.
The {\bf homology groups of $G$ with coefficients in $A$}
are defined by
\index{homology groups of $G$ with coefficients in $A$}
\[
H_n(G,A)={\rm Tor}\, _n^{\mathbb{Z} [G]} (\mathbb{Z} ,A).
\]
(See \S \ref{sec:H_n} below for more details.)
The functor $A\longmapsto A_G$ on the category of
left $G$-modules is additive and right exact.
This implies that if
\[
0 \rightarrow
A {\rightarrow} B {\rightarrow}
C {\rightarrow} 0
\]
is an exact sequence of $G$-modules
then we have a {\bf long exact sequence of homology}
\index{long exact sequence of homology}
\begin{equation}
\label{eqn:LESH}
\begin{array}{c}
\dots \rightarrow
H_2(G,C) \rightarrow
H_1(G,A) \rightarrow
H_1(G,B) \rightarrow \\
H_1(G,C) \rightarrow A_G \rightarrow
B_G \rightarrow C_G \rightarrow 0.
\end{array}
\end{equation}
Here we will define both
cohomology $H^n(G,A)$ and homology $H_n(G,A)$
using projective resolutions
and the higher derived functors
${\rm Ext}\, ^n$ and ${\rm Tor}\, _n$. We ``compute'' these
when $G$ is a finite cyclic group. We also give various
functorial properties,
such as corestriction, inflation, restriction, and transfer.
Since some of these cohomology
groups can be computed with the help of computer algebra systems,
we also include some discussion of how to use computers
to compute them.
We include several applications to
group theory.
One can also define $H^1(G,A)$, $H^2(G,A)$, \dots , by
explicitly constructing cocycles and coboundaries.
Similarly, one can also define $H_1(G,A)$, $H_2(G,A)$, \dots , by
explicitly constructing cycles and boundaries.
For the proof that these constructions yield the
same groups, see Rotman \cite{R}, chapter 10.
For the general outline, we follow \S 7 in chapter 10
of \cite{R} on homology. For some details, we follow
Brown \cite{B}, Serre \cite{S} or Weiss \cite{W}.
For a recent expository account of
this topic, see for example Adem \cite{A}.
Another good reference is Brown \cite{B}.
\section{Differential groups}
In this section cohomology and homology are viewed in
the same framework. This ``differential groups''
idea was introduced by Cartan
and Eilenberg \cite{CE}, chapter IV, and developed in
R. Godement \cite{G}, chapitre 1, \S 2. However, we shall follow
Weiss \cite{W}, chapter 1.
\subsection{Definitions}
A {\bf differential group} is a pair $(L,d)$, $L$ an abelian
group and $d:L\rightarrow L$ a homomorphism such that $d^2=0$.
We call $d$ a {\bf differential operator}.
\index{differential operator}
\index{differential group}
The group
\[
H(L)={\rm Kernel}\, (d)/{\rm Image}\, (d)
\]
is the {\bf derived group} of $(L,d)$. If
\index{derived group}
\[
L=\oplus_{n=-\infty}^\infty L_n
\]
then we call $L$ {\bf graded}. Suppose $d$
(more precisely, $d|_{L_n}$) satisfies, in addition,
for some fixed $r\not= 0$,
\[
d:L_n\rightarrow L_{n+r},\ \ \ \ n\in\mathbb{Z}.
\]
We say $d$ is {\bf compatible} with the grading provided
$r=\pm 1$. In this case, we call $(L,d,r)$ a {\bf graded differential group}. As we shall see,
\index{graded differential group}
the case $r=1$ corresponds to cohomology and
the case $r=-1$ corresponds to homology.
Indeed, if $r=1$ then we call $(L,d,r)$ a
(differential) {\bf group of cohomology type}
\index{differential group of cohomology type}
and if $r=-1$ then we call $(L,d,r)$
a {\bf group of homology type}.
\index{differential group of homology type}
Note that if $L=\oplus_{n=-\infty}^\infty L_n$ is a group of
cohomology type then $L'=\oplus_{n=-\infty}^\infty L'_n$
is a group of homology type, where $L'_n=L_{-n}$,
for all $n\in\mathbb{Z}$.
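A small example (ours, for orientation): take $L=\mathbb{Z}\oplus \mathbb{Z}$ with $d(x,y)=(2y,0)$. Then $d^2=0$,
\[
{\rm Kernel}\, (d)=\mathbb{Z}\oplus 0,\ \ \ \ {\rm Image}\, (d)=2\mathbb{Z}\oplus 0,
\]
so the derived group is $H(L)\cong \mathbb{Z}/2\mathbb{Z}$; it measures the failure of $L\stackrel{d}{\rightarrow} L \stackrel{d}{\rightarrow} L$ to be exact.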
\vskip .7in
{\bf For the impatient}:
For {\it cohomology}, we shall eventually take
$L=\oplus_n {\rm Hom}_G(X_n,A)$, where the $X_n$ form a
chain complex (with $+1$ grading) determined by a certain type of
resolution. The group $H(L)$ is an abbreviation for
$\oplus_n {\rm Ext}\, _{\mathbb{Z} [G]}^n(\mathbb{Z},A)$.
For {\it homology}, we shall eventually take
$L=\oplus_n \mathbb{Z} \otimes_{\mathbb{Z} [G]}X_n$, where the $X_n$ form a
chain complex (with $-1$ grading) determined by a certain type of
resolution. The group $H(L)$ is an abbreviation for
$\oplus_n {\rm Tor}\, ^{\mathbb{Z} [G]}_n(\mathbb{Z},A)$.
\vskip .7in
Let $(L,d)=(L,d_L)$ and $(M,d)=(M,d_M)$ be differential
groups (to be more precise, we should use different symbols
for the differential operators of $L$ and $M$ but, for notational
simplicity, we use the same symbol and hope the context
removes any ambiguity).
A homomorphism $f:L\rightarrow M$ satisfying
$d\circ f=f\circ d$ will be called {\bf admissible}.
\index{admissible}
For any $n\in \mathbb{Z}$, we define $nf:L\rightarrow M$
by $(nf)(x)=n\cdot f(x)=f(x)+\dots +f(x)$ ($n$ times).
If $f$ is admissible then so is $nf$, for any $n\in \mathbb{Z}$.
An admissible map $f$ gives rise to a map of
derived groups: define the map $f_*:H(L)\rightarrow H(M)$, by
$f_*(x +dL)=f(x)+dM$, for all $x\in L$.
\subsection{Properties}
\label{sec:properties}
Let $f$ be an admissible map as above.
\begin{enumerate}
\item
The map $f_*:H(L)\rightarrow H(M)$ is a homomorphism.
\item
If $f:L\rightarrow M$ and $g:L\rightarrow M$ are admissible, then so
is $f+g$ and we have $(f+g)_*=f_*+g_*$.
\item
If $f:L\rightarrow M$ and $g:M\rightarrow N$ are admissible then
so is $g\circ f:L\rightarrow N$ and we have
$(g\circ f)_*=g_*\circ f_*$.
\item
If
\begin{equation}
\label{eqn:LMN0}
0 \rightarrow L \stackrel{i}{\rightarrow}
M \stackrel{j}{\rightarrow} N \rightarrow 0
\end{equation}
is an exact sequence of differential groups with
admissible maps $i,j$ then there is a
homomorphism $d_*:H(N)\rightarrow H(L)$
for which the following triangle is exact:
{\footnotesize{
\begin{equation}
\label{eqn:LMN1}
{\footnotesize{
\begin{picture}(200.00,130.00)(-60.00,0.00)
\thicklines
\put(-30.00,50.00){$H(N)$}
\put(5.00,65.00){\vector(1,1){50.00}}
\put(55.00,-10.00){\vector(-1,1){50.00}}
\put(60.00,120.00){$H(L)$}
\put(60,-30.00){$H(M)$}
\put(70.00,110.00){\vector(0,-1){115.00}}
\put(80.00,60.00){$i_*$}
\put(20.00,105.00){$d_*$}
\put(20.00,-5.00){$j_*$}
\end{picture}
}}
\end{equation}
}}
\vskip .7in
\noindent
This diagram\footnote{This is a special case of
Th\'eor\`eme 2.1.1 in \cite{G}.} encodes both the long exact sequence of
cohomology (\ref{eqn:LESC}) and the long exact sequence of homology
(\ref{eqn:LESH}).
Here is the construction of $d_*$:
Recall $H(N)={\rm Kernel}\, (d)/{\rm Image}\, (d)$, so any $x\in H(N)$ is represented by
an $n\in N$ with $dn=0$. Since $j$ is surjective,
there is an $m\in M$ such that $j(m)=n$. Since $j$ is admissible
and the sequence is exact,
$j(dm)=d(j(m))=dn=0$, so $dm \in {\rm Kernel}\, (j)={\rm Image}\, (i)$.
Therefore, there is an $\ell \in L$ such that
$dm=i(\ell)$.
Define $d_*(x)$ to be the class
of $\ell$ in $H(L)$, i.e., $d_*(x)=\ell + dL$.
Here's the verification that $d_*$ is well-defined:
We must show that if we defined instead
$d_*(x)=\ell' + dL$, some $\ell' \in L$, then
$\ell'-\ell\in dL$.
Pull back the above $n\in N$
with $dn=0$ to an $m\in M$ such that
$j(m)=n$. As above, there is an $\ell \in L$ such that
$dm=i(\ell)$.
Represent $x\in H(N)$ by an $n'\in N$, so $x=n'+dN$
and $dn'=0$. Pull back this $n'$
to an $m'\in M$ such that $j(m')=n'$. As above, there is an
$\ell' \in L$ such that $dm'=i(\ell')$.
We know $n'-n\in dN$, so $n'-n=dn''$, some $n''\in N$.
Let $j(m'')=n''$, some $m''\in M$, so
$j(m'-m-dm'')=n'-n-j(dm'')=n'-n-dj(m'')=n'-n-dn''=0$.
Since the sequence $L-M-N$ is exact,
this implies there is an $\ell_0\in L$ such that
$i(\ell_0)=m'-m-dm''$. But
$di(\ell_0)=i(d\ell_0)=dm'-dm=i(\ell')-i(\ell)=i(\ell'-\ell)$,
so $\ell'-\ell\in dL$.
\item
If $M=L\oplus N$ then $H(M)=H(L)\oplus H(N)$.
{\bf proof}:\
To avoid ambiguity, for the moment, let
$d_X$ denote the differential operator on $X$,
where $X\in \{L,M,N\}$.
In the notation of (\ref{eqn:LMN0}),
$j$ is projection and $i$ is inclusion.
Since both are admissible, we know that
$d_M|_L=d_L$ and $d_M|_N=d_N$.
Note that $H(X)\subset X$, for any differential group $X$,
so $H(M)=H(M)\cap L\oplus H(M)\cap N\subset H(L)\oplus H(N)$.
It follows from this that $d_*=0$. From
the exactness of the triangle (\ref{eqn:LMN1}),
it therefore follows that
this inclusion is an equality.
$\Box$
\item
Let $L$, $L'$, $M$, $M'$, $N$, $N'$ be differential groups.
If
{\footnotesize{
\begin{equation}
\label{eqn:LMN}
\begin{CD}
0 @>>> L @>i>> M @>j>> N @>>> 0\\
@. @VfVV @VgVV @VhVV @. \\
0 @>>> L' @>i'>> M' @>j'>> N' @>>> 0
\end{CD}
\end{equation}
}}
\vskip .7in
\noindent
is a commutative diagram of exact sequences with
$i,i',j,j',f,g,h$ all admissible then
\[
\begin{CD}
H(L) @>i_*>> H(M) \\
@Vf_*VV @Vg_*VV \\
H(L') @>i'_*>> H(M')
\end{CD}
\]
commutes,
\[
\begin{CD}
H(M) @>j_*>> H(N) \\
@Vg_*VV @Vh_*VV \\
H(M') @>j'_*>> H(N')
\end{CD}
\]
commutes, and
\[
\begin{CD}
H(N) @>d_*>> H(L) \\
@Vh_*VV @Vf_*VV \\
H(N') @>d_*>> H(L')
\end{CD}
\]
commutes.
This is a case of Theorem 1.1.3 in \cite{W} and of
Th\'eor\`eme 2.1.1 in \cite{G}.
The proofs that the first two squares commute are similar,
so we only verify one and leave the other to the reader.
By assumption, (\ref{eqn:LMN}) commutes and all the maps
are admissible. Representing $x\in H(M)$ by
$x=m+dM$, we have
\[
\begin{split}
h_*j_*(x)&=h_*(j(m)+dN)=hj(m)+dN'=j'g(m)+dN'\\
&=
j'_*(g(m)+dM')=j'_*g_*(m+dM)=j'_*g_*(x),
\end{split}
\]
as desired.
The proof that the last square commutes is a little
different than this, so we prove this too. Represent $x\in H(N)$ by
$x=n+dN$ with $dn=0$ and recall that $d_*(x)=\ell+dL$,
where $dm=i(\ell)$,
$\ell \in L$, where $j(m)=n$, for $m\in M$.
We have
\[
f_*d_*(x)=f_*(\ell+dL)=f(\ell)+dL'.
\]
On the other hand,
\[
d_*h_*(x)=d_*(h(n)+dN')=\ell'+dL',
\]
for some $\ell'\in L'$. Since $h(n)\in N'$,
by the commutativity of (\ref{eqn:LMN})
and the definition of $d_*$,
$\ell'\in L'$ is an element
such that $i'(\ell')=gi(\ell)$. Since $i'$ is
injective, this condition on $\ell'$ determines it
uniquely mod $dL'$. By the commutativity of (\ref{eqn:LMN}),
we may take $\ell'=f(\ell)$.
\item
Let $L$, $L'$, $M$, $M'$, $N$, $N'$ be differential
graded groups with grading $+1$ (i.e., of ``cohomology type'').
Suppose that we have a commutative diagram, with all
maps admissible and all rows exact as in
(\ref{eqn:LMN}). Then the following diagram is commutative and
has exact rows:
{\tiny{
\[
\begin{CD}
\dots @>>> H_{n-1}(N) @>d_*>> H_n(L) @>i_*>> H_n(M) @>j_*>> H_n(N) @>d_*>> H_{n+1}(L) @>>> \dots \\
@. @Vh_*VV @Vf_*VV @Vg_*VV @Vh_*VV @Vf_*VV @. \\
\dots @>>> H_{n-1}(N') @>d_*>> H_n(L') @>i'_*>> H_n(M') @>j'_*>> H_n(N') @>d_*>> H_{n+1}(L') @>>> \dots
\end{CD}
\]
}}
This is Proposition 1.1.4 in \cite{W}.
As pointed out there, it is an immediate consequence of the
properties, 1-6 above.
Compare this with Proposition 10.69 in \cite{R}.
\item
Let $L$, $L'$, $M$, $M'$, $N$, $N'$ be differential
graded groups with grading $-1$ (i.e., of ``homology type'').
Suppose that we have a commutative diagram, with all
maps admissible and all rows exact, as in
(\ref{eqn:LMN}). Then the following diagram is commutative and
has exact rows:
{\tiny{
\[
\begin{CD}
\dots @>>> H_{n+1}(N) @>d_*>> H_n(L) @>i_*>> H_n(M) @>j_*>> H_n(N) @>d_*>> H_{n-1}(L) @>>> \dots \\
@. @Vh_*VV @Vf_*VV @Vg_*VV @Vh_*VV @Vf_*VV @. \\
\dots @>>> H_{n+1}(N') @>d_*>> H_n(L') @>i'_*>> H_n(M') @>j'_*>> H_n(N') @>d_*>> H_{n-1}(L') @>>> \dots
\end{CD}
\]
}}
This is the analog of the previous property and is proven similarly.
Compare this with Proposition 10.58 in \cite{R}.
\item
Let $(L,d)$ be a differential
graded group with grading $r$.
If $d_n=d|_{L_n}$ then $d_{n+r}\circ d_n=0$ and
\begin{equation}
\label{eqn:d_n}
\dots \rightarrow
L_{n-r} \stackrel{d_{n-r}}{\rightarrow}
L_n \stackrel{d_{n}}{\rightarrow} L_{n+r}
\stackrel{d_{n+r}}{\rightarrow} L_{n+2r} \rightarrow \dots
\end{equation}
is a complex (successive compositions vanish), though it need not be exact: $H(L)$ measures the failure of exactness.
\item
If $\{L_n\ |\ n\in\mathbb{Z}\}$ is a sequence of abelian groups
with homomorphisms $d_n$ satisfying (\ref{eqn:d_n}) then
$(L,d)$ is a differential group, where $L=\oplus_n L_n$
and $d=\oplus_n d_n$.
\end{enumerate}
\subsection{Homology and cohomology}
When $r=1$, we call $L_n$ the {\bf group of $n$-cochains},
$Z_n=L_n\cap {\rm Kernel}\, (d_n)$ the group of {\bf $n$-cocycles},
and $B_n=L_n\cap d_{n-1}(L_{n-1})$ the group of {\bf $n$-coboundaries}.
We call $H_n(L) =Z_n/B_n$ the {\bf $n^{th}$ cohomology group}.
When $r=-1$, we call $L_n$ the {\bf group of $n$-chains},
$Z_n=L_n\cap {\rm Kernel}\, (d_n)$ the group of {\bf $n$-cycles},
and $B_n=L_n\cap d_{n+1}(L_{n+1})$ the group of {\bf $n$-boundaries}.
We call $H_n(L) =Z_n/B_n$ the {\bf $n^{th}$ homology group}.
\index{group of $n$-cochains}
\index{group of $n$-cocycles}
\index{group of $n$-cycles}
\index{group of $n$-chains}
\section{Complexes}
We introduce complexes in order to define explicit
differential groups which will then be used
to construct group (co)homology.
\subsection{Definitions}
\label{sec:complexes}
Let $R$ be a (not necessarily commutative) ring, for example $R=\mathbb{Z} [G]$.
We shall define a ``finite free, acyclic, augmented chain
complex'' of left $R$-modules.
A {\bf complex} (or chain complex or $R$-complex with a negative grading)
is a sequence of maps
\index{complex}
\begin{equation}
\label{eqn:del_n}
\dots \rightarrow
X_{n+1} \stackrel{\partial_{n+1}}{\rightarrow}
X_{n} \stackrel{\partial_{n}}{\rightarrow} X_{n-1}
\stackrel{\partial_{n-1}}{\rightarrow} X_{n-2} \rightarrow \dots
\end{equation}
for which $\partial_n\partial_{n+1}=0$, for all $n$.
If each $X_n$ is a free $R$-module with a finite basis over $R$
(so is $\cong R^k$, for some $k$) then the complex is called
{\bf finite free}.
If this sequence is exact then it is called an
{\bf acyclic complex}. The complex is {\bf augmented}
if there is a surjective $R$-module homomorphism
$\epsilon : X_0\rightarrow \mathbb{Z}$ and an injective
$R$-module homomorphism
$\mu : \mathbb{Z}\rightarrow X_{-1}$ such that
$\partial_0= \mu\circ \epsilon$, where (as usual) $\mathbb{Z}$ is
regarded as a trivial $R$-module.
\index{acyclic complex}
\index{augmented complex}
The {\bf standard diagram} for such an $R$-complex is
\index{standard diagram}
{\footnotesize{
\[
\begin{CD}
\dots @>>> X_2 @>\partial_2>> X_1 @>\partial_1>> X_0 @>\partial_0>> X_{-1} @>\partial_{-1}>> X_{-2} @>>> \dots \\
@. @. @. @V\epsilon VV @AA\mu A @. \\
@. @. @. \mathbb{Z} @= \mathbb{Z} @. \\
@. @. @. @VVV @AAA @. \\
@. @. @. 0 @. 0 @.
\end{CD}
\]
}}
Such an acyclic augmented complex can be broken up into the
{\bf positive part}
\[
\dots \rightarrow
X_{2} \stackrel{\partial_{2}}{\rightarrow}
X_{1} \stackrel{\partial_{1}}{\rightarrow} X_{0}
\stackrel{\epsilon}{\rightarrow} \mathbb{Z} \rightarrow 0,
\]
and the {\bf negative part}
\[
0 \rightarrow
\mathbb{Z} \stackrel{\mu}{\rightarrow}
X_{-1} \stackrel{\partial_{-1}}{\rightarrow} X_{-2}
\stackrel{\partial_{-2}}{\rightarrow} X_{-3} \rightarrow \dots \ .
\]
Conversely, given a positive part and a negative part,
they can be combined into a standard diagram by taking
$\partial_0=\mu\circ\epsilon$.
If $X$ is any left $R$-module, let $X^*={\rm Hom}_R(X,\mathbb{Z})$
be the dual $R$-module, where
$\mathbb{Z}$ is regarded as a trivial $R$-module.
Associated to any $f\in {\rm Hom}_R(X,Y)$ is the pull-back
$f^*\in {\rm Hom}_R(Y^*,X^*)$. (If $y^*\in Y^*$ then
define $f^*(y^*)$ to be $y^*\circ f:X\rightarrow \mathbb{Z}$.)
Since ``dualizing'' reverses the direction of the maps,
if you dualize the entire complex with a $-1$ grading,
you will get a complex with a $+1$ grading.
This is the {\bf dual complex}.
\index{dual complex}
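With respect to bases of the finite free modules, dualizing amounts to transposing each boundary matrix and reversing the order of the maps; since $(d_n d_{n+1})^T=d_{n+1}^T d_n^T$, the result is again a complex. A minimal Python sketch (the complex chosen is illustrative):

```python
def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def dualize(boundaries):
    """Reverse the list of boundary maps and transpose each matrix.
    Since (d_n d_{n+1})^T = d_{n+1}^T d_n^T, the output is again a
    complex, now with the grading running in the opposite direction."""
    return [transpose(A) for A in reversed(boundaries)]

# a small complex X_2 -> X_1 -> X_0 with d1 = (1 -1) and d2 = (1 1)^T
d1 = [[1, -1]]
d2 = [[1], [1]]
assert mat_mul(d1, d2) == [[0]]       # it is a complex ...
dual = dualize([d1, d2])
print(mat_mul(dual[0], dual[1]))      # ... and so is its dual
```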
When $R=\mathbb{Z} [G]$ then we call a
finite free, acyclic, augmented chain
complex of left $R$-modules, a {\bf $G$-resolution}.
\index{$G$-resolution}
The maps $\partial_i:X_i\rightarrow X_{i-1}$ are sometimes
called {\bf boundary maps}.
\index{boundary maps}
\begin{remark}
{\rm
Using the command {\tt BoundaryMap}
in the {\tt GAP} {\tt CRIME} package of Marcus Bishop, one can easily compute
the boundary maps of a cohomology object associated to a $G$-module.
However, $G$ must be a $p$-group.
}
\end{remark}
\begin{example}
\label{ex:hap1}
{\rm
We use the package {\tt HAP} \cite{Ehap} to illustrate some of these
concepts more concretely.
Let $G$ be a finite group, whose elements we have ordered
in some way: $G=\{g_1,...,g_m\}$.
Since a $G$-resolution $X_*$ determines a sequence of finitely
generated free $\mathbb{Z}[G]$-modules, to concretely describe $X_*$ we must
be able to concretely describe a finite free $\mathbb{Z}[G]$-module.
In order to represent a word $w$ in a free $\mathbb{Z}[G]$-module $M$ of rank $n$,
we use a list of integer pairs $w=[ [i_1,e_1], [i_2,e_2], ..., [i_k,e_k] ]$.
The nonzero integers $i_j$ lie in the range $\{-n,..., n\}$ and
correspond to the free $\mathbb{Z}[G]$-generators of $M$ and their
additive inverses. The integers $e_j$ are positive (but not necessarily
distinct) and index the group elements $g_{e_j}$.
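Purely as an illustration of this encoding (the helper below is our own, not part of {\tt HAP}), such a word list can be rendered readably in Python; the element ordering used is the one printed by {\tt R!.elts} in the session below:

```python
def word_to_string(word, elts):
    """Render a HAP-style word [[i1,e1], [i2,e2], ...] as a readable sum.
    `elts` is the ordered list of group elements g_1, g_2, ...; a
    negative generator index means the additive inverse of that
    generator, and e_j indexes into `elts` (1-based)."""
    terms = []
    for i, e in word:
        sign = "-" if i < 0 else "+"
        terms.append(f"{sign} f{abs(i)}*{elts[e - 1]}")
    return " ".join(terms).lstrip("+ ")

# the ordering printed by R!.elts in the session below
elts = ["()", "(2,3)", "(1,2)", "(1,2,3)", "(1,3,2)", "(1,3)"]
# the word [ [1,2], [-1,1] ] from R!.boundary(3,1)
print(word_to_string([[1, 2], [-1, 1]], elts))   # f1*(2,3) - f1*()
```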
Let's begin with a {\tt HAP} computation.
\vskip .2in
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label={\tt GAP}]
gap> LoadPackage("hap");
true
gap> G:=Group([(1,2,3),(1,2)]);;
gap> R:=ResolutionFiniteGroup(G, 4);;
\end{Verbatim}
\vskip .1in
\noindent
This computes the first $5$ terms of a $G$-resolution ($G=S_3$)
\[
X_4 \stackrel{\delta_4}{\rightarrow} X_3
\stackrel{\delta_3}{\rightarrow} X_2
\stackrel{\delta_2}{\rightarrow} X_1
\stackrel{\delta_1}{\rightarrow} X_0
\rightarrow \mathbb{Z} \rightarrow 0.
\]
The boundary maps $\delta_i$ are determined from
the {\tt boundary} component of the {\tt GAP} record {\tt R}.
This record has (among others) the following components:
\begin{itemize}
\item
{\tt R!.dimension(k)} -- the $\mathbb{Z}[G]$-rank of the module $X_k$,
\item
{\tt R!.boundary(k, j)} -- the image in $X_{k-1}$ of the $j$-th free
generator of $X_k$,
\item
{\tt R!.elts} -- the elements in $G$,
\item
{\tt R!.group} is the group in question.
\end{itemize}
Here is an illustration:
\vskip .2in
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label={\tt GAP}]
gap> R!.group;
Group([ (1,2), (1,2,3) ])
gap> R!.elts;
[ (), (2,3), (1,2), (1,2,3), (1,3,2), (1,3) ]
gap> R!.dimension(3);
4
gap> R!.boundary(3,1);
[ [ 1, 2 ], [ -1, 1 ] ]
gap> R!.boundary(3,2);
[ [ 2, 2 ], [ -2, 4 ] ]
gap> R!.boundary(3,3);
[ [ 3, 4 ], [ 1, 3 ], [ -3, 1 ], [ -1, 1 ] ]
gap> R!.boundary(3,4);
[ [ 2, 5 ], [ -3, 3 ], [ 2, 4 ], [ -1, 4 ], [ 2, 1 ], [ -3, 1 ] ]
\end{Verbatim}
\vskip .1in
\noindent
In other words, $X_3$ has rank $4$ as a $\mathbb{Z}[G]$-module, with free generators
$\{f_1, f_2, f_3, f_4\}$ say (we also write $f_1, f_2, f_3$ for the free
generators of $X_2$), and, reading off the pairs above,
\[
\delta_3(f_1) = f_1g_2 - f_1g_1,
\]
\[
\delta_3(f_2) = f_2g_2 - f_2g_4,
\]
\[
\delta_3(f_3) = f_3g_4 + f_1g_3 - f_3g_1 - f_1g_1,
\]
\[
\delta_3(f_4) = f_2g_5 - f_3g_3 + f_2g_4 - f_1g_4 + f_2g_1 - f_3g_1.
\]
Now, let us create another resolution and compute the
equivariant chain map between them.
Below is the complete {\tt GAP} session:
\vskip .2in
{\footnotesize{
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label={\tt GAP}]
gap> G1:=Group([(1,2,3),(1,2)]);
Group([ (1,2,3), (1,2) ])
gap> G2:=Group([(1,2,3),(2,3)]);
Group([ (1,2,3), (2,3) ])
gap> phi:=GroupHomomorphismByImages(G1,G2,[(1,2,3),(1,2)],[(1,2,3),(2,3)]);
[ (1,2,3), (1,2) ] -> [ (1,2,3), (2,3) ]
gap> R1:=ResolutionFiniteGroup(G1, 4);
Resolution of length 4 in characteristic 0 for Group([ (1,2), (1,2,3) ]) .
gap> R2:=ResolutionFiniteGroup(G2, 4);
Resolution of length 4 in characteristic 0 for Group([ (2,3), (1,2,3) ]) .
gap> ZP_map:=EquivariantChainMap(R1, R2, phi);
Equivariant Chain Map between resolutions of length 4 .
gap> map := TensorWithIntegers( ZP_map);
Chain Map between complexes of length 4 .
gap> Hphi := Homology( map, 3);
[ f1, f2, f3 ] -> [ f2, f2*f3, f1*f2^2 ]
gap> AbelianInvariants(Image(Hphi));
[ 2, 3 ]
gap>
gap> GroupHomology(G1,3);
[ 6 ]
gap> GroupHomology(G2,3);
[ 6 ]
\end{Verbatim}
}}
\vskip .1in
\noindent
In other words, $H(\phi)$ is an isomorphism (as it should be,
since $\phi$ is an isomorphism of groups, and homology is functorial
and independent of the resolution chosen).
}
\end{example}
\subsection{Constructions}
Let $R=\mathbb{Z}[G]$.
\subsubsection{Bar resolution}
\label{sec:bar_res}
This section follows \S 1.3 in \cite{W}.
Define a symbol $[.]$ and call it the {\bf empty cell}.
Let $X_0=R[.]$, so $X_0$ is a finite free (left) $R$-module
whose basis has only $1$ element.
For $n>0$, let $g_1,\dots ,g_n\in G$ and define an
{\bf $n$-cell} to be the symbol $[g_1,\dots ,g_n]$.
\index{cell}
Let
\[
X_n=\oplus_{(g_1,\dots ,g_n)\in G^n} R[g_1,\dots ,g_n],
\]
where the sum runs over all ordered $n$-tuples
in $G^n$.
Define the differential operators $d_n$ and the
augmentation $\epsilon$, as $G$-module maps, by
\[
\begin{split}
\epsilon(g[.])&=1,\ \ \ \ \ \ g\in G\\
d_1([g])&=g[.]-[.],\\
d_2([g_1,g_2])&=g_1[g_2]-[g_1g_2]+[g_1],\\
& \vdots \\
d_n([g_1,\dots ,g_n])&=g_1[g_2,\dots ,g_n]
+\sum_{i=1}^{n-1}(-1)^i[g_1,\dots ,g_{i-1},g_ig_{i+1},g_{i+2},\dots ,g_n]\\
& \ \ \ \ \ \ +(-1)^n[g_1,\dots ,g_{n-1}],
\end{split}
\]
for $n\geq 1$.
Note that the condition
$\epsilon(g[.])=1$ for all $g\in G$ is equivalent to
saying $\epsilon([.])=1$. This is because $\epsilon$ is a
$G$-module homomorphism and $\mathbb{Z}$ is a trivial $G$-module,
so $\epsilon(g[.])=g\epsilon([.])=g\cdot 1=1$, where the
(trivial) $G$-action
on $\mathbb{Z}$ is denoted by a $\cdot$.
The $X_n$ are finite free $G$-modules, with the set of all
$n$-cells serving as a basis.
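The formula for $d_n$ can be checked by machine. The following Python sketch (illustrative, with $G=S_3$; the encoding of chains as dictionaries is our own convention) implements $d_n$ on formal $\mathbb{Z}[G]$-linear combinations of cells and verifies $d_{n-1}d_n=0$ on all $2$-cells and $3$-cells:

```python
from itertools import permutations, product

# Elements of S3 as tuples of images of (0,1,2); mul is composition.
def mul(a, b):
    return tuple(a[b[i]] for i in range(len(b)))

G = list(permutations(range(3)))
e = tuple(range(3))

def d(chain):
    """Bar-resolution differential on a formal Z[G]-combination of
    cells.  A chain is a dict sending (g, (g_1,...,g_n)) to an integer
    coefficient, representing the sum of the terms g[g_1,...,g_n].
    Defined here on cells of length n >= 1."""
    out = {}
    def add(key, c):
        out[key] = out.get(key, 0) + c
        if out[key] == 0:
            del out[key]
    for (g, cell), c in chain.items():
        n = len(cell)
        add((mul(g, cell[0]), cell[1:]), c)        # g.g_1 [g_2,...,g_n]
        for i in range(1, n):                      # (-1)^i g [...,g_i g_{i+1},...]
            merged = cell[:i-1] + (mul(cell[i-1], cell[i]),) + cell[i+1:]
            add((g, merged), c * (-1) ** i)
        add((g, cell[:-1]), c * (-1) ** n)         # (-1)^n g [g_1,...,g_{n-1}]
    return out

# d_{n-1} d_n = 0 on every basis 2-cell and 3-cell
ok = all(d(d({(e, cell): 1})) == {}
         for n in (2, 3) for cell in product(G, repeat=n))
print(ok)
```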
\begin{proposition}
\label{prop:bar}
With these definitions, the sequence
\[
\dots \rightarrow
X_{2} \stackrel{d_{2}}{\rightarrow}
X_{1} \stackrel{d_{1}}{\rightarrow} X_{0}
\stackrel{\epsilon}{\rightarrow} \mathbb{Z} \rightarrow 0,
\]
is a free $G$-resolution.
\end{proposition}
Sometimes this resolution is called the
{\bf bar resolution}\footnote{This resolution is not
the same as the resolution computed by {\tt HAP}
in Example \ref{ex:hap1}. For details on the resolution used
by {\tt HAP}, please see Ellis \cite{E2}.}.
\index{bar resolution}
There are two other resolutions we shall consider.
One is the closely related ``homogeneous resolution''
and the other is the ``normalized bar resolution''.
This simple-looking proposition is not so simple
to prove. First, we shall show it is a complex, i.e.,
$d^2=0$. Then, and this is
the most non-trivial part of the proof, we show that
the sequence is exact.
First, we need some definitions and a lemma.
Let $L$ and $M$ be complexes and let $f:L\rightarrow M$ and
$g:L\rightarrow M$ be chain maps (graded maps commuting with the differentials).
We say $f$ is {\bf homotopic} to $g$ if there is
a homomorphism $D:L\rightarrow M$,
called a {\bf homotopy}, such that
\index{homotopy}
\begin{itemize}
\item
$D_n=D|_{L_n}:L_n\rightarrow M_{n+1}$,
\item
$f-g=Dd+dD$.
\end{itemize}
If $L=M$ and
the identity map $1:L\rightarrow L$ is homotopic to
the zero map $0:L\rightarrow L$ then the homotopy
is called a {\bf contracting homotopy for $L$}.
\index{contracting homotopy}
\begin{lemma}
If $L$ has a contracting homotopy then $H(L)=0$.
\end{lemma}
{\bf proof}:\
Represent $x\in H(L)$ by $\ell\in L$ with $d\ell =0$.
But $\ell=1(\ell)-0(\ell)=dD(\ell)+Dd(\ell)=dD(\ell)$.
Since $D:L\rightarrow L$, this shows $\ell\in dL$,
so $x=0$ in $H(L)$.
$\Box$
Next, we construct a contracting homotopy for the
complex $X_*$ in Proposition \ref{prop:bar}
with differential operator $d$.
Actually, we shall {\it temporarily}
let $X_{-1}=\mathbb{Z}$, $X_{-n}=0$
and $d_{-n}=0$ for $n>1$,
so that the complex is infinite in both
directions. We must define $D:X\rightarrow X$
such that
\begin{itemize}
\item
$D_{-1}=D|_{\mathbb{Z}}:\mathbb{Z}\rightarrow X_0$,
\item
$D_{n}=D|_{X_n}:X_n\rightarrow X_{n+1}$,
\item
$\epsilon D_{-1}=1$ on $\mathbb{Z}$,
\item
$d_1D_0+D_{-1}\epsilon =1$ on $X_0$,
\item
$d_{n+1}D_n+D_{n-1}d_n=1$ on $X_n$, for $n\geq 1$.
\end{itemize}
Define
\[
\begin{split}
D_{-n}&=0,\ \ \ \ \ \ n>1,\\
D_{-1}(1)&=[.],\\
D_0(g[.])&=[g],\\
D_n(g[g_1,\dots ,g_n])&=[g,g_1,\dots ,g_n],\ \ \ \ \ \ n>0,
\end{split}
\]
and extend to a $\mathbb{Z}$-basis linearly.
Now we must verify the desired properties.
By definition, for $m\in \mathbb{Z}$,
$\epsilon D_{-1}(m)=\epsilon (m[.])=m\epsilon ([.])=m$.
Therefore, $\epsilon D_{-1}$ is the identity map on
$\mathbb{Z}$.
Similarly,
\[
\begin{split}
(d_1 D_0+D_{-1}\epsilon )(g[.])=
d_1 ([g])+D_{-1}(1) \\
=g[.]-[.]+D_{-1}(1)=g[.]-[.]+[.]=g[.].
\end{split}
\]
For the last property, we compute
\[
\begin{split}
d_{n+1} D_n(g[g_1,\dots ,g_n])
&=d_{n+1} ([g,g_1,\dots ,g_n])\\
&=g[g_1,\dots ,g_n]-[gg_1,\dots ,g_n]\\
& \ \ \ \ \ \
+\sum_{i=1}^{n-1}(-1)^{i-1}[g,g_1,\dots ,
g_{i-1},g_ig_{i+1},g_{i+2},\dots ,g_n]\\
& \ \ \ \ \ \ +(-1)^{n+1}[g,g_1,\dots ,g_{n-1}],
\end{split}
\]
and
\[
\begin{split}
D_{n-1}d_{n} (g[g_1,\dots ,g_n])
&=D_{n-1}(gd_{n} ([g_1,\dots ,g_n]))\\
&=D_{n-1}(gg_1[g_2,\dots ,g_n]\\
& \ \ \ \ \ \
+\sum_{i=1}^{n-1}(-1)^ig[g_1,\dots ,g_{i-1},g_ig_{i+1},g_{i+2},\dots ,g_n]\\
& \ \ \ \ \ \ +(-1)^{n}g[g_1,\dots ,g_{n-1}])\\
&=[gg_1,g_2,\dots ,g_n]\\
& \ \ \ \ \ \
+\sum_{i=1}^{n-1}(-1)^i[g,g_1,\dots ,g_{i-1},g_ig_{i+1},g_{i+2},\dots ,g_n]\\
& \ \ \ \ \ \ +(-1)^{n}[g,g_1,\dots ,g_{n-1}].
\end{split}
\]
All the terms but one cancel, verifying that
$d_{n+1}D_n+D_{n-1}d_n=1$ on $X_n$, for $n\geq 1$.
Now we show $d^2=0$. One verifies $d_1d_2=0$ directly
(which is left to the reader). Multiply $d_kD_{k-1}+D_{k-2}d_{k-1}=1$
on the right by $d_k$ and $d_{k+1}D_{k}+D_{k-1}d_{k}=1$
on the left by $d_k$:
\[
d_kD_{k-1}d_k + D_{k-2}d_{k-1}d_k=d_k=
d_kd_{k+1}D_{k}+d_kD_{k-1}d_{k}.
\]
Cancelling like terms, the induction hypothesis
$d_{k-1}d_k=0$ implies $d_{k}d_{k+1}=0$. This
shows $d^2=0$, so the sequence in
Proposition \ref{prop:bar} is a complex; its exactness now follows
from the lemma, since $D$ is a contracting homotopy.
This completes the proof of Proposition \ref{prop:bar}.
$\Box$
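The homotopy identities in this proof can also be verified by machine. The self-contained Python sketch below (illustrative, with $G=S_3$; chains are encoded as dictionaries sending $(g,[g_1,\dots,g_n])$ to an integer coefficient) implements $d$ and $D$ and checks $d_{n+1}D_n+D_{n-1}d_n=1$ on basis elements for $n=1,2$:

```python
from itertools import permutations

def mul(a, b):
    """Compose permutations given as tuples of images."""
    return tuple(a[b[i]] for i in range(len(b)))

G = list(permutations(range(3)))
e = tuple(range(3))

def add_to(out, key, c):
    out[key] = out.get(key, 0) + c
    if out[key] == 0:
        del out[key]

def d(chain):
    """Bar-resolution differential (on cells of length n >= 1)."""
    out = {}
    for (g, cell), c in chain.items():
        n = len(cell)
        add_to(out, (mul(g, cell[0]), cell[1:]), c)
        for i in range(1, n):
            merged = cell[:i-1] + (mul(cell[i-1], cell[i]),) + cell[i+1:]
            add_to(out, (g, merged), c * (-1) ** i)
        add_to(out, (g, cell[:-1]), c * (-1) ** n)
    return out

def D(chain):
    """Contracting homotopy: D_n(g[g_1,...,g_n]) = [g,g_1,...,g_n]."""
    out = {}
    for (g, cell), c in chain.items():
        add_to(out, (e, (g,) + cell), c)
    return out

def identity_holds(g, cell):
    """Check (dD + Dd)(x) = x on the basis element x = g[cell]."""
    x = {(g, cell): 1}
    lhs = d(D(x))
    for key, c in D(d(x)).items():
        add_to(lhs, key, c)
    return lhs == x

ok = all(identity_holds(g, (h,)) and identity_holds(g, (h, k))
         for g in G for h in G for k in G)
print(ok)
```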
\vskip .1in
The above complex can be ``dualized'' in the sense of \S \ref{sec:complexes}.
This dualized complex is of the form
\[
0 \rightarrow
\mathbb{Z} \stackrel{\mu}{\rightarrow}
X_{-1} \stackrel{d_{-1}}{\rightarrow} X_{-2}
\stackrel{d_{-2}}{\rightarrow} X_{-3} \rightarrow \dots \ .
\]
The {\bf standard $G$-resolution} is obtained by splicing these
together.
\index{standard $G$-resolution}
\subsubsection{Normalized bar resolution}
Define the {\bf normalized cells} by
\[
[g_1,...,g_n]^*=
\left\{
\begin{array}{cc}
[g_1,...,g_n], &{\rm if \ all\ }g_i\not= 1,\\
0, & {\rm if \ some\ }g_i= 1.
\end{array}
\right.
\]
Let $X_0=R[.]$ and
\[
X_n=\oplus_{(g_1,\dots ,g_n)\in G^n} R[g_1,\dots ,g_n]^*,\ \ \ \ \
n\geq 1,
\]
where the sum runs over all ordered $n$-tuples
in $G^n$.
Define the differential operators $d_n$ and the augmentation
map exactly as for the bar resolution.
\begin{proposition}
\label{prop:nbar}
With these definitions, the sequence
\[
\dots \rightarrow
X_{2} \stackrel{d_{2}}{\rightarrow}
X_{1} \stackrel{d_{1}}{\rightarrow} X_{0}
\stackrel{\epsilon}{\rightarrow} \mathbb{Z} \rightarrow 0,
\]
is a free $G$-resolution.
\end{proposition}
Sometimes this resolution is called the
{\bf normalized bar resolution}.
\index{normalized bar resolution}
{\bf proof}:\
See Theorem 10.117 in \cite{R}. $\Box$
\subsubsection{Homogeneous resolution}
Let $X_0=R$, so $X_0$ is a finite free (left) $R$-module
whose basis has only $1$ element.
For $n>0$, let $X_n$ denote the $\mathbb{Z}$-module generated
by all $(n+1)$-tuples $(g_0,\dots ,g_n)$. Make $X_i$ into
a $G$-module by defining the action by
$g:X_n\rightarrow X_n$ by
\[
g:(g_0,...,g_n)\longmapsto (gg_0,\dots ,gg_n),\ \ \ \ \ g\in G.
\]
Define the differential operators $\partial_n$ and the
augmentation $\epsilon$, as $G$-module maps, by
\[
\begin{split}
\epsilon (g)&=1,\\
\partial_n(g_0,\dots ,g_n)&=\sum_{i=0}^{n}(-1)^i
(g_0,\dots ,g_{i-1},\hat{g}_i,g_{i+1},\dots ,g_n),
\end{split}
\]
for $n\geq 1$, where $\hat{g}_i$ indicates that the entry $g_i$ is omitted.
\begin{proposition}
\label{prop:homog}
With these definitions, the sequence
\[
\dots \rightarrow
X_{2} \stackrel{\partial_{2}}{\rightarrow}
X_{1} \stackrel{\partial_{1}}{\rightarrow} X_{0}
\stackrel{\epsilon}{\rightarrow} \mathbb{Z} \rightarrow 0,
\]
is a $G$-resolution.
\end{proposition}
Sometimes this resolution is called the
{\bf homogeneous resolution}.
\index{homogeneous resolution}
Of the three resolutions presented here, this
one is the most straightforward to deal with.
{\bf proof}:\
See Lemma 10.114, Proposition 10.115, and Proposition 10.116
in \cite{R}.
$\Box$
\section{Definition of $H^n(G,A)$}
\label{sec:H^n}
For convenience, we briefly recall the definition of ${\rm Ext}\, ^n$.
Let $A$ be a left $R$-module, where $R=\mathbb{Z} [G]$, and let
$(X_i)$ be a $G$-resolution of $\mathbb{Z}$. We define
\[
{\rm {\rm Ext}\, }^n_{\mathbb{Z} [G]}(\mathbb{Z},A)=
{\rm Kernel}\, (d_{n+1}^*)/{\rm Image}\, (d_n^*),
\]
where
\[
d_n^*:{\rm Hom}_G(X_{n-1},A)\rightarrow {\rm Hom}_G(X_n,A),
\]
is defined by sending $f:X_{n-1}\rightarrow A$ to
$fd_n:X_{n}\rightarrow A$.
It is known that this is, up to isomorphism, independent of
the resolution chosen.
Recall that ${\rm Ext}\, ^*_{\mathbb{Z} [G]}(\mathbb{Z},A)$
gives the right-derived functors of the left-exact
functor $A\longmapsto A^G={\rm Hom}_G(\mathbb{Z},A)$ from the category of
$G$-modules to the category of abelian groups.
We define
\begin{equation}
\label{eqn:H^ndef}
H^n(G,A)={\rm Ext}\, ^n_{\mathbb{Z} [G]}(\mathbb{Z},A).
\end{equation}
When we wish to emphasize the dependence on the
resolution chosen, we write $H^n(G,A,X_*)$.
For example, let $X_*$ denote the bar
resolution in \S \ref{sec:bar_res} above.
Call $C^n=C^n(G,A)={\rm Hom}_G(X_n,A)$ the
{\bf group of $n$-cochains of $G$ in $A$},
\index{group of $n$-cochains of $G$ in $A$}
$Z^n=Z^n(G,A)={\rm Kernel}\, (d_{n+1}^*)\subset C^n$ the group of {\bf $n$-cocycles},
\index{group of $n$-cocycles}
and $B^n=B^n(G,A)=
d_n^*(C^{n-1})$ the group of {\bf $n$-coboundaries}.
\index{group of $n$-coboundaries}
We call $H^n(G,A) =Z^n/B^n$ the {\bf $n^{th}$ cohomology group of $G$
in $A$}. This is an abelian group.
We can also define the cohomology group using some other resolution,
for example the normalized bar resolution or the homogeneous resolution.
If we wish to express the dependence on the resolution $X_*$ used, we
write $H^n(G,A,X_*)$. Later we shall see that, up to isomorphism,
this abelian group is independent of the resolution.
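As a concrete sanity check on this cochain description, one can compute $H^1$ by brute force for small cyclic groups. The Python sketch below (illustrative; trivial $G$-action throughout) enumerates the $1$-cochains of the bar resolution; since a trivial action makes every $0$-cochain have zero coboundary, the order of $H^1(\mathbb{Z}/m,\mathbb{Z}/m)$ is just the number of $1$-cocycles, namely $|{\rm Hom}(\mathbb{Z}/m,\mathbb{Z}/m)|=m$:

```python
from itertools import product

def h1_order(m):
    """Order of H^1(Z/m, Z/m) for the trivial action, by brute force
    over the 1-cochains of the bar resolution."""
    G = range(m)
    count = 0
    for vals in product(range(m), repeat=m):      # a 1-cochain f: G -> Z/m
        f = dict(zip(G, vals))
        # cocycle condition (trivial action): f(g+h) = f(g) + f(h) mod m
        if all((f[(g + h) % m] - f[g] - f[h]) % m == 0
               for g in G for h in G):
            count += 1
    # coboundaries of 0-cochains vanish for a trivial action, so
    # |H^1| = number of cocycles = |Hom(Z/m, Z/m)| = m
    return count

print([h1_order(m) for m in (2, 3, 4)])   # [2, 3, 4]
```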
The group $H_2(G,\mathbb{Z})$ (which is isomorphic to the algebraic dual group of
$H^2(G,\mathbb{C}^\times)$)
is sometimes called the {\bf Schur multiplier} of $G$. Here $\mathbb{C}$ denotes
the field of complex numbers.
\index{Schur multiplier}
We say that the group $G$ has {\bf cohomological dimension} $n$,
\index{cohomological dimension}
written $cd(G)=n$, if
$H^{n+1}(H,A)=0$ for all $G$-modules $A$ and all subgroups
$H$ of $G$, but $H^n(H,A)\not= 0$ for some such $A$ and $H$.
\begin{remark}
\begin{itemize}
\item
If $cd(G)<\infty$ then
$G$ is torsion-free\footnote{This follows from the
fact that if $G$ is a non-trivial finite cyclic group then
$H^{2n}(G,\mathbb{Z})\not= 0$ for all $n\geq 1$, as discussed below.}.
\item
If $G$ is a free abelian group of finite rank
then $cd(G)=rank(G)$.
\item
If $cd(G)=1$ then $G$ is free.
This is a result of Stallings and Swan (see for example
\cite{R}, page 885).
\end{itemize}
\end{remark}
\subsection{Computations}
We briefly discuss computer programs which compute cohomology and
some examples of known computations.
\subsubsection{Computer computations of cohomology}
{\tt GAP} \cite{Gap} can compute some cohomology groups\footnote{See
\S 37.22 of the {\tt GAP} manual, M. Bishop's
package {\tt CRIME} for cohomology of $p$-groups, G. Ellis'
package {\tt HAP} for group homology and cohomology of finite
or (certain) infinite groups, and M. R\"oder's {\tt HAPCryst} package
(an add-on to the {\tt HAP} package).
\SAGE \cite{St} computes cohomology via its {\tt GAP} interface.}.
All the \SAGE commands which compute group homology or
cohomology require that the package {\tt HAP} be loaded.
You can do this on the command line from the main \SAGE directory
by typing\footnote{This is
the current package name; change {\tt 4.4.10\_3} to whatever
the latest version is on
\url{http://www.sagemath.org/packages/optional/} at the time you
read this. Also, this command assumes you are using
\SAGE on a machine with an internet connection.}
\verb+sage -i gap_packages-4.4.10_3.spkg+
\begin{example}
{\rm
This example uses \SAGE, which wraps several of the {\tt HAP}
functions.
\vskip .2in
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label=\SAGE]
sage: G = AlternatingGroup(5)
sage: G.cohomology(1,7)
Trivial Abelian Group
sage: G.cohomology(2,7)
Trivial Abelian Group
\end{Verbatim}
\vskip .1in
\noindent
This implies $H^1(A_5,GF(7))=H^2(A_5,GF(7))=0$.
}
\end{example}
\subsubsection{Examples}
Some example computations of a more theoretical nature.
\begin{enumerate}
\item
$H^0(G,A)=A^G$.
This is by definition.
\item
Let $L/K$ denote a Galois extension with finite Galois
group $G$.
We have $H^1(G,L^\times)=1$.
This is often called Hilbert's Theorem 90.
See Theorem 1.5.4 in \cite{W} or Proposition 2 in \S X.1
of \cite{S}.
\item
Let $G$ be a finite cyclic group and $A$ a trivial
torsion-free $G$-module. Then $H^1(G,A)=0$.
This is a consequence of properties given in the next section.
\item
If $G$ is a finite cyclic group of order $m$
and $A$ is a trivial $G$-module then
\[
H^2(G,A)=A/mA.
\]
This is a consequence of properties given below.
For example, $H^2(GF(q)^\times,\mathbb{C})=0$.
\item
If $|G|=m$, $rA=0$ and $\gcd(r,m)=1$, then
$H^n(G,A)=0$, for all $n\geq 1$.
This is Corollary 3.1.7 in \cite{W}.
For example, $H^1(A_5,\mathbb{Z}/7\mathbb{Z})=0$.
\end{enumerate}
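The formula $H^2(G,A)=A/mA$ in item (4) can likewise be tested by brute force. The illustrative sketch below (trivial action throughout) counts the $2$-cocycles and $2$-coboundaries of the bar resolution for $G=\mathbb{Z}/m\mathbb{Z}$ and $A=\mathbb{Z}/k\mathbb{Z}$; here $A/mA$ has order $\gcd(m,k)$:

```python
from itertools import product
from math import gcd

def h2_order(m, k):
    """Order of H^2(Z/m, Z/k) for the trivial action, by brute force
    over the 2-cochains of the bar resolution."""
    G = range(m)
    pairs = [(g, h) for g in G for h in G]
    def is_cocycle(f):
        # (d f)(g,h,l) = f(h,l) - f(g+h,l) + f(g,h+l) - f(g,h) = 0
        return all((f[(h, l)] - f[((g + h) % m, l)]
                    + f[(g, (h + l) % m)] - f[(g, h)]) % k == 0
                   for g in G for h in G for l in G)
    n_cocycles = sum(1 for vals in product(range(k), repeat=len(pairs))
                     if is_cocycle(dict(zip(pairs, vals))))
    coboundaries = set()
    for vals in product(range(k), repeat=m):      # 1-cochains c: G -> Z/k
        c = dict(zip(G, vals))
        coboundaries.add(tuple((c[h] - c[(g + h) % m] + c[g]) % k
                               for (g, h) in pairs))
    return n_cocycles // len(coboundaries)

# each computed order of H^2(Z/m, Z/k) equals |A/mA| = gcd(m, k)
print([(m, k, h2_order(m, k), gcd(m, k))
       for (m, k) in [(2, 2), (2, 3), (3, 3)]])
```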
\section{Definition of $H_n(G,A)$}
\label{sec:H_n}
We say $A$ is {\bf projective} if the functor
\index{projective $R$-module}
$B\longmapsto {\rm Hom}_G(A,B)$ (from the category of $G$-modules
to the category of abelian groups) is exact.
Recall, if
\begin{equation}
\label{eqn:P_Z}
P_{\mathbb{Z}}=
\dots \rightarrow P_2
\stackrel{d_2}{\rightarrow}
P_1 \stackrel{d_1}{\rightarrow} P_0
\stackrel{\epsilon}{\rightarrow} \mathbb{Z} \rightarrow 0
\end{equation}
is a projective resolution of $\mathbb{Z}$ then
\[
{\rm Tor}\, _n^{\mathbb{Z} [G]}(\mathbb{Z},A)
={\rm Kernel}\, (d_n\otimes 1_A)/{\rm Image}\, (d_{n+1}\otimes 1_A).
\]
It is known that this is, up to isomorphism, independent of
the resolution chosen.
Recall that ${\rm Tor}\, _*^{\mathbb{Z} [G]}(\mathbb{Z},A)$
are the left-derived functors of the right-exact
functor $A\longmapsto A_G=\mathbb{Z}\otimes_{\mathbb{Z} [G]}A$
from the category of
$G$-modules to the category of abelian groups.
We define
\begin{equation}
\label{eqn:H_ndef}
H_n(G,A)={\rm Tor}\, _n^{\mathbb{Z} [G]}(\mathbb{Z},A).
\end{equation}
When we wish to emphasize the dependence on the
resolution, we write $H_n(G,A,P_\mathbb{Z})$.
\begin{remark}
{\rm
If $G$ is a $p$-group, then using the command {\tt ProjectiveResolution}
in {\tt GAP}'s {\tt CRIME} package, one can easily compute
the minimal projective resolution of a $G$-module, which can be either
trivial or given as a {\tt MeatAxe}\footnote{See for example
\url{http://www.math.rwth-aachen.de/~MTX/}.} module.
}
\end{remark}
Since we can identify the functor $A\longmapsto A_G$ with
$A\longmapsto A\otimes_{\mathbb{Z}[G]} \mathbb{Z}$ (where $\mathbb{Z}$ is considered as a trivial
$\mathbb{Z} [G]$-module), the following is another way to
formulate this definition.
If $\mathbb{Z}$ is considered as a trivial
$\mathbb{Z} [G]$-module, then a free $\mathbb{Z} [G]$-resolution of
$\mathbb{Z}$ is a sequence of $\mathbb{Z} [G]$-module homomorphisms
\[
... {\rightarrow} M_n {\rightarrow} M_{n-1}
{\rightarrow} ... {\rightarrow} M_1 {\rightarrow} M_0
\]
satisfying:
\begin{itemize}
\item
(Freeness) Each $M_n$ is a free $\mathbb{Z}[G]$-module.
\item
(Exactness) The image of $M_{n+1} {\rightarrow} M_n$ equals
the kernel of $M_n {\rightarrow} M_{n-1}$ for all $n>0$.
\item
(Augmentation) The cokernel of $M_1 {\rightarrow} M_0$ is
isomorphic to the trivial $\mathbb{Z}[G]$-module $\mathbb{Z}$.
\end{itemize}
The maps $M_n {\rightarrow} M_{n-1}$ are the boundary
homomorphisms of the resolution.
Setting $TM_n$ equal to the abelian group of coinvariants
$(M_n)_G=\mathbb{Z}\otimes_{\mathbb{Z}[G]}M_n$, obtained from
$M_n$ by killing the $G$-action, we get an induced sequence of
abelian group homomorphisms
\[
... {\rightarrow} TM_n {\rightarrow} TM_{n-1}
{\rightarrow} ... {\rightarrow} TM_1 {\rightarrow} TM_0.
\]
This sequence will generally not satisfy the above exactness
condition, and one defines the integral homology of $G$ to be
\[
H_n(G,\mathbb{Z}) = {\rm Kernel}\, (TM_n {\rightarrow} TM_{n-1}) /
{\rm Image}\, (TM_{n+1} {\rightarrow} TM_n)
\]
for all $n>0$.
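To see this coinvariant complex in action, here is an illustrative Python sketch (our own encoding, not from {\tt HAP}) carrying out the same construction with $\mathbb{Z}/p\mathbb{Z}$ coefficients for the bar resolution. With a trivial module, the induced $d_1$ is the zero map, so $\dim_{\mathbb{F}_p} H_1(G,\mathbb{Z}/p\mathbb{Z})=|G|-{\rm rank}(d_2 \bmod p)$; by universal coefficients this equals $\dim_{\mathbb{F}_p}\,(G/[G,G])\otimes\mathbb{Z}/p\mathbb{Z}$:

```python
from itertools import permutations, product

def mul(a, b):
    """Compose two permutations given as tuples of images."""
    return tuple(a[b[i]] for i in range(len(b)))

def h1_dim_mod_p(G, mul, p):
    """dim over F_p of H_1(G, Z/pZ), from the bar resolution with the
    G-action killed: the induced d_1 is zero (g[.] - [.] becomes 0),
    so dim H_1 = |G| - rank(d_2 mod p)."""
    idx = {g: i for i, g in enumerate(G)}
    rows = []
    for g, h in product(G, repeat=2):         # image of the 2-cell [g,h]
        row = [0] * len(G)
        row[idx[h]] += 1                      # [h]
        row[idx[mul(g, h)]] -= 1              # -[gh]
        row[idx[g]] += 1                      # [g]
        rows.append([x % p for x in row])
    # rank over F_p by Gauss-Jordan elimination
    m, rank = [r[:] for r in rows], 0
    for col in range(len(G)):
        for r in range(rank, len(m)):
            if m[r][col]:
                m[rank], m[r] = m[r], m[rank]
                inv = pow(m[rank][col], -1, p)   # modular inverse (Py 3.8+)
                m[rank] = [(x * inv) % p for x in m[rank]]
                for r2 in range(len(m)):
                    if r2 != rank and m[r2][col]:
                        f = m[r2][col]
                        m[r2] = [(x - f * y) % p
                                 for x, y in zip(m[r2], m[rank])]
                rank += 1
                break
    return len(G) - rank

S3 = list(permutations(range(3)))
# S3/[S3,S3] = Z/2, so the mod-2 dimension is 1 and the mod-3 dimension is 0
print(h1_dim_mod_p(S3, mul, 2), h1_dim_mod_p(S3, mul, 3))
```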
\subsection{Computations}
We briefly discuss computer programs which compute homology and
some examples of known computations.
\subsubsection{Computer computations of homology}
\begin{example}
{\rm
{\tt GAP} will compute the Schur multiplier
$H_2(G,\mathbb{Z})$ using the
\newline
{\tt AbelianInvariantsMultiplier} command.
To find
$H_2(A_5,\mathbb{Z})$, where $A_5$ is the alternating group on $5$ letters,
type
\vskip .2in
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label={\tt GAP}]
gap> A5:=AlternatingGroup(5);
Alt( [ 1 .. 5 ] )
gap> AbelianInvariantsMultiplier(A5);
[ 2 ]
\end{Verbatim}
\vskip .1in
\noindent
So, $H_2(A_5,\mathbb{Z})\cong \mathbb{Z}/2\mathbb{Z}$.
Here is the same computation in \SAGE:
\vskip .2in
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label=\SAGE]
sage: G = AlternatingGroup(5)
sage: G.homology(2)
Multiplicative Abelian Group isomorphic to C2
\end{Verbatim}
}
\end{example}
\begin{example}
{\rm
The \SAGE command {\tt poincare\_series}
returns the Poincar\'e series of the mod-$p$ homology of $G$
($p$ must be a prime).
In other words, if you input a (finite) permutation group $G$,
a prime $p$, and a positive integer $n$, then {\tt poincare\_series(G,p,n)}
returns a quotient of polynomials
$f(x)=P(x)/Q(x)$ whose coefficient of $x^k$ equals the dimension of
the vector space $H_k(G,\mathbb{Z}/p\mathbb{Z})$,
for all $k$ in the range $1\leq k \leq n$.
\vskip .2in
{\footnotesize{
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label=\SAGE]
sage: G = SymmetricGroup(5)
sage: G.poincare_series(2,10)
(x^2 + 1)/(x^4 - x^3 - x + 1)
sage: G = SymmetricGroup(3)
sage: G.poincare_series(2,10)
1/(-x + 1)
\end{Verbatim}
}}
\vskip .1in
\noindent
This last one implies
\[
\dim_{GF(2)}H_k(S_3,\mathbb{Z}/2\mathbb{Z})=1,
\]
for $1\leq k\leq 10$.
}
\end{example}
\begin{example}
\label{ex:ppart}
{\rm
Here are some more examples using \SAGE's interface to {\tt HAP}:
\vskip .2in
{\footnotesize{
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label=\SAGE]
sage: G = SymmetricGroup(5)
sage: G.homology(1)
Multiplicative Abelian Group isomorphic to C2
sage: G.homology(2)
Multiplicative Abelian Group isomorphic to C2
sage: G.homology(3)
Multiplicative Abelian Group isomorphic to C2 x C4 x C3
sage: G.homology(4)
Multiplicative Abelian Group isomorphic to C2
sage: G.homology(5)
Multiplicative Abelian Group isomorphic to C2 x C2 x C2
sage: G.homology(6)
Multiplicative Abelian Group isomorphic to C2 x C2
sage: G.homology(7)
Multiplicative Abelian Group isomorphic to C2 x C2 x C4 x C3 x C5
\end{Verbatim}
}}
\vskip .1in
\noindent
The last one means that
\[
H_7(S_5,\mathbb{Z}) =
(\mathbb{Z}/2\mathbb{Z})^2\times (\mathbb{Z}/3\mathbb{Z})\times (\mathbb{Z}/4\mathbb{Z})\times (\mathbb{Z}/5\mathbb{Z}).
\]
\vskip .2in
{\footnotesize{
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label=\SAGE]
sage: G = AlternatingGroup(5)
sage: G.homology(1)
Trivial Abelian Group
sage: G.homology(1,7)
Trivial Abelian Group
sage: G.homology(2,7)
Trivial Abelian Group
\end{Verbatim}
}}
\vskip .1in
\noindent
This implies $H_1(A_5,\mathbb{Z})=H_1(A_5,GF(7))=H_2(A_5,GF(7))=0$.
}
\end{example}
\subsubsection{Examples}
\label{sec:homprops}
Some example computations of a more theoretical nature.
\begin{enumerate}
\item
If $A$ is a $G$-module then ${\rm Tor}\, _0^{\mathbb{Z}[G]}(\mathbb{Z},A)=
H_0(G,A)=A_G\cong A/DA$.
{\bf proof}: We need some lemmas.
Let $\epsilon :\mathbb{Z} [G]\rightarrow \mathbb{Z}$
be the augmentation map. This is a ring homomorphism
(and also a $G$-module homomorphism, since $\mathbb{Z}$ carries the
trivial action). Let $D={\rm Kernel}\, (\epsilon)$
denote its kernel, the {\bf augmentation ideal}.
\index{augmentation ideal}
This is a $G$-module.
\begin{lemma}
\label{lemma:Disfree}
As an abelian group, $D$ is free abelian generated by
$G-1=\{g-1\ |\ g\in G\}$.
\end{lemma}
We write this as $D=\mathbb{Z} \langle G-1\rangle$.
{\bf proof of lemma}: If $d\in D$ then $d=\sum_{g\in G}m_gg$,
where $m_g\in \mathbb{Z}$ and $\sum_{g\in G}m_g=0$. Thus,
$d=\sum_{g\in G}m_g(g-1)$, so $D\subset \mathbb{Z} \langle G-1\rangle$.
To show $D$ is free: If $\sum_{g\in G}m_g(g-1)=0$
then $\sum_{g\in G}m_g g - \sum_{g\in G}m_g=0$ in $\mathbb{Z}[G]$.
But $\mathbb{Z}[G]$ is a free abelian group with basis $G$, so $m_g=0$
for all $g\in G$.
$\Box$
\begin{lemma}
$\mathbb{Z}\otimes_{\mathbb{Z} [G]}A= A/DA$,
where $DA$ is generated by elements
of the form $ga-a$, $g\in G$ and $a\in A$.
\end{lemma}
Recall $A_G$ denotes the largest quotient of $A$ on which $G$
acts trivially\footnote{Implicit in the words ``largest quotient''
is a universal property which we leave to the reader to
formulate precisely.}.
{\bf proof of lemma}: Consider the $G$-module map,
$A\rightarrow \mathbb{Z}\otimes_{\mathbb{Z}[G]}A$, given by
$a\longmapsto 1\otimes a$. Since $\mathbb{Z}\otimes_{\mathbb{Z}[G]}A$
is a trivial $G$-module, it must factor through $A_G$. The previous
lemma implies $A_G\cong A/DA$. (In fact, the quotient map
$q:A\rightarrow A_G$ satisfies $q(ga-a)=0$ for all $g\in G$ and
$a\in A$, so $DA\subset {\rm Kernel}\, (q)$. By maximality of $A_G$,
$DA={\rm Kernel}\, (q)$. QED) So, we have maps
$A\rightarrow A_G \rightarrow \mathbb{Z}\otimes_{\mathbb{Z}[G]}A$.
By the definition of tensor products, the balanced map
$\mathbb{Z}\times A\rightarrow A_G$, $(m, a)\longmapsto ma+DA$,
corresponds to a map $ \mathbb{Z}\otimes_{\mathbb{Z}[G]}A \rightarrow A_G $
for which the composition
$A_G \rightarrow \mathbb{Z}\otimes_{\mathbb{Z}[G]}A \rightarrow A_G $
is the identity. This forces $A_G \cong \mathbb{Z}\otimes_{\mathbb{Z}[G]}A$.
$\Box$
See also \# 11 in \S \ref{sec:basisprops}.
\item
If $G$ is a finite group then
$H_0(G,\mathbb{Z})=\mathbb{Z}$.
This is a special case of the example above
(taking $A=\mathbb{Z}$, as a trivial $G$-module).
\item
$H_1(G,\mathbb{Z})\cong G/[G,G]$, where $[G,G]$ is the commutator
subgroup of $G$.
This is Proposition 10.110 in \cite{R}, \S 10.7.
{\bf proof}:\
First, we {\bf claim}: $D/D^2 \cong G/[G,G]$, where
$D$ is as in Lemma \ref{lemma:Disfree}. To prove this,
define $\theta: G\rightarrow D/D^2$ by
$g\longmapsto (g-1)+D^2$. Since
$gh-1-(g-1)-(h-1)=(g-1)(h-1)$, it follows that
$\theta(gh)=\theta(g)\theta(h)$, so $\theta$ is a homomorphism.
Since $D/D^2$ is abelian, $\theta$ kills every commutator,
so $[G,G]\subset {\rm Kernel}\, (\theta)$.
Therefore, $\theta$ factors through
$\theta': G/[G,G]\rightarrow D/D^2$, $g[G,G]\longmapsto (g-1)+D^2$.
Now, we construct an inverse. Define
$\tau:D\rightarrow G/[G,G]$ by
$g-1 \longmapsto g[G,G]$. Since $\tau (g-1 + h-1) =
g[G,G]\cdot h[G,G]=gh [G,G]$, it is not hard to see
that this is a homomorphism. We would be essentially
done (with the construction of the inverse of $\theta'$,
hence the proof of the
claim) if we knew $D^2\subset {\rm Kernel}\, (\tau)$.
(The inverse would be the composition of
the quotient $D/D^2\rightarrow D/{\rm Kernel}\, (\tau)$ with
the map induced from $\tau$, $D/{\rm Kernel}\, (\tau)\rightarrow G/[G,G]$.)
This follows from the
fact that any $x\in D^2$ can be written as
$x=(\sum_g m_g (g-1)) (\sum_h m'_h (h-1)) =
(\sum_{g,h} m_gm'_h (g-1)(h-1))$, so
$\tau(x)=\prod_{g,h} (ghg^{-1}h^{-1})^{m_gm'_h}[G,G]=[G,G]$.
QED (claim)
Next, we show $H_1(G,\mathbb{Z})\cong D/D^2$. From
the short exact sequence
\[
0\rightarrow D \rightarrow \mathbb{Z}[G]
\stackrel{\epsilon}{\rightarrow} \mathbb{Z} \rightarrow 0,
\]
we obtain the long exact sequence of homology
\begin{equation}
\label{eqn:LESHG}
\begin{array}{c}
\dots \rightarrow
H_1(G,D) \rightarrow
H_1(G,\mathbb{Z}[G]) \rightarrow \\
H_1(G,\mathbb{Z}) \stackrel{\partial}{\rightarrow} H_0(G,D)
\stackrel{f}{\rightarrow }
H_0(G,\mathbb{Z}[G]) \stackrel{\epsilon_*}{\rightarrow} H_0(G,\mathbb{Z}) \rightarrow 0.
\end{array}
\end{equation}
Since $\mathbb{Z}[G]$ is a free $\mathbb{Z}[G]$-module,
$H_1(G,\mathbb{Z}[G])=0$. Therefore $\partial$ is injective.
By item \# 1 above (i.e., $H_0(G,A)\cong A/DA\cong A_G$),
we have
$H_0(G,\mathbb{Z})\cong \mathbb{Z}_G=\mathbb{Z}$ and
$H_0(G,\mathbb{Z}[G])\cong \mathbb{Z}[G]/D\cong \mathbb{Z}$.
By (\ref{eqn:LESHG}), $\epsilon_*$ is surjective.
Combining the last two statements, we find
$\mathbb{Z}/{\rm Kernel}\, (\epsilon_*)\cong \mathbb{Z}$. This forces $\epsilon_*$
to be injective. This, and (\ref{eqn:LESHG}), together imply
$f$ must be $0$. Since this forces $\partial$ to be
an isomorphism, we are done.
$\Box$
\item
Let $G=F/R$ be a presentation of $G$, where
$F$ is a free group and $R$ is a normal subgroup
of relations. {\bf Hopf's formula} states:
$H_2(G,\mathbb{Z})\cong (R\cap [F,F])/[F,R]$, where $[F,R]$ denotes the
subgroup of $F$ generated by the commutators $[f,r]$, $f\in F$, $r\in R$.
See \cite{R}, \S 10.7.
The group $H_2(G,\mathbb{Z})$ is sometimes called the {\bf Schur multiplier} of $G$.
\index{Schur multiplier}
\end{enumerate}
\section{Basic properties of $H^n(G,A)$, $H_n(G,A)$}
\label{sec:basisprops}
Let $R$ be a (possibly non-commutative) ring and $A$ be
an $R$-module. We say $A$ is {\bf injective} if the functor
\index{injective $R$-module}
$B\longmapsto {\rm Hom}_G(B,A)$ (from the category of $G$-modules
to the category of abelian groups) is exact.
\index{injective $R$-module}
(Recall $A$ is projective if the functor
$B\longmapsto {\rm Hom}_G(A,B)$ is exact.)
We say $A$ is {\bf co-induced} if it has the form
\index{co-induced $R$-module}
${\rm Hom}_{\mathbb{Z}}(R,B)$ for some abelian group $B$.
We say $A$ is {\bf relatively injective} if
\index{relatively injective $R$-module}
it is a direct factor of a co-induced $R$-module.
We say $A$ is {\bf relatively projective} if
\[
\begin{array}{ccc}
\pi : & \mathbb{Z} [G]\otimes_\mathbb{Z} A
\rightarrow & A,\\
& x\otimes a \longmapsto & xa,
\end{array}
\]
maps a direct factor of $\mathbb{Z} [G]\otimes_\mathbb{Z} A $
isomorphically onto $A$. These are the
$G$-modules $A$ which are isomorphic to a direct
factor of the induced module $\mathbb{Z} [G]\otimes_\mathbb{Z} A $.
\index{relatively projective $R$-module}
When $G$ is finite, the notions of relatively injective
and relatively projective coincide\footnote{These notions
were introduced by Hochschild \cite{Ho}.}.
\begin{enumerate}
\item
The definition of $H^n(G,A)$ does not depend on the
$G$-resolution $X_*$ of $\mathbb{Z}$ used.
\item
If $A$ is a projective $\mathbb{Z} [G]$-module then
$H_n(G,A)=0$, for all $n\geq 1$.
This follows immediately from the definitions.
\item
If $A$ is an injective $\mathbb{Z} [G]$-module then
$H^n(G,A)=0$, for all $n\geq 1$.
See also \cite{S}, \S VII.2.
\item
If $A$ is a relatively injective $\mathbb{Z} [G]$-module then
$H^n(G,A)=0$, for all $n\geq 1$.
This is Proposition 1 in \cite{S}, \S VII.2.
\item
If $A$ is a relatively projective $\mathbb{Z} [G]$-module then
$H_n(G,A)=0$, for all $n\geq 1$.
This is Proposition 2 in \cite{S}, \S VII.4.
\item
If $A=A'\oplus A''$ then $H^n(G,A)=H^n(G,A')\oplus H^n(G,A'')$,
for all $n\geq 0$.
More generally, if $I$ is any indexing family
and $A=\oplus_{i\in I} A_i$ then $H^n(G,A)=\oplus_{i\in I} H^n(G,A_i)$,
for all $n\geq 0$.
This follows from Proposition 10.81 in \S 10.6 of
Rotman \cite{R}.
\item
If
\[
0 \rightarrow
A {\rightarrow} B {\rightarrow}
C {\rightarrow} 0
\]
is an exact sequence of $G$-modules
then we have a long exact sequence of cohomology
(\ref{eqn:LESC}).
See \cite{S}, \S VII.2, and properties of the ${\rm Ext}\, $ functor
\cite{R}, \S 10.6.
\item
$A\longmapsto H^n(G,A)$ is the higher right derived functor
associated to $A\longmapsto A^G={\rm Hom}_G(\mathbb{Z},A)$ from the category of
$G$-modules to the category of abelian groups.
This is by definition.
See \cite{S}, \S VII.2, or \cite{R}, \S 10.7.
\item
If
\[
0 \rightarrow A {\rightarrow} B {\rightarrow} C {\rightarrow} 0
\]
is an exact sequence of $G$-modules
then we have a long exact sequence of homology
(\ref{eqn:LESH}).
In the case of a finite group, see \cite{S}, \S VIII.1.
In general, see \cite{S}, \S VII.4, and properties
of the ${\rm Tor}\, $ functor in \cite{R}, \S 10.6.
\item
$A\longmapsto H_n(G,A)$ is the higher left derived functor
associated to $A\longmapsto A_G=\mathbb{Z} \otimes_{\mathbb{Z} [G]}A$
on the category of $G$-modules.
This is by definition.
See \cite{S}, \S VII.4, or \cite{R}, \S 10.7.
\item
If $G$ is a finite cyclic group then
\[
\begin{split}
H_0(G,A) &= A_G,\\
H_{2n-1}(G,A) & = A^G/NA,\\
H_{2n}(G,A) &={\rm Kernel}\, (N)/DA ,
\end{split}
\]
for all $n\geq 1$.
To prove this, we need a lemma.
\begin{lemma}
\label{lemma:les_cyclic}
Let $G=\langle g\rangle$ be a cyclic group of order $k$. Let
$M=g-1$ and $N=1+g+g^2+...+g^{k-1}$. Then
\[
\dots \rightarrow
\mathbb{Z} [G] \stackrel{N}{\rightarrow}
\mathbb{Z} [G] \stackrel{M}{\rightarrow} \mathbb{Z} [G] \rightarrow
\mathbb{Z} [G] \stackrel{N}{\rightarrow}
\mathbb{Z} [G] \stackrel{M}{\rightarrow} \mathbb{Z} [G]
\stackrel{\epsilon}{\rightarrow} \mathbb{Z} \rightarrow 0,
\]
is a free $G$-resolution.
\end{lemma}
{\bf proof of lemma}:
It is clearly free. Since $MN=NM=(g-1)(1+g+g^2+...+g^{k-1})=g^k-1=0$,
it is a complex. It remains to prove exactness.
Since ${\rm Kernel}\, (\epsilon)=D={\rm Image}\, (M)$, by Lemma \ref{lemma:Disfree},
this stage is exact.
To show ${\rm Kernel}\, (M)={\rm Image}\, (N)$, let $x=\sum_{j=0}^{k-1}m_jg^j\in {\rm Kernel}\, (M)$.
Since $(g-1)x=0$, we must have $m_0=m_1=...=m_{k-1}$. This forces
$x=m_0N\in {\rm Image}\, (N)$. Thus ${\rm Kernel}\, (M)\subset {\rm Image}\, (N)$. Clearly
$MN=0$ implies ${\rm Image}\, (N)\subset {\rm Kernel}\, (M)$, so ${\rm Kernel}\, (M)={\rm Image}\, (N)$.
To show ${\rm Kernel}\, (N)={\rm Image}\, (M)$, let $x=\sum_{j=0}^{k-1}m_jg^j\in {\rm Kernel}\, (N)$.
Since $Nx=0$, we have $0=\epsilon(Nx)=\epsilon(N)\epsilon(x)=k\epsilon(x)$,
so $\sum_{j=0}^{k-1}m_j=0$. Observe that
\[
\begin{array}{ll}
x&=m_0\cdot 1+m_1g+m_2g^2+...+m_{k-1}g^{k-1}\\
&=(m_0-m_0g)+(m_0+m_1)g+m_2g^2+...+m_{k-1}g^{k-1}\\
&=(m_0-m_0g)+(m_0+m_1)g-(m_0+m_1)g^2\\
&+(m_0+m_1+m_2)g^2-(m_0+m_1+m_2)g^3+...\\
&+(m_0+..+m_{k-1})g^{k-1}-(m_0+..+m_{k-1})g^{k}.
\end{array}
\]
where the last two terms are actually $0$. This implies
$x=-M\big(m_0+(m_0+m_1)g+(m_0+m_1+m_2)g^2
+...+ (m_0+..+m_{k-1})g^{k-1}\big)\in {\rm Image}\, (M)$.
Thus ${\rm Kernel}\, (N)\subset {\rm Image}\, (M)$. Clearly
$NM=0$ implies ${\rm Image}\, (M)\subset {\rm Kernel}\, (N)$, so ${\rm Kernel}\, (N)={\rm Image}\, (M)$.
This proves exactness at every stage.$\Box$
Now we can prove the claimed property.
By property 1 in \S \ref{sec:homprops}, it suffices to assume
$n>0$. Tensor the complex in Lemma \ref{lemma:les_cyclic}
on the right with $A$:
{\footnotesize{
\[
\begin{array}{cc}
\dots \rightarrow
&\mathbb{Z} [G]\otimes_{\mathbb{Z} [G]}A \stackrel{N_*}{\rightarrow}
\mathbb{Z} [G]\otimes_{\mathbb{Z} [G]}A \stackrel{M_*}{\rightarrow}
\mathbb{Z} [G]\otimes_{\mathbb{Z} [G]}A \stackrel{N_*}{\rightarrow} \\
&\mathbb{Z} [G]\otimes_{\mathbb{Z} [G]}A \stackrel{M_*}{\rightarrow}
\mathbb{Z} [G] \otimes_{\mathbb{Z} [G]}A
\stackrel{\epsilon}{\rightarrow} \mathbb{Z}\otimes_{\mathbb{Z} [G]}A \rightarrow 0,
\end{array}
\]
}}
where the new maps are distinguished from the old maps
by adding an asterisk. By definition,
$\mathbb{Z} [G]\otimes_{\mathbb{Z} [G]}A \cong A$, and by
property 1 in \S \ref{sec:homprops},
$\mathbb{Z} \otimes_{\mathbb{Z} [G]}A \cong A/DA$.
The above sequence becomes
\[
\dots \rightarrow
A \stackrel{N_*}{\rightarrow}
A \stackrel{M_*}{\rightarrow}
A \stackrel{N_*}{\rightarrow}
A \stackrel{M_*}{\rightarrow}
A
\stackrel{\epsilon}{\rightarrow} A/DA \rightarrow 0.
\]
This implies, by definition of ${\rm Tor}\, $,
\[
{\rm Tor}\, _{2n-1}^{\mathbb{Z}[G]}(\mathbb{Z},A)={\rm Kernel}\, (M_*)/{\rm Image}\, (N_*)
=A^G/NA,
\]
and
\[
{\rm Tor}\, _{2n}^{\mathbb{Z}[G]}(\mathbb{Z},A)={\rm Kernel}\, (N_*)/{\rm Image}\, (M_*)
=A[N]/DA.
\]
See also \cite{S}, \S VIII.4.1 and the Corollary in \S VIII.4.
\item
The group $H^2(G,A)$ classifies group extensions of $G$ by $A$, i.e., short exact sequences $0\rightarrow A\rightarrow E\rightarrow G\rightarrow 1$ inducing the given action of $G$ on $A$.
This is Theorem 5.1.2 in \cite{W}. See also \S 10.2 in
\cite{R}.
\item
If $G$ is a finite group of order $m=|G|$ then
$mH^n(G,A)=0$, for all $n\geq 1$.
This is Proposition 10.119 in \cite{R}.
\item
If $G$ is a finite group and $A$ is a finitely-generated
$G$-module then $H^n(G,A)$ is finite, for all $n\geq 1$.
This is Proposition 3.1.9 in \cite{W} and Corollary
10.120 in \cite{R}.
\item
The group $H^1(G,A)$ constructed using resolutions
is the same as the group constructed using $1$-cocycles.
The group $H^2(G,A)$ constructed using resolutions
is the same as the group constructed using $2$-cocycles.
This is Corollary 10.118 in \cite{R}.
\item
If $G$ is a finite cyclic group then
\[
\begin{split}
H^0(G,A) &= A^G,\\
H^{2n-1}(G,A) &={\rm Kernel}\, (N)/DA ,\\
H^{2n}(G,A) &= A^G/NA ,
\end{split}
\]
for all $n\geq 1$.
Here $N:A\rightarrow A$ is the
norm map $Na=\sum_{g\in G}ga$ and $DA$ is the image of $A$
under the augmentation ideal $D$ defined above (so $DA$ is
generated by the elements $ga-a$).
{\bf proof}:\
The case $n=0$: By definition,
$H^0(G,A)={\rm Ext}\, ^0_{\mathbb{Z}[G]}(\mathbb{Z},A)={\rm Hom}_G(\mathbb{Z},A)$.
Define $\tau : {\rm Hom}_G(\mathbb{Z},A)\rightarrow A^G$ by
sending $f\longmapsto f(1)$. It is easy to see that
this is well-defined and, in fact, injective.
For each $a\in A^G$, define $f=f_a\in {\rm Hom}_G(\mathbb{Z},A)$ by
$f(m)=ma$. This shows $\tau$ is surjective as well,
so case $n=0$ is proven.
Case $n>0$: Apply the functor ${\rm Hom}_G(*,A)$ to the
$G$-resolution in Lemma \ref{lemma:les_cyclic} to get
{\footnotesize{
\[
\begin{array}{cc}
\dots \leftarrow
&{\rm Hom}_G(\mathbb{Z} [G],A) \stackrel{N_*}{\leftarrow}
{\rm Hom}_G(\mathbb{Z} [G],A) \stackrel{M_*}{\leftarrow} {\rm Hom}_G(\mathbb{Z} [G],A)
\stackrel{\epsilon_*}{\leftarrow} {\rm Hom}_G(\mathbb{Z},A) \leftarrow 0.
\end{array}
\]
}}
It is known that ${\rm Hom}_G(\mathbb{Z}[G],A)\cong A$
(see Proposition 8.85 on page 583 of \cite{R}).
It follows that
\[
\begin{array}{cc}
\dots \leftarrow
&A \stackrel{N_*}{\leftarrow}
A \stackrel{M_*}{\leftarrow} A
\stackrel{\epsilon_*}{\leftarrow} A^G \leftarrow 0.
\end{array}
\]
By definition of ${\rm Ext}\, $, for $n>0$ we have
\[
{\rm Ext}\, _{\mathbb{Z}[G]}^{2n}(\mathbb{Z},A)={\rm Kernel}\, (M_*)/{\rm Image}\, (N_*)=A^G/NA,
\]
and
\[
{\rm Ext}\, _{\mathbb{Z}[G]}^{2n-1}(\mathbb{Z},A)={\rm Kernel}\, (N_*)/{\rm Image}\, (M_*)={\rm Kernel}\, (N)/(g-1)A,
\]
where $g$ is a generator of $G$ as
in Lemma \ref{lemma:les_cyclic}.
$\Box$
See also \cite{S}, \S VIII.4.1 and the Corollary in \S VIII.4.
\item
If $G$ is a finite cyclic group
of order $m$ and $A$ is a {\it trivial} $G$-module then
\[
\begin{split}
H^0(G,A) &= A^G,\\
H^{2n-1}(G,A) &\cong A[m],\\
H^{2n}(G,A) &\cong A/mA,
\end{split}
\]
for all $n\geq 1$.
This is a consequence of the previous property.
\end{enumerate}
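The even/odd pattern in the cyclic-group properties above is easy to check by machine for a small module. The following Python sketch (the choice of $G$, $A$ and the action is ours, for illustration) computes the orders of $A^G/NA$ and ${\rm Kernel}\,(N)/(g-1)A$ for $G=\mathbb{Z}/2$ acting on $A=\mathbb{Z}/4$ by $a\mapsto -a$, so that $H^{2n}(G,A)$ and $H^{2n-1}(G,A)$ both have order $2$ in this case.

```python
# Machine check of the cyclic-group formulas H^{2n}(G,A) = A^G/NA and
# H^{2n-1}(G,A) = Kernel(N)/(g-1)A, for G = Z/2 acting on A = Z/4 by
# a -> -a.  (Illustrative sketch; the choice of G, A and action is ours.)
k, m = 2, 4                                   # |G| = k,  A = Z/m
g = lambda a: (-a) % m                        # action of the generator

def g_pow(a, j):                              # g^j . a
    for _ in range(j):
        a = g(a)
    return a

A = range(m)
N = lambda a: sum(g_pow(a, j) for j in range(k)) % m   # norm map
M = lambda a: (g(a) - a) % m                           # M = g - 1

A_G  = {a for a in A if g(a) == a}            # invariants A^G = Kernel(M)
NA   = {N(a) for a in A}                      # NA, a subgroup of A^G
KerN = {a for a in A if N(a) == 0}
MA   = {M(a) for a in A}                      # (g-1)A, a subgroup of Kernel(N)

H_even = len(A_G) // len(NA)                  # |H^{2n}(G,A)|
H_odd  = len(KerN) // len(MA)                 # |H^{2n-1}(G,A)|
print(H_even, H_odd)
```

Changing `g` to the identity recovers the trivial-module case of the last property.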
\section{Functorial properties}
In this section, we investigate some of the ways in
which $H^n(G,A)$ depends on $G$.
One way to construct all these in a common framework is to
introduce the notion of a ``homomorphism of pairs''.
Let $G,H$ be groups.
Let $A$ be a $G$-module and $B$ an $H$-module.
If $\alpha :H\rightarrow G$ is a homomorphism of groups
and $\beta:A\rightarrow B$ is a homomorphism of
$H$-modules (using $\alpha$ to regard $B$ as an $H$-module)
then we call $(\alpha,\beta)$ a
{\bf homomorphism of pairs}, written
\index{homomorphism of pairs}
\[
(\alpha,\beta):(G,A)\rightarrow (H,B).
\]
Let $G\subset H$ be groups and $A$ an $H$-module (so, by restriction,
a $G$-module). We say a map
\[
f_{G,H}:H^n(G,A)\rightarrow H^n(H,A),
\]
is {\bf transitive} if $f_{G_2,G_3}f_{G_1,G_2}=f_{G_1,G_3}$,
for all subgroups $G_1\subset G_2\subset G_3$.
\index{transitive}
Let $X_*$ be a $G$-resolution and $X'_*$ an $H$-resolution,
each with a $-1$ grading.
Associated to a homomorphism of groups
$\alpha :H\rightarrow G$ is a sequence of $H$-homomorphisms
\begin{equation}
\label{eqn:hom_Hom}
A_n:X'_n\rightarrow X_n,
\end{equation}
$n\geq 0$, such that
$d_{n+1}A_{n+1}=A_nd'_{n+1}$ and $\epsilon A_0=\epsilon'$.
\begin{theorem}
\label{thrm:hom_Hom}
\begin{enumerate}
\item
If $(\alpha,\beta):(G,A)\rightarrow (G',A')$
and $(\alpha',\beta'):(G',A')\rightarrow (G'',A'')$
are homomorphisms of pairs then so
is
$(\alpha'\circ \alpha,\beta'\circ \beta):(G,A)\rightarrow (G'',A'')$.
\item
Suppose $(\alpha,\beta):(G,A)\rightarrow (G',A')$
is a homomorphism of pairs, $X_*$ is a $G$-resolution,
and $X'_*$ is a $G'$-resolution
(each infinite in both directions, with a $-1$ grading).
Let $H^n(G,A,X_*)$ denote the derived groups associated to the
differential groups ${\rm Hom}_G(X_*,A)$ with $+1$ grading.
There is a homomorphism
\[
(\alpha,\beta)_{X_*,X'_*}:H^n(G,A,X_*)\rightarrow H^n(G',A',X'_*)
\]
satisfying the following properties.
\begin{enumerate}
\item
If $G=G'$, $A=A'$, $X=X'$, $\alpha=1$ and $\beta=1$ then
$(1,1)_{X_*,X'_*}=1$.
\item
If $(\alpha',\beta'):(G',A')\rightarrow (G'',A'')$
is a homomorphism of pairs, $X''_*$ is a $G''$-resolution
then
\[
(\alpha'\circ \alpha,\beta'\circ \beta)_{X_*,X''_*}=
(\alpha',\beta')_{X'_*,X''_*}\circ (\alpha,\beta)_{X_*,X'_*}.
\]
\item
If $(\alpha,\gamma):(G,A)\rightarrow (G',A')$
is a homomorphism of pairs then
\[
(\alpha,\beta+ \gamma)_{X_*,X'_*}=
(\alpha,\beta)_{X_*,X'_*}+ (\alpha,\gamma)_{X_*,X'_*}.
\]
\end{enumerate}
\end{enumerate}
\end{theorem}
\begin{remark}
For an analogous result for homology, see \S\S III.8 in
Brown \cite{B}.
\end{remark}
{\bf proof}:\ We sketch the proof, following
Weiss, \cite{W}, Theorem 2.1.8, pp.\ 52--53.
(1): This is ``obvious''.
(2): Let $(\alpha,\beta):(G,A)\rightarrow (G',A')$
be a homomorphism of pairs. Using (\ref{eqn:hom_Hom}),
we have an associated chain map
\[
\alpha^*: {\rm Hom}_{G}(X_*,A)\rightarrow {\rm Hom}_{G'}(X'_*,A')
\]
of differential groups (Brown \S III.8 in \cite{B}).
The homomorphism of cohomology groups induced by $\alpha^*$
is denoted
\[
\alpha^*_{n,X_*,X'_*}:H^n(G,A,X_*)\rightarrow H^n(G',A',X'_*).
\]
Properties (a)-(c) follow from \S \ref{sec:properties}
and the corresponding properties of $\alpha^*$.
$\Box$
As the cohomology groups are independent of the
resolution used, the map
$(\alpha,\beta)_{X_*,X'_*}:H^n(G,A,X_*)\rightarrow H^n(G',A',X'_*)$
is sometimes simply denoted by
\begin{equation}
\label{eqn:hompair}
(\alpha,\beta)_{*}:H^n(G,A)\rightarrow H^n(G',A').
\end{equation}
\subsection{Restriction}
Let $X_*=X_*(G)$ denote the bar resolution.
If $H$ is a subgroup of $G$ then
the cochains on $G$, $C^n(G,A)={\rm Hom}_G(X_n(G),A)$,
can be restricted to $H$:
$C^n(H,A)={\rm Hom}_H(X_n(H),A)$. The restriction map
$C^n(G,A)\rightarrow C^n(H,A)$ leads to a
map of cohomology classes:
\[
{\rm Res}\, :H^n(G,A)\rightarrow H^n(H,A).
\]
In this case, the homomorphism of pairs is
given by the inclusion map $\alpha:H\rightarrow G$
and the identity map $\beta:A\rightarrow A$.
The map ${\rm Res}\, $ is the induced map defined by
(\ref{eqn:hompair}). By the properties of
this induced map, we see that
${\rm Res}\, _{H,G}$ is transitive: if
$G\subset G'\subset G''$ then\footnote{There
is an analog of the restriction for
homology which also satisfies this transitive
property (Proposition 9.5 in Brown \cite{B}).}
\[
{\rm Res}\, _{G',G}\circ {\rm Res}\, _{G'',G'}={\rm Res}\, _{G'',G}.
\]
A particularly nice feature of the restriction map is the
following fact.
\begin{theorem}
If $G$ is a finite group and $G_p$ is a $p$-Sylow subgroup
and if $H^n(G,A)_p$ is the $p$-primary component of $H^n(G,A)$
then
(a) there is a canonical isomorphism
$H^n(G,A)\cong \oplus_p H^n(G,A)_p$, and
(b) $Res:H^n(G,A)\rightarrow H^n(G_p,A)$
restricted to $H^n(G,A)_p$ (identified with a subgroup
of $H^n(G,A)$ via (a)) is injective.
\end{theorem}
{\bf proof}:\
See Weiss, \cite{W}, Theorem 3.1.15.
$\Box$
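For trivial coefficients in dimension $1$ we have $H^1(G,A)={\rm Hom}(G,A)$, so part (b) of the theorem can be verified directly in small cases. The following Python sketch (the choice of $G=S_3$ and $A=\mathbb{Z}/6$ is ours) brute-forces $H^1(S_3,\mathbb{Z}/6)$, which is $2$-primary of order $2$, and checks that restriction to a $2$-Sylow subgroup is injective on it.

```python
# Brute-force check of the Sylow theorem above in dimension 1 with trivial
# coefficients, where H^1(G,A) = Hom(G,A):  G = S_3, A = Z/6, and the
# 2-primary part of H^1 injects under restriction to a 2-Sylow subgroup.
# (Illustrative sketch; the choice of G and A is ours.)
from itertools import permutations, product

G = list(permutations(range(3)))              # S_3 as permutation tuples
mul = lambda p, q: tuple(p[q[i]] for i in range(3))
m = 6                                         # A = Z/6 with trivial action

hom_list = []
for vals in product(range(m), repeat=len(G)):
    f = dict(zip(G, vals))
    if all(f[mul(p, q)] == (f[p] + f[q]) % m for p in G for q in G):
        hom_list.append(f)                    # H^1(S_3, Z/6) = Hom(S_3, Z/6)

P = [(0, 1, 2), (1, 0, 2)]                    # a 2-Sylow subgroup of S_3
restricted = {tuple(f[p] for p in P) for f in hom_list}
print(len(hom_list), len(restricted))         # equal counts: Res is injective
```

Here $H^1(S_3,\mathbb{Z}/6)\cong{\rm Hom}(\mathbb{Z}/2,\mathbb{Z}/6)$ has order $2$ and coincides with its $2$-primary part, so the equal counts exhibit the claimed injectivity.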
\begin{example}
\label{ex:sylow}
{\rm
Homology is a functor. That is, for any $n \geq 0$ and group
homomorphism $f : G \rightarrow G'$ there is an induced homomorphism
$H_n(f) : H_n(G,\mathbb{Z}) \rightarrow H_n(G',\mathbb{Z})$ satisfying
\begin{itemize}
\item
$H_n(gf) = H_n(g)H_n(f)$ for group homomorphisms $f : G \rightarrow G'$ and
$g : G' \rightarrow G''$,
\item
$H_n(f)$ is the identity homomorphism if $f$ is the identity.
\end{itemize}
The following commands compute $H_3(f) : H_3(P,\mathbb{Z}) \rightarrow
H_3(S_5,\mathbb{Z})$ for the inclusion $f : P \hookrightarrow S_5$
into the symmetric group $S_5$ of its Sylow $2$-subgroup. They
also show that the image of the induced homomorphism $H_3(f)$
is precisely the Sylow $2$-subgroup of $H_3(S_5,\mathbb{Z})$.
\vskip .2in
{\footnotesize{
\begin{Verbatim}[fontsize=\scriptsize,fontfamily=courier,fontshape=tt,frame=single,label={\tt GAP}]
gap> S_5:=SymmetricGroup(5);; P:=SylowSubgroup(S_5,2);;
gap> f:=GroupHomomorphismByFunction(P,S_5, x->x);;
gap> R:=ResolutionFiniteGroup(P,4);;
gap> S:=ResolutionFiniteGroup(S_5,4);;
gap> ZP_map:=EquivariantChainMap(R,S,f);;
gap> map:=TensorWithIntegers(ZP_map);;
gap> Hf:=Homology(map,3);;
gap> AbelianInvariants(Image(Hf));
[2,4]
gap> GroupHomology(S_5,3);
[2,4,3]
\end{Verbatim}
}}
}
\end{example}
If $H$ is a subgroup of finite index in $G$ then
there is an analogous restriction map
in group homology (see for example Brown \cite{B}, \S III.9).
\subsection{Inflation}
Let $X_*$ denote the bar resolution of $G$. Recall
\[
X_n=\oplus_{(g_1,\dots ,g_n)\in G^n} R[g_1,\dots ,g_n],
\]
where the sum runs over all ordered $n$-tuples
in $G^n$. If $H$ is a subgroup of $G$, let $X^H_*$ denote the complex
defined by
\[
X^H_n=\oplus_{(g_1H,\dots ,g_nH)\in (G/H)^n} R[g_1H,\dots ,g_nH].
\]
This is a resolution, and we have a chain map
defined on $n$-cells by
$[g_1,\dots ,g_n]\longmapsto [g_1H,\dots ,g_nH]$.
Suppose that $H$ is a normal subgroup of $G$
and $A$ is a $G$-module. We may view $A^H$ as a
$G/H$-module.
In this case, the homomorphism of pairs is
given by the quotient map $\alpha:G\rightarrow G/H$
and the inclusion map $\beta:A^H\rightarrow A$.
The {\bf inflation} map ${\rm Inf}\, $ is the induced map defined by
(\ref{eqn:hompair}), denoted
\index{inflation}
\[
{\rm Inf}\, :H^n(G/H,A^H)\rightarrow H^n(G,A).
\]
The {\bf inflation-restriction sequence in dimension $n$} is
\index{inflation-restriction sequence in dimension $n$}
\[
0\rightarrow H^n(G/H,A^H)\stackrel{{\rm Inf}\, }{\rightarrow}
H^n(G,A) \stackrel{{\rm Res}\, }{\rightarrow}
H^n(H,A).
\]
This sequence is exact for $n=1$; for $n>1$ it is exact provided
$H^i(H,A)=0$ for $1\leq i\leq n-1$. For a proof, see Weiss, \cite{W}, \S 3.4.
There is an analog of this inflation-restriction
sequence for homology.
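In dimension $1$ with trivial coefficients every $H^1$ is a ${\rm Hom}$ group, so the sequence can be checked by machine in a small case. The following Python sketch (the choice of $G=\mathbb{Z}/4$, $H=\{0,2\}$, $A=\mathbb{Z}/2$ is ours) verifies that ${\rm Inf}$ is injective and that its image equals the kernel of ${\rm Res}$.

```python
# Exactness of 0 -> H^1(G/H,A^H) -Inf-> H^1(G,A) -Res-> H^1(H,A) in a small
# case:  G = Z/4, H = {0,2} = Z/2, A = Z/2 with trivial action, so A^H = A
# and every H^1 is a Hom group.  (Illustrative sketch; the groups are ours.)
def homs(n, m):
    """Homomorphisms Z/n -> Z/m, each recorded by the image of a generator."""
    return [a for a in range(m) if (n * a) % m == 0]

HG = homs(4, 2)                      # H^1(G, A)
HQ = homs(2, 2)                      # H^1(G/H, A),  G/H = Z/2
inf = {a: a % 2 for a in HQ}         # Inf f(g) = f(g + H):  1 -> f(1)
res = {a: (2 * a) % 2 for a in HG}   # Res f = f|_H:  generator 2 -> f(2) = 2a

assert len(set(inf.values())) == len(HQ)                     # Inf injective
assert set(inf.values()) == {a for a in HG if res[a] == 0}   # Im Inf = Ker Res
print("inflation-restriction is exact here")
```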
We omit any discussion of transfer and Shapiro's lemma, due
to space limitations.
{\it Acknowledgements}:
I thank G. Ellis, M. Mazur, J. Feldvoss, and
P. Guillot for correspondence
which improved the content of these notes.
\end{document} |
\begin{document}
\title{A geometric approach to full Colombeau algebras}
\author{R. Steinbauer\footnote{This work was supported by projects P-16742 and Y-237 of the Austrian Science Fund.}\\
Faculty of Mathematics, University of Vienna\\
Nordbergstr.\ 15, A-1090 Wien, Austria\\
E-mail: [email protected]}
\date{October 10, 2007}
\maketitle
\abstract{We present a geometric approach to diffeomorphism invariant full Colombeau algebras
which allows a particularly clear view on the construction of the intrinsically defined algebra
$\ensuremath{\hat{\mathcal G}}(M)$ on the manifold $M$ given in \cite{gksv}.
\medskip\noindent
MSC 2000: Primary: 46T30; secondary: 46F30.
\medskip\noindent
Keywords: Algebras of generalised functions, full Colombeau algebras, generalised functions on manifolds,
diffeomorphism invariance.
}
\section{Introduction.}\label{s1}
In the early 1980-ies J. F. Colombeau \cite{c1}, \cite{c2} constructed algebras of generalised functions
con\-taining the vector space $\D'$ of distributions as a subspace and the space of ${\mathcal C}^\infty$-functions as a
subalgebra. As associative and commutative differential algebras they combine a maximum of favourable differential
algebraic properties with a maximum of consistency properties with respect to classical analysis, according to the
Schwartz impossibility result (\cite{s}). Colombeau algebras since then have proved to be a useful tool in nonlinear
analysis, in particular in nonlinear PDE with non-smooth data (see \cite{o} and references therein) and have
increasingly been used in geometric applications such as Lie group analysis of differential equations (see, e.g.\
\cite{dkp}) and general relativity (for a recent review see \cite{sv}).
In this work we shall be exclusively interested in so-called {\em full} Colombeau algebras which possess a
canonical embedding of distributions. One drawback of the early constructions (given e.g.\ in \cite{c1})
was their lack of diffeomorphism invariance. (Note that this is in contrast to the situation
for the so-called special algebras, which do not possess a canonical embedding of $\D'$: The definition
of special algebras is automatically diffeomorphism invariant, hence these algebras lend themselves
naturally to geometric constructions, see e.g.\ \cite[Ch.\ 3.2]{gkos}. For matters of embedding $\D'$ into
simplified algebras in the manifold context see \cite[Ch.\ 3.2.2]{gkos}.)
It was only after a considerable effort that the full construction could be suitably modified to obtain diffeomorphism invariance (\cite{cm,jel,gfks}). Moreover, the construction of a (full) Colombeau algebra
$\ensuremath{\hat{\mathcal G}}(M)$ on a manifold $M$ exclusively using intrinsically defined building blocks was given in \cite{gksv}.
Note that such an intrinsic construction is vital for applications in a geometric context, e.g.\ in relativity.
The natural next step, i.e., the construction of generalised tensor fields on manifolds again proved to be
rather challenging. For details we refer to the forthcoming work \cite{gksv2}.
Among other tasks (that are in part discussed in \cite{g}) it was necessary to develop a new
point of view on the construction given in \cite{gksv}, in particular, on the property that the Lie
derivative commutes with the embedding.
In this short note we elaborate on this new point of view by presenting a geometric approach to the
construction of the algebra $\ensuremath{\hat{\mathcal G}}(M)$ of \cite{gksv}. We believe that this novel approach serves two purposes:
It provides a short introduction into diffeomorphism invariant full Colombeau algebras for
readers not familiar with this topic and it suggests to the experts a very useful shift of focus.
To set the stage for our main topic to be presented in section \ref{s2} we begin by recalling
the general characteristics of Colombeau's construction on open subsets of $\R^n$.
It provides associative and commutative differential algebras---from now on denoted by ${\mathcal G}$---satisfying the
following distinguishing properties:
\begin{enumerate}
\item[(i)] There exists a linear embedding $\iota:\D'\hookrightarrow{\mathcal G}$ and the function $f(x)=1$
is the unit in ${\mathcal G}$.
\item[(ii)] There exist derivative operators $\pa:\ {\mathcal G}\to{\mathcal G}$ that are linear and satisfy the
Leibniz rule.
\item[(iii)] The operators $\pa$, when restricted to $\D'$, coincide with the usual partial derivatives, i.e.,
$\pa\circ\iota=\iota\circ\pa$.
\item[(iv)] Multiplication in ${\mathcal G}$, when restricted to ${\mathcal C}^\infty\times{\mathcal C}^\infty$,
coincides with the usual product of functions.
\end{enumerate}
Recall that these properties are optimal in the light of the impossibility result of L. Schwartz (\cite{s}).
Roughly the construction consists of the following steps (for a more elaborate scheme of construction see
\cite[Ch.\ 3]{gfks}):
\begin{enumerate}
\item [(A)] Definition of a basic space ${\mathcal E}$ that is an algebra, together with linear embeddings
$\iota:{\mathcal D}'\hookrightarrow{\mathcal E}$ and $\sigma:{\mathcal C}^\infty\hookrightarrow{\mathcal E}$
where $\sigma$ is an algebra homomorphism. Definition of derivative operators on ${\mathcal E}$ that coincide
with the usual derivatives on ${\mathcal D}'$.
\item [(B)] Definition of the spaces ${\mathcal E}_M$ of moderate and ${\mathcal N}$ of negligible elements
of the basic space ${\mathcal E}$ such that ${\mathcal E}_M$ is a subalgebra of ${\mathcal E}$
and ${\mathcal N}$ is an ideal in ${\mathcal E}_M$ that contains $(\iota-\sigma)({\mathcal C}^\infty)$.
Definition of the algebra as the quotient ${\mathcal G}:={\mathcal E}_M/{\mathcal N}$.
\end{enumerate}
Many different versions of this construction have appeared over the years, adapted to special
situations or specific applications. In the next section we shall focus on step (A) above in a geometrical
context.
\section{A geometric approach to Colombeau algebras.}\label{s2}
In this main part of our work we present a geometric approach to diffeomorphism invariant full
Colombeau algebras. We will carry out our construction on an oriented paracompact smooth Hausdorff
manifold $M$ of dimension $n$ and proceed in three steps.
\medskip
\noindent
\subsection{The basic space and the embeddings.}
We want to embed the space of distributions $\D'(M):=(\Omega^n_c(M))'$ (with $\Om^n_c$ denoting the space
of compactly supported $n$-forms) as well as ${\mathcal C}^\infty(M)$
into our forthcoming basic space $\ensuremath{\hat{\mathcal E}}(M)$: a natural choice therefore would be
${\mathcal C}^\infty(\Omega^n_c(M)\times M)$. However, for technical reasons one actually restricts the
first slot to elements of $\Omega^n_c$ with unit integral. Denoting their space by $\hat{\mathcal A}_0(M)$ we
define the basic space by
\[\ensuremath{\hat{\mathcal E}}(M):={\mathcal C}^\infty(\hat{\mathcal A}_0(M)\times M).\]
Elements of the basic space will be denoted by $R$ and its arguments by $\omega$ and $p$.
Now it is natural to define the embeddings $\sigma$ and $\iota$ by
\[ \sigma(f)(\omega,p):=f(p)\quad \mbox{resp.\ }\quad \iota(u)(\omega,p):=\langle u,\omega\rangle.\]
Note that we clearly have $\sigma(fg)=\sigma(f)\sigma(g)$. On the other hand, the formula for $\iota$ might seem
a little unusual to those acquainted with the works of Colombeau \cite{c1,c2}.
In the local situation---that is, on an open subset $\Omega$ of $\R^n$---it has been used by Jelinek
in \cite{jel} while the more familiar formula (found e.g.\ in \cite{c2}, however, with an additional reflection)
for embedding a distribution $u$ on $\Omega$ is ($\vphi\in\D(\Om)$, $x\in\Om$)
\begin{equation}\label{colemb}
\iota(u)(\vphi,x):=(u*\vphi)(x)=\langle u,\vphi(.-x)\rangle.
\end{equation}
In the local situation these two formulae give rise to different formulations
(``Jelinek formalism'' vs.\ ``Colombeau formalism'') which in turn have been shown to give rise to equivalent
theories in \cite[Ch.\ 5]{gfks}.
(As a historical remark we mention that, in fact, the embedding of distributions in
Colombeau's first approach (\cite{c0}), if written out explicitly, would take the form
$\iota(T)(\omega)=\langle T,\omega\rangle$, thus anticipating ``half'' of Jelinek's
formula. For a detailed discussion of the algebra introduced in \cite[Def.\ 3.4.6]{c0}, see
\cite[Ch.\ 1.6]{gkos}.)
In the ``Colombeau formalism'' property (iii) is an easy consequence of the interplay between convolution and
differentiation. Indeed, for any multi-index $\al$ we have
$(\iota\pa^\al u)(\vphi,x)=(\pa^\al u* \vphi)(x)=\pa^\al(u * \vphi)(x)=(\pa^\al\iota(u))(\vphi,x)$.
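This interplay between convolution and differentiation can be illustrated numerically: for a regular distribution $u$ the $x$-derivative of $\iota(u)(\vphi,x)=(u*\vphi)(x)$ agrees with $\iota(u')(\vphi,x)$. A short Python sketch (the test function, the choice $u=\sin$ and all numerics are ours):

```python
# Numerical illustration of property (iii) in the Colombeau formalism:
# with iota(u)(phi, x) = (u * phi)(x), differentiating in x commutes with
# the embedding, here for the regular distribution u = sin (so u' = cos).
# (Illustrative sketch; the test function phi and all numerics are ours.)
import numpy as np

s = 0.25                                            # width of the bump phi
phi = lambda t: np.exp(-t**2 / (2 * s)) / np.sqrt(2 * np.pi * s)

y = np.linspace(-10.0, 10.0, 4001)
dy = y[1] - y[0]

def iota(u, x):                                     # (u * phi)(x)
    return np.sum(u(y) * phi(x - y)) * dy

x0, h = 0.3, 1e-4
lhs = (iota(np.sin, x0 + h) - iota(np.sin, x0 - h)) / (2 * h)   # d/dx iota(u)
rhs = iota(np.cos, x0)                                          # iota(u')
print(abs(lhs - rhs) < 1e-6)
```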
However, on a manifold---in the absence of a concept of convolution---no analogue to formula (\ref{colemb})
exists and Jelinek's embedding becomes the only one possible. Also on $M$ partial
derivatives have to be replaced by Lie derivatives with respect to smooth
vector fields, which---to keep the presentation simple---we will generally assume to be complete.
Recall that the Lie derivative of a smooth function $f$ on $M$ w.r.t.\ the
vector field $X$ is defined by
$L_Xf:=\left.\frac{d}{d\tau}\right|_{\tau=0}\big(\mathrm{Fl}_\tau^X\big)^*f,$
where $(\mathrm{Fl}_\tau^X)^*$ denotes the pullback under the flow of $X$.
So, in order to define derivative operators on our basic space, we first have to introduce
the pullback action of diffeomorphisms induced on $\ensuremath{\hat{\mathcal E}}(M)$, which we shall do next.
\medskip
\noindent
\subsection{Action of diffeomorphisms.}
Let $\mu:M\to M$ denote a diffeomorphism of the manifold $M$. Classically we have the following action
of $\mu$ on smooth functions resp.\ distributions
\[\mu^*f(p):=f(\mu p)\quad \mbox{and}\quad \langle\mu^*u,\omega\rangle:=\langle u,\mu_*\omega\rangle,\]
where we have used $\mu p$ as shorthand notation for $\mu(p)$ and $\mu_*\omega$ denotes the pushforward of
the $n$-form $\omega$: when written in coordinates
the second of the above formulae takes the familiar form of the usual pullback of distributions.
Therefore the natural definition of the action of $\mu$ on $\ensuremath{\hat{\mathcal E}}(M)$ which we denote
by $\hat\mu^*$ is given by
\[\hat\mu^*R(\omega,p):=R(\mu_*\omega,\mu p).\]
This definition guarantees that the embeddings are diffeomorphism invariant. Indeed we have
\[ \hat\mu^*\circ\sigma=\sigma\circ\mu^*\quad \mbox{and}\quad \hat\mu^*\circ\iota=\iota\circ\mu^*,\]
by the following simple calculation
\[
\hat\mu^*(\iota(u))(\omega,p)=\iota(u)(\mu_*\omega,\mu p)=\langle u,\mu_*\omega\rangle=
\langle\mu^*u,\omega\rangle=\iota(\mu^*(u))(\omega,p),
\]
and similarly for $\sigma$.
We are now ready to define our derivative operators on $\ensuremath{\hat{\mathcal E}}(M)$.
\medskip
\noindent
\subsection{Lie derivatives.}
Let $X$ be a smooth vector field on $M$ and $R\in\ensuremath{\hat{\mathcal E}}(M)$. We define the Lie derivative of $R$ w.r.t.\ $X$ by
\[\hat L_X R:=\left.\frac{d}{d\tau}\right|_{\tau=0}\big(\widehat{\mathrm{Fl}^X_\tau}\big)^*\, R.\]
Note that property (iii) is now immediate. In fact, the Lie derivative automatically commutes with the
embeddings since the action of a diffeomorphism does, i.e., we have
\[\hat L_X\circ\sigma=\sigma\circ L_X\quad \mbox{and}\quad \hat L_X\circ\iota=\iota\circ L_X.\]
Rather than giving the one-line proof---which would actually be little more than a replication of the
above calculation showing that $\iota$ commutes with $\mu^*$---we derive an explicit formula for $\hat L_X R$.
We have
\begin{eqnarray}\label{jelder}
\hat L_XR(\omega,p)
&=&\left.\frac{d}{d\tau}\right|_{\tau=0}\big(\widehat{\mathrm{Fl}^X_\tau}\big)^*\, R(\omega,p)
\,=\,\left.\frac{d}{d\tau}\right|_{\tau=0}R\big((\mathrm{Fl}^X_\tau)_*\omega,\mathrm{Fl}^X_\tau p\big)\nonumber\\
&=&d_1R(\omega,p)\left.\frac{d}{d\tau}\right|_{\tau=0}\underbrace{\big(\mathrm{Fl}^X_\tau\big)_*\omega}_ {(\mathrm{Fl}^X_{-\tau})^*\omega}
\,+\,d_2R(\omega,p)\underbrace{\left.\frac{d}{d\tau}\right|_{\tau=0}\mathrm{Fl}^X_\tau p}_{X(p)}\nonumber\\
&=&-d_1R(\omega,p)\,L_X\omega+L_XR(\omega,.)\mid_p,
\end{eqnarray}
where in the second line we have used the chain rule. Note that already in the definition of
$\ensuremath{\hat{\ensuremath{\hat{\mathcal E}_M} athcal E}} (M)$ we have used calculus in infinite dimensions, but it is at this point where we can
emphasise the inevitability of using it:
In a manifold setting, there is only ``Jelinek's formalism'' available. To have the embedding
commute with the action of diffeomorphisms respectively Lie derivatives, the structure of the
very formula (\ref{jelder}) clearly necessitates taking derivatives also w.r.t.\ the $\omega$-slot.
This is the ultimate reason for requiring smoothness w.r.t.\ all variables in the definition
of the basic space. For a detailed account on these matters see \cite[pp.\ 103--107]{gkos}.
However, in order to keep our presentation
simple, we only remark that Chapters 4, 6 and 14 of \cite{gfks} provide all relevant
details as to calculus in convenient vector spaces (see \cite{km}) for the setting at hand,
and (cf.\ \cite{cm}, p.\ 362) `we invite the reader to admit the [respective] smoothness properties'.
Rather we would like to draw the attention of our readers to the fact that formula (\ref{jelder})---in
the local context---already appeared in Remark 22 of \cite{jel}. However, in that reference
it is an operational consequence of Jelinek's formalism, whereas here it arises
as a simple consequence of our natural choice of definitions.
At the end of this section we digress to remark that the geometric
approach to the definition of the Lie derivative
can also be employed to define the usual derivative of distributions, say on the real line.
Indeed, taking $X=\partial_x$, the flow is given by a translation, i.e., $\mathrm{Fl}^X_\tau x=x+\tau$, and we have
\begin{eqnarray*}
\langle u',\varphi\rangle
&=&\langle L_Xu,\varphi\rangle
\,=\,\langle\left.\frac{d}{d\tau}\right|_{\tau=0}\big(\mathrm{Fl}^X_\tau\big)^*u,\varphi\rangle\\
&=&\left.\frac{d}{d\tau}\right|_{\tau=0}\langle u,\big(\mathrm{Fl}^X_{-\tau}\big)^*\varphi\rangle
\,=\,\langle u,-L_X\varphi\rangle
\,=\,-\langle u,\varphi'\rangle.
\end{eqnarray*}
Observe the somewhat amusing fact that in this approach no explicit reference to integration by parts occurs.
Indeed, in this way integration by parts follows from the translation invariance of the integral.
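As a numerical sanity check of this flow-based formula---our own sketch, not part of the original development; the sample functions and names below are ours---one can verify for a regular distribution $u$ that the derivative of $\tau\mapsto\langle u,(\mathrm{Fl}^X_{-\tau})^*\varphi\rangle$ at $\tau=0$ agrees with $-\langle u,\varphi'\rangle$:

```python
import numpy as np

# Regular distribution u given by a smooth, rapidly decaying function,
# paired with a test function phi by integration.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]

u = np.exp(-x**2)
phi = lambda y: np.exp(-(y - 1.0)**2)
dphi = lambda y: -2.0 * (y - 1.0) * np.exp(-(y - 1.0)**2)  # phi'

integrate = lambda vals: float(vals.sum() * dx)  # crude quadrature

# Flow side: (Fl^X_{-tau})^* phi (x) = phi(x - tau); central difference in tau.
tau = 1e-5
flow_side = (integrate(u * phi(x - tau)) - integrate(u * phi(x + tau))) / (2.0 * tau)

# Integration-by-parts side: -<u, phi'>.
ibp_side = -integrate(u * dphi(x))
```

Both quantities agree up to quadrature and finite-difference error, with no explicit integration by parts on the flow side.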
\section{Summary and conclusions.}
To sum up, we have constructed the basic space for the Colombeau algebra
$\hat{\mathcal G}(M)$ of \cite{gksv} together with the embeddings of smooth functions resp.\ distributions
and the Lie derivative. Recall that our definitions were all geometrically motivated
and allowed us to obtain properties (i)--(iii) in a remarkably effortless way.
Finally we sketch how one establishes property (iv), i.e., the constructions outlined in step (B) above.
To this end we employ a quotient construction
to identify the images of ${\mathcal C}^\infty(M)$ under the two embeddings $\sigma$ and $\iota$.
More precisely, we want to identify the two middle terms of
\[\sigma(f)(\omega,p)=f(p)\ \sim\ \int f(q)\omega(q)=\iota(f)(\omega,p).\]
The obvious idea is to set $\omega(q)=\delta_p(q)$ which clearly is only possible asymptotically.
At this point the usual asymptotic estimates (`tests' in the language of \cite{gfks})
come into play (for the present case see
\cite[Defs.\ 3.10, 3.11]{gksv}) and are used to single out the moderate resp.\ the negligible elements of
$\hat{\mathcal E}(M)$. Denoting the respective spaces by $\hat{\mathcal E}_M(M)$ resp.\ $\hat{\mathcal N}(M)$, we finally may define
the Colombeau algebra on the manifold $M$ by
\[\hat{\mathcal G}(M):=\hat{\mathcal E}_M(M)/\hat{\mathcal N}(M).\]
It is a differential algebra with respect to the Lie derivative w.r.t.\ smooth vector fields and
the Lie derivative commutes with the embedding of distributions. Moreover, it localises appropriately
to the diffeomorphism invariant local algebra $\hat{\mathcal G}^d(\Omega)$ of \cite{gfks}.
\medskip
The new aspect we have demonstrated in this contribution is the following: The fact that the Lie
derivative commutes with both embeddings (on the level of the basic space, hence with the embedding
of distributions after taking the quotient) is a consequence of diffeomorphism invariance of the
embeddings which itself is due to a natural choice of the definition of the action of diffeomorphisms
on the basic space as well as of the natural definition of the embeddings themselves. Hence in the global setting
property (iii)---which in the local context follows from the properties of the convolution---is a direct
consequence of the diffeomorphism invariance of the construction.
{\em Acknowledgements:} The author wishes to express his gratitude to the organisers of the
GF07 conference in B\c{e}dlewo, in particular to Swiet\l{}ana Minczewa-Kami\'nska and to
Andrzej Kami\'nski.
\begin{thebibliography}{10}
\bibitem{c0}
J.~F. Colombeau,
{\em New Generalized Functions and Multiplication of Distributions}, North Holland, Amsterdam, 1984.
\bibitem{c1}
J.~F. Colombeau,
{\em Elementary Introduction to New Generalized Functions}, North Holland, Amsterdam, 1985.
\bibitem{c2}
J.~F. Colombeau,
{\em Multiplication of distributions,} Bull.\ Amer.\ Math.\ Soc.\ (N.S.) {23} (1990), 251--268.
\bibitem{cm}
J.~F. Colombeau and A. Meril,
{\em Generalized functions and multiplication of distributions on {${\mathcal C}^\infty$} manifolds,}
J.\ Math.\ Anal.\ Appl.\ {186} (1994), 357--364.
\bibitem{dkp}
N. Dapic, M. Kunzinger, and S. Pilipovic,
{\em Symmetry group analysis of weak solutions,} Proc.\ London Math.\ Soc.\ {84}(3) (2002), 686--710.
\bibitem{gfks}
M. Grosser, E. Farkas, M. Kunzinger and R. Steinbauer,
{\em On the foundations of nonlinear generalized functions {I}, {II},}
Mem.\ Amer.\ Math.\ Soc.\ {153}(729), 2001.
\bibitem{g}
M. Grosser,
{\em Tensor valued Colombeau functions on manifolds,} (this volume).
\bibitem{gkos}
M. Grosser, M. Kunzinger, M. Oberguggenberger and R. Steinbauer,
{\em Geometric Theory of Generalized Functions}, Vol.\ 537, Mathematics and its Applications,
Kluwer Academic Publishers, Dordrecht, 2001.
\bibitem{gksv}
M. Grosser, M. Kunzinger, R. Steinbauer and J. Vickers,
{\em A global theory of algebras of generalized functions,}
Adv.\ Math.\ {166} (2002), 179--206.
\bibitem{gksv2}
M. Grosser, M. Kunzinger, R. Steinbauer and J. Vickers,
{\em A global theory of algebras of generalized functions II: tensor distributions},
in preparation.
\bibitem{jel}
J. Jel\'\i nek,
{\em An intrinsic definition of the {C}olombeau generalized functions,}
Comment.\ Math.\ Univ.\ Carolinae {40} (1999), 71--95.
\bibitem{km}
A. Kriegl and P. Michor,
{\em The Convenient Setting of Global Analysis,}
Vol.\ 53, AMS Math.\ Surv.\ and Monographs, Providence, 1997.
\bibitem{o}
M. Oberguggenberger,
{\em Multiplication of Distributions and Applications to Partial Differential Equations},
Vol.\ 259, Pitman Research Notes in Mathematics, Longman, Harlow, 1992.
\bibitem{s}
L. Schwartz,
{\em Sur l'impossibilit\'e de la multiplication des distributions,}
C.\ R.\ Acad.\ Sci.\ Paris {239} (1954), 847--848.
\bibitem{sv}
R. Steinbauer and J. Vickers,
{\em The use of generalized functions and distributions in general relativity,}
Class.\ Quantum Grav.\ {23}(10) (2006), R91--R114.
\end{thebibliography}
\end{document}
\begin{document}
\begin{frontmatter}
\title{\textbf{\textsf{Fuzzy ample spectrum contractions in (more general than) non-Archimedean fuzzy metric spaces}}}
\author[Antonio]{Antonio Francisco Rold\'{a}n L\'{o}pez de Hierro\corref{cor1}} \ead{[email protected]}
\author[Erdal,Erdal2]{Erdal Karap{\i}nar} \ead{[email protected], [email protected]}
\author[Naseer]{Naseer Shahzad} \ead{[email protected]}
\cortext[cor1]{Corresponding author: Antonio Francisco Rold\'{a}n L\'{o}pez de Hierro, [email protected]\\
\hspace*{5mm} 2010 Mathematics Subject Classification: 47H10, 47H09, 54H25, 46T99.}
\address[Antonio]{Department of Statistics and Operations Research, University of Granada, Granada, Spain.}
\address[Erdal]{Department of Medical Research, China Medical University, Taichung 40402, Taiwan.}
\address[Erdal2]{Department of Mathematics, \c{C}ankaya University, 06790, Etimesgut, Ankara, Turkey.}
\address[Naseer]{Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O.B. 80203, Jeddah 21589, Saudi Arabia.}
\date{October 22$^{\text{nd}}$, 2020}
\begin{abstract}
Taking into account that Rold\'{a}n \emph{et al.}'s \emph{ample spectrum contractions} have managed to extend and unify more than ten distinct families of contractive mappings in the setting of metric spaces, in this manuscript we present a first study on how such a concept can be implemented in the more general framework of fuzzy metric spaces in the sense of Kramosil and Mich\'{a}lek. We introduce two distinct approaches to the concept of \emph{fuzzy ample spectrum contraction} and we prove general results about the existence and uniqueness of fixed points. The proposed notions enjoy the following advantages with respect to previous approaches: (1) they make it possible to develop fixed point theory in a very general fuzzy framework (for instance, the underlying fuzzy space is not necessarily complete); (2) the procedures that we employ are able to overcome the technical drawbacks arising in fuzzy metric spaces in the sense of Kramosil and Mich\'{a}lek (which do not appear in fuzzy metric spaces in the sense of George and Veeramani); (3) we introduce a novel property about sequences that are not Cauchy in a fuzzy space in order to consider a more general context than non-Archimedean fuzzy metric spaces; (4) the contractivity condition associated to \emph{fuzzy ample spectrum contractions} does not have to be expressed in separate variables; (5) such fuzzy contractions generalize some well-known families of fuzzy contractive operators such as the class of all Mihe\c{t}'s fuzzy $\psi$-contractions. As a consequence of these properties, this study gives a positive partial answer to a question posed by Mihe\c{t} some years ago.
\end{abstract}
\begin{keyword}
Ample spectrum contraction \sep Fuzzy metric space \sep Fixed point \sep Property $\mathcal{NC}$ \sep $(T,\mathcal{S})$-sequence
\end{keyword}
\end{frontmatter}
\section{\textbf{Introduction}}
\emph{Fixed point theory} has become, in recent years, a flourishing
branch of Nonlinear Analysis due, especially, to its ability to find solutions
of nonlinear equations. After Banach's pioneering definition, a multitude of
results have appeared that generalize his famous contractive mapping
principle. Some extensions generalize the contractive condition (see, e.g., \cite{AyKaRo}) and others
focus their efforts on the metric characteristics of the underlying space.
Fixed point theory has been successfully applied in many fields, especially in
Nonlinear Analysis, where it constitutes a major instrument for establishing the existence and uniqueness of solutions of several
classes of equations such as functional equations, matrix equations \cite{AsMo,Berzig2012}, integral equations \cite{AyNaSaYa,HaLoSa,SaHuShFaRa}, nonlinear systems \cite{AAKR}, etc.
In \cite{RoSh-ample1} Rold\'{a}n L\'{o}pez de Hierro and Shahzad introduced a
new family of contractions whose main characteristic was its capacity to
extend and unify several classes of well-known previous contractive mappings
in the setting of metric spaces: (1) Banach's contractions; (2) manageable
contractions \cite{DuKh}; (3) Khojasteh \emph{et al.}'s $\mathcal{Z}
$-contractions involving \emph{simulation functions} \cite{KhShRa,RoKaMa}; (4)
Geraghty's contractions \cite{Geraghty}; (5) Meir and Keeler's contractions
\cite{MeKe,Lim}; (6) $R$-contractions \cite{RoSh1}; (7) $(R,\mathcal{S}
)$-contractions \cite{RoSh2}; (8) $(A,\mathcal{S})$-contractions
\cite{ShRoFa1}; (9) Samet \emph{et al.}'s contractions \cite{SaVeVe}; (10) Shahzad
\emph{et al.}'s contractions \cite{ShKaRo}; (11) Wardowski's $F$-contractions
\cite{War}; etc. In a broad (or even philosophical) sense, the conditions that
define a Rold\'{a}n \emph{et al.}'s ample spectrum contraction should rather
be interpreted as the minimal set of properties that any contraction must
satisfy. Therefore, from our point of view, any other approach to contractive
mappings in the field of fixed point theory could take them into account.
One of the main settings in which more work has been done to extend contractive mappings in metric spaces is the context of fuzzy metric spaces (see, e.g., \cite{AlMi,GV,Grabiec,GrSa,Mi3,RoKaManro,FA-spaces}). These abstract spaces are capable of providing an appropriate point of view for determining how similar or distinct two imprecise quantities are, which gives added value to the real analytical techniques that can be extended to the fuzzy setting. In fact, the category of fuzzy metric spaces is so rich and broad that, as we shall see, the notion of fuzzy ample spectrum contraction does not directly come from its corresponding real counterpart. As a consequence of this great variety of examples, fuzzy metrics are increasingly appreciated in the scientific field in general due to the enormous popularity and importance of the results that are currently being obtained when working with fuzzy sets instead of classical sets, especially in Computation and Artificial Intelligence.
In this work we give a first introduction on how ample spectrum contractions
can be extended to fuzzy metric spaces. We will consider fuzzy metric spaces
in the sense of Kramosil and Mich\'{a}lek \cite{KrMi}, which are more general
than fuzzy metric spaces in the sense of George and Veeramani \cite{GV}.
Undoubtedly, the second kind of fuzzy metric space is easier to handle than
the first one because, in the second case, the metric only takes strictly
positive values. Kramosil and Mich\'{a}lek's fuzzy spaces are so general that
they include the case in which the distance between two points is infinite,
that is, the fuzzy metric is constantly zero (see \cite{MRR3}). This fact greatly complicates,
in their setting, the proofs of many fixed point theorems that can be carried out in the context of
George and Veeramani's fuzzy spaces (which has been a significant challenge).
In our study, we have tried to be as faithful as possible to the original
definition of ample spectrum contraction in metric spaces. This could mislead
the reader into the wrong idea that fuzzy ample spectrum contractions are
relatively similar to ample spectrum contractions in metric spaces. This is a
completely false statement: fuzzy ample spectrum contractions are so singular
that, for the moment, these authors have not been able to demonstrate that
each ample spectrum contraction in a metric space can be seen as a fuzzy ample
spectrum contraction in a fuzzy metric space (we have only proved it under
one additional condition). The difficulty appears when, in a fuzzy metric
space, we try to define a mapping that could extend the mapping $\varrho$ of
the ample spectrum contraction. In fact, as we commented above, the wide variety of
distinct classes of fuzzy metrics, and the way in which the fuzzy distance between
two points is represented by a distance distribution function, greatly complicates the
task of studying some possible relationship between ample spectrum contractions in
real metric spaces and in fuzzy metric spaces.
The presented fuzzy contractivity condition makes only use of two metric terms:
the fuzzy distance between two distinct points, $M(x,y,t)$, and the
fuzzy distance between their images, $M(Tx,Ty,t)$, under the self-mapping $T$.
Traditionally, these two terms have played separate roles in classical fuzzy and non-fuzzy
contractivity conditions: for instance, they usually appear on distinct sides of the inequality.
However, inspired by Khojasteh \emph{et al.}'s $\mathcal{Z}$-contractions \cite{KhShRa,RoKaMa}
(based on \emph{simulation functions}), the contractivity condition associated to
\emph{fuzzy ample spectrum contractions} is not necessarily a constraint on separate variables.
As a consequence of this generalization, it is easy to check that the canonical examples
motivating fixed point theory, that is, Banach's contractive mappings, are particular cases of
\emph{fuzzy ample spectrum contractions}. Furthermore, we illustrate how Mihe\c{t}'s fuzzy
$\psi$-contractions \cite{Mi3} can also be seen as \emph{fuzzy ample spectrum contractions}, and why
this last class of fuzzy contractions is not the best choice to check that
Banach's contractions in metric spaces are fuzzy contractions.
In this paper we introduce two kinds of fuzzy ample spectrum contractions. The
first one is stricter, but it allows us to demonstrate some fixed point
results when the contractive condition does not satisfy additional
constraints. The second definition is more general, but it forces us to handle
a concrete subfamily of contractivity conditions. We prove several distinct
results about the existence and uniqueness of fixed points associated to each
class of fuzzy contractions. Our results generalize some previous fixed point
theorems introduced, on the one hand, by Mihe\c{t} and, on the other hand, by
Altun and Mihe\c{t}.
Moreover, this study gives a positive partial answer to a question posed by Mihe\c{t} in \cite{Mi3} some years ago.
In that paper, the author wondered whether some fixed point theorems involving fuzzy $\psi$-contractions
could also hold in fuzzy metric spaces that do not necessarily satisfy the non-Archimedean property.
To face this challenge, we have introduced a novel assumption that we call \emph{property} $\mathcal{NC}$ (\textquotedblleft non-Cauchy\textquotedblright).
This condition establishes a very concrete behavior for asymptotically regular sequences that do not satisfy the Cauchy condition:
in such cases, the fuzzy distance between two partial subsequences and their corresponding predecessors can be controlled
in terms of convergence. Obviously, our study is a proper generalization because each non-Archimedean fuzzy metric space satisfies this property
even if the associated triangular norm is only continuous at the boundary of the unit square (but not necessarily in its interior).
\vspace*{3mm}
\section{\textbf{Preliminaries}}
For an optimal understanding of this paper, we introduce here some basic
concepts and notations that could also be found in
\cite{G-book,RoSh-ample1,ShRoFa1}. Throughout this manuscript, let
$\mathbb{R}$ be the family of all real numbers, let $\mathbb{I}$ be the real
compact interval $\left[ 0,1\right] $, let $\mathbb{N}=\{1,2,3,\ldots\}$
denote the set of all positive integers and let $\mathbb{N}_{0}=\mathbb{N}
\cup\{0\}$. Henceforth, $X$ will denote a non-empty set. A {\emph{binary
relation on }$X$} is a non-empty subset $\mathcal{S}$ of the Cartesian product
space $X\times X$. The notation $x\mathcal{S}y$ means that $(x,y)\in
\mathcal{S}$. We write $x\mathcal{S}^{\mathcal{\ast}}y$ when $x\mathcal{S}y$
and $x\neq y$. Hence $\mathcal{S}^{\mathcal{\ast}}$ is another binary relation
on $X$ (if it is non-empty). Two points $x$ and $y$ are $\mathcal{S}
$\emph{-comparable} if $x\mathcal{S}y$ or $y\mathcal{S}x$. We say that
$\mathcal{S}$ is \emph{transitive} if we can deduce $x\mathcal{S}z$ from
$x\mathcal{S}y$ and $y\mathcal{S}z$. The \emph{trivial binary relation}
$\mathcal{S}_{X}$ on $X$ is defined by $x\mathcal{S}_{X}y$ for each $x,y\in
X$.
From now on, let $T:X\rightarrow X$ be a map from $X$ into itself. We say that
$T$ is {$\mathcal{S}$\emph{-nondecreasing}} if $Tx\mathcal{S}Ty$ for each
$x,y\in X$ such that $x\mathcal{S}y$. If a point $x\in X$ verifies $Tx=x $,
then $x$ is a \emph{fixed point of }$T$. We denote by $\operatorname*{Fix}(T)$
the set of all fixed points of $T$.
A sequence $\{x_{n}\}_{n\in\mathbb{N}_{0}}$ is called a {\emph{Picard sequence
of }$T$} \emph{based on }$x_{0}\in X$ if $x_{n+1}=Tx_{n}$ for all
$n\in\mathbb{N}_{0}$. Notice that, in such a case, $x_{n}=T^{n}x_{0}$ for each
$n\in\mathbb{N}_{0}$, where $\{T^{n}:X\rightarrow X\}_{n\in\mathbb{N}_{0}}$
are the \emph{iterates of }$T$ defined by $T^{0}=\,$identity, $T^{1}=T $ and
$T^{n+1}=T\circ T^{n}$ for all $n\geq1$.
Following \cite{RoShJNSA}, a sequence $\left\{ x_{n}\right\} $ in $X$ is
\emph{infinite} if $x_{n}\neq x_{m}$ for all $n\neq m$, and $\left\{
x_{n}\right\} $ is \emph{almost periodic} if there exist $n_{0}
,N\in\mathbb{N}$ such that
\[
x_{n_{0}+k+Np}=x_{n_{0}+k}\quad\text{for all }p\in\mathbb{N}\text{ and all
}k\in\left\{ 0,1,2,\ldots,N-1\right\} .
\]
\begin{proposition}
\label{K38 - 22 lem infinite or almost periodic}\textrm{(\cite{RoShJNSA},
Proposition 2.3)} Every Picard sequence is either infinite or almost periodic.
\end{proposition}
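The dichotomy of the proposition can be observed concretely. The following toy example is our own (the map and names are illustrative): a Picard sequence of a self-map of a finite set cannot be infinite, so it must be almost periodic.

```python
# Self-map of {0, 1, 2, 3}: a pre-period at 0, then the cycle 1 -> 2 -> 3 -> 1.
T = {0: 1, 1: 2, 2: 3, 3: 1}

def picard(x0, length):
    """Picard sequence x_0, T x_0, T^2 x_0, ... of the given length."""
    seq, x = [x0], x0
    for _ in range(length - 1):
        x = T[x]
        seq.append(x)
    return seq

seq = picard(0, 50)

# Almost periodicity with n0 = 1 and N = 3: x_{n0 + k + N p} = x_{n0 + k}.
almost_periodic = all(seq[1 + k + 3 * p] == seq[1 + k]
                      for p in range(1, 15) for k in range(3))
```

Here the sequence is $0,1,2,3,1,2,3,\ldots$, which is not infinite (values repeat) and is almost periodic in the sense of the definition above.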
\subsection{\textbf{Ample spectrum contractions in metric spaces}}
We describe here the notion of \emph{ample spectrum contraction} in the
context of a metric space. Such concept involves a key kind of sequences of
real numbers that must be highlighted. Let $(X,d)$ be a metric space, let
$\mathcal{S}$ be a binary relation on $X$, let $T:X\rightarrow X$ be a
self-mapping and let $\varrho:A\times A\rightarrow\mathbb{R}$ be a function
where $A\subseteq\mathbb{R}$ is a non-empty subset of real numbers.
\begin{definition}
\label{K38 - 02 def TS sequence in MS}\textrm{(\cite{RoSh-ample1}, Definition
3)} Let $\{a_{n}\}$ and $\{b_{n}\}$ be two sequences of real numbers. We say
that $\{\left( a_{n},b_{n}\right) \}$ is a {$(T,\mathcal{S}^{\ast}
)$\emph{-sequence}} if there exist two sequences $\{x_{n}\},\{y_{n}\}\subseteq
X$ such that
\[
x_{n}\mathcal{S}^{\ast}y_{n},\quad Tx_{n}\mathcal{S}^{\ast}Ty_{n},\quad
a_{n}=d(Tx_{n},Ty_{n})\quad\text{and}\quad b_{n}=d(x_{n},y_{n})\quad\text{for
all }n\in\mathbb{N}_{0}.
\]
\end{definition}
The previous class of sequences plays a crucial role in the third condition of
the following definition.
\begin{definition}
\label{K38 - 01 def ample spectrum contraction MS}\textrm{(\cite{RoSh-ample1},
Definition 4)} We will say that $T:X\rightarrow X$ is an \emph{ample spectrum
contraction} w.r.t. $\mathcal{S}$ and $\varrho$ if the following four
conditions are fulfilled.
\begin{description}
\item[$\left( \mathcal{B}_{1}\right) $] $A$ is nonempty and $\left\{
\,d\left( x,y\right) \in\left[ 0,\infty\right) :x,y\in X,~x\mathcal{S}
^{\ast}y\,\right\} \subseteq A$.
\item[$\left( \mathcal{B}_{2}\right) $] If $\{x_{n}\}\subseteq X$ is a
Picard $\mathcal{S}$-nondecreasing sequence of $T$ such that
\[
x_{n}\neq x_{n+1}\quad\text{and}\quad\varrho\left( d\left( x_{n+1}
,x_{n+2}\right) ,d\left( x_{n},x_{n+1}\right) \right) \geq0\quad\text{for
all }n\in\mathbb{N}_{0},
\]
then $\{d\left( x_{n},x_{n+1}\right) \}\rightarrow0$.
\item[$\left( \mathcal{B}_{3}\right) $] If $\{\left( a_{n},b_{n}\right)
\}\subseteq A\times A$ is a $(T,\mathcal{S}^{\ast})$-sequence such that
$\{a_{n}\}$ and $\{b_{n}\}$ converge to the same limit $L\geq0$ and verifying
that $L<a_{n}$ and $\varrho(a_{n},b_{n})\geq0$ for all $n\in\mathbb{N}_{0}$,
then $L=0$.
\item[$\left( \mathcal{B}_{4}\right) $] {$\varrho\left(
d(Tx,Ty),d(x,y)\right) \geq0\quad$for all $x,y\in X\quad$such that
$x\mathcal{S}^{\ast}y$ and $Tx\mathcal{S}^{\ast}Ty$.}
\end{description}
\end{definition}
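To fix ideas, here is a small numerical sketch of our own (not taken from \cite{RoSh-ample1}; the concrete choices are illustrative): for the Banach contraction $Tx=x/2$ on the real line with the trivial binary relation and the standard auxiliary function $\varrho(t,s)=\lambda s-t$, condition $(\mathcal{B}_{4})$ holds, and the consecutive Picard distances tend to $0$ as $(\mathcal{B}_{2})$ predicts.

```python
# Toy check: Banach contraction with constant lam = 1/2 and rho(t, s) = lam*s - t.
lam = 0.5
T = lambda x: 0.5 * x
d = lambda x, y: abs(x - y)
rho = lambda t, s: lam * s - t

# (B4) with the trivial binary relation: rho(d(Tx, Ty), d(x, y)) >= 0 for x != y.
pairs = [(-3.0, 7.0), (0.25, 0.5), (-1.0, -0.125), (10.0, 9.0)]
b4_holds = all(rho(d(T(x), T(y)), d(x, y)) >= 0 for x, y in pairs)

# Picard sequence based on x0 = 8: the distances d(x_n, x_{n+1}) shrink to 0.
x, dists = 8.0, []
for _ in range(40):
    dists.append(d(x, T(x)))
    x = T(x)
```

Indeed, $\varrho(d(Tx,Ty),d(x,y))\geq0$ is here exactly the Banach inequality $d(Tx,Ty)\leq\lambda\,d(x,y)$.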
In some cases, we will also consider the following two auxiliary properties.
\begin{description}
\item[$\left( \mathcal{B}_{2}^{\prime}\right) $] If $x_{1},x_{2}\in X$ are
two points such that
\[
T^{n}x_{1}\mathcal{S}^{\ast}T^{n}x_{2}\quad\text{and}\quad{\varrho({d}\left(
T^{n+1}x_{1},T^{n+1}x_{2}\right) ,{d}\left( T^{n}x_{1},T^{n}x_{2}\right)
)\geq0}\quad\text{for all }n\in\mathbb{N}_{0},
\]
then $\{{d}\left( T^{n}x_{1},T^{n}x_{2}\right) \}\rightarrow0$.
\item[$\left( \mathcal{B}_{5}\right) $] If $\{\left( a_{n},b_{n}\right)
\}$ is a $(T,\mathcal{S}^{\ast})$-sequence such that $\{b_{n}\}\rightarrow0$
and $\varrho(a_{n},b_{n})\geq0$ for all $n\in\mathbb{N}_{0}$, then
$\{a_{n}\}\rightarrow0$.
\end{description}
Rold\'{a}n L\'{o}pez de Hierro and Shahzad demonstrated that, under very weak
conditions, these contractions have a fixed point and, if we assume other
constraints, then such a fixed point is unique (see \cite{RoSh-ample1}). After
that, the same authors and Karap\i nar were able to extend their study to
Branciari distance spaces.
\subsection{\textbf{Triangular norms}}
A \emph{triangular norm} \cite{ScSk}\ (for short, a \emph{t-norm}) is a
function $\ast:\mathbb{I}\times\mathbb{I}\rightarrow\mathbb{I}$ that is
associative, commutative, non-decreasing in each argument and has $1$ as
unit (that is, $t\ast1=t$ for all $t\in\mathbb{I}$). It is usual for authors
to consider continuous t-norms in their studies. A t-norm
$\ast$ is \emph{positive} if $t\ast s>0$ for all $t,s\in\left( 0,1\right] $.
Given two t-norms $\ast$ and $\ast^{\prime}$, we will write $\ast\leq
\ast^{\prime}$ when $t\ast s\leq t\ast^{\prime}s$ for all $t,s\in\mathbb{I}$.
Examples of t-norms are the following ones:
\[
\begin{tabular}
[c]{ll}
$\text{Product }\ast_{P}:$ & $t\ast_{P}s=t\,s$\\
$\text{\L ukasiewicz }\ast_{L}:$ & $t\ast_{L}s=\max\{0,t+s-1\}$\\
$\text{Minimum }\ast_{m}:$ & $t\ast_{m}s=\min\{t,s\}$\\
$\text{Drastic }\ast_{D}:$ & $t\ast_{D}s=\left\{
\begin{tabular}
[c]{ll}
$0,$ & if $t<1$ and $s<1,$\\
$\min\{t,s\},$ & if $t=1$ or $s=1.$
\end{tabular}
\right. $
\end{tabular}
\]
If $\ast$ is an arbitrary t-norm, then $\ast_{D}\leq\ast\leq\ast_{m}$, that
is, the drastic t-norm is the absolute minimum and the minimum t-norm is the
absolute maximum among the family of all t-norms (see \cite{t-norm}).
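The four t-norms above can be transcribed directly; the following quick random check of the stated ordering $\ast_{D}\leq\ast\leq\ast_{m}$ is our own verification sketch.

```python
import random

t_P = lambda t, s: t * s                  # product
t_L = lambda t, s: max(0.0, t + s - 1.0)  # Lukasiewicz
t_m = lambda t, s: min(t, s)              # minimum
t_D = lambda t, s: min(t, s) if t == 1.0 or s == 1.0 else 0.0  # drastic

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(1000)]

# Drastic below, minimum above, for the two intermediate t-norms.
ordering_ok = all(t_D(t, s) <= star(t, s) <= t_m(t, s)
                  for t, s in samples for star in (t_P, t_L))
```

For example, $1\ast_{P}0.7=0.7$ (unit element) while $0.6\ast_{L}0.3=\max\{0,-0.1\}=0$.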
\begin{definition}
We will say that a t-norm $\ast$ is \emph{continuous at the }$1$
\emph{-boundary} if it is continuous at each point of the type $\left(
1,s\right) $ where $s\in\mathbb{I}$ (that is, if $\{t_{n}\}\rightarrow1$ and
$\{s_{n}\}\rightarrow s$, then $\{t_{n}\ast s_{n}\}\rightarrow1\ast s=s$).
\end{definition}
Obviously, each continuous t-norm is continuous at the $1$-boundary.
\begin{proposition}
\label{K38 - 18 propo cancellation}Let $\{a_{n}\},\{b_{n}\},\{c_{n}
\},\{d_{n}\},\{e_{n}\}\subseteq\mathbb{I}$ be five sequences and let
$L\in\mathbb{I}$ be a number such that $\{a_{n}\}\rightarrow L$,
$\{b_{n}\}\rightarrow1$, $\{d_{n}\}\rightarrow1$ and $\{e_{n}\}\rightarrow L$.
Suppose that $\ast$ is a continuous at the $1$-boundary $t$-norm and that
\[
a_{n}\geq b_{n}\ast c_{n}\ast d_{n}\geq e_{n}\quad\text{for all }
n\in\mathbb{N}.
\]
Then $\{c_{n}\}$ converges to $L$.
\end{proposition}
\begin{proof}
As $\{c_{n}\}\subseteq\mathbb{I}=\left[ 0,1\right] $ is bounded, it has
a convergent partial subsequence. Let $\{c_{\sigma(n)}\}$ be an arbitrary
convergent partial subsequence of $\{c_{n}\}$ and let $L^{\prime}
=\lim_{n\rightarrow\infty}c_{\sigma(n)}$. Since $\ast$ is continuous at the
point $\left( 1,L^{\prime}\right) $, $\{b_{\sigma(n)}\}\rightarrow1$ and
$\{c_{\sigma(n)}\}\rightarrow L^{\prime}$, then $\{b_{\sigma(n)}\ast
c_{\sigma(n)}\}\rightarrow1\ast L^{\prime}=L^{\prime}$, and as $\{d_{\sigma
(n)}\}\rightarrow1$, then $\{b_{\sigma(n)}\ast c_{\sigma(n)}\ast d_{\sigma
(n)}\}\rightarrow L^{\prime}\ast1=L^{\prime}$. Furthermore, taking into
account that $a_{n}\geq b_{n}\ast c_{n}\ast d_{n}\geq e_{n}$ for all
$n\in\mathbb{N}$, we deduce that
\[
L=\lim_{n\rightarrow\infty}a_{\sigma(n)}\geq\lim_{n\rightarrow\infty}\left(
b_{\sigma(n)}\ast c_{\sigma(n)}\ast d_{\sigma(n)}\right) \geq\lim
_{n\rightarrow\infty}e_{\sigma(n)}=L.
\]
Hence $L^{\prime}=\lim_{n\rightarrow\infty}\left( b_{\sigma(n)}\ast
c_{\sigma(n)}\ast d_{\sigma(n)}\right) =L$. This proves that any convergent
partial subsequence of $\{c_{n}\}$ converges to $L$. Next we consider the
limit inferior and the limit superior of $\{c_{n}\}$. The previous argument
shows that
\[
L=\liminf_{n\rightarrow\infty}c_{n}\leq\limsup_{n\rightarrow\infty}c_{n}=L.
\]
As the limit inferior and the limit superior of $\{c_{n}\}$ are equal to $L$,
the sequence $\{c_{n}\}$ is convergent and its limit is $L$.
\end{proof}
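The proposition can be illustrated numerically with the product t-norm, which is continuous and hence continuous at the $1$-boundary. The concrete sequences below are our own choices, built to satisfy the hypotheses with $L=0.3$.

```python
# Sequences satisfying the hypotheses of the cancellation proposition
# for the product t-norm, with L = 0.3.
L, N = 0.3, 5000
c = [L + 0.5 * (-1) ** n / (n + 2) for n in range(N)]      # c_n -> L
b = [1.0 - 1.0 / (n + 2) for n in range(N)]                # b_n -> 1
dd = b[:]                                                  # d_n = b_n -> 1
mid = [b[n] * c[n] * dd[n] for n in range(N)]              # b_n * c_n * d_n
a = [min(1.0, mid[n] + 1.0 / (n + 2)) for n in range(N)]   # a_n >= mid, a_n -> L
e = [max(0.0, mid[n] - 1.0 / (n + 2)) for n in range(N)]   # e_n <= mid, e_n -> L

sandwich = all(a[n] >= mid[n] >= e[n] for n in range(N))
```

The sandwich inequality holds for every $n$, and one can observe $c_n\to L$, as the proposition asserts.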
\begin{example}
The cancellation property shown in Proposition
\ref{K38 - 18 propo cancellation} is not satisfied by all t-norms. For
instance, let $\ast_{D}$ be the drastic t-norm (which is not continuous at any
point $(1,s)$ or $\left( s,1\right) $ of the $1$-boundary when $0<s<1$). Let
$L=0$ and let $\{a_{n}\},\{b_{n}\},\{c_{n}\},\{d_{n}\},\{e_{n}\}\subseteq
\mathbb{I}$ be the sequences on $\mathbb{I}$ given, for all $n\in\mathbb{N}$,
by:
\[
a_{n}=e_{n}=0,\quad b_{n}=d_{n}=1-\frac{1}{n},\quad c_{n}=\frac{1}{2}.
\]
Then $b_{n}\ast_{D}c_{n}=b_{n}\ast_{D}c_{n}\ast_{D}d_{n}=a_{n}=e_{n}=L=0$ for
all $n\in\mathbb{N}$, $\{b_{n}\}\rightarrow1$ and $\{d_{n}\}\rightarrow1$.
However, $\{c_{n}\}$ does not converge to $L=0$. In fact, it does not have any partial
subsequence converging to $L=0$.
\end{example}
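The computations of the example are elementary and can be checked directly; the transcription below is ours.

```python
# With the drastic t-norm, b_n * c_n * d_n vanishes for every n (neither
# factor ever equals 1), while c_n = 1/2 stays away from L = 0.
t_D = lambda t, s: min(t, s) if t == 1.0 or s == 1.0 else 0.0

N = 200
b = [1.0 - 1.0 / n for n in range(1, N + 1)]  # b_n = d_n -> 1, but b_n < 1
c = [0.5] * N
middle = [t_D(t_D(b[n], c[n]), b[n]) for n in range(N)]  # b_n * c_n * d_n
```

So the hypotheses of Proposition \ref{K38 - 18 propo cancellation} fail only through the continuity of the t-norm, yet the conclusion fails as well.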
\subsection{\textbf{Fuzzy metric spaces}}
In this subsection we introduce two distinct notions of \emph{fuzzy metric
space} that represent natural extensions of the concept of \emph{metric space}
to a setting in which some uncertainty or imprecision can be considered when
determining the distance between two points.
\begin{definition}
\label{definition KM-space}\textrm{(cf. Kramosil and Mich\'{a}lek
\cite{KrMi})} A triplet $(X,M,\ast$) is called a \emph{fuzzy metric space in
the sense of Kramosil and Mich\'{a}lek} (briefly, a \emph{KM-FMS}) if $X$ is
an arbitrary non-empty set, $\ast$ is a $t$-norm and $M:X\times X\times\left[
0,\infty\right) \rightarrow\mathbb{I} $ is a fuzzy set satisfying the
following conditions, for each $x,y,z\in X$, and $t,s\geq0$:
\begin{description}
\item[(KM-1)] $M(x,y,0)=0$;
\item[(KM-2)] $M(x,y,t)=1$ for all $t>0$ if, and only if, $x=y$;
\item[(KM-3)] $M(x,y,t)=M(y,x,t)$;
\item[(KM-4)] $M(x,z,t+s)\geq M(x,y,t)\ast M(y,z,s)$;
\item[(KM-5)] $M(x,y,\cdot):\left[ 0,\infty\right) \rightarrow\left[
0,1\right] $ is left-continuous.
\end{description}
\end{definition}
The value $M(x,y,t)$ can be interpreted as the degree of nearness between
$x$ and $y$ with respect to $t$. In their original definition, Kramosil and
Mich\'{a}lek did not assume the continuity of the t-norm $\ast$. However, in
later studies on KM-FMS, it is very usual to suppose that $\ast$ is continuous
(see, for instance, \cite{Mi3}).
The following example shows the canonical way in which a metric space can be
seen as a KM-FMS.
\begin{example}
\label{K38 - 24 ex canonical metric FMS}Each metric space $\left( X,d\right)
$ can be seen as a KM-FMS $(X,M^{d},\ast)$, where $\ast$ is any t-norm, by
defining $M^{d}:X\times X\times\left[ 0,\infty\right) \rightarrow\mathbb{I}$ as:
\[
M^{d}\left( x,y,t\right) =\left\{
\begin{tabular}
[c]{ll}
$0,$ & if $t=0,$\\
$\dfrac{t}{t+d\left( x,y\right) },$ & if $t>0.$
\end{tabular}
\right.
\]
Notice that $0<M^{d}\left( x,y,t\right) <1$ for all $t>0$ and all $x,y\in X$
such that $x\neq y$. Furthermore, $\lim_{t\rightarrow\infty}M^{d}\left(
x,y,t\right) =1$ for all $x,y\in X$. More properties of these spaces are
given in Proposition \ref{K38 - 44 propo Md non-Archimedean}.
\end{example}
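A quick random verification of these claims for the usual metric on the real line (our own sketch, with the product t-norm in (KM-4)):

```python
import random

# Standard fuzzy metric M^d induced by the usual metric d(x, y) = |x - y|.
d = lambda x, y: abs(x - y)
M = lambda x, y, t: 0.0 if t == 0.0 else t / (t + d(x, y))

random.seed(1)
pts = [random.uniform(-5.0, 5.0) for _ in range(30)]
ts = [random.uniform(0.01, 10.0) for _ in range(20)]

identity = all(M(x, x, t) == 1.0 for x in pts for t in ts)        # (KM-2)
symmetry = all(M(x, y, t) == M(y, x, t)
               for x in pts[:6] for y in pts[:6] for t in ts[:6])  # (KM-3)
triangle = all(M(x, z, t + s) >= M(x, y, t) * M(y, z, s) - 1e-12
               for x, y, z in zip(pts[:10], pts[10:20], pts[20:30])
               for t, s in zip(ts[:10], ts[10:20]))                # (KM-4), product t-norm
```

Note also that $0<M^{d}(x,y,t)<1$ for $t>0$ and $x\neq y$, in accordance with the example.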
\begin{remark}
\label{K38 - 32 rem Km to GV}Definition \ref{definition KM-space} is so
general that such a class of fuzzy spaces can verify that
\begin{equation}
M(x,y,t)=0\text{\quad for all }t>0\text{\quad when }x\neq y.
\label{K38 - 26 prop bad condition}
\end{equation}
In the context of \emph{extended metric spaces} (whose metrics take values in
the extended interval $\left[ 0,\infty\right] $, including $\infty$; see
\cite{MRR3,FA-spaces}), this property can be interpreted by saying that the
\textquotedblleft distance\textquotedblright\ between the points $x$ and $y$
is infinite. Property (\ref{K38 - 26 prop bad condition}) is usually
unsuitable in the setting of fixed point theory because it often spoils the
arguments given in the proofs. This drawback is sometimes overcome by assuming
that the initial condition is a point $x_{0}\in X$ such that $M(x_{0}
,Tx_{0},t)>0$ for all $t>0$, because the contractivity condition helps to prove
that all the points of the Picard sequence based on $x_{0}$ satisfy the same
condition. In order to inherit such a property here, we would need to assume
additional constraints on the auxiliary functions we will employ to introduce
the announced fuzzy ample spectrum contractions. Such constraints would not be
necessary had we restricted our study to fuzzy metric spaces
in the sense of George and Veeramani, which were introduced in order to
consider a Hausdorff topology on the corresponding fuzzy spaces and to prove a
version of the Baire theorem.
\end{remark}
\begin{definition}
\textrm{(cf. George and Veeramani \cite{GV})} A triplet $(X,M,\ast)$ is called
a \emph{fuzzy metric space in the sense of George and Veeramani} (briefly,
a\emph{\ GV-FMS}) if $X$ is an arbitrary non-empty set, $\ast$ is a continuous
$t$-norm and $M:X\times X\times\left( 0,\infty\right) \rightarrow\mathbb{I}$
is a fuzzy set satisfying the following conditions, for each $x,y,z\in X$, and
$t,s>0$:
\begin{description}
\item[(GV-1)] $M(x,y,t)>0$;
\item[(GV-2)] $M(x,y,t)=1\text{ for all }t>0$ if, and only if, $x=y$;
\item[(GV-3)] $M(x,y,t)=M(y,x,t)$;
\item[(GV-4)] $M(x,z,t+s)\geq M(x,y,t)\ast M(y,z,s)$;
\item[(GV-5)] $M(x,y,\cdot):\left( 0,\infty\right) \rightarrow\left[
0,1\right] $ is a continuous function.
\end{description}
\end{definition}
\begin{lemma}
Each GV-FMS is, in fact, a KM-FMS by extending $M$ to $t=0$ as $M(x,y,0)=0$
for all $x,y\in X$.
\end{lemma}
\begin{lemma}
\textrm{(cf. Grabiec \cite{Grabiec})} If $\left( X,M,\ast\right) $ is a
KM-FMS (respectively, a GV-FMS) and $x,y\in X$, then each function
$M(x,y,\cdot)$ is nondecreasing on $\left[ 0,\infty\right) $ (respectively,
on $\left( 0,\infty\right) $).
\end{lemma}
\begin{proposition}
\label{K40 21 propo either infinite or almost-constant fuzzy}\textrm{(cf.
\cite{RoKaFu}, Proposition 2)} Let $\left\{ x_{n}\right\} $ be a Picard
sequence in a KM-FMS $(X,M,\ast)$ such that $\{M(x_{n},x_{n+1},t)\}\rightarrow
1$ for all $t>0$. If there are $n_{0},m_{0}\in\mathbb{N}$ such that
$n_{0}<m_{0}$ and $x_{n_{0}}=x_{m_{0}}$, then there are $\ell_{0}\in\mathbb{N}$
and $z\in X$ such that $x_{n}=z$ for all $n\geq\ell_{0}$ (that is, $\left\{
x_{n}\right\} $ is constant from a term onwards). In such a case, $z$ is a
fixed point of the self-mapping for which $\left\{ x_{n}\right\} $ is a
Picard sequence.
\end{proposition}
\begin{proof}
Let $T:X\rightarrow X$ be a mapping for which $\left\{ x_{n}\right\} $ is a
Picard sequence, that is, $x_{n+1}=Tx_{n}$ for all $n\in\mathbb{N}$. The set
\[
\Omega=\left\{ \,k\in\mathbb{N}:\exists~n_{0}\in\mathbb{N}\text{ such that
}x_{n_{0}}=x_{n_{0}+k}\,\right\}
\]
is non-empty because $m_{0}-n_{0}\in\Omega$, so it has a minimum $k_{0}
=\min\Omega$. Then $k_{0}\geq1$ and there is $n_{0}\in\mathbb{N}$ such that
$x_{n_{0}}=x_{n_{0}+k_{0}}$. As $\left\{ x_{n}\right\} $ is not infinite,
it must be almost periodic by Proposition
\ref{K38 - 22 lem infinite or almost periodic}. In fact, it is easy to check,
by induction on $p$, that:
\begin{equation}
x_{n_{0}+r+pk_{0}}=x_{n_{0}+r}\quad\text{for all }p\in\mathbb{N}\text{ and all
}r\in\left\{ 0,1,2,\ldots,k_{0}-1\right\} . \label{K40 20 prop}
\end{equation}
If $k_{0}=1$, then $x_{n_{0}}=x_{n_{0}+1}$. Similarly $x_{n_{0}+2}
=Tx_{n_{0}+1}=Tx_{n_{0}}=x_{n_{0}+1}=x_{n_{0}}$. By induction, $x_{n_{0}
+r}=x_{n_{0}}$ for all $r\geq0$, which is precisely the conclusion. Next we
are going to prove that the case $k_{0}\geq2$ leads to a contradiction.
Assume that $k_{0}\geq2$. Then each two terms in the set $\{\,x_{n_{0}
},x_{n_{0}+1},x_{n_{0}+2},\ldots,x_{n_{0}+k_{0}-1}\,\}$ are distinct, that is,
$x_{n_{0}+i}\neq x_{n_{0}+j}$ for all $0\leq i<j\leq k_{0}-1$ (otherwise,
$k_{0}$ would not be the minimum of $\Omega$). Since $x_{n_{0}+i}\neq
x_{n_{0}+i+1}$ for all $i\in\left\{ 0,1,2,\ldots,k_{0}-1\right\} $, then
there is $s_{i}>0$ such that $M(x_{n_{0}+i},x_{n_{0}+i+1},s_{i})<1$ for all
$i\in\left\{ 0,1,2,\ldots,k_{0}-1\right\} $. Since each $M(x_{n_{0}
+i},x_{n_{0}+i+1},\cdot)$ is a non-decreasing function, setting $t_{0}
=\min(\{\,s_{i}:0\leq i\leq k_{0}-1\,\})>0$, we obtain
\[
M(x_{n_{0}+i},x_{n_{0}+i+1},t_{0})<1\quad\text{for all }i\in\left\{
0,1,2,\ldots,k_{0}-1\right\} .
\]
Define
\[
\delta_{0}=\max\left( \left\{ \,M(x_{n_{0}+i},x_{n_{0}+i+1},t_{0}):0\leq
i\leq k_{0}-1\,\right\} \right) \in\left[ 0,1\right) .
\]
Then $\delta_{0}<1$. Since $\lim_{n\rightarrow\infty}M(x_{n},x_{n+1},t_{0})=1
$, there is $r_{0}\in\mathbb{N}$ such that $r_{0}\geq n_{0}$ and $M(x_{r_{0}
},x_{r_{0}+1},t_{0})>\delta_{0}$. Let $i_{0}\in\left\{ 0,1,2,\ldots
,k_{0}-1\right\} $ be the unique integer such that the non-negative
integers $r_{0}-n_{0}$ and $i_{0}$ are congruent modulo $k_{0}$, that
is, $i_{0}$ is the remainder of the integer division of $r_{0}-n_{0}$ by $k_{0}$.
Hence there is a unique integer $p\geq0$ such that $\left( r_{0}
-n_{0}\right) -i_{0}=pk_{0}$. Since $r_{0}=n_{0}+i_{0}+pk_{0}$, property
(\ref{K40 20 prop}) guarantees that
\[
x_{r_{0}}=x_{n_{0}+i_{0}+pk_{0}}=x_{n_{0}+i_{0}},
\]
where $n_{0}+i_{0}\in\left\{ n_{0},n_{0}+1,n_{0}+2,\ldots,n_{0}
+k_{0}-1\right\} $. As a consequence:
\[
\delta_{0}=\max\left( \left\{ \,M(x_{n_{0}+i},x_{n_{0}+i+1},t_{0}):0\leq
i\leq k_{0}-1\,\right\} \right) \geq M(x_{n_{0}+i_{0}},x_{n_{0}+i_{0}
+1},t_{0})=M(x_{r_{0}},x_{r_{0}+1},t_{0})>\delta_{0},
\]
which is a contradiction.
\end{proof}
\subsection{\textbf{Non-Archimedean fuzzy metric spaces}}
A KM-FMS $(X,M,\ast)$ is said to be \emph{non-Archimedean} \cite{Istr}\ if
\begin{equation}
M(x,z,t)\geq M(x,y,t)\ast M(y,z,t)\quad\text{for all }x,y,z\in X\text{ and all
}t>0. \label{K38 - 27 prop non-Arquimedean}
\end{equation}
This property is equivalent to:
\[
M(x,z,\max\{t,s\})\geq M(x,y,t)\ast M(y,z,s)\quad\text{for all }x,y,z\in X\text{
and all }t,s>0.
\]
Notice that this property depends on both the fuzzy metric $M$ and the t-norm
$\ast$. Furthermore, the non-Archimedean property
(\ref{K38 - 27 prop non-Arquimedean}) implies the triangle inequality (KM-4),
so each non-Archimedean fuzzy metric space is a KM-FMS.
\begin{proposition}
\label{K38 - 44 propo Md non-Archimedean}Given a metric space $\left(
X,d\right) $, let $(X,M^{d})$ be the canonical way to see $\left(
X,d\right) $ as a KM-FMS (recall Example
\ref{K38 - 24 ex canonical metric FMS}). Then the following properties are fulfilled.
\begin{enumerate}
\item \label{K38 - 44 propo Md non-Archimedean, item 1}$(X,M^{d})$ is a KM-FMS
(and also a GV-FMS) under any t-norm $\ast$.
\item \label{K38 - 44 propo Md non-Archimedean, item 2}If $\ast$ is a t-norm
such that $\ast\leq\ast_{P}$, then $(X,M^{d},\ast)$ is a non-Archimedean KM-FMS.
\item \label{K38 - 44 propo Md non-Archimedean, item 3}The metric space
$(X,d)$ satisfies $d\left( x,z\right) \leq\max\{d\left( x,y\right)
,d(y,z)\}$ for all $x,y,z\in X$ if, and only if, $(X,M^{d},\ast_{m})$ is a
non-Archimedean KM-FMS.
\end{enumerate}
\end{proposition}
Notice that the \emph{discrete metric} on $X$, defined by $d\left(
x,y\right) =0$ if $x=y$ and $d\left( x,y\right) =1$ if $x\neq y$, is an
example of a metric satisfying the property involved in item
\ref{K38 - 44 propo Md non-Archimedean, item 3} of Proposition
\ref{K38 - 44 propo Md non-Archimedean}.
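For the reader's convenience, we sketch the computation behind item
\ref{K38 - 44 propo Md non-Archimedean, item 2} (this short verification is
ours; the last step uses the triangle inequality of $d$): for all $x,y,z\in X$
and all $t>0$,

```latex
\begin{align*}
M^{d}(x,y,t)\ast M^{d}(y,z,t)
  & \leq M^{d}(x,y,t)\ast_{P}M^{d}(y,z,t)
    =\frac{t^{2}}{(t+d(x,y))(t+d(y,z))}\\
  & \leq\frac{t^{2}}{t^{2}+t\,(d(x,y)+d(y,z))}
    =\frac{t}{t+d(x,y)+d(y,z)}
    \leq\frac{t}{t+d(x,z)}=M^{d}(x,z,t),
\end{align*}
```

which is precisely the non-Archimedean inequality
(\ref{K38 - 27 prop non-Arquimedean}).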
\begin{example}
\textrm{(Altun and Mihe\c{t} \cite{AlMi}, Example 1.3)} Let $\left(
X,d\right) $ be a metric space and let $\vartheta$ be a nondecreasing and
continuous function from $\left( 0,\infty\right) $ into $\left( 0,1\right)
$ such that $\lim_{t\rightarrow\infty}\vartheta(t)=1$. Let $\ast$ be a t-norm
such that $\ast\leq\ast_{P}$. For each $x,y\in X$ and all $t\in\left(
0,\infty\right) $ , define
\[
M(x,y,t)=\left[ \vartheta(t)\right] ^{d\left( x,y\right) }.
\]
Then $\left( X,M,\ast\right) $ is a non-Archimedean KM-FMS.
\end{example}
\begin{example}
A KM-FMS is called \emph{stationary} when each function $t\mapsto M(x,y,t)$
does not depend on $t$, that is, it depends only on the points $x$ and $y$.
For instance, if $X=\left( 0,\infty\right) $ and $M$ is defined by
\[
M(x,y,t)=\frac{\min\{x,y\}}{\max\{x,y\}}\quad\text{for all }x,y\in\left(
0,\infty\right) \text{ and all }t>0,
\]
then $(X,M,\ast)$ is a stationary KM-FMS. In fact, it is non-Archimedean.
\end{example}
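To justify the last claim of the example, the following case analysis (a quick
check that we add, carried out under the product t-norm $\ast_{P}$, hence valid
for any t-norm $\ast\leq\ast_{P}$) verifies the non-Archimedean inequality;
assume w.l.o.g. that $x\leq z$, so that $M(x,z,t)=x/z$:

```latex
\begin{align*}
x\leq y\leq z: & \quad M(x,y,t)\,M(y,z,t)=\frac{x}{y}\cdot\frac{y}{z}
  =\frac{x}{z};\\
y<x: & \quad M(x,y,t)\,M(y,z,t)=\frac{y}{x}\cdot\frac{y}{z}
  =\frac{y^{2}}{xz}\leq\frac{x^{2}}{xz}=\frac{x}{z};\\
y>z: & \quad M(x,y,t)\,M(y,z,t)=\frac{x}{y}\cdot\frac{z}{y}
  =\frac{xz}{y^{2}}\leq\frac{xz}{z^{2}}=\frac{x}{z}.
\end{align*}
```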
\section{\textbf{Fuzzy spaces}}
There are many properties of mathematical objects in a fuzzy metric space
that, in fact, depend only on the fuzzy metric $M$ and not on the t-norm
$\ast$; for instance, the notions of convergence and Cauchy sequence. It is
therefore worthwhile to introduce such notions when $M$ is an arbitrary
function.
\begin{definition}
A \emph{fuzzy space} is a pair $(X,M)$ where $X$ is a non-empty set and $M$ is
a fuzzy set on $X\times X\times\left[ 0,\infty\right) $, that is, a mapping
$M:X\times X\times\left[ 0,\infty\right) \rightarrow\mathbb{I}$ (notice that
no additional conditions are assumed for $M$). In a fuzzy space $(X,M)$, we
say that a sequence $\{x_{n}\}\subseteq X$ is:
\begin{itemize}
\item $M$\emph{-Cauchy} if for all $\varepsilon\in\left( 0,1\right) $ and
all $t>0$ there is $n_{0}\in\mathbb{N}$ such that $M\left( x_{n}
,x_{m},t\right) >1-\varepsilon$ for all $n,m\geq n_{0}$;
\item $M$\emph{-convergent to }$x\in X$ if for all $\varepsilon\in\left(
0,1\right) $ and all $t>0$ there is $n_{0}\in\mathbb{N}$ such that $M\left(
x_{n},x,t\right) >1-\varepsilon$ for all $n\geq n_{0}$ (in such a case, we
write $\{x_{n}\}\rightarrow x$).
\end{itemize}
We say that the fuzzy space $(X,M)$ is \emph{complete} (or $X$ is
$M$\emph{-complete}) if each $M$-Cauchy sequence in $X$ is $M$-convergent to a
point of $X$.
\end{definition}
\begin{proposition}
\label{K38 - 49 propo uniqueness of limit}The limit of an $M$-convergent
sequence in a KM-FMS whose t-norm is continuous at the $1$-boundary is unique.
\end{proposition}
\begin{remark}
\begin{enumerate}
\item Using the previous definitions, it is possible to prove that a sequence
$\{x_{n}\}$ in a metric space $(X,d)$ is Cauchy (respectively, convergent to
$x\in X$) if, and only if, $\{x_{n}\}$ is $M^{d}$-Cauchy (respectively,
$M^{d}$-convergent to $x\in X$) in $(X,M^{d})$.
\item Notice that a sequence $\{x_{n}\}$ is $M$-convergent to $x\in X$ if, and
only if, $\lim_{n\rightarrow\infty}M(x_{n},x,t)=1$ for all $t>0$.
\end{enumerate}
\end{remark}
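The equivalence stated in the first item of the previous remark follows from a
direct computation, which we record here as a sketch: for fixed $t>0$ and
$\varepsilon\in\left( 0,1\right) $,

```latex
M^{d}(x_{n},x,t)>1-\varepsilon
\quad\Longleftrightarrow\quad
\frac{t}{t+d(x_{n},x)}>1-\varepsilon
\quad\Longleftrightarrow\quad
d(x_{n},x)<\frac{\varepsilon\,t}{1-\varepsilon},
```

so the thresholds $\varepsilon\in\left( 0,1\right) $ and $\delta=\varepsilon
t/(1-\varepsilon)\in\left( 0,\infty\right) $ translate into each other; the
same computation applies to the Cauchy condition with $d(x_{n},x_{m})$ in
place of $d(x_{n},x)$.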
It is clear that one possible drawback of the previous definition is
the fact that, if $M$ does not satisfy additional properties such as
(KM-1)-(KM-5), then the limit of an $M$-convergent sequence need not be
unique, and an $M$-convergent sequence need not be $M$-Cauchy. However, it is
a good idea to consider general fuzzy spaces because they allow us to
reflect on which properties depend on both $M$ and $\ast$, and which
properties depend only on $M$. For the latter, the following conditions will be
of importance in what follows. Notice that these notions make sense
even if the limits of $M$-convergent sequences are not unique.
\begin{definition}
Let $\mathcal{S}$ be a binary relation on a fuzzy space $\left( X,M\right)
$, let $Y\subseteq X$ be a nonempty subset, let $\{x_{n}\}$ be a sequence in
$X$ and let $T:X\rightarrow X$ be a self-mapping. We say that:
\begin{itemize}
\item $T$ is $\mathcal{S}$\emph{-nondecreasing-continuous} if $\{Tx_{n}
\}\rightarrow Tz$ for every $\mathcal{S}$-nondecreasing sequence $\{x_{n}
\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$;
\item $T$ is $\mathcal{S}$\emph{-strictly-increasing-continuous} if
$\{Tx_{n}\}\rightarrow Tz$ for every $\mathcal{S}$-strictly-increasing sequence
$\{x_{n}\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$;
\item $Y$ is $(\mathcal{S},M)$\emph{-strictly-increasing-complete} if every
$\mathcal{S}$-strictly-increasing and $M$-Cauchy sequence $\{y_{n}\}\subseteq
Y$ is $M$-convergent to a point of $Y$;
\item $Y$ is $(\mathcal{S},M)$\emph{-strictly-increasing-precomplete} if there
exists a set $Z$ such that $Y\subseteq Z\subseteq X$ and $Z$ is $(\mathcal{S}
,M)$-strictly-increasing-complete;
\item $\left( X,M\right) $ is $\mathcal{S}$
\emph{-strictly-increasing-regular} if, for every $\mathcal{S}$
-strictly-increasing sequence $\{x_{n}\}\subseteq X$ such that $\{x_{n}
\}\rightarrow z\in X$, it follows that $x_{n}\mathcal{S}z$ for all
$n\in\mathbb{N}$;
\item $\left( X,M\right) $ is \emph{metrically-}$\mathcal{S}$
\emph{-strictly-increasing-regular} if, for every $\mathcal{S}$
-strictly-increasing sequence $\{x_{n}\}\subseteq X$ such that $\{x_{n}
\}\rightarrow z\in X$ and
\[
M\left( x_{n},x_{n+1},t\right) >0\quad\text{for all }n\in\mathbb{N}\text{
and all }t>0,
\]
it follows that $x_{n}\mathcal{S}z$ and $M(x_{n},z,t)>0$ for all
$n\in\mathbb{N}$ and all $t>0$.
\end{itemize}
\end{definition}
The reader should notice that we will only use the previous notions when
working with infinite Picard sequences, so they could be refined in future work.
\section{\textbf{The property $\mathcal{NC}$}}
In a fuzzy metric space $(X,M,\ast)$, if $\{x_{n}\}$ is not an $M$-Cauchy
sequence, then there are $\varepsilon_{0}\in\left( 0,1\right) $ and
$t_{0}>0$ such that, for all $k\in\mathbb{N}$, there are natural numbers
$m\left( k\right) ,n\left( k\right) \geq k$ such that $M\left(
x_{n(k)},x_{m(k)},t_{0}\right) \leq1-\varepsilon_{0}$. Equivalently, there
are two partial subsequences $\{x_{n(k)}\}_{k\in\mathbb{N}}$ and
$\{x_{m(k)}\}_{k\in\mathbb{N}}$ of $\{x_{n}\}$ such that $k<n\left( k\right)
<m\left( k\right) <n\left( k+1\right) $ and
\[
1-\varepsilon_{0}\geq M\left( x_{n(k)},x_{m(k)},t_{0}\right) \quad\text{for
all }k\in\mathbb{N}.
\]
If, for each number $n\left( k\right) $, we take $m\left( k\right) $ to be
the least integer satisfying the previous property, then we can
also suppose that
\[
M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right)
>1-\varepsilon_{0}\geq M\left( x_{n\left( k\right) },x_{m\left( k\right)
},t_{0}\right) \quad\text{for all }k\in\mathbb{N}.
\]
From these inequalities, in a general fuzzy metric space, it is difficult to
go further, even if we try to use the properties of the t-norm $\ast$. Next,
we are going to introduce a new condition on the fuzzy space in order to
guarantee that the sequences $\{M\left( x_{n\left( k\right) },x_{m\left(
k\right) },t_{0}\right) \}_{k\in\mathbb{N}}$ and $\{M\left( x_{n\left(
k\right) -1},x_{m\left( k\right) -1},t_{0}\right) \}_{k\in\mathbb{N}}$
satisfy additional properties. This will be of great help when we handle such
sequences in the proofs of fixed point theorems. Immediately afterwards, we
show that non-Archimedean KM-FMS satisfy this new condition (which
is not trivial at all).
\begin{definition}
We will say that a fuzzy space $(X,M)$ satisfies the \emph{property
}$\mathcal{NC}$ (\textquotedblleft\emph{not Cauchy}\textquotedblright) if for
each sequence $\{x_{n}\}\subseteq X$ that is not $M$-Cauchy and satisfies
$\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1$ for all $t>0$,
there are $\varepsilon_{0}\in\left( 0,1\right) $, $t_{0}>0$ and two partial
subsequences $\{x_{n(k)}\}_{k\in\mathbb{N}}$ and $\{x_{m(k)}\}_{k\in
\mathbb{N}}$ of $\{x_{n}\}$ such that, for all $k\in\mathbb{N}$,
\begin{align*}
& k<n\left( k\right) <m\left( k\right) <n\left( k+1\right)
\qquad\text{and}\\
& M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right)
>1-\varepsilon_{0}\geq M\left( x_{n\left( k\right) },x_{m\left( k\right)
},t_{0}\right) ,
\end{align*}
and also
\[
\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left(
k\right) },t_{0}\right) =\lim_{k\rightarrow\infty}M\left( x_{n\left(
k\right) -1},x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}.
\]
\end{definition}
Notice that the previous definition does not depend on any t-norm. However,
when a t-norm of a specific class plays a role, additional properties hold.
\begin{theorem}
\label{K38 - 24 th Non-Arch impies NC}Each non-Archimedean KM-fuzzy metric
space $(X,M,\ast)$ whose t-norm $\ast$ is continuous at the $1$-boundary
satisfies the property $\mathcal{NC}$.
\end{theorem}
\begin{proof}
Suppose that $\{x_{n}\}\subseteq X$ is a sequence which is not $M$-Cauchy and
verifies
\[
\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1\quad\text{for all
}t>0.
\]
Then there are $\varepsilon_{0}\in\left( 0,1\right) $, $t_{0}>0$ and two
partial subsequences $\{x_{n(k)}\}_{k\in\mathbb{N}}$ and $\{x_{m(k)}
\}_{k\in\mathbb{N}}$ of $\{x_{n}\}$ such that $k<n\left( k\right) <m\left(
k\right) <n\left( k+1\right) $ and
\[
M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right)
>1-\varepsilon_{0}\geq M\left( x_{n\left( k\right) },x_{m\left( k\right)
},t_{0}\right) \quad\text{for all }k\in\mathbb{N}.
\]
As $(X,M,\ast)$ is a non-Archimedean KM-FMS, then, for all $k\in\mathbb{N}$,
\begin{align}
1-\varepsilon_{0} & \geq M\left( x_{n\left( k\right) },x_{m\left(
k\right) },t_{0}\right) \geq M\left( x_{n\left( k\right) },x_{m\left(
k\right) -1},t_{0}\right) \ast M\left( x_{m\left( k\right) -1}
,x_{m\left( k\right) },t_{0}\right) \nonumber\\
& \geq\left( 1-\varepsilon_{0}\right) \ast M\left( x_{m\left( k\right)
-1},x_{m\left( k\right) },t_{0}\right) .\label{K38 - 29 prop}
\end{align}
Since $\lim_{k\rightarrow\infty}M\left( x_{m\left( k\right) -1},x_{m\left(
k\right) },t_{0}\right) =1$ and $\ast$ is continuous at the $1$-boundary,
then
\[
\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left(
k\right) },t_{0}\right) =1-\varepsilon_{0}.
\]
Taking into account that, by (\ref{K38 - 29 prop}),
\[
\overset{a_{k}}{\overbrace{\,1-\varepsilon_{0}\,}}~\geq~\overset{c_{k}
}{\overbrace{\,M\left( x_{n\left( k\right) },x_{m\left( k\right)
-1},t_{0}\right) \,}}\ast\overset{d_{k}}{\overbrace{\,M\left( x_{m\left(
k\right) -1},x_{m\left( k\right) },t_{0}\right) \,}}~\geq~\overset{e_{k}
}{\overbrace{\,\left( 1-\varepsilon_{0}\right) \ast M\left( x_{m\left(
k\right) -1},x_{m\left( k\right) },t_{0}\right) \,}},
\]
for all $k\in\mathbb{N}$, and that $\ast$ is continuous at the $1$-boundary,
Proposition \ref{K38 - 18 propo cancellation} (applied with $b_{k}=1$ for all
$k\in\mathbb{N}$) guarantees that
\[
\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left(
k\right) -1},t_{0}\right) =1-\varepsilon_{0},
\]
which means that
\[
\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left(
k\right) },t_{0}\right) =\lim_{k\rightarrow\infty}M\left( x_{n\left(
k\right) },x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}.
\]
Next, observe that, for all $k\in\mathbb{N}$,
\begin{align*}
\overset{a_{k}}{\overbrace{\,M\left( x_{n\left( k\right) },x_{m\left(
k\right) },t_{0}\right) \,}~} & \geq~\overset{b_{k}}{\overbrace{\,M\left(
x_{n\left( k\right) },x_{n\left( k\right) -1},t_{0}\right) \,}}
\ast\,\overset{c_{k}}{\overbrace{\,M\left( x_{n\left( k\right)
-1},x_{m\left( k\right) -1},t_{0}\right) \,}}\ast\overset{d_{k}}
{\overbrace{\,M\left( x_{m\left( k\right) -1},x_{m\left( k\right) }
,t_{0}\right) \,}}\\
& \geq~M\left( x_{n\left( k\right) },x_{n\left( k\right) -1}
,t_{0}\right) \ast M\left( x_{n\left( k\right) -1},x_{n(k)},t_{0}\right)
\ast M\left( x_{n\left( k\right) },x_{m\left( k\right) },t_{0}\right) \\
& \qquad\underset{e_{k}}{\underbrace{\qquad\qquad\ast M\left( x_{m\left(
k\right) },x_{m\left( k\right) -1},t_{0}\right) \ast M\left( x_{m\left(
k\right) -1},x_{m\left( k\right) },t_{0}\right) \qquad\qquad}}~.
\end{align*}
Clearly $\{a_{k}\}\rightarrow1-\varepsilon_{0}$, $\{b_{k}\}\rightarrow1$,
$\{d_{k}\}\rightarrow1$ and $\{e_{k}\}\rightarrow1-\varepsilon_{0}$. Thus,
Proposition \ref{K38 - 18 propo cancellation} guarantees again that
\[
\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) -1},x_{m\left(
k\right) -1},t_{0}\right) =1-\varepsilon_{0}.
\]
As a consequence,
\[
\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left(
k\right) },t_{0}\right) =\lim_{k\rightarrow\infty}M\left( x_{n\left(
k\right) -1},x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}.
\]
This completes the proof.
\end{proof}
\begin{corollary}
\label{K38 - 43 coro metric implies NC}If $(X,d)$ is a metric space, then
$(X,M^{d})$ satisfies the property $\mathcal{NC}$.
\end{corollary}
\begin{proof}
By item \ref{K38 - 44 propo Md non-Archimedean, item 2} of Proposition
\ref{K38 - 44 propo Md non-Archimedean}, $(X,M^{d},\ast_{P})$ is a
non-Archimedean KM-FMS and, as $\ast_{P}$ is continuous, Theorem
\ref{K38 - 24 th Non-Arch impies NC} ensures that $(X,M^{d})$ satisfies the
property $\mathcal{NC}$.
\end{proof}
\begin{example}
Let $(X,d)$ be a metric space for which there are $x_{0},y_{0},z_{0}\in X$
such that $d(x_{0},z_{0})>\max\{d(x_{0},y_{0}),d(y_{0},z_{0})\}$. By Corollary
\ref{K38 - 43 coro metric implies NC}, the fuzzy space $(X,M^{d})$ satisfies
the property $\mathcal{NC}$. Furthermore, item
\ref{K38 - 44 propo Md non-Archimedean, item 1} of Proposition
\ref{K38 - 44 propo Md non-Archimedean} ensures that $(X,M^{d}
,\ast_{m})$ is a KM-FMS. However, taking into account the third item of the
same proposition, $(X,M^{d},\ast_{m})$ is not a non-Archimedean KM-FMS because
it does not satisfy the non-Archimedean property for $x_{0}$, $y_{0}$ and
$z_{0}$.
\end{example}
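A minimal concrete instance of the previous example (our own illustration):
take $X=\{0,1,2\}$ with the Euclidean metric. Then

```latex
d(0,2)=2>1=\max\{d(0,1),d(1,2)\},
\qquad
M^{d}(0,2,t)=\frac{t}{t+2}<\frac{t}{t+1}
=\min\left\{ M^{d}(0,1,t),M^{d}(1,2,t)\right\} ,
```

so the non-Archimedean inequality under $\ast_{m}$ fails at the triple
$(0,1,2)$, although $(X,M^{d})$ satisfies the property $\mathcal{NC}$ by
Corollary \ref{K38 - 43 coro metric implies NC}.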
\section{\textbf{Fuzzy ample spectrum contractions}}
In this section we introduce two distinct notions of \emph{fuzzy ample
spectrum contraction} in the setting of KM-FMS. At first sight, one might
believe that they are the natural counterparts in FMS of the properties that
define an ample spectrum contraction in metric spaces. However, FMS are
distinct in nature from metric spaces: for instance, they are much more varied
than metric spaces. As we announced in the introduction, we will work in
KM-FMS. Their main advantage is that they are more general than GV-FMS, but
they also have a great drawback: the statements of the fixed point theorems
need more technical hypotheses and their corresponding proofs are more
difficult. The reader can easily deduce how the proofs would simplify in a
GV-FMS.
Furthermore, rather than working in a non-Archimedean FMS, we will employ
KM-FMS that only satisfy the property $\mathcal{NC}$ and whose t-norms are
continuous at the $1$-boundary (which are more general).
We give the following definitions in the context of fuzzy spaces. Throughout
this section, let $\left( X,M\right) $ be a fuzzy space, let $T:X\rightarrow
X$ be a self-mapping, let $\mathcal{S}$ be a binary relation on $X$, let
$B\subseteq\mathbb{R}^{2}$ be a subset and let $\theta:B\rightarrow\mathbb{R}$
be a function. Let
\[
\mathcal{M}_{T}=\left\{ \,(M\left( Tx,Ty,t\right) ,M\left( x,y,t\right)
)\in\mathbb{I}\times\mathbb{I}:x,y\in X,~x\mathcal{S}^{\ast}y,~Tx\mathcal{S}
^{\ast}Ty,~t\in\left( 0,\infty\right) \,\right\} .
\]
\subsection{\textbf{(Type-1) Fuzzy ample spectrum contractions}}
The notion of fuzzy ample spectrum contraction depends directly on a very
particular kind of sequence of pairs of distance distribution functions.
\begin{definition}
\label{K38 - 55 def T,S,M sequence}Let $\{\phi_{n}\}$ and $\{\psi_{n}\}$ be
two sequences of functions $\phi_{n},\psi_{n}:\left[ 0,\infty\right)
\rightarrow\mathbb{I}$ . We say that $\{\left( \phi_{n},\psi_{n}\right) \}$
is a {$(T,\mathcal{S}^{\ast},M)$\emph{-sequence}} if there exist two sequences
$\{x_{n}\},\{y_{n}\}\subseteq X$ such that
\[
x_{n}\mathcal{S}^{\ast}y_{n},\quad Tx_{n}\mathcal{S}^{\ast}Ty_{n},\quad
\phi_{n}\left( t\right) =M(Tx_{n},Ty_{n},t)\quad\text{and}\quad\psi
_{n}\left( t\right) =M(x_{n},y_{n},t)
\]
for all $n\in\mathbb{N}$ and all $t>0$.
\end{definition}
\begin{definition}
\label{K38 - 54 def FASC}A mapping $T:X\rightarrow X$ is said to be a
\emph{fuzzy ample spectrum contraction w.r.t. }$\left( M,\mathcal{S}
,\theta\right) $ if the following four conditions are fulfilled.
\begin{description}
\item[$(\mathcal{F}_{1})$] $B$ is nonempty and $\mathcal{M}_{T}\subseteq B$.
\item[$(\mathcal{F}_{2})$] If $\{x_{n}\}\subseteq X$ is a Picard $\mathcal{S}
$-strictly-increasing sequence of $T$ such that
\[
\theta\left( M\left( x_{n+1},x_{n+2},t\right) ,M\left( x_{n}
,x_{n+1},t\right) \right) \geq0\quad\text{for all }n\in\mathbb{N}\text{ and
all }t>0,
\]
then $\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1$ for all
$t>0$.
\item[$(\mathcal{F}_{3})$] If $\{\left( \phi_{n},\psi_{n}\right) \}$ is a
$(T,\mathcal{S}^{\ast},M)$-sequence and $t_{0}>0$ are such that $\{\phi
_{n}\left( t_{0}\right) \}$ and $\{\psi_{n}\left( t_{0}\right) \}$
converge to the same limit $L\in\mathbb{I}$, with $L>\phi
_{n}\left( t_{0}\right) $ and $\theta(\phi_{n}\left( t_{0}\right)
,\psi_{n}\left( t_{0}\right) )\geq0$ for all $n\in\mathbb{N}_{0}$, then
$L=1$.
\item[$(\mathcal{F}_{4})$] $\theta\left( M(Tx,Ty,t),M(x,y,t)\right)
\geq0\quad$for all $t>0$ and all $x,y\in X\quad$such that $x\mathcal{S}
^{\ast}y$ and $Tx\mathcal{S}^{\ast}Ty$.
\end{description}
\end{definition}
In some cases, we will also consider the following properties.
\begin{description}
\item[$(\mathcal{F}_{2}^{\prime})$] If $\{x_{n}\},\{y_{n}\}\subseteq X$ are
two $T$-Picard sequences such that
\[
x_{n}\mathcal{S}^{\ast}y_{n}\quad\text{and}\quad{\theta(M\left(
x_{n+1},y_{n+1},t\right) ,M\left( x_{n},y_{n},t\right) )\geq0}
\quad\text{for all }n\in\mathbb{N}\text{ and all }t>0,
\]
then $\lim_{n\rightarrow\infty}{M}\left( x_{n},y_{n},t\right) =1$ for all
$t>0$.
\item[$(\mathcal{F}_{5})$] If $\{\left( \phi_{n},\psi_{n}\right) \}$ is a
$(T,\mathcal{S}^{\ast},M)$-sequence such that $\{\psi_{n}\left( t\right)
\}\rightarrow1$ for all $t>0$ and $\theta(\phi_{n}(t),\psi_{n}(t))\geq0$ for
all $n\in\mathbb{N}$ and all $t>0$, then $\{\phi_{n}\left( t\right)
\}\rightarrow1$ for all $t>0$.
\end{description}
Many of the remarks that were given in the context of metric spaces for ample
spectrum contractions can now be repeated. In particular, we highlight the
following ones.
\begin{remark}
\label{K38 - 30 rem def FAEC}
\begin{enumerate}
\item \label{K38 - 30 rem def FAEC, item 1}Although the set $B$ on which
the function $\theta:B\rightarrow\mathbb{R}$ is defined can be larger than
$\mathbb{I}\times\mathbb{I}$, for our purposes we will only be interested in
the values of $\theta$ when its arguments belong to $\mathbb{I}\times
\mathbb{I}$. Hence it is sufficient to assume that $B\subseteq\mathbb{I}
\times\mathbb{I}$.
\item \label{K38 - 30 rem def FAEC, item 2}If the function $\theta$ satisfies
$\theta\left( t,s\right) \leq t-s$ for all $\left( t,s\right) \in
B\cap(\mathbb{I}\times\mathbb{I})$, then property $(\mathcal{F}_{5})$ holds.
It follows from the fact that
\[
0\leq\theta(\phi_{n}(t),\psi_{n}(t))\leq\phi_{n}(t)-\psi_{n}(t)\quad
\Rightarrow\quad\psi_{n}(t)\leq\phi_{n}(t)\leq1.
\]
\item By choosing $y_{n}=x_{n+1}$ for all $n\in\mathbb{N}$, it can be proved
that $(\mathcal{F}_{2}^{\prime})$ implies $(\mathcal{F}_{2})$.
\end{enumerate}
\end{remark}
In the following result we check that a large family of ample spectrum
contractions are, in fact, fuzzy ample spectrum contractions.
\begin{theorem}
\label{K38 - 56 th ASC implies FASC}Let $(X,d)$ be a metric space and let
$T:X\rightarrow X$ be an ample spectrum contraction w.r.t. $\mathcal{S}$ and
$\varrho$, where $\varrho:\left[ 0,\infty\right) \times\left[
0,\infty\right) \rightarrow\mathbb{R}$. Suppose that the function $\varrho$
satisfies the following property:
\begin{equation}
t,s>0,\quad\varrho\left( t,s\right) \geq0\quad\Rightarrow\quad\left[
~\varrho\left( \frac{t}{r},\frac{s}{r}\right) \geq0~~\text{for all
}r>0~\right] . \label{K38 - 51 prop}
\end{equation}
Define:
\[
\theta_{\varrho}:\left( 0,1\right] \times\left( 0,1\right] \rightarrow
\mathbb{R},\qquad\theta_{\varrho}\left( t,s\right) =\varrho\left(
\frac{1-t}{t},\frac{1-s}{s}\right) \quad\text{for all }t,s\in\left(
0,1\right] .
\]
Then $T$ is a fuzzy ample spectrum contraction w.r.t. $(M^{d},\mathcal{S}
,\theta_{\varrho})$. Furthermore, if property $(\mathcal{B}_{5})$
(respectively, $(\mathcal{B}_{2}^{\prime})$) holds, then property
$(\mathcal{F}_{5})$ (respectively, $(\mathcal{F}_{2}^{\prime})$) also holds.
\end{theorem}
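Before the proof, we illustrate the hypothesis (\ref{K38 - 51 prop}) with the
Banach-type function $\varrho_{\lambda}\left( t,s\right) =\lambda s-t$, where
$\lambda\in\left( 0,1\right) $ (an illustration of ours; we assume here that
$\varrho_{\lambda}$ is admissible as a contractivity function for ample
spectrum contractions in the metric setting). Property (\ref{K38 - 51 prop})
holds because $\varrho_{\lambda}$ is positively homogeneous, and
$\theta_{\varrho_{\lambda}}$ can be computed explicitly:

```latex
\varrho_{\lambda}\left( \frac{t}{r},\frac{s}{r}\right)
=\frac{\lambda s-t}{r}\geq0\quad\text{whenever }\varrho_{\lambda}\left(
t,s\right) \geq0\text{ and }r>0,
\qquad
\theta_{\varrho_{\lambda}}\left( t,s\right)
=\lambda\,\frac{1-s}{s}-\frac{1-t}{t}.
```

In this case, by (\ref{K38 - 50 prop}), the contractivity condition
$\theta_{\varrho_{\lambda}}(M^{d}(Tx,Ty,t),M^{d}(x,y,t))\geq0$ reduces to the
classical Banach condition $d(Tx,Ty)\leq\lambda\,d(x,y)$.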
\begin{proof}
First of all, we observe that, for all $x,y\in X$ and all $t>0$,
\begin{align}
& \theta_{\varrho}(M^{d}(Tx,Ty,t),M^{d}(x,y,t))=\theta_{\varrho}\left(
\frac{t}{t+d(Tx,Ty)},\frac{t}{t+d(x,y)}\right) \nonumber\\
& \qquad=\varrho\left( \frac{\,1-\dfrac{t}{t+d(Tx,Ty)}\,}{\dfrac
{t}{t+d(Tx,Ty)}},\frac{\,1-\dfrac{t}{t+d(x,y)}\,}{\dfrac{t}{t+d(x,y)}}\right)
=\varrho\left( \frac{d(Tx,Ty)}{t},\frac{d(x,y)}{t}\right) .
\label{K38 - 50 prop}
\end{align}
Next we check all properties that define a fuzzy ample spectrum contraction.
$(\mathcal{F}_{1})$ $B=\left( 0,1\right] \times\left( 0,1\right] $ is
nonempty and $\mathcal{M}_{T}\subseteq B$.
$(\mathcal{F}_{2})$ Let $\{x_{n}\}\subseteq X$ be a Picard $\mathcal{S}
$-strictly-increasing sequence of $T$ such that
\[
\theta_{\varrho}(M^{d}\left( x_{n+1},x_{n+2},t\right) ,M^{d}\left(
x_{n},x_{n+1},t\right) )\geq0\quad\text{for all }n\in\mathbb{N}\text{ and all
}t>0.
\]
In particular, for $t=1$, using (\ref{K38 - 50 prop}), for each $n\in
\mathbb{N}$,
\[
0\leq\theta_{\varrho}(M^{d}\left( x_{n+1},x_{n+2},1\right) ,M^{d}\left(
x_{n},x_{n+1},1\right) )=\varrho(d(x_{n+1},x_{n+2}),d(x_{n},x_{n+1})).
\]
As $T$ is an ample spectrum contraction w.r.t. $\mathcal{S}$ and $\varrho$,
axiom $(\mathcal{B}_{2})$ implies that $\{d(x_{n},x_{n+1})\}\rightarrow0$, so
$\lim_{n\rightarrow\infty}M^{d}\left( x_{n},x_{n+1},t\right) =1$ for all
$t>0$.
$(\mathcal{F}_{3})$~Let $\{\left( \phi_{n},\psi_{n}\right) \}$ be a
$(T,\mathcal{S}^{\ast},M^{d})$-sequence and let $t_{0}>0$ be such that
$\{\phi_{n}\left( t_{0}\right) \}$ and $\{\psi_{n}\left( t_{0}\right) \}$
converge to the same limit $L\in\mathbb{I}$, with $L>\phi
_{n}\left( t_{0}\right) $ and $\theta_{\varrho}(\phi_{n}\left(
t_{0}\right) ,\psi_{n}\left( t_{0}\right) )\geq0$ for all $n\in
\mathbb{N}_{0}$. Let $\{x_{n}\},\{y_{n}\}\subseteq X$ be sequences such
that, for all $n\in\mathbb{N}$ and all $t>0$,
\[
x_{n}\mathcal{S}^{\ast}y_{n},\quad Tx_{n}\mathcal{S}^{\ast}Ty_{n},\quad
\phi_{n}\left( t\right) =M^{d}(Tx_{n},Ty_{n},t)\quad\text{and}\quad\psi
_{n}\left( t\right) =M^{d}(x_{n},y_{n},t).
\]
Define $a_{n}=d(Tx_{n},Ty_{n})$ and $b_{n}=d(x_{n},y_{n})$ for all
$n\in\mathbb{N}$. Then $\{\left( a_{n},b_{n}\right) \}\subseteq\left[
0,\infty\right) \times\left[ 0,\infty\right) $ is a $(T,\mathcal{S}^{\ast
})$-sequence. Furthermore, for all $n\in\mathbb{N}$,
\begin{align*}
0 & \leq\theta_{\varrho}(\phi_{n}\left( t_{0}\right) ,\psi_{n}\left(
t_{0}\right) )=\theta_{\varrho}(M^{d}(Tx_{n},Ty_{n},t_{0}),M^{d}(x_{n}
,y_{n},t_{0}))\\[0.2cm]
& =\varrho\left( \frac{d(Tx_{n},Ty_{n})}{t_{0}},\frac{d(x_{n},y_{n})}{t_{0}
}\right) =\varrho\left( \frac{a_{n}}{t_{0}},\frac{b_{n}}{t_{0}}\right) .
\end{align*}
By (\ref{K38 - 51 prop}), using $r=1/t_{0}$, we deduce that, for all
$n\in\mathbb{N}$,
\[
0\leq\varrho\left( \frac{\,\frac{a_{n}}{t_{0}}\,}{\frac{1}{t_{0}}}
,\frac{\,\frac{b_{n}}{t_{0}}\,}{\frac{1}{t_{0}}}\right) =\varrho\left(
a_{n},b_{n}\right) .
\]
Since $L>\phi_{n}\left( t_{0}\right) \geq0$, we have $L>0$. Let $\varepsilon
_{0}=t_{0}\frac{1-L}{L}\geq0$. Thus, as $\{\phi_{n}\left( t_{0}\right)
\}\rightarrow L$ and $\{\psi_{n}\left( t_{0}\right) \}\rightarrow L$, then
\[
\left\{ \frac{t_{0}}{t_{0}+d(Tx_{n},Ty_{n})}\right\} _{n\in\mathbb{N}
}\rightarrow L\quad\text{and}\quad\left\{ \frac{t_{0}}{t_{0}+d(x_{n},y_{n}
)}\right\} _{n\in\mathbb{N}}\rightarrow L.
\]
This is equivalent to
\[
\{a_{n}\}=\left\{ d(Tx_{n},Ty_{n})\right\} _{n\in\mathbb{N}}\rightarrow
t_{0}\frac{1-L}{L}=\varepsilon_{0}\quad\text{and}\quad\{b_{n}\}=\left\{
d(x_{n},y_{n})\right\} _{n\in\mathbb{N}}\rightarrow t_{0}\frac{1-L}
{L}=\varepsilon_{0}.
\]
Notice that
\begin{align*}
\frac{t_{0}}{t_{0}+d(Tx_{n},Ty_{n})}=\phi_{n}\left( t_{0}\right) <L\quad &
\Leftrightarrow\quad\frac{1}{t_{0}+a_{n}}<\frac{L}{t_{0}}\quad\Leftrightarrow
\quad\frac{t_{0}}{L}<t_{0}+a_{n}\\[0.2cm]
& \Leftrightarrow\quad\frac{t_{0}}{L}-t_{0}<a_{n}\quad\Leftrightarrow
\quad\varepsilon_{0}=t_{0}\frac{1-L}{L}<a_{n}.
\end{align*}
Taking into account that $T$ is an ample spectrum contraction w.r.t.
$\mathcal{S}$ and $\varrho$, condition $(\mathcal{B}_{3})$ ensures that
$\varepsilon_{0}=0$, which leads to $L=1$.
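Since the argument above works with the standard fuzzy metric $M^{d}(x,y,t)=t/(t+d(x,y))$, the threshold manipulation reduces to elementary algebra. The following short sketch (an illustration of ours, not part of the proof; the function names are our own) checks numerically the equivalence $\phi_{n}(t_{0})<L \Leftrightarrow a_{n}>\varepsilon_{0}$ for sample values.

```python
# Numeric sanity check (illustrative only) of the equivalence
#   t0/(t0 + a) < L  <=>  a > t0*(1 - L)/L,
# where M^d(x, y, t) = t/(t + d(x, y)) is the standard fuzzy metric.

def M_d(dist, t):
    """Standard fuzzy metric value induced by a distance value."""
    return t / (t + dist)

t0, L = 2.0, 0.8
eps0 = t0 * (1 - L) / L  # the threshold epsilon_0 from the proof

for a in [0.1, 0.3, eps0, 0.7, 1.0]:
    lhs = M_d(a, t0) < L  # phi_n(t0) < L
    rhs = a > eps0        # a_n > epsilon_0
    assert lhs == rhs, (a, lhs, rhs)

print("equivalence verified for all samples")
```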
$(\mathcal{F}_{4})$ Let $t>0$ and let $x,y\in X$ be such that
$x\mathcal{S}^{\ast}y$ and $Tx\mathcal{S}^{\ast}Ty$. Therefore $d(x,y)>0$
and $d(Tx,Ty)>0$. As $T$ is an ample spectrum contraction w.r.t.
$\mathcal{S}$ and $\varrho$, property $(\mathcal{B}_{4})$ guarantees that
$\varrho\left( d(Tx,Ty),d(x,y)\right) \geq0$. By (\ref{K38 - 51 prop}), it
follows that
\[
\varrho\left( \frac{d(Tx,Ty)}{r},\frac{d(x,y)}{r}\right) \geq0~~\text{for
all }r>0.
\]
As a consequence, by (\ref{K38 - 50 prop}),
\[
\theta_{\varrho}(M^{d}(Tx,Ty,t),M^{d}(x,y,t))=\varrho\left( \frac
{d(Tx,Ty)}{t},\frac{d(x,y)}{t}\right) \geq0.
\]
$(\mathcal{B}_{5})\Rightarrow(\mathcal{F}_{5})$. Suppose that the property
$(\mathcal{B}_{5})$ holds. Let $\{\left( \phi_{n},\psi_{n}\right) \}$ be a
$(T,\mathcal{S}^{\ast},M^{d})$-sequence such that $\{\psi_{n}\left( t\right)
\}\rightarrow1$ for all $t>0$ and $\theta_{\varrho}(\phi_{n}(t),\psi
_{n}(t))\geq0$ for all $n\in\mathbb{N}$ and all $t>0$. Let $\{x_{n}
\},\{y_{n}\}\subseteq X$ be some sequences such that, for all $n\in\mathbb{N}$
and all $t>0$,
\[
x_{n}\mathcal{S}^{\ast}y_{n},\quad Tx_{n}\mathcal{S}^{\ast}Ty_{n},\quad
\phi_{n}\left( t\right) =M^{d}(Tx_{n},Ty_{n},t)\quad\text{and}\quad\psi
_{n}\left( t\right) =M^{d}(x_{n},y_{n},t).
\]
Using $t=1$, for all $n\in\mathbb{N}$,
\[
0\leq\theta_{\varrho}(\phi_{n}(1),\psi_{n}(1))=\theta_{\varrho}(M^{d}
(Tx_{n},Ty_{n},1),M^{d}(x_{n},y_{n},1))=\varrho\left( d(Tx_{n},Ty_{n}
),d(x_{n},y_{n})\right) .
\]
Notice that $\{M^{d}(x_{n},y_{n},t)\}=\{\psi_{n}\left( t\right)
\}\rightarrow1$ for all $t>0$ is equivalent to saying that $\{d(x_{n}
,y_{n})\}\rightarrow0$. Since property
$(\mathcal{B}_{5})$ holds, $\{d(Tx_{n},Ty_{n})\}\rightarrow0$, so
$\{M^{d}(Tx_{n},Ty_{n},t)\}=\{\phi_{n}\left( t\right) \}\rightarrow1$ for
all $t>0$.
$(\mathcal{B}_{2}^{\prime})\Rightarrow(\mathcal{F}_{2}^{\prime})$. Let
$\{x_{n}\},\{y_{n}\}\subseteq X$ be two $T$-Picard sequences such that
\[
x_{n}\mathcal{S}^{\ast}y_{n}\quad\text{and}\quad\theta_{\varrho}(M
^{d}\left( x_{n+1},y_{n+1},t\right) ,M^{d}\left( x_{n},y_{n},t\right)
)\geq0\quad\text{for all }n\in\mathbb{N}\text{ and all }t>0.
\]
Using $t=1$, for all $n\in\mathbb{N}$,
\[
0\leq\theta_{\varrho}(M^{d}\left( x_{n+1},y_{n+1},1\right) ,M
^{d}\left( x_{n},y_{n},1\right) )=\varrho\left( d(Tx_{n},Ty_{n}
),d(x_{n},y_{n})\right) .
\]
Since property $(\mathcal{B}_{2}^{\prime})$ holds,
$\{d(x_{n},y_{n})\}\rightarrow0$, which means that $\{M^{d}\left(
x_{n},y_{n},t\right) \}\rightarrow1$ for all $t>0$.
\end{proof}
\begin{example}
If $T$ is a Banach contraction, then there is $\lambda\in\left[ 0,1\right) $
such that $d(Tx,Ty)\leq\lambda d(x,y)$ for all $x,y\in X$. In this case, $T$
is an ample spectrum contraction associated to the function $\varrho_{\lambda
}(t,s)=\lambda s-t$ for all $t,s\geq0$. Moreover, for all $t,s,r>0$,
\[
\varrho_{\lambda}\left( \frac{t}{r},\frac{s}{r}\right) =\lambda\frac{s}
{r}-\frac{t}{r}=\frac{\lambda s-t}{r}=\frac{\varrho_{\lambda}(t,s)}{r},
\]
which means that property (\ref{K38 - 51 prop}) holds.
\end{example}
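The degree-one homogeneity of $\varrho_{\lambda}$ used in this example can also be checked numerically; the following sketch (our own illustration, with names of our choosing) verifies $\varrho_{\lambda}(t/r,s/r)=\varrho_{\lambda}(t,s)/r$ on a grid of sample values.

```python
# Illustrative check that rho_lambda(t, s) = lambda*s - t is
# positively homogeneous of degree 1, i.e.
#   rho_lambda(t/r, s/r) = rho_lambda(t, s)/r  for all t, s, r > 0.

def rho(lam, t, s):
    return lam * s - t

lam = 0.5
for t in [0.1, 1.0, 3.0]:
    for s in [0.2, 2.0, 5.0]:
        for r in [0.5, 1.0, 4.0]:
            assert abs(rho(lam, t / r, s / r) - rho(lam, t, s) / r) < 1e-12

print("homogeneity verified")
```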
In this general framework we introduce one of our main results. Notice that in
the following result we assume that the t-norm $\ast$ is continuous at the
$1$-boundary, but it need not be continuous on the whole space
$\mathbb{I}\times\mathbb{I}$.
\begin{theorem}
\label{K38 - 31 th main FAEC}Let $\left( X,M,\ast\right) $ be a KM-FMS
endowed with a transitive binary relation $\mathcal{S}$ and let
$T:X\rightarrow X$ be an $\mathcal{S}$-nondecreasing fuzzy ample spectrum
contraction with respect to $\left( M,\mathcal{S},\theta\right) $. Suppose
that $\ast$ is continuous at the $1$-boundary, $T(X)$ is
$(\mathcal{S},M)$-strictly-increasing-precomplete and there exists a point $x_{0}\in X$
such that $x_{0}\mathcal{S}Tx_{0}$. Also assume that, at least, one of the
following conditions is fulfilled:
\begin{description}
\item[$\left( a\right) $] $T$ is $\mathcal{S}$-strictly-increasing-continuous.
\item[$\left( b\right) $] $\left( X,M\right) $ is $\mathcal{S}
$-strictly-increasing-regular and condition $(\mathcal{F}_{5})$ holds.
\item[$\left( c\right) $] $\left( X,M\right) $ is $\mathcal{S}
$-strictly-increasing-regular and $\theta\left( t,s\right) \leq t-s$ for all
$\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$.
\end{description}
If $\left( X,M\right) $\ satisfies the property $\mathcal{NC}$, then the
Picard sequence of $T~$based on $x_{0}$ converges to a fixed point of $T$. In
particular, $T$ has at least one fixed point.
\end{theorem}
\begin{proof}
Let $x_{0}\in X$ be the point such that $x_{0}\mathcal{S}Tx_{0}$ and let
$\{x_{n}\}$ be the Picard sequence of $T$ starting from $x_{0}$. If there is
some $n_{0}\in\mathbb{N}_{0}$ such that $x_{n_{0}}=x_{n_{0}+1}$, then
$x_{n_{0}}$ is a fixed point of $T$, and the proof is finished. Otherwise,
suppose that $x_{n}\neq x_{n+1}$ for all $n\in\mathbb{N}_{0}$. As
$x_{0}\mathcal{S}Tx_{0}=x_{1}$ and $T$ is $\mathcal{S}$-nondecreasing, then
$x_{n}\mathcal{S}x_{n+1}$ for all $n\in\mathbb{N}_{0}$. In fact, as
$\mathcal{S}$ is transitive, then
\[
x_{n}\mathcal{S}x_{m}\quad\text{for all }n,m\in\mathbb{N}_{0}\text{ such that
}n<m,
\]
which means that $\{x_{n}\}$ is an $\mathcal{S}$-nondecreasing sequence. Since
$T$ is a fuzzy ample spectrum contraction with respect to $\left(
M,\mathcal{S},\theta\right) $ and $x_{n}\mathcal{S}^{\ast}x_{n+1}$ and
$Tx_{n}=x_{n+1}\mathcal{S}^{\ast}x_{n+2}=Tx_{n+1}$, then
\[
\theta\left( M(x_{n+1},x_{n+2},t),M(x_{n},x_{n+1},t)\right) =\theta\left(
M(Tx_{n},Tx_{n+1},t),M(x_{n},x_{n+1},t)\right) \geq0
\]
for all $t>0$ and all $n\in\mathbb{N}_{0}$. Axiom $(\mathcal{F}_{2})$
guarantees that
\begin{equation}
\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =
1\quad\text{for all }t>0. \label{K38 - 17 prop}
\end{equation}
By Proposition \ref{K38 - 22 lem infinite or almost periodic}, the $T$-Picard
sequence $\{x_{n}\}$ is infinite or almost periodic. If $\{x_{n}\}$ is almost
periodic, then there are $n_{0},m_{0}\in\mathbb{N}$ such that $n_{0}<m_{0}$
and $x_{n_{0}}=x_{m_{0}}$. In such a case, Proposition
\ref{K40 21 propo either infinite or almost-constant fuzzy} guarantees that
there is $\ell_{0}\in\mathbb{N}$ and $z\in X$ such that $x_{n}=z$ for all
$n\geq\ell_{0}$, so $z$ is a fixed point of $T$, and the proof is finished.
Otherwise, suppose that $\{x_{n}\}$ is infinite, that is, $x_{n}\neq
x_{m}$ for all $n\neq m$. This means that $\{x_{n}\}$ is an $\mathcal{S}
$-strictly-increasing sequence because $x_{n}\mathcal{S}x_{m}$ and $x_{n}\neq
x_{m}$, that is, $x_{n}\mathcal{S}^{\ast}x_{m}$ for all $n<m$.
Next we prove, by contradiction, that $\{x_{n}\}$ is an $M$-Cauchy sequence.
Suppose that $\{x_{n}\}$ is not $M$-Cauchy. Since $\left( X,M\right) $
satisfies the property $\mathcal{NC}$ and $\{x_{n}\}$
satisfies (\ref{K38 - 17 prop}), there are $\varepsilon_{0}\in\left(
0,1\right) $ and $t_{0}>0$ and two partial subsequences $\{x_{n(k)}
\}_{k\in\mathbb{N}}$ and $\{x_{m(k)}\}_{k\in\mathbb{N}}$ of $\{x_{n}\}$ such
that, for all $k\in\mathbb{N}$, $k<n\left( k\right) <m\left( k\right)
<n\left( k+1\right) $,
\[
M\left( x_{n\left( k\right) },x_{m\left( k\right) -1},t_{0}\right)
\geq1-\varepsilon_{0}>M\left( x_{n\left( k\right) },x_{m\left( k\right)
},t_{0}\right)
\]
and also
\begin{equation}
\lim_{k\rightarrow\infty}M\left( x_{n\left( k\right) },x_{m\left(
k\right) },t_{0}\right) =\lim_{k\rightarrow\infty}M\left( x_{n\left(
k\right) -1},x_{m\left( k\right) -1},t_{0}\right) =1-\varepsilon_{0}.
\label{K38 - 21 prop}
\end{equation}
Define $L=1-\varepsilon_{0}\in\left( 0,1\right) $ and, for all $t>0$ and
all $k\in\mathbb{N}$,
\[
\phi_{k}\left( t\right) =M(x_{n\left( k\right) },x_{m\left( k\right)
},t)\quad\text{and}\quad\psi_{k}\left( t\right) =M(x_{n\left( k\right)
-1},x_{m\left( k\right) -1},t).
\]
We claim that $\{(\phi_{k},\psi_{k})\}_{k\in\mathbb{N}}$, $t_{0}$ and $L$
satisfy all hypotheses in condition $(\mathcal{F}_{3})$. On the one hand,
$\{(\phi_{k},\psi_{k})\}$ is a $(T,\mathcal{S}^{\ast},M)$-sequence because
$\phi_{k}=M(x_{n\left( k\right) },x_{m\left( k\right) },\cdot
)=M(Tx_{n\left( k\right) -1},Tx_{m\left( k\right) -1},\cdot)$, $\psi
_{k}=M(x_{n\left( k\right) -1},x_{m\left( k\right) -1},\cdot)$,
$x_{n\left( k\right) -1}\mathcal{S}^{\ast}x_{m\left( k\right) -1}$ and
$Tx_{n\left( k\right) -1}=x_{n\left( k\right) }\mathcal{S}^{\ast
}x_{m\left( k\right) }=Tx_{m\left( k\right) -1}$ for all $k\in\mathbb{N}$.
Furthermore, $L=1-\varepsilon_{0}>M(x_{n\left( k\right) },x_{m\left(
k\right) },t_{0})=\phi_{k}\left( t_{0}\right) $ for all $k\in\mathbb{N}$.
Also (\ref{K38 - 21 prop}) means that
\[
\lim_{k\rightarrow\infty}\phi_{k}\left( t_{0}\right) =\lim_{k\rightarrow
\infty}\psi_{k}\left( t_{0}\right) =1-\varepsilon_{0}=L.
\]
As $T$ is a fuzzy ample spectrum contraction with respect to $\left(
M,\mathcal{S},\theta\right) $, condition $(\mathcal{F}_{3})$ guarantees that
$1-\varepsilon_{0}=L=1$, which is a contradiction because $\varepsilon_{0}>0
$. This contradiction shows that $\{x_{n}\}$ is an $M$-Cauchy sequence in
$(X,M)$. Since $\{x_{n+1}=Tx_{n}\}\subseteq TX$ and $T(X)$ is $(\mathcal{S},M)
$-strictly-increasing-precomplete, then there exists a set $Z$ such that
$TX\subseteq Z\subseteq X$ and $Z$ is $(\mathcal{S},M)$
-strictly-increasing-complete. Since $\{x_{n+1}=Tx_{n}\}\subseteq TX\subseteq
Z$ and $\{x_{n}\}$ is an $\mathcal{S}$-strictly-increasing $M$-Cauchy
sequence, then there is $z\in Z\subseteq X$ such that $\{x_{n}\}$
$M$-converges to $z$. It only remains to prove that, under any of the
conditions $(a)$, $(b)$ or $(c)$, $z$ is a fixed point of $T$.
\begin{description}
\item[$(a)$] Suppose that $T$ is $\mathcal{S}$-strictly-increasing-continuous.
As $\{x_{n}\}$ is $\mathcal{S}$-strictly-increasing and $\{x_{n}\}$
$M$-converges to $z$, then $\{Tx_{n}\}$ $M$-converges to $Tz$. However, as
$Tx_{n}=x_{n+1}$ for all $n\in\mathbb{N}$ and the $M$-limit in a KM-FMS is
unique, then $Tz=z$, that is, $z$ is a fixed point of $T$.
\item[$(b)$] Suppose that $\left( X,M\right) $ is $\mathcal{S}
$-strictly-increasing-regular and condition $(\mathcal{F}_{5})$ holds. In this
case, since $\{x_{n}\}$ is an $\mathcal{S}$-strictly-increasing sequence such
that $\{x_{n}\}\rightarrow z\in X$, it follows that $x_{n}\mathcal{S}z$ for
all $n\in\mathbb{N}$. Taking into account that the sequence $\{x_{n}\}$ is
infinite, then there is $n_{0}\in\mathbb{N}$ such that $x_{n}\neq z$ and
$x_{n}\neq Tz$ for all $n\geq n_{0}-1$. Moreover, as $T$ is $\mathcal{S}
$-nondecreasing, then $x_{n+1}=Tx_{n}\mathcal{S}Tz$, which means that
$x_{n}\mathcal{S}^{\ast}z$ and $x_{n}\mathcal{S}^{\ast}Tz$ for all $n\geq
n_{0}$. Using that $T$ is a fuzzy ample spectrum contraction with respect to
$\left( M,\mathcal{S},\theta\right) $, condition $(\mathcal{F}_{4})$ implies
that
\[
\theta\left( M(x_{n+1},Tz,t),M(x_{n},z,t)\right) =\theta\left(
M(Tx_{n},Tz,t),M(x_{n},z,t)\right) \geq0
\]
for all $n\geq n_{0}$ and all $t>0$. Taking into account that $\{M(x_{n}
,z,t)\}_{n\geq n_{0}}\rightarrow1$ for all $t>0$, assumption $(\mathcal{F}
_{5})$ applied to the $(T,\mathcal{S}^{\ast},M)$-sequence
\[
\left\{ \,\left( \,\phi_{n}=M(Tx_{n},Tz,\cdot),\,\psi_{n}=M(x_{n}
,z,\cdot)\,\right) \,\right\} _{n\geq n_{0}}
\]
leads to $\{M(x_{n+1},Tz,t)\}_{n\geq n_{0}}\rightarrow1$ for all $t>0$, that
is, $\{x_{n}\}_{n\geq n_{0}}$ $M$-converges to $Tz$. As the $M$-limit in a
KM-FMS is unique, then $Tz=z$, so $z$ is a fixed point of $T$.
\item[$(c)$] Suppose that $\left( X,M\right) $ is $\mathcal{S}
$-strictly-increasing-regular and $\theta\left( t,s\right) \leq t-s$ for all
$\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$. This case follows
from $(b)$ taking into account item \ref{K38 - 30 rem def FAEC, item 2} of
Remark \ref{K38 - 30 rem def FAEC}.
\end{description}
\end{proof}
Theorem \ref{K38 - 31 th main FAEC} guarantees the existence of fixed points
of $T$. In the following result we describe some additional assumptions in
order to ensure that such fixed point is unique.
\begin{theorem}
\label{K38 - 31 th main FAEC uniqueness}Under the hypotheses of Theorem
\ref{K38 - 31 th main FAEC}, assume that the property $(\mathcal{F}
_{2}^{\prime}) $ is fulfilled and that each pair of fixed points
$x,y\in\operatorname*{Fix}(T)$ of $T$ is associated to another point $z\in X$
which is, at the same time, $\mathcal{S}$-comparable to $x$ and to $y$. Then
$T$ has a unique fixed point.
\end{theorem}
\begin{proof}
Let $x,y\in\operatorname*{Fix}(T)$ be two fixed points of $T$. By hypothesis,
there exists $z_{0}\in X$ such that $z_{0}$ is, at the same time,
$\mathcal{S}$-comparable to $x$ and $\mathcal{S}$-comparable to $y$. We claim
that the $T$-Picard sequence $\{z_{n}\}$ of $T$ starting from $z_{0}$
$M$-converges, at the same time, to $x$ and to $y$ (so we will deduce that
$x=y$). We check the first statement. To prove it, we consider two possibilities.
\begin{itemize}
\item Suppose that there is $n_{0}\in\mathbb{N}$ such that $z_{n_{0}}=x$. In
this case, $z_{n_{0}+1}=Tz_{n_{0}}=Tx=x$. Repeating this argument, $z_{n}=x$
for all $n\geq n_{0}$, so $\{z_{n}\}$ $M$-converges to $x$.
\item Suppose that $z_{n}\neq x$ for all $n\in\mathbb{N}$. Since $z_{0}$ is
$\mathcal{S}$-comparable to $x$, assume, for instance, that $z_{0}\mathcal{S}x
$ (the case $x\mathcal{S}z_{0}$ is similar). As $z_{0}\mathcal{S}x$, $T$ is
$\mathcal{S}$-nondecreasing and $Tx=x$, then $z_{n}\mathcal{S}x$ for all
$n\in\mathbb{N}$. Therefore $z_{n}\mathcal{S}^{\ast}x$ and $Tz_{n}
\mathcal{S}^{\ast}Tx$ for all $n\in\mathbb{N}$. Using the contractivity
condition $(\mathcal{F}_{4})$, for all $n\in\mathbb{N}$ and all $t>0$,
\[
0\leq\theta(M(Tz_{n},Tx,t),M(z_{n},x,t))=\theta(M(T^{n+1}z_{0},T^{n+1}
x,t),M(T^{n}z_{0},T^{n}x,t)).
\]
It follows from $(\mathcal{F}_{2}^{\prime})$ that $\{M(z_{n},x,t)\}=\{M(T^{n}
z_{0},T^{n}x,t)\}\rightarrow1$ for all $t>0$, that is, $\{z_{n}\}$
$M$-converges to $x$.
\end{itemize}
In any case, $\{z_{n}\}\rightarrow x$ and, similarly, $\{z_{n}\}\rightarrow y
$, so $x=y$ and $T$ has a unique fixed point.
\end{proof}
\begin{corollary}
Under the hypotheses of Theorem \ref{K38 - 31 th main FAEC}, assume that
condition $(\mathcal{F}_{2}^{\prime})$ holds and each two fixed points of $T$
are $\mathcal{S}$-comparable. Then $T$ has a unique fixed point.
\end{corollary}
Immediately we can deduce that Theorems \ref{K38 - 31 th main FAEC} and
\ref{K38 - 31 th main FAEC uniqueness} remain true if we replace the
hypothesis that $(X,M)$ satisfies the property $\mathcal{NC}$ by the fact that
$(X,M,\ast)$ is a non-Archimedean KM-FMS (recall Theorem
\ref{K38 - 24 th Non-Arch impies NC}). Given its great importance in this
manuscript, we state here the complete result.
\begin{corollary}
Let $\left( X,M,\ast\right) $ be a non-Archimedean KM-FMS endowed with a
transitive binary relation $\mathcal{S}$ and let $T:X\rightarrow X$ be an
$\mathcal{S}$-nondecreasing fuzzy ample spectrum contraction with respect to
$\left( M,\mathcal{S},\theta\right) $. Suppose that $\ast$ is continuous at
the $1$-boundary, $T(X)$ is $(\mathcal{S},M)$-strictly-increasing-precomplete
and there exists a point $x_{0}\in X$ such that $x_{0}\mathcal{S}Tx_{0}$. Also
assume that, at least, one of the following conditions is fulfilled:
\begin{description}
\item[$(a)$] $T$ is $\mathcal{S}$-strictly-increasing-continuous.
\item[$(b)$] $\left( X,M\right) $ is $\mathcal{S}$
-strictly-increasing-regular and condition $(\mathcal{F}_{5})$ holds.
\item[$(c)$] $\left( X,M\right) $ is $\mathcal{S}$
-strictly-increasing-regular and $\theta\left( t,s\right) \leq t-s$ for all
$\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$.
\end{description}
Then the Picard sequence of $T~$based on $x_{0}$ converges to a fixed point of
$T$ (in particular, $T$ has at least one fixed point).
In addition to this, assume that property $(\mathcal{F}_{2}^{\prime})$ is
fulfilled and that for all $x,y\in\operatorname*{Fix}(T)$, there exists $z\in
X$ which is $\mathcal{S}$-comparable, at the same time, to $x$ and to $y$.
Then $T$ has a unique fixed point.
\end{corollary}
We highlight that the previous results also hold in GV-FMS (but we omit the
corresponding statements).
\subsection{\textbf{Type-2 fuzzy ample spectrum contractions}}
As we commented in Remark \ref{K38 - 32 rem Km to GV}, Definition
\ref{definition KM-space} is so general that such a class of fuzzy spaces can
satisfy
\[
M(x,y,t)=0\text{\quad for all }t>0\text{\quad when }x\neq y,
\]
which corresponds to an infinite distance. In such cases, although a sequence
$\{x_{n}\}$ satisfies
\[
\theta\left( M\left( x_{n+1},x_{n+2},t\right) ,M\left( x_{n}
,x_{n+1},t\right) \right) \geq0\quad\text{for all }n\in\mathbb{N}\text{ and
all }t>0,
\]
it is impossible to deduce that $\lim_{n\rightarrow\infty}M\left(
x_{n},x_{n+1},t\right) =1$ for all $t>0$. Therefore, in order to cover the
fixed point theorems that were proved under this assumption, we must
slightly modify the conditions that a fuzzy ample spectrum contraction
satisfies. To this end, we consider the following new type of contractions.
\begin{definition}
\label{K38 - 53 def type-2 FASC}Let $\left( X,M\right) $ be a fuzzy space,
let $T:X\rightarrow X$ be a self-mapping, let $\mathcal{S}$ be a binary
relation on $X$, let $B\subseteq\mathbb{R}^{2}$ be a subset and let
$\theta:B\rightarrow\mathbb{R}$ be a function. We will say that
$T:X\rightarrow X$ is a \emph{type-2 fuzzy ample spectrum contraction w.r.t.
}$\left( M,\mathcal{S},\theta\right) $ if it satisfies properties
$(\mathcal{F}_{1})$, $(\mathcal{F}_{3})$ and the following ones:
\begin{description}
\item[$(\widetilde{\mathcal{F}}_{2})$] If $\{x_{n}\}\subseteq X$ is a Picard
$\mathcal{S}$-strictly-increasing sequence of $T$ such that, for all
$n\in\mathbb{N}$ and all $t>0$,
\[
M\left( x_{n},x_{n+1},t\right) >0\quad\text{and}\quad\theta\left( M\left(
x_{n+1},x_{n+2},t\right) ,M\left( x_{n},x_{n+1},t\right) \right) \geq0,
\]
then $\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =1$ for all
$t>0$.
\item[$(\widetilde{\mathcal{F}}_{4})$] If {$x,y\in X$ are such that
$x\mathcal{S}^{\ast}y$ and $Tx\mathcal{S}^{\ast}Ty$ and }$t_{0}>0$ is such
that $M(x,y,t_{0})>0$, then
\[
{\theta\left( M(Tx,Ty,t_{0}),M(x,y,t_{0})\right) \geq0.}
\]
\end{description}
\end{definition}
In some cases, we will also consider the following properties.
\begin{description}
\item[$(\widetilde{\mathcal{F}}_{2}^{\prime})$] If $\{x_{n}\},\{y_{n}
\}\subseteq X$ are two $T$-Picard sequences such that, for all $n\in\mathbb{N}
$ and all $t>0$,
\[
x_{n}\mathcal{S}^{\ast}y_{n},\quad{M(x_{n},y_{n},t)>0}\quad\text{and}
\quad{\theta(M\left( x_{n+1},y_{n+1},t\right) ,M\left( x_{n},y_{n}
,t\right) )\geq0}
\]
then $\lim_{n\rightarrow\infty}{M}\left( x_{n},y_{n},t\right) =1$ for all
$t>0$.
\item[$(\widetilde{\mathcal{F}}_{5})$] If $\{\left( \phi_{n},\psi_{n}\right)
\}$ is a $(T,\mathcal{S}^{\ast},M)$-sequence such that $\phi_{n}(t)>0$,
$\psi_{n}(t)>0$ and $\{\psi_{n}\left( t\right) \}\rightarrow1$ for all
$t>0$, and also $\theta(\phi_{n}(t),\psi_{n}(t))\geq0$ for all $n\in
\mathbb{N}$ and all $t>0$, then $\{\phi_{n}\left( t\right) \}\rightarrow1$
for all $t>0$.
\end{description}
It is clear that $(\mathcal{F}_{i})\Rightarrow(\widetilde{\mathcal{F}}_{i})$
for all $i\in\{2,4,5\}$ and also $(\mathcal{F}_{2}^{\prime})\Rightarrow
(\widetilde{\mathcal{F}}_{2}^{\prime})$, so each type-1 fuzzy ample spectrum
contraction is a type-2 fuzzy ample spectrum contraction, that is, the notion
of type-2 fuzzy ample spectrum contraction is more general than the notion of
type-1 fuzzy ample spectrum contraction.
\begin{lemma}
\label{K38 - 52 lem equal notion in GV-FMS}In a GV-FMS, the notions of type-1
and type-2 fuzzy ample spectrum contractions coincide.
\end{lemma}
\begin{proof}
It follows from the fact that, in a GV-FMS, $M(x,y,t)>0$ for all $x,y\in X$
and all $t>0$. Hence the respective properties $(\mathcal{F}_{i})$ and
$(\widetilde{\mathcal{F}}_{i})$ are equal.
\end{proof}
However, this generality forces us to assume additional constraints in order
to guarantee existence and uniqueness of fixed points, as we show in the next result.
\begin{theorem}
\label{K38 - 33 th main FAEC type-2}Let $\left( X,M,\ast\right) $ be a
KM-FMS endowed with a transitive binary relation $\mathcal{S}$ and let
$T:X\rightarrow X$ be an $\mathcal{S}$-nondecreasing type-2 fuzzy ample
spectrum contraction with respect to $\left( M,\mathcal{S},\theta\right) $.
Suppose that $\ast$ is continuous at the $1$-boundary, $T(X)$ is
$(\mathcal{S},M)$-strictly-increasing-precomplete and there exists a point
$x_{0}\in X$ such that $x_{0}\mathcal{S}Tx_{0}$ and $M(x_{0},Tx_{0},t)>0$ for
all $t>0$. Also suppose that the function $\theta$ satisfies:
\begin{equation}
(t,s)\in B,\quad\theta\left( t,s\right) \geq0,\quad s>0\quad\Rightarrow\quad
t>0. \label{K38 - 34 prop theta}
\end{equation}
Assume that, at least, one of the following conditions is fulfilled:
\begin{description}
\item[$\left( a\right) $] $T$ is $\mathcal{S}$-strictly-increasing-continuous.
\item[$\left( b\right) $] $\left( X,M\right) $ is metrically-$\mathcal{S}
$-strictly-increasing-regular and condition $(\widetilde{\mathcal{F}}_{5})$ holds.
\item[$\left( c\right) $] $\left( X,M\right) $ is metrically-$\mathcal{S}
$-strictly-increasing-regular and $\theta\left( t,s\right) \leq t-s$ for all
$\left( t,s\right) \in B\cap(\mathbb{I}\times\mathbb{I})$.
\end{description}
If $\left( X,M\right) $\ satisfies the property $\mathcal{NC}$, then the
Picard sequence of $T~$based on $x_{0}$ converges to a fixed point of $T$. In
particular, $T$ has at least one fixed point.
\end{theorem}
\begin{proof}
We can repeat many of the arguments shown in the proof of Theorem
\ref{K38 - 31 th main FAEC}, but we must refine them. Let $x_{0}\in X$ be the
point such that $x_{0}\mathcal{S}Tx_{0}$ and $M(x_{0},Tx_{0},t)>0$ for all
$t>0$. Let $\{x_{n}\}$ be the Picard sequence of $T$ starting from $x_{0}$.
Assume that $x_{n}\neq x_{n+1}$ for all $n\in\mathbb{N}_{0}$ and
$x_{n}\mathcal{S}x_{m}$ for all $n,m\in\mathbb{N}_{0}$ such that $n<m$. Since
$T$ is a type-2 fuzzy ample spectrum contraction with respect to $\left(
M,\mathcal{S},\theta\right) $ and $x_{0}\mathcal{S}^{\ast}Tx_{0}=x_{1}$,
$Tx_{0}=x_{1}\mathcal{S}^{\ast}x_{2}=Tx_{1}$ and $M(x_{0},x_{1},t)=M(x_{0}
,Tx_{0},t)>0$, then
\[
\theta\left( M(x_{1},x_{2},t),M(x_{0},x_{1},t)\right) =\theta\left(
M(Tx_{0},Tx_{1},t),M(x_{0},x_{1},t)\right) \geq0
\]
for all $t>0$. Using property (\ref{K38 - 34 prop theta}), it follows that,
for all $t>0$,
\[
\theta\left( M(x_{1},x_{2},t),M(x_{0},x_{1},t)\right) \geq0,\quad
M(x_{0},x_{1},t)>0\quad\Rightarrow\quad M(x_{1},x_{2},t)>0.
\]
By induction, it can be proved that
\begin{equation}
M(x_{n},x_{n+1},t)>0\quad\text{for all }t>0\text{ and all }n\in\mathbb{N}_{0}
\label{K38 - 35 prop}
\end{equation}
and
\[
\theta\left( M(x_{n+1},x_{n+2},t),M(x_{n},x_{n+1},t)\right) \geq0
\quad\text{for all }t>0\text{ and all }n\in\mathbb{N}_{0}.
\]
Hence we can apply property $(\widetilde{\mathcal{F}}_{2})$ to deduce that
\[
\lim_{n\rightarrow\infty}M\left( x_{n},x_{n+1},t\right) =
1\quad\text{for all }t>0.
\]
Following the same arguments given in the proof of Theorem
\ref{K38 - 31 th main FAEC}, we can reduce to the case in which $\{x_{n}\}$
is infinite, in which we know that there is $z\in X$ such that $\{x_{n}\}$
$M$-converges to $z$. It only remains to prove that, under any of conditions
$(a)$, $(b)$ or $(c)$, $z$ is a fixed point of $T$. In case $(a)$, the proof
of Theorem \ref{K38 - 31 th main FAEC} can be repeated.
\begin{description}
\item[$(b)$] Suppose that $\left( X,M\right) $ is metrically-$\mathcal{S}
$-strictly-increasing-regular and condition $(\widetilde{\mathcal{F}}_{5})$
holds. In this case, since $\{x_{n}\}$ is an $\mathcal{S}$-strictly-increasing
sequence such that $\{x_{n}\}\rightarrow z\in X$ and (\ref{K38 - 35 prop})
holds, it follows that
\[
x_{n}\mathcal{S}z\quad\text{and}\quad M(x_{n},z,t)>0\quad\text{for all }
n\in\mathbb{N}\text{ and all }t>0.
\]
Taking into account that the sequence $\{x_{n}\}$ is infinite, then there is
$n_{0}\in\mathbb{N}$ such that $x_{n}\neq z$ and $x_{n}\neq Tz$ for all $n\geq
n_{0}-1$. Moreover, as $T$ is $\mathcal{S}$-nondecreasing, then $x_{n+1}
=Tx_{n}\mathcal{S}Tz$, which means that $x_{n}\mathcal{S}^{\ast}z$ and
$x_{n}\mathcal{S}^{\ast}Tz$ for all $n\geq n_{0}$. Using that $T$ is a type-2
fuzzy ample spectrum contraction with respect to $\left( M,\mathcal{S}
,\theta\right) $ and $M(x_{n},z,t)>0$ for all $n\in\mathbb{N}$ and all $t>0
$, condition $(\widetilde{\mathcal{F}}_{4})$ implies that
\[
\theta\left( M(x_{n+1},Tz,t),M(x_{n},z,t)\right) =\theta\left(
M(Tx_{n},Tz,t),M(x_{n},z,t)\right) \geq0
\]
for all $n\geq n_{0}$ and all $t>0$. Furthermore, property
(\ref{K38 - 34 prop theta}) ensures that
\[
\theta\left( M(x_{n+1},Tz,t),M(x_{n},z,t)\right) \geq0,\quad M(x_{n}
,z,t)>0\quad\Rightarrow\quad M(x_{n+1},Tz,t)>0
\]
for all $n\geq n_{0}$ and all $t>0$. Taking into account that $\{M(x_{n}
,z,t)\}_{n\geq n_{0}}\rightarrow1$ for all $t>0$, assumption $(\widetilde
{\mathcal{F}}_{5})$ applied to the $(T,\mathcal{S}^{\ast},M)$-sequence
\[
\left\{ \,\left( \,\phi_{n}=M(Tx_{n},Tz,\cdot),\,\psi_{n}=M(x_{n}
,z,\cdot)\,\right) \,\right\} _{n\geq n_{0}}
\]
leads to $\{M(x_{n+1},Tz,t)\}_{n\geq n_{0}}\rightarrow1$ for all $t>0$, that
is, $\{x_{n}\}_{n\geq n_{0}}$ $M$-converges to $Tz$. As the $M$-limit in a
KM-FMS is unique, then $Tz=z$, so $z$ is a fixed point of $T$.
\end{description}
\end{proof}
In this context, it is also possible to prove a similar uniqueness result.
\begin{theorem}
\label{K38 - 42 th main FAEC type-2 uniqueness}Under the hypotheses of Theorem
\ref{K38 - 33 th main FAEC type-2}, assume that property $(\mathcal{F}
_{2}^{\prime})$ is fulfilled and that for all $x,y\in\operatorname*{Fix}(T)$,
there exists $z\in X$ which is $\mathcal{S}$-comparable, at the same time, to
$x$ and to $y$. Then $T$ has a unique fixed point.
\end{theorem}
\begin{proof}
All arguments of the proof of Theorem \ref{K38 - 31 th main FAEC uniqueness}
can be repeated in this context.
\end{proof}
\begin{corollary}
Theorems \ref{K38 - 33 th main FAEC type-2} and
\ref{K38 - 42 th main FAEC type-2 uniqueness} remain true if we replace the
hypothesis that $(X,M)$ satisfies the property $\mathcal{NC}$ by the fact that
$(X,M,\ast)$ is a non-Archimedean KM-FMS.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{K38 - 24 th Non-Arch impies NC}.
\end{proof}
\section{\textbf{Consequences}}
In this section we show some direct consequences of our main results.
\subsection{\textbf{Mihe\textrm{\c{t}}'s fuzzy }$\psi$\textbf{-contractions}}
In \cite{Mi3} Mihe\c{t} introduced a class of contractions in the setting of
KM-fuzzy metric spaces that attracted much attention. It was defined by
considering the following family of auxiliary functions. Let $\Psi$ be the
family of all continuous and nondecreasing functions $\psi:\mathbb{I}
\rightarrow\mathbb{I}$ satisfying $\psi\left( t\right) >t$ for all
$t\in\left( 0,1\right) $. Notice that if $\psi\in\Psi$, then $\psi\left(
0\right) \geq0$ and $\psi\left( 1\right) =1$, so $\psi\left( t\right)
\geq t$ for all $t\in\mathbb{I}$.
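For instance, $\psi(t)=\sqrt{t}$ is a standard member of $\Psi$ (the choice of example is ours). The following sketch numerically confirms the defining inequality and the derived boundary values on a sample of points.

```python
import math

# A standard member of Psi (our illustrative choice): psi(t) = sqrt(t).
# It is continuous, nondecreasing on [0, 1], and psi(t) > t on (0, 1).

def psi(t):
    return math.sqrt(t)

# psi(t) > t for sampled t in (0, 1)
for k in range(1, 100):
    t = k / 100
    assert psi(t) > t

# Derived boundary facts: psi(0) >= 0 and psi(1) = 1
assert psi(0.0) >= 0.0
assert psi(1.0) == 1.0

print("psi = sqrt belongs to Psi")
```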
\begin{definition}
\label{K38 - 37 def Mihet}\textrm{(Mihe\c{t} \cite{Mi3}, Definition 3.1)}
Given a KM-FMS $\left( X,M,\ast\right) $ (it is assumed that $\ast$ is
continuous), a mapping $T:X\rightarrow X$ is a \emph{fuzzy }$\psi
$\emph{-contraction} if there is $\psi\in\Psi$ such that, for all $x,y\in X$
and all $t>0$,
\begin{equation}
M\left( x,y,t\right) >0\quad\Rightarrow\quad M\left( Tx,Ty,t\right)
\geq\psi\left( M\left( x,y,t\right) \right) .
\label{K38 - 37 def Mihet, prop}
\end{equation}
\end{definition}
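A concrete instance of Definition \ref{K38 - 37 def Mihet} (our own construction, offered only as an illustration): on $X=\mathbb{R}$ with $d(x,y)=|x-y|$ and the standard fuzzy metric $M(x,y,t)=t/(t+|x-y|)$, the map $Tx=x/2$ satisfies \eqref{K38 - 37 def Mihet, prop} with equality for $\psi(t)=2t/(1+t)\in\Psi$, since $t/(t+|x-y|/2)=2t/(2t+|x-y|)=\psi(M(x,y,t))$.

```python
# Illustrative concrete example (our construction, not from the text):
# on X = R with d(x, y) = |x - y| and M(x, y, t) = t/(t + |x - y|),
# the map Tx = x/2 is a fuzzy psi-contraction for psi(t) = 2t/(1 + t):
#   M(Tx, Ty, t) = t/(t + |x-y|/2) = 2t/(2t + |x-y|) = psi(M(x, y, t)).

def M(x, y, t):
    return t / (t + abs(x - y))

def T(x):
    return x / 2

def psi(t):
    return 2 * t / (1 + t)

for x, y in [(0.0, 1.0), (-2.0, 5.0), (3.5, 3.5)]:
    for t in [0.5, 1.0, 10.0]:
        assert abs(M(T(x), T(y), t) - psi(M(x, y, t))) < 1e-12

print("Tx = x/2 is a fuzzy psi-contraction for psi(t) = 2t/(1+t)")
```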
\begin{theorem}
\label{K38 - 38 th Mihet}\textrm{(Mihe\c{t} \cite{Mi3}, Theorem 3.1)} Let
$(X,M,\ast)$ be an $M$-complete non-Archimedean KM-FMS (it is assumed that
$\ast$ is continuous) and let $T:X\rightarrow X$ be a fuzzy $\psi$-contractive
mapping. If there exists $x\in X$ such that $M(x,Tx,t)>0$ for all $t>0$, then
$T$ has a fixed point.
\end{theorem}
We show that the class of fuzzy ample spectrum contractions properly contains
the class of Mihe\c{t}'s $\psi$-contractions.
\begin{theorem}
\label{K38 - 39 th Mihet implies type-2}Given a Mihe\c{t}'s fuzzy $\psi
$-contraction $T:X\rightarrow X$ in a KM-FMS $(X,M,\ast)$, define
$\theta_{\psi}:\mathbb{I}\times\mathbb{I}\rightarrow\mathbb{R}$ by
$\theta_{\psi}\left( t,s\right) =t-\psi\left( s\right) $ for all
$t,s\in\mathbb{I}$. Then $T$ is a type-2 fuzzy ample spectrum contraction
w.r.t. $(M,\mathcal{S}_{X},\theta_{\psi})$ that also satisfies properties
$(\widetilde{\mathcal{F}}_{2}^{\prime})$, $(\widetilde{\mathcal{F}}_{5})$,
(\ref{K38 - 34 prop theta}) and $\theta_{\psi}\left( t,s\right) \leq t-s$
for all $t,s\in\mathbb{I}$.
\end{theorem}
\begin{proof}
Let $T:X\rightarrow X$ be a $\psi$-contraction in a KM-FMS $\left(
X,M,\ast\right) $. Consider on $X$ the trivial binary relation
$\mathcal{S}_{X}$ defined by $x\mathcal{S}_{X}y$ for all $x,y\in X$. Let
$B=\mathbb{I}\times\mathbb{I} $ and define $\theta_{\psi}:\mathbb{I}\times
\mathbb{I}\rightarrow\mathbb{R}$ by $\theta_{\psi}\left( t,s\right)
=t-\psi\left( s\right) $ for all $t,s\in\mathbb{I}$. We claim that $T$ is a
type-2 fuzzy ample spectrum contraction w.r.t. $\left( M,\mathcal{S}_{X}
,\theta_{\psi}\right) $ that also satisfies properties $(\widetilde
{\mathcal{F}}_{2}^{\prime})$, $(\widetilde{\mathcal{F}}_{5})$,
(\ref{K38 - 34 prop theta}) and $\theta_{\psi}\left( t,s\right) \leq t-s$
for all $t,s\in\mathbb{I}$. We check all conditions.
(\ref{K38 - 34 prop theta}) If $t,s\in\mathbb{I}$ are such that $\theta_{\psi
}\left( t,s\right) \geq0$ and $s>0$, then $t-\psi\left( s\right) \geq0$,
so $t\geq\psi\left( s\right) \geq s>0$. In fact, since $\psi\left(
s\right) \geq s$ for all $s\in\mathbb{I}$, then
\[
\theta_{\psi}\left( t,s\right) =t-\psi\left( s\right) \leq t-s\quad
\text{for all }t,s\in\mathbb{I}.
\]
$(\mathcal{F}_{1})$. It is obvious because $B=\mathbb{I}\times\mathbb{I}$.
$(\widetilde{\mathcal{F}}_{4})$. Let $x,y\in X$ be two points such that
$x\mathcal{S}_{X}^{\ast}y$ and $Tx\mathcal{S}_{X}^{\ast}Ty$ and let $t>0$ be
such that $M(x,y,t)>0$. Since $T$ is a fuzzy $\psi$-contraction, by
(\ref{K38 - 37 def Mihet, prop}),
\[
M\left( x,y,t\right) >0\quad\Rightarrow\quad M\left( Tx,Ty,t\right)
\geq\psi\left( M\left( x,y,t\right) \right) .
\]
Therefore
\[
\theta_{\psi}\left( M\left( Tx,Ty,t\right) ,M\left( x,y,t\right) \right)
=M\left( Tx,Ty,t\right) -\psi\left( M\left( x,y,t\right) \right) \geq0.
\]
$(\widetilde{\mathcal{F}}_{2}^{\prime})$. Let $x_{1},x_{2}\in X$ be two points
such that, for all $n\in\mathbb{N}$ and all $t>0$,
\[
T^{n}x_{1}\mathcal{S}^{\ast}T^{n}x_{2},\quad M(T^{n}x_{1},T^{n}x_{2}
,t)>0\quad\text{and}\quad\theta_{\psi}(M(T^{n+1}x_{1},T^{n+1}
x_{2},t),M(T^{n}x_{1},T^{n}x_{2},t))\geq0.
\]
Therefore
\begin{align*}
0 & \leq\theta_{\psi}(M(T^{n+1}x_{1},T^{n+1}x_{2},t),M(T^{n}x_{1}
,T^{n}x_{2},t))\\
& =M(T^{n+1}x_{1},T^{n+1}x_{2},t)-\psi(M(T^{n}x_{1},T^{n}x_{2},t)).
\end{align*}
Hence ${\psi(M(T^{n}x_{1},T^{n}x_{2},t))\leq M(T^{n+1}x_{1},T^{n+1}x_{2},t)}
$, which means that, for all $n\in\mathbb{N}$ and all $t>0$,
\begin{equation}
{0<M(T^{n}x_{1},T^{n}x_{2},t)\leq\psi(M(T^{n}x_{1},T^{n}x_{2},t))\leq
M(T^{n+1}x_{1},T^{n+1}x_{2},t)\leq1.} \label{K38 - 40 prop}
\end{equation}
As a consequence, for each $t>0$, the sequence $\{{M(T^{n}x_{1},T^{n}x_{2}
,t)}\}_{n\in\mathbb{N}}$ is nondecreasing and bounded above. Hence, it is
convergent. If $L(t)=\lim_{n\rightarrow\infty}{M(T^{n}x_{1},T^{n}x_{2},t)}$,
letting $n\rightarrow\infty$ in (\ref{K38 - 40 prop}) and taking into account
that $\psi$ is continuous, we deduce that
\[
L(t)\leq\psi\left( L\left( t\right) \right) \leq L(t).
\]
This means that $\psi\left( L\left( t\right) \right) =L\left( t\right)
$, so $L\left( t\right) \in\{0,1\}$. However, since $L(t)\geq M(T^{n}x_{1},T^{n}x_{2},t)>0$, we conclude that $L(t)=1$. This proves that $\lim_{n\rightarrow\infty}M(T^{n}x_{1},T^{n}x_{2},t)=1$ for all $t>0$.
$(\widetilde{\mathcal{F}}_{2})$. It follows from $(\widetilde{\mathcal{F}}
_{2}^{\prime})$ using $x_{2}=Tx_{1}$.
$(\widetilde{\mathcal{F}}_{3})$. Let $\{\left( \phi_{n},\psi_{n}\right) \}$
be a $(T,\mathcal{S}^{\ast},M)$-sequence and let $t_{0}>0$ be such that
$\{\phi_{n}\left( t_{0}\right) \}$ and $\{\psi_{n}\left( t_{0}\right) \}$
converge to the same limit $L\in\mathbb{I}$, which satisfies $L>\phi
_{n}\left( t_{0}\right) $ and $\theta_{\psi}(\phi_{n}\left( t_{0}\right)
,\psi_{n}\left( t_{0}\right) )\geq0$ for all $n\in\mathbb{N}$. Therefore,
for all $n\in\mathbb{N}$,
\[
0\leq\theta_{\psi}(\phi_{n}\left( t_{0}\right) ,\psi_{n}\left(
t_{0}\right) )=\phi_{n}\left( t_{0}\right) -\psi\left( \psi_{n}\left(
t_{0}\right) \right) ,
\]
so
\[
\psi\left( \psi_{n}\left( t_{0}\right) \right) \leq\phi_{n}\left(
t_{0}\right) \quad\text{for all }n\in\mathbb{N}.
\]
Letting $n\rightarrow\infty$, as $\psi$ is continuous, we deduce that
$\psi\left( L\right) \leq L$, so $L\in\{0,1\}$. However, since $L>\phi
_{n}\left( t_{0}\right) \geq0$, it follows that $L=1$.
$(\widetilde{\mathcal{F}}_{5})$. It follows from the fact that $\theta_{\psi
}\left( t,s\right) =t-\psi\left( s\right) \leq t-s$ for all $t,s\in
\mathbb{I}$.
\end{proof}
The previous result means that every Mihe\c{t} fuzzy $\psi$-contraction in a
KM-FMS is a type-2 fuzzy ample spectrum contraction that also satisfies
properties $(\widetilde{\mathcal{F}}_{2}^{\prime})$, $(\widetilde{\mathcal{F}
}_{5})$, (\ref{K38 - 34 prop theta}) and $\theta_{\psi}\left( t,s\right)
\leq t-s$ for all $t,s\in\mathbb{I}$. Furthermore, we are going to show that
it is also $\mathcal{S}_{X}$-strictly-increasing-continuous.
\begin{lemma}
\label{K38 - 48 lem Mihet contraction is nondecreasing}Every Mihe\c{t} fuzzy
$\psi$-contraction $T:X\rightarrow X$ in a KM-FMS $(X,M,\ast)$ is
$\mathcal{S}_{X}$-strictly-increasing-continuous.
\end{lemma}
\begin{proof}
Let $\{x_{n}\}\subseteq X$ be an $\mathcal{S}_{X}$-strictly-increasing
sequence such that $\{x_{n}\}$ $M$-converges to $z\in X$. Let $t_{0}>0$ be
arbitrary. Since $\lim_{n\rightarrow\infty}M(x_{n},z,t_{0})=1$, there is
$n_{0}\in\mathbb{N}$ such that $M(x_{n},z,t_{0})>0$ for all $n\geq n_{0}$.
Theorem \ref{K38 - 39 th Mihet implies type-2} ensures that $T$ is a type-2
fuzzy ample spectrum contraction w.r.t. $(M,\mathcal{S}_{X},\theta_{\psi})$,
where $\theta_{\psi}:\mathbb{I}\times\mathbb{I}\rightarrow\mathbb{R}$ is given
by $\theta_{\psi}\left( t,s\right) =t-\psi\left( s\right) $ for all
$t,s\in\mathbb{I}$. Applying property $(\widetilde{\mathcal{F}}_{4})$, we
deduce that, for all $n\geq n_{0}$,
\[
0\leq\theta_{\psi}(M(Tx_{n},Tz,t_{0}),M(x_{n},z,t_{0}))=M(Tx_{n}
,Tz,t_{0})-\psi(M(x_{n},z,t_{0})).
\]
As a result, $\psi(M(x_{n},z,t_{0}))\leq M(Tx_{n},Tz,t_{0})\leq1$ for all
$n\geq n_{0}$. Letting $n\rightarrow\infty$, we deduce that
\[
1=\psi(1)=\lim_{n\rightarrow\infty}\psi(M(x_{n},z,t_{0}))\leq\lim
_{n\rightarrow\infty}M(Tx_{n},Tz,t_{0})\leq1,
\]
so $\lim_{n\rightarrow\infty}M(Tx_{n},Tz,t_{0})=1$. Varying $t_{0}>0$, we
deduce that $\{Tx_{n}\}$ $M$-converges to $Tz$, so $T$ is $\mathcal{S}_{X}$-strictly-increasing-continuous.
\end{proof}
We can conclude that Mihe\c{t}'s theorem (Theorem \ref{K38 - 38 th Mihet}) is a particular case of our main results.
\begin{corollary}
Theorem \ref{K38 - 38 th Mihet} follows from Theorems
\ref{K38 - 33 th main FAEC type-2} and
\ref{K38 - 42 th main FAEC type-2 uniqueness}.
\end{corollary}
\begin{proof}
Let $(X,M,\ast)$ be an $M$-complete non-Archimedean KM-FMS (it is assumed that
$\ast$ is continuous) and let $T:X\rightarrow X$ be a fuzzy $\psi$-contractive
mapping. Suppose that there exists $x_{0}\in X$ such that $M(x_{0}
,Tx_{0},t)>0$ for all $t>0$. Given $\psi\in\Psi$, define $\theta_{\psi
}\left( t,s\right) =t-\psi\left( s\right) $ for all $t,s\in\mathbb{I}$.
Theorem \ref{K38 - 39 th Mihet implies type-2} guarantees that $T$ is a type-2
fuzzy ample spectrum contraction w.r.t. $(M,\mathcal{S}_{X},\theta_{\psi})$
that also satisfies properties $(\widetilde{\mathcal{F}}_{2}^{\prime})$,
$(\widetilde{\mathcal{F}}_{5})$, (\ref{K38 - 34 prop theta}) and $\theta
_{\psi}\left( t,s\right) \leq t-s$ for all $t,s\in\mathbb{I}$. Notice that
$T(X)$ is $(\mathcal{S}_{X},M)$-strictly-increasing-precomplete because $X$ is
$M$-complete. By Lemma \ref{K38 - 48 lem Mihet contraction is nondecreasing},
$T$ is $\mathcal{S}_{X}$-strictly-increasing-continuous. Item $(a)$ of Theorem
\ref{K38 - 33 th main FAEC type-2} shows that $T$ has at least one
fixed point. Furthermore, as $(\widetilde{\mathcal{F}}_{2}^{\prime})$ holds
and any two fixed points of $T$ are $\mathcal{S}_{X}$-comparable, then Theorem
\ref{K38 - 42 th main FAEC type-2 uniqueness} implies that $T$ has a unique
fixed point.
\end{proof}
\begin{example}
Mihe\c{t}'s $\psi$-contractions include Radu's contractions \cite{Radu}
(which, in turn, generalize Gregori and Sapena's contractions
\cite{GrSa}), satisfying:
\[
M(Tx,Ty,t)\geq\frac{M(x,y,t)}{M(x,y,t)+k(1-M(x,y,t))}\quad\text{for all
}x,y\in X\text{ and all }t>0,
\]
where $k\in\left( 0,1\right) $. Therefore, if we consider the function
$\theta:\mathbb{I}\times\mathbb{I}\rightarrow\mathbb{R}$ given, for all
$t,s\in\mathbb{I}$, by
\[
\theta\left( t,s\right) =t-\frac{s}{s+k\left( 1-s\right) },
\]
then every Radu contraction is also a fuzzy ample spectrum contraction
associated to $\theta$.
\end{example}
\begin{remark}
In \cite{Mi3} Mihe\c{t} posed the following question: \emph{Does Theorem 3.1
remain true if \textquotedblleft non-Archimedean fuzzy metric
space\textquotedblright\ is replaced by \textquotedblleft fuzzy metric
space\textquotedblright?} Here we cannot give a general answer, but we can say
that if $(X,M)$ satisfies the property $\mathcal{NC}$ and $\ast$ is continuous
at the $1$-boundary, then his Theorem 3.1 remains true.
\end{remark}
\begin{remark}
In order to conclude the current subsection, we wish to highlight one of the main
advantages of the contractivity condition described in $(\mathcal{F}_{4})$,
that is,
\[
{\theta\left( M(Tx,Ty,t),M(x,y,t)\right) \geq0}
\]
\emph{versus} the Mihe\c{t}'s inequality
\[
M\left( Tx,Ty,t\right) \geq\psi\left( M\left( x,y,t\right) \right) .
\]
To explain it, let us write $u=M\left( x,y,t\right) $ and $v=M\left(
Tx,Ty,t\right) $. Then the first inequality is equivalent to
\[
{\theta\left( v,u\right) \geq0,}
\]
and the second one can be expressed as
\[
v\geq\psi(u).
\]
In this sense, Mihe\c{t}'s inequality can be interpreted as a condition in
separate variables, that is, $u$ and $v$ are placed on distinct sides of the
inequality. However, from the inequality ${\theta\left( v,u\right) \geq0}$,
in general, it is impossible to deduce a relationship in separate variables
such as $v\geq\psi(u)$. As a consequence, it is often easier to check that a
self-mapping $T:X\rightarrow X$ satisfies a general condition such as
${\theta\left( v,u\right) \geq0}$ than a more restrictive
contractivity condition such as $v\geq\psi(u)$.
To illustrate this advantage, we show how canonical examples of contractions
in the setting of fixed point theory, that is, Banach's contractions in metric
spaces, can be easily seen as fuzzy ample spectrum contractions but, in
general, it is complex to prove that they are also Mihe\c{t}'s fuzzy $\psi$-contractions.
\end{remark}
\begin{example}
Let us consider the fuzzy metric $M^{d}$ on $X=\left[ 0,\infty\right) $
associated to the Euclidean metric $d(x,y)=\left\vert x-y\right\vert $ for all
$x,y\in X$. Given $\lambda\in\left( 0,1\right) $, let $T:X\rightarrow X$ be
the self-mapping given by
\[
Tx=\lambda\ln\left( 1+x\right) \qquad\text{for all }x\in X\text{.}
\]
Although $T$ is a Banach contraction with constant $\lambda$,
that is, $d(Tx,Ty)\leq\lambda d(x,y)$ for all $x,y\in X$, in general, it is
not easy to prove that $T$ is a Mihe\c{t} $\psi$-contraction because we must
determine a function $\psi:\mathbb{I}\rightarrow\mathbb{I}$, satisfying
certain properties, such that
\[
\frac{t}{t+\lambda\left\vert \, \ln\dfrac{1+x}{1+y} \, \right\vert }\geq\psi\left(
\frac{t}{t+\left\vert x-y\right\vert }\right) \qquad\text{for all }x,y\in
X\text{ and all }t>0\text{.}
\]
To show that $T$ is a fuzzy contraction in $(X,M^{d})$ it could be better to
employ other methodologies, involving terms such as $M^{d}(x,y,\lambda t)$,
rather than Mihe\c{t}'s procedure. However, by handling the function $\theta$
defined as
\[
\theta\left( t,s\right) =\lambda\frac{1-s}{s}-\frac{1-t}{t}\qquad\text{for
all }t,s\in\left( 0,1\right] ,
\]
it can be directly checked that $T$ is a fuzzy ample spectrum contraction
because, for all $x,y\in X$ and all $t>0$:
\begin{align*}
\theta(M^{d}\left( Tx,Ty,t\right) ,M^{d}\left( x,y,t\right) ) =
\lambda \, \frac{~1-\dfrac{t}{t+d(x,y)}~}{\dfrac{t}{t+d(x,y)}}-\frac{~1-\dfrac
{t}{t+d(Tx,Ty)}~}{\dfrac{t}{t+d(Tx,Ty)}} =\frac{\lambda d(x,y)-d(Tx,Ty)}{t}.
\end{align*}
\end{example}
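The closing identity in the example can also be checked numerically. The following Python sketch (not part of the paper; it only samples random points and uses the standard fuzzy metric $M^{d}(x,y,t)=t/(t+d(x,y))$ appearing in the displayed computation) verifies that $\theta(M^{d}(Tx,Ty,t),M^{d}(x,y,t))\geq 0$ for $Tx=\lambda\ln(1+x)$.

```python
import math
import random

lam = 0.5  # the contraction constant lambda in (0, 1)

def Md(x, y, t):
    # standard fuzzy metric associated to d(x, y) = |x - y|
    return t / (t + abs(x - y))

def T(x):
    # the Banach contraction Tx = lambda * ln(1 + x)
    return lam * math.log(1.0 + x)

def theta(t, s):
    # theta(t, s) = lambda*(1 - s)/s - (1 - t)/t on (0, 1]
    return lam * (1.0 - s) / s - (1.0 - t) / t

# sample points of X = [0, infinity) and times t > 0 and check (F4)
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(0.0, 50.0), random.uniform(0.0, 50.0)
    t = random.uniform(0.01, 10.0)
    val = theta(Md(T(x), T(y), t), Md(x, y, t))
    assert val >= -1e-9  # equals (lam*d(x,y) - d(Tx,Ty)) / t >= 0
```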
\subsection{\textbf{Altun and Mihe\textrm{\c{t}}'s fuzzy contractions}}
In \cite{AlMi} Altun and Mihe\c{t} introduced the following kind of fuzzy
contractions in the setting of ordered KM-FMS. Recall that a \emph{partial
order on a set }$X$ is a binary relation $\mathcal{S}$ on $X$ which is
reflexive, antisymmetric and transitive.
\begin{theorem}
\textrm{(Altun and Mihe\c{t} \cite{AlMi}, Theorem 2.4)} Let $(X,M,\ast)$ be a
complete non-Archimedean KM-FMS (it is assumed that $\ast$ is continuous) and
let $\preceq$ be a partial order on $X$. Let $\psi:\mathbb{I}\rightarrow
\mathbb{I}$ be a continuous mapping such that $\psi(t)>t$ for all $t\in\left(
0,1\right) $. Also let $T:X\rightarrow X$ be a nondecreasing mapping w.r.t.
$\preceq$ with the property
\[
M(Tx,Ty,t)\geq\psi(M(x,y,t))\quad\text{for all }t>0\text{ and all }x,y\in
X\text{ such that }x\preceq y.
\]
Suppose that either:
\begin{description}
\item[$(a)$] $T$ is continuous,\quad or
\item[$(b)$] $x_{n}\preccurlyeq x$ for all $n\in\mathbb{N}$ whenever
$\{x_{n}\}\subseteq X$ is a nondecreasing sequence with $\{x_{n}\}\rightarrow
x\in X$
\end{description}
hold. If there exists $x_{0}\in X$ such that
\[
x_{0}\preccurlyeq Tx_{0}\quad\text{and}\quad\psi(M(x_{0},Tx_{0},t))>0\quad
\text{for each }t>0,
\]
then $T$ has a fixed point.
\end{theorem}
Obviously, the previous theorem can be interpreted as a version of Theorem
\ref{K38 - 38 th Mihet} in which a partial order (which is a transitive
binary relation) is employed. Notice that here the function $\psi$ is not
necessarily nondecreasing, but this property has not been used in the
arguments of the proofs of the previous subsection.
\subsection{\textbf{Fuzzy ample spectrum contractions by using admissible mappings}}
Following \cite{SaVeVe}, when a fuzzy space $(X,M)$ is endowed with a
function $\alpha:X\times X\rightarrow\left[ 0,\infty\right) $, the following
notions can be introduced:
\begin{itemize}
\item the function $\alpha$ is \emph{transitive} if $\alpha(x,z)\geq1$ for
each point $x,z\in X$ for which there is $y\in X$ such that $\alpha(x,y)\geq1$
and $\alpha(y,z)\geq1$;
\item a mapping $T:X\rightarrow X$ is $\alpha$\emph{-admissible} whenever
$\alpha\left( x,y\right) \geq1$ implies $\alpha\left( Tx,Ty\right) \geq1$,
where $x,y\in X$;
\item a sequence $\{x_{n}\}\subseteq X$ is $\alpha$\emph{-nondecreasing} if
$\alpha(x_{n},x_{n+1})\geq1$ for all $n\in\mathbb{N}$;
\item a sequence $\{x_{n}\}\subseteq X$ is $\alpha$\emph{-strictly-increasing}
if $\alpha(x_{n},x_{n+1})\geq1$ and $x_{n}\neq x_{n+1}$ for all $n\in
\mathbb{N}$;
\item a mapping $T:X\rightarrow X$ is $\alpha$\emph{-nondecreasing-continuous}
if $\{Tx_{n}\}\rightarrow Tz$ for every $\alpha$-nondecreasing sequence
$\{x_{n}\}\subseteq X$ such that $\{x_{n}\}\rightarrow z\in X$;
\item a mapping $T:X\rightarrow X$ is $\alpha$
\emph{-strictly-increasing-continuous} if $\{Tx_{n}\}\rightarrow Tz$ for every
$\alpha$-strictly-increasing sequence $\{x_{n}\}\subseteq X$ such that
$\{x_{n}\}\rightarrow z\in X$;
\item a subset $Y\subseteq X$ is $(\alpha,M)$
\emph{-strictly-increasing-complete} if every $\alpha$-strictly-increasing and
$M$-Cauchy sequence $\{y_{n}\}\subseteq Y$ is $M$-convergent to a point of $Y$;
\item a subset $Y\subseteq X$ is $(\alpha,M)$
\emph{-strictly-increasing-precomplete} if there exists a set $Z$ such that
$Y\subseteq Z\subseteq X$ and $Z$ is $(\alpha,M)$-strictly-increasing-complete;
\item $\left( X,M\right) $ is $\alpha$\emph{-strictly-increasing-regular}
if, for every $\alpha$-strictly-increasing sequence $\{x_{n}\}\subseteq X$ such
that $\{x_{n}\}\rightarrow z\in X$, it follows that $\alpha(x_{n},z)\geq1$ for
all $n\in\mathbb{N}$.
\end{itemize}
In this setting, it is possible to introduce the notion of \emph{fuzzy ample
spectrum contraction w.r.t. }$(M,\alpha,\theta)$ by replacing any condition of
the type $x\mathcal{S}y$ by $\alpha(x,y)\geq1$ in Definition
\ref{K38 - 54 def FASC} (or Definition \ref{K38 - 53 def type-2 FASC}). In
this general framework, it is possible to obtain consequences such as the
following one.
\begin{corollary}
Let $\left( X,M,\ast\right) $ be a KM-FMS endowed with a transitive function
$\alpha:X\times X\rightarrow\left[ 0,\infty\right) $ and let $T:X\rightarrow
X$ be an $\alpha$-nondecreasing fuzzy ample spectrum contraction with respect
to $\left( M,\alpha,\theta\right) $. Suppose that $\ast$ is continuous at
the $1$-boundary, $T(X)$ is $(\alpha,M)$-strictly-increasing-precomplete and
there exists a point $x_{0}\in X$ such that $\alpha(x_{0},Tx_{0})\geq1$. Also
assume that, at least, one of the following conditions is fulfilled:
\begin{description}
\item[$(a)$] $T$ is $\alpha$-strictly-increasing-continuous.
\item[$(b)$] $\left( X,M\right) $ is $\alpha$-strictly-increasing-regular
and condition $(\mathcal{F}_{5})$ holds.
\item[$(c)$] $\left( X,M\right) $ is $\alpha$-strictly-increasing-regular
and $\theta\left( t,s\right) \leq t-s$ for all $\left( t,s\right) \in
B\cap(\mathbb{I}\times\mathbb{I})$.
\end{description}
If $\left( X,M\right) $\ satisfies the property $\mathcal{NC}$, then the
Picard sequence of $T$ based on $x_{0}$ converges to a fixed point of $T$. In
particular, $T$ has at least one fixed point.
\end{corollary}
\begin{proof}
It follows from Theorem \ref{K38 - 31 th main FAEC} by considering on $X$ the
binary relation $\mathcal{S}_{\alpha}$ given, for each $x,y\in X$, by
$x\mathcal{S}_{\alpha}y$ if $\alpha(x,y)\geq1$.
\end{proof}
\vspace*{-3mm}
\section{\textbf{Conclusions and future work}}
In this paper we have introduced the notion of \emph{fuzzy ample spectrum
contraction} in the setting of fuzzy metric spaces in the sense of Kramosil
and Mich\'{a}lek in two distinct ways. After that, we have proved some
very general theorems ensuring the existence and uniqueness of fixed
points for such families of fuzzy contractions. We have also illustrated that
these novel classes of fuzzy contractions extend and generalize several
well-known classes of previously studied fuzzy contractions.
In order to attract the attention of researchers in the field of fixed
point theory, in the title we announced that our results were going to be
developed in the setting of non-Archimedean fuzzy metric spaces. However, we
have presented a new property (which we have called $\mathcal{NC}$) in order to
consider more general families of fuzzy metric spaces. By working with
property $\mathcal{NC}$ we have given a partial positive answer to an open
problem posed by Mihe\c{t} in \cite{Mi3}.
In this line of research there is much future work to carry out. For
instance, we have shown that these novel fuzzy contractions generalize the
notion of \emph{ample spectrum contraction} in the setting of metric spaces
under an additional assumption. Hence, the following questions naturally arise:
\emph{Open question 1:} Is every ample spectrum contraction in a metric space
$(X,d)$ a fuzzy ample spectrum contraction in the fuzzy metric space
$(X,M^{d})$?
\emph{Open question 2:} In order to cover other kinds of fuzzy contractions,
is it possible to extend the notion of ample spectrum contraction to the fuzzy
setting by introducing a function in the argument $t$ of $M(x,y,t)$?
\vspace*{3mm}
\section*{Acknowledgments}
A.F. Rold\'{a}n L\'{o}pez de Hierro is grateful to Project TIN2017-89517-P of
Ministerio de Econom\'{\i}a, Industria y Competitividad and also to Junta de
Andaluc\'{\i}a by project FQM-365 of the Andalusian CICYE. This article was
funded by the Deanship of Scientific Research (DSR), King Abdulaziz
University, Jeddah. N. Shahzad acknowledges with thanks DSR for financial support.
\end{document} |
\begin{document}
\maketitle
\centerline{\scshape Duy Phan$^*$}
{\footnotesize
\centerline{Institut f\"ur Mathematik, Leopold-Franzens-Universit\"at Innsbruck}
\centerline{Technikerstra\ss e 13/7, A-6020 Innsbruck, Austria.}
}
\centerline{\scshape Lassi Paunonen}
{\footnotesize
\centerline{Mathematics, Faculty of Information Technology and Communication Sciences,}
\centerline{Tampere University,}
\centerline{PO. Box 692, 33101 Tampere, Finland.}
}
\centerline{(Communicated by the associate editor name)}
\begin{abstract}
We study the robust output regulation of linear boundary control systems by constructing extended systems. The extended systems are established based on solving static differential equations under two new conditions.
We first consider the abstract setting and present finite-dimensional reduced order controllers.
The controller design is then used for particular PDE models: high-dimensional parabolic equations and beam equations with Kelvin-Voigt damping. Numerical examples will be presented using the Finite Element Method.
\end{abstract}
\section{Introduction}
We consider linear boundary control systems of the form \cite[Chapter 10]{TucWei09}
\begin{align*}
\dot{w}(t) &= {\mathcal A} w(t), \qquad w(0) = w_0, \\
{\mathcal B} w(t) &= u(t), \\
y(t) &= C_0 w(t)
\end{align*}
on a Hilbert space $X_0$ where $C_0$ is a bounded linear operator.
The main aim of the robust output regulation problem for boundary control systems is to design a dynamic error feedback controller so that the output $y(t)$ of the linear infinite-dimensional boundary control system converges to a given reference signal $y_{\mbox{\scriptsize\textit{ref}}} (t)$, i.e.,
\begin{align*}
\| y(t) - y_{\mbox{\scriptsize\textit{ref}}}(t)\| \to 0, \quad \text{as~~} t \to \infty.
\end{align*}
In addition, the control is required to be robust in the sense that the designed controller achieves the output tracking and disturbance rejection even under uncertainties and perturbations in the parameters of the system.
The robust output regulation and internal model based controller design for linear
infinite-dimensional systems and PDEs --- with both distributed and boundary control --- has been considered in several articles, see~\cite{LogTow97,HamPohMMAR02,RebWei03,Imm07a,HamPoh10,Pau16a} and references therein.
In \cite{PauPhan19}, two finite-dimensional low-order robust controllers for parabolic control systems with distributed inputs and outputs were constructed. The main aim of this paper is to extend this design to linear boundary control systems. However, the main challenge is that the boundary input generally corresponds to an unbounded input operator. To tackle this issue, we construct an extended system with a new state variable $x = (v, u)^\top = (w - Eu, u)^\top$, where $E$ is an extension operator chosen in such a way that the input operator of the new system is bounded.
The construction of the extension operator $E$ is one of the key points of this paper.
In the literature (for example \cite[Section 3.3]{CurZwa95}), the operator $E$ is chosen to be a right inverse operator of ${\mathcal B}$. However, finding an arbitrary right inverse operator is not easy.
In this paper, we propose additional conditions to construct the operator $E$.
The construction of $E$ is completed by solving static differential equations.
The idea comes from recent works on boundary stabilization for PDEs (for example \cite{Bad09,PhanRod18,Rod15}) or boundary control systems in abstract form (see \cite{Sal87,Sta05,TucWei09}).
Under our approach, the theory of partial differential equations guarantees the existence of the extension operator $E$.
For simple cases (such as the heat equation with Neumann boundary control in Section \ref{sec-exHeatNeu}), the construction of $E$ by the new conditions does not give significant advantages compared to the choice of an arbitrary right inverse operator.
Nevertheless, the advantage of our new approach can be seen clearly in more complicated partial differential equations (for example general linear parabolic equations on multi-dimensional domains; see the numerical example in Section \ref{sec-numex-para}).
For these cases, the construction of right inverse operators by hand is not possible. In our approach we can approximate the operator $E$ by solving differential equations numerically and use the approximation in the controller design.
For the reference signals, we assume that $y_{\mbox{\scriptsize\textit{ref}}}: \R \to \C^p$ can be written in the form
\begin{align}
\label{eq-refsig}
y_{\mbox{\scriptsize\textit{ref}}}(t) = a_0(t) + \sum_{k=1}^q \left(a_k(t) \cos(w_k t) + b_k (t) \sin (w_k t)\right)
\end{align}
where all frequencies $\{ w_k \}_{k=0}^q \subset \R$ with $0 = w_0 < w_1 < \dots < w_q$ are known, but the coefficient polynomial vectors $\{a_k (t)\}_{k}$ and $\{b_k (t)\}_{k}$ with real or complex coefficients (any of the polynomials are allowed to be zero) are unknown. We assume that the maximal degrees of the coefficient polynomial vectors are known, so that $a_k(t) \in \C^p$ is a polynomial vector of degree at most $n_k-1$ for each $k \in \{ 0,\dots,q\}$.
The class of signals having the form \eqref{eq-refsig} is diverse.
In Section \ref{sec-numex-para}, we present a numerical example with non-smooth reference signals. To track non-smooth signals, we approximate them by truncated Fourier series. In another numerical example, we track a signal where the coefficients are not constants.
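The class \eqref{eq-refsig} can be illustrated with a short sketch (Python; the square wave and its Fourier coefficients below are our own illustrative choice, not the example from Section \ref{sec-numex-para}): a non-smooth unit square wave is approximated by a truncated Fourier series, which is exactly a signal of the form \eqref{eq-refsig} with constant coefficient vectors $b_k$.

```python
import math

def y_ref(t, a0, cos_terms, sin_terms):
    """Evaluate a0(t) + sum_k a_k cos(w_k t) + sum_k b_k sin(w_k t).

    cos_terms / sin_terms are lists of (w_k, coefficient) pairs with
    constant coefficients; a0 is a callable (e.g. a polynomial part).
    """
    val = a0(t)
    val += sum(a * math.cos(w * t) for w, a in cos_terms)
    val += sum(b * math.sin(w * t) for w, b in sin_terms)
    return val

# Truncated Fourier series of a unit square wave with base frequency 1:
# sq(t) ~ (4/pi) * sum_{m=0}^{N-1} sin((2m+1) t) / (2m+1)
N = 50
sin_terms = [((2 * m + 1), 4.0 / (math.pi * (2 * m + 1))) for m in range(N)]

# On the plateau of the square wave (t = pi/2) the truncation is close to 1
approx = y_ref(math.pi / 2, lambda t: 0.0, [], sin_terms)
```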
Under certain standing assumptions, we present an algorithm to design a robust controller for the boundary control system by employing the finite-dimensional controllers in \cite{PauPhan19}. To apply the finite-dimensional controller design to boundary control systems, we need some checkable assumptions to obtain the stabilizability and detectability of the extended systems. The assumptions can be influenced by free choices of some parameters in the construction of the extended systems.
The next step is to utilize the controller design for two particular partial differential equations, namely linear diffusion-convection-reaction equations and linear beam equations with Kelvin-Voigt damping. For the case of beam equations, we present two different extended systems which work well both in theoretical and numerical aspects.
The numerical computation is another contribution of this paper. Actually there are several numerical schemes satisfying the approximation assumption \ref{ass-A1} below. We also use Finite Element Method (FEM) as in \cite{PauPhan19} to simulate the controlled solution. We will present two numerical examples: a 2D diffusion-reaction-convection equation and a 1D beam equation with Kelvin-Voigt damping. In both examples, by choosing a suitable family of test functions, we approximate all operators and construct the extension operators $E$ numerically (in case we do not know $E$ explicitly). Then our finite-dimensional controllers can be computed through matrix computations. Another advantage of Finite Element Method is that this method can deal with various types of multi-dimensional domains (see the example in Section \ref{sec-numex-para}).
The paper is organized as follows.
In Section \ref{sec-RORP}, we construct the extended system from the boundary control system under two additional assumptions on abstract boundary control systems, propose a collection of assumptions on the system, formulate the robust output regulation problem, and recall the Galerkin approximation.
In Section \ref{sec-DesignCon}, we present the algorithm to design the robust controller for boundary control system and clarify that the controller solves the robust output regulation problem in Theorem \ref{the-RORP}.
A block diagram of the algorithm for robust output regulation of boundary control systems will be presented in Section \ref{subsec-Algo}.
Section \ref{sec-para} deals with general parabolic PDE models. Section \ref{sec-beam} concentrates on beam equations with Kelvin-Voigt damping. Two numerical examples will follow in each section by using Finite Element method.
\subsection*{Notation}
For a linear operator $A: X \to Y$ we denote by $D(A),~{\mathcal N}(A),~{\mathcal R}(A)$ the domain, kernel, and range of $A$, respectively.
$\rho(A)$ denotes the resolvent set of operator $A$, $\sigma(A) = \C \setminus \rho(A)$ denotes the spectrum of operator $A$.
The space of bounded linear operators from $X$ to $Y$ is denoted by ${\mathcal L}(X,Y)$.
\section{Boundary control systems and Robust Output Regulation}
\label{sec-RORP}
\subsection{Boundary control system} \label{sec-BcSysExtSys}
We start with the abstract boundary control system
\begin{subequations} \label{eq-abs-bc}
\begin{align}
\dot{w}(t) &= {\mathcal A} w(t), \qquad w(0) = w_0, \\
{\mathcal B} w(t) &= u(t), \\
y(t) &= C_0 w(t).
\end{align}
\end{subequations}
with ${\mathcal A}: D({\mathcal A}) \subset X_0 \to X_0$, $u(t) \in U \coloneqq \C^m$, $y(t) \in Y \coloneqq \C^p$ and the boundary operator ${\mathcal B}: D({\mathcal A}) \subset X_0 \to U$.
\begin{ass} \label{ass-boundsys1}
There exist two operators $A_{\mathrm{d}}$ and $A_{\mathrm{rc}}$ satisfying $D(A_{\mathrm{d}}) = D({\mathcal A}) \subseteq D(A_{\mathrm{rc}})$ and the decomposition ${\mathcal A} = A_{\mathrm{d}} + A_{\mathrm{rc}}$, and $A_{\mathrm{rc}}$ is relatively bounded with respect to $A_{\mathrm{d}}$.
\end{ass}
The operator $A_{\mathrm{rc}}$ is relatively bounded with respect to $A_{\mathrm{d}}$ if $D(A_{\mathrm{d}}) \subseteq D(A_{\mathrm{rc}})$ and there are non-negative constants $\alpha$ and $\beta$ so that
\begin{align*}
\| A_{\mathrm{rc}} x \| \le \alpha \|x\| + \beta \| A_{\mathrm{d}} x \| \quad
\text{for all~~} x \in D(A_{\mathrm{d}}).
\end{align*}
The notations $A_{\mathrm{d}}$ and $A_{\mathrm{rc}}$ are motivated by linear parabolic equations where we usually choose $A_{\mathrm{d}}$ as the diffusion term and $A_{\mathrm{rc}}$ as the reaction-convection term.
We assume that the system \eqref{eq-abs-bc} is a ``boundary control system'' in the sense of \cite{Sal87,TucWei09}.
\begin{definition}
The control system \eqref{eq-abs-bc} is \emph{a boundary control system} if the following conditions hold:
a. The operator $A_0: D(A_0) \to X_0$ with $D(A_0) = D({\mathcal A}) \cap \mathcal{N}({\mathcal B})$ and $A_0 x = {\mathcal A} x$ for $x \in D(A_0)$ is the infinitesimal generator of a strongly continuous semigroup on $X_0$.
b. ${\mathcal R} ({\mathcal B}) = U$.
\end{definition}
The condition (b) implies that there exists an operator $E \in {\mathcal L}(U,X_0)$ such that ${\mathcal B} E = I$. However, finding an arbitrary right inverse operator of ${\mathcal B}$ is not easy especially in the cases of multi-dimensional PDEs. Thus we propose the following additional assumption to construct the operator $E$.
\begin{ass} \label{ass-boundsys2}
There exist a constant $\eta \ge 0$ with $\eta \in \rho(A_0)$ and an operator $E \in {\mathcal L}(U,X_0)$ such that ${\mathcal R}(E) \subset D({\mathcal A})$ and
\begin{subequations}
\begin{align}
A_{\mathrm{d}} E u &= \eta E u, \label{ass-cond1} \\
{\mathcal B} E u &= u, \label{ass-condBC}
\end{align}
\end{subequations}
for all $u \in U$.
\end{ass}
Under Assumptions \ref{ass-boundsys1} and \ref{ass-boundsys2}, $A_{\mathrm{rc}} E$ is a bounded linear operator since $U$ is finite-dimensional and $\| A_{\mathrm{rc}} E u \| \le \alpha \| Eu \| + \beta \| A_{\mathrm{d}} Eu \| \le (\alpha + \beta \eta ) \|E\| \|u\|_U $.
\begin{rem}
Compared with Definition 3.3.2 in \cite{CurZwa95}, the condition \eqref{ass-cond1} is new. For particular PDEs, the construction of the extension $E$ based on \eqref{ass-cond1} and \eqref{ass-condBC} leads to solving an ODE or an elliptic PDE. We call $E$ ``an extension'' since its role is to transfer the boundary control into the whole domain. Note that the operator $E$ depends on the choice of $\eta \ge 0$.
The approach of constructing an extension operator $E$ as a solution of an abstract elliptic equation has also been used, e.g., in \cite{Bad09,PhanRod18,Rod15,Sal87}, \cite[Section 5.2]{Sta05}, and \cite[Remark 10.1.5]{TucWei09}.
\end{rem}
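To make the two defining conditions of Assumption \ref{ass-boundsys2} concrete, the sketch below works out an assumed one-dimensional model (our own illustrative choice, in the spirit of the heat--Neumann example of Section \ref{sec-exHeatNeu}, not the paper's exact setup): the heat equation on $(0,1)$ with Neumann control at $x=1$ and an insulated end at $x=0$, so that $E$ solves $E''=\eta E$, $E'(0)=0$, $E'(1)=1$, with closed form $E(x)=\cosh(\sqrt{\eta}\,x)/(\sqrt{\eta}\sinh\sqrt{\eta})$. Finite differences confirm both conditions numerically.

```python
import math

def extension_E(x, eta):
    # closed-form extension for the assumed 1-D Neumann model:
    # E'' = eta * E on (0, 1),  E'(0) = 0,  E'(1) = 1
    r = math.sqrt(eta)
    return math.cosh(r * x) / (r * math.sinh(r))

eta = 2.0
h = 1e-5

# condition (A_d E = eta E): central finite difference of E'' at x = 0.4
x = 0.4
second_deriv = (extension_E(x + h, eta) - 2.0 * extension_E(x, eta)
                + extension_E(x - h, eta)) / h**2
assert abs(second_deriv - eta * extension_E(x, eta)) < 1e-4

# condition (B E = I): the Neumann trace E'(1) equals 1
bdry = (extension_E(1.0, eta) - extension_E(1.0 - h, eta)) / h
assert abs(bdry - 1.0) < 1e-3
```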
\subsubsection*{Assumptions on the system} We next introduce two assumptions on the system.
\begin{enumerate}
\renewcommand{{\sf A\arabic{enumi}}}{{\sf I\arabic{enumi}}}
\renewcommand{\labelenumi}{}
\item $\bullet$~Assumption~{\sf A\arabic{enumi}}:\label{ass-I1} The pair $(A_0,\,E)$ is exponentially stabilizable.
\item $\bullet$~Assumption~{\sf A\arabic{enumi}}:\label{ass-I2} There exists $L_0\in \Lin(Y,X_0)$ such that $A_0+L_0 C_0$ is exponentially stable and
for every $k \in \{ 1, \dots, q\}$ we have
$P_L(iw_k) \neq 0$ where $P_L(\lambda)=C_0R(\lambda,A_0+L_0C_0)E$.
\end{enumerate}
Let $V_0$ be a Hilbert space, densely and continuously embedded in $X_0$. We denote the inner products on $X_0$ and $V_0$ by $\langle\cdot,\cdot\rangle_{X_0}$ and $\langle\cdot,\cdot\rangle_{V_0}$, respectively. Analogously, denote by $\|\cdot\|_{X_0}$ and $\|\cdot\|_{V_0}$ the norms on $X_0$ and $V_0$.
\subsubsection*{Assumptions on the sesquilinear form} We assume that the operator $A_0$ is associated with a sesquilinear form $\sigma_0$ via the formula
\begin{align*}
\langle -A_0 w_1,\, w_2 \rangle = \sigma_0 (w_1,w_2), \qquad \forall w_1, w_2 \in V_0
\end{align*}
where $D(A_0) = \{ w \in V_0 \mid \sigma_0(w, \cdot) \text{~~has an extension to~~} X_0 \} $. The sesquilinear form $\sigma_0: V_0 \times V_0 \to \C$ satisfies the following two assumptions:
\begin{enumerate}
\renewcommand{{\sf A\arabic{enumi}}}{{\sf S\arabic{enumi}}}
\renewcommand{\labelenumi}{}
\item $\bullet$~Assumption~{\sf A\arabic{enumi}} (Boundedness):\label{ass-S1} There exists $c_1 > 0$ such that for $w_1,~w_2 \in V_0$ we have
\begin{align*}
| \sigma_0(w_1, w_2)| \le c_1 \|w_1\|_{V_0} \|w_2\|_{V_0}.
\end{align*}
\item $\bullet$~Assumption~{\sf A\arabic{enumi}} (Coercivity):\label{ass-S2} There exist $c_2 > 0$ and some real $\lambda_0 > 0$ such that for $w \in V_0$, we have
\begin{align*}
\re \sigma_0 (w, w) + \lambda_0 \|w\|^2_{X_0} \ge c_2 \|w\|^2_{V_0}.
\end{align*}
\end{enumerate}
Under these assumptions, $A_0 - \lambda_0 I$ generates an analytic semigroup on $X_0$ (see \cite{BanKun84}).
\subsection{Construction of the extended system}
By defining a new variable $v(t) = w(t) - E u (t)$, we rewrite the equation \eqref{eq-abs-bc} in the form
\begin{subequations} \label{eq-abs-Cauchy}
\begin{align}
\dot{v}(t) &= A_0 v(t) - E(\dot{u}(t) - \eta u(t) ) + A_{\mathrm{rc}} E u(t), \\
v(0) &= v_0.
\end{align}
\end{subequations}
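For orientation, the computation behind \eqref{eq-abs-Cauchy} can be made explicit. Assuming, as in Step E1 of the algorithm in Section \ref{subsec-Algo}, that \eqref{eq-abs-bc} has the form $\dot{w}(t) = (A_{\mathrm{d}} + A_{\mathrm{rc}}) w(t)$, ${\mathcal B} w(t) = u(t)$, that $A_0$ is the restriction of $A_{\mathrm{d}} + A_{\mathrm{rc}}$ to $\mathcal{N}({\mathcal B})$, and that the extension $E$ satisfies $A_{\mathrm{d}} E u = \eta E u$ and ${\mathcal B} E u = u$, we have ${\mathcal B} v(t) = {\mathcal B} w(t) - {\mathcal B} E u(t) = 0$, so that $v(t) \in D(A_0)$ and
\begin{align*}
\dot{v} = \dot{w} - E \dot{u} &= (A_{\mathrm{d}} + A_{\mathrm{rc}})(v + E u) - E \dot{u} \\
&= A_0 v + \eta E u + A_{\mathrm{rc}} E u - E \dot{u} = A_0 v - E(\dot{u} - \eta u) + A_{\mathrm{rc}} E u,
\end{align*}
which is exactly the right-hand side of \eqref{eq-abs-Cauchy}.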
Since $A_0$ is the infinitesimal generator of an analytic semigroup and $E$ and $A_{\mathrm{rc}} E$ are bounded linear operators, Theorem 3.1.3 in \cite{CurZwa95} implies that the equation \eqref{eq-abs-Cauchy} has a unique classical solution for $v_0 \in D(A_0)$ and $u \in C^2([0, \tau]; U)$ for all $\tau >0$. By a ``classical solution'' we mean that $v(t)$ and $\dot{v}(t)$ belong to $C((0,\tau), X_0)$ for all $\tau >0$, that $v(t) \in D (A_0)$ for $t > 0$, and that $v(t)$ satisfies
\eqref{eq-abs-Cauchy}.
Denoting $\kappa(t) = \dot{u}(t) - \eta u(t)$, we obtain the extended system with the new state variable $x = (v, u)^\top = \left(w - E u, u \right)^\top \in X \coloneqq X_0 \times U$ and the new control input $\kappa(t)$ as follows
\begin{align} \label{eq-sys-ext}
\dot{x}(t) =
\pmat{A_0 & A_{\mathrm{rc}} E \\ 0 & \eta I} x(t)+ \pmat{-E \\ I} \kappa(t), \qquad x(0)= \pmat{w(0)-Eu(0) \\ u(0)}.
\end{align}
The observation part can be rewritten with the new variable as follows
\begin{align} \label{eq-obs-ext}
y(t) = C_0 w(t) = C_0 \left(v(t) + E u(t) \right) = \pmat{C_0 & C_0 E } x(t).
\end{align}
The theorem below shows the relationship between the solutions of \eqref{eq-abs-bc}, \eqref{eq-abs-Cauchy}, and \eqref{eq-sys-ext}. Its proof is analogous to the proof in \cite[Theorem 3.3.4]{CurZwa95}.
\begin{theorem}
\label{the-change-var}
Consider the boundary control system \eqref{eq-abs-bc} and the abstract Cauchy equation \eqref{eq-abs-Cauchy}. Assume that $u \in C^2([0, \tau]; U)$ for all $\tau > 0 $. Then, if $v_0 = w_0 - E u(0) \in D(A_0)$, the classical solutions of \eqref{eq-abs-bc} and \eqref{eq-abs-Cauchy} are related by
\begin{align*}
v(t) = w(t) - E u (t).
\end{align*}
Furthermore, the classical solution of \eqref{eq-abs-bc} is unique. \\
In addition, if $v_0 \in D(A_0)$, the extended system \eqref{eq-sys-ext} with $(x_0)_1 = v_0$, $(x_0)_2 = u(0)$ has the unique classical solution $x(t) = (v(t), u(t))^\top$, where $v(t)$ is the unique classical solution of \eqref{eq-abs-Cauchy}.
\end{theorem}
\subsection{The Robust Output Regulation Problem}
We write the system \eqref{eq-sys-ext}-\eqref{eq-obs-ext} in an abstract form on a Hilbert space $X = X_0 \times U$.
\begin{subequations}
\begin{align*}
\dot{x}(t) &= A x (t) + B \kappa(t), \\
y (t) &= C x(t)
\end{align*}
\end{subequations}
where
\begin{align} \label{eq-extABC}
A = \pmat{A_0 & A_{\mathrm{rc}} E \\ 0 & \eta I}, \quad
B = \pmat{-E \\ I}, \quad C = \pmat{C_0 & C_0 E}.
\end{align}
Note that $B$ and $C$ are bounded operators.
We consider the design of internal model based error feedback controllers on $Z = \C^{s}$ of the form
\begin{align*}
\dot{z}(t) &= {\mathcal G}_1 z(t) + {\mathcal G}_2 e(t), \quad z(0) = z_0 \in Z, \\
\kappa (t) &= K z(t),
\end{align*}
where $e(t) = y(t) - y_{\mbox{\scriptsize\textit{ref}}}(t)$ is the regulation error, ${\mathcal G}_1 \in \C^{s \times s}$, ${\mathcal G}_2 \in \C^{s \times p}$, and $K \in \C^{m \times s}$. Letting $x_e (t) = (x(t), z(t))^\top$, the system and the controller can be written together as a closed-loop system on the Hilbert space $X_e = X \times Z$
\begin{align*}
\dot{x}_e (t) &= A_e x_e(t) + B_e y_{\mbox{\scriptsize\textit{ref}}}(t), \quad x_e (0) = x_{e0} \\
e(t) &= C_e x_e(t) + D_e y_{\mbox{\scriptsize\textit{ref}}}(t)
\end{align*}
where $x_{e0} = (x_0, z_0)^\top$ and
\begin{align*}
A_e = \pmat{A & BK \\ {\mathcal G}_2 C & {\mathcal G}_1}, \quad B_e = \pmat{0 \\ -{\mathcal G}_2}, \quad C_e = \pmat{C & 0}, \quad D_e = -I.
\end{align*}
The operator $A_e$ generates a strongly continuous semigroup $T_e(t)$ on $X_e$.
\subsubsection*{The Robust Output Regulation Problem}
The matrices $({\mathcal G}_1, {\mathcal G}_2, K)$ are to be chosen so that the conditions below are satisfied.
(a) The semigroup $T_e(t)$ is exponentially stable.
(b) There exist $M_e, w_e >0$ such that for all initial states $x_0 \in X$ and $z_0 \in Z$ and for all signals $y_{\mbox{\scriptsize\textit{ref}}}(t)$ of the form \eqref{eq-refsig} we have
\begin{align}
\label{eq-yerr}
\| y(t) - y_{\mbox{\scriptsize\textit{ref}}}(t) \| \le M_e e^{-w_e t} (\| x_{e0} \| + \| \Lambda \|),
\end{align}
where $\Lambda$ is a vector containing the coefficients of the polynomials $\{a_k (t) \}_{k} $ and $\{b_k (t) \}_{k} $ in \eqref{eq-refsig}.
(c) When $(A, B, C)$ are perturbed to $(\tilde{A}, \tilde{B}, \tilde{C})$ in such a way that the perturbed closed-loop system remains exponentially stable, then for all $x_0 \in X$ and $z_0 \in Z$ and for all signals $y_{\mbox{\scriptsize\textit{ref}}}(t)$ of the form \eqref{eq-refsig} the regulation error satisfies \eqref{eq-yerr} for some modified constants $\tilde{M}_e, \tilde{w}_e >0$.
\subsection{Galerkin approximation}
\label{sec-GalApp}
Let $V_0^N \subset V_0$ be a sequence of finite-dimensional subspaces. We define $A^N_0:~V_0^N \to V_0^N$ by
\begin{align*}
\langle -A^N_0 v_1, v_2 \rangle = \sigma_0(v_1, v_2) \qquad \text{for all}\quad v_1,v_2 \in V^N_0,
\end{align*}
that is, $A^N_0$ is defined via restriction of $\sigma_0$ to $V^N_0 \times V^N_0$.
Assume that the operator $A_{\mathrm{rc}}$ is associated with a sesquilinear form $\sigma_{\mathrm{rc}}$ by the formula
\begin{align*}
\langle -A_{\mathrm{rc}} w_1,\, w_2 \rangle = \sigma_{\mathrm{rc}} (w_1,w_2), \qquad \forall w_1, w_2 \in V_0
\end{align*}
where $D(A_{\mathrm{rc}}) = \{ w \in V_0 \mid \sigma_{\mathrm{rc}}(w, \cdot) \text{~has an extension to~} X_0 \} $. We define $A_{\mathrm{rc}}^N:~V_0^N \to V_0^N$ by
\begin{align*}
\langle -A_{\mathrm{rc}}^N v_1, v_2 \rangle = \sigma_{\mathrm{rc}}(v_1, v_2) \qquad \text{for all}\quad v_1,v_2 \in V^N_0.
\end{align*}
For a given $E \in {\mathcal L} (U, X_0)$, we define $E^N \in {\mathcal L}(U, V^N_0 )$ by
\begin{align*}
\langle E^N \kappa, v_2 \rangle = \langle \kappa, E^* v_2 \rangle_{U} \qquad \text{for all}\quad \kappa \in U,~ v_2 \in V^N_0,
\end{align*}
and $C_0^N \in {\mathcal L}(V^N_0, Y)$ denotes the restriction of $C_0$ onto $V^N_0$.
Let $P^N$ denote the usual orthogonal projection of $X_0$ onto $V^N_0$, i.e., for $v_1 \in X_0$
\begin{align*}
P^N v_1 \in V^N_0 \text{~~and~~} \langle P^N v_1, v_2 \rangle = \langle v_1, v_2 \rangle_{X_0} \text{~~for all~~} v_2 \in V^N_0.
\end{align*}
We make the following approximation assumption.
\begin{enumerate}
\renewcommand{\theenumi}{{\sf A\arabic{enumi}}}
\renewcommand{\labelenumi}{}
\item $\bullet$~Assumption~{\sf A1}:\label{ass-A1} For any $v \in V_0$, there exists a sequence $v^N \in V^N_0$ such that $\|v^N - v\|_{V_0} \to 0$ as $N \to \infty$.
\end{enumerate}
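For illustration, here is a minimal instance of this construction (our own example, not taken from the paper): take $\sigma_0(w_1,w_2)=\nu\int_0^1 w_1' w_2'\,d\xi$ on $V_0 = H_0^1(0,1)$, so that $A_0 = \nu\, d^2/d\xi^2$ with Dirichlet boundary conditions, and let $V_0^N$ be spanned by $N$ piecewise linear hat functions on a uniform mesh. In coordinates, the defining relation $\langle -A_0^N v_1, v_2 \rangle = \sigma_0(v_1, v_2)$ reads $-MA = K$, with $M$ the mass matrix and $K$ the stiffness matrix:

```python
import numpy as np

def galerkin_A0N(N, nu=1.0):
    """Matrix of A_0^N for sigma_0(w1, w2) = nu * int_0^1 w1' w2' dxi
    on V_0 = H_0^1(0, 1), using N interior hat functions, mesh width h.
    <-A_0^N v1, v2> = sigma_0(v1, v2) reads -M A = K in coordinates."""
    h = 1.0 / (N + 1)
    # stiffness matrix K_ij = sigma_0(phi_j, phi_i)
    K = (nu / h) * (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1))
    # mass matrix M_ij = <phi_j, phi_i>_{L^2}
    M = (h / 6.0) * (4.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1))
    return -np.linalg.solve(M, K)

A0N = galerkin_A0N(20)
# The spectrum of A_0^N approximates that of nu * d^2/dxi^2 with Dirichlet
# boundary conditions, i.e. -(k * pi)^2 for k = 1, 2, ...
lam = np.sort(np.linalg.eigvals(A0N).real)[::-1]
```

Assumption {\sf A1} holds here by standard finite element interpolation estimates; the leading eigenvalue of $A_0^N$ is within a fraction of a percent of $-\pi^2$ already for $N = 20$.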
\section{Reduced order finite-dimensional controllers}
\subsection{The controller}
\label{sec-DesignCon}
In this section, we recall a finite-dimensional controller design, namely the ``Observer-based finite dimensional controller'' presented in \cite[Section III.A]{PauPhan19}, to design a robust controller for the boundary control system \eqref{eq-abs-bc}. Another controller, the ``Dual observer-based finite dimensional controller'' presented in \cite[Section III.B]{PauPhan19}, can be applied analogously.
The finite-dimensional robust controller is based on an internal model with a reduced order observer of the original system and has the form
\begin{subequations}
\label{eq:FinConObs}
\eqn{
\dot{z}_1(t)&= G_1z_1(t) + G_2 e(t)\\
\dot{z}_2(t)&= (A_L^r+B_L^rK_2^r)z_2(t) + B_L^r K_1^N z_1(t) -L^r e(t)\\
u(t)&= K_1^N z_1(t) + K_2^rz_2(t)
}
\end{subequations}
with state $(z_1(t),z_2(t))\in Z:= Z_0\times \C^r$.
All matrices
$(G_1,G_2,A_L^r,B_L^r,K_1^N,K_2^r,L^r)$ are chosen based on the four-step algorithm given below.
The matrices $G_1,G_2$ are \keyterm{the internal model} in the controller.
The remaining matrices $A_L^r,~B^r_L,~L^r,~K_1^N,~K_2^r$ are computed based on the Galerkin approximation $(A_0^N, A_{\mathrm{rc}}^N, E^N, C_0^N)$ and model reduction of this approximation.
\noindent \textbf{Step C1. The Internal Model:} \\
We choose $Z_0=Y^{n_0}\times Y^{2n_1} \times \ldots \times Y^{2n_q}$, $G_1 = \diag(J_0^Y, \ldots, J^Y_q)\in {\mathcal L}(Z_0)$, and
$G_2=(G_2^k)_{k=0}^q \in {\mathcal L}(Y,Z_0)$.
The components of $G_1$ and $G_2$ are chosen as follows.
For $k=0$ we let
\begin{align*}
J^Y_0 = \pmat{
0_p & I_p & & \\
& 0_p & \ddots & \\
& & \ddots & I_p \\
& & & 0_p
}
, \qquad
G_2^0 = \pmat{0_p\\\vdots\\0_p\\I_p}
\end{align*}
where $0_p $ and $I_p$ are the $p\times p$ zero and identity matrices, respectively.
For $k \in \{ 1, \ldots, q \}$ we choose
\begin{align*}
J^Y_k = \pmat{
\Omega_k & I_{2p} & & \\
& \Omega_k & \ddots& \\
&& \ddots& I_{2p} \\
&& & \Omega_k
}
, \qquad
G_2^k = \pmat{0_{2p}\\\vdots\\0_{2p}\\I_p\\0_p}
\end{align*}
where $\Omega_k = \pmatsmall{0_p&\omega_k I_p\\-\omega_k I_p&0_p}$.
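The blocks of Step C1 are straightforward to assemble numerically. The sketch below (our own helper, assuming numpy) builds $J_k^Y$ and $G_2^k$ for one frequency; $G_1$ and $G_2$ are then obtained by block-diagonal and vertical stacking:

```python
import numpy as np

def internal_model_block(omega, n, p):
    """Assemble J_k^Y and G_2^k from Step C1. For omega == 0 the block is
    the nilpotent (p*n x p*n) matrix J_0^Y; for omega > 0 it is the
    (2p*n x 2p*n) block-Jordan matrix with Omega_k on the diagonal."""
    if omega == 0:
        d = p
        Om = np.zeros((p, p))
    else:
        d = 2 * p
        Om = np.block([[np.zeros((p, p)), omega * np.eye(p)],
                       [-omega * np.eye(p), np.zeros((p, p))]])
    J = np.kron(np.eye(n), Om) + np.kron(np.eye(n, k=1), np.eye(d))
    G2 = np.zeros((n * d, p))
    if omega == 0:
        G2[-p:, :] = np.eye(p)       # last block row: I_p
    else:
        G2[-d:-p, :] = np.eye(p)     # last block row: (I_p, 0_p)^T
    return J, G2

# G_1 = diag(J_0^Y, ..., J_q^Y); G_2 stacks the blocks G_2^k accordingly.
J1, G21 = internal_model_block(2.0, 3, 1)   # e.g. omega_1 = 2, n_1 = 3, p = 1
```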
\noindent \textbf{Step C2. The Galerkin Approximation:}
For a fixed and sufficiently large $N\in\N$ we apply the Galerkin approximation described in Section~\ref{sec-GalApp} in $V_0$ to operators $(A_0, A_{\mathrm{rc}}, E, C_0)$ to get their corresponding approximations $(A_0^N, A_{\mathrm{rc}}^N, E^N, C_0^N)$. Then we compute the matrices $(A^N, B^N, C^N)$ as follows
\begin{align*}
A^N = \pmat{A_0^N & A_{\mathrm{rc}}^N E^N \\ 0 & \eta I}, \quad
B^N =\pmat{-E^N \\ I}, \quad
C^N = \pmat{C_0^N & C_0^N E^N}.
\end{align*}
\noindent \textbf{Step C3. Stabilization:} \\
Denote the approximation $V^N \coloneqq V_0^N \times U$ of the space $V = V_0 \times U$.
Let $\alpha_1,\alpha_2\geq 0$.
Let $Q_1 \in \Lin(X,Y_0)$ and $Q_2 \in \Lin(U_0,X)$, with $U_0$ and $Y_0$ Hilbert spaces, be such that $(A + \alpha_2 I,Q_2)$ is exponentially stabilizable and $(Q_1,A+\alpha_1 I)$ is exponentially detectable.
Let $Q_1^N$ and $Q_2^N$ be the approximations of $Q_1$ and $Q_2$, respectively.
Let $Q_0\in \Lin(Z_0,\C^{p_0})$ be such that $(Q_0,G_1)$ is observable, and $R_1\in \Lin(Y)$ and $R_2\in \Lin(U)$ be such that $R_1>0$ and $R_2>0$.
We then define the matrices $(A^N_c, B^N_c,\,C^N_c)$ as follows
\eq{
A^N_c = \pmat{G_1&G_2C^N\\0&A^N}, \quad B^N_c=\pmat{0\\B^N}, \quad C^N_c = \pmat{Q_0&0\\0&Q_1^N}.
}
Define
$L^N =-\Sigma_N (C^N)^\ast R_1\inv\in {\mathcal L}(Y, V^N) $
and
$K^N := \pmat{K_1^N,\; K_2^N} =-R_2\inv (B^N_c)^\ast\Pi_N \in {\mathcal L}(Z_0\times V^N ,U )$
where
$\Sigma_N$
and
$\Pi_N$
are the non-negative solutions of the finite-dimensional Riccati equations
\eq{
(A^N + \alpha_1 I) \Sigma_N + \Sigma_N (A^N + \alpha_1 I)^* - \Sigma_N \left(C^N \right)^* R_1\inv C^N \Sigma_N &=- Q_2^N (Q_2^N)^*, \\
(A^N_c + \alpha_2 I)^* \Pi_N + \Pi_N (A^N_c +\alpha_2 I) - \Pi_N B^N_c R_2\inv\left(B^N_c\right)^\ast \Pi_N &=-
\left(C^N_c\right)^\ast C^N_c.
}
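Once $N$ is fixed, these are standard matrix Riccati equations. The sketch below (the numerical data are illustrative stand-ins for $(A^N, B^N, C^N)$ chosen by us, not taken from the paper) solves both with \texttt{scipy.linalg.solve\_continuous\_are}, the filter equation in its dual (transposed) form, and recovers $L^N$ and $K^N = (K_1^N,\, K_2^N)$:

```python
import numpy as np
from scipy.linalg import solve_continuous_are, block_diag

# Illustrative stand-ins for the Galerkin matrices (A^N, B^N, C^N).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
G1 = np.zeros((1, 1))      # internal model for a constant reference (p = 1, n_0 = 1)
G2 = np.eye(1)
a1 = a2 = 0.1              # stability margins alpha_1, alpha_2
Q0, Q1, Q2 = np.eye(1), np.eye(2), np.eye(2)
R1, R2 = np.eye(1), np.eye(1)

# Filter Riccati equation
# (A + a1 I) Sigma + Sigma (A + a1 I)^* - Sigma C^* R1^{-1} C Sigma = -Q2 Q2^*,
# solved as the dual CARE by transposing the data.
Sigma = solve_continuous_are((A + a1 * np.eye(2)).T, C.T, Q2 @ Q2.T, R1)
L = -Sigma @ C.T @ np.linalg.inv(R1)

# Control Riccati equation for (A_c, B_c, C_c).
Ac = np.block([[G1, G2 @ C], [np.zeros((2, 1)), A]])
Bc = np.vstack([np.zeros((1, 1)), B])
Cc = block_diag(Q0, Q1)
Pi = solve_continuous_are(Ac + a2 * np.eye(3), Bc, Cc.T @ Cc, R2)
K = -np.linalg.inv(R2) @ Bc.T @ Pi
K1, K2 = K[:, :1], K[:, 1:]
```

With $\alpha_1 = \alpha_2 = 0.1$, both $A + LC$ and $A_c + B_c K$ are Hurwitz with spectral abscissa below $-0.1$, in line with the uniform stability margin of Theorem \ref{the-RORP}.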
\noindent \textbf{Step C4. The Model Reduction:}\\
For a fixed and suitably large $r\in\N$, $r\leq N$, applying the Balanced Truncation method to the stable finite-dimensional system
\eq{
(A^N+L^NC^N, [ B^N,\;L^N],K^N_2),
} we obtain a stable $r$-dimensional reduced order system
\eq{
\left(A_L^r,[B_L^r ,\; L^r] ,K_2^r\right).}
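Step C4 does not prescribe a particular implementation; one standard choice is the square-root balanced truncation algorithm, sketched below on a small stable system (the data, a finite-difference Laplacian with input at one end and output at the other, is our own illustration):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable minimal system (A, B, C).
    Returns the reduced (Ar, Br, Cr) of order r and the Hankel singular values."""
    P = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
    Lc = cholesky(P, lower=True)                    # P = Lc Lc^T
    Lo = cholesky(Q, lower=True)                    # Q = Lo Lo^T
    U, s, Vt = svd(Lo.T @ Lc)                       # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                           # balancing projections
    Ti = S @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

# Toy stable system: a 1D finite-difference Laplacian, input at the left end,
# output at the right end of the grid (illustrative data only).
n = 6
A = (n + 1) ** 2 * (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
B = np.eye(n)[:, :1]
C = np.eye(n)[-1:, :]
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)
```

The discarded Hankel singular values bound the $H^\infty$ error of the reduced model by $2\sum_{i>r}\sigma_i$, which is what makes the choice of $r$ in Step C4 quantitative.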
The next theorem states that the controller above solves the Robust Output Regulation Problem for the boundary control system \eqref{eq-abs-bc}. In Section \ref{sec-StabDetExtSys}, we present sufficient conditions for the stabilizability and detectability of the extended system $(A,\,B,\, C)$.
\begin{theorem} \label{the-RORP}
Let assumptions \ref{ass-S1}, \ref{ass-S2}, \ref{ass-I1}, \ref{ass-I2}, and \ref{ass-A1} be satisfied. Assume that the extended system $(A,\,B,\, C)$ in \eqref{eq-extABC} is stabilizable and detectable.
The finite dimensional controller \eqref{eq:FinConObs} solves the Robust Output Regulation Problem provided that the order $N$ of the Galerkin approximation and the order $r$ of the model reduction are sufficiently high.
If $\alpha_1, \alpha_2 >0$, the controller achieves a uniform stability margin in the sense that for any fixed $0 < \alpha < \min \{\alpha_1, \alpha_2\} $ the operator $A_e + \alpha I$ will generate an exponentially stable semigroup if $N$ and $r \le N $ are sufficiently large.
\end{theorem}
\begin{proof}
The proof of this theorem is an application of Theorem III.2 in \cite{PauPhan19}, which requires verifying the three statements below.
\textbf{Step 1. ``Stabilizability and Detectability''} \\
Recall the abstract system
\begin{align*}
\dot{x}(t) &= A x (t) + B \kappa(t), \\
y (t) &= C x(t).
\end{align*}
We assume that the extended system $(A, B, C)$ is stabilizable and detectable.
The sufficient conditions to guarantee the stabilizability and detectability of $(A, B, C)$ will be presented in Section \ref{sec-StabDetExtSys}.
\textbf{Step 2. ``Boundedness and Coercivity of the sesquilinear form''} \\
Define $V = V_0 \times U$ and $X = X_0 \times U$; the sesquilinear form $\sigma$ is defined by
\begin{align} \label{sesqui-para}
\sigma (\phi_1, \phi_2) = \sigma \left((v_1, u_1), (v_2, u_2) \right) = \sigma_0 (v_1,v_2) - \langle A_{\mathrm{rc}} E u_1, v_2 \rangle_{X_0} - \eta \langle u_1, u_2 \rangle_U.
\end{align}
For $\phi =(v,u)^\top \in V $, we define $\|\phi\|_X^2 = \|v\|_{X_0}^2 + \|u\|_{U}^2$ and $\|\phi\|_V^2 = \|v\|_{V_0}^2 + \|u\|_{U}^2$.
Since $\sigma_0$ satisfies assumptions \ref{ass-S1} and \ref{ass-S2}, there exist constants $c_1 >0$, $c_2 >0$ and $\lambda_0 > 0$ such that for $v_1, v_2, \text{~and~} v \in V_0$ we have
\begin{align*}
|\sigma_0(v_1,v_2)| &\le c_1 \|v_1\|_{V_0} \|v_2\|_{V_0}, \\
\re \sigma_0 (v, v) + \lambda_0 \|v\|^2_{X_0} &\ge c_2 \|v\|^2_{V_0}.
\end{align*}
To check the boundedness of $\sigma(\phi_1, \phi_2)$, we have
\begin{align*}
|\sigma(\phi_1, \phi_2)| &\le |\sigma_0(v_1, v_2)| + |\langle A_{\mathrm{rc}} E u_1, v_2 \rangle_{X_0}| + \eta |\langle u_1, u_2 \rangle_U| \\
&\le \left(c_1 + k \|A_{\mathrm{rc}} E\|_{{\mathcal L}(U,X_0)} + \eta \right) \|\phi_1\|_{V} \|\phi_2\|_{V},
\end{align*}
where $k > 0$ is the constant of the continuous embedding $V_0 \hookrightarrow X_0$.
Regarding the coercivity of $\sigma(\phi, \phi)$, let $\phi = (v, u)$; we have
\begin{align*}
\re \sigma(\phi, \phi) &= \re \sigma_0 (v,v) - \re \langle A_{\mathrm{rc}} E u, v \rangle_{X_0} - \eta \|u\|^2_U \\
&\ge c_2 \left( \|v\|_{V_0}^2 + \|u\|^2_U \right) - \left( \lambda_0 + \frac{1}{2} \right) \|v\|^2_{X_0} \\
&\qquad - \left( \frac{1}{2}\|A_{\mathrm{rc}} E\|_{{\mathcal L}(U,X_0)}^2 + \eta + c_2 \right)\|u\|^2_U.
\end{align*}
Defining $\lambda_1 = \max \left\{ \lambda_0 + \frac{1}{2}, \frac{1}{2}\|A_{\mathrm{rc}} E\|_{{\mathcal L}(U,X_0)}^2 + \eta + c_2 \right\}$, we finally obtain $\re \sigma(\phi, \phi) + \lambda_1 \|\phi\|^2_X \ge c_2 \|\phi\|^2_V $.
In conclusion, the sesquilinear form $\sigma$ satisfies assumptions \ref{ass-S1} and \ref{ass-S2} in the spaces $X$ and $V$.
\textbf{Step 3. ``Approximation assumption''} \\
Denote analogously $V^n = V_0^n \times U$. Under assumption \ref{ass-A1}, for any $v \in V_0$, there exists a sequence $v^n \in V_0^n$ such that $\|v^n - v\|_{V_0} \to 0$ as $n \to \infty$. Then for $x = (v, u) \in V$, define the sequence $x^n = (v^n, u) \in V^n$ satisfying $\|x^n - x\|_V \to 0$ as $n \to \infty$.
\end{proof}
\subsection{Stabilizability and detectability of the extended systems}
\label{sec-StabDetExtSys}
In this section, we use Theorems 5.2.6, 5.2.7, and 5.2.11 in \cite{CurZwa95}. We introduce the following notation. The spectrum of $A_0$ is decomposed into two parts relative to the imaginary axis:
\begin{align*}
\sigma^+(A_0) &= \sigma(A_0)\cap \overline{\C_0^+}, \quad \C_0^+ = \{ \lambda \in \C \mid \re \lambda > 0 \}, \\
\sigma^-(A_0) &= \sigma(A_0)\cap \C_0^-, \quad \C_0^- = \{ \lambda \in \C \mid \re \lambda < 0 \}.
\end{align*}
Under the detectability of $(A_0, C_0)$ or the stabilizability of $(A_0, E)$, Theorem 5.2.7 or 5.2.6 in \cite{CurZwa95}, respectively, guarantees that $A_0$ satisfies the spectrum decomposition assumption at 0. The decomposition of the spectrum induces a corresponding decomposition of the state space $X_0$ and of the operator $A_0$. We follow the definition of $T_0^- (t)$ as in \cite[Equation 5.33]{CurZwa95}.
\begin{lem} \label{lem-detec-extsys}
Assume that $(A_0, C_0)$ is exponentially detectable.
(i.) If $A_{\mathrm{rc}} E = 0$ and $C_0 E$ is injective, the extended system $(A,C)$ is also exponentially detectable.
(ii.) If $A_{\mathrm{rc}} E \neq 0$, assume further that
\begin{align} \label{eq-detec-cond}
\mathcal{N}(\eta I - A)~\cap~\mathcal{N}(C) = \{0\}
\end{align} then the extended system $(A, C)$ is exponentially detectable.
\end{lem}
\begin{proof}
Since $(A_0, C_0)$ is exponentially detectable, Theorem 5.2.7 in \cite{CurZwa95} implies that $A_0$ satisfies the spectrum decomposition at 0, $T_0^- (t)$ is exponentially stable, and $\sigma^+(A_0)$ is finite. Then we can apply Theorem 5.2.11 in \cite{CurZwa95} for the detectable pair $(A_0, C_0)$ to obtain that
\begin{align*}
\mathcal{N}(sI - A_0) \cap \mathcal{N}(C_0) = \{0\} \quad \text{for all~~} s \in \overline{\C}_0^+.
\end{align*}
Under our choice $\eta \in \rho(A_0)$, the extended operator $A$ satisfies all conditions of Theorem 5.2.11. To prove the detectability of the extended system $(A,C)$, we will verify that
\begin{align*}
\mathcal{N}(sI - A) \cap \mathcal{N}(C)= \{0\} \quad \text{for all~~} s \in \overline{\C}_0^+.
\end{align*}
Take $(v, u)^\top \in \mathcal{N}(sI - A) \cap \mathcal{N}(C)$, for any $s \in \overline{\C}_0^+$ we have
\begin{align} \label{eq-detec-gencon}
\begin{cases}
(sI - A_0) v - A_{\mathrm{rc}} E u = 0, \\
(s - \eta) u = 0, \\
C_0 v + C_0 E u = 0.
\end{cases}
\end{align}
(i.) If $A_{\mathrm{rc}} E = 0$, we rewrite the conditions \eqref{eq-detec-gencon} as $(sI - A_0) v = 0$, $(s - \eta) u = 0$, and $C_0 v + C_0 E u = 0$.
For $s \in \overline{\C}_0^+ \setminus \{\eta\}$, we have that $u = 0$, $(sI - A_0)v = 0$, and $C_0 v = 0$. This implies that $v \in \mathcal{N}(sI - A_0) \cap \mathcal{N}(C_0) = \{0\}$, and thus $v = 0$.\\
For $s = \eta \in \rho(A_0)$, we get that $v = 0$ and $C_0 E u = 0$. Under the condition that $C_0 E$ is injective, this implies that $u = 0$.
Finally for all $s \in \overline{\C}_0^+$, we obtain that $\mathcal{N}(sI - A) \cap \mathcal{N}(C)= \{0\}$. It follows that the extended pair $(A, C)$ is exponentially detectable.
(ii.) If $A_{\mathrm{rc}} E \neq 0$, we first consider $s \in \overline{\C}_0^+ \setminus \{\eta\}$. Analogously to the first case, we get that $u = 0$, $(sI - A_0)v = 0$, and $C_0 v = 0$. This implies that $v = 0$ due to the detectability of the pair $(A_0, C_0)$.
In the case $s = \eta$, we rewrite the condition as
$(\eta I - A_0) v - A_{\mathrm{rc}} E u = 0$ and $ C_0 v + C_0 E u = 0$. Under the additional assumption \eqref{eq-detec-cond}, we get that $(v,u) = 0$.
Since $\mathcal{N}(sI - A) \cap \mathcal{N}(C)= \{0\}$ for all $s \in \overline{\C}_0^+$, we conclude that the extended pair $(A, C)$ is exponentially detectable.
\end{proof}
\begin{lem} \label{lem-stab-extsys}
Assume that $(A_0, E)$ is exponentially stabilizable.
(i.) If $A_{\mathrm{rc}} E = 0$, the extended system $(A,B)$ is also exponentially stabilizable.
(ii.) If $A_{\mathrm{rc}} E \neq 0$, assume further that
\begin{align} \label{eq-stab-cond}
\mathcal{N}\left((sI - A_0)^\ast\right) \cap \mathcal{N} \left(\left(A_{\mathrm{rc}} E + (\eta - s)E \right)^\ast \right) = \{0\} \qquad \text{for~} s \in \sigma^+(A_0),
\end{align}
the extended system $(A,B)$ is exponentially stabilizable.
\end{lem}
\begin{proof}
The pair $(A_0, E)$ is exponentially stabilizable if and only if $(A_0^\ast, E^\ast)$ is exponentially detectable. Analogously to Lemma \ref{lem-detec-extsys}, we get that
\begin{align*}
\mathcal{N}(sI - A_0^\ast) \cap \mathcal{N}(E^\ast) = \{0\} \quad \text{for all~~} s \in \overline{\C}_0^+.
\end{align*}
Since the pair $(A_0, E)$ is exponentially stabilizable, by Theorem 5.2.6 in \cite{CurZwa95} the extended operator $A$ satisfies all conditions of Theorem 5.2.11 in \cite{CurZwa95}. To prove the stabilizability of the extended system $(A, B)$, we will check that
\begin{align*}
\mathcal{N}(sI - A^\ast) \cap \mathcal{N}(B^\ast) = \{0\} \quad \text{for all~~} s \in \overline{\C}_0^+.
\end{align*}
If $(v,u)^\top \in \mathcal{N}(sI - A^\ast) \cap \mathcal{N}(B^\ast)$ for some $s \in \overline{\C}^+_0$, then
\begin{align} \label{eq-stab-gencon}
\begin{cases}
(sI - A_0^\ast) v = 0, \\
(-A_{\mathrm{rc}} E)^\ast v + (s - \eta ) u = 0, \\
-E^\ast v + u = 0.
\end{cases}
\end{align}
(i.) If $A_{\mathrm{rc}} E = 0$, the conditions \eqref{eq-stab-gencon} are rewritten as $(sI - A_0^\ast) v = 0$, $(s - \eta) u = 0$, $-E^\ast v + u = 0$.
For $s \in \overline{\C}_0^+ \setminus \{\eta\} $, it follows that $u = 0$, $(sI - A_0^\ast) v = 0$, and $E^\ast v = 0$, that is, $v \in \mathcal{N}(sI - A_0^\ast) \cap \mathcal{N}(E^\ast)$. Thus $v = 0$.
For $s = \eta$, since $\eta \in \rho(A_0)$, we get that $v = 0$. It follows that $u = 0$.
Finally, for all $s \in \overline{\C}_0^+$, we get that $\mathcal{N}(sI - A^\ast) \cap \mathcal{N}(B^\ast) = \{0\}$. Therefore we conclude that the extended system $(A, B)$ is stabilizable.
(ii.) We consider the case $A_{\mathrm{rc}} E \neq 0$. For $s \in \overline{\C}_0^+ \cap \rho(A_0^\ast)$, we get that $v=0$ and then $u = 0$.
For $s \in \sigma^+(A_0^\ast)$, we rewrite $u = E^\ast v$ and
\begin{align*}
0 &= (A_{\mathrm{rc}} E)^\ast v - (s - \eta) u = (A_{\mathrm{rc}} E)^\ast v + (\eta - s) E^\ast v
= \left(A_{\mathrm{rc}} E + (\eta - \bar{s} ) E \right)^\ast v.
\end{align*}
It follows that $v \in \mathcal{N} \left( \left(A_{\mathrm{rc}} E + (\eta - \bar{s} ) E \right)^\ast \right)$. Moreover $v \in \mathcal{N} \left((\bar{s} I - A_0)^\ast \right)$. Under the additional assumption \eqref{eq-stab-cond}, we get that $v = 0$, and then $u = 0$.
In conclusion for all $s \in \overline{\C}_0^+$, we have that $\mathcal{N}(sI - A^\ast) \cap \mathcal{N}(B^\ast) = \{0\}$ and thus the extended system $(A,B)$ is stabilizable.
\end{proof}
\begin{rem}
In \cite[Exercise 5.25]{CurZwa95}, the assumption $0 \in \rho (A_0)$ is needed to obtain the detectability and stabilizability of the extended system $(A,B,C)$. In our approach, we instead require $\eta \in \rho(A_0)$. This condition is less restrictive, since we can freely choose $\eta >0$.
\end{rem}
\begin{rem}
The additional conditions \eqref{eq-detec-cond} and \eqref{eq-stab-cond} that guarantee the detectability and stabilizability of the extended system in Lemmas \ref{lem-detec-extsys} and \ref{lem-stab-extsys} are checkable: \eqref{eq-detec-cond} needs to be verified only for $\eta$, and \eqref{eq-stab-cond} only for the finitely many $s \in \sigma^+(A_0)$. Under the Galerkin approximation, these conditions can be verified using the approximations of all operators, i.e., by checking
\begin{align*}
\mathcal{N}\left(\eta I - A^N \right)~\cap~\mathcal{N}\left(C^N\right) &= \{0\}, \\
\mathcal{N}\left(\left(sI - A_0^N\right)^\ast\right) \cap \mathcal{N} \left(\left(A_{\mathrm{rc}}^N E^N + (\eta - s)E^N \right)^\ast \right) &= \{0\} \qquad \text{for~} s \in \sigma^+\left(A_0^N\right).
\end{align*}
\end{rem}
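Numerically, both kernel conditions amount to a full-column-rank test on a stacked matrix, since $\mathcal{N}(M_1)\cap\mathcal{N}(M_2)=\{0\}$ exactly when $\pmatsmall{M_1 \\ M_2}$ has trivial kernel. A sketch (the small matrices are placeholders of our own for the Galerkin approximations):

```python
import numpy as np

def kernels_intersect_trivially(M1, M2):
    """True iff N(M1) \\cap N(M2) = {0}, i.e. the stacked matrix [M1; M2]
    has full column rank."""
    stacked = np.vstack([M1, M2])
    return np.linalg.matrix_rank(stacked) == stacked.shape[1]

# Placeholder data standing in for eta * I - A^N and C^N.
eta = 0.5
AN = np.array([[0.0, 1.0], [-1.0, -1.0]])
CN = np.array([[1.0, 0.0]])
detectability_ok = kernels_intersect_trivially(eta * np.eye(2) - AN, CN)
```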
In the following, we present a block diagram of the algorithm for robust regulation of boundary control systems.
\subsection{The algorithm} \label{subsec-Algo}
\begin{center}
\begin{tikzpicture}[auto, thick, node distance=2.3cm, text width=10cm]
\node at (-5,1) [above=5mm, right=0mm] {\textsc{Extended system}};
\node[process] (Ext){
\textbf{Step E1. Extension $E$} \\
Construct an extension $E$ by solving a system \\
$A_{\mathrm{d}} E u = \eta E u,\quad {\mathcal B} E u = u.$
};
\node (ConExt) [process, below of=Ext]{
\textbf{Step E2. Extended system $(A, B, C)$} \\
Construct an extended system $(A, B, C)$ where \\
$
A = \pmat{A_0 & A_{\mathrm{rc}} E \\ 0 & \eta I}, \quad
B = \pmat{-E \\ I}, \quad C = \pmat{C_0 & C_0 E}.
$
};
\draw[->] (Ext) -- (ConExt);
\draw[blue,thick,dotted] ($(Ext.north west)+(-0.2,0.5)$) rectangle ($(ConExt.south east)+(0.2,-0.5)$);
\node at (-5,-5) [above=8mm, right=0mm] {\textsc{The controller}};
\node (InMod) [process, below of=ConExt, yshift = -0.9cm ] {
\textbf{Step C1. The Internal Model} \\
Choose $G_1$ and $G_2$ incorporating the internal model.
};
\node (Gar) [process, below of=InMod] {
\textbf{Step C2. The Galerkin Approximation} \\
Fix $N \in \N$, apply the Galerkin approximation to \\
operators $(A_0, A_{\mathrm{rc}}, E, C_0)$ to get their corresponding \\ matrices $(A_0^N, A_{\mathrm{rc}}^N, E^N, C_0^N)$. Then compute the matrices $(A^N, B^N, C^N)$ as the approximations of $(A,B,C)$.
};
\node[process, below of=Gar] (Stab){
\textbf{Step C3. Stabilization} \\
Choose $L^N, K_1^N, K_2^N$ by solving finite-dimensional Riccati equations with the matrices $(A^N, B^N, C^N)$ and $(G_1, G_2)$.
};
\node (ModRed) [process, below of=Stab] {
\textbf{Step C4. The Model Reduction} \\
Fix $r \le N$, use the Balanced Truncation method to get a stable $r$-dimensional system $(A_L^r,[B_L^r ,\; L^r] ,K_2^r)$.
};
\draw[->] (InMod) -- (Gar);
\draw[->] (Gar) -- (Stab);
\draw[->] (Stab) -- (ModRed);
\draw[->] (ConExt) -- (InMod);
\draw[red,thick,dotted] ($(InMod.north west)+(-0.2,0.5)$) rectangle ($(ModRed.south east)+(0.2,-0.5)$);
\end{tikzpicture}
\end{center}
\section{Boundary control of parabolic partial differential equations} \label{sec-para}
We consider controlled parabolic equations with Dirichlet boundary controls, for time $t >0$, in a $C^\infty\text{-smooth}$ domain $\Omega \subset \R^d$, with $d$ a positive integer, located locally on one side of its boundary $\partial \Omega = \Gamma_c \cup \Gamma_u$, $\Gamma_c \cap \Gamma_u = \emptyset$, as follows
\begin{subequations} \label{eq-para}
\begin{align}
\frac{\partial w}{\partial t}(\xi, t) - \nu \Delta w (\xi, t) + \alpha(\xi) w(\xi, t)+ \nabla \cdot \left( \beta (\xi) w (\xi, t) \right) &= 0, \quad w(\xi,0) = w_0(\xi),\\
w (\bar{\xi}, t) = \sum \limits_{i = 1}^m {u_i(t) \psi_i(\bar{\xi})} \text{~~for~~} \bar{\xi}\in \Gamma_c,\quad w (\bar{\xi}, t) &= 0 \text{~~for~~} \bar{\xi}\in \Gamma_u.
\end{align}
\end{subequations}
In the variables $(\xi, \bar{\xi}, t)\in \Omega \times \Gamma \times (0, +\infty)$, the unknown in the equation is the function $w = w(\xi,t) \in \R$. The diffusion coefficient $\nu$ is a positive constant. The functions $\alpha : \Omega \to \R$ and $\beta: \Omega \to \R^d$ are fixed and depend only on $\xi$; we assume that $\alpha \in L^\infty (\Omega, \R)$ and $\beta \in L^\infty (\Omega, \R^d)$. The initial datum $w_0$ is given.
The functions $ \psi_i (\bar{\xi})$ are fixed and will play the role of boundary actuators. The control input is $u(t) = (u_i(t))_{i=1}^m \in U = \C^m$ (see \cite{PhanRod18} and example below).
Analogously, we assume the system has $p$ measured outputs, so that $y(t) = (y_k(t))_{k=1}^p \in Y = \R^p$ and
\begin{align*}
y_k(t) = \int_\Omega {w(\xi,t) c_k (\xi) d\xi},
\end{align*}
for some fixed $c_k(\cdot) \in L^2(\Omega,\R)$. The output operator $C_0 \in {\mathcal L}(X_0,Y)$ is such that $C_0 w = \left( \langle w, c_k \rangle_{L^2(\Omega)} \right)_{k=1}^p$ for all $w \in X_0$.
\subsection{Constructing the extended system}
We choose $X_0 = L^2(\Omega, \R)$, $V_0 = H_0^1 (\Omega, \R)$ and denote $X = X_0 \times U,~~V = V_0 \times U$. Denote $v = w - Eu$, $A_{\mathrm{d}} w \coloneqq \nu {\mathrm D}elta w$ and $A_{\mathrm{rc}} w \coloneqq -\alpha w - \nabla \cdot (\beta w)$. \\
For each actuator $\psi_i \in H^{\frac{3}{2}}(\Gamma)$, we choose the extension $\Psi_i \in H^2(\Omega)$ which solves the elliptic equation
\begin{align}\label{eq-ext-elliptic}
\nu \Delta \Psi_i = \eta \Psi_i, \qquad \Psi_i\mid_{\Gamma_c} = \psi_i, \qquad \Psi_i\mid_{\Gamma_u} = 0.
\end{align}
We then set the operator $E: U \to S_\Psi$ with $S_\Psi \coloneqq \Span \{ \Psi_i \mid i \in \{1,2,\dots, m \} \} $ as
\begin{align*}
Eu \coloneqq \sum_{i=1}^m u_i \Psi_i.
\end{align*}
We rewrite the boundary control problem with the new state variable $x = (v, u)^\top = (w-Eu, u)^\top$.
The new dynamic control variable $\kappa(t) \in U$ is defined as $\kappa_i (t) = \dot{u}_i(t) - \eta u_i(t)$ for all $i \in \{1, \dots, m \}$. The new input operator $B \in {\mathcal L}(U, X)$ is such that $B \kappa = \pmat {-E \\ I} \kappa = \pmat {-\sum_{i=1}^m \kappa_i \Psi_i \\ \kappa}$ for all $\kappa \in U$. The new output operator $C \in {\mathcal L} (X, Y)$ is such that
\begin{align*}
C x = \left(\int_\Omega v(\xi)c_k(\xi) d\xi + \sum_{i=1}^m u_i \int_\Omega \Psi_i (\xi) c_k (\xi) d\xi \right)_{k=1}^p
\end{align*}
for all $x \in X$. We get an extended system with $(A,B,C)$ as in \eqref{eq-extABC}.
As shown in \cite[Section V. B.]{PauPhan19}, the sesquilinear form $\sigma_0$ corresponding to the operator $A_0$ is bounded and coercive. Thus the sesquilinear form $\sigma$ corresponding to the extended operator $A$ here has the same properties (as shown in the proof of Theorem \ref{the-RORP}).
\subsection{A 1D heat equation with Neumann boundary control} \label{sec-exHeatNeu}
In this section we consider a 1D heat equation with Neumann boundary control and construct the extended system by our approach. Reformulating this control system as an extended system was also considered in \cite[Example 3.3.5]{CurZwa95} via the choice of a right inverse operator. We first introduce the PDE model
\begin{align*}
\frac{\partial w}{\partial t} (\xi,t) &= \frac{\partial^2 w}{\partial \xi^2} (\xi,t), \quad \frac{\partial w}{\partial \xi} (0,t) = 0,~~\frac{\partial w}{\partial \xi} (1,t) = u(t), \\
w(\xi,0) &= w_0(\xi).
\end{align*}
To construct the extended system, we define $X_0 = L^2(0,1),~U = \C$. The operator ${\mathcal A} =\frac{\partial^2}{\partial \xi^2}$ has domain
$
D({\mathcal A}) = \big\{ h \in H^2(0,1) \mid \frac{dh}{d\xi} (0) = 0 \big\}
$
and the boundary operator ${\mathcal B} h = \frac{dh}{d\xi} (1)$ with $D({\mathcal B}) = D({\mathcal A})$.
We define operator $A_0 = \frac{d^2}{d \xi^2}$ with domain
\begin{align*}
D(A_0) = D({\mathcal A}) \cap \mathcal{N}({\mathcal B}) = \Big\{ h \in H^2(0,1) \mid \frac{dh}{d\xi} (0) = \frac{dh}{d\xi} (1) = 0 \Big\}.
\end{align*}
By choosing $A_{\mathrm{d}} = A_0$ and $\eta = 1$, we define $Eu(t) = g(\xi) u(t)$ where $g(\xi)$ solves the following second order ODE
\begin{align*}
g'' (\xi) = g(\xi),\quad g'(0) = 0,\quad g'(1) = 1.
\end{align*}
By solving this ODE, we get $g(\xi) = \frac{2 \mathrm{e}}{\mathrm{e}^2-1} \cosh \xi$. By denoting an extended variable $x = (v, u)^\top$, we get an abstract system $\dot{x}(t) = A x(t) + B\kappa(t)$ where
\begin{align*}
A = \pmat{\frac{d^2}{d\xi^2} & 0 \\0 & 1}, \quad B = \pmat{-E \\ 1}.
\end{align*}
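As a quick sanity check (ours, purely numerical), $g(\xi) = \frac{2 \mathrm{e}}{\mathrm{e}^2-1} \cosh \xi$ indeed satisfies the ODE and both Neumann conditions: since $(\cosh)'' = \cosh$, $g'' = g$ holds identically, and the constant is fixed by $g'(1) = 1$.

```python
import math

# g(xi) = c * cosh(xi) with c = 2e / (e^2 - 1); since (cosh)'' = cosh,
# g'' = g holds identically, so only the boundary conditions need checking.
c = 2.0 * math.e / (math.e ** 2 - 1.0)

def g(x):
    return c * math.cosh(x)

def dg(x):          # g'(x) in closed form
    return c * math.sinh(x)

bc_left = dg(0.0)    # Neumann condition at xi = 0, should equal 0
bc_right = dg(1.0)   # Neumann condition at xi = 1, should equal 1
```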
In \cite{CurZwa95}, the function $g(\xi) = \frac{1}{2}\xi^2$ was chosen instead, again with $Eu(t) = g(\xi) u(t)$. This choice leads to another extended system with
$
A = \pmat{\frac{d^2}{d\xi^2} & 1 \\0 & 0}
$ and the same $B$. We emphasize that the corresponding operator $A$ does not coincide with the choice of $\eta = 0$ in our approach.
We then apply the controller design in Section \ref{sec-DesignCon} for the extended systems.
For simple PDE models, the construction of extension $E$ using our approach does not yet give significant advantages over the method presented in \cite{CurZwa95}. We will next present a two-dimensional PDE model where the construction of $E$ would not be possible by hand.
\subsection{A 2D diffusion-reaction-convection model}
\label{sec-numex-para}
In this example, we consider the equation \eqref{eq-para} on a domain $\Omega = \left(\bigcup\limits_{i = 1}^6 \Omega_i \right) \setminus \Omega_7$ (plotted in Figure \ref{fig_domain2D}) where
\begin{align*}
\Omega_1 &= \{ (\xi_1, \xi_2) \in \R^2 \mid -2 < \xi_1 < 0,~~ -1 < \xi_2 \le 1 \}, \\
\Omega_2 &= \{ (\xi_1, \xi_2)\in \R^2 \mid \xi_1^2 + \xi_2^2 < 1,~~ \xi_1 \ge 0,~~\xi_2 \le 0 \} , \\
\Omega_3 &= \{ (\xi_1, \xi_2) \in \R^2 \mid -1 \le \xi_1 \le 1,~~0 < \xi_2 < 2 \}, \\
\Omega_4 &= \{ (\xi_1, \xi_2) \in \R^2 \mid \xi_1^2 + (\xi_2 - 2)^2 < 1,~~ \xi_2 \ge 2 \}, \\
\Omega_5 &= \{ (\xi_1, \xi_2) \in \R^2 \mid (\xi_1+2)^2 + (\xi_2 - 2)^2 > 1,~~ -2 < \xi_1 \le -1,~~1 \le \xi_2 < 2 \}, \\
\Omega_6 &= \{ (\xi_1, \xi_2) \in \R^2 \mid (\xi_1 + 2)^2 + \xi_2^2 < 1,~~\xi_1 \le -2 \}, \\
\Omega_7 &= \left\{ (\xi_1, \xi_2) \in \R^2 \mid \left(\xi_1 + \frac{3}{2} \right)^2 + \left(\xi_2 - \frac{1}{4} \right)^2 \le \frac{4}{25} \right \}.
\end{align*}
The boundary $\Gamma$ can be described as seven segments:
\begin{align*}
\Gamma_1 &= \{ (\xi_1,-1 ) \in \R^2 \mid -2 < \xi_1 < 0 \},\\
\Gamma_2 &= \{ (\xi_1, \xi_2)\in \R^2 \mid \xi_1^2 + \xi_2^2 = 1,~~ \xi_1 \ge 0,~~\xi_2 \le 0 \},\\
\Gamma_3 &= \{ (1, \xi_2) \in \R^2 \mid 0 < \xi_2 < 2 \}, \\
\Gamma_4 &= \{ (\xi_1, \xi_2)\in \R^2 \mid \xi_1^2 + (\xi_2 - 2)^2 = 1,~~\xi_2 \ge 2 \}, \\
\Gamma_5 &= \{ (\xi_1, \xi_2)\in \R^2 \mid (\xi_1+2)^2 + (\xi_2 - 2)^2 = 1,~~\xi_1 > -2,~~\xi_2 < 2 \}, \\
\Gamma_6 &= \{ (\xi_1, \xi_2)\in \R^2 \mid (\xi_1 + 2)^2 + \xi_2^2 = 1,~~\xi_1 \le -2 \}, \\
\Gamma_7 &= \left\{ (\xi_1, \xi_2)\in \R^2 \mid \left(\xi_1 + \frac{3}{2} \right)^2 + \left(\xi_2 - \frac{1}{4} \right)^2 = \frac{4}{25} \right\}.
\end{align*}
\begin{figure}
\caption{Boundary controls located on red segments and regions of observations (blue).}
\label{fig_domain2D}
\end{figure}
We take $\nu = 0.5$, $\alpha (\xi) = 3 (\xi_1 + \xi_2),~\beta_1(\xi)= \cos (\xi_1) - \sin (2 \xi_2) - 2,~\beta_2 (\xi) = \sin (3 \xi_1) + \cos (4 \xi_2),~\beta = (\beta_1, \beta_2)$ in \eqref{eq-para}.
We consider \eqref{eq-para} with two boundary inputs located on two distinct segments $\Gamma_3$ and $\Gamma_6$ (see the red boundary segments in Figure \ref{fig_domain2D}), i.e. $\Gamma_c = \Gamma_3 \cup \Gamma_6$ and $\Gamma_u = \Gamma_1 \cup \Gamma_2 \cup \Gamma_4 \cup \Gamma_5 \cup \Gamma_7$. On these segments, for $\bar{\xi} = \left( \bar{\xi}_1, \bar{\xi}_2 \right) \in \Gamma$, we take $\psi_1 (\bar{\xi}) = \sin \left( \frac{\pi \bar{\xi}_2}{2}\right) \chi_{\Gamma_3}(\bar{\xi})$ and $\psi_2 (\bar{\xi}) = \sin \left( 3\left(\theta(\bar{\xi}) - \frac{\pi}{2} \right) \right) \chi_{\Gamma_6}(\bar{\xi})$, where $\theta(\bar{\xi}) = \operatorname{arctan2} \left( \bar{\xi}_1+2,\, \bar{\xi}_2 \right)$. Next we define two extensions of the boundary controls by solving elliptic equations with $\eta = \nu = 0.5$,
\begin{align}
\nu \Delta \Psi_i = \eta \Psi_i, \qquad \Psi_i\mid_{\Gamma_c} = \psi_i, \quad \Psi_i\mid_{\Gamma_u} = 0, \quad i \in \{1,~2\}.
\end{align}
Two corresponding solutions $\Psi_1$ and $\Psi_2$ are plotted in Figure \ref{fig_para_ext}.
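A one-dimensional finite-difference analogue gives a feel for these extension equations (illustrative only; the 2D problems above are solved with FEM, and the Dirichlet data $\Psi(0)=1$, $\Psi(1)=0$ is an assumption of the sketch):

```python
import numpy as np

# 1D analogue: nu * Psi'' = eta * Psi on (0,1), Psi(0) = 1, Psi(1) = 0,
# with nu = eta = 0.5, so the exact solution is sinh(1 - x)/sinh(1).
nu, eta = 0.5, 0.5
n = 200                       # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)

# -nu * second difference + eta * I, boundary value moved to the RHS.
main = np.full(n, 2 * nu / h**2 + eta)
off = np.full(n - 1, -nu / h**2)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
b = np.zeros(n)
b[0] = nu / h**2 * 1.0        # contribution of Psi(0) = 1

psi = np.linalg.solve(A, b)
exact = np.sinh(1 - x) / np.sinh(1)
assert np.max(np.abs(psi - exact)) < 1e-3
```

The extension decays away from the controlled boundary, mirroring the behaviour of $\Psi_1$ and $\Psi_2$ in Figure \ref{fig_para_ext}.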
\begin{figure}
\caption{Two extensions of boundary actuators.}
\label{fig_para_ext}
\end{figure}
\begin{rem}
In the theoretical results, we required the boundary actuators to be in $H^\frac{3}{2}(\Gamma)$. The two actuators $\psi_1 (\bar{\xi})$ and $\psi_2 (\bar{\xi})$ above actually lie in $H^s(\Gamma)$ for every $s < \frac{3}{2}$, but not necessarily in $H^\frac{3}{2}(\Gamma)$. This lack of regularity is neglected in the simulation.
\end{rem}
Two measurements act on the blue rectangular subdomains of $\Omega$ (see Figure \ref{fig_domain2D}). The rectangle $\Omega_{m1}$ has the four corners
$$(-1,~-.75),~~(-.5,~-.75),~~(-.5,~-.25),~~(-1,~-.25),$$
and the rectangle $\Omega_{m2}$ has the four corners
$$(-.3,~1.9464),~~(.0536,~2.3),~~(-.3,~2.6536),~~(-.6536,~2.3).$$
More precisely, we choose
$c_1(\cdot) = \chi_{\Omega_{m1}} (\cdot) ,~~ c_2(\cdot) = \chi_{\Omega_{m2}} (\cdot)$.
Our aim is to track a non-smooth periodic reference signal $y_{\mbox{\scriptsize\textit{ref}}}(t+2) = y_{\mbox{\scriptsize\textit{ref}}}(t) = (y_1(t), y_2(t))$ for all $t \ge 0$, where
\begin{align*}
y_1(t) = \begin{cases}
1 \qquad &\text{if~~} 0 \le t < \frac{1}{2}, \\
-2 t + 2 \qquad &\text{if~~} \frac{1}{2} \le t < 1, \\
0 \qquad &\text{if~~} 1 \le t < \frac{3}{2}, \\
2 t -3 \qquad &\text{if~~} \frac{3}{2} \le t < {2},
\end{cases}
\end{align*}
and
\begin{align*}
y_2(t) = \begin{cases}
-t \qquad &\text{if~~} 0 \le t < 1, \\
t-2 \qquad &\text{if~~} 1 \le t < 2. \\
\end{cases}
\end{align*}
Signals of this type are approximated by a truncated Fourier series with constant coefficients $a_0$, $a_k$, $b_k$:
\begin{align*}
y_{\mbox{\scriptsize\textit{ref}}}(t) \approx a_0 + \sum \limits_{k=1}^q (a_k \cos ( k \pi t ) + b_k \sin ( k \pi t )).
\end{align*}
Here we use $q = 10$; the corresponding set of frequencies is $\{ k \pi \mid k \in \{ 0, 1, \dots, 10 \} \}$, and $n_k = 1$ for all $k \in \{ 0, 1, \dots, 10 \}$.
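The Fourier coefficients can be computed by quadrature over one period. The following sketch (Python; illustrative only, using a midpoint rule rather than the authors' implementation) computes constant coefficients for the component $y_1$ and checks the quality of the truncation with $q = 10$:

```python
import numpy as np

# 2-periodic piecewise-linear reference component y_1 from the example.
def y1(t):
    t = t % 2.0
    if t < 0.5:
        return 1.0
    if t < 1.0:
        return -2.0 * t + 2.0
    if t < 1.5:
        return 0.0
    return 2.0 * t - 3.0

T, q, N = 2.0, 10, 4000
ts = (np.arange(N) + 0.5) * T / N      # midpoint grid over one period
dt = T / N
vals = np.array([y1(t) for t in ts])

# Constant Fourier coefficients via midpoint quadrature.
a0 = vals.sum() * dt / T
approx = np.full(N, a0)
for k in range(1, q + 1):
    ak = 2.0 / T * np.sum(vals * np.cos(k * np.pi * ts)) * dt
    bk = 2.0 / T * np.sum(vals * np.sin(k * np.pi * ts)) * dt
    approx = approx + ak * np.cos(k * np.pi * ts) + bk * np.sin(k * np.pi * ts)

assert abs(a0 - 0.5) < 1e-4            # mean value of y_1 over one period
assert np.max(np.abs(approx - vals)) < 0.2
```

Since $y_1$ is continuous and piecewise linear, the coefficients decay like $1/k^2$ and the truncation converges uniformly.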
The domain $\Omega$ is approximated by a polygonal domain $\Omega_D$, and we consider a partition of $\Omega_D$ into non-overlapping triangles to discretize the extended system using the Finite Element Method.
We construct the observer-based controller using a Galerkin approximation with order $N = 1956$ and subsequent Balanced Truncation with order $r = 30$. The internal model has dimension $\dim Z_0 = 2 \times 2 \times 10 + 2 \times 1 \times 1 = 42$. The parameters of the stabilization are chosen as
\begin{align*}
\alpha_1 = 0.65,~~ \alpha_2 = 0.95,~~R_1 = I_2,~~R_2 = 10^{-2}I_2.
\end{align*}
The operators $Q_0,~ Q_1$, and $Q_2 $ are freely chosen such that $Q_2 Q_2^\ast = I_X$ and $C_c^\ast C_c = I_{Z_0 \times X}$.
Another Finite Element approximation with $M = 2688$ is constructed to simulate the original system. The initial states of the controlled system are $v_0 (\xi) = 0.25 \sin (\xi_1)$ and $u_0 = 0 \in \R^{42 + 30}$. The tracking signals are plotted in Figure \ref{fig_track2D}. In Figure \ref{fig_hsv2D}, the first Hankel singular values of the Galerkin approximation are plotted.
\begin{figure}
\caption{Output tracking of the boundary control of the 2D parabolic equation.}
\label{fig_track2D}
\end{figure}
\begin{figure}
\caption{Hankel singular values.}
\label{fig_hsv2D}
\end{figure}
\section{Boundary control of a beam equation with Kelvin-Voigt damping} \label{sec-beam}
Consider a one-dimensional Euler-Bernoulli beam model on $\Omega = (0,l)$
\begin{subequations} \label{eq-beam}
\begin{align}
\frac{\partial^2 w}{\partial t^2} (\xi,t) &+ \frac{\partial^2}{\partial \xi^2} \left( \alpha \frac{\partial^2 w}{\partial \xi^2} (\xi,t) + \beta \frac{\partial^3 w}{\partial \xi^2 \partial t} (\xi,t) \right) + \gamma \frac{\partial w}{\partial t} (\xi,t) = 0,
\\
w(\xi,0) &= w_0 (\xi), \qquad \frac{\partial w}{\partial t}(\xi,0) = w_1 (\xi), \\
y(t) &= C_1 w(\cdot, t) + C_2 \dot{w}(\cdot, t),
\end{align}
\end{subequations}
with the constants $\alpha, \beta > 0$ and $\gamma \ge 0$. The measurement operators for the deflection $w(\cdot,t)$ and the velocity $\dot{w}(\cdot, t)$ are such that $C_j w = \left(\langle w, c^j_k \rangle_{L^2} \right)_{k=1}^p \in Y = \R^p$ for $w \in L^2(0,l)$ and $j = 1,2$, for some fixed functions $c^j_k(\cdot) \in L^2 (0,l)$.
We consider boundary conditions
\begin{align*}
w(0,t) = \frac{\partial w}{\partial \xi} (0,t) = \frac{\partial^3 w}{\partial \xi^3} (l,t) = 0, \quad \frac{\partial^2 w}{\partial \xi^2} (l,t) = u(t),
\end{align*}
where $u(t)$ is the boundary input at $\xi = l$. This type of boundary control was considered in \cite[Section 10.4]{TucWei09} and \cite{Guo14} (with boundary disturbance signals).
Let $W_0 = \left\{ w \in H^2(0, l) \mid w(0) = \frac{dw}{d\xi}(0) = 0 \right\}$ and define the inner product on $W_0$ by
\begin{align*}
\langle w_1 , w_2 \rangle_{W_0} = \int_0^l w_1 (\xi) w_2(\xi) d\xi, \quad \forall w_1, w_2 \in W_0.
\end{align*}
We define the spaces $X_0 = W_0 \times L^2 (0,l) $, $V_0 = W_0 \times W_0$ and the operator
\begin{align*}
{\mathcal A} &= \pmat{0 & {\mathbf I} \\ -\alpha \frac{\partial^4}{\partial \xi^4} & -\beta \frac{\partial^4}{\partial \xi^4} - \gamma}, \\
\text{with~} D({\mathcal A}) &= \left \{ (w_1, w_2) \in V_0 \mid \alpha \frac{d^2}{d\xi^2} w_1 + \beta \frac{d^2}{d\xi^2} w_2 \in H^2(0,l), ~~ \frac{d^3 w_1}{d\xi^3}(l) = 0 \right \}.
\end{align*}
The boundary operator ${\mathcal B}: X_0 \to \C$ is defined by ${\mathcal B} \pmat{w_1 \\ w_2} = \frac{d^2 w_1}{d\xi^2} (l)$.\\
The operator $A_0$ is given by
$
A_0 = \pmat{0 & {\mathbf I} \\ -\alpha \frac{\partial^4}{\partial \xi^4} & -\beta \frac{\partial^4}{\partial \xi^4} - \gamma}
$ with the domain
\begin{align*}
D(A_0) &= D({\mathcal A})~\cap~{\mathcal N} ({\mathcal B}) \\
&= \left \{ (v_1, v_2) \in V_0 \mid \alpha \frac{d^2 v_1}{d\xi^2} + \beta \frac{d^2 v_2}{d\xi^2 } \in H^2(0,l), ~~ \frac{d^2 v_1}{d\xi^2}(l) = \frac{d^3 v_1}{d\xi^3}(l) = 0 \right \}.
\end{align*}
\subsection{The extended system}
\label{sec-beam-1stext}
Choose $A_{\mathrm{d}} = \pmat{0 & {\mathbf I} \\ -\alpha \frac{\partial^4}{\partial \xi^4} & -\beta \frac{\partial^4}{\partial \xi^4}}$ and \\
$A_{\mathrm{rc}} = \pmat{0 & 0 \\ 0 & -\gamma} $ with $D(A_{\mathrm{d}}) = D({\mathcal A})$ and $D(A_{\mathrm{rc}}) = X_0$.
We construct $Eu(t) = \pmat{g_1(\xi)\\ g_2(\xi)} u(t)$ satisfying both conditions \eqref{ass-cond1} and \eqref{ass-condBC} as follows
\begin{align*}
\pmat{0 & {\mathbf I} \\ -\alpha \frac{\partial^4}{\partial \xi^4} & -\beta \frac{\partial^4}{\partial \xi^4}} \pmat{g_1(\xi)\\ g_2(\xi)} &= \eta \pmat{g_1(\xi)\\ g_2(\xi)}, \qquad
{\mathcal B} \pmat{g_1(\xi)\\ g_2(\xi)} = 1. \\
g_1(0) = g_1'(0) = g_1'''(l) &= 0, \quad g_2(0) = g_2'(0) = 0.
\end{align*}
We need to solve a system of ODEs as follows
\begin{subequations} \label{eq-beam-1stode}
\begin{align}
g_2(\xi) &= \eta g_1(\xi), \\
g_1''''(\xi) &= \frac{-\eta^2}{\alpha + \beta \eta} g_1(\xi), \\
g_1(0) = g_1'(0) = g_1'''(l) &= 0, \quad g_1''(l) = 1, \\
g_2(0) = g_2'(0) &= 0.
\end{align}
\end{subequations}
We have freedom in the choice of boundary conditions for $g_2$; here we choose $g_2'''(l) = 0$ and $g_2''(l) = \eta$, which is consistent with $g_2 = \eta g_1$. The condition $\alpha g''_1 + \beta g_2'' \in H^2(0,l)$ can be verified after solving the system.
Define the change of variable
$
\pmat{v \\ \dot{v}} = \pmat{w \\ \dot{w}} - Eu(t),
$
and the new control $\kappa(t) = \dot{u}(t) - \eta u(t)$. The extended system can be rewritten in terms of the new state $x = (v, \dot{v}, u)^\top$ in the abstract form $\dot{x}(t) = A x(t) + B\kappa(t)$, where
\begin{align} \label{beam-eq-1stext}
A = \pmat{0 & {\mathbf I} & 0 \\
-\alpha \frac{\partial^4}{\partial \xi^4} & -\beta \frac{\partial^4}{\partial \xi^4} - \gamma & -\gamma g_2(\xi) \\
0 & 0 & \eta \\
}, \quad
B= \pmat{-g_1(\xi) \\ -g_2(\xi) \\ 1 }.
\end{align}
The sesquilinear form associated with the operator $A_0$ is bounded and coercive (see \cite{ItoMor98} and \cite[Section V.C.]{PauPhan19}). Thus the sesquilinear form generated by the operator $A$ in \eqref{beam-eq-1stext} is also bounded and coercive, as shown in the proof of Theorem \ref{the-RORP}.
The observation part can be rewritten in terms of the new state as
\begin{align*}
y(t) = C_1 w(\cdot, t) + C_2 \dot{w}(\cdot, t) = C_1 (v (\cdot, t) + g_1(\cdot) u(t) ) + C_2( \dot{v}(\cdot, t) + g_2(\cdot) u (t) )
\end{align*}
which implies that $C = \pmat{C_1 & C_2 & C_1 g_1 + C_2 g_2} $. Under this setting, the extended system $(A,B,C)$ can be rewritten in the abstract form as in \eqref{eq-sys-ext}-\eqref{eq-obs-ext}.
\subsection{An alternative extended system}
\label{sec-beam-2ndext}
For second-order (in time) PDE models, we can use an alternative approach to construct the extended system. For this class of systems, the present approach is more natural than the first one; however, it still has some disadvantages, which we discuss below.
Let us define $v(\xi,t) = w(\xi,t) - g(\xi) u(t)$ where $g(\xi)$ solves the ODE
\begin{subequations} \label{eq-beam-2ndode}
\begin{align}
g'''' (\xi) - \eta g (\xi) &= 0, \\
g(0) = g'(0) = g'''(l) &= 0, \\
g''(l) &= 1
\end{align}
\end{subequations}
where $\eta$ is a positive constant. Then we can rewrite the equation \eqref{eq-beam} as follows
\begin{subequations} \label{eq-pde-ext}
\begin{align}
\frac{\partial^2 v}{\partial t^2}(\xi,t) &+ \alpha \frac{\partial^4 v}{\partial \xi^4}(\xi,t) + \left( \beta \frac{\partial^4 }{\partial \xi^4} + \gamma \right) \frac{\partial v}{\partial t}(\xi,t) \\
&= - \left( u''(t) + (\beta \eta + \gamma) u'(t) + \alpha \eta u(t) \right) g(\xi), \\
v(0,t) &= \frac{\partial v}{\partial \xi} (0,t) = \frac{\partial^2 v}{\partial \xi^2} (l,t) = \frac{\partial^3 v}{\partial \xi^3} (l,t) = 0.
\end{align}
\end{subequations}
Defining $\kappa(t) = u''(t) + (\beta \eta + \gamma) u'(t) + \alpha \eta u(t)$, we get an alternative extended system $\dot{x}(t) = A x(t) + B \kappa(t)$, where
\begin{align} \label{beam-eq-2ndext}
A = \pmat{0 & {\mathbf I} & 0 & 0 \\
-\alpha \frac{\partial^4}{\partial \xi^4} & -\beta \frac{\partial^4}{\partial \xi^4} - \gamma & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & -\alpha \eta & - \beta \eta - \gamma
}, \quad
B = \pmat{ 0 \\ - g(\xi) \\ 0 \\ 1 }.
\end{align}
The observation part can be rewritten in terms of the new state as
\begin{align*}
y (t) = C_1 w(\cdot, t) + C_2 \dot{w}(\cdot, t) = C_1 (v (\cdot, t) + g(\cdot) u(t) ) + C_2( \dot{v}(\cdot, t) + g(\cdot) u' (t) ),
\end{align*}
which leads to the output operator
$
C = \pmat{C_1 & C_2 & C_1 g & C_2 g}
$.
\begin{lem}
Consider the abstract differential equation $\dot{x}(t) = A x(t) + B \kappa(t)$ where $A$ and $B$ are defined in \eqref{beam-eq-2ndext}. Assume that $u \in C^3([0, \tau]; \R)$ for all $\tau >0$ and $(v_0, v_1) = (w_0 - gu(0), w_1 - gu'(0)) \in D(A_0)$. The extended system with $(x_0)_1 = v_0,~(x_0)_2 = v_1,~(x_0)_3 = u(0),~(x_0)_4 = u'(0)$ has a unique solution $x(t) = (v(t), \dot{v}(t), u(t), u'(t))^\top$.
\end{lem}
\begin{proof}
By denoting
$A_1 = \pmat{0 & {\mathbf I} \\ -\alpha \frac{\partial^4}{\partial \xi^4} & -\beta \frac{\partial^4}{\partial \xi^4} - \gamma}$ and $A_2 = \pmat{0 & 1\\ -\alpha \eta & -\beta \eta - \gamma}$, we rewrite $A = \pmat{A_1 & 0 \\ 0 & A_2}$. Then we apply Lemma 3.2.2 in \cite{CurZwa95} and the procedure of Theorem \ref{the-change-var} analogously to get the result.
\end{proof}
\begin{rem}
Compared with the case of parabolic equations in Section \ref{sec-para}, the difference is that we can find the extension $Eu(t)$ explicitly by solving the ODE system \eqref{eq-beam-1stode} or \eqref{eq-beam-2ndode}.
For the ODE system \eqref{eq-beam-1stode}, the characteristic equation is $\lambda^4 + \frac{\eta^2}{\alpha + \beta \eta} = 0$. Denoting $\tilde{\eta} = \sqrt[4]{\frac{\eta^2}{4(\alpha + \beta \eta)}}$, the solutions of the characteristic equation are $\lambda = \pm \tilde{\eta} \pm i \tilde{\eta}$. Thus the general solution is
\begin{align*}
g_1 (\xi) = m_1 \mathrm{e}^{\tilde{\eta} \xi} \cos (\tilde{\eta} \xi) +
m_2 \mathrm{e}^{\tilde{\eta} \xi} \sin (\tilde{\eta} \xi) +
m_3 \mathrm{e}^{-\tilde{\eta} \xi} \cos (\tilde{\eta} \xi) +
m_4 \mathrm{e}^{-\tilde{\eta} \xi} \sin (\tilde{\eta} \xi)
\end{align*}
and $g_2(\xi) = \eta g_1(\xi)$. Obviously $g_1 (\xi)$ and $g_2(\xi)$ belong to $H^2(0,l)$.
On the other hand, for the ODE \eqref{eq-beam-2ndode}, the corresponding characteristic equation is
$
\lambda^4 - \eta = 0
$
whose solutions are $\lambda =\pm \sqrt[4]{\eta}$ and $\lambda =\pm i \sqrt[4]{\eta}$. The general solution is
\begin{align*}
g(\xi) = m_1 \mathrm{e}^{\sqrt[4]{\eta} \xi} + m_2 \mathrm{e}^{-\sqrt[4]{\eta} \xi} + m_3 \cos(\sqrt[4]{\eta} \xi) + m_4 \sin(\sqrt[4]{\eta} \xi).
\end{align*}
All unknown parameters $m_1,~m_2,~m_3,~m_4$ can be determined from the boundary conditions by solving a corresponding linear algebraic system.
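As an illustration, the linear system for $m_1,\dots,m_4$ in the case of the ODE \eqref{eq-beam-2ndode} can be assembled and solved numerically. The following sketch (Python; the values $\eta = 10$ and $l = 7$ are borrowed from the numerical example below, and the code is illustrative rather than the authors' implementation) imposes the four boundary conditions on the four basis functions:

```python
import numpy as np

# g(xi) = m1*e^{mu xi} + m2*e^{-mu xi} + m3*cos(mu xi) + m4*sin(mu xi),
# with mu = eta^{1/4}, subject to g(0) = g'(0) = g'''(l) = 0, g''(l) = 1.
eta, l = 10.0, 7.0
mu = eta ** 0.25
E, e = np.exp(mu * l), np.exp(-mu * l)
c, s = np.cos(mu * l), np.sin(mu * l)

# Rows: g(0), g'(0), g'''(l), g''(l) applied to the four basis functions.
Mat = np.array([
    [1.0,        1.0,        1.0,         0.0],
    [mu,        -mu,         0.0,         mu ],
    [mu**3 * E, -mu**3 * e,  mu**3 * s,  -mu**3 * c],
    [mu**2 * E,  mu**2 * e, -mu**2 * c,  -mu**2 * s],
])
rhs = np.array([0.0, 0.0, 0.0, 1.0])
m = np.linalg.solve(Mat, rhs)

def g(xi, d=0):
    """d-th derivative of g at xi (trig derivatives via phase shifts)."""
    return (m[0] * mu**d * np.exp(mu * xi)
            + m[1] * (-mu)**d * np.exp(-mu * xi)
            + m[2] * mu**d * np.cos(mu * xi + d * np.pi / 2)
            + m[3] * mu**d * np.sin(mu * xi + d * np.pi / 2))

assert abs(g(0.0)) < 1e-6 and abs(g(0.0, 1)) < 1e-6
assert abs(g(l, 3)) < 1e-6 and abs(g(l, 2) - 1.0) < 1e-6
```

The same assembly applies to \eqref{eq-beam-1stode} with the basis functions $\mathrm{e}^{\pm\tilde{\eta}\xi}\cos(\tilde{\eta}\xi)$, $\mathrm{e}^{\pm\tilde{\eta}\xi}\sin(\tilde{\eta}\xi)$.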
\subsection{Two approaches with other types of boundary control}
The type of boundary condition below was considered earlier in \cite[Section 3]{ItoMor98} and \cite[Section V.C]{PauPhan19}; here we design a boundary control for it.
The construction of the extension operator $Eu(t)$ in Section \ref{sec-beam-1stext} can be modified to accommodate this type of boundary condition,
\begin{subequations}
\begin{align} \label{eq-beam-anotherBC}
w(0,t) = \frac{\partial w}{\partial \xi} (0,t) &= 0, \\
\alpha \frac{\partial^2 w}{\partial \xi^2} (l,t) + \beta \frac{\partial^3 w}{\partial \xi^2 \partial t} (l,t) &= u(t),\\
\alpha \frac{\partial^3 w}{\partial \xi^3} (l,t) + \beta \frac{\partial^4 w}{\partial \xi^3 \partial t} (l,t) &= 0.
\end{align}
\end{subequations}
By denoting $M(\xi, t) = \alpha \frac{\partial^2 w}{\partial \xi^2} (\xi,t) + \beta \frac{\partial^3 w}{\partial \xi^2 \partial t} (\xi,t) $, we modify the domain of ${\mathcal A}$ as
\begin{align*}
D({\mathcal A}) &= \left \{ (w_1, w_2) \in V_0 \mid \alpha \frac{d^2 w_1}{d\xi^2} + \beta \frac{d^2 w_2}{d\xi^2} \in H^2(0,l), ~~ \frac{d M}{d \xi} (l,\cdot) = 0 \right \}.
\end{align*}
The boundary operator ${\mathcal B}: X_0 \to \C$ is defined by ${\mathcal B} \pmat{w_1 \\ w_2} = \alpha \frac{d^2 w_1}{d\xi^2}(l) + \beta \frac{d^2 w_2}{d\xi^2}(l)$ with $D({\mathcal B}) = D({\mathcal A})$. \\
The domain of the operator $A_0$ is given by
\begin{align*}
D(A_0) = \left \{ (v_1, v_2) \in V_0 \mid \alpha \frac{d^2 v_1 }{d\xi^2} + \beta \frac{d^2 v_2}{d\xi^2} \in H^2(0,l), ~~ M (l, \cdot) = \frac{d M}{d \xi}(l,\cdot) = 0 \right \}.
\end{align*}
With the same choice of $A_{\mathrm{d}}$ and $A_{\mathrm{rc}}$, we get the following system of ODEs:
\begin{align*}
g_2(\xi) = \eta g_1(\xi), \quad g_1''''(\xi) = \frac{-\eta^2}{\alpha + \beta \eta} g_1(\xi)
\end{align*}
whose boundary conditions are modified as
\begin{align*}
g_1(0) = g_1'(0) = g_1'''(l) &= 0,\quad g_1''(l) = \frac{1}{\alpha + \beta \eta}, \\
g_2(0) = g_2'(0) &= 0.
\end{align*}
Again, we can choose $g_2'''(l) = 0$ and $g_2''(l) = \frac{\eta}{\alpha + \beta \eta}$.
However, the approach in Section \ref{sec-beam-2ndext} does not work for this type of boundary conditions.
\end{rem}
\subsection{A numerical example}
In this example, we consider the system \eqref{eq-beam} with $l=7$, $\alpha = 10$, $\beta = 0.01$, and $\gamma = 10^{-5}$. The observation is
\begin{align*}
y(t) = \int_2^4 \left( w(\xi,t) + \frac{\partial w}{\partial t} (\xi,t) \right) d\xi, \quad \text{i.e.} \quad C_1 = C_2 = \chi_{(2,4)}(\cdot).
\end{align*}
With the choice of parameters, the stability margin of the system is very small (approximately $10^{-3}$). In this example, we use the boundary control to improve the stability of the original system and obtain an acceptable closed-loop stability margin.
We want to track the reference signal $y_{\mbox{\scriptsize\textit{ref}}}(t) = \frac{1}{10}(t^2 - t)\sin(3t)$. The set of frequencies has only one element, $\{ 3 \}$, with $n_k = 3$.
We again use two different meshes and the Finite Element Method with cubic Hermite shape functions as in \cite[Section V.C.]{PauPhan19}.
We construct the observer-based finite-dimensional controller based on the algorithm in Section \ref{sec-DesignCon} using a coarse mesh with $N = 34$ (the corresponding size of the matrix $A^N$ is 138) and subsequent Balanced Truncation with order $r = 50$. The internal model has dimension $\dim Z_0 = 2 \times 3 = 6$.
For the controller in Section \ref{sec-beam-1stext}, we choose $\eta = 0.12$ in system \eqref{eq-beam-1stode}. The corresponding solutions $g_1$ and $g_2$ are plotted in Figure \ref{fig_extbeam1}. The parameters of the stabilization are chosen as
\begin{align*}
\alpha_1 = 0.65, \quad \alpha_2 = 0.5, \quad R_1 = 0.1, \quad R_2 = 1.
\end{align*}
For the alternative extended system in Section \ref{sec-beam-2ndext}, we choose $\eta = 10$ in \eqref{eq-beam-2ndode}. The solution $g$ of \eqref{eq-beam-2ndode} with $\eta = 10$ is plotted in Figure \ref{fig_extbeam2}.
We choose other parameters of stabilization to improve the stability margin as
\begin{align*}
\alpha_1 = 0.75, \quad \alpha_2 = 0.5, \quad R_1 = 1, \quad R_2 = 10^{-3}.
\end{align*}
\begin{figure}
\caption{Solutions of ODEs.}
\label{fig_extbeam1}
\label{fig_extbeam2}
\end{figure}
For the simulation of the original system \eqref{eq-beam}, we use another Finite Element approximation with $M = 86$; the corresponding size of the matrix $A^M$ is 346. The initial states of the original system are $v_0 (\xi) = 0.25 (\cos (5 \xi) - 2)$ and $v_1 (\xi) = 0.25 \sin (5\xi)$, and the initial state of the controller is $z_0 \in \R^{6+40}$ with all components equal to $-0.3$.
The tracking controlled signals under the two different extensions are plotted in Figure \ref{fig_trackbeam}, where the blue line corresponds to the extension \eqref{beam-eq-1stext} and the green one to the extension \eqref{beam-eq-2ndext}.
\begin{figure}
\caption{Output tracking of the boundary controlled beam equation with two different extensions.}
\label{fig_trackbeam}
\end{figure}
\section{Final remarks}
We have presented new methods for designing finite-dimensional reduced-order controllers for robust output regulation problems of boundary control systems. The controllers are constructed based on an extended system. Theorem \ref{the-RORP} shows that the controllers solve the robust output regulation problem. The construction of the extended system relies on two additional assumptions. Compared with the choice of arbitrary right-inverse operators in the literature, our construction is effective for PDE models on multi-dimensional domains. Concerning boundary disturbance signals, some examples were introduced in \cite[Section V. A.]{PauPhan19} and \cite{Guo14}. We remark that our method can be applied analogously to construct a new bounded disturbance operator; the control design here can then be extended to the case with boundary disturbance signals.
We must assume the boundedness of the output operators because the extension approach does not have an analogue for them. Moreover, the controller design method in \cite{PauPhan19} requires a bounded output operator; extending the results to unbounded $C$ is an important topic for future research.
As shown in the proofs in~\cite{PauPhan19}, the possibility for model reduction in the controller design (for a fixed $N\in \N$) is based on the smallness of the $H_\infty$-error between the transfer functions of the stable finite-dimensional systems $(A_L^r,[B_L^r,L^r],K_2^r)$ and $(A^N+L^NC^N,[B^N+L^ND^N,L^N],K_2^N)$. Our results do not provide lower bounds for a suitable value of $r$, but the results on Balanced Truncation show that for a given $r\leq N$ the error between these transfer functions is determined by the rate of decay of the Hankel singular values of the latter system. Because of this, rapid decay of the Hankel singular values of $(A^N+L^NC^N,[B^N+L^ND^N,L^N],K_2^N)$ can be used as an indicator that reduction of the controller order is possible for the considered system and its approximation $(A^N,B^N,C^N)$.
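The role of the Hankel singular values can be illustrated on a toy system. The following sketch (Python with SciPy; the matrices are illustrative and not taken from the paper) computes them from the controllability and observability Gramians, $\sigma_k = \sqrt{\lambda_k(PQ)}$:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Toy stable system (A, B, C); the decay of sigma indicates how small the
# Balanced Truncation error can be (it is bounded by twice the sum of the
# truncated Hankel singular values).
A = np.diag([-1.0, -2.0])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])

P = solve_continuous_lyapunov(A, -B @ B.T)    # A P + P A^T = -B B^T
Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # A^T Q + Q A = -C^T C

ev = np.linalg.eigvals(P @ Q).real
sigma = np.sort(np.sqrt(np.maximum(ev, 0.0)))[::-1]

assert sigma[0] > sigma[1] > 0
assert abs(sigma[0] - 0.731) < 1e-2   # dominant Hankel singular value
```

For this example the second Hankel singular value is already smaller than the first by a factor of roughly 40, so truncation to order $r = 1$ would incur only a small error.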
\section*{Acknowledgments}
The research is supported by the Academy of Finland grants number 298182 and 310489 held by L. Paunonen. D. Phan is partially supported by Universit\"at Innsbruck.
\providecommand{\href}[2]{#2}
\providecommand{\arxiv}[1]{\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\providecommand{\url}[1]{\texttt{#1}}
\providecommand{\urlprefix}{URL }
\end{document}
\begin{document}
\def \la{\lambda}
\title{On oriented graphs with minimal skew energy}
\author{Shicai Gong$^{a}$\thanks{Corresponding author. E-mail addresses:
[email protected](S. Gong); [email protected](X. Li);
[email protected](G. Xu).} \thanks{ Supported by Zhejiang Provincial
Natural Science Foundation of China(No. Y12A010049).}, Xueliang
Li$^{b}$\thanks{ Supported by National Natural Science Foundation of
China(No. 10831001).} $~$ and Guanghui Xu$^{a}$\thanks{ Supported by
National Natural Science Foundation of China(No. 11171373).}\\
\\{\small \it a. Zhejiang A $\&$ F University, Hangzhou, 311300, P.
R. China}
\\{\small \it b.Center for Combinatorics and LPMC-TJKLC, Nankai
University,}\\ {\small \it Tianjin 300071, P. R. China}
}
\date{}
\maketitle
\begin{abstract}
Let $S(G^\sigma)$ be the skew-adjacency matrix of an oriented graph
$G^\sigma$. The skew energy of $G^\sigma$ is defined as the sum of
all singular values of its skew-adjacency matrix $S(G^\sigma)$. In
this paper, we first deduce an integral formula for the skew energy
of an oriented graph. Then we determine all oriented graphs with
minimal skew energy among all connected oriented graphs on $n$
vertices with $m \ (n\le m < 2(n-2))$ arcs, which is analogous to
the conjecture for the energy of undirected graphs proposed by
Caporossi {\it et al.} [G. Caporossi, D. Cvetkovi$\acute{c}$, I.
Gutman, P. Hansen, Variable neighborhood search for extremal graphs.
2. Finding graphs with extremal energy, J. Chem. Inf. Comput. Sci.
39 (1999) 984-996.].
\vskip 0.3cm
\noindent {\bf Keywords}: oriented graph; graph energy; skew energy;
skew-adjacency matrix; skew characteristic polynomial.
\noindent {\bf AMS subject classification 2010}: 05C50, 15A18
\end{abstract}
\section{Introduction}
Let $G^\sigma$ be a digraph that arises from a simple undirected
graph $G$ with an orientation $\sigma$, which assigns to each edge
of $G$ a direction so that $G^\sigma$ becomes an \emph{oriented
graph}, or a \emph{directed graph}. Then $G$ is called the
\emph{underlying graph} of $G^\sigma$. Suppose $G^\sigma$ has vertex set
$V(G^\sigma)=\{v_1,v_2,\cdots,v_n\}.$ Denote by $(u, v)$ an arc of
$G^\sigma$ with tail $u$ and head $v$. The {\it skew-adjacency
matrix} related to $G^\sigma$ is the $n \times n$ matrix
$S(G^\sigma) = [s_{ij} ],$ where the $(i,j)$ entry satisfies:
$$s_{ij}=\left \{ \begin{array}{ll}
1, & {\rm if \mbox{ } (v_i, v_j)\in G^\sigma \mbox{ }};\\
-1, & {\rm if \mbox{ } (v_j, v_i)\in G^\sigma \mbox{ } };\\
0, & {\rm otherwise. \mbox{ }}
\end{array}\right.$$
The {\it skew energy} of an oriented graph $G^\sigma$, introduced by
Adiga, Balakrishnan and So in \cite{ad} and denoted by
$\mathcal{E}_S(G^\sigma)$, is defined as the sum of all singular
values of $S(G^\sigma)$. Because the skew-adjacency matrix
$S(G^\sigma)$ is skew-symmetric, the eigenvalues $\{\la_1, \la_2,
\cdots, \la_n\}$ of $S(G^\sigma)$ are all purely imaginary numbers.
Consequently, the skew energy $\mathcal{E}_S(G^\sigma)$ is the sum
of the absolute values of its eigenvalues, {\it
i.e.,}$$\mathcal{E}_S(G^\sigma)=\sum_{i=1}^n|\la_i|,$$which has the
same expression as that of the energy of an undirected graph with
respect to its adjacency matrix; see e.g. \cite{iv}.
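As a small illustration of the definition, the following sketch (Python; the star orientation with all arcs pointing away from the center is one possible choice, which is immaterial for trees by the result of \cite{ad} recalled below) computes the skew energy of an oriented star on $4$ vertices:

```python
import numpy as np

# Oriented star on 4 vertices: arcs (v1, v2), (v1, v3), (v1, v4).
# Skew energy = sum of singular values of the skew-adjacency matrix.
n = 4
S = np.zeros((n, n))
for j in range(1, n):
    S[0, j], S[j, 0] = 1.0, -1.0

skew_energy = np.linalg.svd(S, compute_uv=False).sum()

# For a tree the skew energy equals the energy of the underlying graph;
# the star K_{1,3} has adjacency eigenvalues ±sqrt(3), 0, 0.
assert abs(skew_energy - 2 * np.sqrt(3)) < 1e-9
```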
The work on the energy of a graph can be traced back to the 1970s
\cite{g1}, when Gutman investigated the energy with respect to the
adjacency matrix of an undirected graph, which has a still older
chemical origin; see e.g. \cite{cou}. Then much attention has been
devoted to
the energy of the adjacency matrix of a graph; see e.g.
\cite{aa,ago,bs,ds,gkm,gkmz,iv1,yp3,ls,lz1,zz}, and the references
cited therein. For undirected graphs, Caporossi,
Cvetkovi$\acute{c}$, Gutman and Hansen \cite{cc} proposed a
conjecture for the minimum energy as follows.
{\bf Conjecture 1.} Let $G$ be the graph with minimum energy among
all connected graphs with $n\ge 6$ vertices and $m \ (n\le m \le
2(n-2))$ edges. Then $G$ is $O_{n,m}$ if $m\le n+\lfloor
\frac{n-7}{2}\rfloor$; and $B_{n,m}$ otherwise, where $O_{n,m}$ and
$B_{n,m}$ are respectively the underlying graphs of the oriented
graphs $O^+_{n,m}$ and $B^+_{n,m}$ given in Fig. 1.1.
This conjecture was proved to be true for $m=n-1, 2(n-2)$ by
Caporossi et al. (\cite{cc}, Theorem 1), and $m=n$ by Hou
\cite{yp1}. In \cite{lz1}, Li, Zhang and Wang confirmed this
conjecture for bipartite graphs. Conjecture 1 has not yet been
solved completely.
Recently, in analogy to the energy of the adjacency matrix, a few
other versions of graph energy were introduced in the mathematical
literature, such as Laplacian energy \cite{gz}, signless Laplacian
energy \cite{gr} and skew energy \cite{ad}.
In \cite{ad}, Adiga {\it et al.} obtained the skew energies of
directed cycles under different orientations and showed that the
skew energy of a directed tree is independent of its orientation
and equals the energy of its underlying tree. Naturally, the
following question is interesting:
\noindent {\bf Question:} Denote by $M$ a class of oriented graphs.
Which oriented graphs attain the extremal skew energy among all
oriented graphs in $M$?
Hou et al. \cite{hou1} determined the oriented unicyclic graphs with
the maximal and minimal skew energies. Zhu \cite{Zhu} determined the
oriented unicyclic graphs with the first $\lfloor \frac {n-9}
2\rfloor$ largest skew energies. Shen et al. \cite{Shen} determined
the bicyclic graphs with the maximal and minimal skew energies. Gong and
Xu \cite{GX} determined the 3-regular graphs with the optimum skew
energy, and Tian \cite{Tian} determined the hypercubes with the
optimum skew energy. In the following we will study the minimal skew
energy graphs of order $n$ and size $m$.
First, we need some notation. Denote by $K_n$, $S_n$ and $C_n$
the complete undirected graph, the undirected star and the
undirected cycle on $n$ vertices, respectively. Let $ O^+_{n,m}$ be
the oriented graph on $n$ vertices which is obtained from the
oriented star $S^\sigma_n$
by adding $m-n+1$ arcs such that all those arcs have a common vertex; see Fig. 1.1, where $v_1$ is the tail of each arc
incident to it and $v_2$ is the head of each arc incident to it, and
$ B^+_{n,m}$, the oriented graph obtained from $ O^+_{n,m+1}$ by
deleting the arc $(v_1,v_2)$. Denote by $ O_{n,m}$ and $ B_{n,m}$
the underlying graphs of $ O^+_{n,m}$ and $ B^+_{n,m}$,
respectively. Notice that both $ O^+_{n,m}$ and $ B^+_{n,m}$
contain $n$ vertices and $m$ arcs.
\setlength{\unitlength}{0.8mm}
\begin{center}
\begin{picture}(140,31)
\put(10,20){\circle*{2}}
\put(10,14){\circle*{2}} \put(10,0){\circle*{2}}
\put(10,6){$\vdots$}
\put(20,10){\circle*{2}} \put(18,5){$v_1$}
\put(19,11){\vector(-1,1){8}} \put(19,10){\vector(-2,1){8}}
\put(19,9){\vector(-1,-1){8}}
\put(40,10){\circle*{2}} \put(38,5){$v_2$}
\put(17,-15){$O^+_{n,m}$}
\put(21,10){\vector(1,0){17}} \put(21,10){\line(1,0){18}}
\put(30,22){\circle*{2}} \put(30,14.5){\circle*{2}}
\put(29.5,16){$\vdots$}
\put(21,10.5){\vector(2,1){8}} \put(21,10.5){\vector(3,4){8}}
\put(39,10.5){\line(-2,1){8}} \put(39,10.5){\line(-3,4){8}}
\put(30,15){\vector(2,-1){7}} \put(30,22.5){\vector(3,-4){7}}
\put(30,1){\circle*{2}}
\put(21,9.5){\vector(1,-1){7}}\put(21,9.5){\line(1,-1){9}}
\put(30,.5){\vector(1,1){7}}\put(39,9.5){\line(-1,-1){9}}
\put(90,20){\circle*{2}}
\put(90,14){\circle*{2}} \put(90,0){\circle*{2}}
\put(90,6){$\vdots$}
\put(100,10){\circle*{2}} \put(98,5){$v_1$}
\put(99,11){\vector(-1,1){8}} \put(99,10){\vector(-2,1){8}}
\put(99,9){\vector(-1,-1){8}}
\put(120,10){\circle*{2}} \put(118,5){$v_2$}
\put(97,-15){$B^+_{n,m}$}
\put(110,22){\circle*{2}}
\put(110,14.5){\circle*{2}}
\put(109.5,16){$\vdots$}
\put(101,10.5){\vector(2,1){8}} \put(101,10.5){\vector(3,4){8}}
\put(119,10.5){\line(-2,1){8}} \put(119,10.5){\line(-3,4){8}}
\put(110,15){\vector(2,-1){7}} \put(110,22.5){\vector(3,-4){7}}
\put(110,1){\circle*{2}}
\put(101,9.5){\vector(1,-1){7}}\put(101,9.5){\line(1,-1){9}}
\put(110,.5){\vector(1,1){7}}\put(119,9.5){\line(-1,-1){9}}
\put(16,-26) {Fig. 1.1. Two oriented graphs $O^+_{n,m}$ and
$B^+_{n,m}$. }
\end{picture}
\end{center}
In this paper, we first deduce an integral formula for the skew
energy of an oriented graph. Then we study the question above and
determine all oriented graphs with minimal skew energy among all
connected oriented graphs on $n$ vertices with $m \ (n\le m <
2(n-2))$ arcs. Interestingly, our result is analogous to Conjecture
1.
\begin{theorem} \label{01} Let
$G^\sigma$ be an oriented graph with minimal skew energy among all
oriented graphs with $n$ vertices and $m \ (n\le m < 2(n-2))$ arcs.
Then, up to isomorphism, $G^\sigma$ is\\ $(1)$ $O^+_{n,m}$ if
$m<\frac{3n-5}{2}$; \\ $(2)$ either $B^+_{n,m}$ or $O^+_{n,m}$ if
$m=\frac{3n-5}{2}$; and \\ $(3)$ $B^+_{n,m}$ otherwise.
\end{theorem}
\vskip 0.5cm
\section{Integral formula for the skew energy}
In this section, based on the formula established by Adiga {\it et
al.} \cite{ad}, we deduce an integral formula for the skew energy of
an oriented graph, which is an analogy to the Coulson integral
formula for the energy of an undirected graph. Firstly, we introduce
some notations and preliminary results.
An even cycle $C$ in an oriented graph $G^\sigma$ is called
\emph{oddly oriented} if for either choice of direction of traversal
around $C$, the number of edges of $C$ directed in the direction of
the traversal is odd. Since $C$ is even, this is clearly independent
of the initial choice of direction of traversal. Otherwise, such an
even cycle $C$ is called \emph{evenly oriented}. (We do not define
this notion for odd cycles, since for them the parity depends on the
initial choice of direction of traversal.)
A ``\emph{basic oriented graph}'' is an oriented graph whose
components are even cycles and/or complete oriented graphs with
exactly two vertices.
Denote by $\phi(G^\sigma;x)$ the \emph{skew characteristic
polynomial} of an oriented graph $G^\sigma$, defined as
\begin{eqnarray*}
\phi(G^\sigma;x)=\det(xI_n-S(G^\sigma))=\sum_{i=0}^na_{i}(G^\sigma)x^{n-i},
\end{eqnarray*}
where $I_n$ denotes the identity matrix of order $n$. The following
result, which determines all coefficients of the skew characteristic
polynomial of an oriented graph in terms of its basic oriented
subgraphs, is a cornerstone of our discussion below; see
\cite[Theorem 2.4]{hou} for an independent version.
\begin{lemma} \em{\cite[Corollary 2.3]{gx}} \label{0001}
Let $G^\sigma$ be an oriented graph on $n$ vertices, and let the
skew characteristic polynomial of $G^\sigma$ be
$$\phi(G^\sigma,\la)=\sum_{i=0}^n(-1)^ia_i
\la^{n-i}=\la^n-a_1\la^{n-1}+a_2\la^{n-2}+\cdots+(-1)^{n-1}a_{n-1}\la+(-1)^na_n.$$
Then $a_i=0$ if $i$ is odd; and
$$a_i=\sum_{\mathscr{H}}(-1)^{c^+}2^c \quad \mbox{if } i \mbox{ is
even},$$ where the summation is over
all basic oriented subgraphs $\mathscr{H}$ of $G^\sigma$ having $i$
vertices and $c^{+}$ and $c$ are respectively the number of evenly
oriented even cycles and even cycles contained in $\mathscr{H}$.
\end{lemma}
Let $G=(V(G),E(G))$ be a graph, directed or not, on $n$ vertices.
Then denote by $\Delta(G)$ the maximum degree of $G$ and set
$\Delta(G^\sigma)=\Delta(G)$. An \emph{$r$-matching} in a graph $G$
is a subset of $r$ edges such that every vertex of $V(G)$ is
incident with at most one edge in it. Denote by $M(G,r)$ the number
of all $r$-matchings in $G$ and set $M(G,0)=1$.
Denote by $q(G)$ the number of quadrangles in an undirected graph
$G$. Then, as a consequence of Lemma \ref{0001}, we have
\begin{theorem} \label{013} Let $G^\sigma$ be an
oriented graph containing $n$ vertices and $m$ arcs. Suppose
$$\phi(G^\sigma,\la)=\sum_{i=0}^{n}(-1)^ia_{i}(G^\sigma) \la^{n-i}.$$
Then $a_0(G^\sigma)=1$, $a_2(G^\sigma)=m$ and $a_4(G^\sigma)\ge
M(G,2)-2q(G)$ with equality if and only if all oriented quadrangles
of $G^\sigma$ are evenly oriented.
\end{theorem}
\noindent {\bf Proof.} The result follows from Lemma \ref{0001} and
the fact that each arc corresponds to a basic oriented graph on $2$
vertices, and each basic oriented graph on $4$ vertices is
either a $2$-matching or a quadrangle.
$\blacksquare$
Furthermore, as is well known, the eigenvalues of a real
skew-symmetric matrix are all purely imaginary and
occur in conjugate pairs. Hence, Lemma \ref{0001} can be
strengthened as follows, which will be convenient for our
discussion below.
\begin{lemma} \label{2} Let $G^\sigma$ be an
oriented graph of order $n$. Then each coefficient of the skew
characteristic polynomial $$\phi(G^\sigma,\la)=\sum_{i=0}^{\lfloor
\frac{n}{2} \rfloor}a_{2i}(G^\sigma) \la^{n-2i}$$ satisfies
$a_{2i}(G^\sigma)\ge 0$ for each $i(0\le i\le \lfloor \frac{n}{2}
\rfloor)$.
\end{lemma}
{\bf Proof.} Let $\la_1,\la_2,\cdots,\la_n $ be the eigenvalues of
the skew adjacency matrix $S(G^\sigma)$ of $G^\sigma$. Since
$\la_1,\la_2,\cdots,\la_n $ are all purely imaginary and
occur in conjugate pairs, we may suppose, without loss of
generality, that there exists an integer $m\ (\le \lfloor
\frac{n}{2} \rfloor)$ such that
$$\la_t=-\la_{n-t+1}=p_t i \ \ \mbox{for} \ \ t=1,2,\cdots,m,$$ and all other
eigenvalues are zero, where each $p_t$ is a positive real number and
$i^2=-1.$ Then we have \begin{equation*}
\begin{array}{lll}
\phi(G^\sigma,\la)
&=&\prod_{t=1}^n(\la-\la_t)\\&=&\la^{n-2m}\prod_{t=1}^m(\la^2+p^2_t),
\end{array}
\end{equation*}
from which the result follows.
$\blacksquare$
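As a concrete illustration of Lemmas \ref{0001} and \ref{2}, one can compare the two orientations of the quadrangle $C_4$ (a small numerical sketch, assuming NumPy; the helper names are ours): the cyclically oriented copy is evenly oriented, so by Lemma \ref{0001} its contribution $-2$ cancels the two $2$-matchings and $a_4=0$, while reversing one arc gives an oddly oriented $C_4$ with $a_4=2+2=4$.

```python
import numpy as np

def skew_char_coeffs(S):
    # coefficients a_j of phi(x) = det(xI - S) = sum_j a_j x^{n-j}
    return np.real(np.poly(S))

def oriented_c4(reverse_last=False):
    # C4 with arcs 0->1->2->3; the last arc is 3->0 (cyclic, evenly
    # oriented) or, if reverse_last, 0->3 (oddly oriented)
    S = np.zeros((4, 4))
    arcs = [(0, 1), (1, 2), (2, 3), (0, 3) if reverse_last else (3, 0)]
    for u, v in arcs:
        S[u, v], S[v, u] = 1.0, -1.0
    return S
```

In both cases the odd-index coefficients vanish and all even-index coefficients are nonnegative, as guaranteed by Lemma \ref{2}.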
For an oriented graph $G^\sigma$ on $n$ vertices, an integral
formula for the skew energy in terms of the skew characteristic
polynomial $\phi(G^\sigma, \la)$ and its derivative is given by
\cite{ad}
$$ \mathscr{E}_s(G^\sigma) =\frac{1}{\pi}\int_{-\infty}^{+\infty}\left[n+\la \frac{\phi'(G^\sigma, -\la)}{\phi(G^\sigma,
-\la)}\right]d\la.\eqno{(2.1)}$$ However, it is by no means easy to
calculate the skew energy of an oriented graph from the above
integral. Hence, it is important to establish a simpler
formula.
Applying to (2.1) the fact from Lemma \ref{0001} that $a_i=0$ for
each odd $i$, and replacing $\la$ by $-\la$, we have
$$ \mathscr{E}_s(G^\sigma)
=\frac{1}{\pi}\int_{-\infty}^{+\infty}\left[n-\la
\frac{\phi'(G^\sigma, \la)}{\phi(G^\sigma, \la)}\right]d\la.$$
Meanwhile, note that
$$\frac{\phi'(G^\sigma,
\la)}{\phi(G^\sigma, \la)}d\la=d\ln \phi(G^\sigma, \la).$$Then we
have
\begin{equation*}
\begin{array}{lll}
\mathscr{E}_s(G^\sigma)
&=&\frac{1}{\pi}\int_{-\infty}^{+\infty}\left[n-\la
\frac{\phi'(G^\sigma, \la)}{\phi(G^\sigma,
\la)}\right]d\la \\
&=&\frac{1}{\pi}\int_{-\infty}^{+\infty}\left[n-\la (\frac{d}{d\la}) \ln\phi(G^\sigma,
\la)\right]d\la.
\end{array}\eqno{(2.2)}
\end{equation*}
Therefore, we have
\begin{theorem} \label{04} Let $G^\sigma$ be an
oriented graph with order $n$. Then $$
\mathscr{E}_s(G^\sigma)=\frac{1}{\pi}\int_{-\infty}^{+\infty}\la^{-2}\ln
\psi(G^\sigma,\la)d\la,\eqno{(2.3)}$$where
$$\psi(G^\sigma,\la)=\sum_{i=0}^{\lfloor \frac{n}{2} \rfloor}a_{2i}(G^\sigma) \la^{2i}$$ and $a_{2i}(G^\sigma)$ denotes
the coefficient of $\la^{n-2i}$ in the skew characteristic
polynomial $\phi(G^\sigma,\la)$.
\end{theorem}
{\bf Proof.} Let $G^{\sigma_1}_1$ and $G^{\sigma_2}_2$ be
oriented graphs of order $n$ ($G_1$ may equal $G_2$). Then,
applying (2.2), we have
\begin{eqnarray*}
\mathscr{E}_s(G^{\sigma_1}_1)-\mathscr{E}_s(G^{\sigma_2}_2) =-\frac{1}{\pi}\int_{-\infty}^{+\infty}\la
(\frac{d}{d\la}) \ln\left[\frac{\phi(G^{\sigma_1}_1,
\la)}{\phi(G^{\sigma_2}_2, \la)}\right]d\la.
\end{eqnarray*}
Integrating by parts, we have
$$\mathscr{E}_s(G^{\sigma_1}_1)-\mathscr{E}_s(G^{\sigma_2}_2)
=-\frac{\la}{\pi}\ln[ \frac{\phi(G^{\sigma_1}_1,
\la)}{\phi(G^{\sigma_2}_2,
\la)}]|_{-\infty}^{+\infty}+\frac{1}{\pi}\int_{-\infty}^{+\infty}\ln\left[
\frac{\phi(G^{\sigma_1}_1, \la)}{\phi(G^{\sigma_2}_2,
\la)}\right]d\la.
$$ Notice that
$$\frac{\la}{\pi}\ln\left[ \frac{\phi(G^{\sigma_1}_1, \la)}{\phi(G^{\sigma_2}_2,
\la)}\right]\mid_{-\infty}^{+\infty}=0.$$ Hence
$$ \mathscr{E}_s(G^{\sigma_1}_1)-\mathscr{E}_s(G^{\sigma_2}_2)=\frac{1}{\pi}\int_{-\infty}^{+\infty}\ln\left[
\frac{\phi(G^{\sigma_1}_1, \la)}{\phi(G^{\sigma_2}_2,
\la)}\right]d\la.$$ Suppose now that $G^{\sigma_2}_2$ is the null
oriented graph, an oriented graph containing $n$ isolated vertices.
Then $\phi(G^{\sigma_2}_2, \la)=\la^n$ and thus
$\mathscr{E}_s(G^{\sigma_2}_2)=0$. After an appropriate change of
variables we can derive
$$\mathscr{E}_s(G^{\sigma_1}_1)=\frac{1}{\pi}\int_{-\infty}^{+\infty}\la^{-2}\ln
\psi(G^{\sigma_1}_1,\la)d\la.$$ Then the result follows.
$\blacksquare$
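Formula (2.3) is easy to check numerically (a sketch, assuming NumPy; the helper names are ours). For the oriented path on three vertices, $\phi(\la)=\la^3+2\la$, so $\psi(\la)=1+2\la^2$, and (2.3) evaluates to $2\sqrt{2}$, the sum of the absolute values of the eigenvalues $\pm\sqrt{2}\,i$ and $0$ of the skew adjacency matrix.

```python
import numpy as np

def skew_adjacency(n, arcs):
    # S[u, v] = 1 and S[v, u] = -1 for each arc (u, v)
    S = np.zeros((n, n))
    for u, v in arcs:
        S[u, v], S[v, u] = 1.0, -1.0
    return S

def skew_energy_eigen(S):
    # skew energy = sum of absolute values of the (purely imaginary) eigenvalues
    return float(np.abs(np.linalg.eigvals(S)).sum())

def skew_energy_integral(S, N=200_000):
    # formula (2.3): (1/pi) * int_{-inf}^{inf} lam^{-2} ln psi(lam) d lam,
    # evaluated by a midpoint rule after substituting lam = tan(t)
    a = np.real(np.poly(S))            # a[j] = coefficient of x^{n-j} in det(xI - S)
    a[np.abs(a) < 1e-9] = 0.0          # odd coefficients vanish
    t = np.linspace(-np.pi / 2, np.pi / 2, N + 1)
    mid = (t[:-1] + t[1:]) / 2         # midpoints avoid the endpoints t = +-pi/2
    lam = np.tan(mid)
    psi = np.polyval(a[::-1], lam)     # psi(lam) = sum_j a_j lam^j
    # d lam = sec^2(t) dt, hence lam^{-2} d lam = dt / sin^2(t)
    integrand = np.log(psi) / np.sin(mid) ** 2
    return float(integrand.sum() * (np.pi / N) / np.pi)
```

The midpoint rule converges here despite the logarithmic growth of the integrand near $t=\pm\pi/2$, since the singularity is integrable.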
\section{Proof of Theorem \ref{01}}
From Theorem \ref{04}, for an oriented graph $G^\sigma$ on $n$
vertices, the skew energy $\mathscr{E}_s(G^\sigma)$ is a strictly
monotonically increasing function of the coefficients
$a_{2k}(G^\sigma)\ (k = 0, 1, \cdots, \lfloor\frac{n}{2}\rfloor)$,
since for each $i$ the coefficient of $\la^{n-i}$ in the
characteristic polynomial $\phi(G^\sigma,\la)$, as well as in
$\psi(G^\sigma,\la)$, satisfies $a_i(G^\sigma)\ge 0$ by Lemma
\ref{2}. Thus, just as when comparing two undirected graphs with
respect to their energies, we define the quasi-ordering relation
``$\preceq$'' on oriented graphs with respect to their skew
energies as follows.
Let $G^{\sigma_1}_1$ and $G^{\sigma_2}_2$ be two oriented graphs of
order $n$. ($G_1$ is not necessarily different from $G_2$.) If
$a_{2i}(G^{\sigma_1}_1)\le a_{2i}(G^{\sigma_2}_2)$ for all $i$ with
$0\le i \le \lfloor\frac{n}{2}\rfloor$, then we write that
$G^{\sigma_1}_1 \preceq G^{\sigma_2}_2$.
Furthermore, if $G^{\sigma_1}_1 \preceq G^{\sigma_2}_2$ and there
exists at least one index $i$ such that $a_{2i}(G^{\sigma_1}_1)<
a_{2i}(G^{\sigma_2}_2)$, then we write that $G^{\sigma_1}_1 \prec
G^{\sigma_2}_2$. If $a_{2i}(G^{\sigma_1}_1)= a_{2i}(G^{\sigma_2}_2)$
for all $i$, we write $G^{\sigma_1}_1 \sim G^{\sigma_2}_2$. Note
that there are non-isomorphic oriented graphs $G^{\sigma_1}_1$ and
$G^{\sigma_2}_2$ such that $G^{\sigma_1}_1 \sim G^{\sigma_2}_2$,
which implies that ``$\preceq$'' is not a partial ordering in
general.
According to the integral formula (2.3), we have, for two oriented
graphs $D_1$ and $D_2$ of order $n$, that $$D_1 \preceq
D_2\Longrightarrow \mathscr{E}_s(D_1)\le \mathscr{E}_s(D_2)$$ and
$$D_1 \prec D_2\Longrightarrow \mathscr{E}_s(D_1)<
\mathscr{E}_s(D_2).\eqno{(3.1)}$$
In the following, by studying the relation ``$\succeq$'', we
compare the skew energies of pairs of oriented graphs and then
complete the proof of Theorem \ref{01}.
First, by a direct calculation we have
$$\phi(O^+_{n,m},\la)=\la^n+m\la^{n-2}+(m-n+1)(2n-m-3)\la^{n-4},\eqno{(3.2)}$$and
$$\phi(B^+_{n,m},\la)=\la^n+m\la^{n-2}+(m-n+2)(2n-m-4)\la^{n-4}.\eqno{(3.3)}$$
Denote by $G^\sigma(n,m)$ and $G(n,m)$ the sets of all connected
oriented graphs and undirected graphs with $n$ vertices and $m$
edges, respectively. The following results on undirected graphs are
needed.
\begin{lemma} \label{05} Let $n\ge 5$ and $G\in G(n,m)$ be an arbitrary
connected undirected graph containing $n$ vertices and $m \ (n\le m
<2(n-2))$ edges. Then $q(G)\le \left(
\begin{array}{c}
m-n+2 \\
2
\end{array}\right)
$, where $q(G)$ denotes the number of quadrangles contained in $G$.
\end{lemma}
{\bf Proof.} We prove this result by induction on $m$.
The result is obvious for $m= n$. So we suppose that $n< m <2(n-2)$
and the result is true for smaller $m$.
Let $e$ be an edge of $G$ and $q_G(e)$ denote the number of
quadrangles containing the edge $e$. Suppose $e=(u,v)$. Let $U$ be
the set of neighbors of $u$ except $v$, and $V$ the set of neighbors
of $v$ except $u$. Then there are exactly $q_G(e)$ edges between $U$
and $V$. Let $X$ be the subset of $U$ consisting of the vertices
incident to some of these $q_G(e)$ edges, and let $Y$ be the
subset of $V$ defined similarly. Assume $|X| =x$ and $|Y|=
y$. Let $G_0$ be the subgraph of $G$ induced by $V(G_0)=\{u\} \cup \{v\}
\cup X \cup Y$. Then $G_0$ contains at least $q_G(e)+x+y+1$ edges and
exactly $x + y +2$ vertices. In order for the remaining
vertices to connect to $G_0$, the number of remaining edges must be
at least the number of remaining vertices. Thus
$$m-(q_G(e)+x+y+1)\ge n-(x+y+2).$$ That is, $$q_G(e)\le m-n+1.$$
By the induction hypothesis, $q(G-e)\le \left(\begin{array}{c}
(m-1)-n+2 \\
2
\end{array}\right)=\left(\begin{array}{c}
m-n+1 \\
2
\end{array}\right).$
Then we have
$$q(G)=q_G(e)+q(G-e)\le m-n+1+\left(\begin{array}{c}
m-n+1 \\
2
\end{array}\right)=\left(\begin{array}{c}
m-n+2 \\
2
\end{array}\right).$$
Hence, the result follows.
$\blacksquare$
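The bound of Lemma \ref{05} can be checked by brute force on small cases (a sketch in Python; the random-graph generator and helper names are ours). Quadrangles are counted by examining the three distinct cyclic orderings of every $4$-subset of vertices.

```python
import random
from itertools import combinations
from math import comb

def quadrangle_count(n, edges):
    # count quadrangles (4-cycles) by brute force: each 4-subset of
    # vertices admits exactly three distinct cyclic orderings
    E = {frozenset(e) for e in edges}
    adj = lambda a, b: frozenset((a, b)) in E
    q = 0
    for a, b, c, d in combinations(range(n), 4):
        for w, x, y, z in ((a, b, c, d), (a, b, d, c), (a, c, b, d)):
            if adj(w, x) and adj(x, y) and adj(y, z) and adj(z, w):
                q += 1
    return q

def random_connected_graph(n, m):
    # random spanning tree plus m - (n - 1) extra edges, so the graph is connected
    edges = {frozenset((random.randrange(v), v)) for v in range(1, n)}
    rest = [frozenset(e) for e in combinations(range(n), 2) if frozenset(e) not in edges]
    random.shuffle(rest)
    edges.update(rest[: m - (n - 1)])
    return [tuple(e) for e in edges]
```

For $m=n$ the bound is immediate, since a connected unicyclic graph contains at most one quadrangle.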
By a similar method, we can show that
\begin{lemma} \label{015} Let $n\ge 5$ and $G\in G(n,m)$ be an arbitrary undirected graph
containing $n$ vertices and $m \ (n\le m <2(n-2))$ edges. Suppose
$\Delta(G)=n-1$. Then $$q(G)\le \left(
\begin{array}{c}
m-n+1 \\
2
\end{array}\right).$$
\end{lemma}
\begin{lemma} {\em \cite[A part of Theorem 2.6]{gx}}\label{016} Let $G^{\sigma}$ be
an oriented graph with an arc $e=(u,v)$. Suppose that $e$ is not
contained in any even cycle. Then
$$\phi(G^{\sigma}, \la)=\phi(G^{\sigma} -e, \la)+ \phi(G^{\sigma}
- u-v,\la).$$
\end{lemma}
As a consequence of Lemma \ref{016}, we have the following result.
\begin{lemma}\label{017} Let $G^{\sigma} $ be an oriented graph on $n$
vertices and $(u,v)$ a pendant arc of $G^{\sigma} $ with pendant
vertex $v$. Suppose $\phi(G^{\sigma},
\la)=\sum_{i=0}^na_i(G^{\sigma})\la^{n-i}.$ Then
$$a_{i}(G^{\sigma})=a_{i}(G^{\sigma}-v)+a_{i-2}(G^{\sigma}-v-u).$$
\end{lemma}
Based on the preliminary results above, we have the following two
results.
\begin{lemma} \label{06} Let $n\ge 5$ and $G^\sigma\in G^\sigma(n,m)$ be
an oriented graph with maximum degree $n-1$. Suppose that $ n \le m
< 2(n-2)$ and $G^\sigma \nsim O^+_{n,m}$. Then $G^\sigma \succ
O^+_{n,m}$.
\end{lemma}
{\bf Proof.} By Theorem \ref{013}, it suffices to prove that
$a_4(G^\sigma)> a_4(O^+_{n,m})$. Suppose that $v$ is the vertex of
degree $n-1$. For convenience, color all arcs incident to $v$
white and all other arcs black. Then there are
$n-1$ white arcs and $m-n+1$ black arcs. We estimate the number
of $2$-matchings in $G^\sigma$ as follows. Since all white
arcs are incident to $v$, no pair of white arcs can form a
$2$-matching of $G^\sigma$. Since $d(v)=n-1$ and each black arc is
adjacent to exactly two white arcs, each black arc together with any
white arc other than these two forms a $2$-matching of $G^\sigma$;
that is, there are $(m-n+1)(n-3)$ black-white $2$-matchings.
Moreover, since $G^\sigma \nsim O^+_{n,m}$, the graph $G^\sigma-v$ does
not contain the directed star $S_{m-n+2}$ as a subgraph, and thus
either there is at least one $2$-matching formed by a pair of disjoint
black arcs, or $G^\sigma$ is an orientation of the following
graph $F$. \setlength{\unitlength}{1mm}
\begin{center}
\begin{picture}(50,20)
\put(10,20){\circle*{2}} \put(10,14){\circle*{2}}
\put(10,0){\circle*{2}} \put(10,6){$\vdots$}
\put(20,10){\circle*{2}} \put(18,5){$v_1$}
\put(19,11){\line(-1,1){9}} \put(19,10){\line(-2,1){9}}
\put(19,9){\line(-1,-1){9}} \put(40,10){\circle*{2}}
\put(38,5){$v_2$}
\put(21,10){\line(1,0){17}}\put(21,10){\line(1,0){18}}
\put(30,18){\circle*{2}} \put(21,10.5){\line(4,3){9}}
\put(39,10.5){\line(-4,3){9}} \put(30,1){\circle*{2}}
\put(21,9.5){\line(1,-1){9}} \put(30,.5){\line(1,1){9}}
\put(30,.5){\line(0,1){18}} \put(8,-9){Fig. 1.2. The graph $F$. }
\end{picture}
\end{center}
In the first case, the number of
$2$-matchings in $G^\sigma$ satisfies
$$M(G^\sigma,2)\ge (m-n+1)(n-3)+1.$$
From Lemma \ref{015}, $q(G^\sigma)\le \left(
\begin{array}{c}
m-n+1 \\
2
\end{array}\right),$ and then by applying
Theorem \ref{013} again, we have
\begin{eqnarray*} a_4(G^\sigma)&\ge&
M(G^\sigma,2)-2q(G^\sigma)\\&\ge& (m-n+1)(n-3)+1- 2\left(
\begin{array}{c}
m-n+1 \\
2
\end{array}\right)\\&=&a_4(O^+_{n,m})+1
\end{eqnarray*}
by Eq.(3.2).
In the second case, clearly $m=n+2$ and $q(F)=3$, but the three quadrangles
cannot all be evenly oriented. Then
\begin{eqnarray*} a_4(F)\ge
M(F,2)-2q(F)\ge (m-n+1)(n-3)-4>a_4(O^+_{n,n+2}).
\end{eqnarray*}
The result thus follows.
$\blacksquare$
\begin{lemma} \label{07} Let $n\ge 5$ and $G^\sigma\in G^\sigma(n,m)$
be an oriented graph with $n\le m < 2(n-2)$. Suppose that
$\Delta(G^\sigma)\le n-2$ and $G^\sigma \nsim B^+_{n,m}$. Then
$G^\sigma \succ B^+_{n,m}$.
\end{lemma}
{\bf Proof.} By Theorem \ref{013} again, it suffices to prove that
$a_4(G^\sigma)> a_4(B^+_{n,m})$. We apply induction on $n$ to prove
it. By a direct calculation, the result follows if $n=5$, since then
$5=m<2(5-2)=6$ and there exist exactly four graphs in
$G^\sigma(5,5)$, namely, the oriented cycle $C_3$ together with two
pendant arcs attached to two different vertices of the $C_3$, the
oddly oriented cycle $C_4$ together with a pendant arc, $B^+_{5,5}$
and the oriented cycle $C_5$. Suppose now that $n\ge 6$ and the
result is true for smaller $n$.
\noindent {\bf Case 1.} There is a pendant arc $(u,v)$ in $G^\sigma$
with pendant vertex $v$.
By Lemma \ref{017} we have
$$a_4(G^\sigma)=a_4(G^\sigma-v)+a_2(G^\sigma-v-u)=a_4(G^\sigma-v)+e(G^\sigma-v-u).$$
Noticing that $\Delta(G^\sigma)\le n-2$, we have $e(G^\sigma-v-u)\ge
m-\Delta(G^\sigma)\ge m-n+2$.
By induction hypothesis, $a_4(G^\sigma-v)\ge a_4(B^+_{n-1,m-1})$
with equality if and only if $G^\sigma-v= B^+_{n-1,m-1}$. Then
\begin{equation*}
\begin{array}{lll}
a_4(G^\sigma)&=&a_4(G^\sigma-v)+a_2(G^\sigma-v-u)\\&\ge&
a_4(B^+_{n-1,m-1})+m-n+2\\&=& a_4(B^+_{n-1,m-1})+e(S_{m-n+1})\\ &=&
a_4(B^+_{n,m})
\end{array}
\end{equation*}
with equality if and only if $G^\sigma= B^+_{n,m}$. The result thus
follows.
\noindent {\bf Case 2.} There are no pendant vertices in $G^\sigma$.
Let
$$(d)_{G^\sigma}=(d_1,d_2,\cdots,d_i,d_{i+1},\cdots,d_n)$$
be the non-increasing degree sequence of $G^\sigma$. We label the
vertices of $G^\sigma$ corresponding to the degree sequence
$(d)_{G^\sigma}$ as $v_1,v_2,\cdots,v_n$ such that
$d_{G^\sigma}(v_i)=d_i$ for each $i$. Assume $d_1<n-2$. Then there
exists a vertex $v_k$ that is not adjacent to $v_1$, but is adjacent
to one neighbor, say $v_i$, of $v_1$. Thus
$$(d_1+1,d_2,\cdots
d_i-1,d_{i+1},\cdots,d_n)$$ is the degree sequence of the oriented
graph $D'$ obtained from $G^\sigma$ by deleting the arc $(v_k,v_i)$
and adding the arc $(v_k,v_1)$, regardless of the orientation of the
arc $(v_k,v_1)$. Rewrite the sequence above as
$$(d)_{D'}=(d'_1,d'_2,\cdots,d'_{i},d'_{i+1},\cdots,d'_n),$$
again in non-increasing order. Then $d_{1}\ge d_i\ge 2$ and thus
we have
$$\sum_{i=1}^n\left(
\begin{array}{c}
d_i' \\
2
\end{array}\right)>
\sum_{i=1}^n\left(
\begin{array}{c}
d_i \\
2
\end{array}\right),\eqno{(3.4)}$$since
\begin{equation*}
\begin{array}{lll}\sum_{i=1}^n\left(
\begin{array}{c}
d_i' \\
2
\end{array}\right)-
\sum_{i=1}^n\left(
\begin{array}{c}
d_i \\
2
\end{array}\right)&=&
\left(
\begin{array}{c}
d_1+1 \\
2
\end{array}\right)+\left(
\begin{array}{c}
d_i-1 \\
2
\end{array}\right)-\left(
\begin{array}{c}
d_1 \\
2
\end{array}\right)-\left(
\begin{array}{c}
d_i \\
2
\end{array}\right)\\
&=& d_{1}-d_i+1 \\&>&0.
\end{array}
\end{equation*}
Repeating this procedure, we eventually obtain a non-increasing
degree sequence
$$(d)_{D''}=(d''_1,d''_2,\cdots,d''_{i},d''_{i+1},\cdots,d''_n)$$such
that $\Delta(D'')=d_1''=n-2$ and
$$\sum_{v\in D''}\left(
\begin{array}{c}
d''(v) \\
2
\end{array}\right)> \sum_{v\in
D' }\left(
\begin{array}{c}
d'(v) \\
2
\end{array}\right)>\cdots > \sum_{v\in
G^\sigma}\left(
\begin{array}{c}
d(v) \\
2
\end{array}\right).\eqno{(3.5)}$$
Similarly, we can assume that there exists a vertex $v_k$ that is
not adjacent to $v_i$, but is adjacent to one neighbor, say $v_j$,
of $v_i$. Thus
$$(d_1,d_2,\cdots
d_i+1,d_{i+1},\cdots,d_j-1,d_{j+1},\cdots,d_n)$$ is the degree
sequence of the oriented graph $D'''$ obtained from $D''$ by
deleting the arc $(v_k,v_j)$ and adding the arc $(v_k,v_i)$,
regardless of the orientation of the arc $(v_k,v_i)$. By a similar
proof, we can get
$$\sum_{v\in D'''}\left(
\begin{array}{c}
d'''(v) \\
2
\end{array}\right)> \sum_{v\in
D'' }\left(
\begin{array}{c}
d''(v) \\
2
\end{array}\right).$$
Then by applying the above procedure repeatedly, we eventually
obtain the degree sequence $(d)_{B^+_{n,m}}$,
$$(d)_{B^+_{n,m}}=(n-2,m-n+2,2,2,\cdots,2,1,1,\cdots,1),$$
where the number of vertices of degree $2$ is $m-n+2$, and the
number of vertices of degree $1$ is $2n-m-4$. Finally, we get
$$\sum_{v\in B^+_{n,m}}\left(
\begin{array}{c}
d^{B^+}(v) \\
2
\end{array}\right)> \sum_{v\in
D''' }\left(
\begin{array}{c}
d'''(v) \\
2
\end{array}\right)> \sum_{v\in
D''}\left(
\begin{array}{c}
d''(v) \\
2
\end{array}\right)>\cdots > \sum_{v\in
G^\sigma}\left(
\begin{array}{c}
d(v) \\
2
\end{array}\right).$$
Then the lemma follows by combining Eq.(3.3) and Lemma \ref{05} with
the fact that $M(G,2)=\left(
\begin{array}{c}
m \\
2
\end{array}\right)-\sum_{v\in G^\sigma}\left(
\begin{array}{c}
d(v) \\
2
\end{array}\right)$.
$\blacksquare$
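The counting identity used at the end of the proof, $M(G,2)=\binom{m}{2}-\sum_{v}\binom{d(v)}{2}$, simply subtracts the pairs of edges meeting at a vertex from all $\binom{m}{2}$ pairs; a minimal check (Python; helper names are ours):

```python
from itertools import combinations
from math import comb

def two_matchings(edges):
    # brute force: pairs of edges sharing no endpoint
    return sum(1 for e, f in combinations(edges, 2) if not set(e) & set(f))

def two_matchings_formula(n, edges):
    # M(G,2) = C(m,2) - sum_v C(d(v),2): all pairs minus adjacent pairs
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return comb(len(edges), 2) - sum(comb(d, 2) for d in deg)
```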
Combining Lemmas \ref{06} and \ref{07}, we can now prove
Theorem \ref{01}.
\vskip3mm
\noindent {\bf Proof of Theorem \ref{01}.} By
Lemmas \ref{06} and \ref{07}, the oriented graph with minimal skew
energy among all oriented graphs of $G^\sigma(n,m)$ with $n\le m <
2(n-2)$ is either $O^+_{n,m}$ or $B^+_{n,m}$. Furthermore, from
(3.2) and (3.3), we have
(3.2) and (3.3), we have
$$a_4(O^+_{n,m})=(m-n+1)(2n-m-3)$$ and
$$a_4(B^+_{n,m})=(m-n+2)(2n-m-4).$$
Then, by a direct calculation we have
$a_4(O^+_{n,m})<a_4(B^+_{n,m})$ if $m<\frac{3n-5}{2}$;
$a_4(B^+_{n,m})=a_4(O^+_{n,m})$ if $m=\frac{3n-5}{2}$; and
$a_4(O^+_{n,m})>a_4(B^+_{n,m})$ otherwise. The proof is thus
complete by (3.1).
$\blacksquare$
\end{document}
\begin{document}
\title{Local Expectation Gradients for Doubly Stochastic Variational Inference}
\author{Michalis K. Titsias \\
Athens University of Economics and Business, \\
76, Patission Str. GR10434, Athens, Greece}
\date{}
\maketitle
\begin{abstract}
We introduce {\em local expectation gradients}, a general-purpose stochastic variational
inference algorithm for constructing stochastic gradients by sampling from the variational distribution.
This algorithm divides the problem of estimating the stochastic gradients over multiple variational parameters into smaller sub-tasks
so that each sub-task intelligently exploits the information coming from the most relevant part
of the variational distribution. This is achieved by performing an exact expectation over
the single random variable that most correlates with the variational parameter of interest, resulting in
a Rao-Blackwellized estimate that has low variance and can work efficiently for both continuous
and discrete random variables. Furthermore, the proposed algorithm has interesting similarities with Gibbs
sampling, but unlike Gibbs sampling it can be trivially parallelized.
\end{abstract}
\section{Introduction}
Stochastic variational inference has emerged as a promising and flexible framework for performing
large scale approximate inference in complex probabilistic models. It significantly
extends the traditional variational inference framework \cite{Jordan:1999,bishop:2006:PRML}
by incorporating stochastic approximation \cite{robbinsmonro51} into the optimization of the variational lower bound.
Currently, there exist two major research directions in stochastic variational inference. The
first attempts to deal with massive datasets by constructing stochastic gradients
from mini-batches of training examples \cite{HoffmanBB10,Hoffmanetal13}. The second direction aims at dealing with the intractable
expectations under the variational distribution that are encountered in non-conjugate probabilistic models
\cite{paisleyetal12, Ranganath14, MnihGregor2014, salimans2013, KingmaW13, stochbackpropDeepmind2014, titsiaslazaro2014}.
The unifying idea in the second direction is that stochastic optimization can be carried out by sampling from the variational distribution. This results
in a {\em doubly stochastic} estimation approach, where the mini-batch source of stochasticity (the first direction above)
can be combined with a second source of stochasticity associated with sampling from the variational distribution.
Furthermore, within the second direction there exist now two main sub-classes of methods: the first based on the log derivative trick
\cite{paisleyetal12, Ranganath14, MnihGregor2014} and the second based on the reparametrization trick
\cite{KingmaW13, stochbackpropDeepmind2014,titsiaslazaro2014}. The first approach is completely general
and allows stochastic optimization of variational lower bounds for arbitrary models and
forms of the variational distribution. For instance, probabilistic models having
both continuous and discrete latent variables can be accommodated by the log derivative trick approach.
On the other hand, the reparametrization approach is specialised to
continuous spaces and probabilistic models with differentiable joint distributions.
In this paper, we are interested in further investigating the doubly stochastic methods that sample from the variational
distribution. A challenging issue here concerns the variance of the stochastic gradients.
Specifically, while the method based on the log derivative trick is the most general one,
it has been observed to suffer severely from high variance \cite{paisleyetal12, Ranganath14, MnihGregor2014}
and thus it is only applicable together with sophisticated variance reduction techniques based on control
variates. However, the construction of efficient control variates
is very challenging and in each application depends on the form of the probabilistic model.
Therefore, it would be highly desirable to investigate whether it is possible to avoid using control variates altogether
and construct simple stochastic gradient estimates that can work well for any probabilistic model. Notice that,
while the reparametrization approach \cite{KingmaW13, stochbackpropDeepmind2014,titsiaslazaro2014}
has been shown to perform well without the use of control variates, it is at the same time applicable only
to a restricted class of variational inference problems that involve probabilistic
models with differentiable joint distributions
and variational distributions typically taken from the location-scale family.
Next, we introduce a general purpose algorithm for constructing stochastic gradients
by sampling from the variational distribution that has a very low variance and can work efficiently
without the need for any variance reduction technique. This method builds upon the
log derivative trick and it is based on the key observation that stochastic gradient estimation
over multiple variational parameters can be divided into smaller sub-tasks
where each sub-task requires using more information coming from some part
of the variational distribution and less information coming from other parts. For instance, assume we have a factorized variational distribution
of the form $\prod_{i=1}^n q_{v_i}(x_i)$ where $v_i$ is a local variational
parameter and $x_i$ the associated latent variable. Clearly, $v_i$ determines the distribution over $x_i$,
and therefore we expect the latent variable $x_i$ to be the most important piece
of information for estimating $v_i$ or its gradient. Based on this intuitive observation
we introduce the {\em local expectation gradients} algorithm that provides a stochastic gradient over $v_i$ by performing an exact expectation over
the associated random variable $x_i$ while using a single sample from the remaining
latent variables. Essentially this consists of a Rao-Blackwellized estimate that dramatically
reduces the variance of the stochastic gradient so that, for instance, for continuous spaces the new stochastic gradient
is guaranteed to have lower variance than the stochastic gradient corresponding to the
state of the art reparametrization method. Furthermore, the local expectation algorithm has striking
similarities with Gibbs sampling, with the important difference that, unlike Gibbs sampling, it can be trivially parallelized.
The remainder of the paper is organized as follows: Section \ref{eq:svi} discusses the two main
types of algorithms for stochastic variational inference that are based on simulating from the variational
distribution. Section \ref{sec:localexp} describes the proposed local expectation gradients algorithm.
Section \ref{sec:experiments} provides experimental results and the paper concludes
with a discussion in Section \ref{sec:discussion}.
\section{Stochastic variational inference \label{eq:svi}}
Here, we discuss the main ideas behind current algorithms on stochastic variational inference
and particularly doubly stochastic methods that sample from the variational distribution in order to approximate
intractable expectations using Monte Carlo. Given a joint probability distribution $p(\mathbf{y},{\vect x})$ where
$\mathbf{y}$ are observations and ${\vect x}$ are latent variables (possibly including model parameters treated as
random variables) and a variational distribution $q_{{\vect v}}({\vect x})$, the objective is to maximize
the lower bound
\begin{align}
F({\vect v}) & = \mathbbm{E}_{q_{{\vect v}}({\vect x})} \left[
\log p(\mathbf{y},{\vect x}) - \log q_{{\vect v}}({\vect x}) \right], \label{eq:bound1} \\
& = \mathbbm{E}_{q_{{\vect v}}({\vect x})} \left[
\log p(\mathbf{y},{\vect x}) \right] - \mathbbm{E}_{q_{{\vect v}}({\vect x})} \left[ \log q_{{\vect v}}({\vect x}) \right],
\label{eq:bound2}
\end{align}
with respect to the variational parameters ${\vect v}$. Ideally, in order to tune
${\vect v}$ we would like to have a closed-form expression for the lower bound so that we could
subsequently maximize it by using standard optimization routines such as gradient-based algorithms.
However, for many probabilistic models and forms of the variational
distribution, at least one of the two expectations in (\ref{eq:bound2}) is intractable.
For instance, if $p(\mathbf{y},{\vect x})$ is defined through a neural network, the expectation
$\mathbbm{E}_{q_{{\vect v}}({\vect x})} \left[\log p(\mathbf{y},{\vect x}) \right]$ will be analytically intractable
despite the fact that $q_{{\vect v}}({\vect x})$ might have a very simple form,
such as a Gaussian, so that the second expectation in eq.\ (\ref{eq:bound2}) (i.e.\ the entropy of $q_{{\vect v}}({\vect x})$)
will be tractable. In other cases, such as when $q_{{\vect v}}({\vect x})$ is a mixture model or is defined through a
complex graphical model, the entropic term will also be intractable. Therefore, in general we
face the following intractable expectation
\begin{equation}
\widetilde{F}({\vect v}) = \mathbbm{E}_{q_{{\vect v}}({\vect x})} \left[ f({\vect x}) \right],
\label{eq:genexpect}
\end{equation}
where $f({\vect x})$ can be either $\log p(\mathbf{y},{\vect x})$, $-\log q_{{\vect v}}({\vect x})$ or
$\log p(\mathbf{y},{\vect x}) - \log q_{{\vect v}}({\vect x})$, from which we would like to efficiently
estimate the gradient over ${\vect v}$ in order to apply gradient-based optimization.
The most general method for estimating the gradient $\nabla_{{\vect v}} \widetilde{F}({\vect v})$
is based on the log derivative trick \cite{paisleyetal12, Ranganath14, MnihGregor2014}.
Specifically, this makes use of the property $\nabla_{{\vect v}} q_{{\vect v}}({\vect x}) = q_{{\vect v}}({\vect x}) \nabla_{{\vect v}} \log q_{{\vect v}}({\vect x})$,
which allows us to write the gradient as
\begin{equation}
\nabla_{{\vect v}} \widetilde{F}({\vect v}) = \mathbbm{E}_{q_{{\vect v}}({\vect x})} \left[ f({\vect x}) \nabla_{{\vect v}} \log q_{{\vect v}}({\vect x}) \right]
\label{eq:gengrad}
\end{equation}
and then obtain an unbiased estimate according to
\begin{equation}
\frac{1}{S} \sum_{s=1}^S f({\vect x}^{(s)}) \nabla_{{\vect v}} \log q_{{\vect v}}({\vect x}^{(s)}),
\label{eq:logderivestimate}
\end{equation}
where each ${\vect x}^{(s)}$ is an independent draw from $q_{{\vect v}}({\vect x})$. While
this estimate is unbiased, it has been observed to severely suffer from high variance so that
in practice it is necessary to consider variance reduction techniques such as those based on
control variates \cite{paisleyetal12, Ranganath14, MnihGregor2014}. Despite this limitation, the above framework is
very general as it can deal with any variational distribution over both discrete and
continuous latent variables. The proposed method presented in Section \ref{sec:localexp}
is essentially based on the log derivative trick, but it does not suffer from high variance.
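To make the estimator in (\ref{eq:logderivestimate}) concrete, here is a minimal sketch (ours, not from the paper) for a scalar toy problem with $q_{v}(x)=\mathcal{N}(x|\mu,1)$ and $f(x)=x^2$, where $\nabla_\mu \log q_{v}(x) = x-\mu$ and the exact gradient of $\mathbbm{E}_{q}[f]=\mu^2+1$ is $2\mu$:

```python
import random

def score_function_grad(f, mu, num_samples, rng):
    # eq. (5): (1/S) sum_s f(x_s) * d/dmu log N(x_s | mu, 1),
    # where d/dmu log q = x - mu for unit variance
    total = 0.0
    for _ in range(num_samples):
        x = rng.gauss(mu, 1.0)
        total += f(x) * (x - mu)
    return total / num_samples
```

The estimate converges to $2\mu$, but even on this one-dimensional example its per-sample variance is large, which is the issue discussed above.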
The second approach is suitable for continuous spaces where $f({\vect x})$ is a differentiable function
of ${\vect x}$ \cite{KingmaW13, stochbackpropDeepmind2014,titsiaslazaro2014}. It is based on a simple transformation
of (\ref{eq:genexpect}) which allows moving the variational parameters ${\vect v}$ inside $f({\vect x})$
so that eventually the expectation is taken over a base distribution that does not depend on the
variational parameters anymore. For example, if the variational
distribution is the Gaussian $\mathcal{N}({\vect x}|{\vect \mu},L L^{\top})$ where ${\vect v} = ({\vect \mu},L)$, the expectation in
(\ref{eq:genexpect}) can be re-written as $\widetilde{F}({\vect \mu},L) = \int \mathcal{N}({\vect z}|{\vect 0}, I) f({\vect \mu} + L {\vect z}) d {\vect z}$
and subsequently the gradient over $({\vect \mu},L)$ can be approximated by the following unbiased Monte Carlo estimate
\begin{equation}
\frac{1}{S} \sum_{s=1}^S \nabla_{({\vect \mu},L)} f({\vect \mu} + L {\vect z}^{(s)}),
\label{eq:reparamEstimate}
\end{equation}
where each ${\vect z}^{(s)}$ is an independent sample from $\mathcal{N}({\vect z}|{\vect z}ero,I)$.
This estimate makes efficient use of the slope of $f({\vect x})$, which allows us to perform informative moves in
the space of $({\vect \mu},L)$. For instance, observe that as $L \rightarrow 0$ the gradient over
${\vect \mu}$ approaches $\nabla_{{\vect \mu}} f({\vect \mu})$ so that the optimization reduces to a standard gradient-ascent
procedure for locating a mode of $f({\vect x})$. Furthermore, it has been shown experimentally in several studies
\cite{KingmaW13, stochbackpropDeepmind2014,titsiaslazaro2014} that the estimate in (\ref{eq:reparamEstimate})
has relatively low variance and can lead to efficient optimization even when a single sample is
used at each iteration. Nevertheless, a limitation of the approach is that it is only applicable to models where ${\vect x}$ is continuous
and $f({\vect x})$ is differentiable. Even within this subset of models we are also additionally
restricted to using certain classes of variational distributions \cite{KingmaW13, stochbackpropDeepmind2014}.
Therefore, it is clear that the current literature lacks a universal method that is both applicable to a very broad class of models
(in both discrete and continuous spaces) and able to provide low-variance stochastic gradients.
Next, we introduce such an approach.
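For completeness, the reparametrization estimate in eq.\ (\ref{eq:reparamEstimate}) can be sketched as follows; the dimensionality and the quadratic test function are illustrative assumptions, chosen so that the exact gradients are known in closed form ($\nabla_{{\vect \mu}} \widetilde{F} = -{\vect \mu}$ and $\nabla_{L} \widetilde{F} = -L$ for $f({\vect x}) = -\frac{1}{2}\|{\vect x}\|^2$).

```python
import numpy as np

rng = np.random.default_rng(1)

def reparam_gradient(f_grad, mu, L, S=1):
    """(1/S) sum_s grad_{(mu, L)} f(mu + L z_s), with z_s ~ N(0, I).

    By the chain rule: grad_mu f(mu + L z) = f'(x) and grad_L f(mu + L z) = f'(x) z^T.
    """
    n = mu.shape[0]
    g_mu = np.zeros(n)
    g_L = np.zeros((n, n))
    for _ in range(S):
        z = rng.standard_normal(n)
        x = mu + L @ z
        fx = f_grad(x)              # gradient of f evaluated at x
        g_mu += fx
        g_L += np.outer(fx, z)
    return g_mu / S, g_L / S

# Illustrative example: f(x) = -0.5 ||x||^2, so f'(x) = -x.
mu = np.array([1.0, -2.0])
L = 0.1 * np.eye(2)
g_mu, g_L = reparam_gradient(lambda x: -x, mu, L, S=50000)
```

With a small scale matrix $L$ the single-sample version ($S=1$) is already informative, which matches the gradient-ascent limiting behaviour described above.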
\section{Local expectation gradients \label{sec:localexp}}
Suppose that the $n$-dimensional latent vector ${\vect x}$ in the probabilistic model takes values in some space $\mathcal{S}_1 \times \ldots \times \mathcal{S}_n$
where each set $\mathcal{S}_i$ can be continuous or discrete. We consider a variational distribution over ${\vect x}$
that is represented as a directed graphical model having the following joint density
\begin{equation}
q_{{\vect v}}({\vect x}) = \prod_{i=1}^n q_{v_i}(x_i|\text{pa}_i),
\label{eq:vardist}
\end{equation}
where $q_{v_i}(x_i|\text{pa}_i)$ is the conditional factor over $x_i$ given the set of the parents denoted by $\text{pa}_i$.
We assume that each conditional factor has its own separate set of variational parameters $v_i$ and ${\vect v} = (v_1,\ldots,v_n)$.
The objective is then to obtain a stochastic approximation for the gradient of the lower bound over each variational
parameter $v_i$ based on the log derivative form in eq.\ (\ref{eq:gengrad}).
Our method is motivated by the observation that each parameter $v_i$
is influenced mostly by its corresponding latent variable $x_i$, since $v_i$
determines the factor $q_{v_i}(x_i|\text{pa}_i)$.
Therefore, to get information about the gradient of
$v_i$ we should be exploring multiple possible values of $x_i$
together with a rather smaller set of values of the remaining latent variables
${\vect x}_{\setminus i}$. Next we take this idea to the extreme,
using infinitely many draws of $x_i$ (i.e.\ essentially an exact expectation)
together with just a single sample of ${\vect x}_{\setminus i}$. More precisely, we factorize the
variational distribution as follows
\begin{equation}
q_{{\vect v}}({\vect x}) = q(x_i|\text{mb}_i) q({\vect x}_{\setminus i}),
\end{equation}
where $\text{mb}_i$ denotes the Markov blanket of $x_i$. By using the log derivative trick
the gradient over $v_i$ can be written as
\begin{align}
\nabla_{v_i} \widetilde{F}({\vect v}) & =
\mathbbm{E}_{q ({\vect x})} \left[ f({\vect x}) \nabla_{v_i} \log q_{v_i}(x_i|\text{pa}_i) \right], \nonumber \\
& = \mathbbm{E}_{q ({\vect x}_{\setminus i})} \left[ \mathbbm{E}_{q(x_i|\text{mb}_i)} \left[ f({\vect x}) \nabla_{v_i} \log q_{v_i}(x_i|\text{pa}_i) \right] \right],
\label{eq:graditer}
\end{align}
where in the second expression we used the law of iterated expectations. Then, an unbiased
stochastic gradient, say at the $t$-th iteration of an optimization algorithm, can be obtained by drawing a single sample
${\vect x}_{\setminus i}^{(t)}$ from $q({\vect x}_{\setminus i})$ so that
\begin{equation}
\mathbbm{E}_{q(x_i|\text{mb}_i^{(t)})} \left[ f({\vect x}_{\setminus i}^{(t)}, x_i) \nabla_{v_i} \log q_{v_i}(x_i|\text{pa}_i^{(t)}) \right],
\label{eq:graditerGibbs}
\end{equation}
which is the expression for the proposed stochastic gradient for the parameter $v_i$.
To get an independent sample ${\vect x}_{\setminus i}^{(t)}$ from $q({\vect x}_{\setminus i})$
we can simply simulate a full latent vector ${\vect x}^{(t)}$ from $q_{{\vect v}}({\vect x})$ by applying the standard
ancestral sampling procedure for directed graphical models \cite{bishop:2006:PRML}.
Then, the sub-vector ${\vect x}_{\setminus i}^{(t)}$ is by construction an independent draw from the marginal $q({\vect x}_{\setminus i})$. Furthermore,
the sample ${\vect x}^{(t)}$ can be thought of as a {\em pivot} sample that needs to be drawn only once
and can then be re-used multiple times in order to compute the stochastic
gradients for all variational parameters $v_1,\ldots,v_n$ according to
eq.\ (\ref{eq:graditerGibbs}).
When the variable $x_i$ takes discrete values, the expectation under
$q(x_i|\text{mb}_i^{(t)})$ in eq.\ (\ref{eq:graditerGibbs}) reduces to a sum of terms
associated with all possible values of $x_i$.
On the other hand, when $x_i$ is a continuous variable the expectation in (\ref{eq:graditerGibbs}) corresponds to
a univariate integral that in general may not be analytically tractable. In this case we shall use fast numerical
integration methods (e.g.\ Gaussian quadrature when $q_{v_i}(x_i|\text{pa}_i)$ is Gaussian).
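For the discrete case, the local expectation gradient can be sketched concretely for the simplest instance of eq.\ (\ref{eq:vardist}): a fully factorized Bernoulli distribution, where each Markov blanket is empty, $q_{v_i}(x_i) = \sigma(v_i)^{x_i}(1-\sigma(v_i))^{1-x_i}$, and $\nabla_{v_i} \log q_{v_i}(x_i) = x_i - \sigma(v_i)$, so the two-term sum in eq.\ (\ref{eq:graditerGibbs}) collapses to $\sigma(v_i)(1-\sigma(v_i))\left[f({\vect x}_{\setminus i}^{(t)}, x_i{=}1) - f({\vect x}_{\setminus i}^{(t)}, x_i{=}0)\right]$. The choice of $f$ below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def local_expectation_gradients(f, v, rng):
    """One stochastic gradient over all v_i for the mean-field Bernoulli
    q_v(x) = prod_i sigmoid(v_i)^{x_i} (1 - sigmoid(v_i))^{1 - x_i}."""
    p = sigmoid(v)
    x = (rng.random(v.shape) < p).astype(float)     # pivot sample x^{(t)}
    g = np.zeros_like(v)
    for i in range(len(v)):
        fx = np.empty(2)
        for xi in (0.0, 1.0):                       # exact sum over x_i
            x_alt = x.copy()
            x_alt[i] = xi
            fx[int(xi)] = f(x_alt)
        # sum_{x_i} q(x_i) f(...) (x_i - p_i)  =  p_i (1 - p_i) (f(1) - f(0))
        g[i] = p[i] * (1 - p[i]) * (fx[1] - fx[0])
    return g

# Illustrative example: f(x) = sum(x), so the exact gradient of
# E[f] = sum_i sigmoid(v_i) over v_i is sigmoid(v_i)(1 - sigmoid(v_i)).
v = np.array([0.0, 1.0, -1.0])
g = local_expectation_gradients(lambda x: x.sum(), v, rng)
```

Notice that for this additive $f$ the estimate is exact regardless of the pivot sample, which illustrates why averaging over $x_i$ removes so much variance.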
We shall refer to the above algorithm for providing stochastic gradients over variational parameters
as {\em local expectation gradients} and pseudo-code of a stochastic variational inference scheme that internally uses this algorithm
is given in Algorithm \ref{alg1}. Notice that Algorithm \ref{alg1} corresponds to the case where
$f({\vect x}) = \log p(\mathbf{y},{\vect x}) - \log q_{{\vect v}}({\vect x})$ while other cases can be expressed similarly.
In the next two sections we discuss the computational complexity of the proposed algorithm and draw interesting connections between local
expectation gradients with Gibbs sampling (Section \ref{sec:conneGibbs}) and the reparametrization approach for differentiable
functions $f({\vect x})$ (Section \ref{sec:conneReparam}).
\begin{algorithm}[tb]
\caption{Stochastic variational inference using local expectation gradients}
\label{alg1}
\begin{algorithmic}
\STATE {\bfseries Input:} $f({\vect x})$, $q_{{\vect v}}({\vect x})$.
\STATE Initialize ${\vect v}^{(0)}$, $t=0$.
\REPEAT
\STATE Set $t=t+1$.
\STATE Draw pivot sample ${\vect x}^{(t)} \sim q_{{\vect v}}({\vect x})$.
\FOR{$i=1$ {\bfseries to} $n$}
\STATE $dv_i = \mathbbm{E}_{q(x_i|\text{mb}_i^{(t)})} \left[ f({\vect x}_{\setminus i}^{(t)}, x_i) \nabla_{v_i} \log q_{v_i}(x_i|\text{pa}_i^{(t)}) \right]$.
\STATE $v_i = v_i + \eta_t dv_i$.
\ENDFOR
\UNTIL{convergence criterion is met.}
\end{algorithmic}
\end{algorithm}
\subsection{Time complexity and connection with Gibbs sampling \label{sec:conneGibbs}}
In this section we discuss computational issues when running the proposed algorithm
and we point out similarities and differences that it has with Gibbs sampling.
Let us assume that time complexity is dominated by function evaluations of
$f({\vect x})$. We further assume that this function neither factorizes into a sum of local terms,
where each term depends on a subset of variables, nor allows savings
by performing incremental updates of statistics computed during intermediate
steps when evaluating $f({\vect x})$. Based on this the complexity per iteration is $O(n K)$ where $K$ is the maximum number
of evaluations of $f({\vect x})$ needed when estimating the gradient for each $v_i$.
If each $x_i$ takes $K$ discrete values, the exact number of
evaluations will be $n (K-1) + 1$ where the ``plus one'' comes from the evaluation of
the pivot sample ${\vect x}^{(t)}$ that needs to be performed once and then it can be re-used for
each $i=1,\ldots,n$. Notice that this time complexity corresponds to a worst-case scenario. In
practice, we could save a lot of computations by taking advantage of
any factorization in $f({\vect x})$ and also the fact that any function evaluation
is performed for inputs that are the same as the pivot sample ${\vect x}^{(t)}$
but having a single variable changed. Furthermore, once we have drawn the pivot
sample ${\vect x}^{(t)}$ all function evaluations can be trivially parallelized.
There is an interesting connection between local expectation gradients and
Gibbs sampling. In particular, carrying out Gibbs sampling in the variational distribution in
eq.\ (\ref{eq:vardist}) requires iteratively sampling from each conditional
$q(x_i|\text{mb}_i)$, for $i=1,\ldots,n$. Clearly, the same conditional
appears also in the local expectation algorithm when estimating the stochastic gradients.
The obvious difference is that instead of sampling from $q(x_i|\text{mb}_i)$
we now average under this distribution. Furthermore, for models having discrete
latent variables the time complexity per iteration is the same as with Gibbs
sampling with the important difference, however, that the algorithm of local expectation gradients
is trivially parallelizable while Gibbs sampling is not.
\subsection{Connection with the reparametrization approach \label{sec:conneReparam}}
A very interesting property of local expectation gradients is that it is guaranteed to provide stochastic gradients
having lower variance than the state-of-the-art reparametrization method \cite{KingmaW13, stochbackpropDeepmind2014,titsiaslazaro2014},
also called stochastic backpropagation \cite{stochbackpropDeepmind2014}, which is suitable for continuous
spaces and differentiable functions $f({\vect x})$. Specifically, we will prove
that this property holds for any factorized location-scale variational distribution of the form
\begin{equation}
q_{{\vect v}}({\vect x}) = \prod_{i=1}^n q_{v_i} (x_i),
\end{equation}
where $v_i = (\mu_i, \ell_i)$, $\mu_i$ is a location-mean parameter and $\ell_i$ is a scale parameter.
Given that $q(z_i)$ is the base distribution based on which we can reparametrize $x_i$ according to
$x_i = \mu_i + \ell_i z_i$ with $z_i \sim q(z_i)$, the single-sample stochastic gradient over $v_i$
is given by
\begin{equation}
\nabla_{v_i} f({\vect \mu} + \boldsymbol{\ell} \circ {\vect z}^{(t)}), \ \ z^{(t)}_i \sim q(z_i),
\label{eq:gradReparFactor}
\end{equation}
where ${\vect \mu}$ is the vector of all $\mu_i$s, and similarly $\boldsymbol{\ell}$ and ${\vect z}^{(t)}$ are the vectors of $\ell_i$s
and $z_i^{(t)}$s, while $\circ$ denotes the element-wise product. The local expectation stochastic gradient
from eq.\ (\ref{eq:graditerGibbs}) takes the form
\begin{align}
& \int q_{v_i} (x_i) f({\vect x}_{\setminus i}^{(t)}, x_i) \nabla_{v_i} \log q_{v_i}(x_i) d x_i, \nonumber \\
& = \int \nabla_{v_i} q_{v_i} (x_i) f({\vect x}_{\setminus i}^{(t)}, x_i) d x_i.
\label{eq:gradLocExpFactor1}
\end{align}
Now notice that ${\vect x}_{\setminus i}^{(t)}= {\vect \mu}_{\setminus i} + \boldsymbol{\ell}_{\setminus i} \circ {\vect z}_{\setminus i}^{(t)}$. Also
by interchanging the order of the gradient and integral operators together with using the base distribution we have
\begin{align}
& \nabla_{v_i} \int q_{v_i} (x_i) f({\vect \mu}_{\setminus i} + \boldsymbol{\ell}_{\setminus i} \circ {\vect z}_{\setminus i}^{(t)}, x_i) d x_i, \nonumber \\
& = \nabla_{v_i} \int q(z_i) f({\vect \mu} + \boldsymbol{\ell} \circ {\vect z}^{(t)}) d z_i, \nonumber \\
& = \int q(z_i) \nabla_{v_i} f({\vect \mu} + \boldsymbol{\ell} \circ {\vect z}^{(t)}) d z_i,
\label{eq:gradLocExpFactor2}
\end{align}
where the final eq.\ (\ref{eq:gradLocExpFactor2}) is clearly an expectation of the reparametrization gradient from
eq.\ (\ref{eq:gradReparFactor}). Therefore, based on the standard Rao-Blackwellization argument, the
variance of the stochastic gradient obtained by (\ref{eq:gradLocExpFactor2}) will always be lower than or equal to the variance of the gradient
of the reparametrization method. To intuitively understand this, observe that eq.\ (\ref{eq:gradLocExpFactor2})
essentially says that the single-sample reparametrization gradient from (\ref{eq:gradReparFactor}) is just a Monte Carlo approximation
to the local expectation stochastic gradient obtained by drawing a single sample from the base distribution, thus
naturally it should have higher variance. In the experiments in Section \ref{sec:experiments} we typically observe
that the variance provided by local expectation gradients is roughly one order of magnitude lower
than the corresponding variance of the reparametrization gradients.
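The Rao-Blackwellization argument can be checked numerically in a minimal setting. The following sketch uses a single location-scale factor and the separable choice $f(x) = -\frac{1}{2}x^2$, both illustrative assumptions chosen so that the local expectation over $z_i$ is available in closed form: the single-sample reparametrization gradient over $\mu$ is $-(\mu + \ell z)$, with variance $\ell^2$, while averaging it over $z$ gives the constant $-\mu$, with zero variance.

```python
import numpy as np

rng = np.random.default_rng(3)
mu, ell = 1.0, 0.5                 # one location-scale factor (illustrative)
S = 20000

# Single-sample reparametrization gradients over mu:
#   d/dmu f(mu + ell z) = -(mu + ell z),  z ~ N(0, 1)
z = rng.standard_normal(S)
repar = -(mu + ell * z)

# Local expectation gradient: the exact average of the same quantity over z,
#   int N(z|0,1) [-(mu + ell z)] dz = -mu   (zero variance for this separable f)
locexp = np.full(S, -mu)

var_repar = repar.var()            # should be close to ell**2
var_locexp = locexp.var()          # exactly zero here
```

The zero variance is of course an artifact of the separable $f$; for general $f$ the local expectation gradient still averages the reparametrization gradient over $z_i$ and therefore cannot have larger variance.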
\section{Experiments \label{sec:experiments}}
In this section, we apply local expectation gradients (LeGrad) to different types of stochastic variational inference
problems and we compare it against the standard stochastic gradient based on the log derivative trick (LdGrad)
described by eq.\ (\ref{eq:logderivestimate}) as well as the reparametrization-based gradient (ReGrad) given by eq.\
(\ref{eq:reparamEstimate}). In Section \ref{sec:expGaussian} we consider fitting using a factorized variational
Gaussian distribution a highly correlated multivariate Gaussian. In Section \ref{sec:logreg},
we consider a two-class classification problem using two digits from the MNIST database and we approximate
a Bayesian logistic regression model using stochastic variational inference. Finally, in Section
\ref{sec:sigmoidbnet} we consider a sigmoid belief network with one layer of hidden variables
and we fit it to the binarized version of the MNIST digits. For this problem we also parametrize
the variational distribution using a recognition model.
\subsection{Fitting a high dimensional Gaussian \label{sec:expGaussian}}
We start with a simple ``artificial'' variational inference problem where we would like to
fit a factorized variational Gaussian distribution of the form
\begin{equation}
q_{{\vect v}}({\vect x}) = \prod_{i=1}^n \mathcal{N}(x_i|\mu_i, \ell_i^2),
\label{eq:factorGauss}
\end{equation}
to a highly correlated multivariate Gaussian of the form $\mathcal{N}({\vect x}|{\vect m}, \Sigma)$.
We assume that $n=100$, ${\vect m} = \boldsymbol{2}$, where $\boldsymbol{2}$ is the $100$-dimensional vector
of $2$s. Further, the covariance matrix $\Sigma$ was constructed from a kernel function
so that $\Sigma_{ij} = e^{ - \frac{1}{2} (s_i - s_j)^2} + 0.1 \delta_{i j}$,
where the inputs $s_i$ were placed on a uniform grid in $[0,10]$. This covariance matrix is shown in Figure
\ref{fig:covmatrix}. The smallest eigenvalue of $\Sigma$ is roughly $0.1$, so we expect
the optimal values for the variances $\ell_i^2$ to be around $0.1$
(see e.g.\ \cite{bishop:2006:PRML}) while the optimal value for each $\mu_i$ is $2$.
Given that the latent vector ${\vect x}$ is continuous, to obtain the stochastic
gradient for each $(\mu_i,\ell_i)$ we need to apply numerical integration. More precisely,
the stochastic gradient for LeGrad according to eq.\ (\ref{eq:graditerGibbs}) reduces to an expectation
under the Gaussian $\mathcal{N}(x_i|\mu_i, \ell_i^2)$ and therefore we can naturally
apply Gaussian quadrature. We used the quadrature rule having $K=5$ grid points\footnote{Gaussian
quadrature with $K$ grid points integrates exactly polynomials up to $2 K -1$ degree.} so that the whole
complexity of LeGrad was $O(n K) = O(500)$ function evaluations per iteration (see Section
\ref{sec:conneGibbs}). When we applied the standard LdGrad approach we set the number
of samples in (\ref{eq:logderivestimate}) equal to $S=500$ so that the computational
costs of LeGrad and LdGrad match exactly with one another. When using the ReGrad
approach based on (\ref{eq:reparamEstimate}) we construct the stochastic
gradient using a single sample, as is typical among practitioners who use this method,
and also because we want to empirically confirm the theory
from Section \ref{sec:conneReparam} which states that the LeGrad method
should always have lower variance than ReGrad given that the latter uses
a sample size of one.
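The Gaussian quadrature step used by LeGrad here amounts to a change of variables in Gauss-Hermite quadrature: $\mathbbm{E}_{\mathcal{N}(x|\mu,\ell^2)}[g(x)] \approx \frac{1}{\sqrt{\pi}}\sum_{k=1}^{K} w_k\, g(\mu + \sqrt{2}\,\ell\, t_k)$, where $(t_k, w_k)$ are the standard nodes and weights for the weight function $e^{-t^2}$. The test function $x^4$ below is an illustrative assumption, chosen because a $K=5$ rule integrates it exactly.

```python
import numpy as np

def gauss_expectation(g, mu, ell, K=5):
    """Approximate E_{N(x | mu, ell^2)}[g(x)] with K-point Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite.hermgauss(K)   # nodes/weights for weight e^{-t^2}
    x = mu + np.sqrt(2.0) * ell * t             # change of variables t -> x
    return (w * g(x)).sum() / np.sqrt(np.pi)

# A K-point rule integrates polynomials up to degree 2K - 1 exactly, so K = 5
# handles x^4 exactly; for N(mu, ell^2): E[x^4] = mu^4 + 6 mu^2 ell^2 + 3 ell^4.
mu, ell = 2.0, 0.3
approx = gauss_expectation(lambda x: x**4, mu, ell, K=5)
exact = mu**4 + 6 * mu**2 * ell**2 + 3 * ell**4
```

In LeGrad the integrand additionally contains the score factor $\nabla_{v_i}\log q_{v_i}(x_i)$, but the same $K$-point rule applies unchanged.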
Figure \ref{fig:toyGaussian}(a) shows the evolution of the variance of the three alternative stochastic
gradients (estimated by using the first variational parameter $\mu_1$ and a running window
of $10$ previous iterations) as the stochastic optimization algorithm iterates. Clearly,
LeGrad (red line) has the lowest variance, then comes ReGrad (blue line) and last
is LdGrad which, despite the fact that it uses $500$ independent samples in the Monte Carlo average in
eq.\ (\ref{eq:logderivestimate}), suffers from high variance. One may ask how many samples
the LdGrad method requires to decrease its variance to the level of the
LeGrad method. In this example, we have found empirically that this is achieved when LdGrad uses $S=10^4$
samples; see Figure \ref{fig:toyGaussian}(b). This shows that LeGrad is significantly better
than LdGrad and the same conclusion is supported from the experiment in Section \ref{sec:logreg}
where we consider a Bayesian logistic regression model.
Furthermore, observe that the fact that LeGrad has lower variance than ReGrad is in good accordance with the
theoretical results of Section \ref{sec:conneReparam}. Notice also that the variance of LeGrad is
roughly one order of magnitude lower than the variance of ReGrad.
Finally, to visualize the convergence of the different algorithms and their ability to maximize
the lower bound in Figure \ref{fig:toyGaussian}(c) we plot the stochastic value of the lower
bound computed at each iteration by drawing a single sample from the variational distribution. Clearly,
the stochastic value of the bound can allow us to quantify convergence and it could also be used as a diagnostic of
high variance problems. From Figure \ref{fig:toyGaussian}(c) we can observe
that LdGrad makes very slow progress in maximizing the bound while LeGrad and ReGrad converge
rapidly.
\begin{figure}
\caption{The $100 \times 100$ covariance matrix $\Sigma$ used in the experiment in Section \ref{sec:expGaussian}.}
\label{fig:covmatrix}
\end{figure}
\begin{figure*}
\caption{The panel in (a) shows the variance of the gradient for the variational parameter $\mu_1$
when using LeGrad (red line), ReGrad (blue line) and LdGrad (green line). The number of samples
used by LdGrad was $S=500$. The panel in (b) shows again the variances for all methods (the red and blue lines are from
(a)) when LdGrad uses $S=10^4$ samples. The panel in (c) shows the evolution of the
stochastic value of the lower bound (for LdGrad $S=500$ was used).}
\label{fig:toyGaussian}
\end{figure*}
\subsection{Logistic regression \label{sec:logreg}}
In this section we compare the three approaches in a challenging
binary classification problem using Bayesian logistic regression. We
apply the stochastic variational algorithm in order to approximate the posterior
over the regression parameters. Specifically, given a dataset
$\mathcal{D} \equiv \{{\vect z}_m, y_m\}_{m=1}^M$, where ${\vect z}_m\in \mathbbm{R}^{n}$
is the input and $y_m \in \{-1,+1\}$ the class label,
we model the joint distribution over the observed labels and the parameters $\mathbf{w}$ by
$$
p(\mathbf{y}, \mathbf{w}) = \left( \prod_{m=1}^M \sigma(y_m {\vect z}_m^{\top} \mathbf{w}) \right) p(\mathbf{w}),
$$
where $\sigma(a)$ is the sigmoid function and $p(\mathbf{w})$ denotes a zero-mean Gaussian prior
on the weights $\mathbf{w}$. As in the previous section we will assume a variational
Gaussian distribution defined as in eq.\ (\ref{eq:factorGauss}) with the only notational
difference that the unknowns now are model parameters, denoted by $\mathbf{w}$, and not latent
variables.
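The log joint $f(\mathbf{w}) = \log p(\mathbf{y},\mathbf{w})$ that the stochastic gradients act on can be sketched as follows; the Gaussian prior variance and the tiny synthetic data are illustrative assumptions (the prior's normalizing constant is dropped, which does not affect gradients).

```python
import numpy as np

def log_sigmoid(a):
    # Numerically stable log sigma(a) = -log(1 + e^{-a})
    return -np.logaddexp(0.0, -a)

def log_joint(w, Z, y, prior_var=1.0):
    """f(w) = sum_m log sigma(y_m z_m^T w) + log N(w | 0, prior_var I),
    up to the additive normalizing constant of the prior."""
    lik = log_sigmoid(y * (Z @ w)).sum()
    prior = -0.5 * (w @ w) / prior_var
    return lik + prior

# Tiny synthetic sanity check: at w = 0 each likelihood term is log(1/2).
rng = np.random.default_rng(4)
Z = rng.standard_normal((10, 3))
y = np.sign(rng.standard_normal(10))
val = log_joint(np.zeros(3), Z, y)
```

In the experiment, each factor $\mathcal{N}(w_i|\mu_i,\ell_i^2)$ of the variational posterior is then updated via the quadrature-based local expectation gradients, exactly as in the previous section.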
For the above setting we considered a subset of the MNIST dataset that includes all
$12660$ training examples from the digit classes $2$ and $7$. We applied all three
stochastic gradient methods using a setup analogous to the one used in the previous section.
In particular, the LeGrad method uses again a Gaussian quadrature rule of $K=5$ grid points so that the time
complexity per iteration was $O(n K) = O(785 \times 5)$, where the number $785$ comes
from the dimensionality of the digit images plus the bias term. To match this with the time complexity
of LdGrad, we used $S=3925$ samples in the Monte Carlo approximation in
eq.\ (\ref{eq:logderivestimate}).
Figure \ref{fig:mnistLogreg}(a) displays the variance of the stochastic gradient for the LeGrad method
(red line) and the ReGrad method (blue line). As we can observe, the local expectation method
has roughly one order of magnitude lower variance than the reparametrization approach, which is in accordance with the
results from the previous section (see Figure \ref{fig:toyGaussian}). In contrast to these two methods, which have roughly comparable variances,
the LdGrad approach severely suffers from very high variance, as shown in Figure \ref{fig:mnistLogreg}(b) where the displayed
values are of the order of $10^7$. Despite the fact that $3925$ independent samples are used in the Monte Carlo approximation,
LdGrad still cannot make good progress when maximizing the lower bound. To get a sensible performance with LdGrad
we would need to increase the sample size considerably, which is computationally very expensive.
Of course, control variates can be used to reduce the variance to some extent, but it would have been much better if this
problem was not present in the first place. LeGrad can be viewed as a certain variant of LdGrad,
but it has the useful property that it does not suffer from the high variance problem.
Finally, Figure \ref{fig:mnistLogreg}(c) shows the evolution of the stochastic value of the lower bound
for all three methods. Here, we can observe that LeGrad has significantly faster and much more stable convergence than
the other two methods. Furthermore, unlike in the example from the previous section, on this real dataset the ReGrad
method clearly exhibits much slower convergence than the LeGrad approach.
\begin{figure*}
\caption{The panel in (a) shows the variance of the gradient for the variational parameter $\mu_1$
when using LeGrad (red line), ReGrad (blue line), while panel (b) shows the corresponding curve for LdGrad (green line).
The number of samples used by LdGrad was $S=3925$. The panel in (c) shows the evolution of the stochastic values of the lower bound.}
\label{fig:mnistLogreg}
\end{figure*}
\begin{figure*}
\caption{The first row shows $100$ training examples and the learned model parameters $W$ for all $K=200$ hidden variables.
The second row shows the corresponding reconstructed data and the evolution of the stochastic value of the lower bound.}
\label{fig:mnistSigmoid}
\end{figure*}
\subsection{Fitting one-layer sigmoid belief net with a recognition model \label{sec:sigmoidbnet}}
In the final example we consider a sigmoid belief network with a single hidden layer.
More precisely, by assuming the observations are binary data vectors of the form $\mathbf{y}_i \in \{0,1\}^D$,
such a sigmoid belief network assumes that each $\mathbf{y}_i$ is generated independently according to
\begin{equation}
p(\mathbf{y}|W) = \sum_{{\vect x}} \prod_{d=1}^D \left[ \sigma(\mathbf{w}_d^\top {\vect x}) \right]^{y_d}
\left[1-\sigma(\mathbf{w}_d^\top {\vect x}) \right]^{1 -y_d} p({\vect x}),
\label{eq:sigmoidbnJoint}
\end{equation}
where ${\vect x} \in \{0,1\}^K$ is a vector of hidden variables while the prior $p({\vect x})$ is
taken to be uniform. The matrix $W$ (that incorporates also a bias term)
consists of the set of model parameters to be estimated by fitting the model to the data. In theory
we could use the EM algorithm to learn the parameters $W$, however, such an approach is not
feasible because at the E step we need to compute the posterior distribution $p({\vect x}_i|\mathbf{y}_i,W)$
over each hidden variable which clearly is intractable since each ${\vect x}_i$ takes $2^K$ values. Therefore,
we need to apply approximate inference and next we consider
stochastic variational inference using the local expectation gradients algorithm.
More precisely, by following recent trends in the literature for fitting this type of models
\cite{MnihGregor2014, KingmaW13, stochbackpropDeepmind2014} we assume a variational
distribution parametrized by a ``reverse'' sigmoid network that predicts the latent vector ${\vect x}_i$
from the associated observation $\mathbf{y}_i$:
\begin{equation}
q_{V} ({\vect x}_i) = \prod_{k=1}^K \left[ \sigma({\vect v}_k^\top \mathbf{y}_i) \right]^{x_{i k}}
\left[1-\sigma({\vect v}_k^\top \mathbf{y}_i) \right]^{1 - x_{i k}},
\label{eq:sigmoidbnQ}
\end{equation}
where $V$ is a matrix (that also incorporates a bias term) comprising the set of all variational
parameters to be estimated. Often a variational distribution of the above form is referred
to as a {\em recognition model}, because it allows us to predict the activations of the
hidden variables in unseen data $\mathbf{y}_*$ without needing to fit each time
a new variational distribution.
Based on the above model we considered a set of
$1000$ binarized MNIST digits so that $100$ examples were chosen from
each digit class. The application of stochastic variational inference
is straightforward and boils down to constructing a separate lower bound for
each pair $(\mathbf{y}_i,{\vect x}_i)$ having the form
\begin{align}
\mathcal{F}_i(V,W) & = \sum_{{\vect x}_i} q_V({\vect x}_i) \log \prod_{d=1}^D
[\sigma(\mathbf{w}_d^\top {\vect x}_i)]^{y_{i d}} [1 - \sigma(\mathbf{w}_d^\top {\vect x}_i)]^{1-y_{i d}} \nonumber \\
& - \sum_{k=1}^K \sigma({\vect v}_k^\top \mathbf{y}_i) \log \sigma({\vect v}_k^\top \mathbf{y}_i) - \sum_{k=1}^K (1 - \sigma({\vect v}_k^\top \mathbf{y}_i)) \log (1 - \sigma({\vect v}_k^\top \mathbf{y}_i)).
\end{align}
The total lower bound is then expressed as the sum of all data-specific bounds and it is
maximized using stochastic updates to tune the forward model weights $W$ and the recognition weights
$V$. Specifically, the update we used for $W$ was based on drawing a single sample from the full
variational distribution: $q_{V}({\vect x}_i)$, with $i=1,\ldots,n$. The update
for the recognition weights was carried out by computing the stochastic gradients
according to the local expectation gradients so that for each ${\vect v}_k$ the
estimate obtained the following form
\begin{align}
\nabla_{{\vect v}_k} \mathcal{F} = \sum_{i=1}^n \nabla_{{\vect v}_k} \mathcal{F}_i & = \sum_{i=1}^n \sigma_{ik} (1 - \sigma_{ik}) \left[ \sum_{d=1}^D \log \left(
\frac{ 1 + e^{- \widetilde{y}_{i d} \mathbf{w}_d^\top ({\vect x}_{i \setminus k}^{(t)}, x_{i k}=0) } }
{1 + e^{- \widetilde{y}_{i d} \mathbf{w}_d^\top ({\vect x}_{i \setminus k}^{(t)}, x_{i k}=1) }} \right) + \log \left( \frac{1 - \sigma_{i k}}{\sigma_{ik}} \right) \right] \mathbf{y}_i,
\end{align}
where $\sigma_{i k} = \sigma({\vect v}_k^\top \mathbf{y}_i)$ and $\widetilde{y}_{i d}$ is the $\{-1,1\}$ encoding of $y_{i d}$. Figure \ref{fig:mnistSigmoid} shows several plots that illustrate how the model fits the data
and how the algorithm converges. More precisely, we optimized the model assuming $K=200$ hidden variables in the hidden layer of the sigmoid network
with the corresponding weights $W$ shown in Figure \ref{fig:mnistSigmoid}.
Clearly, the model is able to provide very good reconstruction of the
training data and exhibits also a fast and very stable convergence.
Finally, for comparison we also tried to optimize the variational lower bound using the LdGrad algorithm. However, this proved to be very problematic,
since the algorithm was unable to make good progress and had a severe tendency to get stuck in local maxima, possibly due to the high variance problems.
\section{Discussion \label{sec:discussion}}
We have presented a stochastic variational inference algorithm which we call
{\em local expectation gradients}. This algorithm provides a general framework
for estimating stochastic gradients that exploits the local independence structure
of the variational distribution in order to efficiently optimize the variational parameters.
We have shown that this algorithm does not suffer from high variance problems.
Future work will concern further theoretical analysis of the properties of the algorithm as well
as applications to hierarchical probabilistic models such as sigmoid belief networks with multiple layers.
\nocite{langley00}
\end{document} |
\begin{document}
\title{Glauber Dynamics on Trees \\ and Hyperbolic Graphs}
\author{
Noam Berger \thanks{Research supported by Microsoft graduate fellowship.}
\\ University of California, Berkeley \\
[email protected] \and
Claire Kenyon \\ LRI, UMR CNRS \\
Universit\'e Paris-Sud, France \\[email protected] \and
Elchanan Mossel \thanks{Supported by a visiting position at INRIA
and a PostDoc at Microsoft research.}
\\ University of California, Berkeley \\
[email protected] \and
Yuval Peres\thanks{Research supported by NSF Grants
DMS-0104073, CCR-0121555 and a Miller Professorship at UC Berkeley.}
\\ University of California, Berkeley \\
[email protected]
}
\maketitle
\thispagestyle{empty}
\begin{abstract}
We study continuous time Glauber dynamics for random configurations with
local constraints (e.g. proper coloring, Ising and Potts models) on
finite graphs with $n$ vertices and of bounded degree.
We show that the relaxation time
(defined as the reciprocal of the spectral gap
$|\lambda_1-\lambda_2|$) for the dynamics on trees and on planar
hyperbolic graphs, is polynomial in $n$. For these hyperbolic graphs,
this yields a general polynomial sampling algorithm for random configurations.
We then show that for general graphs,
if the relaxation time $\tau_2$ satisfies $\tau_2=O(1)$,
then the correlation coefficient, and the mutual information, between
any local function (which depends only on the configuration in a fixed
window) and the boundary conditions, decays exponentially in the distance
between the window and the boundary.
For the Ising model on a regular tree, this condition is sharp.
\end{abstract}
\section{Introduction}
\subsubsection*{Context}
In recent years, Glauber dynamics on the lattice ${\bf{Z}}^d$ was
extensively studied. A good account can be found in \cite{Ma}.
In this work, we study this dynamics on other graphs.
The main goal of our work is to determine
which geometric properties of the underlying graph
are most relevant to the mixing rate
of the Glauber dynamics on particle systems.
To define a general particle system~\cite{Li}
on an undirected graph $G=(V,E)$,
define a configuration as an element $\sigma$ of $A^{V}$ where
$A$ is some finite set, and to each edge $(v,w) \in E$,
associate a weight function
$\alpha_{vw} : A\times A \to \hbox{I\kern-.2em\hbox{R}}_{+}$.
The Gibbs distribution assigns every configuration $\sigma$ probability
proportional to $\prod_{ \{ v,w\} \in E}
\alpha_{vw}(\sigma_v,\sigma_w)$.
The Ising model
(for which $\alpha_{vw}(\sigma_v,\sigma_w)=e^{\beta\sigma_v\sigma_w}$)
and the Potts model are examples of such systems;
so is the coloring model (for which
$\alpha_{vw}={\bf 1}_{\sigma_v\neq\sigma_w}$).
On a finite graph, the Heat-Bath Glauber dynamics is a continuous
time Markov chain with the generator
\begin{equation}\label{generator}
({\mathcal L}(f))(\sigma)=\sum\limits_{v\in V}
\left(\sum\limits_{a\in A} K[\sigma \to \sigma_v^a]
\left(f(\sigma_v^a)-f(\sigma)\right) \right),
\end{equation}
where $\sigma_v^a$ is the configuration s.t.
\begin{equation*}
\sigma_v^a(w) = \left\{
\begin{array}{ll}
a & \mbox{ if }w=v \\
\sigma(w) & \mbox{ if }w\neq v
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
K[\sigma \to \sigma_v^a]=\frac
{\prod\limits_{w:(w,v)\in E}
\alpha_{vw}(a,\sigma_w)}
{\sum\limits_{a^{\prime}\in A}\left({\prod\limits_{w:(w,v)\in E}
\alpha_{vw}(a^{\prime},\sigma_w)}\right)}.
\end{equation*}
It is easy to check that this dynamics is reversible with respect to the
Gibbs measure.
An equivalent representation for the Glauber dynamics, known as the
{\em Graphical representation}, is the following:
Each vertex has a rate $1$ Poisson clock attached to it. These Poisson
clocks are independent of each other.
Assume that the clock at $v$ rang at time $t$ and that just before
time $t$ the configuration was $\sigma$.
Then at time $t$ we replace $\sigma(v)$ by a random spin $\sigma'(v)$
chosen according to the Gibbs distribution conditional on the
rest of the configuration:
\begin{equation*}
\frac{{\bf{P}}[ \sigma'(v) = i \mid \sigma]}{{\bf{P}}[\sigma'(v) = j \mid \sigma]}
= \prod_{w: \{ v,w\} \in E}
\frac{\alpha_{vw}(i,\sigma(w))}{\alpha_{vw}(j,\sigma(w))} .
\end{equation*}
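The graphical representation translates directly into a simulation loop: superposing the independent rate-$1$ clocks, events occur at total rate $n$ and the ringing vertex is uniform. A minimal sketch follows (Python; the Ising conditional update and the small path graph are illustrative assumptions).

```python
import math
import random

def run_glauber(sigma, neighbors, beta, t_max, rng):
    """Continuous-time heat-bath Glauber dynamics for the Ising model,
    driven by independent rate-1 Poisson clocks (superposed)."""
    n = len(sigma)
    t = rng.expovariate(n)               # waiting time to the next clock ring
    while t <= t_max:
        v = rng.randrange(n)             # the ringing vertex is uniform
        s = sum(sigma[w] for w in neighbors[v])
        # Gibbs conditional given the rest: P[+1]/P[-1] = exp(2*beta*s)
        p_plus = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
        sigma[v] = 1 if rng.random() < p_plus else -1
        t += rng.expovariate(n)
    return sigma

out = run_glauber([1, 1, 1], {0: [1], 1: [0, 2], 2: [1]}, 0.5, 5.0,
                  random.Random(0))
```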
We are interested in the rate of convergence of the Glauber dynamics
to the stationary distribution. Note that this process mixes $n=|V|$
times faster than
the corresponding discrete time process, simply because it performs (on
average) $n$ operations per time unit while the discrete time process
performs one operation per time unit.
In section \ref{subsec:cut-width},
we describe a connection between the geometry of a graph and the mixing time
of Glauber dynamics on it.
In particular, we show that
for balls in hyperbolic tilings, the Glauber dynamics for the Ising model,
the Potts model and proper coloring with $\Delta+2$ colors
(where $\Delta$ is the maximal degree),
have mixing time polynomial in the volume.
An example of such a hyperbolic graph
can be obtained from the binary tree by adding horizontal edges across levels;
another example is given in Figure \ref{5gon}.
\begin{figure}[htbp]
\epsfxsize=5cm
{\epsfbox{5gon}
}
\caption{A ball in a hyperbolic tiling}\label{5gon}
\end{figure}
In sections \ref{subsec:recur}-\ref{sec:hightemp}
we study Glauber dynamics for the Ising model
on regular trees.
For these trees
we show that the mixing time is polynomial at all temperatures, and
we characterize
the range of temperatures for which the spectral gap
is bounded away from zero.
Thus the notion that the two sides of the phase transition
(high versus low temperature) should correspond to polynomial versus
super-polynomial mixing times for the associated dynamics
fails for the Ising model on trees: here the two sides of the high/intermediate
versus low temperature phase transition correspond to uniformly bounded versus
{\em unbounded} inverse spectral gap. We
also exhibit another surprising phenomenon:
also exhibit another surprising phenomenon:
On infinite regular trees, there is a range
of temperatures in which the inverse spectral gap is bounded, even though
there are many different Gibbs measures.
In section \ref{sec:general} of the paper we go beyond trees and
hyperbolic graphs and study Glauber dynamics for
families of finite graphs of bounded degree.
We show that if the inverse spectral
gap of the Glauber dynamics on the ball centered at $\rho$
stays bounded as the ball grows,
then the correlation between the
state of a vertex $\rho$ and the states of
vertices at distance $r$ from $\rho$, must
decay exponentially in $r$.
\subsubsection*{Setup}
\noindent{\bf The graphs.}
Let $G=(V,E)$ be an infinite graph with maximal degree $\Delta$.
Let $\rho$ be a distinguished vertex and denote by $G_r = (V_r,E_r)$ the
induced graph
on $V_r = \{v \in V : {\rm dist}(\rho,v) \leq r\}$.
Let $n_r$ be the number of vertices in $G_r$.
In parts of the paper we
focus on the case where $G=T=(V,E)$ is the infinite
$b$-ary tree.
In that case, $T_r^{(b)}=(V_r,E_r)$ will denote the $b$-ary tree with $r$ levels.
\noindent{\bf The Ising model.}
In the Ising model on $G_r$ at inverse temperature $\beta$, every
configuration $\sigma \in \{ -1,1 \}^{V_r}$ is assigned probability
$$
\mu[\sigma]=Z(\beta)^{-1} \exp\Big(\beta \sum_{\{v,w\}\in E_r} \sigma(v)
\sigma(w) \Big)
$$
where
$Z(\beta)$ is a normalizing constant.
When $G_r = T_r^{(b)}$, this measure has the following equivalent
definition~\cite{EKPS}:
Fix $\epsilon=(1+e^{2 \beta})^{-1}$.
Pick a random spin $\pm 1$
uniformly for the root of the tree.
Scan the tree top-down, assigning vertex $v$ a spin equal to
the spin of its parent with probability $1-\epsilon$, and the opposite spin with
probability $\epsilon$.
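This broadcast construction is straightforward to implement; the sketch below (Python; the tuple indexing of vertices is an illustrative encoding) generates a sample on the $b$-ary tree with $r$ levels.

```python
import random

# Top-down ("broadcast") construction of the Ising measure on the b-ary
# tree: a uniform root spin, and each child copies its parent with
# probability 1 - epsilon.  Vertices are indexed by tuples: () is the
# root and v + (i,) are the children of v.

def broadcast_ising(b, r, epsilon, rng):
    spins = {(): rng.choice([-1, 1])}
    level = [()]
    for _ in range(r):
        nxt = []
        for v in level:
            for i in range(b):
                child = v + (i,)
                flip = -1 if rng.random() < epsilon else 1
                spins[child] = flip * spins[v]
                nxt.append(child)
        level = nxt
    return spins

spins = broadcast_ising(2, 3, 0.2, random.Random(1))
# The 3-level binary tree has 1 + 2 + 4 + 8 = 15 vertices.
```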
The Heat-Bath Glauber dynamics for the Ising model
chooses the new spin $\sigma'(v)$ in such a way that:
\begin{equation*}
\frac{{\bf{P}}[ \sigma'(v) = +1 \mid \sigma]}{{\bf{P}}[\sigma'(v) = -1 \mid \sigma]}
= \exp\Big( 2 \beta \sum_{w: \, \{ w, v\} \in E_r}\sigma(w)\Big) \,.
\end{equation*}
See \cite{Li} or \cite{Ma} for more background.
\noindent{\bf Mixing times.}
\begin{Definition}
For a reversible continuous-time Markov chain,
let $0 = \lambda_1 \leq \lambda_2 \leq \ldots \leq \lambda_k$
be the eigenvalues of $-{\mathcal L}$, where ${\mathcal L}$ is the generator.
The {\bf spectral gap} of the chain is
defined
as $\lambda_2$, and the {\bf relaxation time},
$\tau_2$, is defined as
the inverse of the spectral gap.
\end{Definition}
Note that the corresponding discrete-time Glauber dynamics has
transition matrix $M=I+{\frac{1}{n}}{\mathcal L}$, where $n$ is the number
of vertices. Moreover, the eigenvalues of $M$ are
$1,1-{\frac{\lambda_2}{n}},1-{\frac{\lambda_3}{n}},\ldots$, and
therefore the spectral gap of the discrete dynamics is the spectral
gap of the continuous dynamics divided by $n$.
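These facts can be verified numerically on a tiny instance. The sketch below (Python; the single-edge Ising system is an illustrative choice) builds the generator $\mathcal{L}$ explicitly; one can check that its rows sum to zero, that detailed balance holds with respect to the Gibbs weights, and that $M=I+\frac{1}{n}\mathcal{L}$ is a stochastic matrix.

```python
import math
from itertools import product

beta = 0.7
neighbors = {0: [1], 1: [0]}          # a single edge: illustrative choice
n = len(neighbors)
configs = list(product([-1, 1], repeat=n))

def mu(x):
    """Unnormalized Gibbs weight exp(beta * sum over edges)."""
    return math.exp(beta * x[0] * x[1])

def heat_bath(x, v, a):
    """K[x -> x_v^a] for the Ising model."""
    def w(s):
        return math.exp(beta * s * sum(x[u] for u in neighbors[v]))
    return w(a) / (w(1) + w(-1))

# Off-diagonal generator entries are the single-flip rates; the diagonal
# entry makes each row sum to zero.
L = [[0.0] * len(configs) for _ in configs]
for i, x in enumerate(configs):
    for v in range(n):
        y = list(x)
        y[v] = -x[v]
        j = configs.index(tuple(y))
        L[i][j] = heat_bath(x, v, -x[v])
        L[i][i] -= L[i][j]

# Discrete-time transition matrix M = I + L/n.
M = [[(1.0 if i == j else 0.0) + L[i][j] / n for j in range(len(configs))]
     for i in range(len(configs))]
```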
\begin{Definition}
For measures $\mu$ and $\nu$ on the same discrete space,
the {\bf total-variation distance} $d_V(\mu,\nu)$ between $\mu$ and $\nu$
is defined
as
\[
d_V(\mu,\nu) = \frac{1}{2} \sum_{x} |\mu(x) - \nu(x)| \,.
\]
\end{Definition}
\begin{Definition}
Consider an ergodic Markov chain $\{X_t\}$
with stationary distribution $\pi$ on a finite state
space. Denote by ${\bf{P}}^t_x$ the law of $X_t$ given $X_0=x$.
The {\bf mixing time} of the chain, $\tau_1$, is defined as
\[
\tau_1 = \inf \{ t : \sup_{x,y} d_V({\bf{P}}^t_x,{\bf{P}}^t_y) \leq e^{-1}\}.
\]
\end{Definition}
\noindent For $t \ge \ln (1/\epsilon )\,\tau_1$, we have
$$\sup_{x} d_V({\bf{P}}^t_x,\pi) \leq \sup_{x,y} d_V({\bf{P}}^t_x,{\bf{P}}^t_y) \leq
\epsilon .$$
Using $\tau_2$ one can bound the mixing time $\tau_1$, since every
reversible Markov
chain with stationary distribution $\pi$ satisfies (see, e.g., \cite{Al}),
\begin{equation} \label{eq:t1_and_t2}
\tau_2 \leq \tau_1 \leq \tau_2
\left( 1 + \log \left( (\min_{\sigma} \pi(\sigma))^{-1} \right)
\right).
\end{equation}
For the Markov chains studied in this paper, this gives
$\tau_2 \leq \tau_1 \leq O(n)\tau_2$.
\noindent{\bf Cut-Width and relaxation time.}
\begin{Definition}
The {\bf cut-width} ${\xi}(G)$ of a graph $G$ is the
smallest integer for which there exists a labeling $v_1,\ldots,v_n$ of the
vertices such that for all $1 \leq k \leq n$,
the number of edges from $\{v_1,\ldots,v_k\}$ to
$\{v_{k+1},\ldots,v_n\}$ is at most ${\xi}(G)$.
\end{Definition}
\begin{Remark}
The {\em vertex-separation} of a graph $G$
is defined analogously to the cut-width in terms
of vertices among $\{v_1,\ldots,v_k\}$ that are adjacent to
$\{v_{k+1},\ldots,v_n\}$. In \cite{Ki} it is shown that the vertex-separation of
$G$ equals its {\em path-width}, see \cite{RS}.
In \cite{first} the cut-width was called the {\em exposure}.
\end{Remark}
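On very small graphs the cut-width can be computed by brute force directly from the definition; a minimal sketch (Python; the example graphs are illustrative choices):

```python
from itertools import permutations

def cut_width(vertices, edges):
    """Brute-force cut-width: minimize, over labelings v_1..v_n, the
    maximum number of edges crossing a prefix cut.  Exponential in n."""
    best = len(edges)
    for order in permutations(vertices):
        pos = {v: i for i, v in enumerate(order)}
        worst = max(
            sum(1 for u, w in edges if (pos[u] < k) != (pos[w] < k))
            for k in range(1, len(order))
        )
        best = min(best, worst)
    return best

path4 = [(0, 1), (1, 2), (2, 3)]
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
# Sweeping a path crosses one edge at a time; any prefix cut of a cycle
# crosses an even number of edges, hence at least two.
```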
Generalizing an argument in \cite[Theorem 6.4]{Ma} for
${\bf{Z}}^d$, (see also \cite{JS1}), we prove:
\begin{Proposition} \label{prop:cut-width}
Let $G$ be a finite graph with $n$ vertices and
maximal degree $\Delta$.
\begin{enumerate}
\item\label{prop:exp:ising}
Consider the Ising model on $G$.
The relaxation time of the Glauber dynamics is at most
$n e^{(4 {\xi}(G) + 2 \Delta) \beta}$.
\item\label{prop:exp:color}
Consider the coloring model on $G$. If the number of colors $q$
satisfies $q \geq \Delta + 2$, then the relaxation time of the Glauber
dynamics is at most
$(\Delta + 1) n (q-1)^{{\xi}(G) + 1}$.
\end{enumerate}
\end{Proposition}
\noindent
Analogous results hold for the independent set and hard core models.
\noindent {\bf Cut-Width and long-range correlations for hyperbolic
graphs.} The usefulness of Proposition \ref{prop:cut-width} comes
about when we bound the relaxation time of certain graphs by
estimating their cut-width. The following proposition bounds the
cut-width of balls in hyperbolic tilings of the plane. Recall that
the {\bf Cheeger constant} of an infinite graph $G$ is
\begin{equation}\label{eq:def_cheeg}
c(G) =
\inf\left\{\left.
\frac{|\partial A|}{|A|}\right|
A\subseteq G ;\ 0 < |A| < \infty
\right\} ,
\end{equation}
where $\partial A$ is the set of vertices of $A$ which have
neighbors in $G\setminus A$.
\begin{Proposition}\label{prop:hyptylexp}
For every $c > 0$ and $\Delta < \infty$,
there exists a constant $C = C(c,\Delta)$ such that if $G$ is an infinite planar graph with
\begin{itemize}
\item
Cheeger constant at least $c$,
\item
maximal degree at most $\Delta$, and
\item
the property that for every $r$ no cycle from $G_r$ separates two
vertices of $G\setminus G_r$,
\end{itemize}
then ${\xi}(G_r) \leq C
\log n_r$ for all $r$, where $n_r$ is the number of vertices of $G_r$.
\end{Proposition}
Combining this with Proposition \ref{prop:cut-width} we get that the
Glauber dynamics for the Ising models on balls in the hyperbolic
tiling has relaxation time polynomial in the volume for every
temperature. On the other hand, we have the following proposition:
\begin{Proposition}\label{prop:corrhyp}
Let $G$ be a planar graph with bounded degrees, bounded co-degrees and a
positive Cheeger constant. Then there exist $\beta' < \infty$ and
$\delta > 0$ such that for
all $r$, all $\beta > \beta'$, and all vertices $u,v$ in $G_r$,
the Ising model on $G_r$ satisfies ${\bf{E}}[\sigma_u \sigma_v] \geq \delta$.
In other words, at low enough temperature there are
long-range correlations.
\end{Proposition}
This shows that for the Ising model on balls of hyperbolic tilings at
very low temperature, there are long-range correlations coexisting with
polynomial time mixing.
While there is no characterization of all planar graphs with positive Cheeger constant, an important family of such graphs is given by
Cayley graphs of nonelementary Fuchsian groups, which are nonamenable and hence have a positive Cheeger constant; see \cite{pater,magnus,katok} for background and \cite{HJR} for some
explicit estimates.
\noindent{\bf Relaxation time for the Ising model on the tree.}
The Ising model on the
$b$-ary tree has three different regimes,
see \cite{BRZ, EKPS}.
In the high temperature regime, where $1 - 2 \epsilon < 1/b$, there is
a unique Gibbs measure on the infinite tree, and the expected value of the spin at the
root $\sigma_{\rho}$ given any
boundary conditions $\sigma_{{\tt path}rtial T_r^{(b)}}$ decays
exponentially in $r$.
In the intermediate regime, where $1/b < 1 - 2 \epsilon < 1/\sqrt{b}$,
the exponential decay described above still holds
for typical boundary conditions, but not for certain exceptional
boundary conditions, such as the all $+$ boundary;
consequently, there are
infinitely many Gibbs measures on the infinite tree.
In the low temperature regime, where $1 - 2 \epsilon > 1/\sqrt{b}$, typical
boundary conditions impose bias on the expected value of the spin at the root $\sigma_{\rho}$.
\begin{Theorem} \label{thm:tree}
Consider the Ising model on the $b$-ary tree $T_r= T_r^{(b)}$ with $r$ levels.
Let $\epsilon=(1+e^{2 \beta})^{-1}$.
The relaxation time $\tau_2$
for Glauber dynamics on $T_r^{(b)}$ can be bounded as follows:
\begin{enumerate}
\item \label{treeupperbound}
The relaxation time is polynomial at all temperatures:
$\tau_2=n_r^{O(\log(1/\epsilon ))}$. Furthermore, the limit
\[
\lim\limits_{r\to\infty}\frac{\log(\tau_2(T_r^{(b)},\beta))}{\log(n_r)}
\]
exists.
\item {Low temperature regime:}
\begin{enumerate}
\item \label{treesuperlinear}
If $1-2\epsilon\geq 1/\sqrt{b}$ then $\sup_r \tau_2 (T_r) =\infty$.
In fact, $\tau_2(T_r) =\Omega(n_r^{\log_b(b(1-2\epsilon )^2)})$ when
$1 - 2\epsilon > 1/\sqrt{b}$ and $\tau_2(T_r) = \Omega(\log n_r)$ when
$1 - 2\epsilon = 1/\sqrt{b}$.
\item \label{treeinfinitedegree}
Moreover, the exponent of $\tau_2$ tends to infinity
as $\epsilon$ tends to zero:
$\tau_2(T_r) =n_r^{\Omega(\log (1/\epsilon ))}$.
\end{enumerate}
\item \label{treelinear} {Intermediate and high temperature regimes:} \\
If $ 1-2\epsilon < 1/\sqrt{b} $ then
the relaxation time is uniformly bounded:
$\tau_2 = O(1)$. Furthermore, this result holds for every external field
$\{H(v)\}_{v\in T_r}$.
\end{enumerate}
\end{Theorem}
In particular we obtain from Equation (\ref{eq:t1_and_t2})
that in the low temperature region
$\tau_1 = n_r^{\Theta(\beta)}$, and in the intermediate and high
temperature regions $\tau_1 = O(n_r)$.
A recent work by Peres and Winkler \cite{PW03} compares
the mixing times of single site and block dynamics
for the heat-bath Glauber dynamics for the Ising model.
They show that if the blocks are of bounded volume, then
the same mixing time up to constants is obtained for the single site
and block dynamics.
Combining these results with the path coupling argument of Section \ref{sec:hightemp},
it follows that
$\tau_1 = O(\log n_r)$ in the intermediate and high
temperature regions.
We emphasize that Theorem \ref{thm:tree} implies that in the
intermediate region
$1/b < 1 - 2 \epsilon < 1/\sqrt{b}$,
the relaxation time is bounded by a constant,
yet in the infinite volume
there are infinitely many Gibbs measures.
This Theorem is perhaps easiest to appreciate when compared to other
results on the Gibbs distribution for the Ising model
on binary trees, summarized in Table~\ref{tableau}.
\begin{table}
\begin{center}
{\small
\begin{tabular}{|l|c|l|l|c|}\hline
Temp. & $1-2\epsilon$ & $\sigma_{\rho}|\sigma_{\partial T}\equiv + $ &
$I(\sigma_{\rho},\sigma_{\partial T})$
& $\tau_2$ \\ \hline\hline
high & $< 1/2$ & unbiased & $\to 0$ & $O(1)$\\ \hline
med. & $\in (\frac{1}{2}, \frac{1}{\sqrt{2}})$ & biased
& $\to 0$ & $O(1)$\\ \hline
low & $> \frac{1}{\sqrt{2}}$ & biased & $\inf>0$ &
$n^{\Omega(1)}$ \\ \hline
freeze & $1-o(1)$ & biased & $1 - o(1)$ &
$n^{\Theta(\beta)}$ \\ \hline
\end{tabular} }
\end{center}
\caption{The Ising model on binary trees. Here the root is denoted $\rho$,
and the vertices at distance $r$ from the root are denoted $\partial T$.}
\label{tableau}
\end{table}
The proof of the low temperature result is quite general and applies
to other models with ``soft'' constraints,
such as the Potts model on the tree.
\noindent{\bf Spectral gap and correlations.}
At infinite temperature, where distinct vertices are independent, the
Glauber dynamics on a graph of $n$ vertices
reduces to a random walk (accelerated by a factor of $n$) on the discrete $n$-dimensional cube,
where it is well known that the relaxation time is $\Theta(1)$.
Our next result shows that at any temperature where
such fast relaxation takes place, a strong form of independence holds.
This is well known in ${\bf{Z}}^d$, see~\cite{Ma}, but our formulation is valid
for any graph of bounded degree.
\begin{Theorem} \label{thm:general}
Denote by $\sigma_r$ the configuration on all vertices
at distance
$r$ from $\rho$.
If $G$ has bounded degree and the relaxation time
of the Glauber dynamics satisfies $\tau_2(G_r) = O(1)$, then
the Gibbs distribution on $G_r$ has the following property.
For any fixed finite set of vertices $A$, there exists $c_A>0$ such
that for $r$ large enough,
\begin{equation} \label{eq:cor_decay}
{\rm Cov}[f,g] \le e^{-c_A r} \sqrt{{\rm Var}(f){\rm Var}(g)} \, ,
\end{equation}
provided that $f(\sigma)$ depends only on $\sigma_A$ and $g(\sigma)$ depends
only on $\sigma_r$. Equivalently, there exists $c_A'>0$ such that
\begin{equation} \label{eq:mut_inf_decay}
I[\sigma_A,\sigma_r] \le e^{-c_A' r} \, ,
\end{equation}
where $I$ denotes mutual information (see~\cite{CT}).
\end{Theorem}
This theorem holds in a very general setting which includes Potts models,
random colorings, and other local-interaction models.
Our proof of Theorem \ref{thm:general} uses ``disagreement
percolation'' and a coupling argument
exploited by van den Berg, see \cite{vB}, to establish uniqueness of
Gibbs measures in ${\bf{Z}}^d$; according to F. Martinelli (personal communication)
this kind of argument is originally due to B. Zegarlinski.
Note however, that Theorem \ref{thm:general} holds also when there
are multiple Gibbs measures --
as the case of the Ising model in the intermediate regime demonstrates.
Moreover, combining Theorem \ref{thm:general} and Theorem \ref{thm:tree}, one
infers that for $1 - 2 \epsilon < 1/\sqrt{b}$, we have
$\lim_{r \to \infty} I[\sigma_{\rho},\sigma_r] = 0$.
This yields another proof of this fact, which was established earlier in \cite{BRZ,Io,EKPS}.
\subsubsection*{Plan of the paper}
In section~\ref{sec:alltemp} we prove Proposition~\ref{prop:cut-width}
via a canonical path argument, and give the resulting polynomial time
upper bound of
Theorem~\ref{thm:tree} part \ref{treeupperbound}.
We also present a more elementary proof of the upper bound
on the relaxation time for the tree, which gives sharper exponents and
the existence of a limiting exponent;
this
proof uses Martinelli's block dynamics to
show sub-additivity.
In section~\ref{sec:lowerbounds} we sketch a proof of
Theorem~\ref{thm:tree} part~\ref{treesuperlinear} and present a proof
of Theorem~\ref{thm:tree} part \ref{treeinfinitedegree}.
These lower bounds are
obtained by finding a low conductance ``cut'' of the configuration
space, using global majority of the boundary spins
for the former result,
and recursive majority for the latter result.
In section~\ref{sec:hightemp}
we establish the high temperature result, using comparison to
block dynamics which are analyzed via path-coupling.
Finally, in section~\ref{sec:general} we prove Theorem~\ref{thm:general} by a Peierls
argument controlling ``paths of disagreement''
between two coupled dynamics.
{\bf Remark:}
Most of the results proved here were presented
(along with proof sketches) in the extended abstract \cite{first}.
However, the proofs of our results for hyperbolic graphs
(see Section 2.2), which involve some interesting geometry,
were not even sketched there. Also, the general polynomial
upper bound for trees that we establish in Section 2.3
is a substantial improvement
on the results of \cite{first}, since it only assumes the dynamics is
ergodic and allows for arbitrary hard-core constraints.
\section{Polynomial Upper Bounds}\label{sec:alltemp}
\subsection{Cut-Width and mixing time} \label{subsec:cut-width}
We begin by showing how part
\ref{prop:exp:ising} of Proposition \ref{prop:cut-width} implies the upper
bound in part \ref{treeupperbound} of Theorem~\ref{thm:tree}.
\begin{Lemma}\label{firstdepthsearch}
Let $T_r^{(b)}$ be the $b$-ary tree with $r$ levels. Then
${\xi}(T_r^{(b)})\leq(b-1)r+1.$
\end{Lemma}
\begin{proof}
Order the vertices using the {\em depth-first search left-to-right
order}, i.e., use the
following labeling for the vertices: The root is labeled
$\langle 0,0,\ldots,0\rangle$. The children of the root are labeled $\langle 1,0,\ldots,0\rangle$
through $\langle b,0,\ldots,0\rangle$, and so on, so that the children of
$\langle a_1,a_2,\ldots,a_k,0,\ldots,0\rangle$ are $\langle a_1,a_2,\ldots,a_k,1,0,\ldots,0\rangle$
through $\langle a_1,a_2,\ldots,a_k,b,0,\ldots,0\rangle$. Then order the vertices
lexicographically. Note that in the lexicographic ordering, a vertex always appears before its children.
When we have enumerated all vertices up to some vertex
$\langle a_1,a_2,\ldots,a_k,0,\ldots,0\rangle$ at depth $k$, the only vertices that have been enumerated but whose
children have not all been enumerated are among the set of at most $k+1\leq r+1$ vertices
\[
\left\{
\langle 0,0,\ldots,0\rangle,\langle a_1,0,\ldots,0\rangle,\langle a_1,a_2,0,\ldots,0\rangle,\ldots,\langle a_1,a_2,\ldots,a_k,0,\ldots,0\rangle
\right\}.
\]
Each of these vertices has at most $b$ children,
each of them except possibly the last has at least one child that has
already been enumerated, and a vertex at depth $r$ has no children. Hence every
prefix cut has at most $(r-1)(b-1)+b=(b-1)r+1$ crossing edges, so
$${\xi}(T_r^{(b)})\leq(b-1)r+1.$$
\end{proof}
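The lemma can be checked mechanically for small $b$ and $r$; the sketch below (Python; the tuple encoding of vertices is an illustrative choice) computes the worst prefix cut of the depth-first left-to-right order and compares it with the bound $(b-1)r+1$.

```python
def dfs_prefix_cut(b, r):
    """Worst prefix cut of the b-ary tree with r levels under the
    depth-first left-to-right (lexicographic) vertex order."""
    order, edges = [], []

    def visit(v):
        order.append(v)
        if len(v) < r:
            for i in range(1, b + 1):
                edges.append((v, v + (i,)))
                visit(v + (i,))

    visit(())
    pos = {v: i for i, v in enumerate(order)}
    return max(
        sum(1 for u, w in edges if (pos[u] < k) != (pos[w] < k))
        for k in range(1, len(order))
    )

for b, r in [(2, 2), (2, 3), (3, 2), (2, 4)]:
    assert dfs_prefix_cut(b, r) <= (b - 1) * r + 1
```

For $b=r=2$ this order attains the bound with equality, so the estimate cannot be improved for this particular ordering.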
\begin{Corollary}\label{cor:thm:tree}
\begin{enumerate}
\item
The relaxation time of the Glauber dynamics for the Ising model on $T_r^{(b)}$ is at most
\[
C(\epsilon)\, n_r^{1 + 4(b-1) \log_b\frac{1-\epsilon}{\epsilon}}=n_r^{O(\log
(1/\epsilon ))}.
\]
\item
The relaxation time of the Glauber dynamics for the coloring on
$T_r^{(b)}$ with $q>b+2$ colors is at most
\[
(b+1)\, n_r^{1 + 2(b-1) \log_b(q)}.
\]
\end{enumerate}
\end{Corollary}
\begin{proof}
The Corollary follows from Lemma \ref{firstdepthsearch} and
Proposition \ref{prop:cut-width}.
\end{proof}
The upper bound in part~\ref{treeupperbound} of Theorem~\ref{thm:tree}
follows immediately.
\begin{proof}[Proof of part \ref{prop:exp:ising} of Proposition
\ref{prop:cut-width}]
The proof follows the lines
of the proof given in \cite[Theorem 6.4]{Ma} for
the Ising model in ${\bf{Z}}^d$, (see also \cite{JS1}).
Let $\Gamma$ be the graph corresponding to the transitions of the
Glauber dynamics on the graph $G$. Between any two configurations
$\sigma$ and $\eta$, we define a ``canonical path" $\gamma(\sigma
,\eta )$ as follows. Fix an order $<$ on the vertices of $G$ which
achieves the cut-width. Consider the vertices $v_1 < v_2 < \ldots$
at which $\sigma_v \neq \eta_v$.
We define the $k$-th configuration $\sigma^{(k)}$ on the path
$\gamma(\sigma,\eta)$ by giving spin $\sigma_v$ to every
labeled vertex $v\leq v_k$, spin $\eta_v$ to every labeled vertex $v>v_k$,
and spin $\sigma_v=\eta_v$ for every unlabeled vertex $v$. Note
that $\sigma^{(0)}=\eta$ and $\sigma^{(d(\sigma ,\eta
))}=\sigma$. Since $\sigma^{(k-1)}$ and $\sigma^{(k)}$ are
identical except for the spin of vertex $v_k$, they are adjacent
in $\Gamma$. This defines $\gamma(\sigma ,\eta )$ (see
Figure~\ref{fig:can-path}).
\begin{figure}[b]
\center{\epsfig{figure=can-path, height=1.5in, width=2in}}
\caption{The canonical path from $\sigma$ to $\eta$. The vertices
on which $\sigma$ and $\eta$ agree are marked in grey; the other
vertices are colored black if their spin is chosen according to
$\sigma$ and white if their spin is chosen according to $\eta$.
}\label{fig:can-path}
\end{figure}
Note that for every $k$, there are at
most ${\xi}(G)$ pairs of
adjacent vertices $(v_i,v_j)$ such that $i\leq k<j$, hence any
configuration on the canonical path between $\sigma$ and $\eta$ will have at
most ${\xi}(G)$ edges between spins copied from
$\sigma$ and spins copied from $\eta$.
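The construction of the canonical path can be sketched as follows (Python; the example configurations and the ordering are illustrative choices).

```python
def canonical_path(sigma, eta, order):
    """The canonical path gamma(sigma, eta): sigma^(0) = eta, and sigma^(k)
    takes sigma's spin on the first k disagreement vertices (in the
    cut-width-achieving order) and eta's spin elsewhere."""
    disagree = [v for v in order if sigma[v] != eta[v]]
    path = [dict(eta)]
    for k in range(len(disagree)):
        conf = dict(path[-1])
        conf[disagree[k]] = sigma[disagree[k]]
        path.append(conf)
    return path

sigma = {0: 1, 1: 1, 2: 1, 3: -1}
eta = {0: -1, 1: 1, 2: -1, 3: -1}
path = canonical_path(sigma, eta, [0, 1, 2, 3])
# Endpoints are eta and sigma, and consecutive configurations differ at
# exactly one vertex, so they are adjacent in the transition graph Gamma.
```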
{\noindent \bf Using canonical paths to bound the mixing rate.}
For each directed edge ${\bf e}=(\omega,\zeta)$ on the
configuration graph $\Gamma$, we say that ${\bf e}\in
\gamma(\sigma ,\eta)$ if $\omega$ and $\zeta$ are adjacent
configurations in $\gamma(\sigma ,\eta).$ Let
$$\rho = \sup_{{\bf e}}
\sum_{\sigma , \eta : \, {\bf e}\in \gamma(\sigma ,\eta)}
{\frac{\mu[\sigma]\mu[\eta]}{Q({\bf e})}},$$
where
$\mu$ is the stationary measure (i.e. the Gibbs distribution),
and for any two adjacent configurations $\omega$ and $\zeta$,
$Q({\bf e})=Q((\omega,\zeta))=\mu[\omega]K[\omega \to \zeta]$.
If $L$ is the maximal length of a canonical path, then
by the argument in \cite{JS1,Ma}, the relaxation time of the Markov
chain is at most
\begin{equation} \label{eq:canon_path}
\tau_2 \leq L \rho.
\end{equation}
Since $L \leq n$, it follows that $\tau_2 \leq n\rho$,
thus it only remains to prove an upper bound on $\rho$.
{\bf Analysis of the canonical path}. For each directed edge ${\bf
e}$ in $\Gamma$, we define an injection from canonical paths going
through ${\bf e}$ in the specified direction, to configurations on
$G$. To a canonical path $\gamma(\sigma ,\eta)$ going through
${\bf e}$, such that ${\bf e}=(\sigma^{(k-1)},\sigma^{(k)})$, we
associate the configuration $\varphi$ which has spin $\eta_{v_i}$ for
every $v_i$ such that $i \geq k$ and spin $\sigma_{v_i}$ for every
$v_i$ such that $i < k$. To verify that this is an injection, note
that one can reconstruct $\sigma$ and $\eta$ by first identifying
the unique $k_0$ such that $\omega$ and $\zeta$ differ on $v_{k_0}$ and
then taking (as in Figure~\ref{fig:injection})
\begin{figure}
\center{\epsfig{figure=injection, height=1.5in, width=2in}}
\caption{The injection from $({\bf e},\varphi )$ to $(\sigma , \eta )
$. The vertices on which both endpoints of ${\bf e}$ and $\varphi$
agree are marked in grey; the other vertices are colored black if
they precede $v_{k_0}$ and their spin is chosen according to
$\varphi$, or if they are preceded by $v_{k_0}$ and their spin is
chosen according to the endpoints of ${\bf e}$; and are colored
white otherwise.}\label{fig:injection}
\end{figure}
\[
\sigma_{v_k} = \left\{
\begin{array}{ll}
\omega_{v_k} & \mbox{if } \omega_{v_k} = \varphi_{v_k} \\
\omega_{v_k} & \mbox{if } k \geq k_0 \mbox{ and } \omega_{v_k} \neq \varphi_{v_k}\\
\varphi_{v_k} & \mbox{if } k < k_0 \mbox{ and } \omega_{v_k} \neq \varphi_{v_k}
\end{array}
\right.\]
and
\[
\eta_{v_k} = \left\{
\begin{array}{ll}
\omega_{v_k} & \mbox{if } \omega_{v_k} = \varphi_{v_k} \\
\varphi_{v_k} & \mbox{if } k \geq k_0 \mbox{ and } \omega_{v_k} \neq \varphi_{v_k}\\
\omega_{v_k} & \mbox{if } k < k_0 \mbox{ and } \omega_{v_k} \neq \varphi_{v_k}.
\end{array}
\right.\]
\noindent By the property of our labeling,
\begin{equation} \label{eq:prop_label}
\mu[\sigma]\mu[\eta]\leq
\mu[\sigma^{(k-1)}]\mu[\varphi] e^{4 {\xi}(G) \beta}
\end{equation}
and $K[\sigma^{(k-1)} \to \sigma^{(k)}] \geq \exp(-2\Delta\beta).$
Now a short calculation concludes the proof:
\begin{eqnarray} \nonumber
\rho &\leq&
\sup_e \sum_{\sigma,\eta \hbox{ s.t. } e \in \gamma(\sigma,\eta)}
\frac{\mu[\sigma] \mu[\eta]} {\mu[\sigma^{(k-1)}] K[\sigma^{(k-1)} \to
\sigma^{(k)}]} \\ \nonumber
&\leq&
e^{4 {\xi}(G) \beta}
\sup_e\sum_{\varphi} \frac{\mu[\sigma^{(k-1)}] \mu[\varphi]}{\mu[\sigma^{(k-1)}]
K[\sigma^{(k-1)} \to \sigma^{(k)}]}
\\ &\leq& \label{eq:canonical}
e^{4 {\xi}(G) \beta}
e^{2 \Delta \beta} \sum_{\varphi} \mu[\varphi]
\leq e^{(4 {\xi}(G) + 2 \Delta) \beta}\,.
\end{eqnarray}
The last inequality follows from the fact
that the map
$\gamma \to \varphi$ is injective and therefore $\sum_{\varphi} \mu[\varphi] \leq 1$.
\end{proof}
\begin{proof}[Proof of part \ref{prop:exp:color} of Proposition \ref{prop:cut-width}]
The previous argument does not directly extend to
coloring, because the
configurations $\sigma^{(k)}$ along the path (as defined above) may not be
proper colorings.
Assume that $q \geq \Delta + 2$ and let $v_1 < v_2 < \cdots < v_{n}$ be an ordering
of the vertices of $G$ which achieves the cut-width. We construct a path
$\gamma(\sigma,\eta)$ such that
\begin{equation} \label{eq:coloring1}
|\gamma(\sigma,\eta)| \leq (\Delta + 1) n.
\end{equation}
Moreover, for all $\tau \in \gamma(\sigma,\eta)$ there exists a $k$ such that
\begin{equation} \label{eq:coloring2}
\tau_v = \left\{ \begin{array}{ll} \eta_v & \mbox{if } v \leq v_k \\
\sigma_v & \mbox{if } v > v_k \mbox{ and } v \nsim \{v_1,\ldots,v_k\}.
\end{array} \right.
\end{equation}
The path $\gamma(\sigma,\eta)$ satisfying (\ref{eq:coloring1}) and (\ref{eq:coloring2})
is constructed as follows. Set
$\sigma^0=\sigma$. Given $\sigma^k$, we create
$\sigma^{k+1}$ as follows: Let
$i(k)=\inf\{j:\sigma^k_{v_j}\neq\eta_{v_j}\}$. If
\[
\sigma'_v = \left\{ \begin{array}{ll} \sigma^k_v & \mbox{if } v \neq v_{i(k)} \\
\eta_v & \mbox{if } v = v_{i(k)}
\end{array} \right.
\]
is a legal configuration, then $\sigma^{k+1}=\sigma'$. Otherwise, let
\[
h(k)=\inf\{j:\sigma^k_{v_j}=\eta_{v_{i(k)}} \mbox{ and } v_j\sim v_{i(k)}\},
\]
and let $c$ be a color that is different from $\eta_{v_{i(k)}}$ and is
legal for $v_{h(k)}$ under $\sigma^k$. Such a color exists because $q
\geq \Delta + 2$.
Then, we take
\[
\sigma^{k+1}_v = \left\{ \begin{array}{ll} \sigma^k_v & \mbox{if } v \neq v_{h(k)} \\
c & \mbox{if } v = v_{h(k)}.
\end{array} \right.
\]
It is easy to verify that the path satisfies (\ref{eq:coloring1}) and (\ref{eq:coloring2}).
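The recoloring procedure can be sketched as follows (Python; the small path graph, the colors, and the choice $q=4$ are illustrative assumptions; blockers are recolored with a color different from the target, so the already-fixed prefix is never disturbed).

```python
def coloring_path(sigma, eta, order, neighbors, q):
    """Build the recoloring path from sigma to eta: repeatedly fix the first
    vertex (in the given order) whose color disagrees with eta, first
    recoloring a blocking neighbor with a legal color c != the target.
    Assumes q >= Delta + 2 and that sigma and eta are proper colorings."""
    path = [dict(sigma)]
    cur = dict(sigma)
    while cur != eta:
        i = next(v for v in order if cur[v] != eta[v])
        target = eta[i]
        blockers = [w for w in neighbors[i] if cur[w] == target]
        if not blockers:
            cur[i] = target
        else:
            w = blockers[0]
            used = {cur[u] for u in neighbors[w]}
            # Legal for w and distinct from target: exists since q >= Delta+2.
            c = next(col for col in range(q)
                     if col != target and col not in used)
            cur[w] = c
        path.append(dict(cur))
    return path

neighbors = {0: [1], 1: [0, 2], 2: [1]}       # a 3-vertex path, Delta = 2
path = coloring_path({0: 0, 1: 1, 2: 0}, {0: 1, 1: 2, 2: 1}, [0, 1, 2],
                     neighbors, q=4)
# Every intermediate configuration is a proper coloring, consecutive
# configurations differ at one vertex, and |path| - 1 <= (Delta + 1) n.
```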
Since all legal configurations have the same weight,
(\ref{eq:prop_label}) is replaced by
\begin{equation} \label{eq:prop_label_color}
\mu[\sigma]\mu[\eta]=
\mu[\sigma^{(k-1)}]\mu[\varphi].
\end{equation}
On the other hand, the map $\gamma \to \varphi$ is not injective. Instead, by (\ref{eq:coloring2}),
there are at most $(q-1)^{{\xi}(G)}$ paths which are mapped to the same coloring.
We therefore obtain that for the coloring model $\rho \leq (q-1)^{{\xi}(G) + 1}$, and therefore from
(\ref{eq:coloring1}) and
(\ref{eq:prop_label_color}),
\[
\tau_2 \leq (\Delta + 1) n (q-1)^{{\xi}(G)+1}.
\]
\end{proof}
\subsection{Hyperbolic graphs} \label{subsec:hyper}
In this subsection we show that balls in a hyperbolic tiling have
logarithmic cut-width. Let $G=(V,E)$ be an infinite planar graph
and let $o\in V$. Let $G_r$ be the ball of radius $r$ in $G$
around $o$, with the induced edges. The following proposition
implies
Propositions \ref{prop:hyptylexp} and
\ref{prop:corrhyp}.
\begin{Proposition}\label{prop:hyper}
\begin{enumerate}
\item\label{hypecut-width}
Suppose $G$ has
\begin{itemize}
\item
a positive Cheeger constant,
\item
degrees bounded by $\Delta$,
\item
the property that for all $r$, no cycle from $G_r$ separates two
vertices of $G\setminus G_r$.
\end{itemize}
Then there
exist constants $\alpha_1$ and $\alpha_2$ such that ${\xi}(G_r)\leq
\alpha_1\Delta \log(|G_r|) + \alpha_2 \Delta.$
\item\label{hypecorrelation}
Assume that $G$ has bounded degrees, bounded co-degrees,
that no cycle from $G_r$ separates two vertices of
$G\setminus G_r$, and that the following weak
isoperimetric condition holds:
\begin{equation}\label{logcheeger}
|\partial A|\geq C\log(|A|)
\end{equation}
for every finite
$A\subseteq G$ and for some constant $C$.
Then there exist $\beta'<\infty$ and $\delta>0$ such that for every $\beta >
\beta'$, for every $r$ and for
every $u,v$ in $G_r$, the free Gibbs measure for the Ising model
on $G_r$ with inverse temperature $\beta$ satisfies
${\rm Cov}(\sigma_u,\sigma_v)\geq\delta.$
\end{enumerate}
\end{Proposition}
\begin{proof}[Proof of part \ref{hypecut-width} of Proposition
\ref{prop:hyper}]
\newcommand{\maxdeg} {{\Delta}}
Consider a planar embedding of $G$. Since no cycle from $G_r$
separates two vertices of $G\setminus G_r$, all vertices of $G\setminus G_r$ are
in the same face of $G_r$, and without loss of generality we can
assume that it is the infinite face of our chosen embedding of
$G_r$.
Let $T$ be a shortest path tree from $o$ in $G_r$. In other words, $T$
is a tree such that
for every vertex $v$, the path from $o$ to $v$ in $T$ is a shortest
path in $G_r$. Let $e_1\in T$ be an edge adjacent to $o$.
We perform a depth-first-search traversal of $T$, starting from
$o=v_0$, traversing $e_1$ to its end vertex $v_1$, and continuing in
counterclockwise order around $T$. This defines a linear ordering
$v_0\leq v_1\leq \cdots \leq v_{n-1}$ of the vertices of $G_r$.
Consider the induced ordering $w_1\leq w_2\leq\cdots\leq w_k$ on the
vertices of $G_r$ which are at distance exactly $r$ from $o$.
Fix $i<j$. We first consider edges between
\begin{equation*}
V_{ij}=\{ u\in G_r:w_i< u< w_j , u \hbox{ not ancestor of $w_j$ in }T\}
\end{equation*}
and $G_r\setminus V_{ij}$. Note that $V_{ij}$ does not contain any vertex on the
paths in $T$ from $o$ to either $w_i$ or $w_j$. There can, however,
be edges from $V_{ij}$ to vertices on the paths from $o$ to $w_i$ or
$w_j$. Let $e=\{ u,v\}$ be an edge with one endpoint in $V_{ij}$ and
the other end in $G_r\setminus V_{ij}$ but not on Path$(w_i)$ or
Path$(w_j)$,
where
$\hbox{Path}(w_j)$ denotes the path in $T$ from $o$ to $w_j$.
Without loss of generality, assume that $w_i<u<w_j<v$. The case where
$v < w_i < u < w_j$ is treated similarly.
The path from $o$ to $u$ in $T$, followed by the edge $e$, followed by
the path from $v$ to $o$ in $T$, defines a cycle $C_e$ in $G_r$ (see Figure \ref{fig:hyppath}).
Since $w_i<u<w_j<v$, $C_e$ must enclose exactly one of $w_j$ and
$w_i$. Since the graph is embedded in the plane, the ones among those
cycles which enclose $w_j$ must form a nested sequence, and
therefore there is an outermost such cycle $C_{e^*}$ with an
associated ``outermost'' edge $e^*=\{ u^*,v^* \}$.
Similarly, among the edges such that the corresponding cycle
encloses $w_i$, there is an ``outermost'' edge $f^*=\{ x^*,y^*\}$.
\begin{figure}[h]
\center{\epsfig{figure=hyppath, height=2in, width=2in}}
\caption{The cycle $C_e$ defined by $e$.
}\label{fig:hyppath}
\end{figure}
There can only be edges from $V_{ij}$ to the vertices
enclosed by $C_{e^*}$ or by $C_{f^*}$ (note that this includes the
paths from $o$ to $w_i$ and to $w_j$). Since all the vertices of
$G\setminus G_r$ are in the infinite face of $G_r$, hence outside
$C_{e^*}$, the set of vertices enclosed by $C_{e^*}$ is the same
in $G$ as in $G_r$. Let $A$ denote the set of vertices enclosed by
$C_{e^*}$ (including $C_{e^*}$). We have: $|\partial A|\leq 2r+1$,
hence $|A|\leq (2r+1)/c$, where $c$ is the Cheeger constant of $G$.
Reasoning similarly for $C_{f^*}$, we obtain that the set of
vertices in $G_r\setminus V_{ij}$ adjacent to $V_{ij}$ has size at
most $(4r+2)/c$.
Let $B_j=V_{j-1,j}$ for $j\geq 2$, and $B_1=\{ u\in G_r: u<w_1
\hbox{ or } u>w_k, u\hbox{ not an ancestor of }w_1 \hbox{ in }T\}$.
Let us bound the cardinality of $B_j$.
As above, we define $C_{e^*}$ and $C_{f^*}$. Let $A$ denote the
union of $B_j$, of the vertices enclosed by $C_{e^*}$, and of the
vertices enclosed by $C_{f^*}$.
Since the vertices of $B_j$ are at
distance at most $r-1$ from $o$, they have no neighbors in
$G\setminus G_r$. Thus the neighborhood of $A$ in $G$ is such that
$\partial A\subset C_{e^*}\cup C_{f^*}$,
hence $|\partial A|\leq 4r+2$, and so $|B_j|\leq |A|\leq (4r+2)/c$.
Finally, to compute the cut-width, let $S=\{ u: v_0\leq u\leq v_i
\}$, and let $j$ be such that $w_{j-1}<v_i\leq w_j$. We have:
$$V_{1,j-1}\subseteq S \subseteq B_1\cup V_{1,j-1}\cup B_j\cup\hbox{Path}(w_1)\cup
\hbox{Path}(w_{j-1})\cup\hbox{Path}(w_j).$$ Thus
the set of edges between $S$ and $G_r\setminus S$ has size at most
$\Delta (4r+2)/c+ (|B_1\cup B_j|)\Delta + (3r+1)\Delta$, which is at most
$(3r+1+(12r+6)/c)\Delta$.
Since $G$ has positive Cheeger constant, $$|G_r|=|G_{r-1}\cup\partial
G_{r-1}|\geq |G_{r-1}|(1+c),$$
and so $|G_r|\geq (1+c)^r$, that is, $r\leq \log |G_r| / \log
(c+1)$. Hence the set of edges between $S$ and $G_r\setminus S$ has size
at most $(3+12/c)(\Delta/\log (c+1))\log |G_r| + (1+6/c)\Delta$.
This concludes the proof.
\end{proof}
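For a small graph, the quantity bounded in the proof above can be computed directly; the following sketch (illustrative only, with a hypothetical edge-list input) evaluates the cut-width of a given vertex ordering.

```python
# Illustrative sketch: cut-width of a fixed linear ordering of the vertices.
def ordering_cut_width(edges, order):
    """Maximum, over prefixes S = {order[0], ..., order[i]}, of the
    number of edges joining S to the remaining vertices."""
    pos = {v: i for i, v in enumerate(order)}
    return max(
        sum(1 for u, v in edges if (pos[u] <= i) != (pos[v] <= i))
        for i in range(len(order) - 1)
    )
```

For the path $0$--$1$--$2$--$3$ in its natural order this gives $1$, while the same ordering of a $4$-cycle gives $2$.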
\begin{figure}[h]
\center{\epsfig{figure=hypfig, height=1.5in, width=1.5in}}
\caption{The region between $\mbox{Path}(W_i)$ and $\mbox{Path}(W_j)$}\label{fig:hyp}
\end{figure}
\begin{proof}[Proof of part \ref{hypecorrelation} of Proposition \ref{prop:hyper}]
We use the Random Cluster representation of the Ising model (see,
e.g. \cite{FK}) and a standard Peierls path-counting argument.
For every $u$ and $v$ in $G_r$,
$\cov(\sigma_u,\sigma_v)$ is the probability that $u$ is connected to
$v$ in the Random Cluster model. Fix $p<1$.
The exact value of $p$ will be specified later.
Then, if $\beta$ is large enough, i.e., if
$(1-e^{-\beta})/e^{-\beta} > 2 p / (1-p)$,
then the Random Cluster model dominates percolation with parameter
$p$. So, what we need to show is that for a graph satisfying the
requirements of part \ref{hypecorrelation} of the proposition and $p$
high enough, there exists $\delta>0$ s.t. for every $r$ and every $u,v$
in $G_r$, we have ${\bf{P}}_p(u\leftrightarrow v)\geq \delta$. By the FKG inequality
(see \cite{FKG}),
\[
{\bf{P}}_p(u\leftrightarrow v)\geq{\bf{P}}_p(u\leftrightarrow o){\bf{P}}_p(v\leftrightarrow o)
\]
where $o$ is the center. Therefore we need to show that
${\bf{P}}(v\leftrightarrow o)$ is bounded away from zero. To this end, we
will pursue a standard path counting technique: in order for $o$ and
$v$ not to be connected, there needs to be a closed path in the dual
graph that separates $o$ and $v$.
\begin{Claim}\label{afikoman}
There exists $M=M(G)$ s.t. for every $r$ and $v\in G_r$ there are at
most $M^k$ paths of length $k$ in the dual graph of $G_r$
that separate $o$ from $v$.
\end{Claim}
By Claim \ref{afikoman}, if we take $p>1-{1}/{(2M)}$ and choose
$\beta$ accordingly, then the probability that there exists
a closed path in the dual graph that separates $o$ and $v$
is bounded away from $1$.
\end{proof}
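The choice of parameters in the proof can be made explicit: since $(1-e^{-\beta})/e^{-\beta}=e^{\beta}-1$, the domination condition with $p=1-1/(2M)$ holds for all $\beta$ above the threshold computed below (an illustrative sketch, not from the text).

```python
import math

def beta_threshold(M):
    """Smallest beta' such that, with p = 1 - 1/(2M), the condition
    (1 - e^{-beta}) / e^{-beta} > 2p/(1-p) holds for all beta > beta'."""
    p = 1 - 1 / (2 * M)
    # e^beta - 1 > 2p/(1-p)  <=>  e^beta > (1+p)/(1-p)
    return math.log((1 + p) / (1 - p))
```

For instance, $M=1$ gives $p=1/2$ and the threshold $\log 3$; larger $M$ forces larger $\beta$.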
\begin{proof}[Proof of Claim \ref{afikoman}]
\newcommand {\uu} {{\psi}}
\newcommand{\hatdelta} {{\hat{\Delta}}}
Here again, we consider an embedding of $G_r$ such that
all the vertices of $G\setminus G_r$ lie on the infinite face $F$ of $G_r$.
Let $\gamma$ be a shortest path connecting $v$ to $o$ in $G_r$.
Every dual path separating $v$ from $o$ must intersect $\gamma$.
For an edge $e$ let $\Lambda_k(e)$ be the set of dual paths $\uu$ of
length $k$ separating $o$ from $v$ such that $\uu$ intersects $e$.
If $\hatdelta$ is the maximal co-degree in $G$ then $|\Lambda_k(e)|\leq
\hatdelta^k$
for every $e$.
Let $e\in \gamma$ be such that $d(e,o)>\exp (k/C)+k\hatdelta$,
$d(e,v)> \exp (k/C)+k\hatdelta$, and $d(e,F)>k\hatdelta$. We will now show that
$|\Lambda_k(e)|=0$. Assume, for a contradiction, that $\psi\in
\Lambda_k(e)$. Since $\psi$ has length $k$ and $d(e,F)>\hatdelta k$, $\psi$
does not touch the outer face $F$, and so the area enclosed by $\psi$
in $G_r$ equals the area enclosed by $\psi$ in $G$. The dual path $\psi$ encloses
either $v$ or $o$. Without loss of generality, assume that it encloses $o$.
Let $e'$ be the edge of $\gamma$ closest to $o$ which $\psi$
intersects. Since $\psi$ has length $k$, we get that $d(e',o)>\exp (k/C)$, and so at least
$1+\exp (k/C)$ vertices of $\gamma$ are enclosed by $\psi$. By
(\ref{logcheeger}) this implies that $\psi$ has length strictly greater than $k$,
a contradiction.
Thus, the total number of paths of length $k$ separating $o$ from $v$ is at most
$$\sum_{e:|\Lambda_k(e)|\neq 0} |\Lambda_k(e)|\leq \left[2\left(\exp(k/C)+k\hatdelta\right)+
\hatdelta k\right]\hatdelta^k.$$
\end{proof}
{\bf Remark:} An isoperimetric inequality of the type of (\ref{logcheeger}) is necessary. An example where all other conditions of part \ref{hypecorrelation} of Proposition \ref{prop:hyper} are satisfied and yet the conclusion does not hold can be found in Figure \ref{fig:diamond}.
\begin{figure}[h]
\center{\epsfig{figure=diamond, height=2in, width=2in}}
\caption{An example of a planar graph s.t. for all temperatures, correlations decay
exponentially with distance.
}\label{fig:diamond}
\end{figure}
\subsection{A polynomial upper bound for trees} \label{subsec:recur}
In this subsection
we give an improved bound on relaxation time for the tree.
Let $A$ be a finite set, and let $\alpha_{vw}:A\times A\to\hbox{I\kern-.2em\hbox{R}}_{+}$ be
a weight function. Let $G$ be a graph. Let the Glauber dynamics be as
defined above, and let ${\mathcal L}={\mathcal L}(A, \alpha, G)$ be its generator. We say
that the Glauber dynamics on $(A, \alpha, G)$ is ergodic if for every
two legal configurations $\sigma_1$ and $\sigma_2$, we have
$\left(\exp({\mathcal L})\right)_{\sigma_1\sigma_2}>0$.
We will prove the following proposition:
\begin{Proposition}\label{prop:polbound}
Let $b\geq 2$, and let $T$ denote the infinite $b$-ary tree, and let $T_n$
be the $b$-ary tree with $n$ levels. If the Glauber dynamics on
$(A, \alpha, T_n)$ is ergodic for every $n$ then
\begin{equation*}
\limsup_{n\to\infty}\frac{1}{n}\log\left(\tau_2\left({\mathcal L}\left(A,
\alpha, T_n\right)\right)\right) <\infty .
\end{equation*}
\end{Proposition}
\begin{Conjecture}\label{conj:pol}
Let $b\geq 2$, $T$ denote the infinite $b$-ary tree, and let $T_n$
be the $b$-ary tree with $n$ levels. If the Glauber dynamics on
$(A, \alpha, T)$ is ergodic then there exists $0\leq\tau<\infty$
s.t.
\begin{equation}\label{eq:conjpoll}
\lim_{n\to\infty}\frac{1}{n}\log\left(\tau_2\left({\mathcal L}\left(A,
\alpha, T_n\right)\right)\right) = \tau .
\end{equation}
\end{Conjecture}
We prove a special case of Conjecture \ref{conj:pol}:
\begin{Proposition}\label{prop:soft}
If the interactions are soft, i.e. $\alpha_{vw}(a,b) > 0$ for all
$v,w,a$ and $b$,
then (\ref{eq:conjpoll}) holds.
\end{Proposition}
The main tool we use for proving Propositions \ref{prop:polbound}
and \ref{prop:soft} is block dynamics (see e.g. \cite{Ma}). For a
spin (or a color) $a\in A$, we denote by ${\mathcal L}(a,\alpha,n)$ the
Glauber dynamics on the $b$-ary tree of depth $n$, under the
interaction matrix $\alpha$ and with the boundary condition that
the root has a parent colored $a$. With a slight abuse of
notation, we write $\tau_2(a,\alpha,n)$ for the relaxation
time of ${\mathcal L}(a,\alpha,n)$.
\newcommand {\htt} {{\hat{\tau}}}
\begin{Lemma}\label{subadd}
Let
\begin{equation*}
\htt_2(\alpha,n)=\sup_{a\in A}\tau_2(a,\alpha,n).
\end{equation*}
Then, for all $m$ and $n$,
\begin{equation*}
\htt_2(\alpha,n+m)\leq\htt_2(\alpha,n)\htt_2(\alpha,m).
\end{equation*}
\end{Lemma}
\begin{proof}
Let $l=n+m$. Partition the tree $T_l$ into disjoint sets
$V_1,...,V_k$ to be specified below. We call $V_1,...,V_k$ {\em
blocks}, and consider the following block dynamics: Each block
$V_i$ has a (rate $1$) Poisson clock, and whenever it rings, $V_i$
updates according to its Gibbs measure determined by the boundary
conditions given by the configurations of $T^{(b)}_l-V_i$ and by
the external boundary conditions. We denote by
${\mathcal L}^B={\mathcal L}^B(V_1,...,V_k)$ the generator for the block dynamics, and
let ${\mathcal L}^B_a$ be the generator for the block dynamics with the
boundary condition that the parent of the root has color $a$.
By \cite[Proposition 3.4, page 119]{Ma},
\begin{equation*}
\htt_2(\alpha,l)\leq\sup_{i}\htt_2(\alpha,V_i)\cdot\sup_{a\in
A}\tau_2({\mathcal L}^B_a).
\end{equation*}
We now define the partition into blocks. For every vertex $v$ up to
depth $n$, the singleton $\{v\}$ is a block, and for every vertex
$w$ at depth $n$, the full subtree of depth $m$ starting at $w$ is
a block (see Figure \ref{blocks}).
\begin{figure}[h]
\center{\epsfig{figure=blocks, height=1.in, width=1.5in}}
\caption{Partition of a tree into blocks}\label{blocks}
\end{figure}
All we need now to finish the proof is the following easy claim:
\begin{Claim}\label{blocdin}
\begin{equation*}
\sup_{a\in A}\tau_2({\mathcal L}^B_a) = \htt_2(\alpha,n).
\end{equation*}
\end{Claim}
\begin{proof}
We use the following fact (that could also serve as a definition of
the relaxation time). Given the dynamics ${\mathcal L}$ we define the Dirichlet
form ${\cal E}[g,g] =\frac{1}{2}
\sum_{\sigma,\tau} \mu[\sigma] K[\sigma \to \tau] (g(\sigma) -
g(\tau))^2$. Then
\begin{equation}\label{eq:dir_def}
\tau_2=
\sup\left\{ \frac
{\mu[g^2]}
{{\cal E}[g,g]}
~:~ \mu[g]=0 \right\}.
\end{equation}
Clearly, the expression in (\ref{eq:dir_def}) evaluated for $f$
and ${\mathcal L}^B_a$ is equal to the one evaluated for $g$ and
${\mathcal L}(a,\alpha,n)$, if
\begin{equation}\label{elchanan}
f(\eta)=g(\sigma) \mbox{ for all } \eta \mbox{ and } \sigma \mbox{
s.t. } \eta\left| _{T_n}\right.=\sigma.
\end{equation}
Therefore, we need to show that the maximum in (\ref{eq:dir_def})
for the dynamics ${\mathcal L}^B_a$ is
obtained at a function that satisfies (\ref{elchanan}). The maximum in
(\ref{eq:dir_def}) is obtained at an eigenfunction of
${\mathcal L}^B_a$. Moreover, for
every function $f$, ${\mathcal L}^B_a(f)$ satisfies (\ref{elchanan}) with some
function $g$. It now follows that the maximum is obtained at a
function that satisfies (\ref{elchanan}).
\end{proof}
\end{proof}
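The variational characterization (\ref{eq:dir_def}) used above can be sanity-checked numerically on the simplest reversible chain; the sketch below (illustrative, not from the text) verifies that for a two-state continuous-time chain with rates $a$ and $b$, the ratio $\mu[g^2]/{\cal E}[g,g]$ equals the inverse spectral gap $1/(a+b)$ for every $g$ with $\mu[g]=0$.

```python
# Illustrative check of tau_2 = sup { mu[g^2] / E[g,g] : mu[g] = 0 }
# for a two-state chain with jump rates a (state 0 -> 1) and b (1 -> 0).
def tau2_two_state(a, b, g1=1.0):
    mu = (b / (a + b), a / (a + b))        # stationary distribution
    g = (-a * g1 / b, g1)                  # any g with mu[g] = 0
    var = mu[0] * g[0] ** 2 + mu[1] * g[1] ** 2
    # Dirichlet form: (1/2) sum_{sigma,tau} mu[sigma] K[sigma->tau] (g(sigma)-g(tau))^2
    dirichlet = 0.5 * (mu[0] * a + mu[1] * b) * (g[0] - g[1]) ** 2
    return var / dirichlet                 # equals 1/(a+b), the inverse spectral gap
```

Since the orthogonal complement of the constants is one-dimensional here, the ratio is the same for every admissible $g$, which the default and non-default `g1` values confirm.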
\begin{proof}[Proof of Proposition \ref{prop:polbound}]
From Lemma \ref{subadd} and the sub-additivity lemma, we learn that
\begin{equation*}
\limsup_{n\to\infty}\frac{1}{n}\log\left(\htt_2\left({\mathcal L}\left(A, \alpha, T_n\right)\right)\right)
<\infty .
\end{equation*}
By another application of Martinelli's block dynamics lemma, we get
that
\begin{equation}\label{gadolkatan}
\tau_2\left({\mathcal L}\left(A, \alpha, T_n\right)\right)\leq
\tau_2\left({\mathcal L}\left(A, \alpha, T_1\right)\right) \cdot
\htt_2\left({\mathcal L}\left(A, \alpha, T_{n-1}\right)\right)
\end{equation}
and the proposition follows.
\end{proof}
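Both proofs in this subsection feed Lemma \ref{subadd} into the standard sub-additivity (Fekete) lemma: if $a_{n+m}\leq a_n a_m$ then $\log(a_n)/n$ converges to $\inf_n \log(a_n)/n$. A quick numerical illustration with a hypothetical submultiplicative sequence (not one from the text):

```python
import math

# Fekete's subadditivity lemma: if a_{n+m} <= a_n * a_m, then
# log(a_n)/n converges (to inf_n log(a_n)/n).  We illustrate with the
# submultiplicative sequence a_n = (n+1) * 2^n, whose rate is log 2,
# since n+m+1 <= (n+1)*(m+1).
def a(n):
    return (n + 1) * 2 ** n

def log_rate(n):
    return math.log(a(n)) / n
```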
\begin{proof}[Proof of Proposition \ref{prop:soft}]
From Lemma \ref{subadd} and the sub-additivity lemma, we learn that
there exists $0\leq\tau<\infty$ s.t.
\begin{equation*}
\lim_{n\to\infty}\frac{1}{n}\log\left(\htt_2\left({\mathcal L}\left(A, \alpha,
T_n\right)\right)\right) = \tau.
\end{equation*}
For every $a$, let $\mu_a$ be the Gibbs measure for the tree of
depth $n$ with the boundary condition that the parent of the root
has color $a$. Note that $\mu_a$ is the stationary distribution of
${\mathcal L}(a,\alpha,n)$. Since the interactions are soft, there exists
$0<C<\infty$ s.t. for every $a$, every $n$, and every two
configurations $\sigma$ and $\eta$ on the tree of depth $n$,
\[
\frac{1}{C}\mu(\sigma) \leq \mu_a(\sigma) \leq C\mu(\sigma),
\]
and
\[
\frac{1}{C}{\mathcal L}(\alpha,n)_{\sigma,\eta} \leq
{\mathcal L}(a,\alpha,n)_{\sigma,\eta} \leq C{\mathcal L}(\alpha,n)_{\sigma,\eta}.
\]
Therefore, by (\ref{eq:dir_def}),
\[
\frac{1}{C^3}\tau_2\left({\mathcal L}\left(A, \alpha,T_n\right)\right) \leq
\htt_2\left({\mathcal L}\left(A, \alpha,T_n\right)\right) \leq
C^3\tau_2\left({\mathcal L}\left(A, \alpha,T_n\right)\right)
\]
and the proposition follows.
\end{proof}
\section{Lower Bounds}\label{sec:lowerbounds}
\begin{proof}[Proof of Theorem~\ref{thm:tree}, part~\ref{treesuperlinear}]
Theorem~\ref{thm:tree}
part~\ref{treesuperlinear} is a direct
consequence of the extremal characterization of $\tau_2$ given
in (\ref{eq:dir_def}),
applied to the particular test function $g$
which sums the spins on the boundary
of the tree.
It is easy to see that $\mu[g] = 0$ and that
\[
{\cal E}[g,g] \leq \sum_{\sigma,\tau} \mu[\sigma] K[\sigma \to \tau] = O(n_r).
\]
We repeat the variance calculation from~\cite{EKPS}. When $b(1 - 2
\epsilon)^2 > 1$:
\begin{eqnarray*}
\mu[g^2]&=&
\sum_{w\in\partial{\bf T}}\mu[\sigma_w^2]+
\sum_{w\in\partial{\bf T}}\sum_{{v\in\partial{\bf T} \atop v\neq w}}
\mu[\sigma_w\sigma_v]\\
&=&b^r\cdot\left(
1+ \sum_{i=1}^r (b-1) b^{i-1} (1-2\epsilon)^{2i} \right)\\
&=& b^r \left(1+\Theta\left(\left(b(1-2\epsilon)^2\right)^r\right)\right)\\
&=& \Theta\left(n_r^{1+\log_b(b(1-2\epsilon)^2)}\right).
\end{eqnarray*}
It now follows by (\ref{eq:dir_def})
that if $b(1-2\epsilon)^2 > 1$ then
\[
\tau_2=\Omega \left(n_r^{\log_b(b(1-2\epsilon)^2)}\right),
\]
as needed.
Repeating the calculation for the case $b (1 - 2 \epsilon)^2 = 1$ yields that
\[
\tau_2 = \Omega(\log n_r).
\]
The proof follows.
\end{proof}
\begin{Remark}
Suppose that $\mu$ admits a Markovian representation where the
conditional distribution of $\sigma_u$ given its parent $\sigma_v$ is
given by an $|{\cal{A}}| \times |{\cal{A}}|$ mutation matrix $P$. Let $\lambda_2(P)$ be
the second eigenvalue of $P$ (in absolute value), and $x$ the
corresponding eigenvector, so that $Px^t = \lambda_2(P) x^t$ and $|x|_2 = 1$.
Let $g$ be the test function $g = c_n x^t$, where $c_n(i)$ is the
number of boundary nodes that are labeled by $i$.
It is then easy to see once again that ${\cal E}[g,g] = O(n_r)$.
Repeating the calculation from \cite{MP} it follows that if $b
|\lambda_2(P)|^2 > 1$, then
\[
{\bf{Var}}[g] = \Theta\left(n_r^{1+\log_b(b |\lambda_2(P)|^2)}\right).
\]
Thus in this case,
\[
\tau_2 = \Omega \left(n_r^{\log_b(b |\lambda_2(P)|^2)}\right).
\]
\]
\end{Remark}
\begin{figure}[htp]
\epsfxsize=5cm
{\epsfbox{maj}
}
\caption{The recursive majority function.}\label{rec_maj}
\end{figure}
In order to prove the lower bound on the relaxation time for
very low temperatures stated in Theorem~\ref{thm:tree}
part~\ref{treeinfinitedegree},
we apply (\ref{eq:dir_def}) to the test function $g$ which is obtained
by applying recursive majority to the boundary spins;
see \cite{Mo2} for background regarding the recursive-majority function for
the Ising model on the tree.
For simplicity we consider first the ternary tree $T$; see
Figure~\ref{rec_maj}.
Recursive majority is
defined on the configuration space as follows.
Given a configuration $\sigma$, first label each boundary vertex $v$
by its spin $\sigma_v$.
Next, inductively label each interior vertex $w$ with the label of the
majority of the children of $w$.
The value of the recursive majority function $g$ is then the label of
the root. We write $\sigma_v$ for the spin at $v$ and $m_v$ for the
recursive majority value at $v$.
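The definition can be written out directly; the sketch below (an illustrative helper, not part of the paper) computes the recursive majority of the boundary spins of a complete ternary tree, listed left to right.

```python
# Illustrative sketch: recursive majority on a complete ternary tree.
# `leaves` holds the +-1 boundary spins, left to right, with length 3^k.
def recursive_majority(leaves):
    level = list(leaves)
    while len(level) > 1:
        # each internal node takes the majority of its three children
        level = [1 if a + b + c > 0 else -1
                 for a, b, c in zip(level[0::3], level[1::3], level[2::3])]
    return level[0]
```

Wait, note the grouping: `zip(level[0::3], level[1::3], level[2::3])` pairs consecutive triples `(level[0], level[1], level[2])`, `(level[3], level[4], level[5])`, and so on, so siblings are grouped correctly.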
\begin{Lemma}
If $u$ and $w$ are children of the same parent $v$, then
${\bf{P}}[m_u \neq m_w] \leq 2 \epsilon + 8 \epsilon^2$.
\end{Lemma}
\noindent {\bf Proof:}
$$
{\bf{P}}[m_u \neq m_w] \leq {\bf{P}}[\sigma_u \neq m_u] + {\bf{P}}[\sigma_w \neq
m_w] + {\bf{P}}[\sigma_u \neq \sigma_v] +
{\bf{P}}[\sigma_w \neq \sigma_v].
$$
\noindent We will show that recursive majority is highly correlated with spin, {\em
i.e.}
if $\epsilon$ is small enough (say $\epsilon < 0.01$),
then ${\bf{P}}[m_v \neq \sigma_v] \leq 4 \epsilon^2$.
The proof is by induction on the distance $\ell$ from $v$ to the boundary of
the tree.
For a vertex $v$ at distance $\ell$ from the boundary of the tree, write
$p_\ell = {\bf{P}}[m_v \neq \sigma_v]$.
By definition $p_0 = 0 \leq 4 \epsilon^2$.
For the induction step, note that if $\sigma_v \neq m_v$ then one of the
following events hold:
\begin{itemize}
\item
At least $2$ of the children of $v$ have a $\sigma$ value different from
$\sigma_v$, or
\item
One of the children of $v$ has a spin different from the spin at $v$, and
for some other child
$w$ we have $m_w \neq \sigma_w$, or
\item
For at least $2$ of the children of $v$, we have $\sigma_w \neq m_w$.
\end{itemize}
Summing up the probabilities of these events, we see that $
p_\ell \leq 3 \epsilon^2 + 6 \epsilon p_{\ell-1} + 3 p_{\ell-1}^2.$
It follows that $p_\ell \leq 4 \epsilon^2$, hence the Lemma.
$\qed$
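The induction step can also be checked numerically: iterating the recursion $p_\ell \leq 3\epsilon^2 + 6\epsilon p_{\ell-1} + 3p_{\ell-1}^2$ from $p_0=0$ (a sketch tracking the worst-case bound, not the true probabilities) confirms that the bound stays below $4\epsilon^2$ for small $\epsilon$.

```python
# Illustrative check: iterate the worst-case recursion from the proof.
def recursion_bound(eps, levels=100):
    p = 0.0
    for _ in range(levels):
        p = 3 * eps ** 2 + 6 * eps * p + 3 * p ** 2
    return p
```

For $\epsilon < 0.01$ the iterates converge to a fixed point near $3\epsilon^2/(1-6\epsilon)$, comfortably below $4\epsilon^2$.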
\begin{proof}[Proof of Theorem~\ref{thm:tree} part~\ref{treeinfinitedegree}]
Let $m$ be the recursive majority function. Then by symmetry ${\bf{E}}[m] = 0$,
and ${\bf{E}}[m^2] = 1$.
By plugging $m$ in definition (\ref{eq:dir_def}), we see that
\begin{equation} \label{eq:rec_maj_cut}
\tau_2 \geq \left( \sum_{\sigma, \tau : m[\sigma] = 1, m[\tau] = -1}
\mu[\sigma] {\bf{P}}[\sigma \to \tau] \right)^{-1}.
\end{equation}
Observe that if $\sigma,\tau$ are adjacent configurations
(i.e., ${\bf{P}}[\sigma \to \tau] > 0$) such that
$m(\sigma) = 1$ and $ m(\tau) = -1$,
then there is a unique vertex $v_r$ on the boundary of the tree
where $\sigma$ and $\tau$ differ.
Moreover, if $\rho=v_1,\ldots,v_r$ is the path from $\rho$ to $v_r$, then
for $\sigma$ we have
$m(v_1) = \ldots = m(v_r) = 1$ while for $\tau$ we have $m(v_1) = \ldots =
m(v_r) = -1$.
Writing $u_i,w_i$ for the two siblings of $v_i$ for $2 \leq i \leq r$, we
see that for
all $i$, for both $\sigma$ and $\tau$ we have $m(u_i) \neq m(v_i)$.
Note that these events are independent for different values of $i$.
We therefore obtain that the probability that $v_1,\ldots,v_r$ is such a
path is bounded by
$(2 \epsilon + 8 \epsilon^2)^{r-1}$. Since there are $3^r$ such paths and
since ${\bf{P}}[\sigma \to \tau] \leq 3^{-r}$ we obtain
that the right-hand side of (\ref{eq:rec_maj_cut}) is bounded below by
\[
(2 \epsilon + 8 \epsilon^2)^{1-r} \ge n^{\Omega( \beta )}\, .
\]
\end{proof}
Note that the proof above easily extends to the $d$-regular tree
for $d \geq 3$. A similar proof also applies to the binary tree
$T$, where $g$ is now defined as follows. Look at $T_k$ for even
$k$. For the boundary vertices define $m_v = \sigma_v$. For each
vertex $v$ at distance $2$ from the boundary, choose three leaves
on the boundary below it $v_1,v_2,v_3$ and let $m_v$ be the
majority of the values $m_{v_i}$. Now continue recursively.
Repeating the above proof, and letting
$p_{\ell} = P[m_v \neq \sigma_v]$ for a vertex at distance $2 \ell$,
we derive the following recursion:
$p_{\ell} \leq 3 (2 \epsilon)^2 + 6 (2 \epsilon) p_{\ell - 1} + 3 p_{\ell - 1}^2$.
We then continue in exactly the same way as for the ternary tree.
\section{High temperatures}\label{sec:hightemp}
\begin{proof}[Proof of Theorem~\ref{thm:tree} part~\ref{treelinear}]
Our analysis uses a comparison to block dynamics.
{\bf Block dynamics}.
We view our tree $T = T_r^{(b)}$ as a part of a larger $b$-ary tree $T_*$ of
height
$r+2h$, where the root $\rho$ of $T$ is at level $h$ in $T_*$.
For each vertex $v$ of $T_*$, consider
the subtree of height $h$ rooted at $v$.
A {\bf block} is by definition the intersection of
$T$ with
such a subtree.
Each block has a rate $1$ Poisson clock and whenever the clock rings
we
erase all the spins
of vertices belonging to the block, and put new spins in, according to the
Gibbs distribution conditional on the spins in the rest of $T$.
\noindent
{\bf Discrete dynamics:}
In order to be consistent with \cite{BD}, we will first analyze the
corresponding discrete time dynamics: at each step of the block
dynamics, pick a block at random, erase all the spins
of vertices belonging to the block, and put new spins in, according to the
Gibbs distribution conditional on the spins in the rest of $T$.
{\bf A coupling analysis}.
We use a weighted Hamming metric on configurations,
\[
d(\sigma,\eta)=
\sum_{v} \lambda^{|v|} 1(\sigma_v\neq\eta_v),
\]
where $|v|$ denotes the distance from vertex $v$ to the root.
Let $\theta=1-2\epsilon$ and $\lambda=1/\sqrt{b}$. Note that
$b \lambda \theta
<1$ and
$\theta < \lambda$.
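With $\theta=1-2\epsilon$ and $\lambda=1/\sqrt{b}$, both inequalities reduce to $1-2\epsilon<1/\sqrt{b}$, i.e. $\epsilon>(1-1/\sqrt{b})/2$; a quick check (illustrative, not from the text):

```python
import math

# With theta = 1 - 2*eps and lam = 1/sqrt(b), both b*lam*theta < 1 and
# theta < lam are equivalent to 1 - 2*eps < 1/sqrt(b).
def coupling_conditions_hold(b, eps):
    theta, lam = 1 - 2 * eps, 1 / math.sqrt(b)
    return b * lam * theta < 1 and theta < lam
```

For $b=2$ the threshold is $(1-1/\sqrt{2})/2\approx 0.146$, so $\epsilon=0.2$ works while $\epsilon=0.1$ does not.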
Starting from two distinct configurations $\sigma$ and $\eta$,
our coupling always picks the same block in $\sigma$ and in $\eta$ and chooses the
coupling between the two block moves which minimizes $d(\sigma',\eta')$.
We use path-coupling~\cite{BD}, {\em i.e.}, we will prove that for every
pair of configurations
which differ by a single spin, applying one step of the block dynamics will
reduce
the expected distance between the two configurations.
Let $v$ be the single vertex such that $\sigma_v\neq \eta_v$.
Then $d(\sigma,\eta)=\lambda^{|v|}$. Let $B$ denote the
chosen block, and
$\sigma',\eta'$ be the configurations after the move.
In order to understand $(\sigma',\eta')$, we will need the following
Lemma.
\begin{Lemma}\label{reductiontofree}
Let $T$ be a finite tree and let $v\neq w$ be vertices in $T$. Let
$\{\beta_e\geq 0\}_{e\in E(T)}$ be the (ferromagnetic)
interactions on $T$, and let $\{-\infty<H(u)<\infty\}_{u\in V(T)}$
be an external field on the vertices of $T$.
We consider the following conditional Gibbs measures:\\
$\mu_{+,H}$: The Gibbs measure with external field $H$ conditioned on $\sigma_v = 1$. \\
$\mu_{-,H}$: The Gibbs measure with external field $H$ conditioned on $\sigma_v = -1$. \\
Then, the function $\mu_{+,H}[\sigma_w]-\mu_{-,H}[\sigma_w]$ achieves
its maximum at $H\equiv 0$.
\end{Lemma}
Before proving the Lemma, we utilize it
to prove Theorem \ref{thm:tree}, part \ref{treelinear}.
There are four situations to consider.
\noindent{\bf Case 1.}
If $B$ contains neither $v$ nor any vertex adjacent to $v$, then
$d(\sigma',\eta')=d(\sigma,\eta)$.
\noindent{\bf Case 2.}
If $B$ contains $v$, then $\sigma '=\eta '$ and
$d(\sigma',\eta')=0=d(\sigma,\eta)-\lambdabdambda^{|v|}$.
There are $h$ such blocks, corresponding to the $h$ ancestors
of $v$ at $1,2,\ldots ,h$ generations above $v$. (Note that this holds even
when $v$ is the root of $T$ or a leaf of $T$, because of our definition of
blocks).
\noindent{\bf Case 3.}
If $B$ is rooted at one of $v$'s children, then the conditional
probabilities given the outer boundaries of $B$ are not the same since one block has $+1$ above it
and the other block has $-1$ above it. However both blocks have their
leaves adjacent to the same boundary configuration. When considering the process on the block,
the influence of the boundary configuration can be counted as altering the external field.
Since $\sigma$ and $\eta$ have the same external fields and the same boundary configuration on all
of the boundary vertices except $v$,
by Lemma \ref{reductiontofree},
conditioning on this lower boundary can only reduce $d(\sigma',\eta')$. Therefore, we bound
$d(\sigma',\eta')$ by studying the case where one block is conditioned to
having
a $+1$ adjacent to the root, the other block is conditioned to having a $-1$
adjacent to the root, and no external field or boundary conditions. Then the
block is
simply filled in a top-down manner, every edge is faithful ({\em i.e.} the
spin of the
current vertex equals the spin of its parent) with probability
$\theta$ and cuts information (the spin of the current vertex is a new
random spin)
with probability $1 - \theta$. Coupling these
choices for corresponding edges for $\sigma$ and for $\eta$, we see that the
distance between $\sigma'$ and $\eta'$ will be equal to the weight of the
cluster
containing $v$, in expectation
$\sum_j \lambda^{|v|+j} b^j \theta^j
\leq \lambda^{|v|}/(1 - b \lambda \theta)$. There are $b$ such blocks,
corresponding to the $b$ children of $v$.
\noindent{\bf Case 4.}
If $B$ is rooted at $v$'s ancestor exactly $h+1$ generations above
$v$, then the conditional probabilities are not the same since one block has
a leaf $v$ adjacent
to a $+1$ and the other block has a leaf adjacent to a $-1$. There is
exactly one such block.
Again we appeal to Lemma~\ref{reductiontofree} to show that the expected
distance is dominated by
the size of the $\theta$ cluster of $w$.
The expected weight of $v$'s cluster is bounded by summing over the ancestors
$w$ of $v$:
\begin{eqnarray*}
\lefteqn{\sum_{w} \theta^{|v|-|w|} \sum_j \lambda^{|w|+j} b^j \theta^j =}\\
&&
\frac{ \sum_{w} \lambda^{|w|} \theta^{|v| - |w|} }{1 - b \lambda \theta}\\
&\leq&
\frac{\lambda^{|v|}}{(1 - \theta \lambda^{-1}) (1 - b \lambda \theta)}.
\end{eqnarray*}
Overall, the expected change in distance is
\[
\begin{aligned}
&{\bf{E}}(d(\sigma',\eta')-d(\sigma,\eta)) \leq \\[1ex]
&\left(\frac{b\lambda^{|v|}}{1-b\lambda \theta}
+ \frac{\lambda^{|v|}}{(1 - \theta \lambda^{-1}) (1 - b \lambda
\theta)} -h\lambda^{|v|} \right) \frac{1}{n + h - 1}.
\end{aligned}
\]
If the block height $h$ is a sufficiently large constant,
we get that for some positive constant $c$,
\begin{equation}\label{eq**}
{\bf{E}}(d(\sigma',\eta')-d(\sigma,\eta))\leq \frac{- c \lambda^{|v|}}{n}
\leq \frac{-c }{n}d(\sigma,\eta).
\end{equation}
\end{equation}
Note that $\max d(\sigma,\eta) = \sum_{j \leq r} b^j \lambda^j =
O(\sqrt{n})$.
Therefore, by a path-coupling argument (see \cite{BD})
we obtain a mixing time of at most $O(n \log n)$ for the blocks
dynamics.
{\bf Spectral gap of discrete time block dynamics.}
The $(1-c/n)$ contraction at each step of the coupling implies, by
an argument from~\cite{Chen} which
we now recall,
that the spectral gap of the block dynamics is at least $c/n$.
Indeed, let $\lambda_2$ be the second largest eigenvalue in absolute
value,
and $f$ an eigenvector for $\lambda_2$.
Let $M=\sup_{\sigma,\eta} |f(\sigma)-f(\eta)|/d(\sigma,\eta)$ and
denote by
${\bf{P}}$ the transition operator. Then
\begin{eqnarray*}
|\lambda_2|M &=& \!\!\!\!
\sup_{\sigma,\eta}
\frac{|{\bf{P}} f(\sigma)-{\bf{P}} f(\eta)|}{d(\sigma,\eta)}~~
\hbox{ since $f$ is an eigenvector for $\lambda_2$} \\
&\leq & \!\!\!\!
\sup_{\sigma,\eta} \sum_{\sigma',\eta'}
{\bf{P}}[(\sigma,\eta)\rightarrow
(\sigma',\eta')]\frac{|f(\sigma')-f(\eta')|}{d(\sigma',\eta' )}
\frac{d(\sigma',\eta')}{d(\sigma,\eta)}\\
&\leq & \!\!\!\!
\sup_{\sigma,\eta}\sum_{\sigma',\eta'}{\bf{P}}[(\sigma,\eta)\rightarrow
(\sigma',\eta')]M\frac{d(\sigma',\eta')}{d(\sigma,\eta)}\\
&=& \!\!\!\!
M \sup_{\sigma,\eta} \frac{{\bf{E}}[d(\sigma',\eta')]}{d(\sigma,\eta)}\\
&\leq & \!\!\!\!
(1-c/n) M ~~\hbox{by ~(\ref{eq**}).}
\end{eqnarray*}
Thus $|\lambda_2| M\leq (1-c/n)M$, whence the
(discrete time) block dynamics has relaxation time at most $O(n)$.
{\bf Relaxation time for continuous time block dynamics}. The
continuous time dynamics is $n$ times faster than the discrete time
dynamics. This is true because the transition matrix for the discrete
dynamics is $M=\I+{\frac{1}{n}}{\mathcal L}$ where $\I$ is the $2^n$-dimensional
unit matrix. Therefore
\[
\tau_2(\mbox{block dynamics})=O(1).
\]
{\bf Relaxation time for single-site dynamics}.
Since each block update can
be simulated by doing a constant number of single-site updates inside the
block,
and each tree vertex only belongs to a bounded number of blocks,
it follows from Proposition 3.4 of \cite{Ma} that the relaxation time
of the single-site Glauber dynamics is also $O(1)$.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{reductiontofree}]
\noindent{\bf Reduction from trees to paths.}
We first claim that it suffices to prove the lemma when the
tree $T$ consists of a path
$v = v_1, \ldots, v_k = w.$
(see Figure~\ref{tree2path}).
To see this, let $T_1,T_2,\ldots,T_k$ be the connected components of $T$
when the edges in the path $v_1,v_2,\ldots,v_k$ are erased,
s.t. $v_i\in T_i$ for $i=1,2,\ldots,k.$
Let $\sigma$ be a configuration on $v_1,\ldots,v_k$, and for a subgraph
$J$ let $S(J)$ be the space of configurations on $J$.
The probability of a configuration $\sigma$ on $v_1,\ldots,v_k$ is
\begin{eqnarray*}
\frac{1}{Z}
&\exp&\left(
\sum_{i=1}^{k-1}\beta_{\{v_i,v_{i+1}\}}\sigma_{v_i}\sigma_{v_{i+1}}
\right)\cdot
\prod_{i=1}^{k}\left(
\sum_{\tau\in S(T_i-\{v_i\})}\exp\left(\hamil(\tau\cup\sigma_{v_i})\right)
\right)\\
=\frac{1}{Z^\prime}
&\exp&\left(
\sum_{i=1}^{k-1}\beta_{\{v_i,v_{i+1}\}}\sigma_{v_i}\sigma_{v_{i+1}}+
\sum_{i=1}^{k}H^\prime_{v_i}\sigma_{v_i}
\right)
\end{eqnarray*}
for some external field $\{H^\prime_u\}$ depending only on $\{H_u\}$ and $\{\beta_e\}$,
where $Z$ and $Z^\prime$ are partition functions and $\hamil(\cdot)$ denotes the Hamiltonian.
\begin{figure}[ht]
\begin{center}{\small
\setlength{\unitlength}{1cm}
\begin{picture}(7,3)
\multiput(0,0)(4.5,0){2}{
\put(0,0.5){\circle*{.15}}
\put(1.5,2){\circle*{.15}}
\put(2,2.5){\circle*{.15}}
\put(0,0.5){\line(1,1){.5}}
\put(1.5,2){\line(-1,-1){.5}}
\put(2,2.5){\line(-1,-1){.5}}
\put(-0.4,.5){$v_k$}
\put(1.1,2){$v_2$}
\put(1.6,2.5){$v_1$}
}
\put(0,0.5){\line(1,1){.5}}
\put(1.5,2){\line(-1,-1){.5}}
\put(2,2.5){\line(-1,-1){.5}}
\put(2,2.5){\line(1,-1){.5}}
\put(1.5,2){\line(1,-1){.5}}
\put(0,0.5){\line(1,-1){.5}}
\put(0,0.5){\line(0,-1){.5}}
\put(1.5,2){\line(0,-1){.5}}
\put(2,2.5){\line(0,-1){.5}}
\put(0,0){\line(1,0){.5}}
\put(1.5,1.5){\line(1,0){.5}}
\put(2,2){\line(1,0){.5}}
\put(0.6,0){$T_k$}
\put(2.1,1.5){$T_2$}
\put(2.6,2){$T_1$}
\put(3,1){\vector(1,0){1}}
\end{picture}
}\end{center}
\caption{Reduction from trees to paths.}\label{tree2path}
\end{figure}
We will now prove the lemma by induction on the length of the path
$v_1,\ldots,v_k$.
\noindent{\bf Paths of length $2$.}
Assume $k=2$. Write $\beta$ for the strength of the $(v_1,v_2)$
interaction
and $H$ for the external field at $w = v_2$. Then
\begin{eqnarray*}
{\mu_{+,\tau}[\sigma_w] - \mu_{-,\tau}[\sigma_w]}
&=&
\frac{e^{\beta + H } - e^{-\beta - H }}{e^{\beta + H } +
e^{-\beta - H }} -
\frac{e^{-\beta + H } - e^{\beta - H }}{e^{-\beta + H } +
e^{\beta - H }}\\
&=& \tanh (\beta + H ) - \tanh(H - \beta).
\end{eqnarray*}
It therefore suffices to prove that for $\beta > 0$, the function
\[
H \mapsto g(\beta,H) = \tanh (H + \beta) - \tanh(H -
\beta)
\]
has a unique maximum at $H =0$.
Consider the partial derivative,
\begin{equation} \label{eq:derg}
g_{H}(\beta,H) = \cosh^{-2}(H + \beta) -
\cosh^{-2}(H - \beta).
\end{equation}
Therefore, if $\beta > 0$ and $H > 0$ then
$g_{H}(\beta,H) < 0$, and if $\beta > 0$ and $H < 0$
then
$g_{H}(\beta,H) > 0$.
Thus $H =0$ is the unique maximum and the claim for $k=2$ follows.
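A quick numerical sanity check of this maximization (the value $\beta=0.7$ below is an arbitrary choice):

```python
import numpy as np

# g(beta, H) = tanh(H + beta) - tanh(H - beta) should be maximized at
# H = 0, where it equals 2*tanh(beta).
beta = 0.7
H = np.linspace(-5.0, 5.0, 2001)      # symmetric grid containing H = 0
g = np.tanh(H + beta) - np.tanh(H - beta)
i_max = int(np.argmax(g))
```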
{\bf Induction step.}
We assume that the claim is true for
$k-1$ and prove
it for $k$. We denote $v' = v_{k-1}, \mu'_{+,H} = \mu_{H}[ \cdot | \sigma_{v'} =
1]$ and similarly
$\mu'_{-,H}.$
Now,
\begin{eqnarray}
\lefteqn{\mu_{+,H}[\sigma_w] - \mu_{-,H}[\sigma_w]} \nonumber \\
&=& \nonumber
\left(
\mu_{+,H}[\sigma_{v'} = 1] \mu'_{+,H}[\sigma_w] + \mu_{+,H}[\sigma_{v'} = -1]
\mu'_{-,H}[\sigma_w]
\right) \\ \nonumber &-&
\left(
\mu_{-,H}[\sigma_{v'} = 1] \mu'_{+,H}[\sigma_w] + \mu_{-,H}[\sigma_{v'} = -1]
\mu'_{-,H}[\sigma_w]
\right)
\\ &=& \label{eq:ind_free}
\frac{1}{2} (\mu_{+,H} - \mu_{-,H})[\sigma_{v'}] (\mu'_{+,H} -
\mu'_{-,H})[\sigma_w].
\end{eqnarray}
Since by the induction hypothesis both factors in
(\ref{eq:ind_free}) achieve their maxima at $H\equiv 0$, we get that
${\mu_{+,H}[\sigma_w] - \mu_{-,H}[\sigma_w]}$ also achieves its
maximum at $H\equiv 0$.
\end{proof}
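The key decomposition (\ref{eq:ind_free}) can be verified by brute force in the smallest nontrivial case, a path on three vertices $v_1, v', w$; the couplings and fields below are ad-hoc values chosen only for the test.

```python
import itertools, math

b1, b2 = 0.8, 0.5        # couplings on (v1, v') and (v', w)
h2, h3 = 0.3, -0.2       # external fields at v' and w

def weight(s1, s2, s3):
    return math.exp(b1 * s1 * s2 + b2 * s2 * s3 + h2 * s2 + h3 * s3)

def cond(s1, spin):
    """Conditional Gibbs expectation E[spin(s2, s3) | sigma_{v1} = s1]."""
    states = list(itertools.product([1, -1], repeat=2))
    Z = sum(weight(s1, s2, s3) for s2, s3 in states)
    return sum(spin(s2, s3) * weight(s1, s2, s3) for s2, s3 in states) / Z

# left-hand side: mu_+[sigma_w] - mu_-[sigma_w]
lhs = cond(1, lambda s2, s3: s3) - cond(-1, lambda s2, s3: s3)
# right-hand side: (1/2)(mu_+ - mu_-)[sigma_{v'}] (mu'_+ - mu'_-)[sigma_w];
# by the Markov property, mu'_{+/-}[sigma_w] = tanh(h3 +/- b2)
dv = cond(1, lambda s2, s3: s2) - cond(-1, lambda s2, s3: s2)
dw = math.tanh(h3 + b2) - math.tanh(h3 - b2)
rhs = 0.5 * dv * dw
```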
\section{Proof of Theorem~\ref{thm:general}}\label{sec:general}
Recall that we denoted by $\sigma_r$ the configuration on all vertices
at distance exactly
$r$ from $\rho$. Also recall that $\mu$ is the Gibbs measure
which is stationary for the Glauber dynamics. We abbreviate
$\int f \, d\mu$ as $\mu(f)$.
{\bf Mutual information and $L^2$ estimates.} For Markov chains
such as $\{\sigma_r\}$, it is generally known \cite{Sa} that
(\ref{eq:mut_inf_decay}) follows from (\ref{eq:cor_decay}), which
itself is a consequence of the following stronger statement:
There exists $c_*>0$ such that for any vertex set
$A \subset G_{r/2}$ and any functions
$f,g$ with $\mu(f)=\mu(g)=0$, we have
\begin{equation} \label{eq:l2_decay}
\mu(fg) \le e^{-c_* r} (\mu(f^2)\mu(g^2))^{1/2} \, ,
\end{equation}
provided that $f(\sigma)$ depends only on $\sigma_A$ and $g(\sigma)$ depends
only on $\sigma_r$.
(\ref{eq:l2_decay}) will follow from a more general proposition
below. For a set $A$ of vertices in a graph $G$ we write $\partial_i
A$ for the set of vertices $v$ in $A$ for which there exists an edge
$(v,u)$ with $u \notin A$.
\begin{Proposition}\label{prop:gendecay}
Let $G$ be a finite graph, and let $A$ and $B$ be sets of vertices in $G$.
Let $d$ be the distance between $A$ and $B$ and let $\Delta$ be the
maximum degree in $G$. For $0 < c < 1$, let
\begin{equation}\label{eq:largedivrate}
I(c)=c-\log c-1.
\end{equation}
Let $c^{\ast}$ be the unique $0 < c < 1 $ satisfying $I(c) = \log
\Delta$ and for $0 < c < c^{\ast}$, let
\begin{equation}\label{eq:constforldv}
C(c,\Delta) = \left(1-e^{\log\Delta-I(c)}\right)^{-1/2}.
\end{equation}
Further, let $\lambda_2$ be the absolute value
of the second eigenvalue of the generator of the Glauber dynamics on
$G$, i.e.\ $\lambda_2=\frac{1}{\tau_2}$. Let $f=f(\sigma)$ depend only
on the values of the configuration in $A$ and $g=g(\sigma)$ depend
only on the values of the configuration in $B$. If $\mu(f)=\mu(g)=0$,
then
\begin{equation}\label{eq:tgen}
\mu(fg)\leq \left(e^{-cd\lambda_2}+2C(c,\Delta)
\sqrt{\left|\partial_i A\right|e^{d\left(\log\Delta-I(c)\right)}}
\right) \Vert f \Vert _2 \Vert g \Vert _2.
\end{equation}
In particular (by letting $c = e^{-\log\Delta-\gamma-2}$) for $\gamma
\geq 0$,
\begin{equation}\label{eq:tgen2}
\mu(fg)\leq \left(e^{-d\lambda_2\exp(-\log\Delta-\gamma-2)}+
4\sqrt{\exp(-(\gamma+1)d)\left|\partial_i A\right|}
\right) \Vert f \Vert _2 \Vert g \Vert _2.
\end{equation}
\end{Proposition}
\begin{proof}[Proof of Theorem \ref{thm:general}]
Note that $|\partial_i A|\leq|A|\leq\Delta^{r/2}$. Therefore, to prove
(\ref{eq:l2_decay}) we use (\ref{eq:tgen2}) with $B=\{v:d(v,\rho)=r\}$,
$d=r/2$ and $\gamma$ s.t. $e^\gamma>\Delta$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:gendecay}]
We use a coupling argument.
Let $\mu$ be the Gibbs measure on $G$, and let $X_0$ be
chosen according to $\mu$.
Let $X_t$ and $Y_t$ be defined as follows:
Set $Y_0=X_0$. For $t>0$, let $X_t$ and $Y_t$
evolve according to the dynamics with the following graphical
representation:
Each $v\in G$ has an independent rate-$1$ Poisson clock. Assume the clock at $v$ rang at time $t$, and let $X_{t-}$ and $Y_{t-}$
be the configurations just before time $t$.
At time $t$ we do the following:
\begin{enumerate}
\item If $v\in B$ then $X_v$ updates according to the Gibbs measure,
and $Y_v$ does not change.
\item
If $v\not\in B$ and $X_{t-}(w)=Y_{t-}(w)$ for every neighbor $w$ of $v$, then both $X$ and $Y$ update according
to the Gibbs measure so that $X_t(v)=Y_t(v)$.
\item
If $v\not\in B$ and there exists a neighbor $w$ of $v$ s.t. $X_{t-}(w)\neq Y_{t-}(w)$ then both $X$ and $Y$ update according
to the Gibbs measure, but this time independently of each other.
\end{enumerate}
For a vertex $v\in B$ we define $t_v$ to be the first time
the Poisson clock at $v$ rang.
For any $v \in G\setminus B$, we define
$t_v$ to be the first time the Poisson clock at $v$ rang after
$\min_{(w,v) \in E_G} t_w$.
Note that $X_t(v)=Y_t(v)$ at any time $t<t_v$,
and that
$t_v$ depends only on the Poisson clocks,
and is independent
of the initial configuration $X_0$.
We let $t_A = \min_{v \in A} t_v$.
\newcommand {\galp} {{\tilde{P}}}
\newcommand {\galq} {{\tilde{Q}}}
Let ${\cal{F}}_t$ denote the ($\sigma$-algebra of the) Poisson clocks at the vertices up to time $t$.
Let $(P^t f)(\sigma) = {\bf{E}}[f(X_t)|X_0=\sigma,{\cal{F}}_t]$ and
let $(Q^t f)(\sigma) = {\bf{E}}[f(Y_t)|X_0=\sigma,{\cal{F}}_t]$. Also,
let $(\galp^t f)(\sigma) = {\bf{E}}[f(X_t)|X_0=\sigma]$ and $(\galq^t f)(\sigma) = {\bf{E}}[f(Y_t)|X_0=\sigma]$.
Since for all $t$ the process $Y_t$ is at the stationary
distribution and $Y_t|_B=X_0|_B$ for all $t$, we get
\begin{equation} \label{eq:proj1}
\mu[g f] = {\bf{E}}[g(Y_t) f(Y_t)] =
{\bf{E}}[g(X_0) f(Y_t)] =
{\bf{E}}[g \galq^t f ].
\end{equation}
If $t<t_A$, then clearly $X_t=Y_t$ on $A$. Therefore, $ \Vert (Q^t f-P^tf)\cdot
1_{t<t_A} \Vert _2 ^2=0$. On the other hand,
$ \Vert Q^t f \Vert _2 \leq \Vert f \Vert _2 ~~ {\cal{F}}_t-\mbox{a.s.}$ and $ \Vert P^t f \Vert _2 \leq \Vert f \Vert _2 ~~ {\cal{F}}_t-\mbox{a.s.}$
This is because the operators $f\to Q^t(f)$ and $f\to P^t(f)$ given ${\cal{F}}_t$ are Markov operators and hence
contractions. Therefore
\begin{eqnarray*}
\Vert \galq^t f-\galp^t f \Vert _2 ^2
&=&{\bf{E}}\left([\galq^t f(X_0)-\galp^t f(X_0)]^2\right)\\
& \leq &
{\bf{E}}\left([Q^t f(X_0)-P^t f(X_0)]^2\right)\\
& = &
{\bf{P}}(t\leq t_A)\int d\mu(\sigma)
{\bf{E}}\left([Q^t f(\sigma)-P^t f(\sigma)]^2 |{t \leq t_A} \right)\\
& + &
{\bf{P}}(t > t_A)\int d\mu(\sigma)
{\bf{E}}\left([Q^t f(\sigma) -P^t f(\sigma) ]^2 |{t > t_A},X_0=\sigma\right)\\
& \leq &
4 {\bf{P}}[t_A \leq t] \Vert f \Vert _2^2 \,
\end{eqnarray*}
where the first inequality is because $\galq^t f(X_0)-\galp^t f(X_0)$ is a conditional
expectation of $Q^t f(X_0) - P^t f(X_0)$,
and the second
inequality is because $\{t > t_A\}$ is ${\cal{F}}_t$-measurable.
Therefore, by the Cauchy--Schwarz inequality,
\begin{equation} \label{eq:another_cs}
{\bf{E}}[(\galq^t f- \galp^t f)g]
\leq 2 \sqrt{{\bf{P}}[t_A \leq t]} \Vert f \Vert _2\ \Vert g \Vert _2 \, .
\end{equation}
Since
\[
{\bf{E}}[g \galp^t f] \leq e^{-\lambda_2 t} \Vert f \Vert _2 \Vert g \Vert _2,
\]
we infer from (\ref{eq:another_cs}) and (\ref{eq:proj1}) that
\begin{equation} \label{eq:coupling_and_mixing}
\mu[fg] \leq \left( e^{-\lambda_2 t} + 2 \sqrt{{\bf{P}}[t_A \leq t]} \right)
\Vert f \Vert _2 \Vert g \Vert _2 \, .
\end{equation}
It remains to bound the two terms on the right-hand side of
(\ref{eq:coupling_and_mixing}).
For $0<c<c^{\ast}$, we take $t = cd$.
We obtain that the first term is
$
e^{-cd\lambda_2},
$ as desired.
It remains to bound ${\bf{P}}[t_A \leq t]$.
We note that $t_A \leq t$ only if there is some self-avoiding path
(sometimes referred to as a ``path of disagreement'')
between $A$ and $B$
along which the discrepancy between the two
distributions has been conveyed in time less than $t$.
\ignore{
Note that there are at most $|A| (\Delta-1)^k$ such paths of length $k$ for all $k \geq r/2$.
We fix such a path $v_1,\ldots,v_k$ and bound the probability that this path was activated up to
time $t$.
This probability is clearly bounded by ${\bf{P}}[{\rm Bin}(t,n_r^{-1}) \geq k]$ (think of ``success'' as an
activation of the first non-active element of $v_1,\ldots,v_k$).
We let $c' > 0$ be a constant such that for all $m$ and $p$ and for
all $k \geq mp/c'$ one has the following tail
estimate:
\[
{\bf{P}}[{\rm Bin}(m,p) \geq k] \leq \Delta^{-2k}.
\]
(such a constant exists by standard large deviation estimates,
see, e.g., \cite[Corollary 2.4]{JLR}).
Thus,
\[
{\bf{P}}[{\rm Bin}(t,n_r^{-1}) \geq k] \leq \Delta^{-2k}.
\]
So summing over all paths we obtain:
\begin{equation} \label{eq:exp_time}
{\bf{P}}[t_A \leq t] \leq |A| \sum_{k \geq r/2} (\Delta-1)^{k} \Delta^{-2k}.
\end{equation}
Thus both summands in (\ref{eq:coupling_and_mixing}) decay
exponentially in $r$,
as claimed.
}
Time-reversing the process, this means that first-passage-percolation
with rate-$1$ exponential passage times starting at $A$ needs to
arrive at distance $d$ within time $cd$. There are at most
$\left|\partial_i A\right|\Delta^{k}$ paths of length $k$ for the first-passage-percolation
for each $k\geq d$. Let $\tau(v,w)$ be the time needed to cross the edge
$(v,w)$. For each path $v_1,v_2,\ldots,v_k$,
\begin{eqnarray*}
{\bf{P}}\left(\tau(v_1,v_2)+\tau(v_2,v_3)+\ldots+
\tau(v_{k-1},v_k)<cd\right)
<e^{-kI(c)}
\end{eqnarray*}
where $I(c)=c-\log c-1$ is the large deviation rate function for the
exponential distribution.
Therefore,
\begin{eqnarray*}
{\bf{P}}(t_A\leq t)&\leq&\left|\partial_i A\right|\sum\limits_{k=d}^\infty\exp\left[k(-I(c)+\log\Delta)\right]\\
&\leq& C^2(c,\Delta)\left|\partial_i A\right|e^{d\left(\log\Delta-I(c)\right)}.
\end{eqnarray*}
Plugging this bound into (\ref{eq:coupling_and_mixing}),
we obtain (\ref{eq:tgen}) as needed.
\end{proof}
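The large deviation estimate for sums of exponentials used above admits an exact finite-$k$ check via the identity $\Pr(\Gamma(k,1)<x)=\Pr(\mathrm{Poisson}(x)\geq k)$ for integer $k$ (an illustration only, not part of the proof):

```python
import math

def p_gamma_less(k, x):
    """P(sum of k i.i.d. Exp(1) variables < x), exact for integer k."""
    return 1.0 - sum(math.exp(-x) * x ** j / math.factorial(j) for j in range(k))

def rate(c):
    return c - math.log(c) - 1.0     # I(c) for the exponential distribution

c = 0.3                               # any 0 < c < 1 works here
bounds_ok = all(p_gamma_less(k, c * k) <= math.exp(-k * rate(c))
                for k in range(1, 40))
```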
\section{Open Problems}
In this section we specify some relevant problems that are still open.
\begin{Problem}
What is the relaxation time $\tau_2(n,b,b^{-1/2})$ of the Glauber
dynamics of the Ising model on the $b$-ary tree of depth $n$ at the
critical temperature $1-2\epsilon=\frac{1}{\sqrt{b}}$?
\end{Problem}
\noindent
Using the sum of spins as a test function, we learn that $\Omega(\log
n)$ is a lower bound for $\tau_2(n,b,b^{-1/2})$. We conjecture that
the relaxation time is of order $\Theta(\log n)$.
A weaker conjecture is that
\[
\lim_{n\to\infty}\frac{\log(\tau_2(n,b,b^{-1/2}))}{n} = 0.
\]
\begin{Problem}
Fix $b$, and let
\[
\tau_2(\beta)=\lim_{n\to\infty}\frac{\log(\tau_2(n,b,\beta))}{n}.
\]
Theorem \ref{thm:tree} part \ref{treeupperbound} tells us that
$\tau_2(\beta)$ exists and is finite for all $\beta$. Show that
$\tau_2(\beta)$ is a monotone function of $\beta$.
This question is a special case of a more general monotonicity
conjecture due to the fourth author, described in \cite{serban},
where a monotonicity result is also proven for the
Ising model on the cycle.
\end{Problem}
\begin{Problem}
For the Ising model (with free boundary conditions and no external
field) on a general graph of bounded degree, does the converse
of Theorem \ref{thm:general} hold, i.e., does uniform exponential decay
of point-to-set correlations imply a uniform spectral gap? \newline
(As pointed out by F. Martinelli (personal communication),
the converse fails in certain lattices
if plus boundary conditions are allowed).
\end{Problem}
\begin{Problem}
Recall the general upper bound $n e^{(4 {\xi}(G) + 2 \Delta) \beta}$
on the relaxation time of Glauber dynamics
in terms of cut-width from Proposition \ref{prop:cut-width}.
For which graphs does a similar lower bound
of the form $\tau_2 \ge e^{c {\xi}(G) \beta}$
(for some constant $c>0$) hold at low temperature?
Such a lower bound is known to hold for boxes in a Euclidean
lattice, our results imply its validity for regular trees, and we
can also verify it for expander graphs. A specific class of graphs
which could be considered here are the metric balls around a specific
vertex in an infinite graph $\Gamma$ that has critical probability
$p_c(\Gamma) <1$ for bond percolation.
\end{Problem}
\noindent{\bf Remark.} After the results presented here were
described in the extended abstract \cite{first},
striking further results on this topic were obtained by
F. Martinelli, A. Sinclair, and D. Weitz \cite{MSW}.
For the Ising model on regular trees, in the temperatures where
we show the Glauber dynamics has a uniform spectral gap, they show
it satisfies a uniform log-Sobolev inequality; moreover, they
study in depth the effects of external fields and boundary conditions.
\noindent{\bf Acknowledgment.} We are grateful to David Aldous,
David Levin, Laurent Saloff-Coste and Peter Winkler for useful
discussions. We thank Dror Weitz for helpful comments on \cite{first}.
\begin{thebibliography}{99}
\bibitem{Al}
Aldous, D. and Fill, J. A. (2000) Reversible Markov chains and random
walks on graphs,
{\em book in preparation}.
Current version available at \newline
{\tt www.stat.berkeley.edu/users/aldous/book.html}.
\bibitem{vB}
van den Berg, J. (1993)
A uniqueness condition for Gibbs measures,
with application to the $2$-dimensional Ising antiferromagnet.
{\em Comm. Math. Phys.} {\bf 152, no. 1}, 161--166.
\bibitem{BRZ} Bleher, P. M., Ruiz, J. and Zagrebnov V. A. (1995)
On the purity of limiting Gibbs state for the Ising model on the Bethe
lattice, {\em J. Stat. Phys} {\bf 79}, 473--482.
\bibitem{BD} Bubley, R. and Dyer, M. (1997)
Path coupling: a technique for proving rapid mixing in Markov chains.
In Proceedings of the $38$th Annual Symposium on Foundations of
Computer Science (FOCS), 223--231.
\bibitem{Chen} Chen, M. F. (1998)
Trilogy of couplings and general formulas for lower bound of spectral gap.
{\em Probability towards 2000}
Lecture Notes in Statist., {\bf 128},
Springer, New York, 123--136.
\bibitem{CT} Cover, T. M. and Thomas, J. A. (1991)
{\em Elements of Information Theory}, Wiley, New York.
\bibitem{DG}
Dyer M. and Greenhill C. (2000), `On Markov chains for independent sets',
{\em J. Algor.} {\bf 35,} 17--49.
\bibitem{EKPS} Evans, W., Kenyon, C., Peres, Y. and Schulman, L. J. (2000)
Broadcasting on trees and the Ising Model, {\it Ann. Appl. Prob.},
{\bf 10}, 410--433.
\bibitem{FK} Fortuin C. M., Kasteleyn P. W. (1972) On the
random-cluster model. I. Introduction and relation to other
models. {\sl Physica} {\bf 57}, 536--564.
\bibitem{FKG} Fortuin C. M., Kasteleyn P. W. and Ginibre J. (1971)
Correlation inequalities on some partially ordered sets. {\sl
Comm. Math. Phys.} {\bf 22} , 89--103.
\bibitem{HJR} H\"aggstr\"om, O., Jonasson, J. and Lyons, R. (2002)
Explicit isoperimetric constants and phase transitions in the
random-cluster model. {\em Ann. Probab.} {\bf 30}, 443--473.
\bibitem{Io} Ioffe, D. (1996). A note on the extremality of the disordered
state for the Ising model on the Bethe lattice.
{\em Lett.\ Math.\ Phys.} {\bf 37}, 137--143.
\bibitem{JLR} Janson, S., Luczak T. and Ruci\'nski A. (2000)
{\em Random Graphs}, Wiley, New York.
\bibitem{J95}
Jerrum, M. (1995) A very simple algorithm for estimating the number of
$k$-colorings of a low-degree graph.
{\em Rand. Struc. Alg.} {\bf 7}, 157--165.
\bibitem{JS1}
Jerrum, M. and Sinclair, A. (1989).
Approximating the permanent. {\em Siam Jour.\ Comput.} {\bf 18}, 1149--1178.
\bibitem{JS2}
Jerrum, M. and Sinclair, A. (1993).
Polynomial time approximation algorithms for the Ising model.
{\em Siam Jour. Comput.} {\bf 22}, 1087--1116.
\bibitem{JSV}
Jerrum, M., Sinclair, A. and Vigoda, E. (2001).
A polynomial-time approximation algorithm for the permanent of a matrix with non-negative
entries. Proceedings of the {\em $33$rd Annual ACM Symposium on Theory of Computing}, Crete, Greece.
\bibitem{katok} Katok, S. (1992) {\em Fuchsian Groups}. University of Chicago Press.
\bibitem{first} Kenyon, C., Mossel, E. and Peres, Y. (2001)
Glauber dynamics on trees and hyperbolic graphs. {\sl 42nd
IEEE Symposium on Foundations of Computer Science (Las Vegas,
NV, 2001),} 568--578, {\sl IEEE Computer Soc., Los Alamitos,
CA}.
\bibitem{Ki} Kinnersley, N. G. (1992)
The vertex separation number of a graph equals its path-width.
{\em Infor. Proc. Lett.} {\bf 42}, 345--350.
\bibitem{Li}
Liggett, T. (1985) {\em Interacting particle systems}, Springer, New York.
\bibitem{LV97}
Luby, M. and Vigoda, E. (1997)
Approximately Counting Up To Four,
In Proceedings of the $29$th Annual Symposium on Theory of
Computing (STOC), 682--687.
\bibitem{LV99}
Luby, M. and Vigoda, E. (1999).
Fast Convergence of the Glauber Dynamics for Sampling Independent Sets,
Statistical physics methods in discrete probability,
combinatorics and theoretical computer science,
{\em Rand. Struc. Alg.} {\bf 15}, 229--241.
\bibitem{magnus} Magnus, W. (1974) {\em Noneuclidean Tessellations and their Groups}.
Academic Press,
New York and London.
\bibitem{Ma}
Martinelli, F. (1998)
Lectures on Glauber dynamics for discrete spin models.
{\em Lectures on probability theory and statistics (Saint-Flour,
1997)} 93--191, {\em Lecture Notes in Math.} {\bf 1717}, Springer,
Berlin.
\bibitem{MSW}
Martinelli, F., Sinclair, A. and Weitz, D. (2003)
Glauber dynamics on trees: Boundary conditions and mixing time.
{\em Preprint}, available at
{\tt http://front.math.ucdavis.edu/math.PR/0307336}
\bibitem{Mo} Mossel, E. (2001)
Reconstruction on trees: Beating the second eigenvalue,
{\it Ann. Appl. Probab.} {\bf 11}, no.\ 1, 285--300.
\bibitem{Mo2} Mossel, E. (1998)
Recursive reconstruction on periodic trees.
{\em Rand.\ Struc.\ Alg.} {\bf 13}, 81--97.
\bibitem{MP}
Mossel, E. and Peres Y. (2003)
Information flow on trees, to appear in {\it Ann. Appl. Probab.}.
\bibitem{serban} Nacu, S. (2003)
Glauber dynamics on the cycle is monotone. {\sl To appear, Probab.\ Theory
Related Fields}.
\bibitem{pater} Paterson, A. L. T. (1988) {\it Amenability}.
American Mathematical Soc., Providence.
\bibitem{PW96}
Propp, J. and Wilson, D. (1996)
Exact Sampling with Coupled Markov Chains and Applications to Statistical Mechanics.
{\em Rand. Struc. Alg.} {\bf 9}, 223--252.
\bibitem{PW03}
Peres, Y. and Winkler, P. (2003), in preparation.
\bibitem{RT00}
Randall, D. and Tetali, P. (2000), Analyzing Glauber dynamics by comparison of
Markov chains. {\em J. of Math. Phys.} {\bf 41},
1598--1615.
\bibitem{RS}
Robertson, N. and Seymour, P. D. (1983) Graph minors. I. Excluding a forest.
{\em J. Comb. Theory Ser. B} {\bf 35}, 39--61.
\bibitem{Sa} Saloff-Coste, L. (1997)
Lectures on finite Markov chains.
{\it Lectures on probability theory and statistics (Saint-Flour,
1996)} 301--413, Lecture Notes in Math. {\bf 1665}, Springer, Berlin.
\bibitem{Vi} Vigoda, E. (2001).
Improved bounds for sampling colorings.
Probabilistic techniques in equilibrium and nonequilibrium statistical physics.
{\em J. Math. Phys.} {\bf 41}, no.\ 3, 1555--1569.
\end{thebibliography}
\end{document}
\begin{document}
\title[Dyadic representation and $A_2$ theorem]{Representation of singular integrals by dyadic operators, and the $A_2$ theorem}
\author[T.~P.\ Hyt\"onen]{Tuomas P.\ Hyt\"onen}
\address{Department of Mathematics and Statistics, P.O.B.~68 (Gustaf H\"all\-str\"omin katu~2b), FI-00014 University of Helsinki, Finland}
\email{[email protected]}
\subjclass[2010]{42B25, 42B35}
\maketitle
\begin{abstract}
This exposition presents a self-contained proof of the $A_2$ theorem, the quantitatively sharp norm inequality for singular integral operators in the weighted space $L^2(w)$. The strategy of the proof is a streamlined version of the author's original one, based on a probabilistic Dyadic Representation Theorem for singular integral operators. While more recent non-probabilistic approaches are also available now, the probabilistic method provides additional structural information, which has independent interest and other applications. The presentation emphasizes connections to the David--Journ\'e $T(1)$ theorem, whose proof is obtained as a byproduct. Only very basic probability is used; in particular, the conditional probabilities of the original proof are completely avoided.\\
\noindent\textsc{Keywords:} Singular integral, Calder\'on--Zygmund operator, weighted norm inequality, sharp estimate, $A_2$ theorem, $T(1)$ theorem
\end{abstract}
\section{Introduction}
The goal of this exposition is to prove the following \emph{$A_2$ theorem}:
\begin{theorem}\label{thm:A2}
Let $T$ be any Calder\'on--Zygmund operator on $\mathbb{R}^d$ (like the Hilbert transform on $\mathbb{R}$, the Beurling transform on $\mathbb{C}\simeq\mathbb{R}^2$, or any of the Riesz transforms in $\mathbb{R}^d$ for $d\geq 2$; see Section~\ref{sec:representation} for the general definition).
Let $w:\mathbb{R}^d\to[0,\infty]$ be a weight in the Muckenhoupt class $A_2$, i.e.,
\begin{equation*}
[w]_{A_2}:=\sup_Q\fint_Q w\cdot\fint_Q\frac{1}{w}<\infty\qquad\Big(\fint_Q w:=\frac{1}{\abs{Q}}\int_Q w\Big),
\end{equation*}
where the supremum is over all axes-parallel cubes $Q$ in $\mathbb{R}^d$.
Let $L^2(w)$ be the space of all measurable functions $f:\mathbb{R}^d\to\mathbb{C}$ such that
\begin{equation*}
\Norm{f}{L^2(w)}:=\Big(\int_{\mathbb{R}^d}\abs{f}^2 w\Big)^{1/2}<\infty.
\end{equation*}
Then the following norm inequality is valid for any $f\in L^2(w)$, where $C_T$ only depends on $T$ and not on $f$ or $w$:
\begin{equation*}
\Norm{Tf}{L^2(w)}\leq C_T\cdot[w]_{A_2}\cdot\Norm{f}{L^2(w)}.
\end{equation*}
\end{theorem}
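To make the definition of $[w]_{A_2}$ concrete, here is a discrete caricature (my own illustration, not part of the theorem) in dimension one: $w$ is piecewise constant on grid cells, and the supremum runs only over intervals of whole cells, which gives a lower bound for the true supremum over all intervals.

```python
import numpy as np

def a2_char(w):
    """Discretized A_2 characteristic of a piecewise-constant weight."""
    w = np.asarray(w, dtype=float)
    cw = np.concatenate([[0.0], np.cumsum(w)])        # prefix sums of w
    ci = np.concatenate([[0.0], np.cumsum(1.0 / w)])  # prefix sums of 1/w
    best = 0.0
    for i in range(len(w)):
        for j in range(i + 1, len(w) + 1):
            avg_w = (cw[j] - cw[i]) / (j - i)         # average of w over cells [i, j)
            avg_inv = (ci[j] - ci[i]) / (j - i)       # average of 1/w over the same
            best = max(best, avg_w * avg_inv)
    return best
```

Constant weights give exactly $1$, and the Cauchy--Schwarz inequality forces the value to be at least $1$ for every weight, matching the continuum facts.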
This general theorem for all Calder\'on--Zygmund operators is due to the author~\cite{Hytonen:A2}, but it was first obtained in the listed special cases by S.~Petermichl and A.~Volberg \cite{PV} and Petermichl \cite{Petermichl:Hilbert,Petermichl:Riesz}, and in various further particular instances by a number of others \cite{CMP,Dragicevic:cubic,LPR,Vagharshakyan}. See also Section~\ref{sec:Beurling} for more details on the history of the problem.
Although several different proofs of Theorem~\ref{thm:A2} are known by now, I will present one that is a direct descendant of the original approach, but greatly streamlined in various places, based on ingredients from various subsequent proofs. On the large scale, I follow the strategy of my paper with C.~P\'erez, S.~Treil and A.~Volberg \cite{HPTV}, the first simplification of my original proof \cite{Hytonen:A2}. This consists of the following steps, which have independent interest:
\begin{enumerate}
\item\label{it:red2Dyadic} Reduction to \emph{dyadic shift operators} (the Dyadic Representation Theorem): every Calder\'on--Zygmund operator $T$ has a (probabilistic) representation in terms of these simpler operators, and hence it suffices to prove a similar claim for every dyadic shift $S$ in place of $T$. This was a key novelty of \cite{Hytonen:A2} when it first appeared. In this exposition, the probabilistic ingredients of this representation have been simplified from \cite{Hytonen:A2,HPTV}, in that no conditional probabilities are needed.
\item\label{it:red2Testing} Reduction to \emph{testing conditions} (a local $T(1)$ theorem): in order to have the full norm inequality
\begin{equation*}
\Norm{Sf}{L^2(w)}\leq C_S[w]_{A_2}\Norm{f}{L^2(w)},
\end{equation*}
it suffices to have such an inequality for special test functions only:
\begin{equation*}
\begin{split}
\Norm{S(1_Q w^{-1})}{L^2(w)} &\leq C_S[w]_{A_2}\Norm{1_Q w^{-1}}{L^2(w)}, \\
\Norm{S^*(1_Q w)}{L^2(w^{-1})} &\leq C_S[w]_{A_2}\Norm{1_Q w}{L^2(w^{-1})}.
\end{split}
\end{equation*}
This goes essentially back to F.~Nazarov, Treil and Volberg \cite{NTV:2weightHaar}. (In the original proof \cite{Hytonen:A2}, in contrast to the simplification \cite{HPTV}, this reduction was done on the level of the Calder\'on--Zygmund operator, using a more difficult variant due to P\'erez, Treil and Volberg~\cite{PTV}).
\item\label{it:verifyTesting} Verification of the testing conditions for $S$. This was first achieved by M.~T. Lacey, Petermichl and M.~C. Reguera~\cite{LPR}, although some adjustments were necessary to achieve the full generality in \cite{Hytonen:A2}.
\end{enumerate}
As said, several different proofs and extensions of the $A_2$ theorem have appeared over the past few years; see the final section for further discussion and references. In particular, it is now known that the probabilistic Dyadic Representation Theorem may be replaced by a deterministic Dyadic Domination Theorem. Its first version, a domination in norm, is due to A.~Lerner \cite{Lerner:domination}, and based on his clever local oscillation formula \cite{Lerner:formula}; this was subsequently improved to pointwise domination by J.~M. Conde-Alonso and G.~Rey \cite{CondeRey} and, independently, by Lerner and Nazarov \cite{LerNaz:book}. Yet another approach to the pointwise domination was found by Lacey \cite{Lacey:elem} and again simplified by Lerner \cite{Lerner:simplest}; this has the virtue of covering the biggest class of operators admissible for the $A_2$ theorem at the present state of knowledge. However, the probabilistic method continues to have its independent interest: it achieves the reduction to dyadic model operators as a linear \emph{identity}, in contrast to the (non-linear) \emph{upper bound} provided by the deterministic domination. As such, it provides a structure theorem for singular integral operators, which has found other uses beyond the weighted norm inequalities, including the following:
\begin{itemize}
\item The \emph{theorem itself} is applied to the estimation of \emph{commutators} of Calder\'on--Zygmund operators and BMO functions in a multi-parameter setting by L.~Dalenc and Y.~Ou~\cite{DalOu:iterated} and in a two-weight setting by I.~Holmes, M.~Lacey and B.~Wick \cite{HLW,HW}; it is also applied to sharp norm bounds for \emph{vector-valued extensions} of Calder\'on--Zygmund operators by S.~Pott and A.~Stoica~\cite{PS}.
\item The \emph{methods behind this theorem} have been generalized by H.~Martikainen \cite{Martikainen:Advances} and Y.~Ou \cite{Ou:Tb} to the analysis of \emph{bi-parameter singular integrals}, yielding new $T(1)$ and $T(b)$ type theorems for these operators.
\end{itemize}
Whereas the domination method \emph{assumes} the unweighted $L^2$ boundedness of the operator $T$, the representation method can (and will, in this exposition) be set up in such a way that it \emph{derives} the unweighted boundedness from a priori weaker assumptions as a byproduct. Indeed, a proof of the $T(1)$ theorem of G.~David and J.-L. Journ\'e \cite{DJ} is obtained as a byproduct of the present exposition, and this approach was lifted to the nontrivial case of bi-parameter singular integrals in the mentioned works of Martikainen \cite{Martikainen:Advances} and Ou \cite{Ou:Tb}. Of course, the deterministic domination method has its own advantages, but the point that I want to make here is that so does the probabilistic approach, which I present in the following exposition.
\section{Preliminaries}
The standard (or reference) system of dyadic cubes is
\begin{equation*}
\mathscr{D}^0:=\{2^{-k}([0,1)^d+m):k\in\mathbb{Z},m\in\mathbb{Z}^d\}.
\end{equation*}
We will need several dyadic systems, obtained by translating the reference system as follows. Let $\omega=(\omega_j)_{j\in\mathbb{Z}}\in(\{0,1\}^d)^{\mathbb{Z}}$ and
\begin{equation*}
I\dot+\omega:=I+\sum_{j:2^{-j}<\ell(I)}2^{-j}\omega_j.
\end{equation*}
Then
\begin{equation*}
\mathscr{D}^{\omega}:=\{I\dot+\omega:I\in\mathscr{D}^0\},
\end{equation*}
and it is straightforward to check that $\mathscr{D}^{\omega}$ inherits the important nestedness property of $\mathscr{D}^0$: if $I,J\in\mathscr{D}^{\omega}$, then $I\cap J\in\{I,J,\varnothing\}$. When the particular $\omega$ is unimportant, the notation $\mathscr{D}$ is sometimes used for a generic dyadic system.
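The nestedness of a shifted grid is easy to test in dimension one with exact dyadic arithmetic. In the sketch below (my own illustration), $\omega_j$ is truncated to finitely many scales $j\leq J$; this only translates all the intervals considered and does not affect the nestedness being tested.

```python
from fractions import Fraction
import random

random.seed(0)
J = 6
# one random translation bit omega_j per scale j = 1, ..., J
omega = {j: random.randint(0, 1) for j in range(1, J + 1)}

def grid(k):
    """Intervals I + omega of sidelength 2^{-k} meeting [0, 1), as (left, length)."""
    ell = Fraction(1, 2 ** k)
    # the shift sums omega_j / 2^j over scales finer than ell(I) = 2^{-k}
    shift = sum(Fraction(omega[j], 2 ** j) for j in range(k + 1, J + 1))
    return [(Fraction(m, 2 ** k) + shift, ell) for m in range(-1, 2 ** k + 1)]

def nested_or_disjoint(a, b):
    (l1, e1), (l2, e2) = a, b
    lo, hi = max(l1, l2), min(l1 + e1, l2 + e2)
    return lo >= hi or (lo, hi) in [(l1, l1 + e1), (l2, l2 + e2)]

intervals = [I for k in range(J + 1) for I in grid(k)]
ok = all(nested_or_disjoint(a, b) for a in intervals for b in intervals)
```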
\subsection{Haar functions}
Any given dyadic system $\mathscr{D}$ has a natural function system associated to it: the Haar functions. In one dimension, there are two Haar functions associated with an interval $I$: the non-cancellative $h^0_I:=\abs{I}^{-1/2}1_I$ and the cancellative $h^1_I:=\abs{I}^{-1/2}(1_{I_{\ell}}-1_{I_r})$, where $I_{\ell}$ and $I_r$ are the left and right halves of $I$. In $d$ dimensions, the Haar functions on a cube $I=I_1\times\cdots\times I_d$ are formed of all the products of the one-dimensional Haar functions:
\begin{equation*}
h_I^{\eta}(x)=h_{I_1\times\cdots\times I_d}^{(\eta_1,\ldots,\eta_d)}(x_1,\ldots,x_d):=\prod_{i=1}^d h_{I_i}^{\eta_i}(x_i).
\end{equation*}
The non-cancellative $h_I^0=\abs{I}^{-1/2}1_I$ has the same formula as in $d=1$. All other $2^d-1$ Haar functions $h_I^{\eta}$ with $\eta\in\{0,1\}^d\setminus\{0\}$ are cancellative, i.e., satisfy $\int h_I^{\eta}=0$, since they are cancellative in at least one coordinate direction.
For a fixed $\mathscr{D}$, all the cancellative Haar functions $h_I^{\eta}$, $I\in\mathscr{D}$ and $\eta\in\{0,1\}^d\setminus\{0\}$, form an orthonormal basis of $L^2(\mathbb{R}^d)$. Hence any function $f\in L^2(\mathbb{R}^d)$ has the orthogonal expansion
\begin{equation*}
f=\sum_{I\in\mathscr{D}}\sum_{\eta\in\{0,1\}^d\setminus\{0\}}\pair{f}{h_I^{\eta}}h_I^{\eta}.
\end{equation*}
Since the different $\eta$'s seldom play any major role, this will often be abbreviated (with slight abuse of notation) simply as
\begin{equation*}
f=\sum_{I\in\mathscr{D}}\pair{f}{h_I}h_I,
\end{equation*}
and the summation over $\eta$ is understood implicitly.
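A minimal numerical sketch of the Haar expansion in $d=1$ (again an illustration, not part of the text): discretizing $[0,1)$ into $2^n$ cells, the cancellative $h_I$ with $\ell(I)>2^{-n}$, together with one non-cancellative function on $[0,1)$, form an orthonormal basis of the grid, and the expansion reconstructs $f$:

```python
import random

random.seed(1)
n = 5
NCELL = 2 ** n          # discretization of [0, 1) into 2^n cells
dx = 1.0 / NCELL

def haar(k, m):
    """Cancellative Haar function h_I on I = [m 2^-k, (m+1) 2^-k)."""
    width = NCELL >> k
    v = 2.0 ** (k / 2)  # |I|^{-1/2}
    out = [0.0] * NCELL
    for cell in range(m * width, m * width + width // 2):
        out[cell] = v
    for cell in range(m * width + width // 2, (m + 1) * width):
        out[cell] = -v
    return out

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) * dx

# Cancellative h_I for all dyadic I in [0,1) with ell(I) > 2^-n, plus the single
# non-cancellative h^0 on [0,1): an orthonormal basis of the grid.
basis = [haar(k, m) for k in range(n) for m in range(2 ** k)]
basis.append([1.0] * NCELL)

for a_idx, u in enumerate(basis):
    for b_idx, v in enumerate(basis):
        assert abs(inner(u, v) - (1.0 if a_idx == b_idx else 0.0)) < 1e-12

f = [random.uniform(-1, 1) for _ in range(NCELL)]
coef = [inner(f, h) for h in basis]
recon = [sum(c * h[cell] for c, h in zip(coef, basis)) for cell in range(NCELL)]
assert max(abs(x - y) for x, y in zip(f, recon)) < 1e-12
print("Haar basis of size", len(basis), "reconstructs f")
```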
\subsection{Dyadic shifts}
A dyadic shift with parameters $i,j\in\mathbb{N}:=\{0,1,2,\ldots\}$ is an operator of the form
\begin{equation*}
Sf=\sum_{K\in\mathscr{D}}A_K f,\qquad
A_K f=\sum_{\substack{I,J\in\mathscr{D};I,J\subseteq K \\ \ell(I)=2^{-i}\ell(K)\\ \ell(J)=2^{-j}\ell(K)}}a_{IJK}\pair{f}{h_I}h_J,
\end{equation*}
where $h_I$ is a Haar function on $I$ (similarly $h_J$), and the $a_{IJK}$ are coefficients with
\begin{equation*}
\abs{a_{IJK}}\leq\frac{\sqrt{\abs{I}\abs{J}}}{\abs{K}}.
\end{equation*}
It is also required that all subshifts
\begin{equation*}
S_{\mathscr{Q}}=\sum_{K\in\mathscr{Q}}A_K,\qquad\mathscr{Q}\subseteq\mathscr{D},
\end{equation*}
are bounded on $L^2(\mathbb{R}^d)$ with norm at most one.
The shift is called cancellative if all the $h_I$ and $h_J$ are cancellative; otherwise, it is called non-cancellative.
The notation $A_K$ indicates an ``averaging operator'' on $K$. Indeed, from the normalization of the Haar functions, it follows that
\begin{equation*}
\abs{A_K f}\leq 1_K\fint_K\abs{f}
\end{equation*}
pointwise.
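The averaging property can be tested numerically. The sketch below (an illustration only; one dimension, $K=[0,1)$ discretized into $2^n$ cells, coefficients of the maximal allowed modulus $\sqrt{\abs{I}\abs{J}}/\abs{K}$ with arbitrary signs) verifies the pointwise bound for a component with parameters $(i,j)=(2,1)$:

```python
import random

random.seed(2)
n = 5
NCELL = 2 ** n          # discretization of K = [0, 1)
dx = 1.0 / NCELL

def haar(k, m):
    """Cancellative Haar function on I = [m 2^-k, (m+1) 2^-k)."""
    width = NCELL >> k
    v = 2.0 ** (k / 2)
    out = [0.0] * NCELL
    for cell in range(m * width, m * width + width // 2):
        out[cell] = v
    for cell in range(m * width + width // 2, (m + 1) * width):
        out[cell] = -v
    return out

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) * dx

i, j = 2, 1             # ell(I) = 2^-i ell(K), ell(J) = 2^-j ell(K)

def A_K(f):
    """A_K f = sum_{I,J} a_IJK <f, h_I> h_J with |a_IJK| = sqrt(|I||J|)/|K|."""
    out = [0.0] * NCELL
    for mI in range(2 ** i):
        cI = inner(f, haar(i, mI))
        for mJ in range(2 ** j):
            a = random.choice([-1.0, 1.0]) * 2.0 ** (-(i + j) / 2)
            out = [o + a * cI * h for o, h in zip(out, haar(j, mJ))]
    return out

f = [random.uniform(-1, 1) for _ in range(NCELL)]
mean = sum(abs(x) for x in f) * dx      # fint_K |f|  (here |K| = 1)
g = A_K(f)
assert all(abs(v) <= mean + 1e-12 for v in g)
print("pointwise bound |A_K f| <= 1_K fint_K |f| holds")
```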
For cancellative shifts, the $L^2$ boundedness is automatic from the other conditions. This is a consequence of the following facts:
\begin{itemize}
\item The pointwise bound for each $A_K$ implies that $\mathbb{N}orm{A_K f}{L^p}\leq\mathbb{N}orm{f}{L^p}$ for all $p\in[1,\infty]$; in particular, these components of $S$ are uniformly bounded on $L^2$ with norm one. (This first point is true even in the non-cancellative case.)
\item Let $\D_K^{i}$ denote the orthogonal projection of $L^2$ onto $\lspan\{h_I:I\subseteq K,\ell(I)=2^{-i}\ell(K)\}$. When $i$ is fixed, it follows readily that any two $\D_K^{i}$ are orthogonal to each other. (This depends on the use of cancellative $h_I$.) Moreover, we have $A_K=\D_K^{j}A_K \D_K^{i}$. Then the boundedness of $S$ follows from two applications of Pythagoras' theorem with the uniform boundedness of the $A_K$ in between.
\end{itemize}
A prime example of a non-cancellative shift (and the only one we need in these lectures) is the \emph{dyadic paraproduct}
\begin{equation*}
\Pi_b f=\sum_{K\in\mathscr{D}}\pair{b}{h_K}\ave{f}_K h_K
=\sum_{K\in\mathscr{D}}\abs{K}^{-1/2}\pair{b}{h_K}\cdot\pair{f}{h_K^0} h_K,
\end{equation*}
where $b\in\BMO_d$ (the dyadic BMO space) and $h_K$ is a cancellative Haar function. This is a dyadic shift with parameters $(i,j)=(0,0)$, where $a_{IJK}=\abs{K}^{-1/2}\pair{b}{h_K}$ for $I=J=K$. That the paraproduct is bounded on $L^2$ if and only if $b\in\BMO_d$ is part of the classical theory. Actually, to ensure the normalization condition of the shift, it should be further required that $\mathbb{N}orm{b}{\BMO_d}\leq 1$.
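As an illustration (not from the text), one can check on a discrete grid that the two displayed forms of $\Pi_b$ agree, using that $\ave{f}_K=\abs{K}^{-1/2}\pair{f}{h_K^0}$:

```python
import random

random.seed(3)
n = 4
NCELL = 2 ** n          # discretization of [0, 1) into 2^n cells
dx = 1.0 / NCELL

def haar(k, m):
    """Cancellative Haar function on K = [m 2^-k, (m+1) 2^-k)."""
    width = NCELL >> k
    v = 2.0 ** (k / 2)  # |K|^{-1/2}
    out = [0.0] * NCELL
    for cell in range(m * width, m * width + width // 2):
        out[cell] = v
    for cell in range(m * width + width // 2, (m + 1) * width):
        out[cell] = -v
    return out

def inner(u, v):
    return sum(a * b for a, b in zip(u, v)) * dx

b = [random.uniform(-1, 1) for _ in range(NCELL)]
f = [random.uniform(-1, 1) for _ in range(NCELL)]

# First form: Pi_b f = sum_K <b, h_K> <f>_K h_K.
Pi1 = [0.0] * NCELL
for k in range(n):
    for m in range(2 ** k):
        width = NCELL >> k
        hK = haar(k, m)
        coeff = inner(b, hK) * (sum(f[m * width:(m + 1) * width]) / width)
        Pi1 = [p + coeff * h for p, h in zip(Pi1, hK)]

# Second form: Pi_b f = sum_K |K|^{-1/2} <b, h_K> <f, h_K^0> h_K.
Pi2 = [0.0] * NCELL
for k in range(n):
    for m in range(2 ** k):
        width = NCELL >> k
        hK = haar(k, m)
        hK0 = [2.0 ** (k / 2) if m * width <= cell < (m + 1) * width else 0.0
               for cell in range(NCELL)]
        coeff = 2.0 ** (k / 2) * inner(b, hK) * inner(f, hK0)
        Pi2 = [p + coeff * h for p, h in zip(Pi2, hK)]

assert max(abs(x - y) for x, y in zip(Pi1, Pi2)) < 1e-12
print("the two forms of the paraproduct agree")
```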
\subsection{Random dyadic systems; good and bad cubes}
We obtain a notion of \emph{random dyadic systems} by equipping the parameter set $\Omega:=(\{0,1\}^d)^{\mathbb{Z}}$ with the natural probability measure: each component $\omega_j$ has an equal probability $2^{-d}$ of taking any of the $2^d$ values in $\{0,1\}^d$, and all components are independent of each other.
Let $\phi:[0,1]\to[0,1]$ be a fixed \emph{modulus of continuity}: a strictly increasing function with $\phi(0)=0$, $\phi(1)=1$, and $t\mapsto\phi(t)/t$ decreasing (hence $\phi(t)\geq t$) with $\lim_{t\to 0}\phi(t)/t=\infty$.
We further require the \emph{Dini condition}
\begin{equation*}
\int_0^1\phi(t)\frac{\ud t}{t}<\infty.
\end{equation*}
Main examples include $\phi(t)=t^{\gamma}$ with $\gamma\in(0,1)$ and
\begin{equation*}
\phi(t)=\Big(1+\frac{1}{\gamma}\log\frac{1}{t}\Big)^{-\gamma},\qquad \gamma>1.
\end{equation*}
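As a quick check (a routine computation, not needed later), the Dini condition indeed holds for both examples:
\begin{equation*}
\int_0^1 t^{\gamma}\frac{\ud t}{t}=\frac{1}{\gamma}<\infty,
\end{equation*}
while for the logarithmic example the substitution $u=1+\gamma^{-1}\log(1/t)$, $\ud t/t=-\gamma\,\ud u$, gives
\begin{equation*}
\int_0^1\Big(1+\frac{1}{\gamma}\log\frac{1}{t}\Big)^{-\gamma}\frac{\ud t}{t}
=\gamma\int_1^{\infty}u^{-\gamma}\,\ud u=\frac{\gamma}{\gamma-1}<\infty,\qquad\gamma>1.
\end{equation*}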
We also fix a (large) parameter $r\in\mathbb{Z}_+$. (How large, will be specified shortly.)
A cube $I\in\mathscr{D}^{\omega}$ is called bad if there exists $J\in\mathscr{D}^{\omega}$ such that $\ell(J)\geq 2^r\ell(I)$ and
\begin{equation*}
\dist(I,\partial J)\leq\phi\Big(\frac{\ell(I)}{\ell(J)}\Big)\ell(J):
\end{equation*}
roughly, $I$ is relatively close to the boundary of a much bigger cube.
\begin{remark}
This definition of good and bad cubes goes back to Nazarov--Treil--Volberg \cite{NTV:Tb} in the context of singular integrals with respect to non-doubling measures. They used the modulus of continuity $\phi(t)=t^{\gamma}$, where $\gamma$ was chosen to depend on the dimension $d$ and the H\"older exponent $\alpha$ of the Calder\'on--Zygmund kernel via
\begin{equation*}
\gamma=\frac{\alpha}{2(d+\alpha)}.
\end{equation*}
This choice has become ``canonical'' in the subsequent literature, including the original proof of the $A_2$ theorem. However, other choices can also be made, as we do here.
\end{remark}
We make some basic probabilistic observations related to badness. Let $I\in\mathscr{D}^0$ be a reference interval. The \emph{position} of the translated interval
\begin{equation*}
I\dot+\omega=I+\sum_{j:2^{-j}<\ell(I)}2^{-j}\omega_j,
\end{equation*}
by definition, depends only on $\omega_j$ for $2^{-j}<\ell(I)$. On the other hand, the \emph{badness} of $I\dot+\omega$ depends on its \emph{relative position} with respect to the bigger intervals
\begin{equation*}
J\dot+\omega=J+\sum_{j:2^{-j}<\ell(I)}2^{-j}\omega_j+\sum_{j:\ell(I)\leq 2^{-j}<\ell(J)}2^{-j}\omega_j.
\end{equation*}
The same translation component $\sum_{j:2^{-j}<\ell(I)}2^{-j}\omega_j$ appears in both $I\dot+\omega$ and $J\dot+\omega$, and so does not affect the relative position of these intervals. Thus this relative position, and hence the badness of $I$, depends only on $\omega_j$ for $2^{-j}\geq\ell(I)$. In particular:
\begin{lemma}\label{lem:indep}
For $I\in\mathscr{D}^0$, the position and badness of $I\dot+\omega$ are independent random variables.
\end{lemma}
Another observation is the following: by symmetry and the fact that the condition of badness only involves relative position and size of different cubes, it readily follows that the probability of a particular cube $I\dot+\omega$ being bad is equal for all cubes $I\in\mathscr{D}^0$:
\begin{equation*}
\prob_{\omega}(I\dot+\omega\bad)=\pi_{\bad}=\pi_{\bad}(r,d,\phi).
\end{equation*}
The final observation concerns the value of this probability:
\begin{lemma}
We have
\begin{equation*}
\pi_{\bad}\leq 8d\int_0^{2^{-r}}\phi(t)\frac{\ud t}{t};
\end{equation*}
in particular, $\pi_{\bad}<1$ if $r=r(d,\phi)$ is chosen large enough.
\end{lemma}
With $r=r(d,\phi)$ chosen like this, we then have $\pi_{\good}:=1-\pi_{\bad}>0$, namely, good situations have positive probability!
\begin{proof}
Observe that in the definition of badness, we only need to consider those $J$ with $I\subseteq J$. Namely, if $I$ is close to the boundary of some bigger $J$, we can always find another dyadic $J'$ of the same size as $J$ which contains $I$, and then $I$ will also be close to the boundary of $J'$. Hence we need to consider the relative position of $I$ with respect to each $J\supset I$ with $\ell(J)=2^k\ell(I)$ and $k=r,r+1,\ldots$ For a fixed $k$, this relative position is determined by
\begin{equation*}
\sum_{j:\ell(I)\leq 2^{-j}<2^k\ell(I)}2^{-j}\omega_j,
\end{equation*}
which has $2^{kd}$ different values with equal probability. These correspond to the subcubes of $J$ of size $\ell(I)$.
Now the bad positions of $I$ are those within distance $\phi(\ell(I)/\ell(J))\cdot\ell(J)$ from the boundary. Since the possible positions of the subcubes are discrete, being integer multiples of $\ell(I)$, the effective bad boundary region has depth
\begin{equation*}
\begin{split}
\Big\lceil \phi\Big(\frac{\ell(I)}{\ell(J)}\Big)\frac{\ell(J)}{\ell(I)}\Big\rceil\ell(I)
&\leq\Big(\phi\Big(\frac{\ell(I)}{\ell(J)}\Big)\frac{\ell(J)}{\ell(I)}+1\Big)\ell(I) \\
&=\ell(J)\Big(\phi\Big(\frac{\ell(I)}{\ell(J)}\Big)+\frac{\ell(I)}{\ell(J)}\Big)\leq 2\ell(J)\phi\Big(\frac{\ell(I)}{\ell(J)}\Big),
\end{split}
\end{equation*}
by using that $t\leq\phi(t)$.
The good region is the cube inside $J$, whose side-length is $\ell(J)$ minus twice the depth of the bad boundary region:
\begin{equation*}
\ell(J)-2\Big\lceil \phi\Big(\frac{\ell(I)}{\ell(J)}\Big)\frac{\ell(J)}{\ell(I)}\Big\rceil\ell(I)
\geq\ell(J)-4\ell(J)\phi\Big(\frac{\ell(I)}{\ell(J)}\Big).
\end{equation*}
Hence the volume of the bad region is
\begin{equation*}
\begin{split}
\abs{J}-\Big(\ell(J)-2\Big\lceil \phi\Big(\frac{\ell(I)}{\ell(J)}\Big)\frac{\ell(J)}{\ell(I)}\Big\rceil\ell(I)\Big)^d
&\leq\abs{J}\Big(1-\Big(1-4\phi\Big(\frac{\ell(I)}{\ell(J)}\Big)\Big)^d\Big) \\
&\leq\abs{J}\cdot 4d\phi\Big(\frac{\ell(I)}{\ell(J)}\Big)
\end{split}
\end{equation*}
by the elementary inequality $(1-\alpha)^d\geq 1-\alpha d$ for $\alpha\in[0,1]$. (We assume that $r$ is at least so large that $4\phi(2^{-r})\leq 1$.)
So the fraction of the total volume occupied by the bad region is at most $4d\phi(\ell(I)/\ell(J))=4d\phi(2^{-k})$ for a fixed $k=r,r+1,\ldots$. This gives the final estimate
\begin{equation*}
\begin{split}
\prob_{\omega}(I\dot+\omega\bad)
&\leq\sum_{k=r}^{\infty}4d\phi(2^{-k})
=\sum_{k=r}^{\infty}8d\frac{\phi(2^{-k})}{2^{-k}}2^{-k-1} \\
&\leq\sum_{k=r}^{\infty}8d\int_{2^{-k-1}}^{2^{-k}}\frac{\phi(t)}{t}\ud t
=8d\int_0^{2^{-r}}\phi(t)\frac{\ud t}{t},
\end{split}
\end{equation*}
where we used that $\phi(t)/t$ is decreasing in the last inequality.
\end{proof}
\section{The dyadic representation theorem}\label{sec:representation}
Let $T$ be a Calder\'on--Zygmund operator on $\mathbb{R}^d$. That is, it acts on a suitable dense subspace of functions in $L^2(\mathbb{R}^d)$ (for the present purposes, this class should at least contain the indicators of cubes in $\mathbb{R}^d$) and has the kernel representation
\begin{equation*}
Tf(x)=\int_{\mathbb{R}^d}K(x,y)f(y)\ud y,\qquad x\notin\supp f.
\end{equation*}
Moreover, the kernel should satisfy the \emph{standard estimates}, which we here assume in a slightly more general form than usual, involving another modulus of continuity~$\psi$, like the one considered above:
\begin{equation*}
\begin{split}
\abs{K(x,y)} &\leq\frac{C_0}{\abs{x-y}^d}, \\
\abs{K(x,y)-K(x',y)}+\abs{K(y,x)-K(y,x')}
&\leq\frac{C_\psi}{\abs{x-y}^d}\psi\Big(\frac{\abs{x-x'}}{\abs{x-y}}\Big)
\end{split}
\end{equation*}
for all $x,x',y\in\mathbb{R}^d$ with $\abs{x-y}>2\abs{x-x'}$. Let us denote the smallest admissible constants $C_0$ and $C_\psi$ by $\mathbb{N}orm{K}{CZ_0}$ and $\mathbb{N}orm{K}{CZ_\psi}$. The classical standard estimates correspond to the choice $\psi(t)=t^{\alpha}$, $\alpha\in(0,1]$, in which case we write $\mathbb{N}orm{K}{CZ_\alpha}$ for $\mathbb{N}orm{K}{CZ_\psi}$.
We say that $T$ is a bounded Calder\'on--Zygmund operator, if in addition $T:L^2(\mathbb{R}^d)\to L^2(\mathbb{R}^d)$, and we denote its operator norm by $\mathbb{N}orm{T}{L^2\to L^2}$.
Let us agree that $\abs{\ }$ stands for the $\ell^{\infty}$ norm on $\mathbb{R}^d$, i.e., $\abs{x}:=\max_{1\leq i\leq d}\abs{x_i}$. While the choice of the norm is not particularly important, this choice is slightly more convenient than the usual Euclidean norm when dealing with cubes as we will: e.g., the diameter of a cube in the $\ell^{\infty}$ norm is equal to its sidelength $\ell(Q)$.
Let us first formulate the dyadic representation theorem for general moduli of continuity, and then specialize it to the usual standard estimates. Define the following coefficients for $i,j\in\mathbb{N}$:
\begin{equation*}
\tau(i,j):=\phi(2^{-\max\{i,j\}})^{-d}\psi\big(2^{-\max\{i,j\}}\phi(2^{-\max\{i,j\}})^{-1}\big),
\end{equation*}
if $\min\{i,j\}>0$; and
\begin{equation*}
\tau(i,j):= \Psi\big(2^{-\max\{i,j\}}\phi(2^{-\max\{i,j\}})^{-1}\big),\qquad
\Psi(t):=\int_0^t\psi(s)\frac{\ud s}{s},
\end{equation*}
if $\min\{i,j\}=0$.
We assume that $\phi$ and $\psi$ are such, that
\begin{equation}\label{eq:decay}
\sum_{i,j=0}^{\infty}\tau(i,j)\eqsim\int_0^1\frac{1}{\phi(t)^{d}}\psi\Big(\frac{t}{\phi(t)}\Big)\frac{\ud t}{t}+\int_0^1\Psi\Big(\frac{t}{\phi(t)}\Big)\frac{\ud t}{t}<\infty.
\end{equation}
This is the case, in particular, when $\psi(t)=t^{\alpha}$ (usual standard estimates) and $\phi(t)=(1+\gamma^{-1}\log t^{-1})^{-\gamma}$ with $\gamma>1$, as above; then one checks that
\begin{equation*}
\tau(i,j)\lesssim P(\max\{i,j\})2^{-\alpha\max\{i,j\}},\qquad P(j)=(1+j)^{\gamma(d+\alpha)},
\end{equation*}
which clearly satisfies the required convergence. However, it is also possible to treat weaker forms of the standard estimates with a logarithmic modulus $\psi(t)=(1+a^{-1}\log t^{-1})^{-\alpha}$. This might be of some interest for applications, but we do not pursue this line any further here.
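As a numerical sanity check (not from the text), one can truncate the double series for the logarithmic $\phi$ and $\psi(t)=t^{\alpha}$, interpreting $\log$ as the natural logarithm and taking the sample values $d=1$, $\alpha=1$, $\gamma=2$, and observe that the partial sums stabilize:

```python
import math

d, alpha, gamma = 1, 1.0, 2.0            # sample parameters

def phi(t):                               # logarithmic modulus of continuity
    return (1 + math.log(1 / t) / gamma) ** (-gamma)

def Psi(t):                               # Psi(t) = int_0^t s^alpha ds/s
    return t ** alpha / alpha

def tau(i, j):
    m = max(i, j)
    x = 2.0 ** (-m) / phi(2.0 ** (-m))
    if min(i, j) == 0:
        return Psi(x)
    return x ** alpha / phi(2.0 ** (-m)) ** d

def partial(M):
    return sum(tau(i, j) for i in range(M) for j in range(M))

s1, s2 = partial(60), partial(120)
assert abs(s2 - s1) < 1e-6                # the partial sums have stabilized
print("sum over i, j of tau(i, j) is approximately %.4f" % s2)
```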
\begin{theorem}\label{thm:formula}
Let $T$ be a bounded Calder\'on--Zygmund operator with modulus of continuity satisfying the above assumption. Then it has an expansion, say for $f,g\in C^1_c(\mathbb{R}^d)$,
\begin{equation*}
\pair{g}{Tf}
=c\cdot\big(\mathbb{N}orm{T}{L^2\to L^2}+\mathbb{N}orm{K}{CZ_\psi}\big)\cdot\Exp_{\omega} \sum_{i,j=0}^{\infty} \tau(i,j)\pair{g}{S^{ij}_{\omega}f},
\end{equation*}
where $c$ is a dimensional constant and $S^{ij}_{\omega}$ is a dyadic shift of parameters $(i,j)$ on the dyadic system $\mathscr{D}^{\omega}$; all of them except possibly $S^{00}_{\omega}$ are cancellative.
\end{theorem}
The first version of this theorem appeared in \cite{Hytonen:A2}, and another one in \cite{HPTV}.
The present proof is yet another variant of the same argument. It is slightly simpler in terms of the probabilistic tools that are used: no conditional probabilities are needed, although they were important for the original arguments.
In proving this theorem, we do not actually need to employ the full strength of the assumption that $T:L^2(\mathbb{R}^d)\to L^2(\mathbb{R}^d)$; rather it suffices to have the kernel conditions plus the following conditions of the $T1$ theorem of David--Journ\'e:
\begin{equation*}
\begin{split}
\abs{\pair{1_Q}{T1_Q}} &\leq C_{WBP}\abs{Q}\quad\text{(weak boundedness property)},\\
& T1\in\BMO(\mathbb{R}^d),\quad T^*1\in\BMO(\mathbb{R}^d).
\end{split}
\end{equation*}
Let us denote the smallest $C_{WBP}$ by $\mathbb{N}orm{T}{WBP}$. Then we have the following more precise version of the representation:
\begin{theorem}\label{thm:formula2}
Let $T$ be a Calder\'on--Zygmund operator with modulus of continuity satisfying the above assumption. Then it has an expansion, say for $f,g\in C^1_c(\mathbb{R}^d)$,
\begin{equation*}
\begin{split}
\pair{g}{Tf}
&=c\cdot\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{K}{CZ_\psi}\big)\Exp_{\omega} \sum_{\substack{i,j=0\\ \max\{i,j\}> 0}}^{\infty} \tau(i,j)\pair{g}{S^{ij}_{\omega}f} \\
&+c\cdot\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{T}{WBP}\big)\Exp_{\omega}\pair{g}{S^{00}_{\omega}f}
+\Exp_{\omega}\pair{g}{\Pi_{T1}^{\omega}f}+\Exp_{\omega}\pair{g}{(\Pi_{T^*1}^{\omega})^*f}
\end{split}
\end{equation*}
where $S^{ij}_{\omega}$ is a cancellative dyadic shift of parameters $(i,j)$ on the dyadic system $\mathscr{D}^{\omega}$, and $\Pi_{b}^{\omega}$ is a dyadic paraproduct on the dyadic system $\mathscr{D}^{\omega}$ associated with the $\BMO$-function $b\in\{T1,T^*1\}$.
\end{theorem}
\begin{remark}
Note that $\Pi^{\omega}_b=\mathbb{N}orm{b}{\BMO}\cdot S^{\omega}_b$, where $S^{\omega}_b=\Pi^{\omega}_b/\mathbb{N}orm{b}{\BMO}$ is a shift with the correct normalization. Hence, writing everything in terms of normalized shifts, as in Theorem~\ref{thm:formula}, we get the factor $\mathbb{N}orm{T1}{\BMO}\lesssim\mathbb{N}orm{T}{L^2\to L^2}+\mathbb{N}orm{K}{CZ_\psi}$ in the second-to-last term, and $\mathbb{N}orm{T^*1}{\BMO}\lesssim\mathbb{N}orm{T}{L^2\to L^2}+\mathbb{N}orm{K}{CZ_\psi}$ in the last one.
The proof will also show that both occurrences of the factor $\mathbb{N}orm{K}{CZ_0}$ could be replaced by $\mathbb{N}orm{T}{L^2\to L^2}$, giving the statement of Theorem~\ref{thm:formula} (since trivially $\mathbb{N}orm{T}{WBP}\leq\mathbb{N}orm{T}{L^2\to L^2}$).
\end{remark}
As a by-product, Theorem~\ref{thm:formula2} delivers a proof of the $T1$ theorem: under the above assumptions, the operator $T$ is already bounded on $L^2(\mathbb{R}^d)$. Namely, all the dyadic shifts $S^{ij}_{\omega}$ are uniformly bounded on $L^2(\mathbb{R}^d)$ by definition, and the convergence condition \eqref{eq:decay} ensures that so is their average representing the operator $T$. This by-product proof of the $T1$ theorem is not a coincidence, since the proof of Theorems~\ref{thm:formula} and \ref{thm:formula2} was actually inspired by the proof of the $T1$ theorem for non-doubling measures due to Nazarov--Treil--Volberg \cite{NTV:Tb} and its vector-valued extension \cite{Hytonen:nonhomog}.
A key to the proof of the dyadic representation is a random expansion of $T$ in terms of Haar functions $h_I$, where the bad cubes are avoided:
\begin{proposition}
\begin{equation*}
\pair{g}{Tf}
=\frac{1}{\pi_{\good}}\Exp_{\omega}\sum_{I,J\in\mathscr{D}^{\omega}}1_{\good}(\operatorname{smaller}\{I,J\})\cdot
\pair{g}{h_{J}}\pair{h_{J}}{Th_{I}}\pair{h_{I}}{f},
\end{equation*}
where
\begin{equation*}
\operatorname{smaller}\{I,J\}:=\begin{cases} I & \text{if }\ell(I)\leq\ell(J), \\ J & \text{if }\ell(J)<\ell(I). \end{cases}
\end{equation*}
\end{proposition}
\begin{proof}
Recall that
\begin{equation*}
f=\sum_{I\in\mathscr{D}^0}\pair{f}{h_{I\dot+\omega}}h_{I\dot+\omega}
\end{equation*}
for any fixed $\omega\in\Omega$; and we can also take the expectation $\Exp_{\omega}$ of both sides of this identity.
Let
\begin{equation*}
1_{\good}(I\dot+\omega):=\begin{cases} 1, & \text{if $I\dot+\omega$ is good},\\ 0, & \text{else}.\end{cases}
\end{equation*}
We make use of the above random Haar expansion of $f$, multiply and divide by
\begin{equation*}
\pi_{\good}=\prob_{\omega}(I\dot+\omega\good)=\Exp_{\omega}1_{\good}(I\dot+\omega),
\end{equation*}
and use the independence from Lemma~\ref{lem:indep} to get:
\begin{equation*}
\begin{split}
\pair{g}{Tf}
&=\Exp_{\omega}\sum_{I}\pair{g}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f} \\
&=\frac{1}{\pi_{\good}}\sum_{I}\Exp_{\omega}[1_{\good}(I\dot+\omega)] \Exp_{\omega}[\pair{g}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f}] \\
&=\frac{1}{\pi_{\good}}\Exp_{\omega}\sum_{I}1_{\good}(I\dot+\omega) \pair{g}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f} \\
&=\frac{1}{\pi_{\good}}\Exp_{\omega}\sum_{I,J}1_{\good}(I\dot+\omega) \pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f}.
\end{split}
\end{equation*}
On the other hand, using independence again in half of this double sum, we have
\begin{equation*}
\begin{split}
&\frac{1}{\pi_{\good}}\sum_{\ell(I)>\ell(J)}\Exp_{\omega}[1_{\good}(I\dot+\omega) \pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f} ] \\
&=\frac{1}{\pi_{\good}}\sum_{\ell(I)>\ell(J)}\Exp_{\omega}[1_{\good}(I\dot+\omega)]
\Exp_{\omega}[ \pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f} ] \\
&= \Exp_{\omega}\sum_{\ell(I)>\ell(J)}
\pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f},
\end{split}
\end{equation*}
and hence
\begin{equation*}
\begin{split}
\pair{g}{Tf}
&= \frac{1}{\pi_{\good}}\Exp_{\omega}\sum_{\ell(I)\leq\ell(J)}
1_{\good}(I\dot+\omega) \pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f} \\
&\qquad+\Exp_{\omega}\sum_{\ell(I)>\ell(J)}
\pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f}.
\end{split}
\end{equation*}
Comparison with the basic identity
\begin{equation}\label{eq:basic}
\pair{g}{Tf}
=\Exp_{\omega}\sum_{I,J}\pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f}
\end{equation}
shows that
\begin{equation*}
\begin{split}
&\Exp_{\omega}\sum_{\ell(I)\leq\ell(J)}
\pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f} \\
&= \frac{1}{\pi_{\good}}\Exp_{\omega}\sum_{\ell(I)\leq\ell(J)}
1_{\good}(I\dot+\omega) \pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f}.
\end{split}
\end{equation*}
Symmetrically, we also have
\begin{equation*}
\begin{split}
&\Exp_{\omega}\sum_{\ell(I)>\ell(J)}
\pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f} \\
&= \frac{1}{\pi_{\good}}\Exp_{\omega}\sum_{\ell(I)>\ell(J)}
1_{\good}(J\dot+\omega) \pair{g}{h_{J\dot+\omega}}\pair{h_{J\dot+\omega}}{Th_{I\dot+\omega}}\pair{h_{I\dot+\omega}}{f},
\end{split}
\end{equation*}
and this completes the proof.
\end{proof}
This is essentially the end of probability in this proof. Henceforth, we can simply concentrate on the summation inside $\Exp_{\omega}$, for a fixed value of $\omega\in\Omega$, and manipulate it into the required form. Moreover, we will concentrate on the half of the sum with $\ell(J)\geq\ell(I)$, the other half being handled symmetrically. We further divide this sum into the following parts:
\begin{equation*}
\begin{split}
\sum_{\ell(I)\leq\ell(J)}
&=\sum_{\dist(I,J)>\ell(J)\phi(\ell(I)/\ell(J))}
+\sum_{I\subsetneq J}+\sum_{I=J}
+\sum_{\substack{\dist(I,J)\leq\ell(J)\phi(\ell(I)/\ell(J))\\ I\cap J=\varnothing}} \\
&=:\sigma_{\operatorname{out}}+\sigma_{\operatorname{in}}+\sigma_{=}+\sigma_{\operatorname{near}}.
\end{split}
\end{equation*}
In order to recognize these series as sums of dyadic shifts, we need to locate, for each pair $(I,J)$ appearing here, a common dyadic ancestor which contains both of them. The existence of such containing cubes, with control on their size, is provided by the following:
\begin{lemma}\label{lem:IveeJ}
If $I\in\mathscr{D}$ is good and $J\in\mathscr{D}$ is a disjoint ($J\cap I=\varnothing$) cube with $\ell(J)\geq\ell(I)$, then there exists $K\supseteq I\cup J$ which satisfies
\begin{equation*}
\begin{split}
\ell(K) &\leq 2^r\ell(I), \qquad\text{if}\qquad \dist(I,J)\leq\ell(J)\phi\Big(\frac{\ell(I)}{\ell(J)}\Big), \\
\ell(K)\phi\Big(\frac{\ell(I)}{\ell(K)}\Big) &\leq 2^{r+1}\dist(I,J), \qquad\text{if}\qquad\dist(I,J)>\ell(J)\phi\Big(\frac{\ell(I)}{\ell(J)}\Big).
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Let us start with the following initial observation: if $K\in\mathscr{D}$ satisfies $I\subseteq K$, $J\subset K^c$, and $\ell(K)\geq 2^r\ell(I)$, then, since $I$ is good,
\begin{equation*}
\ell(K)\phi\Big(\frac{\ell(I)}{\ell(K)}\Big)<\dist(I,\partial K)=\dist(I,K^c)\leq\dist(I,J).
\end{equation*}
\subsubsection*{Case $\dist(I,J)\leq\ell(J)\phi(\ell(I)/\ell(J))$}
As $I\cap J=\varnothing$, we have $\dist(I,J)=\dist(I,\partial J)$, and since $I$ is good, this implies $\ell(J)< 2^r\ell(I)$.
Let $K=I^{(r)}$, and assume for contradiction that $J\subset K^c$. Then the initial observation implies that
\begin{equation*}
\ell(K)\phi\Big(\frac{\ell(I)}{\ell(K)}\Big)<\dist(I,J)\leq\ell(J)\phi\Big(\frac{\ell(I)}{\ell(J)}\Big).
\end{equation*}
Dividing both sides by $\ell(I)$ and recalling that $\phi(t)/t$ is decreasing, this implies that $\ell(K)<\ell(J)$, a contradiction with $\ell(K)=2^r\ell(I)>\ell(J)$.
Hence $J\not\subset K^c$, and since $\ell(J)<\ell(K)$, this implies that $J\subset K$.
\subsubsection*{Case $\dist(I,J)>\ell(J)\phi(\ell(I)/\ell(J))$}
Consider the minimal $K\supset I$ with $\ell(K)\geq 2^r\ell(I)$ and $\dist(I,J)\leq\ell(K)\phi(\ell(I)/\ell(K))$. (Since $\phi(t)/t\to\infty$ as $t\to 0$, this bound holds for all large enough $K$.) Then (since $\phi(t)/t$ is decreasing) $\ell(K)>\ell(J)$, and by the initial observation, $J\not\subset K^c$. Hence $J\subset K$, and it suffices to estimate $\ell(K)$.
By the minimality of $K$, there holds at least one of
\begin{equation*}
\tfrac12\ell(K)<2^r\ell(I)\qquad\text{or}\qquad \tfrac12\ell(K)\phi\Big(\frac{\ell(I)}{\tfrac12\ell(K)}\Big)<\dist(I,J),
\end{equation*}
and the latter, since $\phi$ is increasing, immediately implies that $\ell(K)\phi(\ell(I)/\ell(K))<2\dist(I,J)$. In the first case, since $\phi$ is increasing and $\ell(I)\leq\ell(J)\leq\ell(K)$, we have
\begin{equation*}
\ell(K)\phi\Big(\frac{\ell(I)}{\ell(K)}\Big)
< 2^{r+1}\ell(I)\phi\Big(\frac{\ell(I)}{\ell(K)}\Big)
\leq 2^{r+1}\ell(J)\phi\Big(\frac{\ell(I)}{\ell(J)}\Big)
<2^{r+1}\dist(I,J),
\end{equation*}
so the required bound is true in each case.
\end{proof}
We denote the minimal such $K$ by $I\vee J$, thus
\begin{equation*}
I\vee J:=\bigcap_{K\supseteq I\cup J} K.
\end{equation*}
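For the standard grid $\mathscr{D}^0$ restricted to $[0,\infty)$ (where a common dyadic ancestor of any two cubes always exists), the cube $I\vee J$ can be computed by repeatedly passing to parents. A small sketch (an illustration, not part of the text), with intervals encoded as pairs $(k,m)$ meaning $[m2^{-k},(m+1)2^{-k})$:

```python
def parent(k, m):
    """Dyadic parent of I = [m 2^-k, (m+1) 2^-k) in the standard grid on [0, inf)."""
    return k - 1, m >> 1

def join(I, J):
    """Minimal standard dyadic interval K containing both I and J (I 'vee' J)."""
    (kI, mI), (kJ, mJ) = I, J
    while kI > kJ:                    # coarsen the finer interval ...
        kI, mI = parent(kI, mI)
    while kJ > kI:
        kJ, mJ = parent(kJ, mJ)
    while mI != mJ:                   # ... then climb until the indices meet
        kI, mI = parent(kI, mI)
        kJ, mJ = parent(kJ, mJ)
    return kI, mI

# [5/8, 6/8) vee [0, 1/4) = [0, 1):
assert join((3, 5), (2, 0)) == (0, 0)
# [1/4, 1/2) vee [3/8, 1/2): the second interval is contained in the first.
assert join((2, 1), (3, 3)) == (2, 1)
print("join examples verified")
```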
\subsection{Separated cubes, $\sigma_{\operatorname{out}}$}
We reorganize the sum $\sigma_{\operatorname{out}}$ with respect to the new summation variable $K=I\vee J$, as well as the relative size of $I$ and $J$ with respect to $K$:
\begin{equation*}
\sigma_{\operatorname{out}}
=\sum_{j=1}^{\infty}\sum_{i=j}^{\infty}\sum_K \sum_{\substack{\dist(I,J)>\ell(J)\phi(\ell(I)/\ell(J))\\ I\vee J=K \\ \ell(I)=2^{-i}\ell(K), \ell(J)=2^{-j}\ell(K)}}.
\end{equation*}
Note that we can start the summation from $1$ instead of $0$, since the disjointness of $I$ and $J$ implies that $K=I\vee J$ must be strictly larger than either of $I$ and $J$.
The goal is to identify the innermost sum as a decaying factor times a cancellative averaging operator with parameters $(i,j)$.
\begin{lemma}
For $I$ and $J$ appearing in $\sigma_{\operatorname{out}}$, we have
\begin{equation*}
\abs{\pair{h_J}{Th_I}}
\lesssim\mathbb{N}orm{K}{CZ_\psi}\frac{\sqrt{\abs{I}\abs{J}}}{\abs{K}}\phi\Big(\frac{\ell(I)}{\ell(K)}\Big)^{-d}\psi\Big(\frac{\ell(I)}{\ell(K)}\phi\Big(\frac{\ell(I)}{\ell(K)}\Big)^{-1}\Big),
\quad K=I\vee J.
\end{equation*}
\end{lemma}
\begin{proof}
Using the cancellation of $h_I$, the standard estimates, and Lemma~\ref{lem:IveeJ}, we obtain
\begin{equation*}
\begin{split}
\abs{\pair{h_J}{Th_I}}
&=\Babs{\iint h_J(x)K(x,y)h_I(y)\ud y\ud x} \\
&=\Babs{\iint h_J(x)[K(x,y)-K(x,y_I)]h_I(y)\ud y\ud x} \\
&\lesssim\mathbb{N}orm{K}{CZ_\psi}\iint \abs{h_J(x)}\frac{1}{\dist(I,J)^d}\psi\Big(\frac{\ell(I)}{\dist(I,J)}\Big)\abs{h_I(y)}\ud y\ud x \\
&=\mathbb{N}orm{K}{CZ_\psi}\frac{1}{\dist(I,J)^d}\psi\Big(\frac{\ell(I)}{\dist(I,J)}\Big)\mathbb{N}orm{h_J}{1}\mathbb{N}orm{h_I}{1} \\
&\lesssim\mathbb{N}orm{K}{CZ_\psi}\frac{1}{\ell(K)^d}\phi\Big(\frac{\ell(I)}{\ell(K)}\Big)^{-d}\psi\Big(\frac{\ell(I)}{\ell(K)}\phi\Big(\frac{\ell(I)}{\ell(K)}\Big)^{-1}\Big)\sqrt{\abs{J}}\sqrt{\abs{I}}.\qedhere
\end{split}
\end{equation*}
\end{proof}
\begin{lemma}
\begin{equation*}
\begin{split}
\sum_{\substack{\dist(I,J)>\ell(J)\phi(\ell(I)/\ell(J))\\ I\vee J=K \\ \ell(I)=2^{-i}\ell(K)\leq \ell(J)=2^{-j}\ell(K)}}
&1_{\good}(I)\cdot \pair{g}{h_{J}}\pair{h_{J}}{Th_{I}}\pair{h_{I}}{f} \\
&=\mathbb{N}orm{K}{CZ_\psi}\phi(2^{-i})^{-d}\psi\big(2^{-i}\phi(2^{-i})^{-1}\big)\pair{g}{A_K^{ij}f},
\end{split}
\end{equation*}
where $A_K^{ij}$ is a cancellative averaging operator with parameters $(i,j)$.
\end{lemma}
\begin{proof}
By the previous lemma, substituting $\ell(I)/\ell(K)=2^{-i}$,
\begin{equation*}
\abs{\pair{h_J}{Th_I}}
\lesssim\mathbb{N}orm{K}{CZ_\psi}\frac{\sqrt{\abs{I}\abs{J}}}{\abs{K}}\phi(2^{-i})^{-d}\psi\big(2^{-i}\phi(2^{-i})^{-1}\big),
\end{equation*}
and the first factor is precisely the required size of the coefficients of $A_K^{ij}$.
\end{proof}
Summarizing, we have
\begin{equation*}
\sigma_{\operatorname{out}}
=\mathbb{N}orm{K}{CZ_\psi}\sum_{j=1}^{\infty}\sum_{i=j}^{\infty}\phi(2^{-i})^{-d}\psi\big(2^{-i}\phi(2^{-i})^{-1}\big)\pair{g}{S^{ij}f}.
\end{equation*}
\subsection{Contained cubes, $\sigma_{\operatorname{in}}$}
When $I\subsetneq J$, then $I$ is contained in one of the $2^d$ dyadic children of $J$; we denote this child by $J_I$. Then
\begin{equation*}
\begin{split}
\pair{h_J}{Th_I}
&=\pair{1_{J_I^c}h_J}{Th_I}+\pair{1_{J_I}h_J}{Th_I} \\
&=\pair{1_{J_I^c}h_J}{Th_I}+\ave{h_J}_{J_I}\pair{1_{J_I}}{Th_I} \\
&=\pair{1_{J_I^c}(h_J-\ave{h_J}_{J_I})}{Th_I}+\ave{h_J}_{I}\pair{1}{Th_I},
\end{split}
\end{equation*}
where we noticed that $h_J$ is constant on $J_I\supseteq I$.
\begin{lemma}
\begin{equation*}
\abs{ \pair{1_{J_I^c}(h_J-\ave{h_J}_{J_I})}{Th_I} }
\lesssim\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{K}{CZ_\psi}\big)\Big(\frac{\abs{I}}{\abs{J}}\Big)^{1/2}\Psi\Big(\frac{\ell(I)}{\ell(J)}\phi\big(\frac{\ell(I)}{\ell(J)}\big)^{-1}\Big),
\end{equation*}
where
\begin{equation*}
\Psi(r):=\int_0^r\psi(t)\frac{\ud t}{t},
\end{equation*}
and $\mathbb{N}orm{K}{CZ_0}$ could be alternatively replaced by $\mathbb{N}orm{T}{L^2\to L^2}$.
\end{lemma}
\begin{proof}
\begin{equation*}
\abs{ \pair{1_{J_I^c}(h_J-\ave{h_J}_{J_I})}{Th_I} }
\leq 2\mathbb{N}orm{h_J}{\infty}\int_{J_I^c}\abs{Th_I(x)}\ud x,
\end{equation*}
where $\mathbb{N}orm{h_J}{\infty}=\abs{J}^{-1/2}$.
\subsubsection*{Case $\ell(I)\geq 2^{-r}\ell(J)$}
We have
\begin{equation*}
\begin{split}
\int_{J_I^c}\abs{Th_I(x)}\ud x
&\leq\int_{3I\setminus I}\Babs{\int K(x,y)h_I(y)\ud y}\ud x \\
&\qquad+\int_{(3I)^c}\Babs{\int [K(x,y)-K(x,y_I)]h_I(y)\ud y}\ud x \\
&\lesssim\mathbb{N}orm{K}{CZ_0}\int_{3I\setminus I}\int_I\frac{1}{\abs{x-y}^d}\ud y\ud x \mathbb{N}orm{h_I}{\infty} \\
&\qquad+\mathbb{N}orm{K}{CZ_\psi}\int_{(3I)^c}\frac{1}{\dist(x,I)^d}\psi\Big(\frac{\ell(I)}{\dist(x,I)}\Big)\mathbb{N}orm{h_I}{1}\ud x \\
&\lesssim\mathbb{N}orm{K}{CZ_0}\abs{I}\mathbb{N}orm{h_I}{\infty}+
\mathbb{N}orm{K}{CZ_\psi}\int_{\ell(I)}^{\infty}\frac{1}{r^d}\psi\Big(\frac{\ell(I)}{r}\Big)r^{d-1}\ud r\mathbb{N}orm{h_I}{1} \\
&=\mathbb{N}orm{K}{CZ_0}\abs{I}^{1/2}+\mathbb{N}orm{K}{CZ_\psi}\int_0^1\psi(t)\frac{\ud t}{t}\abs{I}^{1/2} \\
&\lesssim\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{K}{CZ_\psi}\big)\abs{I}^{1/2}
\end{split}
\end{equation*}
by the Dini condition for $\psi$ in the last step.
Alternatively, the part giving the factor $\mathbb{N}orm{K}{CZ_0}$ could have been estimated by
\begin{equation*}
\begin{split}
\int_{3I\setminus I}\Babs{\int K(x,y)h_I(y)\ud y}\ud x
\leq\abs{3I\setminus I}^{1/2}\mathbb{N}orm{Th_I}{2}
\lesssim\abs{I}^{1/2}\mathbb{N}orm{T}{L^2\to L^2}.
\end{split}
\end{equation*}
\subsubsection*{Case $\ell(I)< 2^{-r}\ell(J)$}
Since $I\subseteq J_I$ is good, we have
\begin{equation*}
\dist(I,J_I^c)>\ell(J_I)\phi\Big(\frac{\ell(I)}{\ell(J_I)}\Big)\gtrsim\ell(J)\phi\Big(\frac{\ell(I)}{\ell(J)}\Big)
\end{equation*}
and hence
\begin{equation*}
\begin{split}
\int_{J_I^c}\abs{Th_I(x)}\ud x
&\lesssim\mathbb{N}orm{K}{CZ_\psi}\int_{J_I^c}\frac{1}{\dist(x,I)^d}\psi\Big(\frac{\ell(I)}{\dist(x,I)}\Big)\mathbb{N}orm{h_I}{1}\ud x \\
&\lesssim\mathbb{N}orm{K}{CZ_\psi}\int_{\ell(J)\phi(\ell(I)/\ell(J))}^{\infty}\frac{1}{r^d}\psi\Big(\frac{\ell(I)}{r}\Big)r^{d-1}\ud r\cdot\mathbb{N}orm{h_I}{1}\\
&=\mathbb{N}orm{K}{CZ_\psi}\int_0^{\ell(I)/\ell(J)\cdot\phi(\ell(I)/\ell(J))^{-1}}\psi(t)\frac{\ud t}{t}\cdot\abs{I}^{1/2}.\qedhere
\end{split}
\end{equation*}
\end{proof}
Now we can organize
\begin{equation*}
\sigma_{\operatorname{in}}'
:=\sum_J\sum_{I\subsetneq J}\pair{g}{h_J}\pair{1_{J_I^c}(h_J-\ave{h_J}_{J_I})}{Th_I}\pair{h_I}{f}
=\sum_{i=1}^{\infty}\sum_J\sum_{\substack{I\subset J\\ \ell(I)=2^{-i}\ell(J)}},
\end{equation*}
and the inner sum is recognized as
\begin{equation*}
\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{K}{CZ_\psi}\big)\Psi(2^{-i}\phi(2^{-i})^{-1})\pair{g}{A_J^{i0} f},
\end{equation*}
or with $\mathbb{N}orm{T}{L^2\to L^2}$ in place of $\mathbb{N}orm{K}{CZ_0}$,
for a cancellative averaging operator of type $(i,0)$.
On the other hand,
\begin{equation*}
\begin{split}
\sigma_{\operatorname{in}}''
&:=\sum_J\sum_{I\subsetneq J}\pair{g}{h_J}\ave{h_J}_{I}\pair{1}{Th_I}\pair{h_I}{f} \\
&=\sum_I\Big\langle\sum_{J\supsetneq I}\pair{g}{h_J}h_J\Big\rangle_{I}\pair{1}{Th_I}\pair{h_I}{f} \\
&=\sum_I\ave{g}_I\pair{T^*1}{h_I}\pair{h_I}{f} \\
&=\Bpair{\sum_I\ave{g}_I\pair{T^*1}{h_I}h_I}{f} =:\pair{\Pi_{T^*1}g}{f}
=\pair{g}{\Pi_{T^*1}^* f}.
\end{split}
\end{equation*}
Here $\Pi_{T^*1}$ is the \emph{paraproduct}, a non-cancellative shift composed of the non-cancellative averaging operators
\begin{equation*}
A_I g=\pair{T^*1}{h_I}\ave{g}_I h_I=\abs{I}^{-1/2}\pair{T^*1}{h_I}\cdot\pair{g}{h_I^0}h_I
\end{equation*}
of type $(0,0)$.
Summarizing, we have
\begin{equation*}
\begin{split}
\sigma_{\operatorname{in}}
&=\sigma_{\operatorname{in}}'+\sigma_{\operatorname{in}}'' \\
&=\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{K}{CZ_\psi}\big)\sum_{i=1}^{\infty}\Psi(2^{-i}\phi(2^{-i})^{-1})\pair{g}{S^{i0}f}+\pair{\Pi_{T^*1}g}{f},
\end{split}
\end{equation*}
where $\Psi(t)=\int_0^t\psi(s)\ud s/s$, and $\mathbb{N}orm{K}{CZ_0}$ could be replaced by $\mathbb{N}orm{T}{L^2\to L^2}$. Note that if we wanted to write $\Pi_{T^*1}$ in terms of a shift with the correct normalization, we should divide and multiply it by $\mathbb{N}orm{T^*1}{\BMO}$, thus getting a shift times the factor $\mathbb{N}orm{T^*1}{\BMO}\lesssim\mathbb{N}orm{T}{L^2\to L^2}+\mathbb{N}orm{K}{CZ_\psi}$.
\subsection{Near-by cubes, $\sigma_{=}$ and $\sigma_{\operatorname{near}}$}
We are left with the sums $\sigma_{=}$ of equal cubes $I=J$, as well as $\sigma_{\operatorname{near}}$ of disjoint near-by cubes with $\dist(I,J)\leq\ell(J)\phi(\ell(I)/\ell(J))$.
Since $I$ is good, this necessarily implies that $\ell(I)>2^{-r}\ell(J)$. Then, for a given $J$, there are only boundedly many related $I$ in this sum.
\begin{lemma}
\begin{equation*}
\abs{\pair{h_J}{Th_I}}\lesssim\mathbb{N}orm{K}{CZ_0}+\delta_{IJ}\mathbb{N}orm{T}{WBP}.
\end{equation*}
\end{lemma}
Note that if we used the $L^2$-boundedness of $T$ instead of the $CZ_0$ and $WBP$ conditions (as is done in Theorem~\ref{thm:formula}), we could also estimate simply
\begin{equation*}
\abs{\pair{h_J}{Th_I}}\leq\mathbb{N}orm{h_J}{2}\mathbb{N}orm{T}{L^2\to L^2}\mathbb{N}orm{h_I}{2}=\mathbb{N}orm{T}{L^2\to L^2}.
\end{equation*}
\begin{proof}
For disjoint cubes, we estimate directly
\begin{equation*}
\begin{split}
\abs{\pair{h_J}{Th_I}}
&\lesssim\mathbb{N}orm{K}{CZ_0}\int_J\int_I\frac{1}{\abs{x-y}^d}\ud y\ud x\mathbb{N}orm{h_J}{\infty}\mathbb{N}orm{h_I}{\infty} \\
&\leq\mathbb{N}orm{K}{CZ_0}\int_J\int_{3J\setminus J}\frac{1}{\abs{x-y}^d}\ud y\ud x\abs{J}^{-1/2}\abs{I}^{-1/2} \\
&\lesssim\mathbb{N}orm{K}{CZ_0}\abs{J}\abs{J}^{-1/2}\abs{J}^{-1/2}=\mathbb{N}orm{K}{CZ_0},
\end{split}
\end{equation*}
since $\abs{I}\eqsim\abs{J}$.
For $J=I$, let $I_i$ be its dyadic children. Then
\begin{equation*}
\begin{split}
\abs{\pair{h_I}{Th_I}}
&\leq\sum_{i,j=1}^{2^d}\abs{\ave{h_I}_{I_i}\ave{h_I}_{I_j}\pair{1_{I_i}}{T1_{I_j}}} \\
&\lesssim\mathbb{N}orm{K}{CZ_0}\sum_{j\neq i}\abs{I}^{-1}\int_{I_i}\int_{I_j}\frac{1}{\abs{x-y}^d}\ud x\ud y
+\sum_i\abs{I}^{-1}\abs{\pair{1_{I_i}}{T1_{I_i}}} \\
&\lesssim\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{T}{WBP},
\end{split}
\end{equation*}
by the same estimate as earlier for the first term, and the weak boundedness property for the second.
\end{proof}
With this lemma, the sum $\sigma_{=}$ is recognized as a cancellative dyadic shift of type $(0,0)$, as follows:
\begin{equation*}
\begin{split}
\sigma_{=}
&=\sum_{I\in\mathscr{D}}1_{\operatorname{good}}(I)\cdot\pair{g}{h_I}\pair{h_I}{Th_I}\pair{h_I}{f} \\
&=\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{T}{WBP}\big)\pair{g}{S^{00}f},
\end{split}
\end{equation*}
where the factor in front could also be replaced by $\mathbb{N}orm{T}{L^2\to L^2}$.
For $I$ and $J$ participating in $\sigma_{\operatorname{near}}$, we conclude from Lemma~\ref{lem:IveeJ} that $K:=I\vee J$ satisfies $\ell(K)\leq 2^r\ell(I)$, and hence we may organize
\begin{equation*}
\sigma_{\operatorname{near}}
=\sum_{i=1}^r\sum_{j=1}^i \sum_K\sum_{\substack{I,J:I\vee J=K\\ \dist(I,J)\leq\ell(J)\phi(\ell(I)/\ell(J))\\ \ell(I)=2^{-i}\ell(K) \\ \ell(J)=2^{-j}\ell(K)}},
\end{equation*}
and the innermost sum is recognized as $\mathbb{N}orm{K}{CZ_0}\pair{g}{A^{ij}_K f}$ for some cancellative averaging operator of type $(i,j)$.
Summarizing, we have
\begin{equation*}
\sigma_{=}+\sigma_{\operatorname{near}}
=\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{T}{WBP}\big)\pair{g}{S^{00}f}
+\mathbb{N}orm{K}{CZ_0}\sum_{j=1}^r\sum_{i=j}^r \pair{g}{S^{ij}f},
\end{equation*}
where $S^{00}$ and $S^{ij}$ are cancellative dyadic shifts, and the factor $\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{T}{WBP}\big)$ could also be replaced by $\mathbb{N}orm{T}{L^2\to L^2}$.
\subsection{Synthesis}
We have checked that
\begin{equation*}
\begin{split}
\sum_{\ell(I)\leq\ell(J)} &1_{\operatorname{good}}(I)\pair{g}{h_J}\pair{h_J}{Th_I}\pair{h_I}{f} \\
&=\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{K}{CZ_\psi}\big)\Big(\sum_{1\leq j\leq i<\infty}\phi(2^{-i})^{-d}\psi(2^{-i}\phi(2^{-i})^{-1})\pair{g}{S^{ij}f} \\
&\qquad\qquad+\sum_{1\leq i<\infty}\Psi(2^{-i}\phi(2^{-i})^{-1})\pair{g}{S^{i0}f}\Big) \\
&\qquad+\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{T}{WBP}\big)\pair{g}{S^{00}f}+\pair{g}{\Pi_{T^*1}^*f}
\end{split}
\end{equation*}
where $\Psi(t)=\int_0^t\psi(s)\ud s/s$, $\Pi_{T^*1}$ is a paraproduct---a non-cancellative shift of type $(0,0)$---and all the other $S^{ij}$ are cancellative dyadic shifts of type $(i,j)$.
By symmetry (just observing that the cubes of equal size contributed precisely to the presence of the cancellative shifts of type $(i,i)$, and that the dual of a shift of type $(i,j)$ is a shift of type $(j,i)$), it follows that
\begin{equation*}
\begin{split}
\sum_{\ell(I)>\ell(J)} &1_{\operatorname{good}}(J)\pair{g}{h_J}\pair{h_J}{Th_I}\pair{h_I}{f} \\
&=\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{K}{CZ_\psi}\big)\Big(\sum_{1\leq i<j<\infty}\phi(2^{-j})^{-d}\psi(2^{-j}\phi(2^{-j})^{-1})\pair{g}{S^{ij}f} \\
&\qquad+\sum_{1\leq j<\infty}\Psi(2^{-j}\phi(2^{-j})^{-1})\pair{g}{S^{0j}f}\Big)+\pair{g}{\Pi_{T1}f},
\end{split}
\end{equation*}
so that altogether
\begin{equation*}
\begin{split}
\sum_{I,J} &1_{\operatorname{good}}(\min\{I,J\}) \pair{g}{h_J}\pair{h_J}{Th_I}\pair{h_I}{f} \\
&=\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{K}{CZ_\psi}\big)\Big(\sum_{i=1}^{\infty}\Psi(2^{-i}\phi(2^{-i})^{-1})(\pair{g}{S^{i0}f}+\pair{g}{S^{0i}f}) \\
&\qquad\qquad+\sum_{i,j=1}^{\infty}\phi(2^{-\max(i,j)})^{-d}\psi(2^{-\max(i,j)}\phi(2^{-\max(i,j)})^{-1})\pair{g}{S^{ij}f}\Big)\\
&\qquad+\big(\mathbb{N}orm{K}{CZ_0}+\mathbb{N}orm{T}{WBP}\big)\pair{g}{S^{00}f}+\pair{g}{\Pi_{T1}f}+\pair{g}{\Pi_{T^*1}^*f},
\end{split}
\end{equation*}
and this completes the proof of Theorem~\ref{thm:formula}.
\section{Two-weight theory for dyadic shifts}
Before proceeding further, it is convenient to introduce a useful trick due to E.~Sawyer. Let $\sigma$ be an everywhere positive, finitely-valued function. Then $f\in L^p(w)$ if and only if $\phi=f/\sigma\in L^p(\sigma^p w)$, and they have equal norms in the respective spaces. Hence an inequality
\begin{equation}\label{eq:original}
\mathbb{N}orm{Tf}{L^p(w)}\leq N\mathbb{N}orm{f}{L^p(w)}\qquad\forall f\in L^p(w)
\end{equation}
is equivalent to
\begin{equation*}
\mathbb{N}orm{T(\phi\sigma)}{L^p(w)}\leq N\mathbb{N}orm{\phi\sigma}{L^p(w)}=N\mathbb{N}orm{\phi}{L^p(\sigma^p w)}\qquad\forall \phi\in L^p(\sigma^p w).
\end{equation*}
This is true for any $\sigma$, and we now choose it in such a way that $\sigma^p w=\sigma$, i.e., $\sigma^{p-1}=w^{-1}$ and hence $\sigma=w^{-1/(p-1)}=w^{1-p'}$, where $p'$ is the dual exponent (note that $1-p'=1-\tfrac{p}{p-1}=-\tfrac{1}{p-1}$). So finally \eqref{eq:original} is equivalent to
\begin{equation*}
\mathbb{N}orm{T(\phi\sigma)}{L^p(w)}\leq N\mathbb{N}orm{\phi}{L^p(\sigma)}\qquad\forall \phi\in L^p(\sigma).
\end{equation*}
This formulation has the advantage that the norm on the right and the operator
\begin{equation*}
T(\phi\sigma)(x)=\int K(x,y)\phi(y)\cdot\sigma(y)\ud y
\end{equation*}
involve integration with respect to the same measure $\sigma$. In particular, the $A_2$ theorem is equivalent to
\begin{equation*}
\mathbb{N}orm{T(f\sigma)}{L^2(w)}\leq c_T[w]_{A_2}\mathbb{N}orm{f}{L^2(\sigma)}
\end{equation*}
for all $f\in L^2(\sigma)$, for all $w\in A_2$ and $\sigma=w^{-1}$. But once we know this, we can also study this two-weight inequality in its own right, for two general measures $w$ and $\sigma$, which need not be related by the pointwise relation $\sigma(x)=1/w(x)$.
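A quick computation shows why this is indeed a reformulation of the usual condition: with $\sigma=w^{-1}$, the joint quantity appearing in the theorem below is precisely the classical $A_2$ characteristic, since
\begin{equation*}
\frac{w(Q)\sigma(Q)}{\abs{Q}^2}=\Big(\frac{1}{\abs{Q}}\int_Q w\Big)\Big(\frac{1}{\abs{Q}}\int_Q w^{-1}\Big)=\ave{w}_Q\ave{w^{-1}}_Q\leq[w]_{A_2}.
\end{equation*}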
\begin{theorem}\label{thm:2weight}
Let $\sigma$ and $w$ be two locally finite measures with
\begin{equation*}
[w,\sigma]_{A_2}:=\sup_Q\frac{w(Q)\sigma(Q)}{\abs{Q}^2}<\infty.
\end{equation*}
Then a dyadic shift $S$ of type $(i,j)$ is bounded $S(\sigma\cdot):L^2(\sigma)\to L^2(w)$ if and only if the testing constants
\begin{equation*}
\mathfrak{S}:=\sup_Q\frac{\mathbb{N}orm{1_Q S(\sigma 1_Q)}{L^2(w)}}{\sigma(Q)^{1/2}},\qquad
\mathfrak{S}^*:=\sup_Q\frac{\mathbb{N}orm{1_Q S^*(w 1_Q)}{L^2(\sigma)}}{w(Q)^{1/2}}
\end{equation*}
are finite, and in this case
\begin{equation*}
\mathbb{N}orm{S(\sigma\cdot)}{L^2(\sigma)\to L^2(w)}
\lesssim(1+\kappa)(\mathfrak{S}+\mathfrak{S}^*)+(1+\kappa)^2[w,\sigma]_{A_2}^{1/2},
\end{equation*}
where $\kappa=\max\{i,j\}$.
\end{theorem}
This result from my work with P\'erez, Treil, and Volberg \cite{HPTV} was preceded by an analogous qualitative version due to Nazarov, Treil, and Volberg \cite{NTV:2weightHaar}.
The proof depends on decomposing functions in the spaces $L^2(w)$ and $L^2(\sigma)$ in terms of expansions similar to the Haar expansion in $L^2(\mathbb{R}^d)$. Let $\D^{\sigma}_I$ be the orthogonal projection of $L^2(\sigma)$ onto its subspace of functions supported on $I$, constant on the subcubes of $I$, and with vanishing integral with respect to $\ud\sigma$. Then any two $\D_I^{\sigma}$ are orthogonal to each other. Under the additional assumption that the $\sigma$ measure of quadrants of $\mathbb{R}^d$ is infinite, we have the expansion
\begin{equation*}
f=\sum_{Q\in\mathscr{D}}\D_Q^{\sigma}f
\end{equation*}
for all $f\in L^2(\sigma)$, and Pythagoras' theorem says that
\begin{equation*}
\mathbb{N}orm{f}{L^2(\sigma)}=\Big(\sum_{Q\in\mathscr{D}}\mathbb{N}orm{\D_Q^{\sigma}f}{L^2(\sigma)}^2\Big)^{1/2}.
\end{equation*}
(These formulae need a slight adjustment if the $\sigma$ measure of quadrants is finite; Theorem~\ref{thm:2weight} remains true without this extra assumption.) Let us also write
\begin{equation*}
\D^{\sigma,i}_K:=\sum_{\substack{I\subseteq K\\ \ell(I)=2^{-i}\ell(K)}}\D^{\sigma}_I.
\end{equation*}
For a fixed $i\in\mathbb{N}$, these are also orthogonal to each other, and the above formulae generalize to
\begin{equation*}
f=\sum_{Q\in\mathscr{D}}\D_Q^{\sigma,i}f,\qquad\mathbb{N}orm{f}{L^2(\sigma)}=\Big(\sum_{Q\in\mathscr{D}}\mathbb{N}orm{\D_Q^{\sigma,i}f}{L^2(\sigma)}^2\Big)^{1/2}.
\end{equation*}
The proof is in fact very similar in spirit to that of Theorem~\ref{thm:formula}; it is another $T1$ argument, but now with respect to the measures $\sigma$ and $w$ in place of the Lebesgue measure. We hence expand
\begin{equation*}
\pair{g}{S(\sigma f)}_w
=\sum_{Q,R\in\mathscr{D}}\pair{\D_R^w g}{S(\sigma\D_Q^{\sigma}f)}_w,\qquad f\in L^2(\sigma),\ g\in L^2(w),
\end{equation*}
and estimate the matrix coefficients
\begin{equation}\label{eq:2weightMatrix}
\begin{split}
\pair{\D_R^{w}g}{S(\sigma \D^{\sigma}_Q f)}_{w}
&=\sum_K\pair{\D^w_R g}{A_K(\sigma\D^{\sigma}_Q f)}_w \\
&=\sum_K\sum_{I,J\subseteq K}a_{IJK}\pair{\D^w_R g}{h_J}_w\pair{h_I}{\D^{\sigma}_Q f}_{\sigma}.
\end{split}
\end{equation}
For $\pair{h_I}{\D^{\sigma}_Q f}_{\sigma}\neq 0$, there must hold $I\cap Q\neq\varnothing$, thus $I\subseteq Q$ or $Q\subsetneq I$. But in the latter case $h_I$ is constant on $Q$, while $\int\D^{\sigma}_Q f\cdot\sigma=0$, so the pairing vanishes even in this case. Thus the only nonzero contributions come from $I\subseteq Q$, and similarly from $J\subseteq R$. Since $I,J\subseteq K$, there holds
\begin{equation*}
\big(I\subseteq Q\subsetneq K\quad\text{or}\quad K\subseteq Q\big)\qquad\text{and}\qquad\big(J\subseteq R\subsetneq K\quad\text{or}\quad K\subseteq R\big).
\end{equation*}
\subsection{Disjoint cubes}
Suppose now that $Q\cap R=\varnothing$, and let $K$ be among those cubes for which $A_K$ gives a nontrivial contribution above. Then it cannot be that $K\subseteq Q$, since this would imply that $Q\cap R\supseteq K\cap J=J\neq\varnothing$, and similarly it cannot be that $K\subseteq R$. Thus $Q,R\subsetneq K$, and hence
\begin{equation*}
Q\vee R\subseteq K.
\end{equation*}
Then
\begin{align*}
\abs{\pair{\D_R^w g}{S(\sigma\D^{\sigma}_Q f)}_w}
&\leq \sum_{K\supseteq Q\vee R}\abs{\pair{\D^w_R g}{A_K(\sigma\D^{\sigma}_Q f)}_{w}} \\
&\lesssim \sum_{K\supseteq Q\vee R} \frac{\mathbb{N}orm{\D^w_R g}{L^1(w)} \mathbb{N}orm{\D^{\sigma}_Q f}{L^1(\sigma)}}{\abs{K}} \\
&\lesssim \frac{\mathbb{N}orm{\D^w_R g}{L^1(w)} \mathbb{N}orm{\D^{\sigma}_Q f}{L^1(\sigma)}}{\abs{Q\vee R}}.
\end{align*}
On the other hand, we have $Q\supseteq I$, $R\supseteq J$ for some $I,J\subseteq K$ with $\ell(I)=2^{-i}\ell(K)$ and $\ell(J)=2^{-j}\ell(K)$. Hence $2^{-i}\ell(K)\leq\ell(Q)$ and $2^{-j}\ell(K)\leq\ell(R)$, and thus
\begin{equation*}
Q\vee R\subseteq K\subseteq Q^{(i)}\cap R^{(j)}.
\end{equation*}
Now it is possible to estimate the total contribution of the part of the matrix with $Q\cap R=\varnothing$. Let $P:=Q\vee R$ be a new auxiliary summation variable. Then $Q,R\subset P$, and $\ell(Q)=2^{-a}\ell(P)$, $\ell(R)=2^{-b}\ell(P)$ where $a=1,\ldots,i$, $b=1,\ldots,j$.
Thus
\begin{align*}
\sum_{\substack{Q,R\in\mathscr{D}\\ Q\cap R=\varnothing}} &\abs{\pair{\D_R^{w}g}{S(\sigma\D^{\sigma}_Q f)}_{w}} \\
&\lesssim\sum_{a=1}^i\sum_{b=1}^j\sum_{P\in\mathscr{D}}\frac{1}{\abs{P}}
\sum_{\substack{Q,R\in\mathscr{D}:Q\vee R=P\\ \ell(Q)=2^{-a}\ell(P)\\ \ell(R)=2^{-b}\ell(P)}}\mathbb{N}orm{\D^w_R g}{L^1(w)} \mathbb{N}orm{\D^{\sigma}_Q f}{L^1(\sigma)} \\
&\leq\sum_{a,b=1}^{i,j}\sum_{P\in\mathscr{D}}\frac{1}{\abs{P}}\sum_{\substack{R\subseteq P\\ \ell(R)=2^{-b}\ell(P)}}\mathbb{N}orm{\D^w_R g}{L^1(w)}
\sum_{\substack{Q\subseteq P\\ \ell(Q)=2^{-a}\ell(P)}}\mathbb{N}orm{\D^{\sigma}_{Q}f}{L^1(\sigma)} \\
&=\sum_{a,b=1}^{i,j}\sum_{P\in\mathscr{D}}\frac{1}{\abs{P}}\BNorm{\sum_{\substack{R\subseteq P\\ \ell(R)=2^{-b}\ell(P)}}\D^w_R g}{L^1(w)}
\BNorm{\sum_{\substack{Q\subseteq P\\ \ell(Q)=2^{-a}\ell(P)}}\D^{\sigma}_{Q}f}{L^1(\sigma)}\\
&\qquad\qquad\qquad\text{(by disjoint supports)} \\
&=\sum_{a,b=1}^{i,j}\sum_{P\in\mathscr{D}}\frac{1}{\abs{P}}\mathbb{N}orm{\D^{w,b}_{P}g}{L^1(w)}\mathbb{N}orm{\D^{\sigma,a}_{P}f}{L^1(\sigma)} \\
&\leq\sum_{a,b=1}^{i,j}\sum_{P\in\mathscr{D}}\frac{\sigma(P)^{1/2}w(P)^{1/2}}{\abs{P}}
\mathbb{N}orm{\D^{w,b}_{P}g}{L^2(w)}\mathbb{N}orm{\D^{\sigma,a}_{P}f}{L^2(\sigma)}\\
&\leq\sum_{a,b=1}^{i,j}[w,\sigma]_{A_2}^{1/2}\Big(\sum_{P\in\mathscr{D}}\mathbb{N}orm{\D^{w,b}_{P}g}{L^2(w)}^2\Big)^{1/2}
\Big(\sum_{P\in\mathscr{D}}\mathbb{N}orm{\D^{\sigma,a}_{P}f}{L^2(\sigma)}^2\Big)^{1/2} \\
&\leq ij[w,\sigma]_{A_2}^{1/2}\mathbb{N}orm{g}{L^2(w)}\mathbb{N}orm{f}{L^2(\sigma)}.
\end{align*}
\subsection{Deeply contained cubes}
Consider now the part of the sum with $Q\subset R$ and $\ell(Q)<2^{-i}\ell(R)$. (The part with $R\subset Q$ and $\ell(R)<2^{-j}\ell(Q)$ would be handled in a symmetrical manner.)
\begin{lemma}\label{lem:contCubesAlgebra}
For all $Q\subset R$ with $\ell(Q)<2^{-i}\ell(R)$, we have
\begin{equation*}
\pair{\D^w_R g}{S(\sigma\D^{\sigma}_Q f)}_{w}
=\ave{\D^w_R g}_{Q^{(i)}} \pair{S^*(w1_{Q^{(i)}})}{\D^{\sigma}_Q f}_{\sigma},
\end{equation*}
where further
\begin{equation*}
\D^{\sigma}_Q S^*(w1_{Q^{(i)}})
=\D^{\sigma}_Q S^*(w1_{P})\qquad\text{for any }P\supseteq Q^{(i)}.
\end{equation*}
\end{lemma}
Recall that $\D^{\sigma}_Q=(\D^{\sigma}_Q)^2=(\D^{\sigma}_Q)^*$ is an orthogonal projection on $L^2(\sigma)$, so that it can be moved to either or both sides of the pairing $\pair{\ }{\ }_{\sigma}$.
\begin{proof}
Recall formula~\eqref{eq:2weightMatrix}.
If $\pair{h_I}{\D_Q^{\sigma} f}_{\sigma}$ is nonzero, then $I\subseteq Q$, and hence
\begin{equation*}
J\subseteq K=I^{(i)}\subseteq Q^{(i)}\subsetneq R
\end{equation*}
for all $J$ participating in the same $A_K$ as $I$. Thus $\D^w_R g$ is constant on $Q^{(i)}$, hence
\begin{equation*}
\begin{split}
\pair{\D^w_R g}{A_K(\sigma\D^{\sigma}_Q f)}_{w}
&=\pair{1_{Q^{(i)}}\D^w_R g}{A_K(\sigma\D^{\sigma}_Q f)}_{w} \\
&=\ave{\D^w_R g}_{Q^{(i)}}^w \pair{1_{Q^{(i)}}}{A_K(\sigma\D^{\sigma}_Q f)}_{w} \\
&=\ave{\D^w_R g}_{Q^{(i)}}^w \pair{A_K^*(w1_{Q^{(i)}})}{\D^{\sigma}_Q f}_{\sigma}.
\end{split}
\end{equation*}
Moreover, for any $P\supseteq Q^{(i)}\supseteq K$,
\begin{equation*}
\begin{split}
\pair{\D^{\sigma}_Q A_K^*(w1_{Q^{(i)}})}{f}_{\sigma}
&=\pair{1_{Q^{(i)}}}{A_K(\sigma\D^{\sigma}_Q f)}_{w} \\
&=\int A_K(\sigma\D^{\sigma}_Q f)w \\
&=\pair{1_P}{A_K(\sigma\D^{\sigma}_Q f)}_w
=\pair{\D^{\sigma}_QA_K^*(w1_P)}{ f}_{\sigma}.
\end{split}
\end{equation*}
Summing these equalities over all relevant $K$, and using $S=\sum_K A_K$, gives the claim.
\end{proof}
By the lemma, we can then manipulate
\begin{align*}
\sum_{\substack{Q,R:Q\subset R\\ \ell(Q)<2^{-i}\ell(R)}} &\pair{\D^w_R g}{S(\sigma\D^{\sigma}_Q f)}_{w} \\
&=\sum_Q\Big(\sum_{R\supsetneq Q^{(i)}}\ave{\D^w_R g}_{Q^{(i)}}^w\Big)\pair{S^*(w1_{Q^{(i)}})}{\D^{\sigma}_Q f}_{\sigma} \\
&=\sum_Q\ave{g}^{w}_{Q^{(i)}}\pair{S^*(w1_{Q^{(i)}})}{\D^{\sigma}_Q f}_{\sigma} \\
&=\sum_R\ave{g}^w_R\Bpair{S^*(w1_R)}{\sum_{\substack{Q\subseteq R \\ \ell(Q)=2^{-i}\ell(R)}}\D^{\sigma}_Q f}_{\sigma} \\
&=\sum_R\ave{g}^w_R\Bpair{S^*(w 1_R)}{\D^{\sigma,i}_R f}_{\sigma},
\end{align*}
where $\ave{g}^w_R:=w(R)^{-1}\int_R g\cdot w$ is the average of $g$ on $R$ with respect to the $w$ measure.
By using the properties of the pairwise orthogonal projections $\D^{\sigma,i}_R$ on $L^2(\sigma)$,
the above series may be estimated as follows:
\begin{align*}
&\Babs{\sum_{\substack{Q,R:Q\subset R\\ \ell(Q)<2^{-i}\ell(R)}} \pair{\D^w_R g}{S(\sigma\D^{\sigma}_Q f)}_{w}} \\
&\leq\sum_R\abs{\ave{g}^w_R}\mathbb{N}orm{\D^{\sigma,i}_R S^*(w1_R)}{L^2(\sigma)}\mathbb{N}orm{\D^{\sigma,i}_R f}{L^2(\sigma)} \\
&\leq\Big(\sum_R\abs{\ave{g}^w_R}^2\mathbb{N}orm{\D^{\sigma,i}_R S^*(w1_R)}{L^2(\sigma)}^2\Big)^{1/2}
\Big(\sum_R\mathbb{N}orm{\D^{\sigma,i}_R f}{L^2(\sigma)}^2\Big)^{1/2},
\end{align*}
where the last factor is equal to $\mathbb{N}orm{f}{L^2(\sigma)}$.
The first factor on the right is handled by the dyadic Carleson embedding theorem: It follows from the second equality of Lemma~\ref{lem:contCubesAlgebra}, namely $\D^{\sigma}_Q S^*(w1_{Q^{(i)}})=\D^{\sigma}_Q S^*(w1_P)$ for all $P\supseteq Q^{(i)}$, that $\D^{\sigma,i}_R S^*(w1_R)=\D^{\sigma,i}_R S^*(w1_P)$ for all $P\supseteq R$. Hence, we have
\begin{equation*}
\begin{split}
\sum_{R\subseteq P}\mathbb{N}orm{\D^{\sigma,i}_R S^*(w 1_R)}{L^2(\sigma)}^2
&=\sum_{R\subseteq P}\mathbb{N}orm{\D^{\sigma,i}_R(1_P S^*(w 1_P))}{L^2(\sigma)}^2 \\
&\leq\mathbb{N}orm{1_P S^*(w 1_P)}{L^2(\sigma)}^2\leq(\mathfrak{S}^*)^2 w(P)
\end{split}
\end{equation*}
by the (dual) testing estimate for the dyadic shifts. By the Carleson embedding theorem, it then follows that
\begin{equation*}
\Big(\sum_R\abs{\ave{g}^w_R}^2 \mathbb{N}orm{\D^{\sigma,i}_R S^*(w 1_R)}{L^2(\sigma)}^2\Big)^{1/2}
\lesssim \mathfrak{S}^*\mathbb{N}orm{g}{L^2(w)},
\end{equation*}
and the estimation of the deeply contained cubes is finished.
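For the reader's convenience, we record the form of the dyadic Carleson embedding theorem used in the last step (stated with the standard constant): if $\mu$ is a locally finite measure and the nonnegative coefficients $(a_R)_{R\in\mathscr{D}}$ satisfy the Carleson condition $\sum_{R\subseteq P}a_R\leq A\,\mu(P)$ for every $P\in\mathscr{D}$, then
\begin{equation*}
\sum_{R\in\mathscr{D}}a_R\abs{\ave{g}^{\mu}_R}^2\leq 4A\mathbb{N}orm{g}{L^2(\mu)}^2\qquad\text{for all }g\in L^2(\mu).
\end{equation*}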
\subsection{Contained cubes of comparable size}
It remains to estimate
\begin{equation*}
\sum_{\substack{Q,R:Q\subseteq R\\ \ell(Q)\geq 2^{-i}\ell(R)}}\pair{\D^w_R g}{S(\sigma\D^{\sigma}_Q f)}_{w};
\end{equation*}
the sum over $R\subsetneq Q$ with $\ell(R)\geq 2^{-j}\ell(Q)$ would be handled in a symmetric manner. The sum of interest may be written as
\begin{equation*}
\sum_{a=0}^i\sum_R\sum_{\substack{Q\subseteq R\\ \ell(Q)=2^{-a}\ell(R)}}\pair{\D^w_R g}{S(\sigma\D^{\sigma}_Q f)}_{w}
=\sum_{a=0}^i\sum_R\pair{\D^w_R g}{S(\sigma\D^{\sigma,a}_R f)}_{w},
\end{equation*}
and
\begin{equation*}
\pair{\D^w_R g}{S(\sigma\D^{\sigma,a}_R f)}_{w}
=\sum_{k=1}^{2^d}\ave{\D^w_R g}_{R_k}\pair{S^*(w 1_{R_k})}{\D^{\sigma,a}_R f}_{\sigma}
\end{equation*}
where the $R_k$ are the $2^d$ dyadic children of $R$, and $\ave{\D^w_R g}_{R_k}$ is the constant value of $\D^w_R g$ on $R_k$. Now
\begin{equation*}
\pair{S^*(w 1_{R_k})}{\D^{\sigma,a}_R f}_{\sigma}
=\pair{1_{R_k}S^*(w 1_{R_k})}{\D^{\sigma,a}_R f}_{\sigma}+\pair{S^*(w 1_{R_k})}{1_{R_k^c}\D^{\sigma,a}_R f}_{\sigma},
\end{equation*}
where
\begin{equation*}
\abs{\pair{1_{R_k}S^*(w 1_{R_k})}{\D^{\sigma,a}_R f}_{\sigma}}
\leq\mathfrak{S}^* w(R_k)^{1/2}\mathbb{N}orm{\D^{\sigma,a}_R f}{L^2(\sigma)}
\end{equation*}
and, observing that only those $A_K^*$ where $K$ intersects both $R_k$ and $R_k^c$ contribute to the second part,
\begin{equation*}
\begin{split}
\abs{\pair{S^*(w 1_{R_k})}{1_{R_k^c}\D^{\sigma,a}_R f}_{\sigma}}
&=\Babs{\sum_{K\supsetneq R_k}\pair{A_K^*(w 1_{R_k})}{1_{R_k^c}\D^{\sigma,a}_R f}_{\sigma}} \\
&\lesssim\sum_{K\supseteq R}\frac{1}{\abs{K}}w(R_k)\mathbb{N}orm{\D^{\sigma,a}_R f}{L^1(\sigma)} \\
&\lesssim\frac{1}{\abs{R}}w(R_k)\sigma(R)^{1/2}\mathbb{N}orm{\D^{\sigma,a}_R f}{L^2(\sigma)} \\
&\leq\frac{w(R)^{1/2}\sigma(R)^{1/2}}{\abs{R}}w(R_k)^{1/2}\mathbb{N}orm{\D^{\sigma,a}_R f}{L^2(\sigma)} \\
&\leq[w,\sigma]_{A_2}^{1/2}w(R_k)^{1/2}\mathbb{N}orm{\D^{\sigma,a}_R f}{L^2(\sigma)}.
\end{split}
\end{equation*}
It follows that
\begin{equation*}
\abs{\pair{S^*(w 1_{R_k})}{\D^{\sigma,a}_R f}_{\sigma}}
\lesssim(\mathfrak{S}^* +[w,\sigma]_{A_2}^{1/2})w(R_k)^{1/2}\mathbb{N}orm{\D^{\sigma,a}_R f}{L^2(\sigma)}
\end{equation*}
and hence
\begin{equation*}
\abs{\pair{\D_R^w g}{S(\sigma\D^{\sigma,a}_R f)}_{w}}
\lesssim(\mathfrak{S}^* +[w,\sigma]_{A_2}^{1/2})\mathbb{N}orm{\D^w_R g}{L^2(w)}\mathbb{N}orm{\D^{\sigma,a}_R f}{L^2(\sigma)}.
\end{equation*}
Finally,
\begin{align*}
&\sum_{a=0}^i\sum_R \abs{\pair{\D_R^w g}{S(\sigma\D^{\sigma,a}_R f)}_{w}} \\
&\lesssim (\mathfrak{S}^* +[w,\sigma]_{A_2}^{1/2})\sum_{a=0}^i\Big(\sum_R\mathbb{N}orm{\D^w_R g}{L^2(w)}^2\Big)^{1/2}
\Big(\sum_R\mathbb{N}orm{\D^{\sigma,a}_R f}{L^2(\sigma)}^2\Big)^{1/2} \\
&\leq(1+i)(\mathfrak{S}^* +[w,\sigma]_{A_2}^{1/2})\mathbb{N}orm{g}{L^2(w)}\mathbb{N}orm{f}{L^2(\sigma)}.
\end{align*}
The symmetric case with $R\subset Q$ and $\ell(R)\geq 2^{-j}\ell(Q)$ similarly yields the factor $(1+j)(\mathfrak{S} +[w,\sigma]_{A_2}^{1/2})$. This completes the proof of Theorem~\ref{thm:2weight}.
\section{Final decompositions: verification of the testing conditions}
We now turn to the estimation of the testing constant
\begin{equation*}
\mathfrak{S}:=\sup_{Q\in\mathscr{D}}\frac{\mathbb{N}orm{1_Q S(\sigma 1_Q)}{L^2(w)}}{\sigma(Q)^{1/2}}.
\end{equation*}
Bounding $\mathfrak{S}_*$ is analogous by exchanging the roles of $w$ and $\sigma$.
\subsection{Several splittings}
First observe that
\begin{equation*}
1_Q S(\sigma 1_Q)
=1_Q\sum_{K:K\cap Q\neq\varnothing}A_K(\sigma 1_Q)
=1_Q\sum_{K\subseteq Q}A_K(\sigma 1_Q)+1_Q\sum_{K\supsetneq Q}A_K(\sigma 1_Q).
\end{equation*}
The second part is immediate to estimate even pointwise by
\begin{equation*}
\abs{1_Q A_K(\sigma 1_Q)}\leq 1_Q\frac{\sigma(Q)}{\abs{K}},\qquad\sum_{K\supsetneq Q}\frac{1}{\abs{K}}\leq\frac{1}{\abs{Q}},
\end{equation*}
and hence its $L^2(w)$ norm is bounded by
\begin{equation*}
\BNorm{1_Q\frac{\sigma(Q)}{\abs{Q}}}{L^2(w)}
=\frac{w(Q)^{1/2}\sigma(Q)}{\abs{Q}}\leq[w,\sigma]_{A_2}^{1/2}\sigma(Q)^{1/2}.
\end{equation*}
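The geometric sum used in the pointwise bound above is elementary: the dyadic cubes strictly containing $Q$ are exactly the ancestors $K=Q^{(m)}$, $m\geq 1$, with $\abs{Q^{(m)}}=2^{md}\abs{Q}$, so that
\begin{equation*}
\sum_{K\supsetneq Q}\frac{1}{\abs{K}}=\sum_{m=1}^{\infty}\frac{1}{2^{md}\abs{Q}}=\frac{1}{(2^d-1)\abs{Q}}\leq\frac{1}{\abs{Q}}.
\end{equation*}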
So it remains to concentrate on $K\subseteq Q$, and we perform several consecutive splittings of this collection of cubes. First, we \textbf{separate scales} by introducing the splitting according to the $\kappa+1$ possible values of $\log_2\ell(K)\mod(\kappa+1)$. We denote a generic choice of such a collection by
\begin{equation*}
\mathscr{K}=\mathscr{K}_k:=\{K\subseteq Q:\log_2\ell(K)\equiv k\mod(\kappa+1)\},
\end{equation*}
where $k$ is arbitrary but fixed. (We will drop the subscript $k$, since its value plays no role in the subsequent argument.) Next, we \textbf{freeze the $A_2$ characteristic} by setting
\begin{equation*}
\mathscr{K}^a:=\Big\{K\in\mathscr{K}: 2^{a-1}<\frac{w(K)\sigma(K)}{\abs{K}^2}\leq 2^a\Big\},\qquad a\in\mathbb{Z},\quad a\leq\ceil{\log_2[w,\sigma]_{A_2}},
\end{equation*}
where $\ceil{\ }$ means rounding up to the next integer.
In the next step, we \textbf{choose the principal cubes} $P\in\mathscr{P}^a\subseteq\mathscr{K}^a$.
This construction was first introduced by B. Muckenhoupt and R. Wheeden \cite{MW:77}, and it has been influential ever since.
Let $\mathscr{P}^a_0$ consist of all maximal cubes in $\mathscr{K}^a$, and inductively let $\mathscr{P}^a_{p+1}$ consist of all maximal $P'\in\mathscr{K}^a$ such that
\begin{equation*}
P'\subset P\in\mathscr{P}^a_p,\qquad \frac{\sigma(P')}{\abs{P'}}>2\frac{\sigma(P)}{\abs{P}}.
\end{equation*}
Finally, let $\mathscr{P}^a:=\bigcup_{p=0}^{\infty}\mathscr{P}^a_p$.
For each $K\in\mathscr{K}^a$, let $\Pi^a(K)$ denote the minimal $P\in\mathscr{P}^a$ such that $K\subseteq P$. Then we set
\begin{equation*}
\mathscr{K}^a(P):=\{K\in\mathscr{K}^a:\Pi^a(K)=P\},\qquad P\in\mathscr{P}^a.
\end{equation*}
Note that $\sigma(K)/\abs{K}\leq 2\sigma(P)/\abs{P}$ for all $K\in\mathscr{K}^a(P)$, which allows us to \textbf{freeze the $\sigma$-to-Lebesgue measure ratio} by the final subcollections
\begin{equation*}
\mathscr{K}^a_b(P):=\Big\{K\in\mathscr{K}^a(P): 2^{-b}<\frac{\sigma(K)}{\abs{K}}\frac{\abs{P}}{\sigma(P)}\leq 2^{1-b}\Big\},\qquad b\in\mathbb{N}.
\end{equation*}
We have
\begin{equation*}
\begin{split}
&\{K\in\mathscr{D}:K\subseteq Q\}
=\bigcup_{k=0}^{\kappa}\mathscr{K}_k,\qquad
\mathscr{K}_k=\mathscr{K}=\bigcup_{a\leq\ceil{\log_2[w,\sigma]_{A_2}}}\mathscr{K}^a,\\
&\mathscr{K}^a=\bigcup_{P\in\mathscr{P}^a}\mathscr{K}^a(P),\qquad
\mathscr{K}^a(P)=\bigcup_{b=0}^{\infty}\mathscr{K}^a_b(P),
\end{split}
\end{equation*}
where all unions are disjoint. Note that we drop the reference to the separation-of-scales parameter $k$, since this plays no role in the forthcoming arguments. Recalling the notation for subshifts $S_{\mathscr{Q}}=\sum_{K\in\mathscr{Q}}A_K$, this splitting of collections of cubes leads to the splitting of the function
\begin{equation*}
\sum_{K\subseteq Q}A_K(\sigma 1_Q)=\sum_{k=0}^{\kappa}\sum_{a\leq\ceil{\log_2[w,\sigma]_{A_2}}}\sum_{P\in\mathscr{P}^a}\sum_{b=0}^{\infty}S_{\mathscr{K}^a_b(P)}(\sigma 1_Q).
\end{equation*}
On the level of the function, we split one more time to write
\begin{equation*}
\begin{split}
S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)
&=\sum_{n=0}^{\infty} 1_{E^a_b(P,n)}S_{\mathscr{K}^a_b(P)}(\sigma 1_Q),\\
E^a_b(P,n) &:=\{x\in\mathbb{R}^d:n2^{-b}\ave{\sigma}_P<\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)(x)}\leq (n+1)2^{-b}\ave{\sigma}_P\}.
\end{split}
\end{equation*}
This final splitting, from \cite{HLMORSU}, is not strictly `necessary' in that it was not part of the original argument in \cite{Hytonen:A2}, nor its predecessor in \cite{LPR}, which made instead more careful use of the cubes where $S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)$ stays constant; however, it now seems that this splitting provides another simplification of the argument.
Now all relevant cancellation is inside the functions $S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)$, so that we can simply estimate by the triangle inequality:
\begin{equation*}
\begin{split}
&\Babs{\sum_{K\subseteq Q}A_K(\sigma 1_Q)} \\
&\quad\leq\sum_{k=0}^{\kappa}\sum_{a}\sum_{P\in\mathscr{P}^a}\sum_{b=0}^{\infty}\sum_{n=0}^{\infty}(1+n)2^{-b}
\ave{\sigma}_P 1_{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}},
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
&\BNorm{\sum_{K\subseteq Q}A_K(\sigma 1_Q)}{L^2(w)} \\
&\leq\sum_{k=0}^{\kappa}\sum_{a}\sum_{b=0}^{\infty}2^{-b}\sum_{n=0}^{\infty}(1+n)
\BNorm{\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P 1_{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}}}{L^2(w)}.
\end{split}
\end{equation*}
Obviously, we will need good estimates to be able to sum up these infinite series.
Write the last norm as
\begin{equation*}
\Big(\int\Big[\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P 1_{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}}(x)\Big]^2 \ud w(x)\Big)^{1/2},
\end{equation*}
observe that
\begin{equation*}
\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}\subseteq P,
\end{equation*}
and look at the integrand at a fixed point $x\in\mathbb{R}^d$. At this point we sum over a subset of those values of $\ave{\sigma}_P$ for which the principal cube $P\owns x$. Let $P_0$ be the smallest such cube with $\abs{S_{\mathscr{K}^a_b(P_0)}(\sigma 1_Q)(x)}>n2^{-b}\ave{\sigma}_{P_0}$, let $P_1$ be the next smallest, and so on. Then $\ave{\sigma}_{P_m}<2^{-1}\ave{\sigma}_{P_{m-1}}<\ldots<2^{-m}\ave{\sigma}_{P_0}$ by the construction of the principal cubes, and hence
\begin{equation*}
\begin{split}
\Big[\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P 1_{\{\abs{S_{\mathscr{K}^a_b(P)}}>n2^{-b}\ave{\sigma}_P\}}(x)\Big]^2
&=\Big[\sum_{m=0}^{\infty}\ave{\sigma}_{P_m}\Big]^2 \\
&\leq\Big[\sum_{m=0}^{\infty}2^{-m}\ave{\sigma}_{P_0}\Big]^2
=4\ave{\sigma}_{P_0}^2 \\
&\leq 4\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P^2 1_{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}}(x).
\end{split}
\end{equation*}
Hence
\begin{equation*}
\begin{split}
&\BNorm{\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P 1_{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}}}{L^2(w)} \\
&\leq \Big(\int\Big[4\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P^2 1_{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}}\Big]w\Big)^{1/2} \\
&=2\Big(\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P^2 w(\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\})\Big)^{1/2},
\end{split}
\end{equation*}
and it remains to obtain good estimates for the measure of the level sets
\begin{equation*}
\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}.
\end{equation*}
\subsection{Weak-type and John--Nirenberg-style estimates}
We still need to estimate the sets above. Recall that $S_{\mathscr{K}^a_b(P)}$ is a subshift of $S$; in particular, its scales are separated, in the sense that $\log_2\ell(K)\equiv k\mod (\kappa+1)$ for all $K$ such that $A_K$ participates nontrivially in $S_{\mathscr{K}^a_b(P)}$, where $k\in\{0,1,\ldots,\kappa\}$ is fixed and $\kappa:=\max\{i,j\}$, $S$ being of type $(i,j)$. The following estimate deals with such subshifts, which we simply denote by $S$.
\begin{proposition}
Let $S$ be a dyadic shift of type $(i,j)$ with scales separated. Then
\begin{equation*}
\abs{\{\abs{Sf}>\lambda\}}\leq\frac{C}{\lambda}\mathbb{N}orm{f}{1},\qquad\forall\lambda>0,
\end{equation*}
where $C$ depends only on the dimension.
\end{proposition}
\begin{proof}
The proof uses the classical Calder\'on--Zygmund decomposition:
\begin{equation*}
f=g+b,\qquad b:=\sum_{L\in\mathscr{B}}b_L:=\sum_{L\in \mathscr{B}}1_L(f-\ave{f}_L),
\end{equation*}
where $L\in\mathscr{B}$ are the maximal dyadic cubes with $\ave{|f|}_L>\lambda$; hence $\ave{|f|}_L\leq 2^d\lambda$. As usual,
\begin{equation*}
g=f-b=1_{\big(\bigcup\mathscr{B}\big)^c}f+\sum_{L\in\mathscr{B}}1_L\ave{f}_L
\end{equation*}
satisfies $\mathbb{N}orm{g}{\infty}\leq 2^d\lambda$ and $\mathbb{N}orm{g}{1}\leq\mathbb{N}orm{f}{1}$, hence $\mathbb{N}orm{g}{2}^2\leq\mathbb{N}orm{g}{\infty}\mathbb{N}orm{g}{1}\leq 2^{d}\lambda\mathbb{N}orm{f}{1}$, and thus
\begin{equation*}
\abs{\{\abs{Sg}>\tfrac12\lambda\}}\leq\frac{4}{\lambda^2}\mathbb{N}orm{Sg}{2}^2
\leq\frac{4}{\lambda^2}\mathbb{N}orm{g}{2}^2\leq 4\cdot 2^d\frac{1}{\lambda}\mathbb{N}orm{f}{1}.
\end{equation*}
It remains to estimate $\{\abs{Sb}>\tfrac12\lambda\}$. First observe that
\begin{equation*}
Sb=\sum_{K\in\mathscr{D}}\sum_{L\in\mathscr{B}}A_K b_L
=\sum_{L\in\mathscr{B}}\Big(\sum_{K\subseteq L}A_K b_L+\sum_{K\supsetneq L}A_K b_L\Big),
\end{equation*}
since $A_K b_L\neq 0$ only if $K\cap L\neq\varnothing$. Now
\begin{equation*}
\begin{split}
\abs{\{\abs{Sb}>\tfrac12\lambda\}}
&\leq\Babs{\Big\{\Babs{\sum_{L\in\mathscr{B}}\sum_{K\subseteq L}A_K b_L}>0\Big\}}
+\Babs{\Big\{\Babs{\sum_{L\in\mathscr{B}}\sum_{K\supsetneq L}A_K b_L}>\tfrac12\lambda\Big\}} \\
&\leq\sum_{L\in\mathscr{B}}\abs{L}+\frac{2}{\lambda}\BNorm{\sum_{L\in\mathscr{B}}\sum_{K\supsetneq L}A_K b_L}{1} \\
&\leq\frac{1}{\lambda}\mathbb{N}orm{f}{1}+\frac{2}{\lambda}\sum_{L\in\mathscr{B}}\sum_{K\supsetneq L}\mathbb{N}orm{A_K b_L}{1},
\end{split}
\end{equation*}
where we used the elementary properties of the Calder\'on--Zygmund decomposition to estimate the first term.
For the remaining double sum, we still need some observations. Recall that
\begin{equation*}
A_K b_L=\sum_{\substack{I,J\subseteq K \\ \ell(I)=2^{-i}\ell(K)\\ \ell(J)=2^{-j}\ell(K)}}a_{IJK}h_I\pair{h_J}{b_L}.
\end{equation*}
Now, if $\ell(K)>2^{\kappa}\ell(L)\geq 2^j\ell(L)$, then $\ell(J)>\ell(L)$, and hence $h_J$ is constant on $L$. But the integral of $b_L$ vanishes, hence $\pair{h_J}{b_L}=0$ for all relevant $J$, and thus $A_K b_L=0$ whenever $\ell(K)>2^\kappa\ell(L)$.
Thus, in the inner sum, the only possible nonzero terms are $A_K b_L$ for $K=L^{(m)}$ for $m=1,\ldots,\kappa$. By the separation of scales, at most one of these terms is nonzero, and we write $\tilde{L}$ for the corresponding unique $K$. So in fact
\begin{equation*}
\frac{2}{\lambda}\sum_{L\in\mathscr{B}}\sum_{K\supsetneq L}\mathbb{N}orm{A_K b_L}{1}
=\frac{2}{\lambda}\sum_{L\in\mathscr{B}}\mathbb{N}orm{A_{\tilde L} b_L}{1}
\leq\frac{2}{\lambda}\sum_{L\in\mathscr{B}}\mathbb{N}orm{b_L}{1}
\leq\frac{2}{\lambda}\cdot 2\mathbb{N}orm{f}{1}=\frac{4}{\lambda}\mathbb{N}orm{f}{1}
\end{equation*}
by using the normalized boundedness of the averaging operators $A_{\tilde L}$ on $L^1(\mathbb{R}^d)$, and an elementary estimate for the bad part of the Calder\'on--Zygmund decomposition.
Altogether, we obtain the claim with $C=4\cdot 2^d+5$.
\end{proof}
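The two elementary properties of the Calder\'on--Zygmund decomposition invoked in the proof -- that the bad cubes satisfy $\sum_L\abs{L}\leq\lambda^{-1}\Norm{f}{1}$, and that each $b_L$ has vanishing integral with $\sum_L\Norm{b_L}{1}\leq 2\Norm{f}{1}$ -- are easy to check concretely. The following Python sketch (an illustration of ours, not part of the proof; the discretization on $[0,1)$ and all function names are our own) selects the maximal bad dyadic intervals at height $\lambda$ and forms the mean-zero bad parts:

```python
def cz_decomposition(f, lam):
    """Dyadic Calderon-Zygmund decomposition of f at height lam on [0, 1).

    f is a list of 2**k samples, one per finest dyadic cell.  Returns the
    maximal dyadic intervals (lo, hi) (as index ranges) on which the average
    of |f| exceeds lam, together with the mean-zero bad parts b_L.
    """
    n = len(f)
    bad, stack = [], [(0, n)]
    while stack:
        lo, hi = stack.pop()
        avg = sum(abs(x) for x in f[lo:hi]) / (hi - lo)
        if avg > lam and hi - lo < n:   # maximal bad interval: record and stop
            bad.append((lo, hi))
        elif hi - lo > 1:               # subdivide and continue the search
            mid = (lo + hi) // 2
            stack += [(lo, mid), (mid, hi)]
    b_parts = []
    for lo, hi in bad:
        mean = sum(f[lo:hi]) / (hi - lo)
        b_parts.append([x - mean for x in f[lo:hi]])
    return bad, b_parts
```

Each selected interval has average above $\lambda$, which is exactly what gives the bound $\sum_L\abs{L}\leq\lambda^{-1}\Norm{f}{1}$ by disjointness.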
For the special subshifts $S_{\mathscr{K}^a_b(P)}$, we can improve the weak-type $(1,1)$ estimate to an exponential decay:
\begin{proposition}
Let $S_{\mathscr{K}^a_b(P)}$ be the subshift of $S$ as constructed earlier. Then the following estimate holds when $\nu$ is either the Lebesgue measure or $w$:
\begin{equation*}
\nu\Big(\Big\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma1_Q)}>C2^{-b}\ave{\sigma}_P\cdot t\Big\}\Big)
\lesssim C2^{-t}\nu(P),\qquad t\geq 0,
\end{equation*}
where $C$ is a constant.
\end{proposition}
\begin{proof}
Let $\lambda:=C2^{-b}\ave{\sigma}_P$, where $C$ is a large constant, and $n\in\mathbb{Z}_+$. Let $x\in\mathbb{R}^d$ be a point where
\begin{equation}\label{eq:>nLambda}
\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)(x)}>n\lambda.
\end{equation}
Then for all small enough $L\in\mathscr{K}^a_b(P)$ with $L\owns x$, there holds
\begin{equation*}
\Babs{\sum_{\substack{K\in\mathscr{K}^a_b(P) \\ K\supseteq L}}A_K(\sigma 1_Q)(x)}>n\lambda.
\end{equation*}
Since $\displaystyle\sum_{\substack{K\in\mathscr{K}^a_b(P) \\ K\supsetneq L}}A_K(\sigma1_Q)$ is constant on $L$ (thanks to separation of scales), and
\begin{equation}\label{eq:ALwQ}
\Norm{A_L(\sigma 1_Q)}{\infty}\lesssim\frac{\sigma(L)}{\abs{L}}\leq 2^{1-b}\frac{\sigma(P)}{\abs{P}},
\end{equation}
it follows that
\begin{equation}\label{eq:>n-2/3}
\Babs{\sum_{\substack{K\in\mathscr{K}^a_b(P) \\ K\supsetneq L}}A_K(\sigma 1_Q)}>(n-\tfrac{2}{3})\lambda\qquad\text{on }L.
\end{equation}
Let $\mathscr{L}\subseteq\mathscr{K}^a_b(P)$ be the collection of maximal cubes with the above property. Thus all $L\in\mathscr{L}$ are disjoint, and all $x$ with \eqref{eq:>nLambda} belong to some $L$. By maximality of $L$, the minimal $L^*\in\mathscr{K}^a_b(P)$ with $L^*\supsetneq L$ satisfies
\begin{equation*}
\Babs{\sum_{\substack{K\in\mathscr{K}^a_b(P) \\ K\supsetneq L^*}}A_K(\sigma 1_Q)}\leq(n-\tfrac{2}{3})\lambda\qquad\text{on }L^*.
\end{equation*}
By an estimate similar to \eqref{eq:ALwQ}, with $L^*$ in place of $L$, it follows that
\begin{equation*}
\Babs{\sum_{\substack{K\in\mathscr{K}^a_b(P) \\ K\supsetneq L}}A_K(\sigma 1_Q)}\leq (n-\tfrac{1}{3})\lambda\qquad\text{on }L.
\end{equation*}
Thus, if $x$ satisfies \eqref{eq:>nLambda} and $x\in L\in\mathscr{L}$, then necessarily
\begin{equation*}
\abs{S_{\{K\in\mathscr{K}^a_b(P); K\subseteq L\}}(\sigma 1_{Q})(x)}=
\Babs{\sum_{\substack{K\in\mathscr{K}^a_b(P) \\ K\subseteq L}}A_K(\sigma 1_Q)(x)}>\tfrac13\lambda.
\end{equation*}
Applying the weak-type $(1,1)$ estimate to the shift $S_{\{K\in\mathscr{K}^a_b(P);K\subseteq L\}}$ of type $(i,j)$ with scales separated, and noting that $A_K(\sigma 1_Q)=A_K(\sigma 1_L)$ for $K\subseteq L$, it follows that
\begin{align*}
\Babs{\Big\{\Babs{\sum_{\substack{K\in\mathscr{K}^a_b(P) \\ K\subseteq L}}A_K(\sigma 1_Q)(x)}>\tfrac13\lambda\Big\}}
&\leq \frac{C}{\lambda}\sigma(L) \\
&\leq\frac{C}{\lambda}2^{1-b}\frac{\sigma(P)}{\abs{P}}\abs{L} \leq \tfrac13\abs{L},
\end{align*}
provided that the constant in the definition of $\lambda$ was chosen large enough.
Recalling \eqref{eq:>n-2/3}, there holds
\begin{align*}
\Babs{\sum_{K\in\mathscr{K}^a_b(P)}A_K(\sigma 1_Q)}
&\geq\Babs{\sum_{\substack{K\in\mathscr{K}^a_b(P) \\ K\supsetneq L}}A_K(\sigma1_Q)}
-\Babs{\sum_{\substack{K\in\mathscr{K}^a_b(P) \\ K\subseteq L}}A_K(\sigma 1_Q)} \\
&>(n-\tfrac23)\lambda-\tfrac13\lambda=(n-1)\lambda\quad\text{on }\tilde{L}\subset L\text{ with }\abs{\tilde{L}}\geq\tfrac23\abs{L}.
\end{align*}
Thus
\begin{align*}
\abs{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n\lambda\}}
&\leq\sum_{L\in\mathscr{L}}\abs{L\cap \{\abs{S_{\mathscr{K}^a_b(P)}(\sigma1_Q)}>n\lambda\}} \\
&\leq\sum_{L\in\mathscr{L}}\abs{\{\abs{S_{\{K\in\mathscr{K}^a_b(P):K\subseteq L\}}(\sigma 1_Q)}>\tfrac13\lambda\}} \\
&\leq\sum_{L\in\mathscr{L}}\tfrac13\abs{L}\leq\sum_{L\in\mathscr{L}}\tfrac13\cdot\tfrac 32\abs{\tilde{L}} \\
&\leq\tfrac12\sum_{L\in\mathscr{L}} \abs{L\cap\{ \abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>(n-1)\lambda\}} \\
&\leq\tfrac12\abs{\{ \abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>(n-1)\lambda\}}.
\end{align*}
By induction it follows that
\begin{align*}
\abs{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n\lambda\}}
&\leq 2^{-n}\abs{\{ \abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>0\}} \\
&\leq 2^{-n}\sum_{M\in\mathscr{M}}\abs{M}\leq 2^{-n}\abs{P},
\end{align*}
where $\mathscr{M}$ is the collection of maximal cubes in $\mathscr{K}^a_b(P)$.
Recalling that we defined $\lambda:=C2^{-b}\ave{\sigma}_P$ in the beginning of the proof, the previous display gives precisely the claim of the Proposition in the case that $\nu$ is the Lebesgue measure. We still need to consider the case that $\nu=w$. To this end, selected intermediate steps of the above computation, as well as the definition of $\mathscr{K}^a_b(P)$, will be exploited. Recall that $K\in\mathscr{K}^a$ means that $2^{a-1}<\ave{w}_K\ave{\sigma}_K\leq 2^a$, while $K\in\mathscr{K}^a_b(P)$ means that in addition $2^{-b}<\ave{\sigma}_K/\ave{\sigma}_P\leq 2^{1-b}$. Put together, this says that
\begin{equation*}
\frac{2^{a+b-2}}{\ave{\sigma}_P}<\frac{w(K)}{\abs{K}}<\frac{2^{a+b}}{\ave{\sigma}_P}\qquad\forall K\in\mathscr{K}^a_b(P).
\end{equation*}
Hence, using the collections $\mathscr{L},\mathscr{M}\subseteq\mathscr{K}^a_b(P)$ as above,
\begin{align*}
w(\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n\lambda\})
&\leq\sum_{L\in\mathscr{L}}w(L)
\leq\sum_{L\in\mathscr{L}}\frac{2^{a+b}}{\ave{\sigma}_P}\abs{L} \\
&\leq \frac{2^{a+b}}{\ave{\sigma}_P}\abs{\{ \abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>(n-1)\lambda\}} \\
&\leq \frac{2^{a+b}}{\ave{\sigma}_P}\cdot 2^{-n}\sum_{M\in\mathscr{M}}\abs{M} \\
&\leq 4\cdot 2^{-n}\sum_{M\in\mathscr{M}}w(M)\leq 4\cdot 2^{-n}w(P).\qedhere
\end{align*}
\end{proof}
\subsection{Conclusion of the estimation of the testing conditions}
Recall that
\begin{equation*}
\begin{split}
&\BNorm{\sum_{K\subseteq Q}A_K(\sigma 1_Q)}{L^2(w)} \\
&\leq\sum_{k=0}^{\kappa}\sum_{a}\sum_{b=0}^{\infty}2^{-b}\sum_{n=0}^{\infty}(1+n)
\BNorm{\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P 1_{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}}}{L^2(w)}
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
& \BNorm{\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P 1_{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}}}{L^2(w)} \\
&\leq 2\Big(\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P^2 w(\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\})\Big)^{1/2} \\
&\leq C\Big(\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P^2 2^{-n/C}w(P)\Big)^{1/2} \\
&=C2^{-cn}\Big(\sum_{P\in\mathscr{P}^a}\frac{\sigma(P)w(P)}{\abs{P}^2}\sigma(P)\Big)^{1/2} \\
&\leq C2^{-cn}\Big(2^a\sum_{P\in\mathscr{P}^a}\sigma(P)\Big)^{1/2},
\end{split}
\end{equation*}
recalling the freezing of the $A_2$ characteristic between $2^{a-1}$ and $2^a$ for cubes in $\mathscr{K}^a\supseteq\mathscr{P}^a$.
For the summation over the principal cubes, we observe that
\begin{equation*}
\begin{split}
\sum_{P\in\mathscr{P}^a}\sigma(P)
=\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P\abs{P}
=\int_Q\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P 1_P(x)\ud x.
\end{split}
\end{equation*}
At any given $x$, if $P_0\subsetneq P_1\subsetneq\ldots\subseteq Q$ are the principal cubes containing it, we have
\begin{equation*}
\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P 1_P(x)
=\sum_{m=0}^{\infty}\ave{\sigma}_{P_m}
\leq\sum_{m=0}^{\infty}2^{-m}\ave{\sigma}_{P_0}=2\ave{\sigma}_{P_0}
\leq 2M(\sigma 1_Q)(x),
\end{equation*}
where $M$ is the dyadic maximal operator. Hence
\begin{equation*}
\sum_{P\in\mathscr{P}^a}\sigma(P)
\leq 2\int_Q M(\sigma 1_Q)\ud x
\leq 2[\sigma]_{A_\infty}\sigma(Q),
\end{equation*}
where we use the following notion of the $A_\infty$ characteristic:
\begin{equation*}
[\sigma]_{A_\infty}:=\sup_Q\frac{1}{\sigma(Q)}\int_Q M(\sigma 1_Q)\ud x;
\end{equation*}
this was implicit already in the work of Fujii \cite{Fujii:weightedBMO} and it was taken as an explicit definition by the author and C. P\'erez \cite{HytPer}.
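To make this quantity concrete, the following Python sketch (ours, purely for illustration) evaluates the Fujii--Wilson expression $\frac{1}{\sigma(Q)}\int_Q M(\sigma 1_Q)$ for a weight sampled on a dyadic grid, with the dyadic maximal operator in place of $M$ and a single top cube $Q=[0,1)$; the actual characteristic takes a supremum over all $Q$, which we omit here:

```python
def dyadic_maximal(w):
    """Dyadic maximal function of a nonnegative weight w on [0, 1):
    at each finest cell, the largest average of w over a dyadic
    interval containing that cell."""
    n = len(w)
    best = list(w)                 # averages over the single cells themselves
    size = 2
    while size <= n:
        for lo in range(0, n, size):
            avg = sum(w[lo:lo + size]) / size
            for i in range(lo, lo + size):
                best[i] = max(best[i], avg)
        size *= 2
    return best

def fujii_wilson(w):
    """(1 / w(Q)) * integral over Q of M(1_Q w), for the top cube Q = [0, 1)."""
    m = dyadic_maximal(w)
    return sum(m) / sum(w)
```

For a constant weight the quantity equals $1$, and since $M(1_Q w)\geq w$ pointwise on $Q$, it is always at least $1$.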
Substituting back, we have
\begin{equation*}
\begin{split}
&\BNorm{\sum_{K\subseteq Q}A_K(\sigma 1_Q)}{L^2(w)} \\
&\leq\sum_{k=0}^{\kappa}\sum_{a}\sum_{b=0}^{\infty}2^{-b}\sum_{n=0}^{\infty}(1+n)
\BNorm{\sum_{P\in\mathscr{P}^a}\ave{\sigma}_P 1_{\{\abs{S_{\mathscr{K}^a_b(P)}(\sigma 1_Q)}>n2^{-b}\ave{\sigma}_P\}}}{L^2(w)} \\
&\leq\sum_{k=0}^{\kappa}\sum_{a}\sum_{b=0}^{\infty}2^{-b}\sum_{n=0}^{\infty}(1+n)\cdot
C2^{-cn}\Big(2^a\sum_{P\in\mathscr{P}^a}\sigma(P)\Big)^{1/2} \\
&\leq\sum_{k=0}^{\kappa}\sum_{a}\sum_{b=0}^{\infty}2^{-b}\sum_{n=0}^{\infty}(1+n)\cdot
C2^{-cn}\big(2^a[\sigma]_{A_\infty}\big)^{1/2} \\
&=C\cdot[\sigma]_{A_\infty}^{1/2}\sum_{k=0}^{\kappa}\Big(\sum_{a\leq\ceil{\log_2[w,\sigma]_{A_2}}}2^{a/2}\Big)
\Big(\sum_{b=0}^{\infty}2^{-b}\Big)\Big(\sum_{n=0}^{\infty}(1+n)\cdot 2^{-cn}\Big) \\
&\leq C\cdot[\sigma]_{A_\infty}^{1/2}\cdot(1+\kappa)\cdot [w,\sigma]_{A_2}^{1/2},
\end{split}
\end{equation*}
and thus the testing constant $\mathfrak{S}$ is estimated by
\begin{equation*}
\mathfrak{S}\leq C\cdot(1+\kappa)\cdot[w,\sigma]_{A_2}^{1/2}\cdot[\sigma]_{A_\infty}^{1/2}.
\end{equation*}
By symmetry, exchanging the roles of $w$ and $\sigma$, we also have the analogous result for $\mathfrak{S}^*$, and so we have completed the proof of the following:
\begin{theorem}\label{thm:testing}
Let $\sigma,w\in A_\infty$ be functions which satisfy the joint $A_2$ condition
\begin{equation*}
[w,\sigma]_{A_2}:=\sup_Q\frac{w(Q)\sigma(Q)}{\abs{Q}^2}<\infty.
\end{equation*}
Then the testing constants $\mathfrak{S}$ and $\mathfrak{S}^*$ associated with a dyadic shift $S$ of type $(i,j)$ satisfy the following bounds, where $\kappa:=\max\{i,j\}$:
\begin{equation*}
\begin{split}
\mathfrak{S} &\leq C\cdot(1+\kappa)\cdot[w,\sigma]_{A_2}^{1/2}\cdot[\sigma]_{A_\infty}^{1/2}, \\
\mathfrak{S}^* &\leq C\cdot(1+\kappa)\cdot[w,\sigma]_{A_2}^{1/2}\cdot[w]_{A_\infty}^{1/2}.
\end{split}
\end{equation*}
\end{theorem}
\section{Conclusions}
In this section we simply collect the fruits of the hard work done above.
A combination of Theorems~\ref{thm:2weight} and \ref{thm:testing} gives the following two-weight inequality, whose qualitative version was pointed out by Lacey, Petermichl and Reguera \cite{LPR}. In the precise form as stated, this result and its consequences below were obtained by P\'erez and myself \cite{HytPer}, although originally formulated only in the case that $\sigma=w^{-1}$ is the dual weight.
\begin{theorem}\label{thm:2weightShift}
Let $\sigma,w\in A_\infty$ be functions which satisfy the joint $A_2$ condition
\begin{equation*}
[w,\sigma]_{A_2}:=\sup_Q\frac{w(Q)\sigma(Q)}{\abs{Q}^2}<\infty.
\end{equation*}
Then a dyadic shift $S$ of type $(i,j)$ satisfies $S(\sigma\cdot):L^2(\sigma)\to L^2(w)$, and more precisely
\begin{equation*}
\Norm{S(\sigma\cdot)}{L^2(\sigma)\to L^2(w)}
\lesssim (1+\kappa)^2[w,\sigma]_{A_2}^{1/2}\big([w]_{A_\infty}^{1/2}+[\sigma]_{A_\infty}^{1/2}\big),
\end{equation*}
where $\kappa=\max\{i,j\}$.
\end{theorem}
The quantitative bound as stated, including the polynomial dependence on $\kappa$, makes it possible to sum up these estimates in the Dyadic Representation Theorem and deduce:
\begin{theorem}\label{thm:2weightCZO}
Let $\sigma,w\in A_\infty$ be functions which satisfy the joint $A_2$ condition.
Then any $L^2$ bounded Calder\'on--Zygmund operator $T$ whose kernel $K$ has H\"older type modulus of continuity $\psi(t)=t^{\alpha}$, $\alpha\in(0,1)$, satisfies
\begin{equation*}
\Norm{T(\sigma\cdot)}{L^2(\sigma)\to L^2(w)}
\lesssim (\Norm{T}{L^2\to L^2}+\Norm{K}{CZ_\alpha})[w,\sigma]_{A_2}^{1/2}\big([w]_{A_\infty}^{1/2}+[\sigma]_{A_\infty}^{1/2}\big).
\end{equation*}
\end{theorem}
Recalling the dual weight trick and specializing to the one-weight situation with $\sigma=w^{-1}$, this in turn gives:
\begin{theorem}\label{thm:1weightCZO}
Let $w\in A_2$.
Then any $L^2$ bounded Calder\'on--Zygmund operator $T$ whose kernel $K$ has H\"older type modulus of continuity $\psi(t)=t^{\alpha}$, $\alpha\in(0,1)$, satisfies
\begin{equation*}
\begin{split}
\Norm{T}{L^2(w)\to L^2(w)}
&\lesssim (\Norm{T}{L^2\to L^2}+\Norm{K}{CZ_\alpha})[w]_{A_2}^{1/2}\big([w]_{A_\infty}^{1/2}+[w^{-1}]_{A_\infty}^{1/2}\big) \\
&\lesssim (\Norm{T}{L^2\to L^2}+\Norm{K}{CZ_\alpha})[w]_{A_2}.
\end{split}
\end{equation*}
\end{theorem}
The second displayed line is the original $A_2$ theorem \cite{Hytonen:A2}, and it follows from the first line by $[w]_{A_\infty}\lesssim[w]_{A_2}$ and $[w^{-1}]_{A_\infty}\lesssim [w^{-1}]_{A_2}=[w]_{A_2}$ (see Lemma~\ref{lem:A2Ainf} below). Its strengthening on the first line was first observed in my joint work with C.~P\'erez \cite{HytPer}. Note that, compared to the introductory statement in Theorem~\ref{thm:A2}, the dependence on the operator $T$ has been made more explicit. (The implied constants in the notation ``$\lesssim$'' only depend on the dimension and the H\"older exponent $\alpha$.) This dependence on $\Norm{T}{L^2\to L^2}$ and $\Norm{K}{CZ_\alpha}$ is implicit in the original proof.
For completeness, we include the proof (in the stated form essentially from \cite{LPR}, but see also \cite{HytPer} for a more general comparison of $A_\infty$ and $A_p$ constants) of the following:
\begin{lemma}\label{lem:A2Ainf}
For all weights $w\in A_2$, we have
\begin{equation*}
[w]_{A_\infty}:=\sup_Q\frac{1}{w(Q)}\int_Q M(1_Q w)\ud x\leq 8[w]_{A_2}.
\end{equation*}
\end{lemma}
\begin{proof}
Let $\mathcal{P}$ be the principal cubes of Muckenhoupt and Wheeden \cite{MW:77} given by $\mathcal{P}=\bigcup_{p=0}^\infty\mathcal{P}_p$, where $\mathcal{P}_0:=\{Q\}$ and $\mathcal{P}_{p+1}$ consists of the maximal $P'\subset P\in\mathcal{P}_p$ with $w(P')/\abs{P'}>2w(P)/\abs{P}$. Then
\begin{equation*}
M(1_Q w)(x)=\sup_{R:x\in R\subseteq Q}\frac{w(R)}{\abs{R}}\leq 2\sup_{P\in\mathcal{P}:x\in P}\frac{w(P)}{\abs{P}}\leq 2\sum_{P\in\mathcal{P}}\frac{w(P)}{\abs{P}}1_P(x),
\end{equation*}
and hence
\begin{equation*}
\int_Q M(1_Q w)\ud x\leq 2\sum_{P\in\mathcal{P}}w(P).
\end{equation*}
Consider the pairwise disjoint sets $E(P):=P\setminus\bigcup_{P'\in\mathcal{P}:P'\subsetneq P}P'$. Since
\begin{equation*}
\sum_{\substack{P'\subsetneq P\\ P'\text{ maximal}}}\abs{P'}
\leq\sum_{\substack{P'\subsetneq P\\ P'\text{ maximal}}} \frac{w(P')\abs{P}}{2w(P)}\leq \frac{w(P)\abs{P}}{2w(P)}=\frac{\abs{P}}{2},
\end{equation*}
it follows that $\abs{E(P)}\geq\frac12\abs{P}$. We derive a similar condition for the weighted measure from the $A_2$ condition. Indeed,
\begin{equation*}
\begin{split}
\abs{E(P)}
&=\int_{E(P)}w^{1/2}w^{-1/2}\ud x
\leq\Big(\int_{E(P)}w\ud x\Big)^{1/2}\Big(\int_P w^{-1}\ud x\Big)^{1/2} \\
&=w(E(P))^{1/2}\Big(\fint_P w^{-1}\ud x\Big)^{1/2}\abs{P}^{1/2} \\
&\leq w(E(P))^{1/2}[w]_{A_2}^{1/2}\Big(\fint_P w\ud x\Big)^{-1/2}\abs{P}^{1/2}
=\Big([w]_{A_2}\frac{w(E(P))}{w(P)}\Big)^{1/2}\abs{P}.
\end{split}
\end{equation*}
Using $\abs{P}\leq 2\abs{E(P)}$ and squaring, this shows that
\begin{equation*}
w(P)\leq 4[w]_{A_2}w(E(P)).
\end{equation*}
After this, it is immediate to compute that
\begin{equation*}
\sum_{P\in\mathcal{P}}w(P)
\leq 4[w]_{A_2}\sum_{P\in\mathcal{P}}w(E(P))
\leq 4[w]_{A_2}w(Q),
\end{equation*}
since the sets $E(P)$ are pairwise disjoint and contained in $Q$.
\end{proof}
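The key point of the proof -- that the immediate principal children of $P$ cover at most half of $P$, so that $\abs{E(P)}\geq\frac12\abs{P}$ -- can be tested numerically. The Python sketch below (an illustration with our own discretization, not part of the proof) finds, inside a dyadic interval $P$, the maximal dyadic subintervals on which the average of the weight exceeds twice the average over $P$:

```python
def principal_children(w, lo, hi):
    """Maximal dyadic subintervals (a, b) of P = (lo, hi) on which the
    average of the weight w exceeds twice its average over P, as in the
    Muckenhoupt-Wheeden construction of principal cubes."""
    avg = lambda a, b: sum(w[a:b]) / (b - a)
    thresh = 2 * avg(lo, hi)
    kids, mid = [], (lo + hi) // 2
    stack = [] if hi - lo <= 1 else [(lo, mid), (mid, hi)]
    while stack:
        a, b = stack.pop()
        if avg(a, b) > thresh:          # maximal: a principal child of P
            kids.append((a, b))
        elif b - a > 1:                 # below threshold: subdivide further
            m = (a + b) // 2
            stack += [(a, m), (m, b)]
    return kids
```

Iterating this from the top cube reproduces the full collection $\mathcal{P}$, and at every stage the children occupy at most half of their parent, exactly as in the proof.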
\section{Further results and remarks}
This final section briefly collects, without proofs, some further related developments, and poses some open problems.
The $A_2$ theorem implies a corresponding $A_p$ theorem for all $p\in(1,\infty)$. This follows from a version of the celebrated extrapolation theorem, one of the most useful tools in the theory of $A_p$ weights. The extrapolation theorem was first found by J. L. Rubio de Francia \cite{Rubio:factorAp}, and shortly after (so soon that it was published earlier) another proof was given by J. Garc{\'{\i}}a-Cuerva \cite{Garcia:extrapolation}. For the present purposes, we need a quantitative form of the extrapolation theorem, which is due to Dragi\v{c}evi\'c, Grafakos, Pereyra, and Petermichl \cite{DGPP}, and reads as follows. Although relatively recent, it was known well before the proof of the full $A_2$ theorem.
\begin{theorem}\label{thm:extrap}
If an operator $T$ satisfies
\begin{equation*}
\Norm{T}{L^2(w)\to L^2(w)}\leq C_T [w]_{A_2}^{\tau}
\end{equation*}
for all $w\in A_2$, then it satisfies
\begin{equation*}
\Norm{T}{L^p(w)\to L^p(w)}\leq c_p C_T [w]_{A_p}^{\tau\max\{1,1/(p-1)\}}
\end{equation*}
for all $p\in(1,\infty)$ and $w\in A_p$.
\end{theorem}
\begin{corollary}\label{cor:Ap}
Let $p\in(1,\infty)$ and $w\in A_p$.
Then any $L^2$ bounded Calder\'on--Zygmund operator $T$ whose kernel $K$ has H\"older type modulus of continuity $\psi(t)=t^{\alpha}$, $\alpha\in(0,1)$, satisfies
\begin{equation*}
\Norm{T}{L^p(w)\to L^p(w)}
\lesssim (\Norm{T}{L^2\to L^2}+\Norm{K}{CZ_\alpha})[w]_{A_p}^{\max\{1,1/(p-1)\}}.
\end{equation*}
\end{corollary}
It is also possible to apply a version of the extrapolation argument to the mixed $A_2$/$A_\infty$ bounds \cite{HytPer}, but this did not give the optimal results for $p\neq 2$. However, by setting up a different argument directly in $L^p(w)$, the following bounds were obtained in my collaboration with M.~Lacey \cite{HytLac}:
\begin{theorem}
Let $p\in(1,\infty)$ and $w\in A_p$.
Then any $L^2$ bounded Calder\'on--Zygmund operator $T$ whose kernel $K$ has H\"older type modulus of continuity $\psi(t)=t^{\alpha}$, $\alpha\in(0,1)$, satisfies
\begin{equation*}
\Norm{T}{L^p(w)\to L^p(w)}
\lesssim (\Norm{T}{L^2\to L^2}+\Norm{K}{CZ_\alpha})[w]_{A_p}^{1/p}\big([w]_{A_\infty}^{1/p'}+[w^{1-p'}]_{A_\infty}^{1/p}\big).
\end{equation*}
\end{theorem}
For weak-type bounds, which were investigated by Lacey, Martikainen, Orponen, Reguera, Sawyer, Uriarte-Tuero, and myself \cite{HLMORSU}, we need only `half' of the strong-type upper bound:
\begin{theorem}
Let $p\in(1,\infty)$ and $w\in A_p$.
Then any $L^2$ bounded Calder\'on--Zygmund operator $T$ whose kernel $K$ has H\"older type modulus of continuity $\psi(t)=t^{\alpha}$, $\alpha\in(0,1)$, satisfies
\begin{equation*}
\begin{split}
\Norm{T}{L^p(w)\to L^{p,\infty}(w)}
&\lesssim (\Norm{T}{L^2\to L^2}+\Norm{K}{CZ_\alpha})[w]_{A_p}^{1/p}[w]_{A_\infty}^{1/p'} \\
&\lesssim (\Norm{T}{L^2\to L^2}+\Norm{K}{CZ_\alpha})[w]_{A_p}.
\end{split}
\end{equation*}
\end{theorem}
All these results remain valid for the non-linear operators given by the \emph{maximal truncations}
\begin{equation*}
T_{\natural}f(x):=\sup_{\varepsilon>0}\abs{T_{\varepsilon}f(x)},\qquad
T_{\varepsilon}f(x):=\int_{\abs{x-y}>\varepsilon}K(x,y)f(y)\ud y,
\end{equation*}
which have been addressed in \cite{HytLac,HLMORSU}. In \cite{HLMORSU} it was also shown that the sharp weighted bounds for dyadic shifts can be made linear (instead of quadratic) in $\kappa$, a result recovered by a different (Bellman function) method by Treil \cite{Treil:linear}. Earlier polynomial-in-$\kappa$ Bellman function estimates for the shifts were due to Nazarov and Volberg \cite{NV}. An extension of the $A_2$ theorem to abstract metric spaces with a doubling measure (spaces of homogeneous type) is due to Nazarov, Reznikov, and Volberg \cite{NRV}.
A higher degree of non-linearity is obtained by replacing the supremum over $\epsilon>0$ defining the maximal truncation by one of the \emph{variation norms}
\begin{equation*}
\Norm{\{v_\epsilon\}_{\epsilon>0}}{V^q}
:=\sup_{\{\epsilon_i\}_{i\in\mathbb{Z}}}\Big(\sum_i\abs{v_{\epsilon_i}-v_{\epsilon_{i+1}}}^q\Big)^{1/q},
\end{equation*}
where the supremum is over all monotone sequences $\{\epsilon_i\}_{i\in\mathbb{Z}}\subset(0,\infty)$. Sharp weighted bounds for the $q$-variation ($q\in(2,\infty)$) of Calder\'on--Zygmund operators were first proved by Hyt\"onen--Lacey--P\'erez \cite{HLP}, although replacing the sharp truncation $T_\epsilon f(x)$ by a smooth truncation
\begin{equation*}
T^\phi_\epsilon f(x):=\int \phi\Big(\frac{\abs{x-y}}{\epsilon}\Big)K(x,y)f(y)\ud y,
\end{equation*}
where $\phi$ is smooth and $0\leq\phi\leq 1_{(1,\infty)}$. Sharp weighted bounds for the $q$-variation of the sharp truncations with $\phi=1_{(1,\infty)}$ were recently obtained by de Fran\c{c}a Silva and Zorin-Kranich \cite{FSZK}.
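For a finite sample $v_{\epsilon_0},\ldots,v_{\epsilon_{n-1}}$ at monotone truncation levels, the supremum in the definition of the $V^q$ norm runs over subsequences, and it can be computed exactly by dynamic programming. The following small Python sketch (ours, purely to illustrate the definition) does so:

```python
def q_variation(values, q):
    """V^q norm of a finite sequence v_{eps_0}, ..., v_{eps_{n-1}} sampled at
    monotone truncation levels: the supremum over subsequences of
    (sum |v_{i_{k+1}} - v_{i_k}|^q)^(1/q), computed by dynamic programming."""
    n = len(values)
    best = [0.0] * n   # best[j]: largest q-th power sum over chains ending at j
    for j in range(n):
        for i in range(j):
            best[j] = max(best[j], best[i] + abs(values[j] - values[i]) ** q)
    return max(best) ** (1.0 / q) if n else 0.0
```

Since the $q$-th power of the variation is additive along a chosen chain, the $O(n^2)$ recursion over all pairs $i<j$ computes the supremum exactly.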
The approach to the $q$-variation in \cite{HLP} was through a non-probabilistic counterpart of the Dyadic Representation, a Dyadic Domination, which was independently discovered by Lerner \cite{Lerner:domination,Lerner:simple}. Another advantage of this method was its ability to handle Calder\'on--Zygmund kernels with weaker moduli of continuity $\psi$ than those treated by the present approach; namely any moduli $\psi$ subject to the log-bumped Dini condition $\int_0^1\psi(t)(1+\log\frac1t)\frac{\ud t}{t}<\infty$.
In its original form, the Dyadic Domination theorem gave a domination in norm, which was improved to pointwise domination by Conde-Alonso and Rey \cite{CondeRey} and, independently, by Lerner and Nazarov \cite{LerNaz:book}. All these approaches required the same log-Dini condition, and the necessity of the logarithmic correction to the Dini condition remained open for some time, until it was finally eliminated by Lacey~\cite{Lacey:elem} by yet another approach. The following quantitative form of Lacey's theorem was obtained by L. Roncal, O. Tapiola and the author \cite{HytRoncal}, and with a simpler proof by Lerner \cite{Lerner:simplest}:
\begin{theorem}\label{thm:logDini}
Let $w\in A_2$. Then any $L^2$ bounded Calder\'on--Zygmund operator $T$ whose kernel $K$ has modulus of continuity $\psi$, satisfies
\begin{equation*}
\Norm{T}{L^2(w)\to L^2(w)}
\lesssim \Big(\Norm{T}{L^2\to L^2}+\Norm{K}{CZ_0}+\Norm{K}{CZ_\psi}\int_0^1\psi(t)\frac{\ud t}{t}\Big)[w]_{A_2}.
\end{equation*}
\end{theorem}
Asking for even less regularity, one may wonder about the sharp weighted bound for the class of rough homogeneous singular integral operators
\begin{equation*}
Tf(x)=\text{p.v.}\int_{\mathbb{R}^d}\frac{\Omega(y)}{\abs{y}^d}f(x-y)\ud y,
\end{equation*}
where
\begin{equation*}
\Omega(y)=\Omega\Big(\frac{y}{\abs{y}}\Big),\qquad\Omega\in L^\infty(\mathbb{S}^{d-1}),\qquad\int_{\mathbb{S}^{d-1}}\Omega(\sigma)\ud\sigma=0.
\end{equation*}
Their qualitative boundedness $T:L^2(w)\to L^2(w)$ is known for $w\in A_2$ (see Watson~\cite{Watson}). Roncal, Tapiola and the author \cite{HytRoncal} showed that $\Norm{T}{L^2(w)\to L^2(w)}\lesssim \Norm{\Omega}{\infty}[w]_{A_2}^2$, but it is not known whether this quadratic dependence on $[w]_{A_2}$ is sharp.
\subsection{The Beurling operator and its powers}\label{sec:Beurling}
One of the key original motivations to study the $A_2$ theorem was a conjecture of Astala--Iwaniec--Saksman \cite{AIS} concerning the special case where $T$ is the Beurling operator
\begin{equation*}
Bf(z):=-\frac{1}{\pi}\operatorname{p.v.}\int_{\mathbb{C}}\frac{1}{\zeta^2}f(z-\zeta)\ud A(\zeta),
\end{equation*}
and $A$ is the area measure (two-dimensional Lebesgue measure) on $\mathbb{C}\simeq\mathbb{R}^2$. This was the first Calder\'on--Zygmund operator for which the $A_2$ theorem was proven; it was achieved by Petermichl and Volberg \cite{PV}, confirming the mentioned conjecture of Astala, Iwaniec, and Saksman \cite{AIS}. Another proof of the $A_2$ theorem for this specific operator is due to Dragi\v{c}evi\'c and Volberg \cite{DV}.
The powers $B^n$ of $B$ have also been studied, and then it is of interest to understand the growth of the norms as a function of $n$. Shortly before the proof of the full $A_2$ theorem, by methods specific to the Beurling operator, O.~Dragi\v{c}evi\'c \cite{Dragicevic:cubic} was able to prove the cubic growth
\begin{equation*}
\Norm{B^n}{L^2(w)\to L^2(w)}\lesssim\abs{n}^{3}[w]_{A_2},\qquad n\in\mathbb{Z}\setminus\{0\}.
\end{equation*}
Now, let us see what the general $A_2$ theorem gives for these specific powers.
It is known (see e.g. \cite{DPV}) that $B^n$ is the convolution operator with the kernel
\begin{equation*}
K_n(z)=(-1)^n\frac{\abs{n}}{\pi}\Big(\frac{\bar{z}}{z}\Big)^n\abs{z}^{-2},
\end{equation*}
and it is elementary to check that this satisfies $\Norm{K_n}{CZ_\alpha}\lesssim\abs{n}^{1+\alpha}$ for any $\alpha\in(0,1)$. Moreover, since $B$ is an isometry on $L^2(\mathbb{C})$, we have $\Norm{B^n}{L^2\to L^2}=1$. From Theorem~\ref{thm:1weightCZO} we deduce:
\begin{corollary}
The powers $B^n$ of the Beurling operator satisfy
\begin{equation*}
\Norm{B^n}{L^2(w)\to L^2(w)}\lesssim\abs{n}^{1+\alpha}[w]_{A_2},\qquad\alpha>0,
\end{equation*}
where the implied constant depends on $\alpha$.
\end{corollary}
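As a sanity check on the kernel formula, one can verify numerically that $\abs{K_n(z)}=\frac{\abs{n}}{\pi}\abs{z}^{-2}$ and that $K_n$ is homogeneous of degree $-2$; the H\"older-norm estimate itself we leave to hand computation. A minimal Python sketch of ours, for illustration only:

```python
import math

def beurling_kernel(n, z):
    """Kernel of the n-th power of the Beurling operator:
    K_n(z) = (-1)^n * (|n| / pi) * (conj(z) / z)^n / |z|^2."""
    return ((-1) ** n) * (abs(n) / math.pi) * (z.conjugate() / z) ** n / abs(z) ** 2
```

The factor $(\bar z/z)^n$ has modulus one, so only the prefactor $\abs{n}/\pi$ and the $\abs{z}^{-2}$ homogeneity contribute to the size.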
A sharper estimate still is provided by Theorem~\ref{thm:logDini}, as observed in \cite{HytRoncal}:
\begin{corollary}
The powers $B^n$ of the Beurling operator satisfy
\begin{equation*}
\Norm{B^n}{L^2(w)\to L^2(w)}\lesssim\abs{n}(1+\log\abs{n}) [w]_{A_2}.
\end{equation*}
\end{corollary}
For this it suffices to check that, defining the modulus of continuity
\begin{equation*}
\psi_n(t):=\min\{\abs{n}t,1\},
\end{equation*}
we have $\Norm{K_n}{CZ_{\psi_n}}\lesssim\abs{n}$ and hence
\begin{equation*}
\Norm{K_n}{CZ_{\psi_n}}\int_0^1\psi_n(t)\frac{\ud t}{t}\lesssim \abs{n}(1+\log\abs{n}).
\end{equation*}
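The displayed bound rests on the elementary computation $\int_0^1\psi_n(t)\frac{\ud t}{t}=\int_0^{1/\abs{n}}\abs{n}\,\ud t+\int_{1/\abs{n}}^1\frac{\ud t}{t}=1+\log\abs{n}$, obtained by splitting at $t=1/\abs{n}$. This is easy to confirm numerically; the following Python sketch (ours, a midpoint-rule illustration only) does so:

```python
import math

def log_dini_integral(n, steps=100000):
    """Midpoint-rule evaluation of the Dini integral of psi_n(t) = min(|n| t, 1),
    i.e. of  int_0^1 min(|n| t, 1) dt / t,  whose exact value is 1 + log|n|."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += min(abs(n) * t, 1.0) / t * h
    return total
```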
However, a better bound would follow if we had the $A_2$ theorem for the rough singular integrals in the form
\begin{equation*}
\Norm{T}{L^2(w)\to L^2(w)}\lesssim \Norm{\Omega}{\infty}[w]_{A_2},
\end{equation*}
for this would lead to the linear estimate $\Norm{B^n}{L^2(w)\to L^2(w)}\lesssim\abs{n}[w]_{A_2}$, simply by viewing the kernels $K_n$ (although smooth) as rough kernels of homogeneous singular integrals.
Let us notice that no bound better than this is possible, at least on the scale of power-type dependence on $\abs{n}$:
\begin{proposition}
No bound of the form $\Norm{B^n}{L^2(w)\to L^2(w)}\lesssim\abs{n}^{1-\epsilon}[w]_{A_2}^{\tau}$ can be valid for any $\epsilon,\tau>0$.
\end{proposition}
\begin{proof}
Suppose for contradiction that such a bound holds for some fixed $\epsilon,\tau>0$ and all $n\in\mathbb{Z}\setminus\{0\}$. By Theorem~\ref{thm:extrap}, we deduce that
\begin{equation*}
\Norm{B^n}{L^p(w)\to L^p(w)}\lesssim_p\abs{n}^{1-\epsilon}[w]_{A_p}^{\tau\max\{1,1/(p-1)\}},
\end{equation*}
and hence in particular we have the unweighted bound
\begin{equation*}
\Norm{B^n}{L^p\to L^p}\lesssim_p\abs{n}^{1-\epsilon},\qquad 1<p<\infty.
\end{equation*}
However, it has been shown by Dragi\v{c}evi\'c, Petermichl and Volberg \cite{DPV} that the correct dependence here is
\begin{equation*}
\Norm{B^n}{L^p\to L^p}\eqsim_p\abs{n}^{\abs{1-2/p}},\qquad 1<p<\infty.
\end{equation*}
The previous two displays are clearly in contradiction for $p$ close to either $1$ or~$\infty$, and we are done.
\end{proof}
The quest for the $A_2$ theorem began from the investigations of the Beurling transform, but clearly even this case is not yet fully understood.
\end{document} |
\begin{document}
\newcommand{\str}{\rule[-.125cm]{0cm}{.5cm}}
\twocolumn[
\OSAJNLtitle{\bf
Squeezing the Local Oscillator Does Not Improve Signal-to-Noise Ratio in Heterodyne Laser Radar
}
\OSAJNLauthor{
Mark A. Rubin and Sumanth Kaushik}
\OSAJNLaddress{Lincoln Laboratory\\
Massachusetts Institute of Technology\\
244 Wood Street\\
Lexington, Massachusetts 02420-9185}
\OSAJNLemail{\{rubin,skaushik\}@LL.mit.edu}
\begin{center}
\begin{abstract*}
The signal-to-noise ratio for heterodyne laser radar with a coherent target-return beam and a squeezed
local-oscillator beam is lower than that obtained using a coherent local oscillator,
regardless of the method employed to combine the beams at the detector.
{\em OCIS codes:}\/ 270.0270, 270.6570, 280.5600, 040.2840
\end{abstract*}
\end{center}
]
\maketitle\symbolfootnote[0]{This work was sponsored by the Air Force under Air Force Contract
FA8721-05-C-0002. Opinions, interpretations, conclusions, and
recommendations are those of the authors and are not necessarily endorsed
by the U.S. Government.}
Squeezed light holds promise for reducing noise in optical-detection applications below
the level obtainable using coherent light such as that emitted by lasers.\cite{WallsMilburn1994}
However, squeezing is degraded by loss\cite{Caves1981}.
In laser radar applications, the loss in the target-return beam---the beam received by the radar
system after reflection from a target---is severe.\cite{Kingston1995}
Therefore squeezing is not useful in laser radar, at least not when applied to the target-return beam.
This still leaves open the possibility that squeezed light could be profitably employed as the local oscillator (LO) in a {\em heterodyne}\/ laser radar. In such a system the target-return beam is combined with a local-oscillator beam of a different frequency on a photosensitive detector, and the presence of a target is inferred from observation of oscillation of the detector response at the difference frequency of the two beams\cite{Kingston1995}.
A heterodyne laser radar system combining the target-return and LO beams on a single detecting element has been proposed by
Li {\em et al.}\/\cite{Lietal1999}. Their work has been criticized by Ralph\cite{Ralph2000} on the grounds that
the method they employ to combine the target-return beam and LO beam on the detector introduces sufficient noise
to cancel out any improvement in signal-to-noise ratio (SNR) due to squeezing. However, one might envision employing other
methods of combining the beams which do not add noise; e.g., using a Fabry-Perot etalon\cite{Hernandez1986} that reflects the
LO frequency and transmits the target-return frequency.
(This approach is to be distinguished from a ``balanced''
heterodyne system in which the two beams are directed to {\em two}\/ detectors using a beam splitter. Quantum
noise in balanced heterodyne detection with squeezed light has been examined by Yuen and Chan\cite{YuenChan1983} and
by Annovazzi-Lodi {\em et al.}\/\cite{Annovazzi-Lodietal1992}. We would not expect such
a system to benefit from squeezing, since both beams must pass through the beam splitter and thus suffer
squeezing-destroying loss. In fact, a calculation of the SNR in the balanced case
gives the same result as that obtained below in the present case, eq. (\ref{SNRb}), except for the change of $\cos^2$\/ to $\sin^2$\/ due to the
$\pi/2$\/ phase change of the reflected light at the beam splitter. The detection of a Doppler beat signal using
a squeezed LO, as proposed by Li {\em et al.}\/\cite{Lietal1997}, is also an example of
a heterodyne measurement to which the considerations of the present paper apply.)
Here, we show that a heterodyne detection scheme combining a coherent target-return beam and a squeezed LO beam on a single detector will
fail to improve SNR regardless of the method used to combine the beams.
For target detection using the statistic $S$\/\cite{Helstrom1976},
\begin{equation}
\mbox{SNR}=\left(\langle\psi_1|\widehat{S}|\psi_1\rangle-\langle\psi_0|\widehat{S}|\psi_0\rangle\right)^2/\mbox{Var}_0S,\label{SNRdef}
\end{equation}
where $\langle\psi_1|\widehat{S}|\psi_1\rangle$\/
is the mean value of $S$\/ in that quantum state, $|\psi_1\rangle$\/, in which the target is present,
$\langle\psi_0|\widehat{S}|\psi_0\rangle$\/
is the mean value of $S$\/ when the target is absent, and
$\mbox{Var}_0S$\/ is the variance of $S$\/ in the target-absent condition,
\begin{equation}
\mbox{Var}_0S=\langle \psi_0|\widehat{S}^2|\psi_0\rangle-\langle \psi_0|\widehat{S}|\psi_0\rangle^2.\label{Var0formula}
\end{equation}
In choosing pure quantum states to correspond to the target-present and target-absent conditions we are assuming
the absence of additional non-quantum sources of noise, e.g., thermal noise, which would have to be treated using the density
operator formalism.\cite{GottfriedYan2003}
For heterodyne detection,
\begin{equation}
\widehat{S}=\tau^{-1}\int_0^\tau dt \; \cos(\omega_Ht+\theta_H)\widehat{I}(t), \label{Sopdef}
\end{equation}
where $\widehat{I}(t)$\/ is the quantum operator corresponding to the photoelectric current produced by the
detector at time $t$\/ and $\tau$\/ is the fixed time interval during which the target present/absent
decision is to be made. That is, $\widehat{S}$\/ is the Fourier component of the photoelectric current at frequency $\omega_H$\/ and phase $\theta_H$\/.
We take $\omega_H$\/ to be equal to the difference
between the respective frequencies of the target-return and LO
beams,
\begin{equation}
\omega_H=\omega_T-\omega_{LO}.\label{omegaorder}
\end{equation}
(For simplicity we will always take $\omega_T-\omega_{LO} >0$\/.)
So $\widehat{S}$\/ corresponds to detection of the beat frequency between the LO and the target-return beam.
For suitable broadband detectors,
the operator corresponding to the photoelectric current at time $t$\/ is\cite{Glauber1965}
\begin{equation}
\widehat{I}(t)=\kappa \widehat{E}^{(-)}(t)\;\widehat{E}^{(+)}(t),\label{IopdefP2}
\end{equation}
where $\kappa$\/ is a constant, $\widehat{E}^{(-)}(t)=(\widehat{E}^{(+)}(t))^\dag,$\/ and $\widehat{E}^{(+)}(t)$\/ is the positive-frequency part of the time-dependent electric field operator at the
detector,
\begin{equation}
\widehat{E}^{(+)}(t)=\sum_{k}i\left(\frac{\hbar \omega_{k}}{2 \varepsilon_0 V}\right)^{1/2}
\widehat{a}_{k}\; \exp\left(-i\omega_{k}t\right).\label{EplusP2}
\end{equation}
The mode frequencies are $\omega_k=ck$\/
where the wavenumber $k$\/ runs over the values $k=2\pi n/V^{1/3}$\/, $n=1,2,\ldots.$\/
In writing $\widehat{E}^{(+)}(t)$\/ as in (\ref{EplusP2}) we are assuming that the detector is only sensitive to a single direction of polarization (which is the
direction in which both the LO and target-return beam will be polarized) and that the optical system is such that, for
each frequency, only
a single spatial mode need be considered (that mode with wave vector normal to the detector surface).
The annihilation operators $\widehat{a}_k$\/ satisfy the usual commutation relations,
\begin{equation}
[\widehat{a}_k,\widehat{a}_l]=[\widehat{a}^\dag_k,\widehat{a}^\dag_l]=0, \hspace*{5mm}[\widehat{a}_k,\widehat{a}^\dag_l]=\delta_{kl}.\label{commutation}
\end{equation}
Using (\ref{Sopdef})-(\ref{EplusP2})
we obtain, in the limit $\tau \rightarrow \infty$\/,
\begin{eqnarray}
\widehat{S}&=&\frac{\kappa\hbar}{4 \varepsilon_0 V}\;
\sum_{l, k \mbox{ \footnotesize s.t. } |\omega_{l}-\omega_{k}|=\omega_H}
(\omega_{l}\omega_{k})^{1/2}\nonumber\\
&&\widehat{a}^\dag_{l}\widehat{a}_{k}
\;\exp\left(-i\varepsilon(\omega_{l}-\omega_{k})\theta_H\right) ,\label{SopP2}
\end{eqnarray}
where
\begin{equation}
\varepsilon(x)=\mbox{ sign of }x. \label{signfuncdef}
\end{equation}
In the target-absent state, all modes but the LO are in the vacuum state:
\begin{equation}
|\psi_0\rangle=|\alpha,\xi\rangle _{k_{LO}} \prod_{k\neq k_{LO}} |0\rangle_{k}.\label{psi0P2}
\end{equation}
Here $|\alpha,\xi\rangle _{k_{LO}}$\/ is the squeezed local-oscillator-frequency ($\omega_{LO}$\/) mode parameterized by
complex numbers $\alpha$\/ and $\xi$\/\cite{GerryKnight2005}.
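(For concreteness we recall the standard construction, in the conventions of \cite{GerryKnight2005}: the squeezed coherent state is obtained from the vacuum by squeezing followed by displacement,
\begin{equation}
|\alpha,\xi\rangle=\widehat{D}(\alpha)\,\widehat{S}_{\rm sq}(\xi)|0\rangle,
\end{equation}
where $\widehat{D}(\alpha)=\exp(\alpha\widehat{a}^\dag-\alpha^\ast\widehat{a})$\/ is the displacement operator and $\widehat{S}_{\rm sq}(\xi)=\exp[\frac{1}{2}(\xi^\ast\widehat{a}^2-\xi\widehat{a}^{\dag 2})]$\/ is the squeeze operator, written here with the subscript ``sq'' to avoid confusion with the signal operator $\widehat{S}$\/.)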
In the target-present case
an additional mode is in a nonvacuum state, specifically the coherent state $|\beta\rangle_{k_T}$\/ at the target-return frequency $\omega_{T}$\/:
\begin{equation}
|\psi_1\rangle=|\beta\rangle_{k_T}|\alpha,\xi\rangle_{k_{LO}} \prod_{k \neq k_T,k_{LO}} |0\rangle_{k}.\label{psi1P2}
\end{equation}
Using (\ref{omegaorder}), (\ref{SopP2})-(\ref{psi1P2})
and the relations\cite{GerryKnight2005}
\begin{equation}
\widehat{a}_k|0\rangle_k=\mbox{}_k\langle 0|\widehat{a}^\dag_k=0,\label{killers}
\end{equation}
\begin{equation}
\mbox{}_{k_{T}}\langle \beta |\widehat{a}_{k_{T}}|\beta\rangle_{k_{T}}=\beta,\hspace*{5mm}
\mbox{}_{k_{T}}\langle \beta |\widehat{a}^\dag_{k_{T}}|\beta\rangle_{k_{T}}=\beta^\ast,\label{cohmatelts}
\end{equation}
and
\begin{eqnarray}
\mbox{}_{k_{LO}}\langle \alpha,\xi |\widehat{a}_{k_{LO}}|\alpha,\xi\rangle_{k_{LO}}&=&\alpha,\nonumber\\
\mbox{}_{k_{LO}}\langle \alpha,\xi |\widehat{a}^\dag_{k_{LO}}|\alpha,\xi\rangle_{k_{LO}}&=&\alpha^\ast,\label{sqmatelts}
\end{eqnarray}
we find that
\begin{equation}
\langle \psi_0|\widehat{S}|\psi_0 \rangle=0,\label{meanS0}
\end{equation}
since the only possible nonzero term, $\widehat{a}^\dag_{k_{LO}}\widehat{a}_{k_{LO}}$\/, is forbidden by the restriction on
the summation in (\ref{SopP2}),
and
\begin{eqnarray}
\langle \psi_1|\widehat{S}|\psi_1 \rangle&=&\frac{\kappa \hbar}{2\varepsilon_0 V}\;\left(\omega_{T}\omega_{LO}\right)^{1/2}\nonumber\\
&&|\alpha||\beta|\cos(\theta_T-\theta_{LO}+\theta_H),\label{meanS1}
\end{eqnarray}
where
\begin{equation}
\theta_T=\arg{\beta},\hspace*{5mm}\theta_{LO}=\arg{\alpha}.\label{thetadefs}
\end{equation}
Using (\ref{commutation})-(\ref{psi0P2}) and (\ref{killers}),
$$
\langle \psi_0|\widehat{S}^2|\psi_0 \rangle=\left(\frac{\kappa\hbar}{4 \varepsilon_0 V}\right)^2\hspace*{5in}
$$
$$
\sum_{k \mbox{ \footnotesize s.t. }|\omega_{LO}-\omega_{k}|=\omega_H}\;\;
\sum_{l \mbox{ \footnotesize s.t. }|\omega_{l}-\omega_{LO}|=\omega_H}\;\;
\omega_{LO}\left(\omega_{k}\omega_{l}\right)^{1/2}\hspace*{5in}
$$
\begin{eqnarray}
&&
\langle\psi_0|\widehat{a}^\dag_{k_{LO}}\widehat{a}_{k}\widehat{a}^\dag_{l}\widehat{a}_{k_{LO}}|\psi_0\rangle\nonumber\\
&&\exp\left(-i[\varepsilon(\omega_{LO}-\omega_{k})+\varepsilon(\omega_{l}-\omega_{LO})]\theta_H\right).\label{meanSsqaP2}
\end{eqnarray}
Neither $k$\/ nor $l$\/ can be equal to $k_{LO}$\/, due to the restrictions in the summations in (\ref{meanSsqaP2}).
If $k\neq l$\/ then $\widehat{a}_{k}$\/ and $\widehat{a}^\dag_{l}$\/
commute, yielding zero since the non-$LO$\/ modes are in the vacuum state. So the only surviving terms are
those for which $k=l$\/. Using (\ref{commutation}),
\begin{equation}
\langle \psi_0|\widehat{S}^2|\psi_0 \rangle=\left(\frac{\kappa\hbar}{4 \varepsilon_0 V}\right)^2
\sum_{k \mbox{ \footnotesize s.t. }|\omega_{LO}-\omega_{k}|=\omega_H}\omega_{LO}\omega_{k}
\bar{n}_{LO},\label{meanSsqbP2}
\end{equation}
where
\begin{equation}
\bar{n}_{LO}=\mbox{}_{k_{LO}}\langle \alpha,\xi|\widehat{a}^\dag_{k_{LO}}\widehat{a}_{k_{LO}}|\alpha,\xi\rangle_{k_{LO}}.\label{nLObarP2}
\end{equation}
Using (\ref{Var0formula}), (\ref{meanS0}) and (\ref{meanSsqbP2}),
\begin{equation}
\mbox{Var}_0S=\left(\frac{\kappa\hbar}{4 \varepsilon_0 V}\right)^2
\sum_{k \mbox{ \footnotesize s.t. }|\omega_{LO}-\omega_{k}|=\omega_H}\omega_{LO}\omega_{k}
\bar{n}_{LO}.\label{Var0bP2}
\end{equation}
The contribution to (\ref{Var0bP2}) from the term $\omega_k=\omega_{LO}-\omega_H$\/ is termed the ``image band''
contribution\cite{Haus2000}.
In practice $\omega_H \ll \omega_T,\omega_{LO}$\/,
so we can take
\begin{equation}
\omega_T\approx\omega_{LO}\equiv\omega.\label{omega}
\end{equation}
Using (\ref{omega}), (\ref{meanS1}) and (\ref{Var0bP2}) become
\begin{equation}
\langle \psi_1|\widehat{S}|\psi_1 \rangle=\frac{\kappa \hbar\omega}{2\varepsilon_0 V}
|\alpha||\beta|\cos(\theta_T-\theta_{LO}+\theta_H),\label{meanS1bP2}
\end{equation}
\begin{eqnarray}
\mbox{Var}_0S&=&2\left(\frac{\kappa\hbar\omega}{4 \varepsilon_0 V}\right)^2
\bar{n}_{LO}.\label{Var0cP2}
\end{eqnarray}
Using (\ref{meanS0}), (\ref{meanS1bP2}) and (\ref{Var0cP2}), the signal-to-noise ratio (\ref{SNRdef}) is
$$
\mbox{SNR}=\frac{2|\alpha|^2|\beta|^2 \cos^2(\theta_T-\theta_{LO}+\theta_H)}{\bar{n}_{LO}}\hspace*{5in}
$$
\begin{equation}
=2\left(1-\frac{\sinh^2(r)}{\bar{n}_{LO}}\right)\bar{n}_T \cos^2(\theta_T-\theta_{LO}+\theta_H)\label{SNRb}
\end{equation}
using the relations\cite{GerryKnight2005}
\begin{equation}
|\beta|^2=\bar{n}_{T}=\mbox{}_{k_T}\langle \beta|\widehat{a}^\dag_{k_T}\widehat{a}_{k_T}|\beta\rangle_{k_T},\label{nTP2}
\end{equation}
\begin{equation}
\bar{n}_{LO}=|\alpha|^2+\sinh^2(r).\label{nLOalphaconstraint}
\end{equation}
The parameter $r=|\xi|$\/
is termed the ``squeezing parameter.'' The value $r=0$\/ corresponds to no squeezing (coherent state).
From (\ref{SNRb}) it is clear that squeezing the LO mode---i.e., letting the LO be in
a state with $r > 0$\/---can only reduce the signal-to-noise ratio.
This result is consistent with the observation by Yuen and Chan\cite{YuenChan1983}, in the context of balanced detection, that while ``quantum noise is frequently supposed to
arise from local-oscillator (LO) shot noise \ldots it actually arises from the signal quantum fluctuation.''
The reasonable but incorrect expectation that squeezing the LO will improve SNR arises from the fact that
the variance of the zero-frequency signal, i.e., the
time-averaged photoelectric current corresponding
to the operator
\begin{equation}
\widehat{S}^\prime=\tau^{-1}\int_0^\tau dt \; \widehat{I}(t)=\frac{\kappa\hbar\omega_{k}}{2 \varepsilon_0 V}\;\widehat{a}^\dag_{k}\widehat{a}_{k} \label{Spropdef}
\end{equation}
(the second equality holding in the limit $\tau \rightarrow \infty$\/), does change with squeezing.
In the target-absent state,
\begin{equation}
\mbox{Var}_0 S^\prime=\langle\psi_0|\widehat{S}^{\prime 2}|\psi_0\rangle-\langle\psi_0|\widehat{S}^{\prime}|\psi_0\rangle^2,\label{VaroSprdef}
\end{equation}
which, for $\tau \rightarrow \infty$\/, has the value
\begin{equation}
\mbox{Var}_0 S^\prime=\left(\frac{\kappa\hbar\omega_{LO}}{2\varepsilon_0 V}\right)^2\mbox{var}\;n_{LO},\label{Var0Spr}
\end{equation}
where
\begin{eqnarray}
\mbox{var}\;n_{LO}&=&\left[\rule[-.3cm]{0cm}{.6cm}\mbox{}_{k_{LO}}\langle \alpha,\xi|(\widehat{a}^\dag_{k_{LO}}\widehat{a}_{k_{LO}})^2|\alpha,\xi\rangle_{k_{LO}}\right.\nonumber\\
&-&\left.\left(\mbox{}_{k_{LO}}\langle \alpha,\xi|\widehat{a}^\dag_{k_{LO}}\widehat{a}_{k_{LO}}|\alpha,\xi\rangle_{k_{LO}}\right)^2\right].\label{varnLOdef}
\end{eqnarray}
For suitable choice of the phase of $\xi$\/, (\ref{varnLOdef}) can indeed be lower than $\bar{n}_{LO}$\/, the value it takes in a
coherent state.\cite{GerryKnight2005}
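(For reference, the explicit form of this variance for the squeezed state, a standard textbook result quoted here rather than derived (see \cite{GerryKnight2005}), with $\xi=r e^{i\theta}$\/, is
\begin{equation}
\mbox{var}\;n_{LO}=\left|\alpha\cosh(r)-\alpha^\ast e^{i\theta}\sinh(r)\right|^2+2\sinh^2(r)\cosh^2(r),
\end{equation}
so that for the phase choice $\theta=2\theta_{LO}$\/ the first term is reduced to $|\alpha|^2 e^{-2r}$\/, below the coherent-state value $|\alpha|^2$\/.)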
However, statistical decision theory\cite{Helstrom1976} indicates that if a quantity $S$\/ computed from measurements made by a detector is used as the decision criterion in a target-detection task, then it is the variance of {\em that
same quantity $S$\/}\/ that is relevant in evaluating the suitability of $S$\/ for the task. In heterodyne radar the computed quantity $S$\/ is the Fourier component of the instantaneous photocurrent induced at the detector by the combined target-return and LO beams\cite{Kingston1995}. The operator corresponding to the instantaneous detector response is $\widehat{I}(t)$\/, so the operator corresponding to the required Fourier component is $\widehat{S}$\/ as defined in (\ref{Sopdef}).
It is thus the variance of $\widehat{S}$\/, not that of
$\widehat{S}^\prime$\/, which must be used for computing SNR.
(The general expression for the signal operator (\ref{Sopdef}) for
$\tau$\/ not necessarily infinite, and $\omega_H$\/ not necessarily equal to $|\omega_k-\omega_l|$\/ for any $k,l$\/, is
$$
\widehat{S}=\frac{\kappa\hbar}{2 \varepsilon_0 V}\;\sum_{l,k} (\omega_{l}\omega_{k})^{1/2} \widehat{a}^\dag_{l}\widehat{a}_{k}
$$
\vspace*{-.375cm}
\begin{eqnarray}
\cdot\frac{1}{2i\tau}\left\{\frac{\exp\left(i\theta_H\right)}{\omega_l-\omega_k+\omega_H}
\left[\exp\left(i(\omega_l-\omega_k+\omega_H)\tau\right)-1\right]\right.&&\nonumber\\
\hspace{6mm}\left.+\frac{\exp\left(-i\theta_H\right)}{\omega_l-\omega_k-\omega_H}
\left[\exp\left(i(\omega_l-\omega_k-\omega_H)\tau\right)-1\right]\right\}.&&\label{Gkldef}
\end{eqnarray}
This reduces to (\ref{SopP2}), (\ref{Spropdef})
for the appropriate limiting values of $\tau$\/, $\omega_H$\/ and $\theta_H$\/.)
\mbox{}
M. A. R. thanks Jonathan Ashcom and Jae Kyung for a helpful discussion on mixing efficiency.
\begin{thebibliography}{99}
\bibitem{WallsMilburn1994} D.~F.~Walls and G.~J.~Milburn, {\em Quantum Optics}\/ (Springer, Berlin, 1994).
\bibitem{Caves1981} C.~M.~Caves, Phys. Rev. D {\bf23}, 1693 (1981).
\bibitem{Kingston1995} R.~H.~Kingston, {\em Optical Sources, Detectors and Systems: Fundamentals and Applications}\/, (Academic Press, San Diego, 1995).
\bibitem{Lietal1999} Y.-q.~Li, D.~Guzun, and M.~Xiao, Phys. Rev. Lett. {\bf 82}, 5225 (1999).
\bibitem{Ralph2000} T.~C.~Ralph, Phys. Rev. Lett. {\bf 85}, 677 (2000).
\bibitem{Hernandez1986} G.~Hernandez, {\em Fabry-Perot Interferometers}\/ (Cambridge, 1986).
\bibitem{YuenChan1983} H.~P.~Yuen and V.~W.~S.~Chan, Opt. Lett. {\bf 8}, 177 (1983).
\bibitem{Annovazzi-Lodietal1992}V.~Annovazzi-Lodi, S.~Donati and S.~Merlo, Opt. Quan. Electron. {\bf 24}, 258 (1992).
\bibitem{Lietal1997} Y.-q.~Li, P.~Lynam, M.~Xiao, and P.~J.~Edwards, Phys.
Rev. Lett. {\bf 78}, 3105 (1997).
\bibitem{Helstrom1976} C.~W.~Helstrom, {\em Quantum Detection and Estimation Theory}\/ (Academic, New York, 1976).
\bibitem{GottfriedYan2003} K.~Gottfried and T.-M.~Yan, {\em Quantum Mechanics: Fundamentals,}\/ 2nd ed. (Springer, New York, 2003).
\bibitem{Glauber1965} R.~J.~Glauber, ``Optical coherence and photon statistics,'' in
{\em Quantum Optics and Electronics}, C.~DeWitt, A.~Blandin, and C.~Cohen-Tannoudji, eds. (Gordon and Breach, New York, 1965).
\bibitem{GerryKnight2005} C.~G.~Gerry and P.~L.~Knight, {\em Introductory Quantum Optics}\/ (Cambridge, 2005).
\bibitem{Haus2000} H.~A.~Haus, {\em Electromagnetic Noise and Quantum Optical Measurements}\/ (Springer, Berlin, 2000).
\end{thebibliography}
\end{document} |
\begin{document}
\title[\tiny{Algebraic characterizations of homeomorphisms between algebraic varieties}]{Algebraic characterizations of homeomorphisms between algebraic varieties}
\author{Fran\c cois Bernard, Goulwen Fichou, Jean-Philippe Monnier and Ronan Quarez}
\thanks{The second author wishes to thank Olivier Wittenberg for fruitful discussions. The authors have received support from the Henri Lebesgue Center ANR-11-LABX-0020-01 and the project EnumGeom ANR-18-CE40-0009.}
\address{François Bernard\\
Univ Angers, CNRS, LAREMA, SFR MATHSTIC, F-49000 Angers, France}
\email{[email protected]}
\address{Goulwen Fichou\\
Univ Rennes, CNRS, IRMAR - UMR 6625, F-35000 Rennes, France}
\email{[email protected]}
\address{Jean-Philippe Monnier\\
Univ Angers, CNRS, LAREMA, SFR MATHSTIC, F-49000 Angers, France}
\email{[email protected]}
\address{Ronan Quarez\\
Univ Rennes\\
Campus de Beaulieu, 35042 Rennes Cedex, France}
\email{[email protected]}
\date\today
\subjclass[2020]{14A10,13B22,14P99}
\keywords{homeomorphisms of algebraic varieties, weak normalization, seminormalization, saturation, regulous functions, real closed fields}
\begin{abstract} We address the question of finding algebraic properties that are respectively equivalent, for a morphism between algebraic varieties over an algebraically closed field of characteristic zero, to being a homeomorphism for the Zariski topology and for a strong topology that we introduce. Our answers involve a study of seminormalization and saturation for morphisms between algebraic varieties, together with an interpretation in terms of continuous rational functions on the closed points of an algebraic variety. The continuity refers to the strong topology, which is the usual Euclidean topology in the complex case, whereas it comes from the theory of real closed fields otherwise.
\end{abstract}
\maketitle
\vskip 15mm
Let $k$ be an algebraically closed field. Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$ and $\pi_k:Y(k)\to X(k)$ be its restriction to the closed points. The main purpose of this paper is to find algebraic characterizations for topological conditions on $\pi$ or $\pi_k$.
In this direction, we compare bijections, isomorphisms and homeomorphisms with respect to the Zariski topology, and to a strong topology on the closed points when $\car(k)=0$ (corresponding to the Euclidean topology when $k=\C$ is the field of complex numbers).
As first comparisons, recall from the Nullstellensatz that $\pi$ is a homeomorphism if and only if $\pi_k$ is a homeomorphism. Also, it is clear that if $\pi$ is an isomorphism, then it is a homeomorphism and in particular $\pi_k$ is a bijection. However, in general, having a bijection at the level of closed points does not induce a homeomorphism or an isomorphism between the varieties. To obtain results of this kind, one must add some conditions on the varieties. For example, if $\pi_k$ is a bijection and $X$ and $Y$ are irreducible curves, then $\pi$ is a homeomorphism. In higher dimensions, a bijection (or a birational bijection in positive characteristic) between irreducible varieties induces an isomorphism when the target variety is normal, by Zariski's Main Theorem.
Assuming now $\pi$ to be a homeomorphism, there are similar results involving the notions of seminormality and weak normality instead of normality. Andreotti and Bombieri \cite{AB} proved that $\pi$ is an isomorphism if $X$ is weakly normal and $\pi$ is finite. Vitulli \cite{V2} managed to remove the finiteness assumption on $\pi$, by requiring that $X$ have no one-dimensional components. This dimensional condition is necessary, as illustrated by the normalization of a nodal curve with one of the preimages of the singular point removed: we obtain a homeomorphism onto a seminormal curve which is not an isomorphism. Vitulli's dimensional condition in fact guarantees that the homeomorphism is a finite morphism.
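To fix ideas, here is an explicit instance of this phenomenon (a classical example, spelled out for the convenience of the reader). Consider the nodal plane cubic $X=\{y^2=x^2(x+1)\}$, whose normalization is the map
$$\mathbb{A}^1\to X,\qquad t\mapsto \left(t^2-1,\;t(t^2-1)\right),$$
under which the node at the origin has the two preimages $t=\pm 1$. Removing one of them, the restriction $\pi:\mathbb{A}^1\setminus\{1\}\to X$ is a bijective morphism between irreducible curves, hence a homeomorphism for the Zariski topology, but it is not an isomorphism since $X$ is singular at the node whereas $\mathbb{A}^1\setminus\{1\}$ is smooth.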
The weak normality property appearing in the results of Andreotti, Bombieri and Vitulli is very close to the notion of seminormality, and both are related to the notions of subintegral and weakly subintegral morphisms. A morphism $\pi:Y\to X$ is subintegral (resp. weakly subintegral) if it is integral, bijective and equiresidual (resp. residually purely inseparable), i.e. it induces isomorphisms (resp. purely inseparable extensions) between the residue fields. The seminormalization (resp. weak normalization) of $X$ in $Y$, introduced by Andreotti and Bombieri, is the maximal variety with a subintegral (resp. weakly subintegral) morphism onto $X$ which factorizes $\pi$. Note that when $\car(k)=0$, weakly subintegral means subintegral and weak normality means seminormality.
The notions of weak normality and seminormality were first introduced by Andreotti and Norguet \cite{AN} for complex analytic varieties, then by Andreotti and Bombieri \cite{AB} for schemes and by Traverso \cite{T} for rings. They appear in the study of Picard groups \cite{T} and as singularities in the minimal model program \cite{KoKo}. Seminormalization, weak normalization and (weakly) subintegral morphisms are studied in section \ref{sect-semi}. The closely related notion of radicial morphism, introduced by Grothendieck \cite{Gr1}, only requires that the (not necessarily integral) morphism be injective and residually purely inseparable. In section \ref{sect-sat} we study the saturation of varieties as the geometric counterpart of radiciality. Saturation first appeared in the context of Lipschitz geometry, with the works of Pham and Teissier \cite{PT} in complex analytic geometry and of Lipman \cite{Lip} for ring extensions. For integral morphisms, saturation and weak normalization coincide, providing a different approach to weak normality, as proposed by Manaresi \cite{Mana}. However, it is not established that the saturation produces a variety, in contrast to the weak normalization and the seminormalization. A comparison of all these notions is carried out in section \ref{comparaison}.
It will be crucial in our discussions to consider a topology other than the Zariski topology.
For an algebraic variety $X$ over the complex numbers, the strong topology on the closed points $X(\C)$ of $X$ comes from the isomorphism $\R[\sqrt{-1}]=\C$, which gives an identification $\C\simeq \R^2$, and from the property of $\R$ of being a real closed field. Indeed, the Euclidean topology on $\R^n$ has a basis of open sets given by semialgebraic subsets of $\R^n$, i.e. subsets given by real polynomial equalities and inequalities. The theory of semialgebraic sets provides an algebraic way to address topological questions in real algebraic geometry, as developed in \cite{BCR}. The great advantage of this approach is that it generalizes from $\R$ to any real closed field. A real closed field is an ordered field that does not admit any ordered algebraic extension. Equivalently, adding a square root of $-1$ to a real closed field gives an algebraically closed field of characteristic zero. Real closed fields were initially studied by Artin and Schreier \cite{AS} on the way to Artin's proof of Hilbert's XVIIth Problem \cite{A}. The most basic examples beyond $\R$ are the field of real algebraic numbers, which is the real closure of $\QQ$, and the field of Puiseux series with real coefficients, which is the real closure of the field $\R((T))$ of Laurent series ordered by taking $T$ positive and infinitely small. There are many such fields, as illustrated by the fact that any algebraically closed field $k$ of characteristic zero contains (infinitely many) real closed subfields $R\subset k$ with $k=R[\sqrt{-1}]$. Fixing such a choice of $R$ leads to an identification $k\simeq R^2$ and equips $k$ with an order topology, called the $R$-topology on $k$. Note that in general $R$ is not connected, and a closed and bounded interval is not compact. In any case, for a given algebraic variety $X$ over an algebraically closed field $k$ of characteristic zero, the choice of a real closed field $R\subset k$ such that $k=R[\sqrt{-1}]$ makes $X(k)$ a topological space for the $R$-topology.
Of course, if $k=\C$ and $R=\R$ then we recover the Euclidean topology. The $R$-topology on the closed points of algebraic varieties over an algebraically closed field $k$ of characteristic zero is studied in section \ref{regulous}. The introduction of the $R$-topology allows us to characterize finite morphisms which are homeomorphisms via subintegrality and radiciality. Concerning subintegrality, it generalizes to any algebraically closed field of characteristic zero a result of the first author \cite[Thm. 3.1]{Be} in complex geometry.
\begin{thmx}\label{thmA} Let $\pi:Y\to X$ be a finite morphism between algebraic varieties over an algebraically closed field $k$ of characteristic zero. Let $R\subset k$ be a real closed subfield such that $k=R[\sqrt{-1}]$.
The following properties are equivalent:
\begin{enumerate}
\item[(i)] $\pi$ is a homeomorphism.
\item[(ii)] $\pi_k$ is a homeomorphism for the $R$-topology.
\item[(iii)] $\pi$ is subintegral.
\item[(iv)] $\pi$ is radicial.
\end{enumerate}
\end{thmx}
We focus in section \ref{sect-homeo} on the relations between the four equivalent properties appearing in Theorem \ref{thmA} when we remove the finiteness hypothesis. Note that the first two properties are topological whereas the last two are algebraic. The equivalence between the four above properties is no longer true without the finiteness hypothesis. In particular, a homeomorphism with respect to the Zariski topology need not be a homeomorphism with respect to the $R$-topology, even for irreducible affine curves. Our interpretation is that the relevant topology to associate to subintegrality is the $R$-topology, whereas the Zariski topology is rather related to the notion of radiciality.
\begin{thmx}\label{thmB} Let $\pi:Y\to X$ be a morphism between algebraic varieties over an algebraically closed field $k$ of characteristic zero. Let $R\subset k$ be a real closed subfield such that $k=R[\sqrt{-1}]$.
Then :
\begin{enumerate}
\item[(i)] $\pi_k$ is a homeomorphism for the $R$-topology if and only if $\pi$ is subintegral.
\item[(ii)] If $\pi$ is a homeomorphism then $\pi$ is radicial.
\end{enumerate}
\end{thmx}
The key result we prove to obtain the first statement of Theorem \ref{thmB} is that a homeomorphism for the $R$-topology is always finite. Note moreover that, although the second part of Theorem \ref{thmB} does not refer to the $R$-topology, our proof is a consequence of the first statement, where the use of the $R$-topology is crucial. We prove along the way that a homeomorphism with respect to the $R$-topology is a homeomorphism for the Zariski topology, the converse being true except for curves. In section \ref{sect-carpos} we consider the situation where $k$ has positive characteristic, so that the $R$-topology no longer exists, and we provide an alternative proof of the second statement of Theorem \ref{thmB} in this context. As a consequence, we completely answer the question, considered by Vitulli \cite{V2}, of characterizing when a homeomorphism is an isomorphism.
\begin{thmx}\label{thmC} Let $\pi:Y\to X$ be a morphism between algebraic varieties over an algebraically closed field $k$ of characteristic zero. Let $R\subset k$ be a real closed subfield such that $k=R[\sqrt{-1}]$.
\begin{enumerate}
\item[(i)] Assume $\pi_k$ is a homeomorphism for the $R$-topology. Then, $\pi$ is an isomorphism if and only if $X$ is seminormal in $Y$.
\item[(ii)] Assume $\pi$ is a homeomorphism. Then, $\pi$ is an isomorphism if and only if $X$ is saturated in $Y$.
\end{enumerate}
\end{thmx}
The study of seminormalization over non-algebraically closed fields presents some difficulty, and the last three authors managed to define a sort of seminormalization for algebraic varieties over the field of real numbers \cite{FMQ}. This notion has to do with the central points of a real algebraic variety, that is, the Euclidean closure of the set of regular points. This approach would not have been possible without the recent study of continuous rational functions in real algebraic geometry, as initiated by Kucharz \cite{Ku}, Koll\'ar and Nowak \cite{KN}, and further developed in \cite{FHMM} as regulous functions. These regulous functions happen to be related to seminormality for complex algebraic varieties too, as studied by the first author \cite{Be}. It is this approach to seminormality, via continuous functions with respect to the $R$-topology, that we develop at the end of section \ref{sect-CR}.
In particular, we provide a full study of the relation between seminormality and the $R$-topology, completely parallel to the complex case in \cite{Be}. We introduce continuous rational functions over $k$ in section \ref{sect-CR}. A remarkable fact is that the continuity of a rational function defined on an algebraic variety $X$ over $k$ does not depend on the choice of the real closed field $R$. Even more, these continuous rational functions coincide with the regular functions on the seminormalization of $X$, cf. Theorem \ref{caractK0}. As a consequence, fixing a real closed field $R\subset k$ brings all the flexibility of semialgebraic geometry over $R$ to algebraic geometry over $k$, without losing generality.
We prove notably that the seminormalization determines a variety up to biregulous equivalence. As a consequence, we obtain the following result, which goes in the direction of the problems considered by Koll\'ar in \cite{Ko} (see also \cite{KMOS,Ce}).
\begin{thmx}\label{thmD}
Let $X$ and $Y$ be seminormal algebraic varieties over an algebraically closed field $k$ of characteristic zero. If $X(k)$ and $Y(k)$ are biregulously equivalent, then $X$ and $Y$ are isomorphic.
\end{thmx}
\vskip 5mm
In the paper, $k$ denotes a field (sometimes algebraically closed) of any characteristic, and an algebraic variety over $k$ is a reduced and separated scheme of finite type over $k$.
\section{Subintegrality, weak normalization and seminormalization}\label{sect-semi}
After some reminders on integral extensions and normalization, we recall the notions of (weakly) subintegral extensions and the respective constructions, by Traverso \cite{T} and by Andreotti and Bombieri \cite{AB}, of the seminormalization and the weak normalization for ring extensions and for morphisms between algebraic varieties.
\vskip 5mm
{\bf Notation and terminology.}
Let $A$ be a ring. The Zariski spectrum $\Sp A$ of $A$ is the set of prime ideals of $A$. It is a topological space for the topology whose closed sets are generated by the sets $\V(f)=\{\p\in\Sp A\mid
f\in\p\}$ for $f\in A$. We denote by $\Max A$ the subspace of maximal ideals of $A$. For $\p\in\Sp A$, we denote by $k(\p)$ the residue field at $\p$.
Let $(X,\SO_X)$ be a variety over $k$.
For $x\in X$ we denote by $k(x)$ the residue field at $x$; if $U$ is an affine neighborhood of $x$, then $x$ corresponds to a prime ideal $\p_x$ of $\SO_X(U)$ and we have $k(x)=k(\p_x)$. In case $X$ is affine, we denote by $k[X]$ the coordinate ring of $X$, i.e.\ $k[X]=\SO_X(X)$.
Let $K$ be a field containing $k$. We denote by $X(K)$ the set $\Mor(\Sp K,X)$ of $K$-rational points.
If $K=k$ then $X(k)$ is also the set of $k$-closed points of $X$, i.e.\ the points of $X$ with residue field equal to $k$. We thus have an inclusion $$X(k)\hookrightarrow X$$ that makes $X(k)$ a topological space for the Zariski topology. We denote by $\SO_{X(k)}$ the sheaf of regular functions on $X(k)$; for $x\in X(k)$ we have $\SO_{X,x}=\SO_{X(k),x}$.
In the case where $k$ is algebraically closed, for an open subset $U$ of $X$ we may identify $\Max \SO_X(U)$ with $U(k)$ by the Nullstellensatz, and similarly we identify the regular functions on
$U$ with those on $U(k)$, namely $\SO_X(U)=\SO_{X(k)}(U(k))$. If $T$ is a subset of $X$ or $X(k)$ or $\Sp A$, then we denote by $\overline{T}^Z$ the closure of $T$ for the Zariski topology.
A ring extension $i:A\to B$ induces a map $\Sp(i):\Sp B\to \Sp A$,
given by $\p\mapsto (\p\cap A)=i^{-1}(\p)$.
If $\pi:Y\rightarrow X$ is a morphism between
algebraic varieties over $k$, with $\SO_X\to \pi_*\SO_Y$ the associated morphism of sheaves of rings on $X$, then for any open subset $U\subset X$ the ring morphism $\SO_X(U)\to \SO_Y(\pi^{-1}(U))$ is an extension if $\pi$ is dominant. For a field extension $k\to K$, we denote by $\pi_{K}:Y(K)\to X(K)$ the induced map. Remark that $\pi_k$ is also the restriction of $\pi$ to the $k$-closed points.
\vskip 2mm
In the sequel, $\K(A)$ (resp. $\K$) will denote the total ring of fractions of $A$ (resp. the sheaf of total ring of fractions on $X$).
\subsection{Reminder on integral extensions and normalization}
A ring extension $A\to B$ is said to be of finite type (resp. finite) if it makes $B$ a finitely generated $A$-algebra (resp. $A$-module). The extension $A\to B$ is
birational if it induces an isomorphism between
$\K(A)$ and $\K(B)$.
An
element $b\in B$ is integral over $A$ if $b$ is the root of a monic
polynomial with coefficients in $A$, which is equivalent to $A[b]$ being a finite $A$-module by \cite[Prop. 5.1]{AM}. As a consequence
$$A_B'=\{b\in B|\,b\,\, {\rm is\,\,
integral\,\, over}\,\,A\}$$ is a ring called the integral closure of $A$ in
$B$. The extension $A\to B$ is said to be integral if $A_B'=B$. In
case $B=\K(A)$, the ring $A_{\K(A)}'$ is
denoted by $A'$ and is simply called the integral closure of $A$.
The ring $A$ is called integrally closed (resp. integrally closed in $B$) if
$A=A'$ (resp. $A=A_B'$).
We recall that a dominant morphism $\pi:Y\rightarrow X$ between
algebraic varieties over $k$ is said to be of finite type (resp. finite, birational, integral) if for any open subset $U\subset X$ the ring extension
$\SO_X(U)\rightarrow \SO_Y(\pi^{-1}(U))$ is of finite type (resp. finite, birational, integral). In this paper, a morphism between algebraic varieties is always of finite type.
Let $X$ be an algebraic variety over $k$. The normalization of $X$, denoted by $X'$, is the algebraic variety over $k$ with a finite birational morphism $\pi':X'\rightarrow X$, called the normalization morphism such that for any open subset $U\subset X$ we have $\SO_{X'} (\pi'^{-1}(U))=\SO_X(U)'$.
We say that $X$
is normal if $\pi'$ is an isomorphism. A point $x\in X$ is said to be normal if $\SO_{X,x}$ is integrally closed.
\vskip 2mm
We will frequently use the fact that an integral extension of rings induces surjectivity at the spectrum level (see \cite[Thm. 9.3]{Ma} or \cite[Thm. 5.10, Cor. 5.8]{AM}).
\begin{prop}
\label{lying-over}
Let $A\to B$ be an integral extension of rings.
The maps $\Sp B\to \Sp A$ and $\Max B\to \Max A$ are surjective and closed.
\end{prop}
As a consequence of Proposition \ref{lying-over}, if $\pi$ is a finite morphism between
algebraic varieties over $k$,
then $\pi$ and $\pi_{k}$ are surjective.
\subsection{Subintegral extensions, weak normalization and seminormalization}
We recall the concepts of subintegral and weakly subintegral extensions, introduced respectively by Traverso \cite{T} and by Andreotti and Bombieri \cite{AB}.
\begin{defn}
Let $A\to B$ be an extension of rings.
\begin{enumerate}
\item For $\p\in\Sp B$, we say that $\Sp B\to\Sp A$ is equiresidual (resp. residually purely inseparable) at $\p$ if the extension $k(\p\cap A)\to k(\p)$ is an isomorphism (resp. purely inseparable).\\
Let $W\subset \Sp B$. We say that $\Sp B\to\Sp A$ is equiresidual (resp. residually purely inseparable) by restriction to $W$ if for any $\p\in W$, the map $\Sp B\to\Sp A$ is equiresidual (resp. residually purely inseparable) at $\p$. If $W=\Sp B$ then we simply say equiresidual (resp. residually purely inseparable).\\
The extension $A\to B$ is said to be equiresidual (resp. residually purely inseparable) if $\Sp B\to \Sp A$ is.
\item The extension $A\to B$ and the map $\Sp B\to \Sp A$ are said to be (resp. weakly) subintegral if the extension is integral and $\Sp B\to\Sp A$ is bijective and equiresidual (resp. residually purely inseparable).
\end{enumerate}
\end{defn}
Note that a field extension is equiresidual (resp. residually purely inseparable) if and only if it is an isomorphism (resp. purely inseparable). Remark that a subintegral extension is weakly subintegral and that the converse holds in characteristic zero.
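Let us illustrate these notions on two standard examples.
\begin{ex}
Let $k$ be a field of characteristic zero. The extension $k[x^2,x^3]\to k[x]$ is subintegral: it is integral and birational, the induced map on spectra is bijective, and every residue field extension is an isomorphism. By contrast, the integral extension $k[x^2-1,x(x^2-1)]\to k[x]$ is not subintegral: the two maximal ideals $(x-1)$ and $(x+1)$ of $k[x]$ contract to the same maximal ideal, so the induced map on spectra is not injective. Geometrically, these are the normalizations of the cuspidal cubic and of the nodal cubic respectively.
\end{ex}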
We extend these definitions to the geometric setting, adding moreover a notion of hereditarily birational morphism, which has been introduced in the real setting in \cite{FMQ}.
\begin{defn}
Let $\pi:Y\to X$ be a dominant morphism between
algebraic varieties over $k$.
\begin{enumerate}
\item We say that $\pi$ is equiresidual (resp. residually purely inseparable) if for any $y\in Y$ the field extension $k(\pi(y))\to k(y)$ is an isomorphism (resp. purely inseparable).
\item We say that $\pi$ is (resp. weakly) subintegral if $\pi$ is integral, bijective and equiresidual (resp. residually purely inseparable).
\item We say that $\pi:Y\to X$ is hereditarily birational if for any open subset $U\subset X$ and
for any irreducible algebraic subvariety $V=\V(\p)\simeq\Sp(\SO_Y(\pi^{-1}(U))/\p)$ in $\pi^{-1}(U)$, the
morphism $$\pi_{|V}: V\to W=\V(\p\cap \SO_X(U))\simeq\Sp\big(\SO_X(U)/(\p\cap \SO_X(U))\big)$$ is
birational.
\end{enumerate}
\end{defn}
Geometrically speaking, a dominant morphism $\pi:Y\to X$ is equiresidual if and only if it is hereditarily birational. Indeed, for any open subset $U\subset X$ and for any irreducible algebraic subvariety $V=\V(\p)\simeq\Sp(\SO_Y(\pi^{-1}(U))/\p)$ in $\pi^{-1}(U)$, the restricted morphism $$\pi_{|V}: V\to W=\V(\p\cap \SO_X(U))\simeq\Sp \big( \SO_X(U)/(\p\cap \SO_X(U))\big)$$ is
birational if and only if the extension $k(\p\cap \SO_X(U))=\K(W)\to k(\p)=\K(V)$ is an isomorphism.
A hereditarily birational morphism is not necessarily bijective. However, adding an integrality assumption and using Proposition \ref{lying-over}, we get the following characterization.
\begin{lem}
Let $\pi:Y\to X$ be an integral morphism between
algebraic varieties over $k$. The following properties are equivalent:
\begin{enumerate}
\item $\pi$ is hereditarily birational and injective.
\item $\pi$ is subintegral.
\end{enumerate}
\end{lem}
The notion of (resp. weakly) subintegral extension leads to the notion of seminormalization (resp. weak normalization), in the same way that integral extensions lead to the normalization.
In order to define the notions of seminormality and weak normalization, we need to consider sequences of ring extensions.
A ring $C$ is said to be intermediate between the rings $A$ and $B$ if there exists a sequence of extensions $A\to C\to B$. In that case, we say that $A\to C$ and $C\to B$ are intermediate extensions of $A\to B$, and we say in addition that $A\to C$ is a subextension of $A\to B$.
Seminormal (resp. weakly normal) extensions are maximal (resp. weakly) subintegral extensions.
\begin{defn}
\label{defseminorm}
Let $A\rightarrow C\to B$ be a sequence of two extensions of rings with $A\to C$ (resp. weakly) subintegral.
We say that $C$ is seminormal (resp. weakly normal) between $A$ and $B$
if for every
intermediate ring
$D$ between $C$ and $B$, with $D$ different from $C$, the extension $A\to D$ is not (resp. weakly)
subintegral.
We say that $A$ is seminormal (resp. weakly normal) in $B$ if $A$ is
seminormal (resp. weakly normal) between $A$ and $B$. We say that $A$ is seminormal (resp. weakly normal) if $A$ is seminormal (resp. weakly normal) between $A$ and $A'$.
\end{defn}
Recall that the characteristic exponent $e(K)$ of a field $K$ is 1 if $\car (K) = 0$
and is $p$ if $\car (K) = p > 0$.
Given an extension of rings $A\to B$, Traverso \cite{T} and Andreotti and Bombieri \cite{AB} (see also
\cite{V}) proved respectively that there exists a unique intermediate ring which is seminormal (resp. weakly normal) between $A$ and $B$. To this purpose, they introduced the rings
$$A_B^+=\{b\in A_B'|\,\,\forall\p\in\Sp
A,\,\,b_{\p}\in A_{\p}+\JRad((A_B')_{\p})\},$$
$$A_B^*=\{b\in A_B'|\,\,\forall\p\in\Sp
A,\exists n\geq 0,\,\,b_{\p}^{e(k(\p))^n}\in A_{\p}+\JRad((A_B')_{\p})\},$$
where $\JRad$ stands for the Jacobson radical, namely the intersection of all the maximal ideals. The idea behind the construction of $A_B^+$ and $A_B^*$ is, for each $\p\in\Sp A$, to glue together all the prime ideals of $A_B'$ lying over $\p$ (see \cite{Mnew}).
\begin{thm}\label{thmT} \cite{T}, \cite{AB}\\
Let $A\to B$ be an extension of rings. Then $A_B^+$ (resp. $A_B^*$) is the unique ring which is seminormal (resp. weakly normal) between $A$ and $B$.
Moreover, for any intermediate ring $C$ between A and B, the extension $ A\to C$ is (resp. weakly) subintegral if and only if $C\subset A^+_B$ (resp. $C\subset A_B^*$).
\end{thm}
The ring $A^+_B$ (resp. $A_B^*$) is called the seminormalization (resp. weak normalization) of the ring extension $ A\to B$ or the seminormalization (resp. weak normalization) of $A$ in $B$. Note that $$A\subset A_B^+\subset A_B^*\subset A_B^\prime\subset B.$$ The ring $A^+_{A'}$ (resp. $A^*_{A'}$) is called the seminormalization (resp. weak normalization) of $A$ and is simply denoted by $A^+$ (resp. $A^*$).
Note that when $A$ and $B$ are domains, $A$ and $A_B^+$ have in particular the same fraction field, and $\K(A)\to \K(A_B^*)$ is purely inseparable.
Note that the inclusion $A_B^+\subset A_B^*$ can be strict.
\begin{ex} Let $K$ be a field of characteristic $2$, $x$ be an indeterminate, and consider
the integral extension $A=K[x^2]\to B=K[x]$.
It follows from a criterion of Hamann that $A_B^+=A$ (see \cite[Ex. 2.13]{V}). Since $x^2$ and $2x= 0$ are both
in $A$ but $x$ is not in $A$, it follows from \cite[Prop. 3.10]{V} that $A$ is not weakly normal in $B$.
\end{ex}
\subsection{Seminormalization and weak normalization of a morphism between algebraic varieties}
Andreotti and Bombieri \cite{AB} have introduced and built the seminormalization and the weak normalization of a scheme in another one.
In this section, we provide a different and elementary construction of the seminormalization and the weak normalization of an affine algebraic variety in another one.
The seminormalization and the weak normalization answer the following respective questions. Let $Y\to X$ be a dominant morphism between algebraic varieties over $k$. Does there exist a biggest algebraic variety $Z$ such that $Y\to X$ factorizes through $Z$ and $Z\to X$ is (resp. weakly) subintegral?
\vskip 2mm
We recall first the notion of normalization of a variety in another one.
Let $\pi:Y\to X$ be a dominant morphism between algebraic varieties over $k$. The integral closure $(\SO_X)_{\pi_*(\SO_Y)}'$ of $\SO_X$ in $\pi_*(\SO_Y)$ is a coherent sheaf \cite[Lem. 52.15]{STPmorph} and by
\cite[II Prop. 1.3.1]{Gr2} it is the structural sheaf of a variety over $k$.
\begin{defn} \label{defnorm}
Let $\pi:Y\to X$ be a dominant morphism of finite type between algebraic varieties over $k$.
The variety with structural sheaf equal to the integral closure of $\SO_X$ in $\pi_*(\SO_Y)$ is called the normalization of
$X$ in $Y$ and is denoted by $X_Y'$.
\end{defn}
Be aware that the normalization of a variety in another one is not necessarily a normal variety, nor does it admit a birational morphism onto the original variety.
For a dominant morphism $Y\to X$ between algebraic varieties over $k$, we say that an algebraic variety $Z$ over $k$ is intermediate between $X$ and $Y$ if $Y\to X$ factorizes through $Z$. For affine varieties, this is equivalent to saying that $k[Z]$ is an intermediate ring between $k[X]$ and $k[Y]$.
The normalization of a variety in another one satisfies the following property:
\begin{prop} \label{PUnormalization}
Let $Y\to X$ be a dominant morphism between algebraic varieties over $k$. Let $Z$ be an intermediate variety between $X$ and $Y$.
Then $Z\to X$ is finite if and only if $X_Y'\to X$ factorizes through $Z$.
\end{prop}
We describe now an elementary construction of the seminormalization and the weak normalization of an affine algebraic variety in another one.
Let $Y\to X$ be a dominant morphism between affine algebraic varieties over $k$. We want to check that the rings $A_1=k[X]^{+}_{k[Y]}$ and $A_2=k[X]^*_{k[Y]}$ are coordinate rings of algebraic varieties. We know that the morphism $X_Y'\to X$ is finite by Proposition \ref{PUnormalization}, so we can apply Lemma \ref{lemintermed} below to the extensions
$$k[X]\subset A_1\subset A_2 \subset k[X'_Y]=k[X]'_{k[Y]} \subset k[Y]$$
to conclude.
\begin{lem}
\label{lemintermed}
Let $\pi:Y\to X$ be a finite morphism between affine
algebraic varieties over $k$. Let $A$ be a ring such that
$k[X]\subset A\subset k[Y]$. Then $A$ is the coordinate ring of a unique
affine algebraic variety over $k$ and $\pi$ factorizes through this
variety.
\end{lem}
\begin{proof}
Since $k[Y]$ is a finite module over the Noetherian ring $k[X]$, it is a Noetherian $k[X]$-module.
Thus the ring $A$ is a finite $k[X]$-module, being a submodule of a Noetherian $k[X]$-module. It follows that $A$ is a finitely generated algebra over $k$, and the proof is done.
\end{proof}
For general constructions of the seminormalization and the weak normalization, one needs to check that the seminormalizations and the weak normalizations of the local charts of an affine covering glue together to give a global variety. This is done by Andreotti and Bombieri \cite{AB} using Grothendieck's criterion \cite[II Prop. 1.3.1]{Gr2} concerning the quasi-coherence of sheaves. It leads to the following definitions.
\begin{defn}
Let $\pi:Y\to X$ be a dominant morphism between algebraic varieties over $k$. The seminormalization (resp. weak normalization) of $X$ in $Y$ is the algebraic variety $X^+_Y$
(resp. $X_Y^*$) over $k$ with structural sheaf equal to the seminormalization (resp. weak normalization) of $\SO_X$ in $\pi_*(\SO_Y)$.
We call $X^{+}$ (resp. $X^*$) the seminormalization (resp. weak normalization) of $X$ in its normalization $Y=X'$. We say that $X$ is seminormal in $Y$ (resp. seminormal) if $X=X^+_Y$ (resp. $X=X^+$). We say that $X$ is weakly normal in $Y$ (resp. weakly normal) if $X=X^*_Y$ (resp. $X=X^*$).
\end{defn}
\begin{rem} Note that the seminormalization of $X$ in $Y$ is birational to $X$, even if $Y\to X$ is not birational. This is not the case for the normalization of $X$ in $Y$, nor for the weak normalization of $X$ in $Y$. We have in general
$$Y\to X'_Y\to X^*_Y\to X^+_Y\to X.$$
\end{rem}
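The previous definitions can be made explicit on the classical plane cubics.
\begin{ex}
Let $k$ be an algebraically closed field of characteristic zero. For the cuspidal cubic $X$ with $k[X]=k[x,y]/(y^2-x^3)\simeq k[t^2,t^3]$, the normalization morphism $X'\to X$ is bijective and equiresidual, hence subintegral, so that $X^+=X'$ is the affine line and $X$ is not seminormal. For the nodal cubic $X$ with $k[X]=k[x,y]/(y^2-x^2(x+1))$, the normalization morphism is not injective, since two points of $X'$ lie over the node, and one checks that $k[X]^+=k[X]$: the nodal cubic is seminormal.
\end{ex}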
The seminormalization and the weak normalization of a variety in another one satisfy the following universal property:
\begin{prop}\label{propCSEPvariety}
Let $Y\to Z\to X$ be a sequence of dominant morphisms between algebraic varieties over $k$. Then $Z\to X$ is (resp. weakly) subintegral if and only if $X^+_Y\to X$ (resp. $X_Y^*\to X$)
factorizes through $Z$.
\end{prop}
\begin{proof}
It is a reformulation of the second part of Theorem \ref{thmT}.
\end{proof}
\section{Saturation}\label{sect-sat}
In the classical study of the seminormalization, some basic properties, such as its local nature, happen to be not so straightforward to prove. A nice algebraic approach was proposed by Manaresi \cite{Mana}, in the spirit of the relative Lipschitz saturation \cite{Lip}, via the saturation of a ring $A$ in another ring $B$ which is integral over $A$. The saturation coincides with the weak normalization when the ring extension is finite. We aim to study the properties of this saturation for more general extensions, and to establish its universal properties.
\subsection{Universal property of the saturation}
We define the saturation of a ring extension analogously to \cite{Mana}, but for extensions which are not necessarily integral.
\begin{defn}
Let $A\to B$ be an extension of rings. The saturation of $A$ in $B$, denoted by $\widehat{A}_B$, is defined by
$$\widehat{A}_B=\{b\in B\mid b\otimes_A1-1\otimes_A b\in {\rm{NilRad}}(\BAB) \}$$
where the nilradical $\NilRad$ denotes the ideal of nilpotent elements.
We say that $A$ is saturated in $B$ if $\widehat{A}_B=A$.
The saturation of $A$ is its saturation in $A'$ and it is simply denoted by $\widehat{A}$. We say that $A$ is saturated if $\widehat{A}=A$.
\end{defn}
Recall that the nilradical is the intersection of all prime ideals.
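Let us compute the saturation on the extension of the example of the previous section; it shows in particular that a ring can be seminormal in $B$ without being saturated in $B$.
\begin{ex}
Let $K$ be a field of characteristic $2$ and consider again the extension $A=K[x^2]\to B=K[x]$. In $\BAB$ we have
$$(x\otimes_A 1-1\otimes_A x)^2=x^2\otimes_A 1+1\otimes_A x^2=2\,(1\otimes_A x^2)=0,$$
since $-2(x\otimes_A x)=0$ in characteristic $2$ and $x^2\otimes_A 1=1\otimes_A x^2$ because $x^2\in A$. Hence $x\otimes_A 1-1\otimes_A x$ is nilpotent, so $x\in\widehat{A}_B$ and thus $\widehat{A}_B=B$. Since the extension is finite, this agrees with the equality $A^*_B=B$, whereas $A^+_B=A$.
\end{ex}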
In order to study the saturation, we need to understand better the relation between prime ideals in $A$ and $B$ and prime ideals in $\BAB$. For a ring extension $A\to B$, we introduce the notation
$\varphi_1$, $\varphi_2$ for the ring morphisms $\varphi_i:B\to \BAB$ defined by
\begin{equation}\label{eq-phi}
\varphi_1(b)= b\otimes_A 1 \textrm{~~~~~and~~~~~}\varphi_2(b)= 1\otimes_A b.
\end{equation}
The data of a prime ideal $\omega$ in $\BAB$, or more precisely the data of a morphism $g:\BAB\to k(\omega)$ with kernel $\omega$, is equivalent to the data of a $4$-tuple of prime ideals $$(\p_1,\p_2,\q,\p)\in\Sp B\times\Sp B\times\Sp A\times\Sp (k(\p_1)\otimes_{k(\q)}k(\p_2))$$
such that $$\p_1=\ker (g\circ \varphi_1),~~\p_2=\ker (g\circ \varphi_2),~~\q=\p_1\cap A=\p_2\cap A,~~k(\omega)=k(\p)$$ and such that the
composition $$\BAB\to k(\p_1)\otimes_{k(\q)}k(\p_2)\to k(\p)$$ coincides with $g$.
The saturation is a ring, compatible with inclusion.
\begin{lem}\label{lem-elem} Let $A\to B$ be an extension of rings.
\begin{enumerate}
\item $\widehat{A}_B$ is a subring of $B$ containing $A$.
\item If $A\subset C \subset B$, then $\widehat{A}_B \subset \widehat{C}_B$.
\end{enumerate}
\end{lem}
\begin{proof}
\begin{enumerate}
\item The set $\widehat{A}_B$ is an $A$-module as the kernel of the $A$-module morphism $$B\xrightarrow{\varphi_1-\varphi_2} \dfrac{\BAB}{{\rm{NilRad}}(\BAB)}.$$
The stability under product comes from the identity
$$b_1b_2\otimes_A 1-1\otimes_A b_1b_2=(b_1\otimes_A 1)(b_2\otimes_A 1-1\otimes_A b_2)+(1\otimes_A b_2)(b_1\otimes_A 1-1\otimes_A b_1)$$
and the fact that ${\rm{NilRad}}(\BAB)$ is an ideal.
\item The image of a nilpotent element under the ring morphism $\BAB \to B\otimes_C B$ remains nilpotent.
\end{enumerate}
\end{proof}
In order to give a universal property of the saturation, we recall the notion of radicial extension introduced by Grothendieck \cite[I, def. 3.7.2]{Gr1}. We also introduce a notion of radicial sequence of extensions similarly to \cite{Mnew}, due to the lack of integrality of the ring extensions.
\begin{defn}
\begin{enumerate}
\item An extension of rings $A\to B$ is said to be radicial if $\Sp B\to\Sp A$ is injective and residually purely inseparable.
\item A sequence of extensions $A\to C\to B$ of rings is said to be radicial if the restriction of $\Sp C\to\Sp A$ to the image of $\Sp B\to\Sp C$ is injective and residually purely inseparable.
\end{enumerate}
\end{defn}
\begin{rem} An extension (resp. a sequence of extensions) of fields $K\to K'$ (resp. $K\to K'\to K''$) is radicial if and only if $K\to K'$ is purely inseparable.
\end{rem}
\vskip 2mm
The saturation furnishes radicial sequences of extensions.
\begin{prop}\label{prop-sat} Let $A\to B$ be a ring extension. For any intermediate ring $C$ with $A\subset C\subset \widehat A_B$, the sequence $A\to C\to B$ is radicial.
\end{prop}
Before entering the proof, we state an elementary result about field extensions. Remark that it gives a proof that (2) implies (3) in Theorem \ref{PU1saturation} in the special case of a sequence of field extensions.
\begin{lem}\label{lem-field} Let $K\to K'\to K''$ be a non-radicial sequence of field extensions, i.e.\ such that $K\to K'$ is not purely inseparable. Then there are two $K$-morphisms $K''\to L$ into an (algebraically closed) field $L$ whose compositions with $K'\to K''$ are distinct.
\end{lem}
\begin{proof}
Since $K\to K'$ is not purely inseparable, it follows from \cite[Prop. I.3.7.1]{Gr1} that there are two distinct $K$-morphisms $\psi_1, \psi_2 : K'\to L'$ into a field $L'$. The point is to extend them to $K''$.
For $i\in\{1,2\}$, one can embed the field extensions $K'\to K''$ and $\psi_i: K' \to L'$ into a common extension $K'\to L'_i$ by amalgamation \cite[Chap 5, §4, Prop. 2]{Bour}. Denote by $\psi_i' :K''\to L'_i$ the induced extension.
By amalgamation of $L'_1$ and $L_2'$ over $K''$, one can assume that $\psi_1'$ and $\psi_2'$ take values in a common field $L$.
The morphisms $\psi_1',\psi_2': K'' \to L$ fulfil the requirements since the restriction of $\psi_i'$ to $K'$ coincides with $\psi_i$.
\end{proof}
\begin{rem} It is classical that one can choose $L=L'$ in the proof of Lemma \ref{lem-field} if the extension $K'\to K''$ is moreover algebraic, and this is used in \cite{Lip} to prove that the Lipschitz saturation is stable under contraction: in the setting of Proposition \ref{prop-sat}, if $C\to B$ is integral, then the Lipschitz saturation of $A$ in $C$ is equal to the intersection of $C$ with the Lipschitz saturation of $A$ in $B$. In our context the extensions are not assumed to be integral, and this contraction property does not hold, as illustrated by Example \ref{exVitdetail}.
\end{rem}
\begin{proof}[Proof of Proposition \ref{prop-sat}]
Let $\p_1$, $\p_2$ be two prime ideals of $B$ lying over the same ideal $\q$ of $A$. A first step is to prove that $\p_1$ and $\p_2$ lie over the same ideal of $C$.
Let $\p$ be a prime ideal of $k(\p_1)\otimes_{k(\q)}k(\p_2)$, and $\omega=(\p_1,\p_2,\q,\p)\in\Sp (\BAB)$ be the corresponding element, coming with a morphism $g:\BAB\to k(\omega)$ with $\ker g=\omega$. For $c\in\p_1\cap C$, we have $g\circ \varphi_1(c)=g(c\otimes_A 1)=0$ by construction of $g$. The element $\cAc$ is nilpotent in $\BAB$ by assumption, so that
$$0=g(\cAc)=g\circ \varphi_1(c)-g\circ \varphi_2(c).$$
As a consequence $g\circ \varphi_2(c)=0$ and thus $c\in\p_2\cap C$. By symmetry we obtain $$\p_1\cap C= \p_2\cap C.$$
The second step is to prove that the extension $\phi:k(\q)\to k(\p_1\cap C)$ is purely inseparable, where $\q=\p_1\cap A$. Assume by contradiction that $\phi$ is not purely inseparable, and consider the composition
$$k(\q)\xrightarrow{\phi} k(\p_1\cap C)\rightarrow k(\p_1).$$
By Lemma \ref{lem-field}, there are two distinct $k(\q)$-morphisms $\psi_1,\psi_2:k(\p_1)\to L$ into some field $L$, which remain distinct by restriction to $k(\p_1\cap C)$. Thus there exists $c\in C$ such that $\psi_1\circ\pi (c)\not=\psi_2\circ\pi (c)$, where $\pi:B\to k(\p_1)$ denotes the natural morphism.
The morphisms $\psi_1$ and $\psi_2$ induce a morphism $\psi:k(\p_1)\otimes_{k(\q)}k(\p_1)\to L$ given by
$$\psi(\pi(b_1)\otimes_{k(\q)} \pi(b_2))=\psi_1\circ \pi(b_1)\cdot\psi_2\circ \pi(b_2)$$
for $b_1,b_2\in B$.
The kernel $\p$ of $\psi$ gives rise to a prime ideal $\omega=(\p_1,\p_1,\q,\p)$ of $\BAB$ coming with a morphism
$$g:\BAB\to k(\omega)\to L.$$
By our choice of $c$, the element
$$\psi(\pi(c)\otimes 1-1\otimes \pi(c))=\psi_1\circ\pi (c)-\psi_2\circ\pi (c)$$
is not zero, so that $\cAc$ does not belong to $\ker g=\omega$, contradicting the inclusion $C\subset \widehat{A}_B$.
\end{proof}
Actually the converse of the preceding result holds true, and it gives rise to universal properties of the saturation in terms of radicial sequences of extensions. This result is, to our knowledge, not present in the literature.
\begin{thm}
\label{PU1saturation}
Let $A\xrightarrow{i} C\xrightarrow{j} B$ be a sequence of extensions of rings.
The following properties are equivalent:
\begin{enumerate}
\item For any field $K$, the map
$$\Sp(j)\circ \Mor(\Sp K, \Sp B)\to \Mor(\Sp K, \Sp A)$$
$$\Sp(j)\circ \alpha\mapsto \Sp(i)\circ\Sp(j)\circ \alpha$$ is injective.
\item For any field $K$, if $\psi_1:B\to K$ and $\psi_2:B\to K$ are two morphisms which are distinct after composition with $j$, then they remain distinct after composition with $j\circ i$.
\item The sequence $A\to C\to B$ is radicial.
\item $j(C)\subset \widehat{A}_B$.
\item The image in $\BAB$ of the kernel of the morphism $\CAC\to C$ defined by $c_1\otimes_A c_2\mapsto c_1c_2$ is included in the nilradical of $\BAB$.
\end{enumerate}
\end{thm}
\begin{proof}
The equivalence between (1) and (2) is straightforward.
Since $\ker (\CAC\to C)$ is generated by the elements of the form $c\otimes_A1-1\otimes_A c$ for $c\in C$, we get (4) $\Leftrightarrow$ (5). Note that (4) implies (3) by Proposition \ref{prop-sat}.
\vskip 2mm
Let us prove that (3) implies (2) by contraposition. Let $\psi_1:B\to K$ and $\psi_2:B\to K$ be two morphisms into a field $K$ such that $\psi_1\circ j\not=\psi_2\circ j$ and
$\psi_1\circ j\circ i=\psi_2\circ j \circ i$. Let $\p_1$, $\p_2$ and $\q$ denote respectively the kernels of $\psi_1$, $\psi_2$ and $\psi_1\circ j\circ i:A\to K$. For $i=1,2$ we get the following commutative diagram:
$$\begin{array}{ccccccc}
A&\xrightarrow{i} & C &\xrightarrow{j} & B & \xrightarrow{\psi_i} & K\\
\downarrow&&\downarrow&& \downarrow & \nearrow & \\
k(\q)&\rightarrow & k(\p_i\cap C) & \rightarrow& k(\p_i) & &\\
\end{array}$$
If $\p_1\cap C$ is not equal to $\p_2\cap C$, then $\Sp C\to \Sp A$ is not injective on the image of $\Sp B$.\\
If $\p_1\cap C$ is equal to $\p_2\cap C$, then $\psi_1$ and $\psi_2$ induce two different $k(\q)$-morphisms $\psi'_i: k(\p_1\cap C) \to K$ since $\psi_1\circ j\not=\psi_2\circ j$. From \cite[Prop. I.3.7.1]{Gr1}, the extension $k(\q)\to k(\p_1\cap C)$ cannot be purely inseparable.
In both cases, the extension $A\to C\to B$ is not radicial.
\vskip 2mm
Finally we prove that (2) implies (4) by contraposition. If (4) fails, there are $c\in C$ and $\omega \in \Sp \BAB$ such that $\cAc \notin \omega$. The ideal $\omega$ comes with a morphism $g:\BAB \to K$ into a field $K$ with $\ker g=\omega$. Consider the compositions of $g$ with the morphisms $\varphi_1$ and $\varphi_2$ defined in \eqref{eq-phi}. By construction, $g\circ \varphi_1:B\to K$ coincides with $g\circ \varphi_2:B\to K$ when composed with $j\circ i$, but not when composed with $j$, because $g\circ \varphi_1 (j(c))\neq g\circ \varphi_2(j(c))$. This contradicts (2).
\end{proof}
If we focus on the particular case of radicial extensions rather than sequences, we recover \cite[Prop. I.3.7.1]{Gr1}, with an additional condition using the saturation.
\begin{prop}
\label{radicial}
Let $i:A\to B$ be an extension of rings and $\Sp(i):\Sp B\to\Sp A$ be the associated map.
The following properties are equivalent:
\begin{enumerate}
\item For any field $K$, the map $$\Mor(\Sp K, \Sp B)\to \Mor(\Sp K, \Sp A)$$
$$\alpha\mapsto \Sp(i)\circ\alpha$$ is injective.
\item If $\psi_1:B\to K$ and $\psi_2:B\to K$ are two distinct morphisms then the compositions $\psi_1\circ i$ and $\psi_2\circ i$ are different.
\item $i:A\to B$ is radicial.
\item $B=\widehat{A}_B$.
\item The kernel of the morphism $\BAB\to B$ defined by $b_1\otimes_A b_2\mapsto b_1b_2$ is included in the nilradical of $\BAB$.
\end{enumerate}
\end{prop}
\begin{proof}
This is a direct consequence of Theorem \ref{PU1saturation}, using the fact that an extension $A\to B$ is radicial if and only if the sequence of extensions $A\to B\to B$ is so.
\end{proof}
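The Frobenius furnishes the basic example of a radicial extension which is not an isomorphism.
\begin{ex}
Let $k$ be a field of characteristic $p>0$ and consider the extension $A=k[x^p]\to B=k[x]$. The map $\Sp B\to\Sp A$ is bijective and every residue field extension is purely inseparable, so that $A\to B$ is radicial. In accordance with Proposition \ref{radicial}, we have $\widehat{A}_B=B$: for any $b\in B$, the element $b\otimes_A 1-1\otimes_A b$ satisfies
$$(b\otimes_A 1-1\otimes_A b)^p=b^p\otimes_A 1-1\otimes_A b^p=0,$$
since $b^p\in A$, hence it is nilpotent.
\end{ex}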
\subsection{Saturation for varieties}
We begin with the definition of radiciality and saturation for morphisms \cite[Chap. I,3.7.2]{Gr1}. Then, we extend these definitions to sequences of morphisms.
\begin{defn}
\begin{enumerate}
\item Let $\pi:Y\to X$ be a dominant morphism between algebraic varieties over $k$.
We say that $\SO_X\to \pi_*\SO_Y$ is radicial if for any open subset $U\subset X$ the extension $\SO_X (U)\to\SO_Y (\pi^{-1}(U))$ is radicial. In this situation, we say that $\pi$ is radicial.
\item Let $Y\stackrel{\phi}{\to} Z\stackrel{\psi}{\to} X$ be a sequence of dominant morphisms between algebraic varieties over $k$. We say that $\SO_X\to \psi_*\SO_Z\to (\psi\circ\phi)_*\SO_Y$ is radicial if for any open subset $U\subset X$ the sequence of extensions $\SO_X (U)\to \SO_Z (\psi^{-1}(U))\to\SO_Y ((\psi\circ\phi)^{-1} (U))$ is radicial.
In this situation, we say that the sequence of morphisms $Y\to Z\to X$ is radicial.
\item We say that $X$ is saturated in $Y$ if $\SO_X$ is saturated in $\pi_*\SO_Y$,
i.e., for any open subset $U\subset X$, the ring $\SO_X(U)$ is saturated in $\SO_Y(\pi^{-1}(U))$.
\end{enumerate}
\end{defn}
\begin{rem}
\label{defsatvar}
\begin{enumerate}
\item Let $\pi:Y\to X$ be a dominant morphism between algebraic varieties over $k$. Then, $\pi$ is radicial if and only if $\pi$ is injective and residually purely inseparable.
\item Let $Y\stackrel{\phi}{\to} Z\stackrel{\psi}{\to} X$ be a sequence of dominant morphisms between algebraic varieties over $k$. Then, $Y\to Z\to X$ is radicial if and only if $\psi$ is injective and residually purely inseparable by restriction to the image of $\phi$.
\end{enumerate}
\end{rem}
In order to translate the universal property of the saturation in terms of varieties, we recall the notion of universal injectivity from \cite[Chap. I, 3.4.3]{Gr1}.
\begin{defn}
\begin{enumerate}
\item A morphism $\pi:Y\to X$ between algebraic varieties over $k$ is said to be universally injective if for any field extension $k\to K$, the map $\pi_K:Y(K)\to X(K)$ is injective.
\item A sequence of morphisms $Y\to Z\to X$ between algebraic varieties over $k$ is said to be universally injective if for any field extension $k\to K$, the map $Z(K)\to X(K)$ is injective by restriction to the image of $Y(K)\to Z(K)$.
\end{enumerate}
\end{defn}
Grothendieck \cite[Prop. 3.7.1]{Gr1} proved that the notions of radicial and universally injective morphisms coincide.
The universal property given in Theorem \ref{PU1saturation} implies that it is also the case if we consider sequences of morphisms rather than morphisms.
\begin{prop}\label{PU1saturationvar}
Let $Y\stackrel{\phi}{\to} Z\stackrel{\psi}{\to} X$ be a sequence of dominant morphisms between algebraic varieties over $k$. The following properties are equivalent:
\begin{enumerate}
\item $Y\to Z\to X$ is universally injective.
\item $Y\to Z\to X$ is radicial.
\item $\psi_*\SO_Z\subset (\widehat{\SO_X})_{(\psi\circ\phi)_*\SO_Y}$, i.e., for any open subset $U\subset X$ we have $\SO_Z (\psi^{-1}(U))\subset\widehat{\SO_X (U)}_{\SO_Y ((\psi\circ\phi)^{-1} (U))}$.
\end{enumerate}
\end{prop}
\begin{rem} Let $\pi:Y\to X$ be a dominant morphism between affine algebraic varieties over $k$. In contrast with the seminormalization case, it is not clear whether $\widehat{k[X]}_{k[Y]}$ is a finitely generated $k$-algebra, and hence whether it is the coordinate ring of a variety.
\end{rem}
As a consequence of the previous remark, the statement ``$\pi$ is subintegral if and only if $X^+_Y=Y$'' has no analogue when $\pi$ is radicial. Nevertheless we get:
\begin{cor}
\label{PU2saturationvar}
Let $\pi:Y\to X$ be a dominant morphism between algebraic varieties over $k$. Then $\pi$ is radicial if and only if
$(\widehat{\SO_X})_{\pi_*\SO_Y}=\pi_*\SO_Y$.
\end{cor}
\section{Comparison between saturation, seminormalization and weak normalization} \label{comparaison}
In general, the seminormalization and the weak normalization are only included in the saturation.
\begin{lem} \label{satetsemi}
Let $A\to B$ be an extension of rings. Then $$A^+_B\subset A^*_B\subset \widehat{A}_B.$$
\end{lem}
\begin{proof}
We already know that $A^+_B\subset A^*_B$.
Since $A\to A^*_B$ is weakly subintegral, the sequence $A\to A^*_B\to B$ is radicial. The inclusion $A^*_B\subset \widehat{A}_B$ then follows from Theorem \ref{PU1saturation}.
\end{proof}
If $A\to B$ is a purely inseparable extension of fields which is not an isomorphism, then we get $A^+_B\subset A^*_B= \widehat{A}_B$ and the first inclusion is strict. So in the sequel we focus on the comparison between weak normalization and saturation.
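For concreteness (this standard example is only meant as an illustration), take a prime $p$ and the purely inseparable extension $A=\mathbb{F}_p(t^p)\subset B=\mathbb{F}_p(t)$. A subintegral extension of fields is an isomorphism, so $A^+_B=A$, while $A\to B$ is integral and radicial, hence weakly subintegral, so that $A^*_B=\widehat{A}_B=B$.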
Note that there is no special relationship between the saturation and the relative normalization. However, saturation and weak normalization coincide when we restrict to integral extensions.
\begin{prop} \label{satetsemi2}
Let $A\to B$ be an integral extension of rings. Then $$A^*_B= \widehat{A}_B.$$
\end{prop}
\begin{proof}
The direct inclusion comes from Lemma \ref{satetsemi}.
The sequence $A\to \widehat{A}_B\to B$ is radicial by Theorem \ref{PU1saturation}. Since $\widehat{A}_B\to B$ is integral, $\Sp B\to \Sp \widehat{A}_B$ is surjective by Proposition \ref{lying-over}. It follows that $A\to \widehat{A}_B$ is radicial and integral, and thus weakly subintegral. This forces $\widehat{A}_B$ to be equal to the weak normalization $A^*_B$ of $A$ in $B$ by Theorem \ref{thmT}.
\end{proof}
Finally we state the relations between saturation, weak normalization and seminormalization for varieties induced by Lemma \ref{satetsemi} and Proposition \ref{satetsemi2}.
\begin{prop} \label{sat=semivar}
Let $\pi:Y\to X$ be a dominant morphism between varieties over $k$.
\begin{enumerate}
\item If $\pi$ is subintegral then $\pi$ is weakly subintegral.
\item If $\pi$ is weakly subintegral then $\pi$ is radicial.
\item If $X$ is saturated in $Y$, then $X$ is weakly normal and seminormal in $Y$.
\item If $\pi$ is moreover integral, then $\pi$ is weakly subintegral if and only if $\pi$ is radicial, and $X$ is saturated in $Y$ if and only if $X$ is weakly normal in $Y$.
\end{enumerate}
\end{prop}
In the following proposition and examples $k$ is an algebraically closed field and $\car (k)=0$.
We compare the notions of seminormality and relative seminormality.
\begin{prop} \label{semimprel}
Let $\pi:Y\to X$ be a dominant morphism between varieties over $k$. If $X$ is seminormal then $X$ is seminormal in $Y$.
\end{prop}
\begin{proof}
Suppose $X$ is not seminormal in $Y$. By Proposition \ref{propCSEPvariety} the morphism $\varphi:X_Y^+\to X$ is subintegral, factorizes $\pi$ and is not an isomorphism. Since $\varphi$ is birational and finite, the normalization map $X'\to X$ factors through $\varphi$. The subintegrality of $\varphi$ then contradicts the seminormality of $X$.
\end{proof}
The converse of Proposition \ref{semimprel} is false: take for example $Y=X$ with $X$ not seminormal.
From \cite[Rem. 1.4]{C}, we know that the notions of relative weak normalization and relative saturation differ in any characteristic when we do not consider integral extensions of rings and integral morphisms of varieties.
We end this section by providing explicit examples to illustrate that the notions of relative saturation and relative seminormalization do not coincide for varieties over an algebraically closed field of characteristic zero, in any dimension. The examples are built on Example \ref{exVitdetail}, constructed from a nodal curve, for which we offer two arguments: a simple geometric one, and a direct computational one used to construct the generalization to any dimension in Example \ref{exVitdetail2}.
\begin{ex} \label{exVitdetail}
\begin{enumerate}
\item Let $X$ be the nodal plane curve with coordinate ring $A=k[X]=k[x,y]/(y^2-x^2(x+1))$. Its normalization $X'$ has coordinate ring $A'=k[X']=k[x,z]/(z^2-(x+1))=A[y/x]$ and the inclusion $A\to A'$ is given by $(x,y)\mapsto (x,xz)$. Let $Y$ be defined by removing one of the two points $p=(0,1)$ and $q=(0,-1)$ of $X'(k)$ lying above the singular point of $X(k)$, say $p$. The coordinate ring of $Y$ is $$B=k[Y]=k[x,z,s]/(z^2-(x+1),s(z-1)-1)=A'[1/(z-1)]=A'[s]=A[y/x,s],$$
and we have a sequence of inclusions $A\to A'\to B$.
Then $A^+_B=A$ whereas $\widehat{A}_B=B$. For the first point, the variety $X$ is seminormal (see \cite{GT}), so $X$ is seminormal in $Y$ by Proposition \ref{semimprel}. For the second point, note that $A\to B$ is radicial because, for irreducible curves, the prime ideals correspond to the generic point and the closed points, and here $Y\to X$ is birational with $Y(k)\to X(k)$ bijective. As a consequence $\widehat{A}_B=B$ by Proposition \ref{radicial}.
\item We revisit the nodal curve example, proving the equality $\widehat{A}_B=B$ directly from the definition of the saturation. Keeping the previous notation, set $\alpha=(z\otimes_A 1)-(1\otimes_A z)$ and $\beta=(s\otimes_A 1)-(1\otimes_A s)$. It suffices to prove that $\alpha$ and $\beta$ are nilpotent elements of $\BAB$: indeed $\widehat{A}_B$ is then a ring containing $x$, $z$ and $s$, so that $\widehat{A}_B=B$.
Note that
$$\begin{array}{cccc}
x \alpha & = & x \big((z+1)\otimes_A 1-1\otimes_A(z+1)\big) & \\
&= & \big(x(z+1)\otimes_A 1\big) - \big(1\otimes_A x(z+1)\big) &\\
&= & \big((y+x)\otimes_A 1\big) - \big(1\otimes_A (y+x)\big) & \textrm{~~since~~} y+x=x(z+1) \textrm{~~in~~} B\\
&= & 0 &\\
\end{array}$$
hence
$$\begin{array}{cccc}
\alpha^2 & = & \alpha \big((z+1)\otimes_A 1-1\otimes_A(z+1)\big) & \\
& = & \alpha (xs\otimes_A 1-1\otimes_Axs) & \textrm{~~since~~} xs=z+1 \textrm{~~in~~} B\\
&= & x\alpha (s\otimes_A 1 - 1\otimes_A s) &\\
&= & 0. &\\
\end{array}$$
Actually we even have $\alpha=0$ in $\BAB$, since a straightforward computation shows that $\alpha=\frac1{4}\alpha^3$. Finally, using the equality $\alpha= (z-1)\otimes_A 1-1\otimes_A(z-1)$ and the relation $s(z-1)=1$, we observe that
$$0=(s\otimes_A 1)\alpha (1\otimes_A s) =-\beta.$$
\end{enumerate}
\end{ex}
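\begin{rem}
The straightforward computation behind the equality $\alpha=\frac{1}{4}\alpha^3$ in Example \ref{exVitdetail} can be spelled out as follows. Since $z^2=x+1$ lies in $A$, we get $\alpha^2=2(x+1)-2(z\otimes_A z)$ and $(z\otimes_A z)\alpha=-(x+1)\alpha$, so that
$$\alpha^3=2(x+1)\alpha-2(z\otimes_A z)\alpha=4(x+1)\alpha=4\alpha,$$
the last equality using $x\alpha=0$.
\end{rem}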
\begin{ex}\label{exVitdetail2} Consider the curves $X$ and $Y$ as in Example \ref{exVitdetail}. For $n\geq 1$, the variety $X\times\Af_k^n$ is seminormal in $Y\times\Af_k^n$, whereas the saturation of $X\times\Af_k^n$ in $Y\times\Af_k^n$ is $Y\times\Af_k^n$.
To see this, note that $X$ and $\Af_k^n$ are seminormal, so $X\times \Af_k^n$ is also seminormal \cite[Cor. 5.9]{GT} and thus $X\times \Af_k^n$ is seminormal in $Y\times\Af_k^n$ by Proposition \ref{semimprel}.
For the saturation, while the radiciality of $$k[X\times\Af_k^n]= A[t_1,\ldots,t_n] \to B[t_1,\ldots,t_n]=k[Y\times\Af_k^n]$$ is not as straightforward as in Example \ref{exVitdetail} (1), since we no longer work with curves, the computations done in Example \ref{exVitdetail} (2) still prove that $\widehat{A[t_1,\ldots,t_n]}_{B[t_1,\ldots,t_n]}$ contains $x,z$ and $s$, and so is equal to $B[t_1,\ldots,t_n]$.
\end{ex}
\section{Strong topology on the rational closed points and regulous functions}\label{regulous}
Continuous rational functions and regulous functions were originally studied in real algebraic geometry \cite{Ku,KN,FHMM}, where continuity is taken with respect to the Euclidean topology, which can be studied algebraically via semialgebraic open sets \cite{BCR}. For an algebraic variety $X$ over $\C$, we can consider the Euclidean (or strong) topology on the complex points $X(\C)$, seen as a topological variety (see \cite{Sha2}). For example, if $X$ is affine then $X\subset \mathbb A_{\C}^n$ for some $n\in\N$ and the strong topology is induced by the natural inclusion $X(\C)\subset \R^{2n}$. With this point of view, Bernard has developed in \cite{Be} the theory of regulous functions for algebraic varieties over $\C$.
Subintegral extensions and regulous functions are strongly related in real algebraic geometry as developed in \cite{FMQ2}.
Working with the field of complex numbers, we know from the work of Bernard that the same holds true in the geometric case. The purpose of this section is first to generalize the work of Bernard to varieties over any algebraically closed field of characteristic zero, and then to relate it to the theory of relative seminormalization.
\vskip 2mm
In this section $k$ is an algebraically closed field and $\car (k)=0$.
\subsection{Generalizing the strong topology of $\C$}\label{sect-R}
Since there is a priori no natural strong topology on the $k$-rational points of a variety over $k$, we use the theory of real closed fields to define such a topology as in \cite{K,HK} (see also \cite{BW} for a recent cohomological use of this approach).
\vskip 2mm
From Artin--Schreier theory \cite{AS}, we know the existence of (many) real closed subfields of $k$ with algebraic closure equal to $k$. Let $R\subset k$ denote one of these real closed fields. Then $k=R[\sqrt{-1}]$ and $R$ carries a unique ordering. The ordering on $R$ gives rise to an order topology on the affine spaces $R^n$, in a way similar to the Euclidean topology on $\R^n$, even though the topological space $R$ is not connected (except in the case $R=\R$) and the closed interval $[0,1]$ is in general not compact.
\vskip 2mm
We use this choice of $R$ to define a topology on the closed points of an algebraic variety over $k$. First, for an algebraic variety $X$ over $R$, choose an affine covering of $X$ by Zariski open subsets $U_i$, and endow each affine set $U_i(R)$ with the order topology. These open sets glue together to define the order topology on $X(R)$, and this topology does not depend on the choice of the covering. This topological space can be endowed additionally with the structure of a semialgebraic space by considering the sheaf of continuous semialgebraic functions \cite{DK2,DK}, or even of a real algebraic variety with the sheaf of regular functions on the $R$-points \cite{BCR,Hui}.
Consider now the case of a quasi-projective algebraic variety $X$ over $k$.
By Weil restriction \cite{W,GrFGA}, we associate to $X$ an algebraic variety $X_R$ over $R$ whose $R$-points are in bijection with the $k$-points of $X$.
We endow $X(k)$ with the topology induced by the order topology on $X_R(R)$, and we call it the $R$-topology on $X(k)$.
If $X$ is no longer quasi-projective, then the Weil restriction does not necessarily exist. In that case, choose an affine open covering $(U_i)_{i\in I}$ of $X$, endow the $R$-points of the Weil restrictions $(U_i)_R$ with the order topology, and note that these open sets glue together to define a topology on $X(k)$. This topology does not depend on the choice of the covering by \cite[Lemma 5.6.1]{Sc}, and we call it the $R$-topology on $X(k)$. Again one can consider $X(k)$ as a semialgebraic space in the sense of \cite{DK} or as a real algebraic variety in the sense of \cite{BCR}.
\vskip 2mm
The $R$-topology on $X(k)$ has many good properties, for instance $X(k)$ is semialgebraically connected and of pure dimension twice the dimension of $X$ if $X$ is irreducible \cite{K}. For $k=\C$ and $R=\R$, the $\R$-topology is nothing more than the strong topology.
The choice of a different real closed field $R$ in $k$ leads to different topologies on $X(k)$ (for instance, the semialgebraic fundamental group does depend on the choice of $R$ \cite{K}). Already for $k=\C$, one can choose a real closed field different from $\R$, for instance a non-Archimedean $R\subset \C$. We will see however that in our setting, the choice of the real closed field is immaterial.
\subsubsection{Basics on the $R$-topology of $k$-varieties}
In this section we fix a real closed field $R$ with algebraic closure $k$.
Let $X$ be a quasi-projective algebraic variety over $k$. Recall that by Weil restriction \cite{GrFGA,Sc}:
\begin{enumerate}
\item The variety $X_R$ is nonsingular if $X$ is nonsingular. More precisely, a $k$-point in $X$ is singular if and only if its corresponding $R$-point in $X_R$ is singular.
\item A Zariski open subset $U\subset X$ induces a Zariski open subset $U_R\subset X_R$.
\item A proper morphism $Y\to X$ between quasi-projective algebraic varieties over $k$ induces a proper morphism $Y_R\to X_R$.
\item A finite morphism $Y\to X$ between quasi-projective algebraic varieties over $k$ induces a finite morphism $Y_R\to X_R$.
\end{enumerate}
Let $X$ be an affine algebraic variety over $k$. A regular function on $X$ gives rise to a polynomial (and thus continuous) mapping $X_R(R)\to R^2$. Indeed the regular function is polynomial, and by Weil restriction a polynomial function to $k$ induces a polynomial mapping to $R^2$ by taking the real and imaginary parts. Finally a polynomial function is continuous with respect to the $R$-topology.
The topological behaviour of $R$-varieties coming from $k$-varieties is much tamer than that of general $R$-varieties. For instance, while complex irreducible varieties are locally of equal dimension, irreducible algebraic subsets of $\R^n$ may have isolated points. Following \cite{BCR}, a real algebraic variety is called central if its set of nonsingular points is dense with respect to the $R$-topology.
For a subset $A\subset X(k)$, we denote by $\overline{A}^R$ the closure of $A$ with respect to the $R$-topology. We denote by $\Reg(X(k))$ the set of nonsingular points of $X(k)$.
\begin{prop}\label{prop-cent} Let $X$ be an irreducible algebraic variety over $k$. Then $X(k)$ is central:
$$\overline{\Reg(X(k))}^R=X(k).$$
\end{prop}
\begin{proof}
The question being local, it suffices to assume $X$ is affine, and in particular the Weil restriction of $X$ exists.
If $X$ is nonsingular, then so is $X_R$ by Weil restriction, and so $X_R(R)$ is central.
Otherwise, consider a resolution $\sigma :\tilde X \to X$ of the singularities of $X$ which exists by \cite{Hiro} since $k$ has characteristic zero. Then $\sigma_k$ is surjective since $k$ is algebraically closed, and one can assume that $\sigma_k$ induces a bijection
$$\tilde U=\sigma_k^{-1}(\Reg(X(k)))\to \Reg(X(k))=U.$$
Let $x\in X(k)$, and choose a preimage $\tilde x\in \sigma_k^{-1}(x)$. The centrality of $\tilde X(k)$ implies the existence of a continuous semialgebraic curve $\tilde \gamma :[0,1]\to \tilde X(k)$ with $\tilde \gamma(0)=\tilde x$ and $\tilde \gamma(t)\in \tilde U$ for $t\in (0,1]\subset R$ by the Curve Selection Lemma \cite[Theorem 2.5.5]{BCR}. Its composition $\gamma=\sigma_k \circ \tilde \gamma$ is a continuous semialgebraic curve from $[0,1]$ to $X(k)$ with $\gamma(0)=x$ and $\gamma(t)\in U$ for $t\in (0,1]\subset R$. As a consequence $x$ belongs to the closure with respect to the $R$-topology of $\Reg(X(k))$ in $X(k)$, and so $X(k)$ is central.
\end{proof}
\begin{rem}
In particular, if the irreducible algebraic variety $X$ over $k$ has dimension $d$, then the local semialgebraic dimension of $X(k)$ at any point $x\in X(k)$ is equal to $2d$.
\end{rem}
The following result is not valid in general for algebraic varieties over $R$, and even for $R=\R$.
\begin{lem}\label{lem-dense} Let $X$ be an irreducible algebraic variety over $k$. A non-empty Zariski open subset of $X(k)$ is dense with respect to the $R$-topology.
\end{lem}
\begin{proof} The question being local, it suffices to assume $X$ is affine, and in particular the Weil restriction of $X$ exists.
A Zariski open subset remains Zariski open by Weil restriction. Combined with Proposition \ref{prop-cent}, it suffices to check that a non-empty Zariski open set $U$ in a central irreducible algebraic variety over $R$ is dense with respect to the $R$-topology. This last property is classical; for instance, the complement is an algebraic subset of strictly smaller dimension, and a semialgebraic triangulation of $X(k)$ adapted to the complement shows that, locally, a point in the complement lies in the boundary of a semialgebraic simplex in $U(k)$.
\end{proof}
Over a general real closed field, the notion of compact sets is advantageously replaced by closed and bounded semialgebraic sets. For instance, the image of a closed and bounded semialgebraic set by a continuous semialgebraic map is again closed and bounded (and semialgebraic) \cite[Theorem 2.5.8]{BCR}.
A semialgebraic map is said to be proper with respect to the $R$-topology if the preimage of a closed and bounded semialgebraic set is closed and bounded.
\begin{lem}\label{lem-surj} Let $\sigma:\tilde X\to X$ be a proper morphism between irreducible varieties over $k$. Then $\sigma_k$ is proper with respect to the $R$-topology. If $\sigma$ is moreover birational, then $\sigma_k$ is surjective.
\end{lem}
\begin{proof}
The notion of properness is local on the target, so that there is an affine covering of $X$ such that for any open affine subset $U$ in the covering, the restriction $\sigma'=\sigma_{|\sigma^{-1}(U)}$ of $\sigma$ to $\sigma^{-1}(U)$ is proper.
The properness of $\sigma'$ is kept by Weil restriction, so that $\sigma'_k$ is proper with respect to the $R$-topology by \cite[Theorem 9.6]{DK2}. Finally $\sigma_k$ is proper with respect to the $R$-topology since that notion of properness is local on the target too by \cite[Proposition 5.7]{DK}.
If $\sigma$ is birational, there are Zariski open sets $\tilde U\subset \tilde X$ and $U\subset X$ such that $\sigma_{|\tilde U}$ is a bijection onto $U$. Then
$$U(k)= \sigma_k({\tilde U(k)})\subset \sigma_k(\overline{\tilde U(k)}^R)=\sigma_k(\tilde X(k)),$$
the right hand side equality coming from Lemma \ref{lem-dense}. Finally $\sigma_k(\tilde X(k))$ is closed for the $R$-topology by properness of $\sigma_k$, so that taking the closure with respect to the $R$-topology gives the result by Lemma \ref{lem-dense}.
\end{proof}
\begin{lem}\label{lem-fini} Let $\sigma:\tilde X\to X$ be a finite morphism between algebraic varieties over $k$. Then $\sigma_k$ is closed with respect to the $R$-topology.
\end{lem}
\begin{proof} By \cite[Theorem 4.2]{DK2}, a finite morphism between algebraic varieties over $R$ is closed with respect to the $R$-topology. The result follows since finiteness is local and Weil restriction preserves finite morphisms.
Alternatively when $X$ and $\tilde X$ are irreducible, a finite morphism is proper, and apply Lemma \ref{lem-surj}.
\end{proof}
\subsection{Subintegrality and homeomorphisms}
We are now in a position to generalize the characterization of subintegrality via homeomorphisms, as in \cite[Thm. 3.1]{Be}, to any algebraically closed field of characteristic zero. We also add a radicial property to the equivalences.
\begin{thm}\label{thmFB1}
Let $\pi:Y\to X$ be a finite morphism between
algebraic varieties over $k$. The following
properties are equivalent:
\begin{enumerate}
\item $\pi$ is subintegral.
\item $\pi_{k}$ is bijective.
\item $\pi_{k}$ is a homeomorphism for the $R$-topology.
\item $\pi_{k}$ is a homeomorphism for the Zariski topology.
\item $\pi$ is a homeomorphism.
\item $\pi$ is radicial.
\end{enumerate}
\end{thm}
\begin{proof}
The equivalence between (4) and (5) is given by the Nullstellensatz. Using the Nullstellensatz and Proposition \ref{lying-over}, and proceeding similarly to Bernard's proof of \cite[Thm. 3.1]{Be}, we get the equivalence between (1), (2) and (5).
It is clear that (3) implies (2). The proof that (2) implies (3) is a direct consequence of Lemma \ref{lem-fini}. Indeed assuming (2), the map $\pi_k$ admits an inverse, and this inverse is continuous with respect to the $R$-topology since $\pi_k$ is a closed map by Lemma \ref{lem-fini}.
The equivalence between (1) and (6) is given by Proposition \ref{sat=semivar}.
\end{proof}
\subsection{Regulous functions on the $k$-rational points}\label{sect-CR}
We fix a real closed field $R$ such that $R[\sqrt{-1}]=k$.
\subsubsection{Characterization of continuous rational functions}
The choice of $R$ induces a topology, hence a notion of continuity. We define continuous rational functions on an algebraic variety over $k$ as follows.
\begin{defn}\label{def-rat-cont} Let $X$ be an algebraic variety over $k$. Let $U\subset X$ be an open subset of $X$. A continuous rational function on $U(k)$ is a function from $U(k)$ to $k$ which is the continuous extension to $U(k)$ of a rational function on $X$, when $X$ is endowed with the $R$-topology.
\end{defn}
This notion comes initially from real algebraic geometry \cite{Ku,KN,FHMM}, and has also been studied in complex algebraic geometry \cite{Be}. In the setting of Definition \ref{def-rat-cont}, it depends a priori on the choice of $R$.
A rational function that is continuous with respect to the $R$-topology can be characterized by the fact that it becomes regular after applying a relevant proper birational map.
\begin{prop} Let $X$ be an algebraic variety over $k$. Let $f:X(k)\to k$ be an everywhere defined function, and assume that $f$ coincides with a regular function on a Zariski open subset of $X(k)$.
Then, $f$ is continuous with respect to the $R$-topology if and only if there is a proper birational map $\sigma:\tilde X\to X$ such that $f\circ \sigma_k :\tilde X(k)\to k$ is regular.
\end{prop}
\begin{proof}
Arguing similarly to \cite[Lem. 4.4]{Be}, we may assume $X$ is irreducible.
Assume $f$ to be continuous, and denote by $g$ the rational function on $X$ that coincides with $f$ on a Zariski open subset of $X(k)$. One can resolve the indeterminacy of the rational map $g$ by a sequence of blowings-up along nonsingular centers, giving rise to a proper birational morphism $\sigma:\tilde X\to X$ such that $g\circ \sigma_k :\tilde X(k)\to \PP^1(k)$ is regular. The functions $f\circ \sigma_k$ and $g\circ \sigma_k$ are equal on a Zariski dense subset of $\tilde X(k)$, so they are equal on a subset dense with respect to the $R$-topology by Lemma \ref{lem-dense}. Therefore they coincide on $\tilde X(k)$ by continuity. As a consequence the regular function $g\circ \sigma_k$ takes its values in $k$ rather than in $\PP^1(k)$.
Conversely, let $C\subset k$ be a closed subset with respect to the $R$-topology. The set $(f\circ \sigma_k)^{-1}(C)$ is closed by continuity of $f\circ \sigma_k$, and its image under $\sigma_k$ is equal to $f^{-1}(C)$ by surjectivity of $\sigma_k$ via Lemma \ref{lem-surj}. As a consequence $f^{-1}(C)$ is closed by properness of $\sigma_k$ with respect to the $R$-topology, thanks to Lemma \ref{lem-surj} again.
\end{proof}
Note that the characterization of continuity given above, via a resolution of indeterminacy, does not refer to the choice of $R$. In particular, the continuity with respect to the $R$-topology of a rational function is independent of the choice of the real closed field $R$.
\vskip 2mm
Let $U\subset X$ be an open subset of $X$. The continuous rational functions on $U(k)$ are the sections of a presheaf of $k$-algebras on $X(k)$, denoted by $\K_{X(k)}^0$ in the sequel. Since $\K$ is a sheaf, and since the presheaf of locally continuous functions on $X(k)$ for the $R$-topology is also a sheaf for the Zariski topology, $\K_{X(k)}^0$ is a sheaf, called the sheaf of continuous rational functions. It makes $(X(k),\K_{X(k)}^0)$ a ringed space.
In case $X$ is affine then we simply denote by $\SR(X(k))$ the global sections of $\K_{X(k)}^0$ on $X(k)$.
A dominant morphism $\pi:Y\to X$ between varieties over $k$ induces an extension $\K_{X(k)}^0\to (\pi_{k})_*\K_{Y(k)}^0$, hence a morphism
$(Y(k),\K_{Y(k)}^0)\to (X(k),\K_{X(k)}^0)$ of ringed spaces.
An important fact is that continuous rational functions on a normal variety are regular. More precisely, the next proposition says that continuous rational functions are integral over the regular ones, and thus are already regular if the variety is normal.
The set of indeterminacy points of a continuous rational function is related to the normal locus of the ambient variety.
\begin{prop}\label{prop-lieunorm} Let $X$ be an algebraic variety over $k$.
We have:
\begin{enumerate}
\item $\K_{X(k)}^0\subset (\pi_{k}^\prime)_* \SO_{X'(k)}$ where $\pi':X'\to X$ is the normalization map.
\item If $x$ is a normal closed point of $X(k)$ then $\K_{X(k),x}^0=\SO_{X,x}$.
\end{enumerate}
\end{prop}
\begin{proof}
The properties are local so we may assume $X$ is affine.
The original proof in \cite[Proposition 4.7]{Be} uses only one argument specific to the complex setting, namely the density with respect to the strong topology of a Zariski dense open set, which can be replaced by Lemma \ref{lem-dense}. Note that the Hartogs Lemma used in the proof is valid over $k$: if $X$ is normal then the restriction map $\SO(X(k))\to \SO(\Reg(X(k)))$ is surjective since $$\dim(X(k)\setminus \Reg(X(k)))\leq \dim (X(k))-2$$
by \cite[p. 124]{Iit}.
\end{proof}
In the remainder of this section, we show how seminormalization and continuous rational functions are related for varieties over $k$. We begin with a description of the regular functions on the relative seminormalization in terms of the regular functions on the relative normalization.
\begin{prop}
\label{constant}
Let $Y\to X$ be a dominant morphism between algebraic varieties over $k$. Let $U$ be an open subset of $X$. Then
$$\SO_{X^{+}_Y}((\pi^+)^{-1}(U))=\{ f\in \SO_{X'_Y}((\pi')^{-1}(U))\mid f \textrm{ is constant on the fibers of } \pi'_{k}\}$$
where $\pi':X'_Y\to X$ (resp. $\pi^+:X^+_Y\to X$) is the relative normalization (resp. relative seminormalization) morphism.
\end{prop}
\begin{proof}
By \cite[Cor. 3.7]{Be} (which is stated for affine $U$, but the whole of Section 3 there is valid for any open subset $U$), we have
$$\SO_{X^{+}_Y}((\pi^+)^{-1}(U))=\{f\in \SO_{X'_Y}((\pi')^{-1}(U))\mid \forall x\in U(k), \,\,f\in \SO_{X,x}+\JRad(\SO_{X'_Y,x})\}.$$
The radical being an intersection of maximal ideals, we see that the functions in $\SO_{X^{+}_Y}((\pi^+)^{-1}(U))$ correspond to the elements of $\SO_{X'_Y}((\pi')^{-1}(U))$ that are constant on the fibers of $\pi'_{k}$.
\end{proof}
We now characterize, in terms of continuous rational functions, the structure sheaf of the seminormalization of an algebraic variety over $k$ in another one, generalizing the main result in \cite{Be} to the relative seminormalization and to any algebraically closed field of characteristic zero. In order to state the result, we use the fiber product of two sheaf extensions.
\begin{thm}
\label{thmintclosregulu}
Let $\pi:Y\to X$ be a dominant morphism between algebraic varieties over $k$.
Then $$(\pi^+_{k})_*\SO_{X^{+}_Y(k)}=\K_{X(k)}^0\times_{(\pi_{k})_*\K_{Y(k)}^0}(\pi_{k})_*\SO_{Y(k)} $$
where $\pi^+:X^+_Y\to X$ is the relative seminormalization morphism.
\end{thm}
\begin{proof}
We may assume $X$ and $Y$ are affine and thus we want to prove that
$$k[X^{+}_Y]=\SR( X(k))\times_{\SR(Y(k))}k[Y],$$
where the right hand side stands for the fiber product of the rings.
Let $\pi':X_Y^\prime\to X$ be the relative normalization map.
We consider the following diagram $$\begin{array}{ccccc}
k[X]&\rightarrow & k[X_Y']&\rightarrow &k[Y]\\
\downarrow&&\downarrow&& \downarrow \\
\SR(X(k))&\rightarrow &\SR(X'_Y(k))& \rightarrow&\SR(Y(k))\\
\end{array}$$
where the horizontal maps from the top (resp. the bottom) are given by composition with respectively $\pi'$ and $Y\to X_Y^\prime$ (resp. $\pi_{k}^\prime$ and $Y(k)\to X_Y^\prime(k)$).
We have $k[X^{+}_Y]\subset k[Y]$ by definition. Since $\pi^+$ is subintegral, it follows from Theorem \ref{thmFB1} that $\pi^+_k$ is a homeomorphism with respect to the $R$-topology. Since $\pi^+$ is in addition birational, composition with $\pi_k^+$ gives an isomorphism between $\K^0(X(k))$ and $\K^0(X_Y^+(k))$. Therefore we get $k[X^{+}_Y]\subset \K^0(X_Y^+(k))=\K^0(X(k))$. In particular $k[X^{+}_Y]\subset\SR( X(k))\times_{\SR(Y(k))}k[Y]$.
Let us prove the converse inclusion. Let $f\in\SR(X(k))\times_{\SR(Y(k))}k[Y]$. The continuous rational function $f$ is integral over $k[X]$ by Proposition \ref{prop-lieunorm}, therefore $f\in k[X'_Y]$ since additionally $f\in k[Y]$. As a function on $X'_Y(k)$, $f$ is constant on the fibers of $X'_Y(k)\to X(k)$ since it induces a continuous function on $X(k)$. By Proposition \ref{constant}, we then obtain $f\in k[X^+_Y]$. This gives the reverse inclusion
$\SR(X(k))\times_{\SR(Y(k))}k[Y]\subset k[X^+_Y]$.
\end{proof}
A continuous rational function on a normal algebraic variety over $k$ is a regular function by Proposition \ref{prop-lieunorm}. The fact that the normalization of $X$ in $Y$ is not necessarily normal forces us to take the fiber product with $\SO_{Y(k)}$ in Theorem \ref{thmintclosregulu}. We state as a theorem the particular case $Y=X'$, in which the statement becomes much simpler by Proposition \ref{prop-lieunorm}: the ring of continuous rational functions is isomorphic to the ring of regular functions on the seminormalization. This generalizes the main result of \cite{Be} to any algebraically closed field of characteristic zero.
\begin{thm}
\label{caractK0}
Let $X$ be an algebraic variety over $k$. The ringed space $(X^+(k),\SO_{X^{+}(k)})$ is isomorphic to $(X(k),\K_{X(k)}^0)$.
\end{thm}
\begin{rem}
Since $X^+$ does not depend on the choice of the real closed field, Theorem \ref{caractK0} gives another way to check that the continuity of a rational function does not depend on the chosen real closed field.
\end{rem}
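To illustrate Theorem \ref{caractK0} on a concrete singular variety, let us spell out the classical example of the cuspidal cubic; the computation below is standard and is included only for the reader's convenience.
\begin{ex}
Let $X\subset\Af_k^2$ be the cuspidal cubic defined by $y^2=x^3$ and let $f=y/x\in\K(X)$. The normalization $\nu:\Af_k^1\to X$, $t\mapsto(t^2,t^3)$, is finite, birational and bijective on $k$-points, hence subintegral, and $f\circ\nu_k=t$ away from the origin. Since $\nu_k$ is a homeomorphism for the $R$-topology by Theorem \ref{thmFB1}, the function $f$ extends continuously to $X(k)$ by setting $f(0,0)=0$, so $f\in\K^0(X(k))$, while $f\notin k[X]=k[t^2,t^3]$. In accordance with Theorem \ref{caractK0}, here $X^+=\Af_k^1$ and $f$ corresponds to the regular function $t$ on $X^+(k)$.
\end{ex}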
\subsubsection{Regulous functions and homeomorphisms}
A regulous function $f$ is a continuous rational function that remains rational upon restriction to any subvariety.
The first author proved \cite[Prop. 4.14]{Be} that this is always the case for complex varieties, in contrast with the real case \cite{KN}.
The following result asserts that, in our setting, continuous rational functions are always regulous.
\begin{cor}\label{cor-reg}
Let $X$ be an algebraic variety over $k$ and let $f\in\K^0(X(k))$. For any Zariski closed subset $V$ of $X$, the restriction $f_{\mid V(k)}$ belongs to $\K^0(V(k))$.
\end{cor}
\begin{proof}
The proof of \cite[Proposition 4.14]{Be} works verbatim using Theorem \ref{caractK0}.
\end{proof}
In the sequel, we also say \emph{regulous} for continuous rational.
\begin{defn}
Let $X$ and $Y$ be algebraic varieties over $k$. Let $E\subset X(k)$ and $F\subset Y(k)$ be two closed subsets for the $R$-topology.
We say that a map $h:E\to F$ is biregulous if it is a homeomorphism for the $R$-topology which is birational, i.e.\ there exist two dense Zariski open subsets $U_1\subset \overline{E}^Z$, $U_2\subset \overline{F}^Z$ and a birational map $g:\overline{E}^Z\to\overline{F}^Z$, with $g_{\mid U_1}:U_1\to U_2$ an isomorphism, such that $g=h$ by restriction to $U_1\cap E$.
In this situation, we say that $E$ and $F$ are biregulous or biregulously equivalent.
\end{defn}
If in this situation we have in addition $E=Y(k)$, $F=X(k)$ and $h$ is a homeomorphism for the Zariski topology, so that the morphism $\K_{X(k)}^0\to (h)_*\K_{Y(k)}^0$ is well-defined, then this morphism is an isomorphism, and thus the ringed spaces $(Y(k),\K_{Y(k)}^0)$ and $(X(k),\K_{X(k)}^0)$ are isomorphic.
The following theorem is an extended version of Theorem \ref{thmFB1} and it explains how subintegral extensions, continuous rational functions and biregulous morphisms are related.
It is a generalization of \cite[Thm. 3.1, Prop. 4.12]{Be} with an additional radiciality property.
\begin{thm}\label{thmFB}
Let $\pi:Y\to X$ be a finite morphism between
algebraic varieties over $k$. The following
properties are equivalent:
\begin{enumerate}
\item $\pi$ is subintegral.
\item $\pi_{k}$ is bijective.
\item $\pi_{k}$ is biregulous.
\item The ringed spaces $(Y(k),\K_{Y(k)}^0)$ and $(X(k),\K_{X(k)}^0)$ are isomorphic.
\item $\pi_{k}$ is a homeomorphism for the strong topology.
\item $\pi_{k}$ is a homeomorphism for the Zariski topology.
\item $\pi$ is a homeomorphism.
\item $\pi$ is radicial.
\end{enumerate}
\end{thm}
\begin{proof}
We already have the equivalence of (1), (2), (5), (6), (7) and (8) by Theorem \ref{thmFB1}.
Assuming $\pi$ to be subintegral, then it follows from Proposition \ref{propCSEPvariety}
that $X$ and $Y$ have the same seminormalization. So $\K_{X(k)}^0\to (\pi_k)_*\K_{Y(k)}^0$ is an isomorphism by Theorem \ref{caractK0}. It shows (1) implies (4).
We show that (4) implies (1) by contradiction. Assume $\pi_k$ is not bijective; we may suppose $X$ and $Y$ are affine.
We may then separate two different points $y,y'$ in the fibre $\pi^{-1}_k(x)$ of some $x\in X(k)$ by a regular function $f$ on $Y(k)$. But such a function is continuous with respect to the $R$-topology, and does not belong to the image of $\K^0(X(k))\to \K^0(Y(k))$ (given by composition with $\pi_k$) since it is not constant on the fibres of $\pi_k$.
A biregulous map is bijective, so (3) implies (2). Conversely, assuming (2), by (4) the inverse of $\pi_k$ is continuous rational, hence regulous by Corollary \ref{cor-reg}, so that $\pi_k$ is biregulous.
\end{proof}
Remark also that a morphism satisfying the conditions of Theorem \ref{thmFB} is bijective and thus automatically birational.
\vskip 2mm
We prove now that the seminormalization determines a variety up to biregulous equivalence. This result goes in the direction of the problems considered by Koll\'ar \cite{Ko}, \cite{KMOS}, \cite{Ce}.
Before that we need to prove that a biregulous map between algebraic varieties over $k$ is a homeomorphism for the Zariski topology.
Let $X$ be an algebraic variety over $k$ and let $E\subset X(k)$. We denote by $\overline{E}$ the closure of $E$ for the $R$-topology. Recall that a locally closed subset of $X$ or $X(k)$ is the intersection of a Zariski open subset with a Zariski closed subset and that a Zariski constructible subset is a finite union of locally closed subsets.
\begin{prop}
\label{biregZhomeo}
Let $X$ and $Y$ be algebraic varieties over $k$ and let $h:Y(k)\to X(k)$ be a biregulous map. Then $h$ is a homeomorphism for the Zariski topology.
\end{prop}
\begin{proof}
Let $Z$ be an irreducible closed subset of $Y(k)$. By restriction we get a biregulous map $Z\to h(Z)$. We claim that $h(Z)$ is a Zariski closed subset of $X(k)$. Since $h$ is biregulous, $h(Z)$ is closed in $X(k)$ for the $R$-topology. To prove the claim, we may thus assume that $\overline{h(Z)}^Z=X(k)$ and that $X$ is irreducible, and it remains to prove that $h:Z\to X(k)$ is surjective.
Since a regulous function $f$ on $Z$ is still regulous (and thus rational) by restriction, there exists a stratification of $Z$ into locally closed strata such that the restriction of $f$ to each stratum is regular (see \cite{FHMM} and \cite{Mnew}). It follows that $h(Z)$ is a Zariski constructible set, hence $h(Z)=\cup_{i=1}^n V_i\cap U_i$ where, for $i=1,\ldots,n$, $V_i$ (resp. $U_i$) is a Zariski closed (resp. open) subset of $X(k)$. Since $\overline{h(Z)}^Z=X(k)$ and $X$ is irreducible, we may assume $V_1=X(k)$.
We have $\overline{U_1}\subset \overline{h(Z)}=h(Z)$ and thus $h:Z\to X(k)$ is surjective by Lemma \ref{lem-dense}.
This shows that $h^{-1}$ is continuous for the Zariski topology. Similarly, $h$ is continuous for the Zariski topology.
\end{proof}
\begin{thm}
\label{QKoC}
Let $X$ and $Y$ be algebraic varieties over $k$. Then, $X(k)$ and $Y(k)$ are biregulously equivalent if and only if $X^+$ and $Y^+$ are isomorphic.
\end{thm}
\begin{proof}
Assume $h:Y(k)\to X(k)$ is a biregulous map. By Proposition \ref{biregZhomeo}, $h$ is a homeomorphism for the Zariski topology, and thus the ringed spaces $(Y(k),\K_{Y(k)}^0)$ and $(X(k),\K_{X(k)}^0)$ are isomorphic. By Theorem \ref{caractK0} it follows that $X^+$ and $Y^+$ are isomorphic.
For the converse implication, it suffices to note that $X^+(k)\to X(k)$ and $Y^+(k)\to Y(k)$ are biregulous morphisms.
\end{proof}
\begin{rem}
Note that in the statement of the previous theorem, we do not require that there is a morphism from $X$ to $Y$ nor from $Y$ to $X$. Consider a smooth irreducible curve $Y$ over $k$ without automorphisms (e.g.\ $\PP^1_{k}$ minus sufficiently many points). Let $P_1,P_2$ be two distinct points of $Y$. Let $X_1$ (resp. $X_2$) be the curve obtained from $Y$ by creating a cusp at $P_1$ (resp. $P_2$), i.e.\ the curve associated to the modulus $2P_1$ (resp. $2P_2$) (see \cite{Se}). We have $Y=X_1^+=X_2^+$, and thus $X_1(k)$ is biregulously equivalent to $X_2(k)$. However, there is no morphism from $X_1$ to $X_2$ (nor from $X_2$ to $X_1$): otherwise it would lift to a finite and birational morphism $Y\to Y$ sending $P_1$ to $P_2$ (see \cite[Ex. 5.5]{FMQ2}), and since $Y$ is normal this morphism would be an isomorphism, a contradiction.
\end{rem}
\section{Homeomorphisms between algebraic varieties}\label{sect-homeo}
In this section $k$ is an algebraically closed field and $\car (k)=0$. We fix a real closed field $R$ such that $R[\sqrt{-1}]=k$.
\vskip 1cm
Given a morphism $\pi:Y\to X$ between algebraic varieties over $k$, we look for algebraic conditions on $\pi$ that are respectively equivalent to the topological properties that $\pi_{k}$ is a homeomorphism for the $R$-topology and that $\pi$ is a homeomorphism for the Zariski topology.
In case $\pi$ is finite, we already know the answer. Indeed, by Theorem \ref{thmFB1} the two topological properties stated above are equivalent to each other, and moreover equivalent to the two algebraic conditions that $\pi$ is subintegral or that $\pi$ is radicial.
\begin{thm}\label{thmFB2}
Let $\pi:Y\to X$ be a finite morphism between
algebraic varieties over $k$. The following
properties are equivalent:
\begin{enumerate}
\item $\pi_{k}$ is a homeomorphism for the $R$-topology.
\item $\pi$ is a homeomorphism.
\item $\pi$ is subintegral.
\item $\pi$ is radicial.
\end{enumerate}
\end{thm}
In the sequel, we compare these four properties when we drop the finiteness hypothesis.
\vskip 2mm
After some generalities on the relations between homeomorphisms with respect to the Zariski topology, isomorphisms and normality, we provide a complete solution to this problem
for $R$-homeomorphisms. As consequences, we give a partial answer to the problem for Zariski homeomorphisms, and we provide statements that explain exactly when $R$-homeomorphisms and Zariski homeomorphisms are isomorphisms, in the spirit of Vitulli's result \cite[Thm. 2.4]{V2}.
\subsection{Bijection, birationality, homeomorphism}
We aim to compare the notions of bijection, birational morphism, homeomorphism with respect to the Zariski topology at the spectrum level, and homeomorphism with respect to the Zariski topology at the level of closed points.
\vskip 2mm
To begin with, recall that it follows from the Nullstellensatz that the property for a morphism to be a homeomorphism with respect to the Zariski topology is already decided at the level of closed points.
\begin{prop}
\label{NS}
Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$. Then $\pi$ is a homeomorphism if and only if $\pi_k$ is a homeomorphism with respect to the Zariski topology.
\end{prop}
Note that the previous result is still true if $\car(k)>0$.
It is clear that a homeomorphism induces a bijection at the level of $k$-rational points; the converse is false in general, as illustrated by Example \ref{exVit2} below.
Note however that the converse holds true for morphisms between irreducible algebraic curves. The irreducibility of the source space is crucial here: consider for instance the disjoint union of a point and a line minus a point, sent to a line. Even for morphisms between irreducible curves, however, a birational homeomorphism need not be an isomorphism, as illustrated by the normalization of the cuspidal curve with equation $y^2=x^3$.
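For the reader's convenience, we make the last assertion explicit; this computation is classical.
\begin{ex}
Let $X\subset\Af_k^2$ be the cuspidal curve of equation $y^2=x^3$. Its normalization is the morphism $\nu:\Af_k^1\to X$ given by $t\mapsto(t^2,t^3)$, which is finite, birational and bijective at the level of $k$-rational points, hence a homeomorphism by Theorem \ref{thmFB1}. It is however not an isomorphism: the induced inclusion $k[X]=k[t^2,t^3]\subset k[t]$ is strict, the function $t=y/x$ being integral over $k[X]$ but not regular on $X$.
\end{ex}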
An important contribution to these questions is the fact that bijectivity at the level of closed points implies birationality for irreducible varieties, by Zariski's Main Theorem.
\begin{prop}\label{prop-wk} Let $X$ and $Y$ be irreducible varieties over $k$. Then a morphism from $Y$ to $X$ inducing a bijection at the level of $k$-rational points is quasi-finite and birational. If in addition $X$ is normal, it is an isomorphism.
\end{prop}
The proof is classical, but we include it for the clarity of the exposition.
\begin{proof}
First note that $Y\to X$ is quasi-finite by \cite[Lem. 20.10]{STPmorph} and the Nullstellensatz.
By Grothendieck's form of Zariski Main Theorem, a quasi-finite morphism $\pi: Y\to X$ between irreducible algebraic varieties over $k$ factorizes into an open immersion $Y\to Z$ and a finite morphism $Z\to X$. So we identify $Y$ with an open subset of $Z$ and further assume that $Y$ is Zariski dense in $Z$.
Assume $\pi_{k}:Y(k)\to X(k)$ is bijective, so that $X$, $Y$ and $Z$ have the same dimension. Recall that the degree of the extension $\K(X)\to \K(Z)=\K(Y)$ is the cardinality of a generic fiber of $Z(k)\to X(k)$ by \cite[Thm. 7]{Sha}. Such a generic fiber lies in $Y(k)$: otherwise $\dim (Z\setminus Y)$ would be greater than or equal to $\dim X$, in contradiction with the density of $Y$.
Thus the finite morphism $Z\to X$ necessarily has degree one, so that $Z\to X$ is birational, and thus so is $\pi$.
Assuming in addition $X$ normal implies that $Z$ is isomorphic to $X$. The open immersion is surjective at the level of $k$-rational points and from the Nullstellensatz it follows that it is surjective,
thus an isomorphism.
\end{proof}
The following example shows that a morphism inducing a bijection at the level of $k$-rational points need not be a homeomorphism.
\begin{ex} \label{exVit2}
\begin{enumerate}
\item Consider the varieties of Example \ref{exVitdetail2}, for which we have an open immersion $\psi$ and a finite morphism $\phi$ as follows:
$$\pi : Y\times \Af_k^n \xrightarrow{\psi} X'\times \Af_k^n \xrightarrow{\phi} X\times\Af_k^n.$$
Even if $\pi_k$ is still bijective, the morphism $\pi$ is no longer a homeomorphism when $n>0$.
To see this, it suffices to consider the case $n=1$.
Denote by $O\in (X\times \Af_k^1)(k)$ the origin, and by $P=(0,1,0)$ and $Q=(0,-1,0)$ the two points in the fiber $\phi^{-1}(O)$, where the coordinates are $(x,z,t_1)$ in the notation of Example \ref{exVitdetail2}.
Let $C$ be the curve in $X'\times \Af_k^1$ obtained by intersection with the plane $x-z+t_1+1=0$ in $\Af_k^2\times\Af_k^1$. Note that $P\in C$, $Q\notin C$, and $C\setminus \{P\}$ is not a closed subset of $X'\times \Af_k^1$ but is a closed subset of $Y\times \Af_k^1$ since $Y=X'\setminus\{P\}$. If $\pi$ were a homeomorphism, then $\pi_{k}(C(k)\setminus \{P\})$ would be Zariski closed in $(X\times \Af_k^1)(k)$ by Proposition \ref{NS}. However $O$ lies in the closure of $\pi_{k}(C(k)\setminus\{P\})$ without belonging to it.
\item This example is also interesting to consider relatively to Grothendieck's notion of universal homeomorphism \cite[Defn. 3.8.1]{Gr2}. Recall that a morphism $Y\to X$ is a universal homeomorphism if $Y\times_X Z\to Z$ is a homeomorphism for any morphism $Z\to X$.
The morphism $Y\to X$ of Example \ref{exVitdetail} is a homeomorphism but not a universal homeomorphism. Indeed, let $Z=X\times \Af_k^1$ and consider the base change $Z\to X$ given by the first projection. We have already checked that $Y\times_X Z=Y\times \Af_k^1\to Z=X\times \Af_k^1$ is not closed.
\end{enumerate}
\end{ex}
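Let us also recall, as a complement and without proof, the classical algebraic characterization of universal homeomorphisms, which puts the preceding example in perspective.
\begin{rem}
It is a classical result of Grothendieck that a morphism of schemes is a universal homeomorphism if and only if it is integral, surjective and radicial. The morphism $Y\to X$ of Example \ref{exVitdetail} is radicial, surjective and a homeomorphism, but it is not integral, which explains why it fails to be a universal homeomorphism.
\end{rem}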
\subsection{Main results}
We focus now on the four properties appearing in Theorem \ref{thmFB2}. Namely, given a morphism $\pi:Y\to X$ between algebraic varieties over $k$, we consider the following properties:\\
- $\pi_k$ is a homeomorphism for the $R$-topology,\\
- $\pi$ is a homeomorphism,\\
- $\pi$ is subintegral,\\
- $\pi$ is radicial.\\
By Theorem \ref{thmFB2}, these four properties are equivalent when $\pi$ is finite. However, the open immersion $\Af_{k}^1\to \PP_{k}^1$ is radicial but not bijective. Since finite morphisms are surjective by Proposition \ref{lying-over}, if we do not assume $\pi$ to be finite we have to replace the last property above by:\\
- $\pi$ is radicial and surjective.\\
The equivalence between the four above properties is no longer true without the finiteness hypothesis. In particular, a homeomorphism with respect to the Zariski topology need not be a homeomorphism with respect to the $R$-topology, even for irreducible affine curves.
\begin{ex} \label{exVit3}
\begin{enumerate}
\item
Consider the morphism $\pi:Y\to X$ from Example \ref{exVitdetail}. We have already shown that $\pi$ is a homeomorphism and is radicial, but that $\pi$ is not subintegral since it is not an integral morphism.
The morphism $\pi_k$ is not a homeomorphism with respect to the $R$-topology. Indeed, consider a small open ball $B$ of $Y(k)$ containing the point that is sent to the singular point of $X(k)$ by $\pi_k$. Then the image of $Y(k)\setminus B$ is not closed.
\item Let $n>0$ and consider the morphism $\pi_n: Y\times\Af_k^n\to X\times \Af_k^n$ from Example \ref{exVitdetail2}. We have already proved that $\pi_n$ is radicial and surjective, but neither subintegral nor a homeomorphism.
Note that $(\pi_n)_k$ is not a homeomorphism with respect to the $R$-topology either, following the same proof as in (1).
\end{enumerate}
\end{ex}
In view of the foregoing discussion, the only equivalence which remains possible is between the properties for a morphism to be subintegral and to induce a homeomorphism for the $R$-topology. This is the subject of our main result.
The following two results measure the rigidity of the $R$-topology.
\begin{prop}\label{prop-fini} Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$. If $\pi_{k}$ is a homeomorphism with respect to the $R$-topology, then $\pi$ is finite.
\end{prop}
\begin{proof}
It is sufficient to assume that $Y$ and $X$ are irreducible. By Proposition \ref{prop-wk}, $\pi$ is quasi-finite.
By Grothendieck's form of Zariski Main Theorem, $\pi$ factorizes into an open immersion $g:Y\to Z$ and a finite morphism $h:Z\to X$. We consider $Y(k)$ embedded as an open subset of $Z(k)$ for the $R$-topology. We also assume $Y$ to be Zariski dense in $Z$, and thus $Y(k)$ is dense in $Z(k)$ for the $R$-topology by Lemma \ref{lem-dense}.
Since $\pi_{k}$ is bijective, the finite morphism $h$ is birational by Proposition \ref{prop-wk}. Moreover $h_{k}$ is surjective by Proposition \ref{lying-over}. Let us prove that $h_{k}$ is also injective. If not, there exist $y\in Y(k)$ and $z\in Z(k)\setminus Y(k)$ with $h_{k}(y)=h_{k}(z)$. Denote this point by $x\in X(k)$. Let $V_y$ be a closed neighborhood of $y$ in $Y(k)$ for the $R$-topology, and $V_z$ a closed neighborhood of $z$ in $Z(k)$ for the $R$-topology disjoint from $V_y$. Then $V_x=h_{k}(V_y)$ is a closed neighborhood of $x$ in $X(k)$ for the $R$-topology by assumption on $\pi_{k}$.
By the Curve Selection Lemma \cite[Thm. 2.5.5]{BCR}, there is a continuous semialgebraic curve $\gamma :[0,1)\to Z(k)$ with $\gamma(0)=z$ and $\gamma(0,1)\subset V_z\cap Y(k)$. Then $h_{k}\circ \gamma :[0,1)\to X(k)$ is a continuous semialgebraic curve with $ h_{k} \circ \gamma(0)=x$, so it meets $V_x\setminus\{x\}=h_{k}(V_y\setminus \{y\})$. As a consequence $V_y$ and $V_z$ cannot be disjoint because $\pi_{k}$ is bijective.
Therefore $h_{k}$ is bijective and thus $g_{k}$ is also bijective. The Nullstellensatz forces $g$ to be a bijective open immersion, thus an isomorphism. As a consequence $\pi$ is finite like $h$.
\end{proof}
\begin{cor}\label{cor-homeo} Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$. If $\pi_{k}$ is a homeomorphism with respect to the $R$-topology, then $\pi$ is a homeomorphism.
\end{cor}
\begin{proof}
The finiteness follows from Proposition \ref{prop-fini}. Since $\pi$ is moreover a bijection on the closed points, it is a homeomorphism by Theorem \ref{thmFB1}.
\end{proof}
Corollary \ref{cor-homeo} admits a converse, for varieties of dimension at least two.
\begin{prop} \label{equivhomeo} Let $\pi:Y\to X$ be a morphism between irreducible algebraic varieties over $k$ of dimension at least two. If $\pi$ is a homeomorphism, then $\pi_{k}$ is a homeomorphism with respect to the $R$-topology.
\end{prop}
\begin{proof}
By \cite[Theorem 2.2]{V2}, the morphism $\pi$ is finite (it is also birational by Proposition \ref{prop-wk}), so we conclude using Theorem \ref{thmFB1}.
\end{proof}
Proposition \ref{equivhomeo} is however not true for curves as illustrated by Example \ref{exVit3} (1).
\vskip 1cm
We are now able to prove the main result of the paper. We show that, for a given morphism between algebraic varieties over $k$, the topological property of being a homeomorphism for the $R$-topology does not depend on the chosen real closed field, since it is equivalent to the algebraic property of being subintegral.
\begin{thm}
\label{Euclhomeo}
Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$. The following properties are equivalent:
\begin{enumerate}
\item $\pi_{k}$ is a homeomorphism with respect to the $R$-topology.
\item $\pi$ is subintegral.
\item $X_Y^+=Y$.
\end{enumerate}
\end{thm}
\begin{proof}
From Theorem \ref{thmFB2} and Proposition \ref{propCSEPvariety} we know that (2) and (3) are equivalent and they imply (1).
Assume $\pi_{k}$ is a homeomorphism with respect to the $R$-topology. Then $\pi$ is finite by Proposition \ref{prop-fini} and thus $\pi$ is subintegral again by Theorem \ref{thmFB2}.
\end{proof}
Note that in Theorem \ref{Euclhomeo} we cannot replace the topological assumption (1) on $\pi_{k}$ by the assumption that $\pi_{k}$ is bijective, or even that $\pi$ or $\pi_k$ is a homeomorphism, as illustrated by Examples \ref{exVitdetail} and \ref{exVit3}.
As illustrated by Proposition \ref{prop-wk}, the normality of the target space plays a role in upgrading a bijection into an isomorphism. The next result, a direct consequence of Theorem \ref{Euclhomeo}, shows that relative seminormality is the correct notion to associate with a homeomorphism with respect to the $R$-topology in order to obtain an isomorphism.
\begin{cor}
\label{corEuclhomeo1}
Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$ such that $\pi_{k}$ is a homeomorphism with respect to the $R$-topology. Then $\pi$ is an isomorphism if and only if $X$ is seminormal in $Y$.
\end{cor}
We obtain an alternative version of \cite[Thm. 2.4]{V2}, where we replace the Zariski topology by the $R$-topology. In particular our statement is valid without any restriction on dimension.
\begin{cor}
\label{corEuclhomeo2} Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$. If $\pi_{k}$ is a homeomorphism with respect to the $R$-topology and $X$ is seminormal, then $\pi$ is an isomorphism.
\end{cor}
\begin{proof}
By Proposition \ref{semimprel}, $X$ is seminormal in $Y$, and the result follows from Corollary \ref{corEuclhomeo1}.
\end{proof}
We end the section with some results of a slightly different flavour. Forgetting about the $R$-topology, we now compare homeomorphisms with radicial and surjective morphisms.
We thus compare the two properties quoted at the beginning of the section, which do not appear in the equivalence of Theorem \ref{Euclhomeo}.
We may also wonder what the correct assumption to add to a homeomorphism is in order to obtain an isomorphism in the spirit of Corollary \ref{corEuclhomeo1}.
We start by proving that a homeomorphism between varieties over $k$ is radicial.
\begin{prop}
\label{Zhomeo}
Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$. If $\pi$ is a homeomorphism then $\pi$ is radicial.
\end{prop}
\begin{proof}
We may assume $X$ and $Y$ irreducible, since the irreducible components of $X$ and $Y$ are homeomorphic one-by-one.
Assume first that the dimension of $X$ and $Y$ is at least two. Then $\pi_{k}$ is a homeomorphism with respect to the $R$-topology by Proposition \ref{equivhomeo}, so $\pi$ is finite by Proposition \ref{prop-fini}. By Theorem \ref{Euclhomeo} we get that $\pi$ is subintegral, and thus radicial (Proposition \ref{sat=semivar}).
Assume now that $X$ and $Y$ are curves. Then $X$ and $Y$ are birational by Proposition \ref{prop-wk}. Since moreover $\pi_{k}$ is bijective, $\pi$ is bijective and equiresidual, hence radicial by definition.
\end{proof}
Contrary to Theorem \ref{Euclhomeo}, the converse implication in Proposition \ref{Zhomeo} is not valid, as Examples \ref{exVit2} (2) and \ref{exVit3} (2) show. Nevertheless, we get the analogue of Corollary \ref{corEuclhomeo1} with respect to the Zariski topology. It gives a generalization of \cite[Thm. 2.4]{V2} in any dimension for varieties over $k$.
\begin{cor}
\label{corZhomeo}
Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$ such that $\pi$ is a homeomorphism. Then $\pi$ is an isomorphism if and only if $X$ is saturated in $Y$.
\end{cor}
\begin{proof}
Assume $X$ is saturated in $Y$, the converse implication being trivial. By Proposition \ref{Zhomeo}, $\pi$ is radicial. Then $Y\stackrel{Id}{\to} Y\stackrel{\pi}{\to} X$ is a radicial sequence of morphisms.
We conclude that $\pi$ is an isomorphism by Proposition \ref{PU1saturationvar}.
\end{proof}
We get the analogue of Corollary \ref{corEuclhomeo2} with respect to the Zariski topology only in dimension $\geq 2$. It is a kind of reformulation of \cite[Theorem 2.4]{V2} in characteristic zero.
\begin{cor}
\label{corZhomeo2} Let $\pi:Y\to X$ be a morphism between irreducible algebraic varieties over $k$ of dimension at least $2$. If $\pi$ is a homeomorphism and $X$ is saturated, then $\pi$ is an isomorphism.
\end{cor}
\begin{proof}
By Propositions \ref{prop-fini}, \ref{equivhomeo} and \ref{Zhomeo}, $\pi$ is radicial and finite. It follows from Proposition \ref{sat=semivar} that $\pi$ is subintegral and $X$ is seminormal.
We conclude $\pi$ is an isomorphism from Corollary \ref{corEuclhomeo1}.
\end{proof}
\begin{rem} Example \ref{exVitdetail} shows that Corollary \ref{corZhomeo2} is false for curves.
\end{rem}
Note that, even though the statements of Proposition \ref{Zhomeo} and Corollaries \ref{corZhomeo} and \ref{corZhomeo2} do not mention the $R$-topology, it plays a crucial role in our proofs.
\section{Homeomorphisms versus isomorphisms in positive characteristic} \label{sect-carpos}
In this section $k$ is an algebraically closed field and $\car (k)=p>0$.
\vskip 2mm
One can wonder which statements of the previous section can be generalized to positive characteristic. Of course the $R$-topology no longer exists, and therefore one can only try to obtain versions of Proposition \ref{Zhomeo} and Corollaries \ref{corZhomeo} and \ref{corZhomeo2}. Note also that Proposition \ref{prop-wk} is no longer valid: indeed, the Frobenius map $\varphi:\Af_k^n\to\Af_k^n$ defined by $\varphi(x_1,\ldots,x_n)=(x_1^p,\ldots,x_n^p)$ is a homeomorphism but is not birational.
We start by giving a version of Proposition \ref{Zhomeo} in positive characteristic.
\begin{prop}
\label{Zhomeocarpos}
Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$. If $\pi$ is a homeomorphism then $\pi$ is radicial.
\end{prop}
\begin{proof}
Assume $\pi:Y\to X$ is a homeomorphism. We may assume $X$ and $Y$ irreducible, since the irreducible components of $X$ and $Y$ are homeomorphic one-by-one.
Assume first that the dimension of $X$ and $Y$ is at least two. By \cite[Thm. 2.2]{V}, $\pi$ is finite. It is clear that $\pi$ is bijective. Proceeding similarly to Bernard's proof of \cite[Thm. 3.1]{Be}, for any $y\in Y$ we get that the algebraic field extension $k(\pi(y))\to k(y)$ has degree one. It follows that $\pi$ is residually purely inseparable. So $\pi$ is weakly subintegral and thus radicial (Proposition \ref{sat=semivar}).
Assume now that $X$ and $Y$ are curves. We already know that $\pi$ is bijective. Let $y\in Y(k)$; since we get an extension $k(\pi(y))\to k(y)=k$ and since $k(\pi(y))$ contains $k$, we have $\pi(y)\in X(k)$. Let $x\in X(k)$; by bijectivity of $\pi$ there exists $y\in Y$ such that $\pi(y)=x$. By Zariski's Lemma, $k(x)=k\to k(y)$ is finite and thus $y\in Y(k)$. We have proved that $\pi_k$ is bijective. We now adapt to our situation the beginning of the proof of Proposition \ref{prop-wk}. First note that $Y\to X$ is quasi-finite by \cite[Lem. 20.10]{STPmorph} and the Nullstellensatz.
By Grothendieck's form of Zariski Main Theorem, a quasi-finite morphism $\pi: Y\to X$ between irreducible algebraic varieties over $k$ factorizes into an open immersion $Y\to Z$ and a finite morphism $Z\to X$. So we identify $Y$ with an open subset of $Z$ and further assume that $Y$ is Zariski dense in $Z$. It follows that $Y\to Z$ is birational.
Since $\pi_{k}:Y(k)\to X(k)$ is bijective, the varieties $X$, $Y$ and $Z$ have the same dimension.
Recall that the separable degree of the extension $\K(X)\to \K(Z)=\K(Y)$ is the cardinality of a generic fiber of $Z(k)\to X(k)$ by \cite[Thm. 7]{Sha}. Such a generic fiber lies in $Y(k)$: otherwise $\dim (Z\setminus Y)$ would be $\geq \dim X$, in contradiction with the density of $Y$.
Since $\pi_{k}:Y(k)\to X(k)$ is bijective, the finite morphism $Z\to X$ necessarily has separable degree one, so that $\K(X)\to \K(Z)$ is purely inseparable. It follows that $\pi$ is bijective and residually purely inseparable, and thus radicial and surjective.
\end{proof}
We end the paper by versions of Corollaries \ref{corZhomeo}, \ref{corZhomeo2} in positive characteristic.
\begin{cor}
\label{corZhomeocarpos}
Let $\pi:Y\to X$ be a morphism between algebraic varieties over $k$ such that $\pi$ is a homeomorphism. Then $\pi$ is an isomorphism if and only if $X$ is saturated in $Y$.
\end{cor}
\begin{proof}
We copy the proof of Corollary \ref{corZhomeo} using Proposition \ref{Zhomeocarpos} instead of Proposition \ref{Zhomeo}.
\end{proof}
\begin{cor}
\label{corZhomeo2carpos} Let $\pi:Y\to X$ be a morphism between irreducible algebraic varieties over $k$ of dimension $\geq 2$. If $\pi$ is a homeomorphism and $X$ is saturated, then $\pi$ is an isomorphism.
\end{cor}
\begin{proof}
We copy the proof of Corollary \ref{corZhomeo2}, using Proposition \ref{Zhomeocarpos}, \cite[Thm. 2.2]{V} and Corollary \ref{corZhomeocarpos} in place of Proposition \ref{Zhomeo}, Proposition \ref{prop-fini} and Corollary \ref{corZhomeo}, respectively.
\end{proof}
\end{document} |
\begin{document}
\setcounter{section}{1}
\begin{abstract} This is the third paper of this series. In \cite{Wang20}, we defined the monopole Floer homology for any pair $(Y,\omega)$, where $Y$ is a compact oriented 3-manifold with toroidal boundary and $\omega$ is a suitable closed 2-form viewed as a decoration. In this paper, we establish a gluing theorem for this Floer homology when two such 3-manifolds are glued suitably along their common boundary, assuming that $\partial Y$ is disconnected, and $\omega$ is small and yet non-vanishing on $\partial Y$.
As applications, we construct a monopole Floer 2-functor and the generalized cobordism maps. Using results of Kronheimer-Mrowka and Ni, it is shown that for any such 3-manifold $Y$ that is irreducible, this Floer homology detects the Thurston norm on $H_2(Y,\mathfrak{p}artial Y;\mathbb{R})$ and the fiberness of $Y$. Finally, we show that our construction recovers the monopole link Floer homology for any link inside a closed 3-manifold.
\end{abstract}
\title{Monopoles And Landau-Ginzburg Models III:\ A Gluing Theorem}
\tableofcontents
\part{Introduction}
\subsection{An Overview} The Seiberg-Witten Floer homology of a closed oriented 3-manifold as introduced by Kronheimer-Mrowka \cite{Bible} has greatly influenced the study of 3-manifold
topology since its inception. The underlying idea is an infinite dimensional Morse theory: the monopole equations on $\mathbb{R}_t$ times a closed 3-manifold are interpreted as the downward gradient flow of the Chern-Simons-Dirac functional.
This idea was further explored by the author in \cite{Wang20} in order to define the monopole Floer homology for any pair $(Y,\omega)$, where $Y$ is a compact oriented 3-manifold with toroidal boundary and $\omega$ is a suitable closed 2-form on $Y$. This Floer homology categorifies the Milnor-Turaev torsion invariant of $Y$ (this follows from the work of Meng-Taubes \cite{MT96} and Turaev \cite{T98}) and can be cast into a functor from a suitable cobordism category; see \cite[Theorem 1.5]{Wang20}. However, the second paper \cite{Wang20} was devoted to the analytic foundation of this Floer theory; very little was explored about its topological properties.
The goal of the present paper is to understand the properties of this Floer homology in the special case that
\begin{enumerate}[label=($\star$)]
\item\label{star} $\partial Y$ is disconnected, and $\omega$ has non-zero small pairing with each component of $\partial Y$.
\end{enumerate}
Under this assumption, we will prove that the monopole Floer homology of $(Y,\omega)$ is a topological invariant: it depends only on the 3-manifold $Y$, the cohomology class $[\omega]\in H^2(Y; i\mathbb{R})$ and an additional class in $H^1(\partial Y; i\mathbb{R})$. Moreover, when $Y$ is irreducible, this invariant detects the Thurston norm on $H_2(Y,\partial Y; \mathbb{R})$ and the fiberness of $Y$, generalizing the classical results \cite{KM97B,Bible,Ni08} for closed 3-manifolds.
In the context of Heegaard Floer homology, the knot Floer homology groups $\widehat{\HFK}_*$ and $\HFK_*^-$ were introduced by Ozsv\'{a}th-Szab\'{o} \cite{KFH} and independently by Rasmussen \cite{KFH1}. Motivated by the sutured manifold technique developed by Juh\'{a}sz \cite{J06,J08}, Kronheimer-Mrowka \cite{KS} defined the monopole knot Floer homology $\mathcal{K}HM_*$ as the analogue of the hat-version $\widehat{\HFK}_*$.
One motivation of this project is to give an alternative definition of $\mathcal{K}HM_*$ so that a (3+1) TQFT property can be verified easily. This goal is accomplished in this paper: for any knot $K$ inside a closed 3-manifold $Z$, we prove that for a suitable choice of $\omega$ the monopole Floer homology of the link complement $Z\mathfrak{s}etminus N(K\cup m)$ is isomorphic to $\mathcal{K}HM_*(Z,K)$, where $m\mathfrak{su}bset Z\mathfrak{s}etminus K$ is a meridian. This confirms a longstanding speculation \cite{M16} that the knot Floer homology is related to the monopole equations on $\mathbb{R}_t$ times the link complement $Z\mathfrak{s}etminus N(K\cup m)$.
Most topological implications of this paper follow immediately from a gluing theorem that computes the monopole Floer homology when two such 3-manifolds $(Y_1,\omega_1)$ and $(Y_2,\omega_2)$ are glued suitably along their common boundary; see Theorem \ref{T2.5} for the precise statement. Under the assumption \ref{star}, the gluing map
\begin{equation}\label{E1.1}
\alpha: \HM_*(Y_1,\omega_1)\otimes_\mathbb{N}R \HM_*(Y_2,\omega_2)\to \HM_*(Y_1\circ_h Y_2, \omega_1\circ_h\omega_2)
\mathbf{e}nd{equation}
is in fact an isomorphism and is functorial when considering both 3-manifold and 4-manifold cobordisms, where $\circ_h$ denotes horizontal composition and $Y_1\circ_h Y_2$ is the 3-manifold obtained after gluing. In fact, the (3+1) TQFT can be upgraded into a (2+1+1) TQFT: there is a monopole Floer 2-functor
\[
\HM_*: \mathbb{A}T\to \mathbb{A}R,
\]
from a suitable cobordism bi-category $\mathbb{A}T$ to the strict 2-category $\mathbb{A}R$ of finitely generated $\mathbb{N}R$-modules. In this paper, we will always work with the mod 2 Novikov ring $\mathbb{N}R$ to avoid any orientation issues.
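Since $\mathbb{N}R$ is a field, one numerical consequence can be read off immediately from the isomorphism \mathbf{e}qref{E1.1}: the rank is multiplicative under gluing,
\[
\rank_\mathbb{N}R \HM_*(Y_1\circ_h Y_2,\omega_1\circ_h\omega_2)=\rank_\mathbb{N}R \HM_*(Y_1,\omega_1)\cdot \rank_\mathbb{N}R \HM_*(Y_2,\omega_2),
\]
because the rank of a tensor product of vector spaces over a field is the product of the ranks.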
With that being said, the gluing map \mathbf{e}qref{E1.1} is constructed simply using Floer's excision argument \cite{BD95,KS}, and the technique involved in the proof of the Gluing Theorem \ref{T2.5} is fairly standard. This is not a bad sign: the monopole Floer homology of $(Y,\omega)$ is expected to have intimate relations with the existing theory for closed 3-manifolds and for balanced sutured manifolds. This paper provides the first few results in this direction.
\mathfrak{su}bsection{Summary of Results}\label{Subsec1.2} For the benefit of the reader, we give a more detailed account of the results that follow from the Gluing Theorem \ref{T2.5}. Let $Y$ be a connected, compact, oriented 3-manifold whose boundary $\mathfrak{p}artial Y\cong \Sigma\colonequals \coprod_{1\leq j\leq n} \mathbb{T}^2_j$ is a union of 2-tori. The monopole Floer homology constructed in \cite{Wang20} relies on some auxiliary data on the surface $\Sigma$. In this paper, we focus on the special case that $\Sigma$ is disconnected and choose the following data in order:
\begin{itemize}
\item a flat metric $g_\Sigma$ of $\Sigma$;
\item an imaginary-valued harmonic 1-form $\lambda\in \Omega_h^1(\Sigma; i\mathbb{R})$ such that $\lambda|_{\mathbb{T}^2_j}\mathfrak{n}eq 0,\ 1\leq j\leq n$;
\item an imaginary-valued harmonic 2-form $\mu\in \Omega_h^2(\Sigma; i\mathbb{R})$ such that $0<|\langle \mu, [\mathbb{T}^2_j]\rangle|<2\mathfrak{p}i$ for $1\leq j\leq n$.
\mathbf{e}nd{itemize}
Such a quadruple $\mathbb{T}Sigma=(\Sigma, g_\Sigma,\lambda,\mu)$ will be called a $T$-surface in this paper, and the triple $\mathfrak{d}=(g_\Sigma,\lambda,\mu)$ is called a geometric datum on $\Sigma$ in \cite{Wang20}. We choose a closed 2-form $\omega\in \Omega^2(Y; i\mathbb{R})$ on $Y$ such that
\[
\omega=\mu+ds\wedge\lambda
\]
in a collar neighborhood $(-1,0]_s\times \Sigma\mathfrak{su}bset Y$. We denote such a pair $(Y,\omega)$ together with other auxiliary data in the construction by a thickened letter $\mathbb{Y}$.
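As a concrete illustration of a $T$-surface (the coordinates below are chosen only for this example), take each component to be a square torus $\mathbb{T}^2_j=\mathbb{R}^2/\mathbb{Z}^2$ with coordinates $(x,y)$, the flat metric $dx^2+dy^2$, and
\[
\lambda|_{\mathbb{T}^2_j}=ia_j\, dx,\ \ \mu|_{\mathbb{T}^2_j}=ic_j\, dx\wedge dy,\ \ a_j\mathfrak{n}eq 0,\ 0<|c_j|<2\mathfrak{p}i.
\]
Constant-coefficient forms on a flat torus are harmonic, and $\langle \mu, [\mathbb{T}^2_j]\rangle=ic_j$, so all the conditions listed above are satisfied.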
In \cite{Wang20}, the monopole Floer homology group $\HM_*(\mathbb{Y})$ of $\mathbb{Y}$ is defined by studying the monopole equations on the non-compact 3-manifold $\widehat{Y}\colonequals\ Y\ \bigcup_{\Sigma}\ [0,+\infty)_s\times\Sigma$ with perturbation given by the 2-form $\omega$. The geometric datum $\mathfrak{d}=(g_\Sigma,\lambda,\mu)$ is used here to specify the geometry along the cylindrical end $[0,+\infty)_s\times\Sigma$, and the class $[*_\Sigma\lambda]\in H^1(\Sigma; i\mathbb{R})$ must lie in the image of $H^1(Y;i\mathbb{R})$ in order for the compactness argument to work. By \cite[Theorem 1.4]{Wang20}, $\HM_*(\mathbb{Y})$ is a finitely generated module over $\mathbb{N}R$. However, the results from \cite{Wang20} did not guarantee that $\HM_*(\mathbb{Y})$ is a topological invariant: this group may a priori depend on the geometric datum $\mathfrak{d}$ in a subtle way. In this paper, we establish an invariance result in the special case described above:
\begin{theorem}[Theorem \ref{T5.1}]\label{T1.1} Under the above assumptions, the monopole Floer homology group $\HM_*(\mathbb{Y})$ depends at most on the 3-manifold $Y$, the class $[\omega]\in H^2(Y;i\mathbb{R})$ and $[*_\Sigma\lambda]\in \im (H^1(Y;i\mathbb{R})\to H^1(\Sigma; i\mathbb{R}))$. This group is independent of the rest of the data used in the construction up to canonical isomorphisms.
\mathbf{e}nd{theorem}
The group $\HM_*(\mathbb{Y})$ admits a decomposition with respect to relative \mathfrak{s}pinc structures on $Y$:
\[
\HM_*(\mathbb{Y})=\bigoplus_{\widehat{\mathfrak{s}}\in \Spincr(Y)}\HM_*(\mathbb{Y},\widehat{\mathfrak{s}}).
\]
For any closed irreducible 3-manifold $Z$, this grading information \cite{KM97B, Bible} determines the Thurston norm $x(\cdot)$ on $H_2(Z;\mathbb{R})$. The Gluing Theorem \ref{T2.5} then allows us to relate $\HM_*(\mathbb{Y})$ with the monopole Floer homology of the double $Y\cup (-Y)$, so a similar detection result is obtained for any irreducible 3-manifold with disconnected toroidal boundary. However, our statement below is slightly different from the one in \cite{KM97B,Bible}, as the author was unable to verify the adjunction inequality.
\begin{theorem}[Theorem \ref{T7.1}]\label{T1.2} Consider the set of monopole classes
\[
\mathfrak{s}w(\mathbb{Y})\colonequals \{ c_1(\widehat{\mathfrak{s}}): \HM_*(\mathbb{Y},\widehat{\mathfrak{s}})\mathfrak{n}eq \{0\}\}\mathfrak{su}bset H^2(Y,\mathfrak{p}artial Y;\mathbb{Z}),
\]
and define $\delta arphi_{\mathbb{Y}}(\kappa)\colonequals \max_{a\in \mathfrak{s}w(\mathbb{Y})} \langle a, \kappa\rangle$, $\kappa\in H_2(Y,\mathfrak{p}artial Y;\mathbb{R})$. We set $\delta arphi_{\mathbb{Y}}\mathbf{e}quiv -\infty$ if $\mathfrak{s}w(\mathbb{Y})=\mathbf{e}mptyset$. Then
\[
\half(\delta arphi_{\mathbb{Y}}(\kappa)+\delta arphi_{\mathbb{Y}}(-\kappa))\leq x(\kappa),\ \mathfrak{f}orall \kappa\in H_2(Y,\mathfrak{p}artial Y;\mathbb{R}).
\]
If in addition $Y$ is irreducible, then $\mathfrak{s}w(\mathbb{Y})$ is non-empty, and the equality holds for any $\kappa$.
\mathbf{e}nd{theorem}
\begin{remark}\label{R1.3} Ideally, one would expect that $\mathfrak{s}w(\mathbb{Y})$ is symmetric about the origin, and so $\delta arphi_{\mathbb{Y}}(\kappa)=\delta arphi_{\mathbb{Y}}(-\kappa)$ whenever $\mathfrak{s}w(\mathbb{Y})$ is non-empty; but this symmetry is hard to verify directly due to the presence of the 2-form $\omega$. Nevertheless, the ideal adjunction inequality $\delta arphi_{\mathbb{Y}}(\kappa)\leq x(\kappa)$ can be established in some special cases, e.g., when $\kappa$ is integral and $[\mathfrak{p}artial \kappa]\in H_1(\mathfrak{p}artial Y;\mathbb{Z})$ is proportional to the Poincar\'{e} dual of $[*_\Sigma\lambda]$; see Corollary \ref{C8.4}.
\mathbf{e}nd{remark}
This Thurston norm detection theorem is accompanied by a fiberness detection result. In the context of Heegaard Floer homology, such a result was first established by Ghiggini \cite{Ghiggini08} for genus-1 fibered knots, and then by Ni for all knots \cite{Ni07} and for all closed 3-manifolds \cite{Ni09b}. In the context of Seiberg-Witten theory, this was proved by Kronheimer-Mrowka \cite{KS} for the monopole knot Floer homology $\mathcal{K}HM_*$, and by Ni \cite{Ni08} for closed 3-manifolds. Our statement below, however, is slightly weaker than the ideal version that one would hope to prove, for the same reason explained in Remark \ref{R1.3}.
\begin{theorem}[Theorem \ref{T7.2}]\label{T1.4} For any integral class $\kappa\in H_2(Y,\mathfrak{p}artial Y;\mathbb{Z})$, consider the subgroup
\[
\HM_*(\mathbb{Y}|\kappa)\colonequals \bigoplus_{\langle c_1(\widehat{\mathfrak{s}}), \kappa\rangle=\delta arphi_{\mathbb{Y}}(\kappa)}\HM_*(\mathbb{Y},\widehat{\mathfrak{s}}).
\]
If $Y$ is irreducible and $\rank_\mathbb{N}R\HM_*(\mathbb{Y}|\kappa)=\rank_\mathbb{N}R\HM_*(\mathbb{Y}|-\kappa)=1$, then $Y$ fibers over $S^1$.
\mathbf{e}nd{theorem}
The proof of Theorem \ref{T7.2} relies on the fiberness detection result \cite[Theorem 6.1]{KS} for sutured monopole Floer homology $\mathcal{H}M$. In the original definition of Gabai \cite{Gabai83}, any 3-manifold with toroidal boundary is an example of a sutured manifold, but such manifolds are not balanced in the sense of Juh\'{a}sz \cite[Definition 2.2]{J06}. Thus the sutured monopole Floer homology $\mathcal{H}M$ was not previously defined for this class of sutured manifolds. One may regard our construction as a natural extension of $\mathcal{H}M$ and ask whether a sutured decomposition theorem, as in \cite[Theorem 1.3]{J08} and \cite[Theorem 6.8]{KS}, continues to hold in this generality. Theorem \ref{T8.1} is a preliminary result in this direction. Theorem \ref{T1.4} then follows immediately from Theorem \ref{T8.1} and the works of Kronheimer-Mrowka \cite{KS} and Ni \cite{Ni09b}.
Theorem \ref{T1.2} combined with Theorem \ref{T1.4} gives a simple characterization of the product manifold $[-1,1]_s\times \mathbb{T}^2$, which was first suggested to the author by Chris Gerig.
\begin{proposition}[Corollary \ref{C9.3}]\label{P0.5} Let $Y$ be any oriented 3-manifold with disconnected toroidal boundary. If $Y$ is connected and irreducible, then $\rank_{\mathbb{N}R}\HM_*(\mathbb{Y})\mathfrak{g}eq 1$. If the equality holds, then $Y=[-1,1]_s\times\mathbb{T}^2$ is a product. This is true for any choice of $(\omega,\mathfrak{d})$ satisfying the conditions at the beginning of Section \ref{Subsec1.2}.
\mathbf{e}nd{proposition}
\mathfrak{su}bsection{Connected Sum Formulae} Two simple connected sum formulae can be derived from the Gluing Theorem \ref{T2.5} for reducible 3-manifolds. For any $\mathbb{Y}_1$ and $\mathbb{Y}_2$, take their connected sum to be
\[
\mathbb{Y}_1\# \mathbb{Y}_2=(Y_1\# Y_2, \omega_1\# \omega_2,\cdots).
\]
The class $[\omega_1\# \omega_2]\in H^2(Y_1\#Y_2; i\mathbb{R})$ is canonically determined by $\omega_j\in \Omega^2(Y_j; i\mathbb{R}),\ j=1,2$ and by the relation $\langle [\omega_1\# \omega_2], S^2\rangle=0$, where $S^2\mathfrak{su}bset Y_1\# Y_2$ is the 2-sphere separating $Y_1$ and $Y_2$.
\begin{proposition}[Proposition \ref{P10.1}]\label{P1.6} Under the above assumptions, we have
\[
\HM_*(\mathbb{Y}_1\# \mathbb{Y}_2)=\HM_*(\mathbb{Y}_1)\otimes_\mathbb{N}R\HM_*(\mathbb{Y}_2)\otimes_\mathbb{N}R V,
\]
where $V$ is a two-dimensional vector space over $\mathbb{N}R$.
\mathbf{e}nd{proposition}
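In particular, since $V$ is two-dimensional over the field $\mathbb{N}R$, Proposition \ref{P1.6} yields the rank formula
\[
\rank_\mathbb{N}R \HM_*(\mathbb{Y}_1\# \mathbb{Y}_2)=2\cdot \rank_\mathbb{N}R \HM_*(\mathbb{Y}_1)\cdot \rank_\mathbb{N}R \HM_*(\mathbb{Y}_2).
\]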
\begin{proposition}[Proposition \ref{P10.3}]\label{P1.7} For any closed 3-manifold $Z$, one can form the connected sum $\mathbb{Y}\# Z$ in a similar way. Then
\[
\HM_*(\mathbb{Y}\# Z)\cong \HM_*(\mathbb{Y})\otimes_\mathbb{N}R\mathbb{T}HM_*(Z)
\]
where $\mathbb{T}HM_*(Z)$ is the sutured monopole Floer homology of $(Z(1),\delta)$, which is the balanced sutured manifold obtained from $Z$ by removing a 3-ball; the unique suture $\delta$ is the equator of $\mathfrak{p}artial Z(1)$; see Definition \ref{D10.2}.
\mathbf{e}nd{proposition}
Propositions \ref{P1.6} and \ref{P1.7} are consistent with the connected sum formulae in \cite[Proposition 9.15]{J06} for sutured Heegaard Floer homology; the analogue for sutured monopole Floer homology was obtained in \cite[Theorem 1.5]{Li18b}.
Proposition \ref{P1.6} should be compared with a vanishing result, which is concerned with the case that the 2-form $\omega$ on $Y_1\# Y_2$ has non-zero pairing with the separating 2-sphere $S^2$.
\begin{proposition} Suppose $Y$ is any 3-manifold with toroidal boundary that contains an embedded 2-sphere $S^2\mathfrak{su}bset Y$. If in addition $\langle c_1(\widehat{\mathfrak{s}}),[S^2]\rangle =0$ and $\langle \omega, [S^2]\rangle \mathfrak{n}eq 0$, then the group $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$ vanishes.
\mathbf{e}nd{proposition}
\begin{proof} This follows from the neck-stretching argument in \cite[Proposition 40.1.3]{Bible} and the energy equation in \cite[Proposition 5.4]{Wang20}.
\mathbf{e}nd{proof}
\mathfrak{su}bsection{Gerig's program in contact topology} This paper is motivated in part by Gerig's broader program to attack the two-or-infinite conjecture (see for instance \cite{HWZ98, HWZ03, HT09, GH16}) by mimicking Taubes' solution \cite{Taubes07} to the Weinstein conjecture in dimension 3. What Gerig needs from Seiberg-Witten theory is a much stronger result, which we state now as a conjecture.
\begin{conjt}[Gerig]\label{conjecture} For any connected 3-manifold $Y$ with toroidal boundary, one can choose a closed 2-form $\omega$ and a geometric datum $\mathfrak{d}=(g_\Sigma,\lambda,\mu)$ on $\mathfrak{p}artial Y$ such that $\rank_\mathbb{N}R\HM_*(\mathbb{Y})\mathfrak{g}eq 1$. If $\rank_\mathbb{N}R\HM_*(\mathbb{Y})=1$, then $Y$ is the connected sum of $[-1,1]_s\times \mathbb{T}^2$ with some monopole $L$-space.
\mathbf{e}nd{conjt}
In light of Propositions \ref{P0.5}, \ref{P1.6} and \ref{P1.7}, this conjecture holds if $Y$ is the connected sum $Y_1\#\cdots \# Y_m$ of prime 3-manifolds, with each $Y_j$ either closed or boundary-disconnected.
Gerig's approach towards the two-or-infinite conjecture can be described as follows. Suppose that $Z_0$ is any compact oriented 3-manifold, equipped with a contact 1-form $\alpha_0$ with only finitely many Reeb orbits. Denote by $Y_0$ the complement of these orbits. If one can define an embedded contact homology for the pair $(Y_0,\alpha_0)$ and prove an ECH=SW type theorem, then $\rank_\mathbb{N}R\HM_*(\mathbb{Y})=\rank_\mathbb{N}R \text{ECH}(Y_0,\alpha_0)=1$. Conjecture \ref{conjecture} then implies that $\mathfrak{p}artial Y_0$ has only two components.
An appropriate candidate for this ECH theory has already appeared in \cite{CGH10} and \cite{HS05} assuming that all Reeb orbits of $\alpha_0$ are non-degenerate and elliptic. In this case, the Reeb flow of $\alpha_0$ foliates the boundary of $Y_0$ with irrational slopes, which is a technical condition needed for their construction. The main difficulty in Gerig's program is to find the correct 2-form $\omega_0$ on $Y_0$ and to establish this ECH=SW type theorem using the deformation $ \omega_0+td\alpha_0$ as $t\to +\infty$. The 2-form $\omega_0$ must be adapted to $\alpha_0$ in a certain sense so that Taubes' compactness argument goes through as usual. (Of course, Taubes' solution to the Weinstein conjecture in dimension 3 requires only a weaker version of ``ECH=SW", but it is conceptually easier to think of Gerig's strategy this way).
\mathfrak{su}bsection{Relations with Link Floer Homology} Our monopole Floer theory can be also used to recover the monopole link Floer homology $\LHM_*$. The next theorem will be proved using the Decomposition Theorem \ref{T8.1} in Section \ref{Sec8}.
\begin{theorem}[Corollary \ref{C8.3}]\label{T1.6} For any link $L=\{L_i\}_{i=1}^n$ in a closed oriented 3-manifold $Z$, we pick a meridian $m_i$ for each component $L_i$, and consider the link complement
\[
Y(Z, L)=Z\mathfrak{s}etminus N(L\cup m_1\cup \cdots\cup m_n).
\]
Then for a suitable 2-form $\omega$ on $Y=Y(Z,L)$, we have $\HM_*(\mathbb{Y})\cong \LHM_*(Z, L)$.
\mathbf{e}nd{theorem}
By the work of Ozsv\'{a}th-Szab\'{o} \cite{OS08} and Ni \cite{Ni09}, the link Floer homology detects the Thurston norm of the link complement for any link in a rational homology sphere. The assumption on homology was later removed by Juh\'{a}sz \cite[Remark 8.5]{J08}. The same detection result using the monopole link Floer homology $\LHM_*$ was obtained by Ghosh-Li \cite[Theorem 1.17]{GL19}. By Theorem \ref{T1.6}, Theorem \ref{T1.2} recovers the previous detection results in the case that the link complement is irreducible. This constraint can be further removed by our connected sum formulae.
On the other hand, any 3-manifold $Y$ with toroidal boundary can be viewed as the link complement of some link $L$ inside a closed 3-manifold $Z$. Then the link Floer homology of $(Z,L)$ detects the Thurston norm of $Y$. However, the statements in \cite{OS08, Ni09b,J08} and \cite{GL19} depend on the choice of $(Z,L)$ and are therefore of a different nature from the one in Theorem \ref{T1.2}.
\mathfrak{su}bsection{Generalized (3+1) TQFT} In \cite{Wang20}, the cobordism maps were constructed only for a very restrictive class of cobordisms called \textit{strict}. For any 3-manifolds $(Y_i,\Sigma_i)$ with toroidal boundary, $i=1,2$, a cobordism
\[
(X,W): (Y_1,\Sigma_1)\to (Y_2,\Sigma_2)
\]
is a 4-manifold with corners, with $W$ a cobordism from $\Sigma_1$ to $\Sigma_2$. It is called strict if $\Sigma_1=\Sigma_2$ and $W=[-1,1]_t\times\Sigma_1$ is a product. This constraint is removed by constructing generalized cobordism maps in this paper. Again we shall work under the assumptions stated at the beginning of Section \ref{Subsec1.2}.
\begin{theorem}[Corollary \ref{C4.4}]\label{T1.10} We define a category $\mathbb{A}T_2$ as follows: each object of $\mathbb{A}T_2$ is a 3-manifold $\mathbb{Y}=(Y,\omega,\cdots)$ with toroidal boundary, equipped with a closed 2-form $\omega$ satisfying the conditions in Section \ref{Subsec1.2}. Each morphism is a triple $(\mathbb{X},\mathbb{B}W, a)$ where
\begin{itemize}
\item $(X,W): (Y_1,\Sigma_1)\to (Y_2,\Sigma_2)$ is a 4-manifold with corners;
\item $a\in \HM_*(\mathbb{B}W)$ is an element in the monopole Floer homology of $\mathbb{B}W=(W,\omega_{W},\cdots)$.
\mathbf{e}nd{itemize}
Then there is a functor $\HM_*: \mathbb{A}T_2\to \mathbb{N}R\mathcal{M}od$ from $\mathbb{A}T_2$ to the category of finitely generated $\mathbb{N}R$-modules, which assigns to each $\mathbb{Y}$ its monopole Floer homology $\HM_*(\mathbb{Y})$.
\mathbf{e}nd{theorem}
\begin{remark} To better package the information on 2-forms, the actual construction of $\mathbb{A}T_2$ in Corollary \ref{C4.4} is slightly different. The gluing map \mathbf{e}qref{E1.1} has to be used in order to compose morphisms in the category $\mathbb{A}T_2$. This generalized (3+1) TQFT structure contains strictly less information than the monopole Floer 2-functor in Theorem \ref{T2.5}, since the functoriality of the element $a\in \HM_*(\mathbb{B}W)$ is completely ignored here.
\mathbf{e}nd{remark}
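Schematically (suppressing the bookkeeping of 2-forms, and with the precise order of gluing left implicit), the composition of two morphisms in $\mathbb{A}T_2$ is expected to take the form
\[
(\mathbb{X}_2,\mathbb{B}W_2,a_2)\circ (\mathbb{X}_1,\mathbb{B}W_1,a_1)=\big(\mathbb{X}_2\circ \mathbb{X}_1,\ \mathbb{B}W_2\circ \mathbb{B}W_1,\ \alpha(a_2\otimes a_1)\big),
\]
where $\alpha$ is the gluing map \mathbf{e}qref{E1.1} applied to the glued vertical parts; this is the sense in which \mathbf{e}qref{E1.1} enters the composition law.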
\mathfrak{su}bsection{A Remark on the Gluing Theorem} Most results of this paper are based on the monopole Floer 2-functor in the Gluing Theorem \ref{T2.5}, and we give a short conceptual reason here to see why it has a particularly simple form and how this gluing result fits into the broader context of Landau-Ginzburg models.
In the first paper \cite{Wang202}, the author introduced an infinite dimensional gauged Landau-Ginzburg model for any $T$-surface $\mathbb{T}Sigma=(\Sigma,g_\Sigma,\lambda,\mu)$:
\[
(M(\Sigma), W_\lambda, \mathbb{C}G(\Sigma)).
\]
It is more appropriate to think of $\HM_*(\mathbb{Y})$ as an invariant of $Y$ relative to this Landau-Ginzburg model. To develop a gluing theory in general, one has to assign an $A_\infty$-category $\mathbb{C}A$ to $(M(\Sigma), W_\lambda, \mathbb{C}G(\Sigma))$ and upgrade the Floer homology of $\mathbb{Y}$ as an $A_\infty$-module over $\mathbb{C}A$. By the work of Seidel \cite[Corollary 18.27]{S08}, we wish to construct a spectral sequence abutting to $\HM_*(\mathbb{Y}_1\circ_h\mathbb{Y}_2)$ with $E_1$ page equal to
\[
\HM_*(\mathbb{Y}_1)\otimes_\mathbb{N}R\HM_*(\mathbb{Y}_2).
\]
In a finite dimensional framework, this spectral sequence has been reproved using the complex gradient flow equation in \cite{Wang22}, and a generalization to suitable infinite dimensional situations may come soon afterwards. However, in the special case \ref{star} that we have discussed so far, the $A_\infty$-category $\mathbb{C}A$ is expected to be trivial: the thimbles of different critical points of $W_\lambda$ are not supposed to intersect at all. This suggests that the spectral sequence collapses at $E_1$-page and the gluing formula is simply a tensor product. On the technical level, this allows us to take a shortcut in this paper by using Floer's excision argument, which bypasses the complicated general framework of \cite{Wang22}.
The next interesting case that we hope to investigate in the future is when $\Sigma$ is connected and $\mu=0$ in the geometric datum $\mathfrak{d}=(g_\Sigma,\lambda,\mu)$, which is also important for Gerig's program in contact topology. This may lead eventually to an analytic construction of the knot Floer homology $\HFK_*^-$ from our story. In this case, the gluing map \mathbf{e}qref{E1.1} constructed in this paper is identically zero, and the general framework of \cite{Wang22} will be indispensable. Readers are referred to \cite[Section 1.6]{Wang20} for more heuristics on this direction.
\begin{remark} We also note that the Gluing Theorem \ref{T2.5} is indeed subject to certain limitations. When $(Y_1,\omega_1)$ and $(Y_2,\omega_2)$ are glued along their common boundaries, the 2-forms $\omega_1$ and $\omega_2$ must match within a neighborhood of the boundary. This imposes a non-trivial homological constraint on the way they are glued.
\mathbf{e}nd{remark}
\mathfrak{su}bsection{Organization} The rest of this paper is organized as follows:
In Section \ref{Sec2}, we review the basic definition of bi-categories and state the Gluing Theorem \ref{T2.5}. Section \ref{Sec3} is devoted to its proof. We will follow closely the proof of Floer's excision theorem in \cite{BD95, KS}. In Section \ref{Sec4}, we construct the generalized cobordism maps and define the functor in Theorem \ref{T1.10}. In Section \ref{Sec5}, we prove the Invariance Theorem \ref{T1.1}.
In Section \ref{Sec6}, we review the theory for closed 3-manifolds following the book \cite{Bible} by Kronheimer-Mrowka and adapt their non-vanishing results to the case of non-exact perturbations. Section \ref{Sec7} is devoted to the Thurston norm detection result: Theorem \ref{T1.2}. After a digression into sutured Floer homology in Section \ref{Sec8}, the proof of the fiberness detection result, Theorem \ref{T1.4}, is supplied in Section \ref{Sec9}. The connected sum formulae are derived in Section \ref{Sec10}.
\mathfrak{su}bsection*{Acknowledgments} The author is extremely grateful to his advisor, Tom Mrowka, for his patient help and constant encouragement throughout this project. The author would like to thank Chris Gerig for suggesting Proposition \ref{P0.5}. The author would also like to thank Zhenkun Li for many helpful discussions. This material is based upon work supported by the National Science Foundation under Grant No.1808794.
\mathfrak{p}art{The Gluing Theorem}
The primary goal of this part is to construct the monopole Floer 2-functor
\[
\HM_*: \mathbb{A}T\to \mathbb{A}R
\]
where $\mathbb{A}T$ is the toroidal bi-category constructed in Section \ref{Sec2} and $\mathbb{A}R$ is the strict 2-category of finitely generated $\mathbb{N}R$-modules. Throughout this paper, the base ring $\mathbb{N}R$ is always the mod $2$ Novikov ring. Our main result is the Gluing Theorem \ref{T2.5}.
\mathfrak{s}ection{The Toroidal Bi-category}\label{Sec2}
\mathfrak{su}bsection{2-categories}
In this section, we review the definition of 2-categories following \cite{B67}. The $\Hom$-sets of a 2-category form categories themselves.
\begin{definition} A strict 2-category $\mathbb{A}K$ consists of the following data:
\begin{enumerate}[label=(Y\arabic*)]
\item\label{Y1} a collection of objects $A,B,C,\cdots$. They are also called $0$-cells;
\item\label{Y2} for any objects $A,B\in \Ob \mathbb{A}K$, there is a category $\mathbb{A}K(A,B)$, whose objects are called 1-cells and morphisms are called 2-cells. The identity morphism of a 1-cell $f$ is denoted by $\Id_f:f\to f$. The compositions of 2-cells in $\mathbb{A}K(A,B)$ are called vertical compositions, denoted by $\circ_v$.
\item\label{Y3} for any objects $A,B,C\in \Ob\mathbb{A}K$, there is a functor
\[
\circ_h: \mathbb{A}K(A,B)\times \mathbb{A}K(B,C)\to \mathbb{A}K(A,C),
\]
called the horizontal composition.
\item\label{Y4} The horizontal composition $\circ_h$ is associative, i.e. for any four 0-cells $A,B,C,D$, the two different ways of composing $\circ_h$:
\[
-\ \circ_h\ (-\ \circ_h\ -) \text{ and }(-\ \circ_h\ -)\ \circ_h \ -
\]
give rise to the same functor from $\mathbb{A}K(A,B)\times\mathbb{A}K(B,C)\times \mathbb{A}K(C,D)$ to $\mathbb{A}K(A,D)$.
\item\label{Y5} let $\mathbf{1}$ be the category with one object and one morphism; for any object $A\in \Ob \mathbb{A}K$, there is a functor $1_A: \mathbf{1}\to \mathbb{A}K(A,A)$ picking out the identity 1-cell $e_A: A\to A$ and its identity 2-cell $\Id_{e_A}:e_A\to e_A$. The functor $1_A$ is the unit of the horizontal composition $\circ_h$. \mathfrak{q}edhere
\mathbf{e}nd{enumerate}
\mathbf{e}nd{definition}
\begin{example} Let $\mathbb{N}R$ be the Novikov field defined over $\mathbb{B}F_2$, the field with two elements:
\[
\mathbb{N}R\colonequals \mathbb{B}F_2[\mathbb{R}]^-=\{\mathfrak{su}m_{i\mathfrak{g}eq 0} a_i q^{n_i}: a_i\in \mathbb{B}F_2, n_i\in \mathbb{R}, \lim_{i\to\infty} n_i=-\infty\}.
\]
We define $\mathbb{A}R$ to be the strict 2-category with a single object $\mathfrak{s}tar$ such that $\mathbb{A}R(\mathfrak{s}tar)\colonequals \mathbb{A}R(\mathfrak{s}tar, \mathfrak{s}tar)$ is the category of finitely generated $\mathbb{N}R$-modules. The horizontal composition is defined using the tensor product of $\mathbb{N}R$-modules. The identity 1-cell $e\in \mathbb{A}R(\mathfrak{s}tar)$ is $\mathbb{N}R$ itself.
\mathbf{e}nd{example}
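To see why $\mathbb{N}R$ is a field, note that any non-zero element has a leading (largest) exponent, and inverting that leading term reduces the problem to a geometric series. For instance, over $\mathbb{B}F_2$,
\[
(1+q)^{-1}=q^{-1}(1+q^{-1})^{-1}=\mathfrak{su}m_{k\mathfrak{g}eq 1} q^{-k}\in \mathbb{N}R,
\]
which lies in $\mathbb{N}R$ since the exponents tend to $-\infty$; indeed $(1+q)\cdot \mathfrak{su}m_{k\mathfrak{g}eq 1} q^{-k}=\mathfrak{su}m_{k\mathfrak{g}eq 1} q^{-k}+\mathfrak{su}m_{k\mathfrak{g}eq 0} q^{-k}=1$ modulo $2$.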
\begin{example} Let $\mathbb{C}AT$ be the strict 2-category consisting of all categories as 0-cells. For any categories $A,B$, 1-cells in $\mathbb{C}AT(A,B)$ are functors from $A$ to $B$, while 2-cells are given by natural transformations.
\mathbf{e}nd{example}
For technical reasons, the toroidal bi-category $\mathbb{A}T$ defined in the next subsection is not strictly associative. However, $\mathbb{A}T$ is still unital, and the associativity of $\circ_h$ still holds up to 2-isomorphisms; so it is an example of a bi-category:
\begin{definition} A (unital) bi-category $\mathbb{A}K$ satisfies \ref{Y1}\ref{Y2}\ref{Y3}\ref{Y5} and
\begin{enumerate}[label=(Y4')]
\item\label{Y4'} for any 0-cells $A,B,C,D$, there is a natural isomorphism $a(A,B,C,D)$ between the two functors below:
\[
\begin{tikzcd}
\mathbb{A}K(A,B)\times\mathbb{A}K(B,C)\times \mathbb{A}K(C,D)\arrow[r, bend left, "-\ \circ_h\ (-\ \circ_h\ -)"] \arrow[r, bend right, "(-\ \circ_h\ -)\ \circ_h\ -"'] &\mathbb{A}K(A,D),
\mathbf{e}nd{tikzcd}
\]
which satisfies an associativity coherence condition \cite[p.~5]{B67}. For any triple $(f,g,k)$, we use $a(f,g,k)$ to denote the 2-cell isomorphism:
\[
a(f,g,k): f\circ_h (g\circ_h k)\to (f\circ_h g)\circ_h k. \mathfrak{q}edhere
\]
\mathbf{e}nd{enumerate}
\mathbf{e}nd{definition}
\mathfrak{su}bsection{The Toroidal Bi-category}\label{Subsec2.2} The primary goal of this subsection is to define the toroidal bi-category $\mathbb{A}T$, over which the monopole Floer 2-functor $\HM_*$ is defined. Each object of $\mathbb{A}T$ is a quadruple
\[
\mathbb{T}Sigma=(\Sigma, g_\Sigma, \lambda, \mu),
\]
called a $T$-surface, where
\begin{enumerate}[label=(T\arabic*)]
\item $\Sigma=\coprod_{i=1}^n \Sigma^{(i)}\mathfrak{n}eq \mathbf{e}mptyset$ is an oriented surface consisting of finitely many 2-tori; we insist here that the surface $\Sigma$ is non-empty;
\item $g_\Sigma$ is a flat metric of $\Sigma$;
\item\label{T3} $\lambda\in \Omega^1_h(\Sigma, i\mathbb{R})$ is a harmonic 1-form and $\mu\in \Omega^2_h(\Sigma, i\mathbb{R})$ is a harmonic 2-form; when restricted to each connected component $\Sigma^{(i)}$, $\lambda$ and $\mu$ are both non-zero;
\item\label{T1} for any $1\leq i\leq n$, $|\langle \mu, [\Sigma^{(i)}]\rangle|<2\mathfrak{p}i$.
\mathbf{e}nd{enumerate}
For any $T$-surfaces $\mathbb{T}Sigma_1$ and $\mathbb{T}Sigma_2$, $\mathbb{A}T(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2)$ is a full subcategory of the strict cobordism category $\mathbb{C}ob_s$ defined in \cite[Section 3]{Wang20}. We recall the definition below for the sake of completeness. An object of $\mathbb{A}T(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2)$ is a quintuple $\mathbb{Y}=(Y, \mathfrak{p}artial_si, g_Y, \omega, \{\mathfrak{q}\})$ satisfying the following properties:
\begin{enumerate}[label=(P\arabic*)]
\item\label{P1} $Y$ is a compact oriented 3-manifold with toroidal boundary and $\mathfrak{p}artial_si : \mathfrak{p}artial Y\to (-\Sigma_1)\cup
\Sigma_2$ is an orientation preserving diffeomorphism. $\Sigma_1$ and $\Sigma_2$ are regarded as the incoming and outgoing boundaries of $Y$ respectively. When it is clear from the context, the identification map $\mathfrak{p}artial_si$ might be dropped from our notations.
\item \label{P1.2} Each component of $Y$ intersects both $\mathbb{T}Sigma_1$ and $\mathbb{T}Sigma_2$ non-trivially.
\item\label{P1.5} The metric $g_Y$ of $Y$ is cylindrical, i.e. $g_Y$ is the product metric
\[
ds^2+\mathfrak{p}artial_si^*(g_{\Sigma_1}, g_{\Sigma_2})
\]
within a collar neighborhood $[-1,1)_s\times\Sigma_1\ \bigcup\ (-1,1]_s\times\Sigma_2$ of $\mathfrak{p}artial Y$.
\item\label{P2} $\omega\in \Omega^2(Y, i\mathbb{R})$ is an imaginary-valued \textbf{closed} 2-form on $Y$ such that
\begin{align*}
\omega=\mu_1+ds\wedge\lambda_1& \text{ on } [-1,1)_s\times\Sigma_1,\\
\omega=\mu_2+ds\wedge \lambda_2& \text{ on } (-1,1]_s\times\Sigma_2.
\mathbf{e}nd{align*}
In particular, $(\mu_1,\mu_2)$ lies in the image $\im (H^2(Y; i\mathbb{R})\to H^2(\Sigma; i\mathbb{R}))$.
\item\label{P3} The cohomology class of $(*_1\lambda_1, *_2\lambda_2)$ lies in the image $
\im (H^1(Y;i\mathbb{R})\to H^1(\mathfrak{p}artial Y; i\mathbb{R})).$ In particular, there exists a co-closed 2-form $\omega_{h}$ such that
\begin{align*}
\omega_h=ds\wedge\lambda_1& \text{ on } [-1,1)_s\times\Sigma_1;\\
\omega_h=ds\wedge \lambda_2& \text{ on } (-1,1]_s\times\Sigma_2;
\mathbf{e}nd{align*}
\item\label{P8} $\{\mathfrak{q}\}$ is a collection of admissible perturbations (in the sense of \cite[Definition 13.3]{Wang20}) of the Chern-Simons-Dirac functional $\mathbb{C}L_\omega$, one for each relative \mathfrak{s}pinc structure of $Y$.
\mathbf{e}nd{enumerate}
\begin{remark} The reason to include \ref{P1.2} is to rule out closed components of $Y$ when considering horizontal compositions. This is not a serious problem; see Subsection \ref{Subsec2.4}.
\mathbf{e}nd{remark}
Having defined the objects (1-cells) of $\mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)$, let us now describe the set of morphisms (2-cells). Since each 3-manifold with toroidal boundary is now decorated by a closed 2-form $\omega$, morphisms will take these forms into account. Given two objects $\mathbb{Y}_i=(Y_i, \mathfrak{p}artial_si_i, g_i, \omega_i,\{\mathfrak{q}_i\}), i=1,2$ in $\mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)$, a morphism
\[
\mathbb{X}: \mathbb{Y}_1\to \mathbb{Y}_2
\]
is a triple $\mathbb{X}=(X,[\mathfrak{p}artial_si_X],[\omega_X]_{cpt})$ satisfying the following properties.
\begin{figure}[H]
\begin{tikzpicture}
\draw (0,0) -- (3,0) -- (3,1.2) -- (0,1.2) -- (0,0);
\draw [ultra thick] (0,0) -- (3,0);
\draw [ultra thick] (0,1.2) -- (3,1.2);
\node at (1.5,0.6) {$X$};
\node [left] at (0,0.6) {\small $[-1,1]_t\times\Sigma_1$};
\node [right] at (3,0.6) {\small $[-1,1]_t\times\Sigma_2$};
\node [above] at (1.5,1.2) {$Y_1$};
\node [below] at (1.5,0) {$Y_2$};
\node [above] at (3,1.2) {$\Sigma_2$};
\node [above] at (0,1.2) {$\Sigma_1$};
\node [below] at (0,0) {$\Sigma_1$};
\node [below] at (3,0) {$\Sigma_2$};
\node at (9,0.6) {$\begin{tikzcd}[column sep=2cm]
\mathbb{T}Sigma_1\arrow[r,"\mathbb{Y}_1"]\arrow[rd,"\mathbb{X}",phantom] \arrow[d,equal]& \mathbb{T}Sigma_2\arrow[d,equal]\\
\mathbb{T}Sigma_1\arrow[r, "\mathbb{Y}_2"'] & \mathbb{T}Sigma_2.
\end{tikzcd}$};
\node at (6,0.6) {$\rightsquigarrow$};
\end{tikzpicture}
\caption{A 2-cell morphism $\mathbb{X}$.}
\label{Pic0}
\end{figure}
\begin{enumerate}[label=(Q\arabic*)]
\item \label{Q1} $X$ is a 4-manifold with corners, i.e.\ $X$ is a space stratified by manifolds
\[
X\supset X_{-1}\supset X_{-2}\supset X_{-3}=\emptyset
\]
such that the codimension-1 stratum $X_{-1}$ consists of three parts
\[
X_{-1}= (-Y_1)\cup Y_2\cup W_X,
\]
where $W_X$ is an oriented 3-manifold with boundary $\partial W_X=\partial Y_1\cup\partial Y_2$. Moreover, $\partial Y_i=Y_i\cap W_X$ and $X_{-2}=\partial Y_1\cup \partial Y_2$.
\item\label{Q3} $\psi_X: W_X\to [-1,1]_t\times ((-\Sigma_1)\cup\Sigma_2)$ is an orientation preserving diffeomorphism compatible with $\psi_1$ and $\psi_2$. More precisely, we require that
\begin{align*}
\psi_X |_{\partial Y_1}&=\psi_1: \partial Y_1\to \{-1\}\times ((-\Sigma_1)\cup\Sigma_2), \\
\psi_X |_{\partial Y_2}&=\psi_2: \partial Y_2\to \{1\}\times ((-\Sigma_1)\cup\Sigma_2),
\end{align*}
and that these identities hold also in a collar neighborhood of $\partial W_X$. When there is no risk of confusion, $\psi_X$ may be dropped from the notation. Such a pair $(X, \psi_X)$ is called a \textbf{strict cobordism} from $(Y_1,\psi_1)$ to $(Y_2,\psi_2)$, and $[\psi_X]$ denotes the isotopy class of such a diffeomorphism.
\item\label{Q6} There exists a closed 2-form $\omega_X\in \Omega^2(X, i\mathbb{R})$ on $X$ satisfying the following properties:
\begin{itemize}
\item $\omega_X=\omega_i$ (see \ref{P2}) within a collar neighborhood of $Y_i\subset X_{-1}$ for $i=1,2$;
\item within a collar neighborhood of $W_X\subset X_{-1}$,
\begin{align*}
\omega_X&=\mu_1+ds\wedge \lambda_1 \text{ on } [-1,1]_t\times [-1,1)_s\times\Sigma_1,\\
\omega_X&=\mu_2+ds\wedge \lambda_2\text{ on } [-1,1]_t\times (-1,1]_s\times\Sigma_2.
\end{align*}
\end{itemize}
The existence of such a 2-form $\omega_X$ is guaranteed by a cohomological condition on $[\omega_X]\in H^2(X; i\mathbb{R})$; see \cite[Section 3 (Q4)]{Wang20}. Two such forms $\omega_X,\omega_X'$ are called relatively cohomologous if $\omega_X-\omega_X'=da$ for a compactly supported smooth 1-form $a\in \Omega^1_c(X,i\mathbb{R})$. We fix the relative cohomology class of $\omega_X$, denoted by $[\omega_X]_{cpt}$.
\end{enumerate}
\begin{remark} It is necessary to record the isotopy class of $\psi_X$ here, because the diffeomorphism group $\operatorname{Diff}_+(\mathbb{T}^2)$ is not simply connected. By \cite[Theorem 1(b)]{EE67}, $\operatorname{Diff}_+(\mathbb{T}^2)$ has the same homotopy type as its subgroup $S^1\times S^1\times SL(2,\mathbb{Z})$, so $\pi_1(\operatorname{Diff}_+(\mathbb{T}^2))\cong \mathbb{Z}\oplus\mathbb{Z}$.
\end{remark}
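The last isomorphism follows from an elementary computation, using only that $SL(2,\mathbb{Z})$ is discrete:
\[
\pi_1\big(\operatorname{Diff}_+(\mathbb{T}^2)\big)\cong \pi_1\big(S^1\times S^1\times SL(2,\mathbb{Z})\big)\cong \pi_1(S^1)\oplus\pi_1(S^1)\cong \mathbb{Z}\oplus\mathbb{Z}.
\]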
The vertical composition in $\mathbb{A}T(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2)$ is defined by composing strict cobordisms. Since we have recorded the relative cohomology class $[\omega_X]_{cpt}$ (instead of just $[\omega_X]\in H^2(X;i\mathbb{R})$), these classes can be concatenated on the composed manifold. For any 1-cell $\mathbb{Y}\in \mathbb{A}T(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2)$, the identity 2-cell is given by the product strict cobordism $([-1,1]_t\times Y, \Id_{[-1,1]_t} \times \psi)$, with $[\omega]_{cpt}$ being the class of the pull-back 2-form $\omega$. Note that a metric on $X$ is \textbf{not} encoded in the definition of $\mathbb{X}$: this category is topological in nature, even though auxiliary data are specified for its objects (1-cells).
The horizontal composition $\mathbb{A}T(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2)\times \mathbb{A}T(\mathbb{T}Sigma_2,\mathbb{T}Sigma_3)\to \mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_3)$ is defined using the diffeomorphisms $\psi$. On the level of 1-cells, given any pair $(\mathbb{Y}_{12},\mathbb{Y}_{23})\in \mathbb{A}T(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2)\times \mathbb{A}T(\mathbb{T}Sigma_2,\mathbb{T}Sigma_3)$, the underlying 3-manifold of $\mathbb{Y}_{12}\circ_h \mathbb{Y}_{23}$ is formed by gluing
\[
Y_{12}\setminus [0,1]_s\times \psi_{12}^{-1}(\Sigma_2) \text{ and } Y_{23}\setminus [-1,0]_s\times \psi_{23}^{-1}(\Sigma_2)
\]
along the common boundary $\{0\}\times \Sigma_2$, using the composition
\[
\psi_{12}^{-1} (\Sigma_2)\xrightarrow{\psi_{12}} \Sigma_2 \xrightarrow{\psi_{23}^{-1}}\psi_{23}^{-1}(\Sigma_2),
\]
which is orientation reversing. The condition \ref{P3} is verified by concatenating the co-closed 2-form $(\omega_h)_{12}$ with $(\omega_h)_{23}$. We shall formally write:
\[
\mathbb{Y}_{12}\circ_h \mathbb{Y}_{23}=(Y_{12}\circ_h Y_{23},\ \omega_{12}\circ_h\omega_{23},\ \cdots).
\]
The problem arises from \ref{P8}: it is not guaranteed that two admissible perturbations on $Y_{12}$ and $Y_{23}$ can be concatenated in a canonical way to form an admissible perturbation on $Y_{12}\circ_h Y_{23}$. Instead, we will pick an arbitrary collection of admissible perturbations for $\mathbb{Y}_{12}\circ_h \mathbb{Y}_{23}$, making $\circ_h$ not strictly associative.
As for 2-cells, their horizontal compositions are formed similarly, using $\psi_X$ instead.
Let us now construct the natural isomorphism $a(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2,\mathbb{T}Sigma_3,\mathbb{T}Sigma_4)$ in \ref{Y4'}: for any triple $(\mathbb{Y}_{12},\mathbb{Y}_{23},\mathbb{Y}_{34})$, define
\[
a(\mathbb{Y}_{12},\mathbb{Y}_{23},\mathbb{Y}_{34}): \mathbb{Y}_{12}\circ_h(\mathbb{Y}_{23}\circ_h\mathbb{Y}_{34})\to (\mathbb{Y}_{12}\circ_h\mathbb{Y}_{23})\circ_h\mathbb{Y}_{34}
\]
to be the \textit{identity} 2-cell. Indeed, as 1-cells in $\mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_4)$, the two sides have the same underlying metrics and closed 2-forms; only the admissible perturbations may differ.
\smallskip
Finally, for any 0-cell $\mathbb{T}Sigma$, we define its identity 1-cell $e_{\mathbb{T}Sigma}$ as
\[
(Y=[-1,1]_s\times \Sigma,\ \psi=\Id,\ g_Y=ds^2+g_{\Sigma},\ \omega=\mu+ds\wedge\lambda,\ \{\mathfrak{q}=0\}).
\]
For any 1-cell $\mathbb{Y}_{12}\in \mathbb{A}T(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2)$, one may set $\mathbb{Y}_{12}\circ_h e_{\mathbb{T}Sigma_2}$ and $e_{\mathbb{T}Sigma_1}\circ_h \mathbb{Y}_{12}$ to be just $\mathbb{Y}_{12}$, as they already have the same underlying metrics and closed 2-forms by our conventions for horizontal composition. In this way, the toroidal bi-category $\mathbb{A}T$ becomes strictly unital.
\subsection{The Monopole Floer 2-Functor} \label{Subsec2.3} The primary goal of this paper is to define the monopole Floer 2-functor:
\[
\HM: \mathbb{A}T\to \mathbb{A}R.
\]
We spell out the requirements on this 2-functor in the theorem below:
\begin{theorem}[The Gluing Theorem]\label{T2.5} There exists a 2-functor $\HM$ from the toroidal bi-category $\mathbb{A}T$ to the strict 2-category $\mathbb{A}R$ of finitely generated $\mathbb{N}R$-modules satisfying the following properties:
\begin{enumerate}[label=(G\arabic*)]
\item\label{G1} for any T-surfaces $\mathbb{T}Sigma_1, \mathbb{T}Sigma_2$, the functor \[
\HM_*: \mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)\to \mathbb{A}R(\star)
\]
is defined as in \cite[Theorem 1.5\ \&\ Remark 1.7]{Wang20}, and assigns to each 1-cell $\mathbb{Y}$ its monopole Floer homology group $\HM_*(\mathbb{Y})$;
\item\label{G2} for any $\mathbb{T}Sigma_1, \mathbb{T}Sigma_2,\mathbb{T}Sigma_3$, there is a natural isomorphism $\alpha(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2,\mathbb{T}Sigma_3)$ between the two compositions in the diagram below:
\[
\begin{tikzcd}[column sep=2cm]
\mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)\times \mathbb{A}T(\mathbb{T}Sigma_2,\mathbb{T}Sigma_3)\arrow[rd,"\circ_h"']\arrow[r,"\HM_*\times \HM_*"] & \mathbb{A}R(\star)\times\mathbb{A}R(\star) \arrow[r,"\circ_h"]& \mathbb{A}R(\star)\\
& \mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_3) \arrow[ru,"\HM_*"'].&
\end{tikzcd}
\]
In other words, for any composing pair $(\mathbb{Y}_{12},\mathbb{Y}_{23})$, there is an isomorphism of $\mathbb{N}R$-modules:
\[
\alpha: \HM_*(\mathbb{Y}_{12})\otimes_\mathbb{N}R\HM_*(\mathbb{Y}_{23})\to \HM_*(\mathbb{Y}_{12}\circ_h \mathbb{Y}_{23}),
\]
that is natural with respect to the 2-cell morphisms in $\mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)$ and $ \mathbb{A}T(\mathbb{T}Sigma_2,\mathbb{T}Sigma_3)$;
\item\label{G3} $\alpha$ is associative, meaning that the diagram
\begin{equation}\label{E2.1}
\begin{tikzcd}
\HM_*(\mathbb{Y}_{12})\otimes \HM_*(\mathbb{Y}_{23})\otimes \HM_*(\mathbb{Y}_{34})\arrow[r,"\Id\otimes \alpha"] \arrow[d,"\alpha\otimes \Id"]& \HM_*(\mathbb{Y}_{12})\otimes \HM_*(\mathbb{Y}_{23}\circ_h\mathbb{Y}_{34})\arrow[d,"\alpha"]\\
\HM_*(\mathbb{Y}_{12}\circ_h \mathbb{Y}_{23})\otimes \HM_*(\mathbb{Y}_{34})\arrow[dr,"\alpha"] &\HM_*(\mathbb{Y}_{12}\circ_h(\mathbb{Y}_{23}\circ_h \mathbb{Y}_{34})) \arrow[d, "{\HM_*(a(\mathbb{Y}_{12},\mathbb{Y}_{23},\mathbb{Y}_{34}))}","\cong"']\\
& \HM_*((\mathbb{Y}_{12}\circ_h\mathbb{Y}_{23})\circ_h \mathbb{Y}_{34})
\end{tikzcd}
\end{equation}
is commutative for any triple $(\mathbb{Y}_{12},\mathbb{Y}_{23},\mathbb{Y}_{34})\in \mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)\times \mathbb{A}T(\mathbb{T}Sigma_2,\mathbb{T}Sigma_3)\times \mathbb{A}T(\mathbb{T}Sigma_3,\mathbb{T}Sigma_4)$.
\item\label{G4} for any T-surface $\mathbb{T}Sigma$ and its identity $1$-cell $e_{\mathbb{T}Sigma}$, there is a canonical isomorphism
\[
\iota_{\mathbb{T}Sigma}: \HM_*(e_{\mathbb{T}Sigma})\to \mathbb{N}R,
\]
such that the gluing map
\[
\alpha: \HM_*(\mathbb{Y}_{12})\otimes \HM_*(e_{\mathbb{T}Sigma_2})\to \HM_*(\mathbb{Y}_{12}\circ_h e_{\mathbb{T}Sigma_2})=\HM_*(\mathbb{Y}_{12}),
\]
is simply $\Id \otimes \iota_{\mathbb{T}Sigma_2}$, followed by the identification $\HM_*(\mathbb{Y}_{12})\otimes_\mathbb{N}R \mathbb{N}R\cong \HM_*(\mathbb{Y}_{12})$. A similar property holds for $e_{\mathbb{T}Sigma_1}\circ_h \mathbb{Y}_{12}$.
\end{enumerate}
\end{theorem}
\subsection{A convention for $\mathbb{A}T(\emptyset,\mathbb{T}Sigma)$}\label{Subsec2.4} If we allow the empty surface $\emptyset$ to be a 0-cell of the toroidal bi-category $\mathbb{A}T$, then we must allow the underlying 3-manifold of a 1-cell $\mathbb{Y}=(Y,\omega,\cdots)$ to have closed components. This is not a serious problem, as long as on each closed component the 2-form $\omega$ is never balanced or negatively monotone with respect to any spin$^c$ structure, in the sense of \cite[Definition 29.1.1]{Bible}. This will allow us to apply the adjunction inequality in Proposition \ref{P4.2}. Instead of setting up the theory in this generality, we will simply define
\[
\mathbb{A}T(\emptyset,\mathbb{T}Sigma), \mathbb{A}T(\mathbb{T}Sigma, \emptyset),
\]
as categories in their own right, and do not regard them as part of the bi-category $\mathbb{A}T$. We can still define the horizontal composition
\[
\circ_h: \mathbb{A}T(\emptyset,\mathbb{T}Sigma_1)\times \mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)\to \mathbb{A}T(\emptyset,\mathbb{T}Sigma_2)
\]
in the usual way and make the assignment
\[
\mathbb{T}Sigma\mapsto \mathbb{A}T(\emptyset,\mathbb{T}Sigma)
\]
into a 2-functor (in a suitable sense) from $\mathbb{A}T$ to $\mathbb{C}AT$. However, the latter point of view is not needed for the purpose of this paper.
The category $\mathbb{A}T(\emptyset,\emptyset)$ consists of closed 3-manifolds equipped with imaginary valued closed 2-forms that are never balanced or negatively monotone on each component.
\section{Proof of the Gluing Theorem}\label{Sec3}
In this section, we present the proof of the Gluing Theorem \ref{T2.5}. The construction of the gluing map $\alpha$ in Theorem \ref{T2.5} \ref{G2} is based upon Floer's Excision Theorem \cite{BD95}, which was adapted to monopole Floer homology by Kronheimer--Mrowka \cite{KS}. We will follow the setup of \cite[Theorems 3.1\ \&\ 3.2]{KS} closely.
This section starts with an overview of the monopole Floer homology defined in \cite{Wang20}, which yields the monopole Floer functor $\HM_*$ in Theorem \ref{T2.5} \ref{G1}. The gluing map $\alpha$ is then constructed in Subsection \ref{Subsec3.3}.
\subsection{Review} Recall that a spin$^c$ structure $\mathfrak{s}_X$ on an oriented Riemannian 4-manifold $X$ is a pair $(S_X,\rho_4)$, where $S_X=S^+\oplus S^-$ is the spinor bundle and the bundle map $\rho_4: T^*X\to \operatorname{End}(S_X)$ defines the Clifford multiplication. A configuration $\gamma=(A,\Phi)\in \mathcal{C}(X,\mathfrak{s})$ consists of a smooth spin$^c$ connection $A$ and a smooth section $\Phi$ of $S^+$. Use $A^t$ to denote the connection induced by $A$ on $\bigwedge^2 S^+$. Let $\omega_X\in \Omega^2(X, i\mathbb{R})$ be a closed 2-form on $X$ and let $\omega^+_X$ denote its self-dual part. The Seiberg-Witten equations perturbed by $\omega_X$ are defined on the configuration space $\mathcal{C}(X,\mathfrak{s})$ by the formula:
\begin{equation}\label{SWEQ}
\left\{
\begin{array}{rl}
\half \rho_4(F_{A^t}^+)-(\Phi\Phi^*)_0&=\rho_4(\omega^+_X),\\
D_A^+\Phi&=0,
\end{array}
\right.
\end{equation}
where $D_A^+: \Gamma(S^+)\to \Gamma(S^-)$ is the Dirac operator and $(\Phi\Phi^*)_0=\Phi\Phi^*-\half |\Phi|^2\Id_{S^+}$ is the traceless part of the endomorphism $\Phi\Phi^*:S^+\to S^+$.
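To see that $(\Phi\Phi^*)_0$ is indeed traceless, note that $\operatorname{tr}(\Phi\Phi^*)=|\Phi|^2$ while $S^+$ has complex rank $2$, so
\[
\operatorname{tr}\big((\Phi\Phi^*)_0\big)=\operatorname{tr}(\Phi\Phi^*)-\half |\Phi|^2\operatorname{tr}(\Id_{S^+})=|\Phi|^2-\half|\Phi|^2\cdot 2=0.
\]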
\smallskip
When it comes to an oriented Riemannian 3-manifold $Y$, the dimensional reduction of \eqref{SWEQ} is obtained by looking at \eqref{SWEQ} on the product manifold $\mathbb{R}_t\times Y$ and requiring the configuration $(A,\Phi)$ to be $\mathbb{R}_t$-invariant. A spin$^c$ structure $\mathfrak{s}$ on $Y$ is again a pair $(S,\rho_3)$, where the spinor bundle $S=S^+$ has complex rank 2 and $\rho_3: T^*Y\to \operatorname{End}(S)$ defines the Clifford multiplication. The 3-dimensional Seiberg-Witten equations now read:
\begin{equation}\label{3DDSWEQ}
\left\{
\begin{array}{rl}
\half \rho_3(F_{B^t})-(\Psi\Psi^*)_0&=\rho_3(\omega),\\
D_B\Psi&=0,
\end{array}
\right.
\end{equation}
where $B$ is a spin$^c$ connection and $\Psi\in \Gamma(Y,S)$. Here $\omega\in \Omega^2(Y, i\mathbb{R})$ is a closed 2-form and $D_B:\Gamma(Y,S)\to \Gamma(Y,S)$ denotes the Dirac operator on $Y$.
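Schematically, the reduction works as follows; the identities below depend on the orientation and Clifford conventions of \cite{Bible}, which we do not spell out here. For an $\mathbb{R}_t$-invariant configuration $(A,\Phi)=(\pi^*B,\Psi)$ on $\mathbb{R}_t\times Y$, one has $F_{A^t}=\pi^*F_{B^t}$, whose self-dual part is $\half(\pi^*F_{B^t}+dt\wedge *_3F_{B^t})$, and a self-dual form of this shape acts on $S^+$ as $\rho_3(F_{B^t})$ does. Hence
\begin{align*}
\rho_4(F_{A^t}^+)&=\rho_3(F_{B^t}), & D_A^+\Phi&=\rho_4(dt)\big(\tfrac{\partial}{\partial t}+D_B\big)\Psi=\rho_4(dt)\,D_B\Psi,
\end{align*}
and similarly $\rho_4(\omega^+)=\rho_3(\omega)$ for the pull-back of a closed 2-form $\omega$ on $Y$, so the equations \eqref{SWEQ} reduce to \eqref{3DDSWEQ}.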
\subsection{Results from the Second Paper}\label{Subsec3.2} In this subsection, we review the construction of the monopole Floer homology from \cite{Wang20}, which defines the functor in Theorem \ref{T2.5} \ref{G1}. For any $T$-surface $\mathbb{T}Sigma$, define $-\mathbb{T}Sigma\colonequals (-\Sigma, g_\Sigma, -\lambda,\mu)$ to be the orientation reversal of $\mathbb{T}Sigma$. Since the category $\mathbb{A}T(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2)$ is more or less equivalent to $\mathbb{A}T(\emptyset, (-\mathbb{T}Sigma_1)\cup \mathbb{T}Sigma_2)$ (only the property \ref{P1.2} may be different), we focus on the case when $\mathbb{T}Sigma_1=\emptyset$.
Given any 1-cell $\mathbb{Y}=(Y,\psi, g_Y,\omega, \{\mathfrak{q}\})\in \mathbb{A}T(\emptyset, \mathbb{T}Sigma)$, we first attach a cylindrical end $[1,\infty)_s\times\Sigma$ to $Y$ to obtain a complete Riemannian manifold:
\[
\widehat{Y}=Y\cup_{\psi} [1,\infty)_s\times\Sigma.
\]
The metric on the end is given by $ds^2+g_\Sigma$. The closed 2-form $\omega$ is extended constantly as $\mu+ds\wedge\lambda$ on $[1,\infty)_s\times\Sigma$, and the extension is denoted also by $\omega\in \Omega^2(\widehat{Y}, i\mathbb{R})$.
Let $\mathfrak{s}_{std}=(S_{std},\rho_{std,3})$ be the standard spin$^c$ structure on $\mathbb{R}_s\times\Sigma$ with $c_1(\mathfrak{s}_{std})=0\in H^2(\Sigma; \mathbb{Z})$. The spinor bundle $S_{std}$ can be constructed explicitly as
\[
S_{std}=\mathbb{C}\oplus \Lambda^{0,1} \Sigma.
\]
See \cite[Section 2]{Wang20} for more details. A relative spin$^c$ structure $\widehat{\mathfrak{s}}$ on $Y$ is a pair $(\mathfrak{s}, \varphi)$ where $\mathfrak{s}=(S,\rho_3)$ is a spin$^c$ structure on $Y$ and
\[
\varphi: (S,\rho_3)|_{\partial Y}\to \psi^*\mathfrak{s}_{std}|_{\partial Y}
\]
is an isomorphism of spin$^c$ structures near the boundary, compatible with $\psi:\partial Y\to \Sigma$. The set of isomorphism classes of relative spin$^c$ structures on $Y$
\[
\Spincr(Y)
\]
is a torsor over $H^2(Y,\partial Y; \mathbb{Z})$. There is a natural forgetful map from $\Spincr(Y)$ to the set of isomorphism classes of spin$^c$ structures:
\[
\Spincr(Y) \to \Spinc(Y),\ \widehat{\mathfrak{s}}=(\mathfrak{s},\varphi)\mapsto \mathfrak{s},
\]
whose fiber is acted on freely and transitively by $H^1(\Sigma, \mathbb{Z})/\im (H^1(Y,\mathbb{Z}))$, reflecting the change of boundary trivializations. Any $\widehat{\mathfrak{s}}\in \Spincr(Y)$ extends to a relative spin$^c$ structure on $\widehat{Y}$, denoted also by $\widehat{\mathfrak{s}}$.
The key observation is that on $\mathbb{R}_s\times\Sigma$, the 3-dimensional Seiberg-Witten equations \eqref{3DDSWEQ} perturbed by $\omega=\mu+ds\wedge\lambda$ have a canonical $\mathbb{R}_s$-invariant solution, denoted by
\[
(B_*,\Psi_*),
\]
which is also the unique finite energy solution on $\mathbb{R}_s\times\Sigma$, up to gauge, by our assumptions on $(\lambda,\mu)$. This result is due to Taubes \cite[Proposition 4.4\ \&\ 4.7]{Taubes01}. See \cite[Theorem 2.6]{Wang20} for the precise statement that we use here.
When it comes to $\widehat{Y}$, each configuration is required to approximate this special solution $(B_*,\Psi_*)$ as $s\to\infty$. Take $(B_0,\Psi_0)$ to be a smooth configuration on $\widehat{Y}$ which agrees with $(B_*,\Psi_*)$ on the cylindrical end $[1,\infty)_s\times \Sigma$, and consider the configuration space for any $k> \half$:
\begin{align*}
\mathcal{C}_k(\widehat{Y},\widehat{\mathfrak{s}})=\{(B,\Psi): (b,\psi)=(B,\Psi)-(B_0,\Psi_0)\in L^2_k (\widehat{Y}, iT^*\widehat{Y}\oplus S)
\},
\end{align*}
which is acted on freely by the gauge group
\begin{align*}
\mathbb{C}G_{k+1}(\widehat{Y})=\{u: \widehat{Y}\to S^1\subset \mathbb{C}: u-1\in L^2_{k+1} (\widehat{Y}, \mathbb{C})\},
\end{align*}
using the formula:
\[
u(B,\Psi)=(B-u^{-1}du, u\Psi).
\]
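As a routine consistency check, this action preserves the solution set of \eqref{3DDSWEQ}: the induced connection $B^t$ changes by $-2u^{-1}du$, the 1-form $u^{-1}du$ is closed, and $|u|=1$, so
\begin{align*}
F_{(B-u^{-1}du)^t}&=F_{B^t}-2\,d(u^{-1}du)=F_{B^t},\\
\big((u\Psi)(u\Psi)^*\big)_0&=\big(|u|^2\,\Psi\Psi^*\big)_0=(\Psi\Psi^*)_0,\\
D_{B-u^{-1}du}(u\Psi)&=u\,D_B\Psi+\rho_3(du)\Psi-\rho_3(u^{-1}du)\,u\Psi=u\,D_B\Psi.
\end{align*}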
The perturbed Chern-Simons-Dirac functional on $\mathcal{C}_k(\widehat{Y}, \widehat{\mathfrak{s}})$ is then defined as
\begin{equation}\label{E3.3}
\mathbb{C}L_\omega (B,\Psi)=-\frac{1}{8}\int_{\widehat{Y}} (B^t-B_0^t)\wedge (F_{B^t}+F_{B_0^t})+\half \int_{\widehat{Y}}\langle D_B\Psi, \Psi\rangle+\half \int_{\widehat{Y}}(B^t-B_0^t)\wedge \omega.
\end{equation}
For any 1-cell $\mathbb{Y}\in \mathbb{A}T(\emptyset, \mathbb{T}Sigma)$ and any relative spin$^c$ structure $\widehat{\mathfrak{s}}\in \Spincr(Y)$, the monopole Floer homology $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$ is defined as the Morse homology of $\mathbb{C}L_\omega$ on the quotient configuration space $\mathcal{C}_k(\widehat{Y},\widehat{\mathfrak{s}})/\mathbb{C}G_{k+1}(\widehat{Y})$. However, it is not guaranteed that $\mathbb{C}L_\omega$ descends to a Morse function on $\mathcal{C}_k(\widehat{Y},\widehat{\mathfrak{s}})/\mathbb{C}G_{k+1}(\widehat{Y})$, so an admissible perturbation $\mathfrak{q}$ of $\mathbb{C}L_\omega$, already encoded in \ref{P8}, is needed to make critical points non-degenerate and moduli spaces of flowlines regular. The main result of \cite{Wang20} says that the monopole Floer homology
\[
\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})
\]
is well-defined as a finitely generated module over the mod $2$ Novikov field $\mathbb{N}R$. Since we have assumed in \ref{T3} that for any $T$-surface $\mathbb{T}Sigma$, both $\lambda$ and $\mu$ are non-vanishing on each component $\Sigma^{(i)}\subset \Sigma$, we have a stronger statement:
\begin{theorem}[{\cite[Theorem 1.4]{Wang20}}] For any 1-cell $\mathbb{Y}\in \mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)$, the direct sum
\[
\HM_*(\mathbb{Y})\colonequals \bigoplus_{\widehat{\mathfrak{s}}\in \Spincr(Y)}\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})
\]
is finitely generated over $\mathbb{N}R$. In particular, the group $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$ is non-trivial for only finitely many relative spin$^c$ structures.
\end{theorem}
As explained in \cite[Section 17]{Wang20}, this Floer homology is further enhanced into a functor:
\begin{theorem}[{\cite[Theorem 1.6 \& Remark 1.7]{Wang20}}]\label{T3.2} For any $T$-surfaces $\mathbb{T}Sigma_1,\mathbb{T}Sigma_2$, there is a functor from $\mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)$ to the category $\mathbb{A}R(\star)$ of finitely generated $\mathbb{N}R$-modules:
\[
\HM_*: \mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)\to \mathbb{A}R(\star),
\]
which assigns to each 1-cell $\mathbb{Y}$ its monopole Floer homology group $\HM_*(\mathbb{Y})$.
\end{theorem}
\begin{remark} In the second paper \cite{Wang20}, we focused on connected 3-manifolds with toroidal boundary, but the results generalize to the disconnected case with no difficulty.
\end{remark}
For any 2-cell morphism $\mathbb{X}:\mathbb{Y}_1\to \mathbb{Y}_2$, the cobordism map
\[
\HM_*(\mathbb{X}): \HM_*(\mathbb{Y}_1)\to \HM_*(\mathbb{Y}_2)
\]
is constructed as follows. We focus on the case when $\mathbb{T}Sigma_1=\emptyset$. For the underlying strict cobordism $X: Y_1\to Y_2$, pick a diffeomorphism $\psi_X: W_X\to [-1,1]_t\times \Sigma$ and a closed 2-form $\omega_X\in \Omega^2(X, i\mathbb{R})$ belonging to the classes $[\psi_X]$ and $[\omega_X]_{cpt}$ respectively, as in \ref{Q3} and \ref{Q6}. Choose a metric $g_X$ on $X$ compatible with its corner structure. We attach an end in the spatial direction to obtain a cobordism from $\widehat{Y}_1$ to $\widehat{Y}_2$:
\[
\widehat{X}\colonequals X\ \bigcup_{\psi_X}\ [-1,1]_t\times [0,\infty)_s\times\Sigma,
\]
and attach cylindrical ends in the time direction to form a complete Riemannian manifold:
\[
\mathbb{C}X\colonequals (-\infty, -1]_t\times \widehat{Y}_1\ \bigcup\ \widehat{X}\ \bigcup\ [1,+\infty)_t\times \widehat{Y}_2.
\]
The closed 2-form $\omega_X$ extends to a 2-form on $\mathbb{C}X$ by setting
\[
\omega_X=\left\{\begin{array}{ll}
\omega_1 & \text{ on } (-\infty, -1]_t\times \widehat{Y}_1,\\
\omega_2 & \text{ on }[1,+\infty)_t\times \widehat{Y}_2,\\
\mu+ds\wedge\lambda& \text{ on } \mathbb{R}_t\times [0,\infty)_s\times \Sigma.
\end{array}
\right.
\]
The cobordism map $\HM_*(\mathbb{X})$ is then defined by counting 0-dimensional solutions (modulo gauge) to the Seiberg-Witten equations \eqref{SWEQ} with the 2-form $\omega_X$ defined above. Some additional perturbations are required here to make moduli spaces regular; see \cite[Section 16]{Wang20} for more details. The cobordism map $\HM_*(\mathbb{X})$ depends only on the isotopy class of $\psi_X$ and the relative cohomology class of $\omega_X$, and is independent of the metric $g_X$ on $X$.
\subsection{The Canonical Grading} By \cite[Lemma 28.1.1]{Bible}, the standard spinor $\Psi_*$ on $\mathbb{R}_s\times\Sigma$ determines a canonical oriented 2-plane field $\xi_*$ that is $\mathbb{R}_s$-invariant.
For any 3-manifold $Y$ with $\partial Y\cong \Sigma$, an oriented 2-plane field $\xi$ is called relative if $\xi=\xi_*$ near the boundary. Any homotopy of oriented relative 2-plane fields is required to preserve $\xi_*$ near the boundary $\partial Y$.
Inspired by the construction in \cite[Section 28]{Bible}, the author introduced a canonical grading on the group $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$ in \cite[Section 18]{Wang20}. The grading set
\[
\Xi^{\pi}(\mathbb{Y},\widehat{\mathfrak{s}})
\]
is identified with the set of homotopy classes of oriented relative 2-plane fields that belong to the relative spin$^c$ structure $\widehat{\mathfrak{s}}$.
\subsection{Euler Characteristics} The 3-manifold $Y$ is homology oriented if we pick an orientation of $\bigoplus_{i=0}^3 H_i(Y;\mathbb{R})$. Any homology orientation of $Y$ induces a canonical mod 2 grading on $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$ (cf. \cite{MT96}\cite[Subsection 18.2]{Wang20}). The graded Euler characteristic of $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$ is then well-defined and recovers a classical algebraic invariant of 3-manifolds with toroidal boundary. For future reference, we record the statement below:
\begin{theorem}[\cite{MT96,T98,Taubes01}]\label{T3.4} The graded Euler characteristic of $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$:
\begin{align*}
\SW(Y): \Spincr(Y)&\to \mathbb{Z}\\
\widehat{\mathfrak{s}} &\mapsto \chi(\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})),
\end{align*}
is independent of the auxiliary data $(g_Y,g_\Sigma; \omega,\lambda,\mu)$ and is equal to the Milnor-Turaev torsion $T(Y)$ up to an overall sign ambiguity. Moreover, the function $\SW(Y)$ is invariant under the conjugation symmetry $\widehat{\mathfrak{s}}\leftrightarrow \widehat{\mathfrak{s}}^*$.
\end{theorem}
\subsection{Identity 1-cells}\label{Subsec3.2.5} Before we proceed to the construction of the gluing map, let us first define the canonical isomorphism
\[
\iota_{\mathbb{T}Sigma}: \HM_*(e_{\mathbb{T}Sigma})\to \mathbb{N}R
\]
in Theorem \ref{T2.5}\ \ref{G4}. By the definition of the identity 1-cell $e_{\mathbb{T}Sigma}$, $\HM_*(e_{\mathbb{T}Sigma})$ is computed using the product manifold $\mathbb{R}_s\times \Sigma$ with $\omega=\mu+ds\wedge\lambda$. As noted earlier, the 3-dimensional Seiberg-Witten equations \eqref{3DDSWEQ} have a unique finite energy solution $(B_*,\Psi_*)$ up to gauge for the standard relative spin$^c$ structure $\widehat{\mathfrak{s}}_{std}$. Moreover, $(B_*,\Psi_*)$ is irreducible and non-degenerate as a critical point of $\mathbb{C}L_\omega$ in the quotient configuration space; see the proof of \cite[Proposition 12.1]{Wang20}. By \cite[Theorem 2.4]{Wang20}, any downward gradient flowline of $\mathbb{C}L_\omega$ connecting $(B_*,\Psi_*)$ to itself is necessarily a constant path. As a result, the monopole Floer chain complex of $(e_\mathbb{T}Sigma,\widehat{\mathfrak{s}}_{std})$ is generated by this special solution, with trivial differential. The canonical isomorphism $\iota_{\mathbb{T}Sigma}$ is then defined by sending this generator to $1\in \mathbb{N}R$. When $\widehat{\mathfrak{s}}\neq \widehat{\mathfrak{s}}_{std}$, the 3-dimensional equations \eqref{3DDSWEQ} have no solutions at all, so $\HM_*(e_{\mathbb{T}Sigma},\widehat{\mathfrak{s}})=\{0\}$.
\subsection{The Gluing Map}\label{Subsec3.3} Having defined the functor $\HM_*$ in Theorem \ref{T2.5} \ref{G1} and the canonical isomorphism $\iota_{\mathbb{T}Sigma}$ in \ref{G4}, let us now construct the gluing map $\alpha$ in \ref{G2}. The idea is borrowed from the proof of Floer's excision theorem \cite{BD95} and from \cite[Theorem 3.2]{KS}. We focus on the special case when $\mathbb{T}Sigma_1=\mathbb{T}Sigma_3=\emptyset$ and construct the map
\[
\alpha: \HM_*(\mathbb{Y}_1)\otimes_\mathbb{N}R\HM_*(\mathbb{Y}_2)\to \HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_2),
\]
for any 1-cells $\mathbb{Y}_1\in \mathbb{A}T(\emptyset, \mathbb{T}Sigma)$ and $\mathbb{Y}_2\in \mathbb{A}T(\mathbb{T}Sigma, \emptyset)$. The general case is not essentially different.
Let $Y_i$ be the underlying 3-manifold of $\mathbb{Y}_i$ and $Y_1\circ_h Y_2$ be the closed 3-manifold obtained by gluing $Y_1$ and $Y_2$ along $\{0\}\times \Sigma$. In what follows, we also work with the truncated 3-manifolds
\[
\begin{array}{lcl}
Y_{1,-}\colonequals& \{s\leq -1\}&\subset \widehat{Y}_1,\\
Y_{2,-}\colonequals&\{s\geq 1\}&\subset \widehat{Y}_2.
\end{array}
\]
The gluing map $\alpha$ is induced by an explicit strict cobordism
\[
X: Y_1\ \coprod\ Y_2\to [-1,1]_s\times\Sigma\ \coprod\ Y_1\circ_h Y_2,
\]
which we now describe. Let $\Omega$ be an octagon equipped with a metric such that the boundary of $\Omega$ consists of geodesic segments of length 2 and the internal angles are all $\pi/2$. Moreover, this metric is hyperbolic somewhere in the interior and flat near the boundary.
\begin{figure}[H]
\centering
\begin{overpic}[scale=.12]{Pic1.png}
\put(5,10){\small$\gamma_1$}
\put(85,10){\small$\gamma_2$}
\put(85,85){\small$\gamma_3$}
\put(5,85){\small$\gamma_4$}
\put(47,47){\small$\Omega$}
\put(-20,25){\small$-1$}
\put(107,25){\small$1$}
\put(107,65){\small$-1$}
\put(-35,65){\small$h=1$}
\put(20,-12){\small$-1$}
\put(67,-12){\small 1}
\put(27,103){\small 1}
\put(63,103){\small $-1$}
\end{overpic}
\caption{The surface $\Omega$ with corners}\label{Pic1}
\end{figure}
The product $\Omega\times\Sigma$ is now a 4-manifold with corners. The desired strict cobordism $X$ is then obtained by attaching $[-1,1]_t\times Y_{1,-}$ to $\gamma_1\times \Sigma$ and $[-1,1]_t\times Y_{2,-}$ to $\gamma_2\times \Sigma$. Arrows in Figure \ref{Pic1} indicate the positive direction of the time coordinate $t$.
\smallskip
To define the closed 2-form $\omega_X$, let $h: \Omega\to \mathbb{R}$ be a function that equals $1$ on $\gamma_2\cup \gamma_4$ and $-1$ on $\gamma_1\cup \gamma_3$. Also, $h$ is required to be linear on the other four boundary segments of $\Omega$. Set
\begin{equation}\label{E3.1}
\omega_X=\mu+dh\wedge \lambda
\end{equation}
on $\Omega\times\Sigma$. The 4-manifold cobordism
\[
\widehat{X}:\widehat{Y}_1\ \coprod\ \widehat{Y}_2\to (\mathbb{R}_s\times\Sigma)\ \coprod\ (Y_1\circ_h Y_2)
\]
can now be depicted schematically as follows:
\begin{figure}[H]
\centering
\begin{overpic}[scale=.10]{Pic2.png}
\put(35,0){$Y_1\circ _h Y_2$}
\put(-20,40){$\widehat{Y}_1\Rightarrow$}
\put(95,40){$\Leftarrow \widehat{Y}_2$}
\put(35,90){$\mathbb{R}_s\times\Sigma$}
\put(-70,10){$[-1,1]_t\times Y_1\rightsquigarrow$}
\put(95,10){$\leftsquigarrow [-1,1]_t\times Y_2$}
\put(105,80){$\leftsquigarrow [-1,1]_t\times (-\infty,-1]_s\times \Sigma$}
\put(-120,80){$[-1,1]_t\times [1,\infty)_s\times\Sigma\rightsquigarrow$}
\end{overpic}
\caption{The cobordism $\widehat{X}$.}
\label{Pic2}
\end{figure}
The arrows $``\to"$ in Figure \ref{Pic2} indicate the positive direction of the spatial coordinate $s$. As a result, the complete Riemannian manifold $\mathbb{C}X$ obtained by attaching cylindrical ends to $\widehat{X}$ has two planar ends: $\mathbb{R}_t\times [1,+\infty)_s\times\Sigma$ and $\mathbb{R}_t\times (-\infty,-1]_s\times\Sigma$.
\begin{remark} In order to make $\omega_X$ into a smooth form on $\mathbb{C}X$, the function $h:\Omega\times\Sigma\to \mathbb{R}$ must extend to a smooth function on $\mathbb{C}X$ such that $h\mathbf{e}quiv s$ on any planar end. One may think of $h$ as an extension of the spatial coordinate $s$ over the region $\Omega\times\Sigma$.
\mathbf{e}nd{remark}
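Note that $\omega_X$ is closed. Assuming, as in the construction of the planar ends, that $\mu$ and $\lambda$ are closed forms pulled back from $\Sigma$ (an assumption made only for this consistency check), we have
\[
d\omega_X=d\mu+d(dh\wedge \lambda)=d\mu+(d\,dh)\wedge\lambda-dh\wedge d\lambda=0,
\]
since $d\mu=0$, $d(dh)=0$ and $d\lambda=0$.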
On the other hand, one can draw the cobordism $\widehat{X}$ vertically and regard $\Omega$ as the part of the pair-of-pants cobordism that contains the saddle point:
\begin{figure}[H]
\centering
\begin{overpic}[scale=.10]{Pic3.png}
\mathfrak{p}ut(35,-4){$Y_1\circ _h Y_2$}
\mathfrak{p}ut(-10,42){$\widehat{Y}_1$}
\mathfrak{p}ut(105,42){$\widehat{Y}_2$}
\mathfrak{p}ut(105,12){$\mathbb{R}_s\times\Sigma$}
\mathfrak{p}ut(45,42){$\Omega \times\Sigma$}
\mathfrak{p}ut(-31,3){$[-1,1]_t\times Y_1\rightsquigarrow$}
\mathfrak{p}ut(80,3){$\leftsquigarrow [-1,1]_t\times Y_2$}
\mathfrak{p}ut(87,32){$\leftsquigarrow [-1,1]_t\times (-\infty,-1]_s\times \Sigma$}
\mathfrak{p}ut(-45,32){$[-1,1]_t\times [1,\infty)_s\times\Sigma\rightsquigarrow$}
\mathbf{e}nd{overpic}
\caption{Draw $\widehat{X}$ vertically.}
\label{Pic3}
\mathbf{e}nd{figure}
Since the base ring $\mathbb{N}R$ is also a field, we use the K\"{u}nneth formula to identify
\[
\HM_*(\mathbb{Y}_1\coprod \mathbb{Y}_2)\cong\HM_*(\mathbb{Y}_1)\otimes_\mathbb{N}R\HM_*(\mathbb{Y}_2).
\]
The gluing map $\alpha$ is then defined as the cobordism map induced from $\mathbb{X}=(X,\omega_X)$ and multiplied by a normalizing constant $\mathbf{e}ta\in \mathbb{N}R$:
\[
\begin{array}{rcc}
\alpha: \HM_*(\mathbb{Y}_1)\otimes_\mathbb{N}R \HM_*(\mathbb{Y}_2)&\mathbb{X}rightarrow{\HM_*(\mathbb{X})}& \HM_*(e_{\mathbb{T}Sigma})\otimes_\mathbb{N}R \HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_2)\\
&\mathbb{X}rightarrow{\iota_{\mathbb{T}Sigma}\otimes \Id}&\mathbb{N}R\otimes_\mathbb{N}R \HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_2)\\
&\mathbb{X}rightarrow{\mathbf{e}ta\times}&\HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_2),
\mathbf{e}nd{array}
\]
where
\[
\mathbf{e}ta=\prod_{i=1}^n\mathfrak{f}rac{1}{t_i-t_i^{-1}}\in \mathbb{N}R\text{ with }t_i=q^{|2\mathfrak{p}i i\langle \mu, [\Sigma^{(i)}]\rangle|},
\]
and $\Sigma^{(i)}$ is the $i$-th component of $\Sigma=\coprod_{i=1}^n\Sigma^{(i)}$. The inverse $\beta$ of $\alpha$ is induced from the opposite cobordism of $\mathbb{X}$, denoted by $\mathbb{X}'=(X',\omega_X')$, and is normalized by the same constant $\mathbf{e}ta$:
\[
\beta: \HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_2)\mathbb{X}rightarrow{\mathbf{e}ta\ \otimes\ -\ } \HM_*(e_{\mathbb{T}Sigma})\otimes_\mathbb{N}R \HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_2)\mathbb{X}rightarrow{\HM_*(\mathbb{X}')}\HM_*(\mathbb{Y}_1)\otimes_\mathbb{N}R \HM_*( \mathbb{Y}_2)
\]
\begin{figure}[H]
\centering
\begin{overpic}[scale=.10]{Pic4.png}
\mathfrak{p}ut(35,31){$Y_1\circ _h Y_2$}
\mathfrak{p}ut(0,0){$\widehat{Y}_1$}
\mathfrak{p}ut(82,0){$\widehat{Y}_2$}
\mathfrak{p}ut(50,45){$\mathbb{R}_s\times\Sigma$}
\mathbf{e}nd{overpic}
\caption{The opposite cobordism $\widehat{X}'$.}
\label{Pic4}
\mathbf{e}nd{figure}
The choice of the normalizing constant $\mathbf{e}ta$ is justified by the following theorem, which says that the gluing map $\alpha$ is indeed an isomorphism with inverse $\beta$.
\begin{theorem}[Floer's Excision Theorem]\label{T3.3} The gluing maps $\alpha$ and $\beta$ constructed above are mutually inverse, i.e.,
\[
\alpha\circ \beta=\Id_{\HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_2)},\ \beta\circ \alpha=\Id_{\HM_*(\mathbb{Y}_1)\otimes\HM_*(\mathbb{Y}_2)}.
\]
\mathbf{e}nd{theorem}
The gluing map $\alpha$ preserves the canonical grading on $\HM_*(\mathbb{Y}_i)$. There are natural concatenation maps:
\begin{align*}
-\ \circ_h\ -\ &:\Spincr(Y_1)\times \Spincr(Y_2)\to \Spincr(Y_1\circ_h Y_2),\\
-\ \circ_h\ -\ &:\mathcal{X}i^{\mathfrak{p}i}(\mathbb{Y}_1,\widehat{\mathfrak{s}}_1)\times\mathcal{X}i^{\mathfrak{p}i}(\mathbb{Y}_2,\widehat{\mathfrak{s}}_2)\to \mathcal{X}i^{\mathfrak{p}i}(\mathbb{Y}_1\circ_h \mathbb{Y}_2, \widehat{\mathfrak{s}}_1\circ_h\widehat{\mathfrak{s}}_2).
\mathbf{e}nd{align*}
Indeed, any two relative \mathfrak{s}pinc structures $\widehat{\mathfrak{s}}_i=(\mathfrak{s}_i,\varphi_i)$, $i=1,2$, can be composed using the map
\[
(S_1,\rho_{3,1})|_{\mathfrak{p}artial Y_1} \mathbb{X}rightarrow{\varphi_1} \mathfrak{s}_{std}|_\Sigma\mathbb{X}rightarrow{\varphi_2^{-1}} (S_2,\rho_{3,2})|_{\mathfrak{p}artial Y_2},
\]
to produce a \mathfrak{s}pinc structure on $Y_1\circ_h Y_2$. Meanwhile, any oriented relative 2-plane fields $(\mathbb{X}i_1,\mathbb{X}i_2)$ can be composed, since they agree with the canonical 2-plane field $\mathbb{X}i_*$ near $\Sigma$.
\mathfrak{s}mallskip
In the special case considered so far, $Y_1\circ_h Y_2$ is a closed 3-manifold, so $\Spincr(Y_1\circ_h Y_2)=\Spinc(Y_1\circ_h Y_2)$. Here $\mathcal{X}i^{\mathfrak{p}i}(\mathbb{Y}_1\circ_h \mathbb{Y}_2, \mathfrak{s})$ is the subset of $\mathfrak{p}i_0(\mathcal{X}i(Y_1\circ_h Y_2))$, the set of homotopy classes of oriented 2-plane fields on $Y_1\circ_h Y_2$, consisting of those classes that belong to $\mathfrak{s}$; see \cite[P. 585]{Bible} for the precise definition of $\mathfrak{p}i_0(\mathcal{X}i(Y_1\circ_h Y_2))$.
\begin{theorem}\label{T3.5} The gluing map $\alpha: \HM_*(\mathbb{Y}_1)\otimes_\mathbb{N}R\HM_*(\mathbb{Y}_2)\to \HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_2)$ preserves the relative \mathfrak{s}pinc grading and the canonical grading by the homotopy classes of oriented relative 2-plane fields, meaning that $\alpha$ restricts to a map
\[
\alpha: \bigoplus_{ \widehat{\mathfrak{s}}_1\circ_h \widehat{\mathfrak{s}}_2=\mathfrak{s}}\HM_*(\mathbb{Y}_1,\widehat{\mathfrak{s}}_1)\otimes \HM_*(\mathbb{Y}_2,\widehat{\mathfrak{s}}_2) \to \HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_2,\widehat{\mathfrak{s}}),
\]
which is an isomorphism by Theorem \ref{T3.3}. Moreover, if an element $(x_1, x_2) $ belongs to the grading $([\mathbb{X}i_1],[\mathbb{X}i_2])\in \mathcal{X}i^{\mathfrak{p}i}(\mathbb{Y}_1,\widehat{\mathfrak{s}}_1)\times\mathcal{X}i^{\mathfrak{p}i}(\mathbb{Y}_2,\widehat{\mathfrak{s}}_2)$, then $\alpha(x_1\otimes x_2)$ is in the grading $[\mathbb{X}i_1]\circ_h[\mathbb{X}i_2]$.
\mathbf{e}nd{theorem}
The rest of Subsection \ref{Subsec3.3} is devoted to the proofs of Theorems \ref{T3.3} and \ref{T3.5}.
\begin{proof}[Proof of Theorem \ref{T3.3}] The argument in \cite[Theorem 3.2]{KS} carries over to our case with little change. We focus on the second identity $\beta\circ \alpha=\Id_{\HM_*(\mathbb{Y}_1)\otimes\HM_*(\mathbb{Y}_2)}$ to explain the choice of the normalizing constant $\mathbf{e}ta$. The map $\beta\circ \alpha$ is identical to the one induced by $\mathbb{X}'\circ _v \mathbb{X}$:
\begin{figure}[H]
\centering
\begin{overpic}[scale=.08]{Pic5.png}
\mathfrak{p}ut(-5,0){$\widehat{Y}_1$}
\mathfrak{p}ut(85,0){$\widehat{Y}_2$}
\mathfrak{p}ut(30,45){\mathfrak{s}mall $k\times \Sigma$}
\mathbf{e}nd{overpic}
\caption{}
\label{Pic5}
\mathbf{e}nd{figure}
To compare $\mathbb{X}'\circ_v \mathbb{X}$ with the product cobordism from $\mathbb{Y}_1\coprod \mathbb{Y}_2$ to itself, we stretch the neck along a union of 3-tori $k\times \Sigma$, where $k$ is the red circle in Figure \ref{Pic5}. To specify the closed 2-form $\omega_4$ in the Seiberg-Witten equations \mathbf{e}qref{SWEQ} as we vary the metrics, we regard $k$ as a curve in $\Omega'\circ_v \Omega$:
\begin{figure}[H]
\centering
\begin{overpic}[scale=.12]{Pic6.png}
\mathfrak{p}ut(-20,70){$\widehat{Y}_1\mathbb{R}ightarrow$}
\mathfrak{p}ut(-20,20){$\widehat{Y}_1\Leftarrow$}
\mathfrak{p}ut(25,101){$k$}
\mathfrak{p}ut(55,20){$\mathbb{R}ightarrow\widehat{Y}_2$}
\mathfrak{p}ut(55,70){$\Leftarrow\widehat{Y}_2$}
\mathfrak{p}ut(5,92){$1$}
\mathfrak{p}ut(40,92){$-1$}
\mathfrak{p}ut(5,3){$1$}
\mathfrak{p}ut(40,3){$-1$}
\mathfrak{p}ut(5,65){$-1$}
\mathfrak{p}ut(40,65){$1$}
\mathfrak{p}ut(10,20){$\Omega'$}
\mathbf{e}nd{overpic}
\caption{}
\label{Pic6}
\mathbf{e}nd{figure}
Here $\Omega'$ is the opposite cobordism of $\Omega$, regarded as part of a pair-of-pants cobordism. In Figure \ref{Pic6}, the top horizontal edge is identified with the bottom edge. On $\Omega'\circ_v\Omega$, one may homotope the function $h: \Omega'\circ_v \Omega\to \mathbb{R}$ rel boundary such that $h\mathbf{e}quiv 1/2$ on $[-1/2,1/2]_s\times k$, the tubular neighborhood of $k$ colored orange in Figure \ref{Pic6}. As we stretch the neck $[-1/2,1/2]_s\times k\times \Sigma$, the 2-form $\omega_4$ is set to be $\mu+dh\wedge\lambda$ on $(\Omega'\circ_v\Omega)\times\Sigma$. In particular, $\omega_4\mathbf{e}quiv \mu$ on the neck $[-1/2,1/2]_s\times k\times \Sigma$.
This 2-form $\omega_4$ is relatively cohomologous to the concatenation $\omega_X'\circ_v\omega_X$. Indeed, their difference is $d(f\wedge\lambda)$ for a compactly supported smooth function $f: \Omega'\circ_v \Omega\to \mathbb{R}$. Thus we can use $\omega_4$ to compute the cobordism map of $\mathbb{X}'\circ_v \mathbb{X}$.
As the underlying 4-manifold of $\mathbb{X}'\circ_v \mathbb{X}$ is completely stretched along $k\times\Sigma$ in Figure \ref{Pic5}, we need the following result concerning the monopole Floer homology of the 3-torus $\mathbb{T}^3$:
\begin{lemma}\label{L3.4} Let $\mathbb{T}^2$ be the 2-torus and $\mathbb{T}^3=\mathbb{T}^2\times S^1$. Let $d\in H^2(\mathbb{T}^3,\mathbb{Z})$ be the Poincar\'{e} dual of $\{pt\}\times S^1$ and set $[\omega]=i\delta \cdot d\in H^2(\mathbb{T}^3, i\mathbb{R})$ for some $\delta\in \mathbb{R}$. Following the notation of \cite[Section 30]{Bible}, we write $\HM_*(\mathbb{T}^3, \mathfrak{s}, c;\mathbb{N}R_\omega)$ for the monopole Floer homology of $\mathbb{T}^3$ associated to the period class $c=-2\mathfrak{p}i i[\omega]=2\mathfrak{p}i \delta\cdot d$ and the \mathfrak{s}pinc structure $\mathfrak{s}$, which is defined using the Seiberg-Witten equations \mathbf{e}qref{3DDSWEQ} for some imaginary-valued 2-form $\omega$ in the class $[\omega]$. If in addition $\delta\mathfrak{n}eq 0$ and $|\delta|<2\mathfrak{p}i$, then this group can be computed as follows:
\[
\HM_*(\mathbb{T}^3, \mathfrak{s}, c;\mathbb{N}R_\omega)=\left\{\begin{array}{cl}
\mathbb{N}R & \text{ if } c_1(\mathfrak{s})=0,\\
\{0\} & \text{ otherwise.}
\mathbf{e}nd{array}
\right.
\]
\mathbf{e}nd{lemma}
\begin{proof}[Proof of Lemma \ref{L3.4}] Pick a flat metric on $\mathbb{T}^2$ and equip $\mathbb{T}^3$ with the product metric. Take $\omega$ to be a multiple of the volume form $dvol_{\mathbb{T}^2}$. In this case, the 3-dimensional Seiberg-Witten equations \mathbf{e}qref{3DDSWEQ} can be solved explicitly. If $|\delta|<2\mathfrak{p}i$ and $c_1(\mathfrak{s})\mathfrak{n}eq 0$, then \mathbf{e}qref{3DDSWEQ} has no solutions at all. If $\delta\mathfrak{n}eq 0$ and $c_1(\mathfrak{s})=0$, then \mathbf{e}qref{3DDSWEQ} has a unique solution $\mathfrak{g}amma_*$, which is irreducible; see \cite[Lemma 3.1]{Taubes01}. Since we have worked with a non-balanced perturbation, the perturbed Chern-Simons-Dirac functional $\mathbb{C}L_\omega$ is not fully gauge invariant. By \cite[Proposition 4.4]{Taubes01}, the moduli space of downward gradient flowlines of $\mathbb{C}L_\omega$ connecting $\mathfrak{g}amma_*$ to itself is not empty, although the formal dimension predicted by the index theory is always zero.
This issue can be circumvented using an admissible perturbation $\mathfrak{q}$ of $\mathbb{C}L_\omega$ supported away from $\mathfrak{g}amma_*$, as explained in \cite[Section 15]{Bible}. Moreover, $\mathfrak{q}$ can be chosen so small that $\mathfrak{g}amma_*$ remains the unique critical point of the perturbed functional, giving rise to the unique generator of $\HM_*(\mathbb{T}^3,\mathfrak{s},c; \mathbb{N}R_\omega)$.
\mathbf{e}nd{proof}
To complete the proof of Theorem \ref{T3.3}, we need another result regarding the monopole invariants of $M\colonequals D^2\times \mathbb{T}^2$, which we recall below. Let $\mathfrak{s}_{std}$ be the standard \mathfrak{s}pinc structure on $\mathbb{T}^3$ with $c_1(\mathfrak{s}_{std})=0$. A relative \mathfrak{s}pinc structure $\widehat{\mathfrak{s}}_M$ on $M$ is a pair $(\mathfrak{s}_M,\varphi)$ where $\mathfrak{s}_M$ is a \mathfrak{s}pinc structure and $\varphi:\mathfrak{s}_M|_{\mathfrak{p}artial M}\to \mathfrak{s}_{std}$ is a fixed isomorphism. In particular, one may define its relative Chern class $c_1(\widehat{\mathfrak{s}}_M)\in H^2(M, \mathfrak{p}artial M;\mathbb{Z})\cong H^2(D^2,S^1;\mathbb{Z})$. We shall work with a flat metric on $\mathbb{T}^2$ and make $D^2$ into a surface with a cylindrical end:
\begin{figure}[H]
\centering
\begin{overpic}[scale=.12]{Pic7.png}
\mathfrak{p}ut(110, 16){$\cdots$}
\mathfrak{p}ut(100,36){$s\to $}
\mathbf{e}nd{overpic}
\caption{The disk with a cylindrical end.}
\label{Pic7}
\mathbf{e}nd{figure}
Let $d\in H^2(M, \mathbb{Z})$ be the dual of $\{pt\}\times (D^2,S^1)$ and $[\omega_M]\colonequals i\delta \cdot d$ for some $\delta\in \mathbb{R}$. The monopole invariant of $M$ is defined as a generating function
\[
\mathfrak{m}(M,[\omega_M])\colonequals \mathfrak{su}m_{\widehat{\mathfrak{s}}_M} \mathfrak{m}(M, \widehat{\mathfrak{s}}_M,[\omega_M])\cdot q^{-\mathcal{E}_{top}^{\omega_M}(\widehat{\mathfrak{s}}_M)}\in\mathbb{N}R
\]
with $\mathfrak{m}(M, \widehat{\mathfrak{s}}_M,[\omega_M])\in \mathbb{B}F_2$ and
\begin{equation}\label{E3.5}
\mathcal{E}_{top}^{\omega_M}(\widehat{\mathfrak{s}}_M)\colonequals \mathfrak{f}rac{1}{4}\int_M (F_{A^t_0}-2\omega_M)\wedge(F_{A^t_0}-2\omega_M)=-2\mathfrak{p}i \delta\cdot (c_1(\widehat{\mathfrak{s}}_M)\cup d)[M,\mathfrak{p}artial M].
\mathbf{e}nd{equation}
Here $A_0$ is a reference \mathfrak{s}pinc connection on $(M,\widehat{\mathfrak{s}}_M)$. The coefficient $\mathfrak{m}(M, \widehat{\mathfrak{s}}_M,[\omega_M])$ is defined by counting finite energy solutions to the Seiberg-Witten equations \mathbf{e}qref{SWEQ} with $\omega_M=i\delta\cdot dvol_{\mathbb{T}^2}/ \mathcal{V}ol(\mathbb{T}^2)$ for the relative \mathfrak{s}pinc manifold $(M,\widehat{\mathfrak{s}}_M)$. In practice, one has to perturb $\omega_M$ by a compactly supported closed 2-form (see \cite{Taubes01}) or add a tame perturbation to $\mathbb{C}L_\omega$ as in the proof of Lemma \ref{L3.4}, to ensure that the moduli space is transversely cut out. More invariantly, one should regard $\mathfrak{m}(M,[\omega_M])$ as an element in $\HM_*(\mathbb{T}^3,\mathfrak{s}_{std}, c; \mathbb{N}R_\omega)$.
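As a consistency check on the second equality in \eqref{E3.5}, one can expand the integrand directly. We assume here the sign convention $c_1(\widehat{\mathfrak{s}}_M)=\big[\tfrac{i}{2\pi}F_{A_0^t}\big]$, that $\omega_M\wedge\omega_M=0$ (as $\omega_M$ is pulled back from $\mathbb{T}^2$), and that $\int_M F_{A_0^t}\wedge F_{A_0^t}=0$ (the relative class $c_1(\widehat{\mathfrak{s}}_M)$ is pulled back from $(D^2,S^1)$, so its square vanishes):
\begin{align*}
\mathcal{E}_{top}^{\omega_M}(\widehat{\mathfrak{s}}_M)
&=\frac{1}{4}\int_M F_{A_0^t}\wedge F_{A_0^t}-\int_M F_{A_0^t}\wedge \omega_M+\int_M \omega_M\wedge\omega_M\\
&=-\int_M F_{A_0^t}\wedge \omega_M=2\pi i\,\big(c_1(\widehat{\mathfrak{s}}_M)\cup[\omega_M]\big)[M,\partial M]\\
&=2\pi i\cdot i\delta\,\big(c_1(\widehat{\mathfrak{s}}_M)\cup d\big)[M,\partial M]=-2\pi\delta\,\big(c_1(\widehat{\mathfrak{s}}_M)\cup d\big)[M,\partial M].
\end{align*}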
\begin{lemma}\label{L3.5} Using the canonical identification $\HM_*(\mathbb{T}^3,\mathfrak{s}_{std},c;\mathbb{N}R_\omega)\cong \mathbb{N}R$ in the proof of Lemma \ref{L3.4}, the monopole invariant $\mathfrak{m}(M,[\omega_M])$ can be computed as
\[
(t-t^{-1})^{-1}=t^{-1}+t^{-3}+t^{-5}+\cdots \in \mathbb{N}R \text{ with }t=q^{2\mathfrak{p}i |\delta|}.
\]
\mathbf{e}nd{lemma}
\begin{proof}[Proof of Lemma \ref{L3.5}] Although we have used non-exact perturbations, this computation is no different from the one in \cite[Section 38.2]{Bible}, where exact perturbations and a non-trivial local coefficient system are used. This formula can be found in \cite[P.719]{Bible}.
\mathbf{e}nd{proof}
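For the reader's convenience, we note that the series displayed in Lemma \ref{L3.5} is just the formal geometric expansion in powers of $t^{-1}$:
\[
(t-t^{-1})^{-1}=t^{-1}(1-t^{-2})^{-1}=t^{-1}\sum_{k\geq 0} t^{-2k}=t^{-1}+t^{-3}+t^{-5}+\cdots.
\]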
We now return to the proof of Theorem \ref{T3.3}. Once the neck is completely stretched along $k\times\Sigma$ in Figure \ref{Pic5}, we glue two copies of $D^2\times \Sigma$ to obtain the product cobordism from $\widehat{Y}_1 \coprod \widehat{Y}_2$ to itself, which induces the identity map on monopole Floer homology groups. Since our surface $\Sigma=\coprod_{i=1}^n\Sigma^{(i)}$ is disconnected, we have
\[
\mathfrak{m}(D^2\times \Sigma, [\mu])=\prod_{i=1}^n \mathfrak{m}(D^2\times \Sigma^{(i)}, [\mu_i])=\prod_{i=1}^n (t_i-t_i^{-1})^{-1}=\mathbf{e}ta,
\]
with $\mu_i\colonequals \mu|_{\Sigma^{(i)}}$ and $t_i=q^{|2\mathfrak{p}i i\langle \mu,[\Sigma^{(i)}]\rangle|}$. As a result,
\[
\beta\circ \alpha=\mathbf{e}ta^2 \HM_*(\mathbb{X}'\circ_v \mathbb{X})=\Id_{\HM_*(\mathbb{Y}_1)\otimes \HM_*(\mathbb{Y}_2)}.
\]
The computation for $\alpha\circ \beta$ is similar and is omitted here. This completes the proof of Theorem \ref{T3.3}.
\mathbf{e}nd{proof}
\begin{proof}[Proof of Theorem \ref{T3.5}] Fix relative \mathfrak{s}pinc structures $\widehat{\mathfrak{s}}_i\in \Spincr(Y_i)$ for $i=1,2$. The statement about relative \mathfrak{s}pinc gradings is obvious. Indeed, the 4-manifold cobordism in Figure \ref{Pic2} can be upgraded into a relative \mathfrak{s}pinc cobordism:
\[
(\widehat{X},\widehat{\mathfrak{s}}_X): (\widehat{Y}_1,\widehat{\mathfrak{s}}_1)\ \coprod\ (\widehat{Y}_2,\widehat{\mathfrak{s}}_2)\to (Y_1\circ_h Y_2,\widehat{\mathfrak{s}})\ \coprod\ (\mathbb{R}_s\times\Sigma, \widehat{\mathfrak{s}}_{std})
\]
only if $\widehat{\mathfrak{s}}=\widehat{\mathfrak{s}}_1\circ_h\widehat{\mathfrak{s}}_2$. To deal with the canonical grading, we make use of an equivalent definition of $\mathcal{X}i^{\mathfrak{p}i}(\widehat{Y}_i,\widehat{\mathfrak{s}}_i)$ following \cite[(18.2)]{Wang20}. Instead of oriented relative 2-plane fields, we investigate the space of unit-length relative spinors on $\widehat{Y}$, denoted by $\mathcal{X}i(\mathbb{Y},\widehat{\mathfrak{s}})$. A spinor $\Psi\in \Gamma(\widehat{Y},S)$ is called relative if $\Psi=\Psi_*/|\Psi_*|$ on the cylindrical end $[0,\infty)_s\times\Sigma$; see \cite[Definition 18.2]{Wang20}. Finally, we have
\[
\mathcal{X}i^{\mathfrak{p}i}(\mathbb{Y},\widehat{\mathfrak{s}})\colonequals \mathfrak{p}i_0(\mathcal{X}i(\mathbb{Y},\widehat{\mathfrak{s}}))/H^2(Y,\mathfrak{p}artial Y;\mathbb{Z})
\]
where $H^2(Y,\mathfrak{p}artial Y;\mathbb{Z})= \mathfrak{p}i_0(\mathbb{C}G(\widehat{Y}))$ is the component group of the gauge group $\mathbb{C}G(\widehat{Y})$.
Let $\mathfrak{a}_i\in \mathcal{C}(\widehat{Y}_i,\widehat{\mathfrak{s}}_i)$ be a critical point of the perturbed Chern-Simons-Dirac functional on $\widehat{Y}_i$ for $i=1,2$. In fact, before passing to the quotient configuration space, we can assign an element
\[
\mathfrak{g}r(\mathfrak{a}_i)\in \mathfrak{p}i_0(\mathcal{X}i(\widehat{Y},\widehat{\mathfrak{s}}))
\]
whose image in $\mathcal{X}i^{\mathfrak{p}i}(\mathbb{Y},\widehat{\mathfrak{s}})$ is the chain level grading of $[\mathfrak{a}_i]$. Let $\Psi_i\in \Gamma(\widehat{Y}_i,S_i)$ be a unit-length relative spinor representing $\mathfrak{g}r(\mathfrak{a}_i)$.
Since the cobordism map $\HM_*(\mathbb{X})$ is defined by counting 0-dimensional moduli spaces on the 4-manifold $\mathbb{C}X$, the chain level map exists between
\[
(\mathfrak{a}_1,\mathfrak{a}_2) \text{ and } (\mathfrak{a}_3, \mathfrak{a}_*)
\]
only if an index condition holds, where $\mathfrak{a}_*=(B_*,\Psi_*)$ is the canonical solution on $\mathbb{R}_s\times\Sigma$ and $\mathfrak{a}_3$ is a critical point on $Y_1\circ_h Y_2$. The relative spinor representing the grading of $\mathfrak{a}_*$ is $ \Psi_*/|\Psi_*|$, by the Normalization Axiom \cite[Section 18]{Wang20}. Let $\Psi_3$ be the one for $\mathfrak{a}_3$.
By the Index Axiom from \cite[Section 18]{Wang20}, this index condition can be stated in terms of the quadruple $(\Psi_1,\Psi_2; \Psi_3, \Psi_*/|\Psi_*|)$. For a fixed relative \mathfrak{s}pinc cobordism $\widehat{\mathfrak{s}}_X$, we construct a non-vanishing spinor $\Phi_X$ on $\mathbb{C}X\mathfrak{s}etminus X$ as follows:
\begin{itemize}
\item $\Phi_X\mathbf{e}quiv \Psi_i$ on $(-\infty,1]_t\times \widehat{Y}_i$ for $i=1,2$;
\item $\Phi_X\mathbf{e}quiv \Psi_3$ on $[1,\infty)_t\times (Y_1\circ_h Y_2)$;
\item $\Phi_X\mathbf{e}quiv \Psi_*/|\Psi_*|$ on $[1,\infty)_t\times\mathbb{R}_s\times\Sigma$ and
\item $\Phi_X\mathbf{e}quiv \Psi_*/|\Psi_*|$ also on $\mathbb{R}_t\times [1,\infty)_s\times\Sigma$ and $\mathbb{R}_t\times (-\infty,-1]_s\times \Sigma$.
\mathbf{e}nd{itemize}
The index condition is then equivalent to saying that the relative Euler class of $\Phi_X$ vanishes:
\begin{equation}\label{E3.2}
e(S^+_X; \Phi_X)[X,\mathfrak{p}artial X]=0,
\mathbf{e}nd{equation}
which determines the class $[\Psi_3]\in \mathfrak{p}i_0(\mathcal{X}i(\mathbb{Y}_1\circ_h \mathbb{Y}_2,\widehat{\mathfrak{s}}_1\circ_h\widehat{\mathfrak{s}}_2))$ in terms of $[\Psi_1]$ and $[\Psi_2]$.
There is an obvious relative \mathfrak{s}pinc cobordism $\widehat{\mathfrak{s}}_X$ on $X$ (see Figure \ref{Pic2}): we pick the product relative \mathfrak{s}pinc structures on $[-1,1]_t\times Y_{1,-}$ and $[-1,1]_t\times Y_{2,-}$ respectively, and choose any relative \mathfrak{s}pinc structure on $\Omega\times\Sigma$. In this case, the characteristic condition \mathbf{e}qref{E3.2} holds trivially if we take $\Psi_3$ to be the concatenation of $\Psi_1$ and $\Psi_2$.
In general, any other relative \mathfrak{s}pinc cobordism $\widehat{\mathfrak{s}}_X'$ differs from $\widehat{\mathfrak{s}}_X$ by taking the tensor product with a relative line bundle $L\in H^2(X,\mathfrak{p}artial X;\mathbb{Z})$, an action that leaves the relative Euler number unaffected. Thus the image of $[\Psi_3]$ in
\[
\mathfrak{p}i_0(\mathcal{X}i(\mathbb{Y}_1\circ_h\mathbb{Y}_2,\widehat{\mathfrak{s}}_1\circ_h\widehat{\mathfrak{s}}_2))/H^2(Y_1\circ_h Y_2;\mathbb{Z})
\]
is independent of $\widehat{\mathfrak{s}}_X'$. This completes the proof of Theorem \ref{T3.5}.
\mathbf{e}nd{proof}
\mathfrak{su}bsection{Proof of \ref{G2}} The proof of Theorem \ref{T2.5} will occupy the rest of Section \ref{Sec3}. In this subsection, we focus on \ref{G2}. For any 1-cells $\mathbb{Y}_{12}\in\mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)$ and $\mathbb{Y}_{23}\in \mathbb{A}T(\mathbb{T}Sigma_2,\mathbb{T}Sigma_3)$, the isomorphism
\[
\alpha: \HM_*(\mathbb{Y}_{12})\otimes_\mathbb{N}R\HM_*(\mathbb{Y}_{23})\to \HM_*(\mathbb{Y}_{12}\circ_h \mathbb{Y}_{23}),
\]
is constructed in the same way as in the special case discussed in Subsection \ref{Subsec3.3}. It remains to verify that $\alpha$ is a functor. Let $\mathbb{X}_{12}: \mathbb{Y}_{12}\to \mathbb{Y}_{12}'$ and $\mathbb{X}_{23}:\mathbb{Y}_{23}\to\mathbb{Y}_{23}'$ be 2-cell morphisms in $\mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)$ and $\mathbb{A}T(\mathbb{T}Sigma_2,\mathbb{T}Sigma_3)$ respectively. We have to show that
\[
\HM_*(\mathbb{X}_{12}\circ_h \mathbb{X}_{23})\circ \alpha=\alpha\circ (\HM_*(\mathbb{X}_{12})\otimes \HM_*(\mathbb{X}_{23}))
\]
as maps from $\HM_*(\mathbb{Y}_{12})\otimes_\mathbb{N}R\HM_*(\mathbb{Y}_{23})$ to $ \HM_*(\mathbb{Y}_{12}'\circ_h \mathbb{Y}_{23}')$. Indeed, both of them agree with the map induced by the 4-manifold cobordism in Figure \ref{Pic8}. This completes the proof of \ref{G2}. \mathfrak{q}ed
\begin{figure}[H]
\centering
\begin{overpic}[scale=.10]{Pic8.png}
\mathfrak{p}ut(-15,50){$\widehat{Y}_{12}$}
\mathfrak{p}ut(105,50){$\widehat{Y}_{23}$}
\mathfrak{p}ut(29,31){\mathfrak{s}mall $X_{12}$}
\mathfrak{p}ut(59,31){\mathfrak{s}mall $X_{23}$}
\mathfrak{p}ut(-63,35){\mathfrak{s}mall $(-\infty,-1]_t\times Y_{12}\to $}
\mathfrak{p}ut(-25,10){\mathfrak{s}mall $[1,\infty)_t\times Y_{12}'\to $}
\mathfrak{p}ut(72,10){\mathfrak{s}mall $\leftarrow [1,\infty)_t\times Y_{23}'$}
\mathfrak{p}ut(102,35){\mathfrak{s}mall $\leftarrow (-\infty, 1]_t\times Y_{23}$}
\mathfrak{p}ut(36,-7){\mathfrak{s}mall $Y_{12}'\circ_h Y_{23}'$}
\mathbf{e}nd{overpic}
\caption{}
\label{Pic8}
\mathbf{e}nd{figure}
\mathfrak{su}bsection{Proof of \ref{G3}} For any triple $(\mathbb{Y}_{12},\mathbb{Y}_{23},\mathbb{Y}_{34})\in \mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)\times \mathbb{A}T(\mathbb{T}Sigma_2,\mathbb{T}Sigma_3)\times \mathbb{A}T(\mathbb{T}Sigma_3,\mathbb{T}Sigma_4)$, the composition $\alpha\circ (\alpha\otimes\Id)$ in the diagram \mathbf{e}qref{E2.1} is induced from the cobordism in Figure \ref{Pic9} below, with the red dotted line indicating a copy of the completion $\widehat{Y_{12}\circ_h Y_{23}}$.
\begin{figure}[H]
\centering
\begin{overpic}[scale=.06]{Pic9.png}
\mathfrak{p}ut(-17,70){\mathfrak{s}mall $Y_{12}\mathbb{R}ightarrow$}
\mathfrak{p}ut(-55,20){\mathfrak{s}mall $(Y_{12}\circ_h Y_{23})\circ_h Y_{34}\Leftarrow$}
\mathfrak{p}ut(47,70){$\Leftarrow Y_{23}$}
\mathfrak{p}ut(15,-7){$Y_{34}$}
\mathfrak{p}ut(50,53){$\widehat{\ Y_{12}\circ_hY_{23}\ }$}
\mathbf{e}nd{overpic}
\caption{}
\label{Pic9}
\mathbf{e}nd{figure}
On the other hand, the map $\alpha\circ (\Id\otimes \alpha)$ is identical to the one induced from Figure \ref{Pic10}, which can be obtained from Figure \ref{Pic9} by continuously varying the metric and the closed 2-form. This implies that the diagram \mathbf{e}qref{E2.1} is commutative, which completes the proof of \ref{G3}. \mathfrak{q}ed
\begin{figure}[H]
\centering
\begin{overpic}[scale=.08]{Pic10.png}
\mathfrak{p}ut(-12,15){\mathfrak{s}mall $Y_{12}\mathbb{R}ightarrow$}
\mathfrak{p}ut(5,-5){\mathfrak{s}mall $Y_{12}\circ_h (Y_{23}\circ_h Y_{34})$}
\mathfrak{p}ut(70,48){$Y_{23}$}
\mathfrak{p}ut(70,-5){$Y_{34}$}
\mathfrak{p}ut(100,40){$\widehat{\ Y_{12}\circ_hY_{23}\ }$}
\mathbf{e}nd{overpic}
\caption{}
\label{Pic10}
\mathbf{e}nd{figure}
\mathfrak{su}bsection{Proof of \ref{G4}} We have to show that the gluing map
$$\alpha: \HM_*(\mathbb{Y}_{12})\otimes\HM_*(e_{\mathbb{T}Sigma_2})\to\HM_*(\mathbb{Y}_{12}\circ_h e_{\mathbb{T}Sigma_2})=\HM_*(\mathbb{Y}_{12})$$
agrees with $\Id\otimes_\mathbb{N}R \iota_{\mathbb{T}Sigma_2}$. Equivalently, we prove that
\[
\tilde{\alpha}:\HM_*(\mathbb{Y}_{12})\mathbb{X}rightarrow{\Id\otimes \iota_{\mathbb{T}Sigma_2}^{-1}(1)} \HM_*(\mathbb{Y}_{12})\otimes\HM_*(e_{\mathbb{T}Sigma_2})\mathbb{X}rightarrow{\alpha}\HM_*(\mathbb{Y}_{12})
\]
is the identity map. We start with a few reductions:
\textit{Step } 1. $\tilde{\alpha}$ is an isomorphism. This is by Theorem \ref{T3.3}.
\textit{Step } 2. It suffices to verify the special case when $\mathbb{T}Sigma_1=\mathbb{T}Sigma_2$ and $\mathbb{Y}_{12}=e_{\mathbb{T}Sigma_2}$.
Indeed, if the statement holds for this special case, consider the diagram:
\[
\begin{tikzcd}[column sep=2.3cm]
\HM_*(\mathbb{Y}_{12})\arrow[r,"\Id\otimes\iota_{\mathbb{T}Sigma_2}^{-1}(1)\otimes \iota_{\mathbb{T}Sigma_2}^{-1}(1)"]& \HM_*(\mathbb{Y}_{12})\otimes \HM_*(e_{\mathbb{T}Sigma_2})\otimes \HM_*(e_{\mathbb{T}Sigma_2})\arrow[r,bend left,"\alpha (\Id\otimes\alpha)"']\arrow[r,bend right,"\alpha(\alpha\otimes\Id)"] & \HM_*(\mathbb{Y}_{12}).
\mathbf{e}nd{tikzcd}
\]
Applying Theorem \ref{T2.5} \ref{G3} to the triple $(\mathbb{Y}_{12},e_{\mathbb{T}Sigma_2},e_{\mathbb{T}Sigma_2})$, we obtain that $\tilde{\alpha}^2=\tilde{\alpha}$; since $\tilde{\alpha}$ is invertible by Step 1, it follows that $\tilde{\alpha}=\Id$.
\textit{Step } 3. In the case when $\mathbb{T}Sigma_1=\mathbb{T}Sigma_2$ and $\mathbb{Y}_{12}=e_{\mathbb{T}Sigma_2}$, the group $\HM_*(e_{\mathbb{T}Sigma_2})$ has rank $1$, so $\tilde{\alpha}:\HM_*(e_{\mathbb{T}Sigma_2})\to \HM_*(e_{\mathbb{T}Sigma_2})$ is multiplication by a scalar. Let $\mathbb{X}$ be the cobordism inducing the gluing map $\alpha$. In this special case, the opposite cobordism $\mathbb{X}'$ of $\mathbb{X}$ is identical to $\mathbb{X}$. Theorem \ref{T3.3} then implies that $\tilde{\alpha}^2=\Id$; so $\tilde{\alpha}=\Id$. This completes the proof of Theorem \ref{T2.5} \ref{G4}. \mathfrak{q}ed
\mathfrak{s}ection{Generalized Cobordism Maps}\label{Sec4}
2-cell morphisms in the category $\mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma)$ are given by strict cobordisms: the induced cobordisms between boundaries are necessarily standard products, as required by Property \ref{Q3}. The primary goal of this section is to remove this constraint and define the generalized cobordism maps.
For any 1-cells $\mathbb{Y}_1\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma_1)$ and $\mathbb{Y}_2\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma_2)$, a general cobordism $(X,W): (Y_1,\Sigma_1)\to (Y_2,\Sigma_2)$ is a 4-manifold with corners. To better package the data of closed 2-forms, however, we shall adopt a different point of view and introduce a new category $\mathbb{A}T_1$.
\begin{definition} Consider the category $\mathbb{A}T_1$ with objects
\[
\Ob \mathbb{A}T_1=\coprod_{\mathbb{T}Sigma} \Ob \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma).
\]
For any $\mathbb{Y}_1\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma_1)$ and $\mathbb{Y}_2\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma_2)$, a morphism in $\mathbb{A}T_1(\mathbb{Y}_1,\mathbb{Y}_2)$ is a pair $(\mathbb{X}_{12},\mathbb{B}W_{12})$ where
\[
\mathbb{B}W_{12}\in \mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2),\ \mathbb{X}_{12}\in \Hom_{\mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma_2)}(\mathbb{Y}_1\circ_h \mathbb{B}W_{12},\mathbb{Y}_2).
\]
For morphisms $(\mathbb{X}_{12},\mathbb{B}W_{12})\in \mathbb{A}T_1(\mathbb{Y}_1,\mathbb{Y}_2)$ and $(\mathbb{X}_{23},\mathbb{B}W_{23})\in \mathbb{A}T_1(\mathbb{Y}_2,\mathbb{Y}_3)$, their composition $(\mathbb{X}_{13},\mathbb{B}W_{13})\in \mathbb{A}T_1(\mathbb{Y}_1,\mathbb{Y}_3)$ is defined as
\[
\mathbb{B}W_{13}=\mathbb{B}W_{12}\circ_h\mathbb{B}W_{23},\ \mathbb{X}_{13}=(\mathbb{X}_{12}\circ_h \Id_{\mathbb{B}W_{23}})\circ \mathbb{X}_{23}.
\]
The associativity is easily checked using the diagram that represents $\mathbb{X}_{13}$:
\begin{equation}\label{E5.2}
\begin{tikzcd}
\mathbf{e}mptyset \arrow[r,"\mathbb{Y}_1"] \arrow[d,equal]& \mathbb{T}Sigma_1 \arrow[r,"\mathbb{B}W_{12}"] \arrow[rd, phantom, "\mathbb{X}_{12}"] & \mathbb{T}Sigma_2 \arrow[r,dashed, "\mathbb{B}W_{23}"] \arrow[d,equal] \arrow[rd, phantom, "\Id_{\mathbb{B}W_{23}}"]& \mathbb{T}Sigma_3 \arrow[d,equal] \\
\mathbf{e}mptyset \arrow[rr,"\mathbb{Y}_2"] \arrow[d,equal]& & \mathbb{T}Sigma_2 \arrow[r,"\mathbb{B}W_{23}"] \arrow[rd, phantom, "\mathbb{X}_{23}"]& \mathbb{T}Sigma_3 \arrow[d,equal]\\
\mathbf{e}mptyset\arrow[rrr,"\mathbb{Y}_3"] & & {}&\mathbb{T}Sigma_3.
\mathbf{e}nd{tikzcd}\mathfrak{q}edhere
\mathbf{e}nd{equation}
\mathbf{e}nd{definition}
\begin{corollary} We define a fake functor $\HM_*: \mathbb{A}T_1\to \mathbb{A}R(\mathfrak{s}tar)$ as follows. For any object $\mathbb{Y}_i\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma_i)$, we assign its monopole Floer homology group $\HM_*(\mathbb{Y}_i)$. For any morphism $(\mathbb{X}_{12},\mathbb{B}W_{12}):\mathbb{Y}_1\to \mathbb{Y}_2$, we assign the map $ \HM_*(\mathbb{X}_{12},\mathbb{B}W_{12})\colonequals \HM_*(\mathbb{X}_{12})\circ \alpha: $
\[
\HM_*(\mathbb{Y}_1)\otimes \HM_*(\mathbb{B}W_{12})\mathbb{X}rightarrow{\alpha} \HM_*(\mathbb{Y}_1\circ_h\mathbb{B}W_{12})\mathbb{X}rightarrow{\HM_*(\mathbb{X}_{12})} \HM_*(\mathbb{Y}_2).
\]
This assignment $\HM_*$ fails to be a functor, since the ordinary composition law is violated. The replacement is a commutative diagram relating $\HM_*(\mathbb{X}_{12},\mathbb{B}W_{12})$, $\HM_*(\mathbb{X}_{23},\mathbb{B}W_{23})$ and the map induced by their composition $(\mathbb{X}_{13},\mathbb{B}W_{13})=(\mathbb{X}_{12},\mathbb{B}W_{12})\circ (\mathbb{X}_{23},\mathbb{B}W_{23})$:
\begin{equation}\label{E5.1}
\begin{tikzcd}
\HM_*(\mathbb{Y}_1)\otimes \HM_*(\mathbb{B}W_{12})\otimes \HM_*(\mathbb{B}W_{23})\arrow[r,"\Id\otimes\alpha"] \arrow[d,"{\HM(\mathbb{X}_{12},\mathbb{B}W_{12})\otimes \Id}"] & \HM_*(\mathbb{Y}_1)\otimes \HM_*(\mathbb{B}W_{13}) \arrow[d, "{\HM_*(\mathbb{X}_{13},\mathbb{B}W_{13})}"]\\
\HM_*(\mathbb{Y}_2)\otimes \HM_*(\mathbb{B}W_{23})\arrow[r,"{\HM_*(\mathbb{X}_{23},\mathbb{B}W_{23})}"] & \HM_*(\mathbb{Y}_3).
\mathbf{e}nd{tikzcd}
\mathbf{e}nd{equation}
\mathbf{e}nd{corollary}
\begin{proof} The commutativity of \mathbf{e}qref{E5.1} is obtained by applying the monopole Floer 2-functor $\HM_*$ in Theorem \ref{T2.5} to the diagram \mathbf{e}qref{E5.2}.
\mathbf{e}nd{proof}
In order to obtain a genuine functor, we have to enlarge the category $\mathbb{A}T_1$ to incorporate an element of $\HM_*(\mathbb{B}W_{12})$ for each morphism $(\mathbb{X}_{12},\mathbb{B}W_{12})$.
\begin{definition} The category $\mathbb{A}T_2$ has the same objects as $\mathbb{A}T_1$. For any objects $\mathbb{Y}_1\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma_1)$ and $\mathbb{Y}_2\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma_2)$, a morphism in $\mathbb{A}T_2(\mathbb{Y}_1,\mathbb{Y}_2)$ is now a triple $(\mathbb{X}_{12},\mathbb{B}W_{12}, a_{12})$ where
\[
(\mathbb{X}_{12},\mathbb{B}W_{12})\in \mathbb{A}T_1(\mathbb{Y}_1,\mathbb{Y}_2) \text{ and } a_{12}\in \HM_*(\mathbb{B}W_{12}).
\]
For $(\mathbb{X}_{12},\mathbb{B}W_{12},a_{12})\in \mathbb{A}T_2(\mathbb{Y}_1,\mathbb{Y}_2)$ and $(\mathbb{X}_{23},\mathbb{B}W_{23},a_{23})\in \mathbb{A}T_2(\mathbb{Y}_2,\mathbb{Y}_3)$, their composition $$(\mathbb{X}_{13},\mathbb{B}W_{13},a_{13})\in \mathbb{A}T_2(\mathbb{Y}_1,\mathbb{Y}_3)$$ is defined by
\[
\mathbb{B}W_{13}=\mathbb{B}W_{12}\circ_h\mathbb{B}W_{23},\ \mathbb{X}_{13}=(\mathbb{X}_{12}\circ_h \Id_{\mathbb{B}W_{23}})\circ \mathbb{X}_{23},\ a_{13}=\alpha(a_{12}\otimes a_{23}). \qedhere
\]
\end{definition}
\begin{corollary}\label{C4.4} We define a functor $\HM_*: \mathbb{A}T_2\to \mathbb{A}R(\star)$ as follows. For any object $\mathbb{Y}_i\in \mathbb{A}T(\emptyset,\mathbb{T}Sigma_i)$, we assign its monopole Floer homology group $\HM_*(\mathbb{Y}_i)$. For any morphism $(\mathbb{X}_{12},\mathbb{B}W_{12},a_{12}):\mathbb{Y}_1\to \mathbb{Y}_2$, define the map $\HM_*(\mathbb{X}_{12},\mathbb{B}W_{12},a_{12})$ to be the composition
\[
\HM_*(\mathbb{Y}_1)\xrightarrow{\alpha(\cdot, a_{12})} \HM_*(\mathbb{Y}_1\circ_h\mathbb{B}W_{12})\xrightarrow{\HM_*(\mathbb{X}_{12})} \HM_*(\mathbb{Y}_2).
\]
Then $\HM_*: \mathbb{A}T_2\to \mathbb{A}R(\star)$ is a functor in the classical sense.
\end{corollary}
\begin{proof} The functoriality of $\HM_*$ follows from the commutative diagram \eqref{E5.1}.
\end{proof}
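Unwinding the definitions, the composition law for this functor reads
\[
\HM_*(\mathbb{X}_{23},\mathbb{B}W_{23},a_{23})\circ \HM_*(\mathbb{X}_{12},\mathbb{B}W_{12},a_{12})=\HM_*(\mathbb{X}_{13},\mathbb{B}W_{13},a_{13}),
\]
with $a_{13}=\alpha(a_{12}\otimes a_{23})$; this is precisely the commutativity of the diagram \eqref{E5.1}, evaluated against the element $a_{12}\otimes a_{23}$ in the second factor.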
\section{Invariance of Boundary Metrics}\label{Sec5}
For any 1-cell $\mathbb{Y}\in \mathbb{A}T(\emptyset, \mathbb{T}Sigma)$, we defined its monopole Floer homology group in \cite{Wang20}; but its invariance was proved only in a weak sense:
\begin{theorem}[{\cite[Remark 1.8\ \&\ Corollary 19.10]{Wang20}}]\label{T5.2} For any $T$-surface $\mathbb{T}Sigma$ and any 1-cell $\mathbb{Y}\in \mathbb{A}T(\emptyset, \mathbb{T}Sigma)$, the monopole Floer functor from Theorem \ref{T3.2} implies the invariance of $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$ when
\begin{itemize}
\item we change the cylindrical metric $g_Y$ in \ref{P1.5};
\item we replace $\omega$ by $\omega+d_{\widehat{Y}}b$ for a compactly supported 1-form $b\in \Omega^1_c(Y,i\mathbb{R})$ in \ref{P2};
\item we apply an isotopy to the diffeomorphism $\psi:\partial Y\to \Sigma$.
\end{itemize}
\end{theorem}
In this section, we strengthen this invariance result by showing that $\HM_*(\mathbb{Y})$ is independent of the flat metric on $\Sigma$ and depends only on the classes $[\omega]\in H^2(Y;i\mathbb{R})$ and $[*_\Sigma\lambda]\in H^1(\Sigma, i\mathbb{R})$. In particular, the Floer homology $\HM_*(\mathbb{Y})$ is a topological invariant of the triple $(Y,[\omega],[*_\Sigma\lambda])$. The main result is as follows:
\begin{theorem}\label{T5.1} Suppose that $\mathbb{T}Sigma_i=(\Sigma, g_i, \lambda_i,\mu_i)$, $i=1,2$, have the same underlying oriented surface $\Sigma$ and
\begin{align*}
[*_1\lambda_1]&=[*_2\lambda_2]\in H^1(\Sigma, i\mathbb{R}),& [\mu_1]&=[\mu_2] \in H^2(\Sigma, i\mathbb{R}).
\end{align*}
If the 1-cells $\mathbb{Y}_i\in \mathbb{A}T(\emptyset,\mathbb{T}Sigma_i)$, $i=1,2$, have diffeomorphic underlying 3-manifolds $Y_1\cong Y_2$ (with the diffeomorphism compatible with the identification maps $\psi_i:\partial Y_i\to \Sigma$, $i=1,2$) and
\[
[\omega_1]=[\omega_2]\in H^2(Y, i\mathbb{R}),
\]
then $\HM_*(\mathbb{Y}_1, \widehat{\mathfrak{s}})\cong \HM_*(\mathbb{Y}_2,\widehat{\mathfrak{s}})$ for any relative \spinc structure $\widehat{\mathfrak{s}}\in \Spincr(Y)$.
\end{theorem}
The proof of Theorem \ref{T5.1} is based on the Gluing Theorem \ref{T2.5} and a special property concerning the product manifold $[-3,3]_s\times \Sigma$:
\begin{lemma}\label{L5.2} Under the assumptions of Theorem \ref{T5.1}, consider the 1-cell $\mathbb{Y}_0\in \mathbb{A}T(\mathbb{T}Sigma_1, \mathbb{T}Sigma_2)$ with $Y_0=[-3,3]_s\times\Sigma$ and $\psi=\Id: \partial Y_0\to (-\Sigma)\cup \Sigma$. Then for any cylindrical metric on $Y_0$ and any compatible 2-form $\omega_0\in \Omega^2(Y_0, i\mathbb{R})$, we have
\[
\HM_*(\mathbb{Y}_0,\widehat{\mathfrak{s}})\cong \left\{ \begin{array}{cl}
\mathbb{N}R & \text{ if }\ \widehat{\mathfrak{s}}=\widehat{\mathfrak{s}}_{std},\\
\{0\} & \text{ otherwise. }
\end{array}
\right.
\]
\end{lemma}
\begin{proof}[Proof of Theorem \ref{T5.1}] By applying the gluing functor from Theorem \ref{T2.5}
\[
\alpha: \mathbb{A}T(\emptyset, \mathbb{T}Sigma_1)\times\mathbb{A}T(\mathbb{T}Sigma_1,\mathbb{T}Sigma_2)\to \mathbb{A}T(\emptyset, \mathbb{T}Sigma_2)
\]
to the pair $(\mathbb{Y}_1, \mathbb{Y}_0)$, we deduce from Theorem \ref{T3.5} and Lemma \ref{L5.2} that
\[
\HM_*(\mathbb{Y}_1,\widehat{\mathfrak{s}})\cong \HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_0,\widehat{\mathfrak{s}})
\]
for any $\widehat{\mathfrak{s}}\in \Spincr(Y)$. Now $\mathbb{Y}_1\circ_h \mathbb{Y}_0$ and $\mathbb{Y}_2$ are 1-cells in the same strict cobordism category $\mathbb{A}T(\emptyset, \mathbb{T}Sigma_2)$. The difference $\omega_1\circ_h\omega_0-\omega_2$ determines a class $\beta\in H^2(Y, \partial Y; i\mathbb{R})$. Our goal here is to choose $\omega_0$ properly so that $\beta=0$. This can always be done: since $[\omega_2]=[\omega_1]=[\omega_1\circ_h\omega_0]$, the class $\beta$ lies in the image of $H^1(\partial Y; i\mathbb{R})$:
\[
\begin{array}{rcl}
\cdots \to H^1(\partial Y; i\mathbb{R}) \to &H^2(Y,\partial Y;i\mathbb{R})&\to H^2(Y;i\mathbb{R})\to\cdots \\
&\beta&\mapsto 0.
\end{array}
\]
When $\beta=0$, we can then identify $\HM_*(\mathbb{Y}_1\circ_h \mathbb{Y}_0,\widehat{\mathfrak{s}})$ with $\HM_*(\mathbb{Y}_2,\widehat{\mathfrak{s}})$ using Theorem \ref{T5.2}. This completes the proof of Theorem \ref{T5.1}.
\end{proof}
The proof of Lemma \ref{L5.2} relies on a computation for the 3-torus $\mathbb{T}^3$, which generalizes Lemma \ref{L3.4}.
\begin{lemma}\label{L5.4} Let $\omega\in \Omega^2(\mathbb{T}^3, i\mathbb{R})$ be a closed 2-form and suppose that the period class $c\colonequals -2\pi i[\omega]$ is neither negatively monotone nor balanced for any \spinc structure on $\mathbb{T}^3$. Then
\[
\HM_*(\mathbb{T}^3, \mathfrak{s}, c; \mathbb{N}R_\omega)=\left\{\begin{array}{cl}
\mathbb{N}R & \text{ if } c_1(\mathfrak{s})=0,\\
\{0\} & \text{ otherwise.}
\end{array}
\right.
\]
\end{lemma}
\begin{proof}[Proof of Lemma \ref{L5.4}] The meaning of Lemma \ref{L5.4} will become more transparent when we review the theory of closed 3-manifolds in Section \ref{Sec6}. When $c_1(\mathfrak{s})\neq 0$, the statement can be verified as in the proof of Lemma \ref{L3.4}, by working with a product metric and a harmonic 2-form $\omega$. When $c_1(\mathfrak{s})=0$, the statement follows from \cite[Lemma 3.1]{Taubes01}. Alternatively, we may apply Proposition \ref{P4.2} to reduce the problem to the case of exact perturbations.
\end{proof}
\begin{proof}[Proof of Lemma \ref{L5.2}] Without loss of generality, we assume that $\Sigma$ is connected. Since the roles of $\mathbb{T}Sigma_1$ and $\mathbb{T}Sigma_2$ are symmetric, consider a similar 1-cell $\mathbb{Y}_0'\in \mathbb{A}T(\mathbb{T}Sigma_2,\mathbb{T}Sigma_1)$. Now regard $\mathbb{Y}_0$ and $\mathbb{Y}_0'$ as 1-cells in $\mathbb{A}T(\emptyset, \mathbb{T}Sigma_2\cup (-\mathbb{T}Sigma_1))$ and $\mathbb{A}T(\mathbb{T}Sigma_2\cup (-\mathbb{T}Sigma_1),\emptyset)$ respectively. We apply Theorem \ref{T3.3} to obtain
\begin{equation}\label{E5.3}
\HM_*(\mathbb{Y}_0)\otimes_\mathbb{N}R \HM_*(\mathbb{Y}_0')\cong \HM_*(\mathbb{T}^3, \mathfrak{s}, c; \mathbb{N}R_\omega)
\end{equation}
where $\omega=\omega_0\circ_h\omega_0'$ is a closed 2-form on the 3-torus $\mathbb{T}^3=S^1\times\Sigma$. The condition of Lemma \ref{L5.4} can be verified, since
\[
0<|\langle \omega, [\Sigma]\rangle|<2\pi \text{ by properties }\ref{T3} \text{ and }\ref{T1}.
\]
We conclude from \eqref{E5.3} and Lemma \ref{L5.4} that
\[
\rank_{\mathbb{N}R} \HM_*(\mathbb{Y}_0)=\rank_{\mathbb{N}R}\HM_*(\mathbb{Y}_0')=1.
\]
In particular, the group $\HM_*(\mathbb{Y}_0,\widehat{\mathfrak{s}})$ vanishes except for one particular relative \spinc structure. By Theorem \ref{T3.4}, the Euler characteristic $\chi(\HM_*(\mathbb{Y}_0,\widehat{\mathfrak{s}}))$ is independent of the metric and the 2-form $\omega_0$. We conclude from the computation of $\HM_*(e_{\mathbb{T}Sigma_1},\widehat{\mathfrak{s}}_{std})$ that
\[
\chi (\HM_*(\mathbb{Y}_0,\widehat{\mathfrak{s}}_{std}))=1.
\]
Thus $\HM_*(\mathbb{Y}_0,\widehat{\mathfrak{s}}_{std})\neq \{0\}$. This completes the proof of Lemma \ref{L5.2}.
\end{proof}
\part{Monopoles and Thurston Norms}
Let $F=\bigcup_{i=1}^n F_i$ be a compact oriented surface with each $F_i$ connected. Recall that the norm of $F$ is defined to be
\[
x(F)\colonequals -\sum_{i} \min\{\chi(F_i),0\}.
\]
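As a basic example, immediate from the definition: for a connected closed surface $F$ of genus $g$, we have $\chi(F)=2-2g$, so
\[
x(F)=\max\{2g-2,\,0\};
\]
in particular, spheres and tori contribute nothing to the norm.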
For any 3-manifold $Y$ with toroidal boundary, Thurston \cite{T86} introduced a semi-norm $x(\cdot)$ on $H_2(Y,\partial Y;\mathbb{R})$ such that for any integral class $\kappa\in H_2(Y,\partial Y;\mathbb{Z})$, we have
\[
x(\kappa)\colonequals \min\{ x(F): (F,\partial F)\subset (Y,\partial Y) \text{ properly embedded and }[F]=\kappa\}.
\]
In this part, we show that the monopole Floer homology $\HM_*(\mathbb{Y})$ defined in \cite{Wang20} detects the Thurston norm and fiberedness of $Y$ when the underlying 3-manifold $Y$ is connected and irreducible, generalizing the previous results for closed 3-manifolds \cite{Bible,Ni08,Ni09b}. The analogous detection theorems for link Floer homology have been obtained in \cite{OS08, Ni09, J08,GL19} and \cite{Ni07}. The main results are Theorems \ref{T7.1} and \ref{T7.2}.
For the proof of Theorem \ref{T7.1}, we use the Gluing Theorem \ref{T2.5} to reduce the problem to the double of $Y$, and apply the non-vanishing result \cite[Corollary 41.4.3]{Bible} for closed 3-manifolds. However, we first have to adapt this corollary to the case of non-exact perturbations, which is done in Section \ref{Sec6}.
The proof of Theorem \ref{T7.2} is accomplished in Section \ref{Sec9}, and exploits the relation of $\HM_*(\mathbb{Y})$ with sutured Floer homology, as discussed in Section \ref{Sec8}. Any 3-manifold with toroidal boundary is a sutured manifold in the sense of Gabai \cite{Gabai83}, but it is not a balanced sutured manifold in the sense of \cite[Definition 2.2]{J06}. One may think of our construction as a natural extension of sutured Floer homology. Although the author was not able to prove a general sutured manifold decomposition theorem, a preliminary result, Theorem \ref{T8.1}, will be supplied in Section \ref{Sec8} to justify this heuristic.
\section{Closed 3-Manifolds Revisited}\label{Sec6}
Throughout this section, we take $Y$ to be a closed connected oriented 3-manifold. In \cite{Bible}, Kronheimer and Mrowka introduced three flavors of monopole Floer homology groups for any \spinc structure $\mathfrak{s}$ on $Y$, which fit into a long exact sequence:
\begin{equation}\label{E6.1}
\cdots\xrightarrow{i_*} \mathfrak{f}HM_*(Y,\mathfrak{s}; \Gamma)\xrightarrow{j_*}\widehat{\HM}_*(Y,\mathfrak{s}; \Gamma)\xrightarrow{p_* }\overline{\HM}_*(Y,\mathfrak{s};\Gamma)\xrightarrow{i_*}\cdots
\end{equation}
Here $\Gamma$ is any local coefficient system on the blown-up configuration space $\mathbb{C}B^{\sigma}(Y,\mathfrak{s})$. For instance, one may take $\Gamma$ to be the trivial system with $\mathbb{Z}$ coefficients. For any real 1-cycle $\xi$, we can also define the local coefficient system $\Gamma_\xi$ as in \cite[Section 3.7]{Bible}, whose fiber is always $\mathbb{R}$. We write $\HM_*(Y,\mathfrak{s};\Gamma)\colonequals \im j_*$ for the reduced Floer homology. The third group $\overline{\HM}_*(Y,\mathfrak{s};\Gamma)$ in \eqref{E6.1} is trivial if
\begin{itemize}
\item $c_1(\mathfrak{s})\in H^2(Y;\mathbb{Z}) $ is non-torsion, or
\item $\Gamma=\Gamma_\xi$ and $[\xi]\neq 0\in H_1(Y;\mathbb{R})$; see \cite[Proposition 3.9.1]{Bible}.
\end{itemize}
Monopole Floer homology detects the Thurston norm $x(\cdot)$ on $H_2(Y;\mathbb{R})$. The next theorem expands on what lies behind this slogan:
\begin{theorem}[{\cite[Corollary 40.1.2 \& 41.4.3]{Bible}}]\label{T4.1} Let $Y$ be a closed oriented 3-manifold with $b_1(Y)>0$ and let $\kappa\in H_2(Y,\mathbb{Z})$ be any integral class.
\begin{enumerate}
\item $($The adjunction inequalities$)$ For any local coefficient system $\Gamma$ and any \spinc structure $\mathfrak{s}$ with $|\langle c_1(\mathfrak{s}),\kappa\rangle|>\|\kappa\|_{Th}$, the monopole Floer homology group $\HM_*(Y,\mathfrak{s};\Gamma)$ is trivial.
\item\label{2} If in addition $Y$ is irreducible, then there is a \spinc structure $\mathfrak{s}$ on $Y$ such that
\[
\langle c_1(\mathfrak{s}), \kappa\rangle=\|\kappa\|_{Th} \text{ and }\HM_*(Y,\mathfrak{s}; \Gamma_\xi)\neq \{0\}
\]
for any 1-cycle $\xi$ with $[\xi]\neq 0\in H_1(Y;\mathbb{R})$.
\end{enumerate}
\end{theorem}
In particular, when $[\xi]\neq 0\in H_1(Y,\mathbb{R})$, the subset
\[
\{c_1(\mathfrak{s}): \HM_*(Y,\mathfrak{s};\Gamma_\xi)\neq \{0\}\}\subset H^2(Y,\mathbb{R})
\]
determines the Thurston norm on $H_2(Y,\mathbb{R})$ in the same way that the Newton polytope of the Alexander polynomial determines the Alexander norm; see \cite{M02}.
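Explicitly, combining the two parts of Theorem \ref{T4.1} for irreducible $Y$ and any 1-cycle $\xi$ with $[\xi]\neq 0$ gives
\[
\|\kappa\|_{Th}=\max\big\{\langle c_1(\mathfrak{s}),\kappa\rangle \,:\, \HM_*(Y,\mathfrak{s};\Gamma_\xi)\neq \{0\}\big\}:
\]
the adjunction inequalities bound the right-hand side from above, while part \eqref{2} shows that the maximum is attained.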
However, Theorem \ref{T4.1} is stated only for the monopole Floer homology defined using exact perturbations. In \cite[Section 30]{Bible}, these groups are extended to non-exact perturbations:
\begin{equation}\label{E4.1}
\cdots\xrightarrow{i_*} \mathfrak{f}HM_*(Y,\mathfrak{s},c; \Gamma)\xrightarrow{j_*}\widehat{\HM}_*(Y,\mathfrak{s},c; \Gamma)\xrightarrow{p_* }\overline{\HM}_*(Y,\mathfrak{s}, c;\Gamma)\xrightarrow{i_*}\cdots
\end{equation}
where $c\in H^2(Y;\mathbb{R})$ is the period class and $\Gamma$ is any $c$-complete local coefficient system in the sense of \cite[Definition 30.2.2]{Bible}. These groups are defined using the Seiberg-Witten equations \eqref{3DDSWEQ} on $Y$ with a closed 2-form $\omega$ belonging to the class $[\omega]=\frac{i}{2\pi}\cdot c\in H^2(Y,i\mathbb{R})$.
\smallskip
The purpose of this section is to understand the extent to which Theorem \ref{T4.1} generalizes to these groups defined using non-exact perturbations. That said, the results of this section follow almost trivially from the general theory in \cite{Bible}; no originality is claimed here.
\subsection{Statements} Fix a \spinc structure $\mathfrak{s}$ on $Y$. Recall from \cite[Definition 29.1.1]{Bible} that the non-exact perturbation associated to a closed 2-form $\omega\in \Omega^2(Y, i\mathbb{R})$ is called monotone if
\[
2\pi^2c_1(\mathfrak{s})+c=2\pi^2c_1(\mathfrak{s})\cdot t
\]
for some $t\in \mathbb{R}$, where $c\colonequals -2\pi i[\omega]$ is the period class of this perturbation. Furthermore, it is called balanced, positively monotone, or negatively monotone if $t=0$, $t>0$, or $t<0$ respectively. We write
\[
\HM_*(Y,\mathfrak{s},c;\Gamma)\colonequals \im j_*\subset \widehat{\HM}_*(Y,\mathfrak{s},c;\Gamma)
\]
for the reduced monopole Floer homology. The third group $\overline{\HM}_*(Y,\mathfrak{s},c;\Gamma)$ in \eqref{E4.1} is always trivial unless $c$ equals the balanced class $c_b\colonequals -2\pi^2c_1(\mathfrak{s})$. Thus $\HM_*(Y,\mathfrak{s},c;\Gamma)= \widehat{\HM}_*(Y,\mathfrak{s},c;\Gamma)$ if $c\neq c_b$.
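To spell out the terminology, the defining relation can be rearranged as
\[
c=2\pi^2(t-1)\,c_1(\mathfrak{s}),
\]
so a monotone period class is simply a real multiple of $c_1(\mathfrak{s})$: the balanced case $t=0$ recovers $c_b=-2\pi^2c_1(\mathfrak{s})$, and negative monotonicity $t<0$ means this multiple is smaller than $-2\pi^2$.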
\smallskip
We focus on the local coefficient system $\mathbb{N}R_\omega$ on $\mathbb{C}B^\sigma(Y,\mathfrak{s})$, whose fiber at every point is the mod 2 Novikov ring $\mathbb{N}R$. The fundamental group of $\mathbb{C}B^\sigma(Y,\mathfrak{s})$ is $H^1(Y,\mathbb{Z})=\pi_0(\mathbb{C}G(Y))$. The monodromy of $\mathbb{N}R_\omega$ is then defined by sending $z\in H^1(Y,\mathbb{Z})$ to $q^{f(z)}$ with
\begin{align}\label{E4.6}
f(z)\colonequals \langle (2\pi^2c_1(\mathfrak{s})+c)\cup z, [Y]\rangle\in \mathbb{R}.
\end{align}
For the adjunction inequality, it suffices to consider \spinc structures with non-torsion $c_1(\mathfrak{s})$. The problem already occurs for the 3-torus $\mathbb{T}^3=\mathbb{T}^2\times S^1$ in Lemma \ref{L3.4}. Let $d\in H^2(\mathbb{T}^3, \mathbb{Z})$ be the Poincar\'{e} dual of $ \{pt\}\times S^1$. Suppose $\omega=i\delta\cdot d$ and consider the \spinc structure $\mathfrak{s}$ on $\mathbb{T}^3$ with $c_1(\mathfrak{s})=2\cdot d$. If $\delta<-2\pi $, then this perturbation is negatively monotone. In this case, the moduli space of the Seiberg-Witten equations \eqref{3DDSWEQ} is non-empty, and is diffeomorphic to $\mathbb{T}^2\times\{pt\}$. As one may verify, for either $\Gamma=\mathbb{Z}$ or $\mathbb{N}R_\omega$, the monopole Floer homology group $\HM_*(\mathbb{T}^3,\mathfrak{s},c;\Gamma)$ is non-trivial, so the adjunction inequality in Theorem \ref{T4.1} is violated in this case.
\smallskip
However, by Proposition \ref{P4.2}, negatively monotone perturbations are the only exceptions.
\begin{proposition}\label{P4.2} Let $Y$ be any closed oriented 3-manifold with $b_1(Y)\geq 2$ and let $\mathfrak{s}$ be any non-torsion \spinc structure. Suppose that $[\omega]\in H^2(Y,i\mathbb{R})$ is neither negatively monotone nor balanced with respect to $\mathfrak{s}$. Then there is an isomorphism
\[
\HM_*(Y,\mathfrak{s},c;\mathbb{N}R_\omega)\cong \HM_*(Y,\mathfrak{s};\mathbb{N}R_\omega)
\]
where the second group is defined using an exact perturbation with local system $\mathbb{N}R_\omega$. In particular, the adjunction inequality from Theorem \ref{T4.1} holds also for the group $\HM_*(Y,\mathfrak{s},c;\mathbb{N}R_\omega)$: it is trivial whenever $|\langle c_1(\mathfrak{s}),\kappa\rangle| >\|\kappa\|_{Th}$ for some integral class $\kappa\in H_2(Y;\mathbb{Z})$.
\end{proposition}
The non-vanishing result is more robust: it suffices to rule out exact perturbations.
\begin{proposition}\label{P4.5} Let $Y$ be any irreducible closed oriented 3-manifold with $b_1(Y)\geq 1$. If the period class $c=-2\pi i[\omega]\in H^2(Y; \mathbb{R})$ is non-zero, then for any integral class $\kappa\in H_2(Y;\mathbb{Z})$, there is a \spinc structure $\mathfrak{s}$ such that
\[
\langle c_1(\mathfrak{s}), \kappa\rangle=\|\kappa\|_{Th} \text{ and } \HM_*(Y,\mathfrak{s},c;\mathbb{N}R_\omega)\neq \{0\}.
\]
\end{proposition}
To ease our notation, we introduce the following shorthand:
\begin{definition}\label{D6.4} For any closed 2-form $\omega\in \Omega^2(Y; i\mathbb{R})$ and any integral class $\kappa\in H_2(Y;\mathbb{Z})$, we write $\HM(Y,[\omega])$ for the direct sum
\[
\bigoplus_{\mathfrak{s}} \HM_*(Y,\mathfrak{s},c;\mathbb{N}R_\omega)
\]
and $\HM(Y,[\omega]|\kappa)$ for the subgroup
\[
\bigoplus_{\langle c_1(\mathfrak{s}),\kappa\rangle=x(\kappa)} \HM_*(Y,\mathfrak{s},c;\mathbb{N}R_\omega).
\]
For any connected oriented subsurface $F\subset Y$, define similarly
\[
\HM(Y,[\omega]|F)= \bigoplus_{\langle c_1(\mathfrak{s}),[F]\rangle=x(F)} \HM_*(Y,\mathfrak{s},c;\mathbb{N}R_\omega).\qedhere
\]
\end{definition}
\begin{remark}\label{R4.5} When $[\omega]=0\in H^2(Y; i\mathbb{R})$ and $c_1(\mathfrak{s})\neq 0$, the local system $\mathbb{N}R_\omega$ is not trivial. Nevertheless, by \cite[P.288 (16.5)]{Bible}, we still have an isomorphism:
\[
\HM_*(Y,\mathfrak{s},c;\mathbb{N}R_\omega)\cong \HM_*(Y,\mathfrak{s}; \mathbb{N}R).
\]
The latter group is defined using the trivial system with coefficients $\mathbb{N}R$. Set
\[
\HM_*(Y|\kappa)\colonequals\bigoplus_{\langle c_1(\mathfrak{s}),\kappa\rangle=x(\kappa)} \HM_*(Y,\mathfrak{s};\mathbb{N}R).
\]
Then $ \HM_*(Y|\kappa)\cong \HM_*(Y,[0]|\kappa)$. We define the group $\HM_*(Y|F)$ similarly.
\end{remark}
\begin{corollary}\label{C6.5} If $Y$ is irreducible with $b_1(Y)>0$ and $[\omega]\neq 0\in H^2(Y; i\mathbb{R})$, then $\HM(Y,[\omega]|\kappa)\neq \{0\}$ for any integral class $\kappa\in H_2(Y;\mathbb{Z})$.
\end{corollary}
The proofs of Propositions \ref{P4.2} and \ref{P4.5} will occupy the rest of this section.
\subsection{Completions of Chain Complexes} The proof of Proposition \ref{P4.2} proceeds in two steps: the first step is to relate the group $\HM_*(Y,\mathfrak{s},c;\mathbb{N}R_\omega)$ to the Floer homology associated to the balanced class $c_b=-2\pi^2c_1(\mathfrak{s})$. For monotone perturbations, this is already done in \cite[Theorem 31.1.1\ \&\ 31.5.1]{Bible}. We shall describe a slightly different setup that generalizes to arbitrary non-balanced perturbations.
Consider the monopole Floer chain complexes of $(Y,\mathfrak{s}, c_b)$ associated to the local system $\mathbb{N}R_\omega$:
\[
\widecheck{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega),\ \widehat{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega),\ \overline{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega).
\]
As modules, they are free over the Novikov ring $\mathbb{N}R$, but none of them is finitely generated. Since the period class $c_b$ is balanced, we have to blow up the configuration space in order to define the Floer homology. Once an admissible perturbation of the Chern-Simons-Dirac functional $\mathbb{C}L_\omega$ is chosen, each reducible solution $[\alpha]$ of the perturbed 3-dimensional Seiberg-Witten equations contributes infinitely many generators to each of $\widecheck{C}_*,\widehat{C}_*,\overline{C}_*$, corresponding to the eigenvectors of the Dirac operator at $[\alpha]$.
As noted in \cite[Section 30.1]{Bible}, we can form a chain level completion using the filtration by eigenvalues. Label the reducible solutions in the quotient configuration space $\mathbb{C}B(Y,\mathfrak{s})$ as $[\alpha^1],\cdots,[\alpha^p]$ and label the corresponding critical points in the blown-up space as $[\alpha^r_i]$ with $i\in \mathbb{Z}$ and $1\leq r\leq p$, so that $[\alpha^r_i]$ corresponds to the eigenvalue $\lambda^r_i$ of the perturbed Dirac operator at $[\alpha^r]$ and $$\cdots<\lambda^r_i<\lambda^r_{i+1}<\cdots,$$ with $\lambda^r_0$ being the first positive one. For any $m\geq 1$, let
\begin{equation}\label{E4.3}
\widehat{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)_m\subset \widehat{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)
\end{equation}
be the subgroup generated by $[\alpha^r_i]$ with $i\leq -m$. This defines a filtration on $\widehat{C}_*$, called the $\lambda$-filtration. We form the completion
\[
\widehat{C}_\bullet(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)\supset\widehat{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega).
\]
The same construction also applies to the bar-version, so we obtain
\[
\overline{C}_\bullet(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)\supset \overline{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega).
\]
On the other hand, $\widehat{C}_*$ carries an additional filtration arising from the base ring $\mathbb{N}R$, called the $\mathbb{N}R$-filtration. One may formally write
\[
\widehat{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)=\bigoplus_{n\geq 0} \mathbb{N}R e_n,
\]
by identifying the fibers of $\mathbb{N}R_\omega$ at different critical points using a collection of paths in the blown-down space $\mathbb{C}B(Y,\mathfrak{s})$. We may start with the group ring $\mathbb{F}_2[\mathbb{R}]$ and obtain $\mathbb{N}R$ by taking the completion in the negative direction. This filtration on $\mathbb{N}R=\mathbb{F}_2[\mathbb{R}]^-$ then induces the $\mathbb{N}R$-filtration on the free module $ \widehat{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)$; let
\[
\widehat{C}_\diamond(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)\subset \widehat{C}_\bullet (Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)
\]
be the resulting completion. Any element $\sum_{n\geq 0} a_n\cdot e_n\in \widehat{C}_\diamond(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)$ may have infinitely many non-zero coefficients $a_n\in \mathbb{N}R$, but under the topology of $\mathbb{N}R$, we must have
\begin{equation}\label{E4.5}
\lim_{n\to\infty} a_n=0.
\end{equation}
The bar-version analogue is constructed in a slightly different way. First, take the $\mathbb{N}R$-completion of its $m$-th filtered subgroup $\overline{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)_m\subset \overline{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)$ for each $m\in \mathbb{Z}$, denoted by
\[
\overline{C}_\diamond(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)_m.
\]
Next, we form the union:
\[
\overline{C}_\diamond(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)\colonequals\bigcup_{m\in \mathbb{Z}} \overline{C}_\diamond(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)_m\subset \overline{C}_\bullet(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega).
\]
An element in $\overline{C}_\diamond(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)$ has only finitely many non-zero coefficients for critical points with $\lambda^r_i>0$, while an infinite sum may occur in the negative direction, subject to the convergence condition \eqref{E4.5}. The upshot is that the homology groups of
\[
\widehat{C}_*(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega),\widehat{C}_\diamond(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega), \overline{C}_\diamond(Y,\mathfrak{s}, c_b; \mathbb{N}R_\omega)
\]
form a long exact sequence (by the proof of \cite[Proposition 22.2.1]{Bible}), which fits into the diagram below:
\begin{equation}\label{E4.2}
\begin{tikzcd}[column sep=1.7em]
\cdots\arrow[r,"i_*"] &\mathfrak{f}HM_*(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\arrow[r,"j_*"] \arrow[d,equal]&\widehat{\HM}_*(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\arrow[r,"p_*"] \arrow[d,"x_*"]&\overline{\HM}_*(Y,\mathfrak{s},c_b;\mathbb{N}R_\omega)\arrow[r,"i_*"]\arrow[d]&\cdots\\
\cdots\arrow[r,"i_\diamond"] &\mathfrak{f}HM_*(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\arrow[r,"j_\diamond"]\arrow[d,equal] &\widehat{\HM}_\diamond(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\arrow[r,"p_\diamond"]\arrow[d,"y_\diamond"] &\overline{\HM}_\diamond(Y,\mathfrak{s},c_b;\mathbb{N}R_\omega)\arrow[r,"i_\diamond"]\arrow[d]&\cdots\\
\cdots\arrow[r,"i_\bullet"] &\mathfrak{f}HM_*(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\arrow[r,"j_\bullet"] &\widehat{\HM}_\bullet(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\arrow[r,"p_\bullet"] &\overline{\HM}_\bullet(Y,\mathfrak{s},c_b;\mathbb{N}R_\omega)\arrow[r,"i_\bullet"]&\cdots
\end{tikzcd}
\end{equation}
The next proposition says that $\HM_*(Y,\mathfrak{s},c;\mathbb{N}R_\omega)$ can be computed in terms of the group $\widehat{\HM}_\diamond$ associated to the balanced period class $c_b\colonequals -2\pi^2c_1(\mathfrak{s})$.
\begin{proposition}\label{P4.3} For any $(Y,\mathfrak{s})$ and any non-balanced perturbation with period class $c=-2\pi i[\omega]$, we have an isomorphism
\begin{align*}
\widehat{\HM}_*(Y,\mathfrak{s}, c;\mathbb{N}R_\omega)&\cong \widehat{\HM}_\diamond(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega).
\end{align*}
\end{proposition}
\begin{proof} This result follows from the proof of \cite[Theorem 31.1.1]{Bible}; see \cite[Section 31.2]{Bible}. Since we use the $c$-complete local coefficient system $\mathbb{N}R_\omega$ here, there is no need to estimate the topological energy $\mathcal{E}_{1}^{top}$ as in \cite[(31.5)\ P.611]{Bible}, and the argument is even simpler.
\end{proof}
The second step in the proof of Proposition \ref{P4.2} is to relate $\widehat{\HM}_\diamond(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)$ with the bullet-version $\widehat{\HM}_\bullet (Y,\mathfrak{s},c_b;\mathbb{N}R_\omega)$ using the vertical map $y_\diamond$ in \eqref{E4.2}. In fact, more is true when the class $[\omega]$ is not monotone with respect to $c_1(\mathfrak{s})$:
\begin{proposition}\label{P4.4} Let $Y$ be any closed oriented 3-manifold with $b_1(Y)\geq 2$ and let $\mathfrak{s}$ be any non-torsion \spinc structure. Suppose that the period class $c=-2\pi i[\omega]$ is not monotone. Then the vertical maps $x_*$ and $y_\diamond$ in diagram \eqref{E4.2} are isomorphisms:
\[
\widehat{\HM}_*(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\xrightarrow[\cong]{x_*} \widehat{\HM}_\diamond(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\xrightarrow[\cong]{y_\diamond}
\widehat{\HM}_\bullet(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega).
\]
\end{proposition}
We are now ready to prove Proposition \ref{P4.2}.
\begin{proof}[Proof of Proposition \ref{P4.2}] When $c$ is not monotone, we combine Propositions \ref{P4.3} and \ref{P4.4} with the isomorphism
\begin{equation}\label{E4.8}
\widehat{\HM}_\bullet(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\cong \HM_*(Y,\mathfrak{s}; \mathbb{N}R_\omega)
\end{equation}
from \cite[Theorem 31.1.1]{Bible} to conclude. The case when $c$ is positively monotone is already addressed in \cite[Theorem 31.1.2]{Bible}.
\end{proof}
\begin{proof}[Proof of Proposition \ref{P4.4}] Since the left vertical maps in diagram \eqref{E4.2} are identity maps, the statement follows from the fact that for any $\circ\in \{*, \diamond,\bullet\}$,
\[
\overline{\HM}_\circ (Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)=\{0\}.
\]
The case $\circ=\bullet$ is already addressed in \cite[Theorem 31.1.1]{Bible}; in fact, the group $\overline{\HM}_\bullet(Y,\mathfrak{s},c_b; \Gamma)$ always vanishes, no matter which local system $\Gamma$ we use.
In general, for any $\circ\in \{*, \diamond,\bullet\}$, the group $\overline{\HM}_\circ (Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)$ is an instance of coupled Morse homology (see \cite[Section 34]{Bible}) associated to the Picard torus of $Y$,
$$\mathbb{T}^b\colonequals H^1(Y; \mathbb{R})/ H^1(Y;\mathbb{Z}),$$
with $b\colonequals b_1(Y)\geq 2$. In particular, we may use \cite[Theorem 35.1.6]{Bible} to compute $\overline{\HM}_\circ (Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)$ using a standard chain complex, which we recall below. Consider the local coefficient system $\Gamma_\omega$ on $\mathbb{T}^b$ with fiber $\mathbb{N}R[T,T^{-1}]$ and with monodromy given by
\begin{align}\label{E4.7}
\pi_1(\mathbb{T}^b)\cong H^1(Y,\mathbb{Z})&\to \mathbb{N}R[T,T^{-1}]^\times\\
z&\mapsto q^{f(z)}T^{g(z)}\nonumber
\end{align}
where $f(z)$ is as in \eqref{E4.6} and $g(z)=\langle \half c_1(\mathfrak{s})\cup z, [Y]\rangle$. Then the group $\overline{\HM}_*(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)$ is isomorphic to the homology of $(C, \partial=\partial_1+\partial_3)$, where
\[
C=C(\mathbb{T}^b, h; \Gamma_\omega)\colonequals \bigoplus_{n=0}^N \mathbb{N}R[T,T^{-1}] x_n
\]
is the Morse complex of $\mathbb{T}^b$ with local coefficients $\Gamma_\omega$, defined using a suitable Morse function $h: \mathbb{T}^b\to\mathbb{R}$, and $\partial_1$ is the Morse differential. As a result, $C$ is a finite-rank free module over the ring $\mathbb{N}R[T,T^{-1}]$, generated by the critical points $\{x_n\}$ of $h$. The Morse index then induces an additional grading on $C$ such that $\deg\partial_i=-i$ for $i=1,3$. The $m$-th $\lambda$-filtered subgroup of $C$ is given by
\[
C_m\colonequals T^{-m}\bigoplus_{n=0}^N \mathbb{N}R[T^{-1}] x_n,\quad m\in \mathbb{Z}.
\]
The upshot is that the differential $\mathfrak{p}artial$ is $\mathbb{N}R[T,T^{-1}]$-linear. The bar-version chain complex $$(\overline{C}_*(Y,\mathfrak{s},c_b;\mathbb{N}R_\omega),\bar{\partial})$$
which defines $\overline{\HM}_*(Y,\widehat{\mathfrak{s}},c;\mathbb{N}R_\omega)$ admits a structure very similar to that of $(C,\mathfrak{p}artial)$, but $\bar{\partial}$ is only $\mathbb{N}R$-linear and may have higher components with respect to the Morse grading: $\bar{\partial}=\bar{\partial}_1+\bar{\partial}_3+\bar{\partial}_5+\cdots$. By \cite[Proposition 34.4.1\ \&\ 33.3.8, Theorem 35.1.6]{Bible}, $(\overline{C}_*(Y,\mathfrak{s},c_b;\mathbb{N}R_\omega),\bar{\partial})$ is chain homotopy equivalent to such a standard complex $(C,\mathfrak{p}artial)$, preserving the topologies induced by $\{(\overline{C}_*)_m\}$ and $\{C_m\}$ respectively.
To compute the homology of $(C,\mathfrak{p}artial)$, we exploit the spectral sequence induced by the Morse grading, which abuts to $H(C,\mathfrak{p}artial)$ and whose $E_1$-page is $H(C,\mathfrak{p}artial_1)$. Since the period class $c=-2\mathfrak{p}i i[\omega]$ is not monotone, one verifies immediately that $H(C,\mathfrak{p}artial_1)=\{0\}$. Indeed, we may write $\mathbb{T}^b=S^1\times \mathbb{T}^{b-1}$ such that an integral multiple of $\{pt\}\times \mathbb{T}^{b-1}$ is dual to the map $g\in H^1(\mathbb{T}^b,\mathbb{Z})=\Hom(H_1(\mathbb{T}^b, \mathbb{Z}), \mathbb{Z})$. One may further decompose $\mathbb{T}^{b-1}$ into a product of circles and take $h$ to be a sum of Morse functions on the circle factors. The homology of $\mathbb{T}^{b-1}$ with local coefficients $\Gamma_\omega$ is then trivial, since the holonomy of $\Gamma_\omega$ along some $S^1$-factor is non-trivial and lies in $\mathbb{N}R$. Finally, we deduce that $H(C,\mathfrak{p}artial_1)=\{0\}$ using the K\"{u}nneth formula.
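The vanishing for a single circle factor can be made explicit; the following is a minimal sketch, where $u\in\mathbb{N}R^\times$ denotes the holonomy of $\Gamma_\omega$ along a generator of that factor, assumed non-trivial ($u\mathfrak{n}eq 1$):

```latex
% Morse complex of S^1 with local coefficients: one maximum x_1, one minimum
% x_0, and two gradient flow lines, contributing 1 and the holonomy u:
C(S^1;\Gamma_\omega)\colon\mathfrak{q}uad \mathbb{N}R\, x_1 \mathbb{X}rightarrow{\ 1+u\ } \mathbb{N}R\, x_0.
% With mod 2 coefficients, 1+u=1-u is invertible whenever u\neq 1, so both
% homology groups vanish; the Kunneth formula then propagates this vanishing
% to the product of circle factors.
```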
\mathfrak{s}mallskip
This finishes the proof when $\circ=*$. When $\circ=\diamond$ or $\bullet$, it suffices to replace the base ring $\mathbb{N}R[T,T^{-1}]$ in \mathbf{e}qref{E4.7} by
\[
\mathbb{N}R[T,T^{-1}]_\diamond \text{ or } \mathbb{N}R[T,T^{-1}]]
\]
respectively. The ring $\mathbb{N}R[T,T^{-1}]_\diamond$ is obtained by taking the completion of $\mathbb{N}R[T^{-1}]$ with respect to the topology of $\mathbb{N}R$ and then inverting $T^{-1}$. Any element of $\mathbb{N}R[T,T^{-1}]_\diamond$ is then of the form $\mathfrak{su}m_{n\mathfrak{g}eq m} a_n T^{-n}$ for some $m\in \mathbb{Z}$ and $a_n\in \mathbb{N}R$ such that $\lim_{n\to\infty} a_n=0$ in $\mathbb{N}R$; so
\[
\mathbb{N}R[T,T^{-1}]\mathfrak{su}bset \mathbb{N}R[T,T^{-1}]_\diamond\mathfrak{su}bset \mathbb{N}R[T,T^{-1}]].
\]
One may now apply the universal coefficient theorem to conclude.
\mathbf{e}nd{proof}
\mathfrak{su}bsection{Proof of Proposition \ref{P4.5}} To deal with the non-vanishing result, we first look at a non-torsion \mathfrak{s}pinc structure $\mathfrak{s}$ on $Y$.
If the period class $c$ is non-balanced with respect to $\mathfrak{s}$, then the map $j_\bullet=y_\diamond\circ j_\diamond$ in diagram \mathbf{e}qref{E4.2} is an isomorphism, since the group $\overline{\HM}_\bullet(Y,\mathfrak{s},c_b;\mathbb{N}R_\omega)$ vanishes by \cite[Theorem 31.1.1]{Bible}. This shows that the vertical map
\[
y_\diamond: \widehat{\HM}_\diamond(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)\to
\widehat{\HM}_\bullet(Y,\mathfrak{s},c_b; \mathbb{N}R_\omega)
\]
is always surjective. By \mathbf{e}qref{E4.8} and Proposition \ref{P4.3}, we conclude that
\begin{equation}\label{E4.9}
\rank_\mathbb{N}R \HM_*(Y,\mathfrak{s}, c; \mathbb{N}R_\omega)\mathfrak{g}eq \rank_\mathbb{N}R\HM_*(Y,\mathfrak{s}; \mathbb{N}R_\omega).
\mathbf{e}nd{equation}
If the period class $c$ is balanced with respect to $\mathfrak{s}$, then $\HM_*(Y,\mathfrak{s}, c; \mathbb{N}R_\omega)\colonequals \im j^*$ in the diagram \mathbf{e}qref{E4.2}. We apply the same argument to $y_\diamond\circ x_*$; so the inequality \mathbf{e}qref{E4.9} still holds.
When $c_1(\mathfrak{s})$ is torsion and $c$ is non-exact, the inequality \mathbf{e}qref{E4.9} follows from \cite[Theorem 31.1.3]{Bible}.
It remains to verify that for the \textit{special} \mathfrak{s}pinc structure $\mathfrak{s}$ given by Theorem \ref{T4.1} $(2)$, we have $
\HM_*(Y,\mathfrak{s}; \mathbb{N}R_\omega)\mathfrak{n}eq \{0\}$. Combined with \mathbf{e}qref{E4.9}, this will complete the proof of Proposition \ref{P4.5}.
\mathfrak{s}mallskip
The rest of the argument is completely formal and is inspired by \cite[Section 32.3]{Bible}. The group $\HM_*(Y,\mathfrak{s}; \mathbb{N}R_\omega)$ can be understood using a smaller local coefficient system $\Pi_\omega$, whose fiber at any point in $\mathbb{C}B(Y,\mathfrak{s})$ is the group ring $R\colonequals \mathbb{Z}[\mathbb{Z}]$ and whose monodromy is again \mathbf{e}qref{E4.6} (note that the image of $f$ lies in a $\mathbb{Z}$-subgroup in $\mathbb{R}$). Let $(\widehat{C}_*,\hat{\mathfrak{p}artial})$ be the resulting chain complex defined over $R$. There are a few other base rings that we will look at:
\[
\begin{tikzcd}
R=\mathbb{Z}[\mathbb{Z}] \arrow[r]\arrow[rd] & \mathbb{R}[\mathbb{Z}] \arrow[r,"g_\alpha"]&\mathbb{R}\\
& \mathbb{B}F_2[\mathbb{Z}]\arrow[r] & \mathcal{K} \arrow[r] & \mathbb{N}R.
\mathbf{e}nd{tikzcd}
\]
where $\mathcal{K}$ is the fraction field of $\mathbb{B}F_2[\mathbb{Z}]$ and, for any $\alpha\in \mathbb{R}$, $g_\alpha$ is the ring homomorphism sending the generator $t\in\mathbb{Z}$ to $e^{\alpha}\in \mathbb{R}$. Note that both $\mathbb{B}F_2[\mathbb{Z}]$ and $\mathbb{R}[\mathbb{Z}]$ are principal ideal domains. Moreover, $\mathcal{K}\to \mathbb{N}R$ is a field extension. The goal is to show that $H(\widehat{C}_*\otimes_R \mathbb{B}F_2[\mathbb{Z}], \hat{\mathfrak{p}artial})$ contains at least one free summand, so that $\HM_*(Y,\mathfrak{s} ;\mathbb{N}R_\omega)=H(\widehat{C}_*\otimes_R \mathbb{N}R, \hat{\mathfrak{p}artial})\mathfrak{n}eq \{0\}$.
Consider a finite presentation of $H(\widehat{C}_*)$ over the Noetherian ring $R$:
\begin{equation}\label{E4.10}
R^n\mathbb{X}rightarrow{A} R^k\to H(\widehat{C}_*)\to 0,
\mathbf{e}nd{equation}
and let $I\mathfrak{su}bset R$ be the ideal generated by all $k\times k$ minors of $A$, called the Fitting ideal of $H(\widehat{C}_*)$. By \cite[Corollary 20.4]{E95}, $I$ is independent of the presentation that we choose.
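The base-change behavior of Fitting ideals, used twice below, can be summarized as follows; this is a standard sketch, with $S$ standing for either $\mathbb{B}F_2[\mathbb{Z}]$ or $\mathbb{R}[\mathbb{Z}]$:

```latex
% Tensoring the presentation \eqref{E4.10} with S yields an exact sequence
%   S^n --A--> S^k --> H(\widehat{C}_*)\otimes S --> 0
% with the same matrix A; hence the k x k minors of A generate the
% extended ideal:
\op{Fitt}\big(H(\widehat{C}_*)\otimes S\big)\;=\;I\cdot S.
```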
Suppose on the contrary that the finitely generated $\mathbb{B}F_2[\mathbb{Z}]$-module $H(\widehat{C}_*\otimes_R \mathbb{B}F_2[\mathbb{Z}], \hat{\mathfrak{p}artial})$ is torsion. We claim that $I\mathfrak{n}eq \{0\}$; otherwise, the extension $I_{\mathbb{B}F_2}$ of $I$ in $\mathbb{B}F_2[\mathbb{Z}]$ would be trivial. On the other hand, $I_{\mathbb{B}F_2}$ is also the Fitting ideal of $H(\widehat{C}_*)\otimes_\mathbb{Z} \mathbb{B}F_2$, as one sees by taking the tensor product of \mathbf{e}qref{E4.10} with $\mathbb{B}F_2$ (over $\mathbb{Z}$). Since $\mathbb{B}F_2[\mathbb{Z}]$ is a principal ideal domain, $H(\widehat{C}_*)\otimes_\mathbb{Z} \mathbb{B}F_2$ would then contain a free $\mathbb{B}F_2[\mathbb{Z}]$-summand. The universal coefficient theorem \cite[Theorem 3A.3]{Hatcher} then provides an injection:
\[
0\to H(\widehat{C}_*)\otimes_\mathbb{Z} \mathbb{B}F_2\to H(\widehat{C}_*\otimes_\mathbb{Z}\mathbb{B}F_2, \hat{\mathfrak{p}artial}),
\]
which contradicts our starting assumption.
Now consider the tensor product of $\mathbf{e}qref{E4.10}$ with $\mathbb{R}$. The Fitting ideal $I_\mathbb{R}$ of $H(\hat{C}_*)\otimes_\mathbb{Z} \mathbb{R}$ is then the extension of $I$ in $\mathbb{R}[\mathbb{Z}]$. Since $I\mathfrak{n}eq \{0\}$, $I_\mathbb{R}\mathfrak{n}eq \{0\}$. Using the universal coefficient theorem, we conclude that
\[
H(\hat{C}_*)\otimes_\mathbb{Z} \mathbb{R}\cong H(\hat{C}_*\otimes_\mathbb{Z} \mathbb{R})
\]
is a torsion $\mathbb{R}[\mathbb{Z}]$-module and is therefore annihilated by some polynomial $u(t)\in \mathbb{R}[\mathbb{Z}]$.
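The evaluation step that follows is elementary; a brief sketch:

```latex
% Write u(t)=\sum_{n=m}^{M} a_n t^n with a_M\neq 0. Then t^{-m}u(t) is an
% ordinary polynomial of degree M-m, with at most M-m real roots. Since
% \alpha\mapsto e^{\alpha} is injective into (0,\infty), all but finitely
% many \alpha\in\mathbb{R} satisfy
u(e^{\alpha})\mathfrak{n}eq 0,
% and any such choice makes g_\alpha(u(t)) a unit in \mathbb{R}.
```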
On the one hand, one may pick $\alpha\in \mathbb{R}$ such that $g_\alpha(u(t))=u(e^{\alpha})\mathfrak{n}eq 0\in \mathbb{R}$; as a result, the group $H(\widehat{C}_*\otimes_{g_\alpha} \mathbb{R}, \hat{\mathfrak{p}artial})$ is trivial for this particular ring homomorphism $g_\alpha: \mathbb{R}[\mathbb{Z}]\to \mathbb{R}$.
On the other hand, the local coefficient system $\Pi_\omega\otimes_{g_\alpha} \mathbb{R}$ agrees with $\Gamma_\mathbb{X}i$ for some 1-cycle $\mathbb{X}i$ with $[\mathbb{X}i]\mathfrak{n}eq 0\in H_1(Y;\mathbb{Z})$, since $[\omega]$ is non-balanced. Theorem \ref{T4.1} (2) then asserts that $\HM_*(Y,\mathfrak{s}; \Gamma_\mathbb{X}i)$ is non-trivial. A contradiction.\mathfrak{q}ed
\mathfrak{su}bsection{Computation for Product 3-manifolds} For future reference, we recall a classical result for the 3-manifold $\Sigma_g\times S^1$, where $\Sigma_g$ is a closed surface of genus $g\mathfrak{g}eq 2$.
\begin{lemma}\label{L6.9} Following the shorthands from Definition \ref{D6.4}, for any closed 2-form $\omega\in \Omega^2(\Sigma_g\times S^1, i\mathbb{R})$ with
\begin{equation}\label{E4.11}
i\langle [\omega], [\Sigma_g]\rangle<2\mathfrak{p}i (g-1),
\mathbf{e}nd{equation}
we have
$
\HM(\Sigma_g\times S^1,[\omega]|\Sigma_g )\cong \mathbb{N}R.
$
\mathbf{e}nd{lemma}
\begin{proof}[Proof of Lemma \ref{L6.9}] Let $\mathfrak{s}$ be any \mathfrak{s}pinc structure contributing to the group $\HM(\Sigma_g\times S^1,[\omega]|\Sigma_g )$. Under the assumption \mathbf{e}qref{E4.11}, the period class $c=-2\mathfrak{p}i i[\omega]$ is neither negatively monotone nor balanced with respect to $\mathfrak{s}$; indeed,
\[
\langle 2\mathfrak{p}i^2 c_1(\mathfrak{s})+c, [\Sigma_g]\rangle=4\mathfrak{p}i^2(g-1)-2\mathfrak{p}i i\langle [\omega],[\Sigma_g]\rangle >0.
\]
Using Proposition \ref{P4.2}, we reduce the computation to the case when the perturbation is exact, but the coefficient system is still $\mathbb{N}R_\omega$. Now we use \cite[Lemma 2.2]{KS} to conclude the proof.
\mathbf{e}nd{proof}
\mathfrak{s}ection{Thurston Norm Detection}\label{Sec7}
From now on, we will always take $Y$ to be an oriented 3-manifold with toroidal boundary.
\begin{definition}\label{D7.1}For any $T$-surface $\mathbb{T}Sigma=(\Sigma, g_\Sigma,\lambda,\mu)$ and any 1-cell $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma)$, consider the set of monopole classes:
\[
\mathfrak{s}w(\mathbb{Y})\colonequals \{c_1(\widehat{\mathfrak{s}}): \HM_*(\mathbb{Y},\widehat{\mathfrak{s}})\mathfrak{n}eq\{0\}\} \mathfrak{su}bset H^2(Y,\mathfrak{p}artial Y; \mathbb{Z}),
\]
and for any $\kappa\in H_2(Y,\mathfrak{p}artial Y;\mathbb{Z})$, define
\[
\delta arphi_{\mathbb{Y}}(\kappa)\colonequals \max_{c_1(\widehat{\mathfrak{s}})\in \mathfrak{s}w(\mathbb{Y})} \langle c_1(\widehat{\mathfrak{s}}), \kappa\rangle.
\]
Our convention here is that $\delta arphi_{\mathbb{Y}}\mathbf{e}quiv -\infty$ if $\mathfrak{s}w(\mathbb{Y})=\mathbf{e}mptyset$.
\mathbf{e}nd{definition}
When $Y$ is connected and irreducible, it is tempting to generalize Theorem \ref{T4.1} and relate $\mathfrak{s}w(\mathbb{Y})$ with the Thurston norm on $H_2(Y,\mathfrak{p}artial Y;\mathbb{R})$. However, the author was not able to show that $\mathfrak{s}w(\mathbb{Y})$ is symmetric about the origin; so only a weaker statement is obtained in this paper.
\begin{theorem}\label{T7.1} For any $T$-surface $\mathbb{T}Sigma=(\Sigma, g_\Sigma,\lambda,\mu)$, let $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma)$ be any 1-cell with $Y$ connected and irreducible. Then the set of monopole classes $\mathfrak{s}w(\mathbb{Y})$ is non-empty and determines the Thurston norm on $H_2(Y,\mathfrak{p}artial Y;\mathbb{R})$ in the following sense:
\[
x(\kappa)=\half ( \delta arphi_{\mathbb{Y}}(\kappa)+\delta arphi_{\mathbb{Y}}(-\kappa)),\ \mathfrak{f}orall \kappa\in H_2(Y,\mathfrak{p}artial Y; \mathbb{Z}).
\]
In general, we only have an inequality $\delta arphi_{\mathbb{Y}}(\kappa)+\delta arphi_{\mathbb{Y}}(-\kappa)\leq 2x(\kappa)$.
\mathbf{e}nd{theorem}
One may approach the more desirable statement
\[
x(\kappa)=\delta arphi_{\mathbb{Y}}(\kappa)=\delta arphi_{\mathbb{Y}}(-\kappa)
\]
by either proving the symmetry of $\mathfrak{s}w(\mathbb{Y})$ or the adjunction inequality:
\[
\delta arphi_{\mathbb{Y}}(\kappa)\leq x(\kappa).
\]
But the author was unable to verify either of them directly. This Thurston norm detection result is accompanied by a fiberedness detection result:
\begin{theorem}\label{T7.2} For any $T$-surface $\mathbb{T}Sigma=(\Sigma, g_\Sigma,\lambda,\mu)$, let $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma)$ be any 1-cell with $Y$ connected and irreducible, and $\kappa\in H_2(Y,\mathfrak{p}artial Y;\mathbb{Z})$ be any integral class. Consider the subgroup
\[
\HM_*(\mathbb{Y}|\kappa)\colonequals \bigoplus_{\langle c_1(\widehat{\mathfrak{s}}), \kappa\rangle=\delta arphi_{\mathbb{Y}}(\kappa)} \HM_*(\mathbb{Y},\widehat{\mathfrak{s}}).
\]
If $\rank_\mathbb{N}R \HM_*(\mathbb{Y}|\kappa)=\rank_\mathbb{N}R \HM_*(\mathbb{Y}|-\kappa)=1$, then $\kappa$ can be represented by a Thurston norm minimizing surface $F$ and $Y$ fibers over $S^1$ with $F$ as fiber.
\mathbf{e}nd{theorem}
The rest of this section is devoted to the proof of Theorem \ref{T7.1}, while the proof of Theorem \ref{T7.2} is deferred to Section \ref{Sec9}, after a digression into sutured monopole Floer homology in Section \ref{Sec8}.
\begin{proof}[Proof of Theorem \ref{T7.1}] We focus on the case when $Y$ is irreducible. The strategy is to apply the Gluing Theorem \ref{T2.5} and reduce the problem to the double of $Y$:
\[
\tilde{Y}\colonequals Y\ \bigcup_{\Sigma}\ (-Y).
\]
Since $Y$ is irreducible, so is $\tilde{Y}$. Then we can conclude using Propositions \ref{P4.2} and \ref{P4.5}.
For any 1-cell $\mathbb{Y}=(Y,g_Y,\omega,\cdots)$, consider its orientation reversal:
\[
(-\mathbb{Y})=(-Y,g_Y,\omega, \cdots)\in \mathbb{A}T(\mathbb{T}Sigma',\mathbf{e}mptyset).
\]
The problem here is that the $T$-surface $\mathbb{T}Sigma'=(\Sigma, g_\Sigma, -\lambda,\mu)$ differs from $\mathbb{T}Sigma$ in that the sign of $\lambda$ is reversed, so we cannot form the horizontal composition $\mathbb{Y}\circ_h (-\mathbb{Y})$.
This problem is circumvented by changing the 2-form $\omega$ on $(-Y)$. Since $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$ is independent of the metric $g_Y$, we may assume that $g_Y$ is cylindrical on $(-2,1]_s\times\Sigma\mathfrak{su}bset Y$. Pick a cut-off function $\chi_1: (-2,1]_s\to \mathbb{R}$ such that $\chi_1\mathbf{e}quiv 0$ if $s\leq -3/2$ and $\chi_1\mathbf{e}quiv 1$ if $s\mathfrak{g}eq-1$. Write $\omega=\overline{\omega}+\chi_1(s)ds\wedge\lambda$ and set
\[
\mathbb{Y}'\colonequals (Y,g_Y,\omega',\cdots )\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma') \text{ with }\omega'\colonequals\overline{\omega}-\chi_1(s)ds\wedge\lambda.
\]
We shall work instead with the orientation reversal $(-\mathbb{Y}')\in \mathbb{A}T(\mathbb{T}Sigma, \mathbf{e}mptyset)$. At this point, we need a lemma relating the Floer homology of $\mathbb{Y}'$ with that of $\mathbb{Y}$:
\begin{lemma}\label{L7.3} For any $T$-surface $\mathbb{T}Sigma$, let $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma)$ be any 1-cell and $\widehat{\mathfrak{s}}\in \Spincr(Y)$ be any relative \mathfrak{s}pinc structure. Then we have the following isomorphisms:
\begin{enumerate}[label=(\arabic*)]
\item\label{7.2.1} (Poincar\'{e} Duality) $ \HM_*((-\mathbb{Y}),\widehat{\mathfrak{s}})\cong \HM^*(\mathbb{Y},\widehat{\mathfrak{s}})$.
\item\label{7.2.2} (Reversing the sign of $\lambda$) $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})\cong \HM_*(\mathbb{Y}',\widehat{\mathfrak{s}})$.
\mathbf{e}nd{enumerate}
\mathbf{e}nd{lemma}
Let us finish the proof of Theorem \ref{T7.1} assuming Lemma \ref{L7.3}. Consider the horizontal composition of $\mathbb{Y}$ and $-\mathbb{Y}'$:
\[
(\tilde{Y},\tilde{\omega},\cdots)\colonequals \mathbb{Y}\circ_h (-\mathbb{Y}').
\]
We verify that the closed 2-form $\tilde{\omega}$ is neither negatively monotone nor balanced with respect to any \mathfrak{s}pinc structure on the double $\tilde{Y}$. Indeed, by \ref{T3} and \ref{T1}, $0<|\langle \tilde{\omega},[\Sigma^{(i)}]\rangle|<2\mathfrak{p}i$ for any component $\Sigma^{(i)}\mathfrak{su}bset \Sigma$.
Any integral class $\kappa\in H_2(Y, \mathfrak{p}artial Y;\mathbb{Z})$ can be represented by a properly embedded oriented surface $F\mathfrak{su}bset Y$ minimizing the Thurston norm. Together with its orientation reversal $(-F)\mathfrak{su}bset (-Y)$, it forms a closed surface
\[
\tilde{F}\colonequals F\cup (-F)\mathfrak{su}bset \tilde{Y}.
\]
Since $Y$ is irreducible and $Y\mathfrak{n}eq S^1\times D^2$, the surface $F$ has no sphere or disk components. By \cite[Lemma 6.15]{Gabai83}, the double $\tilde{F}\mathfrak{su}bset \tilde{Y}$ is also norm-minimizing.
By Theorem \ref{T3.5}, we have
\begin{equation}\label{E7.1}
\HM_*(\tilde{Y},\mathfrak{s},\tilde{c}; \mathbb{N}R_{\tilde{\omega}})\cong \bigoplus_{ \widehat{\mathfrak{s}}_1\circ_h \widehat{\mathfrak{s}}_2=\mathfrak{s}} \HM_*(\mathbb{Y},\widehat{\mathfrak{s}}_1)\otimes_\mathbb{N}R \HM_*(-\mathbb{Y}',\widehat{\mathfrak{s}}_2),
\mathbf{e}nd{equation}
where the sum is over all pairs $(\widehat{\mathfrak{s}}_1,\widehat{\mathfrak{s}}_2)\in \Spincr(Y)\times \Spincr(-Y)$ with $\widehat{\mathfrak{s}}_1\circ_h \widehat{\mathfrak{s}}_2=\mathfrak{s}$. Note that
\begin{equation}\label{E7.3}
\langle c_1(\widehat{\mathfrak{s}}_1\circ_h\widehat{\mathfrak{s}}_2), [\tilde{F}]\rangle =\langle c_1(\widehat{\mathfrak{s}}_1), [F]\rangle +\langle c_1(\widehat{\mathfrak{s}}_2), [-F]\rangle.
\mathbf{e}nd{equation}
By the adjunction inequality from Proposition \ref{P4.2}, the left hand side of \mathbf{e}qref{E7.1} vanishes whenever
\[
\langle c_1(\mathfrak{s}), [\tilde{F}]\rangle >x(\tilde{F})=2x(F).
\]
Combined with Corollary \ref{C6.5}, \mathbf{e}qref{E7.1} and \mathbf{e}qref{E7.3}, this implies that the group
\begin{equation}\label{E7.2}
\HM(\tilde{Y},[\tilde{\omega}]|[\tilde{F}])\cong \HM_*(\mathbb{Y}|\kappa)\otimes_\mathbb{N}R \HM_*(-\mathbb{Y}'|-\kappa)
\mathbf{e}nd{equation}
is non-vanishing. By Lemma \ref{L7.3}, we have
\begin{equation}
\rank_\mathbb{N}R \HM_*(-\mathbb{Y}'|-\kappa)= \rank_\mathbb{N}R\HM_*(\mathbb{Y}|-\kappa),
\mathbf{e}nd{equation}
so
\[
2x(\kappa)=x(\tilde{F})=\delta arphi_{\mathbb{Y}}(\kappa)+\delta arphi_{(-\mathbb{Y}')}(-\kappa)=\delta arphi_{\mathbb{Y}}(\kappa)+\delta arphi_{\mathbb{Y}}(-\kappa).
\]
This completes the proof of Theorem \ref{T7.1} when $Y$ is irreducible. In the general case, the inequality
\[
\delta arphi_{\mathbb{Y}}(\kappa)+\delta arphi_{\mathbb{Y}}(-\kappa)\leq 2x(\kappa),
\]
follows from the vanishing result that we used earlier.
\mathbf{e}nd{proof}
\begin{proof}[Proof of Lemma \ref{L7.3}] The first isomorphism \ref{7.2.1} is due to Poincar\'{e} duality. Changing the orientation of $\widehat{Y}$ has the same effect as changing the sign of the perturbed Chern-Simons-Dirac functional $\mathbb{C}L_\omega$ in \mathbf{e}qref{E3.3}, while keeping the orientation fixed; see \cite[Section 22.5]{Bible} for more details. Since we have worked with a ring $\mathbb{N}R$ over $\mathbb{B}F_2$, there is no need to deal with orientations. Our Floer homology $\HM_*(\mathbb{Y})$ is defined using a local coefficient system with fibers $\mathbb{N}R$. Thus the most relevant analogue for closed 3-manifolds is \cite[P.624 (32.2)]{Bible}.
\mathfrak{s}mallskip
The second isomorphism \ref{7.2.2} is induced from a concrete strict cobordism
\[
\mathbb{X}:\mathbb{Y}\coprod e_{\mathbb{T}Sigma}\to \mathbb{Y}',
\]
as we describe now. Similar to the construction in Section \ref{Sec3}, we start with a hexagon $\Omega_1$ whose boundary consists of geodesic segments of length $2$ meeting at internal angles $\mathfrak{p}i/2$, and whose metric is flat near the boundary.
\begin{figure}[H]
\centering
\begin{overpic}[scale=.09]{Pic12.png}
\mathfrak{p}ut(37,41){\mathfrak{s}mall$\mathfrak{g}amma_{12}$}
\mathfrak{p}ut(56,41){\mathfrak{s}mall$\mathfrak{g}amma_{23}$}
\mathfrak{p}ut(53,28){\mathfrak{s}mall$t_{13}$}
\mathfrak{p}ut(27,21){\mathfrak{s}mall$\widehat{Y}$}
\mathfrak{p}ut(68,21){\mathfrak{s}mall$\widehat{Y}$}
\mathfrak{p}ut(41,58){\mathfrak{s}mall$\mathbb{R}_{s_2}\times\Sigma$}
\mathfrak{p}ut(5,46){\mathfrak{s}mall$s_1$}
\mathfrak{p}ut(94,47){\mathfrak{s}mall$s_3$}
\mathfrak{p}ut(15,69){\mathfrak{s}mall$s_2$}
\mathfrak{p}ut(80,69){\mathfrak{s}mall$s_2$}
\mathfrak{p}ut(46,18){\mathfrak{s}mall $\mathfrak{g}amma_{13}$}
\mathfrak{p}ut(27,46){\mathfrak{s}mall$t_{12}$}
\mathfrak{p}ut(67,46){\mathfrak{s}mall$t_{32}$}
\mathfrak{p}ut(48,46){$\Omega_1$}
\mathbf{e}nd{overpic}
\caption{}
\label{Pic12}
\mathbf{e}nd{figure}
The desired cobordism $X$ is then obtained from $\Omega_1\times\Sigma$ by attaching $[-1,1]_t\times Y_-$ to $\mathfrak{g}amma_{13}\times \Sigma$ in Figure \ref{Pic12}, where $Y_-\colonequals \{s\leq -1\}\mathfrak{su}bset \widehat{Y}$. We use $s_1,s_2,s_3$ for spatial coordinates and $t_{12}, t_{13},t_{32}$ for time coordinates; their relations are indicated in Figure \ref{Pic12}.
We have to specify the closed 2-form $\omega_X$ on $X$ in order to construct the cobordism map. Recall that the 2-form $\omega$ on $Y$ is decomposed as
\[
\omega=\overline{\omega}+\chi_1(s)ds\wedge\lambda.
\]
First, pull back $\overline{\omega}$ to $[-1,1]_t\times Y_-$ and extend it over $\Omega_1\times\Sigma$ constantly as the harmonic 2-form $\mu\in \Omega^2_h(\Sigma, i\mathbb{R})$. Second, consider a function $f: (-2,\infty)_s\to \mathbb{R}$ such that $f(s)\mathbf{e}quiv s$ when $s\mathfrak{g}eq -1$ and $f'(s)=\chi_1(s)$ when $s\in (-2,1]$. We shall regard $f$ as a function on $\widehat{Y}$ by extending $f$ constantly when $s\leq -3/2$. Now consider a function $h: \widehat{X}\to \mathbb{R}$ with the following properties:
\begin{itemize}
\item $h\mathbf{e}quiv f$ on a neighborhood $[-1,-1+\mathbf{e}psilon)_{t_{13}}\times \widehat{Y}$ and $\mathbf{e}quiv -f$ on $(1-\mathbf{e}psilon, 1]_{t_{13}}\times \widehat{Y}$;
\item $h\mathbf{e}quiv s_1\mathbf{e}quiv s_2$ on $\mathfrak{g}amma_{12}\times [1,\infty)_{s_1}\times\Sigma$ and $\mathbf{e}quiv -s_3\mathbf{e}quiv s_2$ on $\mathfrak{g}amma_{23}\times [1,\infty)_{s_3}\times \Sigma$;
\item $h\mathbf{e}quiv s_2$ on a neighborhood $(1-\mathbf{e}psilon,1]_{t_{12}}\times \mathbb{R}_{s_2}\times \Sigma$.
\mathbf{e}nd{itemize}
The extension of $h$ over the interior of $X$ can be arbitrary. Finally, set
\[
\omega_X=\overline{\omega}+dh\wedge\lambda \text{ on } \widehat{X}.
\]
The strict cobordism $\mathbb{X}=(X,\omega_X)$ then induces a map:
\[
\alpha_1:\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})\mathbb{X}rightarrow{\Id\otimes 1}\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})\otimes \HM_*(e_{\mathbb{T}Sigma}, \widehat{\mathfrak{s}}_{std})\mathbb{X}rightarrow{\HM_*(\mathbb{X})} \HM_*(\mathbb{Y}',\widehat{\mathfrak{s}}).
\]
Switching the roles of $\mathbb{Y}$ and $\mathbb{Y}'$ produces a strict cobordism $\mathbb{X}': \mathbb{Y}'\coprod e_{\mathbb{T}Sigma}\to \mathbb{Y}$ and a map in the opposite direction:
\[
\alpha_1':\HM_*(\mathbb{Y}',\widehat{\mathfrak{s}})\mathbb{X}rightarrow{\Id\otimes 1}\HM_*(\mathbb{Y}',\widehat{\mathfrak{s}})\otimes \HM_*(e_{\mathbb{T}Sigma'}, \widehat{\mathfrak{s}}_{std})\mathbb{X}rightarrow{\HM_*(\mathbb{X}')} \HM_*(\mathbb{Y},\widehat{\mathfrak{s}}).
\]
It remains to verify that $\alpha_1$ and $\alpha_1'$ are inverse to each other (up to a non-zero scalar). To see this, compose $\mathbb{X}$ with $\mathbb{X}'$ along the common boundary $\mathbb{Y}'$. The resulting cobordism
\[
\mathbb{Y}\ \coprod\ e_{\mathbb{T}Sigma}\ \coprod\ e_{\mathbb{T}Sigma'}\to \mathbb{Y}
\]
induces the gluing map $\alpha$ in Subsection \ref{Subsec3.3}, which is an isomorphism by Theorem \ref{T3.3}. Indeed, one can change the 2-form $\omega_X'\circ \omega_X$ into the standard one in \mathbf{e}qref{E3.1} by adding an exact 2-form. Thus the map $\alpha_1'\circ\alpha_1: \HM_*(\mathbb{Y},\widehat{\mathfrak{s}})\to \HM_*(\mathbb{Y},\widehat{\mathfrak{s}})$ is invertible. The other composition $\alpha_1\circ \alpha_1'$ is dealt with in a similar manner. This completes the proof of Lemma \ref{L7.3}.
\mathbf{e}nd{proof}
\mathfrak{s}ection{Relations with Sutured Floer Homology}\label{Sec8}
Before we prove the fiberedness detection result, Theorem \ref{T7.2}, we make a digression to explain the relation of $\HM_*(\mathbb{Y})$ with the sutured Floer homology $\mathcal{H}M$ \cite{KS} and $\SFH$ \cite{J06,J08}. In the original definition of Gabai \cite[Definition 2.6]{Gabai83}, any 3-manifold with toroidal boundary is a sutured manifold with only toral components. However, such manifolds are not balanced sutured manifolds in the sense of Juh\'{a}sz \cite[Definition 2.2]{J06}; thus the sutured Floer homology, either $\mathcal{H}M$ or $\SFH$, is not defined for this class of sutured manifolds.
One may regard our construction as a natural extension of the existing sutured Floer theory, and ask whether the sutured manifold decomposition theorems, e.g.\ \cite[Theorem 1.3]{J08} and \cite[Proposition 6.9]{KS}, continue to hold in our case. In this section, we will only prove a preliminary result in this direction. Since our Floer homology $\HM_*(\mathbb{Y})$ also relies on the closed 2-form $\omega$, it is not clear to the author whether a general result is available.
It is worth mentioning that the sutured monopole Floer homology $\mathcal{H}M$ introduced by Kronheimer-Mrowka \cite{KS} is defined over $\mathbb{Z}$, but the construction extends naturally to the mod 2 Novikov ring $\mathbb{N}R$, as explained in \cite[Section 2.2]{Sivek12}. We shall work with the latter case.
\mathfrak{su}bsection{Cutting along a surface}
For any $T$-surface $\mathbb{T}Sigma=(\Sigma, g_{\Sigma},\lambda,\mu)$ and any 1-cell $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma)$, we listed a few cohomological conditions on $\omega$ in \ref{P2} and \ref{P3}. In this section, we shall think of them geometrically and work with a more restrictive setup:
\begin{enumerate}[label=(P\arabic*)]
\mathfrak{s}etcounter{enumi}{4}
\item\label{P5} there exists a properly embedded oriented surface $F\mathfrak{su}bset Y$ such that $\mathfrak{p}artial F$ intersects each component of $\Sigma$ in parallel circles, and $i[*_2\lambda]$ is dual to $z[\mathfrak{p}artial F]\in H_1(\Sigma, \mathbb{Z})$ for some $z\in \mathbb{R}$; moreover, $F$ has no closed or disk components. In particular, $\chi(F)\leq 0$.
\item\label{P6} the Poincar\'{e} dual of $-2\mathfrak{p}i i[\omega]\in H^2(Y; \mathbb{R})$ can be represented by a real 1-cycle $\mathbf{e}ta$ that lies on the surface $F$.
\mathbf{e}nd{enumerate}
If $\mathbb{Y}$ satisfies the additional properties \ref{P5}\ref{P6}, then we can cut $Y$ along the surface $F$ to obtain a balanced sutured manifold, denoted by $M(Y,F)$. Let
\[
\mathcal{H}M(M(Y,F))
\]
be the sutured monopole Floer homology of $M(Y, F)$ defined over $\mathbb{N}R$ with trivial coefficient system; cf. \cite[Section 2.2]{Sivek12}.
\begin{theorem}\label{T8.1} For any $T$-surface $\mathbb{T}Sigma$ and any 1-cell $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma)$ satisfying \ref{P5}\ref{P6}, we have
\[
\mathcal{H}M(M(Y,F))\cong \bigoplus_{\langle c_1(\widehat{\mathfrak{s}}),[F]\rangle =x(F)} \HM_*(\mathbb{Y},\widehat{\mathfrak{s}}).
\]
Moreover, $\delta arphi_{\mathbb{Y}}([F])\leq x(F)$ with $\delta arphi_{\mathbb{Y}}$ defined as in Definition \ref{D7.1}.
\mathbf{e}nd{theorem}
For the proof of Theorem \ref{T8.1}, we have to first understand the special case when $F$ is connected and $Y$ fibers over $S^1$ with $F$ as a fiber. Thus $F$ is a genus $g$ surface with $n$ boundary circles, and $Y$ is the mapping torus of a self-diffeomorphism $\mathfrak{p}hi: F\to F$ such that $\mathfrak{p}hi|_{\mathfrak{p}i_0(\mathfrak{p}artial F)}$ has at least two orbits.
\begin{lemma}\label{L8.2} For any $T$-surface $\mathbb{T}Sigma$ and any 1-cell $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma)$, if $F$ is connected and $Y$ is a mapping torus over $F$, then for $\kappa=\mathfrak{p}m [F]$, $\delta arphi_{\mathbb{Y}}(\kappa)=-\chi(F)$ and
\[
\HM(\mathbb{Y}|\kappa)\cong \mathbb{N}R.
\]
\mathbf{e}nd{lemma}
\begin{proof}[Proof of Lemma \ref{L8.2}] If $\chi(F)=0$, then $F$ is an annulus and $Y=[-3,3]_s\times\mathbb{T}^2$ is a product. This case is addressed already in Lemma \ref{L5.2}. We focus on the case when $\chi(F)<0$.
Without loss of generality, let $\kappa=[F]$. Note that $Y$ is irreducible and $F$ minimizes the Thurston norm. We follow the notation and arguments in the proof of Theorem \ref{T7.1}. Let $(\tilde{Y},\tilde{F})$ denote the double of $(Y,F)$; then $\tilde{Y}$ is a mapping torus over $\tilde{F}$ with $g(\tilde{F})\mathfrak{g}eq 2$. By \mathbf{e}qref{E7.2}, we have
\begin{equation}\label{E8.1}
\HM(\tilde{Y},[\tilde{\omega}]|\tilde{F})=\HM(\mathbb{Y}|\kappa)\otimes_\mathbb{N}R \HM(-\mathbb{Y}'| -\kappa).
\mathbf{e}nd{equation}
The rest of the proof is divided into four steps.
\textit{Step } 1. We can arrange the 2-form $\tilde{\omega}$ so that $\langle [\tilde{\omega}], [\tilde{F}]\rangle=0$.
Instead of $\mathbb{Y}\circ_h(-\mathbb{Y}')$, we consider the horizontal composition
\[
(\tilde{Y},\tilde{\omega},\cdots)=\mathbb{Y}\circ_h \mathbb{Y}_1\circ_h (-\mathbb{Y}'),
\]
where $\mathbb{Y}_1=([-3,3]_s\times \Sigma, \omega_1,\cdots)\in \mathbb{A}T(\mathbb{T}Sigma,\mathbb{T}Sigma)$. Here $\omega_1=\mu+ds\wedge\lambda+\omega_2$ and $\omega_2\in \Omega^2_c(Y_1, i\mathbb{R})$ is compactly supported. By Lemma \ref{L5.2}, inserting this extra 1-cell $\mathbb{Y}_1$ does not affect the identity \mathbf{e}qref{E8.1}, but changing the class $[\omega_2]\in H^2(Y_1,\mathfrak{p}artial Y_1; i\mathbb{R})$ can effectively alter the pairing $\langle [\tilde{\omega}], [\tilde{F}]\rangle$. From now on, we shall always assume that $\langle [\tilde{\omega}], [\tilde{F}]\rangle=0$.
\textit{Step } 2. The group $\HM(\tilde{Y},[\tilde{\omega}]|\tilde{F})$ in \mathbf{e}qref{E8.1} has rank $1$.
Since $g(\tilde{F})\mathfrak{g}eq 2$, this computation for mapping tori is \cite[Lemma 4.7]{KS} when $\tilde{\omega}=0$. The general case is not really different. Since $\langle [\tilde{\omega}], [\tilde{F}]\rangle=0$, we can still apply Floer's excision theorem \cite[Theorem 3.1]{KS} to reduce the problem to the case when $\tilde{Y}=\tilde{F}\times S^1$ is a product. The same trick is also used in the proof of Theorem \ref{T8.1} below. Now the statement follows from Lemma \ref{L6.9}.
\textit{Step } 3. We conclude from \mathbf{e}qref{E8.1}, Lemma \ref{L7.3} and \textit{Step } 2 that
\[
\rank_{\mathbb{N}R} \HM(\mathbb{Y}|\kappa)=\rank_{\mathbb{N}R} \HM(\mathbb{Y}|-\kappa)=1.
\]
\textit{Step } 4. By Theorem \ref{T7.1}, $\delta arphi_{\mathbb{Y}}(\kappa)+\delta arphi_{\mathbb{Y}}(-\kappa)=2x(\kappa)$. We have to verify that $\delta arphi_{\mathbb{Y}}(\kappa)=\delta arphi_{\mathbb{Y}}(-\kappa)$. This equality now follows from the symmetry of the graded Euler characteristics:
\[
\SW(Y):\widehat{\mathfrak{s}}\mapsto \chi(\HM_*(\mathbb{Y},\widehat{\mathfrak{s}}));
\]
see Theorem \ref{T3.4}. This function is invariant under the conjugacy of relative \mathfrak{s}pinc structures: $\widehat{\mathfrak{s}}\leftrightarrow \widehat{\mathfrak{s}}^*$ by \cite{MT96,Taubes01}. The computation of $\HM(\mathbb{Y}|\kappa)$ shows that $\chi(\HM_*(\mathbb{Y},\widehat{\mathfrak{s}}))\mathfrak{n}eq 0$ for exactly one $\widehat{\mathfrak{s}}$ with $\langle c_1(\widehat{\mathfrak{s}}),\kappa\rangle\mathfrak{g}eq \delta arphi_{\mathbb{Y}}(\kappa)$. This proves the equality $\delta arphi_{\mathbb{Y}}(\kappa)=\delta arphi_{\mathbb{Y}}(-\kappa)$ and completes the proof of Lemma \ref{L8.2}.
\mathbf{e}nd{proof}
\begin{proof}[Proof of Theorem \ref{T8.1}] In \cite{KS}, the group $\mathcal{H}M(M(Y,F))$ is defined as the monopole Floer homology of a suitable closure of $M(Y,F)$, which can be described as follows.
Consider a 1-cell $\mathbb{Y}_1\in \mathbb{A}T(\mathbb{T}Sigma, \mathbf{e}mptyset)$ such that $Y_1$ is a mapping torus over a connected surface $F_1$. We require that
\begin{itemize}
\item $g(F_1)\mathfrak{g}eq 2$;
\item $[\mathfrak{p}artial F_1]=-[\mathfrak{p}artial F]\in H_1(\Sigma, \mathbb{Z})$, and
\item the 1-cell $\mathbb{Y}_1$ satisfies properties \ref{P5} and \ref{P6} for the surface $F_1$.
\mathbf{e}nd{itemize}
Consider the closed 3-manifold obtained by gluing $\mathbb{Y}$ and $\mathbb{Y}_1$:
\[
(Y_2, \omega_2,\cdots)\colonequals \mathbb{Y}\circ_h\mathbb{Y}_1.
\]
We can arrange that $\mathfrak{p}artial F$ is identical to $\mathfrak{p}artial F_1$ on $\Sigma$; so $Y_2$ contains a closed oriented surface $F_2\colonequals F\cup F_1$ with $g(F_2)\mathfrak{g}eq 2$. Moreover, $F_2$ is connected. Following the shorthands from Definition \ref{D6.4}, the sutured monopole Floer homology of $M(Y,F)$ is then defined as
\[
\mathcal{H}M(M(Y, F))\colonequals \HM_*(Y_2|F_2);
\]
see \cite[Definition 4.3]{KS} and \cite[Section 2.2]{Sivek12}. For the latter group, we have
\[
\HM_*(Y_2|F_2)\cong \HM_*(Y_2,[0]|F_2),
\]
by Remark \ref{R4.5}. Let $c_2=-2\mathfrak{p}i i[\omega_2]$ be the period class of $\omega_2\in \Omega^2(Y_2, i\mathbb{R})$. Using Property \ref{P6}, we can arrange that the Poincar\'{e} dual of $c_2$ is represented by a real 1-cycle $\mathbf{e}ta_2$ lying over $F_2\mathfrak{su}bset Y_2$. Now consider the subgroup
\[
G_y\colonequals \bigoplus_{\langle c_1(\widehat{\mathfrak{s}}), [F]\rangle = y} \HM_*(\mathbb{Y},\widehat{\mathfrak{s}})\mathfrak{su}bset \HM_*(\mathbb{Y}).
\]
Theorem \ref{T3.5} and Lemma \ref{L8.2} then imply that
\begin{equation}\label{E8.2}
\bigoplus_{\langle c_1(\mathfrak{s}), [F_2]\rangle =y+x(F_1)} \HM_*(Y_2, \mathfrak{s},c_2; \mathbb{N}R_{\omega_2})= G_y\otimes_\mathbb{N}R \HM(\mathbb{Y}_1|[F_1])\cong G_y,
\mathbf{e}nd{equation}
for any $y\mathfrak{g}eq \delta arphi_{\mathbb{Y}}([F])$. By the adjunction inequality from Proposition \ref{P4.2}, the left-hand side of \mathbf{e}qref{E8.2} vanishes whenever $y+x(F_1)>x(F_2)$. As a result,
\[
\delta arphi_{\mathbb{Y}}([F])\leq x(F)=-\chi(F).
\]
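In more detail, this bound rests on the additivity $x(F_2)=x(F)+x(F_1)$, which holds here because $F_2=F\cup F_1$ is glued along boundary circles (circles do not affect the Euler characteristic); the following is a sketch of the resulting chain of implications:

```latex
% If y > x(F), then  y + x(F_1) > x(F) + x(F_1) = x(F_2),
% so the left-hand side of (E8.2) vanishes by the adjunction
% inequality of Proposition P4.2, and (E8.2) then forces
G_y = 0 \qquad \text{whenever } y > x(F).
% Since G_y is non-zero for y = \delta arphi_{\mathbb{Y}}([F]),
% the bound \delta arphi_{\mathbb{Y}}([F]) \le x(F) follows.
```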
Setting $y=x(F)$ in \mathbf{e}qref{E8.2}, we obtain
\[
\HM(Y_2,[\omega_2]|F_2)\cong G_{x(F)}.
\]
To complete the proof of Theorem \ref{T8.1}, it remains to verify that
\begin{equation}\label{E8.3}
\HM(Y_2,[0]| F_2)\cong \HM(Y_2, [\omega_2]|F_2).
\mathbf{e}nd{equation}
This isomorphism, which involves only the closed 3-manifold $Y_2$, is similar to the one in \cite[Corollary 3.4]{KS}, except that $\omega_2$ is used for non-exact perturbations here. The proof of \mathbf{e}qref{E8.3} relies on the property that the real 1-cycle $\mathbf{e}ta_2$ that represents $c_2$ lies on the surface $F_2$, so we can pick $\omega_2'\in [\omega_2]$ such that $\omega_2'$ is supported on a tubular neighborhood of $F_2$:
\[
[-1,1]\times F_2\mathfrak{su}bset Y_2.
\]
By identifying $\{\mathfrak{p}m 1\}\times F_2$, $\omega_2'$ becomes a closed 2-form on $F_2\times S^1$. The same process applied to $Y_2\mathfrak{s}etminus [-1,1]\times F_2$ yields another copy of $Y_2$, now equipped with the zero 2-form.
\mathfrak{s}mallskip
As in the proof of \cite[Corollary 3.4]{KS}, we apply Floer's excision theorem \cite[Theorem 3.1]{KS} to obtain that
\[
\HM(Y_2, [\omega_2]|F_2)=\HM(Y_2,[0]|F_2)\otimes_\mathbb{N}R \HM(F_2\times S^1,[\omega_2']|F_2).
\]
By Lemma \ref{L6.9}, $\HM( F_2\times S^1,[\omega_2']|F_2)\cong \mathbb{N}R$. This completes the proof of Theorem \ref{T8.1}.
\mathbf{e}nd{proof}
The proof of Theorem \ref{T8.1} has an immediate corollary:
\begin{corollary}\label{C8.4} For any $T$-surface $\mathbb{T}Sigma$ and any 1-cell $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma)$ satisfying \ref{P5}, $\delta arphi_{\mathbb{Y}}([F])\leq x(F)$.
\mathbf{e}nd{corollary}
\begin{remark} Property \ref{P5} is crucial to the proof of Theorem \ref{T8.1} for the following reason: for the gluing argument to work, we have to make sure that
\begin{enumerate}[label=(\roman*)]
\item\label{i} $[\mathfrak{p}artial F]=-[\mathfrak{p}artial F_1]\in H_1(\Sigma;\mathbb{Z})$;
\item\label{ii} there exist classes $a\in H_2(Y,\mathfrak{p}artial Y;\mathbb{R})$ and $a_1\in H_2(Y_1, \mathfrak{p}artial Y_1;\mathbb{R})$ such that
\[
[\mathfrak{p}artial a]=-[\mathfrak{p}artial a_1]=\text{the Poincar\'{e} dual of }i[*_\Sigma\lambda]\in H_1(\Sigma;\mathbb{R});
\]
\mathbf{e}nd{enumerate}
In general, it is not clear to the author whether $Y$ and $Y_1$ can always be glued. However, Property \ref{P5} reduces \ref{i} and \ref{ii} to a single condition, which is easier to verify in practice.
\mathbf{e}nd{remark}
\mathfrak{su}bsection{Relations with Link Floer Homology} As a special case of Theorem \ref{T8.1}, our construction recovers the monopole knot Floer homology $\mathcal{K}HM$ introduced by Kronheimer and Mrowka \cite{KS}. This statement is also true for the link Floer homology; let us expand on its meaning.
For any link $L=\{L_i\}_{i=1}^n$ inside a closed oriented 3-manifold $Z_0$, consider the balanced sutured manifold
\[
Z_0(L)\colonequals (Z_0\mathfrak{s}etminus N(L), \mathfrak{g}amma)
\]
where $s(\mathfrak{g}amma)\cap \mathfrak{p}artial N(L_i)$ consists of two meridional sutures on $\mathfrak{p}artial N(L_i)$, oriented in opposite ways. The link Floer homology of $(Z_0, L)$ is defined as the sutured monopole Floer homology of $Z_0(L)$:
\[
\LHM(Z_0, L)\colonequals \mathcal{H}M( Z_0(L)),
\]
and we shall work with the mod 2 Novikov ring $\mathbb{N}R$.
On the other hand, pick a meridian $m_i$ for each link component $L_i$ and consider the link complement
\[
Y(Z_0, L)\colonequals Z_0\mathfrak{s}etminus N(L\cup m_1\cup m_2\cup\cdots \cup m_n).
\]
Each $m_i$ bounds a disk in $Z_0$, which becomes an annulus $A_i$ in $Y(Z_0, L)$. Let $F$ be the union $\bigcup_{i=1}^n A_i$ (with any fixed orientation). The balanced sutured manifold $Z_0(L)$ is then obtained from $Y(Z_0, L)$ by cutting along $F$.
To apply Theorem \ref{T8.1}, we have to specify the choice of $\omega$:
\begin{itemize}
\item let $\Sigma=\mathfrak{p}artial Y(Z_0, L)$. Pick a flat metric $g_\Sigma$ and $\lambda\in \Omega^1_h(\Sigma, i\mathbb{R})$ such that $i[*_\Sigma\lambda]$ is dual to $z[\mathfrak{p}artial F]\in H_1(\Sigma, \mathbb{R})$ for some non-zero $z\in \mathbb{R}$;
\item $\mathbf{e}ta$ is a real 1-cycle on $F$ such that $\mathbf{e}ta\cap A_i$ is a segment joining two components of $\mathfrak{p}artial A_i$. Let $-2\mathfrak{p}i i[\omega]\in H^2(Y(Z_0, L), \mathbb{R})$ be the dual of $\mathbf{e}ta$.
\mathbf{e}nd{itemize}
Finally, let $\mathbb{Y}(Z_0, L)=(Y(Z_0, L), \omega,\cdots )\in \mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma)$ be any 1-cell satisfying these properties. As a corollary of Theorem \ref{T8.1}, we have
\begin{corollary}\label{C8.3} For any 1-cell $\mathbb{Y}(Z_0,L)$ constructed above, we have an isomorphism
\[
\HM_*(\mathbb{Y}(Z_0, L))\cong \mathcal{H}M(Z_0(L))=\LHM(Z_0, L).
\]
Moreover, if $ \HM_*(\mathbb{Y}(Z_0, L),\widehat{\mathfrak{s}})\mathfrak{n}eq \{0\}$, then $\langle c_1(\widehat{\mathfrak{s}}), [A_i]\rangle=0$ for any $1\leq i\leq n$.
\mathbf{e}nd{corollary}
\begin{proof}[Proof of Corollary \ref{C8.3}] If $\HM_*(\mathbb{Y}(Z_0, L),\widehat{\mathfrak{s}})\mathfrak{n}eq \{0\}$, then Theorem \ref{T8.1} implies that
\[
\langle c_1(\widehat{\mathfrak{s}}), [F] \rangle\leq \delta arphi_{\mathbb{Y}}([F])\leq x(F)=0.
\]
Applying the same argument for $[-F]$, we conclude that $ \langle c_1(\widehat{\mathfrak{s}}), [F] \rangle=0$. The desired isomorphism then follows from the first part of Theorem \ref{T8.1}.
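The equality $x(F)=0$ used above is immediate from the definition of the Thurston norm complexity, which vanishes on annuli; since $F=\bigcup_{i=1}^n A_i$ is a union of annuli,

```latex
x(F) \;=\; \sum_{i=1}^{n} x(A_i)
      \;=\; \sum_{i=1}^{n} \max\bigl(0,\,-\chi(A_i)\bigr)
      \;=\; 0,
\qquad \text{since } \chi(A_i)=0 \text{ for each annulus } A_i.
```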
It remains to verify the stronger statement: $\langle c_1(\widehat{\mathfrak{s}}), [A_i] \rangle=0$. In the proof of Theorem \ref{T8.1}, we composed $\mathbb{Y}$ with a mapping torus over a connected surface $F_1$. We now allow $F_1$ to be disconnected: take $F_1=(-F)$ and $Y_1=(-F)\times S^1$. When $Y(Z_0, L)$ is glued with $Y_1$, we require that
\begin{itemize}
\item $\mathfrak{p}artial N(L_i\cup m_i)\mathfrak{su}bset \mathfrak{p}artial Y(Z_0, L)$ is identified with $\mathfrak{p}artial (-A_i)\times S^1\mathfrak{su}bset \mathfrak{p}artial Y_1$;
\item $\mathfrak{p}artial A_i\mathfrak{su}bset\mathfrak{p}artial Y(Z_0, L)$ is identified with $\mathfrak{p}artial(-A_i)\times\{pt\}\mathfrak{su}bset \mathfrak{p}artial Y_1$.
\mathbf{e}nd{itemize}
The closed 3-manifold $\mathbb{Y}\circ_h\mathbb{Y}_1$ contains a collection of 2-tori $\mathbb{T}^2_i$, which are doubles of $A_i$'s. Now we use the adjunction inequality in Proposition \ref{P4.2} and Lemma \ref{L5.2} to conclude.
\mathbf{e}nd{proof}
\mathfrak{s}ection{Fiberedness Detection}\label{Sec9}
\mathfrak{su}bsection{Some Preparations} In this section, we complete the proof of Theorem \ref{T7.2}. We start with a preliminary result:
\begin{lemma}\label{L9.1} Under the assumption of Theorem \ref{T7.2}, if $\kappa\in H_2(Y,\mathfrak{p}artial Y;\mathbb{Z})$ is primitive, then $\kappa$ can be represented by a connected Thurston norm minimizing surface $F$ that intersects each component of $\Sigma$ non-trivially. Moreover, $\delta arphi_{\mathbb{Y}}(\kappa)=\delta arphi_{\mathbb{Y}}(-\kappa)$.
\mathbf{e}nd{lemma}
\begin{proof} This lemma follows from \cite[Theorem 4.1 \& Proposition 6.1]{M02}. Let
\[
\mathfrak{p}hi\in H^1(Y;\mathbb{Z})=\Hom(\mathfrak{p}i_1(Y), \mathbb{Z})
\]
be the Poincar\'{e} dual of $\kappa$ and $b_1(\ker\mathfrak{p}hi)$ be the first Betti number of the subgroup $\ker \mathfrak{p}hi\mathfrak{su}bset \mathfrak{p}i_1(Y)$. Then \cite[Proposition 6.1]{M02} states that $\kappa$ can be represented by such a norm-minimizing surface $F$ if
\begin{equation}\label{E9.1}
b_1(\ker \mathfrak{p}hi)<\infty.
\mathbf{e}nd{equation}
To verify that $F$ intersects each component of $\Sigma$ non-trivially, one has to go through the proof of \cite[Proposition 6.1]{M02}. The condition \mathbf{e}qref{E9.1} will follow from \cite[Theorem 4.1]{M02}, if we can verify its assumptions. Consider the set of Alexander classes
\[
\mathcal{D}elta(Y)\colonequals \{ c_1(\widehat{\mathfrak{s}}): \chi(\HM_*(\mathbb{Y},\widehat{\mathfrak{s}}))\mathfrak{n}eq 0\}\mathfrak{su}bset H^2(Y,\mathfrak{p}artial Y;\mathbb{Z}).
\]
By Theorem \ref{T3.4}, $\mathcal{D}elta(Y)$ is precisely the support of the Alexander polynomial of $Y$ and is symmetric about the origin. Since
\[
\rank_\mathbb{N}R \HM_*(\mathbb{Y}|\kappa)=\rank_\mathbb{N}R \HM_*(\mathbb{Y}|-\kappa)=1,
\]
we conclude that $\mathcal{D}elta(Y)\mathfrak{n}eq \mathbf{e}mptyset$ and $\delta arphi_{\mathbb{Y}}(\kappa)=\delta arphi_{\mathbb{Y}}(-\kappa)$. Moreover, the maximum
\[
\max_{a,b\in \mathcal{D}elta(Y)} \mathfrak{p}hi(a-b)
\]
is achieved for exactly one pair of elements $(a,b)$. The other assumption of \cite[Theorem 4.1]{M02} is certified by \cite[Theorem 5.1]{M02}. This completes the proof of Lemma \ref{L9.1}.
\mathbf{e}nd{proof}
\mathfrak{su}bsection{Proof of Theorem \ref{T7.2}} The proof of Theorem \ref{T7.2} is based on the fiberedness detection result for balanced sutured manifolds, adapted to the case of the mod 2 Novikov ring $\mathbb{N}R$:
\begin{theorem}[{\cite[Theorem 6.1]{KS}}]\label{T9.2}Suppose that a balanced sutured manifold $(M,\mathfrak{g}amma)$ is taut and a homology product. Then $(M,\mathfrak{g}amma)$ is a product sutured manifold if and only if $\mathcal{H}M(M,\mathfrak{g}amma)\cong \mathbb{N}R$.
\mathbf{e}nd{theorem}
For the proof of Theorem \ref{T7.2}, it suffices to deal with the case when $\kappa$ is primitive. Let $F$ be the surface given by Lemma \ref{L9.1}. We will address the cases when $\chi(F)<0$ and when $\chi(F)=0$ separately.
\begin{proof}[Proof of Theorem \ref{T7.2} when $\chi(F)<0$] Since $Y$ is irreducible, by Theorem \ref{T7.1} and Lemma \ref{L9.1}, we have
\[
\delta arphi_{\mathbb{Y}}(\kappa)=\delta arphi_{\mathbb{Y}}(-\kappa)=x(F).
\]
Let $(\tilde{Y},\tilde{F})$ be the double of $(Y, F)$; then $g(\tilde{F})\mathfrak{g}eq 2$. Following the notation in the proof of Lemma \ref{L8.2}, we have
\begin{equation}\label{E9.2}
\HM(\tilde{Y},[\tilde{\omega}]|\tilde{F})=\HM_*(\mathbb{Y}|\kappa)\otimes_\mathbb{N}R \HM_*(-\mathbb{Y}'|-\kappa)\cong \mathbb{N}R.
\mathbf{e}nd{equation}
The rest of the proof is divided into six steps.
\textit{Step } 1. We can effectively change the 2-form $\tilde{\omega}$ so that $\langle [\tilde{\omega}], [\tilde{F}]\rangle=0$. This is \textit{Step } 1 in the proof of Lemma \ref{L8.2}.
\textit{Step } 2. Let $\tilde{M}$ be the 3-manifold with boundary obtained by cutting $\tilde{Y}$ along $\tilde{F}$. Write $\mathfrak{p}artial \tilde{M}=\tilde{F}_+\cup \tilde{F}_-$. Then $(\tilde{M},\tilde{F}_+)$ is a homology product, i.e. $H_*(\tilde{F}_+;\mathbb{Z})\to H_*(\tilde{M};\mathbb{Z})$ is an isomorphism. This follows from the fact \cite[Theorem 1]{T98} that the graded Euler characteristic of $\HM_*(\tilde{Y},[\tilde{\omega}])$ recovers the Milnor--Turaev torsion invariant $T(\tilde{Y})$. Then we apply \cite[Proposition 3.1]{Ni09b} and \mathbf{e}qref{E9.2}.
\textit{Step } 3. $\HM(\tilde{Y}| \tilde{F})\cong \mathbb{N}R$.
Let $N\colonequals [-1,1]\times \tilde{F}\mathfrak{su}bset \tilde{Y}$ be a tubular neighborhood of $\tilde{F}$. Then $\tilde{Y}\mathfrak{s}etminus N\cong \tilde{M}$. We claim that $[\tilde{\omega}]$ is represented by a 2-form $\tilde{\omega}_1$ supported in $[-1,1]\times \tilde{F}$. This follows from \textit{Step } 1, \textit{Step } 2 and a diagram chase:
\[
\begin{tikzcd}
H^2(\tilde{Y},\{1\}\times \tilde{F}) \arrow[r] & H^2(\tilde{Y}) \arrow[r,"{[\tilde{\omega}]\mapsto 0}"]& H^2(\tilde{F})\\
H^2(\tilde{Y},\tilde{M}) \arrow[u,"\cong"]\arrow[r,"\cong"]& H^2(N,\mathfrak{p}artial N).
\mathbf{e}nd{tikzcd}
\]
Given such a 2-form $\tilde{\omega}_1$, we use Floer's excision theorem as in the proof of Theorem \ref{T8.1} to deduce that $\HM(\tilde{Y}| \tilde{F})\cong \HM(\tilde{Y},[\tilde{\omega}]|\tilde{F})\cong \mathbb{N}R$.
\textit{Step } 4. Let $(M(Y,F),\mathfrak{g}amma)$ be the balanced sutured manifold obtained by cutting $Y$ along $F$. Then $M(Y,F)$ is a homology product.
Note that $\tilde{M}$ is the double of $M(Y,F)$ along the annuli $A(\mathfrak{g}amma)$. Write $\tilde{M}=M_1\cup M_2$ and $F_i\colonequals M_i\cap \tilde{F}_+$ for $i=1,2$. Then the statement follows by examining the long exact sequences:
\[
\begin{tikzcd}
\cdots\arrow[r] &H_*(F_1\cap F_2)\arrow[d,"\cong"] \arrow[r] & H_*(F_1)\oplus H_*(F_2) \arrow[r]\arrow[d] & H_*(\tilde{F}_+)\arrow[r] \arrow[d,"\cong"] &\cdots\\
\cdots\arrow[r] &H_*(M_1\cap M_2) \arrow[r] & H_*(M_1)\oplus H_*(M_2) \arrow[r] & H_*(\tilde{M}) \arrow[r] &\cdots
\mathbf{e}nd{tikzcd}
\]
By \textit{Step } 2 and the five lemma, the middle vertical map is also an isomorphism.
\textit{Step } 5. If properties \ref{P5} and \ref{P6} hold for $(\mathbb{T}Sigma, \mathbb{Y}, F)$, then Theorem \ref{T7.2} holds.
In this special case, we can use Theorem \ref{T8.1} and Lemma \ref{L9.1} to obtain that
\[
\mathcal{H}M(M(Y,F),\mathfrak{g}amma)\cong \HM(\mathbb{Y}|[F])\cong \mathbb{N}R.
\]
Since $Y$ is irreducible and $F$ is norm-minimizing, $(M(Y,F),\mathfrak{g}amma)$ is taut. By \textit{Step } 4 and Theorem \ref{T9.2}, $M(Y,F)$ is a product sutured manifold.
\textit{Step } 6. Reduce the general case to \textit{Step } 5.
Let $\mathbb{T}Sigma_2$ be another $T$-surface with the same underlying oriented surface as $\mathbb{T}Sigma$, and let $\mathbb{Y}_2\in \mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma_2)$ be a 1-cell with the same underlying 3-manifold as $\mathbb{Y}$. We require that properties \ref{P5} and \ref{P6} hold for $(\mathbb{T}Sigma_2,\mathbb{Y}_2, F)$. The goal is to show that $\HM(\mathbb{Y}_2|[F])\cong \mathbb{N}R$. Let
\[
(\tilde{Y},\tilde{\omega}_2,\cdots)=\mathbb{Y}_2\circ_h (-\mathbb{Y}_2').
\]
By \textit{Step } 1, we may assume that $\langle [\tilde{\omega}_2], [\tilde{F}]\rangle=0$. By \textit{Step } 3, we have
\[
\mathbb{N}R\cong \HM(\tilde{Y}, [\tilde{\omega}]|\tilde{F})\cong \HM(\tilde{Y}|\tilde{F})\cong \HM(\tilde{Y}, [\tilde{\omega}_2]|\tilde{F}).
\]
By \mathbf{e}qref{E9.2}, $\HM(\mathbb{Y}_2|[F])\cong \mathbb{N}R$. Now we use \textit{Step } 5 to complete the proof of Theorem \ref{T7.2} when $\chi(F)<0$.
\mathbf{e}nd{proof}
\begin{proof}[Proof of Theorem \ref{T7.2} when $\chi(F)=0$] In this case, $F$ is an annulus, $\Sigma$ has two components and the double $\tilde{F}$ is a 2-torus. By Lemma \ref{L9.1}, $\delta arphi_{\mathbb{Y}}(\kappa)=\delta arphi_{\mathbb{Y}}(-\kappa)=0$. Our assumptions then imply that $\HM_*(\mathbb{Y})\cong \mathbb{N}R$.
The proof when $\chi(F)<0$ carries over with no essential changes. Let us explain where the differences arise:
\begin{itemize}
\item In \textit{Step } 1, we require instead that $\langle i[\tilde{\omega}], [F]\rangle =a$ for some fixed $a\in\mathbb{R}$ with $a\mathfrak{n}eq 0$ and $|a|<2\mathfrak{p}i$;
\item In \textit{Step } 2, Ni's result \cite[Proposition 3.1b]{Ni09b} is stated for a closed surface with genus $\mathfrak{g}eq 2$; but its proof in \cite[Section 3.3]{Ni09b} relies only on the property of the Milnor-Turaev torsion invariant $T(\tilde{Y})$ (note also that $b_1(\tilde{Y})\mathfrak{g}eq 3$). Thus we can still conclude from \mathbf{e}qref{E9.2} that $(\tilde{M},\tilde{F}_+)$ is a homology product;
\item \textit{Step } 3 is replaced by the isomorphism
\[
\HM(\tilde{Y},[\tilde{\omega}]|\tilde{F})\cong \HM(\tilde{Y},[\tilde{\omega}_2]|\tilde{F})
\]
for any classes $[\tilde{\omega}], [\tilde{\omega}_2]\in H^2(\tilde{Y}, i\mathbb{R})$ with $\langle i[\tilde{\omega}], [F]\rangle =\langle i[\tilde{\omega}_2], [F]\rangle=a$. Then the difference $[\tilde{\omega}]-[\tilde{\omega}_2]$ is represented by a 2-form $\tilde{\omega}_1$ supported in the neighborhood $N=[-1,1]\times \tilde{F}$. As in the proof of the Gluing Theorem \ref{T3.3}, one may adapt Floer's excision theorem \cite[Theorem 3.1]{KS} to the case of a 2-torus using non-exact perturbations.
\mathbf{e}nd{itemize}
The rest of the proof can now proceed with no difficulty. The conclusion says that $Y$ is a mapping torus over an annulus; so $Y=[-3,3]_s\times \mathbb{T}^2$ is in fact a cylinder.
\mathbf{e}nd{proof}
As an immediate corollary of Theorem \ref{T7.2}, we have
\begin{corollary}\label{C9.3} For any $T$-surface $\mathbb{T}Sigma=(\Sigma, g_\Sigma,\lambda,\mu)$ and any 1-cell $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma)$, if $Y$ is connected, irreducible and $\HM_*(\mathbb{Y},\widehat{\mathfrak{s}})\cong \mathbb{N}R$, then $Y=[-1,1]_s\times \mathbb{T}^2$.
\mathbf{e}nd{corollary}
\begin{proof}[Proof of Corollary \ref{C9.3}] Let $\kappa\in H_2(Y,\mathfrak{p}artial Y;\mathbb{Z})$ be any primitive class, i.e., a class that is not a non-trivial multiple of another integral class. By the symmetry of the graded Euler characteristic $\SW(Y)$ in Theorem \ref{T3.4}, the conditions of Theorem \ref{T7.2} are verified with $\delta arphi_{\mathbb{Y}}(\kappa)=\delta arphi_{\mathbb{Y}}(-\kappa)=0$. By Theorem \ref{T7.2}, $Y$ is a mapping torus over an annulus; so $Y=[-1,1]_s\times \mathbb{T}^2$ is a product.
\mathbf{e}nd{proof}
\mathfrak{s}ection{Connected Sum Formulae}\label{Sec10}
Having discussed irreducible 3-manifolds with toroidal boundary, we focus on reducible 3-manifolds in this section and derive the connected sum formulae.
\mathfrak{su}bsection{Connected Sums with 3-Manifolds with Toroidal Boundary}
For $i=1,2$, let $\mathbb{T}Sigma_i$ be any $T$-surface and $\mathbb{Y}_i\in \mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma_i)$ be any 1-cell. Then we can form a 1-cell
\[
\mathbb{Y}_3=(Y_3,\omega_3,\cdots)\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma_1\cup \mathbb{T}Sigma_2)
\] with the following properties:
\begin{enumerate}[label=(C\arabic*)]
\item\label{C1} the underlying 3-manifold $Y_3$ is $Y_1\# Y_2$; let $S^2\mathfrak{su}bset Y_3$ be the 2-sphere separating $Y_1$ and $Y_2$;
\item\label{C2} $[\omega_3]\in H^2(Y_3; i\mathbb{R})$ is determined by the relations $k_i([\omega_3])=l_i([\omega_i])$ for $i=1,2$ in the diagram below. As a result, $\langle [\omega_3], [S^2]\rangle=0$.
\[
\begin{tikzcd}
0 \arrow[r] & H^2(Y_1\# Y_2;i\mathbb{R}) \arrow[r,"{(k_1,k_2)}"] & H^2(Y_1\mathfrak{s}etminus B^3_1;i\mathbb{R})\oplus H^2(Y_2\mathfrak{s}etminus B^3_2;i\mathbb{R})\arrow[r] & H^2(S^2;i\mathbb{R})\\
& &([\omega_1],[\omega_2])\in H^2(Y_1;i\mathbb{R})\oplus H^2(Y_2;i\mathbb{R})\arrow[u,hook,"{l_1\oplus l_2}"]. &
\mathbf{e}nd{tikzcd}
\]
\mathbf{e}nd{enumerate}
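The last claim in \ref{C2} can be read off from the diagram: the restriction of $[\omega_3]$ to the separating sphere agrees with that of $l_1([\omega_1])$, which vanishes because classes pulled back from $H^2(Y_1;i\mathbb{R})$ extend over the removed ball:

```latex
\langle [\omega_3], [S^2] \rangle
  = \langle k_1([\omega_3])\big|_{S^2}, [S^2] \rangle
  = \langle l_1([\omega_1])\big|_{S^2}, [S^2] \rangle
  = 0,
```

since the restriction map $H^2(Y_1;i\mathbb{R})\to H^2(S^2;i\mathbb{R})$ factors through $H^2(B^3_1;i\mathbb{R})=0$.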
Although $\mathbb{Y}_3$ is not uniquely determined by these properties, we say that $\mathbb{Y}_3$ is a connected sum of $\mathbb{Y}_1$ and $\mathbb{Y}_2$. By Theorem \ref{T5.1}, the isomorphism type of $\HM_*(\mathbb{Y}_3)$ is determined by \ref{C1} and \ref{C2}.
\begin{proposition}\label{P10.1} The monopole Floer homology of $\mathbb{Y}_3$ can be computed as follows:
\[
\HM_*(\mathbb{Y}_3)\cong\HM_*(\mathbb{Y}_1)\otimes_\mathbb{N}R\HM_*(\mathbb{Y}_2)\otimes_\mathbb{N}R V,
\]
where $V$ is a 2-dimensional vector space over $\mathbb{N}R$.
\mathbf{e}nd{proposition}
\begin{proof} By Theorem \ref{T5.1}, we can compute the group $\HM_*(\mathbb{Y}_3)$ using a convenient metric on $Y_1\# Y_2$ and a 2-form $\omega_3$. Take a component $\Sigma_i'\mathfrak{su}bset \Sigma_i$ for each $i=1,2$. Consider the 3-manifolds
\[
([-3,3]_s\times \Sigma_i',\ \mu_i|_{\Sigma_i'}+ds\wedge \lambda_i|_{\Sigma_i'}),\qquad i=1,2.
\]
Let $\mathbb{Y}_4=(Y_4,\omega_4,\cdots)\in \mathbb{A}T(\mathbb{T}Sigma_1'\cup \mathbb{T}Sigma_2', \mathbb{T}Sigma_1'\cup \mathbb{T}Sigma_2')$ be their connected sum. The 1-cell $\mathbb{Y}_3$ can then be obtained as the horizontal composition
\[
(\mathbb{Y}_1\coprod\mathbb{Y}_2)\circ_h \mathbb{Y}_4.
\]
Using the Gluing Theorem \ref{T3.3}, we obtain that
\[
\HM_*(\mathbb{Y}_3)\cong \HM_*(\mathbb{Y}_1)\otimes_\mathbb{N}R \HM_*(\mathbb{Y}_2)\otimes_\mathbb{N}R \HM_*(\mathbb{Y}_4).
\]
It remains to verify that $\rank_\mathbb{N}R \HM_*(\mathbb{Y}_4)=2$. Regard $\mathbb{Y}_4$ as a 1-cell in $\mathbb{A}T(\mathbf{e}mptyset, (-\mathbb{T}Sigma_1')\cup (-\mathbb{T}Sigma_2')\cup \mathbb{T}Sigma_1'\cup \mathbb{T}Sigma_2')$ and consider the horizontal composition with
\[
e_{\mathbb{T}Sigma_1'}\in \mathbb{A}T((-\mathbb{T}Sigma_1')\cup \mathbb{T}Sigma_1',\mathbf{e}mptyset) \text{ and }e_{\mathbb{T}Sigma_2'}\in \mathbb{A}T((-\mathbb{T}Sigma_2')\cup \mathbb{T}Sigma_2',\mathbf{e}mptyset).
\]
Another application of Theorem \ref{T3.3} implies that
\[
\HM_*(\mathbb{Y}_4)\cong \HM_*((\Sigma'_1\times S^1)\# (\Sigma'_2\times S^1),[\omega_1']\#[\omega_2']),
\]
where $\omega_i'=\mu_i|_{\Sigma_i'}+d\theta\wedge \lambda_i|_{\Sigma_i'}$ and $\theta$ denotes the coordinate of $S^1$. The class $[\omega_1']\# [\omega_2']$ is obtained from $[\omega_1']$ and $[\omega_2']$ using \ref{C2}. Here we have used the shorthands from Definition \ref{D6.4}. By Lemma \ref{L5.4}, we already know that
\[
\HM_*(\Sigma_i'\times S^1, [\omega_i'])\cong \mathbb{N}R \text{ for }i=1,2.
\]
To conclude, we apply the connected sum formula \cite[Theorem 5]{Lin17} for closed 3-manifolds. As a result, the group $\HM_*(\mathbb{Y}_4)$ is concentrated in a single \mathfrak{s}pinc grading and has rank $2$.
\mathbf{e}nd{proof}
\mathfrak{su}bsection{Connected Sums with Closed 3-Manifolds} Let us first review the definition of $\mathbb{T}HM(Z)$ for a closed 3-manifold $Z$.
\begin{definition}\label{D10.2} For any closed 3-manifold $Z$, we obtain a balanced sutured manifold $(Z(1), \delta)$ by taking $Z(1)=Z\mathfrak{s}etminus B^3$ and the suture $s(\delta)\mathfrak{su}bset \mathfrak{p}artial Z(1)$ to be the equator. Define
\[
\mathbb{T}HM(Z)\colonequals \mathcal{H}M(Z(1), \delta).\mathfrak{q}edhere
\]
\mathbf{e}nd{definition}
Let $\mathbb{T}Sigma$ be any $T$-surface, $\mathbb{Y}\in \mathbb{A}T(\mathbf{e}mptyset, \mathbb{T}Sigma)$ be any 1-cell and $Z$ be any closed 3-manifold. Consider a 1-cell $\mathbb{Y}_5=(Y_5, \omega_5,\cdots)\in \mathbb{A}T(\mathbf{e}mptyset,\mathbb{T}Sigma)$ that satisfies the following properties:
\begin{itemize}
\item the underlying 3-manifold $Y_5$ is $Y\# Z$;
\item $[\omega_5]\in H^2(Y_5;i\mathbb{R})$ is obtained from $[\omega]$ using \ref{C2} and the zero form on $Z$.
\mathbf{e}nd{itemize}
\begin{proposition}\label{P10.3}The monopole Floer homology of $\mathbb{Y}_5$ can be computed as follows:
\[
\HM_*(\mathbb{Y}_5)\cong\HM_*(\mathbb{Y})\otimes_\mathbb{N}R\mathbb{T}HM(Z).
\]
\mathbf{e}nd{proposition}
\begin{proof} Following the proof of Proposition \ref{P10.1}, it suffices to verify that
\begin{equation}\label{E10.1}
\mathbb{T}HM(Z)\cong \HM_*(Z\#(\Sigma'\times S^1), [0]\#[\omega'])
\mathbf{e}nd{equation}
where $\Sigma'$ is a connected component of $\Sigma$ and $\omega'\colonequals \mu|_{\Sigma'}+d\theta\wedge \lambda|_{\Sigma'}$. One can argue as in \textit{Step } 1 of the proof of Lemma \ref{L8.2} and work instead with the 2-form
\[
\omega''\mathbf{e}quiv \mu|_{\Sigma'} \text{ on }S^1\times\Sigma'.
\]
Then the Poincar\'{e} dual of $[\omega'']$ is proportional to the 1-cycle $\{pt\}\times S^1$. Since $[0]\#[\omega'']$ is neither balanced nor negatively monotone for any \mathfrak{s}pinc structure on $Z\#(S^1\times \Sigma')$, one may apply Proposition \ref{P4.2} to compute the right hand side of \mathbf{e}qref{E10.1} using exact perturbations. Now the isomorphism \mathbf{e}qref{E10.1} follows from \cite[Proposition 4.6]{KS}.
\mathbf{e}nd{proof}
\mathbf{e}nd{document} |
\begin{document}
\begin{frontmatter}
\title{A Conversation with Donald B. Rubin}
\runtitle{A Conversation with D. B. Rubin}
\begin{aug}
\author[a]{\fnms{Fan}~\snm{Li}\ead[label=e1]{[email protected]}}
\and
\author[b]{\fnms{Fabrizia}~\snm{Mealli}\corref{}\ead[label=e2]{[email protected]}}
\runauthor{F. Li and F. Mealli}
\affiliation{Duke University and University of Florence}
\address[a]{Fan Li is Assistant Professor,
Department of Statistical Science, Duke University, Durham, North Carolina 27708-0251, USA \printead{e1}.}
\address[b]{Fabrizia Mealli is Professor, Department of
Statistics, Computer Science, Applications, University of Florence, Viale Morgagni 59, Florence 50134, Italy \printead{e2}.}
\end{aug}
\begin{abstract}
Donald Bruce Rubin is John L. Loeb Professor of
Statistics at Harvard University. He has made fundamental contributions to
statistical methods for missing data, causal inference, survey sampling,
Bayesian inference, computing and applications to a wide range of
disciplines, including psychology, education, policy, law, economics,
epidemiology, public health and other social and biomedical sciences.
\end{abstract}
\end{frontmatter}
Don was born in Washington, D.C. on December 22, 1943, to Harriet and Allan
Rubin. One year later, his family moved to Evanston, Illinois, where he
grew up. He developed a keen interest in physics and mathematics in high
school. In 1961, he went to college at Princeton University, intending to
major in physics, but graduated in psychology in 1965. He began graduate
school in psychology at Harvard, then switched to Computer Science (MS,
1966) and eventually earned a Ph.D. in Statistics under the direction of Bill
Cochran in 1970. After graduating from Harvard, he taught for a year in
Harvard's Department of Statistics, and then in 1971 he began working at
Educational Testing Service (ETS) and served as a visiting faculty member
at Princeton's new Statistics Department. He held several visiting academic
appointments in the next decade at Harvard, UC Berkeley, University of
Texas at Austin and the University of Wisconsin at Madison. He was a full
professor at the University of Chicago in 1981--1983, and in 1984 moved back
to the Harvard Statistics Department, where he remains today, and where
he served as chair from 1985 to 1994 and from 2000 to 2004.
Don has advised or coadvised over 50 Ph.D. students, written or edited 12
books, and published nearly 400 articles. According to Google Scholar, by
May 2014, Rubin's academic work had accrued 150,000 citations, 16,000 in 2013
alone, placing him among the most cited scholars in the world.
For his many contributions, Don has been honored by election to Membership
in the US National Academy of Sciences, the American Academy of Arts and
Sciences, the British Academy, and Fellowship in the American Statistical
Association, Institute of Mathematical Statistics, International
Statistical Institute, Guggenheim Foundation, Humboldt Foundation and
Woodrow Wilson Society. He has also received the Samuel S. Wilks Medal from
the American Statistical Association, the Parzen Prize for Statistical
Innovation, the Fisher Lectureship and the George W. Snedecor Award of the
Committee of Presidents of Statistical Societies. He was named Statistician
of the Year by the American Statistical Association's Boston and Chicago
Chapters. In addition, he has received honorary degrees from Bamberg
University, Germany and the University of Ljubljana, Slovenia.
Besides being a statistician, he is a music lover, audiophile and fan of
classic sports cars.
This interview was initiated on August 7, 2013, during the Joint
Statistical Meetings 2013 in Montreal, in anticipation of Rubin's 70th
birthday, and completed at various times over the following months.
\section*{Beginnings}
\textbf{Fan:} Let's begin with your childhood. I understand you grew up in
a family of lawyers, which must have heavily influenced you intellectually.
Can you talk a little about your family?
\textbf{Don:} Yes. My father was the youngest of four brothers, all of whom
were lawyers, and we used to have stimulating arguments about all sorts of
topics. Probably the most argumentative uncle was Sy (Seymour Rubin, senior
partner at Arnold, Fortas and Porter, diplomat, and professor of law at
American University), from D.C., who had framed personal letters of thanks
for service from all the presidents starting with Harry Truman and going
through Jerry Ford, as well as from some contenders, such as Adlai
Stevenson, and various Supreme Court Justices. I found this impressive but
daunting. The relevance of this is that it clearly created in me a deep
respect for the principles of our legal system, to which I find statistics
highly relevant---this has obviously influenced my own application of
statistics to law, for example, concerning issues as diverse as the death
penalty, affirmative action and the tobacco litigation.
\textbf{Fabri:} We will surely get back to these issues later, but was
there anyone else who influenced your interest in statistics?
\textbf{Don:} Probably the most influential was Mel, my mother's brother, a
dentist (then a bachelor). He loved to gamble small amounts, either in the
bleachers at Wrigley Field, betting on the outcome of the next pitch, while
watching the Cubs lose, or at Arlington Race track, where I was taught at a
young age how to read the Racing Form and estimate the ``true'' odds from
the various displayed betting pools, while losing two dollar bets.
Wednesday and Saturday afternoons, during the warm months when I was a
preteen, were times to learn statistics---even if at various bookie joints
that were sometimes raided. As I recall, I was a decent student of his, but
still lost small amounts.
There were two other important influences on my statistical interests from
the late 1950s and early 1960s. First, there was an old friend of my
father's from their government days together, a Professor Emeritus of
Economics at UC Berkeley, George Mehren, with whom I had many entertaining
and educational (to me) arguments, which generated a respect for economics
that continues to grow to this day. And second, my wonderful teacher of
physics at Evanston Township High School---Robert Anspaugh---who tried to
teach me to think like a real scientist, and how to use mathematics in the
pursuit of science.
By the time I left high school for college, I appreciated some statistical
thinking from gambling, some scientific thinking from physics, and I had
deep respect for disciplines other than formal mathematics, in particular,
physics and the law. These, in hindsight, are exposures that were crucial
to the kind of statistics to which I gravitated in my later years. More
details of the influence of my mentors can be found in Rubin (\citeyear{Rub14N2}).
\begin{figure}
\caption{Five-year old D. B. Rubin.}
\end{figure}
\section*{College Time at Princeton}
\textbf{Fan:} You entered Princeton in 1961, first as a physics major, but
later changed to psychology. Why the change and why psychology?
\textbf{Don:} That's a good question. Inspired by Anspaugh, I wanted to
become a physicist. I was lined up for a BA in three years when I entered
Princeton, and unknown to me before I entered, also lined up for a crazy
plan to get a Ph.D. in physics in five years, in a program being reconditely
planned by John Wheeler, a very well-known professor of physics there (and
Richard Feynman's Ph.D. advisor years earlier). In retrospect, this was a
wildly over-ambitious agenda, at least for me. For a combination of
complications, including the Vietnam War (and its associated drafts) and
Professor Wheeler's sabbatical at a critical time, I think no one succeeded
in completing a five-year Ph.D. from entry. In any case, there were many kids
like me at Princeton then, who, even though primarily interested in math
and physics, were encouraged to explore other subjects. I did that, and one
of the courses I took was on personality theory, taught by a wonderful
professor, Silvan Tomkins, who later became a good friend. At the end of my
second year, I switched from Physics\vadjust{\goodbreak} to Psychology, where my mathematical
and scientific background seemed both rare and appreciated---it was an
immature decision (not sure what a mature one would have been), but a fine
one for me because it introduced me to some new ways of thinking, as well
as to new fabulous academic mentors.
\textbf{Fabri:} You had some computing skills which were uncommon then,
right? So you started to use computers quite early.
\textbf{Don:} Yes. Sometime between my first and second year at Princeton,
I taught myself Fortran. As you mentioned, those skills were not common,
even at places like Princeton then.
\textbf{Fabri:} Was learning Fortran just a matter of having fun or did you
actually use these skills to solve problems?
\textbf{Don:} It was for solving problems. When I was in the Psychology
Department, I was helping to support myself by coding some of the early
batch computer packages for PSTAT, a Princeton statistical software
package, which competed with BMDP of UCLA at the time. I also wrote various
programs for simulating human behavior.
\textbf{Fan:} In your senior year at Princeton, you applied for Ph.D.
programs in psychology and were accepted by several very good places.
\textbf{Don:} Yes, I~was accepted by Stanford, Michigan and Harvard. I met
some extraordinary people during my visits to these programs. I went out to
Stanford first, and met William Estes, a quiet but wonderful professor with
strong mathematical skills and a wry wit, who later moved to Harvard.
Michigan had a very strong mathematical psychology program, and when I
visited in the spring of 1965, I~was hosted primarily by a very promising
graduating Ph.D. student, Amos Tversky, who was doing extremely interesting
work on human behavior and how people handled risks. In later years, he
connected with another psychologist, Daniel Kahneman, and they wrote a
series of extremely influential papers in psychology and economics, which
eventually led to Kahneman's winning the Nobel Prize in Economics in 2002;
Tversky passed away in 1996 and was thus not eligible for the Nobel Prize.
Kahneman (who recently was awarded a National Medal of Science by President
Obama) always acknowledges that the Nobel Prize was really a joint award
(to Tversky and him). I~was on a committee sometime last year with
Kahneman, and it was interesting to find out that I had known Tversky
longer than he had.
\textbf{Fan:} But ultimately you chose Harvard.
\textbf{Don:} Well, we all make strange decisions. The reason was that I
had an east-coast girlfriend who had another year in college.
\section*{Graduate Years at Harvard}
\textbf{Fabri:} You first arrived at Harvard in 1965 as a Ph.D. student in
psychology, which was in the Department of Social Relations then, but were
soon disappointed, and switched to computer science. What happened?
\textbf{Don:} When I visited Harvard in the summer of 1965, some senior
people in Social Relations appeared to find my background, in subjects like
math and physics, attractive, so they promised me that I could skip some of
the basic more ``mathy'' requirements. But when I arrived there, the chair
of the department, a sociologist, told me something like, ``No, no, I
looked over your transcript and found your undergraduate education
scientifically deficient because it lacked `methods and statistics'
courses. You will have to take them now or withdraw.'' Because of all the
math and physics that I'd had at Princeton, I felt insulted! I had to get
out of there. Because I had independent funding from an NSF graduate
fellowship, I looked around. At the time, the main applied math appeared
being done in the Division of Engineering and Applied Physics, which
recently became the Harvard's ``School of Engineering and Applied
Sciences.'' The division had several sections; one of them was computer
science (CS), which seemed happy to have me.
\textbf{Fan:} But you got bored again soon. Was this because you found the
problems in CS not interesting or challenging enough?
\textbf{Don:} No, not really that. There were several reasons. First, there
was a big emphasis on automatic language translation, because it was cold
war time, and it appeared that CS got a lot of money for computational linguistics from ARPA (Advanced
Research Projects Agency), now known as DARPA. The Soviet Union, from
behind the iron curtain, produced a huge number of documents in Russian,
but evidently there were not enough people in the US to translate them. A
complication is that there are sentences that you could not translate
without their context. I still remember one example: ``Time flies fast,''
a~three-word sentence that has three different meanings depending on which of
the three words is the verb. If this three-word sentence cannot be
automatically translated, how can one get an automatic (i.e., by computer)
translation of a complex paragraph? Related to this was Noam Chomsky's work
on transformational grammars, down the river at MIT.
Second, although I found some real math courses and the ones in CS on mathy
topics, such as computational complexity, which dealt with Turing machines,
G\"odel's theorem, etc., interesting, I found many of the courses dull. Much
of the time they were about programming. I remember one of my projects was
to write a program to project 4-dimensional figures into 2-dimensions, and
then rotate them using a DEC \mbox{PDP-1}. It took an enormous number of hours.
Even though my program worked perfectly, I felt it was a gigantic waste of
time. I also got a $\mathrm{C}+$ in that course because I never went to any
of the classes. Now, having dealt with many students, I would be more
sympathetic that I deserved a $\mathrm{C}+$, but not when I was a kid. At
that time, I~figured there must be something better to do than rotating 4D
objects and getting a $\mathrm{C}+$. But marching through rice paddies in
Vietnam or departing for somewhere in Canada didn't seem appealing. So
after picking up an MS degree in CS in 1966, although I stayed another year
in CS, I was ready to try something else.\looseness=-1
\textbf{Fabri:} How did statistics end up in your path?
\textbf{Don:} A summer job in Princeton in 1966 led to it. I did some
programming for John Tukey in Fortran, LISP and COBOL. I also did some
consulting for a Princeton sociology professor, Robert Althauser, basically
writing programs to do matched sampling, matching blacks and whites, to
study racial disparity in dropout rates at Temple University. I had a
conversation with Althauser about how psychology and then CS weren't
working out for me at Harvard. Because Bob was doing some semi-technical
things in sociology, he knew of Fred Mosteller, although not personally,
and also knew that Harvard had a decade-old Statistics Department that was
founded in 1957. He suggested that I contact Mosteller. After getting back
to Harvard, I~talked to Fred, and he suggested that I take some stat
courses. So in my third year in Harvard, I took mostly stat courses and did
OK in them. And the Stat department said ``Yes'' to me. It also helped to
have my own NSF funding, which I had from the start; they kept renewing for
some reason, showing their bad taste probably, but it worked out well for
me. Anyway, at the end of my third year at Harvard, I had switched to
statistics, my third department in four years.
\textbf{Fabri:} Besides Mosteller, who else was on the statistics faculty
then? It was a quite new department, as you said.
\textbf{Don:} The other senior people were Bill Cochran and Art Dempster,
who had recently been promoted to tenure. The junior ones were Paul
Holland; Jay Goldman, a probabilist; and Shulamith Gross from Berkeley, a
student of Erich Lehmann's.
\textbf{Fabri:} And you decided to work with Bill.
\textbf{Don:} Actually, I first talked to Fred. Fred always had a lot of
projects going; one was with John Tukey and he proposed that I work on it.
I told him that I had this matched sampling project of my own, and he
suggested that I talk to Cochran---Cochran a few years earlier was an
advisor for the Surgeon General's report on smoking and lung cancer. It was
obviously based on observational data, not on randomized experiments, and
Fred said that Cochran knew all about these issues in epidemiology and
biostatistics. So I went to knock on Bill's door. He answered with a grumpy\vadjust{\goodbreak}
sounding ``yes,'' I went in and he said, ``No, not now, later!'' So I
thought ``Hmmm, rough guy,'' but actually he was a sweetheart, with a great
Scottish dry sense of humor and a love of scotch and cigarettes (I
understand the former, although not the latter).
\textbf{Fabri:} Cochran did have a lasting influence on you, right?
\textbf{Don:} Yes, he had a tremendous influence on me. Once I was doing
some irrelevant math on matching, which I now see popping up again in the
literature. I~showed that to Bill, and he asked me, ``Do you think that's
important, Don?'' I said, ``Well, I don't know.'' Then he said, ``It is not
important to me. If you want to work on it, go find someone else to advise
you. I care about statistical problems that matter, not about making things
epsilon better.'' Another person who was very influential was Art Dempster.
Once I did some consulting for Data Text, a collection of batch computer
programs like PSTAT or BMDP. I was designing programs to calculate analyses
of variance, do regressions, ordinary least squares, matrix inversions, all
when you have, in hindsight, limited computing power. For advice on some of
those I talked to Dempster, who always has great multivariate insights
based on his deep understanding of geometry---very Fisherian.
\textbf{Fan:} Your Ph.D. thesis was on matching, which is the start of your
life-long pursuit of causal inference. How did your interest in causal
inference start?
\textbf{Don:} When I worked with Althauser on the racial disparity problem,
I~always\vadjust{\goodbreak} emphasized to him that it was inherently descriptive, not really
causal. I~remembered enough from my physics education in high school and
Princeton that association is not causation. So I was probably not
intrigued by causal inference per se, but rather by the confusion that the
social scientists had about it. You have to describe a real or hypothetical
experiment where you could intervene, and after you intervene, you see how
things change, not in time but between intervention (i.e., treatment)
groups. If you are not talking about intervention, you can't talk about
causality. For some reason, when I look at old philosophy, it seems to me
that they didn't get it right, whereas in previous centuries, some
experimenters got it. They bred cows, or mated hunting falcons. If you
mated excellent female and male falcons, the resulting next generation of
falcons would generally be better hunters than those resulting from random
mating. In the 20th century, many scientists and experimentalists got it.
\textbf{Fabri:} So you were only doing descriptive comparisons in your Ph.D.
thesis, and the notation of potential outcomes was not there.
\textbf{Don:} Partly correct. At that time, the notation of potential
outcomes was in my mind, because that is the way that Cochran initiated
discussions of randomized experiments in the class he taught in 1968.
Initially, it was all based on randomization, unbiasedness, Fisher's test,
etc. But the concepts had to be flipped into ordinary least squares (OLS)
regression and analysis of variance tables, because nobody could compute
anything difficult back then. One of the lessons in Bill's class in
regression and experimental design was to use the abbreviated Doolittle
method to invert matrices, by hand! So you really couldn't do randomization
tests in any generality. The other reason I was interested in experiments
and social science was my family history. There was always this legal
question lurking: ``But for this alleged misconduct, what would have
happened?''
\textbf{Fan:} What was your first job after getting your Ph.D. degree in
1970?
\textbf{Don:} I stayed at Harvard for one more year, as an instructor in
the Statistics Department, partly supported by teaching, partly supported
by the Cambridge Project, which was an ARPA funded Harvard--MIT joint
effort; the idea was to bring the computer science technologies of MIT and
the social sciences research of Harvard together to do wonderful things in
the social sciences. In the Statistics Department, I was coteaching with
Bob Rosenthal the ``Statistics for Psychologists''\vadjust{\goodbreak} course that, ironically,
the Social Relations Department wanted me to take five years earlier,
thereby driving me out of their department! Bob had, and has, tremendous
intuition for experimental design and other practical issues, and we have
written many things together.
\begin{figure}
\caption{D. B. Rubin (on left) with his puppy friend Thor (on right), about 1967.}
\end{figure}
\section*{The ETS Decade: Missing Data, EM and Causal Inference}
\textbf{Fan:} After that one year, you went for a position at ETS in
Princeton instead of a junior faculty position in a research university. It
was quite an unusual choice, given that you could probably have found a
position in a respected university statistics department easily.
\textbf{Don:} Right---many people thought I was goofy. I~did have several
good offers, one was to stay at Harvard, and another was to go to
Dartmouth. But I met Al Beaton, who was later my boss at ETS in Princeton,
at a conference in Madison, Wisconsin, and he offered me a job, which I
took. Al had a doctorate in education at Harvard, and had worked with
Dempster on computational issues, such as the ``sweep operator.'' He was a
great guy with a deep understanding of practical computing issues. Also, he
appreciated my research. Because I was an undergrad at Princeton, it was
almost like going home. For several years, I taught one course at
Princeton. Between the jobs at ETS and Princeton, I~was earning twice what
the Harvard salary would have been, which allowed me to buy a house on an
acre and a half, with a garage for rebuilding an older Mercedes roadster,
etc. A different style of life from that in Cambridge.
\textbf{Fan:} You seem to have had a lot of freedom to pursue research at
the ETS. What was your responsibility at ETS?
\textbf{Don:} The position at ETS was like an academic position with
teaching responsibilities replaced by consulting on ETS's social science
problems, including psychological and educational testing ones. I~found
consulting much easier for me than teaching, and ETS had interesting
problems. Also there were many very good people around, like Fred Lord, who
was highly respected in psychometrics. The Princeton faculty was great,
too: Geoffrey Watson (of the Durbin--Watson statistic) was the chair; Peter
Bloomfield was there as a junior faculty member before he moved to North
Carolina; and of course Tukey was still there, even though he spent a lot
of time at Bell Labs. John was John, having a spectacular but very unusual
way of thinking---obviously a genius. Stuart Hunter was in the Engineering
School then. These were fine times for me, with tremendous freedom to
pursue what I regarded as important work.
\textbf{Fabri:} By any measure, your accomplishments in the ETS years were
astounding. In 1976, you published the paper ``Inference and Missing Data''
in Biometrika (Rubin, \citeyear{Rub76}) that lays the foundation for modern analysis of
missing data; in 1977, with Arthur Dempster and Nan Laird, you published
the EM paper ``Maximum Likelihood from Incomplete Data via the EM
Algorithm'' in JRSS-B (Dempster, Laird and Rubin, \citeyear{DemLaiRub77}); in 1974, 1977,
1978, you published a series of papers that lay the foundation for the
Rubin Causal Model (Rubin, \citeyear{Rub74}, \citeyear{Rub77}, \citeyear{Rub78N1}). What was it like for you at
that time? How come so many groundbreaking ideas exploded in your mind at
the same time?
\textbf{Don:} Probably the most important reason is that I always worried
about solving real problems. I didn't read the literature to uncover a hot
topic to write about. I always liked math, but I never regarded much of
mathematical statistics as real math---much of it is just so tedious. Can
you keep track of these epsilons?
\textbf{Fabri:} There is no coincidence that all these papers share the
common theme of missing data.
\textbf{Don:} That's right. That theme arose when I was a graduate student.
The first paper I wrote on missing data, which is also my first
sole-authored paper, was on analysis of variance designs, a quite
algorithmic paper. It was always clear to me, from the experimental design
course from Cochran that you should set up experiments as missing data
problems, with all the potential outcomes under the not-taken treatments
missing. But nobody did observational studies that way, which seemed very
odd to me. Indeed, nobody was using potential outcomes outside the context
of randomized experiments, and even there, most writers dropped potential
outcomes in favor of least squares when actually doing things.
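The setup Don describes, in which each unit's outcome under the not-taken treatment is simply a missing value, can be sketched in a few lines. This is a minimal simulation in Python, not anything from the interview; all the numbers and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# The "science": both potential outcomes exist for every unit,
# but in any real study one of the two columns is always missing.
y0 = rng.normal(loc=10.0, scale=2.0, size=n)   # outcome under control
y1 = y0 + 3.0 + rng.normal(scale=1.0, size=n)  # outcome under treatment

# Assignment mechanism: a completely randomized experiment.
z = rng.integers(0, 2, size=n)                 # 1 = treated, 0 = control

# Observed data: the not-taken treatment's outcome is missing.
y_obs = np.where(z == 1, y1, y0)

# Under randomization, the difference in observed means estimates
# the average causal effect E[Y(1) - Y(0)].
est = y_obs[z == 1].mean() - y_obs[z == 0].mean()
true_effect = (y1 - y0).mean()
print(round(est, 2), round(true_effect, 2))
```

The point of the sketch is only that randomization makes the missingness ignorable, so a comparison of observed group means recovers the average effect; under other assignment mechanisms the same comparison can be badly biased.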
\textbf{Fan:} What was the state of research on missing data before you
came into the scene?
\textbf{Don:} It was extremely ad hoc. The standard approach to missing
data then was comparing the biases of filling in the means, or of
regression imputation under different situations, but almost always under
an implicit ``missing completely at random'' assumption. The purely
technical sides of these papers are solid. But I found there were always
counter examples to the propriety of the specific methods being considered,
and to explore them, one almost needed a master's thesis for each
situation. I would rather address the class of problems with some
generality. There is a mechanism that creates missing data, which is
critical for deciding how to deal with the missing data. That idea of
formal indicators for missing data goes way back in the contexts of
experimental design and survey design. I am consistently amazed how this
was not used in observational studies until I did so in the 1970s; maybe
someone did, but I've looked for years and haven't found anything. But
probably because the missing data paper was done in a relatively new way, I had great
difficulty in getting it published (more details in
Rubin, \citeyear{Rub14N1}).
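The claim that the mechanism creating the missing data is critical can be illustrated with a small simulation. This is a hedged sketch in Python, not from the interview; the logistic missingness model is just one convenient choice:

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(50.0, 10.0, 100_000)

# Missing completely at random: every value is missing with
# the same probability, so the observed mean stays unbiased.
mcar_obs = y[rng.random(y.size) < 0.5]

# Not at random: larger values are more likely to be missing,
# so the observed mean is biased downward.
p_miss = 1.0 / (1.0 + np.exp(-(y - 50.0) / 5.0))
mnar_obs = y[rng.random(y.size) > p_miss]

print(round(mcar_obs.mean(), 1), round(mnar_obs.mean(), 1))
```

Under the first mechanism the complete-case mean sits near the true mean of 50; under the second it falls well below it, even though the underlying data are identical.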
\textbf{Fan:} The EM algorithm is another milestone in modern statistics;
it is also relevant in computer science and one of the most important
algorithms in data mining. Though similar ideas had been used in several
specific contexts before, nobody had realized the generality of EM. How did
Dempster, Laird and you discover the generality?
\textbf{Don:} In those early years at ETS, I had the freedom to remain in
close contact with the Harvard people, Cochran, Dempster, Holland and
Rosenthal, which was very important to me. I always enjoyed talking to
Dempster, who is a very principled and deep thinker. I was able to arrange
some consulting projects at ETS to bring him to Princeton. Once we were
talking about some missing data problem, and we started discussing filling
these values in, but I knew it wouldn't work in generality. I pointed to a
paper by Hartley and Hocking (\citeyear{HarHoc71}), where they deserted the approach of
iteratively filling in missing values, as in Hartley (\citeyear{Har56}) for the counted
data case, and went to Newton--Raphson, I~think, in the normal case. Even
though aspects of EM were known for years, and Hartley and others were sort
of nibbling around the edges of EM, apparently nobody put it all together
as a general algorithm. Art and I realized that you have to fill in
sufficient statistics. I had all these examples like t distributions,
factor analysis (the ETS guys loved that), latent class models. And Art had
a great graduate student, Nan Laird, available to work on parts of it, and
we started writing it up. The EM paper was accepted right away by JRSS-B,
even with invited discussions.
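The idea of alternating between filling in expectations of the complete-data sufficient statistics and then maximizing can be sketched for a two-component normal mixture, where the missing data are the component labels. This is an illustrative Python sketch; the data and starting values are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data from a two-component normal mixture.
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

# Initial guesses: mixing weight, two means, two variances.
w, mu1, mu2, v1, v2 = 0.5, -1.0, 1.0, 1.0, 1.0

for _ in range(200):
    # E-step: expected component memberships, i.e., the "filled-in"
    # sufficient statistics for the missing labels. The common
    # normalizing constant cancels in the ratio.
    d1 = w * np.exp(-(x - mu1) ** 2 / (2 * v1)) / np.sqrt(v1)
    d2 = (1 - w) * np.exp(-(x - mu2) ** 2 / (2 * v2)) / np.sqrt(v2)
    r = d1 / (d1 + d2)

    # M-step: complete-data maximum likelihood given those expectations.
    w = r.mean()
    mu1 = (r * x).sum() / r.sum()
    mu2 = ((1 - r) * x).sum() / (1 - r).sum()
    v1 = (r * (x - mu1) ** 2).sum() / r.sum()
    v2 = ((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum()

print(round(w, 2), round(mu1, 2), round(mu2, 2))
```

Each pass fills in expectations rather than hard values, which is exactly the point Don makes about filling in sufficient statistics instead of the missing data themselves.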
\textbf{Fan:} Now let's talk more about causal inference. You are known for
proposing the general potential outcome framework. It was Neyman who first
mentioned the notation of potential outcomes in his Ph.D. thesis (Neyman,
\citeyear{Spl90}), but the notation seemed to have long been neglected.
\textbf{Don:} Yes, it was ignored outside randomized experiments. Within
randomized studies, the notion became standard and was used, for example, in
Kempthorne's work, but as I mentioned earlier, ignored otherwise.
\textbf{Fan:} Were you aware of Neyman's work before?
\textbf{Don:} No. I wasn't aware of his work defining potential outcomes
until 1990 when his Ph.D. thesis was translated into English, although I
attributed much of the perspective to him because of his work on surveys in
Neyman (\citeyear{Ney34}) and onward (see Rubin, \citeyear{Rub90N1}, followed by Rubin, \citeyear{Rub90N2}).
\textbf{Fabri:} You actually met Neyman when you visited Berkeley in the
mid-1970s. During all those lunches, had you ever discussed causal
inference and potential outcomes with him?
\textbf{Don:} I did. In fact, I had an office right next to his. Neyman
came to Berkeley in the late 30s. He was very impressive, not only as a
mathematical statistician, but also as an individual. There was a
tremendous aura about him. Shortly after arriving in Berkeley, I gave a
talk on missing data and causal inference. The next day, I went to lunch
with Neyman and I said something like, ``It seems to me that formulating
causal problems in terms of missing potential outcomes is an obvious thing
to do, not just in randomized experiments, but also in observational
studies.'' Neyman answered to the effect that (remarkable in hindsight
because he did so without acknowledging that he was the person who first
formulated potential outcomes), ``No, causality is far too speculative in
nonrandomized settings.'' He repeated something like this quote from his
biography, ``$\dots$Without randomization an experiment has little value
irrespective of the subsequent treatment.'' (Also see my comment on this
conversation in Rubin, \citeyear{Rub10}.) Then he went on to say politely but firmly,
``Let's not talk about that, let's instead talk about astronomy.'' He was
very into astronomy at the time.
\textbf{Fabri:} You probably learned the reasons why he was so involved in
the frequentist approach.
\textbf{Don:} Yes. I remember we once had a conversation about what
confidence intervals really meant and why the formal Neyman--Pearson
approach seemed irrelevant to me. He said something like, ``You
misinterpret what we have done. We were doing the mathematics; go back and
read my 1934 paper where I first defined a confidence interval.'' He
defined it as a procedure that has the correct coverage for all prior
distributions (see page 589, Neyman, \citeyear{Ney34}). If you think of that, you are
forced to include all point mass priors and, therefore, you are forced to
do Neyman--Pearson. He went on to say (approximately), ``If you are a real
scientist with a class of problems to work on, you don't care about all
point-mass priors, you only care about the priors for the class of problems
you will be working on. But if you are doing the mathematics, you can't
talk about the problems you or anyone is working on.'' I tried to make this
point in a comment (Rubin, \citeyear{Rub95}), but it didn't seem to resonate to
others.
\textbf{Fabri:} In his famous 1986 JASA paper, Paul Holland coined the term
``Rubin Causal Model (RCM),'' referring to the potential outcome framework
to causal inference (Holland, \citeyear{Hol86}). Can you explain why, if you think so,
the term ``Rubin Causal Model'' is a fair description of your contribution
to this topic?
\textbf{Don:} Actually Angrist, Imbens and I had a rejoinder in our 1996
JASA paper (Angrist, Imbens and Rubin, \citeyear{AngImbRub96}), where we explain why we think
it is fair. Neyman is pristinely associated with the development of
potential outcomes in randomized experiments, no doubt about that. But in
the 1974 paper (Rubin, \citeyear{Rub74}), I made the potential outcomes approach for
defining causal effects front and center, not only in randomized
experiments, but also in observational studies, which apparently had never
been done before. As Neyman told me back in Berkeley, in some sense, he
didn't believe in doing statistical inference for causal effects outside of
randomized experiments.
\textbf{Fan:} Also there are features in the RCM, such as the definition of
the assignment mechanism, that belong to you.
\textbf{Don:} Yes, it was crucial to realize that randomized experiments
are embedded in a larger class of assignment mechanisms, which was not in
the literature. Also, in the 1978 paper (Rubin, \citeyear{Rub78N1}), I proposed three
integral parts to this RCM framework: potential outcomes, assignment
mechanisms, and a (Bayesian) model for the science (the potential outcomes
and covariates). The last two parts were not only something that Neyman
never did, he possibly wouldn't even like the third part. In fact, I think
it is unfair to attribute something to someone who is dead, who may not
approve of the content being attributed. If the fundamental idea is clear,
such as with Fisher's randomization test of a sharp null hypothesis, sure,
attribute that idea to Fisher no matter what the test statistic, as in
Brillinger, Jones and Tukey (\citeyear{Bri}). Panos Toulis (a~fine Harvard Ph.D.
student) helped me track down this statement that I remembered reading in
my ETS days from a manuscript John gave to me:
``\textit{In the precomputer era, the fact that almost all work could be
done once and for all was of great importance. As a consequence, the
advantages of randomization approaches---except for those few cases where
the randomization distributions could be dealt with once and for all---were not adequately valued}.
\textit{One reason for this undervaluation lay in the fact that, so long as
randomization was confined to specially manageable key statistics, there
seemed no way to introduce into the randomization approach the
insights---some misleading and some important and valuable---into what test
statistics would be highly sensitive to the changes that it was most
desired to detect. The disappearance of this situation with the rise of the
computer seems not to have received the attention that it deserves}.''
(Brillinger, Jones and Tukey, \citeyear{Bri}, Chapter~25, page F-5.)
\textbf{Fabri:} Here I am quoting an interesting question by Tom Belin
regarding potential outcomes: ``Do you believe potential outcomes exist in
people as fixed quantities, or is the notion that potential outcomes are a
device to facilitate causal inference?''
\textbf{Don:} Definitely the latter. Among other things, a~person's
potential outcomes could change over time, and how do we know the people
who were studied in the past are still exchangeable with people today? But
there are lots of devices like that in science.
\textbf{Fan:} In the RCM, cause/intervention should always be defined
before you start the analysis. In other words, the RCM is a framework to
investigate the ``effects of a cause,'' but not the ``causes of an
effect.'' Some criticize this as a major limitation. Do you regard this as
a limitation? Do you think it is ever possible to draw inference on the
causes of effects from data, or is it, per se, an interesting
question worth further investigation?
\textbf{Don:} I regard ``the cause'' of an event topic as more of a
cocktail conversation topic than a scientific inquiry, because it leads to
an essentially infinite regress. Someone says, ``He died of lung cancer
because he smoked three packs a day''; then someone else counters, ``Oh no,
he died of lung cancer because both of his parents smoked three packs a day
and, therefore, there was no hope of his doing anything other than smoking
three packs a day''; then another one says, ``No, no, his parents smoked
because his grandparents smoked---they lived in North Carolina where, back
then, everyone smoked three packs a day, so the cause is where the
grandparents lived,'' and so on. How far back should you go? You can't talk
sensibly about \textit{the cause} of an event; you can talk about ``but for
that cause (and there can be many `but for's), what would have happened?''
All these questions can be addressed hypothetically. But \textit{the
cause}? The notion is meaningless to me.
\textbf{Fabri:} Do you feel that you benefit from knowing about history of
statistics when you are thinking about fundamentals of statistics?
\textbf{Don:} I know some history, but not a large amount. The most
important lessons occur when I feel that I understand why one of the
giants, like Fisher or Neyman, got something wrong. When you understand why
a mediocre thinker got something wrong, you learn little, but when you
understand why a genius got something wrong, you learn a tremendous
amount.
\begin{figure}
\caption{D. B. Rubin (on left) poses with the captain (on right) of Sy's boat
harbored in Bodrum, Turkey, mid-1970s.}
\end{figure}
\section*{Back to Harvard: Propensity Score, Multiple Imputation and More}
\textbf{Fabri:} After those productive years at ETS, you spent some time at
the EPA (US Environmental Protection Agency). Why did you decide to move,
given that you were apparently doing very well at the ETS?
\textbf{Don:} It started partly from my joking answer to the question,
``How long have you been at ETS?'' I answered, ``Too long.'' The problems
that I had dealt with at ETS started to appear repetitive, and I felt that
I had made important contributions to them including EM and multiple
imputation ideas, which were being used to address some serious issues,
like test equating, and formulating the right ways to collect data. So I
wanted to try something else. At the time, David Rosenbaum was the head of
the Office of Radiation Programs at the EPA. He had the grand idea of
putting together a team of applied mathematicians and statisticians.
Somehow he found my name and invited me to D.C. to find out whether I
wanted to lead such a group. Basically, I had the freedom to hire several
people of my choice, and I had a good government salary (at the level of
``Senior Executive Service''). So I said, ``Let's see whom I can get.'' I
was able to convince both Rod Little (who was in England at that time) and
Paul Rosenbaum (whom I advised while I was still at ETS), as well as Susan
Hinkins, who wrote a thesis on missing data at Montana State University,
and two others. That was shortly before the presidential election. Then the
Democrats lost and Reagan was to come in, and everything seemed to be
falling apart. All of a sudden, many of the people above my level at the
EPA (most of whom were presidential appointments), had to prepare to turn
in their resignations, and had to be concerned about their next
positions.
\textbf{Fabri:} So the EPA project ended before it even got started.
\textbf{Don:} It didn't start at all in some sense. I formally signed on at
the beginning of December, and after one pay period, I turned in my
resignation. But I felt responsible to find jobs for all these people I
brought there. Eventually, Susan Hinkins got connected with Fritz Scheuren
at the IRS; Paul Rosenbaum got a position at the University of Wisconsin at
Madison; Rod got a job related to the Census. One nice thing about that
short period of time is that, through the projects I was in charge of, I
made several good connections, such as to Herman Chernoff and George Box.
George and I really hit it off, primarily because of his insistence on
statistics having connections to real problems, but also because of his
wonderful sense of humor, which was witty and ribald, and his love of good
spirits. In any case, the EPA position led to an invitation to visit Box at
the Math Research Center at the University of Wisconsin, which I gladly
accepted. That gave me the chance to finish writing the propensity score
papers with Paul (Rosenbaum and Rubin, \citeyear{RosRub83}, \citeyear{autokey27}, \citeyear{autokey28}).
\textbf{Fan:} Since you mentioned propensity score, arguably the most
popular causal inference technique in a wide range of applied disciplines,
can you give some insights on the ``natural history'' of propensity
score?
\textbf{Don:} I first met Paul in 1978, when I came to Harvard on a
Guggenheim fellowship; he was a first-year Ph.D. student, extremely bright
and devoted. Back in my Princeton days I did some consulting for a
psychologist at Rutgers, June Reinisch, who later became the first director
of the Kinsey Institute after Kinsey. She was very interested in studying
the nature-nurture controversy---what makes men and women so different?
She and her husband, who was also a psychologist, were doing experiments on
rats and pigs. They injected hormones into the uteri of pregnant animals,
and thereby exposed the fetuses to different prebirth environments; this
kind of randomized experiment is obviously unethical to do with humans. One
of the problems Paul and I were working on for this project, also as part
of Paul's thesis, was matching---matching background characteristics of
exposed and unexposed. The covariates included a lot of continuous and
discrete variables, some of which were rare events like certain serious
diseases prior to, or during, early pregnancy. Soon it became clear that
standard matching approaches, like Mahalanobis matching, do not work well
in such high dimensional settings. You have to find some type of summaries
of these variables and balance the summaries in the treatment and control
groups, not individual to individual. Then we realized if you have an
assignment mechanism, you can match on the individual assignment
probabilities, which is essentially the Horvitz--Thompson idea, to
eliminate all systematic bias. I don't remember the exact details, but I
think we first got the propensity score idea when working on a Duke data
bank on coronary artery bypass surgery, but refined it for the Reinisch
data, which is very similar in principle. Again, the idea of the propensity
score is motivated by addressing real problems, but with generality.
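(For completeness, the standard definition underlying this discussion, in notation that is ours rather than the conversation's: for covariates $X$ and binary treatment indicator $W$, the propensity score is
\[
e(x)=\Pr(W=1\mid X=x),
\]
and Rosenbaum and Rubin (\citeyear{RosRub83}) show it is a balancing score, that is, $W$ is independent of $X$ given $e(X)$, so matching or subclassifying on the scalar $e(X)$ tends to balance the entire covariate vector between treated and control groups.)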
\textbf{Fan:} Multiple Imputation (MI) is another very influential
contribution of yours. Your book ``Multiple Imputation for Nonresponse in
Sample Surveys'' (Rubin, \citeyear{Rub87N1}) has commonly been cited as the origin of
MI. But my understanding is that you first developed the idea and coined
the term much earlier.
\textbf{Don:} Correct, I first wrote about MI in an ASA proceedings paper
in 1978 (Rubin, \citeyear{Rub72}, \citeyear{Rub78N2}). That's where ``the $18+$ years'' comes from when I
wrote ``Multiple Imputation After $18+$ Years'' (Rubin, \citeyear{Rub96}).
\textbf{Fabri:} MI has been developed in the context of missing data, but
its applicability seems to be far beyond missing data.
\textbf{Don:} Yes, MI has been applied and will be, I~think, all over
the place. The reason I titled the book that way, ``Multiple Imputation for
Nonresponse in Sample Surveys,'' is that it was obvious to me that in the
settings where you need to create public-use data sets, you had to have a
separation between the person who fixed up the missing data problem and the
many people who might do analyses of the data. So there was an obvious need
to do something like this, because users could not possibly have the
collection of tools and resources to do the imputation, for example, using
confidential information. My Ph.D. students, Trivellore Raghunathan (Raghu)
and Jerry Reiter, have made wonderful contributions to confidentiality
using MI. Of course, other great Ph.D. students of mine, Nat Schenker, Kim
Hung Lee, Xiao-Li Meng, Joe Schafer, as well as many others, have also made
major contributions to MI. The development of MI really reflects the
collective efforts from these people and others like Rod Little and his
colleagues and students.
\textbf{Fabri:} Rod Little once half-jokingly said, ``Want to be highly
cited? Coauthor a book with Rubin!'' And indeed he wrote the book
``Statistical Analysis with Missing Data'' with you (Little and Rubin,
\citeyear{LitRub87}, \citeyear{LitRub02}), which is now regarded as the classic textbook on missing data.
There have been a lot of new advances and changes in missing data since
then. Will we see a new edition of the book that incorporates these
developments sometime soon?
\textbf{Don:} Oh yes, we are working on that now. The main changes from
1987 to 2002 reflect the greater acceptability of Bayesian methods and MCMC
type computations. Rod is a fabulous coauthor, a much more fluid writer
than I am. I believe this third edition will have even more major changes
than the 2002 one did from the 1987 one, but again many driven by
computational advances.
\begin{figure}
\caption{D. B. Rubin at Harvard, early 1980s.}
\end{figure}
\section*{On Bayesian}
\textbf{Fan:} In the 1978 Annals paper (Rubin, \citeyear{Rub78N1}), you gave, for the
first time, a rigorous formulation of Bayesian inference for causal
effects. But the Bayesian approach to causal inference did not have much
following until very recently, and the field of causal inference is still
largely frequentist. How do you view the role of Bayesian approach in
causal inference?
\textbf{Don:} I believe being Bayesian is the right way to approach things,
because the basic frequentist approach, such as the Fisherian tests and
Neyman's unbiased estimates and confidence intervals, usually does not work
in complicated problems with many nuisance unknowns. So you have to go
Bayesian to create procedures. You can go partially Bayesian using things
like posterior predictive checks, where you put down a null that you may
discover evidence against, or direct likelihood approaches as in Frumento
et al. (\citeyear{Fruetal12}); if the data are consistent with a null that is interesting,
you live with it. But Neyman-style frequentist evaluations of Bayesian
procedures are still relevant.
\textbf{Fan:} But why is the field of causal inference still predominantly
frequentist?
\textbf{Don:} I think there are several reasons. First, there are many
Bayesian statisticians who are far more interested in MCMC algebra and
algorithms, and do not get into the science. Second, I regard the method of
moments (MOM) frequentist approach as pedagogically easier for motivating
and revealing sources of information. Take the simple instrumental variable
setting with one-sided noncompliance. Here, it is very easy to look at the
simple MOM estimate to see where information comes from. With Bayesian
methods, the answer is, in some sense, just there in front of you. But when
you ask where the information comes from, you have to start with any value,
and iterate using conditional expectations, or draws from the current joint
distributions. You have to have far more sophisticated mathematical
thinking to understand fully Bayesian ideas. There are these problems with
missing data (as in my discussion of Efron, \citeyear{Efr94}) where there are unique,
consistent estimates of some parameters using MOM, but for which the joint
MLE is on the boundary. So I think it is often easier, pedagogically, to
motivate simple estimators and simple procedures, and not try to be
efficient when you convey ideas. In causal inference, that corresponds to
talking about unbiased or nearly unbiased estimates of causal estimands, as
in Rubin (\citeyear{Rub77}). There are other reasons having to do with the current
education of most statisticians.
\textbf{Fan:} After EM, starting from the early 1980s, you were heavily
involved in developing methods for Bayesian computing, including the
Bayesian bootstrap (Rubin, \citeyear{Rub81}), the sampling importance-resampling (SIR)
algorithm (Rubin, \citeyear{Rub87N2}), and (lesser-acknowledged) ``approximate Bayesian
computation (ABC)'' (Rubin, \citeyear{Rub84}, Section~3.1).
\textbf{Don:} It was clear then that computers were going to allow Bayes to
work far more broadly than earlier. You, as well as others such as Simon
Tavare, Christian Robert and Jean-Michel Marin, are giving me credit for
first proposing ABC. Thanks! Although, frankly, I~never thought that would
be a useful algorithm except in problems with simple sufficient
statistics.
\textbf{Fabri:} But you do not seem to have followed up much on these ideas
later, even if you have used them. Also you do not label yourself as a
Bayesian or a frequentist, even if all these papers made extraordinary
contributions to Bayesian inference with fundamental and big ideas.
\textbf{Don:} First of all, fundamentally I am hostile to all
``religions.'' I recently heard a talk by Raghu in Bamberg, Germany, where
he said that in his world they have zillions of gods, and I think that is
right; you should have zillions of gods, one for this good idea, one for
that good idea. And different people can create different gods to whatever
extent they want to. I am not a full-fledged member of the Bayesian
camp---I like being friends with them, but I never want to be religiously
Bayesian. My attitude is that any complication that creates problems for
one form of inference creates problems for all forms of inference, just in
different ways. For example, the fact that confounded treatment assignments
cause problems for frequentist inference is obvious. Does it generate
problems for the Bayesian? Yeah, that point was made in the 1978 Annals
paper: Randomization matters to a Bayesian, although not in the same way as
to a frequentist, that is, not as the basis for inference, but it affects
the likelihood function.
There is something I am currently working on with a Ph.D. student, Viviana
Garcia, that builds on a paper I wrote with Paul Rosenbaum in 1984
(Rosenbaum and Rubin, \citeyear{autokey29}), which is the only Bayesian paper that Paul
has ever written, at least with me. In that paper, we did some simulations
to show there is an effect on Bayesian inference of the stopping rule. We
show that if you have a stopping rule and use the ``wrong'' prior to do the
analysis, like a uniform improper prior, but the data are coming from a
``correct'' prior, and you look at the answer you get from the right prior
and from the ``wrong'' prior, they are different. The portion of the right
posterior that you cover using the ``wrong'' posterior is incorrect. This
extends to all situations and it is related to all of these ignorability
theorems, and it means that you need to have the right model with respect
to the right measure. Of course achieving this is impossible in practice
and, therefore, leads to the need for frequentist (Neymanian) evaluations
of the operating characteristics of Bayesian procedures when using
incorrect models (Rubin, \citeyear{Rub84}). Bayes works, in principle, there is no
doubt, but it can be so hard! It can work, in practice, but you must have
some other principles floating around somewhere to evaluate the
consequences---how wrong your conclusions can be. So you must have
something to fall back on, and I think that is where these frequentist
evaluations are extremely useful, not the unconditional Neyman--Pearson
frequentist evaluations for all point mass priors (which were critical as
mathematical demonstrations that we cannot achieve the ideal goal in any
generality), but evaluations for the class of problems that you are dealing
with in your situation.
\textbf{Fan:} The 1984 Annals paper ``Bayesianly Justifiable and Relevant
Frequency Calculations for the Applied Statistician'' (Rubin, \citeyear{Rub84}) is one
of my all-time favorite papers. This paper, as the earlier paper by George
Box (Box, \citeyear{Box80}), deals with the ``calibrated Bayes'' paradigm with
generality, which can be viewed as a compromising or mid-ground between the
Bayesian and frequentist paradigms. It has a profound influence on many of
us. In particular, Rod Little has strongly advocated ``calibrated Bayes''
as the 21st century roadmap of statistics in several of his
prominent talks, including the 2005 ASA President's Invited Address and the
2012 Fisher Lecture. What was the background and reasons for you to write
that paper?
\textbf{Don:} Interesting question. I was visiting Box at the Mathematics
Research Center in 1981--1982 and wrote Rubin (\citeyear{Rub83}) partly during that
period---I~think it's a good paper with some good ideas, but without a
satisfying big picture. That dissatisfaction led to that 1984 paper---what
is the big picture? It took me a very long time to ``get it right,'' but it
all seems very obvious to me now. The idea of posterior predictive checks
has been further articulated and advanced in Meng (\citeyear{Men94}), Gelman, Meng and
Stern (\citeyear{G96}), and the multiauthored book ``Bayesian Data Analysis'' (Gelman
et al., \citeyear{Geletal95}, \citeyear{Geletal03}, \citeyear{Geletal14}).
\textbf{Fabri:} Can you talk a little more about the ``Bayesian Data
Analysis'' book, probably one of the most popular Bayesian textbooks?
\textbf{Don:} Yup, I think that the Gelman et al. book might be THE most
popular Bayesian text. It started out as notes by John Carlin for a
Bayesian course that he taught when I was Chair sometime in the mid or late
1980s. Andy must have been a Ph.D. student at that time, with tremendous
energy for scholarship. John was heading back to Australia, which is his
homeland, and somehow the department had some extra teaching money, and we
wanted to keep John around for a year---I do not remember the details. But
I do remember that the idea of turning the notes for the course into a full
text was percolating. Also Hal Stern was an Associate Professor with us at
that time, and so the four of us decided to make it happen. We basically
divided up chapters and started writing. Even though John's initial notes
were the starting basis, things changed as soon as Andy ``took charge.''
Quickly, Andy and Hal were the most active. Andy, with Hal, were even more
dominant in the second edition, where I added some parts, edited others,
but clearly this was Andy's show. The third edition, which just came out in
early 2014, was even more extreme, with Andy adding two coauthors (David
Dunson and Aki Vehtari) because he liked their work, and they had been
responsive to Andy's requests. As the old man of the group, I just
requested that I be the last author; Andy obviously was the first author,
and the second and third were as in the first edition. In some ways, I feel
like I'm an associate editor of a journal that has Andy as the editor! We
get along fine, and clearly it's a successful book.
\textbf{Fan:} A revolutionary development in statistics since the early 90s
was the MCMC methodology. You left your mark in this with Gelman, proposing
the Gelman--Rubin statistic for convergence check (Gelman and Rubin, \citeyear{GelRub92}),
which seems to be very much connected to some of your previous work.
\textbf{Don:} Correct. We embedded the convergence check problem into the
combination of the multiple imputation and multiple chains frameworks,
using the idea of the combining rules for MI. The idea of using multiple
chains---that comes from physics---was Andy's knowledge, not mine. My
contribution was to suggest using modified MI combining rules to help do
the assessment of convergence. The idea is powerful because it is so
simple. If the starting value does not matter, which is the whole point,
then it doesn't matter, period. The real issue should be how you choose the
functions of the estimands that you are assessing, and as always, you want
convergence to asymptotic normality to be good for these functions, so that
the simple justification for the Gelman--Rubin statistic is roughly
accurate.
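(For completeness, a standard form of the statistic, in our notation rather than the conversation's: with $m$ chains each of length $n$, between-chain variance $B$ and average within-chain variance $W$ for a scalar estimand, the potential scale reduction factor is
\[
\widehat{R}=\sqrt{\frac{\frac{n-1}{n}W+\frac{1}{n}B}{W}},
\]
which tends to $1$ as the chains forget their starting values; values well above $1$ signal that further simulation is needed.)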
\section*{The 1990s: Collaborating with Economists}
\textbf{Fabri:} In the 1990s, you started to work with economists. With
Joshua Angrist, and particularly with Guido Imbens, you wrote a series of
very influential papers, connecting the potential outcomes framework to
causal inference with instrumental variables. Can you tell us how this
collaboration started?
\textbf{Don:} Absolutely. I always liked economics; many economists are
great characters! It was in the early 90s when Guido came to my office as a
junior faculty member in the Harvard Economics Department and basically
said, ``I think I have something that may interest you.'' I had never met
him before, and he was asking if the concept of instrumental variables
already had a history in
\begin{figure*}
\caption{In classroom at Harvard, late 1980s.}
\end{figure*}
statistics. Guido and Josh Angrist had already defined the LATE (local
average treatment effect) in an Econometrica paper (Imbens and Angrist,
\citeyear{ImbAng94})---although I think CACE (Complier Average Causal Effect) is a much
better name because it is more descriptive and more precise---local can be
local for anything, local for Boston, local for females, etc. Then I asked
in return, ``Well tell me the setup, I have never heard of it in statistics
before'' and while he was explaining I started thinking, ``Gosh, there is
something important here! I have never seen it before,'' and then I said,
``Let's meet tomorrow and talk about it more,'' because these kinds of
assumptions (monotonicity and the ``exclusion restriction'') were
fascinating to me, and it was clear that there was something there that I
had never really thought hard about; it was great. That eventually led to
the instrumental variables paper (Angrist, Imbens and Rubin, \citeyear{AngImbRub96})
and the later Bayesian paper (Imbens and Rubin,
\citeyear{ImbRub97}).
A closely related development was a project I was consulting on for AMGEN
at about the same time, for a product for the treatment of ALS (amyotrophic
lateral sclerosis), or Lou Gehrig's disease, which is a progressive
neuromuscular disease that eventually destroys motor neurons, and death
follows. The new product was to be compared to the control treatment where
the primary outcome was quality of life (QOL) two years post-randomization,
as measured by ``forced vital capacity'' (FVC), essentially, how big a
balloon you can blow up. In fact, many people do not reach the end-point of
two-year post-randomization survival, and so two-year QOL is ``truncated''
or ``censored'' by death. People were trying to fit this problem into a
``missing data'' framework, but I realized right away that it was something
different.
\textbf{Fan:} Essentially both ideas are special cases of the general idea
of Principal Stratification, which we can discuss in a moment.
\textbf{Don:} Yes, indeed. These meetings with Guido and this way of
thinking were so much more articulated and close to the thinking of European
economists in the 30s and 40s, like Tinbergen and Haavelmo, than many
subsequent economists who seemed sometimes to be too into their OLS algebra
in some sense. There was some correspondence between one of the
two---Haavelmo, I think---and Neyman on these hypothetical experiments on
supply and demand. European brains were talking to each other, and not
simply exchanging technical mathematics!
\textbf{Fabri:} I know that many years before you met Guido, with other
statisticians, like Tukey, you had discussions about the way economists
were treating selection problems, or missing data problems. But you had
some adventurous, to say the least, previous experiences with economists
dealing with problems that you had worked on, which they had almost
neglected completely.
\textbf{Don:} Yes, James Heckman was tracking my work in the early 1980s
when I came to Chicago after ETS. The public exchange came out in the ETS
volume edited by Howard Wainer (which is where Glynn, Laird and Rubin,
\citeyear{GlyLaiRub86}, appears), with comments from Heckman, Tukey, Hartigan and others.
\textbf{Fabri:} Economics is a field where the idea of causality is
crucial; did you find interest in economics also for this very reason? The
problems they have are usually very interesting.
\begin{figure*}
\caption{(Left to right) Guido Imbens, Don Rubin, Josh Angrist. March,
2014.}
\end{figure*}
\textbf{Don:} There are often interesting questions from social science
students that come up in class. One recent example is how do we answer
questions like ``What would the Americas be like if they were not settled
by Europeans?'' I asked the questioner, ``Who would they be settled by
instead? By the Chinese? By the Africans? What are you talking about? What
are we comparing the current American world to?'' Another example comes
from an undergraduate thesis that I directed, by Alice Xiang, which won
both the Hoopes Prize and the economics department's Harris Prize for an outstanding
honors thesis. The thesis is on the causal effect of racial affirmative
action in law school admissions on some outcomes versus the same proportion
of affirmative action admissions but counter-factually based on
socioeconomic status. This is not just for cocktail conversation---it was
a case recently before the US Supreme Court, Fisher v. University of Texas,
which was kicked back to the lower court to reconsider, and additionally
the issue was recently affected by a state law in Michigan. There is an
amicus brief sent to the US Supreme Court to which Guido (Imbens), former
Ph.D. students, Dan Ho, Jim Greiner and I (with others) contributed.
Such careful formulation of questions is critical, and to me is
central to the field of statistics. It is crucial to formulate clearly your
causal question. What is the alternative intervention you are considering,
when you talk about the causal effect of affirmative action on graduation
rates or bar-passage rates? Immediately formulating the problem as an OLS
regression is the wrong way to do this, at least to me.
\textbf{Fan:} You apparently have a long interest in law; besides the
aforementioned ``affirmative action'' thesis, you have done some
interesting work in applied statistics in law.
\textbf{Don:} Yes. Paul Rosenbaum was, I think, the first of my Harvard
students who did something about statistics in law. Either his qualifying
paper or a class paper in 1978 was on the effect of the death penalty. Jim
Greiner, another great Ph.D. student of mine, who had a law degree before
entering Harvard Statistics, wrote his Ph.D. thesis (and subsequently several
important papers) on potential outcomes and causal effects of immutable
characteristics. He is now a full professor at the Harvard Law School.
There were also several previous undergraduate students of mine who were
interested in statistics and law, but (sadly) most went to law school.
Since 1980, I have been involved in many legal topics.
\section*{The New Millennium: Principal Stratification}
\textbf{Fabri:} The work you did with Guido, as well as the work on
censoring due to death, led to your paper on Principal Stratification
(Frangakis and Rubin, \citeyear{FraRub02}), coauthored with this brilliant student of
yours, Constantine Frangakis, who happens to be Fan's advisor.
\textbf{Don:} Yes, Constantine is fabulous, but the original title of that
paper was very long, same with the title of his thesis. It went on and on,
with probably a few Latin, a few Italian, a few French and a few Greek
words! Of course I was exasperated, so I convinced him to simplify the
paper's title to ``Principal Stratification in Causal Inference.'' He is
brilliant, so good that he has no trouble dealing with all the complexity
in his own mind, but therefore he struggles at times pulling out the
kernels of all these ideas, making them simple.
\textbf{Fan:} What do you think is the most remarkable thing about the
development of Principal Stratification?
\textbf{Don:} It is a whole new collection of ways of thinking about what
the real information is in causal problems. Once you understand what the
real information is, you can start thinking about how you can get the
answers to questions that you want to extract from that information; you
always have to make assumptions, and it forces you to explicate what these
assumptions are, not in terms of OLS, which no social scientist or doctor
would really understand---but in terms of scientific or medical entities.
And because you have to make assumptions, be honest and state them clearly.
For example, I like your papers (Mealli and Pacini, \citeyear{MeaPac13}; Mattei, Li and
Mealli, \citeyear{MatLiMea13}) about multiple post-randomization outcomes, where you discuss
that for some outcomes, exclusion restriction or other structural
assumptions may be more plausible.
\textbf{Fabri:} Principal Stratification is sometimes compared to other
tools for doing so-called mediation analysis---what is your view about
inferring on mediation effects?
\textbf{Don:} I think we (Don and Fabri) discussed a paper recently in
JRSS-A, and those discussions summarize my--our view on that. Essentially,
some of the people writing about mediation seem to misunderstand what a
function is. They write down something that has two arguments inside
parenthesis, with a comma separating them, and they seem to think that
therefore something is well defined!
\textbf{Fan:} Even though causal inference has gained increasing attention
in statistics and beyond, there seems to be a lot of misunderstanding,
misuse, misinterpretation and mystifying of causal inference. Why? And what
needs to be done to change?
\textbf{Don:} I think it is partly because causal inference is a very
different topic from many topics in statistics in that it does not demand a
lot of technical advanced mathematical knowledge, but does demand a lot of
conceptual and basic mathematical sophistication. Principal Stratification
is one such example. Writing down notation does not take the place of
understanding what the notation means and how to prove things
mathematically. Also partly because causal inference has become a popular
topic, it has been flooded with publications that are often done casually.
For some fields, it is important to bridge the ``old''
(everything-based-on-OLS) thinking with the newer ideas. That's a battle
Guido and I constantly had to deal with when writing our book (Imbens and
Rubin, \citeyear{ImbRub14}).
\textbf{Fan:} You mentioned \textit{the} book; when will it finally come
out? It has been forthcoming for the last ten years or so.
\textbf{Don:} (Laughing) Come on, Fan, that's not fair! Has it only been
ten years? We have promised the publisher (Cambridge University Press) that
it will be ready by September 30, 2013.\footnote{As of April 1, 2014, the
book can be preordered on \href{http://www.amazon.com}{Amazon.com}.} It will be about 500 pages, 25
chapters. It will be followed by another volume, dealing with topics that
we could not get to in the volume due to length, such as principal
stratification beyond IV settings, or because we believe the topics have
not been sharply and cleanly formulated yet, such as regression
discontinuity designs, or using propensity scores with multiple treatments.
Also in this volume, we didn't discuss so-called case--control studies,
which are the meat of much of epidemiology; it is very important to embed
these studies into a framework that makes sense, not just teach them as a
bag of tricks.
\section*{Mentoring, Consulting and Editorship}
\textbf{Fabri:} You have advised over 50 Ph.D. students and many BA students
as well. This sounds like a job interview, but what is your teaching
philosophy?
\textbf{Don:} My view is that one should approach teaching very differently
depending on the kind of students you have and their goals. Harvard has
tremendous undergraduate and graduate students, but their strengths vary
and their objectives vary. A long time ago I decided that I don't have the
desire or ability to be an entertainer in class, that is, to entertain to
get their attention. If they find me entertaining, fine; but it is better
if they find the topic I am presenting entertaining.
\textbf{Fabri:} Many of your students went on to become leaders and not
only in academia. And you often say that the thing that you are the most
proud of is your students. Though it is clearly impossible to talk about
them here one by one, can you share some of your fond memories of the
students?
\textbf{Don:} Fabri, that is a killer question unless we have another day
for this. What I can say is that it has been a great pleasure to supervise
so many very talented students. I could start listing my superb Ph.D. students at
the University of Chicago\vadjust{\goodbreak} and at Harvard. All of my Ph.D. students are
talented in many, and sometimes different, dimensions: among them there are
two COPSS award winners, one president of the ASA, one president of ENAR,
two JSM program chairs, and other such honors, and many of them made
substantial contributions to government, academia and industry.
\textbf{Fan:} You also have advised a large number of undergraduate
students on a wide range of topics. This is quite uncommon because some
people find mentoring undergraduates more challenging and less rewarding
than mentoring graduate students. What is your take on this?
\textbf{Don:} I am not completely innocent of this charge. I~have no
interest in ``babysitting'' and trying to motivate unmotivated students,
either undergraduate or graduate. But Harvard does attract some extremely
talented and motivated undergraduates, some of whom I had the pleasure to
advise. Five have won Hoopes and other prizes for outstanding undergraduate
theses.
\textbf{Fabri:} Now let's talk about writing, which both Fan and I, as many
others, have some quite memorable first-hand experience. You are known as a
perfectionist in writing. As you mentioned, you are willing to withdraw
accepted papers if you are not a hundred percent satisfied with them.
\textbf{Don:} Yes, as you guys know, I am generally a pain in the neck as a
coauthor. I have withdrawn three accepted papers, and tried to improve
them; all eventually got reaccepted. One of these is the paper with you
guys and others on multiple imputation for the CDC Anthrax vaccine trial
(Li et al., \citeyear{Lietal14}). You were not too happy about it initially.
\begin{figure*}
\caption{D. B. Rubin (on left) with Tom Belin (on right) and Tom's daughter Janet
(middle), Cambridge, 2008.}
\end{figure*}
\textbf{Fabri:} (Laughing) Yeah, we tried to revolt without success. A
different question: How do you approach rejections? Do you have some advice
for young statisticians on that?
\textbf{Don:} Over the years I had many papers immediately rejected or
rejected with the suggestion that it would not be wise to resubmit.
However, in almost all of these cases, this treatment led to markedly
improved publications, somewhere. In fact, I think that the drafts that
have been repeatedly rejected possibly represent my best contributions.
Certainly, the repeated rejections, combined with my trying to address
various comments, led to better exposition and sometimes better problem
formulation, too. The most important idea is: Do not think that people who
are critics are hostile. In the vast majority of cases, editors and
reviewers are giving up their time to try to \textit{help} authors, and, I
believe, are often especially generous and helpful to younger or
inexperienced authors. Do not read into rejection letters personal attacks,
which are extremely rare. So my advice is: Quality trumps quantity, and
stick with good ideas even when you have to do polite battle with editors
and reviewers---they are not perfect judges, but they are, almost
uniformly, on your side. More details of these are given in Rubin
(\citeyear{Rub14N2}).
\textbf{Fan:} In 1978, you became the Coordinating and Applications Editor
of JASA. Is there anything particularly unique about your editorship?
\textbf{Don:} As author, I am willing to withdraw accepted papers. As a new
editor, at least then, I was also willing to suggest to authors that they
withdraw papers accepted by the previous editors! I took some heat for that
at the beginning. I read through all the papers that the previous editorial
board had accepted and were awaiting copyediting for publication; for the
ones that I thought were bad (I remember there were about eight), I wrote,
``Dear authors, I think you should consider withdrawing this paper,'' with
long explanations of why I thought it would be an embarrassment to them if
the paper were published. Fabri knows that I can be brutally frank about
such suggestions.
\begin{figure*}
\caption{Celebrating Don's 70th birthday at the Yenching
Restaurant, Harvard Square, March 29, 2014. Front (left to right): Alan
Zaslavsky, Elizabeth Stuart, Xiao-Li Meng, TE Raghunathan; Back (left to
right): Fan Li, Elizabeth Zell, Fabrizia Mealli, Don Rubin. The restaurant
has a dish named in Don's honor, the ``Rubin.''}
\end{figure*}
\textbf{Fan:} Did they comply?
\textbf{Don:} Yes, all but one. This one author fought, and I kept saying,
``You have to fix this up.'' Eventually, the changes made the paper OK. For
the other ones, the authors agreed with my criticisms: Just because the
previous editor didn't get a good reviewer or they overlooked mistakes,
does not mean the paper should appear. But I was not very popular, at least
at first.
\textbf{Fabri:} You have done a wide range of consulting. What is the role
that consulting plays in your research?
\textbf{Don:} To me consulting is always a stimulating source of problems.
As I mentioned before, for example, propensity score technology partly came
from the consulting work we did for June Reinisch.
\textbf{Fabri:} One of the more controversial cases in which you are
involved as a consultant is the US tobacco litigation case, in which you
represented the tobacco companies as an expert witness. Would you mind
sharing some of your thoughts on this case?
\textbf{Don:} Happy to. This comes from my family background dealing with
lawyers. We have a legal system where certain things are legal, certain
things are not. You should generally obey laws even if you don't like them,
or you should try to change them. If a company is making a legal product,
and they are advertising it legally under current laws, then accept it or
work to change the laws. If they lie, punish them for lying, if that is
legal to do. You never see a commercial for sporty cars that show the cars
going around corners extremely slowly and safely. How do they advertise
cars? They usually show them sweeping around corners, and say ``Don't do
this on your own.'' Things that are enjoyable typically have uncertainties
or risks associated with them. Flying to Europe to visit Fabri has risks!
Certainly I do not doubt that no matter how I would intervene to reduce
cigarette smoking, lung cancer rates would drop. But what intervention that
would reduce smoking would involve reducing illegal conduct of the
cigarette industry---that is the essence of the legal question.
When I was first contacted by a tobacco lawyer, I~was very reluctant to
consult for them, and I feared strong pressure to be dishonest, which was
absent throughout. The original topic was simply to comment on the ways the
plaintiffs' experts were handling missing data. On examination, their
methods seemed to me to be not the best available and, at worst, silly
(e.g., when missing ``marital status,'' call them ``married''). As I
continued to read these initial reports, I was appalled that hundreds of
billions of dollars could be sought on the basis of such analyses. From a
broader perspective, the logic underlying most of the analyses also seemed
to me entirely confused. For example, alleged misconduct seemed to play no
role in nearly all calculations, and phrases such as ``caused by'' or
``attributable to,'' were used nearly interchangeably and often apparently
without thought. Should nearly a trillion dollars in damages be awarded on
the basis of faulty logic and bad statistical analyses because we ``know''
the defendant is evil and guilty? If the issue were assessing the tobacco
industry a trillion dollar fine for lying about its products, I would be
amazed but mute. But these reports were using statistical arguments to set
the numbers---is it acceptable to use bad statistics to set numbers
because we ``know'' the defendant is guilty? What sort of precedent does
that imply? The ethics of this consulting is discussed at some length in
Rubin (\citeyear{Rub02}).
\textbf{Fabri:} We have talked quite a lot about statistics. Let's talk
about some of your other passions in life, for example, music, audio
systems and sports cars.
\textbf{Don:} There are other passions, too, and their order is very age
dependent (I leave more to your perceptions). As a kid, for example,
sports cars, both driving them and rebuilding them, were at the top of those
three hobbies. But age (poorer vision, slower reflexes, more aches and
pains, etc.) shifted the balance more to music, both live and recorded.
Luckily my ears are still good enough to enjoy these, but as more age
catches up, things may shift.
\textbf{Fan and Fabri:} Well, it has been nearly three hours since we
started the conversation. Here is the final question before letting you go
for dinner: What is your short advice to young researchers in statistics?
\textbf{Don:} Have fun! Don't be grumpy. If lucky, you may live to have a
wonderful 70th birthday celebration!\footnote{Video of the
celebration is available at: \url{http://www.stat.harvard.edu/DonRubin70/}}
\section*{Acknowledgments}
We thank Elizabeth Zell, Guido Imbens, Tom Belin, Rod Little, Dale Rinkel and Alan
Zaslavsky for helpful suggestions. This work is partially funded by NSF-SES
Grant 1155697.
\end{document} |
\begin{document}
\begin{abstract}
Polytropes are both ordinary and tropical polytopes. We show that tropical types of polytropes in $\mathbb{TP}^{n-1}$ are in bijection with cones of a certain Gr\"{o}bner fan $\mathcal{GF}_n$ in $\mathbb{R}^{n^2 - n}$ restricted to a small cone called the polytrope region. These in turn are indexed by compatible sets of bipartite and triangle binomials. Geometrically, on the polytrope region, $\mathcal{GF}_n$ is the refinement of two fans: the fan of linearity of the polytrope map that appeared in \cite{tran.combi}, and the bipartite binomial fan. This gives two algorithms for enumerating tropical types of polytropes: one via a general Gr\"obner fan software package such as \textsf{gfan}, and another via checking compatibility of systems of bipartite and triangle binomials. We use these algorithms to compute types of full-dimensional polytropes for $n = 4$, and maximal polytropes for $n = 5$.
\end{abstract}
\title{Enumerating polytropes}
\section{Introduction}
Consider the tropical min-plus algebra $(\mathbb{R}, \oplus, \odot)$, where $a \oplus b = \min(a,b)$, $a\odot b = a+b$. A set $S \subset \mathbb{R}^n$ is tropically convex if $x,y \in S$ implies $a\odot x \oplus b \odot y \in S$ for all $a,b \in \mathbb{R}$. Such sets are closed under tropical scalar multiplication: if $x \in S$, then $a \odot x \in S$. Thus, one identifies tropically convex sets in $\mathbb{R}^n$ with their images in the tropical affine space $\mathbb{TP}^{n-1} = \mathbb{R}^n / (1, \ldots, 1) \mathbb{R}$. The tropical convex hull of finitely many points in $\mathbb{TP}^{n-1}$ is a \emph{tropical polytope}. A tropical polytope is a \emph{polytrope} if it is also an ordinary convex set in~$\mathbb{TP}^{n-1}$ \cite{JoswigK10}.
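The min-plus operations above are easy to experiment with. The following Python sketch (illustrative only, not part of the paper; the function names are ours) implements tropical addition, tropical multiplication, and the coordinatewise tropical combination $a \odot x \oplus b \odot y$ from the definition of tropical convexity.

```python
def trop_add(a, b):
    # tropical addition: a (+) b = min(a, b)
    return min(a, b)

def trop_mul(a, b):
    # tropical multiplication: a (.) b = a + b
    return a + b

def trop_comb(a, x, b, y):
    # coordinatewise tropical combination of points x, y with scalars a, b
    return [trop_add(trop_mul(a, xi), trop_mul(b, yi)) for xi, yi in zip(x, y)]
```

For example, `trop_comb(0, [1, 2], 0, [2, 1])` returns the coordinatewise minimum `[1, 1]`.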
Polytropes are important in tropical geometry and combinatorics. They have appeared in a variety of contexts, from hyperplane arrangements \cite{lampostnikov} and affine buildings \cite{affinebuilding}, to tropical eigenspaces and tropical modules \cite{butkovic, BCOQ}, and the semigroup of tropical matrices \cite{kambite}, to name a few. Their discovery and re-discovery in different contexts have granted them many names: they are the alcoved polytopes of type A of Lam and Postnikov \cite{lampostnikov}, the bounded $L$-convex sets of Murota \cite[\S 5]{murota2003discrete}, and the images of Kleene stars in tropical linear algebra \cite{butkovic, BCOQ}. In particular, they are building blocks for tropical polytopes: any tropical polytope can be decomposed into a union of cells, each of which is a polytrope \cite{mikebernd}. Each cell has a \emph{type}, and together they define the type of the tropical polytope.
A $d$-dimensional polytrope has exactly one $d$-dimensional cell, namely, its (relative) interior. This is the \emph{basic cell}, and its type is the \emph{tropical type} of the polytrope \cite{JoswigK10}. We use the word `tropical' to distinguish it from the ordinary combinatorial type defined by the face poset. As we shall show, tropical type refines ordinary type.
This work enumerates the tropical types of full-dimensional polytropes in $\mathbb{TP}^{n-1}$. Since polytropes are special tropical simplices \cite[Theorem 7]{JoswigK10}, this number is at most the number of regular polyhedral subdivisions of $\Delta_{n-1} \times \Delta_{n-1}$ by \cite[Theorem 1]{mikebernd}. However, this is a very loose bound; the actual number of types of polytropes is much smaller. Joswig and Kulas \cite{JoswigK10} pioneered the explicit computation of types of polytropes in $\mathbb{TP}^2$ and $\mathbb{TP}^3$ using the software \textsf{polymake}. They started from the smallest polytrope, which is a particular ordinary simplex \cite{JoswigK10}, and recursively added more vertices in various tropical halfspaces. Their table of results and beautiful figures have been the source of inspiration for this work. Unfortunately, the published table in \cite{JoswigK10} has errors. For example, there are six, not five, distinct tropical types of full-dimensional polytropes in $\mathbb{TP}^3$ with the maximal number of vertices, as discovered by Jim{\'e}nez and de la Puente \cite{jimenez2012six}. We recomputed Joswig and Kulas' result in Table \ref{tab:jk.table}.
In contrast to previous works \cite{JoswigK10, jimenez2012six}, we take a Gr\"{o}bner approach to polytropes. In Section \ref{sec:background}, we show that their tropical types are in bijection with a subset of cones in the Gr\"{o}bner fan $\mathcal{GF}_n$ of a certain toric ideal. While this is folklore to experts, the obstacle has been in characterizing these cones. Without such a characterization, brute-force enumeration requires one to compute all of $\mathcal{GF}_n$. Unfortunately, even with symmetry taken into account, $\mathcal{GF}_5$ cannot be handled by leading software such as \textsf{gfan} \cite{gfan} on a conventional desktop.
We show that the full-dimensional polytrope cones in $\mathcal{GF}_n$ are contained in a small cone called the \emph{polytrope region}. Our main result, Theorem \ref{thm:S}, gives an indexing system for the polytrope cones in terms of sets of compatible bipartite binomials and triangles. Geometrically, we show that on the polytrope region, the fan $\mathcal{GF}_n$ equals the refinement of
the fan of linearity of the polytrope map $\mathcal{P}_n$, and the bipartite binomial fan $\mathcal{BB}_n$. The latter fan is constructed as a refinement of finitely many fans, each of which is the coarsening of an arrangement linearly isomorphic to the braid arrangement. The open, full-dimensional cones are in bijection with polytropes in $\mathbb{TP}^{n-1}$ with the maximal number of vertices.
These results elucidate the structure of $\mathcal{GF}_n$ and give algorithms for polytrope enumeration. Specifically, one can either compute the Gr\"{o}bner fan $\mathcal{GF}_n$ restricted to the polytrope region, or enumerate sets of compatible bipartite binomials and triangles. With these approaches, we computed representatives for all tropical types of full-dimensional polytropes in $\mathbb{TP}^3$ and all maximal polytropes in $\mathbb{TP}^4$. In $\mathbb{TP}^4$, up to permutation by $\mathbb{S}_5$, there are~$27248$ tropical types of maximal polytropes. This is the first result on tropical types of polytropes in dimension~4.\footnote{An earlier version of this paper reported maximal polytropes in $\mathbb{TP}^5$. Unfortunately, the computation ran out of memory and reported an erroneous number. We thank Michael Joswig and his team for pointing this out. The number of tropical types of polytropes in dimension 5 is still open.}
\textbf{Organization.} For self-containment, Section \ref{sec:background} reviews the basics of Gr\"{o}bner bases and integer programming, and the three integer programs central to this paper. Section \ref{sec:tropical.polytope} revisits the result of Develin and Sturmfels \cite{mikebernd} on types of tropical polytopes using Gr\"obner bases. We use this view in Section \ref{sec:polytropes} to derive Theorem \ref{thm:ds.analogue}, the analogue for polytropes of Develin and Sturmfels' main result. Section \ref{sec:main} contains our main results, Theorems \ref{thm:S} and \ref{thm:main}, on the structure of the polytrope complex. Section~\ref{sec:algorithm} presents algorithms for enumerating full-dimensional polytropes, as well as computational results for $\mathbb{TP}^3$ and $\mathbb{TP}^4$. We conclude with discussions and open problems.
\textbf{Notation.} Throughout this text, for a positive integer $n$, let $[n]$ denote the set $\{1, \ldots, n\}$. We shall identify an $n \times m$ matrix $c$ with the vector $(c_{ij}, i \in [m], j \in [n])$ of length $nm$.
If $c$ is an $n \times n$ matrix with zero diagonal, identify it with the vector $(c_{ij}, i \in [n], j \in [n], j \neq i)$ of length $n^2 - n$. For a cone $\mathcal{C}$, let $\mathcal{C}^\circ$ denote its relative interior, and $\partial \mathcal{C}$ its boundary.
\section{Background}\label{sec:background}
This section contains a short exposition on the Gr\"{o}bner approach to integer programming, adapted from \cite[\S 5]{st94}. Another excellent treatment from the viewpoint of applied algebraic geometry is \cite[\S 8]{usingAlgebraicGeometry}, while \cite[\S 9]{triangulationBook} approaches the topic from triangulations of point configurations.
\subsection{Gr\"obner fan and integer programs}
For $c \in \mathbb{R}^m$, $A \in \mathbb{R}^{n \times m}$ and $b \in \mathbb{Z}^n$, the primal and dual of an integer program are
\begin{align}
\mbox{minimize } & c^\top u \label{lp.p} \tag{P} \\
\mbox{ subject to } & Au = b, \hspace{1em} u \in \mathbb{N}^m \notag \\
\text{maximize } & b^\top y \label{lp.d} \tag{D} \\
\text{subject to } & A^\top y \leq c, y \in \mathbb{R}^n. \notag
\end{align}
Consider the polynomial ring $\mathbb{R}[x] = \mathbb{R}[x_1, \ldots, x_m]$. Identify $u \in \mathbb{N}^m$ with the monomial $x^u = \displaystyle\prod_{i \in [m]} x_i^{u_i}$ in $\mathbb{R}[x]$. The \emph{toric ideal of $A$} is
$$ I = \langle x^u - x^v: Au = Av, u,v \in \mathbb{N}^m\rangle. $$
Let $c \in \mathbb{R}^m$ be a cost vector. The \emph{term ordering} $\succ_c$ is a partial order on the monomials in $\mathbb{R}[x]$, defined as
$$ x^u \succ_c x^v \text{ (equivalently, } u \succ_c v \text{)} \hspace{0.5em} \Leftrightarrow \hspace{0.5em} c \cdot u > c \cdot v. $$
For a polynomial $f = \sum_u a_u x^u \in \mathbb{R}[x]$, define its initial form $\ini_c(f)$ to be the sum of all terms $a_ux^u$ with maximal order under $\succ_c$. The \emph{initial ideal} of $I$ is the ideal generated by $\ini_c(f)$ for all $f \in I$:
$$ \ini_c(I) = \langle \ini_c(f): f \in I \rangle. $$
Monomials of $\mathbb{R}[x]$ which do not lie in $\ini_c(I)$ are called \emph{standard monomials}.
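The initial form of a polynomial with respect to a weight vector $c$ can be computed directly from the definition. Below is an illustrative Python sketch (not from the paper; `weight` and `initial_form` are our own names), representing a polynomial as a dictionary from exponent tuples to coefficients.

```python
def weight(c, u):
    # the order value c . u of the monomial x^u under the term ordering succ_c
    return sum(ci * ui for ci, ui in zip(c, u))

def initial_form(poly, c):
    # poly: dict mapping exponent tuples u to coefficients a_u
    # returns the sum of terms a_u x^u whose order c . u is maximal
    w = max(weight(c, u) for u in poly)
    return {u: a for u, a in poly.items() if weight(c, u) == w}
```

For the binomial $x - y$ and $c = (2,1)$, the term $x$ has the larger weight, so the initial form is $x$; for $c = (1,1)$ the two terms tie and the initial form is the whole binomial.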
Now we consider $c \in \mathbb{R}^m$ up to their initial ideals $\ini_c(I)$. Let $\mathcal{C}_c(I) \subseteq \mathbb{R}^m$ be the equivalence class containing $c$:
$$ \mathcal{C}_c(I) := \{c' \in \mathbb{R}^m: \ini_{c'}(I) = \ini_c(I)\}. $$
In general, $\mathcal{C}_c(I)$ may not be a nice set: for example, it may not be convex \cite{fukuda2007computing}. When $c \in \mathbb{R}^m_{> 0}$, $\mathcal{C}_c(I)$ is convex, and its closure is a polyhedral cone \cite{fukuda2007computing}. Following \cite{fukuda2007computing}, define the \emph{Gr\"obner fan of $I$} to be the collection of closed cones $\overline{\mathcal{C}_c(I)}$ for $c \in \mathbb{R}^{m}_{> 0}$, together with all their non-empty faces. The support of the Gr\"obner fan is called the Gr\"{o}bner region:
$$ \bigcup_{c \in \mathbb{R}^{m}_{> 0}} \overline{\mathcal{C}_c(I)}. $$
If $I$ is homogeneous, then the Gr\"obner region equals $\mathbb{R}^m$ \cite{st94}. If $I$ is not homogeneous, one can homogenize it. Each homogenized version of $I$ is the toric ideal of some matrix $A^h$, called the lift of $A$. This matrix has the form
$$ A^h = \left[ \begin{array}{cc} A & \mathbf{0} \\ \mathbf{1} & \mathbf{1} \end{array} \right], $$
where $\textbf{0}$ is a zero matrix, and $\mathbf{1}$'s are matrices of all ones of appropriate sizes.
A \emph{Gr\"{o}bner basis of $I$} with term ordering $\succ_c$ is a finite subset $S_c \subset I$ such that $\{\ini_c(g): g \in S_c\}$ generates $\ini_c(I)$. It is called minimal if no polynomial $\ini_c(g)$ is a redundant generator of $\ini_c(I)$. It is called reduced if for any two distinct elements $g, g' \in S_c$, no monomial of $g'$ is divisible by $\ini_c(g)$. A \emph{universal Gr\"obner basis of $I$} is a set $S$ that is a Gr\"obner basis with respect to any term ordering $\succ_c$.
Throughout this paper we will only be concerned with three integer programs whose matrices $A$ are totally unimodular, that is, every minor of $A$ is either $+1$, $0$ or $-1$. Such a matrix has a number of nice properties. In particular, define a circuit of $A$ to be a non-zero primitive vector $u$ in the kernel of $A$ with minimal support with respect to set inclusion. If $A$ is totally unimodular, the set
$$ \{x^{u_+} - x^{u_-}: u \mbox{ is a circuit of } A \} $$
is a universal Gr\"obner basis of $I$ \cite[Theorem 5.9]{rekha}. Furthermore, the Gr\"obner fan of $A$ coincides with the secondary fan of $A$, which is dual to regular subdivisions of the point configuration given by the columns of $A$.
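Total unimodularity can be verified by brute force for small matrices. The following Python sketch (illustrative only, not from the paper; it runs in exponential time and is meant only for tiny examples) checks that every square minor has determinant in $\{-1,0,1\}$.

```python
from itertools import combinations

def det(M):
    # determinant by cofactor expansion along the first row;
    # adequate for the tiny minors tested here
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_totally_unimodular(A):
    # every square minor of A must be -1, 0 or +1
    m, n = len(A), len(A[0])
    for k in range(1, min(m, n) + 1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if abs(det([[A[i][j] for j in cols] for i in rows])) > 1:
                    return False
    return True
```

As a sanity check, node-arc incidence matrices of directed graphs, such as the matrix $A_s$ of the all-pairs shortest path program below, are totally unimodular, while a matrix containing a $2 \times 2$ minor of determinant $\pm 2$ is not.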
\subsection{The transport program}
Throughout this paper, we shall be concerned with the transport program and two of its variants, the all-pairs shortest path, and the homogenized all-pairs shortest path programs. These classic integer programs play central roles in defining and understanding tropical types of polytropes, as we shall discuss in the following sections.
Fix $c \in \mathbb{R}^{n \times m}$ and $b \in \mathbb{Z}^{n + m}$. With variables $u \in \mathbb{N}^{n \times m}$, $y \in \mathbb{R}^n$, $z \in \mathbb{R}^m$, the transport program is
\begin{align}
\text{minimize } & \sum_{i\in[n],j\in[m]} u_{ij}c_{ij} \label{transport.p} \tag{P-transport} \\
\text{subject to } & \sum_{j\in [m]}u_{ij} = b_i, \sum_{i\in[n]} u_{ij} = b_j \hspace{1em} \mbox{ for all } i \in [n], j \in [m]. \notag \\
\text{maximize } & \sum_{i\in[n]}y_ib_i - \sum_{j\in[m]}z_jb_j \label{transport.d} \tag{D-transport} \\
\text{subject to } & y_i - z_j \leq c_{ij}, \hspace{1em} \mbox{ for all } i \in [n], j \in [m] \notag
\end{align}
This program defines a transport problem on a directed bipartite graph on $(n, m)$ vertices. Here $c_{ij}$ is the cost to transport an item from $i$ to $j$, $b_i$ is the number of items that node $i$ wants to sell, $b_j$ is the number of items that node $j$ wants to buy, $u_{ij}$ is the number of items to be sent from $i$ to $j$, and $y_i$, $z_j$ are the per-item prices at each node. The primal goal is to choose a transport plan $u \in \mathbb{N}^{n \times m}$ that minimizes costs and meets the targeted sales $b$. The dual goal is to set prices to maximize profit, subject to the transport cost constraint.
The toric ideal associated to this program is
\begin{equation}\label{eqn:it}
I_t = \langle x^u - x^v: \sum_j u_{ij} = \sum_j v_{ij}, \sum_iu_{ij} = \sum_iv_{ij} \mbox{ for all } i \in [n], j \in [m]
\rangle.
\end{equation}
Here the subscript $t$ stands for `transport'. This ideal plays a central role in the classification of tropical polytopes, as we shall discuss in Section \ref{sec:tropical.polytope}.
\subsection{The all-pairs shortest path program}
This is the transport program with $m = n$ and $z = -y$, and cost matrix $c \in \mathbb{R}^{n \times n}$ with $c_{ii} = 0$ for all $i \in [n]$. Explicitly, fix such a cost matrix $c$ and constraint vector $b \in \mathbb{Z}^n$. With variables $u \in \mathbb{N}^{n \times n}$ where $u_{ii} = 0$ for all $i \in [n]$, and $y \in \mathbb{R}^n$, the all-pairs shortest path program is
\begin{align}
\text{minimize } & \sum_{i,j\in[n]} u_{ij}c_{ij} \label{shortest.p} \tag{P-shortest} \\
\text{subject to } & \sum_{j=1}^n u_{ij} - \sum_{j=1}^n u_{ji} = b_i \mbox{ for all } i = 1, \ldots, n. \label{eqn:A} \\
\text{maximize } & \sum_{i=1}^nb_iy_i \label{shortest.d} \tag{D-shortest} \\
\text{subject to } & y_i - y_j \leq c_{ij}, \hspace{1em} \mbox{ for all } i,j\in [n], i \neq j. \label{eqn:lp.pol}
\end{align}
Here one has a simple directed graph on $n$ nodes with no self loops. As before, $b$ is the targeted sales, $c$ is the cost matrix, $u$ defines a transport plan, $y$ is the price vector.
Note that in this problem, each node can both receive and send out items.
The all-pairs shortest path is a basic problem in integer programming. It appears in a variety of applications, one of which is the classification of polytropes (cf. Section \ref{sec:polytropes}). We collect some necessary facts about this program below. These properties can be found in \cite[\S 4]{networkBook}. See \cite[\S 3]{BCOQ} and \cite[\S 4]{butkovic} for treatments in terms of tropical eigenspaces.
\subsubsection{Feasible region, lineality space}
This program is feasible only if $\sum_ib_i = 0$ and $c$ has no negative cycles. Let $R_n$ denote the set of feasible cost matrices $c$. Then
\begin{equation} \label{eqn:recessionCone}
R_n = \{c \in \mathbb{R}^{n^2-n}: c \cdot \chi_\omega \geq 0\}
\end{equation}
where $\chi_\omega$ is the incidence vector of the cycle $\omega$ and $\omega$ ranges over all simple cycles on $n$ nodes. Explicitly, for a cycle $\omega = i_1 \to i_2 \to \ldots \to i_k \to i_1$,
$$ c_{i_1i_2} + c_{i_2i_3} + \ldots + c_{i_ki_1} \geq 0. $$
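Membership in $R_n$ amounts to checking that the weighted digraph of $c$ has no negative cycle, which the Bellman--Ford relaxation scheme detects in polynomial time. Below is an illustrative Python sketch (not from the paper; the names `has_negative_cycle` and `in_Rn` are ours), run from a virtual source connected to every node by a zero-weight edge.

```python
def has_negative_cycle(c):
    # c: n x n cost matrix with zero diagonal; True iff some cycle has negative weight
    n = len(c)
    dist = [0.0] * n  # distances from a virtual source adjacent to every node
    for _ in range(n):
        updated = False
        for i in range(n):
            for j in range(n):
                if i != j and dist[i] + c[i][j] < dist[j]:
                    dist[j] = dist[i] + c[i][j]
                    updated = True
        if not updated:
            return False  # converged: every cycle has non-negative weight
    return True  # still relaxing after n passes: a negative cycle exists

def in_Rn(c):
    # c lies in the feasible region R_n iff no cycle of c is negative
    return not has_negative_cycle(c)
```

For instance, the $2 \times 2$ matrix with off-diagonal entries $-1$ has a 2-cycle of weight $-2$ and lies outside $R_2$.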
The feasible region $R_n$ is a closed cone in $\mathbb{R}^{n^2-n}$. Note that if $c \in R_n$, then $c + c' \in R_n$ for any matrix $c'$ such that $c' \cdot \chi_\omega = 0$ for all cycles $\omega$. The set of such $c'$ forms the lineality space of $R_n$, denoted $\mathsf{lin}(R_n)$:
\begin{equation}\label{eqn:vn}
\mathsf{lin}(R_n) = \{c \in \mathbb{R}^{n^2-n}: c \cdot \chi_\omega = 0\}.
\end{equation}
This is an $(n-1)$-dimensional space, consisting of matrices of the form $c_{ij} = s_i - s_j$ for some $s \in \mathbb{R}^n$. This is the space of flows, with gradient vector $s$. It is also known as the space of strongly consistent matrices in pairwise ranking theory, with $s_i$ interpreted as the score of item $i$ \cite{saari, lekheng}.
\subsubsection{Kleene stars}
To send an item from $i$ to $j$, one can use the path $i \to j$ with cost $c_{ij}$, or the path $i \to k \to j$ with cost $c_{ik} + c_{kj}$, and so on. This shows up in the constraint set (\ref{eqn:lp.pol}): for any triple $i,j,k$, we have $ y_i - y_j = (y_i - y_k) + (y_k - y_j)$, so in addition to $y_i - y_j \leq c_{ij}$, we also have $ y_i - y_j \leq c_{ik} + c_{kj}$, and by induction, $y_i - y_j$ is at most the cost of any path from $i$ to $j$. Thus, the constraint $y_i - y_j \leq c_{ij}$ is tight if and only if $c_{ij}$ is the weight of the shortest path from $i$ to $j$. If $c$ has no negative cycles, then the shortest paths have finite weight. This motivates the following definition.
\begin{defn}\label{defn:kleene.rn}
For $c \in R_n$, the \emph{Kleene star of $c$} is $c^\ast \in \mathbb{R}^{n^2-n}$ where $c^\ast_{ij}$ is the weight of the shortest path from $i$ to $j$.
\end{defn}
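The Kleene star just defined can be computed with the Floyd--Warshall algorithm. A minimal Python sketch (illustrative only, not part of the paper), assuming the input is a matrix in $R_n$ with zero diagonal:

```python
def kleene_star(c):
    # Floyd-Warshall all-pairs shortest paths; assumes c is in R_n
    # (zero diagonal, no negative cycles), so all entries stay finite
    n = len(c)
    d = [row[:] for row in c]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```

By construction the star operation is idempotent: applying `kleene_star` twice gives the same matrix as applying it once.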
To avoid saying `the constraint set of an all-pairs shortest path dual program with given $c$' all the time, we shall call this set the \emph{polytrope of $c$}. Justification for this terminology comes from Proposition \ref{prop:polytrope.generators} in Section \ref{sec:polytropes}.
\begin{defn}[Polytrope of a matrix]\label{defn:pol.c}
Let $c \in R_n$. The polytrope of $c$, denoted $Pol(c)$, is the set
\begin{equation}\label{eqn:pol.c}
Pol(c) = \{y \in \mathbb{R}^n: y_i - y_j \leq c_{ij} \mbox{ for all } i,j \in [n], i \neq j\}.
\end{equation}
\end{defn}
As discussed above, one can always replace $c$ by $c^\ast$ in the facet description of the polytrope of $c$ without changing the set.
\begin{cor}
For $c \in R_n$, $Pol(c) = Pol(c^\ast)$.
\end{cor}
\begin{defn}
The \emph{polytrope region} is
$$ \mathcal{P}_n = \{c \in R_n: c = c^\ast\} \subset \mathbb{R}^{n^2 - n}. $$
\end{defn}
The polytrope region $\mathcal{P}_n$ is a closed cone in $\mathbb{R}^{n^2 - n}$. It is also known as the set of distance matrices $c \in \mathbb{R}^{n \times n}$, since it can be identified with the set of matrices with zero diagonal that satisfy the triangle inequality
$$ \mathcal{P}_n \cong \{c \in \mathbb{R}^{n \times n}: c_{ii} = 0, c_{ij} \leq c_{ik} + c_{kj} \mbox{ for all } i,j,k \in [n]\}. $$
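In this coordinate form, membership in the polytrope region is a direct check of the triangle inequality. An illustrative Python sketch (the function name is ours, not from the paper):

```python
def in_polytrope_region(c):
    # c: n x n matrix with zero diagonal; checks c_ij <= c_ik + c_kj
    # for all i, j, k (the cases k = i and k = j hold trivially)
    n = len(c)
    return all(c[i][j] <= c[i][k] + c[k][j]
               for i in range(n) for j in range(n) for k in range(n))
```

Equivalently, $c$ lies in $\mathcal{P}_n$ exactly when $c$ equals its own Kleene star.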
The map $c \mapsto (c^\ast_{ij},\, i,j \in [n])$ is piecewise linear in each entry. Domains where this map is given by a linear functional for each $i,j \in [n]$ form cones of $\mathbb{R}^{n^2-n}$, and altogether they form the fan of linearity of the polytrope map studied in \cite{tran.combi}. Restricted to the polytrope region, this fan is a polyhedral complex, which we shall also denote by $\mathcal{P}_n$.
\subsubsection{Toric ideal}
Let $I_s$ be the toric ideal associated with the all-pairs shortest path program. The subscript $s$ stands for `shortest path'. As before, we suppress the dependence on $n$ in the notation. This ideal can be written explicitly as
$$
I_s = \langle x_{ij}x_{ji} - 1, x_{ij}x_{jk} - x_{ik} \rangle
$$
where the indices range over all distinct $i,j,k \in [n]$. Write the primal all-pairs shortest path program in standard form, and let $A_s$ be the corresponding matrix that defines the constraint set of the primal. Then $A_s$ is totally unimodular \cite{networkBook}. In particular, $I_s$ is generated by the binomials $x^{u_+} - x^{u_-}$, where $(u_+,u_-)$ is a circuit of $A_s$. As we shall see in Section \ref{sec:main}, a subset of these circuits is crucial for the enumeration of polytropes up to their tropical types.
The Gr\"obner fan of $I_s$ is the central object of study in our paper. We shall write $\mathcal{GF}_n$ for the Gr\"obner fan of $I_s$, emphasizing the dimension. We collect some facts about $\mathcal{GF}_n$
\begin{lem}\label{lem:linspace.gf}
The lineality space of $\mathcal{GF}_n$ is $\mathsf{lin}(R_n)$ defined in (\ref{eqn:vn}).
\end{lem}
\begin{proof}
Let $\mathcal{C}$ be a cone of $\mathcal{GF}_n$. Take $c \in \mathcal{C}$. For $[s_i - s_j] \in \mathsf{lin}(R_n)$, consider $\bar{c} = c + [s_i - s_j]$. That is,
$$ \bar{c}_{ij} = c_{ij} - s_i + s_j. $$
Now, for any cycle $\omega$, $c \cdot \chi_\omega = \bar{c} \cdot \chi_\omega$. Thus for any circuit $(u_+, u_-)$, $c \cdot (u_+ - u_-) = \bar{c} \cdot (u_+ - u_-)$, so $c \cdot u_+ \geq c \cdot u_-$ if and only if $\bar{c} \cdot u_+ \geq \bar{c} \cdot u_-$. Since the matrix of the program (\ref{lp.p}) is totally unimodular, the ideal $I_s$ is generated by circuits. Thus, the term orders $\succ_c$ and $\succ_{\bar{c}}$ are equal, so $\bar{c} \in \mathcal{C}$. That is, every cone $\mathcal{C}$ of $\mathcal{GF}_n$ has lineality space $\mathsf{lin}(R_n)$, so $\mathcal{GF}_n$ has lineality space $\mathsf{lin}(R_n)$.
\end{proof}
\begin{lem}\label{lem:grobner.region}
The Gr\"{o}bner region of $\mathcal{GF}_n$ is $R_n$ defined in (\mbox{e}f{eqn:recessionCone}).
\end{lem}
\begin{proof}
As mentioned, $R_n$ is the feasible region of the integer program (\ref{lp.p}), and thus contains the Gr\"obner region. To show the reverse inclusion, take $c \in R_n$. We need to show that the Gr\"obner cone of $c$ contains a point in the positive orthant $\mathbb{R}_{\geq 0}^{n^2-n}$. Indeed, let $y \in Pol(c)$. Define $\bar{c}$ via
$$\bar{c}_{ij} = c_{ij} - y_i + y_j.$$
Since $y \in Pol(c)$, $c_{ij} \geq y_i - y_j$, so $\bar{c} \in \mathbb{R}_{\geq 0}^{n^2-n}$. By Lemma \ref{lem:linspace.gf}, $\bar{c}$ belongs to the same cone of $\mathcal{GF}_n$ as~$c$. So $\bar{c}$ is the required point.
\end{proof}
\subsection{The homogenized all-pairs shortest path}
Identify $c \in \mathbb{R}^{n^2-n}$ with its matrix form in $\mathbb{R}^{n \times n}$, where $c_{ii} = 0$ for all $i \in [n]$. So far, we have only defined Kleene stars for $c \in \mathbb{R}^{n \times n}$ with zero diagonal and non-negative cycles. We now extend the definition of Kleene stars to general matrices $c \in \mathbb{R}^{n \times n}$. This leads to the problem of weighted shortest paths. In the tropical linear algebra literature, one often goes the other way around: first consider the weighted shortest path problem, derive Kleene stars for general matrices $c$, and then restrict to those in $R_n$ (see \cite{SS08, tran.combi, butkovic, AGG09, kambite, BCOQ}). The reverse formulation, from feasible shortest paths to weighted shortest paths, is not so immediate. However, in the language of Gr\"{o}bner bases, this is a very simple and natural operation: making the fan $\mathcal{GF}_n$ complete by homogenizing $I_s$.
Introduce $n$ variables $x_{11},x_{22}, \ldots, x_{nn}$. Consider the following homogenized version of $I_s$ in the ring $\mathbb{R}[x_{ij}:i,j = 1, \ldots, n]$
$$ I_s^h = \langle x_{ij}x_{ji} - x_{ii}x_{jj}, x_{ij}x_{jk} - x_{ik}x_{kk}, x_{ii} - x_{jj} \rangle $$
where the indices range over all distinct $i,j,k \in [n]$. This is the toric ideal of the following program
\begin{align}
\mbox{minimize } & \sum_{i,j \in [n]} c_{ij}u_{ij} \label{lp.p.prime} \tag{$\mathrm{P^h-shortest}$} \\
\mbox{ subject to } & \sum_{j\neq i, j=1}^n u_{ij} - \sum_{j\neq i, j = 1}^n u_{ji} = b_i \mbox{ for all } i = 1, \ldots, n. \notag \\
& \sum_{i=1}^n\sum_{j=1}^n u_{ij} = b_{n+1}. \notag
\end{align}
Compared to (\ref{shortest.d}), the dual program of (\ref{lp.p.prime}) has one extra variable. It is helpful to keep track of this variable separately. Let $\lambda \in \mathbb{R}$. Write $b^\top = (b_1 \, \ldots \, b_n)$. The dual program to (\ref{lp.p.prime}) is the following.
\begin{align}
\text{maximize } & b^\top y + b_{n+1}\lambda \label{lp.d.prime} \tag{$\mathrm{D^h-shortest}$} \\
\text{subject to } & y_i - y_j - \lambda \leq c_{ij} \mbox{ for all } i,j \in [n] \notag \\
& \lambda \geq c_{ii} \mbox{ for all } i \in [n]. \notag
\end{align}
In fact, $\lambda$ and $y$ can be solved for separately. For example, by adding the constraints involving $c_{ij}$ and $c_{ji}$, we obtain a constraint in $\lambda$ alone:
$$ (y_i - y_j) - \lambda + (y_j - y_i) - \lambda \leq c_{ij} + c_{ji} \hspace{0.5em} \Leftrightarrow \hspace{0.5em} \lambda \geq -\frac{c_{ij} + c_{ji}}{2}. $$
More systematically, set $b$ to be the all-zero vector, $b_{n+1} = 1$, and view the primal program as a linear program over $\mathbb{Q}$. (We can always do this, as there are finitely many decision variables). Then the dual program (\ref{lp.d.prime}) has optimal value $\lambda$. The corresponding primal program becomes
\begin{equation}\label{lp.lambda}
\begin{matrix}
& {\rm Minimize} \,\, \sum_{i,j=1}^n c_{ij} u_{ij}
\,\, \,\, \hbox{subject to} \,\,\,\, u_{ij} \geq 0
\,\,\hbox{ for } 1 \leq i,j \leq n , \\ &
\sum_{i,j=1}^n u_{ij} = 1 \,\,\,\, \hbox{and}\,\,\,
\sum_{j=1}^n u_{ij} = \sum_{k=1}^n u_{ki}
\,\,\hbox{ for all }\, 1 \leq i \leq n.
\end{matrix}
\end{equation}
This program first appeared in \cite{Cg62}. The constraints require $(u_{ij})$ to be a probability distribution on the edges of the graph of $c$ that represents a flow. The set of feasible solutions is a convex polytope called the \emph{normalized cycle polytope}. Its vertices are the uniform probability distributions on directed cycles.
By strong duality, $\lambda$ is precisely the value of the minimum normalized cycle in the graph weighted by $c$. Plugging in this value for $\lambda$, we find that (\ref{lp.p.prime}) is the original all-pairs shortest path problem with new costs $c'_{ij} = c_{ij} - \lambda$, $c'_{ii} = 0$ for all $i,j \in [n]$. This tells us how to define the Kleene star of $c$.
\begin{defn}\label{defn:kleene.general}
Let $c \in \mathbb{R}^{n \times n}$. Let $\lambda(c)$ be the value of the minimum normalized cycle in the graph weighted by $c$. Define $c' \in \mathbb{R}^{n \times n}$ via $c'_{ij} = c_{ij} - \lambda(c)$, $c'_{ii} = 0$. The Kleene star of $c$, denoted $c^\ast$, is the $n \times n$ matrix such that $c^\ast_{ij}$ is the shortest path from $i$ to $j$ in the graph with edge weights $c'$.
\end{defn}
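Definition \ref{defn:kleene.general} is directly computable: $\lambda(c)$ is a minimum mean (normalized) cycle value, and the shortest paths can then be taken by any standard algorithm. The following Python sketch is our own illustration (the function names are ours); it computes $\lambda(c)$ with Karp's algorithm and the closure with Floyd--Warshall.

```python
def min_mean_cycle(c):
    """lambda(c): minimum normalized (mean) cycle value, by Karp's algorithm.

    Works here because the graph of c is complete, so every vertex is
    reachable from the source vertex 0.
    """
    n = len(c)
    INF = float('inf')
    # D[k][v] = minimum weight of a walk of length exactly k from 0 to v
    D = [[INF] * n for _ in range(n + 1)]
    D[0][0] = 0.0
    for k in range(1, n + 1):
        for v in range(n):
            D[k][v] = min(D[k - 1][u] + c[u][v] for u in range(n))
    best = INF
    for v in range(n):
        ratios = [(D[n][v] - D[k][v]) / (n - k)
                  for k in range(n) if D[k][v] < INF]
        best = min(best, max(ratios))
    return best

def kleene_star(c):
    """Kleene star of a general square matrix, as in the definition above:
    subtract lambda(c) off the diagonal, zero the diagonal, then take
    all-pairs shortest paths (Floyd-Warshall)."""
    n = len(c)
    lam = min_mean_cycle(c)
    d = [[0.0 if i == j else c[i][j] - lam for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d
```

For instance, for the matrix with rows $(5,1,4)$, $(2,5,1)$, $(1,2,5)$, the minimum normalized cycle is the $3$-cycle $1\to 2\to 3\to 1$ of mean value $1$, and the resulting Kleene star is the zero matrix.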
This definition reduces to the Kleene star in Definition \ref{defn:kleene.rn} when $c \in R_n$, so in this sense it is an extension of Definition \ref{defn:kleene.rn} to general $n \times n$ matrices. The value $\lambda(c)$ is the tropical eigenvalue of the matrix $c$, and the polytope defined as the constraint set of (\ref{lp.d}) with $(c')^\ast$ is called the tropical eigenspace of $c$. As the names suggest, these objects play important roles in the spectral theory of tropical matrices; see the monographs \cite{butkovic, BCOQ} for key results in this field.
\section{Tropical polytopes and their types}\label{sec:tropical.polytope}
In this section we define tropical polytopes, and review the main theorem of \cite{mikebernd} in terms of the transport problem. Say that a set $P \subset \mathbb{R}^n$ is closed under scalar tropical multiplication if $x \in P$ implies $\lambda \odot x = (\lambda + x_1, \ldots, \lambda + x_n) \in P$ for all $\lambda \in \mathbb{R}$. Such a set can also be regarded as a subset of $\mathbb{TP}^{n-1}$. We will often identify $\mathbb{TP}^{n-1}$ with $\mathbb{R}^{n-1}$. Say that $P \subset \mathbb{TP}^{n-1}$ is a classical polytope if it is a polytope in $\mathbb{R}^{n-1}$ under this identification.
A tropical polytope in $\mathbb{R}^n$ is the tropical convex hull of $m$ points $c_1, \ldots, c_m \in \mathbb{R}^n$
\begin{align*}
\text{tconv}(c_1, \ldots, c_m) &= \{z_1\odot c_1 \oplus \ldots \oplus z_m\odot c_m: z_1,\ldots,z_m \in \mathbb{R}\} \\
&= \{\min(z_1+c_1, \ldots, z_m+c_m): z_1,\ldots,z_m \in \mathbb{R}\}.
\end{align*}
For $c$ an $n \times m$ matrix with columns $c_1, \ldots, c_m$, we will write $\text{tconv}(c)$ for
$\text{tconv}(c_1, \ldots, c_m)$. Rewritten in the tropical algebra, $\text{tconv}(c)$ is the image set of the matrix $c$.
\begin{equation}\label{eqn:tconv.c}
\text{tconv}(c) = \text{tconv}(c_1, \ldots, c_m) = \{y \in \mathbb{R}^n: y = c\odot z \mbox{ for some } z \in \mathbb{R}^m \}.
\end{equation}
Note that a tropical polytope $\text{tconv}(c)$ is closed under scalar tropical multiplication, and hence may be regarded as a subset $\text{tconv}(c) \subseteq \mathbb{TP}^{n-1}$.
Develin and Sturmfels \cite{mikebernd} pioneered the investigation of tropical polytopes. They showed \cite[Lemma 22]{mikebernd} that $\text{tconv}(c)$ is a union of bounded cells. In particular, let $Q_c$ be the constraint set of the dual transport program (\ref{transport.d})
$$ Q_c = \{(y,z) : y_i - z_j \leq c_{ij}, i \in [n], j \in [m] \}. $$
Then each cell of $\text{tconv}(c)$ is the projection onto the $y$ coordinate of a bounded face of~$Q_c$. Such a cell has the form
$$ \{y \in \mathbb{R}^n: y_i = c_{ij} + z_j \mbox{ if and only if } S_{ij} = 1, i \in [n], j \in [m]\}$$
for some matrix $S \in \{0,1\}^{n \times m}$, called its \emph{type}.
\begin{defn}
The type of a tropical polytope $\text{tconv}(c)$ is the set of types of its cells.
\end{defn}
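In min-plus terms, the type of the cell containing a given point $y$ can be computed by residuation: take the largest $z$ with $c \odot z \geq y$ componentwise; then $y \in \text{tconv}(c)$ exactly when $c \odot z = y$, and $S_{ij} = 1$ records the tight inequalities $y_i = c_{ij} + z_j$. A Python sketch (our own illustration, not from the cited sources):

```python
def cell_type(c, y, tol=1e-9):
    """Type matrix S of the cell of tconv(c) containing y (min-plus).

    zstar is the largest z with min_j(c_ij + z_j) >= y_i for all i;
    y lies in tconv(c) iff applying c to zstar recovers y exactly.
    S_ij = 1 iff the inequality y_i <= c_ij + zstar_j is tight.
    """
    n, m = len(c), len(c[0])
    zstar = [max(y[i] - c[i][j] for i in range(n)) for j in range(m)]
    cz = [min(c[i][j] + zstar[j] for j in range(m)) for i in range(n)]
    if any(abs(cz[i] - y[i]) > tol for i in range(n)):
        return None  # y is not in tconv(c)
    return [[1 if abs(y[i] - (c[i][j] + zstar[j])) <= tol else 0
             for j in range(m)] for i in range(n)]
```

For example, with the $3 \times 2$ matrix $c$ having columns $(0,2,1)$ and $(3,0,4)$, the point $y = (0,1,1) = c_1 \oplus (1 \odot c_2)$ lies in $\text{tconv}(c)$ and its cell has type $S$ with rows $(1,0), (0,1), (1,0)$, while $y = (0,0,0)$ lies outside.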
The most effective way to understand cell types of tropical polytopes is via the transport program.
\begin{prop}[\cite{mikebernd}, Lemma 22]\label{prop:mikebernd.grobner}
The tropical polytopes $\text{tconv}(c)$ and $\text{tconv}(c')$ have the same tropical types if and only if $\ini_c(I_t) = \ini_{c'}(I_t)$, where $I_t$ is the transport ideal defined in (\ref{eqn:it}).
\end{prop}
It is worth sketching the idea. The key is to realize that if a bounded face of $Q_c$ is supported by some vector $b$, then the type of the corresponding cell determines the set of optimal transport plans for (\ref{transport.p}) with cost $c$ and constraint $b$, and vice-versa. So $\text{tconv}(c)$ and $\text{tconv}(c')$ have the same tropical type if and only if for each constraint $b \in \mathbb{Z}^{m+n}$, the programs (\ref{transport.p}) with cost $c$ and constraint $b$, and (\ref{transport.p}) with cost $c'$ and constraint $b$ have the same set of optimal transport plans. Now we look at the ideal. Each binomial generator $x^u - x^v$ of $I_t$ is a pair of competing transport plans $(u,v)$ subject to the same constraint $b_i = \sum_j u_{ij} = \sum_j v_{ij}$, $b_j = \sum_i u_{ij} = \sum_i v_{ij}$, for some $b \in \mathbb{Z}^{m+n}$. Therefore, each polynomial in $I_t$ consists of at least two monomials, corresponding to competing transport plans. The partial order $\succ_c$ compares plans: if $u \succ_c v$, then $u$ is a strictly worse plan than $v$. Under the transport cost $c$, $\ini_c(I_t)$ is the `ideal of bad plans': if the monomial $x^u \in \ini_c(I_t)$, then $u$ cannot be an optimal plan. Note, however, that $\succ_c$ is only a partial order. So if there are two optimal plans $u,v$ for some constraint $b$, then $x^u - x^v \in \ini_c(I_t)$. The converse is also true: if $x^u - x^v \in \ini_c(I_t)$ but $x^u, x^v \notin \ini_c(I_t)$, then $u$ and $v$ must be two optimal plans. Thus, if $\ini_c(I_t) = \ini_{c'}(I_t)$, then all bad transport plans under the cost matrix $c$ are exactly the same as those under $c'$, and hence all the optimal plans under $c$ and $c'$ agree. So $\ini_c(I_t) = \ini_{c'}(I_t)$ if and only if for each constraint $b \in \mathbb{Z}^{m+n}$, the programs (\ref{transport.p}) with cost $c$ and constraint $b$, and (\ref{transport.p}) with cost $c'$ and constraint $b$ have the same set of optimal transport plans. This is precisely the conclusion needed.
The linear program (\ref{transport.p}) is totally unimodular, so the Gr\"obner fan equals the secondary fan of $I_t$. The secondary fan is in bijection with regular subdivisions of the point configuration that defines the constraint set of (\ref{transport.p}). In the case of the transport program, this is a product of simplices. So Proposition \ref{prop:mikebernd.grobner} implies the following main theorem of \cite{mikebernd}.
\begin{thm}[\cite{mikebernd}]\label{thm:ds}
Tropical types of tropical polytopes generated by $m$ points in $\mathbb{R}^n$ are in bijection with regular subdivisions of the product of two simplices $\Delta_{m-1} \times \Delta_{n-1}$.
\end{thm}
\section{Polytropes and their types}\label{sec:polytropes}
\begin{defn}
A set $P \subset \mathbb{TP}^{n-1}$ is a \emph{polytrope} if $P$ is a tropical polytope and also an ordinary polytope in $\mathbb{TP}^{n-1}$.
\end{defn}
\begin{defn}
The \emph{dimension} of a polytrope $P$ is the dimension of the smallest affine subspace containing it. Say that $P \subset \mathbb{TP}^{n-1}$ is \emph{full-dimensional} if its dimension is $n-1$.
\end{defn}
Polytropes have appeared in a variety of contexts. The following classical result states that a polytrope is the constraint set of an all-pairs shortest path dual program~(\ref{lp.d}).
It allows one to write a polytrope $P$ as $P = Pol(c)$ for a unique matrix $c \in \mathcal{P}_n$. This justifies why we call $Pol(c)$ the polytrope of $c$ in Definition \ref{defn:pol.c}.
\begin{prop}[\cite{mikebernd, butkovic}]\label{prop:polytrope.generators}
Let $P \subset \mathbb{TP}^{n-1}$ be a non-empty set. The following are equivalent.
\begin{itemize}
\item $P$ is a polytrope.
\item There is a unique $c \in \mathcal{P}_n$ such that $P = Pol(c)$, as defined in (\ref{eqn:pol.c}).
\item There is a unique $c \in \mathcal{P}_n$ such that $P = \text{tconv}(c)$, as defined in (\ref{eqn:tconv.c}).
\end{itemize}
Furthermore, the matrices $c$ in the last two statements coincide.
\end{prop}
Note that we have defined a polytrope $P$ as a set. This creates ambiguity when one speaks of the type of $P$ as a tropical polytope, since the type depends on the choice of generators \emph{and} their orderings. By \cite[Proposition 21]{mikebernd}, every tropical polytope has a unique minimal generating set. A classical result in tropical linear algebra \cite{BCOQ, butkovic} states that a polytrope $P = \text{tconv}(c)$ of dimension $k$ has exactly $k$ minimal tropical generators. Furthermore, they are $k$ columns of $c$, while each of the other $n-k$ columns is a tropical multiple of one of these. Thus, it is natural to take the unique columns of $c$ as \emph{the} ordered set of tropical generators of $P$.
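In practice the minimal generators are easy to extract: two columns of $c$ determine the same point of $\mathbb{TP}^{n-1}$ exactly when their difference is a constant vector. A Python sketch of this selection (our own illustration):

```python
def minimal_generators(c):
    """Indices of one representative column per tropical-scaling class.

    Columns c_i and c_j are tropical scalar multiples of each other iff
    c_i - c_j is a constant vector, so we normalize each column to have
    first entry 0 and keep the first column in each class.
    """
    n, m = len(c), len(c[0])
    seen, reps = set(), []
    for j in range(m):
        col = [c[i][j] for i in range(n)]
        key = tuple(x - col[0] for x in col)  # normalize first entry to 0
        if key not in seen:
            seen.add(key)
            reps.append(j)
    return reps
```

For the idempotent matrix with rows $(0,1,1),(1,0,1),(1,1,0)$ all three columns survive, while for rows $(0,1,2),(-1,0,1),(-2,-1,0)$ every column is a tropical multiple of the first (the polytrope is a single point).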
\begin{defn}
Consider a polytrope $Pol(c)$ in $\mathbb{TP}^{n-1}$. Suppose that $c$ has $k$ unique columns $c_{i_1}, \ldots, c_{i_k}$, for $1 \leq i_1 < i_2 < \ldots < i_k \leq n$, $k \in [n]$. The tropical type of a polytrope is its type as $\text{tconv}(c_{i_1}, \ldots, c_{i_k})$.
\end{defn}
The goal of this paper is to classify polytropes up to their tropical types. By Proposition~\ref{prop:polytrope.generators}, these tropical types are tied to the shortest path ideal $I_s$. A consequence of Proposition \ref{prop:mikebernd.grobner} is the following.
\begin{prop}\label{prop:polytrope.is}
Consider polytropes $Pol(c)$, $Pol(c')$ in $\mathbb{TP}^{n-1}$. Then they have the same tropical type if and only if $\ini_c(I_s) = \ini_{c'}(I_s)$.
\end{prop}
\begin{proof}
By Proposition \ref{prop:polytrope.generators}, $Pol(c) = \text{tconv}(c) = \{y: y = c \odot z \mbox{ for some } z \in \mathbb{R}^n\}$. Since $c \in \mathcal{P}_n$, $c = c \odot c$, and $c_{ii} = 0$ for all $i \in [n]$. So in particular, we can take $z = y$, and it follows that $\ini_c(I_s) = \ini_c(I_t)$. Thus, by Proposition \ref{prop:mikebernd.grobner}, $Pol(c)$ and $Pol(c')$ have the same tropical type if and only if $\ini_c(I_s) = \ini_{c'}(I_s)$.
\end{proof}
As mentioned above, a polytrope of dimension $k$ has exactly $k$ minimal generators. So for $k < n$, a polytrope of dimension $k$ in $\mathbb{TP}^{n-1}$ is just a full-dimensional polytrope of $\mathbb{TP}^k$ embedded into $\mathbb{TP}^{n-1}$. Thus, we shall restrict our study to full-dimensional polytropes. A classical result \cite{BCOQ} states that for $c \in \mathcal{P}_n$, the $i$-th column $c_i$ is a tropical scalar multiple of the $j$-th column $c_j$ if and only if there exists a cycle of value zero going through $i$ and $j$. In particular, the columns of $c$ are distinct if and only if there are no zero cycles involving two nodes or more. In other words,
\begin{lem}\label{lem:full.dim}
A polytrope $Pol(c)$ is full-dimensional if and only if $c \in \mathcal{P}_n \cap R_n^\circ$.
\end{lem}
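Both conditions in Lemma \ref{lem:full.dim} are finitely checkable. In the sketch below (our own illustration, with a tolerance for floating point), membership in $\mathcal{P}_n$ amounts to the zero diagonal together with the triangle inequalities $c_{ij} \leq c_{ik} + c_{kj}$; and since such a $c$ is its own shortest-path matrix, the absence of zero cycles through two or more nodes reduces to checking $c_{ij} + c_{ji} > 0$ for all $i \neq j$.

```python
def is_full_dim_polytrope(c, tol=1e-9):
    """Test whether c defines a full-dimensional polytrope: c lies in P_n
    (zero diagonal, triangle inequalities) and in the interior of R_n
    (no zero cycle through two or more nodes)."""
    n = len(c)
    in_Pn = all(abs(c[i][i]) <= tol for i in range(n)) and all(
        c[i][j] <= c[i][k] + c[k][j] + tol
        for i in range(n) for j in range(n) for k in range(n))
    # c idempotent => the minimum cycle value is min over 2-cycles c_ij + c_ji
    no_zero_cycle = all(c[i][j] + c[j][i] > tol
                        for i in range(n) for j in range(n) if i != j)
    return in_Pn and no_zero_cycle
```

For example, the matrix with rows $(0,1,1),(1,0,1),(1,1,0)$ passes both tests, while rows $(0,1,2),(-1,0,1),(-2,-1,0)$ lie in $\mathcal{P}_n$ but have a zero $2$-cycle, so the polytrope is not full-dimensional.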
Call the restriction of $\mathcal{GF}_n$ to the polytrope region $\mathcal{P}_n$ the \emph{polytrope complex} $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$,
$$ \left.\mathcal{GF}_n\right|_{\mathcal{P}} = \bigcup_{c \in \mathbb{R}^{n^2-n}_{> 0} \cap \mathcal{P}_n} \overline{\mathcal{C}_c(I_s)}. $$
Note that by Lemma \ref{lem:grobner.region}, one has
$$ \left.\mathcal{GF}_n\right|_{\mathcal{P}} = \bigcup_{c \in \mathcal{P}_n} \overline{\mathcal{C}_c(I_s)}.$$
\begin{thm}\label{thm:ds.analogue}
Cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ are in bijection with tropical types of polytropes in $\mathbb{TP}^{n-1}$. Furthermore, those cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ in $R_n^\circ$ are in bijection with tropical types of full-dimensional polytropes in $\mathbb{TP}^{n-1}$.
\end{thm}
\begin{proof}
By Proposition \mbox{e}f{prop:polytrope.generators}, polytropes are tropical polytopes whose matrix of generators $c \in \mathcal{P}_n$. By Proposition \mbox{e}f{prop:polytrope.is}, the types of such tropical polytopes are in bijection with cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$. This proves the first statement. Lemma \mbox{e}f{lem:full.dim} proves the second.
\end{proof}
From Theorem \ref{thm:ds.analogue}, enumerating tropical types of polytropes amounts to enumerating cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$. This is a much smaller polyhedral complex compared to $\mathcal{GF}_n$.
We conclude this section with an interpretation for the open cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$. As an ordinary polytope, a full-dimensional polytrope in $\mathbb{TP}^{n-1}$ has between $n$ and $\binom{2n-2}{n-1}$ vertices. A polytrope $Pol(c)$ in $\mathbb{TP}^{n-1}$ is \emph{maximal} if it has $\binom{2n-2}{n-1}$ vertices as an ordinary polytope.
\begin{lem}\label{lem:maximal}
The polytrope $Pol(c)$ is maximal if and only if $\mathcal{C}_c$ is an open cone of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$. In other words, open cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ are in bijection with maximal polytropes in $\mathbb{TP}^{n-1}$.
\end{lem}
\begin{proof}
From \cite[Corollary 25]{mikebernd}, $\text{tconv}(c)$ has the maximal number $\binom{2n-2}{n-1}$ of vertices if and only if the Gr\"obner cone of $c$ defined with respect to the ideal $I_t$ is open. But for $c \in \mathcal{P}_n$, $\text{tconv}(c) = Pol(c)$ and $\ini_c(I_s) = \ini_c(I_t)$. This means the Gr\"obner cone of $c$ defined with respect to $I_t$ coincides with that defined with respect to $I_s$. This proves the lemma.
\end{proof}
\section{The Polytrope Complex}\label{sec:main}
With Theorem \ref{thm:ds.analogue}, one can use Gr\"obner fan computation software such as \textsf{gfan} \cite{gfan} to enumerate polytropes. However, this does not necessarily elucidate the combinatorial structure of tropical types of polytropes. In this section we state and prove our main results on the structure of the polytrope complex $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$, Theorems \ref{thm:S} and \ref{thm:main}.
These state that $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ equals the refinement of the polyhedral complex $\mathcal{P}_n$ by the \emph{bipartite binomial} fan $\mathcal{BB}_n$. In particular, the open cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$, which are in bijection with maximal polytropes by Lemma \ref{lem:maximal}, are indexed by inequalities amongst bipartite binomials. As an example, we use this fact to compute the six types of maximal polytropes for $n = 4$ by hand.
\subsection{The Polytrope Gr\"obner Basis}
\begin{defn}
The \emph{polytrope Gr\"obner basis} $PGB$ is the union of minimal reduced Gr\"obner bases over the cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$.
\end{defn}
The polytrope Gr\"obner basis plays the role of the universal Gr\"{o}bner basis for the polytrope region, in the sense that it is a Gr\"obner basis with respect to any term ordering~$\succ_c$ for $c \in \mathcal{P}_n$. The minimal condition means that elements of PGB are not redundant. That is, for each $f \in PGB$, there exists a cone $\mathcal{C}_c$ in $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ such that $\ini_c(f)$ is not a redundant generator of $\ini_c(I_s)$. The reduced condition implies that terms in the polytrope Gr\"obner basis are of the form $x^{u_+} - x^{u_-}$, where $u$ is a circuit of $A$. We claim that these terms fall into either one of the following categories: triangle and bipartite.
\begin{defn}[Bipartite monomials and binomials] \label{defn:m.bipartite}
For an integer $m \geq 2$, let $\mathbb{S}_m$ be the set of permutations on $m$ letters, $\Sigma_m \subset \mathbb{S}_m$ be the set of cyclic permutations. Let $K = (k_1 \leq k_2 \leq \ldots \leq k_m)$, $L = (l_1 \leq l_2 \leq \ldots \leq l_m) \subset [n]$ be two sequences of $m$ indices, not necessarily distinct, such that $K \cap L = \emptyset$.
For $\sigma \in \mathbb{S}_m$, $\tau \in \Sigma_m$, define
\begin{equation}\label{eqn:m.bipartite.binomial}
u_+ := k_1\to \sigma(l_1), \ldots, k_m \to \sigma(l_m), \hspace{1em} u_- := k_1 \to (\tau\circ\sigma)(l_1), \ldots, k_m \to (\tau\circ\sigma)(l_m).
\end{equation}
If $(K,\sigma,\tau,L)$ is such that $(u_+,u_-)$ defined above is a circuit of $A_s$, say that $(u_+,u_-)$ is a bipartite binomial, and $u_+$, $u_-$ are bipartite monomials.
\end{defn}
\begin{ex}\label{ex:n4.bipartite} For $n = 4$, there are twelve bipartite monomials and six bipartite binomials. Figure \ref{fig:six} shows the six bipartite binomials, identified with the graphs of $u_+$ and $u_-$.
\vskip-0.5em
\begin{figure}
\caption{The six bipartite binomials for $n = 4$.}
\label{fig:six}
\end{figure}
\end{ex}
\begin{cor}
There are finitely many bipartite binomials.
\end{cor}
\begin{proof}
The set of bipartite binomials is a subset of the set of circuits of $A_s$, which is a matrix of dimension $n \times (n^2-n)$. So there are at most $\binom{n^2-n}{n}$ many circuits.
\end{proof}
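As a brute-force sanity check on the counts above, one can enumerate the bipartite binomials with distinct source and sink indices. The Python sketch below is our own illustration: it ignores the repeated-index case allowed in Definition \ref{defn:m.bipartite}, takes the circuit condition for granted in this restricted setting, and handles cyclic $\tau$ correctly only for $m \leq 3$ (where every $m$-cycle is the one-step shift or its inverse). For $n = 4$ it recovers six binomials and twelve monomials.

```python
from itertools import combinations, permutations

def bipartite_binomials(n, m=2):
    """Bipartite binomials with m distinct sources and m distinct sinks.

    Each monomial is identified with its set of directed edges (k, l).
    Every matching sigma is paired with its one-step cyclic shift; for
    m <= 3 this captures every pair {sigma, tau o sigma} with tau cyclic.
    """
    binomials = set()
    for K in combinations(range(1, n + 1), m):
        rest = [x for x in range(1, n + 1) if x not in K]
        for L in combinations(rest, m):
            for sigma in permutations(L):
                shifted = sigma[1:] + sigma[:1]   # compose with one m-cycle
                u_plus = frozenset(zip(K, sigma))
                u_minus = frozenset(zip(K, shifted))
                binomials.add(frozenset([u_plus, u_minus]))
    return binomials
```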
\begin{prop}\label{prop:ugb.polytrope}
The polytrope Gr\"obner basis is the set of binomials of the form $x^{u_+} - x^{u_-}$, where the pair $(u_+,u_-)$, identified with their graphs, ranges over the following sets:
\begin{itemize}
\item \emph{Triangles:} $u_+ = i \to k \to j$, $u_- = i \to j$ for all distinct $i,k,j \in [n]$.
\item \emph{Bipartite:} $(u_+,u_-)$ is a circuit of the form (\ref{eqn:m.bipartite.binomial}) for some $(K,\sigma,\tau,L)$ in Definition~\ref{defn:m.bipartite}.
\end{itemize}
\end{prop}
\begin{proof}
Let $(u_+,u_-)$ be a circuit of $A_s$. Then $x^{u_+} - x^{u_-}$ is in the polytrope Gr\"obner basis if and only if for some $c \in \mathcal{P}_n$, either $u_+$ or $u_-$ is the optimal transport plan with cost $c$ subject to the net outflow constraint at each node (the Gr\"obner condition), and that the optimality of these plans is not implied by other terms in the polytrope Gr\"obner basis (the minimality condition).
First we show that our candidate set indeed consists of polynomials in the Gr\"obner basis, and that they are not redundant. For each pair $i,j \in [n]$, $i \to j$ is the shortest path from $i$ to $j$ on $\mathcal{P}_n$. Furthermore, for each $k \in [n]$, $k \neq i,j$, there is a face of $\mathcal{P}_n$ defined by $c_{ij} = c_{ik} + c_{kj}$. Thus, the triangle terms are in the PGB.
Now consider a bipartite binomial $x^{u_+} - x^{u_-}$. Define $c \in \mathbb{R}^{n^2 - n}$ via
$$ c_{ij} = \begin{cases} 1 & \mbox{if } i \to j \notin u_+ \mbox{ and } i \to j \notin u_- \\
0 & \mbox{otherwise} \end{cases} $$
for all $i,j \in [n], i \neq j$. Then $c \in \mathcal{P}_n$, and $x^{u_+} - x^{u_-}$ is a non-redundant generator of $\ini_c(I_s)$. So the bipartite binomials are also contained in the PGB.
Now we claim that given the triangles and bipartite binomials, any other circuit must be redundant. Let $(u_+,u_-)$ be a circuit of $A_s$. Since $(u_+,u_-)$ is in the kernel of $A_s$, each node in the graph of $u_+$ and $u_-$ must have the same net outflow. This partitions the support of $u_+$ and $u_-$ into three sets: the sources (those with positive net outflow), the sinks (those with negative net outflow), and the transits (those with zero net outflow). We now consider all possible outflow constraints.
\begin{itemize}
\item \textbf{One sink, one source}. Suppose there is exactly one source $i$ and one sink $j$.
This means $u_+, u_-$ are paths from $i$ to $j$. Consider further subcases based on the length of the paths $u_+, u_-$.
\begin{itemize}
\item $u_-$ is $i \to j$, and $u_+$ is $i \to k \to j$. Then $(u_+,u_-)$ is a triangle term.
\item $u_-$ is $i \to k \to j$, and $u_+$ is $i \to k' \to j$. Since $i \to j$ must be a shortest path on $\mathcal{P}_n$, this means $(u_+,u_-)$ is made redundant by the triangles $(u_+,i\to j)$ and $(u_-,i\to j)$.
\item Either $u_+$ or $u_-$ is of the form $i = i_0 \to i_1 \to \ldots \to i_{m-1} \to i_m = j$ for $m \geq 3$. Then it is a shortest path if and only if $i_r \to i_{r+1} \to i_{r+2}$ is a shortest path from $i_r$ to $i_{r+2}$ for all $r = 0, \ldots, m-2$. Thus, $(u_+,u_-)$ is made redundant by the triangles $(i_r \to i_{r+1} \to i_{r+2}, i_r \to i_{r+2})$ for $r = 0, \ldots, m-2$.
\end{itemize}
\item \textbf{One source or one sink}.
Suppose there are $s \geq 2$ sinks, $1$ source. Since the constraints are integral, one can decompose any transport plan as the union of $s$ plans, one for each sink-source pair. So this reduces to the one sink one source case. The same reduction applies when there are $s \geq 2$ sources, $1$ sink.
\item \textbf{More than one source and sink}.
Suppose there are more than one source and more than one sink. Let $(u_+,u_-)$ be a circuit of $A_s$ satisfying this constraint on the number of sources and sinks. Consider the following cases.
\begin{itemize}
\item Either $u_+$ or $u_-$ contains a path $i \to j \to \ldots \to k$ of length at least two. One can replace it with the path $i \to k$ to form $u'$. Then the new binomial $(u_+,u')$ (or $(u',u_-)$) is a circuit of $A_s$, and it makes $(u_+,u_-)$ redundant.
\item All paths in $u_+$ and $u_-$ are of length 1, that is, the graphs of $u_+$ and $u_-$ are bipartite. Since $(u_+,u_-)$ is in the kernel of $A_s$, the graphs of $u_+$ and $u_-$ must have the same number of edges, say, $m$ edges, for $m \geq 2$. Thus, we can write $u_+ = (K,\sigma,L)$, and $u_- = (K,\sigma',L)$ for $\sigma,\sigma' \in \mathbb{S}_m$, $K \cap L = \emptyset$, where $K$ and $L$ may have repeated indices. Write $\sigma' = \tau \circ \sigma$ for some $\tau \in \mathbb{S}_m$. Now we consider further subcases.
\begin{itemize}
\item $\tau$ has one cycle, that is, it is a cyclic permutation. Then $(u_+,u_-)$ is a bipartite binomial.
\item $\tau$ has more than one cycle. Then the induced bipartite pair $(u_+',u_-')$ on each cycle is another bipartite binomial with strictly smaller support. This contradicts the fact that $(u_+,u_-)$ is a circuit.
\end{itemize}
\end{itemize}
\end{itemize}
\end{proof}
\begin{defn}
For a set $S$ of triangle and bipartite monomials, define the cone $\mathcal{C}_S \subset \mathcal{P}_n$ as follows: $c \in \mathcal{C}_S$ if and only if, for each bipartite monomial $(K,\sigma,L) \in S$ with $|K|=|L|=m$,
\begin{equation}\label{eqn:cone.from.plan1}
c_{k_1\sigma(l_1)} + \ldots + c_{k_m\sigma(l_m)} < c_{k_1\tau(l_1)} + \ldots + c_{k_m\tau(l_m)} \mbox{ for all } \tau \in \mathbb{S}_m, \tau \neq \sigma,
\end{equation}
for each triangle monomial $i \to j \to k \in S$,
\begin{equation}\label{eqn:cone.from.plan2}
c_{ij} + c_{jk} = c_{ik},
\end{equation}
and for all distinct triples $i,j,k \in [n]$, $c_{ij} + c_{jk} \geq c_{ik}$. Say that $S$ is compatible if $\mathcal{C}_S \neq \emptyset$.
\end{defn}
\begin{thm}\label{thm:S}
The map $S \mapsto \mathcal{C}_S$ is a bijection between compatible sets of triangle and bipartite monomials and cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$.
\end{thm}
\begin{proof}
A monomial $x^u$ is not in $\ini_c(I_s)$ if and only if $u$ is an optimal transport plan amongst those with the same sources and sinks. By Proposition~\ref{prop:ugb.polytrope}, each relatively open cone $\mathcal{C}_c$ of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ is defined by a unique set $S$ of triangle and bipartite monomials which are optimal transport plans amongst those with the same sinks and sources.
The optimality of the terms in $S$ is expressed in (\ref{eqn:cone.from.plan1}) and (\ref{eqn:cone.from.plan2}). Therefore, $\mathcal{C}_c = \mathcal{C}_S$. Conversely, if $S$ is compatible, then any $c \in \mathcal{C}_S$ induces the same ordering on the binomials in the PGB, and so $\mathcal{C}_S$ is a non-empty cone of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$. This establishes the claimed bijection.
\end{proof}
\subsection{The fan structure of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$}
In this section, we translate the results of the previous section into a statement about the geometry of the polyhedral complex $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$. Fix sources $K$, sinks $L$, with $|K| = |L| = m \geq 2$. Associate with each pair $\sigma,\tau \in \mathbb{S}_m$, $\sigma \neq \tau$ a hyperplane in $\mathbb{R}^{n^2-n}$ whose normal vector is $u_+ - u_-$ for $u_+, u_-$ defined as
$$ u_+ := k_1\to \sigma(l_1), \ldots, k_m \to \sigma(l_m), \hspace{1em} u_- := k_1 \to \tau(l_1), \ldots, k_m \to \tau(l_m). $$
Let $\mathcal{AB}_n(K,L)$ denote the arrangement of all hyperplanes ranging over all such pairs~$\sigma,\tau$. Note that each chamber of $\mathcal{AB}_n(K,L)$ defines a linear ordering on the $m!$ elements of $\mathbb{S}_m$. Say that two such linear orders are equivalent if they have the same minimum. This induces an equivalence relation $\sim_{\min}$ on the chambers of $\mathcal{AB}_n(K,L)$. Let $\mathcal{BB}_n(K,L)$ be the polyhedral complex obtained by removing faces between adjacent cones which are equivalent under $\sim_{\min}$. Then $\mathcal{BB}_n(K,L)$ has at most $m!$ full-dimensional cones, indexed by the permutation $\sigma \in \mathbb{S}_m$ that achieves the minimum amongst the $m!$ elements of $\mathbb{S}_m$. That is, the cone corresponding to $\sigma \in \mathbb{S}_m$ is defined by~(\ref{eqn:cone.from.plan1}). By construction, one can check that $\mathcal{BB}_n(K,L)$ is a fan coarsening of $\mathcal{AB}_n(K,L)$.
\begin{defn}
The \emph{bipartite binomial fan $\mathcal{BB}_n$} is the refinement of the fans $\mathcal{BB}_n(K,L)$, and the \emph{bipartite binomial arrangement $\mathcal{AB}_n$} is the refinement of the arrangements $\mathcal{AB}_n(K,L)$, over all pairs of sources and sinks $(K,L)$ such that there exists some bipartite monomial with these sources and sinks.
\end{defn}
The name `bipartite binomial arrangement' stems from the fact that $\mathcal{AB}_n$ is an arrangement of the bipartite binomials which appear in the polytrope Gr\"obner basis. Since bipartite binomials are a subset of the set of circuits of $A_s$, $\mathcal{AB}_n$ is a coarsening of the circuit arrangement of $A_s$ studied in \cite{st94}.
\begin{ex} For $n = 4$ and $n=5$, $\mathcal{BB}_n = \mathcal{AB}_n$, and this is the arrangement of hyperplanes
$$\{ c \in \mathbb{R}^{n^2-n}: c_{ik} + c_{jl} - c_{il} - c_{jk} = 0\} $$
for each tuple of distinct indices $i,j,k,l \in [n]$.
\end{ex}
\begin{ex}
Suppose $K = (1,2,3)$, $L = (4,5,6)$. There are $3! = 6$ bipartite monomials with sources $K$ and sinks $L$, shown in Figure \ref{fig:ab} below.
\begin{figure}
\caption{The six bipartite monomials with sources $(1,2,3)$ and sinks $(4,5,6)$.}
\label{fig:ab}
\end{figure}
Each pair of monomials generates a hyperplane. For example, the top-left pair of monomials defines the hyperplane
$$c_{14} + c_{25} + c_{36} - (c_{15} + c_{26} + c_{34}) = 0. $$
The arrangement $\mathcal{AB}_n(K,L)$ is generated by the $\binom{3!}{2} = 15$ hyperplanes from these pairs. In comparison, $\mathcal{BB}_n(K,L)$ has $3! = 6$ full-dimensional cones, each given by 5 inequalities. For instance, the cone indexed by the first monomial in Figure \ref{fig:ab} is given by
\begin{align*}
c_{14} + c_{25} + c_{36}
&< c_{15} + c_{26} + c_{34}, c_{16} + c_{24} + c_{35}, c_{14} + c_{26} + c_{35}, c_{16} + c_{25} + c_{34}, c_{15} + c_{24} + c_{36}.
\end{align*}
\end{ex}
\begin{thm}\label{thm:main}
The Gr\"obner fan on the polytrope region $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ equals the refinement of the polyhedral complex $\mathcal{P}_n$ by $\mathcal{BB}_n$.
\end{thm}
\begin{proof}
By the discussion succeeding Proposition~\ref{prop:ugb.polytrope}, cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ are in bijection with cones $\mathcal{C}_S$ indexed by compatible sets $S$ of triangle and bipartite monomials. By construction, the cones of $\mathcal{P}_n$ are in bijection with all compatible sets of triangle monomials, and the cones of $\mathcal{BB}_n$ \emph{over $\mathbb{R}^{n^2 - n}$} are in bijection with all compatible sets of bipartite monomials. Thus, the conclusion would follow if we can show that every cone of $\mathcal{BB}_n$ has non-empty intersection with $\mathcal{P}_n^\circ$. Since $\mathcal{BB}_n$ is the fan coarsening of $\mathcal{AB}_n$, it is sufficient to show that every cone of $\mathcal{AB}_n$ has non-empty intersection with $\mathcal{P}_n^\circ$. The lineality space of $\mathcal{AB}_n$ is
$$ \mathsf{lin}(R_n) + \mathsf{span}(1, \ldots, 1), $$
where $\mathsf{lin}(R_n)$ is defined in (\ref{eqn:vn}). Over $\mathbb{R}^{n^2-n} \backslash \mathsf{lin}(R_n)$, $\mathcal{P}_n$ is a pointed cone containing the ray $(1, \ldots, 1)$ in its interior. Let us further quotient by the span of this ray. Then $\mathcal{AB}_n \backslash (\mathsf{lin}(R_n) + \mathsf{span}(1, \ldots, 1))$ is a central hyperplane arrangement, and $\mathcal{P}_n^\circ \backslash (\mathsf{lin}(R_n) + \mathsf{span}(1, \ldots, 1))$ is an open neighborhood around the origin. Thus every cone of $\mathcal{AB}_n$ has non-empty intersection with $\mathcal{P}_n^\circ$. This proves the claim.
\end{proof}
\begin{cor}\label{cor:maximal.equals.open.chambers}
The number of combinatorial tropical types of maximal polytropes in $\mathbb{TP}^{n-1}$ is precisely the number of equivalence classes of open cones of $\mathcal{BB}_n$ up to the action of $\mathbb{S}_n$.
\end{cor}
\begin{ex}[Maximal polytropes for $n = 4$]\label{ex:n4.maximal}
Number the binomials in Figure \ref{fig:six} from left to right, top to bottom. Here $\mathcal{BB}_4$ equals the hyperplane arrangement $\mathcal{AB}_4$. An open chamber of $\mathcal{BB}_4$ is a binary vector $z \in \{\pm 1\}^6$, with $z_i = +1$ if in the $i$-th binomial, the left monomial is smaller than the right monomial. For example, $z_2 = +1$ corresponds to the inequality $c_{12} + c_{34} < c_{14} + c_{32}$. There are at most $2^6 = 64$ open chambers in $\mathcal{BB}_4$. Not all 64 possible values of $z$ define a non-empty cone. Indeed, the six normal vectors satisfy exactly one relation:
\vskip-2em
\begin{figure}
\caption{The relation amongst the two-bipartite binomials for $n = 4$.}
\label{eqn:relation}
\end{figure}
This means $(1,-1,1,1,-1,1)$ and $(-1,1,-1,-1,1,-1)$ define empty cones. Thus there are 62 open chambers of $\mathcal{BB}_4$, corresponding to 62 tropical types of maximal polytropes. The symmetric group $\mathbb{S}_4$ acts on the vertices of a polytrope $Pol(c)$ by permuting the labels of the rows and columns of $c$. This translates to an action on the chambers of $\mathcal{BB}_4$. Up to the action of $\mathbb{S}_4$, we found six symmetry classes of chambers, corresponding to six combinatorial tropical types of maximal polytropes. Table \ref{tab:max4} shows a representative for each symmetry class and their orbit sizes. The first five correspond to the five types discovered by Joswig and Kulas, presented in the same order in \cite[Figure 5]{JoswigK10}. The class of size 12 was discovered by Jimenez and de la Puente \cite{jimenez2012six}.
\begin{table}[h]
\begin{tabular}{|c|c|}
\hline
Representative & Orbit size \\
\hline
$(1,1,1,-1,1,1)$ & 6 \\
$(-1,1,1,-1,-1,1)$ & 8 \\
$(1,1,1,1,1,-1)$ & 6 \\
$(-1,1,-1,-1,-1,1)$ & 24 \\
$(-1,-1,1,1,-1,-1)$ & 6 \\
$(-1,-1,-1,-1,1,-1)$ & 12 \\
\hline
\end{tabular}
\vskip0.5em
\caption{Representatives and orbit sizes of the six symmetry classes of maximal polytropes in $\mathbb{TP}^3$.}
\label{tab:max4}
\end{table}
\end{ex}
\section{Polytropes enumeration: algorithms, results and summary}\label{sec:algorithm}
\subsection{Algorithms and results}
We have two algorithms for enumerating combinatorial tropical types of full-dimensional polytropes in $\mathbb{TP}^{n-1}$. Recall that we are enumerating cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ which are \emph{not} on $\partial R_n$, up to the symmetry induced by $\mathbb{S}_n$. The two algorithms differ only in the first step of computing $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$. The first computes $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ using Gr\"{o}bner fan computation software such as \textsf{gfan} \cite{gfan}. In the second algorithm, one computes the polyhedral complex $\mathcal{P}_n$ first, then computes the refinement of its cones by $\mathcal{BB}_n$. Given $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$, one can then remove all cones in $\partial R_n$. We find such cones as follows: for each cone, pick a point $c$ in the interior and compute the minimum cycle weight in the graph with edge weights $c_{ij}$. If the minimum cycle weight is zero, this point comes from a cone on $\partial R_n$, which should therefore be removed. A documented implementation of the first algorithm, with examples for $n = 4$ and input files for $n = 4, 5$ and $6$, is available at \url{https://github.com/princengoc/polytropes}.
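The boundary test in the last step can be sketched as follows. This is an illustrative standalone snippet, not the repository implementation; it treats $c$ as assigning a weight to each ordered pair $(i,j)$ and computes the minimum cycle weight by a Floyd--Warshall pass, which returns the true minimum whenever $c$ lies in $R_n$, so that no cycle has negative weight.

```python
def min_cycle_weight(c):
    """Minimum total weight of a cycle in the complete graph on n nodes
    with edge weights c[i][j], via Floyd-Warshall. Diagonal entries of c
    are ignored."""
    n = len(c)
    INF = float("inf")
    # dist[i][j] = weight of the lightest path i -> j using at least one edge
    dist = [[c[i][j] if i != j else INF for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return min(dist[i][i] for i in range(n))

def on_boundary(c, tol=1e-9):
    """A point of R_n lies on the boundary of R_n exactly when its minimum
    cycle weight is zero (interior points have it strictly positive)."""
    return abs(min_cycle_weight(c)) <= tol
```

For instance, the all-ones weight matrix has minimum cycle weight $2$ (a two-edge cycle) and is flagged as interior, while a matrix containing a zero-weight cycle is flagged as boundary.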
For $n = 4$, we found 1026 symmetry classes of cones in $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$, of which 13 are in $\partial R_n$. Thus, there are 1013 combinatorial tropical types of polytropes in $\mathbb{TP}^3$. Table~\ref{tab:jk.table} classifies the types by the number of vertices of the polytrope. This corresponds to the first column of \cite[Table 1]{JoswigK10}.
\begin{table}[h]
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
\# vertices & 4 & 5 & 6 & 7 & 8 & 9& 10& 11& 12& 13& 14& 15& 16& 17& 18& 19& 20 \\
\hline
\# types & 1 & 1 & 5 & 6 & 34 & 38 & 81 & 101 & 151 & 144 & 154 & 116 & 92 & 46 & 28 & 9 & 6 \\
\hline
\end{tabular}
\vskip1em
\caption{Combinatorial tropical types of full-dimensional polytropes in $\mathbb{TP}^3$, grouped by total number of vertices.}
\label{tab:jk.table}
\end{table}
We also implemented the second algorithm for $n = 4$. We found 273 equivalence classes of cones of the polyhedral complex $\mathcal{P}_4$. Table \ref{tab:gz} groups them by the number of equivalence classes of cones in the refinement $\mathcal{P}_4 \wedge \mathcal{BB}_4$ that they contain. Altogether, we obtain 1013 equivalence classes, agreeing with the output of the first algorithm.
\begin{table}[h]
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c}
$\#$ $F$ & 123 & 10 & 89 &19& 2 & 19 & 2& 3 & 3 & 1& 1 & 1 \\
\hline
$\#$ $(F,z)$ & 1 & 2 & 3 & 5 & 6 & 9 & 15 & 18 & 27 & 37 & 42 & 81
\end{tabular}
\vskip0.5em
\caption{Equivalence classes of cones $F$ of $\mathcal{P}_4$, grouped by the number of equivalence classes of cones in $\left.\mathcal{GF}_4\right|_\mathcal{P}$ that they correspond to. For instance, up to symmetry, there are 123 cones of $\mathcal{P}_4$ which are not subdivided by $\mathcal{BB}_4$, and thus they each yield one cone of $\left.\mathcal{GF}_4\right|_\mathcal{P}$. Up to symmetry, there are 10 cones of $\mathcal{P}_4$ which are subdivided into two by $\mathcal{BB}_4$, 89 cones subdivided into 3, and so on. The sum $123 \cdot 1 + 10 \cdot 2 + 89 \cdot 3 + \ldots + 1 \cdot 81$ equals 1013, agreeing with the number of equivalence classes of polytropes computed by~\textsf{gfan} \cite{gfan}.} \label{tab:gz}
\end{table}
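As a quick standalone consistency check on the two totals above (the variable names below are illustrative), both row sums of Table \ref{tab:gz} can be recomputed directly:

```python
# Rows of the table: the number of symmetry classes of cones F of P_4, and the
# number of classes of cones of GF_4|_P that each such F is subdivided into.
num_F = [123, 10, 89, 19, 2, 19, 2, 3, 3, 1, 1, 1]
cones_per_F = [1, 2, 3, 5, 6, 9, 15, 18, 27, 37, 42, 81]

assert sum(num_F) == 273  # symmetry classes of cones of P_4
assert sum(f * k for f, k in zip(num_F, cones_per_F)) == 1013  # polytrope types
```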
The polytrope complex $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ grows large quickly. For $n = 5$, there are $27248$ open cones, corresponding to combinatorial tropical types of maximal polytropes in $\mathbb{TP}^4$. This is much bigger than six, the corresponding number for $n = 4$. The fan $\mathcal{BB}_5$ is the arrangement $\mathcal{AB}_5$ of $5 \binom{4}{2} = 30$ bipartite binomial hyperplanes.
The orderings of the bipartite binomials which lead to empty cones of $\mathcal{AB}_n$ are precisely those which contain a circuit of the oriented matroid associated with $\mathcal{AB}_n$ \cite{orientedMatroid}. Up to permutation, there are 11 circuits. We list them at \url{https://github.com/princengoc/polytropes/output/n5relations.txt} in a format analogous to that in Figure \ref{eqn:relation}.
Using \textsf{gfan} \cite{gfan}, we could not compute all cones of $\left.\mathcal{GF}_5\right|_\mathcal{P}$ or the open cones of $\left.\mathcal{GF}_6\right|_\mathcal{P}$ on a conventional desktop. However, we believe that such computations should be possible on more powerful machines. The case of the open cones for $n = 6$ is particularly interesting, since this is the smallest $n$ for which $\mathcal{BB}_n$ is a strict coarsening of $\mathcal{AB}_n$.
\subsection{Summary and open problems}
Tropical types of polytropes in $\mathbb{TP}^{n-1}$ are in bijection with cones of the polyhedral complex $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$. This complex is the restriction of a certain
Gr\"{o}bner fan $\mathcal{GF}_n \subset\mathbb{R}^{n^2-n}$ to a certain cone $\mathcal{P}_n$. We showed that $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ equals the refinement of several fans. These fans are significantly smaller than $\mathcal{GF}_n$, giving a computational advantage over brute force approaches. We utilized these results to enumerate all combinatorial tropical types of full-dimensional polytropes in $\mathbb{TP}^3$, and those of maximal polytropes in $\mathbb{TP}^4$.
Theorem \ref{thm:S} establishes a bijection between cones of $\left.\mathcal{GF}_n\right|_{\mathcal{P}}$ and compatible sets of triangles and bipartite monomials. The central open question is to give an intrinsic characterization of this compatibility. This question has been answered for triangle monomials alone in \cite{tran.combi}, where sets of compatible triangles are indexed by a certain collection of trees. However, we do not know of such characterizations for the bipartite monomials. A characterization of compatibility amongst the bipartite monomials would potentially allow one to enumerate the open cones of $\mathcal{BB}_n$ up to the $\mathbb{S}_n$ action. This number is precisely the number of tropical types of maximal polytropes. There are obvious requirements, such as: if $(K,\sigma,L)$ is in the set, then any bipartite subgraph of $(K,\sigma,L)$ must also be in the set. However, this requirement alone is not enough. For instance, for $n = 4$, of the $64$ sets of bipartite monomials that satisfy the subgraph requirement, only $62$ define non-empty cones and thus are compatible (cf. Example \ref{ex:n4.maximal}). Even this example is not representative, as in this case $\mathcal{BB}_4$ is the arrangement $\mathcal{AB}_4$, while in general $\mathcal{BB}_n$ is not a hyperplane arrangement.
\end{document} |
\begin{document}
\begin{abstract}
We consider electrodiffusion of ions in fluids, described by the Nernst-Planck-Navier-Stokes system, in three dimensional bounded domains, with mixed blocking (no-flux) and selective (Dirichlet) boundary conditions for the ionic concentrations and Robin boundary conditions for the electric potential, representing the presence of an electrical double layer. We prove global existence of strong solutions for large initial data in the case of two oppositely charged ionic species. The result holds unconditionally in the case where fluid flow is described by the Stokes equations. In the case of Navier-Stokes coupling, the result holds conditionally on Navier-Stokes regularity. We use a simplified argument to also establish global regularity for the case of purely blocking boundary conditions for the ionic concentrations, for two oppositely charged ionic species, and also for more than two species if the diffusivities are equal and the magnitudes of the valences are equal.
\end{abstract}
\keywords{electroconvection, ionic electrodiffusion, Nernst-Planck, Navier-Stokes, electrical double layer}
\noindent\thanks{\em{MSC Classification: 35Q30, 35Q35, 35Q92.}}
\maketitle
\section{Introduction}
We study the \textit{Nernst-Planck-Navier-Stokes} (NPNS) system in a connected, bounded domain $\Omega\subset\mathbb{R}^3$ with smooth boundary, which models the electrodiffusion of ions in a fluid in the presence of boundaries. The ions diffuse under the influence of their own concentration gradients and are transported by the fluid and an electric field, which is generated by the local charge density and an externally applied potential. The fluid is forced by the electrical force exerted by the ionic charges. The time evolution of the ionic concentrations is determined by the \textit{Nernst-Planck} equations,
\begin{equation}
\partial_t c_i+u\cdot\nabla c_i=D_i{\mbox{div}\,}(\nabla c_i+z_ic_i\nabla\Phi),\quad i=1,...,m\label{np}
\end{equation}
coupled to the Poisson equation
\begin{equation}
-\epsilon\Delta\Phi=\sum_{i=1}^m z_ic_i=\rho\label{pois}
\end{equation}
and to the \textit{Navier-Stokes} system,
\begin{equation}
\partial_t u+u\cdot\nabla u -\nu\Delta u+\nabla p=-K\rho\nabla\Phi,\quad {\mbox{div}\,} u=0\label{nse}
\end{equation}
or to the \textit{Stokes} system
\begin{equation}
\partial_t u-\nu\Delta u+\nabla p=-K\rho\nabla\Phi,\quad {\mbox{div}\,} u=0.\label{stokes}
\end{equation}
In this latter case we refer to the system as the \textit{Nernst-Planck-Stokes} (NPS) system.
The function $c_i$ is the local ionic concentration of the $i$-th species, $u$ is the fluid velocity, $p$ is the pressure, $\Phi$ is a rescaled electrical potential, and $\rho$ is the local charge density. The constant $z_i\in\mathbb{R}$ is the ionic valence of the $i$-th species. The constants $D_i>0$ are the ionic diffusivities, and $\epsilon>0$ is a rescaled dielectric permittivity of the solvent, proportional to the square of the Debye length $\lambda_D$, which is the characteristic length scale of the electrical double layer in a solvent \cite{rubibook}. The constant $K>0$ is a coupling constant given by the product of Boltzmann's constant $k_B$ and the temperature $T_K$. Finally, $\nu>0$ is the kinematic viscosity of the fluid. The dimensional counterparts of $\Phi$ and $\rho$ are given by $(k_BT_K/e)\Phi$ and $e\rho$, respectively, where $e$ is the elementary charge.
For the ionic concentrations $c_i$ we consider \textit{blocking} (no-flux) boundary conditions,
\begin{equation}
(\partial_n c_i(x,t)+z_ic_i(x,t)\partial_n\Phi(x,t))_{|\partial\Omega}=0 \label{bl}
\end{equation}
where $\partial_n$ is the outward normal derivative along $\partial\Omega$. This boundary condition represents a surface that is impermeable to the $i$-th ionic species. For regular enough solutions, blocking boundary conditions imply that the total concentration $\int_\Omega c_i\,dx$ is conserved, as can formally be seen by integrating (\ref{np}) over $\Omega$. For $c_i$, we also consider \textit{selective} (Dirichlet) boundary conditions,
\begin{equation}
c_i(x,t)_{|\pmbartial\Omega}=\gamma_i>0,\label{DI}
\end{equation}
which, in electrochemistry \cite{davidson,rubishtil}, represents an ion-selective (permselective) membrane that maintains a fixed concentration of ions.
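In the blocking case, the conservation of the total concentration noted above follows from a one-line formal computation: integrating (\ref{np}) over $\Omega$, and using ${\mbox{div}\,} u=0$ together with the impermeability of the boundary ($u\cdot n=0$ on $\partial\Omega$) to discard the advective term, we get
\begin{equation*}
\frac{d}{dt}\int_\Omega c_i\,dx=D_i\int_\Omega{\mbox{div}\,}(\nabla c_i+z_ic_i\nabla\Phi)\,dx=D_i\int_{\partial\Omega}(\partial_n c_i+z_ic_i\partial_n\Phi)\,dS=0,
\end{equation*}
where the last equality holds by (\ref{bl}).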
The boundary conditions for the Navier-Stokes (or Stokes) equations are \textit{no-slip},
\begin{equation}
u(x,t)_{|\pmbartial\Omega}=0.\label{noslip}
\end{equation}
The boundary conditions for $\Phi$ are inhomogeneous Robin,
\begin{equation}
(\partial_n\Phi(x,t)+\tau\Phi(x,t))_{|\partial\Omega}=\xi(x).\label{robin}
\end{equation}
This boundary condition represents the presence of an electrical double layer at the interface of a solvent and a surface \cite{prob,rubibook}. The Robin boundary conditions are derived from the fact that the double layer acts as a plate capacitor. The constant $\tau>0$ represents the capacitance of the double layer, and $\xi:\partial\Omega\to\mathbb{R}$ is a smooth function that represents an externally applied potential on the boundary (see also \cite{bothe,fischer,gajewski,lee}, where the same boundary conditions are used in similar contexts).
In this paper, we discuss the question of global regularity of solutions of NPNS and NPS. The NPNS system is a semilinear parabolic system, and in general, such systems can blow up in finite time. For example, the Keller-Segel equations, which share some common features with NPNS (e.g. the dissipative structure, Section \ref{de}), are known to admit solutions that blow up in finite time, even in two dimensions, for large initial conditions \cite{bedro}. The NPNS system includes the Navier-Stokes equations, where the question of large data global regularity, as is well known, is unresolved in three dimensions \cite{cf}. So for NPNS, we cannot expect at this stage to obtain affirmative results on unconditional global regularity. However, global regularity in three dimensions for the NPS system, or even for the Nernst-Planck equations not coupled to fluid flow, is still an open problem except in some special cases, the main obstacle being control of the nonlinear term ${\mbox{div}\,}(c_i\nabla\Phi)$.
In both the physical and mathematical literature, many different boundary conditions are considered for the concentrations $c_i$ and the electric potential $\Phi$, all with different physical meanings. The choice of boundary conditions makes a large difference not only when it comes to determining global regularity, but also in characterizing long time behavior. Global existence and stability of solutions to the uncoupled Nernst-Planck equations is obtained in \cite{biler,biler2,choi,gajewski} for blocking boundary conditions in two dimensions. The full NPNS system is discussed in \cite{schmuck}, where the electric potential is treated as a superposition of an internal potential, determined by the charge density $\rho$ and homogeneous Neumann boundary conditions, and an external, prescribed potential. In this case global weak solutions are obtained in both two and three dimensions. The case of blocking and selective (Dirichlet) boundary conditions for the concentrations and Dirichlet boundary conditions for the potential is considered in \cite{ci} for two dimensions, and the authors obtain global existence of strong solutions and, in the case of blocking or \emph{uniformly} selective boundary conditions, unconditional stability. In \cite{np3d}, these results are extended to three dimensions, for initial conditions that are small perturbations of the steady states. In \cite{bothe,fischer} the authors consider blocking boundary conditions for $c_i$ and Robin boundary conditions for $\Phi$, as we do in this paper, and obtain global regularity and stability in two dimensions and global weak solutions in three dimensions. In \cite{cil}, global regularity in three dimensions is obtained in the case of Dirichlet boundary conditions for both the concentrations and the potential.
In \cite{liu}, the authors establish global existence of weak solutions in the whole space $\mathbb{R}^3$, without boundaries.
As established and used effectively in the works referred to in the previous paragraph, the NPNS system, equipped with blocking or uniformly selective (cf. \cite{ci}) boundary conditions for $c_i$, comes with a dissipative structure (Section \ref{de}), which in particular leads to stable asymptotic behaviors. Deviations from blocking or uniformly selective boundary conditions are known to lead, in general, to instabilities when a large enough electric potential drop is imposed across the spatial domain (e.g. a narrow channel). These so-called electrokinetic instabilities (EKI) are observed both experimentally and numerically, and verified analytically through simplified models \cite{davidson,pham,rubinstein,rubizaltz,zaltzrubi}.
\begin{comment}
In light of these observations, it may come as somewhat of a surprise that in three dimensions, large data global regularity has been established in this unstable regime of selective boundary conditions \cite{cil}, whereas for blocking or uniformly selective boundary conditions, only nonlinear stability results are currently available \cite{np3d}. This apparent discrepancy is due to the fact that a priori bounds resulting in global regularity are obtained fundamentally differently in these two cases. For selective boundary conditions, the fact that one prescribes the boundary values a priori rules out any potential blow up behavior at the boundary. This only leaves potentially bad behavior in the interior, but this, then, is ruled out by the existence of a strong dissipative term, which roughly speaking corresponds to a charged fluid's tendency to neutralize at regions away from the boundaries (i.e. $\rho\approx 0$) \cite{cil,int}. In the case of blocking boundary conditions, potential blow up behavior at the boundary is not a priori ruled out; instead one may make use of the natural dissipative structure, which does not exist in general for selective boundary conditions. In the case of two dimensions, this dissipative structure is enough to initiate a bootstrapping scheme that yields control of higher regularity norms, thus establishing global regularity \cite{ci}. In three dimensions, if Dirichlet boundary conditions are considered for $\Phi$, the same dissipative structure yields only global regularity for small perturbations from steady states \cite{np3d}.
\end{comment}
It is partially these observed instabilities that motivate our current study. One of the simplest configurations for which unstable and complex flow behavior and patterns are observed is when the boundaries exhibit ion-selectivity: many surfaces (membranes) that arise in biology, chemistry, and electrochemistry allow certain ions to penetrate while blocking others \cite{davidson,rubinstein}. Mathematically, this situation can be modelled by mixed boundary conditions wherein, for example, $c_1$ has selective boundary conditions and $c_2$ has blocking boundary conditions. In considering these mixed boundary conditions in three dimensions, the main mathematical difficulties include
\begin{enumerate}
\item nonlinear, nonlocal boundary conditions (blocking)
\item supercriticality of the nonlinear, nonlocal flux terms, ${\mbox{div}\,}(c_i\nabla\Phi)$
\item lack of natural dissipative structure.
\end{enumerate}
We compare our situation with related works \cite{ci}, \cite{bothe}, and \cite{cil}. In \cite{ci} and \cite{bothe} the two dimensional setting allowed for control of the nonlinear term ${\mbox{div}\,}(c_i\nabla\Phi)$ in a large variety of situations, including blocking boundary conditions for $c_i$ and Dirichlet \cite{ci} and Robin \cite{bothe} boundary conditions for $\Phi$. In \cite{ci}, 2D global regularity is shown for mixed boundary conditions, too. However, many important steps of the analysis do not carry over to three dimensions due to the difference in scaling. This is what we mean when we say that ${\mbox{div}\,}(c_i\nabla\Phi)$ is supercritical in three dimensions. On the other hand, in \cite{cil}, the three dimensional setting is considered and global regularity is established when Dirichlet boundary conditions are prescribed for $\Phi$ and also for all $c_i$. The issue of the supercriticality of the nonlinearity is circumvented by, simply put, transferring all the potentially ``bad'' nonlinear behavior to the boundary, where the behavior is a priori controlled due to the boundary conditions. An important ingredient of the analysis is that the boundary conditions give control of \textit{both} $c_1$ and $c_2$ on the boundary. This is not the case for blocking or mixed boundary conditions.
Our current work culminates in Theorem \ref{gr!!} in Section \ref{mbc}, where we consider the NPNS and NPS systems for two oppositely charged ionic species with mixed boundary conditions for $c_i$ (selective for $c_1$ and blocking for $c_2$) and Robin boundary conditions for $\Phi$. We prove large data global regularity, unconditionally for NPS and conditionally on the regularity of the fluid velocity $u$ for NPNS. The general strategy is similar to \cite{cil} in that we transfer all the harmful nonlinearities to the boundary. However, the main difference is that in the mixed boundary conditions scenario, the boundary behavior is not a priori controlled the same way as it is in \cite{cil}. Thus, a careful analysis is necessary to show that the internal dissipative ``forces'' of the system are strong enough to counteract the potentially problematic boundary behavior. A novel ingredient used at this stage is control of the quantity $\|c_1(t)\|_{L^1(\Omega)}$. Aside from this control, the Robin boundary conditions for $\Phi$ play an important role: while they do not prescribe the values of $\Phi$ or of $\partial_n\Phi$ on the boundary, they do weaken the nonlinearity at the boundary just enough so that dissipation dominates. A close inspection of the proof reveals that replacing the Robin boundary conditions on $\Phi$ with Dirichlet boundary conditions (as in \cite{ci,np3d,cil}) does not allow for the same proof to go through. Thus the problem of global regularity for blocking and mixed boundary conditions for $c_i$ with Dirichlet boundary conditions for $\Phi$ is, in general, open for three dimensions. On the other hand, the proof also reveals how much the analysis can be simplified if Neumann boundary conditions were chosen for $\Phi$ (see e.g. \cite{schmuck}).
Robin boundary conditions are situated appropriately in between Dirichlet and Neumann boundary conditions in such a way that they still allow us to take into account applied electric potentials on the boundary (a feature that makes Robin and Dirichlet boundary conditions appealing for the study of the aforementioned electrokinetic instabilities), while retaining some of the mathematically simplifying features of Neumann boundary conditions. {Ultimately, taking Robin boundary conditions to be a physically suitable description of the electrical field at the boundary, Theorem \ref{gr!!} reveals that, in the physically relevant case of three spatial dimensions and assuming sufficient regularity of the fluid velocity field $u$, the NPNS equations do not admit solutions that blow up in finite time (e.g. Dirac mass type aggregation of ions in finite time), and solutions in fact remain regular for all positive time.}
Leading up to Section \ref{mbc}, in Sections \ref{GR} and \ref{mss}, we consider, respectively, a two species and a multiple species setting where all the $c_i$ satisfy blocking boundary conditions {and prove global regularity of solutions, unconditionally for NPS and conditionally on the regularity of the fluid velocity $u$ for NPNS. In the latter setting of multiple species, }we require additionally that the diffusivities and the magnitudes of the ionic valences are equal ($D_1=...=D_m$, $|z_1|=...=|z_m|$) (see also \cite{cil}). Because all the $c_i$ satisfy blocking boundary conditions, there is a natural dissipative structure (Section \ref{de}), which facilitates the analysis, but unlike in the two dimensional case \cite{bothe}, this dissipation alone seems insufficient to prove global regularity. Rather, the dissipation must be supplemented by precise control of the boundary behavior using the Robin boundary conditions on $\Phi$. {On one hand, we consider these cases of \textit{only} blocking boundary conditions for $c_i$ because
energy estimates in these two sections are more concise relative to the mixed boundary conditions case, and they more clearly illustrate the role played by the Robin boundary conditions and the subsequent estimates of boundary terms. On the other hand, the results of these two sections are nontrivial in their own right and also serve to verify that, despite the impenetrable nature of the boundaries (modelled by blocking boundary conditions), if the fluid velocity field remains regular, then blow up of ions near the boundary (or anywhere in the domain) cannot occur in finite time, regardless of the size of the prescribed data for the electrical potential $\Phi$.}
Prior to the proofs of the main theorems, in Section \ref{prelim}, we introduce the relevant function spaces and state a local existence theorem, the proof of which we omit but which can be found in the references provided.
\section{Preliminaries}\label{prelim}
We are concerned with global existence of strong solutions of NPNS and NPS. To define what we mean by a strong solution, we first introduce the relevant function spaces.
We denote by $L^p(\Omega)=L^p$ and $W^{m,p}(\Omega)=W^{m,p}$ the standard Lebesgue spaces and Sobolev spaces, respectively. In the case $p=2$, we denote $W^{m,2}=H^m(\Omega)=H^m$. We also consider Lebesgue spaces on the boundary $\partial\Omega$: $L^p(\partial\Omega)$. In this latter case, we always indicate the underlying domain $\partial\Omega$ to avoid ambiguity. We also denote $L^p_tL^q_x=L^p(0,T;L^q(\Omega))$, $L^p_tW^{m,q}_x=L^p(0,T;W^{m,q}(\Omega)),$ where the time $T$ is made clear from context.
Denoting $\mathcal{V}=\{f\in (C_c^\infty(\Omega))^3\,|\, {\mbox{div}\,} f =0\}$, the spaces $H\subset (L^2(\Omega))^3$ and $V\subset (H^1(\Omega))^3$ are the closures of $\mathcal{V}$ in $(L^2(\Omega))^3$ and $(H^1(\Omega))^3$, respectively. The space $V$ is endowed with the Dirichlet norm $\|f\|_V^2=\int_\Omega |\nabla f|^2\,dx$.
In order to avoid having to explicitly deal with the pressure term in the Navier-Stokes and Stokes equations, we sometimes work with the equations projected onto the space of divergence free vector fields via the Leray projector $\mathbb{P}:L^2(\Omega)^3\to H$,
\begin{align}
\partial_t u+B(u,u)+\nu Au=-K\mathbb{P}(\rho\nabla\Phi)\label{nse'}\\
\partial_t u+\nu Au=-K\mathbb{P}(\rho\nabla\Phi)\label{stokes'}
\end{align}
where $A=-\mathbb{P}\Delta:\mathcal{D}(A)\to H$, $\mathcal{D}(A)=H^2(\Omega)^3\cap V$ is the Stokes operator, and $B(u,u)=\mathbb{P}(u\cdot\nabla u)$ (see \cite{cf} for related theory).
\begin{defi}
We say that $(c_i,\Phi,u)$ is a strong solution of NPNS (\ref{np}),(\ref{pois}),(\ref{nse}) or of NPS (\ref{np}),(\ref{pois}),(\ref{stokes}) with boundary conditions (\ref{bl}) (or (\ref{DI})),(\ref{noslip}),(\ref{robin}) on the time interval $[0,T]$ if $c_i\ge 0$, $c_i\in L^\infty(0,T;H^1)\cap L^2(0,T;H^2)$, $u\in L^\infty(0,T;V)\cap L^2(0,T;\mathcal{D}(A))$ and $(c_i,\Phi,u)$ solve the equations in the sense of distributions and satisfy the boundary conditions in the sense of traces.\label{strong}
\end{defi}
The NPNS/NPS system is a semilinear parabolic system, and local existence and uniqueness of strong solutions have been established by many authors for many different sets of boundary conditions. We refer the reader to \cite{bothe}, where local well-posedness is established for dimensions greater than one, arbitrarily many ionic species, blocking boundary conditions for $c_i$, and Robin boundary conditions for $\Phi$. However, as the authors remark, the proof, based on methods of maximal $L^p$ regularity, can be adapted in a straightforward manner to different boundary conditions, including the mixed boundary conditions considered later in Section \ref{mbc}. Thus we have the following local existence theorem:
\begin{thm}
For initial conditions $0\le c_i(0)\in H^1$, $u(0)\in V$, there exists $T_0>0$ depending on $\|c_i(0)\|_{H^1},\|u(0)\|_V$, the boundary data $\tau,\xi$ (and $\gamma_i$ if (\ref{DI}) is considered), the parameters of the problem $D_i,z_i,\epsilon,\nu,K$, and the domain $\Omega$ such that NPNS (\ref{np}),(\ref{pois}),(\ref{nse}) (and NPS (\ref{np}),(\ref{pois}),(\ref{stokes})) has a unique strong solution $(c_i,\Phi,u)$ on the time interval $[0,T_0]$ satisfying the boundary conditions (\ref{bl}) (or (\ref{DI})),(\ref{noslip}),(\ref{robin}).\label{local}
\end{thm}
\begin{rem}
We stress that the nonnegativity of $c_i$ is included in our definition of a strong solution. That nonnegativity is propagated from nonnegative initial conditions $c_i(0)\ge 0$ is not self-evident. Its proof is included in Appendix \ref{pc}. In fact, as proved in Appendix \ref{pc}, more is true: strict positivity is propagated from strictly positive initial conditions $c_i(0)\ge c>0$.
\end{rem}
Henceforth, all occurrences of the constant $C>0$, with no subscript, refer to a constant depending only on the parameters of the system, the boundary data, and the domain $\Omega$, and this constant may differ from line to line. For brevity, when a constant, other than $C$, is said to depend on the parameters of the system, we mean this to also include boundary data and the domain.
\section{Global Regularity for Blocking Boundary Conditions (Two Species)}\label{GR}
In this section we consider the NPNS and NPS systems for two ($m=2$) oppositely charged ($z_1>0>z_2$) ionic species, both satisfying blocking boundary conditions. In this setting, we prove global existence of strong solutions for the NPS system and the same result, conditional on Navier-Stokes regularity, for the NPNS system.
We begin by proving some a priori bounds that are used for the proof of the global regularity result of this section. We prove some of the estimates in more generality (two or more species) so as to invoke them in Section \ref{mss} as well. In Sections \ref{mss} and \ref{mbc}, for the sake of brevity and fewer repetitive computations, we shall also frequently make references to some estimates from this section that may not hold verbatim but nonetheless hold up to some minor modifications.
\subsection{Dissipation Estimate}\label{de}
The NPNS and NPS systems come with a dissipative structure when blocking boundary conditions for $c_i$ and Robin boundary conditions for $\Phi$ are considered. We prove the following proposition:
\begin{prop}
Let $(c_i,\Phi,u)$ be a strong solution of NPNS or NPS on the time interval $[0,T]$, satisfying boundary conditions (\ref{bl}),(\ref{noslip}),(\ref{robin}). Then the functional
\begin{equation}
V(t)=\frac{1}{2K}\|u\|_H^2+\sum_{i=1}^m\int_\Omega c_i\log c_i\,dx+\frac{\epsilon}{2}\|\nabla\Phi\|_{L^2(\Omega)}^2+\frac{\epsilon\tau}{2}\|\Phi\|_{L^2(\partial\Omega)}^2
\end{equation}
satisfies
\begin{equation}
\frac{d}{dt}V+\mathcal{D}+\frac{\nu}{K}\|\nabla u\|_{L^2}^2=0
\end{equation}
for $t\in[0,T]$, where
\begin{equation}
\mathcal{D}=\sum_{i=1}^m D_i\int_\Omega c_i|\nabla\mu_i|^2\,dx\ge 0
\end{equation}
and $\mu_i$ is the electrochemical potential,
\begin{equation}
\mu_i=\log c_i+z_i\Phi.\label{mu}
\end{equation}
In particular, $V(t)$ is nonincreasing in time.\label{diss}
\end{prop}
\begin{proof}
First we note that (\ref{np}) can be written
\begin{equation}
\partial_t c_i+u\cdot \nabla c_i=D_i{\mbox{div}\,}(c_i\nabla\mu_i).\label{np'}
\end{equation}
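Indeed, by the definition (\ref{mu}) of the electrochemical potential, wherever $c_i>0$ we have
\begin{equation*}
c_i\nabla\mu_i=c_i\left(\frac{\nabla c_i}{c_i}+z_i\nabla\Phi\right)=\nabla c_i+z_ic_i\nabla\Phi,
\end{equation*}
so the right hand sides of (\ref{np}) and (\ref{np'}) coincide.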
Then we multiply (\ref{np'}) by $\mu_i$ and integrate by parts. This part is somewhat formal as we cannot exclude the possibility that $c_i$ attains the value $0$, in which case $\log c_i$ becomes undefined. Thus, to make this rigorous, we can work instead with the quantity $\mu_i^\delta=\log(c_i+\delta)+z_i\Phi$ and later pass to the limit $\delta\to 0$, as done in \cite{bothe}. For conciseness, we stick to the formal computations involving $\mu_i$. On the right hand side of (\ref{np'}), we obtain after summing in $i$,
\beginin{equation}
-\sum_{i=1}^m D_i\int_\Omega c_i|\nabla\mu_i|^2\,dx.
\end{equation}
which is precisely $-\mathcal{D}$. For the terms on the left hand side, we have after summing in $i$ and integrating by parts,
\beginin{equation}
\beginin{aligned}
\sum_{i=1}^m\int_\Omega \pmbartial_t c_i(\log c_i+z_i\Phi)\,dx&=\frac{d}{dt}\sum_{i=1}^m\int_\Omega c_i\log c_i-c_i\,dx+\int_\Omega(\pmbartial_t\rho)\Phi\,dx\\
&=\frac{d}{dt}\sum_{i=1}^m\int_\Omega c_i\log c_i\,dx-\epsilon\int_\Omega\pmbartial_t(\Delta\Phi)\Phi\,dx\\
&=\frac{d}{dt}\sum_{i=1}^m\int_\Omega c_i\log c_i\,dx-\epsilon\int_{\pmbartial\Omega}\pmbartial_t(\pmbartial_n\Phi)\Phi\,dS+\frac{\epsilon}{2}\frac{d}{dt}\int_\Omega|\nabla\Phi|^2\,dx\\
&=\frac{d}{dt}\sum_{i=1}^m\int_\Omega c_i\log c_i\,dx+\epsilon\tau\int_{\pmbartial\Omega}(\pmbartial_t\Phi)\Phi\,dS+\frac{\epsilon}{2}\frac{d}{dt}\int_\Omega|\nabla\Phi|^2\,dx\\
&=\frac{d}{dt}\sum_{i=1}^m\int_\Omega c_i\log c_i\,dx+\frac{\epsilon\tau}{2}\frac{d}{dt}\int_{\pmbartial\Omega}\Phi^2\,dS+\frac{\epsilon}{2}\frac{d}{dt}\int_\Omega|\nabla\Phi|^2\,dx.
\end{aligned}
\end{equation}
In the second line, we used the fact that $\|c_i(t)\|_{L^1}=\|c_i(0)\|_{L^1}$ for all time due to blocking boundary conditions. We also used the Poisson equation for $\Phi$. In the fourth line, we used the Robin boundary conditions (\ref{robin}).
Lastly, for the advective term we obtain after summing,
\beginin{equation}
\beginin{aligned}
\sum_{i=1}^m\int_\Omega u{\widetilde{c}}dot\nabla c_i(\log c_i+z_i\Phi)\,dx=&\sum_{i=1}^m\int_\Omega u{\widetilde{c}}dot\nabla(c_i\log c_i-c_i)\,dx+\int_\Omega (u{\widetilde{c}}dot\nabla\rho)\Phi\,dx\\
=&\int_\Omega (u{\widetilde{c}}dot\nabla\rho)\Phi\,dx\\
=&-\int_\Omega (u{\widetilde{c}}dot\nabla\Phi)\rho\,dx
\end{aligned}
\end{equation}
where in the second and third lines we integrated by parts and used the fact that ${\mbox{div}\,} u=0$. Collecting what we have so far, we have
\beginin{equation}
\frac{d}{dt}\left(\sum_{i=1}^m\int_\Omega c_i\log c_i\,dx+\frac{\epsilon}{2}\|\nabla\Phi\|_{L^2}^2+\frac{\epsilon\tau}{2}\|\Phi\|_{L^2(\pmbartial\Omega)}^2\right)+\mathcal{D}=\int_\Omega(u{\widetilde{c}}dot\nabla\Phi)\rho\,dx.\label{one}
\end{equation}
Next we multiply (\ref{nse}) (or (\ref{stokes})) by $u$ and integrate by parts, noticing that the integral corresponding to the nonlinear term for Navier-Stokes vanishes due to the divergence-free condition,
\beginin{equation}
\frac{1}{2}\frac{d}{dt}\|u\|_{L^2}^2+\nu\|\nabla u\|_{L^2}^2=-K\int_\Omega (u{\widetilde{c}}dot\nabla\Phi)\rho\,dx.\label{two}
\end{equation}
Thus, multiplying (\ref{two}) by $K^{-1}$ and adding it to (\ref{one}), we obtain the conclusion of the proposition.
\end{proof}
\subsection{Uniform $L^2$ Estimate}\label{UL2}
Using the dissipative estimate of the previous subsection, we obtain uniform in time control of $\|c_i\|_{L^2}$ in the case of two oppositely charged species satisfying blocking boundary conditions.
\begin{prop}
Let $(c_i,\Phi,u)$ be a strong solution of NPNS or NPS for two oppositely charged species ($m=2,\, z_1>0>z_2$) on the time interval $[0,T]$, satisfying boundary conditions (\ref{bl}),(\ref{noslip}),(\ref{robin}), and corresponding to initial conditions $0\le c_i(0)\in H^1,\,u(0)\in V$. Then there exists a constant $M_2>0$ independent of time, depending only on the parameters of the system and the initial conditions, such that for each $i$
\begin{equation}
\sup_{t\in[0,T]}\|c_i(t)\|_{L^2}<M_2.\label{m2m2}
\end{equation}\label{L2'}
\end{prop}
\begin{proof}
We multiply (\ref{np}) by $\frac{|z_i|}{D_i}c_i$ and integrate by parts:
\begin{equation}
\begin{aligned}
\frac{|z_i|}{2D_i}\frac{d}{dt}\int_\Omega c_i^2\,dx=&-|z_i|\int_\Omega|\nabla c_i|^2\,dx-z_i|z_i|\int_\Omega c_i\nabla c_i\cdot\nabla\Phi\,dx\\
=&-|z_i|\int_\Omega|\nabla c_i|^2\,dx-z_i|z_i|\frac{1}{2}\int_\Omega \nabla c_i^2\cdot\nabla\Phi\,dx\\
=&-|z_i|\int_\Omega|\nabla c_i|^2\,dx-z_i|z_i|\frac{1}{2\epsilon}\int_\Omega c_i^2\rho\,dx- z_i|z_i|\frac{1}{2}\int_{\partial\Omega}c_i^2\partial_n\Phi\,dS\\
=&-|z_i|\int_\Omega|\nabla c_i|^2\,dx-z_i|z_i|\frac{1}{2\epsilon}\int_\Omega c_i^2\rho\,dx\\
&+z_i|z_i|\frac{\tau}{2}\int_{\partial\Omega}c_i^2\Phi\,dS- z_i|z_i|\frac{1}{2}\int_{\partial\Omega}c_i^2\xi\,dS\\
=&-|z_i|\int_\Omega|\nabla c_i|^2\,dx-z_i|z_i|\frac{1}{2\epsilon}\int_\Omega c_i^2\rho\,dx+I_1^{(i)}+I_2^{(i)}\label{ener}
\end{aligned}
\end{equation}
where in the fourth line we used the Robin boundary conditions (\ref{robin}). We estimate the two boundary integrals using trace inequalities (Lemma \ref{trace}, Appendix):
\begin{equation}
\begin{aligned}
|I_1^{(i)}|\le&C\|\Phi\|_{L^4(\partial\Omega)}\|c_i\|_{L^\frac{8}{3}(\partial\Omega)}^2\\
\le&\|\Phi\|_{H^1(\Omega)}(C_\delta\|c_i\|_{L^1(\Omega)}^2+\delta\|\nabla c_i\|_{L^2(\Omega)}^2)\label{I_1}
\end{aligned}
\end{equation}
and similarly,
\begin{equation}
\begin{aligned}
|I_2^{(i)}|\le&\frac{|z_i|^2\|\xi\|_{L^\infty(\partial\Omega)}}{2}\|c_i\|_{L^2(\partial\Omega)}^2\le C_\delta\|c_i\|_{L^1}^2+\delta\|\nabla c_i\|_{L^2}^2.\label{I_2}
\end{aligned}
\end{equation}
We recall that $\|c_i\|_{L^1}$ remains constant in time, and since by a generalized Poincaré inequality we have
\begin{equation}
\|\Phi\|_{L^2(\Omega)}\le C(\|\nabla\Phi\|_{L^2(\Omega)}+\|\Phi\|_{L^2(\partial\Omega)}),
\end{equation}
we deduce from Proposition \ref{diss} that $\|\Phi\|_{H^1}$ is uniformly bounded in time in terms of the initial data. Thus, choosing
\begin{equation}
\delta=\min\left\{\frac{|z_i|}{4},\frac{|z_i|}{4\sup_t\|\Phi(t)\|_{H^1}}\right\}
\end{equation}
we obtain from (\ref{ener})-(\ref{I_2}), after summing in $i$ and recalling $z_1>0>z_2$,
\begin{equation}
\begin{aligned}
\sum_{i=1}^2\frac{|z_i|}{2D_i}\frac{d}{dt}\|c_i\|_{L^2(\Omega)}^2+\sum_{i=1}^2\frac{|z_i|}{2}\|\nabla c_i\|_{L^2(\Omega)}^2&\le C_b-\frac{1}{2\epsilon}\int_\Omega (z_1^2c_1^2-z_2^2c_2^2)\rho\,dx\\
&= C_b-\frac{1}{2\epsilon}\int_\Omega \rho^2(|z_1|c_1+|z_2|c_2)\,dx\\
&\le C_b.\label{cancellation}
\end{aligned}
\end{equation}
Here, $C_b$ depends on $\sup_t\|\Phi(t)\|_{H^1}$ and $\|c_i(0)\|_{L^1}$, along with the various parameters of the system. Next we use a Gagliardo-Nirenberg inequality to bound
\begin{equation}
\|c_i\|_{L^2}^2\le C(\|\nabla c_i\|_{L^2}^2+\|c_i\|_{L^1}^2)\le C'(\|\nabla c_i\|_{L^2}^2+1),\label{poinc}
\end{equation}
where $C'$ depends on $\|c_i(0)\|_{L^1}$. Thus, for constants $C'_b, C''_b> 0$ depending on $\sup_t\|\Phi(t)\|_{H^1}$, $\|c_i(0)\|_{L^1}$ and the parameters, we obtain from (\ref{cancellation}),
\begin{equation}
\frac{d}{dt}\left(\sum_{i=1}^2\frac{|z_i|}{2D_i}\|c_i\|_{L^2}^2\right)\le -C'_b\left(\sum_{i=1}^2\frac{|z_i|}{2D_i}\|c_i\|_{L^2}^2\right)+C''_b.\label{last}
\end{equation}
Thus from a Grönwall estimate, we find
\begin{equation}
\sum_{i=1}^2\frac{|z_i|}{2D_i}\|c_i(t)\|_{L^2}^2\le \left(\sum_{i=1}^2\frac{|z_i|}{2D_i}\|c_i(0)\|_{L^2}^2\right)e^{-C'_bt}+\frac{C''_b}{C'_b}(1-e^{-C'_bt})\le \sum_{i=1}^2\frac{|z_i|}{2D_i}\|c_i(0)\|_{L^2}^2+\frac{C''_b}{C'_b}
\end{equation}
and (\ref{m2m2}) follows.
\end{proof}
\subsection{Higher Order Estimates}\label{he} Now we bootstrap the dissipative and $L^2$ estimates to obtain some higher order estimates.
\begin{prop}
Let $(c_i,\Phi,u)$ be a strong solution of NPNS or NPS on the time interval $[0,T]$ with $0\le c_i(0)\in H^1\cap L^\infty,\,u(0)\in V$, satisfying boundary conditions (\ref{bl}),(\ref{noslip}),(\ref{robin}). Assume that for each $i$, $c_i$ satisfies a uniform in time $L^2$ bound,
\begin{equation}
\|c_i(t)\|_{L^2}<M_2.\label{L2'''}
\end{equation}
Then there exist constants $M_\infty,M'_2>0$ independent of time, depending only on the parameters of the system, the initial conditions and $M_2$, such that for each $i$
\begin{align}
\sup_{t\in[0,T]}\|c_i(t)\|_{L^\infty}&<M_\infty\label{Mi}\\
\int_{0}^{T}\|\nabla {\tilde{c}_i}(s)\|_{L^2}^2\,ds&< M'_2\label{L2''}
\end{align}
where ${\tilde{c}_i}=c_ie^{z_i\Phi}$. Specifically, for the NPS system, we have
\begin{equation}
\sup_{t\in[0,T]}\|u(t)\|_V^2+\frac{1}{T}\int_0^T\|u(s)\|_{H^2}^2\,ds<B\label{Mu}
\end{equation}
for a constant $B$ independent of time. For the NPNS system, we have instead
\begin{equation}
\sup_{t\in[0,T]}\|u(t)\|_V^2+\int_0^T\|u(s)\|_{H^2}^2\,ds<B_T\label{M'u}
\end{equation}
for a time dependent constant $B_T$ depending also on $U(T)$, where
\begin{equation}
U(T)=\int_0^T\|u(s)\|_V^4\,ds.
\end{equation}\label{L2}
\end{prop>
\begin{rem}
For two oppositely charged species, $m=2$, $z_1>0>z_2$, the hypothesis (\ref{L2'''}) holds due to Proposition \ref{L2'}.
\end{rem}
\begin{rem}
Here, and in later theorems, the assumption $c_i(0)\in L^\infty$ is not, strictly speaking, necessary: the local existence theorem guarantees that $H^1$ initial data is immediately regularized, so that on the interval of existence $[0,T]$ we have $c_i\in L^2(0,T;H^2)$. In particular, for some arbitrarily small $\tilde t>0$ we have $c_i(\tilde t)\in H^2\subset L^\infty$. Thus, below, when we derive a priori upper bounds in terms of $\|c_i(0)\|_{L^\infty}$ (cf. (\ref{sk})), we could instead do so in terms of $\|c_i(\tilde t)\|_{L^\infty}$. To avoid this, we include $c_i(0)\in L^\infty$ in the hypothesis. For later theorems (e.g. in Section \ref{mbc}) whose proofs do not invoke $\|c_i(0)\|_{L^\infty}$, we do not include $c_i(0)\in L^\infty$ in the hypothesis.
\end{rem}
\begin{proof}
It follows from (\ref{pois}), (\ref{L2'''}) and Sobolev estimates that for some constants $P_6,\,p_\infty$ independent of time, we have
\begin{align}
\|\nabla\Phi(t)\|_{L^6}&\le P_6\label{P6}\\
\|\Phi(t)\|_{L^\infty}&\le p_\infty.\label{Piii}
\end{align}
Now we deduce the uniform in time $L^\infty$ bounds, using a Moser-type iteration (see also \cite{bothe,choi,np3d,schmuck}). For $k=2,3,4,\dots$, we multiply (\ref{np}) by $c_i^{2k-1}$ and integrate by parts to obtain
\begin{equation}
\frac{1}{2k}\frac{d}{dt}\|c_i\|_{L^{2k}}^{2k}+\frac{2k-1}{k^2}D_i\|\nabla c_i^k\|_{L^2}^2\le C\frac{2k-1}{k}\|\nabla\Phi\|_{L^6}\|c_i^k\|_{L^3}\|\nabla c_i^k\|_{L^2}.\label{kk}
\end{equation}
We use (\ref{P6}) and interpolate $L^3$ between $L^1$ and $H^1$ to obtain, after a Young's inequality,
\begin{equation}
\frac{d}{dt}\|c_i\|_{L^{2k}}^{2k}+D_i\|\nabla c_i^k\|_{L^2}^2\le C_k\|c_i^k\|_{L^1}^2\label{2k}
\end{equation}
where $C_k$ satisfies, for some $c>0$, some $m$ large enough, and each $k=2,3,4,\dots$,
\begin{equation}
C_k\le ck^m.\label{ck}
\end{equation}
Interpolating $L^2$ between $L^1$ and $H^1$,
\begin{equation}
\|c_i^k\|_{L^2}^2\le C(\|\nabla c_i^k\|_{L^2}^2+\|c_i^k\|_{L^1}^2),
\end{equation}
we obtain from (\ref{2k}),
\begin{equation}
\frac{d}{dt}\|c_i\|_{L^{2k}}^{2k}\le -C\|c_i\|_{L^{2k}}^{2k}+C_k\|c_i^k\|_{L^1}^2=-C\|c_i\|_{L^{2k}}^{2k}+C_k\|c_i\|_{L^{k}}^{2k}\label{2k'}
\end{equation}
for a different $C_k$ still satisfying (\ref{ck}) for some $c$. We define
\begin{equation}
S_k=\max\{\|c_i(0)\|_{L^\infty},\,\sup_t\|c_i(t)\|_{L^k}\}.\label{sk}
\end{equation}
Applying a Grönwall inequality to (\ref{2k'}), we obtain
\begin{equation}
\|c_i\|_{L^{2k}}^{2k}\le\|c_i(0)\|_{L^{2k}}^{2k}+C_kS_k^{2k}\le|\Omega|\|c_i(0)\|_{L^\infty}^{2k}+C_kS_k^{2k}\le Ck^mS_k^{2k}\label{lel}
\end{equation}
for a possibly different $C_k$ still satisfying (\ref{ck}) for some $c$. Assuming without loss of generality that $C\ge 1$ in (\ref{lel}),
\begin{equation}
S_{2k}=\max\{\|c_i(0)\|_{L^\infty},\sup_t\|c_i(t)\|_{L^{2k}}\}\le\max\{\|c_i(0)\|_{L^\infty},C^{\frac{1}{2k}}k^{\frac{m}{2k}}S_k\}=C^{\frac{1}{2k}}k^{\frac{m}{2k}}S_k.\label{ss}
\end{equation}
Setting $k=2^j$, we obtain
\begin{equation}
S_{2^{j+1}}\le C^{\frac{1}{2^{j+1}}}2^{\frac{jm}{2^{j+1}}}S_{2^j}
\end{equation}
and thus for all $J\in\mathbb{N}$
\begin{equation}
S_{2^J}\le C^a2^bS_2<\infty\label{s2j}
\end{equation}
where
\begin{equation}
a=\sum_{j=1}^\infty \frac{1}{2^{j+1}}<\infty,\quad b=\sum_{j=1}^\infty \frac{jm}{2^{j+1}}<\infty.
\end{equation}
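For concreteness, both sums can be evaluated in closed form: using $\sum_{j\ge 1}j2^{-j}=2$,
\begin{equation*}
a=\sum_{j=1}^\infty\frac{1}{2^{j+1}}=\frac{1}{2},\qquad b=\sum_{j=1}^\infty\frac{jm}{2^{j+1}}=\frac{m}{2}\sum_{j=1}^\infty\frac{j}{2^{j}}=m,
\end{equation*}
so that (\ref{s2j}) reads $S_{2^J}\le C^{1/2}2^{m}S_2$.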
Passing $J\to\infty$ in (\ref{s2j}), we obtain (\ref{Mi}).
Next, (\ref{L2''}) follows from (\ref{Mi}), (\ref{Piii}) and Proposition \ref{diss}. Indeed, from the proposition, using the fact that $\mu_i=\log {\tilde{c}_i}$, we obtain
\begin{equation}
\int_0^T\int_\Omega|\nabla {\tilde{c}_i}|^2\,dx\,dt\le C_p\int_0^T\int_\Omega c_i|\nabla\mu_i|^2\,dx\,dt\le M'_2
\end{equation}
for $M'_2$ independent of time, and $C_p$ depending on $p_\infty$ and $M_\infty$.
Next, to prove (\ref{Mu}), we multiply the Stokes equations (\ref{stokes'}) by $Au$ and integrate by parts,
\begin{equation}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\|u\|_V^2+\frac{\nu}{2}\|A u\|_{L^2}^2\le C\|\rho\nabla\Phi\|_{L^2}^2\le C'M_\infty^2\label{nn}
\end{aligned}
\end{equation}
where $C'$ depends on $\sup_t\|\nabla\Phi\|_{L^2}$ (Proposition \ref{diss}). Using the elliptic bound $\|u\|_V\le C\|A u\|_{L^2}$, we obtain
\begin{equation}
\frac{1}{2}\frac{d}{dt}\|u\|_V^2\le -C''\|u\|_V^2+C'M_\infty^2 \label{unps}
\end{equation}
which gives us uniform boundedness of $\|u\|_V$, the first half of (\ref{Mu}). Integrating (\ref{nn}) in time gives us the second half.
Similarly, for NPNS, we multiply the Navier-Stokes equations (\ref{nse'}) by $A u$ and integrate by parts,
\begin{equation}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\|u\|_V^2+\frac{\nu}{2}\|A u\|_{L^2}^2\le& C'M_\infty^2+\|u\|_{L^6}\|\nabla u\|_{L^3}\|A u\|_{L^2}\\
\le& C'M_\infty^2 +C\|u\|_V^\frac{3}{2}\|A u\|_{L^2}^\frac{3}{2}
\end{aligned}
\end{equation}
so that after a Young's inequality we obtain
\begin{equation}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\|u\|_V^2+\frac{\nu}{4}\|A u\|_{L^2}^2\le& C'M_\infty^2+C\|u\|_V^6\label{nnn}
\end{aligned}
\end{equation}
from which we obtain
\begin{equation}
\frac{1}{2}\|u(t)\|_{V}^2+\frac{\nu}{4}\int_0^t\|A u(s)\|_{L^2}^2\,ds\le\left(\frac{1}{2}\|u(0)\|_V^2+C'M_\infty^2t\right)e^{CU(T)}.
\end{equation}
This gives us (\ref{M'u}) and completes the proof of the proposition.
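The last display is a standard Grönwall argument: writing $C\|u\|_V^6=(C\|u\|_V^4)\|u\|_V^2$ in (\ref{nnn}) and multiplying by the integrating factor $e^{-C\int_0^t\|u(s)\|_V^4\,ds}\ge e^{-CU(T)}$ gives
\begin{equation*}
\frac{d}{dt}\left(\frac{1}{2}\|u\|_V^2e^{-C\int_0^t\|u(s)\|_V^4\,ds}\right)+\frac{\nu}{4}\|Au\|_{L^2}^2e^{-CU(T)}\le C'M_\infty^2,
\end{equation*}
and integrating in time and multiplying through by $e^{CU(T)}$ yields the stated bound.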
\end{proof}
\subsection{Proof of Global Regularity}\label{gr}
Now we prove our main global regularity result of this section.
\begin{thm}
For initial conditions $0\le c_i(0)\in H^1$, $u(0)\in V$ and for all $T>0$, NPS (\ref{np}),(\ref{pois}),(\ref{stokes}) for two oppositely charged species ($m=2,\,z_1>0>z_2$) has a unique strong solution $(c_i,\Phi,u)$ on the time interval $[0,T]$ satisfying the boundary conditions (\ref{bl}),(\ref{noslip}),(\ref{robin}). NPNS (\ref{np}),(\ref{pois}),(\ref{nse}) for two oppositely charged species has a unique strong solution on $[0,T]$ satisfying the initial and boundary conditions provided
\begin{equation}
U(T)=\int_0^T\|u(s)\|_V^4\,ds<\infty.
\end{equation}
Moreover, the solution to NPS satisfies (\ref{Mu}) in addition to
\begin{align}
\sup_{t\in[0,T]}\|c_i(t)\|_{H^1}^2+\frac{1}{T}\int_0^T\|c_i(s)\|_{H^2}^2\,ds&\le M\label{M}\\
\sup_{t\in[0,T]}\|\nabla{\tilde{c}_i}(t)\|_{L^2}^2+\int_0^T\|\Delta{\tilde{c}_i}(s) \|_{L^2}^2\,ds&\le M'\label{M'}
\end{align}
for ${\tilde{c}_i}=c_ie^{z_i\Phi}$ and constants $M,M'$ depending only on the parameters of the system and the initial conditions but not on $T$. The solution to NPNS satisfies (\ref{M'u}) in addition to
\begin{equation}
\sup_{t\in[0,T]}\|c_i(t)\|_{H^1}^2+\int_0^T\|c_i(s)\|_{H^2}^2\,ds\le M_T\label{MU}
\end{equation}
for a constant $M_T$ depending on the parameters of the system, the initial conditions, $T$, and $U(T)$.\label{gr!}
\end{thm}
\begin{proof}
We prove the a priori estimates (\ref{M})-(\ref{MU}), which together with the bounds on $u$, (\ref{Mu}) and (\ref{M'u}), and the local existence theorem allow us to uniquely extend a local solution to a global one, by virtue of the fact that the strong norms in Definition \ref{strong} do not blow up before time $T$.
Due to (\ref{Mi}), it follows from elliptic estimates and the embedding $W^{2,p}\hookrightarrow W^{1,\infty}$ for $p>3$ that
\begin{equation}
\|\Phi(t)\|_{W^{1,\infty}}\le P_\infty\label{Pi}
\end{equation}
for all $t$, where $P_\infty$ depends only on the parameters of the system and uniform $L^p$ bounds on $\rho$.
Next, in order to obtain estimates for $\nabla c_i$, we note that the auxiliary variable
\begin{equation}
{\tilde{c}_i}=c_ie^{z_i\Phi}\label{cit}
\end{equation}
satisfies
\begin{equation}
\partial_t{\tilde{c}_i}+u\cdot\nabla{\tilde{c}_i}=D_i\Delta{\tilde{c}_i}-D_iz_i\nabla{\tilde{c}_i}\cdot\nabla\Phi+z_i((\partial_t+u\cdot\nabla)\Phi){\tilde{c}_i}\label{npt}
\end{equation}
together with homogeneous Neumann boundary conditions
\begin{equation}
{\partial_n{\tilde{c}_i}}_{|\partial\Omega}=0.\label{neu}
\end{equation}
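The condition (\ref{neu}) is a direct consequence of the blocking boundary conditions (\ref{bl}): by the definition (\ref{cit}),
\begin{equation*}
\partial_n{\tilde{c}_i}=e^{z_i\Phi}\left(\partial_n c_i+z_ic_i\partial_n\Phi\right)=0\quad\text{on }\partial\Omega.
\end{equation*}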
Multiplying (\ref{npt}) by $-\Delta{\tilde{c}_i}$ and using (\ref{neu}) to integrate by parts, we obtain
\begin{equation}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\|\nabla{\tilde{c}_i}\|_{L^2}^2+D_i\|\Delta{\tilde{c}_i}\|_{L^2}^2=&\int_\Omega (u\cdot\nabla{\tilde{c}_i})\Delta{\tilde{c}_i}\,dx+D_iz_i\int_\Omega(\nabla{\tilde{c}_i}\cdot\nabla\Phi)\Delta{\tilde{c}_i}\,dx\\
&-z_i\int_\Omega((\partial_t+u\cdot\nabla)\Phi){\tilde{c}_i}\Delta{\tilde{c}_i}\,dx\\
=& I_1+I_2+I_3.\label{dci}
\end{aligned}
\end{equation}
We estimate using Hölder and Young's inequalities and Sobolev and interpolation estimates,
\begin{equation}
\begin{aligned}
|I_1|\le&\|u\|_V\|\nabla{\tilde{c}_i}\|_{L^3}\|\Delta{\tilde{c}_i}\|_{L^2}\\
\le& C\|u\|_V\|\nabla{\tilde{c}_i}\|_{L^2}^\frac{1}{2}\|\Delta {\tilde{c}_i}\|_{L^2}^\frac{3}{2}\\
\le&\frac{D_i}{4}\|\Delta {\tilde{c}_i}\|_{L^2}^2+C\|u\|_V^4\|\nabla{\tilde{c}_i}\|_{L^2}^2\label{id}
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
|I_2|\le&C\|\nabla\Phi\|_{L^\infty}\|\nabla {\tilde{c}_i}\|_{L^2}\|\Delta{\tilde{c}_i}\|_{L^2}\\
\le&\frac{D_i}{4}\|\Delta{\tilde{c}_i}\|_{L^2}^2+C_g\|\nabla{\tilde{c}_i}\|_{L^2}^2\label{idd}
\end{aligned}
\end{equation}
where $C_g$ depends on $P_\infty$.
Next we split
\begin{equation}
I_3=-z_i\int_\Omega(u\cdot\nabla\Phi){\tilde{c}_i}\Delta{\tilde{c}_i}\,dx-z_i\int_\Omega(\partial_t\Phi){\tilde{c}_i}\Delta{\tilde{c}_i}\,dx=I_3^1+I_3^2.\label{split}
\end{equation}
First we estimate $I_3^1$. Noting that
\begin{equation}
\|{\tilde{c}_i}\|_{L^3}\le CM_\infty e^{|z_i|P_\infty}=\beta_3
\end{equation}
we bound
\begin{equation}
\begin{aligned}
|I_3^1|\le& C\|u\|_V\|\nabla\Phi\|_{L^\infty}\|{\tilde{c}_i}\|_{L^3}\|\Delta {\tilde{c}_i}\|_{L^2}\\
\le&\frac{D_i}{8}\|\Delta{\tilde{c}_i}\|_{L^2}^2+C'_g\|u\|_V^2
\end{aligned}
\end{equation}
where $C'_g$ depends on $\beta_3$ and $P_\infty$.
In order to bound $I_3^2$, first we note that the Nernst-Planck equations (\ref{np}) can be written
\begin{equation}
\partial_tc_i+u\cdot\nabla c_i=D_i{\mbox{div}\,}(e^{-z_i\Phi}\nabla{\tilde{c}_i})
\end{equation}
so that in particular we have
\begin{equation}
\partial_t\rho=\sum_{i=1}^2z_iD_i{\mbox{div}\,}(e^{-z_i\Phi}\nabla {\tilde{c}_i})-u\cdot\nabla\rho.\label{ptrho}
\end{equation}
We multiply (\ref{ptrho}) by $\epsilon^{-1}\partial_t\Phi$ and integrate by parts. On the left hand side, we have
\begin{equation}
\begin{aligned}
\int_\Omega \partial_t\rho(\epsilon^{-1}\partial_t\Phi)\,dx=-\int_\Omega(\partial_t\Delta\Phi)\partial_t\Phi\,dx=&-\int_{\partial\Omega}(\partial_t\partial_n\Phi)\partial_t\Phi\,dS+\int_\Omega|\nabla\partial_t\Phi|^2\,dx\\
=&\tau\int_{\partial\Omega}|\partial_t\Phi|^2\,dS+\int_\Omega|\nabla\partial_t\Phi|^2\,dx
\end{aligned}
\end{equation}
where in the second line we used the Robin boundary conditions (\ref{robin}). Therefore
\begin{equation}
\begin{aligned}
\tau\|\partial_t\Phi\|_{L^2(\partial\Omega)}^2+\|\partial_t\nabla\Phi\|_{L^2(\Omega)}^2\le& C\sum_{j=1}^2\left|\int_\Omega{\mbox{div}\,}(e^{-z_j\Phi}\nabla{\tilde{c}_j})\partial_t\Phi\,dx\right|+C\left|\int_\Omega{\mbox{div}\,}(u\rho)\partial_t\Phi\,dx\right|\\
\le&C\sum_{j=1}^2\int_\Omega e^{|z_j|P_\infty}|\nabla {\tilde{c}_j}||\partial_t\nabla\Phi|\,dx+C\int_\Omega M_\infty|u||\partial_t\nabla\Phi|\,dx\\
\le&C_{t}\left(\sum_{j=1}^2\|\nabla{\tilde{c}_j}\|_{L^2}+\|u\|_V\right)\|\partial_t\nabla\Phi\|_{L^2}\label{ptg}
\end{aligned}
\end{equation}
where $C_{t}$ depends on $M_\infty,P_\infty$. Therefore, we obtain
\begin{equation}
\left(\tau\|\partial_t\Phi\|_{L^2(\partial\Omega)}^2+\|\partial_t\nabla\Phi\|_{L^2(\Omega)}^2\right)^\frac{1}{2}\le C_{t}\left(\sum_{j=1}^2\|\nabla{\tilde{c}_j}\|_{L^2}+\|u\|_V\right).\label{blaa}
\end{equation}
Now we use a generalized Poincaré inequality, $\|f\|_{L^2(\Omega)}\le C(\|\nabla f\|_{L^2(\Omega)}+\|f\|_{L^2(\partial\Omega)})$, which together with Sobolev's inequality $\|f\|_{L^6}\le C\|f\|_{H^1}$ and (\ref{blaa}), gives us
\begin{equation}
\|\partial_t\Phi\|_{L^6}\le CC_{t}\left(\sum_{j=1}^2\|\nabla{\tilde{c}_j}\|_{L^2}+\|u\|_V\right).\label{pt6}
\end{equation}
Now we bound $I_3^2$ using (\ref{pt6}),
\begin{equation}
\begin{aligned}
|I_3^2|\le& C\|\partial_t\Phi\|_{L^6}\|{\tilde{c}_i}\|_{L^3}\|\Delta{\tilde{c}_i}\|_{L^2}\\
\le&\frac{D_i}{8}\|\Delta{\tilde{c}_i}\|_{L^2}^2+C_3\left(\sum_{j=1}^2\|\nabla{\tilde{c}_j}\|_{L^2}^2+\|u\|_V^2\right)\label{I32}
\end{aligned}
\end{equation}
where $C_3$ depends on $\beta_3$.
Thus, adding the estimates for $I_1,I_2,I_3^1,I_3^2$ and summing in $i$, we obtain from (\ref{dci}),
\begin{equation}
\frac{1}{2}\frac{d}{dt}\sum_{i=1}^2\|\nabla {\tilde{c}_i}\|_{L^2}^2+\sum_{i=1}^2\frac{D_i}{4}\|\Delta{\tilde{c}_i}\|_{L^2}^2\le C_F(\|u\|_V^4+1)\frac{1}{2}\sum_{i=1}^2\|\nabla {\tilde{c}_i}\|_{L^2}^2+C'_F\|u\|_V^2\label{cru}
\end{equation}
where $C_F,C'_F$ depend on $M_\infty,P_\infty$.
In the case of NPNS, we obtain from (\ref{cru}),
\begin{equation}
\begin{aligned}
\frac{1}{2}\sum_{i=1}^2&\|\nabla{\tilde{c}_i}(t)\|_{L^2}^2+\sum_{i=1}^2\frac{D_i}{4}\int_0^t\|\Delta{\tilde{c}_i}(s)\|_{L^2}^2\,ds\\
\le&\left(\frac{1}{2}\sum_{i=1}^2\|\nabla{\tilde{c}_i}(0)\|_{L^2}^2+C'_F\int_0^t\|u(s)\|_V^2\,ds\right)\exp\left(C_F\int_0^t(\|u(s)\|_V^4+1)\,ds\right).
\end{aligned}
\end{equation}
This gives us (\ref{MU}) after converting to the original variables $c_i$ using the definition of ${\tilde{c}_i}$ and the uniform bounds on $c_i,\Phi$.
For the NPS system, we recall from Propositions \ref{diss} and \ref{L2} that $\|u\|_V$ is uniformly bounded in time and that $\|u\|_V^2$ and $\|\nabla {\tilde{c}_i}\|_{L^2}^2$ decay and are integrable in time, with a bound independent of $T$. Therefore, integrating (\ref{cru}) in time, we find that the right hand side is bounded by a constant independent of time, giving us (\ref{M'}). Converting back to the variables $c_i$ gives us (\ref{M}).
\end{proof}
\section{Global Regularity for Blocking Boundary Conditions (Multiple Species)}\label{mss}
The results of Sections \ref{de} and \ref{he} hold for more than two species, $m>2$. Therefore the question of whether we can extend our global regularity result to a multiple species setting boils down to whether we can establish the uniform $L^2$ estimates of Section \ref{UL2} in the current setting. Once such a bound is established, the proof of global regularity for multiple species follows exactly as in the proof of Theorem \ref{gr!}.
In the proof of Proposition \ref{L2'} of Section \ref{UL2}, specifically in (\ref{cancellation}), we use the fact that, due to the assumption of two species and $z_1>0>z_2$,
\begin{equation}
(z_1^2c_1^2-z_2^2c_2^2)\rho=(|z_1|c_1-|z_2|c_2)(|z_1|c_1+|z_2|c_2)\rho=\rho^2(|z_1|c_1+|z_2|c_2)\ge 0.\label{sign}
\end{equation}
Even for the simplest extension, taking three species with $z_1,z_2>0>z_3$, the corresponding leftmost term in (\ref{sign}) is $(z_1^2c_1^2+z_2^2c_2^2-z_3^2c_3^2)\rho$, which in general need not be nonnegative. Due to this fact, an analogous proof does not work for more than two species. However, there is one special multiple species setting where we do have global regularity: namely, if all the diffusivities are equal, $D_1=\dots=D_m$, and all the valences have the same magnitude (e.g. $z_1=z_2=1=-z_3$); see also \cite{cil}. Indeed, we have the following theorem.
\begin{thm}
For initial conditions $0\le c_i(0)\in H^1\,(i=1,\dots,m)$, $u(0)\in V$ and for all $T>0$, if $D_1=\dots=D_m=D$ for a common value $D$ and $|z_1|=\dots=|z_m|=z$ for a common value $z$, then NPS (\ref{np}),(\ref{pois}),(\ref{stokes}) has a unique strong solution $(c_i,\Phi,u)$ on the time interval $[0,T]$ satisfying the boundary conditions (\ref{bl}),(\ref{noslip}),(\ref{robin}). For NPNS (\ref{np}),(\ref{pois}),(\ref{nse}), under the same hypotheses, a unique strong solution on $[0,T]$ satisfying the initial and boundary conditions exists provided
\begin{equation}
\int_0^T\|u(s)\|_V^4\,ds<\infty.
\end{equation}
Moreover, in the case of NPS, the solution satisfies the bounds (\ref{Mi}),(\ref{L2''}),(\ref{Mu}),(\ref{M}),(\ref{M'}) for each $i$. In the case of NPNS, the solution satisfies the bounds (\ref{Mi}),(\ref{L2''}),(\ref{M'u}),(\ref{MU}) for each $i$.\label{grm}
\end{thm}
\begin{proof}
We only prove the result corresponding to Proposition \ref{L2'}. As discussed at the beginning of this section, the other results leading to the proof of global regularity extend naturally from Section \ref{GR}.
This special case of multiple species effectively reduces to a two species setting. The variables $\rho$ and $\sigma=z(c_1+c_2+\dots+c_m)\ge 0$ satisfy
\begin{align}
\partial_t\rho+u\cdot\nabla\rho&=D{\mbox{div}\,}(\nabla\rho+z\sigma\nabla\Phi)\label{rhoeq}\\
\partial_t\sigma+u\cdot\nabla\sigma&=D{\mbox{div}\,}(\nabla\sigma+z\rho\nabla\Phi)\label{sigmaeq}
\end{align}
and boundary conditions
\begin{equation}
\begin{aligned}
(\partial_n\rho+z\sigma\partial_n\Phi)_{|\partial\Omega}=(\partial_n\sigma+z\rho\partial_n\Phi)_{|\partial\Omega}=0.
\end{aligned}
\end{equation} Multiplying (\ref{rhoeq}) and (\ref{sigmaeq}) by $\rho$ and $\sigma$ respectively and integrating by parts, we obtain
\begin{align}
\frac{1}{2D}\frac{d}{dt}\|\rho\|_{L^2}^2+\|\nabla\rho\|_{L^2}^2&=-z\int_\Omega\sigma\nabla\rho\cdot\nabla\Phi\,dx\label{bla}\\
\frac{1}{2D}\frac{d}{dt}\|\sigma\|_{L^2}^2+\|\nabla\sigma\|_{L^2}^2&=-z\int_\Omega\rho\nabla\sigma\cdot\nabla\Phi\,dx\label{blabla}.
\end{align}
We estimate the right hand side of (\ref{bla}) using the boundary conditions,
\begin{equation}
\begin{aligned}
-z\int_\Omega\sigma\nabla\rho\cdot\nabla\Phi\,dx=&-z\int_{\partial\Omega}\sigma\rho\partial_n\Phi\,dS+z\int_\Omega\rho\nabla\sigma\cdot\nabla\Phi\,dx-\frac{z}{\epsilon}\int_\Omega\rho^2\sigma\,dx\\
\le&z\tau\int_{\partial\Omega}\sigma\rho\Phi\,dS-z\int_{\partial\Omega}\sigma\rho\xi\,dS+z\int_\Omega\rho\nabla\sigma\cdot\nabla\Phi\,dx\\
=&I_1+I_2+z\int_\Omega\rho\nabla\sigma\cdot\nabla\Phi\,dx.\label{bb}
\end{aligned}
\end{equation}
As in (\ref{I_1}) and (\ref{I_2}), we obtain using Lemma \ref{trace} in the Appendix,
\begin{equation}
\begin{aligned}
|I_1|\le& C\|\Phi\|_{L^4(\partial\Omega)}\|\rho\|_{L^\frac{8}{3}(\partial\Omega)}\|\sigma\|_{L^\frac{8}{3}(\partial\Omega)}\\
\le&\|\Phi\|_{H^1(\Omega)}\left(C_\delta(\|\rho\|_{L^1}^2+\|\sigma\|_{L^1}^2)+\delta(\|\nabla\rho\|_{L^2}^2+\|\nabla\sigma\|_{L^2}^2)\right)
\end{aligned}
\end{equation}
and similarly
\begin{equation}
|I_2|\le C_\delta(\|\rho\|_{L^1}^2+\|\sigma\|_{L^1}^2)+\delta(\|\nabla\rho\|_{L^2}^2+\|\nabla\sigma\|_{L^2}^2).\label{I22}
\end{equation}
Thus choosing
\begin{equation}
\delta=\min\left\{\frac{1}{4},\frac{1}{4\sup_t\|\Phi(t)\|_{H^1(\Omega)}}\right\}
\end{equation}
we obtain, by adding (\ref{bla}) and (\ref{blabla}) and using the estimates (\ref{bb})-(\ref{I22}),
\begin{equation}
\frac{1}{2D}\frac{d}{dt}\left(\|\rho\|_{L^2}^2+\|\sigma\|_{L^2}^2\right)+\frac{1}{2}\left(\|\nabla\rho\|_{L^2}^2+\|\nabla\sigma\|_{L^2}^2\right)<R\label{RRRR}
\end{equation}
where $R$ is a constant that depends on uniform bounds on $\|\Phi\|_{H^1},\|\rho\|_{L^1}$ and $\|\sigma\|_{L^1}$, along with the parameters of the system. Thus, applying Gagliardo-Nirenberg inequalities to $\|\nabla\rho\|_{L^2}$ and $\|\nabla\sigma\|_{L^2}$ as in (\ref{poinc}), we conclude after a Grönwall estimate on (\ref{RRRR}) that $\|\rho\|_{L^2}$ and $\|\sigma\|_{L^2}$ remain uniformly bounded in time. In particular, because the $c_i$ are nonnegative, it follows from the boundedness of $\|\sigma\|_{L^2}$ that $\|c_i\|_{L^2}$ is uniformly bounded in time for each $i$. This result replaces Proposition \ref{L2'}.
\end{proof}
\section{Global Regularity for Mixed Boundary Conditions}\label{mbc}
In this section, we again consider NPNS and NPS for two oppositely charged species. Suppose $\partial\Omega$ represents a cation selective membrane, which allows for permeation of cations but blocks anions. As previously mentioned, ion selectivity is typically modelled by Dirichlet boundary conditions \cite{davidson,rubishtil}. So in this case, the boundary conditions for $c_i$, assuming $z_1>0>z_2$, are
\begin{align}
{c_1(x,t)}_{|\partial\Omega}=&\gamma_1>0\label{di}\\
(\partial_n c_2(x,t)+z_2c_2(x,t)\partial_n\Phi(x,t))_{|\partial\Omega}=&0\label{2bl}
\end{align}
where $\gamma_1$ is \textit{constant}. The boundary conditions for $\Phi$ and $u$ are unchanged (see (\ref{noslip}),(\ref{robin})).
As discussed in the introduction, such configurations are known, in general, to lead to electrokinetic instabilities: for large enough voltage drops $\sup\xi -\inf\xi$, one starts seeing vortical flow patterns, and for even larger drops, chaotic behavior, resembling fluid turbulence and reminiscent of thermal turbulence in Rayleigh-Bénard convection \cite{davidson}. Mathematically, the instability of the configuration considered in this section is manifested by the lack of a dissipative bound resembling that of Proposition \ref{diss}.
To prove global regularity in this mixed setting, we aim, as in the previous sections, to obtain a priori control of the growth of the strong norms characterizing strong solutions, namely $c_i\in L^\infty_t H^1_x\cap L^2_tH^2_x,\, u\in L^\infty_t V\cap L^2_tH^2_x.$ Let us remark here that \textit{strong} regularity in fact implies $C^\infty((0,T];C^\infty(\bar\Omega))$ regularity so long as the boundary and boundary conditions are smooth, as is the case here. This equivalence follows from a bootstrapping scheme using standard parabolic theory (see, for example, \cite{evans}). This observation justifies the use of Sard's theorem \cite{jl} in the proof of the following proposition.
\begin{prop}
Let $(c_i,\Phi,u)$ be a strong solution of NPNS (\ref{np}),(\ref{pois}),(\ref{nse}) or NPS (\ref{np}),(\ref{pois}),(\ref{stokes}) for two oppositely charged species ($m=2, z_1>0>z_2$) on the time interval $[0,T]$, satisfying boundary conditions (\ref{di}),(\ref{2bl}),(\ref{noslip}),(\ref{robin}) and initial conditions $0\le c_i(0)\in H^1,\,u(0)\in V$. Then there exist ${\tilde M_2},\tilde m_2>0$ depending on the parameters of the system and the initial conditions such that for each $i=1,2$
\begin{equation}
\sup_{t\in[0,T]}\|c_i(t)\|_{L^2}<\tilde M_2 e^{\tilde m_2T^3}=M_2(T).\label{M2t}
\end{equation}\label{gL2}
\end{prop}
\begin{proof}
\textbf{Step 1. $L^\infty_t L^1_x$ bounds on $c_i$.} Integrating (\ref{np}) for $i=2$ over $\Omega$, it follows from (\ref{2bl}) that $\|c_2(t)\|_{L^1}=\|c_2(0)\|_{L^1}$ for all $t\ge 0$. To obtain control of $\|c_1(t)\|_{L^1}$, we proceed as follows. We fix a $C^1$, nondecreasing function $\chi:\mathbb{R}\to[0,\infty)$ such that $\chi(s)=0$ for $s\in (-\infty,0]$, $\chi(s)=s^2$ for $s\in (0,\frac{1}{2})$, $\chi(s)=1$ for $s\in[1,\infty)$, and $\chi'\le 2$ everywhere. We note that $(\chi(s))^\alpha$ converges pointwise to the indicator function ${\bf{1}}_{s>0}(s)$ as $\alpha\to 0^+.$
Now we fix a time $t>0$, and we note that since the mapping $x\in\bar\Omega\mapsto {\bar c}_1(t,x):=c_1(t,x)-\gamma_1$ is smooth, it follows from Sard's theorem that there exists a sequence $\beta_n\to 0$ such that each $\beta_n$ is a regular value of ${\bar c}_1(t):\bar\Omega\to\mathbb{R}$. In particular, it follows that the sets $\{{\bar c}_1(t)>\beta_n\}$ are (possibly empty) smooth submanifolds of $\Omega$ such that if $\{{\bar c}_1(t)>\beta_n\}$ is nonempty, then it has smooth boundary $\{{\bar c}_1(t)=\beta_n\}\subset\Omega$.
Now we multiply (\ref{np}) for $i=1$ by the test function
\begin{equation}
\psi=\psi_{\alpha,n}=(\chi\circ( {\bar c}_1(t)-\beta_n))^\alpha,\quad 0<\alpha<1
\end{equation}
and integrate; then, defining the following primitive,
\begin{equation}
Q(y)=Q_{\alpha}(y)=\int_0^y (\chi(s))^\alpha\,ds\label{Qa}
\end{equation}
we have on the left hand side, using ${\mbox{div}\,} u=0$,
\begin{equation}
\int_\Omega (\partial_t c_1)\psi\,dx+\int_\Omega u\cdot\nabla ( Q\circ({\bar c}_1(t)-\beta_n))\,dx=\int_\Omega (\partial_t c_1)\psi\,dx.
\end{equation}
On the right hand side, we obtain, using the fact that $\psi=0$ on $\partial\{{\bar c}_1(t)>\beta_n\}=\{ {\bar c}_1(t)=\beta_n\}$,
\begin{equation}
\begin{aligned}
D_1\int_{ {\bar c}_1(t)>\beta_n}{\mbox{div}\,}(\nabla c_1+c_1\nabla\Phi)\psi\,dx=&-\alpha D_1\int_{ {\bar c}_1(t)>\beta_n}|\nabla c_1|^2\frac{\chi'\circ({\bar c}_1(t)-\beta_n)}{(\chi\circ({\bar c}_1(t)-\beta_n))^{1-\alpha}}\,dx\\
&+D_1\int_{{\bar c}_1(t)>\beta_n}{\mbox{div}\,} (({\bar c}_1(t)-\beta_n)\nabla\Phi)\psi\,dx\\
&-\frac{D_1}{\epsilon}(\gamma_1+\beta_n)\int_{{\bar c}_1(t)>\beta_n}\rho\psi\,dx\\
\le&-\alpha D_1\int_{{\bar c}_1(t)>\beta_n} ({\bar c}_1(t)-\beta_n)\nabla\Phi\cdot\nabla c_1\frac{\chi'\circ({\bar c}_1(t)-\beta_n)}{(\chi\circ({\bar c}_1(t)-\beta_n))^{1-\alpha}}\,dx\\
&-\frac{D_1}{\epsilon}(\gamma_1+\beta_n)\int_{{\bar c}_1(t)>\beta_n}\rho\psi\,dx.
\end{aligned}
\end{equation}
Thus far, we have
\begin{equation}
\begin{aligned}
\int_\Omega (\partial_t c_1)\psi\,dx\le& -\alpha D_1\int_{{\bar c}_1(t)>\beta_n} ({\bar c}_1(t)-\beta_n)\nabla\Phi\cdot\nabla c_1\frac{\chi'\circ({\bar c}_1(t)-\beta_n)}{(\chi\circ({\bar c}_1(t)-\beta_n))^{1-\alpha}}\,dx\\
&-\frac{D_1}{\epsilon}(\gamma_1+\beta_n)\int_{{\bar c}_1(t)>\beta_n}\rho\psi\,dx.\label{g1}
\end{aligned}
\end{equation}
Recalling the dependence of $\psi$ on $n$ and taking the limit of (\ref{g1}) as $n\to \infty$, we obtain the following estimate, which holds for each $t>0$:
\begin{equation}\begin{aligned}
\int_\Omega (\partial_t c_1)(\chi\circ {\bar c}_1(t))^\alpha\,dx\le&-\alpha D_1\int_{{\bar c}_1(t)>0}{\bar c}_1(t)\nabla\Phi\cdot\nabla c_1\frac{\chi'\circ {\bar c}_1(t)}{(\chi\circ {\bar c}_1(t))^{1-\alpha}}\,dx\\
&-\frac{D_1}{\epsilon}\gamma_1\int_{{\bar c}_1(t)>0}\rho(\chi\circ {\bar c}_1(t))^\alpha\,dx.\label{g2}
\end{aligned}\end{equation}
We note that the sequence $\beta_n$ depends on the fixed time $t$; however, upon taking the limit $\beta_n\to 0$, the resulting bound (\ref{g2}) holds for all positive times. The limit on the right hand side is justified by dominated convergence along with the fact that, by our choice of $\chi$, we have
\begin{equation}
\begin{aligned}
{\bf{1}}_{\{{\bar c}_1(t)-\beta_n>0\}}|{\bar c}_1(t)-\beta_n|\frac{\chi'\circ ({\bar c}_1(t)-\beta_n)}{(\chi\circ ({\bar c}_1(t)-\beta_n))^{1-\alpha}}\le&{\bf{1}}_{\{0<{\bar c}_1(t)-\beta_n<\frac{1}{2}\}}|{\bar c}_1(t)-\beta_n|\frac{2|{\bar c}_1(t)-\beta_n|}{|{\bar c}_1(t)-\beta_n|^{2-2\alpha}}\\
&+{\bf{1}}_{\{\frac{1}{2}\le {\bar c}_1(t)-\beta_n\}}\frac{2\|{\bar c}_1(t)-\beta_n\|_{L^\infty}}{(1/2)^{2-2\alpha}}\\
\le&8\max\{\|{\bar c}_1(t)-\beta_n\|_{L^\infty}^{2\alpha},\|{\bar c}_1(t)-\beta_n\|_{L^\infty}\}.\label{g3}
\end{aligned}
\end{equation}
The above bound ensures that the integrand of the first integral on the right hand side of (\ref{g1}) is bounded uniformly in $n$.
We now bound the right hand side of (\ref{g2}). The first integral is bounded as follows:
\begin{equation}\begin{aligned}
\left|\alpha D_1\int_{{\bar c}_1(t)>0}{\bar c}_1(t)\nabla\Phi\cdot\nabla c_1\frac{\chi'\circ {\bar c}_1(t)}{(\chi\circ {\bar c}_1(t))^{1-\alpha}}\,dx\right|\le& 2\alpha D_1\left|\int_{0<{\bar c}_1(t)<\frac{1}{2}}{\bar c}_1(t)|\nabla\Phi||\nabla c_1|\frac{{\bar c}_1(t)}{({\bar c}_1(t))^{2-2\alpha}}\,dx\right|\\
&+\alpha D_1\left|\int_{\frac{1}{2}\le {\bar c}_1(t)}{\bar c}_1(t)|\nabla\Phi||\nabla c_1|\frac{2}{(1/2)^2}\,dx \right|\\
\le& 2\alpha D_1\|\nabla\Phi\|_{L^\infty}\|\nabla c_1\|_{L^\infty}(1/2)^{2\alpha}|\Omega|\\
&+8\alpha D_1\|\nabla\Phi\|_{L^\infty}\|\nabla c_1\|_{L^\infty}\|{\bar c}_1(t)\|_{L^1}\\
\le&\alpha D_1\|\nabla\Phi\|_{L^\infty}\|\nabla c_1\|_{L^\infty}(2|\Omega|+8\|{\bar c}_1(t)\|_{L^1}).\label{bbb}
\end{aligned}\end{equation}
Using the nonnegativity of $c_1$, the second integral is bounded by
\begin{equation}
-\frac{D_1}{\epsilon}\gamma_1\int_{{\bar c}_1(t)>0}\rho(\chi\circ {\bar c}_1(t))^\alpha\,dx\le \frac{D_1}{\epsilon}\gamma_1\|c_2(t)\|_{L^1}=\frac{D_1}{\epsilon}\gamma_1\|c_2(0)\|_{L^1}.\label{bbb'}
\end{equation}
Now returning to (\ref{g2}) and writing it as
\begin{equation}
\frac{d}{dt}\int_\Omega Q\circ {\bar c}_1(t)\,dx\le-\alpha D_1\int_{{\bar c}_1(t)>0}{\bar c}_1(t)\nabla\Phi\cdot\nabla c_1\frac{\chi'\circ {\bar c}_1(t)}{(\chi\circ {\bar c}_1(t))^{1-\alpha}}\,dx-\frac{D_1}{\epsilon}\gamma_1\int_{{\bar c}_1(t)>0}\rho(\chi\circ {\bar c}_1(t))^\alpha\,dx
\end{equation}
we find, using (\ref{bbb}), (\ref{bbb'}) and integrating in time,
\begin{equation}
\begin{aligned}
\int_\Omega Q\circ {\bar c}_1(t)\,dx\le& \int_\Omega Q\circ {\bar c}_1(0)\,dx+\alpha D_1\int_0^t \|\nabla\Phi(s)\|_{L^\infty}\|\nabla c_1(s)\|_{L^\infty}(2|\Omega|+8\|{\bar c}_1(s)\|_{L^1})\,ds\\
&+\frac{D_1}{\epsilon}\gamma_1\|c_2(0)\|_{L^1}t.\label{g4}
\end{aligned}
\end{equation}
Now we recall the dependence of $Q$ on $\alpha$ and observe that $Q_\alpha(y)\to y_+:=\max\{y,0\}$ pointwise as $\alpha\to 0^+$. Using these facts, we take the limit of (\ref{g4}) as $\alpha\to 0$ to obtain
\begin{equation}\begin{aligned}
\int_\Omega {\bar c}_1(t)_+\,dx\le&\int_\Omega {\bar c}_1(0)_+\,dx+\frac{D_1}{\epsilon}\gamma_1\|c_2(0)\|_{L^1}t=\tilde B(t).\label{g5}
\end{aligned}\end{equation}
Thus, since
\begin{equation}
\|{c}_1(t)\|_{L^1}\le \int_\Omega {\bar c}_1(t)_++\gamma_1\,dx\le \tilde B(t)+\gamma_1|\Omega|=B(t),\label{B(t)}
\end{equation}
we have shown that $\|c_1(t)\|_{L^1}$ grows at most linearly in time.
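For completeness, we verify the pointwise convergence $Q_\alpha(y)\to y_+$ used above. Since $\chi$ is nondecreasing, vanishes on $(-\infty,0]$, and equals $1$ on $[1,\infty)$, we have $0\le\chi\le 1$, so $(\chi(s))^\alpha\le 1$ and $(\chi(s))^\alpha\to{\bf{1}}_{s>0}(s)$ pointwise as $\alpha\to 0^+$. By dominated convergence,
\begin{equation}
Q_\alpha(y)=\int_0^y(\chi(s))^\alpha\,ds\longrightarrow\int_0^y {\bf{1}}_{s>0}(s)\,ds=y_+
\end{equation}
for every $y\in\mathbb{R}$; for $y\le 0$ both sides vanish because $\chi=0$ on $(-\infty,0]$.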
\textbf{Step 2. Time integrability of $\|\rho\|_{L^2}^2$.} Unlike in Section \ref{UL2}, it is necessary to treat $c_1,c_2$ separately as they satisfy different boundary conditions. We first consider $c_1$. Multiplying (\ref{np}) for $i=1$ by $\frac{1}{D_1}(\log c_1-\log\gamma_1)$ and integrating by parts, we obtain
\begin{equation}
\begin{aligned}
\frac{1}{D_1}\frac{d}{dt}\left(\int_\Omega c_1\log c_1-c_1-c_1\log\gamma_1\,dx\right)=&-\int_\Omega \frac{|\nabla c_1|^2}{c_1}+z_1\nabla\Phi\cdot\nabla c_1\,dx\\
=&-4\int_\Omega |\nabla \sqrt{c_1}|^2\,dx-z_1\int_\Omega \nabla\Phi\cdot\nabla(c_1-\gamma_1)\,dx\\
=&-4\int_\Omega |\nabla \sqrt{c_1}|^2\,dx+\frac{z_1}{\epsilon}\gamma_1\int_{\Omega}\rho\,dx-\frac{z_1}{\epsilon}\int_\Omega c_1\rho\,dx.\label{logc1}
\end{aligned}
\end{equation}
We observe that no boundary terms occur because $c_1=\gamma_1$ on the boundary, and the advective term involving $u$ also vanishes because ${\mbox{div}\,} u=0$ and because $\log\gamma_1$ is a constant.
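Indeed, the advective contribution vanishes because the integrand is an exact gradient of a function of $c_1$:
\begin{equation}
\int_\Omega (u\cdot\nabla c_1)(\log c_1-\log\gamma_1)\,dx=\int_\Omega u\cdot\nabla\left(c_1\log c_1-c_1-c_1\log\gamma_1\right)dx=0
\end{equation}
after integrating by parts, using ${\mbox{div}\,} u=0$ and the no-slip condition on $u$.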
Similarly, multiplying (\ref{np}) for $i=2$ by $\frac{1}{D_2}\log c_2$ and integrating by parts, we obtain
\begin{equation}
\begin{aligned}
\frac{1}{D_2}\frac{d}{dt}\left(\int_\Omega c_2\log c_2-c_2\,dx\right)=&-\int_\Omega \frac{|\nabla c_2|^2}{c_2}+z_2\nabla\Phi\cdot\nabla c_2\,dx\\
=&-4\int_\Omega |\nabla \sqrt{c_2}|^2\,dx-z_2\int_{\partial\Omega}c_2\partial_n\Phi\,dS-\frac{z_2}{\epsilon}\int_\Omega c_2\rho\,dx.\label{logc2}
\end{aligned}
\end{equation}
We bound the boundary term using the Robin boundary conditions (\ref{robin}) and Lemma \ref{trace} in the Appendix (specifically (\ref{T1}) with $p=2$),
\begin{equation}
\begin{aligned}
\left|z_2\int_{\partial\Omega} c_2\partial_n\Phi\,dS\right|\le & \tau|z_2|\left|\int_{\partial\Omega}c_2\Phi\,dS\right|+\|\xi\|_{L^\infty(\partial\Omega)}|z_2|\int_{\partial\Omega}c_2\,dS\\
\le&C(\|\Phi\|_{L^\infty(\Omega)}+\|\xi\|_{L^\infty(\partial\Omega)})\|\sqrt{c_2}\|_{L^2(\partial\Omega)}^2\\
\le&C(\|\rho\|_{L^\frac{7}{4}}+1)(\|\sqrt{c_2}\|_{L^2}^\frac{1}{2}\|\nabla \sqrt{c_2}\|_{L^2}^\frac{1}{2}+\|\sqrt{c_2}\|_{L^2})^2\\
\le&C(\|\rho\|_{L^1}^\frac{1}{7}\|\rho\|_{L^2}^\frac{6}{7}+1)(\|c_2\|_{L^1}^\frac{1}{2}\|\nabla\sqrt{c_2}\|_{L^2}+\|c_2\|_{L^1})\\
\le&2\|\nabla \sqrt{c_2}\|_{L^2}^2+\frac{1}{2\epsilon}\|\rho\|_{L^2}^2+C(\|\rho\|_{L^1}^2\|c_2\|_{L^1}^7+\|c_2\|_{L^1}+\|\rho\|_{L^1}^\frac{1}{4}\|c_2\|_{L^1}^\frac{7}{4}).\label{c2bdb}
\end{aligned}
\end{equation}
The third line follows from the embedding $W^{2,\alpha}(\Omega)\hookrightarrow L^\infty(\Omega)$ for $\alpha>\frac{3}{2}$ (here, for concreteness we choose $\alpha=\frac{7}{4}$) and the fact that $\Phi$ is related to $\rho$ via the Poisson equation (\ref{pois}) and the boundary conditions (\ref{robin}). The fourth line follows from interpolating $L^\frac{7}{4}$ between $L^1$ and $L^2$. The last line follows from Young's inequalities.
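The interpolation exponents in the fourth line can be read off directly: writing $\frac{1}{7/4}=\frac{\theta}{1}+\frac{1-\theta}{2}$ yields $\theta=\frac{1}{7}$, so
\begin{equation}
\|\rho\|_{L^\frac{7}{4}}\le\|\rho\|_{L^1}^\frac{1}{7}\|\rho\|_{L^2}^\frac{6}{7}.
\end{equation}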
\begin{comment}
More precisely, we use the fact that the mapping
\begin{equation}
\Phi\in H^2(\Omega)\mapsto (-\epsilon\Delta\Phi,(\partial_n\Phi+\tau\Phi)_{|\partial\Omega})\in L^2(\Omega)\times H^\frac{1}{2}(\partial\Omega)
\end{equation}
is an isomorphism (we refer the reader to \cite{grisvard} for the relevant elliptic theory). Thus, there exists $C>0$ depending on $\epsilon$ and $\tau$ such that
\begin{equation}
\|\Phi\|_{H^2(\Omega)}\le C(\|\rho\|_{L^2(\Omega)}+\|\xi\|_{H^\frac{1}{2}(\partial\Omega)}).
\end{equation}
Then, from the Hölder embedding $H^2(\Omega)\hookrightarrow C^\alpha(\bar\Omega)\hookrightarrow L^\infty(\Omega)$ ($\alpha>0$), we have that
\begin{equation}
\|\Phi\|_{L^\infty(\Omega)}\le C(\|\rho\|_{L^2(\Omega)}+\|\xi\|_{H^\frac{1}{2}(\partial\Omega)}),\label{hol}
\end{equation}
which justifies the third line of (\ref{c2bdb}). Also, the fact that $\Phi$ is continuous up to the boundary justifies the bound $\|\Phi\|_{L^\infty(\partial\Omega)}\le\|\Phi\|_{L^\infty(\Omega)}$, implicitly used in the second line of (\ref{c2bdb}).
\end{comment}
Defining
\begin{equation}
\mathcal{E}=\frac{1}{D_1}\int_\Omega c_1\log c_1-c_1-c_1\log\gamma_1\,dx+\frac{1}{D_2}\int_\Omega c_2\log c_2-c_2\,dx
\end{equation}
we obtain by adding (\ref{logc1}) to (\ref{logc2}), using (\ref{c2bdb}), and recalling $\rho=z_1c_1+z_2c_2$,
\begin{equation}
\begin{aligned}
\frac{d}{dt}\mathcal{E}+2\int_\Omega |\nabla\sqrt{c_1}|^2+|\nabla\sqrt{c_2}|^2\,dx+\frac{1}{2\epsilon}\int_\Omega \rho^2\,dx\le G(t)\label{lgt}
\end{aligned}
\end{equation}
where
\begin{equation}
G(t)=C(\|\rho\|_{L^1}^2\|c_2\|_{L^1}^7+\|c_1\|_{L^1}+\|c_2\|_{L^1}+\|\rho\|_{L^1}^\frac{1}{4}\|c_2\|_{L^1}^\frac{7}{4}).
\end{equation}
Because $\|c_2(t)\|_{L^1}$ is constant and $\|c_1(t)\|_{L^1}$ grows at most linearly in time (see (\ref{B(t)})), we have that $G(t)$ grows at most quadratically in time and in particular is locally integrable. Thus, integrating (\ref{lgt}), we obtain
\begin{equation}
2\epsilon\mathcal{E}(T)+\int_0^T\|\rho(s)\|_{L^2}^2\,ds\le 2\epsilon\left(\mathcal{E}(0)+\int_0^TG(t)\,dt\right)=r(T).
\end{equation}
Lastly, we observe that because $x\log x$ is superlinear, for any $c>0$ there exists $C>0$ depending only on $c$ such that $-C\le x\log x-cx$ for all $x>0$. It follows that $\mathcal{E}(T)$ is bounded from below, $\mathcal{E}(T)>-C_0$, with $C_0$ independent of $T$. Therefore
\begin{equation}
\int_0^T\|\rho(s)\|_{L^2}^2\,ds\le r(T)+2\epsilon C_0=R(T).\label{RT}
\end{equation}
We note that $R(T)$ increases at most like $T^3$.
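Indeed, the constant in the lower bound $-C\le x\log x-cx$ can be computed explicitly: the function $f(x)=x\log x-cx$ satisfies $f'(x)=\log x+1-c$, so $f$ attains its minimum on $(0,\infty)$ at $x=e^{c-1}$, where
\begin{equation}
f(e^{c-1})=e^{c-1}(c-1)-ce^{c-1}=-e^{c-1},
\end{equation}
and thus one may take $C=e^{c-1}$. Moreover, since $G(t)$ grows at most quadratically, $\int_0^TG(t)\,dt$, and hence $R(T)$, grows at most cubically in $T$.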
\textbf{Step 3. $L^\infty_tL^2_x$ bounds for $c_i$.} Equipped with this time dependent bound, we proceed to control $c_i$ in $L^2$. Multiplying (\ref{np}) for $i=2$ by $\frac{|z_2|}{D_2}c_2$ and integrating by parts, we obtain exactly as in (\ref{ener}),
\begin{equation}
\begin{aligned}
\frac{|z_2|}{2D_2}\frac{d}{dt}\int_\Omega c_2^2\,dx=&-|z_2|\int_\Omega|\nabla c_2|^2\,dx-\frac{z_2|z_2|}{2\epsilon}\int_\Omega c_2^2\rho\,dx\\
&+z_2|z_2|\frac{\tau}{2}\int_{\partial\Omega}c_2^2\Phi\,dS- \frac{z_2|z_2|}{2}\int_{\partial\Omega}c_2^2\xi\,dS\\
=&-|z_2|\int_\Omega|\nabla c_2|^2\,dx-\frac{z_2|z_2|}{2\epsilon}\int_\Omega c_2^2\rho\,dx+I_1+I_2.\label{AA}
\end{aligned}
\end{equation}
We bound the boundary integrals using the embedding $H^2\hookrightarrow L^\infty$ and Lemma \ref{trace} in the Appendix,
\begin{equation}
\begin{aligned}
|I_1|\le& C\|\Phi\|_{L^\infty(\Omega)}\|c_2\|_{L^2(\partial\Omega)}^2\\
\le&C(\|\rho\|_{L^2}+1)(\|c_2\|_{L^2}^\frac{1}{2}\|\nabla c_2\|_{L^2}^\frac{1}{2}+\|c_2\|_{L^2})^2\\
\le&\frac{|z_2|}{4}\|\nabla c_2\|_{L^2}^2+C(\|\rho\|_{L^2}^2+1)\|c_2\|_{L^2}^2.\label{BB}
\end{aligned}
\end{equation}
The last line follows from Young's inequalities. Similarly,
\begin{equation}
\begin{aligned}
|I_2|\le& C\|\xi\|_{L^\infty(\partial\Omega)}\|c_2\|_{L^2(\partial\Omega)}^2\\
\le&C\|\xi\|_{L^\infty(\partial\Omega)}(\|c_2\|_{L^2}^\frac{1}{2}\|\nabla c_2\|_{L^2}^\frac{1}{2}+\|c_2\|_{L^2})^2\\
\le&\frac{|z_2|}{4}\|\nabla c_2\|_{L^2}^2+C\|c_2\|_{L^2}^2.\label{CC}
\end{aligned}
\end{equation}
From (\ref{AA}), using (\ref{BB}) and (\ref{CC}), we obtain
\begin{equation}
\frac{|z_2|}{2D_2}\frac{d}{dt}\|c_2\|_{L^2}^2+\frac{|z_2|}{2}\|\nabla c_2\|_{L^2}^2\le C(\|\rho\|_{L^2}^2+1)\|c_2\|_{L^2}^2-\frac{z_2|z_2|}{2\epsilon}\int_\Omega c_2^2\rho\,dx.\label{DD}
\end{equation}
Next we obtain the corresponding estimate for $c_1$. We observe that $q_1=c_1-\gamma_1$ satisfies
\begin{equation}
\begin{aligned}
\partial_t q_1+u\cdot\nabla q_1=&D_1{\mbox{div}\,}(\nabla q_1+z_1q_1\nabla\Phi)-\frac{D_1z_1\gamma_1}{\epsilon}\rho.\label{q1}
\end{aligned}
\end{equation}
Using the fact that $q_1$ vanishes on the boundary, we multiply (\ref{q1}) by $\frac{|z_1|}{D_1}q_1$ and integrate by parts
\begin{equation}
\begin{aligned}
\frac{|z_1|}{2D_1}\frac{d}{dt}\|q_1\|_{L^2}^2+|z_1|\|\nabla q_1\|_{L^2}^2=&-\frac{z_1|z_1|}{2}\int_\Omega \nabla q_1^2\cdot\nabla\Phi\,dx-\frac{z_1|z_1|\gamma_1}{\epsilon}\int_\Omega\rho q_1\,dx\\
\le&-\frac{z_1|z_1|}{2\epsilon}\int_\Omega q_1^2\rho\,dx+C\|\rho\|_{L^2}\|q_1\|_{L^2}.\label{EE}
\end{aligned}
\end{equation}
Now we add (\ref{DD}) to (\ref{EE}) to obtain
\begin{equation}
\begin{aligned}
&\frac{d}{dt}\left(\frac{|z_1|}{2D_1}\|q_1\|_{L^2}^2+\frac{|z_2|}{2D_2}\|c_2\|_{L^2}^2\right)+|z_1|\|\nabla q_1\|_{L^2}^2+\frac{|z_2|}{2}\|\nabla c_2\|_{L^2}^2\\
\le&-\frac{1}{2\epsilon}\int_\Omega (z_1|z_1|q_1^2+z_2|z_2|c_2^2)\rho\,dx+C(\|\rho\|_{L^2}^2+1)\|c_2\|_{L^2}^2+C\|\rho\|_{L^2}\|q_1\|_{L^2}\\
\le&-\frac{1}{2\epsilon}\int_\Omega (z_1|z_1|q_1^2+z_2|z_2|c_2^2)\rho\,dx+C(\|\rho\|_{L^2}^2+1)(\|q_1\|_{L^2}^2+\|c_2\|_{L^2}^2)+C.\label{W}
\end{aligned}
\end{equation}
Next, denoting $\sigma=z_1c_1+|z_2|c_2\ge 0$ and using the fact that $z_1>0>z_2$, we observe that
\begin{equation}
\begin{aligned}
(z_1|z_1|q_1^2+z_2|z_2|c_2^2)\rho=(z_1^2q_1^2-z_2^2c_2^2)\rho=&(z_1q_1+|z_2|c_2)(z_1q_1+z_2c_2)\rho\\
=&(z_1c_1-z_1\gamma_1+|z_2|c_2)(z_1c_1-z_1\gamma_1+z_2c_2)\rho\\
=&(\sigma-z_1\gamma_1)(\rho-z_1\gamma_1)\rho\\
=&\rho^2\sigma-2z_1^2\gamma_1c_1\rho+z_1^2\gamma_1^2\rho\\
\ge&-2z_1^2\gamma_1c_1\rho+z_1^2\gamma_1^2\rho.\label{r3}
\end{aligned}
\end{equation}
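The passage to the fourth line of (\ref{r3}) uses the identity $\rho+\sigma=2z_1c_1$: expanding the product,
\begin{equation}
(\sigma-z_1\gamma_1)(\rho-z_1\gamma_1)\rho=\rho^2\sigma-z_1\gamma_1(\rho+\sigma)\rho+z_1^2\gamma_1^2\rho=\rho^2\sigma-2z_1^2\gamma_1c_1\rho+z_1^2\gamma_1^2\rho,
\end{equation}
since $\rho+\sigma=(z_1c_1+z_2c_2)+(z_1c_1+|z_2|c_2)=2z_1c_1$. The last line then follows by dropping the nonnegative term $\rho^2\sigma$.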
We note that in the last line, there is no cubic term. So using (\ref{r3}) together with Young's inequalities, we obtain from (\ref{W}),
\begin{equation}
\begin{aligned}
\frac{d}{dt}\mathcal{F}\le& C(\|\rho\|_{L^2}^2+1)+C(\|\rho\|_{L^2}^2+1)(\|q_1\|_{L^2}^2+\|c_2\|_{L^2}^2)\\
\le& C(\|\rho\|_{L^2}^2+1)+C(\|\rho\|_{L^2}^2+1)\mathcal{F}
\end{aligned}
\end{equation}
for
\begin{equation}
\mathcal{F}=\frac{|z_1|}{2D_1}\|q_1\|_{L^2}^2+\frac{|z_2|}{2D_2}\|c_2\|_{L^2}^2.
\end{equation}
Thus recalling (\ref{RT}), a Grönwall estimate gives us
\begin{equation}
\sup_{t\in[0,T]}\mathcal{F}(t)\le \left(\mathcal{F}(0)+C(R(T)+T)\right)\exp\left(C(R(T)+T)\right).
\end{equation}
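Explicitly, writing $a(t)=C(\|\rho(t)\|_{L^2}^2+1)$, the differential inequality $\mathcal{F}'\le a+a\mathcal{F}$ yields, upon multiplying by the integrating factor $e^{-\int_0^t a(s)\,ds}$ and integrating,
\begin{equation}
\mathcal{F}(t)\le\left(\mathcal{F}(0)+\int_0^t a(s)\,ds\right)e^{\int_0^t a(s)\,ds},
\end{equation}
and $\int_0^T a(s)\,ds\le C(R(T)+T)$ by (\ref{RT}).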
This completes the proof of the proposition.
\end{proof}
As in the case of blocking boundary conditions, the $L^2$ bounds obtained in the previous proposition are sufficient, in the mixed boundary conditions setting, to obtain control of $c_i$ in the space $L^\infty_tH_x^1\cap L_t^2H_x^2$. Since the $L^2$ bound (\ref{M2t}) is time dependent, all higher regularity bounds are also time dependent. In particular, even for NPS, no time independent bounds of the type (\ref{M'}) are available.
\begin{thm}
For initial conditions $0\le c_i(0)\in H^1$, $u(0)\in V$ and for all $T>0$, NPS (\ref{np}),(\ref{pois}),(\ref{stokes}) for two oppositely charged species ($m=2,\,z_1>0>z_2$) has a unique strong solution $(c_i,\Phi,u)$ on the time interval $[0,T]$ satisfying the boundary conditions (\ref{di}),(\ref{2bl}),(\ref{noslip}),(\ref{robin}). Under the same hypotheses, NPNS (\ref{np}),(\ref{pois}),(\ref{nse}) for two oppositely charged species has a unique strong solution on $[0,T]$ satisfying the initial and boundary conditions provided
\begin{equation}
U(T)=\int_0^T\|u(s)\|_V^4\,ds<\infty.
\end{equation}
Moreover, the solution to NPS satisfies
\begin{align}
\sup_{t\in[0,T]}\|c_i(t)\|_{H^1}^2+\int_0^T\|c_i(s)\|_{H^2}^2\,ds&\le M(T)\label{M(T)}\\
\sup_{t\in[0,T]}\|u(t)\|_V^2+\int_0^T\|u(s)\|_{H^2}^2\,ds&\le B'(T)\label{B'T}
\end{align}
for time dependent constants $M(T), B'(T)$ depending also on the parameters of the system and the initial conditions. The solution to NPNS satisfies
\begin{align}
\sup_{t\in[0,T]}\|c_i(t)\|_{H^1}^2+\int_0^T\|c_i(s)\|_{H^2}^2\,ds&\le M_U(T)\label{MU(T)}\\
\sup_{t\in[0,T]}\|u(t)\|_V^2+\int_0^T\|u(s)\|_{H^2}^2\,ds&\le B'_U(T)\label{B'UT}
\end{align}
for time dependent constants $M_U(T),B'_U(T)$ depending also on the parameters of the system, the initial conditions, and $U(T)$.\label{gr!!}
\end{thm}
\begin{rem}
{We show in the proof that $M(T),B'(T),M_U(T),B'_U(T)$ satisfy the following asymptotic bounds
\begin{align}
M(T)\lesssim& \exp\exp\exp\exp(CT^3)\\
B'(T)\lesssim& \exp\exp(CT^3)\\
M_U(T)\lesssim&\exp\left(\exp\exp\exp(CT^3)\exp(CU(T))\right)\\
B'_U(T)\lesssim&\exp\exp(CT^3)\exp(CU(T)).
\end{align}
These bounds follow from using \eqref{M2t} and bootstrapping.}
\end{rem}
\begin{proof}
As in the proofs of Theorems \ref{gr!} and \ref{grm}, global existence of strong solutions follows from the local existence theorem and the a priori bounds (\ref{M(T)})-(\ref{B'UT}).
\textbf{Step 1. $L^\infty_tL^4_x$ bounds for $c_i$. $L^\infty_t W^{1,\infty}_x$ bounds for $\Phi$. $L^\infty_tV\cap L^2_tH^2_x$ bounds for $u$.} From (\ref{M2t}) and Sobolev estimates, we obtain as in (\ref{P6}),
\begin{equation}
\|\nabla\Phi\|_{L^6}\le P_6(T) \label{P6T}
\end{equation}
for some time dependent constant $P_6(T)$, {which asymptotically satisfies $P_6(T)\lesssim e^{CT^3}$ (cf. \eqref{M2t}).} Then multiplying (\ref{np}) for $i=2$ by $c_2^3$, and integrating by parts, we find (cf. (\ref{kk})),
\begin{equation}
\begin{aligned}
\frac{1}{4}\frac{d}{dt}\|c_2\|_{L^4}^4+\frac{3}{4}D_2\|\nabla c_2^2\|_{L^2}^2\le CP_6(T)\|c_2^2\|_{L^3}\|\nabla c_2^2\|_{L^2}
\end{aligned}
\end{equation}
which, after interpolating $L^3$ between $L^2$ and $H^1$ and using Young's inequality, gives
\begin{equation}
\frac{1}{4}\frac{d}{dt}\|c_2\|_{L^4}^4+\frac{1}{4}D_2\|\nabla c_2^2\|_{L^2}^2\le CP_6(T)^4\|c_2\|_{L^4}^4.
\end{equation}
Therefore, for a time dependent constant $M'_4(T)$ we have
\begin{equation}
\|c_2\|_{L^4}\le M'_4(T),\label{M'4T}
\end{equation}
{where $M_4'(T)\lesssim \exp\exp(CT^3)$}. Then multiplying (\ref{q1}) by $q_1^3$ and integrating by parts, we obtain
\begin{equation}
\frac{1}{4}\frac{d}{dt}\|q_1\|_{L^4}^4+\frac{3}{4}D_1\|\nabla q_1^2\|_{L^2}^2\le CP_6(T)\|q_1^2\|_{L^3}\|\nabla q_1^2\|_{L^2}+C(1+\|q_1\|_{L^4}^4+\|c_2\|_{L^4}^4)
\end{equation}
which gives after an interpolation,
\begin{equation}
\frac{1}{4}\frac{d}{dt}\|q_1\|_{L^4}^4+\frac{1}{4}D_1\|\nabla q_1^2\|_{L^2}^2\le C(P_6(T)^4+1)\|q_1\|_{L^4}^4+C(1+\|c_2\|_{L^4}^4).
\end{equation}
This, together with (\ref{M'4T}), allows us to conclude that there exists a time dependent constant $M_4(T)$ such that for each $i$,
\begin{equation}
\|c_i\|_{L^4}\le M_4(T)\label{M4T}
\end{equation}
{with $M_4(T)\lesssim \exp\exp(CT^3)$}. Furthermore, since $W^{2,4}\hookrightarrow W^{1,\infty}$, we have for a time dependent constant $P_\infty(T)$,
\begin{equation}
\|\Phi\|_{W^{1,\infty}}\le P_\infty(T)\label{PiT}
\end{equation}
{with $P_\infty(T)\lesssim \exp\exp(CT^3)$}. At this point, we have established enough bounds to mimic the derivations of (\ref{unps}), (\ref{nnn}) to obtain the bounds (\ref{B'T}), (\ref{B'UT}) {for $u$ with $B'(T)\lesssim \exp\exp(CT^3)$ and $B'_U(T)\lesssim \exp\exp(CT^3)\exp (CU(T))$.}
\textbf{Step 2. $L^\infty_tH^1_x\cap L^2_tH^2_x$ bounds for $c_1$.} Now we bound derivatives of $c_1$. First we multiply (\ref{q1}) by $-\Delta q_1$ and integrate by parts to obtain
\begin{equation}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\|\nabla q_1\|_{L^2}^2+D_1\|\Delta q_1\|_{L^2}^2\le& C\|u\|_V\|\nabla q_1\|^\frac{1}{2}_{L^2}\|\Delta q_1\|_{L^2}^\frac{3}{2}+CP_\infty(T)\|\nabla q_1\|_{L^2}\|\Delta q_1\|_{L^2}\\
&+C(M_4(T)^2+1)\|\Delta q_1\|_{L^2}+CM_2(T)\|\Delta q_1\|_{L^2}
\end{aligned}
\end{equation}
so that from Young's inequalities, we obtain
\begin{equation}
\frac{1}{2}\frac{d}{dt}\|\nabla q_1\|_{L^2}^2+\frac{D_1}{2}\|\Delta q_1\|_{L^2}^2\le C(T)+C(P_\infty(T)^2+\|u\|_V^4)\|\nabla q_1\|_{L^2}^2\label{naq1}
\end{equation}
for a time dependent constant {$C(T)\lesssim \exp\exp(CT^3)$}. Using a Grönwall estimate, from (\ref{naq1}), we obtain (\ref{MU(T)}) for $c_1$ in the case of NPNS {with $M_U(T)\lesssim \exp\exp\exp(CT^3)\exp (CU(T))$}. In the case of NPS, (\ref{naq1}) together with (\ref{B'T}) gives us (\ref{M(T)}) for $c_1$ {with $M(T)\lesssim \exp\exp\exp(CT^3)$}.
\textbf{Step 3. $L^\infty_tH^1_x\cap L^2_tH^2_x$ bounds for $c_2$.} To obtain the corresponding bounds for $c_2$, we start with (\ref{dci}), which holds verbatim for $i=2$:
\begin{equation}
\begin{aligned}
\frac{1}{2}\frac{d}{dt}\|\nabla\tilde c_2\|_{L^2}^2+D_2\|\Delta\tilde c_2\|_{L^2}^2=&\int_\Omega (u\cdot\nabla\tilde c_2)\Delta\tilde c_2\,dx+D_2z_2\int_\Omega(\nabla\tilde c_2\cdot\nabla\Phi)\Delta\tilde c_2\,dx\\
&-z_2\int_\Omega((\partial_t+u\cdot\nabla)\Phi)\tilde c_2\Delta\tilde c_2\,dx\\
=& I_1+I_2+I_3.
\end{aligned}
\end{equation}
The term $I_1$ is bounded identically (see (\ref{id})):
\begin{equation}
|I_1|\le\frac{D_2}{4}\|\Delta \tilde c_2\|_{L^2}^2+C\|u\|_V^4\|\nabla\tilde c_2\|_{L^2}^2.
\end{equation}
The term $I_2$ is also bounded identically (see (\ref{idd})), noting that this time we have a time dependent coefficient $C_g(T)$, depending on $P_\infty(T)$ {so that $C_g(T)\lesssim\exp\exp(CT^3)$}:
\begin{equation}
|I_2|\le\frac{D_2}{4}\|\Delta\tilde c_2\|_{L^2}^2+C_g(T)\|\nabla\tilde c_2\|_{L^2}^2.
\end{equation}
As for the term $I_3=I_3^1+I_3^2$ (see (\ref{split})), we note that
\begin{equation}
\|\tilde{c}_2\|_{L^3}\le CM_4(T)e^{|z_2|P_\infty(T)}=\beta_3(T){\lesssim\exp\exp\exp(CT^3)}
\end{equation}
so that
\begin{equation}
\begin{aligned}
|I_3^1|\le& C\|u\|_V\|\nabla\Phi\|_{L^\infty}\|\tilde{c}_2\|_{L^3}\|\Delta \tilde{c}_2\|_{L^2}\\
\le&\frac{D_2}{8}\|\Delta\tilde{c}_2\|_{L^2}^2+C'_g(T)\|u\|_V^2
\end{aligned}
\end{equation}
where {$C'_g(T)\lesssim \exp\exp\exp(CT^3)$.} Lastly, in order to bound $I_3^2$, we first estimate $\partial_t\nabla\Phi$ as in (\ref{ptrho})-(\ref{ptg}), but this time we work around the fact that ${\partial_n \tilde c_1}_{|\partial\Omega}=0$ does not hold:
\begin{equation}
\begin{aligned}
\tau\|\partial_t\Phi\|_{L^2(\partial\Omega)}^2&+\|\partial_t\nabla\Phi\|_{L^2(\Omega)}^2\\
\le& C\sum_{j=1}^2\left|\int_\Omega{\mbox{div}\,}(e^{-z_j\Phi}\nabla{\tilde{c}_j})\partial_t\Phi\,dx\right|+C\left|\int_\Omega{\mbox{div}\,}(u\rho)\partial_t\Phi\,dx\right|\\
\le&C\int_\Omega e^{|z_2|P_\infty(T)}|\nabla \tilde c_2||\partial_t\nabla\Phi|\,dx+C\left|\int_\Omega {\mbox{div}\,} (e^{-z_1\Phi}\nabla\tilde c_1)\partial_t\Phi\,dx\right|\\
&+C\int_\Omega |\rho||u||\partial_t\nabla\Phi|\,dx\\
\le&C\int_\Omega e^{|z_2|P_\infty(T)}|\nabla \tilde c_2||\partial_t\nabla\Phi|\,dx+CP_\infty(T) e^{|z_1|P_\infty(T)}\int_\Omega |\nabla\tilde c_1||\partial_t\Phi|\,dx\\
&+Ce^{|z_1|P_\infty(T)}\int_\Omega |\Delta\tilde c_1||\partial_t\Phi|\,dx+C\int_\Omega |\rho||u||\partial_t\nabla\Phi|\,dx\\
\le&C_{t}(T)\left(\sum_{j=1}^2\|\nabla{\tilde{c}_j}\|_{L^2}+\|\Delta\tilde c_1\|_{L^2}+\|u\|_V\right)(\|\partial_t\nabla\Phi\|_{L^2(\Omega)}^2+\|\partial_t\Phi\|_{L^2(\partial\Omega)}^2)^\frac{1}{2}
\end{aligned}
\end{equation}
where {$C_t(T)\lesssim \exp\exp\exp(CT^3)$}, and we used $\|\partial_t\Phi\|_{L^2}\le C(\|\partial_t\nabla\Phi\|_{L^2(\Omega)}^2+\|\partial_t\Phi\|_{L^2(\partial\Omega)}^2)^\frac{1}{2}$. Then recalling that we already have the bounds (\ref{M(T)}), (\ref{MU(T)}) for $c_1$, we obtain as in (\ref{pt6}), using the embedding $H^1\hookrightarrow L^6$,
\begin{equation}
\|\partial_t\Phi\|_{L^6}\le C_{t}(T,U(T))(\|\nabla\tilde{c}_2\|_{L^2}+\|\Delta\tilde c_1\|_{L^2}+\|u\|_V+1)\label{pt6t}
\end{equation}
for a time dependent constant {$C_t(T,U(T))\lesssim \exp\exp\exp(CT^3)\exp(CU(T))$ (though in the case of NPS, the dependence on $U(T)$ is redundant and may be dropped)}. Thus we bound $I_3^2$ as in (\ref{I32}),
\begin{equation}
\begin{aligned}
|I_3^2|\le\frac{D_2}{8}\|\Delta\tilde c_2\|_{L^2}^2+C_3(T,U(T))(\|\nabla\tilde{c}_2\|_{L^2}^2+\|\Delta\tilde c_1\|_{L^2}^2+\|u\|_V^2+1)\label{I32t}
\end{aligned}
\end{equation}
{with $C_3(T,U(T))\lesssim \exp\exp\exp(CT^3)\exp(CU(T))$}. Collecting our estimates for $I_1,I_2,I_3$, we obtain as in (\ref{cru}),
\begin{equation}
\begin{aligned}
&\frac{1}{2}\frac{d}{dt}\|\nabla \tilde{c}_2\|_{L^2}^2+\frac{D_2}{4}\|\Delta\tilde{c}_2\|_{L^2}^2\\
\le& (C_F(T,U(T))+C\|u\|_V^4)\|\nabla \tilde{c}_2\|_{L^2}^2+C'_F(T,U(T))(\|\Delta\tilde c_1\|_{L^2}^2+\|u\|_V^2+1)\label{crut}
\end{aligned}
\end{equation}
{with $C_F(T,U(T)),C_F'(T,U(T))\lesssim \exp\exp\exp(CT^3)\exp(CU(T))$}. It is straightforward to verify using the chain rule that
\begin{equation}
\|\Delta\tilde c_1\|_{L^2}\le C_P(T)(\|\Delta c_1\|_{L^2}+\|\nabla c_1\|_{L^2}+\|c_1\|_{L^2}+\|c_1\rho\|_{L^2})
\end{equation}
for a time dependent constant $C_P(T)$ depending on $P_\infty(T)$ {so that $C_P(T)\lesssim \exp\exp\exp(CT^3)$}. So, by the previously established bounds (\ref{M(T)}), (\ref{MU(T)}) for $c_1$ (cf. (\ref{naq1})), together with (\ref{M4T}), we have
for a time dependent constant $C_P(T)$ depending on $P_\infty(T)$ {so that $C_P(T)\lesssim \exp\exp\exp(CT^3)$}. So, by the previously established bounds (\ref{M(T)}), (\ref{MU(T)}) for $c_1$ (cf. (\ref{naq1})), together with (\ref{M4T}), we have
\begin{equation}
\int_0^T\|\Delta\tilde c_1(s)\|_{L^2}^2\,ds={S(T,U(T))\lesssim \exp\exp\exp(CT^3)\exp(CU(T))}.\label{ST}
\end{equation}
Then from (\ref{crut}), a Grönwall estimate together with (\ref{ST}) gives us (\ref{MU(T)}) for $i=2$ in the case of NPNS, after we convert back to the variable $c_2$, with $M_U(T)\lesssim\exp(\exp\exp\exp(CT^3)\exp(CU(T)))$. In the case of NPS, (\ref{B'T}) together with (\ref{crut}) and (\ref{ST}) gives us (\ref{M(T)}) for $i=2$ {with $M(T)\lesssim \exp\exp\exp\exp(CT^3)$}. This completes the proof.
\end{proof}
\appendix
\section{Trace Inequalities}
\begin{lemma}\label{trace}
Let $\Omega\subset\mathbb{R}^3$ be an open, bounded domain with Lipschitz boundary. Then for $p\in[2,4]$, we have the embedding $H^1(\Omega)\hookrightarrow L^p(\partial\Omega).$ Moreover, for $p\in [2,4]$, there exist constants $C_p, C'_p$ depending only on $\Omega$ and $p$ such that
\begin{equation}
\|f\|_{L^p(\partial\Omega)}\le C_p\|\nabla f\|_{L^2(\Omega)}^\frac{1}{p}\|f\|_{L^{2(p-1)}(\Omega)}^\frac{p-1}{p}+C'_p\|f\|_{L^p(\Omega)}.\label{T1}
\end{equation}
In particular, for $p=4$, there exists a constant $c_4$ depending only on $\Omega$ such that
\begin{equation}
\|f\|_{L^4(\partial\Omega)}\le c_4\|f\|_{H^1(\Omega)}.\label{T2}
\end{equation}
And, for $p\in[2,4)$ and any $\gamma>0$, there exists $C_\gamma$ depending on $\Omega,p$ and $\gamma$ such that
\begin{equation}
\|f\|_{L^p(\partial\Omega)}^2\le \gamma\|\nabla f\|_{L^2(\Omega)}^2+ C_\gamma\|f\|_{L^1(\Omega)}^2.\label{T3}
\end{equation}
\begin{proof}
The proof is based on the proof of Theorem 1.5.1.10 in \cite{grisvard}. Because $C^1(\bar\Omega)$ is dense in $H^1(\Omega)$, we assume without loss of generality that $f\in C^1(\bar\Omega).$ Next we use the fact (Lemma 1.5.1.9, \cite{grisvard}) that for bounded Lipschitz domains $\Omega\subset\mathbb{R}^3$, there exist $\mu\in (C^\infty(\bar\Omega))^3$ and a constant $\delta>0$ such that $\mu\cdot n\ge \delta$ almost everywhere on $\partial\Omega$, where $n$ is the outward pointing normal vector on $\partial\Omega$. Then, on one hand, we have
\begin{equation}
\int_\Omega \nabla |f|^p\cdot\mu\,dx=p\int_\Omega |f|^{p-2}f\nabla f\cdot\mu\,dx.
\end{equation}
On the other hand, integrating by parts, we have
\begin{equation}
\int_\Omega \nabla |f|^p\cdot\mu\,dx=\int_{\partial\Omega} |f|^p\mu\cdot n\,dS-\int_\Omega |f|^p{\mbox{div}\,}\mu\,dx.
\end{equation}
Therefore,
\begin{equation}
\begin{aligned}
\delta\int_{\partial\Omega}|f|^p\,dS\le&\int_{\partial\Omega} |f|^p\mu\cdot n\,dS\\
=&p\int_\Omega |f|^{p-2}f\nabla f\cdot\mu\,dx+\int_\Omega |f|^p{\mbox{div}\,}\mu\,dx\\
\le& p\|\mu\|_{L^\infty(\Omega)}\int_\Omega |f|^{p-1}|\nabla f|\,dx+\|{\mbox{div}\,}\mu\|_{L^\infty(\Omega)}\int_\Omega|f|^p\,dx\\
\le& p\|\mu\|_{L^\infty(\Omega)}\|\nabla f\|_{L^2(\Omega)}\|f\|_{L^{2(p-1)}(\Omega)}^{p-1}+\|{\mbox{div}\,}\mu\|_{L^\infty(\Omega)}\|f\|_{L^p(\Omega)}^p\label{tt}
\end{aligned}
\end{equation}
where the last line follows from a Hölder inequality. Then, (\ref{T1}) follows from raising both sides of (\ref{tt}) to the power $1/p$. Taking $p=4$ in (\ref{T1}) we have
\begin{equation}
\|f\|_{L^4(\partial\Omega)}\le C_4\|\nabla f\|_{L^2(\Omega)}^\frac{1}{4}\|f\|_{L^6(\Omega)}^\frac{3}{4}+C'_4\|f\|_{L^4(\Omega)},
\end{equation}
so (\ref{T2}) follows from the embedding $H^1(\Omega)\hookrightarrow L^6(\Omega)$. Lastly, (\ref{T3}) follows from (\ref{T1}) by interpolating the spaces $L^{2(p-1)}(\Omega)$ and $L^p(\Omega)$ between $L^1(\Omega)$ and $H^1(\Omega)$, followed by Young's inequalities.
\end{proof}
\end{lemma}
\section{Positivity of Concentrations}\label{pc}
In this section, we verify that positivity of the ionic concentrations, $c_i$, is propagated in time by the Nernst-Planck equations.
\begin{prop}
Suppose $(c_1,c_2,u)$ is a strong solution to NPNS (\ref{np}), (\ref{pois}), (\ref{nse}) or NPS (\ref{np}), (\ref{pois}), (\ref{stokes}) with blocking boundary conditions (\ref{bl}), (\ref{noslip}), (\ref{robin}) or mixed boundary conditions (\ref{di}), (\ref{2bl}), (\ref{noslip}), (\ref{robin}) with strictly positive initial conditions $0<c\le c_i(0)\in H^1(\Omega)$. Then, $c_i(t)>0$ for all $t\ge 0$.
\end{prop}
\begin{proof}
We split the proof into two steps. First we show that $c_i(t)\ge 0$ for all $t\ge 0$. This part only uses $c_i(0)\ge 0$ and can be shown as in \cite{ci,cil}. We fix a convex function on the real line that is positive on the negative semiaxis and zero on the positive semiaxis. For example, we take the function
\begin{equation}
F(y)=\begin{cases}
y^4,& y<0\\
0,&y\ge 0.
\end{cases}
\end{equation}
We observe that $F$ satisfies, for all $y\in\mathbb{R}$,
\begin{equation}
F''(y)y^2\le 12 F(y).
\end{equation}
Now we multiply (\ref{np}) by $F'(c_i)$ and integrate by parts, noting that, due to the choice of $F$, no boundary terms occur for blocking boundary conditions or for mixed boundary conditions:
\begin{equation}
\begin{aligned}
\frac{d}{dt}\int_\Omega F(c_i)\,dx=&-D_i\int_\Omega|\nabla c_i|^2 F''(c_i)\,dx- D_iz_i\int_\Omega c_iF''(c_i)\nabla c_i\cdot\nabla\Phi\,dx\\
\le&-\frac{D_i}{2}\int_\Omega |\nabla c_i|^2F''(c_i)\,dx+\frac{D_iz_i^2}{2}\int_\Omega F''(c_i)c_i^2|\nabla\Phi|^2\,dx\\
\le&6D_i\|\nabla\Phi\|_{L^\infty}^2\int_\Omega F(c_i)\,dx.\label{FF}
\end{aligned}
\end{equation}
We note that the advective term involving $u$ vanishes due to ${\mbox{div}\,} u=0$. It follows from (\ref{FF}) that
\begin{equation}
\int_\Omega F(c_i(t))\,dx\le \left(\int_\Omega F(c_i(0))\,dx\right)\exp\left(6D_i\int_0^t\|\nabla\Phi(s)\|_{L^\infty}^2\,ds\right).
\end{equation}
For strong solutions, the time integral
\begin{equation}
\int_0^t\|\nabla\Phi(s)\|_{L^\infty}^2\,ds
\end{equation}
is finite, and thus since $F(c_i(0))=0$ on $\Omega$, it follows that
\begin{equation}
\int_\Omega F(c_i(t))\,dx=0,
\end{equation}
which implies $c_i(t)\ge 0$. This proves the nonnegativity of $c_i$.
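This nonnegativity mechanism can be illustrated numerically. The following sketch (our own illustration, not the authors' scheme) evolves a one-dimensional analogue of (\ref{np}) with zero-flux (blocking) boundaries and a frozen, hypothetical potential, and checks that strictly positive data stay positive while total mass is conserved:

```python
import numpy as np

def step(c, phi, dx, D=1.0, z=1.0, dt=1e-5):
    # Conservative explicit update for d_t c = D d_x( d_x c + z c d_x phi ).
    cf = 0.5 * (c[1:] + c[:-1])                   # face-centered concentration
    flux = -D * ((c[1:] - c[:-1]) / dx + z * cf * (phi[1:] - phi[:-1]) / dx)
    flux = np.concatenate(([0.0], flux, [0.0]))   # blocking: zero flux at the walls
    return c - dt * (flux[1:] - flux[:-1]) / dx

n = 100
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
c = 1.0 + 0.5 * np.cos(2.0 * np.pi * x)           # strictly positive initial data
phi = 0.1 * x                                      # hypothetical frozen potential
mass0 = c.sum() * dx
for _ in range(2000):
    c = step(c, phi, dx)

assert c.min() > 0.0                               # positivity is propagated
assert abs(c.sum() * dx - mass0) < 1e-8            # mass conserved under blocking BCs
```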
Improving this result to strict positivity requires an additional argument. We adapt the argument given in \cite{gaj}. We first fix a time interval $[0,T]$, and we assume that $c_i$ satisfy blocking boundary conditions. We then fix $\delta>0$ and multiply (\ref{np}) by $-\frac{1}{(c_i+\delta)^2}$ and integrate by parts. Then on the left hand side, we obtain
\begin{equation}
\frac{d}{dt}\int_\Omega \frac{1}{c_i+\delta}\,dx.
\end{equation}
On the right hand side, we have
\begin{equation}
-2D_i\int_\Omega \frac{|\nabla c_i|^2}{(c_i+\delta)^3}\,dx+2D_iz_i\int_\Omega c_i\nabla\Phi\cdot\frac{\nabla c_i}{(c_i+\delta)^3}\,dx=I_1+I_2.
\end{equation}
The integral $I_1$ is nonpositive, and the second integral $I_2$ can be estimated as follows, using Young's inequality:
\begin{equation}
|I_2|\le D_i\int_\Omega \frac{|\nabla c_i|^2}{(c_i+\delta)^3}\,dx+C_T\int_\Omega \frac{1}{c_i+\delta}\,dx
\end{equation}
where $C_T$ depends on parameters and on $\sup_{t\in [0,T]}\|\nabla\Phi(t)\|_{L^\infty}$ but not on $\delta$. Thus we have that $L_1:=\int_\Omega \frac{1}{c_i+\delta}\,dx$ satisfies $\frac{d}{dt}L_1\le C_TL_1$, and thus
\begin{equation}
\sup_{t\in[0,T]}L_1(t)\le e^{C_T T}\int_\Omega \frac{1}{c_i(0)+\delta}\,dx\le e^{C_T T}\frac{|\Omega|}{c}\label{l1t}
\end{equation}
where we note that the final upper bound does not depend on $\delta$.
Now the idea is to bootstrap to obtain bounds on $L_{2^k}:=\int_\Omega \frac{1}{(c_i+\delta)^{2^k}}\,dx$ for $k=1,2,3,\ldots$ exactly as was done to control $\|c_i\|_{L^\infty}$ in Proposition \ref{L2}.
To this end, we multiply (\ref{np}) by $\frac{-j+1}{(c_i+\delta)^{j}}$ ($j=3,4,...$) and integrate by parts. This yields
\begin{equation}
\begin{aligned}
\frac{d}{dt}\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}^2+4D_i\left\|\nabla\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}^2\le& D_i|z_i|(j-1)\left|\int_\Omega c_i\nabla\Phi\cdot\nabla (c_i+\delta)^{-j}\,dx\right|\\
\le& 2D_i|z_i|j\|\nabla\Phi\|_{L^\infty}\int_\Omega|\nabla (c_i+\delta)^\frac{-j+1}{2}|(c_i+\delta)^\frac{-j+1}{2}\,dx\\
\le& 2D_i\left\|\nabla\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}^2\\
&+2D_i|z_i|^2j^2\|\nabla\Phi\|_{L^\infty}^2\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}^2\label{agag}
\end{aligned}
\end{equation}
Then, using the interpolation estimate
\begin{equation}
\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}\le C\left(\left\|\nabla\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}^\frac{3}{5}\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^1}^\frac{2}{5}+\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^1}\right)\label{int1}
\end{equation}
followed by a Young's inequality, we obtain from (\ref{agag})
\begin{equation}
\frac{d}{dt}\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}^2+D_i\left\|\nabla\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}^2\le \bar{c}j^l\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^1}^2
\end{equation}
for some $l>0$ large enough and $\bar c$ depending on parameters, the domain, $\sup_{t\in[0,T]}\|\nabla\Phi(t)\|_{L^\infty}$, but not on $j$. Then, again using the interpolation estimate (\ref{int1}) followed by a Young's inequality, we obtain
\begin{equation}
\frac{d}{dt}\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}^2\le -C\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^2}^2+ C_j\left\|\frac{1}{(c_i+\delta)^\frac{j-1}{2}}\right\|_{L^1}^2\label{jj}
\end{equation}
where $C_j\le \tilde c j^l$ for some $\tilde c$ not depending on $j$. Now we set $j=2k+1$ where $k$ is a nonnegative integer, and we rewrite (\ref{jj}) as
\begin{equation}
\frac{d}{dt}\left\|\frac{1}{c_i+\delta}\right\|_{L^{2k}}^{2k}\le -C \left\|\frac{1}{c_i+\delta}\right\|_{L^{2k}}^{2k} +C_{2k+1}\left\|\frac{1}{c_i+\delta}\right\|_{L^{k}}^{2k}.\label{2kk}
\end{equation}
Then, applying Grönwall's inequality to (\ref{2kk}), we obtain as in (\ref{2k'})-(\ref{ss}), for some $\bar C>0$ independent of $k$,
\begin{equation}
T_{2k}\le {\bar C}^\frac{1}{2k}k^\frac{l}{2k}T_k
\end{equation}
where
\begin{equation}
T_k=\max\left\{\left\|\frac{1}{c_i(0)+\delta}\right\|_{L^\infty},\sup_{t\in [0,T]}\left\|\frac{1}{c_i(t)+\delta}\right\|_{L^k}\right\}.
\end{equation}
Now setting $k=2^\kappa$ we have
\begin{equation}
T_{2^{\kappa+1}}\le {\bar C}^\frac{1}{2^{\kappa+1}}2^\frac{\kappa l}{2^{\kappa+1}}T_{2^\kappa}
\end{equation}
from which it follows that for all $J\in\mathbb{N}$
\begin{equation}
T_{2^J}\le {\bar C}^a2^bT_1\label{t2j}
\end{equation}
where
\begin{equation}
a=\sum_{\kappa=0}^\infty \frac{1}{2^{\kappa+1}}<\infty,\quad b=\sum_{\kappa=0}^\infty\frac{\kappa l}{2^{\kappa+1}}<\infty.
\end{equation}
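Both series can be evaluated in closed form: $a=\sum_{\kappa\ge 0}2^{-(\kappa+1)}=1$, and since $\sum_{\kappa\ge 0}\kappa\, 2^{-(\kappa+1)}=1$, one gets $b=l$. A quick numerical check (with a placeholder value for $l$):

```python
from math import isclose

l = 3.0  # placeholder; l is the fixed exponent produced by the iteration
a = sum(1.0 / 2 ** (kappa + 1) for kappa in range(60))
b = sum(kappa * l / 2 ** (kappa + 1) for kappa in range(60))
assert isclose(a, 1.0)  # geometric series: sum_{kappa>=0} 2^{-(kappa+1)} = 1
assert isclose(b, l)    # sum_{kappa>=0} kappa 2^{-(kappa+1)} = 1, so b = l
```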
So passing $J\to\infty$ in (\ref{t2j}) we find that
\begin{equation}
\begin{aligned}
\sup_{t\in[0,T]}\left\|\frac{1}{c_i(t)+\delta}\right\|_{L^\infty}\le& {\bar C}^a 2^b\max\left\{\left\|\frac{1}{c_i(0)+\delta}\right\|_{L^\infty},\sup_{t\in [0,T]}\left\|\frac{1}{c_i(t)+\delta}\right\|_{L^1}\right\}\\
\le&{\bar C}^a2^b\max\left\{\left\|\frac{1}{c_i(0)+\delta}\right\|_{L^\infty}, e^{C_T T}\frac{|\Omega|}{c}\right\}
\end{aligned}
\end{equation}
where the second inequality follows from (\ref{l1t}). Finally, passing to the limit $\delta\to 0^+$
gives
\begin{equation}
\sup_{t\in[0,T]}\left\|\frac{1}{c_i(t)}\right\|_{L^\infty}\le {\bar C}^a2^b\max\left\{\frac{1}{c}, e^{C_T T}\frac{|\Omega|}{c}\right\},
\end{equation}
and thus on any finite time interval $c_i(t)$ is uniformly bounded away from $0$ on $\bar\Omega$. This completes the proof of strict positivity of $c_i, i=1,2$ in the case of blocking boundary conditions.
In the case of mixed boundary conditions (\ref{di}), (\ref{2bl}), the strict positivity of $c_2$ is obtained as in the blocking case. For $c_1$, it is possible to obtain strict positivity along the same lines as the preceding proof for blocking boundary conditions by choosing appropriate test functions; in fact, this method generalizes easily to more complex boundary conditions for $c_i$ (e.g. blocking boundary conditions on a nontrivial boundary portion and Dirichlet boundary conditions on the complement, see \cite{ci, np3d}) and also to cases of more than two ionic species. Here, since $c_1$ satisfies purely Dirichlet boundary conditions and because we are considering the case of two oppositely charged ionic species, we argue instead using a maximum principle (see \cite{cil} for a similar argument in the context of upper bounds). As per the remark at the start of Section \ref{mbc}, we assume that $c_i,u,\Phi$ are all smooth in both space and time so that the following considerations are justified.
We know that on the interval $[0,T]$, there exists $c_T>0$ such that
\begin{equation}
c_T\le c_2(t).\end{equation}
Now fix $0<c'<\min\{c, c_T,\gamma_1\}$. Initially, we have, by definition of $c$, that $c_1(0)> c'$ on $\bar\Omega$. Suppose $c_1$ attains the value $c'$ at some time between $t=0$ and $t=T$, and let $t'$ be the \textit{first} time at which $c_1$ attains the value $c'$. Then, using (\ref{pois}), we write (\ref{np}) for $i=1$ as
\begin{equation}
\partial_t c_1+u\cdot\nabla c_1=D_1\Delta c_1+D_1\nabla c_1\cdot\nabla \Phi-\frac{D_1}{\epsilon}c_1(c_1-c_2).
\end{equation}
Evaluating the above at time $t'$, at the point $x_0\in\Omega$ where the minimal value $c'$ is attained, we find that
\begin{equation}
\partial_t c_1(t',x_0)\ge -\frac{D_1}{\epsilon}c'(c'-c_T)>0.
\end{equation}
However, by our choice of $t'$ and $x_0$, we must have $\partial_t c_1(t',x_0)\le 0$, so we have a contradiction. Thus, \begin{equation}\inf_{[0,T]\times\bar\Omega}c_1\ge \min\{c,c_T,\gamma_1\}>0.\end{equation} Since $[0,T]$ is an arbitrary finite time interval, we have shown that $c_1$ is also uniformly bounded away from 0 on every finite time interval. This completes the proof of the strict positivity of $c_i, i=1,2$ in the case of mixed boundary conditions.
\end{proof}
{\bf{Acknowledgment.}} The author thanks the anonymous referees for their constructive comments.
\begin{thebibliography}{99}
\bibitem{bedro}J. Bedrossian, N. Rodriguez, A. L. Bertozzi, Local and global well-posedness for aggregation equations and Patlak-Keller-Segel models with degenerate diffusion, Nonlinearity {\bf{24}}, (2011), 1683-1714, https://doi.org/10.1088/0951-7715/24/6/001
\bibitem{biler}P. Biler, The Debye system: existence and large time behavior of solutions, Nonlinear Analysis {\bf{23}} 9, (1994), 1189-1209.
\bibitem{biler2} P. Biler, J. Dolbeault, Long time behavior of solutions to Nernst-Planck and Debye-Hückel drift-diffusion systems, Ann. Henri Poincaré {\bf{1}}, (2000), 461-472.
\bibitem{bothe} D. Bothe, A. Fischer, J. Saal, Global well-posedness and stability of electrokinetic flows, SIAM J. Math. Anal, {\bf 46} 2, (2014), 1263-1316.
\bibitem{choi} Y.S. Choi, and R. Lui, Multi-Dimensional Electrochemistry Model, Arch Rational Mech Anal {\bf{130}} (1995), 315-342.
\bibitem{cf} P. Constantin, C. Foias, Navier-Stokes Equations, The University of Chicago Press, Chicago, 1988.
\bibitem{ci}P. Constantin, M. Ignatova, On the Nernst-Planck-Navier-Stokes system, Arch Rational Mech Anal {\bf{232}}, No. 3, (2018), 1379-1428.
\bibitem{np3d}P. Constantin, M. Ignatova, F-N Lee, Nernst-Planck-Navier-Stokes systems near equilibrium, Pure and Applied Functional Analysis {\bf{7}} 1, (2022), 175-196.
\bibitem{cil}P. Constantin, M. Ignatova, F-N Lee, Nernst–Planck–Navier–Stokes Systems far from Equilibrium. Arch Rational Mech Anal {\bf{240}}, (2021), 1147–1168. https://doi.org/10.1007/s00205-021-01630-x
\bibitem{int}P. Constantin, M. Ignatova, F-N Lee, Interior Electroneutrality in Nernst–Planck–Navier–Stokes Systems. Arch Rational Mech Anal \textbf{242}, (2021), 1091–1118. https://doi.org/10.1007/s00205-021-01700-0
\bibitem{davidson}S. M. Davidson, M. Wessling, A. Mani, On the dynamical regimes of pattern-accelerated electroconvection, Scientific Reports {\bf{6}} 22505 (2016) doi:10.1038/srep22505
\bibitem{evans} L. C. Evans, Partial Differential Equations, Providence, R.I.: American Mathematical Society, 1998.
\bibitem{fischer}A. Fischer, J. Saal, Global weak solutions in three space dimensions for electrokinetic flow processes. J. Evol. Equ. {\bf{17}}, (2017), 309–333. https://doi.org/10.1007/s00028-016-0356-0
\bibitem{gaj} H. Gajewski, K. Groger, On the basic equations for carrier transport in semiconductors, Journal of Mathematical Analysis and Applications, \textbf{113} (1986) 12-35.
\bibitem{gajewski} H. Gajewski, K. Groger, Reaction-diffusion processes of electrically charged species, Math. Nachr., {\bf{177}} (1996), 109-130.
\bibitem{grisvard} P. Grisvard, Elliptic Problems in Nonsmooth Domains, Pitman Advanced Publishing Program, Boston, 1985.
\bibitem{lee} C.-C. Lee, The charge conserving Poisson-Boltzmann equations: Existence, uniqueness, and maximum principle, Journal of Mathematical Physics {\bf{55}} 051503 (2014) https://doi.org/10.1063/1.4878492
\bibitem{jl} J. M. Lee, Introduction to Smooth Manifolds. 2nd ed. New York: Springer, 2013.
\bibitem{liu}J.-G. Liu, J. Wang. Global existence for Nernst-Planck-Navier-Stokes system in $\mathbb{R}^n$. Communications in Mathematical Sciences {\bf{18}} (2020) 1743-1754.
\bibitem{pham} V. S. Pham, Z. Li, K. M. Lim, J. K. White, J. Han, Direct numerical simulation of electroconvective instability and hysteretic current-voltage response of a permselective membrane, Phys. Rev. E {\bf{86}} 046310 (2012) https://doi.org/10.1103/PhysRevE.86.046310
\bibitem{prob}R. Probstein, Physicochemical Hydrodynamics: An Introduction. 2nd ed., Wiley-Interscience, 2003.
\bibitem{rubibook} I. Rubinstein, Electro-Diffusion of Ions, SIAM Studies in Applied Mathematics, SIAM, Philadelphia 1990.
\bibitem{rubinstein}S. M. Rubinstein, G. Manukyan, A. Staicu, I. Rubinstein, B. Zaltzman, R.G.H. Lammertink, F. Mugele, M. Wessling, Direct observation of a nonequilibrium electro-osmotic instability. Phys. Rev. Lett. {\bf{101}} (2008) 236101-236105.
\bibitem{rubishtil}I. Rubinstein, L. Shtilman, Voltage against current curves of cation exchange membranes, Journal of the Chemical Society, Faraday Transactions {\bf{75}} (1979) 231-246.
\bibitem{rubizaltz} I. Rubinstein, B. Zaltzman, Electro-osmotically induced convection at a permselective membrane. Phys. Rev. E {\bf{62}} (2000) 2238-2251.
\bibitem{schmuck} M. Schmuck. Analysis of the Navier-Stokes-Nernst-Planck-Poisson system. Math. Models Methods Appl. {\bf{19}}, (2009), 993-1014.
\bibitem{zaltzrubi}B. Zaltzman, I. Rubinstein, Electro-osmotic slip and electroconvective instability. J. Fluid Mech. {\bf{579}}, (2007), 173-226.
\end{thebibliography}
\end{document} |
\begin{document}
\DeclareGraphicsExtensions{.pdf,.gif,.jpg}
\keywords{Chains of infinite order, coupling from the past algorithms, canonical Markov approximation, $\bar{d}$-distance}
\subjclass[2000]{Primary 60G10; Secondary 60K99}
\title{Markov approximation of chains of infinite order in the $\bar{d}$-metric}
\author{S. Gallo}
\address{Instituto de Matem\'atica, Estat\'{\i}stica e Computa\c c\~ao
Cient\'\i fica\\
Universidade Estadual de Campinas\\
Rua Sergio Buarque de Holanda, 651\\
13083-859 Campinas, Brasil}
\email{[email protected]}
\author{M. Lerasle}
\address{Laboratoire J.A.Dieudonn\'e UMR CNRS 6621\\
Universit\'e de Nice Sophia-Antipolis, Parc Valrose\\
06108 Nice Cedex 2\\
France }
\email{ [email protected]}
\author{D. Y. Takahashi}
\address{Institute of Neuroscience and Psychology Department, Princeton University\\
Green Hall, NJ, 08540}
\email{[email protected]}
\thanks{SG is supported by FAPESP grant 2009/09809-1. ML was supported by FAPESP grant 2009/09494-0. DYT was supported by FAPESP grant 2008/08171-0 and Pew Latin American Fellowship. This work is part of USP project ``Mathematics, computation, language and the brain.''}
\begin{abstract}
We derive explicit upper bounds for the $\bar{d}$-distance between a chain of infinite order and its canonical $k$-steps Markov approximation. Our proof is entirely constructive and involves a ``coupling from the past'' argument. The new method covers probability kernels that are not necessarily continuous, as well as chains with null transition probabilities. These results imply, in particular, the Bernoulli property for these processes.
\end{abstract}
\maketitle
\section{{\bf INTRODUCTION}}
Chains of infinite order are random processes that are specified by probability kernels (conditional probabilities), which may depend on the whole past. They provide a flexible model that is very useful in different areas of applied probability and statistics, from bioinformatics \cite{bejerano2001a,busch2009} to linguistics \cite{galves2009,GGGL}. They are also models of considerable theoretical interest in ergodic theory \cite{walters/2007, coelho/quas/1998, quas/1996, hulse/1991} and in the general theory of stochastic process \cite{fernandez/maillard/2005, comets/fernandez/ferrari/2002, bramson/kalikow/1993}.
A natural approach to study chains of infinite order is to approximate the original process by Markov chains of growing orders.
In this article, we derive new upper-bounds on the $\bar{d}$-distance between a chain and its canonical $k$-steps Markov approximation.
Introduced by \cite{ornstein/1974} to study the isomorphism problem for Bernoulli shifts, the $\bar{d}$-metric is of fundamental importance in ergodic theory, where chains of infinite order are also known as $g$-measures. The $\bar{d}$-distance between two processes can be informally described as the minimal proportion of times we have to change a typical realization of one process in order to obtain a typical realization of the other. \cite{ornstein/1974} showed that the set of processes which are measure-theoretically isomorphic to Bernoulli shifts is $\bar{d}$-closed. Ergodic Markov chains are examples of processes that are isomorphic to Bernoulli shifts. Therefore, if a process can be approximated arbitrarily well in the $\bar{d}$-metric by a sequence of ergodic Markov chains, then this process has the Bernoulli property. In this article we prove the existence of Markov approximation schemes for classes of chains of infinite order with not necessarily continuous and possibly null transition probabilities. Some of these processes were not considered before.
For example, \cite{coelho/quas/1998}, \cite{fernandez/galves/2002}, and \cite{johansson/oberg/pollicott/2010} required the continuity of the probability kernels. Our results show that these new examples are isomorphic to Bernoulli shifts and provide explicit upper bounds for the Markov approximation in several important cases, giving therefore information on \emph{how good} these approximations are.
Besides ergodic theory, the $\bar d$-distance is useful in statistics and information theory. \cite{rissanen/1983} proposed to model data as realizations of stochastic chains, and proved that these data can be optimally compressed using the (unknown) probability kernel of the chain. The statistical problem is then to recover this probability kernel from the observation of typical data. Since the number of parameters to estimate is infinite, this task is impossible in general. A possible strategy to overcome this problem is the following.
(1) Couple the original chain with a Markov approximation
and (2) work with the approximating Markov chain. The $\bar{d}$-distance between the chain and its Markov approximation controls the error made in step (1). The idea is that, if this control is good enough, the good properties of the approximating Markov chain proved in step (2) can be used to study the original chain. For instance, \cite{duarte/galves/garcia/2006} and \cite{csiszar/talata/2010} derived consistency results for chains of infinite order from the consistency of BIC estimators for Markov chains proved in \cite{csiszar/talata/2006}. This ``two steps'' procedure was also used in \cite{collet/duarte/galves/2005} to obtain a bootstrap central limit theorem for chains of infinite order from the renewal property of the approximating Markov chains.
Our main results derive from coupling arguments. We first introduce a flexible class of \emph{Coupling from the past} algorithms (CFTP algorithms, see Section \ref{sec:cftp}). CFTP algorithms constitute an important class of perfect simulation algorithms popularized by \cite{propp/wilson/1996}. Our main assumption on the chain is that the original chain of infinite order can be perfectly simulated \emph{via} such CFTP algorithms. We state a technical result, Lemma \ref{theo:1}, which provides an abstract upper bound for the $\bar{d}$-distance with the canonical Markov approximation. This bound is then made explicit under various extra assumptions on the process used in the study of the CFTP algorithms of \citep{comets/fernandez/ferrari/2002,gallo/2009,desantis/piccioni/2010,gallo/garcia/2011}.
To our knowledge, \cite{fernandez/galves/2002} provide the best explicit bounds in the literature for the $\bar{d}$-distance between a chain of infinite order and its canonical Markov approximation, depending only on the continuity rate of the probability kernels. Their result applies to weakly non-null chains having summable continuity rates. Our method recovers the same bounds, substituting weak non-nullness by a weaker assumption, see Theorem \ref{coro:expli2}. Assuming weak non-nullness, we also obtain explicit upper bounds in some non-summable continuity regimes, and for kernels that are not even necessarily continuous but satisfy certain types of localized continuity, as introduced in \cite{desantis/piccioni/2010}, \cite{gallo/2009} and \cite{gallo/garcia/2011}. This is the content of Theorems \ref{coro:italia} and \ref{lemma:1}, which provide, as far as we know, the first results for non-continuous chains. Our results should also be compared with the results in \cite{johansson/oberg/pollicott/2010}, who prove the Bernoulli property for the square summable continuity regime assuming strong non-nullness, although they do not provide an explicit upper bound for the approximations.
The paper is organized as follows. In Section \ref{sec:notations}, we introduce the notation and basic definitions used all along the paper. In Section \ref{sec:conccoup}, we construct the coupling between the original chain and its canonical Markov approximation and we introduce the class of CFTP algorithms perfectly simulating the chains. Our main results are stated in Section \ref{sec:theo}.
We postpone the proofs to Section \ref{sec:proofs}. For convenience of the reader, we leave in Appendix some extensions and technical results on the ``house of cards'' processes that are useful in our applications and are of independent interest.
\section{{\bf NOTATION, DEFINITIONS AND BACKGROUND}}\label{sec:notations}
\subsection{Notation}We use the conventions that $\ensuremath{\mathbb{N}}^{*}=\ensuremath{\mathbb{N}}\setminus\set{0}$, $\overline{\ensuremath{\mathbb{N}}}=\ensuremath{\mathbb{N}}^{*}\cup\set{\infty}$. Let $A$ be the set $\set{1,2,\ldots,N}$ for some $N\in\overline{\ensuremath{\mathbb{N}}}$. Given two integers $m\leq n$, let $a_m^n$ be the string $a_m \ldots a_n$ of symbols in $A$. For
any $m\leq n$, the length of the string $a_m^n$ is denoted by
$|a_m^n|$ and defined by $n-m+1$. Let $\emptyset$ denote the empty string, of length $|\emptyset|=0$. For any $n\in\mathbb{Z}$, we will
use the convention that $a_{n+1}^{n}=\emptyset$, and naturally
$|a_{n+1}^{n}|=0$. Given two strings $v$ and $v'$, we denote by $vv'$
the string of length $|v| + |v'| $ obtained by concatenating the two
strings. If $v'=\emptyset$, then $v\emptyset=\emptyset v=v$. The concatenation of strings is also extended to the case
where $v=\ldots a_{-2}a_{-1}$ is a semi-infinite sequence of
symbols. If $n\in\ensuremath{\mathbb{N}}^{*}$ and $v$ is a finite string of
symbols in $A$, $v^{n}=v\ldots v$ is the concatenation of
$n$ times the string $v$. In the case where $n=0$, $v^{0}$ is the empty string $\emptyset$. Let
$$
A^{-\mathbb{N}}=A^{\{\ldots,-2,-1\}}\,\,\,\,\,\,\textrm{ and }\,\,\,\,\,\,\, A^{\star} \,=\, \bigcup_{j=0}^{+\infty}\,A^{\{-j,\dots, -1\}}\, ,
$$
be, respectively, the set of all infinite strings of past symbols and the set of all finite strings of past symbols.
The case $j=0$ corresponds to the empty string $\emptyset$. Finally, we denote by $\underline{a}=\ldots a_{-2}a_{-1}$ the elements of $A^{-\mathbb{N}}$.
\subsection{Kernels, chains and coupling}
\begin{defi}A family of transition probabilities, or \emph{kernel}, on an alphabet $A$ is a function
\begin{equation*}
\begin{array}{cccc}
P:&A\times A^{-\mathbb{N}}&\rightarrow& [0,1]\\
&(a,\underline{x})&\mapsto&P(a|\underline{x})
\end{array}
\end{equation*}
such that
\[
\sum_{a\in A}P(a|\underline{x})=1\,\,,\,\,\,\,\,\,\forall \underline{x}\in A^{-\mathbb{N}}.
\]
\end{defi}
\noindent $P$ is called a Markov kernel if there exists $k$ such that $P(a|\underline{x})=P(a|\underline{y})$ whenever $x_{-k}^{-1}=y_{-k}^{-1}$. In the present paper we are mostly interested in \emph{non}-Markov kernels, in which $P(a|\underline{x})$ may depend on the whole past $\underline{x}$.
\begin{defi}
A stationary stochastic chain ${\bf X}=\{X_{n}\}_{n\in\ensuremath{\mathbb{Z}}}$ with distribution $\mu$ on $A^{\mathbb{Z}}$ is said to be \emph{compatible} with a family of transition probabilities $P$ if the latter is a regular version of the conditional probabilities of the former, that is,
\begin{equation}\label{compa}
\mu(X_{0}=a|X_{-\infty}^{-1}=\underline{x})=P(a|\underline{x})
\end{equation}
for every $a\in A$ and $\mu$-a.e. $\underline{x}$ in $A^{-\mathbb{N}}$.
\end{defi}
\noindent If $P$ is non-Markov, it may be hard to prove the existence of a stationary chain ${\bf X}$ compatible with it. In order to solve this issue, we assume the existence of \emph{coupling from the past algorithms} for the chain (see Section \ref{sec:cftp}). This ``constructive argument'' guarantees the existence and uniqueness of ${\bf X}$.
\begin{defi}[Canonical $k$-steps Markov approximation]Assume that ${\bf X}$ is a stationary chain with distribution $\mu$. The \emph{canonical $k$-steps Markov approximation} of ${\bf X}$ is the stationary $k$-step Markov chain ${\bf X}^{[k]}$ compatible with the kernel $P^{[k]}_{\mu}$ defined as
\[
P^{[k]}_{\mu}(a|x_{-k}^{-1})=\mu(X_{0}=a|X_{-k}^{-1}=x_{-k}^{-1}).
\]
\end{defi}
\noindent Since $\mu$ is uniquely determined by $P$, we will from now on drop the subscript $\mu$ in $P^{[k]}_{\mu}$; it will be understood that $P^{[k]}=P^{[k]}_{\mu}$.
Let us recall that a coupling between two chains ${\bf X}$ and ${\bf Y}$ taking values in the same alphabet $A$ is a stochastic chain ${\bf Z}=\{Z_{n}\}_{n\in\mathbb{Z}}=\{(\bar{X}_{n},\bar{Y}_{n})\}_{n\in\mathbb{Z}}$ on $(A\times A)^{\ensuremath{\mathbb{Z}}}$ such that $\bar{\bf X}$ has the same distribution as ${\bf X}$ and $\bar{\bf Y}$ has the same distribution as ${\bf Y}$. For any pair of stationary chains ${\bf X}$ and ${\bf Y}$, let $\mathcal{C}({\bf X},{\bf Y})$ be the set of couplings between ${\bf X}$ and ${\bf Y}$.
\begin{defi}[$\bar{d}$-distance]The $\bar{d}$-distance between two stationary chains ${\bf X}$ and ${\bf Y}$ is defined by
\[
\bar{d}({\bf X},{\bf Y})=\inf_{(\bar{\bf X},\bar{\bf Y})\in\mathcal{C}({\bf X},{\bf Y})}\mathbb{P}(\bar{X}_{0}\neq\bar{Y}_{0}).
\]
\end{defi}
For the class of ergodic processes, this distance has another interpretation which is more intuitive: it is the minimal proportion of sites we have to change in a \emph{typical} realization of ${\bf X}$ in order to obtain a \emph{typical} realization of ${\bf Y}$. Formally,
\[
\bar{d}({\bf X},{\bf Y})=\inf_{(\bar{{\bf X}},\bar{\bf Y})\in\mathcal{C}({\bf X},{\bf Y})}\lim_{n\rightarrow+\infty}\frac{1}{n}\sum_{i=1}^{n}{\bf 1}\{\bar{X}_{i}\neq \bar{Y}_{i}\}.
\]
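As a toy illustration (ours, not from the paper): for two i.i.d. Bernoulli chains with parameters $p$ and $q$, the monotone coupling $X_n={\bf 1}\{U_n<p\}$, $Y_n={\bf 1}\{U_n<q\}$ driven by the same uniforms changes a site exactly when $U_n$ falls between $p$ and $q$, so the proportion of changed sites converges to $|p-q|$, which is $\bar{d}({\bf X},{\bf Y})$ in this case:

```python
import random

def dbar_bernoulli_estimate(p, q, n=200_000, seed=0):
    # Empirical proportion of sites where the monotonically coupled
    # Bernoulli(p) and Bernoulli(q) sequences disagree.
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(n):
        u = rng.random()
        mismatches += int((u < p) != (u < q))
    return mismatches / n

est = dbar_bernoulli_estimate(0.3, 0.5)
assert abs(est - 0.2) < 0.01   # d-bar equals |p - q| = 0.2 here
```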
\subsection{Coupling from the past algorithm (CFTP)}\label{sec:cftp}
Our CFTP algorithm constructs a sample of the stationary chain compatible with a given kernel $P$, using a sequence ${\bf U} = \{U_n\}_{n \in \ensuremath{\mathbb{Z}}}$ of i.i.d. random variables uniformly distributed in $[0,1[$. We denote by $(\Omega,\mathcal{F},\mathbb{P})$ the probability space associated to ${\bf U}$. The CFTP is completely determined by its \emph{update function} $F:A^{-\mathbb{N}}\cup A^{\star}\times[0,1[\rightarrow A$, which satisfies, for any $\underline{a}\in A^{-\mathbb{N}}$ and any $a\in A$, $\mathbb{P}(F(\underline{a},U_{0})=a)=P(a|\underline{a})$. Using this function, we define the \emph{set of coalescence times} $\Theta$ and the \emph{reconstruction function} $\Phi$ associated to $F$. For any pair of integers $m,n$ such that $-\infty<m\le n<+\infty$, let $F_{\{m,n\}}(\underline{a},U_{m}^{n})\in A^{n-m+1}$ be the sample obtained by \emph{applying recursively $F$ on the fixed past $\underline{a}$}, i.e., let $F_{\{m,m\}}(\underline{a},U_{m}):=F(\underline{a},U_{m})$ and
\[
F_{\{m,n\}}(\underline{a},U_{m}^{n}):=F_{\{m,n-1\}}(\underline{a},U_{m}^{n-1})F(\underline{a}F_{\{m,n-1\}}(\underline{a},U_{m}^{n-1}),U_{n})\;.
\]
Secondly, let $F_{[m,m]}(\underline{a},U_{m}):=F(\underline{a},U_{m})$ and
\begin{equation}\label{eq:F}
F_{[m,n]}(\underline{a},U_{m}^{n})=F\left(\underline{a}F_{\{m,n-1\}}(\underline{a},U_{m}^{n-1}),U_{n}\right)\;.
\end{equation}
$F_{[m,n]}(\underline{a},U_{m}^{n})$ is the last symbol of the sample $F_{\{m,n\}}(\underline{a},U_{m}^{n})$. The set
\begin{equation}\label{eq:theta}
\Theta[n]:=\{j\leq n:F_{[j,n]}(\underline{a},U_{j}^{n}) = F_{[j,n]}(\underline{b},U_{j}^{n}) \,\,\,\textrm{for all }\,\underline{a},\underline{b} \in A^{-\ensuremath{\mathbb{N}}} \}
\end{equation}
is called the \emph{set of coalescence times} for the time index $n$.
Finally, the reconstruction function of time $n$ is defined by
\begin{equation}\label{eq:Phi}
[\Phi({\bf U})]_{n}=F_{[\theta[n],n]}(\underline{a},U_{\theta[n]}^{n})
\end{equation}
where $\theta[n]$ is any element of $\Theta[n]$.
Given a kernel $P$, if $\Theta[0]\neq \emptyset$ a.s. (and therefore $\Theta[n]\neq\emptyset$ a.s. for any $n\in\mathbb{Z}$), then $[\Phi({\bf U})]_{n}$ is distributed according to the unique stationary measure compatible with $P$; see \cite{desantis/piccioni/2010}.
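For intuition, here is a minimal CFTP sketch (our own toy example, with a hypothetical two-symbol, order-1 kernel): the algorithm looks further and further into the past, reusing its uniforms, until every possible starting past produces the same symbol at time $-1$ (a coalescence time); the returned symbol is then exactly stationary:

```python
import random

P = {0: 0.7, 1: 0.4}   # hypothetical kernel: P[x] = P(next symbol = 0 | last = x)

def F(last, u):
    # Update function: one uniform draw produces the next symbol.
    return 0 if u < P[last] else 1

def cftp_sample(rng):
    us = []      # us[0] is the most ancient uniform; us[-1] drives time -1
    m = 1
    while True:
        while len(us) < m:           # look further into the past, reusing old U's
            us.insert(0, rng.random())
        states = {0, 1}              # run every possible starting state forward
        for u in us:
            states = {F(s, u) for s in states}
        if len(states) == 1:         # coalescence: the output has forgotten the past
            return states.pop()
        m *= 2

rng = random.Random(1)
samples = [cftp_sample(rng) for _ in range(20000)]
freq0 = samples.count(0) / len(samples)
assert abs(freq0 - 4.0 / 7.0) < 0.02   # stationary law of this kernel is (4/7, 3/7)
```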
\section{{\bf CONSTRUCTION OF THE COUPLING}}\label{sec:conccoup}
For any $\underline{a}\in A^{-\ensuremath{\mathbb{N}}}$, let $\mathcal{I}(\underline{a}):=\{I_{k}(a|a_{-k}^{-1})\}_{k\in\ensuremath{\overline{\N}},\;a\in A}$ be any partition of $[0,1[$ having the following properties:
\begin{enumerate}
\item For any $k\in\ensuremath{\overline{\N}}$, the Lebesgue measure or length $|I_{k}(a|a_{-k}^{-1})|$ of $I_{k}(a|a_{-k}^{-1})$ only depends on $a$ and $a_{-k}^{-1}$,
\item for any $\underline{a}$ and $a$,
\[
\sum_{k\in\ensuremath{\overline{\N}}}|I_{k}(a|a_{-k}^{-1})|=P(a|\underline{a}),
\]
\item the intervals are disposed as represented in the upper part of Figure \ref{fig:partition}.
\end{enumerate}
\begin{figure}
\caption{Disposition of the intervals $I_{k}(a|a_{-k}^{-1})$ of a range partition of $[0,1[$.}
\label{fig:partition}
\end{figure}
\begin{defi}
We call \emph{range partitions} the partitions of $[0,1[$ satisfying (1), (2) and (3) for some kernel $P$.
\end{defi}
\noindent
The following lemma is proved in Section~\ref{sect.proof.lemma:simple}.
\begin{lemma}\label{lemma:simple}
A set of range partitions satisfies, for any $\ensuremath{\mathbf{1}}derline{a}$ and $a\in A$,
\[
\sum_{i=0}^{k}|I_{i}(a|a_{-i}^{-1})|\leq \inf_{\ensuremath{\mathbf{1}}derline{z}}P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{z})\,,\,\,\,\,\forall k\geq0\enspace.
\]
\end{lemma}
\noindent
Given a range partition $\mathcal{I}(\ensuremath{\mathbf{1}}derline{a})$, the following $F$ is an update function, due to property (2).
\begin{equation}\label{eq:update}
F(\ensuremath{\mathbf{1}}derline{a},U_{0}):=\sum_{a\in A}a.{\bf 1}\left\{U_{0}\in \bigcup_{k\in\ensuremath{\overline{\N}}}I_{k}(a|a_{-k}^{-1})\right\}.
\end{equation}
\noindent
This function $F$ explains the name ``range partition'': for a given past $\ensuremath{\mathbf{1}}derline{a}$, when the uniform r.v. $U_{0}$ belongs to $\bigcup_{a\in A}\bigcup_{i=0}^{k}I_{i}(a|a_{-i}^{-1})$, $F$ constructs a symbol by looking at most $k$ steps into the past.
\noindent
Let $L:\left(A^{-\mathbb{N}}\cup A^{\star}\right)\times[0,1[\rightarrow \ensuremath{\overline{\N}}$ be the \emph{range function} defined by
\begin{equation}\label{eq:length}
L(\ensuremath{\mathbf{1}}derline{a},u):=\sum_{k\in\ensuremath{\overline{\N}}}k.{\bf1}\{u\in \cup_{a\in A}I_{k}(a|a_{-k}^{-1})\}\;.
\end{equation}
$L$ associates to a past $\ensuremath{\mathbf{1}}derline{a}$ and a real number $u\in [0,1[$ the length of the suffix of $\ensuremath{\mathbf{1}}derline{a}$ that $F$ needs in order to construct the next symbol when $U_{0}=u$.
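To make these definitions concrete, here is a minimal Python sketch of an update function $F$ and range function $L$ built from a range partition, for a hypothetical kernel of Markov order $1$ on the alphabet $A=\{0,1\}$; the transition probabilities are illustrative assumptions, not taken from the text.

```python
# Hypothetical order-1 kernel on A = {0, 1} (illustrative assumption):
# P[b][a] = probability of the next symbol a given the last symbol b.
P = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}

# Range partition of [0,1[: the range-0 intervals I_0(a) have length
# inf_b P(a|b); the remaining mass P(a|b) - |I_0(a)| goes to the
# range-1 intervals I_1(a|b), laid out after the range-0 block.
alpha0 = {a: min(P[b][a] for b in (0, 1)) for a in (0, 1)}
A0 = alpha0[0] + alpha0[1]  # total length of the range-0 region

def F(past, u):
    """Update function: next symbol from the past and a uniform u."""
    if u < alpha0[0]:          # range 0: symbol 0, past not consulted
        return 0
    if u < A0:                 # range 0: symbol 1, past not consulted
        return 1
    b = past[-1]               # range 1: look one step into the past
    return 0 if u < A0 + (P[b][0] - alpha0[0]) else 1

def L(u):
    """Range function: how far into the past F looks for this u."""
    return 0 if u < A0 else 1
```

One checks that, for each fixed last symbol $b$, the total length of the intervals yielding symbol $a$ is exactly $P(a\mid b)$, which is property (2) of a range partition.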
\noindent
Using these functions, define, as in Section \ref{sec:cftp}, the related coalescence sets $\Theta[i]$, $i\in\mathbb{Z}$, and the reconstruction function $\widehat{P}i({\bf U})$, which is distributed according to the unique stationary distribution compatible with $P$ whenever $\Theta[0]$ is a.s. non-empty. \\
Let us now define the functions $F^{[k]}$ and $L^{[k]}$ that we will use for the construction of ${\bf X}^{[k]}$.
Observe that, on the one hand, by definition of the canonical $k$-steps Markov approximation we have for any $a\in A$ and $a_{-k}^{-1}\in A^{k}$
\[
P^{[k]}(a|a_{-k}^{-1}):=\mu(X_{0}=a|X_{-k}^{-1}=a_{-k}^{-1})=\int_{A^{-\mathbb{N}}}P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{z})d\mu(\ensuremath{\mathbf{1}}derline{z}|a_{-k}^{-1})\geq \inf_{\ensuremath{\mathbf{1}}derline{z}}P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{z})\enspace.
\]
On the other hand, by Lemma \ref{lemma:simple}, $\inf_{\ensuremath{\mathbf{1}}derline{z}}P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{z})\geq\sum_{j=0}^{k}|I_{j}(a|a_{-j}^{-1})|$. Thus we can define, for any $a_{-k}^{-1}$, the set of intervals $\{I^{[k]}(a|a_{-k}^{-1})\}_{a\in A}$ having length $|I^{[k]}(a|a_{-k}^{-1})|=P^{[k]}(a|a_{-k}^{-1})-\sum_{j=0}^{k}|I_{j}(a|a_{-j}^{-1})|$ and disposed as in Figure \ref{fig:partition}. The functions $F^{[k]}$ and $L^{[k]}$ are defined as follows:
\begin{equation}\label{eq:up}
F^{[k]}(a_{-k}^{-1},U_{0}):=\sum_{a\in A}a{\bf 1}\{U_{0}\in \cup_{j=0}^{k}I_{j}(a|a_{-j}^{-1})\cup I^{[k]}(a|a_{-k}^{-1})\}
\end{equation}
and
\begin{equation}
L^{[k]}(a_{-k}^{-1},U_{0}):=
\sum_{j=0}^{k}j.{\bf1}\{U_{0}\in\cup_{a\in A}I_{j}(a|a_{-j}^{-1})\}+k.{\bf1}\{U_{0}\in\cup_{a\in A}I^{[k]}(a|a_{-k}^{-1})\}.
\end{equation}
Using these functions, define, as in Section \ref{sec:cftp}, the related coalescence sets $\Theta^{[k]}[i]$, $i\in\mathbb{Z}$, and the reconstruction function $\widehat{P}i^{[k]}({\bf U})$, which is distributed according to the unique stationary distribution compatible with $P^{[k]}$ whenever $\Theta^{[k]}[0]$ is a.s. non-empty. \\
Using the same sequence of uniforms ${\bf U}$ and assuming that $\Theta[0]$ and $\Theta^{[k]}[0]$ are a.s. non-empty, $(\widehat{P}i({\bf U}),\widehat{P}i^{[k]}({\bf U}))$ is an $(A\times A)$-valued chain whose coordinates are distributed as ${\bf X}$ and ${\bf X}^{[k]}$ respectively. It follows that $(\widehat{P}i({\bf U}),\widehat{P}i^{[k]}({\bf U}))$ is a coupling of the two chains. Hence, we have constructed a CFTP algorithm for perfect simulation of the coupled chains.
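The CFTP scheme just described can be sketched in a few lines of Python for the toy order-$1$ kernel used above (all names and numbers are illustrative assumptions): walk back in time until a uniform falls in the range-$0$ region, so that the symbol there is the same for every past, then reconstruct forward to time $0$.

```python
import random

# Toy order-1 kernel on A = {0, 1} (illustrative assumption).
P = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
alpha0 = {a: min(P[b][a] for b in (0, 1)) for a in (0, 1)}
A0 = alpha0[0] + alpha0[1]          # total length of the range-0 region

def F(b, u):
    """Update function from the range partition (b = last symbol)."""
    if u < alpha0[0]:
        return 0
    if u < A0:
        return 1
    return 0 if u < A0 + (P[b][0] - alpha0[0]) else 1

def cftp_sample(rng):
    """One exact draw from the stationary law via coupling from the past."""
    us = []                          # us[j] stands for U_{-j}
    while True:
        us.append(rng.random())
        if us[-1] < A0:              # range 0: coalescence time found,
            break                    # every past produces the same symbol
    x = F(None, us[-1])              # the past is not consulted here
    for j in range(len(us) - 2, -1, -1):
        x = F(x, us[j])              # deterministic reconstruction to time 0
    return x

rng = random.Random(1)
freq1 = sum(cftp_sample(rng) for _ in range(20000)) / 20000
# the stationary law of this kernel puts mass 0.6 on the symbol 1
```

The same loop, run with $F^{[k]}$ in place of $F$ on the same uniforms ${\bf U}$, would produce the second coordinate of the coupling.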
\section{{\bf STATEMENTS OF THE RESULTS}}\label{sec:theo}
\subsection{A key lemma}
Let us first state a technical lemma that is central in the proof of our main results.
\begin{lemma}\label{theo:1}
Assume that there exists a set of range partitions $\{\mathcal{I}(\ensuremath{\mathbf{1}}derline{a})\}_{\ensuremath{\mathbf{1}}derline{a}}$ such that the set of coalescence times $\Theta[0]\cap\Theta^{[k]}[0]$ is $\mathbb{P}$-a.s. non-empty. Then, for any $\theta[0]\in \Theta[0]$,
\begin{equation}\label{eq:discrepancy}
\bar{d}({\bf X},{\bf X}^{[k]})\leq \mathbb{P}\left(\bigcup_{\ensuremath{\mathbf{1}}derline{a}}\bigcup_{i=\theta[0]}^{0}\left\{L\left(\ensuremath{\mathbf{1}}derline{a}F_{\{\theta[0],i-1\}}(\ensuremath{\mathbf{1}}derline{a},U_{\theta[0]}^{i-1}),U_{i}\right)>k\right\}\right).
\end{equation}
where, for $i=\theta[0]$, the event reads $\{L\left(\ensuremath{\mathbf{1}}derline{a},U_{\theta[0]}\right)>k\}$.
\end{lemma}
Examples of range partitions satisfying the conditions of this lemma have already been built, for example in \cite{comets/fernandez/ferrari/2002}, \cite{gallo/2009}, \cite{gallo/garcia/2011} and \cite{desantis/piccioni/2010}. These works assume some regularity conditions on $P$ and some non-nullness hypotheses, which are presented in Sections \ref{sec:continuity}, \ref{sec:ita} and \ref{sec:localconti}. In these sections, we derive explicit upper bounds for \eqref{eq:discrepancy} under the respective assumptions. Before that, let us make a remark on Bernoullicity.
\begin{obs}[A remark on Bernoullicity]
Under the conditions of each of the works cited above, we will exhibit $\theta[0]\in\Theta[0]$ which belongs to $\Theta^{[k]}[0]$ for all sufficiently large $k$, and we will prove that
\begin{equation}\label{eq:suppose}
\mathbb{P}\left(\bigcup_{\ensuremath{\mathbf{1}}derline{a}}\bigcup_{i=\theta[0]}^{0}\left\{L\left(\ensuremath{\mathbf{1}}derline{a}F_{\{\theta[0],i-1\}}(\ensuremath{\mathbf{1}}derline{a},U_{\theta[0]}^{i-1}),U_{i}\right)>k\right\}\right)\stackrel{k\rightarrow\infty}{\longrightarrow}0.
\end{equation}
It follows, by Lemma \ref{theo:1}, that
\begin{equation*}
\lim_{k \rightarrow \infty} \bar{d}({\bf X},{\bf X}^{[k]}) = 0.
\end{equation*}
We also have, for all sufficiently large $k$, that ${\bf X}^{[k]}$ is an ergodic Markov chain since $\Theta^{[k]}[0]$ is non-empty. Now, by the $\bar{d}$-closure of the set of processes isomorphic to a Bernoulli shift (see for example \cite{shields/1996}, Theorem IV.2.10, p.~228) and the fact that ergodic Markov processes have the Bernoulli property (\cite{shields/1996}, Theorem IV.2.10, p.~227), we conclude that the processes considered in \cite{comets/fernandez/ferrari/2002}, \cite{gallo/2009}, \cite{gallo/garcia/2011} and \cite{desantis/piccioni/2010} have the Bernoulli property.
\end{obs}
\subsection{Kernels with summable continuity rate}\label{sec:continuity}
Let us first define continuity.
\begin{defi}[Continuity points and continuous kernels]\label{defi:conti}For any $k\in\ensuremath{\mathbb{N}}$, $a$ and $a_{-k}^{-1}$, let $\alpha_{k}(a|a_{-k}^{-1}):=\inf_{\ensuremath{\mathbf{1}}derline{z}}P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{z})$. A past $\ensuremath{\mathbf{1}}derline{a}$ is called a \emph{continuity point} for $P$ or $P$ is said to be continuous in $\ensuremath{\mathbf{1}}derline{a}$ if
\[
\alpha_{k}(a_{-k}^{-1}):=\sum_{a\in A}\alpha_{k}(a|a_{-k}^{-1})\stackrel{k\rightarrow+\infty}{\longrightarrow}1.
\]
We say that $P$ is continuous when
\[
\alpha_{k}:=\inf_{a_{-k}^{-1}}\alpha_{k}(a_{-k}^{-1})\stackrel{k\rightarrow+\infty}{\longrightarrow}1.
\]
We say that $P$ has summable continuity rate when $\sum_{k\geq 0}(1-\alpha_{k})<\infty$.
\end{defi}
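As an elementary illustration (our own example, not taken from the cited works), any kernel of finite Markov order has summable continuity rate:

```latex
% If P has finite Markov order m, i.e. P(a|\underline{a}) depends only on
% a_{-m}^{-1}, then for every k >= m the infimum over continuations is
% attained trivially:
\[
\alpha_{k}(a|a_{-k}^{-1})
=\inf_{\underline{z}}P(a|a_{-k}^{-1}\underline{z})
=P(a|a_{-m}^{-1}),
\qquad\text{so}\qquad
\alpha_{k}=\inf_{a_{-k}^{-1}}\sum_{a\in A}P(a|a_{-m}^{-1})=1 .
\]
% Hence 1 - \alpha_k = 0 for all k >= m, and since each term is at most 1,
\[
\sum_{k\ge0}(1-\alpha_{k})\;\le\; m\;<\;\infty .
\]
```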
\noindent
Let us also define weak non-nullness.
\begin{defi}\label{def:weakly}
We say that a kernel $P$ is \emph{weakly non-null} if $\alpha_{0}>0$, where $\alpha_{0}:=\sum_{a\in A}\alpha_{0}(a|\emptyset)$.
\end{defi}
\noindent
\cite{desantis/piccioni/2010} have introduced a more general assumption that we call \emph{very weak non-nullness}, see Definition \ref{def.vwnn}. We postpone this definition to Section \ref{sect:proof.thm.expli2} in order to avoid technicality at this stage.
\begin{theo}\label{coro:expli2}
Assume that $P$ has summable continuity rate and is very weakly non-null. Then, there exists a constant $C<+\infty$ such that, for any sufficiently large $k$,
\[
\bar{d}({\bf X},{\bf X}^{[k]})\leq C(1-\alpha_{k})\enspace.
\]
\end{theo}
\begin{remark}
This upper bound is new since we do not assume weak non-nullness. \cite{fernandez/galves/2002} showed, under weak non-nullness, that there exists a positive constant $C$ such that, for any sufficiently large $k$,
\[
\bar{d}({\bf X},{\bf X}^{[k]})\leq C\beta_{k}
\]
where
\[
\beta_{k}:=\sup\{|P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{x})-P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{y})|\,:\,\,a\in A,\,a_{-k}^{-1}\in A^{k},\,\ensuremath{\mathbf{1}}derline{x},\ensuremath{\mathbf{1}}derline{y}\in A^{-\mathbb{N}}\}.
\]
This latter quantity is related to $\alpha_{k}$ through the inequalities $1-\alpha_{k}\ge \frac{|A|}{2}\beta_{k}$ and $1-\alpha_{k}\leq D\beta_{k}$ for some $D>1$ and all sufficiently large $k$. Moreover, $1-\alpha_{k}=\beta_{k}$ for binary alphabets. Thus, Theorem \ref{coro:expli2} extends the bound of \cite{fernandez/galves/2002}.
\end{remark}
\subsection{Using a prior knowledge of the histories that occur}\label{sec:ita}
\cite{desantis/piccioni/2010} introduced the following assumptions on kernels. Define
\[
\forall k\geq 1,\quad J_{k}(U_{-k}^{-1}):=\{\ensuremath{\mathbf{1}}derline{x}\in A^{-\ensuremath{\mathbb{N}}}\, \mbox{ s.t. } \, \forall 1\leq l\leq k,\;x_{-l}=a\quad\textrm{if}\,\,U_{-l}\in I(a|\emptyset)\,\mbox{for some}\,a\in A\},
\]
\[
A_{0}:=\alpha_{0}\quad\mbox{and}\quad \forall k\geq1, \quad A_{k}(U_{-k}^{-1}):=\inf\{\alpha_{k}(x_{-k}^{-1}):\ensuremath{\mathbf{1}}derline{x}\in J_{k}(U_{-k}^{-1})\} \enspace.
\]
Finally, let
\begin{equation}\label{eq:ell}
\ell(U_{-\infty}^{0}):=\inf\{j\in\mathbb{N}:U_{0}<A_{j}(U_{-j}^{-1})\}.
\end{equation}
\begin{theo}\label{coro:italia}
If ${\bf X}$ has a kernel that satisfies $\mathbb{E}\paren{\prod_{k\ge0}A_{k}(U_{-k}^{-1})^{-1}}<\infty$, then there exists a positive constant $C<+\infty$ such that
\[
\bar{d}({\bf X},{\bf X}^{[k]})\leq C\mathbb{P}(\ell(U_{-\infty}^{0})>k).
\]
\end{theo}
In order to illustrate the interest of this result, let us give two simple examples. Other examples can be found in \cite{desantis/piccioni/2010} and \cite{gallo/garcia/2011}.
\paragraph{{\bf Summable continuity regime with weak non-nullness}} Theorem \ref{coro:italia} allows us to recover the result of Theorem \ref{coro:expli2} in the weakly non-null case. To see this, it is enough to observe that, for any $U_{-k}^{-1}$, $A_{k}(U_{-k}^{-1})\geq \alpha_{k}$ (see Definition \ref{defi:conti} for $\alpha_{k}$). It follows that $\prod_{k\ge0}\alpha_{k}>0$ (which is equivalent to $\sum_{k\ge0}(1-\alpha_{k})<+\infty$) implies that $\prod_{k\ge0}A_{k}(U_{-k}^{-1})$ is bounded away from zero; hence, its inverse has finite expectation. Therefore, Theorem \ref{coro:italia} applies and gives
\[
\bar{d}({\bf X},{\bf X}^{[k]})\leq C\mathbb{P}(\ell(U_{-\infty}^{0})>k)\leq C(1-\alpha_{k})\enspace.
\]
\paragraph{{\bf A simple discontinuous kernel on $A=\{1,2\}$}} Let $\epsilon\in(0,1/2)$ and let $\{p_{i}\}_{i\ge0}$ be any sequence such that $\epsilon\leq p_{i}<1-\epsilon$ for any $i\ge0$. Let $t(\ensuremath{\mathbf{1}}derline{a}):=\inf\{i\ge0:a_{-i-1}=2\}$ and let $\bar{P}$ be the following kernel:
\begin{equation}\label{eq.discontinuous.example}
\forall\ensuremath{\mathbf{1}}derline{a}\in\{1,2\}^{-\mathbb{N}}\,,\,\,\,\bar{P}(2|\ensuremath{\mathbf{1}}derline{a})=p_{t(\ensuremath{\mathbf{1}}derline{a})}\enspace.
\end{equation}
The existence of a unique stationary chain compatible with this kernel is proven in \cite{gallo/2009} for instance. This chain is the renewal sequence, that is, a concatenation of blocks of the form $1\ldots12$ having random length with finite expectation. It is clearly weakly non-null; however, it is not necessarily continuous. In fact, a simple calculation shows that $\alpha_{k}=1-\sup_{l,m\ge k}|p_{l}-p_{m}|$, which need not go to $1$. Nevertheless, if we assume furthermore that $\sup_{k\ge0}\alpha_{k}>1-2\alpha(1)\alpha(2)$, we have $\mathbb{E}\paren{\prod_{k\ge0}A_{k}(U_{-k}^{-1})^{-1}}<\infty$ (see Example 1 in \cite{desantis/piccioni/2010}). We now want to derive an upper bound for $\mathbb{P}(\ell(U_{-\infty}^{0})>k)$. First, define
\[
N(U_{-\infty}^{-1}):=\inf\{n\ge1: U_{-n}\in I(2)\},
\]
and observe that, $A_{k}(U_{-k}^{-1})=1$ for any $k\ge N(U_{-\infty}^{-1})$. It follows that
\begin{multline*}
\mathbb{P}(\ell(U_{-\infty}^{0})>k)=\mathbb{P}(\inf\{j\in\mathbb{N}:U_{0}<A_{j}(U_{-j}^{-1})\}>k)\\
\leq \mathbb{P}(U_{0}\geq A_{k}(U_{-k}^{-1}))\leq\mathbb{P}(A_{k}(U_{-k}^{-1})<1)\leq \mathbb{P}(N(U_{-\infty}^{-1})>k)\leq (1-\epsilon)^{k}.
\end{multline*}
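This geometric bound is easy to check by simulation. The sketch below (a Monte Carlo estimate with the assumed value $|I(2)|=\epsilon=0.2$, chosen for illustration) estimates $\mathbb{P}(N(U_{-\infty}^{-1})>k)$, i.e. the probability that none of $U_{-1},\ldots,U_{-k}$ falls in $I(2)$, and compares it with $(1-\epsilon)^{k}$.

```python
import random

eps = 0.2            # assumed uniform lower bound on the p_i's
len_I2 = eps         # take |I(2)| = eps for the simulation
k, n = 5, 100_000
rng = random.Random(42)

# N(U) > k  iff  U_{-1}, ..., U_{-k} all miss the interval I(2),
# an event of probability (1 - |I(2)|)^k <= (1 - eps)^k.
misses = sum(
    all(rng.random() >= len_I2 for _ in range(k))
    for _ in range(n)
)
est = misses / n               # Monte Carlo estimate of P(N > k)
bound = (1 - eps) ** k         # the theoretical bound from the text
```

Here the estimate is close to $0.8^{5}=0.32768$, matching the bound, since the lower bound $\epsilon$ is attained in the simulation.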
\begin{obs}
The preceding theorems yield explicit upper bounds. However, they hold under restrictions that we would like to overcome.
\noindent
First, in the continuous regime, we have assumed that $\sum_{k\ge0}(1-\alpha_{k})<+\infty$. Nevertheless, CFTP schemes are known to exist under the weaker assumption $\sum_{k\ge1}\prod_{i=0}^{k-1}\alpha_{i}=+\infty$, and it is known that $\bar{d}({\bf X}, {\bf X}^{[k]})$ goes to zero in this case. We are therefore interested in upper bounds on the rate of convergence to zero under these weaker conditions.
\noindent
Second, in Theorem \ref{coro:italia}, the assumption $\mathbb{E}\paren{\prod_{k\ge0}A_{k}(U_{-k}^{-1})^{-1}}<\infty$ is generally difficult to check: this is particularly clear for the example of $\bar{P}$, where it requires the (unnecessary) extra assumption $\sup_{k\ge0}\alpha_{k}>1-2\alpha(1)\alpha(2)$.
\end{obs}
The next section addresses part of these objections.
\subsection{A simple upper bound under weak non-nullness}\label{sec:localconti}
Hereafter, we assume that $P$ is weakly non-null. Let $\Theta'[0]$ be the following subset of $\Theta[0]$:
\begin{equation}\label{eq:Theta'}
\Theta'[0]:=\{i\leq 0:\,\textrm{for any }\ensuremath{\mathbf{1}}derline{a}\,,\,\,L(\ensuremath{\mathbf{1}}derline{a}F_{\{i,j-1\}}(\ensuremath{\mathbf{1}}derline{a},U_{i}^{j-1}),U_{j})\leq j-i\,,\,j=i,\ldots,0\}.
\end{equation}
We have the following theorem, in which nothing is assumed a priori about continuity.
\begin{theo}\label{lemma:1}
Assume that $P$ is weakly non-null and that we can construct a set of range partitions $\{\mathcal{I}(\ensuremath{\mathbf{1}}derline{a})\}_{\ensuremath{\mathbf{1}}derline{a}}$ for which $\Theta'[0]\neq\emptyset$, $\mathbb{P}$-a.s. Then, for any $\theta[0]\in \Theta'[0]$
\begin{equation}\label{eq:discrepancy2}
\bar{d}({\bf X},{\bf X}^{[k]})\leq \mathbb{P}(\theta[0]<-k).
\end{equation}
\end{theo}
In order to illustrate this result, let us consider the examples of continuous kernels and of the kernel $\bar{P}$. \cite{gallo/garcia/2011} proposed a unified framework, including these examples and several other cases, which provides more examples of applications of this theorem. This is postponed to Appendix \ref{sec:extgallogarcia} in order to avoid technicality.
\paragraph{{\bf Application to the continuity regime}}
Let us first introduce the following range partition.
\begin{defi}\label{def.I1}
Let $\{\mathcal{I}^{(1)}(\ensuremath{\mathbf{1}}derline{a})\}_{\ensuremath{\mathbf{1}}derline{a}}$ be the range partition such that, for any $a$ and $\ensuremath{\mathbf{1}}derline{a}$
\begin{align*}
\forall k\geq0,\quad \left|I_{k}^{(1)}(a|a_{-k}^{-1})\right|&:=\alpha_{k}(a|a_{-k}^{-1})-\alpha_{k-1}(a|a_{-(k-1)}^{-1})\enspace,\\
\left|I_{\infty}^{(1)}(a|\ensuremath{\mathbf{1}}derline{a})\right|&:=P(a|\ensuremath{\mathbf{1}}derline{a})-\lim_{k\rightarrow \infty}\alpha_{k}(a|a_{-k}^{-1})\enspace,
\end{align*}
with the convention $\alpha_{-1}(a|\emptyset)=0$.
\end{defi}
\noindent
Let $F^{(1)}$ and $L^{(1)}$ be the associated functions defined in \eqref{eq:update} and \eqref{eq:length}. Let
\begin{align*}
\theta[0]&:=\max\{i\leq 0:U_{j}\leq \alpha_{j-i}\,,\,\,j=i,\ldots,0\}.
\end{align*}
Observe that $L^{(1)}$ satisfies $L^{(1)}(\ensuremath{\mathbf{1}}derline{a},U_{0})\leq k$ whenever $U_{0}\leq \alpha_{k}$. Hence, $\theta[0]$ belongs to $\Theta'^{(1)}[0]$, the set defined by \eqref{eq:Theta'} using $F^{(1)}$ and $L^{(1)}$. Moreover, \cite{comets/fernandez/ferrari/2002} proved that, if $\sum_{k\ge1}\prod_{i=0}^{k-1}\alpha_{i}=+\infty$ (that is, under weak non-nullness but not necessarily summable continuity), then
\begin{equation}\label{achier}
\mathbb{P}(\theta[0]<-k)\leq
v_{k}:=\sum_{j=1}^{k}\sum_{\substack{t_{1},\ldots,t_{j}\geq1\\ t_{1}+\cdots+t_{j}=k}}\prod^{j}_{m=1}\left[(1-\alpha_{t_{m}-1})\prod_{l=0}^{t_{m}-2}\alpha_{l}\right]
\end{equation}
which goes to $0$. This upper bound is not very satisfactory since it is difficult to handle in general. Nevertheless, Propositions \ref{prop:bfg} and \ref{prop:expli}, given in Appendix \ref{sec:auxi}, shed light on the behavior of this vanishing sequence. In particular, under the summable continuity assumption $\sum_{k\ge0}(1-\alpha_{k})<+\infty$, Proposition \ref{prop:bfg} states that (\ref{achier}) essentially recovers the rates of Theorems \ref{coro:expli2} and \ref{coro:italia}. Also, if there exists a constant $r\in(0,1)$ and a summable sequence $(s_{k})_{k\ge1}$ such that, $\forall k\ge1$, $1-\alpha_{k}=\frac{r}{k}+s_{k}$, then, from Proposition \ref{prop:expli}, there exists a positive constant $C$ such that
\begin{equation}\label{eq:result_weak_continuity}
\bar{d}({\bf X},{\bf X}^{[k]})\leq C\frac{(\log k)^{3+r}}{k^{2-(1+r)^{2}}}.
\end{equation}
\paragraph{{\bf Application to the kernel $\bar{P}$}}
As a second direct application of Theorem \ref{lemma:1}, let us consider the kernel $\bar{P}$ defined in \eqref{eq.discontinuous.example}. Let $\{\mathcal{I}^{(2)}(\ensuremath{\mathbf{1}}derline{a})\}_{\ensuremath{\mathbf{1}}derline{a}}$ be the set of range partitions, such that $|I^{(2)}(2|\emptyset)|=\alpha(2)$, $|I^{(2)}(1|\emptyset)|=\alpha(1)$ and $I^{(2)}_{k}(a|a_{-k}^{-1})=\emptyset$ for any $k\ge1$ except for $k=t(\ensuremath{\mathbf{1}}derline{a})+1$ for which $|I^{(2)}_{k}(1|a_{-k}^{-1})|=1-p_{k}-\alpha(1)$ and $|I^{(2)}_{k}(2|a_{-k}^{-1})|=p_{k}-\alpha(2)$. It satisfies
\[
L^{(2)}(\ensuremath{\mathbf{1}}derline{a},U_{0})=(t(\ensuremath{\mathbf{1}}derline{a})+1){\bf 1}\{U_{0}>\alpha_{0}\}.
\]
Hence, $\theta[0]:=\max\{i\leq 0:U_{i}\in I(2)\}$ belongs to $\Theta'[0]$ ($\Theta'[0]$ is defined by \eqref{eq:Theta'} with the functions $F^{(2)}$ and $L^{(2)}$ obtained from the set of range partitions $\{\mathcal{I}^{(2)}(\ensuremath{\mathbf{1}}derline{a})\}_{\ensuremath{\mathbf{1}}derline{a}}$). Therefore,
\begin{equation}\label{eq:result_renewal}
\bar{d}({\bf X},{\bf X}^{[k]})\leq \mathbb{P}(\theta[0]<-k)\leq (1-\epsilon)^{k}
\end{equation}
independently of the value $\sup_{k\ge0}\alpha_{k}$. For this simple example, Theorem \ref{lemma:1} is then less restrictive than Theorem \ref{coro:italia}.
\section{{\bf PROOFS OF THE RESULTS}}\label{sec:proofs}
\subsection{Proof of Lemma \ref{lemma:simple}}\label{sect.proof.lemma:simple}
Assume that for some $k\geq0$ we have
\[
\sum_{i=0}^{k}|I_{i}(a|a_{-i}^{-1})|> \inf_{\ensuremath{\mathbf{1}}derline{z}}P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{z})\enspace.
\]
Then, consider a past $\ensuremath{\mathbf{1}}derline{z}^{\star}$ such that $\sum_{i=0}^{k}|I_{i}(a|a_{-i}^{-1})|>P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{z}^{\star})$. Since, for all $l\geq k+1$, $|I_{l}(a|a_{-k}^{-1}z_{-l}^{-1})|\geq0$, we have
\[
\sum_{i=0}^{k}|I_{i}(a|a_{-i}^{-1})|+\sum_{l\geq k+1}|I_{l}(a|a_{-k}^{-1}z_{-l}^{-1})|>P(a|a_{-k}^{-1}\ensuremath{\mathbf{1}}derline{z}^{\star})\;.
\]
This contradicts property (2) of the partition, which concludes the proof. \qed
\subsection{Proof of Lemma \ref{theo:1}}
We assume that $\Theta[0]\cap\Theta^{[k]}[0]$ is $\mathbb{P}$-a.s. non-empty, and we therefore have a coupling $(\widehat{P}i({\bf U}),\widehat{P}i^{[k]}({\bf U}))$ of both chains.
By the definitions of $F^{[k]}$ and $L^{[k]}$, we observe that
\begin{equation}\label{eq:mainprop}
L(\ensuremath{\mathbf{1}}derline{b}a_{-k}^{-1},U_{0})\leq k\ensuremath{\mathbb{R}}ightarrow \,\textrm{ for any }\,\ensuremath{\mathbf{1}}derline{b}\,\textrm{ we have }\left\{
\begin{array}{c}
L(\ensuremath{\mathbf{1}}derline{b}a_{-k}^{-1},U_{0})=L^{[k]}(a_{-k}^{-1},U_{0})\,\,\textrm{and},\\
F(\ensuremath{\mathbf{1}}derline{b}a_{-k}^{-1},U_{0})=F^{[k]}(a_{-k}^{-1},U_{0}).
\end{array}\right.
\end{equation}
\noindent
Assume that, $\forall \ensuremath{\mathbf{1}}derline{a}\in A^{-\ensuremath{\mathbb{N}}}$ and $\forall i=\theta[0],\ldots,0$, $L(\ensuremath{\mathbf{1}}derline{a}F_{\{\theta[0],i-1\}}(\ensuremath{\mathbf{1}}derline{a},U_{\theta[0]}^{i-1}),U_{i})\leq k$. Then, using recursively \eqref{eq:mainprop},
$F_{\{\theta[0],0\}}(\ensuremath{\mathbf{1}}derline{a},U_{\theta[0]}^{0})=F^{[k]}_{\{\theta[0],0\}}(a_{-k}^{-1},U_{\theta[0]}^{0})$. In particular, $\theta[0]\in\Theta^{[k]}[0]$ and $[\widehat{P}i({\bf U})]_{0}=[\widehat{P}i^{[k]}({\bf U})]_{0}$. Therefore,
\begin{align*}
\bar{d}({\bf X},{\bf X}^{[k]})&\leq \mathbb{P}([\widehat{P}i({\bf U})]_{0}\neq[\widehat{P}i^{[k]}({\bf U})]_{0})\\
&\leq\mathbb{P}\left(\bigcup_{\ensuremath{\mathbf{1}}derline{a}}\bigcup_{i=\theta[0]}^{0}\left\{L\left(\ensuremath{\mathbf{1}}derline{a}F_{\{\theta[0],i-1\}}(\ensuremath{\mathbf{1}}derline{a},U_{\theta[0]}^{i-1}),U_{i}\right)>k\right\}\right).
\end{align*}
\subsection{Proof of Theorem \ref{coro:expli2}}\label{sect:proof.thm.expli2} This section is divided into three parts. First, as mentioned before the statement of the theorem, we define \emph{very weak non-nullness}. Then, we prove some technical lemmas allowing us to apply Lemma \ref{theo:1}. Finally, we prove the theorem.
\subsubsection{Definition of \emph{very weak non-nullness}}
Consider the set of range partitions $\{\mathcal{I}^{(1)}(\ensuremath{\mathbf{1}}derline{a})\}_{\ensuremath{\mathbf{1}}derline{a}}$ of Definition \ref{def.I1}.
As observed by \cite{desantis/piccioni/2010}, in the continuous case, since $\{\alpha_{k}\}_{k\ge0}$ increases monotonically to $1$, there exists $k\ge0$ such that $\alpha_{k}>0$. Let $k^{\star}$ be the smallest of these integers
and let $F^{\star}$ be the following update function
\[
F^{\star}(a_{-k^{\star}}^{-1},U_{0}):=F^{(1)}(\ensuremath{\mathbf{1}}derline{b},U_{0}\alpha_{k^{\star}})\,\,\,\,\quad\forall \ensuremath{\mathbf{1}}derline{b}\,\,\,\,\textrm{s.t.}\,\,\,a_{-k^{\star}}^{-1}=b_{-k^{\star}}^{-1}\enspace.
\]
$F^{\star}$ is well defined, since $U_{0}\alpha_{k^{\star}}\leq \alpha_{k^{\star}}$, hence $L^{(1)}(\ensuremath{\mathbf{1}}derline{b},U_{0}\alpha_{k^{\star}})\leq k^{\star}$. In the case where $k^\star=0$, $F^{\star}$ is simply defined as
\[
F^{\star}(\emptyset,U_{0}):=F^{(1)}(\ensuremath{\mathbf{1}}derline{b},U_{0}\alpha_{0})=\sum_{a\in A}a\,{\bf1}\{U_{0}\alpha_{0}\in I_{0}(a|\emptyset)\}\,\,\,\,\quad\forall \ensuremath{\mathbf{1}}derline{b}\,\,.
\]
\begin{defi}[Coalescence set]\label{def:coalescenceSet}For $m\ge k^{\star}+1$, let $E_{m}$, the \emph{coalescence set} (different from the set of coalescence times), be defined as the set of all $u_{-m+1}^{0}\in [0,1[^{m}$ such that
\[
F_{\{-k^{\star}+1,0\}}^{\star}\left(a_{-k^{\star}}^{-1}F^{\star}_{\{-m+1,-k^{\star}\}}\paren{a_{-k^{\star}}^{-1},\frac{u_{-m+1}^{-k^{\star}}}{\alpha_{k^{\star}}}},\frac{u_{-k^{\star}+1}^{0}}{\alpha_{k^{\star}}}\right)\,\,\,\textrm{does not depend on}\,\,\,a_{-k^{\star}}^{-1}\;.
\]
When $k^{\star}=0$ and $m=1$, we have $E_{1}:=\cup_{a\in A}I_{0}(a|\emptyset)$.
\end{defi}
\begin{defi}\label{def.vwnn}
We say that $P$ is very weakly non-null if
\begin{equation}\label{hyp.hypnn}
\exists m\ge k^\star+1\qquad \, \mbox{ s.t. } \, \qquad\mathbb{P}(U_{-m+1}^{0}\in E_{m})>0\enspace.
\end{equation}
\end{defi}
Weak non-nullness corresponds to $\mathbb{P}(U_{0}\in E_{1})>0$, hence, it implies very weak non-nullness.
\subsubsection{Technical lemmas}
Let $\Theta^{(1)}[0]$ be the set of coalescence times defined by \eqref{eq:theta} for the function $F^{(1)}$. In the first part of the proof, we define a random time $\theta[0]$ (see \eqref{eq:ahah}) and show that it belongs to $\Theta^{(1)}[0]$ and that it has finite expectation whenever $\sum_{k\ge k^{\star}}(1-\alpha_{k})<+\infty$. This random variable is defined in the proof of Theorem 2 in \cite{desantis/piccioni/2010}.
Recall that, by construction of the range partition $\{\mathcal{I}^{(1)}(\ensuremath{\mathbf{1}}derline{a})\}_{\ensuremath{\mathbf{1}}derline{a}}$, for any $\ensuremath{\mathbf{1}}derline{a}$, $L^{(1)}(\ensuremath{\mathbf{1}}derline{a},U_{i})=k$ whenever $\alpha_{k-1}\leq U_{i}<\alpha_{k}$. This means that the ranges form a sequence $\{L_{i}\}_{i\in\mathbb{Z}}:=\{L^{(1)}(\ensuremath{\mathbf{1}}derline{a},U_{i})\}_{i\in\mathbb{Z}}$ of i.i.d. $\mathbb{N}$-valued r.v.'s. We now introduce two sequences of random times in the past, represented in Figure \ref{fig:dessin} in the particular case where $k^{\star}=2$.
Let
\[
W_{1}:=\sup\{m\leq 0:U_{j}< \alpha_{j-m+k^{\star}}\,,\,\,j=m,\ldots,0\},
\]
and for any $i\ge1$
\[
Y_{i}:=\inf\{m<W_{i}:U_{n}<\alpha_{k^{\star}}\,,\,\,n=m+1,\ldots,W_{i}\}
\]
and
\[
W_{i+1}:=\sup\{m\leq Y_{i}:U_{j}< \alpha_{j-m+k^{\star}}\,,\,\,j=m,\ldots,Y_{i}\}.
\]
\begin{figure}
\caption{The sequences of random times $W_{i}$ and $Y_{i}$ in the particular case $k^{\star}=2$.}
\label{fig:dessin}
\end{figure}
\noindent
Consider now the random variable
\[
Q:=\inf\{i\ge1:(U_{Y_{i}+1},\ldots,U_{W_{i}-1})\in E_{W_{i}-Y_{i}-1}\}
\]
(see Definition \ref{def:coalescenceSet} for $E_{m}$) and put
\begin{equation}\label{eq:ahah}
\theta[0]:=Y_{Q}.
\end{equation}
\begin{lemma}\label{lem1}
$\theta[0]\in\Theta^{(1)}[0]$.
\end{lemma}
\begin{proof}
If $\theta[0]=-k$, then there exists some $l(=-W_{Q}+1)\leq k$ such that $U_{-i}\leq \alpha_{k^{\star}}$, $i=l,\ldots,k$, and moreover, $U_{-k}^{-l}\in E_{k-l+1}$, that is
\[
F_{\{-l-k^{\star}+1,-l\}}^{\star}\left(a_{-k^{\star}}^{-1}F^{\star}_{\{-k,-l-k^{\star}\}}(a_{-k^{\star}}^{-1},\frac{1}{\alpha_{k^{\star}}}U_{-k}^{-l-k^{\star}}),\frac{1}{\alpha_{k^{\star}}}U_{-l-k^{\star}+1}^{-l}\right)\,\,\,\,\,\textrm{is independent of }\,a_{-k^{\star}}^{-1}.
\]
Since $U_{-i}\leq \alpha_{k^{\star}}$, $i=l,\ldots,k$, it follows that
\[
F_{\{-l-k^{\star}+1,-l\}}\left(\ensuremath{\mathbf{1}}derline{b}F_{\{-k,-l-k^{\star}\}}(\ensuremath{\mathbf{1}}derline{b},U_{-k}^{-l-k^{\star}}),U_{-l-k^{\star}+1}^{-l}\right)\,\,\,\,\,\textrm{is independent of }\,\ensuremath{\mathbf{1}}derline{b}.
\]
By definition of the random times $W_{i}$, all the symbols at times $\set{W_{Q},\ldots,0}$ can then be built using those at times $\set{W_{Q}-k^{\star},\ldots,W_{Q}-1}$, since none of the arrows from time $W_{Q}$ until $0$ reaches further back than time $W_{Q}-k^{\star}$; see Figure \ref{fig:dessin}. Therefore, the construction of the symbol at time $0$ does not depend on the symbols before $\theta[0]$, i.e. $\theta[0]\in\Theta^{(1)}[0]$.
\end{proof}
\begin{lemma}\label{lemma:finiteexpect}
$\mathbb{E}|W_{1}|<+\infty$ whenever $\sum_{k\ge k^{\star}}(1-\alpha_{k})<+\infty$.
\end{lemma}
\begin{proof}
Letting $\bar{\alpha}_{l-k^{\star}}=\alpha_{l}$ for any $l\ge k^{\star}$, we have
\[
W_{1}=\sup\{m\leq 0:U_{j}< \bar{\alpha}_{j-m}\,,\,\,j=m,\ldots,0\}.
\]
Thus $W_{1}$ is defined exactly as $\tau[0]$ of display (4.2) in \cite{comets/fernandez/ferrari/2002}, substituting their $a_{k}$'s by our $\bar{\alpha}_{k}$'s. They proved (see display (4.6) and item (ii) Proposition 5.1 therein) that $\mathbb{E}|\tau[0]|<+\infty$ whenever $\sum_{k\ge 0}(1-\bar{\alpha}_{k})<+\infty$. It follows that $\mathbb{E}|W_{1}|<+\infty$ whenever $\sum_{k\ge k^{\star}}(1-\alpha_{k})<+\infty$.
\end{proof}
\begin{lemma}\label{lem2}
$\mathbb{E}|\theta[0]|<+\infty$ whenever $\sum_{k\ge k^{\star}}(1-\alpha_{k})<+\infty$.
\end{lemma}
\begin{proof}As observed in \cite{desantis/piccioni/2010}, $\{W_{i}-Y_{i}-1\}_{i\ge1}$ is a sequence of i.i.d. geometric r.v.'s with success probability $1-\alpha_{k^{\star}}$, and $\{Y_{i}-W_{i+1}\}_{i\ge0}$ (with $Y_{0}:=0$) is a sequence of i.i.d. r.v.'s distributed as $-W_{1}$ conditioned on being non-zero. Moreover, Lemma \ref{lemma:finiteexpect} states that $\mathbb{E}|W_{1}|<+\infty$.
It follows that $\{B_{i}\}_{i\ge1}:=\{Y_{i-1}-Y_{i}\}_{i\ge1}$ is a sequence of i.i.d. positive integer-valued r.v.'s with finite expectation.
Thus
\(
\sum_{i=1}^{n}B_{i}-n\mathbb{E}B_{1}
\)
forms a martingale with respect to the filtration generated by $B_{1},\ldots,B_{n}$, $n\ge1$, and the optional sampling theorem gives
\[
\mathbb{E}|\theta[0]|:=\mathbb{E}|Y_{Q}|=\mathbb{E}\left(\sum_{i=1}^{Q}B_{i}\right)=\mathbb{E}Q\cdot\mathbb{E}B_{1}<+\infty.
\]
\end{proof}
We finally need the following lemma.
\begin{lemma}\label{lem3}
For any $k\ge k^{\star}$, $\theta[0]\in\Theta^{(1),[k]}[0]$.
\end{lemma}
\begin{proof}
For any $k\ge k^{\star}$, $F^{(1),[k]}$ and $L^{(1),[k]}$ satisfy \eqref{eq:mainprop}. This implies that, in the interval $\{Y_{Q},\ldots,W_{Q}-1\}$, coalescence occurs as well for $F^{(1),[k]}$, i.e. $U_{\theta[0]+1}^{W_{Q}-1}\in E^{[k]}_{W_{Q}-\theta[0]-1}$. Both constructed chains are equal until the first time $F^{(1)}$ uses a range larger than $k$. But at this moment, due to the definition of the $W_{i}$'s, we have already perfectly simulated at least $k$ symbols of both chains, and therefore we can continue the construction until time $0$, because the ranges of $F^{(1),[k]}$ are smaller than or equal to $k$. It follows that $Y_{Q}$ is a coalescence time for $F^{(1),[k]}$, and therefore $\theta[0]\in\Theta^{(1),[k]}[0]$ for any $k\ge k^{\star}$.
\end{proof}
\subsubsection{Proof of Theorem \ref{coro:expli2}} By definition of $F^{(1)}$ and $L^{(1)}$, we have for any sufficiently large $k$,
\[
\bigcup_{\ensuremath{\mathbf{1}}derline{a}}\bigcup_{i=\theta[0]}^{0}\left\{L^{(1)}\left(\ensuremath{\mathbf{1}}derline{a}F^{(1)}_{\{\theta[0],i-1\}}(\ensuremath{\mathbf{1}}derline{a},U_{\theta[0]}^{i-1}),U_{i}\right)>k\right\}\subset\bigcup_{i=\theta[0]}^{0}\{U_{i}>\alpha_{k}\}.
\]
By Lemmas \ref{lem1} and \ref{lem3}, Lemma \ref{theo:1} applies and gives, for all sufficiently large $k$,
\begin{align*}
\bar{d}({\bf X},{\bf X}^{[k]})\leq \mathbb{P}\left(\bigcup_{i=\theta[0]}^{0}\{U_{i}>\alpha_{k}\}\right)=\mathbb{P}\left(\sum_{i=0}^{|\theta[0]|}{\bf 1}\{U_{-i}>\alpha_{k}\}\ge1\right)\leq \mathbb{E}\left(\sum_{i=0}^{|\theta[0]|}{\bf 1}\{U_{-i}>\alpha_{k}\}\right)
\end{align*}
where we used the Markov inequality for the last step.
Using the fact that $\theta[0]$ is a stopping time in the past for the sequence $(U_{i})_{i\leq 0}$, and that it has finite expectation by Lemma \ref{lem2}, we can apply Wald's equality to obtain
\[
\bar{d}({\bf X},{\bf X}^{[k]})\leq \mathbb{E}|\theta[0]|\cdot\mathbb{E}\,{\bf 1}\{U_{0}>\alpha_{k}\}=\mathbb{E}|\theta[0]|\cdot(1-\alpha_{k}).
\]
\subsection{{\bf Proof of Theorem \ref{coro:italia}}} We divide this proof into two parts. First, we prove technical lemmas allowing us to use Lemma~\ref{theo:1}. Then, we prove the theorem.
\subsubsection{Technical lemmas}
Using the quantity $\ell(U_{-\infty}^{0})$ defined by \eqref{eq:ell}, we define
\begin{equation}\label{eq:theta0}
\theta[0]:=\sup\{j\leq 0:\ell(U_{-\infty}^{i})\leq i-j\,,\,\,i=j,\ldots,0\}.
\end{equation}
\begin{lemma}\label{lemma:ahahah}
$\theta[0]$, defined by \eqref{eq:theta0}, belongs to $\Theta^{(1)}[0]\cap \Theta^{(1),[k]}[0]$ for any $k\geq 0$ and
\begin{equation}\label{eq:italia}
\mathbb{P}\left(\bigcup_{\ensuremath{\mathbf{1}}derline{a}}\bigcup_{i=\theta[0]}^{0}\left\{L^{(1)}\left(\ensuremath{\mathbf{1}}derline{a}F^{(1)}_{\{\theta[0],i-1\}}(\ensuremath{\mathbf{1}}derline{a},U_{\theta[0]}^{i-1}),U_{i}\right)>k\right\}\right)\leq\mathbb{P}\left(\bigcup_{i=\theta[0]}^{0}\left\{\ell\left(U_{-\infty}^{i}\right)>k\right\}\right).
\end{equation}
\end{lemma}
\begin{proof}
For any $U_{-k}^{-1}$, the definitions of the set of strings $\{\underline{z}F^{(1)}_{\{-k,-1\}}(\underline{z},U_{-k}^{-1})\}_{\underline{z}}$ and of $J_{k}(U_{-k}^{-1})$ ensure that the former is included in the latter.
It follows that, for any $U_{-k}^{-1}$,
\[
A_{k}(U_{-k}^{-1}):=\inf_{x_{-k}^{-1}:\underline{x}\in J_{k}(U_{-k}^{-1})}\sum_{a\in A}\inf_{\underline{z}}P(a|x_{-k}^{-1}\underline{z})\leq \sum_{a\in A}\inf_{\underline{z}}P(a|F^{(1)}_{\{-k,-1\}}(\underline{z},U_{-k}^{-1})\underline{z}).
\]
As the inequality $A_{0}\leq \sum_{a\in A}\inf_{\underline{z}}P(a|\underline{z})$ also holds, we deduce that, for any $k\ge0$, any $U_{-\infty}^{0}\in [0,1[^{-\mathbb{N}}$ and any $\underline{z}\in A^{-\mathbb{N}}$,
\[
\ell(U_{-\infty}^{0})\leq k\Rightarrow L^{(1)}\left(\underline{z}F^{(1)}_{\{-k,-1\}}(\underline{z},U_{-k}^{-1}),U_{0}\right)\le k\enspace.
\]
By induction, this means that, for all $\theta[0]\le i\le 0$, $F^{(1)}_{\{\theta[0],i\}}(\underline{z},U_{\theta[0]}^{i})$ does not depend on $\underline{z}$. Hence, $\theta[0]$ is also a coalescence time for the update function $F^{(1)}$, that is, $\theta[0]\in\Theta^{(1)}[0]$. Observe that we have proved, more specifically, that $\theta[0]\in\Theta'^{(1)}[0]$, where $\Theta'^{(1)}[0]\subset\Theta^{(1)}[0]$ is defined by \eqref{eq:Theta'} using $F^{(1)}$ and $L^{(1)}$. By Lemma \ref{lemma:weakK} below, this implies that $\theta[0]\in \Theta^{(1),[k]}[0]$ for any $k\geq 0$ as well.
\noindent We now prove the second statement of the lemma. If there exist $i\ge k$, $U_{-i}^{-1}$ and $\underline{z}$ such that $L^{(1)}\left(\underline{z}F^{(1)}_{\{-i,-1\}}(\underline{z},U_{-i}^{-1}),U_{0}\right)> k$, then there exists some past $\underline{a}$ (take $\underline{a}=\underline{z}F^{(1)}_{\{-i,-k-1\}}(\underline{z},U_{-i}^{-k-1})$ for instance) such that $L^{(1)}\left(\underline{a}F^{(1)}_{\{-k,-1\}}(\underline{a},U_{-k}^{-1}),U_{0}\right)> k$.
We now have the following sequence of inclusions
\begin{align*}
\bigcup_{\underline{a}}\bigcup_{i=\theta[0]}^{0}&\left\{L^{(1)}\left(\underline{a}F^{(1)}_{\{\theta[0],i-1\}}(\underline{a},U_{\theta[0]}^{i-1}),U_{i}\right)>k\right\}\\
&=\bigcup_{\underline{a}}\bigcup_{i=\theta[0]}^{0}\left\{L^{(1)}\left(\underline{a}F^{(1)}_{\{\theta[0],i-1\}}(\underline{a},U_{\theta[0]}^{i-1}),U_{i}\right)>k\right\}\cap\{\theta[0]\leq i-k-1\}\\
&\subset\bigcup_{\underline{a}}\bigcup_{i=\theta[0]}^{0}\left\{L^{(1)}\left(\underline{a}F^{(1)}_{\{i-k,i-1\}}(\underline{a},U_{i-k}^{i-1}),U_{i}\right)>k\right\}\cap\{\theta[0]\leq i-k-1\}\\
&\subset\bigcup_{i=\theta[0]}^{0}\left\{\ell\left(U_{-\infty}^{i}\right)>k\right\}.
\end{align*}
This concludes the proof of the lemma.
\end{proof}
Recall the definition \eqref{eq:Theta'} of $\Theta'[0]$ for generic range partitions of a weakly non-null kernel $P$. We will need the following lemma.
\begin{lemma}\label{lemma:weakK}
For any $k\ge0$, $\Theta'[0]\subset\Theta^{[k]}[0]$.
\end{lemma}
\begin{proof}
Let $\theta[0]\in\Theta'[0]$. For any fixed $k\ge0$, we separate two cases.
\begin{enumerate}
\item If $\theta[0]\ge -k$, then, by the definition of $\Theta'[0]$, the ranges used by $F$ from $\theta[0]$ to $0$ are all smaller than or equal to $k$; therefore, using \eqref{eq:mainprop}, the ranges used by $F^{[k]}$ on the same interval of indices are the same, and the constructed symbols are the same as well. Thus $\theta[0]\in\Theta^{[k]}[0]$.
\item If $\theta[0]<-k$, then, by the definition of $\Theta'[0]$, we can apply the same argument as in the preceding case and obtain that $\theta[0]$ is a coalescence time for $F^{[k]}$ for the time indices from $\theta[0]$ up to $\theta[0]+k$. But $\theta[0]$ is also a coalescence time for the time indices from $\theta[0]+k+1$ up to $0$, since the ranges used by $F^{[k]}$ are always smaller than or equal to $k$. Thus, in this case also, $\theta[0]\in\Theta^{[k]}[0]$.
\end{enumerate}
\end{proof}
\subsubsection{Proof of Theorem \ref{coro:italia}}
Under the assumptions of this theorem, by Theorem 1 in \cite{desantis/piccioni/2010}, $\theta[0]$ is $\mathbb{P}$-a.s. finite. Moreover, by Lemma \ref{lemma:ahahah}, $\theta[0]\in \Theta^{(1)}[0]\cap\Theta^{(1),[k]}[0]$ for any $k\ge0$. Thus we can apply Lemma \ref{theo:1} and obtain, using Lemma \ref{lemma:ahahah},
\begin{align*}
\bar{d}({\bf X},{\bf X}^{[k]})\leq \mathbb{P}\left(\bigcup_{i=\theta[0]}^{0}\left\{\ell\left(U_{-\infty}^{i}\right)>k\right\}\right)
\end{align*}
and moreover
\begin{align*}
\mathbb{P}\left(\bigcup_{i=\theta[0]}^{0}\left\{\ell\left(U_{-\infty}^{i}\right)>k\right\}\right)&=\mathbb{P}\left(\sum_{i=\theta[0]}^{0}{\bf 1}\left\{\ell\left(U_{-\infty}^{i}\right)>k\right\}\ge1\right)\\
&\leq \mathbb{E}\paren{\sum_{i=\theta[0]}^{0}{\bf 1}\left\{\ell\left(U_{-\infty}^{i}\right)>k\right\}}\enspace.
\end{align*}
Consider the $\sigma$-algebra $\ensuremath{\mathcal{F}}_{k}$ generated by $U_{-k}^{0}$, $k\ge0$. Then, $\ell(U_{-\infty}^{0})$ is a stopping time with respect to $(\ensuremath{\mathcal{F}}_{k})_{k\ge0}$ and, by definition, so is ${\theta}[0]$. Moreover, $\ell(U_{-\infty}^{i})$ is independent of $U_{i+1}^{0}$ by independence of the $U_{j}$'s. Finally, by stationarity, $\ell\left(U_{-\infty}^{i}\right)\stackrel{\mathcal{D}}{=}\ell\left(U_{-\infty}^{0}\right)$, hence $\mathbb{E}\paren{{\bf 1}\left\{\ell\left(U_{-\infty}^{i}\right)>k\right\}}=\mathbb{E}\paren{{\bf 1}\left\{\ell\left(U_{-\infty}^{0}\right)>k\right\}}$ for any $i\in\ensuremath{\mathbb{Z}}$. By Theorem 1 in \cite{desantis/piccioni/2010}, $\theta[0]$ has finite expectation, hence we can use Wald's equality to obtain
\begin{equation}\label{eq:contEtheta0finite}
\bar{d}({\bf X},{\bf X}^{[k]})\leq \mathbb{E}|\theta[0]|\mathbb{P}(\ell\left(U_{-\infty}^{0}\right)>k)\enspace.
\end{equation}
This concludes the proof of Theorem \ref{coro:italia}.
\subsection{{\bf Proof of Theorem \ref{lemma:1}}}
Recall the definition of the set $\Theta'[0]$ given by \eqref{eq:Theta'}. If $\theta[0]\in\Theta'[0]$ and ${\theta}[0]\ge-k$, then $L\left(\underline{a}F_{\{\theta[0],i-1\}}(\underline{a},U_{\theta[0]}^{i-1}),U_{i}\right)\leq k$ for any $i={\theta}[0],\ldots,0$ and any $\underline{a}$; therefore
\[
\bigcup_{\underline{a}}\bigcup_{i=\theta[0]}^{0}\left\{L\left(\underline{a}F_{\{\theta[0],i-1\}}(\underline{a},U_{\theta[0]}^{i-1}),U_{i}\right)>k\right\}\subset\{{\theta}[0]<-k\}.
\]
By Lemma \ref{lemma:weakK}, any $\theta[0]\in \Theta'[0]$ also belongs to $\Theta^{[k]}[0]$ for any $k\ge0$. We can thus apply Lemma \ref{theo:1} and conclude the proof of the theorem.
\appendix
\section{Local continuity with respect to the past $\underline{1}$}\label{sec:extgallogarcia}
In this section, we assume that $A=\{1,2\}$ and that $P$ has only one discontinuity point, the past $\underline{1}=\ldots111$. We refer the interested reader to \cite{gallo/garcia/2011} for examples with countable alphabets and with discontinuities on more complicated sets of pasts. To begin, we need the following definition.
\begin{defi}[Local continuity with respect to the past $\underline{1}$]
We say that a kernel $P$ on $\{1,2\}$ is \emph{locally continuous with respect to the past $\underline{1}$} if
\[
\forall i\geq 0 ,\qquad \inf_{a_{-k}^{-1}}\sum_{a\in A}\inf_{\underline{z}}P(a|1^{i}2a_{-k}^{-1}\underline{z})
\]
converges to $1$ as $k$ diverges. We distinguish two particular situations of interest.
\begin{itemize}
\item We say that $P$ is \emph{strongly locally continuous with respect to $\underline{1}$} if there exists an integer function $\ell:\mathbb{N}\rightarrow\mathbb{N}$ such that
\begin{equation}\label{eq:stronglocal}
\forall i\geq 0,\qquad \inf_{a_{-k}^{-1}}\sum_{a\in A}\inf_{\underline{z}}P(a|1^{i}2a_{-k}^{-1}\underline{z})=1
\end{equation}
for any $k\ge \ell(i)$, and
\item we say that $P$ is \emph{uniformly locally continuous with respect to $\underline{1}$} if
\begin{equation}\label{eq:uniflocal}
\alpha^{\underline{1}}_{k}:=\inf_{i\geq0}\inf_{a_{-k}^{-1}}\sum_{a\in A}\inf_{\underline{z}}P(a|1^{i}2a_{-k}^{-1}\underline{z})
\end{equation}
converges to $1$ as $k$ diverges.
\end{itemize}
\end{defi}
Strongly locally continuous kernels are known as \emph{probabilistic context trees}, a model introduced by \cite{rissanen/1983} as a \emph{universal data compression model}. It was first considered from the ``CFTP point of view'' by \cite{gallo/2009}.
The kernel $\bar{P}$ is a simple example which is both strongly and uniformly locally continuous with respect to $\underline{1}$.
\paragraph{{\bf Assumption 1}:} $P$ is strongly locally continuous with respect to $\underline{1}$.
\paragraph{{\bf Assumption 2}:} $P$ is uniformly locally continuous with respect to $\underline{1}$.
\begin{notation}\label{nota}
Let us introduce the following notation.
\begin{itemize}
\item Stationary chains compatible with kernels satisfying Assumption $i$, for $i=1$ and $2$, are denoted ${\bf X}^{(i)}$, and the corresponding canonical $k$-step Markov approximations are denoted ${\bf X}^{(i),[k]}$.
\item We use the notation $r^{(i)}_{0}:=\alpha_{0}$ for $i=1$ and $2$, and for $k\ge1$,
\begin{align*}
r^{(1)}_{k}&:= r^{(1)}_{k-1}\vee (1-(1-\alpha(2))^{\ell^{-1}(k)})\\
r^{(2)}_{k}&:=r^{(2)}_{k-1}\vee (1-(1-\alpha^{\underline{1}}_{k})/\alpha(2))
\end{align*}
where $\ell$ and $\alpha^{\underline{1}}_{k}$ are the parameters of the kernels under Assumptions 1 and 2 respectively.
\item For $i=1$ and $2$,
\begin{equation}\label{eq.bound.HC}
v_{k}^{(i)}:=\sum_{j=1}^{k}\sum_{
\begin{array}{c}
t_{1},\ldots,t_{j}\geq1\\
t_{1}+\ldots+t_{j}=k
\end{array}
}\prod^{j}_{m=1}(1-r^{(i)}_{t_{m}-1})\prod_{l=0}^{t_{m}-2}r^{(i)}_{l}
\end{equation}
where $\prod_{l=0}^{-1}:=1$.
\item And finally, for any $k\ge0$, let
\begin{equation}\label{eq:nota}
u_{k}:= \lfloor k\alpha(2)/2\rfloor\mathbb{P}\left(\left|\sum_{j=0}^{\lfloor k\alpha(2)/2\rfloor}\xi_{j}-\frac{\lfloor k\alpha(2)/2\rfloor}{\alpha(2)}\right|>k/2\right).
\end{equation}
It is well-known that this sequence goes exponentially fast to $0$ (see \cite{kallenberg/2002} for instance). An explicit upper bound is derived in Appendix \ref{sec.conc.geom}.
\end{itemize}
\end{notation}
\begin{coro}\label{coro:nonexpli}
Under the weak non-nullness assumption, we have, for $i=1$ and $2$, that if $\sum_{k\ge1}\prod_{l=0}^{k-1}r^{(i)}_{l}=\infty$, then
\begin{equation}\label{eq:expli1}
\bar{d}({\bf X}^{(i)},{\bf X}^{(i),[k]})\leq u_{k}+v_{\lfloor k\alpha(2)/2\rfloor}^{(i)}\rightarrow0.
\end{equation}
\end{coro}
The quantity defined in display \eqref{eq.bound.HC} is related to the house of cards process presented in Section \ref{sec:auxi} (see equation \eqref{bound2}). We provide in Propositions \ref{prop:bfg} and \ref{prop:expli} explicit upper bounds on the term \eqref{eq.bound.HC} that can be plugged into \eqref{eq:expli1}. The term \eqref{eq:nota} is studied in Corollary \ref{coro.contuk}.
It follows in particular from these propositions that, whenever $r_{k}^{(i)}$ is not exponentially decreasing, the leading term in \eqref{eq:expli1} is $v_{k}^{(i)}$, and therefore, we obtain for some constant $C>1$ and any sufficiently large $k$
\[
\bar{d}({\bf X}^{(i)},{\bf X}^{(i),[k]})\leq Cv_{\lfloor k\alpha(2)/2\rfloor}^{(i)}.
\]
For instance, Proposition \ref{prop:expli} states that, if $1-r^{(i)}_{k}=\frac{r}{k}+s_{k}$, $k\ge1$, with $r\in(0,1)$ and $\{s_{k}\}_{k\ge1}$ any summable sequence, then for some constant $C>1$
\begin{equation}\label{eq:result_weak_continuity}
\bar{d}({\bf X}^{(i)},{\bf X}^{(i),[k]})\leq C\frac{(\log k)^{3+r}}{k^{2-(1+r)^{2}}}.
\end{equation}
\begin{proof}
Under Assumptions 1 and 2 with weak non-nullness, \cite{gallo/garcia/2011} constructed a set of range partitions generating a set of coalescence times $\Theta'[0]$ which is a.s. non-empty. This is stated in their Corollaries 6.1 and 6.2 (and the discussions following them) for Assumptions 1 and 2 respectively.
They defined a random time $\Lambda^{(i)}[0]$ (see display (34) therein) which belongs to $\Theta'[0]$, as stated by Lemma 8.1 therein. They also proved that $\mathbb{P}(\Lambda^{(i)}[0]<-k)$ is upper bounded by $u_{k}+v^{(i)}_{\lfloor k\alpha(2)/2\rfloor}$, where $\{u_{k}\}_{k\ge1}$ is defined by \eqref{eq:nota}. This is stated in the proof of item (ii) of Theorem 5.2 therein.
By Theorem \ref{lemma:1}, these upper bounds are therefore upper bounds for the $\bar{d}$-distance $\bar{d}({\bf X},{\bf X}^{[k]})$.
\end{proof}
\section{Some results on the House of Cards Markov chain}\label{sec:auxi}
Fix a non-decreasing sequence $\{r_{k}\}_{k\ge0}$ of $[0,1]$-valued real numbers converging to $1$.
The House of Cards Markov chain ${\bf H}=\{H_{n}\}_{n\ge0}$ associated to this sequence is the $\mathbb{N}$-valued Markov chain starting from state $0$ and having transition matrix $Q=\{Q(i,j)\}_{i\ge0,\,j\ge0}$, where $Q(i,j):=r_{i}{\bf 1}\{j=i+1\}+(1-r_{i}){\bf 1}\{j=0\}$. Let us denote $v_{k}:=\mathbb{P}(H_{k}=0)$, the probability that the house of cards is in state $0$ at time $k$. We want to derive explicit rates of convergence to $0$ of this sequence when ${\bf H}$ is not positive recurrent. These results will be used in the next section in order to obtain explicit upper bounds for $\bar{d}({\bf X},{\bf X}^{[k]})$ under several types of assumptions. Decomposing the event $\set{H_{k}=0}$ according to the returns of the process $\{H_{\ell}\}_{\ell=0,\ldots,k}$ to $0$ yields, for any $k\ge 1$,
\begin{equation}\label{bound2}
v_{k}:=\sum_{j=1}^{k}\sum_{
\begin{array}{c}
t_{1},\ldots,t_{j}\geq1\\
t_{1}+\ldots+t_{j}=k
\end{array}
}\prod^{j}_{m=1}(1-r_{t_{m}-1})\prod_{l=0}^{t_{m}-2}r_{l},
\end{equation}
where $\prod_{l=0}^{-1}:=1$. Although explicit, this bound cannot be used directly and has to be simplified.
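The renewal decomposition \eqref{bound2} can be checked directly against a simulation of the chain. The following sketch is a toy instance (the sequence $r_k=1-2^{-(k+1)}$ is our own choice, not one from the text): it computes $v_n=\mathbb{P}(H_n=0)$ via the equivalent renewal recursion $v_n=\sum_{m=1}^{n}t_m v_{n-m}$, $v_0=1$, with $t_m$ the first-return law, and compares the result with a Monte Carlo estimate.

```python
import random

def return_probs(r, N):
    """t[m] = P(first return to 0 at time m) = (1 - r[m-1]) * prod_{i<m-1} r[i]."""
    t = [0.0] * (N + 1)
    prod = 1.0
    for m in range(1, N + 1):
        t[m] = (1.0 - r[m - 1]) * prod
        prod *= r[m - 1]
    return t

def occupation_probs(r, N):
    """v[n] = P(H_n = 0) via the renewal equation v_n = sum_m t_m v_{n-m}."""
    t = return_probs(r, N)
    v = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        v[n] = sum(t[m] * v[n - m] for m in range(1, n + 1))
    return v

# Example sequence: r_k = 1 - 0.5^{k+1}, a summable case (sum (1-r_k) = 1).
N = 20
r = [1.0 - 0.5 ** (k + 1) for k in range(N)]
v = occupation_probs(r, N)

# Monte Carlo cross-check of v[N]: run the chain N steps, count visits to 0.
rng = random.Random(1)
trials = 50_000
hits = 0
for _ in range(trials):
    h = 0
    for _ in range(N):
        h = h + 1 if rng.random() < r[h] else 0
    hits += (h == 0)
```

The empirical frequency `hits / trials` matches `v[N]` up to Monte Carlo error.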
As a first insight, we borrow the following proposition from \cite{bressaud/fernandez/galves/1999a}.
\begin{prop}\label{prop:bfg}
\begin{enumerate}[(i)]
\item $v_{k}$ goes to zero as $k$ diverges if $\sum_{m\geq 1}\prod_{l=0}^{m-1}r_{l}=+\infty$,
\item $v_{k}$ is summable in $k$ if $1-r_{k}$ is summable in $k$,
\item $v_{k}$ is $O(1-r_{k})$ if $1-r_{k}$ is summable in $k$ and $\sup_{j}\limsup_{k\rightarrow+\infty}\paren{\frac{1-r_{j}}{1-r_{kj}}}\leq1$,
\item $v_{k}$ goes to zero exponentially fast if $1-r_{k}$ decreases exponentially.
\end{enumerate}
\end{prop}
As observed in \cite{bressaud/fernandez/galves/1999a}, the conditions of item (iii) are satisfied if, for example, $1-r_{k}\sim (\log k)^{\eta}k^{-\zeta}$ for some $\zeta>1$ and any $\eta$. However, this is one of the few cases in which this proposition yields explicit rates.
In the present paper, we will prove the following proposition.
\begin{prop}\label{prop:expli}
We have the following explicit upper bounds.
\begin{enumerate}[(i)]
\item A non-summable case: if $1-r_{k}= \frac{r}{k}+s_{k}$, $k\ge1$, where $r\in(0,1)$ and $\{s_{n}\}_{n\ge1}$ is a summable sequence, then there exists a constant $C>0$ such that
\[
v_{k}\leq C\frac{(\ln k)^{3+r}}{k^{2-(1+r)^{2}}}.
\]
\item Generic summable case: if $t_{\infty}:=\prod_{k\ge0}r_{k}>0$, then
\[
v_{k}\leq \inf_{K=1,\ldots,k}\set{K^{2}(1-r_{k/K})+(1-t_{\infty})^{K}}.
\]
\item Exponential case: if $1-r_{k}\leq C_{r}r^{k},\,k\ge1$, for some $r\in(0,1)$ and a constant $C_{r}\in(0,\log \frac{1}{r})$ then
\[
v_{k}\leq \frac1{C_{r}}(e^{C_{r}}r)^{k}.
\]
\end{enumerate}
\end{prop}
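Item (ii) can be sanity-checked numerically. The sketch below uses a concrete summable sequence of our own choosing ($r_k=1-2^{-(k+1)}$, not one from the text), computes $v_n=\mathbb{P}(H_n=0)$ exactly through the first-return decomposition \eqref{bound2}, and verifies that it stays below the stated bound.

```python
import math

def v_renewal(r, N):
    """v[n] = P(H_n = 0) via the renewal recursion v_n = sum_m t_m v_{n-m},
    where t_m = (1 - r_{m-1}) * prod_{i <= m-2} r_i is the first-return law."""
    t = [0.0] * (N + 1)
    prod = 1.0
    for m in range(1, N + 1):
        t[m] = (1.0 - r[m - 1]) * prod
        prod *= r[m - 1]
    v = [1.0] + [0.0] * N
    for n in range(1, N + 1):
        v[n] = sum(t[m] * v[n - m] for m in range(1, n + 1))
    return v

N = 60
r = [1.0 - 0.5 ** (k + 1) for k in range(N + 1)]   # summable: sum(1 - r_k) = 1

# t_infinity = prod_k r_k > 0 in the summable case (cf. Fact F3 below),
# approximated here with 500 factors.
t_inf = 1.0
for k in range(500):
    t_inf *= 1.0 - 0.5 ** (k + 1)

v = v_renewal(r, N)
# bound of item (ii): inf over K of K^2 (1 - r_{N/K}) + (1 - t_inf)^K
bound = min(K * K * (1.0 - r[N // K]) + (1.0 - t_inf) ** K
            for K in range(1, N + 1))
```

For this sequence the exact value $v_N$ is several orders of magnitude below the bound, which illustrates that item (ii) is a coarse but fully explicit estimate.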
\subsection{Proof of Proposition \ref{prop:expli}}
Before we come into the proofs of each item of this proposition, let us collect some simple remarks on the House of Cards Markov chain.
Let $\{T_{k}\}_{k\ge0}$ be the sequence of stopping times defined by
$T_{0}:= 0$ and, recursively, for any $k\geq 1$, $T_{k}:=\inf\set{l\geq T_{k-1}+1\, \mbox{ s.t. } \, H_{l}=0}$. The Markov property ensures that
the random variables $I_{k}:= T_{k+1}-T_{k}$ are i.i.d., valued in $\ensuremath{\mathbb{N}}^{*}$, and it is easy to check that
\[
\forall k\geq 1,\qquad t_{k}:= \mathbb{P}\paren{I_{1}=k}=(1-r_{k-1})\prod_{i=0}^{k-2}r_{i}\enspace,
\]
where $\prod_{l=0}^{-1}:=1$. We have, for any $n\geq 0$,
\[
\mathbb{P}\paren{H_{n}=0}=\mathbb{P}\paren{\exists k\geq 0,\, \mbox{ s.t. } \, T_{k}=n}=\sum_{k=0}^{\infty}\mathbb{P}\paren{T_{k}=n}\enspace.
\]
We write $T_{k}=\sum_{l=0}^{k-1}I_{l}$. As all $I_{l}\geq 1$, we have $\mathbb{P}\paren{T_{k}=n}=0$ for all $k> n$. Therefore, for all $K\in[1,n]$,
\begin{equation}\label{eq.alpha.summable}
\mathbb{P}\paren{H_{n}=0}=\sum_{k=0}^{n}\mathbb{P}\paren{T_{k}=n}=\sum_{k=0}^{K}\mathbb{P}\paren{T_{k}=n}+\sum_{k=K+1}^{n}\mathbb{P}\paren{T_{k}=n}\enspace.
\end{equation}
\begin{fact}\label{fact.F1}
Let $K\in[1,n]$; we have $\mathbb{P}\paren{\forall l\in[1,K],\; I_{l}\leq n}=(1-\nu_{n+1})^{K}$. In particular, if $K\in[1,n]$, then
\begin{align*}
\mathbb{P}\paren{\exists j\in[K,n],\, \mbox{ s.t. } \, \sum_{l=0}^{j}I_{l}=n}&\leq \mathbb{P}\paren{\forall l\in[1,K],\; I_{l}\leq n}\\
&=(1-\nu_{n+1})^{K}\enspace.
\end{align*}
\end{fact}
In order to control $\sum_{k=0}^{K}\mathbb{P}\paren{T_{k}=n}=\mathbb{P}\paren{\exists k=0,\ldots,K,\;T_{k}=n}$, we simply remark that, if there exists $k\in\{1,\ldots,K\}$ such that $\sum_{l=1}^{k}I_{l}=n$, then there necessarily exist $i\in[1,K]$ and $r\in[1,K]$ such that $I_{i}=n/r$. This implies that
\begin{align*}
\mathbb{P}\paren{\exists k=0,\ldots,K,\;T_{k}=n}&\leq \mathbb{P}\paren{\exists i\in[1,K],\;\exists r\in[1,K],\, \mbox{ s.t. } \, I_{i}=\frac nr}\\
&\leq \sum_{i=1}^{K}\sum_{r=1}^{K}\mathbb{P}\paren{I_{1}=\frac nr}\leq K^{2}t_{n/K}\enspace.
\end{align*}
We have obtained the following result.
\begin{fact}\label{fact.F2}
Let $K\in[1,n]$; we have
\[
\sum_{k=0}^{K}\mathbb{P}\paren{T_{k}=n}\leq K^{2}t_{n/K}\enspace.
\]
\end{fact}
In the summable case (that is, when $\sum_{k\ge0}(1-r_{k})<+\infty$), the following fact is fundamental. Its proof is immediate.
\begin{fact}\label{fact.F3}
If $\sum_{n\ge0}(1-r_{n})<\infty$, then $t_{\infty}:= \mathbb{P}\paren{I_{1}=\infty}=\prod_{i=0}^{\infty}r_{i}>0$; in particular, $\nu_{n}:=
\mathbb{P}\paren{I_{1}\geq n}\geq t_{\infty}>0$. Moreover, for all $n\in\ensuremath{\mathbb{N}}$, $t_{\infty}(1-r_{n})\leq t_{n}\leq (1-r_{n})$.
\end{fact}
Using Facts \ref{fact.F1}, \ref{fact.F2} and \ref{fact.F3}, we are ready to prove items (i) and (ii) of Proposition \ref{prop:expli}.
\begin{proof}[Proof of Item (i) of Proposition \ref{prop:expli}]
As far as we know, all the results on the house of cards process hold in the summable case. When $\sum_{k\in\ensuremath{\mathbb{N}}}(1-r_{k})=\infty$, it is only known that $\sum_{n\in \ensuremath{\mathbb{N}}}\mathbb{P}\paren{H_{n}=0}=\infty$. It is interesting to notice that we can still obtain a rate of convergence for $\mathbb{P}\paren{H_{n}=0}$ from our elementary facts, at least in the following example. Let us assume that there exist $r<1$ and a summable sequence $s_{n}$ such that, for all $n\geq 1$, $1-r_{n}=\frac{r}n+s_{n}$. In this case, we have $\sum_{n\in \ensuremath{\mathbb{N}}}(1-r_{n})=\infty$, therefore $t_{\infty}=0$. Nevertheless,
\[
\prod_{i=0}^{n}r_{i}\leq \prod_{i=1}^{n}e^{-(1-r_{i})}=e^{-r \ln n+O(1)}\leq Cn^{-r}\enspace.
\]
Therefore $t_{n}\leq Cn^{-(1+r)}$. Moreover, using the inequality $(1-u)\geq e^{-u-u^{2}}$, valid for all $u<1/8$, we see that $t_{n}\geq cn^{-(1+r)}$. Therefore, $\nu_{n}=\sum_{k\geq n}t_{k}\geq cn^{-r}$. It follows from Fact \ref{fact.F1} that, for large $K$ and $n$,
\[
\sum_{k=K+1}^{n}\mathbb{P}\paren{T_{k}=n}\leq (1-\nu_{n+1})^{K}\leq e^{-cKn^{-r}}\enspace.
\]
Using Fact \ref{fact.F2}, we also have
\begin{equation}\label{eq.petitKsimple}
\sum_{k=0}^{K}\mathbb{P}\paren{T_{k}=n}\leq CK^{2}t_{n/K}\leq CK^{3+r}n^{-(1+r)}\enspace.
\end{equation}
We deduce then from \eqref{eq.alpha.summable} that, for all $K\in [0,n]$,
\[
\mathbb{P}\paren{H_{n}=0}\leq C\paren{\frac{K^{3+r}}{n^{1+r}}+e^{-cKn^{-r}}}\enspace.
\]
For $K=2n^{r}\ln n$, we obtain
\[
\mathbb{P}\paren{H_{n}=0}\leq C\frac{(\ln n)^{3+r}}{n^{1-2r-r^{2}}}=C\frac{(\ln n)^{3+r}}{n^{2-(1+r)^{2}}}\enspace.
\]
Note that $2-(1+r)^{2}>0$ if and only if $0<r<\sqrt{2}-1$, so the bound is non-trivial only for such $r$. It may not be optimal, but it is interesting to see that we can still derive rates of convergence from our basic remarks even in this pathological example.
\end{proof}
\begin{proof}[Proof of Item (ii) of Proposition \ref{prop:expli}]
We deduce from Facts \ref{fact.F1} and \ref{fact.F3} that, in the summable case
\[
\sum_{k=K+1}^{n}\mathbb{P}\paren{T_{k}=n}\leq (1-t_{\infty})^{K}\enspace.
\]
Therefore, from Facts \ref{fact.F2} and \ref{fact.F3},
\begin{equation}\label{control1}
\mathbb{P}\paren{H_{n}=0}\leq \inf_{K=1,\ldots,n}\set{K^{2}(1-r_{n/K})+(1-t_{\infty})^{K}}\enspace.
\end{equation}
\end{proof}
\begin{proof}[Proof of Item (iii) of Proposition \ref{prop:expli}]
Here, we assume that, for all $k$, $1-r_{k}\leq C_{r}r^{k}$, for some $r\in(0,1)$ and a constant $C_{r}>0$. In that case, for all $k$, we have, by independence,
\begin{align*}
\mathbb{P}\paren{\sum_{l=1}^{k}I_{l}=n}&=\sum_{i_{1}+\ldots+i_{k}=n}\mathbb{P}\paren{\bigcap_{l=1}^{k} \{I_{l}=i_{l}\}}\\
&=\sum_{i_{1}+\ldots+i_{k}=n}\prod_{l=1}^{k}\mathbb{P}\paren{I_{l}=i_{l}}\\
&\leq \sum_{i_{1}+\ldots+i_{k}=n}C_{r}^{k}r^{i_{1}+\ldots+i_{k}}=C_{r}^{k}r^{n}\sum_{i_{1}+\ldots+i_{k}=n}1\enspace.
\end{align*}
Let us evaluate the numbers $p_{k,n}=\sum_{i_{1}+\ldots+i_{k}=n}1$. We have $p_{1,n}=1$ and
\begin{align*}
p_{k,n}&=\sum_{l=1}^{n-k+1}\sum_{i_{k}=l}\sum_{i_{1}+\ldots+i_{k-1}=n-l}1=\sum_{l=1}^{n-k+1}p_{1,l}p_{k-1,n-l}\\
&=\sum_{l=1}^{n-k+1}p_{k-1,n-l}\enspace.
\end{align*}
Let us then assume that, for some $k$, we have, for all $n\geq k-1$, $p_{k-1,n}\leq n^{k-2}/(k-2)!$. Notice that this holds for $k=2$. Then, for all $n\geq k$,
\begin{align*}
p_{k,n}&\leq \sum_{l=1}^{n-k+1}\frac{(n-l)^{k-2}}{(k-2)!}=\sum_{l=k-1}^{n-1}\frac{l^{k-2}}{(k-2)!}\leq \int_{k-1}^{n}\frac{x^{k-2}}{(k-2)!}\,dx\leq \frac{n^{k-1}}{(k-1)!}\enspace.
\end{align*}
We deduce that
\[
\sum_{k=1}^{n}C_{r}^{k}\sum_{i_{1}+\ldots+i_{k}=n}1\leq \frac1{C_{r}}\sum_{k=1}^{n}\frac{(C_{r}n)^{k-1}}{(k-1)!}\leq \frac{e^{C_{r}n}}{C_{r}}\enspace.
\]
Therefore,
\[
\mathbb{P}\paren{H_{n}=0}=\sum_{k=1}^{n}\mathbb{P}\paren{T_{k}=n}\leq \frac1{C_{r}}(e^{C_{r}}r)^{n}\enspace.
\]
Hence, when $C_{r}<\ln (1/r)$, we have $e^{C_{r}}r<1$ and $\mathbb{P}\paren{H_{n}=0}$ decreases exponentially fast.
\end{proof}
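The composition count $p_{k,n}$ used in the proof of item (iii) has the standard closed form $\binom{n-1}{k-1}$ (the number of compositions of $n$ into $k$ positive parts; this closed form is not stated in the text, only the recursion and the bound). A quick sketch checks both the recursion above and the bound $p_{k,n}\le n^{k-1}/(k-1)!$ on small values:

```python
from math import comb, factorial

def p(k, n):
    """p_{k,n} = #{(i_1,...,i_k) : i_j >= 1, i_1+...+i_k = n} = C(n-1, k-1)."""
    return comb(n - 1, k - 1)

def p_brute(k, n):
    """Brute-force version of the recursion p_{k,n} = sum_l p_{k-1,n-l}."""
    if k == 1:
        return 1 if n >= 1 else 0
    return sum(p_brute(k - 1, n - l) for l in range(1, n - k + 2))
```

The binomial closed form dominates term-by-term by $n^{k-1}/(k-1)!$, since $\binom{n-1}{k-1}=(n-1)\cdots(n-k+1)/(k-1)!$.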
\section{Concentration of geometric random variables}\label{sec.conc.geom}
Let $\xi, \xi_{1:n}$ be i.i.d. geometric random variables with parameter $\alpha$, i.e., $\forall k\geq 1$, $\mathbb{P}\paren{\xi=k}=(1-\alpha)^{k-1}\alpha$. In this section we obtain the following upper bounds.
\begin{prop}
Let $C_{1,\alpha}=\frac{1-\alpha}{\alpha}+4\paren{\frac{1-\alpha}{\alpha}}^{2}$ and $C_{2,\alpha}=\ln\paren{\frac{2-\alpha}{2(1-\alpha)}\wedge 2}$. Then, $\forall x>0$,
\begin{align}
\label{eq.conc.geo.up}\mathbb{P}\paren{\frac1n\sum_{i=1}^{n}\xi_{i}-\frac1{\alpha}>x}\leq e^{-n\paren{\frac{x^{2}}{2C_{1,\alpha}}\wedge \frac{C_{2,\alpha}}2x}}\enspace.\\
\notag\mathbb{P}\paren{\frac1n\sum_{i=1}^{n}\xi_{i}-\frac1{\alpha}<-x}\leq e^{-n\paren{\frac{x^{2}}{2C_{1,\alpha}}\wedge \frac{x}2}}\enspace.
\end{align}
\end{prop}
As a corollary of this result, we obtain the following bound when $n=\lfloor k\alpha/2\rfloor$ and $x=1/\alpha$.
\begin{coro}\label{coro.contuk}
Let $k\in \ensuremath{\mathbb{N}}^{*}$, $\alpha\in(0,1)$, $n=\lfloor k\alpha/2\rfloor$, $x=k/(2n)\geq 1/\alpha$, let $\xi_{1:n}$ be i.i.d. geometric random variables with parameter $\alpha$, and
\[u_{k}:= n\mathbb{P}\paren{\absj{\sum_{j=1}^{n}\xi_{j}-\frac{n}{\alpha}}>nx}\enspace.\]
Then, we have, for $C_{3,\alpha}:= \frac{\alpha}{4(1-\alpha)(4-3\alpha)}\wedge\frac1{4}\ln\paren{\frac{2-\alpha}{2(1-\alpha)}\wedge 2}$, for all $\epsilon>0$ and all $k>k(\epsilon)$,
\[
u_{k}\leq \alpha e^{-k(C_{3,\alpha}-\epsilon)}\enspace.
\]
\end{coro}
\subsection{Chernoff's bound}
Let $Y,Y_{1:n}$ be i.i.d. random variables such that $\mathbb{E}\paren{e^{\lambda Y}}<\infty$ for all $a<\lambda<b$. Then,
\begin{equation}\label{eq.chernov.bound}
\forall x>0,\qquad \mathbb{P}\paren{\frac1n\sum_{i=1}^{n}Y_{i}>x}\leq \inf_{na<\lambda<nb}e^{-\lambda x}\paren{\mathbb{E}\paren{e^{\frac{\lambda}n Y}}}^{n}\enspace.
\end{equation}
\begin{proof}
We have, by independence of the $Y_{i}$ and Markov's inequality, for all $na<\lambda<nb$,
\begin{align*}
\mathbb{P}\paren{\frac1n\sum_{i=1}^{n}Y_{i}>x}&=\mathbb{P}\paren{e^{\frac{\lambda}n\sum_{i=1}^{n}Y_{i}}>e^{\lambda x}}\leq e^{-\lambda x}\mathbb{E}\paren{e^{\frac{\lambda}n\sum_{i=1}^{n}Y_{i}}}=e^{-\lambda x}\paren{\mathbb{E}\paren{e^{\frac{\lambda}n Y}}}^{n}\enspace.
\end{align*}
\end{proof}
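The generic bound \eqref{eq.chernov.bound} can be probed numerically for geometric variables. In the sketch below the parameter values ($\alpha=0.3$, $n=50$, threshold one unit above the mean) are arbitrary choices, and the infimum over $\lambda$ is approximated on a grid; the empirical tail probability indeed stays below the optimized bound.

```python
import math
import random

def geo_mgf(s, alpha):
    """E[e^{s*xi}] for xi geometric(alpha) on {1,2,...}; finite for s < -ln(1-alpha)."""
    return alpha * math.exp(s) / (1.0 - (1.0 - alpha) * math.exp(s))

def chernoff_bound(n, x, alpha, grid=2000):
    """inf over lambda of e^{-lambda*x} * MGF(lambda/n)^n, evaluated in log space."""
    smax = -math.log(1.0 - alpha)        # MGF finite for lambda/n < smax
    best = 0.0                           # lambda = 0 gives log-bound 0
    for j in range(1, grid):
        lam = 0.999 * n * smax * j / grid
        best = min(best, -lam * x + n * math.log(geo_mgf(lam / n, alpha)))
    return math.exp(best)

alpha, n = 0.3, 50
x = 1.0 / alpha + 1.0                    # threshold: one unit above the mean
rng = random.Random(2)

def geo():
    # inverse-CDF sampling of a geometric(alpha) variable on {1,2,...}
    return 1 + int(math.log(1.0 - rng.random()) / math.log(1.0 - alpha))

trials = 20_000
emp = sum(sum(geo() for _ in range(n)) / n > x for _ in range(trials)) / trials
bound = chernoff_bound(n, x, alpha)
```

As expected, the Chernoff bound is a true (if loose) upper bound on the empirical tail frequency.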
\subsection{Exponential moments of geometric random variables}
Let $\xi$ be a geometric random variable with parameter $\alpha$, then
\begin{align}\label{eq.moment.expo.geo}
\forall \lambda<-\ln(1-\alpha),&\qquad \mathbb{E}\paren{e^{\lambda \xi}}\leq \frac{\alpha e^{\lambda}}{1-(1-\alpha)e^{\lambda}}\enspace,\\
\notag\forall \lambda>\ln(1-\alpha),&\qquad \mathbb{E}\paren{e^{\lambda(-\xi)}}\leq \frac{\alpha e^{-\lambda}}{1-(1-\alpha)e^{-\lambda}}\enspace.
\end{align}
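The closed form \eqref{eq.moment.expo.geo} is easy to confirm against a truncated series (an illustrative check only; the truncation length and the test values of $\lambda$ are arbitrary choices):

```python
import math

def mgf_series(lam, alpha, terms=20_000):
    """Truncated series sum_{k>=1} e^{lam*k} (1-alpha)^{k-1} alpha;
    each term is built in log space to avoid overflow for lam > 0."""
    log1m = math.log(1.0 - alpha)
    return sum(alpha * math.exp(lam * k + (k - 1) * log1m)
               for k in range(1, terms + 1))

def mgf_closed(lam, alpha):
    """alpha*e^lam / (1 - (1-alpha)*e^lam), valid for lam < -ln(1-alpha)."""
    return alpha * math.exp(lam) / (1.0 - (1.0 - alpha) * math.exp(lam))

alpha = 0.4
checks = [(lam, mgf_series(lam, alpha), mgf_closed(lam, alpha))
          for lam in (-1.0, -0.3, 0.1, 0.4)]  # all below -ln(0.6) ~ 0.51
```

The series and the closed form agree to within floating-point accuracy on the admissible range of $\lambda$.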
\begin{proof}
By definition, we have, $\forall\lambda<-\ln(1-\alpha)$,
\begin{align*}
\mathbb{E}\paren{e^{\lambda \xi}}&=\sum_{k\geq 1}e^{\lambda k}(1-\alpha)^{k-1}\alpha=\alpha e^{\lambda}\sum_{k\geq 0}\paren{(1-\alpha)e^{\lambda}}^{k}=\frac{\alpha e^{\lambda}}{1-(1-\alpha)e^{\lambda}}\enspace.
\end{align*}
Moreover, for all $\lambda>\ln(1-\alpha)$,
\begin{equation*}
\mathbb{E}\paren{e^{-\lambda \xi}}=\alpha e^{-\lambda}\sum_{k\geq 0}\paren{(1-\alpha)e^{-\lambda}}^{k}=\frac{\alpha e^{-\lambda}}{1-(1-\alpha)e^{-\lambda}}\enspace.
\end{equation*}
\subsection{Proof of the deviation bounds}
Plugging \eqref{eq.moment.expo.geo} in \eqref{eq.chernov.bound}, we obtain, for all $\lambda<-n\ln (1-\alpha)$,
\begin{align*}
\mathbb{P}\paren{\frac1n\sum_{i=1}^{n}\xi_{i}>\frac1{\alpha}+x}&\leq e^{-\lambda\paren{\frac1{\alpha}+x}}\paren{\frac{\alpha e^{\lambda/n}}{1-(1-\alpha)e^{\lambda/n}}}^{n}\\
&=\alpha^{n}e^{-\lambda\paren{\frac1{\alpha}+x-1}}e^{-n\ln\paren{1-(1-\alpha)e^{\lambda/n}}}
\end{align*}
Choosing $\lambda=n\epsilon$ for $\epsilon\leq \ln\paren{\frac{2-\alpha}{2(1-\alpha)}\wedge 2}$, and using the inequalities $e^{\epsilon}\leq 1+\epsilon+\epsilon^{2}$, valid for all $\epsilon\leq \ln 2$, and $-\ln (1-u)\leq u+u^{2}$, valid when $u\leq 1/2$, this last bound is equal to
\begin{align*}
&\paren{\alpha e^{-\epsilon\paren{\frac1{\alpha}+x-1}}e^{-\ln\paren{1-(1-\alpha)e^{\epsilon}}}}^{n}\\
&\leq\paren{\alpha e^{-\epsilon\paren{\frac1{\alpha}+x-1}}e^{-\ln(\alpha)-\ln\paren{1-\frac{(1-\alpha)}{\alpha}(e^{\epsilon}-1)}}}^{n}\leq\paren{e^{-\epsilon\paren{\frac1{\alpha}+x-1}}e^{\frac{(1-\alpha)}{\alpha}(e^{\epsilon}-1)+\paren{\frac{(1-\alpha)}{\alpha}(e^{\epsilon}-1)}^{2}}}^{n}\\
&\leq e^{-n\epsilon \paren{x-\epsilon\paren{\frac{1-\alpha}{\alpha}+4\paren{\frac{1-\alpha}{\alpha}}^{2}}}}\enspace.
\end{align*}
Let $C_{\alpha}=\frac{1-\alpha}{\alpha}+4\paren{\frac{1-\alpha}{\alpha}}^{2}$. Choosing $\epsilon\leq x/(2C_{\alpha})$, we have $x-\epsilon C_{\alpha}\geq x/2$; hence, choosing $\epsilon=\frac{x}{2C_{\alpha}}\wedge \ln\paren{\frac{2-\alpha}{2(1-\alpha)}\wedge 2}$, we conclude the proof of the first bound.
Plugging \eqref{eq.moment.expo.geo} in \eqref{eq.chernov.bound}, we obtain, for all $\lambda>n\ln (1-\alpha)$,
\begin{align*}
\mathbb{P}\paren{\frac1n\sum_{i=1}^{n}\xi_{i}<\frac1{\alpha}-x}&=\mathbb{P}\paren{\frac1n\sum_{i=1}^{n}(-\xi_{i})>-\frac1{\alpha}+x}\\
&\leq e^{-\lambda\paren{-\frac1{\alpha}+x}}\paren{\frac{\alpha e^{-\lambda/n}}{1-(1-\alpha)e^{-\lambda/n}}}^{n}\\
&=\alpha^{n}e^{-\lambda\paren{-\frac1{\alpha}+x+1}}e^{-n\ln\paren{1-(1-\alpha)e^{-\lambda/n}}}
\end{align*}
Choosing $\lambda=n\epsilon$, with $\epsilon\leq 1$, this last bound is equal to
\begin{align*}
&\paren{\alpha e^{-\epsilon\paren{-\frac1{\alpha}+x+1}}e^{-\ln\paren{1-(1-\alpha)e^{-\epsilon}}}}^{n}\\
&=\paren{\alpha e^{-\epsilon\paren{-\frac1{\alpha}+x+1}}e^{-\ln\paren{\alpha}-\ln\paren{1-(e^{-\epsilon}-1)\frac{1-\alpha}{\alpha}}}}^{n}\leq \paren{e^{-\epsilon \paren{-\frac1{\alpha}+x+1}}e^{(e^{-\epsilon}-1)\frac{1-\alpha}{\alpha}+\paren{(e^{-\epsilon}-1)\frac{1-\alpha}{\alpha}}^{2}}}^{n}\\
&\leq \paren{e^{-\epsilon \paren{-\frac1{\alpha}+x+1}}e^{(-\epsilon+\epsilon^{2})\frac{1-\alpha}{\alpha}+\paren{(-\epsilon+\epsilon^{2})\frac{1-\alpha}{\alpha}}^{2}}}^{n}\leq e^{-n\croch{\epsilon x-\epsilon^{2}\paren{\frac{1-\alpha}{\alpha}+4\paren{\frac{1-\alpha}{\alpha}}^{2}}}}\enspace.
\end{align*}
Let $C_{\alpha}=\frac{1-\alpha}{\alpha}+4\paren{\frac{1-\alpha}{\alpha}}^{2}$. Choosing $\epsilon\leq x/(2C_{\alpha})$, we have $x-\epsilon C_{\alpha}\geq x/2$; hence, choosing $\epsilon=\frac{x}{2C_{\alpha}}\wedge 1$, we conclude the proof.
\end{proof}
\end{document} |
\begin{document}
\thispagestyle{empty}
\def\qedbox{\nobreak\hskip 2pt\vrule height 5pt width 5pt depth 0pt}
\def\Proof{\noindent{\sl Proof: }}
\def\Remark{\noindent {\bf Remark: }}
\def\D{{\bf D}}
\def\R{{\bf R}}
\def\T{{\bf T}}
\def\Z{{\bf Z}}
\def\Q{{\bf Q}}
\def\N{{\bf N}}
\def\C{{\bf C}}
\def\TTT{{\cal T}}
\def\QED{{\bf Q.E.D.}}
\def\O{{\bf O}}
\def\DD{{{I\negthinspace\!D}}}
\def\RR{{{I\negthinspace\!R}}}
\def\NN{{{I\negthinspace\!N}}}
\def\TT{{{T\negthinspace\negthinspace\!T}}}
\def\ZZ{{{Z\negthinspace\negthinspace\!Z}}}
\def\QQ{{{0\negthinspace\negthinspace\negthinspace\!Q}}}
\def\CC{{{C\negthinspace\negthinspace\negthinspace\!C}}}
\def\St{{{\tilde S}}}
\def\Cstar{{\CC^*}}
\def\Chat{{\hat{\CC}}}
\begin{titlepage}
\title{ The Set of Maps $F_{a,b}:x \mapsto x+a+{b\over 2\pi}
\sin(2\pi x)$ with any
Given Rotation Interval is Contractible}
\author{
Adam Epstein\\Mathematics Department\\California Institute of Technology\\
Pasadena, CA 91125\and
Linda Keen\thanks{Supported in part by NSF Grant
DMS-9205433, Inst. Math. Sciences, SUNY Stony Brook, and I.B.M.}
\\Mathematics Department\\CUNY Lehman College\\Bronx, NY 10468 \and
Charles
Tresser\\I.B.M.\\ PO Box 218\\ Yorktown Heights N.Y. 10598}
\maketitle
\begin{abstract}
Consider the two-parameter family of real
analytic maps $F_{a,b}:x \mapsto x+ a+{b\over 2\pi} \sin(2\pi x)$ which are
lifts of degree one endomorphisms of the circle. The purpose of this paper
is to provide a proof that for any closed interval $I$, the set of maps
$F_{a,b}$ whose rotation interval is $I$ forms a contractible set.
\end{abstract}
\end{titlepage}
\section{Introduction}
\label{sec:intro}
Orientation preserving homeomorphisms and diffeomorphisms of the
circle have attracted the attention of mathematicians and physicists
for a long time because they arise as Poincar{\'e} maps induced by
non-singular flows on the two-dimensional torus \cite{Ar,De,Herman,Po}. More
recently, families of circle endomorphisms which are deformations of
rotations have appeared as approximate models for some scenarios of
transition to ``chaos'', or more technically, transitions from zero to
positive topological entropy. One can observe these transitions by
varying parameters of flows in $3$-space so that tori supporting
non-singular flows get wrinkled and then get destroyed. More generally,
these scenarios are typical for a huge variety of systems of coupled
oscillators so that one sees them everywhere. As a matter of fact, in
many cases when one is led to study an endomorphism of the interval as
a model for a natural science experiment, some circle endomorphism is
a more adequate model.
While the simplest endomorphisms of the
interval depend on a single parameter, say the non-linearity, the
simplest reasonably complete family of circle endomorphisms containing
the rotations has to depend on two parameters: the non-linearity and
some form of mean rotation speed. In the coupled oscillators picture,
these parameters correspond respectively to the strength of the
forcing and the frequency ratio of the coupled oscillators. The
paradigm for interval endomorphisms is the quadratic family. For the
circle, it is the so-called {\em Arnold} or {\em standard} two-parameter
family:
$$f_{A,b}:\theta \mapsto (\theta +A+{b\over 2\pi}
\sin(2\pi\theta))_1\,,$$
with $(A,\,b)\in [0,\,1[\times {\bf R}R^+$, and
$(\theta)_n\buildrel \rm def\over = \theta\,{\mathop {\rm mod}}\,n.$
Under an orientation preserving homeomorphism of the circle, the orbits of
all points wrap around the circle at the same average speed \cite{Po}. For
non-invertible maps this is no longer necessarily the case, but the
set of average speeds forms a closed interval. With the concepts roughly
recalled so far, we can give an example of a result that is a corollary of
our main theorem: for any average speed $\omega$, the set $\lbrace (A,\,b)
\rbrace$ of pairs such that all orbits under the map $f_{A,b}$ wrap
around the circle at speed $\omega$ is connected. Our main result is
in fact a similar statement in the more general setting where average
speeds vary in an interval.
Precise definitions and statements are contained in $\S 2$. In
$\S 3$, we reduce our main theorem to a rigidity result: this
reduction is merely well-known material, but some proofs are sketched
for completeness. In $\S 3$ we have also included some material not
strictly needed for the proof of Theorem A, but intended to help some
readers to build an intuitive picture of what the main result is all
about. The rigidity property, formalized in Theorem D, is proved in
$\S 4$. Our proof of Theorem D is
one more example of the efficiency of complex analytic methods in
dealing with questions arising naturally in a real analytic framework.
\section{ Definitions and statement of the results}
\label{sec:defns}
Let ${\bf T}T={\bf R}R /{\bf Z}Z$ be the circle and $\Pi:{\bf R}R\to {\bf T}T$ the canonical
projection. The real continuous map $F$ is a lift of the continuous
circle map $f:{\bf T}T\to {\bf T}T$ if and only if
$$f\circ\Pi=\Pi\circ F\,.$$
The integer $d$ such that $$F(x+1)=F(x)+d\,\,,$$
for all real numbers $x$ is called the {\it degree} of $f$ (or of
$F$). The identity map, and more generally the rotations, have degree
one. Since the degree varies continuously for continuous deformations
of circle maps, and since we are interested in a parametrized continuous
family containing rotations, we shall only consider degree one maps in
the rest of the paper. Hence, {\it circle map} will always
mean ``degree-one continuous circle map", and a real map
will be called a {\it lift} if and only if it is the lift of a degree
one circle map.
Let $f$ be a circle map, and let $F$ be a lift of $f$
(each time both symbols $f$ and $F$ appear conjoined in the paper,
they are related in the same way). We define
$$\underline{\rho}_F(x)=\liminf_{n\to\infty}{F^n(x) \over n}\,\,,$$
and
$$\overline{\rho}_F(x)=\limsup_{n\to\infty}{F^n(x) \over n}\,\,.$$
The {\it rotation interval} of $F$ \cite{NPT} is then
$$I(F)=[\,\alpha,\,\beta]\,\,,$$
where
$$\alpha=\inf_{x\in
{\bf R}R}\underline{\rho}_F(x)\,\,,\,\,\beta=\sup_{x\in
{\bf R}R}\overline{\rho}_F(x)\,\,.$$ When $I(F)$ is a singleton $\lbrace
\omega\rbrace$, we sometimes use the classical language and say that
$\rho (F)\buildrel \rm def\over = \omega$ is the {\em
rotation number} of $F$.
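For readers who wish to experiment, the rotation interval can be approximated by the finite-time averages $(F^n(x)-x)/n$ over a sample of initial points. The following Python sketch is purely illustrative and plays no role in our proofs; the sample size, iteration count and parameter values are arbitrary choices, and a finite-time average only approximates the limits defining $I(F)$.

```python
import math

def F(x, a, b):
    """Standard lift F_{a,b}(x) = x + a + (b/(2*pi)) * sin(2*pi*x)."""
    return x + a + (b / (2 * math.pi)) * math.sin(2 * math.pi * x)

def rotation_interval(a, b, n_iter=2000, n_samples=100):
    """Crude estimate of I(F_{a,b}): iterate a grid of initial points and
    record the extreme average displacements (F^n(x0) - x0)/n."""
    lo, hi = float("inf"), float("-inf")
    for i in range(n_samples):
        x0 = i / n_samples
        x = x0
        for _ in range(n_iter):
            x = F(x, a, b)
        speed = (x - x0) / n_iter
        lo, hi = min(lo, speed), max(hi, speed)
    return lo, hi

# For b <= 1 the lift is non-decreasing, so the estimate degenerates
# (up to finite-time error) to a single rotation number.
print(rotation_interval(0.5, 0.5))
```

For $b>1$ the sampled speeds may fail to exhaust $I(F)$ (orbits can be captured by attracting periodic orbits), which is one reason the monotonic bounds of \S~3.2 are the right tool.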
We will focus on the standard family $f_{A,b}$ with parameter space
$[0,1[\times{\bf R}R ^+,$ and the corresponding degree one lifts $F_{a,b}$ with
parameter space ${\bf R}R\times{\bf R}R ^+,$ where the correspondence is given by $A=a\,
\,{\mathop{\rm mod}}\,\,1$. To state our main result we need the following
\proclaim Definition. An arc or curve $a =
\phi(b)$ in parameter space ${\bf R}R \times {\bf R}R^+$ is called an {\em L-curve} if
$\phi$ is uniformly Lipschitz with bound ${1\over 2\pi}$.
\proclaim Theorem A. For each closed interval $I$, the set
$R_I$ of standard lifts with rotation interval $I$ corresponds to a
contractible region, also denoted $R_I$, in the parameter space
${\bf R}R\times {\bf R}R^+$. More precisely,
\begin{itemize}
\item For any
irrational number $\omega$, $R_{\lbrace \omega\rbrace}$ is an L-curve.
\item If one bound of
$I$ is irrational while the second bound is a rational number, $R_I$
is an L-curve.
\item If
the bounds of $I$ are distinct irrational numbers, $R_I$ is an L-curve.
\item When both bounds of $I$ are rational, $R_I$ is a lens shaped domain
bounded by two
L-curves that meet at their endpoints.
\end{itemize}
\noindent {\bf Convention}.
To simplify the language and the notation, we shall continue to
identify sets of standard lifts with the corresponding regions in
parameter space, as we did in the statement of Theorem A; when the distinction
is relevant the context should tell which space we mean.
\proclaim Conjecture B. If the bounds of $I$ are distinct
irrational numbers, $R_I$ is a point.
The content of Theorem
A is illustrated in Figures 1 to 3. The labeling appearing in these Figures
is defined in $\S\S$ 2 and 3.
As an important particular case, relevant for example in the description of
the boundary of chaos \cite{MaT}, Theorem A contains the following result which
was conjectured in \cite{Bo} (p. 378 and Fig. 13) and \cite{MaT} (p. 213):
\proclaim Corollary C. For each real $\omega$, the set $R_{\lbrace
\omega\rbrace}$ is connected.
\noindent {\bf Remark: }
Corollary C
(which was known for a long time to hold in the case when $\omega$ is
irrational) was recently proved for some families of piecewise affine
lifts; for these families, the proof only uses elementary real
variable methods \cite{UTGC}. Later, and in parallel to the work presented
here, more sophisticated real variable methods \cite{GMT} were used to get the
counterpart to Theorem A as well as a proof of Conjecture B for these piecewise
affine lifts.
\section{ Proof of Theorem A, Part I: Real analytic part}
\label{sec:real analysis}
\subsection{ Non-decreasing lifts}
We shall denote by ${\cal F}^k({\bf R}R)$ the space of $C^k$ non-decreasing
lifts, equipped with the $C^k$ norm. The next theorem recalls some classical
results \cite{Ar,Herman}, usually formulated for lifts of degree one
homeomorphisms, but generalizable {\it verbatim} to ${\cal F}^0({\bf R}R)$.
\proclaim Theorem 3.1.1.
\begin{enumerate}
\item
The rotation number, as a function $\rho:{\cal
F}^0({\bf R}R)\to {\bf R}R$,
is continuous.
\item
For $F$ and $G$ in
${\cal F}^0({\bf R}R)$,
$$F\geq G\qquad\Rightarrow\qquad \rho
(F)\geq
\rho(G)\,\,,$$
$$F>G\,\,\&\,\,\rho(F)\,\mbox{ or }\,\rho(G)\,\mbox{ irrational}
\quad\Rightarrow\quad
\rho (F)>
\rho(G)\,\,.$$
\item
If $\rho(F)$ is irrational
and
$f$ has a dense orbit, then $$F\geq G\quad\mbox{and}\quad
F\not=
G\qquad\Rightarrow\qquad \rho (F)>\rho(G)\,\,.$$
\end{enumerate}
The next
result which can also be found in the above references, is more
specific
to our problem:
\proclaim Theorem 3.1.2.
For $b\not = 0$, no iterate of a standard lift is
affine.
Set
$${\bf A}'_\omega=\lbrace F\in {\cal
F}^0({\bf R}R)\,|\,\omega\in I(F) \rbrace\,.$$
Theorems 3.1.1 and 3.1.2 yield a
partition of the subset
${\bf R}R\times [0,1]$ of the parameter space of
the standard family into the ${\bf
A}'_\omega$'s, with the following
properties \cite{Ar}:
P1. For $\omega$ irrational, ${\bf
A}'_\omega$ is an arc crossing each line
$b=\mbox{constant}\leq 1$ at a
single point.

P2. For a rational number ${p\over q}$,
${\bf A}'_{p\over q}$ is often called an
{\it Arnold tongue}; it
crosses each line $0<b=\mbox{constant} \leq 1$ on an
interval of positive length.
\noindent {\bf Remark: } Property P2
describes an aspect of the phenomenon of
``frequency locking'',
first described by Huygens, in the context of clocks hanging
from the same wall, and described in modern terms, in the simplest cases,
as the structural stability of generic degree one circle diffeomorphisms
with rational rotation numbers.
\subsection {Some special sets}
Following \cite{Bo} and \cite{MaT}
(both of whom extended the above mentioned work in \cite{Ar} from
homeomorphisms to endomorphisms), for each real number $\omega$, we define
$${\bf A}_\omega=\lbrace F \in
\lbrace F_{a,b}\rbrace_{(a,\,b)\in
{\bf R}R\times
{\bf R}R^+}\,\,|\,\omega\in I(F) \rbrace\,,$$
and
$${\bf L}_\omega=
\lbrace F \in \lbrace F_{a,b}\rbrace_{(a,\,b)\in
{\bf R}R\times
{\bf R}R^+}\,\,|\,\lbrace \omega \rbrace
=I(F)
\rbrace\,.$$
According to our notational convention, ${\bf A}_\omega$ and ${\bf
L}_\omega$ can also be understood as subspaces of the parameter space. The
following theorem enables us to use the results of \S~3.1 to analyze these
subspaces.
\proclaim Theorem 3.2.1 {\rm (\cite{CGT,Mi})}.
\begin{enumerate}
\item
For any lift $F\in
C^0_1({\bf R}R)$,
$$I(F)=[\,\rho(F^-),\,\rho(F^+)]\,,$$
where
$F^+$ is
the monotonic upper-bound of $F$, and $F^-$ is the
monotonic
lower-bound (see Figure 4). In formulas we
have:
$$F^+(x)=\sup_{y\leq x}(F(y))\,,$$
$$F^-(x)=\inf_{y\geq x}(F(y))\,.$$
\item
For each $\omega\in I(F)$, there
is a non-decreasing
lift $F_\omega$ with $\rho(F_\omega)=\omega$, and
such that $F_\omega$ coincides
with $F$ where it is not locally
constant.
\end{enumerate}
Theorem 3.2.1 is quite easy to prove for the
maps in the standard family.
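For the standard family, Theorem 3.2.1 can also be made effective: for $b>1$ the local extrema of $F_{a,b}$ occur where $\cos(2\pi x)=-1/b$, and one checks directly that $F^+(x)=\max(F(x),F(m))$, where $m$ is the largest local-maximum point $\leq x$, with a symmetric formula for $F^-$. The Python sketch below (an illustration only, with arbitrary parameter values; it is no part of the argument) estimates $I(F)=[\rho(F^-),\rho(F^+)]$ this way.

```python
import math

def F(x, a, b):
    return x + a + (b / (2 * math.pi)) * math.sin(2 * math.pi * x)

def monotone_bounds(a, b):
    """Monotonic upper and lower bounds F^+ and F^- of F_{a,b} for b > 1,
    computed from the local extrema of the sine term."""
    c1 = math.acos(-1.0 / b) / (2 * math.pi)   # local max of F in [0,1)
    c2 = 1.0 - c1                              # local min of F in [0,1)
    def F_plus(x):
        m = c1 + math.floor(x - c1)            # nearest local max <= x
        return max(F(x, a, b), F(m, a, b))
    def F_minus(x):
        m = c2 + math.ceil(x - c2)             # nearest local min >= x
        return min(F(x, a, b), F(m, a, b))
    return F_plus, F_minus

def rho(G, n=3000):
    """Finite-time rotation number of a non-decreasing lift G."""
    x = 0.0
    for _ in range(n):
        x = G(x)
    return x / n

# I(F) = [rho(F^-), rho(F^+)]  (Theorem 3.2.1)
Fp, Fm = monotone_bounds(0.25, 3.0)
print(rho(Fm), rho(Fp))
```

For instance, with $a=0$, $b=3$ both monotonic bounds fix the origin, so the estimate returns the degenerate interval $\{0\}$, while $a=1/4$, $b=3$ yields a non-degenerate interval.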
\subsection{ Simple properties of the ${\bf
A}_\omega$'s and ${\bf
L}_\omega$'s.}
Theorem 3.2.1 allows us to
use Theorem 3.1.1 in the study of non-invertible
maps. In
particular, one gets easily that the ${\bf A}_\omega$'s are
connected
and, for $\omega$ rational, they intersect each
line
$b=\mbox{constant} >1$ on a segment of non-zero length. We discuss the
case when $\omega$
is irrational in \S~3.4. A fundamental role
will be played by the boundaries of
the ${\bf A}_\omega$'s and some of
their accumulation sets defined as follows:
$${\bf B}_{\omega}^l=\lim _{{\theta}\to{\omega}^+}{\bf A}_{\theta}^l\,,$$
$${\bf B}_{\omega}^r=\lim _{{\theta}\to{\omega}^-}{\bf A}_{\theta}^r\,.$$
These boundaries, pieces of which form all the boundaries of the sets
$R_I$ of Theorem A, are
described in the
following result due to Boyland \cite{Bo}. We include a proof for the sake of
completeness.
\proclaim Theorem 3.3.1 {\rm (\cite{Bo})}.
\begin{enumerate}
\item
For any
real number $\omega$, the left and right bounds ${\bf
A}_{\omega}^l$
and ${\bf A}_{\omega}^r$ of ${\bf A}_{\omega}$ in
${\bf R}R\times {\bf R}R ^+$, are L-curves.
\item
When $\omega$ is a
rational number these curves intersect at a
single point which has $b=0$ as second coordinate.
\item
$${\bf B}_{p\over q}^l=\lim _{{p'\over q'}\to{p\over q}^+}{\bf
A}_{p'\over q'}^l\,,\, \mbox{\rm and }{\bf B}_{p\over q}^r=\lim _{{p'\over
q'}\to{p\over q}^-}{\bf A}_{p'\over q'}^r\,.$$
Moreover, the sets ${\bf B}_{p\over q}^l$ and ${\bf B}_{p\over q}^r$ are
L-curves.
\item
For $\omega$ irrational
$${\bf
A}_\omega ^l={\bf B}_\omega ^l=\lim _{{p\over q}\to\omega ^-}
{\bf
A}_{p\over q}^l\,,$$
$${\bf A}_\omega ^r={\bf B}_\omega
^r=\lim _{{p\over q}\to\omega ^+} {\bf
A}_{p\over q}^r\,.$$
\end{enumerate}
\noindent{\sl Proof: }
With no loss of generality, we prove statement~1 for ${\bf A}_{p\over q}^l$.
To do this we consider the vertical cone in ${\bf R}R\times {\bf R}R ^+$ with vertex at
$(a,b)$ and boundaries made by lines with slopes $2\pi$ and $-2\pi$, and show
that it contains ${\bf A}_{p\over q}^l$. A point to the right of the cone
has coordinates $a'=a+\delta+\epsilon$, $b'=b\pm 2\pi \delta$, while a point to
the left of the cone has coordinates $a''=a-\delta-\epsilon$, $b''=b\mp 2\pi
\delta$, for some non-negative $\delta$ and $\epsilon$. Consequently, using the
continuity of the rotation number of $F_{a,b}^+$ as a function of the
parameters, and its monotonicity as a function of $a$, the inclusion of ${\bf
A}_{p\over q}^l$ in the cone follows from $$\forall
\delta \geq 0,\,\, \forall \epsilon\geq 0:\quad \delta+\epsilon\pm \delta \sin
2\pi x\geq 0\,.$$
Statement~2
follows from Theorem 3.1.2. Statement~3
follows from statement~1
by continuity of the rotation number applied to the
monotonic bounds.
Statement~4 is a consequence of the same
continuity
property.
{\bf Q.E.D.}
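The inequality at the heart of the proof just given can also be checked mechanically: for a parameter point to the right of the cone, $F_{a',b'}(x)-F_{a,b}(x)=\delta+\epsilon\pm\delta\sin(2\pi x)\geq 0$ for all $x$. The Python sketch below verifies this pointwise domination on a grid; it is purely illustrative (arbitrary parameter values) and adds nothing to the proof.

```python
import math

def F(x, a, b):
    return x + a + (b / (2 * math.pi)) * math.sin(2 * math.pi * x)

def right_of_cone_dominates(a, b, delta, eps, n=1000):
    """Check pointwise that moving the parameter to
    (a + delta + eps, b +/- 2*pi*delta) -- strictly to the right of the
    vertical cone with boundary slopes +/- 2*pi -- never decreases the
    lift: the difference is delta + eps +/- delta*sin(2*pi*x) >= 0."""
    for sign in (+1.0, -1.0):
        a2, b2 = a + delta + eps, b + sign * 2 * math.pi * delta
        for i in range(n):
            x = i / n
            if F(x, a2, b2) < F(x, a, b) - 1e-12:
                return False
    return True

print(right_of_cone_dominates(0.3, 2.0, 0.1, 0.05))
```

Combined with the continuity and monotonicity of the rotation number of $F^+_{a,b}$ in $a$, this is exactly what confines ${\bf A}_{p\over q}^l$ to the cone.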
\subsection{Theorem A in the simplest case}
We recall here the proof of Theorem A in the case when
$I$ is the singleton
$\lbrace \omega\rbrace$ for some irrational
number $\omega$. We begin with a weak
form of a theorem by Denjoy
\cite{De}.
\proclaim Theorem 3.4.1 (\cite{De}).
For $F$ in ${\cal F}^2({\bf R}R)$,
if $\rho(F)= \omega $ for
some irrational number $\omega$, then the
circle map $f$ with lift
$F$ has a dense orbit.
The next result is a particular
case of a theorem obtained by Block and
Franke (see also \cite{CGT}) as a
consequence of the Denjoy theory:
\proclaim Theorem 3.4.2 (\cite{BF}). If $b>1$ and $\rho = \rho
(F_{a,b}^-)=\rho (F_{a,b}^+),$ then $\rho \in {\bf Q}Q$.
\noindent{\sl Proof: } We
first remark that there exist distinct
$C^2$ smooth lifts $F_0$ and $F_1$ such
that,
$\forall
x\,\in\,{\bf R}R,\,\,\,F^-(x)\,\leq\,F_0(x)\,\leq\,F_1(x)\,\leq\,F^+(x)$.
If the
claim were false, by Theorem 3.1.1-2,
$\rho=\rho(F_0)\,=\,\rho(F_1)\,=\,\rho(F^+)$ for some $\rho \notin
\,{\bf Q}Q$. But if $\rho \not\in {\bf Q}Q$,
Theorem 3.4.1 implies $F_0$ (and $F_1$) has a dense orbit. Hence the
claim
follows from Theorem 3.1.1-3. {\bf Q.E.D.}
To finish the analysis of the case when $\rho(F)= \omega\not\in {\bf Q}Q$, we
just have
to check that, as a consequence of Theorem 3.4.2, all
${\bf L}_{\omega}$ are contained in the region $b\leq 1$ described in \S~3.1 and
\S~3.3. In summary, we have:
\proclaim Lemma 3.4.3. For $\omega\not\in \,{\bf Q}Q$, $R_{\lbrace \omega\rbrace}$
is an L-curve
contained in the
region $b\leq 1$.
\subsection{Intersections of the boundaries of the ${\bf A}_{\omega}$'s:
existence}
For any $a$, the narrowest diagonal strip with sides parallel to and
centered on the main diagonal that contains the graph of $F_{a,b}$ can
be made arbitrarily wide by choosing $b$ large enough. Hence
\proclaim Lemma 3.5.1. For any $\omega\in{\bf R}R$, and any $a\in {\bf R}R$, $\omega$ is
contained in the interior of $I(F_{a,b})$ as soon as $b$ is large enough.
\proclaim Corollary 3.5.2.
If $\omega<\theta$, ${\bf A}_\omega ^r$ and ${\bf B}_\omega ^r$ intersect ${\bf
A}_\theta ^l$ and ${\bf B}_\theta^l$. Also, ${\bf B}_{p\over q}^r$ intersects
${\bf B}_{p\over q}^l$ for any rational ${p\over q}$.
\subsection{Intersections of the boundaries of the ${\bf A}_{\omega}$'s:
combinatorics.}
In order to prove Theorem A using
Theorem 3.3.1-4, we may restrict our attention to the intersection points
of the boundaries of $A_{p\over q}$ and $A_{p'\over q'}$. Before we begin the
analysis of these intersection points, let us recall that the Schwarzian
derivative of a map $g$ is defined as
$$Sg={g'''\over g'}-{3\over 2}\left({g''\over g'}\right)^2\,.$$
A direct computation then gives
\proclaim Lemma 3.6.1. For $b>1$ and $F'_{a,b}(x)\neq 0$, $SF_{a,b}(x)<0$.
This will be used to prove Lemma 3.6.2.
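For the standard family the derivatives entering $Sg$ are elementary: $F'_{a,b}(x)=1+b\cos(2\pi x)$, $F''_{a,b}(x)=-2\pi b\sin(2\pi x)$ and $F'''_{a,b}(x)=-4\pi^2 b\cos(2\pi x)$, none of which involve $a$. The Python sketch below spot-checks the sign claim of Lemma 3.6.1 on a grid away from the critical set; it is an illustration, not a proof.

```python
import math

TWO_PI = 2 * math.pi

def derivatives(x, b):
    """First three derivatives of F_{a,b}; they do not depend on a."""
    c, s = math.cos(TWO_PI * x), math.sin(TWO_PI * x)
    F1 = 1 + b * c
    F2 = -TWO_PI * b * s
    F3 = -(TWO_PI ** 2) * b * c
    return F1, F2, F3

def schwarzian(x, b):
    F1, F2, F3 = derivatives(x, b)
    return F3 / F1 - 1.5 * (F2 / F1) ** 2

# Sample points away from the critical set (where F' = 0), for b > 1.
b = 2.0
vals = [schwarzian(i / 997.0, b) for i in range(997)
        if abs(derivatives(i / 997.0, b)[0]) > 1e-6]
print(max(vals))   # negative: SF_{a,b} < 0 off the critical set
```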
Using a lift $O$ of any periodic orbit $o$ of $f_{A,b}$, we can
analyze the local behavior of $f_{A,b}$ near $o$ in terms of the derivatives of
$F_{a,b}$ at $q$ successive points of $O$, $x_0,x_1=
F_{a,b}(x_0),\ldots,x_{q-1}=F_{a,b}(x_{q-2})$. Define the {\em
multiplier} of $o$ as $m_o=F'_{a,b}(x_0)\cdot F'_{a,b}(x_1)\cdots
F'_{a,b}(x_{q-1})$. We call $o$
\noindent
- {\em attracting} if $|m_o|<1$,
\noindent
- {\em neutral} if $|m_o|=1$, and in particular {\em parabolic} if $m_o$ is a
root of unity,
\noindent
- {\em hyperbolic} if $|m_o|>1$.
Clearly $m_o$ only depends on $O$.
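To illustrate these notions concretely, the Python sketch below locates the fixed points of a standard map (lifted orbits of rotation number $0$, i.e.\ solutions of $F_{a,b}(x)=x$ in $[0,1)$) by bisection, and classifies each by its multiplier $F'_{a,b}(x)$. The parameter values are arbitrary and the sketch plays no role in the argument.

```python
import math

def F(x, a, b):
    return x + a + (b / (2 * math.pi)) * math.sin(2 * math.pi * x)

def Fprime(x, b):
    return 1 + b * math.cos(2 * math.pi * x)

def fixed_points(a, b, n=2000):
    """Solutions of F_{a,b}(x) = x in [0,1), located by sign changes of
    g(x) = F(x) - x on a grid, refined by bisection."""
    g = lambda x: F(x, a, b) - x
    roots = []
    for i in range(n):
        lo, hi = i / n, (i + 1) / n
        if g(lo) == 0:
            roots.append(lo)
            continue
        if g(lo) * g(hi) < 0:
            for _ in range(60):        # bisection refinement
                mid = (lo + hi) / 2
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append((lo + hi) / 2)
    return roots

def classify(m):
    if abs(m) < 1:
        return "attracting"
    return "neutral" if abs(m) == 1 else "hyperbolic"

for x in fixed_points(0.1, 2.0):
    m = Fprime(x, 2.0)   # multiplier of a period-one orbit
    print(round(x, 4), round(m, 3), classify(m))
```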
The periodic orbits of the circle map $f_{A,b}$, which are also orbits of
some homeomorphism of the circle, and lifts of these orbits will play an
important role in our discussion. If a point of such a periodic orbit $o$ of
$f_{A,b}$ has a lift with rotation number ${p\over q}$ under $F_{a,b}$, $o$ has
period $q$ and lifts to $p$ distinct orbits of $F_{a,b}$.
\proclaim Lemma 3.6.2. Suppose ${p\over q}\in I(F_{a,b})$. Then $F_{a,b}$ has
an orbit $O$ such that
\begin{enumerate}
\item
$O$ projects to a periodic orbit $o$ of $f_{A,b}$,
\item
there is a monotone $G\in {\cal F}^0({\bf R}R)$ such that $O$ is an orbit of $G$,
\item
no point of $O$ is in an interval where $F_{a,b}$ is decreasing.
\item
If $m_o\geq 1$, $O$ is uniquely determined up to integer translation. If we
relax the multiplier condition, there are, up to integer
translation, at most two distinct orbits. When there are two
orbits distinct
under integer translation, denote them by $O$ and $O'$. Then
\begin{enumerate}
\item
$O$ and $O'$ bound intervals that are lifts of $q$ pairwise disjoint arcs
on which $f_{A,b}$ is orientation preserving,
\item
the interiors of these intervals are in the immediate basin of the attracting
periodic orbit which lifts to $O'$,
\item
at least one critical point is in the immediate basin of the attracting
orbit
which lifts to $O'$.
\end{enumerate}
\end{enumerate}
\noindent{\sl Proof: } For properties 1 to 3, the
existence follows from Theorem 3.2.1.
Uniqueness under the condition $m_o\geq 1$ follows from the well known fact
(proved by a direct computation) that the absolute value of the (usual)
derivative of a function on the real line, whose Schwarzian derivative is
negative off the critical set, has no local non-zero minimum. That there
are at most two orbits distinct under integer translation when the
multiplier condition is relaxed follows in a similar fashion from the
negative Schwarzian derivative property.
Property 4 (a) comes from the fact that we only use the restriction of
$F_{a,b}$ to the intervals where it is increasing; property 4 (b) is
immediate (draw a graph); and property 4 (c) is a classical result in
holomorphic dynamics (see the Remark in \S 4.1). {\bf Q.E.D.}
Let
$${\bf o}_{P\over Q}=\lbrace {\bf p}_0,\,{\bf p}_1,\dots,
{\bf p}_{Q-1}\rbrace\,\,\,{\rm and}\,\,\, {\bf o'}_{P\over Q}=
\lbrace{\bf p}'_0,\,{\bf p}'_1,\dots, {\bf p}'_{Q-1}\rbrace$$
be the projections of
${\bf O}_{p\over q}$, and ${\bf O}'_{p\over q}$ respectively, where
$f_{A,b}({\bf p}_j)={\bf p}_{(j+1)_Q}$, $f_{A,b}({\bf p}'_j)={\bf
p}'_{(j+1)_Q}$, and ${P\over Q}=({p\over q})_1$.
Assume that $b>1$. It follows from the
properties of ${\bf o}_{P\over Q}$ that
the two critical points ${\bf
c}$ and ${\bf k}$ of $f_{A,b}$ are in an arc
$\Gamma$ bounded by two
successive points ${\bf p}_j$ and ${\bf p}_k$ of ${\bf
o}_{P\over
Q}$. When $Q=1$, ${\bf p}_j$ and ${\bf p}_k$ coincide. When
${\bf
o'}_{P\over Q}$ exists, they lie in an arc $\Gamma '$ bounded by two
successive
points ${\bf p}'_j$ and ${\bf p}'_k$ of ${\bf o'}_{P\over
Q}$. Let $P_j$ be a lift
of ${\bf p}_j$, $P_k$ be the lift of ${\bf
p}_k$ immediately to the right of $P_j$,
and let $C$ and $K$ be the
lifts of ${\bf c}$ and ${\bf k}$ in $[P_j,\,P_k]$. Let
$P'_j$ be the
lift of ${\bf p'}_j$ immediately to the left of $C$, and $P'_k$
be
the lift of ${\bf p}'_k$ immediately to the right of $P'_j$ (and
of $K$). Let
$F_{a,b}$ be the lift of $f_{A,b}$ such that $P_j$ and
$P'_j$ have rotation number
${p\over q}$ under $F_{a,b}$,
i.e.,
$\underline{\rho}_F(P'_j)=\overline{\rho}_F(P'_j)={p\over q}$.
We then have the
following result whose first part follows easily
from Theorem 3.2.1, and whose
second part is a standard bifurcation
theory result.
\proclaim Lemma 3.6.3
{\rm (\cite{Bo,MaT})}. \\
(i) With the above notation,
$$(a,b)\in {\bf B}_{p\over
q}^l\setminus {\bf A}_{p\over
q}^l\Longleftrightarrow
F_{a,b}(K)=F_{a,b}(P_j)\,,$$
and
$$(a,b)\in {\bf B}_{p\over q}^r\setminus {\bf A}_{p\over
q}^r\Longleftrightarrow
F_{a,b}(C)=F_{a,b}(P_k)\,.$$
(ii) Furthermore,
for $b\not= 0$,
$$(a,b)\in {\bf A}_{p\over q}^l\cap {\bf A}_{p\over
q}^r\Longleftrightarrow
{\bf O}_{p\over
q}\,\mbox{is parabolic and }\,{\bf O}'_{p\over
q}\,\mbox{ does not exist.}$$
From the picture which emerges from the discussion so far, Theorem A
would follow from the uniqueness of the intersections described in
Corollary 3.5.2. The general case then follows by Theorem 3.3.1-3. We
shall prove this uniqueness property in \S~4 using the fact that the labels
of the curves that intersect determine the topological conjugacy classes of
the maps at the intersections.
\proclaim Lemma 3.6.4.
At any crossing of two boundary curves ${\bf C}_{p\over
q}^l$
and ${\bf D}_{p'\over q'}^r$, (where ${\bf C}$ and ${\bf D}$ stand for either
${\bf
A}$ or ${\bf B}$), the way the orbits ${\bf O}_{p\over q}$, ${\bf O}_{p'\over
q'}$ and the critical points intertwine is
determined by the pair $({p\over
q},{p'\over q'})$. Furthermore, the
itineraries of ${\bf O}_{p\over q}$ and ${\bf
O}_{p'\over q'}$ are
determined by the pair $({p\over q},{p'\over q'})$.
\noindent{\sl Proof: } Take any two
standard lifts $F_{a_0,b_0}$ and $F_{a_1,b_1}$, which both possess the pair
of orbits $({\bf O}_{p\over q},{\bf O}_{p'\over q'})$. One can find a
piecewise linear lift $G$ as in Figure~5 that contains all periodic
itineraries of both $F_{a_0,b_0}$ and $F_{a_1,b_1}$. Choosing $G$ to have
all its slopes greater than one in absolute value, it is easy to check that
for $\omega \in \lbrace {p\over{q}},{ p'\over{q'}}\rbrace$ it possesses only one
orbit which
- Is invariant under a
non-decreasing lift with rotation number $\omega$,
- Has no point in the segments where $G$ is decreasing, and
- Is a lift of a periodic orbit of the circle map $g$.
By standard kneading theory arguments we get (\cite{AM,MT}):
- The two orbits (for $\omega = {p\over q}$ and $\omega = {p'\over q'}$)
obtained this way and the turning points of $G$ are intertwined in the same
way as the corresponding orbits and critical points of $F_{a_0,b_0}$ and
$F_{a_1,b_1}$,
- The kneading information about these orbits can be
read from $G$ as
well.
{\bf Q.E.D.}
\proclaim Lemma 3.6.5. The maps corresponding to
all intersections of the two boundary curves ${\bf C}_{p\over q}^l$ and
${\bf D}_{p'\over q'}^r$, (where again ${\bf C}$ and ${\bf D}$ stand for
either ${\bf A}$ or ${\bf B})$ are topologically conjugate.
\noindent{\sl Proof: } This statement is a standard result of the topological
classification of maps with negative Schwarzian derivative,
and
we refer to (\cite{MS} Chap. 2.3) for a more general discussion;
we give only a sketch of the arguments.
Using Lemmas 3.6.3 and 3.6.4, we know that all such maps have the same
kneading data. Because these are smooth maps with isolated critical
points, it follows that they have the same sets of itineraries. The fact
that maps with negative Schwarzian derivative off the critical set have no
homtervals (intervals of positive length, not in the basin of a stable
periodic orbit, but where all iterates of the map are homeomorphisms
\cite{Ly,MMS}) yields
the conjugacy. Points with similar itineraries are paired by
the
conjugacy, except for points belonging to the basins of stable or
semi-stable
periodic orbits. For these points, the connected
components of the basins, on
each side of the periodic points and
their preimages, can be paired in any way that
respects the orbit
structure. Hence the conjugacy is not necessarily unique.
{\bf Q.E.D.}
\section{Proof of Theorem A, Part II: Complex analytic part}
To complete the proof of Theorem A we must show that the boundary curves
described in \S~3.5 have unique intersection points; that is, that the
conjugacy classes in Lemma~3.6.5 correspond to a single map. This is the
content of Theorem D. Before we can prove this theorem, however, we need to
introduce some techniques from complex analysis. References for the basic
theory of complex dynamics are
\cite{Kn7,Milnors-notes}. References for
Teichm\"uller theory are \cite{Gardiner,Lehto} and references for its
application to dynamical systems are
\cite{Epstein,Kn1,McM6,Sullivan}.
\subsection{Basic theory of complex dynamics}
We define a point to be {\it normal} for a family of holomorphic functions
if the functions in the family are locally uniformly bounded in a
neighborhood of the point. The set of normal points is open by definition.
We are interested in the normal sets of families generated by iterating a
single holomorphic self-map of the punctured
plane ${\bf C}^*$.
A {\em singular value} for a holomorphic map is either a {\em critical
value} (the image of a critical point) or an {\em asymptotic value} (a
limiting value of the image of a path tending to infinity). A map with
only finitely many singular values is called a {\em finite type map}. The
points $0$ and $\infty$ are asymptotic values for holomorphic self-maps of
${\bf C}^*$ but it will be more convenient to exclude them from the singular
value set.
The non-normal set divides the normal set into connected components. The
normal set is forward and backward invariant and its components are mapped
to one another. If a finite type map has a periodic cycle
$o=\{z_0,z_1=f(z_0), \ldots, z_{q-1}=f(z_{q-2})\}$ we define the multiplier
in the same way we do for real maps, that is, $m_o=f'(z_{q-2})\cdot
f'(z_{q-2}) \cdots f'(z_{0})$.
The following theorem classifies the behavior of the components of the
normal set.
\proclaim Classification Theorem. Given a finite type
holomorphic self-map of ${\bf C}^*$,
the orbits of the components of the normal
set are characterized as follows:
\begin{itemize}
\item they fall onto
a periodic cycle of components containing a periodic cycle with multiplier
$|m_o|< 1$ (attracting domain if $|m_o| \neq 0$
or super-attracting
domain if $|m_o| = 0$);
\item they fall onto a periodic cycle
with
multiplier a root of unity (parabolic domain);
\item orbits
eventually
fall into a domain on which an iterate of the map is
holomorphically conjugate
to an irrational rotation (rotation domain).
\end{itemize}
\noindent {\bf Remark: } The classification of periodic normal behavior was
done by Fatou \cite{Fatou1,Fatou2}, Siegel \cite{Siegel} and Herman
\cite{Herman}. The eventual periodicity of all normal components
(often called the Non-Wandering Theorem) was proved for rational maps by Sullivan
\cite{Sullivan}. For finite type holomorphic self-maps of
${\bf C}^*$ the Non-Wandering Theorem was proved in \cite{Kn1}.
Although arbitrary holomorphic self-maps of ${\bf C}^*$ may have normal
components whose orbits fall onto a periodic cycle of domains in which
points are attracted to zero or infinity (essentially parabolic domains),
it was proved in
\cite{Kotus} that finite type holomorphic self-maps of ${\bf C}^*$ have no
essentially parabolic domains.
\noindent {\bf Remark: } Each cycle of periodic components uses a
singular value in the following sense: cycles of super-attracting periodic
normal domains contain singular values by definition, cycles of attracting and
parabolic domains each contain the infinite forward orbit of a singular
value, and in fact one of the domains in the cycle contains the singular
point; finally, the boundary of any rotation domain is contained in the
closure of the forward orbit of some singular value. Proofs of these facts
go back to Fatou. Among these facts is the statement in Lemma 3.6.2-4(c).
\proclaim Definition.
The closure of the forward
orbits of the
singular values is called the {\em post-singular set} and is denoted by
$PS(f)$.
We shall be interested
in a special subclass of finite type maps.
\proclaim Definition. A finite type
map is {\em geometrically finite} if
every infinite forward orbit of a singular value tends to a periodic cycle.
It may happen that no singular value has an infinite forward orbit; such
orbits are periodic or pre-periodic. These maps are trivially geometrically
finite.
Standard arguments (see e.g.
\cite{Epstein,Kn7,Milnors-notes}) show that for
geometrically finite maps
- There are no rotation domains, and
- Every infinite forward singular orbit
lies in the normal set and is attracted to a (necessarily attracting or
parabolic) periodic cycle.
\subsection{Combinatorial Equivalence}
\proclaim Definition.
A {\em combinatorial equivalence} of finite type maps is a pair of
homeomorphisms $(\phi,\psi)$ such that $$\phi \circ f_0 = f_1 \circ \psi$$
and such that $\phi$ and $\psi$ are isotopic rel $(PS(f_0))$.
\proclaim Definition. A homeomorphism $\phi:\hat{\bf C} \rightarrow \hat{\bf C}$ is called
{\em $K$-quasiconformal}, or {\em $K$-QC} for short, if there exists a $K \geq
1$ such that the field of infinitesimally small circles is mapped almost
everywhere onto a field of infinitesimally small ellipses of eccentricity
bounded by $k={K-1\over K+1}$. A map that is $K$-QC for some $K$ is called {\em
quasiconformal}.
\proclaim Definition. A
combinatorial equivalence $(\phi,\psi)$ is {\em K-QC} (or just {\em QC} if
we don't care about the constant) if $\phi$ and $\psi$ are K-quasiconformal;
it is {\em strong} if $\phi$ and $\psi$ {\em agree} in a neighborhood of
each super-attracting, attracting and parabolic cycle (and hence define a
conjugacy in these neighborhoods).
Next we
prove,
\proclaim Lemma 4.2.1. A strong combinatorial equivalence of
holomorphic geometrically finite maps can be isotoped (rel the post
singular set) through strong combinatorial equivalences to a strong QC
combinatorial equivalence.
\noindent{\sl Proof: } Let $(\phi,\psi)$ be the given
strong combinatorial equivalence. Let $N$ be the union of the neighborhoods
of the super-attracting, attracting and parabolic periodic cycles of $f_0$
on which $\phi$ and $\psi$ agree.
The first step is to isotop $\phi|_N=\psi|_N$ in $N$ rel $(PS(f_0) \cap N) \cup
\partial N$ to a quasiconformal homeomorphism that we again call $\phi$. To
do this, we use the
canonical local picture associated to each
periodic cycle and determined by the multiplier of the cycle. For a more
complete description of the local behavior see e.g. \cite{Milnors-notes}.
Case 1. Suppose first that $p$ is an attracting periodic point of $f_0$
with multiplier $\lambda$ and $N_p$ is the component of $N$ containing $p$.
Then there is an integer $k$ such that $f_0^k$ is the first return map for
$N_p$ and a conformal homeomorphism $h\colon N_p
\rightarrow \Delta$ where $\Delta$ is the unit disk, such that $h(0)=0$ and
$h \circ f_0^k(z) = \lambda h(z)$.
We can use the first return map $f_0^k$ to identify points in $N_p -
\lbrace p \rbrace$ and obtain a torus of modulus $\lambda$. The projection
$N_p - \{p\} \rightarrow N_p - \{p\}/f_0^k$ is a branched covering map of
the torus and the conjugation $\phi$ projects to this torus. Isotopy
classes rel the finitely many marked points for a torus are known to
contain K-QC maps for some $K > 1$, so the projection of $\phi$ may be
isotoped rel the branch points to a K-QC map. Since the homotopy lifting
property holds (see e.g. \cite{GK1}), and the projection is holomorphic,
the K-QC map lifts to a K-QC map on $N_p - \{p\}$ and may be extended to
$N_p \cup \partial N_p$ so that the lift is isotopic to $\phi$ rel $(PS(f_0)
\cap N_p) \cup \partial N_p$. (If, as in our
application to the standard family, there is a single branch point, and the
tori have the same modulus, the isotopy class contains a conformal map but
we do not use this fact.)
Case 2. If $p$ is super-attracting the first return map is holomorphically
conjugate in $N_p$ to a map of the form $z \mapsto z^k$ on $\Delta$, for
some $k\geq 2$. If $p$ attracts no other singular points, we may push
$\phi$ to $\Delta$, isotop the map on $\Delta$ to a conformal map keeping
the boundary values fixed, and pull the isotoped map back to a conformal
map on $N_p$ rel $\partial N_p$. If $p$ does attract singular values the
argument has to be modified somewhat to take these orbits into account and
the isotopy will be only quasiconformal. In our application we have two
singular values but we assume each is attracted to a {\em distinct}
periodic orbit. Hence we omit the details for the case where a
superattractive $p$ attracts a second singular value and refer the
interested reader to \cite{SulQCIII}.
Case 3. It remains to describe the local behavior when $p$ belongs to a
parabolic cycle. The picture in this case is known as the Leau-Fatou
flower. We make two simplifying assumptions: first that $p$ is a
non-degenerate parabolic fixed point, that is, $f_0'(p)=1$ and $f_0''(p)
\neq 0$, and second that $p$ attracts only one singular value. Full details
of the Leau-Fatou flower in the context of rational maps may be found in
\cite{Milnors-notes}, \S7. The details for geometrically finite maps may be
found in \cite{Epstein}. A small neighborhood $N_p$ of $p$ is covered by a
pair of overlapping attracting and repelling {\em petals}, $U$ and
$U^{\prime}$, such that $f_0(U) \subset U$ and $f_0(U^{\prime}) \supset
U^{\prime}$. If we conjugate $f_0$ by $w=-1/(z-p)$, $p$ is mapped to
infinity and the petals $U$ and $U^{\prime}$ are transformed into the two
overlapping regions $D_R$ and $D_L$ shown in figure 6. The conjugated map
in a neighborhood of infinity takes the form $$F
\colon w \mapsto w + 1 + o(1).$$
We see therefore that $D_L$ contains a left half plane $\{{\bf R}e w < - M \}$
for some $M>0$, in which $F$ is holomorphically conjugate to right
translation by $1$; similarly, $D_R$ contains a right half plane $\{{\bf R}e w >
M' \}$ for some $M'>0$ in which $F^{-1}$ is holomorphically conjugate to
left translation by $1$.
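For completeness we record the standard computation behind this normal form, which is not spelled out above. Writing the non-degenerate parabolic germ as $f_0(z) = z + a(z-p)^2 + O((z-p)^3)$ with $a = f_0''(p)/2 \neq 0$, the rescaled coordinate $w = -1/(a(z-p))$ normalizes the translation constant to $1$:
$$a\bigl(f_0(z)-p\bigr) = a(z-p)\bigl(1 + a(z-p) + O((z-p)^2)\bigr) = -\frac{1}{w}\Bigl(1 - \frac{1}{w} + O\Bigl(\frac{1}{w^2}\Bigr)\Bigr),$$
so that
$$F(w) = -\frac{1}{a\bigl(f_0(z)-p\bigr)} = w\Bigl(1 - \frac{1}{w} + O\Bigl(\frac{1}{w^2}\Bigr)\Bigr)^{-1} = w + 1 + O\Bigl(\frac{1}{w}\Bigr).$$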
The orbit of the singular value in $U$ is transformed into the
attracting region $D_R$, and since the map acts almost as a translation the
imaginary parts of points in the orbit are bounded. The repelling petal
$U'$ is transformed into the domain $D_L$. Hence $D_L$ contains a piece
of the non-normal set and so is not invariant under the conjugated map.
For each map $f_i, i=1,2$ we form the {\em Ecalle cylinder} $E_R$ by
identifying orbits in $D_R$ under the map $F$. The singular orbit projects
to a single marked point. Similarly we form $E_L$ by identifying orbits in
$D_L$ under the map $F^{-1}$. The conjugacy $\phi$ projects to the
cylinders and we can find, for some $K>1$, a K-QC map in the isotopy class
of this projected $\phi$. Now we lift this isotoped $\phi$ to the exterior
of a large rectangle in the $w$-plane.
We thus have a quasiconformal conjugacy in a neighborhood of the parabolic
point (perhaps smaller than $N_p$). To obtain the strong K-QC equivalence
we must extend the conjugacy to a full neighborhood of the full
post-singular set. Since $\phi$ is K-QC on $N$, where by definition $\phi
= f_1 \circ \phi \circ f_0^{-1}$, we may lift it as a K-QC map to
$f_0^{-1}(N)$. To extend this lift quasiconformally to the closures of
these neighborhoods we need to know that any
intersections of $\partial N$ and $\partial f_0^{-1}(N)$ are transverse. We
can assure this by modifying our original choice of $N$ if necessary, and
using the normal form for parabolic points again.
Since $f_0$ is geometrically finite, we may lift a finite number of times
to obtain a K-QC conjugacy on a neighborhood $N'$ of the full post-singular
set which is isotopic (rel $PS(f_0)$) to the original combinatorial
equivalence on $N'$ and agrees with it in the complement of $N'$.
To complete the proof, we isotop $\phi$ in the complement of $N'$ to any
globally K-QC map and set $\psi = f_1^{-1} \circ \phi
\circ f_0$ where we choose the branch of the inverse to preserve the isotopy.
Note that these branches are well defined since there are no singular values in
this region.
{\bf Q.E.D.}
\subsection{Application to the standard family}
The circle maps $f_{A,b}$ have a natural extension to ${\bf C}star$. To see
this note that
the family of lifts $F_{a,b}$ extends to
${\bf C}C$, by the formula
$$F_{a,b}:z \mapsto z + a + (b/2\pi)\sin(2\pi
z).$$ Using the projection of ${\bf C}C$ to ${\bf C}star$
given by the exponential map, we obtain
holomorphic self-maps of ${\bf C}star$ that are holomorphic
extensions of the family $f_{A,b}$. For readability, we
keep the same notation. These maps
have exactly two
critical values and no asymptotic values, so they are of finite type.
\subsection{Extending real conjugacies}
The following lemma appears in various guises in
the literature. To prove a
version suited to our needs we require
\proclaim Definition. Let $I$ be an open interval in ${\bf R}R$ or ${\bf T}T$. A
homeomorphism $\phi:I \rightarrow I$ is called {\em K-quasisymmetric}, or
{\em K-QS}, for a given $K > 1$, if for every equally spaced triple $(a,b,c)$
of points in $I$, with $a < b < c$ and $b-a = c-b$, $\phi$ satisfies $$ \frac{1}{K} <
\frac{\phi(c)-\phi(b)}{\phi(b) - \phi(a)} < K. $$ The homeomorphism $\psi:
{\bf T}T \rightarrow {\bf T}T$ is K-quasisymmetric if its restriction to every
subinterval is K-quasisymmetric.
\noindent {\bf Remark: } The restriction of a K-QC homeomorphism of ${\bf C}star$ is
K-QS on ${\bf T}T$ and any K-QS homeomorphism of ${\bf T}T$
has a (not necessarily unique) K-QC extension to ${\bf C}star$ (see
\cite{Gardiner,Lehto}).
\proclaim Lemma 4.3.1.
Let $g_0,g_1$ be topologically conjugate maps of ${\bf T}T$ in the family
$f_{A,b}$ whose extensions
$f_0,f_1$ to ${\bf C}star$ have the property that their post-singular
orbits remain in ${\bf T}T$. Then there is a strong K-QC combinatorial
equivalence $(\phi,\psi)$ for $f_0,f_1$.
\noindent{\sl Proof: }
We need only show how to use the given real conjugacy $\Phi$ for
$g_0,g_1$ to obtain a strong combinatorial equivalence for $f_0, f_1$
because we may then apply Lemma 4.2.1 to complete the proof.
The first step is to replace $\Phi$ by a K-QS homeomorphism which
agrees with $\Phi$ on the closed post-singular set $PS(g_0)=PS(f_0)$. We
can do this since $PS(g_0)$ consists of isolated points plus points
accumulating at attracting or parabolic cycles. The attracting and
parabolic cycles are distinguishable by their local topological behavior.
Near each cycle we use the local normal form to replace $\Phi$ by a
K-QS homeomorphism for some $K$; we then use the circle map $g_0$ to pull this
K-QS homeomorphism back to the closures of the basins of the
cycles in ${\bf T}T$; finally, we extend by continuity to ${\bf T}T$. Since $g_0$ is the
restriction of a holomorphic map, the new map, which we again call
$\Phi$, is K-quasisymmetric.
The second step is to extend the K-QS map $\Phi$ to a K-QC
self-map $\phi$ of ${\bf C}star$. For each attracting or parabolic cycle
$P$, let $N_P$ be a neighborhood of $P$ in ${\bf C}star$ with smooth boundary.
Using the local normal form again, we extend the K-QS map $\Phi$ to
$N_P$ so that it is K-QC. Extending this way for all the cycles
defines a germ $\phi$ for a K-QC conjugacy between $f_0$ and $f_1$. Now we
extend $\phi$ arbitrarily as a K-QC homeomorphism of ${\bf C}star$.
The final step is to define a lift $\psi$ of $\phi$ so that the pair
$(\phi,\psi)$ are isotopic rel the post-singular sets and are the desired
strong combinatorial equivalence. Denote the critical value set of the map
$f_i$ by $S_i$, $i=0,1$; each set consists of two points. The maps $f_i$
are covering maps of ${\bf C}star - f_i^{-1}(S_i)$ onto ${\bf C}star - S_i$. Extend
these covering maps to fix the ``ends'' zero and infinity of ${\bf C}star$.
Because the maps $f_0,f_1$ are in the same family, that is, given by a
formula of the form $\alpha
\zeta \exp \beta(\zeta - 1/ \zeta)$ for constants $\alpha$ and $\beta$,
and variable $\zeta \in {\bf C}star$, they are built up from a sequence of
elementary maps whose lifting properties are known. The lift $\psi =
f_1^{-1} \circ
\phi \circ f_0$ may therefore be defined uniquely so that it
agrees with $\phi$ on any and hence all the points of $PS(f_0)$. {\bf Q.E.D.}
\noindent {\bf Remark: } If the post-singular set is actually finite, the situation
is much simpler. Every singular point is superattracting or else its orbit
eventually lands on a repelling periodic cycle. We can choose an arbitrary
topological extension to ${\bf C}star$ as the homeomorphism $\phi$ and define
$\psi$ by the formula $\psi = f_1^{-1} \circ \phi \circ f_0$
where again the branch of the inverse is chosen so that $(\phi,\psi)$ are
isotopic rel the post-singular sets. By the easy parts of Lemma 4.2.1
there are automatically quasiconformal homeomorphisms in this isotopy
class.
\subsection{Statement of the Rigidity Theorem}
\label{sec:statement}
\proclaim Theorem D (Rigidity). Suppose that the functions $f_0$
and $f_1$ both correspond to intersection points of boundary curves $C^l_{p\over q}$ and
$D^r_{p' \over q'}$, where $C$ and $D$ stand for either $A$ or $B$ as in Lemma
3.6.5. Then $f_0=f_1$.
From Lemma 3.6.3 we see that at the intersections of the boundary curves
one of the following holds:
$\alpha$ - Both singular orbits are attracted by distinct parabolic cycles,
$\beta$ - Both singular orbits are preperiodic, or
$\gamma$ - One singular orbit is preperiodic and the other is attracted by
a parabolic cycle.
It follows that the extensions of standard maps corresponding to these
intersection points are geometrically finite.
\subsection{Basic Teichm\"uller theory}
To prove the Rigidity Theorem D we
follow the version of the proof of a rigidity result for rational maps carried
out by McMullen in \cite{McM6}. In particular, we shall use some standard
Teichm\"uller theory. Below we state those facts we require in a
form suited to our needs. A good basic reference for this material is
\cite{Gardiner}, Chap. 6. Thurston and
Sullivan were the first to apply these techniques in the context of
rigidity in holomorphic dynamics.
Let $X$ be a compact Riemann surface and let $C$ be a closed subset of $X$
containing at most countably many points.
Then
the {\em Teichm\"uller space} of $X$ with boundary $C$
is the set of isotopy classes of quasiconformal homeomorphisms of $X$ rel
$C$. We denote it by ${\bf T}TT(X,C)$. We shall be interested in ${\bf T}TT(X,C)$
where $X ={\bf C}star$ and $C= PS(f)$ for $f$ in the
standard family.
The Teichm\"uller space is finite dimensional if $X$ has finite genus and
$C$ is a finite point set:
in our case, if $PS(f)$ is finite.
If $\phi$ is a quasiconformal homeomorphism of $X$, its {\em Beltrami
differential}
is $\mu(z) =
\phi_{\bar{z}}/\phi_{z}$, where the derivatives are taken in the
generalized sense. The infinitesimal ellipse field is determined by
$\mu(z)$: the eccentricity of the ellipse at the point $z$ is $|\mu(z)|$
and the major axis has argument $\arg \mu(z)$.
The {\em maximal dilatation} of $\phi$ is
$$K_{\phi}(X)= \sup_z \frac{1 + |\mu(z)|}{1 - |\mu(z)|} =
\frac{1 + ||\mu ||_{\infty}}{1 - ||\mu ||_{\infty}} < \infty. $$
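As a simple worked example (not needed in the sequel), consider the affine stretch $\phi(x+iy) = Kx + iy$ with $K > 1$. Then
$$\phi_z = \tfrac{1}{2}(\phi_x - i\phi_y) = \tfrac{K+1}{2}, \qquad \phi_{\bar z} = \tfrac{1}{2}(\phi_x + i\phi_y) = \tfrac{K-1}{2}, \qquad \mu = \frac{K-1}{K+1},$$
and $K_{\phi} = (1+|\mu|)/(1-|\mu|) = K$: the maximal dilatation recovers the stretch factor.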
Given an isotopy class of quasiconformal homeomorphisms of $X$ rel $C$
one can
ask if there is a map that is {\em extremal}; that is, its maximal
dilatation is minimal over all maps in its class.
A quasiconformal map is called a {\em Teichm\"uller map} if it is locally an
affine stretch: that is, its Beltrami differential has the form $\mu = t
{\bar q}/|q|$ where $q$ is a holomorphic quadratic differential such that
$||q||=\int_X |q| <
\infty$ and $|t| < 1$.
\proclaim Teichm\"uller's Theorem. Let ${\bf T}TT(X,C)$ be a finite dimensional
Teichm\"uller space. Then every isotopy class contains an extremal map.
Moreover, this extremal is unique and is a Teichm\"uller map. If ${\bf T}TT(X,C)$
is not finite dimensional, the extremal map exists but it is not
necessarily unique nor is any such extremal a Teichm\"uller map.
Since the post-singular set is not always finite we need to consider
infinite dimensional Teichm\"uller spaces. To this end, we introduce the
concept of boundary dilatation. Let $S = X-C$ and let $R$ be any compact
subset of $S$. Set $K_{\phi}^0(S-R) = \inf_{\psi \sim
\phi}(K_{\psi}(S-R))$. Define the {\it boundary dilatation} $H(\phi)$ as
the direct limit of the numbers $K_{\phi}^0(S-R)$ as $R$ increases to $S$.
\proclaim Strebel's Frame Mapping Condition. Let $\phi$ be a quasiconformal
homeomorphism of $S$ to another surface and suppose $H(\phi) <
K^0_{\phi}(S)$. Then the isotopy class of $\phi$ (rel $C$) contains a
unique extremal map and this map is a Teichm\"uller map.
\subsection{Proof of Theorem D}
It suffices to prove that if $f_0$ and $f_1$ are topologically conjugate
maps in the standard family whose singular orbits satisfy one of the
conditions $\alpha - \gamma$ of section
\ref{sec:statement} then they are equal.
Since their extensions to ${\bf C}star$ are geometrically finite, by Lemma
4.3.1 there is a strong K-QC combinatorial equivalence $(\phi,\psi)$
between them.
Suppose first that both singular orbits are preperiodic. Then the
post-singular set is finite and any K-QC combinatorial equivalence is
trivially strong. Moreover, ${\bf T}TT({\bf C}star,PS(f_0))$ is finite dimensional
and by Teichm\"uller's theorem, there is a unique extremal map in every
isotopy class; denote the extremal map in the isotopy class of $\phi$ and
$\psi$ by $\hat\phi$. Now we replace $\phi$ by $\hat\phi$ as we did in the
last step of the proof of Lemma 4.2.1 and set $\hat\psi = f_1^{-1} \circ
\hat\phi
\circ f_0$, choosing the branch that preserves the isotopy. Since $f_0$ and
$f_1$ are holomorphic the infinitesimal ellipse fields determined by
$\hat\phi$ and $\hat\psi$ are the same and $\hat\psi$ is extremal. By
uniqueness $\hat\phi=\hat\psi$; denote the extremal quasiconformal
conjugacy $\hat\phi$ by $\phi$ again.
In the other two cases, there is at least one singular orbit attracted by a
parabolic cycle. It is important to note that no parabolic cycle attracts
more than one singular orbit. The quasiconformal homeomorphisms $(\phi,
\psi)$ we obtained in the proof of Lemma 4.2.1 agree in a neighborhood $N$ of
the post-singular set. We need to modify this $\phi$ in a neighborhood of
a parabolic point $p$ containing the forward orbit of one singular value so
that it satisfies the Frame Mapping Condition.
As above we conjugate $f_0$ to $F(w)=w + 1 + o(1)$ by sending $p$ to
infinity. We follow the argument in \cite{Epstein}, \S 4.2, lemma 78. An
application of the Schwarz lemma shows that $|F'(w)|$ is uniformly close to
$1$ in a neighborhood of infinity. This means that for $\eta$ large, the
curve $F(t \pm i \eta)$, $t \in {\bf R}R$, stays very close
to horizontal. Hence, given any $\epsilon > 0$, we can find $M$ such that
for $|\eta| > M$, $\phi$ is isotopic to a map (again called $\phi$) with
dilatation less than $1 +
\epsilon$. Next, using the images of the endpoints of vertical lines inside
the closed large rectangle to control the images of these lines, and noting
that we have arranged it so that there are no points of $PS(f)$ inside the
large rectangle, we can isotop $\phi$ in the part of $D_L
\cup D_R$ inside the rectangle so that it is
quasiconformal.
This new map together with its lift in the same isotopy class gives us a
combinatorial equivalence $(\phi,\psi)$ which is no longer strong but is
still K-QC. This new $\phi$ satisfies Strebel's Frame Mapping Condition
for $\epsilon$ small enough. Therefore, just as in the preperiodic case, we
may replace both maps in the equivalence with the unique extremal
Teichm\"uller map in their isotopy class and obtain a quasiconformal
conjugation, denoted again by $\phi$.
Finally, we complete the proof of the theorem by showing that $\phi$ is
conformal and hence a homothety.
If $\phi$ is not conformal, its Beltrami differential determines a
quadratic differential $q$ on $S $. Since $\phi$ is a conjugacy, and the
maps $f_0$ and $f_1$ are holomorphic, the infinitesimal ellipse fields
determined by the Beltrami differential $\mu$ and the
Beltrami differential $f_0^* \mu$ of $f_1^{-1} \circ\, \phi\, \circ f_0 = \phi$
are the same; that is,
$f_0^* \mu = \mu$.
Since $f^*_0 \mu$ is again the Beltrami differential of a Teichm\"uller
map, it has the form $ f_0^*\mu
= t \overline{f^*_0 q}/|f_0^* q|$ where $f_0^*q$ is the pull-back quadratic
differential. Now on the one hand, the norm of the pullback differential
$||f_0^* q||$ is given by $||q||$ times the degree of $f_0$; since $f_0$
has infinite degree, $||f_0^* q||$
is unbounded. On the other hand, $f^*_0
\mu = t \overline{f^*_0 q}/|f^*_0 q| $, so that $$\bar{f^*_0 q}/|f^*_0 q|=
\bar q/|q|.$$ If $h=f^*_0 q /q$, then $\bar h = |h|$ and $h$ is real valued.
But $h$ is meromorphic on $S$ and any meromorphic function taking only real
values must be constant. Thus $f^*_0 q = c q$
for some $c>0$. Since $||q||$ is bounded, we have a contradiction and
$\phi$ is conformal.
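The norm computation used in the preceding paragraph can be made explicit; the following is a sketch in which branch points, a set of measure zero, are ignored. For a holomorphic covering map $f$ of finite degree $d$, the change of variables $w = f(z)$ gives
$$||f^* q|| = \int_X |q(f(z))|\,|f'(z)|^2\,|dz|^2 = d \int_X |q(w)|\,|dw|^2 = d\,||q||,$$
since each point of the base has exactly $d$ preimages. For the maps of the standard family, which have infinite degree, every fiber is infinite and the same computation shows that $||f_0^* q||$ diverges whenever $||q|| \neq 0$.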
If we conjugate $f_{A,b}$ by a homothety, we obtain an equivalent dynamical
system. Since the homothety preserves the unit circle, the factor must
have modulus $1$; its argument only appears in the sine term and doesn't
change any of the dynamical properties. {\bf Q.E.D.}
\section{Concluding remarks}
In our study of the standard family we used
real analytic techniques to get good control of the
boundary curves in the parameter plane of regions with a given lower or
upper bound on the rotation number. In order to control the intersections
of these curves we needed to apply rigidity properties found in
families of complex analytic maps.
Previously, complex analytic techniques were used to obtain
rigidity in one parameter
families of maps with a single critical point.
Our description of the
parameter space of the standard family is still incomplete and we pose some
open problems here. They do not seem amenable to the methods
used so far and new ideas are needed.
\noindent
{\bf Conjectures:}
\begin{itemize}
\item
$R_{p_0\over q_0}$ is homeomorphic to $R_{p_1\over q_1}$ by a homeomorphism
$H_{{p_0\over q_0},{p_1\over q_1}}$ having the following property:\\
If $H_{{p_0\over q_0},{p_1\over q_1}}(f_0)=f_1$, then $f_0$ partitions
the circle into $q_0$ intervals, $I_1,\ldots, I_{q_0}$, and $f_1$ partitions
the circle into $q_1$ intervals,
$J_1,\ldots, J_{q_1}$, so that $f_0^{q_0}|_{I_j}$ is topologically
conjugate to $f_1^{q_1}|_{J_k}$, $j \in \{1,\ldots,q_0\}, k\in
\{1,\ldots,q_1\}$.
\item
The set of maps with a given topological entropy is connected.
\end{itemize}
Both conjectures are proved in \cite{UTGC} for some two parameter families
of piecewise affine maps. A similar entropy conjecture for cubic maps is
discussed in \cite{DGMT}.
\end{document} |
\begin{document}
\tightenlines
\title{Optimized quantum nondemolition measurement of a field quadrature}
\author{Matteo G. A. Paris}\address{Quantum Optics $\&$ Information Group,
Istituto Nazionale per la Fisica della Materia \\
Universit\`a di Pavia, via Bassi 6, I-27100 Pavia, Italy}
\maketitle
\begin{abstract}
We suggest an interferometric scheme assisted by squeezing and linear feedback
to realize the whole class of field-quadrature quantum nondemolition
measurements, from Von Neumann projective measurement to fully non-destructive
non-informative one. In our setup, the signal under investigation is mixed
with a squeezed probe in an interferometer and, at the output, one of the two
modes is revealed through homodyne detection. The second beam is then
amplitude-modulated according to the outcome of the measurement, and finally
squeezed according to the transmittivity of the interferometer. Using
strongly squeezed or anti-squeezed probes respectively, one achieves either a
projective measurement, {\em i.e.} homodyne statistics arbitrarily close to
the intrinsic quadrature distribution of the signal, and conditional outputs
approaching the corresponding eigenstates, or a fully non-destructive one,
characterized by an almost uniform homodyne statistics, and by an output state
arbitrarily close to the input signal. By varying the squeezing between these
two extremes, or simply by tuning the internal phase-shift of the
interferometer, the whole set of intermediate cases can also be obtained. In
particular, an optimal quantum nondemolition measurement of quadrature can be
achieved, which minimizes the information gain versus state disturbance trade-off.
\end{abstract}
\section{Introduction}
In order to be manipulated and transmitted information should be encoded into
some degree of freedom of a physical system. Ultimately, this means that the
input alphabet should correspond to the spectrum of some observable, {\em
i.e.} that information is transmitted using {\em quantum signals}. At the end
of the channel, to retrieve this kind of quantum information, one should
measure the corresponding observable. As a matter of fact, the measurement
process unavoidably introduces some disturbance, and may even destroy the
signal, as happens in many quantum optical detectors, which are mostly
based on the irreversible absorption of the measured radiation. Actually, even
in a measurement scheme that somehow preserves the signal for further uses,
one is faced by the information gain versus state disturbance trade-off, {\em
i.e.} by the fact that the more information is obtained, the more the signal
under investigation is being modified.
\par
Actually, the most informative measurement of an observable $X$ on a state
$|\psi\rangle$ corresponds to its ideal projective measurement, which is also
referred to as Von Neumann {\em second kind} quantum measurement \cite{vn}.
In an ideal projective measurement the outcome $x$ occurs with the
intrinsic probability density $|\langle\psi|x\rangle|^2$, whereas the system
after the measurement is left in the corresponding eigenstate $|x\rangle$. A
projective measurement is obviously repeatable, since a second measure gives
the same outcome as the first one. However, the initial state is erased, and
the conditional output does not permit one to obtain further information about the
input signal.
The opposite case corresponds to a fully non-destructive detection scheme,
where the state after the measurement can be made arbitrarily close to the
input signal, and which is characterized by an almost uniform output
statistics, {\em i.e.} by a data sample that provides almost no information.
\par
Besides fundamental interest, the realization of a projective measurement of the
quadrature would have application in quantum communication based on continuous
variables. In fact, it provides a reliable and controlled source of optical
signals. On the other hand, a fully non-destructive measurement scheme is an
example of a quantum repeater, another relevant tool for the realization of
quantum networks.
Between these two extremes we have the entire class of quantum nondemolition
(QND) measurements. Such intermediate schemes provide only partial information
about the measured observable, and correspondingly distort the signal
under investigation only partially. In particular, in this
paper, we show how to attain an optimized QND measurement of the quadrature,
{\em i.e.} a scheme which minimizes the information gain versus state disturbance trade-off.
\par
Most of the schemes suggested for back-action evading measurements are based
on nonlinear interaction between signal and probe taking place either in
$\chi^{(2)}$ or $\chi^{(3)}$ media (both fibers and crystals)
\cite{wals,lap,per,gra,yam,bruck,haus}, or on optomechanical coupling
\cite{opt1,opt2}. A beam-splitter based scheme has been suggested earlier to
realize optical Von Neumann measurements \cite{vnm}. Here, we focus our
attention on an interferometric scheme which requires only linear elements and
single-mode squeezers. \par
A schematic diagram of the suggested setup is given in Fig. \ref{f:setup}.
The signal under examination $|\psi_{\sc s}\rangle$ and the probe (meter)
state $|\psi_{\sc p}\rangle$ are given by
\begin{eqnarray}
|\psi_{\sc s}\rangle &=& \int dx \: \psi_{\sc s} (x) \:
|x\rangle_1 \nonumber \\ |\psi_{\sc p}\rangle &=& \int dx \:
\psi_{\sc p} (x) \: |x\rangle_2 \label{defstate}\;,
\end{eqnarray}
where $|x\rangle_j$, $j=1,2$ are eigenstates of the field quadratures
$x_j=1/2(a_j^\dag+a_j)$, $j=1,2$ of the two modes, and $\psi_{\sc s}(x)$ and
$\psi_{\sc p}(x)$ are the corresponding wave-functions. The two beams are
linearly mixed in a Mach-Zehnder interferometer with internal phase-shift
given by $\phi$. There are also two $\lambda/4$ plates, each imposing a
$\pi/2$ phase-shift. Overall, the interferometer equipped with the plates is
equivalent to a beam splitter of transmittivity $\tau=\cos^2\phi$. However,
the interferometric setup is preferable to a single beam splitter since it
permits a fine tuning of the transmittivity. After the interferometer, one of
the two output modes is revealed by homodyne detection, whereas the second
mode is first displaced by an amount that depends on the outcome of the
measurement (feedback-assisted amplitude modulation), and then squeezed
according to the transmittivity of the interferometer (see details below). As
we will see, either by tuning the phase-shift of the interferometer, or by
exciting the probe state $|\psi_{\sc p}\rangle$ in a squeezed vacuum, and by
varying the degree of squeezing, the action of the setup ranges from a
projective to a non-destructive measurement of the field quadrature as
follows:
\begin{enumerate}
\item the statistics of the homodyne detector ranges from a distribution
arbitrarily close to the intrinsic quadrature probability density of the
signal state $|\psi_{\sc s} (x)|^2$ to an almost uniform distribution;
\item the conditional output state, after registering a value $x_0$ for the
quadrature of the signal mode, ranges from a state arbitrarily close to
the corresponding quadrature eigenstate $|x_0\rangle$
to a state that approaches the input signal $|\psi_{\sc s}\rangle$.
\end{enumerate}
The two features can be summarized by saying that the present scheme
realizes the whole set of QND measurements of a field quadrature.
In addition, the interferometer can be tuned in order to
minimize the information gain versus state disturbance trade-off,
{\em i.e.} to achieve an optimal QND measurement of quadrature.
Such a kind of measurement provides the maximum information about
the quadrature distribution of the signal, while keeping the
conditional output state as close as possible to the incoming signal.
\par
The paper is structured as follows. In the next Section we analyze the
dynamics of the measurement scheme, and describe in details the action of
linear feedback and tunable squeezing on the conditional output state and on
the homodyne distribution. In Section \ref{s:lim} we analyze the limiting
cases of strongly squeezed and anti-squeezed probes, which correspond to
projective and non-destructive measurements respectively. In Section
\ref{s:opt} we introduce two fidelity measures, in order to quantify how close
are the conditional output and the homodyne distribution to the input signal
and its quadrature distribution respectively. As a consequence, we are able to
individuate an optimal set of configurations that minimize the trade-off
between information gain and state disturbance. Section \ref{s:outro} closes
the paper with some concluding remarks.
\section{Homodyne interferometry with linear feedback}
Let us now describe the interaction scheme in details. The evolution
operator of the interferometer is given by $U(\phi) = \exp
\left[i\phi \left(a_1^\dag a_2 + a^\dag_2 a_1\right)\right]$,
such that the input state $|\Psi_{\sc in}\rangle\rangle =
|\psi_{\sc s}\rangle \otimes |\psi_{\sc p}\rangle$ evolves as
\begin{eqnarray}
|\Psi_{\sc out}\rangle\rangle &=& U(\phi)\: |\psi_{\sc s}\rangle \otimes
|\psi_{\sc p}\rangle = \nonumber \\&=& \int dx_1\int dx_2 \:
\psi_{\sc s} (x_1)\:\psi_{\sc p} (x_2)\: |x_1\cos\phi+x_2\sin\phi\rangle_1
\otimes|-x_1\sin\phi+x_2\cos\phi\rangle_2 \nonumber \\&=&
\int dy_1\int dy_2 \: \psi_{\sc s} (y_1\cos\phi-y_2\sin\phi )
\: \psi_{\sc p}(y_1\cos\phi+y_2\sin\phi)\: |y_1\rangle_1
\otimes|y_2\rangle_2
\label{outMZ}\;.
\end{eqnarray}
After the interferometer the quadrature of one of the modes
(say mode $2$) is revealed by homodyne detection. The distribution
of the outcomes is given by
\begin{eqnarray}
p(X)=\hbox{Tr}\left[\: |\Psi_{\sc
out}\rangle\rangle\langle\langle\Psi_{\sc out}|\: {\mathbb I}_1\otimes
\Pi_2(X)\right] \qquad \Pi(X)=|X\rangle\langle X|
\label{probX}\;,
\end{eqnarray}
$\Pi(X)$ being the POVM of the homodyne detector.
Since the amplitude reflectivity of the interferometer is given by $\sin\phi$,
from an outcome $X$ of the homodyne we infer a value $x_0=-X/\sin\phi$
for the quadrature of the input signal. The corresponding probability
density is given by \begin{eqnarray}
p(x_0)=\sin\phi \: p(X) = \tan\phi \int dy\: |\psi_{\sc s}(y)|^2 \:
\left|\psi_{\sc p}\left[\tan\phi(y-x_0)\right] \right|^2
\label{probx0}\;,
\end{eqnarray}
and the conditional output state for the mode $1$
\begin{eqnarray}
|\varphi_{x_0}\rangle&=& \sqrt{\frac{\sin\phi}{p(x_0)}}\:
\langle -x_0\sin\phi|\Psi_{\sc out}\rangle\rangle =
\nonumber \\
&=& \sqrt{\frac{\sin\phi}{p(x_0)}} \int dy \:
\psi_{\sc s}(y\cos\phi+x_0\sin^2\phi)\:
\psi_{\sc p}(y\sin\phi-x_0\cos\phi\sin\phi)\:|y\rangle
\label{cond1}\;.
\end{eqnarray}
The amplitude of this conditional state is then modulated by a
feedback mechanism, which consists in the application of a
displacement $D(x_0\sin\phi\tan\phi)$, with
$D(z)=\exp(z a^\dag-\bar{z}a)$.
Such displacing action can be obtained by mixing the mode
with a strong coherent state of amplitude $z$ ({\em e.g.} the laser beam
also used as local oscillator for the homodyne detector, see
Fig. \ref{f:setup}) in a beam splitter of transmittivity $\tau$ close
to unit, with the requirement that $z\sqrt{1-\tau}=x_0\sin\phi\tan\phi$
\cite{displa}. An experimental implementation using a feedforward
electro-optic modulator has been presented in \cite{lam}.
The resulting state is given by
\begin{eqnarray}
D(x_0\sin\phi\tan\phi)\:|\varphi_{x_0}\rangle
= \sqrt{\frac{\sin\phi}{p(x_0)}} \int dy \:
\psi_{\sc s}(y\cos\phi)\:
\psi_{\sc p}(y\sin\phi-x_0\tan\phi)\:|y\rangle
\label{displa}\;.
\end{eqnarray}
Finally, this state is subjected to a single-mode squeezing transformation $S(r)=
\exp[1/2 r (a^{\dag 2}-a^2)]$ by a degenerate parametric amplifier (DOPA).
By tuning the squeezing parameter to the value $r^\star$ with $e^{r^\star}=\cos\phi$ and
using the relation $S(r)|y\rangle=e^{r/2}|e^r y\rangle$ we arrive
at the final state
\begin{eqnarray}
|\psi_{x_0}\rangle &=&
S(r^\star)\:D(x_0\sin\phi\tan\phi)\:|\varphi_{x_0}\rangle= \nonumber \\
&=& \sqrt{\frac{\tan\phi}{p(x_0)}} \int dy \:
\psi_{\sc s}(y)\:
\psi_{\sc p}\left[(y-x_0)\tan\phi\right]\:|y\rangle
\label{output}\;.
\end{eqnarray}
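The last step can be verified directly (a one-line check, with notation as above): applying $S(r)$ with $e^{r}=\cos\phi$ to the state (\ref{displa}), using $S(r)|y\rangle=e^{r/2}|e^r y\rangle$ and substituting $z = y\cos\phi$, one finds
$$S(r)\,D(x_0\sin\phi\tan\phi)\,|\varphi_{x_0}\rangle = \sqrt{\frac{\sin\phi}{p(x_0)}}\; \frac{e^{r/2}}{\cos\phi}\int dz\; \psi_{\sc s}(z)\,\psi_{\sc p}\bigl[(z-x_0)\tan\phi\bigr]\,|z\rangle,$$
and $e^{r/2}/\cos\phi = (\cos\phi)^{-1/2}$ turns the overall prefactor into $\sqrt{\tan\phi/p(x_0)}$, in agreement with Eq. (\ref{output}).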
The wave-function of this conditional output state is thus given by
\begin{eqnarray}
\psi_{x_0}(x) =\frac{\psi_{\sc s}(x)\:\psi_{\sc p}\left[(x-x_0)\tan\phi
\right]}{\sqrt{\int dy\: |\psi_{\sc s}(y)|^2 \:
\left|\psi_{\sc p}\left[\tan\phi(y-x_0)\right] \right|^2}}
\label{outwave}\;.
\end{eqnarray}
Eq. (\ref{probx0}) and Eqs. (\ref{output},\ref{outwave}) summarize the
filtering effects of the probe wave-function on the output statistics
and the conditional state respectively.
\par
\section{Measurements using squeezed or anti-squeezed probes}\label{s:lim}
For the probe mode in the vacuum state we have $\psi_{\sc
p}(x)=(2/\pi)^{1/4} \exp(-x^2)$, so that the homodyne distribution
of Eq. (\ref{probx0}) becomes
\begin{eqnarray}
p(x_0)=|\psi_{\sc s}(x)|^2 \star G(x,x_0,\frac{1}{4\tan^2\phi})
\label{vacprobx0}\;
\end{eqnarray}
where $\star$ denotes convolution and $G(x,x_0,\sigma^2)$ a Gaussian in $x$ of
mean $x_0$ and variance $\sigma^2$. The quadrature distribution of the
corresponding output state is given by
\begin{eqnarray}
|\psi_{x_0}(x)|^2=\frac{1}{p(x_0)}\:|\psi_{\sc s}(x)|^2\:G(x,x_0,
\frac{1}{4\tan^2\phi})\label{vacond}\;
\end{eqnarray}
Eqs. (\ref{vacprobx0}) and (\ref{vacond}) account for the noise introduced
by vacuum fluctuations. This noise can be manipulated by suitably squeezing
the probe, thus realizing the whole set of QND measurements.
\par
Squeezed or anti-squeezed vacuum probes are described by the
wave-functions
\begin{eqnarray}
\psi_{\sc sq}(x)&=&\frac{1}{(2\pi\Sigma^2)^{1/4}}\exp
\left\{-\frac{x^2}{4\Sigma^2}\right\} \nonumber \\
\psi_{\sc asq}(x)&=&\left(\frac{\Sigma^2}{2\pi}\right)^{1/4}\exp
\left\{-\frac{\Sigma^2 x^2}{4}\right\}
\label{sqvac}\;,
\end{eqnarray}
where the squeezing is encoded in the requirement
$0<\Sigma^2\leq 1/4$. Notice that squeezing the probe introduces additional
energy into the system. The mean photon number of the states in
(\ref{sqvac}) is given by $N=(4\sigma_{\sc p}^2+(4\sigma_{\sc p}^2)^{-1}-2)/4$,
$\sigma_{\sc p}^2$ being the position variance of the probe wave-function,
{\em i.e.} $\sigma_{\sc p}^2=\Sigma^2$ for the squeezed and
$\sigma_{\sc p}^2=\Sigma^{-2}$ for the anti-squeezed state, so that $N$
vanishes for the vacuum value $\sigma_{\sc p}^2=1/4$.
Using a squeezed vacuum probe, Eqs. (\ref{vacprobx0}) and (\ref{vacond})
become
\begin{eqnarray}
p(x_0)&=&|\psi_{\sc s}(x)|^2 \star G(x,x_0,\Sigma^2/\tan^2\phi)
\stackrel{\Sigma\rightarrow 0}{\longrightarrow} |\psi_{\sc s}(x_0)|^2
\label{sqcond1}\\
|\psi_{x_0}(x)|^2&=&\frac{1}{p(x_0)}\:|\psi_{\sc s}(x)|^2\:G(x,x_0,\Sigma^2/\tan^2\phi)
\stackrel{\Sigma\rightarrow 0}{\longrightarrow} \delta(x-x_0)
\label{sqcond2}\;.
\end{eqnarray}
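Both limits follow from the $\delta$-family property of the Gaussian: for any fixed outcome $x_0$,
\begin{eqnarray}
G(x,x_0,\Sigma^2/\tan^2\phi)
\stackrel{\Sigma\rightarrow 0}{\longrightarrow} \delta(x-x_0)\;,
\end{eqnarray}
so the convolution in Eq. (\ref{sqcond1}) reduces to the value of $|\psi_{\sc s}|^2$ at $x_0$, while the product in Eq. (\ref{sqcond2}) concentrates all the probability at $x=x_0$.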
Eq. (\ref{sqcond1}) says that by squeezing the probe the statistics of
the homodyne detectors can be made arbitrarily close to the intrinsic
quadrature distribution $|\psi_{\sc s}(x_0)|^2$, whereas Eq.
(\ref{sqcond2}) shows that, for any value of the outcome $x_0$, the
conditional output $|\psi_{x_0}\rangle$ approaches the corresponding
quadrature eigenstate $|x_0\rangle$.
For $\Sigma\rightarrow 0$ the mean energy of the conditional output
state $|\psi_{x_0}\rangle$ increases, since it is approaching a
quadrature eigenstate (an exact eigenstate would have infinite
energy). Notice that this amount of energy is mostly provided by the
probe state itself, rather than by the displacement and squeezing
stages of the setup. The improvement in the precision due to squeezing,
compared to that of a vacuum probe, can be quantified by the
ratio of the variances of the filtering Gaussians in Eqs. (\ref{vacond})
and (\ref{sqcond2}). Calling this ratio $\Delta$ we have $\Delta=4\Sigma^2$
and thus, for squeezing not too low, $\Delta\simeq (4N)^{-1}$.
\par
For an anti-squeezed vacuum probe, Eqs. (\ref{vacprobx0}) and (\ref{vacond})
become
\begin{eqnarray}
p(x_0)&=&|\psi_{\sc s}(x)|^2 \star G(x,x_0,(\Sigma^2 \tan^2\phi)^{-1})
\stackrel{\Sigma\rightarrow 0}{\longrightarrow}
\frac{\exp\{-\frac{x^2}{2\sigma^2}\}}{\sqrt{2\pi\sigma^2}} \quad \sigma^2 =
\frac1{\Sigma^2\tan^2\phi}
\label{sqcond3}\\
|\psi_{x_0}(x)|^2&=&\frac{1}{p(x_0)}\: |\psi_{\sc s}(x)|^2\:G(x,x_0,(\Sigma^2 \tan^2
\phi)^{-1}) \stackrel{\Sigma\rightarrow 0}{\longrightarrow} |\psi_{\sc s}(x)|^2 \quad
\forall x_0
\label{sqcond4}\;.
\end{eqnarray}
Eqs. (\ref{sqcond3}) and (\ref{sqcond4}) show that by anti-squeezing the probe
the statistics of the homodyne detectors approaches a flat distribution
over the real axis, and correspondingly that the conditional output can be made
arbitrarily close to the incoming signal, independently of the actual value of
$x_0$.
\par
Notice that, in principle, both projective and non-destructive measurements
could be obtained with a vacuum probe, simply by varying the internal phase-shift of
the interferometer according to Eqs. (\ref{vacprobx0}) and (\ref{vacond}).
However, this would also affect the {\em rate} of the events at the output
(since $\phi$ governs the transmittivity of the interferometer), and
therefore may not be convenient from a practical point of view. On the other
hand, when a fine tuning of the variances in Eqs. (\ref{sqcond1}-\ref{sqcond4}) is
needed (as, for example, in the optimization of the scheme, see the next Section), it
can be conveniently obtained by varying $\phi$, without the need of varying the
degree of squeezing of the probe.
\section{Optimized QND measurement}\label{s:opt}
So far we considered the two extreme cases of infinitely
squeezed or anti-squeezed probes. Now we proceed to quantify the trade-off
between the state disturbance and the gain of information for
the whole set of intermediate cases. There are two relevant parameters:
i) how close the output signal is to the input state, and ii) how
close the homodyne distribution is to the intrinsic quadrature probability.
According to Eq. (\ref{output}), after the outcome $x$ has been registered the conditional
output state is given by $|\psi_x\rangle$. Since the outcome $x$ occurs with
the probability $p(x)$ of Eq. (\ref{probx0}), the density matrix describing the
output ensemble after a large number of measurements is given by
\begin{eqnarray}
\varrho_{\sc out} = \int dx \: p(x) \: |\psi_x\rangle\langle\psi_x |
\label{outdens}\;.
\end{eqnarray}
Indeed, this is the state that can be subsequently manipulated, or used to
gain further information on the system. The resemblance between input and
output can be quantified by the {\em average state fidelity}
\begin{eqnarray}
F=\langle\psi_{\sc s} | \varrho_{\sc out} | \psi_{\sc s} \rangle = \int dx \:
p(x)\: \left| \langle\psi_{\sc s} | \psi_x\rangle \right|^2
\label{avF}\;.
\end{eqnarray}
Inserting Eq. (\ref{output}) in Eq. (\ref{avF}) we obtain
\begin{eqnarray}
F=\int\int dy' dy'' \: \left|\psi_{\sc s} (y')\right|^2\:
\left|\psi_{\sc p} (y'')\right|^2\: T_\phi (y',y'')
\label{avF1}\;,
\end{eqnarray}
where, for the squeezed vacuum probes we are taking into account, the transfer function
is given by
\begin{eqnarray}
T_\phi(y',y'') = \exp \left\{ -\tan^2\phi\:\frac{(y'-y'')^2}{8 \sigma^2_{\sc p}}\right\}
\label{funT}\;,
\end{eqnarray}
$\sigma_{\sc p}^2$ being the variance of the probe wave-function, {\em i.e.}
$\sigma_{\sc p}=\Sigma$ for squeezed probes and $\sigma_{\sc p}=\Sigma^{-1}$
for anti-squeezed probes. $F$ takes values between zero and one, and is a
decreasing function of the probe squeezing.
\par
If the initial signal is also Gaussian, the fidelity reads
\begin{eqnarray}
F=\frac{\sqrt{2}x}{\sqrt{1+2x^2}}
\qquad x=\frac{\sigma_{\sc p}}{\sigma_{\sc s}\tan\phi}
\label{GF}\;,
\end{eqnarray}
$\sigma_{\sc s}^2$ being the variance of the signal's wave-function.
Eq. (\ref{GF}) interpolates between the two extreme cases of the previous
Section. In order to check this behavior we evaluate (\ref{GF}) for strong
squeezing or anti-squeezing. We have
\begin{eqnarray}
F=\left\{
\begin{array}{lrrc}
\sqrt{2} x \rightarrow 0 & & x \ll 1 & {\rm squeezed\: probe}\\ & & \\
1-(4x^2)^{-1}
\rightarrow 1 & & x \gg 1 & {\rm antisqueezed\: probe}
\end{array}
\right.
\label{checkF}\;.
\end{eqnarray}
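The large-$x$ entry follows from a one-line expansion of Eq. (\ref{GF}):
\begin{eqnarray}
F=\frac{\sqrt{2}x}{\sqrt{1+2x^2}}
=\left(1+\frac{1}{2x^2}\right)^{-1/2}
\simeq 1-\frac{1}{4x^2} \qquad (x\gg 1)\;,
\end{eqnarray}
while for $x\ll 1$ the square root in the denominator is close to unity and $F\simeq\sqrt{2}x$.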
\par
In order to quantify how close the quadrature probability of the
input signal is to the homodyne distribution at the output we introduce the {\em
average distribution fidelity}
\begin{eqnarray}
G= \left(\int dx \: \sqrt{p(x)}\: \left|\psi_{\sc s}(x)\right| \right)^2
\label{G}\;,
\end{eqnarray}
which also ranges from zero to one, and is an increasing function of the
probe squeezing. For both Gaussian signal and probe we obtain
\begin{eqnarray}
G=2\:\frac{\sqrt{1+x^2}}{2+x^2}\qquad x=\frac{\sigma_{\sc p}}{\sigma_{\sc s}\tan\phi}
\label{GG}\;,
\end{eqnarray}
and therefore
\begin{eqnarray}
G=\left\{
\begin{array}{lrrc}
1-\frac18 x^4 \rightarrow 1 & & x \ll 1 & {\rm squeezed\: probe}\\ & & \\
2 x^{-1}
\rightarrow 0 & & x \gg 1 & {\rm anti-squeezed\: probe}
\end{array}
\right.
\label{checkG}\;.
\end{eqnarray}
Notice that $F$ and $G$ are {\em global} figures of fidelity \cite{kon}, {\em i.e.} they compare
the input and the output on the basis of the whole quantum state or probability
distribution rather than by their first moments, as is done with
customary QND parameters (see for example \cite{milburn}; for a more general
approach in the case of a two-dimensional Hilbert space see \cite{fuc}).
\par
As a matter of fact the quantity $F+G$ is not constant, which means that by
varying the squeezing of the probe we obtain different trade-offs between information
gain and state disturbance. An optimal choice of the probe, corresponding to
maximum information and minimum disturbance, maximizes $F+G$.
The maximum is achieved for $x\equiv x_{\sc m}\simeq 1.2$, corresponding
to fidelities $F[x_{\sc m}]\simeq 86 \%$ and $G[x_{\sc m}]\simeq 91\%$.
Notice that for a chosen signal, the optimization of the QND measurement can
be achieved by tuning the internal phase-shift of the interferometer, without the
need of varying the squeezing of the probe. For a nearly balanced interferometer we have
$\tan\phi\simeq 1$: in this case the optimal choice for the probe is a state slightly
anti-squeezed with respect to the signal, {\em i.e.} $\sigma_{\sc p} \simeq 1.2\:
\sigma_{\sc s}$. Finally, the fidelities are equal for $x\equiv x_{\sc e}\simeq 1.3$,
corresponding to $F[x_{\sc e}]=G[x_{\sc e}]\simeq 88 \%$.
\par
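As a quick numerical sanity check on these values (a sketch, not part of the original analysis), the Gaussian-case fidelities of Eqs. (\ref{GF}) and (\ref{GG}) can be evaluated on a grid to locate the maximum of $F+G$ and the crossing point $F=G$:

```python
import numpy as np

# Gaussian-case fidelities, with x = sigma_p / (sigma_s * tan(phi)).
def F(x):
    return np.sqrt(2.0) * x / np.sqrt(1.0 + 2.0 * x**2)

def G(x):
    return 2.0 * np.sqrt(1.0 + x**2) / (2.0 + x**2)

x = np.linspace(0.01, 5.0, 500000)
x_m = x[np.argmax(F(x) + G(x))]            # optimal trade-off point
x_e = x[np.argmin(np.abs(F(x) - G(x)))]    # point where F = G

print(f"x_m = {x_m:.2f}, F = {F(x_m):.2f}, G = {G(x_m):.2f}")
print(f"x_e = {x_e:.2f}, F = G = {F(x_e):.2f}")
```

The grid search reproduces $x_{\sc m}\simeq 1.2$ with $F\simeq 0.86$ and $G\simeq 0.91$, and $x_{\sc e}\simeq 1.3$ with common fidelity $\simeq 88\%$.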
For non-Gaussian signals the behavior is similar, though no simple analytical
form can be obtained for the fidelities. In this case, in order to find the
optimal QND measurement, one should resort to numerical means \cite{fut}.
\section{Conclusions}\label{s:outro}
In conclusion, we have suggested an interferometric scheme assisted
by squeezing and linear feedback to realize an arbitrary QND
measurement of a field quadrature. Compared to previous proposals the
main features of our setup can be summarized as follows: i) it involves
only linear coupling between signal and probe, ii) only single mode
transformations on the conditional output are needed, and iii) the whole
class of QND measurements may be obtained with the same setup, either by tuning the
internal phase-shift of the interferometer, or by varying the squeezing of the
probe. \par
The present setup allows one, in principle, to achieve both a projective and a
fully non-destructive quantum measurement of a field quadrature. In practice,
however, the physical constraints on the maximum amount of energy that can be
fed into the optical channels pose limitations on the precision of the
measurements. This agrees with the fact that both an exact repeatable
measurement and a perfect state preparation cannot be realized for observables
with continuous spectrum \cite{oza}. Of course, other limitations are imposed
by the imperfect photodetection and by the finite resolution of detectors
\cite{hof}. Compared to a vacuum probe, the squeezed/anti-squeezed meters
suggested in this paper provide a considerable noise reduction in the desired
fidelity figure already for moderate input probe energy. In addition, by
varying the squeezing of the probe an optimal QND measurement can be achieved,
which provides the maximum information about the quadrature distribution of
the signal, while keeping the conditional output state as close as possible to
the incoming signal.
\begin{figure}
\caption{Setup for QND measurements of a field
quadrature on the state $|\psi_{\sc s}\rangle$.}
\label{f:setup}
\end{figure}
\end{document}
\begin{document}
\title[Homogeneous coordinates]
{Homogeneous coordinates \\
for algebraic varieties}
\author[F. Berchtold]{Florian Berchtold}
\address{Fachbereich Mathematik und Statistik, Universit\"at Konstanz}
\email{[email protected]}
\author[J.~Hausen]{J\"urgen Hausen}
\address{Fachbereich Mathematik und Statistik, Universit\"at Konstanz}
\email{[email protected]}
\subjclass{13A02,13A50,14C20,14C22}
\begin{abstract}
We associate to every divisorial (e.g.~smooth) variety $X$ with
only constant invertible global functions and finitely generated
Picard group a $\operatorname{Pic}(X)$-graded homogeneous coordinate ring.
This generalizes the usual homogeneous coordinate ring of the
projective space and constructions of Cox and Kajiwara for smooth
and divisorial toric varieties.
We show that the homogeneous coordinate ring defines in fact
a fully faithful functor.
For normal complex varieties $X$ with only
constant global functions, we even obtain an equivalence of
categories.
Finally, the homogeneous coordinate ring of a locally factorial
complete irreducible variety with free finitely generated
Picard group turns out to be a Krull ring admitting
unique factorization.
\end{abstract}
\maketitle
\section*{Introduction}
The principal use of homogeneous coordinates is
that they relate
the geometry of algebraic varieties to the theory
of graded rings.
The classical example is the projective $n$-space:
its homogeneous coordinate ring is the polynomial
ring in $n+1$ variables, graded by the usual degree.
Cox~\cite{Co} and Kajiwara~\cite{Ka}
introduced homogeneous coordinate rings for
toric varieties.
Cox's construction is meanwhile a
standard instrument in toric geometry;
for example, it is used in~\cite{BrVe} to prove
an equivariant Riemann-Roch Theorem, and in~\cite{MuSmTs}
for a description of $\mathcal{D}$-modules on toric
varieties.
In this article, we construct homogeneous
coordinates for a fairly general class of algebraic
varieties:
Let $X$ be a divisorial variety --- e.g.~$X$ is
${\mathbb Q}$-factorial or quasiprojective~\cite{Bo} ---
such that $X$ has only constant globally invertible
functions and the Picard group $\operatorname{Pic}(X)$ is finitely
generated.
If the (algebraically closed) ground field ${\mathbb K}$ is
of characteristic $p > 0$, then we require that
the multiplicative group ${\mathbb K}^{*}$ is of infinite
rank over ${\mathbb Z}$, and that $\operatorname{Pic}(X)$ has no
$p$-torsion.
Examples of such varieties are complete smooth rational complex
varieties. Moreover, all Calabi-Yau varieties fit into this
framework.
To define the homogeneous coordinate ring of $X$, consider
a family of line bundles $L$ on $X$ such that the classes
$[L]$ generate $\operatorname{Pic}(X)$.
Choosing a common trivializing cover $\mathfrak{U}$
for the bundles $L$, one can achieve that they form a
finitely generated free abelian group $\Lambda$,
which is isomorphic to a subgroup of the group of
cocycles $H^{1}(\mathcal{O}^{*}, \mathfrak{U})$.
The sheaves of sections $\mathcal{R}_{L}$, where
$L \in \Lambda$, then fit together to a sheaf $\mathcal{R}$
of $\Lambda$-graded $\mathcal{O}_{X}$-algebras.
Such sheaves $\mathcal{R}$ and their global sections
$\mathcal{R}(X)$ are often studied.
For example, in~\cite{HK} they have been used to
characterize when Mori's program can be carried out,
and in~\cite{Ha1} they are the starting point
for quotient constructions in the spirit
of Mumford's Geometric Invariant Theory.
A first important observation is that
we can pass from the above $\Lambda$-graded
$\mathcal{O}_{X}$-algebras $\mathcal{R}$ to a
universal $\mathcal{O}_{X}$-algebra $\mathcal{A}$,
which is graded by the Picard group $\operatorname{Pic}(X)$.
This solves in particular the ambiguity problem
mentioned in~\cite[Remark p.~341]{HK}.
More precisely, we introduce in Section~\ref{section3}
the concept of a {\em shifting family\/} for the
$\mathcal{O}_{X}$-algebra $\mathcal{R}$.
This enables us to identify in a systematic manner
two homogeneous parts $\mathcal{R}_{L}$ and
$\mathcal{R}_{L'}$ if $L$ and $L'$ define the same
class in $\operatorname{Pic}(X)$.
The result is a projection $\mathcal{R} \to \mathcal{A}$
onto a $\operatorname{Pic}(X)$-graded $\mathcal{O}_{X}$-algebra $\mathcal{A}$.
The {\em homogeneous coordinate ring\/} of $X$ then
is a pair $(A,\mathfrak{A})$.
The first part $A$ is the $\operatorname{Pic}(X)$-graded ${\mathbb K}$-algebra of
global sections $\mathcal{A}(X)$.
The meaning of the second part $\mathfrak{A}$ is
roughly speaking the following:
It turns out that $A$ is the algebra of functions of
a quasiaffine variety $\rq{X}$.
Such algebras need not be of finite type over ${\mathbb K}$,
and $\mathfrak{A}$ is a datum describing
all the possible affine closures of $\rq{X}$.
From the algebraic point of view,
the homogeneous coordinate ring is
a {\em freely graded quasiaffine algebra};
the category of such algebras is introduced and
discussed in Sections~\ref{section1} and~\ref{section2}.
The first main result of this article is that the
homogeneous coordinate ring is indeed functorial,
that is, given a morphism $X \to Y$ of varieties,
we obtain a morphism of the associated freely graded
quasiaffine algebras, see Section~\ref{section5}.
In fact, we prove much more, see
Theorem~\ref{fullyfaithful}:
\begin{introthm}
The assignment $X \mapsto (A,\mathfrak{A})$
is a fully faithful functor
from the category of divisorial varieties $X$
with finitely generated Picard group and
$\mathcal{O}^{*}(X) = {\mathbb K}^{*}$ to the category of freely graded
quasiaffine algebras.
\end{introthm}
Note that this statement generalizes in particular the description of
the set ${\rm Hom}(X,Y)$ of morphisms of two divisorial toric varieties
$X$, $Y$ obtained by Kajiwara in~\cite[Cor.~4.9]{Ka}.
In the toric situation, $\mathcal{O}^{*}(X) = {\mathbb K}^{*}$ is a
usual nondegeneracy assumption: it just means that $X$ has no torus
factors.
Having proved Theorem~\ref{fullyfaithful}, the task is to
translate geometric properties of a given variety $X$
to algebraic properties of its homogeneous coordinate ring
$(A,\mathfrak{A})$.
In Section~\ref{section6}, we do this for basic properties
of $X$, like smoothness and normality.
In the latter case, the ${\mathbb K}$-algebra $A$ is a normal Krull ring.
Moreover, we discuss quasicoherent sheaves,
and we give descriptions of affine morphisms and closed
embeddings.
In our second main result, we restrict to normal divisorial
varieties $X$ with finitely generated Picard group and
$\mathcal{O}(X) = {\mathbb K}$.
We call such varieties {\em tame}.
The homogeneous coordinate ring
$(A,\mathfrak{A})$ of a tame variety $X$ is
{\em pointed\/} in the sense that
$A$ is normal with $A_{0} = {\mathbb K}$ and $A^{*} = {\mathbb K}^{*}$.
Moreover, $(A,\mathfrak{A})$ is {\em simple\/} in the sense
that the corresponding quasiaffine variety $\rq{X}$
admits only trivial ``linearizable'' bundles,
see Section~\ref{section7} for the
precise definition.
In Theorem~\ref{equivthm}, we show:
\begin{introthm}
The assignment $X \mapsto (A,\mathfrak{A})$
defines an equivalence of the category of tame varieties with the
category of simple pointed algebras.
\end{introthm}
Specializing further to the case of a free Picard group gives the
class of {\em very tame\/} varieties, see Section~\ref{section8}.
Examples are the Grassmannians and all smooth complete toric varieties.
For this class, we obtain a nice description of products in
terms of homogeneous coordinate rings, see Proposition~\ref{products}.
Perhaps the most remarkable observation is that very tame varieties
open up a geometric approach to unique factorization conditions
for multigraded Krull rings, see Proposition~\ref{freefactorial}:
\begin{introprop}
A very tame variety is locally factorial if and only if its
homogeneous coordinate ring is a unique factorization domain.
\end{introprop}
We conclude the article with an example
underlining this principle:
Let $X$ be the projective line with the points $0,1$
and $\infty$ doubled, that means that $X$ is nonseparated.
Nevertheless, $X$ is very tame and its Picard group is
isomorphic to ${\mathbb Z}^{4}$.
As mentioned before, $A = \mathcal{A}(X)$
is a unique factorization domain.
It even turns out to be a classical example of
a factorial singularity, namely
$$
A
=
{\mathbb K}[T_{1}, \ldots, T_{6}]/\bangle{T_{1}^{2}+ \ldots + T_{6}^{2}}.
$$
The quasiaffine variety $\rq{X}$ corresponding to the homogeneous
coordinate ring of $X$ is an open subset of ${\rm Spec}(A)$.
The prevariety $X$ is a geometric quotient of $\rq{X}$ by a free action
of a four-dimensional algebraic torus. In particular, $\rq{X}$
is locally isomorphic to the toric variety ${\mathbb K} \times ({\mathbb K}^{*})^{4}$.
That means that $\rq{X}$ is toroidal, even with respect to the
Zariski topology, but not toric.
\tableofcontents
\section{Quasiaffine algebras and quasiaffine varieties}\label{section1}
Throughout the whole article we work in the category of algebraic
varieties following the setup of~\cite{Ke}. In particular, we work
over an algebraically closed field ${\mathbb K}$, and the word point always
refers to a closed point. Note that in our setting a variety is
reduced but it need neither be separated nor irreducible.
The purpose of this section is to provide an algebraic
description of the category of quasiaffine varieties.
The idea is very simple: Every quasiaffine variety $X$ is an open
subset of an affine variety $X'$ and hence is described by the
inclusion $\mathcal{O}(X') \subset \mathcal{O}(X)$ and the vanishing
ideal of the complement $X' \setminus X$ in
$\mathcal{O}(X')$.
However, in general the algebra of functions $\mathcal{O}(X)$ of a
quasiaffine variety $X$ is not of finite type, see for example~\cite{Re}.
Thus there is no canonical choice of an affine closure $X'$ for a
given $X$. To overcome this ambiguity, we have to treat all possible
affine closures at once.
We introduce the necessary algebraic notions.
By a ${\mathbb K}$-algebra we always mean a reduced commutative
algebra $A$ over ${\mathbb K}$ having a unit element.
We write $\bangle{I}$ for the ideal generated by a
subset $I \subset A$.
The set of nonzerodivisors of a ${\mathbb K}$-algebra $A$ is
denoted by ${\rm nzd}(A)$. Recall that we have a canonical inclusion
$A \subset {\rm nzd}(A)^{-1}A$ into the algebra of fractions.
\goodbreak
\begin{defi}\label{closedsubalgebra}
Let $A$ be a ${\mathbb K}$-algebra.
\begin{enumerate}
\item A {\em closing subalgebra\/} of $A$ is a pair $(A',I')$ where
$A' \subset A$ is a subalgebra of finite type over ${\mathbb K}$ and $I'
\subset A'$ is an ideal in $A'$ with
$$
I' = \sqrt{\bangle{I' \cap {\rm nzd}(A)}},
\qquad
A = \bigcap_{f \in I' \cap {\rm nzd}(A)} A_{f},
\qquad
A'_{f} = A_{f} \text{ for all } f \in I'.
$$
\item Two closing subalgebras $(A',I')$ and $(A'',I'')$ of $A$ are
called {\em equivalent\/} if there is a closing subalgebra
$(A''',I''')$ of $A$ such that
$$ A' \cup A'' \subset A''',
\qquad
I''' = \sqrt{\bangle{I'}} = \sqrt{\bangle{I''}}. $$
\end{enumerate}
\end{defi}
Note that \ref{closedsubalgebra}~(ii) does indeed define an
equivalence relation. In terms of these notions, the algebraic data to
describe quasiaffine varieties are the following:
\begin{defi}\label{quasiaffalgdef}
\begin{enumerate}
\item A {\em quasiaffine algebra\/} is a pair $(A,\mathfrak{A})$,
where $A$ is a ${\mathbb K}$-algebra and $\mathfrak{A}$ is the
equivalence class of a closing subalgebra $(A',I')$ of $A$.
\item A {\em homomorphism\/} of quasiaffine algebras
$(B,\mathfrak{B})$ and $(A,\mathfrak{A})$ is a homomorphism $\mu
\colon B \to A$ such that there exist $(B',J') \in \mathfrak{B}$
and $(A',I') \in \mathfrak{A}$ with
$$ \mu(B') \subset A', \qquad I' \subset \sqrt{\bangle{\mu(J')}}. $$
\end{enumerate}
\end{defi}
We show now that the category of quasiaffine varieties
is equivalent to the category of quasiaffine algebras by associating to
every variety $X$ an equivalence class $\mathfrak{O}(X)$
of closing subalgebras of $\mathcal{O}(X)$. We use the following notation:
Given a variety $X$ and a regular function $f \in \mathcal{O}(X)$, let
$$ X_{f} := \{x \in X; \; f(x) \ne 0\}.$$
\begin{defi}\label{naturalpairdef}
Let $X$ be a quasiaffine variety. Let
$A' \subset \mathcal{O}(X)$ be a subalgebra of finite type
and $I' \subset A'$ a radical ideal.
We call $(A',I')$ a {\em natural pair\/} on $X$,
if for every $f \in I'$ the set $X_{f}$ is affine with
$\mathcal{O}(X_{f}) = A'_{f}$ and the sets $X_{f}$, $f \in I'$,
cover $X$. We define $\mathfrak{O}(X)$ to be the collection of all
natural pairs on $X$.
\end{defi}
So, our first task is to verify that the collection
$\mathfrak{O}(X)$ is in fact an equivalence class of
closing subalgebras of $\mathcal{O}(X)$. This is done in two
steps:
\begin{lemma}\label{naturalpairs}
Let $X$ be a quasiaffine variety. Let $(A',I')$ be a
natural pair on $X$, and set $X' := {\rm Spec}(A')$.
\begin{enumerate}
\item The morphism $X \to X'$ defined by $A' \subset \mathcal{O}(X)$
is an open embedding, $I'$ is the vanishing ideal of
$X' \setminus X$, and $(A',I')$ is a closing subalgebra of
$\mathcal{O}(X)$.
\item For a subalgebra $A'' \subset \mathcal{O}(X)$ of finite type
with $A' \subset A''$, consider the ideal $I'' :=
\sqrt{\bangle{I'}}$ of $A''$.
Then $(A'',I'')$ is a natural pair on $X$.
\end{enumerate}
\end{lemma}
\proof Recall that for any $f \in \mathcal{O}(X)$
we have $\mathcal{O}(X_{f}) = \mathcal{O}(X)_{f}$.
In particular, $X \to X'$ is locally given by
isomorphisms $X_{f} \to X'_{f}$, $f \in I'$.
This implies that $X \to X'$ is an open embedding and that
$I' \subset A'$ is the vanishing ideal of $X' \setminus X$.
Finally, $(A',I')$ is a closing subalgebra, because
up to passing to the radical, $I'$ is generated
by the $f \in I'$ that are nontrivial on each irreducible
component of $X$.
We turn to assertion~(ii). Let $X'' := {\rm Spec}(A'')$.
It suffices to verify that the morphism $X \to X''$ defined
by $A'' \subset \mathcal{O}(X)$ is an open embedding and
that $I'' \subset A''$ is the vanishing ideal of
the complement $X'' \setminus X$.
Again this holds, because for every $f \in I'$ the map $X \to X''$
restricts to an isomorphism $X_{f} \to X''_{f}$. \endproof
\begin{lemma}\label{quasiaff2closingsubalg}
The collection $\mathfrak{O}(X)$ of all natural pairs
on a quasiaffine variety $X$ is an equivalence
class of closing subalgebras of $\mathcal{O}(X)$.
\end{lemma}
\proof First note that there exist natural pairs $(A',I')$ on $X$,
because for every affine closure $X \subset X'$ we obtain such a pair
by setting $A' := \mathcal{O}(X')$ and defining $I' \subset A'$ to be
the vanishing ideal of the complement $X' \setminus X$. Moreover, by
Lemma~\ref{naturalpairs}~(i), we know that every natural pair is a closing
subalgebra of $\mathcal{O}(X)$.
We show that any two natural pairs $(A',I')$ and $(A'',I'')$ on $X$
are equivalent closing subalgebras of $\mathcal{O}(X)$.
Let $A''' \subset \mathcal{O}(X)$ be any subalgebra of finite type
containing $A' \cup A''$.
Define an ideal in $A'''$ by $I''' := \sqrt{\bangle{I'}}$.
Then Lemma~\ref{naturalpairs} tells us that
the pair $(A''', I''')$ is a closing subalgebra.
We have to show that $I'''$ equals $\sqrt{\bangle{I''}}$.
By Lemma~\ref{naturalpairs}, the morphism $X \to X'''$
defined by the inclusion $A''' \subset \mathcal{O}(X)$ is an open
embedding and $I''' \subset A'''$ is the vanishing ideal of
$X''' \setminus X$. For every $f \in I''$, the map $X \to X'''$
restricts to an isomorphism $X_{f} \to X'''_{f}$. Hence the desired
identity of ideals follows from
$$ X = \bigcup_{f \in I''} X_{f}. $$
Finally, we show that if a closing subalgebra $(A'',I'')$
is equivalent to a natural pair $(A',I')$,
then also $(A'',I'')$ is natural. Choose $(A''',I''')$
as in~\ref{closedsubalgebra}~(ii).
By Lemma~\ref{naturalpairs}~(ii), the pair $(A''',I''')$
is natural. In particular, $X_{f}$ is affine for every $f \in I''$.
Moreover, $X$ is covered by these $X_{f}$,
because $I'''$ equals $\sqrt{\bangle{I''}}$. \endproof
We are ready for the main result of this section.
Given a quasiaffine variety $X$,
we denote as before by $\mathfrak{O}(X)$
the collection of all natural pairs on $X$.
For a morphism $\varphi \colon X \to Y$ of varieties, we denote by
$\varphi^{*} \colon \mathcal{O}(Y) \to \mathcal{O}(X)$ the pullback
of functions.
\begin{prop}\label{quasiaffequiv}
The assignments $X \mapsto (\mathcal{O}(X), \mathfrak{O}(X))$ and
$\varphi \mapsto \varphi^{*}$ define a contravariant equivalence of
the category of quasiaffine varieties with the category of
quasiaffine algebras.
\end{prop}
\proof First of all, we check that the above assignment is
in fact well defined on morphisms. Let $\varphi \colon X \to Y$ be
any morphism of quasiaffine varieties. Choose a closing
subalgebra $(B',J')$ in $\mathfrak{O}(Y)$.
By Lemma~\ref{naturalpairs}~(ii), we can construct a closing
subalgebra $(A',I')$ in $\mathfrak{O}(X)$ such
that $\varphi^{*}(B') \subset A'$.
Now, consider the affine closures $X' := {\rm Spec}(A')$ and $Y' :=
{\rm Spec}(B')$ of $X$ and~$Y$. The morphism $\varphi' \colon X' \to
Y'$ defined by the restriction $\varphi^{*} \colon B' \to A'$ maps
$X$ to~$Y$. Since $I'$ and $J'$ are precisely the vanishing ideals of
the complements $X' \setminus X$ and $Y' \setminus Y$, we obtain the
condition required in~\ref{quasiaffalgdef}~(ii):
$$I' \subset \sqrt{\bangle{\varphi^{*}(J')}}. $$
Thus $\varphi \mapsto \varphi^{*}$ is in fact well defined.
Moreover, $X \mapsto (\mathcal{O}(X), \mathfrak{O}(X))$
and $\varphi \mapsto \varphi^{*}$ clearly define a contravariant
functor, and this functor is injective on morphisms.
For surjectivity, let $\mu \colon \mathcal{O}(Y) \to \mathcal{O}(X)$
be a homomorphism of quasiaffine algebras.
Let $(A',I') \in \mathfrak{O}(X)$ and $(B',J') \in \mathfrak{O}(Y)$
as in~\ref{quasiaffalgdef}~(ii).
Then $\mu$ defines a morphism $\varphi'$ from
${\rm Spec}(A')$ to ${\rm Spec}(B')$.
The condition on the ideals and Lemma~\ref{naturalpairs}~(i)
ensure that $\varphi'$ restricts to
a morphism $\varphi \colon X \to Y$.
Clearly, we have $\varphi^{*} = \mu$.
It remains to show that up to isomorphism, every quasiaffine algebra
$(A,\mathfrak{A})$ arises from a quasiaffine variety.
Let $(A',I') \in \mathfrak{A}$, set $X' := {\rm Spec}(A')$, and let $X
\subset X'$ be the open subvariety obtained by removing the zero set
of $I'$. Then $\mathcal{O}(X) = A$, and $(A',I')$ is a natural pair on
$X$. Lemma~\ref{quasiaff2closingsubalg} gives $\mathfrak{O}(X) =
\mathfrak{A}$. \endproof
We conclude this section with the observation that, restricted to
the category of quasiaffine varieties $X$ with $\mathcal{O}(X)$ of
finite type, our algebraic description collapses in a very convenient
way:
\begin{rem}\label{collaps}
For any quasiaffine algebra $(A, \mathfrak{A})$ we have
\begin{enumerate}
\item The algebra $A$ is of finite type over ${\mathbb K}$ if and only if
$(A,I) \in \mathfrak{A}$ holds with some radical ideal $I \subset
A$.
\item The quasiaffine algebra $(A,\mathfrak{A})$ arises from an affine
variety if and only if $(A,A) \in \mathfrak{A}$ holds.
\end{enumerate}
\end{rem}
\section{Freely graded quasiaffine algebras}\label{section2}
In this section, we introduce the formal framework of homogeneous
coordinate rings, namely freely graded quasiaffine algebras and their
morphisms. The geometric interpretation of these notions amounts to an
equivariant version of the equivalence of categories presented in the
preceding section.
\begin{defi}\label{freegradalgdef}
Let $(A,\mathfrak{A})$ be a quasiaffine algebra, and let $\Lambda$ be
a finitely generated abelian group.
We say that $(A,\mathfrak{A})$ is {\em freely graded\/} by
$\Lambda$ (or {\em freely $\Lambda$-graded\/}) if there is
a grading
$$ A = \bigoplus_{L \in \Lambda} A_{L}, $$
and there exists a closing subalgebra $(A',I') \in \mathfrak{A}$
admitting homogeneous elements $f_{1}, \ldots, f_{r} \in I'$ such that
$I'$ equals $\sqrt{\bangle{f_{1}, \ldots, f_{r}}}$
and every localization $A_{f_{i}}$ has in each degree $L \in
\Lambda$ a homogeneous invertible element.
\end{defi}
\begin{exam}\label{polring}
For $n \ge 2$, the polynomial ring ${\mathbb K}[T_{1}, \ldots, T_{n}]$
together with the usual ${\mathbb Z}$-grading can be made into a freely
graded quasiaffine algebra:
Let $\mathfrak{A}$ be the class of $(A,I)$, where
$I := \bangle{T_{1}, \ldots, T_{n}}$.
\end{exam}
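To see that Example~\ref{polring} satisfies Definition~\ref{freegradalgdef}, one may take the homogeneous elements $f_{i} := T_{i}$; then $I = \sqrt{\bangle{f_{1}, \ldots, f_{n}}}$ holds, and each localization $A_{T_{i}}$ contains the invertible homogeneous elements
$$
T_{i}^{L} \in A_{T_{i}}, \qquad \deg(T_{i}^{L}) = L, \qquad L \in {\mathbb Z},
$$
so every degree $L \in {\mathbb Z}$ is realized by a homogeneous unit, as required.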
The {\em weight monoid\/} of an integral domain $A$ graded by a
finitely generated abelian group $\Lambda$ is the submonoid
$\Lambda^{*} \subset \Lambda$ consisting of all weights $L \in
\Lambda$ with $A_{L} \ne \{0\}$. For the weight monoid of a freely
graded quasiaffine algebra, we have:
\begin{rem}\label{pointedweightcone}
Let $(A,\mathfrak{A})$ be a freely $\Lambda$-graded quasiaffine
algebra. Then the weight monoid $\Lambda^{*} \subset \Lambda$
of $A$ generates $\Lambda$ as a group.
\end{rem}
We turn to homomorphisms. The final notion of a morphism of
freely graded quasiaffine algebras will be given below.
First we have to consider homomorphisms that are compatible with
the structure:
\begin{defi}\label{admissiblehom}
Let the quasiaffine algebras $(A,\mathfrak{A})$ and $(B,\mathfrak{B})$
be freely graded by $\Lambda$ and $\Gamma$, respectively. A homomorphism
$\mu \colon (B,\mathfrak{B}) \to (A,\mathfrak{A})$ of quasiaffine
algebras is called {\em graded\/}, if there is a homomorphism $\t{\mu}
\colon \Gamma \to \Lambda$ with
\begin{equation}
\label{gradedhomcond}
\mu(B_{E})
\subset
A_{\t{\mu}(E)}
\quad \text{for all } E \in \Gamma.
\end{equation}
\end{defi}
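As an illustration, consider a Veronese type substitution: let
$(A,\mathfrak{A})$ be the freely ${\mathbb Z}$-graded quasiaffine algebra of
Example~\ref{polring}, let $(B,\mathfrak{B})$ be the analogous algebra
with $B = {\mathbb K}[S_{1}, \ldots, S_{N}]$, and fix $d \ge 1$. Any
assignment sending each variable $S_{j}$ to a monomial of degree $d$
in the $T_{i}$ defines an algebra homomorphism $\mu \colon B \to A$
with
$$ \mu(B_{e}) \; \subset \; A_{de} \quad \text{for all } e \in {\mathbb Z}. $$
Provided that $\mu$ is a homomorphism of the underlying quasiaffine
algebras, Condition~(\ref{gradedhomcond}) thus holds with accompanying
homomorphism $\t{\mu} \colon {\mathbb Z} \to {\mathbb Z}$, $e \mapsto de$.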
By Remark~\ref{pointedweightcone}, a graded homomorphism
$\mu \colon (B,\mathfrak{B}) \to (A,\mathfrak{A})$ of freely graded
quasiaffine algebras uniquely determines its
accompanying homomorphism $\t{\mu} \colon \Gamma \to
\Lambda$. Moreover, the composition of two graded homomorphisms is
again graded.
For the subsequent treatment of our homogeneous coordinate rings we
need a coarser concept of a morphism of freely graded quasiaffine
algebras than the notion of a graded homomorphism would yield.
This is the following:
\begin{defi}\label{pointedmorphdef}
Let the quasiaffine algebras $(A,\mathfrak{A})$ and $(B,\mathfrak{B})$
be freely graded by finitely generated abelian groups $\Lambda$ and
$\Gamma$ respectively.
\begin{enumerate}
\item Two graded homomorphisms $\mu, \nu \colon (B,\mathfrak{B})
\to (A,\mathfrak{A})$ are called {\em equivalent\/} if there is a
homomorphism $c \colon \Gamma \to A_{0}^{*}$ such that for every $E
\in \Gamma$ and every $g \in B_{E}$ we have
$$ \nu(g) = c(E) \mu(g). $$
\item A {\em morphism\/} $(B,\mathfrak{B}) \to (A,\mathfrak{A})$ of
the freely graded quasiaffine algebras $(B,\mathfrak{B})$ and
$(A,\mathfrak{A})$ is the equivalence class $[\mu]$ of a graded
homomorphism $\mu \colon (B,\mathfrak{B}) \to (A,\mathfrak{A})$.
\end{enumerate}
\end{defi}
In the setting of~(i) we shall say that $\mu$ and $\nu$
{\em differ by a character\/} $c \colon \Gamma \to A_{0}^{*}$.
Since equivalence of graded homomorphisms is compatible with
composition, this definition makes the freely graded quasiaffine
algebras into a category.
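For instance, take $(B,\mathfrak{B}) = (A,\mathfrak{A})$ to be the
freely ${\mathbb Z}$-graded quasiaffine algebra of Example~\ref{polring} and
$\mu := \operatorname{id}$. For $c_{0} \in {\mathbb K}^{*}$, the graded
homomorphism $\nu$ determined by $\nu(g) := c_{0}^{e}\, g$ for $g \in
B_{e}$ differs from $\mu$ by the character
$$ c \colon {\mathbb Z} \to A_{0}^{*} = {\mathbb K}^{*}, \qquad e \mapsto c_{0}^{e}, $$
so that $[\mu] = [\nu]$, although $\mu \ne \nu$ whenever $c_{0} \ne 1$.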
We now give a geometric interpretation of the above notions.
We assume for the rest of this section that if ${\mathbb K}$
is of characteristic $p > 0$,
then our finitely generated abelian groups $\Lambda$
have no $p$-torsion, i.e.~$\Lambda$ contains no elements of order $p$.
Under this assumption, each $\Lambda$ defines a
diagonalizable algebraic group
$$ H := {\rm Spec}({\mathbb K}[\Lambda]). $$
Recall that the characters of this group $H$ are precisely the
canonical generators $\chi^{L}$, $L \in \Lambda$, of the group
algebra ${\mathbb K}[\Lambda]$.
In fact, the assignment $\Lambda \mapsto H$ defines a
contravariant equivalence of categories, see for
example~\cite[Section~III.~8]{Bor}.
Now, suppose that a diagonalizable group $H = {\rm Spec}({\mathbb K}[\Lambda])$
acts by means of a regular map $H \times X \to X$ on a (not
necessarily affine) variety $X$. A function $f \in \mathcal{O}(X)$ is
called {\em homogeneous\/} with respect to a character
$\chi^{L} \colon H \to {\mathbb K}^{*}$ if for every $(t,x) \in H \times X$
we have
$$ f(t \! \cdot \! x) = \chi^{L}(t) f(x). $$
For $L \in \Lambda$, let $\mathcal{O}(X)_{L} \subset \mathcal{O}(X)$
denote the subset of all $\chi^{L}$-homogeneous functions.
It is well known, see for example \cite[p.~67~Lemma]{Kn}, that the
action of $H$ on $X$ defines a grading
$$ \mathcal{O}(X) = \bigoplus_{L \in \Lambda} \mathcal{O}(X)_{L}.$$
Recall that one obtains in this way a canonical correspondence
between affine $H$-varieties and $\Lambda$-graded affine algebras
(the arguments presented in~\cite[p.~11]{Do} for the case
$\Lambda = {\mathbb Z}$ also work in the general case).
We are interested in free $H$-actions on quasiaffine varieties $X$,
where {\em free\/} means that every orbit map $H \to H \! \cdot \! x$
is an isomorphism. In this situation, we have:
\begin{lemma}\label{gradedqavar2qaalg}
Let the diagonalizable group $H = {\rm Spec}({\mathbb K}[\Lambda])$ act
freely by means of a regular map $H \times X \to X$ on a quasiaffine
variety $X$.
Then the associated $\Lambda$-grading of $\mathcal{O}(X)$ makes
$(\mathcal{O}(X),\mathfrak{O}(X))$ into a freely graded quasiaffine
algebra.
\end{lemma}
\proof Let $(A'',I'')$ be any natural pair on $X$, and let $g_{1},
\ldots, g_{s}$ be a system of generators of $A''$.
Let $A' \subset \mathcal{O}(X)$ denote the subalgebra generated
by all the homogeneous components of the $g_{j}$.
Then $A'$ is graded, and according to Lemma~\ref{naturalpairs}~(ii),
we obtain a natural pair $(A',I')$ on $X$ by defining
$I' := \sqrt{\bangle{I''}}$.
Now, the $\Lambda$-grading of $A'$ comes from an $H$-action on
$X' := {\rm Spec}(A')$. This $H$-action extends the initial $H$-action on
$X$. In particular, the ideal $I' \subset A'$ is graded, because it is
the vanishing ideal of the invariant set $X' \setminus X$, see
Lemma~\ref{naturalpairs}~(i). This fact enables us to verify
the condition of Definition~\ref{freegradalgdef} for $I'$:
\goodbreak
Choose generators $L_{1}, \ldots, L_{k}$ of $\Lambda$.
Consider $x \in X$, and choose a homogeneous
$h \in I'$ with $h(x) \ne 0$.
Since $H$ acts freely, the orbit map $H \to H \! \cdot \! x$
is an isomorphism.
Thus we find for every $i$ a $\chi^{L_{i}}$-homogeneous
regular function $h_{i}$ on $H \! \cdot \! x$ with $h_{i}(x) \ne 0$.
Since $H \! \cdot \! x$ is closed in $X_{h}$, the
$h_{i}$ extend to $\chi^{L_{i}}$-homogeneous regular
functions on $X_{h}$.
For a suitable $r > 0$, the product $f := h^{r}h_{1} \ldots h_{k}$
is a regular function on $X'$ with $f \in \bangle{h}$ and hence
$f \in I'$.
By construction, $f$ is homogeneous, and we have $f(x) \ne 0$.
Moreover, the Laurent monomials
in $h_{1}, \ldots, h_{k}$ provide for each degree $L \in \Lambda$
a $\chi^{L}$-homogeneous invertible function on $X_{f}$.
Since finitely many of the $X_{f}$ cover $X$, this gives the desired
property on the ideal $I' \subset A'$.
\endproof
In order to give the equivariant version of
Proposition~\ref{quasiaffequiv}, we have to fix the notion of a
morphism of quasiaffine varieties with an action of a diagonalizable
group. This is the following:
\begin{defi}
Let $G \times X \to X$ and $H \times Y \to Y$ be algebraic group
actions. A morphism $\varphi \colon X \to Y$ is called
{\em equivariant\/} if there is a homomorphism $\t{\varphi} \colon G
\to H$ of algebraic groups such that for all
$(g,x) \in G \times X$ we have
$$\varphi(g \! \cdot \! x) = \t{\varphi}(g) \! \cdot \! \varphi(x).$$
\end{defi}
This notion of an equivariant morphism makes the quasiaffine varieties
with a free action of a diagonalizable group into a category. We
obtain the following equivariant version of
Proposition~\ref{quasiaffequiv}:
\begin{prop}\label{equivquasiaffequiv}
The assignments $X \mapsto (\mathcal{O}(X), \mathfrak{O}(X))$ and
$\varphi \mapsto \varphi^{*}$ define a contravariant equivalence from
the category of quasiaffine varieties with a free diagonalizable group
action to the category of freely graded quasiaffine algebras and
graded homomorphisms.
\end{prop}
\proof By Lemma~\ref{gradedqavar2qaalg}, the assignment $X \mapsto
(\mathcal{O}(X), \mathfrak{O}(X))$ is well defined. From
Proposition~\ref{quasiaffequiv} and the observation that
equivariant morphisms of quasiaffine varieties correspond to graded
homomorphisms of quasiaffine algebras we infer functoriality and
bijectivity on the level of morphisms.
In order to see that up to isomorphism any quasiaffine
algebra $(A,\mathfrak{A})$ which is freely graded by some $\Lambda$
arises in the above manner from a quasiaffine variety with
free diagonalizable group action, we repeat the corresponding part of
the proof of Proposition~\ref{quasiaffequiv} in an equivariant manner:
Let $(A',I')$ be as in Definition~\ref{freegradalgdef}. Let
$A'' \subset A$ be any graded subalgebra of finite type with
$A' \subset A''$, and let $I'' := \sqrt{\bangle{I'}}$.
Then $(A'',I'')$ belongs to $\mathfrak{A}$, and
the ideal $I''$ still satisfies
the condition of Definition~\ref{freegradalgdef}.
The affine variety $X'' := {\rm Spec}(A'')$ comes along with an action of
the diagonalizable group $H := {\rm Spec}({\mathbb K}[\Lambda])$ such that the
corresponding grading of $\mathcal{O}(X'') = A''$ gives back the
original $\Lambda$-grading of the algebra $A''$.
Removing the $H$-invariant zero set of $I''$ from $X''$ gives a
quasiaffine $H$-variety $X$.
By construction, the $\Lambda$-graded algebras $\mathcal{O}(X)$ and
$A$ coincide, and $(A'',I'')$ is a natural pair on $X$. Moreover, the
local existence of invertible homogeneous functions in each degree
implies that for every $x \in X$ the orbit map $H \to H \! \cdot \! x$ is
an isomorphism. In other words, the action of $H$ on $X$ is free.
\endproof
\begin{exam}
The standard ${\mathbb K}^{*}$-action on ${\mathbb K}^{n} \setminus \{0\}$, where
$n \ge 2$, has $(A,\mathfrak{A})$ of Example~\ref{polring} as its
associated freely graded quasiaffine algebra.
\end{exam}
The remaining task is to translate the notion of equivalence of
graded homomorphisms. For this let $X$ and $Y$ be quasiaffine
varieties with actions of diagonalizable groups
$H := {\rm Spec}({\mathbb K}[\Lambda])$ and $G := {\rm Spec}({\mathbb K}[\Gamma])$.
Denote by $(A,\mathfrak{A})$ and $(B,\mathfrak{B})$ the freely
graded quasiaffine algebras associated to $X$ and $Y$.
\begin{rem}\label{equivgeom}
Two graded homomorphisms $\mu, \nu \colon (B,\mathfrak{B}) \to
(A,\mathfrak{A})$ are equivalent
if and only if there is an $H$-invariant
morphism $\gamma \colon X \to G$ such that
the morphisms $\varphi, \psi \colon X \to Y$ corresponding to $\mu$
and $\nu$ satisfy $\psi(x) = \gamma(x) \! \cdot \! \varphi(x)$ for all $x \in X$.
\end{rem}
\section{Picard graded sheaves of algebras}\label{section3}
Let $X$ be an algebraic variety and denote by $\operatorname{Pic}(X)$
its Picard group.
In this section we prepare the definition of a
graded ring structure on the vector space
$$ \bigoplus_{[L] \in \operatorname{Pic}(X)} H^{0}(X,L). $$
More generally, we even need a ring structure for the
corresponding sheaves of vector spaces.
The problem is easy if $\operatorname{Pic}(X)$ is free:
Then we can realize it as a group $\Lambda$
of line bundles as in~\cite[Sec.~2]{Ha},
and we can work with the associated
$\Lambda$-graded $\mathcal{O}_{X}$-algebra $\mathcal{R}$.
If $\operatorname{Pic}(X)$ has torsion, then we can at most expect
a surjection $\Lambda \to \operatorname{Pic}(X)$ with a free group
$\Lambda$ of line bundles.
Thus the problem is to identify in a suitable manner
isomorphic homogeneous
components of the $\Lambda$-graded $\mathcal{O}_{X}$-algebra
$\mathcal{R}$.
This is done by means of shifting families
and their associated ideals
$\mathcal{I} \subset \mathcal{R}$, see~\ref{shiftfamdef}
and~\ref{associdealdef}.
The quotient $\mathcal{A} := \mathcal{R} / \mathcal{I}$
then will realize the desired ring structure.
To begin, let us recall the necessary constructions
from~\cite{Ha}.
Consider an open cover $\mathfrak{U} = (U_{i})_{i \in I}$ of~$X$.
This cover gives rise to an additive group $\Lambda(\mathfrak{U})$
of line bundles on~$X$: For each \v{C}ech cocycle
$\xi \in Z^{1}(\mathfrak{U},\mathcal{O}_{X}^{*})$, let
$L_{\xi}$ denote the line bundle obtained by gluing the products
$U_{i} \times {\mathbb K}$ along the maps
$$ (x,z) \mapsto (x, \xi_{ij}(x)z).$$
Define the sum $L_{\xi} + L_{\eta}$ of two such line bundles to be
$L_{\xi\eta} = L_{\eta\xi}$. This makes the set $\Lambda(\mathfrak{U})$
consisting of all the bundles $L_{\xi}$ into an abelian group,
which is isomorphic to the cocycle group
$Z^{1}(\mathfrak{U},\mathcal{O}_{X}^{*})$.
When we speak of a {\em group of line bundles\/} on $X$,
we think of a finitely
generated free subgroup of some group $\Lambda(\mathfrak{U})$ as
above.
Note that for any such group $\Lambda$ of line bundles, we have a
canonical homomorphism $\Lambda \to \operatorname{Pic}(X)$ to the Picard group.
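For instance, on $X := {\mathbb P}^{n}$ with its standard affine cover
$\mathfrak{U} = (U_{0}, \ldots, U_{n})$, where $U_{i} := \{T_{i} \ne
0\}$, the cocycles $\xi^{(d)} \in
Z^{1}(\mathfrak{U},\mathcal{O}_{X}^{*})$ given by
$$ \xi^{(d)}_{ij} \; := \; (T_{i}/T_{j})^{d} $$
define line bundles $L_{\xi^{(d)}}$, and $\Lambda := \{L_{\xi^{(d)}};
\; d \in {\mathbb Z}\}$ is a group of line bundles on ${\mathbb P}^{n}$ for which the
canonical homomorphism $\Lambda \to \operatorname{Pic}({\mathbb P}^{n}) \cong {\mathbb Z}$ is an
isomorphism; up to sign conventions, the bundle $L_{\xi^{(d)}}$
corresponds to the sheaf $\mathcal{O}_{{\mathbb P}^{n}}(d)$.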
We come to the graded $\mathcal{O}_{X}$-algebra associated to a
group $\Lambda$ of line bundles on a variety $X$. For each line
bundle $L \in \Lambda$, let $\mathcal{R}_{L}$ denote its sheaf of
sections. In the sequel, we shall identify $\mathcal{R}_{0}$ with the
structure sheaf $\mathcal{O}_{X}$. The
{\em graded $\mathcal{O}_{X}$-algebra\/} associated to $\Lambda$ is
the quasicoherent sheaf
$$ \mathcal{R} := \bigoplus_{L \in \Lambda} \mathcal{R}_{L}, $$
where the multiplication is defined as follows: The sections of a
bundle $L_{\xi} \in \Lambda$ over an open set
$U \subset X$ are described by families $f_{i} \in \mathcal{O}_{X}(U
\cap U_{i})$ that are compatible with the cocycle $\xi$. For
any two sections $f \in \mathcal{R}_{L}(U)$ and $f' \in
\mathcal{R}_{L'}(U)$, the product $(f_{i}f'_{i})$ of their
defining families $(f_{i})$ and $(f'_{i})$ gives us a
section $ff' \in \mathcal{R}_{L+L'}(U)$.
In the sequel, we fix an open cover
$\mathfrak{U} = (U_{i})_{i \in I}$ of~$X$
and a group $\Lambda \subset \Lambda(\mathfrak{U})$
of line bundles.
Let $\mathcal{R}$ denote the associated $\Lambda$-graded
$\mathcal{O}_{X}$-algebra.
We now introduce the notion of a shifting family for $\mathcal{R}$:
\begin{defi}\label{shiftfamdef}
Let $\Lambda_{0} \subset \Lambda$ be any subgroup of the kernel
of $\Lambda \to \operatorname{Pic}(X)$.
By a {\em $\Lambda_{0}$-shifting family\/} for $\mathcal{R}$ we
mean a family $\varrho = ( \varrho_{E} )$ of $\mathcal{O}_{X}$-module
isomorphisms $\varrho_{E} \colon \mathcal{R} \to \mathcal{R}$, where $E \in
\Lambda_{0}$, with the following properties:
\begin{enumerate}
\item for every $L \in \Lambda$ and every $E \in \Lambda_{0}$ the
isomorphism $\varrho_{E}$ maps $\mathcal{R}_{L}$ onto
$\mathcal{R}_{L+E}$,
\item for any two $E_{1}, E_{2} \in \Lambda_{0}$ we have
$\varrho_{E_{1} + E_{2}} = \varrho_{E_{2}} \circ \varrho_{E_{1}}$,
\item for any two homogeneous sections $f,g$ of $\mathcal{R}$ and
every $E \in \Lambda_{0}$ we have
$\varrho_{E}(fg) = f \varrho_{E}(g)$.
\end{enumerate}
If $\Lambda_{0}$ is the full kernel of $\Lambda \to \operatorname{Pic}(X)$, then we
also speak of a {\em full shifting family\/} for $\mathcal{R}$ instead
of a $\Lambda_{0}$-shifting family.
\end{defi}
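For a concrete example, suppose that $E = L_{\xi} \in \Lambda$ lies in
the kernel of $\Lambda \to \operatorname{Pic}(X)$, i.e.~that, in a suitable
convention, the cocycle $\xi$ is the coboundary of a cochain
$(\eta_{i})$ with $\eta_{i} \in \mathcal{O}^{*}(U_{i})$. Describing
homogeneous sections by compatible families $(f_{i})$ as above, one
checks that
$$ \varrho_{nE}\bigl((f_{i})\bigr) \; := \; (\eta_{i}^{\,n} f_{i}),
\qquad n \in {\mathbb Z}, $$
is a $\Lambda_{0}$-shifting family for $\mathcal{R}$, where
$\Lambda_{0} \subset \Lambda$ denotes the subgroup generated by $E$;
Properties~\ref{shiftfamdef}~(i)--(iii) are immediate from this
description. This is precisely the construction used in the existence
part of Lemma~\ref{shiftfamprops} below.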
The first basic observation is existence of shifting families and a
certain uniqueness statement:
\begin{lemma}\label{shiftfamprops}
Let $\Lambda_{0} \subset \Lambda$ be a subgroup of the kernel of
$\Lambda \to \operatorname{Pic}(X)$. Then there exist $\Lambda_{0}$-shifting
families for $\mathcal{R}$, and any two such families $\varrho$,
$\varrho'$ differ by a character $c \colon \Lambda_{0} \to
\mathcal{O}^{*}(X)$ in the sense that $\varrho'_{E} = c(E)\varrho_{E}$
holds for all $E \in \Lambda_{0}$.
\end{lemma}
\proof
For the existence statement, fix a ${\mathbb Z}$-basis of the subgroup
$\Lambda_{0} \subset \Lambda$.
For any member $E$ of this basis choose a bundle isomorphism
$\alpha_{E} \colon 0 \to E$ from the trivial bundle $0 \in \Lambda$
onto $E \in \Lambda$.
With respect to the cover $\mathfrak{U}$, this
isomorphism is fibrewise multiplication with certain
$\alpha_{i} \in \mathcal{O}^{*}(U_{i})$; so, on
$U_{i} \times {\mathbb K}$ it is of the form
\begin{equation}\label{localdata}
(x,z) \mapsto (x, \alpha_{i}(x)z).
\end{equation}
If $\alpha_{E'} \colon 0 \to E'$ denotes the isomorphism for a
further member of the basis of $\Lambda_{0}$, then the products
$\alpha_{i}\alpha_{i}'$
of the corresponding local data define an isomorphism
$\alpha_{E+E'} \colon 0 \to E + E'$.
Similarly, by inverting local data, we obtain isomorphisms
$\alpha_{-E} \colon 0 \to -E$.
Proceeding this way, we obtain an isomorphism
$\alpha_{E} \colon 0 \to E$ for every $E \in \Lambda_{0}$.
The local data $\alpha_{i}$ of an isomorphism
$\alpha_{E} \colon 0 \to E$ as constructed above define as well an
isomorphism $L \to L + E$ for any $L \in \Lambda$.
By shifting homogeneous sections according to
$f \mapsto \alpha_{E} \circ f$,
one obtains $\mathcal{O}_{X}$-module
isomorphisms $\varrho_{E} \colon \mathcal{R} \to \mathcal{R}$ mapping
each $\mathcal{R}_{L}$ onto $\mathcal{R}_{L+E}$.
Properties~\ref{shiftfamdef}~(ii) and~(iii) are then clear by
construction.
We turn to the uniqueness statement. Let $\varrho$, $\varrho'$ be two
$\Lambda_{0}$-shifting families for $\mathcal{R}$.
Using Property~\ref{shiftfamdef}~(iii) we
see that for every $E \in \Lambda_{0}$ and every homogeneous section
$f$ of $\mathcal{R}$, we have
$$ \varrho_{E}^{-1} \circ \varrho'_{E} (f) = \varrho_{E}^{-1} \circ
\varrho'_{E} (f \cdot 1) = f \cdot \varrho_{E}^{-1} \circ \varrho'_{E} (1). $$
Thus, setting $c(E) := \varrho_{E}^{-1} \circ \varrho'_{E} (1)$ we obtain a
map $c \colon \Lambda_{0} \to {\mathcal{O}}^{*} (X)$ such that $\varrho'_{E}$ equals
$c(E)\varrho_{E}$. Properties~\ref{shiftfamdef}~(ii) and~(iii) show
that $c$ is a homomorphism:
\begin{eqnarray*}
c(E_{1}+E_{2})
& = &
\varrho_{E_{1}+E_{2}}^{-1} \circ \varrho'_{E_{1}+E_{2}}(1) \\
& = &
\varrho_{-E_{1}} \circ \varrho_{-E_{2}} \circ \varrho'_{E_{2}}
\circ \varrho'_{E_{1}}(1) \\
& = &
\varrho_{-E_{1}} \circ \varrho_{-E_{2}} \circ \varrho'_{E_{2}}
(\varrho'_{E_{1}}(1) \! \cdot \! 1)\\
& = &
\varrho_{-E_{1}} ( \varrho'_{E_{1}}(1) c(E_{2})) \\
& = & c(E_{1})c(E_{2}). \qquad \qed
\end{eqnarray*}
\goodbreak
We shall now associate to any shifting family an ideal in the
$\mathcal{O}_{X}$-algebra $\mathcal{R}$. First we remark
that for any subgroup $\Lambda_{0} \subset \Lambda$ the algebra
$\mathcal{R}$ becomes $\Lambda/\Lambda_{0}$-graded by defining
the homogeneous component of a class $[L] \in \Lambda/\Lambda_{0}$
as
$$ \mathcal{R}_{[L]} := \sum_{L' \in [L]} \mathcal{R}_{L'}.$$
\begin{lemma}\label{associdealprops}
Let $\Lambda_{0}$ be a subgroup of the kernel of $\Lambda \to
\operatorname{Pic}(X)$, and let $\varrho$ be a $\Lambda_{0}$-shifting family.
For each given open subset $U \subset X$ consider the ideal
$$
\mathfrak{I} (U)
\; := \;
\bangle{f - \varrho_{E}(f); \;
f \in \mathcal{R}(U), \;
E \in \Lambda_{0}}
\; \subset \;
\mathcal{R}(U).
$$
Let $\mathcal{I}$ denote the sheaf associated to
the presheaf $U \mapsto \mathfrak{I}(U)$. Then $\mathcal{I}$ is
a quasicoherent ideal of $\mathcal{R}$, and we have:
\begin{enumerate}
\item Every $\mathcal{I}(U)$ is homogeneous with respect to the
$\Lambda/\Lambda_{0}$-grading of $\mathcal{R}(U)$.
\item For every $L \in \Lambda$ we have
$ \mathcal{R}_{L}(U) \cap \mathcal{I}(U) = \{0\}$.
\end{enumerate}
\end{lemma}
\proof
First note that the ideal sheaf $\mathcal{I}$ is indeed quasicoherent,
because it is a sum of images of quasicoherent sheaves.
We check~(i).
Using Property~\ref{shiftfamdef}~(iii), we see that
each ideal $\mathfrak{I}(U)$ is generated by the elements
$1 - \varrho_{E}(1)$,
where $E \in \Lambda_{0}$.
Consequently, each stalk $\mathfrak{I}_{x}$ is a
$\Lambda/\Lambda_{0}$-homogeneous ideal in $\mathcal{R}_{x}$.
This implies that the associated sheaf $\mathcal{I}$ is a
$\Lambda/\Lambda_{0}$-homogeneous ideal sheaf in $\mathcal{R}$.
We turn to~(ii). By construction, it suffices to
consider local sections $f \in \mathcal{R}_{L}(U)
\cap \mathfrak{I}(U)$. By the definition of $\mathfrak{I}(U)$ and
Property~\ref{shiftfamdef}~(iii), there exist homogeneous elements
$f_{i} \in \mathcal{R}_{L_{i}}(U)$ such that we can write $f$ as
\begin{equation}\label{minimalrep}
f = \sum_{i=1}^{r} f_{i} - \varrho_{E_{i}}(f_{i}).
\end{equation}
Since $\mathfrak{I}(U)$ is $\Lambda/\Lambda_{0}$-graded,
all the $L_{i}$ belong to the class $[L]$ in
$\Lambda/\Lambda_{0}$. Moreover, we can achieve in the
representation~(\ref{minimalrep}) of $f$ that all $f_{i}$
are of degree $L \in [L]$.
Namely, we can use Property~\ref{shiftfamdef}~(ii) to write
$f_{i} - \varrho_{E_{i}} (f_{i})$ in the form
\begin{eqnarray*}
f_{i} - \varrho_{E_{i}} (f_{i})
& = &
\varrho_{L-L_{i}} (f_{i}) -
\varrho_{E_{i} + L_{i} - L} (\varrho_{L-L_{i}} (f_{i}))
\\
& & + (- \varrho_{L-L_{i}} (f_{i})) -
\varrho_{L_{i}-L} (-\varrho_{L - L_{i}}(f_{i})).
\end{eqnarray*}
Moreover, we can choose the representation~(\ref{minimalrep}) minimal
in the sense that $r$ is minimal with the property that every $f_{i}$
is of degree $L$. Then the $E_{i}$ are nonzero and pairwise different
from each other, because otherwise we could shorten the representation
by gathering terms; note here that $\varrho_{0} = \operatorname{id}$
by Property~\ref{shiftfamdef}~(ii). Comparing the homogeneous
components of degree $L + E_{i}$ on both sides
of~(\ref{minimalrep}) yields $\varrho_{E_{i}}(f_{i}) = 0$, hence
$f_{i} = 0$, for every $i$. Thus we obtain $f=0$.
\endproof
\begin{defi}\label{associdealdef}
Let $\Lambda_{0}$ be a subgroup of the kernel of $\Lambda \to
\operatorname{Pic}(X)$, and let $\varrho$ be a $\Lambda_{0}$-shifting family
for $\mathcal{R}$.
The {\em ideal associated to $\varrho$} is the
$\Lambda/\Lambda_{0}$-graded ideal sheaf $\mathcal{I}$ of
$\mathcal{R}$ defined in~\ref{associdealprops}.
\end{defi}
With the aid of the ideal associated to a shifting family, we can pass from
$\mathcal{R}$ to more coarsely graded $\mathcal{O}_X$-algebras:
\begin{lemma}\label{gradproject}
Let $\Lambda_0 \subset \Lambda$ be a subgroup of the kernel of
$\Lambda \to \operatorname{Pic}(X)$, and let $\varrho$ be a
$\Lambda_0$-shifting family with associated ideal $\mathcal{I}$. Set
$\mathcal{A} := \mathcal{R}/\mathcal{I}$, and let $\pi \colon
\mathcal{R} \to \mathcal{A}$ denote the projection.
\begin{enumerate}
\item The $\mathcal{O}_{X}$-algebra $\mathcal{A}$ is quasicoherent,
and it inherits a
$\Lambda/\Lambda_{0}$-grading from $\mathcal{R}$ as follows
$$ \mathcal{A}
= \bigoplus_{[L] \in \Lambda/\Lambda_{0}} \mathcal{A}_{[L]}
:= \bigoplus_{[L] \in \Lambda/\Lambda_{0}}
\pi(\mathcal{R}_{[L]}). $$
\item For any $L \in \Lambda$ the induced map $\pi_{L} \colon \mathcal{R}_{L}
\to \mathcal{A}_{[L]}$ is an isomorphism of
$\mathcal{O}_{X}$-modules. In particular, we obtain
$$\mathcal{A}(X) \cong \mathcal{R}(X) / \mathcal{I}(X).$$
\item The $\mathcal{O}_{X}$-algebra $\mathcal{A}$ is locally generated
by finitely many invertible homogeneous elements.
\end{enumerate}
\end{lemma}
\proof
The first assertion follows directly from the fact that we have a
commutative diagram where the lower arrow is an isomorphism of sheaves:
$$
\xymatrix{
& {\mathcal{R}} \ar[ld]_{\pi} \ar[rd] & \\
{\mathcal{A}} \ar[rr] & &
{\bigoplus_{[L] \in \Lambda/\Lambda_{0}}
\mathcal{R}_{[L]}/\mathcal{I}_{[L]}}
}
$$
To prove (ii), note that $\pi_{L} \colon \mathcal{R}_{L} \to
\mathcal{A}_{[L]}$ is injective by Lemma~\ref{associdealprops}~(ii).
For bijectivity, we have to show that $\pi_{L}$ is stalkwise surjective.
Let $h$ be a local section of ${\mathcal{A}_{[L]}}$ near some $x \in X$.
Since ${\mathcal{A}_{[L]}}$ equals $\pi(\mathcal{R}_{[L]})$,
we may assume that $h = \pi(f)$ with a local section $f$ of
$\mathcal{R}_{[L]}$ near $x$. Write $f$ as the sum of its
$\Lambda$-homogeneous components:
$$ f = \sum_{L' \in [L]} f_{L'}. $$
For every $L' \ne L$, we subtract $f_{L'} - \varrho_{L-L'}(f_{L'})$
from $f$. The result is a local section $g$ of $\mathcal{R}_{L}$ near
$x$ which still projects onto $h$. This proves bijectivity of
$\pi_{L} \colon \mathcal{R}_{L} \to \mathcal{A}_{[L]}$.
The isomorphism on the level of global sections is then due
to the left exactness of the section functor.
To prove assertion~(iii), note that the analogous statement
holds for $\mathcal{R}$.
In fact, for small $U \subset X$, the algebra
$\mathcal{R}(U)$ is even a Laurent monomial algebra over
$\mathcal{O}(U)$.
Together with assertion (ii), this observation
gives statement~(iii).
\endproof
\begin{defi}\label{picgradalg}
Let $\Lambda_{0} \subset \Lambda$ be a subgroup of the kernel of
$\Lambda \to \operatorname{Pic}(X)$, and let $\varrho$ be a $\Lambda_{0}$-shifting family for
$\mathcal{R}$ with associated ideal $\mathcal{I}$. We call the
$\Lambda/\Lambda_{0}$-graded $\mathcal{O}_{X}$-algebra $\mathcal{A} :=
\mathcal{R}/ \mathcal{I}$ of~\ref{gradproject} the
{\em Picard graded algebra\/} associated to $\varrho$.
\end{defi}
If every global invertible function on $X$ is constant,
then the Picard graded algebras associated to different
$\Lambda_{0}$-shifting families are isomorphic
(a graded homomorphism of sheaves is defined by
requiring~\ref{gradedhomcond} on the level of sections):
\begin{lemma}\label{shiftfamunique}
Suppose $\mathcal{O}^{*}(X) = {\mathbb K}^{*}$.
Let $\Lambda_{0} \subset \Lambda$ be a subgroup of the kernel of
$\Lambda \to \operatorname{Pic}(X)$, and let $\varrho$, $\varrho'$ be $\Lambda_{0}$-shifting
families for $\mathcal{R}$ with associated ideals $\mathcal{I}$ and
$\mathcal{I}'$. Then there is a graded automorphism of
$\mathcal{R}$ having the identity of $\Lambda$ as accompanying
homomorphism and mapping $\mathcal{I}$ onto $\mathcal{I}'$.
\end{lemma}
\proof
By Lemma~\ref{shiftfamprops}, there exists a
homomorphism $c \colon \Lambda_{0} \to {\mathbb K}^{*}$ such that
$\varrho'_{E} = c(E) \varrho_{E}$ holds for all $E \in \Lambda_{0}$.
By Lemma~\ref{charext} stated
below, this homomorphism extends to a homomorphism
$c \colon \Lambda \to {\mathbb K}^{*}$. Thus we can define the
desired automorphism $\mathcal{R} \to \mathcal{R}$ by mapping a
section $f \in \mathcal{R}_{L}(U)$ to $c(L)f \in \mathcal{R}_{L}(U)$.
\endproof
In the proof of this lemma, we made use of the following
standard property of lattices:
\begin{lemma}\label{charext}
Let $\Lambda_{0} \subset \Lambda$ be an inclusion of lattices. Then
any homomorphism $\Lambda_{0} \to {\mathbb K}^{*}$ extends to a homomorphism
$\Lambda \to {\mathbb K}^{*}$.
\end{lemma}
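A sketch of the standard argument, using that ${\mathbb K}^{*}$ is a divisible
group (which holds, for instance, when ${\mathbb K}$ is algebraically closed):
by the elementary divisors theorem there exist a basis $e_{1}, \ldots,
e_{n}$ of $\Lambda$ and integers $d_{1}, \ldots, d_{k} \ge 1$ such
that $d_{1}e_{1}, \ldots, d_{k}e_{k}$ form a basis of $\Lambda_{0}$.
Given a homomorphism $c_{0} \colon \Lambda_{0} \to {\mathbb K}^{*}$, choose
$b_{l} \in {\mathbb K}^{*}$ with $b_{l}^{d_{l}} = c_{0}(d_{l}e_{l})$, and
define the desired extension $c \colon \Lambda \to {\mathbb K}^{*}$ on the
basis by $c(e_{l}) := b_{l}$ for $l \le k$ and $c(e_{l}) := 1$ for
$l > k$.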
Let us give a geometric interpretation of Picard graded algebras.
Let $\Lambda$ be a group of
line bundles on $X$ with associated $\Lambda$-graded
$\mathcal{O}_{X}$-algebra $\mathcal{R}$. Fix a subgroup $\Lambda_{0}$
of the kernel of $\Lambda \to \operatorname{Pic}(X)$ and a $\Lambda_{0}$-shifting
family $\varrho$ for $\mathcal{R}$.
Similar to the preceding section, we assume for the rest of this
section that in the case of a ground field ${\mathbb K}$ of characteristic
$p> 0$, the group $\Lambda/\Lambda_{0}$ has no $p$-torsion.
Under this hypothesis, we can show that the quotient
$\mathcal{A} := \mathcal{R}/\mathcal{I}$ by the ideal
associated to the shifting family $\varrho$ is reduced:
\begin{lemma}\label{reduced}
For every open $U \subset X$, the ideal $\mathcal{I}(U)$ is a radical
ideal in $\mathcal{R}(U)$.
\end{lemma}
\proof First note that we may assume that $U$ is a small affine open
set such that $\mathcal{R}(U)$ is of finite type. Consider
the affine variety $Z := {\rm Spec}(\mathcal{R}(U))$.
Then the $\Lambda/\Lambda_{0}$-grading of $\mathcal{R}(U) =
\mathcal{O}(Z)$ defines an action of the diagonalizable group
$H := {\rm Spec}({\mathbb K}[\Lambda/\Lambda_{0}])$ on $Z$. Let
$Z_{0} \subset Z$ denote the zero set of the ideal $\mathcal{I}(U)
\subset \mathcal{R}(U)$.
We now turn to the proof of the assertion.
Let $f \in \mathcal{O}(Z)$ with $f^{n}
\in \mathcal{I}(U)$. We have to show that $f \in \mathcal{I}(U)$
holds. Consider the decomposition of $f$ into homogeneous parts:
$$ f = \sum_{[L] \in \Lambda/\Lambda_{0}} f_{[L]}. $$
Since $f$ vanishes along the $H$-invariant zero set $Z_{0}$ of the
$\Lambda/\Lambda_{0}$-graded ideal $\mathcal{I}(U)$, also every
homogeneous component $f_{[L]}$ has to vanish along $Z_{0}$.
We show that every $f_{[L]}$ belongs to $\mathcal{I}(U)$. Since
the $f_{[L]}$ vanish along $Z_{0}$, Hilbert's Nullstellensatz tells us
that for every degree $[L]$ some power $f_{[L]}^{m}$ lies in
$\mathcal{I}(U)$. Now consider
$$ g := \sum_{L' \in [L]} f_{L'} - (f_{L'} -
\varrho_{L-L'}(f_{L'})). $$
Then $g$ is $\Lambda$-homogeneous of degree $L$. Moreover, $g$ is
congruent to $f_{[L]}$ modulo $\mathcal{I}(U)$, whence $g^{m} \in
\mathcal{I}(U)$. But any
$\Lambda$-homogeneous element of $\mathcal{I}(U)$ is trivial. Thus
$g^{m} = 0$. Hence $g=0$, which in turn implies
$f_{[L]} \in \mathcal{I}(U)$.
\endproof
In our geometric interpretation, we use the global
``${\rm Spec}$''-construction, see for example~\cite{Ht}.
Moreover, for any homogeneous
section $f \in \mathcal{A} (U)$, we denote
its zero set in $X$ by $Z(f)$. This is well defined, because the
components $\mathcal{A}_{[L]}$ are locally free due to
Lemma~\ref{gradproject}~(ii).
\begin{prop}\label{geominterp}
Let $\rq{X} := {\rm Spec}(\mathcal{A})$, and let $q \colon \rq{X} \to X$
be the canonical map.
\begin{enumerate}
\item $\rq{X}$ is a variety, $q \colon \rq{X} \to X$ is an
affine morphism, and we have $\mathcal{A} = q_{*}
\mathcal{O}_{\rq{X}}$.
\item For a homogeneous section $f \in \mathcal{A}_{[L]}(X)$ we obtain
$q^{-1}(Z(f)) = V(\rq{X};f)$, where $V(\rq{X};f)$ is
the zero set of the function $f \in \mathcal{O}(\rq{X})$.
\item If $f_{i} \in \mathcal{A}(X)$ are homogeneous sections such that
the sets $X \setminus Z(f_{i})$ are affine and cover $X$, then
$\rq{X}$ is a quasiaffine variety.
\end{enumerate}
\end{prop}
\proof To check~(i), note that $\rq{X}$ is indeed a variety, because
by Lemmas~\ref{gradproject}~(iii) and~\ref{reduced}, the algebra
$\mathcal{A}$ is reduced and locally of finite type.
The rest of~(i) are standard properties of the
global ``${\rm Spec}$''-construction for sheaves of
$\mathcal{O}_{X}$-algebras.
Assertion~(ii) is clear in the case $[L] = 0$, because then we have
$\mathcal{A}_{0} = \mathcal{O}_{X}$. For a general $[L]$, we may
reduce to the previous case by multiplying $f$ locally with invertible
sections of degree $-[L]$. Note that invertible sections exist locally
by Lemma~\ref{gradproject}~(iii).
\endproof
\section{The homogeneous coordinate ring}\label{section4}
In this section, we give the precise definition of
the homogeneous coordinate ring of a given variety,
see Definition~\ref{homcoorddef}.
Moreover, we show in Proposition~\ref{uniquehomcoord}
that the homogeneous coordinate ring is
unique up to isomorphism.
In order to fix the setup,
recall from~\cite{Bo} that a (neither necessarily separated nor
irreducible) variety $X$ is said to be {\em divisorial\/} if every $x$
\in X$ admits an affine neighbourhood of the form $X \setminus
Z(f)$ where $Z(f)$ is the zero set of a global section $f$ of some
line bundle $L$ on~$X$.
\begin{rem}
Every separated irreducible ${\mathbb Q}$-factorial variety is divisorial, and
every quasiprojective variety is divisorial.
\end{rem}
Here is the setup of this section:
We assume that the multiplicative group ${\mathbb K}^{*}$
is of infinite rank over ${\mathbb Z}$,
e.g.~${\mathbb K}$ is of characteristic zero or it is uncountable.
The variety $X$ is divisorial and satisfies
$\mathcal{O}^{*}(X) = {\mathbb K}^{*}$.
Moreover, $\operatorname{Pic}(X)$ is finitely
generated and, if ${\mathbb K}$ is of characteristic $p > 0$,
then $\operatorname{Pic}(X)$ has no $p$-torsion.
\begin{lemma}\label{ontopic}
There exists a group $\Lambda$ of line bundles on $X$ mapping
onto $\operatorname{Pic}(X)$. For any such $\Lambda$ the associated
$\Lambda$-graded $\mathcal{O}_{X}$-algebra $\mathcal{R}$ admits
homogeneous global sections $h_{1}, \ldots, h_{r}$ such that the sets
$X \setminus Z(h_{i})$ are affine and cover~$X$.
\end{lemma}
\proof
Only for the first statement is there something to show.
For this, we may assume that $\operatorname{Pic}(X)$ is not trivial.
Write $\operatorname{Pic}(X)$ as a direct sum of cyclic groups
$\Pi_{1}, \ldots, \Pi_{m}$
and fix a generator $P_{l}$ for each $\Pi_{l}$.
Choose a finite open cover $\mathfrak{U}$ of $X$
such that
each $P_{l}$ is represented
by a cocycle
$\xi^{(l)} \in Z^{1}(\mathfrak{U},\mathcal{O}^{*})$.
Choose members
$U_{i}, U_{j}$ of $\mathfrak{U}$ such that
$U_{i} \ne U_{j}$ holds and there is a point
$x_{0} \in U_{i} \cap U_{j}$.
We adjust the $\xi^{(l)}$ as follows:
By the assumption on the ground field~${\mathbb K}$, we find
$a_{1}, \ldots, a_{m} \in {\mathbb K}^{*}$ which are
linearly independent over ${\mathbb Z}$.
Define a locally constant cochain $\eta^{(l)}$
by setting $\eta^{(l)} := a_{l}/\xi^{(l)}_{ij}(x_{0})$
on $U_{i}$ and $\eta^{(l)} := 1$ on the $U_{k}$ different
from $U_{i}$.
Let $\zeta^{(l)} \in Z^{1}(\mathfrak{U},\mathcal{O}^{*})$
be the product of $\xi^{(l)}$ with the coboundary
of $\eta^{(l)}$.
Let $\Lambda \subset \Lambda(\mathfrak{U})$ be the
subgroup generated by the line bundles arising from
$\zeta^{(1)}, \ldots, \zeta^{(m)}$.
By construction $\Lambda$ maps onto $\operatorname{Pic}(X)$.
Moreover, we have
$$
\Bigl(\bigl(\zeta_{ij}^{(1)}\bigr)^{n_{1}} \ldots
\bigl(\zeta_{ij}^{(m)}\bigr)^{n_{m}}\Bigr)(x_{0})
= a_{1}^{n_{1}} \ldots a_{m}^{n_{m}} $$
for the cocycle corresponding to a general element of
$\Lambda$. By the choice of the $a_{l}$, this cocycle
can only be trivial if all exponents $n_{l}$ vanish.
It follows that $\Lambda$ is free.
\endproof
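The existence of $\mathbb{Z}$-linearly independent elements of ${\mathbb K}^{*}$ used above admits a concrete illustration; the following is a standard fact added here for orientation (assuming ${\mathbb Q} \subset {\mathbb K}$), not part of the original argument:

```latex
% Illustration, assuming \mathbb{Q} \subset \mathbb{K}; not part of the proof.
Let $a_{l} := p_{l}$, where $p_{1}, \ldots, p_{m}$ are distinct prime
numbers, viewed as elements of ${\mathbb K}^{*}$. If
$$
p_{1}^{n_{1}} \cdots p_{m}^{n_{m}} = 1
\qquad \text{with } n_{1}, \ldots, n_{m} \in {\mathbb Z},
$$
then unique factorization forces $n_{1} = \cdots = n_{m} = 0$.
Hence the $p_{l}$ are linearly independent over ${\mathbb Z}$ in the
multiplicative group ${\mathbb K}^{*}$.
```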
We fix a group $\Lambda$ of line bundles on $X$
as provided by Lemma~\ref{ontopic}, and a full shifting
family $\varrho$ for the $\Lambda$-graded $\mathcal{O}_{X}$-algebra
$\mathcal{R}$ associated to $\Lambda$.
Let $\mathcal{I}$ denote
the ideal associated to the shifting family $\varrho$.
As seen in Lemma~\ref{gradproject}~(i), the $\mathcal{O}_{X}$-algebra
$\mathcal{A} := \mathcal{R}/\mathcal{I}$ is graded by $\operatorname{Pic}(X)$.
In particular, we have a grading
$$
\mathcal{A}(X)
=
\bigoplus_{[L] \in \operatorname{Pic}(X)} \mathcal{A}_{[L]}(X).
$$
According to Lemmas~\ref{gradproject}~(ii) and~\ref{ontopic}, there
are homogeneous $f_{1}, \ldots, f_{r} \in \mathcal{A}(X)$ such
that the sets $X \setminus Z(f_{i})$ are affine and cover $X$.
Hence Proposition~\ref{geominterp}~(iii) tells us that
the variety $\rq{X} := {\rm Spec}(\mathcal{A})$ is quasiaffine.
Thus we obtain the collection $\mathfrak{A}(X)$ of
natural pairs on $\rq{X}$, which serve as closing subalgebras for
$\mathcal{A}(X) = \mathcal{O}(\rq{X})$;
see Lemma~\ref{quasiaff2closingsubalg}.
\begin{prop}\label{coringisqualg}
The pair $(\mathcal{A}(X), \mathfrak{A}(X))$ is a freely graded
quasiaffine algebra.
\end{prop}
\proof
We have to show that there is a natural pair
$(A',I') \in \mathfrak{A}(X)$ with the properties of
Definition~\ref{freegradalgdef}.
Choose homogeneous $f_{1}, \ldots, f_{r} \in \mathcal{A}(X)$ such that
the sets $X \setminus Z(f_{i})$ form an affine cover of $X$.
Let $q \colon \rq{X} \to X$ be the canonical map.
By Proposition~\ref{geominterp}~(ii),
each $\rq{X}_{f_{i}}$ equals $q^{-1}(X \setminus Z(f_{i}))$
and thus is affine.
Consequently the algebras
$$
\mathcal{A}(X)_{f_{i}}
=
\mathcal{O}(\rq{X})_{f_{i}}
=
\mathcal{O}(\rq{X}_{f_{i}})
$$
are of finite type. Thus we find a
subalgebra $A' \subset \mathcal{A}(X)$ of finite type satisfying
$A'_{f_{i}} = \mathcal{A}(X)_{f_{i}}$ for every $i$. Then
$\b{X} := {\rm Spec}(A')$ is an affine closure of $\rq{X}$, and the
vanishing ideal $I' \subset A'$ of $\b{X} \setminus \rq{X}$ is the
radical of the ideal generated by $f_{1}, \ldots, f_{r}$. It follows
that $(A',I')$ is a natural pair on $\rq{X}$.
We verify the condition on the degrees. Given $x \in \rq{X}$, choose an
$f_{i}$ with $q(x) \in U := X \setminus Z(f_{i})$. By
Lemma~\ref{gradproject}~(iii), there is a small neighbourhood
$U_{h} \subset U$
of $x$ defined by some $h \in \mathcal{O}(U)$ such that every
$[L] \in \operatorname{Pic}(X)$ admits an invertible section in
$\mathcal{A}_{[L]}(U_{h})$.
Now, $U_{h}$ equals $X \setminus Z(hf_{i}^{n})$ for some large
positive integer $n$.
Since finitely many of such $U_{h}$ cover $X$, we obtain the desired
Property~\ref{freegradalgdef} with finitely many of the homogeneous
sections $hf_{i}^{n} \in I'$.
\endproof
\begin{defi}\label{homcoorddef}
We call $(\mathcal{A}(X), \mathfrak{A}(X))$
the {\em homogeneous coordinate ring\/} of $X$.
\end{defi}
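For orientation, here is the classical model case; this is a standard fact about projective space, stated as an illustration rather than derived in the text:

```latex
% Model case (standard; an illustration only).
For $X = {\mathbb P}^{n}$ one has $\operatorname{Pic}(X) \cong {\mathbb Z}$,
generated by $[\mathcal{O}(1)]$, and the homogeneous coordinate ring is
$$
\mathcal{A}(X)
=
\bigoplus_{d \in {\mathbb Z}}
\Gamma\bigl({\mathbb P}^{n}, \mathcal{O}(d)\bigr)
\cong
{\mathbb K}[x_{0}, \ldots, x_{n}],
$$
graded so that each $x_{i}$ is homogeneous of degree $[\mathcal{O}(1)]$.
The sections $f_{i} = x_{i}$ cut out the affine charts
${\mathbb P}^{n} \setminus Z(x_{i})$.
```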
We show now that homogeneous coordinate rings are unique up to
isomorphism. This amounts to comparing Picard graded algebras
arising from different groups of line bundles on $X$.
As we shall need it later, we do this in a slightly more general
setting:
\begin{lemma}\label{differentcomps}
Let $\Lambda$ and $\Gamma$ be groups of line bundles on $X$
with associated graded $\mathcal{O}_{X}$-algebras $\mathcal{R}$
and $\mathcal{S}$.
Suppose that the image of $\Lambda \to \operatorname{Pic}(X)$ contains the image of
$\Gamma \to \operatorname{Pic}(X)$, and let $\varrho$ be a
full shifting family for $\mathcal{R}$.
\begin{enumerate}
\item There exist a graded homomorphism $\gamma \colon \mathcal{S}
\to \mathcal{R}$ with accompanying homomorphism $\t{\gamma} \colon
\Gamma \to \Lambda$ and a full shifting family $\sigma$ for $\mathcal{S}$
such that for every $K \in \Gamma$ we have $K \cong \t{\gamma}(K)$,
and, given an $F$ from the kernel of $\Gamma \to \operatorname{Pic}(X)$, there is
a commutative diagram of $\mathcal{O}_{X}$-module isomorphisms
$$
\xymatrix{
{\mathcal{S}_{K}} \ar[rr]^{\gamma_{K}} \ar[d]_{\sigma_{F}}
& &
{\mathcal{R}_{\t{\gamma}(K)}} \ar[d]^{\varrho_{\t{\gamma}(F)}} \\
{\mathcal{S}_{K+F}} \ar[rr]_{\gamma_{K+F}}
& &
{\mathcal{R}_{\t{\gamma}(K)+\t{\gamma}(F)}}
}
$$
\item Given data as in (i), let $\mathcal{B}$ and $\mathcal{A}$
denote the Picard graded algebras associated to $\sigma$ and
$\varrho$. Then one has a commutative diagram
$$
\xymatrix{
{\mathcal{S}} \ar[r]^{\gamma} \ar[d]
&
{\mathcal{R}} \ar[d] \\
{\mathcal{B}} \ar[r]_{\b{\gamma}}
&
{\mathcal{A}}
}
$$
of graded $\mathcal{O}_{X}$-algebra homomorphisms. The lower row
is an isomorphism if $\Gamma$ and $\Lambda$ have the same image in
$\operatorname{Pic}(X)$.
\end{enumerate}
\end{lemma}
\proof
Let $\Gamma \subset \Lambda(\mathfrak{V})$ and
$\Lambda \subset \Lambda(\mathfrak{U})$.
Then $\Lambda$ and $\Gamma$ embed canonically into
$\Lambda(\mathfrak{W})$, where $\mathfrak{W}$
denotes any common refinement of the open covers
$\mathfrak{U}$ and $\mathfrak{V}$.
Hence we may assume that $\Lambda$ and $\Gamma$
arise from the same trivializing cover.
Let $K_{1}, \ldots, K_{m}$ be a basis of $\Gamma$ and choose
$E_{1}, \ldots, E_{m} \in \Lambda$ in such a way that the isomorphism
class of
$E_{i}$ equals the class of $K_{i}$ in
$\operatorname{Pic}(X)$. Furthermore let $\t{\gamma} \colon \Gamma \to \Lambda$
be the homomorphism sending $K_{i}$ to $E_{i}$.
For each $i = 1, \ldots, m$, fix a bundle isomorphism
$\beta_{K_{i}} \colon K_{i} \to E_{i}$.
By multiplying the local data of these homomorphisms,
we obtain as in the proof of Lemma~\ref{shiftfamprops}
a bundle isomorphism $\beta_{K} \colon K \to \t{\gamma}(K)$
for every $K \in \Gamma$. Shifting sections via these
$\beta_{K}$ defines $\mathcal{O}_{X}$-module isomorphisms
$\gamma_{K} \colon \mathcal{S}_{K} \to \mathcal{R}_{\t{\gamma}(K)}$.
By construction, the $\gamma_{K}$ fit together to a graded homomorphism
$\gamma \colon \mathcal{S} \to \mathcal{R}$ of
$\mathcal{O}_{X}$-algebras.
Now it is clear how to define the full shifting family $\sigma$:
Take an $F$ from the kernel of $\Gamma \to \operatorname{Pic}(X)$.
Define $\sigma_{F} \colon \mathcal{S} \to \mathcal{S}$ by
prescribing on the homogeneous components the (unique) isomorphisms
$\mathcal{S}_{K} \to \mathcal{S}_{K+F}$ that make the above diagrams
commutative. It is then straightforward to verify the properties of a
shifting family for the maps $\sigma_{F}$. This settles
assertion~(i).
We prove~(ii). By the commutative diagram of~(i), the ideal associated
to $\sigma$ is mapped into the ideal associated to $\varrho$.
Hence, we obtain the desired homomorphism
$\b{\gamma} \colon \mathcal{B} \to \mathcal{A}$
of Picard graded algebras.
Now, assume that the images of $\Gamma$ and $\Lambda$ in $\operatorname{Pic}(X)$
coincide. Since every $\gamma_{K} \colon \mathcal{S}_{K} \to
\mathcal{R}_{\t{\gamma}(K)}$ is an isomorphism, we can use
Lemma~\ref{gradproject}~(ii) to see that $\b{\gamma}$ is an isomorphism in
every degree. By assumption the accompanying homomorphism of $\b{\gamma}$ is
bijective, whence the assertion follows. \endproof
The uniqueness of homogeneous coordinate rings is a direct consequence
of the Lemmas~\ref{shiftfamunique} and~\ref{differentcomps}:
\begin{prop}\label{uniquehomcoord}
Different choices of the group of line bundles and the full shifting
family define isomorphic freely graded quasiaffine algebras as
homogeneous coordinate rings for~$X$.
\end{prop}
\proof
Let $\Lambda$ and $\Gamma$ be two groups of line bundles mapping onto
$\operatorname{Pic}(X)$ and let $\mathcal{A}$ and $\mathcal{B}$ denote the Picard
graded algebras associated to choices of full shifting families for
the corresponding $\Lambda$ and $\Gamma$-graded
$\mathcal{O}_{X}$-algebras. From Lemmas~\ref{shiftfamunique}
and~\ref{differentcomps} we infer the existence of a graded
$\mathcal{O}_{X}$-algebra isomorphism
$\mu \colon \mathcal{B} \to \mathcal{A}$.
In particular, we have $\mathcal{B}(X) \cong \mathcal{A}(X)$.
We show that $\mu$ defines an isomorphism of quasiaffine
algebras.
Let $(B',J') \in \mathfrak{B}(X)$ as in~\ref{freegradalgdef}.
Then Lemma~\ref{quasiaff2closingsubalg} ensures that
$(B',J')$ is a natural pair on ${\rm Spec}(\mathcal{B})$.
We have to show that $(\mu(B'), \mu(J'))$ is a natural pair on
${\rm Spec}(\mathcal{A})$.
Since $\mu$ is an $\mathcal{O}_{X}$-module
isomorphism in every degree, we have $Z(\mu(g)) = Z(g)$
for any homogeneous $g \in J'$.
Thus Proposition~\ref{geominterp}~(ii)
tells us that $(\mu(B'), \mu(J'))$ is a natural pair.
\endproof
\section{Functoriality of the homogeneous coordinate ring}
\label{section5}
In this section, we present the first main result. It
says that the homogeneous coordinate ring defines a fully faithful
contravariant functor; see Theorem~\ref{fullyfaithful}.
But first we have to define the homogeneous coordinate ring
functor on morphisms.
The basic tool for this definition are Picard graded
pullbacks, see~\ref{picpulldef} and~\ref{picpullex}.
\goodbreak
As in the preceding section, we assume that ${\mathbb K}^{*}$ is
of infinite rank over ${\mathbb Z}$.
Moreover, in this section we assume all varieties to be
divisorial and to have only constant
invertible global functions.
Finally, we require that any variety has a
finitely generated Picard group,
and, if ${\mathbb K}$ is of characteristic $p > 0$,
this Picard group has no $p$-torsion.
For a variety $X$ fix a group $\Lambda$ of line bundles mapping onto $\operatorname{Pic}(X)$
and denote the associated $\Lambda$-graded $\mathcal{O}_{X}$-algebra by
$\mathcal{R}$.
Moreover, we fix a full shifting family $\varrho$ for
$\mathcal{R}$ and denote the resulting Picard graded algebra by
$\mathcal{A}$.
For a further variety $Y$ we denote the corresponding data by
$\Gamma$, $\mathcal{S}$, $\sigma$, and $\mathcal{B}$.
Let $\varphi \colon X \to Y$
be a morphism of the varieties $X$ and $Y$.
\begin{defi}\label{picpulldef}
By a {\em Picard graded pullback for $\varphi \colon X \to Y$}
we mean a graded homomorphism
$\mathcal{B} \to \varphi_{*}\mathcal{A}$
of $\mathcal{O}_{Y}$-algebras having the pullback map
$\varphi^{*} \colon \operatorname{Pic}(Y) \to \operatorname{Pic}(X)$ as its accompanying
homomorphism.
\end{defi}
Note that the property of being an $\mathcal{O}_{Y}$-algebra
homomorphism means in particular that in degree zero any Picard
graded pullback is the usual pullback of functions.
As a consequence, we remark:
\begin{lemma}\label{pullzero}
Let $\mu \colon \mathcal{B} \to \varphi_{*}\mathcal{A}$ be a
Picard graded pullback for $\varphi \colon X \to Y$, and let $g \in
\mathcal{B}(Y)$ be homogeneous.
Then the zero set $Z(\mu(g)) \subset X$ is the inverse
image $\varphi^{-1}(Z(g))$ of the zero set $Z(g) \subset Y$.
\end{lemma}
\proof
It suffices to prove the statement locally,
over small open $V \subset Y$.
But on such $V$, we may shift $g$ by multiplication with
invertible elements into degree zero.
This does not affect zero sets,
whence the assertion follows.
\endproof
The basic step in the definition of the homogeneous coordinate
ring functor on morphisms is to show existence of Picard graded
pullbacks and to provide a certain uniqueness property:
\begin{prop}\label{picpullex}
There exist Picard graded pullbacks for $\varphi \colon X \to Y$.
Moreover, any two Picard graded pullbacks
$\mu, \nu \colon \mathcal{B} \to \varphi_{*}\mathcal{A}$
for $\varphi$ differ by a character
$c \colon \operatorname{Pic}(Y) \to {\mathbb K}^{*}$
in the sense that $\nu_{P} = c(P) \mu_{P}$
holds for all $P \in \operatorname{Pic}(Y)$.
\end{prop}
The proof of this statement is based on two lemmas. The first one
is an extension property for shifting families:
\begin{lemma}\label{shiftext}
Let $\Pi$ be any group of line bundles on $X$, and let $\Pi_{0}
\subset \Pi_{1}$ be two subgroups of the kernel of $\Pi \to \operatorname{Pic}(X)$.
Then every $\Pi_{0}$-shifting family $\tau^{0}$ for the $\Pi$-graded
$\mathcal{O}_{X}$-algebra $\mathcal{T}$ associated to $\Pi$
extends to a $\Pi_{1}$-shifting family $\tau^{1}$ for $\mathcal{T}$
in the sense that $\tau^{1}_{E} = \tau^{0}_{E}$ holds for all
$E \in \Pi_{0}$.
\end{lemma}
\proof
Let $\vartheta$ be any $\Pi_{1}$-shifting family for $\mathcal{T}$.
Then $\vartheta$ restricts to a $\Pi_{0}$-shifting family. By
Lemma~\ref{shiftfamprops}, there is a character
$c \colon \Pi_{0} \to \mathcal{O}^{*}(X)$
with $\tau^{0}_{E} = c(E) \vartheta_{E}$ for all $E \in \Pi_{0}$.
As we assumed $\mathcal{O}^{*} (X) = {\mathbb K}^{*}$, Lemma~\ref{charext} tells us
that $c$ extends to $\Pi_{1}$. Thus, setting
$\tau^{1}_{E} := c(E) \vartheta_{E}$ for $E \in \Pi_{1}$ gives
the desired extension.
\endproof
The second lemma provides a pullback construction for shifting
families. By pulling back cocycles, we obtain the (again free)
pullback group $\varphi^{*}\Gamma$. We denote the associated
$\varphi^{*}\Gamma$-graded $\mathcal{O}_{X}$-algebra by
$\varphi^{*}\mathcal{S}$. Indeed $\varphi^{*}\mathcal{S}$
is canonically isomorphic to the ringed inverse image of
$\mathcal{S}$.
Observe that we have a canonical sheaf homomorphism
$\mathcal{S} \to \varphi_{*}\varphi^{*} \mathcal{S}$.
\goodbreak
\begin{lemma}\label{pullshift}
Let $\Gamma_{0} \subset \Gamma$ be a subgroup, and let
$\sigma$ be a $\Gamma_{0}$-shifting family for $\mathcal{S}$.
\begin{enumerate}
\item The $\mathcal{O}_{X}$-module homomorphisms
$\varphi^{*}\sigma_{F}$ define a
$\varphi^{*}\Gamma_{0}$-shifting family
$\varphi^{*}\sigma$ for $\varphi^{*}\mathcal{S}$.
\item The ideal $\mathcal{J}^{*}$ associated to $\varphi^{*}\sigma$
equals the pullback $\varphi^{*}\mathcal{J}$ of the ideal
$\mathcal{J}$ associated to $\sigma$.
\end{enumerate}
\end{lemma}
\proof
For~(i), note that the isomorphisms $\sigma_{F} \colon \mathcal{S}_{K}
\to \mathcal{S}_{K+F}$ can be written as $g \mapsto \beta_{K,F}(g)$
with unique line bundle isomorphisms $\beta_{K,F} \colon K \to K + F$.
The family $\varphi^{*} \sigma_{F}$ corresponds to the collection
$\varphi^{*} \beta_{K,F} \colon \varphi^{*} K \to \varphi^{*}K +
\varphi^{*}F$. The properties of a shifting family become clear
by writing the $\varphi^{*} \beta_{K,F}$ in terms of local data
as in~\ref{localdata}.
To prove~(ii), we just compare the stalks of the two sheaves in
question. By Property~\ref{shiftfamprops}~(iii), we obtain for any $x
\in X$:
\begin{eqnarray*}
\mathcal{J}^{*}_{x}
& = &
\bangle{1_{x} - \varphi^{*} \sigma_{F}(1_{x});
\; F \in \Gamma_{0}} \\
& = &
\bangle{\varphi^{*}(1_{\varphi(x)}) -
\varphi^{*}(\sigma_{F}(1_{\varphi(x)}));
\; F \in \Gamma_{0}} \\
& = &
(\varphi^{*}\mathcal{J})_{x}. \qquad \qed
\end{eqnarray*}
\proof[Proof of Proposition~\ref{picpullex}]
We establish the existence of Picard graded pullbacks:
As usual, let $\mathcal{I}$ and $\mathcal{J}$
denote the respective ideals associated to the shifting families
$\varrho$ for $\mathcal{R}$ and $\sigma$ for $\mathcal{S}$.
Thus the corresponding Picard graded algebras are
$\mathcal{A} = \mathcal{R}/\mathcal{I}$ and
$\mathcal{B} = \mathcal{S}/\mathcal{J}$.
By Lemma~\ref{pullshift}, we
have the $\varphi^{*}\Gamma_{0}$-shifting family $\varphi^{*}\sigma$
for $\varphi^{*}\mathcal{S}$.
Lemma~\ref{shiftext} enables us to choose a full shifting family
$\varphi^{\sharp}\sigma$ extending $\varphi^{*}\sigma$.
We denote by $\varphi^{\sharp}\mathcal{J}$ the ideal associated to
$\varphi^{\sharp}\sigma$, and write
$\varphi^{\sharp}\mathcal{B} :=
\varphi^{*}\mathcal{S}/\varphi^{\sharp}\mathcal{J}$
for the quotient.
In this notation, we have a commutative diagram of graded
$\mathcal{O}_{Y}$-algebra homomorphisms such that the unlabelled
arrows are isomorphisms in each degree:
$$
\xymatrix{
{\varphi_{*}\mathcal{R}} \ar[d]
& &
{\varphi_{*}\varphi^{*}\mathcal{S}} \ar[dl] \ar[dr] \ar[ll]
& &
{\mathcal{S}} \ar[ll]_{\varphi^{*}} \ar[d] \\
{\varphi_{*}{\mathcal{A}}}
&
{\varphi_{*}\varphi^{\sharp}\mathcal{B}} \ar[l]
&
&
{\varphi_{*}\varphi^{*}\mathcal{B}} \ar[ll]
&
{\mathcal{B}} \ar[l]^{\varphi^{*}}
}
$$
Indeed, the right square is standard. To obtain the
middle triangle, we only have to show that $\varphi^{\sharp}\mathcal{J}$
contains the kernel of $\varphi^{*}\mathcal{S} \to
\varphi^{*}\mathcal{B}$. But this follows from exactness of $\varphi^{*}$ and
Lemma~\ref{pullshift}~(ii). Existence of the left square follows from
combining Lemmas~\ref{shiftfamunique} and~\ref{differentcomps}.
Now the desired Picard graded pullback of $\varphi \colon X \to Y$ is
the composition of the lower horizontal arrows.
We turn to the uniqueness statement.
Let $P \in \operatorname{Pic}(Y)$. Since $\mathcal{B}_{P}$ locally admits
invertible sections, we can cover $Y$ by open $V \subset Y$ such that
there exist invertible sections $h \in \mathcal{B}_{P} (V)$. We define
$$c(P,V) := \nu(h)/\mu (h) \in \mathcal{A}_{0}^{*} (\varphi^{-1}(V)).$$
This does not depend on the choice of $h$: For a further invertible
$ g\in \mathcal{B}_{P} (V)$, the section $g/h$ is of degree zero.
But in degree zero any Picard graded pullback is the usual pullback
of functions.
Thus we have $\mu(h/g) = \nu(h/g)$.
Consequently, $\nu(g)/\mu(g)$ equals $\nu(h)/\mu(h)$.
\goodbreak
Similarly we see that for two open $V, V' \subset Y$ as above, the
corresponding sections $c(P,V)$ and $c(P,V')$ coincide on the
intersection $\varphi^{-1}(V) \cap \varphi^{-1}(V')$.
Thus, by gluing, we obtain a global
section $c(P) \in \mathcal{A}_{0}^{*} (X) = \mathcal{O}^{*}(X)$.
Then it is immediate to check that $P \mapsto c(P)$ has the
desired properties.
\endproof
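To make the character ambiguity in the uniqueness statement concrete, consider the model case $Y = {\mathbb P}^{n}$; this is a standard illustration, not taken from the text:

```latex
% Illustration of the uniqueness statement (standard model case).
For $Y = {\mathbb P}^{n}$ we have $\operatorname{Pic}(Y) \cong {\mathbb Z}$, so a
character $c \colon \operatorname{Pic}(Y) \to {\mathbb K}^{*}$ is determined by a
single unit $t := c(1) \in {\mathbb K}^{*}$. Two Picard graded pullbacks
$\mu, \nu$ for a morphism $\varphi \colon X \to {\mathbb P}^{n}$ then
satisfy $\nu_{d} = t^{d} \mu_{d}$ in each degree $d$; on
${\mathbb K}[x_{0}, \ldots, x_{n}]$ this is the graded rescaling
$x_{i} \mapsto t \, x_{i}$.
```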
With the help of Picard graded pullbacks we can now
make the homogeneous coordinate ring into a functor.
We fix for any morphism $\varphi \colon X \to Y$
a Picard graded pullback
$\mu_{\varphi} \colon \mathcal{B} \to \varphi_{*}\mathcal{A}$,
and denote the induced homomorphism on global sections again by
$\mu_{\varphi} \colon\mathcal{B}(Y) \to \mathcal{A}(X)$.
For a graded homomorphism $\nu$ of freely graded quasiaffine
algebras, we denote by $[\nu]$ its equivalence class in the sense of
Definition~\ref{pointedmorphdef}.
\begin{prop}\label{var2alghom}
The assignments $X \mapsto (\mathcal{A}(X), \mathfrak{A}(X))$
and $\varphi \mapsto [\mu_{\varphi}]$ define a contravariant
functor into the category of freely graded quasiaffine algebras.
\end{prop}
\proof
By Proposition~\ref{coringisqualg},
the homogeneous coordinate rings
$(\mathcal{A}(X),\mathfrak{A}(X))$
and $(\mathcal{B}(Y),\mathfrak{B}(Y))$
of $X$ and $Y$ are
in fact freely graded quasiaffine algebras.
The first task is to show that the homomorphism
$\mu_{\varphi} \colon \mathcal{B}(Y) \to \mathcal{A}(X)$
associated to a morphism $\varphi \colon X \to Y$
is a graded homomorphism of freely graded quasiaffine
algebras.
As a Picard graded pullback, $\mu_{\varphi}$ is graded and
has as accompanying homomorphism the pullback map
$\operatorname{Pic}(Y) \to \operatorname{Pic}(X)$.
Thus we are left with checking the conditions of
Definition~\ref{quasiaffalgdef}~(ii) for $\mu_{\varphi}$.
This is done geometrically in terms of the constructions of
Proposition~\ref{geominterp}:
$$ \rq{X} := {\rm Spec}(\mathcal{A}), \qquad
\rq{Y} := {\rm Spec}(\mathcal{B}), \qquad
q_{X} \colon \rq{X} \to X, \qquad q_{Y} \colon \rq{Y} \to Y. $$
Let $(B',J') \in \mathfrak{B}(Y)$ be a closing subalgebra as in
Definition~\ref{freegradalgdef}. Then Lemma~\ref{naturalpairs}
provides a closing subalgebra $(A',I') \in \mathfrak{A}(X)$
such that $\mu_{\varphi}(B') \subset A'$ holds.
We have to verify the condition on the ideals $I'$ and $J'$
required in~\ref{quasiaffalgdef}~(ii).
For this, consider the affine closures
of $\rq{X}$ and $\rq{Y}$:
$$\b{X} := {\rm Spec}(A'), \qquad \b{Y} := {\rm Spec}(B'). $$
Then the restricted homomorphism $\mu_{\varphi} \colon B' \to A'$
defines a morphism $\b{\varphi} \colon \b{X} \to \b{Y}$.
Recall from Section~\ref{section1} that $I'$ and $J'$ are
the vanishing ideals of the complements $\b{X} \setminus \rq{X}$
and $\b{Y} \setminus \rq{Y}$.
Thus we have to show that $\b{\varphi}$ maps $\rq{X}$ to $\rq{Y}$.
For this, let $g_{1}, \ldots, g_{s} \in J'$ be
homogeneous sections as in~\ref{freegradalgdef}.
Using Lemma~\ref{pullzero}, we obtain:
\goodbreak
\begin{eqnarray*}
\rq{X} & = &
\bigcup_{j=1}^{s} q_{X}^{-1}(\varphi^{-1}(Y \setminus Z(g_{j}))) \\
& = &
\bigcup_{j=1}^{s} q_{X}^{-1} (X \setminus Z(\mu_{\varphi}(g_{j}))) \\
& \subset &
\bigcup_{j=1}^{s} \b{X}_{\mu_{\varphi}(g_{j})} \\
& = &
\b{\varphi}^{-1}(\rq{Y}).
\end{eqnarray*}
\goodbreak
Finally, we check that $\varphi \mapsto [\mu_{\varphi}]$ is functorial.
Note that by Proposition~\ref{picpullex}, the class $[\mu_{\varphi}]$
does not depend on the choice of the Picard graded pullback
$\mu_{\varphi}$ of a given morphism.
From this we conclude that the identity morphism of a variety is
mapped to the identity
morphism of its homogeneous coordinate ring.
Moreover, as the composition of two Picard graded
pullbacks is a Picard graded pullback for the composition of the respective
morphisms, the above assignment commutes with composition.
\endproof
In the sequel we shall speak of the homogeneous coordinate ring functor.
We present the first main result of this article.
It tells us that the morphisms of two varieties are in one-to-one
correspondence with the morphisms of their coordinate rings:
\begin{thm}\label{fullyfaithful}
The homogeneous coordinate ring functor $X \mapsto (\mathcal{A}(X),
\mathfrak{A}(X))$ and $\varphi \mapsto [\mu_{\varphi}]$ is fully faithful.
\end{thm}
\proof Let $X$, $Y$ be varieties with associated Picard graded
algebras $\mathcal{A}$ and $\mathcal{B}$.
We denote the respective homogeneous
coordinate rings of $X$ and $Y$ for short by $(A,\mathfrak{A})$ and
$(B,\mathfrak{B})$. We construct an inverse to
$${\rm Mor}(X,Y) \to {\rm Mor}((B,\mathfrak{B}),(A,\mathfrak{A})),
\qquad \varphi \mapsto [\mu_{\varphi}].$$
So, start with any graded homomorphism $\mu \colon (B,\mathfrak{B}) \to
(A,\mathfrak{A})$ of quasiaffine algebras. Then Lemma~\ref{naturalpairs}
provides closing subalgebras $(A',I') \in \mathfrak{A}$ and $(B',J') \in
\mathfrak{B}$ such that $(B',J')$ is as in Definition~\ref{freegradalgdef}
and we have $\mu(B') \subset A'$.
Consider the affine closures $\b{X} := {\rm Spec}(A')$ and
$\b{Y} := {\rm Spec}(B')$ of $\rq{X} := {\rm Spec}(\mathcal{A})$ and $\rq{Y} :=
{\rm Spec}(\mathcal{B})$. Then $\mu$ gives rise to a morphism $\b{\varphi}
\colon \b{X} \to \b{Y}$, and restricting this morphism to $\rq{X}$
yields a commutative diagram
$$ \xymatrix{
{\rq{X}}
\ar[r]^{\rq{\varphi}}
\ar[d]_{q_{X}} &
{\rq{Y}}
\ar[d]^{q_{Y}} \\
X
\ar[r]^{\varphi} &
Y } $$
where $q_{X}$ and $q_{Y}$ denote the canonical maps, and
the morphism $\varphi \colon X \to Y$ has as its pullbacks on the
level of functions the maps obtained by restricting
the localizations $\mu_{g} \colon B_{g} \to A_{\mu(g)}$ to degree zero
over the affine sets $Y_{g}$ for homogeneous $g \in J'$.
Observe that applying the above procedure
to a further graded homomorphism
$\nu \colon (B,\mathfrak{B}) \to (A,\mathfrak{A})$ yields the same
induced morphism $X \to Y$ if and only if the homomorphisms $\mu$ and
$\nu$ are equivalent; the ``only if'' part follows from the uniqueness
statement of Proposition~\ref{picpullex} and the fact that $\mu$ and
$\nu$ define Picard graded pullbacks via localizing. Thus
$[\mu] \mapsto \varphi$ defines an injection
$${\rm Mor}((B,\mathfrak{B}),(A,\mathfrak{A})) \to {\rm Mor}(X,Y).$$
We check that this map is inverse to the one defined by the
homogeneous coordinate ring functor. Start with a morphism
$\varphi \colon X \to Y$, and let
$[\mu_{\varphi}] \colon B \to A$ be as
before Proposition~\ref{var2alghom}.
For brevity, write $\mu := \mu_{\varphi}$.
Consider a homogeneous $g \in B$
such that $V := Y \setminus Z(g)$ is affine and let $U :=
\varphi^{-1}(V)$.
Using Lemma~\ref{pullzero}, we obtain a commutative diagram
$$\xymatrix{
{A_{(\mu(g))}} \ar@{=}[d] &
{B_{(g)}} \ar[l]_{\mu_{(g)}} \ar@{=}[d] \\
{\mathcal{O}_{X}(U)} &
{\mathcal{O}_{Y}(V)} \ar[l]^{\varphi^{*}}
}
$$
where the above horizontal map is the map on degree zero induced by
the localized map $\mu_{g} \colon B_{g} \to A_{\mu(g)}$.
Since $Y$ is covered by open affine sets of the form
$V = Y \setminus Z(g)$,
we see that the morphism $X \to Y$
associated to $\mu = \mu_{\varphi}$
is again $\varphi$. \endproof
So far, our homogeneous coordinate ring functor depends on the choice
of the homogeneous coordinate ring for a given variety.
By passing to isomorphism classes, the whole
construction can even be made canonical:
\begin{rem}\label{canonical}
If one takes as target category the category of isomorphism classes of
freely graded quasiaffine algebras, then the homogeneous coordinate
ring functor $X \mapsto (\mathcal{A}(X), \mathfrak{A}(X))$, $\varphi \mapsto
[\mu_{\varphi}]$, becomes unique.
\end{rem}
\section{A first dictionary}\label{section6}
We present a little dictionary between geometric
properties of a variety and algebraic
properties of its homogeneous coordinate ring.
We consider separatedness, normality and smoothness.
Moreover, we treat quasicoherent sheaves,
and we describe affine morphisms and closed embeddings.
The setup is the same as in Sections~\ref{section4}
and~\ref{section5}:
The multiplicative group ${\mathbb K}^{*}$
of the ground field ${\mathbb K}$ is supposed to
be of infinite rank over ${\mathbb Z}$.
Moreover, $X$ is a divisorial variety with
$\mathcal{O}^{*}(X) = {\mathbb K}^{*}$ and its Picard group
is finitely generated and has no $p$-torsion if ${\mathbb K}$ is of characteristic
$p>0$.
Denote by $(A,\mathfrak{A}) := (\mathcal{A}(X),\mathfrak{A}(X))$
the homogeneous coordinate ring of $X$.
Recall that $A$ is the algebra of global sections of a suitable
Picard graded $\mathcal{O}_{X}$-algebra~$\mathcal{A}$.
In the subsequent proofs, we shall often use
the geometric interpretation provided by
Propositions~\ref{geominterp} and~\ref{equivquasiaffequiv}:
\begin{lemma}\label{geomquot}
Consider $\rq{X} := {\rm Spec}(\mathcal{A})$, the canonical map
$q \colon \rq{X} \to X$ and the diagonalizable group
$H := {\rm Spec}({\mathbb K}[\operatorname{Pic}(X)])$.
\begin{enumerate}
\item There is a unique free action of $H$ on
$\rq{X}$ such that each $\mathcal{A}_{[L]}(U)$ consists precisely
of the $\chi^{[L]}$-homogeneous functions of $q^{-1}(U)$.
\item The canonical map $q \colon \rq{X} \to X$ is a geometric
quotient for the above $H$-action on $\rq{X}$.
\end{enumerate}
\end{lemma}
\proof The first statement follows from Propositions~\ref{geominterp}
and~\ref{equivquasiaffequiv}. The second statement is due to the facts
that $\mathcal{O}_{X} = q_{*}(\mathcal{A}_{0})$ is the sheaf of
invariants and the action of $H$ is free. \endproof
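In the model case of projective space, the data of the lemma specialize to the familiar affine cone picture; this is standard and is added only for illustration:

```latex
% Model case (standard; illustration only).
For $X = {\mathbb P}^{n}$ one obtains
$\rq{X} = {\mathbb A}^{n+1} \setminus \{0\}$ and
$H = {\rm Spec}({\mathbb K}[{\mathbb Z}]) \cong {\mathbb K}^{*}$, acting by
scalar multiplication. The canonical map
$$
q \colon {\mathbb A}^{n+1} \setminus \{0\} \to {\mathbb P}^{n}
$$
is the usual projection; the action is free, and $q$ is a geometric
quotient for it.
```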
We begin with the dictionary. It is quite easy to characterize
separatedness in terms of the homogeneous coordinate ring:
\begin{prop}
The variety $X$ is separated if and only if there exists a
graded closing subalgebra $(A',I') \in \mathfrak{A}$ and
homogeneous $f_{1}, \ldots, f_{r} \in I'$
as in~\ref{freegradalgdef} such that each of the maps
$A_{(f_{i})} \otimes A_{(f_{j})} \to A_{(f_{i}f_{j})}$
is surjective.
\end{prop}
\proof
First recall that the sets
$X_{i} := X \setminus Z(f_{i})$
form an affine cover of $X$.
The above condition says precisely that
the canonical maps from
$\mathcal{O}(X_{i}) \otimes \mathcal{O}(X_{j})$
to $\mathcal{O}(X_{i} \cap X_{j})$
are surjective.
This is the usual separatedness criterion~\cite[Prop.~3.3.5]{Ke}.
\endproof
Next we show how normality of the variety $X$ is
reflected in its homogeneous coordinate ring
(for us, a normal variety is in particular
irreducible):
\begin{prop}\label{normal2normal}
The variety $X$ is normal if and only if $A$
is a normal ring.
\end{prop}
\proof
We work in terms of the geometric data
$q \colon \rq{X} \to X$ and $H$ discussed in Lemma~\ref{geomquot}.
First suppose that $A = \mathcal{A}(X)$ is a normal ring.
Then the quasiaffine variety $\rq{X}$ is normal.
It is a basic property of geometric quotients
that the variety $X$ inherits normality from $\rq{X}$,
see e.g.~\cite[p.~39]{Do}.
Suppose conversely that $X$ is normal.
Luna's Slice Theorem
tells us that $q \colon \rq{X} \to X$
is an $H$-principal bundle in the \'etale topology,
see~\cite{Lu}, and~\cite[Prop.~8.1]{BaRi}.
Thus, up to \'etale maps, $\rq{X}$ looks locally
like $X \times H$.
Since normality of local rings is stable under \'etale
maps~\cite[Prop.~I.3.17]{Mi},
we can conclude that all local rings of $\rq{X}$ are
normal.
It remains to show that $\rq{X}$ is connected.
Assume the contrary. Then there is a connected
component $\rq{X}_{1} \subset \rq{X}$ with $q(\rq{X}_{1}) = X$.
Let $H_{1} \subset H$ be the stabilizer of $\rq{X}_{1}$, that
means that $H_{1}$ is the maximal subgroup of $H$
with $H_{1} \! \cdot \! \rq{X}_{1} = \rq{X}_{1}$.
Note that we have $t \in H_{1}$ if $t \! \cdot \! x \in \rq{X}_{1}$ holds
for at least one point $x \in \rq{X}_{1}$. In particular, $H_{1}$ is a
proper subgroup of $H$.
We claim that restricting the canonical map $q \colon \rq{X} \to X$
to $\rq{X}_{1}$ yields a geometric quotient for the action of $H_{1}$
on $\rq{X}_{1}$. Indeed, $H_{1}$ acts freely on $\rq{X}_{1}$. Hence
we have a geometric quotient $\rq{X}_{1} \to \rq{X}_{1}/H_{1}$ and a
commutative diagram
$$ \xymatrix{
{\rq{X}_{1}} \ar[r]^{\subset} \ar[d]_{/H_{1}}
&
{\rq{X}} \ar[d]^{/H}_{q} \\
{\rq{X}_{1}/H_{1}} \ar[r]
&
X }
$$
The map $\rq{X}_{1}/H_{1} \to X$ is bijective, because the
intersection of a $q$-fibre with~$\rq{X}_{1}$ always is
a single $H_{1}$-orbit.
Since $X$ is normal, we may apply Zariski's Main Theorem
to conclude that $\rq{X}_{1}/H_{1} \to X$ is even
an isomorphism. This verifies our claim.
Since $H_{1}$ is a proper subgroup of $H$,
we find a nontrivial class $[L] \in \operatorname{Pic}(X)$ such that the
corresponding character $\chi^{[L]}$ of $H$ is
trivial on $H_{1}$.
We construct a defining cocycle for the class $[L]$: Cover $X$ by
small open sets $U_{i}$ admitting invertible sections
$g_{i} \in \mathcal{A}_{[L]}(U_{i})$. Then the cocycle $g_{i}/g_{j}$ defines
a bundle belonging to the class $[L]$.
On the other hand, the $g_{i}$ are $\chi^{[L]}$-homogeneous
functions on $q^{-1}(U_{i})$. So they restrict to
$H_{1}$-invariant functions on $q^{-1}(U_{i}) \cap \rq{X}_{1}$.
As seen before, $X$ is the quotient
of $\rq{X}_{1}$ by the action of $H_{1}$.
Thus we conclude that the $g_{i}/g_{j}$ form in fact a
coboundary on $X$.
Consequently, the class $[L]$ must be trivial.
This contradicts the choice of $[L]$.
\endproof
Thus we see that if $X$ is normal, then
$A$ is the ring of global functions of a
normal variety.
That means that $A$ belongs to an intensively studied
class of rings:
\begin{coro}
Let $X$ be normal. Then $A$ is a Krull ring. \endproof
\end{coro}
As we did in Proposition~\ref{normal2normal} for normality,
we can characterize smoothness in terms of the homogeneous coordinate
ring:
\begin{prop}\label{smooth}
$X$ is smooth if and only if there is
a closing subalgebra $(A',I') \in \mathfrak{A}$
such that all localizations
$A_{\mathfrak{m}}$ are regular, where $\mathfrak{m}$
runs through the maximal ideals with
$I' \not \subset \mathfrak{m}$.
\end{prop}
\proof
Let $\rq{X} := {\rm Spec}(\mathcal{A})$,
and consider the affine closure $\b{X} := {\rm Spec}(A')$
defined by any closing subalgebra $(A',I')$
of $A$.
Recall from Lemmas~\ref{naturalpairs} and~\ref{quasiaff2closingsubalg}
that $I'$ is the vanishing ideal
of the complement $\b{X} \setminus \rq{X}$.
So, the regularity of the local rings
$A_{\mathfrak{m}}$, where $I' \not \subset \mathfrak{m}$,
just means smoothness of $\rq{X}$.
The rest is similar to the proof of Proposition~\ref{normal2normal}:
The canonical map $q \colon \rq{X} \to X$
is an \'etale $H$-principal bundle for a diagonalizable
group $H$.
Thus, up to \'etale maps, $\rq{X}$ looks locally
like $X \times H$.
The assertion then follows from the fact
that regularity of local rings is
stable under \'etale maps, see~\cite[Prop.~I.3.17]{Mi}.
\endproof
We give a description of quasicoherent sheaves.
Consider a graded $A$-module $M$. Given $f_{1}, \ldots, f_{r} \in
A$ as in~\ref{freegradalgdef}, set $\mathcal{M}_{i} :=
M_{(f_{i})}$. Then these modules glue together to a quasicoherent
$\mathcal{O}_{X}$-module $\mathcal{M}$ on $X$. As in the toric case
\cite[Section~4]{ACHaSc}, one obtains:
\begin{prop}
The assignment $M \mapsto \mathcal{M}$ defines an essentially
surjective functor from the category of graded $A$-modules to the
category of quasicoherent $\mathcal{O}_{X}$-modules. \endproof
\end{prop}
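As an illustration (a standard case, recalled here only as a sketch):
for $X = {\mathbb P}^{n}$ with homogeneous coordinate ring
$A = {\mathbb K}[T_{0}, \ldots, T_{n}]$, graded by
$\operatorname{Pic}({\mathbb P}^{n}) \cong {\mathbb Z}$,
the shifted module $M := A(d)$ is sent to
$$ \mathcal{M} \; \cong \; \mathcal{O}_{{\mathbb P}^{n}}(d), $$
since over $U_{i} = \{T_{i} \ne 0\}$ the module $M_{(T_{i})}$ is free
of rank one over $A_{(T_{i})}$ with generator $T_{i}^{d}$,
and the resulting transition functions are the usual ones,
given by the $d$-th powers of the ratios $T_{i}/T_{j}$.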
We come to properties of morphisms.
Let $Y$ be a further variety like $X$,
and denote its homogeneous coordinate ring by $(B,\mathfrak{B})$.
Let $\varphi \colon X \to Y$ be any morphism.
Denote by
$[\mu] \colon (B,\mathfrak{B}) \to (A,\mathfrak{A})$
the corresponding morphism of freely graded
quasiaffine algebras.
\begin{prop}
The morphism $\varphi \colon X \to Y$ is affine if and only if there
are graded closing subalgebras $(A',I') \in \mathfrak{A}$ and $(B',J')
\in \mathfrak{B}$ satisfying~\ref{quasiaffalgdef} such that
$$ \sqrt{I'} = \sqrt{\bangle{\mu(J')}}. $$
Moreover, $\varphi \colon X \to Y$ is a closed embedding if and
only if it satisfies the above condition and, given $g_{1}, \ldots,
g_{s} \in B$ as in \ref{freegradalgdef}, every induced map
$B_{(g_{i})} \to A_{(\mu(g_{i}))}$ is surjective.
\end{prop}
\proof
Let $\mathcal{B}$ be a Picard graded
$\mathcal{O}_{Y}$-algebra with
$B = \mathcal{B}(Y)$.
Consider the affine closures $\b{X} := {\rm Spec}(A')$
of $\rq{X} := {\rm Spec}(\mathcal{A})$
and $\b{Y} := {\rm Spec}(B')$ of
$\rq{Y} := {\rm Spec}(\mathcal{B})$.
Then $\mu \colon B
\to A$ gives rise to a commutative diagram
$$ \xymatrix{
{\rq{X}}
\ar[r]^{\rq{\varphi}}
\ar[d]_{q_{X}} &
{\rq{Y}}
\ar[d]^{q_{Y}} \\
X
\ar[r]^{\varphi} &
Y } $$
The morphism $\varphi$ is affine if and only if $\rq{\varphi}$ is
affine. The latter is equivalent to the condition
$\sqrt{I'} = \sqrt{\bangle{\mu(J')}}$ of the assertion.
The supplement on embeddings is obvious. \endproof
\section{Tame varieties}\label{section7}
In this section we shed some light on the question of
which freely graded quasiaffine algebras
occur as homogeneous coordinate rings.
As before, we assume that the multiplicative group
${\mathbb K}^{*}$ of the ground field is of infinite rank over
${\mathbb Z}$. We consider varieties of the following type:
\begin{defi}
A {\em tame variety\/} is a normal divisorial
variety $X$ with $\mathcal{O}(X) = {\mathbb K}$ and
a finitely generated Picard group
$\operatorname{Pic}(X)$ having no $p$-torsion if ${\mathbb K}$ is
of characteristic $p>0$.
\end{defi}
The prototype of a tame variety lives in characteristic zero,
and is a smooth complete variety with finitely generated
Picard group.
Moreover, in characteristic zero,
every Calabi-Yau variety is tame,
and every ${\mathbb Q}$-factorial rational variety $X$ with
$\mathcal{O}(X) = {\mathbb K}$ is tame.
Finally, in characteristic zero every normal divisorial
variety with finitely generated Picard group admits
an open embedding into a tame variety.
In order to figure out the coordinate rings of tame varieties,
we need some preparation. Suppose that an algebraic group $G$ acts
on a variety $X$. Recall that a {\em $G$-linearization} of a line
bundle $E \to X$ is a fibrewise linear $G$-action on $E$ making the
projection equivariant.
By a {\em simple $G$-variety} we mean a $G$-variety for which any
$G$-linearizable line bundle is trivial.
\begin{defi}
Let $\Lambda$ be a finitely generated abelian group,
and let $(A,\mathfrak{A})$ be a freely $\Lambda$-graded
quasiaffine algebra.
\begin{enumerate}
\item We say that $(A,\mathfrak{A})$ is {\em pointed\/} if
$A$ is a normal ring, $A_{0} = {\mathbb K}$ holds,
and the set $A^{*} \subset A$ of
invertible elements is just ${\mathbb K}^{*}$.
\item We say that $(A,\mathfrak{A})$ is {\em simple\/}
if $\Lambda$ has no $p$-torsion if ${\mathbb K}$ is
of characteristic $p>0$, and the quasiaffine
${\rm Spec}({\mathbb K}[\Lambda])$-variety corresponding to
$(A,\mathfrak{A})$ is simple.
\end{enumerate}
\end{defi}
These two subclasses define full subcategories of the categories of
divisorial varieties with finitely generated Picard group and freely
graded quasiaffine algebras. The second main result of this article is
the following:
\begin{thm}\label{equivthm}
The homogeneous coordinate ring functor restricts to an
equivalence from the category of tame varieties to the category of
simple pointed algebras.
\end{thm}
\proof
Let $X$ be a tame variety with Picard group $\Pi := \operatorname{Pic}(X)$,
and denote the associated homogeneous coordinate ring
by $(A,\mathfrak{A})$.
Then $A$ is the algebra of global sections of some
Picard graded $\mathcal{O}_{X}$-algebra $\mathcal{A}$ on $X$.
We shall use again the geometric data discussed in
Lemma~\ref{geomquot}:
$$
\rq{X} := {\rm Spec}(\mathcal{A}),
\qquad
q \colon \rq{X} \to X,
\qquad
H := {\rm Spec}({\mathbb K}[\Pi]).
$$
The first task is to show that $(A,\mathfrak{A})$
is in fact pointed.
From Proposition~\ref{normal2normal} we infer that
$A$ is a normal ring.
Since we assumed $\mathcal{O}(X) = {\mathbb K}$,
and $\mathcal{O}(X)$ equals $A_{0}$, we have
$A_{0} = {\mathbb K}$.
So we have to verify $A^{*} = {\mathbb K}^{*}$.
For this, consider an arbitrary element
$f \in A^{*}$.
Choose a direct decomposition of $\Pi$ into a free part
$\Pi_{0}$ and the torsion part $\Pi_{{\rm t}}$.
This corresponds to a splitting $H = H_{0} \times H_{{\rm t}}$
with a torus $H_{0}$ and a finite group $H_{{\rm t}}$.
As an invertible element of $\mathcal{O}(\rq{X})$,
the function $f$ is necessarily $H_{0}$-homogeneous,
see~e.g.~\cite[Prop.~1.1]{Ma}.
Thus, there is a degree $P \in \Pi_{0}$ such that
$$
f = \sum_{G \in \Pi_{\rm t}} f_{P+G},
\qquad
f^{-1}
= \sum_{G \in \Pi_{\rm t}} f^{-1}_{-P+G}. $$
From the identity $ff^{-1} = 1$ we infer that
$f_{P+G}f^{-1}_{-P-G} \ne 0$ holds for at least one
component $f_{P+G}$ of $f$.
Since $\mathcal{O}(X)={\mathbb K}$ holds, we see that
the homogeneous section $f_{P+G} \in A$
is invertible.
Thus the homogeneous component $\mathcal{A}_{P+G}$
is isomorphic to~$\mathcal{O}_{X}$.
On the other hand we noted in~\ref{gradproject}~(ii)
that $\mathcal{A}_{P+G}$ is isomorphic to the sheaf of
sections of a bundle representing the class $P+G$ in
$\Pi_{0} \oplus \Pi_{{\rm t}}$.
Thus $P+G$ is trivial, and we obtain $P = 0$.
Hence all homogeneous components of $f$ have torsion degree.
By $\mathcal{O}(X) = {\mathbb K}$ this yields that $f_{G} =0$
if $G \ne 0$.
Thus we have $f \in A_{0} = {\mathbb K}$.
The next task is to show that $\rq{X}$ is a simple $H$-variety.
For this, let $\operatorname{Pic}_{H}(\rq{X})$ denote the group of equivariant
isomorphy classes of $H$-linearized line bundles on~$\rq{X}$,
compare~\cite[Sec.~2]{Kr}.
Moreover, let $\operatorname{Pic}_{\rm lin}(\rq{X}) \subset \operatorname{Pic}(\rq{X})$ denote the
subgroup of the classes of all $H$-linearizable bundles.
We have to show that $\operatorname{Pic}_{\rm lin}(\rq{X})$ is trivial.
First, we consider the possible linearizations of the trivial bundle
$\rq{X} \times {\mathbb K}$.
Using $\mathcal{O}^{*}(\rq{X}) = {\mathbb K}^{*}$, as verified before,
one directly checks that any linearization of the trivial bundle
is given by a character $\chi$ of $H$ as follows:
\begin{equation}\label{trivbdlelin}
t \! \cdot \! (x,z) := (t \! \cdot \! x, \chi(t) z)
\end{equation}
In particular, the character group $\operatorname{Char}(H)$ canonically embeds
into the group $\operatorname{Pic}_{H}(\rq{X})$.
Since~\ref{trivbdlelin} in fact describes every linearization of
the trivial bundle,
the map $\operatorname{Char}(H) \to \operatorname{Pic}_{H}(\rq{X})$
and the forgetful map $\operatorname{Pic}_{H}(\rq{X}) \to \operatorname{Pic}_{\rm lin}(\rq{X})$
fit together into an exact sequence, compare
also~\cite[Lemma~2.2]{Kr}:
\begin{equation}\label{thesequence}
\xymatrix{
0 \ar[r] &
{\operatorname{Char}(H)} \ar[r]^{} &
{\operatorname{Pic}_{H}(\rq{X})} \ar[r] &
{\operatorname{Pic}_{\rm lin}(\rq{X})} \ar[r] &
0 }
\end{equation}
Thus, to obtain $\operatorname{Pic}_{\rm lin}(\rq{X}) = 0$,
it suffices to factor the map $\operatorname{Char}(H) \to \operatorname{Pic}_{H}(\rq{X})$
through isomorphisms as follows:
\begin{equation}\label{specialsetting}
\vcenter{
\xymatrix{
{\operatorname{Char}(H)} \ar[rr]^{} \ar[dr]^{{\cong}}_{{\chi^{P} \mapsto P}}& &
{\operatorname{Pic}_{H}(\rq{X})} \\
& {\Pi} \ar[ur]_{q^{*}}^{\cong} &
}}
\end{equation}
But this is not hard: The fact that $q^{*}$ induces an
isomorphism of $\Pi = \operatorname{Pic}(X)$ and $\operatorname{Pic}_{H}(\rq{X})$ is due
to~\cite[Prop.~4.2]{Kr}.
To obtain commutativity, consider $P \in \Pi$.
Choose invertible sections
$g_i \in \mathcal{A}_{P}(U_{i})$
for small open $U_{i}$ covering $X$.
Then the class of $P$ is represented by the bundle
$P_{\xi}$ arising from the cocycle
\begin{equation}\label{char2pic}
\xi_{ij}
\; := \;
\frac{g_{j}}{g_{i}}.
\end{equation}
So the pullback class $q^{*}(P) \in \operatorname{Pic}_{H}(\rq{X})$ is
represented by the trivially linearized bundle $q^{*}(P_{\xi})$,
which in turn arises from the cocycle
\begin{equation}\label{char2pic2}
q^{*}(\xi_{ij})
\; := \;
q^{*}\left(\frac{g_{j}}{g_{i}}\right)
\; = \;
\frac{g_{j}}{g_{i}}.
\end{equation}
But on $\rq{X}$, the $g_{i}$ are ordinary invertible functions.
So we obtain an isomorphism from the representing bundle
$q^{*}(P_{\xi})$ onto the trivial bundle by locally
multiplying with $g_{i}$.
Obviously, the induced linearization on the
trivial bundle is the linearization~\ref{trivbdlelin}
for $\chi = \chi^{P}$.
Thus we proved that $(A,\mathfrak{A})$ is in fact a
simple pointed algebra. In other words, the homogeneous coordinate
ring functor restricts to the subcategories in consideration.
It remains to show that up to isomorphism, every simple pointed
algebra is the homogeneous coordinate ring of
some tame variety $X$.
So, let $(A,\mathfrak{A})$ be a simple pointed algebra, graded by
some finitely generated abelian group $\Pi$.
According to Proposition~\ref{equivquasiaffequiv}, we may
assume that $(A,\mathfrak{A})$ equals
$(\mathcal{O}(\rq{X}),\mathfrak{O}(\rq{X}))$ for
some normal quasiaffine variety $\rq{X}$ with a free
action of a diagonalizable group $H = {\rm Spec}({\mathbb K}[\Pi])$.
The action of $H$ on $\rq{X}$ admits a geometric quotient
$q \colon \rq{X} \to X$: First divide by the finite factor
$H_{{\rm t}}$ of $H$ to obtain a normal quasiaffine variety
$\rq{X}/H_{{\rm t}}$, and then divide by the induced
action of the unit component $H_{0}$ of $H$ on
$\rq{X}/H_{{\rm t}}$,
see for example~\cite[Ex.~4.2]{Do} and~\cite[Cor.~3]{Su}.
The candidate for our tame variety is $X$. Since
the structure sheaf $\mathcal{O}_{X}$ is the sheaf of invariants
$q_{*} (\mathcal{O}_{\rq X})^{H}$ and $A = {\mathcal{O}}(\rq{X})$ is pointed, we have
$\mathcal{O}(X) = {\mathbb K}$. Moreover, as a geometric quotient space of a
normal quasiaffine variety by a free diagonalizable group action,
$X$ is again normal and divisorial,
for the latter see~\cite[Lemma~3.3]{Ha1}.
To conclude the proof, we have to realize the ($\Pi$-graded)
direct image $\mathcal{A} := q_{*}(\mathcal{O}_{\rq{X}})$
as a Picard graded algebra on $X$.
First note that we have again the exact
sequence~\ref{thesequence}.
Since we assumed $\operatorname{Pic}_{\rm lin}(\rq{X}) = 0$, the
character group $\operatorname{Char}(H)$ maps isomorphically onto
$\operatorname{Pic}_{H}(\rq{X})$.
Moreover, we have a canonical map $\Pi \to \operatorname{Pic}(X)$: For a degree $P \in \Pi$
choose invertible $\chi^{P}$-homogeneous functions $g_{i} \in
\mathcal{O}(q^{-1}(U_{i}))$ with small open $U_{i} \subset X$
covering $X$, see Definition~\ref{freegradalgdef}.
As in~\ref{char2pic}, such functions define a cocycle
$\xi$ and hence we may map $P$ to the class of the bundle $P_{\xi}$. In
conclusion, we arrive again at a commutative diagram as
in~\ref{specialsetting}. In particular, $\Pi \to \operatorname{Pic}(X)$ is an isomorphism.
In fact, the construction~\ref{char2pic} allows us to define a
group $\Lambda$ of line bundles on~$X$:
As in the proof of Lemma~\ref{ontopic},
we may adjust the sections $g_{i}$ for a system of
generators $P$ of $\Pi$,
such that the corresponding cocycles $\xi$
generate a finitely generated free abelian group.
Let $\Lambda$ be the resulting group of line bundles,
and denote the associated $\Lambda$-graded
$\mathcal{O}_{X}$-algebra by
$\mathcal{R}$.
We construct a graded $\mathcal{O}_{X}$-algebra homomorphism
$\mathcal{R} \to \mathcal{A}$. The accompanying homomorphism will be
the canonical map $\Lambda \to \Pi$, associating to $L$ its class
under the identification $\Pi \cong \operatorname{Pic}(X)$. Now, the sections of
$\mathcal{R}_{L}$, where $L = P_{\xi}$,
are given by families $(h_{i})$ satisfying
$$ h_{j} = \xi_{ij} h_{i} = \frac{g_{j}}{g_{i}} h_{i} .$$
This enables us to define a map $\mathcal{R}_{L} \to \mathcal{A}_{P}$ by
sending $(h_{i})$ to the section obtained by patching together
the $h_{i}g_{i}$. Note that this indeed yields a graded homomorphism
$\mathcal{R} \to \mathcal{A}$. By construction, this homomorphism is
an isomorphism in every degree. Thus we only have to show that its
kernel is the ideal associated to a shifting family for $\mathcal{R}$.
Let $\Lambda_{0} \subset \Lambda$ denote the kernel of the canonical
map $\Lambda \to \Pi$. Then every bundle $E \in \Lambda_{0}$ admits a
global trivialization. In terms of the defining cocycle $g_{i}/g_{j}$
of $E$ this means that there exist invertible local functions
$\t{g}_{i}$ on $X$ with
$$
\frac{g_{j}}{g_{i}} = \frac{\t{g}_{j}}{\t{g}_{i}}.
$$
\goodbreak
The functions $\t{g}_{i}$ can be used to define a shifting
family: Let $L \in \Lambda$ and $E \in \Lambda_{0}$. Then the sections
of $\mathcal{R}_{L}$ are given by families $(h_{i})$ of functions that
are compatible with the defining cocycle. Thus we obtain maps
$$
\varrho_{E} \colon \mathcal{R}_{L} \to \mathcal{R}_{L+E},
\qquad
(h_{i}) \mapsto \left(\frac{h_{i}}{\t{g}_{i}}\right).
$$
By construction, the $\varrho_{E}$ are homomorphisms, and they
fit together to a shifting family
$\varrho$ for $\mathcal{R}$. It is straightforward to check that the
ideal $\mathcal{I}$ associated to $\varrho$ is precisely the kernel of
the homomorphism $\mathcal{R} \to \mathcal{A}$.
\endproof
\section{Very tame varieties}\label{section8}
Finally, we take a closer look at the case of a free Picard group.
The only assumption in this section is that the multiplicative group
${\mathbb K}^{*}$ is of infinite rank over ${\mathbb Z}$.
But even this could be weakened, see
the concluding Remark~\ref{verytame}.
\begin{defi}
A {\em very tame\/} variety is a normal divisorial variety with
finitely generated free Picard group and only constant functions.
\end{defi}
Examples of very tame varieties are Grassmannians and all smooth
complete toric varieties. On the algebraic side we work with the following
notion:
\begin{defi}
A {\em very simple\/} algebra is a freely $\Lambda$-graded quasiaffine
algebra $(A,\mathfrak{A})$ such that
\begin{enumerate}
\item the grading group $\Lambda$ of $(A,\mathfrak{A})$ is free,
\item $A$ is normal, and we have $A_{0} = {\mathbb K}$ and $A^{*} = {\mathbb K}^{*}$,
\item the quasiaffine variety associated to $(A,\mathfrak{A})$
has trivial Picard group.
\end{enumerate}
\end{defi}
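A basic example, stated here only for orientation: for $n \ge 1$,
the polynomial ring $A = {\mathbb K}[T_{0}, \ldots, T_{n}]$ with its
standard ${\mathbb Z}$-grading, $\deg(T_{i}) = 1$, is very simple.
Indeed, the grading group ${\mathbb Z}$ is free, $A$ is normal with
$A_{0} = {\mathbb K}$ and $A^{*} = {\mathbb K}^{*}$, and the
associated quasiaffine variety is ${\mathbb K}^{n+1} \setminus \{0\}$,
whose Picard group is trivial, because removing the origin does not
affect divisor classes:
$$ \operatorname{Pic}({\mathbb K}^{n+1} \setminus \{0\})
\; \cong \;
\operatorname{Cl}({\mathbb K}^{n+1})
\; = \; 0. $$
The corresponding very tame variety is of course ${\mathbb P}^{n}$.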
Again, very tame varieties and very simple algebras form subcategories,
and we have an equivalence theorem:
\begin{thm}
The homogeneous coordinate ring functor restricts to an equivalence of
the category of very tame varieties with the category of very simple
algebras.
\end{thm}
\proof
Let $X$ be a very tame variety.
We only have to show that the quasiaffine $H$-variety
$\rq{X}$ corresponding to the homogeneous coordinate ring of $X$
has trivial Picard group.
Since $\rq{X}$ is normal and $H$ is a torus,
every line bundle on $\rq{X}$ is $H$-linearizable,
see~\cite[Remark p.~67]{Kn}.
But from Theorem~\ref{equivthm}, we know that every
$H$-linearizable bundle on $\rq{X}$ is trivial.
\endproof
In the setting of very tame varieties, we can go further with the
dictionary presented in Section~\ref{section6}. The first
remarkable statement is that very tame varieties produce unique
factorization domains:
\begin{prop}\label{freefactorial}
Let $X$ be a very tame variety with homogeneous coordinate ring
$(A, \mathfrak{A})$.
Then $X$ is locally factorial if and only if
$A$ is a unique factorization domain.
\end{prop}
\proof
Let $A = \mathcal{A}(X)$ with some Picard graded
$\mathcal{O}_{X}$-algebra $\mathcal{A}$, and
the geometric quotient $q \colon \rq{X} \to X$ provided by
Lemma~\ref{geomquot}.
Since $\operatorname{Pic}(X)$ is free, we divide by a torus $H$.
Thus $q \colon \rq{X} \to X$ is an $H$-principal bundle
with respect to the Zariski topology.
In particular, $X$ is locally factorial if and only if
$\rq{X}$ is so.
But $\rq{X}$ is locally factorial if and only if
$A$ is a factorial ring, because we have $\operatorname{Pic}(\rq{X}) = 0$.
\endproof
Next we treat products. Let $X$ and $Y$ be very tame varieties
with homogeneous coordinate rings $(A,\mathfrak{A})$ and
$(B,\mathfrak{B})$.
Fix closing subalgebras $(A',I') \in \mathfrak{A}$ and
$(B',J') \in \mathfrak{B}$,
as in~\ref{freegradalgdef}, and consider the algebra
$$ A \boxtimes B
:= \bigcap_{f} (A' \otimes_{{\mathbb K}} B')_{f}
= \bigcap_{f} (A \otimes_{{\mathbb K}} B)_{f}, $$
where the intersections are taken in the quotient field of $A'
\otimes_{{\mathbb K}} B'$ and $f$ runs through the elements of the form $g
\otimes h$ with homogeneous $g \in I'$ and $h \in J'$.
Now $A$ and $B$ are graded, say by $\Lambda$ and $\Gamma$.
These gradings give rise to a
$(\Lambda \times \Gamma)$-grading of $A \boxtimes B$.
Moreover,
$$(A' \otimes_{{\mathbb K}} B', \sqrt{I' \otimes_{{\mathbb K}} J'})$$
is a closing subalgebra of $A \boxtimes B$. Let $\mathfrak{A}
\boxtimes \mathfrak{B}$ denote the equivalence class of this closing
subalgebra. Then we obtain:
\begin{prop}\label{products}
Let $X$ and $Y$ be locally factorial very tame varieties.
Then $X \times Y$ is locally factorial and very tame with
homogeneous coordinate ring $(A \boxtimes B, \mathfrak{A}
\boxtimes \mathfrak{B})$.
Moreover, if $A$ and $B$ are of finite
type over ${\mathbb K}$, then $A \boxtimes B$ equals $A \otimes_{{\mathbb K}} B$.
\end{prop}
\proof
First note that for any two quasiaffine varieties
$\rq{X}$ and $\rq{Y}$ with free diagonalizable group
actions, their product $\rq{X} \times \rq{Y}$ is again
such a variety.
Moreover, if $\rq{X}$ and $\rq{Y}$ have only constant
invertible functions, then so does $\rq{X} \times \rq{Y}$.
If $\rq{X}$ and $\rq{Y}$ are additionally
locally factorial with trivial Picard groups,
then the same holds for $\rq{X} \times \rq{Y}$,
use e.g.~\cite[Prop.~1.1]{FI}.
Now, let $\rq{X} := {\rm Spec}(\mathcal{A})$ and
$\rq{Y} := {\rm Spec}(\mathcal{B})$.
By Proposition~\ref{freefactorial} both are locally factorial.
By construction
$(A \boxtimes B, \mathfrak{A} \boxtimes \mathfrak{B})$
is the freely graded quasiaffine algebra corresponding
to the product $\rq{X} \times \rq{Y}$.
Thus the above observations and
Proposition~\ref{equivquasiaffequiv} tell us that
it is a coproduct in the category of simple pointed algebras.
Hence the assertion follows from Theorem~\ref{equivthm}.
The second statement is an
easy consequence of Remark~\ref{collaps}~(i).
\endproof
\begin{coro}
Let $X$ and $Y$ be locally factorial very tame varieties.
Then $\operatorname{Pic}(X \times Y)$ is isomorphic to
$\operatorname{Pic}(X) \times \operatorname{Pic}(Y)$. \endproof
\end{coro}
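For instance, taking $X = {\mathbb P}^{m}$ and $Y = {\mathbb P}^{n}$,
both locally factorial and very tame,
Proposition~\ref{products} yields the familiar bigraded description
$$ A \boxtimes B
\; = \;
{\mathbb K}[S_{0}, \ldots, S_{m}] \otimes_{{\mathbb K}} {\mathbb K}[T_{0}, \ldots, T_{n}], $$
graded by ${\mathbb Z} \times {\mathbb Z} \cong \operatorname{Pic}({\mathbb P}^{m} \times {\mathbb P}^{n})$;
here $A \boxtimes B$ equals $A \otimes_{{\mathbb K}} B$,
since both factors are of finite type over ${\mathbb K}$.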
We give an explicit example
emphasizing the role of Proposition~\ref{freefactorial}.
We assume that the ground field ${\mathbb K}$ is not of characteristic two.
Consider the prevariety $X$ obtained by gluing two copies of
the projective line
${\mathbb P}_{1}$ along the common open subset ${\mathbb K}^{*} \setminus \{1\}$.
We think of $X$ as the projective line with three doubled points,
namely
$$ 0, \; 0', \quad 1, \; 1', \quad \infty, \; \infty'.$$
Note that $X$ is smooth and divisorial. Moreover, $\operatorname{Pic}(X)$ is
isomorphic to ${\mathbb Z}^{4}$. Thus we obtain in particular that $X$ is
very tame. Let $(\mathcal{A}(X),\mathfrak{A}(X))$ denote the homogeneous
coordinate ring of $X$. We show:
\begin{prop}\label{example}
$\mathcal{A}(X)
\cong
{\mathbb K}[T_{1}, \ldots, T_{6}]/\bangle{T_{1}^{2} + \ldots + T_{6}^{2}}$.
\end{prop}
Before giving the proof, let us remark that the ring $\mathcal{A}(X)$
is a classical example of a singular factorial affine algebra.
In view of our results, factoriality is a consequence of
Proposition~\ref{freefactorial}.
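Let us also indicate why the quadric arising in the proof below may be
brought into the diagonal form of the assertion: since ${\mathbb K}$ is
algebraically closed of characteristic different from two, any product
of two variables is a sum of two squares, for instance
$$ ST
\; = \;
\left( \frac{S+T}{2} \right)^{2} + \left( i\,\frac{S-T}{2} \right)^{2},
\qquad i^{2} = -1, $$
so a linear change of coordinates carries the relation
$T_{2}T_{3} + T_{5}T_{4} - T_{6}T_{1}$ obtained below into
$T_{1}^{2} + \ldots + T_{6}^{2}$.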
\proof[Proof of Proposition~\ref{example}]
First observe that we may also realize $\operatorname{Pic}(X)$ as a subgroup
$\Lambda$ of the group of Cartier divisors of $X$. For example,
$\operatorname{Pic}(X)$ is isomorphic to the group $\Lambda$ generated by
$$
D_{0} := \{0\},
\qquad
D_{1} := \{1\},
\qquad
D_{1'} := \{1'\},
\qquad
D_{\infty} := \{\infty\}.$$
For any Cartier divisor $D$ on $X$, let $\mathcal{A}_{D}$ denote its
sheaf of sections. Then the homogeneous coordinate ring
$\mathcal{A}(X)$ is the direct sum of the $\mathcal{A}_{D}(X)$,
where $D \in \Lambda$. Consider the following homogeneous elements
of $\mathcal{A}(X)$:
$$
\begin{array}{ll}
f_{1} := 1 \in \mathcal{A}_{D_{0}} (X),
&
f_{2} := 1 \in \mathcal{A}_{D_{1}} (X), \cr
f_{3} := 1 \in \mathcal{A}_{D_{1'}}(X),
&
f_{4} := 1 \in \mathcal{A}_{D_{\infty}}(X), \cr
f_{5} := \bigl( \frac{1}{z-1} \bigr)
\in \mathcal{A}_{D_{1} + D_{1'} - D_{\infty}}(X),
&
f_{6} := \bigl( \frac{z}{z-1} \bigr)
\in \mathcal{A}_{D_{1} + D_{1'} - D_{0}}(X).
\end{array}
$$
Let $\varphi$ be the algebra homomorphism ${\mathbb K}[T_{1}, \dots, T_{6}] \to
\mathcal{A}(X)$ sending $T_{i}$ to $f_{i}$. It is elementary to check
that $\varphi$ is surjective. Since we assumed ${\mathbb K}$ not to be of
characteristic two, it suffices to show that the kernel of $\varphi$
is the ideal generated by
$$
Q := T_{2}T_{3} + T_{5} T_{4} - T_{6} T_{1}.
$$
An explicit calculation shows that the $f_{i}$
fulfil the claimed relation; that means that $Q$
lies in the kernel of $\varphi$.
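In terms of the affine coordinate $z$, this calculation reads
$$ f_{2}f_{3} + f_{5}f_{4} - f_{6}f_{1}
\; = \;
1 + \frac{1}{z-1} - \frac{z}{z-1}
\; = \;
1 - \frac{z-1}{z-1}
\; = \; 0. $$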
Conversely, consider an arbitrary element $R$ of the kernel of $\varphi$.
Then there are $r_{j} \in {\mathbb K}[T_{1}, \dots, T_{5}]$ such that
$R$ is of the form
$$
R = \sum_{j=0}^{s} r_{j} T_{6}^{j}.
$$
We proceed by induction on $s$.
For $s = 0$ the fact that $f_{1}, \dots, f_{5}$ are algebraically
independent implies $R = 0$.
For $s > 0$, note first that $\varphi(r_{j})$
is nonnegative in $D_{0}$, in the sense that
its component in any degree containing a multiple
$nD_{0}$ with $n < 0$ is trivial.
Since $f_{6}$ is negative in $D_{0}$, and $f_{1}$ is
the only generator of $\mathcal{A}(X)$
which is strictly positive in degree $D_{0}$,
we can write $r_{j} = \t{r}_{j} T_{1}^{j}$.
Hence we obtain a representation
$$ R = \sum_{j=0}^{s} \t{r}_{j} T_{1}^{j}T_{6}^{j}.$$
The element
$\t{r}_{s} ((T_{1}T_{6})^{s} - (T_{2}T_{3} + T_{4}T_{5})^{s})$
is a multiple of~$Q$.
In particular, it belongs to the kernel of $\varphi$.
Subtracting it from $R$, we obtain
$$ R'= \sum_{j=0}^{s-1} r_{j}' T_{1}^{j} T_{6}^{j},$$
with $r_{j}' = \t{r}_{j}$ for $j>0$ and $r_{0}' = \t{r}_{0} +
\t{r}_{s}(T_{2}T_{3} + T_{4}T_{5})^{s}$.
Applying the induction hypothesis to $R'$
yields that $R'$, and hence $R$, is a multiple of~$Q$.
\endproof
Finally, let us note that all our statements on very tame varieties
hold under more general assumptions.
This is due to the fact
that free Picard groups can always be realized by (free)
groups of line bundles.
Hence in this case we do not need shifting families to define
the homogeneous coordinate ring.
This means:
\begin{rem}\label{verytame}
For very tame varieties $X$, the results of this article
hold over any algebraically closed ground field ${\mathbb K}$,
and one might weaken the assumption $\mathcal{O}(X) = {\mathbb K}$
to $\mathcal{O}^{*}(X) = {\mathbb K}^{*}$.
\end{rem}
\end{document}
\begin{document}
\begin{frontmatter}
\title{A Conversation with George G. Roussas}
\runtitle{A Conversation with George G. Roussas}
\begin{aug}
\author[a]{\fnms{Debasis} \snm{Bhattacharya}\ead[label=e1]{debasis\[email protected]}} \and
\author[b]{\fnms{Francisco J.} \snm{Samaniego}\corref{}\ead[label=e2]{[email protected]}}
\runauthor{D. Bhattacharya and F.~J. Samaniego}
\affiliation{Visva-Bharati University, University of California and University of California}
\address[a]{Debasis Bhattacharya is Professor of Statistics,
Visva-Bharati University, West Bengal, India, and a frequent visitor in
the Department of Statistics, University of California, Davis
\printead{e1}.}
\address[b]{Francisco J. Samaniego is Distinguished Professor,
Department of Statistics, University of California, Davis,
California 95616, USA
\printead{e2}.}
\end{aug}
\begin{abstract}
George G. Roussas was born in the city of Marmara in
central Greece, on June 29, 1933. He received a B.A. with high honors in
Mathematics from the University of Athens in 1956, and a Ph.D. in
Statistics from the University of California, Berkeley, in 1964. In
1964--1966, he served as Assistant Professor of Mathematics at the
California State University, San Jose, and he was a faculty member of
the Department of Statistics at the University of Wisconsin, Madison, in
1966--1976, starting as an Assistant Professor in 1966, becoming a
Professor in 1972. He was a Professor of Applied Mathematics and
Director of the Laboratory of Applied Mathematics at the University of
Patras, Greece, in 1972--1984. He was elected Dean of the School of
Physical and Mathematical Sciences at the University of Patras in 1978,
and Chancellor of the university in 1981. He served for about three
years as Vice President-Academic Affairs of the then new University of
Crete, Greece, in 1981--1985. In 1984, he was a Visiting Professor in the
Intercollege Division of Statistics at the University of California,
Davis, and he was appointed Professor, Associate Dean and Chair of the
Graduate Group in Statistics in the same university in 1985; he served
in the two administrative capacities in 1985--1999. He has been an elected
member of the International Statistical Institute since 1974, a Fellow
of the Royal Statistical Society since 1975, a Fellow of the Institute
of Mathematical Statistics since 1983, and a Fellow of the American
Statistical Association since 1986. He served as a member of the Council
of the Hellenic Mathematical Society, and as President of the Balkan
Union of Mathematicians. He has been a Distinguished Professor of Statistics
at the University of California, Davis, since 2003, the Chair of the
Advisory Board of the ``Demokritos Society of America'' (a Think Tank)
since 2007, a Fellow of the American Association for the Advancement of
Science since 2008, and a Corresponding Member of the Academy of Athens
in the field of Mathematical Statistics, elected by the membership in
the plenary session of April 17, 2008.
\end{abstract}
\begin{keyword}
\kwd{Personal and professional life}
\kwd{milestones}
\kwd{Marmara}
\kwd{Thessaloniki}
\kwd{Athens}
\kwd{Berkeley}
\kwd{Madison}
\kwd{Patras}
\kwd{Davis}.
\end{keyword}
\end{frontmatter}
This conversation took place in George Roussas' office at the University
of California, Davis, on the 15th of May, 2009.
\section*{Early Years and Family Background}
\textbf{Debasis and Frank}: George, it's a pleasure to have this
opportunity to chat with you about your life and career. We're coming at
this conversation from different angles, one of us as a regular research
collaborator over the last ten years and the other as a long time
departmental colleague. Our common ground is that we are both long-time
friends and admirers.
Let's start at the beginning. Tell us a bit about your early days.
\textbf{George}: Let me say, first, that I'm greatly honored that you
asked me to have this conversation, and I have been very much looking
forward to it.
I was born in the city of Marmara, broadly in a family of educators.
Marmara is a small community (with a maximum population of about 1350) on
the Greek mainland. It is widely thought to be within the location of
Achilles' ancient kingdom. I attended the elementary school in Marmara
and in Thessaloniki, where part of my family was. My high school
education was also divided, started in Thessaloniki and completed in
Athens. The schooling was highly structured, as was typical in Greece,
and very rigorous. The environment in Marmara was idyllic, and I still
have very fond memories of it.
\begin{figure}
\caption{George Roussas (in the middle of front row) in the 5th or 6th grade in Thessaloniki,
1944--1945.}
\end{figure}
\textbf{Debasis and Frank}: What can you tell us about your family
background?
\textbf{George}: My paternal grandparents had three sons (my father Gregory and my uncles
Hercules and Constantine). These two uncles obtained university degrees,
but my father was business oriented. In the early 1920's he left Marmara
and went to Thessaloniki, where he entered in the dairy business. He had
considerable success in this, holding a prominent place in the
distribution business in Thessaloniki for almost two decades. Indeed, he
was a self-made millionaire! My parents (Gregory and Maria) had four
children, daughters Aggeliki, Demetra, and Stella, and myself. Our
family moved to Thessaloniki, and later to Athens, ``in stages,'' as the
locations where my father's varied business interests were centered
changed over the years. Of course, a civil conflict in the country had
its influence on this as well.
My father was a product of the classical European liberalism, which in
the 1920's and early 1930's was well represented in Greece by a
remarkable statesman, Eleftherios Venizelos. My father was an active
member of the liberal party and a staunch supporter of Venizelos. He
retained this position throughout the late 1930's, when Greece was run
by a politician turned dictator, and during World War II, and later,
when Greece was savaged by a civil war. His political liberalism and
outspokenness cost him dearly. During the German occupation of the
country, he was sent to prison (released as a political prisoner at the
end of the war, upon the liberation of the country). During part of the
civil war, he was exiled to a remote deserted island by the governing
party. The fact that my father saw some glitter of light in the
repressive soviet system, such as the availability of abundant
opportunities for well-qualified students to pursue their educational
goals, did not sit well with the party in power at the time. From a
financial viewpoint, he believed strongly in currency rather than in
property. Consequently, his hard-earned ``fortune'' became worthless
with the nullification of the Greek currency during World War II, and a
millionaire became virtually penniless! It was in this kind of
environment in which I grew up. This environment shaped my determination
to distance myself from any business per se and to pursue education to
its highest level possible.
\section*{Becoming a Mathematician while Searching for Ithaca}
\textbf{Debasis and Frank}: What were your main interests when you
entered college? Did you have strong feelings about what you wanted to
specialize in?
\textbf{George}: In high school, I developed a strong affinity to the
humanities and social sciences, with margi\-nal interest in mathematics
and physical sciences. Soon, I~discovered that any weaknesses in mathematical and
physical sciences would deprive me of many options in later years. So, I
decided to intensify my efforts, and graduated with a strong record in
all my subjects. This standing put me on a solid position to compete for
a position in the air force academy; it was my youthful dream to become
an air force officer. But that dream would never come to fruition. In
addition to succeeding in a competitive examination, I would also have
to have the written consent of both of my parents. I~thought I could
talk my father into it, but my mother was adamantly opposed to the idea.
Instead, I was advised by my uncle Hercules (the dean of the
classicists, as he was often referred to) to take the entrance
examination in the department of mathematics at the University of
Athens. Reluctantly, I took his advice, but I~never checked the results
of the entrance examination. My fixation was still with the air force,
and at this time, I targeted the aeronautical engineering school of the
air force. However, there was a problem: more than 300 candidates were
competing for only 6--8 positions! In view of these imposing odds, my
parents did not attempt to
dissuade me from preparing for such a competition, and actually taking
the examination. Why should they? It was, clearly, a hopeless effort!
For about a year, I exhaustively studied math and science. When the
examination time arrived, I was an enthusiastic and determined
participant. In military schools, the exams were taken serially, and
only the successful participants in one subject were allowed to continue
with the next subject. In this manner, I reached the last examination in
chemistry, which was taken by fewer than two dozen people. I later
learned that I had earned the top overall score on the examination.
And it was here when the drama began. Succeeding in the examinations was
extremely tough, but that was only part of the admissions routine. The
candidates for all military schools, and, in particular, for such an
elite institution as the air force aeronautical engineering school, also
had to be certified on their political beliefs, on the basis of several
degrees of family connections. It was here where the sorry political
past of my father entered the picture. As became known later, the
disqualifying certificate arrived at the examination committee's
headquarters right after the examination papers in chemistry were
corrected. It was the duty of the committee to flunk me, no explanations
provided. The chairman of the committee, an air force colonel-engineer,
took it upon himself not to post the results. My uncle Hercules, who was
highly regarded and had many influential acquaintances, tried vigorously
to obtain an exception for me, unfortunately, to no avail. He was given
to understand that there would be dire political consequences if the
effort to gain my admission to the elite school of aeronautical
engineering of the air force succeeded. So, I was officially certified
to be a$\ldots$communist (!), and I was abruptly denied the realization
of my dream.
\textbf{Debasis and Frank}: Disappointing and demoralizing! What
happened next?
\textbf{George}: For more than a year, my odyssey in search of my Ithaca
went on, without much satisfaction. After quite some time, disappointed
and shaken, I decided to visit the department of mathematics of the
University of Athens, just to inquire about the previous year's entrance
examination. I was told that I was successful, but since I did not
enroll, I lost the right of enrollment. Fortunately, there were a couple
of openings in the following year's class, and one was allocated to me.
Apparently, my manifest destiny was to become a mathematician rather
than an air force officer-engineer! I~graduated from the University of
Athens in four years with high honors. During the last two years, I also
served as a teaching assistant to professor D.~A. Kappos, who was a
student of Constantine Carath\'{e}odory and held the chair of
mathematical analysis. I was fortunate to take many of my courses from
him. While my studies in mathematics were quite broad, I had not yet
been introduced to probability or statistics.
\begin{figure*}
\caption{A (nonabelian!) group of the mathematics graduating
class at the University of Athens, fall 1956. (George Roussas, in the
middle, kneeling).}
\end{figure*}
\section*{Choosing Statistics---The University of California, Berkeley Years}
\textbf{Debasis and Frank}: That's a fascinating story, George. It's
interesting how sometimes bad things seem to happen for a reason.
Certainly your ultimate career path is a good example. It's intriguing
that you decided to pursue graduate work in a field that you had yet to
be formally exposed to. How did that come about?
\textbf{George}: It was my determination to pursue graduate work abroad.
The decision to go for statistics---despite the lack of any relevant
background---was due to a liking I took in probability (by attending an
occasional seminar, and also by studying on my own), but primarily it
was due to the advice of Professor Kappos. His own expertise was in
measure theory and probability in abstract structures. The absence of
statistics from the curriculum was an additional reason. Since studying
abroad was well beyond my family's financial means, another financial
source would have to be located. Fortunately, the Greek government did
provide some relevant fellowships, based on a series of written
examinations and service in the armed forces. So, I was inducted into
the army, where I served for two years as a private (an unusually low
rank for a young man with my background), largely because I was still
plagued by my unfortunate experience with the air force. But my low rank
army service did me some good, as I spent almost the entire period close
to home, and I had the opportunity to pursue my study of the English
language. Soon after my discharge from the army, I participated in an
examination for the selection of fellows to study applied mathematics
(which included probability and statistics) abroad. Professor Kappos
insisted that, once the decision to study statistics was made, the place
to go would be the University of California in Berkeley.
\begin{figure}
\caption{George Roussas during his military service in the Greek
Army, 1957--1959.}
\end{figure}
\textbf{Debasis and Frank}: You began your graduate studies in the
states in 1960. What were your first impressions of Berkeley?
\textbf{George}: I traveled from Athens to Berkeley by way of London and
New York. My first impression of New York was awful, but California and,
in particular, Berkeley, was another matter. The climate is about the
same as that of southern Greece, the city of Berkeley is charming, and
the university campus is of exceptional beauty. Nothing, of course,
needs to be said about the academic standing of the entire university,
and of the department of statistics, in particular. David Blackwell was
the chair of the department, and Lucien Le Cam was the graduate advisor.
\begin{figure}
\caption{George Roussas in the Yosemite National Park in his first summer
in the UC Berkeley, 1960.}
\end{figure}
\textbf{Debasis and Frank}: Tell us a bit about your graduate studies
and, in particular, about the faculty members who had a strong impact on
you.
\textbf{George}: The first graduate-level course in probability and
statistics I took from Thomas Ferguson, during the summer session. Early
on, I took courses in measure theory, functional analysis and topology
from the department of mathematics. The hypothesis testing and point
estimation courses I took from E.~L. Lehmann, and decision theory from Le
Cam. I took courses in measure-theoretic probability, second-order
processes and sufficiency from Edward Barankin. I took a one-year course
in probability from Lo\`{e}ve, and a course in Markov chains from David
Freedman. From Blackwell, I took a course on coding theory and another
one on programming. Also, I took a course in Markov chains from K.~L.
Chung during a summer, and another course in empirical Bayes methods
from Herbert Robbins when he was a visitor in Berkeley. It was almost a
criminal omission that I did not take the ANOVA course from Henry
Scheff\'{e}, and at least one of the courses taught by Jerzy Neyman. I
did, however, study the Scheff\'{e} book thoroughly and, later, taught
out of it.
The faculty of the department of statistics in the UC Berkeley was an
almost$\ldots$suffocating constellation of stars! I had immense respect
and admiration for each and every faculty of the department. Neyman---founder
of the statistical laboratory and of the department of
statistics---was an imposing figure in the department. He was very kind
to me, and more than once mentioned to me his experience during a brief
visit in Greece as an international observer. Barankin, in addition to
being an outstanding mathematical statistician and probabilist, was also
well versed in philosophy and in the classics. It was not unusual for
him and me to talk about Plato, Aristotle and Sophocles. I learned
asymptotic theory primarily from Le Cam. His seminal work on contiguity
of sequences of probability measures and its statistical implications
were the key for my entrance into the field of large sample theory. As
is well known in the statistical community, Le Cam was deeply
knowledgeable in a broad area of mathematical sciences, and exceedingly
helpful to all those who sought his advice. Le Cam's vast knowledge
often created communication problems between himself and a student.
However, those who persisted would manage eventually to chip away bits
of his wisdom.
David Blackwell was my great discovery at UC Berkeley. It is not a
secret that UC Berkeley was the repository of great scientists. So,
in this context, it would not come as a surprise that Blackwell belongs
in that exclusive club. What is rather rare, however, is for a great
scientist to be endowed with exceptional human qualities. That is,
indeed, the case, which puts Blackwell in a class of his own. He is
endowed with a refined, friendly and appealing personality, and he
treats people in ways that build their self-confidence and inspire
relationships based on mutual respect. He's been a wonderful role model
for me and many others.
As one would expect, the time in the UC Berkeley was academically
challenging, but overall pleasant, and certainly extremely constructive;
it provided unmatched academic training.
\begin{figure}
\caption{George Roussas upon his graduation from the UC Berkeley, 1964.}
\end{figure}
\begin{figure*}
\caption{George Roussas' family in Marmara, in the summer of 1966. From
left to right: George, sister Demetra, mother, sister Stella, father,
sister Aggeliki (kneeling), and nephew John.}
\end{figure*}
\section*{The University of Wisconsin, Madison, Experience}
\textbf{Debasis and Frank}: After taking a temporary position in 1964
(while considering a possible return to Greece), you joined the
Statistics faculty at the University of Wisconsin, Madison. George Box
was then in the early stages of organizing the statistics department
there. What were the highlights of your time in Madison?
\textbf{George}: In the fall of 1965, I was invited to interview at UW
Madison. At the end of my interview, Irwin Guttman, then the acting
chair of Statistics there, made me an unofficial offer, and I accepted
it on the spot. I had already fallen in love with Madison, both because
of its physical beauty and because of the superb academic climate there.
I did not allow myself the time for the usual bargaining to improve upon
the rather low academic salaries offered by the UW at the time!
So, I joined the department of statistics of the UW Madison, in the fall
of 1966, as an assistant professor. At the same time, another four
assistant professors were hired: Asit Basu, Richard A. Johnson, Gouri
Bhattacharyya and James Bondar. Existing faculty members, in addition to
Box and Guttman, were Norman Draper, John Gurland, Bernard Harris,
William Hunter, Jerome Klotz, George Tiao, Donald Watts and Sam Wu, as I
recall.
George Box had created in the department an academically demanding and
rigorous climate, but at the same time, comfortable, nonoppressive, and
enjoyable. Those who did their work well were recognized and rewarded. I
was promoted to associate professor (with tenure) in 1968 and to full
professor in 1972. As all other faculty members, I used to teach two
courses per semester, one graduate-level course and one undergraduate
course. The undergraduate course was alternated between an upper
division probability and mathematical statistics course, and a
pre-calculus statistics course. The latter choice was often made,
because that was where the interesting students were! We recruited some
very good ones into the statistics major!
\begin{figure*}
\caption{A faculty meeting in the Department of Statistics of the
University of Wisconsin, Madison, sometime between 1968 and 1970. From
left to right (part of the faculty only): Jerome Klotz, Grace Wahba,
George Roussas, John Gurland and John Van Ryzin.}
\end{figure*}
\textbf{Frank}: I understand that at least one of them was recruited
into marriage! (Laughs.)
\textbf{George}: You are right about that! It was in one of my
pre-calculus statistics classes that I met Mary Louise Stewart, who was
destined to become my wife. She was a Ph.D. candidate in food management
with a minor in statistics. This was in the fall of 1969. During the
spring semester of 1970, I was on sabbatical leave, which I spent in the
famous mathematics institute of the University of Aarhus in Denmark as a
guest of Barndorff-Nielsen. It was also there that I wrote the draft of
my book on contiguity and where I met the great K. It\={o} and attended
his ergodic theory seminar.
Sometime early in the fall of 1970, after my return to Madison, I
contacted Mary, and she responded positively. We started dating
regularly, and were engaged in the summer of 1971. I took Mary to Greece
to meet my parents, sisters and close relatives in the summer of 1971,
and upon our return to Madison, we had our civil wedding ceremony on
September 11, 1971.
\begin{figure}
\caption{Mary Roussas in George Box's class on Time Series Analysis as a
graduate student at the UW Madison when she was still Mary Stewart,
1970.}
\end{figure}
\begin{figure}
\caption{George and Mary Roussas newly married in Madison, Wisconsin,
1971.}
\end{figure}
\section*{The Pending Issue of Returning to Greece---Traveling Between Madison
and Patras}
\textbf{Debasis and Frank}: Things went quite wonderfully for you in
Madison, both personally and professionally. We know, however, that you
were faced with a difficult choice in the early 1970s---whether to
remain at Madison or return to your native Greece. What were the main
issues you were dealing with at that time?
\textbf{George}: Madison was great for us in so many ways. However,
there was a recurring issue that caused quite a bit of discomfort. I
began to receive repeated notifications from the Greek government about
my contractual ``obligation'' to return to Greece and my need to
discharge this obligation. At the time, Greece was under military rule,
and that made my return there problematic on many counts. From a purely
practical viewpoint, I was highly content with my life and career, and I
had no desire to leave Madison. Further, Mary and I were already
planning to start a family. Philosophically, I was strongly opposed to
serving under a military regime. Finally, a return to Greece seemed
unsafe to me, as I had been active in opposing the military regime. On
the other hand, I had no wish to be deprived of my Greek citizenship,
as was being threatened. Incidentally, I became an American citizen on
May 28, 1971, and ever since, I have been grateful to the American
people for the privilege of citizenship bestowed upon me.
\textbf{Debasis and Frank}: So, how did you resolve this vexing
conflict?
\textbf{George}: I decided to respond to the demands with a proposal
that seemed like it had virtually no chance of being accepted. I
indicated that I could consider a return to Greece only if I was offered
an academic position there comparable to the one I was holding in
the States. Since there were quite a limited number of professorships in
Greek universities at that time, and occupying a full professorship was
only for the well connected, the possibility seemed, to say the least,
remote. So, on this account, I felt fairly safe. Unfortunately (for me),
the government came up with an open chair in Applied Mathematics (which
included probability, statistics, numerical analysis and a few other
subject matters) in the new and promising technologically oriented
University of Patras (UP) (situated about 150 miles west of Athens), and
insisted that I submit a candidacy. Still feeling safe for the reasons I
cited above, I submitted an application, and lo and behold, I was
elected (to a full professorship). However, the Minister of Education
refused to ratify the election and ordered that the chair be declared
open again. At this time, electors (all full professors of the School of
Physical and Mathematical Sciences) pleaded with me not to object to the
resubmission of an application on my behalf. The process of election
commenced anew, and I was elected again! This time, the Ministry of
Education kept the outcome of the election in its drawers---neither
rejecting nor ratifying it---for a few months, until a new Minister, a
civilian, came in to replace the previous person, who was a military
officer. This fellow was a chemical engineer with a Ph.D. degree from
McGill University, who had worked for the Shell Oil Company in the
States over many years. As soon as he was informed about the long
pending ratification of my election, he approved the election
immediately. That was the right thing for him to do, but it did not
serve my purposes well. I was strongly urged to go to Athens to take the
oath of the office, and perhaps be given leave of absence for a limited
period of time. The compromise reached was to take the oath of office in
the Consulate General of Greece in Chicago, and report for duty in early
1972.
Under these circumstances, Mary selflessly abandoned her studies
temporarily (she had already taken her Master's degree, and was well on
her way toward fulfilling all requirements for the Ph.D. degree), took a
crash course in the Greek language, and started preparing herself for
the forthcoming adventure. The colleagues in the department attempted to
dissuade me from going to Greece, and insisted that I retain my
appointment at Madison while taking a leave of absence of indeterminate
duration. I have always appreciated this gracious gesture.
\textbf{Debasis and Frank}: So, this is when your triumphant, if
somewhat reluctant, return to Greece began.
\textbf{George}: In a manner of speaking, yes. In February 1972, Mary
and I departed for Patras. Now, the city of Patras and the university
campus are built in a beautiful location, on slopes overlooking a bay,
with the western part of Greece opposite it. However, becoming acclimated
to a new community (and, for Mary, a foreign community at that) did pose
considerable problems. In the university itself, I was received well by
some, and not favorably by others. In dealing with the authorities, both
in Patras and Athens, my strong point was that I had a safe escape route
and, therefore, I had no problems in behaving in my natural way. I
immediately started teaching courses in probability and statistics,
organizing the Laboratory of Applied Mathematics, recruiting TA's and
personnel for the lab, mentoring students with an interest in
probability and statistics, attending hours-long and stormy faculty
meetings, etc. Mary made a valiant effort to adjust to the local
conditions, and tried hard to improve her Greek vocabulary. I am happy
to say, though, that she was treated exceptionally nicely by all
involved.
In the fall of 1972, we returned to Madison, and in the winter back to
Patras. In the fall of 1973, we returned again to Madison. Mary also
gave birth to our first son, Gregory, that October 18. Unfortunately,
soon thereafter, I had a rather serious operation (removal of slip
discs) in the University Hospital, and I could offer little to the
department at that time. By the end of the year, we returned to Patras,
where I completed my recuperation.
\textbf{Debasis and Frank}: Traveling between the two places must have
been quite cumbersome!
\textbf{George}: Yes, moving back and forth between Madison and
Patras eventually required us to maintain what were essentially two
households. It was financially challenging and physically tiring. At the
beginning, it was Mary and me (and Mary's three cats from her student
days!), and now it was Mary, me and Gregory (in addition to the three
cats!). It was clear that a decision was due soon as to where we were to
affix our affiliation. At that time, everything pointed toward Madison,
as my academic experience in Patras had been a disappointment to me. In
addition to all my painful efforts to organize and staff a new unit, I
repeatedly encountered what I considered to be harassment from the
Minister of Education (who seemed intent on imposing political
considerations into university affairs). In time, we were able to
establish a tentative truce, allowing me to proceed with academic
matters in ways that American academics consider natural and perhaps
sometimes take for granted.
It was about this time that a decision about returning to Madison
permanently was due, when all of a sudden the military regime collapsed
(in July 1974), and a civilian government took over. The people, by and
large, were elated with the change. A military regime is not a normal
and natural regime for free people, in particular, for the country where
the concept of democracy was invented and first practiced. Nevertheless,
it appears that every regime has its excesses. Even the new civilian
regime was eventually credited with its share of excesses, in
particular, in the academic world. In any case, despite some
shortcomings, the new civilian regime was responsible for establishing a
semblance of normalcy in Greece and for opening up some new horizons.
This promising outlook led me to decide to remain in Greece and to
resign my appointment at the University of Wisconsin, Madison. I did
this in the fall of 1976, thus culminating four and a half years of a
joint appointment between the two institutions. It also, by no means
painlessly, terminated a ten-year association with the great university
and beautiful city in which I had met the love of my life, flourished
personally and professionally, and spent the most enjoyable years of my
academic career.
\section*{The Experience in Greek Universities}
\textbf{Debasis and Frank}: The next chapter in your professional life
was spent as a faculty member and administrator within the Greek
university system. Tell us about those years.
\textbf{George}: Yes, I was now fully identified with the University of
Patras. I felt that it was incumbent upon me to do all I could for the
benefit of the institution, while also looking after my own scientific
survival. I~had worked hard seeking out the best available candidates
(of Greek descent, as required) whenever a faculty position became
available. I expanded this effort to the entire spectrum of biological,
natural and physical sciences. These kinds of activities were not
universally appreciated, but I was nonetheless narrowly elected as the
Dean of the School of Physical and Mathematical Sciences (by the full
professors of the school). Around this same time, Mary became pregnant
with our second son, John. He was born in Bloomington, Indiana, on
August 10, 1977, while I was spending the summer as a research professor
at Indiana University.
\textbf{Frank}: Is that when you were offered a starring role in the
bicycling classic film ``Breaking Away''?
\textbf{George}: No, that came later! (Laughs.) This time, I just went
to visit and work with Madan Puri, an old friend of mine since our UC
Berkeley days, and my former student from Greece, Michael Akritas, now
an outstanding senior statistician, as we all know.
\textbf{Debasis and Frank}: Following this leave, you returned to Patras
to take on the deanship with renewed energy?
\textbf{George}: Exactly! I did not feel bound by academic traditions
that didn't seem to work. My main guides were common sense and my
experience with US universities. One of my early ``accomplishments'' was
to reform the manner in which faculty meetings were run. Regular
school-wide faculty meetings were always held in a large room with the
faculty seated, according to academic seniority, around a huge oval
table. Meetings were seen as both business and social affairs. The
agenda was typically unrealistically long, so that meetings dragged on
and on for hours, often without any truly useful work being done. I
introduced a new system in which the agenda was prioritized, placing
first the items on which faculty input was essential. I made a strong
effort to exclude items which were politically driven, and to set aside
strictly administrative issues which could be handled without taking up
the faculty's time. While exerting control of the agenda, I resolved to
be firm but also fair and impartial.
\textbf{Debasis and Frank}: By the way, reconnecting with your family
during this period must have been a special pleasure.
\textbf{George}: Most certainly so! My parents especially enjoyed seeing
our children on a regular basis.
\begin{figure}
\caption{George Roussas' parents in the late 1970's.}
\end{figure}
\textbf{Debasis and Frank}: How was your approach to the deanship taken
by the faculty in Patras?
\textbf{George}: While my methods were considered new and different,
most of my colleagues were pleased with my performance and some of them
urged me to stand for election for the office of the chancellor of the
university. In those days, the chancellor was elected by the totality of
the full professors of the university; this was the old continental
European system. I was elected by a comfortable margin. I served as a
chancellor-elect for a year, and took over the chancellorship the year
after.
\begin{figure*}
\caption{George Roussas delivering a speech at the University of Patras
at his inauguration as the Chancellor of the university, 1982.}
\end{figure*}
\begin{figure*}
\caption{At the reception right after George Roussas' speech. From left
to right: Mary Roussas, George Roussas' mother and George Roussas. (His
father had passed away).}
\end{figure*}
\textbf{Debasis and Frank}: An important political change began in
Greece in 1982 when A. Papandreou formed a new government. What did you
know about him at the time?
\textbf{George}: Papandreou came from a political family (his father was
the leader of a political party, and had served both as a minister and
prime minister in the past). He left Greece right after high school,
studied economics in Harvard, and served on the faculty of several US
universities, most notably UC Berkeley, where he was also the chair of
the department of economics for some time. It was there where I came to
know him. He was a noted economist, clearly a political leader with
highly respected credentials within the European Union (EU). The new
government was welcomed by a substantial majority of people as a turning
point in Greek politics, and justifiably so. A modern and knowledgeable
economist at the helm of the government would surely put the vast
amounts of resources flowing from the EU to good use, developing and
modernizing the Greek economy. It was generally expected that a man of
his background would also revitalize Greek education at all levels,
helping to recruit a substantial number of Greek academics from home and
abroad, thereby infusing Greek universities with new blood and highly
qualified scientists.
\textbf{Debasis and Frank}: Around this same time, you yourself made a
change within the Greek University system. Tell us about that.
\textbf{George}: Actually, this was not a change of my base, it was the
undertaking of temporary additional duties. Undersecretary of Education
George Lianis offered me the position of the vice president for academic
affairs of the then new University of Crete. The appointment would allow
me to remain in Patras as chancellor, but it would require weekly
meetings, in either Athens or Crete. My primary new responsibility was
to oversee and chair the elections of faculty members in the Departments
of Mathematics, Physics, Chemistry, Biology and Computer Science. I felt
I could accommodate these duties in my portfolio without disturbing my
family, now grown to include Mary and three sons, with the addition of
George-Alexander on December 12, 1980. I launched into my new
responsibilities with gusto, and before long, the University of Crete
was staffed by scientists of considerable international reputation.
There were some challenges to be faced, but I'm saving the details for a
mystery novel I will compose in the near future! (Laughs.)
\begin{figure*}
\caption{The Roussas boys. From right to left: Gregory, John and
George-Alexander, in Greece, 1983.}
\end{figure*}
Something about the Greek system of administration might interest you,
as things are quite different in the States. Neither the deanship nor
the chancellorship I held provided any additional payment or stipend
beyond the professorial salary. As a chancellor, I had at my disposal a
car and chauffeur for university business around the clock, and that was
all. For my service in the University of Crete, I was simply paid travel
expenses and a nominal per diem. In Greece, academic administration is
viewed as within the normal scope of a professor's activities.
\textbf{Debasis and Frank}: What seemed like a very promising beginning
at Patras and at Crete was, unfortunately, destined to run into
insurmountable obstacles. What were the root causes of the change in the
academic climate in Greece?
\textbf{George}: Perhaps not unexpectedly, politics trumps most other
forces in our society. While the conditions were ideal to usher Greece
into a new era of achievement and prosperity, it soon became clear that
this was not going to happen. It became apparent that Papandreou had no
intention of being the architect of such a feat. What he did instead was
to continue preaching and practicing his pre-election populism, even
after he was in power. He squandered the national wealth and the
resources provided by the EU for partisan purposes and other unworthy
causes; consequential productive investing was nowhere to be seen.
The education system of the country, and, in particular, the higher
education, was in dire need of reorganization. Papandreou's
``reorganization'' essentially abolished the administrative structure in
the primary and secondary schools, and virtually dismantled the
universities. The existing small number of university professors was
marginalized with the flooding of the ranks by new appointees, and the
universities were turned into unlikely arenas of competition of the
political parties. Through the political collaboration of teachers and students,
the resulting majority was then in a position to elect the university
authorities at all levels (departmental chairs, deans, vice chancellors
and chancellors). Needless to say, the result was predictable chaos and
a dramatic lowering of academic standards. It has been more than a
quarter of a century since these measures were put into effect, and the
results are everywhere to be seen. Furthermore, there is no hope for
deliverance from this evil anytime soon; the genie is out of the bottle,
and it is not easy (or even possible) to confine it there again!
\section*{The Turning Point---Returning to the States}
\textbf{Debasis and Frank}: That is indeed a tragedy. It's clear that
you harbor both sadness and anger about it; sadness for your native land
and anger about the way things were changed for the worse. As you
completed your term as chancellor of the University of Patras, you could
see the handwriting on the wall. That's about the time that you took a
sabbatical leave at the University of California, Davis, is it not?
\textbf{George}: That's exactly what happened. Although our next move
was not yet clear, Mary and I decided that a year of sabbatical leave,
spent outside the country, would be a welcome and much needed change,
and help us work out a plan for the future. That future could have been
a suitable position in the EU. Nevertheless, we decided to spend the
year in the States, and that is how I found myself at the UC Davis in
the capacity of visiting professor, starting in the summer of 1984. P.~K.
Bhattacharya's work on nonparametric statistics was one of the reasons
that I was drawn to UC Davis. Statistics at UC Davis at that time was
organized as an Intercollege Division of Statistics headed by an
associate dean. Professor Robert Shumway was the acting associate dean,
and he was prompt and most accommodating in his response to my request
about visiting the UC Davis for the year.
As you well remember, Frank, the UC Davis statistics unit was formed in
1979 in the usual manner, that is to say, by grouping together
statisticians affiliated with other departments, such as mathematics,
epidemiology, etc. It was a solid group of fair size, and its first
associate dean was Julius Blum, a noted probabilist. Other members of
the unit in the early 1980's were P.~K. Bhattacharya, Alan Fenech, Wesley
Johnson, Y.~P. (Ed) Mack, Norman Matloff, Frank Samaniego, Robert
Shumway, Jessica Utts, Alvin Wiggins and Neil Willits. Jane-Ling Wang
came aboard the same year with me in 1984. The idea behind this mode of
organization of the unit, that is, as an Intercollege division of
statistics rather than a department of statistics, was to gather
together all statistical activities under one roof, rather than having
them spread over the campus. In the UC Davis there is also a rather
novel idea at work, that of a Graduate Group, which brings together
faculty with common research interests serving in various units on
campus. Actually, it is the graduate group which controls the graduate
curriculum and supervises graduate degrees. So, there also was a
graduate group in statistics, and the associate dean of the intercollege
division of statistics was, ex officio, the chair of the graduate group.
Blum passed away unexpectedly in his third year as associate dean of the
unit, and Professors Bhattacharya and Shumway reluctantly served in
succession in an acting capacity, while an active search was launched
for a permanent appointee as associate dean. As I recall, Frank, at that
time, you were serving in a campuswide administrative capacity as the
Assistant Vice Chancellor for Academic Affairs.
\section*{The University of California, Davis Years}
\textbf{Frank}: It seems that the stars were aligned that year, as Davis
was searching for a new head of its Statistics unit and you were
seriously looking for a new position and new challenges. I clearly
recall that you were the unanimous choice of the Statistics faculty in
our search in 1984--1985. You joined the Intercollege Division of
Statistics in July, 1985, as Professor, Associate Dean and Chair of the
broadly based Graduate Group in Statistics. You served as the head of
our unit for 14 years without taking even one quarter of sabbatical
leave. Your service to the unit was both visionary and very effective.
From your perspective, what were the highlights of this period?
\begin{figure}
\caption{George Roussas giving a seminar after his appointment as
Professor, Associate Dean and Chair of the Graduate Group in Statistics
at the UC Davis, 1985.}
\end{figure}
\renewcommand{\thefigure}{14.A}
\begin{figure*}
\caption{In front of Kerr Hall at UC-Davis, in 1986. From left to
right: Keh-Shin Lii, UC-Riverside; Y.~P. (Ed) Mack, UC-Davis, Murray
Rosenblatt, UC-San Diego; Peter Hall, Australian National University;
and George Roussas.}
\end{figure*}
\textbf{George}: Thank you, Frank, for your generous description of
those years. Upon shouldering the leadership responsibilities in 1985,
the faculty, in conjunction with the university administration, designed
a strategic plan to expand the unit up to the point of achieving a
critical mass, and turn it from a solid unit to a unit of national and
international standing. We proceeded with the implementation of the plan
by hiring a number of bright new faculty members, including Prabir
Burman, Chris Drake, Hans-Georg Mueller, Wolfgang Polonik and Chih-Ling
Tsai, and by making efforts to attract established superior level
statisticians, such as Rudolph Beran from the UC Berkeley and Peter Hall
from Australia. These latter two recruitments became realities soon
after my stepping down as Associate Dean. At the same time, we laid the
foundation for a program in biostatistics, which subsequently developed
into a program of national repute. A decisive role in founding and
developing the biostatistics program was played by Hans Mueller, who was
by training a statistician, a biostatistician and a medical doctor.
Within a few years, it became apparent, and certifiably so, that our
objectives and goals were well on their way to being realized.
I'm sure you recall that, in an evaluation study of 300 statistical
research institutions around the world---carried out by the National
Sciences and Engineering Council of Canada (NSERC) for the period
1986--1990---statistics in the UC Davis was ranked 14th (top~4.7\%)
worldwide, and 11th within the United States (top 3\%). This ranking was
reaffirmed and even improved in a follow-up study, carried out by
Christian Genest and Mireille Guay (\textit{The Canadian Journal of Statistics},
Vol. 30, No. 2, 2002, pages 392--442). In this study, the authors
employed several criteria to evaluate the same set of institutions.
On the basis of one of these criteria---essentially, published research
papers per capita in the ``top 25'' research journals in the field---statistics
at the UC Davis was ranked 4th (top 2\%) among 202
institutions studied. And these hard facts are beyond and above the
general reputation of UC Davis statistics faculty as excellent
researchers, teachers and contributors to the profession. This really is
an achievement for which our faculty as a whole deserves credit, and I
am extremely proud of the hard-working yet congenial group that
constitutes the statistics faculty at Davis. Naturally, I am proud as
well of the role I had the opportunity to play in helping to shape this
group.
\renewcommand{\thefigure}{14.B}
\begin{figure*}
\caption{In the island of Spetses, Greece, during a NATO Advanced Study
Institute in 1990. From left to right: Y.~P. Mack, UC-Davis; Paul
Deheuvels, L.S.T.A., Universit\'{e} Paris VI, France; and George Roussas.}
\end{figure*}
\setcounter{figure}{14}
\renewcommand{\thefigure}{\arabic{figure}}
\textbf{Frank}: George, would you like to mention at this point some
events and turning points we faced as a statistics unit?
\textbf{George}: I surely would. Heading the statistics unit at Davis
was not without its challenges, especially during periods in which the
California economy was weak. In the early 1990's, for example, during
one of the more severe financial crises of the State (with inevitable
repercussions for UC), the dean in charge of day-to-day oversight of the
Intercollege Division of Statistics recommended the merger of statistics
and mathematics as a cost saving device. Not only had the dean (a fine
humanist, but largely unschooled quantitatively) forgotten that that was
where we had started---separating from mathematics in order to realize
the breadth and potential that statistics rarely can achieve within a
mathematics department---but he had failed to reflect on both the
unit's stature and its many applied contributions (including
consultation across the campus through our Statistical Laboratory,
collaborations with applied scientists on campus and a broad spectrum of
courses taught as a service to students in other majors). The faculty
went into overdrive to come up with ideas and strong arguments against
the proposed merger. Also, for a period of about three months, I lobbied
heavily a number of higher-level administrators and other influential
people who were supportive of our continued independence. As a result of
our collective efforts, I submitted a detailed and impassioned letter in
defense of our status as a free-standing unit. In the end, the
administration conceded that the merger of statistics and mathematics
would be a serious strategic mistake. We held our status as an
Intercollege unit for 21 years. While we cherished our considerable
independence as a mini-college on campus, as well as the access it gave
us to various forms of support from all other schools and colleges, we
also realized that we were not big enough to withstand ill-conceived
attacks. It was this kind of reasoning that led us to seek and achieve
the (lesser but safer and, let's face it, more traditional) status of a
department. This went into effect in 2000, and Jane-Ling Wang was the
first chair of the department of statistics. She was succeeded by
Rudolph Beran, and then by the current chair Wolfgang Polonik.
\begin{figure*}
\caption{Faculty of the Department of Statistics at the UC Davis, Fall
2000. Top line: Alan Fenech, Wesley O. Johnson, Y.~P. (Ed) Mack, Prabir
Burman, Hans-Georg Mueller, Wolfgang Polonik, Rahman Azari and Juanjuan
Fan. Second line: Richard Levin, F.~J. Samaniego, J.-L. Wang, George
Roussas, Christiana Drake and Jessica Utts.}
\end{figure*}
\textbf{Debasis and Frank}: Your friends and colleagues clearly
appreciated your accomplishments and your generous service to several
institutions and to the statistics profession generally. They threw
quite a ``party'' in your honor!
\textbf{George}: All this was something of a surprise to me. My old
friend and former collaborator Madan Puri of Indiana University
organized a volume of research papers featuring work in the general
areas in which my own research was focused. This project resulted in the
Festschrift ``Asymptotics in Statistics and Probability: Papers in Honor
of George Gregory Roussas,'' VSP International Science Publishers, 2000.
It was edited by Professor Puri, and consists of 25 papers by 48 authors
from 13 countries, with a preface co-authored by Madan L. Puri and my
good friend and well-known statistician Yannis Yatracos.
Subsequently, Hans Mueller and the department of statistics conceived of
the idea of organizing a two-day workshop at UC Davis at which the
Fest\-schrift would be officially presented to me. The conference took
place at UC Davis on May 19--20, 2001, with the participation of a select
group of researchers, including a member of the French Academy of
Sciences, a chancellor of a German university, the holder of a
name-chair in the London School of Economics and three chairs of
statistics departments.
\textbf{Frank}: At the time, I made note of the fact that the conference
was scheduled right in the middle of the period that I was on sabbatical
leave in Ireland. I chose not to take offense. (Laughs.) But seriously,
George, I would have loved to have been on hand to give my hearty
congratulations for a roundly successful and meaningful career. And, of
course, the beat goes on.
\textbf{George}: It was indeed unfortunate, Frank, that you could not be
with us on that occasion, but apparently there were many constraints the
organizers had to abide by. I always regretted your absence from that
festive event, in contrast to your ever present continuous support and
counsel throughout my UC Davis years.
\textbf{Debasis}: Let me add, George and Frank, that I was fortunate
enough to be present, and the proceedings were thoroughly enjoyable!
\textbf{Debasis and Frank}: How have you structured your professional
life since leaving the administrative posts you held up to 1999?
\textbf{George}: Well, I continue to be an active member of the
department of statistics, and a member of the graduate program in
statistics and the graduate group in biostatistics, both housed in the
department of statistics. I have concentrated both on a variety of
research problems in the areas in which I've always been interested, and
have enjoyed my teaching assignments, but I have also had the luxury of
time to work on pet projects. I have written three books
[``\textit{An Introduction to Probability and Statistical Inference}'' (2003),
``\textit{An Introduction to Measure-Theoretic Probability}'' (2005) and
``\textit{Introduction to Probability}'' (2007), all published by Academic
Press]. I have already revised the \textit{Measure-Theoretic} book, and I am in
the process of revising another two books. Also, I am in the process of
working collaboratively on a new book (tentative title ``\textit{Probability and
Statistics for Non-majors}''). So, I've kept quite busy, both with
professional projects such as these and with family, to whom we have
added my daughter-in-law Casie, wife to my son John, and my delightful
granddaughter Sophia Aggeliki, as well as my daughter-in-law Laura, wife of my
son Gregory.
\section*{Some Memories from Roussas' Professional Career}
\textbf{Debasis and Frank}: What are your fondest memories over a career
spanning almost 50 years?
\textbf{George}: In retrospect, I feel that there are many reasons that
I should be grateful for my long professional life. I am certainly most
grateful to all of my professors in the UC Berkeley for the truly solid
training imparted in me; the full value of it did not become evident
until later in my academic career. I often reminisce about my ten
productive and pleasant years at the University of Wisconsin, Madison.
Madison was, after all, where I met my wife, Mary Louise, and where our
first son, Gregory, was born. I do not regret the ten to twelve years
that I invested in seeking to contribute to higher education in Greece,
although the net result was almost negligible. I feel that I gave it my
best try, but, realistically, there are many other factors which
influenced the final outcome.
I am certainly most grateful to UC Davis for the way I was received, and
the opportunity I was given to do here what I was not allowed to do in
Greece.
Genuine thanks are also due to my professional colleagues who honored me
with my election as a Member of the ISI (1974), admitted me as a Fellow
in the RSS (1975), and elected me as a Fellow of the IMS (1983) and the
ASA (1986). Special thanks are also due to the scientific community at
large for electing me a Fellow of the AAAS (2008). And last but not
least, I am grateful to a select group of Greek scholars---the membership
of the Academy of Athens---for electing me a Corresponding Member of the
Academy of Athens in the field of Mathematical Statistics (April 17,
2008).
\textbf{Debasis and Frank}: And on a personal level?
\textbf{George}: I feel exceptionally fortunate that I have spent my
adult life surrounded by a wonderful, supportive and endlessly
interesting family. For my sta\-mina and persistence, I must thank Mary,
especially, both for her support and encouragement over the years but
also for her sage advice. I feel singularly fortunate to have three
healthy, intelligent, beautiful sons---Gregory, born in Madison in 1973,
now a computer scientist, John, born in Bloomington in 1977, a
practicing attorney, and George-Alexander, born in Patras in 1980, a UC
Davis graduate in political science, aspiring to the legal profession.
Also, we are delighted with the relatively new arrival (March 26, 2008)
of our first grandchild, Sophia Aggeliki, daughter of John and Casie
(also a practicing attorney), and with our newest daughter-in-law Laura.
Many heartfelt thanks are due to
my sisters for their immense moral support and consequential material
support when that was most
needed.
One thing we regret is that we did not have enough time to enjoy the
house that we built in Patras in 1980. Its setting is truly idyllic: It
lies on an acre of land full of trees, (including an olive tree grove)
at the foot of a wooded hill with a mountain in the background, and
faces the Patras bay.
On a personal level, it has also been painful that, by expatriating
myself for most of my adult life, I was deprived of the opportunity to
spend any significant amount of time with my parents, sisters and other
members of the immediate family. In retrospect, I also believe that, by
devoting an undue amount of time to my professional duties, I deprived my own
family and myself of the opportunity of spending more time together. But
I truly believe that each phase of our lives is a nonrecurrent event,
and must be appreciated, as it comes, to the greatest extent possible.
\begin{figure*}
\caption{One aspect of the Roussas' house close to the university campus
in Patras.}
\end{figure*}
\begin{figure*}
\caption{At the baptism of the Roussas' first grandchild, Sophia
Aggeliki, in Athens, September 2008. The parents John and Casie Roussas
with the baby, and George and Mary Roussas.}
\end{figure*}
\begin{figure*}
\caption{George and Mary Roussas with Sophia Aggeliki, right after her
baptism.}
\end{figure*}
\textbf{Debasis and Frank}: Since you've served in a wide variety of
administrative capacities during your acade\-mic career, perhaps you'd
like to share your thoughts about what it takes to do this type of work
well.
\textbf{George}: I am pleased to do so. I believe that being a
good, efficient and inspiring administrator requires an inborn talent.
Beyond this, one has got to be honest, just and straightforward, and by
word and deed, convince one's co-workers about that. One's commitment to
these values must of course be real, but it is also important that they
be clearly perceived by those with whom you deal. Furthermore, without
in any remote way being dictatorial, one has got to convey the clear
message that there is only one leader at a time. One should stand and
defend well established principles, and not bend according to the
prevailing winds. Vision is important, but it is also essential to have
the ability to explain one's vision in ways that gain the needed support
and engagement from others. The ability to listen is extremely
important. It is essential that different sides of a controversial issue
be weighed. I have never found it difficult to take a position that may
be unpopular, but I never wished to do so without being convinced that I
had the relevant facts in hand.
I recall that once, while working at my desk in the chancellor's office at the
University of Patras, I heard outside my door a rather heated discussion.
it, I was told that it was a committee of cleaning ladies who wanted to
see me and present to me a perennial unsolved issue of theirs, but the
receptionist would not allow them to do so. During my entire tenure on
the university campus, the rumor spread widely about this professor from
the States interacting with people at all levels. However, for the
receptionist it was inconceivable that a cleaning lady would ask to see
the chancellor. On this particular occasion, I invited the committee
into my office, listened to their concerns and was able to resolve them
to their satisfaction that very day.
There were several incidents with highly politicized student and TA
groups, which could have developed to the point of explosion, but
fortunately, they were decisively contained to the point of dissipation.
It is probably best that I not elaborate on them further.
\section*{Brief Description of Main Research Interests}
\textbf{Debasis and Frank}: We haven't spent much time on the areas of
Statistics and Probability that you have concentrated on during your
career. This conversation would be quite incomplete without your giving
us a brief tour.
\textbf{George}: Thanks for asking! As you know---and especially, you,
Debasis---my early work is based on Le Cam's concept of
contiguity and Local Asymptotic Normality (LAN). Roughly speaking, LAN
allows for a more or less arbitrary parametric family of probability
measures to be replaced (asymptotically) in the neighborhood of each
parameter point by an exponential family of probability measures.
Contiguity ensures the establishment of asymptotic normality under
moving parameter points, when such normality under a fixed parameter
point is already known. This theory has important statistical
implications. Roughly speaking, whatever can be done for exponential
families can also be done, in the limit, for the given family of
probability measures. Those results may then be transferred to the
original family, for which they are going to hold at the asymptotic
level. Such results were developed, originally, for discrete
time-parameter Markov processes.
Nonparametric estimation in special cases of Mar\-kov chains has been
around for a long time. However, nonparametric estimation in a general
setting of discrete time-parameter Markov processes was largely an open
area for investigation in the late 1960's. It was exciting to be in on
the ground floor in this problem area. I~published a series of papers,
beginning in 1969, which established some foundational results and
opened the door to further research in the area.
In Markov processes, the future depends on the past and present only
through the present. One way of incorporating the entire past, when that
matters, is by
introducing various modes of dependence conditions, referred to as
mixing. The basic idea in mixing processes is that the past and the
future are approximately independent, if they are sufficiently far
apart. In a way, it is the natural evolution beyond Markovian
dependence. I was introduced to this area by my colleague Y.~P. Mack (a
student of Murray Rosenblatt) in 1984, the first year of my visit in the
UC Davis. From a probabilistic viewpoint, there was a huge amount of
work already done, mostly by the Russian probability school (Davydov,
Gorodetski, Ibragimov, Kolmogorov, Lifshits, Rozanov, Vol\-konskii and
others), and also by probabilists in the States (first and foremost
Rosenblatt, then Bradley, Kesten, Peligrad, Philipp and others), as well
as by other researchers (e.g., F\"{o}ldes, Iosifescu, O'Brien, Withers,
Yokoyama and Yoshihara). However, there wasn't a body of work on
statistical inference on such processes. These problems intrigued me,
and I got some nice results. These were published in a series of papers,
starting in 1987. Ever since, there has been an explosion of papers in
this area, including contributions by Doukhan, Louhichi, Masry, Shao,
Tran, Yu and many others.
The next area of my interest has been that of associated processes. The
concept of associated random variables was introduced by Esary, Proschan
and Walkup in a seminal paper, and it was extensively used in the book
by Barlow and Proschan in a reliability framework. The concept of
negative association was introduced by Joag-Dev. Association was also
introduced and used in the context of mathematical physics by Fortuin,
Kasteleyn and Ginibre.
Although I had a peripheral interest in this area due to my overall
interest in modes of dependence, my interest was accentuated
significantly after an extended visit to UC Davis by Frank Proschan.
Again, there did not seem to have been a systematic approach to
statistical inferences in such processes, and this fact stimulated my
interest in such a kind of work. As a result, there has been a stream of
papers between 1997 and 2001 by me, my students and other collaborators
in which a variety of such problems have been posed and solved. More
importantly perhaps, this seems to have instigated the formation of a
``school'' in this area with much activity in China, South Korea, France
and Portugal. Some of the noted contributors in association, either in
probabilistic developments or statistical inference, have been Birkel,
Bulinski, Cai, Doukhan, Ioannides, Louhichi, Oliveira, Prakasa Rao,
Shashkin, Taylor, Yoshihara and others. Special mention is deserved for
a seminal paper on this subject by C.~M. Newman.
In the last ten years or so, I revisited, with Debasis, the area of
contiguity and LAN, and extended previous work to the so-called Locally
Asymptotically Mixed Normal (LAMN) families of probability measures, so
coined by Jeganathan in 1982. In this latter framework, we produced a
number of papers on distribution theory with applications to statistical
inference.
Finally, my current interests include conditioning, sampling from
continuous time-parameter stochastic processes, and the theory and
applications of copulas.
\textbf{Debasis}: George, I've truly enjoyed the opportunity to work
with you. We've worked on a wide variety of topics, including, of
course, contiguity. Your 1972 book on contiguity has become a classic! I
know it's been translated into Russian and perhaps other languages. Have
you given any thought to writing a new edition of the book that would
include the many new results that we and others have obtained in the
area?
\textbf{George}: The contiguity book, which was published by Cambridge
University Press in 1972, was written in an attempt to obtain a deeper
understanding of the concept of contiguity and its statistical
applications, and also to help disseminate this very important concept.
Le Cam's original paper in 1960 is not particularly easy to read. Of
course, he employed contiguity in his all-inclusive 1986 book
(``\textit{Asymptotic Methods in Statistical Decision Theory},''
Springer-Verlag). A much more accessible discussion of contiguity and
its repercussions is presented in the 2000 monograph (``\textit{Asymptotics in
Statistics: Some Basic Concepts},'' 2nd edition, Springer) by Le
Cam and Yang. I was therefore somewhat surprised that the Cambridge
University Press put out (in 2008) a reprint of my book in a paperback
form; apparently, there still seems to be some continuing interest in
that work.
And now, in order to answer directly your question, Debasis: I don't
really have any plans to do what you suggested. However, should you take
the initiative, I might be persuaded to join in! (Laughs.)
Incidentally, some time in the recent past, I had thought of organizing
some material on associated processes and their statistical
applications. That tentative plan was abandoned with the recent
publication of an excellent monograph on the subject matter (``\textit{Limit
Theorems for Associated Random Fields and Related Systems},'' World
Scientific, 2007) by Bulinski and Shashkin.
\section*{Musical Interests}
\textbf{Debasis and Frank}: We know that you have a great appreciation
for classical music. How did that lifelong interest originate, and which
composers are among your favorites?
\textbf{George}: I developed a strong liking for classical music early
on, during my high school days. I'm not sure what drew me to it, other
than its sheer beauty. No one in my immediate environment was
particularly musically oriented. At the same time, I have also always
liked good folk music, as well as some Greek popular music as
exemplified by the two noted composers Hadjidakis and Theodorakis. Also,
I enjoy selected pieces of popular American music and light jazz.
However, my passion is classical music. In general, I am fond of the
Germanic (German--Austrian) composers. I enjoy everything composed by
Beethoven, and, in particular, his third, fifth, sixth and ninth
symphonies, and his fifth (``Emperor'') piano concerto. I love many of
Mozart's compositions with special preference for some of his
symphonies, piano concertos 20, 21 and 22, and his operas The Magic
Flute, The Marriage of Figaro, Idomeneo and Don Giovanni. Above all, I
adore his requiem. I very much like a number of symphonies by Brahms and
by Haydn. Also, I enjoy many of Mendelssohn's compositions and, in
particular, the Scottish and the Italian symphonies. I much enjoy the
eternal Messiah by Handel, and on a lighter side, his water music and
royal fireworks. Somewhat surprisingly, I never developed a true liking
of Wagner's compositions, although I have immense appreciation for them.
I also greatly admire many Russian composers. Among them, Tchaikovsky
ranks first, followed by others such as Stravinsky, Rimsky-Korsakov,
Mussorgsky, Prokofiev, Rachmaninov, Shostakovich and Borodin.
I much enjoy many compositions of the Italian composer Corelli, some of
Vivaldi's compositions, and the arias of operas by Rossini and Verdi. It
would be an omission to leave out my liking of Finlandia, and of
symphonies number 2 and 4 by Sibelius, of some compositions by Chopin,
Liszt's Hungarian rhapsody number 2, and also of a couple of pieces by
Bizet. And of course, everybody enjoys Ravel's bolero!
\textbf{Debasis and Frank}: This is clearly more than a~passing
interest. It seems that it ranks right up there~with Jack Kiefer's
appreciation for mushrooms! (Laughs.)
\textbf{George}: Well, you did not ask me about my food preferences!
What a coincidence! I will never pass up---if I can help it---a Saturday
brunch of a mushroom omelet! I do have a preference for certain kinds of
mushrooms, but in the end, any nonpoisonous mushrooms will do!
\section*{General Outlook---Closing Remarks}
\textbf{Debasis and Frank}: On another topic close to your heart, what
are the main tenets of your political outlook?
\begin{figure*}
\caption{Debasis Bhattacharya and Frank Samaniego interviewing
George Roussas (in the middle) in his office in the Department of
Statistics at the University of California, Davis, on May 15, 2009.}
\end{figure*}
\textbf{George}: At least in the recent past and currently, the usual
terms employed to characterize political ideology are those of being
``liberal'' or ``conservative.'' However, these terms are quite
tentative, and have had different connotations in different periods of
time. They are interpreted differently by different people. I like to
think of myself as not really fitting the modern interpretations of
either camp. My basic beliefs are that one should be interested in
preserving the accumulated wisdom of our collective society and in
respecting the greatest intellectual achievements of the human species
over the millennia. If this is ``conservatism,'' so be it! At the
same time, it seems to me essential that one keeps an open mind and a
positive disposition toward new ideas. That is ``liberalism'' in my
book. However, before a new idea of any importance is adopted, it must
be deeply contemplated and should be vigorously debated and tested. The
novelty of an idea in no way guarantees its worthiness and its
usefulness to society. The unquestioning adoption of ``progressive''
ideas, just because they are novel, may be truly deleterious for the
well being of a society; that is ill-conceived license for perhaps
emotional but certainly not rational behavior, having nothing to do with
liberalism. At the same time, I believe that
it is a mistake to resist
change and adhere to the status quo, simply because it's what we know
and are comfortable with; that is simply reactionary. When I find myself
needing to take a position on a political or social question, I try to
combine what I know or can learn about the alternatives under
consideration and form my opinion based on both experience and newly
found information. In short, I believe
that we should respect existing
social structures, but we should not do so
unquestioningly. I have never
been a ``party-line'' type of citizen, and I tend to vote for those
candidates and propositions that seem to me to stand the best chance of
solving real problems and of generally enhancing the quality of our
lives and of the times we live in.
\textbf{Debasis and Frank}: This topic seems to naturally segue into
your general philosophy of life. How would you summarize that?
\textbf{George}: I would say that the most important thing is to live a
``worthy'' life, based on ethical principles, respecting valued
traditions, yet trying to leave things better than you found them.
Trying to make some meaningful contributions to society is important, as
is the avoidance of excesses that distract one from one's more noble
goals and aspirations. I believe that a commitment to ``excellence,'' in
a generalized sense, is also very important. This applies to one's
chosen profession, that is, to the way one does one's work, and also to
one's personal affairs. In both of these areas, integrity and respect
for others, and for society's needs, should always be prime
considerations. Mary and I have striven to raise our sons to have a
sincere appreciation for these same principles.
\textbf{Debasis and Frank}: What advice would you give to young people
just beginning their careers as academicians?
\textbf{George}: Aim high and then work hard, with energy, imagination
and persistence, to achieve your goals. Strive to live a worthy life.
Determine what your main strengths are, and use them to try to make a
difference, both in your professional activities and in your personal
life. Take pride in your best achievements! At the same time, accept
responsibility for whatever failures you encounter and, most
importantly, learn from them.
\textbf{Debasis and Frank}: George, this conversation has been a
distinct pleasure.
You've had an extraordinary career,
with consistently
strong contributions through your research,
your teaching, the highly
respected books and monographs you have written and your many
achievements in administrative capacities. It's been most interesting to
hear about your personal trajectory. You didn't set out with this
trajectory in mind, but we feel very fortunate that it led you to Davis.
We have all benefited from the leadership and collegiality that has
characterized your 25 years here. Thanks for taking the time to talk
with us about your life and career.
\textbf{George}: The pleasure has been all mine. I'd like to express my
deep appreciation to both of you, Frank and Debasis, for this precious
opportunity. Years from now, perhaps my children's children will read
this and be surprised that ``old pappou'' had a~pretty interesting life,
and one that was blessed in~many~ways.
\section*{Acknowledgment}
We would like to thank Patricia Aguilera and
Gloria Anaya for assisting with the preparation of the manuscript.
\end{document}
\begin{document}
\title{Dynamic Range Mode Enumeration}
\author{Tetto Obata}
\authorrunning{T. Obata}
\institute{Graduate School of Information Science and Technology, The University of Tokyo, Japan
\email{[email protected]}}
\maketitle
\begin{abstract}
The range mode problem is a fundamental problem that has been studied extensively.
Dynamic versions and enumeration versions of the problem have also been studied, but the combined dynamic enumeration version has not been considered before.
We present an efficient algorithm for this dynamic range mode enumeration problem.
\keywords{range mode query, dynamic data structure, enumeration}
\end{abstract}
\section{Introduction}
\begin{dfn}[mode]
Let $A$ be a multiset. An element $a \in A$ is a \emph{mode} of $A$ if and only if, for every $b \in A$, the multiplicity of $a$ in $A$ is at least the multiplicity of $b$ in $A$.
\end{dfn}
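For concreteness, the definition can be checked with a short sketch (Python here, purely illustrative; it assumes a nonempty multiset given as a list):

```python
from collections import Counter

def modes(multiset):
    """Return the set of modes of a nonempty multiset, given as a list."""
    counts = Counter(multiset)
    top = max(counts.values())          # the highest multiplicity
    return {a for a, c in counts.items() if c == top}
```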
In the following, ``a mode of multiset $\left\{A[l], A[l+1], \ldots, A[r]\right\}$'' is abbreviated to ``a mode of $A[l:r]$'' for a sequence $A$.
\begin{prob}[Range mode problem]
Given a sequence $A$ over an alphabet set $\Sigma$,
process a sequence of queries.
\begin{itemize}
\item {\rm mode}$\left(l, r\right)$: output one of the modes of $A[l:r]$
\end{itemize}
\end{prob}
The range mode problem is a fundamental problem with a substantial body of prior work, summarized in Table \ref{Static}.
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
& space complexity (bits) & query time complexity & conditions \\ \hline
\cite{sta1} & ${\rm O}\!\left(n^{2-2\epsilon}\log n\right)$ & ${\rm O}\!\left(n^\epsilon\right)$ & $0\leq \epsilon \leq \frac{1}{2}$ \\ \hline
\cite{durocher} & ${\rm O}\!\left(n^{2-2\epsilon}\right)$ & ${\rm O}\!\left(n^\epsilon\right)$ & $0\leq \epsilon \leq \frac{1}{2}$ \\ \hline
\cite{sta3} & ${\rm O}\!\left(\frac{n^2\log \log n}{\log n}\right)$ & ${\rm O}\!\left(1\right)$ & \\ \hline
\cite{sta4} & ${\rm O}\!\left(nm\log n\right)$ & ${\rm O}\!\left(\log m\right)$ & \\ \hline
\cite{sta5} & ${\rm O}\!\left(\left(n^{1-\epsilon}m+n\right)\log n\right)$ & ${\rm O}\!\left(n^\epsilon + \log \log n\right)$ & $0\leq \epsilon \leq \frac{1}{2}$ \\ \hline
\cite{Sumigawa} & ${\rm O}\!\left(4^knm\left(\frac{n}{m}\right)^{\frac{1}{2^{2^k}}}\right)$ & ${\rm O}\!\left(2^k\right)$ & $k \in \mathbb{Z}_{\geq 0}$ \\ \hline
\cite{Sumigawa} & ${\rm O}\!\left(nm\right)$ & ${\rm O}\!\left(\min\left(\log m, \log \log n\right)\right)$ & \\ \hline
\cite{Sumigawa} & ${\rm O}\!\left(nm\left(\log \log\frac{n}{m}\right)^2\right)$ & ${\rm O}\!\left(\log \log \frac{n}{m}\right)$ & \\ \hline
\end{tabular}
\caption{The results of previous research about the range mode problem. $n$ is the length of a string and $m$ is the maximum frequency of an item. Space complexity does not include the input string.}
\label{Static}
\end{center}
\end{table}
As a natural extension of the range mode problem, we can consider the enumeration version of the problem.
\begin{prob}[Range mode enumeration problem]
Given a sequence $A$ over an alphabet set $\Sigma$,
process a sequence of queries.
\begin{itemize}
\item {\rm modes}$\left(l, r\right)$: enumerate the modes of $A[l:r]$
\end{itemize}
\end{prob}
There is another natural extension of it, the dynamic version of the problem.
\begin{prob}[Dynamic range mode problem]
Given a sequence $A$ over an alphabet set $\Sigma$, process a sequence of queries of the following three types:
\begin{itemize}
\item {\rm insert}$\left(c, i\right)$: insert $c \left(\in \Sigma\right)$ so that it becomes the $i$-th element of $A$
\item {\rm delete}$\left(i\right)$: delete the $i$-th element of $A$
\item {\rm mode}$\left(l, r\right)$: output one of the modes of $A[l:r]$
\end{itemize}
\end{prob}
Prior work on the range mode enumeration problem and on the dynamic range mode problem is summarized in Tables \ref{Enumeration} and \ref{Dynamic}, respectively.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
& space complexity (bits) & query time complexity & condition \\ \hline
\cite{Sumigawa} & ${\rm O}\!\left(n^{2-2\epsilon}\log n\right)$ & ${\rm O}\!\left(n^\epsilon\left|output\right|\right)$ & $0 \leq \epsilon \leq \frac{1}{2}$ \\ \hline
\cite{Sumigawa} & ${\rm O}\!\left(nm\left(\log \log \frac{n}{m}\right)^2 + n\log n\right)$ & ${\rm O}\!\left(\log \log \frac{n}{m} + \left|output\right|\right)$ & \\ \hline
\cite{Sumigawa} & ${\rm O}\!\left(nm + n\log n\right)$ & ${\rm O}\!\left(\log m + \left|output\right|\right)$ & \\ \hline
\cite{Sumigawa} & ${\rm O}\!\left(n^{1+\epsilon}\log n + n^{2-\epsilon}\right)$ & ${\rm O}\!\left(\log m + n^{1-\epsilon} + \left|output\right|\right)$ & $0 \leq \epsilon \leq 1$ \\ \hline
\end{tabular}
\caption{The results of previous research about the range mode enumeration problem. $n$ is the length of a string and $m$ is the maximum frequency of an item. Space complexity does not include the input string.}
\label{Enumeration}
\end{center}
\end{table}
\begin{table}[H]
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
& space complexity(words) & query time complexity\\ \hline
\cite{dynamic1} & ${\rm O}\!\left(n_{\max} \right)$ & ${\rm O}\!\left(n_{\max}^{\frac{2}{3}}\right)$ \\ \hline
\cite{dynamic2} & ${\rm \tilde{O}}\!\left(n^{1.327997}\right)$ & ${\rm \tilde{O}}\!\left(n^{0.655994}\right)$ \\ \hline
\end{tabular}
\caption{The results of previous research about the dynamic range mode problem, where $n$ is the length of the string and $n_{\max}$ is the upper limit on the length of the string. Space complexity does not include the input string. The query time complexity is the same for all query types. The wordsize is ${\rm \Omega}\! \left(\log n\right)$.}
\label{Dynamic}
\end{center}
\end{table}
Considering the normal version, enumerating version, and the dynamic version of the problem,
we can consider another problem, the dynamic enumerating version one.
\begin{prob}[Dynamic range mode enumeration problem]
Given a sequence $A$ over an alphabet set $\Sigma$, process a sequence of queries of the following three types:
\begin{itemize}
\item {\rm insert}$\left(c, i\right)$: insert $c \left(\in \Sigma\right)$ so that it becomes the $i$-th element of $A$
\item {\rm delete}$\left(i\right)$: delete the $i$-th element of $A$
\item {\rm modes}$\left(l, r\right)$: enumerate the modes of $A[l:r]$
\end{itemize}
\end{prob}
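A naive baseline makes this interface concrete (a Python sketch of our own, with 0-indexed inclusive ranges; it answers a modes query by scanning, and is not the data structure of this paper):

```python
from collections import Counter

class NaiveDynamicRangeModes:
    """Naive baseline: fast updates, O(r - l) modes queries by scanning."""

    def __init__(self, A):
        self.A = list(A)

    def insert(self, c, i):
        # c becomes the i-th element of A (0-indexed)
        self.A.insert(i, c)

    def delete(self, i):
        del self.A[i]

    def modes(self, l, r):
        # enumerate all modes of A[l:r], bounds inclusive
        counts = Counter(self.A[l:r + 1])
        top = max(counts.values())
        return {c for c, cnt in counts.items() if cnt == top}
```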
There is no previous research about the dynamic range mode enumeration problem.\\
It is known that the range mode problem is related to the boolean matrix problem and the set intersection problem~\cite{sta5}.
\begin{prob}[Set intersection problem]
Given multisets $S_1, S_2, \ldots, S_N$ over a universe $U$, process a sequence of the following queries.
\begin{itemize}
\item {\rm intersect}$\left(i, j\right)$: check whether $S_i$ and $S_j$ intersect or not
\end{itemize}
\end{prob}
If the range mode problem can be solved efficiently, then whether two sets intersect can also be checked efficiently.
We can solve the set intersection problem by building a range mode data structure for a sequence of $2N\left|U\right|$ elements as follows
\begin{align*}
\mbox{(elements of) } S_1, S_1^{\rm c}, S_1^{\rm c}, S_1,
S_2, S_2^{\rm c}, S_2^{\rm c}, S_2,
\ldots ,
S_N, S_N^{\rm c}, S_N^{\rm c}, S_N
\end{align*}
and calling the query mode$\left(2i\left|U\right| - \left|S_i\right|, 2(j-1)\left|U\right| + \left|S_j\right|\right)$ for an intersect$\left(i, j\right)$ $\left(i < j\right)$ query.\\
Therefore if the dynamic range mode enumeration problem can be solved efficiently, the computation of the intersection of two sets and modifying of the sets can be done efficiently.
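The reduction can be sanity-checked with a short sketch (Python, our own illustration assuming plain sets rather than multisets; the window between the trailing copy of $S_i$ and the leading copy of $S_j$ is built directly rather than via positions, and the frequency threshold $2(j-i-1)+2$ is our own observation, since every element of the universe occurs exactly twice in each full middle group):

```python
from collections import Counter

def intersects(sets, universe, i, j):
    """Decide whether sets[i] and sets[j] (i < j) intersect via the maximum
    frequency in the window  S_i, S_{i+1}, S_{i+1}^c, S_{i+1}^c, S_{i+1},
    ..., S_j  of the concatenated sequence described in the text."""
    assert i < j
    window = list(sets[i])                        # trailing copy of S_i
    for k in range(i + 1, j):                     # full middle groups
        comp = universe - sets[k]
        window += list(sets[k]) + list(comp) + list(comp) + list(sets[k])
    window += list(sets[j])                       # leading copy of S_j
    top = max(Counter(window).values(), default=0)
    # each universe element occurs exactly twice per middle group, so the
    # mode frequency reaches 2*(j-i-1)+2 iff S_i and S_j share an element
    return top == 2 * (j - i - 1) + 2
```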
\subsection*{Our contribution}
Existing methods for the dynamic range mode problem cannot be applied to the dynamic range mode enumeration problem:
step 3 of Algorithm 1 of \cite{dynamic1} cannot be used for enumeration, and the algorithm of \cite{dynamic2} is based on its Problem 7, which asks for only one index.
In this paper, we present the first algorithm for the dynamic range mode enumeration problem, which processes insert and delete queries in
${\rm O}\!\left(N^{\frac{2}{3}}\log \sigma^\prime \right)$ time per query and modes queries in ${\rm O}\!\left(N^{\frac{2}{3}}\log \sigma^\prime + |output| \right)$
time per query, where $N$ is the length of the sequence and $\sigma^\prime = \left|\left\{c \in \Sigma \middle| c \mbox{ appears in the sequence}\right\}\right|$.
\section{Main Result}
The following theorem is the main result.
\begin{thm}
\label{mainresult}
There exists a data structure for the dynamic range mode enumeration problem in the word RAM model with ${\rm \Omega}\! \left(\log N + \log \sigma\right)$ bits wordsize
in ${\rm O}\!\left(N^{\frac{2}{3}}\log \sigma^\prime \right)$ time per {\rm insert} and {\rm delete} query and ${\rm O}\!\left(N^{\frac{2}{3}}\log \sigma^\prime + |output| \right)$
time per {\rm modes} query where $N$ is the length of the sequence and \\$\sigma^\prime = \left|\left\{c \in \Sigma \middle| c \mbox{ appears in the sequence}\right\}\right|$.
The space complexity is ${\rm O}\!\left(N + N^{\frac{2}{3}}\sigma^\prime\right)$ words.
\end{thm}
Our main idea is to divide the sequence into $L = {\rm \Theta}\! \left(N^\alpha\right)$ subsequences, each of length at most $C = {\rm \Theta}\!\left(N^{1-\alpha}\right)$ (possibly zero), for some parameter $\alpha$.
Let $B_i$ be the $i$-th subsequence. We call it a block. For sequences $X, Y$, we define $X + Y$ as the sequence obtained by concatenating $X$ and $Y$ in this order.\\
The data structure consists of the following components.
\begin{itemize}
\item $T_A$ : A data structure for the sequence $A$. It can process the following queries.
\begin{itemize}
\item access $A\left[l:r\right]$ $\left(0 \leq l \leq r < \left|A\right|\right)$
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{accessA}, r-l+1}\right)$ time.
\item insert a character $c\left(\in \Sigma\right)$ into $i$-th position of $A$ $\left(0 \leq i \leq \left|A\right|\right)$
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{insertA}}\right)$ time.
\item delete the $i$-th character of $A$ $\left(0 \leq i < \left|A\right|\right)$
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{deleteA}}\right)$ time.
\end{itemize}
\item $T_B$ : A data structure for the array $\left(\left|B_0\right|, \left|B_1\right|, \ldots, \left|B_{L-1}\right|\right)$, which is used to compute which block a character in $A$ belongs to. It can process the following queries.
\begin{itemize}
\item increase or decrease the $i$-th element $\left(0 \leq i < L\right)$
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{modifyB}}\right)$ time.
\item calculate $\mathrm{argmin}_i \left|B_i\right|$
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{minB}}\right)$ time.
\item calculate $\min \left\{k \middle| \sum_{i = 0}^{k} \left|B_i\right| \geq a \right\}$ $\left(0 < a \leq \sum_i \left|B_i\right| \right)$
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{binaryB}}\right)$ time.
\item insert a value $x$ into $i$-th position of the array
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{insertB}}\right)$ time.
\item delete the $i$-th element of the array
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{deleteB}}\right)$ time.
\end{itemize}
\item $S_{(l,r)} \left(0 \leq l \leq r < L\right)$ : A data structure for the ordered set\\$\left\{\left(\left(\mbox{the multiplicity of } c \mbox{ in } B_l + \cdots + B_r\right) , c\right) \middle| c \mbox{ appears in } B_l + \cdots + B_r\right\}$.
It can process the following queries.
\begin{itemize}
\item create an empty set
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{initS}}\right)$ time.
\item increment or decrement the multiplicity of character $c\left(\in \Sigma\right)$
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{modifyS}}\right)$ time.
\item compute the multiplicity of a character $c \left(\in \Sigma\right)$
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{accessS}}\right)$ time.
\item access the largest element
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{topS}}\right)$ time.
\item access the next largest to the last accessed element
in ${\rm O}\!\left(T_{\refstepcounter{mycounter}\arabic{mycounter}\label{nextS}}\right)$ time.
\end{itemize}
\end{itemize}
We introduce new operations ${\rm moveLeft}\left(i\right)$ and ${\rm moveRight}\left(i\right)$.
The operation ${\rm moveLeft}\left(i\right)$ moves the first element of $i$-th block to the $(i-1)$-st block.
In such an operation, we only need to modify the following components.
\begin{itemize}
\item $T_B$
\item $S_{(0,i-1)}, \ldots, S_{(i-1,i-1)}, S_{(i,i)}, \ldots, S_{(i,L-1)}$
\end{itemize}
This can be done in ${\rm O}\!\left(T_{\ref{modifyB}} + LT_{\ref{modifyS}}\right)$ time.
The operation ${\rm moveRight}\left(i \right)$ moves the last element of $i$-th block to the $(i+1)$-st block.
It can be done in the same time in a similar way.
We process the queries by the following method.
\subsubsection*{delete}
Let $j$ be the index of the block that contains the $i$-th element.
It can be computed in ${\rm O}\!\left(T_{\ref{binaryB}} \right)$ time.
$T_A$ and $T_B$ can be modified easily in ${\rm O}\!\left(T_{\ref{accessA}, 1} + T_{\ref{deleteA}} + T_{\ref{modifyB}}\right)$ time.
We need to modify $S_{(l,r)}$ for all $l, r$ such that $0 \leq l \leq j \leq r < L$. It can be done in ${\rm O}\!\left(L^2 T_{\ref{modifyS}} \right)$ time.
\subsubsection*{insert}
Let $j$ be $\min \left\{k \middle| \sum_{l = 0}^{k} |B_l| \geq i \right\}$. We insert $c$ into the $j$-th block,
and modify the data structure in a similar way to a delete query.\\
The length of the $j$-th block may become larger than $C$. In such a case we rebalance the block lengths as follows.
\begin{enumerate}
\item Find a block $B_k$ such that $|B_k| + 1 \leq C$.
\item Operate ${\rm moveLeft}$ or ${\rm moveRight}$ several times so that $|B_j|$ decreases by 1, $|B_k|$ increases by 1, and the other block lengths remain unchanged.
\end{enumerate}
Step 1 can be done in ${\rm O}\!\left(T_{\ref{minB}}\right)$ time. Step 2 can be done in ${\rm O}\!\left(L\left(T_{\ref{modifyB}} + LT_{\ref{modifyS}}\right)\right)$ time
because {\rm moveLeft} and {\rm moveRight} are called only ${\rm O}\!\left(L\right)$ times.\\
\subsubsection*{modes}
Let $(i, j)$ be the maximal interval of blocks contained in $A[l:r]$.
It can be computed in ${\rm O}\!\left(T_{\ref{binaryB}}\right)$ time.
If $A[l:r]$ does not contain any whole block, $B_i + \cdots + B_j$ stands for the empty sequence and $S_{(i,j)}$ stands for the empty set below.\\
It holds that $|A[l:r] \setminus \left(B_i + \cdots + B_j\right)| \leq 2 C$.
Every mode of $A[l:r]$ is either a mode of $B_i + \cdots + B_j$ or appears in
$A[l:r] \setminus \left(B_i + \cdots + B_j\right)$.
If no character $c$ satisfies both of the following conditions
\begin{itemize}
\item $c$ is a mode of $A[l:r]$
\item $c$ does not appear in $A[l:r] \setminus \left(B_i + \cdots + B_j\right)$
\end{itemize}
then every mode of $A[l:r]$ appears in $A[l:r] \setminus \left(B_i + \cdots + B_j\right)$.
We scan the elements in $A[l:r] \setminus \left(B_i + \cdots + B_j\right)$
and count the occurrences of each character in $A[l:r]$ using $S_{\left(i,j\right)}$ and a new ordered set in ${\rm O}\!\left(T_{\ref{initS}} + CT_{\ref{modifyS}} + T_{\ref{accessA}, C} + CT_{\ref{accessS}} \right)$ time,
and compute the number of occurrences of a mode of $A[l:r]$ in ${\rm O}\!\left(T_{\ref{accessA}, C} + CT_{\ref{accessS}} + T_{\ref{topS}}\right)$ time.
Using this value and the ordered set, we can decide whether a character satisfying the conditions above exists.
If no such character exists, the enumeration is complete.
If one exists, then every mode of $B_i + \cdots + B_j$ is also a mode of $A[l:r]$,
so we can enumerate the modes of $A[l:r]$ from the previous scan together with the enumeration of the modes of $B_i + \cdots + B_j$, which can be done in ${\rm O}\!\left(T_{\ref{topS}} + T_{\ref{nextS}}\left|output\right|\right)$ time.
Algorithm \ref{modes} presents the procedure for the modes query.\\
\begin{algorithm}[H]
\caption{The algorithm for the modes query.}
\label{modes}
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\DontPrintSemicolon
\KwInput{range $\left(l, r\right)$}
\KwOutput{all modes of $A[l:r]$}
\SetKwFunction{FMain}{Main}
\SetKwProg{Fn}{Function}{:}{\KwRet}
\Fn{\FMain}{
$\left(i, j\right) \leftarrow$ maximal interval such that $B_i + \cdots + B_j \subset A[l:r]$\;
$T \leftarrow$ a new empty ordered set\;
\For{$c \in A[l:r] \setminus \left(B_i + \cdots + B_j\right)$}{
increment the multiplicity of $c$ in $T$\;
}
$app \leftarrow 0$\;
\For{$\left(app_c, c\right) \in T$}{
$app = \max\left(app, app_c + \left(\mbox{the number of appearances of }c \mbox{ in } B_i + \cdots + B_j\right) \right)$\;
}
$ans \leftarrow \varnothing$\;
\For{$\left(app_c, c\right) \in T$}{
\If{$app = app_c + \left(\mbox{the number of appearances of }c \mbox{ in } B_i + \cdots + B_j\right)$}
{
$ans \leftarrow ans \cup \left\{c\right\}$\;
}
}
$\left(app_c, c\right) \leftarrow$ the top element of $S_{\left(i, j\right)}$\;
\While{$c \neq NULL$ and $app_c = app$}{
$ans \leftarrow ans \cup \left\{c\right\}$\;
$\left(app_c, c\right) \leftarrow$ the next largest to $\left(app_c, c\right)$ in $S_{\left(i, j\right)}$\;
}
\KwRet $ans$\;
}
\end{algorithm}
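The case analysis of Algorithm \ref{modes} can be sketched in a few lines (Python, our own illustration on a plain list with fixed-size blocks; the counter standing in for $S_{(i,j)}$ is rebuilt per query here, whereas the real structure maintains it incrementally, so the complexities differ):

```python
from collections import Counter

def modes_in_range(A, block_size, l, r):
    """Enumerate all modes of A[l:r] (0-indexed, inclusive) by combining the
    boundary elements with the counts of the fully covered blocks."""
    i = (l + block_size - 1) // block_size     # first block fully inside [l, r]
    j = (r + 1) // block_size - 1              # last block fully inside [l, r]
    if i > j:                                  # no block is fully covered
        covered, boundary = Counter(), A[l:r + 1]
    else:
        covered = Counter(A[i * block_size:(j + 1) * block_size])
        boundary = A[l:i * block_size] + A[(j + 1) * block_size:r + 1]
    T = Counter(boundary)
    # best frequency achieved by a character that appears on the boundary
    best_boundary = max((T[c] + covered[c] for c in T), default=0)
    top_covered = max(covered.values(), default=0)
    app = max(best_boundary, top_covered)      # frequency of a mode of A[l:r]
    ans = {c for c in T if T[c] + covered[c] == app}
    if top_covered == app:                     # block modes are modes of A[l:r]
        ans |= {c for c, cnt in covered.items() if cnt == app}
    return ans
```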
In order to keep $L = {\rm \Theta}\!\left(N^\alpha\right)$ and $C = {\rm \Theta}\!\left(N^{1-\alpha}\right)$, we use the technique for dynamic data structures~\cite{navarro}.
We group the blocks into three types ${\rm p, c, n}$ (previous, current, next).
Set the number and size of the blocks as follows.
\begin{itemize}
\item $L_{\rm p} = \lceil\left(\frac{N}{2}\right)^\alpha\rceil, C_{\rm p} = \lceil\left(\frac{N}{2}\right)^{1-\alpha}\rceil$
\item $L_{\rm c} = \lceil N^\alpha\rceil, C_{\rm c} = \lceil N^{1-\alpha}\rceil$
\item $L_{\rm n} = \lceil \left(2N\right)^\alpha\rceil, C_{\rm n} = \lceil \left(2N\right)^{1-\alpha}\rceil$
\end{itemize}
When we initialize the data structure, all elements are stored in c blocks, and the data structure is initialized for $L = L_{\rm p} + L_{\rm c} + L_{\rm n}$ blocks.
\subsubsection*{insert}
Move elements so that the sum of the elements in n blocks increases by two and that in p blocks decreases by one (unless they are already empty) compared to before the query.
To achieve this, we move elements as follows
\begin{itemize}
\item insertion into p: p $\rightarrow$ c, p $\rightarrow$ c, c $\rightarrow$ n, c $\rightarrow$ n
\item insertion into c: p $\rightarrow$ c, c $\rightarrow$ n, c $\rightarrow$ n
\item insertion into n: p $\rightarrow$ c, c $\rightarrow$ n
\end{itemize}
where x $\rightarrow$ y means moving the last element of x blocks to y blocks.
If all x blocks are empty, it is ignored.
\subsubsection*{delete}
Move elements so that the sum of the elements in n blocks decreases by two (unless they are already empty) and that in p blocks increases by one compared to before the query.
To achieve this, we move elements as follows
\begin{itemize}
\item deletion from p: c $\leftarrow$ n, c $\leftarrow$ n, p $\leftarrow$ c, p $\leftarrow$ c
\item deletion from c: c $\leftarrow$ n, c $\leftarrow$ n, p $\leftarrow$ c
\item deletion from n: c $\leftarrow$ n, p $\leftarrow$ c
\end{itemize}
\end{itemize}
where x $\leftarrow$ y means moving the first element of y blocks to x blocks.
If all y blocks are empty, it is ignored.
\begin{lem}{\cite{navarro}}
When the length of the string becomes double, all elements are in {\rm n} blocks.\\
When the length of the string becomes half, all elements are in {\rm p} blocks.
\end{lem}
If the length of the string becomes double, set the blocks as follows
\begin{align*}
\left(\rm{p, c, n}\right) \leftarrow \left(\rm{pp, p, c}\right)
\end{align*}
and if the length of the string becomes half, set the blocks as follows
\begin{align*}
\left(\rm{p, c, n}\right) \leftarrow \left(\rm{c, n, nn}\right)
\end{align*}
where pp (previous to the previous) and nn (next to the next) are other types of blocks and $L_{\rm pp}, C_{\rm pp}, L_{\rm nn}, $ and $C_{\rm nn}$ are defined as follows.
\begin{itemize}
\item $L_{\rm pp} = \lceil\left(\frac{N}{4}\right)^\alpha\rceil, C_{\rm pp} = \lceil\left(\frac{N}{4}\right)^{1-\alpha}\rceil$
\item $L_{\rm nn} = \lceil \left(4N\right)^\alpha\rceil, C_{\rm nn} = \lceil\left(4N\right)^{1-\alpha}\rceil$
\end{itemize}
We need to add ${\rm O}\!\left(N^{1 + 2\alpha}\right)$ extra elements for $S_{\left(l, r\right)}$ $(0 \leq l \leq r < L_{\rm pp} + L_{\rm p} + L_{\rm c}$ $+ L_{\rm n} + L_{\rm nn})$ in order to prepare the pp blocks and nn blocks; these are prepared gradually, starting from the time of the last block reset, over the ${\rm \Omega}\! \left(N\right)$ queries that occur before the next reset.
These operations need\\
${\rm O}\!\left(T_{\ref{minB}} + T_{\ref{insertB}} + T_{\ref{deleteB}}
+ L\left(T_{\ref{modifyB}} + LT_{\ref{modifyS}}\right)
+ N^{2\alpha}T_{\ref{modifyS}} + \max\left(1, N^{2\alpha - 1}\right)T_{\ref{initS}} \right)$ time per query.
\begin{thm}
There exists a data structure for the dynamic range mode \mbox{enumeration} problem
in
${\rm O}\!\left(T_{\ref{accessA}, 1} + T_{\ref{insertA}} + T_{\ref{deleteA}} + T_{\ref{minB}} + T_{\ref{binaryB}}
+ N^{\alpha}T_{\ref{modifyB}} + N^{2\alpha}T_{\ref{modifyS}}
+ \max\left(1, N^{2\alpha - 1}\right)T_{\ref{initS}} \right)$
time per {\rm insert} and {\rm delete} query and
${\rm O}\!\left(T_{\ref{binaryB}}
+ T_{\ref{accessA}, {\rm \Theta}\!\left(N^{1-\alpha}\right)} + N^{1-\alpha}T_{\ref{accessS}} + T_{\ref{topS}}
+ T_{\ref{nextS}}\left|output\right|\right)$
time per {\rm modes} query where $N$ is the length of the sequence
and $\sigma = \left|\Sigma\right|$.
\end{thm}
\begin{proof}[Proof of Theorem \ref{mainresult}]
We use balanced binary search trees for $T_A$ and $T_B$.\\
We use two balanced binary search trees for each $S_{\left(l, r\right)}$.
One of them maps each character in $B_l + \cdots + B_r$ to the number of its occurrences in $B_l + \cdots + B_r$.
The other is used as an ordered set $\left\{\left(t\left(c\right), c\right) \middle| c\in \Sigma, t\left(c\right) > 0\right\}$, where $t\left(c\right)$ is the number of occurrences of $c$ in $B_l + \cdots + B_r$.
Then the following bounds hold.
\begin{align*}
&T_{\ref{accessA}, a} = {\rm O}\!\left(a + \log N\right)\\
&T_{\ref{insertA}} = {\rm O}\!\left(\log N\right)\\
&T_{\ref{deleteA}} = {\rm O}\!\left(\log N\right)\\
&T_{\ref{modifyB}} = {\rm O}\!\left(\log L\right) = {\rm O}\!\left(\log N\right)\\
&T_{\ref{minB}} = {\rm O}\!\left(\log L\right) = {\rm O}\!\left(\log N\right)\\
&T_{\ref{binaryB}} = {\rm O}\!\left(\log L\right) = {\rm O}\!\left(\log N\right)\\
&T_{\ref{insertB}} = {\rm O}\!\left(\log L\right) = {\rm O}\!\left(\log N\right)\\
&T_{\ref{deleteB}} = {\rm O}\!\left(\log L\right) = {\rm O}\!\left(\log N\right)\\
&T_{\ref{initS}} = {\rm O}\!\left(1\right)\\
&T_{\ref{modifyS}} = {\rm O}\!\left(\log \sigma^\prime\right)\\
&T_{\ref{accessS}} = {\rm O}\!\left(\log \sigma^\prime\right)\\
&T_{\ref{topS}} = {\rm O}\!\left(1\right)\\
&T_{\ref{nextS}} = {\rm O}\!\left(1\right)
\end{align*}
Setting $\alpha = \frac{1}{3}$, we obtain Theorem \ref{mainresult}.
\end{proof}
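The choice $\alpha = \frac{1}{3}$ balances the dominant terms of the two bounds; explicitly (a back-of-the-envelope computation of our own, using the bounds just listed):

```latex
\[
  N^{2\alpha} = N^{1-\alpha}
  \iff 2\alpha = 1 - \alpha
  \iff \alpha = \tfrac{1}{3},
\]
% with \alpha = 1/3 both the update term N^{2\alpha}\log\sigma' and the
% scan term N^{1-\alpha}\log\sigma' become N^{2/3}\log\sigma',
% which yields the bounds stated in Theorem \ref{mainresult}.
```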
\section{Concluding Remarks}
We introduced a new problem, the dynamic range mode enumeration problem,
and gave an algorithm whose time complexity for a modes query is linear in the output size plus an additive term.
However, this term is larger than the query time for a mode query in the dynamic range mode problem.
It may be possible to find an algorithm for the dynamic range mode enumeration problem whose time complexity per query matches that of the dynamic range mode problem, apart from the term depending on the output size.
\end{document}
\begin{document}
\allowdisplaybreaks
\title{Littlewood--Paley--Stein Estimates for Non-local Dirichlet Forms}
\author{Huaiqian Li\qquad Jian Wang}
\thanks{ \emph{H.\ Li:}
Center for Applied Mathematics, Tianjin University, Tianjin 300072, P. R. China \texttt{[email protected]}}
\thanks{ \emph{J.\ Wang:}
College of Mathematics and Informatics \& Fujian Key Laboratory of Mathematical Analysis and Applications (FJKLMAA), Fujian Normal University, 350007 Fuzhou, P. R. China. \texttt{[email protected]}}
\maketitle
\begin{abstract} We obtain the boundedness in $L^p$ spaces for all $1<p<\infty$ of the so-called vertical Littlewood--Paley functions for
non-local Dirichlet forms on a metric measure space under some mild assumptions. For $1<p\leqslant 2$, a pseudo-gradient is
introduced to overcome the difficulty that chain rules are not
available for non-local operators, and then Mosco convergence is used to pave the way from the case of a finite jumping kernel to the general case, while for $2\leqslant p<\infty$, the Burkholder--Davis--Gundy inequality is effectively applied. The former
method is analytic and the latter probabilistic. The results extend those for pure jump symmetric L\'evy processes in Euclidean spaces.

\noindent\textbf{Keywords:} Littlewood--Paley--Stein estimate; non-local Dirichlet form; pseudo-gradient;
Mosco convergence; Burkholder--Davis--Gundy inequality.

\noindent \textbf{MSC 2010:} 60G51; 60G52; 60J25; 60J75.
\end{abstract}
\section{Introduction}\label{section1}
Let $(M,d)$ be a locally compact and separable metric space, and
$\mu$ be a positive Radon measure on $M$ with full support. We will refer to such a triple $(M,d,\mu)$ as a metric measure space. As usual, the real $L^p$ space is denoted by $L^p(M,\mu)$ with the norm
$$\|f\|_p:= \left(\int_M |f(x)|^p\, \mu(d x)\right)^{1/p},\quad 1\leqslant p <\infty,$$
and
$$\|f\|_\infty := \textup{ess}\sup_{x\in M} |f(x)|,$$
where $\textup{ess}\sup$ is the essential supremum. The inner product of functions $f,g\in L^2(M,\mu)$ is denoted by $\langle f, g\rangle$.
Consider a Dirichlet form $(D,\mathscr{F})$ in $L^2(M,\mu)$, which is a closed, symmetric,
non-negative
definite, bilinear form $D: \mathscr{F}\times \mathscr{F}\rightarrow\mathds R$ defined on a dense subspace $\mathscr{F}$ of $L^2(M,\mu)$, satisfying in addition the Markov property. The closedness means that $\mathscr{F}$ is a Hilbert space with respect to the
$D_1^{1/2}$-inner product defined by
$$D_1(f,g)=D(f,g)+\langle f, g\rangle.$$
The Markov property means that if $f\in\mathscr{F}$ then the function $\hat{f}:=\max\{0,\min\{1,f\}\}$ belongs to $\mathscr{F}$ and $D(\hat{f})\leqslant D(f)$. Here and in the sequel, we write $D(f)$ instead of $D(f,f)$ for short.
Let $L$ be the non-negative definite
$L^2$-generator
of the Dirichlet form $(D,\mathscr{F})$, which is a self-adjoint operator on $L^2(M,\mu)$ with domain $\mathscr{D}(L)$ such that
$$D(f,g)=\langle Lf,g\rangle,$$
for all $f\in\mathscr{D}(L)$ and $g\in\mathscr{F}$. The generator $L$ gives rise to the semigroup $(P_t)_{t\geqslant0}$ with $P_t=e^{-tL}$ for all $t\geqslant0$ in the sense of functional calculus. It turns out that $(P_t)_{t\geqslant0}$ is a strongly continuous, contractive, symmetric semigroup on $L^2(M,\mu)$, and satisfies the Markov property, which means that $0\leqslant P_tf\leqslant1$ for every $t>0$ provided $0\leqslant f\leqslant1$.
Let $C_c(M)$ be the space of all continuous functions on $M$ with compact support. Recall that the Dirichlet form $(D,\mathscr{F})$ is
called regular if $\mathscr{F}\cap C_c(M)$ is dense both in $\mathscr{F}$ (with respect to the
$D_1^{1/2}$-norm) and in $C_c(M)$ (with respect to the supremum norm). It follows that if $(D,\mathscr{F})$ is regular, then every
function $f\in\mathscr{F}$ admits a quasi-continuous version $\tilde{f}$ (see e.g. \cite[Theorem 2.1.3]{FOT}). Throughout this
paper, we abuse the notation and
represent
$f\in \mathscr{F}$ by its quasi-continuous version without writing $\tilde f$.
In order to introduce the so-called vertical Littlewood--Paley square function, a ``modulus of gradient'' is necessary. The suitable candidate in this general setting is the \emph{carr\'{e} du champ} operator. It is a non-negative, symmetric and continuous bilinear form $\Gamma: \mathscr{F}\times\mathscr{F}\rightarrow L^1(M,\mu)$ such that
$$D(f,g)=\int_M \Gamma(f,g)\,d\mu\quad\mbox{for every } f,g\in \mathscr{F},$$
which is uniquely characterized on the algebra $L^\infty(M,\mu)\cap \mathscr{F}$ by
$$\int_M\Gamma(f,g)h\,d\mu=D(f,gh)+D(g,fh)-D(fg,h),$$
for every $f,g,h\in L^\infty(M,\mu)\cap \mathscr{F}$. See \cite{BH1991} for more details. In the sequel, we use the notation $\Gamma(f):=\Gamma(f,f)$ for convenience.
\ \
In this paper, we are concerned with non-local Dirichlet forms. Let $(D,\mathscr{F})$ be a regular Dirichlet form of pure jump type in $L^2(M,\mu)$ defined as
\begin{equation}\label{nondi}
D(f,g)=\frac{1}{2}\iint_{M\times M \backslash {\rm diag}} (f(x)-f(y))(g(x)-g(y)) \,J(x,dy)\,\mu(dx),\quad f,g\in\mathscr{F}, \end{equation}
where ${\rm diag}$ denotes the diagonal set $\{(x,x):x \in M\}$ and $J(x,dy)$ is a non-negative kernel satisfying the symmetry property
$$J(x,dy)\,\mu(dx)=J(y,dx)\,\mu(dy).$$
$J(x,dy)$ is called the jumping kernel associated with the Dirichlet form $(D,\mathscr{F})$ in the literature.
Then the \emph{carr\'e du champ} operator $\Gamma$ is defined as follows:
$$\Gamma(f,g)(x)=\frac{1}{2}\int_M (f(x)-f(y))(g(x)-g(y))\,J(x,dy),\quad f,g\in \mathscr{F} \text{ and } x\in M.$$
Clearly,
$$D(f)=\displaystyle\int_M \Gamma(f)(x)\,\mu(dx)\quad\mbox{for every }f\in\mathscr{F}.$$
This motivates us to define the gradient (more precisely, the module of gradient) of a function $f\in \mathscr{F}$ by
\begin{equation}\label{g-1}
|\nabla f|(x)= \sqrt{ \Gamma(f)}(x)=
\left(\frac{1}{2}\int_M (f(x)-f(y))^2\,J(x,dy)\right)^{1/2},\quad x\in M. \end{equation}
Note that, due to the symmetry of $J(x,dy)\,\mu(dx)$, for every $f\in\mathscr{F}$,
$$D(f)= \iint_{\{(x,y)\in M\times M: f(x)\geqslant f(y)\}} (f(x)-f(y))^2 \,J(x,dy)\,\mu(dx).$$
Then we can also define the following (module of) modified gradient: for every $f\in\mathscr{F}$,
\begin{equation}\label{g-2}
|\widetilde\nabla f|_*(x):=
\left(\int_{\{y\in M:\,f(x)\geqslant f(y)\}} (f(x)-f(y))^2\,J(x,dy)\right)^{1/2},\qquad x\in M. \end{equation}
From \eqref{g-1} and \eqref{g-2}, it is easy to see that, for every $f\in\mathscr{F}$,
$$0\leqslant |\widetilde\nabla f|_*\leqslant \sqrt{2}|\nabla f|\qquad\mbox{and}\qquad\||\nabla f|\|_2^2= \||\widetilde\nabla f|_*\|_2^2=D(f,f).$$
Actually, motivated by \cite{BBL}, we need a further modification of the gradient (and this is a crucial point; see some remarks at the end of Section \ref{section2}). For any $f\in\mathscr{F}$, we define
\begin{equation}\label{g-21}
|\widetilde\nabla f|(x)=
\left(\int_{\{y\in M:\,|f|(x)\geqslant |f|(y)\}} (f(x)-f(y))^2\,J(x,dy)\right)^{1/2},\qquad x\in M. \end{equation}
It is easy to see that
$|\widetilde\nabla f|=|\widetilde\nabla f|_*$ for any $0\leqslant f\in \mathscr{F}$; however, for general $f\in \mathscr{F}$,
they are not comparable to each other. We also note that, as for the standard module of gradient,
$|\widetilde\nabla f|= |\widetilde\nabla (-f)|$ for any $f\in \mathscr{F}$;
however, this property fails for $|\widetilde\nabla \cdot|_*$. This in some sense indicates that the definition of the modified gradient $|\widetilde\nabla \cdot|$ above is more reasonable than that of $|\widetilde\nabla \cdot|_*$.
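The sign behaviour just described can be illustrated on a toy two-point space; this is a purely hypothetical sketch (the state space, kernel and function below are our own choices, not objects from the paper):

```python
# Toy check of the sign behaviour of the two modified gradients on a
# two-point space M = {0, 1} with symmetric kernel J(0, {1}) = J(1, {0}) = 1.
# Illustrative sketch only; all concrete choices here are hypothetical.
import math

J = {(0, 1): 1.0, (1, 0): 1.0}

def mod_grad_star(f, x):
    # |~grad f|_*(x): integrate over {y : f(x) >= f(y)}, cf. (g-2)
    return math.sqrt(sum(w * (f[x] - f[y]) ** 2
                         for (a, y), w in J.items()
                         if a == x and f[x] >= f[y]))

def mod_grad(f, x):
    # |~grad f|(x): integrate over {y : |f|(x) >= |f|(y)}, cf. (g-21)
    return math.sqrt(sum(w * (f[x] - f[y]) ** 2
                         for (a, y), w in J.items()
                         if a == x and abs(f[x]) >= abs(f[y])))

f = {0: 1.0, 1: -2.0}
neg_f = {x: -v for x, v in f.items()}

# |~grad(.)| is invariant under f -> -f, like the standard module of gradient.
assert all(mod_grad(f, x) == mod_grad(neg_f, x) for x in (0, 1))
# |~grad(.)|_* is not: the defining set {f(x) >= f(y)} flips with the sign.
assert mod_grad_star(f, 0) == 3.0 and mod_grad_star(neg_f, 0) == 0.0
```

Here $f(0)=1$ and $f(1)=-2$, so the point $x=0$ sees the jump for $f$ but not for $-f$ under $|\widetilde\nabla\cdot|_*$, while $|\widetilde\nabla\cdot|$ assigns the same value to $f$ and $-f$.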
For every $f\in L^1(M,\mu)\cap L^\infty(M,\mu)$, we now define the vertical Littlewood--Paley $\mathscr{H}$-functions $\mathscr{H}_\nabla(f)$ and $\mathscr{H}_{\widetilde\nabla}(f)$ corresponding to the non-local Dirichlet form $(D,\mathscr{F})$ in \eqref{nondi} as
$$\mathscr{H}_\nabla(f)(x)=\left(\int_0^\infty |\nabla P_t f|^2(x)\,dt\right)^{1/2},$$
and
\begin{equation}\label{eeefff}\mathscr{H}_{\widetilde\nabla}(f)(x)=\left(\int_0^\infty |\widetilde\nabla P_tf|^2(x)\,dt\right)^{1/2}, \end{equation}
for every $x\in M$.
The purpose of this paper is to establish Littlewood--Paley--Stein estimates in $L^p(M,\mu)$ for the non-local Dirichlet form $(D,\mathscr{F})$
and for all $1<p<\infty$. The main result
is the following theorem (see Theorems \ref{th1} and \ref{thp} below for precise statements).
\begin{theorem}\label{main}
Let $(M,d,\mu)$ be a metric measure space. Consider the non-local Dirichlet form $(D,\mathscr{F})$ defined in \eqref{nondi}. Under some
mild assumptions, for $p\in (1,2]$ the vertical Littlewood--Paley operator $\mathscr{H}_{\widetilde\nabla}$ is bounded in $L^p(M,\mu)$; for $p\in [2,\infty)$ the vertical Littlewood--Paley operator $\mathscr{H}_\nabla$ is bounded in $L^p(M,\mu)$.
\end{theorem}
The prototype of Littlewood--Paley--Stein estimates is the $L^p$ boundedness of the Littlewood--Paley $g$-function in the Euclidean space for all $1<p<\infty$;
see \cite[Chapter IV, Theorem 1]{St1970}. There are many extensions of this result in various directions, and we only recall some of them.
We are interested in the vertical (i.e., with the derivative taken in the spatial variable) Littlewood--Paley--Stein estimates for heat or Poisson semigroups. Let $M$ be a complete and connected (smooth) Riemannian manifold with Riemannian volume measure $dx$, the non-negative Laplace--Beltrami operator $\Delta$, the corresponding heat semigroup $(e^{-t\Delta})_{t\geqslant0}$ and Poisson semigroup $(e^{-t\sqrt{\Delta}})_{t\geqslant0}$, as well as the gradient operator $\nabla$. For every $f\in C_c^\infty(M)$, the vertical Littlewood--Paley $\mathscr{H}$- and $\mathscr{G}$-functions are given by
\begin{equation}\label{cla}
\mathscr{H}(f)(x)=\left(\int_0^\infty |\nabla e^{-t\Delta} f|^2(x)\,dt\right)^{1/2},
\end{equation}
and \begin{equation}\label{cla-G}
\mathscr{G}(f)(x)=\left(\int_0^\infty t|\nabla e^{-t\sqrt{\Delta}} f|^2(x)\,dt\right)^{1/2},
\end{equation}
for every $x\in M$, where $|\cdot|$ is the length induced by the Riemannian metric in the tangent space. The operator $\mathscr{H}$
is said to be bounded in $L^p(M,dx)$ (or the Littlewood--Paley--Stein estimate holds for $\mathscr{H}$) for $p\in (1,\infty)$, if there exists a constant $c_p>0$ such that
$$\|\mathscr{H}(f)\|_p\leqslant c_p\|f\|_p,\qquad f\in C_c^\infty(M).$$
(The same for $\mathscr{G}$.) On the side of analytic approaches, Stein \cite[Chapter II]{Stein} proved the $L^p$ boundedness of $\mathscr{G}$ for all $p\in (1,\infty)$ on compact Lie groups. Lohou\'{e} \cite{Lou1987} investigated the $L^p$ boundedness of the Littlewood--Paley $\mathscr{H}_a$- and $\mathscr{G}_a$-functions, defined as $$\mathscr{H}_a(f)(x)=\big(\int_0^\infty e^{at}|\nabla e^{-t\Delta}f(x)|^2\, dt\big)^{1/2}$$
and $$\mathscr{G}_a(f)(x)=\big(\int_0^\infty te^{at}|\nabla e^{-t\sqrt{\Delta}}f(x)|^2\, dt\big)^{1/2}$$
on Cartan--Hadamard manifolds, where $a$ is a real number to be determined. In fact, no additional assumptions on $M$ are needed for the boundedness of $\mathscr{H}$ and $\mathscr{G}$ in $L^p(M,dx)$ for $1<p\leqslant2$ (see e.g.\ \cite{CDD}), while, for the case $2<p<\infty$, much stronger assumptions are needed (see e.g.\ \cite[Proposition 3.1]{CD2003}). On the side of probabilistic approaches, we should mention that Meyer \cite{Mey,Mey1981} studied the $L^p$ boundedness for all $1<p<\infty$ of the Littlewood--Paley
$\mathscr{G}_*$-function, defined as
$$\mathscr{G}_*(f)(x)=\big(\int_0^\infty te^{-2t\sqrt{\Delta}}|\nabla e^{-t\sqrt{\Delta}}f(x)|^2\, dt\big)^{1/2}.$$
Bakry established a slightly different Littlewood--Paley--Stein estimate for diffusion processes under the condition that the Bakry--\'Emery operator $\Gamma_2$ is non-negative in \cite{Bakry1985}, and then proved it under the condition that $\Gamma_2$ is bounded from below on complete Riemannian manifolds in \cite{Bakry1987}. See also \cite{ShYo}, where strong assumptions are needed to guarantee a nice algebra and to run the $\Gamma$-calculus for diffusion processes. Li \cite{Li2006} established the Littlewood--Paley--Stein estimate for $\mathscr{G}$ on complete Riemannian manifolds, as well as the $L^p$ boundedness for $p\in(1,2]$ of Littlewood--Paley square functions for Poisson semigroups generated by the Hodge--Laplacian. We do not mention the many studies on Wiener spaces here.
For non-local Dirichlet forms, to the authors' knowledge, much less is known about the $L^p$ boundedness of the vertical Littlewood--Paley operator $\mathscr{H}$. Dungey \cite{Nick} obtained the $L^p$ boundedness of the vertical Littlewood--Paley operator with $1<p\leqslant2$ for random walks on graphs and groups. Recently, Ba\~{n}uelos, Bogdan and Luks \cite{BBL} studied Littlewood--Paley--Stein estimates for symmetric L\'evy processes in the Euclidean space (see \cite{BK} for a more recent extension to non-symmetric L\'evy processes). In the aforementioned papers on the L\'evy process case, the Euclidean structure makes the Hardy--Stein identity available, and this is used in a crucial way. However, the approach to prove our main result
(Theorem \ref{main} above), which is presented in the metric measure space setting, is different. Indeed, when $p\in (1,2]$, we prove the
boundedness of $\mathscr{H}_{\widetilde\nabla}$ by using the pseudo-gradient to overcome the difficulty that chain rules are not available
for non-local operators, and then by applying the Mosco convergence from the finite jumping kernel case
to the general case (see Theorem \ref{th1-23} below); when $p\in [2,\infty)$, we verify the boundedness of $\mathscr{H}_\nabla$
by following the idea of \cite{BBL} to express the square function as a conditional expectation of the quadratic variation of a suitable martingale and then applying the Burkholder--Davis--Gundy inequality (see Theorem \ref{thp} below). We would like to mention that, although $|\nabla f|$
seems more natural than the somewhat asymmetric $|\widetilde\nabla f|$, \cite[Example 2]{BBL} indicates that
in some settings $|\nabla f|$ may be too large to yield the boundedness of $\mathscr{H}_\nabla$ in $L^p(M,\mu)$ for $1<p< 2$.
To indicate clearly the contribution of our paper, we present two examples to which Theorem \ref{main} (see also Theorems \ref{th1} and \ref{thp} below) applies.
\begin{example}\it Let $(D,\mathscr{F})$ be a regular Dirichlet form on $L^2(M,\mu)$ of the form
\begin{align*}D(f,f)=\frac{1}{2}\iint_{M\times M \backslash {\rm diag}}(f(y)-f(x))^2J(x,y)\,\mu(dx)\,\mu(dy),\qquad f\in \mathscr{F},
\end{align*}
where $J(x,y)$ is a non-negative measurable symmetric function on $M\times M\backslash {\rm diag}$ such that
$$\int_{\{y\in M: d(x,y)>r\}} J(x,y)\,\mu(dy)<\infty,\qquad x\in M,\ r>0.$$ Then, for any $p\in (1,2]$, the vertical Littlewood--Paley operator $\mathscr{H}_{\widetilde\nabla}$ is bounded in $L^p(M,\mu)$. \end{example}
The next example includes symmetric stable-like processes on $\mathds R^d$ of variable order.
\begin{example}\label{exm2}\it Let $s:\mathds R^d\to [s_1,s_2]\subset (0,2)$ be a measurable function such that
$$|s(x)-s(y)|\leqslant \frac{c}{\log(2/|x-y|)},\qquad |x-y|\leqslant 1,$$ holds for some constant $c>0$. Let $(D,\mathscr{F})$ be the regular Dirichlet form on $L^2(\mathds R^d,dx)$ given by
\begin{align*}D(f,f)=&\frac{1}{2}\iint(f(y)-f(x))^2j(x,y)\,dx\,dy,\\
\mathscr{F}=&\overline{ C_c^1(\mathds R^d)}^{D^{1/2}_1},
\end{align*}
where $D_1(f,f)=D(f,f)+\|f\|_2^2$, $j(x,y)$ is a non-negative symmetric measurable function on $\mathds R^d\times \mathds R^d$ such that
$$\sup_{x\in \mathds R^d} \int_{\{|y-x|>1\}}j(x,y)\,dy<\infty,$$
and, for some constants $c_1,c_2>0$,
\begin{equation}\label{ed4}\frac{c_1}{|x-y|^{d+s(x)\wedge s(y)}}\leqslant j(x,y)\leqslant \frac{c_2}{|x-y|^{d+s(x)\vee s(y)}},\qquad |x-y|\leqslant 1. \end{equation} Then, for $p\in [2,\infty)$, the vertical Littlewood--Paley operator $\mathscr{H}_{\nabla}$ is bounded in $L^p(\mathds R^d,dx)$. \end{example}
\ \
The next two sections are devoted to
the proof of the main result (Theorem \ref{main} above), which is treated separately for $p\in (1,2]$ and $p\in [2,\infty)$.
\section{Littlewood--Paley--Stein estimates for $1<p\leqslant 2$}\label{section2}
\subsection{Pseudo-gradient} To explain the motivation, for the moment let $M$ be a (smooth) Riemannian manifold, $\Delta$ the Laplace--Beltrami operator, and $\nabla$ the Riemannian gradient, and denote by $|\cdot|$ the length induced by the Riemannian metric in the tangent space. A beautiful way to prove the $L^p$ boundedness of $\mathscr{H}$ and $\mathscr{G}$, defined by \eqref{cla} and \eqref{cla-G} respectively, is based on the following chain rule:
\begin{equation}\label{e:rie}
\Delta (f^p)=pf^{p-1} \Delta f-p(p-1)f^{p-2}|\nabla f|^2,
\end{equation}
which is valid for all $1<p<\infty$ and all positive
smooth functions $f$ on the Riemannian manifold $M$;
see \cite[Chapter IV, Lemma 1, p.\ 86]{Stein} or the proof of \cite[Lemma 2.1]{CDD} for example.
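As a quick numerical sanity check (not part of the paper's argument), the chain rule \eqref{e:rie} can be tested in the simplest one-dimensional Euclidean case, where the non-negative Laplacian is $-\mathrm{d}^2/\mathrm{d}x^2$; the test function, exponent and evaluation point below are arbitrary hypothetical choices:

```python
# Finite-difference check of Delta(f^p) = p f^(p-1) Delta f - p(p-1) f^(p-2) |f'|^2
# with the non-negative Laplacian Delta = -d^2/dx^2 on the real line.
# Sketch only: f, p and x0 below are hypothetical test choices.
import math

def d1(g, x, h=1e-5):  # central first difference
    return (g(x + h) - g(x - h)) / (2 * h)

def d2(g, x, h=1e-4):  # central second difference
    return (g(x + h) - 2 * g(x) + g(x - h)) / h ** 2

f = lambda x: math.exp(math.sin(x)) + 2.0  # smooth and strictly positive
p, x0 = 1.5, 0.7

lhs = -d2(lambda x: f(x) ** p, x0)  # Delta(f^p)(x0)
rhs = (p * f(x0) ** (p - 1) * (-d2(f, x0))
       - p * (p - 1) * f(x0) ** (p - 2) * d1(f, x0) ** 2)

assert abs(lhs - rhs) < 1e-4 * (1.0 + abs(lhs))
```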
However, this chain rule no longer holds for non-local
Dirichlet forms. To remedy this,
following the idea of \cite{Nick},
we make use of the pseudo-gradient defined by
\begin{equation}\label{e:ger}
\widetilde{\Gamma}_p(f)=pf(Lf)-f^{2-p}L(f^p), \end{equation}
for $p\in (1,\infty)$ and suitable non-negative
functions $f$,
where $L$ is the generator of the regular non-local Dirichlet form $(D,\mathscr{F})$ given in \eqref{nondi}. By \eqref{e:rie}, when $L$ is the Laplace--Beltrami operator $\Delta$ on the Riemannian manifold $M$, the right hand side of \eqref{e:ger} is just $p(p-1)|\nabla f|^2$. Due to this, one can reasonably expect that, in the general setting, $\widetilde{\Gamma}_p(f)$ plays the same role as $|\nabla f|^2$ does in the Riemannian manifold setting. This is the reason why we call $\widetilde{\Gamma}_p(f)$ the pseudo-gradient of $f$. For more details on the pseudo-gradient, we refer to \cite{Nick}, where it originated.
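For the reader's convenience, here is the short computation behind this claim: substituting the chain rule \eqref{e:rie} into \eqref{e:ger} with $L=\Delta$ gives

```latex
\begin{align*}
\widetilde{\Gamma}_p(f)
&= pf\,\Delta f - f^{2-p}\,\Delta(f^p)\\
&= pf\,\Delta f - f^{2-p}\big(pf^{p-1}\,\Delta f - p(p-1)f^{p-2}\,|\nabla f|^2\big)\\
&= p(p-1)\,|\nabla f|^2 .
\end{align*}
```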
For our purpose, we need to define the pseudo-gradient $\Gamma_p$ for suitable functions $f$ (which are not necessarily non-negative)
as follows: for $p\in (1,2]$,
\begin{equation}\label{e:ger0}\Gamma_p(f)=pf(Lf)-|f|^{2-p}L(|f|^p). \end{equation} In particular, when $p=2$, $\Gamma_2(f)=2fLf-L(f^2)$. (In fact, for $p\in (2,\infty)$, one can also define $\Gamma_p(f)$ by the right hand side of \eqref{e:ger0} for suitable functions $f$;
however, we will not use this in the present work.) We emphasize that extending the definition of $\Gamma_p$ to signed functions is one of the crucial points in our argument. This is a key difference between the discrete setting of \cite{Nick} and the present setting of general metric measure spaces.
Recall that $(M,d,\mu)$ is a metric measure space and $(D,\mathscr{F})$ is a non-local regular Dirichlet form of pure jump type defined in \eqref{nondi}. In the present setting, for a suitable function $f$ on $M$, $|\nabla f|$ and $|\widetilde\nabla f|$ are defined in \eqref{g-1} and \eqref{g-21}, respectively. In order to compare $\Gamma_p(f)$ with $|\nabla f|$ and $|\widetilde\nabla f|$, we need a closed expression for the generator $L$, which is difficult to obtain in general (see e.g.\ \cite{SW2014}). However, if
\begin{equation}\label{e:bound} \int_{M\backslash\{x\} } J(x,dy)<\infty,\qquad x\in M, \end{equation}
then it is easy to see that, for all $f\in \mathscr{D}(L)$,
\begin{equation}\label{a12}Lf(x)=\int_M (f(x)-f(y))\,J(x,dy). \end{equation}
Note also that, under \eqref{e:bound}, $Lf$ is pointwise $\mu$-a.e.\ well defined by \eqref{a12} for every $f\in L^\infty(M,\mu)$. We say that the jumping kernel $J(x,dy)$ is finite if \eqref{e:bound} is satisfied.
\subsection{Boundedness of Littlewood--Paley functions for $1<p\leqslant 2$: the finite case}
Throughout this subsection, we always suppose that the jumping kernel $J(x,dy)$ is finite, i.e., \eqref{e:bound} holds.
The next lemma provides an explicit formula for $\Gamma_p(f)$ when $p\in (1,2]$ and $f\geqslant0$ (see \cite[Lemma 3.2]{Nick} for the case of graphs).
\begin{lemma}\label{Gamma_p}Under \eqref{e:bound}, for any $p\in (1,2]$ and $0\leqslant f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$,
it holds that
$$\Gamma_p(f)(x)=p(p-1)\int_{\{y\in M:f(y)\neq f(x)\}}(f(x) -f(y))^2 \, I(f(x),f(y);p)\,J(x,dy),$$
for any $x\in M$, where
$$I(f(x),f(y);p)= \int_0^1 \frac{(1-u)f(x)^{2-p}}{((1-u)f(x)+uf(y))^{2-p}}\,du,$$
with the convention $0^0:=1$.
\end{lemma}
\begin{remark}\label{remark333} On the one hand, Lemma \ref{Gamma_p} holds trivially when $p=2$; indeed, it is well known that, for every $f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$ and every $x\in M$,
$$\Gamma_2(f)(x)=\big[2f(Lf)-L(f^2)\big](x)= 2 \Gamma(f)(x)=\int_M (f(x)-f(y))^2\,J(x,dy).$$
On the other hand, we note that, for $p\in (1,2)$, $0\leqslant f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$ and $x,y\in M$ with $f(x)\neq f(y)$, $$\int_0^1 \frac{(1-u)f(x)^{2-p}}{((1-u)f(x)+uf(y))^{2-p}}\,du \leqslant \int_0^1 \frac{1}{(1-u)^{1-p}}\,du=\frac{1}{p},$$
and hence $\Gamma_p(f)\leqslant (p-1) \Gamma_2(f)<\infty$. \end{remark}
\begin{proof}[Proof of Lemma $\ref{Gamma_p}$]
By the remark above, we only need to prove the case $p\in (1,2)$.
According to \eqref{a12}, for $0\leqslant f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$, we have
$$\Gamma_p(f)(x)=\int_M\big(pf(x)(f(x)-f(y))-f(x)^{2-p}(f(x)^p-f(y)^p)\big)\,J(x,dy).$$
Note that, to further calculate the right hand side of the equality above, we only need to consider the case $f(y)\neq f(x)$ inside the integral.
By the Taylor expansion of the function $t\mapsto t^p$ on $[0,\infty)$, we have
\begin{align*}t^p-s^p=&ps^{p-1}(t-s)+p(p-1)\int_s^t v^{p-2}(t-v)\,dv\\
=&ps^{p-1}(t-s)+p(p-1)(t-s)^2\int_0^1 \frac{1-u}{((1-u)s+ut)^{2-p}}\,du \end{align*} for any $s,t\geqslant0$ with $s\neq t$, where the second equality follows from the change of variable $v=(1-u)s+ut$. When $s=0$, the condition $p>1$ ensures that the integral exists.
If $f(y)\neq f(x)$, then, taking $s=f(x)$ and $t=f(y)$, we obtain that
\begin{align*}&pf(x)(f(x)-f(y))-f(x)^{2-p}(f(x)^p-f(y)^p)\\
&=f(x)^{2-p}[(f(y)^p-f(x)^p)-pf(x)^{p-1} (f(y)-f(x))]\\
&=p(p-1)(f(x)-f(y))^2\int_0^1\frac{(1-u)f(x)^{2-p}}{((1-u)f(x)+uf(y))^{2-p}}\,du\\
&=p(p-1)(f(x)-f(y))^2 I(f(x),f(y);p). \end{align*} This yields
\begin{equation*}
\begin{split}\Gamma_p(f)(x)=&p(p-1)\int_{\{y\in M:f(y)\neq f(x)\}} (f(x)-f(y))^2 I(f(x),f(y);p)\,J(x,dy), \end{split} \end{equation*}
which is the desired assertion. \end{proof}
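The Taylor-expansion identity at the heart of the proof above can be checked numerically; the values of $p$, $s$ and $t$ below are arbitrary test choices, and the $u$-integral is discretized by a midpoint rule:

```python
# Check: t^p - s^p = p s^(p-1)(t - s) + p(p-1)(t - s)^2 * I(s, t; p),
# where I(s, t; p) = int_0^1 (1 - u) / ((1 - u) s + u t)^(2 - p) du.
# Illustrative sketch with hypothetical sample values.
p, s, t = 1.5, 2.0, 5.0

n = 200_000  # midpoint rule for the u-integral
I = sum((1 - u) / ((1 - u) * s + u * t) ** (2 - p)
        for u in ((k + 0.5) / n for k in range(n))) / n

lhs = t ** p - s ** p
rhs = p * s ** (p - 1) * (t - s) + p * (p - 1) * (t - s) ** 2 * I
assert abs(lhs - rhs) < 1e-6 * lhs
```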
Now we can immediately compare $\Gamma_p(f)$ with $|\nabla f|^2$ and $|\widetilde{\nabla} f|^2$ for suitable non-negative $f$.
\begin{corollary}\label{comp} Under \eqref{e:bound}, for $p\in (1,2]$ and $0\leqslant f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$,
$$0\leqslant \Gamma_p(f)(x)\leqslant 2(p-1)|\nabla f|^2(x)$$
and
\begin{equation}\label{e:ffcc}|\widetilde \nabla f|^2(x)\leqslant \frac{2}{p(p-1)} \Gamma_p(f)(x), \end{equation}
for any $x\in M$, where $|\nabla f|$ and $|\widetilde\nabla f|$ are defined by \eqref{g-1} and \eqref{g-21}, respectively.
\end{corollary}
\begin{proof} By Remark \ref{remark333}, the case $p=2$ is immediate, so let $p\in (1,2)$ and
$0\leqslant f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$, and assume that \eqref{e:bound} holds.
(i) It is clear from Lemma \ref{Gamma_p} that $\Gamma_p(f)(x)\geqslant0$. Observing that
$$(1-u)^{2-p}f(x)^{2-p}\leqslant ((1-u)f(x)+uf(y))^{2-p}$$ for any $0\leqslant u\leqslant1$ and $f\geqslant0$, we obtain that
\begin{align*}
\Gamma_p(f)(x)&\leqslant p(p-1)\int_0^1(1-u)^{p-1} \, du\int_M (f(x)-f(y))^2\, J(x,dy)\\
&= 2(p-1)|\nabla f|^2(x).
\end{align*}
(ii) Observe that, for $0\leqslant f(y)< f(x)$, one has
$(1-u)f(x)+uf(y)\leqslant f(x)$ for any $0\leqslant u\leqslant1$. Hence
$$I(f(x),f(y);p)=\int_0^1 \frac{(1-u)f(x)^{2-p}}{((1-u)f(x)+uf(y))^{2-p}}\,du\geqslant \int_0^1 (1-u)\,du =\frac12.$$ This along with the definition of $|\widetilde \nabla f|^2$ yields the desired assertion.
\end{proof}
\begin{remark} It is easy to see that, in general, the inequality $\Gamma_p(f)(x)\leqslant 2(p-1)|\nabla f|^2(x)$ can be far from an equality for $0\leqslant f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$ under the assumption \eqref{e:bound}. For example,
for $p\in(1,2)$ and any function $f$ with $f\neq 0$ and $f(x)=0$ for some $x\in M$, we have $|\nabla f|^2(x)>0$ and $\Gamma_p(f)(x)=0$. Therefore, for $p\in (1,2)$, in general situations, one can use the bound on $\Gamma_p(f)$ to control $|\widetilde \nabla f|^2$ but not $|\nabla f|^2$.
\end{remark}
The following statement shows that \eqref{e:ffcc} indeed holds for all $f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$, which is one of the key ingredients in our proof.
\begin{proposition}\label{P:comp} Under \eqref{e:bound}, for $p\in (1,2]$ and $f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$,
$$0\leqslant |\widetilde \nabla f|^2(x)\leqslant \frac{2}{p(p-1)} \Gamma_p(f)(x),$$
for any $x\in M$, where $|\widetilde\nabla f|$ is defined by \eqref{g-21}.
\end{proposition}
\begin{proof} By Remark \ref{remark333} again, without loss of generality we may and do
assume that $p\in (1,2)$. The proof is a little delicate and is based on Corollary \ref{comp}.
For any $f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$,
by \eqref{e:ger0},
\begin{align*}
\Gamma_p(f)&=pfLf-p|f|(L|f|)+p|f|L|f|-|f|^{2-p}L(|f|^p)\\
&=pfLf-p|f|L|f|+\Gamma_p(|f|).
\end{align*}
According to \eqref{e:ffcc} in Corollary \ref{comp}, it holds that
\begin{align*}\Gamma_p(|f|)(x)\geqslant &\frac{p(p-1)}{2}\big|\widetilde \nabla |f|\big|^2(x)\\
= &\frac{p(p-1)}{2}\int_{\{y\in M:|f|(x)\geqslant |f|(y)\}}(|f|(x)-|f|(y))^2\,J(x,dy).
\end{align*}
On the other hand,
\begin{align*}&pf(x)(Lf)(x)-p|f|(x)(L|f|)(x)\\
&=p\left(\int_M f(x)(f(x)-f(y))\,J(x,dy)-\int_M |f|(x)(|f|(x)-|f|(y))\,J(x,dy)\right)\\
&=p\int_M (|f|(x)|f|(y)-f(x)f(y))\,J(x,dy)\\
&\geqslant p\int_{\{y\in M:|f|(x)\geqslant |f|(y)\}}(|f|(x)|f|(y)-f(x)f(y))\,J(x,dy), \end{align*} where in the last inequality we used the fact that
$|f|(x)|f|(y)-f(x)f(y)\geqslant0$ for all $x,y\in M$.
Furthermore, since
$$(|f|(x)-|f|(y))^2=(f(x)-f(y))^2-2\big(|f|(x)|f|(y)-f(x)f(y)\big),$$
we deduce that
\begin{align*}&\frac{p-1}{2}(|f|(x)-|f|(y))^2+|f|(x)|f|(y)-f(x)f(y)\\
&=\frac{p-1}{2}(f(x)-f(y))^2+(2-p)\big(|f|(x)|f|(y)-f(x)f(y)\big)\\
&\geqslant \frac{p-1}{2}(f(x)-f(y))^2, \end{align*}
where the inequality follows from $p\leqslant 2$ and the fact that
$|f|(x)|f|(y)-f(x)f(y)\geqslant0$ for all $x,y\in M$ again.
Combining all the inequalities above, we arrive at the desired assertion.
\end{proof}
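The elementary pointwise inequality used in the final step, $\frac{p-1}{2}(|a|-|b|)^2+|a||b|-ab\geqslant \frac{p-1}{2}(a-b)^2$ for $p\in(1,2]$ and real $a,b$, admits a quick randomized check (the sampling ranges below are arbitrary choices):

```python
# Randomized check of the scalar inequality behind Proposition P:comp:
# (p-1)/2 (|a|-|b|)^2 + |a||b| - a*b >= (p-1)/2 (a-b)^2 for p in (1, 2].
# The gap equals (2-p)(|a*b| - a*b) >= 0, so equality can occur.
import random

random.seed(0)
for _ in range(10_000):
    p = random.uniform(1.000001, 2.0)
    a = random.uniform(-5.0, 5.0)
    b = random.uniform(-5.0, 5.0)
    lhs = (p - 1) / 2 * (abs(a) - abs(b)) ** 2 + abs(a) * abs(b) - a * b
    rhs = (p - 1) / 2 * (a - b) ** 2
    assert lhs >= rhs - 1e-12
```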
Recall that $(P_t)_{t\geqslant0}$ with $P_t=e^{-tL}$ is the semigroup corresponding to the Dirichlet form $(D,\mathscr{F})$.
\begin{proposition}\label{pre} Let $(M,d,\mu)$ be a metric measure space, and let $(D,\mathscr{F})$ be the Dirichlet form defined in \eqref{nondi}.
Suppose that \eqref{e:bound} holds. Then, for any $p\in (1,2]$, the following assertions hold.
\begin{itemize}
\item[{\rm(i)}] There exists a constant $c_p>0$ such that, for all $t>0$ and $f\in L^1(M,\mu)\cap L^\infty(M,\mu)$,
$$\|\Gamma_p^{1/2}(P_tf)\|_p\leqslant c_p t^{-1/2} \|f\|_p.$$
\item[{\rm(ii)}] For any $f\in L^1(M,\mu)\cap L^\infty(M,\mu)$, define
$$(\mathscr{H}_pf)(x)=\left(\int_0^\infty\Gamma_p (P_tf)(x)\,dt\right)^{1/2}\qquad\mbox{for every } x\in M.$$
Then there is a constant $c_p'>0$ such that, for all $f\in L^1(M,\mu)\cap L^\infty(M,\mu)$,
$$\|\mathscr{H}_pf\|_p\leqslant c'_p\|f\|_p.$$
\end{itemize}
\end{proposition}
\begin{proof}
Since $(P_t)_{t\geqslant0}$ is a strongly continuous Markovian semigroup on $L^2(M,\mu)$, the operator $P_t$ extends to $L^\infty(M,\mu)$ with $\|P_tf\|_{\infty}\leqslant \|f\|_{\infty}$ (see \cite[p.\ 56]{FOT}). On the other hand, $P_t$ restricted to $L^1(M,\mu)\cap L^2(M,\mu)$ also extends uniquely to $L^1(M,\mu)$. Since $(P_t)_{t\geqslant0}$ is a symmetric Markovian semigroup, it holds that $\|P_tf\|_{1}\leqslant \|f\|_{1}$. By the Riesz--Thorin interpolation theorem, for all $p\in(1,\infty)$, we have $$\|P_t\|_{p\to p}:=\sup_{f\in L^p(M,\mu)\setminus\{0\}}\frac{\|P_tf\|_{p}}{\|f\|_{p}}\leqslant 1.$$
(i) In the following, let $p\in (1,2]$, and consider any non-zero function $f\in L^1(M,\mu)\cap L^\infty(M,\mu)$. Set $u_t=P_tf$ for all $t\geqslant 0$. Then $u_t\in \mathscr{D}(L)\cap L^1(M,\mu)\cap L^\infty(M,\mu)$ for every $t>0$.
In what follows, $\partial_t$ denotes differentiation with respect to $t$.
The fundamental idea of the proof below is due to \cite{Stein}; however, we cannot reduce the problem to non-negative functions $f$ as in that reference, since $|\widetilde\nabla \cdot|$ is not sublinear.
Instead, we work with $|u_t|$ rather than $u_t$ itself, which also explains why we defined the pseudo-gradient $\Gamma_p$ for suitable signed functions in \eqref{e:ger0}.
By the definition of $\Gamma_p$ and the fact that \begin{equation}\label{eq-t}\partial_t(|u_t|^p)=pu_t|u_t|^{p-2}\partial_tu_t=-pu_t|u_t|^{p-2}Lu_t, \end{equation}
we have
\begin{align*}|u_t|^{p-2}\Gamma_p(u_t)&=|u_t|^{p-2}\left(pu_t Lu_t-|u_t|^{2-p}L|u_t|^p\right)\\
&=pu_t |u_t|^{p-2}(Lu_t)-L(|u_t|^p)\\
&=-(\partial_t+L)(|u_t|^p). \end{align*}
It follows that
$$\Gamma_p(u_t)=-|u_t|^{2-p}(\partial_t+L)(|u_t|^p).$$
Set
$$J_t:=-(\partial_t+L)(|u_t|^p).$$
Then $J_t\geqslant0$, since $\Gamma_p(u_t)\geqslant0$ by Proposition \ref{P:comp}. Using the H\"{o}lder inequality, we have
\begin{align*} \|\Gamma_p^{1/2}(u_t)\|_p^p&=\int_M \Gamma_p^{p/2}(u_t)(x)\,\mu(dx)=\int_M |u_t|^{p(2-p)/2}(x) J_t^{p/2}(x)\,\mu(dx)\\
&\leqslant \left(\int_M J_t(x)\,\mu(dx)\right)^{p/2}\left(\int_M |u_t|^p(x)\,\mu(dx)\right)^{(2-p)/2}. \end{align*}
Note that, by the contraction property of the semigroup $(P_t)_{t\geqslant0}$ on $L^p(M,\mu)$,
$$\left(\int_M |u_t|^p(x)\,\mu(dx)\right)^{(2-p)/2}\leqslant \|f\|_p^{p(2-p)/2}.$$
On the other hand, by H\"{o}lder's inequality and the contraction property again,
\begin{align*}\int_M |\partial _t|u_t|^p|(x)\,\mu(dx)=&p\int_M |u_t|^{p-1}|\partial_tu_t|\,d\mu \leqslant p\|\partial_tu_t\|_p\||u_t|^{p-1}\|_{p/(p-1)}\\
\leqslant& p\|\partial_tu_t\|_p\|f\|_p^{p-1}.
\end{align*}
Recall that $L$ is self-adjoint in $L^2(M,\mu)$ and that $P_t$ is continuous as a map from $L^p(M,\mu)$ to itself for every $t\geqslant0$ and every $p\in[1,\infty]$. By the classical theory developed by Stein (see e.g.\ \cite[Chapter III, Theorem 1]{Stein}), $(P_t)_{t>0}$ is an analytic semigroup in $L^p(M,\mu)$ for every $p\in(1,\infty)$; more precisely, the map $t\mapsto P_t$ extends to an analytic $L^p(M, \mu)$-operator-valued function $t + is\mapsto P_{t+is}=e^{-(t+is)L}$, defined in the sector of the complex plane
$$|\arg(t+is)|<\frac{\pi}{2}\Big(1-\Big|\frac{2}{p}-1\Big|\Big).$$
Hence, we find that $$\|\partial_tu_t\|_p=\|Lu_t\|_p\leqslant c_p t^{-1}\|f\|_p.$$ Thus, together with \eqref{eq-t},
\begin{equation}\label{eeff} \int_M |\partial _t|u_t|^p|(x)\,\mu(dx)\leqslant pc_p t^{-1}\|f\|_p^{p}. \end{equation}
In particular, since $J_t\geqslant0$ and
$$L|u_t|^p=-\partial_t|u_t|^p-J_t,$$
we get that
\begin{equation}\label{eeff0} \mu((L|u_t|^p)^+)<\infty. \end{equation}
Thus, \begin{equation}\label{eeff1}\int_M L|u_t|^p(x)\,\mu(dx)\geqslant0. \end{equation}
Indeed, let $\{K_n\}_{n\geqslant1}$ be an increasing sequence of compact sets such that $\cup_{n=1}^\infty K_n=M$, and
let $\{\varphi_n\}_{n\geqslant1}$ be a sequence of bounded measurable functions such that $\varphi_n=1$ on $K_n$ and $0\leqslant \varphi_n\leqslant 1$ on $K_n^c$.
Using \eqref{eeff0} and the extension of Fatou's lemma (see \cite[Theorem 3.2.6 (2), p.\ 52]{YJA}), we get
\begin{equation*}\begin{split}
\int_M L|u_t|^p(x)\,\mu(dx)&\geqslant\limsup_{n\to \infty} \int_M \varphi_n(x) (L|u_t|^p)(x)\,\mu(dx)\\
&= \limsup_{n\to \infty} \int_M |u_t|^p(x) (L \varphi_n)(x)\,\mu(dx),
\end{split} \end{equation*}
where in the equality above we used the symmetry of $J(x,dy)\,\mu(dx)$ and the facts that $Lf$ is pointwise $\mu$-a.e.\ well defined for any bounded measurable function $f$ and that $|u_t|^p\in L^1(M,\mu)$.
On the other hand, since $(D,\mathscr{F})$ is a regular Dirichlet form on $L^2(M,\mu)$, for any relatively compact open sets $U$ and $V$ with $\bar{U}\subset V$, there is a function $\psi\in \mathscr{F}\cap C_c(M)$ such that $\psi=1$ on $U$ and $\psi=0$ on $V^c$. Consequently,
\begin{equation}\label{e:ffee}\begin{split}\iint_{U\times V^c} J(x,dy)\,\mu(dx)=&\iint_{U\times V^c}(\psi(x)-\psi(y))^2J(x,dy)\,\mu(dx)\\
\leqslant &D(\psi,\psi)<\infty. \end{split} \end{equation}
For any fixed $x\in M$ and any $\varepsilon>0$, by \eqref{e:bound}, we can choose $R:=R(x,\varepsilon)>0$ large enough such that
$$\int_{\{y\in M: d(x,y)\geqslant R\}}\,J(x,dy)<\varepsilon.$$
Fix this $R$. Then, for $n\geqslant 1$ large enough, $\varphi_n(y)=1$ for all $y\in M$ with $d(x,y)< R$. Thus, for $n\geqslant 1$ large enough,
$$|L\varphi_n(x)|=\bigg|\int_M\left(\varphi_n(x)-\varphi_n(y)\right)\, J(x,d y) \bigg|\leqslant \int_{\{y\in M: d(x,y)\geqslant R\}}\,J(x,dy)<\varepsilon,$$
which means that $\lim_{n\to\infty }L\varphi_n(x)=0$ for all $x\in M$.
This, along with \eqref{e:ffee}, the fact that $|u_t|^p\in L^1(M,\mu)\cap L^\infty(M,\mu)$ and the dominated convergence theorem, gives
$$\limsup_{n\to \infty} \int_M |u_t|^p(x) (L \varphi_n)(x)\,\mu(dx)=0.$$ Hence \eqref{eeff1} holds.
\eqref{eeff1} together with \eqref{eeff} yields that
\begin{align*}\int_M J_t(x)\,\mu(dx)&=-\int_M\partial _t|u_t|^p(x)\,\mu(dx)-\int_M L|u_t|^p(x)\,\mu(dx)\leqslant pc_p t^{-1}\|f\|_p^{p}. \end{align*}
Hence,
$$ \left(\int_M J_t(x)\,\mu(dx)\right)^{p/2}\leqslant c'_p t^{-p/2}\|f\|_p^{p^2/2}.$$
Combining all the estimates above, we obtain the first assertion.
(ii) Note that
\begin{align*}(\mathscr{H}_pf)^2(x)&=\int_0^\infty \Gamma_p(u_t)(x)\,dt\\
&=-\int_0^\infty |u_t|(x)^{2-p}(\partial_t+L)(|u_t|^p)(x)\,dt\\
&\leqslant f^*(x)^{2-p} J(x),
\end{align*}
where $f^*$ is the semigroup maximal function defined by \eqref{maxf} below, and
$$J(x)=-\int_0^\infty (\partial_t+L)(|u_t|^p)(x)\,dt,$$
which is non-negative. Thus, by the H\"{o}lder inequality,
\begin{align*}\int_M( \mathscr{H}_p f)^p(x)\,\mu(dx)\leqslant&\int_M f^*(x)^{p(2-p)/2}J(x)^{p/2}\,\mu(dx)\\
\leqslant &\left(\int_M f^*(x)^p\,\mu(dx)\right)^{(2-p)/2}\left(\int_M J(x)\,\mu(dx)\right)^{p/2}. \end{align*}
Lemma \ref{max} below further yields that
$$\left(\int_M f^*(x)^p\,\mu(dx)\right)^{(2-p)/2}\leqslant c_p'\|f\|_p^{p(2-p)/2}.$$
On the other hand, by \eqref{eeff1},
\begin{align*}\int_M J(x)\,\mu(dx) =&-\int_0^\infty dt \int_M (\partial_t+L) |u_t|^p(x)\,\mu(dx)\\
\leqslant&-\int_0^\infty dt \int_M \partial_t |u_t|^p(x)\,\mu(dx)\\
\leqslant& \int_M |f|^p(x)\,\mu(dx)=\|f\|_p^p. \end{align*}
Combining all the inequalities above, we obtain that
$$\int_M( \mathscr{H}_p f)^p(x)\,\mu(dx)\leqslant c_p'\|f\|_p^p,$$
which is the second assertion and completes the proof.
\end{proof}
For any $f\in L^1(M,\mu)\cap L^\infty(M,\mu)$, define the semigroup maximal function $f^*$ by
\begin{equation}\label{maxf}f^*(x)=\sup_{t>0} |P_tf(x)|,\qquad x\in M. \end{equation}
Since $(P_t)_{t\geqslant0}$ is a symmetric sub-Markovian semigroup, we have the following lemma.
\begin{lemma}\label{max}$($\cite[Section III.3, p.\ 73]{Stein}$)$
For all $p\in (1,\infty]$, there exists a constant $c_p>0$ such that, for all $f\in L^p(M,\mu)$,
$$\|f^*\|_p\leqslant c_p\|f\|_p,$$
where, for $p=\infty$, the right hand side is simply $\|f\|_{\infty}$ $($i.e., $c_\infty=1)$.
\end{lemma}
Furthermore, combining Proposition \ref{P:comp} with Proposition \ref{pre},
we immediately obtain the following theorem.
\begin{theorem}\label{th1} {\bf (Finite jumping kernel case)}\,\,
Under the same assumptions as in Proposition $\ref{pre}$, for any $p\in (1,2]$,
$\mathscr{H}_{\widetilde \nabla}$ is bounded in $L^p(M,\mu)$; i.e., there exists a constant $c_p>0$ such that, for every $f\in L^p(M,\mu)$,
$$\|\mathscr{H}_{\widetilde \nabla}f\|_{p}\leqslant c_p\|f\|_{p}.$$
Moreover, there exists a constant $\tilde{c}_p>0$ such that, for every $t>0$ and every $f\in L^p(M,\mu)$,
$$\||\widetilde \nabla P_t f|\|_{p}\leqslant \tilde{c}_p t^{-1/2}\|f\|_{p}.$$ \end{theorem}
\begin{proof}
For any $f\in L^1(M,
\mu)\cap L^\infty(M,\mu)$, $P_tf$ belongs to $\mathscr{D}(L)\cap L^1(M,\mu)\cap L^\infty(M,\mu)$ for every $t>0$. By Proposition \ref{P:comp}, we deduce that
\begin{align*}
\mathscr{H}_{\widetilde \nabla} f(x)&=\leqslantft(\int_0^\infty |\widetilde \nabla P_tf|^2(x)\,dt \right)^{1/2}\\
&\leqslantqslant\leqslantft(\int_0^\infty\betaig|\frac{2}{p(p-1)}\gammaamma_p(P_tf)(x)\betaig|\,d t\right)^{1/2}\\
&=\betaig(\frac{2}{p(p-1)}\betaig)^{1/2}(\mathscr{H}_pf)(x).
\varepsilonnd{align*}
By Proposition \ref{pre}(ii), we have
$$\|\mathscr{H}_{\widetilde \nabla} f\|_p\leqslant c_p'\Big(\frac{2}{p(p-1)}\Big)^{1/2}\|f\|_p=:c_p\|f\|_p.$$
Now the general case $f\in L^p(M,\mu)$ follows from the density of $L^1(M,\mu)\cap L^\infty(M,\mu)$ in $L^p(M,\mu)$ together with an application of Fatou's lemma.
The last assertion is also an immediate application of Proposition \ref{P:comp} and Proposition \ref{pre}(i).
\end{proof}
\subsection{Boundedness of Littlewood--Paley functions for $1<p\leqslant 2$: the general case}
In this part, we consider the case that \eqref{e:bound} is not necessarily satisfied. In this general setting,
it is not clear that $Lf^p$, and hence $\Gamma_p(f)$ given by \eqref{e:ger}, is well defined for $p\in(1,2]$
and $0\leqslant f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$ (even in the pointwise sense). (Note that, according to \cite[Theorem 2]{Mey},
if $f\in \mathscr{D}(L)\cap L^\infty(M,\mu)$, then $f^p\in \mathscr{D}(L)\cap L^\infty(M,\mu)$ for all $p\in[2,\infty)$.) To overcome this difficulty, we will make use of the Mosco convergence of non-local Dirichlet forms, and impose the absolute continuity and
the local finiteness assumptions on the jumping kernel $J(x,dy)$, i.e., there is a non-negative measurable function $J(x,y)$ on $M\times M \setminus {\rm diag}$ such that, for every $x,y\in M$,
\begin{equation}\label{e:ack}
J(x,dy)=J(x,y)\,\mu(dy),
\end{equation}
and
\begin{equation}
\label{e:ack1}\int_{\{y\in M: d(x,y)>r\}}J(x,y)\,\mu(dy)<\infty,\quad x\in M,\, r>0.
\end{equation}
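For orientation, both assumptions are satisfied, for instance, by a hypothetical one-dimensional stable-like kernel $J(x,y)=|x-y|^{-1-\alpha}$ with $0<\alpha<2$. The following standalone Python snippet (our illustration only, not part of the paper) numerically confirms the local finiteness \eqref{e:ack1} for this kernel against the closed-form tail $2r^{-\alpha}/\alpha$:

```python
import numpy as np

# Hypothetical 1-d stable-like kernel J(x, y) = |x - y|^{-1-alpha} (our example):
# the tail integral int_{|y-x|>r} J(x, y) dy = 2 int_r^inf s^{-1-alpha} ds
# equals 2 r^{-alpha} / alpha, which is finite for every r > 0.
alpha, r = 0.7, 0.5
s = np.linspace(r, 1.0e4, 2_000_001)
g = s ** (-1.0 - alpha)
ds = s[1] - s[0]
tail = 2.0 * (g.sum() - 0.5 * (g[0] + g[-1])) * ds   # trapezoidal rule
exact = 2.0 * r ** (-alpha) / alpha
assert abs(tail - exact) < 1e-2                      # truncation at 1e4 costs about 5e-3
```

The integral diverges as $r\downarrow 0$, which is exactly why \eqref{e:ack1} is only required for each fixed $r>0$.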
\ \
Recall that a sequence of Dirichlet forms $\{(D^n,\mathscr{F}^n)\}_{n\geqslant1}$ on $L^2(M,\mu)$ is said to be convergent to a Dirichlet
form $(D,\mathscr{F})$ in $L^2(M,\mu)$ in the sense of Mosco if
\begin{itemize}
\item[(a)] for every sequence $\{f_n\}_{n\geqslant1}$ in $L^2(M,\mu)$ converging weakly to $f$ in $L^2(M,\mu)$,
$$\liminf_{n\to \infty} D^n(f_n,f_n)\geqslant D(f,f);$$
\item[(b)] for every $f\in L^2(M,\mu)$, there is a sequence $\{f_n\}_{n\geqslant1}$ in $L^2(M,\mu)$ converging strongly to $f$ in $L^2(M,\mu)$ such that
$$\limsup_{n\to \infty} D^n(f_n,f_n)\leqslant D(f,f).$$
\end{itemize}
\begin{remark}\label{r:mos} We make the following two comments on Mosco convergence.
\begin{itemize}
\item[(1)] Condition (b) in the definition of Mosco convergence is implied by the following condition:
\begin{itemize}
\item[(b)'] There is a common core $\mathscr{C}$ for the Dirichlet forms $\{(D^n,\mathscr{F}^n)\}_{n\geqslant1}$ and $(D,\mathscr{F})$ such that
$$\lim_{n\to\infty} D^n(f,f)=D(f,f),\quad\mbox{for every } f\in \mathscr{C}.$$ See the proof of \cite[Theorem 2.3]{BBCK}.
\end{itemize}
\item[(2)] Let $\{(D^n,\mathscr{F}^n)\}_{n\geqslant1}$ and $(D,\mathscr{F})$ be Dirichlet
forms on $L^2(M,\mu)$. Then, the sequence $\{(D^n,\mathscr{F}^n)\}_{n\geqslant1}$ converges to $(D,\mathscr{F})$ in the sense of Mosco, if and only if, for every $t>0$ and $f\in L^2(M,\mu)$, $P^{(n)}_tf$ converges to $P_tf$ in $L^2(M,\mu)$, where $(P_t)_{t\geqslant0}$ and $(P^{(n)}_t)_{t\geqslant0}$ are the semigroups corresponding to $(D,\mathscr{F})$ and $(D^n,\mathscr{F}^n)$, respectively. See \cite[Corollary 2.6.1]{Mo1994}.
\end{itemize}
\end{remark}
\ \
Now, we consider the regular non-local Dirichlet form $(D,\mathscr{F})$ given by \eqref{nondi}, and suppose that \eqref{e:ack} is satisfied.
For any $n\geqslant 1$ and $x,y\in M$, define
$$J_n(x,y)=J(x,y)\mathds 1_{\{d(x,y)>1/n\}}.$$
Note that \eqref{e:ffee} holds for a general regular Dirichlet form $(D,\mathscr{F})$.
Then, by the definition of $J_n(x,y)$ and \eqref{e:ffee}, the sequence $\{J_n(x,y)\}_{n\geqslant1}$ converges to $J(x,y)$ locally in $L^1(M\times M\backslash {\rm diag}, \mu\times \mu)$.
Let $\mathscr{C}:=\mathscr{F}\cap C_c(M)$ be a core of $(D,\mathscr{F})$.
For any $n\geqslant1$, we define the regular Dirichlet form $(D^n,\mathscr{F}^n)$ as follows
\begin{align*}
D^n(f,f)=&\iint (f(x)-f(y))^2 J_n(x,y)\,\mu(dx)\,\mu(dy),\\
\mathscr{F}^n=& \overline{\mathscr{C}}^{\sqrt{D^{n,1}}}, \end{align*} where ${D^{n,1}}(f,f)=D^n(f,f)+\|f\|_2^2$
and $\overline{\mathscr{C}}^{\sqrt{D^{n,1}}}$ denotes the closure of $\mathscr{C}$ with respect to the metric $\sqrt{D^{n,1}}$.
Note that $J_n(x,y)\leqslant J(x,y)$ and $\mathscr{F}\subset \mathscr{F}^n$ for all $n\geqslant 1$. In particular, $\mathscr{C}$ is a common core for all $(D^n,\mathscr{F}^n)$, $n\geqslant1$. Furthermore, we have the following statement.
\begin{proposition}\label{P:mos} Under \eqref{e:ack}, the sequence of Dirichlet forms $\{(D^n,\mathscr{F}^n)\}_{n\geqslant1}$ above converges to $(D,\mathscr{F})$ in the sense of Mosco. \end{proposition}
\begin{proof} We split the proof into two parts.
(1) In this part, the argument is inspired by the proof of \cite[Theorem 1.4]{SU}. Suppose that $u_n$ is weakly convergent to $u$ in $L^2(M,\mu)$ as $n\rightarrow\infty$, and $$\liminf_{n\to \infty} \iint (u_n(x)-u_n(y))^2 J_n(x,y)\,\mu(dx)\,\mu(dy)<\infty.$$ We may assume that $$\lim_{n\to \infty} \iint (u_n(x)-u_n(y))^2 J_n(x,y)\,\mu(dx)\,\mu(dy)<\infty.$$ For $(x,y)\in M\times M \backslash {\rm diag}$ and $n\geqslant 1$, define $$\tilde u_n(x,y)=(u_n(x)-u_n(y))J_n(x,y)^{1/2}.$$ Then $\{\tilde u_n\}_{n\geqslant1}$ is a bounded sequence in $L^2(M\times M \backslash {\rm diag}, \mu\times \mu)$, and hence there exists a subsequence $\{\tilde u_{n_k}\}_{k\geqslant1}$, which converges to some element $\tilde u$ weakly in $L^2 (M\times M \backslash {\rm diag}, \mu\times \mu)$. We now claim that
\begin{equation}\label{e:ff0}\tilde u(x,y)= (u(x)-u(y))J(x,y)^{1/2},\quad \mu\times \mu\mbox{-a.e. } (x,y) \text{ with }x\neq y. \end{equation}
To simplify the notation, in the double integrals below we will omit the integration domain $M\times M\backslash {\rm diag}$ when no confusion arises. For any non-negative function $v\in C_c(M\times M \backslash{\rm diag})$ and for any $n_k$, we have
\begin{equation*}\begin{split}
&\bigg|\iint \big[\tilde u(x,y)-(u(x)-u(y)) J(x,y)^{1/2} \big] v(x,y)\,\mu(dx)\,\mu(dy)\bigg|\\
&\leqslant \bigg|\iint\big[\tilde u(x,y)-\big(u_{n_k}(x)-u_{n_k}(y)\big) J_{n_k}(x,y)^{1/2}\big] v(x,y)\,\mu(dx)\,\mu(dy)\bigg|\\
&\quad + \bigg|\iint \big(u_{n_k}(x)-u_{n_k}(y)\big)\big(J_{n_k}(x,y)^{1/2}-J(x,y)^{1/2}\big)v(x,y)\,\mu(dx)\,\mu(dy)\bigg|\\
&\quad + \bigg|\iint \big[\big(u_{n_k}(x)-u_{n_k}(y)\big)-\big(u(x)-u(y)\big)\big]J(x,y)^{1/2}v(x,y)\,\mu(dx)\,\mu(dy)\bigg|\\
&=:I_{1,n_k}+I_{2,n_k}+I_{3,n_k}. \end{split} \end{equation*}
Firstly, since $\tilde u_n$ converges to $\tilde u$ weakly in $L^2(M\times M \backslash {\rm diag}, \mu\times \mu)$, we see that $\lim_{k\to \infty}I_{1,n_k}=0.$
Secondly, by using the
Cauchy--Schwarz inequality and the fact that $\{u_{n_k}\}_{k\geqslant1} $ is a bounded sequence in $L^2(M,\mu)$, we derive that
\begin{align*}I_{2,n_k}\leqslant &\left( \iint\big(u_{n_k}(x)-u_{n_k}(y)\big)^2 v(x,y)\,\mu(dx)\,\mu(dy)\right)^{1/2}\\
&\times \left(\iint\big(J_{n_k}(x,y)^{1/2}-J(x,y)^{1/2}\big)^2 v(x,y)\,\mu(dx)\,\mu(dy)\right)^{1/2}\\
\leqslant& \sqrt{2} \|u_{n_k}\|_2 \|v\|_\infty \\
&\times
\left(\sup_{x\in M} \int_{\{y\in M: (x,y)\in {\rm supp}\, v\}} \,\mu(dy)+\sup_{y\in M} \int_{\{x\in M: (x,y)\in {\rm supp}\, v\}} \,\mu(dx)\right)^{1/2}\\
&\times \left(\iint_{{\rm supp}\,v} |J_{n_k}(x,y)-J(x,y)|\,\mu(dx)\,\mu(dy)\right)^{1/2},
\end{align*}
where the right hand side of the above inequality converges to 0 as $k\to \infty$. Here, in the second inequality above, we used the elementary inequalities $(a-b)^2\leqslant 2(a^2+b^2)$ for all $a,b\in \mathds R$, and $|\sqrt{a}-\sqrt{b}|\leqslant \sqrt{|a-b|}$ for all $a,b\geqslant 0$, and in the last inequality, we used the fact that
$$\sup_{x\in M} \int_{\{y\in M: (x,y)\in {\rm supp}\, v\}} \,\mu(dy)+\sup_{y\in M} \int_{\{x\in M: (x,y)\in {\rm supp}\, v\}} \,\mu(dx)<\infty,$$ since $v\in C_c(M\times M \backslash{\rm diag})$.
Thirdly, for $I_{3,n_k}$, note that, by the Cauchy--Schwarz inequality and \eqref{e:ffee}, both
$$\phi(x):=\int_M J(x,y)^{1/2} v(x,y)\,\mu(dy),\quad x\in M$$
and
$$\psi(y):=\int_M J(x,y)^{1/2} v(x,y)\,\mu(dx),\quad y\in M$$
are in $L^2(M,\mu)$.
Hence,
\begin{align*}I_{3,n_k}\leqslant & \bigg| \int_M \big(u_{n_k}(x)-u(x)\big)\phi(x)\,\mu(dx)\bigg|+ \bigg| \int_M \big(u_{n_k}(y)-u(y)\big)\psi(y)\,\mu(dy)\bigg| \end{align*} tends to $0$ as $k\to \infty$. Thus, we conclude that \eqref{e:ff0} holds.
Choose a sequence $\{v_k\}_{k\geqslant1}$ from $C_c(M\times M \backslash{\rm diag})$ such that $0\leqslant v_k\uparrow 1$ as $k\to \infty$. By Fatou's lemma, we deduce that, for any $k\geqslant 1$,
\begin{align*}
\liminf_{n\to \infty} D^n(u_n,u_n)\geqslant &\liminf_{n\to \infty} \iint \big(u_n(x)-u_n(y)\big)^2 J_n(x,y) v_k(x,y)\,\mu(dx)\,\mu(dy)\\
\geqslant &\iint \big(u(x)-u(y)\big)^2 J(x,y) v_k(x,y)\,\mu(dx)\,\mu(dy).
\end{align*}
Taking $k\to \infty$, by the monotone convergence theorem, we arrive at
$$ \liminf_{n\to \infty} D^n(u_n,u_n)\geqslant D(u,u).$$
(2) Note that $\mathscr{C}$ is the common core of $(D^n,\mathscr{F}^n)$ for all $n\geqslant 1$ and $(D,\mathscr{F})$. Hence, by the monotone convergence theorem,
$$\lim_{n\to \infty} D^n(u,u)= D(u,u),\quad u\in \mathscr{C}.$$
This proves that the condition $(b)'$ in Remark \ref{r:mos} (1) holds.
Combining the conclusions of (1) and (2) yields the desired assertion.
\end{proof}
Now, we are in a position to prove the main result in this section.
\begin{theorem}\label{th1-23} {\bf (General jumping kernel case)}\,\, Let $(M,d,\mu)$ be a
metric measure space, and $(D,\mathscr{F})$ be the
non-local regular Dirichlet form defined in \eqref{nondi}. Suppose that \eqref{e:ack} and \eqref{e:ack1} hold. Then, for any $p\in (1,2]$, $\mathscr{H}_{\widetilde \nabla}$
is bounded in $L^p(M,\mu)$, i.e., there exists a constant $c_p>0$ such that, for every $f\in L^p(M,\mu)$,
$$\|\mathscr{H}_{\widetilde \nabla}f\|_{p}\leqslant c_p\|f\|_{p}.$$
\end{theorem}
\begin{proof} For each $n\geqslant 1$, denote by $(L_n, \mathscr{D}(L_n))$ and $(P_t^n)_{t\geqslant0}$ the generator and the semigroup associated with
the
Dirichlet form $(D^n,\mathscr{F}^n)$, respectively. According to \eqref{e:ack1} and Theorem \ref{th1}, for any $p\in (1,2]$, we can find a
constant $c_p>0$ such that for all $n\geqslant1$ and $f\in L^p(M,\mu)$,
$$\|\mathscr{H}_{\widetilde \nabla, n}f\|_p\leqslant c_p\|f\|_p,$$
where $\mathscr{H}_{\widetilde \nabla, n}$ is defined by \eqref{eeefff} with $L_n$ in place of $L$ and \eqref{g-21} with the jumping
kernel $J_n(x,dy)=J_n(x,y)\,\mu(dy)$; more precisely,
$$\mathscr{H}_{\widetilde \nabla, n}(f)(x)=\Big(\int_0^\infty\!\!\int_{\{y\in M: |P_t^nf|(x)\geqslant |P_t^nf|(y)\}}\big(P_t^nf(x)-P_t^nf(y)\big)^2J_n(x,dy)\,dt\Big)^{1/2}.$$
Note that, from the argument of Theorem \ref{th1}, the constant $c_p$ here is independent of $n\geqslant1$.
On the other hand, by Proposition \ref{P:mos} and Remark \ref{r:mos} (2), we know that for every $t>0$ and $f\in C_c(M)$,
$P_t^nf$ converges to $P_tf$ in $L^2(M,\mu)$ as $n\rightarrow\infty$. Hence, there is a subsequence $\{P_t^{n_k}f\}_{k\geqslant1}$
which converges to $P_tf$ $\mu$-a.e. as $k\rightarrow\infty$. By the definition of $\mathscr{H}_{\widetilde \nabla,n}(f)$ and assumption \eqref{e:ack1}, we obtain that $\mathscr{H}_{\widetilde \nabla,n_k}(f)\rightarrow \mathscr{H}_{\widetilde \nabla}(f)$ $\mu$-a.e. as $k\rightarrow\infty$.
Applying Fatou's lemma twice, we obtain
\begin{align*}&c_p^p\|f\|_p^p\\
&\geqslant \liminf_{k\to\infty} \int_M \mathscr{H}_{\widetilde \nabla, n_k}(f)^p(x)\,\mu(dx)\\
&=\liminf_{k\to\infty} \int_M \!\Big(\int_0^\infty\!\!\int_{\{y\in M: |P_t^{n_k}f|(x)\geqslant |P_t^{n_k}f|(y)\}}\big(P_t^{n_k}f(x)-P_t^{n_k}f(y)\big)^2J_{n_k}(x,dy)\,dt\Big)^{p/2}\,\mu(dx)\\
&\geqslant \int_M \!\Big(\liminf_{k\to\infty}\int_0^\infty\!\!\int_{\{y\in M: |P_t^{n_k}f|(x)\geqslant |P_t^{n_k}f|(y)\}}\big(P_t^{n_k}f(x)-P_t^{n_k}f(y)\big)^2J_{n_k}(x,dy)\,dt\Big)^{p/2}\,\mu(dx)\\
&\geqslant \int_M\Big(\int_0^\infty\!\!\int_{\{y\in M: |P_tf|(x)\geqslant |P_tf|(y)\}}\big(P_tf(x)-P_tf(y)\big)^2J(x,dy)\,dt\Big)^{p/2}\,\mu(dx) . \end{align*}
Hence, there is a constant $c_p>0$ such that
$$\|\mathscr{H}_{\widetilde \nabla}(f)\|_p\leqslant c_p\|f\|_p\quad\mbox{for any } f\in C_c(M).$$
The general case $f\in L^p(M,\mu)$ then follows by approximation, since $C_c(M)$ is dense in $L^p(M,\mu)$ for all $1\leqslant p<\infty$, together with Fatou's lemma.
\end{proof}
\ \
It is easy to see that
\begin{equation}\label{bu2}\|\mathscr{H}_{\widetilde \nabla}f\|_{2}\leqslant\frac{\sqrt{2}}{2}\|f\|_{2}\quad\mbox{for all }0\leqslant f\in L^2(M,\mu). \end{equation}
Indeed, letting $\{E_\lambda: 0\leqslant\lambda<\infty\}$ be the spectral resolution of $L$, for any $h\in L^2(M,\mu)$, we have
$$D(h,h)=\int_{[0,\infty)} \lambda \,d\langle E_\lambda h,E_\lambda h\rangle$$ and
$$D(P_th, P_t h)=\int_{[0,\infty)} \lambda e^{-2\lambda t}\,d\langle E_\lambda h,E_\lambda h\rangle.$$
Then, for all $0\leqslant f\in L^2(M,\mu)$, \begin{align*}\|\mathscr{H}_{\widetilde \nabla}f\|_{2}^2&=\int_0^\infty D(P_tf, P_t f)\,dt=\int_{[0,\infty)} \Big(\int_0^\infty\lambda e^{-2\lambda t}\,dt\Big) \,d \langle E_\lambda f, E_\lambda f\rangle\\
&=\frac{1}{2}\int_{(0,\infty)} \,d \langle E_\lambda f, E_\lambda f\rangle\leqslant\frac{1}{2}\|f\|_{2}^2. \end{align*}
However, for general $f\in L^2(M,\mu)$, we cannot derive \eqref{bu2} as above. This indicates that even the
$L^2$ boundedness of $\mathscr{H}_{\widetilde \nabla}$ is
non-trivial, in contrast with the classical Littlewood--Paley theory in the case $p=2$.
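The spectral computation above can be sanity-checked in a finite-dimensional analogue. The following standalone Python sketch (our illustration only: a random symmetric jump matrix $J$ on a $6$-point space, with generator $L=\mathrm{diag}(J\mathbf 1)-J$ and eigenvectors $q_\lambda$) compares $\int_0^\infty D(P_tf,P_tf)\,dt$, computed by direct time integration, with the closed form $\frac{1}{2}\sum_{\lambda>0}\langle f,q_\lambda\rangle^2\leqslant\frac{1}{2}\|f\|_2^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n))
J = (W + W.T) / 2.0                      # symmetric jump intensities
np.fill_diagonal(J, 0.0)
L = np.diag(J.sum(axis=1)) - J           # L f(x) = sum_y (f(x) - f(y)) J(x, y), symmetric PSD

lam, Q = np.linalg.eigh(L)               # spectral resolution of L
f = rng.random(n)                        # a nonnegative function on the state space
c = Q.T @ f                              # coefficients <f, q_lambda>

# Closed form: int_0^inf lambda e^{-2 lambda t} dt = 1/2 for each lambda > 0.
spectral = 0.5 * sum(ci**2 for li, ci in zip(lam, c) if li > 1e-10)

# Direct time integration of D(P_t f, P_t f) = <L P_t f, P_t f> by the trapezoidal rule.
ts = np.linspace(0.0, 60.0, 60001)
Ptf = Q @ (np.exp(-np.outer(lam, ts)) * c[:, None])   # columns are P_t f
integrand = np.einsum('it,it->t', Ptf, L @ Ptf)
dt = ts[1] - ts[0]
time_integral = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dt

assert abs(time_integral - spectral) < 1e-4
assert spectral <= 0.5 * float(f @ f) + 1e-12
```

The $\lambda=0$ eigenspace (constants) is excluded, which is why the bound $\frac{1}{2}\|f\|_2^2$ need not be attained.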
In harmonic analysis, we are also interested in the so-called vertical Littlewood--Paley $\mathscr{G}$-function of the following form: for $f\in L^1(M,\mu)\cap L^\infty(M,\mu)$,
$$\mathscr{G}_{\widetilde \nabla}f(x)= \left(\int_0^\infty t |\widetilde \nabla e^{-t\sqrt{L}}f|^2(x)\,dt \right)^{1/2},\quad x\in M.$$
Inspired by the argument of \cite[Remark 1.3(ii)]{CDD}, we know that the function $\mathscr{G}_{\widetilde \nabla}f$ is dominated pointwise by $\mathscr{H}_{\widetilde \nabla}f$.
Indeed, by using the fact
$$\int_0^\infty e^{-u} u^{1/2}\,du=\frac{\sqrt{\pi}}{2},$$
and applying the formula
$$e^{-t \sqrt{L}}=\frac{1}{\sqrt{\pi}}\int_0^\infty e^{-t^2L/(4u)} e^{-u} u^{-1/2}\,du,\quad t\geqslant0,$$
we deduce from Jensen's inequality, Fubini's theorem and the change-of-variables formula that
\begin{align*}(\mathscr{G}_{\widetilde \nabla}f)^2(x)&=\int_0^\infty t|\widetilde \nabla e^{-t\sqrt{L}} f|^2(x)\,dt\\
&\leqslant\frac{1}{\pi}\int_0^\infty t\Big(\int_0^\infty |\widetilde \nabla e^{-t^2L/(4u)}f|(x)e^{-u}u^{-1/2}\,du\Big)^2\,dt\\
&\leqslant \frac{1}{\sqrt{\pi}}\int_0^\infty t \Big( \int_0^\infty |\widetilde \nabla e^{-t^2L/(4u)}f|^2(x)\,e^{-u}u^{-1/2}\,du\Big) \, dt\\
&=\frac{1}{\sqrt{\pi}}\int_0^\infty \Big(\int_0^\infty t|\widetilde \nabla e^{-t^2L/(4u)}f|^2(x)\,dt\Big)e^{-u}u^{-1/2}\,du\\
&=\frac{2}{\sqrt{\pi}} \int_0^\infty e^{-u}u^{1/2}\,du\int_0^\infty |\widetilde \nabla e^{-sL}f|^2(x)\,ds\\
&=(\mathscr{H}_{\widetilde \nabla}f)^2(x);
\end{align*}
hence,
$$(\mathscr{G}_{\widetilde \nabla}f)(x)\leqslant (\mathscr{H}_{\widetilde \nabla}f)(x).$$
In particular, the $L^p$ boundedness of $\mathscr{H}_{\widetilde \nabla}$ implies the $L^p$ boundedness of $\mathscr{G}_{\widetilde \nabla}$.
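As a standalone numerical aside (not part of the argument), the scalar version of the subordination formula used above, $e^{-t\sqrt{\lambda}}=\pi^{-1/2}\int_0^\infty e^{-t^2\lambda/(4u)}e^{-u}u^{-1/2}\,du$, can be checked directly; the substitution $u=s^2$ removes the integrable singularity at $u=0$:

```python
import math
import numpy as np

def subordinated(t, lam, n=200_000, smax=12.0):
    """Evaluate (1/sqrt(pi)) * int_0^inf exp(-t^2*lam/(4u)) e^{-u} u^{-1/2} du.

    With u = s^2 this becomes (2/sqrt(pi)) * int_0^inf exp(-s^2 - t^2*lam/(4 s^2)) ds.
    """
    s = np.linspace(smax / n, smax, n)    # the integrand vanishes rapidly as s -> 0
    g = np.exp(-s**2 - (t**2) * lam / (4.0 * s**2))
    ds = s[1] - s[0]
    return (2.0 / math.sqrt(math.pi)) * (g.sum() - 0.5 * (g[0] + g[-1])) * ds

# Compare with the closed form e^{-t sqrt(lam)} for a few (t, lam) pairs.
for t, lam in [(0.5, 1.0), (1.3, 2.0), (2.0, 0.7)]:
    assert abs(subordinated(t, lam) - math.exp(-t * math.sqrt(lam))) < 1e-6
```

This is the one-dimensional shadow of applying the formula to a single spectral value $\lambda$ of $L$.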
\ \
At the end of this section, we make some comments on Theorem \ref{th1-23} and its proof. In $\mathds R^d$,
the boundedness of $\mathscr{H}_{\widetilde \nabla}$ in $L^p(\mathds R^d,dx)$, $1<p\leqslant 2$, for a L\'evy process $X$ has been proved in \cite[Theorem 4.1 and Lemma 4.5]{BBL}, when the process $X$ satisfies the Hartman--Wintner condition. (Note that such a condition implies that the process $X$ has a transition density function $p_t(x)$ with respect to the Lebesgue measure such that $\lim_{t\to\infty}p_t(0)=0$.)
The aforementioned approach differs from ours: it is based on the Hardy--Stein identity (see \cite[Theorem 3.2 and (3.5)]{BBL}). It seems that such an identity depends heavily on the structure of L\'evy processes, and may not hold for general jump processes.
The authors of \cite{BBL} mentioned in the introduction of their paper that \emph{the results should hold in a much more general setting, but the scope of the extension is unclear at this moment}.
\ \
As mentioned in Section \ref{section1}, it seems more natural to study the boundedness of the vertical Littlewood--Paley $\mathscr{H}$-function
defined in terms of $\nabla$, that is, the $L^p$ boundedness of the operator
$$\mathscr{H}_{\nabla} f(x)=\left(\int_0^\infty |\nabla e^{-tL}f|^2(x)\,dt \right)^{1/2},\quad x\in M.$$ By the same argument as for \eqref{bu2}, it holds that $\|\mathscr{H}_{\nabla}f\|_{2}\leqslant\|f\|_{2}/\sqrt{2}$ for all $0\leqslant f\in L^2(M,\mu).$ However, \cite[Example 2]{BBL}, which is inspired by \cite[Pages 165--166]{Ben}, shows that in general settings the operator $\mathscr{H}_{\nabla}$
may fail to be bounded on $L^p(M,\mu)$ if $1<p<2$. Thus, $\mathscr{H}_{\nabla}$ and $\mathscr{H}_{\widetilde\nabla}$ differ considerably. Another point is that, if we define $\mathscr{H}_{\widetilde\nabla_*}$ in the same way as $\mathscr{H}_{\widetilde\nabla}$ but using the modified gradient $|\widetilde\nabla f|_*$ given by \eqref{g-2} instead of $|\widetilde\nabla f|$ given by \eqref{g-21}, one may wonder whether $\mathscr{H}_{\widetilde\nabla_*}$ is bounded on $L^p(M,\mu)$ for $p\in (1,2]$. The answer is no. Indeed, suppose that $\mathscr{H}_{\widetilde\nabla_*}$ were bounded on $L^p(M,\mu)$ for some $p\in (1,2]$. Then, applying this boundedness to both $f$ and $-f$ would yield the boundedness of $\mathscr{H}_{\nabla}$, which is a contradiction.
In comparison with the discrete setting in \cite{Nick}, one
key point of Theorem \ref{th1-23} is
the boundedness of $\mathscr{H}_{\widetilde\nabla}$ in $L^p(M,\mu)$ for $p\in (1,2]$, not only on the cone $L^p_+(M,\mu):=\{f\geqslant0: f\in L^p(M,\mu)\}$. The reason is that the operator $\mathscr{H}_{\widetilde\nabla}$ is not sublinear. Also due to this, we need to define the pseudo-gradient $\Gamma_p$ for suitable signed functions $f$; see \eqref{e:ger0}. By a simple calculation, we can deduce that, for any measurable function $f$ on $M$,
\begin{align*}
\mathds 1_{\{z\in M:|f(x)|\geqslant|f(z)|\}}(y)\big(f(x)-f(y)\big)^2
\leqslant & 4\mathds 1_{\{z\in M:f^+(x)\geqslant f^+(z)\}}(y)\big(f^+(x)-f^+(y)\big)^2\\
& + 4\mathds 1_{\{z\in M:f^-(x)\geqslant f^-(z)\}}(y)\big(f^-(x)-f^-(y)\big)^2
\end{align*}holds for all $x,y\in M$. Hence, it should be feasible to deduce the boundedness of $\mathscr{H}_{\widetilde\nabla}$ in $L^p(M,\mu)$ from its boundedness on $L^p_+(M,\mu)$.
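The displayed pointwise inequality can be verified by elementary case analysis; the worst case is when $f(x)$ and $f(y)$ have opposite signs, which the factor $4$ absorbs. A standalone randomized check (our illustration only):

```python
import random

def holds(fx, fy):
    """Check 1_{|f(x)|>=|f(y)|} (f(x)-f(y))^2
       <= 4 [ 1_{f+(x)>=f+(y)} (f+(x)-f+(y))^2 + 1_{f-(x)>=f-(y)} (f-(x)-f-(y))^2 ]."""
    pos = lambda a: max(a, 0.0)   # positive part f^+
    neg = lambda a: max(-a, 0.0)  # negative part f^-
    lhs = (fx - fy) ** 2 if abs(fx) >= abs(fy) else 0.0
    rhs = 4.0 * (((pos(fx) - pos(fy)) ** 2 if pos(fx) >= pos(fy) else 0.0)
                 + ((neg(fx) - neg(fy)) ** 2 if neg(fx) >= neg(fy) else 0.0))
    return lhs <= rhs + 1e-12

random.seed(1)
assert all(holds(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(100_000))
```

Equality can occur, e.g. for $f(x)=1$, $f(y)=-1$, where both sides equal $4$, so the constant $4$ cannot be improved.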
\section{Littlewood--Paley--Stein estimates for $2\leqslant p<\infty$}
Recall that $(M,d,\mu)$ is a metric measure space, $(D,\mathscr{F})$ given by \eqref{nondi} is a regular Dirichlet form, and $(L,\mathscr{D}(L))$ and $(P_t)_{t\geqslant0}:=(e^{-tL})_{t\geqslant0}$ are the corresponding $L^2$-generator and $L^2$-semigroup, respectively. Associated with the regular Dirichlet form $(D,\mathscr{F})$ on $L^2(M,\mu)$, there is a symmetric Hunt process $X=\{X_t,t\geqslant0, \mathds P^x, x\in M\backslash\mathscr{N}\}$. Here $\mathscr{N}$ is a properly exceptional set for $(D,\mathscr{F})$ in the sense that $\mu(\mathscr{N})=0$ and $\mathds P^x(X_t\in \mathscr{N}\textrm{ for some }t>0)=0$ for all $x\in M\backslash\mathscr{N}.$ See \cite{CF, FOT} for more details.
\ \
Throughout this section, we make the following assumptions.
\begin{itemize}
\item[$(A1)$] For any $ f\in C_c(M)$, the function $(t,x)\mapsto P_tf(x)$ is continuous on $(0,\infty)\times M$.
\item[$(A2)$] The process $X$ has a transition density function $p_t(x,y)$ with respect to the reference measure $\mu$, i.e.,
for any $t>0$, $x\in M\backslash\mathscr{N}$ and any Borel set $B\subset M$,
$$\mathds P^x(X_t\in B)=\int_B p_t(x,y)\,\mu(dy).$$
\item[$(A3)$] The process $X$ is conservative,
i.e., for any $t>0$ and $x\in M\backslash\mathscr{N}$,
$$\int_M p_t(x,y)\,\mu(dy)=1.$$
\item[$(A4)$] There exist a $\sigma$-finite measure space $(U,\mathscr{U},\nu)$ and a function $k:$ $M\times U\rightarrow M$ such that for any Borel set $B\subset M$ and $\mu$-a.e. $x\in M$,
\begin{equation}\label{ss3}\nu\left\{z\in U: k(x,z)\in B\right\}=J(x,B). \end{equation}
\end{itemize}
We make some comments on the assumptions above. Firstly, according to \cite[Chapter 1, Lemma 1.4, p.\ 5]{BSW}, assumption $(A1)$ holds if the semigroup $(P_t)_{t\geqslant0}$ enjoys the $C_\infty$-Feller property; that is, for any $t>0$ and $f\in C_\infty(M)$, $P_tf\in C_\infty(M)$, and $\lim_{t\to 0}\|P_tf-f\|_\infty =0$, where $C_\infty(M)$ denotes the set of continuous functions which vanish at infinity. Secondly, there are already a few works on the conservativeness of processes generated by non-local Dirichlet forms on metric measure spaces; see e.g. \cite{MUW} and the references therein. Thirdly, assumption $(A4)$ is our technical condition. When $M=\mathds R^d$, one can take $U=\mathds R^n$ and $\nu(dz)= |z|^{-n-1}\,dz$ with $n\geqslant2$, and find a measurable function $k:\mathds R^d\times \mathds R^n\to \mathds R^d$ such that \eqref{ss3} is satisfied; see \cite[Chapter 3, Theorem 3.2.5]{Sto}. Hence, assumption $(A4)$ always holds in the Euclidean space. As a general result on the construction of the coefficient $k(x,z)$ in \eqref{ss3}, we refer to El Karoui
and Lepeltier \cite{KL}, where they constructed $k(x,z)$ under the condition that $U$ is a Lusin
space and $\nu$ is a $\sigma$-finite diffusive measure on $U$ with infinite total mass.
\ \
For fixed $f\in C_c(M)$ and $T>0$, let
$$H_t=P_{T-t}f(X_t)-P_Tf(X_0),\quad 0\leqslant t\leqslant T.$$ Denote by $(\mathscr{F}_t)_{t\geqslant0}$ the natural filtration of the process $X$. Then, we have
\begin{lemma}\label{lemm31} Under the assumption $(A1)$, $\{H_t, \mathscr{F}_t\}_{0\leqslant t\leqslant T}$ defined above is a
martingale starting at $0$, and
for any $0\leqslant t\leqslant T$,
\begin{equation}
\label{eek01}[H]_t= \int_0^t\int_M\left( P_{T-s} f(y)- P_{T-s}f(X_{s-})\right)^2\,J(X_{s-},dy)\,ds,
\end{equation}
where $[H]_t$ is the quadratic variation of $H_t$.
\end{lemma}
The statement above for symmetric L\'evy processes can be obtained directly via the It\^{o} formula; see \cite[Section 4]{BBL}. However, since the It\^{o} formula is not available in the present setting, we will use a different approach.
\begin{proof}[Proof of Lemma $\ref{lemm31}$] For any $0\leqslant s\leqslant t\leqslant T$, by the Markov property, $$P_{T-t}f(X_t)=\mathds E^{X_t} f(X_{T-t})=\mathds E (f(X_T)|\mathscr{F}_t),$$ and so
\begin{align*}\mathds E(H_t|\mathscr{F}_s)=&\mathds E(P_{T-t}f(X_t)-P_Tf(X_0)|\mathscr{F}_s)=\mathds E(P_{T-t}f(X_t)|\mathscr{F}_s)-P_Tf(X_0)\\
=&\mathds E(\mathds E (f(X_T)|\mathscr{F}_t)|\mathscr{F}_s)-P_Tf(X_0)=\mathds E(f(X_T)|\mathscr{F}_s)-P_Tf(X_0)\\
=&P_{T-s} f(X_s)-P_T f(X_0)=H_s. \end{align*} This proves that $\{H_t, \mathscr{F}_t\}_{0\leqslant t\leqslant T}$ is a martingale.
For any $x\in M$ and $0\leqslant t\leqslant T$, we have \begin{align*}\mathds E^x(H_t^2)=&
\mathds E^x\big(P_{T-t}f(X_t)-P_Tf(X_0)\big)^2\\
=&\mathds E^x(P_{T-t}f)^2(X_t)-2P_Tf(x)\mathds E^xP_{T-t}f(X_t)+(P_Tf)^2(x)\\
=& P_t(P_{T-t}f)^2(x)- (P_Tf)^2(x). \end{align*}
Then, for any $x\in M$ and $0\leqslant s\leqslant t\leqslant T$,
\begin{equation}\label{e:fffee}\begin{split}
\mathds E^x(H_t^2-H_s^2)=& P_t(P_{T-t}f)^2(x)- P_s(P_{T-s}f)^2(x)\\
=& \int_s^t \frac{ d(P_u(P_{T-u} f)^2)(x)}{du}\,du\\
=&\int_s^t\big(-LP_u (P_{T-u}f)^2(x)+P_u(2P_{T-u}f \cdot L P_{T-u}f)(x)\big)\,du \\
=& \int_s^t P_u\big(-L(P_{T-u} f)^2+2P_{T-u} f\cdot LP_{T-u}f\big)(x)\,du\\
=&2\int_s^t P_u\Gamma (P_{T-u}f )(x)\,du\\
=&2\mathds E^x \Big( \int_s^t \Gamma (P_{T-u}f)(X_u)\,du\Big), \end{split} \end{equation}
where in the penultimate equality above we used the fact that
$$\Gamma(f)= \frac{1}{2}\left(2fLf - L(f^2)\right).$$
See e.g. \cite[Theorem (3.7)]{CKS}.
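The carré du champ identity just cited, together with its kernel representation $2\Gamma(g)(x)=\int_M(g(y)-g(x))^2\,J(x,dy)$ used below, can be checked exactly in a finite-state analogue (our illustration only: $L=\mathrm{diag}(J\mathbf 1)-J$ for a symmetric nonnegative matrix $J$ playing the role of the jumping kernel):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
W = rng.random((n, n))
J = (W + W.T) / 2.0                  # symmetric jump intensities
np.fill_diagonal(J, 0.0)
L = np.diag(J.sum(axis=1)) - J       # L g(x) = sum_y (g(x) - g(y)) J(x, y)

g = rng.standard_normal(n)
gamma_alg = 0.5 * (2.0 * g * (L @ g) - L @ (g * g))        # Gamma(g) = (2 g L g - L g^2)/2
gamma_ker = 0.5 * np.array([sum((g[x] - g[y]) ** 2 * J[x, y] for y in range(n))
                            for x in range(n)])            # (1/2) sum_y (g(x)-g(y))^2 J(x,y)
assert np.allclose(gamma_alg, gamma_ker)
assert np.all(gamma_ker >= 0.0)
```

The agreement is exact (up to floating point), since both sides expand to $\sum_y (g(x)-g(y))^2 J(x,y)/2$ by direct algebra.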
Equation \eqref{e:fffee}, together with the Markov property, in turn yields that
\begin{equation}\label{eek00}\Big\{H_t^2 - 2\int_0^t \Gamma (P_{T-u}f)(X_u)\,du, \mathscr{F}_t\Big\}_{0\leqslant t\leqslant T} \end{equation}
is a martingale. Denote by $\langle H\rangle_t$ and $[H]_t$ the predictable
quadratic variation and the quadratic variation of $H_t$, respectively.
Thus, according to \eqref{eek00} and \cite[Chapter 4, Theorem 4.2, p.\ 38]{JS},
$$\langle H\rangle _t=2\int_0^t \Gamma (P_{T-u}f)(X_u)\,du, \quad 0\leqslant t\leqslant T.$$
Furthermore, under assumption $(A1)$ and by the fact that $t\mapsto X_t$ is quasi-left-continuous (since $X$ is a Hunt process enjoying the strong Markov property), $\{H_t,\mathscr{F}_t\}_{0\leqslant t\leqslant T}$ is a martingale which has a continuous version; see again \cite[Chapter 4, Theorem 4.2, p.\ 38]{JS}.
Hence, by \cite[Chapter 2, Definition 2.25, p.\ 22; Chapter 4, Theorem 4.52, p.\ 55]{JS}, for any $0\leqslant t\leqslant T$,
\begin{align*}[H]_t=&\langle H\rangle _t=2\int_0^t \Gamma (P_{T-u}f)(X_u)\,du\\
=&\int_0^t\int_M\big( P_{T-u} f(y)- P_{T-u}f(X_{u-})\big)^2\,J(X_{u-},dy)\,du. \end{align*}
The proof is complete.
\end{proof}
Next, we will make use of the space-time parabolic martingale $\{H_t, \mathscr{F}_t\}_{0\leqslant t\leqslant T}$ defined above with $T\in(0,\infty]$ to prove the boundedness of Littlewood--Paley functions in $L^p(M,\mu)$ for $2\leqslant p<\infty$. We mainly follow the approach of \cite[Section 4]{BBL} with
necessary
modifications.
First, we note that, under assumption $(A4)$, one can rewrite \eqref{eek01} for the quadratic variation $[H]_t$ of the martingale $H_t$ as
\begin{equation}\label{eek02}
[H]_t= \int_0^t\int_U\big( P_{T-s} f(k(X_{s-},y))- P_{T-s}f(X_{s-})\big)^2\,\nu(dy)\,ds,\quad 0\leqslant t\leqslant T.
\end{equation}
Second, we need to define the Littlewood--Paley function $G$, which can be regarded as the conditional expectation of the quadratic variation $[H]_t$.
For $f\in L^1(M,\mu)\cap L^\infty(M,\mu)$, define
$$Gf(x)=\left(\int_0^\infty\!\! \int_U\!\int_M |P_t f(k(z,y))-P_tf(z)|^2p_t(x,z)\,\mu(dz)\,\nu(dy)\,dt\right)^{1/2},$$ and
$$G_{T}f(x)=\left(\int_0^T \!\!\int_U\!\int_M |P_t f(k(z,y))-P_tf(z)|^2p_t(x,z)\,\mu(dz)\,\nu(dy)\,dt \right)^{1/2}.$$
Clearly, $\lim_{T\to \infty}G_{T} f(x)=Gf(x)$ for all $x\in M$.
\begin{lemma}\label{lem-1} Under assumptions $(A1)$, $(A2)$, $(A3)$ and $(A4)$, for any $f\in C_c(M)$, $x\in M$ and $T>0$,
$$(G_{T} f)^2(x)=\int_M \mathds E^z ([H]_T|X_T=x)p_T(z,x)\,\mu(dz).$$
\end{lemma}
\begin{proof}The proof is almost the same
as that of \cite[(4.5)]{BBL}, and we present it here for the sake of completeness. By \eqref{eek02}, we have
\begin{align*}
&\int_M\mathds E^z ([H]_T|X_T=x)p_T(z,x)\,\mu(dz)\\
&=\int_M\mathds E^z\left( \int_0^T\int_U\big( P_{T-s} f(k(X_{s-},y))- P_{T-s}f(X_{s-})\big)^2\,\nu(dy)\,ds\,\big| X_T=x\right)\\
&\qquad\qquad\qquad\times p_T(z,x)\,\mu(dz)\\
&=\int_M \bigg(\int_0^T\int_M\frac{p_s(z,w)p_{T-s}(w,x)}{p_T(z,x)} \\
&\qquad\qquad\qquad\times \int_U |P_{T-s} f(k(w, y))-P_{T-s}f(w)|^2\,\nu(dy)\,\mu(dw)\,ds\bigg)p_T(z,x)\,\mu(dz)\\
&=\int_0^T\int_M p_{T-s}(w,x) \int_U|P_{T-s} f(k(w, y))-P_{T-s}f(w)|^2\,\nu(dy)\,\mu(dw)\,ds\\
&=\int_0^T \int_U \int_M|P_{T-s} f(k(w, y))-P_{T-s}f(w)|^2p_{T-s}(x,w)\,\mu(dw)\,\nu(dy)\,ds\\
&=(G_{T}f)^2(x), \end{align*}
where
in the third equality we used the fact that
$$\int_M p_s(z,w)\,\mu(dz)= \int_M p_s(w,z)\,\mu(dz)=1\quad\mbox{for any }w\in M\backslash\mathscr{N}, $$
due to the symmetry
of $p_t(x,y)$ and the conservativeness of the process $X$, and in the
fourth equality we used symmetry again.
\end{proof}
\begin{lemma}\label{lem-2} Under assumptions $(A1)$, $(A2)$ and $(A4)$, for any $f\in C_c(M)$ and $x\in M$,
$$(\mathscr{H}_\nabla f)(x)\leqslant Gf(x).$$ \end{lemma}
\begin{proof}
For any $f\in C_c(M)$ and $x\in M$, using the Cauchy--Schwarz inequality, the property of the semigroup $(P_t)_{t\geqslant0}$ and \varepsilonqref{ss3}, we get
\begin{align*}(G f)^2(x)=&\int_0^\infty \!\!\int_U\! \int_M|P_{t}f(k(z,y))-P_{t}f(z)|^2p_{t}(x,z)\,\mu(dz)\,\nu(dy)\,dt\\
\geqslant&\int_0^\infty \int_U\! \left(\int_M |P_{t}f(k(z,y))-P_{t}f(z)|p_{t}(x,z)\,\mu(dz)\right)^2\,\nu(dy)\,dt\\
=&\int_0^\infty \int_U\! \big( P_t |P_{t}f(k(\cdot,y))-P_{t}f(\cdot)| (x)\big)^2\,\nu(dy)\,dt\\
\geqslant &\int_0^\infty \int_U\! \big(|P_{2t}f(k(\cdot,y))-P_{2t}f(\cdot)| (x)\big)^2\,\nu(dy)\,dt\\
=&\int_0^\infty \left(\int_M\! \big(|P_{2t}f(z)-P_{2t}f(\cdot)|\big)^2\,J(\cdot, dz)\right) (x)\,dt\\
=&2\int_0^\infty \Gamma(P_{2t}f) (x)\,dt=\int_0^\infty \Gamma(P_{t}f ) (x)\,dt\\
= &\int_0^\infty|\nabla P_t f|^2(x)\,dt=(\mathscr{H}_\nabla f)^2(x). \end{align*}
This proves the desired assertion.
\end{proof}
Now, we are in a position to present the main result in this section.
\begin{theorem}\label{thp}
Let $(M,d,\mu)$ be a metric measure space, and $(D,\mathscr{F})$ be the non-local regular Dirichlet form defined in \eqref{nondi}. Suppose that $(A1)$, $(A2)$, $(A3)$ and $(A4)$ hold. Then, for any $p\in [2,\infty)$,
$\mathscr{H}_\nabla$ is bounded in $L^p(M,\mu)$, i.e., there exists a constant $C_p>0$ such that, for every $f\in L^p(M,\mu)$,
$${\|\mathscr{H}_\nabla f\|_{p}\leqslant C_p\|f\|_{p}.}$$
\end{theorem}
\begin{proof}
Let $p\geqslant 2$. For any $f\in C_c(M)$,
by Lemma \ref{lem-1} and
by applying
Jensen's inequality twice, we get
\begin{align*}\int_{M} (G_T f)^p(x)\,\mu(dx)&\leqslant \int_{M}\!\!\int_{M} \left(\mathds E^z([H]_T|X_T=x)\right)^{p/2} p_T(z,x)\,\mu(dz)\,\mu(dx)\\
&\leqslant \int_{M}\!\int_{M} \mathds E^z([H]_T^{p/2}|X_T=x)p_T(z,x)\,\mu(dz)\,\mu(dx)\\
&=\int_{M} \mathds E^z([H]_T^{p/2})\,\mu(dz).
\end{align*}
By the Burkholder--Davis--Gundy inequality (see e.g. \cite[Page 234]{Ap}), we have
\begin{align*}
\mathds E^z([H]_T^{p/2})&\leqslant C_p'\mathds E^z|H_T|^p=C_p'\mathds E^z\big(|f(X_T)-P_Tf(X_0)|^p\big)\\
&\leqslant 2^pC_p' P_T|f|^p(z),
\end{align*}
where in the last inequality we have used the elementary fact that
$$(a+b)^p\leqslant 2^{p-1}(a^p+b^p)$$
for all $a,b\geqslant0$ and $p\geqslant1$. Thus, by the contraction property of the semigroup $(P_t)_{t\geqslant0}$ on $L^p(M,\mu)$, we arrive at
$$\int_{M} (G_T f)^p(x)\,\mu(dx)\leqslant 2^pC_p'\int_{M} P_T|f|^p(x)\,\mu(dx)\leqslant 2^pC_p'\|f\|_{p}^p,$$ which along with the monotone convergence theorem yields that
$$ \int_{M} (Gf)^p(x)\,\mu(dx)\leqslant 2^pC_p'\|f\|_{p}^p.$$
Combining this with Lemma \ref{lem-2}, we obtain that
$$\|\mathscr{H}_\nabla f\|_p\leqslant C_p\|f\|_p\quad\mbox{for every }f\in C_c(M).$$
For every $f\in L^p(M,\mu)$, since $C_c(M)$ is dense in $L^p(M,\mu)$ for all $1\leqslant p<\infty$, we may choose a sequence $\{f_n\}_{n\geqslant1}$ from $C_c(M)$ such that $f_n$ converges to $f$ in $L^p(M,\mu)$ as $n\rightarrow\infty$. Then, applying Fatou's lemma, we obtain the desired conclusion.
\end{proof}
Finally, we present the proof of Example \ref{exm2}.
\begin{proof}[Proof of Example $\ref{exm2}$] According to the comments following the assumptions at the beginning of this section, we only need to
verify assumptions $(A1)$--$(A3)$. According to \cite[Proposition 3.3]{BKK}, the associated resolvent of $(D,\mathscr{F})$ is H\"{o}lder continuous, which entails that the $L^2$-semigroup $(P_t)_{t\geqslant0}$ generated by the Dirichlet form $(D,\mathscr{F})$ is a $C_\infty$-Feller semigroup; see \cite[Proposition 4.3 and Corollary 6.4]{RU}. Thus, $(A1)$ holds. From \eqref{ed4}, one can easily
deduce that
a Nash-type inequality holds for $(D,\mathscr{F})$, which implies that the associated process $X$ admits a transition density function
(see \cite{BKK} for more details); that is, $(A2)$ is also satisfied. $(A3)$ follows immediately from \cite[Theorem 1.1]{MUW}. Therefore, we obtain the desired assertion from Theorem \ref{thp}. \end{proof}
\subsection*{Acknowledgment}
\hskip\parindent\!\!\!
The second author would like to thank
Professor Kazuhiro Kuwae for very
helpful comments on the Mosco convergence of Dirichlet forms and the proof of Lemma \ref{lemm31}. The research of Huaiqian Li is supported by National Natural Science Foundation of China (Nos.\ 11401403, 11571347).
The research of Jian Wang is supported by National
Natural Science Foundation of China (No.\ 11522106), the Fok Ying Tung
Education Foundation (No.\ 151002) and the Program for Probability and Statistics: Theory and Application (No.\ IRTL1704).
\begin{thebibliography}{99}
\bibitem{Ap}
Applebaum, D.: \emph{L\'evy Processes and Stochastic Calculus}, Cambridge Univ. Press, Cambridge, 2004.
\bibitem{Bakry1985}
Bakry, D.: Transformations de Riesz pour les semigroupes sym\'{e}triques, \emph{S\'{e}minaire de probabilit\'{e}s XIX}, Lecture Notes in Math. \textbf{1123}, Springer, 1985, 130--174.
\bibitem{Bakry1987}
Bakry, D.: \'Etude des transformations de Riesz dans les vari\'{e}t\'{e}s riemanniennes \`{a} courbure de Ricci minor\'{e}e, \emph{S\'{e}minaire de probabilit\'{e}s XXI}, Lecture Notes in Math. \textbf{1247}, Springer, 1987, 137--172.
\bibitem{BBL}
Ba\~{n}uelos, R., Bogdan, K. and Luks, T.: Hardy--Stein identities and square functions for semigroups, \emph{J. London Math. Soc.} \textbf{94} (2016), 462--478.
\bibitem{BK}
Ba\~{n}uelos, R. and Kim, D.: On square functions and Fourier multipliers for nonlocal operators, arXiv:1702.06573v1.
\bibitem{BBCK}
Barlow, M., Bass, R.F., Chen, Z.-Q. and Kassmann, M.: Non-local Dirichlet forms and symmetric jump processes, \emph{Trans. Amer. Math. Soc.} \textbf{361} (2008), 1963--1999.
\bibitem{BKK}
Bass, R.F., Kassmann, M. and Kumagai, T.: Symmetric jump processes: localization, heat kernels and convergence, \emph{Ann. Inst. Henri Poincar\'{e} Probab. Statist.} \textbf{46} (2010), 59--71.
\bibitem{Ben}
Bennett, A.G.: Probabilistic square functions and a priori estimates, \emph{Trans. Amer. Math. Soc.} \textbf{291} (1985), 159--166.
\bibitem{BSW}
B\"{o}ttcher, B., Schilling, R.L. and Wang, J.: \emph{L\'{e}vy-Type Processes: Construction, Approximation and Sample Path Properties}, Lecture Notes in Mathematics, vol. \textbf{2099}, L\'{e}vy Matters III, Springer, Berlin, 2014.
\bibitem{BH1991}
Bouleau, N. and Hirsch, F.: \emph{Dirichlet Forms and Analysis on Wiener Space}, de Gruyter Studies in Mathematics, \textbf{14}, Walter de Gruyter, Berlin, 1991.
\bibitem{CKS}
Carlen, E.A., Kusuoka, S. and Stroock, D.W.: Upper bounds for symmetric Markov transition functions, \emph{Ann. Inst. H. Poincar\'e Probab. Statist.} \textbf{23} (1987), 245--287.
\bibitem{CF}
Chen, Z.-Q. and Fukushima, M.: \emph{Symmetric Markov Processes, Time Change, and Boundary Theory}, Princeton Univ. Press, Princeton, 2012.
\bibitem{CD2003}
Coulhon, T. and Duong, X.T.: Riesz transform and related inequalities on non-compact Riemannian manifolds, \emph{Comm. Pure Appl. Math.} \textbf{56} (2003), 1728--1751.
\bibitem{CDD}
Coulhon, T., Duong, X.T. and Li, X.D.: Littlewood--Paley--Stein functions on complete Riemannian manifolds for $1\leqslant p\leqslant 2$, \emph{Studia Math.} \textbf{154} (2003), 37--57.
\bibitem{Nick}
Dungey, N.: A Littlewood--Paley--Stein estimate on graphs and groups, \emph{Studia Math.} \textbf{189} (2008), 113--129.
\bibitem{FOT}
Fukushima, M., Oshima, Y. and Takeda, M.: \emph{Dirichlet Forms and Symmetric Markov Processes}, de Gruyter Studies in Math. vol. \textbf{19}, Walter de Gruyter, Berlin, 2nd rev. and ext. ed., 2011.
\bibitem{JS}
Jacod, J. and Shiryaev, A.: \emph{Limit Theorems for Stochastic Processes}, Springer, Berlin, 1987.
\bibitem{KL}
El Karoui, N. and Lepeltier, J.-P.: Repr\'{e}sentation des processus ponctuels multivari\'{e}s \`{a} l'aide d'un processus de Poisson, \emph{Z. Wahrsch. verw. Geb.} \textbf{39} (1977), 111--133.
\bibitem{Li2006}
Li, X.D.: Riesz transforms for symmetric diffusion operators on complete Riemannian manifolds, \emph{Rev. Mat. Iberoamericana} \textbf{22} (2006), 591--648.
\bibitem{Lou1987}
Lohou\'{e}, N.: Estimations des fonctions de Littlewood--Paley--Stein sur les vari\'{e}t\'{e}s riemanniennes \`{a} courbure non positive, \emph{Ann. Sci. \'Ecole Norm. Sup.} \textbf{20} (1987), 505--544.
\bibitem{MUW}
Masamune, J., Uemura, T. and Wang, J.: On the conservativeness and the recurrence of symmetric jump-diffusions, \emph{J. Funct. Anal.} \textbf{263} (2012), 3984--4008.
\bibitem{Mey}
Meyer, P.-A.: D\'{e}monstration probabiliste de certaines in\'{e}galit\'{e}s de Littlewood--Paley. Expos\'{e}s I--IV, \emph{S\'{e}minaire de probabilit\'{e}s} \textbf{10} (1976), 125--183.
\bibitem{Mey1981}
Meyer, P.-A.: Retour sur la th\'{e}orie de Littlewood--Paley, \emph{S\'{e}minaire de Probabilit\'{e}s XV}, Lecture Notes in Math. \textbf{850}, Springer, 1981, 151--166.
\bibitem{Mo1994}
Mosco, U.: Composite media and asymptotic Dirichlet forms, \emph{J. Funct. Anal.} \textbf{123} (1994), 368--421.
\bibitem{RU}
Schilling, R. and Uemura, T.: On the Feller property of Dirichlet forms generated by pseudo differential operators, \emph{Tohoku Math. J.} \textbf{59} (2007), 401--422.
\bibitem{SW2014}
Schilling, R.L. and Wang, J.: Lower bounded semi-Dirichlet forms associated with L\'{e}vy type operators, in: Chen, Z.-Q., Jacob, N., Takeda, M. and Uemura, T. (eds.): \emph{Festschrift Masatoshi Fukushima. In Honor of Masatoshi Fukushima's Sanju}, World Scientific, New Jersey, 2015, pp. 507--527.
\bibitem{ShYo}
Shigekawa, I. and Yoshida, N.: Littlewood--Paley--Stein inequality for a symmetric diffusion, \emph{J. Math. Soc. Japan} \textbf{44} (1992), 251--280.
\bibitem{St1970}
Stein, E.M.: \emph{Singular Integrals and Differentiability Properties of Functions}, Princeton Mathematical Series, No. \textbf{30}, Princeton University Press, Princeton, 1970.
\bibitem{Stein}
Stein, E.M.: \emph{Topics in Harmonic Analysis Related to the Littlewood--Paley Theory}, Ann. of Math. Stud. \textbf{63}, Princeton Univ. Press, Princeton, 1970.
\bibitem{Sto}
Stroock, D.W.: \emph{Markov Processes from K. It\^{o}'s Perspective}, Ann. of Math. Stud. \textbf{155}, Princeton Univ. Press, Princeton, 2003.
\bibitem{SU}
Suzuki, K. and Uemura, T.: On instability of global path properties of symmetric Dirichlet forms under Mosco-convergence, \emph{Osaka J. Math.} \textbf{53} (2016), 567--590.
\bibitem{YJA}
Yan, J.A.: \emph{Lectures on Measure Theory}, Science Press, Beijing, 2nd ed., 2004.
\end{thebibliography}
\end{document}
\begin{document}
\begin{Large}
\centerline{On a problem related to ``second'' best approximations to a real number}
\vskip+0.5cm
\centerline{Pavel Semenyuk}
\end{Large}
\vskip+1cm
\section{Introduction}
\vskip+0.3cm
For a given irrational number $\alpha$ we consider its continued fraction expansion
\begin{equation}\label{q}
\alpha = [a_0, a_1, a_2, ...] = a_0 + \dfrac{1}{a_1+\dfrac{1}{a_2+...}}, ~a_0\in\mathbb{Z}, ~a_i\in\mathbb{Z}_{+}, ~i = 1, 2, ...
\end{equation}
and its convergents $\dfrac{p_n}{q_n} = [a_0, a_1, ..., a_n]$. We call two irrational numbers $\alpha = [a_0, a_1, a_2, ...]$ and $\beta = [b_0, b_1, b_2, ...]$ equivalent and write $\alpha\sim\beta$, if
there exist positive integers $m$ and $n$ such that
$a_{n+k} = b_{m+k}$, $k=1,2,3,\dots$, that is, the tails of the continued fraction expansions of $\alpha$ and $\beta$ coincide.
Let
$$
\psi_{\alpha}(t) = \min\limits_{1\leqslant q \leqslant t, q\in\mathbb{Z}} ||q\alpha|| ,\,\,\,\,\,\text{where}\,\,\,\,\, ||x|| = \min\limits_{n\in\mathbb{Z}}|x-n|
$$
be the irrationality measure function for $\alpha$.
By Lagrange's theorem on best approximations (see \cite{khinchin1964}) it is a piecewise constant function, namely
$$ \psi_{\alpha}(t) = |q_{\nu}\alpha - p_{\nu}|
\,\,\,\,\,\text{
for}
\,\,\,\,\,
q_\nu\leqslant t\leqslant q_{\nu+1}.
$$
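The piecewise-constant description of $\psi_\alpha$ can be illustrated numerically. The helper functions below are an added sketch (the names are ours, not from the paper): they compute convergents of a continued fraction and evaluate $\psi_\alpha$ by brute force, confirming Lagrange's theorem for the golden ratio.

```python
import math

def convergents(a, n):
    """First n convergents (p_k, q_k) of [a0; a1, a2, ...]."""
    p, q = [a[0], a[1] * a[0] + 1], [1, a[1]]
    for k in range(2, n):
        p.append(a[k] * p[-1] + p[-2])
        q.append(a[k] * q[-1] + q[-2])
    return list(zip(p, q))

def psi(alpha, t):
    """Irrationality measure function: min over 1 <= q <= t of ||q * alpha||."""
    return min(abs(q * alpha - round(q * alpha)) for q in range(1, t + 1))

# Golden ratio: all partial quotients equal to 1.
phi = (1 + math.sqrt(5)) / 2
p6, q6 = convergents([1] * 20, 20)[5]   # sixth convergent 13/8
assert (p6, q6) == (13, 8)
# By Lagrange's theorem, psi_alpha(t) = |q_nu * alpha - p_nu| for q_nu <= t < q_{nu+1}.
assert abs(psi(phi, q6) - abs(q6 * phi - p6)) < 1e-12
```

The brute-force minimum over all $q \leqslant t$ is indeed attained at the largest convergent denominator $q_\nu \leqslant t$, as the theorem asserts.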
Many classical results concerning rational approximations to a real number $\alpha$ can be formulated in terms of $\psi_{\alpha}(t)$.
For example, in terms of the irrationality measure function $\psi_\alpha$ one can define the Lagrange constant
\begin{equation}\label{l}
\lambda(\alpha) =
\liminf_{t\to \infty} t\cdot \psi_\alpha (t).
\end{equation}
The set of all possible values of $\lambda(\alpha)$ is known as
Lagrange spectrum
$$
\mathbb{L} = \{\lambda ~|~\exists\alpha\in\mathbb{R}\setminus\mathbb{Q} \colon \lambda = \lambda(\alpha) \}.
$$
It is very well studied
(see for example \cite{cusick1989markoff}).
\vskip+0.3cm
In \cite{moshchevitin2017} Moshchevitin considered the irrationality measure function
$$
\psi_{\alpha}^{[2]}(t) = \min\limits_{\substack{(q, p)\colon q, p \in\mathbb{Z}, 1\leqslant q\leqslant t, \\ (q, p) \neq (q_n, p_n) ~\forall n\in\mathbb{Z}_{+}}} |q\alpha -p|,
$$
related to so-called ``second best'' approximations,
the corresponding Diophantine constant
\begin{equation}\label{k}
\mathfrak{k}(\alpha) = \liminf\limits_{t\to\infty} t \cdot\psi_{\alpha}^{[2]}(t)
\end{equation}
and the corresponding spectrum
$$
\mathbb{L}_2 = \{ \lambda ~| ~\exists\alpha\in\mathbb{R}\setminus\mathbb{Q} \colon \lambda = \mathfrak{k}(\alpha) \}.
$$
In particular, he
studied different properties of the function $\psi_{\alpha}^{[2]}(t)$
and
calculated two largest elements of the spectrum $\mathbb{L}_2$. In the present paper we calculate the value for the third element of the spectrum $\mathbb{L}_2$.
\vskip+0.3cm
\section{Results on the spectrum $\mathbb{L}_2$}
\vskip+0.3cm
The following structural result about $\mathbb{L}_2$ was proven
in \cite{moshchevitin2017}.
\vskip+0.3cm
\textbf{Theorem 1.}
\begin{itemize}
\item[1.] The largest element of $\mathbb{L}_2$ is $\dfrac{4}{\sqrt{5}}$. Moreover, $\mathfrak{k}(\alpha) = \dfrac{4}{\sqrt{5}}$ if and only if $\alpha\sim\dfrac{1+\sqrt{5}}{2} = [1; \overline{1}]$;
\item[2.] If $\alpha\not\sim\dfrac{1+\sqrt{5}}{2}$, then $\mathfrak{k}(\alpha)\leqslant\dfrac{4}{\sqrt{17}}$;
\item[3.] The equality $\mathfrak{k}(\alpha) = \dfrac{4}{\sqrt{17}}$ holds if and only if $\alpha\sim\dfrac{1+\sqrt{17}}{2} = [2; \overline{1,1,3}]$;
\item[4.] The point $\dfrac{4}{\sqrt{17}}$ is an isolated point of $\mathbb{L}_2$.
\item[5.] The whole segment $\left[ 0, \frac{12}{21+\sqrt{15}}\right]$ belongs to $\mathbb{L}_2$.
\end{itemize}
\vskip+0.3cm
The main result of the present paper is as follows.
\vskip+0.3cm
\textbf{Theorem 2.}
\begin{enumerate}
\item If $\alpha$ is irrational and equivalent neither to $\frac{1+\sqrt{5}}{2}$ nor to $\frac{1+\sqrt{17}}{2}$, then $\mathfrak{k}(\alpha) \leqslant \frac{164}{13\sqrt{173}}$.
\item Let $\alpha_0 = \frac{13\sqrt{173} + 39}{82} = [2; \overline{1, 1, 3, 1, 1, 1, 1, 3}]$.
Then $\mathfrak{k}(\alpha) =\mathfrak{k}(\alpha_0) = \frac{164}{13\sqrt{173}}$ if and only if $\alpha \sim \alpha_0$.
\item The point $\frac{164}{13\sqrt{173}}$ is an isolated point of $\mathbb{L}_2$.
\end{enumerate}
\vskip+0.3cm
\section{Auxiliary results and notation}
\vskip+0.3cm
Consider the tails
$$
\alpha_n = [a_n; a_{n+1}, a_{n+2}, a_{n+3}, ...],\,\,\,\,\,
\alpha^{*}_n = [0; a_n, a_{n-1}, a_{n-2}, ..., a_1]
$$
of continued fraction (\ref{q}). In terms of these quantities it is natural to give a formula
\begin{equation}\label{pe}
\left| \alpha - \frac{p_n}{q_n}\right| = \frac{1}{q_n^2 (\alpha_{n+1}+\alpha_n^*)}
\end{equation}
for the approximation of $\alpha$ by its convergents.
Relation (\ref{pe}) is known as Perron's formula. By means of (\ref{pe}) one can express the value of $\lambda(\alpha)$
defined in (\ref{l}) as
\begin{equation}\label{pe1}
\lambda(\alpha) = \liminf_{n\to \infty} \frac{1}{{\alpha_{n+1}+\alpha_n^*}}.
\end{equation}
An analog of formula (\ref{pe1}) for the value of $\mathfrak{k}(\alpha)$ was obtained in \cite{moshchevitin2017}.
It is as follows.
Consider the quantities
$$
\varkappa^1_n(\alpha) = \dfrac{(1 + \alpha^{*}_{n-1})(\alpha_n - 1)}{\alpha_n + \alpha^{*}_{n-1}},\,\,\,\,\,\,
\varkappa^2_n(\alpha) = \dfrac{(1 - \alpha^{*}_n)(\alpha_{n+1} + 1)}{\alpha_{n+1} + \alpha^{*}_n}
,
\,\,\,\,\,\,
\varkappa^4_n(\alpha) = \dfrac{4}{\alpha_n + \alpha^{*}_{n-1}}.
$$
(we follow the notation from \cite{moshchevitin2017}).
Then
if $\alpha \not\sim \frac{1+\sqrt{5}}{2}$ one has
\begin{equation}\label{one}
\mathfrak{k}(\alpha) = \liminf\limits_{n\to\infty\colon a_n\geqslant 2}\min(\varkappa^1_n, \varkappa^2_n, \varkappa^4_n)
\end{equation}
(see Hilfssatz 13 from \cite{moshchevitin2017}).
We should note that it is clear that for $\alpha \sim \beta$ one has $ \mathfrak{k}(\alpha) = \mathfrak{k}(\beta) $.
\vskip+0.3cm
Besides formula (\ref{one}) we need some other auxiliary statements from \cite{moshchevitin2017}.
We formulate them as the following
\vskip+0.3cm
\textbf{Proposition 1.} Let $\alpha$ be an irrational number not equivalent to $\dfrac{1+\sqrt{5}}{2} = [1; \overline{1}]$ and $\dfrac{1+\sqrt{17}}{2} = [2; \overline{1,1,3}]$.
\begin{itemize}
\item[1.] If for infinitely many $n$ in continued fraction expansion (\ref{q}) one has $a_n \geqslant 5$, then $\mathfrak{k}(\alpha) \leqslant \frac{4}{5}$;
\item[2.] If for all $n$ sufficiently large in continued fraction expansion (\ref{q}) one has $a_n \leqslant 4$ and for infinitely many $n$ the equality $a_n = 4$ occurs, then $\mathfrak{k}(\alpha) \leqslant \frac{4}{3 + \sqrt{2}}$;
\item[3.] If for all $n$ sufficiently large $a_n \leqslant 4$ and for infinitely many $n$ the equality $a_n = 2$ occurs, then $\mathfrak{k}(\alpha) \leqslant \sqrt{2} - \frac{1}{2}$;
\item[4.] If for infinitely many $n$ one has $a_n \in \{1, 3\} $ and
\item[4.1.] for infinitely many $n$ one has $a_{n-1} = a_n = 3$, then $\mathfrak{k}(\alpha) \leqslant \frac{39}{43}$;
\item[4.2.] for infinitely many $n$ one has $a_{n-1} = 3, a_n = 1, a_{n+1} = 3$, then $\mathfrak{k}(\alpha) \leqslant \frac{136}{145}$;
\item[4.3.] for infinitely many $n$ one has $a_{n-2} = a_{n-1} = 1, a_n = 3, a_{n+1} = a_{n+2} = a_{n+3} = 1$, then $\mathfrak{k}(\alpha) \leqslant \frac{180}{187}$.
\end{itemize}
\vskip+0.3cm
{\bf Remark.} Although the main result on the structure of $\mathbb{L}_2$ from
\cite{moshchevitin2017}
is correct, the proof given in
\cite{moshchevitin2017} contains
a mistake:
in cases 4.2 and 4.3 of \cite{moshchevitin2017},
instead of the upper bounds $\frac{136}{145}$ and $ \frac{180}{187}$ respectively, different incorrect bounds were given.\\
\vskip+0.3cm
The bounds for $\mathfrak{k}(\alpha)$ from Proposition 1 are summarized in the following table.
\vskip+0.3cm
\begin{center}
\begin{tabular}{ | c | c | c | c | }
\hline
Case & Conditions for partial quotients & Upper bound & Numerical value\\ \hline
1 & $\text{inf. many}\,\, a_n \geqslant 5$ & $4/5$ & 0.8 \\ \hline
2 & $\text{almost all}\,\, a_n\le 4, \,\,\text{inf. many}\,\, a_n = 4$ & $4/(3+\sqrt{2})$ & $0.906163^+$ \\ \hline
3 & $\text{almost all}\,\, a_n\le 4, \,\,\text{inf. many}\,\, a_n = 2$ & $\sqrt{2} - 1/2$ & $0.914213^+$ \\ \hline
4.1 & $\text{inf. many patterns}\,\,$ 3 3 & 39/43 & $0.906976^+$ \\ \hline
4.2 &$\text{inf. many patterns}\,\,$ 3 1 3 & 136/145 & $0.937931^+$ \\ \hline
4.3 & $\text{inf. many patterns}\,\,$ 1 1 3 1 1 1 & 180/187 & $0.962566^+$ \\
\hline
\end{tabular}
\end{center}
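The numerical values in the last column of the table can be checked directly; the short script below (added for illustration, not part of the paper) verifies that each truncated decimal is consistent with the exact bound.

```python
import math

# Exact upper bounds from the table and their truncated decimal values;
# the trailing "+" in the table means the decimal expansion is truncated.
bounds = {
    "1":   (4 / 5,                  0.8),
    "2":   (4 / (3 + math.sqrt(2)), 0.906163),
    "3":   (math.sqrt(2) - 0.5,     0.914213),
    "4.1": (39 / 43,                0.906976),
    "4.2": (136 / 145,              0.937931),
    "4.3": (180 / 187,              0.962566),
}
for case, (exact, truncated) in bounds.items():
    # each truncated value is a lower approximation accurate to 1e-6
    assert truncated <= exact < truncated + 1e-6, case
```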
\vskip+0.3cm
\section{The value of $\mathfrak{k} (\alpha_0)$}
\vskip+0.3cm
In this section we prove the following
\vskip+0.3cm
\textbf{Lemma 1.} $\mathfrak{k}(\alpha_0) = \liminf\limits_{n\to\infty\colon a_n\geqslant 2}\varkappa^4_n = \dfrac{164}{13\sqrt{173}}$.
\vskip+0.3cm
\textbf{Proof.} Consider blocks of digits $ {A} = \{1,1,3\}$ and ${ B} = \{1,1,1,1,3\}$.
Now $\alpha_0 = [2; \overline{1, 1, 3, 1, 1, 1, 1, 3}]$ can be written as
$\alpha_0 =[2;\overline{A B}]$.
In view of (\ref{one}), it suffices to prove that for all large enough $n$ such that $a_n = 3$, the value of $\varkappa^4_n$ is less than both $\varkappa^1_n$ and $\varkappa^2_n$. Let us consider separately those values of $n$ which are either $\equiv 0 \pmod{8}$ (a 3 taken after a block of four 1's) or $\equiv 3 \pmod{8}$ (a 3 taken after a block of two 1's). Thus, we need to consider two possible limits for each of $\varkappa^1_n, \varkappa^2_n, \varkappa^4_n$.
Let us calculate all six of them:
$$
\lim\limits_{k\to\infty} \varkappa^1_{8k-5} = \dfrac{(1+[0;\overline{AB}])([3;\overline{BA}]-1)}{[0;\overline{AB}]+[3;\overline{BA}]} = \dfrac{167}{13\sqrt{173}} = 0.976674^+,
$$
$$
\lim\limits_{k\to\infty} \varkappa^1_{8k} = \dfrac{(1+[0;\overline{BA}])([3;\overline{AB}]-1)}{[0;\overline{BA}] + [3;\overline{AB}]} = \dfrac{169}{13\sqrt{173}} = 0.988371^+,
$$
$$
\lim\limits_{k\to\infty} \varkappa^2_{8k-5} = \dfrac{(1-[0;3,\overline{AB}])([1;1,1,1,3,\overline{AB}]+1)}{[0;3,\overline{AB}]+[1;1,1,1,3,\overline{AB}]} = \dfrac{169}{13\sqrt{173}} = 0.988371^+,
$$
$$
\lim\limits_{k\to\infty} \varkappa^2_{8k} = \dfrac{(1-[0;3,\overline{BA}])([1;1,3,\overline{BA}]+1)}{[0;3,\overline{BA}]+[1;1,3,\overline{BA}]} = \dfrac{167}{13\sqrt{173}} = 0.976674^+,
$$
$$
\lim\limits_{k\to\infty} \varkappa^4_{8k-5} = \lim\limits_{k\to\infty} \varkappa^4_{8k} = \dfrac{4}{[0;\overline{AB}]+[0;\overline{BA}]+3} = \dfrac{164}{13\sqrt{173}} = 0.959129^+.
$$
We see that
the last value
is the smallest of the six. This proves Lemma 1.
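The quadratic-irrational identities behind Lemma 1 can be confirmed numerically. The sketch below (the helper name is ours, added for illustration) evaluates eventually periodic continued fractions by unrolling the period, and checks both the closed form of $\alpha_0$ and the value of the $\varkappa^4$ limit.

```python
import math

def cf_value(head, period, reps=60):
    """Numeric value of the continued fraction [head; overline(period)],
    obtained by unrolling the period and evaluating from the tail."""
    digits = list(head) + list(period) * reps
    x = float(digits[-1])
    for a in reversed(digits[:-1]):
        x = a + 1 / x
    return x

A, B = [1, 1, 3], [1, 1, 1, 1, 3]

# Closed form of alpha_0 = [2; overline(AB)] = (13*sqrt(173) + 39) / 82
alpha0 = cf_value([2], A + B)
assert abs(alpha0 - (13 * math.sqrt(173) + 39) / 82) < 1e-9

# kappa^4 limit: 4 / ([0; overline(AB)] + [0; overline(BA)] + 3) = 164 / (13*sqrt(173))
k4 = 4 / (cf_value([0], A + B) + cf_value([0], B + A) + 3)
assert abs(k4 - 164 / (13 * math.sqrt(173))) < 1e-9   # approx 0.959129
```

Backward evaluation of a continued fraction is numerically stable, and sixty copies of the period are far more than double precision can distinguish.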
\vskip+0.3cm
\section{Extremality of $\mathfrak{k} (\alpha_0)$}
\vskip+0.3cm
In this section we finalise the proof of Theorem 2.
\vskip+0.3cm
In cases 1--3, 4.1 and 4.2 of Proposition 1 we have $\mathfrak{k}(\alpha) \leqslant \frac{136}{145} < \mathfrak{k}(\alpha_0) = \frac{164}{13\sqrt{173}}= 0.959129^+$,
while the bound of case 4.3 satisfies the inequality $ \frac{180}{187} >\mathfrak{k}(\alpha_0)$.
So we only need to consider case 4.3 in more detail. First of all, we consider the following four subcases of case 4.3:
\vskip+0.3cm
\begin{itemize}
\item[4.3.1] One has infinitely many patterns $\{1, 1, \boldsymbol{3}, 1, 1, 1, 1, 1\}$ (here and in the other cases the bold $\boldsymbol{3}$ represents the $n$-th partial quotient of the continued fraction). In that case
\begin{multline*}
\mathfrak{k}(\alpha) \leqslant \liminf\limits_{n\to\infty\colon a_n\geqslant 2}\varkappa^4_n(\alpha) \leqslant \dfrac{4}{[0;1,1,\overline{3,1}] + [3;1,1,1,1,1,\overline{1,3}]} =\\= \dfrac{4}{\frac{\sqrt{21}+1}{10} + \frac{\sqrt{21}+943}{262}} = 0.958087^+.
\end{multline*}
\item[4.3.2] One has infinitely many patterns $\{1, 1, \boldsymbol{3}, 1, 1, 1, 3\}$. In that case
\begin{multline*}
\mathfrak{k}(\alpha) \leqslant \liminf\limits_{n\to\infty\colon a_n\geqslant 2}\varkappa^4_n(\alpha) \leqslant \dfrac{4}{[0;1,1,\overline{3,1}] + [3;1,1,1,3,1,1,\overline{3,1}]} =\\= \dfrac{4}{\frac{\sqrt{21}+1}{10} + \frac{\sqrt{21}+4575}{1258}} = 0.952692^+.
\end{multline*}
\item[4.3.3] One has infinitely many patterns $\{ABB\}$ (or, in other words, infinitely many patterns \\$\{1, 1, 3, 1, 1, 1, 1, \boldsymbol{3}, 1, 1, 1, 1, 3, 1, 1\}$). In that case
\begin{multline*}
\mathfrak{k}(\alpha) \leqslant \liminf\limits_{n\to\infty\colon a_n\geqslant 2}\varkappa^4_n(\alpha) \leqslant \dfrac{4}{[0;1,1,1,1,3,1,1,\overline{1,3}]+[3;1,1,1,1,3,1,1,\overline{1,3}]} =\\= \dfrac{4}{\frac{3329+\sqrt{21}}{5470} + \frac{19739+\sqrt{21}}{5470}} = 0.948123^+.
\end{multline*}
\item[4.3.4] One has infinitely many patterns $\{AAB\}$ (or, in other words, infinitely many patterns \\$\{1, 1, 3, 1, 1, 1, 1, \boldsymbol{3}, 1, 1, 3, 1, 1, 3, 1, 1\}$). In that case
\begin{multline*}
\mathfrak{k}(\alpha) \leqslant \liminf\limits_{n\to\infty\colon a_n\geqslant 2}\varkappa^4_n(\alpha) \leqslant \dfrac{4}{[0;1,1,1,1,3,1,1,\overline{1,3}]+[3;1,1,3,1,1,3,1,1,\overline{3,1}]} =\\= \dfrac{4}{\frac{3329+\sqrt{21}}{5470} + \frac{120383+\sqrt{21}}{33802}}= 0.959006^+.
\end{multline*}
\end{itemize}
We see that in all the subcases 4.3.1, 4.3.2, 4.3.3, 4.3.4 the upper bound for $\mathfrak{k}(\alpha) $ obtained is strictly less than $\mathfrak{k}(\alpha_0) = \frac{164}{13\sqrt{173}}$.
Let us describe all the cases considered.
Cases 1-3 cover all the continued fractions with infinitely many terms different from 1 and 3. Cases 4.1, 4.2, 4.3.1 and 4.3.2 cover all the fractions with infinitely many patterns of the form $[..., 3, \underbrace{1, ..., 1}_{k}, 3, ...]$, where $k\not\in\{2, 4\}$.
Therefore, the only fractions which we did not consider in the cases above, eventually consist of blocks of the form $ {A} $ and ${ B}$. Case 4.3.3 covers all the fractions with infinitely many patterns $ABB$, and case 4.3.4 covers fractions with infinitely many patterns $AAB$. Thus, the only fractions left in consideration are eventually periodic with the period $\overline{AB}$, or, in other words, are equivalent to $\alpha_0=\frac{13\sqrt{173}+39}{82}$. Together with Lemma 1 this proves Theorem 2.
\end{document} |
\begin{document}
\title{A Unified Framework for Symmetry Handling}
\begin{abstract}
Handling symmetries in optimization problems is essential
for devising efficient solution methods.
In this article, we present a general framework that captures many of the
already existing symmetry handling methods.
While these methods are mostly discussed independently from each other, our
framework allows different methods to be applied simultaneously, thus
outperforming their individual effect.
Moreover, most existing symmetry handling methods only apply to binary variables.
Our framework allows these methods to be easily generalized to
general variable types.
Numerical experiments confirm that our novel framework is superior to the
state-of-the-art symmetry handling methods as implemented in the solver
{\tt SCIP}\xspace on a broad set of instances.
\noindent
{\bfseries Keywords:}\
symmetries, branch-and-bound, integer programming, propagation,
lexicographic order.
\end{abstract}
\section{Introduction}
We consider optimization problems~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi}) \ensuremath{\coloneqq}
\min \{ f(x) : x \in \ensuremath{\mathrm\Phi}\}$ for a real-valued function~$f\colon\mathds{R}^n
\to \mathds{R}$ and feasible region~$\ensuremath{\mathrm\Phi} \subseteq
\mathds{R}^n$ such that~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ can be solved by (spatial)
branch-and-bound (B\&{}B\xspace)~\cite{LandDoig1960,HorstTuy1996}.
This class of problems is very rich and captures problems such as
mixed-integer linear programs and mixed-integer nonlinear programs.
The core of B\&{}B\xspace-methods is to repeatedly partition the feasible
region~$\ensuremath{\mathrm\Phi}$ into smaller subregions~$\ensuremath{\mathrm\Phi}'$
and to solve the reduced problem~$\mathrm{OPT}(f, \ensuremath{\mathrm\Phi}')$.
Subregions do not need to be explored further if
it is known that they do not contain optimal or improving solutions (i.e., pruning by bound),
or if the region becomes empty (i.e., pruning by feasibility).
The sketched mechanism makes it possible to routinely solve problems with thousands of
variables and constraints.
If symmetries are present, however, plain B\&{}B\xspace usually struggles with
solving optimization problems as we explain next.
A \emph{symmetry} of the optimization problem is a
bijection~$\gamma\colon\mathds{R}^n \to \mathds{R}^n$ that maps
solution vectors $x \in \mathds{R}^n$ to solution vectors $\gamma(x)$
while preserving the objective value and feasibility state, i.e., $f(x) =
f(\gamma(x))$ holds for all~$x \in \mathds{R}^n$ and $x \in \ensuremath{\mathrm\Phi}$ if and
only if $\gamma(x) \in \ensuremath{\mathrm\Phi}$.
Resulting from this definition,
the set of all symmetries of an optimization problem
forms a group~$\bar{\Gamma}$.
When enumerating the subregions~$\ensuremath{\mathrm\Phi}'$ in B\&{}B\xspace, it might happen that several subregions at different parts of the branch-and-bound tree contain equivalent copies of (optimal) solutions.
This results in unnecessarily large B\&{}B\xspace trees.
By exploiting the presence of symmetries,
one could enhance B\&{}B\xspace by finding more reductions and pruning rules
that further restrict the (sub-)regions without sacrificing finding
optimal solutions to the original problem
\cite{KouyialisEtAl2019,Liberti2012,pfetsch2019computational}.
To handle symmetries, different approaches have been discussed in the
literature.
Two very popular classes of symmetry handling methods are
\emph{symmetry handling constraints}
(SHC\xspaces)~\cite{BendottiEtAl2021,Friedman2007,Hojny2020,hojny2019polytopes,KaibelEtAl2011,KaibelPfetsch2008,Liberti2008,Liberti2012,Liberti2012a,LibertiOstrowski2014}
and
\emph{variable domain reductions} (VDR\xspaces) derived from the
B\&{}B\xspace-tree~\cite{Margot2003,ostrowski2009symmetry,OstrowskiEtAl2011}.
Both approaches remove symmetric solutions from the search space without
eliminating all optimal solutions.
As we detail in Section~\ref{sec:overview}, SHC\xspaces and VDR\xspaces come with a
different flavor.
SHC\xspaces usually use a static scheme for symmetry reductions,
whereas VDR\xspaces dynamically find reductions based on decisions in the B\&{}B\xspace-tree.
In their textbook form, SHC\xspaces and VDR\xspaces are thus incompatible, i.e., they
cannot be combined, which might leave some potential for symmetry reductions
unexploited.
Moreover, many SHC\xspaces and VDR\xspaces have only been studied for binary variables
and for symmetries corresponding to permutations of variables,
which restricts their applicability.
The goal of this article is to overcome these drawbacks.
We therefore devise a unified framework for symmetry handling.
The contributions of our framework are that it
\begin{enumerate}[label={{(C\arabic*)}},ref={C\arabic*}]
\item\label{contrib1} resolves incompatibilities between SHC\xspaces and VDR\xspaces,
\item\label{contrib2} applies for general variable types, and
\item\label{contrib3} can handle symmetries of arbitrary finite groups,
which are not necessarily permutation groups.
\end{enumerate}
Due to~\ref{contrib1}, one is thus no longer restricted to using either
SHC\xspaces or VDR\xspace methods.
In particular, we show that many popular VDR\xspace techniques for binary
variables such as orbital fixing~\cite{OstrowskiEtAl2011} and isomorphism
pruning~\cite{Margot2002,ostrowski2009symmetry}, but also SHC\xspaces can be
simultaneously cast into our framework.
That is, our framework unifies the application of these techniques.
To fully exploit our framework regarding~\ref{contrib2}, the second
contribution of this paper is a generalization of many symmetry handling
techniques from binary variables to general variable types.
This allows for handling symmetries in more classes of optimization
problems, in particular classes with non-binary variables.
Regarding~\ref{contrib3}, we stress that this result is not based on the
observation that every finite group is isomorphic to a permutation
group by Cayley's theorem~\cite{AlperinBell1995}, because the space in
which the isomorphic permutation group is acting might differ from~$\mathds{R}^n$.
\paragraph{Outline}
After providing basic notations and definitions, Section~\ref{sec:overview}
provides an overview of existing symmetry handling methods.
In particular, we illustrate the techniques that we will later on cast into
our unified framework.
The framework itself will be introduced in Section~\ref{sec:framework}.
Section~\ref{sec:cast} shows how existing symmetry handling methods can be
used in our framework and how these methods can be generalized from binary
to general variables.
We conclude this article in Section~\ref{sec:num} with an extensive
numerical study of our new framework both for specific applications and
benchmarking instances.
The study reveals that our novel framework is substantially faster than the
state-of-the-art methods on both SHC\xspaces and VDRs as implemented in the
solver {\tt SCIP}\xspace.
\paragraph{Notation and Definitions}
Throughout the article, we assume that we have access to a group~$\Gamma$
consisting of (not necessarily all) symmetries
of the optimization problem $\mathrm{OPT}(f, \ensuremath{\mathrm\Phi})$.
That is, $\Gamma$ is a subgroup of $\bar{\Gamma}$,
which we denote by~$\Gamma \leq \bar{\Gamma}$.
We refer to~$\Gamma$ as a symmetry group of the problem.
For solution vectors~$x \in \mathds{R}^n$,
the set of symmetrically equivalent solutions is
its~\emph{$\Gamma$-orbit}~$\{\gamma(x) : \gamma\in\Gamma\}$.
Let~$\symmetricgroup{n}$ be the \emph{symmetric group} of~$[n] \ensuremath{\coloneqq}
\{1,\dots,n\}$.
Moreover, let~$[n]_0 \ensuremath{\coloneqq} [n] \cup \{0\}$.
Being in line with the existing literature on symmetry handling,
we assume that permutations~$\gamma \in \symmetricgroup{n}$ act on
vectors~$x \in \mathds{R}^n$ by permuting their index sets, i.e.,
$\gamma(x) \ensuremath{\coloneqq} ( x_{\gamma^{-1}(i)} )_{i=1}^n$.
We call such symmetries \emph{permutation symmetries}.
The identity permutation is denoted by~$\ensuremath{\mathrm{id}}$.
To represent permutations~$\gamma$, we use their disjoint cycle
representation, i.e., $\gamma$ is the composition of disjoint cycles $(i_1,
\dots, i_r)$ such that $\gamma(i_k) = i_{k + 1}$ for $k \in \{1,\dots,
r-1\}$ and $\gamma(i_r) = i_1$.
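As a concrete illustration of the action $\gamma(x) = (x_{\gamma^{-1}(i)})_{i=1}^n$ and of $\Gamma$-orbits, consider the following added sketch (not from the article; indices are 0-based here, unlike the 1-based cycle notation in the text).

```python
def perm_from_cycles(cycles, n):
    """Build a permutation gamma of {0, ..., n-1} from disjoint cycles."""
    gamma = list(range(n))
    for cyc in cycles:
        for k, i in enumerate(cyc):
            gamma[i] = cyc[(k + 1) % len(cyc)]
    return gamma

def act(gamma, x):
    """gamma(x)_i = x_{gamma^{-1}(i)}: the entry x_j moves to position gamma(j)."""
    y = [None] * len(x)
    for j, i in enumerate(gamma):
        y[i] = x[j]
    return tuple(y)

def orbit(generators, x):
    """Gamma-orbit of x under the group generated by the given permutations."""
    seen, frontier = {tuple(x)}, [tuple(x)]
    while frontier:
        z = frontier.pop()
        for g in generators:
            w = act(g, z)
            if w not in seen:
                seen.add(w)
                frontier.append(w)
    return seen

# The 3-cycle (1,2,3) of the text, written 0-based as (0,1,2), acting on R^4:
g = perm_from_cycles([(0, 1, 2)], 4)
assert act(g, (10, 20, 30, 40)) == (30, 10, 20, 40)
assert len(orbit([g], (10, 20, 30, 40))) == 3
```

The orbit routine is a plain breadth-first closure, which suffices for small examples; production codes use dedicated computational group theory tools instead.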
In practice, the symmetry group $\Gamma$ is either provided by a user
or found using detection methods such as in~\cite{pfetsch2019computational,salvagnin2005dominance}.
Detecting the full permutation symmetry group for binary problems,
however, is \ensuremath{\mathrm{NP}}-hard~\cite{margot2010symmetry}.
For non-linear problems,
depending on how the feasible region~$\ensuremath{\mathrm\Phi}$ is given,
already verifying if~$\gamma$ is a symmetry might be
undecidable~\cite{Liberti2012}.
To handle symmetries, among others, we will make use of variable domain
propagation.
The idea of propagation approaches is, given a symmetry reduction rule and
domains for all variables, to derive reductions of some variable domains if
every solution adhering to the symmetry rule is contained in the reduced
domain.
More concretely, let~$\ensuremath{\mathrm\Phi}'$ be the feasible region of some
subproblem encountered during branch-and-bound.
For every variable $x_i$, $i \in [n]$, let~$\ensuremath{\mathcal{D}}_i \subseteq \mathds{R}$ be its
domain, which covers the projection of $\ensuremath{\mathrm\Phi}'$ on $x_i$,
i.e., $\ensuremath{\mathcal{D}}_i \supseteq \{ v \in \mathds{R} :
x_i = v \text{ for some } x \in \ensuremath{\mathrm\Phi}'\}$.
In an integer programming context, the domain $\ensuremath{\mathcal{D}}_i$ corresponds to an interval in practice.
A symmetry reduction rule is encoded as a set~$\mathcal{C} \subseteq \mathds{R}^n$,
which consists of all solution vectors that adhere to the rule.
The goal of \emph{variable domain propagation} is to find
sets~$\ensuremath{\mathcal{D}}'_i \subseteq \ensuremath{\mathcal{D}}_i$, $i \in [n]$, such that
$
\mathcal{C} \cap \bigtimes_{i = 1}^n \ensuremath{\mathcal{D}}'_i
=
\mathcal{C} \cap \bigtimes_{i = 1}^n \ensuremath{\mathcal{D}}_i
$.
In this case, the domain of variable~$x_i$ can be reduced to~$\ensuremath{\mathcal{D}}'_i$.
We say that propagation is \emph{complete} if, for every~$i \in [n]$,
domain~$\ensuremath{\mathcal{D}}'_i$ is inclusionwise minimal.
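As a toy illustration of complete propagation (our own minimal example, not from the article), consider the rule $\mathcal{C} = \{x \in \mathds{R}^2 : x_1 \leq x_2\}$ over interval domains; here the inclusionwise-minimal reduced domains have a closed form.

```python
def propagate_leq(d1, d2):
    """Complete propagation of C = {x in R^2 : x_1 <= x_2} over interval
    domains d = (lower, upper): each interval is shrunk to the smallest
    interval containing all points of C inside d1 x d2.  An empty
    resulting interval (lower > upper) signals infeasibility."""
    lo1, hi1 = d1
    lo2, hi2 = d2
    # x_1 is at most the largest admissible x_2; x_2 is at least the smallest x_1
    return (lo1, min(hi1, hi2)), (max(lo1, lo2), hi2)

# D_1 = [3, 9], D_2 = [1, 5]  shrink to  D_1' = [3, 5], D_2' = [3, 5]
assert propagate_leq((3, 9), (1, 5)) == ((3, 5), (3, 5))
```

For the symmetry rules studied later, the reduced domains are computed by dedicated algorithms rather than by such a one-line formula, but the completeness criterion is the same.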
Throughout this article, we denote \emph{full} B\&{}B\xspace-trees by~$\ensuremath{\mathcal B} =
(\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$, i.e., we do not prune nodes by their objective
value and do not apply enhancements such as cutting planes or bound
propagation.
This is only required to prove \emph{theoretical} statements about symmetry
handling and does not restrict their practical applicability as we will
discuss below.
If not mentioned differently, we assume~$\ensuremath{\mathcal B}$ to be finite, which
might not be the case for spatial branch-and-bound; the case of infinite
\bb-trees will be discussed separately.
For~$\beta \in \ensuremath{\mathcal V}$, let~$\chi_\beta$ be the set of its children
and let~$\ensuremath{\mathrm\Phi}(\beta) \subseteq \ensuremath{\mathrm\Phi}$ be the feasible
solutions at $\beta$, i.e., the intersection of $\ensuremath{\mathrm\Phi}$
and the branching decisions.
If~$\beta$ is not a leaf, we assume that~$\ensuremath{\mathrm\Phi}(\omega)$, $\omega \in
\chi_\beta$, partitions~$\ensuremath{\mathrm\Phi}(\beta)$.
In our definitions, this is even the case for spatial branch-and-bound,
meaning that the partitioned feasible regions are not necessarily
closed sets.
We will discuss the practical consequences of this assumption below.
\section{Overview of symmetry handling methods for binary programs}
\label{sec:overview}
This section provides an overview of symmetry handling methods.
The methods lexicographic fixing, orbitopal fixing, isomorphism pruning,
and orbital fixing are described in detail, because we will later on show
that these methods can be cast into our framework and can be generalized
from binary to arbitrary variable domains.
Further symmetry handling methods will only be mentioned briefly.
We illustrate the different methods using the following running example.
\begin{problem}[NDB]
\label{prob:NDbinary}
Sherali and Smith~\cite{sherali2001models}
consider the \emph{Noise Dosage Problem} (ND).
There are $p$ machines, and on every machine a number of tasks
must be executed. For machine $i \in [p]$,
there are $d_i$ work cycles, each requiring $t_i$ hours of operation,
and each such work cycle induces $\alpha_i$ units of noise.
There are $q$ workers to be assigned to the machines,
each of which is limited to $H$ hours of work.
The problem is to minimize the noise dosage of the worker
that receives the most units of noise.
We extend this problem definition with the requirement
that each worker can be assigned at most once to the same machine,
which turns the problem into a binary problem (NDB), namely to
\begin{subequations}
\makeatletter
\def\theparentequation{NDB}
\makeatother
\renewcommand{\theequation}{NDB\arabic{equation}}
\label{eq:NDB}
\begin{align}
\text{minimize}\
\eta &, \\
\text{subject to}\
\eta &\geq \sum\nolimits_{i \in [p]} \alpha_i \vartheta_{i,j}
&& \text{for all}\ j \in [q],
\\
\sum\nolimits_{j \in [q]} \vartheta_{i,j} &= d_i
&& \text{for all}\ i \in [p],
\label{eq:ndb:demand}
\\
\sum\nolimits_{i \in [p]} t_i \vartheta_{i, j} &\leq H
&& \text{for all}\ j \in [q],
\label{eq:ndb:time}
\\
\vartheta &\in \ensuremath{\{0, 1\}}^{p \times q},
\label{prob:ND:binary}
\\
\eta &\geq 0.
\end{align}
\end{subequations}
In a solution, $\vartheta$ represents
the worker schedules as a~$p \times q$ binary matrix:
variable $\vartheta_{i, j}$ indicates whether a work cycle
on machine $i$ is allocated to worker $j$.
Since all workers have the same properties in this model,
symmetrically equivalent solutions are found by permuting
the worker schedules.
This corresponds to permuting the columns of the $\vartheta$-matrix.
As such, a symmetry group of this problem is the group $\Gamma$
consisting of all column permutations of this $p \times q$ matrix.
\end{problem}
For illustration purposes, we focus on an NDB instance with ${p=3}$ machines and
${q=5}$ workers.
We stress that the symmetry handling methods work even if
variable domain reductions inferred by the model constraints are applied.
For ease of presentation, however, we assume no such reductions
are made in the NDB problem instance.
For this reason, we do not specify~$d_i$, $t_i$, and~$H$.
\subsection{Symmetry handling constraints based on lexicographic order}
\label{sec:shcbin}
The philosophy of \emph{symmetry handling constraints} (SHC\xspaces) is to
restrict the feasible region of an optimization problem to representatives
of the~$\Gamma$-orbits of feasible solutions.
A common way to do this is to enforce that feasible solutions must be
\emph{lexicographically maximal} in their
$\Gamma$-orbit~\cite{Friedman2007}.
Let $x, y \in \mathds{R}^n$. We say $x$ is
\emph{lexicographically larger} than $y$, denoted $x \succ y$,
if for some $k \in [n]$ we have $x_i = y_i$ for $i < k$, and $x_k > y_k$.
If $x \succ y$ or $x = y$, we write $x \succeq y$.
Since the lexicographic order specifies a total ordering on~$\mathds{R}^n$,
to solve the optimization problem
$\mathrm{OPT}(f, \ensuremath{\mathrm\Phi})$,
it is sufficient to consider only those solutions $x$
that are lexicographically maximal in their $\Gamma$-orbit. Let
\[
\ensuremath{\mathcal{X}}\xspace \ensuremath{\coloneqq} \{ x \in \ensuremath{\{0, 1\}}^n : x \succeq \gamma(x)
\text{ for all } \gamma \in \Gamma \}.
\]
Then, solving~$\mathrm{OPT}(f, \ensuremath{\mathrm\Phi} \cap \ensuremath{\mathcal{X}}\xspace)$
yields the same optimal objective and the same feasibility state
as the original problem.
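For small groups given as an explicit list of elements, membership in $\ensuremath{\mathcal{X}}\xspace$ can be tested by plain enumeration. The following Python sketch is illustrative only; we encode a permutation~$\gamma$ as a 0-based image tuple (a convention introduced here), acting on vectors via $\gamma(x)_{\gamma(i)} = x_i$, and exploit that Python's tuple comparison is exactly the lexicographic order:

```python
def apply_perm(gamma, x):
    """Permute a vector: gamma(x)_{gamma(i)} = x_i, i.e. gamma(x)_i = x_{gamma^{-1}(i)}."""
    y = [None] * len(x)
    for i, j in enumerate(gamma):
        y[j] = x[i]
    return tuple(y)

def in_X(x, group):
    """x lies in X iff x is lexicographically >= gamma(x) for every
    gamma in the (fully enumerated) group; practical only for tiny groups."""
    return all(tuple(x) >= apply_perm(g, x) for g in group)
```

For instance, for the full symmetric group on two coordinates, $(1,0)$ is lexicographically maximal in its orbit while $(0,1)$ is not.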
Note, however, that deciding whether a vector~$x \in \ensuremath{\{0, 1\}}^n$
is contained in $\ensuremath{\mathcal{X}}\xspace$ is \ensuremath{\mathrm{coNP}}-complete~\cite{babai1983canonical}.
Complete propagation of the SHC\xspaces~\ensuremath{\mathcal{X}}\xspace is thus \ensuremath{\mathrm{coNP}}-hard for
general groups.
In practice, one therefore either neglects the group structure or applies
specialized algorithms for particular
groups~\cite{BendottiEtAl2021,doornmalenhojny2022cyclicsymmetries}.
We discuss lexicographic fixing and orbitopal fixing as representatives for
these two approaches.
\subsubsection{Lexicographic fixing (LexFix\xspace)}
\label{sec:lexfix}
Instead of handling $x \succeq \gamma(x)$ for all $\gamma \in \Gamma$,
one can handle this SHC\xspace for a single permutation~$\gamma$ only.
For binary problems, \cite{friedman2007fundamental} shows that~$x
\succeq \gamma(x)$ is equivalent to the linear inequality~$\sum_{k=1}^n 2^{n-k} x_k
\geq \sum_{k=1}^n 2^{n-k} \gamma(x)_k$.
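The equivalence rests on the observation that, for binary vectors of equal length, the lexicographic order coincides with the numeric order of their binary encodings. A small exhaustive sanity check (a sketch added here for illustration; it is not part of the cited algorithms):

```python
from itertools import product

def weight(x):
    """Binary encoding: sum over k of 2^(n-k) * x_k, with x_1 most significant."""
    n = len(x)
    return sum(2 ** (n - 1 - k) * x[k] for k in range(n))

def apply_perm(gamma, x):
    """Permute a vector: gamma(x)_{gamma(i)} = x_i."""
    y = [0] * len(x)
    for i, j in enumerate(gamma):
        y[j] = x[i]
    return tuple(y)

# Exhaustive check for one permutation on {0,1}^3: the linear inequality
# holds exactly when x is lexicographically >= gamma(x).
gamma = (1, 0, 2)                       # swap the first two coordinates
for x in product((0, 1), repeat=3):
    gx = apply_perm(gamma, x)
    assert (weight(x) >= weight(gx)) == (x >= gx)
```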
Due to the large coefficients, however, these inequalities might cause
numerical instabilities.
To circumvent numerical instabilities, \cite{hojny2019polytopes} presents
an alternative family of linear inequalities modeling~$x \succeq \gamma(x)$
in which all variable coefficients are either~$0$ or~$\pm 1$ and that can
be separated efficiently.
Alternatively, $x \succeq \gamma(x)$ can also be enforced using a
complete propagation algorithm that runs in linear
time~\cite{BestuzhevaEtal2021OO,doornmalenhojny2022cyclicsymmetries}.
Since a variable domain reduction in the binary setting corresponds to
fixing a variable, we refer to this algorithm as the \emph{lexicographic
fixing algorithm}, or LexFix\xspace in short.
Using our running example NDB, we illustrate the idea of LexFix\xspace for the
permutation~$\gamma$ that exchanges column~$2$ and~$3$ and fixes all
remaining columns.
Since the lexicographic order depends on a specific variable ordering, we
assume that the variables of the~$\vartheta$-matrix are sorted row-wise.
That is, $x = (\vartheta_{1,1}, \dots, \vartheta_{1, 5};
\dots; \vartheta_{3,1}, \dots, \vartheta_{3, 5})$.
We omit $\eta$ from the vector, since the orbit of $\eta$
is trivial with respect to $\Gamma$.
After removing the points fixed by~$\gamma$ from the solution
vector, enforcing~$x \succeq \gamma(x)$ corresponds to
$
(\vartheta_{1,2}, \vartheta_{1,3},
\vartheta_{2,2}, \vartheta_{2,3},
\vartheta_{3,2}, \vartheta_{3,3})
\succeq
(\vartheta_{1,3}, \vartheta_{1,2},
\vartheta_{2,3}, \vartheta_{2,2},
\vartheta_{3,3}, \vartheta_{3,2})
$,
which in turn corresponds to
$
(\vartheta_{1,2},
\vartheta_{2,2},
\vartheta_{3,2})
\succeq
(\vartheta_{1,3},
\vartheta_{2,3},
\vartheta_{3,3})
$.
Complete propagation of that constraint for the running example
is shown in Figure~\ref{fig:branching:lexfix}.
For instance, in the leftmost node, $\vartheta_{1,3}$ can be fixed to~0,
because any solution with~$\vartheta_{1,3} = 1$ and satisfying the
remaining local variable domains violates the lexicographic order
constraint as~$(\vartheta_{1,2},\vartheta_{2,2},\vartheta_{3,2}) =
(0,\vartheta_{2,2},\vartheta_{3,2}) \nsucceq (1,0,\vartheta_{3,3}) =
(\vartheta_{1,3},\vartheta_{2,3},\vartheta_{3,3})$.
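The fixing in the leftmost node can be reproduced by brute-force complete propagation of this reduced constraint. The sketch below assumes the local domains induced by the branching decisions $\vartheta_{2,3} = \vartheta_{1,2} = 0$, with all remaining entries of columns~2 and~3 free:

```python
from itertools import product

# Column-2 and column-3 variables in row-wise order:
# (v12, v22, v32, v13, v23, v33); branching fixed v12 = v23 = 0.
domains = [{0}, {0, 1}, {0, 1}, {0, 1}, {0}, {0, 1}]

def shc(v):
    """The reduced SHC: column 2 lexicographically >= column 3."""
    v12, v22, v32, v13, v23, v33 = v
    return (v12, v22, v32) >= (v13, v23, v33)

survivors = [v for v in product(*domains) if shc(v)]
theta13_domain = {v[3] for v in survivors}   # projection onto v13
```

The projection is $\{0\}$, i.e., exactly the LexFix\xspace fixing $\vartheta_{1,3} = 0$ described above.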
\begin{figure}
\caption{Branch-and-bound tree for the NDB problem. Fixings by LexFix\xspace are drawn red.}
\label{fig:branching:lexfix}
\end{figure}
\subsubsection{Orbitopal fixing}
\label{sec:orbitopalfixing}
Note that LexFix\xspace neglects the entire group structure and thus might not
find variable domain reductions that are based on the interplay of
different symmetries.
Since propagation for~$\ensuremath{\mathcal{X}}\xspace$ is \ensuremath{\mathrm{coNP}}-hard, special cases of
groups have been investigated that appear very frequently in
practice.
One of these groups corresponds to the symmetries present in NDB, i.e., the
symmetry group~$\Gamma$ acts on $p \times q$ matrices of binary variables
by exchanging their columns.
We refer to such matrix symmetries as \emph{orbitopal} symmetries.
Besides in NDB, orbitopal symmetries arise in many further applications
such as graph coloring or unit commitment
problems~\cite{BendottiEtAl2021,KaibelEtAl2011,margot2007symmetric}.
For the variable ordering discussed in Section~\ref{sec:lexfix}
and~$\Gamma$ being a group of orbitopal symmetries, one can show that
enforcing~$x \succeq \gamma(x)$ for all~$\gamma \in \Gamma$ is equivalent
to sorting the columns of the variable matrix in lexicographically
non-increasing order.
Bendotti et al.~\cite{BendottiEtAl2021} present a propagation
algorithm for such symmetries, so-called \emph{orbitopal fixing}.
Kaibel et~al.\@\xspace~\cite{KaibelEtAl2011}
discuss a propagation algorithm for the case that each row of the variable
matrix has at most one~$1$-entry.
Both algorithms are complete and run in linear time.
Moreover, Kaibel and Pfetsch~\cite{KaibelPfetsch2008} derive a facet description of all
binary matrices with lexicographically sorted columns and at most (or
exactly) one~1-entry per row.
That is, the SHC\xspace~$\ensuremath{\mathcal{X}}\xspace$ can be replaced by the facet description in
this case.
Given initial variable domains $\ensuremath{\mathcal{D}} \subseteq \ensuremath{\{0, 1\}}^{p \times
q}$, the algorithm of Bendotti et~al.\@\xspace finds the tightest variable domains
as follows.
First, the lexicographically minimal and maximal matrices in $\ensuremath{\mathcal{X}}\xspace
\cap \ensuremath{\mathcal{D}}$ are computed.
Then, for each column in the variable matrix, the associated
variables can be fixed to the value of the lexicographically extreme matrices
up to the first row where these extremal matrices differ.
If the columns of the extremal matrices are identical,
the whole column can be fixed.
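The effect of orbitopal fixing, though not its linear-time computation via extremal matrices, can be replicated by exhaustive complete propagation over the binary matrices with lexicographically non-increasing columns. The following brute-force sketch is exponential and serves illustration only:

```python
from itertools import product

def orbitope_propagate(domains):
    """Complete propagation for binary matrices with lexicographically
    non-increasing columns (brute force; the algorithm of Bendotti et al.
    finds the same fixings in linear time).

    domains -- p x q matrix of candidate sets {0}, {1}, or {0, 1}."""
    p, q = len(domains), len(domains[0])
    flat = [d for row in domains for d in row]     # row-wise variable order

    def sorted_cols(v):
        cols = [tuple(v[i * q + j] for i in range(p)) for j in range(q)]
        return all(cols[j] >= cols[j + 1] for j in range(q - 1))

    survivors = [v for v in product(*flat) if sorted_cols(v)]
    # Minimal domain of each entry: projection of the survivors.
    return [[{v[i * q + j] for v in survivors} for j in range(q)]
            for i in range(p)]
```

For the node with $\vartheta_{2,3} = \vartheta_{1,2} = 0$, the sketch fixes $\vartheta_{1,3}$, $\vartheta_{1,4}$, $\vartheta_{1,5}$, $\vartheta_{2,4}$, and $\vartheta_{2,5}$ to~$0$, matching the fixings derivable from the extremal matrices.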
For the running example, Figure~\ref{fig:branching:orbitopalfixing}
presents the branch-and-bound tree with variable fixings
by orbitopal fixing.
For instance, if~$(\vartheta_{2,3}, \vartheta_{1,2}) \gets (0, 0)$,
the lexicographically minimal and maximal matrices are
{\footnotesize $
\left[\begin{array}{*5{@{}wc{2mm}@{}}}
0&\underline 0&0&0&0\\
0&0&\underline 0&0&0\\
0&0&0&0&0
\end{array}\right]
$}
and
{\footnotesize $
\left[\begin{array}{*5{@{}wc{2mm}@{}}}
1&\underline 0&0&0&0\\
1&1&\underline 0&0&0\\
1&1&1&1&1
\end{array}\right]
$}, with the branching decisions underlined,
respectively.
Applying orbitopal fixing then leads to the leftmost matrix in
Figure~\ref{fig:branching:orbitopalfixing}.
\begin{figure}
\caption{Branch-and-bound tree for the NDB problem.
Fixings by orbitopal fixing are drawn red.}
\label{fig:branching:orbitopalfixing}
\end{figure}
Note that orbitopal fixing does not find any variable domain reduction
after the first branching decision in
Figure~\ref{fig:branching:orbitopalfixing}.
The reason is that branching occurred for a variable in the second row.
To still be able to benefit from some symmetry reductions in this case,
Bendotti et~al.\@\xspace~\cite{BendottiEtAl2021} also discuss a variant of orbitopal
fixing that adapts the order of rows based on the branching decisions.
They empirically show that this adapted algorithm performs better than the
original algorithm for the unit commitment problem.
We discuss the adapted variant in more detail and with more flexibility in
terms of our new framework in Section~\ref{sec:framework} in
Example~\ref{ex:orbitopalfixing}.
\subsection{Symmetry reductions based on branching tree structure}
\label{sec:overviewtree}
Recall that the SHC\xspaces discussed in the previous section restrict the
feasible region of an optimization problem.
That is, already before solving the optimization problem, it is determined
which symmetric solutions are discarded.
A second family of symmetry handling techniques uses a more dynamic
approach, which prevents the creation of symmetric copies of subproblems
already present in the branch-and-bound tree.
The motivation is that symmetry reductions can be carried out earlier than in the static setting described in the previous section.
Throughout this section, let~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$ be a
branch-and-bound tree, where branching is applied on a single variable.
For a node $\beta \in \ensuremath{\mathcal V}$, let $B_0^\beta$ (resp.~$B_1^\beta$)
be the set of variable indices of a solution vector
that are fixed to~0 (resp.~1) by the branching decisions on
the rooted tree path to $\beta$.
Moreover, we assume~$\ensuremath{\mathrm\Phi} \subseteq \ensuremath{\{0, 1\}}^n$ as the techniques
that we describe next have mostly been discussed for binary problems.
\subsubsection{Isomorphism pruning}
\label{sec:isomorphismpruning}
In classical branch-and-bound approaches, a node can be pruned if the
corresponding subproblem is infeasible (pruning by infeasibility) or if the
subproblem cannot contain an improving solution (pruning by bound).
In the presence of symmetry, Margot~\cite{Margot2002,Margot2003} and
Ostrowski~\cite{ostrowski2009symmetry} discuss another pruning rule that
discards symmetric or isomorphic subproblems, so-called \emph{isomorphism
pruning}.
The least restrictive version of isomorphism pruning is due to
Ostrowski~\cite{ostrowski2009symmetry}.
Note that our phrasing of isomorphism pruning differs from the notation
in~\cite{ostrowski2009symmetry}; it is, however, better suited to the
unified framework that we discuss in Section~\ref{sec:framework}.
Let~$\beta \in \ensuremath{\mathcal V}$ be a branch-and-bound tree node at depth~$m$,
and suppose that every branching decision corresponds to a single variable fixing.
Let $i_k$ be the index of the branching variable at depth~$k \in [m]$ on
the rooted path to~$\beta$.
Then, $B_0^\beta \cup B_1^\beta = \{i_1, \dots, i_m\}$.
Let~$\pi_\beta \in \symmetricgroup{n}$ be any permutation
with $\pi_\beta(i_k) = k$ for $k \in [m]$ and let~$y \in \ensuremath{\{0, 1\}}^n$ such
that~$y_i = 1$ if and only if~$i \in B_1^\beta$.
For a vector~$x \in \mathds{R}^n$ and~$A \subseteq [n]$, we denote
by~$\restrict{x}{A}$ its restriction to the entries in~$A$.
\begin{theorem}[Isomorphism Pruning]
\label{thm:isoprune}
Let~$\beta \in \ensuremath{\mathcal V}$ be a node at depth~$m$.
Node~$\beta$ can be pruned if there exists~$\gamma \in \Gamma$ such that
$\restrict{\pi_\beta(y)}{[m]} \prec \restrict{\pi_\beta(\gamma(y))}{[m]}$.
\end{theorem}
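The pruning test of Theorem~\ref{thm:isoprune} translates directly into a naive procedure if the group is small enough to enumerate explicitly; a Python sketch (permutations again as 0-based image tuples):

```python
def prunable(branch_idx, branch_val, group):
    """Naive isomorphism-pruning test: prune the node if the restricted
    vector pi_beta(y) is lexicographically smaller than pi_beta(gamma(y))
    for some gamma in the group.

    branch_idx -- branching variable indices i_1, ..., i_m (0-based)
    branch_val -- corresponding 0/1 branching values
    group      -- permutations as image tuples, gamma(y)_{gamma(i)} = y_i
    """
    n = len(group[0])
    y = [0] * n
    for i, v in zip(branch_idx, branch_val):
        y[i] = v

    def pi(vec):                      # pi_beta maps i_k to position k
        return tuple(vec[i] for i in branch_idx)

    for g in group:
        gy = [0] * n
        for i, j in enumerate(g):
            gy[j] = y[i]
        if pi(y) < pi(gy):
            return True
    return False
```

For the NDB node with $(\vartheta_{2,3},\vartheta_{1,2},\vartheta_{1,3}) = (0,0,1)$ and the column swap~$\gamma$, the test returns \texttt{True}, reproducing the pruning discussed for the running example.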
Testing if a vector is lexicographically maximal in its orbit
is a \ensuremath{\mathrm{coNP}}-complete problem~\cite{babai1983canonical}.
As such, deciding if $\beta$ can be pruned by isomorphism
is an \ensuremath{\mathrm{NP}}-complete problem.
\begin{remark}
Margot~\cite{margot2007symmetric} also describes a variant of isomorphism pruning that can be used to
handle symmetries of general integer variables.
Margot's variant assumes a specific branching rule.
We do not describe it in more detail as our framework can also
handle general integer variables while not relying on any assumptions on the
branching rule such as Ostrowski's version for binary variables.
\end{remark}
Figure \ref{fig:branching:isomorphismpruning} shows the branch-and-bound
tree after applying isomorphism pruning.
The only node~$\beta$ that can be pruned is where
$(\vartheta_{2,3},\vartheta_{1,2},\vartheta_{1,3}) = (0, 0, 1)$.
This is due to the symmetry $\gamma$ swapping column~$2$ and~$3$.
For this node,
$\pi_\beta(x) = (\vartheta_{2,3},\vartheta_{1,2},\vartheta_{1,3})$
and
$(\pi_\beta \circ \gamma)(x) =
(\vartheta_{2,2},\vartheta_{1,3},\vartheta_{1,2})$,
and as such
$
\pi_\beta(y)
= (0, 0, 1, \dots) \prec (0, 1, 0, \dots)
= \pi_\beta(\gamma(y))
$.
\begin{figure}
\caption{Branch-and-bound tree for the NDB problem with pruned nodes by
isomorphism pruning.}
\label{fig:branching:isomorphismpruning}
\end{figure}
Note that isomorphism pruning is a pruning method, i.e., it does not find
variable domain reductions by itself.
It can, however, be enhanced by fixing rules that find additional variable
fixings early on in the branch-and-bound tree, as we discuss next.
\subsubsection{Orbital fixing}
\label{sec:orbitalfixing}
Orbital fixing (OF\xspace) refers to a family of variable domain
reductions (VDR\xspaces), whose common ground is to fix variables within orbits of
already fixed variables.
The exact definition of OF\xspace differs between different
authors~\cite{Margot2003,OstrowskiEtAl2011}.
The main difference is whether fixings found by branching decisions
are distinguished from fixings found by orbital fixing.
We describe the variant~\cite[Theorem~3]{OstrowskiEtAl2011}, which is
compatible with isomorphism pruning.
Let~$\beta \in \ensuremath{\mathcal V}$.
The group consisting of all permutations that stabilize the~1-branchings
up to node~$\beta$ is denoted by~$\Delta^\beta \ensuremath{\coloneqq} \stab{\Gamma}{B_1^\beta}
\ensuremath{\coloneqq} \{ \gamma \in \Gamma : \gamma(B_1^\beta) = B_1^\beta \}$.
\begin{theorem}[Orbital Fixing]
Let~$\beta \in \ensuremath{\mathcal V}$.
If~$i \in B_0^\beta$, all variables in the $\Delta^\beta$-orbit
of~$i$ can be fixed to~0.
\end{theorem}
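The rule admits a short sketch if all elements of~$\Gamma$ (not merely generators) are listed explicitly, so that orbits can be read off directly; as before, this enumeration-based version serves illustration only:

```python
def orbital_fixing_zeros(B0, B1, group):
    """Variables fixable to 0 by orbital fixing at a node with
    0-branchings B0 and 1-branchings B1.

    group -- all elements of the symmetry group as image tuples
             (g[i] is the image of i under the permutation)."""
    # The stabilizer Delta^beta: permutations mapping B1 onto B1 setwise.
    delta = [g for g in group if {g[i] for i in B1} == set(B1)]
    fixable = set()
    for i in B0:
        fixable |= {g[i] for g in delta}   # the Delta^beta-orbit of i
    return fixable
```

In particular, if $B_1^\beta = \emptyset$, the stabilizer is all of~$\Gamma$ and the whole $\Gamma$-orbit of each 0-branched variable is fixed.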
Figure~\ref{fig:branching:orbitalfixing} shows the branch-and-bound
tree for applying this orbital fixing rule to the running example.
Note that, if up to node $\beta$ no variables are branched to one,
$\Delta^\beta$ corresponds to the symmetry group $\Gamma$.
This means that for zero-branchings its whole orbit of $\Gamma$
(the corresponding row in $\vartheta$)
can be fixed to zero.
\begin{figure}
\caption{Branch-and-bound tree for the NDB problem with fixings by
orbital fixing.}
\label{fig:branching:orbitalfixing}
\end{figure}
Since~\cite{OstrowskiEtAl2011} does not distinguish variables fixed
to~1 by branching or other decisions, $\Delta^\beta$ can be replaced by all
permutations that stabilize the variables that are fixed to~1 (as opposed
to just those branched to~1).
Note that neither definition of~$\Delta^\beta$ contains the other, i.e.,
neither version of OF\xspace dominates the other in terms of the
number of fixings that can be found.
Another variant of OF\xspace that also finds $1$-fixings is
presented in~\cite{ostrowski2009symmetry}, see
also~\cite{pfetsch2019computational}.
\subsection{Further symmetry handling methods}
Liberti and Ostrowski~\cite{LibertiOstrowski2014} as well as
Salvagnin~\cite{Salvagnin2018} present symmetry handling inequalities that
can be derived from the Schreier-Sims table of a group.
Further symmetry handling inequalities are described by Liberti~\cite{Liberti2012a}.
In contrast to the constraints from Section~\ref{sec:shcbin}, they are also able to
handle symmetries of non-binary variables; their symmetry handling effect
is limited though.
Another class of inequalities, so-called orbital conflict inequalities, have
been proposed in~\cite{LinderothEtAl2021}.
Moreover, symmetry handling inequalities for specific problem classes are
discussed, among others,
by~\cite{GhoniemSherali2011,Hojny2020,KaibelPfetsch2008,MendezDiazZabala2001,sherali2001models}.
Besides the propagation approaches discussed above, there also exist
tailored complete algorithms that can handle special cyclic
groups~\cite{doornmalenhojny2022cyclicsymmetries}.
Moreover, Ostrowski~\cite{ostrowski2009symmetry} presents smallest-image
fixing, a propagation algorithm for binary variables.
Instead of exploiting symmetries in a propagation framework, symmetries can
also be handled by tailored branching
rules~\cite{OstrowskiEtAl2011,OstrowskiAnjosVannelli2015}.
Furthermore, orbital shrinking~\cite{FischettiLiberti2012} is a method that
handles symmetries by aggregating variables contained in a common orbit,
which results in a relaxation of the problem.
Finally, core points~\cite{Bodi2013,Herr2013,HerrRehnSchuermann2013} can be
used to restrict the feasible region of problems to a subset of solutions.
This latter approach does not coincide with lexicographically maximal
representatives.
\section{Unified framework for symmetry handling}
\label{sec:framework}
As the literature review shows, different symmetry handling methods use
different paradigms to derive symmetry reductions.
For instance, SHC\xspaces remove symmetric solutions from the initial problem
formulation, whereas methods such as orbital fixing remove symmetric
solutions based on the branching history.
At first glance, these methods thus are not necessarily compatible.
To overcome this seeming incompatibility, we present a unified framework
for symmetry handling that makes it easy to check whether different
symmetry handling methods are compatible.
It turns out that, via our framework, isomorphism pruning and OF\xspace
can be made compatible with a variant of LexFix\xspace.
Moreover, in contrast to many symmetry handling methods discussed in the
literature, our framework also applies to non-binary problems and is not
restricted to permutation symmetries.
Before we present our framework in Section~\ref{sec:theframework}, it will
be useful to first provide an interpretation of isomorphism pruning through
the lens of symmetry handling constraints.
\subsection{Isomorphism pruning and orbital fixing revisited}
\label{sec:isopruneRevisited}
Let~$\beta \in \ensuremath{\mathcal V}$ be a node at depth~$m$ and let~$y$ be the
incidence vector of 1-branching decisions as described in
Section~\ref{sec:isomorphismpruning}.
Due to Theorem~\ref{thm:isoprune}, node~$\beta$ can be pruned by
isomorphism if solution vector~$y$
violates~$\restrict{\pi_\beta(x)}{[m]} \succeq
\restrict{\pi_\beta(\gamma(x))}{[m]}$ for some~$\gamma \in \Gamma$.
The latter condition looks very similar to classical SHC\xspaces, however, there
are some differences:
the variable order is changed via~$\pi_\beta$, not all variables are
present in this constraint due to the restriction, and most importantly,
every node has a potentially different reordering and restriction.
Nevertheless, these modified SHC\xspaces can be used to remove all symmetries
from a binary problem in the sense that, for every solution~$x$ of a binary
problem, there exists exactly one node of~$\ensuremath{\mathcal B}$ at depth~$n$ that
contains a symmetric counterpart of~$x$,
see~\cite[Thm.~4.5]{ostrowski2009symmetry}.
Based on the modified SHC\xspaces, it is easy to show that isomorphism pruning
and orbital fixing are compatible with each other, provided both methods
are compatible with the modified SHC\xspaces.
\begin{lemma}
Let~$\beta \in \ensuremath{\mathcal V}$ be a node at depth~$m$.
If~$\beta$ gets pruned by isomorphism, there is no~$x \in \ensuremath{\{0, 1\}}^n$
that is feasible for the subproblem at~$\beta$ and that satisfies
$\restrict{\pi_\beta(x)}{[m]} \succeq
\restrict{\pi_\beta(\gamma(x))}{[m]}$ for all~$\gamma \in \Gamma$.
\end{lemma}
\begin{proof}
As in Section~\ref{sec:isomorphismpruning}, let~$y \in \ensuremath{\{0, 1\}}^n$ be such
that~$y_i = 1$ if and only if~$i \in B_1^\beta$.
If IsoPr\xspace prunes~$\beta$, there is~$\gamma\in\Gamma$
with~$\restrict{\pi_\beta(y)}{[m]} \prec \restrict{\pi_\beta(\gamma(y))}{[m]}$.
As the first $m$ entries of $\pi_\beta(y)$ are branching variables
and the remaining entries are~0, we find~$\restrict{\pi_\beta(\gamma(x))}{[m]}
\geq \restrict{\pi_\beta(\gamma(y))}{[m]}$
(componentwise) for each~$x \in \ensuremath{\mathrm\Phi}(\beta)$.
Thus, $\restrict{\pi_\beta(x)}{[m]} = \restrict{\pi_\beta(y)}{[m]}
\prec \restrict{\pi_\beta(\gamma(y))}{[m]}
\leq \restrict{\pi_\beta(\gamma(x))}{[m]}$,
which means every solution in~$\ensuremath{\mathrm\Phi}(\beta)$
violates~$\restrict{\pi_\beta(x)}{[m]} \succeq
\restrict{\pi_\beta(\gamma(x))}{[m]}$.
\end{proof}
\begin{lemma}
Let~$\beta \in \ensuremath{\mathcal V}$ be a node at depth~$m$.
Every fixing found by~OF\xspace at node~$\beta$ is implied by
$\restrict{\pi_\beta(x)}{[m]} \succeq
\restrict{\pi_\beta(\gamma(x))}{[m]}$ for all~$\gamma \in \Gamma$.
\end{lemma}
\begin{proof}
Assume OF\xspace is not compatible with~$\restrict{\pi_\beta(x)}{[m]} \succeq
\restrict{\pi_\beta(\gamma(x))}{[m]}$, $\gamma \in \Gamma$.
Then, there exists a node~$\beta \in \ensuremath{\mathcal V}$, a solution~$\bar{x} \in
\ensuremath{\mathrm\Phi}(\beta)$ that satisfies~$\restrict{\pi_\beta(\bar{x})}{[m]} \succeq
\restrict{\pi_\beta(\gamma(\bar{x}))}{[m]}$ for all $\gamma \in \Gamma$, and
an index~$j \in [m]$
with $i_j \in B_0^\beta$
such that~$\bar{x}_{\ell} = 1$ for some~$\ell$ in
the~$\Delta^\beta$-orbit of~$i_j$.
Suppose~$j$ is minimal.
Since~$\ell$ is contained in the~$\Delta^\beta$-orbit of~$i_j$, there
exists~$\gamma \in \Delta^\beta$ with~$\gamma(\ell) = i_j$.
By definition of~$\Delta^\beta$,
for all~$k \in B_1^\beta$, $\gamma(k) \in B^\beta_1$.
Moreover, $\pi_\beta(\gamma(\bar x))_k = \bar x_{\gamma^{-1}(i_k)} = 0$
for all $k \in [j-1]$ with $i_k \in B_0^\beta$,
because~$j$ is selected minimally.
Consequently, since $B_0^\beta \cup B_1^\beta = [m]$ holds,
$\pi_\beta(\bar x)$ and~$\pi_\beta(\gamma(\bar x))$
coincide on the first~$j-1$
entries, and~$1 = \bar x_\ell = \bar x_{\gamma^{-1}(i_j)} =
\pi_\beta(\gamma(\bar x))_j
> \pi_\beta(\bar x)_j = \bar x_{i_j} = 0$.
That is, $\restrict{\pi_\beta(\bar x)}{[m]}
\prec \restrict{\pi_\beta(\gamma(\bar x))}{[m]}$,
contradicting that~$\bar{x}$ satisfies all SHC\xspaces.
OF\xspace is thus compatible with the SHC\xspaces.
\end{proof}
Isomorphism pruning and orbital fixing are thus compatible.
While isomorphism pruning can become active as soon as one can show that no
lexicographically maximal solution w.r.t.\ the modified SHC\xspaces is feasible
at a node~$\beta$, orbital fixing might not be able to find all
symmetry-related variable reductions.
\begin{example}\label{ex:OFweak}
Let~$\Gamma \leq \symmetricgroup{4}$ be generated by a cyclic
right shift, i.e., the non-trivial permutations in~$\Gamma$ are~$\gamma_1 =
(1,2,3,4)$, $\gamma_2 = (1,3)(2,4)$, and~$\gamma_3 = (1,4,3,2)$.
Consider the branch-and-bound tree in Figure~\ref{fig:exBB}.
At node~$\beta_0$, no reductions can be found by OF\xspace as
no proper shift fixes the $1$-branching variable~$x_3$.
For~$\gamma_1$, the SHC~$\restrict{\pi_{\beta_0}(x)}{[2]} \succeq
\restrict{\pi_{\beta_0}(\gamma(x))}{[2]}$ reduces to
$(x_3, x_4) \succeq (x_2, x_3)$.
Due to the variable bounds at~$\beta_0$, the constraint simplifies
to~$(1,0) \succeq (x_2,1)$.
This constraint is violated if $x_2$ has value~1,
so $x_2$ can be fixed to~0.
\end{example}
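The reduction of Example~\ref{ex:OFweak} can be verified by enumerating the local domains at~$\beta_0$ (with $x_3$ branched to~$1$ and $x_4$ branched to~$0$, as in the example):

```python
from itertools import product

# Local domains at beta_0: x3 = 1 and x4 = 0 by branching, x1 and x2 free.
domains = [{0, 1}, {0, 1}, {1}, {0}]

# SHC for gamma_1 restricted to the branching positions:
# (x3, x4) lexicographically >= (x2, x3).
survivors = [x for x in product(*domains) if (x[2], x[3]) >= (x[1], x[2])]
x2_domain = {x[1] for x in survivors}   # projection onto x2
```

The projection is $\{0\}$, i.e., the SHC fixes $x_2 = 0$ even though orbital fixing finds no reduction at this node.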
\begin{figure}
\caption{Branch-and-bound tree for Example~\ref{ex:OFweak}.}
\label{fig:exBB}
\end{figure}
Consequently, symmetry handling by isomorphism pruning and orbital fixing
can be improved by identifying further symmetry handling methods that are
compatible with the modified SHC\xspaces.
\subsection{The framework}
\label{sec:theframework}
In this section, we present our unified framework for symmetry handling
with the following goals:
It should
\begin{enumerate}[label={{(G\arabic*)}},ref={G\arabic*}]
\item\label{G1} allow to check whether different symmetry
handling methods are compatible.
In particular, it should ensure compatibility of LexFix\xspace, isomorphism
pruning, and OF\xspace.
\item\label{G2} generalize the modified SHC\xspaces by
Ostrowski~\cite{ostrowski2009symmetry}.
\item\label{G3} apply to general variable types and general symmetries (not
necessarily permutations).
\end{enumerate}
To achieve these goals, we define a more general class of
SHC\xspaces~$\sigma_\beta(x) \succeq \sigma_\beta(\gamma(x))$, where~$\gamma \in
\Gamma$ and~$\beta \in \ensuremath{\mathcal V}$, that are not necessarily based on
branching decisions.
Let~$\ensuremath{\mathrm\Phi} \subseteq \mathds{R}^n$ and let~$f\colon \ensuremath{\mathrm\Phi} \to \mathds{R}$ be
such that~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ can be solved by (spatial)
branch-and-bound.
Let~$\Gamma$ be a group of symmetries of~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$.
Let~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$ be a branch-and-bound tree and
let~$\beta \in \ensuremath{\mathcal V}$.
In our modified SHC\xspaces, the map~$\sigma_\beta\colon \mathds{R}^n \to \mathds{R}^{m_\beta}$
will be parameterized via a permutation~$\pi_\beta
\in \symmetricgroup{n}$, a symmetry~$\varphi_\beta \in \Gamma$, and an
integer~$m_\beta \in \{0, \dots, n\}$
as~$\sigma_\beta(\cdot)
\ensuremath{\coloneqq} \restrict{\left(
\pi_\beta \circ \varphi_\beta (\cdot)
\right)}{[m_\beta]}$.
As in Ostrowski's approach, $\pi_\beta$ selects a variable ordering,
and~$m_\beta$ restricts the SHC\xspaces to a subset of variables.
In contrast to~\cite{ostrowski2009symmetry}, however, $\pi_\beta$ does not
necessarily correspond to the branching order.
Moreover, $\varphi_\beta$ provides additional degrees of freedom, as it
allows changing the variable order imposed by~$\pi_\beta$.
We refer to the structure~$(m_\beta, \pi_\beta, \varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$
as a \emph{symmetry prehandling structure} for $\ensuremath{\mathcal B}$.
Note that this definition already achieves goal~\eqref{G2} by
setting~$\varphi_\beta = \ensuremath{\mathrm{id}}$, using the same~$\pi_\beta$ as in
Section~\ref{sec:isopruneRevisited}, and setting~$m_\beta$ to be the number
of different branching variables in node~$\beta$.
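Evaluating~$\sigma_\beta$ is inexpensive; a Python sketch with permutations encoded as 0-based image tuples (so that the restriction to~$[m_\beta]$ becomes taking the first~$m_\beta$ entries):

```python
def make_sigma(m, pi, phi):
    """Build sigma_beta(x) = (pi_beta composed with phi_beta)(x),
    restricted to the first m entries.

    pi, phi -- permutations as image tuples, acting on vectors via
               gamma(x)_{gamma(i)} = x_i."""
    n = len(pi)

    def sigma(x):
        y = [None] * n
        for i, j in enumerate(phi):     # first apply phi_beta
            y[j] = x[i]
        z = [None] * n
        for i, j in enumerate(pi):      # then pi_beta
            z[j] = y[i]
        return tuple(z[:m])

    return sigma
```

Choosing $\varphi_\beta = \ensuremath{\mathrm{id}}$ and $\pi_\beta$ according to the branching order recovers the modified SHC\xspaces of Section~\ref{sec:isopruneRevisited}.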
\begin{theorem}
\label{thm:main}
Let~$\ensuremath{\mathrm\Phi} \subseteq \mathds{R}^n$ and let~$f\colon \ensuremath{\mathrm\Phi} \to \mathds{R}$ be
such that~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ can be solved by (spatial)
branch-and-bound.
Let~$\Gamma$ be a finite group of symmetries of~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$.
Suppose that the branch-and-bound method used for
solving~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ generates a finite full
B\&{}B\xspace-tree~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$.
For each node~$\beta \in \ensuremath{\mathcal V}$,
let~$(m_\beta,\pi_\beta,\varphi_\beta) \in [n]_0 \times \symmetricgroup{n}
\times \Gamma$.
Let~$\sigma_\beta(\cdot) = \restrict{(\pi_\beta \circ
\varphi_\beta(\cdot))}{[m_\beta]}$.
Suppose that we enforce, for every~$\beta \in \ensuremath{\mathcal V}$,
\begin{equation}
\label{eq:main}
\sigma_\beta(x) \succeq \sigma_\beta (\gamma(x))\
\text{for all}\
\gamma \in \Gamma.
\end{equation}
If~$(m_\beta,\pi_\beta,\varphi_\beta)$
satisfies so-called \emph{correctness conditions}~
\eqreffromto{cond:ffunc}{cond:permutationcondition}
for all nodes~$\beta \in \ensuremath{\mathcal V}$:
\begin{enumerate}[
label={{(C\arabic*)}},
ref={C\arabic*}, itemsep={.5em}
]
\item
\label{cond:ffunc}
If $\beta$ has a parent $\alpha \in \ensuremath{\mathcal V}$,
then $m_\beta \geq m_\alpha$
and $\pi_\alpha(x)_i = \pi_\beta(x)_i$ holds
for all $i \leq m_\alpha$ and $x \in \mathds{R}^n$;
\item
\label{cond:symmetricallypermute}
If $\beta$ has a parent $\alpha \in \ensuremath{\mathcal V}$,
then $\varphi_\beta =\varphi_\alpha \circ \psi_\alpha$
for some
\[
\psi_\alpha \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\alpha)}
\ensuremath{\coloneqq} \{ \gamma \in \Gamma :
\ensuremath{\mathrm\Phi}(\alpha) = \gamma(\ensuremath{\mathrm\Phi}(\alpha)) \};
\]
\item
\label{cond:sibl}
If $\beta$ has a sibling $\beta' \in \ensuremath{\mathcal V}$,
then $m_\beta = m_{\beta'}$, $\pi_\beta = \pi_{\beta'}$,
and $\varphi_\beta = \varphi_{\beta'}$, i.e., $\psi_\alpha$ in \eqref{cond:symmetricallypermute} does not depend on $\beta$;
\item
\label{cond:permutationcondition}
If $\beta$ has a feasible solution $x \in \ensuremath{\mathrm\Phi}(\beta)$,
then for all permutations $\xi \in \Gamma$
with $\sigma_\beta(x) = \sigma_\beta(\xi(x))$
the permuted solution $\xi(x)$ is also feasible in $\ensuremath{\mathrm\Phi}(\beta)$;
\end{enumerate}
then, for each~$\tilde{x} \in \ensuremath{\mathrm\Phi}$,
there is exactly one leaf~$\nu$ of the B\&{}B\xspace-tree con\-taining a solution
symmetric to~$\tilde{x}$, i.e., for which there is~$\xi \in \Gamma$
with~$\xi(\tilde{x}) \in \ensuremath{\mathrm\Phi}(\nu)$.
\end{theorem}
Before we apply and prove this theorem, we interpret the correctness
conditions and provide some implications and consequences.
We start with the latter.
\begin{itemize}
\item Enforcing~\eqref{eq:main} handles
symmetries by excluding feasible solutions from the search space while
guaranteeing that exactly one representative solution per class of
symmetric solutions remains feasible (recall that~$\ensuremath{\mathcal B}$ does not
prune nodes by bound).
Note that by enforcing~\eqref{eq:main}, symmetry reductions can only take
place on variables ``seen'' by~$\sigma_\beta(x) \succeq
\sigma_\beta(\gamma(x))$ for some~$\gamma\in\Gamma$.
We stress that it is not immediate how~\eqref{eq:main} can be enforced
efficiently.
We will turn to this question in Section~\ref{sec:cast}.
\item If we prune nodes by bound, \eqref{eq:main} still can be used to
handle symmetries.
However, not every~$x \in \ensuremath{\mathrm\Phi}$ necessarily has
a symmetric counterpart that is feasible at some leaf (e.g., if~$x$ is
suboptimal).
\item If not all constraints of type~\eqref{eq:main} are completely
enforced, we still find valid symmetry reductions, but not necessarily
exactly one representative solution.
\item If different symmetry handling methods can be expressed in terms
of~\eqref{eq:main} having the same choice of the symmetry prehandling
structure~$(m_\beta, \pi_\beta, \varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$,
then both symmetry handling methods can be applied at the same time,
i.e., they are \emph{compatible}.
\item In practice, \bb is enhanced by cutting planes or domain
propagation such as reduced cost fixing.
Both also work in our framework if their reductions are
\emph{symmetry compatible}, i.e., if, for~$\beta \in \ensuremath{\mathcal V}$, the
domain of a variable~$x_i$ is reduced, the same reduction can be applied
to all symmetric variables w.r.t.\ symmetries at~$\beta$.
Margot~\cite[Section~4]{Margot2003} discusses this in detail for
IsoPr\xspace. He refers to this as \emph{strict setting
algorithms}.
\item For spatial branch-and-bound, the children of a node~$\alpha$ do not
necessarily partition~$\ensuremath{\mathrm\Phi}(\alpha)$ (the feasible regions of
children can overlap on their boundary).
In this case, \eqref{eq:main} can still be used to handle symmetries, but
there might exist several leaves containing a symmetrically equivalent
solution.
\end{itemize}
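The first consequence above can be made concrete with a small illustrative script (our own, not part of the formal development): for the static SHC\xspaces~$x \succeq \gamma(x)$ for all~$\gamma \in \Gamma$, exactly the lexicographically maximal point of each orbit survives. The names below are ours; the group action follows the convention~$\gamma(x)_i = x_{\gamma^{-1}(i)}$.

```python
from itertools import permutations

def perm_apply(perm, x):
    """Group action gamma(x)_i = x_{gamma^{-1}(i)}: entry x_i moves to position perm[i]."""
    y = [None] * len(x)
    for i, xi in enumerate(x):
        y[perm[i]] = xi
    return tuple(y)

def satisfies_shc(x, group):
    """True iff x is lexicographically at least gamma(x) for every gamma in the group."""
    return all(tuple(x) >= perm_apply(g, x) for g in group)

# Full symmetric group on three coordinates: only the descending-sorted
# vector of each orbit satisfies all SHCs, i.e., one representative remains.
S3 = list(permutations(range(3)))
points = [(0, 1, 2), (2, 1, 0), (1, 2, 0)]
survivors = [p for p in points if satisfies_shc(p, S3)]
```

Here the three sample points lie in one orbit, and only its lexicographically maximal member remains feasible.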
\begin{remark}
\label{remark:improper}
As propagating SHC\xspaces cuts off feasible solutions,
such propagations are not symmetry-compatible.
Therefore, we consider SHC\xspace reductions in our framework as special
branching decisions, called \emph{improper}:
For a SHC\xspace reduction $C \subseteq \mathds{R}^n$
at node~$\beta \in \ensuremath{\mathcal V}$,
two children $\omega, \omega'$ are introduced
with $\ensuremath{\mathrm\Phi}(\omega) = \ensuremath{\mathrm\Phi}(\beta) \cap C$
and~
$\ensuremath{\mathrm\Phi}(\omega') = \ensuremath{\mathrm\Phi}(\beta) \setminus \ensuremath{\mathrm\Phi}(\omega)$.
Node $\omega'$ can then be pruned by symmetry.
Complementing this, traditional (standard) branching decisions are called
\emph{proper}.
\end{remark}
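The improper branching of Remark~\ref{remark:improper} can be sketched as follows (an illustrative Python fragment of our own, modeling feasible regions as membership predicates):

```python
def improper_branch(feasible, cut):
    """Improper branching for an SHC reduction C (sketch): child omega keeps
    the cut C, sibling omega' receives the set difference and can then be
    pruned by symmetry."""
    omega = lambda x: feasible(x) and cut(x)
    omega_prime = lambda x: feasible(x) and not cut(x)
    return omega, omega_prime
```

Together, the two children partition the parent's feasible region, so the framework's partitioning assumption is preserved.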
\paragraph{Interpretation}
Theorem~\ref{thm:main} iteratively builds
SHC\xspaces~$\sigma_\beta(x) \succeq \sigma_\beta(\gamma(x))$ that do not
necessarily build upon a common lexicographic order for different
nodes~$\beta \in \ensuremath{\mathcal V}$.
The map~${\sigma_\beta(\cdot) = \restrict{(\pi_\beta \circ
\varphi_\beta(\cdot))}{[m_\beta]}}$
accepts an~$n$-dimensional vector,
considers a symmetrically equivalent representative solution hereof
($\varphi_\beta$),
reorders its entries ($\pi_\beta$), and afterwards restricts
them to the first~$m_\beta$ coordinates.
This way, $\sigma_\beta$ selects~$m_\beta$ expressions (and their images)
that
appear in the SHC\xspaces~\eqref{eq:main}.
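The map~$\sigma_\beta$ can be sketched directly (an illustrative Python fragment of our own; permutations are lists with \texttt{perm[i]} the image of~$i$, acting by the convention~$\gamma(x)_i = x_{\gamma^{-1}(i)}$):

```python
def apply_perm(perm, x):
    """Permutation action on vectors: entry x_i is moved to position perm[i],
    i.e., (perm(x))_i = x_{perm^{-1}(i)}."""
    y = [None] * len(x)
    for i, xi in enumerate(x):
        y[perm[i]] = xi
    return y

def sigma(x, m, pi, phi):
    """sigma_beta(x) = ((pi o phi)(x)) restricted to the first m coordinates."""
    return apply_perm(pi, apply_perm(phi, x))[:m]
```

Composing the two applications realizes~$\pi_\beta \circ \varphi_\beta$ as a function on indices before the restriction to~$[m_\beta]$.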
To ensure that consistent SHC\xspaces are derived, sufficient information
needs to be inherited to a node's children in the B\&{}B\xspace-tree, which is
achieved as follows.
For the ease of explanation, let us first assume~$\varphi_\beta$ is the
identity~$\ensuremath{\mathrm{id}}$.
Then, \eqref{cond:ffunc} guarantees that a child has no less
information than its parent.
Moreover, siblings must not be too different, i.e., new information at one
child also needs to be known to its siblings~\eqref{cond:sibl}.
\eqref{cond:permutationcondition} ensures that if two
solutions~$x$ and~$\xi(x)$ appear identical for the SHC\xspaces in
the sense~$\sigma_\beta(x) = \sigma_\beta(\xi(x))$, feasibility of~$x$
should imply feasibility of~$\xi(x)$.
In other words, if $x$ and $\xi(x)$
are identical with respect to $\sigma_\beta$,
it may not be that one solution is feasible at $\beta$
while the other solution is not.
Conditions~\eqref{cond:ffunc},
\eqref{cond:sibl}, and~\eqref{cond:permutationcondition}
describe how $\sigma_\beta(x)$ ``grows'' as nodes $\beta$ follow a rooted path,
and that siblings are handled in the same way.
If $\varphi_\beta = \ensuremath{\mathrm{id}}$,
for a node $\beta$ with ancestor $\mu$
all variables and expressions of $\sigma_\mu(x)$
also occur in the first $m_\mu$ elements of $\sigma_\beta(x)$.
Condition~\eqref{cond:symmetricallypermute} allows for more
flexibility in this.
Let $\alpha$ be the parent of $\beta$.
If there is a symmetry $\gamma \in \Gamma$
that leaves the feasible region of $\alpha$ invariant
(i.e., $\ensuremath{\mathrm\Phi}(\alpha) = \gamma(\ensuremath{\mathrm\Phi}(\alpha))$),
one can choose to handle the symmetries considering the
symmetrically equivalent solution space
as of node $\beta$.
This degree of freedom might help a solver to find more symmetry reductions
in comparison to just ``growing'' the considered representatives.
For example, in Figure~\ref{fig:branching:orbitopalfixing}
at node $\alpha$
with~$(\vartheta_{2,3},\vartheta_{1,2},\vartheta_{1,3}) \gets (0,1,0)$
the feasible region $\ensuremath{\mathrm\Phi}(\alpha)$
is identical when permuting the first two columns
or the last three columns.
Suppose that one branches next on variable $\vartheta_{3,3}$,
then the zero-branch will find two reductions
(namely $\vartheta_{3,4},\vartheta_{3,5} \gets 0$)
and the one-branch will find no reductions.
If the solver prefers to balance the number of reductions found across the
siblings,
one could exchange columns 3 and 4 for the sake of symmetry handling.
Effectively, this moves the branching variable
to the fourth column. Applying orbitopal fixing on the matrix where
these columns are exchanged leads to one fixing in either child.
\paragraph{Examples}
Let~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$ be a B\&{}B\xspace-tree, in which each
branching decision partitions the domain of exactly one variable.
We will show that there are many possible symmetry prehandling
structures~$(m_\beta, \pi_\beta, \varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$
that satisfy the correctness conditions of Theorem~\ref{thm:main}.
Hence, this gives many degrees of freedom to handle symmetries.
In the following, we discuss choices that resemble three symmetry handling
techniques: static SHC\xspaces, Ostrowski's branching variable ordering, and a
variant of orbitopal fixing that
is more flexible than the setting of Bendotti~et~al.\@\xspace.
\begin{example}[Static SHC\xspaces]
\label{ex:staticsetting}
The static SHC\xspaces~$x \succeq \gamma(x)$ for all~$\gamma \in \Gamma$ can be
derived in our framework by setting, for each~$\beta \in \ensuremath{\mathcal V}$, the
parameters~$m_\beta = n$, $\pi_\beta = \varphi_\beta = \psi_\beta = \ensuremath{\mathrm{id}}$.
\eqreffromto{cond:ffunc}{cond:sibl} are satisfied trivially.
As any~$x \in \mathds{R}^n$ satisfies
$\sigma_\beta(x) = \restrict{\pi_\beta \varphi_\beta(x)}{[n]} = x$,
we find $\sigma_\beta(x) = \sigma_\beta \gamma(x)$
if and only if~$x = \gamma(x)$. Hence, also
\eqref{cond:permutationcondition} holds.
\end{example}
Next, we mimic Ostrowski's rank for binary variables and generalize
it to arbitrary variable types.
In the latter case, only considering the branching order
is not sufficient as one might branch several times on the
same variable.
\begin{example}[Branching-based]
\label{ex:vardynamic}
Let~$\beta \in \ensuremath{\mathcal V}$.
If~$\beta$ is the root node, let~$m_\beta = 0$, i.e., $\sigma_\beta$ is
void.
Otherwise, let~$\alpha$ be the parent of~$\beta$.
If~$\beta$ arises from~$\alpha$ by a proper branching decision on
variable~$x_{\ensuremath{\hat\imath}}$ and~$\ensuremath{\hat\imath}$ has not been used for branching
before, i.e., $\ensuremath{\hat\imath} \notin (\pi_\alpha \varphi_\alpha)^{-1}([m_\alpha])$,
then set~$m_\beta = m_\alpha + 1$, $\varphi_\beta = \ensuremath{\mathrm{id}}$ and
select~$\pi_\beta \in \symmetricgroup{n}$ with~$\pi_\beta(i) =
\pi_\alpha(i)$ for $i \leq m_\alpha$ and $\pi_\beta(m_\beta) = \ensuremath{\hat\imath}$.
Otherwise, inherit the symmetry prehandling structure from~$\alpha$,
i.e.,
$\pi_\beta = \pi_\alpha$, $m_\beta = m_\alpha$, and $\varphi_\beta =
\ensuremath{\mathrm{id}}$.
\end{example}
\begin{proof}[Example~\ref{ex:vardynamic} satisfies
\eqreffromto{cond:ffunc}{cond:permutationcondition}]
\eqreffromto{cond:ffunc}{cond:sibl} hold trivially.
To show~\eqref{cond:permutationcondition},
let~$x \in \ensuremath{\mathrm\Phi}(\beta)$
and $\xi \in \Gamma$
such that~$\sigma_\beta(x) = \sigma_\beta(\xi(x))$.
By definition of~$(m_\beta,\pi_\beta,\varphi_\beta)$, $\sigma_\beta(x)$
restricts~$x$ onto all (resorted) variables used for branching up to
node~$\beta$.
To show~\eqref{cond:permutationcondition}, note that the
feasible region $\ensuremath{\mathrm\Phi}(\beta)$ is the intersection
of
\begin{enumerate*}[label={(\roman*)},
ref={(\roman*)}]
\item \label{ex:vardynamic:pr:1}
$\ensuremath{\mathrm\Phi}$,
\item \label{ex:vardynamic:pr:2}
proper branching decisions, and
\item \label{ex:vardynamic:pr:3}
symmetry reductions due to~\eqref{eq:main}.
\end{enumerate*}
It is thus sufficient to show~$\xi(x)$ is contained in each of these
sets.
Since~$\xi$ is a problem symmetry and~$x \in \ensuremath{\mathrm\Phi}$, also~$\xi(x) \in
\ensuremath{\mathrm\Phi}$.
Moreover, as all branching variables are represented in~$\sigma_\beta$
and~$x$ respects the branching decisions, $\sigma_\beta(x) =
\sigma_\beta(\xi(x))$ implies that~$\xi(x)$ satisfies the branching
decisions.
Thus, \ref{ex:vardynamic:pr:1} and~\ref{ex:vardynamic:pr:2} hold.
Finally, the SHC\xspaces~\eqref{eq:main} for~$\beta$ dominate the SHC\xspaces
for its ancestors~$\alpha$, since~\eqref{cond:ffunc}
and~$\varphi_\alpha = \ensuremath{\mathrm{id}}$ hold; i.e., if~$\xi(x)$ satisfies the SHC\xspaces
for~$\beta$, then it also satisfies all previous SHC\xspaces.
As~$\Gamma \circ \xi = \Gamma$, each~$\gamma \in \Gamma$ can be written
as~$\gamma' \circ \xi$ for some~$\gamma' \in \Gamma$.
Therefore, writing~$\gamma = \gamma' \circ \xi$, we conclude for
all~$\gamma' \in \Gamma$ that
$\sigma_\beta(\xi(x)) = \sigma_\beta(x) \succeq \sigma_\beta(\gamma(x)) =
\sigma_\beta(\gamma'(\xi(x)))$, i.e.,
\ref{ex:vardynamic:pr:3} and thus~\eqref{cond:permutationcondition} holds.
\end{proof}
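The bookkeeping of Example~\ref{ex:vardynamic} can be sketched as follows (an illustrative Python fragment of our own; since~$\varphi_\beta = \ensuremath{\mathrm{id}}$, we represent $m_\beta$ and $\pi_\beta$ implicitly by the ordered list of distinct branching variables):

```python
def child_order(parent_order, i_hat):
    """Structure update at a proper branching on x_{i_hat}: m grows by one
    and pi places i_hat at rank m iff i_hat is new; otherwise the structure
    is inherited unchanged."""
    if i_hat in parent_order:
        return list(parent_order)        # inherit (m, pi, phi) from the parent
    return list(parent_order) + [i_hat]  # i_hat receives the next rank

def sigma(order, x):
    """sigma_beta(x): restriction of x to the branched variables, in rank order."""
    return [x[i] for i in order]
```

This makes explicit that repeated branching on the same variable leaves the symmetry prehandling structure untouched.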
By adapting the variable order used in LexFix\xspace to the order imposed
by~$\sigma_\beta$, LexFix\xspace is thus compatible with isomorphism pruning and
OF\xspace, i.e., the framework achieves goal~\eqref{G1}.
In particular, the statement is true for non-binary problems if these
methods can be generalized to arbitrary variable domains.
We will discuss this in more detail in the next section.
The last symmetry prehandling structure accommodates orbitopal fixing.
Bendotti et~al.\@\xspace~\cite{BendottiEtAl2021} already discussed a dynamic variant
of orbitopal fixing, which reorders the rows of the orbitope matrix similar
to Ostrowski's rank; columns, however, are not reordered.
As described above, allowing also column reorderings might lead to more
balanced branch-and-bound trees, which can be achieved as follows.
\begin{example}[Specialized for orbitopal fixing]
\label{ex:orbitopalfixing}
Let $M$ be the $p \times q$ orbitope matrix corresponding to the problem
variables via $M_{i,j} = x_{q(i-1) + j}$.
That is, $x$ is filled row-wise with the entries of~$M$.
Let~$\beta \in \ensuremath{\mathcal V}$.
If~$\beta$ is the root node, define $(m_\beta, \pi_\beta, \varphi_\beta)
=
(0, \ensuremath{\mathrm{id}}, \ensuremath{\mathrm{id}})$.
Otherwise, let~$\alpha$ be the parent of~$\beta$.
If~$\beta$ arises from~$\alpha$ by a proper branching decision on
variable~$M_{\ensuremath{\hat\imath},\hat{\jmath}}$ and no variable in the~$\ensuremath{\hat\imath}$-th row
has been
used for branching before,
set $m_\beta = m_\alpha + q$, select~$\pi_\beta \in \symmetricgroup{n}$
with
$\pi_\beta(k) = \pi_\alpha(k)$ for~$k \in [m_\alpha]$,
and, for~$k \in [q]$, define $\pi_\beta(m_\alpha + k) = q(\ensuremath{\hat\imath}-1) + k$.
Also choose
$\psi_\alpha \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\alpha)}$
yielding $\varphi_\beta = \varphi_\alpha\circ \psi_\alpha$.
Consistent with Condition~\eqref{cond:sibl},
the choice of $\psi_\alpha$ is the same for all children
sharing the same parent $\alpha$.
If the variable is already included in the variable ordering
or if the branching decision is improper,
inherit~$(m_\beta, \pi_\beta, \varphi_\beta)
= (m_\alpha, \pi_\alpha, \varphi_\alpha)$.
Effectively, this creates a new matrix in which the rows are sorted based
on branching decisions and columns can be permuted as long as this does
not affect symmetrically feasible solutions.
\end{example}
Completely handling SHC\xspaces~\eqref{eq:main} at $\beta$ corresponds to
using orbitopal fixing on the $(m_\beta / q) \times q$-matrix
filled row-wise with the variables
with indices~$(\pi_\beta \varphi_\beta)^{-1}(i)$
for~$i \in [m_\beta]$.
Bendotti et~al.\@\xspace~\cite{BendottiEtAl2021} introduce this
without the freedom of permuting the matrix columns,
i.e., for all $\beta \in \ensuremath{\mathcal V}$ they choose~$\varphi_\beta = \ensuremath{\mathrm{id}}$.
We call their setting \emph{row-dynamic}, whereas we refer to our setting as
\emph{row- and column-dynamic}.
\begin{proof}[Example~\ref{ex:orbitopalfixing} satisfies
\eqreffromto{cond:ffunc}{cond:permutationcondition}]
Obviously, \eqreffromto{cond:ffunc}{cond:sibl} hold.
To show~\eqref{cond:permutationcondition}, we use induction.
As~\eqref{cond:permutationcondition} holds at the root node, the
induction base holds.
So, assume~\eqref{cond:permutationcondition} holds at node $\alpha$
with child~$\beta$~{\itshape (IH)}.
We show \eqref{cond:permutationcondition} also holds at $\beta$.
Let $x \in \ensuremath{\mathrm\Phi}(\beta)$ and $\xi \in \Gamma$
with $\sigma_\beta(x) = \sigma_\beta(\xi(x))$.
To show~$\xi(x) \in \ensuremath{\mathrm\Phi}(\beta)$, we distinguish if the branching
decision from $\alpha$ to $\beta$ is proper or not.
Note that both proper and improper branching decisions only happen on
variables present in~$\sigma_\beta$ by construction
of~$(m_\beta,\pi_\beta,\varphi_\beta)$.
Hence, if~${\xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)}$ holds,
$\sigma_\beta(x) = \sigma_\beta(\xi(x))$
implies $\xi(x) \in \ensuremath{\mathrm\Phi}(\beta)$.
Thus, it suffices to prove~${\xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)}$.
For improper branching decisions,
$\sigma_\beta = \sigma_\alpha$ and SHC\xspaces~\eqref{eq:main} are propagated.
As ${x \in \ensuremath{\mathrm\Phi}(\beta) \subseteq \ensuremath{\mathrm\Phi}(\alpha)}$
and $\sigma_\alpha(x) = \sigma_\alpha(\xi(x))$,
{\itshape (IH)} yields $\xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)$.
For proper branching decisions,
we observe that
$\sigma_\beta(\cdot) =
\restrict{(\pi_\beta \varphi_\alpha \psi_\alpha(\cdot))}{[m_\beta]}$,
$\sigma_\alpha(\cdot) =
\restrict{(\pi_\beta \varphi_\alpha(\cdot))}{[m_\alpha]}$
and~$m_\beta \geq m_\alpha$.
Thus,
$\sigma_\beta(x) = \sigma_\beta(\xi(x))$
implies
$\sigma_\alpha(\psi_\alpha(x)) = \sigma_\alpha(\psi_\alpha\xi(x))$.
As~$x \in \ensuremath{\mathrm\Phi}(\beta) \subseteq \ensuremath{\mathrm\Phi}(\alpha)$
and $\psi_\alpha \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\alpha)}$,
we find~$\psi_\alpha(x) \in \ensuremath{\mathrm\Phi}(\alpha)$.
By~{\itshape (IH)},
$\sigma_\alpha(\psi_\alpha(x)) = \sigma_\alpha(\psi_\alpha\xi(x))$
yields $\psi_\alpha \xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)$.
Again, since $\psi_\alpha \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\alpha)}$,
applying $\psi_\alpha^{-1}$ on the left
yields $\xi(x) \in \ensuremath{\mathrm\Phi}(\alpha)$.
\end{proof}
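The row- and column-dynamic structure of Example~\ref{ex:orbitopalfixing} can be sketched as follows (an illustrative Python fragment of our own, with simplified index conventions; $m_\beta$ and $\pi_\beta$ are represented implicitly by the list of covered rows):

```python
def orbitope_child(row_order, col_perm, i_hat, psi=None):
    """Structure update at a proper branching on M[i_hat][j] (sketch).

    row_order: rows already covered by sigma, in branching order;
    col_perm:  accumulated column permutation phi_beta, as a list;
    psi:       optional column permutation stabilizing the parent's
               feasible region (Condition (C2)), composed as functions.
    """
    if psi is not None:
        col_perm = [col_perm[psi[j]] for j in range(len(col_perm))]
    if i_hat not in row_order:
        row_order = row_order + [i_hat]  # m grows by q: the whole row enters
    return row_order, col_perm

def sigma_orbitope(row_order, col_perm, M):
    """Row-wise read-out of the reordered matrix, restricted to covered rows."""
    return [M[i][col_perm[j]] for i in row_order for j in range(len(col_perm))]
```

Branching within an already covered row, or improperly, inherits the structure, matching the example.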
\paragraph{Proof of Theorem~\ref{thm:main}}
The examples illustrate that many symmetry prehandling structures are
compatible with the correctness conditions, which shows that there are
potentially many variants to handle symmetries based on
Theorem~\ref{thm:main}.
We proceed to prove this theorem.
To this end, we make use of the following lemma.
\begin{lemma}
\label{lem:gen:main}
\begin{subequations}
Let~$\ensuremath{\mathrm\Phi} \subseteq \mathds{R}^n$ and let~$f\colon \ensuremath{\mathrm\Phi} \to \mathds{R}$ be
such that~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ can be solved by (spatial)
branch-and-bound.
Let~$\Gamma$ be a finite group of symmetries of~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$.
Suppose that the branch-and-bound method used for
solving~$\mathrm{OPT}(f,\ensuremath{\mathrm\Phi})$ generates a full
B\&{}B\xspace-tree~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$.
Let $\beta \in \ensuremath{\mathcal V}$ be not a leaf of the B\&{}B\xspace-tree.
If there is a feasible solution $\tilde x \in \ensuremath{\mathrm\Phi}(\beta)$ with
\begin{equation}
\label{eq:sigmalexmax}
\sigma_\beta(\tilde x) \succeq \sigma_\beta \gamma (\tilde x)\
\text{for all}\ \gamma \in \Gamma,
\end{equation}
then $\beta$ has exactly one child $\omega \in \chi_\beta$ for which
there is~$\xi \in \Gamma$ such that
\begin{gather}
\label{eq:xiinfeas}
\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega),\
\\
\text{and}\
\label{eq:sigmaxilexmax}
\sigma_\omega \xi (\tilde x) \succeq \sigma_\omega \gamma \xi (\tilde x)\
\text{for all}\ \gamma \in \Gamma.
\end{gather}
\end{subequations}
\end{lemma}
\begin{proof}
Let $\tilde x \in \ensuremath{\mathrm\Phi}(\beta)$ respect~\eqref{eq:sigmalexmax}.
First, we show the existence of $\omega \in \chi_\beta$
satisfying~\eqref{eq:xiinfeas} and~\eqref{eq:sigmaxilexmax}.
Thereafter, we show that $\omega$ is unique.
\noindent
\emph{Existence:}\quad
By Condition~\eqref{cond:sibl},
the maps $\sigma_\omega$ for all children~$\omega \in \chi_\beta$
are the same.
Let~$\xi \in \Gamma$ be such that~$\sigma_\omega \xi(\tilde x)$ is
lexicographically maximal.
Note that~$\xi$ exists, since $\Gamma$ is a finite group.
Then, $\sigma_\omega \xi(\tilde x) \succeq \sigma_\omega \gamma \xi(\tilde
x)$ for all~$\gamma \in \Gamma$, because~$\gamma, \xi \in \Gamma$ implies~$\gamma\xi \in \Gamma$.
Thus, $\xi$ satisfies~\eqref{eq:sigmaxilexmax}.
We show that~$\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$
for some $\omega \in \chi_\beta$.
Recall that the branching decision at $\beta$ partitions its feasible
region, i.e., $\{ \ensuremath{\mathrm\Phi}(\omega) : \omega \in \chi_\beta \}$ partitions
$\ensuremath{\mathrm\Phi}(\beta)$.
As such, there is exactly one child $\omega \in \chi_\beta$
with $\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$
if $\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\beta)$.
To show \eqref{eq:xiinfeas}, it thus suffices
to prove~$\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\beta)$.
For any child $\omega \in \chi_\beta$,
vector~$x$, and~$i \leq m_\beta$, we have
\begin{equation}
\label{eq:sigmabetasubstitutesigmaomega}
(\sigma_\omega(x))_i
=
(\pi_\omega \varphi_\omega (x))_i
\stackrel{\eqref{cond:ffunc}}=
(\pi_\beta \varphi_\omega (x))_i
\stackrel{\eqref{cond:symmetricallypermute}}=
(\pi_\beta \varphi_\beta \psi_\beta (x))_i
=
(\sigma_\beta \psi_\beta (x))_i
.
\end{equation}
Recall that $\xi \in \Gamma$ satisfies~\eqref{eq:sigmaxilexmax}.
Substituting \eqref{eq:sigmabetasubstitutesigmaomega}
yields
$\sigma_\beta \psi_\beta \xi (\tilde x)
\succeq
\sigma_\beta \psi_\beta \gamma \xi (\tilde x)$
for all $\gamma \in \Gamma$.
In particular, for $\gamma = \psi_\beta^{-1}\xi^{-1} \in \Gamma$,
we find
$\sigma_\beta \psi_\beta \xi (\tilde x)
\succeq \sigma_\beta(\tilde x)$.
Then~\eqref{eq:sigmalexmax}
yields $\sigma_\beta \psi_\beta \xi (\tilde x)
= \sigma_\beta(\tilde x)$.
By~\eqref{cond:permutationcondition},
we thus have $\psi_\beta \xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\beta)$.
Since $\psi_\beta \in \stab{\Gamma}{\ensuremath{\mathrm\Phi}(\beta)}$,
applying $\psi_\beta^{-1}$ on the left yields
$\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\beta)$,
which completes the first part.
\noindent
\emph{Uniqueness:}\quad
Suppose $\xi, \xi' \in \Gamma$ satisfy~\eqref{eq:sigmaxilexmax}.
For~$\gamma = \xi' \xi^{-1}$, \eqref{eq:sigmaxilexmax} for~$\xi$ implies
$\sigma_\omega \xi(\tilde x) \succeq \sigma_\omega \xi'(\tilde x)$.
Analogously, for $\xi'$ we choose $\gamma = \xi (\xi')^{-1}$ to find
$\sigma_\omega \xi'(\tilde x) \succeq \sigma_\omega \xi(\tilde x)$.
As a result,
$\sigma_\omega \xi(\tilde x) = \sigma_\omega \xi'(\tilde x)$.
Suppose $\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$.
Let $x = \xi(\tilde x)$ and $\gamma = \xi' \xi^{-1} \in \Gamma$.
Then, we find
\[
\sigma_\omega(x)
=
\sigma_\omega \xi(\tilde x)
=
\sigma_\omega \xi'(\tilde x)
=
\sigma_\omega \xi'(\xi^{-1}(x))
=
\sigma_\omega \gamma(x),
\]
and Condition~\eqref{cond:permutationcondition}
yields $\xi'(\tilde x) = \gamma(x) \in \ensuremath{\mathrm\Phi}(\omega)$.
As the children $\chi_\beta$ partition $\ensuremath{\mathrm\Phi}(\beta)$
and $\xi'(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$,
there is no other child of $\beta$ where $\xi'(\tilde x)$ is
feasible.
Thus, independent from $\xi \in \Gamma$
satisfying~\eqref{eq:sigmaxilexmax},
there is exactly one child~$\omega \in \chi_\beta$
with~$\xi(\tilde x) \in \ensuremath{\mathrm\Phi}(\omega)$.
\end{proof}
We are now able to prove Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
Recall that we assumed~$\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$ to be finite
and that we do not prune nodes by bound.
Let~$\ensuremath{\mathcal B}_d$ be the tree arising from~$\ensuremath{\mathcal B}$ by pruning all nodes
at depth larger than~$d$.
Let~$(m_\beta,\pi_\beta,\varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$ satisfy the
correctness conditions.
Let~$\check{x} \in \ensuremath{\mathrm\Phi}$ be any feasible solution to the original
problem.
We proceed by induction and show that, for every depth~$d$ of the tree,
there is exactly one leaf node in~$\ensuremath{\mathcal B}_d$ for which a permutation
of~$\check{x}$ is feasible and that does not violate the local
SHC\xspaces~\eqref{eq:main}.
Let~$d = 0$.
The only node at depth~$d$ is the root node~$\alpha$.
Any feasible solution~$\check x \in \ensuremath{\mathrm\Phi}$
is feasible in the root node~$\alpha \in \mathcal V$
of the branch-and-bound tree~$\mathcal B$.
In particular, we can permute~$\check x$ by any $\xi \in \Gamma$,
and have a feasible symmetrical solution.
For the root node, choose $\xi \in \Gamma$ such that~$\sigma_\alpha
\xi(\check x) \succeq \sigma_\alpha \gamma \xi(\check x)$
for all~$\gamma \in \Gamma$.
That is, $\xi(\check{x})$ is not cut off by~\eqref{eq:main} at~$\alpha$.
Let~$d > 0$.
By induction, we may assume that there is exactly one leaf node~$\beta$
of~$\ensuremath{\mathcal B}_d$ at which a permutation~$\xi(\check{x})$ is feasible and
that is not cut off by~\eqref{eq:main}.
If~$\beta$ is also a leaf in~$\ensuremath{\mathcal B}$, we are done.
Otherwise, since~$\xi(\check{x})$ is not cut off by~\eqref{eq:main}, we
can apply Lemma~\ref{lem:gen:main} and find that~$\beta$ has exactly one
child~$\omega$ at which a permutation of~$\xi(\check{x})$ is feasible and
not cut off by~\eqref{eq:main} at node~$\omega$.
This concludes the proof.
\end{proof}
\begin{remark}
For spatial branch-and-bound algorithms, two subtleties arise.
On the one hand, there might not exist a finite branch-and-bound tree.
If all branching decisions partition a subproblem's feasible region,
Theorem~\ref{thm:main} holds true for all trees pruned at a certain depth
level.
On the other hand, branching decisions do not necessarily partition the
feasible region.
In this case, \eqref{eq:main} can still be used to handle symmetries.
However, in the depth-pruned tree there might exist more than one leaf
containing a symmetric copy of a feasible solution.
\end{remark}
\begin{remark}
Theorem~\ref{thm:main} still holds in case of some infinite groups.
The only place where finiteness is used is in the proof of
Lemma~\ref{lem:gen:main}, where it implies that a symmetry~$\xi \in \Gamma$
exists such that $\sigma_{\omega}\xi(\tilde x)$ is lexicographically maximal
for a fixed solution vector $\tilde x \in \ensuremath{\mathrm\Phi}(\beta)$.
For instance, for infinite groups of rotational symmetries, such a
symmetry always exists.
\end{remark}
\section{Applying the framework to generic optimization problems}
\label{sec:cast}
Due to Theorem~\ref{thm:main}, we can completely handle all
symmetries of an arbitrary problem~$\mathrm{OPT}(f, \ensuremath{\mathrm\Phi})$, provided we
know how to handle Constraints~\eqref{eq:main}.
The aim of this section is therefore to find symmetry handling methods that
can deal with non-binary variables.
Since handling Constraints~\eqref{eq:main} is already difficult for binary
problems, we cannot expect to handle all symmetries efficiently.
Instead, we revisit the efficient methods LexFix\xspace, orbitopal fixing, and
OF\xspace for binary variables and provide proper generalizations for
non-binary problems, which allows us to partially enforce
Constraints~\eqref{eq:main}.
We refer to these generalizations as lexicographic reduction, orbitopal
reduction, and orbital symmetry handling, respectively.
Throughout this section, we assume that~$\Gamma \leq \symmetricgroup{n}$.
\subsection{Lexicographic reduction}
\label{sec:gen:lexred}
\subsubsection{The static setting}
Assume the symmetry prehandling structure of Example~\ref{ex:staticsetting}
is used in Theorem~\ref{thm:main}.
Then, the SHC\xspaces~$x \succeq \gamma(x)$ for all~$\gamma \in \Gamma$ are
enforced at each node of the branch-and-bound tree.
For all~$i \in [n]$, let~$\ensuremath{\mathcal{D}}_i \subseteq \mathds{R}$ be the domain of
variable~$x_i$ at a node of the branch-and-bound tree and let~$\ensuremath{\mathcal{D}} =
(\ensuremath{\mathcal{D}}_i)_{i \in [n]}$ be the vector of variable domains.
The aim of the lexicographic reduction (LexRed\xspace) algorithm is to find, for
a fixed permutation~$\gamma \in \Gamma$, the smallest domains~$\ensuremath{\mathcal{D}}'_i$,
$i \in [n]$, such that
$\left\{ x \in \bigtimes_{i = 1}^n \ensuremath{\mathcal{D}}_i : x \succeq \gamma(x)\right\} =
\left\{ x \in \bigtimes_{i = 1}^n \ensuremath{\mathcal{D}}'_i : x \succeq \gamma(x)\right\}$.
If~$\ensuremath{\mathcal{D}}_i \subseteq \ensuremath{\{0, 1\}}$ for all~$i \in [n]$, the reductions found
by LexRed\xspace are equivalent to the reductions found by LexFix\xspace.
For non-binary domains, similar ideas as for LexFix\xspace, which are
described
in~\cite{BestuzhevaEtal2021OO,doornmalenhojny2022cyclicsymmetries}, can be used:
We iterate over the variables~$x_i$ with indices in increasing order.
If~$x_j = \gamma(x)_j$ for all indices~$j < i$, we enforce~$x_i \geq \gamma(x)_i$,
and we check if a solution with~$x_i = \gamma(x)_i$ exists.
Before we provide a rigorous algorithm, we illustrate the idea.
\begin{example}
\label{ex:lexred}
Let $\ensuremath{\mathrm\Phi} = [-1, 1]^4 \cap \mathds Z^4$ and $\gamma = (1,3,2,4)$.
Consider a node with relaxed region
$x \in \{ 0 \} \times [-1, 0] \times \{ 1 \} \times [-1, 1]$.
Propagating
$x \succeq \gamma(x)$,
we find
{\footnotesize
\begin{equation*}
\begin{bmatrix}
x_1& =& 0\\
x_2& \in& [-1,0]\\
x_3& =& 1 \\
x_4& \in& [-1, 1]\\
\end{bmatrix}
\succeq
\begin{bmatrix}
x_4& \in& [-1, 1]\\
x_3& =& 1 \\
x_1& =& 0\\
x_2& \in& [-1,0]\\
\end{bmatrix}
\stackrel{\text{(}\dagger\text{)}}{\leadsto}
\begin{bmatrix}
x_1& =& 0\\
x_2& \in& [-1,0]\\
x_3& =& 1 \\
x_4& \in& [-1, 0]\\
\end{bmatrix}
\succeq
\begin{bmatrix}
x_4& \in& [-1, 0]\\
x_3& =& 1 \\
x_1& =& 0\\
x_2& \in& [-1,0]\\
\end{bmatrix}
\!.
\end{equation*}
}
In ($\dagger$), we restrict the domain of $x_4$ by propagating
$0 = x_1 \geq x_4$, resulting in $x_4 \in [-1, 0]$.
If~${x_1 = x_4 = 0}$,
then SHC\xspace $x \succeq \gamma(x)$ implies
the contradiction ${[-1, 0] \ni x_2 \geq x_3 = 1}$,
so we must have $x_1 > x_4$. Since $x_4 \in \mathds Z$,
$x_4$ must be fixed to $-1$.
No further domain reductions can be derived from $x \succeq \gamma(x)$.
\end{example}
We now proceed with our generalization of LexFix\xspace.
To enforce~$x \succeq \gamma(x)$ for general variable domains~$\ensuremath{\mathcal{D}}$,
some artifacts need to be taken into account.
For example, if~$n = 3$ and~$\gamma$ is the cyclic right-shift,
then~$y^\epsilon \ensuremath{\coloneqq}(1+\epsilon,0,1) \succeq \gamma(y^\epsilon) =
(1,1+\epsilon,0)$ for every~$\epsilon > 0$, but~$y^0 \prec \gamma(y^0)$,
i.e., $\{x \in \mathds{R}^n : x \succeq \gamma(x)\}$ is not necessarily closed.
Since optimization software usually can only handle closed sets, we propose
the following solution.
We extend~$\mathds{R}$ by an infinitesimal symbol~$\varepsilon$ that we can add to
or subtract from any real number to represent a strict difference.
This results in a symbolically correct algorithm
that is as strong as possible.
For example, $\min\{ 1 + x : x > 1 \} = 2 + \varepsilon$,
$\min\{1 + x + \varepsilon : x > 1 \} = 2 + \varepsilon$,
$\max\{ 1 + x : x < 2 \} = 3 - \varepsilon$,
and we do not allow further arithmetic
with the $\varepsilon$ symbol.
In practice, however, we cannot enforce strict inequalities.
We thus replace~$\varepsilon$ by~0, which will lead to slightly
weaker but still correct reductions.
That is no problem for our purposes:
since we only ever apply either the $\min$-operator or the
$\max$-operator, the sign of $\varepsilon$ is always the same;
namely, whenever $\varepsilon$ appears,
it has a positive sign in minimization operations
and a negative sign in maximization operations.
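One way to realize the $\varepsilon$-symbol in an implementation is to pair every bound with a flag (a minimal model of our own, not taken from any solver):

```python
# A value is a pair (r, k) with k in {-1, 0, +1}, read as r - eps, r, r + eps.
# Python's lexicographic tuple comparison then yields r - eps < r < r + eps,
# and no further eps-arithmetic is performed.
def strict_min_above(bound):
    """min{ z : z > bound } = bound + eps."""
    return (bound, +1)

def strict_max_below(bound):
    """max{ z : z < bound } = bound - eps."""
    return (bound, -1)

def drop_eps(v):
    """Practical relaxation: replace eps by 0 (slightly weaker, still correct)."""
    return (v[0], 0)
```

Dropping the flag corresponds exactly to replacing~$\varepsilon$ by~$0$ as described above.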
Now, we turn to the generalization of LexFix\xspace to arbitrary variable domains.
We introduce a timestamp $t$.
At every time $t$, the current domain is denoted by~$\ensuremath{\mathcal{D}}^t$.
We initialize~$\ensuremath{\mathcal{D}}_i^0 = \ensuremath{\mathcal{D}}_i$ for all~$i \in [n]$, and for two
timestamps~$t > t'$, we will possibly strengthen the domains, i.e.,
$\ensuremath{\mathcal{D}}_i^{t} \subseteq \ensuremath{\mathcal{D}}_i^{t'}$.
The core of LexRed\xspace is the observation that if~$\restrict{x}{[t-1]} =
\restrict{\gamma(x)}{[t-1]}$ holds for some~$t \geq 1$, then constraint~$x \succeq
\gamma(x)$ can only hold when~$x_t \geq \gamma(x)_t = x_{\gamma^{-1}(t)}$.
This observation is exploited in a two-stage approach.
In the first stage, LexRed\xspace performs the following steps for all~$t =
1,\dots,n$:
\begin{enumerate}
\item The algorithm propagates~$x_t \geq \gamma(x)_t$ by
updating the variable domains via
\begin{equation}
\label{eq:VDR}
\begin{aligned}
\ensuremath{\mathcal{D}}_t^t
&= \{ z \in \ensuremath{\mathcal{D}}_t^{t-1} :
z \geq \min(\ensuremath{\mathcal{D}}_{\gamma^{-1}(t)}^{t-1}) \}
,\\
\ensuremath{\mathcal{D}}_{\gamma^{-1}(t)}^t
&= \{ z \in \ensuremath{\mathcal{D}}_{\gamma^{-1}(t)}^{t-1} :
z \leq \max(\ensuremath{\mathcal{D}}_{t}^{t-1}) \}
, \text{ and}\\
\ensuremath{\mathcal{D}}_i^t
&= \ensuremath{\mathcal{D}}_i^{t-1}\ \text{for}\ i \in [n] \setminus \{ t, \gamma^{-1}(t) \}.
\end{aligned}
\end{equation}
\item Then, it checks whether~$\ensuremath{\mathcal{D}}^t_i \neq \emptyset$ for all~$i \in
[n]$ and whether~$x \in \ensuremath{\mathcal{D}}^t$ guarantees~$\restrict{x}{[t]} =
\restrict{\gamma(x)}{[t]}$.
If this is the case, the algorithm continues with iteration~$t+1$.
Otherwise, the first phase of LexRed\xspace terminates, say at time~$t^\star$.
\end{enumerate}
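For interval domains, the first stage can be sketched as follows (a Python sketch of our own with 0-based indices, assuming interval domains; it is not the implementation of LexRed\xspace):

```python
# Stage one of a LexRed-style propagation for x >= gamma(x) (lexicographic
# order), assuming interval domains [lo, hi]; names are ours.
def lexred_stage_one(dom, gamma_inv):
    """dom: list of intervals [lo, hi]; gamma_inv[t] = gamma^{-1}(t).
    Returns (t_star, dom): t_star = -1 if a domain becomes empty,
    t_star = len(dom) if all entries are forced equal, and otherwise the
    first index where equality of x and gamma(x) is no longer guaranteed."""
    n = len(dom)
    for t in range(n):
        s = gamma_inv[t]
        if s == t:               # fixed point: x_t >= x_t holds trivially
            continue
        lo_t, hi_t = dom[t]
        lo_s, hi_s = dom[s]
        dom[t] = [max(lo_t, lo_s), hi_t]   # x_t >= min of partner domain
        dom[s] = [lo_s, min(hi_s, hi_t)]   # x_{gamma^{-1}(t)} <= max of dom[t]
        if dom[t][0] > dom[t][1] or dom[s][0] > dom[s][1]:
            return -1, dom                 # empty domain: infeasible
        # x in the box guarantees x_t = x_{gamma^{-1}(t)} only if both
        # domains collapsed to the same single value
        if not (dom[t][0] == dom[t][1] == dom[s][0] == dom[s][1]):
            return t, dom                  # phase one stops at t_star = t
    return n, dom
```

The two domain updates in the loop correspond to the reductions in~\eqref{eq:VDR}, and the singleton check mirrors the test whether~$x \in \ensuremath{\mathcal{D}}^t$ guarantees~$\restrict{x}{[t]} = \restrict{\gamma(x)}{[t]}$.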
Of course, all variable domain reductions found during phase one are
correct based on the previously mentioned observation.
At the end of phase one, three possible cases can occur: a variable domain
is empty, phase one has propagated all variables, i.e., $x = \gamma(x)$ for
all~$x \in \ensuremath{\mathcal{D}}^n$, or~$\restrict{x}{[t^\star-1]} =
\restrict{\gamma(x)}{[t^\star-1]}$ and there exists~$(v,w) \in
\ensuremath{\mathcal{D}}^{t^\star}_{t^\star} \times
\ensuremath{\mathcal{D}}^{t^\star}_{\gamma^{-1}(t^\star)}$ with~$v \neq w$.
In either of the first two cases, the algorithm stops because it either
has shown that no solution~$x \in \ensuremath{\mathcal{D}}^0$ exists with~$x \succeq
\gamma(x)$ or all variables are fixed.
In the last case, note that~$v > w$ holds due to the domain reductions
at time~$t^\star$.
Since~$\restrict{x}{[t^\star-1]} = \restrict{\gamma(x)}{[t^\star-1]}$ holds
for all~$x \in \ensuremath{\mathcal{D}}^{t^\star}$, the relation~$v > w$ shows that there
exists~$x \in \ensuremath{\mathcal{D}}^{t^\star}$ such that~$\restrict{x}{[t^\star]} \succ
\restrict{\gamma(x)}{[t^\star]}$.
Consequently, the domains of variables~$x_{t^\star+1},\dots,x_n$ cannot
be tightened.
It might be possible, however, that the domains of~$x_{t^\star}$
and~$\gamma(x)_{t^\star}$ can be reduced further, namely
if~$x_{t^\star} = \min \ensuremath{\mathcal{D}}^{t^\star}_{\gamma^{-1}(t^\star)}$
or~$x_{\gamma^{-1}(t^\star)} = \max \ensuremath{\mathcal{D}}^{t^\star}_{t^\star}$.
In this case, the other variable necessarily attains the same value, which
means that a solution with~$\restrict{x}{[t^\star]} =
\restrict{\gamma(x)}{[t^\star]}$ is created, which might lead to a
contradiction with~$x \succeq \gamma(x)$ as illustrated in
Example~\ref{ex:lexred}.
In the second stage of LexRed\xspace, it is checked whether one of these cases
indeed leads to a contradiction.
If this is the case, $\min \ensuremath{\mathcal{D}}^{t^\star}_{\gamma^{-1}(t^\star)}$ can be
removed from the domain of~$x_{t^\star}$, or~$\max
\ensuremath{\mathcal{D}}^{t^\star}_{t^\star}$ can be removed from the domain of~$x_{\gamma^{-1}(t^\star)}$.
To detect whether a contradiction occurs, the second phase hypothetically
fixes~$x_{t^\star}$ or~$x_{\gamma^{-1}(t^\star)}$ to the respective value
and continues with stage one since~$\restrict{x}{[t^\star]} =
\restrict{\gamma(x)}{[t^\star]}$ now holds.
If phase one then terminates because a variable domain becomes empty, this
shows that the domain of~$x_{t^\star}$ or~$x_{\gamma^{-1}(t^\star)}$ can be
reduced.
Otherwise, no further variable domain reductions can be derived.
\begin{proposition}
Let~$\tau$ be the time needed to perform one variable domain reduction
in~\eqref{eq:VDR}.
Then, LexRed\xspace finds all possible variable domain reductions for~$x
\succeq \gamma(x)$ in~$\ensuremath{\mathop{\mathcal{O}}}(n \cdot \tau)$ time.
\end{proposition}
\begin{proof}
Completeness of LexRed\xspace follows from the previous discussion.
The running time holds as the first stage computes at most~$n$
domain reductions and the second stage triggers phase
one at most twice.
\end{proof}
In many cases, for instance if variable domains are continuous or discrete
intervals, $\tau = \ensuremath{\mathop{\mathcal{O}}}(1)$, which turns lexicographic reduction into a
linear-time algorithm.
\subsubsection{Dynamic settings}
Theorem~\ref{thm:main}
shows that $\sigma_\beta(x) \succeq \sigma_\beta \gamma(x)$
is a valid symmetry handling constraint
for certain symmetry prehandling
structures~$(m_\beta,\pi_\beta,\varphi_\beta)_{\beta \in \ensuremath{\mathcal V}}$.
If~$\Gamma$ is a permutation group,
$\sigma_\beta(x)$ and $\sigma_\beta(\gamma(x))$
are just restrictions of permuted solution vectors.
In this case, lexicographic reduction can, of course,
also propagate these SHC\xspaces by changing the order in which
we iterate over the solution vector entries.
In particular, in the binary case and the symmetry prehandling structure of
Example~\ref{ex:vardynamic}, the adapted version of LexRed\xspace is compatible with
IsoPr\xspace and OF\xspace as we have seen in
Section~\ref{sec:framework} that the latter two methods
propagate~$\sigma_\beta(x) \succeq \sigma_\beta \gamma(x)$ for
all~$\gamma\in\Gamma$.
\subsection{Orbitopal reduction}
\label{sec:gen:orbitopalfixing}
Bendotti et~al.\@\xspace~\cite{BendottiEtAl2021} present a complete propagation
algorithm to handle orbitopal symmetries on binary variables.
In this section, we generalize their algorithm to arbitrary variable
domains.
We call the generalization of orbitopal fixing \emph{orbitopal reduction}
as it does not necessarily fix variables.
\subsubsection{The static setting}
Suppose that~$\Gamma$ is the group that contains all column permutations of
a $p \times q$ variable matrix~$X$.
Further, assume that Theorem~\ref{thm:main} uses the symmetry prehandling structure
from Example~\ref{ex:staticsetting}, where we assume that the variable
vector~$x$ associated with the~$p \cdot q$ variables in~$X$ is
such that enforcing~$x \succeq \gamma(x)$ for all~$\gamma \in \Gamma$
corresponds to sorting the columns of~$X$ in lexicographic order.
With slight abuse of notation, for $\gamma \in \Gamma$, we
write $X \succeq \gamma(X)$ if and only if the corresponding vector~$x \in
\mathds{R}^{pq}$ satisfies~$x \succeq \gamma(x)$.
We use the following notation.
For any~$M \in \mathds{R}^{p \times q}$ and~$(i,j) \in [p] \times [q]$, we denote
by $M_i$ the $i$-th row of~$M$, by~$M^j$ the $j$-th column of~$M$, and
by~$M_i^j$ the entry at position $(i, j)$.
For every variable~$X_i^j$, we denote its domain by
$\ensuremath{\mathcal{D}}_i^j \subseteq \mathds{R}$.
Using the same matrix notation,
$\ensuremath{\mathcal{D}} \subseteq \mathds{R}^{p \times q}$
denotes the $p \times q$-matrix where entry $(i, j)$ corresponds
to~$\ensuremath{\mathcal{D}}_i^j$.
For a given domain~$\ensuremath{\mathcal{D}} \subseteq \mathds{R}^{p \times q}$, we denote
by~$\underline{M}(\ensuremath{\mathcal{D}})$ and~$\overline{M}(\ensuremath{\mathcal{D}})$ the
lexicographically smallest and largest element of~$O_{p \times q} \cap \ensuremath{\mathcal{D}}$, respectively.
Whenever the domain~$\ensuremath{\mathcal{D}}$ is clear from the context, we just
write~$\underline{M}$ and~$\overline{M}$.
Moreover, let
\[
O_{p \times q} \ensuremath{\coloneqq} \{ X \in \mathds{R}^{p \times q} : X \succeq \gamma(X)\ \text{for all}\ \gamma \in \Gamma \}
\]
be the set of all matrices with lexicographically sorted columns.
Our goal is to find all possible VDR\xspaces
of the SHC\xspaces $X \succeq \gamma(X)$ for $\gamma \in \Gamma$, i.e.,
we want to find the smallest~$\hat{\ensuremath{\mathcal{D}}} \subseteq \mathds{R}^{p \times q}$ such
that
\[
\hat{\ensuremath{\mathcal{D}}} \cap O_{p \times q} = \ensuremath{\mathcal{D}} \cap O_{p \times q}.
\]
It turns out that, as for the binary case~\cite{BendottiEtAl2021}, the
matrices~$\underline{M}(\ensuremath{\mathcal{D}})$ and~$\overline{M}(\ensuremath{\mathcal{D}})$ contain
sufficient information for finding~$\hat{\ensuremath{\mathcal{D}}}$.
In the following, recall that we (implicitly) use the infinitesimal
notation introduced in the previous section to represent strict
inequalities.
\begin{theorem}
\label{thm:genorbitopalfixing}
Let~$\ensuremath{\mathcal{D}} \subseteq \mathds{R}^{p \times q}$.
For~$j \in [q]$, let~$i_j \ensuremath{\coloneqq} \min(\{ i \in [p] :
\underline{M}(\ensuremath{\mathcal{D}})_i^j \neq \overline{M}(\ensuremath{\mathcal{D}})_i^j \} \cup \{ p +
1 \})$.
Then, the smallest~$\hat{\ensuremath{\mathcal{D}}} \subseteq \mathds{R}^{p \times q}$ for
which~$\hat{\ensuremath{\mathcal{D}}} \cap O_{p \times q} = \ensuremath{\mathcal{D}} \cap O_{p \times q}$
holds satisfies, for every~$(i,j) \in [p] \times [q]$,
\[
\hat{\ensuremath{\mathcal{D}}}^j_i
=
\begin{cases}
\ensuremath{\mathcal{D}}^j_i \cap [\underline{M}(\ensuremath{\mathcal{D}})_i^j,
\overline{M}(\ensuremath{\mathcal{D}})_i^j],
& \text{if } i \leq i_j,\\
\ensuremath{\mathcal{D}}^j_i, &\text{otherwise}.
\end{cases}
\]
\end{theorem}
This theorem is proven by the following two lemmas.
The first lemma shows that no tighter VDR\xspaces can be achieved:
for every $(i, j) \in [p] \times [q]$
and $v \in \hat\ensuremath{\mathcal{D}}_i^j$, a matrix
$\tilde X \in \ensuremath{\mathcal{D}} \cap O_{p \times q}$ exists with $\tilde X_i^j = v$.
The second lemma shows that the VDR\xspaces are valid:
for every~$(i, j) \in [p] \times [q]$
and $v \in \ensuremath{\mathcal{D}}_i^j \setminus \hat\ensuremath{\mathcal{D}}_i^j$,
no matrix $\tilde X \in \ensuremath{\mathcal{D}} \cap O_{p \times q}$ with $\tilde X_i^j = v$ exists.
\begin{lemma}
\label{lem:genorbitopalfixing:tight}
Suppose that $O_{p \times q} \cap \ensuremath{\mathcal{D}} \neq \emptyset$.
Let $i' \in [p]$ and $j' \in [q]$ with $i' \leq i_{j'}$.
For all $x \in \ensuremath{\mathcal{D}}_{i'}^{j'}$
with~$\underline M _{i'}^{j'} \leq x \leq \overline M _{i'}^{j'}$
there is $X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$
with $X_{i'}^{j'} = x$.
\end{lemma}
\begin{proof}
We define two matrices $A, B \in \ensuremath{\mathcal{D}}$ whose entries~$(i,j) \in [p]
\times [q]$ are
\[
A_i^j =
\begin{cases}
\overline M_i^j & \text{if}\ j < j', \\
\underline M_i^j & \text{if}\ j > j', \\
\overline M_i^j = \underline M_i^j & \text{if}\ j = j', i < i_j, \\
\overline M_i^j (> \underline M_i^j) & \text{if}\ j = j', i = i_j,\\
\min(\ensuremath{\mathcal{D}}_i^j) & \text{if}\ j = j', i > i_j,
\end{cases}
\
\text{and}\
B_i^j =
\begin{cases}
\overline M_i^j & \text{if}\ j < j', \\
\underline M_i^j & \text{if}\ j > j', \\
\overline M_i^j = \underline M_i^j & \text{if}\ j = j', i < i_j, \\
\underline M_i^j (< \overline M_i^j) & \text{if}\ j = j', i = i_j, \\
\max(\ensuremath{\mathcal{D}}_i^j) & \text{if}\ j = j', i > i_j.
\end{cases}
\]
From these two matrices, we show that for any~$x \in \ensuremath{\mathcal{D}}_{i'}^{j'}$
with~$\underline M _{i'}^{j'} \leq x \leq \overline M _{i'}^{j'}$ there
is~$X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$ with $X_{i'}^{j'} = x$.
We call such a matrix~$X$ a certificate for~$x$.
In the following, we first provide a construction for these certificates,
and after that we show that they are contained
in~$\ensuremath{\mathcal{D}} \cap O_{p \times q}$.
If $i' < i_{j'}$, then $x = \overline M_{i'}^{j'} = \underline M_{i'}^{j'}$.
Thus, $X = A$ is a certificate.
If $i' = i_{j'}$,
then there are three options:
If $x = \underline M_{i'}^{j'}$,
then $X = B$ is a certificate;
if $x = \overline M_{i'}^{j'}$,
then $X = A$ is a certificate;
and if~$\underline M_{i'}^{j'} < x < \overline M_{i'}^{j'}$,
then construct $X$ with
$X_i^j = A_i^j$ if $(i, j) \neq (i', j')$,
and $X_{i'}^{j'} = x$.
Note that $X \in \ensuremath{\mathcal{D}}$.
We finally show that $X \in O_{p \times q}$, concluding the proof.
The first $j' - 1$ columns of $X$ correspond to $\overline M$.
That is, they satisfy $X^j \succeq X^{j + 1}$ for all $1 \leq j < j' - 1$.
Similarly, the columns after column $j'$ correspond to~$\underline M$.
Hence, $X^j \succeq X^{j + 1}$ for all $j' < j < q$.
By the definition of $A$ and $B$,
$\overline M^{j' - 1} \succeq \overline M^{j'} \succeq A^{j'}$
and
$B^{j'} \succeq \underline M^{j'} \succeq \underline M^{j' + 1}$.
As the columns of~$X$ are either columns of~$A$ or~$B$, or equal
to~$A^{j'}$ up to one entry while remaining lexicographically larger than $B^{j'}$,
we find~$\overline M^{j' - 1} = A^{j' - 1} = X^{j' - 1}
\succeq A^{j'} \succeq X^{j'} \succeq B^{j'} \succeq B^{j' + 1} = X^{j' +
1} = \underline M^{j' + 1}$.
So, for all consecutive $j\in [q - 1]$, we have~$X^j \succeq X^{j + 1}$,
and hence $X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$.
\end{proof}
\begin{lemma}
\label{lem:genorbitopalfixing:valid}
Suppose that $O_{p \times q} \cap \ensuremath{\mathcal{D}} \neq \emptyset$.
Let $i' \in [p]$ and $j' \in [q]$ with $i' \leq i_{j'}$.
For all $X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$, we have
$\underline M _{i'}^{j'} \leq X_{i'}^{j'} \leq \overline M _{i'}^{j'}$.
\end{lemma}
\begin{proof}
Suppose the contrary, i.e.,
there is $X \in O_{p \times q} \cap \ensuremath{\mathcal{D}}$
with either $X_{i'}^{j'} < \underline M _{i'}^{j'}$
or $X_{i'}^{j'} > \overline M _{i'}^{j'}$.
Suppose that~$i'$ is minimal, i.e., there is no $i'' < i'$
with $X_{i''}^{j'} < \underline M _{i''}^{j'}$
or $X_{i''}^{j'} > \overline M _{i''}^{j'}$.
By symmetry, it suffices to consider the case
$X_{i'}^{j'} < \underline M _{i'}^{j'}$.
If $X_{i}^{j'} \leq \underline M _{i}^{j'}$ holds for all $i < i'$,
then $X^{j'} \prec \underline M^{j'}$,
which contradicts that $\underline M$ is the lexicographically minimal
solution of $O_{p \times q} \cap \ensuremath{\mathcal{D}}$.
Hence, there is a row $i'' < i'$
with $X_{i''}^{j'} > \underline M_{i''}^{j'}$.
Since~$i'' < i' \leq i_{j'}$
yields $\underline M_{i''}^{j'} = \overline M_{i''}^{j'}$,
we have $X_{i''}^{j'} > \overline M_{i''}^{j'}$.
This contradicts the minimality of~$i'$,
since row~$i'' < i'$ satisfies the second condition.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:genorbitopalfixing}]
Lemmas~\ref{lem:genorbitopalfixing:tight}
and~\ref{lem:genorbitopalfixing:valid} prove the assertion for $i' \leq
i_{j'}$.
Since the domains for~${i' > i_{j'}}$ are not restricted in comparison
to~$\ensuremath{\mathcal{D}}$, domain~$\hat{\ensuremath{\mathcal{D}}}^{j'}_{i'}$ is valid.
To show that it is as tight as possible, we can reconsider in the proof of
Lemma~\ref{lem:genorbitopalfixing:tight} the matrix $A \in \ensuremath{\mathcal{D}} \cap
O_{p \times q}$.
Replacing entry~$(i', j')$ in $A$ with any value in $\ensuremath{\mathcal{D}}_{i'}^{j'}$
yields a matrix $\tilde A$.
If $i' > i_{j'}$, this change does not affect the lexicographic order
constraint, so $\tilde A \in \ensuremath{\mathcal{D}} \cap O_{p \times q}$ is a certificate
of tightness.
Combining these statements shows correctness of
Theorem~\ref{thm:genorbitopalfixing}.
\end{proof}
We conclude this section with an analysis of the time needed to
find~$\hat{\ensuremath{\mathcal{D}}}$.
The crucial step is to find the matrices~$\underline{M}$
and~$\overline{M}$.
To find these matrices, we adapt the idea from~\cite{BendottiEtAl2021} for
the binary case.
Denote by~$\lexmin(\cdot)$ and~$\lexmax(\cdot)$ the operators that determine
the lexicographically minimal and maximal elements of a set, respectively.
We claim that for the lexicographically minimal
element~$\underline{M}$, the~$j$-th column is
\[
\underline M^j =
\begin{cases}
\lexmin \{ X \in \ensuremath{\mathcal{D}}^j : X \succeq \underline M^{j+1} \},
& \text{if}\ j < q, \\
\lexmin \{ X \in \ensuremath{\mathcal{D}}^j \},
& \text{otherwise}.
\end{cases}
\]
This can be computed iteratively, starting with the last column $j=q$,
and then iteratively reducing~$j$ until the first column.
The arguments for correctness are the same as
in~\cite[Thm.~1, Lem.~2]{BendottiEtAl2021}.
For this reason, we only describe how to compute the~$j$-th column.
Due to this iterative approach,
when computing column $\underline{M}^{j}$
for $j < q$, column $\underline{M}^{j+1}$ is known.
The idea is to choose the entries of
$\underline{M}^{j}$ minimally
such that $\underline{M}^{j} \succeq \underline{M}^{j+1}$ holds.
This resembles the propagation method of the previous section (LexRed\xspace),
by choosing the entries minimally such that the constraint holds
when restricted to the first elements, then increasing the
vector sizes by one iteratively.
If this leads to a contradiction with the constraint,
it is returned to the last step where the entry
was not fixed, and this entry is increased
to repair feasibility of the constraint.
More precisely,
$\underline M^{j}$ is found by iterating~$i$ from $1$ to~$p$
as follows.
If there is a row index~${i' < i}$
with $\underline M_{i'}^j > \underline M_{i'}^{j + 1}$,
let~$\underline M_i^j \gets \min ( \ensuremath{\mathcal{D}}_i^j )$.
This is possible, because row~$i'$ already guaranteed that~$\underline{M}^j
\succ \underline{M}^{j+1}$.
If no such index exists, we may assume that all preceding rows $i' < i$
have~${\underline M_{i'}^j = \underline M_{i'}^{j+1}}$ (otherwise, the~$j$-th
column cannot be lexicographically at least as large as column~$j+1$, as
becomes clear in the following).
In this case, denote~$S^i \ensuremath{\coloneqq}
\{ x \in \ensuremath{\mathcal{D}}_i^j : x \geq \underline M_i^{j+1} \}$.
On the one hand,
if $\card{S^i} > 0$, set $\underline M_i^j \gets \min(S^i)$.
If this yields $\underline M_i^j > \underline M_i^{j + 1}$,
then stop the iteration, and for all $i'' > i$
set~$\underline M_{i''}^{j} \gets \min(\ensuremath{\mathcal{D}}_{i''}^j)$.
This makes sure that~$\underline{M}^j$ is lexicographically strictly larger
than~$\underline{M}^{j+1}$.
On the other hand,
if $\card{S^i} = 0$, we cannot enforce~$\underline{M}^j \succ
\underline{M}^{j+1}$ in row~$i$.
To ensure~$\underline{M}$ becomes the lexicographically smallest
element in~$O_{p \times q} \cap \ensuremath{\mathcal{D}}$, we return to the largest $i' < i$
with~$\card{S^{i'}} > 1$ and enforce a lexicographic difference by setting
$\underline M_{i'}^j \gets \min\{ x \in \ensuremath{\mathcal{D}}_{i'}^j :
x > \underline M_{i'}^{j+1} \}$,
and, for all~$i'' > i'$, we assign~$\underline M_{i''}^{j} \gets \min(\ensuremath{\mathcal{D}}_{i''}^j)$.
If no $i' < i$ exists with~$\card{S^{i'}} > 1$, column~$j$ cannot become
lexicographically at least as large as column~$j+1$.
That is, $\ensuremath{\mathcal{D}} \cap O_{p \times q} = \emptyset$.
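The column computation just described can be sketched as follows for finite, ascendingly sorted set domains (a Python sketch of our own with 0-based indices, not the implementation used in this work):

```python
# Compute column j of the lexicographically smallest matrix, given the
# already computed column j+1; a sketch for finite set domains.
def lexmin_column(doms, next_col):
    """doms: list of ascendingly sorted lists (the domains D_i^j, i = 1..p);
    next_col: the known column underline{M}^{j+1}.
    Returns the lexicographically smallest column >= next_col, or None if
    none exists (i.e., D intersected with O_{p x q} is empty)."""
    p = len(doms)
    col = [None] * p
    strict_from = None   # first row where col > next_col, if any
    last_flex = None     # largest earlier row i' with |S^{i'}| > 1
    for i in range(p):
        if strict_from is not None:
            col[i] = doms[i][0]      # strictness already ensured: take min
            continue
        S = [v for v in doms[i] if v >= next_col[i]]
        if S:
            col[i] = S[0]
            if len(S) > 1:
                last_flex = i
            if col[i] > next_col[i]:
                strict_from = i
        else:
            # row i cannot match next_col: backtrack to last flexible row
            if last_flex is None:
                return None          # column j cannot be >= column j+1
            j = last_flex
            col[j] = min(v for v in doms[j] if v > next_col[j])
            for k in range(j + 1, p):
                col[k] = doms[k][0]
            return col
    return col
```

The backtracking step corresponds to returning to the largest~$i' < i$ with~$\card{S^{i'}} > 1$ and enforcing a lexicographic difference there.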
Analogously, one computes $\overline M$ by
\[
\overline{M}^j =
\begin{cases}
\lexmax\{ X \in \ensuremath{\mathcal{D}}^j : \overline M^{j - 1} \succeq X \}
& \text{if}\ j > 1,\ \text{and} \\
\lexmax\{ X \in \ensuremath{\mathcal{D}}^j \}
& \text{otherwise}.
\end{cases}
\]
Determining the~$j$-th column of~$\underline{M}$
and~$\overline{M}$ requires iterating over its elements a
constant number of times,
and for each element a constant number of comparisons and variable domain
reductions is executed.
The time for finding~$\underline{M}$ and~$\overline{M}$
is therefore~$\ensuremath{\mathop{\mathcal{O}}}(pq\tau)$, where~$\tau$ is again the time needed to reduce
variable domains.
Combining all arguments thus yields the following result regarding
orbitopal reduction.
\begin{theorem}
Let~$\ensuremath{\mathcal{D}} \subseteq \mathds{R}^{p \times q}$.
Orbitopal reduction finds the smallest~$\hat{\ensuremath{\mathcal{D}}} \subseteq \mathds{R}^{p
\times q}$ such that~$\hat{\ensuremath{\mathcal{D}}} \cap O_{p \times q} = \ensuremath{\mathcal{D}} \cap
O_{p \times q}$ holds in~$\ensuremath{\mathop{\mathcal{O}}}(pq\tau)$ time.
In particular, if all variable domains are intervals, orbitopal reduction
can be implemented to run in~$\ensuremath{\mathop{\mathcal{O}}}(pq)$ time.
\end{theorem}
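Given~$\underline{M}$ and~$\overline{M}$, the domain formula of Theorem~\ref{thm:genorbitopalfixing} is straightforward to apply; the following Python sketch (our own illustration for finite set domains, with 0-based indices; `M_lo` and `M_hi` stand for $\underline{M}$ and $\overline{M}$ and are assumed given) makes this concrete:

```python
# Apply the domain formula of the theorem: clip D_i^j to the interval
# [underline{M}_i^j, overline{M}_i^j] for all rows i <= i_j.
def orbitopal_reduce(doms, M_lo, M_hi):
    """doms[i][j]: finite list of values for entry (i, j);
    M_lo, M_hi: lexicographically extreme matrices of O cap D."""
    p, q = len(doms), len(doms[0])
    for j in range(q):
        # i_j: first row where the extreme matrices differ (p if none)
        i_j = next((i for i in range(p) if M_lo[i][j] != M_hi[i][j]), p)
        for i in range(min(i_j + 1, p)):
            doms[i][j] = [v for v in doms[i][j]
                          if M_lo[i][j] <= v <= M_hi[i][j]]
    return doms
```

Rows below~$i_j$ are left untouched, in accordance with the second case of the theorem.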
\subsubsection{Dynamic settings}
Similar to LexRed\xspace, also orbitopal reduction can be used to propagate
SHC\xspaces~$\sigma_\beta(x) \succeq \sigma_\beta\gamma(x)$ for
permutations~$\gamma$ from a group~$\Gamma$ of orbitopal symmetries.
The only requirement is that~$\sigma_\beta$ is compatible with the matrix
interpretation of a solution~$x$, which can be achieved by using the
symmetry prehandling structure of Example~\ref{ex:orbitopalfixing}.
In this case, the static orbitopal reduction algorithm is only applied to
the variables ``seen'' by~$\sigma_\beta(x) \succeq \sigma_\beta\gamma(x)$.
Note that this symmetry prehandling structure admits some degrees of
freedom in selecting~$\varphi_\beta$.
If~$\varphi_\beta = \ensuremath{\mathrm{id}}$ for all $\beta \in \ensuremath{\mathcal V}$, this resembles the
adapted version of orbitopal fixing as mentioned in
Section~\ref{sec:orbitopalfixing}.
But also other choices are possible as we will discuss in
Section~\ref{sec:num}.
\subsection{Variable ordering derived from branch-and-bound}
\label{sec:gen:orbitalreduction}
A natural question is whether generalizations of isomorphism pruning and
OF\xspace also exist.
The main challenge is that after branching on general variables, they are
not necessarily fixed (in contrast to the binary setting).
Thus, stabilizer computations as discussed in Section~\ref{sec:overview} might
not apply in the generalized setting.
Inspired by OF\xspace, we present a way to reduce variable domains of
arbitrary variables based on symmetry,
called \emph{orbital reduction}.
For vectors $x$ and $y$ of equal length $m$, we write $x \leq y$
if $x_i \leq y_i$ for all $i \in [m]$.
Let~$\beta \in \ensuremath{\mathcal V}$ be a node of the branch-and-bound tree,
$W^\beta \ensuremath{\coloneqq}
\{ x \in \mathds R^n :
\sigma_\beta(x) \succeq \sigma_\beta(\delta(x))\
\text{for all}\ \delta \in \Gamma
\}$,
and~$\Delta^\beta \ensuremath{\coloneqq} \{
\gamma \in \Gamma
:
\sigma_\beta(x) \leq \sigma_\beta \gamma(x)\
\text{for all}\ x \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta
\}
$
be a set of symmetries.
Similar to Section~\ref{sec:overviewtree},
we intend to use $\Delta^\beta$ to find VDR\xspaces.
We first show that this is indeed a group.
\begin{lemma}
Let~$\beta$ be a node of a B\&{}B\xspace-tree using single variable branching
with symmetry prehandling structure of Example~\ref{ex:vardynamic}.
Then, $\Delta^\beta$ is a group.
\end{lemma}
\begin{proof}
Recall that we assume~$\Gamma \leq \symmetricgroup{n}$ in this
section.
Therefore, both~$\Gamma$ and~$\Delta^\beta \subseteq \Gamma$ are finite.
Since~$\Delta^\beta$ is a nonempty finite subset of the group~$\Gamma$, it
suffices to show that compositions of elements of $\Delta^\beta$ are again
contained therein; the identity and inverses then follow automatically.
Let $\gamma_1, \gamma_2 \in \Delta^\beta$,
and suppose $x \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$.
By definition of $\Delta^\beta$,
we have $\sigma_\beta(x) \leq \sigma_\beta(\gamma_2(x))$.
Since $\gamma_2 \in \Delta^\beta \leq \Gamma$
and $x \in W^\beta$, we have
$\sigma_\beta(x) \succeq \sigma_\beta(\gamma_2(x))$.
Note that
$\sigma_\beta(x) \succeq \sigma_\beta(\gamma_2(x))$
and~
$\sigma_\beta(x) \leq \sigma_\beta(\gamma_2(x))$
imply~
$\sigma_\beta(x) = \sigma_\beta(\gamma_2(x))$.
Since the correctness conditions are satisfied for Example~\ref{ex:vardynamic},
due to Condition~\eqref{cond:permutationcondition},
the properties $x \in \ensuremath{\mathrm\Phi}(\beta)$ and
$\sigma_\beta(x) = \sigma_\beta(\gamma_2(x))$
imply that~$\gamma_2(x) \in \ensuremath{\mathrm\Phi}(\beta)$ holds.
Since $x \in W^\beta$,
for all~$\delta \in \Gamma$
we have~
$\sigma_\beta(\gamma_2(x)) = \sigma_\beta(x)
\succeq \sigma_\beta(\delta(x))$.
Because $\gamma_2$ is a group element of $\Gamma$,
we thus also have
$\sigma_\beta(\gamma_2(x))
\succeq \sigma_\beta(\delta \circ \gamma_2(x))$
for all $\delta \in \Gamma$,
meaning that $\gamma_2(x) \in W^\beta$.
Summarizing, we have $\sigma_\beta(x) = \sigma_\beta(\gamma_2(x))$
and $\gamma_2(x) \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$.
By analogy, the same results hold for~$\gamma_1(x)$.
Since $\gamma_2(x) \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$
and $\gamma_1 \in \Delta^\beta$,
the definition of $\Delta^\beta$ yields
$\sigma_\beta(\gamma_2(x)) \leq \sigma_\beta(\gamma_1 \circ \gamma_2(x))$.
Using the same reasoning as above, $\gamma_2(x) \in W^\beta$
implies~
$\sigma_\beta(\gamma_2(x)) \succeq \sigma_\beta(\gamma_1 \circ \gamma_2(x))$,
so~$\sigma_\beta(\gamma_2(x)) = \sigma_\beta(\gamma_1 \circ \gamma_2(x))$.
We thus find that
$\sigma_\beta(x) = \sigma_\beta(\gamma_2(x))
= \sigma_\beta(\gamma_1 \circ \gamma_2(x))$,
which implies $\gamma_1 \circ \gamma_2 \in \Delta^\beta$.
\end{proof}
We present two valid reductions based on~$\Delta^\beta$.
The first reduction shows that the variable domains of the variables
in the $\Delta^\beta$-orbit of the branching variable can possibly be tightened.
To this end,
denote the orbit of $i$ in $\Delta^\beta$ by~$O_i^\beta \ensuremath{\coloneqq} \{
\gamma(i) : \gamma \in \Delta^\beta \}$.
If the branching decision after node $\beta$
decreased the upper bound of variable~$x_i$ for some~$i \in [n]$,
a valid VDR\xspace is to decrease the upper bounds of~$x_j$ for all $j \in O^\beta_i$
to the same value as we show next.
\begin{lemma}[Orbital symmetry handling]
\label{lem:branchorbit}
Let $\ensuremath{\mathcal B} = (\ensuremath{\mathcal V}, \ensuremath{\mathcal E})$
be a B\&{}B\xspace-tree using single variable branching
with symmetry prehandling structure of Example~\ref{ex:vardynamic}.
Let $\omega \in \ensuremath{\mathcal V}$ be a child of $\beta \in \ensuremath{\mathcal V}$
where~$x_i$ is the branching variable for some $i \in [n]$.
Then, at node~$\omega$, each solution~$x \in \ensuremath{\mathrm\Phi}(\omega)$
satisfying~${\sigma_\omega(x) \succeq \sigma_\omega(\delta(x))}$
for all $\delta \in \Gamma$
(i.e., \eqref{eq:main} for node $\omega$) also
satisfies~$x_i \geq x_j$ for all~$j \in O_i^\beta$.
\end{lemma}
\begin{proof}
Let $\gamma \in \Delta^\beta$
and let $x \in \ensuremath{\mathrm\Phi}(\omega)$ satisfy
$\sigma_\omega(x) \succeq \sigma_\omega(\delta(x))$
for all $\delta \in \Gamma$.
Since $\omega$ is a child of $\beta$,
we have~$x \in \ensuremath{\mathrm\Phi}(\omega) \subseteq \ensuremath{\mathrm\Phi}(\beta)$.
Also, for all $\delta \in \Gamma$
we have $\sigma_\omega(x) \succeq \sigma_\omega(\delta(x))$,
so due to
the symmetry prehandling structure of Example~\ref{ex:vardynamic},
we also have~$\sigma_\beta(x) \succeq \sigma_\beta(\delta(x))$,
meaning that $x \in W^\beta$.
Hence, we have $x \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$.
By definition of $\Delta^\beta$,
we thus have $\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$.
Recall that due to Example~\ref{ex:vardynamic},
we have for all $\delta \in \Gamma$ that
$\sigma_\beta(x) \succeq \sigma_\beta(\delta(x))$.
Since $\gamma \in \Delta^\beta \leq \Gamma$,
therefore~$\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$
and
$\sigma_\beta(x) \succeq \sigma_\beta(\gamma(x))$ hold,
implying
$\sigma_\beta(x) = \sigma_\beta(\gamma(x))$.
Denote this result by ($\dagger$).
Due to Example~\ref{ex:vardynamic}, we have
\[
\sigma_\omega(x) =
\begin{cases}
\sigma_\beta(x),
& \text{if}\ i \in (\pi_\beta \varphi_\beta)^{-1}([m_\beta])
\ \text{(i.e., variable $x_i$ appears in $\sigma_\beta(x)$), and}\\
\binom{\sigma_\beta(x)}{x_i},
& \text{otherwise.}
\end{cases}
\]
As such, SHC\xspace $\sigma_\omega (x) \succeq \sigma_\omega (\gamma(x))$
is equivalent to either
$\sigma_\beta(x) \succ \sigma_\beta (\gamma(x))$,
or both~$\sigma_\beta(x) = \sigma_\beta (\gamma(x))$
and $x_i \geq \gamma(x)_i$.
Note that this holds independently of whether entry $i$ has been branched on
before (i.e., whether $i \in (\pi_\beta\varphi_\beta)^{-1}([m_\beta])$ or not).
Using ($\dagger$), the first of the two options cannot hold,
so we must have~$x_i \geq \gamma(x)_i = x_{\gamma^{-1}(i)}$.
Consequently, $x_i \geq x_{\gamma^{-1}(i)}$ is a valid SHC\xspace
for~$\gamma \in \Delta^\beta$.
Thus, for all $j \in O_i^\beta$, we can propagate $x_i \geq x_j$.
\end{proof}
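For interval domains, the reduction of Lemma~\ref{lem:branchorbit} can be sketched as follows (a Python sketch of our own with 0-based indices; the orbit is assumed to be given, e.g., computed from generators of a subgroup of~$\Delta^\beta$):

```python
# After branching on x_i, enforce x_i >= x_j for all j in the orbit of i;
# interval domains [lo, hi]. A sketch of orbital symmetry handling.
def propagate_branch_orbit(doms, orbit, i):
    lo_i, hi_i = doms[i]
    for j in orbit:
        if j == i:
            continue
        doms[j][1] = min(doms[j][1], hi_i)        # x_j <= x_i <= hi_i
        doms[i][0] = max(doms[i][0], doms[j][0])  # x_i >= x_j >= lo_j
    return doms
```

In particular, decreasing the upper bound of the branching variable~$x_i$ immediately decreases the upper bounds of all~$x_j$ with~$j$ in the orbit, as stated in the lemma.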
Second, recall our assumption that any VDR\xspace that is not based on our
symmetry framework needs to be symmetry compatible, see
Section~\ref{sec:framework}.
In practice, however, a solver might not find all symmetric VDR\xspaces, e.g.,
due to iteration limits.
The following lemma allows us to find missing (but not necessarily all)
VDR\xspaces based on symmetries, which corresponds to orbital fixing as discussed
in~\cite{pfetsch2019computational} without the restriction to binary
variables.
\begin{lemma}
\label{lem:intersection}
Let~$\beta$ be a node of a B\&{}B\xspace-tree using single variable branching
with symmetry prehandling structure of Example~\ref{ex:vardynamic}.
Then, when SHC\xspaces~\eqref{eq:main} are enforced
(i.e., solutions are in $W^\beta$),
a valid VDR\xspace is to reduce the domain of~$x_i$ to the
intersection of all variable domains $x_j$ for $j \in O_i^\beta$.
\end{lemma}
\begin{proof}
Let $x \in \ensuremath{\mathrm\Phi}(\beta)$ and let~$i \in [n]$.
Let~$j \in O_i^\beta$, i.e., there exists~$\gamma \in \Delta^\beta$
with~$\gamma(i) = j$.
As $\gamma \in \Delta^\beta$,
$\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$ holds.
Since SHC\xspaces~\eqref{eq:main} are enforced,
it must as well hold that
$\sigma_\beta(x) \succeq \sigma_\beta(\gamma(x))$,
and thus
$\sigma_\beta(x) = \sigma_\beta(\gamma(x))$.
Since Example~\ref{ex:vardynamic} satisfies the correctness conditions,
Condition~\eqref{cond:permutationcondition}
yields~$\gamma(x) \in \ensuremath{\mathrm\Phi}(\beta)$.
Thus, $x_i$ is not only contained in the domain of variable~$i$, but also
in the domain of variable~$x_{\gamma(i)}$.
For this reason, the domain of~$x_i$ can be restricted to the
intersection of the domains for all variables $x_j$ for all $j \in O_i^\beta$.
\end{proof}
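The reduction of Lemma~\ref{lem:intersection} can likewise be sketched for finite set domains (our own Python illustration with 0-based indices; permutations are given as index lists generating a subgroup of~$\Delta^\beta$):

```python
# Orbit computation and the orbit-based domain intersection of the lemma.
def orbit_of(i, perms):
    """Orbit of index i under the group generated by perms."""
    orbit, stack = {i}, [i]
    while stack:
        j = stack.pop()
        for g in perms:
            if g[j] not in orbit:
                orbit.add(g[j])
                stack.append(g[j])
    return orbit

def intersect_orbit_domains(doms, perms, i):
    """Restrict D_i to the intersection of all D_j, j in the orbit of i."""
    common = set(doms[i])
    for j in orbit_of(i, perms):
        common &= set(doms[j])
    return sorted(common)
```

Since orbits of a subgroup are contained in orbits of~$\Delta^\beta$, reductions found this way remain valid.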
In practice, Lemmas~\ref{lem:branchorbit} and~\ref{lem:intersection} cannot
be used immediately, as the orbits depend on~$\Delta^\beta$, which cannot be
computed easily since it depends on~$\ensuremath{\mathrm\Phi}(\beta)$ and $W^\beta$.
Instead, we work with a
suitably determined subgroup~$\tilde \Delta^\beta \leq \Delta^\beta$
and apply the reductions induced by that subgroup.
Because the reductions are based on variables in the same orbit,
and orbits of the subgroup are subsets of the orbits of the larger group,
VDR\xspaces found by $\tilde \Delta^\beta$ would also be found by~$\Delta^\beta$.
For all $i \in [n]$,
let~$\ensuremath{\mathcal{D}}_i^\beta \subseteq \mathds R$
be the (known) domain of variable~$x_i$ at node~$\beta$.
In particular, we thus have~$
\ensuremath{\mathrm\Phi}(\beta) \cap W^\beta \subseteq
\ensuremath{\mathrm\Phi}(\beta) \subseteq
\bigtimes_{i \in [n]} \ensuremath{\mathcal{D}}_i^\beta
$.
By replacing $\ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$
in the definition of $\Delta^\beta$
by $\bigtimes_{i \in [n]} \ensuremath{\mathcal{D}}_i^\beta$,
we get a subset of symmetries:
$
\{ \gamma \in \Gamma :
\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))\
\text{for all}\
x \in \bigtimes_{i \in[n]} \ensuremath{\mathcal{D}}_i^\beta
\}
\subseteq \Delta^\beta
$.
In particular, using (a subset of) the left set
as generating set, one finds a permutation group
that is a subgroup of~$\Delta^\beta$.
For the computational results
shown in Section~\ref{sec:num} we discuss the subset selection
procedure that we chose in our implementation.
We finish this section by showing that, for binary problems, the
VDR\xspaces yielded by OF\xspace from
Section~\ref{sec:overviewtree} are also yielded
by the generalized setting of
Lemmas~\ref{lem:branchorbit} and~\ref{lem:intersection}.
\begin{lemma}
Let $\Delta^{\smash\beta}_{\smash{\rm bin}}$
denote the group $\Delta^{\smash\beta}$
defined in Section~\ref{sec:overviewtree},
and $\Delta^{\smash\beta}_{\smash{\rm gen}}$
the group with the same symbol defined here.
If the symmetry group acts on binary variables exclusively,
then $\Delta^\beta_{\rm bin} \leq \Delta^\beta_{\rm gen}$.
\end{lemma}
\begin{proof}
In binary problems, branching on a variable fixes its value.
As such, because the vector $\sigma_\beta(x)$ contains all branched variables,
it is the same for all~${x \in \ensuremath{\mathrm\Phi}(\beta)}$.
Suppose $\gamma \in \Delta^\beta_{\rm bin}$,
i.e., $\gamma \in \Gamma$ and~$\gamma(B_1^\beta) = B_1^\beta$.
For all $i \in [m_\beta]$ with
$\sigma_\beta(x)_i = 1$,
we have $\sigma_\beta(\gamma(x))_i = 1$
for all $x \in \ensuremath{\mathrm\Phi}(\beta)$.
Similarly, for all $i \in [m_\beta]$
with $\sigma_\beta(x)_i = 0$,
we have $\sigma_\beta(\gamma(x))_i \geq 0$.
This means that for all~$x \in \ensuremath{\mathrm\Phi}(\beta)$
we have $\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$.
This holds in particular for all $x \in \ensuremath{\mathrm\Phi}(\beta) \cap W^\beta$,
i.e., $\gamma \in \Delta^\beta_{\rm gen}$.
\end{proof}
By the OF\xspace rule described in Section~\ref{sec:overviewtree},
for all variable indices $i$ where $x_i$ is branched to zero,
all variables $x_j$ with $j$ in the orbit of $i$ in
$\Delta^\beta_{\smash{\rm bin}}$ can be fixed to zero, as well.
Because $\Delta^\beta_{\smash{\rm bin}}
\leq \Delta^\beta_{\smash{\rm gen}}$,
every orbit of $\Delta^\beta_{\smash{\rm bin}}$
is contained in an orbit of $\Delta^\beta_{\smash{\rm gen}}$.
As such, if $x_i$ is the branching variable at the present node,
this fixing is implied by~$x_i \geq x_j$ in Lemma~\ref{lem:branchorbit};
otherwise, it is implied by Lemma~\ref{lem:intersection}.
\section{Computational study}
\label{sec:num}
To assess the effectiveness of our methods, we compare the running times
of the implementations of the various dynamic symmetry handling methods
of Section~\ref{sec:cast} (in the regime of Examples~\ref{ex:vardynamic}
and~\ref{ex:orbitopalfixing}) to similar existing methods.
To this end, we make use of diverse testsets.
\begin{itemize}
\item Symmetric benchmark instances from
MIPLIB~2010~\cite{KochEtAl2011MIPLIB} and
MIPLIB~2017~\cite{Gleixner2021MIPLIB}.
\item Finding minimum $t\text{-}(v,k,\lambda)$-covering designs.
\item Noise dosage problem instances as discussed by
Sherali and Smith~\cite{sherali2001models} (cf. Problem~\ref{prob:NDbinary}).
\end{itemize}
The MIPLIB instances offer a diverse set of instances that contain
symmetries, but these symmetries operate on binary variables predominantly.
To evaluate the effectiveness of our framework for non-binary problems, we
consider the covering design and noise dosage instances.
The symmetries of the former are orbitopal, whereas the latter has no
orbitopal symmetries.
Although our framework can handle more general symmetries, we
restrict the numerical experiments to permutation symmetries.
On the one hand, \code{SCIP} can currently only detect permutation
symmetries.
On the other hand, most symmetry handling methods discussed in the literature only
apply to permutation symmetries.
The development of methods for other kinds of symmetries is out of scope of
this article.
\subsection{Solver components and configurations}
We use a development version of
the solver \code{SCIP~8.0.3.5}~\cite{BestuzhevaEtal2021OO},
commit
\texttt{8443db2}\footnote{Public mirror: \url{https://github.com/scipopt/scip/tree/8443db213892153ff2e9d6e70c343024fb26968c}},
with LP-solver \code{Soplex~6.0.3}.
\code{SCIP} contains implementations of the state-of-the-art methods
LexFix\xspace, orbitopal fixing, and OF\xspace to which we compare our methods.
We have extended the code with our dynamic methods.
Our modified code is available on
GitHub\footnote{Project page: \url{https://github.com/JasperNL/scip-unified}}.
This repository also contains the instance generators and
problem instances for the noise dosage and covering design problems.
For all settings, symmetries are detected
by finding automorphisms of a suitable
graph~\cite{pfetsch2019computational,salvagnin2005dominance} using
\code{bliss~0.77}~\cite{JunttilaKaski2015bliss}.
We make use of the readily implemented symmetry detection code of \code{SCIP},
which finds a set of permutations $\Pi$ that generate a
symmetry group $\Gamma$ of the problem, namely the symmetries implied
by its formulation~\cite{pfetsch2019computational,salvagnin2005dominance,margot2010symmetry}.
This is a permutation group acting on the solution vector index space,
so the setting of Sections~\ref{sec:overview} and~\ref{sec:cast} applies.
If~$\Gamma$ is a product group consisting of $k$ components, i.e., $\Gamma
= \bigtimes_{i \in [k]} \Gamma_i$,
then by using similar arguments
as in~\cite[Proposition~5]{hojny2019polytopes},
the symmetries of the different components~$\Gamma_i$, $i \in [k]$, can be
handled independently; compositions of permutations from different
components do not need to be taken into account.
In particular, it is possible to select a different symmetry prehandling
structure for the different components.
We therefore decompose the set of permutations $\Pi$ generating $\Gamma$
that are found by \code{SCIP} in components,
yielding generating sets~$\Pi_1, \dots, \Pi_k$
for components~$\Gamma_1, \dots, \Gamma_k$.
Symmetry in each component is handled separately.
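The decomposition of the generating set into components can be sketched as follows; this is a standard union-find over the supports of the generators (our own sketch, not the \code{SCIP} implementation):

```python
def decompose(generators, n):
    """Split a generating set into components: generators whose supports
    (moved indices) overlap, directly or transitively, end up in the same
    component (union-find over the index set [n])."""
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    supports = []
    for g in generators:
        moved = [i for i in range(n) if g[i] != i]
        supports.append(moved)
        for i in moved[1:]:
            parent[find(moved[0])] = find(i)  # union the support indices

    components = {}
    for g, moved in zip(generators, supports):
        if not moved:
            continue  # an identity generator contributes nothing
        components.setdefault(find(moved[0]), []).append(g)
    return list(components.values())
```

For instance, two disjoint transpositions form two components, while adding a generator overlapping both merges them into one.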
For all settings, we disable restarts to ensure that all methods exploit
the same symmetry information.
We compare our newly implemented methods to the
methods originally implemented in \code{SCIP}.
\paragraph{Our configurations}
For every component $\Gamma_i$,
we handle symmetries as follows, where we skip some steps if the
corresponding symmetry handling method is disabled.
\begin{enumerate}
\item\label{step:orbitope} If a \code{SCIP}-internal heuristic,
cf.~\cite[Sec.~4.1]{hojny2019polytopes}, detects that~$\Gamma_i$ consists
of orbitopal symmetries:
\begin{enumerate}
\item\label{step:cons} If the orbitope matrix is a single row
$[x_1 \cdots x_\ell]$, add linear constraints
$x_1 \geq \dots \geq x_\ell$.
\item Otherwise, if the orbitope matrix contains only two columns
(i.e., is generated by a single permutation $\gamma$),
then use lexicographic reduction using the dynamic
variable ordering of Example~\ref{ex:vardynamic},
as described in Section~\ref{sec:gen:lexred}.
\item\label{step:pack} Otherwise, if there are at least 3 rows with
binary variables whose sum is at most 1 (so-called packing-partitioning type),
then use the complete static propagation method for packing-partitioning
orbitopes as described by Kaibel and Pfetsch~\cite{KaibelPfetsch2008},
where the orbitope matrix is restricted to the rows with this structure.
\item\label{step:dyn} Otherwise, use dynamic orbitopal reduction
as described in Section~\ref{sec:gen:orbitopalfixing}
using the dynamic variable ordering of Example~\ref{ex:orbitopalfixing}.
We select $\varphi_\beta$ such that it
swaps the column containing the branched variable
to the middlemost (or leftmost) symmetrically equivalent column
when propagating the SHC\xspaces~\eqref{eq:main}
using static orbitopal reduction.
\end{enumerate}
\item Otherwise (i.e., if the symmetries are not orbitopal),
use the symmetry prehandling structure of Example~\ref{ex:vardynamic}
and use two compatible methods simultaneously:
\begin{enumerate}
\item Lexicographic reduction as described
in Section~\ref{sec:gen:lexred}.
\item Orbital reduction as described
in Section~\ref{sec:gen:orbitalreduction}.
Since computing $\Delta^\beta$ is non-trivial,
we work with a subgroup of $\Delta^\beta$,
namely the group generated by all permutations $\gamma \in \Pi_i$
for which $\sigma_\beta(x) \leq \sigma_\beta(\gamma(x))$
for all $x \in \bigtimes_{j \in [n]} \ensuremath{\mathcal{D}}_j^\beta$.
\end{enumerate}
\end{enumerate}
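The decision list above can be restated schematically; the dictionary keys below stand in for solver-internal detection results and are placeholders of our own, not \code{SCIP} API calls:

```python
def configure_component(c):
    """Schematic dispatch mirroring Steps 1(a)-(d) and 2 above.  The keys
    of the dict c are hypothetical stand-ins for SCIP-internal checks."""
    if c["is_orbitopal"]:
        if c["num_rows"] == 1:
            return "linear-constraints"               # Step (a)
        if c["num_columns"] == 2:
            return "dynamic-lexicographic-reduction"  # Step (b)
        if c["packing_partitioning_rows"] >= 3:
            return "static-packing-partitioning"      # Step (c)
        return "dynamic-orbitopal-reduction"          # Step (d)
    # Non-orbitopal component: both compatible methods simultaneously.
    return "lexicographic-reduction+orbital-reduction"
```

The ordering of the checks matters: single-row and two-column orbitopes are peeled off before the packing-partitioning test, exactly as in the enumeration.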
We also compare settings where
orbitopal reduction, orbital reduction and lexicographic reduction
are turned off. If orbitopal symmetries are not handled, we always
resort to the second setting, where lexicographic reduction (if enabled)
and/or orbital reduction (if enabled) are applied.
For orbitopal symmetries, we have chosen to handle certain common cases
before falling back to the dynamic orbitopal reduction code that we devised.
First, for single-row orbitopes, the symmetry is completely handled
by the linear constraints.
Since linear constraints are strong and work well with other components
of the solver, we decided to handle those this way.
If an orbitope only has two columns, the underlying symmetry group
is generated by a single permutation of order~2. In that case,
the symmetry is completely handled by lexicographic reduction.
Third, it is well known that exploiting problem-specific information
can greatly assist symmetry handling.
If a packing-partitioning structure can be detected, we therefore apply
specialized methods as discussed above.
Otherwise, we use orbitopal reduction as discussed in
Section~\ref{sec:gen:orbitopalfixing}.
In Step~\ref{step:dyn}, the choice
of~$\varphi_\beta$ is inspired by the discussion in
Section~\ref{sec:theframework} (``Interpretation'').
By moving the branching variable to the middlemost possible column,
balanced subproblems are created, whereas the leftmost possible column
might lead to more reductions in one child than in the other.
Below, we will investigate which technique is more favorable.
For the components that do not consist of orbitopal symmetries,
we settled on the setting of Example~\ref{ex:vardynamic}
and use both compatible methods.
Since the \code{SCIP} version that we compare to either uses orbital fixing or
(static) lexicographic fixing for such components, our setting allows us to
assess the impact of adapting lexicographic fixing to make it compatible
with orbital fixing.
\paragraph{Comparison base}
We compare to similar, readily implemented methods in \code{SCIP}.
In \code{SCIP} jargon, these methods are called \emph{polyhedral} and
\emph{orbital fixing} and can be enabled/disabled independently.
The polyhedral methods consist of LexFix\xspace for the SHC\xspaces~$x \succeq
\gamma(x)$ for~$\gamma \in \Pi_i$ and methods to handle orbitopal
symmetries; orbital fixing uses OF\xspace
from~\cite{pfetsch2019computational}.
Note that only symmetries of binary variables are handled.
If a component consists of orbitopal symmetries, the polyhedral methods
handle these symmetries by carrying out the check of Step~\ref{step:pack}.
In case it evaluates positively, methods exploiting packing-partitioning
structures are applied as described above.
Otherwise, orbitopal symmetries of binary variables are handled by a
variant of row-dynamic orbitopal fixing.
The remaining components are either handled by static LexFix\xspace or orbital
fixing, depending on whether the polyhedral methods or orbital fixing is
enabled.
Moreover, if polyhedral methods are disabled, orbital fixing is also
applied to components consisting of orbitopal symmetries.
\subsection{Results}
All experiments were run in parallel
on the ``thin'' partition of the Dutch national supercomputer
Snellius, whose compute nodes have dual AMD Rome 7H12
processors providing a total of \num{128} physical CPU cores
and~\SI{256}{\giga\byte} of memory.
Each process has an allocation of \num{4} physical CPU cores
and~\SI{8}{\giga\byte} of memory.
In the results below we report the running times
(column \emph{time})
and time spent on symmetry handling (column \emph{sym}) in shifted
geometric mean~$\prod_{i = 1}^n (t_i + 1)^{\frac{1}{n}} - 1$
to reduce the impact of outliers.
We also report the number of instances solved within the time limit
of~\SI{1}{\hour} (column~\emph{\#S}).
If the time limit is reached, the solving time of that instance
is reported as \SI{1}{\hour}.
None of the instances failed or exceeded the memory limit.
We report the aggregated results for all instances,
for all instances for which at least one setting solved the instance
within the time limit,
and for all instances solved by all settings.
For each of these classes, we provide their size below.
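For reference, the aggregation measure used in the tables can be sketched as follows (our own helper, with shift $s = 1$ as in the formula above):

```python
def shifted_geometric_mean(times, shift=1.0):
    """Shifted geometric mean prod_i (t_i + s)^(1/n) - s, used for the
    'time' and 'sym' columns; the paper uses shift s = 1."""
    n = len(times)
    prod = 1.0
    for t in times:
        prod *= (t + shift) ** (1.0 / n)
    return prod - shift
```

Compared to the plain arithmetic mean, this measure damps the influence of a few very slow instances while the shift keeps very fast instances from dominating.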
We use abbreviations for the settings. We compare
no symmetry handling (\emph{Nosym}),
traditional polyhedral methods (\emph{Polyh}),
traditional orbital fixing (\emph{OF}),
dynamic orbitopal reduction (\emph{OtopRed}),
dynamic lexicographic reduction (\emph{LexRed}),
orbitopal reduction (\emph{OR}),
and combinations hereof.
Note that even the \emph{Nosym} setting reports a small symmetry time:
due to \code{SCIP}'s architecture, handling the corresponding
plug-in requires some time even if it is not used.
Moreover, if symmetries are handled by linear constraints in the model
(cf.\ Step~\ref{step:cons}),
these are not reported in the symmetry handling figures.
Recall from Step~\ref{step:dyn} that we consider two variants of
selecting~$\varphi_\beta$ for dynamic orbitopal fixing.
We refer to these variants as \emph{first} and \emph{median}
for the leftmost and
middlemost column, respectively.
As the noise dosage testset exclusively consists of orbitopal symmetries,
we test both parameterizations there.
For the remaining testsets, we restrict ourselves to \emph{median} as it
performs better on average for the noise dosage instances.
Since the covering design and noise dosage testsets
are relatively small and contain many easy instances, we reduce
performance variability by repeating each
configuration-instance pair three times with a different
global random seed shift.
Due to the large number of instances and settings and the large time
requirements, only one seed is used for the MIPLIB instances.
\subsubsection{MIPLIB}
\label{sec:results:miplib}
To compose our testset, we presolved all instances from the
MIPLIB~2010~\cite{KochEtAl2011MIPLIB} and
MIPLIB~2017~\cite{Gleixner2021MIPLIB} benchmark testsets
and selected those for which \code{SCIP} could detect symmetries.
This results in~129 instances.
We excluded instance \texttt{mspp16} as it exceeds the memory limit
during presolving.
The goal of our experiments for MIPLIB instances is twofold.
On the one hand, we investigate whether our framework solves
symmetric mixed-integer programs faster than the state-of-the-art methods
implemented in \code{SCIP}.
On the other hand, we are interested in the effect of adapting different
symmetry handling methods for~$\sigma_\beta(x) \succeq
\sigma_\beta(\gamma(x))$ in comparison with their static counterparts.
Table~\ref{tab:results:miplib} shows the aggregate results of our
experiments.
\begin{table}[t]
\caption{Results for MIPLIB 2010 and MIPLIB 2017}
\label{tab:results:miplib}
\centering
\footnotesize
\begin{tabular}{@{}L{3.2cm}*{3}{R{1cm}R{1cm}R{.6cm}}@{}}
\toprule
\multicolumn{1}{c}{Setting}
& \multicolumn{3}{c}{All instances (128)}
& \multicolumn{3}{c}{Solved by some setting (75)}
& \multicolumn{3}{c}{Solved by all settings (51)}
\\
\cmidrule(lr){1-1}
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
\cmidrule(lr){8-10}
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
\\
\cmidrule{2-10}
Nosym
& 970.73 & 0.18 & 59
& 384.01 & 0.17 & 59
& 157.61 & 0.08 & 51
\\
Polyh
& 747.43 & 1.82 & 67
& 245.46 & 1.02 & 67
& 138.09 & 0.69 & 51
\\
OF
& 822.26 & 1.34 & 66
& 289.16 & 0.93 & 66
& 134.34 & 0.48 & 51
\\
Polyh + OF
& 728.88 & 1.97 & 69
& 235.17 & 0.98 & 69
& 130.08 & 0.58 & 51
\\
\midrule
OtopRed
& 811.90 & 1.34 & 62
& 282.95 & 0.75 & 62
& 129.08 & 0.49 & 51
\\
LexRed
& 799.82 & 2.22 & 67
& 275.78 & 1.30 & 67
& 142.30 & 0.73 & 51
\\
OR
& 807.75 & 4.94 & 66
& 280.46 & 3.29 & 66
& 140.62 & 1.40 & 51
\\
OR + LexRed
& 788.29 & 5.44 & 68
& 269.01 & 3.43 & 68
& 138.18 & 1.57 & 51
\\
OR + OtopRed
& 727.04 & 2.38 & 66
& 234.14 & 1.34 & 66
& 128.62 & 0.57 & 51
\\
OtopRed + LexRed
& 691.22 & 1.90 & 68
& 214.84 & 1.01 & 68
& 123.68 & 0.56 & 51
\\
OR + OtopRed + LexRed
& 708.63 & 2.57 & 67
& 224.18 & 1.43 & 67
& 125.73 & 0.63 & 51
\\
\bottomrule
\end{tabular}
\end{table}
Regarding the first question, we observe that using any type of symmetry
handling is vastly superior to the setting where no symmetry is handled.
Considering all instances,
the best of the traditional settings reports an average running
time improvement of \SI{24.9}{\percent} over \emph{Nosym},
and the best of the dynamified methods reports \SI{28.8}{\percent}.
Our framework thus improves on \code{SCIP}'s state-of-the-art
by~\SI{5.2}{\percent}.
On the instances that can be solved by at least one setting, this effect is
even more pronounced and improves on \code{SCIP}'s best setting by
\SI{8.6}{\percent}.
We believe that this is a substantial improvement, because the MIPLIB
instances are rather diverse.
In particular, some of these instances contain only very few
symmetries.
Regarding the second question, we compare the \code{SCIP} settings
\emph{Polyh}, \emph{OF}, and \emph{Polyh + OF} with their counterparts in our
framework, being \emph{OtopRed + LexRed}, \emph{OR}, and \emph{OR +
OtopRed}, respectively.
The running time of the pure polyhedral setting \emph{Polyh} can be
improved in our framework by~\SI{7.5}{\percent} when considering all
instances, and by~\SI{12.4}{\percent} when considering only the instances
solved by some setting within the time limit.
Consequently, adapting symmetry handling to the branching order via the
symmetry prehandling structures of Examples~\ref{ex:vardynamic}
and~\ref{ex:orbitopalfixing} yields substantial performance
improvements.
Our explanation for this behavior is that symmetry reductions can be found
much earlier in branch-and-bound than in the static setting (cf.\
Figure~\ref{fig:branching:orbitopalfixing}, where no reductions can be
found at depth~1 if no adaptation is used).
Thus, symmetric parts of the branch-and-bound tree can be pruned earlier.
Comparing \emph{OF} and \emph{OR}, we observe that \emph{OR} is slightly
faster than \emph{OF}.
Both methods, however, are much slower than \emph{Polyh} and \emph{OtopRed
+ LexRed}.
A possible explanation is that the latter methods make use of orbitopal
fixing, which can handle entire symmetry groups, whereas \emph{OF} and
\emph{OR} only find some reductions based on orbits.
Among \code{SCIP}'s methods, \emph{Polyh + OF} performs best.
Its counterpart \emph{OR + OtopRed} in our framework performs comparably on
all instances and on the solvable instances; however, three fewer instances
are solved.
A possible explanation for the comparable running time is that traditional
orbital fixing and the variant of row-dynamic
orbitopal fixing are already dynamic methods.
Thus, a comparable running time can be expected (although this does not
explain the difference in the number of solved instances).
Lastly, we discuss settings that are not possible in the traditional
setting, i.e., combining LexRed\xspace and orbital reduction.
Enhancing orbital reduction by LexRed\xspace indeed leads to an improvement
of~\SI{2.4}{\percent} and solves two more instances.
The best setting, however, does not enable all methods in
our framework.
Indeed, the running time of \emph{OR + OtopRed + LexRed} can be improved by
\SI{4.9}{\percent} when disabling orbital reduction.
We explain this phenomenon with the fact that orbitopal reduction already
handles a lot of group structure.
Combining LexRed\xspace and orbital reduction on the remaining components only
finds a few more reductions.
The time needed for finding these reductions is then not compensated by the
symmetry reduction effect.
If no group structure is handled via orbitopal reduction, LexRed\xspace can
indeed be enhanced by orbital reduction.
Recall that symmetries in MIPLIB instances
act predominantly on binary variables.
As opposed to the traditional settings that we compare to,
the generalized setting can handle symmetries on non-binary
variable domains.
Thus, potentially more reductions can follow from larger
symmetry groups that include non-binary variables.
As shown in Appendix~\ref{app:tables},
a larger group is detected in only 10 out of the 128
instances, and only 2 of these can be solved by some setting.
When considering the subset of instances
where symmetry is handled based on the same group,
we report similar results as before.
This shows that even if we only consider problems
where symmetries act on the binary variables,
our generalized methods outperform similar state-of-the-art methods.
\subsubsection{Minimum $\boldsymbol{t\text{-}(v, k, \lambda)}$-covering
designs}
Since the symmetries of MIPLIB instances predominantly act on binary
variables, we turn the focus in the following to symmetric problems without
binary variables to assess our framework in this regime.
Let~$v \geq k \geq t \geq 0$ and $\lambda > 0$ be integers.
Let $V$ be a set of cardinality $v$,
and let~$\mathcal K$ (resp.\@~$\mathcal T$)
be the collections of all subsets of~$V$ having sizes~$k$ (resp.\@~$t$).
A~\emph{$t\text{-}(v, k, \lambda)$-covering design} is a
multiset~$\mathcal{C}$ of elements of~$\mathcal{K}$
such that every set $T \in \mathcal T$ is contained in
at least $\lambda$ sets of $\mathcal C$, counted with multiplicity.
A covering design is \emph{minimum} if no smaller covering design
with these parameters exists; finding minimum covering designs is of
interest, see,
e.g.,~\cite{margot2003coveringdesigns,nurmela1999covering,fadlaoui2011tabucoveringdesigns}.
Margot~\cite{margot2003coveringdesigns} gives an ILP formulation
with decision variables~
$\nu \in \{0, \dots, \lambda \}^{\mathcal K}$
specifying the multiplicity of each set $K \in \mathcal K$
in the minimum $t\text{-}(v, k, \lambda)$-covering design sought.
The problem is to
\begin{subequations}
\makeatletter
\renewcommand{\theequation}{CD\arabic{equation}}
\makeatother
\label{eq:CD}
\begin{align}
\text{minimize}\ \sum_{K \in \mathcal K} \nu_{K}&, \\
\text{subject to}\ \sum_{K \in \mathcal K : T \subseteq K} \nu_K &\geq \lambda
&&\text{for all}\ T \in \mathcal T,\\
\nu &\in \{ 0, \dots, \lambda \}^{\mathcal K}.
\end{align}
\end{subequations}
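The covering constraints of~\eqref{eq:CD} can be enumerated directly; a small sketch (our own helper, independent of any solver):

```python
from itertools import combinations

def covering_design_rows(v, k, t):
    """Enumerate the covering constraints of (CD): for every t-subset T of
    V = {0, ..., v-1}, the indices of the k-subsets K with T contained in K,
    whose multiplicities nu_K must sum to at least lambda."""
    V = range(v)
    Ks = [frozenset(K) for K in combinations(V, k)]
    rows = [[i for i, K in enumerate(Ks) if T <= K]
            for T in map(frozenset, combinations(V, t))]
    return Ks, rows
```

Each row has exactly $\binom{v-t}{k-t}$ nonzero entries, since a $t$-set extends to a $k$-set by choosing $k - t$ of the remaining $v - t$ elements.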
Symmetries in this problem re-label the elements of $V$,
and these are also detected by the symmetry detection routine of \code{SCIP}.
Note that, although the underlying group is the full symmetric group
on $V$, the induced symmetries on the variables $\nu$ are not orbitopal.
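To make the induced action concrete: a relabelling $\gamma$ of $V$ permutes the variables $\nu$ by mapping each $K$ to $\gamma(K)$. A sketch (our own helper; the lexicographic indexing of $k$-subsets is an assumption for illustration):

```python
from itertools import combinations

def induced_variable_permutation(gamma, v, k):
    """A relabelling gamma of V = {0, ..., v-1} permutes the variables
    nu_K by sending K to gamma(K); return that permutation over the index
    space of k-subsets, sorted lexicographically."""
    Ks = [tuple(K) for K in combinations(range(v), k)]
    index = {K: i for i, K in enumerate(Ks)}
    return [index[tuple(sorted(gamma[e] for e in K))] for K in Ks]
```

For example, with $v = 3$, $k = 2$, swapping the elements $0$ and $1$ fixes the variable for $\{0,1\}$ and swaps those for $\{0,2\}$ and $\{1,2\}$.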
Margot~\cite{margot2003coveringdesigns} considers
an instance with $\lambda = 1$, which is a binary problem.
We consider all non-binary instances with parameters
$\lambda \in \{ 2, 3 \}$ and $12 \geq v \geq k \geq t \geq 1$,
and restrict ourselves to instances that were solved
within \SI{7200}{\second} in preliminary runs
and that required at least \SI{10}{\second} for some setting.
This way, 58 instances remain.
With 3 seeds per instance, we end up with~174 results.
The aggregated results are shown in Table~\ref{tab:results:coveringdesigns}.
Because the instances are non-binary,
none of the considered traditional methods can be applied to handle
the symmetries. As such, we can only compare to no symmetry handling.
The best of our settings reports an improvement of~\SI{77.8}{\percent}
over no symmetry handling.
\begin{table}[t]
\caption{Results for finding minimum $t\text{-}(v,k,\lambda)$ covering designs}
\label{tab:results:coveringdesigns}
\centering
\footnotesize
\begin{tabular}{@{}L{2.4cm}*{3}{R{1cm}R{1cm}R{.6cm}}@{}}
\toprule
\multicolumn{1}{c}{Setting}
& \multicolumn{3}{c}{All instances (174)}
& \multicolumn{3}{c}{Solved by some setting (171)}
& \multicolumn{3}{c}{Solved by all settings (126)}
\\
\cmidrule(lr){1-1}
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
\cmidrule(lr){8-10}
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
\\
\cmidrule{2-10}
Nosym
& 211.13 & 0.16 & 138
& 200.85 & 0.15 & 138
& 100.65 & 0.08 & 126
\\
LexRed
& 80.98 & 2.19 & 154
& 75.72 & 2.01 & 154
& 30.67 & 0.60 & 126
\\
OR
& 42.75 & 0.72 & 159
& 39.49 & 0.64 & 159
& 18.52 & 0.20 & 126
\\
OR + LexRed
& 39.92 & 1.51 & 159
& 36.83 & 1.35 & 159
& 17.00 & 0.50 & 126
\\
\bottomrule
\end{tabular}
\end{table}
If orbital reduction and LexRed\xspace are not combined, orbital reduction is the
more competitive method as it improves upon LexRed\xspace by~\SI{47.2}{\percent}.
Although LexRed\xspace is much faster than \emph{Nosym}, this comparison shows
that more symmetries can be handled when the group structure is exploited
via orbits.
Nevertheless, our framework improves orbital reduction by
another~\SI{6.6}{\percent} when it is combined with LexRed\xspace.
That is, orbital reduction is not able to capture the entire group
structure and missing information can be handled by LexRed\xspace efficiently via
our framework.
\subsubsection{Noise dosage}
\label{sec:results:noisedosage}
To assess the effectiveness of orbitopal reduction isolated from other
symmetry handling methods, we consider the noise dosage (ND)
problem~\cite{sherali2001models}, which has orbitopal symmetries as
explained in Problem~\ref{prob:NDbinary}.
However, we replace the binary constraint~\eqref{prob:ND:binary}
by~${\vartheta \in \mathds Z_{\geq 0}^{p \times q}}$ to be able to evaluate the
effect of orbitopal reduction on non-binary problems.
In particular, we are interested in whether the choice between the
\emph{first} and \emph{median} variants
of the parameter~$\varphi_\beta$ matters.
For each of the parameters $(p, q) \in \{ (3, 8), (4, 9), (5, 10) \}$,
Sherali and Smith have generated four instances~\cite{sherali2001models}.
We thank J.\@ Cole Smith for providing us these instances.
It turns out that these instances are dated and very easy to solve
even without symmetry handling methods.
As such, we have extended the testset.
For each parameterization
$(p, q) \in \{ (6, 11), (7, 12), \dots, (11, 16) \}$,
we generate five instances.
The details of our generator are in Appendix~\ref{app:noisedosage}.
Symmetries in the ND problem can be handled by adding
$\sum_{i=1}^p M^{p-i} \vartheta_{i,j} \geq \sum_{i=1}^p M^{p-i} \vartheta_{i,j+1}$
for $j \in \{1, \dots, q-1\}$,
where $M$ is an upper bound to the maximal number of tasks
that one worker can perform on a machine,
as described by Sherali and Smith~\cite{sherali2001models}.
This is similar to fundamental domain inequalities~\cite{friedman2007fundamental}
and the symmetry handling constraints of Ostrowski~\cite{ostrowski2009symmetry}.
Although these inequalities can be used to handle symmetries,
it is folklore that constraints with widely varying coefficients can lead
to numerical instabilities.
These constraints work well for instances with a small number of machines $p$,
such as in the original instances of Sherali and Smith.
However, for our instance \texttt{noise11\_16\_480\_s2}, such a constraint
has minimal absolute coefficient~1 and maximal absolute coefficient~$11^{10}$.
Various warnings in the log files confirm the presence of numerical
instabilities.
In fact, observable incorrect results follow.
For instance, when these linear constraints are added,
instance \texttt{noise11\_16\_480\_s1} is declared infeasible
during presolving, although it is feasible,
as illustrated by the runs without symmetry handling and with
dynamic orbitopal fixing.
Moreover, for instance \texttt{noise9\_14\_480\_s3},
no infeasibility is detected, but a wrong optimal solution is reported.
Thus, there is a need to replace these inequalities with numerically more
stable methods.
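The coefficient spread behind these instabilities is easy to reproduce; taking $M = 11$, consistent with the reported maximal coefficient $11^{10}$ for $p = 11$ (our reading of the instance data):

```python
def sherali_smith_coefficients(p, M):
    """Coefficients M^(p-i), i = 1, ..., p, of the symmetry handling
    inequalities above; the ratio of the largest to the smallest
    coefficient grows as M^(p-1)."""
    return [M ** (p - i) for i in range(1, p + 1)]

coeffs = sherali_smith_coefficients(11, 11)
spread = max(coeffs) // min(coeffs)  # 11^10 for the noise11_16 instances
```

With double-precision LP arithmetic, a spread of roughly $2.6 \cdot 10^{10}$ between coefficients of a single row plausibly explains the observed warnings.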
\begin{table}[t]
\caption{Results for finding optimal solutions to the noise dosage problems}
\label{tab:results:noisedosage}
\centering
\footnotesize
\begin{tabular}{@{}L{2.4cm}*{3}{R{1cm}R{1cm}R{.6cm}}@{}}
\toprule
\multicolumn{1}{c}{Setting}
& \multicolumn{3}{c}{All instances (165)}
& \multicolumn{3}{c}{Solved by some setting (132)}
& \multicolumn{3}{c}{Solved by all settings (90)}
\\
\cmidrule(lr){1-1}
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
\cmidrule(lr){8-10}
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
\\
\cmidrule{2-10}
Nosym
& 152.97 & 1.33 & 90
& 69.02 & 0.87 & 90
& 10.13 & 0.19 & 90
\\
Sherali-Smith
& 26.45 & 0.66 & 129
& 7.11 & 0.18 & 129
& 1.30 & 0.02 & 90
\\
OtopRed (first)
& 35.55 & 4.11 & 123
& 10.60 & 1.29 & 123
& 2.10 & 0.20 & 90
\\
OtopRed (median)
& 33.89 & 3.84 & 117
& 9.95 & 1.25 & 117
& 1.73 & 0.13 & 90
\\
\bottomrule
\end{tabular}
\end{table}
We have removed these two instances from our testset as they evidently
produce incorrect results.
The aggregated results for the remaining instances are presented in
Table~\ref{tab:results:noisedosage}.
We observe that the symmetry handling inequalities perform~\SI{22.0}{\percent} better than
orbitopal reduction.
However, the presented numbers for the inequality-based approach need
to be interpreted carefully as reporting a correct objective value does not
necessarily mean that the branch-and-bound algorithm worked correctly.
For instance, nodes might have been pruned because of numerical
inaccuracies although a numerically correct algorithm would not have pruned
them.
These issues do not occur for orbitopal reduction, since this propagation
algorithm only reduces variable domains.
Comparing the two parameterizations of orbitopal fixing, we see that the
\emph{median} variant performs \SI{4.7}{\percent} better than the
\emph{first} variant.
This shows that the choice for the column reordering by
symmetry~$\varphi_\beta$ has a measurable impact on the running time for
dynamic orbitopal reduction.
A possible explanation for the median rule performing better than the
first rule is that the median rule creates more balanced branch-and-bound
trees, cf.\
Section~\ref{sec:theframework}.
Consequently, the right choice of~$\varphi_\beta$ might significantly
change the performance of orbitopal reduction.
We leave it as future research to find a good rule for the selection
of~$\varphi_\beta$ as this rule might be based on the structure of the
underlying problem, e.g., size of the variable domains, number of rows, and
number of columns.
\section{Conclusions and future research}
Symmetry handling is an important component of modern solver technology.
One of the main issues, however, is the selection and combination of
different symmetry handling methods.
Since the latter is non-trivial, we have proposed a flexible framework
that makes it easy to check whether different methods are compatible and
that applies to arbitrary variable domains.
Numerical results show that our framework is substantially faster than
symmetry handling in the state-of-the-art solver \code{SCIP}.
In particular, we benefit from combining different symmetry handling
methods, which is possible in our framework, but only in a limited way in
\code{SCIP}.
Moreover, due to our generalization of symmetry handling algorithms for
binary problems to general variable domains, our framework allows us to
reliably handle symmetries in different non-binary applications.
Due to its flexibility, our framework is not only applicable
to the methods discussed in this article, but also accommodates methods
developed in the future (provided they are compatible with
SHC\xspaces~\eqref{eq:main}).
This opens, among others, the following directions for future research.
In this article, we experimentally evaluated our framework only for
permutation symmetries.
As the framework also supports other types of symmetries
such as rotational and reflection symmetries,
further research could involve devising symmetry handling methods
for such symmetries.
Moreover, to handle permutation symmetries, we only used propagation
techniques.
In the future, these methods can be complemented in two ways.
On the one hand, other techniques such as separation routines can be
applied to handle SHC\xspaces~\eqref{eq:main}.
On the other hand, our symmetry handling methods have not exploited
additional problem structure such as packing-partitioning structures in
orbitopal fixing.
Further research can focus on the incorporation of problem constraints
in handling the symmetry handling constraint of
Theorem~\ref{thm:main}.
This includes a dynamification of packing-partitioning orbitopes,
as well as introducing a way to handle overlapping orbitopal
subgroups within a component.
Last, in the computational results we describe decision rules for
enabling/disabling certain symmetry handling methods.
If new symmetry handling methods are cast into our
framework, however, these rules need to be updated.
Future research could thus encompass the derivation of good rules for how
to handle symmetries in our framework.
\paragraph{Acknowledgment}
We thank J.\ Cole Smith for providing us the instances of the noise dosage
problem used in~\cite{sherali2001models}.
\appendix
\section{Further results for MIPLIB}
\label{app:tables}
In this appendix, we investigate in more detail the performance gains achieved by our methods for MIPLIB instances in comparison to the traditional methods.
The traditional methods only work for symmetries on binary variables,
whereas our generalized methods can also handle non-binary variables.
As the symmetric MIPLIB benchmark instances contain mostly instances with binary symmetries, the question arises whether the observed gains are thus due to a few instances for which our methods can handle more (non-binary) symmetries.
To investigate this effect,
we have partitioned these instances into two classes.
The first class contains instances where more symmetries than
in the traditional setting arise;
the second class contains the remaining instances, i.e., both settings
handle the same symmetries.
The results for the subsets of the instances are shown in
Tables~\ref{tab:miplib:diff} and~\ref{tab:miplib:same},
respectively.
\begin{table}[b!]
\caption{MIPLIB results where the detected symmetry group is different for the traditional setting and our generalized setting.}
\label{tab:miplib:diff}
\centering
\footnotesize
\begin{tabular}{@{}L{3.2cm}*{3}{R{1.1cm}R{1cm}R{.6cm}}@{}}
\toprule
\multicolumn{1}{c}{Setting}
& \multicolumn{3}{c}{All instances (10)}
& \multicolumn{3}{c}{Solved by some setting (2)}
& \multicolumn{3}{c}{Solved by all settings (1)}
\\
\cmidrule(lr){1-1}
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
\cmidrule(lr){8-10}
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
\\
\cmidrule{2-10}
Nosym
& 2397.54 & 0.15 & 1
& 468.79 & 0.02 & 1
& 60.29 & 0.01 & 1
\\
Polyh
& 2369.31 & 4.29 & 1
& 436.46 & 0.01 & 1
& 52.14 & 0.00 & 1
\\
OF
& 2396.04 & 3.09 & 1
& 469.20 & 0.00 & 1
& 60.39 & 0.00 & 1
\\
Polyh + OF
& 2341.74 & 5.69 & 1
& 413.65 & 0.01 & 1
& 46.72 & 0.00 & 1
\\
\midrule
OtopRed
& 2395.33 & 2.63 & 1
& 468.48 & 0.06 & 1
& 60.20 & 0.02 & 1
\\
LexRed
& 2321.62 & 7.21 & 2
& 400.65 & 0.30 & 2
& 46.62 & 0.00 & 1
\\
OR
& 2390.39 & 15.94 & 1
& 463.55 & 12.32 & 1
& 58.93 & 0.17 & 1
\\
OR + LexRed
& 2383.85 & 17.78 & 2
& 456.75 & 12.60 & 2
& 59.05 & 0.24 & 1
\\
OR + OtopRed
& 2402.87 & 3.62 & 1
& 467.93 & 0.04 & 1
& 60.06 & 0.02 & 1
\\
OtopRed + LexRed
& 2399.78 & 3.50 & 1
& 472.57 & 0.03 & 1
& 61.24 & 0.00 & 1
\\
OR + OtopRed + LexRed
& 2395.61 & 3.62 & 1
& 468.22 & 0.07 & 1
& 60.13 & 0.03 & 1
\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[b!]
\caption{MIPLIB results where the detected symmetry group is the same for the traditional setting and our generalized setting.}
\label{tab:miplib:same}
\centering
\footnotesize
\begin{tabular}{@{}L{3.2cm}*{3}{R{1.1cm}R{1cm}R{.6cm}}@{}}
\toprule
\multicolumn{1}{c}{Setting}
& \multicolumn{3}{c}{All instances (118)}
& \multicolumn{3}{c}{Solved by some setting (73)}
& \multicolumn{3}{c}{Solved by all settings (50)}
\\
\cmidrule(lr){1-1}
\cmidrule(lr){2-4}
\cmidrule(lr){5-7}
\cmidrule(lr){8-10}
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
& time (s) & sym (s) & \#S
\\
\cmidrule{2-10}
Nosym
& 899.10 & 0.18 & 58
& 381.92 & 0.17 & 58
& 160.65 & 0.08 & 50
\\
Polyh
& 677.77 & 1.67 & 66
& 241.61 & 1.06 & 66
& 140.79 & 0.71 & 50
\\
OF
& 750.98 & 1.23 & 65
& 285.34 & 0.97 & 65
& 136.50 & 0.49 & 50
\\
Polyh + OF
& 660.19 & 1.77 & 68
& 231.55 & 1.02 & 68
& 132.76 & 0.59 & 50
\\
\midrule
OtopRed
& 740.73 & 1.25 & 61
& 279.06 & 0.77 & 61
& 131.06 & 0.50 & 50
\\
LexRed
& 730.72 & 1.98 & 65
& 272.97 & 1.34 & 65
& 145.50 & 0.75 & 50
\\
OR
& 736.76 & 4.44 & 65
& 276.63 & 3.16 & 65
& 143.08 & 1.43 & 50
\\
OR + LexRed
& 717.69 & 4.88 & 66
& 265.14 & 3.29 & 66
& 140.54 & 1.61 & 50
\\
OR + OtopRed
& 656.95 & 2.29 & 65
& 229.74 & 1.40 & 65
& 130.59 & 0.59 & 50
\\
OtopRed + LexRed
& 621.98 & 1.79 & 67
& 210.24 & 1.04 & 67
& 125.43 & 0.57 & 50
\\
OR + OtopRed + LexRed
& 639.09 & 2.49 & 66
& 219.69 & 1.48 & 66
& 127.59 & 0.64 & 50
\\
\bottomrule
\end{tabular}
\end{table}
There are only 10 instances in the first class,
and only 2 of these can be solved by any of the settings considered.
So, indeed,
as mentioned at the beginning of Section~\ref{sec:num},
the symmetries of the MIPLIB instances act predominantly on
binary variables.
Since only a few instances detect a different symmetry group,
this has only a small effect on the results reported
in Section~\ref{sec:results:miplib}.
When restricting to the instances with the same symmetries
(Table~\ref{tab:miplib:same}),
the best of our methods on all instances
achieves an improvement of~\SI{5.8}{\percent}
over the traditional best,
as opposed to the figure of~\SI{5.2}{\percent} from Section~\ref{sec:results:miplib}.
This shows that our generalized methods outperform similar
state-of-the-art methods even for problems where symmetries
only act on binary variables.
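The improvement figure above can be reproduced directly from the table entries; the following snippet (illustrative only, using the mean running times over all 118 instances from Table~\ref{tab:miplib:same}) carries out the arithmetic.

```python
# Best traditional setting (Polyh + OF) vs. best generalized setting
# (OtopRed + LexRed), "All instances" time column of the table above.
best_traditional = 660.19
best_generalized = 621.98

improvement = 100.0 * (best_traditional - best_generalized) / best_traditional
print(f"{improvement:.1f}%")  # 5.8%
```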
\section{Testset generation for Noise Dosage instances}
\label{app:noisedosage}
In this appendix, we describe how we generate the instances of the noise
dosage problem used in our experiments.
We have extended the testset used by
Sherali and Smith~\cite{sherali2001models}, by including instances
with more workers and more machines.
Since the generator of their instances is not available to us,
we have analyzed the instances from~\cite{sherali2001models}
provided by J. Cole Smith and extracted features.
Most importantly, we have observed that the total worker time is about
half the total time required by the machines, and that the number of tasks
per machine ranges from 4 to 9.
The noise dosage units and time per machine are floating point numbers.
Given $p$ machines and $q$ workers,
where each worker works at most $H = 480$ hours,
we sample the number of tasks $d_i$ per machine
uniformly at random from the integers between 4 and 10,
then choose $\mu = \frac{1}{2H} \sum_{i=1}^p d_i$
and sample the time per task from the normal distribution
with mean $\mu$ and standard deviation $\frac15 \mu$,
i.e., $\mathcal N(\mu, \frac15 \mu)$.
Last, the noise units are sampled from $\mathcal N(18, 4)$.
When sampling from a normal distribution,
we discard samples with a negative value.
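For reference, the sampling procedure described above can be sketched as follows; the function name and the returned data layout are our own choices for illustration and not those of the actual generator.

```python
import random

def generate_instance(p, q, H=480, seed=0):
    """Sketch of the noise dosage instance generation described above:
    p machines, q workers, horizon H.  Illustrative only."""
    rng = random.Random(seed)

    def positive_normal(mu, sigma):
        # Rejection sampling: discard negative draws, as described above.
        while True:
            x = rng.gauss(mu, sigma)
            if x >= 0:
                return x

    # Number of tasks per machine: discrete uniform on {4, ..., 10}.
    tasks = [rng.randint(4, 10) for _ in range(p)]
    mu = sum(tasks) / (2.0 * H)
    # Time per task ~ N(mu, mu/5); noise units ~ N(18, 4); both truncated at 0.
    times = [[positive_normal(mu, mu / 5.0) for _ in range(d)] for d in tasks]
    noise = [[positive_normal(18.0, 4.0) for _ in range(d)] for d in tasks]
    return {"workers": q, "horizon": H, "tasks": tasks,
            "times": times, "noise": noise}
```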
Our instance generator and generated instance files
are publicly available
at~\url{https://github.com/JasperNL/scip-unified/tree/unified/problem_instances}.
\end{document} |
\begin{document}
\title{The equivalence of lattice and Heegaard Floer homology}
\begin{abstract}
We prove N\'{e}methi's conjecture: if $Y$ is a 3-manifold which is the boundary of a plumbing of a tree of disk bundles over $S^2$, then the lattice homology of $Y$ coincides with the Heegaard Floer homology of $Y$. We also give a conjectural description of the $H_1(Y)/\mathit{Tors}$ action when $b_1(Y)>0$.
\end{abstract}
\section{Introduction}
Heegaard Floer homology is a powerful invariant of 3-manifolds, introduced by Ozsv\'{a}th and Szab\'{o} \cite{OSDisks} \cite{OSProperties}. To a closed 3-manifold $Y$, equipped with a $\Spin^c$ structure $\mathfrak{s}\in \Spin^c(Y)$, Ozsv\'{a}th and Szab\'{o} constructed an $\mathbb{F}[U]$-module denoted $\mathit{HF}^-(Y,\mathfrak{s})$.
One defines
\[
\mathit{HF}^-(Y)=\bigoplus_{\mathfrak{s} \in \Spin^c(Y)} \mathit{HF}^-(Y,\mathfrak{s}).
\]
An important class of 3-manifolds is that of \emph{plumbed} 3-manifolds. These 3-manifolds are the boundaries of 4-manifolds obtained by plumbing disk bundles over surfaces together. In this paper, we consider plumbings of disk bundles over $S^2$, such that the plumbing is encoded by a tree. If $G$ is a tree with integer weights at the vertices, we write $Y(G)$ for the corresponding plumbed 3-manifold.
The manifold $Y(G)$ has a convenient surgery description as integral surgery on a link $L_G\subseteq S^3$. The link $L_G$ has one unknotted component for each vertex of $G$, and a clasp for each edge of $G$. The link $L_G$ is an iterated connected sum of Hopf links. The weights give an integral framing $\Lambda$ on $L_G$, and $Y(G)\cong S^3_{\Lambda}(L_G)$.
Much effort has gone into computing the Heegaard Floer complexes of plumbed 3-manifolds. Ozsv\'{a}th and Szab\'{o} \cite{OSPlumbed} computed the Heegaard Floer homology of 3-manifolds obtained by plumbing along a tree $G$ when $\Lambda$ is negative definite and the tree has at most one \emph{bad vertex} (which is a vertex such that the weight exceeds minus the valence). This family includes the Seifert fibered spaces.
The computation of Ozsv\'{a}th and Szab\'{o} was formalized by N\'{e}methi \cite{NemethiAR} \cite{NemethiLattice} using inspiration from algebraic geometry. N\'{e}methi defined an $\mathbb{F}[U]$-module $\mathbb{H}\mathbb{F}(Y(G))$ called \emph{lattice homology}. Lattice homology is the homology of a combinatorially defined chain complex $\mathbb{C} \mathbb{F}(G)$. N\'{e}methi proved that for the family of \emph{almost rational} graphs, $\mathit{HF}^-(Y(G))$ and $\mathbb{H} \mathbb{F}(Y(G))$ are isomorphic. Ozsv\'{a}th, Stipsicz and Szab\'{o} generalized this to a larger family called \emph{type-2} graphs, and proved that for general $G$ there is a spectral sequence from lattice homology to Heegaard Floer homology (almost rational graphs are \emph{type-1} graphs in Ozsv\'{a}th--Stipsicz--Szab\'{o}'s terminology). In later work, N\'{e}methi proved that if $G$ is negative definite, then $Y(G)$ is an L-space if and only if $G$ is a rational graph \cite{NemethiRational}.
N\'{e}methi \cite{NemethiLattice} conjectured that lattice homology and Heegaard Floer homology coincide for all negative definite plumbing trees $G$. The conjecture is also open for more general plumbing graphs. The aforementioned works \cite{OSPlumbed} \cite{NemethiAR} \cite{OSSLattice} \cite{NemethiRational} verify the conjecture when $G$ is a type-2 graph and for negative definite plumbings such that $Y(G)$ is an L-space.
We recall that $\ve{\mathit{HF}}^-(Y(G))$ denotes the module obtained by completing $\mathit{HF}^-(Y(G))$ with respect to the $U$ action. If $Y(G)$ is a rational homology 3-sphere, no information is lost by taking completions. The same holds if we restrict to torsion $\Spin^c$ structures on $Y(G)$.
In this paper, we prove the conjecture in full generality:
\begin{thm}\label{thm:main}
If $G$ is a plumbing tree, then there is an isomorphism of $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] $-modules
\[
\mathbb{H}\mathbb{F}(G)\cong \ve{\mathit{HF}}^-(Y(G)).
\]
When $b_1(Y(G))=0$, the isomorphism is relatively graded.
\end{thm}
This paper builds on previous work of the author \cite{ZemBordered}, which develops a bordered theory using the Manolescu--Ozsv\'{a}th link surgery formula \cite{MOIntegerSurgery}. We note that 3-manifolds obtained by plumbing a tree of 2-spheres can also be described by gluing together solid tori and Cartesian products of $S^1$ with the pair-of-pants surface. In this manner we reduce the proof to local computations, allowing cut-and-paste arguments.
\subsection{$b_1>0$}
We expect the techniques of this paper to also prove that when $b_1(Y(G))>0$ the isomorphism is relatively graded for all torsion $\Spin^c$ structures. The remaining task is to write down a theory of group valued gradings in the spirit of \cite{LOTBordered}*{Section~2.5} for the bordered link surgery modules from \cite{ZemBordered}. We plan to complete this in a future work.
When $b_1(Y)>0$, Heegaard Floer homology also has an action of $\Lambda^* H_1(Y)/\mathit{Tors}$. In this paper, we describe a refinement of N\'{e}methi's conjecture when $b_1(Y(G))>0$. If $\gamma\in H_1(Y(G))/\mathit{Tors}$, we define an endomorphism $\mathfrak{A}_{[\gamma]}$ on the lattice complex, and prove that our formula gives a well-defined action of $H_1(Y(G))/\mathit{Tors}$. See Section~\ref{sec:plumbed-H1-action}. We give a parallel construction on the link surgery formula in Section~\ref{sec:algebraic-action-link-surgery}.
We make the following conjecture:
\begin{conj}
\label{conj:intro} $\ve{\mathit{HF}}^-(Y(G))$ is isomorphic to $\mathbb{HF}(G)$ as a module over $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] \otimes \Lambda^* H_1(Y(G))/\mathit{Tors}$.
\end{conj}
It is possible to extend the techniques of this paper to compute the $H_1(Y(G))/\mathit{Tors}$-action on the Heegaard Floer homologies of plumbed manifolds in terms of the Manolescu--Ozsv\'ath link surgery formula for $L_G$. Since the techniques of \cite{ZemBordered} give a combinatorial model of the surgery formula for $L_G$, this reduces Conjecture~\ref{conj:intro} to a purely algebraic question. Nonetheless, the algebraic arguments of this paper seem insufficient to show that this action coincides with the action we describe on lattice homology.
\subsection{Acknowledgments}
The author would like to thank Antonio Alfieri, Maciej Borodzik, Kristen Hendricks, Jennifer Hom, Robert Lipshitz, Beibei Liu, Ciprian Manolescu, Peter Ozsv\'{a}th and Matt Stoffregen for helpful conversations.
\section{Background}
\subsection{Background on Heegaard Floer homology}
\label{sec:background}
In this section, we recall some background on Heegaard Floer homology and its refinements for knots and links.
Heegaard Floer homology is an invariant of 3-manifolds equipped with a $\Spin^c$ structure $\mathfrak{s}$, defined by Ozsv\'{a}th and Szab\'{o} \cite{OSDisks} \cite{OSProperties}. Given a pointed Heegaard diagram $(\Sigma,\ve{\alpha},\ve{\beta},w)$ for $Y$, one considers the Lagrangian tori
\[
\mathbb{T}_{\alpha}=\alpha_1\times \cdots \times\alpha_g\quad \text{and} \quad \mathbb{T}_{\beta}=\beta_1\times \cdots \times\beta_g,
\]
inside of $\Sym^g(\Sigma)$. The chain complex $\mathit{CF}^-(Y,\mathfrak{s})$ is freely generated over $\mathbb{F}[U]$ by intersection points $\ve{x}\in \mathbb{T}_{\alpha}\cap \mathbb{T}_{\beta}$
satisfying $\mathfrak{s}_w(\ve{x})=\mathfrak{s}$. The differential counts index 1 pseudoholomorphic disks $u$ weighted by $U^{n_w(u)}$, where $n_w(u)$ is the intersection number of the image of $u$ with $\{w\}\times \Sym^{g-1}(\Sigma)$.
We now recall the construction of knot and link Floer homology. Knot Floer homology is due to Ozsv\'{a}th and Szab\'{o} \cite{OSKnots} and independently Rasmussen \cite{RasmussenKnots}. Link Floer homology is due to Ozsv\'{a}th and Szab\'{o} \cite{OSLinks}. We focus on the description in terms of a free chain complex over a 2-variable polynomial ring $\mathbb{F}[\mathscr{U},\mathscr{V}]$. Given a doubly pointed Heegaard diagram $(\Sigma,\ve{\alpha},\ve{\beta},w,z)$ representing the pair $(Y,K)$ consisting of a knot $K$ in a rational homology 3-sphere $Y$, one defines $\mathcal{CFK}(Y,K)$ to be the free $\mathbb{F}[\mathscr{U},\mathscr{V}]$-module generated by intersection points $\mathbb{T}_{\alpha}\cap \mathbb{T}_{\beta}$. The differential counts Maslov index 1 pseudo-holomorphic disks $u$ which are weighted by $\mathscr{U}^{n_{w}(u)}\mathscr{V}^{n_{z}(u)}$.
For an $\ell$-component link $L\subseteq Y$, we consider a $2\ell$-pointed Heegaard link diagram $(\Sigma,\ve{\alpha},\ve{\beta},\ve{w},\ve{z})$ where $|\ve{w}|=|\ve{z}|=\ell$. We may similarly define $\mathcal{CFL}(Y,L)$ to be the complex freely generated over the ring $\mathbb{F}[\mathscr{U}_1,\dots, \mathscr{U}_{\ell},\mathscr{V}_1,\dots, \mathscr{V}_{\ell}]$ by intersection points $\ve{x}\in \mathbb{T}_{\alpha}\cap\mathbb{T}_{\beta}$. This version of link Floer homology was considered in \cite{ZemCFLTQFT}. In the differential, a holomorphic disk $u$ is weighted by the algebra element $\mathscr{U}_1^{n_{w_1}(u)}\cdots \mathscr{U}_\ell^{n_{w_\ell}(u)} \mathscr{V}_1^{n_{z_1}(u)}\cdots \mathscr{V}_\ell^{n_{z_\ell}(u)}$.
We recall that when $L$ is a link in a rational homology 3-sphere $Y$, there is an $\ell$-component $\mathbb{Q}^\ell$-valued Alexander grading $A=(A_1,\dots, A_\ell)$ on $\mathcal{CFL}(Y,L)$. The most important case for our purposes is when $Y=S^3$. In this case the Alexander grading takes values in the set
\[
\mathbb{H}(L):=\prod_{i=1}^\ell (\mathit{lk}(K_i,L-K_i)/2+\mathbb{Z}).
\]
The variable $\mathscr{U}_i$ has $A_j$-grading $-\delta_{i,j}$ (the Kronecker delta) and $\mathscr{V}_i$ has $A_j$-grading $\delta_{i,j}$.
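As a quick illustration (a standard example, not drawn from the references above): for the positive Hopf link, the linking number of the two components is $1$, so each Alexander grading is supported in half-integers.

```latex
% Positive Hopf link L = K_1 \cup K_2 in S^3: lk(K_i, L - K_i) = 1, so
\[
\mathbb{H}(L)
= \left( \tfrac{1}{2} + \mathbb{Z} \right)
  \times \left( \tfrac{1}{2} + \mathbb{Z} \right).
\]
% Note that U_i = \mathscr{U}_i \mathscr{V}_i preserves each A_j, since
% \mathscr{U}_i and \mathscr{V}_i shift A_i by -1 and +1, respectively.
```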
There are additional \emph{basepoint actions} on link Floer homology which make an appearance in our paper. They appear frequently when studying knot and link Floer homology (see, e.g. \cite{SarkarMaslov} or \cite{ZemQuasi}). For each $i\in \{1,\dots, \ell\}$, there are endomorphisms $\Phi_{w_i}$ and $\Psi_{z_i}$ of $\mathcal{CFL}(Y,L)$, defined as follows. We pick a free $\mathbb{F}[\mathscr{U}_1,\dots, \mathscr{U}_\ell,\mathscr{V}_{1},\dots, \mathscr{V}_{\ell}]$-basis $\ve{x}_1,\dots, \ve{x}_n$ of $\mathcal{CFL}(Y,L)$. We write $\partial$ as an $n\times n$ matrix with respect to this basis. The map $\Phi_{w_i}$ is obtained by differentiating this matrix with respect to $\mathscr{U}_i$. The map $\Psi_{z_i}$ is obtained by differentiating this matrix with respect to $\mathscr{V}_i$.
\subsection{Hypercubes and hyperboxes}
In this section we recall Manolescu and Ozsv\'{a}th's notion of a hypercube of chain complexes, as well as versions in the Fukaya category. See \cite{MOIntegerSurgery}*{Sections~5 and~8}.
We write $\mathbb{E}_n=\{0,1\}^n$. If $\ve{d}=(d_1,\dots, d_n)$, we write $\mathbb{E}(\ve{d})=\{0,\dots, d_1\}\times \cdots \times \{0,\dots, d_n\}$.
We write $\varepsilon\le \varepsilon'$ if inequality holds at all coordinates. We write $\varepsilon<\varepsilon'$ if $\varepsilon\le \varepsilon'$ and $\varepsilon\neq \varepsilon'$.
\begin{define} A \emph{hypercube of chain complexes} consists of a collection of groups $C_\varepsilon$, ranging over all $\varepsilon\in \mathbb{E}_n$, and maps $D_{\varepsilon,\varepsilon'}\colon C_{\varepsilon}\to C_{\varepsilon'}$, ranging over all pairs $\varepsilon,\varepsilon'$ such that $\varepsilon\le \varepsilon'$. We assume furthermore that whenever $\varepsilon\le \varepsilon''$ we have
\begin{equation}
\sum_{\substack{\varepsilon'\in \mathbb{E}_n \\\varepsilon\le \varepsilon'\le \varepsilon''}} D_{\varepsilon',\varepsilon''}\circ D_{\varepsilon,\varepsilon'}=0.
\label{eq:hypercube-relations}
\end{equation}
\end{define}
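To make the compatibility condition concrete, the following small script (purely illustrative; the groups $C_\varepsilon$ are modeled as $\mathbb{F}_2$-vector spaces and the maps as integer matrices reduced mod 2) verifies the hypercube relations for a 2-dimensional hypercube in which all edge maps are the identity and the internal differentials and the diagonal map vanish; over $\mathbb{F}_2$ the two paths around the square cancel.

```python
import itertools
import numpy as np

def check_hypercube(n, dims, maps):
    """Check sum_{e <= e1 <= e2} D_{e1,e2} o D_{e,e1} = 0 over F_2,
    where maps[(e, e1)] is a dims[e1] x dims[e] matrix."""
    cube = list(itertools.product((0, 1), repeat=n))
    leq = lambda a, b: all(x <= y for x, y in zip(a, b))
    for e in cube:
        for e2 in cube:
            if not leq(e, e2):
                continue
            total = np.zeros((dims[e2], dims[e]), dtype=int)
            for e1 in cube:
                if leq(e, e1) and leq(e1, e2):
                    total += maps[(e1, e2)] @ maps[(e, e1)]
            if np.any(total % 2):  # a relation fails over F_2
                return False
    return True

# A 2-cube with C_e = F_2 for all e: identity edge maps (length-1 maps),
# zero internal differentials (length-0) and zero diagonal (length-2).
I, Z = np.array([[1]]), np.array([[0]])
dims = {e: 1 for e in itertools.product((0, 1), repeat=2)}
maps = {(e, e2): (I if sum(y - x for x, y in zip(e, e2)) == 1 else Z)
        for e in dims for e2 in dims
        if all(x <= y for x, y in zip(e, e2))}
assert check_hypercube(2, dims, maps)
```

Replacing one edge map by zero breaks the cancellation around the square, so the checker rejects it.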
A \emph{hyperbox of chain complexes of size $\ve{d}\in (\mathbb{Z}^{>0})^n$} is similar. We assume that we have a collection of groups $C_{\varepsilon}$ ranging over $\varepsilon\in \mathbb{E}(\ve{d})$, as well as a collection of linear maps $D_{\varepsilon,\varepsilon'}$ ranging over all $\varepsilon\le \varepsilon'$ such that $|\varepsilon'-\varepsilon|_{L^\infty}\le 1$. We assume that Equation~\eqref{eq:hypercube-relations} holds whenever $|\varepsilon''-\varepsilon|_{L^\infty}\le 1$.
\begin{rem}
The reader may find it helpful to note that the categories of hypercubes and hyperboxes are equivalent to the categories of type-$D$ modules over certain algebras, called the \emph{cube} and \emph{box} algebras. See \cite{ZemBordered}*{Section~3.6}.
\end{rem}
For our purposes, it is also important to consider a notion of hypercubes in the Fukaya category (see \cite{MOIntegerSurgery}*{Section~8.2}):
\begin{define} A \emph{hypercube of beta-attaching curves} $\mathcal{L}_{\beta}$ on $\Sigma$ consists of a collection of attaching curves $\ve{\beta}_{\varepsilon}$ ranging over $\varepsilon\in \mathbb{E}_n$, as well as a collection of chains (i.e. morphisms in the Fukaya category)
\[
\Theta_{\varepsilon,\varepsilon'}\in \ve{\mathit{CF}}^-(\Sigma,\ve{\beta}_{\varepsilon},\ve{\beta}_{\varepsilon'})
\]
whenever $\varepsilon<\varepsilon'$, satisfying the following compatibility condition for each pair $\varepsilon<\varepsilon'$:
\[
\sum_{\varepsilon=\varepsilon_1<\dots<\varepsilon_j=\varepsilon'} f_{\beta_{\varepsilon_1},\dots, \beta_{\varepsilon_j}}(\Theta_{\varepsilon_1,\varepsilon_2},\dots, \Theta_{\varepsilon_{j-1},\varepsilon_j})=0.
\]
\end{define}
We usually assume that the diagram containing all $2^n$ attaching curves is weakly admissible. For the purposes of this paper, it is also sufficient to consider only hypercubes where each pair $\ve{\beta}_{\varepsilon}$ and $\ve{\beta}_{\varepsilon'}$ are handleslide-equivalent.
A hypercube of \emph{alpha}-attaching curves $\mathcal{L}_{\alpha}=(\ve{\alpha}_{\varepsilon},\Theta_{\varepsilon,\varepsilon'})_{\varepsilon\in \mathbb{E}_{n}}$ is defined similarly, except that we have a choice of chain $\Theta_{\varepsilon,\varepsilon'}\in \ve{\mathit{CF}}^-(\Sigma,\ve{\alpha}_{\varepsilon'},\ve{\alpha}_{\varepsilon})$ whenever $\varepsilon<\varepsilon'$.
Given hypercubes of alpha and beta attaching curves $\mathcal{L}_{\alpha}$ and $\mathcal{L}_{\beta}$, we may pair them and form a complex $\ve{\mathit{CF}}^-(\mathcal{L}_{\alpha},\mathcal{L}_{\beta})$, which is a hypercube of chain complexes of dimension $n+m$, where $n=\dim \mathcal{L}_\alpha$ and $m=\dim \mathcal{L}_{\beta}$. If $(\varepsilon,\nu)\in \mathbb{E}_n\times \mathbb{E}_m$, then the underlying chain complex of the pairing is $C_{(\varepsilon,\nu)}:=\ve{\mathit{CF}}^-(\ve{\alpha}_{\varepsilon}, \ve{\beta}_{\nu})$. The hypercube differential is defined as follows. We set $D_{(\varepsilon,\nu),(\varepsilon,\nu)}$ to be the ordinary Floer differential. If $(\varepsilon,\nu)< (\varepsilon',\nu')$, then
\begin{equation}
D_{(\varepsilon,\nu),(\varepsilon',\nu')}(\ve{x})=\sum_{\substack{\varepsilon=\varepsilon_1<\dots<\varepsilon_i=\varepsilon'\\ \nu=\nu_1<\dots<\nu_j=\nu'}} f_{\alpha_{\varepsilon_i},\dots, \alpha_{\varepsilon_1}, \beta_{\nu_1},\dots, \beta_{\nu_j}}(\Theta_{\alpha_{\varepsilon_i},\alpha_{\varepsilon_{i-1}}},\dots,\ve{x},\dots, \Theta_{\beta_{\nu_{j-1}},\beta_{\nu_j}}).
\label{eq:hypercube-differential}
\end{equation}
\subsection{Background on lattice homology}
In this section, we recall the definition of lattice homology. Lattice homology is an invariant of plumbed 3-manifolds due to N\'{e}methi \cite{NemethiAR} \cite{NemethiLattice}, which is a formalization of Ozsv\'{a}th and Szab\'{o}'s computation of the Heegaard Floer homology of some plumbed 3-manifolds \cite{OSPlumbed}. We will use the notation of Ozsv\'{a}th, Stipsicz and Szab\'{o} \cite{OSSLattice} because of its relation to the surgery formula of Manolescu and Ozsv\'{a}th.
If $G$ is a plumbing tree, then we write $Y(G)$ for the associated 3-manifold and $X(G)$ for the associated 4-manifold, which has boundary $Y(G)$. We recall that a class $K\in H^2(X(G))$ is a \emph{characteristic vector} if
\[
K (\Sigma)+\Sigma^2\equiv 0\pmod 2,
\]
for every class $\Sigma\in H_2(X(G))$. We write $\mathrm{Char}(G)$ for the set of characteristic vectors of $X(G)$.
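As a simple example (standard, and only for illustration): if $G$ consists of a single vertex with weight $-2$, then $X(G)$ is the disk bundle over $S^2$ of Euler number $-2$, and $H_2(X(G))\cong \mathbb{Z}$ is generated by a class $\Sigma$ with $\Sigma^2=-2$.

```latex
% K is characteristic iff K(\Sigma) + \Sigma^2 = K(\Sigma) - 2 is even,
% i.e. iff K(\Sigma) is even.  Identifying H^2(X(G)) \cong \mathbb{Z}
% via K \mapsto K(\Sigma), the set of characteristic vectors is
\[
\mathrm{Char}(G) \cong 2\mathbb{Z}.
\]
```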
We now sketch the definition of $\mathbb{CF}(G)$. Write $V(G)$ for the set of vertices, and $\mathbb{P}(G)$ for the power set of $V(G)$. Generators of the complex are written $[K,E]$ where $K\in \mathrm{Char}(G)$ and $E\in \mathbb{P}(G)$. The lattice complex is defined as
\[
\mathbb{CF}(G)=\prod_{\substack{
K\in \mathrm{Char}(G)\\
E\in \mathbb{P}(G)}} \mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] \otimes \langle [K,E]\rangle.
\]
The differential on $\mathbb{CF}(G)$ is defined via the equation
\begin{equation}
\partial[K,E]=\sum_{v\in E} U^{a_v(K,E)}\otimes [K,E-v]+\sum_{v\in E} U^{b_v(K,E)}\otimes [K+2v^*,E-v], \label{eq:lattice-differential}
\end{equation}
extended equivariantly over $U$. In the above equation, $a_v(K,E)$ and $b_v(K,E)$ denote certain nonnegative integers. See \cite{OSSKnotLatticeHomology}*{Section~2} for the definition in our present notation.
Ozsv\'{a}th, Stipsicz and Szab\'{o} gave an alternate description of the lattice complex which is important for our purposes. Let $(\mathcal{C}_{\varepsilon},D_{\varepsilon,\varepsilon'})$ denote the link surgery hypercube for $L_G$. We may construct another hypercube $(H_{\varepsilon}, d_{\varepsilon,\varepsilon'})$ by setting $H_{\varepsilon}=H_* (\mathcal{C}_{\varepsilon})$, and by setting $d_{\varepsilon,\varepsilon'}=(D_{\varepsilon,\varepsilon'})_*$ if $|\varepsilon'-\varepsilon|_{L^1}=1$ and $d_{\varepsilon,\varepsilon'}=0$ otherwise. Note that $\mathcal{C}_{\varepsilon}$ is naturally a module over $\mathbb{F}[\hspace{-.5mm}[ U_1,\dots, U_{\ell}]\hspace{-.5mm}] $, where $\ell=|L_G|$; however, each $U_i$ has the same action on homology, so we view $H_* (\mathcal{C}_{\varepsilon})$ as an $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] $-module, where $U$ acts by any of the $U_i$. Ozsv\'{a}th, Stipsicz and Szab\'{o} prove the following:
\begin{prop}[\cite{OSSLattice}*{Proposition~4.4}]
\label{prop:OSS-lattice}
There is an isomorphism of hypercubes of chain complexes over $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] $:
\[
(H_\varepsilon,d_{\varepsilon,\varepsilon'})\cong \mathbb{C} \mathbb{F}(G)
\]
which is relatively graded on torsion $\Spin^c$ structures. In particular, the total homologies of the two sides coincide.
\end{prop}
\subsection{Knot and link surgery formulas}
We now recall some basics of the Manolescu--Ozsv\'{a}th link surgery formula \cite{MOIntegerSurgery}, as well as the knot surgery formulas of Ozsv\'{a}th and Szab\'{o} \cite{OSIntegerSurgeries} \cite{OSRationalSurgeries}.
If $K$ is a knot in an integer homology 3-sphere $Y$, then Ozsv\'{a}th and Szab\'{o} \cite{OSIntegerSurgeries} proved a formula which relates $\mathit{HF}^-(Y_n(K))$ with the knot Floer complex of $K$. They defined two chain complexes $\mathbb{A}(Y,K)$ and $\mathbb{B}(Y,K)$ over $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] $, as well as two $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] $-equivariant maps $v,h_n\colon \mathbb{A}(Y,K)\to \mathbb{B}(Y,K)$ such that
\[
\ve{\mathit{HF}}^-(Y_n(K))\cong H_* \mathrm{Cone}(v+h_n\colon \mathbb{A}(Y,K)\to \mathbb{B}(Y,K)).
\]
Here, $\ve{\mathit{HF}}^-$ denotes $\mathit{HF}^-$ with coefficients in $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}]$ and the modules $\mathbb{A}(Y,K)$ and $\mathbb{B}(Y,K)$ are suitable completions of $\mathcal{CFK}(Y,K)$ and $\mathscr{V}^{-1} \mathcal{CFK}(Y,K)$, respectively.
The map $v$ is the canonical inclusion map, while $h_n$ is defined by composing the canonical inclusion of $\mathcal{CFK}(Y,K)$ into $\mathscr{U}^{-1} \mathcal{CFK}(Y,K)$ with a homotopy equivalence of $\mathbb{F}[U]$-chain complexes $\mathscr{U}^{-1} \mathcal{CFK}(Y,K)\simeq \mathscr{V}^{-1}\mathcal{CFK}(Y,K)$. The map $h_n$ shifts the Alexander grading by $n$.
Manolescu and Ozsv\'{a}th extended the knot surgery formula to links in $S^3$ \cite{MOIntegerSurgery}. To a link $L\subseteq S^3$ with integral framing $\Lambda$, they constructed a chain complex $\mathcal{C}_{\Lambda}(L)$ whose homology is $\ve{\mathit{HF}}^-(S^3_{\Lambda}(L))$. The chain complex $\mathcal{C}_{\Lambda}(L)$ is built from the link Floer complex of $L$; it is an $\ell$-dimensional hypercube of chain complexes, where $\ell=|L|$.
We usually conflate the integral framing $\Lambda$ with the symmetric framing matrix, which has $\Lambda_{i,i}$ equal to the framing of $L_i$, and $\Lambda_{i,j}=\mathit{lk}(L_i,L_j)$ if $i\neq j$.
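For example (with the convention that each edge of $G$ contributes a clasp of linking number $+1$): if $G$ has two vertices with weights $-2$ and $-3$ joined by a single edge, then $L_G$ is a Hopf link and

```latex
\[
\Lambda = \begin{pmatrix} -2 & 1 \\ 1 & -3 \end{pmatrix},
\]
% which is negative definite: the leading principal minors are
% -2 < 0 and \det \Lambda = 6 - 1 = 5 > 0.
```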
The underlying group of $\mathcal{C}_{\Lambda}(L)$ has a simple description in terms of link Floer homology. If $\varepsilon\in \mathbb{E}_\ell$, write $S_{\varepsilon}$ for the multiplicatively closed subset of $\mathbb{F}[\mathscr{U}_1,\dots, \mathscr{U}_{\ell},\mathscr{V}_1,\dots, \mathscr{V}_\ell]$ generated by $\mathscr{V}_i$ for $i$ such that $\varepsilon_i=1$. Then $\mathcal{C}_{\varepsilon}$ is a completion of $S_\varepsilon^{-1}\cdot \mathcal{CFL}(L).$
(This is a slight reformulation of Manolescu and Ozsv\'{a}th's original description; see \cite{ZemBordered}*{Lemma~7.5}).
The hypercube differential decomposes over sublinks of $L$ which are oriented (possibly differently than $L$). If $\vec{M}$ is an oriented sublink of $L$, we write $\Phi^{\vec{M}}$ for the corresponding summand of the differential. If $\varepsilon< \varepsilon'$ and $\varepsilon'-\varepsilon$ is the indicator function for the components of $M$, then $\Phi^{\vec{M}}$ sends $\mathcal{C}_{\varepsilon}$ to $\mathcal{C}_{\varepsilon'}$.
For each $\varepsilon$, the group $\mathcal{C}_{\varepsilon}\cong S_{\varepsilon}^{-1}\cdot \mathcal{CFL}(L)$ decomposes over Alexander gradings $\ve{s}\in \mathbb{H}(L)$. We write $\mathcal{C}_{\varepsilon}(\ve{s})\subseteq S_{\varepsilon}^{-1}\cdot \mathcal{CFL}(L)$ for this subgroup. The group $\mathcal{C}_{\varepsilon}(\ve{s})$ is preserved by the action of $U_i=\mathscr{U}_i\mathscr{V}_i$, for $i\in \{1,\dots, \ell\}$, as well as the internal differential from $\mathcal{CFL}(L)$, though it is not preserved by the actions of $\mathscr{U}_i$ or $\mathscr{V}_i$.
The hypercube maps $\Phi^{\vec{M}}$ have a predictable effect on Alexander gradings. If $\vec{M}\subseteq L$ is an oriented sublink, define $\Lambda_{L,\vec{M}}\in \mathbb{Z}^\ell$ as follows. We may canonically identify $\mathbb{Z}^\ell$ with $H_1(S^3\setminus \nu(L))$. Under this isomorphism, the generators $(0,\dots, 1,\dots, 0)$ of $\mathbb{Z}^\ell$ are identified with meridians of the components of $L$. We define $\Lambda_{L,\vec{M}}$ to be the sum of the longitudes of the components $K_i$ of $\vec{M}$ such that the orientations from $\vec{M}$ and $L$ are opposite. With respect to this notation,
\begin{equation}
\Phi^{\vec{M}}(\mathcal{C}_{\varepsilon}(\ve{s}))\subseteq \mathcal{C}_{\varepsilon'}(\ve{s}+\Lambda_{L,\vec{M}})\label{eq:grading-link-surgery}
\end{equation}
where $\varepsilon'-\varepsilon$ is the indicator function for $M$.
\subsection{Systems of arcs}
\label{sec:system-of-arcs}
In this section, we recall the notion of a \emph{system of arcs} for a link $L$, which is a piece of auxiliary data necessary to build the link surgery formula.
\begin{define} Suppose that $L\subseteq S^3$ is a link such that each component $K_i\subseteq L$ is equipped with a pair of basepoints, denoted $w_i$ and $z_i$. A \emph{system of arcs} $\mathscr{A}$ for $L\subseteq S^3$ consists of a collection of $\ell=|L|$ embedded and pairwise disjoint arcs $\lambda_1,\dots, \lambda_\ell$, such that
\[
\lambda_i \cap L=\partial \lambda_i \cap L=\{w_i,z_i\}.
\]
\end{define}
If $L$ is oriented, we say an arc $\lambda_i$ is \emph{beta-parallel} if it is a small push-off of the segment of $K_i$ which is oriented from $z_i$ to $w_i$. We say that $\lambda_i$ is \emph{alpha-parallel} if it is a small push-off of the segment of $K_i$ oriented from $w_i$ to $z_i$.
The construction of Manolescu and Ozsv\'{a}th focuses on arc systems where all of the arcs are alpha-parallel. Their proof also applies with little change to the case that each arc is either alpha-parallel or beta-parallel. We write $\mathcal{C}_{\Lambda}(L,\mathscr{A})$ for the link surgery complex computed with the arc system $\mathscr{A}$.
In \cite{ZemBordered}*{Section~13}, the author studied the effect of changing the arc system and proved a general formula computing this effect. See \cite{ZemBordered}*{Corollary~13.5}. A particularly important result is the following:
\begin{thm}[\cite{ZemBordered}*{Theorem~13.1}] Let $L\subseteq S^3$ be a framed link, and let $\mathscr{A}$ and $\mathscr{A}'$ be arc systems which differ only on a single knot component $K_1$. Then
\[
\mathcal{C}_{\Lambda}(L,\mathscr{A})\simeq \mathcal{C}_{\Lambda}(L,\mathscr{A}').
\]
Furthermore, the homotopy equivalence is equivariant with respect to $\mathbb{F}[\hspace{-.5mm}[ U_2,\dots, U_\ell]\hspace{-.5mm}] $.
\end{thm}
There is a refinement in terms of type-$D$ modules as well. See \cite{ZemBordered}*{Proposition~13.2}.
\subsection{Construction of the surgery hypercube}
\label{sec:basic-systems}
In this section, we sketch the construction of the link surgery hypercube. Manolescu and Ozsv\'{a}th's construction requires a large collection of Heegaard diagrams, which they refer to as a \emph{complete system of Heegaard diagrams}. We describe their construction in a restricted setting, focusing on the case that each arc in our system of arcs $\mathscr{A}$ for $L$ is either alpha-parallel or beta-parallel.
We also focus our attention on a special
class of complete systems, which we call \emph{meridional $\sigma$-basic systems of Heegaard diagrams}. Such a basic system is constructed via the following procedure. We begin with a Heegaard link diagram $(\Sigma,\ve{\alpha},\ve{\beta},\ve{w},\ve{z})$ of $(S^3,L)$. If $K_i$ is a component of $L$, write $\lambda_i$ for the arc of $\mathscr{A}$ for $K_i$. We assume that this Heegaard diagram is chosen so that the basepoints $w_i,z_i$ on $K_i$ are separated by a single alpha curve $\alpha_i^s$ (if $\lambda_i$ is beta-parallel) or a single beta curve $\beta_i^s$ (if $\lambda_i$ is alpha-parallel). Furthermore, we assume that the arc $\lambda_i$ is embedded in $\Sigma$, and $\lambda_i$ is disjoint from all of the attaching curves except for the special meridional alpha or beta curve of $K_i$. See Figure~\ref{cor:2}.
Suppose $\lambda_i$ is beta-parallel. We consider the component $A_i\subseteq \Sigma\setminus \ve{\alpha}$ which contains $w_i$ and $z_i$. If we glue the two boundary components of $A_i$ corresponding to $\alpha_i^s$, we obtain a torus with many disks removed (corresponding to other alpha curves). We write $\ve{\alpha}_0\subseteq \ve{\alpha}$ and $\ve{\beta}_0\subseteq \ve{\beta}$ for the curves which are not special meridians of any component. See Figure~\ref{cor:2}.
\begin{figure}[ht]
\input{cor2.pdf_tex}
\caption{The region $A_i\subseteq \Sigma$ in a meridional $\sigma$-basic system. The arc $\lambda_i$ is the dashed arc which connects $z_i$ and $w_i$. The dashed curves labeled $\alpha'$ denote the successive replacements of $\alpha^s_i$ in the basic system.}
\label{cor:2}
\end{figure}
Given such a diagram $(\Sigma,\ve{\alpha},\ve{\beta},\ve{w},\ve{z})$, we may construct two generalized hyperboxes of attaching curves $\mathcal{L}_{\alpha}$ and $\mathcal{L}_{\beta}$, as follows. The hyperboxes are generalized in the sense that each subcube is allowed to have some axis directions in which all morphisms have length 1 and consist of a canonical diffeomorphism map for moving $z_i$ to $w_i$ along $\lambda_i$, instead of a Floer chain as in an ordinary hypercube of attaching curves. Each of the axis directions of $\mathcal{L}_{\alpha}$ and $\mathcal{L}_{\beta}$ is identified with a different component of $L$.
We form the $K_i$-direction of $\mathcal{L}_{\alpha}$ as follows. The first step corresponds to performing a surface isotopy which moves $z_i$ to $w_i$ along the arc $\lambda_i$. This changes only the special meridian $\alpha_i^s$. The subsequent steps in the $K_i$-direction correspond to moving $\alpha_i^s$ in the component $A_i$, while avoiding the basepoint $w_i$, so that it returns to its original position. This is achieved by a sequence of isotopies and handleslides of $\alpha_i^s$ across the other components of $\ve{\alpha}_0$. Since all of the $A_i$ have disjoint interiors, we may do this independently for each link component of $L$ which has a beta-parallel arc. Each of the curves of $\ve{\alpha}_0$ is stationary in this construction. To form the hyperbox $\mathcal{L}_{\alpha}$, we perform small Hamiltonian translations to each curve of $\ve{\alpha}_0$ to achieve admissibility. The construction of $\mathcal{L}_{\beta}$ is similar.
By pairing $\mathcal{L}_{\alpha}$ and $\mathcal{L}_{\beta}$ together, we obtain an $\ell$-dimensional hyperbox of chain complexes, which we compress to obtain an $\ell$-dimensional hypercube of chain complexes $(\mathscr{C}_{\varepsilon}, \mathscr{D}_{\varepsilon,\varepsilon'})_{\varepsilon\in \mathbb{E}_\ell}$. We may view each $\mathscr{C}_{\varepsilon}$ as the Floer complex obtained from $(\Sigma,\ve{\alpha},\ve{\beta},\ve{w},\ve{z})$ by keeping one basepoint from each link component, as follows. If $\varepsilon\in \mathbb{E}_\ell$, define the submodule
\[
N_{\varepsilon}\subseteq \mathcal{C}FL(L)
\]
to be generated by $(\mathscr{V}_{i}-1)\cdot \mathcal{C}FL(L)$, ranging over $i$ such that $\varepsilon_i=1$, as well as $(\mathscr{U}_{i}-1)\cdot \mathcal{C}FL(L)$, ranging over $i$ such that $\varepsilon_i=0$. Then
\[
\mathscr{C}_{\varepsilon}\cong \mathcal{C}FL(L)/N_{\varepsilon}.
\]
The link surgery hypercube $\mathcal{C}_{\Lambda}(L)$ is defined as follows. If $K_i\subseteq L$ is a link component (oriented positively), then $\Phi^{+K_i}$ is defined to be the canonical map for localizing at $\mathscr{V}_i$. If $\vec{M}\subseteq L$ is a sublink, all of whose components are oriented oppositely to $L$, and $\varepsilon',\varepsilon\in \mathbb{E}_\ell$ are points such that $\varepsilon'-\varepsilon$ is the indicator function of $M$, then the map $\Phi^{\vec{M}}$ is defined to be the unique map which reduces to $\mathscr{D}_{\varepsilon,\varepsilon'}$ after quotienting the domain and codomain of $\Phi^{\vec{M}}$ by $N_{\varepsilon}$ and $N_{\varepsilon'}$, respectively, and which satisfies the grading property in Equation~\eqref{eq:grading-link-surgery}. One sets $\Phi^{\vec{M}}=0$ if $\vec{M}$ has more than one component and some component of $\vec{M}$ is oriented coherently with $L$.
\section{The bordered perspective on the link surgery formula}
\label{sec:bordered}
In this section, we recall the bordered perspective \cite{ZemBordered} on the knot and link surgery formulas \cite{OSKnots} \cite{MOIntegerSurgery}. We also prove several important new properties.
\subsection{Linear topological spaces}
In this section, we recall some preliminaries about completions. These are used in the module categories from \cite{ZemBordered}*{Section~6}. We refer the reader to \cite{AtiayhMacdonald}*{Section~10} for more background. A \emph{linear topological vector space} $\mathcal{X}$ consists of a vector space equipped with a topology such that the following hold:
\begin{enumerate}
\item There is a basis of open sets centered at $0$ consisting of subspaces.
\item The addition function $\mathcal{X}\times \mathcal{X}\to \mathcal{X}$ is continuous.
\end{enumerate}
Such a topology may be specified by picking a decreasing filtration $(\mathcal{X}_{\alpha})_{\alpha\in A}$ of subspaces of $\mathcal{X}$, indexed by some directed, partially ordered set $A$. Every linear topological space can be expressed this way, for example by setting $A$ to be the set of open subspaces of $\mathcal{X}$, ordered by reverse inclusion.
If $\mathcal{X}$ has filtration $(\mathcal{X}_{\alpha})_{\alpha\in A}$, as above, then the \emph{completion} of $\mathcal{X}$ is the inverse limit
\[
\ve{\mathcal{X}}=\varprojlim_{\alpha\in A} \mathcal{X}/\mathcal{X}_{\alpha}.
\]
If $R$ is a ring, a \emph{linear topological $R$-module} is similar, except we require a basis at $0$ to consist of $R$-submodules.
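For example, the polynomial ring $\mathcal{X}=\mathbb{F}[U]$ is a linear topological $\mathbb{F}[U]$-module with the $U$-adic filtration $\mathcal{X}_n=U^n\cdot \mathbb{F}[U]$, indexed by $n\in \mathbb{N}$. Its completion is the power series ring
\[
\ve{\mathcal{X}}=\varprojlim_{n} \mathbb{F}[U]/U^n\cdot \mathbb{F}[U]\cong \mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}].
\]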
Given two linear topological vector spaces (or linear topological $R$-modules) $\mathcal{X}$ and $\mathcal{Y}$, there are several ways to topologize the tensor product. This phenomenon parallels notions from functional analysis, where the tensor product of two Banach spaces possesses many different Banach space structures. See \cite{GrothendieckNuclear}. We consider the following topologies:
\begin{enumerate}
\item $\mathcal{X}\otimes^! \mathcal{Y}$ (the \emph{standard tensor product}): A subspace $E\subseteq \mathcal{X}\otimes^! \mathcal{Y}$ is open if and only if there are open subspaces $U\subseteq \mathcal{X}$ and $V\subseteq \mathcal{Y}$ so that $U\otimes \mathcal{Y}$ and $\mathcal{X}\otimes V$ are contained in $E$.
\item $\mathcal{X}\mathrel{\vec{\otimes}} \mathcal{Y}$: A subspace $E\subseteq \mathcal{X}\mathrel{\vec{\otimes}} \mathcal{Y}$ is open if and only if there is some open subspace $U\subseteq \mathcal{X}$ such that $U\otimes \mathcal{Y}\subseteq E$, and for all $x\in \mathcal{X}$ there is an open subspace $V_x\subseteq \mathcal{Y}$ so that $x\otimes V_x\subseteq E$.
\item $\mathcal{X}\otimes^*\mathcal{Y}$: A subspace $E\subseteq \mathcal{X}\otimes^*\mathcal{Y}$ is open if and only if the following hold: there are open subspaces $U\subseteq \mathcal{X}$ and $V\subseteq \mathcal{Y}$ such that $U\otimes V\subseteq E$; for each $x\in \mathcal{X}$, there is an open subspace $V_x\subseteq \mathcal{Y}$ so that $x\otimes V_x\subseteq E$; and for all $y\in \mathcal{Y}$ there is an open subspace $U_y\subseteq \mathcal{X}$ so that $U_y\otimes y\subseteq E$.
\end{enumerate}
The above topologies are described by Beilinson \cite{Beilinson-Tensors}; see Positselski's work \cite{Positelski-Linear} for a helpful introduction. The `standard tensor product' predates Beilinson's work and coincides with well-known constructions; the latter two topologies are due to Beilinson.
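We note that one can check directly from the definitions that every subspace which is open for $\otimes^!$ is also open for $\mathrel{\vec{\otimes}}$, and every subspace which is open for $\mathrel{\vec{\otimes}}$ is also open for $\otimes^*$. Consequently, the identity maps
\[
\mathcal{X}\otimes^* \mathcal{Y}\to \mathcal{X}\mathrel{\vec{\otimes}} \mathcal{Y}\to \mathcal{X}\otimes^! \mathcal{Y}
\]
are continuous.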
\subsection{The algebra $\mathcal{K}$}
\label{sec:surgery-algebra}
The author described in \cite{ZemBordered} the following associative algebra $\mathcal{K}$. The algebra $\mathcal{K}$ is an algebra over the ring of two idempotents $\ve{I}_0\oplus \ve{I}_1$, where $\ve{I}_{\varepsilon}\cong \mathbb{F}$ for $\varepsilon\in \{0,1\}$. We set
\[
\ve{I}_0\cdot \mathcal{K}\cdot \ve{I}_0=\mathbb{F}[\mathscr{U},\mathscr{V}]\quad\text{and} \quad \ve{I}_1\cdot \mathcal{K}\cdot \ve{I}_1=\mathbb{F}[\mathscr{U},\mathscr{V},\mathscr{V}^{-1}].
\]
Furthermore, $\ve{I}_0\cdot \mathcal{K} \cdot \ve{I}_1=0$. Finally, $\ve{I}_1\cdot \mathcal{K}\cdot \ve{I}_0$ has two special algebra elements, $\sigma$ and $\tau$, which are subject to the relations
\[
\sigma\cdot \mathscr{U}=U \mathscr{V}^{-1} \cdot \sigma, \quad \sigma \cdot \mathscr{V}=\mathscr{V}\cdot \sigma,\quad \tau\cdot \mathscr{U}=\mathscr{V}^{-1} \cdot \tau,\quad \text{and} \quad \tau \cdot \mathscr{V}=U \mathscr{V} \cdot \tau,
\]
where $U=\mathscr{U}\mathscr{V}$.
It is sometimes helpful to consider the two algebra homomorphisms $I,T\colon \mathbb{F}[\mathscr{U},\mathscr{V}]\to \mathbb{F}[\mathscr{U},\mathscr{V},\mathscr{V}^{-1}]$ where $I$ is the inclusion given by localizing at $\mathscr{V}$, and $T$ satisfies $T(\mathscr{U})=\mathscr{V}^{-1}$ and $T(\mathscr{V})=U \mathscr{V}$. Then the relations for $\mathcal{K}$ become
\[
\sigma\cdot a=I(a)\cdot \sigma\quad \text{and} \quad \tau\cdot a=T(a)\cdot \tau,
\]
whenever $a\in \ve{I}_0\cdot \mathcal{K}\cdot \ve{I}_0$.
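For instance, these relations give
\[
\tau\cdot \mathscr{U}^i\mathscr{V}^j=T(\mathscr{U}^i\mathscr{V}^j)\cdot \tau=\mathscr{V}^{-i}(U\mathscr{V})^j\cdot \tau=\mathscr{U}^j\mathscr{V}^{2j-i}\cdot \tau.
\]
In particular, both $\sigma$ and $\tau$ commute with $U=\mathscr{U}\mathscr{V}$, since $I(U)=U$ and $T(U)=\mathscr{V}^{-1}\cdot U\mathscr{V}=U$.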
\subsection{Modules over $\mathcal{K}$}
As described in \cite{ZemBordered}*{Section~6.2}, the knot algebra $\mathcal{K}$ has a natural filtration consisting of the following subspaces:
\begin{define}
\label{def:knot-topology}
Suppose that $n\in \mathbb{N}$ is fixed. We define $J_n\subseteq \mathcal{K}$ to be the span of following set of generators:
\begin{enumerate}
\item In $\ve{I}_0\cdot \mathcal{K}\cdot \ve{I}_0$, the generators $\mathscr{U}^i\mathscr{V}^j$, for $i\ge n$ or $j\ge n$ (i.e.\ $\max(i,j)\ge n$).
\item In $\ve{I}_1\cdot \mathcal{K}\cdot \ve{I}_0$, the generators $\mathscr{U}^i\mathscr{V}^j \sigma$ for $i\ge n$ or $j\ge n$.
\item In $\ve{I}_1\cdot\mathcal{K}\cdot \ve{I}_0$, the generators $\mathscr{U}^i\mathscr{V}^j\tau$ for
$j\le 2i-n$ or $i\ge n$.
\item In $\ve{I}_1\cdot\mathcal{K}\cdot \ve{I}_1$, the generators $\mathscr{U}^i\mathscr{V}^j$ where $i\ge n$.
\end{enumerate}
\end{define}
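Unpacking Definition~\ref{def:knot-topology}, for example: the element $\mathscr{V}^{-m}\in \ve{I}_1\cdot \mathcal{K}\cdot \ve{I}_1$ lies in no $J_n$ with $n\ge 1$, while $\mathscr{V}^{-m}\tau\in J_n$ whenever $m\ge n$. Hence the sequence $(\mathscr{V}^{-m}\tau)_{m\in \mathbb{N}}$ converges to $0$ in $\mathcal{K}$, whereas $(\mathscr{V}^{-m})_{m\in \mathbb{N}}$ does not.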
In \cite{ZemBordered}*{Proposition~6.4} the author proves that multiplication is continuous as a map
\[
\mu_2\colon \mathcal{K}\mathrel{\vec{\otimes}}_{\mathrm{I}} \mathcal{K}\to \mathcal{K}.
\]
\begin{rem} The map $\mu_2$ is not continuous as a map from $\mathcal{K}\otimes^!_{\mathrm{I}} \mathcal{K}$ to $\mathcal{K}$ (i.e.\ using the standard tensor product topology). To see this, observe that $\mathscr{V}^{-i}\otimes \mathscr{V}^i\sigma\to 0$ in $\mathcal{K}\otimes^!_{\mathrm{I}} \mathcal{K}$, while $\mu_2(\mathscr{V}^{-i}\otimes \mathscr{V}^i\sigma)=\sigma\not\to 0$.
\end{rem}
If $\mathcal{X}$ and $\mathcal{Y}$ are linear topological spaces, we will define a \emph{linear topological morphism} from $\mathcal{X}$ to $\mathcal{Y}$ to be a continuous linear map from $\ve{\mathcal{X}}$ to $\ve{\mathcal{Y}}$.
\begin{define}
\label{def:Alexander-type-D}
A type-$D$ Alexander module over $\mathcal{K}$ consists of a linear topological $\ve{I}$-module $\mathcal{X}$ equipped with a linear topological morphism $\delta^1\colon \mathcal{X}\to \mathcal{X}\mathrel{\vec{\otimes}} \mathcal{K}$, such that $(\id\otimes \mu_2)\circ(\delta^1\otimes \id_{\mathcal{K}})\circ \delta^1=0$.
\end{define}
In \cite{ZemBordered}*{Section~6}, the author also describes the categories of type-$A$ and $DA$ Alexander modules. A type-$A$ Alexander module $(\mathcal{X}, m_j)$ consists of a linear topological right $\ve{I}$-module $\mathcal{X}$, equipped with linear topological morphisms
\[
m_{j+1}\colon \mathcal{K}\mathrel{\vec{\otimes}}_{\mathrm{I}}\cdots \mathrel{\vec{\otimes}}_{\mathrm{I}} \mathcal{K}\mathrel{\vec{\otimes}}_{\mathrm{I}} \mathcal{X}\to \mathcal{X}
\]
which satisfy the $A_\infty$-module relations. Type-$DA$ Alexander modules are similar.
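Concretely, since the algebra $\mathcal{K}$ has no differential and we work over $\mathbb{F}$ (so no signs appear), the $A_\infty$-module relations take the form
\[
0=\sum_{i+j=k} m_{i+1}\big(a_1,\dots,a_i, m_{j+1}(a_{i+1},\dots,a_{k},\ve{x})\big)+\sum_{i=1}^{k-1} m_{k}\big(a_1,\dots, a_i\cdot a_{i+1},\dots, a_k,\ve{x}\big)
\]
for all $k\ge 0$ and $a_1,\dots,a_k\in \mathcal{K}$. For $k=0$ this says $m_1\circ m_1=0$, and for $k=1$ it says that $m_1$ satisfies the Leibniz rule with respect to $m_2$.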
\subsection{Surgery formulas and $\mathcal{K}$}
The author showed how to view the knot and link surgery formulas of Ozsv\'{a}th--Szab\'{o} \cite{OSIntegerSurgeries} \cite{OSRationalSurgeries} and Manolescu--Ozsv\'{a}th \cite{MOIntegerSurgery} as certain types of $\mathcal{K}$-modules. The author used the algebraic framework of type-$A$ and type-$D$ modules of Lipshitz, Ozsv\'{a}th and Thurston \cite{LOTBordered} \cite{LOTBimodules}. To a knot $K$ in $S^3$ (or more generally, in a rational homology 3-sphere) the mapping cone formula of Ozsv\'{a}th--Szab\'{o} may naturally be viewed as either a type-$D$ module over $\mathcal{K}$, denoted $\mathcal{X}_{\lambda}(K)^{\mathcal{K}}$, or a type-$A$ module over $\mathcal{K}$, denoted ${}_{\mathcal{K}} \mathcal{X}_{\lambda}(K)$. Similarly, if $L\subseteq S^3$ is a link with framing $\Lambda$, the link surgery formula of Manolescu and Ozsv\'{a}th \cite{MOIntegerSurgery} may naturally be viewed as a type-$D$ module $\mathcal{X}_{\Lambda}(L)^{\mathcal{L}_{\ell}}$ over the tensor product algebra $\mathcal{L}_\ell=\mathcal{K}^{\otimes_{\mathbb{F}} \ell}$, where $\ell=|L|$.
We describe the correspondence for the case of the knot surgery formula for a knot $K\subseteq S^3$. We recall that Ozsv\'{a}th and Szab\'{o}'s mapping cone complex is an appropriate completion of the mapping cone
\[
\mathrm{Cone}\left(v+h_n\colon \mathcal{C}FK(K)\to \mathscr{V}^{-1} \mathcal{C}FK(K)\right).
\]
Here, $v$ and $h_n$ are two $\mathbb{F}[U]$-equivariant maps, where $U$ acts by $\mathscr{U}\mathscr{V}$.
The type-$D$ module $\mathcal{X}_{n}(K)^{\mathcal{K}}$ is defined as follows. Let $\ve{x}_1,\dots, \ve{x}_N$ be a free $\mathbb{F}[\mathscr{U},\mathscr{V}]$-basis of $\mathcal{C}FK(K)$. Then $\mathcal{X}_n(K)\cdot \ve{I}_0$ is generated over $\mathbb{F}$ by copies of the basis elements $\ve{x}_1^0,\dots, \ve{x}_N^0$, and similarly $\mathcal{X}_n(K)\cdot \ve{I}_1$ is generated by elements $\ve{x}_1^1,\dots, \ve{x}_N^1$. The structure map
\[
\delta^1\colon \mathcal{X}_n(K)\to \mathcal{X}_n(K)\otimes_{\mathrm{I}} \mathcal{K}
\]
has three types of summands: those arising from $\partial$, those arising from $v$, and those arising from $h_n$, as follows. Write $\partial$ for the differential on $\mathcal{C}FK(K)$. If $\partial\ve{x}_i$ has a summand of $\ve{y}_j\cdot \mathscr{U}^a\mathscr{V}^b$, then $\delta^1(\ve{x}_i^\varepsilon)$ has a summand of $\ve{y}_j^\varepsilon\otimes \mathscr{U}^a \mathscr{V}^b$, for $\varepsilon\in \{0,1\}$. Each $\delta^1(\ve{x}^0_i)$ has a summand of the form $\ve{x}^1_i\otimes \sigma$. Finally, if $h_n(\ve{x}_i)$ has a summand of $\ve{y}_j\cdot\mathscr{U}^a\mathscr{V}^b$, then $\delta^1(\ve{x}_i^0)$ has a summand of $\ve{y}_j^1\otimes \mathscr{U}^a \mathscr{V}^b \cdot\tau$. It is verified in \cite{ZemBordered}*{Lemma~8.9} that $\mathcal{X}_n(K)^{\mathcal{K}}$ is a type-$D$ module over $\mathcal{K}$.
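As an illustration, consider the unknot, assuming the standard model in which its knot Floer complex has a single generator $\ve{x}$ with $\partial \ve{x}=0$, $v$ is the canonical inclusion, and $h_n$ is given (up to homotopy) by multiplication by $\mathscr{V}^n$. The above recipe then gives
\[
\delta^1(\ve{x}^0)=\ve{x}^1\otimes(\sigma+\mathscr{V}^n\tau)\quad \text{and}\quad \delta^1(\ve{x}^1)=0,
\]
which agrees with the type-$D$ module of the $n$-framed solid torus described below.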
For the description of the full link surgery complex of a link $L\subseteq S^3$ in terms of type-$D$ modules over the algebra $\mathcal{L}$, we refer the reader to \cite{ZemBordered}*{Section~8}. We note that in general, the homotopy type of the link surgery type-$D$ module depends non-trivially on the system of arcs $\mathscr{A}$. For example, the type-$D$ link surgery module of the Hopf link depends non-trivially on $\mathscr{A}$; see \cite{ZemBordered}*{Section~16}. We write $\mathcal{X}_{\Lambda}(L,\mathscr{A})^{\mathcal{L}}$ for the type-$D$ module constructed with $\mathscr{A}$.
There is also a bimodule ${}_{\mathcal{K}} \mathcal{T}^{\mathcal{K}}$ whose effect is changing an alpha-parallel arc to a beta-parallel arc, and vice-versa. See \cite{ZemBordered}*{Section~14}. This bimodule is related to the Dehn twist diffeomorphism on knot Floer homology discovered by Sarkar \cite{SarkarMovingBasepoints} and further studied by the author \cite{ZemQuasi}.
\subsection{Type-$A$ and $D$ modules for solid tori}
We now review the type-$A$ and type-$D$ modules of integrally framed solid tori (by which we mean the complements of integrally framed unknots in $S^3$).
If $n\in \mathbb{Z}$, we define type-$A$ and type-$D$ modules ${}_{\mathcal{K}} \mathcal{D}_n$ and $\mathcal{D}_n^{\mathcal{K}}$ as follows. We begin with ${}_{\mathcal{K}} \mathcal{D}_n$. We set
\[
\ve{I}_0\cdot \mathcal{D}_n=\mathbb{F}[\hspace{-.5mm}[ \mathscr{U},\mathscr{V}]\hspace{-.5mm}]\quad \text{and}\quad \ve{I}_1\cdot \mathcal{D}_n=\mathbb{F}[\hspace{-.5mm}[ \mathscr{U},\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}].
\]
The action of $\ve{I}_{\varepsilon}\cdot \mathcal{K}\cdot \ve{I}_{\varepsilon}$ is ordinary polynomial multiplication. If $\ve{x}\in \ve{I}_0\cdot \mathcal{D}_n$, we set $\sigma\cdot \ve{x}=I(\ve{x})$ and $\tau\cdot \ve{x}=\mathscr{V}^n\cdot T(\ve{x})$, where
\[
I,T\colon \mathbb{F}[\mathscr{U},\mathscr{V}]\to \mathbb{F}[\mathscr{U},\mathscr{V},\mathscr{V}^{-1}]
\]
are the following maps. The map $I$ is the inclusion from localizing at $\mathscr{V}$. The map $T$ is given by $T(\mathscr{U}^i\mathscr{V}^j)=\mathscr{U}^j \mathscr{V}^{2j-i}$.
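As a quick consistency check that this defines a module action, note that for $\ve{x}\in \ve{I}_0\cdot \mathcal{D}_n$ we have
\[
(\tau\cdot \mathscr{U})\cdot \ve{x}=\mathscr{V}^{-1}\cdot(\tau\cdot \ve{x})=\mathscr{V}^{n-1}\,T(\ve{x})=\mathscr{V}^n\,T(\mathscr{U})T(\ve{x})=\tau\cdot(\mathscr{U}\cdot \ve{x}),
\]
using the relation $\tau\cdot \mathscr{U}=\mathscr{V}^{-1}\cdot \tau$ in $\mathcal{K}$; the computation for $\mathscr{V}$ is analogous.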
The type-$D$ module $\mathcal{D}_n^{\mathcal{K}}$ is as follows. As a right $\ve{I}$-module, we set $\mathcal{D}_n\cong \ve{I}$, viewed as being generated by $i_0\in \ve{I}_0$ and $i_1\in \ve{I}_1$. The structure map is given by the formula
\[
\delta^1(i_0)=i_1\otimes (\sigma+\mathscr{V}^n \tau).
\]
\subsection{The merge and type-$A$ identity bimodules}
In \cite{ZemBordered}*{Section~8}, the author defined the \emph{merge} bimodule and the \emph{type-$A$ identity} bimodule. We recall the definition of these modules presently.
We begin with the merge module ${}_{\mathcal{K}| \mathcal{K}} M^{\mathcal{K}}$. Ignoring completions, the merge module is a $DA$-bimodule over $(\mathcal{K}\otimes_{\mathbb{F}} \mathcal{K}, \mathcal{K})$. However, the structure map $\delta_3^1$ of the merge module is not continuous as a map from $(\mathcal{K}\otimes_{\mathbb{F}}^! \mathcal{K})\mathrel{\vec{\otimes}} (\mathcal{K}\otimes_{\mathbb{F}}^! \mathcal{K})\mathrel{\vec{\otimes}} M$ to $M\mathrel{\vec{\otimes}} \mathcal{K}$. Instead, the merge module is a \emph{split Alexander} module in the terminology of \cite{ZemBordered}*{Section~6.4}. This is weaker than being an Alexander module; it means that the map $\delta_3^1$ is continuous for a finer topology than the topology used to define an Alexander module over $\mathcal{K}\otimes_{\mathbb{F}} \mathcal{K}$. The split Alexander property is sufficient for taking the box tensor product of ${}_{\mathcal{K}|\mathcal{K}} M^{\mathcal{K}}$ with a pair of type-$D$ modules $\mathcal{X}^{\mathcal{K}}$ and $\mathcal{Y}^\mathcal{K}$, but not for general type-$D$ modules over $\mathcal{K}\otimes_{\mathbb{F}}^! \mathcal{K}$.
As an $(\ve{I}\otimes \ve{I},\ve{I})$-module, $M$ is isomorphic to $\ve{I}$. The right action is the obvious one. The left action is given by $(i_{\varepsilon}\otimes i_{\nu})\cdot i=i_{\varepsilon}\cdot i_{\nu}\cdot i$.
The map $\delta_2^1$ is defined as follows. If $a,b\in \ve{I}_{0} \cdot \mathcal{K} \cdot \ve{I}_0$, then we set $\delta_2^1(a\otimes b, i_0)=i_0\otimes a\cdot b$. We use the same formula if $a,b\in \ve{I}_1\cdot \mathcal{K} \cdot \ve{I}_1$. If $a$ and $b$ are in other idempotents, then we set $\delta_2^1(a\otimes b,i_\varepsilon)=0$.
Additionally, there is a $\delta_3^1$ term, determined by the equations
\[
\delta_3^1(\tau\otimes 1, 1\otimes \tau, i_0)=i_1\otimes\tau,\quad \quad \delta_3^1(\sigma\otimes 1, 1\otimes \sigma, i_0)=i_1\otimes \sigma \quad \text{and}
\]
\[
\delta_3^1(1\otimes \tau,\tau\otimes 1, i_0)=\delta_3^1(1\otimes \sigma,\sigma\otimes 1, i_0)=0.
\]
Using the merge module, we define the type-$A$ identity bimodule
\[
{}_{\mathcal{K}|\mathcal{K}} [\mathbb{I}^{\Supset}]:={}_{\mathcal{K}| \mathcal{K}} M^{\mathcal{K}}\mathrel{\hat{\boxtimes}} {}_{\mathcal{K}} \mathcal{D}_0.
\]
The type-$A$ identity bimodule relates the type-$D$ and type-$A$ modules of a knot complement via the formula
\[
\mathcal{X}_n(K)^\mathcal{K} \mathrel{\hat{\boxtimes}} {}_{\mathcal{K}|\mathcal{K}} [\mathbb{I}^{\Supset}]\cong {}_{\mathcal{K}} \mathcal{X}_n(K).
\]
\subsection{A pairing theorem}
Suppose that $L_1$ and $L_2$ are two links in $S^3$. The author gave several descriptions of the link surgery formula for $L_1\#L_2$ in terms of the link surgery formulas for $L_1$ and $L_2$. We refer to these connected sum formulas as \emph{pairing theorems}, since topologically, performing $\lambda_1+\lambda_2$ surgery on $K_1\# K_2\subseteq S^3$ is the same as gluing the complements of $K_1$ and $K_2$ via the orientation-reversing diffeomorphism which sends the meridian $\mu_1$ to $\mu_2$ and the longitude $\lambda_1$ of $K_1$ to the longitude $-\lambda_2$ of $K_2$. See \cite{ZemBordered}*{Section~12} for more details on these pairing theorems.
We begin by describing the pairing theorem on the level of the link surgery complexes. Subsequently, we will describe the connected sum formula in terms of type-$D$ modules. Write $\mathcal{C}_{\Lambda_1}(L_1)=(\mathcal{C}_{\varepsilon}(L_1),d_{\varepsilon,\varepsilon'})$ and $\mathcal{C}_{\Lambda_2}(L_2)=(\mathcal{C}_{\nu}(L_2),\delta_{\nu,\nu'})$ for the link surgery formulas of links $L_1$ and $L_2$, with integral framings $\Lambda_1$ and $\Lambda_2$. Write $K_1$ and $K_2$ for the distinguished components of $L_1$ and $L_2$ (along which we take the connected sum). We assume that the link surgery complexes are computed with systems of arcs $\mathscr{A}_1$ and $\mathscr{A}_2$ so that $K_1$ has an alpha-parallel arc, and $K_2$ has a beta-parallel arc. We write $\mathscr{A}_{1\# 2}$ for the system of arcs on $L_1 \# L_2$ which coincides with $\mathscr{A}_1$ and $\mathscr{A}_2$ away from $K_1\# K_2$, and which has an arc for $K_1\# K_2$ consisting of the co-core of the connected sum band. See Figure~\ref{fig:4}.
\begin{figure}[ht]
\input{fig4.pdf_tex}
\caption{Left: arc systems $\mathscr{A}_1$ and $\mathscr{A}_2$ so that components $K_1\subseteq L_1$ and $K_2\subseteq L_2$ have arcs which are beta-parallel and alpha-parallel. Right: The arc system $\mathscr{A}_{1\#2}$ on $L_1\# L_2$.}
\label{fig:4}
\end{figure}
If $\varepsilon\in \{0,1\}$, write $\mathcal{C}^{(*,\varepsilon)}(L_1)$ for the codimension one subcube of $\mathcal{C}_{\Lambda_1}(L_1,\mathscr{A}_1)$ consisting of complexes which have $K_1$-component $\varepsilon$. Define $\mathcal{C}^{(*,\varepsilon)}(L_2)$ similarly. We may view $\mathcal{C}_{\Lambda_1}(L_1,\mathscr{A}_1)$ as a mapping cone from $\mathcal{C}^{(*,0)}(L_1)$ to $\mathcal{C}^{(*,1)}(L_1)$ as follows:
\[
\mathcal{C}_{\Lambda_1}(L_1,\mathscr{A}_1)=\mathrm{Cone}\big( \begin{tikzcd}[column sep=1.5cm] \mathcal{C}^{(*,0)}(L_1)\ar[r, "F^{K_1}+F^{-K_1}"]& \mathcal{C}^{(*,1)}(L_1)
\end{tikzcd}\big).
\]
In the above, $F^{K_1}$ is the sum of the hypercube maps $\Phi^{\vec{M}}$ of $\mathcal{C}_{\Lambda_1}(L_1)$, ranging over all oriented sublinks $\vec{M}\subseteq L_1$ such that $+K_1\subseteq \vec{M}$. Similarly, $F^{-K_1}$ is the sum of the hypercube maps such that $-K_1\subseteq \vec{M}$. Each $\mathcal{C}^{(*,\varepsilon)}(L_1)$ has an internal differential consisting of the sum of the hypercube maps for sublinks $\vec{M}$ which do not contain $\pm K_1$. We may similarly view $\mathcal{C}_{\Lambda_2}(L_2)$ as a mapping cone from $\mathcal{C}^{(*,0)}(L_2)$ to $\mathcal{C}^{(*,1)}(L_2)$.
The pairing theorem is the following:
\begin{thm}[\cite{ZemBordered}*{Theorem~12.1}]
\label{thm:pairing-main}
With respect to the above notation, the link surgery complex $\mathcal{C}_{\Lambda_1+\Lambda_2}(L_1\# L_2,\mathscr{A}_{1\# 2})$ is homotopy equivalent to
\[
\begin{tikzcd}[column sep=4cm] \mathcal{C}^{(*,0)}(L_1)\otimes_{\mathbb{F}[\mathscr{U},\mathscr{V}]} \mathcal{C}^{(*,0)}(L_2)\ar[r, "F^{K_1}\otimes F^{K_2}+F^{-K_1}\otimes F^{-K_2}"] & \mathcal{C}^{(*,1)}(L_1)\otimes_{\mathbb{F}[\mathscr{U},\mathscr{V},\mathscr{V}^{-1}]} \mathcal{C}^{(*,1)}(L_2).
\end{tikzcd}
\]
Here, $\Lambda_1+\Lambda_2$ is obtained by summing the framing on $K_1$ and $K_2$, and using the other framings on $L_1$ and $L_2$. Also, the differential on $\mathcal{C}^{(*,\varepsilon)}(L_1)\otimes \mathcal{C}^{(*,\varepsilon)}(L_2)$ is the ordinary differential (Leibniz rule) on the tensor product of two chain complexes.
\end{thm}
Note that Theorem~\mathrm{re}f{thm:pairing-main} has an alternate description in terms of the type-$D$ modules. Namely, it translates to the statement
\[
\mathcal{X}_{\Lambda_1+\Lambda_2}(L_1\#L_2,\mathscr{A}_{1\#2})^{\mathcal{L}_{\ell_1+\ell_2-1}}\simeq \left(\mathcal{X}_{\Lambda_1}(L_1,\mathscr{A}_1)^{\mathcal{L}_{\ell_1}} , \mathcal{X}_{\Lambda_2}(L_2,\mathscr{A}_2)^{\mathcal{L}_{\ell_2}}\right)\mathrel{\hat{\boxtimes}} {}_{\mathcal{K}| \mathcal{K}} M^{\mathcal{K}}.
\]
See \cite{ZemBordered}*{Section~12}. Note that if we ignore completions, the above box tensor product is obtained as follows: we first take the external tensor product of $\mathcal{X}_{\Lambda_1}(L_1,\mathscr{A}_1)^{\mathcal{L}_{\ell_1}}$ and $\mathcal{X}_{\Lambda_2}(L_2,\mathscr{A}_2)^{\mathcal{L}_{\ell_2}}$, i.e.\ we take the tensor product of the modules over $\mathbb{F}$, and use the Leibniz rule to form the differential $\delta^1\otimes \id\otimes 1_{\mathcal{L}_{\ell_2}}+\id\otimes 1_{\mathcal{L}_{\ell_1}}\otimes \delta^1$, with tensor factors reordered. This yields a type-$D$ module over $\mathcal{L}_{\ell_1+\ell_2}$. Next, we view ${}_{\mathcal{K}|\mathcal{K}} M^{\mathcal{K}}$ as a type-$DA$ module over $(\mathcal{K}\otimes_{\mathbb{F}} \mathcal{K},\mathcal{K})$, and take the external tensor product of $M$ with the identity bimodule ${}_{\mathcal{L}_{\ell_1+\ell_2-2}}[\mathbb{I}]^{\mathcal{L}_{\ell_1+\ell_2-2}}$ to get a $DA$-bimodule over $(\mathcal{L}_{\ell_1+\ell_2},\mathcal{L}_{\ell_1+\ell_2-1})$. Finally, we compute the box tensor product as usual \cite{LOTBordered}*{Section~2.4}.
\subsection{The pair-of-pants bimodules}
The connected sum formula in Theorem~\mathrm{re}f{thm:pairing-main} requires one of the components $K_1\subseteq L_1$ and $K_2\subseteq L_2$ to have an alpha-parallel arc, and the other to have a beta-parallel arc. The output arc in $\mathscr{A}_{1\# 2}$ is neither alpha-parallel nor beta-parallel. For taking iterated connected sums of knots, we need to change the arc $\mathscr{A}_{1\# 2}$ so that the arc for $K_1\#K_2$ is either alpha or beta-parallel. A general formula for changing arcs is described in \cite{ZemBordered}*{Section~13.2}. In our present case, taking the connected sum and then changing the arc for $K_1\# K_2$ can be encoded by one of two bimodules, which are similar to the merge modules. We call these the \emph{pair-of-pants} bimodules, and we denote them by ${}_{\mathcal{K}|\mathcal{K}} W_l^{\mathcal{K}}$ and ${}_{\mathcal{K}|\mathcal{K}} W_r^{\mathcal{K}}$.
The module ${}_{\mathcal{K}|\mathcal{K}} W_r^{\mathcal{K}}$ has $\delta_2^1$ and $\delta_3^1$ identical to the merge module. Additionally, there is a $\delta_5^1$, given as follows:
\begin{equation}
\delta_5^1(a|b, a'|b', 1|\tau,\tau|1, i_0)=i_1\otimes \mathscr{V}^{-1}\partial_{\mathscr{U}}(ab) a' \left(\mathscr{U} \partial_{\mathscr{U}}+\mathscr{V} \partial_{\mathscr{V}}\right)(b') \tau. \label{eq:delta-5-1-merge}
\end{equation}
In the above, $\partial_{\mathscr{U}}$ and $\partial_{\mathscr{V}}$ denote the formal derivatives with respect to $\mathscr{U}$ and $\mathscr{V}$, respectively. The module ${}_{\mathcal{K}|\mathcal{K}}W_l^{\mathcal{K}}$ is similar, but has the roles of the two tensor factors switched.
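For instance, evaluating Equation~\eqref{eq:delta-5-1-merge} with $a|b=\mathscr{U}|1$ and $a'|b'=1|\mathscr{V}$ gives $\partial_{\mathscr{U}}(ab)=1$ and $(\mathscr{U}\partial_{\mathscr{U}}+\mathscr{V}\partial_{\mathscr{V}})(b')=\mathscr{V}$, so that
\[
\delta_5^1(\mathscr{U}|1, 1|\mathscr{V}, 1|\tau, \tau|1, i_0)=i_1\otimes \mathscr{V}^{-1}\cdot \mathscr{V}\cdot \tau=i_1\otimes \tau.
\]
Note that over $\mathbb{F}$, the operator $\mathscr{U}\partial_{\mathscr{U}}+\mathscr{V}\partial_{\mathscr{V}}$ multiplies a monomial by its total degree modulo $2$.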
The importance of these bimodules is illustrated in the following theorem:
\begin{thm}[\cite{ZemBordered}*{Theorem~15.2}] Suppose that $L_1,L_2\subseteq S^3$ are two framed links with systems of arcs $\mathscr{A}_1$ and $\mathscr{A}_2$, respectively. Suppose also that we form $L_1\# L_2$ by taking the connected sum along components $K_1\subseteq L_1$ and $K_2\subseteq L_2$. Suppose that the arc for $K_1$ is alpha-parallel, and the arc for $K_2$ is beta-parallel. Let $\mathscr{A}_l$ denote the system of arcs on the connected sum which coincides with $\mathscr{A}_1$ and $\mathscr{A}_2$ away from $K_1\# K_2$ and is alpha-parallel on $K_1\#K_2$. Let $\mathscr{A}_r$ denote the analogous system of arcs on $L_1\#L_2$ which is instead beta-parallel on $K_1\# K_2$. Then
\[
\mathcal{X}_{\Lambda_1+\Lambda_2}(L_1\#L_2,\mathscr{A}_r)^{\mathcal{L}_{\ell_1+\ell_2-1}}\simeq (\mathcal{X}_{\Lambda_1}(L_1,\mathscr{A}_1)^{\mathcal{L}_{\ell_1}}, \mathcal{X}_{\Lambda_2}(L_2,\mathscr{A}_2)^{\mathcal{L}_{\ell_2}})\mathrel{\hat{\boxtimes}} {}_{\mathcal{K}|\mathcal{K}} W_{r}^{\mathcal{K}},
\]
and similarly if $\mathscr{A}_{r}$ and $W_r$ are replaced by $\mathscr{A}_{l}$ and $W_l$.
\end{thm}
We now recall an alternate description of the pair-of-pants bimodules in terms of the link surgery complexes; see \cite{ZemBordered}*{Section~15.2}. Write $\mathcal{C}_{\Lambda_i}(L_i,\mathscr{A}_i)$ as mapping cones
\[
\mathcal{C}_{\Lambda_i}(L_i,\mathscr{A}_i)=\mathrm{Cone}\left( \begin{tikzcd}[column sep=1.5cm] \mathcal{C}^{(*,0)}(L_i)\ar[r, "F^{K_i}+F^{-K_i}"] & \mathcal{C}^{(*,1)}(L_i)
\end{tikzcd}\right).
\]
Then $\mathcal{C}_{\Lambda_1+\Lambda_2}(L_1\# L_2, \mathscr{A}_{l})$ may also be described as the mapping cone
\begin{equation}
\begin{tikzcd}[column sep=8.5cm] \mathcal{C}^{(*,0)}_1\otimes \mathcal{C}^{(*,0)}_2\ar[r, "F^{K_1}\otimes F^{K_2}+\left(\id+ \mathscr{V}^{-1}(\Phi_{w_1}+\Phi_{w_2})\circ (\mathcal{A}_{[K_1]} \otimes \id)\right)\circ (F^{-K_1}\otimes F^{-K_2})"] & \mathcal{C}^{(*,1)}_1\otimes \mathcal{C}^{(*,1)}_2,
\end{tikzcd}
\label{eq:alternate-pairing-theorem}
\end{equation}
where $\mathcal{C}^{(*,\varepsilon)}_i$ denotes $\mathcal{C}^{(*,\varepsilon)}(L_i)$.
In the above, the maps $\Phi_{w_i}$ denote the analogs of the basepoint actions for the hypercubes $\mathcal{C}^{(*,1)}(L_i)$. These maps are defined similarly to the basepoint actions on ordinary link Floer complexes, except that they are defined in the setting of hypercubes. Note that they are morphisms of hypercubes, and so will generally have non-trivial components of length greater than zero (in particular, $\Phi_{w_i}$ is not the same as the internal basepoint action on $\mathcal{C}FL(L_i)$). The map $\mathcal{A}_{[K_1]}$ is the hypercube homology action of the curve $K_1\subseteq \Sigma_1$. See \cite{ZemBordered}*{Sections~13.2 and 13.3} for more detail on these constructions.
The map $\mathcal{A}_{[K_1]}$ appearing in Equation~\eqref{eq:alternate-pairing-theorem} has another description which is more immediately related to the expression in Equation~\eqref{eq:delta-5-1-merge}. According to \cite{ZemBordered}*{Lemma~13.28}, there is a chain homotopy
\betaegin{equation}
\mathcal{A}_{[K_1]}\simeq \mathscr{U}\betaPhi_{w_1}+\mathscr{V} \betaPsi_{z_1}, \label{eq:homology-action-basepoint}
\end{equation}
where we view both maps as being endomorphisms of the hypercube $\mathcal{C}^{(*,1)}(L_1)$. In the above, we are writing $\betaPhi_{w_1}$ and $\betaPsi_{z_1}$ for the algebraic basepoint action of the basepoints of $K_1$ on the hypercube $\mathcal{C}^{(*,1)}(L_1)$.
\subsection{The Hopf link surgery complex}
We now recall the link surgery hypercube for the Hopf link, which was computed in \cite{ZemBordered}*{Section~16}. Recall that the negative Hopf link has the following link Floer complex:
\betaegin{equation}
\mathcal{C}FL(H)\cong \betaegin{tikzcd}[labels=description,row sep=1cm, column sep=1cm] \ve{a}& \ve{b}\mathrm{a.r.}[l, "\mathscr{U}_2"] \mathrm{a.r.}[d, "\mathscr{U}_1"]\\
\ve{c}\mathrm{a.r.}[r, "\mathscr{V}_2"] \mathrm{a.r.}[u, "\mathscr{V}_1"]& \ve{d}.
\end{tikzcd}
\label{eq:Hopf-def}
\end{equation}
There are two models for the Hopf link surgery hypercube, depending on the choice of arc system. The models are summarized in the following proposition:
\betaegin{prop}[\cite{ZemBordered}*{Proposition~16.1}]
\label{prop:Hopf-link-large-model} Write $H=K_1\cup K_2$ for the negative Hopf link, and suppose that $\Lambda=(\lambda_1,\lambda_2)$ is an integral framing of $H$. Let $\mathscr{A}$ be a system of arcs for $H$ where both arcs are alpha-parallel or both arcs are beta-parallel. The maps in the surgery complex $\mathcal{C}_{\Lambda}(H,\mathscr{A})$ are, up to overall homotopy equivalence, as follows:
\betaegin{enumerate}
\item The map $\betaPhi^{K_1}$ is the canonical inclusion of localization. The map $\betaPhi^{-K_1}$ is given by the following formula:
\[
\betaPhi^{-K_1} =\betaegin{tikzcd}[row sep=0cm]
\ve{a} \mathrm{a.r.}[r,mapsto]& \ve{d}\mathscr{V}_1^{\lambda_1-1}\\
\ve{b} \mathrm{a.r.}[r, mapsto]& 0\\
\ve{c} \mathrm{a.r.}[r, mapsto]& \ve{b}\mathscr{V}_1^{\lambda_1+1}+\ve{c}\mathscr{V}_1^{\lambda_1}\mathscr{U}_2\\
\ve{d} \mathrm{a.r.}[r,mapsto]& \ve{d}\mathscr{V}_1^{\lambda_1}\mathscr{U}_2.
\end{tikzcd}
\]
\item The map $\betaPhi^{K_2}$ is the canonical inclusion of localization, and $\betaPhi^{-K_2}$ is given by the following formula:
\[
\betaPhi^{-K_2}=\betaegin{tikzcd}[row sep=0cm]
\ve{a}\mathrm{a.r.}[r, mapsto] &\ve{a}\mathscr{U}_1\mathscr{V}_2^{\lambda_2}\\
\ve{b}\mathrm{a.r.}[r, mapsto] &0\\
\ve{c}\mathrm{a.r.}[r, mapsto] &\ve{b}\mathscr{V}_2^{\lambda_2+1}+\ve{c}\mathscr{U}_1\mathscr{V}_2^{\lambda_2} \\
\ve{d}\mathrm{a.r.}[r, mapsto]&\ve{a}\mathscr{V}_2^{\lambda_2-1}.
\end{tikzcd}
\]
\item The length 2 map $\betaPhi^{-K_1\cup -K_2}$ is given by the following formula:
\[
\betaPhi^{-K_1\cup -K_2}=\betaegin{tikzcd}[row sep=0cm]
\ve{a}\mathrm{a.r.}[r, mapsto] &\ve{c}\mathscr{V}_1^{\lambda_1-2}\mathscr{V}_2^{\lambda_2-1}\\
\ve{b}\mathrm{a.r.}[r, mapsto] &0\\
\ve{c}\mathrm{a.r.}[r, mapsto] &\ve{d}\mathscr{V}_1^{\lambda_1-1} \mathscr{V}_2^{\lambda_2}\\
\ve{d}\mathrm{a.r.}[r, mapsto]&\ve{c}\mathscr{V}_1^{\lambda_1-1} \mathscr{V}^{\lambda_2-2}_2.
\end{tikzcd}
\]
The length 2 maps for other orientations of the Hopf link vanish.
\end{enumerate}
\end{prop}
The above formulas are stated only for the values of the maps on the generators $\ve{a}$, $\ve{b}$, $\ve{c}$ and $\ve{d}$. Values on the rest of the complex are determined by the equivariance properties of the maps proven in \cite{ZemBordered}*{Lemma~7.7}.
\betaegin{prop}[\cite{ZemBordered}*{Proposition~16.7}] Let $\overline\mathscr{A}$ be a system of arcs on the Hopf link $H$ where one arc is alpha-parallel and the other arc is beta-parallel. The surgery complex for the Hopf link $\mathcal{C}_{\Lambda}(H,\overline\mathscr{A})$ is identical to the one in Proposition~\mathrm{re}f{prop:Hopf-link-large-model}, except that we delete the term $\ve{c}\mapsto \ve{d}\mathscr{V}_1^{\lambda_1-1} \mathscr{V}_2^{\lambda_2}$ from the expression in $\betaPhi^{-K_1\cup -K_2}$.
\end{prop}
The above complexes $\mathcal{C}_{\Lambda}(H,\mathscr{A})$ and $\mathcal{C}_{\Lambda}(H,\overline \mathscr{A})$ may be repackaged as type-$D$ modules over the algebra $\mathcal{K}\otimes_{\mathbb{F}} \mathcal{K}$. See \cite{ZemBordered}*{Section~8.6}.
We write $\mathcal{H}_{\Lambda}^{\mathcal{K}\otimes \mathcal{K}}$
and $\overline \mathcal{H}_{\Lambda}^{\mathcal{K}\otimes \mathcal{K}}$ for these two surgery complexes.
It is convenient to consider the type-$DA$ versions of the Hopf link complexes, and we set
\[
{}_{\mathcal{K}} \mathcal{H}_{\Lambda}^{\mathcal{K}}:= \mathcal{H}_{\Lambda}^{\mathcal{K}\otimes \mathcal{K}}\mathrel{\hat{\betaoxtimes}} {}_{\mathcal{K}| \mathcal{K}} [\mathbb{I}^{\Supset}],
\]
where the tensor product is taken on a single algebra factor. We define ${}_{\mathcal{K}} \overline{\mathcal{H}}_{\Lambda}^{\mathcal{K}}$ analogously.
\section{Endomorphisms of the link surgery hypercube}
\label{sec:homology-action}
In this section, we study several endomorphisms of the link surgery hypercube which are related to the standard $\Lambda^* (H_1(Y)/\mathbb{T}ors)$ action on $\mathit{HF}^-(Y)$. In Section~\mathrm{re}f{sec:algebraic-action-link-surgery}, we study one endomorphism of the surgery cube which is obtained by summing over a subset of the structure maps. In Section~\mathrm{re}f{sec:diagrammatic-action}, we study an endomorphism of $\mathcal{C}_{\Lambda}(L)$ induced by a closed curve $\gamma\subseteq \Sigma$, computed by counting holomorphic polygons with certain weights. We call these the \emph{algebraic} and \emph{diagrammatic} actions, respectively. In Section~\mathrm{re}f{sec:relating-actions}, we relate these two actions for meridional $\sigma$-basic systems.
In Section~\mathrm{re}f{sec:simplifying-connected-sum-1}, we describe an application of these results to simplify the connected sum formula in certain cases.
\subsection{An algebraic $H_1$-action}
\label{sec:algebraic-action-link-surgery}
We now define an action of $\Lambda^* (H_1(S^3_{\Lambda}(L))/\mathbb{T}ors)$ on $\mathcal{C}_{\Lambda}(L)$. Write $H_1(S^3_{\Lambda}(L))=\mathbb{Z}^\ell/\im \Lambda$, where $\mathbb{Z}^\ell$ denotes the free abelian group generated by the meridians $\mu_i$ of the components of $L$.
Define an endomorphism $F^{K_i}$ of $\mathcal{C}_{\Lambda}(L)$ to be the sum of all hypercube structure maps for oriented sublinks $\vec{M}\subseteq L$ which contain $+K_i$. Define $F^{-K_i}$ similarly. We define the action $\mathfrak{A}_{[\mu_i]}$ of $\mu_i=(0,\partialots, 1,\partialots, 0)$ to be the map $F^{K_i}$.
Note that $F^{K_i}\simeq F^{-K_i}$ as endomorphisms of $\mathcal{C}_{\Lambda}(L)$, since a chain homotopy is given by projecting to the codimension 1 subcube with $K_i$-coordinate $0$.
For a general $\gamma\in H_1(S^3_\Lambda(L))$, write $[\gamma]=a_1 \cdot [\mu_1]+\cdots
+ a_\ell \cdot [\mu_\ell]$ and define
\betaegin{equation}
\mathfrak{A}_{[\gamma]}:=a_1 \cdot F^{K_1}+\cdots +a_\ell\cdot F^{K_\ell}.\label{eq:hypercube-homology-action}
\end{equation}
We now prove that this definition gives a well-defined action of $H_1(S^3_{\Lambda}(L))/\mathbb{T}ors$ on $\mathcal{C}_{\Lambda}(L)$.
\betaegin{lem}\label{lem:algebraic-homology-action-link-surgery} The action $\mathfrak{A}_{[\gamma]}$ of $\mathbb{Z}^\ell$ on $\mathcal{C}_{\Lambda}(L)$ descends to an action of $H_1(S^3_\Lambda(L))/\mathbb{T}ors$ which is well-defined up to $\mathbb{F}[\hspace{-.5mm}[ U_1,\partialots, U_\ell]\hspace{-.5mm}] $-equivariant chain homotopy.
\end{lem}
\betaegin{proof}We recall that $H_1(S^3_{\Lambda}(L))$ is isomorphic to $\mathbb{Z}^\ell/\im \Lambda$, where we view $\Lambda$ as the $\ell\times \ell$ symmetric framing matrix for $L$ whose diagonal entries consist of the framings, and whose off-diagonal entries consist of the linking numbers between components of $L$. Note that $\gamma\in \mathbb{Z}^\ell$ becomes 0 in $H_1(S^3_\Lambda(L))/\mathbb{T}ors$ if and only if $N\cdot \gamma\in \im \Lambda$ for some nonzero $N\in \mathbb{Z}$.
Suppose that $\gamma\in \mathbb{Z}^\ell$ and $[\gamma]=0\in H_1(S^3_{\Lambda}(L))/\mathbb{T}ors$. We will construct a null-homotopy of the map $\mathfrak{A}_{[\gamma]}$ from Equation~\eqref{eq:hypercube-homology-action} as follows. If $\ve{s}\in \mathbb{H}(L)$ and $\varepsilon\in \mathbb{E}_\ell$, write $\mathcal{C}_{\varepsilon}(\ve{s})\subseteq \mathcal{C}_{\varepsilon}\subseteq \mathcal{C}_{\Lambda}(L)$ for the subspace in Alexander grading $\ve{s}$. We will construct a function $\omega_\gamma\colon \mathbb{H}(L)\times \mathbb{E}_\ell\to \mathbb{Z}$ and define a null-homotopy of $\mathfrak{A}_{[\gamma]}$ via the formula:
\[
H_\gamma(\ve{x})=\omega_\gamma(\ve{s},\varepsilon)\cdot \ve{x}
\]
whenever $\ve{x}\in \mathcal{C}_{\varepsilon}(\ve{s})$.
It suffices to show that we may pick a function $\omega_\gamma\colon \mathbb{H}(L)\times \mathbb{E}_\ell\to \mathbb{Z}$ satisfying
\betaegin{equation}
\omega_\gamma(\ve{s}, \varepsilon+e_i)=a_{i}+\omega_\gamma(\ve{s},\varepsilon)\quad \text{and} \quad \omega_\gamma(\ve{s}+\Lambda_{i},\varepsilon+e_i)=\omega_\gamma(\ve{s},\varepsilon).
\label{eq:requirements-omega}
\end{equation}
If such a function $\omega_{\gamma}$ exists, then it is straightforward to verify that
\[
\mathfrak{A}_{[\gamma]}=[\partial, H_\gamma],
\]
as endomorphisms of $\mathcal{C}_{\Lambda}(L)$.
To establish the existence of a function $\omega_\gamma$ satisfying Equation~\eqref{eq:requirements-omega}, note that we may instead construct a function $\eta_\gamma\colon \mathbb{H}(L)\to \mathbb{Z}$ satisfying
\betaegin{equation}
\eta_\gamma(\ve{s}+\Lambda_i)=\eta_\gamma(\ve{s})-a_i. \label{eq:coherence-eta}
\end{equation}
Given such an $\eta_\gamma$, if $\varepsilon=(\varepsilon_1,\partialots, \varepsilon_{\ell})\in \mathbb{E}_\ell$ we set
\[
\omega_\gamma(\ve{s}, \varepsilon)=\eta_\gamma(\ve{s})+\varepsilon_1\cdot a_1+\cdots +\varepsilon_\ell \cdot a_\ell.
\]
To construct such an $\eta_\gamma$, we pick a representative of each class in $\mathbb{H}(L)/\im \Lambda$ and define $\eta_\gamma$ arbitrarily on these elements. We extend $\eta_\gamma$ to all of $\mathbb{H}(L)$ using Equation~\eqref{eq:coherence-eta}. To see that the resulting map $\eta_\gamma$ is well-defined, it suffices to show that if
\betaegin{equation}
j_1 \Lambda_1+\cdots+j_\ell \Lambda_\ell=0,\label{eq:sum-j-Lambda}
\end{equation}
then
\[
j_1 a_1+\cdots+ j_\ell a_\ell=0.
\]
Equation~\eqref{eq:sum-j-Lambda} implies that $(j_1,\partialots, j_\ell)$ is in the null-space of $\Lambda$. Since $[\gamma]=0$ in $H_1(S^3_{\Lambda}(L))/\mathbb{T}ors$, the vector $(a_1,\partialots,a_\ell)^T$ lies in the rational image of $\Lambda$; as $\Lambda$ is symmetric, its null-space is orthogonal to its rational image, so the dot product of $(j_1,\partialots, j_\ell)$ with $(a_1,\partialots,a_\ell)^T$ vanishes. The proof is complete.
\end{proof}
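For the reader's convenience, we sketch the verification, asserted in the proof above, that $\mathfrak{A}_{[\gamma]}=[\partial, H_\gamma]$. A component of the differential indexed by an oriented sublink $\vec{M}\subseteq L$ sends $\mathcal{C}_{\varepsilon}(\ve{s})$ into $\mathcal{C}_{\varepsilon'}(\ve{s}')$, where $\varepsilon'$ is obtained from $\varepsilon$ by increasing the $K_i$-coordinate for each $K_i\in \vec{M}$, and $\ve{s}'$ is obtained from $\ve{s}$ by adding $\Lambda_i$ for each negatively oriented component $-K_i\in \vec{M}$. Applying Equation~\eqref{eq:requirements-omega} once for each component of $\vec{M}$ gives
\[
\omega_\gamma(\ve{s}',\varepsilon')-\omega_\gamma(\ve{s},\varepsilon)\equiv \sum_{i:\, +K_i\in \vec{M}} a_i \pmod 2,
\]
since the negatively oriented components contribute $0$. Hence $[\partial, H_\gamma]$ multiplies the $\vec{M}$-indexed component of $\partial$ by $\sum_{i:\, +K_i\in \vec{M}} a_i$, and summing over all $\vec{M}$ recovers $a_1\cdot F^{K_1}+\cdots+a_\ell\cdot F^{K_\ell}=\mathfrak{A}_{[\gamma]}$ modulo 2.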
\betaegin{rem} It is also straightforward to verify that the above action descends to an action of $\Lambda^* \left(H_1(S^3_{\Lambda}(L))/\mathbb{T}ors\right)$. This may be verified by noting that for each $i$ we have $\mathfrak{A}_{[\mu_i]}^2=0$, and if $i\neq j$ then $[\mathfrak{A}_{[\mu_i]},\mathfrak{A}_{[\mu_j]}]=0$.
\end{rem}
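As an elementary illustration of Lemma~\mathrm{re}f{lem:algebraic-homology-action-link-surgery} (this computation is not taken from \cite{ZemBordered}), consider the Hopf link with an integral framing.
\betaegin{example}
Let $H=K_1\cup K_2$ be the Hopf link with integral framing $\Lambda=(\lambda_1,\lambda_2)$, so that the framing matrix is
\[
\Lambda=\betaegin{pmatrix} \lambda_1 & \ell \\ \ell & \lambda_2 \end{pmatrix}, \qquad \ell=\mathit{lk}(K_1,K_2)=\pm 1.
\]
Since $\det \Lambda=\lambda_1\lambda_2-\ell^2=\lambda_1\lambda_2-1$, the group $H_1(S^3_{\Lambda}(H))\cong \mathbb{Z}^2/\im \Lambda$ is finite of order $|\lambda_1\lambda_2-1|$ when $\lambda_1\lambda_2\neq 1$; in that case $H_1(S^3_{\Lambda}(H))/\mathbb{T}ors=0$, and the lemma gives $\mathfrak{A}_{[\mu_1]}\simeq \mathfrak{A}_{[\mu_2]}\simeq 0$. When $\lambda_1\lambda_2=1$ (so $\lambda_1=\lambda_2=\pm 1$), the group is infinite cyclic, generated by $[\mu_1]$, and the first column of $\Lambda$ gives the relation $\lambda_1[\mu_1]+\ell[\mu_2]=0$, so $\mathfrak{A}_{[\mu_2]}\simeq \mathfrak{A}_{[\mu_1]}$ over $\mathbb{F}$.
\end{example}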
\betaegin{rem}
\label{rem:subcube-refinement}
It is helpful to have the following refinement of Lemma~\mathrm{re}f{lem:algebraic-homology-action-link-surgery} for subcubes of $\mathcal{C}_{\Lambda}(L)$. Suppose that $L$ is partitioned as $L_0\cup L_1$ and let $\varepsilon\in \mathbb{E}_{\ell_1}$ be a fixed coordinate, where $\ell_i=|L_i|$. Consider the subcube $\mathcal{C}_{\Lambda_0}^{(*,\varepsilon)}(L_0;L_1)\subseteq \mathcal{C}_{\Lambda}(L)$ generated by complexes at points $(\nu,\varepsilon)\in \mathbb{E}_{\ell_0}\times \mathbb{E}_{\ell_1}$ where $\varepsilon$ is our chosen coordinate (above) and $\nu$ is any coordinate. Here $\Lambda_0$ is the restriction of $\Lambda$ to $L_0$. The complex $\mathcal{C}_{\Lambda_0}^{(*,\varepsilon)}(L_0;L_1)$ is a module over $\mathbb{F}[\hspace{-.5mm}[ \mathscr{U}_{\ell_0+1},\mathscr{V}_{\ell_0+1},\partialots, \mathscr{U}_{\ell_0+\ell_1},\mathscr{V}_{\ell_0+\ell_1}]\hspace{-.5mm}] $ (the variables for $L_1$). Furthermore, by \cite{ZemBordered}*{Lemma~7.7}, the differential on $\mathcal{C}_{\Lambda_0}^{(*,\varepsilon)}(L_0;L_1)$ commutes with the action of $\mathbb{F}[\hspace{-.5mm}[ \mathscr{U}_{\ell_0+1},\mathscr{V}_{\ell_0+1},\partialots, \mathscr{U}_{\ell_0+\ell_1},\mathscr{V}_{\ell_0+\ell_1}]\hspace{-.5mm}] $. If $\gamma\in \mathbb{Z}^{\ell_0}$, then we may define an endomorphism $\mathfrak{A}_{[\gamma]}$ on $\mathcal{C}_{\Lambda_0}^{(*,\varepsilon)}(L_0;L_1)$ using Equation~\eqref{eq:hypercube-homology-action}. Similarly to Lemma~\mathrm{re}f{lem:algebraic-homology-action-link-surgery}, the map $\mathfrak{A}_{[\gamma]}$ gives an action of $H_1(S^3_{\Lambda_0}(L_0))/\mathbb{T}ors$ which is well-defined up to $\mathbb{F}[\hspace{-.5mm}[ \mathscr{U}_{\ell_0+1},\mathscr{V}_{\ell_0+1},\partialots, \mathscr{U}_{\ell_0+\ell_1},\mathscr{V}_{\ell_0+\ell_1}]\hspace{-.5mm}] $-equivariant chain homotopy.
To ensure the $\mathbb{F}[\hspace{-.5mm}[ \mathscr{U}_{\ell_0+1},\mathscr{V}_{\ell_0+1},\partialots, \mathscr{U}_{\ell_0+\ell_1},\mathscr{V}_{\ell_0+\ell_1}]\hspace{-.5mm}] $-equivariance of the homotopy, we add to Equation~\eqref{eq:requirements-omega} the requirement that if $e_i\in \mathbb{Z}^{\ell_0+\ell_1}$ is a unit vector pointing in the direction for $K_i\subseteq L_1$, then
\[
\omega_\gamma(\ve{s}\pm e_i,\varepsilon)=\omega_\gamma(\ve{s}, \varepsilon).
\]
Since $\mathscr{U}_i$ and $\mathscr{V}_i$ have Alexander grading $\pm e_i$, this ensures that our chain homotopies will be $\mathbb{F}[\hspace{-.5mm}[ \mathscr{U}_{\ell_0+1},\mathscr{V}_{\ell_0+1},\partialots, \mathscr{U}_{\ell_0+\ell_1},\mathscr{V}_{\ell_0+\ell_1}]\hspace{-.5mm}] $-equivariant. Noting that there is an affine isomorphism $\mathbb{H}(L)/\mathbb{Z}^{\ell_1}\cong \mathbb{H}(L_0)$ (where $\mathbb{Z}^{\ell_1}$ acts by the meridians of the components of $L_1$), the proof in the absolute case goes through without change.
\end{rem}
\subsection{A Heegaard diagrammatic $H_1$-action}
\label{sec:diagrammatic-action}
In this section, we recall from \cite{ZemBordered}*{Section~13.3} and \cite{HHSZNaturality}*{Section~6.2} a Heegaard diagrammatic $H_1$-action of a curve $\gamma\subseteq \Sigma$ on the link surgery formula. This description parallels the construction of an $H_1/\mathbb{T}ors$ action on Heegaard Floer homology from \cite{OSDisks}*{Section~4.2.5}.
Suppose $L\subseteq S^3$ is a framed link with a system of arcs $\mathscr{A}$ for $L$. Let $\mathscr{H}$ be a $\sigma$-basic system of Heegaard diagrams for $(L,\mathscr{A})$. Let $\Sigma$ be the underlying Heegaard surface of $\mathscr{H}$. Let $\gamma\subseteq \Sigma$ be a closed curve. We define an endomorphism
\[
\mathcal{A}_{\gamma}\colon \mathcal{C}_{\Lambda}(L,\mathscr{A})\to \mathcal{C}_{\Lambda}(L,\mathscr{A}).
\]
The endomorphism $\mathcal{A}_\gamma$ is induced by a type-$D$ endomorphism $\mathcal{A}_{\gamma}^1$ of $\mathcal{X}_{\Lambda}(L,\mathscr{A})^{\mathcal{L}}$.
The construction of $\mathcal{A}_\gamma$ is as follows. We assume, for simplicity, that $\gamma$ is represented by a closed 1-chain on $\Sigma$ which is disjoint from the arcs $\mathscr{A}$. We also assume that the arcs $\mathscr{A}$ are embedded in $\Sigma$. The hypercube $\mathcal{C}_{\Lambda}(L,\mathscr{A})$ is built from an $|L|$-dimensional hyperbox of chain complexes. Each constituent hypercube is obtained by pairing two hypercubes of attaching curves, of combined total dimension at most $|L|$, and then extending the remaining axis directions by canonical diffeomorphism maps for surface isotopies of $\Sigma$ which push basepoints along subarcs of the arcs in $\mathscr{A}$.
Suppose a subcube of this hyperbox is formed by pairing hypercubes of attaching curves $\mathcal{L}_{\alpha}$ and $\mathcal{L}_{\beta}$, of dimension $n$ and $m$, respectively. The homology action
\[
\mathcal{A}_{\gamma}\colon \ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha},\mathcal{L}_{\beta})\to \ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha},\mathcal{L}_{\beta})
\]
is defined similarly to the hypercube differential in Equation~\eqref{eq:hypercube-differential}, except that a holomorphic polygon representing a class $\ve{p}i$ is weighted by a factor of
\[
\sum_{\alpha\in \mathcal{L}_{\alpha}} \#(\partial_{\alpha}(\ve{p}i)\cap \gamma).
\]
Here, if $\ve{\alphalpha}\in \mathcal{L}_{\alpha}$, then $\partial_{\alpha}(\ve{p}i)$ denotes the subset of the boundary of the domain of $\ve{p}i$ which lies on $\ve{\alphalpha}$. The homology action $\mathcal{A}_{\gamma}$ on the link surgery hypercube $\mathcal{C}_{\Lambda}(L,\mathscr{A})$ is obtained by performing the above construction on each constituent hypercube of the link surgery formula, and modifying weights of the variables similarly to the construction of the link surgery formula in Section~\mathrm{re}f{sec:basic-systems}.
\subsection{Relating the actions}
\label{sec:relating-actions}
In this section, we relate the two actions $\mathcal{A}_{\gamma}$ and $\mathfrak{A}_{[\gamma]}$ constructed in the previous sections. Our main result is the following:
\betaegin{prop}
\label{prop:homology-action-description}
Consider a meridional $\sigma$-basic system of Heegaard diagrams for a link $L\subseteq S^3$ and a system of arcs $\mathscr{A}$ for $L$, where each arc is either alpha-parallel or beta-parallel. Let $\Sigma$ be the underlying Heegaard surface.
\betaegin{enumerate}
\item Suppose that $\gamma\subseteq \Sigma$ is a closed curve which is disjoint from the arcs of $\mathscr{A}$. The endomorphism $\mathcal{A}_\gamma$ of $\mathcal{C}_{\Lambda}(L,\mathscr{A})$ satisfies
\[
\mathcal{A}_\gamma\simeq \sum_{i=1}^\ell a_i\cdot \mathcal{A}_{\mu_{i}}
\]
where each $\mu_{i}$ is parallel to the canonical meridian of the component $K_i$ on the diagram $\mathscr{H}$, and $a_i\in \mathbb{F}_2$.
\item If $K_i$ is a component of $L$ and $\varepsilon\in \{0,1\}$, write $\mathcal{C}^{(*,\varepsilon)}_{\Lambda}(L;K_i)$ for the subcube of $\mathcal{C}_{\Lambda}(L)$ generated by complexes at cube points in $\mathbb{E}_{\ell}$ which have $K_i$-coordinate $\varepsilon$. Embed $K_i$ on $\Sigma$ as a knot trace of the Heegaard link diagram (i.e. the component of $K_i\setminus \{w_i,z_i\}$ oriented from $z_i$ to $w_i$ is disjoint from the beta-curves, and the subarc oriented from $w_i$ to $z_i$ is disjoint from the alpha curves; we may assume this holds for all diagrams used in the construction of $\mathcal{C}^{(*,\varepsilon)}_{\Lambda}(L;K_i)$). Then, as endomorphisms of $\mathcal{C}^{(*,\varepsilon)}_{\Lambda}(L;K_i)$ we have
\[
\mathcal{A}_{K_i}\simeq \sum_{j\neq i} \mathit{lk}(K_i,K_j) \mathcal{A}_{\mu_{j}}.
\]
\item For each component $K_i$ of $L$, there is a chain homotopy
\[
\mathcal{A}_{\mu_{i}}\simeq \mathfrak{A}_{[\mu_{i}]}
\]
as endomorphisms of $\mathcal{C}_\Lambda(L)$.
\end{enumerate}
Claims (1) and (3) hold on the level of type-$D$ endomorphisms of $\mathcal{X}_{\Lambda}(L)^{\mathcal{L}}$.
\end{prop}
We begin with a preliminary lemma:
\betaegin{lem}[\cite{ZemBordered}*{Lemma~13.16}]
\label{lem:homology-action-homotopy-hypercube}
Suppose that $\mathcal{L}_{\alpha}$ and $\mathcal{L}_{\beta}$ are hypercubes of attaching curves on $(\Sigma,\ve{w},\ve{z})$ and that $\gamma$ is a closed 1-chain on $\Sigma$. Suppose that $C\subseteq \Sigma$ is an integral 2-chain such that $\partial C=\gamma+S_{\alpha}+S_{\beta}$ where $S_{\alpha}$ are closed 1-chains which are disjoint from all curves in $\mathcal{L}_{\alpha}$ and $S_{\beta}$ are closed 1-chains which are disjoint from all curves in $\mathcal{L}_{\beta}$. As an endomorphism of $\ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha},\mathcal{L}_{\beta})$ we have
\[
\mathcal{A}_\gamma\simeq 0.
\]
\end{lem}
\betaegin{proof} The proof is given in \cite{ZemBordered}*{Lemma~13.16}, though we repeat it for the benefit of the reader. We construct the following diagram to realize the chain homotopy:
\[
\betaegin{tikzcd} \ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha},\mathcal{L}_{\beta})
\mathrm{a.r.}[r, "\mathcal{A}_{\gamma}"]
\mathrm{a.r.}[d, "\id"]
\mathrm{a.r.}[dr, "H_C",dashed]
& \ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha},\mathcal{L}_{\beta})
\mathrm{a.r.}[d, "\id"]
\\
\ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha},\mathcal{L}_{\beta})
\mathrm{a.r.}[r, "0"]&
\ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha},\mathcal{L}_{\beta})
\end{tikzcd}
\]
We define the map $H_C$ to have only the length~2 component indicated in the above diagram, to send $\ve{x}\in \mathbb{T}_{\alpha_\varepsilon}\cap \mathbb{T}_{\beta_{\nu}}$ to $n_{\ve{x}}(C)\cdot \ve{x}$, and to be $\mathbb{F}[\hspace{-.5mm}[ U_1,\partialots, U_\ell]\hspace{-.5mm}] $-equivariant. Here $n_{\ve{x}}(C)\in \mathbb{F}$ denotes the intersection number of $C\subseteq \Sigma$ with $\ve{x}$, viewed as a sum of $g$ points in $\Sigma$.
We claim that the hypercube relations are satisfied. To see this, we argue as follows. The relations are equivalent to the following equation:
\betaegin{equation}
H_C\circ D_{(\nu,\varepsilon),(\nu',\varepsilon')}+ D_{(\nu,\varepsilon),(\nu',\varepsilon')}\circ H_C=(\mathcal{A}_\gamma)_{(\nu,\varepsilon),(\nu',\varepsilon')}
\label{eq:homology-action-homotopy-hypercube}
\end{equation}
The maps $D_{(\nu,\varepsilon),(\nu',\varepsilon')}$ and $(\mathcal{A}_\gamma)_{(\nu,\varepsilon),(\nu',\varepsilon')}$ decompose over pairs of increasing sequences
\[
\varepsilon=\varepsilon_1<\cdots<\varepsilon_n=\varepsilon'\quad \text{and} \quad \nu=\nu_1<\cdots<\nu_m=\nu'.
\]
To prove Equation~\eqref{eq:homology-action-homotopy-hypercube}, we observe that if
\[
\ve{p}i\in \pi_2(\mathbb{T}heta_{\nu_n,\nu_{n-1}},\partialots, \mathbb{T}heta_{\nu_2,\nu_1} ,\ve{x},\mathbb{T}heta_{\varepsilon_1,\varepsilon_2},\partialots ,\mathbb{T}heta_{\varepsilon_{m-1},\varepsilon_m},\ve{y})
\]
is a class of $(n+m)$-gons, then
\[
0\equiv\#\partial(\partial_{\alpha}(\ve{p}i)\cap C)\equiv (n_{\ve{x}}(C)+n_{\ve{y}}(C))+\#\partial_{\alpha}(\ve{p}i)\cap (\gamma+S_{\alpha}+S_{\beta}) \pmod 2.
\]
We first note that $\partial_{\alpha}(\ve{p}i)\cap S_{\alpha}=\partial_{\beta}(\ve{p}i)\cap S_{\beta}=\emptyset$. Furthermore, $\partial_{\alpha}(\ve{p}i)$ and $\partial_{\beta}(\ve{p}i)$ are homologous via the domain of the class $\ve{p}i$ (an integral 2-chain $D(\ve{p}i)$ on $\Sigma$), so we also have $\# (\partial_{\alpha}(\ve{p}i)\cap S_{\beta})\equiv 0\pmod 2$. Hence
\[
n_{\ve{x}}(C)+n_{\ve{y}}(C)\equiv \#(\partial_{\alpha}(\ve{p}i)\cap \gamma)\pmod 2.
\]
This implies Equation~\eqref{eq:homology-action-homotopy-hypercube}, completing the proof.
\end{proof}
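We spell out the first congruence displayed in the proof above, as a sketch of a standard boundary count. Mod 2, we have
\[
\#\partial\left(\partial_{\alpha}(\ve{p}i)\cap C\right)\equiv \#\left(\partial\partial_{\alpha}(\ve{p}i)\cap C\right)+\#\left(\partial_{\alpha}(\ve{p}i)\cap \partial C\right)\pmod 2.
\]
The first term on the right is $n_{\ve{x}}(C)+n_{\ve{y}}(C)$, since $\partial\partial_{\alpha}(\ve{p}i)=\ve{x}+\ve{y}$ mod 2 (the intermediate $\mathbb{T}heta$-points are interior concatenation points of the chain $\partial_{\alpha}(\ve{p}i)$, each hit by exactly two arcs, so they cancel in pairs). The second term is $\#\partial_{\alpha}(\ve{p}i)\cap(\gamma+S_{\alpha}+S_{\beta})$, since $\partial C=\gamma+S_{\alpha}+S_{\beta}$. Finally, the left-hand side vanishes mod 2, since it counts the boundary points of a compact 1-manifold.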
By applying Lemma~\mathrm{re}f{lem:homology-action-homotopy-hypercube} to each constituent hypercube of the hyperbox used in the construction of the link surgery formula, we obtain the following corollary concerning the link surgery formula:
\betaegin{cor} Suppose we pick a $\sigma$-basic system of Heegaard diagrams for $(L,\mathscr{A})$ with underlying Heegaard surface $\Sigma$, and $\gamma\subseteq \Sigma$ is a closed curve. Suppose that there is an integral 2-chain $C$ on $\Sigma$ such that $\partial C=\gamma+S_{\alpha}+S_{\beta}$, where $S_{\alpha}$ are closed 1-chains on $\Sigma$ which are disjoint from all alpha curves, and $S_{\beta}$ are closed 1-chains disjoint from all beta curves. Then $\mathcal{A}_{\gamma}\simeq 0$ as an endomorphism of $\mathcal{C}_{\Lambda}(L,\mathscr{A})$ and of $\mathcal{X}_{\Lambda}(L,\mathscr{A})^{\mathcal{L}}$.
\end{cor}
We now prove the main result of this section:
\betaegin{proof}[Proof of Proposition~\mathrm{re}f{prop:homology-action-description}]
We begin with the first claim. We use a meridional $\sigma$-basic system, as described in Section~\mathrm{re}f{sec:basic-systems}. Let $(\Sigma,\ve{\alphalpha},\ve{\betaeta},\ve{w},\ve{z})$ denote the underlying Heegaard link diagram for $(S^3,L)$. Write $(\Sigma,\ve{\alphalpha}_0,\ve{\betaeta}_0)$ for the partial Heegaard diagram obtained by deleting the special alpha and beta curves (i.e. the meridians of the components of $L$). If we attach compressing disks along the $\ve{\alphalpha}_0$ and $\ve{\betaeta}_0$ curves, and fill any 2-sphere boundary components with 3-balls, we obtain $S^3\setminus \nu (L)$. By including $\Sigma$ into $S^3\setminus \nu(L)$ in this way, we may view $\gamma$ as being in $S^3\setminus \nu(L)$. We write $\mu_1,\partialots, \mu_\ell\subseteq \Sigma$ for curves which are parallel to the special meridional alpha and beta curves. The meridians $\mu_1,\partialots, \mu_\ell$ generate $H_1(S^3\setminus \nu(L))$. Hence, $\gamma$ is homologous in $S^3\setminus \nu(L)$ to a linear combination of these meridians, i.e. we may write
\[
\gamma-a_1\cdot \mu_1-\cdots -a_\ell \cdot\mu_\ell=\partial C,
\]
for some 2-chain $C$ in $S^3\setminus \nu(L)$ and some integers $a_1,\partialots, a_\ell$. Such a relation induces a 2-chain $C'$ on $(\Sigma,\ve{\alphalpha}_0,\ve{\betaeta}_0)$ such that $\partial C'$ is $\gamma-a_{1}\cdot \mu_1-\cdots -a_\ell\cdot \mu_\ell+ S_{\alpha}+ S_{\beta}$, where $S_{\alpha}$ and $S_{\beta}$ are unions of parallel copies of curves in $\ve{\alphalpha}_0$ and $\ve{\betaeta}_0$, respectively, and $\mu_i$ are the canonical meridians. In particular, $S_{\alpha}$ is disjoint from all alpha curves in the $\sigma$-basic system and $S_{\beta}$ is disjoint from all beta curves in the $\sigma$-basic system. In our $\sigma$-basic system, the curves $\ve{\alphalpha}_0$ and $\ve{\betaeta}_0$ appear via small Hamiltonian translates in every collection of alpha or beta curves. The first claim of the proposition now follows from Lemma~\mathrm{re}f{lem:homology-action-homotopy-hypercube}.
We now consider the second claim. Assume for concreteness that the canonical meridian of $K_i$ is an alpha curve $\alpha_i^s$ (i.e. the arc for $K_i$ is beta-parallel). We observe that the above construction gives an embedding of $\Sigma$ into the complement of $L$, and the quantity $a_i$ is the linking number $\mathit{lk}(\gamma, K_i)$. When we restrict to the subcube $\mathcal{C}^{(*,\varepsilon)}_{\Lambda}(L;K_i)$, we may now view the curve $\gamma$ as being embedded in $S^3\setminus\nu(L-K_i)$, since the canonical meridian of $K_i$ is stationary in this subcube. In particular, the trace of the knot $K_i$ on $\Sigma$ will be homologous on $\Sigma$ to the sum of $\sum_{j\neq i} \mathit{lk}(K_j,K_i) [\mu_j]$ and curves on $\Sigma$ which are small translates of the curves in $\ve{\alphalpha}_0\cup \{\alpha_i^s\}$ or $\ve{\betaeta}_0$. The claim now follows from Lemma~\mathrm{re}f{lem:homology-action-homotopy-hypercube}.
We now prove the third claim by constructing a homotopy $\mathcal{A}_{\mu_i}\simeq F^{-K_i}$. We observe that $F^{-K_i}\simeq F^{K_i}$, which is by definition the same as $\mathfrak{A}_{[\mu_i]}$. Write $\mathcal{L}_{\alpha}$ and $\mathcal{L}_{\beta}$ for the hyperboxes of attaching curves from our $\sigma$-basic system. We suppose $K_i\subseteq L$ is a link component whose arc is beta-parallel. We decompose the $K_i$-direction of $\mathcal{L}_{\alpha}$ as
\[
\betaegin{tikzcd}
\mathcal{L}_{\alpha_1}
\mathrm{a.r.}[r]& \mathcal{L}_{\alpha_2}\mathrm{a.r.}[r] &\cdots \mathrm{a.r.}[r]& \mathcal{L}_{\alpha_n}.
\end{tikzcd}
\]
We assume each arrow is either a morphism of hyperboxes of attaching curves (i.e. a collection of Floer chains satisfying the compatibility relations) or is the canonical surface isotopy which moves $z_i$ to $w_i$ along the short path connecting them on the Heegaard diagram.
As described in Section~\mathrm{re}f{sec:basic-systems}, the hypercube $\mathcal{C}_{\Lambda}(L)$ is completely determined by the compression of the hypercube
\[
\betaegin{tikzcd}
\ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha_1},\mathcal{L}_{\beta})\mathrm{a.r.}[r, "F_{1,2}"]& \ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha_2},\mathcal{L}_\beta) \mathrm{a.r.}[r, "F_{2,3}"]&\cdots\mathrm{a.r.}[r, "F_{n-1,n}"]& \ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha_n},\mathcal{L}_{\beta}).
\end{tikzcd}
\]
For convenience, write $\mathcal{C}_i$ for the hyperbox $\ve{\mathbb{C}F}^-(\mathcal{L}_{\alpha_i},\mathcal{L}_{\beta})$.
Let $\mu_i'$ be a small translate of $\mu_i$ on the Heegaard diagram.
We observe that if $C\subseteq \Sigma$ is a 2-chain whose boundary is $\mu_i-\mu_i'$, then the following hyperbox compresses horizontally to the identity map on $\mathbb{C}one(\mathcal{A}_{\mu_i})$:
\betaegin{equation}
\betaegin{tikzcd}[labels=description, row sep=1cm, column sep=1cm]
\mathcal{C}_n
\mathrm{a.r.}[dr, dashed, "H_C"]
\mathrm{a.r.}[r, "\id"]
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i}"]
&
\mathcal{C}_n
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i'}"]
\mathrm{a.r.}[dr,dashed, "H_C"]
\mathrm{a.r.}[r, "\id"]
&
\mathcal{C}_n
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i}"]
\\
\mathcal{C}_n
\mathrm{a.r.}[r, "\id"]
&
\mathcal{C}_n
\mathrm{a.r.}[r, "\id"]
&
\mathcal{C}_n
\end{tikzcd}
\label{eq:move-move-back}
\end{equation}
Next, we observe that for any $k\in \{1,\partialots, n-1\}$, the horizontal compressions of the following hyperboxes are homotopic as hyperbox morphisms from $\mathbb{C}one(\mathcal{A}_{\mu_i}\colon \mathcal{C}_k\to \mathcal{C}_k)$ to $\mathbb{C}one(\mathcal{A}_{\mu_i'}\colon \mathcal{C}_{k+1}\to \mathcal{C}_{k+1})$:
\betaegin{equation}
\betaegin{tikzcd}[labels=description,column sep=1.4cm, row sep=1.2cm]
\mathcal{C}_k
\mathrm{a.r.}[dr, dashed, "F_{k,k+1}^\gamma"]
\mathrm{a.r.}[r, "F_{k,k+1}"]
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i}"]
&
\mathcal{C}_{k+1}
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i}"]
\mathrm{a.r.}[dr,dashed, "H_C"]
\mathrm{a.r.}[r, "\id"]
&
\mathcal{C}_{k+1}
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i'}"]
\\
\mathcal{C}_k
\mathrm{a.r.}[r, "F_{k,k+1}"]
&
\mathcal{C}_{k+1}
\mathrm{a.r.}[r, "\id"]
&
\mathcal{C}_{k+1}
\end{tikzcd}
\quad \text{and} \quad
\betaegin{tikzcd}[labels=description, column sep=1.4cm, row sep=1.2cm]
\mathcal{C}_k
\mathrm{a.r.}[dr, dashed, "H_C"]
\mathrm{a.r.}[r, "\id"]
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i}"]
&
\mathcal{C}_{k+1}
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i'}"]
\mathrm{a.r.}[dr,dashed, "F_{k,k+1}^{\mu_i}"]
\mathrm{a.r.}[r, "F_{k,k+1}"]
&
\mathcal{C}_{k+1}
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i'}"]
\\
\mathcal{C}_k
\mathrm{a.r.}[r, "\id"]
&
\mathcal{C}_{k+1}
\mathrm{a.r.}[r, "F_{k,k+1}"]
&
\mathcal{C}_{k+1}
\end{tikzcd}
\label{eq:two-compressions}
\end{equation}
The above claim follows from Lemma~\mathrm{re}f{lem:homology-action-homotopy-hypercube}, which constructs a hyperbox realizing a chain homotopy $\mathcal{A}_{\mu_i}\simeq \mathcal{A}_{\mu_i'}$ as endomorphisms of $\mathbb{C}one(F_{k,k+1})$. This hyperbox realizes exactly the chain homotopy between the compressions described above.
In particular, combining Equations~\eqref{eq:move-move-back} and~\eqref{eq:two-compressions}, we conclude that the hypercube map $\mathcal{A}_{\mu_i}\colon \mathcal{C}_{\Lambda}(L)\to \mathcal{C}_{\Lambda}(L)$ can be described as the compression of the following hyperbox, for any $k$:
\[
\betaegin{tikzcd}[column sep=.6cm, row sep=1cm]
\mathcal{C}_1
\mathrm{a.r.}[r, "F_{1,2}"]
\mathrm{a.r.}[dr,dashed]
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i}",labels=description]
&
\mathcal{C}_2
\mathrm{a.r.}[r, "F_{2,3}"]
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i}",labels=description]
\mathrm{a.r.}[dr,dashed]
&
\cdots
\mathrm{a.r.}[r]
&
\mathcal{C}_k
\mathrm{a.r.}[r, "\id"]
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i}",labels=description]
\mathrm{a.r.}[dr,dashed, "H_C",labels=description]
&
\mathcal{C}_k
\mathrm{a.r.}[r,"F_{k,k+1}"]
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i'}",labels=description]
\mathrm{a.r.}[dr,dashed]
&
\mathcal{C}_{k+1}
\mathrm{a.r.}[r]
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i'}",labels=description]
&
\cdots
\mathrm{a.r.}[r]
&
\mathcal{C}_n
\mathrm{a.r.}[r, "\id"]
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i'}",labels=description]
\mathrm{a.r.}[dr, dashed,"H_C",labels=description]
&\mathcal{C}_n
\mathrm{a.r.}[d, "\mathcal{A}_{\mu_i}",labels=description]
\\
\mathcal{C}_1\mathrm{a.r.}[r, "F_{1,2}"]& \mathcal{C}_2\mathrm{a.r.}[r, "F_{2,3}"]& \cdots\mathrm{a.r.}[r]& \mathcal{C}_k\mathrm{a.r.}[r, "\id"]& \mathcal{C}_k\mathrm{a.r.}[r,"F_{k,k+1}"]&\mathcal{C}_{k+1}\mathrm{a.r.}[r]& \cdots \mathrm{a.r.}[r] &\mathcal{C}_n\mathrm{a.r.}[r, "\id"]& \mathcal{C}_n
\end{tikzcd}
\]
We may pick $C$, $\mu_i$, $\mu_i'$ and $k$ so that $\mu_i$ is disjoint from $\mathcal{L}_{\alpha_1},\partialots, \mathcal{L}_{\alpha_{k}}$, and $\mu_i'$ is disjoint from $\mathcal{L}_{\alpha_{k+1}},\partialots, \mathcal{L}_{\alpha_n}$, and $C$ covers all of the special meridional curve of $\mathcal{L}_{\alpha_k}$ (in particular, $n_C(p)\equiv 1$ for each $p$ on the special meridional curve of any curve collection in $\mathcal{L}_{\alpha_k}$). We assume, additionally, that $C$ is disjoint from the curves of $\mathcal{L}_{\alpha_n}$ (in particular, the initial special meridional curve $\alpha_i^s$). See Figure~\mathrm{re}f{cor:1}. In particular, the only diagonal map which will be non-trivial in the above compression will be the one with domain $\mathcal{C}_k$. Since $n_C(\ve{x})\equiv 1$ for every intersection point $\ve{x}$ in any of the Floer complexes comprising $\mathcal{C}_k$, the map $H_C$ will act by the identity on $\mathcal{C}_k$. In particular, the compression coincides with the diagram
\[
\begin{tikzcd}\mathcal{C}_{\Lambda}(L)\ar[d, "\mathcal{A}_{\mu_i}"] \\ \mathcal{C}_{\Lambda}(L)
\end{tikzcd}
\simeq
\begin{tikzcd}[labels=description, column sep=2cm]
\mathcal{C}^{(*,0)}(L)\ar[d, "0"]\ar[r, "F^{-K_i}"]\ar[dr,dashed, "F^{-K_i}"]& \mathcal{C}^{(*,1)}(L)\ar[d, "0"]\\
\mathcal{C}^{(*,0)}(L)\ar[r, "F^{-K_i}"]& \mathcal{C}^{(*,1)}(L)
\end{tikzcd}.
\]
The vertical direction is the homology action, so the proof is complete.
\end{proof}
\begin{figure}[ht]
\input{cor1.pdf_tex}
\caption{The curves $\mu_i$, $\mu_i'$ and the special meridian $\alpha_i^s$ of component $J\subseteq L$. The shaded annulus is the 2-chain $C$. The dashed curves labeled $\alpha'$ are the translates of $\alpha_i^s$ appearing in the $\sigma$-basic system.}
\label{cor:1}
\end{figure}
\begin{example} We illustrate Proposition~\ref{prop:homology-action-description} for the knot surgery formula. In this case, the result is already known (using different notation). See \cite{OSProperties}*{Theorem~9.23}. Ozsv\'{a}th and Szab\'{o} \cite{OSIntegerSurgeries} proved that
\[
\ve{\mathit{CF}}^-(S^3_n(K))\cong \mathrm{Cone}(v+h_n\colon \mathbb{A}(K)\to \mathbb{B}(K)).
\]
Though we do not need this fact, one can show that the diagrammatic $H_1$-action $\mathcal{A}_{\mu}$ coincides with the ordinary homology action of the meridian of $K$ on $\ve{\mathit{CF}}^-(S^3_n(K))$. (Of course, this action is null-homotopic when $n\neq 0$.) Proposition~\ref{prop:homology-action-description} translates to saying that the homology action of the meridian $\mu$ coincides with the endomorphism $h_n$ on the mapping cone complex.
\end{example}
\subsection{Application to the connected sum formula}
\label{sec:simplifying-connected-sum-1}
One important consequence of Proposition~\mathrm{re}f{prop:homology-action-description}, Lemma~\mathrm{re}f{lem:algebraic-homology-action-link-surgery} and Remark~\mathrm{re}f{rem:subcube-refinement} is the following:
\begin{cor}
\label{cor:simplify-tensor-product}
Suppose that $L_1$ and $L_2$ are links in $S^3$ with framings $\Lambda_1$ and $\Lambda_2$, and with systems of arcs $\mathscr{A}_1$ and $\mathscr{A}_2$, respectively. Let $K_1\subseteq L_1$ and $K_2\subseteq L_2$ be distinguished components, whose arcs are alpha-parallel and beta-parallel, respectively. Let $\Lambda_1'$ denote the restriction of the framing $\Lambda_1$ to $L_1-K_1$. Suppose that $K_1\subseteq S^3_{\Lambda_1'}(L_1-K_1)$ is rationally null-homologous.
Write $\mathcal{X}_{\Lambda_i}(L_i,\mathscr{A}_i)^{\mathcal{K}_i}$ for the type-$D$ modules obtained from $\mathcal{X}_{\Lambda_i}(L_i,\mathscr{A}_i)^{\mathcal{L}_{\ell_i}}$ by boxing ${}_{\mathcal{K}} \mathcal{D}_0$ into each algebra component except for the one corresponding to $K_i$.
Then
\[
\begin{split}
&\left(\mathcal{X}_{\Lambda_1}(L_1,\mathscr{A}_1)^{\mathcal{K}_1}, \mathcal{X}_{\Lambda_2}(L_2,\mathscr{A}_2)^{\mathcal{K}_2}\right)\mathrel{\hat{\boxtimes}} {}_{\mathcal{K}_1| \mathcal{K}_2} W_l^{\mathcal{K}}
\\
\simeq &\left(\mathcal{X}_{\Lambda_1}(L_1,\mathscr{A}_1)^{\mathcal{K}_1}, \mathcal{X}_{\Lambda_2}(L_2,\mathscr{A}_2)^{\mathcal{K}_2}\right)\mathrel{\hat{\boxtimes}} {}_{\mathcal{K}_1|\mathcal{K}_2} M^{\mathcal{K}}.
\end{split}
\]
\end{cor}
Corollary~\ref{cor:simplify-tensor-product} is most easily proven by using the formulation of the connected sum formula in Equation~\eqref{eq:alternate-pairing-theorem}. The extra term in the differential of the tensor product with $W_l$ (resulting from the $\delta_5^1$ term in Equation~\eqref{eq:delta-5-1-merge}) factors through the homology action $\mathcal{A}_{[K_1]}\otimes \id$ on $\mathcal{C}^{(*,1)}(L_1)\otimes \mathcal{C}^{(*,1)}(L_2)$. If $K_1$ is rationally null-homologous in the surgery on the other components, this map will be null-homotopic by Lemma~\ref{lem:algebraic-homology-action-link-surgery}.
\section{Simplifying the pairing theorem}
In this section, we recall the category of \emph{Alexander modules} from \cite{ZemBordered}*{Section~6}. After recalling precise definitions, we will prove the following result, which is essential to our proof of Theorem~\ref{thm:pairing-main} when $b_1(Y(G))>0$. The results of this section are not essential when $b_1(Y(G))=0$.
\begin{prop}\label{prop:merge=pair-of-pants}
Suppose that $\mathcal{X}_1^{\mathcal{K}}$ and $\mathcal{X}_2^{\mathcal{K}}$ are type-$D$ Alexander modules which are homotopy equivalent to finitely generated type-$D$ modules (i.e. have homotopy equivalent models where $\mathcal{X}_i$ are finite dimensional $\mathbb{F}$-vector spaces). Then
\[
\left( \mathcal{X}_1^{\mathcal{K}},\mathcal{X}_2^{\mathcal{K}}\right)\mathrel{\hat{\boxtimes}} {}_{\mathcal{K}| \mathcal{K}} W_l^{\mathcal{K}}\simeq \left( \mathcal{X}_1^{\mathcal{K}}, \mathcal{X}_2^{\mathcal{K}}\right)\mathrel{\hat{\boxtimes}} {}_{\mathcal{K}| \mathcal{K}} M^{\mathcal{K}}.
\]
The same holds for $W_r$ in place of $W_l$.
\end{prop}
Our proof is inspired by work of the author with Hendricks and Manolescu \cite{HMZConnectedSum}*{Section~6}, where an extra term in an involutive connected sum formula is shown to be null-homotopic. The proof occupies the next several subsections.
\subsection{Finitely generated $\mathcal{K}$-modules}
We now discuss type-$D$ Alexander modules $\mathcal{X}^{\mathcal{K}}$ where $\mathcal{X}$ is finitely generated. Such modules form a subcategory which admits a somewhat simpler description not involving linear topological spaces.
We denote by $\ve{\mathcal{K}}$ the completion of the algebra $\mathcal{K}$ with respect to the topology described in Section~\ref{def:knot-topology}. As a vector space, we have the following isomorphisms:
\[
\betaegin{split}
\ve{I}_0\cdot \ve{\mathcal{K}} \cdot \ve{I}_0&\cong \mathbb{F}[\hspace{-.5mm}[ \mathscr{U},\mathscr{V}]\hspace{-.5mm}] \\
\ve{I}_1\cdot \ve{\mathcal{K}}\cdot \ve{I}_0&\cong \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] \langle \tau\rangle \oplus \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] \langle \sigma \rangle\\
\ve{I}_1\cdot \ve{\mathcal{K}} \cdot \ve{I}_1&\cong \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] .
\end{split}
\]
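The passage to $\ve{\mathcal{K}}$ genuinely enlarges the algebra; as a simple illustration (not needed in the sequel), the infinite sum
\[
\sum_{n\ge 0} \mathscr{U}^n\mathscr{V}^n
\]
converges to an element of $\ve{I}_0\cdot \ve{\mathcal{K}}\cdot \ve{I}_0\cong \mathbb{F}[\hspace{-.5mm}[ \mathscr{U},\mathscr{V}]\hspace{-.5mm}] $, though it does not lie in the uncompleted algebra $\mathcal{K}$, where only finite sums of monomials appear.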
The space $\ve{\mathcal{K}}$ is again an algebra, and $\mu_2$ is continuous when viewed as a map from $\ve{\mathcal{K}} \mathrel{\vec{\otimes}}_{\mathrm{I}} \ve{\mathcal{K}}$ to $\ve{\mathcal{K}}$.
Since $\ve{\mathcal{K}}$ is an algebra, we may consider the category of finitely generated type-$D$ modules over $\ve{\mathcal{K}}$. We write $\MOD^{\ve{\mathcal{K}}}_{\mathit{fg}}$ for this category. These are ordinary type-$D$ modules over $\ve{\mathcal{K}}$ (i.e. not equipped with a topology), which have underlying vector spaces which are finitely generated over $\mathbb{F}$. Morphisms in this category are ordinary type-$D$ morphisms, i.e. maps $f^1\colon \mathcal{X}\to \mathcal{Y}\otimes \ve{\mathcal{K}}$.
There is a related category, $\MOD^{\mathcal{K}}_{\mathit{fg},\mathfrak{a}}$, which is the set of type-$D$ Alexander modules over $\mathcal{K}$, which are finitely generated over $\mathbb{F}$.
\begin{lem}[\cite{ZemBordered}*{Section~6.7}] The categories $\MOD^{\mathcal{K}}_{\mathit{fg},\mathfrak{a}}$ and $\MOD^{\ve{\mathcal{K}}}_{\mathit{fg}}$ are equivalent.
\end{lem}
The category of finitely generated type-$D$ modules is important for the bordered theory for the following reason.
\begin{prop}[\cite{ZemBordered}*{Proposition~18.11}]
\label{prop:finite-generation}
Suppose $L=K_1\cup \cdots \cup K_n$ is a link in $S^3$ with framing $\Lambda$. Write $\mathcal{X}_{\Lambda}(L_{1,\dots, n-1}, K_n)^{\mathcal{K}}$ for the type-$D$ module obtained by tensoring $(n-1)$ copies of the type-$A$ module for a solid torus, ${}_{\mathcal{K}} \mathcal{D}_0$, with $\mathcal{X}_{\Lambda}(L)^{\mathcal{L}_n}$ along the algebra components for $K_1,\dots, K_{n-1}$. Then $\mathcal{X}_{\Lambda}(L_{1,\dots, n-1}, K_n)^{\mathcal{K}}$ is homotopy equivalent to a finitely generated type-$D$ module.
\end{prop}
For the sake of exposition, we now explain the definition of a finitely generated type-$D$ module over $\ve{\mathcal{K}}$ in more detail:
\begin{lem}
\label{lem:restate-type-D}A finitely generated type-$D$ Alexander module over $\mathcal{K}$ is equivalent to the following data:
\begin{enumerate}
\item A pair of finite dimensional $\mathbb{F}$-vector spaces $\mathcal{X}_0$ and $\mathcal{X}_1$, equipped with internal differentials
\[
\partial_0\colon \mathcal{X}_0\to \mathcal{X}_0\otimes \mathbb{F}[\hspace{-.5mm}[ \mathscr{U},\mathscr{V}]\hspace{-.5mm}] \quad \text{and} \quad \partial_1\colon \mathcal{X}_1\to \mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] .
\]
\item Maps
\[
\begin{split}
v&\colon \mathcal{X}_0\to \mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] \quad \text{and}
\\ h&\colon \mathcal{X}_0\to \mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] .
\end{split}
\]
Furthermore, if we extend $v$ $I$-equivariantly to a map $V$ with domain $\mathcal{X}_0\otimes \mathbb{F}[\hspace{-.5mm}[ \mathscr{U},\mathscr{V}]\hspace{-.5mm}] $, then $V$ is a chain map. If we extend $h$ $T$-equivariantly to a map $H$ with domain $\mathcal{X}_0\otimes \mathbb{F}[\hspace{-.5mm}[ \mathscr{U},\mathscr{V}]\hspace{-.5mm}] $, then $H$ is a chain map. (Recall that $I$ and $T$ are the algebra morphisms in the definition of $\mathcal{K}$; see Section~\ref{sec:surgery-algebra}).
\end{enumerate}
Morphisms between finitely generated type-$D$ modules are similar. As a particularly important special case, suppose $h$ and $h'$ are two maps from $\mathcal{X}_0$ to $\mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $, whose $T$-equivariant extensions are chain maps, as above. Suppose there is a third map $j\colon \mathcal{X}_0\to \mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $, and let $J$ be its $T$-equivariant extension to $\mathcal{X}_0\otimes \mathbb{F}[\hspace{-.5mm}[ \mathscr{U},\mathscr{V}]\hspace{-.5mm}] $. If $\partial_1\circ J+J\circ \partial_0=H+H'$, then the type-$D$ module obtained by replacing $h$ with $h'$ is homotopy equivalent to $\mathcal{X}^{\mathcal{K}}$.
\end{lem}
We now prove a key lemma:
\begin{lem}\label{lem:classification-PID} Suppose that $\mathcal{X}^{\mathcal{K}}$ is a finitely generated type-$D$ module such that $\mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $ admits a $\mathbb{Z}_2$-valued grading (with $\mathscr{U}$ and $\mathscr{V}$ viewed as having grading $0\in \mathbb{Z}_2$). The chain complex $\mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $ is chain isomorphic to a direct sum of 1-step complexes (i.e. complexes with a single generator and vanishing differential) and 2-step complexes (i.e. complexes with two generators $\ve{x},\ve{y}$ and $\partial(\ve{x})=\ve{y}\otimes \alpha_{\ve{y},\ve{x}}$ ). Furthermore, in the two step complexes, we may assume each $\alpha_{\ve{y},\ve{x}}$ is of the form $U^i$ for some $i\in \mathbb{N}$, where $U=\mathscr{U}\mathscr{V}$.
\end{lem}
\begin{proof} The key observation is that $\mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] $ is a field, so $\mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $ is a PID. In fact, the ideals of $\mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $ are all of the form $(U^i)$ for $i\ge 0$. Hence, the proof follows immediately from the classification theorem for finitely generated free chain complexes over a PID. We sketch the argument very briefly in our present setting. One first considers the arrows of the differential on $\mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $ which are weighted by units. If any such arrow exists, we pick one arbitrarily. Suppose this arrow goes from $\ve{x}$ to $\ve{y}\otimes \alpha$. Since the ideals of $\mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $ are all of the form $(U^i)$, one may perform a change of basis so that there are no other arrows from $\ve{x}$ or to $\ve{y}$. After performing this change of basis, there are also no arrows to $\ve{x}$ or from $\ve{y}$, since $\delta^1$ squares to 0. Performing a further change of basis yields a summand consisting of the 2-step subcomplex $\delta^1(\ve{x})= \ve{y}\otimes 1$. We repeat this until all arrows (outside of these 2-step complexes) have weight in the ideal $(U)$. We then repeat the procedure to isolate arrows with weight in $(U)\setminus (U^2)$ until, outside of the isolated 2-step complexes, all arrows have weight in the ideal $(U^2)$. Iterating this procedure, we are left with only 1-step and 2-step complexes. After a chain isomorphism, the weights on the algebra elements in the 2-step complexes may be taken to be powers of $U$.
\end{proof}
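The change-of-basis step in the proof may be illustrated by the following small example, which is not used elsewhere:
\begin{example}
Write $R=\mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $, and suppose a complex over $R$ has two generators $\ve{x},\ve{y}$ with $\partial(\ve{x})=\ve{y}\otimes (U^2+U^3)$. Since $U^2+U^3=U^2(1+U)$ and $1+U$ is a unit in $R$, the change of basis $\ve{y}'=\ve{y}\otimes (1+U)$ transforms the complex into the 2-step complex $\partial(\ve{x})=\ve{y}'\otimes U^2$, as in Lemma~\ref{lem:classification-PID}.
\end{example}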
\begin{cor} \label{cor:eliminate-homology-action}
Let $\mathcal{X}^{\mathcal{K}}$ be a finitely generated type-$D$ module. Consider the endomorphism $\mathcal{A}:=\mathscr{U} \Phi+\mathscr{V}\Psi$ of the chain complex $\mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $, where $\Phi$ and $\Psi$ denote the algebraic basepoint actions defined in Section~\ref{sec:background}. Then $\mathcal{A}$ is null-homotopic on $\mathcal{X}_1\otimes \mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $ via an $\mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $-equivariant chain homotopy.
\end{cor}
\begin{proof} The maps $\Phi$ and $\Psi$ commute with homotopy equivalences. Using Lemma~\ref{lem:classification-PID}, it is sufficient to observe that $\mathscr{U}\Phi+\mathscr{V}\Psi\equiv 0$ for a 1-step complex and for a 2-step complex of the form $\delta^1(\ve{x})=\ve{y}\otimes U^i$, since $(\mathscr{U} \partial_{\mathscr{U}}+\mathscr{V} \partial_{\mathscr{V}})(U^i)=0,$ as $U=\mathscr{U}\mathscr{V}$.
\end{proof}
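The vanishing used above may be seen concretely in the following computation, included only for illustration:
\begin{example}
Consider a 2-step complex with $\delta^1(\ve{x})=\ve{y}\otimes U$, where $U=\mathscr{U}\mathscr{V}$. Differentiating the differential gives $\Phi(\ve{x})=\ve{y}\otimes \partial_{\mathscr{U}}(\mathscr{U}\mathscr{V})=\ve{y}\otimes \mathscr{V}$ and $\Psi(\ve{x})=\ve{y}\otimes \mathscr{U}$, so
\[
(\mathscr{U}\Phi+\mathscr{V}\Psi)(\ve{x})=\ve{y}\otimes 2\mathscr{U}\mathscr{V}=0,
\]
since $\mathbb{F}$ has characteristic 2.
\end{example}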
We are now in position to prove the main result of this section, Proposition~\ref{prop:merge=pair-of-pants}:
\begin{proof}[Proof of Proposition~\ref{prop:merge=pair-of-pants}]
By Lemma~\ref{lem:restate-type-D}, it is sufficient to show that the corresponding $h$ maps of the two type-$D$ modules are chain homotopic. We observe that by Equation~\eqref{eq:homology-action-basepoint} the difference factors through the endomorphism $(\mathscr{U} \Phi_1+\mathscr{V}\Psi_1)\otimes \id$ of $\mathcal{C}^{(*,1)}(L_1)\otimes \mathcal{C}^{(*,1)}(L_2)$. Here, $\Phi_1$ and $\Psi_1$ denote the algebraic basepoint actions of $\mathcal{C}^{(*,1)}(L_1)$. The basepoint actions $\Phi_1$ and $\Psi_1$ commute with homotopy equivalences up to chain homotopy (cf. \cite{ZemConnectedSums}*{Lemma~2.8}). By Lemma~\ref{lem:restate-type-D}, it is sufficient to show that $(\mathscr{U} \Phi_1+\mathscr{V}\Psi_1)\otimes \id$ is null-homotopic if we take coefficients in $\mathbb{F}[\mathscr{V},\mathscr{V}^{-1}]\hspace{-.5mm}] [\hspace{-.5mm}[ \mathscr{U}]\hspace{-.5mm}] $. This is a consequence of Corollary~\ref{cor:eliminate-homology-action}, so the proof is complete.
\end{proof}
\begin{rem} Our proof can also be used to show that if $\mathcal{X}^{\mathcal{K}}$ is a type-$D$ Alexander module which is homotopy equivalent to a finitely generated type-$D$ module, then there is a homotopy equivalence
\[
\mathcal{X}^{\mathcal{K}}\simeq \mathcal{X}^{\mathcal{K}}\mathrel{\hat{\boxtimes}} {}_{\mathcal{K}} \mathcal{T}^{\mathcal{K}}.
\]
In the above, ${}_{\mathcal{K}} \mathcal{T}^{\mathcal{K}}$ is the transformer bimodule from \cite{ZemBordered}*{Section~14}. The above equation follows because the difference in the $h$-maps between $\mathcal{X}^{\mathcal{K}}$ and $\mathcal{X}^{\mathcal{K}}\mathrel{\hat{\boxtimes}}{}_{\mathcal{K}} \mathcal{T}^{\mathcal{K}}$ may also be factored through the map $\mathscr{U} \Phi+\mathscr{V} \Psi$. See the proof of \cite{ZemBordered}*{Theorem~14.1}.
\end{rem}
\section{Completing the proof of Theorem~\ref{thm:main}}
\label{sec:proof}
In this section, we describe the proof of Theorem~\ref{thm:main}.
\subsection{Homological perturbation lemma for hypercubes}
In this section, we review a version of the homological perturbation lemma for hypercubes. This lemma is similar to work of Huebschmann-Kadeishvili \cite{HK_Homological_Perturbation}. See \cite{Liu2Bridge}*{Section~5.6} for a similar though slightly less explicit result for transferring hypercube structure maps along homotopy equivalences.
\begin{lem}[\cite{HHSZDuals}*{Lemma~2.10}]\label{lem:homological-perturbation-cubes}
Suppose that $\mathcal{C}=(C_\varepsilon,D_{\varepsilon,\varepsilon'})$ is a hypercube of chain complexes, and $(Z_\varepsilon,\delta_\varepsilon)_{\varepsilon\in \mathbb{E}_n}$ is a collection of chain complexes. Furthermore, suppose there are maps
\[
\pi_{\varepsilon}\colon C_\varepsilon\to Z_\varepsilon\qquad i_{\varepsilon}\colon Z_\varepsilon\to C_\varepsilon \qquad h_\varepsilon\colon C_\varepsilon\to C_\varepsilon,
\]
satisfying
\[
\pi_{\varepsilon}\circ i_{\varepsilon}=\id, \qquad i_{\varepsilon}\circ \pi_{\varepsilon}=\id+[\partial, h_\varepsilon], \quad h_\varepsilon\circ h_\varepsilon=0, \quad \pi_{\varepsilon}\circ h_{\varepsilon}=0\quad \text{and} \quad h_{\varepsilon}\circ i_{\varepsilon}=0
\]
and such that $\pi_{\varepsilon}$ and $i_{\varepsilon}$ are chain maps.
With the above data chosen, there are canonical hypercube structure maps $\delta_{\varepsilon,\varepsilon'}\colon Z_{\varepsilon}\to Z_{\varepsilon'}$ so that $\mathcal{Z}=(Z_\varepsilon,\delta_{\varepsilon,\varepsilon'})$ is a hypercube of chain complexes, and also there are morphisms of hypercubes
\[
\Pi\colon \mathcal{C}\to \mathcal{Z}, \quad I\colon \mathcal{Z}\to \mathcal{C}\quad \text{and}\quad H\colon \mathcal{C}\to \mathcal{C}
\]
such that
\[
\Pi\circ I=\id\quad \text{and} \quad I\circ \Pi= \id+\partial_{\Mor}(H)
\]
and such that $I$ and $\Pi$ are chain maps.
\end{lem}
The structure maps $\delta_{\varepsilon,\varepsilon'}$ and the morphisms $\Pi$ and $I$ have a concrete formula. We begin with $\delta_{\varepsilon,\varepsilon'}$. Suppose that $\varepsilon<\varepsilon'$ are points in $\mathbb{E}_n$. The hypercube structure maps $\delta_{\varepsilon,\varepsilon'}$ are given by the following formula:
\[
\delta_{\varepsilon,\varepsilon'}:=\sum_{\varepsilon=\varepsilon_1<\cdots<\varepsilon_j=\varepsilon'} \pi_{\varepsilon'}\circ D_{\varepsilon_{j-1},\varepsilon_j}\circ h_{\varepsilon_{j-1}}\circ D_{\varepsilon_{j-2},\varepsilon_{j-1}}\circ \cdots \circ h_{\varepsilon_2}\circ D_{\varepsilon_1,\varepsilon_2}\circ i_{\varepsilon}.
\]
The component of the map $I$ sending coordinate $\varepsilon$ to coordinate $\varepsilon'$ is given via the formula
\[
I_{\varepsilon,\varepsilon'}:=\sum_{\varepsilon=\varepsilon_1<\cdots<\varepsilon_j=\varepsilon'} h_{\varepsilon_{j}}\circ D_{\varepsilon_{j-1},\varepsilon_{j}}\circ \cdots \circ h_{\varepsilon_2}\circ D_{\varepsilon_1,\varepsilon_2}\circ i_{\varepsilon}.
\]
Similarly, $\Pi$ is given by the formula
\[
\Pi_{\varepsilon,\varepsilon'}:=\sum_{\varepsilon=\varepsilon_1<\cdots<\varepsilon_j=\varepsilon'} \pi_{\varepsilon'}\circ D_{\varepsilon_{j-1},\varepsilon_j}\circ h_{\varepsilon_{j-1}}\circ D_{\varepsilon_{j-2},\varepsilon_{j-1}}\circ \cdots\circ D_{\varepsilon_1,\varepsilon_2} \circ h_{\varepsilon_1}.
\]
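As an illustration (for exposition only), when $n=2$ the formula for the transported map of maximal length expands as
\[
\begin{split}
\delta_{(0,0),(1,1)}=&\ \pi_{(1,1)}\circ D_{(0,0),(1,1)}\circ i_{(0,0)}\\
&+\pi_{(1,1)}\circ D_{(0,1),(1,1)}\circ h_{(0,1)}\circ D_{(0,0),(0,1)}\circ i_{(0,0)}\\
&+\pi_{(1,1)}\circ D_{(1,0),(1,1)}\circ h_{(1,0)}\circ D_{(0,0),(1,0)}\circ i_{(0,0)},
\end{split}
\]
corresponding to the three increasing chains from $(0,0)$ to $(1,1)$ in $\mathbb{E}_2$.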
\subsection{Completion of the proof of Theorem~\ref{thm:main}}
We now describe the proof of Theorem~\ref{thm:main}. Firstly, there is no loss of generality in assuming that $G$ is connected, since if $G=G_1\sqcup G_2$ then $Y(G)\cong Y(G_1)\# Y(G_2)$, and both theories are tensorial under connected sums of 3-manifolds.
We consider forests of trees $G$, such that each component has a distinguished vertex, which we label as the root. We consider the following operations on rooted trees, from which any rooted tree may be obtained:
\begin{enumerate}[label= ($G$-\arabic*), ref=$G$-\arabic*]
\item\label{graph-1} Adding a valence 0 vertex (viewed as the root of its component).
\item\label{graph-2} Joining two components together at their roots.
\item\label{graph-3} Adding a valence 1 vertex at the root of a component, and making the new vertex the new root.
\end{enumerate}
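To illustrate how these moves generate rooted trees (this example is not used later), the star with $n$ edges, rooted at its central vertex, may be built as follows: apply \eqref{graph-1} $n$ times to create $n$ isolated rooted vertices; apply \eqref{graph-3} once to each component, producing $n$ rooted edges; and then apply \eqref{graph-2} repeatedly to join all of the components at their roots.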
We now extend the description of lattice homology in Proposition~\ref{prop:OSS-lattice} to a statement about bordered link surgery modules and the above operations on graphs. We form a chain complex $\widetilde{\mathcal{C}}_{\Lambda}(G)$ as follows. We pick a vertex $v_0\in V(G)$ which we label as the root vertex. We define the chain complex $\widetilde{\mathcal{C}}_{\Lambda}(G)$ by iteratively tensoring the bordered modules and bimodules as follows, in parallel with the above topological moves:
\begin{enumerate}[label= ($M$-\arabic*), ref=$M$-\arabic*]
\item\label{module-1}
We begin with the type-$D$ module of a solid torus, $\mathcal{D}_{0}^{\mathcal{K}}$, for each valence 1 vertex of $G$ (other than $v_0$, if it has valence 1).
\item\label{module-2} We use the merge module ${}_{\mathcal{K}| \mathcal{K}} M^{\mathcal{K}}$ to tensor the type-$D$ modules for two rooted trees to form the type-$D$ module for the tree obtained by joining the two components together at their roots.
\item\label{module-3} We tensor with the type-$DA$ module ${}_{\mathcal{K}} \overline{\mathcal{H}}_{(w(v),0)}^{\mathcal{K}}$ of the Hopf link to add a valence 1 vertex at the root of a tree. Here $w(v)$ denotes the weight of the vertex $v$.
\item\label{module-4} We tensor with the type-$AA$ bimodule ${}_{\mathcal{K}} [\mathcal{D}_{w(v_0)}]_{\mathbb{F}[U]}$ at the final root $v_0$.
\end{enumerate}
Note that since the bimodules are Alexander modules (see \cite{ZemBordered}*{Section~6}), the final type-$A$ action of $\mathbb{F}[U]$ extends to an action of $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] $ on the completion of $\widetilde{\mathcal{C}}_{\Lambda}(G)$.
We note that the complex $\widetilde{\mathcal{C}}_{\Lambda}(G)$ may be described as the algebraic complex obtained by using the connected sum formula for the hypercube maps in Theorem~\ref{thm:pairing-main}. We prove the following:
\betaegin{prop}
\label{prop:tilde-C=lattice}
The chain complex $\widetilde{\mathcal{C}}_{\Lambda}(G)$ is homotopy equivalent to $\mathbb{CF}(G)$, viewed as a type-$A$ module over $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] $ with only $m_1$ and $m_2$ non-vanishing.
\end{prop}
\begin{rem}
One could also argue by considering the minimal models of the Hopf link ${}_{\mathcal{K}} \overline{\mathcal{Z}}_{(w(v),0)}^{\mathcal{K}}$ from \cite{ZemBordered}*{Section~17}, which have $\delta_j^1=0$ if $j\neq 2$. These are related to the dual knot formula of Hedden--Levine \cite{HeddenLevineSurgery} and Eftekhary \cite{EftekharyDuals}. We will give a more direct argument using the homological perturbation lemma.
\end{rem}
\begin{proof}
We note that the Hopf link Floer complex has the following filtration:
\[
\mathcal{C}FL(H)=\big(\begin{tikzcd} \Span_{\mathbb{F}[\mathscr{U}_1,\mathscr{V}_1,\mathscr{U}_2,\mathscr{V}_2]}(\ve{b},\ve{c})\ar[r, "\partial"]& \Span_{\mathbb{F}[\mathscr{U}_1,\mathscr{V}_1,\mathscr{U}_2,\mathscr{V}_2]}(\ve{a},\ve{d})
\end{tikzcd}\big).
\]
Here, $\ve{a}$, $\ve{b}$, $\ve{c}$ and $\ve{d}$ are the generators from Equation~\eqref{eq:Hopf-def}.
First note that $\widetilde{\mathcal{C}}_{\Lambda}(G)$ has a filtration by $\mathbb{E}_\ell$ where $\ell=|V(G)|$, similar to $\mathbb{CF}(G)$ and $\mathcal{C}_{\Lambda}(L_G)$. That is, $\widetilde{\mathcal{C}}_{\Lambda}(G)$ is an $\ell$-dimensional hypercube of chain complexes. Write $\widetilde{\mathcal{C}}_{\Lambda}(G)=(\widetilde{\mathcal{C}}_{\varepsilon}, \widetilde{D}_{\varepsilon,\varepsilon'})_{\varepsilon\in \mathbb{E}_\ell}$.
For each $\varepsilon$, consider the subcomplex $\widetilde{\mathcal{C}}_{\varepsilon,\ve{s}}$ of $\widetilde{\mathcal{C}}_{\varepsilon}$ which lies in a single Alexander grading $\ve{s}\in \mathbb{H}(L_G)$. The complexes $\widetilde{\mathcal{C}}_{\varepsilon}$ are obtained by localizing a tensor product of $(\ell-1)$ copies of the Hopf link complex $\mathcal{C}FL(H)$ at some of the $\mathscr{V}_i$ variables, completing with respect to the $U_i$, and taking the direct product over Alexander gradings. In particular, $\widetilde{\mathcal{C}}_{\varepsilon,\ve{s}}$ has a filtration similar to the filtration on the Hopf link complex. Write $\widetilde{c}_{\varepsilon,\ve{s}}$ for the chain complex $\widetilde{\mathcal{C}}_{\varepsilon,\ve{s}}$ before completing with respect to the $U_i$ variables. The complex $\widetilde{c}_{\varepsilon,\ve{s}}$ may be written as the following exact sequence:
\begin{equation}
\widetilde{c}_{\varepsilon,\ve{s}}=\begin{tikzcd} 0\ar[r]&\mathcal{F}^\ell_{\varepsilon,\ve{s}}\ar[r]& \cdots \ar[r]& \mathcal{F}^1_{\varepsilon,\ve{s}}.
\end{tikzcd}
\label{eq:long-exact-sequence}
\end{equation}
We refer to the superscript $i$ in $\mathcal{F}_{\varepsilon,\ve{s}}^i$ as the \emph{Hopf filtration} level. In Equation~\eqref{eq:long-exact-sequence}, $\mathcal{F}^i_{\varepsilon,\ve{s}}$ is generated over $\mathbb{F}[U_1,\dots, U_{\ell}]$ by elementary tensors in $S_{\varepsilon}^{-1} \cdot \mathcal{C}FL(L_G)$ which contain $i-1$ factors which are $\ve{b}$ or $\ve{c}$, and $\ell-i$ factors which are $\ve{a}$ or $\ve{d}$, and which have Alexander grading $\ve{s}$. Recall that $S_{\varepsilon}$ denotes the multiplicatively closed set generated by $\mathscr{V}_i$ where $\varepsilon_i=1$.
Index the link components so that the root vertex $v_0$ corresponds to the variable $U=U_1$. The homology group $H_*(\widetilde{c}_{\varepsilon,\ve{s}})$ is isomorphic to $\mathbb{F}[U_1]$ by \cite{OSPlumbed}*{Lemma~2.6} (see also \cite{OSSLattice}*{Lemma~4.2}). Furthermore, the generator may be taken to be an elementary tensor which has only factors of $\ve{a}$ and $\ve{d}$, as well as some of the $\mathscr{U}_i$ and $\mathscr{V}_i$ variables. In particular, the generator is supported in $\mathcal{F}_{\varepsilon,\ve{s}}^1$. It follows that the chain complex $\widetilde{c}_{\varepsilon,\ve{s}}$ is a free resolution of its homology over $\mathbb{F}[U_1,\dots, U_\ell]$.
Since $H_*(\widetilde{c}_{\varepsilon,\ve{s}})\cong \mathbb{F}[U_1]$ is a projective module over $\mathbb{F}[U_1]$, it is a basic exercise in homological algebra to show that the above exact sequence may be split over $\mathbb{F}[U_1]$. That is, we may decompose each $\mathcal{F}_{\varepsilon,\ve{s}}^{i}$ over $\mathbb{F}[U_1]$ into a direct sum of $\mathbb{F}[U_1]$-modules
\[
\mathcal{F}_{\varepsilon,\ve{s}}^i\cong \mathcal{F}_{\varepsilon,\ve{s}}^{i,l}\oplus \mathcal{F}_{\varepsilon,\ve{s}}^{i,r}
\]
so that $\partial$ maps each $\mathcal{F}^{i,r}_{\varepsilon,\ve{s}}$ isomorphically onto $\mathcal{F}^{i-1,l}_{\varepsilon,\ve{s}}$ for $1<i\le \ell$, while $\partial$ vanishes on each $\mathcal{F}^{i,l}_{\varepsilon,\ve{s}}$ and on $\mathcal{F}^{1,r}_{\varepsilon,\ve{s}}$. Further, $\mathcal{F}^{\ell,l}_{\varepsilon,\ve{s}}=0$ and $\mathcal{F}_{\varepsilon,\ve{s}}^{1,r}$ projects isomorphically onto $H_*(\widetilde{c}_{\varepsilon,\ve{s}})$. We may then define maps $\pi_{\varepsilon,\ve{s}}$, $h_{\varepsilon,\ve{s}}^{i}$ and $i_{\varepsilon,\ve{s}}$ as in the following diagram
\[
\begin{tikzcd}[column sep=.8cm]
\mathcal{F}^{\ell,r}_{\varepsilon,\ve{s}}
\ar[r, "\partial",swap]
&
\mathcal{F}^{\ell-1,l}_{\varepsilon,\ve{s}}\oplus \mathcal{F}^{\ell-1,r}_{\varepsilon,\ve{s}}
\ar[r, "\partial",swap]
\ar[l, "h^{\ell-1}_{\varepsilon,\ve{s}}",bend right,swap]
&
\cdots
\ar[l,bend right, swap,"h^{\ell-2}_{\varepsilon,\ve{s}}"]
\ar[r,swap, "\partial"]
&
\mathcal{F}^{2,l}_{\varepsilon,\ve{s}}
\oplus
\mathcal{F}^{2,r}_{\varepsilon,\ve{s}}
\ar[r, "\partial",swap]
\ar[l,bend right,swap, "h^2_{\varepsilon,\ve{s}}"]
&
{\mathcal{F}}^{1,l}_{\varepsilon,\ve{s}}
\oplus
{\mathcal{F}}^{1,r}_{\varepsilon,\ve{s}}
\ar[l,bend right,swap, "h^1_{\varepsilon,\ve{s}}"]
\ar[r,swap, "\pi_{\varepsilon,\ve{s}}"]
&H_*(\widetilde{c}_{\varepsilon,\ve{s}})
\ar[l, bend right, swap, "i_{\varepsilon,\ve{s}}"]
\end{tikzcd}
\]
The map $\pi_{\varepsilon,\ve{s}}$ is the canonical projection map, and $i_{\varepsilon,\ve{s}}$ is a section of the map $\pi_{\varepsilon,\ve{s}}$, which we assume is compatible with the splitting described earlier. Similarly, $h_{\varepsilon,\ve{s}}^i$ is the inverse of $\partial|_{\mathcal{F}_{\varepsilon,\ve{s}}^{i+1,r}}$, viewed as a map $\mathcal{F}^{i,l}_{\varepsilon,\ve{s}}\to \mathcal{F}^{i+1,r}_{\varepsilon,\ve{s}}$ and extended by zero to $\mathcal{F}^{i,r}_{\varepsilon,\ve{s}}$. Note that $i_{\varepsilon,\ve{s}}$ and $h_{\varepsilon,\ve{s}}^i$ are not generally $\mathbb{F}[U_1,\dots, U_\ell]$-equivariant, but only $\mathbb{F}[U_1]$-equivariant.
We may define $\mathbb{F}[\hspace{-.5mm}[ U_1]\hspace{-.5mm}] $-equivariant maps
\[
\pi_{\varepsilon}\colon \widetilde{\mathcal{C}}_{\varepsilon}\to H_*(\widetilde{\mathcal{C}}_{\varepsilon}),\quad i_{\varepsilon}\colon H_*(\widetilde{\mathcal{C}}_{\varepsilon})\to \widetilde{\mathcal{C}}_{\varepsilon}\quad \text{and} \quad h_{\varepsilon}\colon \widetilde{\mathcal{C}}_{\varepsilon}\to \widetilde{\mathcal{C}}_{\varepsilon}
\]
which give a homotopy equivalence over $\mathbb{F}[\hspace{-.5mm}[ U_1]\hspace{-.5mm}] $ between $\widetilde{\mathcal{C}}_{\varepsilon}$ and $H_*(\widetilde{\mathcal{C}}_{\varepsilon})$, by taking the direct product of the maps $i_{\varepsilon,\ve{s}}$, $\pi_{\varepsilon,\ve{s}}$ and $h_{\varepsilon,\ve{s}}=\sum_{i=1}^{\ell-1} h^i_{\varepsilon,\ve{s}}$, and then completing with respect to the variables $U_1,\dots, U_\ell$. To see that the maps $i_{\varepsilon,\ve{s}}$, $\pi_{\varepsilon,\ve{s}}$ and $h_{\varepsilon,\ve{s}}$ induce well-defined maps after completing with respect to $U_1,\dots, U_\ell$, we argue as follows. The completion over $U_1,\dots,U_\ell$ may be viewed as the completion with respect to the $I$-adic topology on $\mathbb{F}[U_1,\dots, U_\ell]$, where $I$ is the ideal $(U_1,\dots, U_\ell)$. See \cite{AtiyahMacdonald}*{Chapter~10}. Equivalently, since there are only finitely many generators over $\mathbb{F}[U_1,\dots, U_\ell]$ and each $U_i$ has Maslov grading $-2$, we may describe the completion as having a fundamental system of open sets given by $M_i=\Span\{x: \mathrm{gr}(x)\le i\}$, ranging over $i<0$. The maps $i_{\varepsilon,\ve{s}}$ and $h_{\varepsilon,\ve{s}}$ are clearly continuous with respect to this topology, since they are homogeneously graded, and hence induce maps on the completion.
Applying the homological perturbation lemma for hypercubes, Lemma~\ref{lem:homological-perturbation-cubes}, we may transport the hypercube maps of $(\widetilde{\mathcal{C}}_{\varepsilon}, \widetilde{D}_{\varepsilon,\varepsilon'})$ to a hypercube with underlying groups $H_*(\widetilde{\mathcal{C}}_{\varepsilon})$. This coincides with the underlying group of the lattice complex by Proposition~\ref{prop:OSS-lattice}. To see that the resulting hypercube is the lattice complex, it is sufficient to show that there are no higher length hypercube maps when we apply the homological perturbation lemma. This is seen directly, as follows. The recipe from the homological perturbation lemma for hypercubes is to include via $i_{\varepsilon}$, then sequentially compose cube maps and the homotopies $h_{\varepsilon}$, and then finally apply $\pi_{\varepsilon'}$. Note that $h_{\varepsilon}$ strictly increases the Hopf filtration level in the sequence in Equation~\eqref{eq:long-exact-sequence}, and $\pi_{\varepsilon}$ is non-vanishing on only the lowest Hopf filtration level. Additionally, using Proposition~\ref{prop:Hopf-link-large-model} and the tensor product formula in Theorem~\ref{thm:pairing-main}, we see that the length 1 maps of $\widetilde{\mathcal{C}}_{\Lambda}(G)$ preserve the Hopf filtration level, while the higher length maps strictly increase the Hopf filtration level. In particular, the only way for a summand from Lemma~\ref{lem:homological-perturbation-cubes} to contribute is for there to be no $h_{\varepsilon}$ factors, and for the hypercube arrow to be length 1. Hence, the transported hypercube maps are exactly the maps induced on homology by the length one maps of $\widetilde{\mathcal{C}}_{\Lambda}(G)$. This is exactly lattice homology by Proposition~\ref{prop:OSS-lattice}. The proof is complete.
\end{proof}
\begin{rem} The above argument fails at the last step if we use the $\mathcal{H}$-models instead of the $\overline{\mathcal{H}}$ models. This is because the length 2 map of $\mathcal{H}$ has a term which decreases the Hopf filtration level, so could potentially contribute to higher length terms in the hypercube structure maps obtained from the homological perturbation lemma.
\end{rem}
\begin{lem}
\label{lem:rational-homology-plumbings}
Suppose that $Y(G)$ is a 3-manifold obtained by plumbing along a tree $G$ and that $b_1(Y(G))=0$. Let $v$ be a vertex of $G$. We may view $Y(G)$ as surgery on a knot $K_1\# \cdots\# K_n\subseteq Y_1\#\cdots \# Y_n$, where $(Y_1,K_1),\dots, (Y_n,K_n)$ are the 3-manifold--knot pairs corresponding to the components of $G$ obtained by splitting $v$ into $n$ valence-1 vertices. Then $b_1(Y_1)+\cdots+b_1(Y_n)\le 1$. In particular, at most one $K_i$ is homologically essential in $Y_i$.
\end{lem}
\begin{proof}
We have $b_1(Y_1\# \cdots \# Y_n)=b_1(Y_1)+\cdots+b_1(Y_n)$. On the other hand, $Y(G)$ is obtained by performing Dehn surgery on $K_1\#\cdots \# K_n\subseteq Y_1\#\cdots\# Y_n$, and $b_1(Y(G))=0$. Surgery on a knot can reduce $b_1$ by at most one, so the claim follows.
\end{proof}
We now complete the proof of Theorem~\mathrm{re}f{thm:main} when $b_1(Y(G))=0$:
\begin{proof}[Proof of Theorem~\ref{thm:main} when $b_1(Y(G))=0$]
To compute $\ve{\mathbb{C}F}^-(Y(G))\cong \mathcal{C}_{\Lambda}(L_G)$, we can tensor bordered bimodules similar to ~\eqref{module-1}--\eqref{module-4}, as in our construction of $\widetilde{\mathcal{C}}_{\Lambda}(G)$. The only difference is that instead of the merge module $ M$, we must use the pair-of-pants modules $W_r$ or $W_l$. Note that we may construct $Y(G)$ by iteratively performing topological moves parallel to the algebraic modules~\eqref{module-1}--\eqref{module-4}. By Lemma~\mathrm{re}f{lem:rational-homology-plumbings}, when constructing plumbed 3-manifolds which are rational homology 3-spheres by iteratively taking connected sums and taking duals, we will never take the connected sum of two knots which are both homologically essential. Hence, we may apply Corollary~\mathrm{re}f{cor:simplify-tensor-product} to replace the pair-of-pants modules with the merge module, and the homotopy type of the resulting type-$D$ module over $\mathcal{K}$ will be unchanged. In particular, we see that for some choice of arc system $\mathscr{A}$ on $L_G$, we have
\[
\mathcal{C}_{\Lambda}(L_G,\mathscr{A})_{\mathbb{F}[U]}\simeq \widetilde{\mathcal{C}}_{\Lambda}(L_G)_{\mathbb{F}[U]},
\]
so the main result follows from Proposition~\mathrm{re}f{prop:tilde-C=lattice}.
We now address the claim about the relative grading. We recall from \cite{MOIntegerSurgery}*{Section~9.3} that when $b_1(Y(G))=0$, the link surgery hypercube possesses a uniquely specified relative $\mathbb{Z}$-grading on each $\Spin^c$ structure. Furthermore, this relative grading clearly coincides with the relative grading on the lattice complex. Note that in our proof, we showed that the link surgery complex $\mathcal{C}_{\Lambda}(L_G)$ coincided with the tensor product of the bordered modules in~\eqref{module-1}--\eqref{module-4} only up to homotopy equivalence. However in the case that $b_1(Y(G))=0$, the maps appearing in this homotopy equivalence only involve maps which already appear in the differential of the link surgery formula, or involve projections onto complexes in different $\varepsilon$ and $\ve{s}\in \mathbb{H}(L)$. In particular, the homotopy equivalence will be grading preserving.
\end{proof}
We now consider the case that $b_1(Y(G))>0$:
\begin{proof}[Proof of Theorem~\ref{thm:main} when $b_1(Y(G))>0$]
The proof is much the same as when $b_1(Y(G))=0$. The main difference lies in relating tensor products taken with the pair-of-pants bimodules to tensor products taken with the merge module. We can no longer use Corollary~\ref{cor:simplify-tensor-product} to eliminate the terms involving the homology action. Instead, we use Propositions~\ref{prop:merge=pair-of-pants} and~\ref{prop:finite-generation}.
\end{proof}
\begin{rem} When $b_1(Y(G))>0$, the homotopy equivalence relating the tensor products obtained using the merge modules and the pair-of-pants bimodules is obtained using Corollary~\ref{cor:eliminate-homology-action}, which is fundamentally based on the classification theorem for finitely generated chain complexes over a PID. In particular, it is not as clear from this perspective how the homotopy equivalence interacts with the relative grading on the link surgery formula. We will pursue this question in a future work by considering group valued gradings on the bordered link surgery modules, in the spirit of \cite{LOTBordered}*{Section~3.3}.
\end{rem}
\section{Plumbed 3-manifolds with $b_1>0$}
\label{sec:plumbed-H1-action}
In this section, we state a refinement of N\'{e}methi's conjecture for plumbed manifolds with $b_1>0$. In analogy to the action of $H_1(S^3_\Lambda(L))/\mathrm{Tors}$ on the link surgery complex from Section~\ref{sec:algebraic-action-link-surgery}, we now describe an action of $H_1(Y(G))/\mathrm{Tors}$ on lattice homology. If $\mu_i$ denotes the meridian of the component $K_i\subseteq L_G$, we define
\[
\mathfrak{A}_{[\mu_i]}([K,E])=U^{a_{v_i}[K,E]} [K,E-v_i],
\]
extended equivariantly over $U$. (This is a term in the differential $\partial$).
If $\gamma=a_1[\mu_1]+\cdots+ a_\ell [\mu_\ell]\in H_1(Y(G))/\mathrm{Tors}$, we define
\[
\mathfrak{A}_{[\gamma]}:=a_1\mathfrak{A}_{[\mu_1]}+\cdots+ a_\ell \mathfrak{A}_{[\mu_\ell]}.
\]
\begin{lem} The above action of $\mathbb{Z}^{\ell}$ on $\mathbb{CF}(G)$ descends to a well-defined action of $H_1(Y(G))/\mathrm{Tors}$ up to chain homotopy.
\end{lem}
The proof is essentially identical to the proof of Lemma~\ref{lem:algebraic-homology-action-link-surgery} for the link surgery formula. One replaces the lattice $\mathbb{H}(L)$ with $\mathrm{Char}(X(G))$ and the proof proceeds by an easy translation.
\begin{conj} $\ve{\mathit{HF}}^-(Y(G))$ is isomorphic to $\mathbb{HF}(G)$ as a module over $\mathbb{F}[\hspace{-.5mm}[ U]\hspace{-.5mm}] \otimes \Lambda^* H_1(Y(G))/\mathrm{Tors}$.
\end{conj}
\bibliographystyle{custom}
\def\MR#1{}
\bibliography{biblio}
\end{document} |
\begin{document}
\title{
Nearly Deterministic Bell Measurement for
Multiphoton Qubits and Its Application to Quantum Information Processing}
\author{Seung-Woo Lee}
\affiliation{Center for Macroscopic Quantum Control, Department of
Physics and Astronomy, Seoul National University, Seoul, 151-742,
Korea}
\author{Kimin Park}
\affiliation{Center for Macroscopic Quantum Control, Department of
Physics and Astronomy, Seoul National University, Seoul, 151-742,
Korea}
\affiliation{Department of Optics, Palack\'y University, 17. listopadu 1192/12, 77146 Olomouc, Czech Republic}
\author{Timothy C. Ralph}
\affiliation{Centre for Quantum Computation and Communication Technology, School of Mathematics and Physics, University of Queensland, St Lucia, Queensland 4072, Australia}
\author{Hyunseok Jeong}\email{[email protected]}
\affiliation{Center for Macroscopic Quantum Control, Department of
Physics and Astronomy, Seoul National University, Seoul, 151-742,
Korea}
\begin{abstract}
We propose a Bell measurement scheme employing a logical qubit in Greenberger-Horne-Zeilinger (GHZ) entanglement with an arbitrary number of photons. Remarkably, the success probability of the Bell measurement, as well as that of teleportation of the GHZ entanglement, can be made arbitrarily high using only linear optics elements and photon on-off measurements as the number of photons increases. Our scheme outperforms previous proposals using single-photon qubits when comparing the success probabilities in terms of the average photon usage. It has another important advantage for experimental feasibility: it does not require photon-number-resolving measurements.
Our proposal provides an alternative candidate for all-optical quantum information processing.
\end{abstract}
\maketitle
Photons are a promising candidate for quantum information processing \cite{PKok2007,Ralph2010}.
A well-known method to construct a photonic qubit is to use a single photon with its polarization degree of freedom \cite{PKok2007}. A crucial element in quantum communication and computation using linear optics and photon measurements \cite{Knill2001} is the Bell state measurement that discriminates between four Bell states. The standard Bell-measurement scheme
for the Bell states of single-photon qubits utilizes beam splitters and photodetectors \cite{Lut99,Calsa2001}.
This method, in effect, projects two photons onto a complete measurement basis of two Bell states and two product states so that only two of the Bell states can be unambiguously identified.
For this reason, the success probability of the Bell measurement using linear optics elements and photodetectors is limited to 1/2 \cite{Lut99,Calsa2001}.
This has been a fundamental hindrance to deterministic quantum teleportation and scalable quantum computation \cite{PKok2007,Ralph2010}.
There are proposals to improve the success probability of the Bell discrimination using ancillary states \cite{Grice2011,Ewert2014}, additional squeezing operations \cite{Zaidi2013}, and different types of qubit encoding using coherent states \cite{Jeong2001} or hybrid states \cite{SWLEE13}.
However, all these schemes require photon-number-resolving detection
\cite{Grice2011,Zaidi2013,Ewert2014,Jeong2001,SWLEE13}. The need for ancillary entangled resources \cite{Grice2011,Ewert2014} and the limited success probabilities \cite{Zaidi2013} are further obstacles to overcome.
In this paper, we propose a Bell measurement scheme using linear optics and photon on-off measurements with qubit encoding in the form of Greenberger-Horne-Zeilinger (GHZ) entanglement.
It is shown that the logical Bell states can be efficiently discriminated by performing $N$ Bell measurements on the individual photon pairs, where $N$ is the number of photons in a logical qubit,
using only the standard technique with beam splitters and on-off photodetectors.
The limitation that each measurement for photon
pairs can only identify two of the four Bell states is overcome by the
fact that each of the four $N$-photon Bell states is characterized by
the number of contributions from the two single-photon-qubit Bell states that can be
identified in the measurement of photon pairs.
As a result, the logical Bell measurement fails only when none of the $N$ pairs is a detectable Bell state, resulting in a success probability of $1-2^{-N}$ that rapidly approaches unity as $N$ increases;
it outperforms the previous approaches \cite{Grice2011,Zaidi2013,Ewert2014} in its efficiency against the number of photons {\it without} using photon number resolving detection.
Using this Bell measurement scheme, a qubit in an $N$ photon GHZ-type entanglement can be teleported with an arbitrarily high success probability
using a GHZ-type entangled channel of $2N$ photons, as $N$ becomes large.
In our framework, a universal set of gate operations can be constructed using only linear optics, on-off measurements and multiphoton entanglement. This may be a competitive new approach to photonic quantum information processing due to the aforementioned advantages.
{\em Multiphoton Bell measurement.--}
We define single-photon-qubit Bell states as
\begin{equation}
\label{eq:BellB}
\begin{aligned}
\ket{\Phi^\pm}=\frac{1}{\sqrt{2}}(\ket{+}\ket{+}\pm\ket{-}\ket{-}),\\
\ket{\Psi^\pm}=\frac{1}{\sqrt{2}}(\ket{+}\ket{-}\pm\ket{-}\ket{+}),
\end{aligned}
\end{equation}
in the diagonal basis $\ket{\pm}=(\ket{H}\pm\ket{V})/\sqrt{2}$ in terms of horizontal and vertical polarization single photon states $\ket{H}$ and $\ket{V}$.
Only two of the four Bell states in Eq.~(\ref{eq:BellB}) can be discriminated
by the standard Bell measurement technique using linear optics \cite{Lut99,Calsa2001}.
For example, one can identify $\ket{\Phi^{-}}$ and $\ket{\Psi^{-}}$ using beam splitters and four on-off photodetectors \cite{Calsa2001}. We shall refer to this single-photon-qubit Bell measurement as $\rm B_s$.
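As a concrete sanity check on the basis conventions, the Bell states of Eq.~(\ref{eq:BellB}) can be built numerically and re-expressed in the $\{\ket{H},\ket{V}\}$ basis. The following is a sketch of our own (variable names are illustrative, not part of the original analysis):

```python
import numpy as np

# Polarization basis |H> = (1,0), |V> = (0,1); diagonal basis as in Eq. (1)
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
plus = (H + V) / np.sqrt(2)
minus = (H - V) / np.sqrt(2)

# The four single-photon-qubit Bell states of Eq. (1)
Phi_p = (np.kron(plus, plus) + np.kron(minus, minus)) / np.sqrt(2)
Phi_m = (np.kron(plus, plus) - np.kron(minus, minus)) / np.sqrt(2)
Psi_p = (np.kron(plus, minus) + np.kron(minus, plus)) / np.sqrt(2)
Psi_m = (np.kron(plus, minus) - np.kron(minus, plus)) / np.sqrt(2)

# In the {|H>,|V>} basis, e.g. |Phi^-> = (|HV> + |VH>)/sqrt(2),
# one of the two states identified by the standard B_s measurement.
assert np.allclose(Phi_m, (np.kron(H, V) + np.kron(V, H)) / np.sqrt(2))
```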
The logical basis is defined with $N$ photons as
\begin{equation}
\begin{aligned}
& \ket{0_{L}}\equiv\ket{+}^{\otimes N}=\ket{+}_1\ket{+}_2\ket{+}_3 \cdots \ket{+}_N,\\
& \ket{1_{L}}\equiv\ket{-}^{\otimes N}=\ket{-}_1\ket{-}_2\ket{-}_3 \cdots \ket{-}_N
\end{aligned}
\end{equation}
and then a logical qubit is generally in a GHZ-type state as $\alpha |+\rangle^{\otimes N} + \beta |-\rangle^{\otimes N}$.
Let us first consider the simplest case of two-photon encoding ($N=2$) with $\ket{0_L}\equiv\ket{+}\otimes\ket{+}$ and $\ket{1_L}\equiv\ket{-}\otimes\ket{-}$. The logical Bell states
can be expressed as
\begin{equation}
\begin{aligned}
&\ket{\Phi^{\pm}_{(2)}}=\frac{1}{\sqrt{2}}(\ket{+}_{1}\ket{+}_{2}\ket{+}_{1'}\ket{+}_{2'}
\pm\ket{-}_{1}\ket{-}_{2}\ket{-}_{1'}\ket{-}_{2'}),\\
&\ket{\Psi^{\pm}_{(2)}}=\frac{1}{\sqrt{2}}(\ket{+}_{1}\ket{+}_{2}\ket{-}_{1'}\ket{-}_{2'}
\pm\ket{-}_{1}\ket{-}_{2}\ket{+}_{1'}\ket{+}_{2'}),
\end{aligned}
\end{equation}
where the first logical qubit is of photonic modes $1$ and $2$ while the second is of $1'$ and $2'$.
Simply by rearranging modes $1'$ and $2$ as implied in Fig.~\ref{fig:scheme}(a),
these Bell states can be represented in terms of the single-photon-qubit Bell states in Eq.~(\ref{eq:BellB}) as
\begin{equation}
\begin{aligned}
&\ket{\Phi^{\pm}_{(2)}}=\frac{1}{\sqrt{2}}(\ket{\Phi^+}_{11'}\ket{\Phi^{\pm}}_{22'}+\ket{\Phi^-}_{11'}\ket{\Phi^{\mp}}_{22'}),\\
&\ket{\Psi^{\pm}_{(2)}}=\frac{1}{\sqrt{2}}(\ket{\Psi^+}_{11'}\ket{\Psi^{\pm}}_{22'}+\ket{\Psi^-}_{11'}\ket{\Psi^{\mp}}_{22'}).
\end{aligned}
\end{equation}
It then becomes clear that the four Bell states $\ket{\Phi^{\pm}_{(2)}}$ and $\ket{\Psi^{\pm}_{(2)}}$ can be discriminated with a 75\% success probability by means of two separate $\rm B_s$ measurements performed on two photons, one from the first qubit and the other from the second, as shown in Fig.~\ref{fig:scheme}(a).
Note that a $\rm B_s$ measurement can identify only $\ket{\Phi^{-}}$ and $\ket{\Psi^{-}}$ with the total success probability 50\%.
From the results of the two $\rm B_s$ measurements, one can distinguish the Bell states as follows: (i) $\ket{\Phi^{+}_{(2)}}$ when both $\rm B_s$ measurements succeed with results $\ket{\Phi^-}$, (ii) $\ket{\Phi^{-}_{(2)}}$ when exactly one measurement succeeds with $\ket{\Phi^{-}}$, (iii) $\ket{\Psi^{+}_{(2)}}$ when both succeed with $\ket{\Psi^{-}}$, (iv) $\ket{\Psi^{-}_{(2)}}$ when exactly one measurement succeeds with $\ket{\Psi^{-}}$, (v) failure occurs when both measurements fail ({\em i.e.} neither $\ket{\Phi^{-}}$ nor $\ket{\Psi^{-}}$ is obtained). Assuming equal input probabilities of the Bell states, the success probability of the Bell measurement is $P_s=3/4$.
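The two-photon decomposition above can be verified numerically by reordering the tensor factors from mode order $(1,2,1',2')$ to $(1,1',2,2')$. The following is a minimal sketch of our own (not part of the original analysis):

```python
import numpy as np

# |+> and |-> in the {|H>,|V>} basis
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)

def kron_all(*vs):
    out = vs[0]
    for v in vs[1:]:
        out = np.kron(out, v)
    return out

# Single-photon-qubit Bell states |Phi^{+/-}>
Phi_p = (kron_all(plus, plus) + kron_all(minus, minus)) / np.sqrt(2)
Phi_m = (kron_all(plus, plus) - kron_all(minus, minus)) / np.sqrt(2)

# |Phi^+_{(2)}> written in mode order (1, 2, 1', 2')
lhs = (kron_all(plus, plus, plus, plus)
       + kron_all(minus, minus, minus, minus)) / np.sqrt(2)
# Reorder modes (1,2,1',2') -> (1,1',2,2') by permuting tensor axes
lhs_reordered = lhs.reshape(2, 2, 2, 2).transpose(0, 2, 1, 3).reshape(16)
# Right-hand side of the decomposition, in mode order (1,1',2,2')
rhs = (np.kron(Phi_p, Phi_p) + np.kron(Phi_m, Phi_m)) / np.sqrt(2)
assert np.allclose(lhs_reordered, rhs)
```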
This scheme can be generalized to arbitrary $N$ photon encoding.
The logical Bell states $\ket{\Phi^{\pm}_{(N)}}=(|0_L\rangle|0_L\rangle\pm|1_L\rangle|1_L\rangle)/\sqrt{2}$ and $\ket{\Psi^{\pm}_{(N)}}=(|0_L\rangle|1_L\rangle\pm|1_L\rangle|0_L\rangle)/\sqrt{2}$ can be expressed as
\begin{equation}
\begin{aligned}
&\ket{\Phi^{+}_{(N)}}=\frac{1}{\sqrt{2^{N-1}}}\sum^{[N/2]}_{j=0}{\cal P}[\ket{\Phi^+}^{\otimes N-2j}\ket{\Phi^-}^{\otimes 2j}],\\
&\ket{\Phi^{-}_{(N)}}=\frac{1}{\sqrt{2^{N-1}}}\sum^{[(N-1)/2]}_{j=0}{\cal P}[\ket{\Phi^+}^{\otimes N-2j-1}\ket{\Phi^-}^{\otimes 2j+1}],\\
&\ket{\Psi^{+}_{(N)}}=\frac{1}{\sqrt{2^{N-1}}}\sum^{[N/2]}_{j=0}{\cal P}[\ket{\Psi^+}^{\otimes N-2j}\ket{\Psi^-}^{\otimes 2j}],\\
&\ket{\Psi^{-}_{(N)}}=\frac{1}{\sqrt{2^{N-1}}}\sum^{[(N-1)/2]}_{j=0}{\cal P}[\ket{\Psi^+}^{\otimes N-2j-1}\ket{\Psi^-}^{\otimes 2j+1}],
\end{aligned}
\end{equation}
where $[x]$ denotes the largest integer $\leq x$, and ${\cal P}[\cdot]$ denotes the sum over all distinct permutations of the $N$ photon-pair states (Supplementary Material). For example, $\ket{\Phi^{+}_{(3)}}=(\ket{\Phi^{+}}^{\otimes3}+{\cal P}[\ket{\Phi^{+}}\ket{\Phi^{-}}^{\otimes2}])/2 = (\ket{\Phi^{+}}^{\otimes3}+\ket{\Phi^{+}}\ket{\Phi^{-}}^{\otimes2}+\ket{\Phi^{-}}\ket{\Phi^{+}}\ket{\Phi^{-}}+\ket{\Phi^{-}}^{\otimes2}\ket{\Phi^{+}})/2$. The four logical Bell states can be discriminated by performing $N$ $\rm B_s$ measurements as illustrated in Fig.~\ref{fig:scheme}(b). Each $\rm B_s$ is performed on two photons, one from the first logical qubit and the other from the second. The results of the logical Bell measurement are then: (i) $\ket{\Phi^{+}_{(N)}}$ when an even number of $\rm B_s$ measurements succeed with result $\ket{\Phi^-}$, (ii) $\ket{\Phi^{-}_{(N)}}$ for an odd number of $\ket{\Phi^-}$, (iii) $\ket{\Psi^{+}_{(N)}}$ for an even number of $\ket{\Psi^-}$, (iv) $\ket{\Psi^{-}_{(N)}}$ for an odd number of $\ket{\Psi^-}$, (v) the measurement fails when none of the $\rm B_s$ measurements succeeds.
In fact, the logical Bell measurement can be performed via $N$ spatially or temporally distributed $\rm B_s$ measurements, irrespective of the order of the measurements.
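The decision rules (i)--(v) amount to a parity count of the detected $\ket{\Phi^-}$ or $\ket{\Psi^-}$ outcomes. The following classifier is a minimal sketch of our own (function name and outcome labels are illustrative):

```python
def classify_logical_bell(outcomes):
    """Classify a logical Bell measurement from N single-pair B_s results.

    `outcomes` is a list of N strings: 'Phi-' or 'Psi-' when the pair's
    B_s measurement succeeds, 'fail' when it does not.  For a physical
    input only one of the two detectable outcomes can occur; if both are
    present we default to the Phi branch.  Returns a logical Bell state
    label, or 'fail' (rule (v)).
    """
    n_phi = outcomes.count('Phi-')
    n_psi = outcomes.count('Psi-')
    if n_phi == 0 and n_psi == 0:
        return 'fail'                      # rule (v): no B_s succeeded
    if n_phi > 0:                          # Phi-type logical state
        return 'Phi+' if n_phi % 2 == 0 else 'Phi-'   # rules (i)/(ii)
    return 'Psi+' if n_psi % 2 == 0 else 'Psi-'       # rules (iii)/(iv)
```

For $N=2$ this reproduces cases (i)--(v) of the two-photon discussion above.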
\begin{figure}
\caption{(Color online).
(a) Bell measurement for two-photon qubits using two single-photon-qubit Bell measurements $\rm B_s$. Each logical qubit is of two photons. (b) Bell measurement for $N$-photon qubits through $N$ times of $\rm B_s$ measurements. }
\label{fig:scheme}
\end{figure}
Assuming equal input probabilities of the Bell states, we obtain the success probability of the Bell measurement as $P_s=1-2^{-N}$. Remarkably, our scheme shows the best performance among the Bell discrimination schemes for photons with respect to the success probability attained against the average photon number ($\bar{n}$) used in the process, as shown in Fig.~\ref{fig:tele} (details are presented in the Supplementary Material). For example, it reaches $P_s=0.996$ with $N=8$ $(\bar{n}=16)$.
Our scheme does not require photon number resolving detectors in contrast to previous schemes suggested to improve the success probability of a Bell measurement \cite{Grice2011,Zaidi2013,Ewert2014}.
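Under the stated model, each $\rm B_s$ independently succeeds with probability $1/2$ and the logical measurement fails only when all $N$ fail. A quick numerical check of $P_s=1-2^{-N}$ (a sketch of our own, not part of the original analysis):

```python
from itertools import product

def logical_bell_success(N):
    """P_s = 1 - 2**(-N): failure only if all N B_s measurements fail."""
    return 1 - 0.5 ** N

def success_by_enumeration(N):
    """Enumerate all success/fail patterns of N independent B_s
    measurements (each succeeding with probability 1/2); the logical
    measurement succeeds unless the pattern is all-fail."""
    patterns = list(product([True, False], repeat=N))
    return sum(any(p) for p in patterns) / len(patterns)
```

For instance, `logical_bell_success(2)` gives the $3/4$ quoted for two-photon encoding, and $N=8$ gives $1-2^{-8}\approx 0.996$.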
{\em Nearly deterministic quantum teleportation.--}
Our Bell measurement scheme
immediately enhances the success probability of the standard quantum teleportation \cite{Bennett93}. Suppose that an unknown qubit $\ket{\phi_N}_A=a\ket{+}^{\otimes N}_A+b\ket{-}^{\otimes N}_A$ with $N$ photons at site $A$ is to be teleported to site $B$ via the channel state $(|+\rangle_A^{\otimes N}|+\rangle_B^{\otimes N}+|-\rangle_A^{\otimes N}|-\rangle_B^{\otimes N})/\sqrt{2}$.
The sender carries out $N$ $\rm B_s$ measurements,
where each $\rm B_s$ is performed on two photons, i.e., one from $\ket{\phi_N}_A$ and the other from site $A$ of the channel.
The receiver at site $B$ can then retrieve $\ket{\phi_N}$ by performing appropriate unitary transforms.
The required logical Pauli X (bit flip) and Z (phase flip) operations can be implemented deterministically, in the $\{|H\rangle, |V\rangle\}$ basis, by phase-flipping all photon modes and by executing a bit flip on any one mode, respectively.
Therefore, the success probability of teleportation equals that of the Bell measurement $P_s=1-2^{-N}$.
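The stated implementations of the logical Pauli operators can be checked directly on the logical basis states. The following is our own numpy sketch (the choice $N=3$ is arbitrary):

```python
import numpy as np

H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])
plus = (H + V) / np.sqrt(2)
minus = (H - V) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])   # bit flip |H> <-> |V>
Z = np.array([[1.0, 0.0], [0.0, -1.0]])  # phase flip, i.e. |+> <-> |->

def tensor(*ops):
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

N = 3
zero_L = tensor(*[plus] * N)    # |0_L> = |+>^{otimes N}
one_L = tensor(*[minus] * N)    # |1_L> = |->^{otimes N}
I2 = np.eye(2)

logical_X = tensor(*[Z] * N)            # phase flip on every mode
logical_Z = tensor(X, *[I2] * (N - 1))  # bit flip on a single mode

assert np.allclose(logical_X @ zero_L, one_L)   # X: |0_L> -> |1_L>
assert np.allclose(logical_Z @ one_L, -one_L)   # Z: |1_L> -> -|1_L>
```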
\begin{figure}
\caption{(Color online).
The success probability of Bell measurements against the average photon number ($\bar{n}$) used in the process.}
\label{fig:tele}
\end{figure}
{\em Universal quantum computation.--}
Using our framework,
a universal set of gate operations can be constructed.
For example, Pauli X, arbitrary Z (phase), Hadamard, and controlled-Z (CZ) operations constitute such a universal set.
Pauli X and arbitrary Z (phase) operations are straightforward to implement in the way explained earlier for teleportation.
Hadamard and CZ gates can be implemented through the gate teleportation protocol
with specific types of entangled states \cite{Gottesman1999}.
The success probability of the gate operations based on the teleportation protocol can be made nearly deterministic
by increasing the number of photons for a logical qubit.
The cost is the preparation of multiphoton entanglement as resource states.
Such multiphoton entanglement has been experimentally demonstrated \cite{JWPan2012}. For example, GHZ-type entanglement of up to 8 photons \cite{Yao2012,Huang2011} and cluster states of up to 8 photons \cite{Yao2012Nature} have been generated.
On-demand generation schemes \cite{deterministic,Lindener2009} are also expected to be realized based on semiconductor quantum dots \cite{Young2006}.
{\em Effects of photon losses.--} Photon loss is a major detrimental factor in optical quantum information processing \cite{Ralph2010}.
We assume that the photon loss rate for any single mode is $\eta$ and analyze the errors caused by the photon losses using the master equation (Supplementary Material) \cite{Phoenix90}.
Photon loss during quantum computing occurs with rate $P=1-(1-\eta)^N$ for a logical qubit. If a photon is lost, the qubit experiences a Pauli Z error with probability 1/2. The failure probability ($1-P_{s} $) of the logical Bell measurement is obtained as
\begin{equation}
P_f(\eta)=\sum^{N}_{k=0}\binom{N}{k}(1-\eta)^{N-k}\eta^{k}\left(\frac{1}{2}\right)^{N-k}=\left(\frac{1+\eta}{2}\right)^N,
\end{equation}
where $\binom{N}{k}$ represents the binomial coefficient. Note that an error caused by loss in any single photon mode is in fact detectable, since the missing photon fails to arrive at the corresponding detector(s) during the logical Bell measurement. Such an error noticed immediately by a measurement is called ``locatable'' \cite{Ralph2010}. Moreover, missing photons in the input qubit can be compensated at the output qubit as long as the teleportation succeeds. In our scheme, unlocatable errors that should be corrected by an error correction code appear only in quantum memory with rate $P$, i.e., the photon loss rate of a logical qubit.
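The closed form $((1+\eta)/2)^N$ can be checked against the binomial sum numerically (a sketch of our own; the parameter values below are arbitrary):

```python
from math import comb

def failure_prob_sum(N, eta):
    """Binomial sum: k of the N photons are lost (rate eta each); a pair
    with a lost photon always fails, and each surviving pair's B_s still
    gives an unidentified outcome with probability 1/2."""
    return sum(comb(N, k) * (1 - eta) ** (N - k) * eta ** k * 0.5 ** (N - k)
               for k in range(N + 1))

def failure_prob_closed(N, eta):
    """Closed form P_f(eta) = ((1 + eta)/2)**N from the binomial theorem."""
    return ((1 + eta) / 2) ** N
```

At $\eta=0$ this reduces to the lossless failure probability $2^{-N}$, consistent with $P_s=1-2^{-N}$.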
\begin{table}[t]
\begin{tabular}{|c|c||c|c|}
\hline
~$N$~~&~Noise threshold $\eta$~~&~$N$~~&~Noise threshold $\eta$~~\\
\hline
~3~~&~$1.3\times 10^{-3}$~~&~6~~&~$1.3\times 10^{-3}$~~\\
\hline
~4~~&~$1.7\times 10^{-3}$~~&~7~~&~$1.1\times 10^{-3}$~~\\
\hline
~5~~&~$1.5\times 10^{-3}$~~&~8~~&~$0.9\times 10^{-3}$~~\\
\hline
\end{tabular}
\caption{\label{tab:table1}{Fault-tolerant noise thresholds ($\eta$) for different numbers of photons in a logical qubit ($N$), using the seven-qubit Steane code and the telecorrection protocol \cite{Dawson2006}. The highest threshold is obtained when $N=4$. }}
\end{table}
We summarize the assumptions made for our analysis of quantum computing as follows. Multiphoton entangled states, both for logical qubits and for entangled channels for gate teleportation, are provided by off-line processes. During the off-line process of producing multiphoton entanglement used as quantum channels, loss occurs with rate $\eta$; as a result, imperfect channels (in which photons are lost with rate $\eta$ at each photonic mode) are supplied into the in-line computation process. The initial logical qubits are assumed to be in ideal pure states when they are first supplied into the in-line computation process. During the in-line process of quantum computing, for each gate operation and corresponding time in quantum memory, the same loss rate $\eta$ is applied to each mode of the multiphoton qubits. We note that the total resource cost depends upon the efficiency of the off-line generation process.
{\em Fault-tolerant quantum computation.--} In order to build arbitrary large-scale quantum computers,
the amount of noise per operation with appropriate error corrections should be below a fault-tolerance threshold \cite{Shor1996}.
We carried out numerical simulations to obtain the threshold for a given loss rate $\eta$. We here employ the seven qubit Steane code \cite{Steane1996} with several levels of concatenation based on the circuit-based telecorrection \cite{Dawson2006}.
In fact, the Steane code can correct arbitrary logical or unlocatable errors; however, for the purpose of this calculation, we assume that errors other than loss errors are negligible compared to the loss errors.
The details of the method \cite{Lund2008,Dawson2006} are presented in the Supplementary Material, and the noise thresholds of our model are obtained as shown in Table~\ref{tab:table1}.
Interestingly, the largest threshold is obtained when the qubit is encoded with $4$ photons ($N=4$), and further increase of $N$ lowers the threshold due to the increase of unlocatable errors. The obtained noise threshold ($\sim 1.7\times10^{-3}$) is much higher than those for coherent-state qubits ($\sim2\times10^{-4}$) \cite{Lund2008,coherent_1,coherent_2} and hybrid qubits ($\sim5\times10^{-4}$) \cite{SWLEE13}, and is almost equivalent to the one using parity states \cite{Ralph2005,Hayes2010}. We expect that even much higher thresholds may be attainable by employing recently proposed topological error codes \cite{Yao2012Nature,Sean2010}, which will be interesting future work.
{\em Remarks.--} We have proposed a nearly deterministic Bell discrimination scheme using multiphoton qubit encoding.
The limitation that only two of the four Bell states can be identified by the standard single-photon-qubit Bell measurement, $\rm B_s$, is overcome by multiphoton encoding with GHZ entanglement and $N$ $\rm B_s$ measurements, where $N$ is the number of photons in a logical qubit. The logical Bell measurement fails only when all $N$ of the $\rm B_s$ measurements fail. As a result, its success probability, $1-2^{-N}$, rapidly approaches unity as $N$ increases.
It outperforms previous schemes devised to improve the success probabilities of Bell measurements using single photons and linear optics, regarding the efficiency in terms of average photon usage. Another remarkable advantage of our scheme over the previous ones is that it does not require photon-number-resolving measurements; on-off measurements suffice. This means that all errors due to photon losses are locatable and relatively easy to handle during quantum information processing. We have finally demonstrated fault-tolerant quantum computation using our approach. Remarkably, the highest noise threshold is obtained with $4$-photon qubits and $8$-photon entangled channels that are accessible in current laboratories \cite{Yao2012,Huang2011,Yao2012Nature}.
Our scheme for the Bell measurement can be performed via $N$ spatially or temporally distributed $\rm B_s$ measurements.
We note that such an experiment can be performed utilizing temporal mode entanglement as done in Refs.~\cite{ex1,ex2,ex3}. It then follows that a single single-photon-qubit Bell-measurement device \cite{Lut99} is sufficient to perform the $N$ temporally separated $\rm B_s$ measurements required for a logical Bell measurement.
As a proof-of-principle experiment of our scheme, quantum teleportation from two transmitters to two receivers using four-photon entanglement and two $\rm B_s$ measurements, for example, would be immediately realizable using current technology.
Our idea, in principle, is not limited to optical systems but can be applied to other multipartite systems. It reveals the possibility of using multipartite entangled systems for efficient quantum communication and computation.
We thank Casey Myers for useful discussions. This work was supported by the National Research
Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. 2010-0018295)
and by the Australian Research Council Centre of Excellence for
Quantum Computation and Communication Technology
(Project No. CE11000102). K.P. acknowledges financing by the European Social Fund and
the state budget of the Czech Republic, POST-UP NO
CZ.1.07/2.3.00/30.0004.
\begin{thebibliography}{99}
\bibitem{PKok2007} P. Kok, W. J. Munro, K. Nemoto, T. C. Ralph, J. P. Dowling, and G. J. Milburn, Rev. Mod. Phys. {\bf 79}, 135 (2007).
\bibitem{Ralph2010} T. C. Ralph and G. J. Pryde, Progress in Optics {\bf 54}, 209 (2010).
\bibitem{Knill2001} E. Knill, R. Laflamme, and G. J. Milburn, Nature {\bf 409}, 46 (2001).
\bibitem{Lut99} N. L\"utkenhaus, J. Calsamiglia, and K.-A. Suominen, Phys. Rev. A {\bf 59}, 3295 (1999).
\bibitem{Calsa2001} J. Calsamiglia and N. L\"{u}tkenhaus, App. Phys. B {\bf 72}, 67 (2001).
\bibitem{Grice2011} W. P. Grice, Phys. Rev A {\bf 84}, 042331 (2011).
\bibitem{Ewert2014} F. Ewert and P. van Loock, Phys. Rev. Lett. {\bf 113}, 140403 (2014).
\bibitem{Zaidi2013} H. A. Zaidi and P. van Loock, Phys. Rev. Lett. {\bf 110}, 260501 (2013).
\bibitem{Jeong2001} H. Jeong, M. S. Kim, and J. Lee, Phys. Rev. A {\bf 64}, 052308 (2001);
H. Jeong and M. S. Kim, Quantum Information and Computation {\bf 2}, 208 (2002).
\bibitem{SWLEE13} S.-W. Lee and H. Jeong, Phys. Rev. A {\bf 87}, 022326 (2013).
\bibitem{Bennett93} C. H. Bennett, G. Brassard, C. Crepeau, R. Jozsa, A. Peres, and W. K. Wootters, Phys. Rev. Lett. {\bf 70}, 1895 (1993).
\bibitem{Gottesman1999} D. Gottesman and I. L. Chuang, Nature {\bf 402}, 390 (1999).
\bibitem{JWPan2012} J.-W. Pan, Z.-B. Chen, C.-Y. Lu, H. Weinfurter, A. Zeilinger, and M. Zukowski, Rev. Mod. Phys. {\bf 84}, 777 (2012).
\bibitem{Yao2012} X.-C. Yao, T.-X. Wang, P. Xu, H. Lu, G.-S. Pan, X.-H. Bao, C.-Z. Peng, C.-Y. Lu, Y.-A. Chen and J.-W. Pan, Nature Photonics {\bf 6}, 225 (2012).
\bibitem{Huang2011} Y.-F. Huang, B.-H. Liu, L. Peng, Y.-H. Li, L. Li, C.-F. Li, and G.-C. Guo, Nature Communications {\bf 2}, 546 (2011).
\bibitem{Yao2012Nature} X.-C. Yao, T.-X. Wang, H.-Z. Chen, W.-B. Gao, A. G. Fowler, R. Raussendorf, Z.-B. Chen, N.-L. Liu, C.-Y. Lu, Y.-J. Deng, Y.-A. Chen, and J.-W. Pan, Nature {\bf 482}, 489 (2012).
\bibitem{deterministic}
C. Sch\"on, E. Solano, F. Verstraete, J. I. Cirac, and M. M. Wolf, Phys. Rev. Lett. {\bf 95}, 110503 (2005).
\bibitem{Lindener2009} N. H. Lindner and T. Rudolph, Phys. Rev. Lett. {\bf103}, 113602 (2009).
\bibitem{Young2006} R. J. Young, R. M. Stevenson, P. Atkinson, K. Cooper, D. A. Ritchie, A. J. Shields, New J. Phys. {\bf 8}, 29 (2006).
\bibitem{Phoenix90} S. J. D. Phoenix, Phys. Rev. A {\bf41}, 5132 (1990).
\bibitem{Shor1996} P. W. Shor, in Proceedings of the 37th Annual Symposium on Foundations of Computer Science (IEEE Computer Society Press, Los Alamitos, CA, 1996), pp. 56-65.
\bibitem{Steane1996} A. M. Steane, Phys. Rev. A {\bf 54}, 4741 (1996).
\bibitem{Dawson2006} C. M. Dawson, H. L. Haselgrove, and M. A. Nielsen, Phys. Rev. A {\bf 73}, 052306 (2006).
\bibitem{Lund2008} A. P. Lund, T. C. Ralph, and H. L. Haselgrove, Phys. Rev. Lett. {\bf100}, 030503 (2008).
\bibitem{coherent_1}
H. Jeong and M. S. Kim, Phys. Rev. A {\bf 65}, 042305 (2002).
\bibitem{coherent_2}
T.C. Ralph, A. Gilchrist, G.J. Milburn, W.J. Munro, S. Glancy, Phys. Rev. A {\bf 68}, 042319 (2003).
\bibitem{Ralph2005} T. C. Ralph, A. J. F. Hayes, and A. Gilchrist, Phys. Rev. Lett. {\bf 95}, 100501 (2005).
\bibitem{Hayes2010} A. J. F. Hayes, H. L. Haselgrove, A. Gilchrist, and T. C. Ralph, Phys. Rev. A {\bf 82}, 022323 (2010).
\bibitem{Sean2010} S. D. Barrett and T. M. Stace, Phys. Rev. Lett. {\bf 105}, 200502 (2010).
\bibitem{ex1} A. Zavatta, M. D'Angelo, V. Parigi, and M. Bellini,
Phys. Rev. Lett. {\bf 96}, 020502 (2006).
\bibitem{ex2} M. D'Angelo, A. Zavatta, V. Parigi, and
M. Bellini, J. Mod. Opt. {\bf 53}, 2259 (2006).
\bibitem{ex3}
H. Jeong, A. Zavatta, M. Kang, S.-W. Lee, L. S. Costanzo, S. Grandi, T. C. Ralph, and M. Bellini, Nature Photonics {\bf 8}, 564 (2014).
\end{thebibliography}
\pagebreak
\begin{center}
\textbf{\large Supplemental Material}
\end{center}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\renewcommand{\theequation}{S\arabic{equation}}
\renewcommand{\thefigure}{S\arabic{figure}}
\renewcommand{\bibnumfmt}[1]{[S#1]}
\renewcommand{\citenumfont}[1]{S#1}
\section{Multipartite Bell measurement for arbitrary $N$ photons}
The Bell states with arbitrary $N$ photons given in Eq.~(4) of the main Letter can be represented by single-photon-qubit Bell states. From Eq.~(1) of the main text, we find $\ket{\pm}\ket{\pm}=(\ket{\Phi^{+}}\pm\ket{\Phi^{-}})/\sqrt{2}$ and $\ket{\pm}\ket{\mp}=(\ket{\Psi^{+}}\pm\ket{\Psi^{-}})/\sqrt{2}$. Thus, the Bell states with $N$ photons can be written as
\begin{equation}
\nonumber
\begin{aligned}
&\ket{\Phi^{\pm}_{(N)}}=\frac{1}{\sqrt{2}}(\ket{+}^{\otimes N}\ket{+}^{\otimes N}\pm\ket{-}^{\otimes N}\ket{-}^{\otimes N})\\
&=\frac{1}{\sqrt{2}}\Big((\ket{+}\ket{+})^{\otimes N}\pm(\ket{-}\ket{-})^{\otimes N}\Big)\\
&=\frac{1}{\sqrt{2^{N+1}}}\Big((\ket{\Phi^{+}}+\ket{\Phi^{-}})^{\otimes N}\pm(\ket{\Phi^{+}}-\ket{\Phi^{-}})^{\otimes N}\Big),\\
&\ket{\Psi^{\pm}_{(N)}}=\frac{1}{\sqrt{2}}(\ket{+}^{\otimes N}\ket{-}^{\otimes N}\pm\ket{-}^{\otimes N}\ket{+}^{\otimes N})\\
&=\frac{1}{\sqrt{2}}\Big((\ket{+}\ket{-})^{\otimes N}\pm(\ket{-}\ket{+})^{\otimes N}\Big)\\
&=\frac{1}{\sqrt{2^{N+1}}}\Big((\ket{\Psi^{+}}+\ket{\Psi^{-}})^{\otimes N}\pm(\ket{\Psi^{+}}-\ket{\Psi^{-}})^{\otimes N}\Big),
\end{aligned}
\end{equation}
from which we can obtain Eq.~(5) in the main Letter. For example, we have
\begin{equation}
\nonumber
\begin{aligned}
&\ket{\Phi^{+}_{(3)}}=\frac{1}{4}\Big((\ket{\Phi^{+}}+\ket{\Phi^{-}})^{\otimes 3}+(\ket{\Phi^{+}}-\ket{\Phi^{-}})^{\otimes 3}\Big)\\
&=\frac{1}{2}\Big(\ket{\Phi^{+}}\ket{\Phi^{+}}\ket{\Phi^{+}}+\ket{\Phi^{+}}\ket{\Phi^{-}}\ket{\Phi^{-}}\\
&~~~~~~~~+\ket{\Phi^{-}}\ket{\Phi^{+}}\ket{\Phi^{-}}+\ket{\Phi^{-}}\ket{\Phi^{-}}\ket{\Phi^{+}}\Big)\\
&=\frac{1}{2}\Big(\ket{\Phi^{+}}^{\otimes3}+{\cal P}[\ket{\Phi^{+}}\ket{\Phi^{-}}^{\otimes2}]\Big),
\end{aligned}
\end{equation}
where ${\cal P}[\cdot]$ is the permutation function defined in the main Letter. Likewise, all other Bell states with arbitrary $N$ can be represented in this way.
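The surviving terms in such expansions can be checked combinatorially: in $(\ket{\Phi^{+}}+\ket{\Phi^{-}})^{\otimes N}\pm(\ket{\Phi^{+}}-\ket{\Phi^{-}})^{\otimes N}$, an ordered word with $k$ factors of $\ket{\Phi^{-}}$ picks up the coefficient $1\pm(-1)^{k}$. The following short script (an illustrative sketch, not part of the derivation) confirms the $N=3$ case above:

```python
from itertools import product

# Represent |Phi^+> as 'p' and |Phi^-> as 'm', and expand
# (p+m)^{(x)3} + sign*(p-m)^{(x)3} over ordered tensor words,
# collecting integer coefficients (before normalization).
def expand(N, relative_sign):
    coeffs = {}
    for word in product("pm", repeat=N):
        # (p+m) contributes +1 to every word; (p-m) contributes
        # (-1)^{number of m factors}
        sign = (-1) ** word.count("m")
        coeffs[word] = 1 + relative_sign * sign
    return coeffs

c = expand(3, +1)  # corresponds to |Phi^+_{(3)}>
# Only words with an even number of |Phi^-> factors survive, each
# with coefficient 2, i.e. amplitude 2/4 = 1/2 after normalization.
surviving = {w for w, v in c.items() if v != 0}
```

The four surviving words are exactly $\ket{\Phi^{+}}^{\otimes3}$ and the three permutations $\mathcal{P}[\ket{\Phi^{+}}\ket{\Phi^{-}}^{\otimes2}]$, each with amplitude $1/2$.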
\section{Comparison with other Bell measurement schemes}
We compare the efficiency of our scheme with those of other recent proposals using single photons \cite{sGrice2011,sZaidi2013,sEwert2014}. In order to make a fair comparison, we consider the success probability of the Bell discrimination in terms of the total number of photons ($\bar{n}$) used in the process. The photons contained in both the logical qubits to be measured as well as the ancillary systems used in the process are counted in $\bar{n}$. We also consider the increase of the average photon number by the squeezing operation when it is used.
(i) In our scheme, a total of $2N$ photons is used for the Bell measurement because each logical qubit is constructed with $N$ photons and no additional photons are necessary in the process. Thus, the success probability $P_s=1-2^{-\bar{n}/2}$ is achieved with $\bar{n}=2N$ photons in total.
(ii) In Grice's proposal~\cite{sGrice2011}, it was shown that the success probability $P_s=1-2^{-N_a}$ is reachable using $2^{N_a}-2$ ancillary photons. Here $N_a$ is a parameter to denote the ancillary entangled states $|\Gamma_j\rangle$ with $j=1,...,N_a-1$, each of which contains $2^{j}$ photons. The total number of photons used can thus be obtained by counting all photons in the two qubits and the ancillary states as $\bar{n}=2+2^{N_a}-2=2^{N_a}$, by which we can rewrite the success probability as $P_s=1-1/\bar{n}$.
(iii) In Zaidi and van Loock's proposal \cite{sZaidi2013}, each mode of the Bell states is squeezed by the squeezing operator $S(r)=\exp[-r(a^{\dagger 2}-a^{2})/2]$ with squeezing parameter $r$ to increase the success probability of Bell discrimination. Two indistinguishable Bell states (taken here to be $\ket{\phi^{\pm}}=2^{-1/2}(\ket{HH}\pm\ket{VV})$), passing through a beam splitter as $U_\mathrm{BS}\ket{\phi^{\pm}}$ and squeezed, can be written in the dual-rail representation as
\begin{equation}
\nonumber
\frac{i}{2}(\ket{2'0'0'0'}+\ket{0'0'0'2'}\pm\ket{0'2'0'0'}\pm\ket{0'0'2'0'}),
\end{equation}
where $\ket{n'}=S(r)\ket{n}$.
The average photon number $\bar{n}$ in all four modes can be obtained by
\begin{equation}
\nonumber
\bra{\phi^{\pm}}S_{1,2,3,4}(-r)(\hat{n}_1+\hat{n}_2+\hat{n}_3+\hat{n}_4)S_{1,2,3,4}(r)\ket{\phi^{\pm}}
\end{equation}
where $\hat{n}_i$ is the photon number operator in the $i$-th mode and $S_{1,2,3,4}(r)=S_1(r)S_2(r)S_3(r)S_4(r)$. Using the relation
\begin{equation}
\nonumber
S(-r)\hat{n}S(r)=(a^\dagger \cosh r- a \sinh r)(a \cosh r- a^\dagger \sinh r),
\end{equation}
we obtain the total average photon number in all four modes as
\begin{equation}
\nonumber
\bar{n}
=\langle \hat{n}_1+\hat{n}_2+\hat{n}_3+\hat{n}_4\rangle=2\cosh 2r+4\sinh^2 r.
\end{equation}
In this scheme, only specific values of $r$ improve the success probability of the Bell discrimination. The best value suggested in Ref.~\cite{sZaidi2013} is $r=0.6585$, for which the average photon number in the Bell state after squeezing is $\bar{n}=6.00029$; this yields a Bell-measurement success probability of $0.643$.
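The quoted value of $\bar{n}$ can be checked numerically; note also the identity $4\sinh^2 r = 2\cosh 2r - 2$, so that $\bar{n}=4\cosh 2r-2$. A minimal sketch:

```python
import math

def mean_photons(r):
    # Total mean photon number of the squeezed dual-rail Bell state
    # over all four modes: nbar = 2*cosh(2r) + 4*sinh(r)^2.
    return 2 * math.cosh(2 * r) + 4 * math.sinh(r) ** 2

nbar = mean_photons(0.6585)  # approximately 6.0003
```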
(iv) The scheme proposed by Ewert and van Loock \cite{sEwert2014} employs ancillary multi-photon entanglement that is similar to the one used in Grice's scheme~\cite{sGrice2011}
to increase the success probability. The total number of photons that go into the Bell measurement setup can be counted as $\bar{n}=4N_m+2$, where $N_m$ is the number of ancillary states, which yields the success probability $P_s=1-2^{-{N_m}-1}$. Thus, we can rewrite the success probability in terms of the total photon usage $\bar{n}$ as $P_s=1-2^{-{\bar{n}/4}-1/2}$.
We plot $P_s$ against $\bar{n}$ for all of the above-mentioned schemes in Fig.~2 of the main Letter, which shows that our scheme achieves the best performance.
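The three closed-form expressions above can be compared directly at equal photon budgets; the following sketch (illustrative only; the plotted comparison is Fig.~2 of the main Letter) tabulates $P_s(\bar{n})$ for each scheme:

```python
# Success probability as a function of total photon usage nbar
# for the schemes compared in Fig. 2 of the main Letter.
def ps_ours(nbar):            # this work: P_s = 1 - 2^{-nbar/2}
    return 1 - 2 ** (-nbar / 2)

def ps_grice(nbar):           # Grice: nbar = 2^{N_a}, P_s = 1 - 1/nbar
    return 1 - 1 / nbar

def ps_ewert(nbar):           # Ewert & van Loock: nbar = 4*N_m + 2
    return 1 - 2 ** (-nbar / 4 - 0.5)

for n in (2, 4, 8, 16):
    print(n, ps_ours(n), ps_grice(n), ps_ewert(n))
```

At $\bar{n}=2$ all three coincide at $P_s=1/2$; for larger $\bar{n}$ the exponential scaling $1-2^{-\bar{n}/2}$ dominates.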
\section{Error probabilities in lossy environment}
The evolution of optical qubits in a lossy environment can be described by solving the master equation,
\begin{equation}
\nonumber
\frac{d\rho}{dt}=\gamma(\hat{J}+\hat{L})\rho,
\end{equation}
where $\hat{J}\rho=\sum_i\hat{a}_i\rho\hat{a}_i^{\dag}$,
$\hat{L}\rho=-\sum_i(\hat{a}^{\dag}_i\hat{a}_i\rho+\rho\hat{a}^{\dag}_i\hat{a}_i)/2$,
$\hat{a}_i$ ($\hat{a}_i^{\dag}$) is the annihilation (creation)
operator for the $i$-th mode, and $\gamma$ is the decay constant.
Here the loss rate is given by $\eta=
1-e^{-\gamma t}$. Thus a logical qubit with arbitrary $N$ photons
$\ket{\psi^{(N)}}=a\ket{+}^{\otimes N}+b\ket{-}^{\otimes N}$ evolves to
a mixed state as
\begin{equation}
\begin{aligned}
\label{eq:master}
\nonumber
&\ket{\psi^{(N)}}\xrightarrow{\eta}(1-\eta)^N\ket{\psi^{(N)}}\bra{\psi^{(N)}}\\
&+\frac{1}{2}\sum^{N}_{k=1}\binom{N}{k}(1-\eta)^{N-k}\eta^{k}\\
&~~~~~~\times\bigg(\ket{\psi^{(N-k)}}\bra{\psi^{(N-k)}}+\ket{\psi_{-}^{(N-k)}}\bra{\psi_{-}^{(N-k)}}\bigg),
\end{aligned}
\end{equation}
where $\binom{N}{k}$ is the binomial coefficient. As we can see here, losses in a logical qubit occur with probability $P=1-(1-\eta)^{N}$. When $k$ photons are lost, the resulting state of the qubit is either $\ket{\psi^{(N-k)}}=a\ket{+}^{\otimes N-k}+b\ket{-}^{\otimes N-k}$ or $\ket{\psi_{-}^{(N-k)}}=a\ket{+}^{\otimes N-k}-b\ket{-}^{\otimes N-k}$. Here the former contains no logical error, while the latter contains a Pauli-$Z$ error. Therefore, if a photon is lost, the qubit experiences a Pauli-$Z$ error with probability $1/2$.
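The branch weights of the lossy-channel output must sum to one (the map is trace preserving), and the combined weight of all $k\geq 1$ branches reproduces $P=1-(1-\eta)^N$. A short check (illustrative sketch):

```python
from math import comb

# Branch weights of the lossy-channel output for an N-photon logical
# qubit: the no-loss branch (1-eta)^N, plus one weight per number k >= 1
# of lost photons (split equally between the Z-error and no-error states).
def weights(N, eta):
    w = [(1 - eta) ** N]
    for k in range(1, N + 1):
        w.append(comb(N, k) * (1 - eta) ** (N - k) * eta ** k)
    return w

N, eta = 4, 0.1
w = weights(N, eta)
total = sum(w)                   # should equal 1 (trace preservation)
p_loss = 1 - (1 - eta) ** N      # probability that at least one photon is lost
```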
Likewise, we can calculate the failure probability, $1-P_s$, of the Bell measurement in a lossy environment as
\begin{equation}
\begin{aligned}
\nonumber
P_f(\eta)=\sum^{N}_{k=0}\binom{N}{k}(1-\eta)^{N-k}\eta^{k}\left(\frac{1}{2}\right)^{N-k}=\left(\frac{1+\eta}{2}\right)^N,
\end{aligned}
\end{equation}
where $(1/2)^{N-k}$ is the failure rate when $k$ photons are lost.
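The closed form follows from the binomial theorem, since $\sum_k \binom{N}{k}\big(\tfrac{1-\eta}{2}\big)^{N-k}\eta^{k}=\big(\tfrac{1-\eta}{2}+\eta\big)^{N}$. A quick numerical verification (illustrative sketch):

```python
from math import comb

# Compare the binomial sum for the Bell-measurement failure probability
# with its closed form P_f = ((1+eta)/2)^N.
def pf_sum(N, eta):
    return sum(comb(N, k) * (1 - eta) ** (N - k) * eta ** k * 0.5 ** (N - k)
               for k in range(N + 1))

def pf_closed(N, eta):
    return ((1 + eta) / 2) ** N
```

At $\eta=0$ this reduces to the lossless failure probability $2^{-N}$.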
\section{Telecorrector circuit and noise thresholds for fault-tolerant quantum computing}
The telecorrector circuit is composed of CZ gates, Hadamard gates, $\ket{+}$ states, and $X$-basis measurements \cite{sLund2008}. For the lowest level of concatenation, the errors are modeled as follows: when a Hadamard or CZ gate fails, the teleported qubit is assumed to be depolarized, modeled by a random Pauli operation applied to the qubit, {\it i.e.}, $Z$ and $X$ errors occur independently, each with probability $1/2$. If a loss occurs in quantum memory or during gate operations, the qubit experiences a Pauli $Z$ error with probability $1/2$. For higher levels of concatenation, we use the same error models as described in Ref.~\cite{sDawson2006}. Based on these models, we performed a series of Monte Carlo simulations (using C++) to obtain the corrected error rates for a range of $\eta$ with different $N$. The resulting error rates at a lower level are used for the next level of error correction. If the error rates tend to zero in the limit of many levels of concatenation, fault-tolerant quantum computation is possible for those values of $\eta$ and $N$. In this way, the noise thresholds of our model are obtained, as presented in the main Letter.
\end{document}
\begin{document}
\title[Syzygies and singularities of tensor product surfaces]
{Syzygies and singularities of tensor product surfaces of bidegree $(2,1)$}
\author{Hal Schenck}
\thanks{Schenck supported by NSF 1068754, NSA H98230-11-1-0170}
\address{Department of Mathematics, University of Illinois,
Urbana, IL 61801}
\email{[email protected]}
\author{Alexandra Seceleanu}
\address{Department of Mathematics, University of Nebraska,
Lincoln, NE 68588}
\email{[email protected]}
\author{Javid Validashti}
\address{Department of Mathematics, University of Illinois,
Urbana, IL 61801}
\email{[email protected]}
\keywords{Tensor product surface, bihomogeneous ideal, Segre-Veronese map}
\begin{abstract}
Let $U \subseteq H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(2,1))$ be a
basepoint free four-dimensional vector space.
The sections corresponding to $U$ determine a regular map
$\phi_U: {\mathbb{P}^1 \times \mathbb{P}^1} \longrightarrow {\mathbb{P}}^3$. We study the associated
bigraded ideal $I_U \subseteq k[s,t;u,v]$ from the standpoint
of commutative algebra, proving that there are exactly six
numerical types of possible bigraded minimal free resolution. These
resolutions play a key role in determining the implicit
equation for $\phi_U({\mathbb{P}^1 \times \mathbb{P}^1})$, via work of Bus\'e-Jouanolou \cite{bj},
Bus\'e-Chardin \cite{bc}, Botbol \cite{bot} and
Botbol-Dickenstein-Dohm \cite{bdd}
on the approximation complex $\mathcal{Z}$. In four of the six cases
$I_U$ has a linear first syzygy; remarkably from this we obtain all
differentials in the minimal free resolution. In particular this
allows us to explicitly describe the implicit equation and
singular locus of the image.
\end{abstract}
\maketitle
\section{Introduction}
A central problem in geometric modeling is to find simple
(determinantal or close to it) equations for the image of
a curve or surface defined by a regular or rational map. For
surfaces the two most common situations are when
${\mathbb{P}^1 \times \mathbb{P}^1} \longrightarrow {\mathbb{P}}^3$ or ${\mathbb{P}}^2 \longrightarrow {\mathbb{P}}^3$. Surfaces
of the first type are called {\it tensor product surfaces} and
surfaces of the latter type are called {\it triangular surfaces}.
In this paper we study tensor product surfaces of bidegree
$(2,1)$ in ${\mathbb{P}}^3$. The study of such surfaces goes back to the
last century--see, for example, works of Edge \cite{edge} and
Salmon \cite{salmon}.
Let $R = k[s,t,u,v]$ be a bigraded ring over an algebraically closed
field $k$, with $s,t$ of degree $(1,0)$ and $u,v$ of degree $(0,1)$.
Let $R_{m,n}$ denote the graded piece in bidegree $(m,n)$. A regular map
${\mathbb{P}^1 \times \mathbb{P}^1} \longrightarrow {\mathbb{P}}^3$ is defined by four polynomials
\[
U= {\mathrm{Span}} \{ p_{0}, p_{1}, p_{2}, p_{3} \} \subseteq R_{m,n}
\]
with no common zeros on ${\mathbb{P}^1 \times \mathbb{P}^1}$. We will study the case $(m,n) = (2,1)$,
so
\[
U \subseteq H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(2,1)) = V = {\mathrm{Span}} \{ s^2u,stu,t^2u,s^2v,stv,t^2v \}.
\]
Let $I_U=\langle p_{0}, p_{1}, p_{2}, p_{3}\rangle \subset R$,
$\phi_U$ be the associated map ${\mathbb{P}^1 \times \mathbb{P}^1} \longrightarrow {\mathbb{P}}^3$
and
\[
X_U = \phi_U({\mathbb{P}^1 \times \mathbb{P}^1}) \subseteq {\mathbb{P}}^3.
\]
We assume that $U$ is basepoint free, which means that
\[
\sqrt{I_U} = \langle s,t\rangle \cap \langle u,v \rangle.
\]
We determine {\em all} possible numerical types of bigraded minimal free resolution for
$I_U$, as well as the embedded associated primes of $I_U$.
Using approximation complexes, we relate the algebraic
properties of $I_U$ to the geometry of $X_U$.
The next example illustrates our results.
\begin{exm}\label{ex1}
Suppose $U$ is basepoint free and $I_U$ has a unique
first syzygy of bidegree $(0,1)$. Then the primary
decomposition of $I_U$ is given by Corollary~\ref{T5PD},
and the differentials in the bigraded minimal free resolution
are given by Proposition~\ref{LS3}.
For example, if $U = {\mathrm{Span}} \{s^2u, s^2v, t^2u, t^2v+stv \}$,
then by Corollary~\ref{T5PD} and Theorem~\ref{T5exact}, the
embedded primes of $I_U$ are $\langle s,t,u \rangle$ and
$\langle s,t,v \rangle$,
and by Proposition~\ref{LS3} the bigraded Betti numbers of $I_U$ are:
\[
0 \leftarrow I_U \leftarrow R(-2,-1)^4 \leftarrow
R(-2,-2) \oplus R(-3,-2)^2 \oplus R(-4,-1)^2
\leftarrow R(-4,-2)^2
\leftarrow 0
\]
Having the differentials in the free resolution allows us to
use the method of approximation complexes to determine the
implicit equation: it follows from Theorem~\ref{ImX} that
the image of $\phi_U$ is the hypersurface
\[
X_U = {\bf V }(x_0x_1^2x_2-x_1^2x_2^2+2x_0x_1x_2x_3-x_0^2x_3^2).
\]
Theorem~\ref{SingX} shows that the reduced codimension one singular locus of $X_U$ is ${\bf V }(x_0,x_2) \cup {\bf V }(x_1,x_3) \cup {\bf V }(x_0,x_1)$.
\begin{figure}
\caption{\textsf{$X_U$ on the open set $U_{x_0}$}}
\label{fig:surfaceExample}
\end{figure}
The key feature of this example is that there is a linear syzygy of
bidegree $(0,1)$:
\[
v \cdot (s^2u) -u \cdot(s^2v) =0.
\]
In Lemmas~\ref{LS1} and \ref{LS2-10} we show that with an appropriate
choice of generators for $I_U$, any bigraded linear first syzygy has the form above.
Existence of a bidegree $(0,1)$ syzygy implies that the
pullbacks to ${\mathbb{P}^1 \times \mathbb{P}^1}$ of the two linear forms defining
${\mathbb{P}}(U)$
share a factor. Theorem~\ref{GPsyz} connects this to work of \cite{gl}.
\end{exm}
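The assertions of the example are easy to check by direct computation; the following sketch (using \texttt{sympy}, purely as an independent verification, not part of the proofs) confirms the bidegree $(0,1)$ syzygy and that the parametrization satisfies the displayed implicit equation:

```python
import sympy as sp

s, t, u, v = sp.symbols("s t u v")
x0, x1, x2, x3 = sp.symbols("x0 x1 x2 x3")

# Generators of I_U for U = Span{s^2 u, s^2 v, t^2 u, t^2 v + s t v}
p = [s**2 * u, s**2 * v, t**2 * u, t**2 * v + s * t * v]

# The linear first syzygy of bidegree (0,1): v*p0 - u*p1 = 0
syz = sp.expand(v * p[0] - u * p[1])

# The implicit equation of X_U from the example
F = x0 * x1**2 * x2 - x1**2 * x2**2 + 2 * x0 * x1 * x2 * x3 - x0**2 * x3**2

# Pulling F back along phi_U gives identically zero
pullback = sp.expand(F.subs({x0: p[0], x1: p[1], x2: p[2], x3: p[3]}))
```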
\subsection{Previous work on the $(2,1)$ case}
For surfaces in ${\mathbb{P}}^3$ of bidegree $(2,1)$, in addition to the
classical work of Edge, Salmon and others, more recently
Degan \cite{d} studied such surfaces with basepoints and
Zube \cite{z1}, \cite{z2} describes the possibilities for
the singular locus. In \cite{egl}, Elkadi-Galligo-L\^{e}
give a geometric description of the image
and singular locus for a generic $U$ and in \cite{gl},
Galligo-L\^{e} follow up with an analysis for the
nongeneric case. A central part of their analysis is
the geometry of a certain dual scroll which we connect
to syzygies in \S 8.
Cox, Dickenstein and Schenck study the bigraded commutative algebra
of a three dimensional basepoint free subspace
$W \subseteq R_{2,1}$ in \cite{cds}, showing that there are two
numerical types of
possible bigraded minimal free resolution of $I_W$,
determined by how ${\mathbb{P}}(W) \subseteq {\mathbb{P}}(R_{2,1}) = {\mathbb{P}}^5$
meets the image $\Sigma_{2,1}$ of the Segre map
${\mathbb{P}}^2 \times {\mathbb{P}}^1 \stackrel{\sigma_{2,1}}{\longrightarrow} {\mathbb{P}}^5.$
If $W$ is basepoint free, then there are two possibilities:
either ${\mathbb{P}}(W) \cap \Sigma_{2,1}$ is a finite set of points, or a
smooth conic. The current paper extends the work of \cite{cds} to
the more complicated setting of a four dimensional space of sections.
A key difference is that for a basepoint free subspace $W$ of dimension
three, there can never be a linear syzygy on $I_W$. As illustrated
in the example above, this is not true for the four dimensional
case. It turns out that the existence of a linear syzygy provides a
very powerful tool for analyzing both the bigraded commutative
algebra of $I_U$, as well as for determining the implicit
equation and singular locus of
$X_U$. In studying the bigraded commutative algebra of $I_U$, we employ
a wide range of tools:
\begin{itemize}
\item Approximation complexes \cite{bot}, \cite{bdd}, \cite{bj}, \cite{bc}, \cite{c}.
\item Bigraded generic initial ideals \cite{ACD}.
\item Geometry of the Segre-Veronese variety \cite{h}.
\item Fitting ideals and Mapping cones \cite{ebig}.
\item Connection between associated primes and Ext modules \cite{ehv}.
\item Buchsbaum-Eisenbud exactness criterion \cite{be}.
\end{itemize}
\subsection{Approximation complexes}
The key tool in connecting the syzygies of $I_U$ to the implicit equation
for $X_U$ is an approximation complex, introduced by Herzog-Simis-Vasconcelos in
\cite{hsv1},\cite{hsv2}. We give more details of the
construction in \S 7. The basic idea is as follows:
let $R_I = R \oplus I_U \oplus I_U^2 \oplus \cdots$. Then the graph
$\Gamma$ of the map $\phi_U$ is equal to $BiProj(R_I)$ and the embedding
of $\Gamma$ in $({\mathbb{P}^1 \times \mathbb{P}^1}) \times {\mathbb{P}}(U)$ corresponds to the ring map
$S = R[x_0,\ldots,x_3] \stackrel{s}{\rightarrow} R_I$
given by $x_i \mapsto p_i$. Let $\beta$ denote the kernel of $s$, so $\beta_1$
consists of the syzygies of $I_U$ and $S_I = Sym_R(I_U) = S/\beta_1$.
Then
\[
\Gamma \subseteq BiProj(S_I) \subseteq BiProj(S).
\]
The works \cite{bdd}, \cite{bj}, \cite{bc}, \cite{bot} show
that if $U$ is basepoint free, then the implicit equation for $X_U$
may be extracted from the differentials of a complex $\mathcal{Z}$
associated to the intermediate object $S_I$ and in particular the determinant of
the complex is a power of the implicit equation.
In bidegree $(2,1)$, a result of Botbol \cite{bot} shows that
the implicit equation may be obtained from a $4 \times 4$ minor
of the first differential $d_1$ of $\mathcal{Z}$; our work yields an explicit description of the relevant minor.
\subsection{Main results}
The following two tables describe our classification.
Type refers to the graded Betti numbers of the bigraded minimal free
resolution for $I_U$: we prove there are six numerical types possible.
Proposition~\ref{PD2} shows that the only possible
embedded primes of $I_U$ are ${\mathfrak{m}} = \langle s,t,u,v \rangle$
or $P_i = \langle l_i, s, t \rangle$, where $l_i$ is a linear
form of bidegree $(0,1)$. While Type 5a and 5b have the same bigraded Betti
numbers, Proposition~\ref{LS3} and Corollary~\ref{T5PD} show that
both the embedded primes and the
differentials in the minimal resolution differ.
We also connect our
classification to the reduced, codimension one singular
locus of $X_U$. In the table below $T$ denotes a twisted
cubic curve, $C$ a smooth plane conic
and $L_i$ a line.
\begin{table}[ht]
\begin{center}
\begin{supertabular}{|c|c|c|c|c|}
\hline Type & Lin. Syz. & Emb. Pri. & Sing. Loc. & Example \\
\hline 1 & none & ${\mathfrak{m}}$ & $T$ &$\!s^2u\!+\!stv,t^2u,s^2v\!+\!stu,t^2v\!+\!stv\!$\\
\hline 2 & none & ${\mathfrak{m}}, P_1$ & $C \cup L_1$ & $s^2u,t^2u,s^2v+stu,t^2v+stv$ \\
\hline 3 & 1 type $(1,0)$ & ${\mathfrak{m}}$ & $L_1$ & $s^2u+stv,t^2u,s^2v,t^2v+stu$ \\
\hline 4 & 1 type $(1,0)$ & ${\mathfrak{m}}, P_1$ & $L_1$ & $stv,t^2v,s^2v-t^2u,s^2u$ \\
\hline 5a & 1 type $(0,1)$ & $P_1, P_2$ & $L_1 \cup L_2 \cup L_3$ & $s^2u, s^2v, t^2u, t^2v+stv$\\
\hline 5b & 1 type $(0,1)$ & $P_1$ & $L_1 \cup L_2$ & $s^2u, s^2v, t^2u, t^2v+stu$ \\
\hline 6 & 2 type $(0,1)$ & none & $\emptyset$ & $s^2u,s^2v,t^2u,t^2v$ \\
\hline
\end{supertabular}
\end{center}
\caption{\textsf{}}
\label{T1}
\end{table}
\pagebreak
The next table gives the possible numerical types for the
bigraded minimal free resolutions, where we write $(i,j)$ for the
rank one free module $R(i,j)$. We prove more: for Types 3, 4, 5 and 6,
we determine all the differentials in the minimal free resolution.
One striking feature of Table 1 is that if $I_U$ has a linear first syzygy
(i.e. of bidegree $(0,1)$ or $(1,0)$), then the codimension one singular locus of
$X_U$ is either empty or a union of lines. We prove this in Theorem~\ref{SingX}.
\begin{table}[ht]
\begin{center}
\begin{supertabular}{|c|c|}
\hline Type & Bigraded Minimal Free Resolution of $I_U$ for $U$ basepoint free \\
\hline 1 & $0 \leftarrow I_U \leftarrow (-2,-1)^4 \longleftarrow \begin{array}{c}
(-2,-4)\\
\oplus \\
(-3,-2)^4\\
\oplus \\
(-4,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-3,-4)^2\\
\oplus \\
(-4,-2)^3\\
\end{array} \longleftarrow
(-4,-4)\leftarrow 0$ \\
\hline 2 & $0 \leftarrow I_U \leftarrow (-2,-1)^4 \longleftarrow \begin{array}{c}
(-2,-3)\\
\oplus \\
(-3,-2)^4\\
\oplus \\
(-4,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-3,-3)^2\\
\oplus \\
(-4,-2)^3\\
\end{array} \longleftarrow
(-4,-3) \leftarrow 0 $\\
\hline 3 & $0 \leftarrow I_U \leftarrow (-2,-1)^4 \longleftarrow \begin{array}{c}
(-2,-4)\\
\oplus \\
(-3,-1)\\
\oplus \\
(-3,-2)^2\\
\oplus \\
(-3,-3)\\
\oplus \\
(-4,-2)\\
\oplus \\
(-5,-1)\\
\end{array} \longleftarrow
\begin{array}{c}
(-3,-4)^2\\
\oplus \\
(-4,-3)^2\\
\oplus \\
(-5,-2)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-4,-4)\\
\oplus \\
(-5,-3)\\
\end{array} \leftarrow 0 $\\
\hline 4 & $0 \leftarrow I_U \leftarrow (-2,-1)^4 \longleftarrow \begin{array}{c}
(-2,-3)\\
\oplus \\
(-3,-1)\\
\oplus \\
(-3,-2)^2\\
\oplus \\
(-4,-2)\\
\oplus \\
(-5,-1)\\
\end{array} \longleftarrow
\begin{array}{c}
(-3,-3)\\
\oplus \\
(-4,-3)\\
\oplus \\
(-5,-2)^2\\
\end{array} \longleftarrow
(-5,-3) \leftarrow 0$\\
\hline 5 & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!$0 \leftarrow I_U \leftarrow (-2,-1)^4 \longleftarrow \begin{array}{c}
(-2,-2)\\
\oplus\\
(-3,-2)^2 \\
\oplus\\
(-4,-1)^2 \\
\end{array}
\longleftarrow (-4,-2)^2
\leftarrow 0$ \\
\hline 6 & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!$0 \leftarrow I_U \leftarrow (-2,-1)^4 \longleftarrow \begin{array}{c}
(-2,-2)^2\\
\oplus \\
(-4,-1)^2\\
\end{array} \longleftarrow (-4,-2)\leftarrow 0$ \\
\hline
\end{supertabular}
\end{center}
\caption{\textsf{}}
\label{T2}
\end{table}
\vskip .1in
\section{Geometry and the Segre-Veronese variety}
Consider the composite maps
\begin{equation}\label{e1}
\xymatrix@C=8pt{
{{\mathbb{P}^1 \times \mathbb{P}^1}} \ar[r] \ar[drr]^{\phi_U} & \mathbb{P}(H^0({\mathcal{O}_{\mathbb{P}^1}}(2)))
\times \mathbb{P}(H^0({\mathcal{O}_{\mathbb{P}^1}}(1))) \ar[r] & \mathbb{P}(H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(2,1)))\ar[d]^{\pi} \\
& & \mathbb{P}(U)}
\end{equation}
The first horizontal map is $\nu_2 \times id$, where $\nu_2$ is the
2-uple Veronese embedding and the second horizontal map is
the Segre map $\sigma_{2,1}: \mathbb{P}^2 \times \mathbb{P}^1 \rightarrow
\mathbb{P}^5$. The image of $\sigma_{2,1}$ is a smooth
irreducible nondegenerate cubic threefold $\Sigma_{2,1}$.
Any ${\mathbb{P}}^2 \subseteq \Sigma_{2,1}$ is a fiber over a point
of the ${\mathbb{P}}^1$ factor and any ${\mathbb{P}}^1 \subseteq \Sigma_{2,1}$ is
contained in the image of a fiber over ${\mathbb{P}}^2$ or
${\mathbb{P}}^1$. For this see Chapter 2 of \cite{harris}, which also points
out that the Segre and Veronese maps have coordinate free descriptions
\[
\begin{array}{ccc}
\mathbb{P}(A) &\stackrel{\nu_d}{\longrightarrow}& \mathbb{P}(Sym^d A)\\
\mathbb{P}(A) \times \mathbb{P}(B) &\stackrel{\sigma}{\longrightarrow}& \mathbb{P}(A \otimes B)
\end{array}
\]
By dualizing we may interpret the image of $\nu_d$ as the
variety of $d^{th}$ powers of linear forms on $A$
and the image of $\sigma$ as the variety of products of
linear forms. The composition $\tau = \sigma_{2,1}\circ (\nu_2 \times id)$
is a Segre-Veronese map, with image consisting of polynomials which factor as
$l_1(s,t)^2\cdot l_2(u,v)$. Note that $\Sigma_{2,1}$ is also
the locus of polynomials in $R_{2,1}$ which factor as $q(s,t) \cdot l(u,v)$, with
$q \in R_{2,0}$ and $l \in R_{0,1}$. Since $q \in R_{2,0}$ factors as
$l_1 \cdot l_2$, this means $\Sigma_{2,1}$ is the locus of polynomials
in $R_{2,1}$ which factor completely as products of linear forms.
As in the introduction,
\[
U \subseteq H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(2,1)) = V = {\mathrm{Span}} \{ s^2u,stu,t^2u,s^2v,stv,t^2v \}.
\]
The ideal of $\Sigma_{2,1}$ is defined by the two by two minors of
\[
\left[\begin{matrix} x_0 & x_1 & x_2 \\ x_3 & x_4 &x_5
\end{matrix}\right].
\]
It will also be useful to understand the intersection of ${\mathbb{P}}(U)$
with the locus of polynomials in $V$ which factor as the product of a form
$q =a_0su+a_1sv+a_2tu+a_3tv$ of bidegree
$(1,1)$ and $l = b_0s+b_1t$ of bidegree $(1,0)$. This is the
image of the map
\[
{\mathbb{P}}(H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(1,1))) \times {\mathbb{P}}(H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(1,0))) = {\mathbb{P}}^3 \times {\mathbb{P}}^1 \longrightarrow {\mathbb{P}}^5,
\]
$(a_0:a_1:a_2:a_3) \times (b_0:b_1) \mapsto
(a_0b_0:a_0b_1+a_2b_0:a_2b_1:a_1b_0:a_1b_1+a_3b_0:a_3b_1)$, which
is a quartic hypersurface
\[
Q = {\bf V }({x}_{2}^{2} {x}_{3}^{2}-{x}_{1} {x}_{2} {x}_{3} {x}_{4}+{x}_{0} {x}_{2}
{x}_{4}^{2}+{x}_{1}^{2} {x}_{3} {x}_{5}-2 {x}_{0} {x}_{2} {x}_{3} {x}_{5}-{x}_{0} {x}_{1} {x}_{4} {x}_{5}+{x}_{0}^{2} {x}_{5}^{2}).
\]
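One can verify directly that the parametrized coordinates of $q\cdot l$ satisfy this quartic; the following sketch (using \texttt{sympy} as an independent symbolic check) expands $Q$ on the image of the map:

```python
import sympy as sp

a0, a1, a2, a3, b0, b1 = sp.symbols("a0 a1 a2 a3 b0 b1")

# Coordinates of q*l in the basis (s^2u, stu, t^2u, s^2v, stv, t^2v),
# for q = a0 su + a1 sv + a2 tu + a3 tv and l = b0 s + b1 t
x0 = a0 * b0
x1 = a0 * b1 + a2 * b0
x2 = a2 * b1
x3 = a1 * b0
x4 = a1 * b1 + a3 * b0
x5 = a3 * b1

# The quartic Q vanishes identically on these coordinates
Q = (x2**2 * x3**2 - x1 * x2 * x3 * x4 + x0 * x2 * x4**2
     + x1**2 * x3 * x5 - 2 * x0 * x2 * x3 * x5
     - x0 * x1 * x4 * x5 + x0**2 * x5**2)
residue = sp.expand(Q)
```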
\noindent As Table 1 shows, the key to classifying the minimal
free resolutions is understanding the {\em linear} syzygies.
In \S 3, we show that if $I_U$ has a first syzygy of bidegree $(0,1)$, then
after a change of coordinates,
$I_U = \langle pu, pv, p_2,p_3 \rangle$ and if $I_U$
has a first syzygy of bidegree $(1,0)$, then
$I_U = \langle ps, pt, p_2,p_3 \rangle$.
\begin{prop}\label{LinSyzGeom}
If $U$ is basepoint free, then the ideal $I_U$
\begin{enumerate}
\item has a unique linear syzygy of bidegree $(0,1)$ iff
$F \subseteq {\mathbb{P}}(U) \cap \Sigma_{2,1}$, where $F$ is a ${\mathbb{P}}^1$ fiber of
$\Sigma_{2,1}$.
\item has a pair of linear syzygies of bidegree $(0,1)$
iff ${\mathbb{P}}(U) \cap \Sigma_{2,1} = \Sigma_{1,1}$.
\item has a unique linear syzygy of bidegree $(1,0)$
iff $F \subseteq {\mathbb{P}}(U) \cap Q$, where $F$ is a ${\mathbb{P}}^1$ fiber of
$Q$.
\end{enumerate}
\end{prop}
\begin{proof}
The ideal $I_U$ has a unique linear syzygy of bidegree $(0,1)$ iff
$qu,qv \in I_U$, with $q \in R_{2,0}$ iff $q\cdot l(u,v) \in I_U$ for all
$l(u,v) \in R_{0,1}$ iff ${\mathbb{P}}(U) \cap \Sigma_{2,1}$ contains the
${\mathbb{P}}^1$ fiber over the point $q \in {\mathbb{P}}(R_{2,0})$.
For the second item, the reasoning above implies that
${\mathbb{P}}(U) \cap \Sigma_{2,1}$ contains two ${\mathbb{P}}^1$ fibers, over
points $q_1,q_2 \in {\mathbb{P}}(R_{2,0})$. But then $I_U$ also contains
the line in ${\mathbb{P}}(R_{2,0})$ connecting $q_1$ and $q_2$, as well
as the ${\mathbb{P}}^1$ lying over any point on the line, yielding a
${\mathbb{P}}^1 \times {\mathbb{P}}^1$.
For the third part, a linear syzygy of bidegree $(1,0)$ means
that $qs,qt \in I_U$, with $q \in R_{1,1}$ iff $q\cdot l(s,t) \in I_U$ for all
$l(s,t) \in R_{1,0}$ iff ${\mathbb{P}}(U) \cap Q$ contains the
${\mathbb{P}}^1$ fiber over the point $q \in {\mathbb{P}}(R_{1,1})$.
\end{proof}
\noindent In Theorem~\ref{AllLins}, we show that Proposition~\ref{LinSyzGeom}
describes all possible linear syzygies.
\section{First syzygies of bidegree $(0,1)$}
\noindent Our main result in this section is a complete description
of the minimal free resolution when $I_U$ has a first
syzygy of bidegree $(0,1)$. As a consequence, if $I_U$ has a unique first
syzygy of bidegree $(0,1)$, then the minimal free resolution has
numerical Type 5 and if there are two linear first syzygies of
bidegree $(0,1)$, the minimal free resolution has numerical Type 6.
We begin with a simple observation:
\begin{lem}\label{LS1}
If $I_U$ has a linear first syzygy of bidegree $(0,1)$, then
\[
I_U = \langle pu, pv, p_2,p_3 \rangle,
\]
where $p$ is homogeneous of bidegree $(2,0)$.
\end{lem}
\begin{proof}
Rewrite the syzygy
\[
\sum\limits_{i=0}^3 (a_iu+b_iv)p_i = 0 = u \cdot \sum\limits_{i=0}^3 a_ip_i + v\cdot \sum\limits_{i=0}^3 b_ip_i,
\]
and let $g_0 = \sum\limits_{i=0}^3 a_ip_i$, $g_1 =\sum\limits_{i=0}^3 b_ip_i$.
The relation above implies that $(g_0,g_1)$ is a syzygy on $(u,v)$.
Since the syzygy module of $(u,v)$ is generated by the Koszul syzygy,
this means
\[
\left[ \!
\begin{array}{c}
g_0\\
g_1
\end{array}\! \right] = p \cdot \left[ \!
\begin{array}{c}
-v\\
u
\end{array}\! \right]
\]
\end{proof}
A similar argument applies if $I_U$ has a first syzygy of
degree $(1,0)$. Lemma~\ref{LS1} has surprisingly strong consequences:
\begin{prop}\label{LS3}
If $U$ is basepoint free and $I_U$ has a unique linear first syzygy of bidegree $(0,1)$, then
there is a complex of free $R$ modules
\[
\mathcal{F}_1 \mbox{ : }0 \longrightarrow F_3 \stackrel{\phi_3}{\longrightarrow} F_2 \stackrel{\phi_2}{\longrightarrow} F_1\stackrel{\phi_1}{\longrightarrow} I_U \longrightarrow 0,
\]
where $\phi_1 = \left[ \!\begin{array}{cccc}
p_0 & p_1 & p_2 & p_3 \end{array}\! \right]$, with ranks and shifts matching Type 5 in Table 2. Explicit formulas
appear in the proof below. The differentials $\phi_i$ depend on whether
$p=L_1(s,t)L_2(s,t)$ of Lemma~\ref{LS1} has $L_1=L_2$.
\end{prop}
\begin{proof}
Since $I_U$ has a syzygy of bidegree $(0,1)$, by Lemma~\ref{LS1}, $I_U = \langle pu, pv, p_2,p_3 \rangle$.\newline
Case 1: Suppose $p = l(s,t)^2$. Then
after a change of coordinates, $p = s^2$, so
$p_0 = s^2u$ and $p_1= s^2v$. Eliminating terms from $p_2$ and $p_3$,
we may assume
\[
\begin{array}{ccc}
p_2 &=& stl_1(u,v)+t^2l_2(u,v)=t(sl_1(u,v)+tl_2(u,v))\\
p_3 &=& stl_3(u,v)+t^2l_4(u,v)=t(sl_3(u,v)+tl_4(u,v)).
\end{array}
\]
Let $l_i(u,v) = a_iu+b_iv$ and define
\[
A(u,v) = \left[ \!
\begin{array}{cc}
l_1 & l_2\\
l_3 & l_4
\end{array}\! \right].
\]
Note that det $A(u,v) = q(u,v) \ne 0$. The rows cannot be dependent,
since $U$ spans a four dimensional subspace. If the columns are
dependent, then $\{p_2,p_3 \} = \{tl_1(s+kt), tl_2(s+kt)\}$,
yielding another syzygy of bidegree $(0,1)$, contradicting our hypothesis.
In the proof of Corollary~\ref{NoMix}, we show the hypothesis that
$U$ is basepoint free implies that $A(u,v)$ is a $1$-generic matrix,
which means that $A(u,v)$ cannot be made to have a zero entry using
row and column operations. We obtain a first syzygy
of bidegree $(2,0)$ as follows:
\[
\begin{array}{ccl}
s^2p_2 &=&s^3tl_1 + s^2t^2l_2\\
&=& (a_1st+a_2t^2)s^2u + (b_1st+b_2t^2)s^2v\\
&=& (a_1st+a_2t^2)p_0 + (b_1st+b_2t^2)p_1\\
\end{array}
\]
A similar relation holds for $stp_3$, yielding two first syzygies
of bidegree $(2,0)$. We next consider first syzygies of bidegree $(1,1)$. There is an obvious syzygy on $p_2,p_3$ given by
\[
\begin{array}{ccccc}
(sl_1(u,v)+tl_2(u,v))p_3 & = & (sl_3(u,v)+tl_4(u,v))p_2
\end{array}
\]
Since ${\mathrm{det}}\, A(u,v) = q(u,v) \ne 0$, from
\[
\begin{array}{ccc}
t^2q &=& l_3p_2-l_1p_3\\
stq &=& l_4p_2-l_2p_3
\end{array}
\]
and the fact that $q(u,v) = L_1(u,v)L_2(u,v)$ with
$L_i(u,v) = \alpha_iu+\beta_iv$, we obtain a pair of relations
of bidegree $(1,1)$:
\[
\begin{array}{ccccc}
sl_4p_2-sl_2p_3 &= & s^2tL_1L_2 &=& (\alpha_1tL_2)s^2u+(\beta_1tL_2)s^2v.
\end{array}
\]
Case 2: $p = l(s,t) \cdot l'(s,t)$ with $l,l'$ independent
linear forms. Then after a change of coordinates, $p = st$, so
$p_0 = stu$ and $p_1= stv$. Eliminating terms from $p_2$ and $p_3$,
we may assume
\[
\begin{array}{ccc}
p_2 &=& s^2l_1(u,v)+t^2l_2(u,v)\\
p_3 &=& s^2l_3(u,v)+t^2l_4(u,v).
\end{array}
\]
Let $l_i(u,v) = a_iu+b_iv$. We obtain a first syzygy
of bidegree $(2,0)$ as follows:
\[
\begin{array}{ccl}
stp_2 &=& s^3tl_1 + st^3l_2\\
&=& s^2(stl_1) + t^2(stl_2)\\
&=& (a_1s^2+a_2t^2)stu + (b_1s^2+b_2t^2)stv
\end{array}
\]
A similar relation holds for $stp_3$, yielding two first syzygies
of bidegree $(2,0)$. We next consider first syzygies of bidegree $(1,1)$.
Since $ q(u,v) \ne 0$, from
\[
\begin{array}{ccc}
t^2q &=& l_3p_2-l_1p_3\\
s^2q &=& l_4p_2-l_2p_3
\end{array}
\]
and the fact that $q(u,v) = L_1(u,v)L_2(u,v)$ with
$L_i(u,v) = \alpha_iu+\beta_iv$, we have relations
\[
\begin{array}{ccccc}
sl_3p_2-sl_1p_3 &= & st^2L_1L_2 &=& (\alpha_1tL_2)stu+(\beta_1tL_2)stv \\
tl_4p_2-tl_2p_3 &= & ts^2L_1L_2 &=& (\alpha_1sL_2)stu+(\beta_1sL_2)stv,
\end{array}
\]
which yield a pair of first syzygies of bidegree $(1,1)$.
Putting everything together, we now have candidates for
the differential $\phi_2$ in both cases. Computations
exactly like those above yield similar candidates for
$\phi_3$ in the two cases. In Case 1, we have
\[
\phi_2 = \left[ \!
\begin{array}{ccccc}
v & \alpha_1tL_2 & 0& a_1st+a_2t^2 & a_3st+a_4t^2 \\
-u &\beta_1tL_2 & 0 & b_1st+b_2t^2 & b_3st+b_4t^2 \\
0 & -sl_4 & sl_3+tl_4 & -s^2 &0 \\
0 & sl_2 & -sl_1-tl_2 &0 & -s^2
\end{array}\! \right], \mbox{ }
\phi_3 = \left[ \!
\begin{array}{cc}
\gamma & \delta \\
s & t\\
0 & s\\
-l_4 & -l_3\\
l_2 & l_1
\end{array}\! \right],
\]
where
\[
\begin{array}{lll}
\delta &= & -\alpha_1\beta_2 t^2 + (a_1st+a_2t^2)b_3 - (a_3st+a_4t^2)b_1\\
\gamma & = &-\alpha_1\beta_2st + (a_1st+a_2t^2)b_4 -(a_3st+a_4t^2)b_2
\end{array}
\]
For $I_U$ as in Case 2, let
\[
\phi_2 = \left[ \!
\begin{array}{ccccc}
v & \alpha_1tL_2 & \alpha_1sL_2 & a_1s^2+a_2t^2 & a_3s^2+a_4t^2 \\
-u &\beta_1tL_2 & \beta_1sL_2 & b_1s^2+b_2t^2 & b_3s^2+b_4t^2 \\
0 & -sl_3 & -tl_4 & -st &0 \\
0 & sl_1 & tl_2 &0 & -st
\end{array}\! \right], \mbox{ }
\phi_3 = \left[ \!
\begin{array}{cc}
\gamma & \delta \\
0 & t\\
s & 0\\
-l_4 & -l_3\\
l_2 & l_1
\end{array}\! \right],
\]
where
\[
\begin{array}{ccc}
\gamma & = & (-\alpha_1\beta_2 + a_1b_4 - a_3b_2)s^2 + (a_2b_4 - a_4b_2)t^2\\
\delta & = & (a_1b_3 -a_3b_1)s^2 + (-\alpha_1\beta_2+a_2b_3 -a_4b_1)t^2
\end{array}
\]
We have already shown that ${\mathrm{im}}(\phi_2) \subseteq \ker(\phi_1)$,
and an easy check shows that ${\mathrm{im}}(\phi_3) \subseteq \ker(\phi_2)$, yielding
a complex of free modules of numerical Type 5.
\end{proof}
\noindent To prove that the complex above is actually exact, we use the
following result of Buchsbaum and Eisenbud \cite{be}:
a complex of free modules
\[\mathcal{F}: \cdots \stackrel{\phi_{i+1}}{\longrightarrow} F_i \stackrel{\phi_i}{\longrightarrow} F_{i-1}\stackrel{\phi_{i-1}}{\longrightarrow}\cdots F_1\stackrel{\phi_1}{\longrightarrow}F_0,
\]
is exact iff for all $i \geq 1$:
\begin{enumerate}
\item ${\mathrm{rank}}(\phi_{i+1}) + {\mathrm{rank}}(\phi_i) = {\mathrm{rank}}(F_i)$.
\item ${\mathrm{depth}}(I_{{\mathrm{rank}}(\phi_i)}(\phi_i)) \ge i$.
\end{enumerate}
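As an illustration (not part of the argument), the two conditions can be checked symbolically on the smallest example, the Koszul complex of $(s,t)$; the sketch below uses sympy:

```python
import sympy as sp

s, t = sp.symbols('s t')

# Koszul complex of (s, t): 0 -> R --phi2--> R^2 --phi1--> R
phi1 = sp.Matrix([[s, t]])
phi2 = sp.Matrix([[-t], [s]])

# it is a complex: phi1 . phi2 = 0
assert (phi1 * phi2) == sp.zeros(1, 1)

# condition (1): rank(phi2) + rank(phi1) = rank(F_1) = 2
assert phi2.rank() + phi1.rank() == 2
```

Here both ideals of maximal minors equal $\langle s,t \rangle$, which has depth $2$, so condition (2) also holds and the complex is exact.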
\begin{thm}\label{T5exact}
The complexes appearing in Proposition~\ref{LS3} are exact.
\end{thm}
\begin{proof}
Put $F_0=R$. An easy check shows that ${\mathrm{rank}}(\phi_{i+1}) + {\mathrm{rank}}(\phi_i) = {\mathrm{rank}}(F_i)$, so what remains is to show that ${\mathrm{depth}}(I_2(\phi_3)) \ge 3$ and ${\mathrm{depth}}(I_3(\phi_2)) \ge 2$. The fact that $s \nmid \gamma$ will be useful: to see
this, note that in both Case 1 and Case 2, $s \mid \gamma$ iff $a_2b_4-b_2a_4 =0$,
which implies $l_2$ and $l_4$ differ only by a scalar, contradicting the
assumption that $U$ is basepoint free. \newline
Case 1: We have $us^4, vs^4 \in I_3(\phi_2)$. Consider the minor
$$\lambda=t^2L_2(sl_1+tl_2)\left( (\alpha_1b_1-\beta_1a_1)s + (\alpha_1b_2-\beta_1a_2)t \right)$$ obtained from the submatrix
$$
\left[
\begin{array}{ccc}
\alpha_1tL_2 & 0 & a_1st+a_2t^2 \\
\beta_1tL_2 &0 & b_1st+b_2t^2 \\
sl_2 & -sl_1-tl_2 &0
\end{array}\! \right]
$$
Note that $s$ does not divide $\lambda$: if $s \mid \lambda$, then either $l_2=0$ or $\alpha_1b_2-\beta_1a_2=0$. None of the $l_i$ can be zero because of the basepoint free assumption (see the proof of Corollary \ref{NoMix}), and if $\alpha_1b_2-\beta_1a_2=0$, then $L_1$ and $l_2$ agree up to a scalar multiple. Since $l_1l_4-l_2l_3=L_1L_2$, it would then follow that $l_1$ is a scalar multiple of $l_2$ or $l_3$, which again violates the basepoint free assumption.
To conclude, note that $u$ and $v$ cannot both divide $\lambda$; therefore $\lambda$ together with one of $us^4$ or $vs^4$ forms a regular sequence in $I_3(\phi_2)$, showing that ${\mathrm{depth}}(I_3(\phi_2))$ is at least $2$.\\
To show that ${\mathrm{depth}}(I_2(\phi_3)) \ge 3$, note that
\[
I_2(\phi_3) = \langle sl_2,sl_4,tl_4-sl_3,tl_2-sl_1,t\gamma-s\delta,
s\gamma, s^2, q(u,v) \rangle
\]
Since $l_2, l_4$ are independent,
$\langle sl_2,sl_4 \rangle = \langle su,sv \rangle$ and using these we
can reduce $tl_4-sl_3,tl_2-sl_1$ to $tu, tv$. Since
$s \nmid \gamma$, modulo $s^2$, $s\gamma$ reduces to $st^2$.
Similarly, $t\gamma-s\delta$ reduces to $t^3$, so that in fact
\[
I_2(\phi_3) = \langle su,sv,tu,tv,s^2,st^2,t^3,q(u,v) \rangle,
\]
and $\{s^2, t^3, q(u,v)\}$ is a regular sequence of length three.\\
Case 2: We have $us^2t^2, vs^2t^2 \in I_3(\phi_2)$. Consider the minor
\[
\lambda=L_2(s^2l_1-t^2l_2)\left( (\alpha_1b_1-\beta_1a_1)s^2 + (\alpha_1b_2-\beta_1a_2)t^2 \right)
\]
arising from the submatrix
$$
\left[
\begin{array}{ccc}
\alpha_1tL_2 & \alpha_1sL_2 & a_1s^2+a_2t^2 \\
\beta_1tL_2 & \beta_1sL_2 & b_1s^2+b_2t^2 \\
sl_1 & tl_2 &0
\end{array}\! \right]
$$
Note that $s$ and $t$ do not divide $\lambda$: if $s \mid \lambda$, then either $l_2=0$ or $\alpha_1b_2-\beta_1a_2=0$ (the case $t \mid \lambda$ is symmetric, with $l_1$ and $\alpha_1b_1-\beta_1a_1$ in place of $l_2$ and $\alpha_1b_2-\beta_1a_2$). None of the $l_i$ can be zero because of the basepoint free assumption (see the proof of Corollary \ref{NoMix}), and if $\alpha_1b_2-\beta_1a_2=0$, then $L_1$ and $l_2$ agree up to a scalar multiple. Since $l_1l_4-l_2l_3=L_1L_2$, it would then follow that $l_1$ is a scalar multiple of $l_2$ or $l_3$, contradicting basepoint freeness. Furthermore,
$u$ and $v$ cannot both divide $\lambda$, so $\lambda$ together with one of $us^2t^2$ or $vs^2t^2$ forms a regular sequence in $I_3(\phi_2)$.
To show that ${\mathrm{depth}}(I_2(\phi_3)) \ge 3$, note that
\[
I_2(\phi_3) = \langle su,sv,tu,tv,st, t\gamma, s\delta, q(u,v) \rangle,
\]
where we have replaced $sl_i, tl_j$ as in Case 1. If $t \mid \delta$,
then $a_1b_3-b_1a_3 = 0$, which would mean $l_1 = k l_3$ and contradict
that $U$ is basepoint free. Since $s \nmid \gamma$ and $t \nmid \delta$,
$\{t\gamma, s\delta, q(u,v) \}$ is regular unless $\delta$, $\gamma$ share a
common factor $\eta = (as+bt)$. Multiplying out and comparing coefficients
shows that this forces $\gamma$ and $\delta$ to agree up to scalar.
Combining this with the fact that $t \nmid \delta$, $s \nmid \delta$,
we find that $\delta = as^2+bst+ct^2$ with $a \ne 0 \ne c$. Reducing
$s \delta$ and $t\delta$ modulo $st$ then shows that $s^3,t^3 \in I_2(\phi_3)$, so $\{s^3, t^3, q(u,v)\}$ is a regular sequence of length three.
\end{proof}
\begin{cor}\label{NoMix}
If $U$ is basepoint free, then $I_U$ cannot have first syzygies of both bidegree $(0,1)$ and bidegree $(1,0)$.
\end{cor}
\begin{proof}
Suppose there is a first syzygy of bidegree $(0,1)$ and proceed
as in the proof of Proposition~\ref{LS3}. In the setting of
Case 1,
\[
I_U =\langle stu, stv, s^2l_1(u,v)+t^2l_2(u,v), s^2l_3(u,v)+t^2l_4(u,v)\rangle .
\]
If there is also a linear syzygy of bidegree $(1,0)$, expanding out
$\sum (a_is+b_it)p_i$ shows that the coefficient of $s^3$ is $a_3l_1+a_4l_3$,
and the coefficient of $t^3$ is $b_3l_2+b_4l_4$. Since
$\sum (a_is+b_it)p_i=0$, both coefficients must vanish.
In the proof of Proposition~\ref{LS3} we showed ${\mathrm{det}} A(u,v) = q(u,v) \ne 0$.
In fact, more is true: if any of the $l_i$ is zero, then $U$ is not basepoint
free. For example, if $l_1 =0$, then $\langle t,l_3\rangle$ is a minimal
associated prime of $I_U$. Since $a_3l_1+a_4l_3 =0$ iff $a_3=a_4=0$ or
$l_1$ is a scalar multiple of $l_3$, and the latter situation implies
that $U$ is not basepoint free, we must have $a_3=a_4=0$. Reasoning
similarly for $b_3l_2+b_4l_4$ shows that $a_3=a_4=b_3=b_4=0$. This implies
the linear syzygy of bidegree $(1,0)$ can only involve $stu,stv$, which
is impossible. This proves the result in Case 1 and
similar reasoning works for Case 2.
\end{proof}
\begin{cor}\label{T5PD}
If $I_U$ has a unique linear first syzygy of bidegree $(0,1)$, then $I_U$
has either one or two embedded prime ideals of the form $\langle s,t, L_i(u,v) \rangle$. If $q(u,v) = {\mathrm{det}}~A(u,v) = L_1(u,v)L_2(u,v)$ for $A(u,v)$ as in Theorem~\ref{T5exact},
then:
\begin{enumerate}
\item If $L_1=L_2$, then the only embedded prime of $I_U$ is
$\langle s,t,L_1\rangle$.
\item If $L_1 \ne L_2$, then $I_U$ has two embedded primes
$\langle s,t,L_1\rangle$ and $\langle s,t,L_2\rangle$.
\end{enumerate}
\end{cor}
\begin{proof}
In \cite{ehv}, Eisenbud, Huneke and Vasconcelos show that
a prime $P$ of codimension $c$ is associated to $R/I$ iff
it is associated to ${\mathrm{Ext}}^c(R/I,R)$. If $I_U$ has a unique
linear syzygy, then the free resolution is given by Proposition~\ref{LS3},
and ${\mathrm{Ext}}^3(R/I_U,R) = {\mathrm{coker}}(\phi_3^t)$. By Proposition 20.6 of
\cite{ebig}, if $\phi$ is a presentation matrix for a module $M$,
then the radicals of ${\mathrm{ann}}(M)$ and $I_{{\mathrm{rank}}(\phi)}(\phi)$ are equal.
Thus, if $I_U$ has a Type 5 resolution, the codimension three
associated primes are the codimension three associated primes
of $I_2(\phi_3)$. The proof of Theorem~\ref{T5exact} shows that
in Case 1,
\[
\begin{array}{lcll}
I_2(\phi_3) & = & \langle su,sv,tu,tv,s^2,st^2,t^3,q(u,v) \rangle &\\
& = & \langle s^2,st^2,t^3,u,v\rangle \cap \langle s,t,L_1^2 \rangle &\mbox{ if } L_1=L_2, \\
& = & \langle s^2,st^2,t^3,u,v\rangle \cap \langle s,t,L_1 \rangle
\cap \langle s,t,L_2 \rangle &\mbox{ if } L_1\ne L_2 .\\
\end{array}
\]
The embedded prime associated to $\langle s,t,u,v \rangle$ is not
an issue, since we are only interested in the codimension three associated
primes. The proof for Case 2 works in the same way.
\end{proof}
Next, we tackle the case where the syzygy of bidegree $(0,1)$ is
not unique.
\begin{prop}\label{LS2}
If $U$ is basepoint free, then the following are equivalent:
\begin{enumerate}
\item The ideal $I_U$ has two linear first syzygies of bidegree $(0,1)$.
\item The primary decomposition of $I_U$ is
\[
I_U = \langle u,v \rangle \cap \langle q_1,q_2 \rangle,
\]
where $\sqrt{\langle q_1,q_2 \rangle} = \langle s,t \rangle$
and $q_i$ are of bidegree $(2,0)$.
\item The minimal free resolution of $I_U$ is of numerical Type 6.
\item $X_U \simeq \Sigma_{1,1}$.
\end{enumerate}
\end{prop}
\begin{proof}
By Lemma~\ref{LS1}, since $I_U$ has a linear syzygy of bidegree $(0,1)$,
\[
I_U = \langle q_1u, q_1v, p_2,p_3 \rangle.
\]
Proceed as in the proof of Proposition~\ref{LS3}. In
Case 1, the assumption that $q_1=st$ means that $p_2,p_3$
can be reduced to have no terms involving $stu$ and $stv$, hence there
cannot be a syzygy of bidegree $(0,1)$ involving $p_i$ and
$q_1u,q_1v$. Therefore the second first syzygy of bidegree $(0,1)$
involves only $p_2$ and $p_3$ and the reasoning in the
proof of Lemma~\ref{LS1} implies that $\langle p_2,p_3 \rangle = \langle q_2u, q_2v \rangle$. Thus, we have the primary decomposition
\[
I_U = \langle q_1,q_2 \rangle \cap \langle u, v \rangle,
\]
with $q_1,q_2$ of bidegree $(2,0)$. Since $U$ is basepoint free,
$\sqrt{\langle q_1,q_2 \rangle} = \langle s,t \rangle$,
so $q_1$ and $q_2$ form a regular sequence in $k[s,t]$.
Similar reasoning applies in the situation of Case 2.
That the minimal free resolution is of numerical Type 6
follows from the primary decomposition above,
which determines the differentials in
the minimal free resolution:
\begin{small}
\[
I_U \longleftarrow (-2,-1)^4 \xleftarrow{\left[ \!
\begin{array}{cccc}
v & 0 &q_2 &0 \\
-u &0 &0 & q_2 \\
0 & v &-q_1 &0 \\
0 & -u &0 & -q_1
\end{array}\! \right]}
\begin{array}{c}
(-2,-2)^2\\
\oplus \\
(-4,-1)^2
\end{array}\!
\xleftarrow{\left[ \!\begin{array}{c}
q_2 \\
-q_1\\
-v \\
u
\end{array}\! \right]}
(-4,-2).
\]
\end{small}
The last assertion follows since
\[
{\mathrm{det}} \left[ \!
\begin{array}{cc}
uq_1 & uq_2\\
vq_1 & vq_2
\end{array}\! \right] = 0.
\]
Hence, the image of $\phi_U$ is contained in ${\bf V }(xy-zw) = \Sigma_{1,1}$.
After a change of coordinates, $q_1=s^2+ast$ and $q_2 = t^2+bst$, with
$0 \ne a \ne b \ne 0$. Therefore on the open set
$U_{s,u} \subseteq {\mathbb{P}^1 \times \mathbb{P}^1}$ the map is defined by
\[
(at+1,(at+1)v, t^2+bt, (t^2+bt)v),
\]
so the image is a surface. Finally, if $X_U = \Sigma_{1,1}$, then
with a suitable choice of basis for $U$, $p_0p_3-p_1p_2 = 0$, hence
$p_0 | p_1p_2$ and $p_3 | p_1p_2$. Since $U$ is four dimensional,
this means we must have $p_0 = \alpha \beta$, $p_1 = \alpha \gamma$,
$p_2 = \beta \delta$. Without loss of generality, suppose $\beta$ is quadratic, so
there is a linear first syzygy $\delta p_0-\alpha p_2 = 0$.
Arguing similarly for $p_3$, we find that
there are two independent linear first syzygies. Lemma~\ref{LS5-10} of the
next section shows that if $U$ is basepoint free, then there can
be at most one first syzygy of bidegree $(1,0)$, so by
Corollary~\ref{NoMix}, $I_U$ must have two first syzygies of
bidegree $(0,1)$.
\end{proof}
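The determinant computation at the start of the proof above can be confirmed symbolically; the sketch below uses sympy with the generic choices $q_1 = s^2+ast$ and $q_2 = t^2+bst$ from the proof:

```python
import sympy as sp

s, t, u, v = sp.symbols('s t u v')
a, b = sp.symbols('a b')

# generic bidegree (2,0) forms as in the proof of Proposition LS2
q1 = s**2 + a*s*t
q2 = t**2 + b*s*t

p0, p1, p2, p3 = q1*u, q1*v, q2*u, q2*v

# the 2x2 determinant vanishes identically, so the image of phi_U
# lies on the quadric xy - zw = 0, i.e. on Sigma_{1,1}
assert sp.expand(p0*p3 - p1*p2) == 0
```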
\section{First syzygies of bidegree $(1,0)$}
\noindent Recall that there is an analogue of Lemma \ref{LS1} for syzygies of bidegree $(1,0)$:
\begin{lem}\label{LS2-10}
If $I_U$ has a linear syzygy of bidegree $(1,0)$, then
\[
I_U = \langle ps, pt, p_2,p_3 \rangle,
\]
where $p$ is homogeneous of bidegree $(1,1)$.
\end{lem}
Lemma~\ref{LS2-10} has strong consequences as well: we will prove that
\begin{prop}\label{LS3-10}
If $U$ is basepoint free and $I_U= \langle ps, pt, p_2,p_3 \rangle$, then
\begin{enumerate}
\item $I_U$ has numerical Type 4 if and only if $p$ is decomposable.
\item $I_U$ has numerical Type 3 if and only if $p$ is indecomposable.
\end{enumerate}
\end{prop}
We begin with some preliminary lemmas:
\begin{lem}\label{LS4-10}
If $I_U$ has a first syzygy of bidegree $(1,0)$, then
$I_U$ has two minimal syzygies of bidegree $(1,1)$ and
if $p$ in Lemma~\ref{LS2-10} factors, then $I_U$ also has a
minimal first syzygy of bidegree $(0,2)$.
\end{lem}
\begin{proof}
First assume $p$ is an irreducible bidegree $(1,1)$ form, so that $p = a_0su+a_1sv+a_2tu+a_3tv$ with $a_0a_3-a_1a_2 \ne 0$. We may assume $a_0\ne 0$ and
scale so that it is one. Then
\[
\begin{array}{ccc}
sp & = & s^2u+a_1s^2v+ a_2stu+a_3stv \\
tp & = & stu + a_1stv+a_2t^2u+a_3t^2v \\
p_2 & = & b_0t^2u+b_1s^2v+b_2stv+b_3t^2v\\
p_3 & = & c_0t^2u+c_1s^2v+c_2stv+c_3t^2v\\
\end{array}
\]
Here we have used $tp$ and $sp$ to remove all the terms involving
$s^2u$ and $stu$ from $p_2$ and $p_3$. A simple but tedious calculation
then shows that
\[
\begin{array}{ccc}
p \cdot p_2 & = & sp(b_1sv+b_2tv)+ tp(b_0tu+b_3tv)\\
p \cdot p_3 & = & sp(c_1sv+c_2tv)+ tp(c_0tu+c_3tv)
\end{array}
\]
Now suppose that $p = L_1(s,t) \cdot L_2(u,v)$ with $L_1, L_2$
linear forms, then after a change of coordinates, $p = su$ and a (possibly new) set of minimal generators for $I_U$ is $\langle s^2u, stu, p_2,p_3 \rangle$.
Eliminating terms from $p_2$ and $p_3$, we may assume
\[
\begin{array}{ccc}
p_2 &=& as^2v+bstv+t^2l_1\\
p_3 &=& cs^2v+dstv+t^2l_2,
\end{array}
\]
where $l_i =l_i(u,v) \in R_{(0,1)}$. There are two first syzygies of bidegree $(1,1)$:
\[
\begin{array}{ccc}
sup_2 &= & su(as^2v+bstv+t^2l_1)=as^3uv+bs^2tuv+st^2ul_1 \\
&=& (as+bt)v\cdot s^2u+tl_1\cdot stu =(as+bt)v\cdot p_0+tl_1\cdot p_1\\
\end{array}
\]
\[
\begin{array}{ccc}
sup_3 &= & su(cs^2v+dstv+t^2l_2)=cs^3uv+ds^2tuv+st^2ul_2 \\
&=& (cs+dt)v\cdot s^2u+tl_2\cdot stu =(cs+dt)v\cdot p_0+tl_2\cdot p_1\\
\end{array}
\]
A syzygy of bidegree $(0,2)$ is obtained via:
\[
\begin{array}{ccc}
u(l_2p_2-l_1p_3) &= & u(as^2vl_2+bstvl_2-cs^2vl_1-dstvl_1) \\
&=& (al_2-cl_1)v\cdot s^2u+(bl_2-dl_1)v\cdot stu \\
&=& (al_2-cl_1)v\cdot p_0+(bl_2-dl_1)v\cdot p_1 \\
\end{array}
\]
\end{proof}
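The three relations displayed in the proof above can be verified symbolically; in the following sympy sketch the coefficient names follow the proof (note the order of $p_2, p_3$ in the bidegree $(0,2)$ relation):

```python
import sympy as sp

s, t, u, v = sp.symbols('s t u v')
a, b, c, d = sp.symbols('a b c d')
a1, b1, a2, b2 = sp.symbols('a1 b1 a2 b2')

# decomposable case p = s*u: generators after reduction
l1 = a1*u + b1*v
l2 = a2*u + b2*v
p0, p1 = s**2*u, s*t*u
p2 = a*s**2*v + b*s*t*v + t**2*l1
p3 = c*s**2*v + d*s*t*v + t**2*l2

# the two bidegree (1,1) syzygies
assert sp.expand(s*u*p2 - ((a*s + b*t)*v*p0 + t*l1*p1)) == 0
assert sp.expand(s*u*p3 - ((c*s + d*t)*v*p0 + t*l2*p1)) == 0

# the bidegree (0,2) syzygy
assert sp.expand(u*(l2*p2 - l1*p3)
                 - ((a*l2 - c*l1)*v*p0 + (b*l2 - d*l1)*v*p1)) == 0
```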
\begin{lem}\label{LS5-10}
If $U$ is basepoint free, then there can be at most one
linear syzygy of bidegree $(1,0)$.
\end{lem}
\begin{proof}
Suppose $I_U$ has a linear syzygy of bidegree $(1,0)$, so that
$sp, tp \in I_U$, with $p=su+a_1sv+a_2tu+a_3tv$. Note this takes
care of both possible cases of Lemma~\ref{LS4-10}: in Case 1,
$a_3-a_1a_2 =0$ (e.g. for $p = su$, $a_1=a_2=a_3 =0$) and in
Case 2, $a_3-a_1a_2 \ne 0$. Now suppose another syzygy of
bidegree $(1,0)$ exists: $S = \sum (d_is+e_it)p_i = 0$. Expanding shows that
\[
S = d_0s^3u + (e_0+d_1)s^2tu + \cdots
\]
So after reducing $S$ by the Koszul syzygy on $\langle sp,tp \rangle$,
$d_0=d_1=e_0 = 0$.
In $p_2$ and $p_3$, one of $b_1$ or $c_1$ must be non-zero. If
not, then all of $tp, p_2,p_3$ are divisible by $t$ and since
\[
sp\mid_{t=0} = s^2(u+a_1v),
\]
this would mean $\langle t, u+a_1v \rangle$ is an associated prime of
$I_U$, contradicting basepoint freeness. WLOG $b_1 \ne 0$, scale it
to one and use it to remove the term $c_1s^2v$ from $p_3$. This means
the coefficient of $s^3v$ in $S$ is $d_2b_1 = d_2$, so $d_2$ vanishes.
At this point we have established that $S$ does not involve $sp$,
and that the coefficients of $tp$ and $p_2$ in $S$ are multiples of $t$. Now change generators so that
$p_2' = e_1pt+e_2p_2$. This modification does not affect $sp$, $tp$,
or $I_U$, but now $S$ involves only $p'_2$ and $p_3$:
\[
(t)p'_2 + (d_3s+e_3t)p_3 = 0.
\]
As in the proof of Lemma~\ref{LS1}, letting $p_2''=p_2'+e_3p_3$ and
$p_3'' = d_3p_3$, we see that $S = tp_2''+ sp_3''=0$, so that
$p''_2 = sq$ and $p''_3 = -tq$, hence
\[
I_U = \langle s,t \rangle \cap \langle p, q \rangle
\]
with $p,q$ both of bidegree $(1,1)$. But on ${\mathbb{P}^1 \times \mathbb{P}^1}$, ${\bf V }(p,q)$
is always nonempty, which would mean $I_U$ has a basepoint,
contradicting our hypothesis.
\end{proof}
\begin{remark}
If the ${\mathbb{P}}^1$ fibers of $Q$ did not intersect, Lemma~\ref{LS5-10} would follow
easily from Lemma~\ref{LS2-10}. However, because $Q$ is a projection
of $\Sigma_{3,1} \subseteq {\mathbb{P}}^7$ to ${\mathbb{P}}^5$, the ${\mathbb{P}}^1$ fibers of $Q$
do in fact intersect.
\end{remark}
\begin{thm}\label{AllLins}
If $U$ is basepoint free, then the only possibilities for linear first syzygies
of $I_U$ are
\begin{enumerate}
\item $I_U$ has a unique first syzygy of bidegree $(0,1)$ and no
other linear syzygies.
\item $I_U$ has a pair of first syzygies of bidegree $(0,1)$ and no
other linear syzygies.
\item $I_U$ has a unique first syzygy of bidegree $(1,0)$ and no
other linear syzygies.
\end{enumerate}
\end{thm}
\begin{proof}
It follows from Proposition~\ref{LS3} and Proposition~\ref{LS2} that
both of the first two items can occur. That there cannot be three
or more linear syzygies of bidegree $(0,1)$ follows easily from
the fact that if there are two syzygies of bidegree $(0,1)$ then
$I_U$ has the form of Proposition~\ref{LS2} and the resolution
is unique. Corollary~\ref{NoMix} shows there cannot be linear
syzygies of both bidegree $(1,0)$ and bidegree $(0,1)$ and
Lemma~\ref{LS5-10} shows there can be at most one linear syzygy of
bidegree $(1,0)$.
\end{proof}
Our next theorem strengthens Lemma~\ref{LS4-10}:
there is a minimal first syzygy of bidegree $(0,2)$ iff the $p$ in
Lemma~\ref{LS2-10} factors. We need a pair of lemmas:
\begin{lem}\label{P2fiberBP}
If ${\mathbb{P}}(U)$ contains a ${\mathbb{P}}^2$ fiber of $\Sigma_{2,1}$, then $U$ is not
basepoint free.
\end{lem}
\begin{proof}
If ${\mathbb{P}}(U)$ contains a ${\mathbb{P}}^2$ fiber of $\Sigma_{2,1}$ over a point of ${\mathbb{P}}^1$
corresponding to a linear form $l(u,v)$, after a change of
basis $l(u,v)=u$ and so
\[
I_U = \langle s^2u, stu, t^2u, l_1(s,t)l_2(s,t)v \rangle.
\]
This implies that $\langle u, l_1(s,t) \rangle \in \mbox{Ass}(I_U)$, so $U$ is not basepoint free.
\end{proof}
The next lemma is similar to a result of \cite{cds}, but differs due
to the fact that the subspaces ${\mathbb{P}}(W) \subseteq {\mathbb{P}}(V)$ studied in \cite{cds}
are always basepoint free.
\begin{lem}\label{02syz}
If $U$ is basepoint free, then there is a minimal first syzygy
on $I_U$ of bidegree $(0,2)$ iff there
exists ${\mathbb{P}}(W)\simeq {\mathbb{P}}^2 \subseteq {\mathbb{P}}(U)$ such that ${\mathbb{P}}(W) \cap \Sigma_{2,1}$ is a smooth conic.
\end{lem}
\begin{proof}
Suppose $\sum q_ip_i =0$ is a minimal first syzygy of bidegree $(0,2)$,
so that $q_i = a_iu^2+b_iuv+c_iv^2$. Rewrite this as
$u^2 \sum a_ip_i +uv \sum b_ip_i +v^2 \sum c_ip_i = 0$ and define
$f_0 = \sum a_ip_i$, $f_1 = \sum b_ip_i$, $f_2 = \sum c_ip_i$.
By construction, $\langle f_0,f_1,f_2 \rangle \subseteq I_U$ and
$[f_0,f_1,f_2]$ is a syzygy on $[u^2,uv,v^2]$, so
\[
\left[ \!
\begin{array}{c}
f_0\\
f_1\\
f_2
\end{array}\! \right] = \alpha \cdot \left[ \!
\begin{array}{c}
v\\
-u\\
0
\end{array}\! \right]+ \beta \cdot \left[ \!
\begin{array}{c}
0\\
v\\
-u
\end{array}\! \right] = \left[ \!
\begin{array}{c}
\alpha v\\
\beta v - \alpha u\\
-\beta u
\end{array}\! \right], \mbox{ for some }\alpha, \beta \in R_{2,0}.
\]
If $\{f_0,f_1,f_2\}$ are not linearly independent, there exist constants
$c_i$ with
\[
c_0\alpha v +c_1(\beta v - \alpha u) -c_2 \beta u = 0.
\]
This implies that
$(c_0 \alpha +c_1 \beta) v = (c_1\alpha +c_2\beta)u$, so
$\alpha = k \beta$. But then $\{\alpha u, \alpha v \} \subseteq I_U$, which
means there is a minimal first syzygy of bidegree $(0,1)$, contradicting
the classification of \S 2. Letting $W = {\mathrm{Span}} \{f_0,f_1,f_2\}$, we have
that ${\mathbb{P}}^2 \simeq {\mathbb{P}}(W) \subseteq {\mathbb{P}}(U)$. The actual bidegree $(0,2)$
syzygy is
\[
{\mathrm{det}} \left[ \!
\begin{array}{ccc}
v & 0 &f_0\\
-u & v& f_1\\
0 & -u &f_2
\end{array}\! \right] = 0.
\]
To see that ${\mathbb{P}}(W)$ meets $\Sigma_{2,1}$ in a smooth conic, note that
by Lemma~\ref{P2fiberBP}, ${\mathbb{P}}(W)$ cannot be equal to a ${\mathbb{P}}^2$
fiber of $\Sigma_{2,1}$, or ${\mathbb{P}}(U)$ would
have basepoints. The image of the map ${\mathbb{P}}^1 \rightarrow {\mathbb{P}}(W)$ defined
by
\[
(x:y) \mapsto x^2 (\alpha v) +xy(\beta v - \alpha u) + y^2(-\beta u)=(x\alpha +y \beta)(xv-yu)
\]
is a smooth conic $C \subseteq {\mathbb{P}}(W) \cap \Sigma_{2,1}$. Since
${\mathbb{P}}(W) \cap \Sigma_{2,1}$ is a curve of degree at most three, if
this is not the entire intersection, there would be a line $L$ residual
to $C$. If $L \subseteq F_x$, where $F_x$ is a ${\mathbb{P}}^2$ fiber over $x \in {\mathbb{P}}^1$, then
for small $\epsilon$, $F_{x+\epsilon}$ also meets ${\mathbb{P}}(W)$ in a line,
which is impossible. If $L$ is a ${\mathbb{P}}^1$ fiber of $\Sigma_{2,1}$, this
would result in a bidegree $(0,1)$ syzygy, which is impossible
by the classification of \S 2.
\end{proof}
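The $3 \times 3$ determinant identity used in the proof above can be confirmed symbolically; in the sketch below, $\alpha$ and $\beta$ are treated as single generic symbols standing in for forms in $R_{2,0}$:

```python
import sympy as sp

s, t, u, v = sp.symbols('s t u v')
alpha, beta = sp.symbols('alpha beta')  # stand-ins for forms in R_{2,0}

f0 = alpha*v
f1 = beta*v - alpha*u
f2 = -beta*u

M = sp.Matrix([[v, 0, f0],
               [-u, v, f1],
               [0, -u, f2]])

# the determinant is the bidegree (0,2) syzygy: it vanishes identically
assert sp.expand(M.det()) == 0
```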
\begin{defn} A line $l \subseteq {\mathbb{P}}(s^2,st,t^2)$, parameterized as $l=af+bg$ with
$f,g \in {\mathrm{Span}} \{s^2,st,t^2\}$, is split if $l$ has a fixed factor:
for all $(a:b) \in {\mathbb{P}}^1$, $af+bg = L(aL'+bL'')$ with $L \in R_{1,0}$.
\end{defn}
\begin{thm}\label{Type4resoln}
If $U$ is basepoint free, then $I_U$ has minimal first syzygies
of bidegree $(1,0)$ and $(0,2)$ iff
\[
{\mathbb{P}}(U) \cap \Sigma_{2,1} = C \cup L,
\]
where $L\simeq {\mathbb{P}}(W')$ is a split line in
a ${\mathbb{P}}^2$ fiber of $\Sigma_{2,1}$ and $C$ is
a smooth conic in ${\mathbb{P}}(W)$, such that
${\mathbb{P}}(W') \cap {\mathbb{P}}(W) = C \cap L$ is a point and ${\mathbb{P}}(W') + {\mathbb{P}}(W) = {\mathbb{P}}(U)$.
\end{thm}
\begin{proof}
Suppose there are minimal first syzygies of bidegrees $(1,0)$ and $(0,2)$.
By Lemma~\ref{02syz}, the $(0,2)$ syzygy determines a conic $C$ in a
distinguished ${\mathbb{P}}(W) \subseteq {\mathbb{P}}(U)$. Every point of $C$ lies on both a ${\mathbb{P}}^2$
and ${\mathbb{P}}^1$ fiber of $\Sigma_{2,1}$. No ${\mathbb{P}}^1$ fiber of $\Sigma_{2,1}$ is
contained in ${\mathbb{P}}(U)$, or there would be a first syzygy of
bidegree $(0,1)$, which is impossible by Corollary~\ref{NoMix}.
By Lemma~\ref{LS2-10}, there exists
$W' = {\mathrm{Span}} \{ps,pt \} \subseteq U$, so we have a distinguished line
${\mathbb{P}}(W') \subseteq {\mathbb{P}}(U)$. We now consider two possibilities:
\vskip .04in
\noindent Case 1: If $p$ factors, then ${\mathbb{P}}(W')$ is a split line contained in $\Sigma_{2,1}$,
which must therefore be contained in a ${\mathbb{P}}^2$ fiber and $p = L(s,t)l(u,v)$,
where $l(u,v)$ corresponds to a point of a ${\mathbb{P}}^1$ fiber of $\Sigma_{2,1}$ and
${\mathbb{P}}(W') \cap {\mathbb{P}}(W)$ is a point. In particular
${\mathbb{P}}(U) \cap \Sigma_{2,1}$ is the union of a line and conic,
which meet transversally at a point.
\vskip .04in
\noindent Case 2: If $p$ does not factor, then $p=a_0su+a_1sv+a_2tu+a_3tv$,
$a_0a_3-a_1a_2 \ne 0$. The corresponding line $L = {\mathbb{P}}(ps,pt)$
meets ${\mathbb{P}}(W)$ in a point and since $p$ is irreducible $L \cap C = \emptyset$.
Since $W = {\mathrm{Span}} \{\alpha v, \beta v - \alpha u, -\beta u \}$, we must have
\[
p \cdot L_0(s,t) = a \alpha v -b \beta u + c(\beta v - \alpha u)
\]
for some $\{a,b,c\}$, where $L_0(s,t)=b_0s+b_1t$ corresponds to $L \cap {\mathbb{P}}(W)$.
Write $p \cdot L_0(s,t)$ as $l_1uL_0(s,t) + l_2vL_0(s,t)$, where
\[
\begin{array}{ccc}
l_1(s,t) &= &a_0s+a_2t\\
l_2(s,t) &= &a_1s+a_3t
\end{array}
\]
Then
\[
\begin{array}{ccc}
p \cdot L_0(s,t) & = & l_1(s,t)L_0(s,t)u + l_2(s,t)L_0(s,t)v \\
& = & a \alpha v -b \beta u + c(\beta v - \alpha u)\\
& = & (-b\beta -c\alpha)u +(a \alpha +c \beta) v
\end{array}
\]
In particular,
\begin{equation}\label{lMat}
\left[ \!
\begin{array}{cc}
a & c\\
c & b
\end{array}\! \right] \cdot \left[ \!
\begin{array}{c}
\alpha\\
\beta
\end{array}\! \right] = \left[ \!
\begin{array}{c}
l_2 L_0\\
-l_1 L_0
\end{array}\! \right]
\end{equation}
If \[
{\mathrm{det}} \left[ \!
\begin{array}{cc}
a & c\\
c & b
\end{array}\! \right] \ne 0,
\]
then applying Cramer's rule to Equation~\ref{lMat}
shows that $\alpha$ and $\beta$ share
a common factor $L_0(s,t)$. But then
$W = {\mathrm{Span}} \{L_0 \gamma, L_0 \delta, L_0 \epsilon \}$, which
contradicts the basepoint freeness of $U$: change coordinates
so that $L_0 =s$; then $U = {\mathrm{Span}}\{s\gamma, s \delta, s\epsilon, p_3\}$. Since
$p_3|_{s=0} = t^2l(u,v)$, $\langle s, l(u,v) \rangle$ is an
associated prime of $I_U$, a contradiction. To conclude,
consider the case $ab-c^2 =0$. Then there is a constant $k$
such that
\[
\left[ \!
\begin{array}{cc}
a & c\\
ka & kc
\end{array}\! \right] \cdot \left[ \!
\begin{array}{c}
\alpha\\
\beta
\end{array}\! \right] = \left[ \!
\begin{array}{c}
l_2 L_0\\
-l_1 L_0
\end{array}\! \right],
\]
which forces $kl_2(s,t) = -l_1(s,t)$. Recalling that $l_1(s,t) = a_0s+a_2t$ and $l_2(s,t) = a_1s+a_3t$, this implies that
\[
{\mathrm{det}} \left[ \!
\begin{array}{cc}
a_0 & a_2\\
a_1 & a_3
\end{array}\! \right] = 0,
\]
contradicting the irreducibility of $p$.\newline
This shows that if $I_U$ has minimal first syzygies of bidegree $(1,0)$ and
$(0,2)$, then ${\mathbb{P}}(U) \cap \Sigma_{2,1} = C \cup L$ meeting transversally
at a point. The remaining implication follows from Lemma~\ref{LS4-10} and
Lemma~\ref{02syz}.
\end{proof}
For $W \subseteq H^0({\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(2,1))$ a basepoint free subspace of
dimension three, the minimal free resolution of $I_W$ is
determined in \cite{cds}: there are two possible minimal
free resolutions, which depend only on whether
${\mathbb{P}}(W)$ meets $\Sigma_{2,1}$ in a finite set of points or in a smooth
conic $C$. By Theorem~\ref{Type4resoln}, if there are minimal
first syzygies of bidegrees $(1,0)$ and $(0,2)$, then $U$ contains
a $W$ with ${\mathbb{P}}(W) \cap \Sigma_{2,1} = C$, which suggests building
the Type 4 resolution by choosing $W = {\mathrm{Span}} \{p_0,p_1,p_2\}$ to
satisfy ${\mathbb{P}}(W) \cap \Sigma_{2,1} = C$ and constructing a mapping
cone. There are two problems with this approach. First, there
does not seem to be an easy description for $\langle p_0,p_1,p_2\rangle :p_3$.
Second, recall that the mapping cone resolution need not be minimal.
A computation shows that the shifts in the resolutions of
$\langle p_0,p_1,p_2\rangle :p_3$ and $\langle p_0,p_1,p_2\rangle$
overlap and there are many cancellations. However, choosing $W$ to
consist of two points on $L$ and one on $C$ solves both of these
problems at once.
\begin{lem}\label{Type4L1}
In the setting of Theorem~\ref{Type4resoln}, let $p_0$ correspond to
$L \cap C$ and let $p_1, p_2$ correspond to points on $L$ and $C$
(respectively) distinct from $p_0$.
If $W = {\mathrm{Span}} \{p_0,p_1,p_2 \}$, then $I_W$ has a Hilbert--Burch
resolution. Choosing coordinates so
$W = {\mathrm{Span}} \{sLv, tLv, \beta u \}$, the primary decomposition of $I_W$ is
\[
\cap_{i=1}^4 I_i = \langle s,t\rangle^2 \cap \langle u,v \rangle \cap \langle \beta, v \rangle \cap \langle L, u \rangle.
\]
\end{lem}
\begin{proof}
After a suitable change of coordinates, $W = {\mathrm{Span}} \{sLv, tLv, \beta u \}$,
and writing $\beta = (a_0s+a_1t)L'(s,t)$, $I_W$ is generated by the two by
two minors of
two minors of
\[
\phi_2 =
\left[ \!
\begin{array}{cc}
t & a_0uL'\\
-s & a_1uL' \\
0 &L v
\end{array}\! \right].
\]
Hence, the minimal free resolution of $I_W$ is
\[
0 \longleftarrow I_W \longleftarrow (-2,-1)^3 \stackrel{\phi_2}{\longleftarrow}
(-3,-1) \oplus (-3,-2) \longleftarrow 0
\]
For the primary decomposition, since $L$ does not divide $\beta$,
\[
\langle \beta, v \rangle \cap \langle L, u \rangle = \langle \beta L, vL, \beta u, vu \rangle,
\]
and intersecting this with $\langle s,t\rangle^2 \cap \langle u,v \rangle$
gives $I_W$.
\end{proof}
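The claim that the $2 \times 2$ minors of $\phi_2$ recover the generators of $I_W$ can be checked symbolically; in the sketch below, $L$ and $L'$ are treated as generic symbols (named `Lc` and `Lp`):

```python
import sympy as sp

s, t, u, v = sp.symbols('s t u v')
a0, a1 = sp.symbols('a0 a1')
Lc, Lp = sp.symbols('Lc Lp')  # stand-ins for the linear forms L and L'

beta = (a0*s + a1*t)*Lp

phi2 = sp.Matrix([[t, a0*u*Lp],
                  [-s, a1*u*Lp],
                  [0, Lc*v]])

# the three 2x2 minors recover sLv, tLv, beta*u (up to sign)
m12 = phi2[0, 0]*phi2[1, 1] - phi2[0, 1]*phi2[1, 0]  # rows 1,2
m13 = phi2[0, 0]*phi2[2, 1] - phi2[0, 1]*phi2[2, 0]  # rows 1,3
m23 = phi2[1, 0]*phi2[2, 1] - phi2[1, 1]*phi2[2, 0]  # rows 2,3

assert sp.expand(m12 - beta*u) == 0
assert sp.expand(m13 - t*Lc*v) == 0
assert sp.expand(m23 + s*Lc*v) == 0
```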
\begin{lem}\label{Type4L2}
If $U$ is basepoint free, $W = {\mathrm{Span}} \{sLv, tLv, \beta u \}$ and
$p_3 = \alpha u - \beta v = tL u - \beta v$, then
\[
I_W : p_3 = \langle \beta L, vL, \beta u, vu \rangle.
\]
\end{lem}
\begin{proof}
First, our choice of $p_0$ to correspond to $C \cap L$ in
Theorem~\ref{Type4resoln} means we may write $\alpha = tL$.
Since $\langle s,t \rangle^2: p_3 = \langle 1 \rangle = \langle u,v \rangle : p_3$,
\[
I_W : p_3 = (\cap_{i=1}^4 I_i):p_3 = (\langle \beta, v \rangle:p_3) \cap
(\langle L, u \rangle : p_3).
\]
Since $p_3 = tL u - \beta v$, $fp_3 \in \langle \beta, v \rangle$ iff
$f tLu \in \langle \beta, v \rangle$. Since $tL=\alpha$ and
$\alpha, \beta$ and $u,v$ are relatively prime, this implies $f \in \langle \beta, v \rangle$. The same argument shows that $\langle L, u \rangle : p_3$ must equal $\langle L, u \rangle$.
\end{proof}
\begin{thm}\label{Type4MC}
In the situation of Theorem~\ref{Type4resoln}, the minimal
free resolution is of Type 4. If $p_0$ corresponds to
$L \cap C$, $p_1\ne p_0$ to another point on $L$ and $p_2 \ne p_0$
to a point on $C$ and $W = {\mathrm{Span}} \{p_0,p_1,p_2 \}$, then the minimal
free resolution is given by the mapping cone of $I_W$ and $I_W : p_3$.
\end{thm}
\begin{proof}
We construct a mapping cone resolution from the short exact sequence
\[
0 \longleftarrow R/I_U \longleftarrow R/I_W \stackrel{\cdot p_3} \longleftarrow R(-2,-1)/I_W:p_3 \longleftarrow 0.
\]
By Lemma~\ref{Type4L2},
\[
I_W : p_3 = \langle \beta L, Lv, \beta u, uv \rangle,
\]
which by the reasoning in the proof of Proposition~\ref{LS2}
has minimal free resolution:
\begin{small}
\[
I_W : p_3 \longleftarrow
\begin{array}{c}
(-3,0)\\
\oplus \\
(-1,-1)\\
\oplus \\
(-2,-1)\\
\oplus \\
(0,-2)
\end{array}\! \xleftarrow{\left[ \!
\begin{array}{cccc}
v & 0 &u &0 \\
-\beta & 0 &0 &u \\
0 & v &-L &0 \\
0 & -\beta &0 & -L
\end{array}\! \right]}
\begin{array}{c}
(-3,-1)\\
\oplus \\
(-2,-2)\\
\oplus \\
(-3,-1)\\
\oplus \\
(-1,-2)
\end{array}\!
\xleftarrow{\left[ \!\begin{array}{c}
u\\
-L\\
-v \\
\beta
\end{array}\! \right]}
(-3,-2).
\]
\end{small}
A check shows that there are no overlaps in the mapping cone shifts,
hence the mapping cone resolution is actually minimal.
\end{proof}
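The assertion that the differentials in the resolution of $I_W : p_3$ compose to zero can be checked symbolically; again $\beta$ and $L$ are treated as generic symbols:

```python
import sympy as sp

u, v = sp.symbols('u v')
beta, L = sp.symbols('beta L')  # stand-ins for the forms beta and L

gens = sp.Matrix([[beta*L, L*v, beta*u, u*v]])  # generators of I_W : p3

phi = sp.Matrix([[v, 0, u, 0],
                 [-beta, 0, 0, u],
                 [0, v, -L, 0],
                 [0, -beta, 0, -L]])
psi = sp.Matrix([[u], [-L], [-v], [beta]])

# each column of phi is a syzygy on the generators, and phi . psi = 0
assert (gens * phi).expand() == sp.zeros(1, 4)
assert (phi * psi).expand() == sp.zeros(4, 1)
```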
\subsection{Type 3 resolution}
Finally, suppose $I_U=\langle p_0, p_1, p_2, p_3 \rangle$ with
$p_0= ps$ and $p_1=pt$, such that $p=a_0su+a_1sv+a_2tu+a_3tv$ is
irreducible, so $a_0a_3-a_1a_2 \not = 0$.
As in the case of Type 4, the
minimal free resolution will be given by a mapping cone. However, in Type 3
the construction is more complicated: we will need two mapping cones to
compute the resolution. What is surprising is that by a judicious
change of coordinates, the bigrading allows us to reduce $I_U$ so that
the equations have a very simple form.
\begin{thm}\label{T3res}
If $U$ is basepoint free and $I_U =\langle ps, pt, p_2, p_3 \rangle$
with $p$ irreducible, then $I_U$ has a mapping cone resolution,
and it is of numerical Type 3.
\end{thm}
\begin{proof}
Without loss of generality,
assume $a_0 = 1$. Reducing $p_2$ and $p_3$ mod $ps$ and $pt$,
we have
$$
\begin{array}{ccc}
p_2 & = & b_0t^2u+b_1s^2v+b_2stv+b_3t^2v\\
p_3 & = & c_0t^2u+c_1s^2v+c_2stv+c_3t^2v
\end{array}
$$
Since $U$ is basepoint free, either $b_0$ or $c_0$ is nonzero, so
after rescaling and reducing $p_3$ mod $p_2$
\[
\begin{array}{ccr}
p_2 & = & t^2u+b_1s^2v+b_2stv+b_3t^2v\\
p_3 & = & c_1s^2v+c_2stv+c_3t^2v \\
& = & (c_1s^2+c_2st+c_3t^2)v\\
& = & L_1L_2v\\
\end{array}
\]
for some $L_i \in R_{1,0}$. If the $L_i$'s are linearly independent,
then a change of variable replaces $L_1$ and $L_2$ with $s$ and
$t$. This transforms $p_0, p_1$ to $p'l_1, p'l_2$, but since the $l_i$'s are
linearly independent linear forms in $s$ and $t$,
$\langle p'l_1, p'l_2 \rangle = \langle p's, p't \rangle$, with
$p'$ irreducible. So we may assume
\[I_U=\langle ps, pt, p_2, p_3 \rangle, \mbox{ where } p_3 = stv \mbox{ or } s^2v.
\]
With this change of variables,
\[
p_2 = l^2u+(b'_1s^2+b'_2st+b'_3t^2)v
\]
where $l=as+bt$ and $b \ne 0$. We now analyze the two
possible situations. First, suppose $p_3 = stv$.
Reducing $p_2$ modulo $\langle ps, pt,stv \rangle$
yields
\[
\begin{array}{ccr}
p_2 & = & \alpha t^2u+b''_1s^2v+b''_3t^2v\\
& =& (\alpha u+b''_3v)t^2+b''_1s^2v
\end{array}
\]
By basepoint freeness, $\alpha \not = 0$, so changing variables
via $\alpha u+b''_3v \mapsto u$ yields $p_2 = t^2u+b''_1s^2v$.
Notice this change of variables does not change the form of the
other $p_i$. Now $b_1'' \ne 0$ by basepoint freeness, so
rescaling $t$ (which again preserves the form of the other $p_i$)
shows that
\[
I_U=\langle ps, pt, t^2u+s^2v, stv \rangle.
\]
Since $p$ is irreducible, $p=sl_1+tl_2$ with $l_1, l_2$ linearly
independent elements of $R_{0,1}$. Changing variables once again via
$l_1 \mapsto u$ and $l_2 \mapsto v$, we have
\[
I_U=\langle s(su+tv), t(su+tv), stQ_1, s^2Q_1+ t^2Q_2 \rangle,
\]
where $Q_1=au+bv, Q_2=cu+dv$ are linearly independent with $b \ne 0$.
Rescaling $v$ and $s$ we may assume $b=1$. Now let
\[
I_W=\langle s(su+tv), t(su+tv), stQ_1 \rangle.
\]
The minimal free resolution of $I_W$ is
\[
0 \longleftarrow I_W \longleftarrow (-2,-1)^3 \stackrel{\begin{small}\left[ \!
\begin{array}{cc}
t & 0\\
-s & sQ_1 \\
0 &-p
\end{array}\! \right]\end{small}}{\longleftarrow}
(-3,-1) \oplus (-3,-2) \longleftarrow 0.
\]
To obtain a mapping cone resolution, we need to compute $I_W:p_2$.
As in the Type 4 setting, we first find the primary decomposition for $I_W$.
\begin{enumerate}
\item If $a = 0$, then
$$
\begin{array}{cll}
I_W&=& \langle s(su+tv), t(su+tv), stv \rangle\\
&=&\langle u, v \rangle \cap \langle s, t \rangle^2 \cap \langle u, t \rangle \cap \langle su+tv, (s, v)^2 \rangle = \cap_{i=1}^4 I_i
\end{array}
$$
\item If $a \ne 0$, rescale $u$ and $t$ by $a$ so $Q_1=u+v$. Then
\[
\begin{array}{cll}
I_W&=& \langle s(su+tv), t(su+tv), st(u+v) \rangle\\
&=&\langle u, v \rangle \cap \langle s, t \rangle^2 \cap \langle v, s \rangle \cap \langle u, t \rangle \cap \langle u+v, s-t \rangle = \cap_{i=1}^5 I_i
\end{array}
\]
\end{enumerate}
Since $s^2Q_1+ t^2Q_2 \in I_1 \cap I_2$ in both cases,
$I_W: s^2Q_1+ t^2Q_2 = \cap_{i=3}^n I_i$. So if $a=0$, $I_W: s^2Q_1+ t^2Q_2 $
\[
\begin{array}{ccl}
& = & \langle u,t \rangle : s^2Q_1+ t^2Q_2 \ \cap \langle su+tv, (s, v)^2 \rangle : s^2Q_1+ t^2Q_2 \\
&=& \langle u,t \rangle \cap \langle su+tv, (s, v)^2 \rangle\\
&=&\langle su+tv, uv^2, tv^2, stv, s^2t \rangle\\
&=& \langle su+tv, I_3(\phi) \rangle,
\end{array}
\]
while if $a \ne 0$, $I_W: s^2Q_1+ t^2Q_2$
\[
\begin{array}{ccl}
& = & \langle v,s \rangle : s^2Q_1+ t^2Q_2 \ \cap \langle u,t \rangle: s^2Q_1+ t^2Q_2 \ \cap \langle u+v, s-t \rangle : s^2Q_1+ t^2Q_2\\
&=& \langle v, s \rangle \cap \langle u, t \rangle \cap \langle u+v, s-t \rangle \\
&=&\langle su+tv, uv(u+v), tv(u+v), tv(s-t), st(s-t) \rangle\\
&=& \langle su+tv, I_3(\phi) \rangle
\end{array}
\]
where
\[
\phi=\left[ \begin{array}{lll} t & 0 & 0\\ u & s & 0 \\ 0 & v & s \\ 0 & 0 & v \end{array}\right] \mbox{ if } a=0, \mbox{ and }
\phi=\left[ \begin{array}{lll} t & 0 & 0\\ u & s-t & 0 \\ 0 & u+v & s \\ 0 & 0 & v \end{array}\right] \mbox{ if } a \ne 0 .
\]
Since $I_3(\phi)$ has a Hilbert-Burch resolution, a resolution
of $I_W: p_2 = \langle su+tv, I_3(\phi)\rangle$ can be obtained as
the mapping cone of $I_3(\phi)$ with $I_3(\phi): p$. There are no
overlaps, so the result is a minimal resolution. However, there is
no need to do this, because the change of variables allows us to
do the computation directly and we find
\[
0 \leftarrow I_W:p_2 \longleftarrow \begin{array}{c}
(-1,-1)\\
\oplus \\
(0,-3)\\
\oplus \\
(-1,-2)\\
\oplus \\
(-2,-1)\\
\oplus \\
(-3,0)\\
\end{array} \longleftarrow
\begin{array}{c}
(-1,-3)^2\\
\oplus \\
(-2,-2)^2\\
\oplus \\
(-3,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-2,-3)\\
\oplus \\
(-3,-2)\\
\end{array} \leftarrow 0
\]
This concludes the proof if $p_3 = stv$. When $p_3 = s^2v$, the argument
proceeds in a similar, but simpler, fashion.
\end{proof}
\section{No linear first syzygies}
\subsection{Hilbert function}
\begin{prop}\label{hilb}
If $U$ is basepoint free, then there are six types of bigraded Hilbert functions, in one-to-one correspondence with the resolutions of Table 2. The tables below contain the values of $h_{i,j}=HF((i,j),R/I_U)$ for $i<6$, $j<5$, listed in the order corresponding to the six numerical types in Table~\ref{T2}.
\begin{small}
$$
\begin{matrix}
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 2 & 3 & 4 & 5 \\
1 & 2 & 4 & 6 & 8 & 10 \\
2 & 3 & 2 & 1 & 0 & 0 \\
3 & 4 & 0 & 0 & 0 &0 \\
4 & 5 & 0 & 0 & 0 &0 \\
5 & 6 & 0 & 0 & 0 &0 \\
\end{array}
&
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 2 & 3 & 4 & 5 \\
1 & 2 & 4 & 6 & 8 & 10 \\
2 & 3 & 2 & 1 & 1 & 1 \\
3 & 4 & 0 & 0 & 0 &0 \\
4 & 5 & 0 & 0 & 0 &0 \\
5 & 6 & 0 & 0 & 0 &0 \\
\end{array}
&
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 2 & 3 & 4 & 5 \\
1 & 2 & 4 & 6 & 8 & 10 \\
2 & 3 & 2 & 1 & 0 & 0 \\
3 & 4 & 1 & 0 & 0 &0 \\
4 & 5 & 0 & 0 & 0 &0 \\
5 & 6 & 0 & 0 & 0 &0 \\
\end{array}
\\ \\
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 2 & 3 & 4 & 5 \\
1 & 2 & 4 & 6 & 8 & 10 \\
2 & 3 & 2 & 1 & 1 & 1 \\
3 & 4 & 1 & 0 & 0 &0 \\
4 & 5 & 0 & 0 & 0 &0 \\
5 & 6 & 0 & 0 & 0 &0 \\
\end{array}
&
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 2 & 3 & 4 & 5 \\
1 & 2 & 4 & 6 & 8 & 10 \\
2 & 3 & 2 & 2 & 2 & 2 \\
3 & 4 & 0 & 0 & 0 &0 \\
4 & 5 & 0 & 0 & 0 &0 \\
5 & 6 & 0 & 0 & 0 &0 \\
\end{array}
&
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 2 & 3 & 4 & 5 \\
1 & 2 & 4 & 6 & 8 & 10 \\
2 & 3 & 2 & 3 & 4 & 5 \\
3 & 4 & 0 & 0 & 0 &0 \\
4 & 5 & 0 & 0 & 0 &0 \\
5 & 6 & 0 & 0 & 0 &0 \\
\end{array}
\end{matrix}
$$
\end{small}
\end{prop}
The entries of the first two rows and the first column are clear: $$h_{0,j}=HF((0,j),R)=j+1,\quad h_{1,j}=HF((1,j),R)=2j+2,$$ and $h_{i,0}=HF((i,0),R)=i+1$. Furthermore, $h_{2,1}=2$ by the linear independence of the minimal generators of $I_U$. The proof of the proposition is based on the following lemmas concerning $h_{i,j}$, $i\geq 2$, $j\geq 1$.
\begin{lem} \label{ABlemma}
For $I_U$ an ideal generated by four independent forms of bidegree $(2,1)$
\begin{enumerate}
\item $h_{3,1}$ is the number of bidegree $(1,0)$ first syzygies
\item $h_{2,2}-1$ is the number of bidegree $(0,1)$ first syzygies
\end{enumerate}
\end{lem}
\begin{proof}
From the free resolution
$$0 \leftarrow R/I_U \leftarrow R \leftarrow R(-2,-1)^4 \longleftarrow \begin{array}{c}
R(-2,-2)^B\\
\oplus \\
R(-3,-1)^A\\
\oplus \\
F_1
\end{array} \longleftarrow F_2 \longleftarrow F_3\leftarrow 0,$$
we find
$$h_{3,1}=HF((3,1),R)-4HF((1,0),R)+AHF((0,0),R)=8-8+A=A$$
$$h_{2,2}=HF((2,2),R)-4HF((0,1),R)+BHF((0,0),R)=9-8+B=B+1,$$
since the $F_i$ are free $R$-modules generated in bidegrees $(a,b)$ with
$a>3$ or $b>2$. \end{proof}
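The two computations above reduce to arithmetic with the bigraded Hilbert function of a free module, $HF\big((i,j),R(-a,-b)\big)=(i-a+1)(j-b+1)$ when $i\geq a$ and $j\geq b$, and $0$ otherwise. As an informal sanity check, outside the formal argument, they can be scripted in a few lines; the helper name \texttt{hf} is ours:

```python
def hf(i, j, a=0, b=0):
    """Bigraded Hilbert function HF((i,j), R(-a,-b)) for R = k[s,t,u,v]
    with bideg s = bideg t = (1,0) and bideg u = bideg v = (0,1)."""
    return (i - a + 1) * (j - b + 1) if i >= a and j >= b else 0

# h_{3,1} = HF((3,1),R) - 4*HF((3,1),R(-2,-1)) + A*HF((3,1),R(-3,-1)):
# a bidegree (1,0) first syzygy sits in a summand generated in bidegree (3,1).
print(hf(3, 1) - 4 * hf(3, 1, 2, 1), hf(3, 1, 3, 1))  # -> 0 1, so h_{3,1} = A
# h_{2,2} = HF((2,2),R) - 4*HF((2,2),R(-2,-1)) + B*HF((2,2),R(-2,-2)):
# a bidegree (0,1) first syzygy sits in a summand generated in bidegree (2,2).
print(hf(2, 2) - 4 * hf(2, 2, 2, 1), hf(2, 2, 2, 2))  # -> 1 1, so h_{2,2} = B + 1
```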
\begin{lem}
If $U$ is basepoint free, then $h_{3,2}=0$ for every numerical type.
\end{lem}
\begin{proof}
If there are no bidegree $(1,0)$ syzygies, then $HF((3,1),R/I_U)=0$ by Lemma \ref{ABlemma}; since the multiplication map $R_{0,1}\otimes R_{3,1}\rightarrow R_{3,2}$ is surjective, it follows that $HF((3,2),R/I_U)=0$.
If there are bidegree $(1,0)$ syzygies then we are in Type 3 or 4 where by Proposition \ref{LS3-10} we know the relevant part of the resolution is
$$0 \leftarrow R/I_U \leftarrow R \leftarrow R(-2,-1)^4 \longleftarrow \begin{array}{c}
R(-3,-1)\\
\oplus \\
R(-3,-2)^2\\
\oplus \\
F_1
\end{array} \longleftarrow F_2 \longleftarrow F_3\leftarrow 0$$
Then
$$\begin{array}{ccl}
h_{3,2} &= &HF((3,2),R)-4HF((1,1),R)+HF((0,1),R)+2HF((0,0),R) \\
& = &12-16+2+2=0.
\end{array}
$$
\end{proof}
So far we have determined the following shape of the Hilbert function of $R/I_U$:
\begin{small}
$$\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 2 & 3 & 4 & 5 \\
1 & 2 & 4 & 6 & 8 & 10 \\
2 & 3 & 2 & h_{2,2}& h_{2,3} & h_{2,4} \\
3 & 4 & h_{3,1} & 0 & 0 &0 \\
4 & 5 & h_{4,1} & 0 & 0 &0 \\
5 & 6 & h_{5,1} & 0 & 0 &0 \\
\end{array}$$
\end{small}
If linear syzygies are present we know from the previous section the exact description of the possible minimal resolutions of $I_U$ and it is an easy check that they agree with the last four Hilbert functions in Proposition \ref{hilb}. Next we focus on the case when no linear syzygies are present. By
Lemma \ref{ABlemma} this yields $h_{2,2}=1$ and $h_{3,1}=0$, hence $h_{i,1}=0$ for $i\geq 3$. We show that in the absence of linear syzygies only the first two Hilbert functions in Proposition \ref{hilb} may occur.
\subsection{Types 1 and 2}
In the following we assume that the basepoint free ideal $I_U$ has no linear syzygies. We first determine the maximal numerical types which correspond to the Hilbert functions found in \S 4.1 and then we show that only the Betti
numbers corresponding to linear syzygies cancel.
\begin{prop}\label{HFnoLin}
If $U$ is basepoint free and $I_U$ has no linear syzygies, then
\begin{enumerate}
\item $I_U$ cannot have two or more linearly independent bidegree $(0,2)$ first syzygies
\item $I_U$ cannot have two minimal first syzygies of bidegrees $(0,2)$, $(0,j)$, $j> 2$
\item $I_U$ has a single bidegree $(0,2)$ minimal syzygy iff $h_{2,j}=1$ for $j\geq 3$
\item $I_U$ has no bidegree $(0,2)$ minimal syzygy iff $h_{2,j}=0$ for $j\geq 3$
\end{enumerate}
\end{prop}
\begin{proof}
(1) Suppose $I_U$ has two linearly independent bidegree $(0,2)$ first syzygies which can be written down by a similar procedure to the one used in Lemma \ref{LS1} as
$$\begin{matrix}
u^2p+uvq+v^2r & =& 0\\
u^2p'+uvq'+v^2r' & =& 0
\end{matrix}$$
with $p,q,r,p',q',r' \in U$. Write $p=p_1u+p_2v$ with $p_1,p_2\in R_{2,0}$ and similarly for $p',q,q',r,r'$. Substituting in the equations above one obtains
$$\begin{matrix}
p_1 & =& 0 & & p'_1& = & 0\\
p_2+q_1 & =& 0 & & p'_2 +q_1'& = & 0\\
q_2+r_1 & =& 0 & & q'_2+r'_1 & = & 0\\
r_2 & =& 0 & & r'_2 & = & 0\\
\end{matrix}$$
hence
\[
\begin{array}{cccccc}
p & =&p_2v,\mbox{ } & p' &= &p'_2v\\
q & =& -(p_2u+r_1v), \mbox{ } & q'&=&-(p'_2u+r'_1v)\\
r &= &r_1u, \mbox{ } & r'&=&r'_1u
\end{array}
\]
are elements of $I_U$. If both of the pairs $p_2v, p'_2v$ and $r_1u, r'_1u$ consist of linearly independent elements of $R_{2,1}$, then $U\cap \Sigma_{2,1}$ contains a ${\mathbb{P}}^1$ inside each of the ${\mathbb{P}}^2$ fibers over the points corresponding to $u,v$ in the ${\mathbb{P}}^1$ factor of the map ${\mathbb{P}}^2\times {\mathbb{P}}^1\rightarrow {\mathbb{P}}^5$. Pulling back the two lines from $\Sigma_{2,1}$ to the domain of its defining map, one obtains two lines in ${\mathbb{P}}^2$ which must meet (or be identical). Taking the image of the intersection point, we get two elements of the form $\alpha u, \alpha v \in I_U$, which yield a $(0,1)$ syzygy, contradicting our assumption. Therefore it must be the case that $p_2'=ap_2$ or $r_1'=br_1$ with $a,b\in k$. The reasoning being identical, we shall only analyze the case $p_2'=ap_2$. A linear combination of the elements $q=-(p_2u+r_1v), q'=-(p'_2u+r'_1v)\in I_U$ produces $(r'_1-ar_1)v\in I_U$, and a linear combination of the elements $r_1u, r'_1u\in I_U$ produces $(r'_1-ar_1)u\in I_U$; hence again we obtain a $(0,1)$ syzygy unless $r'_1=ar_1$. But then $(p',q',r')=a(p,q,r)$, and these triples yield linearly dependent bidegree $(0,2)$ syzygies.
(2) The assertion that $I_U$ cannot have a bidegree $(0,2)$ and a (distinct) bidegree $(0,j)$, $j\geq 2$ minimal first syzygy is proved by induction on $j$. The base case $j=2$ has already been settled. Assume $I_U$ has a
bidegree $(0,2)$ syzygy $u^2p+uvq+v^2r =0$ with $p=p_1u+p_2v,q,r\in I_U$ and a bidegree $(0,j)$ syzygy $u^jw_1+u^{j-1}vw_2+\ldots +v^jw_{j+1}=0$ with $w_i=y_iu+z_iv\in I_U$. Then as before $p_1=0, y_1=0,r_2=0,z_{j+1}=0$ and the same reasoning shows one must have $z_1=ap_2$ or $y_{j+1}=br_1$. Again we handle the case $z_1=ap_2$, where a linear combination of the two syzygies produces the new syzygy
\[
u^{j-1}v(w_2-aq)+u^{j-2}v^2(w_3-ar)+u^{j-3}v^3w_4+\ldots +v^jw_{j+1}=0.
\]
Dividing by $v$: $u^{j-1}(w_2-aq)+u^{j-2}v(w_3-ar)+u^{j-3}v^2w_4+\ldots +v^{j-1}w_{j+1}=0$, which is a minimal bidegree $(0,j-1)$ syzygy iff the original $(0,j)$ syzygy was minimal. This contradicts the induction hypothesis.
(3) An argument similar to Lemma \ref{ABlemma} shows that in the absence of $(0,1)$ syzygies $h_{2,3}$ is equal to the number of bidegree $(0,2)$ syzygies on $I_U$. Note that the absence of $(0,1)$ syzygies implies there can be no bidegree $(0,2)$ second syzygies of $I_U$ to cancel the effect of bidegree $(0,2)$ first syzygies on the Hilbert function. This covers the converse implications of both (3) and (4) as well as the case $j=3$ of the direct implications. The computation of $h_{2,j}$, $j\geq 3$ is completed as follows
$$\begin{matrix}
h_{2,j} &= & HF((2,j),R)-HF((2,j),R(-2,-1)^4)+HF((2,j),R(-2,-3))\\
& = &3(j+1)-4j+(j-2)\\
& = &1.
\end{matrix}
$$
(4) In this case we compute
$$h_{2,3}=HF((2,3),R)-HF((2,3),R(-2,-1)^4)=12-12=0$$
$$HF((2,4),R)-HF((2,4),R(-2,-1)^4)=15-16=-1$$
The fact that $h_{2,j}=0$ for higher values of $j$ follows from $h_{2,3}=0$. In fact even more is true: $I_U$ is forced to have a single bidegree $(0,3)$ first syzygy to ensure that $h_{2,j}=0$ for $j\geq 4$.
\end{proof}
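The arithmetic in parts (3) and (4) can be checked mechanically. The sketch below is an informal aid, not part of the proof; the helper \texttt{hf} is again our own name for the bigraded Hilbert function of a free module:

```python
def hf(i, j, a=0, b=0):
    # HF((i,j), R(-a,-b)) = (i-a+1)*(j-b+1) over R = k[s,t,u,v],
    # bideg s = bideg t = (1,0), bideg u = bideg v = (0,1); 0 if undefined.
    return (i - a + 1) * (j - b + 1) if i >= a and j >= b else 0

# Part (3): a single bidegree (0,2) first syzygy contributes a summand
# R(-2,-3), and h_{2,j} = 3(j+1) - 4j + (j-2) = 1 for all j >= 3.
for j in range(3, 10):
    assert hf(2, j) - 4 * hf(2, j, 2, 1) + hf(2, j, 2, 3) == 1

# Part (4): with no bidegree (0,2) syzygy, h_{2,3} = 0, while the naive
# count at (2,4) is -1, forcing a single bidegree (0,3) first syzygy.
print(hf(2, 3) - 4 * hf(2, 3, 2, 1))                     # -> 0
print(hf(2, 4) - 4 * hf(2, 4, 2, 1))                     # -> -1
print(hf(2, 4) - 4 * hf(2, 4, 2, 1) + hf(2, 4, 2, 4))    # -> 0
```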
\begin{cor}
There are only two possible Hilbert functions for basepoint free ideals $I_U$
without linear syzygies, depending on whether there is no $(0,2)$ syzygy or exactly one $(0,2)$ syzygy. The two possible Hilbert functions are
\begin{small}
$$
\begin{matrix}
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 2 & 3 & 4 & 5 \\
1 & 2 & 4 & 6 & 8 & 10 \\
2 & 3 & 2 & 1 & 0 & 0 \\
3 & 4 & 0 & 0 & 0 &0 \\
4 & 5 & 0 & 0 & 0 &0 \\
5 & 6 & 0 & 0 & 0 &0 \\
\end{array}
& & &
\begin{array}{c|ccccc}
& 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & 2 & 3 & 4 & 5 \\
1 & 2 & 4 & 6 & 8 & 10 \\
2 & 3 & 2 & 1 & 1 & 1 \\
3 & 4 & 0 & 0 & 0 &0 \\
4 & 5 & 0 & 0 & 0 &0 \\
5 & 6 & 0 & 0 & 0 &0 \\
\end{array}
\end{matrix}
$$
\end{small}
\end{cor}
\begin{prop}\label{syznoLin}
If $U$ is basepoint free and $I_U$ has no linear syzygies, then $I_U$ has
\begin{enumerate}
\item exactly 4 bidegree $(1,1)$ first syzygies
\item exactly 2 bidegree $(2,0)$ first syzygies
\end{enumerate}
\end{prop}
\begin{proof}
Note that there cannot be any second syzygies in bidegrees $(1,1)$ and $(2,0)$ because of the absence of linear first syzygies. Thus the numbers $\beta^1_{3,2}, \beta^1_{4,1}$ of bidegree $(1,1)$ and $(2,0)$ first syzygies are determined by the Hilbert function:
$$\beta^1_{3,2}=h_{3,2}-HF((3,2),R)+HF((3,2),R(-2,-1)^4)=0-12+16=4$$
$$\beta^1_{4,1}=h_{4,1}-HF((4,1),R)+HF((4,1),R(-2,-1)^4)=0-10+12=2$$
\end{proof}
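Both counts are elementary arithmetic and can be verified directly; the following informal check (with our own helper \texttt{hf}) reproduces them:

```python
def hf(i, j, a=0, b=0):
    # HF((i,j), R(-a,-b)) over R = k[s,t,u,v] with the paper's bigrading.
    return (i - a + 1) * (j - b + 1) if i >= a and j >= b else 0

# Bidegree (1,1) first syzygies contribute summands R(-3,-2); h_{3,2} = 0.
beta1_32 = 0 - hf(3, 2) + 4 * hf(3, 2, 2, 1)
# Bidegree (2,0) first syzygies contribute summands R(-4,-1); h_{4,1} = 0.
beta1_41 = 0 - hf(4, 1) + 4 * hf(4, 1, 2, 1)
print(beta1_32, beta1_41)  # -> 4 2
```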
Next we obtain upper bounds on the bigraded Betti numbers of $I_U$ by using bigraded initial ideals. The concept of initial ideal with respect to any fixed term order is well known and so is the cancellation principle asserting that the resolution of an ideal can be obtained from that of its initial ideal by cancellation of some consecutive syzygies of the same bidegree. In general the problem of determining which cancellations occur is very difficult. In the following we exploit the cancellation principle by using the bigraded setting to our advantage. For the initial ideal computations we use the revlex order induced by $s>t>u>v$.
In \cite{ACD}, Aramova, Crona and de Negri introduce bigeneric initial ideals as follows (we adapt the definition to our setting): let $G={\bf{GL}}(2,2) \times {\bf{GL}}(2,2)$ with an element $g=(d_{ij}, e_{kl}) \in G$
acting on the variables in $R$ by
$$
g: s \mapsto d_{11}s+d_{12}t , \ t \mapsto d_{21}s+d_{22}t ,\ u \mapsto e_{11}u+e_{12}v , \ v \mapsto e_{21}u+e_{22}v.
$$
We shall make use of the following results of \cite{ACD}.
\begin{thm}\label{thmacd}$[$\cite{ACD}, Theorem 1.4$]$
Let $I \subset R$ be a bigraded ideal. There is a Zariski open set $U'$ in $G$ and an ideal $J$ such that for all $g\in U'$ we have $in(g(I))=J$.
\end{thm}
\begin{defn}The ideal $J$ in Theorem~\ref{thmacd} is defined to be the bigeneric initial ideal of $I$, denoted by $bigin(I)$.
\end{defn}
\begin{defn}
A monomial ideal $I\subset R$ is bi-Borel fixed if $g(I)=I$ for any upper triangular matrix $g\in G$.
\end{defn}
\begin{defn}
\label{stonglybistabil}
A monomial ideal $I\subset R=k[s,t,u,v]$ is strongly bistable if for
every monomial $m\in I$ the following conditions are satisfied:
\begin{enumerate}
\item if $m$ is divisible by $t$, then $sm/t\in I$.
\item if $m$ is divisible by $v$, then $um/v\in I$ .
\end{enumerate}
\end{defn}
As in the $\mathbb{Z}$-graded case, the ideal $bigin(I)$ has the same
bigraded Hilbert function as $I$. Propositions 1.5 and 1.6
of \cite{ACD} show that $bigin(I)$ is bi-Borel fixed, and in
characteristic zero, $bigin(I)$ is strongly bistable.
\begin{prop}\label{bigin}
For each of the Hilbert functions in Proposition \ref{HFnoLin} there are exactly two strongly bistable monomial ideals realizing it. These ideals and their respective bigraded resolutions are:
\begin{enumerate}
\item $G_1=\langle s^2u,s^2v,stu,stv,t^2u^2,t^2uv,t^3u,t^3v,t^2v^3\rangle $ with minimal resolution
\begin{small}
\begin{equation}\label{bigin1}
0 \leftarrow G_1 \leftarrow \begin{array}{c}
(-2,-1)^4 \\
\oplus \\
(-2,-2)^2\\
\oplus \\
(-2,-3)\\
\oplus \\
(-3,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-2,-2)^2\\
\oplus \\
(-2,-3)\\
\oplus \\
(-2,-4)\\
\oplus \\
(-3,-1)^2\\
\oplus \\
(-3,-2)^5\\
\oplus \\
(-3,-3)^2\\
\oplus \\
(-4,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-3,-2)\\
\oplus \\
(-3,-3)^2\\
\oplus \\
(-3,-4)^2\\
\oplus \\
(-4,-2)^3\\
\oplus \\
(-4,-3)\\
\end{array} \longleftarrow
\begin{array}{c}
(-4,-3)\\
\oplus \\
(-4,-4) \end{array}\leftarrow 0
\end{equation}
\end{small}
$G'_1=\langle s^2u,s^2v,stu,t^2u,stv^2,st^2v,t^3v,t^2v^3\rangle $ with minimal resolution
\begin{small}
\begin{equation}\label{bigin1'}
0 \leftarrow G'_1 \leftarrow \begin{array}{c}
(-2,-1)^4 \\
\oplus \\
(-2,-2)\\
\oplus \\
(-2,-3)\\
\oplus \\
(-3,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-2,-2)\\
\oplus \\
(-2,-3)\\
\oplus \\
(-2,-4)\\
\oplus \\
(-3,-1)^2\\
\oplus \\
(-3,-2)^4\\
\oplus \\
(-3,-3)^2\\
\oplus \\
(-4,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-3,-3)^2\\
\oplus \\
(-3,-4)^2\\
\oplus \\
(-4,-2)^3\\
\oplus \\
(-4,-3)\\
\end{array} \longleftarrow
\begin{array}{c}
(-4,-3)\\
\oplus \\
(-4,-4) \end{array}\leftarrow 0
\end{equation}
\end{small}
\item $G_2=\langle s^2u,s^2v,stu,stv,t^2u^2,t^2uv,t^3u,t^3v\rangle $ with minimal resolution
\begin{small}
\begin{equation}\label{bigin2}
0 \leftarrow G_2 \leftarrow \begin{array}{c}
(-2,-1)^4 \\
\oplus \\
(-2,-2)^2\\
\oplus \\
(-3,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-2,-2)^2\\
\oplus \\
(-2,-3)\\
\oplus \\
(-3,-1)^2\\
\oplus \\
(-3,-2)^5\\
\oplus \\
(-4,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-3,-2)\\
\oplus \\
(-3,-3)^2\\
\oplus \\
(-4,-2)^3\\
\end{array} \longleftarrow
\begin{array}{c}
(-4,-3)\\
\end{array}\leftarrow 0
\end{equation}
\end{small}
$G'_2=\langle s^2u,s^2v,stu,t^2u,stv^2,st^2v,t^3v \rangle$ with minimal resolution
\begin{small}
\begin{equation}\label{bigin2'}
0 \leftarrow G'_2 \leftarrow \begin{array}{c}
(-2,-1)^4 \\
\oplus \\
(-2,-2)\\
\oplus \\
(-3,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-2,-2)\\
\oplus \\
(-2,-3)\\
\oplus \\
(-3,-1)^2\\
\oplus \\
(-3,-2)^4\\
\oplus \\
(-4,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-3,-3)^2\\
\oplus \\
(-4,-2)^3\\
\end{array} \longleftarrow
(-4,-3) \leftarrow 0
\end{equation}
\end{small}
\end{enumerate}
\end{prop}
\begin{proof}
There are only two strongly bistable sets of four monomials in $R_{2,1}$: $\{s^2u, s^2v,$ $stu, stv\}$ and $\{s^2u, s^2v, stu, t^2u\}$. To complete $\{s^2u, s^2v,$ $stu, stv\}$ to an ideal realizing one of the Hilbert functions in Proposition \ref{HFnoLin} we need two additional monomials in $R_{2,2}$, which must be $t^2u^2, t^2uv$ in order to preserve bistability.
Then we must add the two remaining monomials $t^3u,t^3v$ in $R_{3,1}$,
which yields the second Hilbert function. To realize the first
Hilbert function we must also include the
remaining monomial $t^2v^3 \in R_{2,3}$.
To complete $\{s^2u, s^2v, stu, t^2u\}$ to an ideal realizing one of the Hilbert functions in Proposition \ref{HFnoLin}, we need one additional monomial in $R_{2,2}$ which must be $stv^2$ in order to preserve bistability. Then
we must add the two remaining monomials $st^2v,t^3v \in R_{3,1}$. Then to
realize the first Hilbert function, we must add the remaining monomial
$t^2v^3 \in R_{2,3}$.
\end{proof}
\begin{thm}\label{T1T2res}
There are two numerical types for the minimal Betti numbers of basepoint free ideals $I_U$ without linear syzygies.
\begin{enumerate}
\item If there is a bidegree $(0,2)$ first syzygy then $I_U$ has numerical Type 2.
\item If there is no bidegree $(0,2)$ first syzygy then $I_U$ has numerical Type 1.
\end{enumerate}
\end{thm}
\begin{proof}
Proposition \ref{HFnoLin} establishes that the two situations above are the only possibilities in the absence of linear syzygies and gives the Hilbert function corresponding to each of the two cases. Proposition \ref{bigin} identifies the possible bigeneric initial ideals for each case. Since these bigeneric initial ideals are initial ideals obtained following a change of coordinates, the cancellation principle applies. We now show the resolutions (\ref{bigin1}), (\ref{bigin1'}) must cancel to the Type 1 resolution and the resolutions (\ref{bigin2}), (\ref{bigin2'}) must cancel to the Type 2 resolution.
Since $I_U$ is assumed to have no linear syzygies, all linear syzygies appearing in the resolution of its bigeneric initial ideal must cancel.
Combined with Proposition \ref{syznoLin}, this establishes that in (\ref{bigin2}) or (\ref{bigin2'}) the linear cancellations are the only ones that occur. In (\ref{bigin1}), the cancellations of generators and first syzygies in bidegrees $(2,2), (2,3), (3,1)$ are obvious. The second syzygy in bidegree $(3,2)$
depends on the cancelled first syzygies, therefore it must also be cancelled.
This is natural, since by Proposition \ref{syznoLin}, there are exactly
four bidegree $(3,2)$ first syzygies. An examination of the maps in the
resolution (\ref{bigin1}) shows that the bidegree $(3,3)$ second syzygies
depend on the cancelled first syzygies, so they too must cancel. Finally the bidegree $(4,3)$ last syzygy depends on the previous cancelled second syzygies and so must also cancel.
In (\ref{bigin1'}), the cancellations of generators and first syzygies in bidegrees $(2,2)$, $(2,3)$, $(3,1)$ are obvious. The second syzygies of bidegree $(3,3)$ depend only on the
cancelled first syzygies, so they too cancel. Finally the bidegree $(4,3)$ last syzygy depends on the previous cancelled second syzygies and so it must also cancel.
\end{proof}
\section{Primary decomposition}
\begin{lem}\label{PD1}
If $U$ is basepoint free, all embedded primes of $I_U$ are of the
form
\begin{enumerate}
\item $\langle s,t,l(u,v) \rangle$
\item $\langle u,v,l(s,t) \rangle$
\item $\mathfrak{m} = \langle s,t,u,v \rangle$
\end{enumerate}
\end{lem}
\begin{proof}
Since $\sqrt{I_U} = \langle s,t\rangle \cap \langle u,v \rangle$,
an embedded prime must contain either $\langle s,t\rangle$
or $\langle u,v\rangle$ and modulo these ideals any remaining minimal generators can be considered as irreducible polynomials in
$k[u,v]$ or $k[s,t]$ (respectively). But the only prime ideals here
are $\langle l_i \rangle$ with $l_i$ a linear form, or the irrelevant
ideal.
\end{proof}
\begin{lem}\label{PDD1}
If $U$ is basepoint free, then the primary components corresponding to
minimal associated primes of $I_U$ are
\begin{enumerate}
\item $Q_1=\langle u,v \rangle$
\item $Q_2=\langle s,t \rangle^2$ or $Q_2=\langle p,q \rangle$, with $p,q \in R_{2,0}$ \mbox{ and }$\sqrt{p,q} = \langle s,t \rangle$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $Q_1, Q_2$ be the primary components associated to $\langle u,v \rangle$ and $\langle s,t \rangle$ respectively. Since $I_U\subset Q_1\subset \langle u,v \rangle$ and $I_U$ is generated in bidegree $(2,1)$, $Q_1$ must contain at least one element of bidegree $(0,1)$. If $Q_1$ contains exactly one element $p(u,v)$ of bidegree $(0,1)$, then $V$ is contained in the fiber of $\Sigma_{2,1}$ over the point $V(p(u,v))$, which contradicts the basepoint free assumption. Therefore $Q_1$ must contain two independent linear forms in $u,v$ and hence $Q_1=\langle u,v \rangle$.
Since $I_U\subset Q_2$ and $I_U$ contains elements of bidegree $(2,1)$, $Q_2$ must contain at least one element of bidegree $(2,0)$. If $Q_2$ contains exactly one element $q(s,t)$ of bidegree $(2,0)$, then $V$ is contained in the fiber of $\Sigma_{2,1}$ over the point $V(q(s,t))$, which contradicts the basepoint free assumption. If $Q_2$ contains exactly two elements of bidegree $(2,0)$ which share a common linear factor $l(s,t)$, then $I_U$ is contained in the ideal $\left<l(s,t)\right>$, which contradicts the basepoint free assumption as well. Since the bidegree $(2,0)$ part of $Q_2$ is contained in the linear span of $s^2,t^2,st$, it follows that the only possibilities consistent with the conditions above are $Q_2=\langle p,q \rangle$ with $\sqrt{p,q} =\langle s,t \rangle$ or $Q_2=\langle s^2,t^2,st \rangle$.
\end{proof}
\begin{prop}\label{PD2}
For each type of minimal free resolution of $I_U$ with $U$ basepoint
free, the embedded primes of $I_U$ are as in Table 1.
\end{prop}
\begin{proof}
First observe that $\mathfrak{m}=\langle s,t,u,v \rangle$
is an embedded prime for
each of Type 1 to Type 4. This follows since the respective
free resolutions have length four, so
\[
{\mathrm{Ext}}^4_R(R/I_U,R) \ne 0.
\]
By local duality, this is true iff $H^0_{\mathfrak{m}}(R/I_U) \ne 0$
iff $\mathfrak{m} \in {\mathrm{Ass}}(I_U)$. Since the resolutions for Type 5
and Type 6 have projective dimension less than four,
this also shows that in Type 5 and Type 6, $\mathfrak{m} \not\in {\mathrm{Ass}}(I_U)$.
Corollary~\ref{T5PD} and Proposition~\ref{LS2} show the embedded
primes for Type 5 and Type 6 are as in Table 1.
Thus, by Lemma~\ref{PD1}, all that remains is to study primes of
the form $\langle s,t,L(u,v) \rangle$ and $\langle u,v,L(s,t) \rangle$
for Type 1 through 4. For this, suppose
\[
I_U = I_1 \cap I_2 \cap I_3, \mbox{ where}
\]
\begin{enumerate}
\item $I_1$ is the intersection of primary components corresponding
to the two minimal associated primes identified in Lemma~\ref{PDD1}.
\item $I_2$ is the intersection of embedded primary components
not primary to $\mathfrak{m}$.
\item $I_3$ is primary to $\mathfrak{m}$.
\end{enumerate}
By Lemma~\ref{PDD1},
if $I_1 = \langle u,v \rangle \cap \langle p,q \rangle$ with
$\sqrt{p,q} = \langle s,t \rangle$, then $I_1$ is basepoint free
and is generated by four elements of bidegree $(2,1)$, thus $I_U = I_1$ and
has a Type 6 primary decomposition. So we may assume $I_1 =\langle u,v \rangle \cap \langle s,t\rangle^2$.
Now we switch gears and consider all ideals in the $\mathbb{Z}$--grading
where the variables have degree one. In the $\mathbb{Z}$--grading
\[
HP(R/I_1,t) = 4t+2.
\]
Since the Hilbert polynomials of $R/(I_1\cap I_2)$ and
$R/I_U$ are identical, we can compute the Hilbert polynomials of
$R/(I_1\cap I_2)$ for Type 1 through 4 using Theorems~\ref{T1T2res},
\ref{Type4MC} and \ref{T3res}. For example, in Type 1, the
bigraded minimal free resolution is
\[
0 \leftarrow I_U \leftarrow (-2,-1)^4 \longleftarrow \begin{array}{c}
(-2,-4)\\
\oplus \\
(-3,-2)^4\\
\oplus \\
(-4,-1)^2\\
\end{array} \longleftarrow
\begin{array}{c}
(-3,-4)^2\\
\oplus \\
(-4,-2)^3\\
\end{array} \longleftarrow
(-4,-4) \leftarrow 0.
\]
Therefore, the $\mathbb{Z}$--graded minimal free resolution is
\[
0 \leftarrow I_U \leftarrow (-3)^4 \longleftarrow \begin{array}{c}
(-6)\\
\oplus \\
(-5)^6\\
\end{array} \longleftarrow
\begin{array}{c}
(-7)^2\\
\oplus \\
(-6)^3\\
\end{array} \longleftarrow
(-8) \leftarrow 0.
\]
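As a quick plausibility check (not part of the argument), the $\mathbb{Z}$--graded Hilbert polynomial $4t+2$ can be recovered from this resolution as an alternating sum of binomial coefficients; the helper \texttt{hp} below is an ad hoc name of ours:

```python
from math import comb

def hp(t):
    """Hilbert function of R/I_U in degree t, computed from the Z-graded
    Type 1 resolution above: HF(t, R(-d)) = C(t-d+3, 3) for t >= d, else 0."""
    b = lambda d: comb(t - d + 3, 3) if t >= d else 0
    return b(0) - 4 * b(3) + (b(6) + 6 * b(5)) - (2 * b(7) + 3 * b(6)) + b(8)

print(all(hp(t) == 4 * t + 2 for t in range(8, 40)))  # -> True
```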
Carrying this out for the other types shows that
the $\mathbb{Z}$--graded Hilbert polynomial of $R/(I_1 \cap I_2)$ is
\begin{enumerate}
\item In Type 1 and Type 3:
\[
HP(R/(I_1 \cap I_2),t) = 4t+2.
\]
\item In Type 2 and Type 4:
\[
HP(R/(I_1 \cap I_2),t) = 4t+3.
\]
\end{enumerate}
In particular, for Type 1 and Type 3,
\[
HP(R/I_1,t) = HP(R/(I_1 \cap I_2),t),
\]
and in Type 2 and Type 4, the Hilbert polynomials differ by one:
\[
HP(I_1/(I_1 \cap I_2), t) = 1.
\]
Now consider the short exact sequence
\[
0 \longrightarrow I_1 \cap I_2 \longrightarrow I_1 \longrightarrow
I_1/(I_1 \cap I_2) \longrightarrow 0.
\]
Since $I_1\cap I_2 \subseteq I_1$, in Type 1 and Type 3 where
the Hilbert Polynomials are equal, there can be no embedded
primes save $\mathfrak{m}$. In Type 2 and Type 4,
since $HP(I_1/(I_1 \cap I_2),t)=1$, $I_1/(I_1 \cap I_2)$ is supported
at a point of ${\mathbb{P}}^3$ which corresponds to a codimension three
prime ideal of the form $\langle l_1,l_2,l_3\rangle$. Switching
back to the fine grading, by Lemma~\ref{PD1}, this
prime must be either $\langle s,t,l(u,v)\rangle$
or $\langle u,v,l(s,t) \rangle$. Considering the multidegrees
in which the Hilbert function of $I_1/(I_1 \cap I_2)$ is nonzero
shows that the embedded prime is of type $\langle s,t,l(u,v)\rangle$.
\end{proof}
\section{The approximation complex and implicit equation of $X_U$}
The method of using moving lines and moving quadrics to
obtain the implicit equation of a curve or surface was
developed by Sederberg and collaborators in \cite{sc},
\cite{sgd}, \cite{ssqk}. In \cite{cox2}, Cox gives a nice
overview of this method and makes explicit the connection
to syzygies. In the case of tensor product surfaces these
methods were first applied by Cox-Goldman-Zhang in \cite{cgz}.
The approximation complex was introduced by Herzog-Simis-Vasconcelos in
\cite{hsv1},\cite{hsv2}. From a mathematical perspective,
the relation between the implicit equation and syzygies comes
from work of Bus\'e-Jouanolou \cite{bj} and Bus\'e-Chardin \cite{bc}
on approximation complexes and the Rees algebra; their work was
extended to the multigraded setting in \cite{bot}, \cite{bdd}.
The next theorem follows from work of Botbol-Dickenstein-Dohm \cite{bdd}
on toric surface parameterizations, and also from a more general
result of Botbol \cite{bot}. The novelty of our approach is
that by obtaining an explicit description of the syzygies, we
obtain both the implicit equation for the surface and a description
of the singular locus. Theorem~\ref{SingX} gives a particularly
interesting connection between syzygies of $I_U$ and singularities of $X_U$.
\begin{thm}\label{ImX}
If $U$ is basepoint free, then the implicit equation
for $X_U$ is determinantal, obtained from the $4 \times 4$ minor
of the first map of the approximation complex $\mathcal{Z}$
in bidegree $(1,1)$, except for Type 6, where $\phi_U$ is not birational.
\end{thm}
\subsection{Background on approximation complexes}
We give a brief overview of approximation complexes,
for an extended survey see \cite{c}. For
\[
I = \langle f_1, \ldots, f_n \rangle \subseteq R=k[x_1,\ldots, x_m],
\]
let $K_i \subseteq \Lambda^i(R^n)$ be the kernel of
the $i^{th}$ Koszul differential on $\{f_1, \ldots, f_n\}$,
and $S = R[y_1,\ldots, y_n]$. Then
the approximation complex $\mathcal{Z}$ has $i^{th}$ term
\[
\mathcal{Z}_i = S \otimes_R K_i.
\]
The differential is the Koszul differential on $\{y_1, \ldots, y_n\}$.
It turns out that $H_0(\mathcal{Z})$ is $S_I$ and the higher homology
depends (up to isomorphism) only on $I$. For $\mu$ a bidegree in $R$, define
\[
\mathcal{Z}^{\mu} \mbox{ : } \cdots \longrightarrow
k[y_1,\ldots, y_n]\otimes_k (K_i)_\mu \stackrel{d_i}{\longrightarrow}k[y_1,\ldots, y_n]\otimes_k (K_{i-1})_\mu \stackrel{d_{i-1}}{\longrightarrow}\cdots
\]
If the bidegree $\mu$ and base locus of $I$ satisfy certain conditions,
then the determinant of $\mathcal{Z}^{\mu}$ is a power of the implicit
equation of the image. This was first proved in \cite{bj}. In Corollary 14
of \cite{bdd}, Botbol-Dickenstein-Dohm give a specific bound for $\mu$ in
the case of a toric surface and map with zero-dimensional base locus and
show that in this case the gcd of the maximal minors of $d_1^{\mu}$ is
the determinant of the complex. For four sections of bidegree $(2,1)$,
the bound in \cite{bot} shows that $\mu =(1,1)$. To make things concrete, we
work this out for Example~\ref{ex1}.
\begin{exm}\label{exLast}
Our running example is $U = {\mathrm{Span}} \{s^2u, s^2v, t^2u, t^2v+stv \}$.
The first Koszul kernel $K_1$ is the module of syzygies on $I_U$, which is
generated by the columns of
\[
\left[\begin{matrix}
-v &-t^2 &0& 0 &-tv \\
u & 0 & -st-t^2 &0 & 0 \\
0 & s^2 & 0 &-sv-tv & -sv\\
0 & 0 & s^2 &tu & su
\end{matrix}
\right]
\]
The first column encodes the relation $ux_1-vx_0 =0$, and the next four
columns encode the relations
\[
\begin{array}{ccc}
s^2x_2-t^2x_0 &=& 0 \\
s^2x_3 -(st+t^2)x_1 &=&0 \\
tu x_3 -(sv+tv)x_2 &=&0\\
su x_3 -svx_2-tvx_0 &=& 0
\end{array}
\]
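These five relations are polynomial identities in $s,t,u,v$, so they are easy to sanity-check numerically. The sketch below (ours, not part of the original example; random integer substitution rather than a symbolic proof) verifies all five for the generators $p_i$ of $I_U$:

```python
import random

def p(s, t, u, v):
    # the four bidegree (2,1) generators of I_U from the running example
    return (s*s*u, s*s*v, t*t*u, t*t*v + s*t*v)

for _ in range(100):
    s, t, u, v = (random.randint(-10, 10) for _ in range(4))
    x0, x1, x2, x3 = p(s, t, u, v)
    assert u*x1 - v*x0 == 0                      # the linear syzygy
    assert s*s*x2 - t*t*x0 == 0                  # bidegree (2,0)
    assert s*s*x3 - (s*t + t*t)*x1 == 0          # bidegree (2,0)
    assert t*u*x3 - (s*v + t*v)*x2 == 0          # bidegree (1,1)
    assert s*u*x3 - s*v*x2 - t*v*x0 == 0         # bidegree (1,1)
```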
If we were in the singly graded case, we would need to use $\mu = 2$,
and a basis for $\mathcal{Z}_1^2$ consists of $\{s,t,u,v\} \cdot (ux_1-vx_0)$
and the remaining four relations. With respect to the ordered basis
$\{s^2,st,t^2, su,sv,tu,tv, u^2,uv,v^2\}$ for $R_2$ and writing
$\cdot$ for $0$, the matrix for
$d_1^{2} : \mathcal{Z}_1^2 \longrightarrow \mathcal{Z}_0^2$ is
\vskip .02in
\[
\left[\begin{matrix}
\cdot & \cdot & \cdot & \cdot & x_2 & x_3 & \cdot& \cdot\\
\cdot & \cdot & \cdot & \cdot & \cdot &-x_1 & \cdot& \cdot\\
\cdot & \cdot & \cdot & \cdot & -x_0 & -x_1 & \cdot& \cdot\\
x_1 & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot& x_3\\
-x_0 & \cdot & \cdot & \cdot & \cdot & \cdot & -x_2& -x_2\\
\cdot & x_1 & \cdot & \cdot & \cdot & \cdot & x_3 & \cdot\\
\cdot & -x_0 & \cdot & \cdot & \cdot & \cdot & -x_2 & -x_0\\
\cdot & \cdot & x_1 & \cdot & \cdot & \cdot & \cdot& \cdot \\
\cdot & \cdot &-x_0 & x_1 & \cdot & \cdot & \cdot & \cdot\\
\cdot & \cdot &\cdot &-x_0 & \cdot & \cdot & \cdot & \cdot\\
\end{matrix}
\right]
\]
\vskip .05in
However, this matrix represents all the first syzygies of total
degree two. Restricting to the submatrix of bidegree $(1,1)$ syzygies
corresponds to choosing rows indexed by $\{su,sv,tu,tv\}$, yielding
\vskip .02in
\[
\left[\begin{matrix}
x_1 & \cdot & \cdot& x_3\\
-x_0 & \cdot & -x_2& -x_2\\
\cdot & x_1 & x_3 & \cdot\\
\cdot & -x_0 & -x_2 & -x_0\\
\end{matrix}
\right]
\]
\end{exm}
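The determinant of this $4 \times 4$ matrix is the implicit equation of $X_U$, so it must vanish under the substitution $x_i \mapsto p_i(s,t,u,v)$. The following Python sketch (ours, not from the paper; a random-point check rather than a symbolic computation) confirms vanishing on the image and non-vanishing at a generic point:

```python
import random

def det4(m):
    # determinant of a 4x4 matrix by Laplace expansion along the first row
    def det3(a):
        return (a[0][0]*(a[1][1]*a[2][2] - a[1][2]*a[2][1])
              - a[0][1]*(a[1][0]*a[2][2] - a[1][2]*a[2][0])
              + a[0][2]*(a[1][0]*a[2][1] - a[1][1]*a[2][0]))
    total, sign = 0, 1
    for j in range(4):
        minor = [[m[i][k] for k in range(4) if k != j] for i in range(1, 4)]
        total += sign * m[0][j] * det3(minor)
        sign = -sign
    return total

def d11(x0, x1, x2, x3):
    # the bidegree (1,1) matrix from Example exLast (rows su, sv, tu, tv)
    return [[ x1,   0,   0,  x3],
            [-x0,   0, -x2, -x2],
            [  0,  x1,  x3,   0],
            [  0, -x0, -x2, -x0]]

# det vanishes on the image of phi_U ...
for _ in range(50):
    s, t, u, v = (random.randint(-9, 9) for _ in range(4))
    assert det4(d11(s*s*u, s*s*v, t*t*u, t*t*v + s*t*v)) == 0

# ... but not identically: a point off the surface gives a nonzero value
assert det4(d11(1, 1, 1, 3)) != 0
```

The vanishing follows because the row vector $(su, sv, tu, tv)$ is a left null vector of the matrix after substitution, as each column is a syzygy on the $p_i$.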
We now study the observation made in the introduction,
that linear syzygies manifest in a linear singular locus.
Example~\ref{exLast} again provides the key intuition: a
linear first syzygy gives rise to two columns of $d^{(1,1)}_1$.
\begin{thm}\label{SingX}
If $U$ is basepoint free and $I_U$ has a unique
linear syzygy, then the codimension one singular locus of $X_U$
is a union of lines.
\end{thm}
\begin{proof}
Without loss of generality we assume the linear syzygy involves the first two
generators $p_0,p_1$ of $I_U$, so that in the two remaining
columns corresponding to the linear syzygy the only nonzero
entries are $x_0$ and $x_1$, which appear exactly as in
Example~\ref{exLast}. Thus, in bidegree $(1,1)$, the matrix for $d^{(1,1)}_1$
has the form
\[
\left[\begin{matrix}
x_1 & \cdot & \ast & \ast\\
-x_0 & \cdot & \ast & \ast\\
\cdot &x_1 & \ast & \ast\\
\cdot &-x_0 & \ast & \ast\\
\end{matrix}
\right]
\]
Computing the determinant using two by two minors in the two leftmost
columns shows the implicit equation of $X_U$ is of the form
\[
x_1^2 \cdot f + x_0x_1 g + x_0^2 h,
\]
which is singular along the line ${\bf V }(x_0,x_1)$. To show that
the entire singular locus is a union of lines when $I_U$ has
resolution Type $\in \{3,4,5\}$, we must analyze the structure
of $d_1^{(1,1)}$. For Type 3 and 4, Theorems~\ref{T3res} and~\ref{Type4MC}
give the first syzygies, and show that the implicit equation
for $X_U$ is given by the determinant of
\[
\left[\begin{matrix}
-x_1 & \cdot & x_2 & x_3 \\
\cdot & -x_1 & a_1x_2-b_1x_0 & a_1x_3-c_1x_0 \\
x_0 & \cdot & a_2x_2-b_0x_1&a_2x_3-c_0x_1\\
\cdot & x_0 & a_3x_2-b_2x_0-b_3x_1 & a_3x_3-c_2x_0-c_3x_1\\
\end{matrix}
\right].
\]
We showed above that ${\bf V }(x_0,x_1) \subseteq {\mathrm{Sing}}(X_U)$. Since
$X_U \setminus {\bf V }(x_0,x_1) \subseteq U_{x_0} \cup U_{x_1}$, it
suffices to check that $X_U \cap U_{x_0}$ and $X_U \cap U_{x_1}$
are smooth in codimension one. $X_U \cap U_{x_0}$ is defined by
\[
\begin{array}{ccc}
-{c}_{1} {y}_{2}+{b}_{1} {y}_{3}& = &(-{b}_{3} {c}_{0}+{b}_{0} {c}_{3}) {y}_{1}^{4}+({a}_{3} {c}_{0}-{a}_{2} {c}_{3}) {y}_{1}^{3} {y}_{2} \\
& + & (-{a}_{3} {b}_{0}+{a}_{2} {b}_{3}) {y}_{1}^{3}{y}_{3} +(-{b}_{2} {c}_{0}+{b}_{0} {c}_{2}){y}_{1}^{3}\\
&+ & ({a}_{1} {c}_{0}-{a}_{2} {c}_{2}-{c}_{3}) {y}_{1}^{2} {y}_{2}+(-{a}_{1} {b}_{0}+{a}_{2}{b}_{2}+{b}_{3}) {y}_{1}^{2} {y}_{3}\\
& + &(-{b}_{1} {c}_{0}+{b}_{0} {c}_{1}) {y}_{1}^{2}+(-{a}_{2} {c}_{1}-{c}_{2}) {y}_{1} {y}_{2}+({a}_{2} {b}_{1}+{b}_{2}) {y}_{1} {y}_{3}.
\end{array}
\]
By basepoint freeness, $b_1$ or $c_1$ is nonzero, as is $(-{b}_{3} {c}_{0}+{b}_{0} {c}_{3})$,
so in fact $X_U \cap U_{x_0}$ is smooth. A similar calculation shows that $X_U \cap U_{x_1}$
is also smooth, so for Type 3 and Type 4, ${\mathrm{Sing}}(X_U)$ is a line.
In Type 5 the computation is more cumbersome: with notation as in
Proposition~\ref{LS3}, the relevant $4 \times 4$ submatrix is
\[
\left[\begin{matrix}
x_1 & \cdot & a_1x_3-a_3x_2 & \beta_1\alpha_2x_1 + \alpha_1\alpha_2x_0 \\
-x_0 & \cdot & b_1x_3-b_3x_2 & \beta_1\beta_2x_1 + \alpha_1\beta_2x_0 \\
\cdot & x_1 & \beta_1\alpha_2x_1 + \alpha_1\alpha_2x_0 &a_2x_3-a_4x_2\\
\cdot & -x_0 & \beta_1\beta_2x_1 + \alpha_1\beta_2x_0 &b_2x_3-b_4x_2\\
\end{matrix}
\right].
\]
A tedious but straightforward calculation shows that ${\mathrm{Sing}}(X_U)$
consists of three lines in Type 5a and a pair of lines in Type 5b.
\end{proof}
\begin{thm}\label{SingX2}
If $U$ is basepoint free, then the codimension one singular locus of $X_U$
is as described in Table 1.
\end{thm}
\begin{proof}
For a resolution of Type 3,4,5, the result follows from Theorem~\ref{SingX},
and for Type 6 from Proposition~\ref{LS2}. For the generic case (Type 1),
the result is obtained by Elkadi-Galligo-L\^e \cite{egl}, so
it remains to analyze Type 2. By Lemma~\ref{02syz}, the $(0,2)$
first syzygy implies that we can write $p_0, p_1, p_2$ as $\alpha
u, \beta v, \alpha v + \beta u$ for some $\alpha, \beta \in R_{2,0}$.
Factor $\alpha$ as a product of two linear forms in $s$ and $t$, so
after a linear change of variables we may assume $\alpha = s^2$ or $st$.
If $\alpha = s^2$, write $\beta=(ms+nt)(m's+n't)$, and note that $n$ and $n'$
cannot both vanish, because then $\beta$ is a scalar multiple of $\alpha$,
violating linear independence of the $p_i$. Thus, after a linear change
of variables, we may assume $\beta$ is of the form $t(ks+lt)$, so the
$p_i$'s are of the form
\[
\{s^2u, (ks+lt)tv, s^2v+(ks+lt)tu, p_3\}
\]
If $l=0$, then $I_U = \langle s^2u, kstv, s^2v+kstu, p_3 \rangle$,
which is not basepoint free: if $s=0$, then the first three polynomials vanish
and $p_3$ becomes $t^2(au+bv)$, which vanishes for some
$(u:v) \in \mathbb{P}^1$. So $l \ne 0$, and after a linear change of
variables $t \mapsto \frac{t}{l}$ and $u \mapsto lu$, we may assume $l=1$ and hence
\[
I_U = \langle s^2u, t^2v+kstv, s^2v+t^2u+kstu, p_3\rangle.
\]
Now consider two cases, $k=0$ and $k\not = 0$. In the latter, we may
assume $k=1$: first replace $ks$ by $s$ and then replace $k^2u$ by $u$.
In either case, reducing $p_3$ by the other generators of $I_U$ shows
we can assume
\[
p_3=astu+bs^2v+cstv = s(atu+bsv+ctv).
\]
By Theorem~\ref{T1T2res} there is one first syzygy of bidegree $(0,2)$, two
first syzygies of bidegree $(2,0)$, and four first syzygies of bidegree
$(1,1)$. A direct calculation shows that two of the bidegree $(1,1)$
syzygies are
$$ \left[ \begin{array}{cc}atu+bsv+ctv &0 \\ 0 & asu-btu+csv \\0 & btv \\ -su
& -tv \end{array} \right]. $$
The remaining syzygies depend on $k$ and $a$. For example, if $k=a=0$, then the
remaining bidegree $(1,1)$ syzygies are
$$ \left[ \begin{array}{cc}0 & -b^3tv \\ b^2su+c^2sv & c^3sv \\-b^2sv & -b^2csv \\ bsv-ctv & b^2tu+bcsv-c^2tv \end{array} \right], $$
and the bidegree $(2,0)$ syzygies are
$$ \left[ \begin{array}{cc}0 & b^3t^2\\ bs^2+cst & -c^3st \\0 & -b^3s^2 \\
-t^2 & b^2s^2-bcst+c^2t^2 \end{array} \right]. $$
Thus, if $k=a=0$, using the basis $\{ su, tu, sv, tv \}$ for $R_{1,1}$, the
matrix whose determinant gives the implicit equation for $X_U$ is
\[
\left[
\begin{matrix}
-x_3 & 0 & b^2x_1 & 0 \\
0 & -bx_1 & 0 & b^2x_3\\
bx_0 & cx_1 & c^2x_1-b^2x_2+bx_3& c^3x_1-b^2cx_2+bcx_3\\
cx_0 & bx_2-x_3 & -cx_3 & -b^3x_0-c^2x_3 \\
\end{matrix}
\right]
\]
Since $k=a=0$, if both $b$ and $c$ vanish there will be a linear
syzygy on $I_U$, contradicting our assumption. So suppose $b\ne 0$ and
scale the generator $p_3$ so $b=1$:
\[
\left[
\begin{matrix}
-x_3 & 0 & x_1 & 0 \\
0 & -x_1 & 0 & x_3\\
x_0 & cx_1 & c^2x_1-x_2+x_3& c^3x_1-cx_2+cx_3\\
cx_0 & x_2-x_3 & -cx_3 & -x_0-c^2x_3 \\
\end{matrix}
\right]
\]
Expanding along the top two rows by $2 \times 2$ minors as in the
proof of Theorem~\ref{SingX} shows that $X_U$ is singular along
${\bf V}(x_1,x_3)$, and evaluating the Jacobian matrix with $x_1=0$
shows this is the only component of the codimension one singular
locus with $x_1 = 0$. Next we consider the affine patch $U_{x_1}$.
On this patch, the Jacobian ideal is
\[
\langle (4 {x}_{3}+c^{2}) ({x}_{2}-2 {x}_{3}-c^{2}),({x}_{2}-2 {x}_{3}-c^{2}) ({x}_{2}+2 {x}_{3}),2 {x}_{0}-2 {x}_{2}{x}_{3}-{x}_{2} c^{2}+2 {x}_{3}^{2}+4 {x}_{3} c^{2}+c^{4}
\rangle
\]
which has codimension one component given by
\[{\bf V}({x}_{2}-2 {x}_{3}-c^{2}, x_0-x_3^2),
\]
a plane conic. Similar calculations work for the other cases.
\end{proof}
\section{Connection to the dual scroll}
We close by connecting our work to the results of Galligo-L\^e
in \cite{gl}. First, recall that the ideal of $\Sigma_{2,1}$ is
defined by the two by two minors of
\[
\left[\begin{matrix} x_0 & x_1 & x_2 \\ x_3 & x_4 &x_5
\end{matrix}\right].
\]
Combining this with the relations $x_1^2-x_0x_2$ and $x_4^2-x_3x_5$
arising from $\nu_2$ shows that the image of the map $\tau$ defined in
Equation~\ref{e1} is the vanishing locus of the two by two minors of
\begin{equation}\label{SV}
\left[\begin{matrix} x_0 & x_1 & x_3 & x_4 \\ x_1 & x_2 & x_4 &x_5
\end{matrix}\right].
\end{equation}
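All six $2 \times 2$ minors of this matrix vanish on the parameterization $(s:t)\times(u:v) \mapsto (s^2u : stu : t^2u : s^2v : stv : t^2v)$. The sketch below (ours; the helper name `tau` is our choice, checking over a grid of integer points) verifies this:

```python
from itertools import combinations

def tau(s, t, u, v):
    # the parameterization underlying the map tau of Equation (e1),
    # in the monomial basis {s^2u, stu, t^2u, s^2v, stv, t^2v}
    return (s*s*u, s*t*u, t*t*u, s*s*v, s*t*v, t*t*v)

def minors_vanish(s, t, u, v):
    x0, x1, x2, x3, x4, x5 = tau(s, t, u, v)
    rows = [[x0, x1, x3, x4],
            [x1, x2, x4, x5]]
    return all(rows[0][j]*rows[1][k] - rows[0][k]*rows[1][j] == 0
               for j, k in combinations(range(4), 2))

assert all(minors_vanish(s, t, u, v)
           for s in range(-3, 4) for t in range(-3, 4)
           for u in range(-3, 4) for v in range(-3, 4))
```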
Let $A$ denote the $4 \times 6$ matrix of
coefficients of the polynomials defining $U$ in the monomial
basis above. We regard ${\mathbb{P}}(U) \hookrightarrow {\mathbb{P}}(V)$ via ${\bf a} \mapsto {\bf a} \cdot A$.
Note that $I_U$ and the implicit equation of $X_U$ are independent of the choice of generators $p_i$ (\cite{c}).
The dual projective space of ${\mathbb{P}}(V)$ is ${\mathbb{P}}(V^*)$, where $V^*=\mathrm{Hom}_k(V,k)$. The projective subspace of ${\mathbb{P}}(V^*)$ orthogonal to $U$ is defined to be ${\mathbb{P}}((V/U)^*)={\mathbb{P}}(U^\perp)$, where $U^\perp=\{f\in V^* \mid f(u)=0 \mbox{ for all } u\in U\}$ is described algebraically as the kernel of $A$. The elements
of $U^\perp$ define the space of linear forms in $x_i$
which vanish on ${\mathbb{P}}(U)$. In Example~\ref{ex1}
$U = {\mathrm{Span}} \{s^2u,s^2v,t^2u,t^2v+stv \}$, so $A$ is the matrix
\[
\left[\begin{matrix}
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 \\
\end{matrix}\right],
\]
and ${\mathbb{P}}(U) = {\bf V }(x_1,x_4-x_5) \subseteq {\mathbb{P}}^5$.
The conormal variety $N(X)$ is the incidence variety defined as the closure of the set of pairs $\{(x,\pi)\in {\mathbb{P}}(V)\times {\mathbb{P}}(V^*)\}$ such that $x$ is a smooth point of $X$ and $\pi$ is an element of the linear subspace orthogonal to the tangent space $T_{X,x}$ in the sense described above. $N(X)$ is irreducible, and for a variety $X$ embedded in ${\mathbb{P}}(V)\simeq {\mathbb{P}}^5$ the dimension of $N(X)$ is 4. The image of the projection of $N(X)$ onto the factor ${\mathbb{P}}(V^*)$ is by definition the dual variety of $X$, denoted $X^*$.
Denote by $\widetilde{X_U}$ the variety $X_U$ re-embedded as a hypersurface of ${\mathbb{P}}(U)$. Proposition 1.4 (ii) of \cite{CRS} applied to our situation reveals
\begin{prop}
If $\widetilde{X_U}^*\subset {{\mathbb{P}}^3}^*$ is a hypersurface which is swept by a one dimensional family of lines, then $\widetilde{X_U}^*$ is either a 2-dimensional scroll or else a curve.
\end{prop}
The cited reference includes precise conditions that allow the two possibilities to be distinguished. Proposition 1.2 of \cite{CRS} reveals the relation between $X_U^*\subset {\mathbb{P}}(V^*)$ and $\widetilde{X_U}^*\subset {\mathbb{P}}(U^*)$, namely
\begin{prop}
In the above setup, $X_U^*\subset {\mathbb{P}}(V^*)$ is a cone over $\widetilde{X_U}^*\subset {\mathbb{P}}(U^*)$ with vertex $U^\perp={\mathbb{P}}((V/U)^*)$.
\end{prop}
It will be useful for this reason to consider the map $\pi:{\mathbb{P}}(V^*)\rightarrow {\mathbb{P}}(U^*)$ defined by $\pi(p)=(\ell_1(p):\ldots:\ell_4(p))$, where $\ell_1,\ldots, \ell_4$ are the defining equations of $U^\perp$. The map $\pi$ is projection
from ${\mathbb{P}}(U^\perp)$ and $\pi(X_U^*) = \widetilde{X_U}^*$. Using a direct approach, Galligo-L\^e show in \cite{gl} that $\pi^{-1}(\widetilde{X_U}^*)$ is a $(2,2)$-scroll in ${\mathbb{P}}(V^*)$ which they denote by ${\mathbb{F}}_{2,2}^*$. For brevity we write ${\mathbb{F}}$ for $\pi^{-1}(\widetilde{X_U}^*)$.
Galligo-L\^e classify possibilities for the implicit equation
of $X_U$ by considering the pullback $\phi_U^*$ of
${\mathbb{P}}(U^*) \cap {\mathbb{F}}$ to $({\mathbb{P}^1 \times \mathbb{P}^1})^*$. The two linear forms $L_i$
defining ${\mathbb{P}}(U^*)$ pull back to give a pair of
bidegree $(2,1)$ forms on $({\mathbb{P}^1 \times \mathbb{P}^1})^*$.
\begin{prop}[\cite{gl} \S 6.5]\label{GL} If $\phi_U^*(L_1) \cap \phi_U^*(L_2)$ is infinite,
then $\phi_U^*(L_1)$ and $\phi_U^*(L_2)$ share a common factor $g$, for which
the possibilities are:
\begin{enumerate}
\item $deg(g) = (0,1)$.
\item $deg(g) = (1,1)$ ($g$ possibly not reduced).
\item $deg(g) = (1,0)$ (residual system may have double or distinct roots).
\item $deg(g) = (2,0)$ ($g$ can have a double root).
\end{enumerate}
\end{prop}
\begin{exm}\label{ex2} In the following we use capital letters to denote elements of the various dual spaces. Note that the elements of the basis of $({\mathbb{P}^1 \times \mathbb{P}^1})^*$ that pair dually to $\{s^2u, stu, t^2u,s^2v, stv, t^2v\}$ are respectively $\{\frac{1}{2}S^2U, STU, \frac{1}{2}T^2U, \frac{1}{2}S^2V, STV, \frac{1}{2}T^2V\}$.
Recall we have the following dual maps of linear spaces:
$$\begin{matrix} {\mathbb{P}^1 \times \mathbb{P}^1} \stackrel{\phi_U}{\longrightarrow} & {\mathbb{P}}(U)\stackrel{A}{\longrightarrow} {\mathbb{P}}(V) & ({\mathbb{P}^1 \times \mathbb{P}^1})^* \stackrel{\phi_U^*}{\longleftarrow} {\mathbb{P}}(U^*) \stackrel{\pi}{\longleftarrow} {\mathbb{P}}(V^*) \end{matrix}$$
In Example~\ref{ex1}, ${\mathbb{P}}(U^*) = {\bf V }(X_1,X_4-X_5) \subseteq {{\mathbb{P}}^5}^*$, $\phi^*_U(X_1)=STU$ and $\phi^*_U(X_4-X_5)=STV-\frac{1}{2}T^2V$, so there is a shared common factor $T$ of
degree $(1,0)$ and the residual system $\{SU,\frac{1}{2}TV-SV\}$ has distinct
roots $(0:1)\times(1:0),(1:2)\times(0:1)$. Taking points $(1:0)\times(1:0)$ and
$(1:0)\times(0:1)$ on the line $T=0$ and the points above shows that
the forms below are in $(\phi_U^*{\mathbb{P}}(U^*))^\perp={\phi_U}^{-1}({\mathbb{P}}(U))$
\[
\begin{array}{ccc}
{\phi_U}((0:1)\times(1:0)) & = & t^2u\\
{\phi_U}((1:2)\times(0:1)) & = & (s+2t)^2v\\
{\phi_U}((1:0)\times(1:0)) & = & s^2u\\
{\phi_U}((1:0)\times(0:1)) & = & s^2v
\end{array}
\]
and in terms of our chosen basis the corresponding matrix $A$ is
\[
\left[\begin{matrix}
0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 4 & 4 \\
1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 & 0 \\
\end{matrix}\right],
\]
whose rows span the expected linear space $U$ with basis $\langle x_0,x_2,x_3,x_4+x_5 \rangle$.
\end{exm}
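As a check (ours, not in the original), the rows of this matrix span the same four-dimensional subspace of $V$ as the original generators of $U$; a short rank computation over $\mathbb{Q}$ confirms this:

```python
from fractions import Fraction

def rank(rows):
    # row rank via Gaussian elimination over the rationals
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f*b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# coordinates in the monomial basis {s^2u, stu, t^2u, s^2v, stv, t^2v}
U_orig = [[1,0,0,0,0,0],   # s^2 u
          [0,0,0,1,0,0],   # s^2 v
          [0,0,1,0,0,0],   # t^2 u
          [0,0,0,0,1,1]]   # t^2 v + s t v
A_new  = [[0,0,1,0,0,0],   # t^2 u
          [0,0,0,1,4,4],   # (s+2t)^2 v
          [1,0,0,0,0,0],   # s^2 u
          [0,0,0,1,0,0]]   # s^2 v
assert rank(U_orig) == rank(A_new) == rank(U_orig + A_new) == 4
```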
\subsection{Connecting $\phi_U^*(L_1) \cap \phi_U^*(L_2)$ to syzygies}
There is a pleasant relation between Proposition~\ref{GL} and bigraded
commutative algebra. This is hinted at by the result of \cite{cds} relating the minimal free resolution of $I_W$ to ${\mathbb{P}}(W) \cap \Sigma_{2,1}$.
\begin{thm}\label{GPsyz} If $\phi_U^*(L_1) \cap \phi_U^*(L_2)$ is infinite, then:
\begin{enumerate}
\item If $deg(g) = (0,1)$ then $U$ is not basepoint free.
\item If $deg(g) = (1,1)$ then
\begin{enumerate}
\item if $g$ is reducible then $U$ is not basepoint free.
\item if $g$ is irreducible then $I_U$ is of Type 3.
\end{enumerate}
\item If $deg(g) = (1,0)$ then $I_U$ is of Type 5. Furthermore
\begin{enumerate}
\item The residual scheme is reduced iff $I_U$ is of Type 5a.
\item The residual scheme has a double root iff $I_U$ is of Type 5b.
\end{enumerate}
\item If $deg(g) = (2,0)$ then $I_U$ is of Type 6.
\end{enumerate}
\end{thm}
\begin{proof}
We use the notational conventions of Example~\ref{ex2}. If $deg(g) = (0,1)$ then we may assume after a change of coordinates
that $\phi_U^*(L_1)= PU$ and $\phi_U^*(L_2)= QU$, with $P,Q$ of bidegree
$(2,0)$ and independent. In particular, denoting by $p,q\in R_{2,0}$ the dual elements of $P,Q$ and choosing $r$ so that $\{p,q,r\}$ is a basis for $R_{2,0}$, a basis for $R_{2,1}$ is
$\{pu,qu,ru, pv,qv,rv\}$. Hence $U = {\mathrm{Span}} \{ru, pv,qv,rv\}$, so
if $r = l_1(s,t)l_2(s,t)$ then $\langle v, l_1(s,t) \rangle$ is
an associated prime of $I_U$ and $U$ is not basepoint free.
Next, suppose $deg(g) = (1,1)$. If $g$ factors, then after
a change of variable we may assume $g=SU$ and $U^{\perp} = {\mathrm{Span}} \{S^2U,STU \}$.
This implies that $\langle v, t \rangle$ is an associated prime of
$I_U$, so $U$ is not basepoint free. If $g$ is irreducible, then
\[
g=a_0SU+a_1SV+a_2TU+a_3TV,\mbox{ with }a_0a_3-a_1a_2 \ne 0.
\]
Since $U^\perp = {\mathrm{Span}} \{ gS, gT \}$, $U$ is the kernel of
\[
\left[\begin{matrix}
a_0 & a_2 & 0 & a_1 & a_3 & 0 \\
0 & a_0 & a_2 & 0 & a_1 & a_3 \\
\end{matrix}\right],
\]
so $U$ contains the columns of
\[
\left[\begin{matrix}
a_3 & 0\\
a_1 & a_3 \\
0 & a_1 \\
-a_2 & 0 \\
-a_0 & -a_2\\
0 & -a_0 \\
\end{matrix}\right].
\]
In particular,
\[
\begin{array}{ccc}
a_3s^2u+a_1stu-a_2s^2v-a_0stv &= &s(a_3su+a_1tu-a_2sv-a_0tv) = sp\\
a_3stu+a_1t^2u-a_2stv-a_0t^2v &= &t(a_3su+a_1tu-a_2sv-a_0tv) = tp\\
\end{array}
\]
are both in $I_U$, yielding a linear syzygy of bidegree $(1,0)$.
Since $p$ is irreducible, the result follows from Proposition~\ref{LS3-10}.
The proofs for the remaining two cases are similar and omitted.
\end{proof}
There is an analog of Proposition~\ref{GL} when the intersection
of the pullbacks is finite and a corresponding connection to the minimal
free resolutions, which we leave for the interested reader.\newline
\noindent{\bf Concluding remarks} Our work raises a number of questions:
\begin{enumerate}
\item How much of this generalizes to other line
bundles ${\mathcal{O}_{\mathbb{P}^1 \times \mathbb{P}^1}}(a,b)$ on ${\mathbb{P}^1 \times \mathbb{P}^1}$? We are at work extending
the results of \S 7 to a more general setting.
\item What can be said about the minimal free resolution
if $I_U$ has basepoints?
\item Is there a direct connection between embedded primes and the
implicit equation?
\item If $U \subseteq H^0(\mathcal{O}_X(D))$ is four dimensional and
has base locus of dimension at most zero and $X$ is a toric surface, then
the results of \cite{bdd} give a bound on the degree $\mu$ needed to
determine the implicit equation. What can be said about the syzygies
in this case?
\item More generally, what can be said about the multigraded free
resolution of $I_U$, when $I_U$ is graded by ${\mathrm{Pic}}(X)$?
\end{enumerate}
\noindent{\bf Acknowledgments} Evidence for this work was provided
by many computations done using {\tt Macaulay2}, by Dan
Grayson and Mike Stillman. {\tt Macaulay2} is freely available at
\begin{verbatim}
http://www.math.uiuc.edu/Macaulay2/
\end{verbatim}
and scripts to perform the computations are available at
\begin{verbatim}
http://www.math.uiuc.edu/~schenck/O21script
\end{verbatim}
We thank Nicol\'as Botbol, Marc Chardin and Claudia Polini
for useful conversations, and an anonymous referee for a
careful reading of the paper.
\end{document} |
\begin{document}
\title{Reducing the number of ancilla qubits and the gate count required for creating large controlled operations}
\author{Katherine L. Brown}
\affiliation{Hearne Institute for Theoretical Physics and Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA, 70803, USA}
\author{Anmer Daskin}
\affiliation{Department of Chemistry \& Computer Science, Purdue University, West Lafayette, Indiana, 47907, USA}
\author{Sabre Kais}
\affiliation{Department of Chemistry \& Computer Science, Purdue University, West Lafayette, Indiana, 47907, USA}
\author{Jonathan P. Dowling }
\affiliation{Hearne Institute for Theoretical Physics and Department of Physics and Astronomy, Louisiana State University, Baton Rouge, LA, 70803, USA}
\date{\today}
\begin{abstract}
In this paper we show that it is possible to adapt a qudit scheme for creating a controlled Toffoli, devised by Ralph \emph{et al.}~[Phys. Rev. A \textbf{75} 011213], so that it is applicable to qubits. While this scheme requires more gates than standard schemes for creating large controlled gates, we show that with simple adaptations it is directly equivalent to the standard scheme in the literature. This scheme is the most gate-efficient way of creating large controlled unitaries currently known; however, it is expensive in terms of the number of ancilla qubits used. We go on to show that, using a combination of the standard techniques presented by Barenco \emph{et al.}~[Phys. Rev. A \textbf{52} 3457 (1995)], we can create an $n$-qubit version of the Toffoli using fewer gates and the same number of ancilla qubits as recent work based on computer optimization. This would be useful in any architecture of quantum computing where gates are cheap but qubit initialization is expensive.
\keywords{quantum computing \and gate decompositions \and resource reduction \and multi-qubit operations}
\end{abstract}
\maketitle
Making a unitary controlled on other qubits is an essential task for many algorithms in quantum computing \cite{Shor1997,Daskin2011,Wang2008}. In this paper we focus on a particular problem: making a unitary which is already controlled on one qubit controlled on $n-1$ further qubits. These highly controlled unitaries (i.e.~unitaries controlled on more than one other qubit) are useful in numerous quantum algorithms, including the oracle in the binary welded tree algorithm \cite{Childs2003} and quantum simulation \cite{Daskin2011,Wang2008}. Barenco \emph{et al.}~\cite{Barenco1995} outlined several techniques for making controlled unitaries in 1995; this work was expanded on in Nielsen and Chuang \cite{Nielsen2000} to provide a technique for making an $n$-qubit version of the CNOT gate using $14n-13$ operations and $n-2$ ancilla qubits. Other work has explored this problem in the context of using computational algorithms to optimize circuit layout, building on the decomposition procedures proposed by Barenco \emph{et al.}~\cite{Barenco1995} and known commutation relations \cite{Maslov2003,Maslov2005,Scott2008,Wille2008,Miller2009,Miller2011}. However, while these techniques are useful if we have native controlled-square-root-of-NOT gates, in the case where the only native two-qubit operation is a CNOT or C-Phase they perform worse than the technique in Nielsen and Chuang \cite{Nielsen2000} for the same number of ancilla qubits.
An interesting alternative technique was proposed by Ralph \emph{et al.}~\cite{Ralph2007} and implemented experimentally by Lanyon \emph{et al.}~\cite{Lanyon2008}, who used additional levels in one of the subsystems of the controlled gate to reduce the overall number of operations required. In this work we will show that, when converted to qubits, this technique is directly equivalent to the one from Nielsen and Chuang \cite{Nielsen2000}, using the same number of operations and requiring the same number of ancilla qubits. We go on to show that by using the techniques from Nielsen and Chuang \cite{Nielsen2000} combined with other decomposition procedures from Barenco \emph{et al.}~\cite{Barenco1995} it is possible to reduce the number of ancilla qubits required from $n-2$ to approximately $2\sqrt{n-1}$ at the expense of doubling the number of operations. Our new technique requires fewer operations for the same number of ancilla qubits when compared to existing decomposition schemes, if we assume both schemes have the ability to perform a controlled square root of NOT gate \cite{Maslov2003,Maslov2005,Scott2008,Wille2008,Miller2009,Miller2011}.
For the rest of this work we will represent a CNOT gate controlled on $n$ qubits as C$^{n}$X, and a general local unitary controlled on $n$ qubits as C$^{n}$U. Given the ability to perform general local unitaries and a CNOT gate, we can make a Toffoli gate using nine local unitaries and six CNOT gates \cite{Shende2008}. However, if we are prepared to accept an approximate Toffoli gate, we can use the Margolus gate, which is equivalent to a Toffoli gate up to a controlled-controlled phase. This procedure requires three CNOT gates and eight local unitaries \cite{Margolus1994,DiVincenzo1998}. An alternative set of decomposition procedures is used if we assume the ability to perform the CV gate in a single operation. Here a standard Toffoli uses two CNOT gates and three CV gates \cite{Barenco1995,Miller2011}. A more efficient decomposition is the Peres gate \cite{Peres1985,Miller2009}, which is the equivalent of a Toffoli gate with an additional CNOT. This decomposition uses only one CNOT gate and three CV gates. In this work we can use the more efficient decomposition procedures because our Toffoli gates are arranged symmetrically with no other operations in between them.
Ralph \emph{et al.}~\cite{Ralph2007} and Lanyon \emph{et al.~}\cite{Lanyon2008} demonstrate that by using a qutrit they can generate a Toffoli gate using only three CNOT gates, two standard Pauli X operations, and two implementations of a three-level version of the Pauli X operation. One requirement of Lanyon \emph{et al.}~is that the CNOT gates act trivially on the $|2\rangle$ level of the qutrit. Replacing the qutrit with two qubits would require each CNOT gate to be replaced with a Toffoli gate as shown in fig \ref{BToffoli}. A qubit version of the Lanyon Toffoli gate is therefore impossible since we would need to consume three Toffoli gates to build a single Toffoli gate.
\begin{figure}
\caption{A qubit equivalent of the Lanyon Toffoli, a single Toffoli gate is created but requires the use of three Toffoli gates. All lines represent qubits.}
\label{BToffoli}
\end{figure}
When we replace the qudit with a $d=4$ system (ququart), needed by Lanyon \emph{et al.}~\cite{Lanyon2008} to generate a C$^{3}$U gate, we use five Toffoli gates to generate a controlled Toffoli gate. In fig \ref{Lquart} we show the original circuit proposed by Lanyon \emph{et al.}~and our qubit adaptation. Breaking down our Toffoli gates into CNOT gates and general local unitaries means we require 53 operations. The standard decomposition by Barenco \emph{et al.}~\cite{Barenco1995} requires 44 operations. Therefore further simplification is needed.
\begin{figure}
\caption{A circuit for performing a C$^{3}$U gate, showing the original ququart circuit of Lanyon \emph{et al.}~and our qubit adaptation.}
\label{Lquart}
\end{figure}
As our scheme is a direct conversion from a ququart scheme, we have generated a gate sequence where all our Toffoli gates must act on both qubits which were formerly part of the ququart. The gates circled in fig \ref{simplify}(a) leave the ancilla qubit in $|1\rangle$ only if qubits three and four are initially in $|1\rangle$, and leave qubit four in $|1\rangle$ only if qubit two, and qubit four are in the state $|1\rangle$. The first operation is easy to create using a single Toffoli, but the second operation is non-unitary so cannot be created easily without the addition of an ancilla. However, the second expression also has a level of redundancy and can be replaced by a Toffoli gate which flips the target qubit only if the ancilla qubit, and qubit three are in $|1\rangle$. The result is the circuit shown in fig \ref{simplify}(b), which is directly equivalent to previous results in Nielsen and Chuang \cite{Nielsen2000}. In fig \ref{simplify}(c) we make a small adaptation, adding an additional Toffoli gate, CNOT gate, and ancilla qubit.
\begin{figure}
\caption{We can simplify the circuit in (a) to the one illustrated in (b). In (c) additional Toffoli gates are used to make a general C$^{3}$U gate.}
\label{simplify}
\end{figure}
From simple counting arguments we find that the total number of Toffoli gates required to implement this sequence for a gate of the form C$^{n}$X is $2n-3$. When we limit our gate set to CNOT and local operations we need $14n-13$ operations; when we have the ability to perform the CV gate, we need only $8n-11$ operations to generate our C$^{n}$X gate, compared to the $12n-22$ required by Miller \emph{et al.}~\cite{Miller2009,Miller2011}. We can therefore clearly see that a simplification of the Lanyon scheme \cite{Lanyon2008} is equivalent to the scheme in Nielsen and Chuang \cite{Nielsen2000}, and that scheme is more efficient than other optimizations for creating large controlled gates.
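These counts can be tabulated directly. The following Python sketch (the function names are ours; the closed forms are those quoted in the text) compares the schemes and checks that the CV-based count beats $12n-22$ for all $n \geq 3$:

```python
# closed-form operation counts for a C^n X gate, as quoted in the text
def toffoli_count(n):   # Toffoli gates in the C^n X construction
    return 2*n - 3

def ops_cnot_local(n):  # gate set: CNOT plus arbitrary local unitaries
    return 14*n - 13

def ops_with_cv(n):     # gate set additionally containing the CV gate
    return 8*n - 11

def ops_miller(n):      # count reported for the optimized circuits of Miller et al.
    return 12*n - 22

for n in range(3, 200):
    # with a native CV gate the construction uses fewer operations
    # than 12n - 22 for every n >= 3
    assert ops_with_cv(n) < ops_miller(n)
    assert toffoli_count(n) >= 3
```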
However, the scheme provided in Nielsen and Chuang \cite{Nielsen2000} requires a fixed number of ancilla qubits, while more recent research \cite{Miller2009,Miller2011} provides techniques for implementing large controlled gates using any number of ancilla qubits. We therefore want to look at how to minimize the number of ancilla qubits required to generate a large controlled unitary using initialized ancilla qubits. To do this we use the identity in Barenco \emph{et al.}~\cite{Barenco1995} which combines two copies of C$^{f}$X and two copies of C$^{m}$X to create C$^{f+m}$X using only a single ancilla qubit. In fig \ref{cycle} we show how we can use this identity to reduce the number of ancilla qubits used to generate a large controlled unitary. Generating one multiple qubit controlled gate can be considered one cycle. Our procedure consists of $2c-1$ cycles, where the first $c$ cycles are used to flip our target only if all the control qubits are in $|1\rangle$ and the other $c-1$ cycles are used to return all the ancilla qubits to $|0\rangle$.
\begin{figure}
\caption{Creating a large Toffoli gate from several smaller Toffoli gates, this uses two ancilla qubits which start in the state $|0\rangle$ and makes a Toffoli which is controlled on 11 qubits. Additional ancilla qubits will be required to create the shown gates.}
\label{cycle}
\end{figure}
The first $c$ cycles of our process flip the target qubit, but also return all but $c$ of our qubits to their initial state. This set of cycles therefore requires a total of $2n-2-c$ Toffoli gates. The average number of Toffoli gates per cycle is therefore
\begin{equation}
N_{c}(n,c) = \frac{2n-2-c}{c}
\end{equation}
Given that we require $2c-1$ cycles, the total number of Toffoli gates for creating an $n$-qubit controlled unitary using $2c-1$ cycles, $N_{t}(n,c)$, is
\begin{equation}
N_{t}(n,c) = (2c-1) \left( \frac{2n-2-c}{c} \right)
= \left\lfloor \frac{2n(2c-1)-c(3+2c)+2}{c} \right\rfloor
\end{equation}
We take the floor function here because we can always choose the shortest cycles as the ones we repeat twice.
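As a sanity check on this counting, the formula for $N_{t}(n,c)$ can be evaluated directly; the following sketch (the function name is our own) also confirms that the single-cycle case $c=1$ recovers the $2n-3$ Toffoli gates of the direct construction discussed above.

```python
def toffoli_count(n, c):
    """Total Toffoli gates for a C^n X built from 2c-1 cycles:
    N_t(n, c) = floor((2n(2c-1) - c(3 + 2c) + 2) / c)."""
    return (2 * n * (2 * c - 1) - c * (3 + 2 * c) + 2) // c

# With a single cycle (c = 1) we recover the direct construction,
# which uses 2n - 3 Toffoli gates.
for n in range(3, 20):
    assert toffoli_count(n, 1) == 2 * n - 3
```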
We use an ancilla qubit as the target of all but one of our multiple-qubit controlled gates; we therefore need $c-1$ ancilla qubits to act as `cycle' qubits, which are not reused between cycles. Process ancilla qubits are used to create the multiple-control gates and are reused in each cycle. We need $n-1$ Toffoli gates to flip our target qubit; $c$ of these have a cycle ancilla as a target, so $n-1-c$ need a process ancilla as a target. The Toffoli gates are divided equally between the cycles, so the total number of ancilla qubits required is given by
\begin{equation}
N_{a}(n,c) = \left\lceil\frac{n-1-c}{c} \right\rceil+ c -1 = \left\lceil \frac{n-1}{c}\right\rceil + c -2
\label{Nan}
\end{equation}
We take the ceiling function here because we need enough ancilla qubits for the longest cycle. To find the minimum number of ancilla qubits we differentiate equation (\ref{Nan}) without the ceiling function, giving $c = \sqrt{n- 1}$. We take $c = \lfloor \sqrt{n-1} \rfloor$ to keep the number of operations as small as possible. Therefore we require
\begin{equation}
N_{a}(n,\lfloor \sqrt{n- 1} \rfloor) = \left\lceil \frac{n-1}{\lfloor \sqrt{n- 1} \rfloor} \right\rceil+ \lfloor\sqrt{n- 1} \rfloor-1 \approx 2 \sqrt{n-1} -1
\end{equation}
ancilla qubits. This gives us a quadratic reduction in the number of ancilla qubits required. However, as $n$ becomes large, the number of operations required to achieve this minimum tends to
\begin{equation}
N_{t}(n, \sqrt{ n- 1}) = 4(n-\sqrt{n})
\end{equation}
meaning that almost double the number of operations is required to achieve this reduction in the consumption of ancilla qubits. For general $n$ and $c = \lfloor \sqrt{n- 1} \rfloor$, the total number of operations, assuming the ability to perform our controlled gate using CV, is given by
\begin{equation}
N_{g}(n, \sqrt{n- 1 }) = 4\left(4n - \left\lfloor \frac{2(n-1)}{\lfloor \sqrt{n-1} \rfloor} \right\rfloor- 2\lfloor \sqrt{n-1 } \rfloor-3 \right) + 2\lfloor \sqrt{n-1}\rfloor-1
\end{equation}
This equation takes into account that we need four operations to perform the majority of our Toffoli gates, while $2c-1$ of our Toffoli gates require five operations. When $n-N_{a} > 5$, Miller \emph{et al.}~\cite{Miller2011} require a total number of operations given by
\begin{equation}
N_{g_\mathrm{M}} = 24n-64-12\left\lceil \frac{n-1}{\lfloor \sqrt{n-1} \rfloor} \right\rceil-12\left\lfloor\sqrt{n-1}\right\rfloor
\end{equation}
for the same number of ancilla qubits as we need \footnotemark[17]. We therefore expect to use fewer operations than Miller \emph{et al.}~\cite{Miller2011} provided $n>10$. However, since the formula we derived from the work of Miller \emph{et al.}~is only accurate when $n-N_{a} > 5$, our comparison is only reliable for $n>10$, and we could see an improvement for lower values of $n$ as well.
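The two counts can be compared numerically. The sketch below (function names are our own) evaluates $N_{g}$ and the large-$n$ Miller count at $c=\lfloor\sqrt{n-1}\rfloor$; the first few values of our count match the ``Our gate requirement'' row of Table~\ref{table}.

```python
from math import isqrt, ceil

def our_gate_count(n):
    """N_g(n, floor(sqrt(n-1))): operations for C^n X, assuming CV
    is available (4 ops per Toffoli; 2c-1 Toffolis need 5 ops)."""
    c = isqrt(n - 1)  # isqrt gives floor(sqrt(n-1))
    return 4 * (4 * n - (2 * (n - 1)) // c - 2 * c - 3) + 2 * c - 1

def miller_gate_count(n):
    """Operation count derived from Miller et al., valid when n - N_a > 5."""
    c = isqrt(n - 1)
    return 24 * n - 64 - 12 * ceil((n - 1) / c) - 12 * c

# Reproduce entries of the "Our gate requirement" row of the table.
assert [our_gate_count(n) for n in (3, 4, 5, 6, 10, 11)] == [13, 21, 39, 51, 105, 121]
```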
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
n & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 \\
\hline
Number ancilla &2 & 3 & 3 & 4 & 4 & 5 & 5 &5 & 6 & 6 & 6 & 7 & 7 \\
\hline
Our gate requirement & 13 & 21 & 39 & 51 & 63 & 75 & 87 & 105 & 121 & 133 & 145 & 161 & 173 \\
\hline
Miller's gate requirement & 14 & 26 & 38 & 50 & 64 & 76 & 96 & 116 & 128 & 152 & 176 & 188 & 212 \\
\hline
\end{tabular}
\caption{A comparison between the number of gates we require to make a gate of the form C$^{n}$X compared to those required by Miller \emph{et al.}~\cite{Miller2011}. Both schemes use the Peres gate to break up the Toffoli operations.}
\label{table}
\end{table*}
In Table~\ref{table} we show that we require fewer operations than Miller \emph{et al.}~\cite{Miller2011} for the same number of ancilla qubits for all $n$ except $n=5$ and $n=6$. In general, as $n$ becomes larger, the improvement we show over Miller \emph{et al.}~also grows. We can also compare our results with those obtained by Barenco \emph{et al.}, who use a system of only two cycles \cite{Barenco1995}. In this case we need $3(n-2)$ Toffoli gates compared to the $8(n-5)$ required by Barenco \emph{et al.}~\cite{Barenco1995}. Both schemes use the same number of ancilla qubits. This shows the significant advantage of initialising the ancilla qubits.
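The two-cycle comparison above is easy to check directly. This sketch (function names are our own) evaluates both Toffoli counts; with the quoted formulas, our construction uses fewer Toffoli gates for every $n \geq 7$.

```python
def our_two_cycle_toffolis(n):
    """Toffoli gates for C^n X with initialised ancilla, two-cycle scheme: 3(n-2)."""
    return 3 * (n - 2)

def barenco_toffolis(n):
    """Toffoli gates in the Barenco et al. two-cycle scheme: 8(n-5)."""
    return 8 * (n - 5)

# 3(n-2) < 8(n-5) whenever 5n > 34, i.e. for all n >= 7.
for n in range(7, 40):
    assert our_two_cycle_toffolis(n) < barenco_toffolis(n)
```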
In this paper we showed that the qubit equivalent of the qudit schemes proposed by Ralph \emph{et al.}~\cite{Ralph2007} and Lanyon \emph{et al.}~\cite{Lanyon2008} is directly equivalent to the scheme given in Nielsen and Chuang \cite{Nielsen2000}. Simple counting arguments show that this is currently the most efficient way to generate large controlled gates, although it is possible that optimization techniques used in other work \cite{Maslov2003,Maslov2005,Scott2008,Wille2008,Miller2009,Miller2011} could also be used in this scenario to obtain further reductions. We can reduce the number of ancilla qubits required by our system by creating several large Toffoli gates and then combining them to form one even larger Toffoli gate. This adaptation can double the number of operations but can give us a quadratic reduction in the number of ancilla qubits required. The minimum number of ancilla qubits required by our scheme to produce a C$^{n}$X gate is given by
\begin{equation}
N_{a}(\mathrm{min}) = \left\lceil \frac{n-1}{\lfloor \sqrt{n- 1} \rfloor} \right\rceil + \lfloor \sqrt{n- 1} \rfloor -1
\end{equation}
For large $n$ this requires $4(n-\sqrt{n})$ Toffoli operations, roughly double the number needed if we use $n-2$ ancilla qubits. We therefore see a trade-off between the number of ancilla qubits and the total number of operations required. Even when we reduce the number of ancilla qubits, we still require fewer total operations than Miller \emph{et al.}~except when $n=5$ or $n=6$; this comparison is shown in Table~\ref{table}. It is worth noting that it might be possible to obtain further improvements using the automated searching techniques provided in these papers.
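The optimal cycle count can also be verified numerically: sweeping over $c$ confirms that $c=\lfloor\sqrt{n-1}\rfloor$ attains the minimum of $N_{a}(n,c)$ for the values of $n$ tested below (a sketch with our own function names).

```python
from math import isqrt, ceil

def ancilla_count(n, c):
    """N_a(n, c) = ceil((n-1)/c) + c - 2 ancilla qubits."""
    return ceil((n - 1) / c) + c - 2

def min_ancilla(n):
    """Minimum of N_a(n, c) over all cycle counts 1 <= c <= n-1."""
    return min(ancilla_count(n, c) for c in range(1, n))

# c = floor(sqrt(n-1)) attains the minimum for these n, and the
# minimum is roughly 2*sqrt(n-1) - 1 as stated above.
for n in (10, 26, 50, 101):
    assert ancilla_count(n, isqrt(n - 1)) == min_ancilla(n)
```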
We have shown that, using initialized ancilla qubits, it is possible to achieve a saving in operations over alternative techniques for both high and low numbers of ancilla qubits. This work clearly shows the advantages of using initialized ancilla for creating large controlled unitaries. The initialization of ancilla qubits is essential for quantum error correction and is generally considered a relatively trivial procedure. The one disadvantage of this scheme is that there is a limit to how far we can reduce the number of ancilla qubits: the minimum number required is roughly $2\sqrt{n-1}-1$. We hope that further improvements in the number of operations required could be obtained using the optimization techniques discussed in previous work \cite{Maslov2003,Maslov2005,Scott2008,Wille2008,Miller2009,Miller2011}.
\end{document} |